Model-based testing

Model-based testing is software testing in which test cases are derived in whole or in part from a model that describes some (usually functional) aspects of the system under test (SUT).

Figure: General model-based testing setting

The model is usually an abstract, partial representation of the system under test's desired behavior. The test cases derived from this model are functional tests on the same level of abstraction as the model. These test cases are collectively known as the abstract test suite. The abstract test suite cannot be executed directly against the system under test because it is on the wrong level of abstraction. Therefore, an executable test suite that can communicate with the system under test must be derived from the abstract test suite. This is done by mapping the abstract test cases to concrete test cases suitable for execution.
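
As a hedged illustration of this mapping (the actions, endpoints and the FakeSUT driver below are invented for the example, not taken from any tool), an adaptation layer can translate each abstract, model-level action into a concrete call against the system under test:

```python
# Hypothetical sketch only: all names below are invented to illustrate the
# abstraction gap between abstract and executable test cases.

class FakeSUT:
    """Stand-in for the real system under test; records the calls it receives."""
    def __init__(self):
        self.calls = []

    def post(self, endpoint, payload):
        self.calls.append((endpoint, payload))

# An abstract test case speaks in model-level actions...
abstract_test_case = ["insert_coin", "insert_coin", "request_coffee"]

# ...and an adaptation layer maps each action to a concrete SUT call.
CONCRETE_ACTIONS = {
    "insert_coin":    lambda sut: sut.post("/coins",  {"value": 50}),
    "request_coffee": lambda sut: sut.post("/orders", {"item": "coffee"}),
}

def execute(abstract_case, sut):
    for action in abstract_case:
        CONCRETE_ACTIONS[action](sut)

sut = FakeSUT()
execute(abstract_test_case, sut)
print(sut.calls)   # three concrete calls derived from three abstract steps
```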

There are many different ways to derive tests from a model. Because testing is usually experimental and based on heuristics, there is no single best way to do this. It is common to consolidate all test-derivation design decisions into a package that is often known as "test requirements", a "test purpose" or even a "use case". This package can contain, for example, information about the part of the model that should be the focus of testing, or about the conditions under which it is correct to stop testing (test stopping criteria).
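
For illustration, such a package might be represented as a simple data structure; every field name in this sketch is hypothetical, not taken from any standard or tool:

```python
# Hypothetical "test requirements" package consolidating test-derivation
# decisions; the field names are illustrative only.
test_requirements = {
    # which part of the model is the focus of testing
    "focus": {
        "states": ["Idle", "Brewing"],
        "transitions": ["request_coffee"],
    },
    # test stopping criteria
    "stop_when": {
        "transition_coverage": 1.0,   # all focused transitions exercised
        "max_test_cases": 200,        # hard budget on derived tests
    },
}
```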

Figure: An example of a model-based testing workflow. IXIT refers to "implementation extra information" and denotes here the total package of information that is needed when the abstract test suite is converted into an executable one. Typically it includes information about the test harness, data mappings and SUT configuration.

Because test suites are derived from models and not from source code, model-based testing is usually seen as one form of black-box testing. In some respects, this is not completely accurate: model-based testing can be combined with source-code-level test coverage measurement, and functional models can be based on existing source code in the first place.

Model-based testing for complex software systems is still an evolving field.

Models

Especially in Model Driven Engineering or in OMG's model-driven architecture, the model is built before or in parallel with the development process of the system under test. The model can also be constructed from the completed system. At present, models are mostly created manually, but there are also attempts to create them automatically, for instance from the source code. One important way to create new models is by model transformation, using languages like ATL, a QVT-like domain-specific language.

Deriving tests algorithmically

The effectiveness of model-based testing is primarily due to the potential for automation it offers. If the model is machine-readable and formal to the extent that it has a well-defined behavioral interpretation, test cases can in principle be derived mechanically.

Often the model is translated to or interpreted as a finite state automaton or a state transition system. This automaton represents the possible configurations of the system under test. To find test cases, the automaton is searched for executable paths. A possible execution path can serve as a test case. This method works if the model is deterministic or can be transformed into a deterministic one.
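
The following minimal sketch illustrates the idea on an invented coffee-machine model: a bounded search enumerates the executable paths of a deterministic transition system, and each path becomes one abstract test case.

```python
# Toy sketch: each executable path of a deterministic state machine becomes
# one abstract test case. The coffee-machine model is invented.

TRANSITIONS = {
    ("Idle",    "insert_coin"):    "Paid",
    ("Paid",    "request_coffee"): "Brewing",
    ("Brewing", "dispense"):       "Idle",
}

def paths(state, depth):
    """Yield every action sequence of 1..depth steps starting in `state`."""
    if depth == 0:
        return
    for (src, action), dst in TRANSITIONS.items():
        if src == state:
            yield [action]                # the path may stop here
            for rest in paths(dst, depth - 1):
                yield [action] + rest     # or continue one more step

for test_case in paths("Idle", 3):
    print(test_case)   # e.g. ['insert_coin', 'request_coffee', 'dispense']
```

In a realistic setting the search must be bounded or guided, since the number of paths grows quickly with the depth, as the next paragraph notes.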

Depending on the complexity of the system under test and the corresponding model, the number of paths can be very large because of the huge number of possible configurations of the system. To find appropriate test cases, i.e. paths that relate to a certain requirement to be verified, the path search has to be guided. Multiple techniques are applied for test case selection.

Test case generation by theorem proving

Theorem proving was originally used for the automated proving of logical formulas. In model-based testing approaches, the system is modelled by a set of logical expressions (predicates) specifying the system's behavior. To select test cases, the model is partitioned into equivalence classes over the valid interpretations of the set of logical expressions describing the system under test. Each class represents a certain system behavior and can therefore serve as a test case.

The simplest partitioning is done by the disjunctive normal form approach: the logical expressions describing the system's behavior are transformed into disjunctive normal form.
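
A small sketch of the idea (the guard condition and its disjuncts are invented): after rewriting a specification into disjunctive normal form, each disjunct describes one equivalence class of inputs, and one representative per class is selected as a test case.

```python
# Toy sketch: the guard "(x > 0 or y > 0) and x + y < 10", rewritten into
# DNF, yields two disjuncts; each disjunct is one equivalence class.
from itertools import product

disjuncts = [
    lambda x, y: x > 0 and x + y < 10,   # class 1: (x > 0) AND (x + y < 10)
    lambda x, y: y > 0 and x + y < 10,   # class 2: (y > 0) AND (x + y < 10)
]

# Pick one representative input per class by brute force over a small domain.
for i, disjunct in enumerate(disjuncts, start=1):
    x, y = next(p for p in product(range(-5, 6), repeat=2) if disjunct(*p))
    print(f"class {i}: test input (x={x}, y={y})")
```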

The classification-tree method provides a more sophisticated hierarchical partitioning. In addition, partitioning heuristics, e.g. heuristics based on boundary value analysis, are used to support the partitioning algorithms.

Test case generation by constraint logic programming

Constraint programming can be used to select test cases satisfying specific constraints by solving a set of constraints over a set of variables. The system is described by means of constraints. The set of constraints can be solved by Boolean solvers (e.g. SAT solvers for the Boolean satisfiability problem) or by numerical analysis, such as Gaussian elimination. A solution found by solving the set of constraint formulas can serve as a test case for the corresponding system.
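
As a minimal sketch, the constraints below (invented for the example) are handed to the Z3 SMT solver's Python bindings; the text does not prescribe a particular solver, and any other constraint or SAT solver would serve equally well.

```python
# Sketch using the Z3 SMT solver (pip install z3-solver); the constraints
# describing the "system" are invented purely for illustration.
from z3 import Ints, Solver, sat

x, y = Ints("x y")
solver = Solver()
solver.add(x + y == 10, x > 0, y > x)   # constraints describing valid behavior

if solver.check() == sat:
    model = solver.model()
    # one satisfying assignment serves as a derived test input
    print("derived test input:", model[x].as_long(), model[y].as_long())
```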

Test case generation by model checking

Model checking was originally developed as a technique for checking whether a property of a specification is valid in a model. A model of the system under test and a property to be tested are provided to the model checker. In the course of checking whether the property is valid in the model, the model checker detects witnesses and counterexamples. A witness is a path where the property is satisfied; a counterexample is a path in the execution of the model where the property is violated. These paths can again be used as test cases.
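
The sketch below illustrates the principle with an explicit-state breadth-first search over an invented model that contains a deliberate bug: the returned path is a counterexample to the property and can be replayed as a test case. A real model checker is far more sophisticated; this is only a toy.

```python
# Toy explicit-state search: the property "the machine never brews without
# being paid" is violated by the invented model below, and the returned
# counterexample path doubles as a test case.
from collections import deque

TRANSITIONS = {
    "Idle":    [("insert_coin", "Paid"), ("request_coffee", "Brewing")],  # bug
    "Paid":    [("request_coffee", "Brewing")],
    "Brewing": [("dispense", "Idle")],
}

def violates(states):
    # simplified property check: Brewing reached without ever visiting Paid
    return states[-1] == "Brewing" and "Paid" not in states

def find_counterexample(start="Idle"):
    queue = deque([([start], [])])         # (states visited, actions taken)
    expanded = set()
    while queue:
        states, actions = queue.popleft()
        if violates(states):
            return actions                 # counterexample found
        if states[-1] in expanded:
            continue
        expanded.add(states[-1])
        for action, nxt in TRANSITIONS[states[-1]]:
            queue.append((states + [nxt], actions + [action]))
    return None                            # property holds: no counterexample

print(find_counterexample())   # ['request_coffee']: brewing without payment
```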

Test case generation by symbolic execution

Symbolic execution is often used in frameworks for model-based testing. It can serve as a means of searching for execution traces in an abstract model. In principle, the program execution is simulated using symbols for variables rather than actual values, so that the program can be executed symbolically. Each execution path represents one possible program execution and can be used as a test case. For that, the symbols have to be instantiated by assigning values to them.
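
A toy sketch of the principle (the program and its path conditions are invented): each root-to-leaf path of a small program is summarised by its path condition over the symbolic input, and instantiating the symbol with any value satisfying that condition yields one concrete test case per path.

```python
# Toy sketch of symbolic execution for this invented program:
#
#     def classify(x):
#         if x > 10:
#             if x > 100:
#                 return "huge"
#             return "big"
#         return "small"
#
# The path conditions below were collected by hand; a symbolic executor
# would derive them automatically while simulating the program.

paths = [
    ([lambda x: x > 10, lambda x: x > 100],       "huge"),
    ([lambda x: x > 10, lambda x: not x > 100],   "big"),
    ([lambda x: not x > 10],                      "small"),
]

def instantiate(path_condition, domain=range(-1000, 1001)):
    """Assign the symbol x a concrete value satisfying the path condition."""
    return next(v for v in domain if all(c(v) for c in path_condition))

for path_condition, expected in paths:
    x = instantiate(path_condition)
    print(f"test case: classify({x}) should return {expected!r}")
```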

External links

Tools (in alphabetical order)
  • AsmL Test Tool can generate tests directly from an AsmL model
  • ATD-Automated Test Designer is a model-based testing tool that automatically generates test cases, test data and test automation scripts from requirement specifications
  • AutoFocus (in German) is a graphical tool for developing and modeling distributed systems with integrated testing facilities
  • Conformiq Qtronic is a design model driven automatic on-the-fly testing tool employing UML and Java models
  • Conformiq Test Generator is a tool that executes tests from UML state charts that represent testing strategies
  • GATeL is a tool that automatically generates test sequences from SCADE/Lustre models, according to a user defined test objective
  • HOL-TestGen is a test case generation tool based on the interactive theorem prover Isabelle
  • Leirios Test Generator is a model-based testing tool that generates tests automatically from UML 2.0 system specifications
  • Lurette is an automated testing tool of reactive programs written in Lustre
  • MaTeLo is a tool for scenario-based statistical testing based on Markov chains
  • Reactis Tester is another model-based testing tool that focuses on control systems
  • Simulink Tester is a tool that translates Simulink and Stateflow models for model analysis and test generation by the T-VEC Vector Generation System
  • Spec Explorer is a model exploration and test suite generation tool from Microsoft that uses Spec#, C#, or AsmL to describe models
  • TGV is a tool for the generation of conformance test suites for protocols
  • T-VEC Tabular Modeler is a tool that supports requirement modeling and test generation through the T-VEC Vector Generation System
  • TorX is a prototype testing tool for conformance testing of reactive software
