Model-based testing (MBT) is the automatic generation of software test procedures, using models of system requirements and behavior. Although this type of testing requires significantly more up-front effort in building the model, it offers substantial advantages over traditional software testing methods.
The following illustration shows a simplified workflow for MBT.
The model is usually authored manually from informal specifications or requirements. Creating a formal model from the requirements already yields feedback about the initial specifications, because the process of creation forces modelers to ask questions that expose missing information or ambiguity.
After the model is created, Spec Explorer automatically generates test suites from the model. These test suites contain both test sequences and the test oracle.
The role of test sequences is to control the system under test, driving it into the different conditions under which it can be tested for conformance with the model. The test oracle observes the progress of the implementation and issues a pass or fail verdict.
The verdict provides information about all of the artifacts involved. A failure indicates that the behavior of the system under test does not match the model's predictions. This usually points to a faulty implementation, but it can also mean that a mistake was made when creating the model, or that the informal requirements from which it was authored were wrong in the first place.
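The workflow above can be sketched in a few lines of code. This is an illustrative model of the process only, not Spec Explorer itself (whose models are written in C# with the Cord scripting language); all names here are hypothetical. A model state machine predicts the expected behavior, generated test sequences drive the system under test, and the oracle compares the two to issue a verdict:

```python
# Minimal model-based testing sketch (illustrative; not Spec Explorer syntax).

ACTIONS = ["increment", "reset"]

def model_step(state, action):
    """Model of a bounded counter: predicts the next state (oracle knowledge)."""
    if action == "increment":
        return min(state + 1, 3)  # model says the counter saturates at 3
    return 0                      # model says reset returns to 0

class CounterImpl:
    """The system under test (here, a conforming implementation)."""
    def __init__(self):
        self.value = 0
    def increment(self):
        self.value = min(self.value + 1, 3)
    def reset(self):
        self.value = 0

def generate_sequences(depth):
    """Generate every action sequence up to the given length (the test suite)."""
    suites = [[]]
    for _ in range(depth):
        suites = [seq + [a] for seq in suites for a in ACTIONS]
    return suites

def run_test(sequence):
    """Drive the implementation with a test sequence; the oracle issues a verdict."""
    impl, expected = CounterImpl(), 0
    for action in sequence:
        getattr(impl, action)()                  # test sequence controls the SUT
        expected = model_step(expected, action)  # model predicts the outcome
        if impl.value != expected:               # oracle compares prediction vs. reality
            return "fail"
    return "pass"

verdicts = [run_test(seq) for seq in generate_sequences(3)]
print(all(v == "pass" for v in verdicts))  # True: implementation conforms to the model
```

If the implementation diverged from the model (say, `increment` failed to saturate at 3), the oracle would return a `"fail"` verdict for every sequence that reaches that state.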
There are important Spec Explorer features and functions that are not shown in the illustration. In particular, the diagram shows only the workflow for Test Code Generation; other testing modes are also available. For details, see Testing.
Advantages of Model-Based Testing
Spec Explorer provides the general advantages of model-based testing:
Rules are specified once.
Project maintenance is lower. You do not need to write new tests for each new feature. Once you have a model, it is easier to generate and regenerate test cases than it is with hand-coded test cases.
Design is fluid. When a new feature is added, a new action is added to the state machine to run in combination with existing actions. A simple change can automatically ripple through the entire suite of test cases.
Design more and code less.
High coverage. Tests continue to find bugs, not just regressions due to changes in the code path or dependencies.
Model authoring is independent of implementation and actual testing so that these activities can be carried out by different members of a team concurrently.
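The "ripple" effect of a model change can be seen with a toy enumeration. In this hypothetical sketch (not Spec Explorer syntax), adding one action to the model automatically regrows the entire generated suite, with no hand-edited test cases:

```python
# Sketch: one model change regenerates the whole test suite.

def generate_sequences(actions, depth):
    """Enumerate every action sequence up to the given length."""
    suites = [[]]
    for _ in range(depth):
        suites = [seq + [a] for seq in suites for a in actions]
    return suites

# Original model has two actions; the new feature adds a third.
original = generate_sequences(["increment", "reset"], 3)
extended = generate_sequences(["increment", "reset", "decrement"], 3)

print(len(original), len(extended))  # 8 27: one new action triples the suite
```

Every regenerated sequence exercises the new action in combination with the existing ones, which is why hand-maintained suites tend to lag behind model-generated ones as features accumulate.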
In addition, Spec Explorer provides the following advantages:
You can slice out test cases from large state machines by defining relevant scenarios.
Spec Explorer also supports combinatorial interaction testing.
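The idea behind combinatorial interaction testing is that most faults are triggered by the interaction of a small number of parameters, so a suite that covers every *pair* of parameter values can be much smaller than the full cartesian product. The parameter names below are hypothetical, and the small suite is hand-picked for illustration rather than produced by Spec Explorer:

```python
# Combinatorial interaction sketch: exhaustive vs. pairwise test selection.
from itertools import combinations, product

parameters = {
    "os": ["windows", "linux"],
    "browser": ["edge", "firefox", "chrome"],
    "locale": ["en", "de"],
}
names = list(parameters)
exhaustive = list(product(*parameters.values()))  # every combination: 2*3*2 = 12 tests

def pairs_covered(suite):
    """All (parameter, value) pairs exercised together by a test suite."""
    covered = set()
    for row in suite:
        assignment = dict(zip(names, row))
        for a, b in combinations(names, 2):
            covered.add(((a, assignment[a]), (b, assignment[b])))
    return covered

# A 6-test suite that still covers every pairwise interaction:
pairwise_suite = [
    ("windows", "edge", "en"), ("windows", "firefox", "de"),
    ("windows", "chrome", "en"), ("linux", "edge", "de"),
    ("linux", "firefox", "en"), ("linux", "chrome", "de"),
]

print(pairs_covered(pairwise_suite) == pairs_covered(exhaustive))  # True
```

Here six tests cover the same 16 pairwise interactions as the twelve exhaustive ones; the savings grow rapidly as parameters and values are added.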