Test design
In software engineering, test design is the activity of creating and writing test suites for testing software.
Definition
Test design may require all or only some of the following:
- knowledge of the software and the business area it operates in,
- knowledge of the functionality being tested,
- knowledge of testing techniques and heuristics,
- planning skills to decide the order in which test cases should be designed, given the effort, time, and cost available and the consequences of failures in the most important or risky features.[1]
A well-designed test suite provides efficient testing: it contains just enough test cases to exercise the system, but no more. This avoids writing redundant test cases that consume additional time on every execution without adding coverage. In addition, a well-designed test suite contains no brittle or ambiguous test cases.
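For example, one common way to keep a suite small is to pick a single test case per equivalence class of inputs, plus the boundary between classes. The C sketch below is purely illustrative; the discount function and its threshold are invented for the example.

```c
#include <assert.h>

/* Hypothetical function under test: orders of at least 100.00
 * (expressed in cents) receive a 10% discount. */
static int discounted_total(int total_cents) {
    return total_cents >= 10000 ? total_cents * 9 / 10 : total_cents;
}

int main(void) {
    /* One representative input per equivalence class, plus the boundary. */
    assert(discounted_total(5000)  == 5000);   /* below the threshold: no discount */
    assert(discounted_total(10000) == 9000);   /* exactly at the threshold         */
    assert(discounted_total(20000) == 18000);  /* above the threshold              */
    /* Further inputs such as 6000, 7000, or 8000 would exercise the same
     * branch again and only add execution time, so they are left out. */
    return 0;
}
```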
Automatic test design
Entire test suites, or individual test cases that expose real bugs, can be generated automatically by tools based on model checking or symbolic execution.[2] Model checking can ensure that all paths of a simple program are exercised, while symbolic execution can detect bugs and generate a concrete test case that reproduces each bug when the software is run with that input.
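As a rough illustration of the symbolic-execution approach, the C sketch below marks an input as symbolic in the style of KLEE (reference [2]); the scale function and its division-by-zero bug are invented for this example.

```c
#include <klee/klee.h>

/* Buggy function: dividing by (x - 100) crashes when x == 100. */
static int scale(int x) {
    int divisor = x - 100;
    return 1000 / divisor;
}

int main(void) {
    int x;
    /* Declare x symbolic so the engine explores paths over all its values. */
    klee_make_symbolic(&x, sizeof(x), "x");
    return scale(x);
}
```

Run under a symbolic-execution engine such as KLEE, the tool explores the path on which the divisor is zero, solves the path constraint x - 100 == 0, and emits the concrete input x = 100 as a test case that reproduces the crash when the program is executed normally.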
However useful automatic test design can be, it is not appropriate for all circumstances. When the complexity becomes too high, human test design must take over, as it is far more flexible and can concentrate on generating higher-level test suites.
References
- [1] A Practitioner's Guide to Software Test Design, by Lee Copeland, January 2004
- [2] KLEE: Unassisted and Automatic Generation of High-Coverage Tests for Complex Systems Programs, by Cristian Cadar, Daniel Dunbar, Dawson Engler of Stanford University, 2008