User:Robinson weijman/STT

Introduction

This article gives a brief introduction to various software test techniques. These are all standard techniques widely used in the field of software testing.

Further information can be found in the Standard for Software Component Testing, Working Draft 3.4 (27 April 2001), produced by the British Computer Society Specialist Interest Group in Software Testing (BCS SIGIST).

Software test techniques are divided into two groups: black box testing techniques and white box testing techniques. Black box techniques are those in which the tester has no knowledge of the internal workings of the component being tested; testing by end users, for example, is almost always black box testing. White box techniques require internal knowledge of the component, such as knowledge of the code (e.g. statement testing).

Some test techniques are similar but have different “strengths”. For example, for a high-risk component exploratory testing could be used, for a medium-risk component error guessing, and for a low-risk component ad hoc testing.

Test techniques that have not been included here are: document reviews (Walkthrough, Inspection, Peer (aka Technical) and Informal), Complexity Analysis, Condition Testing (Branch Condition, Modified Condition Decision and Branch Condition Combination) and LCSAJ Testing.

Equivalence Partitioning

Description

Equivalence Partitioning is a test technique in which inputs and outputs that display similar behaviour are grouped together; the groups are called “equivalence classes”. By testing one input or output from a class, the whole class is assumed to be tested.

Example

If an exam has a pass mark of 60%, then the scores 60–100% are assumed to be equivalent: testing any one of them tests the whole class, e.g. it is not necessary to test 70%, 80% and 90%.
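
As a minimal sketch, the exam example could be tested in Python as follows; the grade() function and the exact class boundaries are assumptions for illustration, not part of the original example:

    def grade(score):
        """Hypothetical function under test: returns a verdict for an exam score."""
        if score < 0 or score > 100:
            return "invalid"
        return "pass" if score >= 60 else "fail"

    # One representative input per equivalence class is assumed to cover the whole class.
    equivalence_classes = {
        "invalid low":  (-5, "invalid"),   # scores below 0%
        "fail":         (30, "fail"),      # scores 0-59%
        "pass":         (75, "pass"),      # scores 60-100%
        "invalid high": (150, "invalid"),  # scores above 100%
    }

    for name, (representative, expected) in equivalence_classes.items():
        assert grade(representative) == expected, f"class '{name}' failed"
    print("All equivalence classes behave as expected.")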

When It Is Used

Whenever there are inputs or outputs in a range. It enables test cases to be objectively created. It is ideally used in combination with Boundary Value Analysis.

Random Testing

Description

Random testing is used when a range of inputs is possible but Equivalence Partitioning is not practised; instead, inputs are chosen at random.

Example

Same as Equivalence Partitioning example, but inputs can be anything, e.g. 25%, 56%, 91%.
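
A minimal sketch of random testing against the same hypothetical grade() function used in the Equivalence Partitioning sketch; the fixed seed and the number of runs are arbitrary choices for illustration:

    import random

    def grade(score):
        """Hypothetical function under test (same as the Equivalence Partitioning sketch)."""
        if score < 0 or score > 100:
            return "invalid"
        return "pass" if score >= 60 else "fail"

    random.seed(1)  # fixed seed so the run is reproducible
    for _ in range(20):
        score = random.randint(0, 100)  # inputs chosen at random, no partitioning
        expected = "pass" if score >= 60 else "fail"
        assert grade(score) == expected, f"unexpected result for {score}%"
    print("20 random scores tested.")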

When It Is Used

Whenever Equivalence Partitioning could be applied but no time is available to analyse the scenario and create equivalence classes. It can be used in conjunction with a tool that generates random inputs.


Boundary Value Analysis

Description

Boundary Value Analysis requires the inputs and outputs to have been partitioned (see Equivalence Partitioning); this test technique then tests the boundaries of the partitions, because defects are more likely to occur at the boundaries of equivalence classes than at any other point.

Example

If an exam has a pass mark of 60%, then it is best to test the following values: 59% (fail), 60% (pass) and 61% (pass). However, the equivalence classes also have other boundaries, so it is also useful to test: -1% (invalid), 0% (fail), 1% (fail), 99% (pass), 100% (pass) and 101% (invalid).
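
These boundary values could be checked directly; this sketch reuses the hypothetical grade() function from the Equivalence Partitioning sketch:

    def grade(score):
        """Hypothetical function under test."""
        if score < 0 or score > 100:
            return "invalid"
        return "pass" if score >= 60 else "fail"

    # Boundaries of the equivalence classes, as listed in the example above.
    boundary_cases = [
        (-1, "invalid"), (0, "fail"), (1, "fail"),
        (59, "fail"), (60, "pass"), (61, "pass"),
        (99, "pass"), (100, "pass"), (101, "invalid"),
    ]

    for score, expected in boundary_cases:
        assert grade(score) == expected, f"boundary {score}% gave {grade(score)}"
    print("All boundary values behave as expected.")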

When It Is Used

Whenever there are inputs or outputs in a range. It enables test cases to be objectively created. Requires an understanding of Equivalence Partitioning.


State Transition Testing

Description

This test technique involves testing a system as it changes from one state to another. The initial state, input, output and final states are defined. Note that this technique is easily scalable, so that for a high-risk component more thorough testing can be achieved by testing the transition from the initial state to the final state via an intermediate state (or more than one intermediate state). See State Transition Table for more detailed information.

Example

Consider a digital watch that can exist in four states: display time, change time, display date and change date. When changing from display time to change time, the input would be “press reset” and the output would be “alter time”.
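
One way to sketch this is a transition table mapping (state, input) pairs to (next state, output) pairs. The “display time to change time” transition follows the example above; the remaining transitions and inputs are assumptions added to make the sketch complete:

    # (current state, input) -> (next state, output); only the first entry comes
    # from the example above, the rest are assumed for illustration.
    transitions = {
        ("display time", "press reset"): ("change time", "alter time"),
        ("change time", "press mode"): ("display time", "show time"),
        ("display time", "press mode"): ("display date", "show date"),
        ("display date", "press reset"): ("change date", "alter date"),
        ("change date", "press mode"): ("display date", "show date"),
    }

    def step(state, event):
        """Return (next state, output) for a given state and input."""
        return transitions[(state, event)]

    # A single transition test: initial state, input, expected output and final state.
    state, output = step("display time", "press reset")
    assert (state, output) == ("change time", "alter time")

    # A longer path via an intermediate state (more thorough testing for a high-risk component).
    state = "display time"
    for event in ("press mode", "press reset"):
        state, output = step(state, event)
    assert (state, output) == ("change date", "alter date")
    print("State transitions behave as expected.")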

When It Is Used

Whenever a component can exist in certain states.


Cause Effect Graphing

Description

This technique involves analysing the causes and effects on a component. The word “graphing” is somewhat misleading: graphs can be drawn but rarely are, because the test cases can more easily be created from a table, called a decision table.

Example

A company decides to send a mail shot to everyone in its database. The type of mail sent depends on the following “causes”: the customer's age and gender. The “effects” are the different types of mail shot.
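
A decision table for this example might be captured as follows; the age bands and the mail-shot types are assumptions invented for the sketch:

    def choose_mail_shot(age, gender):
        """Hypothetical effect: pick a mail-shot type from the causes (age band, gender)."""
        band = "under 30" if age < 30 else "30 and over"
        # Decision table: one entry per combination of causes.
        decision_table = {
            ("under 30", "F"): "fashion brochure",
            ("under 30", "M"): "gadget brochure",
            ("30 and over", "F"): "home brochure",
            ("30 and over", "M"): "garden brochure",
        }
        return decision_table[(band, gender)]

    # One test case per column of the decision table.
    assert choose_mail_shot(25, "F") == "fashion brochure"
    assert choose_mail_shot(25, "M") == "gadget brochure"
    assert choose_mail_shot(45, "F") == "home brochure"
    assert choose_mail_shot(45, "M") == "garden brochure"
    print("All cause/effect combinations covered.")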

When It Is Used

Useful whenever a scenario involves combinations of causes and effects.


Syntax Testing

Description

Testing inputs against the syntax (format) they are required to follow.

Example

If a field requires the age of a customer, valid entries would be an integer (possibly within a range). Invalid entries would be: letters, decimals, special characters (€, @, or TAB) or nothing (leave field blank).
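
A sketch of syntax tests for such an age field, assuming a hypothetical parse_age() validator that accepts whole numbers from 0 to 120 (the upper limit is an assumption):

    def parse_age(text):
        """Hypothetical validator: return the age as an int, or None if the syntax is invalid."""
        if not text.isdigit():  # rejects letters, decimals, special characters and empty input
            return None
        age = int(text)
        return age if 0 <= age <= 120 else None

    valid_inputs = ["0", "18", "65", "120"]
    invalid_inputs = ["abc", "18.5", "€", "@", "\t", "", "999"]

    for text in valid_inputs:
        assert parse_age(text) is not None, f"valid input rejected: {text!r}"
    for text in invalid_inputs:
        assert parse_age(text) is None, f"invalid input accepted: {text!r}"
    print("Syntax tests passed.")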

When It Is Used

Whenever an input requires a certain syntax. Can also be used to test interfaces between components (integration testing).


Statement Testing

Description

Statement testing involves designing and running tests so that every executable statement in the code is executed at least once, and comparing the actual outcome with the expected outcome.

Example

For the code “if a then b”, a should be true so that all the statements are executed (i.e. b is also tested).
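
A minimal sketch of the “if a then b” case in Python; the process() function is a hypothetical stand-in, and in practice a coverage tool such as coverage.py would confirm that no statements were missed:

    def process(a):
        """Hypothetical code of the form 'if a then b'."""
        result = "start"       # statement always executed
        if a:
            result = "b done"  # statement b: only executed when a is true
        return result

    # A single test with a true executes every statement in the function,
    # giving 100% statement coverage.
    assert process(True) == "b done"
    print("100% statement coverage reached with one test.")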

When It Is Used

This technique can be used for any program. It is best used in conjunction with a tool that can measure the coverage and highlight code that has not been tested. Compare with Branch / Decision Testing.


Branch / Decision Testing

Description

Branch Testing and Decision Testing are very similar; here they are treated as identical. Branch / Decision Testing involves exercising each outcome (branch) of every decision in the code.

Example

For code “if a then b”, two tests should be run for 100% coverage: one with a true and one with a false.
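
Extending the hypothetical process() function from the Statement Testing sketch, branch / decision coverage needs both outcomes of the decision:

    def process(a):
        """Hypothetical code of the form 'if a then b' (same as the Statement Testing sketch)."""
        result = "start"
        if a:
            result = "b done"
        return result

    # Two tests, one per decision outcome, give 100% branch / decision coverage.
    assert process(True) == "b done"   # decision taken (true branch)
    assert process(False) == "start"   # decision not taken (false branch)
    print("100% branch / decision coverage reached with two tests.")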

When It Is Used

As with Statement Testing, it can be used for any program but is best used in conjunction with a tool to measure coverage.


Data Flow Testing

Description

This technique focuses on how variables are used within code. A path is traced from where a variable is initialised to where it is used.

Example

Can be used with any code. Simply select one variable and watch how it changes as the code is executed.
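
As a sketch, a data-flow test follows one variable from where it is defined to each place it is used; the average() function and its variable total are assumptions for illustration:

    def average(scores):
        """Hypothetical function: the variable 'total' is defined, updated and then used."""
        total = 0                    # definition of 'total'
        for score in scores:
            total = total + score    # redefinitions along the path
        return total / len(scores)   # use of 'total'

    # Tests chosen to exercise definition-use paths for 'total'.
    assert average([80]) == 80          # defined, updated once, then used
    assert average([60, 70, 80]) == 70  # updated several times before use
    print("Definition-use paths for 'total' exercised.")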

When It Is Used

This is a useful technique for developers to perform their own tests since they can observe this easily using debugging tools.


Ad Hoc Testing

Description

The tester simply tries things at random. The minimum knowledge required is an understanding of the requirements, so that expected and actual outcomes can be compared.

Example

For an online program in which users need to log in and enter personal information, a tester simply tries this a few times, entering different information into different fields.

When It Is Used

This technique finds the most serious defects in the least amount of time, since there is no overhead of writing test cases. It is useful for “Friday afternoon” testing.


Error Guessing

Description

An experienced tester has an intuitive idea of where bugs can be found. Error Guessing is similar to Ad Hoc Testing except that the tester first tries to consider where defects might lie. They may also attempt to test the most error-prone or most visible components first.

Example

For the online program described under “Ad Hoc Testing”, a tester might begin with standard input and then try non-standard input, e.g. informal syntax testing such as entering an invalid postcode or invalid credit card details.

When It Is Used

Error Guessing is used when one experienced tester is given a product and simply asked to “test it”. Often the project will be under time pressure, so speed has a higher priority than test quality. (Any measure of the quality of the component will be subjective.)


Exploratory Testing

Description

This is similar to Ad Hoc Testing and Error Guessing, but more in-depth than either. Exploratory Testing begins with a meeting between two experienced testers and a manager. The group focuses on identifying risk-prone areas and then considers what type of testing would be appropriate. The testers then test the product and report back. This procedure is repeated roughly once a day, and in each meeting the manager will typically ask, “What is the most interesting or important defect you have found today?”

Example

For the online program described under “Ad Hoc Testing”, the team might decide that particularly risky items include: leap year dates, international address formats, unusual credit cards and foreign characters (like “ô”).

When It Is Used

Used in similar conditions to Error Guessing but on riskier components.