Software testing

Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test.[1] Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include, but are not limited to, the process of executing a program or application with the intent of finding software bugs (errors or other defects).

Software testing can be stated as the process of validating and verifying that a computer program/application/product:

  • meets the requirements that guided its design and development,
  • works as expected,
  • can be implemented with the same characteristics,
  • and satisfies the needs of stakeholders.

Software testing, depending on the testing method employed, can be implemented at any time in the software development process. Traditionally, most of the test effort occurs after the requirements have been defined and the coding process has been completed, but in the Agile approaches most of the test effort is ongoing. As such, the methodology of the test is governed by the chosen software development methodology.[citation needed]

Overview

Testing can never completely identify all the defects within software.[2] Instead, it furnishes a criticism or comparison that compares the state and behavior of the product against oracles—principles or mechanisms by which someone might recognize a problem. These oracles may include (but are not limited to) specifications, contracts,[3] comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, applicable laws, or other criteria.

A primary purpose of testing is to detect software failures so that defects may be discovered and corrected. Testing cannot establish that a product functions properly under all conditions, but can only establish that it does not function properly under specific conditions.[4] The scope of software testing often includes the examination of code, the execution of that code in various environments and conditions, and the examination of the aspects of code: does it do what it is supposed to do, and does it do what it needs to do? In the current culture of software development, a testing organization may be separate from the development team. There are various roles for testing team members. Information derived from software testing may be used to correct the process by which software is developed.[5]

Every software product has a target audience. For example, the audience for video game software is completely different from that for banking software. Therefore, when an organization develops or otherwise invests in a software product, it can assess whether the software product will be acceptable to its end users, its target audience, its purchasers and other stakeholders. Software testing is the process of attempting to make this assessment.

Defects and failures

Not all software defects are caused by coding errors. One common source of expensive defects is requirement gaps, e.g., unrecognized requirements which result in errors of omission by the program designer.[6] Requirement gaps can often be non-functional requirements such as testability, scalability, maintainability, usability, performance, and security.

Software faults occur through the following processes. A programmer makes an error (mistake), which results in a defect (fault, bug) in the software source code. If this defect is executed, in certain situations the system will produce wrong results, causing a failure.[7] Not all defects will necessarily result in failures. For example, defects in dead code will never result in failures. A defect can turn into a failure when the environment is changed. Examples of these changes in environment include the software being run on a new computer hardware platform, alterations in source data, or interacting with different software.[7] A single defect may result in a wide range of failure symptoms.

Input combinations and preconditions

A fundamental problem with software testing is that testing under all combinations of inputs and preconditions (initial state) is not feasible, even with a simple product.[4][8] This means that the number of defects in a software product can be very large and defects that occur infrequently are difficult to find in testing. More significantly, non-functional dimensions of quality (how it is supposed to be versus what it is supposed to do)—usability, scalability, performance, compatibility, reliability—can be highly subjective; something that constitutes sufficient value to one person may be intolerable to another.

Software developers can't test everything, but they can use combinatorial test design to identify the minimum number of tests needed to get the coverage they want. Combinatorial test design enables users to get greater test coverage with fewer tests. Whether they are looking for speed or test depth, they can use combinatorial test design methods to build structured variation into their test cases.[9] Note that "coverage", as used here, is referring to combinatorial coverage, not requirements coverage.
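
As a minimal sketch of the idea, the greedy generator below (in Python, with hypothetical parameter names and values) repeatedly picks the candidate test that covers the most still-uncovered value pairs; production tools use more sophisticated covering-array algorithms:

    from itertools import combinations, product

    def uncovered_pairs(params, tests):
        # Every (parameter, value) pairing across every two parameters
        # that no test so far covers.
        keys = list(params)
        needed = set()
        for (i, a), (j, b) in combinations(enumerate(keys), 2):
            for va, vb in product(params[a], params[b]):
                needed.add((i, va, j, vb))
        for t in tests:
            for (i, a), (j, b) in combinations(enumerate(keys), 2):
                needed.discard((i, t[a], j, t[b]))
        return needed

    def pairwise(params):
        # Greedy all-pairs generation: keep adding the candidate test that
        # covers the most still-uncovered pairs until none remain.
        keys = list(params)
        tests = []
        while True:
            remaining = uncovered_pairs(params, tests)
            if not remaining:
                return tests
            def gain(candidate):
                t = dict(zip(keys, candidate))
                return sum((i, t[keys[i]], j, t[keys[j]]) in remaining
                           for i, j in combinations(range(len(keys)), 2))
            best = max(product(*params.values()), key=gain)
            tests.append(dict(zip(keys, best)))

    # Hypothetical configuration space: 3 x 2 x 2 = 12 exhaustive tests,
    # but all value pairs are covered by roughly half as many.
    params = {"browser": ["Firefox", "Chrome", "Safari"],
              "os": ["Windows", "Linux"],
              "locale": ["en", "de"]}
    for test in pairwise(params):
        print(test)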

Economics

A study conducted by NIST in 2002 reports that software bugs cost the U.S. economy $59.5 billion annually. More than a third of this cost could be avoided if better software testing was performed.[10]

It is commonly believed that the earlier a defect is found, the cheaper it is to fix it. The following table shows the cost of fixing the defect depending on the stage it was found.[11] For example, if a problem in the requirements is found only post-release, then it would cost 10–100 times more to fix than if it had already been found by the requirements review. With the advent of modern continuous deployment practices and cloud-based services, the cost of re-deployment and maintenance may lessen over time.

Cost to fix a defect, by the time it was introduced versus the time it was detected:

Time introduced    Time detected
                   Requirements  Architecture  Construction  System test  Post-release
Requirements       1×            3×            5–10×         10×          10–100×
Architecture       –             1×            10×           15×          25–100×
Construction       –             –             1×            10×          10–25×

The data from which this table is extrapolated is scant. Laurent Bossavit says in his analysis:

The “smaller projects” curve turns out to be from only two teams of first-year students, a sample size so small that extrapolating to “smaller projects in general” is totally indefensible. The GTE study does not explain its data, other than to say it came from two projects, one large and one small. The paper cited for the Bell Labs “Safeguard” project specifically disclaims having collected the fine-grained data that Boehm’s data points suggest. The IBM study (Fagan’s paper) contains claims which seem to contradict Boehm’s graph, and no numerical results which clearly correspond to his data points.

Boehm doesn’t even cite a paper for the TRW data, except when writing for “Making Software” in 2010, and there he cited the original 1976 article. There exists a large study conducted at TRW at the right time for Boehm to cite it, but that paper doesn’t contain the sort of data that would support Boehm’s claims.[12]

Roles

Software testing can be done by software testers. Until the 1980s, the term "software tester" was used generally, but later it was also seen as a separate profession. Regarding the periods and the different goals in software testing,[13] different roles have been established: manager, test lead, test analyst, test designer, tester, automation developer, and test administrator.

History

The separation of debugging from testing was initially introduced by Glenford J. Myers in 1979.[14] Although his attention was on breakage testing ("a successful test is one that finds a bug"[14][15]) it illustrated the desire of the software engineering community to separate fundamental development activities, such as debugging, from that of verification. Dave Gelperin and William C. Hetzel classified in 1988 the phases and goals in software testing in the following stages:[16]

  • Until 1956 – Debugging oriented[17]
  • 1957–1978 – Demonstration oriented[18]
  • 1979–1982 – Destruction oriented[19]
  • 1983–1987 – Evaluation oriented[20]
  • 1988–2000 – Prevention oriented[21]

Testing methods

Static vs. dynamic testing

There are many approaches to software testing. Reviews, walkthroughs, or inspections are referred to as static testing, whereas actually executing programmed code with a given set of test cases is referred to as dynamic testing. Static testing is often implicit, as in proofreading, or when programming tools/text editors check source code structure, or when compilers (pre-compilers) check syntax and data flow as static program analysis. Dynamic testing takes place when the program itself is run. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code; it is applied to discrete functions or modules. Typical techniques for this are either using stubs/drivers or execution from a debugger environment.

Static testing involves verification, whereas dynamic testing involves validation. Together they help improve software quality. Mutation testing, for example, can be used to check that a suite of test cases will detect errors introduced by deliberately mutating the source code.
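
As a minimal sketch of the mutation idea (the function, the mutant and the suites below are all contrived): a mutant that survives the test suite reveals a gap in the suite.

    def is_adult(age):            # original code under test
        return age >= 18

    def is_adult_mutant(age):     # mutant: ">=" mutated to ">"
        return age > 18

    def weak_suite(fn):
        assert fn(30) is True
        assert fn(5) is False

    def strong_suite(fn):
        weak_suite(fn)
        assert fn(18) is True     # boundary case that distinguishes the mutant

    weak_suite(is_adult)          # passes
    weak_suite(is_adult_mutant)   # also passes: the mutant "survives"
    strong_suite(is_adult)        # passes
    try:
        strong_suite(is_adult_mutant)
    except AssertionError:
        print("mutant killed")    # the stronger suite detects the mutation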

The box approach

Software testing methods are traditionally divided into white- and black-box testing. These two approaches are used to describe the point of view that a test engineer takes when designing test cases.

White-Box testing

White-box testing (also known as clear box testing, glass box testing, transparent box testing and structural testing) tests internal structures or workings of a program, as opposed to the functionality exposed to the end-user. In white-box testing an internal perspective of the system, as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determine the appropriate outputs. This is analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT).

While white-box testing can be applied at the unit, integration and system levels of the software testing process, it is usually done at the unit level. It can test paths within a unit, paths between units during integration, and between subsystems during a system-level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements.

Techniques used in white-box testing include:

  • API testing (application programming interface) – testing of the application using public and private APIs
  • Code coverage – creating tests to satisfy some criteria of code coverage (e.g., the test designer can create tests to cause all statements in the program to be executed at least once)
  • Fault injection methods – intentionally introducing faults to gauge the efficacy of testing strategies
  • Mutation testing methods
  • Static testing methods

Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested.[22] Code coverage as a software metric can be reported as a percentage for:

  • Function coverage, which reports on functions executed
  • Statement coverage, which reports on the number of lines executed to complete the test

100% statement coverage ensures that all statements are executed at least once, but it does not guarantee that all branches or paths (in terms of control flow) are taken: for instance, the untaken branch of an if statement with no else clause is missed even when every statement has run. Statement coverage is helpful in ensuring correct functionality, but it is not sufficient, since the same code may process different inputs correctly or incorrectly.
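
The difference between statement and branch coverage can be seen in a short sketch (illustrative code, not tied to any particular coverage tool):

    def apply_discount(price, is_member):
        if is_member:
            price = price * 0.9      # member discount
        return round(price, 2)

    # This single test executes every statement (100% statement coverage):
    assert apply_discount(100.0, True) == 90.0

    # ...but the false branch of the "if" is never taken, so faults affecting
    # non-members would go unnoticed. Branch coverage also requires:
    assert apply_discount(100.0, False) == 100.0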

Black-box testing


Black-box testing treats the software as a "black box", examining functionality without any knowledge of internal implementation. The tester is only aware of what the software is supposed to do, not how it does it.[23] Black-box testing methods include: equivalence partitioning, boundary value analysis, all-pairs testing, state transition tables, decision table testing, fuzz testing, model-based testing, use case testing, exploratory testing and specification-based testing.

Specification-based testing aims to test the functionality of software according to the applicable requirements.[24] This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that, for a given input, the output value (or behavior) either "is" or "is not" the same as the expected value specified in the test case. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs to derive test cases. These tests can be functional or non-functional, though usually functional.
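
For instance, given a hypothetical specification such as "scores 0–100 are valid; 50 and above is a pass", equivalence partitioning and boundary value analysis yield a small set of representative cases (a framework-neutral sketch):

    def grade(score):
        # Function under test, specified as: 0-100 are valid; >= 50 passes.
        if not 0 <= score <= 100:
            raise ValueError("score out of range")
        return "pass" if score >= 50 else "fail"

    # Partitions: invalid-low, valid-fail, valid-pass, invalid-high,
    # with a test at each boundary of each partition.
    cases = [(-1, ValueError), (0, "fail"), (49, "fail"),
             (50, "pass"), (100, "pass"), (101, ValueError)]

    for score, expected in cases:
        if expected is ValueError:
            try:
                grade(score)
                raise AssertionError(f"{score} should have been rejected")
            except ValueError:
                pass                       # rejected as specified
        else:
            assert grade(score) == expected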

Specification-based testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.[25]

One advantage of the black box technique is that no programming knowledge is required. Whatever biases the programmers may have had, the tester likely has a different set and may emphasize different areas of functionality. On the other hand, black-box testing has been said to be "like a walk in a dark labyrinth without a flashlight."[26] Because they do not examine the source code, there are situations when a tester writes many test cases to check something that could have been tested by only one test case, or leaves some parts of the program untested.

This method of test can be applied to all levels of software testing: unit, integration, system and acceptance. It typically comprises most if not all testing at higher levels, but can also dominate unit testing.

Visual testing

The aim of visual testing is to provide developers with the ability to examine what was happening at the point of software failure by presenting the data in such a way that the developer can easily find the information he or she requires, and the information is expressed clearly.[27][28]

At the core of visual testing is the idea that showing someone a problem (or a test failure), rather than just describing it, greatly increases clarity and understanding. Visual testing therefore requires the recording of the entire test process – capturing everything that occurs on the test system in video format. Output videos are supplemented by real-time tester input via picture-in-a-picture webcam and audio commentary from microphones.

Visual testing provides a number of advantages. The quality of communication is increased dramatically because testers can show the problem (and the events leading up to it) to the developer as opposed to just describing it, and the need to replicate test failures will cease to exist in many cases. The developer will have all the evidence he or she requires of a test failure and can instead focus on the cause of the fault and how it should be fixed.

Visual testing is particularly well-suited for environments that deploy agile methods in their development of software, since agile methods require greater communication between testers and developers and collaboration within small teams.[citation needed]

Ad hoc testing and exploratory testing are important methodologies for checking software integrity, because they require less preparation time to implement, while the important bugs can be found quickly. In ad hoc testing, where testing takes place in an improvised, impromptu way, the ability of a test tool to visually record everything that occurs on a system becomes very important.[citation needed]

Visual testing is gathering recognition in customer acceptance and usability testing, because the test can be used by many individuals involved in the development process.[citation needed] For the customer, it becomes easy to provide detailed bug reports and feedback, and for program users, visual testing can record user actions on screen, as well as their voice and image, to provide a complete picture at the time of software failure for the developer.

Grey-box testing

Grey-box testing (American spelling: gray-box testing) involves having knowledge of internal data structures and algorithms for purposes of designing tests, while executing those tests at the user, or black-box level. The tester is not required to have full access to the software's source code.[29] Manipulating input data and formatting output do not qualify as grey-box, because the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test.

However, tests that require modifying a back-end data repository such as a database or a log file do qualify as grey-box, as the user would not normally be able to change the data repository in normal production operations.[citation needed] Grey-box testing may also include reverse engineering to determine, for instance, boundary values or error messages.

By knowing the underlying concepts of how the software works, the tester makes better-informed testing choices while testing the software from outside. Typically, a grey-box tester will be permitted to set up an isolated testing environment with activities such as seeding a database. The tester can observe the state of the product being tested after performing certain actions such as executing SQL statements against the database and then executing queries to ensure that the expected changes have been reflected. Grey-box testing implements intelligent test scenarios, based on limited information. This will particularly apply to data type handling, exception handling, and so on.[30]
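
A minimal sketch of this style, using Python's built-in sqlite3 module; the schema and the function under test are hypothetical:

    import sqlite3

    # Grey-box setup: the tester knows the schema and seeds an isolated database.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
    db.execute("INSERT INTO accounts VALUES (1, 100)")

    def charge_fee(conn, account_id, fee):
        # Stand-in for the code under test, normally reached through an API.
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                     (fee, account_id))

    charge_fee(db, 1, 25)     # exercise the behaviour from outside...

    # ...then verify the expected state change directly in the repository.
    (balance,) = db.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()
    assert balance == 75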

Testing levels

Tests are frequently grouped by where they are added in the software development process, or by the level of specificity of the test. The main levels during the development process as defined by the SWEBOK guide are unit-, integration-, and system testing that are distinguished by the test target without implying a specific process model.[31] Other test levels are classified by the testing objective.[31]

Unit testing

Unit testing, also known as component testing, refers to tests that verify the functionality of a specific section of code, usually at the function level. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors.[32]

These types of tests are usually written by developers as they work on code (white-box style), to ensure that the specific function is working as expected. One function might have multiple tests, to catch corner cases or other branches in the code. Unit testing alone cannot verify the functionality of a piece of software, but rather is used to assure that the building blocks the software uses work independently of each other.
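
A typical developer-written unit test might look like the following sketch, which uses Python's standard unittest module on a contrived function:

    import unittest

    def word_count(text):
        # Unit under test: counts whitespace-separated words.
        return len(text.split())

    class WordCountTest(unittest.TestCase):
        def test_simple_sentence(self):
            self.assertEqual(word_count("testing is fun"), 3)

        def test_corner_cases(self):
            # Additional assertions catch corner cases and other branches.
            self.assertEqual(word_count(""), 0)
            self.assertEqual(word_count("   spaced   out   "), 2)

    if __name__ == "__main__":
        unittest.main()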

Integration testing

Integration testing is any type of software testing that seeks to verify the interfaces between components against a software design. Software components may be integrated in an iterative way or all together ("big bang"). Normally the former is considered a better practice since it allows interface issues to be located more quickly and fixed.

Integration testing works to expose defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a system.[33]

Component interface testing

The practice of component interface testing can be used to check the handling of data passed between various units, or subsystem components, beyond full integration testing between those units.[34][35] The data being passed can be considered as "message packets" and the range or data types can be checked, for data generated from one unit, and tested for validity before being passed into another unit. One option for interface testing is to keep a separate log file of data items being passed, often with a timestamp logged to allow analysis of thousands of cases of data passed between units for days or weeks. Tests can include checking the handling of some extreme data values while other interface variables are passed as normal values.[34] Unusual data values in an interface can help explain unexpected performance in the next unit. Component interface testing is a variation of black-box testing,[35] with the focus on the data values beyond just the related actions of a subsystem component.
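
A sketch of the idea (the units, packet format and valid range are hypothetical): each "message packet" passed between units is range-checked and logged with a timestamp before hand-off.

    import time

    interface_log = []        # separate log of data items passed between units

    def hand_off(packet):
        # Validate the message packet before it enters the next unit.
        assert -40.0 <= packet["temperature"] <= 125.0, "value out of range"
        interface_log.append((time.time(), dict(packet)))
        return packet

    def sensor_unit():
        return {"temperature": 21.5}                  # producing unit

    def display_unit(packet):
        return f"{packet['temperature']:.1f} C"       # consuming unit

    # A normal value, then an extreme value while the rest stays normal.
    print(display_unit(hand_off(sensor_unit())))
    print(display_unit(hand_off({"temperature": 125.0})))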

System testing

System testing, or end-to-end testing, tests a completely integrated system to verify that it meets its requirements.[36] For example, a system test might involve testing a logon interface, then creating and editing an entry, plus sending or printing results, followed by summary processing or deletion (or archiving) of entries, then logoff.

In addition, software testing should ensure that the program, as well as working as expected, does not destroy or partially corrupt its operating environment or cause other processes within that environment to become inoperative (this includes not corrupting shared memory, not consuming or locking up excessive resources, and leaving parallel processes unharmed by its presence).[citation needed]

Acceptance testing

Finally, the system is delivered to the user for acceptance testing.

Testing types

Installation testing

An installation test assures that the system is installed correctly and works on the actual customer's hardware.

Compatibility testing

A common cause of software failure (real or perceived) is a lack of its compatibility with other application software, operating systems (or operating system versions, old or new), or target environments that differ greatly from the original (such as a terminal or GUI application intended to be run on the desktop now being required to become a web application, which must render in a web browser). For example, in the case of a lack of backward compatibility, this can occur because the programmers develop and test software only on the latest version of the target environment, which not all users may be running. This results in the unintended consequence that the latest work may not function on earlier versions of the target environment, or on older hardware that earlier versions of the target environment were capable of using. Sometimes such issues can be fixed by proactively abstracting operating system functionality into a separate program module or library.

Smoke and sanity testing

Sanity testing determines whether it is reasonable to proceed with further testing.

Smoke testing consists of minimal attempts to operate the software, designed to determine whether there are any basic problems that will prevent it from working at all. Such tests can be used as a build verification test.

Regression testing

Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncover software regressions, such as degraded or lost features, including old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly stops working as intended. Typically, regressions occur as an unintended consequence of program changes, when the newly developed part of the software collides with the previously existing code. Common methods of regression testing include re-running previous sets of test cases and checking whether previously fixed faults have re-emerged. The depth of testing depends on the phase in the release process and the risk of the added features. Tests can either be complete, for changes added late in the release or deemed to be risky, or be very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk. Regression testing is typically the largest test effort in commercial software development,[37] due to checking numerous details in prior software features; even new software can be developed while using some old test cases to test parts of the new design to ensure prior functionality is still supported.
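
Keeping one automated test per fixed fault is a common way to check that old bugs stay fixed; in the sketch below, the bug number and function are hypothetical:

    def parse_amount(text):
        # Fixed code: bug #1234 was a crash on input with surrounding spaces.
        return float(text.strip())

    def test_regression_bug_1234():
        # Re-run on every change so the old defect cannot silently return.
        assert parse_amount(" 12.50 ") == 12.5

    test_regression_bug_1234()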

Acceptance testing

Acceptance testing can mean one of two things:

  1. A smoke test is used as an acceptance test prior to introducing a new build to the main testing process, i.e. before integration or regression.
  2. Acceptance testing performed by the customer, often in their lab environment on their own hardware, is known as user acceptance testing (UAT). Acceptance testing may be performed as part of the hand-off process between any two phases of development.[citation needed]

Alpha testing

Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.[38]

Beta testing

Beta testing comes after alpha testing and can be considered a form of external user acceptance testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.[citation needed]

Functional vs non-functional testing

Functional testing refers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of "can the user do this" or "does this particular feature work."

Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or other performance, behavior under certain constraints, or security. Testing will determine the breaking point, the point at which extremes of scalability or performance lead to unstable execution. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users.

Destructive testing

Destructive testing attempts to cause the software or a sub-system to fail. It verifies that the software functions properly even when it receives invalid or unexpected inputs, thereby establishing the robustness of input validation and error-management routines.[citation needed] Software fault injection, in the form of fuzzing, is an example of failure testing. Various commercial non-functional testing tools are linked from the software fault injection page; there are also numerous open-source and free software tools available that perform destructive testing.

Software performance testing

Performance testing is generally executed to determine how a system or sub-system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.

Load testing is primarily concerned with testing that the system can continue to operate under a specific load, whether that be large quantities of data or a large number of users. This is generally referred to as software scalability. The related load testing activity, when performed as a non-functional activity, is often referred to as endurance testing. Volume testing is a way to test software functions even when certain components (for example a file or database) increase radically in size. Stress testing is a way to test reliability under unexpected or rare workloads. Stability testing (often referred to as load or endurance testing) checks whether the software can continuously function well over an acceptable period.
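
A crude load-testing harness can be sketched with standard-library threads; dedicated tools add ramp-up profiles, percentile latencies and distributed load generation. The request handler below is a stand-in for the operation under test:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def handle_request():
        time.sleep(0.01)          # stand-in for real work, e.g. an HTTP call

    def load_test(concurrent_users, requests_per_user):
        total = concurrent_users * requests_per_user
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
            for _ in range(total):
                pool.submit(handle_request)
        elapsed = time.perf_counter() - start
        print(f"{total} requests, {concurrent_users} users: "
              f"{total / elapsed:.0f} requests/second")

    # Step the load upward until responsiveness degrades past the target.
    for users in (1, 10, 50):
        load_test(users, 20)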

There is little agreement on what the specific goals of performance testing are. The terms load testing, performance testing, scalability testing, and volume testing, are often used interchangeably.

Real-time software systems have strict timing constraints. To test if timing constraints are met, real-time testing is used.

Usability testing

Usability testing is needed to check if the user interface is easy to use and understand. It is concerned mainly with the use of the application.

Accessibility testing

Accessibility testing may include compliance with standards such as the Americans with Disabilities Act of 1990, Section 508 Amendment to the Rehabilitation Act of 1973, and the Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C).

Security testing

Security testing is essential for software that processes confidential data to prevent system intrusion by hackers.

Internationalization and localization

The general ability of software to be internationalized and localized can be automatically tested without actual translation, by using pseudolocalization. It will verify that the application still works, even after it has been translated into a new language or adapted for a new culture (such as different currencies or time zones).[39]
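
A toy pseudolocalizer shows the technique: the bracket padding simulates the text growth of real translations, and the accented look-alikes make untranslated, hard-coded strings stand out.

    # Map ASCII vowels to accented look-alikes and pad the result, so that
    # hard-coded strings and layout overflow are visible at a glance.
    ACCENTS = str.maketrans("AEIOUaeiou", "ÀÉÎÕÜàéîõü")

    def pseudolocalize(s):
        return "[!! " + s.translate(ACCENTS) + " !!]"

    print(pseudolocalize("Save file"))    # prints: [!! Sàvé fîlé !!]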

Actual translation to human languages must be tested, too. Possible localization failures include:

  • Software is often localized by translating a list of strings out of context, and the translator may choose the wrong translation for an ambiguous source string.
  • Technical terminology may become inconsistent if the project is translated by several people without proper coordination or if the translator is imprudent.
  • Literal word-for-word translations may sound inappropriate, artificial or too technical in the target language.
  • Untranslated messages in the original language may be left hard coded in the source code.
  • Some messages may be created automatically at run time and the resulting string may be ungrammatical, functionally incorrect, misleading or confusing.
  • Software may use a keyboard shortcut which has no function on the source language's keyboard layout, but is used for typing characters in the layout of the target language.
  • Software may lack support for the character encoding of the target language.
  • Fonts and font sizes which are appropriate in the source language may be inappropriate in the target language; for example, CJK characters may become unreadable if the font is too small.
  • A string in the target language may be longer than the software can handle. This may make the string partly invisible to the user or cause the software to crash or malfunction.
  • Software may lack proper support for reading or writing bi-directional text.
  • Software may display images with text that was not localized.
  • Localized operating systems may have differently named system configuration files and environment variables and different formats for date and currency.

Development testing

Development Testing is a software development process that involves the synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by the software developer or engineer during the construction phase of the software development lifecycle. Rather than replacing traditional QA focuses, it augments them. Development Testing aims to eliminate construction errors before code is promoted to QA; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development and QA process.

Depending on the organization's expectations for software development, Development Testing might include static code analysis, data flow analysis, metrics analysis, peer code reviews, unit testing, code coverage analysis, traceability, and other software verification practices.

A/B testing

A/B testing is a method of comparing two variants of a piece of software by exposing each variant to a different group of users and measuring which one performs better against a defined goal.

Testing process

Traditional CMMI or waterfall development model

A common practice of software testing is that testing is performed by an independent group of testers after the functionality is developed, before it is shipped to the customer.[40] This practice often results in the testing phase being used as a project buffer to compensate for project delays, thereby compromising the time devoted to testing.[41]

Another practice is to start software testing at the same moment the project starts, as a continuous process, until the project finishes.[42]

Agile or Extreme development model

In contrast, some emerging software disciplines such as extreme programming and the agile software development movement adhere to a "test-driven software development" model. In this process, unit tests are written first, by the software engineers (often with pair programming in the extreme programming methodology). Of course, these tests fail initially, as they are expected to. Then as code is written it passes incrementally larger portions of the test suites. The test suites are continuously updated as new failure conditions and corner cases are discovered, and they are integrated with any regression tests that are developed. Unit tests are maintained along with the rest of the software source code and generally integrated into the build process (with inherently interactive tests being relegated to a partially manual build acceptance process). The ultimate goal of this test process is to achieve continuous integration where software updates can be published to the public frequently.[43][44]

This methodology increases the testing effort done by development, before reaching any formal testing team. In some other development models, most of the test execution occurs after the requirements have been defined and the coding process has been completed.
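
The red-green-refactor cycle can be sketched as follows (a contrived example; real projects run such tests through a framework and a continuous integration server):

    # Step 1 (red): the test is written first and fails, because
    # fizzbuzz does not exist yet.
    def test_fizzbuzz():
        assert fizzbuzz(3) == "Fizz"
        assert fizzbuzz(5) == "Buzz"
        assert fizzbuzz(15) == "FizzBuzz"
        assert fizzbuzz(7) == "7"

    # Step 2 (green): just enough code is written to make the test pass;
    # it can then be refactored freely while the test keeps it honest.
    def fizzbuzz(n):
        result = "Fizz" * (n % 3 == 0) + "Buzz" * (n % 5 == 0)
        return result or str(n)

    test_fizzbuzz()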

Top-down and bottom-up

Bottom Up Testing is an approach to integrated testing where the lowest level components (modules, procedures, and functions) are tested first, then integrated and used to facilitate the testing of higher level components. After the integration testing of lower level integrated modules, the next level of modules will be formed and can be used for integration testing. The process is repeated until the components at the top of the hierarchy are tested. This approach is helpful only when all or most of the modules of the same development level are ready.[citation needed] This method also helps to determine the levels of software developed and makes it easier to report testing progress in the form of a percentage.[citation needed]

Top Down Testing is an approach to integrated testing where the top integrated modules are tested and the branch of the module is tested step by step until the end of the related module.

In both, method stubs and drivers are used to stand in for missing components and are replaced as the levels are completed.
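
A sketch of the stub idea in top-down integration (all names hypothetical): the high-level module is exercised by a driver before its real dependency exists, with a stub standing in.

    def price_report(get_price, items):
        # High-level module under test; its dependency (a real pricing
        # service) is not implemented yet.
        return {item: get_price(item) for item in items}

    def stub_get_price(item):
        # Stub for the missing lower-level component: canned answers only.
        return {"apple": 100, "pear": 150}.get(item, 0)

    # Driver exercising the top-level module against the stub.
    assert price_report(stub_get_price, ["apple", "pear"]) == {
        "apple": 100, "pear": 150}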

A sample testing cycle

Although variations exist between organizations, there is a typical cycle for testing.[45] The sample below is common among organizations employing the Waterfall development model. The same practices are commonly found in other development models, but might not be as clear or explicit.

  • Requirements analysis: Testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work to determine what aspects of a design are testable and with what parameters those tests work.
  • Test planning: Test strategy, test plan, testbed creation. Since many activities will be carried out during testing, a plan is needed.
  • Test development: Test procedures, test scenarios, test cases, test datasets, test scripts to use in testing software.
  • Test execution: Testers execute the software based on the plans and test documents then report any errors found to the development team.
  • Test reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
  • Test result analysis: Also known as defect analysis; done by the development team, usually along with the client, in order to decide which defects should be assigned, fixed, rejected (i.e. the software is found to be working properly) or deferred to be dealt with later.
  • Defect retesting: Once a defect has been dealt with by the development team, it is retested by the testing team; also known as resolution testing.
  • Regression testing: It is common to have a small test program built of a subset of tests, for each integration of new, modified, or fixed software, in order to ensure that the latest delivery has not ruined anything, and that the software product as a whole is still working correctly.
  • Test closure: Once the test meets the exit criteria, activities such as capturing the key outputs, lessons learned, results, logs, and documents related to the project are archived and used as a reference for future projects.

Automated testing

Many programming groups are relying more and more on automated testing, especially groups that use test-driven development. There are many frameworks to write tests in, and continuous integration software will run tests automatically every time code is checked into a version control system.

While automation cannot reproduce everything that a human can do (and all the ways they think of doing it), it can be very useful for regression testing. However, it does require a well-developed test suite of testing scripts in order to be truly useful.

Testing tools

Program testing and fault detection can be aided significantly by testing tools and debuggers. Testing/debug tools include features such as:

  • Program monitors, permitting full or partial monitoring of program code
  • Formatted dump or symbolic debugging, allowing inspection of program variables on error or at chosen points
  • Automated functional GUI testing tools are used to repeat system-level tests through the GUI
  • Benchmarks, allowing run-time performance comparisons to be made
  • Performance analysis (or profiling) tools that can help to highlight hot spots and resource usage

Some of these features may be incorporated into an Integrated Development Environment (IDE).

Measurement in software testing

Usually, quality is constrained to such topics as correctness, completeness, security,[citation needed] but can also include more technical requirements as described under the ISO standard ISO/IEC 9126, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability.

There are a number of frequently used software metrics, or measures, which are used to assist in determining the state of the software or the adequacy of the testing.

Testing artifacts

The software testing process can produce several artifacts.

Test plan
A test specification is called a test plan. The developers are well aware of what test plans will be executed, and this information is made available to management and the developers. The idea is to make them more cautious when developing their code or making additional changes. Some companies have a higher-level document called a test strategy.
Traceability matrix
A traceability matrix is a table that correlates requirements or design documents to test documents. It is used to change tests when related source documents are changed, and to select test cases for execution when planning regression tests, by considering requirement coverage.
Test case
A test case normally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and actual result. Clinically defined, a test case is an input and an expected result.[46] This can be as pragmatic as 'for condition x your derived result is y', whereas other test cases describe the input scenario and expected results in more detail. It can occasionally be a series of steps (but often steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) but with one expected result or expected outcome. The optional fields are a test case ID, test step or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database, or other common repository. In a database system, you may also be able to see past test results, who generated the results, and what system configuration was used to generate those results. These past results would usually be stored in a separate table.
Test script
A test script is a procedure, or programming code, that replicates user actions. Initially, the term was derived from the product of work created by automated regression test tools. A test case will be used as a baseline to create test scripts using a tool or a program.
Test suite
The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases. It definitely contains a section where the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.
Test fixture or test data
In most cases, multiple sets of values or data are used to test the same functionality of a particular feature. All the test values and changeable environmental components are collected in separate files and stored as test data. It is also useful to provide this data to the client along with the product or project.
Test harness
The software, tools, samples of data input and output, and configurations are all referred to collectively as a test harness.

Certifications

Several certification programs exist to support the professional aspirations of software testers and quality assurance specialists. No certification now offered actually requires the applicant to show their ability to test software. No certification is based on a widely accepted body of knowledge. This has led some to declare that the testing field is not ready for certification.[47] Certification itself cannot measure an individual's productivity, their skill, or practical knowledge, and cannot guarantee their competence, or professionalism as a tester.[48]

Software testing certification types
  • Exam-based: Formalized exams, which need to be passed; can also be learned by self-study [e.g., for ISTQB or QAI][49]
  • Education-based: Instructor-led sessions, where each course has to be passed [e.g., International Institute for Software Testing (IIST)].
Testing certifications
  • Certified Associate in Software Testing (CAST) offered by the QAI [50]
  • CATe offered by the International Institute for Software Testing[51]
  • Certified Manager in Software Testing (CMST) offered by the QAI [50]
  • Certified Test Manager (CTM) offered by International Institute for Software Testing[51]
  • Certified Software Tester (CSTE) offered by the Quality Assurance Institute (QAI)[50]
  • Certified Software Test Professional (CSTP) offered by the International Institute for Software Testing[51]
  • CSTP (TM) (Australian Version) offered by K. J. Ross & Associates[52]
  • ISEB offered by the Information Systems Examinations Board
  • ISTQB Certified Tester, Foundation Level (CTFL) offered by the International Software Testing Qualification Board [53][54]
  • ISTQB Certified Tester, Advanced Level (CTAL) offered by the International Software Testing Qualification Board [53][54]
  • TMPF TMap Next Foundation offered by the Examination Institute for Information Science[55]
  • TMPA TMap Next Advanced offered by the Examination Institute for Information Science[55]
Quality assurance certifications
  • Certified Manager of Software Quality (CMSQ) offered by the Quality Assurance Institute (QAI)[50]
  • Certified Software Quality Analyst (CSQA) offered by the Quality Assurance Institute (QAI)[50]
  • Certified Software Quality Engineer (CSQE) offered by the American Society for Quality (ASQ)[56]
  • Certified Quality Improvement Associate (CQIA) offered by the American Society for Quality (ASQ)[56]

Controversy

Some of the major software testing controversies include:

What constitutes responsible software testing? 
Members of the "context-driven" school of testing[57] believe that there are no "best practices" of testing, but rather that testing is a set of skills that allow the tester to select or invent testing practices to suit each unique situation.[58]
Agile vs. traditional 
Should testers learn to work under conditions of uncertainty and constant change or should they aim at process "maturity"? The agile testing movement has received growing popularity since 2006 mainly in commercial circles,[59][60] whereas government and military[61] software providers use this methodology but also the traditional test-last models (e.g. in the Waterfall model).[citation needed]
Exploratory test vs. scripted
Should tests be designed at the same time as they are executed or should they be designed beforehand?[62]
Manual testing vs. automated 
Some writers believe that test automation is so expensive relative to its value that it should be used sparingly.[63] In particular, test-driven development states that developers should write unit tests, such as those of xUnit, before coding the functionality. The tests then can be considered as a way to capture and implement the requirements.
Software design vs. software implementation
Should testing be carried out only at the end or throughout the whole process?
Who watches the watchmen? 
The idea is that any form of observation is also an interaction—the act of testing can also affect that which is being tested.[64]

Related processes

Software verification and validation

Software testing is used in association with verification and validation:[65]

  • Verification: Have we built the software right? (i.e., does it implement the requirements).
  • Validation: Have we built the right software? (i.e., do the requirements satisfy the customer).

The terms verification and validation are commonly used interchangeably in the industry; it is also common to see these two terms incorrectly defined. According to the IEEE Standard Glossary of Software Engineering Terminology:

Verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
Validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.

According to the ISO 9000 standard:

Verification is confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.
Validation is confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.

Software quality assurance (SQA)

Software testing is a part of the software quality assurance (SQA) process.[4] In SQA, software process specialists and auditors are concerned with the software development process rather than just the artifacts such as documentation, code and systems. They examine and change the software engineering process itself to reduce the number of faults that end up in the delivered software: the so-called "defect rate". What constitutes an "acceptable defect rate" depends on the nature of the software; a flight simulator video game would have a much higher defect tolerance than software for an actual airplane. Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies.[citation needed]

Software testing is a task intended to detect defects in software by contrasting a computer program's expected results with its actual results for a given set of inputs. By contrast, QA (quality assurance) is the implementation of policies and procedures intended to prevent defects from occurring in the first place.

References

  1. Exploratory Testing, Cem Kaner, Florida Institute of Technology, Quality Assurance Institute Worldwide Annual Software Testing Conference, Orlando, FL, November 2006
  2. Software Testing by Jiantao Pan, Carnegie Mellon University
  3. Leitner, A., Ciupa, I., Oriol, M., Meyer, B., Fiva, A., "Contract Driven Development = Test Driven Development – Writing Test Cases", Proceedings of ESEC/FSE'07: European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering 2007, (Dubrovnik, Croatia), September 2007
  4. 4.0 4.1 4.2 Kaner, Cem; Falk, Jack and Nguyen, Hung Quoc (1999). Testing Computer Software, 2nd Ed. New York, et al: John Wiley and Sons, Inc. pp. 480 pages. ISBN 0-471-35846-0. 
  5. Kolawa, Adam; Huizinga, Dorota (2007). Automated Defect Prevention: Best Practices in Software Management. Wiley-IEEE Computer Society Press. pp. 41–43. ISBN 0-470-04212-5. 
  6. Kolawa, Adam; Huizinga, Dorota (2007). Automated Defect Prevention: Best Practices in Software Management. Wiley-IEEE Computer Society Press. p. 426. ISBN 0-470-04212-5. 
  7. 7.0 7.1 Section 1.1.2, Certified Tester Foundation Level Syllabus, International Software Testing Qualifications Board
  8. Principle 2, Section 1.3, Certified Tester Foundation Level Syllabus, International Software Testing Qualifications Board
  9. "Proceedings from the 5th International Conference on Software Testing and Validation (ICST). Software Competence Center Hagenberg. "Test Design: Lessons Learned and Practical Implications.". 
  10. Software errors cost U.S. economy $59.5 billion annually, NIST report
  11. McConnell, Steve (2004). Code Complete (2nd ed.). Microsoft Press. p. 29. ISBN 0-7356-1967-0. 
  12. Bossavit, Laurent (2013). The Leprechauns of Software Engineering: How Folklore Turns into Fact and What to Do about It. Leanpub. Chapter 10.
  13. see D. Gelperin and W.C. Hetzel
  14. 14.0 14.1 Myers, Glenford J. (1979). The Art of Software Testing. John Wiley and Sons. ISBN 0-471-04328-1. 
  15. People's Computer Company (1987). "Dr. Dobb's Journal of Software Tools for the Professional Programmer" (M&T Pub) 12 (1–6): 116.
  16. Gelperin, D.; B. Hetzel (1988). "The Growth of Software Testing". CACM 31 (6): 687. doi:10.1145/62959.62965. ISSN 0001-0782. 
  17. until 1956 it was the debugging oriented period, when testing was often associated to debugging: there was no clear difference between testing and debugging. Gelperin, D.; B. Hetzel (1988). "The Growth of Software Testing". CACM 31 (6). ISSN 0001-0782. 
  18. From 1957–1978 there was the demonstration oriented period where debugging and testing was distinguished now – in this period it was shown, that software satisfies the requirements. Gelperin, D.; B. Hetzel (1988). "The Growth of Software Testing". CACM 31 (6). ISSN 0001-0782. 
  19. The time between 1979–1982 is announced as the destruction oriented period, where the goal was to find errors. Gelperin, D.; B. Hetzel (1988). "The Growth of Software Testing". CACM 31 (6). ISSN 0001-0782. 
  20. 1983–1987 is classified as the evaluation oriented period: intention here is that during the software lifecycle a product evaluation is provided and measuring quality. Gelperin, D.; B. Hetzel (1988). "The Growth of Software Testing". CACM 31 (6). ISSN 0001-0782. 
  21. From 1988 on it was seen as prevention oriented period where tests were to demonstrate that software satisfies its specification, to detect faults and to prevent faults. Gelperin, D.; B. Hetzel (1988). "The Growth of Software Testing". CACM 31 (6). ISSN 0001-0782. 
  22. Introduction, Code Coverage Analysis, Steve Cornett
  23. Ron, Patton. Software Testing. 
  24. Laycock, G. T. (1993). The Theory and Practice of Specification Based Software Testing (PostScript). Dept of Computer Science, Sheffield University, UK. Retrieved 2008-02-13. 
  25. Bach, James (June 1999). "Risk and Requirements-Based Testing" (PDF). Computer 32 (6): 113–114. Retrieved 2008-08-19. 
  26. Savenkov, Roman (2008). How to Become a Software Tester. Roman Savenkov Consulting. p. 159. ISBN 978-0-615-23372-7. 
  27. "Visual testing of software – Helsinki University of Technology" (PDF). Retrieved 2012-01-13. 
  28. "Article on visual testing in Test Magazine". Testmagazine.co.uk. Retrieved 2012-01-13. 
  29. Patton, Ron. Software Testing. 
  30. "SOA Testing Tools for Black, White and Gray Box SOA Testing Techniques". Crosschecknet.com. Retrieved 2012-12-10. 
  31. 31.0 31.1 "SWEBOK Guide – Chapter 5". Computer.org. Retrieved 2012-01-13. 
  32. Binder, Robert V. (1999). Testing Object-Oriented Systems: Objects, Patterns, and Tools. Addison-Wesley Professional. p. 45. ISBN 0-201-80938-9. 
  33. Beizer, Boris (1990). Software Testing Techniques (Second ed.). New York: Van Nostrand Reinhold. pp. 21,430. ISBN 0-442-20672-0. 
  34. 34.0 34.1 Clapp, Judith A. (1995). Software Quality Control, Error Analysis, and Testing. p. 313. ISBN 0815513631. 
  35. 35.0 35.1 Mathur, Aditya P. (2008). Foundations of Software Testing. Purdue University. p. 18. ISBN 978-8131716601. 
  36. IEEE (1990). IEEE Standard Computer Dictionary: A Compilation of IEEE Standard Computer Glossaries. New York: IEEE. ISBN 1-55937-079-3. 
  37. Paul Ammann; Jeff Offutt (2008). Introduction to Software Testing. p. 215 of 322 pages.
  38. van Veenendaal, Erik. "Standard glossary of terms used in Software Testing". Retrieved 4 January 2013. 
  39. "Globalization Step-by-Step: The World-Ready Approach to Testing. Microsoft Developer Network". Msdn.microsoft.com. Retrieved 2012-01-13. 
  40. EtestingHub-Online Free Software Testing Tutorial. "e)Testing Phase in Software Testing:". Etestinghub.com. Retrieved 2012-01-13. 
  41. Myers, Glenford J. (1979). The Art of Software Testing. John Wiley and Sons. pp. 145–146. ISBN 0-471-04328-1. 
  42. Dustin, Elfriede (2002). Effective Software Testing. Addison Wesley. p. 3. ISBN 0-201-79429-2. 
  43. Marchenko, Artem (November 16, 2007). "XP Practice: Continuous Integration". Retrieved 2009-11-16. 
  44. Gurses, Levent (February 19, 2007). "Agile 101: What is Continuous Integration?". Retrieved 2009-11-16. 
  45. Pan, Jiantao (Spring 1999). "Software Testing (18-849b Dependable Embedded Systems)". Topics in Dependable Embedded Systems. Electrical and Computer Engineering Department, Carnegie Mellon University. 
  46. IEEE (1998). IEEE standard for software test documentation. New York: IEEE. ISBN 0-7381-1443-X. 
  47. Kaner, Cem (2001). "NSF grant proposal to "lay a foundation for significant improvements in the quality of academic and commercial courses in software testing"" (PDF). 
  48. Kaner, Cem (2003). "Measuring the Effectiveness of Software Testers" (PDF). 
  49. Black, Rex (December 2008). Advanced Software Testing- Vol. 2: Guide to the ISTQB Advanced Certification as an Advanced Test Manager. Santa Barbara: Rocky Nook Publisher. ISBN 1-933952-36-9. 
  50. 50.0 50.1 50.2 50.3 50.4 "Quality Assurance Institute". Qaiglobalinstitute.com. Retrieved 2012-01-13. 
  51. 51.0 51.1 51.2 "International Institute for Software Testing". Testinginstitute.com. Retrieved 2012-01-13. 
  52. K. J. Ross & Associates
  53. 53.0 53.1 "ISTQB". 
  54. 54.0 54.1 "ISTQB in the U.S.". 
  55. 55.0 55.1 "EXIN: Examination Institute for Information Science". Exin-exams.com. Retrieved 2012-01-13. 
  56. 56.0 56.1 "American Society for Quality". Asq.org. Retrieved 2012-01-13. 
  57. "context-driven-testing.com". context-driven-testing.com. Retrieved 2012-01-13. 
  58. "Article on taking agile traits without the agile method". Technicat.com. Retrieved 2012-01-13. 
  59. “We’re all part of the story” by David Strom, July 1, 2009
  60. IEEE article about differences in adoption of agile trends between experienced managers vs. young students of the Project Management Institute. See also Agile adoption study from 2007
  61. Willison, John S. (April 2004). "Agile Software Development for an Agile Force". CrossTalk (STSC) (April 2004). Archived from the original on unknown. 
  62. "IEEE article on Exploratory vs. Non Exploratory testing". Ieeexplore.ieee.org. Retrieved 2012-01-13. 
  63. An example is Mark Fewster, Dorothy Graham: Software Test Automation. Addison Wesley, 1999, ISBN 0-201-33140-3.
  64. Microsoft Development Network Discussion on exactly this topic
  65. Tran, Eushiuan (1999). "Verification/Validation/Certification". In Koopman, P. Topics in Dependable Embedded Systems. USA: Carnegie Mellon University. Retrieved 2008-01-13. 

Further reading

  • Bertrand Meyer, "Seven Principles of Software Testing," Computer, vol. 41, no. 8, pp. 99–101, Aug. 2008, doi:10.1109/MC.2008.306; available online.
  • Brian Hambling, Peter Morgan, Angelina Samaroo, Geoff Thompson, Peter Williams, Software Testing: An ISTQB-ISEB Foundation Guide, Oct. 2010, Swindon, BCS Learning and Development Ltd. ISBN 978-1-902505-79-4
  • Brian Hambling, Angelina Samaroo, Software Testing: An ISEB Intermediate Certificate, Aug. 2009, Swindon, BCS Learning and Development Ltd. ISBN 978-1906124137
