The process of repeating tests following changes to a program is referred to as “regression testing”.
Regression testing utilizes existing test cases to check that the changes made have produced no new faults and have had no unintentional side effects. In other words, the objective is to ensure that the parts of a revised system that haven’t been changed still work as they did before.
The simplest way to do this is to perform existing tests on the new version of the program.
Regression testing and test automation
In order for existing test cases to be useful for regression testing, they have to be repeatable. This means that manual test cases have to be sufficiently well documented. Test cases used in regression testing are run regularly and often, and are therefore prime candidates for test automation. Automating regression test cases is extremely useful, as it ensures precise repeatability and simultaneously reduces the cost of each test repetition.
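As a minimal sketch of what such an automated regression test can look like, here is a pytest example; the `discount()` function is a hypothetical stand-in for production code that would normally be imported:

```python
# test_discount.py -- a minimal automated regression test (pytest).
# discount() is a hypothetical stand-in for the real function under test;
# in practice it would be imported from the production code.
import pytest

def discount(amount: float) -> float:
    """Stand-in implementation: flat 10% discount."""
    return round(amount * 0.9, 2)

@pytest.mark.parametrize("amount, expected", [
    (100.0, 90.0),   # standard case
    (0.0, 0.0),      # boundary: empty order
    (49.99, 44.99),  # rounding behavior must stay stable
])
def test_discount_regression(amount, expected):
    # Re-run after every change: any deviation from the recorded
    # expected values signals an unintended side effect.
    assert discount(amount) == pytest.approx(expected, abs=0.01)
```

Because the expected values are recorded once and checked on every run, each repetition costs nothing beyond machine time.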
The extent of regression testing
Which of the existing tests should be used to ensure successful regression testing?
Because we are checking that existing functionality has not been (unintentionally) impaired, we basically need to re-run all the tests that cover this pre-existing functionality.
If very few (or no) tests are automated, and you therefore have to perform regression testing manually, you will have to select the smallest possible subset of the manual tests. To select the appropriate test cases you have to carefully analyze the test specifications to see which test cases relate to which pre-existing functionality and which to the new, modified functionality.
If you have automated test cases, the simplest strategy is to re-execute all of them for the new product version and interpret the outcomes as follows (a small triage sketch follows the list):
- Test cases that return a “passed” result indicate an unchanged component/feature. For features that were flagged for change, a pass means either that the required change hasn’t been made or that the “old” test cases weren’t formulated precisely enough to detect the modified functionality. In both cases, the corresponding test cases need to be modified to ensure that they react to the new functionality.
- Test cases that return a “failed” result indicate modified functionality:
  - If this includes features that weren’t flagged for change, the fail is a genuine finding: it indicates that, contrary to planning, the corresponding feature has been changed. Because such unintentional modifications are revealed by the “old” test cases, these test cases themselves need no further changes.
  - For features that were intended to change, the test cases need to be adapted so that they cover the new functionality.
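The following sketch mirrors this triage logic in plain Python; the feature names, test names, and results are purely illustrative.

```python
# Triage sketch: match automated test results against the set of features
# that were *planned* to change. All names and outcomes are illustrative.

planned_changes = {"checkout", "invoice_pdf"}  # features flagged for change

# Hypothetical run results: test name -> (covered feature, outcome)
results = {
    "test_login":       ("login", "passed"),
    "test_checkout":    ("checkout", "passed"),
    "test_invoice_pdf": ("invoice_pdf", "failed"),
    "test_search":      ("search", "failed"),
}

for test, (feature, outcome) in results.items():
    if feature in planned_changes and outcome == "passed":
        # Either the change was not made, or the test is too imprecise
        # to detect the new behavior: adapt the test case.
        print(f"{test}: adapt test (does not yet exercise the new behavior)")
    elif feature in planned_changes and outcome == "failed":
        # Expected: update the test's expectations to the new functionality.
        print(f"{test}: adapt test (update to new expected results)")
    elif outcome == "failed":
        # Unplanned change to a stable feature: a genuine regression finding.
        print(f"{test}: report defect (unintended side effect)")
    else:
        print(f"{test}: keep as-is (feature unchanged and still working)")
```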
The result of all this is a suite of regression tests that verify the planned alterations to functionality. Test cases that cover completely new functions are not yet part of this suite and have to be developed separately.
Complete vs. partial regression testing
In practice, complete regression testing that runs all existing tests is usually too costly and takes too much time, especially when (as mentioned above) manual tests are involved.
We are therefore looking for criteria that help us decide which legacy test cases can be omitted without losing too much information. As ever in a testing context, this requires a compromise between minimizing costs and accepting business risk. The following are common strategies for selecting test cases (a selection sketch follows the list):
- Repeat only the tests that were given high priority in the test schedule.
- Leave out special cases for functional tests.
- Limit testing to certain specific configurations (for example, test only the English-language version or only a specific operating system).
- Limit testing to specific components or test levels.
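The first strategy, repeating only high-priority tests, maps naturally onto pytest’s marker mechanism; the marker names and placeholder tests below are illustrative.

```python
# Selection sketch using pytest markers (marker names are illustrative).
# Register custom markers in pytest.ini to avoid warnings:
#
#   [pytest]
#   markers =
#       high_priority: core regression tests, run on every change
#       special_case: rarely exercised edge cases, full regression runs only
import pytest

@pytest.mark.high_priority
def test_core_checkout_flow():
    assert True  # placeholder for a high-priority business scenario

@pytest.mark.special_case
def test_leap_year_billing():
    assert True  # placeholder for a special case that can be skipped

# Partial regression run (high-priority tests only):
#   pytest -m high_priority
# Full regression run:
#   pytest
```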
The rules listed here apply primarily to system testing. On lower test levels, regression-testing criteria can be based on design documentation (such as the class hierarchy) or on white-box information.
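At those levels, change-based selection can be sketched as follows, assuming a (hypothetical) dependency map that links source modules to the test modules exercising them:

```python
# White-box selection sketch: re-run only the tests that depend on
# modules changed in this revision. The map and file names are invented.

dependency_map = {
    "pricing.py": ["test_pricing.py", "test_checkout.py"],
    "auth.py":    ["test_auth.py"],
    "reports.py": ["test_reports.py"],
}

# In practice this set would be derived from the version-control diff.
changed_modules = {"pricing.py"}

selected = sorted(
    {test
     for module in changed_modules
     for test in dependency_map.get(module, [])}
)
print(selected)  # ['test_checkout.py', 'test_pricing.py']
```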