Every software system needs to be modified and tweaked during its lifetime. This process is usually referred to as “software maintenance”.
Software does not wear out. In contrast to hardware maintenance, and unlike physical industrial products, the purpose of software maintenance is not to preserve the ability to operate or to repair damage caused by use. The purposes of software maintenance are:
- To correct faults that were unintentionally built into the product
- To improve the quality characteristics of the product
- To adapt the product to changed operating conditions (for example, when a new operating system, a new database, or a new communications protocol is implemented)
The corresponding test processes are called maintenance testing.
Changes in a software product can be triggered by bug fixes, or by planned modification/extension of its functionality that is part of “normal” maintenance and continuing development.
Testing new releases
In both cases, the result is a new product release. New releases are largely identical to earlier releases, but with some modifications to existing functionality and some completely new features.
How does the testing process react to these changes? Do all tests on every test level have to be repeated for each release? Or is it sufficient to test only the elements that are directly affected by the changes that have been made?
Maintenance testing
In the case of software maintenance (see above), the basic testing strategy (also called confirmation testing) involves repeating all test cases that revealed failures in the previous version. Such test cases have to be passed in order to classify the corresponding faults as corrected.
If faults have been fixed that were not revealed by previous test cases (for example, because they originated from a hotline ticket), you need to draft new test cases to verify that the newly discovered faults really have been corrected.
Correcting previously undiscovered faults often changes the (correct) behavior of nearby program elements. This can happen deliberately or accidentally, making additional test cases necessary. These are either modified or new test cases that verify whether the changes achieve their intended effects. You also need to repeat the existing test cases that verify that “the rest” of the modified element remains unchanged and still functions properly.
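The combination described above can be sketched as follows. This is a minimal, hypothetical example (the `discounted_price` function and its rounding fault are invented for illustration): one confirmation test reproduces the originally reported failure and must now pass, while regression tests pin down the unchanged behavior of the rest of the element.

```python
# Hypothetical example: a discount function whose rounding fault
# (reported via a hotline ticket) was fixed in this release.
def discounted_price(price: float, percent: int) -> float:
    """Return the price after applying a percentage discount."""
    return round(price * (100 - percent) / 100, 2)

# Confirmation test: reproduces the originally reported failure.
# It must pass before the fault can be classified as corrected.
def test_confirmation_rounding_fix():
    assert discounted_price(19.99, 15) == 16.99

# Regression tests: verify that "the rest" of the modified
# element still behaves exactly as before the fix.
def test_regression_no_discount():
    assert discounted_price(10.00, 0) == 10.00

def test_regression_full_discount():
    assert discounted_price(10.00, 100) == 0.00

if __name__ == "__main__":
    test_confirmation_rounding_fix()
    test_regression_no_discount()
    test_regression_full_discount()
    print("all maintenance tests passed")
```

A test runner such as pytest would collect the `test_*` functions automatically; the `__main__` block merely makes the sketch runnable on its own.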
Hotfixes
Some software failures cause immediate threats to system integrity and therefore require prompt attention. In such cases, an emergency “hotfix” is more expedient than a well-thought-out long-term solution. Concentrating on the most important confirmation tests helps to deliver a speedy hotfix, but you will nevertheless need to perform comprehensive testing (as described above) as soon as possible.
Maintenance testing is always simpler and more successful if the project manager plans maintenance releases in advance and includes them in the overall test schedule. When dealing with legacy systems, you will often only have access to outdated specifications (or no specifications at all), which makes maintenance and maintenance testing much more difficult. Appropriate planning makes this aspect of testing much easier.
Maintenance mustn’t be used as an excuse to skip tests. If you skip testing because “a future release will correct defects anyway”, you haven’t properly understood the costs and risks that software defects can cause.
Impact analysis
Confirmation testing on its own, even combined with new tests in the vicinity of a modification, is not really sufficient. Apparently simple local changes can cause unexpected and sometimes disastrous consequences and side effects in other (often distant) parts of the system. How many and which types of test cases are necessary to reduce this risk has to be determined by a highly specific “impact analysis” of the potential effects of the changes you make.
Maintenance testing following modification
When software is modified to suit changed operating conditions, you need to run tests to ensure that the system operator accepts the modified system. This is because various aspects of the non-functional system attributes (such as performance, resource requirements, or installation preconditions) could behave differently in the customer’s environment following modification.
If modifications involve data conversion or migration, you also need to test data completeness and integrity. Apart from these factors, the overall strategy for testing a modified system is the same as when you are testing a system following maintenance (see above).
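The completeness and integrity checks mentioned above can be automated along the following lines. This is a minimal sketch with invented records (the record layout and the fields compared are hypothetical): completeness means every source record arrived in the target, and integrity means the key fields survived the conversion unchanged.

```python
import hashlib

def record_fingerprint(record: dict) -> str:
    """Stable checksum over a record's fields, in sorted key order."""
    payload = "|".join(f"{k}={record[k]}" for k in sorted(record))
    return hashlib.sha256(payload.encode()).hexdigest()

def verify_migration(source: list, target: list) -> list:
    """Return a list of detected migration problems (empty = OK)."""
    issues = []
    # Completeness: no records lost or duplicated in transit.
    if len(source) != len(target):
        issues.append(f"row count mismatch: {len(source)} -> {len(target)}")
    # Integrity: every source record exists unchanged in the target.
    source_hashes = {record_fingerprint(r) for r in source}
    target_hashes = {record_fingerprint(r) for r in target}
    missing = source_hashes - target_hashes
    if missing:
        issues.append(f"{len(missing)} record(s) lost or altered")
    return issues

source = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Max"}]
target = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Max"}]
print(verify_migration(source, target))  # -> []
```

In practice the two sides would be read from the old and new databases; checksumming per record keeps the comparison cheap even for large datasets.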