Terminology
Once integration testing is complete, the next test level is system testing. This level checks whether the complete, integrated system actually fulfills its specified requirements. Here too, you might ask yourself why this step is necessary after successful component and integration testing. The main reasons are:
Reasons for system testing
- Low-level tests check technical specifications from the software manufacturer’s point of view. In contrast, system testing views the system from the customer and end-user viewpoints. System testers check whether the specified requirements have been completely and suitably implemented.
- Many functions and system attributes result from the interaction of the system’s components and can therefore only be observed and tested on a system-wide level.
Case Study: VSR-II system testing
For the sales-side stakeholders, the VSR-II system’s most important task is to make ordering a vehicle as simple as possible. The order process uses nearly all of the system’s modules: configuration in DreamCar, financing and insurance via EasyFinance and NoRisk, order transmission via JustInTime, and paperwork stored in ContractBase. The system only genuinely fulfills its intended purpose when all of these components work together correctly, and system testing verifies whether this is the case.
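A system-level test for this order workflow might look roughly like the following sketch. It is written in a pytest-style Python test purely for illustration; the module interfaces (dream_car.configure_vehicle, just_in_time.submit_order, and so on) are assumptions, not the actual VSR-II APIs.

```python
# Hypothetical end-to-end system test for the VSR-II order workflow.
# All module interfaces (dream_car, easy_finance, no_risk, just_in_time,
# contract_base) are illustrative assumptions, not the real system's API.
from vsr2 import dream_car, easy_finance, no_risk, just_in_time, contract_base


def test_complete_vehicle_order():
    # Step 1: configure a vehicle in DreamCar.
    vehicle = dream_car.configure_vehicle(model="Sedan", engine="1.8L", color="red")

    # Step 2: arrange financing and insurance via EasyFinance and NoRisk.
    financing = easy_finance.calculate_financing(vehicle, down_payment=5000)
    insurance = no_risk.create_policy(vehicle, coverage="full")

    # Step 3: transmit the order via JustInTime.
    order = just_in_time.submit_order(vehicle, financing, insurance)
    assert order.status == "ACCEPTED"

    # Step 4: verify the contract documents were stored in ContractBase.
    contract = contract_base.find_contract(order.order_id)
    assert contract is not None
    assert contract.vehicle_id == vehicle.id
```

The point of such a test is that it exercises the components in combination, end to end, rather than checking any single module in isolation.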
The test basis
The test basis consists of all documents and other information that describe the test object on a system level. These can be system and software requirements, specifications, risk analyses (if available), user manuals, and so on.
Test object and test environment
Once integration testing is finished, you are faced with a complete, ready-to-run system. System testing checks the finished system in an environment that resembles the system’s production environment as closely as possible. Instead of stubs and test drivers, all of the hardware, software, drivers, networks, third-party systems, and other components that will be part of its working environment need to be installed in the system test environment.
As well as checking user, training, and system documentation, system testing also checks configuration settings and should support system optimization by providing load/performance test results.
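As a simple illustration of how such load/performance results might be gathered against the production-like test installation, consider the following minimal sketch. The endpoint URL, request count, and latency threshold are assumptions chosen for illustration only; a real system test would use a dedicated load-testing tool and agreed performance requirements.

```python
# Minimal response-time probe against a production-like system test
# environment. URL and thresholds are illustrative assumptions.
import statistics
import time
import urllib.request

SYSTEM_TEST_URL = "http://vsr2-systest.example.internal/health"  # hypothetical
REQUESTS = 50
MAX_ACCEPTABLE_P95_SECONDS = 1.0


def measure_response_times(url: str, count: int) -> list[float]:
    durations = []
    for _ in range(count):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=5) as response:
            response.read()
        durations.append(time.perf_counter() - start)
    return durations


if __name__ == "__main__":
    times = measure_response_times(SYSTEM_TEST_URL, REQUESTS)
    p95 = statistics.quantiles(times, n=20)[-1]  # approximate 95th percentile
    print(f"median={statistics.median(times):.3f}s  p95={p95:.3f}s")
    assert p95 <= MAX_ACCEPTABLE_P95_SECONDS, "p95 latency exceeds target"
```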
System testing requires its own test environment
To save time and money, system tests are often performed in the production environment itself, rather than in a separate system-testing environment. There are various reasons why this is a bad idea:
- System testing is sure to reveal failures! These failures can have a highly negative impact on the customer’s production environment. Crashes and data loss at the customer site can be expensive and should be avoided at all costs.
- Testers have limited control over the configuration and the parameters that affect the customer’s production environment. If you test while other parts of the customer’s system are running, this can subtly alter your results and make the tests you perform extremely difficult to reproduce.
System testing effort is often underestimated
Because of the complex test environment it requires, the effort involved in system testing is often underestimated. [Bourne 97] says experience has shown that usually only half of the required test and QA work has been done by the time system testing begins.
Test objectives
As previously noted, the objective of system testing is to verify whether and how well the finished system fulfills the specified (functional and non-functional) requirements. System testing identifies defects and deficiencies that are due to erroneous, incomplete, or inconsistently implemented requirements. It should also identify undocumented or forgotten requirements.
Data quality
In systems that rely on databases or other large amounts of data, data quality is an important factor that may need to be considered as part of the system testing process. The data themselves become a “test object” and need to be checked appropriately for consistency, completeness, and timeliness.
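Such data-quality checks can often be expressed as simple queries against a copy of the system’s database. The following sketch assumes SQLite and a hypothetical schema (orders, customers, and price_list tables) purely for illustration; an equivalent set of queries would be run against whatever database the system under test actually uses.

```python
# Sketch of basic data-quality checks: completeness, consistency, timeliness.
# The database file and schema are hypothetical, for illustration only.
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect("vsr2_systest.db")  # assumed test copy of the database

# Completeness: no orders without a customer reference.
missing_customer = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE customer_id IS NULL"
).fetchone()[0]

# Consistency: every order must reference an existing customer.
orphaned_orders = conn.execute(
    """SELECT COUNT(*) FROM orders o
       LEFT JOIN customers c ON o.customer_id = c.id
       WHERE o.customer_id IS NOT NULL AND c.id IS NULL"""
).fetchone()[0]

# Timeliness: price data should have been refreshed within the last day.
cutoff = (datetime.now() - timedelta(days=1)).isoformat()
stale_prices = conn.execute(
    "SELECT COUNT(*) FROM price_list WHERE last_updated < ?", (cutoff,)
).fetchone()[0]

assert missing_customer == 0, f"{missing_customer} orders lack a customer"
assert orphaned_orders == 0, f"{orphaned_orders} orders reference unknown customers"
assert stale_prices == 0, f"{stale_prices} price entries are out of date"
```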
Side Note: System testing issues
In too many projects, the clarification and documentation of the requirements are either patchy or ignored completely. This makes it difficult for testers to know how the system is actually meant to work, and doubly difficult to reliably reveal failures.
Vague requirements
Where no requirements exist, any system behavior is permissible, because there is no basis against which to evaluate it. The user (or the internal/external customer) will of course have an idea of what to expect from “his” system, so requirements do exist, but only in the minds of some of the people participating in the project. The testers are then given the thankless task of collating all the relevant information about the system’s planned behavior. One way to deal with a situation like this is to use exploratory testing.
Missed decisions
In the process of gathering this information, the testers will find that the various participants have very different ideas and attitudes about what needs to be built. To avoid this situation, the project requirements need to be written down and then agreed upon and approved by all relevant participants.
In other words, as well as gathering requirements, system testing must also drive clarification and decision-making processes that should have been completed long ago and are now happening much too late. This kind of information-gathering takes time, is extremely costly, and is almost guaranteed to delay delivery of the product.
Some projects fail
If requirements aren’t documented, the developers won’t have clear objectives and the probability that the resulting system fulfills the implicit customer requirements is very low indeed. Under such circumstances, nobody expects a usable product to result and system testing can often only “certify” the failure of the project.
Reduce risk through early feedback
Iterative and agile projects require clearly formulated and written requirements too. Again, there is always a risk that some requirements will be incomplete, incorrectly communicated, or simply overlooked. However, in this case each iteration provides the opportunity to check fulfillment of the given requirements, thus reducing the risk of the project failing.
If requirements are not sufficiently met, you can use the next iteration to improve things. You may end up with more iterations than originally planned to ensure that a specific level of functionality is reached. This then equates to a product that is delivered late but works, instead of a project that fails completely.