Terminology
Integration testing is the next test level, following on from component testing. It assumes that the test objects handed over to this level have already been component tested and that any component-internal defects have been corrected as far as possible.
Integration
Developers, testers, and specialized integration teams then assemble groups of these components into larger units. This process is called “integration”.
Integration testing
Once the components are assembled, you have to test whether they interact correctly with each other. This process is known as “integration testing” and is designed to find faults in the interfaces and in the interaction between the integrated components.
The test basis
At this level, the test basis consists of all the documents that describe the software architecture and the design of the software system, especially interface specifications, workflow and sequence diagrams, and use case diagrams.
You might ask yourself why integration testing is necessary when all the components have already been individually tested. Our case study illustrates the kinds of problems that have to be solved:
Case Study: Integration tests for the VSR-II DreamCar module
The VSR–II DreamCar module is made up of a number of basic components.
Fig. 3-4: The structure of the VSR-II DreamCar module
One of these components is the CarConfig class, which is responsible for ensuring that a vehicle configuration (base model, special edition, additional extras, and so on) is permissible and for calculating the resulting price. The class includes the calculate_price() and check_config() methods. The class reads the required model options and price data from a database using the CarDB class.
The frontend reads the current vehicle configuration using get_config() and presents the results to the end-user for further tweaking in the graphical user interface. Changes to the current configuration are returned to the backend using update_config(). check_config() then checks that the configuration is permissible and recalculates the price appropriately.
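To make this structure more concrete, the following sketch shows what these interfaces might look like in code. The signatures, data formats, base price, and the hard-coded option data are assumptions made purely for illustration; they are not the actual VSR-II implementation.

```python
# Minimal sketch of the DreamCar backend interfaces (assumed signatures).

class CarDB:
    """Reads model options and price data from the vehicle database."""

    def get_model_options(self, base_model: str) -> list[dict]:
        # In the real system this would query a database; here we return
        # a hard-coded example row so the sketch stays self-contained.
        return [{"code": "E01", "name": "Sunroof", "price": 990.00}]


class CarConfig:
    """Validates a vehicle configuration and calculates its price."""

    def __init__(self, db: CarDB) -> None:
        self.db = db
        self.config: dict = {"base_model": "S", "extras": []}

    def get_config(self) -> dict:
        """Called by the frontend to read the current configuration."""
        return dict(self.config)

    def update_config(self, changes: dict) -> None:
        """Called by the frontend to hand changes back to the backend."""
        self.config.update(changes)
        self.check_config()

    def check_config(self) -> bool:
        """Checks that the current configuration is permissible."""
        allowed = {opt["code"] for opt in self.db.get_model_options(self.config["base_model"])}
        return all(extra in allowed for extra in self.config["extras"])

    def calculate_price(self) -> float:
        """Calculates the price of the current configuration."""
        options = {opt["code"]: opt["price"] for opt in self.db.get_model_options(self.config["base_model"])}
        base_price = 20_000.00  # assumed base price for the sketch
        return base_price + sum(options[extra] for extra in self.config["extras"])
```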
Although component testing revealed no failures in the CarConfig and CarDB classes, their interaction can still be buggy. For example, you may find that check_config() cannot process certain extras provided by the database, or that check_config() requests data that CarDB extracts from the database but that is returned to check_config() in an unsuitable format.
The interaction between the frontend and the backend can be faulty too. For example, if the frontend doesn’t correctly display a logically permissible configuration, the user sees this as a fault. Or a failure may arise because update_config() is called inappropriately: instead of handing over each change individually, the frontend sends complete reconfigurations as a single data set to update_config(), which can handle this kind of data but might perform too slowly as a result.
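A component test would exercise CarConfig against a mocked database; an integration test wires the real classes together so that exactly these interaction faults can surface. The following pytest-style sketch reuses the assumed interfaces from the sketch above (the module path in the import is hypothetical):

```python
# Integration test sketch (pytest-style): CarConfig is exercised together
# with the real CarDB instead of a mock, so interface mismatches surface.
from dreamcar_backend import CarConfig, CarDB  # hypothetical module; see the sketch above


def test_check_config_accepts_all_db_extras():
    db = CarDB()                      # real data-access component, no mock
    config = CarConfig(db)

    # Every extra the database offers must be processable by check_config();
    # an option code or format that check_config() cannot handle fails here.
    for option in db.get_model_options("S"):
        config.update_config({"extras": [option["code"]]})
        assert config.check_config(), f"check_config() rejected extra {option['code']}"


def test_price_reflects_db_price_data():
    db = CarDB()
    config = CarConfig(db)
    config.update_config({"extras": ["E01"]})

    # The calculated price must reflect the price data actually delivered
    # by CarDB, not a value hard-wired into CarConfig.
    expected = 20_000.00 + 990.00     # assumed base price plus the DB price
    assert abs(config.calculate_price() - expected) < 0.01
```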
However, with the integration of the DreamCar module, integration testing for the VSR-II project has only just begun. The components that make up the other VSR-II modules (see figure 2-1) have to be integrated too, before the modules themselves are integrated into the overall system: DreamCar has to be connected to ContractBase, which in turn is connected to the JustInTime (ordering), NoRisk (insurance), and EasyFinance (vehicle financing) subsystems. One of the final integration steps connects VSR-II to the external FactoryPPS production planning system.
Integration testing is critical
As illustrated by the example above, component testing cannot guarantee that the interfaces between the components are fault-free. This is what makes the integration testing level critical to the overall testing process. Potential interface faults must be discovered there and their causes isolated.
System integration testing
The example above also shows that interfaces to the external system environment have to be covered during the integration and integration testing process. When interfaces to external software (or hardware) systems are tested, the process is often referred to as “integration testing in the large” or “system integration testing”.
System integration testing can only be performed once system testing is complete. In this situation, the risk lies in the development team only being able to test “their half” of the interface in question. The “other half” is developed externally and can therefore change unexpectedly at any time. Even if system tests are passed, this doesn’t guarantee that external interfaces will always work as expected.
Different levels of integration
This means that there are multiple levels of integration testing that cover test objects of varying sizes (interfaces between internal components or subsystems, or between the entire system and external systems such as web services, or between hardware and software). If, for example, business processes are to be tested in the form of a cross-interface workflow made up of multiple systems, it can be extremely difficult to isolate the interface or component that causes the fault.
Integration test objects are made up of multiple components
The integration process assembles individual components to produce larger units and, ideally, every integration step will be followed by an integration test. Every module thus built can serve as the basis for further integration into even larger units. Integration test objects can therefore consist of units (or subsystems) that have been integrated iteratively.
Third-party systems and bought-in components
In practice, software systems are nowadays rarely built from scratch, but are instead the result of the extension, modification, or combination of existing systems (for example, a database, a network, new hardware, and so on). Many system components are standardized products bought on the open market (such as the DreamCar database). Component testing doesn’t usually include these kinds of existing or standardized objects, whereas integration testing has to cover these components and their interaction with other parts of the system.
The most important integration test objects are the interfaces that connect two or more components. Furthermore, integration testing can also cover configuration programs and files. Depending on the system’s architecture, access to databases or other infrastructure components can also be part of the (system) integration testing process.
The test environment
Integration testing also requires test drivers that provide the test objects with data and that collect and log the results. Because the test objects are compound units whose external interfaces are the ones already provided by their component parts, it makes sense to re-use the test drivers created for the individual component tests.
If the component testing stage was well organized, you will have access to a generic test driver for all components, or at least a set of test drivers that were designed according to a unified architecture. If this is the case, testers can adapt and use these existing test drivers for integration testing with a minimum of extra effort.
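As a rough sketch of what such reuse could look like, the driver below feeds test data to a test object and logs calls and results. The class and its hooks are an assumption rather than a standard tool, but the same driver can serve a single component during component testing and an assembled unit during integration testing.

```python
# Sketch of a generic, reusable test driver (hypothetical design): the same
# driver feeds inputs to a test object and logs the results, whether the
# test object is a single component or an already integrated unit.
import logging

from dreamcar_backend import CarConfig, CarDB  # hypothetical module; see the earlier sketch


class GenericTestDriver:
    def __init__(self, test_object, log_name: str = "integration-test"):
        self.test_object = test_object
        self.log = logging.getLogger(log_name)

    def run_case(self, method_name: str, *args, **kwargs):
        """Call one operation on the test object and log the call and its result."""
        method = getattr(self.test_object, method_name)
        self.log.info("CALL   %s args=%r kwargs=%r", method_name, args, kwargs)
        result = method(*args, **kwargs)
        self.log.info("RESULT %s -> %r", method_name, result)
        return result


# The driver that exercised CarConfig in isolation during component testing
# can now drive the integrated CarConfig + CarDB unit without modification.
driver = GenericTestDriver(CarConfig(CarDB()))
driver.run_case("update_config", {"extras": ["E01"]})
driver.run_case("calculate_price")
```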
A badly organized component testing process will perhaps provide only a few suitable drivers that have differing operational structures. The downside of this kind of situation is that the test team now has to invest significant time and effort in the creation or improvement of the test environment at a late stage in the project, thus wasting precious integration testing time.
Monitors
Because interface calls and data traffic via the test driver interfaces need to be tested, integration testing often uses “monitors” as an additional diagnostic tool. A monitor is a program that keeps a check on the movement of data between components and logs what it sees. Commercial software is available for monitoring standard data traffic such as network protocols, whereas you will have to develop custom monitors for use with project-specific component interfaces.
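For project-specific interfaces, such a monitor can be as simple as a proxy object placed between two components that logs every call and every piece of data crossing the interface. The sketch below is one possible minimal implementation, again based on the assumed CarConfig/CarDB interfaces:

```python
# Sketch of a custom interface monitor: a proxy that sits between two
# components and logs every call and the data crossing the interface.
import logging

from dreamcar_backend import CarConfig, CarDB  # hypothetical module; see the earlier sketch


class InterfaceMonitor:
    def __init__(self, target, name: str):
        self._target = target
        self._log = logging.getLogger(f"monitor.{name}")

    def __getattr__(self, attr):
        original = getattr(self._target, attr)
        if not callable(original):
            return original

        def monitored(*args, **kwargs):
            self._log.info("-> %s args=%r kwargs=%r", attr, args, kwargs)
            result = original(*args, **kwargs)
            self._log.info("<- %s returned %r", attr, result)
            return result

        return monitored


# CarConfig talks to the database through the monitor, so all data traffic
# between CarConfig and CarDB ends up in the monitor's log.
monitored_db = InterfaceMonitor(CarDB(), name="CarDB")
config = CarConfig(monitored_db)
config.check_config()
```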
Test objectives
The objective of integration testing is clearly to find interface faults. Issues can already occur during the first attempt at integrating two components if their interface formats don’t match, if required files are missing, or if the developers have divided the system differently than specified. Such faults will usually be detected early on by failing compile or build runs.
Harder to find are faults that occur at runtime during data exchange (i.e., communication) between components; detecting such faults requires dynamic testing. The following basic types of communication faults can be distinguished (see the sketch after this list):
- A component transmits no data, syntactically incorrect data, or wrongly coded data that the receiving component cannot process, thus causing an exception or a crash. The root cause is a functional fault in a component, an incompatible interface format, or a protocol error.
- The communication works, but the components involved interpret the transferred data differently, due to a functional fault in a component or a misinterpreted specification.
- Data is transferred correctly, but at the wrong moment (timing or timeout issues) or at intervals that are too short (causing throughput, capacity, or load issues).
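Faults of the third kind in particular only become visible in dynamic tests that check the timing of an exchange as well as the data itself. The sketch below shows one such check; the confirm_order() call, the fixture providing ordering_interface, and the one-second budget are assumptions for illustration only.

```python
# Sketch of a dynamic integration test that checks timing as well as data:
# the receiving component must answer correctly within an agreed time budget.
import time

MAX_RESPONSE_SECONDS = 1.0   # assumed, contractually agreed response time


def test_order_is_confirmed_within_time_budget(ordering_interface):
    # ordering_interface is assumed to be provided by a test fixture and to
    # wrap the integrated ordering path (e.g., VSR-II talking to the host).
    start = time.monotonic()
    confirmation = ordering_interface.confirm_order({"model": "S", "extras": ["E01"]})
    elapsed = time.monotonic() - start

    assert confirmation["status"] == "accepted"        # correct data ...
    assert elapsed <= MAX_RESPONSE_SECONDS, (          # ... at the right time
        f"confirmation took {elapsed:.2f}s, budget is {MAX_RESPONSE_SECONDS}s"
    )
```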
Case Study: Integration errors in VSR-II
During VSR-II integration testing the following failures of the types described above could occur:
- Selected extras in the DreamCar GUI are not handed over to check_config(), thus producing incorrect price and order data.
- In the DreamCar module, vehicle colors are represented by codes (for example, 442 means metallic blue). However, the production planning system interprets some codes differently (here, 442 means pearl effect red). Such discrepancies mean that an order that is correct from VSR-II’s point of view could lead to the wrong product being built and delivered.
- The host system confirms every transferred order. In some cases, checking deliverability takes so long that VSR-II assumes that there is a data transfer error and cancels the connection. The result is a customer who cannot order a vehicle that he has spent a lot of time configuring.
Because the failures only occur during interaction between software units, none of these types of faults can be discovered during component testing.
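A semantic discrepancy like the 442 color code above can be caught by an integration test that compares the two coding tables directly. The tables and names in the following sketch are invented for illustration; in a real project they would be extracted from DreamCar and from the FactoryPPS interface description.

```python
# Sketch: cross-check the color coding used by DreamCar against the coding
# expected by the production planning system (both tables assumed here).

DREAMCAR_COLOR_CODES = {442: "metallic blue", 443: "pearl effect red"}    # assumed
FACTORYPPS_COLOR_CODES = {442: "pearl effect red", 443: "metallic blue"}  # assumed


def test_color_codes_mean_the_same_on_both_sides():
    mismatches = {
        code: (name, FACTORYPPS_COLOR_CODES.get(code))
        for code, name in DREAMCAR_COLOR_CODES.items()
        if FACTORYPPS_COLOR_CODES.get(code) != name
    }
    # With the assumed tables above this assertion fails, which is exactly
    # the point: the semantic mismatch surfaces during integration testing
    # rather than after a wrongly built vehicle has been delivered.
    assert not mismatches, f"inconsistent color codes: {mismatches}"
```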
Alongside functional tests, integration testing can involve non-functional tests too. This is useful in cases where non-functional attributes of a component’s interface are classed as system-relevant or risky (such as performance, behavior under load, or volume-related behavior).
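One way to approach this at the integration level is to push a burst of typical interface traffic through the integrated unit and check that an agreed time budget is kept. The burst size, the budget, and the reuse of the assumed CarConfig/CarDB sketch below are illustrative assumptions only.

```python
# Sketch of a simple load-oriented integration check: send a burst of
# reconfigurations through the integrated CarConfig + CarDB unit and
# verify that the overall processing time stays within an assumed budget.
import time

from dreamcar_backend import CarConfig, CarDB  # hypothetical module; see the earlier sketch


def test_backend_handles_burst_of_reconfigurations():
    config = CarConfig(CarDB())
    budget_seconds = 2.0                       # assumed performance budget

    start = time.monotonic()
    for _ in range(1_000):                     # assumed burst size
        config.update_config({"extras": ["E01"]})
        config.calculate_price()
    elapsed = time.monotonic() - start

    assert elapsed <= budget_seconds, f"burst took {elapsed:.2f}s"
```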
Is component testing necessary?
Is it possible to leave out component testing altogether and start the testing process directly with integration testing? Yes, this is possible and, unfortunately, common practice. However, this approach has potentially serious drawbacks:
- Most of the failures revealed by this kind of testing will be caused by functional faults inside individual components. In other words, what is actually a component test is executed in an unsuitable environment that complicates access to individual components.
- Because there is no easy access to each individual component, some failures will not be provoked and are thus impossible to find.
- If a failure or crash occurs, it is difficult or impossible to localize the component which caused the failure.
Doing without component testing saves effort only at the price of poor rates of fault discovery and increased diagnostic efforts. Combining component testing with integration testing is far more efficient.
Integration strategies
In which sequence should the components be integrated to maximize testing efficiency? Testing efficiency is measured using the relationship between testing costs (staff, tools, and so on) and usefulness (the number and seriousness of discovered failures) for a particular test level. The test manager is responsible for choosing and implementing the optimum testing and integration strategy for the project at hand.
Components are ready at different times
Individual components are finished at times that can be weeks or months apart. Project managers and test managers won’t want to wait until all the relevant components are ready to be integrated in a single run.
A simple ad hoc strategy for dealing with this situation is to integrate the components in the (random) sequence in which they are finished. This involves checking that a freshly arrived component is due for integration with a component or subsystem that already exists. If this check is successful, the new component can be integrated and integration tested.
Case Study: The integration strategy for the VSR-II project
Work on the central ContractBase module in VSR-II turns out to be more complex than originally thought, and completion of the module is delayed by several weeks. In order to avoid wasting time, the project manager decides to begin integration testing for the DreamCar and NoRisk modules.
These two modules have no mutual interface, but do swap data via ContractBase. To calculate the appropriate insurance premium, NoRisk requires the vehicle type and other parameters.
A stub must be programmed for use as a temporary placeholder for ContractBase. This stub receives simple vehicle configurations from DreamCar, determines the vehicle’s type code and passes it to NoRisk. The stub also enables the entry of various other insurance-relevant customer details. NoRisk then calculates the premium and displays it in a window for checking and logs it as a test result. The stub thus temporarily fills the gap left by the incomplete ContractBase module.
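A sketch of what such a stub could look like is shown below. All interfaces, the type-code mapping, and the calculate_premium() call on NoRisk are assumptions; a real ContractBase is far richer, which is exactly why only a thin placeholder is built.

```python
# Sketch of a temporary stub standing in for the unfinished ContractBase
# module: it accepts a configuration from DreamCar, derives a type code,
# and forwards it together with the customer details to NoRisk.
import logging


class ContractBaseStub:
    TYPE_CODES = {"S": "T100", "XL": "T200"}      # assumed mapping

    def __init__(self, norisk):
        self.norisk = norisk                      # the real NoRisk component
        self.log = logging.getLogger("ContractBaseStub")

    def submit_configuration(self, config: dict, customer: dict) -> float:
        """Receive a DreamCar configuration, pass the derived type code and the
        insurance-relevant customer details to NoRisk, and log the premium."""
        type_code = self.TYPE_CODES[config["base_model"]]
        premium = self.norisk.calculate_premium(type_code, customer)  # assumed NoRisk API
        self.log.info("premium for %s / %s: %.2f", type_code, customer["name"], premium)
        return premium
```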
This example emphasizes that, although it might save time, starting integration testing too early increases the effort needed to create stubs such as this one.
Test management has to choose a testing strategy that optimizes the relationship between time savings and the increased effort involved in maintaining the test environment.
Constraints that influence integration
Which strategy is best (i.e., is most economical and saves most time) depends on constraints that have to be analyzed for every project:
- The system architecture determines the number and type of components the system consists of and the dependencies between them.
- The project schedule defines when individual components are due for completion, integration, and testing.
- The overall testing plan defines how thoroughly which aspects of the system are to be tested and at which testing level.
Agree on an integration strategy
The test manager has to look at these constraints and use them to develop an integration strategy that suits the current project. Because the delivery time of the individual components is key, it is always a good idea to consult with the project manager at the project planning stage to ensure that components are delivered in a sequence and at times that support testing.
Basic strategies
Test managers can align to one of the following basic integration strategies as a guide to planning:
- Top-down integration: Testing begins with the main component that calls other components but, apart from the operating system, doesn’t get called itself. Subsidiary components are replaced by stubs. Components on lower system layers are then gradually integrated, while the (already tested) layer above serves as the test driver.
  • The upside: Because components that have already been tested make up the bulk of the run-time environment, you will need only rudimentary test drivers, or no test drivers at all.
  • The downside: Subsidiary components that have not yet been integrated have to be replaced by stubs, which can involve a lot of extra work.
- Bottom-up integration: Testing begins with the basic components that don’t call any others (except for operating system functions). Larger units are built gradually from tested components and are then integration tested.
  • The upside: No stubs are required.
  • The downside: Higher-level components have to be simulated using test drivers.
- Ad hoc integration: Components are integrated in the (random) order of their completion (see above).
  • The upside: Time savings, as every component is integrated into the environment as early as possible.
  • The downside: Stubs and test drivers are required.
- Backbone integration: A skeleton framework (“backbone”) is created, and individual components are successively integrated into it. Continuous integration (CI) is a contemporary version of this strategy in which the backbone is made up of existing components to which newly tested ones are added.
  • The upside: Components can be integrated in any sequence.
  • The downside: A backbone or CI environment has to be created and maintained, which can involve a lot of extra effort.
Pure top-down and bottom-up integration can only be applied to systems that have a strict hierarchical structure, which is rare in the real world. In practice, most projects rely on a custom mixture of the strategies detailed above.
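To illustrate how the integration sequence falls out of the component dependencies, the sketch below topologically sorts a small, assumed dependency graph ("caller depends on callee"): the result is a bottom-up order, and reversing it gives the corresponding top-down order.

```python
# Sketch: derive integration orders from a component dependency graph.
# The graph ("caller" -> components it calls) is an assumption, not the
# real VSR-II architecture.
from graphlib import TopologicalSorter

CALLS = {
    "GUI":       ["CarConfig"],
    "CarConfig": ["CarDB"],
    "CarDB":     [],
}

# TopologicalSorter yields each component only after everything it depends
# on is available, i.e., the basic components first: a bottom-up sequence.
bottom_up = list(TopologicalSorter(CALLS).static_order())
top_down = list(reversed(bottom_up))

print("bottom-up:", bottom_up)   # ['CarDB', 'CarConfig', 'GUI']
print("top-down: ", top_down)    # ['GUI', 'CarConfig', 'CarDB']
```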
Avoid the Big Bang!
It is essential to avoid any non-incremental integration (also referred to as the “Big Bang”). This approach involves no real strategy: the team simply waits until all the components are ready and then throws them together all at once. At its worst, upstream component testing is skipped too. The drawbacks are obvious:
- The time spent waiting for the “Big Bang” is carelessly wasted testing time. Testing always involves time pressure, so don’t waste a single testing day.
- All failures occur at once and it is difficult (or simply impossible) to get the system to run at all. Defect localization and correction is complicated and time-consuming.