Terminology

Component testing involves systematically checking the lowest-level components in a system’s architecture. Depending on the programming language used to create them, these components have various names, such as “units”, “modules”, or (in the case of object-oriented programming) “classes”. The corresponding tests are therefore called “unit tests”, “module tests”, or “class tests”.

Components and component testing

Regardless of which programming language is used, the resulting software building blocks are the “components”, and the corresponding tests are called “component tests”.

The test basis

The test basis consists of the component-specific requirements and the component’s design (i.e., its specification). In order to design white-box tests or to evaluate code coverage, you must analyze the component’s source code and use it as an additional test basis. However, to judge whether a component reacts correctly to a test case, you have to refer to the design or requirements documentation.

Test objects

As detailed above, modules, units, or classes are typical test objects. However, things like shell scripts, database scripts, data conversion and migration procedures, database content, and configuration files can all be test objects too.

A component test verifies a component’s internal functionality

A component test typically tests only a single component in isolation from the rest of the system. This isolation serves to exclude external influences during testing: If a test reveals a failure, it is then obviously attributable to the component you are testing. It also simplifies design and automation of the test cases, due to their narrowly focused scope.

A component can itself consist of multiple building blocks. The important aspect is that the component test has to check only the internal functionality of the component in question, not its interaction with components external to it. The latter is the subject of integration testing. Component test objects generally arrive “fresh from the programmer’s hard disk”, making this level of testing very closely allied to development work. Component testers therefore require adequate programming skills to do their job properly.

The following example illustrates the point:

Case Study: Testing the calculate_price class

According to its specification, the VSR-II DreamCar module calculates a vehicle’s price as follows:

We start with the list price (baseprice) minus the dealer discount (discount). Special edition markup (specialprice) and the price of any additional extras (extraprice) are then added.

If three or more extras not included with the special edition are added (extras), these extras receive a 10% discount. For five extras or more, the discount increases to 15%.

The dealer discount is subtracted from the list price, while the accessory discount is only applied to the extras. The two discounts cannot be applied together. The resulting price is calculated using the following C++ method:

double calculate_price (double baseprice, double specialprice,
                        double extraprice, int extras,
                        double discount)
{
    double addon_discount;
    double result;

    if (extras >= 3)
        addon_discount = 10;
    else if (extras >= 5)
        addon_discount = 15;
    else
        addon_discount = 0;

    if (discount > addon_discount)
        addon_discount = discount;

    result = baseprice/100.0 * (100 - discount)
             + specialprice
             + extraprice/100.0 * (100 - addon_discount);

    return result;
}

The test environment

In order to test this calculation, the tester uses the corresponding class interface by calling the calculate_price() method and providing it with appropriate test data. The tester then records the component’s reaction to this call—i.e., the value returned by the method call is read and logged.

Incidentally, this piece of code is buggy: the branch that assigns the 15% discount for five or more extras can never be reached. This coding error serves as an example for the white-box analysis described later.

To make such a call and record the result, you need a “test driver”. A test driver is a separate program that makes the required interface call and logs the test object’s reaction.

For the calculate_price() test object, a simple test driver could look like this:

#include <cmath>    // provides std::abs for the floating-point comparison

bool test_calculate_price() {
    double price;
    bool   test_ok = true;

    // testcase 01
    price   = calculate_price(10000.00, 2000.00, 1000.00, 3, 0);
    test_ok = test_ok && (std::abs(price - 12900.00) < 0.01);

    // testcase 02
    price   = calculate_price(25500.00, 3450.00, 6000.00, 6, 0);
    test_ok = test_ok && (std::abs(price - 34050.00) < 0.01);

    // testcase ...

    // test result
    return test_ok;
}

The test driver in our example is very simple and could, for example, be extended to log the test data and the results with a timestamp, or to input the test data from an external data table.
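As an illustration, here is a minimal sketch of such an extension (the struct and the names used here are illustrative, not part of the original example): the test data is moved into a table that the driver iterates over.

struct PriceTestCase {
    double baseprice, specialprice, extraprice;
    int    extras;
    double discount;
    double expected;   // expected total price according to the specification
};

// the same two test cases as above, now held as data
const PriceTestCase testcases[] = {
    { 10000.00, 2000.00, 1000.00, 3, 0, 12900.00 },   // testcase 01
    { 25500.00, 3450.00, 6000.00, 6, 0, 34050.00 },   // testcase 02
};

bool test_calculate_price_table_driven() {
    bool test_ok = true;
    for (const PriceTestCase& tc : testcases) {
        double price = calculate_price(tc.baseprice, tc.specialprice,
                                       tc.extraprice, tc.extras, tc.discount);
        test_ok = test_ok && (std::abs(price - tc.expected) < 0.01);
    }
    return test_ok;
}

Reading the table from an external file, or adding timestamped logging, then becomes a purely local change to the driver loop.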

Developer tests

To write a test driver you need programming skills. You also have to study and understand the test object’s code (or at least, that of its interface) in order to program a test driver that correctly calls the test object. In other words, you have to master the programming language involved and you need access to appropriate programming tools. This is why component testing is often performed by the component’s developers themselves. Such a test is then often referred to as a “developer test”, even though “component testing” is what is actually meant.

Testing vs. debugging

Component tests are often confused with debugging. However, debugging involves localizing and eliminating defects, while testing involves systematically checking the system for failures.

Our Tip: Use component test frameworks

  • Using component test frameworks (see [URL: xUnit]) significantly reduces the effort involved in programming test drivers, and creates a consistent component test architecture throughout the project. Using standardized test drivers also makes it easier for other members of the team who aren’t familiar with the individual components or the test environment to perform component tests (see the sketch below). These kinds of test drivers can be controlled via a command-line interface and provide mechanisms for handling test data, and for logging and evaluating test results. Because all test data and logs are identically structured, it is possible to evaluate the results across multiple (or all) tested components.
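As a minimal sketch, using GoogleTest as one example of an xUnit-style framework for C++ (the test names are illustrative), the two test cases from the hand-written driver above become:

#include <gtest/gtest.h>

// Each TEST is registered, executed, logged, and evaluated by the framework;
// no hand-written driver loop or result bookkeeping is needed.
TEST(CalculatePriceTest, TenPercentDiscountForThreeExtras) {
    EXPECT_NEAR(calculate_price(10000.00, 2000.00, 1000.00, 3, 0),
                12900.00, 0.01);
}

TEST(CalculatePriceTest, FifteenPercentDiscountForSixExtras) {
    EXPECT_NEAR(calculate_price(25500.00, 3450.00, 6000.00, 6, 0),
                34050.00, 0.01);
}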

Component test objectives

The component testing level is characterized not only by the type of test objects and the test environment, but also by very specific testing objectives.

Testing functionality

The most important task of a component test is checking that the test object fully and correctly implements the functionality defined in its specifications (such tests are also known as “function tests” or “functional tests”). In this case, functionality equates to the test object’s input/output behavior. In order to check the completeness and correctness of the implementation, the component is subjected to a series of test cases, with each covering a specific combination of input and output data.

Case Study: Testing VSR-II’s price calculations

This kind of testing of input/output data combinations is illustrated nicely by the test cases in the example shown above. Each test case inputs a specific price combined with a specific number of extras and then checks whether the test object calculates the correct total price.

For example, test case #2 checks the “discount for five or more extras”. When test case #2 is executed, the test object outputs an incorrect total price. Test case #2 produces a failure, indicating that the test object does not fulfill its specified requirements for this input data combination.

Typical failures revealed by component testing are faulty calculations or missing (or badly chosen) program paths (for example, overlooked or wrongly interpreted special cases).

Testing for robustness

At run time, a software component has to interact and swap data with multiple neighboring components, and it cannot be guaranteed that the component won’t be accessed and used wrongly (i.e., contrary to its specification). In such cases, the wrongly addressed component should not simply stop working and crash the system, but should instead react “reasonably” and robustly. Testing for robustness is therefore another important aspect of component testing. The process is very similar to that of an ordinary functional test, but serves the component under test with invalid input data instead of valid data. Such test cases are also referred to as “negative tests” and assume that the component will produce suitable exception handling as output. If adequate exception handling is not built in, the component may produce runtime errors, such as division by zero or null pointer access, that cause the system to crash.

Case Study: Negative tests

For the price calculation example we used previously, a negative test would involve testing with negative input values or with values of the wrong data type (for example, a string instead of a number). Note that in a compiled, strongly typed language such as C++, a type violation like test case 30 below would already be rejected by the compiler; it is shown here for illustration only:

// testcase 20
price   = calculate_price(-1000.00, 0.00, 0.00, 0, 0);
test_ok = test_ok && (ERR_CODE == INVALID_PRICE);

...

// testcase 30
price   = calculate_price("abc", 0.00, 0.00, 0, 0);
test_ok = test_ok && (ERR_CODE == INVALID_ARGUMENT);

Various interesting things come to light:

  • Because the number of possible “bad” input values is virtually limitless, it is much easier to design “negative tests” than it is to design “positive tests”.
  • The test driver has to be extended in order to evaluate the exception handling produced by the test object.
  • Exception handling within the test object (evaluating ERR_CODE in our example) requires additional functionality (see the sketch after this list). In practice, you will often find that half of the source code (or sometimes more) is designed to deal with exceptions. Robustness comes at a price.
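The following sketch indicates what this additional functionality might look like. The error codes and the global ERR_CODE variable are assumptions implied by the negative tests above, not part of the original example:

enum ErrorCode { OK, INVALID_PRICE, INVALID_ARGUMENT };
ErrorCode ERR_CODE = OK;   // set here, evaluated by the test driver

double calculate_price_robust (double baseprice, double specialprice,
                               double extraprice, int extras,
                               double discount)
{
    ERR_CODE = OK;
    // negative prices are invalid inputs (cf. testcase 20)
    if (baseprice < 0.0 || specialprice < 0.0 || extraprice < 0.0) {
        ERR_CODE = INVALID_PRICE;
        return 0.0;
    }
    // a negative number of extras or a negative discount is equally invalid
    if (extras < 0 || discount < 0.0) {
        ERR_CODE = INVALID_ARGUMENT;
        return 0.0;
    }
    // valid input: delegate to the original calculation
    return calculate_price(baseprice, specialprice, extraprice,
                           extras, discount);
}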

Alongside functionality and robustness, component testing can also be used to check other attributes of a component that influence its quality and that can only be tested (if at all) with considerable additional effort at higher test levels. Examples are the non-functional attributes “efficiency” and “maintainability”.

Testing for efficiency

The efficiency attribute indicates how economically a component interacts with the available computing resources. This includes aspects such as memory use, processor use, or the time required to execute functions or algorithms. Unlike most other test objectives, the efficiency of a test object can be evaluated precisely using suitable test criteria, such as kilobytes of memory or response times measured in milliseconds. Efficiency testing is rarely performed for all the components in a system. It is usually restricted to components that have certain efficiency requirements defined in the requirements catalog or the component’s specification. This is the case, for example, when limited hardware resources are available in an embedded system, or when a real-time system has to guarantee predefined response-time limits.
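As a minimal sketch (the call count and the 10-millisecond budget are purely illustrative assumptions), such a response-time requirement for calculate_price() could be checked like this:

#include <chrono>
#include <iostream>

bool test_calculate_price_efficiency() {
    using namespace std::chrono;
    const int calls = 100000;
    double checksum = 0.0;   // used so the calls cannot be optimized away

    auto start = steady_clock::now();
    for (int i = 0; i < calls; ++i)
        checksum += calculate_price(10000.00, 2000.00, 1000.00, 3, 0);
    auto elapsed = steady_clock::now() - start;

    std::cout << "checksum " << checksum << ", "
              << duration_cast<microseconds>(elapsed).count() << " us\n";

    // illustrative budget: 100,000 calls must complete within 10 ms
    return elapsed <= milliseconds(10);
}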

Testing for maintainability

Maintainability incorporates all of the attributes that influence how easy (or difficult) it is to enhance or extend a program. The critical factor here is the amount of effort that is required for a developer (or team) to get a grasp of the existing program and its context. This applies equally to a developer who needs to modify a system they programmed years ago and to someone who is taking over code from a colleague.

The main aspects of maintainability that need to be tested are code structure, modularity, code commenting, comprehensibility and up-to-dateness of the documentation, and so on.

Case Study: Code that is difficult to maintain

The sample calculate_price() code contains a number of maintainability issues. For example, there are no code comments at all, and numerical constants are hard-coded instead of being declared as named constants. If such a constant needs to be modified, it isn’t clear whether (and where else in the system) it needs to be changed, forcing the developer to invest considerable effort in figuring this out.
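A sketch of how the start of the method might look after such a cleanup (the constant names are illustrative; this version also checks the larger threshold first, avoiding the unreachable branch mentioned earlier):

// discount rules from the specification, named instead of hard-coded
const int    EXTRAS_FOR_SMALL_DISCOUNT = 3;
const int    EXTRAS_FOR_LARGE_DISCOUNT = 5;
const double SMALL_ADDON_DISCOUNT      = 10.0;   // percent
const double LARGE_ADDON_DISCOUNT      = 15.0;   // percent

...

if (extras >= EXTRAS_FOR_LARGE_DISCOUNT)
    addon_discount = LARGE_ADDON_DISCOUNT;
else if (extras >= EXTRAS_FOR_SMALL_DISCOUNT)
    addon_discount = SMALL_ADDON_DISCOUNT;
else
    addon_discount = 0.0;

If a discount rate or threshold changes, only a single declaration needs to be edited.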

Attributes like maintainability cannot, of course, be checked using dynamic tests. Instead, you need to analyze the system’s specifications and its codebase using static tests and review sessions. However, because you are checking attributes of individual components, this kind of analysis has to be carried out within the context of component testing.

Testing strategies

As already mentioned, component testing is highly development-oriented. The tester usually has access to the source code, which supports the use of white-box testing techniques during component testing. Here, a tester can design test cases using existing knowledge of a component’s internal structure, methods, and variables.

White-box tests

The availability of the source code is also an advantage during test execution, as you can use appropriate debugging tools to observe the behavior of variables during testing and see whether the component functions properly or not. A debugger also enables you to manipulate the internal state of a component, so you can deliberately initiate exceptions when you are testing for robustness.

Case Study: Code as test basis

The calculate_price() code includes the following test-worthy statement:

if (discount > addon_discount)
    addon_discount = discount;

Additional test cases that fulfill the condition (discount > addon_discount) are simple to derive from the code. But the price calculation specification contains no relevant information, and corresponding functionality is not part of the requirements. A code review can reveal a deficiency like this, enabling you to check whether the code is correct and the specification needs to be changed, or whether the code needs to be modified to fit the specification.
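For example (a sketch extending the driver above; the expected value follows the implemented formula, since the specification does not define this behavior), a test case with a dealer discount of 20% exercises the condition:

// white-box test case: the dealer discount (20) exceeds the accessory
// discount (10), so the assignment inside the if statement is executed
price   = calculate_price(10000.00, 2000.00, 1000.00, 3, 20);
// 10000 * 0.80 + 2000 + 1000 * 0.80 = 10800
test_ok = test_ok && (std::abs(price - 10800.00) < 0.01);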

However, in many real-world situations, component tests are “only” performed as black-box tests—in other words, test cases are not based on the component’s inner structure. Software systems often consist of hundreds or thousands of individual building blocks, so analyzing code is only really practical for selected components.

During integration, individual components are increasingly combined into larger units. These integrated units may already be too large to inspect their code thoroughly. Whether component testing is done on the individual components or on larger units (made up of multiple components) is an important decision that has to be made as part of the integration and test planning process.

Test-first

“Test-first” is the state-of-the-art approach to component testing (and, increasingly, to higher testing levels too). The idea is to design and automate your test cases first, and to program the code that implements the component as a second step. This approach is strongly iterative: you test your code with the test cases you have already designed, then extend and improve the product code in small steps, repeating until the code fulfills your tests. This process is referred to as “test-first programming” or “test-driven development” (often abbreviated to TDD—see also [URL: TDD], [Linz 14]). If you derive your test cases systematically using well-founded test design techniques, this approach produces even more benefits: negative tests, too, are drafted before you begin programming, and the team is forced to clarify the intended product behavior for these cases.
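A minimal sketch of one such iteration, reusing the xUnit-style syntax from earlier (the test name is illustrative): the test for the 15% discount rule is written and automated first; run against the implementation shown earlier it fails, and the code is then corrected in small steps until it passes.

// Step 1: the test exists before the (correct) implementation does - it
// fails against the buggy version of calculate_price() shown earlier.
TEST(CalculatePriceTest, AppliesFifteenPercentForFiveOrMoreExtras) {
    EXPECT_NEAR(calculate_price(25500.00, 3450.00, 6000.00, 6, 0),
                34050.00, 0.01);
}
// Step 2: change the implementation (check the >= 5 threshold before the
// >= 3 threshold) until the test passes, then refactor with all tests green.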

