Use-Case Testing

Use cases and business cases are often used to determine and document system requirements. These are generally illustrated using use case diagrams. Such diagrams describe typical use/system interactions and are used to specify requirements on a fairly abstract level. Figure 5-8 shows a use case diagram for part of the VSR-II vehicle selection process.

Example

Fig. 5-8 Use-Case diagram for vehicle selection in VSR-II

The individual use cases in this example are “configure vehicle”, “select vehicle type”, “select special edition”, and “select optional extras”. Relationships between them are labeled with the “extend” and “include” tags. “Include” relationships always occur, whereas “extend” relationships only occur under specific circumstances at “extension points”. In other words, there are either alternatives to an “extend” relationship or the relationship simply doesn’t come into effect.

The diagram represents the following situation: To configure a vehicle, a customer has to select a vehicle type. Once this has happened, there are three alternative ways to proceed. The customer can select a special edition, optional extras, or neither of these. The VSR-II system is involved in all three actions, which require their own detailed use case diagrams.

The view from outside

Use case diagrams usually depict the external view of a system, and serve to clarify the users’ view of the system and its relationships to other connected systems. Lines and simple drawings such as the stick figures in the diagram above indicate relationships with external entities, such as people or other systems. Use case diagrams can include a wide range of other symbols and notational elements.

Every use case defines a specific behavior that an object can execute in collaboration with one or more other entities. Use cases are described using interactions and activities that can be augmented with pre- and postconditions. Natural language can also be used to extend these descriptions in the form of comments or additions to individual scenarios and their alternatives. Interactions between external entities and the object can lead to a change in the object’s state, and the interactions themselves can be illustrated in detail using workflows, activity diagrams, or business process models.

Pre- and postconditions

Every use case is subject to specific pre- and postconditions that need to be met for the use case to be successfully executed. For example, one of the preconditions for a vehicle configuration is that the customer is logged on to the system. Postconditions come into play once the use case has been run—for example, once a vehicle has been successfully selected, the configuration can be ordered online. The sequence of use cases within a diagram (i.e., the “path” taken) also depends on pre- and postconditions.
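
As an illustration of how these conditions can be made explicit in an automated test, the following Python sketch uses a minimal in-memory stub in place of the real VSR-II system; the VsrStub class and its methods are assumptions made purely for this example.

    # Minimal sketch of a use-case test with explicit pre- and postconditions.
    # VsrStub and its methods are hypothetical stand-ins for the real VSR-II interface.

    class VsrStub:
        def __init__(self):
            self.logged_in = False
            self.vehicle_type = None

        def log_in(self, user, password):
            self.logged_in = True            # stub: accepts any credentials

        def select_vehicle_type(self, vehicle_type):
            assert self.logged_in            # the system itself requires a logged-on customer
            self.vehicle_type = vehicle_type

        def configuration_is_orderable(self):
            return self.logged_in and self.vehicle_type is not None


    def test_configure_vehicle_basic_flow():
        vsr = VsrStub()

        # Precondition: the customer is logged on to the system.
        vsr.log_in("customer1", "secret")

        # Use case "configure vehicle": select a vehicle type.
        vsr.select_vehicle_type("compact")

        # Postcondition: the configuration can now be ordered online.
        assert vsr.configuration_is_orderable()


    if __name__ == "__main__":
        test_configure_vehicle_basic_flow()
        print("basic flow passed")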

Suitable for system and acceptance testing

Use cases and use case diagrams serve as the test basis when designing use-case-based tests. Because this type of test models an external view of the system, it is highly suitable for system and acceptance testing. If a diagram models the interaction and dependencies between individual system components, it can be used to derive integration test cases too.

Testing “normal” system usage

Use case diagrams illustrate the “usual”, or most probable sequence of events and their alternatives, so the tests derived from them are used to check typical system usage scenarios. If a system is to be accepted, it is important that it works error-free under “normal” conditions. This makes use-case-based tests extremely important for the customer, and therefore for developers and testers too.

A use case often comprises multiple variants of its basic behavior. In our example, one variant is the selection of a special edition, which in turn makes it impossible for the customer to select any other optional extras. More detailed test cases can be used to model special cases and error handling as well.

Test Cases

Every use case relates to a specific task and a specific expected result. Events can occur that lead to further activities or alternative actions, and postconditions apply once execution is complete. To design a test case, you need to know:

  • The initial situation and required preconditions
  • Any relevant constraints
  • The expected results
  • The required postconditions

However, specific input values and results for individual test cases cannot be directly derived from use cases. Each test case has to be fleshed out with appropriate data. All alternative scenarios shown in the use case diagram (i.e., the “extend” relationships) have to be covered by individual test cases too. Test cases designed on the basis of use case scenarios can be combined with other specification-based testing techniques.
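
One way to capture this information before fleshing it out with concrete data is a simple test case template. The following Python sketch shows such a template, filled in for the “select special edition” variant; the field names and values are illustrative assumptions, not taken from the VSR-II specification.

    from dataclasses import dataclass

    # Hypothetical template for a use-case-based test case; the field names
    # and the concrete data are illustrative only.

    @dataclass
    class UseCaseTestCase:
        use_case: str
        preconditions: list     # initial situation and required preconditions
        constraints: list       # any relevant constraints
        input_data: dict        # concrete data the use case itself does not provide
        expected_results: list
        postconditions: list

    special_edition_test = UseCaseTestCase(
        use_case="select special edition",
        preconditions=["customer is logged on", "vehicle type has been selected"],
        constraints=["a special edition excludes further optional extras"],
        input_data={"vehicle_type": "compact", "edition": "sport"},
        expected_results=["special edition is added to the configuration"],
        postconditions=["configuration can be ordered online"],
    )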

Defining Exit Criteria

One possible exit criterion is that each use case (or sequence of use cases) in the diagram is covered by at least one test case. Because the alternative paths and/or extensions are use cases too, this criterion demands that each alternative/extension is executed.

The degree of coverage can be measured by dividing the number of use case variants you actually test by the total number of use case variants. Such a degree of coverage is usually expressed as a percentage.
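
Expressed as a formula: coverage = (number of tested use case variants / total number of use case variants) × 100 %. The small helper below, written in Python for illustration, makes this explicit; the example figures are made up.

    def use_case_coverage(tested_variants: int, total_variants: int) -> float:
        """Return use case coverage as a percentage."""
        return 100.0 * tested_variants / total_variants

    # Example: 3 of 4 use case variants are covered by at least one test case.
    print(use_case_coverage(3, 4))   # 75.0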

Benefits and Limitations

Use-case-based tests are well suited to testing typical user/system interactions, which makes them ideal for acceptance and system testing. “Foreseeable” exceptions and special cases can be illustrated in the use-case diagram and can be covered by additional test cases. However, there is no simple, methodical way to derive further test cases that cover situations that are beyond the scope of the diagram. For situations like this, you need to use other techniques, such as boundary value analysis.

Side Note: Other techniques

This side note offers brief explanations of some other commonly used techniques, and should help you to decide whether they are appropriate for your particular situation.

Syntax test

A syntax test derives test cases based on formally specified input syntax. The corresponding syntactical rules are used to derive test cases that test both compliance with and violation of these syntax rules.
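
As a sketch, assume the input syntax for an order code is formally specified as three uppercase letters followed by four digits (an assumed rule, not one taken from VSR-II). Test cases can then be derived that both comply with and violate this rule:

    import re

    # Assumed syntax rule (for illustration only): an order code consists of
    # three uppercase letters followed by four digits, e.g., "ABC1234".
    ORDER_CODE = re.compile(r"[A-Z]{3}\d{4}")

    def is_valid_order_code(code: str) -> bool:
        return ORDER_CODE.fullmatch(code) is not None

    # Test cases derived from the syntax rule:
    compliant = ["ABC1234", "XYZ0001"]                   # must be accepted
    violating = ["AB1234", "ABCD1234", "abc1234", ""]    # must be rejected

    assert all(is_valid_order_code(c) for c in compliant)
    assert not any(is_valid_order_code(c) for c in violating)
    print("syntax test cases passed")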

Random test

Random testing uses randomly selected representatives from the complete set of possible input values. If the values show a statistical distribution (for example, a normal distribution), this should be used to select the representatives. This way, the test cases will be as realistic as possible and will provide meaningful predictions regarding the system’s reliability.
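
A minimal sketch of this idea follows; the test object, the distribution parameters, and the values are purely illustrative assumptions:

    import random

    def price_with_discount(price: float) -> float:
        # Illustrative test object: 5 % discount on prices above 30,000.
        return price * 0.95 if price > 30_000 else price

    random.seed(42)                      # make the random test run reproducible
    # Draw inputs from an assumed normal distribution of vehicle prices
    # (mean 25,000, standard deviation 8,000) so the test data is realistic.
    for _ in range(100):
        price = random.gauss(25_000, 8_000)
        result = price_with_discount(price)
        assert result <= price           # coarse check: a discount never raises the price
    print("random test run completed")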

Smoke test

A smoke test is a “quick and dirty” technique that primarily tests the minimum robustness requirements of the test object. Such tests are usually automated and are limited to testing the object’s main functionality without looking in detail at the results. The test only checks whether the system crashes or shows any obvious failures. This technique saves resources by doing without a test oracle to derive the expected result. Smoke tests are usually based on a selection of existing (rather than new) test cases. If a smoke test delivers an “everything OK” result, other “proper” tests can then be performed. The term goes back to the times when electrical devices “went up in smoke” when they failed. A smoke test is often performed before all other types of tests to see if the test object is sufficiently mature to warrant further resource-hungry testing. Smoke tests are often used to put software updates through an initial quick functional test.
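
The following Python sketch shows what such a quick automated check might look like; the function names and the trivial stub standing in for the test object are assumptions made for illustration:

    # Hypothetical smoke test: exercise the main functions once and only check
    # that nothing crashes; the detailed results are deliberately not examined.

    class VsrStub:                                   # trivial stand-in for the system under test
        def log_in(self, user, password): pass
        def select_vehicle_type(self, vehicle_type): pass
        def configuration_is_orderable(self): return True

    def smoke_test(system) -> bool:
        steps = [
            lambda: system.log_in("customer1", "secret"),
            lambda: system.select_vehicle_type("compact"),
            lambda: system.configuration_is_orderable(),
        ]
        for step in steps:
            try:
                step()                               # no test oracle: the return value is ignored
            except Exception as exc:
                print(f"smoke test failed: {exc}")
                return False
        return True

    if __name__ == "__main__":
        if smoke_test(VsrStub()):
            print("everything OK - proceed with further testing")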

