Test Execution Planning

Detail planning for each cycle

A testing strategy defines the testing framework (for example, which testing techniques to use), but doesn’t define the details of the individual test cases. The test manager has to define test cases during the detail-planning phase that comes before the start of a new iteration or test cycle. The detail-planning phase defines which test cases need to be designed, which require automation, and which tests are to be performed by whom and in which order. The result of detail planning is a “test execution schedule” for the upcoming test cycle. Detail planning is always based on the current situation, which will definitely have changed since the previous iteration! The following factors are important when detail planning:

  • The current stage of development
    The software under test that is actually available can have limited or altered functionality compared with what was initially planned, making changes to the test specifications and existing test case definitions necessary. Tests that are (currently) technically impossible to perform or that are not suitable to the current situation need to be postponed or discarded.
  • Test results
    Issues identified by previous test cycles can make it necessary to change your testing priorities. Fixed defects require additional confirmation tests that have to be added to the test execution schedule. Additional tests can also be required for cases where issues have been identified but aren’t sufficiently reproducible using the available tests. The opposite can also be true, and the number of regression tests (and therefore testing effort) can be reduced for an unchanged component that passed all its tests during the previous iteration.
  • Resources
    Test cycle planning has to be coordinated with the current project or iteration plan and, more importantly, with the available resources. For example, you need to consider the current staffing and vacation plans, the availability of the test environment, the viability of automated tests and testing tools, and so on. If there aren’t enough people available you may have to cancel some manual tests or postpone further test automation. If time and money are limited, you may have to cut back some of the planned test activities or individual test cases.

If you can’t requisition additional resources, you may have to adapt the test plan. Low-priority test cases can be canceled or tests with multiple variants can be pared down (for example, run a test on just one operating system instead of many). These kinds of adjustments mean some meaningful tests won’t be executed, but the resources you save can be used to ensure that at least the higher-priority tests get done.
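The trade-off described above can be sketched as a simple scheduling heuristic. The following Python sketch is illustrative only (the test names, priorities, and effort figures are invented): given a reduced effort budget, it keeps the highest-priority tests and drops the rest.

```python
# Hypothetical sketch: trimming a test execution schedule when resources
# are cut. Test cases, priorities, and the effort budget are illustrative.

def trim_schedule(test_cases, budget):
    """Keep the highest-priority tests that fit into the remaining effort.

    test_cases: list of (name, priority, effort), priority 1 = highest.
    budget: total effort (e.g., person-hours) still available.
    """
    kept, dropped, used = [], [], 0
    # Fill the schedule highest priority first; within a priority,
    # cheaper tests first so more of them fit.
    for name, prio, effort in sorted(test_cases, key=lambda t: (t[1], t[2])):
        if used + effort <= budget:
            kept.append(name)
            used += effort
        else:
            dropped.append(name)
    return kept, dropped

# The low-priority multi-variant test is the one that gets cut.
kept, dropped = trim_schedule(
    [("login", 1, 2), ("checkout", 1, 3), ("ie11-variant", 3, 4), ("report", 2, 2)],
    budget=7,
)
```

In practice the decision also involves dependencies and risk judgment, but a mechanical first pass like this makes the cut transparent and repeatable.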

Plan the test execution sequence

When planning test cycles, the test manager controls not only the selection of test cases, but also how the test effort within the current cycle is distributed between test design, test automation, the initial execution of newly specified tests, and the execution of regression tests.

The weighting of all these activities has to be balanced according to the current situation and the factors detailed above. During the early phases of a project, test design and automation often require more effort than they do later on. For later iterations in which functionalities change only slightly, you will usually need to define and automate only a few new tests. The weighting can then be shifted toward test execution instead.

Our Tip “Definition of Ready” for system test automation

  • Newly specified system test cases should at first be executed manually. Only once you have gathered some experience of how the test object and the test environment react, and whether the new tests do what they are designed to do, can you begin to automate them. Ideally, the newly automated tests will be available and fully functional for the next iteration.

Test execution planning includes establishing or optimizing the sequence in which the planned tests are to be executed. Changes in the sequence can be due to tests or test objects that are either missing or that behave differently than they did during previous test cycles, or because a different sequence is simply more efficient.

Prioritizing tests

If—due to any of the situations mentioned above—you need to discard test cases from the “complete” execution plan, you need to decide which ones to drop. Despite the resulting reduced set of test cases, the overarching objective is still to identify as many risks or potentially critical defects as possible—in other words, you need to concentrate on prioritizing the most important test cases.

Side Note: Prioritization criteria

The following criteria can be used to guide, objectivize, and formalize the prioritization process:

  • The frequency with which a function is called when the software is in use. If a function is used frequently, it is highly likely that a fault within the function will be triggered, thus causing a failure or system crash. Test cases that check these types of functions have a higher priority than ones that check less frequently used functions.
  • High-risk defects
    A high-risk defect is one that can cause significant loss or damage to the customer—for example, if the software fails and costs the customer money. Tests that are aimed at identifying high-risk defects have a higher priority than ones that check low-risk defects.
  • Defects that are noticed by the end-user offer another criterion for prioritizing test cases, especially for interactive systems. For example, the user of a web shop spots some faulty output in the user interface and is no longer convinced that the rest of the data presented by the system is correct.
  • Test cases can be prioritized to correspond with the prioritization of the requirements. The various functions that a system provides are of differing importance to the customer. Some may be dispensable if they don’t work properly, while others are essential to the system’s basic functionality.
  • As well as functional requirements, you need to consider non-functional quality characteristics, as these too can be of varying importance to the customer. More important characteristics have to be tested more thoroughly.
  • Prioritization can also take place from a development or system architecture viewpoint. Components that can potentially cause a complete system failure need to be tested more thoroughly than other, less critical ones.
  • The complexity of the individual components or modules can be used to prioritize test cases. Because they are more difficult to build (and are therefore more likely to contain errors), complex program parts need to be tested more thoroughly. However, you may find that apparently simple program parts contain a whole raft of errors, perhaps because they were developed with too little care. If figures based on experience are available, it will be easier to decide which option to take for the current project.
  • Defects that require significant correction effort and take up valuable resources can delay project progress (see section 6.4.3) and contribute to project risk, so they have to be identified as early as possible (for example, components that significantly influence system performance). Correcting these types of faults usually requires major changes to the system architecture, so it is never too early to put these kinds of components through performance tests.
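To objectivize the criteria above, some teams combine them into a weighted score per test case. The following sketch assumes an invented set of criteria and weights; the names, scales, and weighting are illustrative, not prescribed by any standard.

```python
# Hypothetical weighted-score sketch of the prioritization criteria above.
# Criterion names, weights, and the 0..10 rating scale are illustrative.

WEIGHTS = {
    "usage_frequency": 3,       # how often the function is used
    "failure_damage": 4,        # potential loss or damage on failure
    "requirement_priority": 2,  # importance of the requirement to the customer
    "component_complexity": 1,  # likelihood of defects due to complexity
}

def priority_score(ratings):
    """ratings: criterion -> value on a 0..10 scale; higher = more critical."""
    return sum(WEIGHTS[c] * v for c, v in ratings.items())

# A frequently used, high-damage function scores high and is tested first.
score = priority_score({"usage_frequency": 8, "failure_damage": 9,
                        "requirement_priority": 5, "component_complexity": 3})
```

Scores like this only rank test cases relative to one another; the weights still encode a judgment call that the test manager must make and document in the test plan.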

Define priority criteria in your test plan

Priority criteria are defined by the test manager as part of the test plan. These criteria may already be defined on the test object or quality characteristic levels, but all test cases must have priorities assigned to them at the latest during planning for the first test cycle (either individually or in groups). The priorities thus defined are used in subsequent iterations to make rapid decisions if tests have to be discarded (for example, due to a lack of resources):

  • If a low-priority test case contains a precondition for a high-priority test case, the low-priority test case has to be executed first. The situation is similar where dependencies exist across multiple test cases—in this case, the test cases have to be arranged in a sequence that functions, regardless of the priorities assigned to the individual test cases.
  • Sometimes, test case sequences can be arranged according to differing degrees of efficiency. In cases like this, you have to strike a compromise between testing efficiency and sticking to your prioritization strategy.
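The dependency rule above can be automated with a standard topological sort that breaks ties by priority. This sketch assumes a simple data model (priority numbers with 1 as highest, and an explicit precondition list per test case); both are assumptions for illustration.

```python
# Sketch (assumed data model): order test cases so that every precondition
# runs first, and the highest-priority ready test runs earliest otherwise.
import heapq

def execution_order(priorities, depends_on):
    """priorities: test -> priority (1 = highest).
    depends_on: test -> list of tests that must run before it."""
    indegree = {t: 0 for t in priorities}
    successors = {t: [] for t in priorities}
    for test, prereqs in depends_on.items():
        for pre in prereqs:
            successors[pre].append(test)
            indegree[test] += 1
    # Kahn's algorithm with a priority queue: among all tests whose
    # preconditions are met, always pick the highest-priority one next.
    ready = [(priorities[t], t) for t, d in indegree.items() if d == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, test = heapq.heappop(ready)
        order.append(test)
        for nxt in successors[test]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                heapq.heappush(ready, (priorities[nxt], nxt))
    return order

# "create_account" is low priority but a precondition of "place_order",
# so it must run before it regardless of priority.
order = execution_order(
    {"create_account": 3, "place_order": 1, "search": 2},
    {"place_order": ["create_account"]},
)
```

Note how the low-priority precondition is pulled forward only as far as necessary: independent tests of higher priority still run first.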

Prioritization only ever demonstrates the relative importance of test cases to one another within the context of the current project. Like all other aspects of test cycle planning, test case prioritization has to be checked regularly and, if necessary, adjusted to suit the current state of the overall testing strategy. The following rules of thumb apply when assigning or adjusting test case priorities:

Our Tip Prioritization rules

  • Test case prioritization should enable the best possible results regardless of whether testing is completed or terminated prematurely.
  • Where many defects exist you are sure to find more (see also section 2.1.6, principle #4). Components (and their neighbors) that contain a large number of defects should be assigned a high priority for several iterations following defect correction.

Test entry and exit criteria

Definition of entry and exit criteria is another important part of overall test planning. These criteria are part of the test plan and determine when particular test activities can begin and when they are considered to be finished. Separate entry and exit criteria can be defined for the entire project, each test level, or every different type of test, and can vary according to the test objectives you have defined.

These criteria are measured and assessed regularly during testing to help test management plan test execution, and project management to decide when to approve a new release.

Entry criteria

Entry criteria (called “definition of ready” in agile projects) define the preconditions required to begin a specific test activity. If the entry criteria are not fulfilled, the activity will probably be more difficult to perform, take more time, cost more, and be more risky than planned. If this is the case, it probably makes little sense to begin the activity.

Typical entry criteria are:

  • The requirement (i.e., the expected behavior) that is to be tested for is known and/or available in written form
  • The test environment is available and ready to use
  • The required test tools are ready for operation within the test environment
  • The test objects are available within the test environment
  • The necessary test data are available

These criteria are the precondition for starting the corresponding test activity, so checking them in a timely manner ensures that the test team doesn’t waste time attempting to perform tests that are not yet fully executable.
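Because entry criteria are a checklist, they lend themselves to a mechanical check before a cycle starts. The sketch below encodes the five criteria listed above; the context keys and check functions are invented for illustration.

```python
# Minimal sketch: checking entry criteria ("definition of ready") before a
# test activity starts. Context keys and checks are illustrative assumptions.

ENTRY_CRITERIA = {
    "requirement documented": lambda ctx: ctx["requirement_doc"] is not None,
    "test environment ready": lambda ctx: ctx["env_status"] == "ready",
    "test tools operational": lambda ctx: ctx["tools_ok"],
    "test object deployed": lambda ctx: ctx["build_deployed"],
    "test data available": lambda ctx: ctx["test_data_loaded"],
}

def unmet_entry_criteria(ctx):
    """Return the names of all entry criteria that are not yet fulfilled."""
    return [name for name, check in ENTRY_CRITERIA.items() if not check(ctx)]

# The build has not been deployed yet, so testing should not start.
missing = unmet_entry_criteria({
    "requirement_doc": "REQ-42.md", "env_status": "ready",
    "tools_ok": True, "build_deployed": False, "test_data_loaded": True,
})
```

Reporting the unmet criteria (rather than a bare yes/no) tells the team exactly what still blocks the cycle.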

Exit criteria

Exit criteria (called “definition of done” in agile projects) mitigate the risk of ending test activities randomly or for the wrong reasons. They serve to ensure that test activities are not terminated too early (for example due to a lack of resources or time pressure), but also that they don’t get out of control. Typical exit criteria and their metrics are:

  • All planned test cases defined in the test execution schedule have been completed
  • The required test coverage has been achieved—for example, measured using the number of completed test cases (test coverage), requirement coverage, code coverage, or similar
  • The target product quality is achieved—for example, measured by the number and seriousness of known but not yet corrected defects within a predefined range of tolerance, reliability, or other quality characteristics
  • Residual risks
    A predefined degree of tolerable risk (i.e., a threshold) can also be used as an exit criterion. Examples are the number of executed tests, the number of lines of code not reached during testing, the estimated number and effects of undiscovered faults, and so on. If the residual risk stays within the tolerable threshold, testing can be considered successfully completed.
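Exit criteria like these are only useful if they are checked against the metrics actually collected. The following sketch evaluates three of the criteria above against a metrics snapshot; the thresholds and metric names are invented for the example.

```python
# Illustrative sketch: evaluating exit criteria against collected metrics.
# Metric names and thresholds are assumptions, not fixed by any standard.

def unmet_exit_criteria(metrics,
                        min_requirement_coverage=0.95,
                        max_open_major_defects=0):
    """Return the list of exit criteria that are not yet fulfilled."""
    failed = []
    if metrics["executed_tests"] < metrics["planned_tests"]:
        failed.append("not all planned test cases completed")
    coverage = metrics["covered_requirements"] / metrics["total_requirements"]
    if coverage < min_requirement_coverage:
        failed.append("requirement coverage below target")
    if metrics["open_major_defects"] > max_open_major_defects:
        failed.append("too many uncorrected major defects")
    return failed

# All tests ran and no major defects are open, but coverage is 92% < 95%.
failed = unmet_exit_criteria({
    "executed_tests": 180, "planned_tests": 180,
    "covered_requirements": 92, "total_requirements": 100,
    "open_major_defects": 0,
})
```

As with entry criteria, listing which criteria failed (instead of a single pass/fail flag) gives stakeholders the information they need to assess an early release.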

Every project is subject to economic restrictions. Testing may be halted before the originally planned exit criteria are met—perhaps because the budget has run out, time has run out, or simply because of pressure to get the product released. In such situations, the exit criteria and an estimate of how distant they still are can help stakeholders to objectively assess the risks involved in an early release.

Can we stop testing?

Test results sampled during test execution help to determine testing progress and also to decide whether testing can be terminated and the product released. Which criteria are appropriate for deciding whether to terminate testing depends on the quality characteristics that need to be fulfilled (i.e., the criticality of the software) and on the available resources (time, staff, tools).

The project exit criteria, too, are defined as part of the test plan, and each exit criterion must be measurable using the metrics collected in the course of the project.

Case Study: Exit criteria for VSR-II system testing

The VSR-II test cases are assigned one of three priorities:

[Figure: definitions of the three VSR-II test case priorities]

Based on these priorities, the test plan lists the following test case-based exit criteria for system testing VSR-II:

  • All Priority 1 test cases have run without failure
  • At least 60% of Priority 2 test cases have run without failure

Once these exit criteria are fulfilled, the project manager (assisted by the test manager) decides whether to release and deliver the test object. Within the context of component and integration testing, “delivery” means handing over the test object to the following test level.
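The two VSR-II exit criteria can be checked mechanically. This sketch assumes test results are recorded as (priority, passed) pairs; that data model is an assumption for illustration.

```python
# Sketch of the VSR-II system test exit criteria: all Priority 1 test cases
# pass, and at least 60% of Priority 2 test cases pass. The (priority,
# passed) result format is an illustrative assumption.

def vsr2_system_test_exit(results):
    """results: list of (priority, passed) for each executed test case."""
    p1 = [passed for prio, passed in results if prio == 1]
    p2 = [passed for prio, passed in results if prio == 2]
    all_p1_pass = all(p1)
    p2_pass_rate = sum(p2) / len(p2) if p2 else 1.0
    return all_p1_pass and p2_pass_rate >= 0.60

# Both P1 tests pass and 2 of 3 P2 tests pass (about 67%), so the
# criteria are met; the failing P3 test does not block release.
ok = vsr2_system_test_exit(
    [(1, True), (1, True), (2, True), (2, False), (2, True), (3, False)]
)
```
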

