Test Reports

The objective of a test report is to summarize and communicate the current state of testing activity to the project manager and other stakeholders. This normally takes place routinely at the end of an iteration/test cycle or when a particular test activity (for example, performance testing) is completed. However, an interim test report can be initiated any time that unexpected issues occur.

Test progress and test summary reports

Test managers often differentiate between test progress reports and test summary reports. A test progress report is usually more succinct and focuses on the state reached and the test results from the current (or last finished) iteration. In contrast, a test summary report serves as a basis for deciding whether to approve and release the current product version.

It summarizes all completed test activities from all iterations that lead up to a release, and provides an overview of all completed tests. In this case, what actually took place during which iteration is of secondary interest.

Product release

One important element of a test summary report is the test manager’s (subjective) evaluation (i.e., an expert opinion) of whether the test object can be approved for release. However, “approval” does not necessarily mean “free of defects”. The product is sure to contain a number of undiscovered faults, as well as known faults that are not considered critical to approval and are thus left unchanged. The latter are documented in a database and are often remedied later during routine software maintenance.

The contents of a test report will vary according to the nature of the project, the organizational requirements, and the development lifecycle. The following elements are usually included in most reports:

  • A list of the test object(s)
    What was actually tested
  • Dates (from … to …)
    When the tests were performed
  • A summary
    Which types of tests were performed on which test levels, or the general testing focus
  • Test progress statistics measured against the predefined exit criteria
    For example, planned/run/blocked tests, factors that prevent further progress, or other achieved objectives (such as further test automation)
  • Test object quality statistics, especially defect status reports
    New defects, defects in progress, or corrected defects
  • Risks
    New, changed, or known risks, and any other unusual events
  • Deviations from plan
    Including changes in scheduling, test duration, planned testing effort, or deviations from the approved budget
  • Forecast
    Tests and testing activities that are planned for the next reporting period
  • Overall assessment
    A (subjective) evaluation of the achieved degree of confidence in the test object
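The elements above can be thought of as a simple data structure together with a few derived metrics. The following Python sketch is purely illustrative: the field names, figures, and the 95% exit criterion are assumptions for demonstration, not a prescribed report format.

```python
from dataclasses import dataclass, field

@dataclass
class TestProgressReport:
    """Minimal report skeleton mirroring the elements listed above."""
    test_objects: list          # what was actually tested
    period: tuple               # (start, end) dates of the reporting period
    planned: int                # tests planned for the period
    run: int                    # tests actually executed
    passed: int                 # executed tests that passed
    blocked: int                # tests that could not be run
    new_defects: int
    corrected_defects: int
    risks: list = field(default_factory=list)

    def pass_rate(self) -> float:
        """Share of executed tests that passed (0.0 if nothing was run)."""
        return self.passed / self.run if self.run else 0.0

    def meets_exit_criteria(self, required_pass_rate: float = 0.95) -> bool:
        """Illustrative exit criterion: all planned tests run, pass rate reached."""
        return self.run >= self.planned and self.pass_rate() >= required_pass_rate

report = TestProgressReport(
    test_objects=["billing module v2.3"],   # hypothetical test object
    period=("2024-05-01", "2024-05-14"),
    planned=120, run=120, passed=115, blocked=0,
    new_defects=7, corrected_defects=12,
)
print(f"pass rate: {report.pass_rate():.1%}, "
      f"exit criteria met: {report.meets_exit_criteria()}")
# → pass rate: 95.8%, exit criteria met: True
```

A structure like this also makes it easy to generate different views of the same data for different recipients, which the tailoring advice below calls for.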

For example, a complex project with multiple stakeholders or a project that is subject to regulatory scrutiny will require reports that are more detailed and precise than those required for updating a small, non-critical mobile app. Agile projects produce regular test progress reports in the form of task boards, defect backlogs, and burndown charts that are discussed in daily standup meetings (see [URL: ISTQB], Foundation Level Agile Tester).
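The burndown charts mentioned above reduce to a simple calculation: remaining work per day plotted against an ideal linear trend. A minimal sketch with made-up figures (real teams would pull these from their tracking tool):

```python
def burndown(total_points: int, completed_per_day: list[int]) -> list[int]:
    """Remaining story points after each day, starting from the sprint total."""
    remaining = [total_points]
    for done in completed_per_day:
        remaining.append(remaining[-1] - done)
    return remaining

def ideal_line(total_points: int, days: int) -> list[float]:
    """Ideal linear burndown from the total down to zero over the sprint."""
    return [total_points - total_points * d / days for d in range(days + 1)]

# Hypothetical 5-day sprint with 40 story points of planned work
actual = burndown(40, [6, 10, 4, 12, 8])   # → [40, 34, 24, 20, 8, 0]
ideal = ideal_line(40, 5)                  # → [40.0, 32.0, 24.0, 16.0, 8.0, 0.0]
```

Days where `actual` sits above `ideal` signal that the team is behind plan, which is exactly the kind of deviation a test progress report should surface.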

Adapt test reports to the target group

The contents of test reports need to be tailored to the needs of their recipients. The amount and type of data a report contains can vary considerably depending on whether its readers have technical or financial backgrounds. A report for presentation to a project steering committee will be different from one that is presented to the development team. The latter is more likely to contain detailed information about types of defects or test content, whereas the former will contain more detail on budget usage, overall test progress, and the current product quality from the end user's point of view.

