Side Note: Management reviews
Two types of review can be identified when looking at review objects:
- Reviews of documents that are created as work products during the development process
- Reviews that analyze the project or the development process itself
Reviews in the second group are referred to as “management”, “project”, or “process” reviews. Their aims include investigating whether regulations and plans are adhered to, analyzing how the required tasks are implemented, and evaluating the effectiveness of changes made to the process.
- The review object is the entire project, and the main review objectives are to establish its current technical and economic state, to check whether it is on schedule, and to assess how well the project is being managed
- Management reviews are often conducted when the project reaches a particular planning milestone, a development phase is completed, or as a “post mortem” analysis that supports the learning process for future projects
- In agile projects, such reviews often take the form of “retrospective” meetings. These are usually held following every sprint, and enable the team to compare notes and collect ideas on how to improve things in the next sprint.
The main objective is always to identify defects
The sections that follow go into detail on the first type of review, whose main objective is to identify defects by investigating documents. The type of review you perform will depend on the nature of the project, the available resources, the type of review object, potential risks, the business area, company culture, and other criteria.
The review can be an informal review, a walkthrough, a technical review, or an inspection. All of these can be conducted as a “peer review”—in other words, with the participation of colleagues at an identical or similar level within the company hierarchy.
A single review object can be reviewed using more than one type of review. For example, an informal review can be conducted to verify that the review object is in a fit state for a subsequent technical review and that the effort involved is justified.
Informal Review
No strict guidelines
An informal review is a kind of “soft” review that follows the standard review process, but without a strict, formal structure. The main aim of an informal review is to identify defects and provide the author of the review object with short-term feedback. It can also be used to develop new ideas and suggest solutions to existing issues. Minor issues can also be resolved during an informal review.
Author/reader feedback cycle
An informal review is usually initiated by the author. The planning stage involves selecting the participants and setting a date for delivering the results. The success of an informal review is highly dependent on the skills and motivation of the reviewer. Checklists can be used, but an informal review usually does without a separate session for discussing the results. In this case, an informal review is really just a simple author/reader feedback cycle.
An informal review is therefore a simple double-check of the review object by one or more colleagues—a “buddy check”. The learning effect and interaction between team members are welcome side effects. A list of the issues found or a corrected/commented copy of the review object usually suffices as far as results are concerned. Techniques such as “pair programming”, “buddy testing”, and “code swapping” can also be seen as kinds of informal review. Because they are easy to organize and involve relatively little effort, informal reviews are widely used in all kinds of projects, and not only in an agile context.
Walkthrough
Running through the review object
As well as identifying defects, a walkthrough can be used to check for conformity with required standards and project specifications. This is also a forum for discussing alternative implementations, and an exchange of ideas about procedures or variations in style can also result and contribute to the participants’ learning curve. The results of a walkthrough should be a consensus opinion.
The focus of a walkthrough is a meeting that is usually led by the author. The results need to be recorded, but there is no real need to prepare a formal transcript or summary report. Use of checklists is also optional. There is little individual preparation involved compared with a technical review or an inspection, and a walkthrough can be conducted with no preparation at all.
During the meeting, typical usage scenarios are named and walked through in a process-oriented fashion. Simulations or test runs of program parts (so-called “dry runs”) are also possible, and individual test cases can also be simulated. The reviewers use the comments and questions raised by the participants as a basis for identifying potential defects.
The author is responsible for making any changes that are required, and there is usually no further supervision involved.
Side Note
This technique is suitable for small teams of up to five people, and little or no preparation and follow-up are required. It is great for checking non-critical objects. In practice, walkthroughs range from extremely informal to quite formal.
Because the author usually leads a walkthrough, you have to take care that the review object is nonetheless critically and impartially scrutinized. Otherwise, the author’s potential bias may lead to insufficient discussion of critical issues, thus distorting the results.
Technical Review
Alternative suggestions welcome
Alongside identifying defects, the main focus of a technical review is forming a consensus. Other objectives include evaluating product quality and building confidence in the review object.
New ideas and suggestions for alternative implementations are welcome in a technical review, and technical issues can be solved by specialists. To make the most of this kind of discussion, colleagues of the author who work in the same or closely-related technical domain should participate as reviewers. Additionally, involving experts from other fields can help prevent the team from becoming blind to their own habits.
Individual preparation is critical in technical reviews. Checklists can be used too. If possible, the review meeting should be led by a trained review moderator, but not by the author. Discussion amongst the reviewers mustn’t get out of control and should stick to finding a consensus about what the author can do to improve their work in the future. A meeting isn’t mandatory, and the discussion can take place in other forums—for example, on the company intranet.
The results of the review need to be recorded, but not by the author. Usually, a list of descriptions of potential defects and a summary review report are prepared.
Technical reviews, too, can take many forms, from completely informal to strictly organized with predefined entry and exit criteria for each step of the process, and mandatory use of reporting templates.
Side Note
It helps to focus the review session on the most important points if the reviewers submit the findings from their individual reviews to the session moderator in advance. The moderator can then prioritize this input and use the meeting to discuss the most important points and the most obviously divergent opinions.
The results of a technical review are the responsibility of all participants. If you cannot reach a consensus during the meeting, you can hold votes to settle discussions and log the results in the review report.
It is not the responsibility of participants in a technical review to consider the consequences of the suggestions they make. This is down to management.
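The consolidation step described in the side note above—reviewers submitting findings in advance so the moderator can prioritize them—can be sketched in code. The following Python sketch is purely illustrative: the `Finding` fields, the 1–3 severity scale, and the sorting policy (most severe first, then most frequently reported locations) are assumptions, not a prescribed format.

```python
# Hypothetical sketch: a moderator consolidating reviewers' findings
# before a technical review session. Field names and the severity
# scale are illustrative assumptions.

from collections import Counter
from dataclasses import dataclass

@dataclass
class Finding:
    reviewer: str
    location: str     # e.g., a section of the review object
    severity: int     # 1 = minor ... 3 = critical (assumed scale)
    description: str

def prioritize(findings):
    """Sort findings so the most severe and most-reported come first."""
    reported = Counter(f.location for f in findings)
    return sorted(findings,
                  key=lambda f: (-f.severity, -reported[f.location]))

findings = [
    Finding("Ann", "sec-3", 1, "typo in heading"),
    Finding("Ben", "sec-2", 3, "interface contract contradicts spec"),
    Finding("Cai", "sec-2", 2, "ambiguous error handling"),
]

for f in prioritize(findings):
    print(f.severity, f.location, f.description)
```

With the sample data, the critical finding in “sec-2” surfaces first, so the meeting can start with the most important and most contested points.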
Inspection
Formal predefined flow
An inspection is the most formal type of review and follows a strict predefined flow. All participants are project specialists or specialists in other aspects of the review object, and each adopts a specified role during the review. The review process is governed by rules that define check criteria for each aspect of the process. Each testing step has its own entry and exit criteria.
The objectives of an inspection are identifying defects and inconsistencies, determining the quality of the inspection object, and building confidence in the work products. Here too, reaching a consensus is an important part of the process. The findings of an inspection should help the author to avoid similar mistakes in the future and—like a technical review—help to improve the quality of future work.
An additional objective is the improvement of the software development process (see below). The objectives of an inspection are defined at the planning stage and the number of issues that the reviewers need to address is limited from the start.
The inspection object is formally checked for “reviewability” and the fulfillment of the entry criteria is verified before the inspection begins. The reviewers’ individual preparation takes place according to predefined rules or standards using checklists.
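The formal “reviewability” check mentioned above can be thought of as evaluating a list of entry criteria before the inspection is allowed to start. The following Python sketch illustrates this idea; the specific criteria (“document is complete”, and so on) are invented examples—real criteria come from the team’s inspection rules.

```python
# Hypothetical sketch: checking an inspection object's entry criteria
# before the inspection begins. The criteria are invented examples.

ENTRY_CRITERIA = [
    ("document is complete", lambda doc: doc["complete"]),
    ("spell check passed",   lambda doc: doc["spell_checked"]),
    ("references resolved",  lambda doc: doc["refs_resolved"]),
]

def ready_for_inspection(doc):
    """Return (ok, failed_criteria) for a candidate inspection object."""
    failed = [name for name, check in ENTRY_CRITERIA if not check(doc)]
    return (len(failed) == 0, failed)

doc = {"complete": True, "spell_checked": True, "refs_resolved": False}
ok, failed = ready_for_inspection(doc)
print(ok, failed)  # False ['references resolved']
```

If any criterion fails, the object goes back to the author instead of consuming the reviewers’ time—the same gatekeeping role the entry criteria play in a real inspection.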
A sample inspection meeting
A sample inspection meeting could take place as follows: The meeting is led by a trained moderator (not the author) who introduces the participants and their roles, and also provides a brief summary of the subject matter due for inspection. The moderator asks all the reviewers if they are sufficiently well prepared (for instance, by making sure that all the checklist questions have been answered). The moderator may also ask how much time the reviewers spent and how many defects they have identified.
General inconsistencies that affect the entire object are discussed and logged first.
A reviewer then makes a concise and logical presentation of the contents of the inspection object. If necessary, parts of the material can be read out (but not by the author). This is where other reviewers can ask questions and where the selected aspects of the object can be discussed in detail. The author (who mustn’t be the review leader, the moderator, or the scribe) answers any direct questions. If the reviewers and the author cannot agree on how to deal with an issue, this is put to a vote at the end of the session.
If the discussion wanders off the point it is up to the moderator to intervene. The moderator also has to make sure that all selected aspects of the inspection object (and the object in general) are covered, and that all defects and inconsistencies are clearly recorded.
At the end of the session, all defects are presented and checked for completeness by all participants. Any unresolved issues are briefly discussed, but no suggestions for possible solutions are discussed. If there is no consensus as to whether an open issue represents a defect, this discrepancy is recorded in the report. The final report summarizes all results of the inspection and provides a list of descriptions of all potential defects.
To conclude, the inspection is evaluated and a decision is made as to whether the inspection object needs more work. During an inspection, any changes that are made and follow-up work that is done are formally regulated.
Additional evaluation of the development and review processes
Some of the data collected during an inspection can also be used to identify the causes of weaknesses in the development process and thus to improve its quality. The data can also be used to improve the inspection process itself. Any improvement in the quality of both processes can be verified by comparing data collected before and after any changes are made.
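The before-and-after comparison described above can be illustrated with a simple metric. The sketch below uses defect density (defects per inspected page) as an example metric; the numbers and the metric itself are assumptions for illustration, not data from the text.

```python
# Illustrative sketch: verifying a process improvement by comparing
# defect densities measured before and after a process change.
# The figures and the "defects per page" metric are assumed.

def defect_density(defects_found, pages_inspected):
    """Defects per inspected page, a simple inspection metric."""
    return defects_found / pages_inspected

before = defect_density(defects_found=48, pages_inspected=120)  # 0.40
after  = defect_density(defects_found=21, pages_inspected=105)  # 0.20
improvement = (before - after) / before

print(f"before: {before:.2f}, after: {after:.2f}, "
      f"improvement: {improvement:.0%}")
```

A falling defect density after a process change is only evidence, not proof, of improvement—other factors (document type, reviewer experience) should be comparable between the two measurements.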
Side Note
This type of review is often referred to as a design, code, or software inspection. The name is based on the type of documents that are being inspected. However, if formal review criteria exist, all types of documents can be inspected.
Selection Criteria
Selecting the type of review
Which type of review you use, and when, depends on the objectives you are pursuing, the required quality, and the effort this will cost. The project environment is critical to these decisions too, and it is impossible to make specific recommendations. You have to decide from project to project which type of review is best suited to the situation at hand. The following questions will help you decide which type of review to conduct:
- What form should the review’s results take? Do you need a comprehensive report, or is an undocumented implementation of the results sufficient?
- Is scheduling easy or difficult? Finding a date when five, six, or seven specialists all have time can be hard work.
- Does the review require experts from multiple disciplines?
- How much specific knowledge of the review object do the reviewers need?
- How motivated are the reviewers to spend time and effort concentrating on the planned review?
- Is the planning effort commensurate with the expected results?
- How formal is the review object? Can tool-based analysis take place in advance of the review?
- How much management support do you have? Are reviews likely to be limited or even axed if time is running out?
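The questions above can be turned into a rough decision aid: the more of them that point toward formality, the more formal the review type should be. The following Python sketch makes that idea concrete; the questions chosen, the weights, and the thresholds are all invented for illustration and would need tuning for a real project.

```python
# A hypothetical scoring sketch for the selection questions above:
# each "yes" answer nudges the decision toward a more formal review.
# Weights and thresholds are invented for illustration only.

def suggest_review_type(answers):
    """Map yes/no answers to a review-type suggestion via a formality score."""
    score = 0
    if answers["comprehensive_report_needed"]: score += 2
    if answers["multiple_disciplines"]:        score += 1
    if answers["high_risk_object"]:            score += 2
    if answers["management_support"]:          score += 1
    if score >= 5: return "inspection"
    if score >= 3: return "technical review"
    if score >= 1: return "walkthrough"
    return "informal review"

answers = {
    "comprehensive_report_needed": True,
    "multiple_disciplines": False,
    "high_risk_object": True,
    "management_support": False,
}
print(suggest_review_type(answers))  # "technical review" (score 4)
```

Such a score can never replace judgment—it merely makes the trade-off between expected benefit and organizational effort explicit before the review is planned.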