Decision trees are a natural way to describe decision problems under risk, involving a sequence of decisions among a finite set of alternative options and a set of discrete scenarios modeling the uncertain outcomes that follow our decisions. Actually, we have already dealt with decision trees informally in earlier examples. Now we should treat this formalism more systematically, by distinguishing two kinds of nodes:
- Decision nodes, represented by squares, correspond to discrete choices between mutually exclusive alternatives, as depicted in Fig. 13.1(a). At these nodes, the decision maker must choose one among multiple available options.
- Chance nodes, represented by circles, correspond to the realization of random outcomes, as depicted in Fig. 13.1(b). Each outcome i is associated with a probability πi. Clearly, the probabilities for the random outcomes at each chance node add up to 1.
A decision tree consists of a set of decision and chance nodes, as shown in Fig. 13.2. Typically, decision and chance nodes are interspersed, but we may have two chance nodes or two decision nodes in sequence. We also have terminal nodes, represented by bullets. Typically, terminal nodes are labeled with a payoff, which is essentially the final monetary value of a sequence of decisions and random outcomes. It may be helpful to associate cash flows with intermediate nodes to clarify the economic impact of each decision. When time plays a significant role, cash flows may have to be discounted.
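The three node types described above can be captured by a small data structure. The following sketch is purely illustrative — the class names and the tiny example tree are hypothetical, not taken from any decision-analysis package:

```python
from dataclasses import dataclass

# Hypothetical node types for a decision tree; names are illustrative.

@dataclass
class Terminal:
    payoff: float  # final monetary value of a path of decisions and outcomes

@dataclass
class Chance:
    # branches: list of (probability, child) pairs; probabilities sum to 1
    branches: list

@dataclass
class Decision:
    # options: list of (label, child) pairs, mutually exclusive alternatives
    options: list

# A tiny tree: one decision followed by one random outcome
tree = Decision(options=[
    ("act", Chance(branches=[(0.6, Terminal(10.0)), (0.4, Terminal(-2.0))])),
    ("pass", Terminal(0.0)),
])
```

Intermediate cash flows, when needed, could be stored as an extra field on decision and chance nodes.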
Fig. 13.1 Node types in a decision tree: (a) decision nodes, where choices are made; (b) chance nodes, where random outcomes are selected by “nature” according to probabilities.
Fig. 13.2 A sample decision tree.
Solving the problem means choosing a strategy, i.e., selecting, for each decision node we might visit, one among multiple alternative decisions. In the case of a complex decision tree, when the strategy is implemented most decision nodes will not be reached, as the sequence of random outcomes will generate a path that does not visit those nodes. However, a strategy must plan for every possible contingency. Assuming that payoffs have a monetary nature, the most natural criterion to follow in building the strategy is the maximization of the expected monetary value (EMV). When a decision node is followed by a set of chance nodes, we should label each chance node with an EMV, which allows us to choose the best action at the decision node. The labeling process should go backward in time, starting from terminal nodes, and it is best illustrated by a simple example.
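The backward labeling process just described is often called rollback, or backward induction: chance nodes get the probability-weighted average of their children, and decision nodes get the maximum over their children. A minimal sketch, using hypothetical tuple-encoded nodes rather than any particular library's API:

```python
# Nodes are tuples: ("terminal", payoff),
# ("chance", [(prob, child), ...]), ("decision", [child, ...]).

def rollback(node):
    """Return the EMV of a node by working backward from the terminal nodes."""
    kind = node[0]
    if kind == "terminal":
        return node[1]
    if kind == "chance":
        # EMV: probability-weighted average of the child values
        return sum(p * rollback(child) for p, child in node[1])
    if kind == "decision":
        # choose the alternative with the largest EMV
        return max(rollback(child) for child in node[1])
    raise ValueError(f"unknown node kind: {kind}")

# Hypothetical tree: choose between a gamble and a sure payoff of 6
tree = ("decision", [
    ("chance", [(0.5, ("terminal", 12.0)), (0.5, ("terminal", 2.0))]),
    ("terminal", 6.0),
])
print(rollback(tree))  # gamble EMV = 0.5*12 + 0.5*2 = 7.0 > 6.0, so 7.0
```

The recursion visits every node once, which mirrors the manual labeling from terminal nodes back to the root.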
Example 13.1 Let us solve the decision problem of Fig. 13.2. At chance node N4, we calculate the EMV E4 resulting from the three successive terminal nodes:
Then, to calculate the EMV E2 for chance node N2 we consider the expected value E4 and the value of the sibling terminal node, which yields
These calculations are reflected in Fig. 13.3, where nodes are successively labeled. If, at decision node N1, we choose the upbranch, the EMV of the decision is 8.48. The downbranch is a bit more complicated, as we must consider another decision node. We start by labeling chance node N6:
At decision node N5 we should compare the upbranch, with EMV 5, against the downbranch with EMV 7. If we accept the idea of just considering expected values, disregarding risk, we should choose the downbranch. In the figure, this decision is represented by barring (cutting) the suboptimal path corresponding to the upbranch, and labeling node N5 with the value of the optimal decision, max{5, 7} = 7. Now we label node N3 with the EMV
Taking the maximum between 4.9 and the previously computed EMV 8.48, we conclude that the best initial decision is to follow the upbranch. After that, there are no more decisions to make, and everything is in the hands of Mother Nature.
In this trivial example, along the path following the first decision, we do not have to make any other choice. In realistic cases, decisions are made sequentially, after gathering further information. Moreover, such decisions may represent successive investments, calling for a sequence of cash flows. In this example we did not consider intermediate cash flows, which should be properly weighted probabilistically and possibly discounted, in order to take the time value of money into due account.
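When intermediate cash flows occur at different times, each scenario path yields a stream of time-stamped cash flows whose net present value is then weighted by the scenario's probability. A small sketch, with purely hypothetical cash flows, probabilities, and discount rate:

```python
# Sketch of probability-weighting discounted cash flows along scenario paths.
# All numbers below are hypothetical.

def npv(cash_flows, rate):
    """Net present value of (time, amount) cash flows at a given rate."""
    return sum(amount / (1 + rate) ** t for t, amount in cash_flows)

# Two scenarios following an initial outlay at time 0
scenarios = [
    (0.7, [(0, -100.0), (1, 80.0), (2, 60.0)]),  # success path
    (0.3, [(0, -100.0), (1, 20.0)]),             # failure path
]
rate = 0.10
expected_npv = sum(p * npv(cf, rate) for p, cf in scenarios)
```

The same rollback logic applies; the only change is that each cash flow is discounted back to time 0 before expectations are taken.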
Fig. 13.3 Solution of the decision tree of Fig. 13.2.
Example 13.2 As a more meaningful example, let us represent the problem of Example 6.10 as a decision tree. Figure 13.4 depicts the corresponding decision tree, including its solution. The decision tree allows us to visualize the strategy:
- At decision node N1, we choose to run the customers’ survey, which costs €4000 and provides us with a less uncertain forecast of new product success, one way or another. Note once again that the survey does not change the unconditional probability of success of the new product.
- If the result of the customers’ survey is promising, we prefer producing; otherwise, we sell the license.
The strategy does not consist of a deterministic sequence of actions; rather, for each random outcome, we have a course of action. Furthermore, in drawing the tree we subtracted the cost of the survey from the payoff of terminal nodes. This does not reflect the true logical sequence of cash flows, but if there is no discounting, there is no real difference. However, when time is of the essence and discounting is needed, it may be much simpler and more informative to associate cash flows with intermediate nodes.
The decision trees we have considered are rather trivial, and software packages for decision analysis provide the user with more options to represent cash flow timing and discounting. Of course, the true difficulty is in estimating cash flows and the probabilities at chance nodes; typically, a thorough sensitivity analysis is needed in order to check the robustness of the recommended strategy. It should be mentioned, however, that the very process of structuring a decision tree has value in itself, as it forces the decision maker(s) to lay down the structure of the problem, the decisions to be made, their logical sequence, and the related risks and opportunities. This thinking process may be more valuable than a recommendation relying on questionable estimates of probabilities. Furthermore, decision trees also allow estimation of the value of information. For the tree of Fig. 13.4 we know that the value of the partial information provided by the customers’ survey is not larger than €5000. In the next section we show how to value perfect information.
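The expected value of perfect information (EVPI) compares the EMV obtained when the scenario is revealed before deciding against the best EMV without any information. A minimal single-stage sketch, with hypothetical payoffs and probabilities:

```python
# Sketch of the expected value of perfect information (EVPI);
# payoffs and probabilities are hypothetical.

# payoffs[action][scenario]; one probability per scenario
payoffs = {
    "produce": [100.0, -40.0],
    "license": [30.0, 30.0],
}
probs = [0.5, 0.5]

# Best EMV without information: pick one action, then face uncertainty
emv_no_info = max(sum(p * v for p, v in zip(probs, row))
                  for row in payoffs.values())

# With perfect information: pick the best action in each scenario,
# then take the expectation over scenarios
emv_perfect = sum(p * max(row[s] for row in payoffs.values())
                  for s, p in enumerate(probs))

evpi = emv_perfect - emv_no_info  # here: 65.0 - 30.0 = 35.0
```

EVPI is an upper bound on what any imperfect signal, such as a survey, can be worth.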
Fig. 13.4 Decision tree for new product launch.