MULTISTAGE STOCHASTIC LINEAR PROGRAMMING WITH RECOURSE

Multistage stochastic programming formulations arise naturally as a generalization of two-stage models. At each stage, we gather new information and make decisions accordingly, taking into account immediate costs and the expected future recourse cost. The resulting decision process may be summarized as follows:

  • At the beginning of the first time period (at time t = 0) we select the decision vector $x_0$; this decision has a deterministic immediate cost $c_0^{\top} x_0$ and must satisfy the constraints $A_{00} x_0 = b_0$, $x_0 \ge 0$.
  • At the beginning of the second time period we observe random data $(A_{10}, A_{11}, c_1, b_1)$ depending on event $\omega_1$; then, on the basis of this information, we make decision $x_1$; this second decision has an immediate cost $c_1^{\top} x_1$ and must satisfy the constraint $A_{10} x_0 + A_{11} x_1 = b_1$. Note that these data are not known at time t = 0, but only at time t = 1; the new decision depends on the realization of these random variables and is also affected by the previous decision.
  • We repeat the same scheme as above for time periods up to H − 1, where H is our planning horizon.
  • At the beginning of the last time period H, we observe random data $(A_{H,H-1}, A_{HH}, c_H, b_H)$ depending on event $\omega_H$; then, on the basis of this information, we make decision $x_H$, which has an immediate cost $c_H^{\top} x_H$ and must satisfy the constraint $A_{H,H-1} x_{H-1} + A_{HH} x_H = b_H$ (see the compact form after this list).
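
In compact form, the stagewise constraints just listed share the same structure (a restatement of the bullets above, with the data at stage t depending on the observed event $\omega_t$):

\[
A_{00} x_0 = b_0, \qquad A_{t,t-1} x_{t-1} + A_{tt} x_t = b_t, \quad t = 1, \ldots, H, \qquad x_t \ge 0.
\]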

From the point of view of time period t = 0, the decisions $x_1, \ldots, x_H$ are random variables, as they will be adapted to the realization of the stochastic process. However, the only information we may use in making each decision consists of the history observed so far. The resulting dynamic decision process is captured by the following recursive formulation of the multistage problem:

\[
\min_{x_0}\; c_0^{\top} x_0
+ \mathrm{E}_{\omega_1}\!\left[ \min_{x_1}\; c_1^{\top} x_1
+ \mathrm{E}_{\omega_2 \mid \omega_1}\!\left[ \cdots
+ \mathrm{E}_{\omega_H \mid \omega_1, \ldots, \omega_{H-1}}\!\left[ \min_{x_H}\; c_H^{\top} x_H \right] \right] \right]
\]
subject to
\[
\begin{aligned}
A_{00} x_0 &= b_0, \\
A_{10} x_0 + A_{11} x_1 &= b_1, \\
&\;\;\vdots \\
A_{H,H-1} x_{H-1} + A_{HH} x_H &= b_H, \\
x_t &\ge 0, \quad t = 0, 1, \ldots, H.
\end{aligned}
\]
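
An equivalent way to state this recursion, which makes the stagewise structure explicit, is through recourse functions (a standard restatement consistent with the formulation above, not spelled out in the original text):

\[
Q_H(x_{H-1}) = \min_{x_H \ge 0} \left\{ c_H^{\top} x_H \;:\; A_{H,H-1} x_{H-1} + A_{HH} x_H = b_H \right\},
\]
\[
Q_t(x_{t-1}) = \min_{x_t \ge 0} \left\{ c_t^{\top} x_t + \mathrm{E}\!\left[ Q_{t+1}(x_t) \mid \omega_1, \ldots, \omega_t \right] \;:\; A_{t,t-1} x_{t-1} + A_{tt} x_t = b_t \right\}, \quad t = 1, \ldots, H-1,
\]
so that the problem solved at time t = 0 is $\min \{ c_0^{\top} x_0 + \mathrm{E}[Q_1(x_0)] : A_{00} x_0 = b_0,\; x_0 \ge 0 \}$; here the data $(A_{t,t-1}, A_{tt}, c_t, b_t)$, and hence $Q_t$, depend on the history observed up to stage t.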

In this formulation, we see that decision $x_t$ depends directly only on the previous decision $x_{t-1}$. In general, decisions may depend on the whole past history, leading to a slightly more complicated model. However, we may often introduce additional state variables so that the above formulation applies. For instance, in a production planning model we may “forget” past produced quantities if we know the current inventory levels. It should be noted that, in practice, the real output of the above model is the set of immediate decisions $x_0$. The remaining decision variables can be regarded as contingent plans, which are implemented over time, much in the vein of a feedback control policy; in a practical setting, however, it is more likely that the model will be solved again and again according to a rolling-horizon logic. While this formulation highlights the dynamic optimization nature of multistage problems, we usually resort to deterministic equivalents based on discrete scenario trees. We illustrate the idea with a toy financial planning problem.
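
Before turning to that example, the following sketch shows how such a deterministic equivalent can be assembled on a small scenario tree: each node of the tree gets its own copy of the decision vector (which enforces non-anticipativity implicitly), the objective weighs each node's immediate cost by the node probability, and each constraint links a node to its parent. This is only a minimal illustration in Python with SciPy, using made-up data (tree shape, matrices, costs); it is not the financial planning problem discussed next, and the labels A_prev and A_self are simply this sketch's names for $A_{t,t-1}$ and $A_{tt}$.

# Minimal sketch (made-up data): node-based deterministic equivalent of a
# small stochastic LP on a scenario tree, solved with scipy.optimize.linprog.
import numpy as np
from scipy.optimize import linprog

# Scenario tree: node 0 is the root (t = 0). Each node stores its parent, its
# unconditional probability, and its stage data; A_prev acts on the parent's
# decision (the role of A_{t,t-1}) and A_self on the node's own decision (A_{tt}).
nodes = {
    0: dict(parent=None, prob=1.0,
            A_prev=None,                   A_self=np.array([[1.0, 1.0]]),
            c=np.array([1.0, 2.0]),        b=np.array([10.0])),
    1: dict(parent=0, prob=0.5,
            A_prev=np.array([[1.0, 0.0]]), A_self=np.array([[1.0, 1.0]]),
            c=np.array([1.5, 1.0]),        b=np.array([12.0])),
    2: dict(parent=0, prob=0.5,
            A_prev=np.array([[1.0, 0.0]]), A_self=np.array([[1.0, 1.0]]),
            c=np.array([0.5, 2.5]),        b=np.array([8.0])),
}
nvars = 2                                              # decision variables per node
offset = {n: i * nvars for i, n in enumerate(nodes)}   # column block of each node

# Objective: expected total cost, i.e. sum over nodes of prob_n * c_n' x_n.
cost = np.zeros(nvars * len(nodes))
for n, d in nodes.items():
    cost[offset[n]:offset[n] + nvars] = d["prob"] * d["c"]

# Equality constraints: A_self x_0 = b_0 at the root, and
# A_prev x_parent + A_self x_n = b_n at every other node.
rows, rhs = [], []
for n, d in nodes.items():
    row = np.zeros((d["A_self"].shape[0], nvars * len(nodes)))
    row[:, offset[n]:offset[n] + nvars] = d["A_self"]
    if d["parent"] is not None:
        p = d["parent"]
        row[:, offset[p]:offset[p] + nvars] = d["A_prev"]
    rows.append(row)
    rhs.append(d["b"])

res = linprog(cost, A_eq=np.vstack(rows), b_eq=np.concatenate(rhs),
              bounds=(0, None))                        # x >= 0 at every node
print("expected cost  :", res.fun)
print("here-and-now x0:", res.x[:nvars])

A deeper tree only adds more nodes, each pointing to its parent at the previous stage; the matrix assembly and the call to linprog are unchanged. Solving the deterministic equivalent yields, in particular, the here-and-now decision $x_0$, which is the only part of the plan actually implemented when the model is re-solved in a rolling-horizon fashion.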

[Fig. 13.10 Piecewise linear concave utility function.]

