So far, in terms of concrete procedures, we have considered only decision trees, which are well suited to cope with discrete decisions, when uncertainty can be represented by a finite set of scenarios. More generally, we would like to solve a problem like
$$\min_{\mathbf{x} \in S} \mathrm{E}\big[f(\mathbf{x}, \boldsymbol{\omega})\big],$$

where S is a subset of $\mathbb{R}^n$, and the expectation can be taken with respect to a multidimensional continuous distribution. The objective function is not necessarily an expected cost or an expected profit (to be maximized), and it could be related to a utility function. Unfortunately, this optimization problem involves an expectation, which in turn involves a multidimensional integral, if the underlying distribution is continuous. It is a safe bet that a problem like this is almost intractable in all but very simple cases. One ingredient in building a tractable model is the approximation of the continuous distribution by a discrete set of scenarios, i.e., a discrete probability distribution, which boils the nasty multidimensional integral down to a more manageable sum. Indeed, this is what is typically done to build decision trees, where the description of uncertainty is discrete by nature.
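To make scenario discretization concrete, the following sketch (with made-up numbers: a hypothetical newsvendor-style cost function and demand assumed normal) replaces the expectation integral by a sample average over equally likely sampled scenarios:

```python
import numpy as np

rng = np.random.default_rng(42)

def cost(x, demand):
    # Hypothetical cost: overage at 1 per unit, underage at 4 per unit.
    return 1.0 * np.maximum(x - demand, 0) + 4.0 * np.maximum(demand - x, 0)

# Continuous model: demand ~ N(100, 20^2). The exact expected cost is an
# integral; we approximate it by a sum over discrete sampled scenarios.
scenarios = rng.normal(loc=100, scale=20, size=5000)
probs = np.full(scenarios.size, 1.0 / scenarios.size)  # equally likely

def expected_cost(x):
    return np.sum(probs * cost(x, scenarios))

# Compare a few candidate decisions on the discretized model.
for x in (90, 110, 130):
    print(x, round(expected_cost(x), 2))
```

Once the distribution is discretized, `expected_cost` is just a finite sum, and any candidate decision can be evaluated cheaply.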
We should also note that, in practice, it is very difficult to specify a high-dimensional distribution describing the uncertainty we face. The multivariate normal distribution is an exception, since to describe it we just need expected values, variances, and the correlation matrix. In other cases, a set of well-crafted scenarios may be the raw material we start from. They may be obtained by Monte Carlo simulation of a possibly complex dynamic model, or by taking advantage of historical data; scenarios can also be the result of a discussion with a group of domain experts, if past data are not relevant or simply not available. However, unlike the case with decision trees, here we want to deal with complex decisions, rather than a choice among a small, finite number of alternatives. As a concrete illustration of these concepts, we will treat the stochastic extension of linear programming (LP) models.
Consider the following deterministic LP model (in canonical form):

$$\min\ \mathbf{c}^{\mathsf{T}}\mathbf{x} \quad \text{s.t.}\quad A\mathbf{x} \ge \mathbf{b}, \quad \mathbf{x} \ge \mathbf{0}.$$
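As a baseline, a tiny made-up instance of this canonical form can be solved with `scipy.optimize.linprog` (which expects constraints in the form $A_{ub}\mathbf{x} \le \mathbf{b}_{ub}$, so we negate both sides of $A\mathbf{x} \ge \mathbf{b}$):

```python
import numpy as np
from scipy.optimize import linprog

# Tiny illustrative instance (data made up) of: min c'x s.t. Ax >= b, x >= 0.
c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
b = np.array([4.0, 6.0])

# linprog uses A_ub x <= b_ub, so negate to express Ax >= b.
res = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2, method="highs")
print(res.x, res.fun)  # optimal solution and objective value
```

In this instance, the optimum sits at the intersection of the two constraints, $\mathbf{x} = (2, 2)$, with objective value 10.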
We may try to deal with uncertainty by making randomness in the data explicit in the model. In the most general case, we may have randomness in all of our data, which could be represented by random variables c(ω), A(ω), and b(ω), depending on an underlying event ω. However, we cannot simply translate the model above to something like

$$\min\ \mathbf{c}(\omega)^{\mathsf{T}}\mathbf{x} \tag{13.19}$$
$$\text{s.t.}\quad A(\omega)\mathbf{x} \ge \mathbf{b}(\omega), \quad \mathbf{x} \ge \mathbf{0}. \tag{13.20}$$
To begin with, the objective function (13.19) does not make sense, since minimizing a random variable has no clear meaning. Still, we could fix this simply by minimizing its expected value. The real issue is that we should not require that the constraints (13.20) be satisfied for every event ω. In some cases, doing so would yield a “fat” solution, which is likely to be quite costly; in other cases, it would be simply impossible. To see this, consider a simple inventory control system operating under a reorder point policy. If demand is normally distributed, a 100% service level would require setting the reorder point to infinity. By the same token, imagine that the model involves equality constraints; equality constraints written for different scenarios will most likely be inconsistent. Hence, we must relax the constraints in some sensible way.
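The reorder-point remark can be checked numerically. Under normally distributed lead-time demand, a standard reorder point is $R = \mu + z_\beta \sigma$, where $z_\beta$ is the standard normal quantile of the service level β; the numbers below are illustrative assumptions:

```python
from scipy.stats import norm

# Reorder point for normal lead-time demand with (assumed) mean and std dev:
# R = mu + z_beta * sigma, where z_beta = norm.ppf(beta).
mu, sigma = 100.0, 20.0

for beta in (0.90, 0.99, 0.999, 1.0):
    z = norm.ppf(beta)        # the quantile diverges as beta -> 1
    print(beta, mu + z * sigma)
```

As β approaches 1, the quantile grows without bound, and at β = 1 the reorder point is literally infinite, which is the point made above.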
One possible approach to relaxing the requirements is to allow for violations in a small subset of scenarios. In other words, we settle for a suitably high probability β < 1 of satisfying the constraints. The chance-constrained programming approach deals with models of the form

$$\min\ \mathbf{c}^{\mathsf{T}}\mathbf{x} \quad \text{s.t.}\quad P\{A(\omega)\mathbf{x} \ge \mathbf{b}(\omega)\} \ge \beta, \quad \mathbf{x} \ge \mathbf{0}.$$
Note that here we require a high probability of satisfying the joint set of constraints; alternatively, we might enforce the condition for each constraint separately. This modeling framework has a clear interpretation in terms of reliability of the solution. Nevertheless, there are a few difficulties:
- In technical terms, there is no guarantee that the resulting optimization problem is convex in general; nonconvexity may arise with discrete probability distributions.

[Fig. 13.9: Scenario tree for a two-stage problem.]
- From a more practical perspective, we are not saying anything about what will happen if the constraints are violated. Corrective actions will probably be taken, but they are left outside the model. Furthermore, our previous discussion about VaR shows that disregarding a small set of scenarios is dangerous when they are unlikely but carry catastrophic consequences.
- Finally, the above framework is static: We do not account for a dynamic decision process, whereby decisions may be adapted when uncertainty is progressively resolved.
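Although solving a chance-constrained model in general is hard, checking a candidate solution against the joint chance constraint is easy by sampling. The sketch below (all parameters are made-up assumptions, with randomness only in the right-hand side) estimates $P\{A\mathbf{x} \ge \mathbf{b}(\omega)\}$ by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical system A x >= b(omega): the matrix is fixed and only the
# right-hand side is random (independent normals, assumed parameters).
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
b_mean = np.array([4.0, 6.0])
b_sd = np.array([0.3, 0.4])

def joint_satisfaction_prob(x, n=20000):
    # Monte Carlo estimate of P{ A x >= b(omega) } for the JOINT system:
    # a scenario counts only if ALL constraints hold simultaneously.
    b = rng.normal(b_mean, b_sd, size=(n, 2))
    return np.mean(np.all(A @ x >= b, axis=1))

x = np.array([2.2, 2.2])   # candidate first-stage decision
print(joint_satisfaction_prob(x))
```

Note the difference between this joint requirement and enforcing a probability level for each constraint separately: the joint probability is never larger than any individual one.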
An alternative framework to deal with stochastic optimization is stochastic programming with recourse. To get a feeling for this modeling framework, consider Fig. 13.9.
- We have a scenario tree. The root of the tree (the node on the left) represents the current state, here and now; the nodes on the right represent different future states of nature, or scenarios. Each scenario has some probability of occurrence, which can be an objective measure derived from statistical information or a subjective assessment of likelihood. As we have noted before, scenarios can result from a suitable discretization of a continuous probability distribution, or from a set of plausible forecasts by a pool of experts.
- We should make a set of decisions now, but in the future, when uncertainty is at least partially resolved, we might take some action in order to “adjust” our previous decisions on the basis of additional information that we have gathered. These adjustments are called recourse actions.
- We want to find a set of decisions now so as to optimize the sum of two terms: the immediate cost of the here-and-now decisions, and the expected cost of the future recourse actions.
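The two-term structure above can be sketched as a small deterministic-equivalent LP. In this made-up instance, we produce x units now at unit cost 1; once scenario s reveals demand $d_s$, the recourse action is to buy any shortfall $y_s$ at unit cost 3. Stacking the first-stage variable and one recourse variable per scenario gives an ordinary LP:

```python
import numpy as np
from scipy.optimize import linprog

# Made-up data: three demand scenarios with probabilities, production at
# unit cost 1 now, recourse (emergency) purchase at unit cost 3 later.
demands = np.array([80.0, 100.0, 120.0])
probs = np.array([0.3, 0.4, 0.3])
c_now, c_recourse = 1.0, 3.0

# Deterministic equivalent over variables [x, y_1, y_2, y_3]:
#   min  c_now * x + sum_s probs[s] * c_recourse * y_s
#   s.t. x + y_s >= demands[s] for each scenario s;  x, y_s >= 0
c = np.concatenate(([c_now], probs * c_recourse))
A_ub = np.hstack((-np.ones((3, 1)), -np.eye(3)))   # -(x + y_s) <= -d_s
b_ub = -demands

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4, method="highs")
print(res.x)    # first-stage x, then recourse purchase per scenario
print(res.fun)  # first-stage cost + expected recourse cost
```

The objective is exactly the sum of the two terms: the here-and-now cost of x plus the probability-weighted cost of the scenario-dependent recourse purchases.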
In order to get acquainted with this modeling framework, we consider once again the assembly-to-order production planning model of Section 12.2.1, allowing for demand uncertainty.