Scenario generation for stochastic programming

Multistage stochastic programming is a very powerful modeling framework, and it can be extended to cope with risk measures like CVaR, as we have seen in Section 13.3.3. However, the approach can only be as good as the scenario tree on which it is based. Given a multivariate probability distribution characterizing uncertainty, the most obvious way to generate a tree is to draw a random sample using the Monte Carlo principles that we have outlined in Section 9.7. Unfortunately, one clear issue is the exponential growth of the number of tree branches and nodes needed to adequately represent uncertainty: with a branching factor of 10 at each of four stages, for instance, we already have 10^4 = 10,000 scenarios. This growth may be somewhat countered by sophisticated solution strategies. However, it is also important to generate scenarios so that uncertainty is captured adequately even with a limited number of them. By “adequately” we mean in such a way that the first-stage solution is good enough; we are not really interested in representing uncertainty per se over a long time horizon. We list here a few ideas that can be used in practice:

  • Variance reduction strategies are Monte Carlo sampling techniques that improve the quality of estimates for a given sample size. From inferential statistics we know that the width of a confidence interval is related to the standard error of the sample mean, σ/√n. This expression shows the impact of both the standard deviation σ of the observations and the sample size n. A brute-force approach entails increasing n, but since the square root is a concave function, this is less and less effective. It might seem that there is little we can do about σ; in fact, clever sampling strategies can be used to reduce this factor (a small sketch based on antithetic variates is given below).
  • Low-discrepancy sequences were introduced to evaluate multidimensional integrals numerically with a limited number of function evaluations. Unlike Monte Carlo methods, low-discrepancy sequences do not rely on a statistical framework in any way, as they are a deterministic approach that spreads points as regularly as possible over a region (see the Sobol' sequence sketch below).
  • Moment and property matching is a rather natural idea. Imagine taking a sample from a multivariate distribution with expected value vector μ and covariance matrix Σ. A good sample should have sample means and sample covariances as close as possible to these values. We may extend the idea to skewness, kurtosis, and other properties. A perfect match between the properties of the sample and the required values is generally impossible to obtain, but we may find the best approximation by solving a least-squares problem (a small sketch is given below).
  • Optimal approximation of probability measures is a formal approach to scenario reduction, based on the idea of approximating a continuous distribution by a discrete one. Pursuing this approach requires a precise definition of “distance” between probability measures, as well as suitable computational procedures (a simple greedy reduction heuristic is sketched below).

For more details on these advanced approaches, we refer to the references.
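As a minimal illustration of the variance reduction idea, the following Python sketch compares plain Monte Carlo with antithetic variates for estimating E[exp(Z)]; the choice of antithetic variates, the function names, and the test integrand are illustrative and not prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(42)

def crude_mc(f, n):
    """Plain Monte Carlo estimate of E[f(Z)], Z standard normal."""
    z = rng.standard_normal(n)
    return f(z).mean()

def antithetic_mc(f, n):
    """Antithetic variates: pair each draw z with -z and average.

    For a monotone f, f(z) and f(-z) are negatively correlated,
    which tends to reduce the variance of the estimator for the
    same number of function evaluations.
    """
    z = rng.standard_normal(n // 2)
    return 0.5 * (f(z) + f(-z)).mean()

# Illustrative integrand: E[exp(Z)] = exp(0.5) ~ 1.6487
f = np.exp
crude = [crude_mc(f, 1000) for _ in range(200)]
anti = [antithetic_mc(f, 1000) for _ in range(200)]
print("std of crude MC estimates:  ", np.std(crude))
print("std of antithetic estimates:", np.std(anti))
```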
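The next sketch shows how a low-discrepancy (Sobol') sequence can be mapped into a small set of equally weighted scenarios for two correlated normal returns, using the quasi-Monte Carlo tools in scipy.stats.qmc; the mean vector and covariance matrix are illustrative values, not taken from the text.

```python
import numpy as np
from scipy.stats import norm, qmc   # qmc is available in SciPy >= 1.7

# 2**7 = 128 points of a scrambled Sobol' sequence in the unit square
sampler = qmc.Sobol(d=2, scramble=True, seed=42)
u = sampler.random_base2(m=7)

# Map the uniform points to a bivariate normal for two correlated asset
# returns; the mean vector and covariance matrix are illustrative values.
mu = np.array([0.05, 0.03])
sigma = np.array([[0.04, 0.01],
                  [0.01, 0.02]])
L = np.linalg.cholesky(sigma)
z = norm.ppf(u)                  # inverse of the standard normal CDF
scenarios = mu + z @ L.T         # 128 equally weighted scenarios

print(scenarios.mean(axis=0))    # close to mu even with few points
print(np.cov(scenarios.T))       # close to sigma
```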
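Moment matching can be sketched as a small least-squares problem: choose the scenario values so that the mean vector and covariance matrix of the resulting discrete distribution are as close as possible to target values. The targets, the equal weights, and the unconstrained formulation below are just one possible setup, assumed here for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Target moments for two assets (illustrative values)
mu_target = np.array([0.05, 0.03])
cov_target = np.array([[0.04, 0.01],
                       [0.01, 0.02]])

n_scen, n_assets = 10, 2
p = np.full(n_scen, 1.0 / n_scen)        # equally weighted scenarios

def moment_error(x):
    """Squared mismatch between the moments of the discrete
    distribution defined by the scenarios and the target moments."""
    s = x.reshape(n_scen, n_assets)
    mean = p @ s
    dev = s - mean
    cov = dev.T @ (dev * p[:, None])
    return np.sum((mean - mu_target) ** 2) + np.sum((cov - cov_target) ** 2)

rng = np.random.default_rng(0)
x0 = rng.multivariate_normal(mu_target, cov_target, size=n_scen).ravel()
res = minimize(moment_error, x0, method="BFGS")
scenarios = res.x.reshape(n_scen, n_assets)

print("matched means:", p @ scenarios)   # close to mu_target
```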
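Finally, a rough sketch in the spirit of scenario reduction: a greedy forward-selection heuristic that keeps a small subset of scenarios so as to reduce a probability-weighted distance to the kept set, and then reassigns the probability of discarded scenarios to their nearest kept scenario. The function reduce_scenarios is a hypothetical helper, and this heuristic is a simplified stand-in for the formal procedures referred to above.

```python
import numpy as np

def reduce_scenarios(points, probs, n_keep):
    """Greedy forward selection for scenario reduction (simplified sketch).

    Repeatedly adds the scenario whose inclusion most reduces the
    probability-weighted distance of all scenarios to the kept set,
    then moves the probability of each discarded scenario to its
    nearest kept scenario, so the reduced measure is still a distribution.
    """
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    kept = []
    for _ in range(n_keep):
        best, best_cost = None, np.inf
        for j in range(n):
            if j in kept:
                continue
            d_min = dist[:, kept + [j]].min(axis=1)
            cost = np.sum(probs * d_min)
            if cost < best_cost:
                best, best_cost = j, cost
        kept.append(best)
    # Redistribute probabilities of discarded scenarios to the nearest kept one.
    nearest = np.array(kept)[dist[:, kept].argmin(axis=1)]
    new_probs = np.array([probs[nearest == k].sum() for k in kept])
    return points[kept], new_probs

rng = np.random.default_rng(1)
pts = rng.normal(size=(200, 2))
p = np.full(200, 1.0 / 200)
reduced_pts, reduced_p = reduce_scenarios(pts, p, n_keep=10)
print(reduced_p.sum())   # probabilities still sum to 1
```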

