An economic interpretation of Lagrange multipliers: shadow prices

Lagrange multipliers play a major role in optimization theory, as well as in economics. Indeed, within the economic community they are better known as shadow prices, owing to the important economic interpretation that we illustrate in this section. Consider an equality-constrained problem and apply a small perturbation to the constraints:

$$h_j(\mathbf{x}) = \epsilon_j, \qquad j = 1, \ldots, m.$$

Let $\boldsymbol{\epsilon}$ be a vector collecting these perturbations. Solving the perturbed problem by the Lagrange multiplier method, we get a new solution $\mathbf{x}^*(\boldsymbol{\epsilon})$ and a new multiplier vector $\boldsymbol{\lambda}^*(\boldsymbol{\epsilon})$, both depending on $\boldsymbol{\epsilon}$. An interesting question is how these perturbations affect the optimal solution and the corresponding optimal value; in other words, we are interested in the sensitivity

$$\frac{\partial f^*(\boldsymbol{\epsilon})}{\partial \epsilon_j}, \qquad \text{where } f^*(\boldsymbol{\epsilon}) \equiv f\bigl(\mathbf{x}^*(\boldsymbol{\epsilon})\bigr).$$

To be precise, we should notice that this derivative need not exist in general, as the differentiability of $f^*(\boldsymbol{\epsilon})$ cannot be guaranteed; nevertheless, when the derivative does exist, it is related to the Lagrange multiplier of constraint j.

To see why, let us consider the Lagrangian function for the perturbed problem

$$\mathcal{L}(\mathbf{x}, \boldsymbol{\lambda}; \boldsymbol{\epsilon}) = f(\mathbf{x}) + \sum_{j=1}^{m} \lambda_j \bigl[ h_j(\mathbf{x}) - \epsilon_j \bigr].$$

Equality constraints must be satisfied by the optimal solution of the perturbed problem, too. Hence

$$h_j\bigl(\mathbf{x}^*(\boldsymbol{\epsilon})\bigr) = \epsilon_j \quad \Rightarrow \quad f^*(\boldsymbol{\epsilon}) = \mathcal{L}\bigl(\mathbf{x}^*(\boldsymbol{\epsilon}), \boldsymbol{\lambda}^*(\boldsymbol{\epsilon}); \boldsymbol{\epsilon}\bigr).$$

Now we can find the derivative of the optimal value with respect to each component of $\boldsymbol{\epsilon}$:

$$\frac{\partial f^*(\boldsymbol{\epsilon})}{\partial \epsilon_j} = \sum_{i=1}^{n} \frac{\partial \mathcal{L}}{\partial x_i}\, \frac{\partial x_i^*}{\partial \epsilon_j} + \sum_{k=1}^{m} \frac{\partial \mathcal{L}}{\partial \lambda_k}\, \frac{\partial \lambda_k^*}{\partial \epsilon_j} + \frac{\partial \mathcal{L}}{\partial \epsilon_j} = -\lambda_j^*(\boldsymbol{\epsilon}),$$

where we have used the stationarity condition of L. Thus, we conclude that Lagrange multipliers are, apart from a change in sign, sensitivity measures of the optimal value with respect to perturbations in the right-hand side of constraints.
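As a sanity check, the sensitivity result can be verified numerically on a toy problem (not from the text): minimize $x_1^2 + x_2^2$ subject to $x_1 + x_2 = 1 + \epsilon$. By symmetry the solution is $x_1^* = x_2^* = (1+\epsilon)/2$, with multiplier $\lambda^* = -(1+\epsilon)$, so a central finite difference of the optimal value should match $-\lambda^*$:

```python
# Toy verification (problem not from the text): minimize x1^2 + x2^2
# subject to x1 + x2 = 1 + eps. By symmetry x1* = x2* = (1 + eps)/2, and the
# stationarity condition 2*x1 + lam = 0 gives lam* = -(1 + eps).
def solve_perturbed(eps):
    x = (1.0 + eps) / 2.0
    lam = -2.0 * x
    f_star = 2.0 * x**2
    return f_star, lam

delta = 1e-6
f_plus, _ = solve_perturbed(delta)
f_minus, _ = solve_perturbed(-delta)
_, lam0 = solve_perturbed(0.0)
sensitivity = (f_plus - f_minus) / (2.0 * delta)
print(sensitivity, -lam0)  # both approximately 1.0: df*/d(eps) = -lambda*
```

The finite-difference estimate and $-\lambda^*$ agree, illustrating the general result.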

Example 12.20 Consider the quadratic programming problem:

$$\min \quad (x_1 - 2)^2 + (x_2 - 2)^2 \qquad \text{s.t.} \quad x_1 + x_2 = b,$$

where b is a parameter, and let us investigate how the optimal value changes as a function of b. The optimal value of the objective is a function $f^*(b)$, and in this very simple case we may find this function explicitly. To this end, we may eliminate the constraint in order to obtain an equivalent unconstrained problem. From the constraint we get $x_2 = b - x_1$, and plugging this into the objective function yields the unconstrained problem

$$\min_{x_1} \quad (x_1 - 2)^2 + (b - x_1 - 2)^2.$$

Then, setting the first-order derivative with respect to $x_1$ to zero, we find $x_1^* = b/2$, which also implies $x_2^* = b/2$. This solution may also be easily checked geometrically, since the problem asks us to find the point on the line $x_1 + x_2 = b$ whose distance from the point (2, 2) is minimal; see Fig. 12.18. The optimal value as a function of b is then

$$f^*(b) = 2\left(\frac{b}{2} - 2\right)^2 = \frac{(b - 4)^2}{2}.$$

If we take its derivative with respect to b, we obtain

$$\frac{d f^*(b)}{d b} = b - 4.$$

This shows that the optimal value decreases if we increase b when the line lies below the point (2, 2), since the line then gets closer to that point; if the line lies above that point, increasing b increases the distance.


Fig. 12.18 Illustrating Lagrange multipliers as sensitivity measures.

Now let us set all of this geometric intuition aside and apply the Lagrange multiplier approach. First, we build the Lagrangian function

$$\mathcal{L}(x_1, x_2, \lambda) = (x_1 - 2)^2 + (x_2 - 2)^2 + \lambda (x_1 + x_2 - b).$$

The stationarity conditions are

$$\begin{aligned} \frac{\partial \mathcal{L}}{\partial x_1} &= 2(x_1 - 2) + \lambda = 0, \\ \frac{\partial \mathcal{L}}{\partial x_2} &= 2(x_2 - 2) + \lambda = 0, \\ \frac{\partial \mathcal{L}}{\partial \lambda} &= x_1 + x_2 - b = 0, \end{aligned}$$

and we find

$$x_1^* = x_2^* = \frac{b}{2}, \qquad \lambda^* = 4 - b.$$

This confirms that, apart from a change in sign, the multiplier is the derivative of the optimal value with respect to b.
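The agreement between the two derivations can also be checked numerically. The sketch below uses the closed-form solution derived above, $f^*(b) = (b-4)^2/2$ and $\lambda^* = 4 - b$, and compares a finite-difference estimate of $df^*/db$ with $-\lambda^*$ for a few values of b:

```python
# Numerical check of Example 12.20: f*(b) = (b - 4)^2 / 2 and lambda* = 4 - b,
# so a finite-difference estimate of df*/db should equal -lambda* = b - 4.
def f_star(b):
    x = b / 2.0                       # optimal point: x1* = x2* = b/2
    return 2.0 * (x - 2.0)**2

def multiplier(b):
    return 4.0 - b                    # lambda* from the stationarity conditions

delta = 1e-6
for b in (0.0, 2.0, 4.0, 6.0):
    d = (f_star(b + delta) - f_star(b - delta)) / (2.0 * delta)
    print(b, round(d, 6), -multiplier(b))  # derivative matches -lambda*
```

Note that at b = 4 the line passes through (2, 2), the multiplier vanishes, and the optimal value is zero.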

The result above applies to equality constraints, but what about inequality constraints? Thanks to the complementary slackness condition, we can extend it to this case as well. There are two possibilities:

  1. If an inequality constraint is inactive, the sensitivity with respect to perturbations of its right-hand side is zero, because small changes of the constraint have no effect. In this case, complementary slackness ensures that the corresponding multiplier is zero.
  2. If an inequality constraint is active, it basically behaves as an equality constraint and the above result applies, with an additional caveat concerning the sign of the multiplier. In this case, how we write the perturbed constraint is essential: $g_i(\mathbf{x}) \le \epsilon_i$. We know that $\mu_i^* \ge 0$, which implies that $\partial f^*(\boldsymbol{\epsilon}) / \partial \epsilon_i = -\mu_i^* \le 0$. But this makes sense, as enlarging the feasible region can only decrease cost.
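Both cases can be illustrated on a one-dimensional toy problem (not from the text): minimize $(x - 2)^2$ subject to $x \le c$. For $c = 5$ the constraint is inactive and the sensitivity is zero; for $c = 1$ it is active, the multiplier is $\mu = 2$, and the sensitivity is $-\mu = -2$:

```python
# Toy illustration (not from the text): minimize (x - 2)^2 subject to x <= c,
# with the right-hand side perturbed to c + eps.
def f_star(c):
    x = min(2.0, c)       # unconstrained minimizer x = 2, clipped by x <= c
    return (x - 2.0)**2

delta = 1e-6
# Case 1: c = 5, the constraint is inactive -> zero sensitivity (mu = 0).
s_inactive = (f_star(5.0 + delta) - f_star(5.0 - delta)) / (2.0 * delta)
# Case 2: c = 1, the constraint is active -> x* = 1; stationarity
# 2*(x - 2) + mu = 0 gives mu = 2, and the sensitivity is -mu = -2 <= 0.
s_active = (f_star(1.0 + delta) - f_star(1.0 - delta)) / (2.0 * delta)
print(s_inactive, s_active)  # 0.0 and approximately -2.0
```

Enlarging the feasible region (increasing c) never increases the optimal cost, in agreement with the sign of the multiplier.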

Example 12.21 Let us illustrate the meaning of Lagrange multipliers in the case of the optimal mix problem (12.1). Any software tool for linear programming yields the optimal contribution to profit, π0 = 5538.4615, as well as the values of the multipliers for the four capacity constraints (12.2)–(12.5):

images

Not surprisingly, the multipliers for redundant capacity constraints are zero. Now, let us tackle two questions:

  1. If we were to increase capacity, which resource should be our top priority?
  2. How much money should we be willing to pay for one extra hour?

The first answer is that the second resource should be our top priority, as it is associated with the largest multiplier. Note that here we are maximizing profit rather than minimizing cost, so we expect the optimal profit to increase by 1.2692 monetary units for each additional hour of the second resource type. In fact, if we solve the problem again after increasing the right-hand side of constraint (12.3) by 1 and by 100 hours, respectively, we obtain the following optimal objective values:

$$\pi_1 = 5539.7308 \quad (+1 \text{ hour}), \qquad \pi_2 = 5665.3846 \quad (+100 \text{ hours}).$$

It is easy to check that π1 − π0 = 1.2692 and π2 − π0 = 126.92. Therefore, we should not be willing to pay more than 126.92 monetary units for 100 additional hours for the second resource type. Of course, multipliers are only “local” sensitivities: If we increase a capacity beyond a limit, then the corresponding resource constraints will not be binding anymore.
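Since the coefficients of problem (12.1) are not reproduced here, the mechanics can be sketched on a small hypothetical product-mix LP (all numbers below are made up): shadow prices are recovered by re-solving the LP with a slightly perturbed right-hand side, exactly as done above for constraint (12.3). For a two-variable LP we can simply enumerate the vertices of the feasible region:

```python
# Shadow prices for a small hypothetical product-mix LP (made-up data):
#   maximize 3*x1 + 5*x2
#   s.t.     x1 + 2*x2 <= b1,   3*x1 + x2 <= b2,   x1, x2 >= 0.
from itertools import combinations

def solve_lp(b1, b2):
    """Maximize profit by enumerating the vertices of the 2-variable region."""
    # Constraints in the form a*x + b*y <= c (nonnegativity included).
    cons = [(1.0, 2.0, b1), (3.0, 1.0, b2), (-1.0, 0.0, 0.0), (0.0, -1.0, 0.0)]
    best = float("-inf")
    for (a1, d1, c1), (a2, d2, c2) in combinations(cons, 2):
        det = a1 * d2 - a2 * d1
        if abs(det) < 1e-12:
            continue  # parallel lines, no vertex
        x = (c1 * d2 - c2 * d1) / det
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in cons):
            best = max(best, 3.0 * x + 5.0 * y)
    return best

delta = 1e-6
p0 = solve_lp(10.0, 15.0)
# Shadow price of each resource: profit increase per extra unit of capacity.
shadow1 = (solve_lp(10.0 + delta, 15.0) - p0) / delta
shadow2 = (solve_lp(10.0, 15.0 + delta) - p0) / delta
print(p0, shadow1, shadow2)  # 27.0, then approximately 2.4 and 0.2
```

In practice one would read the shadow prices directly from the dual solution reported by the LP solver, rather than perturbing and re-solving; the finite difference merely makes the interpretation tangible.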

In the optimal mix above, how should we measure multipliers associated with capacity constraints? They are sensitivities of profit (monetary units) with respect to resource availability (hours); hence they are measured in money per unit amount of resource and are interpreted as prices. This is where the term shadow price comes from.


Fig. 12.19 Counterexample showing that a constraint may be relevant even if it has a null multiplier.

One word of caution, before we leave this section: It may be tempting to conclude that if a constraint is associated with a zero multiplier, then it can be dropped from the optimization problem without changing the optimal solution. The counterexample of Fig. 12.19 shows that this is not the case. Here we have a convex quadratic objective, whose level curves are concentric circles; the feasible region is the portion of the “bean” S below the constraint g(x) ≤ 0, which is actually an upper bound on x2. The optimal solution is the point A, and the constraint g(x) ≤ 0 is inactive at that point; however, if we eliminate the constraint, the optimal solution is B (A is still a locally optimal solution). It is true that the solution does not move at all for small perturbations of the constraint; but dropping the constraint is a different matter. It is also worth noting that the source of the trouble here is that the overall problem is nonconvex.

Example 12.22 Let us solve the model of Example 12.6, where we built the nonlinear program

$$\min \quad \sum_{i=1}^{n} h_i\, \frac{Q_i}{2}$$
$$\text{s.t.} \quad \sum_{i=1}^{n} \frac{d_i}{Q_i} \le U, \qquad Q_i \ge 0, \quad i = 1, \ldots, n,$$

to minimize inventory cost in an EOQ-like setting, subject to an upper bound on the average number of replenishment orders. Before applying the theory we know, it is advisable to streamline the model a little. First, the nonnegativity requirements may be dropped if we assume a strictly positive, interior optimal solution $Q_i^* > 0$. We will check later that this is indeed the case, but it is easy to see that no order size can be zero, since the function involved in the inequality constraint (12.15) is not defined there and goes to infinity for small order sizes, thus exceeding the bound U. Furthermore, the inequality (12.78) may be replaced by an equality. To see this, observe that the objective function calls for small order quantities, which has the side effect of increasing the average number of issued orders; since the objective is linear in the order sizes, we should keep reducing them until the ordering capacity constraint is saturated. As there is no reason to leave any ordering capacity unused, we may rewrite the model as follows:

$$\min \quad \sum_{i=1}^{n} h_i\, \frac{Q_i}{2} \qquad \text{s.t.} \quad \sum_{i=1}^{n} \frac{d_i}{Q_i} = U.$$

Now we see that there is no need for applying the KKT conditions; we just introduce a Lagrange multiplier λ and build the Lagrangian function

$$\mathcal{L}(\mathbf{Q}, \lambda) = \sum_{i=1}^{n} h_i\, \frac{Q_i}{2} + \lambda \left( \sum_{i=1}^{n} \frac{d_i}{Q_i} - U \right).$$

A key observation is that the Lagrangian function, for a given value of the multiplier λ, can be decomposed with respect to items. Moreover, the cost for each item should be quite familiar; it is just the cost function (12.8) for the EOQ model, in which the fixed ordering charge A is replaced by the multiplier λ. Hence, the application of the stationarity conditions with respect to the order size Qi yields

$$\frac{\partial \mathcal{L}}{\partial Q_i} = \frac{h_i}{2} - \lambda\, \frac{d_i}{Q_i^2} = 0 \quad \Rightarrow \quad Q_i^* = \sqrt{\frac{2 \lambda d_i}{h_i}}.$$

This is just the traditional EOQ formula, but which value of λ should we use? Since the equality constraint (12.80) must be satisfied (it is the stationarity condition of the Lagrangian with respect to λ), we must find a multiplier such that the average number of orders is exactly U. Such a value is easily found numerically by trial and error. The important point is that we get an EOQ-like solution, in which the Lagrange multiplier plays the role of a fixed ordering charge, making too-frequent orders unattractive. This example helps explain the popularity of the EOQ model even when the introduction of a fixed ordering charge is questionable; the point is that this charge should depend on the data, rather than being given a priori. In old practice, the fixed ordering charge was often overstated, resulting in an unnecessary inventory buildup.
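The trial-and-error search for λ can be sketched as a simple bisection; all data below (demand rates $d_i$, holding costs $h_i$, and the order budget U) are hypothetical. Since the average number of orders $\sum_i d_i / Q_i^*(\lambda)$ is decreasing in λ, bisection is guaranteed to converge:

```python
import math

# Hypothetical data: demand rates d_i, holding costs h_i, order budget U.
d = [1000.0, 400.0, 250.0]
h = [2.0, 1.5, 3.0]
U = 40.0  # maximum average number of orders per period

def orders(lam):
    """Average number of orders when each Q_i = sqrt(2*lam*d_i/h_i)."""
    return sum(di / math.sqrt(2.0 * lam * di / hc) for di, hc in zip(d, h))

# orders(lam) is decreasing in lam, so we bisect on lam until the ordering
# capacity constraint is saturated: orders(lam) = U.
lo, hi = 1e-9, 1e9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if orders(mid) > U:
        lo = mid  # too many orders: raise the implied ordering charge
    else:
        hi = mid
lam = 0.5 * (lo + hi)
Q = [math.sqrt(2.0 * lam * di / hc) for di, hc in zip(d, h)]
print(round(lam, 4), round(orders(lam), 6))  # orders(lam) equals U
```

The resulting λ is exactly the "shadow" ordering charge: with this value plugged into the EOQ formula, the ordering capacity is fully used and no more.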

