When dealing with a MILP, the simplex method cannot guarantee an integer solution; more often than not, we obtain a fractional solution. There is an even more troubling fact, which is essentially due to the lack of convexity. Indeed, if some magical oracle hands over the optimal solution of a continuous LP, we know how to verify its optimality by checking reduced costs. Unfortunately, there is no such condition for integer LPs; to find the optimal solution, we should explore the whole set of feasible solutions, at least in principle. Enumerating the whole set of solutions is conceptually easy for a pure binary program, such as the knapsack problem of Section 12.4.1. It is natural to consider a search tree like the one depicted in Fig. 12.20. At each node, we branch on a variable, generating two subproblems. At the root node in the figure, we branch on x1, defining a left subproblem in which x1 = 0 (i.e., item 1 is not included in the knapsack), and a right subproblem in which x1 = 1 (i.e., item 1 is included in the knapsack). At each level of the tree we branch on a different variable, and each leaf of the tree corresponds to a candidate solution. Not all of the leaves correspond to feasible solutions, because of the budget constraint. Hence, many solutions will be excluded from the search process. Yet, it is easy to see that this is not a very practical solution approach. If there are n binary variables, there are potentially 2^n solutions to check, resulting in a combinatorial explosion. We say that this enumeration process has exponential complexity in the worst case.
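To make the combinatorial explosion concrete, here is a minimal Python sketch of exhaustive enumeration, using the small four-item knapsack instance of Example 12.24 below; with n items the loop visits 2^n candidate vectors, so even a few dozen items would make it hopeless.

```python
from itertools import product

# Knapsack instance of Example 12.24: values, weights, and capacity.
values = [10, 7, 25, 24]
weights = [2, 1, 6, 5]
capacity = 7

best_value, best_x = -1, None
# Enumerate all 2**n binary assignments: exponential in n.
for x in product((0, 1), repeat=len(values)):
    if sum(w * xi for w, xi in zip(weights, x)) <= capacity:  # budget constraint
        value = sum(v * xi for v, xi in zip(values, x))
        if value > best_value:
            best_value, best_x = value, x

print(best_x, best_value)  # (1, 0, 0, 1) 34: only feasible leaves survive
```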
Things would definitely look brighter if we could find a way to “prune” the tree by eliminating some of its branches. We may avoid branching at a node of the tree if we are sure that the nodes beneath it cannot yield an optimal solution; but how can we conclude this without considering all of those nodes? We have already observed that if we relax the integrality requirements in a MILP and solve the corresponding continuous LP, we obtain a bound on the optimal value of the MILP.34 In the familiar production mix example, solving the continuous LP we get an optimal contribution to profit of 5538.46. This is an optimistic estimate of the true optimal contribution to profit, 5505, which we obtain when requiring integrality of the decision variables. In the case of a maximization problem, the optimistic estimate is an upper bound UB on the optimal objective value (in a minimization problem, the optimistic estimate is a lower bound). Such bounds may be calculated at any node of the search tree for a MILP by relaxing the free integer decision variables to continuous values; free variables are those that have not been fixed to any value by previous branching decisions. This is called continuous relaxation or LP-based relaxation.
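In general the continuous relaxation is solved by an LP solver; for the knapsack problem, however, it has a well-known greedy solution (Dantzig's bound): fill the knapsack in decreasing order of value-to-weight ratio, so that at most one item is taken fractionally. A minimal Python sketch:

```python
def knapsack_lp_bound(values, weights, capacity):
    """Upper bound from the continuous relaxation (0 <= x_i <= 1).

    Items are taken in decreasing value/weight ratio; at most one item
    ends up fractional, and the resulting value bounds the 0-1 optimum.
    """
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    bound, remaining = 0.0, capacity
    x = [0.0] * len(values)
    for i in order:
        take = min(1.0, remaining / weights[i])  # fraction of item i that fits
        x[i] = take
        bound += take * values[i]
        remaining -= take * weights[i]
        if remaining <= 0:
            break
    return bound, x

# Root relaxation of Example 12.24: bound 36.2 with x = (1, 1, 0, 0.8).
print(knapsack_lp_bound([10, 7, 25, 24], [2, 1, 6, 5], 7))
```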
Now suppose that in the process of exploring the search tree we find a feasible (integer), though not necessarily optimal, solution. Then its value is a lower bound LB on the optimal objective value for a maximization problem, since the optimal profit cannot be lower than the value of any feasible solution (a feasible solution yields an upper bound for a minimization problem). It is easy to see that by comparing lower and upper bounds, we may eliminate certain nodes from the search tree. For instance, imagine that we know a feasible solution of a knapsack problem whose value is 100; so, we have a lower bound LB = 100 on the value of the optimal knapsack. Say that the LP relaxation at a node in the search tree yields an upper bound UB = 95. Then we observe that LB > UB and immediately conclude that this node can be safely eliminated: we do not know the value of the best solution in the branches below that node, but we know that it cannot be better than the feasible solution we already know. The roles of lower and upper bounds are reversed for a minimization problem. The following example shows in detail LP-based branch and bound for a knapsack problem.35
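In code, the fathoming test is just a comparison of the two bounds; a minimal illustration with the numbers quoted above, for a maximization problem:

```python
LB = 100      # value of a feasible integer solution already in hand
UB_node = 95  # LP-relaxation bound at some node of the search tree

# If the node's optimistic estimate cannot beat the incumbent, no
# descendant of the node can either, so the whole subtree is discarded.
if UB_node <= LB:
    print("prune this node")
```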
Example 12.24 Consider the knapsack problem

maximize 10x1 + 7x2 + 25x3 + 24x4
subject to 2x1 + x2 + 6x3 + 5x4 ≤ 7,
xi ∈ {0, 1}, i = 1, …, 4
We first solve the root problem of the tree (P0 in Fig. 12.21), which is the continuous relaxation of the binary problem, obtained by relaxing the integrality of the decision variables and requiring 0 ≤ xi ≤ 1. Solving the problem, we get the solution

x = (1, 1, 0, 0.8),

with objective value 36.2.

Fig. 12.21 Search tree for the knapsack problem of Example 12.24.

This is an upper bound and, since all of the problem data are integers, we can immediately conclude that the optimal value cannot exceed 36. We observe that the last variable is fractional, and we branch on it, generating two more subproblems: in P1 we set x4 = 0, and in P2 we set x4 = 1. Note that we are free to branch on other variables as well, but this choice seems to make more sense: there is the possibility of obtaining an integer solution before reaching the bottom level of the tree. Solving P1 yields

x = (1, 1, 2/3, 0), with upper bound 33.67,

whereas P2 yields

x = (0.5, 1, 0, 1), with upper bound 36.
It is important to notice that both upper bounds are smaller than the bound obtained at the root node. This makes sense: we are adding constraints, and the value of the optimal continuous solution cannot increase when we do so. Now we should choose which branch of the tree (i.e., which subproblem) to explore first. One possible rule is to look down the most promising branch, i.e., the one with the larger upper bound, P2. Hence, we generate two more subproblems from P2, branching on the fractional variable x1. Setting x1 = 0 results in subproblem P3, whose solution yields

x = (0, 1, 1/6, 1), with upper bound 35.17.

Setting x1 = 1 results in subproblem P4, whose solution yields our first integer solution

x = (1, 0, 0, 1), with value 34.
Note that we have found an integer solution without reaching a leaf node; when this happens, i.e., when the continuous relaxation yields an integer solution, there is no point in further branching at that node. Having found our first integer solution, we also have a lower bound (34) on the optimal value. Comparing this with the root bound, we could conclude that the optimal value must be 34, 35, or 36. Actually, checking the bounds on the lower-level, still-active nodes, we may immediately conclude that the optimal value must be either 34 or 35, the latter being the bound we obtain from node P3 (its upper bound 35.17 may be rounded down to 35, since the problem data are integers). Subproblem P1 may be immediately eliminated, as its upper bound is lower than the value of our feasible solution. We say that the subproblem has been “fathomed,” or that the tree “has been pruned.” Clearly, the earlier we prune the tree, the more subproblems we eliminate.
We are not done, though, because node P3 looks promising. There, we branch on the fractional variable x3. Setting x3 = 0 results in subproblem P5, whose solution yields a new integer solution

x = (0, 1, 0, 1), with value 31.

This solution is worse than the first one; yet, it is the best we can do down that branch, so we may prune the node. Setting x3 = 1 results in the infeasible subproblem P6: indeed, if we try putting both items 3 and 4 into the knapsack, we exceed its capacity (6 + 5 > 7). Now we may report the solution x = (1, 0, 0, 1) as optimal. We have explored a fair number of nodes, but many of them have been eliminated in the process.
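As a check on the example's arithmetic, the following sketch recomputes the relaxation bound at every node of Fig. 12.21. Branching decisions are encoded as variables fixed to 0 or 1, and the greedy ratio rule (exact for knapsack relaxations, as noted earlier) stands in for a general-purpose LP solver.

```python
values  = [10, 7, 25, 24]
weights = [2, 1, 6, 5]
capacity = 7

def lp_bound(fixed):
    """Greedy LP bound with some variables fixed by branching.

    `fixed` maps variable indices to 0 or 1; the remaining variables are
    relaxed to [0, 1]. Returns (bound, x) or (None, None) if infeasible.
    """
    remaining = capacity - sum(weights[i] for i, v in fixed.items() if v)
    if remaining < 0:
        return None, None                 # the fixed items already overflow
    x = [float(fixed.get(i, 0)) for i in range(len(values))]
    bound = float(sum(values[i] for i, v in fixed.items() if v))
    free = sorted((i for i in range(len(values)) if i not in fixed),
                  key=lambda i: values[i] / weights[i], reverse=True)
    for i in free:
        take = min(1.0, remaining / weights[i])
        x[i] = take
        bound += take * values[i]
        remaining -= take * weights[i]
        if remaining <= 0:
            break
    return bound, x

# Expected bounds: P0 36.2, P1 33.67, P2 36, P3 35.17,
# P4 34 (integer), P5 31 (integer), P6 infeasible.
for name, fixed in [("P0", {}), ("P1", {3: 0}), ("P2", {3: 1}),
                    ("P3", {3: 1, 0: 0}), ("P4", {3: 1, 0: 1}),
                    ("P5", {3: 1, 0: 0, 2: 0}), ("P6", {3: 1, 0: 0, 2: 1})]:
    print(name, lp_bound(fixed))
```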
Let us generalize the above scheme a bit. The idea of branching consists of taking a problem P(S), with feasible set S, and generating a set of subproblems by partitioning S into a collection of subsets S1, …, Sq such that

S1 ∪ S2 ∪ ⋯ ∪ Sq = S
Then, for a minimization problem with objective function f, we have

min { f(x) : x ∈ S } = min { min { f(x) : x ∈ Sk } : k = 1, …, q }
The rationale behind this decomposition of the feasible set is that solving the problems over smaller sets should be easier; or, at least, the bounds obtained by solving the relaxed problems should be tighter. For efficiency reasons it is advisable, but not strictly necessary, to partition the set S in such a way that the subsets are disjoint:

Si ∩ Sj = ∅ for i ≠ j
Of course, in a MILP model involving binary and continuous variables, we branch only on binary variables. But what about general integer variables? It is clearly unreasonable to create a branch for each possible integer value. The standard strategy is again to generate only two subproblems, branching on a fractional variable as follows. Assume that an integer variable xj takes a noninteger value x̂j in the optimal solution of the relaxed subproblem. Then two subproblems are generated; in the downchild we add the constraint36

xj ≤ ⌊x̂j⌋

to the formulation; in the upchild we add

xj ≥ ⌈x̂j⌉
For instance, if x̂j lies strictly between 4 and 5, we generate two subproblems with the addition of constraints xj ≤ 4 (for the downchild) and xj ≥ 5 (for the upchild).
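In code, the two added bounds come straight from math.floor and math.ceil; the value 4.7 below is just an arbitrary fractional value consistent with the constraints above.

```python
import math

def branch_bounds(x_hat_j):
    """Bounds added to the two children when branching on a fractional value."""
    return math.floor(x_hat_j), math.ceil(x_hat_j)

down, up = branch_bounds(4.7)  # any value in (4, 5) gives the same children
print(f"downchild adds x_j <= {down}; upchild adds x_j >= {up}")  # 4 and 5
```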
Now we may state a branch and bound algorithm in fairly general terms. To fix ideas, we refer to a minimization problem; it is easy to adapt the idea to a maximization problem. Given subproblem P(Sk), let ν[P(Sk)] denote the value of its optimal solution, and let β[P(Sk)] be a lower bound:

β[P(Sk)] ≤ ν[P(Sk)]
Note that P(Sk) can be fathomed only by comparing the lower bound β[P(Sk)] with an upper bound on ν[P(S)], such as the value of the best feasible solution found so far. It is not correct to fathom P(Sk) on the basis of a comparison with another subproblem P(Si) such that β[P(Si)] < β[P(Sk)]: comparing two lower bounds proves nothing about how the optimal values of the two subproblems compare.
Fundamental branch and bound algorithm
1. Initialization. The list of open subproblems is initialized to P(S); the value of the incumbent solution ν* is set to +∞. At each step, the incumbent solution is the best integer solution found so far.
2. Selecting a candidate subproblem. If the list of open subproblems is empty, stop: the incumbent solution x*, if any has been found, is optimal; if ν* = +∞, the original problem was infeasible. Otherwise, select a subproblem P(Sk) from the list.
3. Bounding. Compute a lower bound β(Sk) on ν[P(Sk)] by solving a relaxed problem. Let x̂ be the optimal solution of the relaxed subproblem.
4. Prune by optimality. If x̂ is feasible for P(Sk), i.e., it satisfies the integrality requirements, prune subproblem P(Sk). Furthermore, if β(Sk) < ν*, update the incumbent solution x* and its value ν*. Go to step 2.
5. Prune by infeasibility. If the relaxed subproblem is infeasible, eliminate P(Sk) from further consideration. Go to step 2.
6. Prune by bound. If β(Sk) ≥ ν*, eliminate subproblem P(Sk) and go to step 2.
7. Branching. Replace P(Sk) in the list of open subproblems with a list of child subproblems P(Sk1), P(Sk2), …, P(Skq), obtained by partitioning Sk; go to step 2.
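The following is a minimal Python sketch of these seven steps, specialized to the knapsack instance of Example 12.24. To match the minimization convention of the algorithm (so the pruning test of step 6 applies verbatim), the objective is negated; the greedy ratio rule again stands in for a general-purpose LP solver, and the list of open subproblems is managed depth-first.

```python
import math

values  = [10, 7, 25, 24]
weights = [2, 1, 6, 5]
capacity = 7
n = len(values)

def relax(fixed):
    """Bound for the negated objective; (beta, x_hat), or None if infeasible."""
    remaining = capacity - sum(weights[i] for i, v in fixed.items() if v)
    if remaining < 0:
        return None
    x = [float(fixed.get(i, 0)) for i in range(n)]
    profit = float(sum(values[i] for i, v in fixed.items() if v))
    free = sorted((i for i in range(n) if i not in fixed),
                  key=lambda i: values[i] / weights[i], reverse=True)
    for i in free:
        take = min(1.0, remaining / weights[i])
        x[i] = take
        profit += take * values[i]
        remaining -= take * weights[i]
        if remaining <= 0:
            break
    return -profit, x

nu_star, x_star = math.inf, None      # step 1: initialization (no incumbent)
open_list = [{}]                      # root problem: nothing fixed yet
while open_list:                      # step 2: stop when the list is empty
    fixed = open_list.pop()           #   (depth-first selection)
    result = relax(fixed)             # step 3: bounding
    if result is None:                # step 5: prune by infeasibility
        continue
    beta, x_hat = result
    if beta >= nu_star:               # step 6: prune by bound
        continue
    frac = [i for i in range(n) if 0 < x_hat[i] < 1]
    if not frac:                      # step 4: prune by optimality
        nu_star, x_star = beta, x_hat #   (beta < nu*, so update the incumbent)
        continue
    j = frac[0]                       # step 7: branch on a fractional variable
    open_list += [{**fixed, j: 0}, {**fixed, j: 1}]

print(x_star, -nu_star)  # [1.0, 0.0, 0.0, 1.0] 34.0, as in the example
```

This visits the same seven nodes as Fig. 12.21, dives into the x4 = 1 branch first, finds the incumbent 34 at P4, and prunes P6, P5, and P1 without further branching.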
A thorny issue is which variable we should branch on. Similarly, we should decide which subproblem to select from the list at step 2 of the branch and bound algorithm. As is often the case, there is no general answer; software packages offer different options to the user, and some experimentation may be required to come up with the best strategy.
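As a taste of such options, here is a sketch of one classical rule for picking the branching variable: choose the one whose relaxed value is “most fractional,” i.e., closest to 0.5 (commercial solvers offer more sophisticated strategies, such as pseudocost or strong branching).

```python
def most_fractional(x_hat, integer_vars):
    """Return the index of the integer variable whose fractional part is
    closest to 0.5, or None if the relaxed solution is already integer."""
    candidates = [i for i in integer_vars if x_hat[i] != int(x_hat[i])]
    if not candidates:
        return None
    return min(candidates, key=lambda i: abs(x_hat[i] - int(x_hat[i]) - 0.5))

# Illustrative relaxed solution (not from the example): 0.5 beats 0.8.
print(most_fractional([1.0, 0.5, 0.0, 0.8], range(4)))  # 1
```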
Many years ago, the ability of branch and bound methods to solve realistically sized models was quite limited. Quite impressive improvements have been made in commercial branch and bound packages, and these, together with the availability of extremely fast and cheap computers, have made branch and bound a practical tool for business management. Nevertheless, there remain many practical problems for which finding the optimal solution requires a prohibitive computational effort. Domain-dependent heuristics have been developed, but we should note that the above branch and bound scheme can be twisted to yield high-quality heuristics by a simple trick. We should just relax the pruning condition β(Sk) ≥ ν* as follows:

β(Sk) ≥ ν* − δ,

where δ > 0 is a given threshold representing the minimal absolute improvement over the incumbent that we require to explore a branch. Doing so, we might miss the true optimal solution, but we might prune many additional branches, with the guarantee that the difference between the best integer solution found and the optimal solution is bounded by δ. If we prefer to state the threshold in relative terms, we can prune a node when

β(Sk) ≥ (1 − ε) · ν*,

assuming ν* > 0. In this case, ε is related to a guarantee on the maximum percentage suboptimality. If ε = 0.01, we know that we may have missed the optimal solution, but only one that would improve the best integer solution we found by at most 1%. It is up to the user to find the best tradeoff between computational effort and solution quality. Another interesting way to reduce computational effort is to reformulate the model in order to improve its solvability, as we illustrate next.
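A sketch of the relaxed pruning test, with both tolerances as parameters (δ and ε as defined above; the relative test assumes a positive incumbent value ν*):

```python
import math

def prune_by_bound(beta_k, nu_star, abs_gap=0.0, rel_gap=0.0):
    """Pruning test for a minimization problem with optional tolerances.

    With abs_gap = rel_gap = 0 this is the exact test beta >= nu*;
    abs_gap is delta (minimal absolute improvement required) and
    rel_gap is epsilon (minimal relative improvement required).
    """
    if beta_k >= nu_star - abs_gap:
        return True
    if math.isfinite(nu_star) and beta_k >= (1.0 - rel_gap) * nu_star:
        return True  # relative test; assumes nu_star > 0
    return False

# The exact test would keep this node (99.5 < 100), but a 1% relative
# tolerance discards it: any solution below can be at most 1% better.
print(prune_by_bound(beta_k=99.5, nu_star=100.0, rel_gap=0.01))  # True
```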