Since I’ll be presenting the paper “Prioritization via Stochastic Optimization” by Koç and Morton on Tuesday, I’d like to delve a little deeper into a section of the paper that I won’t be covering in my presentation: Prioritization Cuts.

First things first, I’d like to refer you to a previous blog post for a bit more background on activity prioritization and motivation for the problem.

Next, I’d like to briefly explain the idea of a “cut.” When solving an optimization model, it is at times impractical to solve the full model to optimality due to time or memory limits. Thus, clever optimizers often find ways to “relax” the model, basically removing or loosening constraints that contribute to the model’s difficulty. A prime example of this is integer programming, where it is common to drop the integrality restriction, allowing continuous decision variables in the solution. Of course, more often than not the relaxed solution won’t be feasible for the original problem, so optimizers have to find a way to work back to a feasible solution. Cuts, as their name implies, are constraints that can be added iteratively or all at once to “cut off” infeasible solutions. Once all the cuts are added – and hopefully well before – the solver returns an optimal feasible solution to the original model.
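To make this concrete, here is a tiny invented example (not from the paper) of relaxing integrality and then tightening back with a cut, using SciPy’s LP solver:

```python
# Toy illustration (invented, not from the paper): solve the LP relaxation of a
# tiny integer program, then add a valid cut that recovers an integer optimum.
# Problem: maximize x + y subject to 2x + 2y <= 3, with x, y in {0, 1}.
from scipy.optimize import linprog

c = [-1, -1]                      # linprog minimizes, so negate to maximize x + y
bounds = [(0, 1), (0, 1)]

# LP relaxation: drop integrality, keep 2x + 2y <= 3.
relaxed = linprog(c, A_ub=[[2, 2]], b_ub=[3], bounds=bounds)
print(-relaxed.fun)               # 1.5: a fractional optimum, e.g. x = 1, y = 0.5

# Valid cut: 2x + 2y <= 3 implies x + y <= 1.5, so every *integer* solution
# satisfies x + y <= 1. Adding this inequality cuts off the fractional point.
cut = linprog(c, A_ub=[[2, 2], [1, 1]], b_ub=[3, 1], bounds=bounds)
print(-cut.fun)                   # 1.0: attained at an integer vertex, e.g. x = 1, y = 0
```

In real solvers this happens at scale, with families of cuts generated automatically, but the mechanics are the same: the cut removes fractional points without removing any integer-feasible ones.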

Recall the full two-stage stochastic integer program for the general activity prioritization problem. Clearly, this model has numerous integer variables, making it difficult to solve directly. Thus, we’d like to solve the linear relaxation (which is quite easy to do) and add cuts that help us get back to a feasible solution. As a recap, our model is
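To get a feel for the problem without the full formulation, here is a hypothetical toy instance (all activity names, costs, values, and budgets invented, and using one simplified funding rule rather than the paper’s exact model): we pick a single priority order of activities, and in each budget scenario we fund activities in that order until the first one that no longer fits.

```python
# Hypothetical toy instance of activity prioritization (all data invented):
# choose one priority order; each scenario reveals a random budget, and
# activities are funded in priority order until one no longer fits.
from itertools import permutations

cost = {"A": 2, "B": 2, "C": 1}
value = {"A": 6, "B": 5, "C": 4}
scenarios = [(0.5, 1), (0.5, 3)]        # (probability, budget)

def expected_value(order):
    total = 0.0
    for prob, budget in scenarios:
        remaining, gained = budget, 0
        for act in order:               # respect the priority order strictly
            if cost[act] > remaining:
                break                   # lower-priority activities are blocked
            remaining -= cost[act]
            gained += value[act]
        total += prob * gained
    return total

best = max(permutations(cost), key=expected_value)
print(best, expected_value(best))       # ('C', 'A', 'B') scores 0.5*4 + 0.5*10 = 7.0
```

Brute force over orderings works at this size; the point of the stochastic-programming formulation is to find the best priority order without enumerating all of them.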

We’ll start by going over some terminology that we’ll need to discuss the prioritization cuts.

Let

and

Also, for , let

.

**Improving:** A scenario is called *improving* if for any two vectors with , we have . (Note refers to the set of selected activities in solution vector x.) Let be the set of improving scenarios.

**Dominating scenario:** For two scenarios and , we say *dominates* if . Let be the set of dominating scenario pairs.

**Dominating activity:** Let . Construct a new vector such that every element is identical to except the ith element, which is set to 1, and the jth element, which is set to 0. We will say activity i *dominates* activity j if with and and for all scenarios. Let be the set of dominating activity pairs.

Finally, we are ready to propose some cuts for the activity prioritization model.

**Proposition 1:** There exists an optimal solution to the activity-prioritization problem such that the optimal activity selection vector satisfies for every and for all such that .

**Proposition 2:** There exists an optimal solution to the activity-prioritization problem such that the optimal activity selection vector satisfies for every and for all .

An important characteristic of these sets of cuts is that they are what optimizers call “super-valid” inequalities, as opposed to plain old valid inequalities. The reason is that these new constraints cut off not only feasible points but sometimes optimal points as well. What super-valid inequalities guarantee, however, is that at least one optimal solution remains that is not cut off by the additional constraints.
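A symmetry-breaking constraint gives a quick feel for this idea (my own example, not the paper’s): in a tiny problem with two interchangeable activities, forcing x1 ≥ x2 cuts off one of the two optimal solutions but provably keeps the other.

```python
# Toy example in the spirit of a "super-valid" inequality (invented):
# maximize x1 + x2 subject to x1 + x2 <= 1 with binary variables.
# The optima (0, 1) and (1, 0) are symmetric; adding x1 >= x2 cuts off
# (0, 1), yet an optimal solution, (1, 0), survives.
from itertools import product

feasible = [(x1, x2) for x1, x2 in product((0, 1), repeat=2) if x1 + x2 <= 1]
best = max(x1 + x2 for x1, x2 in feasible)
optima = [p for p in feasible if sum(p) == best]

after_cut = [p for p in optima if p[0] >= p[1]]   # the symmetry-breaking cut
print(optima)      # [(0, 1), (1, 0)] - both optimal
print(after_cut)   # [(1, 0)] - one optimum cut off, one remains
```

An ordinary valid inequality could never remove (0, 1) here, since it is integer feasible; a super-valid inequality may, as long as some optimum survives.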

That’s about it for prioritization cuts. For a formal proof of why these cuts work, I’ll refer you to the paper. Thanks for reading!

~AGS

[1] Koç, Ali, and David P. Morton. “Prioritization via stochastic optimization.” *Management Science* 61.3 (2014): 586-603.

This is a very cool model. I especially like the IP formulation of a partition. Since it is somehow related to the idea of “cuts,” I’d like to recap the difference between two types of “on-the-fly” constraint generation: lazy constraints and user cuts.

When a model has too many constraints to add all at once, it is often easier to begin with a subset of the constraints and jump right into the branch-and-bound tree. At an integer feasible node, we can check whether the solution is feasible for the actual model we want to solve, or just for the relaxed version. In the latter case, we can add lazy constraints to cut off this integer solution. The relaxed model is not valid without these lazy constraints.

By contrast, user cuts hack away at the feasible space of the LP relaxation. They don’t remove any of the integer feasible points, and we can add them throughout the optimization process.
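The outer loop of on-the-fly constraint generation can be sketched with a toy Kelley-style cutting-plane example (invented, and not tied to any particular solver’s callback API): maximize x + y over the unit disk, starting with only box constraints and adding one violated tangent cut per round.

```python
# Sketch of iterative constraint generation (Kelley-style cutting planes,
# invented example): maximize x + y over the unit disk x^2 + y^2 <= 1,
# starting from the box [-1, 1]^2 and adding one tangent cut per iteration.
import math
from scipy.optimize import linprog

c = [-1.0, -1.0]                       # minimize -(x + y)
cuts_A, cuts_b = [], []
for _ in range(100):
    res = linprog(c, A_ub=cuts_A or None, b_ub=cuts_b or None,
                  bounds=[(-1, 1), (-1, 1)])
    x, y = res.x
    norm = math.hypot(x, y)
    if norm <= 1 + 1e-6:               # current point is (nearly) feasible: stop
        break
    # Add the tangent to the disk at (x, y)/norm; it cuts off (x, y)
    # but no point inside the disk.
    cuts_A.append([x / norm, y / norm])
    cuts_b.append(1.0)

print(round(-res.fun, 3))              # close to sqrt(2), about 1.414
```

Real solvers run this kind of loop inside branch and bound via callbacks; the distinction in the comment above is whether the generated constraints are required for correctness (lazy constraints) or merely tighten the relaxation (user cuts).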
