Network interdiction and Zombie attack!

In this post, I would like to write about the second paper I will present in class. I chose a paper about multicommodity flow network interdiction, since both multicommodity flow and network interdiction models are interesting topics to me.

As Amanda and Eli mentioned in their previous blog posts, network interdiction models have lots of applications especially for the military and homeland security.

In network interdiction models, a leader destroys arcs to disrupt the flows of a follower (the enemy). In the paper “Algorithms for Discrete and Continuous Multicommodity Flow Network Interdiction Problems,” the model allows each commodity to have multiple supply nodes and multiple demand nodes. The leader’s objective is to interdict arcs, subject to a budget B, so as to minimize the follower’s optimal reward. The paper considers both discrete and continuous interdiction: in the discrete case each arc is either fully destroyed or fully operational, while in the continuous case each arc can be partially destroyed, with its capacity reduced in proportion to the level of disruption.

The mathematical model is as follows:

[image: mathematical model formulation]

where y is the follower’s decision variable and x is the leader’s: x gives the percentage of disruption of each arc, and y gives the amount of flow passing through that arc. The authors present several methods to solve both the discrete and continuous interdiction models. Here I would like to focus on one specific application area of interdiction models… like a Zombie attack!!!
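To make the discrete case concrete, here is a minimal sketch in Python (my own toy illustration, not the authors’ algorithm): the leader enumerates all interdiction sets within a unit-cost budget, and the follower responds with a max flow on the surviving arcs. The paper’s model is multicommodity with general interdiction costs; this sketch is single-commodity on a hypothetical five-arc network.

```python
from collections import deque
from itertools import combinations

def max_flow(cap, s, t):
    """Edmonds-Karp max flow; cap maps arcs (u, v) to capacities."""
    residual = dict(cap)
    flow = 0
    while True:
        # BFS for an augmenting s-t path in the residual network
        parent = {s: None}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for (a, b), r in list(residual.items()):
                if a == u and r > 0 and b not in parent:
                    parent[b] = (a, b)
                    queue.append(b)
        if t not in parent:
            return flow
        # walk back from t to find the bottleneck, then push flow along the path
        path, v = [], t
        while parent[v] is not None:
            a, b = parent[v]
            path.append((a, b))
            v = a
        push = min(residual[e] for e in path)
        for a, b in path:
            residual[(a, b)] -= push
            residual[(b, a)] = residual.get((b, a), 0) + push
        flow += push

def best_interdiction(cap, s, t, budget):
    """Leader removes up to `budget` arcs (unit cost each) to minimize
    the follower's resulting max flow; brute force over arc subsets."""
    arcs = list(cap)
    best_flow, best_set = max_flow(cap, s, t), frozenset()
    for k in range(1, budget + 1):
        for removed in combinations(arcs, k):
            reduced = {e: c for e, c in cap.items() if e not in removed}
            f = max_flow(reduced, s, t)
            if f < best_flow:
                best_flow, best_set = f, frozenset(removed)
    return best_flow, best_set

# hypothetical five-arc network: the follower's undisturbed max flow is 7
network = {('s', 'a'): 4, ('s', 'b'): 3, ('a', 't'): 3, ('b', 't'): 4, ('a', 'b'): 1}
```

With budget 1, removing (s, a) or (b, t) cuts the follower’s flow from 7 to 3. The continuous case would instead scale each capacity by (1 − x) for a disruption fraction x.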

To reach a safe place during a zombie attack, we can use shortest path models, and with an interdiction model we can also slow the attack down. In this context, the enemy is the zombies and the leader is the humans, which turns the problem into disruption of a zombie attack, one of the most IMPORTANT!!! homeland security issues. The objective becomes minimizing the zombies’ reward, since they want to eat human brains and we want to escape from them. There are some guidelines out there on how to escape a zombie attack, but as operations researchers, we should use our own tools. We can suppose that after their first bite of brain they would like to eat more and more… but I’m not sure, just guessing. Anyway, with this network interdiction idea we can decrease the number of our friends who get slaughtered and recruited… Here we see how interdiction models help secure our homeland, not just from bad people but also from zombies.

Walkers, The Walking Dead, Season 5. Photo credit: Frank Ockenfels 3/AMC

 

If you are curious about zombies, there is a whole archive at the “Punk Rock Operations Research” blog (https://punkrockor.com/tag/zombies/) and even a zombie apocalypse course at the University of Michigan that teaches students survival skills.

Note: Stochastic network interdiction has also been studied in the literature; a great source is the paper by Janjarassuk and Linderoth.

References:

Lim, Churlzu, and J. Cole Smith. “Algorithms for discrete and continuous multicommodity flow network interdiction problems.” IIE Transactions 39.1 (2007): 15-26.

Janjarassuk, U. and Linderoth, J. (2008), Reformulation and sampling to solve a stochastic network interdiction problem. Networks, 52: 120–132. doi:10.1002/net.20237

https://punkrockor.com

 

 

 


Pareto Frontier Heuristics in Humanitarian Logistics

Last time I wrote about multi-objective programming.  Today, I would like to follow up with a few more details and more generally a discussion of the practical use of it in humanitarian logistics.

In the paper, Rath and Gutjahr use a variant of the bounded objective method called the adaptive epsilon constraint method.  Unfortunately, for anything beyond a bi-objective formulation, this method becomes quite nasty and complicated.  Suffice it to say, where the epsilon constraint method chooses predetermined bounds on the constrained objective, the AECM uses knowledge of the objective space from previous solutions (we are solving for the whole Pareto frontier) to create better constraint values for future searches. Hence, it adapts its constraints based on new data and can significantly decrease the total search time relative to the more naïve ECM.
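A bare-bones sketch of the epsilon-constraint idea, assuming a finite set of candidate solutions with integer objectives (the real AECM is considerably more involved): maximize one objective, constrain the other, and tighten the bound using the solution just found.

```python
def pareto_frontier_ecm(points, delta=1):
    """Basic epsilon-constraint sweep over a finite candidate set.

    points: (f1, f2) pairs, both maximized, with integer f2 values.
    Maximize f1 subject to f2 >= eps (ties broken by f2, which keeps each
    solution Pareto-optimal), then tighten eps just past the solution found.
    """
    frontier = []
    eps = min(f2 for _, f2 in points)      # initially non-binding
    while True:
        feasible = [p for p in points if p[1] >= eps]
        if not feasible:
            return frontier
        best = max(feasible, key=lambda p: (p[0], p[1]))  # lexicographic max
        frontier.append(best)
        eps = best[1] + delta              # reuse the solution just found
```

For example, on the candidates (3,1), (2,3), (1,4), (2,2), (0,5) the sweep returns the four non-dominated points and skips (2,2).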

Using these constraints, Rath and Gutjahr then seek the optimal objective value by narrowing upper and lower bounds on the solution until they meet, which certifies optimality.  From there, the primary contribution of the paper is a set of heuristics to speed up the computation of these bounds.  In each iteration, they add new constraints to the upper-bound problem to obtain a tighter but more computationally intense relaxation, and they use basic heuristics to find a feasible solution for the lower bound.

All this effort is aimed at finding a faster, if not quite as accurate, solution.  So the practical question is whether this still provides a useful answer and, moreover, whether it is even necessary.

I think it would be hard to argue that an approximate answer is not acceptable in settings where we would normally want an exact one.  This comes simply from the assumption of perfect knowledge, which is probably the worst of all the model’s assumptions.  As Sam pointed out in class, do we even know that all of the network connections (roads) are still in place?  Even if this is accounted for, isn’t it possible that they may be restored over time, making our solution suboptimal?  Further, our knowledge of demand is limited to the information we have gathered so far, information that is itself inexact.

So the ability to compute the exact solution is of little use.  We are merely looking for guidance, and for that a heuristic is just fine.  In fact, I would argue that in a real aid deployment a heuristic is perhaps more useful: it is faster and more understandable to non-practitioners while producing what is arguably an equally useful solution.

The other question is whether this complicated model is even worth solving.  As was discussed last week, finding the Pareto frontier is not an insignificant problem and is perhaps best avoided if we reasonably can.  Here, I think, is the crux of the matter.  At least for the problem they are considering, I question the usefulness of the multi-objective criteria and the necessity of minimizing cost and maximizing service at the same time. Either the organization is working under a given budget or the budget can be changed based on increased donations.  In either case, it makes just as much sense to solve for the Pareto-optimal solution under the current budget and re-solve later if necessary.  The lack of perfect information would necessitate these re-solves in any case.

The only real benefit of knowing the entire Pareto frontier is getting a good idea of the trade-offs.  However, even this is of only so much value given our lack of knowledge and the lag time between when an organization decides to contribute resources and when those resources are actually deployed.  In that time, not only the knowledge but the actual situation on the ground, and hence the Pareto frontier, would likely have changed.

So I see the benefit of models like this being more prominent in domestic disasters where we have better knowledge, faster response, and more control over the budget of the operation.

Triage: START and STM

In the aftermath of a mass-casualty incident (MCI), there is a sudden increase in demand for emergency services. As resources (such as ambulances and medical staff) are limited, even patients in critical condition may have to wait before receiving care, so limited medical resources need to be rationed efficiently. This process is called triage. Patients are classified into triage classes based on the severity of their medical condition; those in the most severe classes are treated first, and the less severe groups wait to be treated afterwards.

The most commonly adopted triage process in the United States is Simple Triage and Rapid Treatment (START). START classifies patients into the following four classes:

  1. Immediate: deemed critically injured and requiring immediate intervention
  2. Delayed: injured but not expected to die within the first hour if care is delayed
  3. Expectant: presumed deceased or catastrophically injured; survival is not expected
  4. Minor: able to walk away

After patients are classified, START gives the highest priority to Immediates, then to Delayeds. Expectants and Minors are served after the two time-critical classes are cleared.

This is a simple and easy-to-implement prioritization scheme, but a number of studies have pointed out limitations of START and similar methods, such as:

  • There is no consideration of resource availability.
  • There is a tacit assumption that patients do not deteriorate.
  • A worst-first strategy makes poor use of scarce resources: more salvageable victims may be left to deteriorate while limited resources are used on patients with less chance of survival.

In particular, Sacco et al. (2005, 2007) proposed the Sacco Triage Method (STM) to overcome these drawbacks. They set up a mathematical formulation of resource-constrained triage that maximizes the expected number of survivors, subject to the timing and availability of transport and treatment resources. The model also needs predictions of survival probabilities and of how they change over time (deterioration). Their formulation fits a linear programming framework, so the problem can be solved efficiently even for large-scale MCIs.
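To illustrate the flavor of the formulation (not STM itself, which handles timing and deterioration), here is a stripped-down, single-resource snapshot: maximize expected survivors subject to one resource budget. With a single constraint, the LP optimum is given exactly by the greedy ratio rule; the patient data below are hypothetical.

```python
def max_expected_survivors(patients, capacity):
    """Solve max sum p_i x_i  s.t.  sum r_i x_i <= capacity, 0 <= x_i <= 1.

    patients: (survival_prob, resource_need) pairs with resource_need > 0.
    For a single resource constraint, treating patients in decreasing
    p/r order (fractionally for the last one) is an exact LP optimum.
    """
    total, remaining = 0.0, capacity
    for p, r in sorted(patients, key=lambda pr: pr[0] / pr[1], reverse=True):
        x = min(1.0, remaining / r) if remaining > 0 else 0.0  # fraction treated
        total += p * x
        remaining -= r * x
    return total
```

With three hypothetical patients (0.9, 3), (0.5, 1), (0.3, 1) and 4 resource units, the rule treats the second and first patients fully for 1.4 expected survivors; a worst-first rule would spend the budget differently.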

In their computational results, STM yielded a higher number of expected survivors than START in all of the simulations. STM also appeared to mitigate the impact of declining resources when resource levels change, whereas survivorship under START drops dramatically.

In my presentation on Thursday, I will introduce another, very different approach to resource-based patient prioritization.

Soovin

References:

  • Argon NT, Winslow JE, Ziya S (2011) Triage in the aftermath of mass-casualty incidents. Wiley Encyclopedia of Operations Research and Management Science.
  • Sacco WJ, Navin DM, Fiedler KE, Waddell RK, II, Long WB, Buckman Jr RF (2005) Precise formulation and evidence-based application of resource-constrained triage. Acad. Emergency Medicine.
  • Sacco WJ, Navin DM, Waddell RK, Fiedler KE, Long WB, Buckman RF (2007) A new Resource-Constrained triage method applied to victims of penetrating injury. J. Trauma: Injury, Infection, Critical Care.
  • Uzun Jacobson E, Argon NT, Ziya S (2012) Priority assignment in emergency response. Oper. Res.

Prioritization Cuts

 

Since I’ll be presenting the paper “Prioritization via Stochastic Optimization,” by Koç and Morton, on Tuesday, I’d like to delve a little deeper into a section of the paper that I won’t be covering in my presentation: Prioritization Cuts.

First things first, I’d like to refer you to a previous blog post for a bit more background on activity prioritization and motivation for the problem.

Next, I’d like to briefly explain the idea of a “cut.” When solving an optimization model, it is at times impracticable to solve the full model to optimality due to time or computational memory limits. Thus, clever optimizers often find ways to “relax” the model, basically removing or loosening constraints that contribute to the model’s difficulty. A prime example of this is integer programming, where it is common to remove the integrality restriction, allowing continuous decision variables in the solution. Of course, more often than not the relaxed solution won’t be feasible to the original problem, so optimizers have to find a way to work back to a feasible solution. Cuts, as their name implies, are constraints that can be added iteratively or all at once to “cut off” infeasible solutions. Once all the cuts are added – and hopefully well before – the solver returns an optimal feasible solution to the original model.

Recall the full two-stage stochastic integer program for the general activity prioritization problem. Clearly, this has numerous integral variables, making solving the actual model difficult. Thus, we’d like to solve the linear relaxation (which is quite easy to do) and add cuts that help us get back to a feasible solution. As a recap, our model is

\begin{aligned}
\min_{x,y,s}\quad & \sum_{\omega \in \Omega} q^\omega \sum_{j\in J}\sum_{i\in I} d_{ij}\, y_{ij}^\omega \\
\text{s.t.}\quad & \sum_{i \in I} x^\omega_i \leq k^\omega \\
& s_{ii'} + s_{i'i} \geq 1 && i < i',\; i,i' \in I \\
& s_{ii'} \in \{0,1\} && i \neq i',\; i,i' \in I \\
& x_i^\omega \geq x_{i'}^\omega + s_{ii'} - 1 && i \neq i',\; i,i' \in I,\; \omega \in \Omega \\
& x_i^\omega \geq y_{ij}^\omega && i \in I,\; j \in J \\
& \sum_{i \in I} y_{ij}^\omega = 1 && j \in J \\
& x_i^\omega \in \{0,1\} && i \in I \\
& y_{ij}^\omega \in \{0,1\} && i \in I,\; j \in J
\end{aligned}

We’ll start by going over some terminology that we’ll need to discuss the prioritization cuts.

Let

Y^\omega = \{(x,y) : x \in [0,1]^{|I|}, A^\omega x \leq b^\omega, (x,y) \in C^\omega\}

and

X^\omega = \{x : \exists y \text{ s.t. } (x,y) \in Y^\omega\}.

Also, for x \in X^\omega, let

h^\omega(x) \equiv c_x^\omega + \min_{\{y : (x,y) \in C^\omega\}} c_y^\omega y.

Improving: A scenario \omega is called improving if for any two vectors x, \bar{x} \in X^\omega with S(\bar{x}) \subseteq S(x), we have h^\omega(x) \leq h^\omega(\bar{x}). (Note S(x) refers to the set of selected activities in solution vector x). Let \Omega_V \subseteq \Omega be the set of improving scenarios.

Dominating scenario: For two scenarios \omega and \hat{\omega} \in \Omega, we say \omega dominates \hat{\omega} if x \in X^{\hat{\omega}} \implies x \in X^{\omega}. Let \Omega_D be the set of dominating scenario pairs.

Dominating activity: Let x \in [0,1]^{|I|}. Construct a new vector x_{(i,j)} such that every element is identical to x except the ith element is set to 1 and its jth element is set to 0. We will say activity i dominates activity j if x \in X^{\omega} with x_i = 0 and x_{j} = 1 \implies x_{(i,j)} \in X^{\omega} and h^{\omega}(x_{(i,j)}) \leq h^{\omega}(x) for all scenarios. Let I_D be the set of dominating activity pairs.

Finally, we are ready to propose some cuts for the activity prioritization model.

Proposition 1: There exists an optimal solution to the activity-prioritization problem such that the optimal activity selection vector \bar{x} satisfies \bar{x}_i^{\omega} \geq \bar{x}_i^{\hat{\omega}} for every i \in I and for all (\omega,\hat{\omega}) \in \Omega_D such that \omega \in \Omega_V.

Proposition 2: There exists an optimal solution to the activity-prioritization problem such that the optimal activity selection vector \bar{x} satisfies \bar{x}_i^{\omega} \geq \bar{x}_{j}^{\omega} for every \omega \in \Omega and for all (i,j) \in I_D.

An important characteristic of these sets of cuts is that they are what optimizers call “super-valid” inequalities, as opposed to plain-old valid inequalities. The reason for this is that these new constraints cut off not only feasible points but sometimes optimal points as well. What super-valid inequalities guarantee, however, is that there remains at least one optimal solution that is not cut off by these additional constraints.

That’s about it for prioritization cuts. For a formal proof of why these cuts work, I’ll refer you to the paper. Thanks for reading!

~AGS

 

[1] Koç, Ali, and David P. Morton. “Prioritization via stochastic optimization.”Management Science 61.3 (2014): 586-603.

 

ARMOR: a cool acronym at a cool airport

Like others, I’ll be using this week’s blog post to give some background on my upcoming presentation. My paper describes an application of operations research to airport security.

The Los Angeles International Airport (known as “LAX” in some niche social circles) is the second busiest airport in the United States in terms of passenger boardings. Boasting nearly 40 million passenger boardings in 2015, LAX is believed to be an attractive target for terrorists. Its role in domestic and international transportation cannot be overstated. As police and security resources are limited, completely securing the airport is impossible. Even if full security coverage were possible, it would be a logistical nightmare that would seriously delay passengers.

The authors of the paper developed ARMOR (Assistant for Randomized Monitoring over Routes) as an application to help security agencies determine where to allocate their limited resources. It is tailored to meet two key challenges in airport security. First, it provides randomization in its resource allocation. When adversaries are able to recognize patterns in security, they are able to exploit them. Randomization adds a much-desired layer of uncertainty to the adversary’s planning. Second, ARMOR addresses security forces’ uncertainty in information about the adversary. In the context of airport security, threats can manifest in numerous ways (bombs, hijackings, etc.) which require different resources for detection. ARMOR works to mitigate this by considering multiple threats simultaneously and formulating the best overall security plan.

ARMOR is based on the theory of Stackelberg games. In a Stackelberg game, a leader first makes decisions, to which a follower reacts in an attempt to optimize his or her reward. Consider the payoff table below for a Stackelberg game. Let L_1 and L_2 denote possible initial actions for the leader. Similarly, let F_1 and F_2 denote possible recourse actions for the follower.

[image: payoff table]

In a simultaneous game, there exists a Nash equilibrium where the leader chooses action L_1 and the follower chooses action F_1. In this situation, neither player can benefit unilaterally by changing his or her decision. However, we are not considering Nash equilibria here; rather, we allow the leader to choose an action first and the follower to selfishly react thereto. With this paradigm, the leader would select L_2, because it is known that the follower will subsequently select F_2 to obtain the maximum payoff. This strategy gives the leader a payoff of 3. If we allow the leader to select each action with a fixed probability, selecting L_1 and L_2 each with probability 0.5 would result in an expected reward of 3.5. The follower, capable of choosing only one action, would select F_2.
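Since the actual payoff table is in the image, here is a sketch with a hypothetical payoff assignment consistent with the numbers quoted above (leader payoffs [[2, 4], [1, 3]], follower payoffs [[1, 0], [0, 2]], rows L_1, L_2 and columns F_1, F_2): the follower observes the leader’s (possibly mixed) commitment and best-responds in expectation.

```python
def follower_best_response(q, follower_payoff):
    """Index of the follower action maximizing expected payoff against mix q."""
    cols = range(len(follower_payoff[0]))
    return max(cols, key=lambda j: sum(q[i] * follower_payoff[i][j]
                                       for i in range(len(q))))

def leader_value(q, leader_payoff, follower_payoff):
    """Leader's expected payoff when committing to the mixed strategy q first."""
    j = follower_best_response(q, follower_payoff)
    return sum(q[i] * leader_payoff[i][j] for i in range(len(q)))

# hypothetical payoffs consistent with the discussion above
L = [[2, 4], [1, 3]]   # leader's payoffs
F = [[1, 0], [0, 2]]   # follower's payoffs
```

Committing to the pure action L_2 gives leader_value([0, 1], L, F) = 3, while the 50/50 mix gives 3.5, matching the values described in the text.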

In a Bayesian Stackelberg game, we allow a set of leaders and a set of followers to play against one another. ARMOR incorporates a Bayesian Stackelberg game with one leader (airport security) and multiple followers (the variety of security threats). Next week, we’ll discuss how to model this Bayesian Stackelberg game first as a mixed-integer quadratic program, then as a mixed-integer linear program. We’ll discuss some of the challenges of creating the ARMOR software and also how it performed “in the field.”

Pita, James, et al. “Deployed ARMOR protection: the application of a game theoretic model for security at the Los Angeles International Airport.”Proceedings of the 7th international joint conference on Autonomous agents and multiagent systems: industrial track. International Foundation for Autonomous Agents and Multiagent Systems, 2008.

A brief history of gerrymandering

During my presentation tomorrow, I’ll be talking at length about several aspects of a political districting model. In fact, there’s so much to cover tomorrow that I ended up having to cut out all of the interesting anecdotes that I had originally planned to discuss. Rather than letting that information go to waste, however, I figured I could share it here instead.

As some of you are probably aware, political districting is an important problem because of something called “gerrymandering”.


If you don’t already know what gerrymandering is, I highly recommend watching the following video before continuing:


In essence, gerrymandering is the redrawing of congressional district lines in a strategic way that is usually done to maximize the advantage of a certain political party. If you follow politics at all, you’ve probably heard gerrymandering being discussed a few times, but what you might not know is how the term originated.

As it turns out, the term was first used in Massachusetts over 200 years ago in a situation that involved this guy:

elbridge gerry

This is Elbridge Gerry, 9th governor of Massachusetts and 5th vice president of the United States. In 1812, while serving as Massachusetts’ governor, Gerry signed a controversial redistricting plan that was highly partisan and contained a few “oddly shaped” districts like the one identified in red below:

 

original gerrymandering
As we’ll discuss in class tomorrow, a state’s political districts should ideally be as “geographically compact” as possible, so any long, sprawling districts like this one are often viewed with skepticism. The same was true in 1812. When the plan was announced, many newspapers and political opponents criticized its awkwardly shaped districts by drawing attention to the district in red that they said looked like a salamander.

 

[images: the salamander cartoon and the original gerrymandered district]

This comparison quickly spawned the term “Gerry-mander”, and Gerry’s political opponents reprinted the term so frequently that it eventually became synonymous with “district manipulation”.

Today, there are far more sophisticated ways to measure the geographical compactness of a district, but these visual comparisons still persist. One of my personal favorites out of those I read while researching recent examples of gerrymandering comes from a judge in Maryland. He claimed the state’s third district looked “reminiscent of a broken-winged pterodactyl, lying prostrate” across the state. This district can be seen in orange below:

maryland congressional districts.png

While you might not see salamanders or pterodactyls in these districts, the important point is that gerrymandering still remains relevant today, and operations research models offer one way to combat this problem.

 

References
http://www.governing.com/blogs/by-the-numbers/most-gerrymandered-congressional-districts-states.html

http://www.theatlantic.com/politics/archive/2012/09/the-twisted-history-of-gerrymandering-in-american-politics/262369/#slide2

 

Multi-Objective Optimization

Similarly to everyone else here, my mind is on the presentation I will be giving next week.  However, I will be saving most of the mathematical details of that paper* for my next blog post after I have given the presentation.  Instead, I would like to address one of the fundamental aspects of that paper- multi-objective optimization.

In the paper, Rath and Gutjahr consider the distribution of aid after a disaster.  This naturally requires an extension to the traditional supply routing problem, as we can optimize over both the placement of the warehouses at the outset and then the routing of supplies from those warehouses to the victims.  Further complicating matters, there is no guarantee that the supplies will be enough to meet demand, so we also want to maximize coverage- ensuring that the majority of victims receive some form of aid.

To solve this, Rath and Gutjahr employ a model which finds the Pareto frontier for these three objectives.  In brief, the Pareto frontier consists of every solution that is not dominated by another solution.  That is, to gain a benefit in one area, you must give up something in another.  So for

\text{maximize} \;(A,B)\\ \text{subject to}\;A+B\leq 4

(3,1) is on the Pareto frontier but (2,1) is not, since in the latter case, we can increase A or B without decreasing the other.
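The non-dominated filter behind this definition can be written in a few lines of Python (a brute-force sketch for a finite candidate set, not the paper’s method):

```python
def pareto_frontier(points):
    """Keep the non-dominated points (all objectives maximized).

    q dominates p if q is at least as good in every coordinate
    and strictly better in at least one.
    """
    def dominates(q, p):
        return (all(a >= b for a, b in zip(q, p))
                and any(a > b for a, b in zip(q, p)))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

On the example above, (3, 1) survives the filter while (2, 1) is removed, since (3, 1) dominates it.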

In choosing to find the Pareto frontier, they leave the user with more control over how to define the optimal solution.  It does come at a cost, however.  Most obviously, it increases the complexity of the answer.  In the Rath and Gutjahr example, we have a three-dimensional frontier, and it is easy to imagine further dimensions being added to accommodate network restoration. So we must be careful when doing multi-objective optimization with the Pareto frontier to provide enough information to the decision makers so they can understand the trade-offs without being overwhelmed with data.  The goal is to provide clear decisions, not simply to dump the problem on someone else.

The second problem with using the Pareto frontier is computational.  Finding the frontier often requires more time than finding a single optimal objective value.  Further, it is often necessary to use less traditional and less straightforward OR approaches, such as evolutionary algorithms or heuristics.

There are, of course, other methodologies for formulating multiple objectives.  We will consider a couple of the most common; for a more detailed treatment, see “Survey of multi-objective optimization methods for engineering.”

In my current work on ambulance dispatch, we have the problem of optimizing both survival for some patients and customer service for other, less critical patients.  As we have a clear preference (survival is more important than customer service), we can use the weighted-sum method.  This entails adding the objective functions together, weighted according to preference.  The result is merely one of the solutions on the Pareto frontier.  The higher the weight on survival relative to the weight on customer service, the more we favor trading customer service for survival along the Pareto frontier.  So this method limits our search and thus enables us to obtain a much quicker answer.
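As a sketch (with made-up survival/service numbers), the weighted-sum rule is just an argmax over weighted objective totals:

```python
def weighted_sum_choice(candidates, weights):
    """Pick the candidate maximizing the weighted sum of its objectives.

    candidates: objective tuples, e.g. (survival, customer_service), both
    maximized. With positive weights the winner is always Pareto-optimal,
    and raising one weight steers the choice toward that objective.
    """
    return max(candidates, key=lambda c: sum(w * v for w, v in zip(weights, c)))
```

With hypothetical candidates (0.9, 0.2), (0.7, 0.6), (0.4, 0.9), weighting survival heavily, say (5, 1), selects the survival-oriented plan, while (1, 2) selects the service-oriented one.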

It is also briefly worth noting that by varying the weights on the objectives it is possible to find the Pareto frontier if the problem is convex.  There is, however, no guarantee that the necessary discretization of such a search will find all the relevant information.

Another common method is the bounded objective method.  This requires a preference for which objective is most important.  That objective is then maximized (or minimized), and the other objectives are bounded in the constraints.  This method suffers both from possible infeasibility, if the bounds are not chosen with care, and from not necessarily producing a point on the Pareto frontier.  The preferred objective is clearly optimized, but nothing guarantees that there is not a better solution achieving the same preferred objective value with better values for the objectives in the constraints.  It is, however, a good approach when ‘good enough’ rather than fully optimal is acceptable.

Finally, we consider goal programming.  Goal programming sets a desired level we want to reach for each of our objectives and then seeks to minimize the distance between the solution and the desired outcome.  These differences may be weighted depending on preference.  Traditional goal programming also suffers from the possibility of not achieving a solution on the Pareto frontier.  This has been addressed to some extent in more recent algorithms; however, goal programming can still suffer from computational time problems, as it often involves the use of nonlinear programming.
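A minimal sketch of goal programming over a finite candidate set (again with hypothetical numbers): pick the candidate minimizing the weighted shortfall from each goal.

```python
def goal_programming_choice(candidates, goals, weights):
    """Pick the candidate minimizing the weighted shortfall from each goal.

    Only underachievement (goal minus attained value, when positive)
    is penalized, as in classic one-sided goal programming.
    """
    def shortfall(c):
        return sum(w * max(0.0, g - v) for w, g, v in zip(weights, goals, c))
    return min(candidates, key=shortfall)
```

With candidates (0.9, 0.2), (0.7, 0.6), (0.4, 0.9) and goals (0.8, 0.8), equal weights pick the balanced middle candidate, illustrating how the chosen point depends on the goals rather than on dominance alone.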

As can be seen from this brief survey, Rath and Gutjahr provide a fairly good solution with their heuristic for determining the Pareto frontier.  So long as there is not too much information for the decision maker to handle, this may be the best possible choice.  Other methods in the field generally seek individual points on the frontier and do so more quickly and with more precision, but they require some knowledge of the decision makers’ preferences.  In practice, such exact knowledge of preferences can be exceedingly hard to come by.

 

*Rath, Stefan, and Walter J. Gutjahr. “A math-heuristic for the warehouse location–routing problem in disaster relief.” Computers & Operations Research 42 (2014): 25-39.