Live Long and ProspOR

For my final blog post, I thought it might be nice to do some sort of review of the 15+ weeks we spent learning about public sector operations research. As such, I have compiled a summary of five things I learned over this past semester that were particularly important to me:

  1. Equity is important in public sector operations research. This is something that perhaps shouldn’t have been surprising to me, but was. I was intrigued by the idea that it’s possible to build characterizations of equity into a mathematical model and that such a model can be constructed in a myriad of ways. Some ways of modeling equity are certainly better than others, as we saw in Sam’s presentation of the paper “Modeling effectiveness-equity trade-offs in public service delivery systems,” but the inclusion of an equity measure is critical in many public sector problems such as assigning children to schools. [1]
  2. Bilevel optimization is cool (and hard). While talking about applications of bilevel optimization, we learned about some interesting ways it can be used to solve tricky problems with multiple decision makers. In Eli’s presentation about stochastic network interdiction, we saw how network interdiction problems work like a Stackelberg game where the leader tries to take links out of a network so the follower has to find other routes. [2] I also presented a paper about nuclear-weapons project interdiction that used bilevel optimization in a project management setting where the leader tried to make the follower’s tasks take as long as possible. [3]
  3. There are a lot of different ways to do facility location. First of all, when deciding where to locate a facility, what’s our goal? Do we want ambulances as close to as many people as possible? Do we want to place distribution centers so that all customers are “covered” (i.e., within some predefined distance radius)? Once we know what we want, how do we model it? Should we use a p-median model? Should we include queuing – and therefore randomness – in the model? Once we finally have a model, we have to solve it, which is often hard since facility placement requires binary decisions about whether to build a particular facility or not. I presented a paper that provided a novel approach to problems with random budgets, such as facility location problems, that involved building a two-stage stochastic program. [4]
  4. Presentations work a lot better when you have a moderator. I got to do something this semester I’ve never done before: moderate someone’s presentation. Not only was it fun, it encouraged me to read the paper (or at least the slides) before watching the presentation and really think about what made sense and what didn’t, because it was my job to come up with several questions for the person I was moderating. I think it’s a great idea to have assigned student moderators and if I teach a class someday I’ll definitely have my students moderate for each other.
  5. It’s possible to do good with good OR. Early in the semester, we discussed the idea of “doing good with good OR,” which basically boiled down to the question of whether our strategic problem solving was actually helping anyone. While there have certainly been times OR has done more harm than good, I’d say our ultimate answer was a resounding “Yes!” We saw many examples of the ways OR had been applied to solve real-world public sector problems; I thought it was encouraging to know our work has the potential to make life better for someone at the end of the day, whether it’s sending children to the best school or placing an ambulance where it has the greatest potential to save someone’s life (see [5] for one great example).

Anyway, all this is to say it was a great semester. Thanks for reading, and here’s to doing even better with even better OR in the future!

(PS: I’m very sorry for the horrible pun that is this post title.)

[1] Mandell, Marvin B. “Modelling effectiveness-equity trade-offs in public service delivery systems.” Management Science 37.4 (1991): 467-482.
[2] Cormican, Kelly J., David P. Morton, and R. Kevin Wood. “Stochastic network interdiction.” Operations Research 46.2 (1998): 184-197.
[3] Brown, Gerald G., et al. “Interdicting a nuclear-weapons project.” Operations Research 57.4 (2009): 866-877.
[4] Koç, Ali, and David P. Morton. “Prioritization via stochastic optimization.” Management Science 61.3 (2014): 586-603.
[5] Hutton, David W., Margaret L. Brandeau, and Samuel K. So. “Doing good with good OR: supporting cost-effective hepatitis B interventions.” Interfaces 41.3 (2011): 289-300.

Prioritization Cuts


Since I’ll be presenting the paper “Prioritization via Stochastic Optimization,” by Koç and Morton, on Tuesday, I’d like to delve a little deeper into a section of the paper that I won’t be covering in my presentation: prioritization cuts.

First things first, I’d like to refer you to a previous blog post for a bit more background on activity prioritization and motivation for the problem.

Next, I’d like to briefly explain the idea of a “cut.” When solving an optimization model, it is at times impracticable to solve the full model to optimality due to time or memory limits. Thus, clever optimizers often find ways to “relax” the model, removing or loosening the constraints that contribute most to the model’s difficulty. A prime example of this is integer programming, where it is common to remove the integrality restriction, allowing continuous decision variables in the solution. Of course, more often than not the relaxed solution won’t be feasible for the original problem, so optimizers have to find a way to work back to a feasible solution. Cuts, as their name implies, are constraints that can be added iteratively or all at once to “cut off” infeasible solutions. Once all the cuts are added – and hopefully well before – the solver returns an optimal feasible solution to the original model.
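
To make that relax-and-cut loop concrete, here’s a deliberately tiny sketch (a one-variable toy of my own, not the cuts from the paper): the LP relaxation returns a fractional point, and a single rounding cut removes it and recovers the integer optimum.

```python
# Toy relax-and-cut loop: maximize x subject to 2x <= 3 with x integer.
# The LP relaxation returns x = 1.5; one rounding cut (x <= 1) cuts off
# the fractional point and recovers the integer optimum.
import math

def solve_lp(upper_bounds):
    # stand-in "LP solver" for this one-variable toy: x <= every bound
    return min(upper_bounds)

bounds = [3 / 2]                  # from the constraint 2x <= 3
x = solve_lp(bounds)
while x != int(x):                # fractional, so infeasible for the IP
    bounds.append(math.floor(x))  # add a cut removing the fractional point
    x = solve_lp(bounds)

print(x)  # 1
```

Real cutting-plane methods work the same way in spirit: solve the relaxation, detect a violated condition, add a constraint, and repeat.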

Recall the full two-stage stochastic integer program for the general activity prioritization problem. Clearly, this model has numerous integer variables, making it difficult to solve directly. Thus, we’d like to solve the linear relaxation (which is quite easy to do) and add cuts that help us get back to a feasible solution. As a recap, our model is

\min_{x,y,s} \sum_{\omega \in \Omega}q^\omega\sum_{j\in J}\sum_{i\in I} d_{ij}y_{ij}^\omega\\  \text{s.t.} \phantom{XX} \sum_{i \in I}x^\omega_i \leq k^\omega\\  \phantom{XXXX}  s_{ii'} + s_{i'i} \geq 1 \phantom{XX} i < i', i,i' \in I\\  \phantom{XXXX} s_{ii'} \in \{0,1\} \phantom{XX} i \neq i', i,i' \in I\\  \phantom{XXXX} x_i^\omega \geq x_{i'}^\omega + s_{ii'} - 1 \phantom{XX} i \neq i', i,i' \in I, \omega \in \Omega\\  \phantom{XXXX} x_i^\omega \geq y_{ij}^\omega\phantom{XX} i \in I, j \in J\\  \phantom{XXXX} \sum_{i \in I} y_{ij}^\omega = 1\phantom{XX} j \in J\\  \phantom{XXXX} x_i^\omega \in \{0,1\}\phantom{XX} i \in I\\  \phantom{XXXX} y_{ij}^\omega \in \{0,1\}\phantom{XX} i \in I, j \in J

We’ll start by going over some terminology that we’ll need to discuss the prioritization cuts.


For each scenario \omega \in \Omega, define the set of feasible activity-selection and assignment decisions

Y^\omega = \{(x,y) : x \in [0,1]^{|I|}, A^\omega x \leq b^\omega, (x,y) \in C^\omega\}

and its projection onto the activity-selection variables,

X^\omega = \{x : \exists y \text{ s.t. } (x,y) \in Y^\omega\}.

Also, for x \in X^\omega, let

h^\omega(x) \equiv c_x^\omega + \min_{\{y : (x,y) \in C^\omega\}} c_y^\omega y.

Improving: A scenario \omega is called improving if for any two vectors x, \bar{x} \in X^\omega with S(\bar{x}) \subseteq S(x), we have h^\omega(x) \leq h^\omega(\bar{x}). (Note S(x) refers to the set of selected activities in solution vector x). Let \Omega_V \subseteq \Omega be the set of improving scenarios.

Dominating scenario: For two scenarios \omega and \hat{\omega} \in \Omega, we say \omega dominates \hat{\omega} if x \in X^{\hat{\omega}} \implies x \in X^{\omega}. Let \Omega_D be the set of dominating scenario pairs.

Dominating activity: Let x \in [0,1]^{|I|}. Construct a new vector x_{(i,j)} such that every element is identical to x except the ith element is set to 1 and its jth element is set to 0. We will say activity i dominates activity j if x \in X^{\omega} with x_i = 0 and x_{j} = 1 \implies x_{(i,j)} \in X^{\omega} and h^{\omega}(x_{(i,j)}) \leq h^{\omega}(x) for all scenarios. Let I_D be the set of dominating activity pairs.
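
The x_{(i,j)} construction is easy to state in code; this little helper is purely illustrative (the 0/1 list encoding and the example indices are my own):

```python
# Build x_(i,j): a copy of x with element i forced to 1 and element j
# forced to 0, as in the dominating-activity definition.
def swap(x, i, j):
    x2 = list(x)
    x2[i] = 1   # select activity i
    x2[j] = 0   # deselect activity j
    return x2

print(swap([0, 1, 1, 0], 0, 2))  # [1, 1, 0, 0]
```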

Finally, we are ready to propose some cuts for the activity prioritization model.

Proposition 1: There exists an optimal solution to the activity-prioritization problem such that the optimal activity selection vector \bar{x} satisfies \bar{x}_i^{\omega} \geq \bar{x}_i^{\hat{\omega}} for every i \in I and for all (\omega,\hat{\omega}) \in \Omega_D such that \omega \in \Omega_V.

Proposition 2: There exists an optimal solution to the activity-prioritization problem such that the optimal activity selection vector \bar{x} satisfies \bar{x}_i^{\omega} \geq \bar{x}_{j}^{\omega} for every \omega \in \Omega and for all (i,j) \in I_D.
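
To see what these cuts look like concretely, here is a sketch that enumerates the inequalities from Propositions 1 and 2 for small, made-up dominance sets (the scenario names, activity indices, and dominance pairs are all invented for illustration):

```python
# Enumerate the cuts of Propositions 1 and 2 for a toy instance.
scenarios = ["w1", "w2", "w3"]
activities = [0, 1, 2]
improving = {"w1", "w2"}             # Omega_V (assumed)
scenario_dominance = [("w1", "w2")]  # Omega_D: w1 dominates w2
activity_dominance = [(0, 2)]        # I_D: activity 0 dominates activity 2

cuts = []
# Proposition 1: x_i^w >= x_i^w_hat for (w, w_hat) in Omega_D with w improving
for (w, w_hat) in scenario_dominance:
    if w in improving:
        for i in activities:
            cuts.append((("x", i, w), ">=", ("x", i, w_hat)))
# Proposition 2: x_i^w >= x_j^w for (i, j) in I_D, in every scenario
for (i, j) in activity_dominance:
    for w in scenarios:
        cuts.append((("x", i, w), ">=", ("x", j, w)))

print(len(cuts))  # 6
```

Each tuple stands for one linear inequality that would be added to the relaxed model.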

An important characteristic of these sets of cuts is that they are what optimizers call “super-valid” inequalities, as opposed to plain-old valid inequalities. Whereas a valid inequality never removes a feasible point of the original model, these constraints may cut off feasible points – sometimes even optimal ones. What super-valid inequalities guarantee, however, is that at least one optimal solution remains that is not cut off by the additional constraints.

That’s about it for prioritization cuts. For a formal proof of why these cuts work, I’ll refer you to the paper. Thanks for reading!



[1] Koç, Ali, and David P. Morton. “Prioritization via stochastic optimization.” Management Science 61.3 (2014): 586-603.


Nuclear-Weapons Interdiction

This Thursday I’ll be presenting the paper “Interdicting a Nuclear-Weapons Project,” by Brown, et al. (2009). [1] In a previous blog post, I gave an introduction to bilevel programming, which is a key component of understanding project interdiction models such as the one posed in this paper. In my presentation, I plan to spend the majority of my time talking about the model, algorithm, and computational results as reported by Brown, et al., so in my blog post this week I’d like to delve a little deeper into the background of the problem and help you better visualize the problem setting so we can hit the ground running on Thursday.

In 1968, many nations of the world signed the Treaty on the Non-Proliferation of Nuclear Weapons. In spite of this, some nations (even some that signed the treaty!) have continued to pursue development of a nuclear-weapons arsenal. Of course, some nations are doing so covertly and the existence of their programs has not been confirmed (e.g., Iran and Syria). There is, therefore, international interest in stopping or slowing such programs. Since many world leaders are paying close attention to any nuclear-weapons development activity, if a new nation were to begin proliferation, it would likely do so covertly. Once confirmation of the project was received via intelligence capabilities and some information was gathered, a world leader would be interested in options to slow the proliferation efforts. Likely, the first acts would be diplomatic, escalating slowly and only if necessary to militaristic endeavors.

Even while diplomatic solutions are being sought, though, actions can be taken to hinder the nuclear-weapons project. Toward that end, the defender (interdictor) would attempt to lengthen the time required to perform project tasks (the discrete project units that must be completed before a nuclear weapon is available for use). The interdictor, then, wants to maximize the amount of time it takes to develop a nuclear weapon, whereas the proliferator is trying to minimize that amount of time. Both actors must expend large amounts of nonrenewable resources such as time, money, and highly enriched uranium ore, placing heavy constraints on both the interdictor and the proliferator. It is the complex interactions between resource allocation and project completion time on both sides that make this problem perfect for bilevel optimization. Specifically, the nuclear-weapons interdiction problem is an instance of the two-stage Stackelberg game. A Stackelberg game is a zero-sum game between an attacker and a defender. First, the defender guesses the attacker’s plan and acts to maximally inhibit that plan. Second, the attacker observes the moves made by the defender. Third and finally, the attacker adjusts his original plan to minimize the negative effects to himself. In reality, the nuclear-weapons interdiction problem is probably multi-stage, but we have a much better chance of solving a two-stage model than we do a multi-stage model. Thus, some modeling assumptions are made (I’ll discuss these more in my presentation) that might or might not be reasonable if a world leader actually wanted to implement this model. At the very least, the two-stage model gives a lower bound on how much the interdictor can delay the project.
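
As a rough sketch of that max-min structure (not the paper’s actual model, which works over a full project network with resource constraints), here is a toy interdiction problem solved by brute-force enumeration; the task names, durations, and delay amount are all invented:

```python
# Toy two-stage interdiction: the interdictor delays one task; the
# proliferator then runs whichever task sequence finishes soonest.
durations = {"enrich": 10, "design": 6, "assemble": 4}
routes = [["enrich", "assemble"], ["design", "assemble"]]
DELAY = 5  # extra time an interdicted task takes

def project_time(delayed_task):
    # follower's best response: the fastest route after interdiction
    return min(
        sum(durations[t] + (DELAY if t == delayed_task else 0) for t in route)
        for route in routes
    )

# leader's move: pick the task whose delay hurts the follower most
best_task = max(durations, key=project_time)
print(best_task, project_time(best_task))  # assemble 15
```

Delaying the task shared by both sequences is what stretches the project the most here, which mirrors the intuition that the interdictor should target bottleneck tasks.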

I’ll get much more into the details of the model itself and how we might be able to solve it on Thursday, but I wanted to give you all a head-start in understanding the problem situation and how we’ll approach it from a modeling perspective.


[1] Brown, Gerald G., et al. “Interdicting a nuclear-weapons project.” Operations Research 57.4 (2009): 866-877.

Get your priorities straight!

I recently read a paper entitled “Prioritization via Stochastic Optimization” (Koç and Morton, 2014), which was a timely read based on what we’ve been discussing in class lately. [1] The purpose of the paper is to present a new approach to solving resource-constrained activity-selection problems (RCASPs) using stochastic optimization and activity prioritization. The example application given by the authors is for the p-median model, a model with which we’re all familiar by this point. (As a quick recap, the idea of the p-median model is that there exists a set of customers located at various points and a certain maximum number p of facilities that can be built to service the customers. The objective is to minimize the total Euclidean distance the customers have to travel to reach their facilities.) Formally, the p-median model can be written

\min_{x,y} \sum_{j\in J}\sum_{i\in I} d_{ij}y_{ij}\\  \text{s.t.} \phantom{XX} \sum_{i \in I}x_i \leq k\\  \phantom{XXXX} x_i \geq y_{ij}\phantom{XX} i \in I, j \in J\\  \phantom{XXXX} \sum_{i \in I} y_{ij} = 1\phantom{XX} j \in J\\  \phantom{XXXX} x_i \in \{0,1\}\phantom{XX} i \in I\\  \phantom{XXXX} y_{ij} \in \{0,1\}\phantom{XX} i \in I, j \in J
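
For tiny instances, the p-median model can be solved by brute force, which makes for a handy sanity check. The sketch below uses a made-up one-dimensional instance (the customer and site coordinates are invented):

```python
# Brute-force p-median on a toy 1-D instance: choose at most p sites
# minimizing total customer-to-nearest-open-facility distance.
from itertools import combinations

customers = [0, 2, 5, 9]  # customer locations (invented)
sites = [1, 4, 8]         # candidate facility locations (invented)
p = 2

def total_distance(open_sites):
    return sum(min(abs(c - s) for s in open_sites) for c in customers)

best = min(
    (combo for r in range(1, p + 1) for combo in combinations(sites, r)),
    key=total_distance,
)
print(best, total_distance(best))  # (1, 8) 6
```

Of course, enumeration blows up quickly, which is exactly why the binary decisions make realistic facility-location instances hard.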

When everything is deterministic, this model is relatively straightforward. Problems occur, however, when the maximum number of facilities is uncertain (which could be the result of uncertainty in the facility-building budget), but the decisions about where to place the facilities must be made before the uncertainty is realized. The figure below depicts optimal facility placement when the maximum number of facilities is known to be 4.


[Figure: Deterministic placement of facilities for p = 1, 2, 3, and 4]


Now, we add uncertainty. If either one or two facilities could be built with equal probability, we get the first figure below. If up to three facilities or up to four facilities could be built with equal probabilities, we get the second and third figures, respectively.

[Figure: Prioritized placement of facilities for p = 1 or 2 with equal probability]
[Figure: Prioritized placement of facilities for p = 1, 2, or 3 with equal probability]
[Figure: Prioritized placement of facilities for p = 1, 2, 3, or 4 with equal probability]

We’ll formalize this model in a moment, but first I want to explicitly outline the series of events that occur in this prioritized model. First, a prioritized list of activities (facility locations) is selected without knowing the actual budget. Next, the uncertainty is realized. Finally, we make additional decisions (which facility covers each customer) depending on which scenario is encountered. Thus, perhaps the most “interesting” part of this problem is how to prioritize the activities. A priority list is a many-to-one assignment of activities to priority levels. We have two requirements for a priority list:

  1. A lower priority activity cannot be selected before a higher priority one in any scenario.
  2. Either all activities on priority level t are selected or none are.
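
Under these two requirements, “deploying” a priority list once the budget is realized is mechanical. Here is a minimal sketch of my own (the budget check simply counts activities rather than costs, a simplification):

```python
# Given a priority list (levels of activities, highest priority first),
# select whole levels in order until the realized budget k would be exceeded.
def select(priority_list, k):
    chosen = set()
    for level in priority_list:
        if len(chosen) + len(level) > k:  # all-or-nothing per level
            break
        chosen |= level
    return chosen

print(select([{0}, {1, 2}, {3}], k=2))  # {0}
```

With a budget of 2, level {1, 2} cannot be selected in full after activity 0, so only the top level is deployed; with a budget of 3, the first two levels fit.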

The first-stage problem, then, corresponds to creating a priority list. Suppose F^\omega(\mathscr{L}) denotes the optimal second-stage objective value under priority list \mathscr{L} for scenario \omega \in \Omega. We write the first-stage problem as

\min_{\mathscr{L}} \sum_{\omega \in \Omega}q^\omega F^\omega(\mathscr{L})\\  \text{s.t.} \phantom{XX} \mathscr{L} = [\mathscr{L}_1,\dotsm,\mathscr{L}_L]\\  \phantom{XXXX}\mathscr{L}_t \subseteq I \phantom{XX} t = 1,\dotsm,L\\  \phantom{XXXX}\mathscr{L}_t \neq \emptyset\phantom{XX} t = 1,\dotsm,L\\  \phantom{XXXX}\mathscr{L}_t \cap \mathscr{L}_{t'}= \emptyset \phantom{XX} t \neq t', t,t' = 1,\dotsm,L\\  \phantom{XXXX} \bigcup_{t=1,\dotsm,L} \mathscr{L}_t = I

Essentially, these constraints require the priority list \mathscr{L} = [\mathscr{L}_1,\dotsm,\mathscr{L}_L] to be a list of subsets of the activities such that at least one activity is found in a given priority level, no activity is found in multiple priority levels, and taken together the whole list covers all the possible activities.

The second-stage problem is

\min_{x,y} \sum_{\omega \in \Omega}q^\omega\sum_{j\in J}\sum_{i\in I} d_{ij}y_{ij}^\omega\\  \text{s.t.} \phantom{XX} \sum_{i \in I}x^\omega_i \leq k^\omega\\  \phantom{XXXX} x_i^\omega \geq y_{ij}^\omega\phantom{XX} i \in I, j \in J\\  \phantom{XXXX} \sum_{i \in I} y_{ij}^\omega = 1\phantom{XX} j \in J\\  \phantom{XXXX} x_i^\omega \in \{0,1\}\phantom{XX} i \in I\\  \phantom{XXXX} y_{ij}^\omega \in \{0,1\}\phantom{XX} i \in I, j \in J\\  \phantom{XXXX} x_i^\omega \geq x_{i'}^\omega \phantom{XX} i \in \mathscr{L}_{t}, i' \in\mathscr{L}_{t'}, t < t', t,t' = 1,\dotsm,L\\  \phantom{XXXX} x_i^\omega = x_{i'}^\omega \phantom{XX} i,i' \in\mathscr{L}_t, t = 1,\dotsm,L

where the second-to-last constraint enforces the requirement that all higher-priority activities are selected before lower priority ones and the final constraint requires either all activities or no activities on a given priority level to be selected. The rest of the constraints are the same as in the original p-median model, but the variables and parameters are scenario-dependent. Of course, the first-stage model is not a standard integer program, so we would like to find a formulation for this problem that can be solved by standard stochastic optimization techniques. Toward that end, we create additional decision variables s_{ii'}, which are 1 if activity i has no lower priority than activity i’ and 0 otherwise. The corresponding two-stage stochastic integer model is

\min_{x,y,s} \sum_{\omega \in \Omega}q^\omega\sum_{j\in J}\sum_{i\in I} d_{ij}y_{ij}^\omega\\  \text{s.t.} \phantom{XX} \sum_{i \in I}x^\omega_i \leq k^\omega\\  \phantom{XXXX}  s_{ii'} + s_{i'i} \geq 1 \phantom{XX} i < i', i,i' \in I\\  \phantom{XXXX} s_{ii'} \in \{0,1\} \phantom{XX} i \neq i', i,i' \in I\\  \phantom{XXXX} x_i^\omega \geq x_{i'}^\omega + s_{ii'} - 1 \phantom{XX} i \neq i', i,i' \in I, \omega \in \Omega\\  \phantom{XXXX} x_i^\omega \geq y_{ij}^\omega\phantom{XX} i \in I, j \in J\\  \phantom{XXXX} \sum_{i \in I} y_{ij}^\omega = 1\phantom{XX} j \in J\\  \phantom{XXXX} x_i^\omega \in \{0,1\}\phantom{XX} i \in I\\  \phantom{XXXX} y_{ij}^\omega \in \{0,1\}\phantom{XX} i \in I, j \in J

The first constraint ensures either activities i and i’ have the same priority level or one is higher than the other. It’s also relatively straightforward to check that the third constraint reduces to the final two constraints of the previous model depending on the values of the pair s_{ii'} and s_{i'i}.
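
That reduction is easy to verify exhaustively; this small check (my own, not from the paper) enumerates all 0/1 combinations:

```python
# Verify: x_i >= x_{i'} + s - 1 enforces x_i >= x_{i'} when s = 1
# and is vacuous when s = 0, over all binary values.
from itertools import product

ok = True
for xi, xip, s in product([0, 1], repeat=3):
    linearized = xi >= xip + s - 1
    intended = (xi >= xip) if s == 1 else True
    ok &= (linearized == intended)

print(ok)  # True
```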

Now, the activity-prioritization model isn’t particularly easy to solve in general (in fact, it’s NP-hard), but it does allow for potential stochastic or structural interdependencies between activities, which is an improvement on historical ways of looking at activity portfolios where each activity is treated as an independent entity. For more details on alternative formulations and some valid inequalities, I’ll refer you to the paper. Thanks for reading!

[1] Koç, Ali, and David P. Morton. “Prioritization via stochastic optimization.” Management Science 61.3 (2014): 586-603.


Bilevel Programming: An Introduction

Happy week-before-spring-break! In less than a month, I am going to be presenting a paper entitled “Interdicting a Nuclear-Weapons Project,” by Brown et al., written in 2009. While I will obviously be getting into more of the gritty details in my presentation, I wanted to give you a little flavor of what the problem looks like. In particular, I’d like to briefly introduce the concept of bilevel programming and how it relates to nuclear weapon project interdiction.

A bilevel problem can be thought of as a “game” between two decision makers. One decision maker makes a decision based on a set of constraints and some objective that benefits them, and then a second decision maker with a different (often conflicting) objective takes action that benefits them. In [2], the authors present the general form of the bilevel programming problem as follows:

\max_x ax + by \text{ s.t. } \{y = \arg \min_y cx + dy\text{ s.t. } Ax + By \geq \bar{b}\}.

In this general form, the first decision maker tries to guess how the second decision maker will minimize their objective and acts accordingly, counteracting the expected actions of the second decision maker. Then the second decision maker takes action after seeing what the first decision maker chose to do, which affects their feasible set of actions. Of course, further restrictions can be imposed, such as requiring certain variables to be integer. Many bilevel problems are nonconvex, so they can be incredibly difficult to solve.
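
To make that guess-then-react structure concrete, here is a toy max-min example solved by enumeration (the two “links,” their costs, and the delay penalty are all invented for illustration):

```python
# Minimal Stackelberg (max-min) example: the leader interdicts one of two
# links; the follower then takes the cheaper remaining option.
link_costs = {"a": 3, "b": 5}
DELAY = 4  # extra cost an interdicted link incurs

best = None
for interdicted in link_costs:  # leader's move
    follower_cost = min(
        cost + (DELAY if link == interdicted else 0)
        for link, cost in link_costs.items()
    )                           # follower's best response
    if best is None or follower_cost > best[1]:
        best = (interdicted, follower_cost)

print(best)  # ('a', 5)
```

Note the leader must evaluate each of its options against the follower’s *best* response, which is exactly why these problems are harder than ordinary single-level programs.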

The interdiction problem is a good example of a bilevel programming problem. With nuclear weapon proliferation, specifically, the proliferator would like to quickly produce (and possibly disseminate) a batch of weapons, whereas the interdictor would like to delay the production as much as possible. The underlying assumption is that the proliferator can see any action done by the interdictor and thus can act to inhibit or avoid any such action. The objective used in the nuclear weapon interdiction paper is

z = \max_x\{\min_y S_{end}\}

where S_{end} is the earliest start time of the final task that must be completed by the proliferator, and x and y are the decisions made by the interdictor and the proliferator, respectively.

The nuclear weapons interdiction problem also has many constraints and multiple decisions to be made, so it is a complex problem. I’ll get into the details of the model and the algorithms used to solve it in my presentation, and probably in a future blog post as well, but for the time being I wanted to set up the idea of bilevel programming and how nuclear weapons proliferation can be viewed as a bilevel program.

[1] Brown, Gerald G., et al. “Interdicting a nuclear-weapons project.” Operations Research 57.4 (2009): 866-877.

[2] Bard, Jonathan F. “An algorithm for solving the general bilevel programming problem.” Mathematics of Operations Research 8.2 (1983): 260-272.

Fly Me to the Queue

(Or, A Study of Baggage Claim Queuing at the Dane County Regional Airport)

by Amanda G. Smith

Several years ago, I took an introductory simulation course that allowed me to participate in a very interesting study of queuing. My team analyzed a unique queue: the waiting area at the baggage carousel at the Dane County Airport. If you’ve ever traveled by plane and checked a bag, you know very well how much traffic can build up at baggage carousels. In order to improve airport efficiency, it would be beneficial to find ways to reduce the amount of time passengers spend in the system. Interestingly, the system departure rates seemed to follow a Weibull distribution, not often seen in day-to-day queues. However, the Weibull distribution makes sense because the time that a customer receives his or her final bag can depend on several different factors such as the time the customer arrives at the baggage claim, how many bags he has, and when his bags arrive on the carousel. The queue is also unique because of the implicit characteristic of a finite calling population; we can know in advance how many customers and how many bags will arrive in the system, since it is exactly what was on the plane before arrival.

Passengers and bags arrive at the system with every plane. After arriving, both passengers and bags must make their way to the baggage carousel. Passengers at the Dane County Airport have a relatively short walk from the terminal to the carousel, though if passengers stop to use the bathroom or get food, the transport time can be significantly increased. Meanwhile, the luggage is loaded onto a cart and driven from the plane to the baggage unloading gate. It is then unloaded onto the carousel, where it remains until retrieved by the passenger to whom it belongs. Passengers wait at the carousel until their bags appear so they can depart the system. We collected our data by observing the carousel area and noting the plane arrival time, baggage arrival time, and passenger and bag departure time. We also kept track of how many bags each passenger retrieved.

Our main goal was to understand which aspect of the system was responsible for the congestion around baggage carousels. Our first idea was to cut the baggage transport time in half. After running our simulation with this new input parameter, the passenger wait time was decreased by a whopping 90%! Our second idea was to reduce the passenger travel time to a small constant, approximately 2 minutes. After testing, we saw a small reduction in overall system time, but wait time actually increased. Since people hate waiting in queues, that particular alternative seemed like a bad idea.
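
The qualitative effect we saw can be reproduced with a stripped-down model of the system: a passenger departs at the later of their own arrival at the carousel and their last bag’s arrival. The distributions and parameters below are invented stand-ins, not the inputs we fitted in the study:

```python
# Stripped-down baggage-claim model: wait time at the carousel is how long
# a passenger stands there after arriving before the last bag shows up.
import random

def mean_wait(bag_scale, n_passengers=200, seed=1):
    rng = random.Random(seed)  # fixed seed for a fair comparison
    waits = []
    for _ in range(n_passengers):
        walk = rng.uniform(2, 8)                     # minutes, gate -> carousel
        last_bag = rng.weibullvariate(bag_scale, 2)  # last-bag arrival time
        depart = max(walk, last_bag)                 # leave once both arrive
        waits.append(depart - walk)                  # time spent waiting
    return sum(waits) / n_passengers

print(mean_wait(12) > mean_wait(6))  # True
```

Even in this cartoon version, shrinking the baggage-transport scale slashes the mean wait, while passenger walking time barely matters, matching what our full simulation showed.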

Our conclusion was that a 90% reduction of wait time is a significant enough improvement that it is worth any (reasonable) cost incurred by the airport. However, we also believed the cost of reducing baggage transport time could be kept low by performing incremental alterations, such as assigning one more employee to help load bags or increasing cart driving speed. To my knowledge, the Dane County Regional Airport did not try to implement any of our ideas, but we had a lot of fun experimenting with an unusual type of queue.


Operations Research and Homeland Security

by Amanda G. Smith

On Thursday, I will be presenting the paper “A Survey of Operations Research Models and Applications in Homeland Security” by Wright, Liberatore, and Nydick. [1] Although I’ll be covering more of the details of the paper in my presentation, I wanted to start the week off by giving a little preview of what I’ll be talking about.

Before I get to the meat of the paper, I’d like to talk a little bit about homeland security. The concept of “homeland security” has been defined in many ways by many different people since its “official” inception in 2001. One definition that highlights the potential for – and reality of – homeland security’s strong ties to operations research is in the National Strategy for Homeland Security. [2] There, homeland security is described as “A concerted national effort to prevent terrorist attacks within the United States, reduce America’s vulnerability to terrorism, and minimize the damage and recover from attacks that do occur” (emphasis mine). I’d like to draw your attention to the words “reduce” and “minimize,” especially. The themes of reduction and minimization are, of course, central to operations research.

This definition is slightly different from the specific role of the Department of Homeland Security (DHS), which takes care to differentiate itself from the Department of Defense (DoD) in that the DHS focuses on civilian efforts rather than militaristic ones. Within the civilian realm, the DHS strives to prepare for, prevent, and respond to domestic emergencies, particularly terrorism. [3]

In my presentation, I’ll highlight specific key areas of past research and how they contribute to the problem of homeland security. For now, however, I’ll point out several opportunities for future research. (Spoiler alert: this is the conclusion of my talk, so if you’re trying to stay spoiler-free, stop reading now!) Since homeland security is such a broad and far-reaching field, it is a fertile ground for interesting problems. Some of the biggest open problems in operations research for homeland security include:

  • Focusing more on non-routine emergencies [4]
  • Extending truck routing models to include homeland security concerns, especially when routing hazardous materials
  • Re-examining airline security now that policy and security protocol changes have made many older models obsolete
  • Further collaboration between physical scientists and operations researchers in various areas [5]
  • Further research in cyber security, critical infrastructure protection, threat analysis, and border security
  • Expanding disaster relief research to include the planning, preparation, recovery and response phases of disasters (most papers focus mainly on the planning phase)
  • Modeling interconnections between relief entities
  • Developing a greater understanding of the relationships between different infrastructures, which have so far been assumed to be independent

Though this is only a small snapshot of the extensive operations research work done in homeland security, I hope it gives you a taste of the types of problems operations researchers are trying to solve. One major difficulty of this type of research is that much of it plays along the edge of classified information and thus must be dealt with carefully. However, the questions are no less important. Keeping our country safe is a challenging task, but that makes finding innovative solutions all that much more important and rewarding.

[1] Wright, P. Daniel, Matthew J. Liberatore, and Robert L. Nydick. “A survey of operations research models and applications in homeland security.” Interfaces 36.6 (2006): 514-529.
[2] Defining Homeland Security: Analysis and Congressional Considerations. 8 January 2013, p. 8. Web. 16 Feb. 2016.
[3] Wikipedia contributors. “Homeland security.” Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 25 Oct. 2015. Web. 16 Feb. 2016.
[4] Green, Linda V., and Peter J. Kolesar. “Anniversary article: Improving emergency responsiveness with management science.” Management Science 50.8 (2004): 1001-1014.
[5] Craft, David L., Lawrence M. Wein, and Alexander H. Wilkins. “Analyzing bioterror response logistics: the case of anthrax.” Management Science 51.5 (2005): 679-694.