The end of the tour

The semester has come to an end, so this will be the last post on the Public Sector OR blog. Thanks for reading the blog this semester. Head to Punk Rock OR [Link] for a list of the course readings.

As you can see from the posts on this blog, student blogging was a great idea. I’m grateful for the opportunity to work with such a talented group of students. I highly recommend checking out the archive and seeing what they had to say.

We put everything we learned into practice on the last day of class when we played Pandemic. We dominated!


Imperfect Classifications

As I think has become increasingly obvious from my choice of topics, I believe one of the biggest flaws in many OR models for emergencies is a lack of clear information.  Some models treat the problem stochastically, which adds to the complexity of the model but hopefully improves the resulting decisions.  Others, like Rath and Gutjahr [1], which I detailed in my previous post, look at the problem through the lens of trade-offs.  Argon and Ziya [2] tackle the problem of imperfect information head on by considering how well priority assignment systems work with imperfect information, how to prioritize with imperfect ‘signals’ of severity, and which systems best handle this problem.

Argon and Ziya look at a queuing system with different priorities of patients.  The specific instance we will consider is an EMS dispatch system.  Patients arriving at the system are either type 1 patients, who need immediate treatment, or type 2 patients, who should wait until all type 1 patients are served.  The dispatcher, however, only receives a signal indicating how likely a patient is to be type 1.  This signal need not be a probability; it only needs to be the case that signals from type 1 patients are larger than signals from type 2 patients in the likelihood ratio ordering.  The dispatcher must use these signals to assign patients to priority levels, where higher priority levels must all be served before a lower priority level receives any service.

It is worth noting that Argon and Ziya assume penalties are linearly related to waiting time for most of the paper.  When linearity holds, increasing the number of priority levels that dispatchers can assign patients to will never result in worse outcomes, and may in fact result in better ones.  Following this thought to its logical conclusion, the authors argue for a Highest-Signal-First (HSF) policy, which serves patients in order from most likely to be type 1 to least likely.

A more important take-away to me is that a two-priority policy achieves a majority of the improvement that HSF provides over First-Come, First-Served (FCFS) while being far more practical to implement.  Better yet, when the authors investigate nonlinear waiting-time penalties, the two-priority policy consistently outperforms HSF and at worst performs on par with FCFS.  Unfortunately, they only look at a single nonlinear function, which increases the penalty quadratically with waiting time.  This is not very representative of cardiac arrest patients, who suffer most of their penalty in the first minutes after their call [3].

A worthwhile extension of this research would be to consider the case where patient penalties are exponentially decaying (similar to what we expect for cardiac arrest victims) and compare two-priority and three-priority policies to FCFS.  These policies are easier to implement and are used in practice.

Further, this can be combined with historical data on the accuracy of the ‘signal’ dispatchers receive to determine best practices for when to assign a patient to high priority based on system load.  Argon and Ziya argue that as the system load increases, we should become less discriminating and accept more patients as high priority, but this does not seem to necessarily hold in the exponentially decaying case.  Instead, it may be better to become more discriminating in labeling patients as high priority, since serving a type 1 patient yields a major benefit only if it is done quickly.
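To make this concrete, here is a minimal simulation sketch in Julia.  It is not taken from the paper: it assumes one server, Poisson arrivals, exponential service times, an invented noisy signal, and a reward for each type 1 patient that decays exponentially with waiting time (higher average reward is better).  The threshold parameter plays the role of how discriminating the dispatcher is.

# A rough sketch, not the model from [2]: one server, Poisson arrivals,
# exponential service times, an invented noisy signal, and a reward for each
# type 1 patient that decays exponentially with their waiting time.
function simulate(policy; n = 100000, lam = 0.9, mu = 1.0, p1 = 0.3,
                  decay = 0.5, threshold = 0.5)
	arrivals = cumsum([-log(rand()) / lam for k in 1:n])   # arrival times
	types    = [rand() < p1 ? 1 : 2 for k in 1:n]          # true (hidden) types
	# imperfect signal: a noisy, overlapping indication of how likely type 1 is
	signals  = [clamp((types[k] == 1 ? 0.6 : 0.4) + 0.4 * (rand() - 0.5), 0.0, 1.0) for k in 1:n]
	queue, t, reward, next = Int[], 0.0, 0.0, 1
	while next <= n || !isempty(queue)
		while next <= n && arrivals[next] <= t    # everyone who has arrived joins the queue
			push!(queue, next); next += 1
		end
		if isempty(queue)
			t = arrivals[next]                    # server idles until the next arrival
			continue
		end
		if policy == :fcfs
			idx = 1                               # earliest arrival
		else                                      # two-priority: earliest high-signal patient
			hi  = findfirst(j -> signals[j] >= threshold, queue)
			idx = (hi == nothing || hi == 0) ? 1 : hi
		end
		k = splice!(queue, idx)
		types[k] == 1 && (reward += exp(-decay * (t - arrivals[k])))
		t += -log(rand()) / mu                    # service time
	end
	return reward / n                             # average decayed reward per arrival
end

println("FCFS:         ", simulate(:fcfs))
println("Two-priority: ", simulate(:twopriority))

With these made-up parameters the two-priority policy should come out ahead of FCFS, but the point is only to show how such a comparison could be set up, not to reproduce any results.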

I believe that by acknowledging the possibility that our information is not quite accurate, we can develop better policies that work under a variety of conditions.  Whether this means using a deterministic policy only as a guide or, better yet, determining a policy that best mitigates the effects of imperfect information as above, it is better than simply ignoring the problem.


[1] Rath, Stefan, and Walter J. Gutjahr. “A math-heuristic for the warehouse location–routing problem in disaster relief.” Computers & Operations Research 42 (2014): 25-39.

[2] Argon, Nilay Tanik, and Serhan Ziya. “Priority assignment under imperfect information on customer type identities.” Manufacturing & Service Operations Management 11.4 (2009): 674-693.

[3] Valenzuela, Terence D., et al. “Estimating effectiveness of cardiac arrest interventions a logistic regression survival model.” Circulation 96.10 (1997): 3308-3313.

Live Long and ProspOR

For my final blog post, I thought it might be nice to do some sort of review of the 15+ weeks we spent learning about public sector operations research. As such, I have compiled a summary of five things I learned over this past semester that were particularly important to me:

  1. Equity is important in public sector operations research. This is something that perhaps shouldn’t have been surprising to me, but was. I was intrigued by the idea that it’s possible to build characterizations of equity into a mathematical model and that such a model can be constructed in a myriad of ways. Some ways of modeling equity are certainly better than others, as we saw in Sam’s presentation of the paper “Modeling effectiveness-equity trade-offs in public service delivery systems,” but the inclusion of an equity measure is critical in many public sector problems such as assigning children to schools. [1]
  2. Bilevel optimization is cool (and hard). While talking about applications of bilevel optimization, we learned about some interesting ways it can be used to solve tricky problems with multiple decision makers. In Eli’s presentation about stochastic network interdiction, we saw how network interdiction problems work like a Stackelberg game where the leader tries to take links out of a network so the follower has to find other routes. [2] I also presented a paper about nuclear-weapons project interdiction that used bilevel optimization in a project management setting where the leader tried to make the follower’s tasks take as long as possible. [3]
  3. There are a lot of different ways to do facility location. First of all, when deciding where to locate a facility, what’s our goal? Do we want ambulances to be as close to as many people as possible? Do we want to place distribution centers so that all customers are “covered” (i.e., within some predefined distance radius)? Once we know what we want, how do we model it? Should we use a p-median model? Should we include queuing – and therefore randomness – in the model? Once we finally have a model, we have to solve it, which is often hard since facility placement requires binary decisions about whether to build a particular facility or not. I presented a paper that provided a novel approach to problems with random budgets, such as facility location problems, that involved building a two-stage stochastic program. [4] (A bare-bones p-median sketch appears just after this list.)
  4. Presentations work a lot better when you have a moderator. I got to do something this semester I’ve never done before: moderate someone’s presentation. Not only was it fun, it encouraged me to read the paper (or at least the slides) before watching the presentation and really think about what made sense and what didn’t, because it was my job to come up with several questions for the person I was moderating. I think it’s a great idea to have assigned student moderators and if I teach a class someday I’ll definitely have my students moderate for each other.
  5. It’s possible to do good with good OR. Early in the semester, we discussed the idea of “doing good with good OR,” which basically boiled down to the question of whether our strategic problem solving was actually helping anyone. While there have certainly been times OR has done more harm than good, I’d say our ultimate answer was a resounding “Yes!” We saw many examples of the ways OR had been applied to solve real-world public sector problems; I thought it was encouraging to know our work has the potential to make life better for someone at the end of the day, whether it’s sending children to the best school or placing an ambulance where it has the greatest potential to save someone’s life (see [5] for one great example).
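As promised in item 3 above, here is a bare-bones p-median sketch in JuMP, using the same early-2016 JuMP syntax that appears in the “Optimization in Julia” post below. The distance matrix and the choice of p are made up purely for illustration, and any MIP solver could stand in for CPLEX.

using JuMP, CPLEX

n, p = 5, 2                                  # 5 customers/candidate sites, open 2 facilities
d = [0 4 7 3 6;                              # made-up distance matrix: d[i,j] = distance
     4 0 5 8 2;                              #   from customer i to candidate site j
     7 5 0 6 4;
     3 8 6 0 9;
     6 2 4 9 0]

m = Model(solver=CplexSolver())
@variable(m, x[1:n,1:n], Bin)                # x[i,j] = 1 if customer i is served by site j
@variable(m, y[1:n], Bin)                    # y[j] = 1 if a facility is opened at site j
@objective(m, Min, sum{d[i,j]*x[i,j], i in 1:n, j in 1:n})
for i in 1:n
	@constraint(m, sum{x[i,j], j in 1:n} == 1)   # every customer is assigned somewhere
	for j in 1:n
		@constraint(m, x[i,j] <= y[j])           # ...but only to an open facility
	end
end
@constraint(m, sum{y[j], j in 1:n} == p)         # open exactly p facilities
solve(m)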

Anyway, all this is to say it was a great semester. Thanks for reading, and here’s to doing even better with even better OR in the future!

(PS: I’m very sorry for the horrible pun that is this post title.)

[1] Mandell, Marvin B. “Modelling effectiveness-equity trade-offs in public service delivery systems.” Management Science 37.4 (1991): 467-482.
[2] Cormican, Kelly J., David P. Morton, and R. Kevin Wood. “Stochastic network interdiction.” Operations Research 46.2 (1998): 184-197.
[3] Brown, Gerald G., et al. “Interdicting a nuclear-weapons project.” Operations Research 57.4 (2009): 866-877.
[4] Koç, Ali, and David P. Morton. “Prioritization via stochastic optimization.” Management Science 61.3 (2014): 586-603.
[5] Hutton, David W., Margaret L. Brandeau, and Samuel K. So. “Doing good with good OR: supporting cost-effective hepatitis B interventions.” Interfaces 41.3 (2011): 289-300.

Expedited Screening Programs

The Transportation Security Administration (TSA) and Customs and Border Protection offer a few expedited airport security screening programs – TSA PreCheck, Global Entry, and the like (NEXUS, SENTRI, and so on).

The TSA PreCheck program gives you access to a special, faster TSA security line when departing from airports within the US. With the Global Entry program, trusted travelers can skip the lines at passport control and customs when entering the United States, and they enjoy TSA PreCheck as well. These programs have grown in popularity and utilization since being introduced.

According to recent research by Stewart and Mueller (2015), a risk and cost-benefit analysis of the PreCheck program shows that while the co-benefits of PreCheck exceed $1 billion/year, the loss of benefits from the increase in risk is less than $50 million/year. The increase in risk reduction from PreCheck is 0.5%, assuming that PreCheck correctly identifies low-risk passengers.

However, I wasn’t able to find a quantitative study on Global Entry (and the like). The security losses and financial benefits associated with TSA PreCheck and Global Entry would be very different, as the screening processes involved are very different. They also have different eligibility requirements: TSA PreCheck is limited to US citizens and permanent residents, while Global Entry is also open to citizens of certain foreign countries.

Another interesting direction is random inspection. Global Entry users are occasionally referred, at random, to additional inspection. Questions such as how to design the selection to minimize Type I/Type II errors, and how frequent the random inspections should be, are well suited to OR methodologies.
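As a back-of-the-envelope illustration (every number below is invented), suppose a small fraction of enrolled travelers should never have been trusted and that a random referral to secondary inspection catches such a traveler with some probability. Raising the referral rate then lowers the chance of missing a risky traveler (a Type II error) but sends more legitimate members to needless inspection (the Type I side of the trade-off):

# All numbers below are invented for illustration only.
theta  = 1e-4     # assumed fraction of members who should not have been enrolled
detect = 0.9      # assumed chance a secondary inspection catches such a member
for p in (0.01, 0.05, 0.10, 0.20)            # candidate random referral rates
	type2 = theta * (1 - p * detect)          # risky member passes unchecked
	type1 = (1 - theta) * p                   # legitimate member referred anyway
	println("referral rate $p: P(miss) = $type2, P(needless referral) = $type1")
end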

There is huge room for our math to improve aviation security as well as the customer experience. However, to be fair, I will finish this post by referring to the recent incident showing why we should perhaps avoid doing math on a flight.

References:

  • Stewart MG, Mueller J (2015) Responsible policy analysis in aviation security with an evaluation of PreCheck, Journal of Air Transport Management 48:13-22.
  • Wong S, Brooks N (2015) Evolving risk-based security: A review of current issues and emerging trends impacting security screening in the aviation industry, Journal of Air Transport Management 48:60-64.


Optimization in Julia

Julia is a relatively young technical computing language. One of its highlights is JuMP (Julia for Mathematical Programming), a powerful package for building and solving mathematical models within the Julia environment. My first blog post offered a simple model for scheduling the Big Ten conference in a round-robin tournament. To complete the Circle of Life, this final blog post will revisit the model by implementing it in Julia/JuMP.

First things first! Let’s tell Julia we need to use the JuMP package for creating a model and the CPLEX package for accessing the CPLEX solver.

using JuMP, CPLEX

Previously, we defined a set M of teams in the Big Ten and a set D of allowable days for play. To avoid dealing with byes on a 14 team schedule, we specified there to be 13 days. We’ll create an array of days storing the numbers 1 through 13.

M = ["Ind" "UMD" "UMich" "MSU" "OSU" "Penn" "Rtgrs" "Ill" "Iowa" "UMN" "UNL" "NU" "Purd" "UW"]
D = collect(1:13)

We now create a JuMP model object with CPLEX as the solver. This model will have variables, constraints, and an objective added to it. CPLEX options can be passed as arguments into the CplexSolver object; e.g., CPX_PARAM_THREADS=4.

model = Model(solver=CplexSolver())

The binary decision variables will be x_{ijt}=1 if team i\in M hosts team j\in M on day t\in D, and 0 otherwise. Let’s add these variables to our model. This is accomplished with the @variable macro. JuMP is even kind enough to allow us to index our variables with our team names!

@variable(model, x[M,M,D], Bin)

Obviously, a team can’t play itself. Let’s set all binary variables of the form x_{iit} equal to 0, where i\in M and t\in D. Constraints are added to JuMP models using the @constraint macro.

for i in M
	for t in D
		@constraint(model, x[i,i,t] == 0)
	end
end

Our original model included the constraint that each team can play at most once per day, whether they are home or away. The mathematical formulation of this constraint is below.

\displaystyle\sum_{\substack{j\in M: \\ j\neq i}} \left( x_{ijt} + x_{jit} \right) \leq 1 \phantom{XXXXX}\forall i\in M, t\in D

The following code in Julia adds this exact constraint to our model. We add this constraint for every combination of teams (M) and days (D).

for i in M
	for t in D
		@constraint(model, sum{x[i,j,t] + x[j,i,t], j in setdiff(M,[i])} <= 1)
	end
end

The sum function sums the expression x_{ijt} + x_{jit} over the indices j\in M\setminus \{i\}. The Julia command for referencing this set of indices is setdiff(M,[i]).

Because we want a round-robin schedule, each team must play every other team exactly once. In the first blog post, we saved ourselves from adding redundant constraints by adding a constraint for all i\in M and j\in M such that i<j, not i\neq j.

\displaystyle\sum_{t\in D} \left( x_{ijt} + x_{jit} \right) = 1\phantom{XXXXX}\forall i\in M, j\in M : i<j

We add this constraint to the JuMP model for all i\in M and j\in M satisfying i<j.

for i in 1:length(M)
	for j in M[i+1:end]
		@constraint(model, sum{(x[M[i],j,t] + x[j,M[i],t]), t in D} == 1)
	end
end

So far, there’s nothing in our model that prevents a team from playing all home or all away games. Let’s make sure each team plays either 6 or 7 home games, as well as 6 or 7 away games. We can push the schedule in this direction by stipulating that all teams play at least 6 home games. Our mathematical constraint was

\displaystyle\sum_{\substack{j\in M: \\ j\neq i}} \displaystyle\sum_{t\in D} x_{ijt} \geq \left \lceil{\frac{|D| - 1}{2}}\right \rceil \phantom{XXXXX}\forall i\in M.

In Julia, we will implement this constraint by simultaneously summing over the two indices mentioned above: j\in M\setminus\{i\} and t\in D.

for i in M
	@constraint(model, sum{x[i,j,t], j in setdiff(M,[i]), t in D} >= ceil((length(D) - 1)/2))
end

The final constraints explicitly provided in the first blog post limited the number of consecutive home and away games. This is important to facilitate a healthy, “mixed” schedule. These constraints stated that a team cannot play more than two home games within any consecutive three-game window. Similarly, teams cannot play more than two away games within any consecutive three-game window.

\displaystyle\sum_{\substack{j\in M: \\ j\neq i}} \displaystyle\sum_{\substack{s\in D: \\ t\leq s \leq t+2}} x_{ijs} \leq 2 \phantom{XXXXX} \forall i\in M, t\in D: t\leq |D| - 2

\displaystyle\sum_{\substack{j\in M: \\ j\neq i}} \displaystyle\sum_{\substack{s\in D: \\ t\leq s \leq t+2}} x_{jis} \leq 2 \phantom{XXXXX} \forall i\in M, t\in D: t\leq |D| - 2

The JuMP analog of these constraints again sums over two indices simultaneously.

for i in M
	for t in 1:length(D)-2
		@constraint(model, sum{x[i,j,s], j in M, s in t:t+2} <= 2)
		@constraint(model, sum{x[j,i,s], j in M, s in t:t+2} <= 2)
	end
end

These are all of the constraints we added to the original model. All that’s left is to solve the model.

status = solve(model)

The solve command will return a symbol, stored in the status variable, that lets us know how the model solved. It could take on a value of :Optimal, :Unbounded, or :Infeasible, among other values. Fortunately for us, this model returned :Optimal. Remember that at this point, we are just searching for a schedule that is feasible with respect to our constraints. After tidying up the output, we have a complete schedule.
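One way to do that tidying (a minimal sketch, assuming the getvalue function from the same JuMP version used throughout this post) is to loop over the variables and print the games that are switched on:

for t in D
	for i in M, j in M
		if getvalue(x[i,j,t]) > 0.5    # team i hosts team j on day t
			println("Day ", t, ": ", j, " at ", i)
		end
	end
end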

[Image: the first Big Ten schedule produced by the model]

However, this may not be the perfect schedule. Notice that there are 23 instances of teams playing back-to-back away games. Let’s say we wanted to reduce this number as much as possible. How would we do it?

One way to minimize the total number of back-to-back away games is to introduce a binary variable y_{it} to be equal to 1 if team i\in M has back-to-back away games beginning on day t\in D\setminus \{|D|\}, and 0 otherwise.

@variable(model, y[M,1:length(D)-1], Bin)

These variables should be triggered to equal 1 if a team plays two away games in a row. This can be accomplished with the following constraint, courtesy of Felix the Cat’s Slide of Tricks from ISyE 323.

\displaystyle\sum_{\substack{j\in M: \\ j\neq i}} \displaystyle\sum_{\substack{s\in D: \\ t\leq s \leq t+1}} x_{jis} \leq 1 + y_{it} \phantom{XXXXX} \forall i\in M, t\in D: t\leq |D| - 1

The JuMP constraint looks very similar to the previous “window” constraints.

for i in M
	for t in 1:length(D)-1
		@constraint(model, sum{x[j,i,s], j in M, s in t:t+1} <= 1 + y[i,t])
	end
end

The last step is to add an objective function to our JuMP model. We want to minimize the total number of y_{it} variables that are equal to 1.

\min \displaystyle\sum_{i\in M} \displaystyle\sum_{\substack{t\in D: \\ t\leq |D|-1}} y_{it}

Objective functions are added to JuMP models with the @objective macro. We declare the sense of the objective function to be Min because we are minimizing our linear objective. Summations in the objective function behave the same way as they do in the constraints.

@objective(model, Min, sum{y[i,t], i in M, t in 1:length(D)-1})

Our model is complete! After executing the solve(model) command once more, we have a new solution. This new model is very difficult to solve, so we present an improved solution obtained after running the solver for a few minutes.

[Image: the improved Big Ten schedule with fewer back-to-back away games]

This schedule features 7 instances of back-to-back away games. This is an enormous improvement over our earlier solution!

JuMP is also capable of more advanced solving techniques, such as solver callbacks for lazy constraints and user cuts. If you’re interested in trying out Julia/JuMP, you can download Julia here. Packages are incredibly easy to add to your Julia distribution (Pkg.add("JuMP")). Happy optimizing!

Mother’s Day optimization…

For my last blog post, I would like to write about Mother’s Day optimization.

While online shopping is growing every day, retailers try to boost it even more around special days like Mother’s Day and Christmas, and optimization is needed more and more to achieve this. According to the Mother’s Day Consumer Intentions and Actions Survey, the average person spends around $127 on a Mother’s Day gift, so these days carry big potential to increase profit.

I will try to list some of the ideas here:

-Search keywords:

Since consumers will not simply search on a retailer’s own webpage, retailers should use appropriate keywords on their webpages so they can be reached from more general searches. The right keywords make a big difference in special-day marketing. Temporarily expanding keyword lists with terms about gifts, gift ideas, and so on during these days gives retailers a chance to increase sales. For instance, 64% of Mother’s Day searches include “ideas”, “homemade”, “best”, “unique” and “cheap”.

[Image: breakdown of common Mother’s Day search terms]


-Optimization of voice traffic:

These special days are also the busiest times for phone services, which creates a need to optimize voice channel usage. Call centers for online retailers and stores should plan for these days separately and be ready for them. Using previous years’ data, the necessary information can be obtained to maximize customer satisfaction and, in turn, profit.

Generally, Erlang models are used to solve these issues.

An Erlang B traffic model can be used to model blocked calls. Assumptions in this model:

  • Number of sources is infinite
  • Call arrival pattern is random
  • All blocked calls are cleared
  • Hold times are exponentially distributed

An Erlang C model can be used to find the number of agents needed for incoming call volume and the probability of delaying a call. Assumptions for this model:

  • Number of sources is infinite
  • Call arrival pattern is random
  • Blocked calls are delayed
  • Hold times are exponentially distributed
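Both formulas are classical and compact enough to sketch directly. A minimal Julia version (the traffic numbers in the example are invented) might look like this, where load is the offered traffic in Erlangs (arrival rate times average hold time) and m is the number of lines or agents:

# Erlang B: probability an arriving call is blocked (and cleared).
function erlang_b(load, m)
	b = 1.0
	for k in 1:m
		b = load * b / (k + load * b)   # numerically stable recursion
	end
	return b
end

# Erlang C: probability an arriving call must wait (blocked calls are delayed).
# Requires load < m for a stable queue.
function erlang_c(load, m)
	b = erlang_b(load, m)
	return m * b / (m - load * (1 - b))
end

# Example: 100 calls per hour with a 3-minute average hold time is 5 Erlangs.
load = 100 * 3 / 60
for agents in 6:10
	println(agents, " agents: P(call is delayed) = ", erlang_c(load, agents))
end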

-Optimization of cost and consumption strategies:

Traditional pricing systems do not have a way to reflect pricing strategies for special days. If special days are not separated from the rest of the calendar, demand will be underestimated. For example, if Mother’s Day is simply counted as part of the spring season, the store can underperform in terms of Mother’s Day gifts. If a store runs a promotion during these special days, it is important to understand the effects of that promotion on the whole store. Store managers should consider which type of customer would be happy with which kind of promotion. Scientific, data-based promotions have the potential to produce more profit.

-The most important optimization:

Of course, do not forget the most important optimization of all: maximize the effect of the words that will make your mother feel like the best mom in the entire world when you wish her a happy Mother’s Day.


Optimize on…

References:

https://yourdigitalresource.com/market-online-business-mothers-day/

https://www.entrepreneur.com/article/219451

http://www.cisco.com/en/US/technologies/tk869/tk769/technologies_white_paper0900aecd8070329d.html

http://www.cisco.com/c/en/us/td/docs/ios/solutions_docs/voip_solutions/TA_ISD.html

http://www.retailtouchpoints.com/features/executive-viewpoints/maximizing-events-promotions-optimization-for-mother-s-day-beyond


Mail Transportation VRP (Part 2)

Introduction

In my final blog post, I’m going to once again be writing about the vehicle routing problem I’ve been working on over the past few months. For those of you checking in for the first time, I highly recommend reading “Part 1” in this two part series before continuing:

https://publicsectororcourse.wordpress.com/2016/03/28/overview-of-a-mail-transportation-vehicle-routing-problem-vrp/

As I mentioned in Part 1, enumerating all possible routes in this model is not practical, so a column generation procedure was used to find a reasonable subset of “good” routes. Although the column generation procedure for this problem is not as advanced as what was used in the political districting model I presented in class, it’s still able to generate fairly substantial improvements.

Before diving into the column generation procedure, though, here’s a brief overview of the algorithm I implemented in Gurobi Python:

  1. Start with a feasible subset of routes
  2. Solve the linear relaxation of the problem
  3. While improvements to the optimal LP solution can be found:
    1. Perform the column generation procedure
    2. Add new columns (routes) to the subset
    3. Resolve LP
  4. Change decision variables to binary
  5. Solve IP


Column generation

Just like in the political districting model, the point of this column generation procedure is to find columns with negative reduced costs. This stems from the fact that, for a minimization problem, all nonbasic variables must have nonnegative reduced costs if the current LP solution is optimal. In this model, the reduced cost can be represented by the following expression:

[Image: reduced cost expression for the mail transportation VRP]
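For a set-partitioning-style master problem with dual values \lambda_i and \mu_i attached to each location i (the exact duals depend on the formulation from Part 1), the reduced cost of a candidate route r typically takes a form along the lines of

\bar{c}_r = c_r - \displaystyle\sum_{i\in r} \left( \lambda_i + \mu_i \right)

where c_r is the total travel time of the route.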

Since we’re trying to minimize the reduced cost, the lambda and mu values associated with each location can be treated as “prizes” when deciding which locations should be visited during the column/route generation procedure. If a location has a large “prize” and is only a short distance away, it’d likely be worth adding to the route.

When generating these routes, I originally started with a fairly simple greedy algorithm to make sure everything was working as intended. Each time the LP was resolved, the algorithm started at the “depot” and chose the next stop by looking for the location with the lowest “distance – prize” value. Then, when a new stop was added to the route, the algorithm checked whether or not the route’s overall reduced cost was negative. If it was, this route was added to the subset and the next stop was chosen based on the same criterion as before. This procedure ended when the “distance – prize” value became positive or when the maximum number of stops was reached, whichever came first.

Building off of this simple heuristic, I then added a “weighted coin flip” to the algorithm, which allowed many more routes to be generated during each iteration of the overall while loop. Weights were assigned to each location based on their “distance – prize” value so that the locations with the largest negative values were most likely to be added to the route. By including these weighted probabilities in the algorithm, I was able to generate a far greater number of routes each time the LP was resolved.
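My actual implementation was in Gurobi Python; purely to illustrate the idea, here is a rough sketch of the randomized greedy construction in Julia. Everything in it is hypothetical: locations are integer indices with the depot at index 1, dist is a travel-time matrix, and prize holds the dual “prizes” described above.

# Illustrative sketch only; not the code used for the results below.
function generate_route(dist, prize, locations, max_stops)
	depot = 1
	route, current, reduced_cost = [depot], depot, 0.0
	while length(route) - 1 < max_stops
		candidates = [loc for loc in locations if !(loc in route)]
		scores = [dist[current, loc] - prize[loc] for loc in candidates]
		all(s -> s >= 0, scores) && break          # no attractive stop left
		# weighted coin flip: more negative "distance - prize" means more weight
		weights = [max(-s, 0.0) for s in scores]
		r = rand() * sum(weights)
		k = 1
		while r > weights[k]
			r -= weights[k]
			k += 1
		end
		reduced_cost += dist[current, candidates[k]] - prize[candidates[k]]
		current = candidates[k]
		push!(route, current)
	end
	reduced_cost += dist[current, depot]           # close the route at the depot
	return route, reduced_cost                     # caller keeps it if the cost is negative
end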

Once all of this had been implemented, the solution I found showed a 29% improvement in the overall travel time when compared to the routes that are currently being used. Additionally, the results also indicated that only four trucks would be needed to complete these routes in the future, which is one less than the current number of trucks. What this means in more practical terms is that one employee could probably be transitioned to more valuable work moving forward. Overall, these results seem very promising and could lead to substantial time and cost savings if implemented.