As you can see from the posts on this blog, student blogging was a great idea. I’m grateful for the opportunity to work with such a talented group of students. I highly recommend browsing the archive to see what they had to say.


Argon and Ziya [2] look at a queuing system with different priorities of patients. The specific instance that we will consider is an EMS dispatch system. Patients arriving to the system are either type 1 patients, who need immediate treatment, or type 2 patients, who should wait until all type 1 patients are served. The dispatcher, however, only receives a signal indicating how likely a patient is to be type 1. This signal need not be an actual probability; it is enough that the signals can be ordered so that signals from type 1 patients are larger than those from type 2 patients in the likelihood ratio ordering. The dispatcher must use these signals to assign patients to priority levels, where all patients at a higher priority level must be served before any patient at a lower level receives service.

It is worth noting that Argon and Ziya assume penalties are linearly related to waiting time for most of the paper. When linearity holds, increasing the number of priority levels that dispatchers can assign patients to never results in worse outcomes, and may in fact result in better outcomes. Following this thought to its logical conclusion, the authors argue for a Highest-Signal-First (HSF) policy, which serves patients in order from most likely to be type 1 to least likely.

A more important take-away to me is that a two-priority policy achieves a majority of the improvement HSF sees over First-Come-First-Served (FCFS) while being far more practical to implement. Better yet, when the authors investigate nonlinear wait time penalties, the two-priority policy consistently outperforms HSF and at worst performs on par with FCFS. Unfortunately, they only look at a single nonlinear function, one that increases the penalty quadratically with wait time. This is not very representative of cardiac arrest patients, who suffer most of their penalty in the first minutes after their call [3].

A worthwhile extension of this research would be to consider the case where patient penalties are exponentially decaying (similar to what we expect for cardiac arrest victims) and compare two-priority and three-priority policies to FCFS. These policies are easier to implement and are used in practice.

Further, this can be combined with historic data on the accuracy of the ‘signal’ the dispatchers receive to determine best practices for when to assign a patient to high priority based on system load. Argon and Ziya argue that as the system load increases, we should become less discriminating and accept more patients as high priority, but this does not necessarily hold in the exponentially decaying case. Instead, it may be better to become more discriminating in labeling patients as high priority, since serving a type 1 patient yields a major benefit only if it is done quickly.

I believe that by considering the possibility that our information is not quite accurate, we can develop better policies that work under a variety of conditions. Whether this means using a deterministic policy only as a guide or, better yet, determining a policy that best mitigates the effects of imperfect information as above, it is better than simply ignoring the problem.

[1] Rath, Stefan, and Walter J. Gutjahr. “A math-heuristic for the warehouse location–routing problem in disaster relief.” *Computers & Operations Research* 42 (2014): 25-39.

[2] Argon, Nilay Tanik, and Serhan Ziya. “Priority assignment under imperfect information on customer type identities.” *Manufacturing & Service Operations Management* 11.4 (2009): 674-693.

[3] Valenzuela, Terence D., et al. “Estimating effectiveness of cardiac arrest interventions: a logistic regression survival model.” *Circulation* 96.10 (1997): 3308-3313.

**Equity is important in public sector operations research**. This is something that perhaps shouldn’t have been surprising to me, but was. I was intrigued by the idea that it’s possible to build characterizations of equity into a mathematical model and that such a model can be constructed in a myriad of ways. Some ways of modeling equity are certainly better than others, as we saw in Sam’s presentation of the paper “Modeling effectiveness-equity trade-offs in public service delivery systems,” but the inclusion of an equity measure is critical in many public sector problems such as assigning children to schools. [1]

**Bilevel optimization is cool (and hard)**. While talking about applications of bilevel optimization, we learned about some interesting ways it can be used to solve tricky problems with multiple decision makers. In Eli’s presentation about stochastic network interdiction, we saw how network interdiction problems work like a Stackelberg game where the leader tries to take links out of a network so the follower has to find other routes. [2] I also presented a paper about nuclear-weapons project interdiction that used bilevel optimization in a project management setting where the leader tried to make the follower’s tasks take as long as possible. [3]

**There are a lot of different ways to do facility location.** First of all, when deciding where to locate a facility, what’s our goal? Do we want ambulances as close to as many people as possible? Do we want to place distribution centers so that all customers are “covered” (i.e., within some predefined distance radius)? Once we know what we want, how do we model it? Should we use a p-median model? Should we include queuing – and therefore randomness – in the model? Once we finally have a model, we have to solve it, which is often hard since facility placement requires binary decisions about whether to build a particular facility or not. I presented a paper that provided a novel approach to problems with random budgets, such as facility location problems, that involved building a two-stage stochastic program. [4]

**Presentations work a lot better when you have a moderator**. I got to do something this semester I’ve never done before: moderate someone’s presentation. Not only was it fun, it encouraged me to read the paper (or at least the slides) *before* watching the presentation and really think about what made sense and what didn’t, because it was my job to come up with several questions for the person I was moderating. I think it’s a great idea to have assigned student moderators and if I teach a class someday I’ll definitely have my students moderate for each other.

**It’s possible to do good with good OR**. Early in the semester, we discussed the idea of “doing good with good OR,” which basically boiled down to the question of whether our strategic problem solving was actually helping anyone. While there have certainly been times OR has done more harm than good, I’d say our ultimate answer was a resounding “Yes!” We saw many examples of the ways OR had been applied to solve real-world public sector problems; I thought it was encouraging to know our work has the potential to make life better for someone at the end of the day, whether it’s sending children to the best school or placing an ambulance where it has the greatest potential to save someone’s life (see [5] for one great example).

Anyway, all this is to say it was a great semester. Thanks for reading, and here’s to doing even better with even better OR in the future!

(PS: I’m very sorry for the horrible pun that is this post title.)

[1] Mandell, Marvin B. “Modelling effectiveness-equity trade-offs in public service delivery systems.” *Management Science* 37.4 (1991): 467-482.

[2] Cormican, Kelly J., David P. Morton, and R. Kevin Wood. “Stochastic network interdiction.” *Operations Research* 46.2 (1998): 184-197.

[3] Brown, Gerald G., et al. “Interdicting a nuclear-weapons project.” *Operations Research* 57.4 (2009): 866-877.

[4] Koç, Ali, and David P. Morton. “Prioritization via stochastic optimization.” *Management Science* 61.3 (2014): 586-603.

[5] Hutton, David W., Margaret L. Brandeau, and Samuel K. So. “Doing good with good OR: supporting cost-effective hepatitis B interventions.” *Interfaces* 41.3 (2011): 289-300.

The TSA PreCheck program gives you access to a special, faster security line when departing from airports within the US. With the Global Entry program, trusted travelers can skip the lines at passport control and customs when entering the United States, and they also enjoy TSA PreCheck. These programs have grown in popularity and utilization since they were introduced.

According to recent research by Stewart and Mueller (2015), their risk and cost-benefit analysis of the PreCheck program shows that while the co-benefits of PreCheck exceed $1 billion per year, the loss of benefits from the increase in risk is less than $50 million per year. The increase in risk reduction from PreCheck is 0.5%, assuming that PreCheck correctly identifies low-risk passengers.

However, I wasn’t able to find a quantitative study on Global Entry (and the like). The security costs and financial benefits associated with TSA PreCheck and Global Entry would be very different, as the screening processes involved are very different. They also have different eligibility requirements; only US citizens can participate in TSA PreCheck, while Global Entry is also open to citizens of certain foreign countries.

Another interesting direction is random inspection. Global Entry users are randomly referred to additional inspection. Questions like how to design the selection to minimize type 1 and type 2 errors, and how often random inspections should occur, are well suited to OR methodologies.

There is huge room for our math to improve aviation security as well as the customer experience. However, to be fair, I will finish this post by referring to the recent event showing why we should avoid solving math on a flight…

Reference:

- Stewart MG, Mueller J (2015) Responsible policy analysis in aviation security with an evaluation of PreCheck. Journal of Air Transport Management 48:13-22.
- Wong S, Brooks N (2015) Evolving risk-based security: A review of current issues and emerging trends impacting security screening in the aviation industry. Journal of Air Transport Management 48:60-64.


First things first! Let’s tell Julia we need to use the JuMP package for creating a model and the CPLEX package for accessing the CPLEX solver.

using JuMP, CPLEX

Previously, we defined a set of teams in the Big Ten and a set of allowable days for play. To avoid dealing with byes on a team schedule, we specified there to be 13 days. We’ll create an array of days storing the numbers 1 through 13.

M = ["Ind" "UMD" "UMich" "MSU" "OSU" "Penn" "Rtgrs" "Ill" "Iowa" "UMN" "UNL" "NU" "Purd" "UW"]
D = collect(1:13)

We now create a JuMP model object with CPLEX as the solver. This model will have variables, constraints, and an objective added to it. CPLEX options can be passed as arguments into the CplexSolver object; e.g., `CPX_PARAM_THREADS=4`.

model = Model(solver=CplexSolver())

The binary decision variable x[i,j,t] will be 1 if team i hosts team j on day t, and 0 otherwise. Let’s add these variables to our model. This is accomplished with the `@variable` macro. JuMP is even kind enough to allow us to index our variables with our team names!

@variable(model, x[M,M,D], Bin)

Obviously, a team can’t play itself. Let’s set all binary variables of the form x[i,i,t] equal to 0, where i is a team and t is a day. Constraints are added to JuMP models using the `@constraint` macro.

for i in M
    for t in D
        @constraint(model, x[i,i,t] == 0)
    end
end

Our original model included the constraint that each team can play at most once per day, whether they are home or away. The mathematical formulation of this constraint is below: for each team i and day t,

∑_{j ≠ i} ( x[i,j,t] + x[j,i,t] ) ≤ 1.

The following code in Julia adds this exact constraint to our model. We add this constraint for every combination of teams (i ∈ M) and days (t ∈ D).

for i in M
    for t in D
        @constraint(model, sum{x[i,j,t] + x[j,i,t], j in setdiff(M,[i])} <= 1)
    end
end

The `sum` function sums the expression over the indices j ∈ M ∖ {i}. The Julia command for referencing this set of indices is `setdiff(M,[i])`.

Because we want a round-robin schedule, each team must play every other team exactly once. In the first blog post, we saved ourselves from adding redundant constraints by adding a constraint for all i and j such that i < j, rather than for all i ≠ j.

We add this constraint to the JuMP model for all i and j satisfying i < j.

for i in 1:length(M)
    for j in M[i+1:end]
        @constraint(model, sum{(x[M[i],j,t] + x[j,M[i],t]), t in D} == 1)
    end
end

So far, there’s nothing in our model that prevents a team from playing all home or all away games. Let’s make sure each team plays either 6 or 7 home games, as well as 6 or 7 away games. These can be accomplished together by stipulating that all teams play at least ⌈(|D| − 1)/2⌉ = 6 home games. Our mathematical constraint was, for each team i,

∑_{j ≠ i} ∑_{t ∈ D} x[i,j,t] ≥ 6.

In Julia, we will implement this constraint by simultaneously summing over the two indices mentioned above: j ∈ M ∖ {i} and t ∈ D.

for i in M
    @constraint(model, sum{x[i,j,t], j in setdiff(M,[i]), t in D} >= ceil((length(D) - 1)/2))
end

The final constraints explicitly provided in the first blog post limited the number of consecutive home and away games. This is important to facilitate a healthy, “mixed” schedule. These constraints stated that a team cannot play more than two home games within any consecutive three-game window. Similarly, teams cannot play more than two away games within any consecutive three-game window.

The JuMP analog of these constraints again sums over two indices simultaneously.

for i in M
    for t in 1:length(D)-2
        @constraint(model, sum{x[i,j,s], j in M, s in t:t+2} <= 2)
        @constraint(model, sum{x[j,i,s], j in M, s in t:t+2} <= 2)
    end
end

These are all of the constraints we added to the original model. All that’s left is to solve the model.

status = solve(model)

The `solve` command will return a symbol, stored in the `status` variable, that lets us know how the model solved. It could take on a value of `:Optimal`, `:Unbounded`, or `:Infeasible`, among other values. Fortunately for us, this model returned an “optimal” value. Remember that at this point, we are just searching for a schedule feasible for our constraints. After tidying up the output, we have a complete schedule.

However, this may not be the perfect schedule. Notice that there are *23* instances of teams playing back-to-back away games. Let’s say we wanted to reduce this number as much as possible. How would we do it?

One way to minimize the total number of back-to-back away games is to introduce a binary variable y[i,t] equal to 1 if team i has back-to-back away games beginning on day t, and 0 otherwise.

@variable(model, y[M,1:length(D)-1], Bin)

These variables should be triggered to equal 1 if a team plays two away games in a row. This can be accomplished with the following constraint, courtesy of Felix the Cat’s Slide of Tricks from ISyE 323: for each team i and each day t ≤ |D| − 1,

∑_{j} ( x[j,i,t] + x[j,i,t+1] ) ≤ 1 + y[i,t].

The JuMP constraint looks very similar to the previous “window” constraints.

for i in M
    for t in 1:length(D)-1
        @constraint(model, sum{x[j,i,s], j in M, s in t:t+1} <= 1 + y[i,t])
    end
end

The last step is to add an objective function to our JuMP model. We want to minimize the total number of y variables that are equal to 1.

Objective functions are added to JuMP models with the `@objective` macro. We declare the sense of the objective function to be `Min` because we are minimizing our linear objective. Summations in the objective function behave the same way as they do in the constraints.

@objective(model, Min, sum{y[i,t], i in M, t in 1:length(D)-1})

Our model is complete! After executing the `solve(model)` command once more, we have a new solution. This new model is very difficult to solve, so we present an improved solution obtained after running the solver for a few minutes.

This schedule features *7* instances of back-to-back away games. This is an enormous improvement over our earlier solution!

JuMP is also capable of more advanced solving techniques, such as solver callbacks for lazy constraints and user cuts. If you’re interested in trying out Julia/JuMP, you can download Julia here. Packages are incredibly easy to add to your Julia distribution (`Pkg.add("JuMP")`). Happy optimizing!

While online shopping is increasing every day, retailers try to increase it even more during special days like Mother’s Day, Christmas, etc. Optimization is needed more and more to achieve this, especially around these days. According to the Mother’s Day Consumer Intentions and Actions Survey, the average person spends around $127 on a Mother’s Day gift. Hence, these days have big potential to increase profit.

I will try to list some of the ideas here:

-Searching keywords:

Since consumers will not just navigate directly to their webpages, retailers should use appropriate keywords on their webpages so they can be reached from more general searches. Certain keywords make a big difference in special-day marketing. Temporarily expanding keyword lists with terms about gifts, gift ideas, etc. during these days gives retailers a chance to increase sales. For instance, 64% of Mother’s Day searches include words like “ideas”, “homemade”, “best”, “unique” and “cheap”.

-Optimization of voice traffic:

These special days are also the busiest times for phone services, which creates a need to optimize voice channel usage. Call centers of online stores should plan for these days separately and be ready for them. Using previous years’ data, the necessary information can be obtained to maximize customer satisfaction and, in turn, profit.

Generally, Erlang models are used to solve these issues.

An Erlang B traffic model can be used to model blocked calls. Assumptions in this model:

- Number of sources is infinite
- Call arrival pattern is random
- All blocked calls are cleared
- Hold times are exponentially distributed

An Erlang C model can be used to find the number of agents needed for incoming call volume and the probability of delaying a call. Assumptions for this model:

- Number of sources is infinite
- Call arrival pattern is random
- Blocked calls are delayed
- Hold times are exponentially distributed
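Both formulas are easy to compute. Here is a quick Python sketch (function names are my own; a call center would plug in forecast traffic measured in Erlangs), using the standard numerically stable recursion for Erlang B and deriving Erlang C from it:

```python
def erlang_b(traffic, servers):
    """Erlang B: probability an arriving call is blocked, given offered
    traffic (in Erlangs) and the number of lines. Uses the recursion
    B(E, 0) = 1, B(E, m) = E*B(E, m-1) / (m + E*B(E, m-1))."""
    b = 1.0
    for m in range(1, servers + 1):
        b = traffic * b / (m + traffic * b)
    return b

def erlang_c(traffic, agents):
    """Erlang C: probability an arriving call must wait because all
    agents are busy (requires traffic < agents for a stable system)."""
    b = erlang_b(traffic, agents)
    return agents * b / (agents - traffic * (1.0 - b))

# Example: 10 Erlangs of Mother's Day call traffic offered to 12 agents
print(round(erlang_b(10, 12), 3))   # blocking probability (Erlang B)
print(round(erlang_c(10, 12), 3))   # probability a caller waits (Erlang C)
```

Running the recursion over previous years’ traffic forecasts gives the number of agents needed to keep blocking or waiting below a target level.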

-Optimization of cost and consumption strategies:

Traditional pricing systems do not have a way to reflect pricing strategies for special days. If special days are not separated from other days, demand will be underestimated. For example, if Mother’s Day is simply counted as part of the spring season, the store can underperform on Mother’s Day gifts. If a store runs a promotion during these special days, it is important to understand the effects of that promotion on the whole store. Store managers should consider which type of customer would be happy with which kind of promotion. Scientific, data-based promotions have the potential to produce more profit.

-The most important optimization:

Of course, do not forget to maximize the effect of your words, which will make your mother feel like the best mom in the entire world when you celebrate her on Mother’s Day.

Optimize on…

References:

https://yourdigitalresource.com/market-online-business-mothers-day/

https://www.entrepreneur.com/article/219451

http://www.cisco.com/en/US/technologies/tk869/tk769/technologies_white_paper0900aecd8070329d.html

http://www.cisco.com/c/en/us/td/docs/ios/solutions_docs/voip_solutions/TA_ISD.html


In my final blog post, I’m going to once again be writing about the vehicle routing problem I’ve been working on over the past few months. For those of you checking in for the first time, I highly recommend reading “Part 1” in this two part series before continuing:

As I mentioned in Part 1, enumerating all possible routes in this model is not practical, so a column generation procedure was used to find a reasonable subset of “good” routes. Although the column generation procedure for this problem is not as advanced as what was used in the political districting model I presented in class, it’s still able to generate fairly substantial improvements.

Before diving into the column generation procedure, though, here’s a brief overview of the algorithm I implemented in Gurobi Python:

- Start with a feasible subset of routes
- Solve the linear relaxation of the problem
- While improvements to the optimal LP solution can be found:
  - Perform the column generation procedure
  - Add new columns (routes) to the subset
  - Re-solve the LP
- Change the decision variables to binary
- Solve the IP

Column generation

Just like in the political districting model, the point of this column generation procedure is to find columns with negative reduced costs. This stems from the fact that all nonbasic variables must have nonnegative reduced costs if the current LP solution to a minimization problem is optimal. In this model, the reduced cost of a route can be expressed, roughly, as the route’s total travel time minus the dual values (the lambdas and mus) associated with the locations it visits.

Since we’re trying to minimize the reduced cost, the lambda and mu values associated with each location can therefore be treated as “prizes” when deciding what locations should be visited during the column/route generation procedure. If a location has a large “prize” and is only a short distance away, it’d likely be worth adding to the route.

When generating these routes, I originally started with a fairly simple greedy algorithm to make sure everything was working as intended. Each time the LP was resolved, the algorithm started at the “depot” and chose the next stop by looking for the location with the lowest “distance – prize” value. Then, when a new stop was added to the route, the algorithm checked whether or not the route’s overall reduced cost was negative. If it was, this route was added to the subset and the next stop was chosen based on the same criterion as before. This procedure ended when the “distance – prize” value became positive or when the maximum number of stops was reached, whichever came first.
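A minimal sketch of this greedy route builder in plain Python (names like `dist` and `prize` are my own placeholders, not the actual implementation, and the outer check on the whole route’s reduced cost is omitted):

```python
def greedy_route(depot, locations, dist, prize, max_stops):
    """Greedily build one candidate route: from the current stop, pick the
    location with the lowest (distance - prize) score, stopping when the
    best score turns nonnegative or the route is full."""
    route = [depot]
    current = depot
    unvisited = set(locations)
    while len(route) - 1 < max_stops:
        best, best_score = None, 0.0  # only accept strictly negative scores
        for loc in unvisited:
            score = dist[current][loc] - prize[loc]
            if score < best_score:
                best, best_score = loc, score
        if best is None:  # no remaining stop looks profitable
            break
        route.append(best)
        unvisited.remove(best)
        current = best
    return route
```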

Building off of this simple heuristic, I then added a “weighted coin flip” to the algorithm, which allowed many more routes to be generated during each iteration of the overall while loop. Weights were assigned to each location based on their “distance – prize” value so that the locations with the largest negative values were most likely to be added to the route. By including these weighted probabilities in the algorithm, I was able to generate a far greater number of routes each time the LP was resolved.
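The “weighted coin flip” step can be sketched like so (the exact weighting scheme is my assumption; here a candidate’s weight is just the magnitude of its negative score):

```python
import random

def weighted_next_stop(current, unvisited, dist, prize, rng):
    """Pick the next stop randomly: every location with a negative
    (distance - prize) score is a candidate, and more negative scores
    get proportionally larger selection weights."""
    scored = [(loc, dist[current][loc] - prize[loc]) for loc in unvisited]
    candidates = [(loc, s) for loc, s in scored if s < 0]
    if not candidates:
        return None  # no stop would push the reduced cost down
    weights = [-s for _, s in candidates]
    return rng.choices([loc for loc, _ in candidates], weights=weights, k=1)[0]
```

Because the choice is random rather than deterministic, repeated calls produce many distinct candidate routes per LP re-solve.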

Once all of this had been implemented, the solution I found showed a 29% improvement in the overall travel time when compared to the routes that are currently being used. Additionally, the results also indicated that only four trucks would be needed to complete these routes in the future, which is one less than the current number of trucks. What this means in more practical terms is that one employee could probably be transitioned to more valuable work moving forward. Overall, these results seem very promising and could lead to substantial time and cost savings if implemented.

As Amanda and Eli mentioned in their previous blog posts, network interdiction models have lots of applications, especially for the military and homeland security.

In network interdiction models, we have *a leader* who destroys arcs to disrupt the follower’s (the enemy’s) flows. In the paper “Algorithms for Discrete and Continuous Multicommodity Flow Network Interdiction Problems,” the model allows for more than one supply node for each commodity and multiple demand nodes. The leader’s objective is to interdict arcs so as to minimize the follower’s optimal reward, subject to a budget B. The paper considers both discrete and continuous interdiction. In the discrete case each arc is either fully destroyed or fully operational, while in the continuous case each arc can be partially destroyed, so the arc’s capacity changes with the percentage of disruption.

The mathematical model is, schematically, a min-max problem: the leader chooses interdiction levels x, subject to the budget constraint ∑_a c_a x_a ≤ B, to minimize the maximum reward the follower can collect with flows y, where each arc a’s capacity is scaled down to u_a (1 − x_a).

Here y is the follower’s decision variable and x is the leader’s decision variable: x_a is the percentage of disruption of arc a, and y gives the amount of flow passing through each arc. The authors present several methods to solve both the discrete and continuous interdiction models. Here I would like to focus on one specific application area of interdiction models… like a zombie attack!!!

To reach a safe place during a zombie attack, we can use shortest path models, and with an interdiction model we can also slow down the attack. In this context, we can consider the enemy to be the zombies and the leader to be the humans, and adapt the problem to the disruption of a zombie attack, which is one of the most IMPORTANT!!! homeland security issues. Now the objective becomes minimizing the zombies’ success, since they want to eat human brains and we want to escape from them. There are some guidelines out there on how to escape from a zombie attack, but as operations researchers, we should use our own tools. We can suppose that after having their first bite of brain they would like to eat more and more… but I’m not sure – just guessing. Anyway, with this network interdiction idea, we can decrease the number of our friends who get slaughtered and recruited… Here we see how interdiction models help us secure our homeland not just from bad people but also from zombies.
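To make the leader-follower idea concrete, here is a toy brute-force sketch in Python (my own illustration, not the authors’ algorithm): the humans remove up to `budget` road segments so that the zombies’ shortest path to us becomes as long as possible.

```python
import heapq
import itertools

def shortest_path_len(arcs, s, t):
    """Dijkstra over a dict {(u, v): length}; returns inf if t is unreachable."""
    graph = {}
    for (u, v), w in arcs.items():
        graph.setdefault(u, []).append((v, w))
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def best_interdiction(arcs, s, t, budget):
    """Leader's brute force: try every set of at most `budget` arcs to
    remove and keep the one that maximizes the follower's shortest path."""
    best_len, best_cut = shortest_path_len(arcs, s, t), ()
    for k in range(1, budget + 1):
        for cut in itertools.combinations(arcs, k):
            remaining = {a: w for a, w in arcs.items() if a not in cut}
            length = shortest_path_len(remaining, s, t)
            if length > best_len:
                best_len, best_cut = length, cut
    return best_len, best_cut
```

Enumeration only works for tiny graphs, which is exactly why the paper develops specialized algorithms, but it shows the bilevel structure: an outer leader decision wrapped around an inner shortest-path (follower) problem.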

If you want to learn more about zombies, there is a whole archive at the “Punk Rock Operations Research” blog (https://punkrockor.com/tag/zombies/) and even a zombie apocalypse course at the University of Michigan to teach students survival skills.

Note: Stochastic network interdiction has also been studied in the literature; a great source is Janjarassuk and Linderoth.

References:

Lim, Churlzu, and J. Cole Smith. “Algorithms for discrete and continuous multicommodity flow network interdiction problems.” *IIE Transactions* 39.1 (2007): 15-26.

Janjarassuk, U. and Linderoth, J. (2008), Reformulation and sampling to solve a stochastic network interdiction problem. Networks, 52: 120–132. doi:10.1002/net.20237


In the paper, Rath and Gutjahr use a variant of the bounded objective method called the adaptive epsilon constraint method (AECM). Unfortunately, for anything beyond a bi-objective formulation, this method becomes quite nasty and complicated. Suffice it to say, where the epsilon constraint method chooses predetermined bounds on the constrained objective, the AECM uses knowledge of the objective space from previous solutions (we are solving for the Pareto frontier) to create better constraint values for future searches. Hence, it *adapts* its constraints based on new data and is able to significantly decrease the total search time compared to the more naïve ECM.
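The basic flavor, where each solve informs the bound used in the next, can be seen in a toy Python sketch over a finite set of candidate solutions (this is the plain epsilon-constraint sweep, not Rath and Gutjahr’s full AECM):

```python
def epsilon_constraint_front(points):
    """Epsilon-constraint sweep over candidate solutions (f1, f2), both to
    be minimized: repeatedly 'solve' min f1 subject to f2 <= eps, then
    tighten eps just below the f2 value of the solution found. The
    surviving solutions trace out the Pareto frontier."""
    front = []
    eps = float("inf")
    while True:
        feasible = [p for p in points if p[1] <= eps]
        if not feasible:
            break
        best = min(feasible)       # min f1, ties broken by f2
        front.append(best)
        eps = best[1] - 1e-9       # each solution sets the next bound
    return front
```

In the real method each “solve” is an expensive optimization, so reusing information from earlier solves to set the bounds is what saves the search time.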

Using these constraints, Rath and Gutjahr then seek the optimal objective value by narrowing upper and lower bounds on the solution until they achieve equality, and hence optimality. From here, the primary contribution of the paper is in developing heuristics to aid in the computation of these bounds. In each iteration, they add new constraints to the upper-bound problem to obtain a tighter but more computationally intense relaxation. Further, they use basic heuristics to find a feasible lower bound.

All this effort is aimed at finding a faster, if not quite as accurate, solution. So the practical question is whether this still provides a useful solution and, moreover, whether an exact one is even necessary.

I think it would be hard to argue that an approximate answer is not reasonably acceptable wherever an exact answer would have been useful. This comes simply from the assumption of perfect knowledge. Of all the assumptions, this is probably the worst. As Sam pointed out in class, do we even know that all of the network connections (roads) are still in place? Even if this is accounted for, isn’t it possible that they may be restored over time, making our solution suboptimal? Further, our knowledge of demand is limited to the information we have gathered so far, information that is itself inexact.

So the ability to compute the exact solution is of little use. We are merely looking for guidance, and for that a heuristic is just fine. In fact, I would argue that in a real aid deployment a heuristic is perhaps more useful, being faster and more understandable to non-practitioners while producing what is arguably an equally useful solution.

The other question is whether this complicated model is even worth solving. As was discussed last week, finding the Pareto frontier is not an insignificant problem and is perhaps best avoided if we reasonably can. Here, I think, is the crux of the matter. At least for the problem they are considering, I question the usefulness of the multi-objective criteria and the necessity of minimizing cost and maximizing service at the same time. Either the organization is working under a given budget or the budget can change based on increased donations. In either case, it makes as much sense to solve for the Pareto optimal solution under the current budget, and re-solve later if necessary. The lack of perfect information would necessitate these re-solves in any case.

The only real benefit in knowing the entire Pareto frontier is to get a good idea of the trade-offs. However, even this is of limited value given our lack of knowledge and the lag time between when an organization decides to contribute resources and when those resources are actually deployed. In that time, not only our knowledge but the actual situation on the ground, and hence the Pareto frontier, will likely have changed.

So I see models like this being most beneficial in domestic disasters, where we have better knowledge, faster response, and more control over the budget of the operation.

The most commonly adopted triage process in the United States is Simple Triage and Rapid Treatment (START). START classifies patients into the following four classes:

- Immediate: deemed to be critically injured and to require immediate intervention
- Delayed: injured but not expected to die within the first hour if treatment is delayed
- Expectant: presumed deceased or catastrophically injured; survival is not expected
- Minor: can walk away

After patients are classified, START gives the highest priority to Immediates, then to Delayed. Expectants and Minors are served after the two time-critical classes are cleared.

This is a simple and easy-to-implement prioritization scheme, but a number of studies have pointed out limitations of the START method and similar methods:

- There is no consideration for resource availability.
- A tacit assumption is that there is no deterioration.
- A worst-first strategy makes poor use of scarce resources: more salvageable victims may be left to deteriorate while limited resources are used on patients with less chance of survival.

In particular, Sacco et al. (2005, 2007) proposed the Sacco Triage Method (STM) to overcome these drawbacks. They set up a mathematical formulation of resource-constrained triage that maximizes the expected number of survivors, subject to the timing and availability of transport and treatment resources. The model also needs predictions of survival probabilities and how they change over time (deterioration). Their formulation fits a linear programming framework, so the problem can be solved efficiently for large-scale mass-casualty incidents (MCIs).
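Schematically, a resource-constrained triage LP in the spirit of STM looks like the following (the symbols are my own illustration, not Sacco et al.’s exact formulation): let $x_{c,t}$ be the number of class-$c$ patients whose treatment starts in period $t$, $p_c(t)$ the survival probability of a class-$c$ patient treated in period $t$ (decreasing in $t$ to capture deterioration), $R_t$ the treatment/transport capacity in period $t$, and $n_c$ the number of class-$c$ patients:

```latex
\begin{aligned}
\max_{x \ge 0} \quad & \sum_{c} \sum_{t} p_c(t)\, x_{c,t} \\
\text{s.t.} \quad & \sum_{c} x_{c,t} \le R_t \quad \text{for all } t, \\
& \sum_{t} x_{c,t} \le n_c \quad \text{for all } c.
\end{aligned}
```

The objective counts expected survivors directly, which is exactly what the worst-first START rule does not do.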

In their computational results, STM provided a higher expected number of survivors than START in all of the simulations. STM also appeared to mitigate the impact of declining resources when resource levels change, whereas survivorship under START drops dramatically.

In my presentation on Thursday, I will introduce another, very different approach to resource-based patient prioritization.

Soovin

References:

- Argon NT, Winslow JE, Ziya S (2011) Triage in the aftermath of mass-casualty incidents. Wiley Encyclopedia of Operations Research and Management Science.
- Sacco WJ, Navin DM, Fiedler KE, Waddell RK, II, Long WB, Buckman Jr RF (2005) Precise formulation and evidence-based application of resource-constrained triage. Acad. Emergency Medicine.
- Sacco WJ, Navin DM, Waddell RK, Fiedler KE, Long WB, Buckman RF (2007) A new Resource-Constrained triage method applied to victims of penetrating injury. J. Trauma: Injury, Infection, Critical Care.
- Uzun Jacobson E, Argon NT, Ziya S (2012) Priority assignment in emergency response. Oper. Res.