Expedited Screening Programs

The Transportation Security Administration and U.S. Customs and Border Protection offer several expedited traveler screening programs – TSA PreCheck, Global Entry, and related programs such as NEXUS and SENTRI.

The TSA PreCheck program gives you access to a dedicated, faster TSA security line when departing from airports within the US. With the Global Entry program, trusted travelers can skip the lines at passport control and customs when entering the United States, and they also enjoy TSA PreCheck benefits. These programs have grown in popularity and utilization since being introduced.

In a recent risk and cost-benefit analysis of the PreCheck program, Stewart and Mueller (2015) find that while the co-benefits of PreCheck exceed $1 billion per year, the loss of benefits from the increase in risk is less than $50 million per year. They also estimate that PreCheck increases risk reduction by about 0.5%, assuming that PreCheck correctly identifies low-risk passengers.

However, I wasn’t able to find a quantitative study on Global Entry (and the like). The security costs and financial benefits associated with TSA PreCheck and Global Entry would be quite different, since the screening processes involved differ substantially. They also have different eligibility: only US citizens can participate in TSA PreCheck, while Global Entry is also open to citizens of certain other countries.

Another interesting direction is random inspection. Global Entry travelers are occasionally referred at random for additional inspection. Questions such as how to design the selection rule to minimize Type I and Type II errors, and how frequent the random inspections should be, can be approached well with OR methodologies.
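Just to illustrate how a purely random referral rule could be evaluated, here is a small sketch I put together. Everything in it is hypothetical (the prevalence, the costs, and the capacity limit are made-up numbers, not figures from any agency or paper); it simply computes the Type I rate (a compliant traveler referred), the Type II rate (a non-compliant traveler missed), and an expected per-traveler cost, then picks the best referral rate under an inspection-capacity limit.

```python
# Toy model: choose the random-referral rate p for secondary inspection.
# All numbers below are hypothetical, for illustration only.

def evaluate(p, prevalence=1e-4, c_type1=5.0, c_type2=1e6):
    """Expected per-traveler cost of referring each traveler with probability p.

    Type I error: a compliant traveler is referred (cost = inconvenience).
    Type II error: a non-compliant traveler is NOT referred (cost = harm).
    """
    type1_rate = p                      # P(referred | compliant)
    type2_rate = 1.0 - p                # P(missed   | non-compliant)
    cost = (1 - prevalence) * type1_rate * c_type1 + prevalence * type2_rate * c_type2
    return type1_rate, type2_rate, cost

def best_rate(capacity=0.05, grid=1000):
    """Pick the referral rate with lowest expected cost, subject to an
    inspection-capacity limit (at most `capacity` of travelers referred)."""
    candidates = [i / grid for i in range(grid + 1) if i / grid <= capacity]
    return min(candidates, key=lambda p: evaluate(p)[2])

if __name__ == "__main__":
    p_star = best_rate()
    t1, t2, c = evaluate(p_star)
    print(f"referral rate {p_star:.3f}: Type I {t1:.3f}, Type II {t2:.3f}, cost {c:.4f}")
```

With a purely random rule and these made-up numbers, the expected cost is decreasing in the referral rate, so the “optimal” choice is simply to inspect up to capacity – which is exactly why smarter, risk-based selection rules are an interesting OR question.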

There is plenty of room for our math to improve aviation security as well as the customer experience. To be fair, though, I will finish this post by referring to a recent event that shows why we should avoid doing math on a flight.

Reference:

  • Stewart MG, Mueller J (2015) Responsible policy analysis in aviation security with an evaluation of PreCheck. Journal of Air Transport Management 48:13-22.
  • Wong S, Brooks N (2015) Evolving risk-based security: A review of current issues and emerging trends impacting security screening in the aviation industry. Journal of Air Transport Management 48:60-64.

 

 


Triage: START and STM

In the aftermath of a mass-casualty incident (MCI), there is a sudden increase in demand for emergency services. Because resources (ambulances, medical staff, etc.) are limited, even patients in critical condition may have to wait before receiving care, so the limited medical resources need to be rationed efficiently. This process is called triage. Patients are classified into triage classes according to the severity of their medical condition; those in the most severe classes are treated first, while the less severe groups wait to be treated afterwards.

The most commonly adopted triage process in the United States is Simple Triage and Rapid Treatment (START). START classifies patients into the following four classes:

  1. Immediate: deemed to be critically injured and to require immediate intervention
  2. Delayed: injured but not expected to die within the first hour if care is delayed
  3. Expectant: presumed deceased or catastrophically injured, with survival not expected
  4. Minor: able to walk away

After patients are classified, START gives the highest priority to the Immediate class, then to the Delayed class. The Expectant and Minor classes are served after the two time-critical classes are cleared.
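As a toy illustration (my own sketch, with made-up patients), START’s fixed priority ordering amounts to nothing more than sorting by class:

```python
# Minimal sketch of START-style prioritization: a fixed priority order over
# triage classes, ignoring resources and deterioration. Patient data are hypothetical.

START_PRIORITY = {"Immediate": 0, "Delayed": 1, "Expectant": 2, "Minor": 3}

patients = [
    {"id": 1, "triage_class": "Delayed"},
    {"id": 2, "triage_class": "Immediate"},
    {"id": 3, "triage_class": "Minor"},
    {"id": 4, "triage_class": "Immediate"},
    {"id": 5, "triage_class": "Expectant"},
]

# Serve patients strictly by class priority: the two time-critical classes first,
# then everyone else, regardless of how many resources are available.
service_order = sorted(patients, key=lambda p: START_PRIORITY[p["triage_class"]])
print([p["id"] for p in service_order])   # -> [2, 4, 1, 5, 3]
```

That simplicity is exactly what the list of limitations below is about.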

This is a simple, easy-to-implement prioritization scheme, but a number of studies have pointed out limitations of START and similar methods, such as:

  • There is no consideration for resource availability.
  • A tacit assumption is that there is no deterioration.
  • A worst-first strategy makes poor use of scarce resources: more salvageable victims could be left to deteriorate while limited resources are being used on patients with less chance of survival.

In particular, Sacco et al. (2005, 2007) proposed the Sacco Triage Method (STM) to overcome these drawbacks. They set up a mathematical formulation of resource-constrained triage that maximizes the expected number of survivors, subject to the timing and availability of transport and treatment resources. The model also requires predictions of survival probabilities and of how they change over time (deterioration). Their formulation fits the linear programming framework, so the problem can be solved efficiently even for large-scale MCIs.
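A stylized version of such a formulation (my own paraphrase under simplifying assumptions, not Sacco et al.’s exact model) could look like the following. Let $x_{it}$ be the number of class-$i$ patients whose treatment starts in period $t$, $p_{it}$ the survival probability of a class-$i$ patient treated in period $t$ (decreasing in $t$ to capture deterioration), $n_i$ the number of class-$i$ patients, and $c_t$ the treatment/transport capacity available in period $t$:

\begin{align*}
\max_{x \ge 0} \quad & \sum_i \sum_t p_{it}\, x_{it} && \text{(expected number of survivors)} \\
\text{s.t.} \quad & \sum_t x_{it} \le n_i && \text{for every class } i, \\
& \sum_i x_{it} \le c_t && \text{for every period } t.
\end{align*}

This is a transportation-type LP, so it has integral optimal vertex solutions and scales easily, consistent with the claim that large-scale MCIs can be handled efficiently.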

In their computational results, STM provided a higher number of expected survivors than START in all of the simulations. STM also appeared to mitigate the impact of declining resources when resource levels change, whereas survivorship under START drops dramatically.

In my presentation on Thursday, I will introduce another, very different approach to resource-based patient prioritization.

Soovin

References:

  • Argon NT, Winslow JE, Ziya S (2011) Triage in the aftermath of mass-casualty incidents. Wiley Encyclopedia of Operations Research and Management Science.
  • Sacco WJ, Navin DM, Fiedler KE, Waddell RK, II, Long WB, Buckman Jr RF (2005) Precise formulation and evidence-based application of resource-constrained triage. Acad. Emergency Medicine.
  • Sacco WJ, Navin DM, Waddell RK, Fiedler KE, Long WB, Buckman RF (2007) A new Resource-Constrained triage method applied to victims of penetrating injury. J. Trauma: Injury, Infection, Critical Care.
  • Uzun Jacobson E, Argon NT, Ziya S (2012) Priority assignment in emergency response. Oper. Res.

Stable matching examples

Next week I will present a paper on public school choice based on a mechanism design approach. Mechanism design approaches to matching schools and students have a long history. As background, here is an introduction to one of the earliest studies, Gale and Shapley (1962).

We first look at what is called the College Admissions Problem. A set of applicants is to be assigned among a set of colleges. Each applicant ranks the colleges in order of preference, and similarly each college ranks the applicants in order of preference. We want to find an assignment that is stable and optimal:

* An assignment of applicants to colleges is called unstable if there are two applicants X and Y, assigned to colleges A and B respectively, such that Y prefers college A to B and college A prefers Y to X.

* A stable assignment is called optimal if every applicant is at least as well off under it as under any other stable assignment.

To solve the problem, the deferred-acceptance procedure is introduced. First, all applicants apply to their first-choice college. Each college places its most preferred applicants (up to its quota) on a waiting list and rejects the rest. The rejected applicants then apply again, this time to their second choice, and each college compares the new applicants together with those already on its waiting list, keeps the best of them, and rejects the rest. The procedure terminates when every applicant is either on a waiting list or has been rejected by every college on his/her list, at which point each college admits everyone on its waiting list. This deferred-acceptance procedure is proven to produce a stable, optimal matching between applicants and colleges.

Another (and possibly more interesting) well-known matching market is the Marriage Problem. Think of a community with n boys and n girls. To use the deferred-acceptance procedure, let each boy propose to his favorite girl. Each girl who receives more than one proposal rejects all but her favorite among those who have proposed to her. However, she does not accept him yet; she keeps him on a string to allow for the possibility that someone better may come along later (!).

Then comes the second round. The boys who were rejected last time now propose to their second-favorite girls. Each girl receiving proposals compares all of her proposers (including the boy already on her string!) and rejects all but her favorite. The procedure continues until every girl has received a proposal, and then each girl finally accepts the boy on her string.

Now this community has n married couples forming a stable, optimal matching.
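Here is a minimal sketch of the boys-propose deferred-acceptance procedure, with a small hypothetical 3-by-3 preference example (the preference lists are made up just for illustration):

```python
# Minimal deferred-acceptance (Gale-Shapley) sketch for the marriage problem.
# Boys propose; girls keep their current favorite "on a string" and reject the rest.

def deferred_acceptance(boy_prefs, girl_prefs):
    """boy_prefs[b] / girl_prefs[g]: lists ordered from most to least preferred."""
    girl_rank = {g: {b: r for r, b in enumerate(prefs)} for g, prefs in girl_prefs.items()}
    next_choice = {b: 0 for b in boy_prefs}     # index of the next girl to propose to
    on_string = {}                              # girl -> boy currently held
    free_boys = list(boy_prefs)

    while free_boys:
        b = free_boys.pop()
        g = boy_prefs[b][next_choice[b]]
        next_choice[b] += 1
        current = on_string.get(g)
        if current is None:
            on_string[g] = b                    # girl holds her first proposer
        elif girl_rank[g][b] < girl_rank[g][current]:
            on_string[g] = b                    # new proposer is preferred
            free_boys.append(current)           # previously held boy is rejected
        else:
            free_boys.append(b)                 # proposal rejected
    return {boy: girl for girl, boy in on_string.items()}

boys = {"b1": ["g1", "g2", "g3"], "b2": ["g1", "g3", "g2"], "b3": ["g2", "g1", "g3"]}
girls = {"g1": ["b2", "b1", "b3"], "g2": ["b1", "b3", "b2"], "g3": ["b3", "b2", "b1"]}
print(deferred_acceptance(boys, girls))   # a stable, boy-optimal matching
```

Because the boys propose, the resulting matching is optimal for the boys among all stable matchings; running the girls-propose version generally gives a different stable matching that favors the girls.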

Soovin

Reference: Gale D, Shapley LS (1962) College Admissions and the Stability of Marriage. The American Mathematical Monthly.

 

Playing a round of disaster management

Coming right after spring break (and therefore with no new class material covered), this post brings up a very casual topic: a board game about disaster management, more specifically, Pandemic.

It is one of the most popular cooperative board games, published by Z-Man Games in 2007. (I guess many of you may have played it before.) Four diseases have broken out across the world, and players, acting as CDC agents, work cooperatively to stop their spread and discover all the cures before the diseases wipe out the world.

I am bringing this up because the framework of the game is really well suited to optimization models. The game board is a world map, which is essentially a network with 48 major cities as nodes and equal-length arcs connecting them, as you can see in the picture. Disease arrivals in cities as well as contagion outbreaks are probabilistic events, implemented through card-deck frequencies. Moreover, since this is a co-op game, there is effectively a single stakeholder, a setting welcomed by operations researchers.

Players can make many kinds of strategic decisions, and one of the main ones is where to locate research centers so that all cities can be reached easily from them. This can be viewed as either a maximal covering or a p-median facility location problem. In fact, some people have already studied this problem using basic network analysis. Quick spoiler: with 3 research centers, Atlanta/Hong Kong/Cairo minimizes the average distance. For the full list of ‘optimal placement’ decisions they worked out, see Optimal Placement of Research Centers in Pandemic and Overanalyzing Board Games: Network Analysis and Pandemic.
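To give a flavor of the p-median computation (on a tiny made-up graph, not the actual 48-city Pandemic map), a brute-force sketch looks like this:

```python
# Brute-force p-median on a toy graph (NOT the real Pandemic map).
# Enumerate all sets of p candidate research-center locations and pick the
# one minimizing the average shortest-path distance to all cities.

from itertools import combinations
import networkx as nx

# Hypothetical mini-network: nodes are cities, edges have unit length,
# mimicking the board's equal-length connections.
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"),
         ("E", "A"), ("B", "E"), ("C", "F"), ("F", "G")]
G = nx.Graph(edges)

def p_median(graph, p):
    dist = dict(nx.all_pairs_shortest_path_length(graph))
    best_set, best_cost = None, float("inf")
    for centers in combinations(graph.nodes, p):
        # each city is served by its nearest chosen center
        cost = sum(min(dist[c][v] for c in centers) for v in graph.nodes)
        if cost < best_cost:
            best_set, best_cost = centers, cost
    return best_set, best_cost / graph.number_of_nodes()

centers, avg_dist = p_median(G, p=2)
print(centers, avg_dist)
```

On the real board one would plug in the actual city network; with 48 nodes and p = 3 there are only C(48,3) = 17,296 candidate sets, so brute force is still perfectly feasible.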

Some say that there is no fun left if we try to break everything down into efficient mathematical equations. I would rather argue that it is always interesting to see how our intuitive decisions compare with a computed ‘optimal’ one, or to study why discrepancies between the model and reality arise. And we cannot do either without doing the math.

Have fun!

Soovin

 

Variations of Hypercube Model

Last week’s class material covered the hypercube queuing model, its approximation, and some variations. As an extension, I am going to talk more about variations of the hypercube model in this post.

exact model and approximation model
These were covered in class in detail. See Larson (1974) for the exact model. The approximation model treats the servers as identical in order to reduce the state space from 2^n to 2n; see Larson (1975).
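To make the 2^n state space concrete, here is a small numerical sketch of the exact (zero-line-capacity, i.e., loss) hypercube for a toy 3-server system. The arrival rates, the common service rate, and the dispatch preference lists are all made up; the point is just to show how the generator is built over subsets of busy servers and solved directly.

```python
# Exact hypercube (loss version) for a toy system: n servers, geographic "atoms"
# with Poisson arrivals and fixed dispatch-preference lists.
# States are subsets of busy servers (2^n of them). All rates are made up.

import numpy as np
from itertools import product

n = 3                                   # servers -> 2^3 = 8 states
mu = 1.0                                # common service rate
lam = {0: 0.4, 1: 0.3, 2: 0.5}          # arrival rate per atom
pref = {0: [0, 1, 2], 1: [1, 2, 0], 2: [2, 0, 1]}   # dispatch preference lists

states = list(product([0, 1], repeat=n))            # 0 = idle, 1 = busy
index = {s: i for i, s in enumerate(states)}
Q = np.zeros((2 ** n, 2 ** n))

for s in states:
    i = index[s]
    for atom, rate in lam.items():
        # dispatch the most preferred idle server; if all are busy, the call is lost
        free = [srv for srv in pref[atom] if s[srv] == 0]
        if free:
            t = list(s); t[free[0]] = 1
            Q[i, index[tuple(t)]] += rate
    for srv in range(n):                # service completions
        if s[srv] == 1:
            t = list(s); t[srv] = 0
            Q[i, index[tuple(t)]] += mu
    Q[i, i] = -Q[i].sum()               # generator rows sum to zero

# Solve pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(2 ** n)])
b = np.zeros(2 ** n + 1); b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print({s: round(p, 4) for s, p in zip(states, pi)})
```

For n = 3 this is only an 8-by-8 system; the same construction at n = 20 would need over a million states, which is why the approximation of Larson (1975) matters.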

customer-dependent service rates
Jarvis (1985) provides a procedure for approximating the equilibrium behavior of multi-server loss systems with distinguishable servers and multiple customer types. Instead of using a fixed value for the mean service time, the mean service time is re-approximated at the end of each iteration using a weighted sum of dispatch probabilities and customer-dependent service rates. We covered this iterative procedure in class. There are also exact models with customer-dependent service rates: Atkinson et al. (2008) consider a queuing system describing an EMS with distinct geographical zones. Their exact hypercube model has 3^n states, so for computational tractability they propose two heuristic methods and a simulation approach for approximating the steady-state probabilities.

modeling co-located servers and dispatch ties
Servers are assumed to be dispatched according to a fixed-preference dispatching scheme (which is common in the hypercube family of queuing models). When servers are co-located at the same station, these equally preferred servers generate ties in the dispatch preference lists. To break the ties and distribute workload, Burwell et al. (1993) use a method called stacking, which splits a call type with tied preferred servers into multiple copies. Budge et al. (2009) address the co-located server problem with a different approach: they explicitly allow multiple vehicles per station through an extended iterative hypercube approximation procedure.

partial backup hypercube queuing model for EMS on highways
Iannoni and Morabito (2007) consider an EMS on highways in which only certain servers in the system can serve calls in a given region (partial backup). Calls are of different types and the servers are distinct. Multiple dispatch (1-3 servers per call) is also considered.

I myself am also studying a variation of this kind: applying a hypercube approximation model to analyze a cutoff priority queue. In the cutoff priority queue scheme, low-priority customers are served only when the system is relatively idle. When the number of busy servers is at or above a ‘cutoff number’, a low-priority call is put in a queue (or lost if the queue is at capacity) instead of being served immediately. As a result, the system keeps servers in reserve most of the time, so that it is better prepared to serve future high-priority customers.
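To give a feel for the cutoff effect in isolation, here is a minimal birth-death sketch (a deliberate simplification with identical servers and lost low-priority calls, not the hypercube approximation I am actually working on):

```python
# Birth-death sketch of a cutoff loss system with c identical servers.
# Low-priority calls are accepted only when fewer than `cutoff` servers are busy;
# high-priority calls are accepted whenever any server is free.
# All parameters are hypothetical.

def cutoff_loss_system(c, cutoff, lam_high, lam_low, mu):
    # Unnormalized steady-state probabilities of the number of busy servers.
    p = [1.0]
    for k in range(c):
        arrival = lam_high + (lam_low if k < cutoff else 0.0)
        p.append(p[-1] * arrival / ((k + 1) * mu))
    total = sum(p)
    p = [x / total for x in p]
    block_high = p[c]                           # all servers busy
    block_low = sum(p[cutoff:])                 # at or above the cutoff
    return p, block_high, block_low

probs, b_high, b_low = cutoff_loss_system(c=4, cutoff=2, lam_high=1.0,
                                          lam_low=2.0, mu=1.0)
print(f"P(block high) = {b_high:.3f}, P(block low) = {b_low:.3f}")
```

Lowering the cutoff trades higher low-priority blocking for better availability for future high-priority calls, which is exactly the reservation effect described above.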

Before I finish this post, let me stress that this is NOT a comprehensive summary of all hypercube queuing model variations. There is a vast literature related to the hypercube queuing model, and I am introducing just a small subset of it. I hope we can share more as any of us come across other interesting variations later.

Soovin

References
Atkinson JB, Kovalenko IN, Kuznetsov N, Mykhalevych KV (2008) A hypercube queuing loss model with customer-dependent service rates. Eur. J. Oper. Res.
Budge S, Ingolfsson A, Erkut E (2009) Approximating vehicle dispatch probabilities for emergency service systems with location-specific service times and multiple units per location. Oper. Res.
Burwell TH, Jarvis JP, McKnew MA (1993) Modeling co-located servers and dispatch ties in the hypercube model. Comput. Oper. Res.
Iannoni AP, Morabito R (2007) A multiple dispatch and partial backup hypercube queuing model to analyze emergency medical systems on highways. Transportation Res.
Jarvis JP (1985) Approximating the equilibrium behavior of multi-server loss systems. Management Sci.
Larson RC (1974) A hypercube queuing model for facility location and redistricting in urban emergency services. Comput. Oper. Res.
Larson RC (1975) Approximating the Performance of Urban Emergency Service Systems. Oper. Res.

Together, or Separate?

There are situations in which just looking after ourselves isn’t enough. Let’s talk about the moments when we have to wait in a multi-server, multi-line queue with somebody else, like going to the cafeteria with friends or waiting for the airport security check with family. My favorite is the endless line I meet at immigration and customs at Chicago O’Hare whenever I come back from vacation.

Think about a queuing system with many identical servers, each with its own line. You can leave the system only after both you and your friend finish service. The people in front of you are smart enough to stand in the shortest line, so every server has a line of equal length at the moment you decide. You want to leave the system fast.


Should you and your companion stand together or separately? The tradeoff: if you stand in the same line, the server has to serve your friend first and then you, so the expected time until you are served is certainly longer than if you stood separately. However, if you choose different lines, one of you always has to wait idly until the other escapes too. I personally have to guess the answer at least twice a year (in front of the immigration checkpoint mentioned above), so this week’s post looks like a good chance to borrow the power of a simple simulation. In each run, I compared the time until both services are complete under the two strategies, ‘together’ vs. ‘separate’, and declared the one with the shorter time the winner. After 10,000 runs, I picked the strategy that won more often.
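A minimal version of that comparison can be sketched as follows (this is a simplified rewrite for illustration rather than my exact code). With L people already in front of every server, ‘together’ means waiting for L+2 consecutive services at one server, while ‘separate’ means each of you waits for L+1 services at your own server and you leave at the later of the two finish times.

```python
# Monte Carlo comparison of the 'together' vs 'separate' strategies.
# L people are already in front of every server; service times are i.i.d. exponential.
# The pair leaves when the LAST of the two finishes service.

import random

def one_run(L, draw=lambda: random.expovariate(1.0)):
    together = sum(draw() for _ in range(L + 2))              # same line
    separate = max(sum(draw() for _ in range(L + 1)),          # two different lines
                   sum(draw() for _ in range(L + 1)))
    return together < separate                                 # True if 'together' wins

def compare(L, runs=10_000):
    wins = sum(one_run(L) for _ in range(runs))
    return wins / runs                                         # fraction of wins for 'together'

for L in range(1, 7):
    print(f"line length {L}: 'together' wins {compare(L):.1%} of runs")
```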

First, exponential service times are assumed. The result is quite interesting: if the line is longer than 3, it is better to row in the same boat; if it is shorter than 3, standing in separate lines saves time; and when the line length is exactly 3, neither strategy dominated. Another thing to note is that the rate parameter of the exponential distribution does not matter, since what matters is the relative ratio between the mean and the variance.

The exponential service-time assumption is attractive but not always realistic, so I ran the same test with Weibull service-time distributions for different ratios of the scale parameter (λ) and shape parameter (k); the result is shown in the figure on the right. The region above the threshold line is where the ‘together’ strategy is dominant, and ‘separate’ is better in the region below the line. I also ran the same experiment with a normal distribution and observed that the threshold increases as the ratio of mean to variance gets larger.

The conclusions are intuitive: (1) if the lines are long, stand together; go separate if the lines are short. (2) The criterion for ‘long’ or ‘short’ depends on the variability of the system as well as on the functional form of the service-time distribution. Given the same mean, the more variable the system, the more likely it is that standing together is better.

I hope this comes in handy on my next flight!

Soovin

 

Emergency Response-Related OR

The paper I will present on Thursday is Fifty Years of Operational Research and Emergency Response by Simpson and Hancock, a review paper published in 2009 that surveys the OR foundations of emergency response.

In this post, I am going to summarize just the first part of the paper, because it gives a good overview of emergency-related OR (henceforth EOR) publications from 1965 to 2007, with some nice tables and figures. From the summary, you can expect to learn how many EOR articles were published, on which topics, when, in which journals, and using which kinds of methodologies.

1. Sampling: The main ingredient of the discussion is a “convenience sample” of selected EOR papers, which serves as a proxy for the population of EOR work available in the literature between 1965 and 2007. The convenience sample comprises 361 articles published in that period, gathered by querying Web of Science for keywords in the title. Each keyword is mapped into one of four focus categories:

  • Urban Services (emergency response by established municipal services such as fire/police),
  • Disaster Services (large-scale emergency response),
  • Specific Hazards (large-scale again),
  • General Emergency (an open category to collect anything else).

[Table 1]

2. EOR across time: The figure displays the distribution of convenience-sample publications over 1965-2007.

  • The publication of EOR articles has increased in recent years.
  • Looking at the figure, it is unclear whether the distinct variations in total annual volume are driven by related large-scale incidents, although it is intuitive to assume so; publications in the convenience sample do not necessarily reference any distinct motivating incident.

[Figure 1]

3. EOR journal outlets: The table sorts the 361 convenience-sample articles by journal outlet.

  • Interestingly, two of the top journals by article count (Reliability Engineering & System Safety and Safety Science) are not mainstream OR journals. This is all the more striking considering that Reliability Engineering & System Safety did not begin publishing EOR work until 1982, and Safety Science not until 1991.
  • Of mainstream OR journals, JORS claims the highest proportion.

[Table 2]

4. EOR methodologies: The table sorts the 361 convenience-sample articles by methodology.

  • Mathematical programming is the most common methodology, except in the Specific Hazards category.
  • Probability and statistics is the second most common, except in Disaster Services, which likely reflects the difficulty inherent in analyzing infrequent and exceptional events.

[Table 3]

The latter part of the paper includes some topics that we can discuss during the presentation on Thursday.

Reference: Simpson NC, Hancock PG (2009) Fifty Years of Operational Research and Emergency Response. Journal of the Operational Research Society.

Soovin