“Meals on Wheels” implementation

Since I didn’t get into the details of the “Meals on Wheels” example I cited during my presentation, I thought it would be interesting to discuss it here. As a quick reminder, the researchers’ objective in this example was to solve a traveling salesman problem for a nonprofit in Atlanta, Georgia, that delivered meals to senior citizens each day.

While this problem may appear simple at first glance, there were two factors that made it more complicated than most traveling salesman problems. The first was how frequently the client base changed. Since senior citizens could recover from a temporary illness, be moved into a nursing home, or die, the number of clients served by the program was always in flux. Over the course of a year, the total number of clients could be as low as 159 or as high as 358. Given such large variations in the client base, the nonprofit often had to alter its routes on a weekly basis, which meant optimal solutions would quickly become obsolete. The second problem was that the organization operated on very limited resources. Things like a computer or even basic clerical support were out of the question, so any potential solutions could not be expensive or time-consuming. As a result of these constraints, the authors were unable to use any of the usual traveling salesman techniques.

Instead, the authors took a more creative approach, applying what’s called a “space-filling” or “Sierpinski” curve to the problem. While the geometric math describing how this curve is constructed is a bit over my head, the basic idea is that a unit square is repeatedly partitioned into “four identical subsections,” and each of these is connected to form a “circuit” (Bartholdi & Platzman, 1982). The first step in this process can be seen below:

 

[Figure: the first partition of the unit square into four connected subsections]

 

Following this initial partition, each subsection is divided using an identical process until the curve “fills” the entire area, as seen in this figure:

 

[Figure: the curve after repeated subdivisions, “filling” the square]

 

After this process repeats itself numerous times, Bartholdi et al. (1983) state that the final space-filling curve will look like an “infinitely crinkly version of the pattern above”. Then, using the completed curve, a list of (x,y) coordinates within the original area can be sequenced “as they appear along the space-filling curve” to provide a heuristic solution to the traveling salesman problem (Bartholdi & Platzman, 1982).

In terms of the Meals on Wheels problem, this technique was applied in a relatively straightforward manner. First, a space-filling curve was constructed based on a map of Atlanta. Next, each client’s (x,y) coordinates were converted to a theta (𝜃) value, which represents each location’s relative position on the space-filling curve. Finally, these locations were ordered according to their 𝜃 values from smallest to largest to determine the best route.
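
The paper’s Sierpinski-curve construction is beyond what I can reproduce here, but here is a rough Python sketch of the same sort-by-curve-position idea, using a Hilbert curve (another space-filling curve) as a stand-in; the functions, parameters, and choice of curve are mine, not the authors’:

```python
def hilbert_index(order, x, y):
    """Position ("theta") of integer grid cell (x, y) along a Hilbert
    curve of the given order; coordinates lie in [0, 2**order)."""
    d = 0
    s = 2 ** (order - 1)
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        # Drop the quadrant bits, then rotate/reflect so the sub-square
        # lines up with the next level of the recursion.
        x, y = x & (s - 1), y & (s - 1)
        if ry == 0:
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

def route(clients, order=8):
    """Sort (x, y) points scaled to [0, 1) x [0, 1) by curve position --
    the smallest-to-largest theta ordering described above."""
    n = 2 ** order
    return sorted(clients,
                  key=lambda p: hilbert_index(order, int(p[0] * n), int(p[1] * n)))

# Example: four client locations ordered into a single delivery route.
print(route([(0.1, 0.8), (0.7, 0.2), (0.3, 0.3), (0.9, 0.9)]))
```

The appeal of the method is exactly what the card files exploit: each client’s 𝜃 value is computed once, and the route for any subset of clients is just a sort.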

While this approach might still sound far too complicated for a non-technical workforce, the authors made sure the technique was extremely easy to use in practice. Because they provided the Meals on Wheels program with a map, a precomputed table of 𝜃 values, and two Rolodex card files, adding clients to the route was a simple three-step process, which can be seen in the picture below:
[Figure: the three-step card-file procedure for adding a client to the route]

The resulting benefits of this approach were numerous. Most importantly, the problem didn’t need to be re-solved every time the client base changed, since employees could easily add or remove locations from the route using the card-sorting process described above. On top of that, having a stack of physical cards made it easy to divide the work between employees by simply separating the cards into equal piles. If one of the four employees was unavailable on any given day, the cards just needed to be separated into three groups instead of four. Beyond ease of use, the routes generated by this technique were also quite good: compared to the routes the nonprofit had previously used, the researchers’ routes were 13% shorter, resulting in direct time and cost savings. Finally, the system came in at the low price of $50, which the nonprofit could actually afford. Altogether, the solution the researchers developed addressed all the problems they had been presented with. Considering these outcomes, it’s understandable why Kaplan felt this was “a beautiful example of the OR mindset” (Kaplan, 2008).

 

References

Bartholdi, J. J., & Platzman, L. K. (1982). An O(N log N) planar travelling salesman heuristic based on spacefilling curves. Operations Research Letters, 1(4), 121-125.

Bartholdi, J. J., III, Platzman, L. K., Collins, R. L., & Warden, W. H., III. (1983). A minimum technology routing system for Meals on Wheels. Interfaces, 13, 1-8.

Kaplan, E. H. (2008). Adventures in policy modeling! Operations research in the community and beyond. Omega, 36(1), 1-9.

Together, or Separate?

There are situations in which just looking after ourselves isn’t enough. Let’s talk about the moments we have to wait in a multi-server, multi-line queue with somebody else, like going to the cafeteria with friends or waiting at the airport security check with family. My favorite is the endless line I meet at immigration and customs at Chicago O’Hare whenever I come back from vacation.

Think about a queueing system with many identical servers, each with its own line. You can leave the system only after both you and your companion finish service. The people in front of you are smart enough to stand in the shortest line, so every server has a line of equal length at the moment of decision. You want to leave the system fast.

[Figure: a multi-server queue with one line per server]

Should you and your companion stand together or separately? There is a tradeoff: if you stand in the same line, the server has to serve your friend first and then you, so the expected time until you are served is certainly longer than if you stand separately. However, if the two of you choose different lines, one of you always has to wait idly until the other escapes too. I personally have to make this guess at least twice a year (in front of the immigration checkpoint mentioned above), so this week’s post looked like a good chance to borrow the power of a simple simulation. In each run, I compared the time until service completion under the two strategies, ‘together’ vs. ‘separate’, and declared the one with the shorter time the winner. After 10,000 runs, I picked the strategy that won more often.

First, exponential service times are assumed. The result is quite interesting: if the lines are longer than 3, it’s better to row in the same boat; if they are shorter than 3, standing in separate lines saves you time. When the line length is exactly 3, neither strategy was dominant. Another thing to note is that the parameter of the exponential distribution does not matter, since what’s important is the relative variability (the ratio between the mean and the standard deviation), and that ratio is the same for every exponential distribution.
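
For the curious, here is a minimal sketch of the simulation described above, assuming exponential service times and lines of equal length at the moment of decision; the function and variable names are mine:

```python
import numpy as np

rng = np.random.default_rng(0)

def together_win_rate(line_len, n_runs=10_000, mean_service=1.0):
    """Fraction of runs in which 'together' beats 'separate' for a pair
    joining lines that each already hold line_len customers."""
    wins = 0
    for _ in range(n_runs):
        # Together: wait for line_len customers, then the friend,
        # then you -- line_len + 2 services in a single line.
        t_together = rng.exponential(mean_service, line_len + 2).sum()
        # Separate: each waits for line_len customers plus their own
        # service; the pair leaves when the slower one finishes.
        t_you = rng.exponential(mean_service, line_len + 1).sum()
        t_friend = rng.exponential(mean_service, line_len + 1).sum()
        if t_together < max(t_you, t_friend):
            wins += 1
    return wins / n_runs

for line_len in range(1, 7):
    print(line_len, together_win_rate(line_len))
```

Swapping the exponential sampler for a Weibull or normal one (e.g., `scale * rng.weibull(k, size)`) gives the variants discussed below; only the service-time draws change.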

[Figure: threshold line length as a function of the Weibull scale/shape parameters]

The exponential service time assumption is attractive but not always true, so I ran the same test with Weibull service time distributions over different ratios of the scale parameter (λ) and shape parameter (k). The result is shown in the figure above: the region over the threshold line is where the ‘together’ strategy is dominant, and the ‘separate’ strategy is better in the region below the line. I also ran the same experiment with a normal distribution and observed that the threshold increases as the value of mean/variance gets larger.

The conclusions are intuitive. (1) If the lines are long, stand together; if the lines are short, go separate. (2) The criterion for ‘long’ or ‘short’ depends on the system’s variability as well as on the functional form of the service time distribution. Given the same mean, the more variable the system is, the more likely it is that standing together is better.

I hope this works on my next flight!

Soovin

 

Measuring System Performance and Campbell’s Law

by Eric DuBois

This week I would like to discuss measuring system performance and its unintended consequences. I will share a story from Larson’s 1987 paper “Perspectives on Queues: Social Justice and the Psychology of Queueing.”

No one particularly likes waiting around, so one would naturally assume that a good objective measure of a queueing system’s performance is the average time spent waiting in the queue: lower the expected wait so that people wait less, and they will be happier with your service. Except, of course, that humans are not so easily understood.

In one of my favorite examples of unintended consequences, Larson describes how early-morning business passengers at a Houston airport complained on a regular basis about having to wait for their luggage after deplaning. The rub was that it wasn’t the wait itself that irked them so much as seeing their fellow bagless passengers proceed straight out the door to the waiting taxis while they waited, on average, an additional 7 minutes to receive their baggage.

The baggage handlers were not taking especially long; it was simply that the airport had chosen the most efficient location for passengers to deplane and leave the airport in the least amount of time. That is, they chose a location so close to the exit that it took only about a minute to get from the plane to the baggage carousel. There was simply no way the baggage handlers could keep up. The solution was simple: increase the distance between the deplaning location and the door. Now everyone had to spend time walking from one end of the terminal to the other, but no one felt cheated. So despite objectively worsening performance, passengers were happier overall and “passenger complaints dropped to nearly zero.”

The point of this tale is a reminder that while our objective functions affect how we view the world and react to it, we shouldn’t forget that there is a very subjective reality alongside any objective measure we impose on it. One common worry with EMS service models is that in only providing rewards for serving patients quickly, you will simply abandon some patients to their fate. Objectively it makes sense: if we only get credit for serving patients within 9 minutes, why bother even serving the patient who is 10 minutes out? Worse yet, what if all signs point to them not surviving to discharge? Should we still rush the nearest ambulance?

Of course we should still serve them.  Our subjective knowledge of customer service and feelings of fairness argue for serving all patients, even if it is not ‘objectively optimal.’

In many cases, the problems that we see stem from Campbell’s Law (or the closely related Goodhart’s Law), which basically states that once people understand how they are being measured, the measure becomes useless, because people seek to do well on the measure rather than on whatever it was meant to capture.

A common example is school testing. Back when I taught in Massachusetts, we spent an inordinate amount of time teaching our students how to take the state test (the MCAS). If they failed the test, they would have a difficult, well-nigh impossible time graduating from high school. Moreover, we were judged on how many of our students passed the MCAS, a measure which clearly favored the affluent, college-bound suburbanites over my urban, vocational students.

I remember on several occasions having to shut down fruitful discussions of basic scientific principles because we had to return to the curriculum defined by the state. The former were clearly more important to my students’ long-term education, but would never be tested. So we plodded along, and rather than understanding why the Earth has an atmosphere, they learned how best to answer a multiple-choice test. Most of them did pass the MCAS and will be able to graduate from high school, but by the true measure, how much they learned in high school, I fear they will be sorely lacking.

So what do you think? Is there a better way to measure system performance without skewing the results or losing sight of reality?

Emergency Response-Related OR

The paper I will present on Thursday is “Fifty Years of Operational Research and Emergency Response” by Simpson and Hancock, a summary paper published in 2009 that reviews the OR foundations of emergency response.

In this post, I am going to summarize just the first part of the paper, because it gives a good overview of emergency-related OR (henceforth referred to as EOR) publications during 1965-2007, with some nice tables and figures. From the summary, you can expect to learn how many EOR articles were published, when, on which topics, in which journals, and using which kinds of methodology.

1. Sampling: A “convenience sample” of selected EOR papers, which serves as a proxy for the population of EOR work available in the literature between 1965 and 2007, is the main ingredient of the discussion. The sample includes 361 articles published in that period, gathered by querying for keywords in article titles in the Web of Science. Each keyword is mapped to one of four focus categories:

  • Urban Services (emergency response from established municipal services such as fire/police),
  • Disaster Services (large-scale emergency response),
  • Specific Hazards (large-scale again),
  • General Emergency (an open category to collect anything else).

[Table 1]

2. EOR across time: The figure displays the distribution of convenience-sample publications over 1965-2007.

  • Publication of EOR articles has increased in recent years.
  • Looking at the figure, it is intuitive to assume that the distinct variations in total annual volume are motivated by related large-scale incidents, but this is unclear: publications in the convenience sample do not necessarily reference any distinct motivating incident.

[Figure 1]

3. EOR journal outlets: The table sorts the 361 convenience-sample articles by journal outlet.

  • Interestingly, two of the top journals by article count (Reliability Engineering & System Safety and Safety Science) are not mainstream OR journals. This is all the more compelling considering that Reliability Engineering & System Safety did not begin publishing EOR until 1982, and Safety Science not until 1991.
  • Among mainstream OR journals, JORS claims the highest proportion.

[Table 2]

4. EOR methodologies: The table sorts the 361 convenience-sample articles by methodology.

  • Mathematical programming is the most common methodology, except in the Specific Hazards category.
  • Probability & statistics is second, except in Disaster Services, reflecting the problems inherent in analyzing infrequent and exceptional events.

[Table 3]

The latter part of the paper includes some topics that we can discuss during the presentation on Thursday.

Reference: N. C. Simpson and P. G. Hancock, Fifty Years of Operational Research and Emergency Response, Journal of the Operational Research Society, 2009.

Soovin

 

Operations Research and Homeland Security

by Amanda G. Smith

On Thursday, I will be presenting the paper “A Survey of Operations Research Models and Applications in Homeland Security” by Wright, Liberatore, and Nydick. [1] Although I’ll be covering more of the details of the paper in my presentation, I wanted to start the week off by giving a little preview of what I’ll be talking about.

Before I get to the meat of the paper, I’d like to talk a little bit about homeland security. The concept of “homeland security” has been defined in many ways by many different people since its “official” inception in 2001. One definition that highlights the potential for – and reality of – homeland security’s strong ties to operations research is in the National Strategy for Homeland Security. [2] There, homeland security is described as “A concerted national effort to prevent terrorist attacks within the United States, reduce America’s vulnerability to terrorism, and minimize the damage and recover from attacks that do occur” (emphasis mine). I’d like to draw your attention to the words “reduce” and “minimize,” especially. The themes of reduction and minimization are, of course, central to operations research.

This definition is slightly different from the specific role of the Department of Homeland Security (DHS), which takes care to differentiate itself from the Department of Defense (DoD) in that the DHS focuses on civilian efforts rather than militaristic ones. Within the civilian realm, the DHS strives to prepare for, prevent, and respond to domestic emergencies, particularly terrorism. [3]

In my presentation, I’ll highlight specific key areas of past research and how they contribute to the problem of homeland security. For now, however, I’ll point out several opportunities for future research. (Spoiler alert: this is the conclusion of my talk, so if you’re trying to stay spoiler-free, stop reading now!) Since homeland security is such a broad and far-reaching field, it is a fertile ground for interesting problems. Some of the biggest open problems in operations research for homeland security include:

  • Focusing more on non-routine emergencies [4]
  • Extending truck routing models to include homeland security concerns, especially when routing hazardous materials
  • Re-examining airline security now that policy and security protocol changes have made many older models obsolete
  • Further collaboration between physical scientists and operations researchers in various areas [5]
  • Further research in cyber security, critical infrastructure protection, threat analysis, and border security
  • Expanding disaster relief research to include the planning, preparation, recovery and response phases of disasters (most papers focus mainly on the planning phase)
  • Modeling interconnections between relief entities
  • Developing a greater understanding of the relationships between different infrastructures, which have so far been assumed to be independent

Though this is only a small snapshot of the extensive operations research work done in homeland security, I hope it gives you a taste of the types of problems operations researchers are trying to solve. One major difficulty of this type of research is that much of it plays along the edge of classified information and thus must be dealt with carefully. However, the questions are no less important. Keeping our country safe is a challenging task, but that makes finding innovative solutions all that much more important and rewarding.

References:
[1] Wright, P. Daniel, Matthew J. Liberatore, and Robert L. Nydick. “A survey of operations research models and applications in homeland security.” Interfaces 36.6 (2006): 514-529.
[2] Defining Homeland Security: Analysis and Congressional Considerations, 8 January 2013, p. 8. Web. 16 Feb. 2016.
[3] Wikipedia contributors. “Homeland security.” Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 25 Oct. 2015. Web. 16 Feb. 2016.
[4] Green, Linda V., and Peter J. Kolesar. “Anniversary article: Improving emergency responsiveness with management science.” Management Science 50.8 (2004): 1001-1014.
[5] Craft, David L., Lawrence M. Wein, and Alexander H. Wilkins. “Analyzing bioterror response logistics: the case of anthrax.” Management Science 51.5 (2005): 679-694.

Crime modeling

Here is a sneak preview of my short presentation tomorrow.

In Crime modeling, Alfred Blumstein discusses attempts by researchers to characterize crime frequency in individual offenders. An individual’s offending frequency, called λ, is immensely helpful in modeling the criminal justice system. By the late 1970s, estimates of these values had been crudely derived for specific crime types. However, the values of λ were based on aggregated crime data and were insufficient for predicting individual crime patterns. A better way to predict the heterogeneous patterns of individual offenders was needed.

Around 1980, RAND initiated an attempt to improve how individual crime patterns could be predicted. They resorted to a self-report survey of about 2,500 prisoners from California, Michigan, and Texas; each inmate essentially reported his or her own crime history. Upon analyzing the data, RAND noticed several phenomena. For one, the data were extremely skewed. Blumstein gives the example that among prisoners who engaged in burglary, the median number of burglaries was five; however, the 90th percentile was around 80, and the mean was roughly 50. In short, the data were being radically pulled by offenders who committed many more burglaries than their peers. RAND’s goal was to use these data to better identify the high-λ offenders.
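
To see how dramatic that skew is, here is a toy illustration (my own, not from the paper): a lognormal distribution of burglary counts, with parameters I chose so the summary statistics land near Blumstein’s reported values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Median of exp(mu) = 5; sigma = 2.2 produces the heavy right tail.
# Both parameters are illustrative choices, not estimates from the paper.
counts = rng.lognormal(mean=np.log(5), sigma=2.2, size=100_000)

print(f"median:          {np.median(counts):5.1f}")         # ~5
print(f"90th percentile: {np.percentile(counts, 90):5.1f}")  # ~80
print(f"mean:            {counts.mean():5.1f}")              # ~50
```

A handful of extreme offenders drag the mean an order of magnitude above the median, which is exactly why identifying the high-λ offenders seemed so attractive.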

In 1982, using this newfound data, Peter Greenwood and Allan Abrahamse proposed a policy known as “selective incapacitation.” Selective incapacitation correlated an individual’s crime with the self-reported data set to identify individuals whose crime patterns suggested a high frequency of offenses, and this set of predictors was meant to inform sentencing decisions in court. Naturally, the policy was not well received: selective incapacitation amounted to biased sentencing of individuals for crimes they might commit in the future.

The predictive power of selective incapacitation was tested in 1987. A group of released inmates from a California prison was scored against the self-reported data and assigned low or high offense frequencies. If selective incapacitation truly were a predictive measure of crime patterns, the individuals assigned high offense frequencies would be the most likely to return to the criminal justice system quickly. However, no correlation was actually observed. Blumstein outlines two major problems with the experiment. For one, the prisoner population involved in the self-report was not representative of the general population; one can expect that those in prison are more likely to be high-frequency offenders. Blumstein also notes that the experiment measured arrest frequencies rather than crime frequencies.

Blumstein’s vignette leaves us with a snapshot of the criminal justice system’s complexity. Forecasting crime patterns is immensely difficult. Not only is knowledge of individual crime patterns often restricted or unavailable, but even accurately using these data sets can require both care and finesse.

Source: Blumstein, Alfred. 2002. Crime modeling. Operations Research 50(1) 16-24.

Has enough research been done in the disaster operations management (DOM) area?

By Suzan Afacan

I will present the article “OR/MS research in disaster operations management” (Altay and Green, 2006) this week in ISyE 823, so it is a good opportunity for me to give an overview of the paper in my blog post first.

Even in the 21st century, disasters are still big issues for societies and nations. One of the deadliest examples is the 2004 Indian Ocean tsunami, which resulted in the loss of 225,000 lives.

Altay and Green reviewed the articles on disaster operations management published up to 2003.

It is really surprising that most of the studies in the area have been done by researchers in the social sciences, from social and psychological points of view.

Before starting the analysis of the articles, let’s take a look at the operational stages of disasters:

  • Mitigation: measures to reduce the impact of disasters,
  • Preparedness: activities to make society better prepared for disasters,
  • Response: the deployment of resources to protect the community and its property,
  • Recovery: long-term actions to recover from the impacts of a disaster.

There is a great table in which the authors summarize statistics of the articles on disaster operations management (DOM).

[Table: summary statistics of the DOM articles]

Takeaways from the table:

  • Authors with a US affiliation published most of the articles,
  • The mainstream of DOM research starts after 1990, one possible reason being the declaration of the 1990s as the International Decade for Natural Disaster Reduction,
  • Mathematical programming (including heuristics) is the most-used methodology,
  • Based on Comprehensive Emergency Management’s four-phase disaster classification, mitigation is the most widely studied phase, followed by preparedness, response, and recovery, respectively,
  • No articles studied humanitarian emergencies (epidemics, war, etc.),
  • Just one article published in the OR/MS journals studied recovery planning.

There is another classification paper on this topic (Denizel et al., 2003), which presents an extended classification; interested readers might look at that paper as well.

“DOM is by nature multi-organizational.”

The writers suggest that trying to include the ethical factors of the subject in the models is worth consideration. Also, different incident characteristics may require different notions of optimality, since political issues can come into play here as well.

“You can lead a horse to water, but you cannot make him drink.”

You may not be able to make every stakeholder happy even if you have the best math-programming model.

We can consider the article’s concluding idea:

“More research needs to be published in academic journals to attract the attention of OR/MS researchers to the subject matter.”

In particular, among the DOM stages, recovery planning needs more attention from researchers, along with an understanding of the interdependencies between critical infrastructures and systems.

Managing disasters better will help communities recover rapidly after disasters hit, increase the readiness of the community and its resources, and protect society and property by decreasing response times and using the available resources effectively.

Blogger’s Note: After reading this article, I have a better understanding of how important the recovery phase after a disaster is. I think I am headed in the right direction with my current research topic, in which I have been modeling the interdependency between infrastructure systems and emergency services while trying to recover the systems in a more effective way. Additionally, there are several articles that study the interdependency of systems while considering the recovery phase of a disaster (e.g., Nurre et al., 2012).

References:

  • Altay, Nezih, and Walter G. Green. “OR/MS research in disaster operations management.” European Journal of Operational Research 175.1 (2006): 475-493.
  • Nurre, Sarah G., et al. “Restoring infrastructure systems: An integrated network design and scheduling (INDS) problem.” European Journal of Operational Research 223.3 (2012): 794-806.
  • Denizel, Meltem, Behlul Usdiken, and Deniz Tuncalp. “Drift or shift? Continuity, change, and international variation in knowledge production in OR/MS.” Operations Research 51.5 (2003): 711-720.