Mail Transportation VRP (Part 2)

Introduction

In my final blog post, I’m once again writing about the vehicle routing problem I’ve been working on over the past few months. For those of you checking in for the first time, I highly recommend reading “Part 1” in this two-part series before continuing:

https://publicsectororcourse.wordpress.com/2016/03/28/overview-of-a-mail-transportation-vehicle-routing-problem-vrp/

As I mentioned in Part 1, enumerating all possible routes in this model is not practical, so a column generation procedure was used to find a reasonable subset of “good” routes. Although the column generation procedure for this problem is not as advanced as what was used in the political districting model I presented in class, it’s still able to generate fairly substantial improvements.

Before diving into the column generation procedure, though, here’s a brief overview of the algorithm I implemented in Gurobi Python:

  1. Start with a feasible subset of routes
  2. Solve the linear relaxation of the problem
  3. While improvements to the optimal LP solution can be found:
    1. Perform the column generation procedure
    2. Add new columns (routes) to the subset
    3. Re-solve the LP
  4. Change decision variables to binary
  5. Solve IP
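
To make this loop concrete, here’s a minimal sketch of how it could be wired together in Gurobi Python. The function and argument names (solve_mail_vrp, route_cost, price_routes, and the initial route list, which is assumed to cover every location) are hypothetical placeholders rather than the exact code from my implementation:

```python
import gurobipy as gp
from gurobipy import GRB

def solve_mail_vrp(locations, routes, route_cost, price_routes):
    """Column generation loop for the set-covering master problem.

    `routes` is an initial feasible list of routes (each a set of locations),
    `route_cost(r)` returns a route's travel time, and `price_routes(duals)`
    is the route-generation heuristic described below. All of these names
    are hypothetical placeholders.
    """
    while True:
        # Step 2: build and solve the LP relaxation over the current set of routes
        m = gp.Model("master")
        m.Params.OutputFlag = 0
        x = m.addVars(len(routes), lb=0.0,
                      obj=[route_cost(r) for r in routes])
        cover = m.addConstrs(
            gp.quicksum(x[j] for j, r in enumerate(routes) if i in r) >= 1
            for i in locations)
        m.optimize()

        # Step 3: use the dual values ("prizes") to price out new routes
        duals = {i: cover[i].Pi for i in locations}
        new_routes = price_routes(duals)
        if not new_routes:           # no more columns with negative reduced cost
            break
        routes.extend(new_routes)    # add the new columns and re-solve

    # Steps 4-5: switch the route variables to binary and solve the IP
    for v in m.getVars():
        v.VType = GRB.BINARY
    m.optimize()
    return [routes[j] for j in range(len(routes)) if x[j].X > 0.5]
```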

 

Column generation

Just like in the political districting model, the point of this column generation procedure is to find columns with negative reduced costs. This stems from the fact that all nonbasic variables should have nonnegative reduced costs if the current LP solution is optimal. In this model, the reduced cost can be represented by the following expression:

[Image: reduced cost expression for the mail transportation VRP]

Since we’re trying to minimize the reduced cost, the lambda and mu values associated with each location can therefore be treated as “prizes” when deciding what locations should be visited during the column/route generation procedure. If a location has a large “prize” and is only a short distance away, it’d likely be worth adding to the route.
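In rough terms, and assuming λ_i and μ_i are the dual values of the covering constraints associated with location i while c_r is route r’s total travel time (the exact indexing in my model may differ), the reduced cost of a candidate route r looks something like:

$$
\bar{c}_r \;=\; c_r \;-\; \sum_{i \in r} \left( \lambda_i + \mu_i \right)
$$

A candidate route is only worth adding when this quantity is negative.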

When generating these routes, I originally started with a fairly simple greedy algorithm to make sure everything was working as intended. Each time the LP was re-solved, the algorithm started at the “depot” and chose the next stop by looking for the location with the lowest “distance – prize” value. Then, when a new stop was added to the route, the algorithm checked whether the route’s overall reduced cost was negative. If it was, the route was added to the subset, and the next stop was chosen based on the same criterion as before. This procedure ended when the “distance – prize” value became positive or when the maximum number of stops was reached, whichever came first.

Building off of this simple heuristic, I then added a “weighted coin flip” to the algorithm, which allowed many more routes to be generated during each iteration of the overall while loop. Weights were assigned to each location based on their “distance – prize” values so that the locations with the most negative values were the most likely to be added to the route. By including these weighted probabilities in the algorithm, I was able to generate a far greater number of routes each time the LP was re-solved.
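Here’s a minimal sketch of that route-generation step, assuming dist is a travel-time lookup and prizes maps each location to its dual “prize”; these names, and the exact stopping rule, are placeholders rather than my actual code:

```python
import random

def generate_route(dist, prizes, depot, max_stops):
    """Build one candidate route for the pricing step.

    `dist[a][b]` is the travel time between two stops and `prizes[i]` is the
    dual "prize" for location i; both are hypothetical placeholders.
    """
    route = [depot]
    unvisited = set(prizes) - {depot}
    while len(route) - 1 < max_stops and unvisited:
        here = route[-1]
        # "distance - prize" score for every candidate next stop
        scores = {i: dist[here][i] - prizes[i] for i in unvisited}
        negatives = [i for i, s in scores.items() if s < 0]
        if not negatives:            # stop once no candidate has a negative score
            break
        # Weighted coin flip: the more negative the score, the more likely the pick
        # (the original greedy version simply took min(scores, key=scores.get))
        weights = [-scores[i] for i in negatives]
        nxt = random.choices(negatives, weights=weights, k=1)[0]
        route.append(nxt)
        unvisited.remove(nxt)
    route.append(depot)              # every route starts and ends at the depot
    return route
```

In the full procedure, each partial route whose overall reduced cost is negative is also recorded as a new column, and calling this routine many times per LP re-solve is what produces the larger pool of candidate routes.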

Once all of this had been implemented, the solution I found showed a 29% improvement in overall travel time compared to the routes that are currently being used. The results also indicated that only four trucks would be needed to complete these routes in the future, one fewer than the number currently in use. In more practical terms, this means one employee could probably be transitioned to more valuable work moving forward. Overall, these results seem very promising and could lead to substantial time and cost savings if implemented.


A brief history of gerrymandering

During my presentation tomorrow, I’ll be talking at length about several aspects of a political districting model. In fact, there’s so much to cover that I ended up having to cut all of the interesting anecdotes I had originally planned to discuss. Rather than letting that information go to waste, I figured I could share it here instead.

As some of you are probably aware, political districting is an important problem because of something called “gerrymandering”.


If you don’t already know what gerrymandering is, I highly recommend watching the following video before continuing:


In essence, gerrymandering is the redrawing of congressional district lines in a strategic way that is usually done to maximize the advantage of a certain political party. If you follow politics at all, you’ve probably heard gerrymandering being discussed a few times, but what you might not know is how the term originated.

As it turns out, the term was first used in Massachusetts over 200 years ago in a situation that involved this guy:

[Image: portrait of Elbridge Gerry]

This is Elbridge Gerry, 9th governor of Massachusetts and 5th vice president of the United States. In 1812, while serving as Massachusetts’ governor, Gerry signed a controversial redistricting plan that was highly partisan and contained a few “oddly shaped” districts like the one identified in red below:

 

[Image: map of the 1812 redistricting plan, with the oddly shaped district shown in red]

As we’ll discuss in class tomorrow, a state’s political districts should ideally be as “geographically compact” as possible, so any long, sprawling districts like this one are often viewed with skepticism. The same was true in 1812. When the plan was announced, many newspapers and political opponents criticized its awkwardly shaped districts by drawing attention to the district in red that they said looked like a salamander.

 

[Images: the salamander drawing alongside the original “gerrymandered” district]

This comparison quickly spawned the term “Gerry-mander”, and Gerry’s political opponents repeated it in print so frequently that it eventually became synonymous with “district manipulation”.

Today, there are far more sophisticated ways to measure the geographical compactness of a district, but these visual comparisons still persist. One of my personal favorites out of those I read while researching recent examples of gerrymandering comes from a judge in Maryland. He claimed the state’s third district looked “reminiscent of a broken-winged pterodactyl, lying prostrate” across the state. This district can be seen in orange below:

[Image: map of Maryland’s congressional districts, with the third district shown in orange]

While you might not see salamanders or pterodactyls in these districts, the important point is that gerrymandering still remains relevant today, and operations research models offer one way to combat this problem.

 

References
http://www.governing.com/blogs/by-the-numbers/most-gerrymandered-congressional-districts-states.html

http://www.theatlantic.com/politics/archive/2012/09/the-twisted-history-of-gerrymandering-in-american-politics/262369/#slide2

 

Characteristics of a “good” inequity measure

The paper I’m presenting this week is focused on modeling tradeoffs between equity and effectiveness in the public sector, and one of the most important parts of an equity model is how it actually measures inequity. In fact, the main reason the author wrote this paper is that many of the models that came before it either ignored equity considerations entirely or accounted for them in a “significantly flawed” way (Mandell, 1991). As a result, I thought it would be helpful to use this blog post as an introduction to the equity discussion and to describe what makes a “good” inequity measure.

First, it’s worth noting that there are many different ways to measure inequity. According to Mandell (1991), some of the most common inequity measures used in operations research models include “the minimum or maximum level of service, the range between the minimum and maximum levels of service, and the sum of absolute deviations from the mean level of service”. There are also other measures such as the Gini coefficient, the variance of the logarithms, and Theil’s index (Allison, 1978). While there are important differences between all of these measures, one nearly universal characteristic is that they’re zero when service is completely equal and positive when it’s not.

Beyond this basic criterion, the second important characteristic a measure should have is what’s called scale invariance. In terms of income inequality, scale invariance means that multiplying everyone’s income by a constant doesn’t change the inequity measure’s value (Allison, 1978). Unfortunately, many measures fail to satisfy this criterion. For example, if everyone’s income is doubled, the range between the minimum and maximum levels of income will also be doubled, implying that the range measure is not scale invariant.

The third criterion, which narrows the list of acceptable inequity measures even further, is the principle of transfers. This principle states that the transfer of income from a poorer person to a richer person should increase the value of the inequity measure regardless of the economic status of the two individuals or the amount that is transferred (Allison, 1978). This criterion eliminates the sum of absolute deviations from the mean from consideration, because a transfer between two people who are both above or both below the mean doesn’t change the measure’s value. For example, if the mean income in a population is $600 and there is a transfer of $150 from someone with $200 to someone with $300, the inequity measure would still be $700 after the transfer even though the person who originally had $200 is clearly much worse off [1].

Using these criteria to evaluate potential inequity measures, Allison (1978) finds that only three measures satisfy all of them. These three measures are the coefficient of variation, Theil’s index, and the Gini coefficient (Allison, 1978). Although there are a few differences to consider between these three measures, I’ll save that part of the discussion for the presentation on Thursday.

[1]  As you can see from the calculation below, transferring income from the person with $200 to the person with $300 does not change the value of this inequity measure.

Before the transfer: Sum of deviations from the mean = (600 – 200) + (600 – 300) = $700
After the transfer: Sum of deviations from the mean = (600 – 50) + (600 – 450) = $700
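
To make these criteria a bit more concrete, here is a quick sketch that computes the three measures Allison identifies and checks scale invariance on a small made-up income list (the incomes are purely illustrative):

```python
import math

def gini(x):
    """Gini coefficient: mean absolute difference divided by twice the mean."""
    n, mu = len(x), sum(x) / len(x)
    return sum(abs(a - b) for a in x for b in x) / (2 * n * n * mu)

def theil(x):
    """Theil's index: average of (x_i / mean) * ln(x_i / mean)."""
    n, mu = len(x), sum(x) / len(x)
    return sum((v / mu) * math.log(v / mu) for v in x) / n

def coeff_of_variation(x):
    """Coefficient of variation: (population) standard deviation over the mean."""
    n, mu = len(x), sum(x) / len(x)
    return math.sqrt(sum((v - mu) ** 2 for v in x) / n) / mu

incomes = [200, 300, 600, 900, 1000]     # made-up incomes with a mean of $600
doubled = [2 * v for v in incomes]
for measure in (gini, theil, coeff_of_variation):
    # Scale invariance: doubling every income leaves each measure unchanged
    assert abs(measure(incomes) - measure(doubled)) < 1e-12
```

Doubling every income leaves all three of these measures unchanged, whereas the range and the sum of absolute deviations from the mean would both double.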

 

References

Allison, P. (1978). Measures of inequality. American Sociological Review, 43(6), 865-880.

Mandell, M. B. (1991). Modelling effectiveness-equity trade-offs in public service delivery systems. Management Science, 37(4), 467-482.

Overview of a Mail Transportation Vehicle Routing Problem (VRP)

Way back in my first blog post, I talked about how there are several areas where operations research would likely be useful within the State of Wisconsin, and one of the areas I highlighted was the State’s interdepartmental mail transportation system. Over the last few months, I’ve been working to model this system with Professor Linderoth, and I figured it’d be worth discussing here since it’s technically a public sector OR problem.

To provide some context for this problem, here’s the basic setup. There are 57 locations in Madison that require mail deliveries each day, and five trucks are available to make these deliveries. Each truck is assigned a total of four routes (one for each time period), and these routes all start and end at the “depot” where the mail is sorted and prepared for the next delivery. Some locations require multiple stops throughout the day, but otherwise there are very few additional complexities. As a result, this problem can be represented using a set-covering model where the objective is to choose a set of routes that minimizes the total distance the trucks have to travel each day.

This model can be seen below:

[Image: full set-covering formulation of the mail transportation VRP]
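
In rough terms, the model looks something like the following set-covering formulation; the per-period limit of five routes is my assumption based on the number of available trucks, and the exact constraints in the formulation above may differ:

$$
\begin{aligned}
\min \quad & \sum_{r \in R} c_r\, x_r \\
\text{s.t.} \quad & \sum_{r \in R} a_{ir}\, x_r \ge b_i && \text{for every location } i,\\
& \sum_{r \in R_t} x_r \le 5 && \text{for every time period } t,\\
& x_r \in \{0, 1\} && \text{for every route } r \in R,
\end{aligned}
$$

where $c_r$ is the total travel time of route $r$, $a_{ir} = 1$ if route $r$ stops at location $i$, $b_i$ is the number of visits location $i$ requires each day, and $R_t$ is the set of candidate routes in period $t$.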

Unfortunately, despite the simple mathematical formulation, developing a reasonable solution to this problem is quite challenging because of how many possible routes there are. Even when the number of locations on each route is restricted to 15, there are still more than 2.88e+25 routes that could be taken [1]. That’s a lot of routes, so generating and including all of them in the model simply isn’t practical.

Instead, a more reasonable approach is to create a small set of “good” routes that could be used to find a heuristic solution. While there are a number of sophisticated techniques that exist for this purpose, I originally started with an extremely simple approach and added additional complexity over time. At first, this consisted of randomly generating a large number of routes and throwing out any that exceeded a “high cost” threshold. Building off of this basic structure, I then improved the process by creating routes using location “clusters” to increase the likelihood of generating a good route each time. These routes then served as a starting point for the iterative, column-generation scheme that I’ve been working on most recently, which is something I’ll probably describe in more detail in a future blog post. Although these are fairly crude approaches to route generation, they’ve already resulted in a 15-20% reduction in total travel time when compared to the current state, and that number can only improve as further refinements are made.

[1] Expressed mathematically, this is 57!/(57-15)! since there are 57 locations, the order of stops along a route matters, and each location can only be visited once per route.
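
For anyone who wants to check that figure, the count from the footnote is a one-liner in Python:

```python
import math

# Ordered routes with exactly 15 stops drawn from the 57 locations: 57!/(57-15)!
n_routes = math.factorial(57) // math.factorial(57 - 15)
print(f"{float(n_routes):.2e}")   # prints 2.88e+25
```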

Queueing examples

We’ve been spending a lot of time on queueing problems lately, and one of the things I love about queues is that they’re literally everywhere. From call centers to emergency response systems, there are countless examples of queues in the “real world”, so I figured I’d dedicate this blog post to the queues I’ve observed in my past experiences.

For me, the first queues that come to mind relate to the fast-food industry. As a high schooler, I spent nearly two years working for McDonald’s, and only now do I realize just how many queues that job involved. The drive-through had a queue, the cash registers had a queue, and even the dishwashing machine had a queue! It was a queue-topia, and the queues came in all kinds of varieties. Some were simple, like the cash-register queue that had three servers, while others, like the drive-through, were more complicated. During the lunch rush, for example, the line of cars in the drive-through would get so long that it backed up into the street and no more customers could enter it, making it a capacitated queue. Additionally, although the drive-through usually functioned as a first-come-first-served (FCFS) queue, sometimes customers with large orders would be asked to “pull ahead” so that people with smaller orders could be served while the larger order was being prepared. Of course, some people saw this as a “skip”, and those customers are one of the main reasons I don’t miss McDonald’s or its queues.

Beyond the french fries and golden arches, however, there are also several queues I’ve seen in my more recent work. At the State of Wisconsin, for example, I worked with employees who were responsible for completing IT service requests for several state agencies. While most of these service requests were processed on an FCFS basis, a few received higher priority. Some were marked “urgent”, while others, like those from the Governor’s Office, were simply treated with higher priority by default. As we know from lecture, this queue would be classified as a “nonpreemptive priority queue” since the high-priority service requests didn’t interrupt anyone’s work.

In contrast, one example where a server is interrupted by higher priority arrivals would be in an emergency department (ED), which is a scenario I had to model in a simulation contest last year. When patients with severe conditions (also known as “priority 1” patients) arrived via ambulance, we had one of the ED physicians stop what they were doing and help that patient. Once the physician was finished helping the patient who was in critical condition, they would then return to the patient they had been treating previously. The logic behind this decision is that someone with a minor injury can have their treatment delayed without the interruption causing much harm whereas someone who just got into a car accident needs immediate attention. According to the queue types discussed in lecture, this is an example of a “preemptive priority queue”, and these subtle differences matter. While simply identifying queues is an important first step, considering the tradeoffs and differences between different types of queues is also essential.

“Meals on Wheels” implementation

Since I didn’t get into the details of the “Meals on Wheels” example I cited during my presentation, I thought it would be interesting to discuss it here. As a quick reminder, the researchers’ objective in this example was to solve a traveling salesman problem for a nonprofit in Atlanta, Georgia that delivered meals to senior citizens each day.

While this problem may appear simple at first glance, there were two factors that made it more complicated than most traveling salesman problems. The first was how frequently the client base changed. Since senior citizens could recover from a temporary illness, be moved into a nursing home, or die, the number of clients served by the program was always in flux. Over the course of a year, the total number of clients could be as low as 159 or as high as 358. Given such large variations in the client base, the nonprofit often had to alter its routes on a weekly basis, which meant optimal solutions would quickly become obsolete. The second problem was that the organization operated on very limited resources. Things like a computer or even basic clerical support were out of the question, so any potential solutions could not be expensive or time-consuming. As a result of these constraints, the authors were unable to use any of the usual traveling salesman techniques.

Instead, the authors ended up taking a more creative approach by applying what’s called a “space-filling” or “Sierpinski” curve to solve this problem. While the geometric math describing how this curve is constructed is a bit over my head, the basic idea is that a unit square is repeatedly partitioned into “four identical subsections” and each of these is connected to form a “circuit” (Bartholdi & Platzman, 1982). The first step in this process can be seen below:

 

[Figure: the first partition of the unit square into four connected subsections]

 

Following this initial partition, each subsection is divided using an identical process until the curve “fills” the entire area, as seen in this figure:

 

[Figure: the curve after several further partitions, “filling” the square]

 

After this process repeats itself numerous times, Bartholdi et al. (1983) state that the final space-filling curve will look like an “infinitely crinkly version of the pattern above”. Then, using the completed curve, a list of (x,y) coordinates within the original area can be sequenced “as they appear along the space-filling curve” to provide a heuristic solution to the traveling salesman problem (Bartholdi & Platzman, 1982).

In terms of the Meals on Wheels problem, this technique was applied in a relatively straightforward manner. First, a space-filling curve was constructed based on a map of Atlanta. Next, each client’s (x,y) coordinates were converted to a theta (𝜃) value, which represents each location’s relative position on the space-filling curve. Finally, these locations were ordered according to their 𝜃 values from smallest to largest to determine the best route.

While this approach might still sound far too complicated for a non-technical workforce, the authors made sure the technique was extremely easy to use in practice. By providing the Meals on Wheels program with a map, a precomputed table of 𝜃 values, and two Rolodex card files, adding clients to the route was a simple three-step process, which can be seen in the picture below:

[Image: the three-step card process for adding a new client to the route]

The resulting benefits of this approach were numerous. Most importantly, the problem didn’t need to be resolved every time the client base changed since employees could easily add or remove locations from the route using the card-sorting process described above. On top of that, having a stack of physical cards made it easy to divide up the work between employees by simply separating the cards into equal piles. If one of the four employees was unavailable on any given day, the cards just needed to be separated into three groups instead of four. In addition to the ease of use, however, the routes generated by this technique were also quite good. Compared to the routes the nonprofit had previously used, the researchers were able to develop routes that were 13% shorter, resulting in direct time and cost savings for the nonprofit. Finally, this system also came in at the low price of $50, which the nonprofit could actually afford. Altogether, it seems the solution the researchers developed addressed all the problems they had been presented with. Considering these outcomes, it’s understandable why Kaplan felt this was “a beautiful example of the OR mindset” (Kaplan, 2008).
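
As a rough illustration of how simple the day-to-day mechanics were, here is a small sketch of the “sort by θ, then deal out the cards” idea; the client names and θ values are made up for the example:

```python
# Hypothetical client "cards": (name, theta), where theta is each client's
# position along the space-filling curve, read off the precomputed table.
clients = [("Client A", 0.62), ("Client B", 0.17),
           ("Client C", 0.88), ("Client D", 0.05)]

# Order the cards by theta to get the delivery sequence
route = sorted(clients, key=lambda c: c[1])
print([name for name, _ in route])   # ['Client D', 'Client B', 'Client A', 'Client C']

# Deal the ordered cards into roughly equal piles, one per available driver
def split_into_piles(route, drivers):
    pile_size = -(-len(route) // drivers)            # ceiling division
    return [route[i:i + pile_size] for i in range(0, len(route), pile_size)]

print(split_into_piles(route, 2))    # two drivers -> two piles of two clients
```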

 

References

Bartholdi, J. J., & Platzman, L. K. (1982). An O(N log N) planar travelling salesman heuristic based on spacefilling curves. Operations Research Letters, 1(4), 121-125.

Bartholdi, J. J. III, Platzman, L. K., Collins, R. L., & Warden, W. H. III (1983). A minimum technology routing system for Meals on Wheels. Interfaces, 13, 1-8.

Kaplan, E. H. (2008). Adventures in policy modeling! Operations research in the community and beyond. Omega, 36(1), 1-9.

Beyond technical rationality

Out of all the material we’ve read thus far, I think one of the most important takeaways is that technical solutions are not always sufficient on their own. In Green and Kolesar’s overview of emergency response models, for example, they explained how opposition from New York City’s “politically powerful firefighters union” prevented researchers from implementing changes to the fire department’s staffing schedule (Green & Kolesar, 2004). More generally, Johnson and Smilowitz (2007) note that community-based operations research problems are often “highly dependent on political or social considerations.” What these accounts suggest is that those of us working on public sector operations research problems must consider a much broader array of perspectives than what we might otherwise be accustomed to.

Scholars in the Public Management field differentiate between these perspectives using the terms political rationality and technical rationality. Political rationality is used to describe the fact that those operating in the public sector often face political pressures based primarily on “opinion or ideology rather than bodies of knowledge and fairly weighed evidence” (Hill & Lynn, 2015). In contrast, technical rationality is considered a more scientific line of reasoning because it places value on evidence, efficiency, and effectiveness (Hill & Lynn, 2015). Although these concepts were not explicitly defined by the authors of the operations research papers, I feel these definitions get to the heart of what the authors were describing.

What’s more important than the definitions, however, is the advice that accompanies them. Since political rationality is common in the public sector, Hill and Lynn (2015) warn that arguments based strictly on technical rationality are likely to “fall flat” because they often “fail to persuade or capture the attention of policymakers, politicians, and interest groups.” In an extreme example, these scholars reference the fact that some parents have recently decided against vaccinating their children despite countless arguments citing evidence that vaccines are both safe and effective (Hill & Lynn, 2015). These examples suggest that appealing to both political and technical rationality is crucial for gaining support in the public sector.

Unfortunately, as industrial engineers, it seems we tend to focus almost exclusively on issues relating to technical rationality. In most math classes, for example, there is only one solution to a given problem and it’s not up for debate. As a result, we might overlook the importance of political rationality when working on public sector problems in the “real world”.  Returning to the Green and Kolesar (2004) example, I wonder if this might explain why the firefighters union opposed changes to the staffing schedule. While Green and Kolesar don’t elaborate on this issue, it’s possible that the researchers presented an argument that only focused on the technical details and failed to account for the political factors. In general, I think these kinds of problems are important to keep in mind when working on our own projects. After all, if we can’t gain support for our solutions, they’re unlikely to be implemented and will be of little use to the world. For this reason, I think we should take Hill and Lynn’s advice seriously and pay attention to political rationality as well.

 

References
Green, L. V., & Kolesar, P. J. (2004). Anniversary article: Improving emergency responsiveness with management science. Management Science, 50(8), 1001-1014.

Hill, C. J., & Lynn Jr, L. E. (2015). Public Management: Thinking and Acting in Three Dimensions. CQ Press.

Johnson, M. P., & Smilowitz, K. (2007). Community-based operations research. Tutorials in Operations Research, Institute for Operations Research and the Management Sciences, Hanover, MD, 102-123.