Multi-Objective Optimization

Like everyone else here, my mind is on the presentation I will be giving next week.  However, I will save most of the mathematical details of that paper* for my next blog post, after I have given the presentation.  Instead, I would like to address one of the fundamental aspects of that paper: multi-objective optimization.

In the paper, Rath and Gutjahr consider the distribution of aid after a disaster.  This naturally requires an extension of the traditional supply-routing problem, as we can optimize over both the placement of the warehouses at the outset and the routing of supplies from those warehouses to the victims.  Further complicating matters, there is no guarantee that the supplies will be enough to meet demand, so we also want to maximize coverage: ensuring that as many victims as possible receive some form of aid.

To solve this, Rath and Gutjahr employ a model that finds the Pareto frontier for these three objectives.  In brief, the Pareto frontier consists of every solution that is not dominated by another solution.  That is, to gain a benefit in one objective, you must give up something in another.  So for

maximize (A, B) subject to A + B ≤ 4,

(3,1) is on the Pareto frontier but (2,1) is not, since in the latter case, we can increase A or B without decreasing the other.
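To make the dominance check concrete, here is a minimal Python sketch (with hypothetical point values) that filters a set of candidate solutions down to its Pareto frontier under maximization:

```python
def pareto_front(points):
    """Keep only the points not dominated by another point (maximization)."""
    front = []
    for p in points:
        # p is dominated if some other point is at least as good in every
        # objective (and, since q != p, strictly better in at least one).
        dominated = any(
            q != p and all(q[i] >= p[i] for i in range(len(p)))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

points = [(3, 1), (2, 1), (1, 3), (2, 2), (0, 4)]
print(pareto_front(points))  # (2, 1) drops out: it is dominated by (3, 1)
```

This pairwise filter is quadratic in the number of points, which is fine for small discrete sets like this one but not for large frontiers.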

In choosing to find the Pareto frontier, they leave the user with more control over how to define the optimal solution.  This does come at a cost, however.  Most obviously, it increases the complexity of the answer.  In the Rath and Gutjahr example, we have a three-dimensional frontier, and it is easy to imagine further dimensions being added to accommodate, say, network restoration. So when doing multi-objective optimization with the Pareto frontier, we must be careful to give the decision makers enough information to understand the trade-offs without overwhelming them with data.  The goal is to support clear decisions, not simply to dump the problem on someone else.

The second problem with using the Pareto frontier is computational.  Finding the frontier often requires more time than finding a single optimal objective value.  Further, it is often necessary to use less traditional OR approaches such as evolutionary algorithms or heuristics.

There are, of course, other methodologies for handling multiple objectives.  We will consider a couple of the most common; for a more detailed treatment, see the “Survey of multi-objective optimization methods for engineering.”

In my current work on ambulance dispatch, we face the problem of optimizing both survival for critical patients and customer service for other, less critical patients.  Since we have a clear preference (survival is more important than customer service), we can use the weighted-sum method.  This entails summing the objective functions, weighting each according to preference.  The result is a single solution on the Pareto frontier.  The higher the weight on survival relative to the weight on customer service, the further the chosen point sits toward the survival end of the frontier.  By limiting the search to one point, this method yields a much quicker answer.
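As a sketch of the idea (the scores below are made up for illustration, not values from our dispatch model), the weighted-sum method collapses the two objectives into one scalar and picks whichever candidate scores highest:

```python
# Hypothetical candidate dispatch policies: (survival score, service score).
candidates = [(0.9, 0.2), (0.7, 0.6), (0.5, 0.9)]

def weighted_sum_best(candidates, w_survival, w_service):
    """Scalarize the two objectives and return the highest-scoring candidate."""
    return max(candidates, key=lambda s: w_survival * s[0] + w_service * s[1])

# A heavy survival weight picks the survival-oriented policy...
print(weighted_sum_best(candidates, 0.8, 0.2))  # -> (0.9, 0.2)
# ...while shifting weight toward service moves along the frontier.
print(weighted_sum_best(candidates, 0.3, 0.7))  # -> (0.5, 0.9)
```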

It is also worth briefly noting that, by varying the weights on the objectives, it is possible to trace out the Pareto frontier, provided the problem is convex.  There is, however, no guarantee that the necessary discretization of such a search will find all the relevant points.
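A toy illustration of that caveat, using hypothetical points: a grid sweep over weights only recovers points that are strict maximizers for some weight, so the Pareto-optimal point (2, 2) below is never reported — it ties, but never wins.

```python
candidates = [(3, 1), (2, 2), (1, 3), (2, 1)]

found = set()
for k in range(11):
    w = k / 10  # weight on the first objective
    found.add(max(candidates, key=lambda s: w * s[0] + (1 - w) * s[1]))

# (2, 2) is Pareto-optimal, yet for every weight it is at best tied with
# (3, 1) or (1, 3), so the weighted-sum sweep misses it.
print(sorted(found))  # -> [(1, 3), (3, 1)]
```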

Another common method is the bounded objective (also known as the ε-constraint method).  This requires identifying the most important objective, which is then maximized (or minimized) while the remaining objectives are moved into the constraints as bounds on the feasible region.  This method has two drawbacks: the problem may be infeasible if the bounds are not chosen with care, and the solution is not necessarily on the Pareto frontier.  The preferred objective is optimal, but nothing rules out another solution that achieves the same preferred objective value with better values for the objectives held in the constraints.  It is, however, a good choice when ‘good enough’ rather than fully optimal is acceptable.
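A minimal sketch of the bounded-objective idea over a small discrete candidate set (hypothetical values): maximize A while requiring B to meet a bound, and note how a careless bound can leave nothing feasible.

```python
candidates = [(3, 1), (2, 2), (1, 3), (2, 1)]

def bounded_objective(candidates, min_b):
    """Maximize A subject to B >= min_b; None signals an infeasible bound."""
    feasible = [s for s in candidates if s[1] >= min_b]
    if not feasible:
        return None  # the bound was chosen too tightly
    return max(feasible, key=lambda s: s[0])

print(bounded_objective(candidates, 2))  # -> (2, 2)
print(bounded_objective(candidates, 5))  # -> None: no candidate reaches B >= 5
```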

Finally, we consider goal programming.  Goal programming sets a desired level for each objective and then seeks to minimize the distance between the solution and those targets.  The deviations may be weighted according to preference.  Traditional goal programming also suffers from the possibility of producing a solution that is not on the Pareto frontier.  More recent algorithms have addressed this to some extent; however, goal programming can still suffer from long computation times, as it often involves nonlinear programming.
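A small sketch of the goal-programming idea, with hypothetical goals and candidates.  For simplicity this version penalizes only shortfall below each target (full goal programming typically penalizes deviation in both directions):

```python
candidates = [(3, 1), (2, 2), (1, 3)]
goals = (2.5, 2.5)    # desired level for each objective
weights = (1.0, 1.0)  # preference weights on the deviations

def goal_score(s):
    """Weighted sum of shortfalls below the goals (overshoot costs nothing)."""
    return sum(w * max(0.0, g - v) for v, g, w in zip(s, goals, weights))

best = min(candidates, key=goal_score)
print(best)  # -> (2, 2), the candidate closest to both targets
```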

As this brief survey shows, Rath and Gutjahr provide a fairly good solution with their heuristic for determining the Pareto frontier.  So long as there is not too much information for the decision maker to handle, it may be the best possible choice.  The other methods described here generally seek individual points on the frontier, and do so more quickly and precisely, but they require some knowledge of the decision makers’ preferences.  In practice, such exact knowledge of preferences can be exceedingly hard to come by.

 

*Rath, Stefan, and Walter J. Gutjahr. “A math-heuristic for the warehouse location–routing problem in disaster relief.” Computers & Operations Research 42 (2014): 25-39.


One thought on “Multi-Objective Optimization”

  1. I personally like the idea of adding constraints and optimizing a single objective. This helps to flesh out points on the Pareto frontier, which can be difficult when using weights. You are correct in noting that the feasible region must be convex for this to work.

    On a practical level, goal programming is useful because some of the criteria are more important than others. Those that are more pass/fail can be modeled as a constraint (I don’t care about this objective that much except if it’s at an unacceptable level). Using weights has its merits, too. We saw that idea in the backup coverage models.

    In some problems like the Knapsack Problem, you can identify non-dominated solutions in a single pass of the dynamic programming algorithm. That is pretty cool. You cannot do that with integer programming software, where you must repeatedly solve the model in the hopes of identifying many non-dominated solutions.

