Optimization in Julia

Julia is a relatively young technical computing language. One of its most powerful packages is JuMP (Julia for Mathematical Programming), which allows mathematical models to be built and solved within the Julia environment. My first blog post offered a simple model for scheduling the Big Ten conference in a round-robin tournament. To complete the Circle of Life, this final blog post will revisit that model by implementing it in Julia/JuMP.

First things first! Let’s tell Julia we need to use the JuMP package for creating a model and the CPLEX package for accessing the CPLEX solver.

using JuMP, CPLEX

Previously, we defined a set M of teams in the Big Ten and a set D of allowable days for play. To avoid dealing with byes in a 14-team schedule, we specified 13 days of play. We’ll create an array of days storing the numbers 1 through 13.

M = ["Ind" "UMD" "UMich" "MSU" "OSU" "Penn" "Rtgrs" "Ill" "Iowa" "UMN" "UNL" "NU" "Purd" "UW"]
D = collect(1:13)

We now create a JuMP model object with CPLEX as the solver. This model will have variables, constraints, and an objective added to it. CPLEX options can be passed as arguments into the CplexSolver object; e.g., CPX_PARAM_THREADS=4.

model = Model(solver=CplexSolver())

The binary decision variables will be x_{ijt}=1 if team i\in M hosts team j\in M on day t\in D, and 0 otherwise. Let’s add these variables to our model. This is accomplished with the @variable macro. JuMP is even kind enough to allow us to index our variables with our team names!

@variable(model, x[M,M,D], Bin)

Obviously, a team can’t play itself. Let’s set all binary variables of the form x_{iit} equal to 0, where i\in M and t\in D. Constraints are added to JuMP models using the @constraint macro.

for i in M
	for t in D
		@constraint(model, x[i,i,t] == 0)
	end
end

Our original model included the constraint that each team can play at most once per day, whether they are home or away. The mathematical formulation of this constraint is below.

\displaystyle\sum_{\substack{j\in M: \\ j\neq i}} \left( x_{ijt} + x_{jit} \right) \leq 1 \phantom{XXXXX}\forall i\in M, t\in D

The following Julia code adds this exact constraint to our model, once for every team in M and every day in D.

for i in M
	for t in D
		@constraint(model, sum{x[i,j,t] + x[j,i,t], j in setdiff(M,[i])} <= 1)
	end
end

The sum function sums the expression x_{ijt} + x_{jit} over the indices j\in M\setminus \{i\}. The Julia command for referencing this set of indices is setdiff(M,[i]).

Because we want a round-robin schedule, each team must play every other team exactly once. In the first blog post, we saved ourselves from adding redundant constraints by adding a constraint for all i\in M and j\in M such that i<j, not i\neq j.

\displaystyle\sum_{t\in D} \left( x_{ijt} + x_{jit} \right) = 1\phantom{XXXXX}\forall i\in M, j\in M : i<j

We add this constraint to the JuMP model for all i\in M and j\in M satisfying i<j.

for i in 1:length(M)
	for j in M[i+1:end]
		@constraint(model, sum{(x[M[i],j,t] + x[j,M[i],t]), t in D} == 1)
	end
end

So far, there’s nothing in our model that prevents a team from playing all home or all away games. Let’s make sure each team plays either 6 or 7 home games, as well as 6 or 7 away games. We accomplish this with a pair of constraints: each team plays at least 6 home games, and each team plays at least 6 away games. (Since every team plays 13 games in total, “at least 6” of each kind forces both counts to be 6 or 7.) Our mathematical constraint for home games was

\displaystyle\sum_{\substack{j\in M: \\ j\neq i}} \displaystyle\sum_{t\in D} x_{ijt} \geq \left \lceil{\frac{|D| - 1}{2}}\right \rceil \phantom{XXXXX}\forall i\in M.

In Julia, we implement the home-game constraint by simultaneously summing over the two indices mentioned above: j\in M\setminus\{i\} and t\in D. The away-game constraint is identical with the roles of i and j swapped inside the sum.

for i in M
	@constraint(model, sum{x[i,j,t], j in setdiff(M,[i]), t in D} >= ceil((length(D) - 1)/2))
	@constraint(model, sum{x[j,i,t], j in setdiff(M,[i]), t in D} >= ceil((length(D) - 1)/2))
end

The final constraints explicitly provided in the first blog post limited the number of consecutive home and away games. This is important to facilitate a healthy, “mixed” schedule. These constraints stated that a team cannot play more than two home games within any consecutive three-game window. Similarly, teams cannot play more than two away games within any consecutive three-game window.

\displaystyle\sum_{\substack{j\in M: \\ j\neq i}} \displaystyle\sum_{\substack{s\in D: \\ t\leq s \leq t+2}} x_{ijs} \leq 2 \phantom{XXXXX} \forall i\in M, t\in D: t\leq |D| - 2

\displaystyle\sum_{\substack{j\in M: \\ j\neq i}} \displaystyle\sum_{\substack{s\in D: \\ t\leq s \leq t+2}} x_{jis} \leq 2 \phantom{XXXXX} \forall i\in M, t\in D: t\leq |D| - 2

The JuMP analog of these constraints again sums over two indices simultaneously.

for i in M
	for t in 1:length(D)-2
		@constraint(model, sum{x[i,j,s], j in setdiff(M,[i]), s in t:t+2} <= 2)
		@constraint(model, sum{x[j,i,s], j in setdiff(M,[i]), s in t:t+2} <= 2)
	end
end

These are all of the constraints we added to the original model. All that’s left is to solve the model.

status = solve(model)

The solve command returns a symbol, stored in the status variable, that lets us know how the model solved. It could take on a value of :Optimal, :Unbounded, or :Infeasible, among other values. Fortunately for us, this model returned :Optimal. Remember that at this point, we are just searching for a schedule that satisfies our constraints; we have not yet given the model an objective.
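Tidying up the output is a matter of querying the variable values. Here is a minimal sketch (using getvalue, the accessor in the JuMP version used here) that prints each day’s matchups; comparing against 0.5 guards against solver round-off on binary variables.

for t in D
	print("Day ", t, ": ")
	for i in M, j in M
		if getvalue(x[i,j,t]) > 0.5	# team j visits team i on day t
			print(j, " at ", i, "  ")
		end
	end
	println()
end

After tidying, we have a complete schedule.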

[Figure: big_ten_schedule_1, the complete 13-day round-robin schedule]

However, this may not be the perfect schedule. Notice that there are 23 instances of teams playing back-to-back away games. Let’s say we wanted to reduce this number as much as possible. How would we do it?

One way to minimize the total number of back-to-back away games is to introduce a binary variable y_{it} to be equal to 1 if team i\in M has back-to-back away games beginning on day t\in D\setminus \{|D|\}, and 0 otherwise.

@variable(model, y[M,1:length(D)-1], Bin)

These variables should be triggered to equal 1 if a team plays two away games in a row. This can be accomplished with the following constraint, courtesy of Felix the Cat’s Slide of Tricks from ISyE 323.

\displaystyle\sum_{\substack{j\in M: \\ j\neq i}} \displaystyle\sum_{\substack{s\in D: \\ t\leq s \leq t+1}} x_{jis} \leq 1 + y_{it} \phantom{XXXXX} \forall i\in M, t\in D: t\leq |D| - 1

The JuMP constraint looks very similar to the previous “window” constraints.

for i in M
	for t in 1:length(D)-1
		@constraint(model, sum{x[j,i,s], j in setdiff(M,[i]), s in t:t+1} <= 1 + y[i,t])
	end
end

The last step is to add an objective function to our JuMP model. We want to minimize the total number of y_{it} variables that are equal to 1.

\min \displaystyle\sum_{i\in M} \displaystyle\sum_{\substack{t\in D: \\ t\leq |D|-1}} y_{it}

Objective functions are added to JuMP models with the @objective macro. We declare the sense of the objective function to be Min because we are minimizing our linear objective. Summations in the objective function behave the same way as they do in the constraints.

@objective(model, Min, sum{y[i,t], i in M, t in 1:length(D)-1})

Our model is complete! After executing the solve(model) command once more, we have a new solution. This new model is much more difficult to solve, so we present an improved, though not provably optimal, solution obtained after running the solver for a few minutes.

[Figure: big_ten_schedule_2, the schedule after minimizing back-to-back away games]

This schedule features 7 instances of back-to-back away games, an enormous improvement over our earlier solution!

JuMP is also capable of more advanced solving techniques, such as solver callbacks for lazy constraints and user cuts. If you’re interested in trying out Julia/JuMP, you can download Julia from julialang.org. Packages are incredibly easy to add to your Julia distribution (Pkg.add("JuMP")). Happy optimizing!

ARMOR: a cool acronym at a cool airport

Like others, I’ll be using this week’s blog post to give some background on my upcoming presentation. My paper describes an application of operations research to airport security.

The Los Angeles International Airport (known as “LAX” in some niche social circles) is the second-busiest airport in the United States in terms of passenger boardings. Boasting nearly 40 million passenger boardings in 2015, LAX is believed to be an attractive target for terrorists. Its role in domestic and international transportation cannot be overstated. As police and security resources are limited, completely securing the airport is impossible. Even if full security coverage were possible, it would be a logistical nightmare that would seriously delay passengers.

The authors of the paper developed ARMOR (Assistant for Randomized Monitoring over Routes) as an application to help security agencies determine where to allocate their limited resources. It is tailored to meet two key challenges in airport security. First, it provides randomization in its resource allocation. When adversaries are able to recognize patterns in security, they are able to exploit them; randomization adds a much-desired layer of uncertainty to the adversary’s planning. Second, ARMOR addresses security forces’ uncertainty about the adversary. In the context of airport security, threats can manifest in numerous ways (bombs, hijackings, etc.) that require different resources for detection. ARMOR works to mitigate this by considering multiple threats simultaneously and formulating the best overall security plan.

ARMOR is based on the theory of Stackelberg games. In a Stackelberg game, a leader first makes decisions, to which a follower reacts in an attempt to optimize his or her reward. Consider the payoff table below for a Stackelberg game. Let L_1 and L_2 denote possible initial actions for the leader. Similarly, let F_1 and F_2 denote possible recourse actions for the follower.

[Figure: payoff_table, leader and follower payoffs for actions L_1, L_2 and F_1, F_2]

In a simultaneous game, there exists a Nash equilibrium where the leader chooses action L_1 and the follower chooses action F_1. In this situation, neither player can benefit unilaterally by changing his or her decision. However, we are not considering Nash equilibria here; rather, we allow the leader to choose an action first, and the follower to react selfishly. With this paradigm, the leader would select L_2, because it is known that the follower will subsequently select F_2 to obtain the maximum payoff. This strategy gives the leader a payoff of 3. Better still, if we allow the leader to commit to selecting each action with a fixed probability, selecting L_1 and L_2 each with probability 0.5 results in an expected reward of 3.5. The follower, capable of choosing only one action, would select F_2.
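To make the leader’s reasoning concrete, here is a small Julia sketch. The payoff entries are hypothetical, chosen only to be consistent with the numbers discussed above (the original table is an image), so treat them as an assumption: entry [i, f] gives the payoff when the leader plays L_i and the follower plays F_f.

using LinearAlgebra

# Hypothetical payoffs (not necessarily the figure's exact table).
leader_payoff   = [2.0 4.0; 1.0 3.0]
follower_payoff = [1.0 0.0; 0.0 2.0]

# The follower observes the leader's (possibly mixed) strategy sigma
# and picks the action maximizing his or her own expected payoff.
follower_best_response(sigma) = argmax(follower_payoff' * sigma)

# The leader's expected reward, given the follower's best response.
function leader_reward(sigma)
	f = follower_best_response(sigma)
	return dot(sigma, leader_payoff[:, f])
end

leader_reward([0.0, 1.0])	# commit to L2: follower plays F2, reward 3.0
leader_reward([0.5, 0.5])	# 50/50 mix: follower still plays F2, reward 3.5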

In a Bayesian Stackelberg game, we allow a set of leaders and a set of followers to play against one another. ARMOR incorporates a Bayesian Stackelberg game with one leader (airport security) and multiple followers (the variety of security threats). Next week, we’ll discuss how to model this Bayesian Stackelberg game first as a mixed-integer quadratic program, then as a mixed-integer linear program. We’ll discuss some of the challenges of creating the ARMOR software and also how it performed “in the field.”

Pita, James, et al. “Deployed ARMOR protection: the application of a game theoretic model for security at the Los Angeles International Airport.” Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems: Industrial Track. International Foundation for Autonomous Agents and Multiagent Systems, 2008.

Stochastic network interdiction 101

My upcoming presentation covers stochastic network interdiction. This is an extension of the deterministic network interdiction problem covered in class (highlighted in Wood, 1993).

In typical Stackelberg fashion, stochastic network interdiction pits an interdictor (leader) against an adversary (follower) on a capacitated network. As in the deterministic case, the interdictor chooses which arcs to interdict, thereby reducing each interdicted arc’s capacity to zero. Afterwards, the adversary attempts to maximize the flow of a commodity from a predetermined source node to a predetermined sink node. The interdictor’s goal is to minimize the adversary’s maximum flow through the network. Unlike the deterministic case, however, the interdictor only successfully interdicts an arc with a certain probability.

It does seem enticing to model this as a deterministic network interdiction problem, where the interdictor always successfully interdicts the arcs and we replace the interdicted arcs’ capacities with their expected values. Although enticing, this approach will not work. Consider the simple network below. The interdictor successfully interdicts any arc with probability p=0.5 and is only able to interdict two arcs.

[Figure: simple_network, a three-node network with source s, intermediate node a, and sink t]

Let’s attempt to convert this to a deterministic problem. We do this by assuming the interdictor will always be successful in his or her attempts to interdict an arc. Furthermore, we replace the interdicted arc capacities with their expected values. In this case, the optimal interdiction plan is to interdict arcs (s,a) and (s,t). It doesn’t make sense to interdict (a,t), because we are already interdicting (s,a) (we are approaching this as a deterministic maximum-flow problem!). This leaves the adversary with a maximum flow of (1 - 0.5)(100) + (1 - 0.5)(10) = 55.

[Figure: simple_network_sa_st, the plan that interdicts arcs (s,a) and (s,t)]

However, the real solution in this problem is to interdict arcs (s,a) and (a,t).

[Figure: simple_network_sa_at, the plan that interdicts arcs (s,a) and (a,t)]

Consider the four scenarios, one for each combination of success and failure on the two interdicted arcs.

  • Both interdictions succeed (probability 0.25): maximum flow 10
  • Only the interdiction of (s,a) succeeds (probability 0.25): maximum flow 10
  • Only the interdiction of (a,t) succeeds (probability 0.25): maximum flow 10
  • Neither interdiction succeeds (probability 0.25): maximum flow 110

When these arcs are interdicted, the expected maximum flow through the network becomes (0.25)(10) + (0.25)(10) + (0.25)(10) + (0.25)(110) = 35.
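A few lines of Julia confirm the arithmetic. The capacities are my reading of the figures (u_{sa} = u_{at} = 100 and u_{st} = 10), so treat them as assumptions; each interdiction succeeds independently with probability 0.5.

# Expected maximum flow when arcs (s,a) and (a,t) are interdicted.
# In this tiny network the max flow is the s-a-t path plus the direct
# (s,t) arc; either successful interdiction kills the s-a-t path.
function expected_max_flow(p, u_sa, u_at, u_st)
	total = 0.0
	for sa_blocked in (false, true), at_blocked in (false, true)
		prob = (sa_blocked ? p : 1 - p) * (at_blocked ? p : 1 - p)
		path_flow = (sa_blocked || at_blocked) ? 0.0 : min(u_sa, u_at)
		total += prob * (path_flow + u_st)
	end
	return total
end

expected_max_flow(0.5, 100.0, 100.0, 10.0)	# returns 35.0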

I’ll conclude by presenting the basic formulation for the stochastic network interdiction problem. We’ll be using mostly all-too-familiar notation.

  • G=(N,A): the directed network
  • A' = A \cup \{ (t,s) \}
  • u_{ij}: capacity of arc (i,j)\in A'
  • R: interdiction budget
  • r_{ij}: cost of interdicting arc (i,j)\in A'
  • p_{ij}: probability of successful interdiction on arc (i,j)\in A'
  • s\in N: the adversary’s origin
  • t\in N: the adversary’s destination
  • \tilde{I}_{ij}: indicator random variable for each arc that is 1 with probability p_{ij} and 0 with probability 1-p_{ij}
  • x_{ij}: flow through arc (i,j)\in A'
  • \gamma_{ij}: whether the leader interdicts arc (i,j)\in A'

We’ll include variable x_{ts} as our dummy “max-flow arc” like we do in a standard maximum-flow formulation. We let u_{ts}=\sum_{(i,j)\in A} u_{ij} + 1 and r_{ts}=R+1. That is, let (t,s) be treated as a normal arc that will never achieve its upper bound and will never be interdicted.

The model looks very similar to a standard maximum flow problem, though we account for the different realizations of \tilde{I} by throwing an expected value in the objective function. Additionally, we account for interdiction by modifying the upper bound constraints for the flow variables. Remember: an interdiction on arc (i,j) is successful at reducing the arc capacity to zero with probability p_{ij}. Otherwise, the interdiction has no effect.

w^* = \min \mathbb{E}[h(\gamma,\tilde{I})] \\  \phantom{XXX}\text{s.t.} \displaystyle\sum_{(i,j)\in A'} r_{ij}\gamma_{ij} \leq R \\  \phantom{XXXs.t.} \gamma_{ij}\in\{0,1\} \phantom{XXX} \forall (i,j)\in A'

where:

h(\gamma,\tilde{I}) = \max x_{ts} \\  \phantom{XXXXX}\text{s.t.} \displaystyle\sum_{j : (i,j)\in A'} x_{ij} - \displaystyle\sum_{j : (j,i)\in A'} x_{ji} = 0 \phantom{XXX} \forall i\in N \\  \phantom{XXXXXs.t.}0\leq x_{ij} \leq u_{ij}(1 - \tilde{I}_{ij}\gamma_{ij}) \phantom{XXXXx} \forall (i,j)\in A'

The first constraint family in the above definition of h(\gamma,\tilde{I}) is our typical flow-balance constraints; with the dummy arc (t,s) included, flow is conserved at every node. The second constraint effectively reduces the capacity of arc (i,j)\in A' to zero if the arc is interdicted and the realization of \tilde{I}_{ij} is 1.
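For a fixed interdiction plan gamma and a fixed realization of \tilde{I}, the inner problem h(\gamma,\tilde{I}) is an ordinary maximum-flow LP. Here is a sketch in the same (older) JuMP syntax as the Julia post above; the sets N and Aprime, the capacities u, and the nodes s and t are assumed to be defined, with gamma and the realization Ival given as dictionaries keyed by arcs.

sub = Model(solver=CplexSolver())
# Arcs whose interdiction succeeded (Ival[a] == 1 and gamma[a] == 1)
# get capacity zero; all other arcs keep their capacity u[a].
@variable(sub, 0 <= x[a in Aprime] <= u[a] * (1 - Ival[a] * gamma[a]))
# Flow balance at every node; the dummy arc (t,s) closes the circulation.
for i in N
	@constraint(sub, sum{x[(i,j)], j in N; (i,j) in Aprime} ==
		sum{x[(j,i)], j in N; (j,i) in Aprime})
end
@objective(sub, Max, x[(t,s)])
solve(sub)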

In the presentation, we’ll discuss the authors’ clever approach to solving this nightmarish problem.

Source: Cormican, Kelly J., David P. Morton, and R. Kevin Wood. “Stochastic network interdiction.” Operations Research 46.2 (1998): 184-197.

The secrets of submodularity

Today’s topic is submodularity! This special structure allows a whole new mathematical toolbox to be used in the never-ending fight to improve formulations and reach optimal solutions faster.

Let N := \{1,\dots,n\} be a finite ground set. A submodular function f : 2^N\rightarrow\mathbb{R} is a set function that exhibits a “diminishing returns” property. That is, adding an element to a small set has at least as great an impact on the function value as adding that element to a larger set containing it. There are multiple equivalent ways to define a submodular function. Perhaps the most intuitive of these is below.

Let S_1, S_2\subseteq N with S_1\subseteq S_2. f is submodular if for every x\in N\setminus S_2, we have f(S_1\cup \{x\}) - f(S_1) \geq f(S_2\cup \{x\}) - f(S_2). Here, f(S_1\cup \{x\}) - f(S_1) represents the marginal increase in the set function when element x\in N\setminus S_2 is added to a smaller subset of N, S_1. This marginal increase must be at least as large as the marginal increase obtained by adding x to a larger subset of N that contains S_1.

An equivalent definition is as follows. Let S_1, S_2\subseteq N. f is submodular if f(S_1) + f(S_2) \geq f(S_1\cap S_2) + f(S_1\cup S_2).

It should also be noted that optimizing submodular functions can sometimes be quite easy: unconstrained submodular minimization is solvable in polynomial time, and the classic greedy algorithm of Nemhauser, Wolsey, and Fisher achieves a (1 - 1/e)-approximation when maximizing a monotone submodular function under a cardinality constraint.
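To make that greedy algorithm concrete, here is a minimal Julia sketch for maximizing a monotone submodular function subject to selecting at most k elements of the ground set; f is assumed to accept a vector of elements.

# Greedily add the element with the largest marginal gain until the
# budget k is exhausted or no element improves the objective.
function greedy_submodular(f, N, k)
	S = eltype(N)[]
	for _ in 1:k
		remaining = setdiff(N, S)
		isempty(remaining) && break
		gains = [f(vcat(S, e)) - f(S) for e in remaining]
		best = argmax(gains)
		gains[best] <= 0 && break
		push!(S, remaining[best])
	end
	return S
end

This sketch pairs naturally with the coverage function in Example 2 below.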

Let’s examine a few places in public sector operations research that submodularity could arise.

Example 1: Path interdiction

Let P=\{a_1, a_2, \dots, a_{|P|}\} be a path in a network. In my previous blog post, we discussed basic maximum-reliability network interdiction. In the same vein, let p_a be the probability of traversing arc a\in P undetected when arc a is not interdicted. Let q_a be the probability of traversing arc a\in P undetected when the arc is interdicted. Naturally, p_a>q_a for all a\in P. Let S\subseteq P be the set of path arcs that we choose to interdict. Let the function f : 2^P\rightarrow [0,1] be the probability of an attacker traversing the entire path undetected. This function can be written as

f(S)= \left(\displaystyle\prod_{a\in P} p_a\right) \left(\displaystyle\prod_{a\in S} \frac{q_a}{p_a}\right).

Then f(S) is supermodular. A supermodular function simply reverses the inequalities above: i.e. f(S_1\cup \{x\}) - f(S_1) \leq f(S_2\cup \{x\}) - f(S_2) holds for S_1, S_2\subseteq N with S_1\subseteq S_2 and every x\in N\setminus S_2. The proof is obtained by directly applying the definition: for x\notin S, we have f(S\cup\{x\}) - f(S) = f(S)\left(\frac{q_x}{p_x}-1\right), which is negative, and since f only shrinks as S grows, this marginal change is more negative for the smaller set. The fact that f is supermodular is very intuitive; the more arcs that we interdict, the less of a reduction (not increase!) we notice in the attacker’s traversal probability.

It follows that -f(S) is submodular.
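A quick numeric spot check of supermodularity, with made-up traversal probabilities on a three-arc path:

# Arc a in S is interdicted (probability q[a]); otherwise p[a] applies.
p = [0.9, 0.8, 0.7]
q = [0.3, 0.2, 0.1]
f(S) = prod(a in S ? q[a] : p[a] for a in eachindex(p))

S1, S2 = [1], [1, 2]	# S1 is a subset of S2; element 3 is outside both
f(vcat(S1, 3)) - f(S1) <= f(vcat(S2, 3)) - f(S2)	# true, as expected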

Example 2: Radio tower coverage

Imagine we were to give up our lives of fortune as operations researchers and pursue a humbler existence in the radio entertainment industry. We wish to select a subset of radio towers from which to broadcast our station. The broadcast ranges of these towers overlap.

Our goal is to cover as much surface area as possible. Let N := \{1,\dots,n\} be the set of towers. For a subset of selected towers S\subseteq N, let f(S) be the surface area covered by those towers. Then f(S) is submodular. Below are a few pictures to illustrate the point.

Suppose we have n=4 potential radio towers to use for coverage. Suppose also that we want to use the second radio tower, because the cost of doing so is negligible.

[Figure: radio_2, coverage with only the second tower selected]

What happens if we choose to also use the third radio tower? Naturally, our covered surface will increase. Note that the marginal increase in surface area is the strictly-blue region.

[Figure: radio_2-3, coverage after adding the third tower to the second]

Cool! Now, suppose that we initially choose to utilize the first and second radio towers.

[Figure: radio_1-2, coverage with the first and second towers selected]

When both of these radio stations are selected, what happens to the marginal increase in surface area when we include the third tower in our broadcast coverage?

[Figure: radio_1-2-3, coverage after adding the third tower to the first and second]

Again, the strictly-blue region represents the surface area we added to our network by including the third radio tower. This blue area is much smaller than it was when we added the third tower to the second alone. Looking at the definition of submodularity, this is exactly the smaller marginal increase we expect! Mathematically, we have f(\{2\}\cup\{3\}) - f(\{2\}) \geq f(\{1,2\}\cup\{3\}) - f(\{1,2\}).
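For the numerically inclined, here is a rough Julia sketch of this coverage function. The tower positions and broadcast radius are made up, and covered area is approximated by counting points on a fine grid.

# Four hypothetical towers on a line, each with the same broadcast radius.
towers = [(0.0, 0.0), (2.0, 0.0), (4.0, 0.0), (7.0, 0.0)]
radius = 2.5

# f(S): grid-point approximation of the area covered by the towers in S.
function coverage(S)
	covered = 0
	for gx in -3:0.1:10, gy in -3:0.1:3
		if any(hypot(gx - towers[i][1], gy - towers[i][2]) <= radius for i in S)
			covered += 1
		end
	end
	return covered
end

coverage([2, 3]) - coverage([2])	# marginal gain of tower 3 given {2}
coverage([1, 2, 3]) - coverage([1, 2])	# smaller gain once tower 1 is included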

Conforti, Michele, Gérard Cornuéjols, and Giacomo Zambelli. Integer programming. Vol. 271. Berlin: Springer, 2014.

Maximum-reliability network interdiction

Network interdiction models capture how a “defender” should best react to the presence of an “attacker” on a network.  In contrast to other network interdiction models, maximum-reliability network interdiction (a close cousin of shortest-path interdiction) considers an attacker who is attempting to maximize the probability of reaching a given sink node from a given source node.  The attacker has a probability of traversing each arc safely.  The defender is able to “interdict” a subset of these arcs (within the confines of a budget) by placing sensors on them, thereby reducing the attacker’s probability of safely traversing those arcs.

This network interdiction model can be seen as a bilevel game.  As the defender, we attempt to minimize the attacker’s maximum-reliability path through the network.  Many of the models are obtained by beginning with this min-max formulation and consolidating it into something more workable.

Here’s a picture to visualize what is happening.  As defenders, we choose first the arcs on which to place sensors.  Afterwards, the attacker will note our decision and take the “best” (maximum-reliability) path through the network.

[Figure: nip_example, a network with sensors shown as red dashes and the attacker’s maximum-reliability path in blue]

In the picture, the red dashes represent sensors, and the blue path represents the attacker’s maximum-reliability path through the network.

Let’s build a model!  We’ll need to define a few things first.

  • G=(N,A): our directed network
  • AD\subseteq A: arcs on which we can place sensors
  • R: interdiction budget
  • r_{ij}: cost of placing a sensor on arc (i,j)\in AD
  • p_{ij}: probability of attacker traversing arc (i,j)\in A undetected
  • q_{ij}: probability of attacker traversing arc (i,j)\in AD undetected when a sensor is present
  • \psi_j: value of maximum-reliability path from j to t with no sensors installed
  • s\in N: the attacker’s starting point
  • t\in N: the attacker’s goal

For our data to make sense, we must have 0\leq q_{ij} < p_{ij} \leq 1.

Great!  We’ll now add a few variables.

  • \pi_i: probability of attacker making it from node i\in N to node t undetected
  • x_{ij}: whether or not we place a sensor on arc (i,j)\in AD

If \pi_i is defined as above, we surely want \pi_t=1: an attacker already at the goal node t trivially reaches it undetected with probability 1.

Furthermore, we want \pi_i to take on specific values based on our values of x_{ij}.  Specifically, if we do interdict arc (i,j)\in AD, we want to enforce \pi_i\geq q_{ij}\pi_j.  If we do not interdict arc (i,j)\in AD, we want to enforce \pi_i\geq p_{ij}\pi_j.  Our objective will be to minimize \pi_s, which ensures that \pi_s equals the reliability of the attacker’s best path from s to t.

Now let’s examine the model.

\min \pi_s \\    \phantom{XXX} \text{s.t. } \pi_i - p_{ij} \pi_j \geq 0, \phantom{XXXXXXXXXXXXX} (i,j) \in A \setminus AD \\    \phantom{XXX} \pi_i - p_{ij}\pi_j \geq -(p_{ij}-q_{ij})\psi_j x_{ij}, \phantom{XX} (i,j)\in AD \\    \phantom{XXX} \pi_i - q_{ij}\pi_j \geq 0, \phantom{XXXXXXXXXXXXX} (i,j) \in AD \\    \phantom{XXX} \pi_t = 1 \\    \phantom{XXX} \displaystyle\sum_{(i,j) \in AD} r_{ij} x_{ij} \leq R \\    \phantom{XXX} x_{ij} \in \{0,1\}, \phantom{XXXXXXXXXXXXXXX} (i,j) \in AD

Remember when we wanted to enforce certain conditions on \pi_i depending on our choices of x_{ij} for (i,j)\in AD?  Let’s see if those hold!

Let x_{ij}=0.  By the second constraint, we have \pi_i \geq p_{ij}\pi_j, as desired.  We also want to ensure the third constraint is dominated by this constraint; that is, it has no bearing on the model.  Because p_{ij} > q_{ij} and \pi_j \geq 0, we have \pi_i \geq p_{ij} \pi_j \implies \pi_i \geq q_{ij} \pi_j.  This proves the third constraint is dominated by the second.

Now, let x_{ij}=1.  By the third constraint, we have \pi_i \geq q_{ij}\pi_j, which is again what we want.  We will also ensure the second constraint is dominated by this constraint.  Because (p_{ij}-q_{ij})>0 and \pi_j\leq \psi_j, we have \pi_i \geq q_{ij}\pi_j \implies \pi_i \geq -(p_{ij}-q_{ij})\pi_j + p_{ij}\pi_j \implies \pi_i \geq -(p_{ij} - q_{ij}) \psi_j + p_{ij} \pi_j.  Thus, the second constraint is indeed dominated by the third.
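To tie this back to the first post, here is a minimal JuMP sketch of the model, in the same (older) JuMP syntax used there. The sets N, A, and AD, the data p, q, r, and psi (dictionaries keyed by arcs or nodes), the budget R, and the nodes s and t are all assumed to be defined. Note that pi is a reserved constant in Julia, so the reliability variables are named reach.

model = Model(solver=CplexSolver())
@variable(model, reach[N] >= 0)	# pi_i in the formulation above
@variable(model, x[a in AD], Bin)
@constraint(model, reach[t] == 1)
for (i,j) in setdiff(A, AD)
	@constraint(model, reach[i] - p[(i,j)] * reach[j] >= 0)
end
for (i,j) in AD
	@constraint(model, reach[i] - p[(i,j)] * reach[j] >=
		-(p[(i,j)] - q[(i,j)]) * psi[j] * x[(i,j)])
	@constraint(model, reach[i] - q[(i,j)] * reach[j] >= 0)
end
@constraint(model, sum{r[a] * x[a], a in AD} <= R)
@objective(model, Min, reach[s])
solve(model)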

In a few weeks, we’ll be taking a closer look at the stochastic case of the maximum-reliability network problem, in which the attacker’s starting and ending points are unknown.

Anyways, back to watching The Bachelor finale.  I’m totally only watching because my wife is. #TeamLauren

Cormican, Kelly J., David P. Morton, and R. Kevin Wood. “Stochastic network interdiction.” Operations Research 46.2 (1998): 184-197.

Optimization, meet crime modeling

In my recent talk on Alfred Blumstein’s Crime modeling, I briefly discussed how optimization has been used in crime modeling. The consensus was that the problems were simply too complex to model effectively. Consider the usual process for incarceration, which involves countless different entities. An arrest is made. Court appearances are had. Bails are set. Plea bargains are discussed. A trial takes place. A sentencing decision is made. Each case is a unique, gavel-shaped snowflake.

Regardless, efforts have been made to optimize certain aspects of “crime modeling.” One of these is a 2001 paper by Gernot Tragler, Jonathan Caulkins, and Gustav Feichtinger, which allocates treatment and enforcement resources to combat illicit drug use and trafficking. The problem is tackled from a control theory perspective. The authors note that neither treatment nor enforcement is unilaterally superior to the other in combating drug proliferation. Any viable drug control model will result in a solution that varies the use of treatment and enforcement as time progresses.

Tragler’s paper suggests two policies for dealing with new drug problems. The first is “eradication”, a policy that attempts to completely remove the new drug problem before it spirals out of control. This policy is only advisable when the problem is indeed fast-spreading, it is detected early, and policymakers are able to devote large amounts of resources to wipe out the problem. The last of these is a political conundrum; successfully eradicating a potential drug epidemic does not necessarily improve public perception of a policymaker, because the problem was never allowed to reach a critical level. In fact, funneling financial resources to combat a problem that never becomes overly serious could have an adverse effect on the policymaker’s career. The eradication policy directs enormous amounts of enforcement and treatment to fight the drug problem right away. However, if any of the three aforementioned conditions is not satisfied, it is advisable to pursue a less drastic policy. “Accommodation” begins with a large amount of enforcement. Enforcement grows as the drug problem grows, though at a slower rate. Treatment resources, on the other hand, grow proportionally with the proliferation of the drug problem.

An earlier effort at incorporating optimization into crime modeling was made by Blumstein and Daniel Nagin. The two attempt to determine the proportion of convicted offenders who become imprisoned and the average time to be served by these imprisoned offenders. The goal is to minimize the total crime rate. The obvious solution is to incarcerate as many offenders as possible for as long as possible. Naturally, this will not work; the authors impose the constraint that the total amount of imprisonment cannot exceed a certain level. This constraint makes sense in our real world, where prison overcrowding is a glaring issue.

Quite possibly the most important result of this work was its admission that more work simply had to be done. The model’s parameters needed more data to be estimated accurately. Furthermore, the model could be modified and expanded to account for multiple crime rates. However, as Blumstein would note 24 years later, the world of crime modeling is often too complex for optimization models.

Blumstein, A. (2002). Crime modeling. Operations Research, 50(1), 16-24.

Blumstein, A., & Nagin, D. (1978). On the optimum use of incarceration for crime control. Operations Research, 26(3), 381-405.

Caulkins, J.P., Feichtinger, G., & Tragler, G. (2001). Optimal dynamic allocation of treatment and enforcement in illicit drug control. Operations Research, 49(3), 352-362.

Crime modeling

Here is a sneak preview of my short presentation tomorrow.

In Crime modeling, Alfred Blumstein discusses attempts by researchers to characterize crime frequency in individual offenders. An individual’s offending frequency, called \lambda, is immensely helpful in modeling the criminal justice system. By the late 1970s, estimates of these values had been crudely derived for specific crime types. However, the values of \lambda were based on aggregated crime data and were insufficient for predicting individual crime patterns. A better way to predict the heterogeneous patterns of individual offenders was needed.

Around 1980, Rand initiated an attempt to improve how individual crime patterns could be predicted, administering a self-report survey to about 2,500 prisoners from California, Michigan, and Texas. Each inmate essentially reported his or her own crime history. Upon analyzing the data, Rand noticed several phenomena. For one, the data was extremely skewed. Blumstein gives the example that among prisoners who engaged in burglary, the median number of burglaries was five; however, the 90th percentile was around 80, and the mean was roughly 50. In short, the data was being radically pulled by offenders who committed many more burglaries than their peers. Rand’s goal was to utilize this data to better identify these high-\lambda offenders.

In 1982, using this newfound data, Peter Greenwood and Allan Abrahamse proposed a policy known as “selective incapacitation.” Selective incapacitation correlated an individual’s crimes with the self-reported data set to identify individuals whose crime patterns suggested a high frequency of offenses. This set of predictors was meant to influence sentencing decisions in court. Naturally, this policy was not well-received: selective incapacitation amounted to sentencing individuals for crimes they might commit in the future.

The predictive power of selective incapacitation was tested in 1987. A group of released inmates from a California prison was matched against the self-reported data and assigned low and high offense frequencies. If selective incapacitation truly were a predictive measure of crime patterns, the individuals with high offense frequencies would be the most likely to return to the criminal justice system quickly. However, no correlation was actually observed. Blumstein outlines two major problems with the experiment. For one, the prisoner population involved in the self-report was not representative of the general population; one can expect that those in prison are more likely to be high-frequency offenders. Blumstein also notes that the experiment measured arrest frequencies rather than crime frequencies.

Blumstein’s vignette leaves us with a snapshot of the criminal justice system’s complexity. Forecasting crime patterns is immensely difficult. Not only is knowledge of individual crime patterns often restricted or unavailable, but even accurately using these data sets can require both care and finesse.

Source: Blumstein, Alfred. 2002. Crime modeling. Operations Research 50(1) 16-24.