Like others, I’ll be using this week’s blog post to give some background on my upcoming presentation. My paper describes an application of operations research to airport security.
The Los Angeles International Airport (known as “LAX” in some niche social circles) is the second busiest airport in the United States in terms of passenger boardings. Boasting nearly 40 million passenger boardings in 2015, LAX is believed to be an attractive target for terrorists. Its role in domestic and international transportation cannot be overstated. As police and security resources are limited, completely securing the airport is impossible. Even if full security coverage were possible, it would be a logistical nightmare that would seriously delay passengers.
The authors of the paper developed ARMOR (Assistant for Randomized Monitoring over Routes) as an application to help security agencies determine where to allocate their limited resources. It is tailored to meet two key challenges in airport security. First, it provides randomization in its resource allocation. When adversaries are able to recognize patterns in security, they are able to exploit them. Randomization adds a much-desired layer of uncertainty to the adversary’s planning. Second, ARMOR addresses security forces’ uncertainty in information about the adversary. In the context of airport security, threats can manifest in numerous ways (bombs, hijackings, etc.), each requiring different resources for detection. ARMOR works to mitigate this by considering multiple threats simultaneously and formulating the best overall security plan.
ARMOR is based on the theory of Stackelberg games. In a Stackelberg game, a leader first makes a decision, to which a follower reacts in an attempt to optimize his or her reward. Consider the payoff table below for a Stackelberg game, where each cell lists the leader’s payoff followed by the follower’s. Let 1 and 2 denote possible initial actions for the leader. Similarly, let a and b denote possible recourse actions for the follower.

| | a | b |
|---|---|---|
| **1** | 2, 1 | 4, 0 |
| **2** | 1, 0 | 3, 2 |
In a simultaneous game, there exists a Nash equilibrium where the leader chooses action 1 and the follower chooses action a. In this situation, neither player can benefit unilaterally by changing his or her decision. However, we are not considering Nash equilibria in this situation; rather, we allow the leader to choose an action first, and the follower to selfishly react to it. With this paradigm, the leader would select action 2, because it is known that the follower will subsequently select action b to obtain the maximum payoff. This strategy gives the leader a payoff of 3. If we allow the leader to select each action with a fixed probability, selecting 1 and 2 each with probability 0.5 would result in an expected reward of 3.5. The follower, capable of choosing only one action, would select b.
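To make the commitment logic concrete, here is a minimal sketch of the 2×2 example above in Python. The payoff numbers follow the table; the function names (`follower_best_response`, `leader_payoff`) are my own, not from the paper.

```python
# (leader_action, follower_action) -> (leader_payoff, follower_payoff),
# matching the 2x2 payoff table above.
payoffs = {
    ("1", "a"): (2, 1), ("1", "b"): (4, 0),
    ("2", "a"): (1, 0), ("2", "b"): (3, 2),
}

def follower_best_response(p1):
    """Follower's best reply when the leader plays action 1 w.p. p1."""
    exp = {f: p1 * payoffs[("1", f)][1] + (1 - p1) * payoffs[("2", f)][1]
           for f in ("a", "b")}
    return max(exp, key=exp.get)

def leader_payoff(p1):
    """Leader's expected payoff after the follower best-responds."""
    f = follower_best_response(p1)
    return p1 * payoffs[("1", f)][0] + (1 - p1) * payoffs[("2", f)][0]

print(leader_payoff(0.0))  # pure commitment to action 2 -> 3.0
print(leader_payoff(0.5))  # uniform mix over 1 and 2 -> 3.5
```

Note that committing to the uniform mix already beats any pure commitment here, which is exactly the value of randomization that ARMOR exploits.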
In a Bayesian Stackelberg game, we allow a set of leaders and a set of followers to play against one another. ARMOR incorporates a Bayesian Stackelberg game with one leader (airport security) and multiple followers (the variety of security threats). Next week, we’ll discuss how to model this Bayesian Stackelberg game first as a mixed-integer quadratic program, then as a mixed-integer linear program. We’ll discuss some of the challenges of creating the ARMOR software and also how it performed “in the field.”
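Before the full formulation next week, a rough sketch of the Bayesian idea: each follower type best-responds to the leader’s mix, and the leader’s reward is the prior-weighted average over types. All numbers below are illustrative assumptions of mine, not data from the paper.

```python
# Hedged sketch of evaluating one leader mixed strategy against two
# hypothetical follower types (e.g. two threat types), each with its
# own payoffs and a prior probability. Payoff values are illustrative.
leader_pay = {("1", "a"): 2, ("1", "b"): 4, ("2", "a"): 1, ("2", "b"): 3}
follower_pay = {
    "type1": {("1", "a"): 1, ("1", "b"): 0, ("2", "a"): 0, ("2", "b"): 2},
    "type2": {("1", "a"): 0, ("1", "b"): 3, ("2", "a"): 1, ("2", "b"): 0},
}
prior = {"type1": 0.7, "type2": 0.3}  # assumed probabilities of each type

def expected_leader_payoff(p1):
    """Each type best-responds to the mix (p1, 1 - p1); the leader's
    reward is the prior-weighted average over follower types."""
    total = 0.0
    for t, pay in follower_pay.items():
        br = max(("a", "b"),
                 key=lambda f: p1 * pay[("1", f)] + (1 - p1) * pay[("2", f)])
        total += prior[t] * (p1 * leader_pay[("1", br)]
                             + (1 - p1) * leader_pay[("2", br)])
    return total
```

Finding the mix that maximizes this expectation over all follower types is the hard part, and it is what the mixed-integer programs we’ll see next week solve.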
Pita, James, et al. “Deployed ARMOR protection: the application of a game theoretic model for security at the Los Angeles International Airport.” Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems: Industrial Track. International Foundation for Autonomous Agents and Multiagent Systems, 2008.