Coordinating randomized policies for increasing security of agent systems

Praveen Paruchuri, Jonathan P. Pearce, Janusz Marecki, Milind Tambe, Fernando Ordóñez, Sarit Kraus

Research output: Contribution to journal › Article › peer-review

15 Scopus citations

Abstract

We consider the problem of providing decision support to a patrolling or security service in an adversarial domain. The idea is to create patrols that achieve a high level of coverage or reward while taking into account the presence of an adversary. We assume that the adversary can learn or observe the patrolling strategy and use this to its advantage. We follow two different approaches, depending on what is known about the adversary. First, if there is no information about the adversary, we use a Markov Decision Process (MDP) to represent patrols and identify randomized solutions that minimize the information available to the adversary. This leads to the development of two algorithms, CRLP and BRLP, for policy randomization of MDPs. Second, when there is partial information about the adversary, we decide on efficient patrols by solving Bayesian Stackelberg games. Here, the leader first commits to a patrolling strategy, and then an adversary, drawn from possibly many adversary types, selects its best response to the given patrol. We provide two efficient mixed-integer programming (MIP) formulations, DOBSS and ASAP, to solve this NP-hard problem. Our experimental results show the efficiency of these algorithms and illustrate how these techniques provide optimal and secure patrolling policies. These models have been applied in practice: DOBSS is at the heart of the ARMOR system currently deployed at Los Angeles International Airport (LAX) to randomize checkpoints on the roadways entering the airport and canine patrol routes within the airport terminals.
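To illustrate the leader-follower setting described in the abstract, the sketch below is a minimal example, not the paper's DOBSS or ASAP formulations: it computes a defender's optimal randomized strategy for a small two-player Stackelberg game by the classic "multiple LPs" method, solving one linear program per candidate adversary best response and keeping the best. The payoff matrices and the function name stackelberg_multiple_lps are illustrative assumptions, not taken from the article.

```python
# Minimal sketch of Stackelberg strategy computation via the "multiple LPs" method.
# Assumed toy data; not the DOBSS/ASAP MIPs from the paper.
import numpy as np
from scipy.optimize import linprog

def stackelberg_multiple_lps(R, C):
    """R, C: (m x n) leader/follower payoff matrices.
    Returns (leader value, leader mixed strategy x, follower best response j)."""
    m, n = R.shape
    best = (-np.inf, None, None)
    for j in range(n):
        # Maximize x @ R[:, j], assuming the follower answers with pure action j.
        c = -R[:, j]  # linprog minimizes, so negate the objective
        # Incentive constraints: x @ C[:, j] >= x @ C[:, j'] for every other j'.
        rows = [C[:, jp] - C[:, j] for jp in range(n) if jp != j]
        A_ub = np.vstack(rows) if rows else None
        b_ub = np.zeros(len(rows)) if rows else None
        A_eq = np.ones((1, m))          # x must be a probability distribution
        b_eq = np.array([1.0])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, 1)] * m, method="highs")
        if res.success and -res.fun > best[0]:
            best = (-res.fun, res.x, j)
    return best

if __name__ == "__main__":
    # Toy patrolling game: rows = patrol routes, columns = attack targets.
    R = np.array([[2.0, 4.0], [4.0, 3.0]])  # defender (leader) payoffs
    C = np.array([[1.0, 0.0], [0.0, 2.0]])  # adversary (follower) payoffs
    value, x, j = stackelberg_multiple_lps(R, C)
    print(f"defender value {value:.2f}, patrol mix {np.round(x, 3)}, adversary attacks {j}")
```

On this toy instance the optimal patrol mixes the two routes with probabilities 2/3 and 1/3 and earns a higher expected payoff than either deterministic patrol, which is the benefit of randomization that the abstract argues for.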

Original language: English
Pages (from-to): 67-79
Number of pages: 13
Journal: Information Technology and Management
Volume: 10
Issue number: 1
DOIs
State: Published - 2009

Bibliographical note

Funding Information:
Acknowledgments: This research is supported by the United States Department of Homeland Security through the Center for Risk and Economic Analysis of Terrorism Events (CREATE). This work was supported in part by NSF grant no. IIS0705587 and by the ISF.

Funding


Funders:

• National Science Foundation
• U.S. Department of Homeland Security

Keywords

• Decision theory
• Game theory
• Multiagent systems
• Randomized policies
• Security
