accounts for all the ε-optimal strategies corresponding to any particular choice of the defender. It turns out that the defender reward obtained by the PT model is smaller than the defender reward under the ε-optimal adversarial strategies.

COBRA

COBRA is one of the leading works to account for human behavior [17], and the model is supported by experimental evidence from human subjects. Whether it is as fast as some of the other models, however, is questionable. COBRA models the adversaries' behavior from two different aspects: bounded rationality in computing the optimal strategy, and an anchoring bias caused by the attackers' inability to observe precisely the mixed strategy the defender uses for security allocation. The computation is a Mixed Integer Linear Program (MILP) with variables that account for the selection of ε-optimal adversarial strategies, as in the PT-Attract model. The major problem with this model is that finding an optimal solution to a Bayesian Stackelberg game is NP-hard [4]; COBRA is therefore a MILP facing an NP-hard problem.

An alpha (α) parameter is varied to account for the limited number of observations that a human adversary may be able to use. The value of α ranges from 0 to 1 and gives the weight placed on the ignorance prior, while 1 − α gives the weight placed on the occurrences the adversary has actually observed. Thus α = 0 would mean that the adversaries have fully observed the system, while α = 1 would mean that the adversaries have not observed it at all.
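COBRA's full MILP is not reproduced on this page, but the two behavioral ingredients it encodes can be illustrated directly: the α-anchored perception of the defender's mixed strategy, and the set of ε-optimal targets available to a boundedly rational adversary. The following minimal Python sketch assumes standard security-game payoffs (an attacker reward if the target is unprotected and a penalty if it is covered); the payoff numbers and function names are illustrative, not taken from the thesis.

import numpy as np

def perceived_strategy(x, alpha):
    # Anchoring bias: the adversary's belief is a convex combination of
    # the uniform ignorance prior (weight alpha) and the defender's true
    # mixed strategy x (weight 1 - alpha).
    n = len(x)
    uniform = np.full(n, 1.0 / n)
    return alpha * uniform + (1.0 - alpha) * np.asarray(x, dtype=float)

def epsilon_optimal_targets(x_perc, reward_uncovered, penalty_covered, eps):
    # Expected attacker utility per target under the perceived coverage,
    # and the set of targets within eps of the best response.
    u = x_perc * penalty_covered + (1.0 - x_perc) * reward_uncovered
    return [t for t in range(len(u)) if u[t] >= u.max() - eps]

# Example: three targets, defender coverage concentrated on target 0.
x_perc = perceived_strategy([0.6, 0.3, 0.1], alpha=0.5)
attack_set = epsilon_optimal_targets(
    x_perc,
    reward_uncovered=np.array([5.0, 4.0, 3.0]),
    penalty_covered=np.array([-1.0, -1.0, -1.0]),
    eps=0.5,
)
print(attack_set)  # with alpha = 0.5, all three targets fall in the eps-optimal set

With α = 0 the perceived strategy equals the true one and the attack set shrinks toward the strict best response; as α grows toward 1, the perception collapses to the uniform prior, which is the "no observations" extreme described above.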
Object Description
Title | Computational model of human behavior in security games with varying number of targets |
Author | Goenka, Mohit |
Author email | mgoenka@usc.edu; mohitgoenka@gmail.com |
Degree | Master of Science |
Document type | Thesis |
Degree program | Computer Science |
School | Viterbi School of Engineering |
Date defended/completed | 2011-03-30 |
Date submitted | 2011 |
Restricted until | Unrestricted |
Date published | 2011-04-19 |
Advisor (committee chair) | Tambe, Milind |
Advisor (committee member) | John, Richard S.; Maheswaran, Rajiv T. |
Abstract | Security is one of the biggest concerns all around the world. There are only a limited number of resources that can be allocated to security coverage, and terrorists can exploit any pattern of monitoring deployed by the security personnel. It therefore becomes important to make the security pattern unpredictable and randomized; in such a scenario, the security forces can be randomized using algorithms based on Stackelberg games. Stackelberg games have recently gained significant importance in deployment for real-world security. What becomes challenging in applying game-theoretic techniques to real-world security problems is the standard assumption that the adversary is perfectly rational in responding to the security forces' strategy, which can be unrealistic for human adversaries. Different models in the form of PT, PT-Attract, COBRA, DOBSS and QRE have already been proposed to address this scenario in settings with a fixed number of targets. My work focuses on the evaluation of these models when the number of targets is varied, giving rise to an entirely new problem set. |
Keyword | artificial intelligence; behavioral sciences; game theory; human behavior; COBRA; DOBSS; PT; PT-Attract; QRE; Stackelberg |
Language | English |
Part of collection | University of Southern California dissertations and theses |
Publisher (of the original version) | University of Southern California |
Place of publication (of the original version) | Los Angeles, California |
Publisher (of the digital version) | University of Southern California. Libraries |
Provenance | Electronically uploaded by the author |
Type | texts |
Legacy record ID | usctheses-m3757 |
Contributing entity | University of Southern California |
Rights | Goenka, Mohit |
Repository name | Libraries, University of Southern California |
Repository address | Los Angeles, California |
Repository email | cisadmin@lib.usc.edu |
Filename | etd-Goenka-4204 |
Archival file | uscthesesreloadpub_Volume62/etd-Goenka-4204.pdf |