The challenges in moving beyond perfect rationality are significant. There is little consensus on which model suits which domain, or on whether any single model can outperform all the others. It therefore becomes an important research question to analyze these models and determine which is better suited to a given setting. Another important problem in this respect is computing the best strategy from complex mathematical formulations: the calculation of mixed-strategy equilibria can involve very large matrices, making the computation difficult to handle. In this context, some models have proven more effective than others.

Game-theoretic models are now being used to analyze real-world security resource allocation problems [8, 20]. These models introduce enough randomization into the system that attackers cannot find patterns in the allocation of security. ARMOR [16], IRIS [22] and GUARDS [19] are the best-known examples of systems that use this approach in real-world security domains. In the past, such systems were designed under the assumption that attackers are perfectly rational and act solely to maximize their own benefit from the system. Such a system works best against a highly intelligent attacker who can calculate his own rewards in the best way possible. However, this does not hold in all cases: attackers may be prone to human errors that the system is not robust enough to handle. There is a need to design a system that can not only account for these human errors but also exploit them in the best way possible.
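The discussion above mentions that security systems compute a randomized (mixed) defender strategy against a perfectly rational attacker. As a toy illustration of that baseline assumption, the sketch below solves a two-target, one-resource Stackelberg security game by grid search over the defender's coverage probability: the defender commits to a mixed strategy, the attacker best-responds. The payoff numbers and the `best_defender_strategy` helper are illustrative assumptions, not taken from the thesis or from ARMOR, IRIS, or GUARDS.

```python
# Toy Stackelberg security game: 2 targets, 1 defender resource,
# perfectly rational attacker. Payoffs are illustrative only.

# Per target: (attacker reward if uncovered, attacker penalty if covered)
ATT = [(5, -1), (3, -2)]
# Per target: (defender penalty if uncovered, defender reward if covered)
DEF = [(-4, 2), (-3, 1)]

def best_defender_strategy(steps=1000):
    """Grid-search the coverage probability p on target 0
    (target 1 gets 1 - p); the attacker best-responds to p."""
    best_p, best_util = 0.0, float("-inf")
    for i in range(steps + 1):
        p = i / steps
        cov = [p, 1.0 - p]
        # Attacker attacks the target with highest expected utility.
        att_utils = [c * pen + (1 - c) * rew
                     for c, (rew, pen) in zip(cov, ATT)]
        t = max(range(2), key=lambda j: att_utils[j])
        # Defender's expected utility at the attacked target.
        unc, covd = DEF[t]
        d_util = cov[t] * covd + (1 - cov[t]) * unc
        if d_util > best_util:
            best_p, best_util = p, d_util
    return best_p, best_util

p, u = best_defender_strategy()
print(f"cover target 0 with probability {p:.3f}, defender utility {u:.3f}")
```

Even in this two-target case the optimal coverage sits at the attacker's indifference point, which hints at why, with many targets and resources, the equilibrium computation grows into the large-matrix optimization problems the text describes.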
Object Description
Title | Computational model of human behavior in security games with varying number of targets |
Author | Goenka, Mohit |
Author email | mgoenka@usc.edu; mohitgoenka@gmail.com |
Degree | Master of Science |
Document type | Thesis |
Degree program | Computer Science |
School | Viterbi School of Engineering |
Date defended/completed | 2011-03-30 |
Date submitted | 2011 |
Restricted until | Unrestricted |
Date published | 2011-04-19 |
Advisor (committee chair) | Tambe, Milind |
Advisor (committee member) | John, Richard S.; Maheswaran, Rajiv T. |
Abstract | Security is one of the biggest concerns all around the world. There are only a limited number of resources that can be allocated to security coverage. Terrorists can exploit any pattern of monitoring deployed by security personnel, so it becomes important to make the security pattern unpredictable and randomized. In such a scenario, the security forces can be randomized using algorithms based on Stackelberg games.; Stackelberg games have recently gained significant importance in real-world security deployments. However, what is challenging in applying game-theoretic techniques to real-world security problems is the standard assumption that the adversary responds perfectly rationally to the security forces' strategy, which can be unrealistic for human adversaries. Different models, in the form of PT, PT-Attract, COBRA, DOBSS and QRE, have already been proposed to address this in settings with a fixed number of targets. My work focuses on the evaluation of these models when the number of targets is varied, giving rise to an entirely new problem set. |
Keyword | artificial intelligence; behavioral sciences; game theory; human behavior; COBRA; DOBSS; PT; PT-Attract; QRE; Stackelberg |
Language | English |
Part of collection | University of Southern California dissertations and theses |
Publisher (of the original version) | University of Southern California |
Place of publication (of the original version) | Los Angeles, California |
Publisher (of the digital version) | University of Southern California. Libraries |
Provenance | Electronically uploaded by the author |
Type | texts |
Legacy record ID | usctheses-m3757 |
Contributing entity | University of Southern California |
Rights | Goenka, Mohit |
Repository name | Libraries, University of Southern California |
Repository address | Los Angeles, California |
Repository email | cisadmin@lib.usc.edu |
Filename | etd-Goenka-4204 |
Archival file | uscthesesreloadpub_Volume62/etd-Goenka-4204.pdf |