COOPERATION IN WIRELESS NETWORKS WITH SELFISH USERS

by

Hua Liu

A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(COMPUTER SCIENCE)

December 2010

Copyright 2010 Hua Liu

Dedication

To my parents, who always support me unconditionally.

Acknowledgements

This thesis is by no means a singular effort. On completing it, I want to express my deepest gratitude to everyone who has guided and accompanied me to this point.

This thesis would not have been possible without the help of my Ph.D. advisor, Prof. Bhaskar Krishnamachari. Over the past five years, Prof. Krishnamachari has been not only my academic advisor and mentor, but also my friend and my role model. His encouragement, guidance and support, from the time I met him until now, have made my Ph.D. study a wonderful memory. His wisdom, hard work, enthusiasm, inspiration, insight, and outstanding communication skills are admirable. I have learned much more from him than knowledge alone. When I got stuck on research topics, he told me that the experience is more important than the results. When I felt confused about the point of obtaining a Ph.D. degree, he helped me begin to understand the true meaning of achieving it. When I felt frustrated with the academic job search, he showed me that persistence and a strong will can finally lead me to what I want to dedicate myself to. He is an outstanding Ph.D. mentor. His guidance helped me throughout the research for and the writing of this thesis. It is an honor to be his student.

I would like to take this opportunity to thank the professors at USC who shared their immense knowledge with me and guided my research. I am grateful to Professor Murali Annavaram for showing me how to present ideas and mathematical results in a great way.
I would like to thank Professor Ramesh Govindan for teaching me how to design and evaluate a system thoroughly. I give my hearty thanks to Professor David Kempe for teaching me how to design efficient algorithms. I greatly appreciate Professor Leana Golubchik for her warm-hearted hosting and insightful advice on research topics. I would like to thank Professor Rahul Jain for insightful discussions on game theoretic approaches applied in various directions. I am also grateful for advice from professors outside USC who collaborated with me: Professor Qing Zhao, Professor Allen B. MacKenzie and Professor Xin Liu. Without their guidance, most of this thesis could not have been accomplished.

Of course, I must give my sincere thanks to my friends in ANRG, an amazing group in which every person impressed me with his or her kindness, warmheartedness, creativity and passion. Thank you all for the stimulating discussions, for the sleepless nights before deadlines, and for all the fun we have had as a big family! I want to specially thank two of my collaborators, Dr. Shyam Kapadia and Longbo Huang from USC; the papers we co-authored form building blocks of this thesis. Last but not least, I want to thank the friends who accompanied me during the time I pursued this Ph.D.

Table of Contents

Dedication
Acknowledgements
List of Tables
List of Figures
Abstract

Chapter 1: Introduction

Chapter 2: Game Theory in Wireless Networks
  2.1 Strategic Form Game
  2.2 The Nash Equilibrium Concept
    2.2.1 Background Knowledge
    2.2.2 Cooperation in Topology Control
    2.2.3 Fair Bandwidth Allocation
    2.2.4 Power Control
    2.2.5 Intrusion Detection
  2.3 Pareto Improvement and Pareto Optimality
    2.3.1 Background
    2.3.2 Power Control
  2.4 Supermodularity
    2.4.1 Background
    2.4.2 Power Control
  2.5 Mixed Nash Equilibrium
    2.5.1 Background
    2.5.2 Power Saving
  2.6 Repeated Games
    2.6.1 Background
    2.6.2 Routing Security and Reputation Systems
    2.6.3 Incentives in Routing
  2.7 Bayesian Games
    2.7.1 Background
    2.7.2 Security Mechanism
    2.7.3 Medium Access Control
    2.7.4 Routing
  2.8 Nash Bargaining Solution
    2.8.1 Background
    2.8.2 Distributed Spectrum Sharing
  2.9 Auction
    2.9.1 Background
    2.9.2 Routing
  2.10 Challenges for Future Research
  2.11 Summary

Chapter 3: Background on Wireless Networks
  3.1 Wireless Ad-hoc Networks
    3.1.1 Introduction
    3.1.2 Routing and Cooperation in Wireless Ad-hoc Networks
  3.2 Cognitive Radio Networks
    3.2.1 Introduction
    3.2.2 Cooperative Spectrum Sharing in Cognitive Radio Networks
  3.3 Community-based Mobile Applications
    3.3.1 Introduction
    3.3.2 Enhancing Cooperation in Community-based Mobile Networks
    3.3.3 Privacy in Mobile Applications

Chapter 4: Pricing in Wireless Routing
  4.1 Introduction
  4.2 Problem Definition
  4.3 The Algorithm
  4.4 Simulation Results
  4.5 Theoretical Approximation
  4.6 Summary

Chapter 5: Spectrum Sharing with Multiple Secondary Users in Cognitive Radio Networks
  5.1 Overview
  5.2 Problem Formulation
  5.3 Nash Equilibrium Analysis
  5.4 Nash Bargaining Solution
  5.5 Consideration of Truthfulness
  5.6 Summary

Chapter 6: Cooperation Among Privacy-Focused Users in Social Networks
  6.1 Overview
  6.2 Aegis: A Community Mobile Application for Personal Safety
  6.3 Problem Definition
  6.4 Algorithms
    6.4.1 Calculating Best Response
    6.4.2 Synchronized Best Response Dynamic
    6.4.3 Sequential Best Response Dynamic
    6.4.4 Moving the Nash Equilibrium
    6.4.5 Distributed Pareto Improvement
  6.5 Simulation
  6.6 Summary

Chapter 7: Conclusions and Future Work

References

List of Tables

2.1 Payoffs of players in the intrusion detection game.
4.1 The approximation method results compared with the simulation results when p is large.
5.1 Utilities without conflict.
5.2 Complete payoff table.
5.3 Normalized payoff table.

List of Figures

2.1 An example of a connectivity game.
2.2 An example of a unique pure Nash equilibrium in a broadcast game.
2.3 Payoff matrix for the Prisoner's Dilemma.
2.4 Payoff table for the Rock-Paper-Scissors game.
2.5 Illustration of the relaying model with collisions.
4.1 Path reliability versus source payment to each routing node when changing the number of nodes in a fixed area.
4.2 Source gain versus source payment to each routing node for different destination payments, when fixing the number of nodes and the area size.
4.3 Effect of source node behavior on path reliability.
4.4 Cumulative distribution function for the existence of a Nash equilibrium path when increasing the source payment to each routing node.
4.5 Illustration of the approximation method.
4.6 Illustration of the approximation method.
4.7 The approximation method results, compared with Figure 4.1.
5.1 Case number in corresponding regions.
5.2 Sliced plot of the value when a0 and c0 vary from 2 to 10.
5.3 User P1's utility increase ratio in case 1 after Nash bargaining.
5.4 Sliced plot of the value when a0 and c0 vary from 1/2 to 2.
5.5 User P1's utility increase ratio in case 7 after Nash bargaining.
6.1 An example showing that the SYN-BR algorithm does not always converge.
6.2 An example: a random node deployment and the corresponding network topology when setting the neighborhood range R = 5.
6.3 An example: converged results of the SEQ-BR and PI algorithms.
6.4 Number of constrained but not saturated nodes vs. node neighborhood range.
6.5 Social welfare comparison between the SEQ-BR and PI algorithms.
6.6 Number of instances improved for the SEQ-BR algorithm by the PI algorithm (out of 10).
6.7 Average number of iterations to convergence for the SEQ-BR and PI algorithms.

Abstract

In emerging self-organizing wireless networks, each device is controlled by a potentially selfish participant who can tamper with the networking protocols in his or her device. This behavior is dangerous, as it can lead to inefficient global utilization and the collapse of service provisioning in the network. Motivating the participants to cooperate with each other therefore becomes a key issue in such networks. The mathematical principles of game theory provide a flexible and powerful framework and tool set for studying the behavior of rational, selfish participants in strategic interaction. We apply game theory to analyze the performance of protocols in wireless networks with selfish users and to design incentives for users to cooperate, so that they are driven to operate at efficient equilibria. We provide a thorough survey of game theory applied to wireless networks and illustrate how to apply game theoretic tools to enhance cooperation via three specific case studies: routing in wireless ad-hoc networks, spectrum sharing in cognitive radio networks, and incentive design in community-based social mobile networks. In the case study of the routing problem in wireless ad-hoc networks, the goal is to find a reliable routing path.
We investigate a pricing mechanism and propose a polynomial-time construction that can generate a Nash equilibrium path on which no route participant has an incentive to cheat. We show that there is a critical price threshold beyond which an equilibrium path exists with high probability. We also illustrate that there exists an optimal price setting beyond the price threshold at which the source can maximize its utility. We evaluate the approach using simulations based on realistic wireless topologies.

In the case study of the spectrum sharing problem in cognitive radio networks, we consider in detail the specific case where two secondary users opportunistically access two channels on which each user has potentially different valuations. We formulate the problem as a non-cooperative simultaneous strategic game and identify the equilibria of this game. For cases where the resulting Nash equilibria are not efficient, we propose a novel distributed coordinated channel access mechanism that can be implemented with low overhead. This mechanism is based on the Nash bargaining solution and can guarantee full utilization of the available spectrum resources. The resulting gains are quantified for this mechanism. We also consider user truthfulness in exchanging channel valuation information. We show that truthfulness is not guaranteed in the bargaining process, so that there is a tradeoff between enforcing truthfulness and efficiency.

In the case study of cooperation in community-based mobile social applications, we motivate the work through a personal safety application, where we point out a fundamental tension between users' desire to preserve the privacy of their own data and their need for fine-grained information about others. We model the privacy-participation tradeoffs in this safety application as a non-cooperative game and design a tit-for-tat (TFT) mechanism to give users incentives to reveal their local information to the application.
We propose an algorithm that yields a Pareto-optimal Nash equilibrium. We show that this algorithm, which can be implemented in a distributed manner, guarantees polynomial-time convergence.

Chapter 1

Introduction

As recently as two decades ago, computer networks were primarily used for academic research or military purposes. For this reason, these traditional computer networks were designed with a single control objective in mind. A single control objective does not mean that control is centralized, but rather that users are essentially passive and would agree to reduce their own performance for the good of the entire network if necessary. For example, the TCP protocol [19] was originally designed to be "polite" in sharing bandwidth.

Things have changed considerably in the past twenty years. The current Internet is used by a widely heterogeneous population of users. Different end users place different values on their perceived network performance. The situation is even more serious in wireless networks, where resources such as bandwidth and battery power are more limited than in wired networks. Under these circumstances, a common control objective is no longer a valid assumption for a large portion of current networks.

The lack of a common goal in a network controlled by different communities without a centralized authority makes such networks difficult to maintain: each user will attempt to retrieve the most from the network while expecting to pay as little as possible. In human communities, this kind of behavior is called selfishness. While prohibiting selfishness is impossible over a decentralized network, applying punishments to regulate those who exhibit this behavior, or introducing incentives to avoid such behavior, may be beneficial.
Previous network and protocol designs have assumed that networks would be composed of a group of users who always obey the rules and are inherently cooperative, sharing a common goal. These assumptions may still hold today in some areas, such as military networks, search and rescue operations, or sensor networks, where all hosts belong to the same authority and their users share the same real-world goals. However, this assumption has to be dropped for networks consisting of different communities and users with conflicting interests.

Selfish behavior has already been widely observed in some typical networks. For example, in peer-to-peer file sharing systems (such as Gnutella [61] and BitTorrent [52]), selfish behaviors such as free-riding hurt system performance. It was observed in Gnutella that the number of sites providing valuable content to the network was only a small fraction of the number of users retrieving information [3]. In such networks, greed with respect to occupying a resource leads to selfish behavior by a device's owner, who may attempt to benefit from the resources provided by other nodes without making the resources of his own devices available in return. Another compelling example is in the context of mobile ad hoc networks (MANETs) [104]. In such networks, selfish users are unwilling to relay other nodes' packets (due to the limited power constraints of their devices) if there are no punishments or incentives attached to such behavior. This selfish behavior threatens the entire community by potentially reducing network connectivity (in the extreme, it can result in partitioning of the network).

Studying how cooperative users are and how much harm selfish behavior does can help us design incentive- and punishment-based mechanisms and protocols that achieve the expected system performance.
In this thesis, we focus on this topic by investigating three different case studies that show how the selfishness of the participants in a wireless network can affect its overall function, and that illustrate how to design incentive-based mechanisms to enhance cooperation among self-interested users.

To study cooperation in networks, we use game theory as the basis for our analysis. Game theory is a set of tools that originated in microeconomics and is employed to predict the behavior of selfish individuals in a free market [36]. This tool set captures the behavior of individuals in strategic situations, where an individual's success depends on the choices made by the other individuals, and it is powerful for analyzing the behavior of rational, selfish players. Game theoretic approaches have been applied successfully to economic systems over many years, and they are receiving increasing attention from the network research community as more and more selfish behavior emerges in current networks.

At the core of the development of game theoretic tools lies the concept of an "equilibrium" - a state where individuals are unlikely to change their behavior. While many definitions of equilibria have been proposed in the game theoretic literature, the most popular and widely used one is that given by Nash in 1950, called the Nash equilibrium [80]. The existence and uniqueness of Nash equilibria and the convergence of learning dynamics are fundamental yet challenging questions in the game theoretic approach.

In the context of the game theoretic approach, mechanism design is another important component. Mechanism design is the study of designing the rules of a game or system to achieve a specific outcome, even though each agent may be self-interested. This is done by setting up a structure in which agents have an incentive to behave according to the rules.
In the case where the Nash equilibrium is not efficient, we need to design game rules, as punishments or incentives, that lead nodes to behave toward the desired outcome. Proper mechanism design to enhance cooperation in networks is one of the main objectives of this thesis.

In this thesis, besides providing a survey on the use of game theoretic methods in wireless networks with selfish users, we consider three case studies:

- a pricing problem for reliable wireless routing;
- a cooperative negotiation scheme for spectrum sharing among multiple secondary users in cognitive radio networks;
- a location-based community mobile application with privacy preservation.

All the problems are formulated as games in networks. Although the problems arise in different wireless and mobile networks (ad-hoc wireless networks, cognitive radio networks and mobile social networks, respectively), the common issue in designing an efficient protocol in each case is enhancing cooperation among self-centered network components.

In the case study of reliable routing paths in wireless ad-hoc networks, we examine incentives for cooperative reliable routing in wireless ad hoc networks where the users may be inherently selfish. In our game-theoretic formulation, each node on the selected route from a source to a destination receives a payoff that is proportional to the product of a source-defined price and the probability that a given packet can be delivered to the desired destination, minus the corresponding communication cost. We give a polynomial-time construction for deriving a Nash equilibrium path on which no route participant has an incentive to cheat. We show that there is a critical price threshold beyond which an equilibrium path exists with high probability. Furthermore, we illustrate that there exists an optimal price setting beyond the price threshold at which the source can maximize its utility.
We examine how these thresholds and price settings vary with node density for different node reliability models.

Specifically, our contributions in this study are multifold. First, we show that the existence of a Nash equilibrium path can in fact be determined in polynomial time, through an algorithm that is a modification of Dijkstra's technique. Second, while it is intuitive that the likelihood of finding a Nash equilibrium in which selfish nodes are happy to participate in routing should increase as the price offered to them as payment increases, we additionally find, through realistic simulations, that there in fact exists a critical threshold price beyond which such an equilibrium path exists with high probability, when considering random wireless network configurations of fixed density. The existence of such a critical threshold has practical significance, as it implies that a fixed price can be used as an incentive in mobile networks, where specific configurations change continuously. Third, we find that there exists an optimal price at which the source can maximize its utility. Finally, we evaluate how the critical price threshold and the source-utility-maximizing price vary with network density and the value of the information to the source.

In our study of spectrum sharing among multiple independent decision-making secondary users in cognitive radio networks, we consider a problem where two selfish cognitive radio users try to share two channels on which they each have potentially different valuations. We first formulate the problem as a non-cooperative simultaneous game and identify its equilibria. For cases where the resulting Nash equilibria are not efficient, we then propose a novel coordinated channel access mechanism that can be implemented with low overhead in a decentralized fashion.
This mechanism, based on the Nash bargaining solution, guarantees full utilization of the spectrum resources while improving the utility of each user compared to the non-cooperative setting. We quantify the resulting gains. Finally, we prove that risk-averse users who are willing to accept offered information at face value have no incentive to lie to each other about their valuations in the non-cooperative game. However, we find that truthfulness is not guaranteed in the bargaining process, suggesting as an open problem the design of an incentive-compatible mechanism for bargaining.

For this research, our main contributions are three-fold. First, we formulate the problem as a non-cooperative simultaneous game. Depending on the constitution of the payoff matrix (determined by the utilities that the two users ascribe to each channel), we decompose the game into several different cases and derive the Nash equilibria of the game in each case. In some cases we show that there is a unique pure Nash equilibrium; in other cases there are multiple equilibria, but there exists a unique mixed Nash equilibrium that is focal in a distributed protocol setting. Second, for some of the cases where the Nash equilibrium does not provide Pareto efficiency, we propose a Nash bargaining solution. In this solution, which intrinsically provides a notion of fairness, there is a distributed coordination signal (which can be implemented in practice using a pseudo-random number generator) that allows the two users to each utilize both channels without overlapping, obtaining Pareto-optimal performance. We numerically characterize the utility improvement obtained via bargaining. Finally, we consider whether rational users may have an incentive to lie about their channel valuations in either the original non-cooperative game or in the bargaining enhancement.
For risk-averse users who take offered information at face value, we show that there is no incentive to lie in the non-cooperative game scenario. However, truthfulness is not guaranteed in the bargaining process. We leave as an open problem the design of mechanisms to enforce truthfulness in the bargaining solution.

In our investigation of human-related selfish behavior in mobile community-based applications, we first point out that in these applications there is often a fundamental tension between users' desire to preserve the privacy of their own data and their need for fine-grained information about others. Our work is motivated by a specific community-based mobile application called Aegis, a personal safety enhancement service based on sharing location information with trusted nearby friends. We model the privacy-participation tradeoffs in this application using a game theoretic formulation. Users in this game are assumed to be self-interested: they prefer to obtain more fine-grained knowledge from others while limiting their own privacy leakage (i.e., their own contributions to the game) as much as possible. We design a tit-for-tat (TFT) mechanism to give users incentives to contribute to the application. We investigate the convergence of two best response dynamics to a nontrivial Nash equilibrium for this game. Further, we propose an algorithm that yields a Pareto-optimal Nash equilibrium. We show that this algorithm guarantees polynomial-time convergence and can be executed in a distributed manner. We also point out that this mechanism can be generalized to a family of utility functions that satisfy certain conditions.

In the investigation of community-based mobile applications, to the best of our knowledge, we are the first to formulate the privacy control problem in a game theoretic way. This case study illustrates cooperation in networks in two aspects.
First, it shows how to design a mechanism with incentives for users to participate in the game (i.e., the community service). Second, it shows how to improve social welfare by enhancing cooperation among users. We develop an iterative procedure that converges to a Pareto-optimal (i.e., efficient) Nash equilibrium for this kind of application.

The three cases share some common high-level characteristics. The users in the three problems are both self-interested and rational. One possible way of managing a network with selfish users is to let the individual users compete with one another in a way that allows each of them to reach its (subjective) optimal working state. In such an environment, users change their behavior based on the state of the network, and the change in behavior of one user is likely to cause changes in other users' behavior, resulting in a dynamic system. In such a system, several questions need to be asked:

a) Does there exist an equilibrium point of operation such that no user would find it beneficial to change its working parameters (i.e., a Nash equilibrium)?
b) Is such an equilibrium point unique, and does the dynamic system actually converge to the equilibrium point?
c) If Nash equilibria exist and the dynamics converge, what is the efficiency of the (converged) Nash equilibrium?
d) Can we find a learning dynamic that leads convergence to a better Nash equilibrium in terms of social welfare?
e) If the Nash equilibria are inefficient, can we design a mechanism that leads the game to converge to an efficient Nash equilibrium?

These are fundamental questions in game theoretic approaches.

The organization of this thesis is as follows. In Chapter 2, we present a survey on the use of game theoretic approaches in wireless networks. In Chapter 3, we explain the architecture of different wireless networks and the corresponding related work. In the following three chapters, we present one case study per chapter.
In Chapter 4, we present the case study of pricing reliable routing in wireless ad-hoc networks with selfish users. In Chapter 5, we present how to use bargaining to improve channel sharing between selfish cognitive radios. In Chapter 6, we present a game theoretic approach to location sharing with privacy in a community-based mobile application. Finally, in Chapter 7, we present a summary of the contributions of this work, as well as a discussion highlighting the various open problems associated with this work, which can form the foundation of future research based on this thesis.

Chapter 2

Game Theory in Wireless Networks

(This chapter is based on work that appeared in [66].)

Game theory is a set of tools that originated in microeconomics and is employed to predict the behavior of selfish individuals in a free market [36]. The tool set captures the behavior of individuals in strategic situations, where an individual's success depends on the choices made by the other individuals. At the core of the development of game theoretic tools lies the concept of an "equilibrium" - a state where individuals are unlikely to change their behavior. While many definitions of equilibria have been proposed in the game theoretic literature, the most popular and widely used one is that given by Nash in 1950, called the Nash equilibrium [36, 71, 80].

In recent years, game theory has been gaining the attention of the wireless research community, since the tool set is powerful for analyzing the behavior of rational, selfish players. This stems from the fact that self-interested users are considered basic components of some wireless networks, especially ad hoc networks. In this chapter, we introduce important concepts in game theory, followed by their applications in wireless networks, some of which have appeared in recent research literature. A wide variety of problem spaces have been described in which game theory has been
successfully applied, including data routing, power control, wireless security systems, topology control, medium access control, and reliable routing.

Game theory, like other fields of applied mathematics, has matured over the past 50 years. The 1944 classic Theory of Games and Economic Behavior by von Neumann and Morgenstern served to initially popularize the field of game theory. In the decades that followed, many scholars helped develop the theory further. Some notable contributions besides those made by Nash are described below. In 1957, Luce and Raiffa published the first popular game theory textbook [69], in which they introduced the concept of repeated games. Vickrey provided the first formalization of auctions in 1961 [108]. Subsequently, in 1967, Harsanyi developed the concept of a "Bayesian Nash equilibrium" for Bayesian games [42].

We have organized this survey of game theory in wireless networks based on different types of game formulations. First, we discuss the strategic form game, which is the most widely used formulation in wireless contexts. Then, we define pure Nash equilibria and mixed equilibria. Subsequently, we introduce a special class of games called supermodular games. Then, we describe repeated games and related applications in reputation system designs. This is followed by a discussion of Bayesian games, which involve the use of probabilistic methods in dominant strategy calculations when players have incomplete information. The last part of the chapter provides a brief description of truthful auctions.

2.1 Strategic Form Game

The strategic form (or normal form) game is a basic component of game theory. A strategic form game [36] is a triplet $\langle I, (S_i)_{i \in I}, (u_i)_{i \in I} \rangle$ where

1. $I$ is a finite set of game players, $i \in I$, $I = \{1, \dots, I\}$.

2. $(S_i)_{i \in I}$ is the collection of available action sets, where $S_i$ is a non-empty set of actions for player $i$. A tuple $\{s_1, \dots, s_I\} \in S$ is called an action profile.

3.
$u_i$ is a payoff function (also called a utility function) for player $i$ defined over each action profile $s = \{s_1, \dots, s_I\}$.

In game theoretic notation, authors commonly refer to all players other than some given player $i$ as "player $i$'s opponents" and denote them by "$-i$". Note that a strategic form game is a simultaneous decision game. In this section, we consider strategic games with complete information, which means that each player has knowledge of all the other players' utility functions.

There are two kinds of strategies for a player: pure strategies and mixed strategies. Given a set of strategies, if a player chooses to take one action with probability 1, then that player is playing a pure strategy. A mixed strategy is a probability distribution over pure strategies. Each player's randomization is statistically independent of those of his opponents, and the payoffs to a profile of mixed strategies are the expected values of the corresponding pure strategy payoffs. In the following four sections, we present significant concepts in strategic games and discuss their applications in wireless networks accordingly.

2.2 The Nash Equilibrium Concept

The Nash equilibrium is named after its inventor, John Nash (1950), though a similar concept was introduced by Cournot [22] in 1897. Generally speaking, a Nash equilibrium is a steady state in which no selfish, rational player would unilaterally change his strategy. If a game converges, it must converge to a Nash equilibrium. Several Nash equilibria may exist in a game. Since players in the game are self-interested, a Nash equilibrium does not necessarily maximize social welfare; in fact, in most cases a Nash equilibrium differs from the globally optimal solution. The globally optimal solution is one in which players cooperate completely with each other to maximize the system throughput regardless of individual gain. Hence, the existence and uniqueness of the Nash equilibrium of a game is an important topic for researchers.
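For small finite games, these questions can be explored directly by enumerating unilateral deviations. The sketch below (illustrative code, not part of any cited work; the payoff matrices are hypothetical) finds all pure-strategy Nash equilibria of a two-player strategic form game:

```python
def pure_nash_equilibria(u1, u2):
    """Find all pure-strategy Nash equilibria of a two-player game.

    u1[i][j] / u2[i][j] are the payoffs to players 1 and 2 when
    player 1 plays action i and player 2 plays action j.
    """
    n, m = len(u1), len(u1[0])
    equilibria = []
    for i in range(n):
        for j in range(m):
            # Neither player can gain by deviating unilaterally.
            best_for_1 = all(u1[i][j] >= u1[k][j] for k in range(n))
            best_for_2 = all(u2[i][j] >= u2[i][l] for l in range(m))
            if best_for_1 and best_for_2:
                equilibria.append((i, j))
    return equilibria

# A hypothetical coordination game with two pure Nash equilibria
# of different social welfare.
u1 = [[2, 0], [0, 1]]
u2 = [[2, 0], [0, 1]]
print(pure_nash_equilibria(u1, u2))  # [(0, 0), (1, 1)]
```

The coordination game above has two pure equilibria with different social welfare, illustrating why an equilibrium need not coincide with the globally optimal solution.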
2.2.1 Background Knowledge

A Nash equilibrium is a set of strategies, one for each player, such that no player has an incentive to unilaterally change his action. In other words, players are in Nash equilibrium if a change in strategy by any one of them would result in lower gains for that player than if he had remained with his current strategy. Mathematically, a pure strategy Nash equilibrium is a pure strategy profile $s^*$ such that for every player $i$,

$u_i(s_i^*, s_{-i}^*) \ge u_i(s_i, s_{-i}^*)$   (2.1)

for all $s_i \in S_i$. A mixed strategy Nash equilibrium is a profile $\sigma^*$ such that for each player $i$ and each $\sigma_i \in \Sigma_i$,

$u_i(\sigma_i^*, \sigma_{-i}^*) \ge u_i(\sigma_i, \sigma_{-i}^*)$   (2.2)

where $\sigma_i$ is a probability distribution over player $i$'s pure strategies; that is, each element of $\sigma_i$ indicates how frequently a pure strategy is chosen. In this case, the utility function value is the expected value over the corresponding pure strategy values.

The Nash equilibrium is an important concept in game theory since, if every player in a game is rational, the Nash outcome is each player's best response given the other players' strategies. In many network game theory contexts, only pure strategies are reasonable or desirable because of their practical and easy implementation. However, according to Nash, a mixed Nash equilibrium always exists in finite games, whereas a pure strategy Nash equilibrium might not. Therefore, many studies focus on the existence of pure Nash equilibria and the complexity of deciding their existence.

Figure 2.1: An example of a connectivity game network. (a) A pure Nash equilibrium; (b) not a Nash equilibrium.

In a game formulation, if a Nash equilibrium exists, we need a metric to measure the performance of the system at the Nash equilibrium in comparison to the system performance at the global optimum. Two widely used metrics in the literature are the Price of Anarchy (POA) [62] and the Price of Stability (POS) [10].
For maximization problems, POA and POS are defined as follows:

$POA = \dfrac{\text{Value of Worst Equilibrium}}{\text{Value of Optimal Solution}}, \qquad POS = \dfrac{\text{Value of Best Equilibrium}}{\text{Value of Optimal Solution}}$

For minimization problems, the definitions are inverted. Under these conventions, $POA \le 1$ and $POS \le 1$ for maximization problems, while $POA \ge 1$ and $POS \ge 1$ for minimization problems. In the following subsection, we will discuss some papers that focus on the existence of pure strategy Nash equilibria and investigate the system performance at the Nash equilibrium.

2.2.2 Cooperation in Topology Control

Eidenbenz et al. [31] consider a problem of topology control in wireless ad hoc networks. In their formulation, the players are all the nodes (referred to as points in the network graph) in the network. The strategy of each player is to decide its own transmission radio range. The authors consider three kinds of games: the connectivity game, the strong connectivity game, and the reachability game. In the connectivity game, a set of source-destination pairs of the form $(s_i, t_i)$ is given. In each pair, the source node $s_i$ needs to choose a radius so as to be connected to destination $t_i$ (possibly through multiple intermediate nodes) while keeping the radius as small as possible. Figure 2.1 gives an example of the connectivity game. Given a source-destination pair $\{s, t\}$, Figure 2.1(a) shows a pure Nash equilibrium, while Figure 2.1(b) depicts a result that is not a Nash equilibrium. The strong connectivity game is a special case of the connectivity game in which each node needs to be connected with every other node. In both connectivity games, the utility functions are designed to penalize a node relatively heavily if it cannot satisfy the connectivity constraints; otherwise, a node's utility increases with decreasing radius. In the reachability game, each node tries to reach as many other nodes as possible while minimizing its radius. The utility function of the reachability game is defined as the total number of connected nodes minus the power cost of setting up the radio range.
Mathematically, given a radius vector $\mathbf{r}$, the utility of user $i$ is

$U(i) = f_{\mathbf{r}}(i) - r_i^{\alpha}$

where each element of the vector $\mathbf{r}$ denotes the transmission radius of a node in the network, $r_i$ is node $i$'s radius, $f_{\mathbf{r}}(i)$ is the number of vertices reachable from $i$, and $\alpha$ is the distance-power gradient, usually 2.

The major conclusions of the work are three-fold. First, the authors claim that in the strong connectivity game, a polynomial time algorithm can find Nash equilibria with cost at most a factor of 2 of the minimum possible cost. Second, for the connectivity game, when a given network graph satisfies the triangle inequality, deciding the existence of a Nash equilibrium is NP-hard. Finally, the authors also state that for the reachability game, $(1 + o(1))$-approximate Nash equilibria exist for graphs characterized by a random distribution of nodes.

A related topology control game is studied by Kesselman et al. [60]. Considering every node in the network as a player, they consider two games: a broadcast game and a convergecast game. In the broadcast game, each node has to establish a directed path from a pre-defined root node to itself. In the convergecast game, each node needs to deliver its packets to the root node. They formulate the game with both cost-sharing and non-sharing models. In the cost-sharing model, the cost of each node can be paid by several other nodes. In the cost model without sharing, each node pays its own transmission cost. A strategy of a player $u$ is a payment function $Pay_u^v$, which indicates how much player $u$ would pay player $v$. The transmit power of a node is proportional to its total income obtained from all other nodes in the network. The social cost of the outcome of the game is the total transmission power of all the nodes in the network.

Figure 2.2: An example of a unique pure Nash equilibrium in a broadcast game.

Figure 2.2 gives an example of a broadcast game with a unique Nash equilibrium.
In this case, three rays emanate from the root node, and the angle between two adjacent rays is 120 degrees. On the upper-left and upper-right rays, there are two nodes at distances $d - x$ and $d + x$ ($x < d$), respectively, from the root node. The ray at the bottom has a node at distance $d$ from the root node. The unique Nash equilibrium occurs when the bottom node pays the root node for a transmission range $d$ and the furthest upper nodes pay the relay nodes for a transmission range $2x$.

The authors also study the pessimistic and optimistic price of anarchy (PoA). Denoting the cost of a Nash equilibrium by $C_N$ and the cost of the optimal solution by $C_{OPT}$, they define the pessimistic PoA as $\max_P \max C_N / C_{OPT}$ and the optimistic PoA as $\max_P \min C_N / C_{OPT}$, where $P$ ranges over problem instances and the inner maximum/minimum range over the Nash equilibria of an instance. The conclusions highlight three findings of the study. First, in a broadcast game, a pure Nash equilibrium may not exist; the optimistic PoA is bounded away from 1, while the pessimistic PoA is $\Theta(n)$. Second, for a convergecast game without sharing, there always exists a pure Nash equilibrium; the optimistic PoA for this game is 1, and the pessimistic PoA is $\Theta(n^{\alpha - 1})$, where $\alpha$ ($2 \le \alpha \le 8$) is the distance-power gradient.³ Finally, for a convergecast game with sharing, the optimistic PoA is 1 and the pessimistic PoA is $\Theta(n)$. They also provide a heuristic greedy algorithm based on a minimum spanning tree to calculate the Nash equilibrium for a hop-bounded broadcast game.

³ In their modeling, the neighbors of a node $u$ are determined by its transmission power $P_u$, which means node $u$ can reach all nodes within its transmission range $R_u = P_u^{1/\alpha}$.

2.2.3 Fair Bandwidth Allocation

Fang and Bensaou formulate and solve a game theoretic problem pertaining to fair bandwidth/rate allocation for flows in a wireless multihop network setting [32].
They start with a general formulation in which the goal is to maximize the weighted sum of given concave functions of the flow rates, subject to bandwidth consumption constraints that are modeled using a clique-resource approach.⁴ The general problem formulation is given below:

$P: \quad \max_{x_i} \sum_i w_i f_i(x_i)$
subject to $\sum_{i \in S(j)} x_i \le c_j, \quad j = 1, 2, \dots, M$
and $x_i \ge 0, \quad i = 1, 2, \dots, N$

From this general problem, they derive an unconstrained, Lagrange-relaxation-based optimization formulation in which the goal is to maximize the sum of a derived utility function $V_i$ for each flow that is a function of all the flow rates. This is used to model the problem as a noncooperative strategic form game where the players are the individual flows, the strategy space consists of the continuous flow rate of each flow, and the utility function is $V_i$, which is proved to be strictly concave. The authors show that this game has a unique pure Nash equilibrium. Furthermore, they employ a result pertaining to games with concave utility functions [93] to show that the equilibrium can be reached by changing the strategy of each flow at a rate proportional to the gradient of its payoff function with respect to its strategy. This fact yields a simple distributed algorithm for the problem, whose effectiveness is verified through numerical solutions.

⁴ This approach uses a graph to capture the contention among flows. In this graph, every vertex represents an active flow/link and each edge denotes interference. Flows in the same maximal clique cannot transmit simultaneously. A flow can succeed in transmission if and only if all flows that share at least one clique with this flow do not transmit.

2.2.4 Power Control

Wireless networks, in contrast to wired networks, are characterized by higher signal interference. Together with fading, multipath, and other impairments, signals are distorted as they travel from hop to hop.
The signal-to-interference-and-noise ratio (SIR) is a common metric employed to characterize the quality of signals. Generally, increasing the transmit power level leads to a higher SIR. However, wireless devices have limited resources in terms of battery power. Hence, it is important to optimize the usage of the limited power resource while striving to meet the user's required signal quality. In this section, we discuss some research related to this power control problem within the context of a game theoretic framework.

A significant and widely cited piece of work was conducted by Alpcan et al. [6]. In this study, the authors model power control on Code Division Multiple Access (CDMA) uplinks as a non-cooperative strategic game. The players are the users sharing a single cell. The strategy space of each user is a set of available power levels. The players try to minimize their own cost, defined as the difference between a pricing function and a utility function. For player $i$, the pricing function is proportional to the transmit power level $p_i$, and the utility function is a logarithmic function of the user's SIR $\gamma_i$:

$J_i(p_i, p_{-i}) = \alpha_i p_i - u_i \ln(1 + \gamma_i)$   (2.3)

where $\alpha_i$ is a positive number and $u_i$ is a user-specific weight parameter.

The major contribution of this study is that the authors not only prove that there exists a unique Nash equilibrium for this game, but also explicitly characterize it. To obtain the Nash equilibrium point, they employ the following steps. First, for each user $i$, they define a user-specific parameter $a_i := (u_i h_i / \alpha_i) - (\sigma^2 / L)$, where $0 < h_i < 1$ is the channel gain from user $i$ to the base station, $\sigma^2 > 0$ is the interference, and $L > 1$ is the throughput. Then, they index the array $a$ in descending order, i.e.,
$a_i < a_j \implies i > j$   (2.4)

Let $M^* \le M$ (where $M$ is the total number of users) denote the largest integer $\tilde{M}$ which satisfies

$a_{\tilde{M}} > \dfrac{1}{L + \tilde{M} - 1} \sum_{i=1}^{\tilde{M}} a_i$   (2.5)

The conclusion is that users $M^* + 1, \dots, M$ have zero power levels at the Nash equilibrium, while users $1, 2, \dots, M^*$ have power levels given by

$p_i^* = \dfrac{1}{h_i} \left\{ \dfrac{L}{L - 1} \left[ a_i - \dfrac{1}{L + M^* - 1} \sum_{j=1}^{M^*} a_j \right] \right\}$   (2.6)

If no such $\tilde{M}$ exists, then the Nash equilibrium solution simply assigns zero power levels to all $M$ players. Furthermore, the authors provide two power-updating algorithms, namely the parallel update algorithm (PUA) and the random update algorithm (RUA), that converge to the Nash equilibrium. In the PUA, nodes update their power levels in each iteration (at discrete time intervals) according to the reaction function $p_i = \max\{0, \frac{1}{h_i}[a_i - \frac{1}{L} y_i]\}$, where $y_i$ is the sum of the power levels of the other users as received at the base station. In the RUA, users update their power levels with a pre-defined probability. The stability, robustness, and convergence of each algorithm are discussed in the paper.

2.2.5 Intrusion Detection

Security is a critical concern in wireless networks. Over the years, researchers have applied various kinds of reinforcement learning and data mining techniques to detect intrusions and thereby defend wireless equipment from attackers. The application of game theoretic tools to security-related problems is still at a very early stage.

Agah et al. propose a static game theoretic approach to security management in mobile wireless sensor networks [4]. The game is based on a cluster-based sensor network comprising a trusted base station with infinite energy. This game has two players: an attacker and a cluster protector. The attacker tries to launch a denial of service (DoS) attack on a certain cluster head. The protector (an Intrusion Detection System (IDS)) tries to protect the cluster head before the attack is launched. An important assumption of this game is that the attacker is rational and not malicious.
Note that a rational player's goal is to maximize his own payoff, as opposed to a malicious player, whose goal may be to destroy the system without any payoff considerations. With respect to one fixed cluster, say the $k$-th cluster, the attacker employs one of the following three strategies: ($AS_1$) attack cluster $k$; ($AS_2$) do not attack at all; ($AS_3$) attack a cluster other than cluster $k$. The IDS has two strategies: ($SS_1$) defend cluster $k$, or ($SS_2$) defend a different cluster. The payoff table for the combinations of strategies is shown in Table 2.1.

Table 2.1: Payoffs (attacker; IDS) of the players in the intrusion detection game.

         | $AS_1$                                        | $AS_2$                      | $AS_3$
$SS_1$   | $(PI(t) - CI;\ U_{ij}(t) - C_k)$              | $(-CW;\ U_{ij}(t) - C_k)$   | $(PI(t) - CI;\ U_{ij}(t) - C_k - P \cdot AL_{k'})$
$SS_2$   | $(PI(t) - CI;\ U_{ij}(t) - C_{k'} - P \cdot AL_k)$ | $(-CW;\ U_{ij}(t) - C_{k'})$ | $(PI(t) - CI;\ U_{ij}(t) - C_{k'} - P \cdot AL_{k''})$

Here, the notation $k'$ denotes a certain cluster other than cluster $k$, and $k''$ denotes a cluster which is neither cluster $k$ nor cluster $k'$. The other symbols are defined as follows: $U_{ij}(t)$ is the utility of ongoing sessions in the sensor network; $AL_k$ is the average loss incurred if cluster $k$ is lost (i.e., the loss if the attacker attacks cluster $k$ while the defender defends a cluster other than $k$); $C_k$ is the average cost of defending cluster $k$; $CW$ is the cost of waiting and deciding to attack in the future; $CI$ denotes the cost of intrusion for the attacker; and $PI(t)$ is the average profit for each attack.

Using this payoff table, the authors carefully define the formulae to calculate $CI$, $C_k$, $CW$, and $PI(t)$. Briefly, the cost of intrusion for the attacker, $CI$, is defined to be proportional to the complexity of the attack. The cost of defense for the IDS, $C_k$, is defined as a linear combination of the previous attack history of cluster $k$ and the cluster size of $k$; the history has a weight that is significantly higher than the cluster size.
The cost of waiting for the attacker, $CW$, is defined to be proportional to the number of previous unsuccessful attack attempts on this cluster, plus a small constant. The profit for the attacker from successfully attacking the cluster equals the IDS's loss for that cluster, which comprises the values of all the processes running in the cluster.

The major contribution of this piece of work is that, with the carefully designed payoff table, the authors prove that this game has a unique pure strategy Nash equilibrium at the strategy pair $(AS_1, SS_1)$. This implies that the equilibrium occurs when both the attacker and the protector target the same cluster. In other words, if the attacker is selfish (not malicious) and plays by the game rules, then in theory the defender will always defend the attacked cluster successfully.

2.3 Pareto Improvement and Pareto Optimality

The Nash equilibrium only considers strategy profiles from which no rational player will unilaterally deviate. Sometimes a game is characterized by the existence of multiple Nash equilibria, and among them one Nash equilibrium results in all players having better or equal payoffs (and at least one player having a better payoff) than in another Nash equilibrium. In such cases, the former Nash equilibrium is preferred over the latter one(s); the former is known as a Pareto improvement over the latter. In the next section, we provide a brief discussion of how the concept of Pareto improvement can be utilized to achieve a desirable Nash equilibrium.

2.3.1 Background

If a strategy profile can make at least one player better off without making any other player worse off, we call it a Pareto improvement over the previous strategy profile.
That is, for strategy profiles $s'$ and $s$, if for every player $i$ in the game we have

$u_i(s'_i, s'_{-i}) \ge u_i(s_i, s_{-i})$   (2.7)

and there exists a player $j$ such that

$u_j(s'_j, s'_{-j}) > u_j(s_j, s_{-j})$   (2.8)

then $s'$ is a Pareto improvement over $s$. A strategy profile is Pareto efficient, or Pareto optimal, if no further Pareto improvements can be made.

The simplest example with which to explain the relationship between the Nash equilibrium point and the Pareto optimal point is the famous prisoner's dilemma. The classical prisoner's dilemma (PD) is as follows [36]. Two suspects, A and B, are arrested by the police. The police have insufficient evidence for a conviction and, having separated both prisoners, visit each of them to offer the same deal: if one testifies for the prosecution against the other and the other remains silent, the betrayer goes free and the silent accomplice receives the full 10-year sentence. If both stay silent, the police can sentence both prisoners to only one year in jail on a minor charge. If each betrays the other, each will receive a two-year sentence. Each prisoner must independently choose whether to betray the other or to remain silent, and neither prisoner knows for sure what choice the other prisoner will make. Letting negative integers denote the years of sentence a prisoner receives, the payoff matrix for the prisoner's dilemma is described in Figure 2.3.

Figure 2.3: Payoff matrix for the prisoner's dilemma.

In this case, $\{Betray, Betray\}$ is a Nash equilibrium, while $\{Silent, Silent\}$ is a Pareto improvement over the Nash equilibrium; in fact, it is the Pareto optimal strategy profile for both prisoners. In general, a Pareto optimal operating point performs no worse than a Nash equilibrium. Hence, many game theoretic studies have proposed techniques to achieve Pareto improvement and Pareto optimality when the Nash equilibrium is Pareto inefficient.
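Using the sentences described above (recorded as negative payoffs), the gap between the Nash equilibrium and the Pareto-optimal profile can be verified mechanically. A minimal sketch; the encoding of actions as 0 = Silent, 1 = Betray is ours:

```python
# Payoffs (negative years in jail): index 0 = Silent, 1 = Betray.
u1 = {(0, 0): -1, (0, 1): -10, (1, 0): 0, (1, 1): -2}   # prisoner A
u2 = {(0, 0): -1, (0, 1): 0, (1, 0): -10, (1, 1): -2}   # prisoner B

def is_nash(s):
    """No prisoner gains by unilaterally switching his own action."""
    a, b = s
    return (all(u1[(a, b)] >= u1[(x, b)] for x in (0, 1)) and
            all(u2[(a, b)] >= u2[(a, y)] for y in (0, 1)))

def pareto_improves(s_new, s_old):
    """Every player at least as well off, and someone strictly better."""
    gains = [u1[s_new] - u1[s_old], u2[s_new] - u2[s_old]]
    return all(g >= 0 for g in gains) and any(g > 0 for g in gains)

print(is_nash((1, 1)))                  # True: (Betray, Betray)
print(is_nash((0, 0)))                  # False: each would rather betray
print(pareto_improves((0, 0), (1, 1)))  # True: (Silent, Silent)
```

The equilibrium profile is thus Pareto inefficient, which is exactly the situation the pricing and bargaining mechanisms discussed later in this chapter aim to correct.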
2.3.2 Power Control

Power control problems seek to intelligently select transmit powers in a communication system to achieve good performance. Good performance means optimizing metrics of interest such as link data rate, network capacity, geographic coverage and range, and the lifetime of the network and network devices. Power control problems are considered in many contexts, including cellular networks, sensor networks, wireless LANs, and DSL modems. A power control game is a power control problem formulated in a game theoretic context; the metrics of interest map to appropriate utility functions. In a power control game, a solution is Pareto optimal if there exists no other power allocation under which one or more users can improve their utilities without reducing the utilities of the other users. In this section, we describe a few studies that have investigated power control games in wireless networks.

Shah et al. [96] discuss design issues in developing a game theoretic framework for a wireless power control problem in Code Division Multiple Access (CDMA) systems. The authors formulate the power control problem as a non-cooperative strategic game where the players are all the users in the network who are transmitting data to a common base station. The strategy space of each player is the set of values that the user's power is restricted to, and is assumed to be continuous.

The authors also provide a discussion of the desired properties that a utility function of a realistic wireless network should possess. There are five such properties. Let $u_i(p_i, \gamma_i)$ denote the utility function of player $i$, where $p_i$ is its transmit power and $\gamma_i$ denotes the user's signal-to-interference-and-noise ratio (SIR) for a fixed transmit power $p_i$. First, for a fixed transmit power, the utility should monotonically increase with SIR, i.e.,
$\dfrac{\partial u_i(p_i, \gamma_i)}{\partial \gamma_i} > 0 \quad \forall \gamma_i, p_i > 0$   (2.9)

Second, the utility should obey the law of diminishing marginal utility⁵ when the SIR is large enough. That is,

$\lim_{\gamma_i \to \infty} \dfrac{\partial u_i(p_i, \gamma_i)}{\partial \gamma_i} = 0 \quad \forall p_i > 0$   (2.10)

Third, for a fixed SIR, the utility should be a monotonically decreasing function of the user's transmit power. Mathematically,

$\dfrac{\partial u_i(p_i, \gamma_i)}{\partial p_i} < 0 \quad \forall \gamma_i, p_i > 0$   (2.11)

Fourth, when the transmit power approaches zero, the utility should go to zero:

$\lim_{p_i \to 0} u_i = 0$   (2.12)

Finally, when the transmit power goes to infinity, the utility should also go to zero:

$\lim_{p_i \to \infty} u_i = 0$   (2.13)

⁵ The law of diminishing marginal utility is the customer's view of the law of diminishing returns in economics.

As an example, the authors employ the following utility function, which satisfies all five properties mentioned above:

$u_i(p_i, \gamma_i) = \dfrac{ER}{p_i} (1 - e^{-0.5 \gamma_i})^L$   (2.14)

where $E$ is the remaining power of the node, $R$ is the data transmission rate, and $L$ is the length of the data packet. This quasi-concave utility function⁶ yields a unique Nash equilibrium point as per Debreu's theorem [28]. However, this Nash equilibrium point is Pareto inefficient, since the authors prove that if all the users are willing to reduce their transmit powers by a small amount, they will all improve their utilities.

In order to achieve a Pareto improvement, the notion of pricing is introduced into the game. The basic idea of pricing is that if more users are competing in the network, each of them will have to use a higher transmit power; hence, the price of using the network should be higher, which in turn should discourage users from using the network. Therefore, the price function should be monotonically increasing with transmit power. The authors model the price function as $F_i = tRp_i$, where $t$ is a positive constant. Incorporating the notion of pricing yields a new utility (payoff) function, defined as the difference between the user's satisfaction and the price, i.e., $u_i - F_i$.

⁶ Definition: A function $f$ is quasiconcave if $f(\lambda x_1 + (1 - \lambda) x_2) \ge \min[f(x_1), f(x_2)]$ for all $x_1, x_2$ in its domain, where $0 \le \lambda \le 1$.
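The effect of the linear price $F_i = tRp_i$ on a user's best response can be illustrated numerically with the utility of Equation 2.14. In the sketch below, the interference seen by the user is held fixed, the SIR is taken proportional to transmit power, and all numeric constants ($E$, $R$, $L$, $t$, the channel gain, and the power grid) are hypothetical:

```python
import math

E, R, L, t = 100.0, 1e4, 80, 2e-3      # hypothetical constants
h, interference = 0.1, 1.0             # fixed gain and fixed interference

def sir(p):
    # SIR grows linearly with transmit power for fixed interference.
    return h * p / interference

def utility(p):
    # Equation 2.14: satisfaction per unit of spent energy.
    return (E * R / p) * (1 - math.exp(-0.5 * sir(p))) ** L

def payoff_with_price(p):
    # Priced payoff: utility minus the linear price F = t * R * p.
    return utility(p) - t * R * p

powers = [0.5 * k for k in range(1, 401)]      # grid over (0, 200]
best_plain = max(powers, key=utility)
best_priced = max(powers, key=payoff_with_price)
# Pricing discourages high transmit power: the priced best response
# is no larger than the unpriced one.
print(best_priced <= best_plain)               # True
```

On this grid the priced best response is no larger than the unpriced one, matching the intent of pricing: discouraging aggressive transmit power choices.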
With pricing, the new game has a unique Nash equilibrium that is a Pareto improvement over the original one. However, the authors are not able to conclusively state whether the Pareto-improved Nash equilibrium is Pareto optimal or not. The main results of this study are used by many subsequent studies.

A closely related study on power control is conducted by Saraydar et al. [94]. A non-cooperative power control game is proposed where, as before, the players are mutually interfering users. The strategy space is the set of power levels that a player can choose for data transmission. The utility function is defined as

$u_i(p_i, p_{-i}) = \dfrac{LR}{M p_i} f(\gamma_i)$

with units of bits/joule. Here, $R$ is the data rate, $L$ is the number of information bits in a packet of $M$ total bits, $p_i$ is the transmit power level, and $f(\gamma_i)$ is called the efficiency function, which indicates the probability of successful packet reception. As before [96], the authors prove the existence of a unique Nash equilibrium point with this utility function; however, the equilibrium is shown to be Pareto inefficient. With the goal of providing a Pareto improvement, a pricing scheme is introduced into the problem framework. The usage-based price is defined as a linear function of the transmit power. The authors claim that this enhanced game with pricing is a supermodular game [105]. Subsequently, they prove the existence of a Nash equilibrium for this game using supermodularity theory (see Section 2.4). However, they also point out that the Nash equilibrium is not unique. Furthermore, they conclude that when the pricing function is defined as

$c_i = (1/p_i) \sum_{j=1, j \ne i}^{n} p_j u_j$   (2.15)

the Nash equilibrium obtained is Pareto optimal. But achieving this equilibrium requires a centralized mechanism, since the utility value $u_i$ of each node needs to be known globally.

Meshkati et al.
have performed several extensive studies of game theoretic power control under different scenarios and constraints [75] [74] [73]. The main distinction of these works from prior studies is that the network comprises multiple base stations which may be chosen by the users for data transmission [75]. The game $G$ is defined as $G = [\mathcal{K}, \{A_k\}, \{u_k\}]$, where $\mathcal{K} = \{1, \dots, K\}$ is the set of users, $A_k = [0, P_{max}]$ ($P_{max}$ being the maximum allowed transmit power) is the strategy space of the $k$-th user, and the utility function $u_k$ is defined as the ratio of throughput to transmit power level, with units of bits per joule. Here, throughput is the number of reliably (without error, or correctably) transmitted bits. With these definitions, the authors show that there exists a unique Nash equilibrium. It is achieved when all users pick the minimum mean-square error (MMSE) detector as their uplink receiver and choose their transmit powers such that their output SIRs are all equal to $\gamma^*$, where $\gamma^*$ is the solution to $\frac{\partial}{\partial \gamma}\left(\frac{f(\gamma)}{\gamma}\right) = 0$ and $f(\gamma)$ is an efficiency function⁷ which denotes the packet success rate (PSR). Furthermore, the authors state that the obtained Nash equilibrium is not Pareto optimal, but they also illustrate that the difference between the two is generally not significant. A similar formulation and similar results by the same authors are presented in [74] with more details and concrete examples. The authors also provide a distributed greedy algorithm to achieve the Nash equilibrium. Note that in [75] and [74], the goal of each user is to maximize its throughput while employing minimum transmit power.

With a similar game setting, but taking delay requirements into consideration, Meshkati et al. prove the existence and uniqueness of a Nash equilibrium for a delay-constrained problem [73]. In this study, the authors assume that if there is any error during data transmission, the nodes will retransmit the packet.
The transmission delay for a packet is then directly proportional to the number of transmissions required for a packet to be received without any errors (denoted by $X$). For an efficiency function $f(\gamma)$, the probability that exactly $m$ transmissions are required to successfully transmit a packet is given by

$\Pr\{X = m\} = f(\gamma)(1 - f(\gamma))^{m-1}$   (2.16)

Hence, the delay requirement of a particular user is given as a pair $(D, \beta)$, where

$\Pr\{X \le D\} \ge \beta$   (2.17)

i.e., the delay requirement is modeled through the cdf of $X$ and states that "the number of transmissions is at most $D$ with probability greater than or equal to $\beta$". Since $f(\gamma)$ is an increasing function of $\gamma$, this constraint provides a lower bound on the value of $\gamma$. Let $\tilde{\gamma}$ denote the value which satisfies the delay requirement with equality. A delay-constrained utility function for player $k$ is defined as:

$\tilde{u}_k = \begin{cases} u_k, & \text{if } \gamma_k \ge \tilde{\gamma}_k \\ 0, & \text{if } \gamma_k < \tilde{\gamma}_k \end{cases}$   (2.18)

⁷ Generally, $\gamma$ represents the SIR, and $f(\gamma)$ is a monotonically non-decreasing function of $\gamma$.

With this modified utility function, the unique Nash equilibrium is derived as $\tilde{p}_k = \min\{\hat{p}_k, P_{max}\}$ for each user $k$. Here, $\hat{p}_k$ is the transmit power that yields an SIR equal to $\hat{\gamma}_k$ at the output of the receiver, with $\hat{\gamma}_k = \max\{\tilde{\gamma}_k, \gamma^*\}$.⁸ As part of the validation process, the authors show the efficiency of the Nash equilibrium point via numerical results.

⁸ $\gamma^*$ is the optimal value derived from the original problem without the delay requirement.

2.4 Supermodularity

Supermodular games were developed by Topkis in 1979 [105]. The main distinguishing idea in a supermodular game is that when a player takes a higher action according to a defined order, the other players are better off if they also take higher actions; that is, the game has increasing best responses. Moreover, supermodular games are particularly well behaved. For example, they have pure strategy Nash equilibria, and polynomial time algorithms exist to identify these equilibria.
2.4.1 Background

Supermodularity is defined through the concept of increasing differences [36].

Definition (Increasing differences): A function u_i(s_i, s_{−i}) has increasing differences in (s_i, s_{−i}) if, for all (s_i, s̃_i) ∈ S_i² and (s_{−i}, s̃_{−i}) ∈ S_{−i}² such that s_i ≥ s̃_i and s_{−i} ≥ s̃_{−i},

u_i(s_i, s_{−i}) − u_i(s̃_i, s_{−i}) ≥ u_i(s_i, s̃_{−i}) − u_i(s̃_i, s̃_{−i})   (2.19)

Definition (Supermodular): u_i(s_i, s_{−i}) is supermodular in s_i if, for each s_{−i},

u_i(s_i, s_{−i}) + u_i(s̃_i, s_{−i}) ≤ u_i(s_i ∧ s̃_i, s_{−i}) + u_i(s_i ∨ s̃_i, s_{−i})   (2.20)

for all (s_i, s̃_i) ∈ S_i². Here, the "meet" operator (x ∧ y) and the "join" operator (x ∨ y) are defined in Euclidean space R^k as:

x ∧ y ≡ (min(x_1, y_1), ..., min(x_k, y_k))   (2.21)

x ∨ y ≡ (max(x_1, y_1), ..., max(x_k, y_k))   (2.22)

Definition (Supermodular Game): A supermodular game G = <I, S, U> is such that, for each i, S_i is a sublattice of R^{m_i}, u_i has increasing differences in (s_i, s_{−i}), and u_i is supermodular in s_i. If the utility function u_i is twice continuously differentiable, u_i is supermodular if and only if, for any two components s_l and s_k of s,

∂²u_i / (∂s_l ∂s_k) ≥ 0   (2.23)

2.4.2 Power Control

In this section, we describe an application of supermodular games to a power control problem in wireless data networks. Recall that in a supermodular game, players can iteratively modify their strategies in an ordered direction to reach a desired Nash equilibrium. Saraydar et al. discuss modifying their uplink power control game into a supermodular game by changing the player strategy space [94]. Note that the original uplink power control game formulation involving pricing was described earlier in Section 2.3.2. The original game is not a supermodular game, since the utility function does not satisfy the increasing differences condition (see Equation 2.19) in that strategy space.
Instead of allowing the lower bound of the power level to be any non-negative number, the modified strategy space for player i is defined as P_i = [p_i^min, p_i^max], where p_i^min is the minimum transmit power that satisfies γ_i ≥ 2 ln M (M is the frame length) and p_i^max is the maximum transmit power allowed by system constraints. Restricting the transmit power of player i to p_i ≥ p_i^min ensures ∂²u_i(p)/(∂p_i ∂p_j) ≥ 0 for all pairs (i, j) with i ≠ j. This game with the modified strategy space is, by the definition in Section 2.4, a supermodular game. With this game formulation, the authors provide an iterative algorithm to obtain the smallest (i.e., most power-efficient) equilibrium in the set of Nash equilibria. The algorithm initializes the power of each node at time t = 0 as p(0) = p^min, the vector of minimum powers. In each iteration t = k, each node calculates the transmit power that maximizes its utility over the modified strategy space, given all other nodes' strategies at time t = k − 1. The authors prove that this algorithm converges to the most power-efficient equilibrium by using the increasing differences property of supermodular games.

2.5 Mixed Nash Equilibrium

So far we have considered applications of pure strategy Nash equilibria. However, a pure Nash equilibrium does not always exist. In this section, we consider mixed Nash equilibria, which exist for every finite game. A mixed strategy is a strategy in which a player plays his available pure strategies with certain probabilities. In an n-player game with a finite number of pure strategies, there exists at least one equilibrium in mixed strategies; this existence is established by a famous theorem of Nash [36]. Furthermore, if no pure strategy Nash equilibrium exists, then there must exist at least one mixed strategy equilibrium.

Figure 2.4: Payoff table for the rock-paper-scissors game

              Rock       Paper      Scissors
  Rock        (0,0)      (-1,1)     (1,-1)
  Paper       (1,-1)     (0,0)      (-1,1)
  Scissors    (-1,1)     (1,-1)     (0,0)
A famous example is the game of rock-paper-scissors, whose payoff table is shown in Figure 2.4. Mixed strategies are best understood in the context of repeated games (see Section 2.6), where each player's goal is to keep the other players guessing. However, the mixed strategy concept can be applied to any kind of game.

2.5.1 Background

In the pure strategy context, a player can only choose to play one strategy among the given strategies. In the mixed strategy context, a player i chooses to play each pure strategy s_i^j in his strategy space S_i = {s_i^1, s_i^2, ...} with a certain probability p_i^j (0 ≤ p_i^j ≤ 1). The payoff of each player in the mixed strategy game is the expected payoff computed under the probability distributions over all the other players' strategies. A mixed strategy Nash equilibrium is defined on the expected payoffs of all the players.

2.5.2 Power Saving

In this section, we discuss an example of applying mixed Nash equilibria to improve power saving in a query system in which the server broadcasts its replies. In their study [114], Yeung and Kwok consider the following scenario for a wireless data access problem: several clients are interested in a set of data items kept at a server. One client sends a query request to the server for a desired data item, and the server responds by broadcasting the requested item so that all clients receive it. In this scenario, the authors show that it is not necessary for each client to send a query request to the server for each data item. Instead, each client determines its request probability without any explicit communication with the other clients. The resulting probability distribution over the strategy space forms a mixed Nash equilibrium. In this non-cooperative game, the players are the clients, numbered {1, 2, ..., n}. Each player determines its request probability as its mixed strategy. The strategy space of player i is given by S_i = {s_i | 0 ≤ s_i ≤ 1}.
The strategy space of the game is the Cartesian product of all the players' strategy spaces, S = ∏_{i∈N} S_i. The utility of each player i is expressed as the number of queries that can be completed with a fixed energy budget. Mathematically, client i's utility is given by

U_i = E_total / (E_UL^i + E_DL^i)   (2.24)

where E_total is the amount of energy available to the client, and E_UL^i and E_DL^i are the expected amounts of energy consumed by data requests and data downloads, respectively, for client i; E_DL^i is proportional to the size of the requested data item. With this formulation, the authors provide a solution for the two-player game. Let the energy cost of sending a request signal be E_s and the energy wasted while waiting be E_w = αE_s. Note that if nobody sends a request within a pre-fixed duration t, all players try again with their strategy-specified probabilities. To avoid waiting indefinitely, after two broadcast cycles (a broadcast cycle is the estimated, known-beforehand time for the server to broadcast a reply to the whole network), a player that still has not obtained the data is forced to send a request with probability 1. Hence, the expected energy cost of request transmission is

E_UL^i = s_i E_s + (s_i E_s + αE_s) ∏_{j=1}^n (1 − s_j) + (1 + 2α)E_s ∏_{j=1}^n (1 − s_j)²   (2.25)

Substituting this into the utility function of Equation 2.24 and plotting the best-strategy curve, the authors show that the symmetric equilibrium strategy s* of each player satisfies:

s* = [2(1 + 2α)(1 − s*)^{2(n−1)} − (1 − α)(1 − s*)^{n−1} − 1] / [2(1 + 2α)(1 − s*)^{2(n−1)} − 2(1 − s*)^{n−1}]   (2.26)

2.6 Repeated Games

All the discussions so far have concerned one-shot games, also known as single-stage or single-shot games. As the name implies, these fall under the category of non-repeated games. During a game, if a player's actions are observed periodically, it becomes possible for other players to condition their play on the past actions of their opponents.
This can lead to equilibrium outcomes that do not arise in one-shot games. A repeated game consists of a series of one-shot games: in game theory, a repeated game (or iterated game) is an extensive form game consisting of a number of repetitions of some stage game.

2.6.1 Background

Repeated games [36] may be repeated finitely or infinitely many times. The most widely studied repeated games are those repeated a possibly infinite number of times. Finitely repeated games have some inherent defects: since the players are inherently selfish, if they know when the game is about to end, they can take advantage of this information and may tend to cheat in the final stages of the game. There are several alternative specifications of payoff functions for infinitely repeated games. The most widely used applies a discount factor δ (δ < 1) to each subsequent game stage. This discount factor has two primary interpretations. First, it captures the fact that at each stage there is some finite probability that the game will end. Second, it indicates that each individual cares slightly less about each successive stage. Repeated games need to keep track of the historical behavior of each player in order to judge whether a player violates a cooperation agreement. Normally, a reputation system is designed to detect misbehavior of selfish players. There are several well-known policies employed in repeated games to punish detected misbehaving players. Grim Trigger [11] is one of the famous policies. Initially, a player using Grim Trigger cooperates, but as soon as his opponent misbehaves (thus satisfying the trigger condition), the player using Grim Trigger betrays the agreement for the remainder of the iterated game. Since a single defection by the opponent triggers continuous defection in all subsequent stages, Grim Trigger is the most strictly unforgiving policy in an iterated game.
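The logic of such trigger strategies can be made concrete with a standard discounted prisoner's dilemma computation; the payoff numbers below are assumed for illustration and are not taken from the surveyed papers.

```python
def grim_trigger_sustains(T, R, P, delta):
    """Check whether grim trigger sustains cooperation.

    Cooperating forever yields R / (1 - delta).  A one-shot deviation
    yields the temptation payoff T now, followed by mutual punishment
    P forever, i.e. T + delta * P / (1 - delta).  Cooperation is
    sustainable when the first quantity is at least the second, which
    rearranges to delta >= (T - R) / (T - P).
    """
    cooperate = R / (1.0 - delta)
    deviate = T + delta * P / (1.0 - delta)
    return cooperate >= deviate

# With temptation T=5, reward R=3, punishment P=1, the critical
# discount factor is (5-3)/(5-1) = 0.5.
print(grim_trigger_sustains(5, 3, 1, 0.6))  # True: patient players cooperate
print(grim_trigger_sustains(5, 3, 1, 0.4))  # False: impatient players defect
```

The threshold (T − R)/(T − P) is why the discount factor's "probability the game continues" interpretation matters: cooperation survives only when the future is valued enough.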
Compared to Grim Trigger, Tit-For-Tat is a more effective, forgiving strategy. Intuitively, Tit-For-Tat obeys the simple rule of repeating the opponent's action (to cooperate or not) from the previous round: if the opponent was cooperative in the previous round, the player cooperates; if not, the player defects. (Grim Trigger is originally credited to James Friedman, who used the concept in a 1971 paper titled "A Non-cooperative Equilibrium for Supergames".)

2.6.2 Routing Security and Reputation Systems

In wireless ad hoc network routing problems, attackers are nodes that gain an edge over other nodes by means of malicious behavior. Two important related problems in this space are misleading and selfish behavior. Misleading refers to the case where a node, after prior agreement to forward packets, does not actually do so. Selfish refers to node behavior where a node does not forward packets for any other nodes but occasionally sends forwarding requests for its own packets. To thwart attacks from misleading and selfish nodes, systems can use either prevention, or detection with response. According to Schneier [95], a prevention-only strategy works only if the prevention mechanisms are perfect; otherwise, nodes will find ways to get around them. In real systems, it is extremely difficult to design a perfect prevention system. A relatively easier and more effective option is to design detection-with-response systems using reputation systems. Marti et al. propose a simple yet effective reputation system to handle the misleading problem. The system comprises two components, watchdog and path-rater [72], that are executed on each node in the network. Watchdog provides a mechanism to track the behavior of the nodes in a network. In wireless networks, as opposed to their wired counterparts, nodes can "overhear" transmissions from neighboring nodes.
Watchdog takes advantage of this characteristic and records the forwarding attempts of a given node's next-hop node. That is, a node's watchdog can detect misbehavior if it does not overhear its neighbor forwarding packets. Global exchange of behavior records helps set up the reputation of each node: a node's reputation is based on merging the observations of all the other nodes in the network. The path-rater on each node combines knowledge of misbehaving nodes with link reliability data to pick the route most likely to be reliable. After the global reputation message exchange, the reputation record of each node should be consistent; hence the path-rater on each node will induce the same path in the network. Simulation results show that reputation systems can improve throughput by up to 17% in the presence of 40% malicious nodes in the network. This improvement comes at the cost of increasing message transmission overhead by 9-17%. Buchegger and Le Boudec propose a more comprehensive reputation and path selection system for wireless routing, called CONFIDANT [15]. CONFIDANT is an extension to a reactive source-routing protocol based on DSR (Dynamic Source Routing [55]). The authors show that CONFIDANT can handle both the misleading and the selfishness problems by having each node periodically monitor the behavior of neighboring nodes. In this system, each node runs CONFIDANT locally in a distributed manner. The CONFIDANT protocol consists of four major parts: a) a monitor, which keeps track of the behavior of nodes in the neighborhood of a given node; b) a trust manager, which handles misbehavior reports and sends warnings about malicious nodes if necessary; c) a reputation system, which manages a table consisting of entries for nodes and their corresponding ratings.
The table travels along with the packet path as a reference for other nodes; d) a path manager (similar to the path-rater in [72]), which ranks paths according to how the reputation system reacts to misbehaving nodes' routing requests (ignoring the onward packet forwarding requests of malicious nodes is the most widely used punishment reaction). Simulation results indicate that such a reputation-based system performs well even when the fraction of malicious nodes in a network is as high as 60%. All the reputation systems mentioned so far require global message exchange, which not only increases system overhead but also creates additional problems. Although these designs have been shown to be effective, exchanging "second-hand information" (merging reputation information from other nodes) is not guaranteed to be secure: nodes may maliciously accuse other nodes of misbehavior, and it is difficult to decide whether an accusation is true, especially when several nodes collude. Bansal and Baker [12] propose a reputation system called OCEAN (Observation-based Cooperation Enforcement in Ad-hoc Networks). OCEAN is a sublayer that resides between the network and data link layers of the network protocol stack, and it handles the misleading problem. A critical characteristic of OCEAN is that it entirely abandons "second-hand information" when setting up node reputations. Instead, each node makes routing decisions based purely on direct observations of its neighboring nodes' exchange history with itself. Surprisingly, the authors find that this system works well in terms of throughput, considering its simplicity compared to the global message exchange based schemes. The OCEAN idea works partly because, in a network with low mobility, a node normally only needs to interact with a relatively small set of neighboring nodes while selecting a forwarding path.
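A minimal sketch of direct-observation rating in the spirit of OCEAN is shown below; the counter scheme and the faulty threshold are assumptions made for illustration, not details taken from [12].

```python
class NeighborRater:
    """Rate neighbors purely from first-hand forwarding observations.

    A node increments a neighbor's counter when it overhears the
    neighbor forward a packet it handed over, and decrements it on a
    perceived drop.  Neighbors whose rating falls to the (assumed)
    faulty threshold or below are avoided during route selection.
    No second-hand reports are ever merged in.
    """
    def __init__(self, faulty_threshold=-2):
        self.rating = {}
        self.faulty_threshold = faulty_threshold

    def observe(self, neighbor, forwarded):
        delta = 1 if forwarded else -1
        self.rating[neighbor] = self.rating.get(neighbor, 0) + delta

    def is_usable(self, neighbor):
        return self.rating.get(neighbor, 0) > self.faulty_threshold

rater = NeighborRater()
for _ in range(3):
    rater.observe("B", forwarded=False)   # B keeps dropping our packets
rater.observe("C", forwarded=True)        # C forwards as agreed
print(rater.is_usable("B"), rater.is_usable("C"))  # False True
```

Because only first-hand counters are consulted, false accusations by third parties simply cannot enter the rating, which is the design point OCEAN trades global knowledge for.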
However, the performance of OCEAN has not been evaluated for networks with high mobility, where a node's neighbors may change frequently over short durations.

2.6.3 Incentives in Routing

Rather than building reputation systems and employing punishment policies to coerce nodes into cooperating, some researchers argue that sufficient incentives in the node utilities can effectively prevent selfish node behavior. DaSilva and Srivastava analytically model node behavior in voluntary resource sharing networks and quantify the cost-benefit tradeoffs that lead nodes to participate voluntarily in resource sharing (e.g., forwarding packets for other nodes). In each one-shot game, the players are the N user nodes in the network. The strategy s_j of each player j is either sharing (s_j = 1) or not sharing (s_j = 0) the resource. The utility function is a linear combination of the benefit from all other nodes' actions and the cost of sharing one's own resource with others:

u_j(s) = α_j(Σ_{i∈N, i≠j} s_i) + β_j(s_j)   (2.27)

where α_j(·) and β_j(·) capture the benefit and cost parts, respectively. Note that β_j(s_j) can be negative (if sharing is considered a cost) or positive (if the user derives some benefit or satisfaction from sharing with others). As in the prisoner's dilemma problem discussed in Section 2.3.1, this one-shot game may reach non-optimal Nash equilibria. However, better (cooperation-enhancing) equilibria are achievable when the game is repeated. Specifically, the authors assume the repeated game is played K times, where K is a geometrically distributed discrete random variable with parameter p (0 < p < 1). The authors consider a grim-trigger strategy adopted by all nodes: each node shares as long as all other nodes share, and stops sharing if any of the other nodes deviated in the previous round.
If everybody shares, a player's expected payoff from any point onward is

[α_j(N − 1) + β_j(1)] / p   (2.28)

If a node decides to unilaterally deviate at the beginning of a round, its expected payoff from that point forward is just α_j(N − 1). Therefore, it is a Nash equilibrium for all nodes to participate and share as long as

α_j(N − 1) > −β_j(1) / (1 − p)   (2.29)

As mentioned before, grim trigger is considered a "tough" policy, since it never forgives, and it requires all nodes to participate in resource sharing, which is not very practical in real scenarios. As an alternative strategy, the authors propose that in the k-th round of the game, a rational node participates in sharing if the following condition is satisfied:

α_j(Σ_{i∈N, i≠j} s_i^{k−1}) > −β_j(1) / (1 − p)   (2.30)

Intuitively, this condition says that if the expected sharing gain in future rounds from all the sharing volunteers exceeds the node's cost in the current round, the node should participate in resource sharing; the expected gain is estimated from the historical sharing status. One major advantage of this strategy is that desirable equilibria can be achieved even if nodes believe there is a low likelihood of the game being repeated for additional rounds. The authors also analyze the risk of rogue nodes by modeling the participation problem as a Bayesian game, which is introduced in the next section. Milan et al. present policies for achieving cooperation among selfish users without a central authority, using a repeated game formulation [76]. This study models packet relaying as a game between two neighboring nodes, A and B. In the basic game, if B forwards a packet for A, A overhears the transmission and gains utility, while B consumes its own resources. This game G is defined as G = <N, {p_i}, {u_i}>, where N = {1, 2} is the set of players, p_i ∈ [0, 1] is the probability that player i drops packets, and u_i = p_i − λp_{−i} is the payoff of player i (with λ > 1, a player gains by dropping but loses more when its opponent drops).
In this game, the Nash equilibrium is mutual defection; i.e., p_i = 1 for i = 1, 2 is the unique Nash equilibrium of this two-player game G. However, the authors show that if this game is played repeatedly, cooperation can be achieved under certain conditions. The authors consider two cases: cooperation without packet collisions and cooperation with packet collisions. In the case without collisions, the repeated packet relaying game is defined as a multistage game Γ = <N, {s_i}, {U_i}>, where N = {1, 2} is the set of players, p_i^{(k)} is the dropping probability of player i at stage k, and u_i^{(k)} = p_i^{(k)} − λp_{−i}^{(k)} is the payoff of player i at stage k. The strategy s_i of player i is a map (P^{(0)}, ..., P^{(k−1)}) → p_i^{(k)}. U_i is the discounted payoff of player i, defined as U_i = Σ_{k≥0} δ^k u_i^{(k)}, where the discount parameter δ ∈ [0, 1] represents the player's subjective evaluation of the future. The simplest strategy that achieves cooperation in this case is Tit-For-Tat (TFT). In the case with collisions, A wants to send a packet to C through B while, at the same time, D wants to send a packet to E. A potential collision between these transmissions prevents A from correctly overhearing B's transmission to C (see Figure 2.5, "Illustration of the relaying model with collisions"); in other words, A cannot tell whether B transmitted its packet to C. The authors use ε to represent the probability with which each node attempts a transmission in each time instant (a discrete time model is employed for this relay game), capturing the distortion introduced by packet collisions. Then p̂_i^{(k)} = ε + (1 − ε)p_i^{(k)} represents the perceived defection of player i at stage k. Similarly, the perceived payoff of player i at stage k is û_i^{(k)} = p̂_i^{(k)} − λp̂_{−i}^{(k)}, and the perceived discounted payoff of player i is Û_i = Σ_{k≥0} δ^k û_i^{(k)}. The authors show that in this game, TFT is no longer sufficient to sustain mutual cooperation.
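The breakdown of plain TFT under collision noise can be simulated directly: if each player simply copies the opponent's perceived dropping probability p̂ = ε + (1 − ε)p, perceived defection ratchets up every round. The parameter values below are assumed for illustration.

```python
def tft_drift(eps, rounds):
    """Two TFT relays that copy each other's *perceived* dropping rate.

    Both start fully cooperative (p = 0).  Because collisions make each
    node appear to drop with probability p_hat = eps + (1 - eps) * p,
    plain TFT echoes this inflated rate back, and cooperation decays
    toward full defection (p -> 1).
    """
    p1 = p2 = 0.0
    for _ in range(rounds):
        p1, p2 = eps + (1 - eps) * p2, eps + (1 - eps) * p1
    return p1, p2

p1, p2 = tft_drift(eps=0.1, rounds=200)
print(p1, p2)  # both are essentially 1: perceived mutual defection
```

Each round the cooperation level 1 − p shrinks by the factor (1 − ε), so even a small collision probability eventually destroys cooperation under unmodified TFT.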
The solution is to add a tolerance threshold to TFT so as to accommodate a limited number of perceived defections.

2.7 Bayesian Games

In all the game formulations described so far, a player is assumed to have global knowledge of the payoff attributes of all the other players. This is a simplifying assumption: in the real world, players often do not have this global information, or have only part of it. When some players do not know the payoffs of the other players, the game is said to have incomplete information. A Bayesian game is a game with incomplete information.

2.7.1 Background

In a Bayesian game, it is necessary to specify the strategy spaces, type spaces, payoff functions, and beliefs of every player. A strategy for a player is a complete plan of actions that covers every contingency that might arise for every type of that player: it must specify not only the actions of the player given his actual type, but also the actions he would take if he were of another type. A type space for a player is the set of all possible types of that player. The beliefs of a player describe that player's uncertainty about the types of the other players: each belief is the probability of the other players having particular types, given the type of the player holding the belief (i.e., the belief is p(types of other players | type of this player)). A payoff function is a two-place function of strategy profiles and types: if a player has payoff function U(x, y) and type t, the payoff he receives is U(x*, t), where x* is the strategy profile played in the game. A Bayesian Nash equilibrium is defined as a combination of (i) a strategy profile and (ii) a belief set for each player about the types of the other players, such that the strategy profile maximizes the expected payoff of each player given his beliefs about the other players' types and given the strategies played by the other players.
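A minimal numeric sketch of a Bayesian best response follows; the payoff numbers and beliefs are invented for illustration. A player facing an opponent of unknown type weighs each action's payoff by his belief over types.

```python
def bayes_best_response(belief_malicious, payoffs):
    """Pick the action maximizing expected payoff under a type belief.

    payoffs[action][opponent_type] gives the payoff of 'action' against
    an opponent of that type; the expectation is taken with respect to
    the belief that the opponent is malicious.
    """
    def expected(action):
        table = payoffs[action]
        return (belief_malicious * table["malicious"]
                + (1 - belief_malicious) * table["regular"])
    return max(payoffs, key=expected)

# A defender deciding whether to monitor: monitoring pays off only
# against a malicious opponent and costs against a regular one.
payoffs = {
    "monitor":     {"malicious": 4.0, "regular": -1.0},
    "not_monitor": {"malicious": -5.0, "regular": 0.0},
}
print(bayes_best_response(0.8, payoffs))   # monitor
print(bayes_best_response(0.05, payoffs))  # not_monitor
```

This belief-weighted comparison is exactly the structure of the attacker/defender analysis discussed next, where the equilibrium switches character as the prior crosses a threshold.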
2.7.2 Security Mechanism

Intrusion detection is an effective security tool widely employed in wired and, more recently, wireless networks. In this context, Liu et al. propose a game theoretic framework for analyzing interactions between pairs of attacking/defending nodes using a Bayesian formulation [68]. The attacker/defender game is more suitably modeled as an incomplete information game when the defender is uncertain about the attacker. Two types of games are defined: a static game and a dynamic game. In the static game, the defender does not take the evolution of the game into account when deciding its strategy; it always uses its prior belief about its opponent. In contrast, in the dynamic game, the defender adjusts its monitoring strategy based on new observations of its opponent. In the static game there are two players: i is the potential attacker and j is the defender. θ_i represents the type of player i, with θ_i = 0 meaning regular and θ_i = 1 meaning malicious; note that θ_j = 0 always holds. The malicious type of i has two strategies, to attack or not to attack; the defender j has two strategies, to monitor or not to monitor. These strategies have different payoff values. Players i and j choose their strategies simultaneously at the start of the game, and the goal of both players is to maximize their payoffs. μ_0 is defined as a common prior that i is malicious; i.e., player i knows defender j's belief μ_0. The conclusion for the static game is that if μ_0 is high enough, a mixed-strategy Bayesian Nash equilibrium exists in which defender j randomizes over monitoring and i randomizes over attacking; otherwise, a pure-strategy Bayesian Nash equilibrium exists in which defender j plays his pure strategy not to monitor, and player i plays his pure strategy to attack if malicious and not to attack if regular. The dynamic game is a multi-stage game, an extension of the static Bayesian game in which the static game is played repeatedly.
The authors make several assumptions: 1) the game has an infinite horizon; 2) the payoffs of the players in each stage of the game are the same as in the previous stage; 3) there is no discount factor on the players' payoffs; 4) the identities of the players remain consistent throughout the game. The authors show that this multi-stage game has a perfect Bayesian equilibrium.

2.7.3 Medium Access Control

Benammar and Baras propose a new medium access control protocol that provides incentives for each of the entities in a wireless network to optimize the overall utility, using a Bayesian game formulation [14]. An n-player Bayesian game is described as

Γ = {S_1, ..., S_n; T_1, ..., T_n; p_1, ..., p_n; U_1, ..., U_n}

where S_i denotes the set of strategies of player i, T_i is the set of types of player i, p_i = p(t_{−i} | t_i) is the player's belief about the other players' types t_{−i} given his own type t_i, and U_i is the player's utility, a function of the players' types and strategies. In this game, the authors assume that all users are of the same type. Hence, a strategy profile σ* = (σ*_1, ..., σ*_n) is a Bayesian equilibrium of Γ if

U_i[σ*(t), t] ≥ U_i[σ*_{−i}(t), s_i, t],  ∀i, s_i ∈ S_i   (2.31)

The authors model a three-station game and then generalize it to n stations. The strategy of each station is to transmit or to wait, S_i = {T, W}. In this formulation, u_s, u_f, and u_i denote the payoffs for a successful transmission, a failed transmission, and no transmission, respectively. The transmission probabilities of stations 1, 2, 3 are x, y, z, respectively, and U_{i|X} denotes the expected utility of station i when it follows strategy X. If station 1 mixes between transmitting and waiting, the two actions must yield the same expected payoff, i.e.,

U_{1|T} = U_{1|W}   (2.32)

With x = y = z, the above equation reduces to

(u_s − u_f)x² + 2(u_f − u_s)x + u_s − u_i = 0   (2.33)

which has the unique valid solution

x* = 1 − √((u_i − u_f)/(u_s − u_f))   (2.34)

This solution is a mixed Nash equilibrium point.
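Equation 2.34 can be checked numerically. The sketch below assumes, as the derivation above does, that a waiting station receives the no-transmission payoff u_i regardless of the others' actions, and uses illustrative payoff values.

```python
import math

def mixed_equilibrium_x(u_s, u_f, u_i):
    """Symmetric mixing probability from Equation 2.34."""
    return 1.0 - math.sqrt((u_i - u_f) / (u_s - u_f))

# Illustrative payoffs: success 1, failure (collision) -1, idle 0.
u_s, u_f, u_i = 1.0, -1.0, 0.0
x = mixed_equilibrium_x(u_s, u_f, u_i)

# Expected payoff of transmitting: success only if both other
# stations wait, which happens with probability (1 - x)**2.
U_T = (1 - x) ** 2 * u_s + (1 - (1 - x) ** 2) * u_f
U_W = u_i  # waiting yields the no-transmission payoff

print(x, U_T - U_W)  # x is about 0.293, and the indifference gap is 0
```

At x* the two expected payoffs coincide, confirming that no station can gain by shifting probability between transmitting and waiting.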
Generalizing the game to n stations, we have

x*_n = 1 − ((u_i − u_f)/(u_s − u_f))^{1/(n−1)}   (2.35)

Note that in Equations 2.34 and 2.35, u_i − u_f is the transmission cost and u_s − u_f is the payoff for a successful transmission.

2.7.4 Routing

In this section, we briefly describe some studies exploring power-constrained routing using Bayesian game formulations. Nurmi et al. present a repeated Bayesian game formulation for energy-constrained routing in wireless ad hoc networks with selfish participants [85]. In this game, each repetition is a stage. A new stage starts at a time step t_k when some node i generates g(t_k) packets into the network <N, E>, where N and E denote the node and edge sets of a connected graph. s(t_k) denotes the actual number of packets sent by node i. θ_i denotes the remaining energy of i, and the set of all possible values of θ_i is Θ. While θ_i is private to i, μ_j^{t_k} denotes i's belief about the remaining energy of node j at time step t_k, and these beliefs are independent across nodes. A route discovery phase is performed at the beginning of the game. J is the set of nodes on the routing path; therefore, the set of players is {i} ∪ J. The belief system is B := {μ_j^{t_k}}, and the action space of i is A(t_k) = {s(t_k) | s(t_k) ≤ g(t_k)}. f_j(t_k) denotes the number of packets j forwards for i; the action space of a forwarding node j is A_j(t_k) = {f_j(t_k) | 0 ≤ f_j(t_k) ≤ g(t_k)}. u_i and u_j denote the utility functions of the source i and the forwarder j. γ(θ_j) denotes the probability of forwarding at energy level θ_j; the authors assume that a node with more remaining energy has a higher probability of forwarding a packet. ρ̂_{j,i} denotes node j's estimate of the cooperation probability of node i. The probability ρ_{j,i} with which j forwards for i combines the energy-dependent probability and the probability given by the cooperation mechanism, i.e., ρ_{j,i} = α_j ρ̂_{j,i} + β_j γ(θ_j), where α_j and β_j are adjustable importance parameters.
μ_j(t_k) denotes the source node i's belief about node j's willingness to forward, given by the pair μ_j(t_k) = {ρ̂_j(t_k), θ̂_j}. Node i selects a path P from the path set PP; the probability of selecting a routing path P ∈ PP is π(P) = ∏_{j∈P} [α_i ρ̂_{i,j} + β_i γ(θ_j)]. The utility of node i depends on which path is employed and how many packets are sent; it is defined as:

u(s(t_k), P) = π(P)[h(s(t_k)) − h̄(s(t_k))]   (2.36)

where h(·) relates to the number of packets node i sends. The authors' conclusion is two-fold. First, every perfect equilibrium point is a sequential equilibrium (a sequential equilibrium specifies both a strategy for each player and a corresponding belief vector, which gives a probability distribution over the nodes in each of the player's information sets). Second, the proposed model has at least one sequential equilibrium point, and the learning sequence converges to a sequential equilibrium: if there is only one sequential equilibrium, the sequence converges to that point; otherwise, the learning sequence converges to some sequential equilibrium point.

2.8 Nash Bargaining Solution

2.8.1 Background

In a cooperative game, players bargain with each other before the game is played. If an agreement is reached, the players act according to the agreement; otherwise, they act in a non-cooperative way. Note that the agreements reached must be binding, so players are not allowed to deviate from what has been agreed. John Nash, in his seminal paper on cooperative games [81], sought to understand the outcome of a bargaining game. His key point is that we should not focus on modeling the bargaining process itself; instead, we should list the properties, or axioms, that we expect the outcome of the bargaining process to exhibit. This way of analyzing cooperative games is called axiomatic bargaining game theory [91].
In this context, an agreement point is any action vector a ∈ A that is a possible outcome of the bargaining process. A disagreement point is an action vector a⁰ ∈ A that is expected to be the result of non-cooperative play if the bargaining process fails. Note that the utility achieved at any agreement point must be greater than or equal to the utility achieved at the disagreement point. A bargaining solution is a map that assigns a solution to a given cooperative game. The following definition was published by Nash in [81].

Definition 1: Nash Bargaining Solution (NBS). Let U = {(u_i(a)) | a ∈ A} be a convex, closed and upper bounded subset of R^N, let a⁰ be the disagreement point, let u_i⁰ = u_i(a⁰) be the utility of player i at the disagreement point, and let U⁰ = {u ∈ U | u ≥ u⁰} be the set of achievable utilities. Then u* = φ(U, u⁰) is an NBS if it meets the following conditions:

1. Individual rationality (IR): u_i* ≥ u_i⁰; that is, u* ∈ U⁰.

2. Pareto optimality (PO): if there exists u' ∈ U⁰ such that u'_i ≥ u_i* for all i, then u'_i = u_i* for all i.

3. Invariance to affine transformations (INV): if ψ: R^N → R^N with ψ(u) = u', u'_i = c_i u_i + d_i, c_i, d_i ∈ R, c_i > 0 for all i, then φ(ψ(U), ψ(u⁰)) = ψ(φ(U, u⁰)).

4. Independence of irrelevant alternatives (IIA): if u* ∈ V ⊆ U and u* = φ(U, u⁰), then φ(V, u⁰) = u*.

5. Symmetry (SYM): if U is symmetric with respect to i and j, u_i⁰ = u_j⁰, and u* = φ(U, u⁰), then u_i* = u_j*.

Conditions 3)-5) are also known as the fairness axioms. The INV axiom ensures that the solution is invariant under affine scaling. The IIA axiom states that if the domain is reduced to a subset that still contains the NBS, the NBS does not change. The SYM axiom states that the NBS does not depend on labels: if two players have the same disagreement utility and the same feasible utility set, they achieve the same utility at the NBS point.
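Numerically, the NBS maximizes the Nash product ∏_i (u_i − u_i⁰) over the individually rational feasible set. A sketch for a toy two-player feasible region (chosen purely for illustration):

```python
def nbs_two_player(feasible, disagreement):
    """Pick the feasible utility pair maximizing the Nash product.

    'feasible' is a list of achievable (u1, u2) pairs.  Points that do
    not dominate the disagreement point are excluded (individual
    rationality), and the Nash product (u1 - d1)*(u2 - d2) is
    maximized over the rest.
    """
    d1, d2 = disagreement
    candidates = [(u1, u2) for (u1, u2) in feasible if u1 >= d1 and u2 >= d2]
    return max(candidates, key=lambda u: (u[0] - d1) * (u[1] - d2))

# Feasible set: a fine grid on the triangle u1 + u2 <= 1, u1, u2 >= 0.
n = 200
grid = [(i / n, j / n) for i in range(n + 1) for j in range(n + 1)
        if i + j <= n]

# With this symmetric region and disagreement point (0, 0), the NBS
# splits the Pareto frontier evenly, as the SYM axiom requires.
print(nbs_two_player(grid, (0.0, 0.0)))  # (0.5, 0.5)
```

Shifting the disagreement point shifts the solution in the advantaged player's favor: with disagreement (0.5, 0), the same routine returns (0.75, 0.25), splitting only the surplus above the disagreement utilities.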
2.8.2 Distributed Spectrum Sharing

With the dramatic increase in network traffic, the spectrum becomes more crowded and spectrum utilization becomes a significant factor in system performance. Recent reports on spectrum usage reveal poor utilization, both spatially and temporally. Spectrum sharing protocols that are dynamic, flexible, and efficient are urgently needed.

Suris et al. [102] investigated the opportunistic spectrum access problem using Nash bargaining solutions. Specifically, the authors considered a scenario in which nodes in a wireless network seek to agree on a fair and efficient allocation of spectrum. The authors formulated the problem as a cooperative game and stated that in high interference environments, the utility space of the game is non-convex, so that certain optimal allocations cannot be achieved with pure strategies. To mitigate this situation, they showed that as the number of available channels increases, the utility space approaches convexity, thereby making optimal allocations achievable with pure strategies. By comparing and analyzing three bargaining solutions, the authors showed that the Nash bargaining solution achieves the best tradeoff between fairness and efficiency using a small number of channels. Moreover, they developed a distributed algorithm for spectrum sharing that is general enough to accommodate non-zero disagreement points, and showed that it achieves allocations reasonably close to the Nash bargaining solution.

Wang et al. [110] studied a cooperative spectrum sharing problem in wireless local area networks. The objective is to minimize interference among the access points' coverage areas while preserving throughput fairness. This problem is formulated as a cooperative game. In order to achieve Pareto efficiency and proportional fairness, the authors developed a solution based on the Nash bargaining solution.
Starting by analyzing the Nash bargaining solution for the 2-AP and 3-AP cases, the authors showed the Pareto optimality of the designed mechanism.

2.9 Auction

An auction depicts a specific scenario of games with imperfect information. Generally, we consider a symmetric, private-value, sealed-bid, standard auction game on a single object. The information gathered by each player is symmetric. Private-value means that each bidder knows his own valuation at the time of bidding. He also has some idea of the expectations of other bidders. A sealed bid is one that is submitted simultaneously by the bidders. A standard auction implies that the participant who submits the highest bid gets the object.

There are two major models of an auction: the first-price auction and the second-price auction. In a first-price auction, the winner pays the price that he submitted in his bid (i.e., the highest price among all bids). In a second-price auction, the winner pays the second highest price across all the submitted bids.

2.9.1 Background

Specifically, in an $n$-bidder first-price auction, each bidder submits a sealed bid $b_i$. With the given bids $b_1, \ldots, b_n$, the payoffs of the players are:
$$u_i = \begin{cases} x_i - b_i & \text{for } b_i > \max_{j \neq i} b_j \\ 0 & \text{for } b_i < \max_{j \neq i} b_j \end{cases} \qquad (2.37)$$
In a first-price auction, the players tend to bid a little less than their valuation in order to increase their payoff. In an $n$-bidder second-price auction, the payoff for each player is:
$$u_i = \begin{cases} x_i - \max_{j \neq i} b_j & \text{for } b_i > \max_{j \neq i} b_j \\ 0 & \text{for } b_i < \max_{j \neq i} b_j \end{cases} \qquad (2.38)$$
In a second-price sealed-bid auction, a weakly dominant strategy is for players to bid their valuation price. Therefore, we can say that a second-price sealed-bid auction enforces truthfulness. The second-price sealed-bid auction is also known as a VCG (Vickrey-Clarke-Groves) auction.

2.9.2 Routing

In wireless networks, recall that equipment generally belongs to individual users.
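The weak dominance of truthful bidding under the second-price payoff (2.38) can be checked numerically: fixing the other bids, no deviation from bidding one's valuation ever yields a strictly higher payoff. The valuation and competing bids below are illustrative assumptions.

```python
# Hedged sketch: truthfulness in a second-price (Vickrey) auction.
# We fix the competing bids and compare the payoff of the truthful bid
# against several over- and under-bids.

def second_price_payoff(my_bid, my_value, other_bids):
    """Per Eq. (2.38): winner pays the highest competing bid; losers get 0."""
    best_other = max(other_bids)
    return my_value - best_other if my_bid > best_other else 0.0

value = 10.0          # bidder's private valuation (assumed)
others = [7.0, 4.0]   # competing sealed bids (assumed)

truthful = second_price_payoff(value, value, others)
for deviation in (6.0, 8.0, 12.0, 20.0):
    # No deviation strictly beats bidding the true valuation.
    assert second_price_payoff(deviation, value, others) <= truthful
print(truthful)  # -> 3.0
```

Under-bidding only risks losing an object worth more than the price paid, and over-bidding only risks winning at a price above the valuation; neither can help, which is why truthfulness is weakly dominant.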
If the users are selfish, they will only relay packets for a certain price which can at least compensate for their cost of energy consumption. This raises new questions with regard to selecting the most cost-efficient route for data transfer while obtaining true price information from individual users (i.e., how to prevent cheating).

Anderegg and Eidenbenz propose a routing protocol for wireless ad hoc networks with selfish users that is based on the VCG mechanism, named Ad hoc VCG [8]. The basic idea of the Ad hoc VCG mechanism is to employ the second highest sealed price bid (the VCG mechanism) while paying the intermediate nodes an extra price in addition to their actual costs to ensure truthfulness. In this non-cooperative game, the source nodes and intermediate nodes are the players.

There are two phases in the Ad hoc VCG protocol: a route discovery phase followed by a data transmission phase. In the first phase, the nodes send out emission signals to determine the energy expended for transmission between every pair of nodes. This information is employed to create a weighted graph, where each edge is labeled with a cost value that depends on the energy consumed for transferring unit data over that link. The destination node collects information about all the edge weights and calculates the shortest path from the source to itself. In the second phase, packets are forwarded along the shortest path route and payments are made to the intermediate nodes along the selected route.

There are two alternatives for payments in the data transmission phase: (a) the source model and (b) the central-bank model. In the source model, the source pays both the actual cost and the extra price of an intermediate node. In the central-bank model, the source only pays the actual cost while a central bank (a special node that does not participate in routing) pays the extra price for all the intermediate nodes.
Recall that the extra price is given to ensure node truthfulness. It makes cheating unattractive by providing payments as high as a node could possibly attain by cheating. Specifically, in Ad hoc VCG, this extra price for an intermediate node $v_i$ is defined by the difference between the shortest paths calculated with and without node $v_i$. Mathematically, the VCG payment $M_i$ for intermediate node $v_i$ is defined as:
$$M_i := |SP_{-i}| - |SP| + c_i P^{min}_{i,i+1} \qquad (2.39)$$
where $|SP_{-i}|$ is the cost of the shortest path without node $v_i$, $|SP|$ is the cost of the shortest path with node $v_i$, $c_i$ is the cost for unit power consumption, and $P^{min}_{i,i+1}$ is the minimum transmission energy from node $v_i$ to node $v_{i+1}$, calculated using the emission signal information and the information in the acknowledgment packets from neighbors, which contain the received signal strength.

The authors also prove that with this mechanism, the resulting total overpayment is always bounded by a factor of $2^{\delta+1} \frac{c_{max}}{c_{min}}$, where $\delta$ is the signal loss exponent and $c_{max}$ ($c_{min}$) is the maximum (minimum) cost-of-energy declared by the nodes on the most cost-efficient path (i.e., the shortest path calculated by the destination).

Zhong et al. propose a protocol, called Corsac, that gives nodes incentives to route and forward packets in wireless ad hoc networks with selfish users [117]. Corsac is also based on the VCG mechanism. Again, as before, the players in this routing game are all the nodes in the network. The strategy for each player is a finite, discrete set of transmit power levels $l$. The utility function for player $i$ is defined as
$$U_i = -\alpha l + p_i$$
where $p_i$ is the credit paid for participating in the forwarding process and $\alpha$ is the energy consumption parameter.

The authors argue that there exists no dominant forwarding protocol for such an ad hoc routing game. In order to give further incentives to the players to participate in the forwarding process, the authors split the game into two stages as suggested by Feigenbaum and Shenker [33].
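The first two terms of Eq. (2.39) can be computed with two shortest-path runs, one on the full graph and one with node $v_i$ removed. The following is a minimal sketch; the toy topology and declared costs are invented for illustration and are not from the cited paper, and the last term of (2.39) is collapsed into a single declared cost for the node.

```python
# Hedged sketch of the Ad hoc VCG payment (Eq. 2.39): an intermediate
# node is paid the cheapest path cost computed without it, minus the
# cheapest path cost computed with it, plus its own declared cost.
import heapq

def shortest_path_cost(graph, src, dst, removed=frozenset()):
    """Dijkstra over nodes not in `removed`; inf if dst is unreachable."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, {}).items():
            if v in removed:
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

def vcg_payment(graph, src, dst, node, declared_cost):
    """M_i = |SP without v_i| - |SP with v_i| + declared cost of v_i."""
    sp_without = shortest_path_cost(graph, src, dst, removed={node})
    sp_with = shortest_path_cost(graph, src, dst)
    return sp_without - sp_with + declared_cost

# Toy topology: s -> a -> t costs 1 + 1 = 2; detour s -> b -> t costs 4.
g = {"s": {"a": 1.0, "b": 2.0}, "a": {"t": 1.0}, "b": {"t": 2.0}}
print(vcg_payment(g, "s", "t", "a", declared_cost=1.0))  # -> 3.0
```

Node `a` declares a cost of 1 but is paid 3: the extra 2 is exactly how much cheaper the network becomes by routing through `a`, which is what removes the incentive to inflate the declared cost.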
The game now has two parts: the routing decision part and the forwarding part. In the routing stage, the nodes take a routing decision, which specifies what each node is "supposed to do" in the forwarding stage. In the forwarding stage, a node's routing decision and the forwarding sub-action (the action of whether nodes should forward or not) jointly decide each node's utility. Now the strategy for each node is separated into two sub-actions: $S_i = (S_i^{(r)}, S_i^{(f)})$, where $S_i^{(r)}$ and $S_i^{(f)}$ are the player's strategies for routing and forwarding, respectively. The utility function is defined as $U_i' = U_i(R, a^{(f)})$. The goal of the players is to maximize the joint utility $U_i'$. Specifically, the optimal solution can be achieved if the routing decision is sub-stage optimal and following the routing decision in the forwarding stage is sub-stage optimal too.

In the routing stage, the routing decision is based on the VCG mechanism. Instead of performing the link cost estimation of the previous work [8], the authors combined VCG with cryptographic techniques in order to prevent cheating. In this routing protocol design, before sending out data packets, the source sends out test packets to give the destination the information needed to choose the cheapest energy-cost path. Each intermediate node forwards the test packet while recording its own MAC address, in an encrypted manner, into the routing information in the test packet. Upon collecting all the routing information and cost information, the destination checks the routing information using a symmetric key mechanism and picks the minimum energy cost path as the routing path. The authors show that this routing protocol is a routing-dominant protocol^14.

2.10 Challenges for Future Research

Game theory is a powerful tool for modeling interactions among self-interested users and predicting their behavior.
However, many challenges still exist in applying game theoretic tools in network contexts. Here, we list some of the main challenges:

1. Modeling a realistic problem as a game with rational selfish users and a reasonable strategy space such that the problem is tractable.

2. Defining suitable utility functions that are reasonable and possess "good" properties from the game theoretic point of view (for example, yielding a unique Nash equilibrium or making the game supermodular).

3. Finding distributed techniques to attain the converged equilibrium point and reducing the computational complexity of these approaches.

4. Nash equilibrium is defined as a state in which no player is willing to unilaterally change his strategy. However, it does not consider cases where two or more players may change their strategies at the same time. The problem of two or more players cooperating to gain higher benefits as compared to the other players is known as collusion. Although there has been some pioneering work related to the collusion problem [118], further investigation is needed to better understand the collusion problem, how to effectively avoid such problems, and what kinds of policies need to be in place to detect collusion and respond appropriately.

^14 Definition (from the paper [33]): A routing protocol is a routing-dominant protocol for the routing stage if following the protocol is a dominant subaction of each potential forwarding node in the routing stage.

2.11 Summary

In this chapter, we briefly introduced some core concepts employed in game theory and then provided a variety of recent results highlighting various applications of game theoretic tools in different wireless networks, especially ad-hoc wireless networks, cognitive radio networks and community-based mobile networks.
The growing interest of the research community in applying game theoretic tools to newer problems in wireless networks promises exciting results in the upcoming years. In the next several chapters, after an overview of wireless networks, we will dig into the details of three different scenarios to see how to use game theoretic approaches to analyze systems with selfish users and how to design proper mechanisms to enhance cooperation among these self-centered users in order to lead the system to more efficient equilibria.

Chapter 3 Background on Wireless Networks

In this thesis, we consider applications in three classes of wireless networks: wireless ad-hoc networks, cognitive radio networks and mobile networks. In this chapter, we provide the background and related work on each of the three types of networks.

3.1 Wireless Ad-hoc Networks

3.1.1 Introduction

Wireless ad-hoc networks are decentralized wireless networks without fixed infrastructure or centralized administration. Ad hoc networks rely on the cooperation of participating nodes (a.k.a. terminals) to route data between source and destination pairs that are outside each other's communication range. The determination of which nodes forward data is made dynamically based on the network connectivity. The earliest wireless ad hoc networks were the "packet radio" networks (PRNETs) from the 1970s, sponsored by DARPA after the ALOHAnet project.

Major advantages of wireless ad-hoc networks are rapid deployment, robustness, flexibility and support for mobility. These advantages make wireless ad-hoc networks valuable when temporary networks are needed, for example when natural disasters have destroyed existing infrastructure or on the battlefield. The IEEE 802.11 [1] and Bluetooth [39] standards are existing standards supporting wireless ad-hoc networks.

3.1.2 Routing and Cooperation in Wireless Ad-hoc Networks

Routing in wireless ad-hoc networks has attracted the interest of the research community for years.
Ad-hoc On-demand Distance Vector (AODV) routing [90], the Optimized Link State Routing Protocol (OLSR) [98], Dynamic Source Routing (DSR) [106] and Topology Dissemination Based on Reverse-Path Forwarding (TBRPF) [99] have been applied/standardized in Internet-Drafts or RFCs of the IETF MANET protocol stack. Throughput, delay, reliability and efficiency are the key metrics when designing routing protocols for wireless ad-hoc networks.

Recently, there has been increasing interest in applying the tools of game theory to the design of wireless ad hoc networks, especially to routing problems. This is because a central problem in this domain is providing incentives for selfish users to cooperate with each other in moving information through the network. In order for an ad-hoc network to work, the nodes need to share their resources with the others. For example, relaying packets from other nodes consumes energy. A node may save its energy by not cooperating. Instead of forwarding the traffic of the others, a node may only use the resources of others without contributing to the network. This kind of selfish behavior may ruin the network service if a large portion of the nodes take this non-cooperative action.

Two main avenues of research in this regard are (a) reputation and punishment-based techniques and (b) pricing and payment-based techniques.

Reputation-based techniques provide mechanisms to track the behavior of nodes and punish those that behave in a selfish manner. Along these lines, Marti et al. [72] present the watchdog and path-rater mechanisms that penalize nodes which do not relay packets correctly. The watchdog runs at each node and identifies misbehaving nodes. The path-rater then helps find routes that avoid the misbehaving nodes. This monitoring technique might increase throughput by avoiding selfish nodes on the route. However, this mechanism cannot prevent nodes' selfish behavior.
There is no punishment/incentive mechanism that enhances the cooperation among nodes. Virtual currency, pricing and reputation systems can be applied to encourage cooperation among nodes in wireless ad-hoc networks. We have already discussed some of these methods, for example CONFIDANT [15], in a previous section. These mechanism designs use game theoretic methods to stimulate cooperation among nodes or to punish/deter non-cooperative behavior. The OCEAN mechanism [12] seeks to obviate some of the complexity associated with second-hand reputation exchange-based schemes by relying on first-hand observations alone. Srinivasan et al. [100] provide a formal game-theoretic framework for reputation/punishment and show that the generous tit-for-tat mechanism can be used to obtain Nash equilibria that converge to Pareto optimal, rational solutions. Equilibrium conditions obtained using similar generous tit-for-tat strategies, taking into account the multihop network topology for static and dynamic scenarios, are investigated in [34, 48]. Altman et al. advocate a less aggressive punishment policy to improve performance [7]. Urpi et al. [107] and Nurmi [84] model the situation as dynamic Bayesian games, which allow effective use of prior history in enforcing cooperation.

The alternative to enforcing cooperation is providing nodes with an incentive to cooperate through payment and pricing mechanisms. Buttyan and Hubaux introduce the notion of nuglets, a form of virtual currency that provides an incentive for nodes to cooperate [16]. The use of pricing to obtain incentives for cooperation is also advocated in the works by Crowcroft et al. [24] and Ileri et al. [51]. In all these schemes, nodes which forward data for others receive credits that can be used to pay others to carry their own data. DaSilva and Srivastava [26] study the tradeoffs between cost and benefit in a game theoretic context to determine how they impact cooperation.
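The generous tit-for-tat idea discussed above can be illustrated with a small repeated packet-forwarding game. The prisoner's-dilemma payoff values and the 10% generosity probability below are illustrative assumptions, not parameters from the cited works.

```python
# Hedged sketch: a repeated packet-forwarding game where C = forward and
# D = drop, showing that mutual generous tit-for-tat (GTFT) sustains far
# more total utility than exploiting a GTFT player with constant defection.
import random

# (my_move, their_move) -> my payoff (assumed prisoner's-dilemma values).
PAYOFF = {("C", "C"): 2, ("C", "D"): -1, ("D", "C"): 3, ("D", "D"): 0}

def play(strategy_a, strategy_b, rounds=1000, seed=1):
    rng = random.Random(seed)
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_b, rng)  # each strategy sees the opponent's history
        b = strategy_b(hist_a, rng)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

def gtft(opp_history, rng, generosity=0.1):
    """Copy the opponent's last move, but occasionally forgive a defection."""
    if not opp_history or opp_history[-1] == "C":
        return "C"
    return "C" if rng.random() < generosity else "D"

def always_defect(opp_history, rng):
    return "D"

coop = play(gtft, gtft)            # both always cooperate: (2000, 2000)
exploit = play(always_defect, gtft)
assert sum(coop) > sum(exploit)    # mutual GTFT beats exploitation in total
```

The occasional forgiveness is what distinguishes generous tit-for-tat from plain tit-for-tat: it lets two such players recover from an accidental defection instead of locking into mutual punishment.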
Our work can be viewed as closely related to these approaches, as we too provide an incentive for the intermediate nodes to cooperate in the routing through the payment offered by the source node, and we evaluate the impact of pricing upon cooperation and the utility provided to the source. With payment-based schemes, however, there is an associated risk of cheating due to false claims by nodes trying to obtain payments they do not deserve. While we do not explicitly tackle this issue in our work, researchers have proposed solutions to handle this potential abuse. The micropayment scheme presented in [54] incorporates an audit mechanism to prevent false claims. SPRITE is another cheat-proof mechanism that uses a credit clearance server to provide payments to nodes for cooperation. Anderegg and Eidenbenz [8] propose the use of the Vickrey-Clarke-Groves mechanism to obtain truthful claims for payments.

In this thesis, we investigate a pricing-based reliable routing scheme for wireless ad-hoc networks. We illustrate how to enhance cooperation among individual nodes in the ad-hoc routing problem. Our investigations are motivated by the works of Kannan, Sarangi and Iyengar on reliable query routing [56-59]. They are the first to formulate a game where the node utilities show a tension between path reliability and link costs, and they have considered different interesting variants of this problem. A key difference in our work is that we explicitly allow the null strategy, in which nodes may choose not to forward packets to any next-hop neighbor. This allows us to provide a polynomial time algorithm for obtaining an efficient Nash equilibrium path. Another key difference in our work is that we consider the notion of destination and source payments and incorporate them into the utility functions.

3.2 Cognitive Radio Networks

In this section, we focus on presenting the architectures, components and features of cognitive radio networks.
We show that a game theoretic approach is a natural way to model, analyze and solve the spectrum sharing problem for cognitive radio networks. As in the previous section, we also survey the state of the art in applying game theoretic approaches to cognitive radio networks.

3.2.1 Introduction

Cognitive radio technology is one of the emerging long term developments taking place in radio receiver and radio communications technology. The idea for cognitive radio has come out of the need to utilize the radio spectrum more efficiently, and to be able to maintain the most efficient form of communication for the prevailing conditions. Cognitive radio enables distributed devices to scan the spectrum, detect which frequencies are clear, and then implement the best form of communication for the required conditions. Cognitive radios are fully programmable wireless devices that can sense their environment and dynamically adapt their transmission waveform, channel access method, spectrum use and networking protocols as needed for good network and application performance. In this way cognitive radio technology is able to select the frequency band, the type of modulation, and the power levels most suited to the requirements, prevailing conditions and the geographic regulatory requirements. Cognitive radios offer the promise of being just this disruptive technology innovation that will enable the future wireless world.

The term "cognitive radio" was extended from software defined radios by Joseph Mitola in his doctoral thesis in 2000 [50]. The definition of cognitive radio in the thesis is as follows:

"The point in which wireless personal digital assistants and the related networks are sufficiently computationally intelligent about radio resources and related computer-to-computer communications to detect user communications needs as a function of use context, and to provide radio resources and wireless services most appropriate to those needs."
Cognitive radio networks are networks consisting of intelligent cognitive radio devices. One important new concept that cognitive radio networks introduced is spectrum management. There are two different ways of spectrum management: dependent management and independent management. Dependent management methods include spectrum pooling, spectrum leasing, spectrum sharing and negotiated spectrum use. Independent management methods include opportunistic spectrum use and dynamic spectrum access.

The existing standards related to cognitive radio networks include:

- P1900.X: IEEE Standard Series on Next Generation Radio and Spectrum Management, Policy Defined Radio, Adaptive Radio and Software Defined Radio
- P1900.2: Recommended Practice for Interference and Coexistence Analysis
- P1900.3: Recommended Practice for Conformance Evaluation of Software Defined Radio (SDR) Software Modules
- IEEE 802.22: Standard for a cognitive radio-based PHY/MAC/air interface for use by license-exempt devices on a non-interfering basis in spectrum allocated to the TV Broadcast Service
- FCC: Rules to Permit Unlicensed National Information Infrastructure (U-NII) devices in the 5 GHz band

Generally speaking, there are two kinds of user groups in cognitive radio networks: primary users and secondary users (i.e., users equipped with cognitive radios). The primary users belong to an existing system which operates in one or many fixed frequency bands. Primary users can work either in licensed or unlicensed bands. Primary users operating in licensed bands have the highest priority to use those frequency bands (e.g., 2G/3G/4G cellular systems, digital TV broadcast). Other unlicensed users can neither interfere with these primary users in an intolerable way nor occupy the licensed band. Primary users operating in unlicensed bands (e.g., the ISM band) are called unlicensed-band primary users. The various primary systems should use the band compatibly.
Specifically, primary users operating in the same unlicensed band shall coexist with each other while limiting the interference they cause to each other. These primary users may have different levels of priority, which may depend on regulations.

A secondary user neither has a fixed operating frequency band nor the privilege to access such a band. The cognitive radios that secondary users are equipped with enable them to scan spectrum and use spectrum holes (a.k.a. white space). Secondary users utilize free spectrum when primary users are not present. There are two components in secondary user systems: the base station and the mobile station. The base station is a fixed component that represents the infrastructure side of the cognitive radio system and provides support (e.g., spectrum hole management, mobility management, security management) to the mobile stations. A mobile station is a portable device with cognitive radio capabilities. It conducts self-configuration to connect to different communication systems. It can sense spectrum holes and dynamically and independently make decisions to access the corresponding spectrum holes to communicate with other mobile stations or the base station. Cooperatively scanning and accessing the spectrum, in order to obtain more accurate spectrum status and to avoid collisions with primary users or among multiple secondary users, are major interests of the research community for cognitive radio networks.

3.2.2 Cooperative Spectrum Sharing in Cognitive Radio Networks

As discussed in the previous section, in cognitive radio networks, secondary users make intelligent decisions on spectrum selection in a distributed manner based on spectrum usage information (either cooperatively sensed or individually sensed). Secondary users who compete for spectrum resources may have no incentive to cooperate with each other and may instead behave selfishly.
Therefore, it is natural to study the intelligent behaviors and interactions of selfish network users from a game theoretic perspective. The importance of studying cognitive radio networks in a game theoretic framework lies in several aspects. First, the devices in cognitive radio networks are inherently self-interested and are able to make decisions in a distributed way. Second, game theory equips us with various optimality criteria for the spectrum sharing problem. Specifically, the optimization of spectrum usage is generally a multi-objective optimization problem, which is difficult to analyze and solve. Game theory provides us with well defined equilibrium criteria to measure game optimality under various game settings. Third, game theory enables us to derive efficient distributed algorithms for dynamic spectrum sharing using local information, which is a perfect fit for cognitive radio networks.

Halldorsson et al. [41] formulated the spectrum sharing problem as a non-cooperative game, in which the channel assignment for the access points is studied. The price of anarchy (PoA) in this scenario is the ratio between the number of APs assigned spectrum channels in the worst NE and the optimal number of covered APs when the assignment is arranged by a central authority. Theoretical bounds on the PoA are derived for scenarios with different numbers of spectrum buyers and sellers. One interesting finding is that the PoA is unbounded in general spectrum sharing games unless certain constraints are applied, such as constraints on the distribution of the users.

Pricing mechanisms are introduced in [78], [25], [111], [112], [47] to improve the efficiency of the Nash equilibrium. Specifically, linear pricing, which increases monotonically with the transmit power of a user, has been widely adopted and is applied in [78], [25] and [111]. A more sophisticated nonlinear pricing function is applied in [112].
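The price of anarchy can be computed directly for small games by enumerating pure strategy profiles, checking each for unilateral deviations, and comparing the worst equilibrium welfare to the optimum. The two-user, two-channel game below, with equal sharing on a collision, is an invented toy example and not the game analyzed in [41].

```python
# Hedged sketch: price of anarchy (PoA) in a toy channel-selection game.
# Two users pick channel A (rate 2) or B (rate 1); users on the same
# channel split its rate equally. Rates and the sharing rule are assumed.
from itertools import product

RATES = {"A": 2.0, "B": 1.0}

def payoffs(profile):
    """Each user gets its channel's rate, split among users on that channel."""
    return tuple(RATES[ch] / profile.count(ch) for ch in profile)

def is_nash(profile):
    """No user can strictly improve by unilaterally switching channels."""
    for i in range(len(profile)):
        for ch in RATES:
            deviated = list(profile)
            deviated[i] = ch
            if payoffs(tuple(deviated))[i] > payoffs(profile)[i]:
                return False
    return True

profiles = list(product(RATES, repeat=2))
welfare = {p: sum(payoffs(p)) for p in profiles}
nash = [p for p in profiles if is_nash(p)]
poa = max(welfare.values()) / min(welfare[p] for p in nash)
print(sorted(nash), poa)  # both-on-A is an equilibrium; PoA = 1.5
```

Both users crowding onto the good channel is an equilibrium here (welfare 2) even though splitting across channels yields welfare 3, so the PoA is 1.5; with more channels or more users, the same enumeration shows how the gap can grow.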
Linear pricing has the advantages of simple implementation and a reasonable physical interpretation, while a nonlinear pricing function can be tailored to the specific problem setting and requirements. In [78], a network-user hierarchy model consisting of a spectrum manager, a service provider and end users is proposed for dynamic spectrum leasing, addressing joint power control and spectrum allocation problems. When optimizing their payoffs, the end users trade off the achievable data rate and the spectrum cost through transmission power control. With a proper pricing term, which is defined as a linear function of the spectrum access cost and transmission power, efficient power control can be achieved, which alleviates interference between users. The revenue of the service provider is also maximized by this method. Daoud et al. [25] proposed that each user pay the service provider a certain amount for each unit of transmit power on the uplink channel in wide-band cognitive radio networks. This pricing mechanism can maximize revenue while ensuring incentive compatibility (IC) for users. Wang et al. in their paper [111] pointed out that most existing pricing techniques, e.g., a linear pricing function with a fixed pricing factor for all users, can usually improve the equilibrium by pushing it closer to the Pareto optimal frontier. However, they may not be Pareto optimal, and they are not suitable for distributed implementation, as they require certain global information. Therefore, a user-dependent linear pricing function which drives the NE close to the Pareto optimal frontier is proposed by the authors in this work. The optimal pricing factor for a link only depends on its neighborhood information, so the proposed spectrum sharing scheme can be implemented in a distributed manner.

Besides the research discussed above, many other studies have been done on spectrum sharing with cognitive radios [41, 115, 116].
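The effect of a linear pricing term on a power-control equilibrium can be sketched with best-response dynamics. The two-user model below (log-rate utility minus a linear price $\lambda p_i$, fixed channel gains and noise) is an illustrative assumption in the spirit of the linear-pricing schemes surveyed above, not the model of any one cited paper.

```python
# Hedged sketch: best-response power control with linear pricing.
# User i maximizes log(1 + g_ii * p_i / (noise + g_ji * p_j)) - lam * p_i;
# the first-order condition gives a closed-form best response.

def best_response(p_other, g_own, g_cross, noise, lam, p_max=10.0):
    """Clamp the unconstrained optimizer p* = 1/lam - interference/g_own."""
    interference = noise + g_cross * p_other
    p_star = 1.0 / lam - interference / g_own
    return min(max(p_star, 0.0), p_max)

def equilibrium(lam, g=(1.0, 1.0), cross=(0.2, 0.2), noise=0.1, iters=200):
    """Iterate Gauss-Seidel best responses to a fixed point (assumed gains)."""
    p = [1.0, 1.0]
    for _ in range(iters):
        p[0] = best_response(p[1], g[0], cross[0], noise, lam)
        p[1] = best_response(p[0], g[1], cross[1], noise, lam)
    return p

low_price = equilibrium(lam=0.2)   # cheap power: high equilibrium powers
high_price = equilibrium(lam=1.0)  # expensive power: users back off
assert all(h < l for h, l in zip(high_price, low_price))
print(low_price, high_price)
```

Raising the pricing factor shrinks every user's equilibrium transmit power, and with it the interference each imposes on the others; choosing the factor per user and from local information is what the user-dependent scheme of [111] adds on top of this basic effect.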
Here we specifically highlight a few closely related works and show how our study differs from them. Fu and van der Schaar [35] formulate the spectrum sharing problem in cognitive radio networks as a sequential auction. In each auction, individual users observe the result and adjust their strategies according to historical observations. Another closely related work is by Cao and Zheng [17]. The authors discuss resource sharing in the general case and propose a local bargaining process to approach the Nash bargaining solution for spectrum sharing among many users, proportional to their reward payoffs from using the acquired channels. Fairness is the main concern in that work. The users are willing to "feed poverty" to achieve fairness, which is quite different from the selfish user assumptions in our study. Suris et al. [101] propose a cooperative game theory model to analyze a scenario where nodes in a multi-hop wireless network need to agree on a fair allocation of spectrum, and investigate the fairness-efficiency tradeoff at the Nash bargaining solution.

Unlike most other works, in which none of the colliding channel users can use the channel when a collision happens, we investigate a channel "sharing" case in our work. When two users collide on the same channel resource, instead of getting nothing, they both obtain reduced utility. With this assumption, we propose a novel bargaining mechanism that can (a) fully utilize the system resources and (b) improve the utility obtained by both users with light overhead. Moreover, we have modeled, defined, and examined the truthfulness of user reports in the interaction games in our research; to the best of our knowledge, these issues have not been extensively discussed in other research on cognitive medium access.

3.3 Community-based Mobile Applications

In this section, we introduce the emergence of community-based mobile applications and characterize their features.
We illustrate the need for cooperation in these applications and survey the research related to this topic.

3.3.1 Introduction

In today's society the penetration of hand-held mobile devices is substantially higher than that of any other compute or communication device. There were roughly 3.3 billion devices in use in the world as of November 2007 [113]. To date, most mobile applications are based on a simple client-server model, where a mobile device requests a service from a single service provider. As mobile devices enter a new era with high speed connectivity and increasing compute capabilities, a new class of community-based social networking mobile applications is being showcased as the next revolution in mobile computing. In this class of applications, each user in a social group contributes knowledge about their surrounding environment, and the collective knowledge can then be exploited by the community members for personal or social benefit. One example of such an application has been developed to obtain real time information about traffic congestion on roads [83].

3.3.2 Enhancing Cooperation in Community-based Mobile Networks

Many emerging community-based mobile applications share the following three themes:

- System performance based on users' contributions. The service provided by the system is usually information integrated from the users. For example, the average traffic flow speed on a specific segment of highway can be estimated by sampling several cars' (local users') speeds.
- Users want to minimize their own contributions. Providing information carries the risk of privacy leakage. For example, in location-based mobile applications, providing a user's location information to the service might let a malicious entity locate the user's current position.
- Local interactions between users. In mobile networks, users make decisions independently, based on local information.
From these three features of community-based mobile applications, we can infer that there is always a tension between participation and privacy. The more the users contribute to the service, the more accurate the information the service can provide back to them. Meanwhile, the more information a user provides, the greater the risk of privacy leakage that user takes. How to incentivize users to participate in the service and share local information and knowledge with others while protecting their privacy is a key problem for this kind of application. Cooperation among users in these applications aims at finding proper trade-offs between participation and privacy while achieving efficient equilibria.

In [109], Voorneveld et al. applied game theory to non-cooperative games, also called the anonymous crowding problem, where a user's value in visiting a location is inversely proportional to the number of people who are already there, which is the opposite of the goal of safety applications. Patwardhan et al. [88, 89] introduced the notion of packs to create a framework for providing privacy and security in mobile ad-hoc networks. A pack is a dynamic set of individuals that collaborate to achieve a collective goal. Agah et al. [5] proposed a game theory approach to security in sensor networks where each node achieves a better payoff when it cooperates, and its payoff is decreased when misbehavior is detected. To our knowledge, however, there has been no prior attempt to systematically apply game theory to the problem of privacy preservation in community-based mobile applications.

3.3.3 Privacy in Mobile Applications

Privacy and security concerns already pervade most of the internet application domain. The dramatic rise of social networking sites, such as Facebook and Bebo, has already ignited the debate on legislation and privacy policies that are either ineffective or prone to human error.
A recent study by the Pew Internet Research Project [82] shows that one third of US teenagers are subjected to cyber-bullying due to privacy compromises. Bringing mobility to social networking magnifies these concerns immensely, as compromising location privacy may lead to serious security concerns. There is a large body of research in the area of privacy preservation in traditional internet-based social networking applications [2, 13, 21, 27, 63, 87, 97]. However, in traditional internet-based applications a user's precise location is not revealed either to the application provider or to other users, unless explicitly disclosed by the users themselves, such as the city name or zipcode where they are located. These social networking applications do not make use of precise location in providing location-relevant information. Hence location privacy is primarily relevant in mobile social networking applications, and our study in Chapter 6 focuses on this issue by using game theory to trade off privacy with the user utility function.

Several software solutions [18, 29, 37, 45, 46, 77] have also been proposed to protect privacy. Hong et al. [45, 46] proposed Confab, a toolkit for developing mobile applications that allows developers and end users to support a broad spectrum of privacy needs. Desmet et al. [29] implemented a software architecture to allow the secure execution of third-party applications on a Windows Mobile device. In [18] Capra et al. proposed a middleware architecture for providing privacy in mobile environments. Tang et al. [53] proposed a distributed method for storing personal information in mobile devices where personal information is split between the mobile device and a trusted central server. Several experimental systems [40, 49, 103] also built location-based services where the location of a mobile device is hidden from the service provider to protect privacy.
The primary focus of these researchers is first-generation mobile applications where a central authority can be trusted to provide accurate information; hence the goal of a mobile device is to protect its privacy from this central authority. Our proposal focuses on mobile social networking applications where the data is provided by multiple users with no central authority.

It is only recently that mobile social networking applications have come into the mainstream of mobile computing [9, 23, 30, 44, 86, 92]. Reddy et al. [92] developed the Campaignr framework for urban participatory sensing using mobile devices. Hoh et al. [44] explored temporal and spatial distortion of location data to protect privacy. Annavaram et al. [9] developed HangOut, a social networking application that uses a combination of anonymous data aggregation and encryption to show where people with similar interests are likely to congregate. In HangOut the mobile device decides on the granularity of its location update based on how many other users are already seen by the server in a given area. Furthermore, the device identification and location update packets are encrypted differently such that the data link provider can only identify the device but not the content, and the application service provider can only identify the content but not the device. Hoh et al. [43] proposed a social-network-based traffic sensing application using the concept of spatial sampling with virtual trip lines. Using a combination of spatial, temporal and speed distortions, they showed how real-time traffic can be estimated without loss of privacy. In these previous studies the focus is primarily on absolute user privacy rather than trading privacy for utility. Our case study specifically focuses on relative privacy, where multiple users can trade their privacy against the utility value derived from a community application.
Chapter 4

Pricing in Wireless Routing 1

As discussed in the previous chapter, cooperation in wireless ad-hoc networks among selfish users is critical and needs to be carefully designed. In this chapter, we examine incentives for cooperative reliable routing in wireless ad hoc networks where the users may be inherently selfish. In our game-theoretic formulation, each node on the selected route from a source to a destination receives a payoff that is proportional to the product of a source-defined price and the probability that a given packet can be delivered to the desired destination, minus the corresponding communication cost. Although prior work has suggested that this problem may be NP-hard, we give a polynomial-time construction for deriving a Nash equilibrium path in which no route participant has an incentive to cheat. Via simulations using realistic wireless topologies, we find that there is a critical price threshold beyond which an equilibrium path exists with high probability. Further, we show that there exists an optimal price setting beyond the price threshold at which the source can maximize its utility. We examine how these thresholds and price settings vary with node density for different node reliability models.

1 This chapter is based on joint work with Prof. B. Krishnamachari that was first published in [64].

4.1 Introduction

In this chapter, we consider a reliable routing game for wireless networks of selfish users that is based on the game-theoretic models proposed and investigated by Kannan, Sarangi, and Iyengar [56-59], with some modifications. Each node in the network is able to forward a given packet sent to it with some probability (we treat this probability in the abstract in this work, but in practice this unreliability could be caused by processor utilization, sleep cycling, buffer overflow, bandwidth limitation, etc.).
The delivery probability of a packet from the source to the destination is then the product of the intermediate node forwarding probabilities. Further, the transmission of a packet at each hop has a cost that depends upon the link quality. The nodes in the network are essentially selfish in that they need compensation if required to relay information for others.

We present and investigate a pull-based routing game that is destination-driven and source-mediated. This means the destination node will pay some amount of virtual credit to the source node for each packet of information that is delivered to it. To motivate nodes on the path, the source node in turn offers a payment to every node on its path for every packet it forwards. Given this payment from the source, each node on the path has an incentive to participate in this routing game if it receives more payment in expectation than it pays for each transmission. We consider two kinds of behavior for the source with respect to the destination: cooperative and selfish. A cooperative source will accept any positive payoff, and cooperates with the destination because it is also interested in seeing this information routed end-to-end with optimal reliability. A selfish source is interested only in maximizing its own expected profit and is even willing to select a path of potentially suboptimal reliability in order to obtain that maximum profit.

It is important for the route that is determined to be stable: every participating routing node should be faithful and keep forwarding packets along the chosen routing path. From a game theory perspective, such a stable configuration corresponds to a Nash equilibrium. The prior literature on this topic has suggested that finding the Nash equilibrium for related reliable routing problems can be NP-hard.
We show that for the problem we consider here, a polynomial-time solution exists to find efficient Nash equilibria; it is based on a suitable modification of Dijkstra's shortest path algorithm. We also present simulations to evaluate the reliability of the obtained route as a function of the destination and source-offered payments and the degree of source-destination cooperation for different network parameter settings.

4.2 Problem Definition

In this section, we define the destination-driven pricing routing problem formally. A wireless network is modeled as an undirected graph $G(V, E)$, where $V$ denotes all the nodes in the network and $E$ represents the link set. Each node $v_i \in V$ is associated with a reliability parameter $R_i$ ($0 \le R_i \le 1$). $R_i$ indicates the node availability and stability, i.e., the probability that it can forward a packet sent to it. Each edge $e = (v_i, v_j) \in E$ has a link cost parameter $C_{i,j}$, which represents the communication set-up cost between the two end nodes.

There are three kinds of nodes in the network: the destination node $dst$, the source node $src$, and the intermediate nodes $v_i$ (where $v_i \in V \setminus \{src, dst\}$) that are candidates for participating in a route between the source and the destination. We assume that both the destination node and the source node always have node reliability 1. The destination node offers to the source a payment amount $G$ for every packet that is successfully delivered to it. The source in turn offers a payment $p$ (for each successfully delivered packet) that will be given to any intermediate node if it participates in the routing path.

To formulate the core game, we now give the definition of the triplet $(I, (S_i)_{i \in I}, (u_i)_{i \in I})$, where $I$ is the set of players, $(S_i)_{i \in I}$ is the set of available actions with $S_i$ the non-empty set of actions for player $i$, and $(u_i)_{i \in I}$ is the set of payoff functions. In this game, we define $I = V \setminus \{dst\}$, which means that all nodes except the destination are players 2 .
In an $n$-node network (including the source and destination nodes), for each node $v_i \in V \setminus \{dst\}$, its strategy is an $n$-tuple $S_i = (s_{i,1}, s_{i,2}, \ldots, s_{i,n})$, where $s_{i,j} = 1$ if node $v_j$ is $v_i$'s next hop on the path, and $s_{i,j} = 0$ otherwise. Note that $v_i \in V \setminus \{dst\}$ and $v_j \in V$. Each strategy tuple has at most one 1. That is,

$\forall v_i: \quad \sum_{j=1}^{n} s_{i,j} \le 1$

If node $v_i$'s strategy tuple contains all zeros, node $v_i$ does not participate in packet forwarding in the game.

A system strategy profile $(S_i)_{i \in I}$ is a profile which contains all players' strategies in the network. Given this strategy profile, there is either no path from the source to the destination, or else there is exactly one path $P$ (since each node can point to only one next hop). Without loss of generality, let us denote $P = (src, v_1, v_2, \ldots, v_h, dst)$, where $h$ denotes the number of hops between the source node and the destination node (not inclusive).

2 While the destination does play a role in offering the payment $G$, this is a constant that only affects the utility for the source.

The utility function for each player is defined as follows. For the source node:

$u_{src} = \begin{cases} 0 & \text{if no path exists} \\ (G - hp) \prod_{v_i \in P} R_i - C_{src,v_1} & \text{otherwise} \end{cases}$ (4.1)

The utility of the source node equals the difference between the expected income of the source and the link set-up cost from the source node to the first next-hop routing node. The expected income of the source is the destination payment minus the source's payment to all the intermediate nodes, times the probability that the packet is successfully delivered.

For each other node $v_i$:

$u_{v_i} = \begin{cases} 0 & \text{if no path exists or if } v_i \notin P \\ p \prod_{k=i}^{h} R_k - C_{v_i,v_{i+1}} & \text{otherwise} \end{cases}$ (4.2)

(where we denote $v_i$ as the $i$-th node on the path if it participates in it). The utility of each intermediate routing node equals the expected payment it obtains from the source node times the onward route reliability, minus the transmission cost per packet to its next hop neighbor.
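As an illustrative sketch (not the thesis's own code), the two payoff functions (4.1) and (4.2) can be computed for a candidate path as follows; the function names and the convention that the list `R` holds the reliabilities of the intermediate nodes $v_1, \ldots, v_h$ are our own.

```python
# Sketch of payoff functions (4.1)-(4.2); names and conventions are
# illustrative, not from the thesis. R holds the reliabilities of the
# h intermediate nodes v_1..v_h on the path.
from math import prod

def source_utility(G, p, R, C_src_v1):
    """u_src = (G - h*p) * prod(R_k) - C_{src,v1}; 0 if no path exists."""
    if not R:
        return 0.0
    h = len(R)
    return (G - h * p) * prod(R) - C_src_v1

def intermediate_utility(i, p, R, C_next):
    """u_{v_i} = p * prod_{k=i..h} R_k - C_{v_i,v_{i+1}} for v_i on the path."""
    return p * prod(R[i - 1:]) - C_next  # R[i-1:] covers R_i..R_h

# Example: 2-hop path with R = [0.9, 0.8], G = 10, p = 2
u_s = source_utility(G=10, p=2, R=[0.9, 0.8], C_src_v1=1.0)
u_1 = intermediate_utility(1, p=2, R=[0.9, 0.8], C_next=1.0)
```

In this toy example both the source and the first intermediate node obtain positive utility, so both would be willing to participate.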
If the node does not participate in the routing, it gains (and loses) nothing. We now develop an algorithm to obtain an efficient Nash equilibrium for this game.

4.3 The Algorithm

Our goal is to develop an algorithm for computing an efficient Nash equilibrium path that provides maximum reliability while ensuring that all nodes obtain non-negative payoffs 3 . The link between non-negative payoffs and the equilibrium path is given by the following simple lemma.

Lemma 1 If a path exists and it is a Nash equilibrium, every node on the path must have non-negative payoff.

The proof for this lemma is straightforward: according to the payoff function, a node would rather choose not to participate in routing (with payoff 0) if joining the routing makes its payoff negative. Note, however, that not every path with non-negative payoffs is a Nash equilibrium. We term such a path a PPP (positive payoff path). We correspondingly term a path with all routing nodes having non-positive payoff an NPP (negative payoff path).

To find a positive payoff path, we first simplify the problem to a more concise representation. According to the definition, we need that for each intermediate routing node $v_i$, its utility $u_{v_i} \ge 0$. This implies

$\prod_{k=i}^{n} R_k \ge \frac{C_{i,i+1}}{p}$

To convert the product to a summation, we take the logarithm of both sides and get

$\sum_{k=i}^{n} \log R_k \ge \log \frac{C_{i,i+1}}{p}$

3 We should note that in our model even any shortest-hop path that ensures non-negative payoffs to all nodes is in Nash equilibrium. The algorithm we present could potentially be modified to provide such a shortest-hop Nash equilibrium path; however, our interest is in finding an efficient equilibrium path that also provides maximum reliability. This allows us to characterize the performance of the most efficient equilibrium path that can be obtained under different prices.

Notice that $0 \le R_k \le 1$; we take the inverse of each $R_k$ to make each term in the summation positive.
The original formula now transforms to

$\sum_{k=i}^{n} \log \frac{1}{R_k} \le \log \frac{p}{C_{i,i+1}}$

for each $v_i$. Replacing $\log \frac{1}{R_k}$ by $r_k$ ($r_k \ge 0$) and $\log \frac{p}{C_{i,i+1}}$ by $c_{i,i+1}$, we convert the problem of finding a PPP in the original graph to the equivalent problem of finding an NPP in a transformed network graph, where each node has a positive value $r_i$ and each edge is assigned a value $c_{i,j}$, according to the following transformed utility functions $\tilde{u}$. For an intermediate node,

$\tilde{u}_{v_i} = \sum_{k=i}^{n} r_k - c_{i,i+1}$

For the source node, the non-negative payoff condition becomes

$\sum_{k=1}^{n} \log \frac{1}{R_k} \le \log \frac{G - hp}{C_{src,v_1}}$

Replacing $\log \frac{1}{R_k}$ by $r_k$ as before, and also replacing $\log \frac{G - hp}{C_{src,v_1}}$ by $c_{src,nbr}$, we have:

$\tilde{u}_{src} = \sum_{k=1}^{n} r_k - c_{src,nbr}$

With these log-transformed formulae, in the following, we will first find an NPP of smallest $\sum r_k$ from each neighbor of the source node. Then, if the source node is selfish, it picks the feasible path provided by a neighbor that gives it the smallest $\sum r_k - c_{src,nbr}$; if it is cooperative with the destination, it picks the path with the smallest $\sum r_k$. In either case, the source only participates in routing if its own original expected utility is positive.

A polynomial-time algorithm modified from Dijkstra's algorithm can be applied to find the NPP with the smallest $\sum r_k$ from each neighbor of the source to the destination. The pseudocode for the algorithm is given below.

Finding an NPP with minimum $\sum r_k$ in the transformed network graph:

1. Initialize: feasible set $FS = \{dst\}$; all other nodes labeled as $(-, \infty, -)$; $l(dst) = 0$.

2. While $src \notin FS$ and $N(FS) \ne \emptyset$:

For each $v_i \in N(FS)$:

- While there exists $v_k \in FS$ such that $(v_i, v_k) \in E$:

$l(v_i) = \min(l(v_i), \min_{v_j \in FS, (v_i, v_j) \in E} (l(v_j) + r_i))$; let $v_j$ be the corresponding next-hop node.

If $\tilde{u}_{v_i} = l(v_i) - c_{i,j} > 0$: delete edge $(v_i, v_j)$.
Else: update the label triplet to $(v_j, l(v_i), l(v_i) - c_{i,j})$; add $v_i$ to $FS$; break.

- End while

End for

End while

Note that the original source does not participate in this algorithm, so we denote the neighbor in question as $src$ in the algorithm. In brief, the algorithm starts labeling nodes from the destination, applying Dijkstra's algorithm with an added negative-utility checking step. In the algorithm, each node has a label which is a triplet $(from, l(v_i), \tilde{u}_{v_i})$. The first item in the triplet indicates from which node the label comes, i.e., the next hop of the current node on a path starting from the source. The second item records the summation of $r_k$, which is analogous to the length in Dijkstra's algorithm. The third item tracks the current $\tilde{u}$ value. This algorithm is applied in turn for each neighbor of the source before the source picks one of these neighbors to form the path, as described above.

Since the $r$ values are associated with nodes instead of links, we need a definition of the neighborhood set of a vertex set in a given graph $G(V, E)$.

Definition 1 Given a graph $G(V, E)$ and $I \subseteq V$, $S \subseteq V$ is the neighborhood set of $I$ (denoted $N(I)$) if and only if $\forall v_j \in S$, $v_j \notin I$ and $\exists v_i \in I$ such that $(v_i, v_j) \in E$.

Lemma 2 Given graph $G(V, E)$, if $(v_i, v_j) \in E$ is deleted in some step of the algorithm, then $(v_i, v_j)$ does not lie in any NPP from $src$ to $dst$ in the original graph $G(V, E)$.

Proof: (by contradiction) Assume that a link $(v_i, v_j)$ between nodes $v_i \notin FS$ and $v_j \in FS$, deleted in some iteration, lies in an NPP $P = (v_1, \ldots, v_i, v_j, \ldots, v_n)$. First consider the case where edge $(v_i, v_j)$ is the first link deleted during the algorithm. Since $P$ is an NPP, we have $\sum_{k=i}^{n} r_k \le c_{i,j}$, i.e., $\sum_{k=j}^{n} r_k + r_i \le c_{i,j}$. Recall that in the algorithm, we attempt to label $v_i$ with $\min(l(v_i), \min_{v_j \in FS, (v_i, v_j) \in E} (l(v_j) + r_i))$. For node $v_j$, $l(v_j)$ is the minimum summation of $r$ values from node $v_j$ onwards, since $v_j$ is in the feasible set.
Hence, we have

$\sum_{k=j}^{n} r_k + r_i \ge l(v_j) + r_i \ge \min(l(v_i), l(v_j) + r_i)$

It follows that $\min(l(v_i), l(v_j) + r_i) \le c_{i,j}$. Then, according to the algorithm, edge $(v_i, v_j)$ should not be deleted. This contradicts the assumption. Thus edge $(v_i, v_j)$ does not lie in any NPP from $src$ to $dst$ in $G(V, E)$. This argument can now be applied inductively to the second edge that is deleted, the third, and so on, because deleting an edge that does not lie in any NPP does not affect the solution to the NPP problem in any way. $\Box$

Theorem 1 The algorithm to find an NPP with minimum $\sum r_k$ in the transformed network graph is correct.

Proof: (Soundness) The path found by the algorithm in the transformed graph is guaranteed to be an NPP, since a check step makes sure each node in the feasible set has a non-positive payoff. The path is guaranteed to have minimum $\sum r_k$ since the algorithm always labels the smallest feasible $\sum r_k$ first.

(Completeness) We need to prove that if there exists an NPP in the graph, the algorithm will return one. According to Lemma 2, since an edge deleted by the algorithm does not lie in any NPP, the algorithm does not destroy any NPP in the graph. The algorithm terminates only under two conditions: either it finds an NPP, or $N(FS) = \emptyset$ and $src \notin FS$. The latter case indicates that the source and destination are separated into two isolated parts of the graph, which implies that there is no NPP in the original graph. $\Box$

The computational complexity of the algorithm is polynomial. Dijkstra's algorithm runs in time $O(n^2)$. For each edge deletion in our algorithm, we need to retry the labeling, which costs at most an extra $O(n)$ time per node. So the running time of our algorithm is bounded by $O(n^3)$.

Notice that when mapping the algorithm back to the PPP problem, we always choose the most reliable path among all the feasible paths.
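A minimal sketch of the labeling procedure above in Python (our own illustrative rendering of the pseudocode, not the thesis code). It works on the transformed graph: node weights $r_i$, edge weights $c_{i,j}$, and a Dijkstra-style search from $dst$ in which an edge is usable (not "deleted") only if the candidate label satisfies the non-positive transformed utility $l(v_i) - c_{i,j} \le 0$. It handles the intermediate-node constraints only; the source-side check against $c_{src,nbr}$ is applied separately per neighbor.

```python
# Illustrative sketch of the modified-Dijkstra NPP search; names are ours.
import heapq
from math import inf

def find_npp(nodes_r, edges_c, src, dst):
    """nodes_r: {v: r_v} for all nodes except dst; edges_c: {(u, v): c_uv}
    (undirected, stored once).  Returns (l(src), next hop of src) for a
    minimum-sum-r NPP from src to dst, or None if no NPP exists."""
    def cost(u, v):
        return edges_c.get((u, v), edges_c.get((v, u), inf))
    label = {dst: 0.0}   # l(v): minimum sum of r from v onward to dst
    nxt = {dst: None}    # next hop toward dst
    heap = [(0.0, dst)]
    done = set()
    while heap:
        l_v, v = heapq.heappop(heap)
        if v in done:
            continue
        done.add(v)
        if v == src:
            return l_v, nxt[v]
        for u in nodes_r:
            c = cost(u, v)
            if u in done or c == inf:
                continue
            cand = l_v + nodes_r[u]
            # non-positive-utility check: edge usable only if cand - c <= 0
            if cand - c <= 0 and cand < label.get(u, inf):
                label[u], nxt[u] = cand, v
                heapq.heappush(heap, (cand, u))
    return None  # src unreachable via NPP edges

# Example: r values for {s, a, b}; costs already log-transformed.
result = find_npp(
    {'s': 0.0, 'a': 0.1, 'b': 0.5},
    {('a', 'd'): 0.3, ('b', 'd'): 0.6, ('s', 'a'): 0.2, ('s', 'b'): 0.7},
    's', 'd')
```

In the example, the path s-a-d is returned because it has the smaller label sum and every prefix passes the utility check.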
In the algorithm we keep adding the nodes with minimum summation of $r$ that still satisfy the non-positive utility constraints. This observation can be used to prove that the path returned by this algorithm is a Nash equilibrium path (if all nodes not on the path choose the null strategy of not picking any next-hop neighbor).

Theorem 2 The path found by the algorithm is a Nash equilibrium path in the PPP finding problem.

Proof: (by contradiction) Assume that the algorithm returns a path $P = (v_1, v_2, \ldots, v_i, v_{i+1}, \ldots, v_j, \ldots, v_n)$ which is not a Nash equilibrium. Without loss of generality, suppose only one node $v_i$ wants to switch its next hop from $v_{i+1}$ to $v_j$, where $j > i + 1$. Path $\hat{P} = (v_0, v_1, \ldots, v_i, v_j, \ldots, v_n)$ is also a PPP, since the payoffs of the nodes before $v_j$ increase with the increase of path reliability (remember $0 \le R_k \le 1$) and the payoffs after $v_j$ (including $v_j$) remain unchanged. Thus path $\hat{P}$ is one of the feasible paths. Since this path skips some intermediate nodes, the path reliability of $\hat{P}$ is larger than that of $P$. This would imply that the algorithm should return path $\hat{P}$ instead of $P$, which contradicts the assumption. By construction, a node has no incentive to switch its next hop to a node that is not on the returned path, since those nodes do not pick any next-hop neighbor. $\Box$

As mentioned before, the algorithm is run to obtain a positive payoff path to the destination from each neighbor of the source node. If the source node is selfish, among all the feasible paths reported by its set of neighbors, it will pick the one that gives it maximum profit according to the source's utility function. If the source node is cooperative, it will pick the path which gives the highest path reliability.

4.4 Simulation Results

In this section, we present our simulation results. We use two different simulation models that essentially yield different link cost distributions: ARQ-based and distance-based.
In the ARQ model, we generate the network topology using a realistic link layer model [119]. The link layer model outputs a directed graph where each edge has its own PRR (packet reception rate). The link cost of edge $(v_i, v_j)$ in our model is calculated as the average of the expected number of transmissions in each direction (assuming ARQ). However, we find that most of the link costs are around 1 in this link layer model.

In the distance-based model, each node has the same transmission range, but the link cost is made proportional to the square of the distance between two nodes if they are within each other's transmission range. If the two nodes are out of each other's transmission range, the link cost between them is set to infinity. The mathematical representation of the distance-based model is as follows:

$C_{i,j} = \begin{cases} d(i,j)^2 & \text{if } d(i,j) \le \rho \\ \infty & \text{otherwise} \end{cases}$

where $d(i,j)$ is the distance between node $v_i$ and node $v_j$, and $\rho$ is the transmission range of the sensor nodes. In the simulation settings, the proportionality constant is set to 0.1 (we also ran extensive simulations for different values; similar curve trends are observed). The distance-based model shows greater variance in the link costs and thus allows for more of a tradeoff between link cost and node reliability than the ARQ model, where the link costs are more uniform.
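The two link-cost models can be sketched as follows. This is our own illustrative rendering: the ARQ cost is the ETX-style average of $1/\text{PRR}$ in each direction, and in the distance-based cost we read the 0.1 in the text as a proportionality constant (an interpretation on our part, since the extracted formula does not show the symbol).

```python
# Illustrative link-cost models; names and the alpha constant are ours.
from math import inf

def arq_link_cost(prr_ij, prr_ji):
    """ARQ model: average of the expected number of transmissions
    (1/PRR) over the two directions of the link."""
    return 0.5 * (1.0 / prr_ij + 1.0 / prr_ji)

def distance_link_cost(d_ij, rho, alpha=0.1):
    """Distance-based model: alpha * d^2 within range rho, else infinite."""
    return alpha * d_ij ** 2 if d_ij <= rho else inf

c1 = arq_link_cost(1.0, 1.0)       # perfect links cost exactly 1
c2 = distance_link_cost(3.0, 5.0)  # in range: 0.1 * 9
c3 = distance_link_cost(6.0, 5.0)  # out of range: infinite
```

With perfect PRR in both directions the ARQ cost is exactly 1, matching the observation that most link costs in the realistic model cluster around 1.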
Figure 4.1: Path reliability versus source payment to each routing node when changing the number of nodes in a fixed area. (a) ARQ model; (b) distance-based model.

Figure 4.2: Source gain versus source payment to each routing node for different destination payments, with fixed number of nodes and area size. (a) ARQ model; (b) distance-based model.

Figure 4.3: Effect of the source node's behavior on the path reliability. (a) selfish source; (b) cooperative source.

Figure 4.4: Cumulative distribution function for the existence of a Nash equilibrium path when increasing the source payment to each routing node. (a) ARQ model; (b) distance-based model.

We use a fixed 12 x 12 square meter area as our simulation area. In the distance-based model, the node transmission range is set to 5 meters. The node reliability is uniformly chosen at random in the interval [0.1, 1].

Figure 4.1 illustrates the path reliability versus the source payment to intermediate nodes when fixing $G$ at 300 (a sufficiently large amount) for both models. From this figure, we can see that as the density of the deployment increases, the maximum reachable path reliability increases; this result is expected. When the source pays more to intermediate nodes, the expected path reliability increases too. We notice in both cases that when $p$ exceeds some threshold the path reliability remains almost constant. However, the curves for the distance-based model increase a bit more gradually while those for the ARQ model are sharp, which reflects the greater variance of link costs in the distance-based model.

Figure 4.2 plots the source gain versus the source payment to the intermediate nodes with a fixed number of nodes (30) and area size. Recall from the source utility function in Section 4.2 that source utilities in most cases are dominated by the term $(G - hp) \prod_{v_i \in P} R_i$. Increasing $p$ can decrease $h$ and increase $\prod_{v_i \in P} R_i$.
Figure 4.2 shows that there exists a best strategy point for the source to maximize its payoff, and in a fixed network topology it occurs at the same routing price no matter how much the destination pays. The other observation from Figure 4.2 is that the source's share of the gain increases as the destination payment increases. This indicates that even if the destination increases its payment to the source to request a path of a certain reliability, most of the money goes to the source instead of the routing nodes. It implies that even if the destination increases its payment, it will not get a more reliable path.

If we examine Figure 4.1 and Figure 4.2 together, we find that at the maximum gain of the source node, the path reliability is close to the maximum path reliability which the network
can reach. This gives us an important insight: selfish behavior of the source node in such a system will not hurt system performance much.

Figure 4.3 shows a side-by-side comparison of the source node behaving cooperatively and selfishly, for the ARQ model. These figures demonstrate that there is an improvement in path reliability when the source acts cooperatively, but the improvement is not significant. We also see that the maximum path reliability does not improve significantly for any fixed network parameters once the destination payment exceeds some threshold (around 50 here) that is necessary to obtain a path. On the other hand, the routing path reliability increases significantly (from 0.39 to 0.74) when changing network parameters (in this particular simulation, we increase the number of nodes in the fixed area).

Figure 4.4 shows the probability that a positive payoff Nash equilibrium path exists as a function of the price offered by the source under both models. For each curve of the realistic link layer model, corresponding to a fixed number of nodes (fixed density), we see that the curve increases to a point where it is close to 1. This shows the existence of critical threshold prices (independent of the exact configuration) that ensure the existence of a Nash equilibrium path with high probability. We also see that this price threshold decreases with density, a trend that is more concretely visualized in the distance-based model, which is affected more strongly by node distance. This trend arises because with growing density there are more choices of paths, and a greater number of high quality links which incur low transmission cost.

Figure 4.5: The illustration of the approximation method.

Figure 4.6: Histogram of the link costs in the realistic link layer model.

Figure 4.7: The approximation method results, compared with Figure 4.1.

4.5 Theoretical Approximation

Given a utility function, if we know the curve of path reliability versus payment to the intermediate nodes, we can calculate the source gain via the source utility function by estimating the number of hops between source and destination. According to probability theory, if $x_1, x_2, \ldots, x_n$ are uniformly distributed in $[0, 1]$, then the expected maximum among them is $\frac{n}{n+1}$. Using this result, we give a geometric approximation method for the curves shown in Figure 4.1.

As illustrated in Figure 4.5, we divide the path from source to destination into three parts. The first part is from the source node to the first intermediate node, and its forwarding region is a quarter circle. The second part, from one intermediate node to the next, is modeled as a half circle. The third part is from the very last hop to the destination and is modeled as a triangular area (actually, it is a half circle intersected with a triangle, which results in a triangle).

Let $d$ denote the side length of the square area. Let $r$ denote the transmission range of each node according to the power setting of the sensors. Let $h$ denote the estimated number of hops between the source and the destination. Let $N$ denote the number of nodes deployed in the area.
The approximation model assumes that each node always chooses its next hop by selecting the node of maximal reliability within its transmission range, and that the direction of packet forwarding is approximately along the straight line between source and destination. These assumptions yield an upper bound on the path reliability when the source payment $p$ is large enough and $h$ is accurately estimated. The results below show that this upper bound is tight.

Notice that the nodes are uniformly deployed in an area of size $d^2$ and the node reliability is within the interval $[0, 1]$. The expected reliability of the first-hop node can be calculated as:

$S_1 = 1 - \frac{1}{\pi r^2 N / (4 d^2) + 1}$

The expected reliability of each intermediate node from the second hop through the hop before the last can be calculated as:

$S_2 = 1 - \frac{1}{\pi r^2 N / (2 d^2) + 1}$

The expected reliability of the last hop can be calculated as:

$S_3 = 1 - \frac{1}{r^2 N / d^2 + 1}$

The expected reliability of the whole path can be modeled as:

$R_{max}(N) = S_1 \cdot S_2^{h-2} \cdot S_3$

where $R_{max}(N)$ denotes the routing path reliability for a network with $N$ nodes when the path price $p$ is large enough.

In order to show how close the approximation method can be, we need to estimate the number of hops in the routing. In our simulations, according to the parameter setting, we know that the transmission range $r$ and the side length $d$ approximately satisfy:

$3r = \sqrt{2} d$

Table 4.1 shows the results of the approximation method described above when the price $p$ the source pays to intermediate nodes is large enough. From the table, we can see that the approximation comes close to the simulation results.

Approximation Result | Simulation Result
0.4676 | 0.3900
0.5553 | 0.5178
0.6188 | 0.6102
0.6667 | 0.6587
0.7040 | 0.7006
0.7339 | 0.7389

Table 4.1: The approximation method results compared with the simulation results when $p$ is large.

In the previous part we discussed how to estimate the path reliability when the routing price is large enough. Now we consider the behavior of the path reliability when $p$ is small.
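The hop-by-hop approximation above can be sketched as follows. This is our own rendering of the reconstructed formulas: the $\pi$ factors in $S_1$ and $S_2$ follow from the quarter/half-circle geometry described in the text (the extracted equations do not show them explicitly), and the expected-maximum rule $m/(m+1)$ is applied with $m$ equal to the expected node count in each forwarding region.

```python
# Sketch of the geometric reliability approximation; equations are our
# reconstruction of the garbled originals, with assumed pi factors.
from math import pi

def approx_path_reliability(N, h, r, d):
    """Upper-bound approximation R_max(N) = S1 * S2^(h-2) * S3."""
    s1 = 1 - 1 / (pi * r**2 * N / (4 * d**2) + 1)  # first hop: quarter circle
    s2 = 1 - 1 / (pi * r**2 * N / (2 * d**2) + 1)  # middle hops: half circle
    s3 = 1 - 1 / (r**2 * N / d**2 + 1)             # last hop: triangle
    return s1 * s2 ** (h - 2) * s3

# Example with the simulation geometry (d = 12, r = 5, so 3r ~ sqrt(2)*d)
r30 = approx_path_reliability(N=30, h=3, r=5, d=12)
r60 = approx_path_reliability(N=60, h=3, r=5, d=12)
```

As expected, the approximated reliability increases with the node count $N$, matching the density trend in Figure 4.1.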
Let R(N, p) denote the path reliability as a function of N and p in a given area. We have

R(N, p) ≈ E[R_max(N)] · Φ(p)

where E[R_max(N)] denotes the expected path reliability when p is large enough, and Φ(p) is the probability that a positive-payoff path exists in the network at price p. Assuming Φ(p) is piecewise linear, we define it as:

Φ(p) = 0 for p ≤ 1
Φ(p) = 1 for p ≥ p_max
Φ(p) = (p − 1)/(p_max − 1) otherwise

where p_max is a price above which the probability that a positive-payoff path exists is 1. p_max is related to the network density, the sensor operating power level, etc. Φ(p) = 0 for p ≤ 1 because below this price it is impossible to compensate a routing node for its transmission cost (since all link costs are at least 1).

Recall from the previous section that an intermediate node's payoff is the expected payment on the forwarding path minus the cost of setting up the link with its next-hop neighbor. In our model, this cost is the average of the inverses of the PRR (packet reception rate) of the directed links between the two nodes. We analyzed the histogram of the link costs (Figure 4.6) and found that most link costs are 1. This is a feature of the realistic link layer model we utilized [119]. Hence the bottleneck node among all routing nodes will most likely be the first-hop node from the source. For the first-hop node, the expected onward path reliability is exactly the reliability of the whole routing path. We can therefore approximate p_max as

p_max ≈ C / E[R_max(N)] ≈ 1 / E[R_max(N)]

Figure 4.7 shows the curve approximated using the algorithm described in this section, along with simulation curves that are the same as those in Figure 4.1. The approximated curve captures the major characteristics of the simulated curve quite well, particularly at higher node densities. The purpose of approximating Figure 4.1 is to derive more information from that figure analytically.
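The piecewise approximation above can be sketched as follows (Python; the function names are ours, and C defaults to the unit link cost observed in the histogram):

```python
def phi(p, p_max):
    """Piecewise-linear probability that a positive-payoff path exists at price p."""
    if p <= 1:
        return 0.0
    if p >= p_max:
        return 1.0
    return (p - 1) / (p_max - 1)

def approx_reliability(p, e_rmax, C=1.0):
    """R(N, p) ~= E[R_max(N)] * phi(p), with p_max ~= C / E[R_max(N)]."""
    p_max = C / e_rmax
    return e_rmax * phi(p, p_max)
```

Since path reliability E[R_max(N)] < 1 and C ≥ 1, the implied p_max is always at least 1, so the three pieces of Φ(p) cover all prices.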
Recall that source gain is calculated as

(G − h·p)·R_path − C(src, v_1)

In the approximate graph, at each point we have p and the corresponding R_path. By the previous assumption, we already know the number of hops h approximately, and we can assume C(src, v_1) to be 1 with high probability. Therefore, for each destination payment G, we can derive an approximate expression for the relation between the source gain and the price the source pays to each routing node (as illustrated in Figure 4.2).

4.6 Summary

In this chapter, we described a destination-driven, source-mediated pricing routing scenario involving three different kinds of nodes: the destination, the source, and the intermediate nodes. We presented a polynomial-time algorithm that yields a Nash equilibrium path and used it to evaluate the performance of the game with respect to prices and source behaviors for different network settings. The simulation results demonstrate several key findings:

- As network density increases, routing paths become cheaper and more reliable, and the source payoff increases.
- Even if the source node acts selfishly, it does not necessarily downgrade the reliability of the routing path significantly.
- Given a network, increasing the destination payment does not improve path reliability beyond some threshold; the source will eat up the margins beyond this point. Thus this threshold is the desired point of operation for the destination.

Chapter 5

Spectrum Sharing with Multiple Secondary Users in Cognitive Radio Networks

In this chapter, we present our second case study, analyzing and solving the spectrum sharing problem among multiple secondary users in cognitive radio networks. We have already discussed, in Chapter 2, the various game theoretic approaches that have been applied to the spectrum sharing problem. In this chapter, we look closely into a specific scenario with two competing secondary users opportunistically sharing two channels.
We first formulate this scenario as a non-cooperative game and analyze the efficiency of its Nash equilibrium. We then use the Nash Bargaining Solution (NBS) to improve the efficiency of the outcome. Further, we discuss the truthfulness of user behavior in this game.

1 This chapter is based on joint work with Prof. A. MacKenzie and Prof. B. Krishnamachari that appeared in [67].

5.1 Overview

With advances in radio technology there is increasing interest in developing wireless communication protocols for cognitive radios that operate and cooperate in an intelligent, adaptive, decentralized manner, so as to improve overall performance. In such an environment it becomes meaningful to model the cognitive radios as trying to maximize selfish utility functions, and it has therefore been recognized that game theoretic tools have an important role to play in the design and analysis of such protocols.

In this chapter, we consider a simple communication scenario in which two cognitive radios try to share spectrum resources on two channels. We assume that the two users have fixed valuations for the utility they would derive from each channel. Depending on the context, these valuations may reflect, for instance, the average rate or the probability of packet success (in a general cognitive radio network) or the probability that the channel is free of the presence of a primary user (in the particular case of an opportunistic spectrum access problem). The users wish to decide on the probability with which they should access each of the two channels. We assume that if two users access a channel simultaneously, then each of them gets half of the utility they would get if they accessed the channel alone. This assumption reflects a belief that the channel will ultimately be time-shared by the radios, for instance via CSMA. In this scenario, it is reasonable to assume that each user cares about his/her own gain from occupying or sharing the resources.
This selfish behavior of users intuitively motivates us to introduce game theoretic tools to analyze the possible outcomes. In particular, a non-cooperative game, in which users make decisions independently, is used to formulate the initial problem. We investigate the Nash equilibria of the game, at which no individual user would like to unilaterally change strategy. When there exists a unique pure strategy Nash equilibrium, in which all users play deterministic pure strategies, the users will employ this equilibrium. When there exist multiple pure strategy Nash equilibria, there is an equilibrium selection problem in which no single pure strategy equilibrium is focal. Hence, we argue that it is reasonable to assume that users will employ the unique mixed strategy Nash equilibrium instead.

Given that the Nash equilibrium is often inefficient, we consider the Nash Bargaining Solution (NBS) as a mechanism to enhance cooperation between nodes. Starting from the Nash equilibrium point (the so-called disagreement point), users that bargain in good faith can select an efficient operating point at which they are both better off. This operating point is efficient in the Pareto sense: no user's outcome can be improved without making another user worse off.

5.2 Problem Formulation

In this case study, we consider a two-user (denoted P1 and P2), two-channel (denoted C1 and C2) case. Each user's strategy is to choose which channel to use in a certain time interval. If there is no conflict of interest (i.e., a user can occupy the channel alone), the users' strictly positive utilities are presented in Table 5.1. The table indicates that if user P1 picks channel C1 and user P2 chooses channel C2, the two users get payoffs a and d, respectively. On the other hand, if users P1 and P2 choose channels C2 and C1 respectively, they get payoffs b and c, respectively.
However, if the two users pick the same channel, we assume that they share the channel in such a manner that each receives half of the conflict-free individual benefit of choosing that channel. Specifically, Table 5.2 presents the simple non-cooperative game showing the users' payoffs in all cases.

     C1  C2
P1:  a   b
P2:  c   d
Table 5.1: Utilities without conflict

P2 \ P1    C1            C2
C1         (a/2, c/2)    (b, c)
C2         (a, d)        (b/2, d/2)
Table 5.2: Complete payoff table (entries are (P1's payoff, P2's payoff))

Without loss of generality, we normalize each user's payoff in Table 5.2 to obtain the new payoff Table 5.3, where a′ = a/b and c′ = c/d. We claim that the Nash equilibrium point does not change with this normalization. 2

P2 \ P1    C1              C2
C1         (a′/2, c′/2)    (1, c′)
C2         (a′, 1)         (1/2, 1/2)
Table 5.3: Normalized payoff table

5.3 Nash Equilibrium Analysis

In this section we discuss the Nash equilibrium outcomes of the non-cooperative game defined above. For clarity, we discuss all cases, while pointing out that some of them are symmetric. Before getting to the Nash equilibrium solutions, we first investigate the dominant strategies for both users. It is obvious that for user P1, when both a′/2 > 1 and a′ > 1/2 hold (that is, when a′ > 2), choosing channel C1 strictly dominates choosing channel C2. When a′ = 2, choosing channel C1 weakly dominates choosing channel C2. For brevity, we use "dominant" to mean either "strictly dominant" or "weakly dominant" in the scope of this case study. This implies that when a′ ≥ 2, choosing channel C1 is a dominant strategy for user P1. Similarly, when a′ ≤ 1/2, choosing channel C2 is a dominant strategy for P1. By symmetry, we also have the following two rules.

2 Affine transformations of payoffs do not change the Nash equilibrium point or the Nash bargaining solution [36]. We will also show this partially by example in Section 5.3.

1. Choosing channel C1 is the dominant strategy for user P2 when c′ ≥ 2.
2.
Choosing channel C2 is the dominant strategy for user P2 when c′ ≤ 1/2.

We now discuss the equilibria of this game by considering the following cases:

Case 1: a′ ≥ 2 and c′ ≥ 2. In this case, the dominant strategy for both users is to pick channel C1. At the Nash equilibrium, the payoffs for the two users are a′/2 and c′/2, respectively.

Case 2: a′ ≤ 1/2 and c′ ≤ 1/2. For both users, the dominant strategy is to choose channel C2. Each user has payoff 1/2 at the Nash equilibrium point. Case 2 is symmetric with case 1: switching the channel labels (and renormalizing the resulting payoff table) creates a one-to-one mapping between cases 1 and 2.

Case 3: a′ ≥ 2 and c′ ≤ 2. User P1 has the dominant strategy of choosing channel C1. If user P2 chooses channel C1, he/she gets c′/2. If user P2 picks channel C2, he/she gets 1. Since in this case 1 ≥ c′/2, user P2 will use channel C2. The payoffs at the Nash equilibrium can be expressed as the tuple (a′, 1).

Case 4: c′ ≥ 2 and a′ ≤ 2. This is symmetric with case 3, with the user labels switched. Similar to case 3, the Nash equilibrium for this case is that user P1 picks channel C2 and user P2 chooses channel C1. The payoff at this Nash equilibrium is (1, c′).

Case 5: 1/2 < c′ < 2 and a′ ≤ 1/2. Applying the same procedure as in the previous case, the two users P1 and P2 separate their choices onto channels C2 and C1, respectively. The corresponding payoff is (1, c′).

Case 6: 1/2 < a′ < 2 and c′ ≤ 1/2. This is symmetric with case 5. Applying the same process of eliminating dominated strategies, we find that P1 choosing C1 and P2 choosing C2 is the Nash equilibrium. The payoff is (a′, 1).

Case 7: 1/2 < a′ < 2 and 1/2 < c′ < 2. This case has no dominant pure strategy for either user. Instead, two pure Nash equilibria exist for the game. In each of these two pure Nash equilibria, the users are separated onto the two channels.
However, since we assume that there is no pre-defined agreement between the two users and this is a simultaneous game, it is hard for the users to decide which channel to choose. This is the classic equilibrium selection problem. To avoid it, instead of using a pure strategy Nash equilibrium, we claim that the mixed strategy Nash equilibrium is focal and that each user will employ a mixed strategy. In a mixed strategy Nash equilibrium, the users intelligently randomize their strategy selection.

Assume that user P1 uses channel C1 with probability p and user P2 uses channel C1 with probability q. If user P1 employs a mixed strategy at equilibrium, then it must be the case that P1 is indifferent between his or her two possible pure strategies. We can use this fact to calculate user P2's strategy at the equilibrium point. We use the original payoff Table 5.2 and show that the mixed strategy is the same for payoff Table 5.2 and the normalized payoff Table 5.3.

If user P1 picks channel C1, the expected utility is

aq/2 + (1 − q)a

If user P1 chooses channel C2, the expected utility is

(b/2)(1 − q) + bq

To make user P1 indifferent, we need

aq/2 + (1 − q)a = (b/2)(1 − q) + bq

Hence we get q = (2a − b)/(a + b). Applying a similar argument, we obtain p = (2c − d)/(c + d). Using the normalized payoff Table 5.3, we get p = (2c′ − 1)/(c′ + 1) and q = (2a′ − 1)/(a′ + 1); since a′ = a/b and c′ = c/d, this is exactly the same equilibrium point, as expected. In the case of normalized payoffs, the corresponding expected payoffs at the Nash equilibrium point are (3a′/(2(a′ + 1)), 3c′/(2(c′ + 1))). We point out that when 1/2 < a′ < 2, the utility at the Nash equilibrium point is increasing with a′ and 1/2 < 3a′/(2(a′ + 1)) < 1. A similar result holds for c′.

Figure 5.1: Case number in corresponding regions

Note that the mixed strategy Nash equilibrium may perform worse for both users than either pure strategy equilibrium.
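The case analysis above, including the case-7 mixed equilibrium, can be collected into a short sketch (Python; the function name and return convention are ours):

```python
def nash_equilibrium(a, c):
    """Equilibrium of the normalized two-user/two-channel game (a = a', c = c').
    Returns ('pure', payoffs) or ('mixed', (p, q), payoffs), where p is the
    probability that P1 picks C1 and q the probability that P2 picks C1."""
    if a >= 2 and c >= 2:            # case 1: both strongly prefer C1
        return ('pure', (a / 2, c / 2))
    if a <= 0.5 and c <= 0.5:        # case 2: both strongly prefer C2
        return ('pure', (0.5, 0.5))
    if a >= 2 or c <= 0.5:           # cases 3, 6: P1 -> C1, P2 -> C2
        return ('pure', (a, 1.0))
    if c >= 2 or a <= 0.5:           # cases 4, 5: P1 -> C2, P2 -> C1
        return ('pure', (1.0, c))
    # case 7: unique mixed strategy equilibrium
    p = (2 * c - 1) / (c + 1)
    q = (2 * a - 1) / (a + 1)
    payoffs = (3 * a / (2 * (a + 1)), 3 * c / (2 * (c + 1)))
    return ('mixed', (p, q), payoffs)
```

For example, at a′ = c′ = 1.5 the mixed equilibrium pays each user 3·1.5/(2·2.5) = 0.9, below the payoff of 1 that the worse-off user receives in either pure equilibrium.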
For instance, when 1 < a′ < 2 and 1 < c′ < 2, at the two pure strategy equilibria the users gain (1, c′) or (a′, 1). Hence, the worst payoff a user obtains in a pure strategy equilibrium is 1, which is greater than what he/she gets in the mixed Nash equilibrium. Unfortunately, as discussed previously, neither of the two pure strategy equilibria is focal, so without some coordination it is impossible for the users to preselect one of them. Figure 5.1 illustrates each of these cases as two-dimensional regions in a plot where the x-axis represents the value of a′ and the y-axis represents the value of c′.

5.4 Nash Bargaining Solution

In the previous section we analyzed the Nash equilibria of the non-cooperative game. In this section we discuss the Nash bargaining solution. In Chapter 2, we introduced the background of the Nash Bargaining Solution (NBS). Typically, the Nash bargaining solution is only considered for convex payoff regions. To convexify the payoff region of our game, we first introduce the notion of a coordination signal. Time is divided into slots. At the beginning of each slot, the coordinator uniformly generates a random number s ∈ [0, 1], which is observed by both players. Given such a signal, we claim that all Pareto efficient outcomes of the convexified game are of the following form, for some pre-agreed value of α ∈ [0, 1]: if s ≤ α, then user P1 picks channel C1 and user P2 picks channel C2 for the timeslot; otherwise, user P1 is assigned to channel C2 and user P2 is assigned to channel C1. When α is given, the expected utilities of users P1 and P2 are u1(α) = a′α + (1 − α) and u2(α) = α + c′(1 − α), respectively.

Note that these utilities are higher than those that would be obtained by the corresponding mixed strategies. In mixed strategies, users must randomize independently; hence, there is a non-zero probability that they will land on the same channel and suffer reduced payoff.
Moreover, the payoffs are higher than those that can be obtained through pure-strategy channel sharing. Suppose that both users have a strong preference for C1, that is, a′ > 2 and c′ > 2. Then the maximum payoff the users can obtain by deploying pure strategies is to share channel C1 and obtain the payoff tuple (a′/2, c′/2). But if the users deploy a coordinator and set α = 1/2, then their a priori expected payoffs are ((a′ + 1)/2, (c′ + 1)/2). This is because the coordinator allows the user that is not selected in a given slot to use the other channel during that time slot; without a coordinator, both users spend all their time contending for channel C1. In the latter part of this section, we focus on how to choose α to optimize the Nash bargaining result.

The Nash bargaining outcome depends on the disagreement point: the operating point that the users expect to prevail in the absence of bargaining. We take the disagreement point of the Nash bargaining game to be the Nash equilibrium point when the two users share the same channel or use mixed Nash strategies. When the users do not compete for the same channel, the pure strategy Nash equilibrium found earlier (cases 3, 4, 5, and 6) is already efficient. We therefore discuss the Nash bargaining solution in cases 1 and 7, described in the previous section. As pointed out there, case 2 is symmetric with case 1 once the channel labels are switched; for brevity, we omit the discussion of case 2.

Axiomatically, the Nash bargaining solution is the only outcome that satisfies four conditions: (1) Pareto efficiency, (2) symmetry, (3) invariance to equivalent payoff representations (affine transformations of utility), and (4) independence of irrelevant alternatives. For details on these axioms, which define reasonable expectations for the outcome of a bargaining process, see [36].
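The coordination-signal payoffs used above can be written out directly (Python; a small sketch with our function name, in the normalized game):

```python
def coordinated_utilities(a, c, alpha):
    """Expected per-slot payoffs under the coordination signal:
    with probability alpha, P1 gets C1 (payoff a) and P2 gets C2 (payoff 1);
    otherwise P1 gets C2 (payoff 1) and P2 gets C1 (payoff c)."""
    return a * alpha + (1 - alpha), alpha + c * (1 - alpha)
```

For example, with a′ = 4, c′ = 3 and α = 1/2 this gives (2.5, 2.0), strictly better than the shared-channel payoffs (a′/2, c′/2) = (2.0, 1.5).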
Mathematically, it can be shown that if the payoff region is convex, then the Nash bargaining solution is the point that maximizes the so-called Nash product. That is:

max over α ∈ [0, 1] of (u1(α) − u1(ne))(u2(α) − u2(ne))

where u_i(α) is user i's utility under the coordination-signal strategy described above with parameter α, and u_i(ne) is user i's utility at the disagreement point (i.e., the Nash equilibrium point of the non-cooperative game), for i = 1, 2.

Case 1: Recall that in case 1, a′ ≥ 2 and c′ ≥ 2. At the disagreement point, u1(ne) = a′/2 and u2(ne) = c′/2. Substituting the disagreement point into the maximization problem, we need to find the α that maximizes the following quadratic expression:

(a′α + (1 − α) − a′/2)(α + c′(1 − α) − c′/2)    (5.1)

Therefore,

α = (2a′c′ − 3c′ − a′ + 2) / (4(a′ − 1)(c′ − 1))    (5.2)

Figure 5.2 shows how α's value changes with different a′ and c′ values. Compared to the disagreement point, user P1 increases his/her utility by u1(α) − u1(ne) = (a′ + c′ − 2)/(4(c′ − 1)), and user P2 increases his/her utility by u2(α) − u2(ne) = (a′ + c′ − 2)/(4(a′ − 1)). To illustrate the utility improvement, we define the increase ratio for user i as the ratio of the utility increase after Nash bargaining to the disagreement point (Nash equilibrium) utility. Mathematically, user Pi's increase ratio R_i is

R_i = (u_i(α) − u_i(ne)) / u_i(ne)

Figure 5.3 shows user P1's utility increase ratio when a′ ≥ 2 and c′ is sampled at 2.5, 3.5, 4.5, 5.5, and 6.5.

Figure 5.2: Sliced plot of α's value as a′ and c′ vary from 2 to 10

Figure 5.3: User P1's utility increase ratio in case 1 after Nash bargaining

Case 7: This is the mixed strategy case. The disagreement point is set at u1(ne) = 3a′/(2(a′ + 1)) and u2(ne) = 3c′/(2(c′ + 1)). We focus on obtaining the α that maximizes the Nash product that arises in this case:

(a′α + (1 − α) − 3a′/(2(a′ + 1)))(α + c′(1 − α) − 3c′/(2(c′ + 1)))    (5.3)

Let B denote (a′²(2c′² − c′) + a′(c′² − 1) + c′ − 4c′² + 2) / (4(a′ + 1)(c′ + 1)(a′ − 1)(c′ − 1)).
The optimal value of α that maximizes the Nash product then differs across the following cases:

7.1 If both a′, c′ ∈ (1/2, 1) or both a′, c′ ∈ (1, 2), we have:
  If B > 1, α = 1 is optimal.
  If B < 0, α = 0 is optimal.
  If 0 ≤ B ≤ 1, α = B is optimal.

7.2 When a′ ∈ (1/2, 1) and c′ ∈ (1, 2), α = 0 is optimal.

7.3 When c′ ∈ (1/2, 1) and a′ ∈ (1, 2), α = 1 is optimal.

7.4 If a′ = 1, we need to maximize (1/4)((1 − c′)α + c′ − 3c′/(2(c′ + 1))). Therefore, the optimal value of α is:
  α = 0 if 1 < c′ < 2
  α = 1 if 1/2 < c′ < 1

7.5 If c′ = 1, we need to maximize (1/4)((a′ − 1)α + 1 − 3a′/(2(a′ + 1))). In this case, the optimal value of α is:
  α = 1 if 1 < a′ < 2
  α = 0 if 1/2 < a′ < 1

Figure 5.4: Sliced plot of α's value as a′ and c′ vary from 1/2 to 2

Figure 5.4 shows α's distribution as a′ varies from 1/2 to 2, with c′ sampled at 0.6, 0.8, 1.2, 1.5, and 1.8 in the interval (1/2, 2). Figure 5.5 illustrates the utility improvement ratio for user P1 when a′ ∈ (1/2, 2), for five sampled c′ values, comparing the Nash bargaining solution to the disagreement point (i.e., the mixed Nash strategy).

Figure 5.5: User P1's utility increase ratio in case 7 after Nash bargaining

5.5 Consideration of Truthfulness

In this section, we consider the truthfulness of users' channel condition reports. We first present some assumptions and relevant truth-telling models, and then investigate whether the non-cooperative game and the Nash bargaining game preserve truthfulness. Specifically, truthfulness here refers to each user's report of his or her channel condition: a′ for user P1 and c′ for user P2. Here, we consider players that are rational and selfish but not malicious; such a user will not sacrifice his/her own utility in order to impact the other user's utility. We also assume that neither user has knowledge of the other user's channel condition distribution.
We need to consider two aspects of individual user behavior when analyzing truthfulness in this game. The first is "lying" versus "truth-telling," by which we judge each user's behavior objectively. The second is "suspicious" versus "gullible," by which we identify users' subjective beliefs when they make rational decisions. A "suspicious" user will not trust the other user's report, while a "gullible" user takes the other user's report as the truth and considers it during decision making.

We have to point out that if an individual user is suspicious, the user cannot make a rational decision in some cases (e.g., case 7), because it is then unclear how to compute his/her best response without knowing the other user's beliefs about the distribution of channel valuations. For this reason, we do not treat the "suspicious" case in the scope of this case study.

There are three different truthfulness models, presented here from stronger to weaker:

(M1) Lying prone model: if a user will not lose anything by lying, he/she will lie.
(M2) Neutral model: if a user can possibly gain and never lose by lying, the user will lie.
(M3) Truth telling prone model: if a user does not lose by telling the truth, he/she will not lie.

We consider the neutral model (M2) in the rest of this chapter. Formally, in this model, a user will lie when reporting his/her channel valuation if and only if the following two conditions hold:

Incentive condition: there exists a case in which the lie will strictly increase the user's utility.
Risk aversion condition: in all possible cases, the user's utility is not decreased by lying.

Theorem 1: In the non-cooperative game with the gullible-user assumption, truthfulness for both users is ensured under the neutral model (M2).

Proof: Without loss of generality, we present the argument from user P1's point of view. We consider all possible scenarios:

1. When a′ ≥ 2.
We claim that there is no incentive for user P1 to lie about his/her channel valuation. We can infer this claim by examining the following two scenarios. Notice that in this scenario the only way to lie is to under-report the channel valuation.

1.1 If the true value of c′ > 2, telling the truth gives user P1 payoff a′/2 ≥ 1. Under-reporting cannot strictly increase this value.

1.2 If the true value of c′ < 2, telling the truth gives user P1 payoff a′, which is the maximal possible gain for user P1 in the game. User P1 thus has no incentive to lie.

2. We claim there is no incentive for user P1 to lie about the channel valuation when a′ ≤ 1/2. We separate this case into three subcases.

2.1 If P2's true value c′ > 2, then, as shown in the previous step, P2 will choose channel C1 anyway. In this case, the best payoff P1 can obtain is 1, and over-reporting cannot improve user P1's utility.

2.2 If 1/2 ≤ c′ ≤ 2, telling the truth gives P1 payoff 1, which is the highest P1 can obtain since a′ < 1.

2.3 If c′ < 1/2, telling the truth guarantees each user a payoff of 1/2. However, if lying about the channel valuation were better than telling the truth for user P1, the same conclusion would be drawn by user P2 by symmetry. This means the users either end up in the mixed strategy equilibrium, which yields P1 a payoff of 3a′/(2(a′ + 1)) (a value less than 1/2 when a′ < 1/2), or end up competing for channel C1, which yields P1 a payoff of a′/2 < 1/2.

3. Here we consider the case where 1/2 ≤ a′ ≤ 2. The following subcases are considered.

3.1 If the true value of c′ > 2, P2 has no incentive to lie. When P1 tells the truth, he/she gets channel C2 and the corresponding payoff is 1. Since in this case the payoff 1 dominates all other strategies for P1, P1 has no incentive to lie.

3.2 If the true value of c′ is also in [1/2, 2], we point out a scenario in which lying might hurt the user's utility, which contradicts the risk aversion condition of the neutral model M2.
Suppose both users' true channel valuations (a′ and c′) are between 1 and 2, and assume lying were better than truth-telling in this scenario. Suppose that P1 reports â′ > 2 and sees ĉ′ > 2 (i.e., assume reporting larger channel valuations is better 3). By the gullible assumption 4, the game ends with both users using channel C1, which yields user P1 utility a′/2. Notice that a′/2 < 3a′/(2(a′ + 1)), which contradicts the risk aversion condition. User P1 will therefore report the truth.

3.3 If c′ < 1/2: if P1 tells the truth, he/she chooses channel C1 and gets payoff a′. If lying could force P2 to choose channel C1, P1 would be able to use channel C2 and gain 1. However, P2 will not choose channel C1, since staying on channel C2 is P2's dominant strategy in all cases. Therefore, lying about the channel valuation does not help P1 in this case either.

From the above, we conclude that truthfulness is ensured for both players.

Theorem 2: Truthfulness is not ensured in the current Nash bargaining mechanism under the neutral model M2.

Proof: We provide a counterexample. Consider the case where the true value of user P1's channel valuation satisfies 1 < a′ < 2. User P1 considers the following cases for the true value of c′.

3 A similar checking process can be done for the other cases; we omit the details for brevity.

4 Notice that if a user is suspicious, he/she is not able to decide on an action in this case because of insufficient knowledge.

c′ ≥ 2: User P2 might tell the truth or might report an increased c′. 5 Whether or not P2 is truthful, if user P1 tells the truth, he/she gets payoff 1, since channel C2 is assigned to him/her after bargaining (i.e., α = 0). If P1 were to falsely report â′ > 2, he/she would split his/her time between channels C1 and C2, obtaining utility a′α + (1 − α) > 1. Hence there is an incentive to lie here.

1 < c′ < 2: This is the tricky case.
Notice that since a′ > 1, the more time user P1 gets on channel C1, the higher his/her utility. α is calculated from the reported values â′ and ĉ′: if â′ > ĉ′ then α > 0.5, and if â′ < ĉ′ then α < 0.5. In this case, if user P2 does not lie, user P1 can improve his/her utility by over-reporting â′ > 2. However, we assume that P1 and P2 are both rational. If the possible channel valuation values are upper bounded by a large enough value Γ >> 2, then we claim that both users will end up reporting â′ = ĉ′ = Γ. This is because, for fixed ĉ′, α is monotonically non-decreasing in â′: if a user reports a value â′ < Γ, then once the other user's reported value ĉ′ is known, he/she can always improve by increasing the reported â′. The equilibrium outcome is for both to report Γ. It is worth noting, though, that at this liar's equilibrium, where â′ = ĉ′ = Γ, we have α = 0.5. If the true value of a′ > c′, then P1's utility would have been higher under the truthful outcome. Nevertheless, telling the truth is not advantageous for P1, as P2 would take advantage of his/her truth-telling and suppress α's value.

1/2 < c′ ≤ 1: If both users tell the truth in this scenario, then C1 is allocated to P1 and C2 to P2 (i.e., α = 1). User P2 has no incentive to compete with P1 for channel C1. However, even if P1 over-reports channel C1's valuation (e.g., P1 reports â′ > 2), α will not change.

c′ ≤ 1/2: In this scenario, user P2 prefers to use channel C2 alone, so there is no incentive for P2 to over-report the value of c′. Moreover, it is obvious that over-reporting â′ > 2 does not reduce the utility gained by user P1.

5 It can be proved that user P2 will not report a decreased c′. Intuitively, channel C1 is better for user P2, and he/she wants to claim a larger channel valuation so as to gain a larger share of time on this channel.
From the analysis above, we can see that when 1 < a′ < 2, lying (reporting â′ > 2) is advantageous in the cases c′ ≥ 2 and 1 < c′ < 2, and does not violate the risk aversion condition in any case. Thus, lying by over-reporting a′ is beneficial for user P1. We conclude that truthful channel condition reporting is not incentivized by the current Nash bargaining mechanism; some additional mechanism is needed to enforce truthfulness during the bargaining process. We leave the investigation of such a mechanism as an open problem for future work.

5.6 Summary

In this work, in addition to analyzing the Nash equilibria of a non-cooperative game formulation, we have proposed a novel channel bargaining mechanism for cognitive radios that can be implemented with low overhead in a decentralized fashion. This mechanism, which uses the Nash Bargaining Solution, guarantees 100% utilization of the available spectrum resources while providing improvements for each user compared to the non-cooperative outcome. We have seen that even this basic problem, involving just two users and two channels, has surprising complexity in many dimensions: in the number of cases that arise with respect to the equilibria, in the non-trivial behavior of the Nash bargaining solution in some cases, and in the modeling involved in reasoning about truthfulness.

Chapter 6

Cooperation Among Privacy-Focused Users in Social Networks

6.1 Overview

The primary difference between these new mobile social networking applications and prior mobile applications is that the information provided by the service provider is an aggregation of the data provided by multiple users. In their quest to increase the relevancy of information to a specific user in a social networking scenario, however, mobile applications are beginning to aggressively collect information pertaining to a user.
As the popularity of mobile social networks increases, there is a growing realization that information collected about an individual user can compromise that user's privacy and potentially security [20] [27] [38]. The information collected from a user includes location and contact logs, so a privacy compromise may lead to serious security concerns. There is therefore a need for technological solutions that provide privacy in mobile social networking applications. In these applications there is a fundamental tension between a user's desire to protect privacy and the desire to take advantage of community knowledge: on the one hand, if everyone shares their information freely, the community as a whole gets a better experience; on the other hand, users prefer not to reveal too much personal information, in order to protect their privacy.

1 This chapter is based on joint work with Prof. M. Annavaram and Prof. B. Krishnamachari that appeared in [65].

We take a new perspective on this problem based on game theory [36]. Originally developed by economists to model strategic interactions between rational agents in market settings, game theory has been applied to many distributed network settings where users must interact while pursuing their self-interest. It is in many ways a natural fit for this domain of community-based mobile applications. Specifically, this framework allows one to identify the Nash equilibria of particular mechanisms: strategy profiles at which users have no incentive to deviate unilaterally. It further enables the design of new mechanisms whose equilibria satisfy desired global performance goals for the community of users while allowing individual users to achieve the privacy tradeoffs they desire. Finally, it motivates the design of iterative algorithms that ensure that users can converge to the desired equilibrium in a distributed manner and maintain stable performance in the face of dynamics.
To ground our work, we describe a community-based mobile social networking application called Aegis that can be envisioned for personal safety enhancement, particularly in high-crime urban areas. The basic idea of the Aegis system is that users share their locations with trusted others, and each can in turn view the locations of nearby individuals within their trusted circle, to enhance their sense of personal safety.

To our knowledge, this is the first work to quantify, in a game-theoretic setting, the privacy-service tradeoff central to these emerging mobile social network applications. This chapter is organized as follows. After a further description of Aegis in section 6.2, in section 6.3 we formulate a game played by mobile users who are interested in getting fine-grained information about each other's locations while wanting to provide only coarse-grained information about themselves. We design a system that enforces a tit-for-tat information trade, with each mobile user getting location information about other nearby users at a granularity no higher than the information they are willing to share with others about themselves. The resulting game turns out to have multiple Nash equilibria, including trivial solutions in which subsets of users choose not to share any information at all. In section 6.4 we show how the selfish best response can be calculated in a distributed manner by each user and discuss two simple iterative best-response algorithms. We consider the simplest, simultaneous best response first in section 6.4.2 and show that it can sometimes fail to converge. In section 6.4.3 we consider a minor variation, sequential best response, that provides better performance. Going beyond these simple best-response heuristics, we show in section 6.4.4 that it is possible to solve for a Pareto-optimal Nash equilibrium of this game using a Pareto-improvement algorithm that converges in a polynomial number of steps.
We present numerical evaluations comparing the performance of the sequential best response with the Pareto-improvement algorithm in section 6.5, before our concluding comments in section 6.6.

6.2 Aegis: a Community Mobile Application For Personal Safety

Crime is a serious social problem that has received significant attention in social studies. In a recent survey [79], over 80% of people believed that perceived crime is an important factor in determining where people will stay and what places they will visit. Studies like this have also shown that a person's perceived sense of safety increases when they carry a mobile phone, since the device provides a means of instant communication with their friends and family and, if needed, with law enforcement agencies. While instant communication is an obvious benefit of mobile devices, we believe that more comprehensive approaches to personal safety can be achieved by exploiting the rich set of sensors on mobile devices.

To explore these rich dimensions of personal safety, we envision the development of new personal safety applications on mobile devices based on the notion that a person's sense of security is closely correlated with how many people that person can trust within his or her surroundings. While personal trust is subjective, it is generally believed that if there are more people around a user with whom he/she has some trusted relationship (either directly or indirectly through a social network), then that user's sense of security is enhanced. The Aegis system is based on this idea, displaying the locations of nearby trusted individuals to a user to enhance their sense of safety. While practical full-scale implementations of the Aegis application are likely to be quite sophisticated (for instance, taking into account a rich combination of information such as call logs to determine each individual's circle of trust), we treat a bare-bones version of this application in this study.
In this simplified version of Aegis, we assume that all users belong to each other's circles of trust. Each device registered with the system provides the system with its location. All users with mobile devices within some neighborhood of this device (defined by some physical distance range) can potentially be notified of its location by the system. The fundamental tradeoff that we explore in this study pertains to the granularity of the location provided by and to the users: on the one hand, they all desire to know the locations of the other users with high accuracy; on the other hand, they each prefer not to reveal their own location accurately. We try to resolve this conflict by treating each user as a self-interested entity playing a game.

An important part of defining such a game is modeling the utilities for each user. Modeling safety perception by humans realistically is a very challenging task (perhaps best left to sociologists). Our approach in this work is to pick a simple, tractable, almost-linear utility model for each user that has some intuitive features. The utility model has two components: the gain from knowledge of others' locations, and the loss from the revelation of one's own location. In the model we adopt, location accuracy is treated as a tunable quantity; it may be varied in practice by adding zero-mean noise to the true location with different variances, or by selecting different zoom levels. The gain term captures the idea that each user is generally happier when more users provide location information and when each of them provides more accurate location information, but that there is a point of saturation beyond which the user can be made no happier. The loss term is taken to be linear in the accuracy of the information provided by the user.

6.3 Problem Definition

Let N denote the set of all users in this application.
Given a neighborhood range R, user i can see the set of nearby users N(i) within distance R on a map on his/her mobile device. Each user is able to specify the granularity with which their location should be made available to others. In sparser areas, for reasons of safety, each user is more interested in knowing the exact locations of others than in denser areas. Let a_i ∈ [a_min, a_max] be a real value that denotes the granularity of the location provided by user i, where a higher value of a_i corresponds to more accurate location information. Let us consider a particular concrete model to quantify the utility U(a_i, a_{-i}) for user i, where a_{-i} denotes the strategy vector of all users except user i:

    U(a_i, a_{-i}) = min(K, Σ_{j∈N(i)} a_j) - c·a_i        (6.1)

where K is a pre-defined positive real number indicating an upper bound on the benefit for node i, and c is a positive penalty factor. With this model, a user's benefit is additive in the information accuracy of its neighbors, but saturates at a certain point. Notice that this utility function does not give nodes any incentive to share their location information: a user's benefit comes from the actions of others, while the cost depends only on the user's own action. It can be shown that the only Nash equilibrium of the game under this utility function is the trivial outcome a_i = a_min for all i; i.e., each user always provides minimum accuracy. While this is ideal for each user in terms of maximizing privacy, it results in arbitrarily poor service.

From a game-theoretic point of view, what is missing is a direct incentive for the users to provide high-accuracy data to others. A simple tit-for-tat mechanism that implements such an incentive is to provide information to a user at a download granularity commensurate with the user's upload granularity. An authorized system server through which the users interact can act as an information filter to implement this mechanism.
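To make this concrete, here is a minimal Python sketch of the two utility models; the function and variable names are ours (illustrative only), with K = 100 and c = 0.1 as used later in the simulations. Under the unfiltered utility a user's benefit does not depend on its own accuracy, so hiding is free; under the tit-for-tat filter, hiding also caps what the user can see.

```python
def utility_unfiltered(i, a, neighbors, K=100.0, c=0.1):
    """Utility (6.1): benefit from neighbors' accuracy, capped at K, minus own cost."""
    return min(K, sum(a[j] for j in neighbors[i])) - c * a[i]

def utility_filtered(i, a, neighbors, K=100.0, c=0.1):
    """Tit-for-tat filtered utility: neighbor j is perceived at accuracy min(a_j, a_i)."""
    return min(K, sum(min(a[j], a[i]) for j in neighbors[i])) - c * a[i]

# Two users who are each other's only neighbor.
neighbors = {0: [1], 1: [0]}
both_50 = {0: 50.0, 1: 50.0}
user0_hides = {0: 0.0, 1: 50.0}

# Without the filter, user 0 gains by dropping to minimum accuracy (free-riding):
assert utility_unfiltered(0, user0_hides, neighbors) > utility_unfiltered(0, both_50, neighbors)

# With the filter, hiding also destroys user 0's own benefit:
assert utility_filtered(0, user0_hides, neighbors) < utility_filtered(0, both_50, neighbors)
```

The filtered form is exactly the utility studied in the rest of the chapter.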
The perceived accuracy of a neighbor j for user i is then given by ā_j = min(a_j, a_i), so that the utility becomes:

    U(a_i, a_{-i}) = min(K, Σ_{j∈N(i)} min(a_i, a_j)) - c·a_i        (6.2)

For ease of exposition, we assume the range of a_i is [0, K] from now on (this is equivalent to assuming that a_max ≥ K; however, as we will point out, all the results extend in a straightforward fashion to the case a_max < K). Further, to keep the utility function non-decreasing in a_i below the saturation point, we also assume that the penalty factor c is less than 1 (0 < c < 1). We use the words "nodes," "users," and "players" interchangeably in the following sections.

The utility function in (6.2) provides some desired properties for the application. There exists at least one Nash equilibrium for any network topology; the trivial Nash equilibrium is a_i = a_min = 0. Consider the special case when all N nodes are within the same vicinity: there exist infinitely many Nash equilibria, since every solution of the form a_i = α for all i, where α ∈ [a_min, min(a_max, K/(N-1))], is a Nash equilibrium. However, there is a unique Pareto-optimal Nash equilibrium, given by a_i = min(a_max, K/(N-1)) for all i, which is the best possible solution from a global (social welfare) point of view with respect to the utility. This solution is also intuitively appealing: since the benefit saturates beyond some point, it is best to provide more privacy (less accurate coordinates) when there are more neighbors.

6.4 Algorithms

In this section, we give three different algorithms to find a non-trivial Nash equilibrium in the game defined in the previous section: the synchronized best response dynamic (SYN-BR), the sequential best response dynamic (SEQ-BR), and the Pareto improvement path (PI).

6.4.1 Calculating Best Response

Before we describe the algorithms, we first describe how to calculate node i's best response given all of its neighbors' strategies (Algorithm 1).
Node i's neighbor set is denoted as N(i), and |N(i)| denotes the cardinality of N(i). BR(a_i, a_{-i}) denotes node i's best response given the other nodes' strategies in the vector a_{-i}. (Footnote: When c ≥ 1, there is not enough incentive for users with degree less than c to participate in the game; we leave the discussion of the case c ≥ 1 to future work.) We consider the following three cases to calculate the best response for node i:

- When K/|N(i)| ≤ min_{j∈N(i)} a_j (i.e., node i's neighbors all have relatively high accuracy), setting a_i = K/|N(i)| maximizes the utility function.

- When Σ_{j∈N(i)} a_j ≤ K (i.e., the sum of node i's neighbors' accuracies cannot reach K), the best response of node i is to match the maximum accuracy among its neighbors.

- In all other cases, node i's best response is a value between two of its neighbors' accuracy values: if we sort node i's neighbors' accuracies, the best response lies between two consecutive values a_{j_k} and a_{j_{k+1}} and satisfies Σ_{j∈N(i)} min(a_i, a_j) = K. To calculate the best response in this case, we use the following fact: when c < 1 (as defined in the previous section) and min_{j∈N(i)} a_j ≤ a_i ≤ max_{j∈N(i)} a_j, the utility function is non-decreasing in a_i while Σ_{j∈N(i)} min(a_j, a_i) ≤ K, and decreasing in a_i once Σ_{j∈N(i)} min(a_j, a_i) ≥ K.

The algorithm to calculate the best response for a node is presented in Algorithm 1. The best responses can be calculated in a distributed manner. The complexity for a node i to compute its best response is O(n log n), where n = |N(i)| is the number of neighbors of node i, when a proper sorting algorithm is chosen.
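The three cases above can be sketched as a short Python function; this is our illustrative rendering of the computation (not the dissertation's code), assuming 0 < c < 1 and a_i ∈ [0, K], under which the best response does not depend on c.

```python
def best_response(neighbor_acc, K=100.0):
    """Best response of a node to its neighbors' accuracy values."""
    n = len(neighbor_acc)
    # Case 1: all neighbors at least K/|N(i)| -> saturate exactly at K/|N(i)|.
    if K / n <= min(neighbor_acc):
        return K / n
    # Case 2: neighbors' total accuracy cannot reach K -> match their maximum.
    if sum(neighbor_acc) <= K:
        return max(neighbor_acc)
    # Case 3: find a between consecutive sorted values such that
    #         sum_{a_j <= a} a_j + (#{a_j > a}) * a = K.
    acc = sorted(neighbor_acc)
    prefix = 0.0
    for k, v in enumerate(acc):
        a = (K - prefix) / (n - k)  # candidate with the k smallest neighbors fixed
        if a <= v:
            return a
        prefix += v
    return acc[-1]  # unreachable when sum(neighbor_acc) > K

# Example: neighbors at 10, 40, 80 with K = 100 -> response 50 saturates the
# perceived sum: 10 + 40 + min(80, 50) = 100.
assert best_response([10.0, 40.0, 80.0]) == 50.0
```

The sort dominates the cost, matching the O(n log n) bound stated above.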
Algorithm 1 Calculate Best Response for Player i: BR(a_i, a_{-i})
  if K/|N(i)| ≤ min_{j∈N(i)} a_j then
    return BR(a_i, a_{-i}) = K/|N(i)|
  else if Σ_{j∈N(i)} a_j ≤ K then
    return BR(a_i, a_{-i}) = max_{j∈N(i)} a_j
  else
    sort a_j (∀ j ∈ N(i)) in ascending order, denoting the sorted values as a_{j_1}, a_{j_2}, ..., a_{j_|N(i)|}
    find BR(a_i, a_{-i}) such that a_{j_k} ≤ BR(a_i, a_{-i}) ≤ a_{j_{k+1}} and Σ_{q=1}^{k} a_{j_q} + (|N(i)| - k)·BR(a_i, a_{-i}) = K
    return BR(a_i, a_{-i})
  end if

6.4.2 Synchronized Best Response Dynamic

SYN-BR is the simplest learning dynamic in game theory. This algorithm assumes that all players take action simultaneously and periodically: in each iteration, every player plays a best response to the other players' actions in the last iteration. Algorithm 2 describes SYN-BR. Note that SYN-BR does not guarantee convergence; to avoid infinite loops, we set a large number maxIter as an upper bound on the number of iterations. If the algorithm does converge, it converges to some (arbitrary) Nash equilibrium. We also note that a_i = 0 for all i is a trivial Nash equilibrium; to avoid it, we set the initial state so that all nodes start at value K. (Footnote: The initial state for player i in SYN-BR can also be randomized within the range; different initial states might lead to different Nash equilibria and might also affect the convergence time.) Let a_i^t denote player i's strategy at iteration t.

Here we give an example showing that the SYN-BR algorithm does not converge in some cases. In Figure 6.1, nodes C, D and E converge after the second iteration and remain stable from then on, while nodes A and B never converge: their values keep oscillating forever.
Algorithm 2 Synchronized Best Response Dynamic
  Initialization: a_i^0 = K; t = 1
  while not converged AND (t < maxIter) do
    for every node i do
      a_i^t = BR(a_i^{t-1}, a_{-i}^{t-1})
    end for
    t = t + 1
  end while

Figure 6.1: An example showing that the SYN-BR algorithm does not always converge

6.4.3 Sequential Best Response Dynamic

The example in Figure 6.1 illustrates that the SYN-BR algorithm cannot guarantee convergence. To eliminate the oscillation of strategies among players, we propose SEQ-BR. In SEQ-BR, players update their strategies sequentially according to some pre-agreed order, called the sequential index. When a player i calculates his best response, he considers two sets of his opponents' strategies: for players with a lower sequential index (i.e., players who have already updated their strategies before player i), player i uses their strategies from the current iteration; for the remaining players, player i uses the information from the last iteration. A formal description is given in Algorithm 3.

Algorithm 3 Sequential Best Response Dynamic
  Initialization: a_i^0 = K; t = 1
  while not converged AND (t < maxIter) do
    for i from 1 to N do
      for j ∈ N(i) do
        if j < i then
          use a_j^t (already updated in this iteration)
        else
          use a_j^{t-1}
        end if
      end for
      compute a_i^t = BR(a_i, a_{-i}) with the neighbor strategies selected above
    end for
    t = t + 1
  end while

We have empirically observed that the sequential best response dynamic converges in all our simulations. However, the Nash equilibrium it converges to is an arbitrary one that depends on the initial state. In the following section, we propose a Pareto improvement algorithm which not only guarantees convergence but also results in a Pareto-optimal solution.

6.4.4 Moving the Nash Equilibrium

Before we propose our algorithm, we first introduce the basic concepts of Pareto improvement and Pareto optimality.
Given a set of alternative allocations, a movement from one allocation to another that makes at least one individual better off without making any other individual worse off is called a Pareto improvement. Specifically, in this game, a Pareto improvement strategy vector (ã_1, ã_2, ..., ã_|N|) over a strategy vector (a_1, a_2, ..., a_|N|) satisfies the following two conditions:

- ∃ i ∈ N such that U(ã_i, ã_{-i}) > U(a_i, a_{-i})
- ∀ i ∈ N, U(ã_i, ã_{-i}) ≥ U(a_i, a_{-i})

When no further Pareto improvement can be made from a joint strategy vector, the strategy vector is called Pareto optimal (or Pareto efficient). For our privacy game, we already pointed out that there may exist multiple Nash equilibria; in this case, finding a Pareto-efficient Nash equilibrium becomes an interesting problem. In the following, we propose a polynomial-time algorithm to find a Pareto-optimal Nash equilibrium starting from the all-zero trivial Nash equilibrium.

We need the following lemmas for the correctness of the Pareto improvement algorithm described in Algorithm 4.

Lemma 1: Given a Nash equilibrium (a_1, a_2, ..., a_|N|), no Pareto improvement can improve node i's utility if Σ_{j∈N(i)} min(a_j, a_i) = K. Such a node i is called a saturated node. Let S denote the set containing all saturated nodes. (Footnote: Notice that in this study we assume the range of a_i ∈ [a_min, a_max] is [0, K], which is equal to stating that a_max ≥ K. However, all the results in this study can easily be adapted to the case a_max < K: the corresponding change in the definition of saturated nodes is that such nodes either satisfy Σ_{j∈N(i)} min(a_j, a_i) = K or have a_i = a_max.)

Proof (by contradiction): Suppose there exists a Pareto improvement PI such that after the PI process, U(a_i^PI, a_{-i}^PI) > U(a_i, a_{-i}) and, ∀ j ∈ N(i), U(a_j^PI, a_{-j}^PI) ≥ U(a_j, a_{-j}).
Since U(a_i, a_{-i}) = K - c·a_i and K is the maximal possible value of the positive part, the only way to increase node i's utility is to decrease a_i, i.e., a_i^PI < a_i. We consider two cases.

Case 1: a_i ≤ min_{j∈N(i)} a_j. Since (a_i, a_{-i}) is a Nash equilibrium, we have Σ_{j∈N(i)} min(a_j, a_i) = |N(i)|·a_i = K. a_i^PI < a_i implies that Σ_{j∈N(i)} min(a_j^PI, a_i^PI) ≤ |N(i)|·a_i^PI. Hence

    U(a_i^PI, a_{-i}^PI) ≤ |N(i)|·a_i^PI - c·a_i^PI = (|N(i)| - c)·a_i^PI < (|N(i)| - c)·a_i = U(a_i, a_{-i}),

which contradicts the assumption.

Case 2: a_i > min_{j∈N(i)} a_j. Since (a_i, a_{-i}) is a Nash equilibrium, a_i ≤ max_{j∈N(i)} a_j. Sort the a_j (∀ j ∈ N(i)) in ascending order, denoting the sorted values as a_{j_1}, a_{j_2}, ..., a_{j_|N(i)|}; then ∃ k < |N(i)| such that a_{j_k} < a_i ≤ a_{j_{k+1}}. If none of a_{j_1} through a_{j_k} increases its accuracy value, then since a_i^PI < a_i,

    U(a_i^PI, a_{-i}^PI) ≤ Σ_{q=1}^{k} a_{j_q} + (|N(i)| - k)·a_i^PI - c·a_i^PI < Σ_{q=1}^{k} a_{j_q} + (|N(i)| - k - c)·a_i = U(a_i, a_{-i}).

This contradicts the assumption. Hence there must exist a q (1 ≤ q ≤ k) such that a_{j_q} < a_{j_q}^PI. Now we examine the utility of node j_q. By the definition of Pareto improvement, U(a_{j_q}, a_{-j_q}) ≤ U(a_{j_q}^PI, a_{-j_q}^PI). Since a_{j_q} < a_{j_q}^PI, node j_q must NOT have been a saturated node in the previous Nash equilibrium (otherwise, increasing a_{j_q} could only decrease its utility). Therefore, a_{j_q} = max_{p∈N(j_q)} a_p ≥ a_i. However, this contradicts the earlier claim that a_{j_q} ≤ a_{j_k} < a_i. □

Lemma 2: Given a Nash equilibrium (a_1, a_2, ..., a_|N|), no Pareto improvement can be made for node i if ∀ j ∈ N(i), j ∈ S. Such a node i is called a constrained node. Let C denote the set containing all constrained nodes.

Proof (by contradiction): Assume there exists a node i that can change its strategy to a_i^PI such that U(a_i, a_{-i}) < U(a_i^PI, a_{-i}^PI). By the definition of Nash equilibrium, no node is willing to change its strategy unilaterally.
Hence, in this problem, a Pareto improvement needs to involve at least two neighboring nodes changing strategies simultaneously. Without loss of generality, suppose node i's neighbor k changes its strategy to a_k^PI together with node i, while all of node i's other neighbors keep the same strategy. Notice that node k is a saturated node in the given Nash equilibrium. By the proof of Lemma 1, the only way to keep or increase k's utility is to decrease a_k; that is, a_k > a_k^PI. In the given Nash equilibrium, node i is not saturated; therefore a_i = max_{j∈N(i)} a_j ≥ a_k. Since, among all the neighbors of node i, node k decreases its accuracy value and all the others keep theirs the same, in order to improve its utility node i has to increase its accuracy value; that is, a_i^PI > a_i. A contradiction follows, since a_i^PI = max_{j∈N(i)} a_j^PI ≤ max_{j∈N(i)} a_j = a_i. □

Lemma 3: Given a Nash equilibrium (a_1, a_2, ..., a_|N|), for a node i ∈ N∖(S∪C) we can infer that Σ_{j∈N(i)} min(a_i, a_j) < K. Furthermore, ∃ j̃ ∈ N(i) ∩ (N∖(S∪C)) such that a_i = a_{j̃} = max_{j∈N(i)} a_j.

Proof: From Lemma 1 and the definition of a saturated node, node i not being saturated is equivalent to the condition K > Σ_{j∈N(i)} min(a_i, a_j). According to the best-response calculation, if node i is not saturated, then in the Nash equilibrium a_i = max_{j∈N(i)} a_j. From the fact that node i is not constrained, we can infer that there exists a node within the neighborhood of node i that is not saturated; denote this node by j̃. We have a_i ≥ a_{j̃}. Since the neighborhood range is symmetric, nodes i and j̃ are each other's neighbors, and since node i is not saturated, node j̃ is not a constrained node; that is, j̃ ∈ N∖(S∪C). Therefore, a_{j̃} = max_{j∈N(j̃)} a_j ≥ a_i. These facts allow us to conclude that a_i = a_{j̃} = max_{j∈N(i)} a_j.
□

Lemma 3 states that if a node i is neither saturated nor constrained, there must exist at least one other node (denoted j̃) within the neighborhood of node i that is also neither saturated nor constrained. Further, such a node j̃ has the same strategy as node i, which is the largest granularity among all their neighbors.

Let PN(i) = N(i) ∩ (N∖(S∪C)). Lemma 3 suggests an approach for making an unsaturated node saturated: the step size of the increase, applied to node i and its improvable neighbors, that makes node i saturated is Inc(i) = (K - Σ_{j∈N(i)} a_j) / |PN(i)|. (Footnote: If a_max < K, we just need to change the corresponding part of Algorithm 4 to Inc(i) = min(a_max - a_i, (K - Σ_{j∈N(i)} a_j)/|PN(i)|). All the lemmas and theorems still hold after this modification.)

Lemma 4: Given a Nash equilibrium (a_1, a_2, ..., a_|N|), let the set PN contain all the improvable nodes in the network. Consider a strategy profile in which ã_i = a_i + Inc for all i ∈ PN and ã_i = a_i for all i ∉ PN. This profile is also a Nash equilibrium if the non-negative number Inc is such that the following condition holds:

    ∀ i ∈ PN, Σ_{j∈N(i)} ã_j ≤ K

We omit the proof of this lemma for brevity; it can be proved using the previous lemmas and by considering the users' best responses.

Theorem 1: Starting from a Nash equilibrium, after one iteration of the improvement described in Algorithm 4, the resulting strategy vector is still a Nash equilibrium.

This theorem establishes the correctness of Algorithm 4 for finding a Pareto-optimal Nash equilibrium. It can be derived directly from the four lemmas above, taking into account that the initial state of the algorithm is a Nash equilibrium.

Corollary 1: Given a Nash equilibrium, after applying Algorithm 4 the resulting strategy vector (a_1, a_2, ..., a_|N|) satisfies S ∪ C = N; this strategy vector is a Pareto-optimal Nash equilibrium.

This corollary characterizes the end state of Algorithm 4.
When all nodes are either saturated or constrained (or both) in a Nash equilibrium, no Pareto improvement can be made, by Lemmas 1 and 2. Theorem 1 ensures that the result after each iteration remains a Nash equilibrium. Therefore, if the algorithm converges, it converges to a Pareto-optimal Nash equilibrium. We would like to point out that in arbitrary games there may not exist a strategy profile that is both a Nash equilibrium and Pareto optimal; however, we show that a Pareto-optimal Nash equilibrium exists in this game by exhibiting such a strategy vector. We provide Algorithm 4 to obtain a Pareto-optimal Nash equilibrium. The algorithm moves a given Nash equilibrium along a Pareto improvement path to achieve both Pareto efficiency and stability (i.e., a Nash equilibrium).

Algorithm 4 Pareto Improvement
  Initialization: a_i^0 = 0; t = 1
  while t ≤ |N| do
    check and flag saturated nodes for the vector (a_1^{t-1}, a_2^{t-1}, ..., a_|N|^{t-1}); put them in set S
    check and flag constrained nodes for the vector (a_1^{t-1}, a_2^{t-1}, ..., a_|N|^{t-1}); put them in set C
    if |S ∪ C| = |N| then
      return (a_1^{t-1}, a_2^{t-1}, ..., a_|N|^{t-1}) and report convergence
    end if
    for each node i ∈ N∖(S∪C) do
      calculate the increment Inc(i) = (K - Σ_{j∈N(i)} a_j^{t-1}) / |PN(i)|, where PN(i) = N(i) ∩ (N∖(S∪C))
    end for
    pick the minimum value in the increment list: minPI = min_{i∈N∖(S∪C)} Inc(i)
    for each node i ∈ N∖(S∪C) do
      a_i^t = a_i^{t-1} + minPI
    end for
    t = t + 1
  end while

Proposition 1: Algorithm 4 guarantees convergence.

Notice that each PI iteration adds at least one more node to the saturated node set S. Since saturated nodes remain saturated afterwards, the PI algorithm is guaranteed to converge in at most |N| steps; the running time of the PI algorithm is O(|N|^2). We should point out that there may exist multiple Pareto-optimal solutions, and different initial Nash equilibria might result in different Pareto-optimal solutions.
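A compact Python sketch of this procedure follows; it is our rendering, with illustrative names, starting from the all-zero equilibrium and assuming a_max ≥ K so increments are never capped.

```python
def pareto_improve(neighbors, K=100.0, eps=1e-9):
    """Sketch of Algorithm 4: starting from the all-zero Nash equilibrium, raise
    all improvable nodes in lock-step until every node is saturated or constrained."""
    n = len(neighbors)
    a = [0.0] * n
    for _ in range(n):  # Proposition 1: at most |N| iterations
        saturated = {i for i in range(n)
                     if sum(min(a[j], a[i]) for j in neighbors[i]) >= K - eps}
        constrained = {i for i in range(n)
                       if all(j in saturated for j in neighbors[i])}
        improvable = [i for i in range(n)
                      if i not in saturated and i not in constrained]
        if not improvable:
            break  # S union C covers N: Pareto-optimal Nash equilibrium reached
        imp = set(improvable)
        # Inc(i) = (K - sum of neighbor accuracies) / |improvable neighbors of i|
        min_inc = min((K - sum(a[j] for j in neighbors[i]))
                      / len([j for j in neighbors[i] if j in imp])
                      for i in improvable)
        for i in improvable:
            a[i] += min_inc  # all improvable nodes move by the common minimum
    return a

# Three mutual neighbors (complete graph), K = 100: the unique Pareto-optimal
# Nash equilibrium is a_i = K/(N-1) = 50 for all i.
assert pareto_improve([[1, 2], [0, 2], [0, 1]]) == [50.0, 50.0, 50.0]
```

On the triangle above the sketch saturates every node after a single lock-step increase, matching the closed-form solution for the complete-graph case.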
For example, in the algorithm description the initial state is the trivial all-zero Nash equilibrium. We can also set the initial state to the converged result of the SEQ-BR algorithm, which is guaranteed to be a Nash equilibrium. Both cases converge to Pareto-optimal results after applying the above algorithm; however, the two Pareto-optimal results are not necessarily identical.

6.4.5 Distributed Pareto Improvement

Note that in Algorithm 4, users need extra message exchanges to find the minimum among all the possible increments. One centralized way to solve this problem is to involve the base station in selecting the minimum increment: in each iteration, each node i ∈ N∖(S∪C) sends a message to the base station reporting its increment; the base station picks the minimum value and multicasts it to each improvable node. Another option for obtaining the minimum increment among all flexible nodes is to use the FloodMin algorithm [70]. This algorithm makes the calculation fully distributed, at the cost of more message exchanges: nodes send messages to their neighbors reporting their increments, and the minimum is chosen after the messages have flooded throughout the network. The details of the FloodMin algorithm are described in Algorithm 5.

Algorithm 5 FloodMin
  Initialization: min_inc = Inc(i); t = 1; tmax is the network diameter
  msgs_i:
    if t < tmax then send min_inc to all j ∈ N(i) end if
  trans_i:
    t = t + 1
    let U be the set of increment values that arrive at node i
    min_inc = min({min_inc} ∪ U)
    if t == tmax then return min_inc end if

Figure 6.2: An example: a random node deployment and the corresponding network topology with neighborhood range R = 5

Figure 6.3: An example: converged results of the SEQ-BR and PI algorithms

6.5 Simulation

We conducted the simulations using Matlab, with 10 sets of different node locations.
In each node deployment, 20 nodes are randomly located on a 20 × 20 square. A distance-based model is used to generate the network topology; in this model, each node has the same neighborhood range. The benefit upper bound K is set to 100 in all the simulations, and the penalty factor c is set to 0.1.

Figures 6.2 and 6.3 show an example node deployment with node indexes, the network topology for neighborhood range R = 5, and the converged results for the SEQ-BR and PI algorithms, respectively. The accuracy value beside each node in Figure 6.3 is rounded to an integer for clarity of illustration.

From the algorithm results, we observe that users with more neighbors are likely to have better privacy preservation (i.e., a lower shared accuracy value). For example, node 20 has the maximal degree of 7 and the minimal sharing granularity, at level 13. This is a desirable property for the Aegis application: intuitively, the more neighbors a user has, the more likely the user is in a safe place, and in that case the user does not need to compromise his privacy to other users to improve his safety.

Another observation is that the two Nash equilibria calculated by the SEQ-BR and PI algorithms are not necessarily the same. Node 1 and node 17, in the triangle at the right of the graph, have improved their granularity from level 50 to 67. We can verify that the result of the SEQ-BR algorithm is a Nash equilibrium: for nodes 1 and 17, unilaterally changing their strategies would decrease their utility. However, changing their strategies simultaneously results in a utility improvement for both nodes and lets the system settle in a new stable state. This improvement is essential for this particular application: it increases both user 1's and user 17's safety level.

Figure 6.4: Number of constrained but not saturated nodes vs. node neighborhood range
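This kind of jointly-beneficial move can be reproduced on a toy instance: two users who are each other's only neighbor, with K = 100 and c = 0.1 as in the simulations. The names below are ours, and the topology is illustrative, not the one in Figure 6.3.

```python
def utility(i, a, neighbors, K=100.0, c=0.1):
    """Tit-for-tat utility (6.2) for player i under profile a."""
    return min(K, sum(min(a[j], a[i]) for j in neighbors[i])) - c * a[i]

nbrs = [[1], [0]]          # two users, each other's only neighbor
low = [50.0, 50.0]
high = [100.0, 100.0]

# (50, 50) is a Nash equilibrium: any unilateral deviation by user 0 lowers utility.
assert utility(0, [60.0, 50.0], nbrs) < utility(0, low, nbrs)
assert utility(0, [40.0, 50.0], nbrs) < utility(0, low, nbrs)

# But moving *together* to (100, 100) is a Pareto improvement for both users.
assert utility(0, high, nbrs) > utility(0, low, nbrs)
assert utility(1, high, nbrs) > utility(1, low, nbrs)
```

This is exactly the gap the PI algorithm exploits: it escapes equilibria that are stable against unilateral deviation but dominated by a coordinated move.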
Figure 6.4 illustrates the number of constrained but not saturated nodes (i.e., the cardinality of the set C∖S) in the network as the neighborhood range varies over integers from 5 to 34. All the statistical results presented below are averaged over 10 different node deployments for each neighborhood range value. The result shows that the number of constrained but not saturated nodes decreases as the neighborhood range R increases.

Figure 6.5 plots the social welfare of the Nash equilibria obtained by the two algorithms. Social welfare is defined here as the sum of all nodes' utilities throughout the network (20 nodes in our simulations), averaged over the 10 different node deployments. The initial state of the PI algorithm is the Nash equilibrium output by the sequential best response dynamic. Two facts are observed from this plot: (a) when the graph is relatively sparse, the improvement in social welfare is about 10%; and (b) the improvement percentage decreases as the neighborhood range increases. Overall, PI results in a more efficient Nash equilibrium than the SEQ-BR algorithm in terms of social welfare.

Figure 6.5: Social welfare comparison between the SEQ-BR and PI algorithms

Figure 6.6: Number of instances improved for the SEQ-BR algorithm by the PI algorithm (out of 10)

Figure 6.6 illustrates the number of instances improved by the PI algorithm. In this simulation, we use the output of the SEQ-BR algorithm as the initial state for the PI algorithm. The plot shows that when the neighborhood range is small or medium (R ≤ 18), almost every Nash equilibrium obtained by the SEQ-BR algorithm can be improved. The SEQ-BR algorithm gives a Pareto-optimal Nash equilibrium when the topology is a complete graph. We should point out that the initial state of SEQ-BR (initialized as a_i = K for all i) is what ensures the Pareto-optimal solution for a complete graph.
Random initialization will not necessarily lead the SEQ-BR algorithm to this particular Nash equilibrium.

Figure 6.7: Average number of iterations to convergence for the SEQ-BR and PI algorithms

Figure 6.7 compares the average number of iterations needed to achieve convergence for the SEQ-BR and PI algorithms as the neighborhood range R varies. On average, the PI algorithm converges faster than the SEQ-BR algorithm. For both algorithms, medium densities (8 ≤ R ≤ 20) require more iterations to converge than a sparse graph (R < 8) or a dense graph (R > 20, where the graph is, or is close to, a complete graph). Together, these simulations lead us to conclude that the PI algorithm is superior: it offers fast, provable convergence in polynomial time and a good-quality solution in a distributed manner.

6.6 Summary

In this case study, we described a novel community-based mobile application. The application asks users to share some location information with neighboring friends to enhance safety. Considering that location information is private information which users prefer to protect, we formulated the application as a game. The utility function gives users incentives to reveal more accurate location information when there are few friends around, and to be conservative about information accuracy when more friends appear in the neighborhood.

We illustrated how to calculate the best response for a particular user when fixing all other users' strategies. Furthermore, we investigated several learning dynamics in the system. We pointed out that the synchronized best response dynamic does not guarantee convergence. To gain more control over the resulting equilibrium, we proposed an algorithm which not only guarantees convergence but is also able to move any Nash equilibrium to a Pareto-optimal Nash equilibrium. Simulations on different network topologies compare the sequential best response dynamic with the Pareto improvement algorithm.
We find that Pareto improvement gives better social welfare in most cases when the network topology is not a complete graph.

Chapter 7
Conclusions and Future Work

In various wireless networks, devices are controlled by potentially selfish (rational) participants who can tamper with the networking protocols of their devices to exploit the network at the expense of other participants. This behavior is dangerous, since it can lead to the collapse of service provisioning in the network. Thus, motivating participants to cooperate becomes a key issue in self-organizing networks. Game theory provides a framework for studying the behavior of selfish participants, and it has been successfully applied in many areas, such as economics, biology, and political science. In this thesis, we have focused on how to use game theory in wireless networks to enhance cooperation among selfish users (devices). Specifically, we used three case studies to illustrate the process of analyzing equilibria, designing mechanisms that improve the efficiency of a particular equilibrium, and designing algorithms that converge to the efficient equilibrium in a distributed manner. Through an overview of game-theoretic approaches applied to wireless networks and the three case studies, we have gained a better understanding of cooperation among selfish users in wireless networks. For each case study, we investigated the answers to the following questions: Is the current cooperation level enough to lead to an efficient operating point? How seriously does selfishness harm system performance? How can we design a mechanism to obtain better cooperation, such that the self-interested behavior of users leads to an efficient equilibrium? In the case study of the routing problem, we investigated a pricing mechanism and proposed a polynomial-time construction that generates a Nash equilibrium path in which no route participant has an incentive to cheat.
We also showed that there is a critical price threshold beyond which an equilibrium path exists with high probability. We evaluated the approach using simulations based on realistic wireless topologies. One direction for extending the game in this case study is to add the destination as a player and explicitly incorporate the prices G and p as strategies decided by the source and the destination. Another direction of interest for future work is to consider scenarios where the destination can choose from several source nodes for a given piece of information. This would allow an auction to be held among the source nodes to optimize the destination's payoff. In the case study of the spectrum sharing problem in cognitive radio networks, we considered in detail the specific case where two secondary users opportunistically access two channels on which each user has potentially different valuations. The problem was formulated as a non-cooperative simultaneous strategic game, and we identified the equilibria in this game. For cases where the resulting Nash equilibria are not efficient, we proposed a Nash bargaining based channel access mechanism that can be implemented with low overhead in a distributed manner. We also considered user truthfulness in exchanging channel valuation information. We showed that truthfulness is not guaranteed in the bargaining process, so there is a tradeoff between enforcing truthfulness and efficiency. There are several directions for future work on this case study. One direction is to design a mechanism that enforces truthfulness in the bargaining process and to extend the analysis to multiple users and multiple channels. In the case study of cooperation in community-based mobile social applications, we considered the tradeoff between contributing fine-grained information to the system and protecting local information that may compromise privacy.
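A Nash bargaining based mechanism, as discussed above, selects among the feasible outcomes the one maximizing the Nash product over the disagreement point. Here is a minimal sketch over a finite outcome set; the utility pairs are made-up numbers standing in for the two secondary users' channel valuations, not values from the thesis.

```python
def nash_bargaining(outcomes, d1, d2):
    """Select the outcome maximizing the Nash product (u1 - d1) * (u2 - d2)
    among outcomes where both users do at least as well as the
    disagreement point (d1, d2)."""
    feasible = [(u1, u2) for (u1, u2) in outcomes if u1 >= d1 and u2 >= d2]
    return max(feasible, key=lambda uv: (uv[0] - d1) * (uv[1] - d2))

# Illustrative utility pairs for the two users under three channel-access
# agreements; (0, 0) is the disagreement (no-agreement) payoff.
print(nash_bargaining([(3, 1), (2, 2), (1, 3)], 0, 0))  # (2, 2)
```

The maximized product (rather than the sum) is what gives the bargaining solution its fairness flavor: the balanced outcome (2, 2) wins here even though all three outcomes have the same utilitarian sum.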
We modeled the privacy-participation tradeoffs in this safety application as a non-cooperative game and designed a tit-for-tat (TFT) mechanism that gives users incentives to reveal their local information to the application. We proposed an algorithm that yields a Pareto-optimal Nash equilibrium. We showed that this algorithm, which can be implemented in a distributed manner, guarantees polynomial-time convergence. An interesting direction for future work on this case study is to solve the problem under dynamic settings, where user configurations change over time. The study conducted in this thesis provides an answer to the ancient question of how cooperative behavior emerges in self-centric, distributed decision-making networks and how the possibility of behaving cooperatively shapes the social structure of such networks. We believe that the study of selfish behavior and the investigation of mechanism design to enhance cooperation among selfish users will form fundamental building blocks for protocol design in next-generation wireless networks.