ESSAYS ON INFORMATION DESIGN FOR ONLINE RETAILERS AND SOCIAL NETWORKS

by

Shobhit Jain

A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(BUSINESS ADMINISTRATION)

August 2021

Copyright 2021 Shobhit Jain

Dedication

Dedicated to my parents, for their love, support and belief in me.

Acknowledgements

I would like to acknowledge the Marshall School of Business at the University of Southern California for their generous financial support that provided a cornerstone for completing this dissertation. I am greatly indebted to my advisors, Prof. Ramandeep Randhawa and Prof. Kimon Drakopoulos, who have been pillars of support throughout my Ph.D. journey. This dissertation would not have been possible without their continuous guidance, encouragement, and advice. They have provided an invaluable contribution to my learning and growth as a Ph.D. student. I am grateful to Prof. Phebe Vayanos and Prof. Andrew Daw for being on my thesis committee.

The faculty members in the Data Sciences and Operations (DSO) department have always uplifted and guided me in my academic career. Prof. Greys Sosic, Prof. Vishal Gupta, Prof. Leon Zhu, Prof. Hamid Nazerzadeh, Prof. Peng Shi, and Prof. Raj Rajagopalan have advised and inspired me time and time again in my Ph.D. studies. I am also thankful to my fellow Ph.D. friends in the department for their support and for making the journey more enjoyable. I also appreciate the efforts of Julie Phaneuf, Ariana Perez, and Rebeca Gonzales in the department, who were always helpful to me with all the administrative issues.

I owe every success in my life to my parents, and this dissertation is no exception. They always encouraged and supported me in my pursuits, and their belief in me never wavered. I cannot thank them enough for their love and sacrifices.

Lastly, I would like to thank the friends I made during my Ph.D. life for their assistance, for adding more fun to my life and for being a part of this memorable journey. Special thanks to Umang Gupta for always being available and being my go-to guy during the last phase of my Ph.D.

Table of Contents

Dedication
Acknowledgements
List of Tables
List of Figures
Abstract

Chapter 1: Persuading Customers to Buy Early: The Value of Personalized Information Provisioning
  1.1 Introduction
  1.2 Literature Review
  1.3 Model and Preliminaries
    1.3.1 Model
    1.3.2 Preliminaries
  1.4 Main Results via an Example
    1.4.1 Baseline: Full Information
    1.4.2 Public Signaling Does Not Improve Revenues
    1.4.3 Private Signaling Improves Revenues
  1.5 Formal Results
    1.5.1 Optimal Public Mechanisms
    1.5.2 Optimal Private Mechanism
  1.6 Discussion
    1.6.1 Comparison of Private and Public Mechanisms
    1.6.2 Personalized Pricing Interpretation of Personalized Signaling
  1.7 Robustness
    1.7.1 Coarse Private Signaling
    1.7.2 Private Disclosure Mechanisms
    1.7.3 Demand Uncertainty
  1.8 Conclusion

Chapter 2: Fact-Checking News Using Users' Votes
  2.1 Introduction
  2.2 Literature Review
  2.3 Model
    2.3.1 Informational Environment
    2.3.2 Platform's and Users' Strategies
    2.3.3 Utility of Users and Platform
    2.3.4 Truthful and Unbiased Strategies and Neutral Fact-Checking
  2.4 No-Voting and Altruistic Equilibria
  2.5 Results
    2.5.1 Numerical Experiments Setup
    2.5.2 Constant Fact-Checking
      2.5.2.1 No-Voting Equilibria
      2.5.2.2 Altruistic Equilibria
    2.5.3 Voting-Dependent Fact-Checking
  2.6 Conclusion

Chapter 3: Searching for an Infection in a Network
  3.1 Introduction
  3.2 Literature Review
  3.3 Model
  3.4 Formulation as a Graph Covering Problem
  3.5 NP-Completeness Result
  3.6 The Simplified Case of a Line Graph
  3.7 Conclusion

Appendix A
  A.1 Proofs
    A.1.1 Proof of Theorems 1.5.1 and 1.7.2
    A.1.2 Proof of Theorems 1.5.2 and 1.5.3
      A.1.2.1 Outline of the Proof
      A.1.2.2 Problem Formulation
      A.1.2.3 Solving the Optimization Problem
      A.1.2.4 Proof of Lemma A.1.4
    A.1.3 Proof of Theorem 1.7.1
  A.2 Pricing After Uncertainty Realization Without Commitment
    A.2.1 Equilibrium Definition
    A.2.2 Preliminary Result
    A.2.3 Result
  A.3 Solution Concept for Continuum of Customers
  A.4 Value of Public Signaling

Appendix B
  B.1 Proofs of Section 2.4
    B.1.1 Proof of Lemma 2.4.1
    B.1.2 Proof of Lemma 2.4.2
    B.1.3 Proof of Lemma 2.4.3
    B.1.4 Proof of Proposition 2.4.4
  B.2 Proofs of No-Voting Equilibria
    B.2.1 Proof of Lemma 2.5.1
    B.2.2 Proof of Proposition 2.5.2
  B.3 Proofs of Altruistic Equilibria
    B.3.1 Information Gains
    B.3.2 Reputation Gains
    B.3.3 Platform's Utility
  B.4 Conditions for Each Equilibrium

Appendix C
  C.1 Proof of Theorem 3.4.1
  C.2 Proof of Theorem 3.5.1
    C.2.1 Proof of Lemma 3.5.2
  C.3 Proof of Proposition 3.6.1

Bibliography

List of Tables

2.1 Table of possible altruistic equilibria

List of Figures

1.1 Timeline showing the different periods and events
1.2 Optimal personalized signaling mechanism
1.3 Increase in revenue by using personalized information provisioning
1.4 Optimal personalized mechanism and personalized period 1 pricing
2.1 Timeline showing various events
2.2 Heatmaps for constant fact-checking
2.3 Heatmaps for dependent fact-checking showing optimal equilibria, platform's utility, f and v values
2.4 Heatmaps for dependent fact-checking showing optimal fact-checking strategy for the platform
2.5 Heatmaps showing percentage increase in platform's utility by using a voting-dependent fact-checking strategy as compared to constant fact-checking
3.1 Example of a line graph and a star graph
3.2 NP-Completeness construction procedure example

Abstract

The amount of information (or misinformation) has grown exponentially in the past years due to the rampant growth of internet and social media technologies. With the availability of huge amounts of data and the technology to parse it, it is becoming crucial for firms to understand its impact and act strategically. Information has different significance for different stakeholders. Online retailers can utilize behavioral data to learn about their customers, whereas social media platforms face increasing pressure to curb the spread of misinformation on their networks. The job often becomes more complex in the presence of users whose goals might not be completely aligned with those of the firm. This dissertation broadly aims to answer the questions that arise in the context of these kinds of interplay between firms, users, and information. We consider settings in revenue management, fake news, and graph epidemics, where the policy maker must make strategic decisions with limited information, often in the presence of strategic players. In particular, we try to answer the following research questions in each respective domain:

• Information Provisioning in an Online Marketplace: Modern sellers are better informed about their product's availability and can potentially disclose this additional information to extract additional revenue from their customers. How should an online retailer reveal inventory information to increase its revenue?

• Misinformation Minimization in a Social Platform: The recent increase of false stories on social media platforms has led to serious outcomes and put the platforms under a lot of pressure to curb the spread of fake news. How should an online platform set its fact-checking strategy to reduce misinformation?

• Searching for an Infection in a Network: COVID-19 has caused a great disruption in the economic and social life of millions of people around the globe. During such epidemics, effective identification strategies are crucial in controlling the spread of the infection. How should a policy maker screen individuals in a social network to find an infected person in minimal time?

Information Provisioning in an Online Marketplace

Modern advances in retailing have produced convenient channels for firms to communicate information to their customers. Firms are also usually better informed about supply and demand dynamics than their customers. This information asymmetry creates the opportunity for a firm to consider revealing product availability to its customers in the hope of increasing its revenue. For example, the firm can put the words "Limited Stock" in the product description.
If the firm chooses to reveal the information truthfully in all situations, this will be optimal for the buyers but may not be the best strategy for the firm itself. In contrast, an uninformative signal from the firm never persuades the buyers, failing to extract additional revenue. Thus, the firm must choose its information sharing carefully, selecting how much information to reveal and to whom.

In the first chapter, we tackle this revenue maximization problem where a firm has the traditional method of dynamic pricing and the modern ability to convey inventory information. We adopt a Bayesian persuasion framework to model this problem as a game between the firm and the customers.

Dynamic pricing is often used in the literature to generate revenues in the presence of strategic customers. Liu and van Ryzin (2008) analyze a deterministic demand scenario consisting of two periods with strategic buyers. In the cheap talk model of Allon and Bassamboo (2011), the firm uses delay announcements to influence homogeneous customers in a service system where the prices are endogenously fixed. Lingenbrink and K. Iyer (2018) analyze a similar model with commitment and establish the efficacy of public signaling in extracting revenues. Our model adds to this literature by analyzing a two-period model with heterogeneous customers in a Bayesian persuasion framework.

We analyze information signaling in two formats: public signaling (where the information revealed is the same for all the customers) and private signaling (where the information revealed differs across customers). We show that when product availability is low, it is optimal for the firm to reveal this and sell out in the first period. The remaining job of the firm is to ensure incentive compatibility when product availability is high. We find that this renders public signaling unavailing, as these conditions can be satisfied by optimizing the prices alone. However, in the private signaling case, the firm can obtain higher revenue by sending different signals across customers. We characterize the optimal pricing and information provisioning structure and provide some insight into how private signaling has attributes similar to personalized pricing. Our work provides another option for firms to enhance their revenue, especially in scenarios where using personalized pricing might be challenging.

Misinformation Minimization in a Social Platform

The spread of false stories on social media has often resulted in detrimental and, in the worst cases, violent consequences. As a result, social media corporations have faced a lot of criticism and pressure to develop mechanisms that can detect and remove false stories from their platforms. In an attempt to limit the spread of fake news, social media platforms have adopted various measures. These include using third-party fact-checkers, labeling harmful content, and removing content from pre-identified fake news sites. On most platforms, users also have the option of reporting the validity of an article on the platform. These users' votes provide additional information to the platform, which it can utilize to tackle false articles.

In Chapter 2, we consider such a setting by modeling a game between the platform and two users. The platform aims to minimize the misinformation of its users in a cost-effective manner. Always revealing the truth, though it minimizes the users' misinformation, comes at a great cost for the platform.
On the other hand, the platform can choose not to put any effort into fact-checking the articles, but it then risks higher levels of misinformation for its users. Hence, the platform must strategically select its fact-checking strategy to counter the misinformation of the users. Each user, too, is strategic and has the goal of reducing her own misinformation and fact-checking effort. Each user also wants to improve her reputation on the platform, defined as the platform's belief that the user has a lower cost of fact-checking and hence is more likely to fact-check. This reputation is affected by the user's vote and the platform's fact-checking, which give some information about the user's fact-checking cost.

We characterize the possible equilibria and the conditions for their existence. We provide the optimal strategy for the platform when the users do not vote and a closed-form expression for its utility. When the users vote, we consider two settings: constant fact-checking (where the platform's fact-checking strategy is independent of the votes of the users) and voting-dependent fact-checking (where the platform takes the users' votes into account in its strategy). Through numerical simulations, we evaluate the equilibria that maximize the platform's utility for different values of the primitives and the fact-checking strategies that the platform should use to achieve those equilibria. In constant fact-checking, we find that voting helps to increase the utility of the platform when its fact-checking cost is high. As the users have the option of sharing their findings about the article, voting helps improve users' overall information about the truth. In voting-dependent fact-checking, we find that the platform's utility increases even further due to strategic fact-checking by the platform.

Searching for an Infection in a Network

Identifying and isolating the infected sources in a population is crucial in controlling and minimizing the damage to other members of the community. For instance, the COVID-19 pandemic has put the world to a stop, causing harm to life and society. Early detection of groups of individuals with a higher risk of infection can also greatly enhance targeted surveillance and preventive measures. Appropriate safety measures and containment policies are easier to enforce at the onset of an epidemic. Other potentially infected individuals in the population can also be found from the identified nodes through contact tracing.

Contact tracing is a disease control strategy of detecting who has been in close contact with an infected person and then asking them to isolate to break the chain of infection. It is a central public health response to newer outbreaks such as COVID-19, where specific treatments are unavailable or limited. For effective contact tracing, the policy maker must be able to investigate and identify individuals with probable occurrence of the disease. Screening individuals in the population quickly and efficiently can dramatically reduce the impact of an epidemic. Aligning with this objective, we study a deterministic infection spread in a network to design a screening policy that searches for any infected node as quickly as possible. The infected node can then potentially be used to contact trace and control the spread of the virus.

We model the problem as a graph search problem aimed at finding an infection in minimal time. Each node in our graph serves as an individual in the social network.
The interactions between these individuals are represented as edges in the graph. A search policy is defined as a sequence of individuals that should be screened for infection at various times. The goal of the policy maker is to find a search policy that finds an infected person in the least time possible. We propose a sequential graph covering problem and show its equivalence to our infection search problem. We show that finding the optimal search policy is NP-Complete for a general graph and provide an integer linear program formulation to solve the problem. We also analyze the simplified case of a line graph and propose future directions of research.

Chapter 1

Persuading Customers to Buy Early: The Value of Personalized Information Provisioning

1.1 Introduction

Retailers frequently employ dynamic pricing to effectively match supply with demand. Some retailers, such as Amazon, change prices very frequently (up to 2.5 million times a day; see Mehta, Agashe, and Detroja 2019). In such scenarios, customers face both a price risk, i.e., the product price may decrease, and a quantity risk, i.e., the product may not be available in the future. At any given time, customers must make a buy now or wait decision by suitably weighing their estimates of these risks against the product's value to them. As one might expect, a firm tends to have more information about its supply and aggregate demand than its customers do and thus is better informed of the potential future product availability. Traditionally, firms have only had access to prices as a lever to affect customer behavior. However, modern technological advances have enabled inexpensive and adaptable communication pathways between firms and customers, which leads to a natural information design question: How can a firm communicate this information in a profitable manner?

We explore a firm's use of an additional communication channel to signal its product availability information. For instance, an e-tailer may place the message "Limited Stock" next to the product under consideration.

[Figure 1.1: Timeline showing the different periods and events in our model. In period -1, the firm commits to a signaling mechanism and posts prices $p_1$ and $p_2$, and all customers enter the market with a common belief on $Q$. In period 0, the firm observes the realization of $Q$ and sends signal $s_v$ to each $v$-customer. Customers who decide to buy in period 1 or period 2 get the product if available. Periods -1 and 0 are artificial: selling occurs in periods 1 and 2, $Q$ represents the firm's quantity, and $v$-customer denotes a customer with valuation $v$.]

Because firms typically have additional information about their customers, it is further possible for the firm to provide availability information in a personalized manner, such as providing the message "Limited Stock" to some customers while providing a different message (or no message) to others. In this chapter, our focus is on studying such information provisioning that can be personalized, while also optimizing publicly posted prices. A priori, one expects information provisioning to be beneficial in both public and personalized formats. However, under some technical conditions, we find that, somewhat surprisingly, the potential value of public information provisioning can be fully realized by publicly posted prices alone. That is, if a firm optimizes its prices, public information has no additional value. In sharp contrast, we find that personalized information provisioning can be very profitable.
We use a Bayesian persuasion framework (Kamenica and Gentzkow 2011) to model the information provisioning game. Our firm is a monopolist and sells identical products to a continuum of customers. We consider the overall mass of customers to be deterministic and normalized to unity. In our model, all customers are present in the market from the beginning, and we do not consider situations in which some customers leave or more customers enter the market at any point in time. A priori, the firm and the customers are uncertain about the firm's inventory, i.e., the maximum amount (number) of products that the firm will be able to sell. (We also consider uncertainty in demand with deterministic quantity in Section 1.7.3.) In our model, time is discrete, and selling takes place in two periods, named period 1 and period 2 (please refer to Figure 1.1 for the timeline of events). The firm posts prices, which are assumed to be the same for all customers, and commits to a signaling mechanism before realization of the underlying uncertainty (in the artificial period -1). In the public information setting, the signal is common across customers. In the personalized information case, the seller can send different signals to different customers depending on their valuation. In signaling parlance, the firm is the sender, and the customers are receivers. A key difference from the typical signaling paradigm is the following (indirect) strategic complementarity between receivers: customers who decide to purchase early reduce the availability for customers who decide to postpone their purchase.

In addition to characterizing the optimal personalized mechanism and its value, our analysis provides insight into how this value is realized. Given that the firm knows the valuations of its customers, if first-degree price discrimination were possible, the firm could charge each buyer their true valuation and extract all surplus. However, firms may not be able to engage in such pricing in order to adhere to regulations,* and some firms, such as platforms like Airbnb, may not have control over prices. It turns out that personalized information provisioning has attributes very similar to personalized pricing. Specifically, we provide settings where the firm, when using the optimal personalized mechanism with publicly posted prices, cannot extract greater revenue even by using a personalized price in period 1. Thus, our work suggests that personalized information provisioning has potential, especially in e-tail scenarios, as it allows firms to reap some of the benefits of personalized pricing. Given that recent technological advances provide firms with a multitude of options for communicating information to their customers, we believe that it may be possible for firms to derive these benefits without encountering some of the issues that may be associated with personalized pricing.

Concluding, the main contributions of this chapter can be summarized as follows. First, we show that committing to a public information provisioning mechanism does not yield any additional benefits to the firm over optimizing its (publicly posted) prices. This finding relies on the fact that both the (public) information provisioning mechanism and the prices are optimized. If prices were set at a suboptimal level, then public information provisioning may add value to the firm.

* The United Kingdom places certain restrictions on personalized pricing under consumer protection and competition laws; see https://one.oecd.org/document/DAF/COMP/WD(2018)127/en/pdf.
Second, in contrast to public information provisioning, we show that personalizing the information provisioning mechanism (that is, giving different information to different customers based on their valuation) generates higher revenues for the firm when publicly posted prices are used (even when the latter are optimal). The achieved revenue is comparable to the revenue achieved by offering personalized prices in the first period (and the same price to all customers in the second period).

1.2 Literature Review

Dynamic pricing has been well established as a means of extracting revenues from strategic customers; see, for instance, the classical papers Coase (1972), Stokey (1979), and Besanko and Winston (1990) and the survey paper Shen and Su (2007) for a detailed review of this literature.

Our work relates to the literature on information communication in retail. One of the early papers on this topic is Allon and Bassamboo (2011), which considers a cheap talk model with endogenously fixed prices, focuses on homogeneous customers, and shows that the firm is unable to signal availability information. The contemporary paper Lingenbrink and K. Iyer (2018) shows that by adding commitment to this model, the firm is able to extract additional revenues using public communication (we compare our work with this paper in more detail in Appendix A.4). Both papers show that private information disclosure does not add value to the firm. There is also recent literature that considers public inventory disclosure as a means of communication (cf. Aydinliyim, Pangburn, and Rabinovich 2017, Cui and Shin 2017). Our work differs from this literature, as we focus on the interaction between pricing and committed information provisioning when selling to heterogeneous customers, and specifically show that private information provisioning has significant value.

Our benchmark model relates to Liu and van Ryzin (2008), which considers a deterministic demand scenario with strategic customers facing declining prices over two periods. However, it differs from our work, as it considers risk-averse customers and does not consider communication between the seller and customers.

Our chapter builds on the Bayesian persuasion framework that has origins in the seminal papers Rayo and Segal (2010) and Kamenica and Gentzkow (2011) but also relates to the growing area of information communication, for instance, in inventory systems (Allon, Bassamboo, and Randhawa 2012, Yu, Ahn, and Kapuscinski 2014), in queueing systems (Veeraraghavan and Debo 2009, Allon, Bassamboo, and Gurvich 2011, Lingenbrink and K. Iyer 2017), in networks (Candogan and Drakopoulos 2020), in policy (Alizamir, Véricourt, and Wang 2020), in Bayesian exploration (Kremer, Mansour, and Perry 2014, Papanastasiou, Bimpikis, and Savva 2017), and in politics (Alonso and Câmara 2016). Our work differs from Kamenica and Gentzkow (2011) because in our model, the receivers (customers) impose an externality on each other through the scarce inventory. This breaks the independence between the receivers, and one needs to analyze their equilibrium as well. Methodologically, we utilize the notion of omniscience from Bergemann and Morris (2017). The firm in our model is an omniscient information designer because it knows individual customer valuations.
We would like to remark that our benchmark model also relates to Allon, Bassamboo, and Randhawa (2012), in which the firm uses prices as a means of signaling its quantity but does so in a traditional signaling paradigm (Spence 1978), as opposed to our Bayesian persuasion framework, in which the firm commits to the information structure.

Our work assumes that the sender commits to its prices in addition to the signaling mechanism in order to persuade the customers. There is some support for this assumption in the literature. Blinder et al. (1998) explain why prices are "sticky" in business cycle development and hence are difficult to modify. Pre-commitment to announced fixed-discount strategies is studied in Aviv and Pazgal (2008) and shown to increase the seller's expected revenue when facing a group of strategic customers. Another related paper, Dasu and Tong (2010), suggests that a posted pricing policy with two or three price changes is sufficient to achieve near-optimal revenues. The authors consider strategic customers facing scarcity of resources, which is similar to the availability notion that we have, and conjecture that it is optimal for the seller to reveal the quantity that it has for sale at the start and then hide the inventory information for subsequent periods.

1.3 Model and Preliminaries

1.3.1 Model

The firm's customers have valuations that are distributed according to a distribution $F$ with support $V$, so that for any $v \in V$, $F(v)$ denotes the fraction of customers with valuations less than or equal to $v$. We assume that $F$ has a non-decreasing hazard rate, a standard assumption in this literature. Throughout the chapter, we denote by $\bar{F}(y) = 1 - F(y)$ the complementary cumulative distribution function. We refer to a customer with valuation $v$ as $v$-customer and assume that $v$ is known to the firm. In our main model, we normalize the overall customer mass to $d = 1$. We denote the firm's inventory by the random variable $Q$ and consider the setting in which $Q$ has two possible realizations: it can be either high ($Q = q_H$) or low ($Q = q_L$). We denote the corresponding events by $H$ and $L$, respectively. The prior probabilities $\mathbb{P}(H)$ and $\mathbb{P}(L)$ are common between the firm and the customers. For convenience, we will set $q_H = 1$ so that the $H$-type firm has abundant quantity; this allows us to present our insights with limited technical complexity.† The firm optimizes over the signaling mechanism and the prices $p_1$ and $p_2$ for the two sales periods. See Figure 1.1 for the timeline of events. We investigate an alternative scenario of demand uncertainty with deterministic quantity in Section 1.7.3.

We would like to point out to the reader that in our model, the firm sets prices prior to the realization of uncertainty. An alternative is for the firm to update prices after it has full information on quantity (demand). Such prices would then become a means of communicating information directly. We formally analyze such a scenario, which leads to a classical signaling game (as in Spence (1978), without commitment), in Appendix A.2 and establish that it leads to unprofitable price distortions. Thus, the firm should prefer committing to prices ex ante even if it had the flexibility to postpone this decision.

† In our analysis, the case $q_H \geq d$ is equivalent to $q_H = d$, and we normalize $q_H = d = 1$ only for expositional convenience.
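To keep these primitives concrete, the following minimal sketch (ours, not from the dissertation) encodes the objects of this section for the uniform-valuation example used in Section 1.4; the later numerical sketches follow the same conventions.

```python
from dataclasses import dataclass

@dataclass
class Primitives:
    """Model primitives of Section 1.3.1 (illustrative; uniform valuations assumed)."""
    q_L: float            # low inventory level
    q_H: float = 1.0      # high inventory level, normalized to the market size d = 1
    prob_H: float = 0.5   # common prior P(H); P(L) = 1 - prob_H

    def F(self, v: float) -> float:
        """C.d.f. of valuations; uniform on [0, 1] in this sketch."""
        return min(max(v, 0.0), 1.0)

    def F_bar(self, v: float) -> float:
        """Complementary c.d.f.: mass of customers with valuations above v."""
        return 1.0 - self.F(v)

prim = Primitives(q_L=0.5)
print(prim.F_bar(0.5))  # mass of customers with v >= 1/2 -> 0.5
```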
1.3.2 Preliminaries

We denote the underlying probability space by $(\Omega, \mathcal{F}, \mathbb{P})$; it represents the uncertainty corresponding to both the inventory and the (possible) randomization induced by the signaling mechanism of the firm. Then, denoting the signal space by $S$, we define a signaling mechanism as a mapping $\sigma : \Omega \times V \to S$. That is, the signal can depend on both the customer valuation and the realized firm inventory (and the potential randomization from the firm). Note that the signaling mechanism defines a joint probability distribution on signals and hence a marginal probability distribution on the signal of each $v$-customer. We denote the support of the marginal distribution of the signal of $v$-customer by $S_v$. Conceptually, in the beginning of an (artificial) period 0 (see Figure 1.1), the firm observes the realization of the inventory $Q$ and sends signal $s_v$ to each $v$-customer according to the chosen signaling mechanism. Note that customers can only observe the prices and their signals, and not the inventory realization.

Given a signaling mechanism $\sigma$ and prices $p_1$ and $p_2$, each $v$-customer has a buying strategy $x_v : S \times [0, \infty) \times [0, \infty) \to \{0, 1, 2\}$, where 0 denotes the decision to not buy the product in any period and 1 and 2 denote the decisions to buy it in period 1 and period 2, respectively.‡ Note that even if a customer decides to buy the product in a period, she may not get it if the number of products available is less than the number of customers willing to buy them. In such a situation, we assume that the firm randomly allocates the products among the customers who wish to buy in that period. We denote by $A_x(v)$ the (availability) event that $v$-customer gets the product in period $x \in \{1, 2\}$. Note that the event $A_x(v)$ is realized when $v$-customer decides to buy the product in period $x$ and the product is assigned to her. Note that in our set-up, if the firm runs out of inventory, it does not replenish, and thus, a customer with action $x \in \{1, 2\}$ either receives the product in period $x$ or does not receive the product at all.

We can write the utility of $v$-customer conditional on observing signal $s$ and prices $p_1$ and $p_2$ as

$$U_v(x, s, p_1, p_2) = \mathbb{I}(x \in \{1, 2\})\,(v - p_x)\,\mathbb{P}(A_x(v) \mid s, x_v = x), \qquad (1.1)$$

where $\mathbb{I}(\cdot)$ denotes the indicator function. Note that the probability of getting the product is calculated with respect to the actions of the other customers $x_{-v}$, the size of the inventory $Q$, and the (possible) randomization by the firm. The mathematically inclined reader can refer to Appendix A.3 for a more detailed presentation of the solution concept and equilibrium conditions.

Given a signaling mechanism $\sigma$ and a signal $s_v$, for each $v \in V$, customers face an incomplete information game, and the solution concept that we use to analyze their interaction is that of a Bayesian Nash equilibrium, defined as follows:

Definition 1. Given a signaling mechanism $\sigma$, a set of strategies $\{x_v\}_{v \in V}$ defines a Bayesian Nash equilibrium if, for all $v \in V$,§

$$x_v(s_v, p_1, p_2) \in \arg\max_{x \in \{0, 1, 2\}} \; \mathbb{I}(x \in \{1, 2\})\,(v - p_x)\,\mathbb{P}(A_x(v) \mid s_v, x_v = x), \qquad (1.2)$$

for almost all $s_v \in S_v$.

‡ We assume that customers' strategies $\{x_v\}$ are measurable.
§ That is, everywhere with respect to $F$.
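As an illustration of the buying decision in (1.1)-(1.2), the following sketch evaluates a $v$-customer's best response given her perceived availability probabilities for the two periods. The availability inputs here are placeholders: in the model they are equilibrium objects determined by the other customers' strategies and the inventory $Q$, not free parameters.

```python
def best_response(v: float, p1: float, p2: float,
                  avail1: float, avail2: float) -> int:
    """A v-customer's best response per (1.2).

    avail1 and avail2 stand in for P(A_1(v) | s_v) and P(A_2(v) | s_v);
    returns 0 (do not buy), 1 (buy in period 1), or 2 (buy in period 2).
    """
    utilities = {
        0: 0.0,                   # outside option
        1: (v - p1) * avail1,     # expected net utility of buying now
        2: (v - p2) * avail2,     # expected net utility of waiting
    }
    return max(utilities, key=utilities.get)

# A customer with v = 0.9 facing certain availability now but a coin flip later:
print(best_response(0.9, p1=0.5, p2=0.4, avail1=1.0, avail2=0.5))  # -> 1
```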
We denote the set of Bayesian Nash equilibria induced by a signaling mechanism $\sigma$ and prices $p_1, p_2$ by $X(\sigma, p_1, p_2)$. Given a Bayesian Nash equilibrium $x = \{x_v\}_{v \in V} \in X(\sigma, p_1, p_2)$, we denote by

$$D_x = \int_{v \in V} \mathbb{I}(A_x(v))\, dF(v)$$

the random variable corresponding to the mass of customers who get the product in period $x$, and calculate the firm's expected revenue corresponding to this equilibrium as

$$R(x) = \mathbb{E}[p_1 D_1 + p_2 D_2]. \qquad (1.3)$$

The expected revenue corresponding to a signaling mechanism $\sigma$ and prices $p_1, p_2$ is given by

$$R_e(\sigma, p_1, p_2) = \max_{x \in X(\sigma, p_1, p_2)} R(x). \qquad (1.4)$$

We would like to emphasize that the above optimization allows for any price levels, and thus, the firm could potentially set $p_2 > p_1$ to withhold capacity, if needed.

Upon receiving a signal $s_v$, a $v$-customer forms a (posterior) belief $\mu_v(s_v)$ that the inventory is high, based on the given signaling mechanism $\sigma$. As per Bayes' rule, for a given signaling mechanism $\sigma$, this belief can be written as

$$\mu_v(s_v) = \mathbb{P}(H \mid S_v = s_v) = \frac{\mathbb{P}(S_v = s_v \mid H)\,\mathbb{P}(H)}{\mathbb{P}(S_v = s_v \mid H)\,\mathbb{P}(H) + \mathbb{P}(S_v = s_v \mid L)\,\mathbb{P}(L)} \qquad (1.5)$$

for almost all $s_v \in S_v$.

We use the solution concept of a Sender-Preferred Subgame Perfect Bayesian equilibrium to analyze the interaction between the firm and the customers, as defined below:

Definition 2. A Sender-Preferred Subgame Perfect Bayesian equilibrium consists of a signaling mechanism $\sigma$, a pricing mechanism $p_1$ and $p_2$, a strategy profile $x = \{x_v\}_{v \in V}$, and a set of beliefs $\{\mu_v\}_{v \in V}$ such that the following conditions are satisfied:

1. The posterior beliefs $\mu_v(s_v)$ are derived as per equation (1.5) using Bayes' rule.
2. The strategies of the customers $x$ define a Bayesian Nash equilibrium as defined in Definition 1.
3. The signaling mechanism $\sigma$ and prices $p_1$ and $p_2$ solve $\max_{\sigma, p_1, p_2} R_e(\sigma, p_1, p_2)$.
4. The strategies $x$ correspond to the best equilibria from the perspective of the firm (the sender), i.e., $x \in \arg\max_{y \in X(\sigma, p_1, p_2)} R(y)$.

We would like to point out that condition 4 of Definition 2 is a common assumption in the information design literature. In our setting, it is not as restrictive as it initially seems, for the following reason: as we will soon see and as shown in Bergemann and Morris (2017), it is enough to consider signaling mechanisms that recommend actions to customers. Assume that there is an equilibrium where a positive mass of consumers deviate from the recommendation (for example, they buy in period 2 when it is recommended to buy in period 1). Then, the firm will have a profitable deviation (in the example given, that deviation would be to increase $p_2$), and hence, this new set of strategies cannot be supported as an equilibrium. In practice, implementation of such an equilibrium is not trivial: once the signals are sent, many Bayesian Nash equilibria may arise. Discovering mechanisms that ensure that the desired equilibria are selected is a topic of ongoing research and beyond the scope of this chapter.

1.4 Main Results via an Example

We illustrate our main results using an example in which customer valuations are uniformly distributed on $[0, 1]$ and $q_L = 1/2$. Moreover, we set the prior on quantity to be $\mathbb{P}(Q = q_L) = \mathbb{P}(Q = q_H) = \frac{1}{2}$. Notice that $q_L = \bar{F}(p^m)$, where $p^m$ denotes the unconstrained static price

$$p^m = \arg\max_p \; p\,\bar{F}(p) = \frac{1}{2}, \qquad (1.6)$$

and $q_H$ equals the market size by assumption. Thus, we may consider this scenario to be one of abundant quantity in which there would be no availability risk for either of the two firm types if the firm type were known to customers with certainty, as we illustrate next. In the rest of this section, we will assume $p_1 \geq p_2$ for convenience.
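As a quick numerical check of (1.6) for the uniform example (a sketch; the grid search is ours):

```python
# Verify p^m = argmax_p p * F_bar(p) = 1/2 for uniform valuations on [0, 1].
n = 10_000
grid = [k / n for k in range(n + 1)]
p_m = max(grid, key=lambda p: p * (1.0 - p))
print(p_m, 1.0 - p_m)  # -> 0.5 0.5: F_bar(p^m) = q_L, the abundant-quantity boundary
```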
1.4.1 Baseline: Full Information

Consider a setting in which the firm type is known to the customers, and thus, the firm's only decision is to set prices. For the $H$-type firm, because $q_H$ equals the total market size, the product will be available in both periods with certainty, and so all customers will purchase in the period with the lowest price. That is, all customers will simply wait for period 2. Consequently, from the firm's perspective, its optimal pricing policy would be to set $p_1 = p_2 = p^m$ and obtain the revenue $R^m = 1/4$.

Consider next the $L$-type firm. Note that in this case, we must have $p_1 \geq p^m$ because the unconstrained revenue function $p\,\bar{F}(p)$ is increasing for $p < p^m$, which implies that setting $p_1 < p^m$ would be suboptimal. However, it is conceivable that the firm could set a low period 2 price to create an availability risk that could entice some high-valuation customers to purchase in period 1. Note that a $v$-customer obtains $(v - p_1)$ as net utility upon purchase of the product in period 1 and $(v - p_2)\,\alpha_2$ as the expected net utility from purchasing in period 2, where $\alpha_2$ denotes the probability that she will obtain the product in period 2. Suppose that $t$ denotes the valuation of a customer indifferent between purchasing in either period. Then, assuming that such a customer exists, we expect all $v$-customers with $v > t$ to strictly prefer to purchase in period 1 and those with $v < t$ to strictly prefer to purchase in period 2. It follows that we can write the availability as

$$\alpha_{2,L} = \frac{q_L - \bar{F}(t)}{\bar{F}(p_2) - \bar{F}(t)}.$$

It follows that for the $t$-customer to be indifferent, we must have

$$(t - p_1) = (t - p_2)\,\frac{q_L - \bar{F}(t)}{\bar{F}(p_2) - \bar{F}(t)}. \qquad (1.7)$$

Using $\bar{F}(y) = 1 - y$ for the uniform distribution, we note that (1.7) has no solution if $\bar{F}(p_1) < q_L$; the candidate outcomes are therefore either all customers purchasing in period 2, or customers with $v > t$ purchasing in period 1 and the remaining customers with $v \geq p_2$ purchasing in period 2. In the former case, in which all customers purchase in the second period at the price $p_2$, the problem transforms to static pricing, in which the optimal price is $p_2 = p^m$, and thus the corresponding revenue equals $R^m$. In the latter case, we have $p_1 = p^m$, and so if $p_2 < p_1 = p^m$, the corresponding revenue would be less than $1/4$, which would be suboptimal. Thus, for the $L$-type firm with uniform customer valuations and $q_L = 1/2$, it follows that the optimal decision is to set $p_1 = p_2 = p^m$, which generates the revenue $R^m = 1/4$. Thus, when customers have full information, the firm is unable to create an availability risk in this setting. We refer the reader to Liu and van Ryzin (2008) for additional discussion of the full information case. Specifically, Liu and van Ryzin (2008) shows how the firm can create an availability risk in such a setting even with uniform customer valuations only if customers are risk averse (with concave utility functions).
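One can also verify numerically that (1.7) fails to hold for the $L$-type firm here. In the sketch below (parameter values are ours), $p_1 > p^m$, so $\bar{F}(p_1) < q_L$, and the right-hand side of (1.7) strictly exceeds the left for every candidate threshold: waiting always dominates.

```python
def indifference_gap(t: float, p1: float, p2: float, q_L: float) -> float:
    """LHS minus RHS of (1.7) for uniform valuations, F_bar(y) = 1 - y."""
    F_bar = lambda y: 1.0 - y
    avail2 = (q_L - F_bar(t)) / (F_bar(p2) - F_bar(t))   # period-2 availability
    return (t - p1) - (t - p2) * avail2

p1, p2, q_L = 0.6, 0.4, 0.5     # p1 > p^m, so F_bar(p1) = 0.4 < q_L = 0.5
gaps = [indifference_gap(0.5 + 0.01 * k, p1, p2, q_L) for k in range(1, 50)]
print(all(g < 0 for g in gaps))  # -> True: no indifferent customer exists
```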
1.4.2 Public Signaling Does Not Improve Revenues

We now return our focus to the setting in which customers do not know the firm type. Compared with the full information setting, the firm could now potentially use information to its advantage by creating some availability risk that could allow it to sell over both periods to a larger total mass of customers, with enough customers buying at the higher price in period 1 that it generates higher revenues. Alternatively, one could also argue that given that both firm types could only generate the same revenue of $R^m = 1/4$ under full information, the firm should not be able to do better under asymmetric information. As we will soon see, under public information provisioning, this latter rationale holds true, and we are unable to improve upon the full information setting. However, under private signaling, we can indeed do better.

In public signaling, because the information is public, all customers perform the same Bayesian update and thus once again face the same decision as each other customer. For instance, suppose that a public signal $s$ was sent to the customers. Then, once again from the customer perspective, we expect there to be a valuation threshold $t_s$ (where we use the subscript $s$ to indicate the dependence on the signal) at which the corresponding customer is indifferent between purchasing in periods 1 and 2. This customer's indifference condition is a modification of (1.7):

$$(t_s - p_1) = (t_s - p_2)\,\mathbb{P}(A_2 \mid S = s), \qquad (1.8)$$

where the availability is now computed based on an expectation over the firm type as well. That is, we have

$$\mathbb{P}(A_2 \mid S = s) = \mathbb{P}(H \mid S = s) + \mathbb{P}(L \mid S = s)\,\frac{q_L - \bar{F}(t_s)}{\bar{F}(p_2) - \bar{F}(t_s)},$$

where the product is available with certainty in both periods if the firm type is $H$. Without loss of generality, let us assume that $\mathbb{P}(H \mid S = s) \in (0, 1)$ so that the customers cannot identify the firm type with certainty by their Bayesian update. It is then useful to compare (1.8) with (1.7). Specifically, for any fixed threshold $t_s$, the expected availability in period 2 is higher under public signaling relative to the case in which the firm type is known to be $L$. Therefore, in this case, for any $\bar{F}(p_1) \leq q_L$, we find that the right-hand side of (1.8) dominates the left-hand side; that is, all customers prefer to postpone their purchase to period 2. Thus, the firm can only generate the revenue $R^m = 1/4$ by setting $p_1 = p_2 = p^m$. Thus, public signaling becomes redundant. Theorem 1.5.1 will formalize this result and prove that this equivalence between public signaling and full information holds more generally than in this example alone.

We would like to emphasize that our argument here is very specific to (i) the abundance of quantity even for the $L$-type firm and (ii) the functional form of the uniform distribution, for which (1.7) does not have a solution. This argument does not work if we relax these assumptions. The purpose of focusing on this simplistic scenario is to provide a better understanding of the contrast between the public and private signaling mechanisms. However, in Theorem 1.5.1, we prove that as long as prices are also optimized, the overall result that public signaling does not have an impact on revenues is true in general for all quantity levels and valuation distributions that have a non-decreasing hazard rate. This more general result relies on the fact that the firm is jointly optimizing the signaling mechanism and the prices. The intuition of this more general result can be broadly attributed to the following logic. Both public information and pricing are levers that the firm can use to select which set of customers buy in the first period and which set buy in the second period; this in turn reduces to selecting the valuation of the indifferent customer. Allowing the firm the flexibility to use both prices and information as levers for setting the valuation of the indifferent customer is an overdetermined problem: the optimal indifferent customer can be chosen either by setting prices or by designing public information mechanisms; using both is redundant. In fact, as we discuss in Appendix A.4, if prices are not optimized, then public signaling can in fact affect revenues.
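The same comparison can be run for (1.8): mixing in any belief that the firm is of $H$-type only raises the perceived period-2 availability, so the gap stays strictly negative and every customer waits. A sketch (our parameter values, with $\mathbb{P}(H \mid S = s) = 0.3$):

```python
def public_gap(t: float, p1: float, p2: float, q_L: float, post_H: float) -> float:
    """LHS minus RHS of (1.8) for uniform valuations and posterior P(H | S = s)."""
    F_bar = lambda y: 1.0 - y
    avail_L = (q_L - F_bar(t)) / (F_bar(p2) - F_bar(t))  # availability if type L
    avail2 = post_H * 1.0 + (1.0 - post_H) * avail_L     # type H: certain availability
    return (t - p1) - (t - p2) * avail2

p1, p2, q_L, post_H = 0.5, 0.4, 0.5, 0.3
gaps = [public_gap(0.5 + 0.01 * k, p1, p2, q_L, post_H) for k in range(1, 50)]
print(all(g < 0 for g in gaps))  # -> True: all customers postpone to period 2
```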
We rst describe intuitively how private signaling can achieve this, and then we formally state the optimal private signaling mechanism for our example (this will be formally proved in Theorem 1.5.2). Intuitive explanation. Consider a public signaling scenario with prices set top 1 = p 2 = 1=2 so that all customers are indierent between purchasing in either period. Now, suppose that we reducep 2 and set 14 it to 1=2 for any small but positive. Then, using the same argument as in Section 1.4.2, we expect customers to simply wait for the lower price in the second period, which hurts the rm’s revenues. The only scenario in which the rm can induce customers to buy in period 1 via public signals is by truthful reporting of its type. That is, theL-type rm sends a “Buy Now” (Buy) signal with certainty, and theH- type rm sends a “Wait for Period 2” (Wait) signal with certainty. In this case, theL-type rm would sell its entire quantity atp 1 = 1=2, and theH-type rm would sell its quantity at a lower price,p 2 = 1=2. Though the lower price generates higher sales, it leads to lower revenues than the baseline case. If the rm were communicating truthfully, then when the rm-type isL, there is an equilibrium in which all customers with valuationsvp 1 = 1=2 purchase in period 1, and other customers are unable to purchase the product because availability is zero in period 2. In this equilibrium, a customer who is indierent between purchasing in either period would have valuation equal top 1 = 1=2 and correspond- ingly obtains zero net utility. Notice further that in this case, the highest-valuation customer (v = 1) would obtain a net utility of (vp 1 ) = 1=2 by purchasing in period 1. Now, suppose that both rm types deviated from their truthful communication strategy only for thev = 1 valuation customer and that both rm types send the signals 1 = Buy with certainty to this customer. Then, this customer would still be willing to purchase in period 1. This is so because in this case, her posterior belief that the rm type isH would beP(Q =q H js 1 =Buy) = 1=2, and thus, her net utility from purchasing in period 2 would then be (vp 2 )P(Q =q H js 1 =Buy) = (1=4 +=2), which is strictly less than the net utility obtained from purchasing in period 1, (vp 1 ) = 1=2. Therefore, there exists some positive such that even if both rm types sendBuy signals with certainty to the highest-valuation customers, i.e., all customers with valuationsv 1, while being truthful to all other customers, we would have an equilibrium in which all customers receiving a Buy signal purchase in period 1, with the indierent customer having valuation 1/2 as in the truthful communication scenario. Straightforward algebra yields that one can nd a suitable (that depends on) so that the rm can generate revenues greater than the baseline case. In this manner, 15 theH-type rm is able to generate additional sales volume due to lowerp 2 while ensuring that a sucient number of customers purchase at the higher period 1 price. This is only possible if the rm can provide in- formation dierentially across its customers based on their heterogeneity in valuations and modify prices accordingly. In the following discussion, we expand on this argument to identify the optimal mechanism and characterize the optimal revenue that can be generated from private signaling. Optimal private signaling mechanism. 
In the following discussion, we expand on this argument to identify the optimal mechanism and characterize the optimal revenue that can be generated from private signaling.

Optimal private signaling mechanism. As we will prove in Theorem 1.5.2, the optimal mechanism sets prices $p_1 = 1/2$ and $p_2 = 1/3$ and provides three private signals, namely, "Buy Now" (Buy), "Wait for Period 2" (Wait), or "Do Not Buy" (No), according to the following mechanism. (Figure 1.2 provides a visualization.) The $L$-type firm provides the Buy signal with certainty for $v \geq 1/2$ and the No signal with certainty for $v < 1/2$. The $H$-type firm signals as follows: for $v \in [2/3, 1]$, it signals Buy with probability 1; for $v \in [1/2, 2/3]$, it signals Buy with probability $6(v - 1/2)$ (lower wedge region) and Wait with the remaining probability (upper wedge region); for $v \in [1/3, 1/2]$, it sends a Wait signal; and for $v < 1/3$, it sends a No signal.

[Figure 1.2: Optimal personalized signaling mechanism. Each panel plots the signal probabilities against the customer valuation $v$: the $L$-type firm (top) signals Do Not Buy for $v < 1/2$ and Buy Now for $v \geq 1/2$; the $H$-type firm (bottom) signals Do Not Buy for $v < 1/3$, Wait for $v \in [1/3, 1/2]$, Buy Now with probability $6(v - 1/2)$ for $v \in [1/2, 2/3]$, and Buy Now for $v \geq 2/3$.]

Under this mechanism, the strategy profile in which customers buy in period 1 (period 2) if and only if they receive the Buy (Wait) signal and do not buy if they get the No signal is a Bayesian Nash equilibrium. This is so because of the following:

(i) When a customer receives a Wait signal, she can immediately infer that the firm type is $H$ and therefore that there will be certain availability in the second period.

(ii) In equilibrium, the $L$-type firm sells out its quantity in period 1 (this is so because all customers with $v \geq 1/2$ purchase upon a Buy signal, and the $L$-type firm sends a Buy signal with certainty to these customers). Thus, when a $v$-customer with $v \geq 1/2$ receives a Buy signal, her perceived availability in period 2 is exactly equal to the posterior probability that the firm is of $H$-type, i.e., $\mathbb{P}(H \mid S_v = \text{Buy})$. Straightforward calculations confirm that it is indeed incentive compatible for all customers who receive a Buy signal to buy in period 1. That is, by construction of the mechanism, we have

$$(v - 1/2) \geq (v - 1/3)\,\mathbb{P}(H \mid S_v = \text{Buy}) \quad \text{for } v \geq 1/2, \qquad (1.9)$$

with the relation holding with equality for $v \in [1/2, 2/3]$.

Under this equilibrium, the expected revenue of the seller is equal to

$$R_{\text{private}} = \mathbb{P}(L)\,p_1\,\bar{F}(p_1) + \mathbb{P}(H)\left[\frac{5}{12}\,p_1 + \left(\frac{2}{3} - \frac{5}{12}\right)p_2\right] = \mathbb{P}(L)\,\frac{1}{4} + \mathbb{P}(H)\,\frac{7}{24} = \frac{1}{4} + \mathbb{P}(H)\,\frac{1}{24} > \frac{1}{4} = R^m. \qquad (1.10)$$

The first term of the expression above corresponds to the low-inventory case (probability 1/2), in which all customers with valuation above 1/2 (a total mass of 1/2) receive a signal to Buy at the price 1/2. The second term corresponds to high inventory, in which case customers receive a signal to Buy according to the prescribed mechanism. The total mass of customers who receive such a signal is equal to 5/12 (the area under the $\mathbb{P}(S_v = \text{Buy} \mid H)$ curve), leading to 5/12 sales in the first period and $(2/3 - 5/12) = 1/4$ sales in the second period. We observe that in the $H$-state, the firm is able to generate an additional $\frac{1}{24}$ units of revenue using this personalized signaling mechanism, leading to an overall increase in expected revenue of 8.3% relative to the baseline case of full information (or, equivalently, public information), with revenue of $1/4$.
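The quantities in (1.10) can be reproduced by integrating the $H$-type firm's Buy probability from Figure 1.2; a minimal sketch (the Riemann sum is ours):

```python
p1, p2, prob_H = 0.5, 1 / 3, 0.5

def buy_prob_H(v: float) -> float:
    """H-type firm's probability of signaling Buy to a v-customer (Figure 1.2)."""
    if v < 0.5:
        return 0.0
    return min(6.0 * (v - 0.5), 1.0)   # ramps up on [1/2, 2/3], equals 1 above 2/3

n = 200_000
mass_buy_H = sum(buy_prob_H((k + 0.5) / n) for k in range(n)) / n  # area -> 5/12
mass_wait_H = (1.0 - p2) - mass_buy_H          # F_bar(p2) minus early buyers -> 1/4

rev_L = p1 * (1.0 - p1)                        # L-type sells out at p1 -> 1/4
rev_H = p1 * mass_buy_H + p2 * mass_wait_H     # H-type sells in both periods -> 7/24
R_private = (1.0 - prob_H) * rev_L + prob_H * rev_H
print(round(mass_buy_H, 4), round(R_private, 4))  # -> 0.4167 0.2708, ~8.3% above 1/4
```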
1.5 Formal Results

1.5.1 Optimal Public Mechanisms

As illustrated in the previous section, customers are able to unravel public information, and thus, the firm can optimally provide full or no information without affecting its revenues, as formalized in the following result.

Theorem 1.5.1.

(a) Providing no information and setting prices at $p_1 = \mathbb{P}(L)\max\{\bar{F}^{-1}(q_L), p^m\} + \mathbb{P}(H)\,p^m$ and $p_2 = p^m$ is an optimal public signaling mechanism.

(b) Providing full information and setting prices at $p_1 = \max\{\bar{F}^{-1}(q_L), p^m\}$ and $p_2 = p^m$ is another optimal public signaling mechanism.

(c) The firm's revenue under any optimal public signaling mechanism is

$$R^* = \mathbb{P}(L)\,p_1^*\,\bar{F}(p_1^*) + \mathbb{P}(H)\,p_2^*\,\bar{F}(p_2^*), \qquad (1.11)$$

where $p_1^* = \max\{\bar{F}^{-1}(q_L), p^m\}$ and $p_2^* = p^m$.

All proofs regarding this chapter are relegated to Appendix A. To understand why public signaling does not help, notice that after receiving this common information, all customers perform the same Bayesian update and thus face the same decision as each other, just as they would in full- or no-information situations. Therefore, the seller cannot credibly create an availability risk that would make it incentive compatible for customers to buy early. More concretely, since all customers with high-enough valuation can solve each other's purchasing decision problem, for any public signal, all customers with valuations above the indifferent customer strictly prefer to take the same action as this indifferent customer. Therefore, the problem becomes equivalent to one with a single sender and a single receiver, in which the firm tries to persuade the whole market as one receiver to use a "desirable" indifference threshold $t$ via appropriate signaling. Fixing any indifference threshold $t$, the firm can generate revenue from the market as follows:

(i) Period 1: Intuitively, the firm can extract the entire valuation $t$ of the indifferent customer from this customer and consequently the same amount $t$ from all customers with valuations above $t$. There is a total mass of $\bar{F}(t)$ of these customers, which implies that the total value extracted in this manner is $t\,\bar{F}(t)$.

(ii) Period 2: The firm can extract $p_2$ from the rest of the market, which buys in the second period; this mass is given by

$$B(p_2, t) = \max\left\{0,\; \min\{\bar{F}(p_2) - \bar{F}(t),\; Q - \bar{F}(t)\}\right\}.$$

Combining the two sources, we obtain a total revenue of

$$t\,\bar{F}(t) + \mathbb{E}_Q\left[p_2\,B(p_2, t)\right].$$

Notice that this total revenue is independent of the period 1 price $p_1$. Consequently, instead of providing information in a public manner, the firm can persuade the market to use a desirable indifference level by just setting $p_1$ appropriately. As formalized in Theorem 1.5.1, the firm's optimal public signaling mechanism thus entails identifying the optimal threshold $t$ and period 2 price $p_2$ and selecting the signaling mechanism and period 1 price simply to ensure that the indifference threshold $t$ is realized. It follows that under public signaling, the firm cannot do better than providing no information.
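For the uniform example, (1.11) can be evaluated directly; a sketch (function name and parameter values are ours):

```python
def optimal_public_revenue(q_L: float, prob_H: float) -> float:
    """Evaluate (1.11) for uniform valuations on [0, 1], where p^m = 1/2."""
    p_m = 0.5
    p1_star = max(1.0 - q_L, p_m)    # F_bar^{-1}(q_L) = 1 - q_L for the uniform
    rev = lambda p: p * (1.0 - p)    # p * F_bar(p)
    return (1.0 - prob_H) * rev(p1_star) + prob_H * rev(p_m)

print(optimal_public_revenue(q_L=0.5, prob_H=0.5))  # -> 0.25, the benchmark R^m
print(optimal_public_revenue(q_L=0.3, prob_H=0.5))  # -> 0.23: scarcity raises p_1^*
```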
Thus, if the $H$-type firm sends a Buy signal with probability $\sigma_H(v)$ to a $v$-customer, this customer weakly prefers to buy now given this signal if the following relation holds:
\[ (v - p_1) \geq (v - p_2)\, P(H \mid S_v = \text{Buy}), \tag{1.12} \]
where
\[ P(H \mid S_v = \text{Buy}) = \frac{P(H, S_v = \text{Buy})}{P(S_v = \text{Buy})} = \frac{P(H)\,\sigma_H(v)}{P(H)\,\sigma_H(v) + P(L)\,\sigma_L(v)}. \]
Using $\sigma_L(v) = 1$, the maximum signal value $\sigma_H(v)$ that satisfies (1.12) is
\[ \sigma_H(v) = \min\left\{\frac{P(L)}{P(H)} \cdot \frac{v - p_1}{p_1 - p_2},\; 1\right\}. \tag{1.13} \]
This mechanism comprises both the signaling mechanism and the corresponding optimal prices and is defined as follows:

(PS1) Prices are set at $p_1 = \bar{F}^{-1}(q_L)$ and $p_2$ that satisfies
\[ \frac{d}{dp_2}\left[p_2\, \bar{F}(p_2)\right] = \bar{F}(\bar{v}), \quad \text{where} \tag{1.14} \]
\[ \bar{v} = \sup\{v \in V : \sigma_H(v) < 1\}. \tag{1.15} \]

(PS2) Signals are sent as follows: (a) If $Q = q_L$, the firm signals Buy with certainty to customers with valuations greater than $p_1$ and signals No to all other customers. (b) If $Q = q_H$: (i) The firm signals Buy with probability $\sigma_H(v)$ as defined in (1.13) to all customers with $v \geq p_1$, (conditionally) independently across customers; with the remaining probability, the firm signals Wait. (ii) The firm signals Wait with certainty to customers with valuations $p_2 \leq v < p_1$.

Theorem 1.5.2. If $q_L \leq \bar{F}(p_m)$, then the private signaling mechanism described by (PS1) and (PS2), with revenue $R^{\text{private}}$ as defined in (1.16), is an optimal signaling mechanism.

We would like to point out that the condition $q_L \leq \bar{F}(p_m)$ in the statement of the above theorem, while sufficient, is not necessary for optimality. In particular, this private signaling mechanism may be beneficial to the firm even for $q_L > \bar{F}(p_m)$. In our example of Section 1.4, $q_L = \bar{F}(p_m) = 1/2$, and in this case, $R^{\text{private}} > R^\ast$ (this relation holds for general distributions and priors). If we consider scenarios with slightly higher $q_L$ values, as expected, we find that this dominance of private signaling continues to hold. The following result formalizes this case:

Theorem 1.5.3. For any $q_L > \bar{F}(p_m)$, if $R^{\text{private}}$ as defined in (1.16) exists and exceeds the public signaling mechanism's revenue for this case, i.e., $R^{\text{private}} > R^\ast = p_m \bar{F}(p_m)$, then the private signaling mechanism described by (PS1) and (PS2) is optimal; otherwise, a public signaling mechanism (as per Theorem 1.5.1) is optimal.

Numerically, we explore the benefit of private signaling in Figure 1.3 for different quantities $q_L$ in the case of uniformly distributed customer valuations. For each quantity, the plot displays the maximal value (in terms of percentage increase in revenue compared to no information) of private information provisioning, over all possible prior values $P(H)$. We see that the highest value of this improvement, 10.3%, is obtained for $q_L = 0.4$. We also see that private information provisioning continues to provide benefit for $q_L \in [0.5, 0.62]$, for which we have abundant capacity with $q_L \geq \bar{F}(p_m)$.

[Figure 1.3: Increase in revenue from using personalized information provisioning relative to no information, plotted against the $L$-type quantity $q_L$.]

Intuitively, Figure 1.3 can be explained as follows. Increasing $q_L$ gives rise to two effects: (i) the sales of the $L$-type firm increase, and (ii) the availability risk drops, and hence, persuasion becomes more difficult. For small values of $q_L$, the first effect dominates, and for larger values, the second effect dominates. Finally, we would like to point out that we characterize the optimal private signaling mechanism for any general prices $(p_1, p_2)$ in the proof of Theorem 1.5.2 (see Lemma A.1.3). In general, we can have situations in which the optimal mechanism involves additional availability risk in periods 1 and 2. However, we establish in Theorem 1.5.2 that when optimizing over prices for valuations with a non-decreasing hazard rate, the optimal prices under private signaling always sell out the $L$-type quantity exactly in period 1, i.e., $q_L = \bar{F}(p_1)$. A numerical sketch of this mechanism for uniform valuations follows.
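To make Figure 1.3 concrete, here is a rough numerical sketch, again not the dissertation's own code, assuming valuations uniform on $[0, 1]$ (so $\bar{F}(p) = 1 - p$ and $p_m = 1/2$). It fixes $p_1 = \bar{F}^{-1}(q_L)$ as in (PS1), grid-searches the period-2 price in place of solving (1.14) exactly, and reports the maximal lift over the no-information benchmark of Theorem 1.5.1.

```python
import numpy as np

v = np.linspace(0.0, 1.0, 20_001)

def private_revenue(q_L, P_H):
    """Revenue of (PS1)-(PS2), with p2 chosen by grid search."""
    P_L = 1.0 - P_H
    p1 = 1.0 - q_L                     # L-type sells out in period 1
    best = 0.0
    for p2 in np.linspace(0.01, p1 - 0.01, 200):
        # sigma_H as in (1.13); clipping enforces the min and positivity
        sigma_H = np.clip((P_L / P_H) * (v - p1) / (p1 - p2), 0.0, 1.0)
        m1 = np.trapz(sigma_H, v)      # H-type's period-1 Buy mass
        R_H = p1 * m1 + p2 * ((1.0 - p2) - m1)
        best = max(best, P_L * p1 * q_L + P_H * R_H)
    return best

def no_info_revenue(q_L, P_H):
    """Benchmark of Theorem 1.5.1 for the uniform case."""
    p1 = max(1.0 - q_L, 0.5)
    return (1.0 - P_H) * p1 * (1.0 - p1) + P_H * 0.25

for q_L in (0.3, 0.4, 0.5):
    lift = max(private_revenue(q_L, h) / no_info_revenue(q_L, h) - 1.0
               for h in np.linspace(0.05, 0.95, 19))
    print(q_L, f"{100 * lift:.1f}%")   # peaks near 10.3% around q_L = 0.4
```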
1.6 Discussion

In this section, we discuss some attributes of the signaling mechanisms. In Section 1.6.1, we briefly comment on how the optimal private and public mechanisms compare with respect to social welfare and consumer surplus. In Section 1.6.2, we provide an alternative, personalized-pricing-based interpretation of personalized information.

1.6.1 Comparison of Private and Public Mechanisms

It is useful to compare the optimal private signaling mechanism with public signaling mechanisms. The private signaling mechanism sets the same period 1 price as an optimal public mechanism. However, it sets a lower $p_2$ value. This is so because under this mechanism $p_2$ satisfies (1.14), whereas in the optimal public mechanism $p_2$ satisfies $\frac{d}{dp_2}[p_2 \bar{F}(p_2)] = 0$, and $p_2 \bar{F}(p_2)$ is quasi-concave per our assumptions. Thus, overall, the private mechanism sets lower prices and consequently generates greater social welfare. Hence, when the private mechanism is revenue optimal, it also improves social welfare relative to public signaling. Turning to consumer surplus, we find that in some cases, private signaling mechanisms can also increase consumer surplus. Specifically, we observe this for the example in Section 1.4, in which the consumer surplus under public signaling equals 0.125, whereas that under private signaling equals 0.139, which is 11.2% higher.

1.6.2 Personalized Pricing Interpretation of Personalized Signaling

We now show how personalized signaling shares some characteristics of personalized pricing. Clearly, if the firm knows customer valuations and is allowed to price in a personalized manner, it can implement perfect, or first-degree, price discrimination. In this case, for the example in Section 1.4, the firm can set a price of $p(v) = v$ for each $v$-customer in both periods and obtain a revenue of
\[ P(L) \int_{1/2}^{1} p(v) f(v)\, dv + P(H) \int_{0}^{1} p(v) f(v)\, dv = \frac{7}{16}, \]
which is significantly higher than the revenues obtained under public prices. More importantly, first-degree price discrimination makes the firm type, and correspondingly the availability risk, irrelevant from the customer's perspective, and consequently, the firm can extract the entire surplus from each customer.

It turns out that if we tweak this setting a little and fix the period 2 price at a constant level (identical for all customers), then personalized (period 1) pricing behaves very similarly to personalized signaling. Specifically, let us fix $p_2 = s$ for some $0 < s \leq p_m$, which we can consider a clearance or sale price. Then, consider a $v$-customer with $v \in [1/2, 2/3]$ who is indifferent between purchasing in period 1 versus period 2 upon receiving a Buy signal. For this customer, we have
\[ (v - p_1) = (v - s)\, P(H \mid S_v = \text{Buy}). \tag{1.17} \]
This relation can equivalently be written as
\[ p_1 = v\, P(L \mid S_v = \text{Buy}) + s\, P(H \mid S_v = \text{Buy}), \tag{1.18} \]
which can be interpreted as the following personalized random price:
\[ P_1(v) = \begin{cases} v & \text{w.p. } P(L \mid S_v = \text{Buy}), \\ s & \text{w.p. } P(H \mid S_v = \text{Buy}), \end{cases} \tag{1.19} \]
where we note that $E[P_1(v)] = p_1$. In this sense, a scenario in which a customer purchases the item at the public price $p_1$ is equivalent to one in which the customer pays the expected value of a corresponding personalized price $P_1(v)$.
Noting that customer utilities and firm revenues are linear in payments, one can interpret this scenario equivalently as the customer paying the random price $P_1(v)$ rather than its expectation. In this alternative scenario, any such indifferent $v$-customer, upon receiving the Buy signal, purchases the product in period 1 but at a random price $P_1(v)$. The random price is realized based on the probabilities stated in (1.19). Notice that one of the realizations of this random price exactly equals the customer's valuation. In this fashion, private signaling with public prices is able to extract the entire surplus with some probability.

Let us now calculate the expected revenue, $R(v)$, that the firm can obtain from this $v$-customer. The firm obtains the customer's valuation $v$ with probability $P(L \mid S_v = \text{Buy})$ if it sends a Buy signal and obtains $s$ otherwise. Noting that
\[ P(L \mid S_v = \text{Buy})\, P(S_v = \text{Buy}) = P(L, S_v = \text{Buy}) = P(L), \]
we have
\[ R(v) = P(L)\, v + P(H)\, s = P(L)(v - s) + s. \]
This revenue can be interpreted as follows: when the firm type is $L$, the customer pays her entire valuation $v$, and when the firm type is $H$, the customer pays only $s$. Thus, the firm is able to use the availability risk to its advantage and extract the entire value from the indifferent customers when it is of $L$-type. Reverting our attention to the original setting with public prices and personalized information, the expected revenue generated from this $v$-customer is identical, which is noteworthy because the firm is using only public prices and is able to extract this additional revenue by using information alone.

[Figure 1.4: Optimal personalized mechanism performs identically to one with personalized period 1 pricing: the $H$-type firm's signaling mechanism.]

If we optimize over the period 1 price (along with the signaling mechanism), it is possible to have a situation in which all customers who buy in period 1 upon receiving a Buy signal are indifferent between buying now and waiting. For instance, this happens if $q_L = 0.1$, $P(H) = 0.2$, and $p_2 = s = 0.5$; then, the firm's optimal strategy (using an analog of Theorem 1.5.2 with a fixed period 2 price) is to set $p_1 = 0.9$ and, for the $L$-type firm, to signal Buy with certainty to all $v$-customers with $v \geq p_1$ and, for the $H$-type firm, to signal as depicted in Figure 1.4. Thus, in this case, personalized information provisioning with public posted prices achieves the exact same revenue as a mechanism with personalized period 1 prices (when period 2 prices are fixed at $s$ for all customers). The sketch below checks the random-price equivalence on the running example.
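This small check uses the Section 1.4 numbers ($P(L) = P(H) = 1/2$, $p_1 = 1/2$, clearance price $s = p_2 = 1/3$); it is a sketch and not the chapter's own computation. It confirms that $E[P_1(v)] = p_1$ throughout the indifferent wedge and evaluates the per-customer revenue $R(v) = P(L)v + P(H)s$.

```python
import numpy as np

# Random-price reading of (1.18)-(1.19) on the indifferent wedge.
P_L = P_H = 0.5
p1, s = 0.5, 1.0 / 3.0

for v in np.linspace(0.5, 2.0 / 3.0, 9)[1:]:
    sigma_H = 6.0 * (v - 0.5)               # H-type's Buy probability
    post_L = P_L / (P_L + P_H * sigma_H)    # P(L | S_v = Buy)
    expected_price = v * post_L + s * (1.0 - post_L)
    assert abs(expected_price - p1) < 1e-9  # E[P1(v)] recovers p1
    revenue_v = P_L * v + P_H * s           # firm's expected take from v
    print(f"v = {v:.3f}: R(v) = {revenue_v:.4f}")
```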
1.7 Robustness

In this section, we discuss and analyze several variants of our main model to demonstrate the robustness of our key insights. Specifically, in Section 1.7.1, we consider a scenario in which the firm can only differentiate between customers in a "coarse" manner, and we show that sending coarse private signals also provides significant value. In Section 1.7.2, we show that the benefits of private information signaling also hold in a situation in which the firm signals by disclosing its inventory, i.e., it cannot send arbitrary signals but instead can choose to either disclose or withhold its quantity information. Finally, in Section 1.7.3, we demonstrate how our results apply to a scenario with demand uncertainty instead of quantity uncertainty.

1.7.1 Coarse Private Signaling

In this chapter, our private signaling mechanisms are based on the assumption that the firm knows the exact valuation of each customer. A more general situation is one in which the firm may not precisely know the valuation of each customer but may be able to classify each customer into two or more classes, each of which has a particular valuation profile. In such scenarios, the firm can send signals that are common within each customer class but potentially different across customer classes. As we will illustrate, such coarse signaling can be quite beneficial to the firm.

Specifically, in this subsection, we discuss a scenario with two different classes of customers: class $a$ and class $b$, with customer masses $d_a$ and $d_b$, respectively. The firm can send different signals across the two classes. Note that customers within the same class receive the same signal, which is public to the customers of that class but is unobservable to customers belonging to the other class. We denote by $\bar{F}_i$, $i \in \{a, b\}$, the complementary cumulative distribution functions of the valuations of each class and assume that both have a non-decreasing hazard rate. We continue to use $R^\ast$ to denote the optimal revenue achieved using public signaling in this setting.

It turns out that the firm can benefit from such coarse signaling as long as the two underlying distributions are sufficiently different. More concretely, consider the (unique) optimal solution $(t_a, t_b)$ to the following optimization problem for the $L$-type firm:
\[ \begin{aligned} \underset{t_a,\, t_b}{\text{maximize}} \quad & t_a d_a \bar{F}_a(t_a) + t_b d_b \bar{F}_b(t_b) \\ \text{subject to} \quad & d_a \bar{F}_a(t_a) + d_b \bar{F}_b(t_b) = q_L, \end{aligned} \tag{1.20} \]
in which the firm optimizes prices $(t_a, t_b)$ for the two customer classes to ensure that the $L$-type firm sells out. As we formally prove in the following theorem, coarse signaling is beneficial if and only if the valuation distributions are such that $t_a \neq t_b$.

Theorem 1.7.1. (a) If $t_a \neq t_b$ and $\bar{F}_a(t_a), \bar{F}_b(t_b) > 0$, then denoting $m = \arg\min_i t_i$ and $\bar{m} = \arg\max_i t_i$, an optimal coarse signaling mechanism is as follows: (i) The firm sets prices $p_1 = t_m$ and $p_2 \in \arg\max_p p\left[d_a \bar{F}_a(p) + d_b \bar{F}_b(p)\right]$. (ii) If $Q = q_L$, the firm signals Buy to both classes with certainty. (iii) If $Q = q_H$, the firm signals Buy to class $m$ with probability $\sigma_m = 0$ and signals Buy to class $\bar{m}$ with probability
\[ \sigma_{\bar{m}} = \min\left\{\frac{P(L)}{P(H)} \cdot \frac{t_{\bar{m}} - p_1}{p_1 - p_2},\; 1\right\}. \]
The revenue of the firm in this case is equal to
\[ R^{\text{coarse}} = P(H)\, p_2 \left[d_a \bar{F}_a(p_2) + d_b \bar{F}_b(p_2)\right] + P(L)\left[t_a d_a \bar{F}_a(t_a) + t_b d_b \bar{F}_b(t_b)\right] > R^\ast. \]
(b) Otherwise, i.e., if $t_a = t_b$ or $\bar{F}_a(t_a) = 0$ or $\bar{F}_b(t_b) = 0$, sending a public signal to both classes as described in Theorem 1.5.1 with valuation distribution equal to $d_a \bar{F}_a + d_b \bar{F}_b$ is optimal, and $R^{\text{coarse}} = R^\ast$.

In summary, when the two valuation distributions are such that the optimal thresholds are identical or one class does not purchase, the firm is unable to benefit from differentiating. An example of such a case is one in which both classes have the same valuation distribution; the optimization problem then trivially has a solution with $t_a = t_b$. Intuitively, since the two classes have the same valuation profile, the firm cannot use sales from one class to extract more revenue from the other class by increasing the corresponding availability risk. In sharp contrast, for general valuation distributions for which $t_a \neq t_b$, the firm can benefit from such a practice, as the sketch below illustrates.
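As an illustration of (1.20), the following sketch solves the class-level problem by brute force for two hypothetical classes; the particular distributions, masses, and $q_L$ are assumptions of the sketch, not values from the chapter. Class $a$ has valuations uniform on $[0, 1]$ and class $b$ uniform on $[0, 0.8]$, with $d_a = d_b = 1/2$ and $q_L = 0.4$. For each candidate $t_a$, we back out the $t_b$ that makes the $L$-type sell out exactly and then maximize the objective.

```python
import numpy as np

d_a = d_b = 0.5
q_L = 0.4

def Fbar_a(t):                      # class a: valuations U[0, 1]
    return float(np.clip(1.0 - t, 0.0, 1.0))

def Fbar_b(t):                      # class b: valuations U[0, 0.8]
    return float(np.clip(1.0 - t / 0.8, 0.0, 1.0))

def Fbar_b_inv(mass):
    return 0.8 * (1.0 - mass)

best = (-1.0, 0.0, 0.0)
for t_a in np.linspace(0.0, 1.0, 2001):
    # class-b purchase mass needed so that total sales equal q_L
    need = (q_L - d_a * Fbar_a(t_a)) / d_b
    if not 0.0 <= need <= 1.0:
        continue
    t_b = Fbar_b_inv(need)
    obj = t_a * d_a * Fbar_a(t_a) + t_b * d_b * Fbar_b(t_b)
    best = max(best, (obj, t_a, t_b))

obj, t_a, t_b = best
print(f"t_a = {t_a:.3f}, t_b = {t_b:.3f}")  # distinct thresholds here,
# so by Theorem 1.7.1 coarse signaling strictly improves on R*
```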
We would also like to point out that if $t_a \neq t_b$, there are multiple optimal coarse signaling mechanisms. These mechanisms are revenue equivalent and have identical values of $p_2, t_a, t_b$ but differ in $(p_1, \sigma_a, \sigma_b)$. There are essentially two defining equations that $(p_1, \sigma_a, \sigma_b)$ must satisfy, which leaves one degree of freedom; for instance, $p_1$ can be varied while satisfying the two equations without affecting revenue. The mechanism presented in Theorem 1.7.1(a) is the one that reveals information truthfully to one class (and consequently has the highest value of $p_1$ among all optimal mechanisms).

As an application of this result, consider a setting in which there is a unit mass of customers with valuations distributed uniformly in $[0, 1]$; we denote the complementary cumulative distribution function by $\bar{U}$. Suppose that the firm can classify (with certainty) customers who have valuations greater than or equal to $\delta \in (0, 1)$ as class $a$ and those with valuations less than $\delta$ as class $b$. Suppose $\bar{U}(\delta) < q_L$, to make this a genuine two-class scenario in which the $L$-type firm cannot sell out its inventory by selling only to class-$a$ customers. In this case, the unique solution to the optimization problem (1.20) is given by $t_a = \delta$ and $t_b = \bar{U}^{-1}(q_L)$. In other words, when the firm sends the Buy signal to class $a$, all customers in the class buy in period 1; when the firm sends the Buy signal to class $b$, only a portion of the customers buy (enough to ensure that total sales equal $q_L$). The optimal revenue of the firm in this case, using Theorem 1.7.1, is given by
\[ R^{\text{coarse}} = P(H)\, p_m \bar{U}(p_m) + P(L)\left[\delta \bar{U}(\delta) + \left(q_L - \bar{U}(\delta)\right) \bar{U}^{-1}(q_L)\right] = R^\ast + P(L)\left(\delta - \bar{U}^{-1}(q_L)\right) \bar{U}(\delta). \]
The second term in the expression above is the added benefit from being able to correctly classify and communicate with high- and low-valuation customers. Note that our assumption implies that $t_a > t_b$, and thus, the firm communicates truthfully to the lower-valuation class-$b$ customers (in both states). The firm distorts (or provides less information to) the higher-valuation class-$a$ customers. These customers are still persuaded to purchase because of their uncertainty about the state of the world, especially in light of the presence of the lower-valuation customers. This allows the firm to increase its revenues.

1.7.2 Private Disclosure Mechanisms

Our work extends to disclosure-type signals that allow verifiability, wherein the firm may signal its type to the customers in a credible manner. Specifically, an $L$-type firm can disclose its inventory to show that it is indeed of low type; an $H$-type firm cannot do this. It turns out that under such a restriction, personalized information provisioning can still provide significant benefits. For the illustrative example of Section 1.4, if we restrict signals to High and Low and only allow the $L$-type firm to send the Low signal, then a straightforward calculation yields an optimal signaling mechanism with $p_1 = 1/2$ and $p_2 = 1/3$ (as before), in which the $L$-type firm signals Low only to $v$-customers with $v < 2/3$. The customer purchase behavior is as follows: $v$-customers with $v \geq 2/3$ buy in period 1, and those with $1/2 \leq v \leq 2/3$ buy in period 1 if they receive a Low signal and otherwise wait for period 2. The total expected revenue of this mechanism is 0.264, which is 5.6% higher than the baseline revenue of $R^m = 1/4$.

1.7.3 Demand Uncertainty

Thus far, we have focused on a situation in which the customers and the firm are a priori uncertain about the size of the inventory, and the firm exploits this uncertainty by sending personalized signals after its inventory realization. A natural complement of this scenario is one in which the firm's inventory is deterministic and known to all customers but the demand is uncertain and is realized only after the seller has made its pricing decisions.
Our main insight, that private information provisioning has significant benefits compared with public information provisioning, continues to hold in this setting. As we will elaborate, though the analysis for the private information setting is identical, some modifications are needed for the public information setting, in terms of both the analysis and the results.

Formally, in this section, we assume that the firm's inventory is deterministic, with a total size equal to $q$, which we normalize to unity. The overall mass of customers, or market size, is a priori unknown to both the firm and the customers and is denoted by the random variable $D$. This market size can be either high, $D = \bar{d} > q$, or low, $D = \underline{d}$. The prior probabilities $P(D = \bar{d})$ and $P(D = \underline{d})$ are common between the firm and the customers. Analogous to the quantity uncertainty model, we set $\underline{d} = q = 1$, so that when demand is low, there is no shortage of the product. We would like to emphasize that in this model, customers are not "present" in the market at the time of the design of (and commitment to) the mechanism. Instead, they appear right after the mechanism has been announced and right before the selling season begins.

To establish that the main insights are similar, let us first focus on an illustration analogous to that in Section 1.4. That is, we set $P(D = \bar{d}) = 1/2$, valuations are uniformly distributed on $[0, 1]$, and $\bar{d} = 2$. In this case, when $p = p_m = 1/2$, there is abundance even in the high-demand case, with $\bar{d}\bar{F}(p_m) = 1$. As one would expect, when the firm does not communicate, the optimal pricing decisions are $p_1 = p_2 = 1/2$, and the optimal revenue is equal to $R^{\text{no-information}} = 0.375$. Moving to public signaling mechanisms, as we will see in Theorem 1.7.2, it turns out that there is no benefit from publicly signaling information, and the optimal public signaling revenue is $R^{\text{public}} = R^{\text{no-information}} = 0.375$. In this case, the private signaling mechanism is identical to that of Section 1.4 (Figure 1.2), with the $H$-type ($L$-type) firm now represented by the $\underline{d}$-type ($\bar{d}$-type) firm. In particular, we have $p_1 = 1/2$ and $p_2 = 1/3$; the $\bar{d}$-firm sends a Buy signal to all customers with $v \geq 1/2$; and the $\underline{d}$-firm sends a Buy signal with certainty only to customers with $v > 2/3$ and with probability $6(v - 1/2)$ to customers with $v \in [1/2, 2/3]$. Under this equilibrium, $R^{\text{private}} = 0.396 > R^{\text{public}}$. Thus, we find that the main insight of Section 1.4 continues to hold in this setting.

There is a subtle difference in how the benefits of signaling are realized in the two models. Under quantity uncertainty, the firm benefits in the high-quantity case because it can sell more relative to the baseline of full information. Under demand uncertainty, the firm benefits when demand is low, and consequently, though it can sell more relative to the baseline of full information, its ability to generate additional sales is limited relative to the scenario of high demand. Thus, the benefits of information signaling are lower under demand uncertainty.

The asymmetry in the demand realizations leads to another effect in the case of public signaling. We find that for high enough values of $\bar{d}$, unlike under quantity uncertainty, different public signaling mechanisms generate different revenues. For instance, consider $\bar{d} = 4$ in our running example. Then, if the firm does not communicate any information, the optimal prices are $p_1 = 0.72$ and $p_2 = 0.69$, which generate a revenue of $R^{\text{no-information}} = 0.47$. Alternatively, if the firm reports its type truthfully, then the optimal prices are $p_1 = 0.75$ and $p_2 = 0.5$, yielding a revenue of $0.5$. The sketch below reproduces the simpler of these revenue figures.
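The following sketch takes the mechanism quantities (prices and Buy masses) from the text as given and automates only the arithmetic; the no-information optimum of 0.47 for $\bar{d} = 4$ requires the full equilibrium computation and is quoted as-is.

```python
q = 1.0   # deterministic capacity; P(D = d_high) = 1/2 throughout

def no_info(d_high, p):
    """Both periods priced at p; valuations uniform on [0, 1]."""
    low = p * (1.0 - p)                    # low demand: mass 1
    high = p * min(d_high * (1.0 - p), q)  # capped at capacity
    return 0.5 * (low + high)

print(no_info(2, 0.5))            # 0.375: the d_high = 2 benchmark

# Private signaling for d_high = 2: the high-demand firm sells out at
# p1 = 1/2; the low-demand firm sends Buy to a mass of 5/12 and serves
# the remaining 2/3 - 5/12 = 1/4 waiters at p2 = 1/3.
p1, p2 = 0.5, 1.0 / 3.0
R_private = 0.5 * (p1 * q) + 0.5 * (5.0 / 12.0 * p1 + 0.25 * p2)
print(round(R_private, 3))        # 0.396

# Truthful disclosure for d_high = 4 with p1 = 0.75, p2 = 0.5: in the
# low state everyone waits for the clearance price; in the high state
# the firm sells out at p1.
R_truthful = 0.5 * (0.5 * (1.0 - 0.5)) + 0.5 * (0.75 * q)
print(R_truthful)                 # 0.5
```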
In fact, we find that truth-telling public mechanisms are (strictly) optimal among public signaling mechanisms for sufficiently high demand. Intuitively, to persuade customers to buy early in the low-demand scenario, the firm sends a Buy signal with some probability. This adversely affects the propensity of customers to buy early in the high-demand scenario. Given that the market size is larger in the high-demand scenario, this adverse effect dominates, and when restricted to public signals, the firm finds it optimal to truthfully reveal its type.

It is useful to interpret this scenario from the perspective of the intuitive argument of Section 1.4.2. As in that argument, for any public signal, it is equivalent to consider all high-valuation customers to be represented by the indifferent customer. Then, the firm can generate revenue from the customer in each period as follows: (i) Period 1: Intuitively, the firm can extract the entire valuation of the indifferent customer and consequently the same amount from all customers with valuations greater than that of this customer. However, unlike in Section 1.4.2, the mass of the total market can be either $\underline{d}$ or $\bar{d}$. Because of this difference, the firm can benefit from signaling so that different-valuation customers are indifferent in each state of the world (demand realization). Let us denote these valuations by $t_{\bar{d}}$ for the case $D = \bar{d}$ and $t_{\underline{d}}$ for the case $D = \underline{d}$. (ii) Period 2: the $D$-type firm can extract $p_2$ from the rest of the market, which buys in the second period, as given by
\[ B(p_2, t_D) = \max\left\{0,\; \min\left\{D\left(\bar{F}(p_2) - \bar{F}(t_D)\right),\; q - D\bar{F}(t_D)\right\}\right\}. \]
Thus, combining the revenues from both periods, we obtain the total revenue as
\[ E_D\left[t_D\, D \bar{F}(t_D) + p_2\, B(p_2, t_D)\right], \]
which, as before, is independent of the period 1 price $p_1$. However, it depends on the two thresholds $t_{\underline{d}}$ and $t_{\bar{d}}$. For abundant capacity (as in the analog of Section 1.4), the two thresholds are identical, and we recover the same insight: public signaling is ineffective even under demand uncertainty. However, if the two thresholds are different, then, provided the firm is indeed able to persuade two different-valuation customers so that both are indifferent, it can generate additional revenue. It then follows that if the firm engages in truth-telling, it can persuade both these customer types in the corresponding demand states and thus improve its revenue relative to no information provisioning. The following theorem states the formal result.

Theorem 1.7.2. For the case $q = \underline{d}$, we have:
1. If $\bar{d}\bar{F}(p_m) \leq q$, then all public signaling mechanisms that set $p_1 = p_2 = p_m$ are optimal and revenue equivalent.
2. If $\bar{d}\bar{F}(p_m) > q$, then the signaling mechanism that provides full information and sets prices at $p_1 = \max\{\bar{F}^{-1}(q/\bar{d}), p_m\}$ and $p_2 = p_m$ is strictly optimal.

Unlike public signaling mechanisms, we find that the private signaling mechanisms under demand uncertainty are exactly the same as in the uncertain-quantity model. For completeness, we state the formal result in the following theorem.

Theorem 1.7.3. If $q \leq \bar{d}\bar{F}(p_m)$, then the optimal private signaling mechanism sets prices $p_1 = \bar{F}^{-1}(q/\bar{d})$ and $p_2$ that satisfy
\[ \frac{d}{dp_2}\left[p_2\, \bar{F}(p_2)\right] = \bar{F}(\bar{v}), \tag{1.21} \]
\[ \bar{v} = \sup\{v \in V : \sigma_{\underline{d}}(v) < 1\}, \tag{1.22} \]
\[ \sigma_{\underline{d}}(v) = \min\left\{\frac{P(D = \bar{d})}{P(D = \underline{d})} \cdot \frac{v - p_1}{p_1 - p_2},\; 1\right\}, \tag{1.23} \]
and the signaling mechanism is described below:
1. If $D = \bar{d}$, the firm signals Buy with certainty to customers with valuations greater than $p_1$ and signals No to all other customers.
2. If $D = \underline{d}$: (i) The firm signals Buy with probability $\sigma_{\underline{d}}(v)$ as defined in (1.23) to all customers with $v \geq p_1$, (conditionally) independently across customers; with the remaining probability, the firm signals Wait. (ii) The firm signals Wait with certainty to customers with valuations $p_2 \leq v < p_1$.

The action of fact-checking by a user is private and not visible to other users or the platform. A user with $c_i = 0$ is referred to as a good user, since she has the ability to fact-check information with minimal effort, while a user with $c_i = c_H$ is a bad user, since for her, fact-checking requires significant effort and hence she is less incentivized to fact-check. Users, after they make their own fact-checking decision and prior to observing the platform's fact-checking decision, unless otherwise stated, have the option to vote on whether they think the article is true ($a_i^v = +1$) or fake ($a_i^v = -1$) or to not vote ($a_i^v = 0$). Finally, after the fact-checking and voting actions are realized, each user needs to guess the true value of $\theta$. We denote this guess by $x_i$. In settings where voting is facilitated, the platform announces and commits to a fact-checking strategy $\phi(z_1, z_2)$, which denotes the probability with which it fact-checks and reports the value of $S$ based on the visible actions $a_1^v = z_1$ and $a_2^v = z_2$ of user 1 and user 2, respectively. Figure 2.1 shows the timeline of events: both users have a uniform prior on $\theta$; $S$ and $\theta$ are realized; users observe the article $y = S\theta$; the platform commits to a fact-checking strategy $\phi$; each user decides whether to fact-check the article and what to vote; the platform reports $S$ with probability $\phi(a_1^v, a_2^v)$; posterior beliefs about users' costs of fact-checking are formed; and each user $i$ chooses $x_i$ as her guess of the true state.

A user's strategy profile is a (potentially randomized) mapping from her cost $c_i$ to the probability of choosing each fact-checking, voting, and guessing action. In other words, a strategy profile $p_i$ for user $i$ is a mapping from $c_i$ to the probability simplex of $\{0, 1\}^3$, i.e., $p_i : c \to \Delta(\{0, 1\}^3)$. The platform's fact-checking strategy is described by a mapping from users' votes to the probability simplex on $\{0, 1\}$. In other words, the platform's strategy is defined by the probability of fact-checking, $\phi(a_1^v, a_2^v)$.

2.3.3 Utility of Users and Platform

Each user's utility comprises three components. First, a user gets a reward of 1 if her eventual guess about $\theta$ is correct. Furthermore, she pays the cost $c_i$ if she decides to fact-check. Finally, she maximizes her reputation on the platform. We model the latter as the probability that the platform assigns to user $i$ being a good user given all the publicly available information, i.e.,
\[ P(c_i = 0 \mid a_i^v, a_{-i}^v, a^\phi). \]
We note here that this is an important modeling assumption, as many of the results of this chapter depend on it. In particular, an alternative formulation that we considered was modeling the reputation effect as the probability that the other player assigns to user $i$ being a good user, in which case this probability could depend on the fact-checking decision of the former. As the goal of this chapter is to obtain insights for a more realistic situation with many users, we believe that evaluating reputation from an outsider's point of view (i.e., the platform's) is more realistic.
Furthermore, our definition of reputation can easily be translated into a literal public rating of users that the platform can publicly disclose, and our work highlights how this reputation could be defined as a function of the user's behavior on the platform. Finally, the reputation term could instead be defined as the probability that the user fact-checked, given the votes of all players and the platform's fact-checking action. We believe that, from a modeling perspective, it is more meaningful to define reputation in terms of an intrinsic characteristic of the user (her cost of fact-checking) rather than a one-time interaction with an article. At a high level, a belief of a higher cost of fact-checking is bad reputation, since it implies that the user does not have the aptitude to fact-check. On the other hand, the reputation of a user who is believed to have a lower cost of fact-checking, owing to her ability to fact-check, is better. Combining all these terms, the utility of a user is equal to
\[ U_i(\theta, a_i^f, a_i^v, a_{-i}^f, a_{-i}^v, a^\phi, x_i, c_i) = \underbrace{\mathbb{1}(x_i = \theta)}_{\text{Information Gain } (I_i)} - \underbrace{c_i\, a_i^f}_{\text{Fact-checking Cost}} + \beta\, \underbrace{P(c_i = 0 \mid a_i^v, a_{-i}^v, a^\phi)}_{\text{Reputation Gain } (R_i)}, \tag{2.1} \]
where $\mathbb{1}(\cdot)$ denotes the indicator function. The first term measures whether the user's guess of the truth is correct, the second term is the cost of fact-checking incurred by the user, and the last term is the reputation component. Here $\beta \in \mathbb{R}_+$ is a measure of how much each user $i$ values her reputation. In the remainder of this chapter, we denote by $I_i$ and $R_i$ the information and reputation components, respectively, of the utility of user $i$.

We conclude this subsection by presenting the platform's utility, which consists of the sum of the users' information gains, net of the users' fact-checking costs and the total cost of fact-checking by the platform, i.e.,
\[ V(\theta, a_i^f, a_i^v, a_{-i}^f, a_{-i}^v, a^\phi, x_i, c_i) = \underbrace{\sum_{i \in \{1,2\}} \mathbb{1}(x_i = \theta)}_{\text{Information Gain}} - \underbrace{\sum_{i \in \{1,2\}} c_i\, a_i^f}_{\text{Fact-checking Cost of Users}} - \underbrace{c_p\, \mathbb{1}(a^\phi \neq 0)}_{\text{Fact-checking Cost of Platform}}, \]
where $c_p$ is the platform's cost of fact-checking. In the remainder of the chapter, we write $U_i$ and $V$ to denote the utilities of the users and the platform, respectively, dropping their dependence on the actions and the outcomes of the game. A minimal computational sketch of this bookkeeping follows.
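For concreteness, here is a small sketch of the payoff accounting in (2.1) and of the platform objective $V$. The action encodings and the externally supplied reputation posterior are assumptions of the sketch, not part of the model's formal definitions.

```python
def user_utility(theta, guess, a_f, c_i, beta, reputation_posterior):
    """U_i from (2.1): information gain, minus fact-checking cost, plus
    the beta-weighted reputation term P(c_i = 0 | votes, platform action),
    which is computed elsewhere and passed in. a_f is 0 or 1."""
    information_gain = 1.0 if guess == theta else 0.0
    return information_gain - c_i * a_f + beta * reputation_posterior

def platform_utility(theta, guesses, fact_checks, costs, a_phi, c_p):
    """V: users' total information gain, net of all fact-checking costs.
    a_phi != 0 means the platform fact-checked and reported S."""
    info = sum(1.0 for x in guesses if x == theta)
    user_costs = sum(c * a for c, a in zip(costs, fact_checks))
    return info - user_costs - c_p * (1.0 if a_phi != 0 else 0.0)
```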
2.3.4 Truthful and Unbiased Strategies and Neutral Fact-Checking

Throughout this chapter, we focus on a natural family of strategy profiles that we refer to as truthful and unbiased, which satisfy the following properties:
1. When a user fact-checks and votes, she votes truthfully.
2. A user breaks ties towards fact-checking instead of not fact-checking.
3. A user who fact-checks breaks ties towards voting instead of not voting.
4. When a user does not fact-check, if she decides to vote, she chooses a random vote of $+1$ or $-1$ with equal probability.

Essentially, we assume that when a user spends the effort to fact-check and decides to vote, she shares the result of her fact-checking with the rest of the platform. Furthermore, we want to provide insights that do not depend on potential biases of users towards one or the other state of the world; hence, tie-breaking is done randomly. Mathematically, other, more complicated strategy profiles could be Perfect Bayesian Equilibria of the game. We believe that it is reasonable to focus on truthful strategy profiles in order to keep this chapter away from biases and complex signaling-game behavior. Finally, we choose to focus on equilibria where, in the absence of more information, any user breaks ties randomly between voting $+1$ and $-1$, in order to stay away from modeling idiosyncratic biases of users that would complicate the exposition of the main insights of this chapter. In the remainder of this chapter, we focus on Perfect Bayesian Equilibria in which users' strategy profiles are truthful and unbiased, and we refer to them simply as equilibria.

In the same spirit, in order to focus on realistic fact-checking strategies of a platform that is not biased towards one or the other direction of the state of the world, we focus on the set of neutral fact-checking strategies defined below.

Definition 3. For a given equilibrium, a fact-checking strategy is neutral if and only if
\[ P(a^\phi = S \mid a_i^v = +1) = P(a^\phi = S \mid a_i^v = -1). \]

We note that the set of neutral strategies does not exclude strategies where the probability of fact-checking depends on the values of the two votes. Neutral strategies ensure that the aggregate probability of fact-checking by the platform is not biased towards $+1$ or $-1$ when conditioned on only one vote and in the absence of more information. This allows us to simplify the space of fact-checking strategies under consideration to those expressed as
\[ \phi(a_1^v, a_2^v) = \begin{cases} \phi_{\text{agree}}, & \text{if } a_1^v = a_2^v \neq 0, \\ \phi_{\text{disagree}}, & \text{if } a_1^v \neq a_2^v,\; a_1^v \neq 0,\; a_2^v \neq 0, \\ \phi_{\text{single}}, & \text{if exactly one of } a_1^v, a_2^v \text{ is } 0, \\ \phi_{\text{none}}, & \text{if } a_1^v = a_2^v = 0. \end{cases} \]

2.4 No-Voting and Altruistic Equilibria

In this section, we narrow down the possible equilibria of this game to two types: no-voting equilibria and altruistic equilibria. Specifically, we show that the only types of equilibria that can arise from truthful and unbiased strategies are either equilibria where no user votes or equilibria where a good user always fact-checks and votes truthfully.

To obtain this simplification of the space of possible equilibria, we analyze the strategic interactions between the players and the platform by evaluating the expected utility of users with regard to guessing the value of $\theta$ via the action $x_i$.

Lemma 2.4.1. For any equilibrium and all $a_i^f, a_i^v, a_{-i}^v, a^\phi$,
\[ \arg\max_{x_i} E[U_i \mid a_i^f, a_i^v, a_{-i}^v, a^\phi] = \begin{cases} 1, & \text{if } P(\theta = 1 \mid a_i^f, a_i^v, a_{-i}^v, a^\phi) > 1/2, \\ -1, & \text{otherwise}, \end{cases} \tag{2.2} \]
and the expected information gain is given by
\[ E[I_i \mid a_i^f, a_i^v, a_{-i}^v, a^\phi] = \max_{x \in \{-1, 1\}} P(\theta = x \mid a_i^f, a_i^v, a_{-i}^v, a^\phi). \]

A direct consequence of this calculation is that if either the user or the platform fact-checks, the information gain is equal to the maximum possible value of 1, since the true value of the state of the world is revealed to the user.

Lemma 2.4.2. The information gain of user $i$ when she fact-checks is given by
\[ P(x_i = \theta \mid a_i^f = 1) = 1. \tag{2.3} \]
The information gain of user $i$ when the platform fact-checks is given by
\[ P(x_i = \theta \mid a^\phi = S) = 1. \tag{2.4} \]

Since for a user who fact-checks the information gain is equal to 1 regardless of the other outcomes of the game (that is, the platform's fact-checking and the other user's vote), the only remaining consideration is her expected reputation gain. The next lemma presents the natural result that a good user always fact-checks.

Lemma 2.4.3. In all truthful and unbiased equilibria, good users with $c_i = 0$ fact-check with probability 1.

At this point in our analysis, we introduce two important concepts: the altruistic and the no-voting equilibria.
Definition 4. An equilibrium is called altruistic if and only if a good user fact-checks and reports truthfully with probability one, i.e.,
\[ P(a_i^v = S,\; a_i^f = 1 \mid c_i = 0) = 1. \]

Definition 5. An equilibrium is called no-voting if and only if both types of users never vote, i.e.,
\[ P(a_1^v = a_2^v = 0) = 1. \]

We next state the main result in simplifying the structure of equilibria of our game.

Proposition 2.4.4. For every truthful and unbiased equilibrium that can exist for some fact-checking strategy of the platform, there exists another fact-checking strategy at which an altruistic or a no-voting equilibrium achieves the same expected utility for the platform.

The intuition is fairly straightforward. If it were strictly optimal for a good user not to vote when she fact-checks, then a bad user would obtain a reputation score of 0 if she voted. Therefore, the only reason for a bad user to vote is to increase the platform's probability of fact-checking. Under such an equilibrium, the platform can always increase the probability of fact-checking when the user does not vote in such a way that the total expected information gain of users remains the same. Under such a change, none of the users' incentives change, and hence their fact-checking strategies remain the same, making it optimal for a user who does not fact-check to not vote, which results in a no-voting equilibrium.

Given this simplification of the space of equilibria that we consider, we next obtain the conditions that we will use to describe the various altruistic equilibria. Each user $i$ decides to fact-check if and only if
\[ \left(E[I_i \mid a_i^f = 1] - E[I_i \mid a_i^f = 0]\right) - c_i + \beta\left(E[R_i \mid a_i^f = 1] - E[R_i \mid a_i^f = 0]\right) \geq 0, \tag{FC-condition} \]
where the first term represents the change in information gain if the user decides to fact-check instead of not fact-checking, the second term is the cost of fact-checking, and the last term is the change in reputation gain if the user decides to fact-check instead of not fact-checking.

A user who does not fact-check votes if and only if
\[ \left(E[I_i \mid a_i^f = 0, a_i^v = 1] - E[I_i \mid a_i^f = 0, a_i^v = 0]\right) + \beta\left(E[R_i \mid a_i^f = 0, a_i^v = 1] - E[R_i \mid a_i^f = 0, a_i^v = 0]\right) \geq 0, \tag{V-condition} \]
where the first term represents the change in information gain if the user decides to vote instead of not voting when not fact-checking, and the second term represents the corresponding difference in reputation gains. We refer the reader to Appendix B for the exact expressions of these conditions.

Altruistic equilibria can be described by the probability $f$ of a user fact-checking and the probability $v$ of a user voting when she does not fact-check. Thus, we have
\[ f = P(a_i^f = 1), \qquad v = P(a_i^v \neq 0 \mid a_i^f = 0). \]
It can be seen that the (FC-condition) is always satisfied for a good user, implying that $f \geq 1/2$. Checking the inequalities for the bad user gives us her strategy profile. Table 2.1 lists all altruistic equilibria that are possible based on the values of $f$ and $v$. In particular, if in an equilibrium the bad user always fact-checks ($f = 1$), the (FC-condition) must be satisfied when $c_i = c_H$; in this situation, the (V-condition) is trivially satisfied (equilibrium 1 in the table). If the bad user never fact-checks ($f = 1/2$), the (FC-condition) is not satisfied. In that situation, if the (V-condition) is satisfied, then the bad user always votes, choosing $+1$ and $-1$ with equal probability ($v = 1$, equilibrium 2).
If the (V-condition) is not satisfied, then the user opts not to vote at all ($v = 0$, equilibrium 3), while if the user mixes between voting and not voting ($v \in (0, 1)$), she must be indifferent between the two when not fact-checking (equilibrium 4). Similarly, if the bad user mixes between fact-checking and not fact-checking ($f \in (1/2, 1)$), then she must be indifferent between them; comparing her utility between voting randomly and not voting, when not fact-checking, gives the last three cases in the table.

Potential altruistic equilibria | Bad user fact-checks | Bad user votes when not fact-checking | Value of $f$ | Value of $v$
1 | Always | Always | 1 | 1
2 | Never | Always | 1/2 | 1
3 | Never | Never | 1/2 | 0
4 | Never | Sometimes | 1/2 | (0, 1)
5 | Sometimes | Always | (1/2, 1) | 1
6 | Sometimes | Never | (1/2, 1) | 0
7 | Sometimes | Sometimes | (1/2, 1) | (0, 1)

Table 2.1: List of all possible altruistic equilibria, indicating the actions of the bad user and the respective values of the user's fact-checking probability $f$ and the probability $v$ of voting when not fact-checking. The good user always fact-checks and votes truthfully.

In the next section, we analyze the existence and optimality of these equilibria numerically.

2.5 Results

2.5.1 Numerical Experiments Setup

Due to the complexity of the platform's optimization problem, we numerically explore the optimal fact-checking policies and the corresponding equilibria. We numerically solve for the optimal equilibrium for different values of the primitives by checking each altruistic equilibrium in Table 2.1 for existence over a grid of $\phi_{\text{agree}}$, $\phi_{\text{disagree}}$, $\phi_{\text{single}}$, and $\phi_{\text{none}}$ values. We also add the no-voting equilibrium to the simulation and label it equilibrium 0. We then find the equilibrium that maximizes the platform's utility among those that can exist for the respective primitive parameters and report the corresponding platform utility and optimal fact-checking strategies.

We discuss the outcomes of these simulations in the next subsections in the following settings. We first discuss the setting of constant fact-checking by the platform, in which the platform's fact-checking strategy does not depend on the votes of the users. Clearly, due to the absence of any voting, the no-voting equilibrium is also a case of constant fact-checking. Along with the no-voting equilibrium, we consider all possible altruistic equilibria under the setting $\phi_{\text{agree}} = \phi_{\text{disagree}} = \phi_{\text{single}} = \phi_{\text{none}}$ and analyze the resulting optimal equilibria and their properties. We then consider the setting in which the platform's fact-checking strategy can differ based on the votes of the users. Over a grid of $\phi_{\text{agree}}$, $\phi_{\text{disagree}}$, $\phi_{\text{single}}$, and $\phi_{\text{none}}$ values, we check for the existence and optimality of the no-voting and all possible altruistic equilibria and characterize the outcomes that arise in this setting.

2.5.2 Constant Fact-Checking

We first look at the equilibria in which the platform does not incorporate user voting into its fact-checking strategy. In other words, the platform sets
\[ \phi_{\text{agree}} = \phi_{\text{disagree}} = \phi_{\text{single}} = \phi_{\text{none}} = \phi. \]
We first discuss the special case of the no-voting equilibria and then move to the possible altruistic equilibria.

2.5.2.1 No-Voting Equilibria

We first consider the simpler case of equilibria in which no user votes. Users still have the option of fact-checking the article for their own information gain but do not convey any information to the other user or the platform. In that case, the reputation term is irrelevant (in particular, the reputation term is equal to 1/2 for both players). Moreover, since users are not voting, the platform's fact-checking strategy can be described by a constant probability $\phi$ of fact-checking. Given the platform's fact-checking strategy, each user decides whether to fact-check. Specifically, a user who fact-checks obtains an information gain equal to 1. If a user does not fact-check, then her information gain is equal to 1/2 when the platform does not fact-check and 1 when it does. This gives rise to the following structure of user strategies.
Moreover, since users are not voting the platform’s fact-checking strategy can be described by a constant probability of fact-checking. Given the platform’s fact-checking strategy, each agent decides whether to fact-check or not. Specically, a user that fact-checks obtains information gain that is equal to 1. And if a user does not fact check, then her information gain is equal to 1/2 when the platform does not fact-check and 1 when it does. This gives rise to the following structure of user strategies. 53 Lemma2.5.1. For all no-voting equilibria with platform’s fact-checking strategy, good users always fact- check while bad users fact-check if and only ifc H (1)=2. The expected utility of the platform is equal to V novoting () = 1 + 1 2 +1 c H 1 2 1 2 +1 c H > 1 2 2 1 c H 1 2 c H c P : The lemma above suggests that the platform’s objective is linear in and hence obtaining the optimal fact-checking strategy is fairly trivial as the next proposition illustrates. Proposition2.5.2. For any no-voting equilibrium, the optimal strategy of the platform is given by = 8 > > > > < > > > > : 1; ifc P minf1=2;c H g 0; otherwise and the corresponding optimal value for the platform is equal to E[V novoting ] = 8 > > > > > > > > > < > > > > > > > > > : 3=2; ifc H > 1=2;c P > 1=2 2c H ; ifc H 1=2;c H c P 2c P ; ifc P 1=2;c P <c H The above proposition suggests that when it’s too costly for either the platform or the bad user to fact-check, neither of them puts any eort in fact-checking. In such a case, the cost paid in fact-checking outweighs the informational benet obtained by fact-checking. When the costs are lower, the platform compares its cost with that of the bad user. If the bad user’s cost is lower, then the platform again puts no eort as it’s more ecient for the user to pay the cost of fact-checking. And when the platform’s cost is 54 lower, then it takes the initiative and always fact-checks the article maximizing her utility and the utility of the users. NumericalStudies In our numerical studies in Figure 2.2, we see that whenc H orc P is negligibly small, the no voting equilibrium achieves the maximum utility for the platform. This is because the bad user or the platform, respectively, do not incur much cost in fact-checking and hence always fact-check. We also see the dominance of no voting when the reputation weight is higher in addition to lower fact-checking costs of platformc P or the bad userc H . This is because when users give a high weight on the reputation component of their utility, when allowed to vote are willing to take the chance to vote even if they did not fact-check. As a result observed votes carry less information and the information gain of bad users is smaller, hurting the platform’s utility. In such a case, the platform would have been better o by just fact-checking on its own (ifc P is smaller) or if the bad user always fact-checks (ifc H is smaller). However, as discussed in Proposition 2.5.2, if costs are higher in the no-voting equilibrium, the platform does not spend any eort in fact-checking and the bad user also does not fact-check. In such a situation, the utility of the platform becomes 3/2 as in expectation a good user always fact-checks while the bad user is correct half of the times. However, it is possible that allowing voting in such a case might benet the platform. This is because a good user can potentially reveal the true state and thus help in improving the information gain of the bad user. We explore this case next. 
2.5.2.2 Altruistic Equilibria

In Figure 2.2, we see that when the costs of fact-checking $c_H$ and $c_P$ are higher, the Case 2 altruistic equilibrium (shown in green) becomes the dominant equilibrium. We further call this equilibrium the fully pooling equilibrium, since just by seeing user votes, the platform cannot infer the type of the user. When the platform's fact-checking cost is large, it does not put any effort into fact-checking, as can be observed from the last row of Figure 2.2. However, due to the option of voting available to the users, in this equilibrium a bad user can learn the truth from the vote of a good user. This improves her information gain and hence the platform's utility.

[Figure 2.2: Heatmap plots showing the optimal equilibrium (top row) and the $V^\ast$ (second row), $f^\ast$ (third row), $v^\ast$ (fourth row), and $\phi^\ast$ (last row) values for the platform as $\beta$ and $c_H$ vary, for constant fact-checking. Here $c_P$ is 0.2 (left column), 0.5 (middle column), and 0.8 (right column).]

We explain the above point with a concrete example. Consider the setting with $c_H = c_P = 0.5$, and take $\beta = 0$ to simplify the case. In the no-voting equilibrium, we can see from Proposition 2.5.2 that neither the platform nor the bad user fact-checks, which leads to a platform utility of 3/2. In the fully pooling equilibrium, the platform and the bad user still do not fact-check. However, a difference is observed in the information gain. The net information gain of the users in the fully pooling equilibrium for this case is given by
\[ 1 + \left(\frac{1}{2} + \frac{1}{2} \cdot \frac{1}{2}\right) = \frac{7}{4} > \frac{3}{2}. \]
Here, the additional 1/4 arises from the learning effect from the other, good user.
Thus, enabling voting helps improve the utility of the users and, in turn, the utility of the platform, with no extra effort on the platform's part.

We also observe from Figure 2.2 that the platform either always fact-checks, when its fact-checking cost is small (left column), or decides not to fact-check at all, when the cost is higher (middle and right columns). The restriction that the platform fact-check with the same probability irrespective of the users' votes potentially makes the platform inefficient. The platform might not want to make the extra effort when a good user has already announced the true value. Similarly, the platform might wish to target scenarios in which the users do not vote and hence no new information is revealed. Thus, coupling its fact-checking strategy to the users' voting behavior might create better outcomes for the platform. We explore this in the next subsection.

[Figure 2.3: Heatmap plots showing the optimal equilibrium (top row) and the optimal $V^\ast$ (second row), $f^\ast$ (third row), and $v^\ast$ (bottom row) values for the platform as the reputation weight $\beta$ and the cost of user fact-checking $c_H$ vary. Here the platform's cost of fact-checking $c_P$ is 0.2 (left column), 0.5 (middle column), and 0.8 (right column).]

2.5.3 Voting-Dependent Fact-Checking

We next consider the general setting in which the platform chooses its fact-checking strategy based on the votes of the users. Figures 2.3 and 2.4 show the respective equilibria and their properties in this setting. Naturally, the constant fact-checking equilibria exist here as well.
But we also see the existence of several other equilibria and the dominance of Case 3 (shown in pink in the plots) for lower values of the reputation weight $\beta$.

[Figure 2.4: Heatmap plots showing the optimal $\phi_{\text{agree}}$ (top row), $\phi_{\text{disagree}}$ (second row), $\phi_{\text{single}}$ (third row), and $\phi_{\text{none}}$ (bottom row) values for the platform as the reputation weight $\beta$ and the cost of user fact-checking $c_H$ vary. Here the platform's cost of fact-checking $c_P$ is 0.2 (left column), 0.5 (middle column), and 0.8 (right column).]

In the equilibrium of Case 3, a good user always fact-checks and votes truthfully, while a bad user never fact-checks and votes 0. Therefore, a vote carries a high informational benefit for the non-voting users and saves the platform the cost of fact-checking whenever a vote is present. We refer to this equilibrium as the fully separating equilibrium, since a user's type is revealed just by seeing her vote. To support such an equilibrium, the platform always fact-checks when no user votes. This incentivizes a bad user not to vote, in expectation of a high information gain, rather than to vote, which could achieve a higher reputation gain if the user were lucky. Clearly, such desirable equilibria can only be supported for lower reputation weights $\beta$: as bad users care more about the reputation gain, they become willing to vote in the expectation of agreeing with the platform or the other user when $\beta$ grows larger. Also, since the platform selectively chooses when to fact-check, it does not incur a high expected cost of fact-checking even though it always fact-checks when no user votes.

We illustrate the above argument by continuing with our example from the last subsection. We have $c_P = c_H = 0.5$, and we again take $\beta = 0$. Consider the fact-checking strategy of the platform with $\phi_{\text{agree}} = \phi_{\text{disagree}} = \phi_{\text{single}} = 0$ and $\phi_{\text{none}} = 1$. It can be verified that both the (FC-condition) and the (V-condition) are not satisfied. The platform's utility in this equilibrium becomes
\[ 2 - \frac{1}{4}\, c_P = \frac{15}{8} > \frac{7}{4}, \]
where the 2 reflects that the users always learn the true value, and the 1/4 is the probability of the event that both users are bad. The sketch below reproduces this comparison.
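This is a quick numeric check of the comparison, using the accounting described above (two users, each good with probability 1/2, $\beta = 0$):

```python
c_P, c_H = 0.5, 0.5

# Fully separating: good users fact-check and vote; bad users stay
# silent. Every user learns theta (by checking, from the other's vote,
# or from the platform's report), and the platform pays c_P only when
# both users are bad, i.e., with probability 1/4.
V_separating = 2.0 - 0.25 * c_P                        # = 15/8

# Fully pooling: only good users fact-check, and the platform never
# does. A bad user follows the other user's vote, which is correct
# w.p. 1 if that user is good and w.p. 1/2 otherwise.
gain_good = 1.0
gain_bad = 0.5 * 1.0 + 0.5 * 0.5
V_pooling = 2.0 * (0.5 * gain_good + 0.5 * gain_bad)   # = 7/4

print(V_separating, V_pooling)                         # 1.875 > 1.75
```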
In Figure 2.5, we plot the percentage increase in the platform's utility obtained from using voting-dependent fact-checking strategies compared with constant fact-checking strategies. We see that the platform gains from using voting-dependent fact-checking when the reputation weight of the users is low. We also see that the percentage improvement is largest when the optimal voting-dependent equilibrium is fully separating. This is because the platform strategically chooses a fact-checking strategy that reveals the true state only when both users are bad. In all other cases, at least one of the users is a good user who fact-checks and votes the true value. Thus, if a bad user is paired with a good user, the good user tells her the value of the article, and the platform has to expend no effort. This significantly decreases the platform's expected cost of fact-checking, and the users always learn the true state: by themselves (if they are good), from the other user (if the other user is good), or from the platform (if both users are bad). In our numerical studies, we find that the highest percentage increase in the platform's utility achieved by using vote-dependent fact-checking compared with constant fact-checking is around 10.7%, attained at $\beta = 0.2$, $c_P = 0.25$, and $c_H \approx 0.3$.

[Figure 2.5: Heatmap plots showing the percentage increase in the platform's utility from using a voting-dependent fact-checking strategy, as $c_H$ and $\beta$ vary; $c_P$ is 0.2 (left column), 0.5 (middle column), and 0.8 (right column).]

2.6 Conclusion

This chapter discusses the effects of introducing the action of user voting on a social media platform that aims to minimize misinformation on its platform in a cost-effective manner. The users want to improve their information about the truth while limiting their own effort spent on fact-checking the news. The users also have a social reputation for being a cost-effective user, which they wish to improve through their votes. To our knowledge, there has been no attempt in the literature to study the equilibrium outcomes of such a model.

We study a special class of equilibria in which each user is truthful in her vote if she fact-checks and the platform's fact-checking strategy is not biased towards a particular state of the unrevealed truth. We show that an equilibrium in this setting can be either no-voting (in which no user votes) or altruistic (in which a good user always fact-checks and votes truthfully). This narrows our search down to a few equilibria, and we provide the conditions for the existence of each of them. In our numerical studies, we identify the cases in which voting helps improve the platform's utility. We then consider settings where the platform's fact-checking depends on user votes and find that this improves the platform's utility even further.
Chapter 3

Searching for an Infection in a Network

3.1 Introduction

COVID-19 (SARS-CoV-2) was declared a pandemic by the World Health Organization on March 11, 2020 (WHO 2020). As of June 2, 2021, there have been more than 170 million confirmed cases worldwide, with a rapidly increasing death toll currently estimated at about 3.5 million (WHO 2021). COVID is yet another coronavirus that spreads from person to person at a rapid pace. To prevent a devastating spread of infection, it is critical to detect infected individuals in a population quickly. The identified individuals can then be isolated, and other potential sources of infection can be found using contact tracing.

Contact tracing has long been used by public health experts to control new outbreaks for which treatment is not readily available. It works by finding the individuals who have recently come into contact with an infected person. Preventive measures such as home isolation are then applied to these individuals to prevent the further spread of infection. For contact tracing to be effective, it is essential that individuals are investigated efficiently and the infected ones are identified. To this end, in this chapter we work on finding an efficient screening policy that can find an infected individual in a network in minimal time.

We consider an infection spreading deterministically in a given social network. Each node in our network represents an individual in the population, with their interactions represented by the edges in the network. After each time step, an infected individual spreads the infection to its neighboring nodes. The policy maker is unaware of the state of any individual (infected or not) and can inspect one individual during each time step. Our aim is to design a search procedure for the policy maker to find an infected person in minimal time, irrespective of the starting point of the infection. We formulate this problem as a graph covering problem and write an integer linear program to solve it. We also prove that this problem is NP-complete for a general graph and discuss the optimal policy for the simplified case of a line graph.

The rest of this chapter is organized as follows. In Section 3.2, we discuss the related literature. In Section 3.3, we introduce our model, including our setup and some preliminary definitions that will be useful later. In Section 3.4, we formulate our problem as a graph covering problem, write it as an integer linear program, and show the equivalence of the two problems. In Section 3.5, we prove the NP-completeness of the problem, discussing the proof argument and the reduction from the set cover problem. In Section 3.6, we provide the optimal policy for a line graph. We defer all proofs to Appendix C.

3.2 Literature Review

The epidemiology literature has seen a boom since the advent of COVID, with a major focus on understanding, mitigating, and preventing such pandemics. Alamo et al. (2021) review the various data-driven methodologies used to fight the challenges raised by the pandemic in forecasting the spread of the epidemic and making timely decisions. Stewart, Heusden, and Dumont (2020) discuss the utility of control theory in managing the outbreak of COVID. Contact tracing and social distancing measures were widely used as methods for the control and prevention of COVID-19. Köhler et al.
(2020) use data from the COVID outbreak in Germany to minimize the number of fatalities while maintaining similar levels of social distancing measures.

A work related to this chapter is Ou et al. (2020), who provide a multi-round active screening algorithm for an SIS (susceptible-infected-susceptible) model to select the best nodes to actively screen under a limited curing budget. Our model differs mainly in its deterministic spread of infection and the absence of recovery of infected nodes. Mei et al. (2017) study deterministic non-linear epidemic propagation models and provide an analysis of the equilibria, stability, and positivity properties. In our model, we also consider deterministic dynamics, but the aim is to find the infected node in minimal time. Christley et al. (2005) carried out simulation studies of an SIR (susceptible-infected-recovered) model to evaluate various centrality measures in their effectiveness at predicting the risk of infection of an individual in the network. In contrast to their paper, we show in this chapter that our problem is NP-complete, and thus no centrality measure will be optimal in our setting. Reya, Gardnera, and Wallera (2013) use infection reports to propose a maximum likelihood model for predicting the most likely path of infection in a network. In contrast, we assume no information about the source of the infection and analyze the worst case scenario. Ball, Knock, and O'Neill (2015) study a stochastic SEIR (susceptible-exposed-infected-removed) epidemic model that incorporates contact tracing by removing some of the infectious neighbors of the infected nodes. In our model, we take the network as given and focus on identification strategies for the infection.

3.3 Model

We have an unweighted graph $G(V,E)$, where $V$ and $E$ are the sets of vertices and edges of the graph, respectively. Each node in the graph can either be infected or not at any time instant. We denote the state of node $i$ at time $t$ by $X_i(t)$, where $X_i(t) = 1$ if node $i$ is infected at time $t$, and $X_i(t) = 0$ otherwise. We define
\[ Z = \{ z \in V : X_z(1) = 1 \}. \tag{3.1} \]
Thus, the set $Z$ is the subset of nodes that are infected at the start. $Z$ is unknown, and we make the assumption that $Z$ consists of a single node. This allows us to focus on cases where the infection starts at a single node. Once infected, a node stays infected at all future time instants. Thus, we have
\[ X_i(t+w) = 1 \quad \text{if } X_i(t) = 1, \text{ for any } t, w \in \{1,2,\ldots\}. \tag{3.2} \]
We assume that the infection spreads to the neighbouring nodes before the next time instant. Mathematically, this means that
\[ X_j(t+1) = 1 \quad \text{if } X_i(t) = 1 \text{ and } (i,j) \in E, \text{ for any } t \in \{1,2,\ldots\}. \tag{3.3} \]
At each time instant, the policy maker can select a node and check whether it is infected or not. Throughout the chapter, we call this action of the policy maker sampling a node. We assume that the policy maker is able to accurately detect the infection on the sampled node. Her goal is to find an infected node as quickly as possible, as this helps her trace back the possible infected nodes and cure them. We restrict the search to a single node at each time instant to capture the costly and time-consuming nature of diagnosing an infection. This also allows us to present our results in a clean fashion by capturing the interplay between the spread of the infection and the sequential search of the policy maker. The infection spreads to the neighbouring nodes between two successive time instants.
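To make the dynamics concrete, the following is a minimal sketch of the spread rule in equations (3.2)-(3.3), assuming an adjacency-list representation of the graph; all names here are illustrative rather than part of the model.

```python
# Minimal sketch of the deterministic spread in (3.2)-(3.3); `graph` maps
# each node to the list of its neighbors (illustrative representation).
def step(graph, infected):
    """One time step: every neighbor of an infected node becomes infected,
    and previously infected nodes stay infected."""
    new_infected = set(infected)          # (3.2): infection is permanent
    for i in infected:
        new_infected.update(graph[i])     # (3.3): spread to all neighbors
    return new_infected

# Example: a line graph 1-2-3-4 with the infection starting at node 2.
line = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
state = {2}
for _ in range(2):
    state = step(line, state)
print(state)  # {1, 2, 3, 4}: the whole line is infected after two steps
```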
At each time instant, if an infected node has not already been found, the policy maker decides which node to sample. Define $Y(t) \in V$ as the node sampled at time $t$.

Definition 6. We define a search policy as a sequence $Y = (Y(1), Y(2), \ldots, Y(l(Y)))$ of nodes to be sampled until an infected node is found, where $l(Y)$ is called the length of the policy $Y$.

Having defined a search policy, we now introduce a metric of its performance. We use the worst case time of finding the infection, as formally defined below.

Definition 7. Consider a search policy $Y$. We say that the success time $T(Y)$ is the worst case time until the infection is found, with respect to all possible initial infection locations:
\[ T(Y) = \sup_{z \in V} T_z(Y), \tag{3.4} \]
where $T_z(Y)$ represents the time taken by the policy to find an infected node if the infection actually started at node $z$:
\[ T_z(Y) = \inf_{t \geq 1} \{ t : X_{Y(t)}(t) = 1,\ X_z(1) = 1 \}. \tag{3.5} \]

Thus, the policy maker's objective is to find a search policy that minimizes her worst possible time of finding the infection. Mathematically, the policy maker wants to select a policy $Y^*$ such that
\[ Y^* \in \arg\min_{Y} T(Y). \tag{3.6} \]

We end this section by stating the definition of a path between two nodes, which will be used throughout the chapter.

Definition 8. We define a path between any two nodes $u, v \in V$ as a sequence of vertices, starting from $u$ and ending at $v$, such that every two consecutive vertices are connected by an edge in $E$ and no vertex is repeated. The shortest path between nodes $u, v \in V$ is the path from $u$ to $v$ having the minimum number of vertices. We denote by $d(u,v)$ the distance between $u$ and $v$, defined as the number of vertices in this shortest path, excluding the initial vertex $u$.

3.4 Formulation as a Graph Covering Problem

In this section, we formulate our problem as an integer linear program (ILP) solving a graph covering problem, where the goal is to cover the set of vertices of the graph by increasing subsets of vertices. We start by defining these subsets.

Definition 9. We define a ball $B$ at a node $i \in V$ of radius $r$ as the set of vertices within a distance of $r$ from $i$, i.e.,
\[ B(i,r) = \{ j \in V : d(i,j) \leq r \}. \tag{3.7} \]

A ball $B(i,r)$ at node $i$ of radius $r$ gives us the set of nodes that will be infected within the next $r$ time instants if node $i$ were to get infected at the present time instant. This is because of our assumption that the infection spreads deterministically to the neighboring nodes of the infected nodes in the next time instant. Equivalently, the ball $B(i,r)$ also specifies the set of nodes at least one of which must have been infected if node $i$ is infected after the next $r$ time instants. This suggests that if the policy maker samples node $i$ at time $r$ and finds that it is not infected, she can reject the possibility that any node in $B(i, r-1)$ is infected. This idea reduces the problem of the policy maker of finding a search policy to selecting balls of nodes to reject at each time step; a sketch of the ball computation follows.
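The ball in Definition 9 is just a breadth-first-search neighborhood; the following minimal sketch computes it, reusing the illustrative adjacency-list representation from the earlier snippet.

```python
# Minimal sketch of computing the ball B(i, r) of Definition 9 via
# breadth-first search; `graph` maps nodes to neighbor lists.
from collections import deque

def ball(graph, i, r):
    """Return {j in V : d(i, j) <= r}, the nodes within distance r of i."""
    dist = {i: 0}
    queue = deque([i])
    while queue:
        u = queue.popleft()
        if dist[u] == r:
            continue                 # neighbors of u would exceed radius r
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return set(dist)

line = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(ball(line, 2, 1))  # {1, 2, 3}
```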
We use this idea to formulate our ILP, which is stated next. Let $x_i^t$ denote the binary variable indicating that the policy maker selects the ball $B(i, t-1)$ at time $t$. As discussed in the previous paragraph, this allows the policy maker to reject the nodes present in the ball as potential infection starting points. Thus, the policy maker selects such balls, one for each time instant, so as to account for every node as a potential infection starting point. Her goal is to minimize the time taken to cover all the nodes in order to control the spread of the infection. We state the graph covering optimization problem as follows and explain the objective function and the constraints subsequently.
\[
\begin{aligned}
\min_{\{x_i^t\}} \quad & \sum_i \sum_t x_i^t & \text{(graph covering)} \\
\text{s.t.} \quad & \sum_{i,t\,:\, v \in B(i,\,t-1)} x_i^t \geq 1, \quad \forall v \in V, & \text{(C1)} \\
& \sum_i x_i^t \leq 1, \quad \forall t, & \text{(C2)} \\
& \sum_i x_i^t \geq \sum_i x_i^{t+1}, \quad \forall t, & \text{(C3)} \\
& x_i^t \in \{0,1\}, \quad \forall i, t. & \text{(C4)}
\end{aligned}
\]
We start by explaining constraint (C2) first. Since $x_i^t$ denotes whether the ball $B(i,t-1)$ is chosen or not, constraint (C2) ensures that at most one ball is chosen at each time step. This enforces our assumption that we can only sample one node at a time. We next look at constraint (C3). The left hand side represents the number of balls selected at time $t$, while the right hand side represents the number of balls selected at the next time instant $t+1$. In the presence of constraint (C2), both of these numbers can be at most 1. Using this, we see that constraint (C3) rules out the possibility that the policy maker selects a ball at time $t+1$ (right hand side equal to 1) without selecting a ball at time $t$ (left hand side equal to 0). This maintains the sequential nature of our problem. Constraint (C1) ensures that, for every node, we pick at least one ball containing that node. This guarantees that we consider each node $v$ as a potential starting point of the infection. Finally, the objective function minimizes the number of balls chosen in our graph covering. Combined with constraints (C2) and (C3), the objective function equals $1$ plus the radius of the largest ball selected; since the radius of the selected ball increases by one at each time instant, starting at 0, this gives us the time taken to cover all the nodes (by constraint (C1)).

Denote the optimal values of the variables obtained by solving the above optimization problem by $x_i^{t*}$, where $i \in \{1,2,\ldots,n\}$ and $t \in \{1,2,\ldots,n\}$. Let $O^*$ be the optimal value, i.e.,
\[ O^* = \sum_i \sum_t x_i^{t*}. \tag{3.8} \]
Due to constraints (C2) and (C3), the optimal solution above naturally defines a search policy $Y^*$ for which
\[ Y^*(t) = \{ i : x_i^{t*} = 1 \}, \quad t \in \{1,2,\ldots,O^*\}. \tag{3.9} \]

We next show that the optimization problem formulated above solves our infection search problem.

Theorem 3.4.1. For any search policy $Y$,
\[ T(Y) \geq O^*, \]
where $O^*$, as defined in equation (3.8), is the optimal value of the optimization problem and $T(Y)$ is defined in equation (3.4).
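The covering program is small enough to hand to an off-the-shelf solver. The following is a minimal sketch using the PuLP modeling library (an assumption on tooling, not part of the chapter), together with the `ball` helper sketched above; it returns the selected (time, node) pairs.

```python
# Minimal sketch of the graph covering ILP (C1)-(C4) using PuLP (assumed
# installed); `ball` is the BFS helper from the previous sketch, and the
# horizon is bounded by |V| since |V| balls always suffice.
import pulp

def cover_ilp(graph):
    V = list(graph)
    T = range(1, len(V) + 1)
    x = {(i, t): pulp.LpVariable(f"x_{i}_{t}", cat="Binary")
         for i in V for t in T}
    balls = {(i, t): ball(graph, i, t - 1) for i in V for t in T}
    prob = pulp.LpProblem("graph_covering", pulp.LpMinimize)
    prob += pulp.lpSum(x.values())                      # objective
    for v in V:                                         # (C1): cover each node
        prob += pulp.lpSum(x[i, t] for i in V for t in T
                           if v in balls[i, t]) >= 1
    for t in T:                                         # (C2): one ball per step
        prob += pulp.lpSum(x[i, t] for i in V) <= 1
        if t + 1 in T:                                  # (C3): no idle steps
            prob += (pulp.lpSum(x[i, t] for i in V)
                     >= pulp.lpSum(x[i, t + 1] for i in V))
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return sorted((t, i) for (i, t) in x if x[i, t].value() == 1)

line = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(cover_ilp(line))  # e.g. [(1, 1), (2, 3)]: two samples cover the line
```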
3.5 NP-Completeness Result

In this section, we show that our problem of finding the infection (or, equivalently, graph covering) is NP-complete for a general graph. We focus on the decision version of this problem, i.e., the version of graph covering that can be posed as a yes-no question. More formally, we show that the following problem is NP-complete: For a given graph $G$ and positive integer $k'$, does there exist a policy $Y$ of size $l(Y) \leq k'$ such that $Y$ covers $G$?

Theorem 3.5.1. Graph covering is NP-complete, i.e.,
1. Graph covering is in NP.
2. Every problem in NP is reducible to graph covering in polynomial time.

The proof is provided in Appendix C.2, but we discuss the key points here. To prove that a problem is NP-complete, we first need to show that it is in NP. This means that we need to exhibit a certificate and a polynomial time verifier: a certificate is a proposed answer to our problem, and the verifier takes the certificate and correctly outputs whether the answer to our problem is yes or no. We choose the given sequence of nodes $Y$ itself as the certificate for our graph covering problem. To check whether the policy $Y$ covers all the nodes, we perform a sequential breadth-first search as the verifier, in the following manner:
1. For each time instant $t \in \{1,2,\ldots,l(Y)\}$, run breadth-first search starting at node $Y(t)$ until you reach the nodes at a distance of $t-1$ from $Y(t)$.
2. Mark all the nodes traversed in the process (including those at a distance of $t-1$ from $Y(t)$).
3. If in this process you mark all the vertices of $G$, then output Yes; else output No.

In other words, in the first step we create the balls used at each time instant. In the next step, we mark all the nodes present in at least one ball. Finally, in the last step we output Yes if all the nodes were marked, and No if some node is left unmarked.

Next, to prove the NP-hardness of our problem, we show that an NP-complete problem is polynomial time reducible to our graph covering problem. In other words, we show that there exists an NP-complete problem that can be solved by a polynomial number of calls to the graph covering problem. We choose the set cover problem for this purpose, which we formally define below.

Set cover: Consider a set of elements $U$, with $|U| = n$, and a collection of subsets $S_j \subseteq U$, $j \in M = \{1,2,\ldots,m\}$. The set cover problem can then be stated as follows: For a given set of elements $U$, a collection of subsets $S = \{S_1, S_2, \ldots, S_m\}$, and a positive integer $k$, does there exist a set $S' \subseteq S$ of size at most $k$ such that the subsets in $S'$ contain all the elements of $U$, i.e., $\cup_{i : S_i \in S'} S_i = U$?

Before describing the construction procedure, we need to define some basic graph structures that will aid in the construction.

Definition 10. A line graph of $\ell$ nodes refers to a tree having two leaf nodes, in which the remaining $\ell - 2$ nodes are each connected to two nodes, so that the graph has the shape of a line.

Definition 11. A star graph of branches $b$ and length $d$, denoted by $E_d^b$, refers to a tree obtained by joining one node (denoted by $c_d^b$) to $b$ line graphs of $d$ nodes each, through $b$ edges.

Figure 3.1: A line graph with 4 nodes (left) and a star graph of branch 3 and length 2 (right).

Figure 3.1 shows the line graph with 4 nodes and the star graph $E_2^3$. We now describe the construction procedure that we use to reduce set cover to graph covering.

Construction: For a given instance of set cover, i.e., given $U$, $S$, and $k$, we construct a graph $G$ as follows:
1. For every element $i \in U$, construct a node $e_i$.
2. For every time instant $t \in \{1,2,\ldots,k\}$ and every set $S_j$, $j \in M$, construct a node $s_{j,t}$.
3. For every element $i \in S_j$, for every set $S_j$, $j \in M$, and for all time instants $t$, connect $e_i$ to $s_{j,t}$ via $t-1$ nodes. (Note that connecting node $u$ to $v$ via $w$ nodes means taking a line graph of $w$ nodes and connecting one of its leaf nodes to $u$ and the other to $v$.)
4. For every time instant $t$, construct $k+1$ nodes $\tau_t^1, \tau_t^2, \ldots, \tau_t^{k+1}$ and connect them all separately to $s_{j,t}$ for all $j \in M$, via separate sets of $t-1$ nodes for each $\tau_t^i$, $i \in \{1,2,\ldots,k+1\}$.
5. Construct a star graph $E_{k+1}^{k+3}$ and connect $c_{k+1}^{k+3}$ to $s_{j,t}$ for all $j \in M$ via $k+1-t$ nodes.
6. Add a node $o$ by connecting it to one of the leaf nodes of $E_{k+1}^{k+3}$.

Figure 3.2: Construction procedure for a set cover problem with $U = \{1,2,3\}$, $S = \{\{1\},\{1,2\},\{2,3\}\}$, $k = 2$.

Figure 3.2 shows the graph built using the construction procedure above for a set cover instance with $U = \{1,2,3\}$, $S = \{\{1\},\{1,2\},\{2,3\}\}$, $k = 2$. We finally prove the following lemma, which proves Theorem 3.5.1.

Lemma 3.5.2. For given sets $U$ and $S$, there exists a set cover of size at most $k$ if and only if there exists a search policy $Y$ of length at most $k+2$ that solves graph covering for the constructed graph $G$.

The crux of the proof lies in the construction.
To prove that a solution to set cover must exist if a solution to graph covering does, we show that for every time instant $t+1$, with $t \in \{1,2,\ldots,k\}$, some subset node $s_{j,t}$ must be selected. The element nodes $e_i$ are connected to the respective subset nodes $s_{j,t}$ in such a fashion that node $e_i$ gets covered by sampling at $s_{j,t}$ if $i \in S_j$. Since we have a solution to graph covering that covers all the nodes, we simply pick the subsets $S_j$ corresponding to the sampled subset nodes $s_{j,t}$ and obtain a solution to the corresponding set cover problem.

To prove that a solution to graph covering must exist if a solution to set cover does, we construct a policy $Y$ such that $Y(1) = o$, $Y(k+2) = c_{k+1}^{k+3}$, and for the remaining time instants we select the nodes $s_{j,t}$ corresponding to the subsets $S_j$ in the set cover solution, in any order. We then show that this policy covers all the nodes: the element nodes $e_i$ and the time nodes $\tau_t^i$ are covered by sampling at the subset nodes, $o$ is covered by $Y(1)$, and the remaining nodes are covered by the last sample at $c_{k+1}^{k+3}$.

We note here that for some special structures of the network, the graph covering problem may be easier to solve. In the next section, we present such an example: for a line graph we obtain the closed form of the optimal search policy.

3.6 The Simplified Case of a Line Graph

In this section, we discuss our problem when the social network is a line graph. We label the nodes of a line graph of $\ell$ nodes from $1$ to $\ell$, where nodes $1$ and $\ell$ are the leaf nodes and node $i$ is connected to nodes $i-1$ and $i+1$ for $i \in \{2,3,\ldots,\ell-1\}$. We are able to find an optimal search policy for this graph, as stated in the next proposition.

Proposition 3.6.1. For the case of a line graph of $\ell$ nodes, the search policy $Y_{\text{line}}$ defined by
\[ Y_{\text{line}}(t) = \min\{ t^2 - t + 1,\ \ell \} \]
for $t = 1, 2, \ldots$ minimizes the time taken to find an infection $T(Y)$, where $T(Y)$ is defined in (3.4). The optimal number of samples required is given by
\[ O^*_{\text{line}} = \lceil \sqrt{\ell}\, \rceil, \]
where $\lceil x \rceil$ represents the nearest integer greater than or equal to $x$.

We clarify here that the policy described in the above proposition is not the only optimal solution for the line graph. When the network is a line graph, the infection follows a simpler trajectory: in terms of the graph covering problem, the balls to be selected have a structure that can be exploited to obtain the optimal search policy for this simpler case. Indeed, the ball $B(t^2-t+1,\ t-1)$ selected at time $t$ is exactly the interval of nodes $\{(t-1)^2+1, \ldots, t^2\}$, so the selected balls tile the line and $\lceil\sqrt{\ell}\,\rceil$ of them suffice.
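A brute-force check of the proposition is easy to run; the sketch below (illustrative and self-contained) simulates the spread from every possible starting node and verifies that the policy's worst-case detection time matches $\lceil\sqrt{\ell}\,\rceil$ for a few sizes.

```python
# Minimal sketch verifying Proposition 3.6.1 by brute force: for each
# starting node z of a line graph with n nodes, run the deterministic
# spread and the policy Y_line(t) = min(t*t - t + 1, n) until detection.
import math

def worst_case_time(n):
    line = {i: [j for j in (i - 1, i + 1) if 1 <= j <= n]
            for i in range(1, n + 1)}
    worst = 0
    for z in range(1, n + 1):
        infected, t = {z}, 1
        while min(t * t - t + 1, n) not in infected:
            infected |= {j for i in infected for j in line[i]}  # spread
            t += 1
        worst = max(worst, t)
    return worst

for n in (5, 10, 16, 30):
    assert worst_case_time(n) == math.ceil(math.sqrt(n))
print("worst-case times match ceil(sqrt(n))")
```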
3.7 Conclusion

This chapter sets up the model of finding an infected node in a network in minimal time, where the infection spreads deterministically to neighboring nodes. We formulate the problem as a graph covering problem and show its equivalence to our original problem, while also constructing an integer linear program to solve it. Through a reduction from the set cover problem, we show that our problem is NP-complete for a general graph. We also give the optimal policy, and the minimum time required to catch the infection, for the simpler case of a line graph.

Our work extends easily to settings where the speed of the spread of the infection or the frequency of searches varies, by modifying the radius or the number of the balls, respectively, in the graph covering problem. More concretely, suppose that the infection spreads to the neighboring nodes after every $p$ time instants, instead of after every time instant as in our model. Then, instead of a ball of radius $t-1$ at time instant $t$, we select a ball of radius $\lceil t/p \rceil - 1$, where $\lceil x \rceil$ denotes the nearest integer greater than or equal to $x$. Similarly, if the spread of the infection is faster, we increase the size of the balls appropriately for each time instant. Mathematically, if the infection spreads to nodes at distance $p$ from the currently infected nodes after each time instant, then we set the ball radii to $(t-1)p$ for each time instant $t$.

There are several future directions of research to pursue. Since the problem is NP-complete, a natural next step would be to prove a hardness of approximation result for this setting. Such a result would establish that our problem is hard to approximate within a certain factor, say $\alpha$: if there existed an approximation algorithm that finds a sampling policy using at most $\beta$ times the number of samples used by the optimal policy, with $\beta < \alpha$, then it would follow that P $=$ NP. One can then construct and compare approximation algorithms that achieve results closer to the hardness bound. One can also evaluate these policies based on their performance in finding an infection on real-world networks with COVID data.

Appendix A

A.1 Proofs

This appendix contains the proofs of Theorems 1.5.1-1.7.2. The proof of Theorem 1.7.3 is analogous to that of Theorem 1.5.2 and is omitted for brevity.

A.1.1 Proof of Theorems 1.5.1 and 1.7.2

Outline: The main idea underlying the proofs of both theorems is the same. We first argue that it is without loss of generality to focus on public signaling mechanisms whose signal is just the indifference threshold, subject to this threshold being incentive compatible (i.e., the threshold customer is indeed indifferent given the generating distribution). We then show the intuitive (but algebraically challenging) fact that it is optimal for the $L$-type firm to always send the threshold at which the $L$-type firm sells out, while the $H$-type firm signals in a way that ensures incentive compatibility. These two requirements can be satisfied by many combinations of prices and public signaling mechanisms, including one with no information communication, and hence public information provisioning is not effective.

Preliminaries: We begin by presenting some preliminary analysis applicable to both theorems, and then specialize the analysis to complete the proof of each theorem separately. We define a class of public signaling mechanisms that signal a threshold so that all customers find it incentive compatible to purchase in period 1 if and only if their valuation equals or exceeds this threshold.

Definition 12. A public signaling mechanism is a threshold mechanism if its support $S \subseteq V$ and, for all $t \in S$, customers with valuations $v \geq t$ buy in period 1 and the remaining customers buy in period 2 or do not buy, i.e.,
\[
x_v = \begin{cases} 1, & \text{if } v \geq t, \\ 2, & \text{if } t > v \geq p_2, \\ 0, & \text{otherwise.} \end{cases}
\]
We denote by $\mathcal{T}$ the set of all threshold mechanisms.

Noting that any arbitrary public signal would naturally induce threshold behavior in this setting, we can use a revelation-principle-type argument to establish that any public signaling mechanism can be reduced to a threshold mechanism with equivalent revenue. We omit these details and, in what follows, restrict attention to threshold mechanisms. We will find the optimal threshold mechanism and prove that it is in fact equivalent to a binary signaling mechanism, which will complete the proof.
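For concreteness, the action rule in Definition 12 can be written as a tiny function; this is an illustrative sketch only, with made-up numeric inputs.

```python
# Minimal sketch of the customer action rule in Definition 12: given the
# signaled threshold t and period-2 price p2, a v-customer buys in period 1
# (action 1), waits for period 2 (action 2), or never buys (action 0).
def threshold_action(v, t, p2):
    if v >= t:
        return 1      # buy at p1 in period 1
    if v >= p2:
        return 2      # wait and attempt to buy at p2 in period 2
    return 0          # valuation below p2: do not buy

print([threshold_action(v, t=0.7, p2=0.4) for v in (0.9, 0.5, 0.2)])  # [1, 2, 0]
```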
To start with a general analysis of public signaling mechanisms, we introduce the events of abundance, denoted by $h$, and scarcity, denoted by $l$. When the size of the inventory is uncertain, event $h$ ($l$) corresponds to High (Low) quantity. Similarly, when the demand is uncertain (cf. Section 1.7.3), $h$ ($l$) corresponds to Low (High) demand.

Finding the optimal threshold mechanism amounts to the seller choosing the joint probability distribution, which for simplicity we denote by $dP(\theta, t)$ for $\theta \in \{h,l\}$ and $t \in V$, together with the prices $(p_1, p_2)$, in order to maximize the expected revenue while ensuring that it is incentive compatible for customers with valuations higher than $t$ to buy in period 1 and for customers with valuations smaller than $t$ to buy in period 2 or never buy. We note that $dP(\theta, t)$ is written with some abuse of notation, keeping in mind that whenever it is not part of an integral, the well-defined Radon-Nikodym derivative is the relevant object.

We denote by $S_1$ the first period sales and by $S_2$ the second period sales. Note that for public signaling mechanisms, $P(A_1(v) \mid t) = P(A_1(v') \mid t)$ and $P(A_2(v) \mid t) = P(A_2(v') \mid t)$ for all $v, v' \in V$, and therefore for brevity we write
\[ P(A_1(v) \mid t) = P(A_1 \mid t), \qquad P(A_2(v) \mid t) = P(A_2 \mid t). \]
We first note that for any (non-trivial) threshold signal $t$, the $t$-customer must (weakly) prefer buying to not buying, i.e., $t \geq p_1$, and must be indifferent between buying in the two periods, i.e.,
\[ (t - p_1)\, P(A_1 \mid t) = (t - p_2)\, P(A_2 \mid t), \]
which is equivalent to
\[ (t - p_1)\, dP(A_1, t) = (t - p_2)\, dP(A_2, t). \tag{A.1} \]
For $i = 1, 2$, we can write
\[ dP(A_i, t) = P(A_i \mid t, h)\, dP(h, t) + P(A_i \mid t, l)\, dP(l, t). \]
Noting that in the case of abundance availability equals one in both periods, we can simplify (A.1) to
\[ (p_1 - p_2)\, dP(h, t) = \big[ (t - p_1) P(A_1 \mid t, l) - (t - p_2) P(A_2 \mid t, l) \big]\, dP(l, t). \tag{A.2} \]
Therefore, the revenue of the firm can be written as
\[
\begin{aligned}
R &= p_2\, \mathbb{E}[\min\{Q, D \bar F(p_2)\}] + \int_{t \geq p_1} \mathbb{E}[D_1 \mid l, t](p_1 - p_2)\, dP(l,t) + \bar F(t)(p_1 - p_2)\, dP(h,t) \\
&= p_2\, \mathbb{E}[\min\{Q, D \bar F(p_2)\}] + \int_{t \geq p_1} dP(l,t)\Big[ \mathbb{E}[D_1 \mid l,t](p_1 - p_2) + \bar F(t)\big( (t-p_1)P(A_1 \mid t, l) - (t-p_2)P(A_2 \mid t, l) \big) \Big].
\end{aligned}
\]
Let
\[ \Pi(p_1, p_2, t) = \mathbb{E}[D_1 \mid l,t](p_1 - p_2) + \bar F(t)\big( (t-p_1)P(A_1 \mid t,l) - (t-p_2)P(A_2 \mid t,l) \big) \]
and $T^*(p_1,p_2) = \arg\max_t \Pi(p_1,p_2,t)$. This directly leads to the following characterization of optimal public signaling mechanisms.

Lemma A.1.1. The following is an optimal public signaling mechanism:
(i) Set prices $p_1^*$ and $p_2^*$ such that $(p_1^*, p_2^*) \in \arg\max R(p_1,p_2)$, where
\[ R(p_1,p_2) = p_2\, \mathbb{E}[\min\{Q, D\bar F(p_2)\}] + P(l)\, \Pi(p_1,p_2,t^*(p_1,p_2)), \]
with $t^*(p_1,p_2) \in T^*(p_1,p_2)$.
(ii) If the firm type is $l$, then it signals the threshold $t^* \in T^*(p_1^*, p_2^*)$ with certainty.
(iii) If the firm type is $h$, then it signals the threshold $t^*$ with probability equal to
\[ \max\left\{ 0,\ \min\left\{ \frac{P(l)}{P(h)} \cdot \frac{(t^*-p_1^*)P(A_1 \mid t^*, l) - (t^*-p_2^*)P(A_2 \mid t^*, l)}{p_1^* - p_2^*},\ 1 \right\} \right\}, \tag{A.3} \]
and it signals $t = \sup_{v \in V} v$ with the remaining probability.
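The probability in (A.3) is straightforward to compute once the $l$-type availabilities are in hand. The sketch below does so for the quantity-uncertainty case, using the availability expressions that appear in (A.5) in the proof of Lemma A.1.2 below and assuming, purely for illustration, uniform valuations on $[0,1]$.

```python
# Minimal sketch of the h-type signaling probability (A.3) for the quantity
# uncertainty case, with the l-type availabilities from (A.5) and uniform
# valuations on [0, 1] (an illustrative assumption, not part of the proof).
def h_signal_prob(t, p1, p2, qL, Pl):
    F_bar = lambda x: max(0.0, 1.0 - x)      # complementary CDF, uniform case
    a1 = min(qL / F_bar(t), 1.0)             # P(A1 | t, l)
    a2 = min(max((qL - F_bar(t)) / (F_bar(p2) - F_bar(t)), 0.0), 1.0)  # P(A2 | t, l)
    gain = (t - p1) * a1 - (t - p2) * a2     # numerator of (A.3)
    return max(0.0, min((Pl / (1.0 - Pl)) * gain / (p1 - p2), 1.0))

# t = F_bar^{-1}(qL) = 0.7 for qL = 0.3: the l-type sells out at the threshold.
print(h_signal_prob(t=0.7, p1=0.65, p2=0.5, qL=0.3, Pl=0.5))  # ~0.333
```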
We now specialize our analysis and apply Lemma A.1.1 to the cases of quantity and demand uncertainty.

Proof of Theorem 1.5.1: In this case, quantity is uncertain: the $h$-type firm has quantity $q_H$ and the $l$-type firm has quantity $q_L$, and hence $h = H$, $l = L$. The demand is deterministic with $D = q_H = 1$. Observe that
\[ \mathbb{E}[D_1 \mid l, t] = \bar F(t)\, P(A_1 \mid t, l), \qquad \mathbb{E}[D_2 \mid l, t] = (\bar F(p_2) - \bar F(t))\, P(A_2 \mid t, l), \]
and hence
\[ \Pi(p_1,p_2,t) = (t - p_2)\,\bar F(t)\,\big( P(A_1 \mid Q = q_L, S = t) - P(A_2 \mid Q = q_L, S = t) \big) =: \pi(t, p_2). \tag{A.4} \]
We emphasize that the latter is independent of $p_1$, and thus the optimization problem in Lemma A.1.1(i) is independent of $p_1$ as well. We next characterize some properties of $\pi(t,p_2)$ that will help us solve the optimization problem in Lemma A.1.1(i).

Lemma A.1.2. For any $p \in V$ with $t \geq p$, we have:
1. $\pi(t,p) \leq \max\{ (\bar F^{-1}(q_L) - p)\, q_L,\ 0 \}$.
2. If $q_L < \bar F(p)$, then $\arg\max_t \pi(t,p) = \bar F^{-1}(q_L)$.

Proof of Lemma A.1.2. Notice that we have
\[ P(A_1 \mid Q = q_L, S = t) = \min\{ q_L / \bar F(t),\ 1 \}, \qquad P(A_2 \mid Q = q_L, S = t) = \min\left\{ \max\left\{ \frac{q_L - \bar F(t)}{\bar F(p) - \bar F(t)},\ 0 \right\},\ 1 \right\}. \tag{A.5} \]
We consider the cases (a) $q_L \geq \bar F(p)$ and (b) $q_L < \bar F(p)$. In case (a), we have $\bar F^{-1}(q_L) \leq p$, and so $\pi(t,p) = 0$. Noting that in this case $\max\{(\bar F^{-1}(q_L)-p)q_L,\, 0\} = 0$, part 1 holds.

Consider case (b), with $\bar F^{-1}(q_L) > p$, and two further sub-cases: (i) $t \leq \bar F^{-1}(q_L)$ and (ii) $t > \bar F^{-1}(q_L)$. We will prove that $\pi(\cdot,p)$ is increasing in sub-case (i) and decreasing in sub-case (ii), which implies that $\pi(\cdot,p)$ is maximized at $t = \bar F^{-1}(q_L)$. For sub-case (i), we have $\pi(t,p) = (t-p)q_L$, which is increasing in $t$. For sub-case (ii), $t > \bar F^{-1}(q_L)$ implies that
\[ \pi(t,p) = \frac{t-p}{\bar F(p) - \bar F(t)}\, \bar F(t)\, (\bar F(p) - q_L). \]
Differentiating with respect to $t$, and denoting by $\gamma = \frac{q_L - \bar F(t)}{\bar F(p) - \bar F(t)}$ the period 2 availability for the $L$-type firm, we obtain
\[
\begin{aligned}
\frac{\partial \pi}{\partial t} &= (\bar F(p) - q_L)\left( \frac{\bar F(t) - (t-p)f(t)}{\bar F(p)-\bar F(t)} - \frac{(t-p)\bar F(t) f(t)}{(\bar F(p)-\bar F(t))^2} \right) \\
&= \frac{1-\gamma}{\bar F(p)-\bar F(t)} \Big[ (\bar F(p)-\bar F(t))\big(\bar F(t) - f(t)(t-p)\big) - (t-p)\bar F(t) f(t) \Big] \\
&= \frac{1-\gamma}{\bar F(p)-\bar F(t)}\, \bar F(p)\bar F(t) \left[ \frac{\bar F(p)-\bar F(t)}{\bar F(p)} - (t-p)\frac{f(t)}{\bar F(t)} \right].
\end{aligned} \tag{A.6}
\]
Applying the mean value theorem, we obtain the existence of $\xi \in (p,t)$ such that $f(\xi) = \frac{\bar F(p)-\bar F(t)}{t-p}$. Using this in (A.6), we obtain
\[
\frac{\partial \pi}{\partial t} = \frac{1-\gamma}{\bar F(p)-\bar F(t)}\, \bar F(p)\bar F(t)(t-p)\left[ \frac{f(\xi)}{\bar F(p)} - \frac{f(t)}{\bar F(t)} \right] \leq \frac{1-\gamma}{\bar F(p)-\bar F(t)}\, \bar F(p)\bar F(t)(t-p)\left[ \frac{f(\xi)}{\bar F(\xi)} - \frac{f(t)}{\bar F(t)} \right] < 0,
\]
where the last inequality follows from the non-decreasing hazard rate assumption. Thus, for sub-case (ii), $\pi(\cdot,p)$ is a decreasing function of $t$. Combining the two sub-cases of case (b), we obtain that $\pi(\cdot,p)$ is maximized at $t = \bar F^{-1}(q_L)$; combining this with case (a) completes the proof of part 1. Further noting that $\max_t \pi(t,p) = (\bar F^{-1}(q_L)-p)q_L$ in case (b), and that $\pi(t,p) = 0$ in case (a), we complete the proof of part 2.

We complete the proof of the theorem by considering two cases, (1) $q_L < \bar F(p_m)$ and (2) $q_L \geq \bar F(p_m)$, and establishing the result for both. Consider case (1). If we restrict the optimization to $p_2$ such that $\bar F(p_2) \leq q_L$, then we can write the objective function as $p_2 \bar F(p_2)$, and thus its optimal value equals $\bar F^{-1}(q_L)\, q_L$. We will show that we obtain higher revenue by optimizing under the restriction $\bar F(p_2) > q_L$. In that case, we can apply Lemma A.1.2.2 to obtain that the objective is maximized by setting $t_L = \arg\max_t \pi(t,p_2) = \bar F^{-1}(q_L)$. Then, optimizing over $p_2$, the corresponding maximum revenue equals
\[ P(l)\, q_L \bar F^{-1}(q_L) + P(h)\, p_m \bar F(p_m), \]
where we set $p_2 = p_m = \arg\max_p p\bar F(p)$. We now apply Lemma A.1.1 and observe that the following mechanisms achieve this revenue and thus are optimal: mechanisms that set $p_2 = p_m$, signal the threshold $t = t_L$ with certainty if the firm type is $l$, and signal the threshold $t = t_L$ with the probability given in (A.3), where $p_1 \in (P(l)t_L + P(h)p_m,\ t_L]$ can be chosen arbitrarily. This proves part (c) of the theorem for $q_L < \bar F(p_m)$. Now consider the full information case, i.e., the $h$-type firm sends the signal $t_L$ with zero probability (so that no customer purchases in period 1 for the $h$-type firm); this is obtained by setting $p_1 = t_L$ in (A.3). This proves part (b) of the theorem for $q_L < \bar F(p_m)$. Next, consider no information.
In this case, the $h$-type firm sends the signal $t_L$ with certainty, which is obtained by setting $p_1 = P(l)\bar F^{-1}(q_L) + P(h)p_m$. This proves part (a) of the theorem for $q_L < \bar F(p_m)$.

Consider case (2). In this case, straightforward calculations yield that the optimal solution is to set $p_1 = p_2 = p_m$, yielding an optimal revenue of $p_m \bar F(p_m)$. Combining the results across cases (1) and (2) completes the proof of Theorem 1.5.1.

Proof of Theorem 1.7.2: In this case, demand is uncertain: the $h$-type firm has $D = \underline{d}$ and the $l$-type firm has $D = \bar d$, i.e., $h$ is equivalent to the event $D = \underline{d}$ and $l$ is equivalent to the event $D = \bar d$. The quantity is deterministic with $Q = q = 1$. In this case, denoting $x = q/\bar d < 1$, we have
\[ \Pi(p_1,p_2,t) = \frac{1}{x}\min\{x, \bar F(t)\}(p_1-p_2) + \bar F(t)\left( (t-p_1)\frac{\min\{x,\bar F(t)\}}{\bar F(t)} - (t-p_2)\frac{\min\{x,\bar F(p_2)\} - \min\{x,\bar F(t)\}}{\bar F(p_2)-\bar F(t)} \right). \]
Straightforward algebra yields
\[ \Pi(p_1,p_2,t) = \left(\frac{1}{x}-1\right)\min\{x,\bar F(t)\}(p_1-p_2) + (t-p_2)\bar F(t)\left( \frac{\min\{x,\bar F(t)\}}{\bar F(t)} - \frac{\min\{x,\bar F(p_2)\} - \min\{x,\bar F(t)\}}{\bar F(p_2)-\bar F(t)} \right). \]
Observe that $\Pi(p_1,p_2,t)$ is increasing in $p_1$, and therefore, at any optimal solution, $p_1 = t$. Therefore, by Lemma A.1.1(iii), the $h$-firm signals $\sup_{v\in V} v = 1$ with probability 1, and the $l$-firm sends $t = p_1$ with probability one. The corresponding optimal public signaling mechanism is therefore truth-telling. Using Lemma A.1.1, the optimal prices are $p_2 = p_m$ and $p_1 = \bar F^{-1}(q/\bar d)$. In other words, the firm truthfully reveals the state of the market and recommends that customers wait when the market size is low. Clearly, when the market size is high, it is optimal for the firm to optimize the first period price and set $p_1 = \bar F^{-1}(q/\bar d)$.

A.1.2 Proof of Theorems 1.5.2 and 1.5.3

A.1.2.1 Outline of the Proof

The high-level idea of the proof can be summarized as follows:
(i) Using classical results, it is without loss of optimality to consider signaling mechanisms that recommend incentive-compatible actions to customers.
(ii) Intuitively, a revenue-maximizing firm (sender) profits from persuading customers to buy early. Therefore, the binding incentive compatibility constraint is the one corresponding to a Buy signal for each customer.
(iii) The Buy indifference condition, which as explained above defines the optimal solution, reduces the sender's optimization problem from an infinite-dimensional problem (prices and distributions for each inventory level) to a three-parameter optimization problem (A.15): the prices for each period and the lowest indifferent customer valuation.
(iv) Given the reduction above, intuitively, the lowest indifferent customer should be such that the $L$-type firm sells out in the first period (Lemma A.1.4). This is due to the non-decreasing hazard rate assumption and fully specifies the optimal signaling mechanism.
(v) If the quantity is sufficiently high, the $L$-type firm may not sell out in the first period, and the optimal private signaling mechanism will turn out to be public signaling.

In order to implement these high-level steps, several technical complexities need to be tackled.
(i) Instead of solving the full sender's problem, we solve a relaxed version (removing all but the "Buy" incentive compatibility constraints) and then verify that the optimal solution of the relaxed problem is feasible in the original problem.
(ii) There are many signaling mechanisms that guarantee "Buy" incentive compatibility for different sets of customers. Intuitively, the set of such customers should be an interval $[t, 1]$, since higher-valuation customers are "easier" to persuade.
Typically, a moving-of-masses argument is used to prove such claims, but such an argument is extremely convoluted in our setting because the incentive compatibility constraints are intertwined via the availability terms. This is why we use weak duality and solve the dual problem instead, which provides the desired structure.
(iii) The last step of our proof is algebraically complex. In the absence of information, it is easy to show that for a distribution with non-decreasing hazard rate, the indifferent customer should be placed at the point where the low-type firm sells out. Under differential information, however, the "demand function" is modified, since only those who receive a signal to buy are eligible in the first period, and hence proving the same result requires cumbersome algebraic manipulations (proof of Lemma A.1.4).

A.1.2.2 Problem Formulation

We refer to the class of mechanisms in which the seller recommends actions $\{Buy, Wait, No\}$ to each customer, who then finds it incentive compatible to follow this recommendation, as straightforward mechanisms. A revelation-principle-style argument, similar to Bergemann and Morris (2017), allows us to restrict attention to this class of mechanisms without loss of generality.* For all such mechanisms, the incentive compatibility constraints can be written as
\[ (v-p_1)\,P(A_1(v) \mid S_v = Buy) \geq (v-p_2)\,P(A_2(v) \mid S_v = Buy), \tag{Buy-1} \]
and
\[ (v-p_1)\,P(A_1(v) \mid S_v = Buy) \geq 0, \tag{Buy-2} \]
for all $v \geq p_1$ with $P(S_v = Buy) > 0$, ensuring that it is incentive compatible for agents to buy in period 1 when recommended to do so. Moreover,
\[ (v-p_1)\,P(A_1(v) \mid S_v = Wait) \leq (v-p_2)\,P(A_2(v) \mid S_v = Wait), \tag{Wait-1} \]
and
\[ 0 \leq (v-p_2)\,P(A_2(v) \mid S_v = Wait), \tag{Wait-2} \]
for all $v \geq p_2$ with $P(S_v = Wait) > 0$, ensuring that it is incentive compatible for agents to buy in period 2 when recommended to do so. Finally,
\[ (v-p_1)\,P(A_1(v) \mid S_v = No) \leq 0, \qquad (v-p_2)\,P(A_2(v) \mid S_v = No) \leq 0, \tag{No} \]
for all $v \leq p_2$ with $P(S_v = No) > 0$, ensuring that it is incentive compatible for agents not to buy when recommended to do so. Equation (No) implies that for all $v \leq p_2$, $S_v = No$ is the only feasible signal from the seller. Similarly, using (Wait-1), we can infer that $S_v = Wait$ is the only feasible signal for all customers with $v \in (p_2, p_1)$. Therefore, the platform needs to optimize the signaling mechanism only for the customers in $\tilde V = \{ v \in V : v \geq p_1 \}$. In particular, the platform designs the (joint) distribution $\mu(Q, \mathbf{S})$ on the space of signals $\mathbf{S} \in \{Buy, Wait\}^{\tilde V}$. Notice that $\mathbf{S}$ is a function on $\tilde V$ that takes the value $Buy$ or $Wait$ at each valuation. We denote $\mathcal{S} = \mathcal{S}_H \cup \mathcal{S}_L$, where $\mathcal{S}_H$ is the support of $\mu(H, \cdot)$ and $\mathcal{S}_L$ is the support of $\mu(L, \cdot)$.

*For any private signaling mechanism, each customer, after receiving a signal $s$, calculates the conditional availability of the product in each period, $\mathbb{E}[A_j \mid S_v = s]$, $j = 1,2$, and then decides when to purchase. Therefore, by a revelation principle argument, we can restrict our attention to mechanisms where the seller signals to each agent these estimates of the availability in each period directly, such that the conditional expectation of the availability given these estimates is consistent with the estimates. Analogously, any such direct availability signaling mechanism can be implemented by an appropriate choice of signals.

We focus on the following relaxation of the optimization problem of the firm, where we drop all constraints except (Buy-1):
\[
\begin{aligned}
\max_{\mu,\, p_1,\, p_2} \quad & \int_{s \in \mathcal{S}} d\mu(H,s)\big( p_1 |s|_1 + p_2 |s|_2 \big) + \Lambda(|s|_1)\, d\mu(L,s) & \text{(Private-Opt)} \\
\text{s.t.} \quad & \int_{s\in\mathcal{S}} d\mu(H,s) = P(H), \qquad \int_{s\in\mathcal{S}} d\mu(L,s) = P(L), \\
& \int_{s\in\mathcal{S}:\, s(v) = Buy} d\mu(H,s)(p_1-p_2) + d\mu(L,s)\,\varphi(v, |s|_1) \leq 0, \quad \text{for } v \geq p_1, & \text{(A.7)}
\end{aligned}
\]
where $|s|_i$ denotes the mass of customers who attempt to purchase in period $i$:
\[ |s|_1 = \int_{v\in\tilde V} \mathbb{I}(s(v) = Buy)\, dF(v), \qquad |s|_2 = \bar F(p_2) - |s|_1, \]
and
\[
\begin{aligned}
\Lambda(|s|_1) &= p_1 \min\{|s|_1, q_L\} + p_2 \min\{|s|_2,\ \max\{q_L - |s|_1, 0\}\}, \\
\varphi(v, |s|_1) &= (v-p_2)\frac{\min\{|s|_2,\ \max\{q_L-|s|_1, 0\}\}}{|s|_2} - (v-p_1)\frac{\min\{|s|_1, q_L\}}{|s|_1}.
\end{aligned}
\]

A.1.2.3 Solving the Optimization Problem

We note that (Private-Opt) is a linear program with respect to $\mu$ for fixed $p_1$ and $p_2$, and therefore strong duality holds, so that the optimal dual and primal objective values are identical at any fixed $p_1$ and $p_2$. The dual problem can be written as
\[
\begin{aligned}
\min_{\lambda_H, \lambda_L, \lambda(\cdot)\geq 0} \quad & P(H)\lambda_H + P(L)\lambda_L & \text{(Dual)} \\
\text{s.t.} \quad & \lambda_H + \int_{v:\, s(v)=Buy,\, v\geq p_1} (p_1-p_2)\lambda(v) f(v)\, dv \geq (p_1-p_2)|s|_1 + p_2\bar F(p_2), \quad \forall s \in \mathcal{S}, & \text{(A.8)} \\
& \lambda_L + \int_{v:\, s(v)=Buy,\, v\geq p_1} \varphi(v,|s|_1)\lambda(v) f(v)\, dv \geq \Lambda(|s|_1), \quad \forall s \in \mathcal{S}. & \text{(A.9)}
\end{aligned}
\]
The constraints (A.8) and (A.9) can be re-expressed as
\[ \lambda_H \geq (p_1-p_2)|s|_1 - \int_{v:\, s(v)=Buy,\, v\geq p_1} (p_1-p_2)\lambda(v)f(v)\,dv + p_2\bar F(p_2), \quad \forall s\in\mathcal{S}, \tag{A.10} \]
\[ \lambda_L \geq \Lambda(|s|_1) - \int_{v:\, s(v)=Buy,\, v\geq p_1} \varphi(v,|s|_1)\lambda(v)f(v)\,dv, \quad \forall s\in\mathcal{S}. \tag{A.11} \]
Equivalently, the above optimization problem can be written as
\[
D(p_1,p_2) = \min_{\lambda(\cdot)\geq 0}\ \sup_{s_H, s_L \in \mathcal{S}} \Big[ P(H)(p_1-p_2)|s_H|_1 + P(L)\Lambda(|s_L|_1) + P(H)p_2\bar F(p_2) - \int_{v\geq p_1} \big( P(H)\,\mathbb{I}(s_H(v)=Buy)(p_1-p_2) + P(L)\,\mathbb{I}(s_L(v)=Buy)\,\varphi(v,|s_L|_1) \big)\lambda(v)f(v)\,dv \Big]. \tag{A.12}
\]
We further relax this optimization by replacing $\mathbb{I}(s_i(v)=Buy)$ with $g_i(v) \in [0,1]$ for $i = H,L$; using the notation $|g_i|_1 = \int_{v\geq p_1} g_i(v)f(v)\,dv$, this gives the upper bound
\[
D(p_1,p_2) \leq \min_{\lambda(\cdot)\geq 0}\ \sup_{0 \leq g_H(\cdot), g_L(\cdot) \leq 1} \Big[ P(H)(p_1-p_2)|g_H|_1 + P(L)\Lambda(|g_L|_1) + P(H)p_2\bar F(p_2) - \int_{v\geq p_1} \big( P(H)g_H(v)(p_1-p_2) + P(L)g_L(v)\varphi(v,|g_L|_1) \big)\lambda(v)f(v)\,dv \Big]. \tag{A.13}
\]
Note that for any optimal solution of the above, it must be the case that
\[ P(H)g_H(v)(p_1-p_2) \leq -P(L)\,\varphi(v,|g_L|_1)\,g_L(v), \quad \text{for almost all } v \text{ where } g_H(v)+g_L(v) > 0. \tag{A.14} \]
To see why, consider the case in which the above relation did not hold on a set of positive measure. Then, by setting $\lambda(v) = \infty$ on this set, we would obtain that the dual value is negative infinity, which would imply that the primal is infeasible; given that we know the primal is well posed, this is a contradiction. Further, if the relation in (A.14) held as a strict inequality on a set of positive measure, then the optimal solution would set $\lambda$ to zero on this set. Therefore, we must have
\[ \int_{v\geq p_1} \big( P(H)g_H(v)(p_1-p_2) + P(L)g_L(v)\varphi(v,|g_L|_1) \big)\lambda(v)f(v)\,dv = 0, \]
and replacing this in (A.13):
\[ D(p_1,p_2) \leq D' := \sup_{0\leq g_H(\cdot),\,g_L(\cdot)\leq 1,\ \text{(A.14)}} P(H)(p_1-p_2)\int_{v\geq p_1} g_H(v)f(v)\,dv + P(L)\Lambda(|g_L|_1) + P(H)p_2\bar F(p_2). \]
Using (A.14) and the fact that $g_H \leq 1$, we further have
\[ D' \leq \bar D(p_1,p_2) := \sup_{g_L(\cdot)\leq 1,\ \text{(A.14)}} P(H)(p_1-p_2)\int_{v\geq p_1} \min\left\{ \frac{P(L)}{P(H)}\frac{-\varphi(v,|g_L|_1)}{p_1-p_2}\,g_L(v),\ 1 \right\} f(v)\,dv + P(L)\Lambda(|g_L|_1) + P(H)p_2\bar F(p_2). \]
We can further rewrite this optimization problem as
\[ \bar D(p_1,p_2) = \max_{x \leq \bar F(p_1)}\ \sup_{g_L(\cdot)\leq 1,\ |g_L|_1 = x,\ \text{(A.14)}} P(H)(p_1-p_2)\int_{v\geq p_1} \min\left\{ \frac{P(L)}{P(H)}\frac{-\varphi(v,x)}{p_1-p_2}\,g_L(v),\ 1 \right\} f(v)\,dv + P(L)\Lambda(x) + P(H)p_2\bar F(p_2). \tag{A.15} \]
Noting that $-\varphi(v,x)$ is increasing in $v$, that the constraint $|g_L|_1 = x$ is equivalent to $\int_{v\geq p_1} g_L(v)f(v)\,dv = x$, and that, since $g_H(v) \geq 0$, we must have $-\varphi(\bar F^{-1}(x), x) \geq 0$, it follows that the solution to the inner maximization in (A.15) is to set $g_L(v) = 1$ for all $v \geq \bar F^{-1}(x)$.
For notational convenience we write $t = \bar F^{-1}(x)$ and obtain
\[ \bar D(p_1,p_2) = \max_{t\geq p_1,\ \varphi(t,\bar F(t)) \leq 0} P(H)(p_1-p_2)\int_{v\geq t} \min\left\{ \frac{P(L)}{P(H)}\frac{-\varphi(v,\bar F(t))}{p_1-p_2},\ 1 \right\} f(v)\,dv + P(L)\Lambda(\bar F(t)) + P(H)p_2\bar F(p_2). \tag{A.16} \]
Fixing an optimal solution $t^*$ to the above problem, we switch to the corresponding primal problem and note that the following primal signaling mechanism achieves $\bar D(p_1,p_2)$, as expressed in (A.16), as its revenue:
(A) The $L$-type firm sends Buy to all $v$-customers with $v \geq t^*$.
(B) The $H$-type firm sends Buy to a $v$-customer with $v \geq t^*$ with probability $\min\left\{ \frac{P(L)}{P(H)} \frac{-\varphi(v,\bar F(t^*))}{p_1-p_2},\ 1 \right\}$.
We refer to the signaling mechanism above as the $(t^*,p_1,p_2)$-mechanism. After observing that the above is feasible in the original problem (i.e., it satisfies all the constraints (Buy-1)-(No)), we have proven the following lemma.

Lemma A.1.3. Let $t^*$ be the optimal solution to the optimization problem (A.16). Then, for a pair of prices $p_1, p_2$, the optimal private signaling mechanism is the $(t^*,p_1,p_2)$-mechanism.

In the remainder of this section, we turn our attention to the joint optimization problem and consider several cases, as presented in the subsections that follow. For simplicity of exposition, we denote
\[ \hat D(\nu, t, p_1, p_2) := P(L)\int_t^{\nu} \big( -\varphi(v, \bar F(t)) \big) f(v)\,dv + P(H)(p_1-p_2)\bar F(\nu) + P(L)\Lambda(\bar F(t)) + P(H)p_2\bar F(p_2). \tag{A.17} \]
Denoting by $\bar\nu$ the solution to $\frac{P(L)}{P(H)}\frac{-\varphi(\bar\nu,\, \bar F(t))}{p_1-p_2} = 1$, the optimization problem (A.16) can be rewritten as
\[ \bar D(p_1,p_2) = \max_{t\geq p_1,\ \varphi(t,\bar F(t))\leq 0} \hat D(\bar\nu, t, p_1, p_2). \tag{A.18} \]
We further note that if $\bar\nu \leq \sup_{v\in V} v$, then it is the unique solution to $\frac{\partial}{\partial\nu}\hat D(\nu,t,p_1,p_2) = 0$.

Maximizing $\bar D$ over prices leads to the following result (which, for expositional convenience, is proved in the next section, Appendix A.1.2.4).

Lemma A.1.4. The optimal solution to (A.18) is such that either (a) $\bar F(p_2) \leq q_L$, or (b) $\bar F(p_1) = q_L$.

Case (a) of Lemma A.1.4 trivially corresponds to the case in which signaling is irrelevant, because availability equals one in both periods for both firm types. Thus, this case is dominated by public signaling, in which setting $\bar F(p_2) \leq q_L$ is a feasible option. It follows that any mechanism corresponding to case (a) is dominated by an optimal public signaling mechanism as characterized in Theorem 1.5.1.

Case (b) of Lemma A.1.4 sets $\bar F(p_1) = q_L$. The corresponding revenue equals
\[ P(L)\left( p_1 q_L + \int_{p_1}^{\bar\nu} (v-p_1)f(v)\,dv \right) + P(H)\big( (p_1-p_2)\bar F(\bar\nu) + p_2\bar F(p_2) \big), \]
where $\bar\nu = p_1 + \frac{P(H)}{P(L)}(p_1-p_2)$. A straightforward optimization over $p_2$ yields that the private signaling mechanism that satisfies (PS1) and (PS2) achieves the optimal revenue among mechanisms that set $\bar F(p_1) = q_L$.

For the case $q_L \leq \bar F(p_m)$, toward a contradiction, assume that case (a) were optimal. Then the revenue would equal the optimal public signaling revenue
\[ R = P(L)\, q_L \bar F^{-1}(q_L) + P(H)\, p_m \bar F(p_m). \]
Now consider the proposed private mechanism. First, we establish the existence of a solution to (1.14) with $p_2 < p_m \leq p_1$. To see this, notice that $\frac{d}{dp_2}\, p_2\bar F(p_2) = \bar F(p_2) - p_2 f(p_2)$, which takes the value 1 at $p_2 = 0$ and the value 0 at $p_2 = p_m$ (because $p_m$ is the unconstrained revenue optimizer); further, this derivative is continuous. It follows that (1.14) has a solution $p_2 \in (0, p_m)$. Noting that $p_1 \geq p_m$ in this case, the existence follows, and
\[ p_2\bar F(p_2) - p_2\bar F(\bar\nu) > p_m\bar F(p_m) - p_m\bar F(\bar\nu). \]
Therefore, the proposed mechanism is well defined, and the achieved revenue is
\[ R' = P(L)\left( p_1 q_L + \int_{p_1}^{\bar\nu}(v-p_1)f(v)\,dv \right) + P(H)\big( (p_1-p_2)\bar F(\bar\nu) + p_2\bar F(p_2) \big) > P(L)\,q_L\bar F^{-1}(q_L) + P(H)\,p_m\bar F(p_m), \]
and since the last term is positive, by the assumptions of the lemma, $R' > R$, contradicting the optimality of a public signaling mechanism. Therefore, for the case $q_L \leq \bar F(p_m)$, case (b) occurs and the private signaling mechanism that satisfies (PS1) and (PS2) is the optimal private signaling mechanism. This completes the proof of Theorem 1.5.2.

We now turn to the case $q_L > \bar F(p_m)$. Recall that if case (a) of Lemma A.1.4 is satisfied, then the optimal solution must be an optimal public signaling mechanism. Suppose instead that case (b) of the lemma holds, with $\bar F(p_1) = q_L$. Then the revenue equals
\[ P(L)\left( p_1 q_L + \int_{p_1}^{\bar\nu}(v-p_1)f(v)\,dv \right) + P(H)\big( (p_1-p_2)\bar F(\bar\nu) + p_2\bar F(p_2) \big), \]
where $\bar\nu = p_1 + \frac{P(H)}{P(L)}(p_1-p_2)$. A straightforward optimization over $p_2$ yields that the optimal solution either lies in the interior of $[0, p_1]$, in which case the first-order conditions imply that the private signaling mechanism satisfying (PS1) and (PS2) achieves the optimal revenue, or sets $p_2 = p_1$, in which case a public signaling mechanism is optimal. Combining these two options yields Theorem 1.5.3.

A.1.2.4 Proof of Lemma A.1.4

We will prove that if $\bar F(p_2) > q_L$, then we must have $\bar F(p_1) = q_L$. We consider two cases: Case 1 considers $\bar F(p_1) \geq q_L$, so that availability is less than one in period 1 for the $L$-type firm; Case 2 considers $\bar F(p_1) \leq q_L$, so that the availability in period 2 for the $L$-type firm may be non-zero.

Case 1: $\bar F(p_1) \geq q_L$. In this case, consider a relaxed version of (A.16), obtained by removing the constraint $\varphi(t, \bar F(t)) \leq 0$:
\[ \tilde D(p_1,p_2) = \max_{t \geq p_1} \hat D(\bar\nu, t, p_1, p_2). \]
If $t \geq \bar F^{-1}(q_L)$, then
\[ \hat D(\bar\nu,t,p_1,p_2) = P(L)\int_t^{\bar\nu}\left( (v-p_1) - (v-p_2)\frac{q_L - \bar F(t)}{\bar F(p_2)-\bar F(t)} \right) f(v)\,dv + P(H)(p_1-p_2)\bar F(\bar\nu) + P(L)\big( p_1\bar F(t) + p_2(q_L - \bar F(t)) \big) + P(H)p_2\bar F(p_2), \]
which is decreasing in $t$. Therefore, without loss of optimality, the optimization in (A.18) can be performed over $[p_1, \bar F^{-1}(q_L)]$. Suppose $p_1 < \bar F^{-1}(q_L)$. Then, for any $t \in [p_1, \bar F^{-1}(q_L)]$,
\[ \frac{\partial}{\partial t}\hat D(\bar\nu,t,p_1,p_2) = \frac{\partial}{\partial t}\left( \frac{q_L}{\bar F(t)}\, P(L)\int_t^{\bar\nu}(v-p_1)f(v)\,dv \right) = P(L)\,f(t)\,\frac{q_L}{\bar F(t)}\left( \frac{1}{\bar F(t)}\int_t^{\bar\nu}(v-p_1)f(v)\,dv - (t-p_1) \right), \tag{A.19} \]
and therefore, at $t = p_1$,
\[ \frac{\partial}{\partial t}\hat D(\bar\nu, t = p_1, p_1, p_2) > 0. \]
It follows that any optimal solution satisfies
\[ t > p_1, \tag{A.20} \]
and without loss of optimality the optimization in (A.18) can be performed over $(p_1, \bar F^{-1}(q_L)]$. If the optimum is achieved at a (strictly) interior point, then by the envelope theorem
\[ \frac{\partial}{\partial p_1}\tilde D(p_1,p_2) = \frac{\partial}{\partial p_1} \max_{p_1 < t \leq \bar F^{-1}(q_L)}\left( P(L)\int_t^{\bar\nu}(v-p_1)\frac{q_L}{\bar F(t)}f(v)\,dv + P(H)(p_1-p_2)\bar F(\bar\nu) + P(L)p_1 q_L \right) = \left( P(H) + P(L)\frac{q_L}{\bar F(t)} \right)\bar F(\bar\nu) \geq 0. \tag{A.22} \]
If, instead, the optimal solution is achieved at the corner point $t = \bar F^{-1}(q_L)$, then direct differentiation (without appealing to the envelope theorem) also yields a relation identical to (A.22), and therefore we always have $\frac{\partial}{\partial p_1}\tilde D(p_1,p_2) \geq 0$. This gives us $p_1 = t$, which contradicts (A.20). Therefore, in this case, an optimal solution of the joint pricing-signaling problem is $t = p_1 = \bar F^{-1}(q_L)$.

Case 2: $q_L \geq \bar F(p_1)$. In this case, the first constraint of (A.16), $t \geq p_1$, also implies
\[ t \geq p_1 \geq \bar F^{-1}(q_L), \tag{A.23} \]
and therefore the second constraint, $\varphi(t, \bar F(t)) \leq 0$, can be written as
\[ \frac{t-p_1}{t-p_2} \geq \frac{q_L - \bar F(t)}{\bar F(p_2) - \bar F(t)}. \tag{A.24} \]
For any fixed prices $p_1, p_2$, let $\tilde t(p_1,p_2)$ denote the optimal solution to (A.16); then
\[ \tilde t(p_1,p_2) = \min\left\{ t \geq p_1 : \frac{t-p_1}{t-p_2} \geq \frac{q_L-\bar F(t)}{\bar F(p_2)-\bar F(t)} \right\}. \tag{A.25} \]
Note here that $\tilde t(p_1,p_2)$ is non-decreasing in $p_1$.
Indeed, consider two prices $p_1 \leq p_1'$ and suppose that $\tilde t(p_1',p_2) < \tilde t(p_1,p_2)$. Then
\[ \tilde t(p_1',p_2) - p_1 \geq \tilde t(p_1',p_2) - p_1' \geq \big( \tilde t(p_1',p_2) - p_2 \big)\frac{q_L - \bar F(\tilde t(p_1',p_2))}{\bar F(p_2) - \bar F(\tilde t(p_1',p_2))}, \]
where the first inequality follows from $p_1 \leq p_1'$ and the second from the definition of $\tilde t(p_1',p_2)$. Therefore,
\[ \frac{\tilde t(p_1',p_2) - p_1}{\tilde t(p_1',p_2) - p_2} \geq \frac{q_L - \bar F(\tilde t(p_1',p_2))}{\bar F(p_2) - \bar F(\tilde t(p_1',p_2))}, \]
and hence $\tilde t(p_1,p_2) \leq \tilde t(p_1',p_2)$ by the definition of the former, which is a contradiction. Therefore, $\tilde t(p_1,p_2)$ is non-decreasing in $p_1$, and hence
\[ \frac{\partial \tilde t(p_1,p_2)}{\partial p_1} \geq 0. \tag{A.26} \]
We consider the case in which (A.25) has a solution with $\tilde t(p_1,p_2) > p_1$. For notational convenience, we write $\tilde t$ instead of $\tilde t(p_1,p_2)$ in what follows. It has to be the case that
\[ \frac{\tilde t - p_1}{\tilde t - p_2} = \frac{q_L - \bar F(\tilde t)}{\bar F(p_2) - \bar F(\tilde t)}, \]
otherwise any smaller $t$ would still be feasible and satisfy the defining inequality. Therefore, we can write our optimization problem as
\[ \max_{t\geq p_1,\ p_1\geq p_2,\ (t-p_1)/(t-p_2) = (q_L-\bar F(t))/(\bar F(p_2)-\bar F(t))} \tilde D(\bar\nu, t, p_1, p_2), \tag{A.27} \]
where
\[ \tilde D(\bar\nu,t,p_1,p_2) = P(L)\int_t^{\bar\nu}\left( (v-p_1) - (v-p_2)\frac{t-p_1}{t-p_2} \right) f(v)\,dv + P(H)(p_1-p_2)\bar F(\bar\nu) + P(H)p_2\bar F(p_2) + P(L)p_2 q_L + P(L)(p_1-p_2)\bar F(t). \]
Notice that $\bar\nu$ solves $\frac{P(L)}{P(H)}\frac{-\varphi(\bar\nu,\bar F(t))}{p_1-p_2} = 1$, which in this case yields
\[ \bar\nu = p_2 + \frac{\tilde t - p_2}{P(L)}. \tag{A.28} \]
We will prove that $\frac{d\tilde D}{dp_1} < 0$ for all such $p_1$, which implies that the maximization in (A.27) is achieved at $\bar F(p_1) = q_L$. We have
\[ \frac{d\tilde D}{dp_1} = \bar F(\bar\nu) - P(L)\int_{\tilde t}^{\bar\nu}(v-p_2)\frac{d\alpha}{dp_1}\, f(v)\,dv - P(L)(p_1-p_2)f(\tilde t)\frac{d\tilde t}{dp_1}, \tag{A.29} \]
where $\alpha = \frac{\tilde t-p_1}{\tilde t-p_2}$ is the availability in period 2. Using the implicit function theorem, we have
\[ \frac{d\tilde t}{dp_1} = \frac{\tilde t-p_2}{(p_1-p_2)(1-\kappa)}, \tag{A.30} \]
where
\[ \kappa = f(\tilde t)\,\frac{\tilde t-p_2}{\bar F(p_2)-\bar F(\tilde t)}. \tag{A.31} \]
By (A.26), when $\tilde t > p_1$ we must have $\kappa \leq 1$. We also have
\[ \frac{d\alpha}{dp_1} = \frac{1}{\tilde t-p_2}\left( \frac{1}{1-\kappa} - 1 \right). \tag{A.32} \]
Using (A.30) and (A.32) in (A.29), we obtain
\[
\begin{aligned}
\frac{d\tilde D}{dp_1} &= \bar F(\bar\nu) - P(L)\frac{\tilde t-p_2}{1-\kappa}f(\tilde t) - P(L)\frac{\kappa}{1-\kappa}\int_{\tilde t}^{\bar\nu}\frac{v-p_2}{\tilde t-p_2}\, f(v)\,dv \\
&\leq \bar F(\bar\nu) - P(L)\frac{\tilde t-p_2}{1-\kappa}f(\tilde t) + P(L)\frac{\kappa}{1-\kappa}\big( \bar F(\bar\nu) - \bar F(\tilde t) \big) \\
&= P(L)\big( \Phi(P(L)) - z \big),
\end{aligned} \tag{A.33}
\]
where
\[ \Phi(y) = \left( \frac{1}{y} + \frac{\kappa}{1-\kappa} \right)\bar F(\omega(y)), \qquad z = \frac{\tilde t-p_2}{1-\kappa}f(\tilde t) + \frac{\kappa}{1-\kappa}\bar F(\tilde t), \]
and $\omega(y) = p_2 + \frac{\tilde t-p_2}{y}$, so that $\omega(P(L)) = \bar\nu$. We will prove that $\Phi(\cdot)$ is increasing and that $\Phi(1) \leq z$, so that the non-positivity of $\Phi(1) - z$ implies that $\frac{d\tilde D}{dp_1}$ is also negative, completing the result. Using $\omega'(y) = -\frac{\tilde t-p_2}{y^2}$, we have
\[
\Phi'(y) = -\frac{1}{y^2}\bar F(\omega(y)) + \left( \frac{1}{y}+\frac{\kappa}{1-\kappa} \right) f(\omega(y))\frac{\tilde t-p_2}{y^2} = \frac{\bar F(\omega(y))}{y^2}\left( \frac{f(\omega(y))}{\bar F(\omega(y))}\Big( (\omega(y)-p_2) + (\tilde t-p_2)\frac{\kappa}{1-\kappa} \Big) - 1 \right) = \frac{\bar F(\omega(y))}{y^2}\big( \Psi(\omega(y)) - 1 \big), \tag{A.34}
\]
where $\Psi(\omega) := \frac{f(\omega)}{\bar F(\omega)}\big( (\omega-p_2) + (\tilde t-p_2)\frac{\kappa}{1-\kappa} \big)$. Notice that $\Psi(\omega)$ is increasing in $\omega$ because $F$ has a non-decreasing hazard rate. So, we can establish $\Phi' > 0$ if we prove that $\Psi(\tilde t) - 1 \geq 0$. This follows from our analysis in Lemma A.1.2 by noting that
\[ \Psi(\tilde t) - 1 = -\frac{\bar F(p_2)-\bar F(\tilde t)}{\bar F(\tilde t)(1-\kappa)}\cdot\frac{1}{\bar F(p_2)-q_L}\cdot\frac{\partial\pi(\tilde t,p_2)}{\partial\tilde t} \geq 0, \tag{A.35} \]
where $\frac{\partial\pi}{\partial\tilde t}$ is given in (A.6), and we have $\frac{\partial\pi(\tilde t,p_2)}{\partial\tilde t} \leq 0$ because $F$ has a non-decreasing hazard rate. Thus, we have established that $\Psi(\omega) \geq 1$, so that $\Phi(\cdot)$ is increasing and is therefore maximized at $y = 1$. Noticing that $\omega(1) = \tilde t$, it follows that
\[ \Phi(1) - z = \bar F(\tilde t) - \frac{\tilde t-p_2}{1-\kappa}f(\tilde t) = -\bar F(\tilde t)\big( \Psi(\tilde t) - 1 \big) \leq 0. \tag{A.36} \]
Using this, along with $\Phi$ being increasing, in (A.33) proves that if $\bar F(p_1) < q_L$, then decreasing $p_1$ increases revenues; thus, for optimality in this case, we must have $\bar F(p_1) = q_L$.
A.1.3 Proof of Theorem 1.7.1

Using similar (technical) arguments as in the case of public signaling, we conclude that without loss of optimality we can restrict our attention to binary signals for each class: either send Wait to class $i \in \{a,b\}$, in which case all customers (with valuation higher than $p_2$) in that class wait for the second period, or send Buy, in which case all agents with valuation larger than $t_i$ buy in the first period, where
\[ (t_i - p_1)P(A_1 \mid t_i) = (t_i - p_2)P(A_2 \mid t_i), \quad i \in \{a,b\}. \tag{A.37} \]
Clearly, when the size of the inventory is low, the firm tells both classes to Buy, and the non-decreasing hazard rate assumption ensures that in this case it is optimal for the firm to sell out, so that
\[ d_a \bar F_a(t_a) + d_b \bar F_b(t_b) = q_L. \tag{A.38} \]
Given this observation, the firm's revenue can be written as
\[
\begin{aligned}
R^{\text{coarse}} &= P(L)p_1 q_L + P(H)p_2\big( d_a\bar F_a(p_2) + d_b\bar F_b(p_2) \big) + P(H, t_a)(p_1-p_2)d_a\bar F_a(t_a) + P(H, t_b)(p_1-p_2)d_b\bar F_b(t_b) \\
&= P(L)\big[ t_a d_a\bar F_a(t_a) + t_b d_b\bar F_b(t_b) \big] + P(H)p_2\big[ d_a\bar F_a(p_2) + d_b\bar F_b(p_2) \big],
\end{aligned}
\]
where we used the fact that (A.37) can be equivalently written as
\[ (p_1-p_2)P(H, t_i) = (t_i-p_1)P(L), \quad \text{for } i \in \{a,b\}, \]
which is equivalent to
\[ P(t_i \mid H) = \frac{P(L)}{P(H)}\cdot\frac{t_i-p_1}{p_1-p_2}. \tag{A.39} \]
Hence, the firm's optimization problem can be written as
\[
\begin{aligned}
\underset{t_a, t_b, p_2}{\text{maximize}} \quad & P(L)\big[ t_a d_a\bar F_a(t_a) + t_b d_b\bar F_b(t_b) \big] + P(H)p_2\big[ d_a\bar F_a(p_2) + d_b\bar F_b(p_2) \big] \\
\text{subject to} \quad & d_a\bar F_a(t_a) + d_b\bar F_b(t_b) = q_L.
\end{aligned} \tag{A.40}
\]
The above is separable, and therefore
\[ p_2^* \in \arg\max_p\ p\big( d_a\bar F_a(p) + d_b\bar F_b(p) \big), \]
and the optimal solution $(t_a^*, t_b^*)$ solves
\[ \max_{t_a, t_b}\ t_a d_a\bar F_a(t_a) + t_b d_b\bar F_b(t_b) \quad \text{s.t.} \quad d_a\bar F_a(t_a) + d_b\bar F_b(t_b) = q_L. \]
Because of the non-decreasing hazard rate assumption, the objective is concave, and thus there is a unique optimal solution $(t_a^*, t_b^*)$. Note that we have a degree of freedom here and can select any $p_1$ and $P(t_i \mid H)$, $i \in \{a,b\}$, that satisfy (A.39). Finally, if we have $t_a^* \neq t_b^*$ and $\bar F(t_a^*), \bar F(t_b^*) > 0$, then the fact that the optimal objective is strictly greater than $R$ follows because the optimization problem (A.40) is concave, by the non-decreasing hazard rate assumption, and has a unique solution. Therefore, any solution of the form $t_a = t_b$, which is equivalent to public signaling, or with $\bar F(t_a) = 0$ or $\bar F(t_b) = 0$, which are equivalent to single-class scenarios, will be strictly suboptimal.
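As a quick numeric illustration of the separable problem (A.40), the sketch below solves the $(t_a, t_b)$ sub-problem by substituting the capacity constraint into the objective and grid-searching over $t_a$; the two class distributions and all parameter values are made-up assumptions for illustration.

```python
# Minimal sketch of the (t_a, t_b) sub-problem in (A.40): maximize
# t_a*d_a*Fbar_a(t_a) + t_b*d_b*Fbar_b(t_b) subject to
# d_a*Fbar_a(t_a) + d_b*Fbar_b(t_b) = q_L, with illustrative uniform
# classes on [0, 1] and [0, 2] and made-up sizes d_a, d_b.
import numpy as np

d_a, d_b, q_L = 0.6, 0.4, 0.3
Fbar_a = lambda t: max(0.0, 1.0 - t)        # class a: uniform on [0, 1]
Fbar_b = lambda t: max(0.0, 1.0 - t / 2.0)  # class b: uniform on [0, 2]

best = None
for t_a in np.linspace(0.0, 1.0, 10001):
    rem = q_L - d_a * Fbar_a(t_a)           # mass class b must absorb
    if not 0.0 <= rem <= d_b:
        continue
    t_b = 2.0 * (1.0 - rem / d_b)           # invert Fbar_b
    val = t_a * d_a * Fbar_a(t_a) + t_b * d_b * Fbar_b(t_b)
    if best is None or val > best[0]:
        best = (round(val, 4), round(t_a, 3), round(t_b, 3))
print(best)  # optimal value and thresholds (t_a*, t_b*)
```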
A.2 Pricing After Uncertainty Realization Without Commitment

In this section, we focus on the quantity uncertainty model and consider a scenario in which the firm first observes its quantity $Q$ and then publicly posts prices for both periods, $(p_1,p_2) \in \mathbb{R}_+^2$. The customers update their beliefs on the firm type upon observing the prices, and then decide whether to purchase in period 1 or period 2. Sales and firm revenues are then generated based on the quantity and the customer actions. In this setting, the prices themselves communicate information about the firm type. So, for convenience, we focus on the case in which there is no separate means of communicating information (we later remark on why including such communication should not change the main result). We refer to this game as the price-signaling game. This game was first studied in Allon, Bassamboo, and Randhawa (2012), although in a different setting.

A.2.1 Equilibrium Definition

As in classical signaling models (Spence 1978), we consider pure strategy equilibria, and thus consider situations where each firm type deterministically chooses a pair of prices, i.e., the $H$-type firm posts prices $p^H = (p_1^H, p_2^H)$ and the $L$-type posts prices $p^L = (p_1^L, p_2^L)$. (Remark 1 at the end of this section briefly discusses how mixed strategies can be handled by our proof arguments.) Without loss of generality, we restrict attention to weakly decreasing prices. Customers form beliefs $\beta: \mathbb{R}_+^2 \to [0,1]$, where $\beta(p) = P(\theta = H \mid p)$ for $p \in \mathbb{R}_+^2$ represents the posterior probability that customers assign to the firm being of $H$-type when the posted prices are $p = (p_1, p_2)$. Because all customers observe the same information, we feel it is reasonable to restrict attention to common beliefs. As in the main body, customers time their purchase decisions to maximize their net value based on the observed prices $p$ and their beliefs, i.e.,
\[ x_v(p, \beta) \in \arg\max_{x\in\{0,1,2\}} \mathbb{I}(x \in \{1,2\})(v - p_x)\,P^{\beta(p)}(A_x \mid p), \tag{A.41} \]
where $P^{\beta(p)}(A_i) = \beta(p)P(A_i \mid Q = q_H, p) + (1 - \beta(p))P(A_i \mid Q = q_L, p)$. Thus, for the type-$\theta$ firm, sales in period $i$ are given by $D_i^\theta = \mathbb{E}[\int_V \mathbb{I}(A_i(v))\,dF(v) \mid Q = q_\theta]$, and the firm revenues are
\[ R^\theta(x, p, \beta) = \mathbb{E}\big( p_1 D_1^\theta + p_2 D_2^\theta \mid Q = q_\theta \big). \]
We use the subgame perfect Bayesian Nash equilibrium concept, as is standard in classical signaling games (Spence 1978). Thus, an equilibrium comprises customer actions $x^e = \{x_v : v \in V\}$, prices $\{p^{H,e}, p^{L,e}\}$, and beliefs $\beta(\cdot)$, such that $x^e$ satisfies (A.41),
\[ R^{H,e} := R^H(x^e, p^{H,e}, \beta) \geq R^H(x^e, p, \beta), \tag{A.42} \]
\[ R^{L,e} := R^L(x^e, p^{L,e}, \beta) \geq R^L(x^e, p, \beta), \tag{A.43} \]
for any $p \in \mathbb{R}_+^2$, and the beliefs $\beta(\cdot)$ are updated using Bayes' rule wherever applicable. Note that in any such equilibrium, we can compute the firm's ex-ante (prior to quantity realization) expected revenue as $P(H)R^{H,e} + P(L)R^{L,e}$.

As is typical in such signaling models, we expect the presence of separating and pooling equilibria. In separating equilibria, the two firm types choose different prices, i.e., $p^{H,e} \neq p^{L,e}$, and thus the firm type is fully revealed upon posting prices. A pooling equilibrium involves both firm types setting the same prices $p^{H,e} = p^{L,e} = p^{\text{pool}}$; upon observing these prices, customers' posterior beliefs remain equal to their prior beliefs.

A.2.2 Preliminary Result

Before analyzing the equilibria, we establish the following result on prices and revenue if the firm type were $L$ with certainty. We will use the notation
\[ \bar p = \max\{ \bar F^{-1}(q_L),\ p_m \}. \tag{A.44} \]

Lemma A.2.1. For the case $P(L) = 1$, the firm's optimal revenue equals $R^{L,*} := \bar p \bar F(\bar p)$. Further, if $\bar F(p_m) > q_L$, then any period-two price $p_2$ such that $\bar F(p_2) > q_L$ is sub-optimal, i.e., $R^L(x^e, (p, p_2), \beta) < R^{L,*}$ for any $p \geq 0$.

Proof. Arguing as in the proof of Theorem 1.5.1, we can establish that for posted prices $(p_1, p_2)$, we can write the firm's revenue as
\[ R^L = (p_1 - p_2)\bar F(t)P(A_1) + p_2 \min\{ q_L, \bar F(p_2) \}, \tag{A.45} \]
where $t$ denotes the valuation of the customer indifferent between purchasing in either period, i.e., $v$-customers with $v \geq t$ purchase in period 1 at price $p_1$, and $v$-customers with $p_2 \leq v \leq t$ purchase at price $p_2$ in period 2. Notice that the valuation $t$ behaves similarly to the threshold in Section A.1.1. Specifically, we have
\[ (t - p_1)P(A_1) = (t - p_2)P(A_2). \]
Using this relation in (A.45) allows us to rewrite the firm's revenue as
\[ R^L = R^L(t, p_2) := \pi(t, p_2) + p_2 \min\{ q_L, \bar F(p_2) \}, \tag{A.46} \]
where $\pi(t, p_2) = (t - p_2)\bar F(t)(P(A_1) - P(A_2))$ is identical to that defined in (A.4). Thus, for $\bar F(p_2) \leq q_L$, we have $R^L = p_2\bar F(p_2)$.
For $\bar{F}(p_2) > q_L$, we can use the arguments in Lemma A.1.2 to obtain $\max_t \Pi(t, p_2) = (\bar{F}^{-1}(q_L) - p_2) q_L$. Thus, it follows that

$R^{L,*} = \max_{t, p_2} R^L(t, p_2) = \max_{p_2} \left( p_2 \min\{q_L, \bar{F}(p_2)\} \right).$

The result then follows by noting that the valuation distribution has a non-decreasing hazard rate, which implies that $p_2 \bar{F}(p_2)$ is quasi-concave.

A.2.3 Result

We will formally establish that there is no value to the firm in using prices as a signal ex post, after quantity realization. Specifically, we will show that in any equilibrium, the firm's ex-ante expected revenue equals that under full information, $R^{full\,information} = P(L)\, \bar{p}\, \bar{F}(\bar{p}) + P(H)\, p_m \bar{F}(p_m)$, which also equals $R^{public}$ in this setting. It follows that personalized information provisioning with prices committed ex ante dominates.

Proposition A.2.2. In the price-signaling game, the firm's ex-ante expected revenue in any equilibrium equals the expected revenue in the game in which customers have full information on firm type.

Proof. We will prove that in any equilibrium the revenues of each firm type are equal to their optimal revenues when customers have full information on firm type.

We first prove that in any equilibrium, the L-type firm prices at $(p_1^L, \bar{p})$ over both periods for some $p_1^L \geq \bar{p}$ and obtains its full information revenue $R^{L,*}$. Note that under such pricing, availability equals one in both periods, and thus all customers would purchase at the lowest price possible, $\bar{p}$ (irrespective of their beliefs), and the firm would obtain a revenue of $R^{L,*}$. Toward a contradiction, suppose the L-type firm has a different equilibrium pricing strategy. Clearly, this cannot be a separating equilibrium because, applying Lemma A.2.1, any such strategy would result in a revenue strictly less than $R^{L,*}$. Suppose this were part of a pooling equilibrium in which both firm types price at $p = (p_1, p_2)$ with $p_2 \neq \bar{p}$, and the corresponding customer strategies and beliefs are denoted by $x$ and $\mu$, respectively. Then, the L-type firm's revenue from following this strategy, $R^L(x, p, \mu)$, satisfies the following relation:

$R^L(x, p, \mu) \leq R^L(x', p, \mu = 0) < R^{L,*},$ (A.47)

where $x'$ denotes the customers' equilibrium strategy at the price $p$ but with belief $\mu = 0$. The first inequality in (A.47) holds because the L-type firm's revenue under the pooling equilibrium is dominated by its revenue at the same prices when the customers believe it is L-type with certainty, and the second inequality holds by applying Lemma A.2.1. Thus, if the L-type firm were to set its prices to any $(p_1^L, p_2^L)$ with $p_1^L \geq p_2^L = \bar{p}$, then its revenue would equal $R^{L,*}$ under any customer beliefs (even if customers believed its type to be H with certainty). It follows that in any equilibrium, the L-type firm always prices at $(p_1^L, \bar{p})$ for some $p_1^L \geq \bar{p}$ and obtains revenue $R^{L,*}$.
Now, consider the H-type firm. Given the L-type firm's dominant strategy, it can choose to either separate or pool. In the separating case, because the H-type firm's type would be identified by its prices, its optimal revenue would be attained by pricing at $(p_1^H, p_m)$ for any $p_1^H \geq p_m$, in which case it would achieve its full information revenue $R^{H,*} := p_m \bar{F}(p_m)$, because all customers would purchase at price $p_m$ in period 2. If the H-type firm were to pool at the L-type firm's price, then it would receive an equilibrium revenue of $\bar{p}\, \bar{F}(\bar{p}) \leq p_m \bar{F}(p_m)$. Thus, if $\bar{p} \neq p_m$, the H-type firm would deviate and set the price at $(p_1^H, p_m)$ for any $p_1^H \geq p_m$. It follows that the H-type firm's revenue under any equilibrium would equal $R^{H,*}$, its optimal revenue when customers have full information on its type.

Finally, combining the above arguments, we can also identify a number of equilibria for this game. Specifically, any pricing strategy of the form $(p_1^L, \bar{p})$ for the L-type firm and $(p_1^H, p_m)$ for the H-type firm, together with corresponding beliefs and customers purchasing in period 2, constitutes an equilibrium.

Remark 1 (Mixed strategy equilibria). Note that the argument used in the proof of Proposition A.2.2 to establish the dominant strategies of each firm type can be extended to incorporate mixed strategies as well. We omit the details.

Remark 2 (Adding information to prices). One can consider a variant of this price-signaling game in which firms can engage in cheap talk to send costless messages (without commitment) in addition to setting posted prices. Clearly, such cheap talk would be relevant only when both firm types price identically. It is easy to see that we will only have babbling equilibria here; this is so because the H-type can weakly increase its revenue by sending any potentially communicative message that the L-type firm sends. It follows that even under cheap talk, the ex-ante expected firm revenue does not dominate $R^{full\,information}$.

A.3 Solution Concept for Continuum of Customers

Generalized definition of signaling mechanism. In this section, we present our framework using the formalisms adopted in the game theory literature when working with a continuum of players. Following each definition, we explain how the concepts introduced in Section 1.3 relate to these definitions.

Denoting the signal space by $S$, we define a signaling mechanism $\pi'$ as a joint probability measure $\pi'$ on $S \times V \times \{q_L, q_H\}$, so that (i) the marginal of $\pi'$ on $V$ (denoted by $\pi'_v$) is consistent with the valuation distribution, $\pi'_v(\{v : v \leq x\}) = F(x)$ for all $x \in V$, and (ii) the marginal of $\pi'$ on $\{q_L, q_H\}$ (denoted by $\pi'_q$) is consistent with the inventory size distribution, $\pi'_q(Q = q) = P(Q = q)$ for $q \in \{q_L, q_H\}$. The (ex-post) utility of a $v$-customer who takes action $x_v \in \{0, 1, 2\}$ is then equal to

$\mathbb{1}(x_v \in \{1, 2\})\, \mathbb{1}(A_{x_v}(v))\, (v - p_{x_v}).$

Equilibrium definition. Given a signaling mechanism $\pi'$ and prices $p_1, p_2$, a Bayesian Nash equilibrium can be defined as follows:

Definition 13. Given a signaling mechanism $\pi'$ with signal space $S$, a joint probability measure $\mu'$ on $S \times V \times \{q_L, q_H\} \times \{0, 1, 2\}$ is a Bayesian Nash equilibrium if

(i) The marginal of $\mu'$ on $V$ (denoted by $\mu'_v$) is consistent with the valuation distribution, i.e., $\mu'_v(\{v : v \leq y\}) = F(y)$ for all $y \in V$;

(ii) The marginal of $\mu'$ on $S \times V \times \{q_L, q_H\}$ (denoted by $\mu'_{s,v,q}$) is consistent with the signaling mechanism, i.e., $\mu'_{s,v,q} = \pi'$;

(iii) $\mu'\left( \left\{ (s, v, q, x) : x_v \in \arg\max_{z \in \{0,1,2\}} \mathbb{1}(z \in \{1,2\})\, \mu'\left( \{A_z(v)\} \mid s_v = s, x_v = z \right)(v - p_z) \right\} \right) = 1$;

(iv) Posterior beliefs are derived using Bayes' rule whenever possible.

We denote by $T(\pi', p_1, p_2)$ the set of Bayesian Nash equilibria corresponding to the signaling mechanism $\pi'$ and prices $p_1, p_2$. Given a Bayesian Nash equilibrium $\mu'$, we denote by $D_t$ the random variable corresponding to the mass of customers who get the product in period $t$, and calculate the firm's expected revenue corresponding to this equilibrium as

$R(\mu') = E_{\mu'}[p_1 D_1 + p_2 D_2].$ (A.48)

The expected revenue corresponding to a signaling mechanism $\pi'$ and prices $p_1, p_2$ is given by

$R^e(\pi', p_1, p_2) = \max_{\mu' \in T(\pi', p_1, p_2)} R(\mu').$ (A.49)

Our solution concept of a Sender-Preferred Subgame Perfect Bayesian equilibrium also extends in a natural manner as follows:

Definition 14.
A Sender-Preferred Subgame Perfect Bayesian equilibrium consists of a signaling mechanism $\pi'$, a pricing mechanism $p_1$ and $p_2$, a joint probability measure $\mu'$, and a set of beliefs $\{\mu_v\}_{v \in V}$ such that the following conditions are satisfied:

1. Posterior beliefs $\{\mu_v\}_{v \in V}$ are derived using Bayes' rule.
2. $\mu'$ is a Bayesian Nash equilibrium, consistent with $\pi'$, as defined in Definition 13.
3. The signaling mechanism $\pi'$ and prices $p_1$ and $p_2$ solve $\max_{\pi, p_1', p_2'} R^e(\pi, p_1', p_2')$.
4. The equilibrium $\mu'$ belongs to the best equilibria from the perspective of the firm (the sender), i.e., $\mu' \in \arg\max_{y \in T(\pi', p_1, p_2)} R(y)$.

Constructing a generalized signaling mechanism using a Section 1.3 mechanism. For any signaling mechanism $\sigma$ as defined in Section 1.3, one can construct the generalized joint probability measure $\pi'$ as

$\pi'(B) := P\left( \omega : \sigma(v, \omega) = s,\ (s, v, Q(\omega)) \in B \right)$

for any $B$ in the Borel $\sigma$-algebra of $S \times V \times \{q_L, q_H\}$. Further, for any strategy profile $x_v(s_v, p_1, p_2)$, the corresponding joint probability measure $\mu'$ can be constructed as

$\mu'(B) = P\left( \omega : \sigma(v, \omega) = s,\ x_v(s_v, p_1, p_2) = x,\ (s_v(\omega), v, Q(\omega), x) \in B \right),$

for all $B$ in the Borel $\sigma$-algebra of $S \times V \times \{q_L, q_H\} \times \{0, 1, 2\}$. It is straightforward to verify that for any signaling mechanism and strategy profile $x_v$ that is part of a Bayesian Nash equilibrium as per Definition 1, the corresponding joint probability measure $\mu'$ is a Bayesian Nash equilibrium as per Definition 13. Further, if the original signaling mechanism is a sender-preferred equilibrium as per Definition 2, then the corresponding joint probability measure $\mu'$ is also a sender-preferred equilibrium as per Definition 14.

A.4 Value of Public Signaling

In this section, we further discuss the key model features that lead to no realized benefit of public information, and discuss the settings in which such signaling may indeed yield benefits. We have proved that public information provisioning does not improve revenues relative to providing full or no information, and that private information provisioning strictly dominates public information provisioning. The key driver of the former effect in our setting is the firm's ability to price; that of the latter is customer heterogeneity in valuations. To illustrate these, we return to our setting in Section 1.4 with $q_L = 1/2$ and prior $P(Q = q_H) = 1/2$. Let us fix prices at $p_1 = 0.4$ and $p_2 = 0.2$. In this case, if the firm provides no information, all customers purchase in period 2 and the firm's revenue equals $R^{no\,information} = 0.13$. Truthful communication yields $R^{full\,information} = 0.18$, and straightforward algebra yields the optimal public mechanism: the L-type firm sends the Buy signal with certainty and the H-type firm sends the Buy signal with probability 1/2, which yields the optimal revenue $R^{public} = 0.205$. Thus, public signaling can be beneficial with fixed prices. (In this setting with sub-optimally fixed prices, private signaling yields a revenue $R^{private} = 0.23$.) Notice that these revenues are lower than those achievable when the firm can adjust prices.
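The first three revenue figures above can be reproduced with elementary arithmetic. The sketch below does so under our reading of the Section 1.4 example — a unit mass of customers with valuations Uniform$[0,1]$, ample high inventory, and the sender-preferred indifference threshold $t = 0.5$ after a Buy signal; these modeling assumptions are ours for the sketch.

```python
# Numerical check of the fixed-price example: p1 = 0.4, p2 = 0.2,
# q_L = 1/2, P(Q = q_H) = 1/2.  Assumes (our reading of the Section 1.4
# example) valuations Uniform[0,1] and ample inventory for the H-type.
p1, p2, qL, PH, PL = 0.4, 0.2, 0.5, 0.5, 0.5

# No information: everyone waits and tries to buy at p2.
demand2 = 1 - p2                          # mass with v >= p2 is 0.8
R_no_info = PH * p2 * demand2 + PL * p2 * min(qL, demand2)
print(round(R_no_info, 3))                # 0.13

# Full information: L-type sells out at p1 (top q_L of valuations buy
# early); all H-type customers wait and buy at p2.
R_full = PL * p1 * qL + PH * p2 * demand2
print(round(R_full, 3))                   # 0.18

# Optimal public mechanism: L sends Buy w.p. 1, H sends Buy w.p. 1/2.
# After Buy, customers with v >= t = 0.5 buy in period 1.
t = 0.5
R_L_buy = p1 * (1 - t) + p2 * min(qL - (1 - t), t - p2)   # = 0.20
R_H_buy = p1 * (1 - t) + p2 * (t - p2)                    # = 0.26
R_H_wait = p2 * demand2                                   # = 0.16
R_public = PL * R_L_buy + PH * (0.5 * R_H_buy + 0.5 * R_H_wait)
print(round(R_public, 3))                 # 0.205
```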
Consider a different scenario in which prices and quantity remain as before, but customers are homogeneous in their valuations. Specifically, we set $V = 1/2$ so that all customers have valuation 1/2. Then, based on the signaling mechanism, all customers either buy in period 1 or wait for period 2. Under no information, all customers wait for period 2 and $R^{no\,information} = 0.15$. Under full information, customers buy in period 1 when the firm is of L-type and in period 2 when the firm is of H-type, yielding $R^{full\,information} = 0.2$. The optimal public signaling mechanism has the L-type firm sending the Buy signal with certainty and the H-type firm sending the Buy signal with probability 1/4, which yields $R^{public} = 0.225$. Thus, public signaling has benefits. Notice that in this case all customers have identical valuations and thus are all indifferent between purchasing in either period upon receiving a Buy signal. Consequently, none of the customers has any positive surplus that the firm could extract by communicating in a heterogeneous fashion. Thus, private signaling does not generate more revenue, and public signaling is optimal in this scenario. Lingenbrink and Iyer (2018) formalize this result for the case of homogeneous customer valuations with fixed prices in a more sophisticated setting with demand uncertainty.

It is also worth considering the role of public information in other relaxations of our base model. In the base model, we have two periods and two firm types. In this setting, public information provisioning entails identifying one decision variable: the probability that the H-type sends the same signal as the L-type. The proof of Theorem 1.5.1 (and its sketch described in Section 1.4.2) can be distilled into the observation that we have one constraint, that of the indifference threshold. This constraint removes one degree of freedom from our decision variables, leaving two free variables. Given the two periods, we can thus obtain the optimal revenue by simply pricing appropriately in both periods and fixing the public information at any level. This argument holds even if the H-type firm had limited quantity. If we had additional firm types, say $K$ types, then an argument similar to that in the proof of Theorem 1.5.1 shows that we would have $K - 1$ potential thresholds, with a potential public signaling mechanism in which each firm type except the one with the highest quantity signals a unique threshold, and the highest-quantity firm sends a signal whose distribution has support including all these thresholds. Consequently, we would potentially have $K - 1$ constraints due to the corresponding indifference thresholds and $K - 1$ informational decision variables corresponding to the probability mass the highest firm type places on each threshold. At first glance, this places us in the same scenario as with two types; the additional decision variables provided by public signaling are counterbalanced by the equal number of constraints that need to be satisfied. However, unlike the two-type setting, the constraint that the highest firm type's signaling should comprise a probability distribution is not trivial to satisfy. Thus, under some parameters the case of $K > 2$ types may realize some value for public signaling, while under other parameters this case too exhibits no value of public signaling. The same argument extends to multiple periods, which adds pricing decision variables and in this sense may diminish the value of public signaling further. Another dimension that can be relaxed in the base model is customer demand. As we discuss in Section 1.7.3, if the firm faces demand uncertainty, then public signaling can generate value.

Appendix B

B.1 Proofs of Section 2.4

B.1.1 Proof of Lemma 2.4.1

Proof. We begin by noticing that

$\arg\max_{x_i} E[U_i \mid a_i^f, a_i^v, a_{-i}^v, a^*] = \arg\max_{x_i} P(x_i = S \mid a_i^f, a_i^v, a_{-i}^v, a^*),$ (B.1)

since the other terms in the utility are independent of $x_i$. Furthermore,

$\max_{x_i} P(x_i = S \mid a_i^f, a_i^v, a_{-i}^v, a^*) = \max\{P(S = 1 \mid a_i^f, a_i^v, a_{-i}^v, a^*),\ P(S = -1 \mid a_i^f, a_i^v, a_{-i}^v, a^*)\} = \max\{P(S = 1 \mid a_i^f, a_i^v, a_{-i}^v, a^*),\ 1 - P(S = 1 \mid a_i^f, a_i^v, a_{-i}^v, a^*)\},$ (B.2)

concluding the proof.

B.1.2 Proof of Lemma 2.4.2

Proof. For equations (2.3) and (2.4), we have

$P(x_i = S \mid a_i^f = 1) = P(x_i = S \mid a^* = S) = P(x_i = S \mid S) = 1,$ (B.3)

which follows as the user can now guess $x_i = S$. This concludes the proof.

B.1.3 Proof of Lemma 2.4.3

Proof. When a user fact-checks, her information gain is the highest possible (equal to 1). For a user with $c_i = 0$ there is no fact-checking cost, and a user with $c_i = 0$ who fact-checked ($a_i^f = 1$) can always mimic the random voting strategy of a user who did not fact-check, hence obtaining at least the same (in expectation) reputation score. Hence, for a user with $c_i = 0$, fact-checking is a weakly dominant strategy, and by Property 2 of truthful and unbiased equilibria, a good user will fact-check.

B.1.4 Proof of Proposition 2.4.4

Proof. Assume that for a given equilibrium it is strictly better for a user that fact-checks not to vote. Then, any vote in $\{-1, +1\}$ immediately reveals that $P(c_i = 0 \mid a_i^v = x) = 0$ for $x \neq 0$, while $P(c_i = 0 \mid a_i^v = 0) \geq \frac{1}{2}$, since by Lemma 2.4.3 a good user always fact-checks (and hence, by assumption, does not vote) with probability 1. Therefore, a user that does not fact-check and does not vote obtains a strictly higher reputation gain. Moreover, her information gain if she votes is equal to $P(a^* = S \mid a_i^v = x)$, while her information gain if she does not vote is equal to $P(a^* = S \mid a_i^v = 0)$. If $P(a^* = S \mid a_i^v = x) \leq P(a^* = S \mid a_i^v = 0)$, then an alternative policy that assigns zero probability of fact-checking when a user votes +1 or −1 and

$P'(a^* = S \mid a_i^v = 0) = \frac{P(c_i = c_H, a_i^f = 0, a_i^v = 1)\, P(a^* = S \mid a_i^v = 1) + P(c_i = c_H, a_i^f = 0, a_i^v = 0)\, P(a^* = S \mid a_i^v = 0)}{P(c_i = c_H, a_i^f = 0)}$

achieves the same expected utility for the platform and does not affect the incentive structure of the users. Therefore, either a user that fact-checks is indifferent between voting and not voting (and hence by Property 3 votes with probability one) or the equilibrium under consideration is equivalent to a no-voting equilibrium.
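As a quick illustration of the guessing rule behind Lemma 2.4.1 (a toy check with numbers of our own choosing): the optimal final guess picks the state with the larger posterior, so the expected information gain is $\max\{q, 1-q\}$ for posterior $q = P(S = 1 \mid \cdot)$.

```python
# Toy illustration of Lemma 2.4.1: guess the state with the larger
# posterior; the expected information gain is max{q, 1 - q}.
def optimal_guess(q):
    """Return (guess, expected information gain) for posterior q = P(S=1)."""
    return (1 if q >= 0.5 else -1), max(q, 1.0 - q)

for q in (0.2, 0.5, 0.9):
    print(q, optimal_guess(q))
```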
116 B.2 Proofs of No-Voting Equilibria

B.2.1 Proof of Lemma 2.5.1

Proof. In the absence of voting, the reputation gains are irrelevant, and there is no learning of the true state from the other user. Hence, writing $\sigma = P(a^* = S)$ for the platform's fact-checking probability, each user $i$ fact-checks if and only if

$1 - c_i \geq P(x_i = S \mid a^* = S)\, P(a^* = S) + P(x_i = S \mid a^* = 0)\, P(a^* = 0) = 1 \cdot P(a^* = S) + \frac{1}{2} P(a^* = 0) = \frac{1 + \sigma}{2}.$ (B.4)

Clearly, a user with cost of fact-checking $c_i = 0$ will always fact-check, and a user with cost of fact-checking $c_i = c_H$ fact-checks if and only if $1 - c_H \geq \frac{1 + \sigma}{2}$, which simplifies to $c_H \leq \frac{1 - \sigma}{2}$.

Now, when $c_H \leq (1 - \sigma)/2$, each user $i$ always fact-checks, which results in the platform's utility being

$V^{no\,voting}(\sigma) = 2 - c_H - \sigma c_P.$ (B.5)

And when $c_H > (1 - \sigma)/2$, the platform's expected utility is given by

$V^{no\,voting}(\sigma) = 2\left( P(c_i = 0) + P(c_i = c_H)\cdot\frac{1}{2}(1 + \sigma) \right) - \sigma c_P = \frac{3}{2} + \frac{\sigma}{2} - \sigma c_P,$ (B.6)

which concludes the proof.

B.2.2 Proof of Proposition 2.5.2

Proof. The proof follows by noticing the piecewise-linear (and continuous) structure of the platform's utility in $\sigma$; thus it suffices to check the values at the end points, $\sigma = 0$ and $\sigma = 1$, and pick the maximum of the two. First, if $c_H > 1/2$, then the high-cost user never fact-checks and

$V^{no\,voting}(\sigma) = \frac{3}{2} + \sigma\left( \frac{1}{2} - c_P \right),$

so it is optimal to set $\sigma^* = \mathbb{1}(c_P \leq 1/2)$. When $c_H \leq 1/2$, it is optimal to set $\sigma = 0$ if and only if

$V^{no\,voting}(0) \geq V^{no\,voting}(1),$

which gives $2 - c_H \geq 2 - c_P$, i.e., $c_P \geq c_H$. Thus, in this case it is optimal to set $\sigma^* = \mathbb{1}(c_P < c_H)$.
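The two-piece utility and the endpoint optimum can be checked numerically. In the sketch below, the symbol $\sigma$ for the platform's fact-checking probability, the brute-force grid search, and the parameter pairs are all our own devices for the illustration.

```python
# Numerical check of Lemma 2.5.1 / Proposition 2.5.2 (no-voting equilibria).
import numpy as np

def V_no_voting(sigma, c_H, c_P):
    if c_H <= (1.0 - sigma) / 2.0:       # both user types fact-check: (B.5)
        return 2.0 - c_H - sigma * c_P
    return 1.5 + sigma / 2.0 - sigma * c_P   # only the good user does: (B.6)

for c_H, c_P in [(0.3, 0.2), (0.3, 0.6), (0.7, 0.4), (0.7, 0.6)]:
    grid = np.linspace(0.0, 1.0, 10001)
    sigma_star = grid[int(np.argmax([V_no_voting(s, c_H, c_P) for s in grid]))]
    if c_H > 0.5:
        predicted = 1.0 if c_P <= 0.5 else 0.0   # Proposition 2.5.2, case 1
    else:
        predicted = 1.0 if c_P < c_H else 0.0    # Proposition 2.5.2, case 2
    print(f"c_H={c_H}, c_P={c_P}: grid argmax {sigma_star:.2f}, "
          f"predicted {predicted:.2f}")
```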
B.3 Proofs of Altruistic Equilibria

Altruistic equilibria can be described by the probabilities of the users' fact-checking and of a user's voting when she does not fact-check. For convenience, we define

$f = P(a_i^f = 1)$

to be the fact-checking probability of a user and

$v = P(a_i^v \neq 0 \mid a_i^f = 0)$

to be the probability that a user votes when she does not fact-check. (Recall that $\sigma_{agree}$, $\sigma_{disagree}$, $\sigma_{single}$, and $\sigma_{none}$ denote the platform's fact-checking probabilities conditional on the observed vote pattern.) The rest of this section develops optimality conditions for each action (voting +1, voting −1, not voting, fact-checking or not) that need to be checked to establish that a given $(f, v)$ pair gives rise to an altruistic equilibrium.

B.3.1 Information Gains

We denote by $R$ a random variable with $P(R = 1) = P(R = -1) = 1/2$, and we first characterize the optimal guessing strategy for a user that does not fact-check.

Lemma B.3.1. In the absence of fact-checking by the platform, it is optimal for a user that does not fact-check to guess $x_i = a_{-i}^v$ if $a_{-i}^v \neq 0$ and $x_i = R$ otherwise. In other words, when the state of the world is not revealed before the final guess, it is optimal for a user to form a guess that is equal to the other user's vote.

Proof. The posterior belief conditioned on only the vote of the other user is

$P(S = y \mid a_i^f = 0, a^* = 0, a_{-i}^v = y) = P(a_{-i}^f = 1 \mid a_{-i}^v = y) + P(a_{-i}^f = 0 \mid a_{-i}^v = y)\cdot\frac{1}{2},$

where we used the fact that a player's fact-checking decision is independent of the other player's vote and the platform's fact-checking action. If $y \neq 0$, then the posterior belief above is strictly higher than 1/2 and hence it is optimal to guess $x_i = y$. If $y = 0$, then the second term equals 1/2 and therefore the user is indifferent between 1 and −1; hence guessing $x_i = R$ is optimal.

Having observed that it is optimal for an uninformed user to copy the other player's vote (when present), the expected information gain of an uninformed user is 1/2 unless the copied vote comes from a user who fact-checked.

Lemma B.3.2. In the absence of fact-checking by the platform, the expected information gain for a user that does not fact-check and votes $x$ is equal to

$E[I_i \mid a_i^f = 0, a^* = 0, a_i^v = x] = \frac{1}{2} + \frac{1}{2} f.$

Proof. In the absence of a revelation through her own or the platform's fact-checking, a user's information gain is equal to 1/2, except when the other user fact-checks, in which case her information gain is equal to 1. Hence

$E[I_i \mid a_i^f = 0, a^* = 0, a_i^v = x] = \frac{1}{2} + \frac{1}{2} P(a_{-i}^f = 1 \mid a_i^f = 0, a^* = 0, a_i^v = x) = \frac{1}{2} + \frac{1}{2} P(a_{-i}^f = 1),$

where we used the independence of a user's fact-checking decision from the platform's and the other user's actions.

Lemma B.3.3. The expected information gain for a user that does not fact-check and votes $x$ is equal to

$E[I_i \mid a_i^f = 0, a_i^v = x] = \frac{1}{2} + \frac{1}{2} P(a^* = S \mid a_i^f = 0, a_i^v = x) + \frac{1}{2} f\, P(a^* = 0 \mid a_{-i}^f = 1, a_i^f = 0, a_i^v = x),$

where $P(a^* = 0 \mid a_{-i}^f = 1, a_i^f = 0, a_i^v = 0) = 1 - \sigma_{single}$ and

$P(a^* = 0 \mid a_{-i}^f = 1, a_i^f = 0, a_i^v = x) = 1 - \frac{1}{2}\sigma_{agree} - \frac{1}{2}\sigma_{disagree}, \quad \text{for } x \in \{-1, +1\}.$

Proof. If the platform fact-checks, then the information gain of both users is equal to 1. If the platform does not fact-check but the other player does, then the user's information gain is equal to 1. In all other cases, the information gain is equal to 1/2.
Therefore,

$E[I_i \mid a_i^f = 0, a_i^v = x] = \frac{1}{2} + \frac{1}{2} P(a^* = S \mid a_i^f = 0, a_i^v = x) + \frac{1}{2} P(a^* = 0, a_{-i}^f = 1 \mid a_i^f = 0, a_i^v = x).$

Note that

$P(a^* = 0, a_{-i}^f = 1 \mid a_i^f = 0, a_i^v = x) = P(a^* = 0 \mid a_{-i}^f = 1, a_i^v = x, a_i^f = 0)\, P(a_{-i}^f = 1 \mid a_i^v = x, a_i^f = 0) = P(a^* = 0 \mid a_{-i}^f = 1, a_i^v = x, a_i^f = 0)\, f,$

where we used the fact that the fact-checking decision is independent of the other user's actions.

Furthermore, we quantify the information benefit (or not) of voting for a user who does not fact-check.

Lemma B.3.4. The expected information benefit for a user that votes when she does not fact-check is equal to

$E[I_i \mid a_i^f = 0, a_i^v = R] - E[I_i \mid a_i^f = 0, a_i^v = 0] = \frac{1}{2}(1 - f)\left[ v\left( \frac{1}{2}\sigma_{agree} + \frac{1}{2}\sigma_{disagree} - \sigma_{single} \right) + (1 - v)(\sigma_{single} - \sigma_{none}) \right].$

Proof. Using Lemma B.3.3, we have

$E[I_i \mid a_i^f = 0, a_i^v = R] = \frac{1}{2} + \frac{1}{2} P(a^* = S \mid a_i^f = 0, a_i^v = R) + \frac{1}{2} f\left( 1 - \frac{1}{2}\sigma_{agree} - \frac{1}{2}\sigma_{disagree} \right).$ (B.7)

Now, we can write

$P(a^* = S \mid a_i^f = 0, a_i^v = R) = (f + (1 - f)v)\left( \frac{1}{2}\sigma_{agree} + \frac{1}{2}\sigma_{disagree} \right) + (1 - f)(1 - v)\sigma_{single},$

where the first term covers the case in which the other user votes, while the second term handles the case in which the other user does not vote. Substituting this equation into (B.7), we get

$E[I_i \mid a_i^f = 0, a_i^v = R] = \frac{1}{2} + \frac{1}{2}\left[ (f + (1 - f)v)\left( \frac{1}{2}\sigma_{agree} + \frac{1}{2}\sigma_{disagree} \right) + (1 - f)(1 - v)\sigma_{single} \right] + \frac{1}{2} f\left( 1 - \frac{1}{2}\sigma_{agree} - \frac{1}{2}\sigma_{disagree} \right).$ (B.8)

Similarly, using Lemma B.3.3, we can also write

$E[I_i \mid a_i^f = 0, a_i^v = 0] = \frac{1}{2} + \frac{1}{2}\left[ (f + (1 - f)v)\sigma_{single} + (1 - f)(1 - v)\sigma_{none} \right] + \frac{1}{2} f(1 - \sigma_{single}).$ (B.9)

Subtracting equation (B.9) from equation (B.8) gives us the required expression in the lemma.

Our analysis above allows us to focus concretely on the informational benefit of fact-checking. Intuitively, as the other user's fact-checking probability or the platform's fact-checking probability increases, fact-checking becomes less desirable, since users prefer to free-ride on free information.

Lemma B.3.5.

$E[I_i \mid a_i^f = 1] - E[I_i \mid a_i^f = 0] = \frac{1}{2} - \frac{1}{2} P(a^* = S \mid a_i^f = 0) - \frac{1}{2} f\, P(a^* = 0 \mid a_{-i}^f = 1, a_i^f = 0),$

where

$P(a^* = S \mid a_i^f = 0) = v\left[ (f + (1 - f)v)\left( \frac{1}{2}\sigma_{agree} + \frac{1}{2}\sigma_{disagree} \right) + (1 - f)(1 - v)\sigma_{single} \right] + (1 - v)\left[ (f + (1 - f)v)\sigma_{single} + (1 - f)(1 - v)\sigma_{none} \right],$

and

$P(a^* = 0 \mid a_{-i}^f = 1, a_i^f = 0) = 1 - v\left( \frac{1}{2}\sigma_{agree} + \frac{1}{2}\sigma_{disagree} \right) - (1 - v)\sigma_{single}.$

Proof. When the user fact-checks, we clearly have $E[I_i \mid a_i^f = 1] = 1$. And when the user does not fact-check, we have

$E[I_i \mid a_i^f = 0] = \sum_{a_i^v} E[I_i \mid a_i^f = 0, a_i^v]\, P(a_i^v \mid a_i^f = 0).$

Using Lemma B.3.3 in the above equation gives us

$E[I_i \mid a_i^f = 0] = \frac{1}{2} + \frac{1}{2} P(a^* = S \mid a_i^f = 0) + \frac{1}{2} f\, P(a^* = 0 \mid a_{-i}^f = 1, a_i^f = 0).$ (B.10)

Now we can write

$P(a^* = S \mid a_i^f = 0) = \sum_{a_i^v, a_{-i}^v} P(a^* = S \mid a_i^f = 0, a_i^v, a_{-i}^v)\, P(a_i^v \mid a_i^f = 0)\, P(a_{-i}^v)$
$\qquad = v\left[ P(a_{-i}^v \neq 0)\left( \frac{1}{2}\sigma_{agree} + \frac{1}{2}\sigma_{disagree} \right) + P(a_{-i}^v = 0)\sigma_{single} \right] + (1 - v)\left[ P(a_{-i}^v \neq 0)\sigma_{single} + P(a_{-i}^v = 0)\sigma_{none} \right]$
$\qquad = v\left[ (f + (1 - f)v)\left( \frac{1}{2}\sigma_{agree} + \frac{1}{2}\sigma_{disagree} \right) + (1 - f)(1 - v)\sigma_{single} \right] + (1 - v)\left[ (f + (1 - f)v)\sigma_{single} + (1 - f)(1 - v)\sigma_{none} \right].$

Lastly, we have

$P(a^* = S \mid a_{-i}^f = 1, a_i^f = 0) = \sum_{a_i^v} P(a^* = S \mid a_{-i}^f = 1, a_i^f = 0, a_i^v)\, P(a_i^v \mid a_i^f = 0) = v\left( \frac{1}{2}\sigma_{agree} + \frac{1}{2}\sigma_{disagree} \right) + (1 - v)\sigma_{single}.$

B.3.2 Reputation Gains

We next focus our attention on the reputation portion of a user's utility. When the platform decides to fact-check, a user's reputation score is high if her vote agrees with the (posted) truth. Clearly, the user could have just been lucky, which is why the expected reputation gain for a user whose vote matches the confirmed truth is in general less than or equal to 1. On the other hand, if a user's vote
On the other hand, if a user’s vote 122 disagrees with the realized truth, then the platform can immediately infer that she is a bad user and assign a reputation score of 0. LemmaB.3.6. The reputation gain of useri when the platform fact-checks conditional on the user’s vote is given by P(c i = 0ja v i ;a 6= 0) = 0; ifa v i 6=a 6= 0: (B.11) R correct , 1 2f + (1f)v ; ifa v i =a 6= 0: (B.12) Proof. Using Bayes’ rule, we have P(c i = 0ja v i ;a ) = P(a v i jc i = 0;a )P(c i = 0ja ) P(a v i ja ) (B.13) Thus, whena v i 6=a 6= 0, we have P(c i = 0ja v i 6=a 6= 0) = 0(1=2) P(a v i ja ) = 0; (B.14) where the result follows from Lemma 2.4.3 and our assumption that the user votes truthfully upon fact- checking. We next consider the casea v i =a 6= 0, and in particular let whena v i =a = 1 (the case ofa v i =a =1 follows by symmetry). P(c i = 0ja v i = 1;a = 1) = P(a v i = 1jc i = 0;a = 1)P(c i = 0ja = 1) P(a v i = 1ja = 1) (B.15) 123 Evaluating the above equation, we obtain P(c i = 0ja v i = 1;a = 1) = 1(1=2) P(a v i = 1ja = 1;a f i = 1)P(a f i = 1) +P(a v i = 1ja = 1;a f i = 0)P(a f i = 0) = 1 2f + (1f)v : (B.16) When the platform does not fact-check if the users votes disagree no information is revealed about the state of the world and hence the platform cannot infer anything about the users’ type other than the fact that they voted. On the other hand, if the users vote agree, it is more likely that agreement happens when at least one is fact-checking and hence the users’ reputation score is higher. Finally, if only one user votes then their reputation score is higher since good users are more likely to fact-check (and hence vote). Lemma B.3.7. The reputation gain of useri when the platform fact-checks conditional on both the users’ votes is given by P(c i = 0ja v i ;a v i ;a = 0) = 0; ifa v i = 0: (B.17) P(c i = 0ja v i ;a v i ;a = 0) =R single = 1 2f + 2(1f)v ; ifa v i 6= 0; ifa v i = 0: (B.18) P(c i = 0ja v i ;a v i ;a = 0) =R agree = 1 2 f + (1f)v 1 2 f 2 +f(1f)v + 1 2 (1f) 2 v 2 ; ifa v i =a v i 6= 0: P(c i = 0ja v i ;a v i ;a = 0) =R disagree = 1 2 R correct ; if 06=a v i 6=a v i 6= 0: (B.19) andR correct R agree R single R disagree 0. 124 Proof. First note that conditional ona v i anda v i , the eventa = 0 is independent ofc i and therefore P(c i = 0ja v i =x;a v i =y;a = 0) =P(c i = 0ja v i =x;a v i =y): Whenx = 0, the probability above is trivially zero, since a good user always fact-checks and votes. When y = 0 andx6= 0, the eventa v i =y is independent fromc i and therefore, P(c i = 0ja v i =x;a v i =y;a = 0) =P(c i = 0ja v i =x) = 1 2 1 2 f 1 2 + (1f)v 1 2 = 1 2f + 2(1f)v : We next consider the casex =y6= 0. For simplicity we assumex =y = 1 without loss of generality by symmetry. Note that P(a v i = 1;a v i = 1) = 1 2 f 2 + 2f(1f)v 1 2 + (1f) 2 v 2 1 4 + 1 2 (1f) 2 v 2 1 4 where the rst term corresponds to the eventS = 1 and the second term corresponds to the eventS =1. WhenS = 1 the two votes are equal to one either when both players fact-check or when one player fact- checks the other one does not but votes and gets a realization of +1 or when neither players fact-check but vote and both realizations are equal to +1. Furthermore, P(a v i = 1;a v i = 1jc i = 0) = 1 2 f + (1f)v 1 2 ; WhenS = 1 the two votes are equal to one either when the second player fact-checks or when she does not but votes and gets a realization of +1. Therefore, P(c i = 0ja v i = 1;a v i = 1;a = 0) = 1 2 f + (1f)v 1 2 f 2 +f(1f)v + 1 2 (1f) 2 v 2 125 Finally, we consider the case 06=x6=y6= 0. 
For simplicity we assume $x = 1$ and $y = -1$, without loss of generality by symmetry. Note that

$P(a_i^v = 1, a_{-i}^v = -1) = \frac{1}{2}\left( f(1 - f)v\cdot\frac{1}{2} + (1 - f)^2 v^2\cdot\frac{1}{4} \right) + \frac{1}{2}\left( f(1 - f)v\cdot\frac{1}{2} + (1 - f)^2 v^2\cdot\frac{1}{4} \right) = f(1 - f)v\cdot\frac{1}{2} + (1 - f)^2 v^2\cdot\frac{1}{4},$

since the votes disagree either when one user fact-checks and the other does not but votes against the truth, or when neither user fact-checks and their realized votes disagree. Furthermore,

$P(a_i^v = 1, a_{-i}^v = -1 \mid c_i = 0) = \frac{1}{2}(1 - f)v\cdot\frac{1}{2};$

when $S = 1$, the two votes differ when the second player does not fact-check but votes and draws −1, while when $S = -1$, the probability of $a_i^v = +1$ given $c_i = 0$ is zero under any altruistic equilibrium. Therefore,

$P(c_i = 0 \mid a_i^v = 1, a_{-i}^v = -1, a^* = 0) = \frac{1}{2}\cdot\frac{1}{2f + (1 - f)v}.$

The inequalities can either be verified by substitution or by noting that

$P(c_i = 0 \mid a^* = 0, a_i^v = +1, a_{-i}^v = +1) = P(c_i = 0 \mid a_i^v = +1, a_{-i}^v = +1, a^* = 0, S = 1)\, P(S = 1 \mid a^* = 0, a_i^v = +1, a_{-i}^v = +1),$

where we used the independence of $S$ and $c_i$ conditional on the votes. The first term is equal to $R_{correct}$, and for the second term

$\frac{1}{2} \leq P(S = 1 \mid a^* = 0, a_i^v = +1, a_{-i}^v = +1) \leq 1,$

since the probability that the votes agree on the true state of the world is higher than the probability that they agree on the wrong state. The latter can be seen by noticing that

$P(a_i^v = +1, a_{-i}^v = +1 \mid S = 1) = f^2 + 2f(1 - f)\cdot\frac{v}{2} + (1 - f)^2 v^2\cdot\frac{1}{4},$

while

$P(a_i^v = +1, a_{-i}^v = +1 \mid S = -1) = (1 - f)^2 v^2\cdot\frac{1}{4}.$

In the remainder of this subsection, we obtain the expected reputation gains that correspond to the users' different actions. First, we consider users who do not fact-check and vote (randomly). Such a user may get lucky if her vote agrees with the other user's vote, or with the true value of $S$ when the platform fact-checks or the other user does not fact-check. On the other hand, the likely scenario is that the two votes disagree, in which case she obtains a small reputation gain.

Lemma B.3.8. The expected reputation gain for a user that does not fact-check, conditional on her vote, is equal to

$E[R_i \mid a_i^f = 0, a_i^v = x] = P(a^* = S \mid a_i^v = x)\cdot\frac{1}{2} R_{correct} + \frac{1}{2}(f + (1 - f)v)(1 - \sigma_{agree}) R_{agree}$ (B.20)
$\qquad + \frac{1}{2}(f + (1 - f)v)(1 - \sigma_{disagree}) R_{disagree} + (1 - f)(1 - v)(1 - \sigma_{single}) R_{single},$ (B.21)

for any $x \in \{-1, 1\}$, and 0 otherwise.

Proof. We write

$E[R_i \mid a_i^f = 0, a_i^v = x] = E[R_i \mid a_i^f = 0, a_i^v = x, a^* = S]\, P(a^* = S \mid a_i^f = 0, a_i^v = x) + E[R_i \mid a_i^f = 0, a_i^v = x, a^* = 0]\, P(a^* = 0 \mid a_i^f = 0, a_i^v = x).$

When $a^* = S$, the vote of the user matches half of the time, while the remaining half of the time it does not. Hence, we can write

$E[R_i \mid a_i^f = 0, a_i^v = x, a^* = S] = \frac{1}{2} E[R_i \mid a_i^f = 0, a_i^v = a^* = S] = \frac{1}{2} R_{correct},$

where the last equality uses Lemma B.3.6. For the second term, we condition on the vote of the other user to get

$E[R_i \mid a_i^f = 0, a_i^v = x, a^* = 0]\, P(a^* = 0 \mid a_i^v = x) = \sum_{a_{-i}^v} E[R_i \mid a_i^f = 0, a_i^v = x, a^* = 0, a_{-i}^v]\, P(a^* = 0 \mid a_i^v = x, a_{-i}^v)\, P(a_{-i}^v).$

Substituting the corresponding reputation expressions from Lemma B.3.7 and the definitions of $\sigma_{agree}$, $\sigma_{disagree}$, and $\sigma_{single}$ gives us the required expression.
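The ordering $R_{correct} \geq R_{agree} \geq R_{single} \geq R_{disagree}$ in Lemma B.3.7 can also be verified by substitution, as the proof notes. The snippet below does exactly that over a grid of $(f, v)$ pairs; the grid itself is our own choice for the sketch.

```python
# Numerical check of the reputation ordering in Lemma B.3.7:
# R_correct >= R_agree >= R_single >= R_disagree on a grid of (f, v).
import numpy as np

def reputations(f, v):
    R_correct = 1.0 / (2.0 * f + (1.0 - f) * v)
    R_single = 1.0 / (2.0 * f + 2.0 * (1.0 - f) * v)
    R_agree = (0.5 * (f + (1.0 - f) * v / 2.0)
               / (f**2 + f * (1.0 - f) * v + 0.5 * ((1.0 - f) * v)**2))
    R_disagree = 0.5 * R_correct
    return R_correct, R_agree, R_single, R_disagree

ok = True
for f in np.linspace(0.05, 0.95, 19):
    for v in np.linspace(0.05, 1.0, 20):
        rc, ra, rs, rd = reputations(f, v)
        ok &= (rc >= ra - 1e-12) and (ra >= rs - 1e-12) and (rs >= rd - 1e-12)
print("ordering holds on the grid:", ok)
```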
We next consider a user who fact-checks (and votes truthfully). Such a user receives a higher reputation gain if the platform fact-checks (in which case her vote is confirmed as correct), when the other user's vote agrees with hers, or when the other user does not vote. On the other hand, when the other user votes against her and the platform does not fact-check, her reputation score is low.

Lemma B.3.9. The expected reputation gain for a user who fact-checks and votes truthfully is equal to

$E[R_i \mid a_i^f = 1, a_i^v = S] = P(a^* = S \mid a_i^v = S)\, R_{correct} + P(a^* = 0 \mid a_i^v = S, a_{-i}^v = 0)(1 - f)(1 - v) R_{single}$
$\qquad + P(a^* = 0 \mid a_i^v = S, a_{-i}^v = S)\left( f + \frac{1}{2}(1 - f)v \right) R_{agree} + P(a^* = 0 \mid a_i^v = S, a_{-i}^v = -S)\cdot\frac{1}{2}(1 - f)v\, R_{disagree},$

while for a user that votes $-S$ it is equal to

$E[R_i \mid a_i^f = 1, a_i^v = -S] = P(a^* = 0 \mid a_i^v = -S, a_{-i}^v = 0)(1 - f)(1 - v) R_{single}$
$\qquad + P(a^* = 0 \mid a_i^v = -S, a_{-i}^v = S)\left( f + \frac{1}{2}(1 - f)v \right) R_{disagree} + P(a^* = 0 \mid a_i^v = -S, a_{-i}^v = -S)\cdot\frac{1}{2}(1 - f)v\, R_{agree},$

and 0 otherwise.

Proof. When the user votes 0, her reputation is clearly zero. When she votes truthfully, we write

$E[R_i \mid a_i^f = 1, a_i^v = S] = E[R_i \mid a_i^f = 1, a_i^v = S, a^* = S]\, P(a^* = S \mid a_i^f = 1, a_i^v = S) + E[R_i \mid a_i^f = 1, a_i^v = S, a^* = 0]\, P(a^* = 0 \mid a_i^f = 1, a_i^v = S).$

When $a^* = S$, the vote of the user matches and she receives the reputation $R_{correct}$. For the second term, we condition on the vote of the other user to get

$E[R_i \mid a_i^f = 1, a_i^v = S, a^* = 0]\, P(a^* = 0 \mid a_i^v = S) = \sum_{a_{-i}^v} E[R_i \mid a_i^f = 1, a_i^v = S, a^* = 0, a_{-i}^v]\, P(a^* = 0 \mid a_i^v = S, a_{-i}^v)\, P(a_{-i}^v).$

Substituting the corresponding reputation expressions from Lemma B.3.7 and the definitions of $\sigma_{agree}$, $\sigma_{disagree}$, and $\sigma_{single}$ gives us the required expression. In a similar fashion, we can also write the expression when the user decides to vote $-S$ after fact-checking; the difference is that when the platform fact-checks, the user's vote will not match, granting her a reputation gain of zero, and the respective probabilities of matching and not matching the other user's vote swap, as the user is not voting truthfully.

We next argue that truthful voting is a weakly dominant strategy for a user who fact-checks, in all altruistic equilibria.

Lemma B.3.10. In any altruistic equilibrium, it is optimal for a user that fact-checks to vote truthfully.

Proof. Comparing the equations from Lemma B.3.9, since $f + \frac{1}{2}(1 - f)v \geq \frac{1}{2}(1 - f)v$ and $R_{agree} \geq R_{disagree}$, it is weakly better to vote truthfully.

Lemma B.3.11. The expected reputation gain for a user who fact-checks is equal to

$E[R_i \mid a_i^f = 1] = P(a^* = S \mid a_i^v = S)\, R_{correct} + P(a^* = 0 \mid a_i^v = S, a_{-i}^v = 0)(1 - f)(1 - v) R_{single}$
$\qquad + P(a^* = 0 \mid a_i^v = S, a_{-i}^v = S)\left( f + \frac{1}{2}(1 - f)v \right) R_{agree} + P(a^* = 0 \mid a_i^v = S, a_{-i}^v = -S)\cdot\frac{1}{2}(1 - f)v\, R_{disagree}.$

The expected reputation gain for a user who does not fact-check is equal to

$E[R_i \mid a_i^f = 0] = v\left[ P(a^* = S \mid a_i^v = +1)\cdot\frac{1}{2} R_{correct} + \frac{1}{2}(f + (1 - f)v)(1 - \sigma_{agree}) R_{agree} + \frac{1}{2}(f + (1 - f)v)(1 - \sigma_{disagree}) R_{disagree} + (1 - f)(1 - v)(1 - \sigma_{single}) R_{single} \right].$

Proof. We have

$P(c_i = 0 \mid a_i^f) = P(c_i = 0 \mid a_i^f, a_i^v \neq 0)\, P(a_i^v \neq 0 \mid a_i^f) + P(c_i = 0 \mid a_i^f, a_i^v = 0)\, P(a_i^v = 0 \mid a_i^f).$ (B.22)

Now, when the user fact-checks, we have

$P(c_i = 0 \mid a_i^f = 1) = P(c_i = 0 \mid a_i^f = 1, a_i^v \neq 0),$ (B.23)

where the equality uses the assumption that the user always votes truthfully when she fact-checks. Using Lemma B.3.9 in the previous equation gives the first expression in the lemma. When the user does not fact-check, we have

$P(c_i = 0 \mid a_i^f = 0) = P(c_i = 0 \mid a_i^f = 0, a_i^v \neq 0)\, P(a_i^v \neq 0 \mid a_i^f = 0),$ (B.24)

where the last equality uses Lemma 2.4.3.
Using Lemma B.3.8 in the previous equation gives the second expression in the lemma.

B.3.3 Platform's Utility

Concluding this section, we calculate the expected utility of the platform in order to formally define the platform's optimization problem, which we solve numerically in the next section.

Lemma B.3.12. The expected utility of the platform is given by

$E[V] = 2f + 2(1 - f)\left( \frac{1}{2} + \frac{1}{2} P(a^* = S \mid a_i^f = 0) + \frac{1}{2} f\, P(a^* = 0 \mid a_{-i}^f = 1, a_i^f = 0) \right) - (2f - 1) c_H - c_P\, P(a^* = S),$

where

$P(a^* = S) = \left( f^2 + 2f(1 - f)\cdot\frac{v}{2} + (1 - f)^2 v^2\cdot\frac{1}{2} \right)\sigma_{agree} + \left( 2f(1 - f)\cdot\frac{v}{2} + (1 - f)^2 v^2\cdot\frac{1}{2} \right)\sigma_{disagree}$
$\qquad + \left( 2f(1 - f)(1 - v) + 2v(1 - v)(1 - f)^2 \right)\sigma_{single} + (1 - f)^2 (1 - v)^2\sigma_{none}.$

Proof. We first note that

$f = P(a_i^f = 1) = P(a_i^f = 1 \mid c_i = c_H)\, P(c_i = c_H) + P(a_i^f = 1 \mid c_i = 0)\, P(c_i = 0),$

which gives us

$P(a_i^f = 1 \mid c_i = c_H) = 2f - 1.$ (B.25)

Now the aggregate information gain of the users in the platform's utility can be written as

$E[I^{platform}] \triangleq 2\left( E[I_i \mid c_i = 0]\, P(c_i = 0) + E[I_i \mid c_i = c_H]\, P(c_i = c_H) \right) = E[I_i \mid c_i = 0] + E[I_i \mid c_i = c_H] = 1 + E[I_i \mid c_i = c_H],$

where the last equality uses Lemma 2.4.3. Next, we condition on whether the bad user fact-checked or not to write her expected information gain:

$E[I_i \mid c_i = c_H] = E[I_i \mid c_i = c_H, a_i^f = 1]\, P(a_i^f = 1 \mid c_i = c_H) + E[I_i \mid c_i = c_H, a_i^f = 0]\, P(a_i^f = 0 \mid c_i = c_H) = 1\cdot(2f - 1) + 2(1 - f)\, E[I_i \mid a_i^f = 0].$

Using equation (B.10) gives us

$E[I^{platform}] = 2f + 2(1 - f)\left( \frac{1}{2} + \frac{1}{2} P(a^* = S \mid a_i^f = 0) + \frac{1}{2} f\, P(a^* = 0 \mid a_{-i}^f = 1, a_i^f = 0) \right).$ (B.26)

Next, the users' fact-checking cost is incurred if and only if the bad user fact-checks, and is given by

$P(a_i^f = 1 \mid c_i = c_H)\, c_H = (2f - 1) c_H.$

Finally, we write

$P(a^* = S) = \sum_{a_i^v, a_{-i}^v} P(a_i^v, a_{-i}^v)\, P(a^* = S \mid a_i^v, a_{-i}^v) = P(a_i^v = a_{-i}^v \neq 0)\,\sigma_{agree} + P(0 \neq a_i^v \neq a_{-i}^v \neq 0)\,\sigma_{disagree} + P(a_i^v a_{-i}^v = 0,\ a_i^v + a_{-i}^v \neq 0)\,\sigma_{single} + P(a_i^v = a_{-i}^v = 0)\,\sigma_{none}.$

Replacing each probability of user votes by conditioning on the users' fact-checking gives us the required expression for $P(a^* = S)$ in the lemma. Combining all the terms in the expression for the platform's utility proves the lemma.

B.4 Conditions for Each Equilibrium

We start this section by deriving some conditional fact-checking probabilities of the platform.

Lemma B.4.1. The probability that the platform fact-checks, conditional on the vote $a_i^v$ of user $i$, is given by

$P(a^* = S \mid a_i^v = 0) = \sigma_{none}(1 - f)(1 - v) + \sigma_{single}(f + (1 - f)v);$ (B.27)
$P(a^* = S \mid a_i^v = x, a_i^f = 1) = \sigma_{single}(1 - f)(1 - v) + \sigma_{agree}\left( f + (1 - f)\frac{v}{2} \right) + \sigma_{disagree}(1 - f)\frac{v}{2};$ (B.28)
$P(a^* = S \mid a_i^v = x, a_i^f = 0) = \sigma_{single}(1 - f)(1 - v) + \sigma_{agree}\cdot\frac{1}{2}(f + (1 - f)v) + \sigma_{disagree}\cdot\frac{1}{2}(f + (1 - f)v);$ (B.29)

for $x \in \{-1, 1\}$.

Proof. We write

$P(a^* = S \mid a_i^v) = \sum_{a_{-i}^v} P(a^* = S \mid a_i^v, a_{-i}^v)\, P(a_{-i}^v),$

and then expand

$P(a_{-i}^v) = \sum_{a_{-i}^f} P(a_{-i}^v \mid a_{-i}^f)\, P(a_{-i}^f).$

Substituting the appropriate expressions in the above equations gives us the required expressions in the lemma.
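For concreteness, the three conditional probabilities of Lemma B.4.1 can be evaluated directly; in the sketch below, the parameter values are arbitrary illustrations chosen by us, and $\sigma_{agree}$, $\sigma_{disagree}$, $\sigma_{single}$, $\sigma_{none}$ are the platform's fact-checking probabilities given the observed vote pattern.

```python
# Evaluation of the conditional fact-checking probabilities in Lemma B.4.1.
# The parameter values below are arbitrary illustrations.
f, v = 0.6, 0.5
s_agree, s_disagree, s_single, s_none = 0.1, 0.8, 0.4, 0.2

p_novote = s_none * (1 - f) * (1 - v) + s_single * (f + (1 - f) * v)  # (B.27)
p_vote_fc = (s_single * (1 - f) * (1 - v)                              # (B.28)
             + s_agree * (f + (1 - f) * v / 2)
             + s_disagree * (1 - f) * v / 2)
p_vote_nofc = (s_single * (1 - f) * (1 - v)                            # (B.29)
               + 0.5 * (s_agree + s_disagree) * (f + (1 - f) * v))
print(p_novote, p_vote_fc, p_vote_nofc)
```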
We next state the conditions characterizing the outcomes of each equilibrium.

Lemma B.4.2. User $i$ decides to fact-check if and only if

$\frac{1}{2} - \frac{1}{2} P(a^* = S \mid a_i^f = 0) - \frac{1}{2} f\, P(a^* = 0 \mid a_{-i}^f = 1, a_i^f = 0) - c_i + E[R_i \mid a_i^f = 1] - E[R_i \mid a_i^f = 0] \geq 0,$ (FC-condition)

where the reputation gains are given in Lemma B.3.11.

Proof. The user decides to fact-check if and only if

$E[U_i \mid a_i^f = 1] - E[U_i \mid a_i^f = 0] \geq 0.$ (B.30)

Expanding each utility expression, we get

$E[I_i \mid a_i^f = 1] - E[I_i \mid a_i^f = 0] - c_i + E[R_i \mid a_i^f = 1] - E[R_i \mid a_i^f = 0] \geq 0.$ (B.31)

Using Lemma B.3.5 to substitute for the information-gain difference yields (FC-condition). Here, the expressions for the reputation gains are given in Lemma B.3.11, which in turn uses the probability expressions in Lemma B.4.1.

Lemma B.4.3. A user who does not fact-check decides to vote if and only if

$\frac{1}{2}(1 - f)\left[ v\left( \frac{1}{2}\sigma_{agree} + \frac{1}{2}\sigma_{disagree} - \sigma_{single} \right) + (1 - v)(\sigma_{single} - \sigma_{none}) \right] + E[R_i \mid a_i^f = 0, a_i^v = 1] \geq 0,$ (V-condition)

where the expression for the reputation gain is given in Lemma B.3.8.

Proof. A user who does not fact-check decides to vote if and only if

$E[U_i \mid a_i^f = 0, a_i^v = 1] - E[U_i \mid a_i^f = 0, a_i^v = 0] \geq 0.$ (B.32)

Expanding each utility expression, we get

$E[I_i \mid a_i^f = 0, a_i^v = 1] - E[I_i \mid a_i^f = 0, a_i^v = 0] + E[R_i \mid a_i^f = 0, a_i^v = 1] - E[R_i \mid a_i^f = 0, a_i^v = 0] \geq 0.$ (B.33)

Using Lemma B.3.4 for the information term, and noting that the reputation gain when not voting is 0 while the gain when voting 1 is given in Lemma B.3.8 (which uses the probability expressions in Lemma B.4.1), we obtain (V-condition). (B.34)

Appendix C

Definition 15. We define the set of nodes covered by a policy $Y$ as

$Q(Y) = \bigcup_{i=1}^{l(Y)} B(Y(i), i - 1).$ (C.1)

C.1 Proof of Theorem 3.4.1

We will first state and prove a couple of lemmas which will be used to prove the theorem.

Lemma C.1.1. $O$ is the smallest length of a policy for which $Q(Y) = V$, i.e.,

$\min_{Y : Q(Y) = V} l(Y) = O.$ (C.2)

Proof. As the solution to the graph covering problem gives the policy $Y$ that minimizes the number of samples, we only need to prove that $Y$ covers all nodes, i.e., $Q(Y) = V$. Suppose by contradiction that there exists a node $v$ satisfying

$v \notin Q(Y).$ (C.3)

By the definition of $Q(Y)$, this suggests that

$v \notin B(Y(i), i - 1), \quad \forall i \in \{1, 2, \ldots, O\}.$ (C.4)

This implies that none of the sets containing the element $v$ is chosen. Equivalently,

$x_i^t = 0, \quad \forall i, t : v \in B(i, t - 1).$ (C.5)

Summing over all such values, we have

$\sum_{i, t : v \in B(i, t - 1)} x_i^t = 0.$ (C.6)

However, as $Y$ is the solution to the above optimization problem, it must satisfy the first constraint (C1). Hence, this is a contradiction, proving that $Y$ covers all the nodes.

Lemma C.1.2. Suppose there exists a policy $Y' = (Y'(1), Y'(2), \ldots, Y'(l(Y')))$ such that $l(Y') = T(Y') < O$. Then we must have

$V \setminus Q(Y') \neq \emptyset,$ (C.7)

where $\emptyset$ is the empty set.

Proof. Suppose by contradiction that $Q(Y') = V$. By the definition of $Q(Y')$, this means

$\bigcup_{i=1}^{l(Y')} B(Y'(i), i - 1) = V.$ (C.8)

Let

$x_i^t = \begin{cases} 1, & \text{if } i = Y'(t), \\ 0, & \text{otherwise.} \end{cases}$

Using equation (C.8), this implies that

$\sum_{i, t : v \in B(i, t - 1)} x_i^t \geq 1.$ (C.9)

Thus, $Y'$ satisfies constraint (C1). As $Y'$ is a policy, it also satisfies constraints (C2) and (C3). And finally, by our definition of $x_i^t$, the binary-variable constraint (C4) is also met. Hence, $Y'$ is a feasible solution to the optimization problem. Since $O$ is the optimal value, we must have

$O \leq l(Y') = T(Y').$

But we assumed $T(Y') < O$, contradicting the previous statement and proving the lemma.

We are now ready to prove the theorem. Suppose there exists a policy $Y' = (Y'(1), Y'(2), \ldots, Y'(l(Y')))$ such that $l(Y') = T(Y') < O$. From Lemma C.1.1 and Lemma C.1.2, there exists a node $v$ such that $v \notin Q(Y')$. We next focus on the case when the infection starts at the node $v$.
By our assumption, we must have

$T_v(Y') \leq \sup_{z \in V} T_z(Y') = T(Y').$

Also, since our policy always catches the infection, we must have

$v \in \bigcup_{i=1}^{l(Y')} B(Y'(i), i - 1).$

But we assumed that $v \notin Q(Y')$, which gives a contradiction and proves the theorem.

C.2 Proof of Theorem 3.5.1

Proof. As the first step, we show that Graph Covering is in NP, in the following lemma.

Lemma C.2.1. Graph Covering is in NP.

Proof. We need to show that there exists a certificate and a polynomial-time verifier which correctly outputs whether a policy $Y$ solves the Graph Covering problem or not. We assume that $l(Y) \leq k_0$; otherwise the output is clearly No. We choose the given sequence of nodes in the policy as the certificate and use the following algorithm to verify the solution.

Algorithm 1: NP Verifier
  $S \leftarrow \emptyset$, $t \leftarrow 1$
  while $t \leq k_0$ do
    $S \leftarrow S \cup B(Y(t), t - 1)$
    $t \leftarrow t + 1$
  end
  if $S = V$ then return Yes
  else return No

The main step in Algorithm 1 is to compute $B(Y(t), t - 1)$, which can easily be implemented using breadth-first search. Since the number of iterations is bounded by $k_0 \leq |V|$, the verifier clearly runs in polynomial time.

Next, to prove the NP-hardness of our problem, we show that the Set Cover problem is polynomial-time reducible to our Graph Covering problem. We restate below the procedure that constructs a graph from an instance of Set Cover.

Construction: For a given instance of Set Cover, i.e., given $U$, $S$, and $k$, we construct a graph $G$ as follows:

1. For every element $i \in U$, construct a node $e_i$.
2. For every time instant $t \in \{1, 2, \ldots, k\}$ and every set $S_j$, $j \in M$, construct a node $s_{j,t}$.
3. For every element $i \in S_j$, for every set $S_j$, $j \in M$, and for all time instants $t$, connect $e_i$ to $s_{j,t}$ via $t - 1$ nodes. (Note that connecting node $u$ to $v$ via $w$ nodes means taking a line graph with $w$ nodes and connecting one leaf of the line graph to $u$ and the other to $v$.)
4. For every time instant $t$, construct $k + 1$ nodes $\ell_t^1, \ell_t^2, \ldots, \ell_t^{k+1}$ and connect each of them separately to $s_{j,t}$ for all $j \in M$, via separate sets of $t - 1$ nodes for each $\ell_t^i$, $i \in \{1, 2, \ldots, k + 1\}$.
5. Construct a star graph $E_{k+1}^{k+3}$ with center $c_{k+1}^{k+3}$ and connect $c_{k+1}^{k+3}$ to $s_{j,t}$ for all $j \in M$ via $k + 1 - t$ nodes.
6. Add a node $o$ by connecting it to one of the leaf nodes of $E_{k+1}^{k+3}$.

We first show that the construction procedure above can be carried out in polynomial time.

Lemma C.2.2. Given a Set Cover instance, i.e., given $U$, $S$, and $k$, we can carry out the construction procedure in $O(\text{poly}(|U|, |S|))$ time, where $\text{poly}(|U|, |S|)$ denotes some polynomial in $|U| = n$ and $|S| = m$.

Proof. We count the number of nodes created in each step and upper bound their total number by a polynomial in $n$ and $m$.
Before jumping to the proof of the theorem, we rst state and prove some lemmatas concerned with Graph Covering in the graphG built from the construction procedure above. LemmaC.2.3. IfY solves the Graph Covering problem forG withk 0 =k + 2 thenY (k + 2) =c k+3 k+1 . 142 Proof. Suppose by contradiction thatY (k + 2)6=c k+3 k+1 . LetL be the set of leaf nodes ofE k+3 k+1 . Since(u;v) = 2(k + 1) for any two leaf nodesu;v2L, we must have jB(Y (t);t 1)\Lj 1; for allt. Thus, j[ k+2 t=1 B(Y (t);t 1)\Ljk + 2<k + 3 =jLj: Hence, there exists at least one leaf node that is not covered byY . This contradicts thatY is a solution to the Graph Covering problem forG and concludes the proof. LemmaC.2.4. IfY solves the Graph Covering problem for the graphG then there exists a timet o such that B(Y (t o );t o 1)\B(Y (t + 2);t + 1) c =fog; (C.10) whereS c refers to the complement of setS. Proof. Suppose by contradiction that there exits a nodeu6=o such thatu2B(Y (t o );t o 1)\B(Y (k + 2);k + 1) c . From Lemma C:2:3 we know that Y (k + 2) = c k+3 k+1 . Thus, from our construction, we must have (u;o)> 2(k + 1) sinceu = 2B(Y (t + 2);t + 1). But we must have (u;o) 2t o 1 2k + 1; where the rst inequality is a result ofu;o2 B(Y (t o );t o 1) and the second inequality follows since t 0 k + 1. This contradicts that(u;o)> 2(k + 1) and completes the proof. 143 Lemma C.2.5. If Y solves the Graph Covering problem for the graph G then for every time instant t2 f1; 2;:::;kg there existsj2M such thatY (t + 1) =s j;t . Proof. Lemma C.2.3 and Lemma C.2.4 leave us withk samples to cover the nodes apart from nodeo and the nodes covered byY (k + 2). We lett 0 denote the time instant that covers nodeo. We prove the lemma by induction ont. We rst show that for time instantk + 1 there existsj2 M such thatY (k + 1) = s j;k . Suppose by contradiction that there does not. Now notice that for alli2f1; 2;:::k + 1g andj2M, (Y (k + 2); i k ) =(Y (k + 2);s 1;k ) +(s 1;k ; i k ) =k + 2t +t>k + 1: (C.11) This suggests that i k = 2B(Y (k + 2);k + 1). Now, sinceY (k + 1)6=s j;k and( i k ; i 0 k ) = 2( i k ;s j;k ) for alli6=i 0 and for allj2M, we must have jB(Y (k + 1);k)\f i k g k+1 i=1 j 1; (C.12) suggesting that j[ k+1 t=1 t6=t 0 B(Y (t);t 1)\f i k g k+1 i=1 jk<k + 1 =jf i k g k+1 i=1 j: (C.13) This implies that we won’t be able to cover all the nodes off i k g k+1 i=1 thus contradicting thatY solves Graph Covering for graphG. Now suppose the lemma is true for allt = t + 1; t + 2;:::;k + 1. Then fort = t, since ( i t ; i 0 t ) = 2( i t ;s j; t ) for alli6=i 0 and for allj2M, we must have jB(Y ( t); t 1)\f i t g k+1 i=1 j 1; (C.14) 144 suggesting that j[ t t=1 t6=t 0 B(Y ( t); t 1)\f i t g k+1 i=1 j t 1<k + 1 =jf i t g k+1 i=1 j: (C.15) Thus, we won’t be able to cover all the nodes off i t g k+1 i=1 thus contradicting thatY coversG. This concludes the proof. We now complete our proof of the theorem by proving Lemma 3.5.2, which we restate below. For given setsU andS there exists a Set Cover of size at mostk if and only if there exists a policyY with l(Y )k + 2 that solves Graph Covering for the constructed graphG. C.2.1 ProofofLemma3.5.2 Onlyif: Suppose there exists a Set CoverS 0 S such thatjS 0 jk. Without loss of generality letS 1 ;S 2 ;:::;S jS 0 j be the sets contained inS 0 and hence[ jS 0 j i=1 S i = U. Now we need to show that there exists a policyY withl(Y )k + 2 that solves Graph Covering for the constructed graphG. For that consider the policy dened byY (1) = o;Y (k + 2) = c k+3 k+1 andY (t) = s t1;t1 fort2f2; 3;:::;k + 1g. 
We claim that $Y$ covers all the nodes in the graph $G$, and we prove it by iterating over the steps of the construction and checking that all nodes created in each step are covered.

1. Since $d(Y(t), e_i) = d(s_{t-1,t-1}, e_i) = t - 1$, we must have $e_i \in B(Y(t), t - 1)$ for all $t \in \{2, 3, \ldots, k + 1\}$ and $i \in S_{t-1}$. This implies that $e_i \in \bigcup_{t=2}^{k+1} B(Y(t), t - 1)$ for all $i \in \bigcup_{t=2}^{k+1} S_{t-1} = U$. Hence all the nodes created in this step are covered.

2. As $d(Y(k + 2), s_{j,t}) = d(c_{k+1}^{k+3}, s_{j,t}) = k + 2 - t \leq k + 1$, we have $s_{j,t} \in B(Y(k + 2), k + 1)$ for every $j$ and $t$.

3. Consider any node $u$ added in this step on the path from $s_{j,t}$ to $e_i$ for some element $i \in S_j$ and some $j$ and $t$. We have

$d(Y(k + 2), u) = d(Y(k + 2), s_{j,t}) + d(s_{j,t}, u) \leq k + 2 - t + t - 1 = k + 1,$

implying that $u$ is covered by $Y(k + 2)$. Here the inequality holds because $d(s_{j,t}, u) \leq d(s_{j,t}, e_i) - 1 = t - 1$. Thus, all nodes in this step are also covered.

4. For every $i \in \{1, 2, \ldots, k + 1\}$ and $t \in \{1, 2, \ldots, k\}$,

$d(Y(t + 1), \ell_t^i) = d(s_{t,t}, \ell_t^i) = t.$

Hence, $Y(t + 1)$ covers $\ell_t^i$ for all $t \in \{1, 2, \ldots, k\}$. Next consider any node $w \neq \ell_t^i$ added in this step on the path from $s_{j,t}$ to $\ell_t^i$ for some $i$, $j$, and $t$. We have

$d(Y(k + 2), w) = d(Y(k + 2), s_{j,t}) + d(s_{j,t}, w) \leq k + 2 - t + t - 1 = k + 1,$

implying that $w$ is covered by $Y(k + 2)$. Hence $Y(k + 2)$ covers all the remaining nodes of this step.

5. For any node $w$ created in this step we have $d(Y(k + 2), w) \leq k + 1$. Hence every node created in this step is covered by $Y(k + 2)$.

6. Node $o$ is clearly covered by $Y(1)$.

Thus, we see that all the nodes created by the construction procedure are covered by our policy $Y$. Hence, $Y$ is a solution to the Graph Covering problem for $G$.

If: Now suppose that we have a solution $Y$ to Graph Covering with at most $k + 2$ samples on the constructed graph $G$. Suppose by contradiction that there does not exist a solution to Set Cover with at most $k$ subsets. We know by Lemma C.2.5 that for every time instant $t \in \{1, 2, \ldots, k\}$ there exists $j \in M$ such that $Y(t + 1) = s_{j,t}$. Define

$S^p = \{S_j \mid Y(t + 1) = s_{j,t},\ t \in \{1, 2, \ldots, k\}\}.$

We claim that $S^p$ constitutes a Set Cover, and since $|S^p| \leq k$, this contradicts our assumption. Suppose by contradiction that there exists an element $i \in U \setminus \bigcup_{S_j \in S^p} S_j$, and consider the node corresponding to this element in $G$, i.e., node $e_i$. Suppose that $e_i$ is covered by $Y(t')$ for some $t'$. By Lemma C.2.3 and Lemma C.2.4, we know that $e_i \notin B(Y(k + 2), k + 1) \cup B(Y(t_0), t_0 - 1)$. And by Lemma C.2.5, we get that

$e_i \in B(Y(t'), t' - 1) = B(s_{j,t'-1}, t' - 1)$ for some $t' \in \{2, 3, \ldots, k + 1\}$ and $j \in \{1, 2, \ldots, m\}$.

But since $i \in U \setminus \bigcup_{S_j \in S^p} S_j$, by construction we must have $d(s_{j,t'-1}, e_i) > t' - 1$. This implies that $e_i$ is not covered by $Y(t')$, contradicting our assumption and concluding the proof.

C.3 Proof of Proposition 3.6.1

In this section, we prove the optimality of the policy suggested in Proposition 3.6.1 for searching for an infection on a line graph. We will use the graph covering equivalence and show that the policy is optimal for the graph covering problem on the same line graph. We first show that the policy $Y^{line}$ covers all the nodes.
Since,t =d p e, we see that Q(Y line ) =[ d p e t=1 B(Y line (t);t 1) =V: Thus, the policy covers all the nodes. We next show that any optimal policy for the line graph of nodes must make at leastd p e searches, thus implying thatY line is indeed an optimal policy. LemmaC.3.2. For a line graph of nodes for any policyY T (Y )O line =d p e: 148 Proof. Suppose by contradiction that there exists a policyY that covers all the nodes inl(Y ) <d p e searches. Then using the structure of the line graph, we must have jB(Y (t);t 1)j 2t 1: Adding all the searches give us jQ(Y )j =j[ l(Y ) t=1 B(Y (t);t 1)j[ l(Y ) t=1 jB(Y (t);t 1)j = l(Y ) X t=1 2t 1 =l(Y ) 2 <: Thus, the policyY does not cover all the nodes which is a contradiction. This concludes our proof. 149 Bibliography Acemoglu, Daron, Munther A Dahleh, Ilan Lobel, and Asuman Ozdaglar (2011). “Bayesian learning in social networks”. In: The Review of Economic Studies 78.4, pp. 1201–1236. Alamo, Teodoro, Daniel G Reina, Pablo Millán Gata, Victor M Preciado, and Giulia Giordano (2021). “Data-Driven Methods for Present and Future Pandemics: Monitoring, Modelling and Managing”. In: arXiv preprint arXiv:2102.13130. Alizamir, Saed, Francis de Véricourt, and Shouqiang Wang (2020). “Warning against recurring risks: An information design approach”. In: Management Science 66.10, pp. 4612–4629. Allcott, Hunt and Matthew Gentzkow (2017). “Social media and fake news in the 2016 election”. In: Journal of economic perspectives 31.2, pp. 211–36. Allon, Gad and Achal Bassamboo (2011). “Buying from the babbling retailer? The impact of availability information on customer behavior”. In: Management Science 57.4, pp. 713–726. Allon, Gad, Achal Bassamboo, and Itai Gurvich (2011). “We will be right with you: Managing customer expectations with vague promises and cheap talk”. In: Operations Research 59.6, pp. 1382–1394. Allon, Gad, Achal Bassamboo, and Ramandeep S Randhawa (2012). “Price as a Signal of Product Availability: Is it Cheap?” In: Available at SSRN 3393502. Allon, Gad, Kimon Drakopoulos, and Vahideh Manshadi (2019). “Information Inundation on Platforms and Implications”. In: Proceedings of the 2019 ACM Conference on Economics and Computation, pp. 555–556. Alonso, Ricardo and Odilon Câmara (2016). “Persuading voters”. In: American Economic Review 106.11, pp. 3590–3605. Aviv, Yossi and Amit Pazgal (2008). “Optimal pricing of seasonal products in the presence of forward-looking consumers”. In: Manufacturing & Service Operations Management 10.3, pp. 339–359. Aydinliyim, Tolga, Michael S Pangburn, and Elliot Rabinovich (2017). “Inventory disclosure in online retailing”. In: European Journal of Operational Research 261.1, pp. 195–204. 150 Ball, Frank G, Edward S Knock, and Philip D O’Neill (2015). “Stochastic epidemic models featuring contact tracing with delays”. In: Mathematical biosciences 266, pp. 23–35. Bergemann, Dirk and Stephen Morris (Feb. 2017). Information Design: A Unied Perspective. CEPR Discussion Papers 11867. C.E.P.R. Discussion Papers. Besanko, David and Wayne L Winston (1990). “Optimal price skimming by a monopolist facing rational consumers”. In: Management Science 36.5, pp. 555–567. Blinder, Alan, Elie RD Canetti, David E Lebow, and Jeremy B Rudd (1998). Asking about prices: a new approach to understanding price stickiness. Russell Sage Foundation. Budak, Ceren, Divyakant Agrawal, and Amr El Abbadi (2011). “Limiting the spread of misinformation in social networks”. In: Proceedings of the 20th international conference on World wide web, pp. 
Bibliography

Acemoglu, Daron, Munther A Dahleh, Ilan Lobel, and Asuman Ozdaglar (2011). "Bayesian learning in social networks". In: The Review of Economic Studies 78.4, pp. 1201–1236.
Alamo, Teodoro, Daniel G Reina, Pablo Millán Gata, Victor M Preciado, and Giulia Giordano (2021). "Data-Driven Methods for Present and Future Pandemics: Monitoring, Modelling and Managing". In: arXiv preprint arXiv:2102.13130.
Alizamir, Saed, Francis de Véricourt, and Shouqiang Wang (2020). "Warning against recurring risks: An information design approach". In: Management Science 66.10, pp. 4612–4629.
Allcott, Hunt and Matthew Gentzkow (2017). "Social media and fake news in the 2016 election". In: Journal of Economic Perspectives 31.2, pp. 211–36.
Allon, Gad and Achal Bassamboo (2011). "Buying from the babbling retailer? The impact of availability information on customer behavior". In: Management Science 57.4, pp. 713–726.
Allon, Gad, Achal Bassamboo, and Itai Gurvich (2011). "We will be right with you: Managing customer expectations with vague promises and cheap talk". In: Operations Research 59.6, pp. 1382–1394.
Allon, Gad, Achal Bassamboo, and Ramandeep S Randhawa (2012). "Price as a Signal of Product Availability: Is it Cheap?" In: Available at SSRN 3393502.
Allon, Gad, Kimon Drakopoulos, and Vahideh Manshadi (2019). "Information Inundation on Platforms and Implications". In: Proceedings of the 2019 ACM Conference on Economics and Computation, pp. 555–556.
Alonso, Ricardo and Odilon Câmara (2016). "Persuading voters". In: American Economic Review 106.11, pp. 3590–3605.
Aviv, Yossi and Amit Pazgal (2008). "Optimal pricing of seasonal products in the presence of forward-looking consumers". In: Manufacturing & Service Operations Management 10.3, pp. 339–359.
Aydinliyim, Tolga, Michael S Pangburn, and Elliot Rabinovich (2017). "Inventory disclosure in online retailing". In: European Journal of Operational Research 261.1, pp. 195–204.
Ball, Frank G, Edward S Knock, and Philip D O'Neill (2015). "Stochastic epidemic models featuring contact tracing with delays". In: Mathematical Biosciences 266, pp. 23–35.
Bergemann, Dirk and Stephen Morris (Feb. 2017). Information Design: A Unified Perspective. CEPR Discussion Papers 11867. C.E.P.R. Discussion Papers.
Besanko, David and Wayne L Winston (1990). "Optimal price skimming by a monopolist facing rational consumers". In: Management Science 36.5, pp. 555–567.
Blinder, Alan, Elie RD Canetti, David E Lebow, and Jeremy B Rudd (1998). Asking about prices: a new approach to understanding price stickiness. Russell Sage Foundation.
Budak, Ceren, Divyakant Agrawal, and Amr El Abbadi (2011). "Limiting the spread of misinformation in social networks". In: Proceedings of the 20th International Conference on World Wide Web, pp. 665–674.
How WhatsApp Fuels Fake News and Violence in India.url: https://www.wired.com/story/how-whatsapp-fuels-fake-news-and-violence-in-india/. Mehta, Neel, Aditya Agashe, and Parth Detroja (2019). Swipe to unlock: the primer on technology and business strategy. Belle Applications, Incorporated. Mei, Wenjun, Shadi Mohagheghi, Sandro Zampieri, and Francesco Bullo (2017). “On the dynamics of deterministic epidemic propagation over networks”. In: Annual Reviews in Control 44, pp. 116–128. Morris, Stephen (2001). “Political correctness”. In: Journal of political Economy 109.2, pp. 231–265. Nguyen, Nam P, Guanhua Yan, My T Thai, and Stephan Eidenbenz (2012). “Containment of misinformation spread in online social networks”. In: Proceedings of the 4th Annual ACM Web Science Conference, pp. 213–222. 152 Nicas, Jack (Nov. 2016). Google to Bar Fake-News Websites From Using Its Ad-Selling Software.url: https://www.wsj.com/articles/google-to-bar-fake-news-websites-from-using-its-ad-selling- software-1479164646. Ottaviani, Marco and Peter Norman Sørensen (2006). “Professional advice”. In: Journal of Economic Theory 126.1, pp. 120–142. Ou, Han-Ching, Arunesh Sinha, Sze-Chuan Suen, Andrew Perrault, Alpan Raval, and Milind Tambe (2020). “Who and when to screen: Multi-round active screening for network recurrent infectious diseases under uncertainty”. In: Papanastasiou, Yiangos (2020). “Fake news propagation and detection: A sequential model”. In: Management Science 66.5, pp. 1826–1846. Papanastasiou, Yiangos, Kostas Bimpikis, and Nicos Savva (2017). “Crowdsourcing exploration”. In: Management Science. Qian, Feng, Chengyue Gong, Karishma Sharma, and Yan Liu (2018). “Neural User Response Generator: Fake News Detection with Collective User Intelligence.” In: IJCAI. Vol. 18, pp. 3834–3840. Rayo, Luis and Ilya Segal (2010). “Optimal information disclosure”. In: Journal of Political Economy 118.5, pp. 949–987. Reya, David, Lauren Gardnera, and S Travis Wallera (2013). “Finding the Most Likely Infection Path in Networks with Limited Information”. In: Romm, Tony, Rachel Lerman, Cat Zakrzewski, Heather Kelly, and Elizabeth Dwoskin (Oct. 2020). Facebook, Google, Twitter CEOs clash with Congress in pre-election showdown.url: https://www.washingtonpost.com/technology/2020/10/28/twitter-facebook-google-senate-hearing- live-updates/. Roth, Yoel and Nick Pickles (2020). Updating our approach to misleading information.url: https://blog.twitter.com/en_us/topics/product/2020/updating-our-approach-to-misleading- information.html. Scott, Mark (May 2019). Europe’s failure on ’fake news’.url: https://www.politico.eu/article/europe- elections-fake-news-facebook-russia-disinformation-twitter-hate-speech/. Sharma, Karishma, Feng Qian, He Jiang, Natali Ruchansky, Ming Zhang, and Yan Liu (2019). “Combating fake news: A survey on identication and mitigation techniques”. In: ACM Transactions on Intelligent Systems and Technology (TIST) 10.3, pp. 1–42. Shen, Zuo-Jun Max and Xuanming Su (2007). “Customer behavior modeling in revenue management and auctions: A review and new research opportunities”. In: Production and Operations management 16.6, pp. 713–728. Singh, Manish (2021). Facebook, Twitter, WhatsApp face tougher rules in India.url: https://techcrunch.com/2021/02/25/india-announces-sweeping-guidelines-for-social-media-on- demand-streaming-firms-and-digital-news-outlets/. 153 Spence, Michael (1978). “Job market signaling”. In: Uncertainty in economics. Elsevier, pp. 281–306. Stewart, Greg, Klaskevan Heusden, and Guy A Dumont (2020). 
“How control theory can help us control COVID-19”. In: IEEE Spectrum 57.6, pp. 22–29. Stokey, Nancy L (1979). “Intertemporal price discrimination”. In: The Quarterly Journal of Economics, pp. 355–371. Suen, Wing (2004). “The self-perpetuation of biased beliefs”. In: The Economic Journal 114.495, pp. 377–396. Tambuscio, Marcella, Giancarlo Ruo, Alessandro Flammini, and Filippo Menczer (2015). “Fact-checking eect on viral hoaxes: A model of misinformation spread in social networks”. In: Proceedings of the 24th international conference on World Wide Web, pp. 977–982. Tschiatschek, Sebastian, Adish Singla, Manuel Gomez Rodriguez, Arpit Merchant, and Andreas Krause (2018). “Fake news detection in social networks via crowd signals”. In: Companion Proceedings of the The Web Conference 2018, pp. 517–524. Veeraraghavan, Senthil and Laurens Debo (2009). “Joining longer queues: Information externalities in queue choice”. In: Manufacturing & Service Operations Management 11.4, pp. 543–562. WHO (2020). WHO Director-General’s opening remarks at the media brieng on COVID-19 - 11 March 2020. url: https://www.who.int/director-general/speeches/detail/who-director-general-s-opening- remarks-at-the-media-briefing-on-covid-19---11-march-2020. WHO (2021). WHO Coronavirus (COVID-19) Dashboard.url: https://covid19.who.int/. Yu, Man, Hyun-Soo Ahn, and Roman Kapuscinski (2014). “Rationing capacity in advance selling to signal quality”. In: Management Science 61.3, pp. 560–577. Zubiaga, Arkaitz, Maria Liakata, Rob Procter, Geraldine Wong Sak Hoi, and Peter Tolmie (2016). “Analysing how people orient to and spread rumours in social media by looking at conversational threads”. In: PloS one 11.3, e0150989. 154