BOUNDED TECHNOLOGICAL RATIONALITY: THE INTERSECTION BETWEEN
ARTIFICIAL INTELLIGENCE, COGNITION, AND ENVIRONMENT AND ITS EFFECTS
ON DECISION-MAKING
by
Sonia Jawaid Shaikh
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(COMMUNICATION)
August 2020
Copyright 2020 Sonia Jawaid Shaikh
Acknowledgements
I am most thankful to God. You gave me the courage, opportunities, and strength to beat
the odds and fulfill my dreams. I have come a long way, and that’s all because of You.
My wonderful family. I do not quite know how to express my gratitude and love for you
all. My super Pappa ji, who gave up so much for us throughout his life without expecting
anything in return. My quietly brilliant brother Dawood, who I have missed so much all these
years. My loving sister Nageen, who is my Samwise Gamgee and Gandalf all at the same time.
And my shining star, Mamma jaan, who is the most amazing, loving, supportive, kind, wise, and
always the most hopeful and positive person in the room. Thank you all for being there for me
24/7 and showering me with your support, love, kindness, generosity, and wisdom every step of
the way. Thank you for your unwavering belief in me. I love you all very much.
To my dearest friends Marcia Allison, Sara Bano, Christiana Robbins, and Yongrong
Shen: thank you so much for showing me unforgettable big-heartedness, care, and fellowship
through thick and thin. I am blessed to have you all in my life, and wish you love, health,
happiness, and success. Please know that all of you mean a lot to me.
I am grateful to many friends who have contributed towards my journey in various ways.
Thank you to my dear friend and one of my favorite teachers Josh Clark for giving me guidance,
knowledge, and unconditional support. My thanks to Komathi Ale-Valencia, Ignacio Cruz,
Christy Hagen, (my buddy) Hyun Tae Kim, Vishnu Priya Krishnamurthy (who is a telepath),
David Loredo, Arlene Luck, Dron Mandhana (who is always a phone call away 😊 ), Rachel
Moran, Steve McCornack (especially for your empathy and wisdom), Prawit Thainiyom (I have
texted you too many times and owe you a dinner), and Zhiming Xu for being a part of this
transformative experience.
I have had the privilege of being taught by excellent teachers. Many courses I have taken
during my graduate studies have heavily impacted my scholarship and thinking. My teachers did
not only demonstrate discipline and precision of their thoughts, but also showed me the
importance of passion in pedagogy and research. I am thankful to Frank Boster, Michael Bratton,
Roger Bresnahan, Richard John, Robertas Gabrys, Peter Monge, Brendan Mullan, Tim Levine,
Steve McCornack, Lynn Miller, Hee Sun Park, Abbass-al-Sharif, David Wiley, and Gwen
Wittenbaum for teaching me things I did not know and most importantly, for introducing me to
new ways of looking at the world. I still have most of my handwritten notes and books from your
courses. I often think about the exchange of ideas which occurred in your classes and treasure
those memories very much.
I must thank Michael Cody and Lynn Miller for all their help, kindness, and support
during my time at USC. I hope to one day pass on the kindness you’ve shown me to others.
A big thank you to the awesome Francesca Gacho who has helped me so much with my
argument development, thinking, and writing. I truly value all your assistance and our friendship.
To my friends and a superb team of web development & computer science professionals
and engineers Shreya Gupta, Swapnil Surdi, and Aditi Swaroop: thank you so much for all your
efforts, feedback, and hard work! You were the key players in this project. And you all rock!
It is not an overstatement to say that there are some individuals without whom a lot of
what I have accomplished would have been incredibly difficult. To -.-…..-.-…..--..--……-.., I say
thank you SO much! I will always remember what you have done for me.
I am thankful to the three superwomen who have provided me (and many other students)
a strong administrative support system: Anne Marie Campian, Sarah Holterman, and Sophie
Madej. I don’t know how you do it, but you held everything together for us.
I am very grateful to Microsoft Research, which funded this dissertation project. I was
honored to have your unconditional support which helped me bring my dream project to fruition.
And of course, thank you USC Annenberg School of Communication for backing my research.
I want to extend my deep appreciation and gratitude to my committee members for their
advice, assistance, time, and valuable feedback during the course of my graduate studies: Janet
Fulk, Andrea Hollingshead, Tim Levine, and Dmitri Williams. You have made me think, rethink,
and do better. Each one of you has uniquely contributed to my goal achievement, personality
development, and scholarship. I am immensely thankful to have known, worked, and learned
from all of you.
A special thank you to Tim Levine who has been there for me literally since day one of
my graduate studies. Your ideas have profoundly influenced my thinking and research. I cherish
our relationship very much and feel fortunate to have had you as my mentor and now a friend.
And finally, a huge thank you to my advisor Dmitri Williams who came into my graduate
life as a result of what I consider to be a combination of my good fortune, interesting timing, and
intuitive decision-making. You were there for me every time I reached out to you. You gave me
a room furnished with your mentorship where I could exercise my intellectual freedom and take
risks. I truly appreciate your counsel, collaboration, and honesty. You have been instrumental in
me getting successfully to the finish line.
Table of Contents
Acknowledgements
Table of Contents
List of Tables
List of Figures
Abstract
Chapter 1. Introduction
Chapter 2. From ELIZA to Alexa: A Brief Note on the History and Definition of Intelligent Assistants
  1950–1980: Thinking and Communicating Machines
  1980–2010: The Rise of the ‘Assistant’
  2010–present: Interactive, Assistive, and AI-enabled Machines
  The Intelligent Assistant: Definition and Features
Chapter 3. Resource Environments, Intelligent Assistant, and Decision-Making: A Two-Part Story
  Part I. Rethinking How and When Resource Abundance and Scarcity Affect Prosocial and Antisocial Behaviors
  Part II. Intelligent Assistant in Resource Environments: Consequences for Information Sharing
Chapter 4. Notes on Research Design & Method
  Part I. Building a Web Portal for Experimental Research
  Pilot Test
  Part II. Designing and Incorporating an Intelligent Assistant
  Pilot Tests
Chapter 5. Results = Study 1 (Baseline) + Study 2 (Intelligent Assistant)
  Study 1. The Baseline Model
  Information Withholding
  Data Coding and Description
  Information Sharing - Distribution of Behaviors
  Study 2. Intelligent Assistant and Resource Environments
  AI Nudge – Share Best
  AI Nudge – Share Worst
  AI Nudge – Share Nothing
  Comparing Baseline and Intelligent Assistant Conditions: Changes in Information Sharing
  Benevolence
  Sabotage
  Silence
Chapter 6. Discussion
  Baseline Model: Resource Environments’ Effects on Information Sharing
  Intelligent Assistants in Resource Environments: Effects on Information Sharing
Chapter 7. Bounded Technological Rationality
Conclusion
References
Appendix – Intelligent Assistant Designs
List of Tables
Table 4.1 Participant feedback on the design of the intelligent assistant.
Table 4.2 Participant feedback on the design of the intelligent assistant.
Table 4.3 Script used by Alana after the game was completed.
Table 5.1 Baseline/No AI: Manipulation checks for EAPA & ESPS.
Table 5.2 Baseline/No AI: Information withholding vs. openness to sharing information.
Table 5.3 Baseline/No AI: Descriptive statistics and percentages of sharing best, worst, and no information by resource environments.
Table 5.4 AI nudge – share best: Descriptive statistics and frequency of sharing best, worst, and no information.
Table 5.5 AI nudge – share worst: Descriptive statistics and frequency of sharing best, worst, and no information.
Table 5.6 AI nudge – share nothing: Descriptive statistics and frequency of sharing best, worst, and no information.
Table 5.7 Information withholding and sharing by resource condition and intelligent assistant across both studies.
Table 5.8 Sharing best information in the baseline and AI nudge – share best conditions divided by resource environments.
Table 5.9 Sharing worst information in the baseline and AI nudge – share worst conditions by resource environments.
Table 5.10 Sharing no information in the baseline and AI nudge – share none conditions by resource environments.
List of Figures
Figure 4.1 Instructions page 1.
Figure 4.2 Instructions page 2.
Figure 4.3 Participants were shown how the interface would look.
Figure 4.4 Attention check.
Figure 4.5 Connection page with time delay.
Figure 4.6 Environmental scarcity manipulation.
Figure 4.7 Game loading page.
Figure 4.8 Main task page.
Figure 4.9 Message from a new participant.
Figure 4.10 Choice to respond or ignore the message.
Figure 4.11 Information sharing page.
Figure 4.12 Possible connection with other participants.
Figure 4.13 Stepwise process for participants in the baseline study.
Figure 4.14 Round 1. A word cloud depicting user feedback on features to be added or edited in the design of the assistant.
Figure 4.15 Alana introduces herself.
Figure 4.16 Alana and attention check.
Figure 4.17 Alana on the main task page.
Figure 4.18 Reminding participants to check back after round 1.
Figure 4.19 Alana and help options.
Figure 4.20 Alana reminds participants that she will be available to help.
Figure 4.21 Alana recommends a door to the user.
Figure 4.22 Message received from a new player.
Figure 4.23 Alana preparing to give advice on information sharing.
Figure 4.24 Alana’s nudge for information sharing.
Figure 4.25 Stepwise process for participants in the intelligent assistant experimental conditions.
Figure 5.1 Baseline/No AI: Information withholding vs. openness to sharing.
Figure 5.2 Baseline/No AI: Distribution of door values shared by resource environment. Shared nothing = 0, shared the worst = 1, shared the best (and accurate) = 8.
Figure 5.3 AI nudge – share best: Distribution of door values shared by resource environment. Shared nothing = 0, shared the worst = 1, shared the best (and accurate) = 8.
Figure 5.4 AI nudge – share worst: Distribution of door values shared by resource environment. Shared nothing = 0, shared the worst = 1, shared the best = 8.
Figure 5.5 AI nudge – share nothing: Distribution of door values shared by resource environment. Shared nothing = 0, shared the worst = 1, shared the best (and accurate) = 8.
Figure 5.6 Sharing of best, worst, and no information by AI nudge and the environment resource condition.
Figure 5.7 Information withholding and openness to sharing by baseline/no AI model and intelligent assistant, segmented by resource environments.
Figure 5.8 Incidence of best, worst, and no information sharing by intelligent assistant and baseline conditions, segmented by resource environments.
Figure A Round 1. Intelligent assistant design.
Figure B Round 2. Alana – female intelligent assistant.
Figure C Round 2. Charles – male intelligent assistant.
Figure D Round 2. Nimos – gender-neutral intelligent assistant.
Abstract
An intelligent assistant is a type of artificial intelligence (AI)-enabled technology which
has the capability to process and learn from data, interact, and perform tasks for humans.
Intelligent assistants are increasingly being deployed within simple (e.g. customer support) as
well as complex environments (e.g. space missions) to assist people with their decision-making
and assignments. From both theoretical and applied perspectives, recommendations from an
intelligent assistant can influence users’ decisions in ways that affect not only the users themselves
but also help or hurt others. As the use and sophistication of this technology increase, it raises
questions such as: how and to what extent can intelligent assistants influence users’ prosocial and
antisocial decision-making, i.e., behaviors that help or hurt others? How do environmental
parameters affect this decision-making? How can we design this technology for social good?
The goal of this dissertation was to address the above-mentioned questions by situating
them within the context of resource abundant and scarce environments to test their effects on
prosocial and antisocial behaviors pertaining to information and collaboration with intelligent
technology. Prior research has shown that resource abundance and scarcity in our environments
affect prosocial and antisocial behaviors. However, studies on this subject have not tested how
these environments influence information sharing from prosocial and antisocial perspectives
even though the exchange of information between people is central to human experience and
plays an important role in how we perceive the world and make decisions. Furthermore, although
intelligent technology is being developed and deployed at rapid rates, there are huge theoretical
and empirical voids on how it can affect the prosocial and antisocial choices people make, and if
it can be used to increase information sharing to help (or hurt) others in environments that
encourage otherwise. These gaps are a hindrance not only to understanding human-AI
relationships, but also to understanding how this technology can be designed to improve human conditions.
This dissertation tackled these problems via a two-pronged investigation. The first part
of this inquiry tested how abundant and scarce resource environments affect prosocial and
antisocial behaviors such as sharing or withholding information that can help or hurt others. The
second part probed into how an intelligent assistant could serve as an intervention to nudge
people into making prosocial and antisocial choices in these resource environments.
Two first-of-their-kind experimental studies were conducted as a part of this
investigation. The baseline study (N = 126) varied two types of resource environments: abundant
(EAPA) and scarce (ESPS). Results showed that there was a higher incidence of sharing accurate
and most helpful information and a lower incidence of withholding information in an abundant
environment compared to a scarce one. The incidence of antisocial behavior, i.e., sharing the
worst information to hurt others, was 2% in abundant and 3% in scarce environments.
For the second study (N = 256), an intelligent assistant was embedded in abundant and
scarce environments to interact with users and help them navigate these environments. This
experiment tested if an intelligent assistant could nudge people to share the best information, i.e.,
information which could help; share the worst, i.e., information which could hurt; or share nothing,
i.e., withhold helpful information from a recipient. Results from a 2 (Resource environment:
abundant [EAPA] vs. scarce [ESPS]) x 3 (Intelligent assistant nudge: share the best vs. share the
worst vs. share nothing) between-subjects study show that the intelligent assistant was able to alter
the distribution of information-sharing behaviors to reflect its nudge. Compared to the baseline,
the nudge to share helpful information increased this behavior by 16.66% for EAPA and 131.35%
for ESPS. The
nudge to share the worst information increased this behavior by 900% for EAPA and 1200% for
ESPS. The nudge to share no information increased this behavior by 375% for EAPA and
433.33% for ESPS. Contrary to expectations, there were no significant differences between the
two resource environments in terms of accepting the prosocial and antisocial nudges from an
intelligent assistant, as both groups shared the best and worst information with others at
similar levels.
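For clarity, the percentage changes reported above are relative changes in the incidence of each behavior between the baseline and the corresponding nudge condition. The short Python sketch below illustrates the calculation only; the helper function and the 20% post-nudge rate are assumptions made for illustration, while the 2% baseline incidence and the 900% change are taken from the results above.

```python
# A minimal sketch (not from the dissertation) of the percentage-change calculation
# used to report the results above.
def percent_change(baseline_rate: float, nudge_rate: float) -> float:
    """Relative change in the incidence of a behavior, expressed in percent."""
    return (nudge_rate - baseline_rate) / baseline_rate * 100

# Illustration: the baseline incidence of sharing the worst information in EAPA was 2%;
# a 900% increase, as reported above, implies a post-nudge incidence of roughly 20%.
print(percent_change(0.02, 0.20))  # -> 900.0
```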
Using these findings as preliminary evidence and extending Herbert Simon’s theoretical
perspective on decision-making, the core thesis proposed in this dissertation is that an intelligent
assistant functions like a cognition-of-sorts in our environments. This allows humans to not only
use it for assistive purposes, but also accept its suggestions across resource environments. The
meeting of human cognition and this cognition-of-sorts influences the decision-making processes
within our environments, and thus, gives rise to bounded technological rationality. The
consequences of this perspective and implications for theory and research are also discussed.
Keywords: antisocial, artificial intelligence, cognition, decision-making, information, intelligent
assistant, prosocial, scarcity
Chapter 1. Introduction
“History followed different courses for different peoples because of differences among
peoples’ environments, not because of biological differences among people themselves.”
(Diamond, 1999, p. 25).
“The topics of artificiality and complexity are inextricably interwoven” (Simon, 1962, p. xxii).
In the past, when people had to make a decision, they relied on each other or themselves.
Humans were the only cognition [1] in their environments. That is not entirely true anymore.
Individuals (and groups) are increasingly using a particular type of AI-enabled technology called
intelligent assistants (e.g. Microsoft Cortana) to help them make decisions across various
environments (Wilson & Daugherty, 2018). Intelligent assistants stand apart from other types of
technologies in that they are equipped with three unique features: the ability to use AI (i.e., to learn
from the environment and make changes to it), to interact, and to assist users. Together, these
features allow them to function as decision-making, assistive, and collaborative entities which can
be deployed to help us navigate environments ranging from the simple to the complex, such as
space missions (CIMON brings AI to the space station, n.d.). From both theoretical and applied
perspectives, recommendations from an intelligent assistant can influence users’ decisions in ways
that affect not only the users themselves but also help or hurt others. The increasing use and
sophistication of this technology
raises important questions in this regard, such as: how and to what extent can intelligent assistants
influence users’ prosocial and antisocial decision-making, i.e., behaviors that help or hurt others?
How do environmental parameters affect this decision-making? How can we design this
technology for social good?

[1] This is not meant to say that mammals such as primates do not have cognition, but rather that
humans are far more evolved and have very complex thinking, reasoning, and linguistic abilities
that outdo other species.
The dissertation takes the concerns presented above and situates them in the context of
resource abundance and scarcity and their effects on information sharing to render a two-pronged
investigation. The first part investigates how abundant and scarce resource environments affect
individual decisions to share or withhold information that can benefit a recipient. The
second part tests how an intelligent assistant can serve as an intervention to nudge decision-
makers to increase the sharing of information that helps a recipient. These issues have not been addressed
in research but need our attention as AI technology increasingly becomes more sophisticated and
personalized to be deployed across contexts where people experience resource abundance and
scarcity and engage in interpersonal interaction and information exchange. Some examples
include financial markets, high pressure tasks and missions (e.g. medical and space), resource
allocation contexts (e.g. welfare schemes), etc. It is easy to imagine that if intelligent assistants
or related technology were to successfully nudge people towards making prosocial and antisocial
choices, there could be large-scale consequences.
Past research on the effects of algorithms, automated decision-making (ADM), and AI
has investigated how and when people prefer machines to humans or vice versa (e.g. Araujo et
al., 2020; Sundar & Kim, 2019; Logg et al., 2019). However, what we lack is empirical
inquiry into and knowledge of how intelligent technology can nudge humans to make prosocial and
antisocial decisions, i.e., actions which can help or hurt others. Prior research has also often
decontextualized human behavior and technology from resource abundance and scarcity in
the environment although there is ample evidence that these factors trigger prosocial and
antisocial behaviors such as generosity, cheating and spite (Gino & Pierce, 2009; Prediger et al.,
2014; Yam et al., 2014). Furthermore, and importantly, there is very little literature on intelligent
assistants and their effects on human prosocial and antisocial decision-making (e.g. Shaikh &
Cruz, 2019; Winkler et al., 2019) even though this technology is being deployed across physical
and virtual spaces at a rapid pace. [2] Almost everyone with a smartphone can be said to own an
intelligent assistant (e.g. Siri, Bixby, etc.), yet we do not know enough about what impact this
specific technology has on us.
This dissertation addresses the gaps detailed above and makes several contributions to
the literature. First, it defines and conceptualizes intelligent assistants as a distinct and specific
form of AI-enabled technology using a historical approach and modern developments in the field
of AI. It then presents a thesis on the issue of contradictory findings in the literature, where both
resource abundance and scarcity appear to affect prosocial and antisocial behaviors. This dissertation also
develops a theoretical rationale and conducts an empirical investigation into the hitherto
unexplained problems of how people share information to help or hurt others in abundant or
scarce environments. It also investigates how an intelligent assistant embedded in these
environments can nudge people into sharing or not sharing information that can help or hurt a
recipient. In first-of-their-kind studies, this project tests if an intelligent assistant can encourage
information sharing as a form of prosocial behavior and decrease antisocial behaviors such as
information withholding and the dissemination of incorrect information in resource abundant and
scarce environments. The intended result of this work is to test if human-AI collaboration can
help with increasing the exchange of accurate information amongst decision-makers--a critical
component of any developmental process. This project is also aimed at reducing antisocial
informational behaviors such as information withholding and deception under resource scarcity--
a condition which is typically associated with antisocial behaviors (Prediger et al., 2014).

[2] There is research on its relational aspects, i.e., how people think of it as a friend or companion
(Lopatovska & Williams, 2018; Purington et al., 2017). There is also survey research on user
satisfaction and experience of intelligent assistants (Bogers et al., 2019).
The core thesis proposed in this dissertation is that an intelligent assistant functions like a
cognition-of-sorts in our environments and is, therefore, perceived as such by humans. This allows
humans to not only use it for assistive purposes, but also accept its suggestions and be influenced
by its decisions across various contexts. The meeting of human cognition and this cognition-of-
sorts influences the decision-making processes within our environments, and thus, gives rise to
bounded technological rationality.
This dissertation is an interdisciplinary theoretical effort: it combines approaches
from behavioral economics (Mani et al., 2013; Prediger et al., 2014; Thaler & Sunstein, 2009) and
communication research on deception and information manipulation (McCornack et al., 2014;
Levine, 2014; Serota et al., 2010) to make its arguments on how people withhold or share
information across resource abundant and scarce environments. It also uses findings from the
recent literature which has shown that people adjust their judgement to that of an algorithm
(Logg et al., 2019) and trust AI evaluations (Araujo et al., 2020).
It should be noted that this dissertation takes a descriptive approach to human decision-
making and behavioral change as a function of collaboration with an intelligent assistant across
resource abundant and scarce environments. Its pursuits are not normative in nature.
The dissertation opens its inquiry into how human decision-making is affected by AI and
environment by presenting a conceptual framework on the technology itself. Chapter two defines
the intelligent assistant as a specific and distinct form of technology which has three features: AI,
interaction, and assistance. Using a historical approach, this chapter investigates how intelligent,
interactive, and assistive technology evolved over time and came to be as it is now in the form of
intelligent assistants. The chapter recognizes and parses three phases in our history since
World War II: 1950-1980, when machines’ intelligent and interactive capabilities began to be
developed; 1980-2010, when the technology’s assistive capabilities were introduced; and 2010
onwards, to argue that the development and integration of AI, interaction, and assistance as
three unique capabilities in one machine have rendered intelligent assistants a distinct form of
technology.
Chapter three lays the theoretical groundwork and presents an extensive review of the
literature on abundance and scarcity and their effects on prosocial and antisocial behaviors. It
argues that resource abundance and scarcity exist in two contexts: environmental and personal.
However, research has often either conflated these two contexts with each other or failed to
identify the distinction between them from theoretical and methodological perspectives. This is
one of the reasons behind contradictory or inconsistent findings where both resource abundance
and scarcity seem to affect prosocial and antisocial behaviors. To present a solution to these
conceptual and methodological issues, the chapter presents a detailed literature review to propose
that abundance and scarcity should be environmental resource abundance and scarcity as
separate constructs to personal resource abundance and scarcity. Furthermore, an interaction
between environmental and personal resource contexts yields a “resource environment”. The two
types of resource environments studied in this project were abundant (EAPA) and scarce (ESPS).
Using theoretical perspectives and findings from research on scarcity (Mani et al., 2013;
Mullainathan & Shafir, 2013), prosocial and antisocial behaviors (Prediger et al., 2014), and
communication (McCornack et al., 2014; Levine, 2014), this chapter hypothesized that
navigating a scarce (ESPS) resource environment is a cognitively taxing experience. When
individuals are taxed, they are more interested in conserving physical and mental effort and
therefore, less likely to respond to requests for information. Arguably, this would decrease
information sharing and increase information withholding. Alternatively, those in an abundant
environment (EAPA) would not be as taxed and therefore, show a higher incidence of sharing
helpful information (prosocial behavior) and a lower incidence of information withholding
(inaction/lack of prosocial behavior).
Chapter three also presents a theoretical case on how human interaction and collaboration
with intelligent assistants in abundant and scarce environments can affect their prosocial and
antisocial decision-making pertaining to information. It argues that when people are cognitively
taxed i.e. in scarce environments, they would be more likely to accept suggestions from an
intelligent assistant. Furthermore, since intelligent assistants are cognition-of-sorts, they would
be able to affect human behaviors compared to when individuals make decisions on their own.
Therefore, an intelligent assistant can be used to serve as an intervention to nudge people into
sharing information which can help or hurt recipients.
The studies conducted in this project use an experimental approach to research. Chapter
four describes the research design and the development of a specifically designed web portal
which was used to invite hundreds of virtual users to participate in the two studies. This portal
was used to test human decision-making with and without an intelligent assistant across EAPA
and ESPS.
Chapter five describes findings from the two studies in detail. It shows how abundant and
scarce resource environments encouraged information sharing and withholding in different ways
in the absence of an intelligent assistant. It also discusses findings from the second study where
an intelligent assistant was embedded in abundant and scarce resource environments to nudge
people to make prosocial and antisocial choices pertaining to information. This is followed by
chapter six, which includes a discussion of both studies. This chapter also details implications
for theory and research. Chapter seven consolidates the theoretical arguments and findings in this
dissertation and presents a perspective called bounded technological rationality. This perspective
builds upon and extends Herbert Simon’s work on rationality (Simon, 1956, 1972). It
proposes that human behavior reflects a marked difference with and without intelligent
technology across resource environments. This difference indicates that human decision-making
is affected at the intersection of AI, cognition, and the environment, and thus, yields bounded
technological rationality.
This project studies the future. Currently, commercially available intelligent assistants
use what is called narrow AI. This type of AI is limited to specific tasks. It mimics
the human mind but does not work like it. It must be specifically constructed according to the
needs of the environment and does not have the ability to teach itself without human
intervention. However, slowly but surely, we are moving towards developing AI that will not
only be more adaptive to its environment but also take on cognitive abilities like humans. Signs
of such developments are already cropping up. For instance, scientists have developed programs
which can learn on their own (Silver et al., 2018). This is a breakthrough as it presents the
algorithmic capacity to self-learn without being given an initial set of instructions. [3] If AI were to
develop this capacity, then we would move towards what is called Artificial General Intelligence
(AGI). In the past few years, opinion on when we can achieve AGI has changed rapidly. Only a
decade ago, many leading thinkers believed that such technology could not be developed soon.
Increasingly, this opinion has changed. More industry leaders and innovators now believe that
AGI is a real possibility (Vincent, 2018). These advancements and the rapid deployment of
intelligent assistants only go to show that the time to study them is now. If we can better
understand how AI affects our decision-making processes, then we can not only limit its risks, but
also leverage it for social good. This dissertation is a small step in that direction.

[3] This is rather similar to how a human mind works--we can figure things out on our own without
always being given instructions, i.e., with trial and error (see Noble, 1957).
Chapter 2. From ELIZA to Alexa: A Brief Note on the History and Definition of Intelligent
Assistants
“Good morning. It’s 7 A.M. The weather in Malibu is 72 degrees with scattered clouds.
The surf conditions are fair with waist to shoulder highlines. High tide will be at 10.52 A.M.”
JARVIS, Iron Man
Human interaction and collaboration with intelligent, interactive, and assistive
technology has long been a part of cultural and fictional narratives in the forms of various
characters such as JARVIS [4] in Iron Man (2008), Samantha in Her, and Marvin in Hitchhiker’s
Guide to the Galaxy to name a few. However, the rapidly changing technological landscape has
allowed the proliferation of this specific form of technology from fiction to reality. Its presence
is hard to miss across physical and virtual spaces as this technology in its various forms and
functionalities is being incorporated in our smartphones, websites, cars, homes, and
organizations (Canbek & Mutlu, 2016). Some examples of commercially available products that
exemplify this technology are Alexa, Cortana, and Siri, which have been developed by
Amazon, Microsoft, and Apple, respectively. This technology helps users with everyday
mundane tasks such as setting alarms and reminders, making appointments, playing music,
navigating maps, ordering food, and finding information amongst many other things (Cortana,
n.d., Priest et al., 2019). It is also being used to help humans in more complex scenarios such as
supporting astronauts on space missions (CIMON brings AI to the International Space Station,
n.d.; Wall, 2018) and providing mental and physical health services (Ghandeharioun et al.,
2018).
[4] Just a Rather Very Intelligent System.
Interestingly, as this technology continues to spread and attract the attention of
researchers from various domains (e.g. communication, information studies, human-computer
interaction, etc.) on its socio-psychological and behavioral effects (see Purington et al., 2017;
Shaikh & Cruz, 2020; Winkler et al., 2019), there is a surprising lack of consensus on how we
should define it, describe its features, and comprehend its various forms. This problem is
amplified when professionals, media, and researchers use various terms and definitions
interchangeably. Some examples include: “intelligent personal assistants” (Cowan et al., 2017;
Hauswald et al., 2015); “personal virtual assistant” (Cooper et al., 2004); “personal productivity
assistant” (Personal Digital Assistant - Cortana, n.d.); “smart personal assistant” (Winkler et al.,
2019); and “virtual assistant” (Gruber et al., 2018). The lack of coherence in referring to this
technology and defining it poses many threats. First, it precludes our understanding of what this
technology really is, its inherent functions, and the forms it can take. Second, it conflates the
features of this technology with other kinds of technologies that may appear to have similar
features. Third, it creates theoretical and, therefore, subsequent empirical difficulty in studying this
technology and its effects on human behaviors and on socio-psychological and cultural processes and
outcomes. The ambivalence concerning this technology is also a product of limited historical
understanding on how, when, and why this technology evolved and developed over time.
The purpose of this chapter is to address the above-mentioned issues and present a
conceptual and definitive framework that can help researchers and users to understand the
constitutive features of this technology and the various forms it can take. This framework
emerges out of the historical evolution of this technology that saw the rise of the three
constitutive features of intelligent assistants that I use to define the technology later in the
chapter: artificial intelligence (AI), interaction, and assistance. The history of this technology has
a bearing upon the term; and therefore, it is imperative that we outline its constitutive features
through understanding how it evolved over time. To this end, I trace the history of intelligent
assistants across three eras: 1945–1980, when machines’ intelligent and interactive capabilities
began to be developed; 1980–2010 when the technology’s assistive capabilities were introduced;
and from 2010 to today, which has seen the integration of these three features in intelligent
assistants for commercial and public use.
This framework has been developed by tracing the historical evolution of this technology
since the interchangeable use of various terminologies to describe it is a product of a complex
technological landscape that has unfolded in the past 70 years especially as it relates to
theoretical and empirical advancements in the areas of AI and human-machine communication.
The goal of this chapter is to help researchers in communication and other disciplines
understand this technology better as human-machine communication becomes ubiquitous
and human relationships with AI continue to evolve (Guzman & Lewis, 2019; Sundar, 2020).
The chapter begins by tracing the history of intelligent assistants over the past 70 years.
This is followed by discussion of its definition, constitutive features, and forms, and a brief conclusion.
The Evolution of Intelligent Assistants: A Historical Review
The modern history of intelligent assistants has centered mostly around utility,
productivity, and efficiency. The formal evolution of computers or machines that can think,
interact, and assist has its origins in the scientific endeavors undertaken during and after World Wars I
(1914–1918) and II (1939–1945). In the aftermath of the Great War, the Germans developed a complex
machine called Enigma to covertly communicate with each other and their allies. Enigma
converted formal language into a complex code that could only be deciphered if the recipient
also had an Enigma machine which was set to decipher the text (Singh, 1999). Over the years
that followed, many attempts were made to decipher and break Enigma codes and as a result,
code-breaking theory and technology only became more sophisticated. A breakthrough came
when Alan Turing, a British mathematician, deployed a complex system of code-breaking to
decipher German communication (Leavitt, 2006; Singh, 1999). His methods were instrumental in
helping the Allied forces gain significant advantages and win the war against the Axis during the
second world war (Leavitt, 2006). The success of a complex machine--by the technological
standards of the time--in helping to deter and eventually beat the enemy sparked new hopes
and big ambitions. And therefore, technological innovation pertaining to codebreaking during
militaristic pursuits directly prompted a formal inquiry into the creation of thinking and
interactive machines.
1950–1980: Thinking and Communicating Machines
In 1950, on the heels of the success of his code-breaking technology, Turing proposed
the idea of a thinking machine. He theorized that if a machine could imitate a human to the extent
that another human could not distinguish between it and an actual human, then that machine could
arguably be said to ‘think’. This was called the Imitation Game and later
the Turing Test. It is important to note here that Turing did not use the word ‘think’ in its strict
literal sense, but rather how a machine could behave like a human and be perceived as such by a
human observer (see Turing, 1950; 1951). Turing’s post World War II (WWII) conceptualization
of machines as thinking entities pushed the development of software and computers that could
function as close approximations to human intelligence. And thus, a technology that was born
out of necessity and urgency in the battlegrounds moved into the hallowed halls of the academy
when John McCarthy along with his colleagues Marvin Minsky, Nathaniel Rochester, and
Claude Shannon presented a proposal in 1955 for an interdisciplinary research project to be
conducted at Dartmouth College. The main argument underlying this proposal was to encourage
and conduct formal studies on the problem of “artificial intelligence”.
“The study is to proceed on the basis of the conjecture that every aspect of learning or any
other feature of intelligence can in principle be so precisely described that a machine can
be made to simulate it. An attempt will be made to find how to make machines use
language, form abstractions and concepts, solve kinds of problems now reserved for
humans, and improve themselves.” (McCarthy et al., 2006).
This was the first time the term “artificial intelligence” was used. In response to
McCarthy et al.’s call, the first conference on AI was held in 1956 at Dartmouth College. This
launched an interdisciplinary movement and scholarship comprising researchers from computer
science, mathematics, linguistics, and engineering. In line with the necessity of developing smart
machines that emerged during WWII, utilitarian motivations of reducing cognitive efforts and
increasing performance spurred the furthering of creating thinking machines to approximate
human intelligence or do tasks that would otherwise be done by humans (Solomonoff, 1966). To
achieve these goals, scientists began with developing algorithms to model aspects of human
intelligence.
The earliest attempts to write computer programs that could logically solve problems
came from the work of Newell and Simon called the Logic Theorist (1956). Although the Logic
Theorist was not explicitly labelled an AI program, the goal was to develop a code that could
reason and solve problems. The Logic Theorist was able to solve thirty-eight of the first fifty-two
theorems from the Principia Mathematica written by Bertrand Russell and Alfred North
Whitehead (published in three volumes; 1910, 1912, 1913). This achievement by a computer
program was a breakthrough of sorts at the time. Although the Logic Theorist worked on
mathematical theorems using logical protocols (specifically sentential calculus), it was still very
far from displaying aspects that approximated human cognition, reasoning and thinking skills. To
reach that goal, two members of the same team that developed the Logic Theorist, Allen Newell
and Herbert Simon, fused then-dominant psychological theories (e.g. behaviorism and the Gestalt
perspective [5]) on human cognition, intelligence, problem solving, and the use of heuristics with
programming logic (1961). The theoretical and empirical move to include cognitive psychology
in the creation of such programs was crucial if machines were to reflect the mechanisms that
governed the human mind. Newell and Simon, who were working with RAND at the time--a
think tank dedicated to the advancement of the military--argued that a human mind is an
information processing unit that solves a problem by assigning rules and symbols which make up
the problem system. To test and validate their theory, they conducted laboratory studies where
they presented a human subject with two equations (A and B) and asked him to derive one
equation from the other. The task required that the subject think out loud as he solved the
problem. Newell & Simon found that their participant used a series of propositions and heuristics
that helped him to apply, reject, and reapply logical rules to extract equation B from equation A.
The equation, as Newell and Simon described, was merely a symbolic expression, and nothing
more. They argued that a human mind works by exploiting symbols from the environment and if
a computer program could work in a similar fashion, then it could be said to have simulated
human thought. Newell and Simon used the findings from their experimental studies to write the
Generalized Problem Solver (GPS), which they described as a program that “simulates human
thought” (1961, p. 109). GPS was written to show that computers can be approximated to
function like the human mind to solve problems. GPS was the first AI program that gave
Turing’s conception of thinking machines a chance by actively demonstrating how human
intelligence can be artificially simulated in computers (Russell & Norvig, 2015). GPS paved the
way for developing theoretical ideas and applications that implemented AI-related methods,
thinking, or procedures in one form or another (see Feigenbaum, 1963; Pauker et al., 1976).

[5] Behaviorism proposed that human behavior is a response to the stimuli present in the
environment. The development of behaviorism as a school of thought in psychology has been
attributed to the theoretical and empirical works of J.B. Watson and B.F. Skinner (Roediger,
2004). The Gestalt perspective of human psychology was primarily concerned with how humans
perceive their environment as a whole instead of in parts.
This was a time of intellectual excitement and opportunity for the academics who were
ambitious and optimistic about the extent to which machines could be pushed. Herbert Simon
has often been quoted as having predicted in 1957 that a digital computer would become a chess
champion in the next 10 years (Russell & Norvig, 2015; Wall, 1997). Although his prediction did
not come true until 40 years later when IBM’s Deep Blue defeated Garry Kasparov in a chess
game in 1996, it demonstrated the hope and promise researchers had in AI’s potential.
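To make the symbolic, rule-based problem solving described above more concrete, the hypothetical Python sketch below searches for a sequence of rewrite-rule applications that turns one symbolic expression into another. The rules, expressions, and breadth-first search are illustrative assumptions only; GPS itself relied on means-ends analysis (difference reduction) rather than this simple exhaustive search.

```python
# Toy rule-application search in the spirit of GPS (illustrative only; not Newell & Simon's code).
# It looks for a chain of rewrite rules that transforms a start expression into a goal expression.
from collections import deque

# Hypothetical rewrite rules: each symbolic expression maps to the expressions it may become.
RULES = {
    "A & B": ["B & A", "A"],   # commute; simplify to the left conjunct
    "B & A": ["A & B", "B"],   # commute back; simplify to the right conjunct
    "A": ["A | C"],            # weaken A to a disjunction
}

def transform(start: str, goal: str) -> list[str]:
    """Breadth-first search for a sequence of expressions leading from start to goal."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        current = path[-1]
        if current == goal:
            return path                      # chain of expressions, start to goal
        for nxt in RULES.get(current, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []                                # no derivation found with these rules

print(transform("A & B", "A | C"))  # -> ['A & B', 'A', 'A | C']
```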
In parallel to designing ‘thinking’ and ‘intelligent’ machines, another group of
researchers in the 1950s and 60s had been developing speech recognition systems, i.e.
investigating how humans could talk to machines. The motivations that guided the development
of machines that could listen and respond to human communication were not only to increase the
utility and practical applications of machines, but to also give humans more power over the
machines. [6]

The development of limited capability voice recognition systems has made it possible for
the first time for humans to “talk” information directly into a computer, with no
intermediate keying or handwritten steps involved, or to control mechanical systems with
voice commands. Input to machines is simplified since the operator provides instructions
in his natural language. The machine, therefore, adapts to the requirements of the human
and greatly simplifies the task of man-machine communications (Martin, 1976, p. 487).

[6] These developments were a product of some early works in machine translation formulated
during code-breaking procedures in WWII. They also helped with furthering natural language
processing (NLP) techniques used today in AI research and applications (Liddy, 2001).
One of the first attempts in this area was by Davis, Biddulph, and Balashek, who created an
electronic circuit that could automatically recognize “...telephone-quality digits spoken at a
normal speed by a single individual” (1952, p. 637). This circuit would convert voice signals to
frequencies that were mapped on to the circuit. Soon after, more electronic circuitry was
developed which could recognize human voices (Dudley & Balashek, 1958; Denes and
Mathews, 1960). A notable advance, however, did not occur until 1961, when William C. Dersch
presented the IBM Shoebox--a device that could convert spoken words into written output in real
time. The Shoebox recognized digits from zero to nine, and mathematical commands such as
plus, minus, total, sum, and sum total (Dersch 1969; IBM Archives: IBM Shoebox, n.d.;
Roemmele, 2019). It could also perform calculations as verbally dictated by a human user.
Dersch gave a public demonstration of how the Shoebox could work at a technology fair in 1962
(Roemmele, 2019):
Dersch: “Six”
IBM Shoebox PRINTS: 6
Dersch: “Seven”
IBM Shoebox PRINTS: 7
Dersch: “Eight”
IBM Shoebox PRINTS: 8
Dersch: “Plus”
IBM Shoebox PRINTS: +
The IBM Shoebox was the first device which successfully showed that it was possible to
build a machine which a user could communicate with to perform some function. This was an
incredible achievement as it showed how far the technology had come along since WWII.
Although these machines provided much-needed advancements in the area of speech
recognition systems, they were seriously limited in terms of their memory as they could only
understand a few words. Much of the work in speech recognition systems was motivated by how
it could be applied to practical and military settings. In the 1970s, a machine called Harpy was
developed by researchers at Carnegie Mellon University. Harpy had a larger memory than the
IBM Shoebox and was able to recognize complete sentences in addition to words. Harpy was a
product of DARPA’s Speech Understanding Research (SUR) program whose aim was to create
advanced machines which could understand the human voice (Moskvitch, 2017).
Human communication with machines moved into a more daring territory when Joseph
Weizenbaum created ELIZA, a computer program he claimed could “...make natural language
communication with a human possible” (1966, p. 47). ELIZA required that an individual
initiate communication with her by typing a statement (which may be a command, query,
greeting, or proposition) to which she would also respond back in writing. Weizenbaum
programmed ELIZA to communicate by using a system of language-based rules and if-then logic
that would manipulate a user’s sentences or words and transform them into a response by ELIZA.
ELIZA was thus automated only to respond to a user and, therefore, could not itself sustain an
interaction--a process which requires natural turn-taking between two or more communicators
(see Sacks, Schegloff & Jefferson, 1974). A dialogue between a human and ELIZA
could only be established if the human user continued to engage with ELIZA (Figure 2.1).
Weizenbaum tested ELIZA with students at MIT, and to his surprise [7], many students believed
that they were communicating with a human instead of a computer program. This, Weizenbaum
suggested, was an indication that ELIZA had passed the Turing Test (1966).
Figure 2.1 An example of a user’s interaction with ELIZA.
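As a rough illustration of the kind of language-based, if-then transformation rules described above, consider the hypothetical Python sketch below. It is not Weizenbaum's implementation (ELIZA was written in the MAD-SLIP language); the keyword patterns, pronoun table, and fallback reply are assumptions made only to show how a user's own words can be manipulated into a reply.

```python
# Minimal ELIZA-style responder (a hypothetical sketch, not Weizenbaum's original program).
# It applies keyword rules and pronoun swaps to turn a user's statement into a reply.
import re

# Swap first- and second-person words so the reflected fragment reads naturally.
PRONOUN_SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

# Each rule pairs a keyword pattern (with one capture group) with a response template.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap pronouns in the fragment captured from the user's sentence."""
    return " ".join(PRONOUN_SWAPS.get(word.lower(), word) for word in fragment.split())

def respond(statement: str) -> str:
    """Transform the user's statement with the first matching rule, else prompt them to go on."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # fallback keeps the human doing the work of the conversation

print(respond("I am unhappy about my job"))  # -> "How long have you been unhappy about your job?"
```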
Since 1952 and in the years that followed, much of the research in developing speech,
text, or voice recognition systems was funded by the US government and incorporated members
of both academia and the industry. WWII had taught major decision-makers a big lesson:
machines are as important as humans in winning a war, if not more. It was realized early on that
speech recognition systems would be of tremendous importance from not only militaristic
perspectives, but also in various contexts such as air traffic control, physical disability aids, and
data entry (Martin, 1976).
[7] Weizenbaum has been said to have become upset with users’ inability to distinguish between a
computer program and a human. He eventually quit the field of AI, which he had helped create,
to become its staunch critic (see Weizenbaum, 1976). In a fascinating interview, he claimed that
“...the computer was a child of the military…” while describing instances in the history of
computing (see ben-Aaron, 1985).
Although ELIZA, the IBM Shoebox, and Harpy were primarily developed to show that it
was possible for humans to communicate with machines, they also appeared to provide some
assistance to their users. [8] For instance, ELIZA was modelled to represent a Rogerian
psychotherapist whose function was to engage with a user as a counselor. This function reflected
ELIZA’s pseudo capability to help a user. IBM Shoebox and Harpy were programmed to do
simple mathematical calculations which indicated that machines could help humans by
performing tasks and saving time. These successes gave hope to the idea that communicating
machines could prove to be useful if they actually assisted humans. It was hypothesized that
humans would only need to use their voice to engage with the machine, which would perform a
task for them and thus ultimately save them physical effort and increase overall performance
(Martin, 1976). [9] By the end of the 1970s, it became clear that the scientific enterprise which
emerged in the wake of the two world wars had its sights set on building machines which could
not only think and interact, but also assist.

[8] Not all communicative software was designed to give the impression that it could be of any
assistance to users. For instance, PARRY, a program similar to ELIZA, was created to act as a
paranoid schizophrenic.

[9] In the years that followed, more research began to formally address technical and theoretical
issues pertaining to “human-machine communication” and hence, a sub-field which combined
engineering and computer science emerged (see Oberquelle et al., 1983; Suchman, 1987).

1980–2010: The Rise of the ‘Assistant’

The 1980s were characterized by attempts by scientists and the technology industry to
explicitly develop and implement assistive software that was human-like in both form and
function. In 1987, John Sculley, then-CEO of Apple, presented the concept of a human-like
intelligent agent called the Knowledge Navigator at a conference (Empson, 2011). The
Knowledge Navigator was depicted to reside in an electronic tablet and designed to look like a
butler-type assistant in service to a university professor. It had a voice through which an
interaction could be established between a user and itself. This concept was put forth in a video
that was released by Apple where the Knowledge Navigator was shown to be an advanced, high-
functioning, intelligent agent that could search databases and information at the request of a user.
An excerpt of the interaction between the Knowledge Navigator and the professor is presented
below (from rkarena, 2007):
Professor walks in and turns on an electronic tablet.
Knowledge Navigator: “You have three messages. Your research team in Guatemala just
checking in. Robert Jordan, a second semester junior requesting a second extension on
his term paper, and your mother reminding you of your father’s…”
Professor: “...surprise birthday party next Sunday.”
The screen shows a schedule list.
Knowledge Navigator: “You have a faculty lunch at twelve o’clock. You have to take
Cathy to the airport by two. You have a lecture at 4.15 on deforestation in the Amazon
rainforest.”
Professor: “Right. Let me see the lecture notes from last semester.”
The screen shows lecture notes.
Professor: “No, that’s not enough. I need to review all the recent literature. Pull up all the
recent literature I haven’t read yet.”
Knowledge Navigator: “Journal articles only?”
Professor: “Hmm, fine.”
The Knowledge Navigator was only a concept at the time, yet it presented a vision of the
future of intelligent machines. Around the same time, the term ‘intelligent assistants’ began to
appear in a series of patents and research literature for the first time to refer to software agents
which facilitated coding or software development processes (Kaiser & Gellen, 1987; Tou et. al.,
1982). These intelligent 'assistants' were not assistants in the literal sense of the word, which
refers to a 'person' (human being) who helps or supports somebody in their job ("Assistant",
n.d.). Instead, these intelligent assistants were highly specialized computer programs that
helped software developers code better by providing instructions or running queries against a
database. In simple terms, they could be described as software for software.
Soon, “intelligent assistants” and related software were being developed for applied and
specialized but non-commercial settings (e.g. health, finance, office work, etc.) to help
professionals and users with tasks that used computers or electronics (Kaiser & Gellen, 1987;
Huff & Lesser, 1987; Gelman et. al., 1988; Scott et al., 1987; Tou et. al., 1982). These
developments were in line with the original purpose of developing thinking machines--
maximizing machine utility, reducing human cognitive and physical efforts, and increasing
performance. Many intelligent assistants and specialized software were primarily associated with
the healthcare sector where applications such as GUARDIAN (Hayes-Roth et. al., 1989) for
ventilation, and Patient Advocate for helping diabetics manage their health symptoms (Miksch et
al., 1997) were tested amongst others.
The 1980s were characterized by the introduction of the concept of 'intelligent assistants'.
However, this was also when a problematic conceptualization of what the term meant started to
emerge. The merger of the word 'assistant' with 'intelligent' unwittingly attributed various human-
like features and qualities, such as the ability to interact, think, reason, and assist, to highly
specialized and task-specific computer programs. For instance, some of this software, such as
Patient Advocate (Miksch et al., 1997), was labelled as intelligent assistants, yet it could neither
interact nor use intelligence from an AI perspective. Instead, it was
programmed using rule-based scripts, also known as if-then-else logic protocols, which are not
reflective of AI. Concurrently, some "intelligent assistants" incorporated the features of
assistance and AI, but they lacked any ability to interact with the users. A good example in this
regard was an intelligent assistant [10] called MailCat, developed by Segal & Kephart of IBM
Research Laboratories (1999). MailCat was an organizing system that was able to learn from
users' data and classify messages and mails using a text classifier. MailCat could make
correct predictions about the types of messages a user received and assisted them with
automatically organizing these messages into appropriate repositories or folders; however, it
could not specifically 'interact' or 'talk' with the user.
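As an illustration of the difference between rule-based scripts and a system that learns from users' data, the following minimal Python sketch mimics the general idea behind a MailCat-style text classifier. The folder names, training messages, and function names are hypothetical and greatly simplified relative to the actual MailCat system.

    from collections import Counter, defaultdict

    def train(labeled_messages):
        # Learn word counts per folder from messages the user has already filed.
        model = defaultdict(Counter)
        for folder, text in labeled_messages:
            model[folder].update(text.lower().split())
        return model

    def predict_folder(model, text):
        # Score each folder by how familiar the new message's words are and pick the best.
        words = text.lower().split()
        scores = {folder: sum(counts[w] for w in words) for folder, counts in model.items()}
        return max(scores, key=scores.get)

    model = train([
        ("meetings", "agenda for the project meeting on monday"),
        ("invoices", "invoice attached please process the payment"),
    ])
    print(predict_folder(model, "reminder about monday meeting agenda"))  # -> "meetings"

Unlike an if-then-else script, the behavior here is driven by the user's own data, so predictions change as more messages are filed; note, however, that nothing in this sketch can interact or talk with the user.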
The first commercial product which was specifically labelled as an "assistant" was made
available in 1993, when Apple commercially released a series of mobile hand-held devices called
'Personal Digital Assistant' (PDA) under its Newton Project [11] (Honan, 2013). A PDA could
organize a user's messages, calendars, alarms, and reminders. It also had the ability to recognize a
user's handwriting: the device came with a pen that could be used to write on its screen, and the
PDA would then convert the handwriting into digital text. Patents for PDAs and related
applications were also filed by Palm Inc. (see Evers et al., 2000), Nokia (see Siitonen & Ronkka,
1997), Motorola (Nagel & Seni, 2007), and Hewlett-Packard (see Curans & Bertani, 2004).
PDAs remained popular in the 1990s and early 2000s and were primarily advertised to help users
with the management of their contact lists, messages, etc.
[10] Also referred to as an "intelligent personal assistant" by its creators in the original paper.
[11] Apple discontinued the production of PDAs when Steve Jobs resumed the leadership of the company in 1998.
Although PDAs were able to help
users with basic tasks, they still lacked the ability to communicate with users, i.e. exchange
direct verbal or non-verbal messages.
The nature of what it meant for an application to be an ‘assistant’ changed radically in
1997 when Microsoft introduced Clippy, an ‘office assistant’, in their MS Office environment.
Clippy was the resident assistant available in the MS Word environment to help users with
various functions of the software. It was also programmed to understand user input via keywords
and give appropriate suggestions or recommendations. A user could type in their query, and
Clippy would give an answer or take the user to the appropriate page. Clippy would also interject
and give recommendations to users even when they did not ask for its help [12].
Clippy formally introduced human-ness to assistive technology, as it was designed to
look like a paperclip with eyes and distinct dark brows. It was designed in collaboration with
Clifford Nass, one of the original contributors of the influential Computers as Social
Actors (CASA) hypothesis (see Nass et al., 1994; Reeves & Nass, 1996), and a team of
psychologists at Stanford. The CASA hypothesis argued that humans apply social rules to
computers. This effect was argued to be amplified especially when a computer appeared to have
some human-like features or effects. In line with the CASA hypothesis, experiments with
Stanford undergraduates showed Clippy to be relatable and likeable. However, as
part of the MS Office package, Clippy became a subject of much derision and dislike among users.
It was discontinued not long after, with the release of Windows XP in 2001.
[12] In 2018, Microsoft tried to resurrect Clippy, but it did not receive a warm welcome, and the project was abandoned (Schlosser, 2019). Moral of the story: Stanford undergraduate students may not be your best sample of research participants.
Although Clippy was disliked by many users and was not driven by AI, its introduction to
the market of ‘assistants’ was revolutionary. The word ‘assistant’ evokes perceptions of a
‘person’ or a human who can interact, help with tasks, and display intelligence. By adding a
touch of human-ness, Clippy brought a literal element to the concept of 'assistant' when
applied to technology. It showed that it might just be possible for a virtual entity to have a
persona of sorts.
The technological landscape that emerged after WWII was readying to come full circle. It
had begun in the 1950s with the conceptualization and development of thinking machines, followed by
ones which could talk. The 1980s emerged as the decade where machines were developed to explicitly assist
humans. Efforts to improve a machine’s ability to think, interact and assist continued and became
more sophisticated over the years that followed (see Barwise & Cooper, 1981; Bourbakis &
Novraki, 2001; Feigenbaum, 1977; Simon, 1996). Despite these developments, the technology
was still far from successfully deploying a machine which could think, interact, and assist at a
commercial scale. It took another ten years after the turn of the century for everyday users to
have an intelligent assistant literally in their pockets.
2010–present [13]: Interactive, Assistive, and AI-enabled Machines
[13] Present indicates the time of writing this chapter.
As mentioned earlier, military funding and government agencies have played a crucial role in
the development of intelligent machines since the end of WWII. From 2003 to 2008, DARPA
funded university researchers via a program called Cognitive Assistant that Learns and
Organizes (CALO). This program produced an extensive amount of research into the
development of algorithmic procedures (e.g. Chaudhari et al., 2006; Zimmerman et al., 2006)
and intelligent assistants (Myers & Yorke-Smith, 2005) that could help with a variety of tasks
such as scheduling and time management (Berry et al., 2006). Although there were many
beneficiaries of these DARPA-funded projects, Stanford Research Institute (SRI) spearheaded
the CALO project and led the many developments in AI research. In 2007, three engineers
associated with SRI, Adam Cheyer, Dag Kittlaus, and Tom Gruber, developed and later released
a voice-activated application called Siri on Apple’s application store. Steve Jobs, then-CEO of
Apple, bought the software from its developers and released it as part of the iPhone 4S's
operating system (OS) in 2011 [14]. Therefore, arguably, the crown for making the world's first
intelligent assistant, i.e. interactive, assistive, and AI-enabled technology, commercially [15]
available goes to Apple. It is not an exaggeration to say that Siri was a game-changer. It had
brought the dream of thinking, interacting, and assistive machines to fruition and to the members
of the public. Siri’s success in the market was also driven by the layer of human-ness it had been
given, as it used a female voice [16] to communicate with its users. As it stands today, Siri is a
voice- and text-ready, AI-enabled application that can communicate with its users and perform
various tasks for them.
The introduction of Siri in the commercial market provoked intense competition where
major technology companies such as Amazon, Google, and Microsoft raced to deploy and
develop their own versions of intelligent assistants across various interfaces and devices. Some
examples include Amazon’s ‘Alexa’ which was commercially released in 2015 as a part of its
smart speaker series called Amazon Echo. Soon after, Google released a 'voice assistant'
called Google Home in 2016, which is also part of its smart speaker devices.
[14] Cheyer, Kittlaus, and Gruber made millions from the sale. They moved on to build another company called Viv Labs, which now works with Apple's competitor Samsung to build its intelligent assistant Bixby (Panzarino, 2016). Funny how life works.
[15] Non-commercial interactive, assistive, and AI-enabled technology was developed earlier than the release of Siri (see Bourbakis & Kavaraki, 2001).
[16] The current version of Siri allows users to change the dialect and the gender of the voice. See https://support.apple.com/en-us/HT208316
New types of
commercially available intelligent assistants such as Amazon Echo and Google Home offer
new forms and functions, as they were developed to be placed in physical spaces and are
thus accessible outside of a smartphone. These intelligent assistants are conversational in nature,
meaning they can understand verbal commands or questions. This allows users to delegate
assignments to them without entirely disengaging from their current task (Luger & Sellen, 2016;
Goksel & Mutlu, 2016; López, Quesada, & Guerrero, 2018). Recent estimates show that the
market for intelligent assistants will reach $25.63 billion by 2025 and will continue to expand
thereafter (Global Intelligent Virtual Assistant (IVA) Market 2019-2025, 2019).
Intelligent assistants are not limited to the commercial market. Just like their original
conceptualization in the 1980s, they are increasingly designed to cater to specific needs and
niche markets such as banking (Cora, n.d.) and video games (Butcher, 2018), or to specific contexts
such as the space station (CIMON brings AI to the International Space Station, n.d.).
What’s in a Name? The Many Names of that Technology
Since the introduction of commercially available intelligent assistants, their varying
applications, and successes, there has been an uptick in research and reporting on this technology
and therefore, more terms have emerged to refer to them. For instance, they have been referred to
as “intelligent personal assistant” (Cowan et al., 2017; Hauswald et. al., 2015); ‘virtual assistant’
(Gruber, et. al., 2018); ‘intelligent virtual assistant’ (Kim et. al., 2020) and ‘intelligent digital
assistant' (Krupansky, 2017), to name a few. The word 'assistant' is clearly the requisite term,
used in combination with "intelligent", "digital", "personal," and/or
"virtual". Many of these terms can be misleading in their own ways. For instance, the use of the
word ‘personal’ suggests that the product is for one person. However, intelligent assistants such
as Alexa may be deployed for team-based work (see Shaikh & Cruz, 2020; Winkler et. al.,
2019). The term virtual assistant to describe this technology is not entirely appropriate or always
useful simply because humans who perform tasks remotely can also be called virtual assistants
(Weiler, 2019). Interestingly, there is a lack of consistency on how this technology can be
referred to in the industry. For instance, Cortana, Google Home, and Siri are referred to as
personal productivity assistant, voice assistant, and intelligent assistant by Microsoft, Google,
and Apple respectively.
It is important to note here that terms such as Intelligent Personal Assistant and
Intelligent Assistant are still used in the traditional sense, i.e. to describe software that mimics an
AI-enabled system but is not actually driven by AI. For instance, many commercially deployed
chatbots use rule-based programming, i.e. they communicate by using if-then logic instead of AI,
and are also incorrectly referred to as intelligent assistants. The word automated has also been
used to describe the working of an intelligent assistant. Automation simply refers to
processes which remove or limit the need for human interference in performing actions or doing tasks.
Therefore, some automated processes may be AI-enabled, but not all. For example, an Automated
Teller Machine (ATM) automates the process of withdrawing cash, i.e. removes any need for
human interference. However, it does not need to use AI to perform this function. Instead, it is
designed using rule-based logic, i.e. if-else decision trees. The use of the word "digital" is also
problematic when referring to intelligent assistants because, in its literal and figurative senses, it
refers to: a. the binary system of storing data, and b. various types of computer technologies and
electronics such as alarm clocks, calculators, and weight scales; it therefore does not always mean
that a technology is enabled by AI.
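A minimal sketch of the if-else decision-tree logic mentioned above may help make the automation-versus-AI distinction concrete; the amounts, messages, and function name below are hypothetical.

    def withdraw(balance, amount):
        # A fixed, hand-written decision tree: every case is anticipated by the designer
        # and nothing is learned from data, so this is automated but not AI-enabled.
        if amount <= 0:
            return "Invalid amount."
        elif amount > balance:
            return "Insufficient funds."
        elif amount % 20 != 0:
            return "Amount must be a multiple of 20."
        else:
            return f"Dispensing {amount}. New balance: {balance - amount}."

    print(withdraw(100, 60))  # -> "Dispensing 60. New balance: 40."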
With the proliferation of assistive, interactive, and/or AI-enabled technology in the
commercial market and research, there have also been attempts to define them. While referring
to applications such as Siri, Hauswald et al. describe an "intelligent personal assistant" as: "...an
application that uses inputs such as the user's voice, vision (images), and contextual information
to provide assistance by answering questions in natural language, making recommendations, and
performing actions" (2015, p. 223). Other definitions include: "... an implementation of an
intelligent social agent that assists users in operating a computer device and using application
programs on a computing device” (Gong, 2003, para. 1). Although these definitions attempt to
refer to a technology that can interact with a user, assist them, and use an intelligent mechanism,
i.e. AI, to function; they fall short on several fronts. For instance, Hauswald et al.'s definition is
limited to the extent that it refers to an assistant’s ability to ‘answer questions’. However,
intelligent assistants may be able to initiate conversations and ask questions. For instance, an
intelligent assistant can say: “How may I help you?” and therefore show the capacity to also ask
questions or initiate a dialogue with a user. Gong’s definition is reflective of the type of
intelligent assistants originally and still developed for computing purposes and therefore limits
the function of an intelligent machine to helping a user with a computing device. Yet, intelligent
assistants can be applied to help with a variety of tasks that exist outside their hosting computing
device. These may include making phone calls to a real person, booking flights (i.e. accessing another
computing system), or searching for information.
The Intelligent Assistant: Definition and Features
It is not an exaggeration to say that the past seventy years have revolutionized the
concept and meaning of machines. Theoretical and practical advancements in the areas of AI,
computing, hardware, etc. have made an enormous impact on humans (and nature). We require a
framework that can differentiate intelligent assistants from other forms of technology. To deal
with the complexity surrounding the terminology of this specific technology, it is argued here
that the term intelligent assistant should be used by researchers and developers to refer to any
technology that is interactive, assistive and enabled by AI. Intelligent assistant is a concise and
simple term that is not limited by the primary modality of technology (e.g. voice) or its
ownership (e.g. personal); and therefore, it can be used and applied as an overarching term across
various settings.
To deal with issues with defining an intelligent assistant, it is proposed that the definition
of this specific technology should center on its constitutive features. Therefore, an intelligent
assistant is defined as any application that uses artificial intelligence and can interact with
user(s) via natural language, which may be combined with one or more communicative and
sensory modalities (e.g. sight and touch), to aid and/or collaborate. The above definition
synthesizes decades-long research and development that has gone into creating thinking,
interacting, and assistive machines. It reflects the three constitutive features of an intelligent
assistant: AI, interaction, and assistance. In the following paragraphs, each of these features is
described in detail.
Artificial Intelligence
The first criterion for a technology to be characterized as an intelligent assistant is that it
must be enabled by AI. Since the development of GPS in 1956, the field of AI has come a long
way and is consistently developing new techniques that can be applied across various domains.
The theoretical and empirical developments have led to a variety of AI definitions [17] that have
conceived of it too broadly or too narrowly depending on the context (see Russell & Norvig, 2015).
[17] See the report by the Congressional Research Service (2019): https://fas.org/sgp/crs/natsec/R45178.pdf
However, for the intents and purposes of this chapter and the dissertation, AI is
conceptualized as the understanding and building of intelligent agents which can receive
percepts [18] from their environment(s), adapt, act rationally, and make changes to their
environments (Russell & Norvig, 2015). In line with this conceptualization, an AI-enabled
technology can be thought of as an agent which is capable of learning from its environment and
acting rationally.
The concept of what constitutes an ‘environment’ can be confusing. In simple terms, an
environment is the surrounding or situation in which an agent is present, and therefore something
that exists outside of it. An environment provides cues and data for an agent to understand,
navigate, and change it (Russell & Norvig, 2015). Learning from an environment can be thought
of as a continual assessment of data which may be available as text, images, sounds, numbers,
etc. Various branches of AI such as Natural Language Processing (NLP), pattern and image
recognition, statistical learning, etc. are used to process this data and create an intelligent system.
Intelligent assistants by definition need to interact with users to accomplish their goals, and
therefore NLP--which refers to a host of techniques that process large corpora of linguistic data
to achieve human-like communication (Liddy, 2001)--is used in conjunction with other AI
branches.
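A minimal, toy Python sketch of the percept-act cycle implied by this agent view of AI is given below. The thermostat setting, the environment dictionary, and the method names are illustrative assumptions rather than any particular intelligent assistant's architecture.

    class ThermostatAgent:
        def __init__(self, target=21.0):
            self.target = target
            self.percept_history = []  # data the agent accumulates from its environment

        def perceive(self, environment):
            # A percept is an input the agent receives from its environment.
            reading = environment["temperature"]
            self.percept_history.append(reading)
            return reading

        def act(self, reading):
            # Act toward the goal; the chosen action will change the environment.
            return "heat" if reading < self.target else "idle"

    environment = {"temperature": 18.5}
    agent = ThermostatAgent()
    action = agent.act(agent.perceive(environment))
    if action == "heat":
        environment["temperature"] += 1.0  # the agent's action makes a change to its environment
    print(action, environment["temperature"])  # -> heat 19.5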
Some intelligent assistants may also have the capacity to customize themselves
according to a user’s preferences, needs and choices. For instance, Apple claims that: “Siri does
more than ever. Even before you ask." [19] (Siri does more than ever, n.d.), which implies that Siri
can not only do what is asked or requested, but also predict what would be needed by a user.
[18] A percept can be described as an input an agent receives from its environment. The term is commonly used in the field of AI (see Jepson & Richards, 1993).
[19] See a list of things Siri can do according to Apple: https://www.apple.com/siri/
This reflects how the application continually learns from user data to be able to predict what they
will need in the future, and thus is in line with how Russell & Norvig have conceptualized AI: an
agent that can learn from the environment and make changes to it (2015).
Interaction
The second criterion or constitutive feature for a technology to qualify as an intelligent
assistant is its ability to interact and communicate. This means that it must be able to respond to
communication and/or initiate communication with another human and/or machine via verbal,
visual, auditory, written, and/or haptic means. It is important here to unpack what it means for an
intelligent assistant to have the ability to interact. Interaction or communication in this context
can be described as a process or event where a user can engage with a machine for at least one
turn in a natural language understood and spoken by the user (see Sacks, Schegloff, & Jefferson,
1974, for a description on human turn-taking). Natural language refers to any language that has
evolved over time and is used by humans to communicate with each other (Pinker & Bloom,
1990). For instance, English and Urdu are natural languages, whereas JavaScript is not: it is an
artificial language specifically designed for computers.
The new generation of intelligent assistants, such as Amazon’s Alexa and Microsoft’s
Cortana primarily use voice as a communication modality. For instance, to communicate with
Amazon Echo, a user must say the wake word “Alexa” which turns on a blue light on the device
indicating that it is ready to receive a user’s message. However, interaction with intelligent
assistants does not need to be limited to voice. A user should be able to use non-verbal means to
establish interaction with an intelligent assistant. These include text, vision, and biometric
variables (motion sensor technologies, fingerprints, etc.). For instance, a human user can interact
with Google Assistant using both voice and text on an Android smartphone or via a web
browser. Similarly, Bixby, the intelligent assistant provided by Samsung across its devices, can
also use camera vision to recognize objects and QR [20] codes in addition to voice. This suggests
that while all intelligent assistants can interact, the number and type of communication
modalities will vary from one intelligent assistant to another (see Kepuska & Bohuta, 2018). For
instance, Samsung's Bixby offers three modes of communication: verbal, written, and
visual, but Amazon's Alexa is primarily a voice-based intelligent assistant [21].
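The wake-word mechanism described above can be sketched in a few lines of Python; the wake word, the parsing, and the handler below are hypothetical simplifications of how a device such as Amazon Echo gates its interaction.

    WAKE_WORD = "alexa"

    def handle_utterance(utterance):
        # Ignore speech that is not addressed to the assistant.
        words = utterance.lower().split()
        if not words or words[0] != WAKE_WORD:
            return None
        # Everything after the wake word is treated as the user's turn (command or question).
        command = " ".join(words[1:])
        return f"(light turns blue) received: {command}"

    print(handle_utterance("Alexa what is the weather today"))  # -> "(light turns blue) received: what is the weather today"
    print(handle_utterance("what is the weather today"))        # -> None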
In line with the motivations that guided the development of voice recognition systems in
the 1960s and 70s, intelligent assistants are designed to reduce the overall effort a user may have
to put into a task and/or increase a user's capacity to do more. For instance, IBM, which created
CIMON--a robot for the International Space Station--described it as (CIMON brings AI to the
space station, n.d., para 7):
Your hands are busy, and you have a question about the project you’re working on.
Normally, you would have to float over to your laptop to get the answer, then back to the
experiment station. With CIMON, you can just say, ‘CIMON, what’s the next step?’ and
you don’t have to interrupt your workflow.
Assistance
For a technology to be characterized as an intelligent assistant, it must be endowed with
one or more assistive abilities i.e. it must be able to provide some sort of assistance to its user(s).
This means that it should be able to do and/or collaborate on tasks, give recommendations,
provide cognitive, emotional, physical, and social support [22], search information, and conduct
assignments on the users' behalf.
[20] Abbreviated from Quick Response code. A QR code is a machine-readable matrix barcode that provides information (QR code, n.d.).
[21] A user may use Alexa's app on their smartphone to set alarms or reminders, which means that verbal contact is not always requisite. However, most of Alexa's interactive capability rests on verbal communication.
[22] They might provide spiritual support in the future. I will update my definition in a few years.
An intelligent assistant could perform anything from mundane tasks
such as setting alarms and playing music to more complex jobs such as calculating planetary
alignment. The ability to assist is a crucial aspect of designing, deploying, and identifying
intelligent assistants. This quality sets them apart from other types of technology that may appear
to be intelligent and interactive yet are devoid of any actual function(s) for a user.
It is important to note here that despite many breakthroughs in AI research, intelligent
assistants are still developed for singular and specialized purposes and require a lot of human
input [23]. This state of AI is often referred to as narrow AI. In such instances, an application is
unable to use its intelligence for any tasks or purposes that are outside its purview and therefore
serves a very limited purpose. For instance, Alexa is designed for specific tasks such as
setting alarms, ordering food, and telling the weather (amongst others). However, if a user were
to ask Alexa to explicate the difference between Thomas Hobbes's and Jean-Jacques Rousseau's
theses on human nature, she cannot be of any assistance [24].
The above discussion details three constitutive features that must be present in a
technology for it to be categorized as an intelligent assistant. These features allow us to
distinguish it from other technologies that may give the impression of being intelligent,
interactive, or assistive.
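The three constitutive features can be read as a simple checklist, sketched below in Python; the class and field names are illustrative only and do not represent an established taxonomy or measurement instrument.

    from dataclasses import dataclass

    @dataclass
    class Technology:
        name: str
        uses_ai: bool        # enabled by AI (learns from data rather than only if-then rules)
        can_interact: bool   # can manage at least one natural-language turn with a user
        can_assist: bool     # can perform or collaborate on tasks for the user

    def is_intelligent_assistant(t: Technology) -> bool:
        # All three features must be present; failing any one disqualifies the technology.
        return t.uses_ai and t.can_interact and t.can_assist

    rule_based_chatbot = Technology("booking chatbot", uses_ai=False, can_interact=True, can_assist=True)
    print(is_intelligent_assistant(rule_based_chatbot))  # -> False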
What is Not an Intelligent Assistant?
If an intelligent technology fails to pass any of the above requirements, it cannot be
categorized as an intelligent assistant. For instance, Microsoft released a Twitter bot called Tay
in March 2016. Tay was able to communicate with users and learn from data, but it was
unable to assist them; therefore, it could not be categorized as an intelligent assistant [25].
[23] This refers to the larger debate on weak vs. strong AI, which is beyond the scope of this chapter.
[24] I asked Alexa if she had something to say about the Hobbes vs. Rousseau debate on human nature. Unfortunately, she "does not have an opinion on that". This simply means that AI cannot do your philosophy homework just yet. There are other ways to complete such assignments, such as: using your own brain, copying someone else's homework, or paying virtual workers.
Additionally, various forms of social bots, chatbots, and automated customer care assistants (e.g.
some kinds of banking or airline booking systems) cannot be categorized as intelligent
assistants as they are pre-programmed to mimic human conversation and lack ‘dynamic’ learning
from data. In fact, the use of the term ‘intelligent’ to describe these technologies is misleading.
Forms of Intelligent Assistants
Intelligent assistants can come in various shapes and forms if they fulfill the criteria
presented above. Primarily speaking, intelligent assistants can be categorized as embodied or
non-embodied (virtual) types.
Embodied Intelligent Assistants
An embodied intelligent assistant (EIA) refers to an intelligent assistant that requires or
is hosted by a specifically designed physical device that could be stationary and/or mobile in
nature. Some commercial examples, such as Amazon Echo (aka Alexa) and
Google Home, are speakers that can be placed in a physical location. Other forms of
embodied intelligent assistants are robots or mobile devices (e.g. CIMON).
Having said that, the concept of 'embodiment' can be a source of confusion. Some
research has characterized intelligent assistants such as Amazon Echo as
"virtual" (Kim et al., 2020). This is primarily because intelligent assistants are characterized as
applications or software (see Hauswald et al., 2015) and therefore, the word 'virtual' is
considered appropriate as a descriptor. Confusion arises, however, when intelligent
assistants such as Amazon Echo are characterized as 'embodied', as this raises the question: how
can an intelligent assistant be embodied when it is virtual? To this, we argue that an intelligent
assistant is embodied if it is situated in a specifically designed physical device whose primary
purpose is to represent that intelligent assistant. Amazon Echo and Google Home are hosted in
specifically designed hardware; thus, in that context, they can be categorized as EIAs.
[25] Tay was shut down very quickly because it was flooded with racist messages, which its algorithm used to learn to send racist and derogatory tweets.
Disembodied Intelligent Assistants
Intelligent assistants that do not require a specific device to host them are disembodied
intelligent assistants (DIA). Some examples of DIAs include Siri and Cortana. These intelligent
assistants exist in laptops and/or cell phones whose primary purpose is not to merely host these
intelligent assistants. A cell phone or a laptop is multipurpose equipment, and an intelligent
assistant may be incorporated as one of its features. Therefore, intelligent assistants such as Siri
should be categorized as DIAs.
DIAs can be deployed across a variety of visual and virtual interfaces such as websites,
video games, and mobile applications to name a few (Følstad & Brandtzæg, 2017; Hill et al.,
2015; Hoy, 2018). Usually, intelligent assistants on virtual interfaces are represented via an
avatar, a symbol, or an image, and often come with a chat box or some buttons that allow them to
establish communication with a user. DIAs can also exist outside of visual interfaces. For
instance, they may be accessed via phone (e.g. banking or airline customer service) or just by
voice if they were incorporated into physical architecture, such as that of a building.
An interesting question arises as to the nature of intelligent assistants that are holographic.
An example of such technology is Azuma Hikari, a female intelligent assistant
developed by Gatebox, which exists as a hologram (Gatebox Inc., n.d.). Azuma resides in a
specific device and therefore can be categorized as an EIA [26]. An intelligent assistant could
potentially take both EIA and DIA forms if it were made available and accessible both on specifically
designed hardware and in virtual settings such as web browsers or applications. Thus, EIA and DIA
are not mutually exclusive forms for an intelligent assistant.
Conclusion
The purpose of this chapter was three-fold: a. to present a historical review of the
development of intelligent assistants, b. to define and describe intelligent assistants, and their
constitutive features, and c. to note their various forms. To this end, this chapter covered the
historical developments of the past 70 years that have led to the creation and proliferation of
intelligent assistants. We saw how the two world wars created the necessity of making advanced
machines that could be used against the enemy. For the next three decades after the end of World
War II (1950-1980), ideas on thinking and intelligent machines emerged, followed by extensive
theoretical and applied work in hopes of reducing human cognitive and physical efforts and
approximating human intelligence (Newell & Simon, 1961; Turing 1950, 1951). This era was also
characterized by developments in speech recognition systems which allowed human-machine
communication. By the end of the 1970s, technology seemed to have advanced enough to give
machines a chance of helping humans, and thus the concept of machines as 'assistants' was
born. In 2011, thinking, interacting, and assisting machines became a reality when Siri was
released to the commercial market. Intelligent assistants are now a distinct form of technology
that has three constitutive features: AI, interaction, and assistance. They can reside in specifically
designed hardware, or on a virtual interface.
[26] The line between EIA and DIA would blur if an intelligent assistant did not require a specific device and could exist as an independent hologram.
Seventy years after the theoretical conceptualization of 'thinking' machines by Alan Turing,
intelligent assistants are almost everywhere. They are in our phones, houses, offices,
browsers, and entertainment services. As a technology, they have quickly crossed the cultural,
geographical, and perhaps socioeconomic lines. Thus, our relationship with machines has entered
a new era where human-AI and human-machine relationships will only continue to evolve with
the changes in technology and contexts. Soon, intelligent assistants will only become more
intelligent. The current state of AI is weak, but it's slowly but surely moving into the direction of
strong AI. In December 2018, Google shared the results of an algorithm called AlphaGo in
Science (Silver et. al., 2018). Unlike its predecessors, AlphaGo taught itself chess, shogi
(Japanese chess), and Go. The focus here is on the self-learning component of this algorithm’s
nature. This is revolutionary in many ways including that algorithms will not need human
interference to help them learn. These developments will eventually be applied to building
intelligent assistants who will self-learn to assist, collaborate, think, and communicate which
would only expand the extent to which they can function like humans
27
. In many ways, this
technology will assume the role of cognition-of-sorts in our environments. Thus, the time is ripe
to dig deeper into how human collaboration and interaction with intelligent assistants will affect
behaviors and socio-psychological processes. To this end, this dissertation focuses on human-AI
collaboration as applied to intelligent assistants and how it affects prosocial and antisocial
information behaviors in various resource environments.
[27] Some evidence for this comes from Google, which debuted its Duplex technology that was shown to make an appointment with a human hairdresser using complex language and turn-taking. It also appeared to think on behalf of its user by going over some alternative appointment options with the hairdresser (Leviathan & Matias, 2018).
Chapter 3. Resource Environments, Intelligent Assistant, and Decision-Making: A Two-
Part Story
This project has two layers which build on top of each other. The first layer is a detailed
investigation into the effects of resource abundance and scarcity on prosocial and antisocial
behaviors as applied to information sharing. It details conceptual and empirical problems within
the prior literature to discuss why resource abundance and scarcity often tend to produce mixed
and contradictory findings on the effects of these conditions on prosocial and antisocial choices.
The core argument made in this part is that resource abundance and scarcity exist in personal and
environmental contexts, and when these contexts interact, they create a resource environment
which affects prosocial and antisocial behaviors. A resource environment is conceptualized as a
factor and two of its types, abundant and scarce, are investigated to test how they influence
prosocial and antisocial behaviors pertaining to information. Synthesizing theoretical
perspectives and findings from research in behavioral economics (e.g. Mani & Mullainathan,
2013; Prediger et al., 2014), and communication research (McCornack et. al., 2014; Levine,
2014), it is argued that abundant resource environments encourage information sharing, but
scarce environments increase information withholding.
The second layer focuses on embedding an intelligent assistant in abundant and scarce
resource environments to test how human interaction and collaboration with this technology can
affect human decision-making. The goal is to understand how an intelligent assistant can nudge
people to make prosocial and antisocial decisions pertaining to information, i.e. to withhold or
share information which can help or hurt recipients, and to see whether an
intelligent assistant can serve as an intervention to encourage prosocial behaviors in environments
which otherwise discourage them. This part uses research findings on algorithm appreciation (Logg et al.,
2019), intelligent assistants and time scarcity (Shaikh & Cruz, 2019), and human-robot
interaction (Robinette et al, 2017) to propose hypotheses.
The following sections expand upon the above-mentioned subjects and are divided into
two parts. Part I details literature on resource abundance and scarcity and develops the rationale
for the baseline study. Part II reflects on how an intelligent assistant can nudge people into taking
prosocial and antisocial actions pertaining to information within abundant and scarce
environments and presents hypotheses for study two.
Part 1. Rethinking How and When Resource Abundance and Scarcity Affect Prosocial and
Antisocial Behaviors
“Yet unless it be thoroughly [sic] ingrained in the mind, the whole economy of nature,
with every fact on distribution, rarity, abundance, extinction, and variation, will be dimly seen or
quite misunderstood” (Charles Darwin, 1958, p. 75).
Decision-making units such as individuals, groups, organizations, and societies at large
are consistently engaged in the exchange, management, ownership, and experience of various
resources (Foa, 1971; Foa & Foa, 1980; Kaplan et al., 2018; Smith, 2017). These resources may
occur in tangible forms, such as food, water, and money, and in abstract forms, such as
information, status, and time. Anything which can be
described as a resource varies in quantity and quality. Thus, arguably, there is not a single
society, organization, group, or individual that can claim immunity to resource scarcity,
abundance, or both.
In the past decade, psychological and behavioral research has looked into how resource
abundance and scarcity affects prosocial and antisocial behaviors (Gino & Pierce, 2009; Prediger
et al., 2014; Roux et al., 2015; Vardy & Atkinson, 2019; Yam et al., 2014). In this chapter, the
term prosocial behaviors is used to refer to a broad class of voluntary behaviors where
individuals and groups engage in activities to benefit others. Some examples of prosocial
behaviors are sharing, volunteering [28], being honest, and cooperating (Capraro & Cococcioni,
2015; Jones, 1991; Penner et al., 2015). Antisocial behaviors are often described as individual
and group actions that may hurt or harm others. These also include behaviors which lack
consideration for others or benefit oneself at the expense of others. Unethical behaviors, which
are defined as those considered "illegal or morally unacceptable to the larger community" (Jones,
1991, p. 367), can also fall under the larger domain of antisocial behaviors. Some examples
include hoarding, lying, stealing, cheating, etc. [29] (Batson & Powell, 2003; Jones, 1991; Griffin &
Lopez, 2005; Penner et al., 2005).
Interestingly, many of the studies investigating the effects of resource abundance and
scarcity on prosocial and antisocial behaviors often contradict each other. Some argue that
resource scarcity increases antisocial behaviors (Roux et al., 2015; Yam et al., 2014), while
others suggest that resource abundance may encourage the same (Gino & Pierce, 2009).
Additionally, while various types of prosocial and antisocial behaviors such as cheating (Sharma
et. al., 2014), theft (Yam et al., 2014), overstating performance (Gino & Pierce, 2009), and
monetary donations (Roux et al., 2015) have been studied within the context of resource
abundance and scarcity, there is very little research on how these factors influence prosocial and
antisocial behaviors that pertain to information. The importance of information cannot be
overstated, as it is a core ingredient of human experience (Gleick, 2011). People rely on
information from others to make decisions.
[28] It must be clarified here that what is not prosocial is not necessarily antisocial. For instance, if volunteering is classed as a prosocial behavior, not volunteering is not antisocial; it is just one less prosocial behavior. In a situation where volunteering is required from everyone, then not volunteering but benefiting from others' efforts is antisocial.
[29] These behaviors and their relationship to information are defined in detail later in the chapter.
The quality and quantity of information they receive
can have positive or negative consequences for them. Yet, we have not investigated how
resource abundance and scarcity affect how we share information to help or hurt others.
The purpose of this chapter is to address the above issues and understand the
contradictory findings on the effects of resource abundance and scarcity on prosocial and
antisocial behaviors. Additionally, the aim is also to fill the gap in research on how these factors
affect prosocial and antisocial information behaviors. To this end, we conduct a critical review of
research on resource abundance and scarcity to explain when and why these factors affect
prosocial and antisocial behaviors. The core thesis presented here is that resource abundance and
scarcity exist primarily in two contexts: a) environmental and b) personal. However, the
distinction between these contexts often remains unstated or theoretically ill-defined in the
literature. Additionally, methodological techniques often conflate these two contexts in research
designs which might be affecting the various findings. It is argued here that an interaction of
personal and environmental resource contexts yields complex environments which affect the
extent to which people behave in a prosocial manner. It is proposed here that an empirical
inquiry is required to test the effects of this interaction on prosocial and antisocial behaviors. The
focus of this dissertation is behaviors pertaining to information, i.e. withholding or sharing
information which can help or hurt a recipient.
The following section begins with a note on existing conceptualizations of resource
abundance and scarcity and how we can extend them to create distinctions between environmental
and personal resources. This is followed by an elaboration on the various theoretical and
empirical approaches that have been used to study the effects of resource abundance and scarcity
on prosocial and antisocial behaviors. The focus then shifts to developing the rationale and hypotheses
on prosocial information sharing as a product of abundant and scarce resource environments.
Defining Resource Abundance and Scarcity: The Environmental and Personal Resource
Contexts
Resource scarcity can be described as “sensing or observing a discrepancy between one's
current level of resources and a higher, more desirable reference point” (Cannon et al., 2019, p.
105). This definition suggests that for a singular entity--in the case of this project, an individual--
the perceptual or actual experience of a lack of resource can be thought of as resource scarcity.
While resource scarcity has been defined in multiple ways in the psychological, organizational,
and economic literatures (see Cannon et al., 2019, for a review; Sharma & Alter, 2012; Mehta &
Zhu, 2016; Mullainathan & Shafir, 2013), definitions of what constitutes resource abundance
are scarce. To fill this void, Cannon et al.'s (2019) definition of scarcity is used and extended to
conceptualize resource abundance as an instance where the current level of resource(s) has
exceeded a required or desired reference point [30]. For instance, if someone who wanted to make
$50,000 (reference point) [31] ends up earning $60,000, then we can describe this as an instance of
resource abundance because $60,000 exceeds their reference point of $50,000. However, if they
were to earn $40,000, then this is a situation of resource scarcity [32] because it is below the
reference point of $50,000.
[30] This definition assumes that there is no new reference point.
[31] While $50,000 is a concrete and identifiable reference point, it may not always be such. Reference points can be rather abstract, which means that it might be hard for decision-makers to state exactly what they are. The discussion of this phenomenon is beyond the scope of this project.
[32] Note that scarcity here does not mean that there is not enough to eat or live on. Here it is simply described as the discrepancy between a reference point and the current level of resources.
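The reference-point logic above can be expressed as a small sketch, using the chapter's $50,000 example; the function name is hypothetical, and the exact-match case (current level equal to the reference point) is an added assumption that the definitions above do not address.

    def personal_resource_context(current, reference_point):
        # Abundance: the current level of resources exceeds the reference point.
        if current > reference_point:
            return "personal resource abundance"
        # Scarcity: a discrepancy between the current level and a higher reference point.
        elif current < reference_point:
            return "personal resource scarcity"
        # Boundary case, not covered by the definitions above (assumption).
        return "at the reference point"

    print(personal_resource_context(60000, 50000))  # -> personal resource abundance
    print(personal_resource_context(40000, 50000))  # -> personal resource scarcity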
Prior research on the effects of resource abundance and scarcity on decision-making has
focused on how these conditions affect an individual or a decision-making entity (see Mani et.
al., 2013; Mullainathan & Shafir, 2013; Shah et. al., 2012; Yam et. al., 2014). For instance, in a
series of studies, Yam et. al. (2014) found that individuals who felt resource scarcity as a
function of being deprived of food for an extended time--i.e. when they were hungry--were more
likely to be dishonest on algebra and geography questionnaires to earn snacks. In this case, the
construct of interest is the experience of scarcity for a person and its effects on that person. Thus,
we can refer to this as personal resource scarcity (PS). It is important here to note that PS and its
flip side, personal resource abundance (PA), are contexts. This means that we conceptualize them
as instances or situations that pertain to resources for a person. While studies of PA and PS are
important to understand how resource scarcity affects an individual's prosocial behaviors or lack
thereof, they sometimes tell us only one side of the story.
People exist in environments. In addition to sensing or observing their own level of
resources, they also assess what is available in their environments. Their evaluations of resources
in an environment may come from visual, anecdotal, experiential, or informational cues. For
instance, let’s assume that a hungry person walked into a supermarket filled with various food
items. They can sense and observe that there is enough food in the store to meet and exceed their
current needs. In that case, even though that person is experiencing scarcity via hunger, the
environmental context has resource abundance. This shows that the experience of resource
abundance and scarcity differs by two contexts: the person and the environment. Therefore, we can
make a distinction between personal and environmental resource abundance and scarcity. While
the former has been defined using Cannon et al.'s (2019) definition presented above, we
need to provide a conceptual definition of what constitutes environmental resource abundance
(EA) and environmental resource scarcity (ES). EA can be described as a condition in which one
perceives, senses, or observes that the available resources in an environment are enough to meet
or exceed one’s (and/or others’) required level of resources. It logically follows that ES is a
condition where one perceives or observes that the available resources in an environment are not
enough to meet or exceed their (and/or others') required level of resources [33].
From a decision-maker’s point of view, environmental and personal resource contexts
often interact with each other and thus, have a relational quality to them. A combination of
environmental and personal resource contexts generates a resource environment. To illustrate
this further, let us use a scenario adapted from a study by Miyazaki et al. (2009). John needs to
buy a DVD [34] which he cannot find anywhere. He finds out that the DVD is in stock with an
online seller. John is unable to buy it because it is too expensive. Is this a condition of resource
abundance or scarcity? If we think about this problem in terms of how the effects of resource
abundance and scarcity are traditionally studied, then one might argue that John faces a condition
of scarcity. However, if we look at this from the environmental and personal perspectives described
above, then arguably, there are enough DVDs available in the environment for John to reach his
reference point i.e. one DVD. This is EA. However, John’s lack of the same DVD is a condition
of PS. Therefore, it is argued here that the above example reflects a combination of EA and PS.
This illustrates the interaction between environmental and personal resource contexts.
[33] While the evaluation of resource levels in an environment can be a product of visual cues, such as food in a supermarket or ostentatious displays of wealth (see Gino & Pierce, 2009), it can also come from comparing oneself with a referent group (see Adams, 1963), personal experiences, anecdotal evidence, objective or hard information, etc. A good example is everyday news stories about the job market. Based on information received from the media or friends, people can evaluate whether there are more or fewer jobs in the environment (say, their country, county, city, profession, etc.).
[34] Hardly anyone buys DVDs these days. The study is 11 years old.
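The combination of contexts can also be sketched as follows; the function names and thresholds are illustrative assumptions that simply restate the EA/ES and PA/PS definitions given above, applied to John's DVD example.

    def personal_context(current, reference_point):
        # PA when the current level exceeds the reference point, PS otherwise.
        return "PA" if current > reference_point else "PS"

    def environmental_context(available, required):
        # EA when the environment holds enough to meet or exceed the required level, ES otherwise.
        return "EA" if available >= required else "ES"

    def resource_environment(current, reference_point, available, required):
        # A resource environment is the pairing of the two contexts.
        return (environmental_context(available, required), personal_context(current, reference_point))

    # John needs one DVD (reference point), owns none, and the online seller has many in stock.
    print(resource_environment(current=0, reference_point=1, available=50, required=1))  # -> ('EA', 'PS')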
The distinction between environmental and personal resources has been made to help us
understand the psychological and behavioral literature on how resource abundance and scarcity
affect various types of prosocial and antisocial behaviors. As mentioned earlier, there are
opposing findings on the effects of resource abundance and scarcity on unethical behaviors
(Gino & Pierce, 2009; Prediger et al., 2014; Roux et al., 2015; Vardy & Atkinson, 2019; Yam et
al., 2014). The question of concern here is why do these contradictions occur, and how can we
conceptualize and explain them? The goal of this chapter is to develop a conceptual framework
that can address this issue by reviewing the literature from the perspective of environmental and
personal resource contexts. The following section presents a review of several related studies to
unpack and investigate how they frame and/or label resource abundance and scarcity, and if their
methodological techniques may have conflated the distinction between the environmental and
personal resource contexts.
Literature Review
In the past decade, researchers have studied the effects of resource abundance and
scarcity on human prosocial and antisocial behaviors. This research trajectory differs from other
types of investigations of prosocial and antisocial behaviors (e.g. the effects of gratitude)
because these studies explicitly mention or theorize resource abundance or scarcity and manipulate or
vary these factors in their research designs. To conduct a literature review of this research,
databases such as Google Scholar, JSTOR, and ProQuest were searched using combinations of the
following terms: "ethical", "unethical", "behaviors", "deprivation", "generous", "selfish",
"giving", "generosity", "deception", "poverty", "rich", "environment", "prosocial", "antisocial",
"resource", "abundance", and "scarcity". Additionally, searches were conducted within citations to
find relevant studies. The inclusion criteria were as follows:
a. The study must have explicitly manipulated, varied, or classified resource abundance
and/or scarcity (or a closely related construct that can be justified as such) as an
independent variable.
b. The study must have used one or more behavioral measures to investigate a prosocial
and/or antisocial decision or action as a dependent variable.
The above criteria were developed because behavioral measures of unethical or ethical
behaviors are generally considered more reliable and robust measures (Gino & Pierce, 2009)
since self-reports of such behaviors suffer from social desirability bias. Additionally, studies
which used self-reported measures to investigate prosocial and antisocial behaviors under resource
abundance and scarcity mostly asked participants single-item questions such as "how likely are
you to donate to this charity" (see Roux et al., 2015), which cannot be equated to a "behavior".
In the following sections, we will examine how the concept of resource abundance and
scarcity has been theoretically conceptualized and methodologically varied. The concepts of EA,
ES, PA, and PS presented above will be applied to these studies. The review is divided into lab
and field studies to give us a better sense of the theoretical and methodological variation in
existing research.
Lab Studies
The first study we discuss in this regard is by Gino and Pierce who investigated the
effects of environmental wealth on unethical behaviors (2009). They described abundant wealth
in an environment as: “...a large pool of visible money or resources that are either shared by
organizational members or possessed by individuals within an organization” (2009, p. 142). The
core argument that undergirded these studies was that abundant wealth in an environment--
especially in organizational settings--is more likely to increase unethical behaviors. They
described this phenomenon as the abundance effect. Using perspectives from equity theory
(Adams, 1963), Gino & Pierce suggested that wealthy environments may cause individuals to
think of the inequity and thus give rise to negative emotions such as envy and attempts at
retributive justice. These developments may give rise to unethical behaviors. To find evidence
for their hypotheses, they conducted a series of lab experiments. Participants were tasked to
arrange letters into words across eight rounds. If a participant created 12 words, then they would
receive $3. This rule would be applied to every round which meant that a participant could
potentially earn $24. At the end of the study, participants evaluated their own performance using
a Scrabble dictionary, and paid themselves. Unethical behavior was measured by the extent to
which participants overstated their performance on the task for personal gain i.e. earn money.
The abundance of wealth in an environment was manipulated by having the participants see huge
piles of cash on a table from which the experimenters paid compensation to them. This was the
treatment condition. In what is described as a “poor condition” (p. 144) or “environment of
scarcity” (p. 142), only “the cash necessary to pay the participants” was on the table (p. 144).
The authors found that participants were more likely to overstate their performance to gain extra
compensation in the environmental wealth condition. Gino & Pierce’s study was the first of its
kind that investigated the impact of abundant displays of wealth on unethical behaviors. These
studies offer us some interesting insights on both theoretical and methodological approaches to
understanding how the conceptualization and testing of resource abundance and scarcity affect
prosocial and antisocial behaviors.
First, the conditions which had enough cash for everyone were labelled as scarce or poor. As per the definition of ES provided above, if there is enough of a resource in an environment to meet the needs of participants, then this logically cannot be a state of scarcity. Scarcity is an experience where people can sense or observe that the environment does not have enough to meet or exceed their current level of resources (Cannon et al., 2019). This suggests that while the treatment group can be classed as EA, the comparison group did not face scarcity but something else. This is a labelling issue. Therefore, the findings attributed to "scarcity" conditions should be interpreted carefully, especially as they are compared with EA.
The second argument is concerned with the theoretical understanding of personal resource context as applied to this study. Each participant could have earned a maximum of $24 after eight rounds. Therefore, $24 could be interpreted as their reference point. Participants could be said to be experiencing a state of PS if they were unable to reach their reference point while doing the study (it could also be argued that the two different types of ERAs altered the participants' reference points and induced differing levels of PS, which suggests that what constitutes PS is unclear). Arguably, the treatment group which saw huge piles of cash experienced PS in EA. Therefore, the study potentially examined the joint effects of EA and PS on unethical behavior. The control or "scarce" group saw enough cash and reported less unethical behavior overall. Methodologically, their reference point was the same as that of the treatment group, i.e., $24. However, it can be theorized that the difference in the amount of cash may have altered reference points for each group. This is a more difficult issue for researchers to tackle given the constraints of lab settings, but still something to take note of. Overall, these studies are nuanced and may have tested more than just the effects of EA on unethical behavior. It must be said here that studies that explicitly mention or vary resource abundance in an experimental context remain rare to date, and thus this effort is much appreciated.
Yam et al. (2014) investigated the effects of resource scarcity on unethical behaviors using a series of studies that involved undergraduate students as well as managers in an organization. Yam et al. hypothesized that physiological deprivation, i.e., hunger--a condition in which the human body does not get food--is linked to unethical behavior for personal gain. Using Reinforcement Sensitivity Theory, the authors argued that behavioral responses to physiological deprivation occur to make decisions that will reduce or eliminate the said deprivation.
authors asked the participants to refrain from eating at least four hours prior to the study. During
the lab session, the participants were asked to answer some geography and algebra questions and
were told that for every correct answer, they would win free drinks and snacks. However, there
was also an option for ‘I don’t know’ which should have been selected for questions which really
did not have a correct response. Like the research design in the Gino and Pierce study (2009), the
hungry and thirsty participants could see an array of beverages and snacks in the same area
where they took the test. However, unlike the Gino and Pierce study, the quantity of snacks did
not vary by hungry/not hungry conditions i.e. all participants saw the same table of snacks. Yam
et al. (2014) found that hungry participants were more likely to be dishonest by not selecting “I
don’t know” to earn snacks.
If we were to think about resource abundance and scarcity from the environmental and personal perspectives, then it can be argued that the state of hunger is PS. However, this PS occurred in an environment where abundant or enough food was available for an individual to satiate their hunger, which means that they also experienced EA. Overall, we can infer that dishonesty was higher when an individual found themselves in the context of PS and EA. This finding coincides with the Gino and Pierce study, partly due to their overlapping methodological variation, although the theoretical framings of the two studies appear to tackle different ideas: Gino and Pierce claimed that environmental abundance affects unethical behavior, whereas Yam et al. argued that it was scarcity.
Another quasi-experimental study by Yam et al. (2014) was done at the entrance of a cafeteria where students were walking in to grab lunch. The goal of the study was to examine the effects of physiological deprivation on cheating for a need-related item, i.e., to note if hungry people would be more likely to cheat for a food item versus a notebook. Experimenters also
recruited participants who were exiting the cafeteria after having had their lunch at about the
same time. The study task gave participants a chance to overstate their performance to gain either
a snack or a notebook. It was found that participants were more likely to cheat for food than for a notebook, especially when they still had not had their lunch and were thus presumably hungry. In this scenario, although hunger is a state of PS, the study setting was near a cafeteria, which could possibly be an environment of resource abundance. Thus, it ties into their earlier study done inside a laboratory with an array of beverages and snacks. Interestingly, non-hungry participants, i.e., those who did not face food scarcity, also cheated, albeit for a non-food item (a more nuanced argument could be made that earning a notebook could have been a reference point for the control group, and they might have felt the need to reach it; food did not matter for satiated individuals since their food-related needs were already taken care of). Overall, the argument presented here is that the abundance and scarcity manipulations in the research design vary the interaction between environmental and personal contexts.
Sharma et al. (2014) investigated the impact of financial deprivation on dishonesty for financial gain. Participants were randomly assigned to win or lose a game which had four rounds.
For every round, participants could win $2.50. Some participants did not win anything whereas
others won $10. In these studies, the reference point for a participant could be thought to be $10.
Therefore, those who did not win anything can be classed as PS. The group which earned $10
could be thought of as a control group since they earned neither less nor more than the reference
point of $10. The authors found that when participants lost money, they were more likely to be dishonest on a subsequent task to earn more money. Here we can observe that Sharma et al.'s
manipulation of resource scarcity falls within personal context, and therefore, arguably, the
condition of PS was more likely to have increased the incidence of unethical behavior.
Roux et al. (2015) investigated the effects of reminders of scarcity on selfish and
generous behaviors. They argued that scarcity prompts people to think competitively, and thus
take actions for their own benefit. Participants were instructed to think of a time they felt scarcity and describe it in detail. Those in the control condition were tasked to think of what they did during the past week. Participants were then given an opportunity to donate $1 from their compensation to a UNICEF Relief Fund for the Children of Sudan's Darfur. The authors found that more people in the scarcity condition kept the money, i.e., did not donate, compared to those in the control condition. This study thus compared the effects of PS (albeit recalled from memory) versus no-PS on generosity.
Louie and Rieta (2018) investigated whether having a choice of something when it is limited (scarce) increases the likelihood of helping behaviors, i.e., giving small donations. Participants were told that they could take two pieces of candy. They could choose from Candy A or B or both. Scarcity was manipulated by telling participants that Candy A was in short supply. Participants came to the front of the class, took the candy pieces of their choice, and left. In the middle of the session, an announcement was made that Candy A had run out and, therefore, people would only get two pieces of Candy B. The candy pieces came with an envelope that contained utensils. Participants were told that they could return empty envelopes to the experimenters so that they could be used by others in the future. This counted as a "donation" or a measure of prosocial behavior. The control group was not told of any supply shortage; participants in this condition simply came in, ate the candy of their choice, and left, although they too had the option to give the envelope back.
The authors found that when participants had a choice under scarcity, they were more likely to donate (by returning the envelopes for future use) compared to those with no choice within the same condition. This study is interesting for several reasons. Since the amount of Candy A was not enough for everyone, this reflects scarcity in the environmental resource context, i.e., ES. However, the personal resource context needs further analysis. Those who had the choice between Candy A and Candy B could be said not to be in a state of PS since they got what they desired, i.e., met the reference point. However, being left without any choice midway through the study possibly gave rise to a feeling of deprivation for the other group, i.e., they previously had something (a choice) but did not anymore. This is PS in combination with ES. Either way, we can see that the scarcity manipulation in this study falls under the category of ES, and when participants are able to meet their required level, they are in ES without PS and are more likely to return something they did not need compared to those in other conditions.
Field Studies
Some studies on resource abundance and scarcity have been done using the lab-in-the-
field approach. A lab-in-the-field study is described as a study that is “...conducted in a
naturalistic environment targeting the theoretically relevant population but using a standardized,
validated, lab paradigm” (Gneezy & Imas, 2017, p. 440). A lab-in-the-field experiment targets a
certain population in their natural settings but may use an environmental variation or artificially
created manipulation to investigate the effects of an independent variable on a dependent factor.
Prediger et al. (2014) used this approach in Namibia and invited two groups who were similar to one another on social, cultural, and demographic variables but differed on the annual yield or variation of the natural biomass in their regions. The environments in which these two groups resided varied significantly from each other. One group (scarce) resided in an area where biomass was much lower than in the other group's area. Thus, the "environment" was that of scarcity. The researchers wanted to examine how exogenous resource scarcity, such as the lower presence of biomass, affects people's antisocial behaviors towards others.
The locals took part in a Joy-of-Destruction (JoD) experiment. JoD experiments measure spite, which is an individual's deliberate action or decision to decrease another person's reward by incurring a decrease in their own endowment. From the perspective of Rational Choice Theory, this kind of antisocial behavior demonstrates a deviation from the rational model, which suggests that individuals are self-interested (see Becker, 1976) and therefore should avoid making decisions that incur a cost to them or decrease their rewards (put simply, do not hurt people by hurting yourself, because it is silly). Prediger et al. argued that exogenous factors such as the scarcity of biomass create a competitive orientation and thus an increase in spiteful behavior. Their studies found that individuals from resource-abundant environments were less spiteful compared to a similar group living amid environmentally scarce resources. This study was framed as research on "scarcity" where the manipulation was a natural variation of resources from an environmental perspective, given that the two groups were mostly equal on socio-cultural dimensions. It is unclear if there was a difference in the state of personal resources, but it can be argued for now that this is a test of EA and ES on antisocial decision-making in which ES increased the incidence of spiteful, i.e., antisocial, behavior.
Aksoy and Palma (2009) conducted a lab-in-the-field experiment to test how resource "scarcity" increases cheating behaviors that benefit oneself. Low-income coffee farmers from a village in Guatemala were invited to participate in a dice-rolling game. Farmers were told that they could win a cash reward if the die rolled a five in the first round. After the procedure was completed, the farmers reported their outcomes to the experimenters and received a prize. The study was conducted with the same group of farmers at two time points: pre-harvest
and post-harvest. The authors described the post-harvest season as a “period of abundance”
where participants witness an increase in their income compared to the pre-harvest period during
which they struggle financially. Given these descriptions, we can assume that the pre-harvest period is most likely a case of PS. However, post-harvest may or may not be a state of "abundance" since we do not know if the increase in income meets or exceeds a reference point. For argument's sake, we can think of the post-harvest season as at least no-PS.
Given the study location, the findings should be analyzed by explicitly specifying the
nature of environmental resource context. The study was done in a low-income Guatemalan
village which was mostly inhabited by low-income farmers. Based on this description, we can
infer that on average, environmental resources may not be enough to meet everyone’s needs and
therefore, this study site can be thought of as a resource scarce environment (ES). The study was
done in the same scarce environment context at two time points: pre-harvest where arguably
personal resource context had scarcity (PS) and post-harvest which could be either no-PS or PA
since it is unclear if the increase in income created abundance or simply decreased scarcity. It was found that cheating occurred at both time points, pre-harvest and post-harvest, and thus the authors concluded that scarcity did not have a direct effect on cheating for selfish gain. However, given the environmental context, it can be argued that "scarcity" occurred in two contexts: personal and environmental. Therefore, this is not simply a matter of scarcity and abundance, but possibly some combination of a resource-poor environment (ES) with the state of the personal resource context, i.e., PS and no-PS or PA. When analyzed from the perspective of environmental and personal resource contexts, these findings overlap with those of Prediger et al. (2014), who also found that environmental scarcity was a predictor of antisocial behaviors.
Summary of the Literature Review
This review has yielded an interesting pattern of findings. In lab studies, people made more unethical choices for personal gain when they experienced scarcity in personal resource contexts (Sharma et al., 2014; Yam et al., 2014). This effect was also found in field settings such
as during the pre-harvest coffee season in Guatemala (Aksoy & Palma, 2014). This effect was
also observable under conditions where scarcity was present in the environmental context i.e. the
inhabitants of the region with lower biomass in Namibia were more likely to be antisocial
compared to their counterparts in a higher biomass region (Prediger et al., 2014). Additionally,
there seems to be an indication that within scarce environmental contexts, people tend to cheat
regardless of their personal resource context (see Aksoy & Palma, 2014). Thus, it follows that conditions of environmental and personal scarcity can give rise to antisocial behaviors or reduce prosocial behaviors.
The effects of resource abundance are rather hard to understand and difficult to explain, primarily because studies that investigate this factor frame their research around the effects of "scarcity" rather than abundance, which in turn reduces the theoretical attention given to this construct. Studies done by Gino and Pierce (2009) are exceptions in this regard since they explicitly varied environmental abundance, i.e., EA, and found that it affected cheating behavior. However, in these studies, two different types of EA interacted with PS, and thus it is unclear how we can interpret just the effect of EA. Furthermore, these findings contrast with those from Prediger et al. (2014), who found that people were less antisocial in conditions of EA. This shows that investigating the effects of EA without clarifying the personal resource context is problematic. It also shows a huge gap in the literature: we still do not know, nor have empirical evidence on, how abundance in both environmental and personal contexts affects prosocial and antisocial behaviors.
The literature review also shows that most behaviors which have been investigated are ones which result in benefit or loss to the decision-maker. For example, cheating for cash or food are types of behaviors that result in personal gain, i.e., bring benefit to the decision-maker who chooses to engage in this behavior (such behaviors are both selfish and unethical, a distinction which is often conflated in research; see Duboi et al., 2015). Readers should also note that the Prisoner's Dilemma (PD) is not used in these studies, primarily because PD is used to investigate cooperation between two decision-makers such that each individual's choice affects the other and the self; the studies reviewed above do not examine the effects of individual choices in interdependent situations. However, what is less understood or investigated are behaviors which may result in benefitting someone else but may not have a significant measurable benefit or loss to the decision-maker themselves. Additionally, the literature has not investigated if resource abundance and scarcity can affect prosocial and antisocial behaviors pertaining to information. This is a huge void in research and needs to be investigated further since information is the building block of human experience and is required by people to understand the world, form relationships, and make decisions. It is not an exaggeration to suggest that everyday human experience is shaped by the consumption and exchange of information.
The critical examination of the literature on resource abundance and scarcity and its effects on prosocial and antisocial behaviors has provided us with the following main takeaways.
First, although there is a distinction between environmental and personal resources, it is not
clearly stated or theorized. Second, conceptual and methodological approaches to the study of
abundance and scarcity sometimes conflate the two resource contexts. Third, there is often a
relational quality to the environmental and personal resource contexts which is hard to escape.
This means that the interaction between environmental and personal contexts needs to be
explored explicitly. Fourth, there is a serious dearth in the literature on understanding the effects
of resource abundance on prosocial and antisocial behaviors. Fifth, although people consistently
engage in various behaviors pertaining to information, we have not explicitly studied how
abundance and scarcity in environmental and personal resource contexts affect these behaviors.
The next sections address these issues by first elaborating upon how resource abundance and scarcity in environmental and personal contexts can be combined into one construct called the resource environment. This is followed by the development of theoretical arguments which address how resource environments can affect behaviors pertaining to information along prosocial and antisocial dimensions.
Resource Environment: The Interaction between Environmental & Personal Resource
Contexts
Resource abundance and scarcity exist in both environmental and personal contexts. An interaction of these contexts yields what can be termed a resource environment. The concept of "environment" can produce ambiguity (Simon, 1956, p. 103) and therefore, it is important to describe what it refers to in this project. A resource environment refers to a decision space where environmental and personal resources interact for a decision-maker. Since perceived, observed, and actual resources vary in quantity and quality across the two contexts that make up a resource environment, a resource environment arguably has a varying component to it. If it varies, then it logically follows that a resource environment can be treated as a factor.
Resource environments can be of different types. However, this project focuses on two in which individuals face either environmental and personal resource abundance (EAPA) or environmental and personal resource scarcity (ESPS). While there is research on scarcity, prior studies have not focused on EAPA and its effects on prosocial and antisocial behavior. This study contributes to the literature by treating EAPA as a factor.
The goal of this project is to examine how abundant (EAPA) and scarce (ESPS) resource
environments affect the likelihood and the extent to which a person shares information that can
help a recipient do better or gain an advantage. To this end, the next section describes how
information sharing and withholding can be seen from prosocial and antisocial perspectives. This
will be followed by a presentation of the theoretical approach that informs the first, or baseline, study's hypothesis and research question on the effects of EAPA and ESPS on prosocial and antisocial information sharing.
The Baseline Model: Resource Environments and Information Sharing
People often ask others for information on finding things such as jobs, housing,
internships, etc. Information is also sought on how to do things such as fixing a car, baking
cookies, or filing taxes. People also want information about things such as potential employees,
romantic partners, employers, etc. These bits of information are requested and gathered because
they can help people make good choices and/or prevent loss. Intrinsically, we realize the value of information and the power it has to affect us (Gleick, 2011). The exchange (or lack thereof) of information between two or more individuals can take prosocial and antisocial forms. For instance, when a sender shares a piece of useful information to help the recipient gain an advantage, make an informed decision, achieve goal(s), and/or access something desirable or reach some kind of reference point, even when the sender is under no obligation to do so, this can be counted as prosocial behavior. (In the context of this dissertation, the type of information under study does not fall under the category of public goods. A public good assumes that a resource is both non-excludable and non-rivalrous, whereas a lot of information is excludable and rivalrous; for instance, a chef's recipe for his best-selling dish is going to be excludable.) However, what if someone chooses not to share useful information when someone asks for it? Does that count as an antisocial behavior? Yes or no?
And if yes, to what extent? To understand this better, we first need to understand how
information is treated and manipulated by people.
Theorizing Information Sharing as Prosocial and Antisocial Behaviors
Information Manipulation Theory 2 (IMT2) is a theory of deceptive discourse production (McCornack et al., 2014). According to IMT2, senders can manipulate both the quality and quantity of information on multiple dimensions. Within the limits of this project, the focus is on three types of behaviors pertaining to information: a. withholding information, b. choosing to share information which will help a recipient, and c. choosing to share information which will hurt a recipient. Let us illustrate this further using the following example:
Mary has an important piece of information that can possibly help others. She knows
that a certain company called The Eagles Inc. is hiring. Suppose someone asks Mary if she can
tell them about any job opportunities. Mary could respond to this question in some of the
following ways:
a. Mary could say nothing at all.
b. Mary could say that Eagles Inc. are hiring.
c. Mary could say that Eagles are not hiring.
The above scenario illustrates that although there is an accurate answer to the question,
information can be treated and manipulated in many ways in terms of quality and quantity. Let’s
assume that Mary chose to say nothing at all i.e. provided no information in response to the
question. In this case, Mary violated the quantity of information i.e. by withholding or not
sharing any information whatsoever. In another instance however, Mary said that the Eagles
were hiring, and provided the recipient an accurate answer to their question. Arguably, Mary
retained both the quality and quantity of information. A third scenario could be described as the
situation where Mary deviated from the truth and said that the Eagles were not hiring. In this
case, Mary tampered with the quality of information by giving inaccurate information and therefore
engaged in deception which can be defined as: “...intentionally, knowingly, and/or purposely
misleading another person” (Levine, 2014, p. 379).
While thinking about the manipulation of information from a sender's perspective, it is also interesting to note the consequences for the receiver of these bits of manipulated information. The consequences will vary with the kind of manipulated information received. For instance, if someone were told that The Eagles Inc. are hiring and they decide to apply, then this information helped the recipient move toward their goal (Mary could also give additional information on how to apply, things to add, what to say, etc.). Since Mary was under no obligation to provide this information, but did so anyway to help a requester, this is prosocial behavior on her part (see Capraro and Cococcioni, 2015; Jones, 1991; Penner et al., 2015). However, where Mary deviated from the truth and gave incorrect information that the company was not hiring, she engaged in antisocial behavior, as this information would mislead a requester into not applying and therefore hurt the recipient's prospects. (Given the range of quality and quantity in information sharing, it can be argued that although some behaviors can be categorized as prosocial and antisocial, it is also useful to view behaviors along some dimension of prosociality and antisociality, since their effects can have varying consequences for recipients depending on the resource environment.)
In the context of this dissertation, when an individual chooses to share information that can help a recipient gain maximum advantage or reach a goal in response to a request, this is considered an instance of prosocial behavior. However, when the shared information can hurt a recipient and prevent them from reaching their goal, it is categorized as antisocial behavior. Providing no information, i.e., withholding it, is described as a lack of prosocial behavior. The major research question driving the first part of this project is how abundant and scarce resource
environments affect the extent to which people engage in prosocial and antisocial behaviors pertaining to information. A detailed literature review has shown that there is a gap in research: we do not know the effects of abundance and scarcity on people's decisions to withhold or share information that can help or hurt others. (As an aside, research shows that capuchin monkeys use deceptive pointing or holding, forms of non-verbal communication, to misguide others; these techniques emerge when members lower in status (hierarchy) are competing for a food resource (Mitchell & Anderson, 1997). Primates such as chimpanzees have been shown to go as far as to deceive humans by using hidden routes to reach resources such as food (Hare et al., 2006). These studies show that intelligent mammals such as monkeys are capable of intentionally using information about resources to their benefit when competing with others.) The following section expands upon this problem using relevant theoretical approaches.
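To make the three behavior categories above concrete before moving on, the following is a minimal sketch, in Python, of how a sender's response to an information request could be coded. It is purely illustrative: the labels, function name, and example answers are hypothetical assumptions and do not reproduce the project's actual scoring procedure.

from enum import Enum

class InfoBehavior(Enum):
    PROSOCIAL = "shared helpful (accurate) information"
    ANTISOCIAL = "shared harmful (inaccurate) information"
    WITHHOLDING = "shared no information"

def code_response(response, helpful_answer):
    # Classify a sender's response relative to the known helpful answer.
    if not response:                  # said nothing at all (withholding)
        return InfoBehavior.WITHHOLDING
    if response == helpful_answer:    # accurate answer that helps the recipient
        return InfoBehavior.PROSOCIAL
    return InfoBehavior.ANTISOCIAL    # inaccurate answer that hurts the recipient

# Example based on the Mary scenario above (hypothetical strings):
truth = "The Eagles Inc. is hiring"
print(code_response(truth, truth))                             # PROSOCIAL
print(code_response("", truth))                                # WITHHOLDING
print(code_response("The Eagles Inc. is not hiring", truth))   # ANTISOCIAL

This mirrors the scheme above: in this project the accurate answer is also the most helpful one, so any shared answer other than it counts as antisocial, and an empty response counts as withholding.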
Theorizing Effects of Abundant and Scarce Environments on Information Sharing
Prior research on the effects of resource abundance and scarcity on prosocial and antisocial behaviors has used various theoretical perspectives to explain these effects (see above sections for review). Studies which (may) have varied PS and EA argued that abundance in the environment cues inequity, which results in people overstating performance for personal gain in order to bridge the gap between their resources and those in the environment (Gino & Pierce, 2009). Others have found and suggested that scarcity in personal contexts triggers competitiveness, which results in selfish behavior such as decreased donations to charity (Roux et al., 2015). Using the logic of competitive orientation under scarce resources, similar results have been found for scarcity in environmental contexts, where people show lower levels of generosity and higher levels of spite towards others (Prediger et al., 2014). Research that (incidentally) varied environmental scarcity with personal abundance and scarcity found that personal context
made little difference in cheating for personal gain, i.e., in both instances (ESPS and ESPA) people cheated for personal gain (Aksoy & Palma, 2019). These studies have helped us understand how abundance and scarcity are tied to prosocial and antisocial decision-making via various mechanisms. However, they have not investigated the role of taxed cognition under abundance and scarcity and how that influences behaviors. This is an unexplored area of research, although scarcity has been shown to negatively affect cognitive function (Mani et al., 2014).
Furthermore, all the studies mentioned above have focused on decisions pertaining to
things such as food and money whose procurement or exchange resulted in a benefit or loss to
the self under resource abundance or scarcity. For example, cheating resulted in earning more
money or snacks for a participant, i.e., incurred some benefit (Gino & Pierce, 2009; Yam et al., 2014). Donating money to charity reduced the amount an individual had and therefore created a loss for them (see Roux et al., 2015). These studies point to the nature of tangible resources such as food and money, which decrease or diminish when shared with others and increase when taken for oneself. Thus, it is understandable that how people choose to procure or donate them is affected by resource abundance and scarcity. Information is unlike money or food since sharing it does
not decrease its quantity. And therefore, in some instances, giving it to others may result in
neither benefit nor loss to self from material perspectives. However, the process of sharing
information requires effort, i.e. an individual must spend physical and mental resources such as
time, energy, movement (e.g. typing on keyboard, speaking, etc.) to comply with a request for
information. Thus, the property of information as an intangible resource, and the processes
involved in its exchange can affect how (and if) it is shared for prosocial purposes.
Research from the lab and the field has shown that scarcity negatively affects cognitive function. It taxes the mind and induces a scarcity mindset which forces individuals to focus intensely on managing and navigating the circumstances they find themselves in (Mullainathan & Shafir, 2013). The process of navigating a scarce environment is a difficult and cognitively taxing experience. Arguably, when people are cognitively taxed, it is likely that they conserve both cognitive and physical effort. This applies to behaviors pertaining to information, for example by ignoring requests for information. Additionally, if sharing information does not directly benefit the individual in a material or relational sense, then it may become even less attractive to spend effort and share it (compare this to the studies reviewed above, where unethical actions such as cheating benefitted the self).
While scarcity has been theorized to be a taxing experience and its effects on prosocial
and antisocial behaviors have been heavily documented, the influence of abundance on human
decision-making and cognition is hardly investigated and thus, not well understood. Using the
rationale of cognitive taxation described above, it can be argued that abundant environments do not levy a high cognitive tax on individuals. Therefore, compared to those in a scarce environment, individuals in abundant environments would have more cognitive effort left to spare and, as a result, be more open to responding to requests for information. When people navigate abundant and scarce resource environments, how they share information can be affected by the role these environments play in exhausting cognition. Given this logic and the findings
from prior research on resource abundance and scarcity, it is hypothesized that in response to a
request for information:
Hypothesis 1. Individuals will be more likely to withhold information (i.e. not share) in a
scarce resource environment (ESPS) compared to when they are in an abundant resource
environment (EAPA).
In the above sections, it was argued that abundant or scarce resource environments can
affect the decision to withhold information. In this study, the accurate answer or truthful
information is also the most helpful information. Any piece of shared information that is not
helpful to a recipient can be thought of as deception. People’s deviation from the truth and
engagement in deception has been theorized and shown to be connected to their motives, such as monetary gains, saving face, concealing transgressions, maintaining social politeness, impression management, hurting others, etc. (Levine et al., 2016; Levine et al., 2010). It has been argued that lying requires justification--a purpose or reason--whereas truth telling does not, and therefore people often tell the truth (Serota et al., 2010). There is evidence that lying is not as common as many believe. In the American context, for instance, people lie once or twice per day on average. However, the distribution of lying is skewed such that some people tell more lies than others (Serota et al., 2010). Thus, arguably, in the absence of explicit motives such as benefit to oneself, being vengeful, or demonstrating dislike of an outgroup, lying, especially lying to be malicious for the mere sake of it, will be hard and less likely to occur. Given that this study is
aimed at understanding information sharing in abundant and scarce resource environments, it is
worth investigating the effects of these factors on the incidence of sharing deceptive and harmful
information. Therefore, the following research question is put forth:
RQ1: How do abundant and scarce resource environments affect antisocial information
sharing?
The above sections have established the nature of the resource environments that will be investigated in this project and how humans make decisions that pertain to information within them. In the literature reviewed so far, humans were the only decision-makers. This serves as the baseline model, for which two conditions were set up: EAPA and ESPS. The next move is to understand how human decision-making pertaining to information sharing is affected when people collaborate with an intelligent assistant in these environments. The next section is part II of this project; it reviews literature on AI and its influence on decision-making and presents relevant hypotheses.
Part II. Intelligent Assistant in Resource Environments: Consequences for Information
Sharing
Technology enabled by AI is already being applied across various environments, such as space applications (Girimonte & Izzo, 2007), financial markets (Dunis et al., 2016), and patient care (Topol, 2019), to name a few, to help humans make various kinds of decisions. Although various tools, machines, and technologies have always helped humans do mechanical tasks and ease computational efforts, AI-enabled technology brings something different to the table. It can learn from data, perceive its environment and make changes to it, and adapt to the user's needs (see chapter two for detailed notes on AI).
AI-enabled technology helps individuals to make decisions for themselves such as where
to eat or shop. However, AI-based applications are also being deployed in instances where a
decision-maker’s actions has direct consequences for others. For instance, attempts are being
made to automate some aspects of the judicial system in the United States which will ultimately
affect sentencing decisions (Dressel & Farid, 2018). AI-based applications are increasingly used
to sort CVs and hire people (Heilweil, 2019). Medical professionals also use AI to make
decisions on patient treatment and healthcare (see Inthorn et al., 2015). Thus, when humans
interact and make decisions using AI-enabled technologies, they can not only affect themselves
but also others.
While recent research has investigated how people tend to appreciate advice from algorithms and adjust their evaluations based on algorithmic feedback (Logg et al., 2019), there is very little literature on how nudges and advice from intelligent assistants affect human behavior. We also
do not know how environmental conditions affect the extent to which humans accept advice
from AI-enabled technology. Additionally, we are yet to explore how intelligent assistants
impact prosocial and antisocial behaviors especially as they pertain to information. These gaps in
the literature need to be addressed as AI increasingly becomes more sophisticated and
personalized to be deployed across contexts where people experience resource abundance and
scarcity and engage in interpersonal interaction and information exchange. Some examples
include financial markets, high pressure tasks and missions (e.g. medical and space), resource
allocation contexts (e.g., welfare schemes), etc. It is easy to imagine that if intelligent assistants or related technology were to successfully nudge people towards making prosocial and antisocial choices that affect others, there could be large-scale consequences.
This dissertation investigates some of the concerns presented above. In the preceding
section, a baseline model or resource context was developed which investigated how human
decision-making is affected when environmental and personal resource contexts interact. The
product of this interaction was referred to as a resource environment. Two resource environments
(EAPA & ESPS) were discussed and relevant hypotheses were presented. It was argued that
people would be more likely to withhold information in resource scarce environments. This
section takes the baseline model and adds a layer which involves embedding an intelligent
assistant in resource environments. It investigates how a collaborative intelligent assistant with a rather sophisticated version of intelligence will affect human behaviors as they pertain to information, and whether such assistants can be used as interventions for social good.
In many ways, this is a study of the future since the technology is not there yet in terms of sophistication. However, it will be. Thus, this is a sneak peek into the future. For study purposes, a bare-bones structure devoid of reciprocity, competition, and benefit was specifically
developed to understand how human behavior can change as a function of collaboration with
intelligent assistants. The overarching goal of this study is to test if an intelligent assistant can
serve as an intervention to nudge decision-makers to increase information sharing and decrease
information withholding. This is important to understand because a low incidence of sharing
helpful information can prevent people from making better choices or gaining advantages. The
study also aims to understand how an intelligent assistant can nudge people to engage in
malicious behaviors so that we can use that knowledge to build better machines.
The following sections will review some literature on the variables mentioned above
which will be followed by presentation of hypotheses.
Literature Review
Prior to delving into the review pertaining to the influence of intelligent assistants on human decision-making, it is important to reflect on the state of theory. As it stands, theories on how human decision-making is affected by AI, or how humans make decisions with AI, are not fully developed. By this, we refer to theories that strictly fall under the realm of behavioral research and decision-making and not those which utilize constructs of a perceptual nature (e.g., perceived trust). (A good example of the former is Prospect Theory (Kahneman & Tversky, 1980), a judgement and decision-making theory where the dependent variable is an explicit choice a decision-maker has to make between two alternatives involving risk.) In fact, the increasingly cited Logg et al. (2019) study did not explicitly refer to a theory to develop its arguments. The authors used a combination of prior research on how people agree or disagree with algorithmic judgement to make their arguments. Add to this the fact that there
are also no theoretical frameworks that explicitly investigate the effects of intelligent assistants
on human behaviors. Thus, the theoretical landscape is barren. The absence of AI-specific
theories makes sense at this point in time because although AI as a field of study has been
developing since the 1950s (see chapter 2), technologies which have explicitly incorporated AI-
based tools were only introduced to the commercial market in the past ten to fifteen years.
Empirical research in this area has recently gained momentum and will continue to sprout new
findings which will ultimately help us in theory development. This means that the wider
literature on the effects of algorithms, ADI, and intelligent assistants on people’s judgement and
perceptions will be used to develop the arguments and hypotheses on how an intelligent assistant
can nudge people into making prosocial and antisocial choices pertaining to information.
People and Algorithms
Recent literature using experimental methods has investigated how people adjust their judgement or choose between human and algorithmic judgements. (Note that this literature uses behavioral methods to test how people actually make decisions or judgements using AI's advice, which is very different from studies that investigate human perceptions of decisions already made by AI using self-reports or survey methods.) Studies by Logg et al. (2019) tested how algorithms, which were defined as a "...series of mathematical calculations", affect human judgement. (Algorithms are not AI per se; algorithms are pieces of instructions (Logg et al., 2019). For instance, "If A = 0, choose B. If B, delete C AND D" can be described as an algorithm. As described in chapter two, something "automated" is also not AI per se; automation refers to a process which reduces the need for human intervention. An AI-enabled technology will use algorithms and automation, but algorithms and automation do not always represent AI. Many studies conflate algorithms and automation with AI, and these distinctions may not be clearly understood by most people. In the Logg, Minson, and Moore (2019) study, algorithms were not conflated with AI.) The authors found that on a series of forecasting tasks, people adjusted their preferences and evaluations of people's (numerical) ratings of attractiveness and weight to be closer to those of an algorithm (Logg et al., 2019; Prahl & Van Swol, 2017). People also preferred to get advice from algorithms instead of humans to help them make evaluations. A related line of research has found that people were more likely to avoid algorithms when they saw them make mistakes (Dietvorst et al., 2015; Dzindolet et al., 2002). This tendency is called algorithm aversion. Algorithm aversion also occurs for those who consider themselves experts and thus rely more on themselves than on an algorithm (see Logg et al., 2019). However, studies which have tried to replicate algorithm aversion could not find the relevant effects, primarily because algorithms were not shown to make mistakes (see Prahl & Van Swol, 2017). Therefore, overall, we see that the literature on algorithms is increasingly tilting towards showing people's preference for algorithmic judgement (though this preference often rests on the tasks being numerical in nature; if someone were told that an equation was solved by hand versus by a calculator, it takes little to figure out that most people would trust the calculator).
The current literature on people’s relationship with algorithmic advice mostly looks at it
from the perspective of individual attributes e.g. expertise (Logg et al., 2019), algorithm’s
performance (Dietvorst et al., 2015), and task-types such as forecasting or making predictions
(Dietvorst et al., 2015; Logg et al., 2019; Prahl & Van Swol, 2017). Furthermore, the primary
focus of these studies has been to examine how AI technology impacts an individual's (or
group’s) judgement and decision-making for themselves. It does not investigate if and how
algorithms or AI can push users toward decisions and behaviors that affect others. For instance, Shaikh
& Cruz (2019) investigated how teams used an intelligent assistant to complete a creative task. A
team’s use of the device, and their acceptance of its advice is an investigation into how decision-
makers interact with technology to affect themselves and their work. However, there are gaps in the literature: we are yet to examine the influence of intelligent assistants on behaviors through which we deliberately and knowingly may help or hurt others across various resource
environments. This is important to investigate since AI technology is already being used to make
decisions that directly affect others (e.g. hiring people, Heilweil, 2019). Moreover, we do not
have much evidence on how environmental contexts affect algorithm aversion or appreciation.
One of the biggest motivations that underlies the development and incorporation of AI into our lives is still in sync with the ideas that helped the field originally get started: assisting humans by making tasks easier for them across various contexts (see chapter two). Therefore, it is pertinent to
understand how the constraints in an environment, or the lack thereof, affect the extent to which
people can be nudged by AI.
Resource Environments and Intelligent Assistant: Consequences for Information Behaviors
Intelligent assistants are embedded in contexts to interact with and aid users. Since they are enabled by AI, they can adapt, learn, and make changes to their environments by nudging users to make certain choices. Nudge as a concept was popularized by Richard Thaler and Cass Sunstein (2008), who suggested that adjustments to an environmental structure or architecture can change how people make choices. In the context of intelligent assistants, consider the following example. Let's say a user knows two routes to her work: A and B. One morning she is running late and asks Siri (an intelligent assistant developed by Apple and available on its smartphones) for directions. Siri recommends route A because it has less traffic. Siri then shows the user directions to her work via route A. The user follows Siri's suggestion and uses route A to drive to her destination. In this scenario, Siri nudged the user's decision, i.e., suggested route A. However, environmental parameters or structure (e.g., traffic on the routes, the decision task itself, i.e., driving to work, and the user being late) were also a part of how the user decided to choose route A.
Research which explicitly varies or specifies environmental parameters or structures and how they affect human decision-making with intelligent assistants (or related technology) is still very limited and in its infancy. However, there is some evidence which shows that environmental constraints can affect how people use and take suggestions from intelligent technology. For instance, Shaikh and Cruz (2019) found that teams working on a creative task were less likely to use an intelligent assistant under time scarcity compared to those who did not face temporal constraints. Related research from the area of human-robot interaction shows that under emergency situations, people are more likely to follow the advice of an error-prone robot (Robinette, Howard, & Wagner, 2017; Robinette, Li, Allen, Howard, & Wagner, 2016). In a lab study, participants interacted with a robot that made errors. Later, when the participants had to evacuate during a time-critical fire scenario, they relied on the advice of the same robot. Compare this finding to research on algorithm aversion, where people were quick to punish an algorithm after seeing it make a mistake (Dietvorst, Simmons, & Massey, 2015). It appears that people trusted and followed a careless technology under an environmentally constraining situation (Robinette et al., 2017; Robinette et al., 2016). This lends some support to the idea that environmental features (e.g., emergency, time scarcity) may affect if and when people use and follow technology.
An intelligent assistant is a unique technology since it can assist, interact, and display intelligence. Given these features, it can be described as a cognition-of-sorts. If an intelligent assistant can be said to represent and function like a cognition-of-sorts or an intelligent entity in the environment, then a cognitively taxing experience may make it easier for humans to accept its nudges. As applied to information, an intelligent assistant may thus have the potential to nudge people's behaviors and influence them to share information that can help others in a scarce environment. Additionally, as hypothesized earlier, not giving any information at all is easier than making the move to be antisocial, especially in the condition of ESPS. However, the introduction of an intelligent assistant into an ESPS environment could make it easier for people to conform to its antisocial nudges for the same reason--i.e., taxing environments can increase the likelihood of relying on intelligent entities.
Information can be manipulated and shared in many ways. However, in this project, the focus is on three types of AI nudges that pertain to information: a. sharing the best information, i.e., the one which is accurate and will help a recipient, b. sharing the worst information, i.e., the one which is inaccurate and will hurt a recipient, and c. sharing no information. Theoretical knowledge and perspectives gleaned from the literature on algorithm appreciation (Araujo, 2020; Logg et al., 2019), the research on resource abundance and scarcity reviewed earlier (see chapter three), and IMT2 are combined to argue that when human cognition is under constraints as a product of navigating a scarce environment, an intelligent assistant becomes an influential entity which can help share the cognitive burden of making decisions. Furthermore, since an intelligent assistant can function as a cognition-of-sorts by learning, interacting, and helping people, its suggestions would affect behaviors compared to when people make decisions all by themselves. Therefore, it is hypothesized that:
Hypothesis 2. Individuals in ESPS would be more likely than those in EAPA to share the worst information when nudged by an intelligent assistant to do the same.
Hypothesis 3. Individuals in ESPS would be more likely than those in EAPA to share no information when nudged by an intelligent assistant to do the same.
Hypothesis 4. Overall, individuals in EAPA will be less likely to accept nudges from an
intelligent assistant compared to those in ESPS.
Hypothesis 5. When nudged by an intelligent assistant to share the best information,
incidence of the same behavior would be higher compared to the baseline controlling for the
resource environment.
Hypothesis 6. When nudged by an intelligent assistant to share the worst information,
incidence of the same behavior would be higher compared to the baseline controlling for the
resource environment.
Hypothesis 7. When nudged by an intelligent assistant to not share any information,
incidence of the same behavior would be higher compared to the baseline controlling for the
resource environment.
The Next Steps: Research Design and Method
The next chapter will discuss the research design and detail the study manipulations and the development of a web portal.
Chapter 4. Notes on Research Design & Method
“Context matters, but rarely it is defined” (Miller et al., 2019, p. 173).
The goal of this project is to investigate: a. the impact that environmental resource contexts have on an individual's prosocial and antisocial information sharing (or lack thereof) to help or harm others, respectively, and b. how an intelligent assistant can nudge individuals to engage in prosocial (i.e., benevolent) and antisocial (i.e., malicious) decisions pertaining to information sharing within the same environments. These investigations are, however, set within boundary conditions where decision-makers do not have any expectations of reciprocity and their behaviors towards others do not have any cost or benefit to themselves. Additionally, the anonymity of decision-makers must be maintained. These conditions were put in place to create a bare-bones structure where human decision-making is not affected by expectations of payoffs or reciprocity. This meant that a new task had to be created, as current experimental paradigms (e.g., Ultimatum Game, Dictator Game, Prisoner's Dilemma, and Joy of Destruction) are tied to some benefit to the self and/or the other, where both parties are aware of the payoffs. Furthermore, unlike economic games where the dependent variable is money, this project investigates information, which is harder to quantify.
This project also investigates how an intelligent assistant can nudge people to take certain actions within resource environments. This meant that an intelligent assistant also had to be designed. To accomplish all these goals, a virtual experimental portal was developed to represent a system which hosted a resource environment, an intelligent assistant, and decision-makers. Its purpose was to create a mini environment for every single individual who participated in the study. This web portal was specifically designed for this project and was used to invite hundreds of virtual participants from Amazon Mechanical Turk. HTML, JavaScript, and CSS were used for the front-end, i.e., interface design and development work. The website and associated features, including the database, were hosted on Amazon servers, and Python was used for back-end integration. A specific domain name was purchased for the project. Developing a system such as this required the incorporation, management, and repeated testing of various moving parts. In the following sections, the study task, web portal design, manipulations, and other essential elements will be described in detail.
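To give a sense of how such a portal can be wired together, here is a minimal back-end sketch in Python using Flask. It is a hypothetical illustration only: the route names, condition labels, and in-memory storage are assumptions made for the sake of the example and do not reproduce the project's actual code, endpoints, or database.

# Hypothetical sketch of a study back-end (not the project's actual code).
import random
import uuid
from flask import Flask, jsonify, request

app = Flask(__name__)
sessions = {}  # stand-in for the hosted database

@app.route("/session", methods=["POST"])
def create_session():
    # Create a participant session and randomly assign a resource environment.
    session_id = str(uuid.uuid4())
    condition = random.choice(["EAPA", "ESPS"])  # abundant vs. scarce
    sessions[session_id] = {"condition": condition, "responses": []}
    return jsonify({"session_id": session_id, "condition": condition})

@app.route("/response", methods=["POST"])
def record_response():
    # Record a door choice or an information-sharing decision for a session.
    data = request.get_json()
    sessions[data["session_id"]]["responses"].append(
        {"round": data["round"], "choice": data["choice"]}
    )
    return jsonify({"status": "recorded"})

if __name__ == "__main__":
    app.run()

In the actual system, front-end pages built with HTML, JavaScript, and CSS would call endpoints of this kind, and responses would be written to the hosted database rather than to an in-memory dictionary.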
Chapter three provided a critical examination of the literature on resource abundance and scarcity and its effects on prosocial and antisocial behaviors. It made the argument that resource distributions vary in both environmental and personal contexts. However, the distinction between these contexts is often unclear or conflated from theoretical and methodological perspectives. To address these issues, the concepts of personal and environmental resource abundance and scarcity were first teased apart and definitions of environmental resource abundance and scarcity were provided. The core thesis that followed was that an interaction between environmental and personal resource contexts yields resource environments. That chapter also argued that various kinds of information behaviors, such as sharing, withholding, and manipulations that are benevolent and malicious in nature, needed to be studied within resource contexts.
This chapter is divided into two parts. Part I serves as a detailed description of how the EAPA and ESPS resource environments were built. An investigation into the information behaviors within these resource environments serves as the baseline study. In this study, individuals are randomly assigned to one of two types of resource environments, EAPA or ESPS, to investigate their information behaviors. Manipulations, design features, study tasks, and instructions are also described in this part. Additionally, various images and figures have been added to help readers understand how the study unfolded for the participants.
Part II is concerned with the design, incorporation, and building of the intelligent
assistant and its interaction with users. Various factors such as intelligent assistants’ appearance,
performance, and interaction with the users are discussed in that section.
Part I. Building a Web Portal for Experimental Research
Study 1. The Baseline Model
In chapter three, an argument for a baseline model was set up. This model was devoid of
an intelligent assistant and was established to understand the effects of abundant (EAPA) and
scarce (ESPS) resource environments on an individual’s decision to share information that can
help or hurt people, or to withhold helpful information. The core thesis presented in that chapter was
that individuals in EAPA conditions would be less likely to engage in information withholding
than those in ESPS. Additionally, a research question was posed which asked about the range of
antisocial information sharing within these contexts. The baseline study employed a one-factor
(resource environment: abundant vs. scarce) between-subjects experimental design. In the
following sections, the study design, the development of a web portal for behavioral research, and
the procedures are discussed in detail.
Study Task: Basic Principle
The system was developed to look like an online game. Participants were invited to play
“Open Sesame” where they needed to accumulate 150 points by opening four doors over four
rounds. One hundred and fifty points therefore served as the reference point, i.e., a required,
higher, or desirable reference point (see Cannon et al., 2019).
Environmental Abundance/Scarcity (EA/ES) Manipulations
(Environmental abundance [scarcity] is manipulated when the available quantity of resources is higher [lower] than the quantity of resources needed for all the individuals within that environment with respect to a reference point.)
Prior to starting the game, participants were given a set of instructions on how the game
was played, its requirements, rules, etc. Participants were told that for each game session, there
was a set amount of points available. Participants were not informed that there were no other real
players and that the points were pre-set. For participants who were assigned to the ES condition,
the total number of points available was said to be only 200 for three players (including the
participant). This meant that out of three participants--the subject and two fictitious others--only
one would collect 150 points or more.
In the EA condition, the total number of points available was 600. Therefore, all three
players could possibly reach their target of 150 or more. These manipulations are in line with the
definition of environmental abundance and scarcity provided in chapter three: an environment is
said to have resource abundance when it has enough resources to meet the reference point of
the decision-making units in that environment.
Participants were also told that the doors they selected did not affect the choices made by others.
For instance, if participant 1 were to choose Door A, then participant 2 could also choose
Door A. This means that each participant could choose any door they liked and did not have to be
concerned about how the choices made by others might affect their own chances.
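The definition underlying these manipulations can be expressed as a short sketch in Python (the back-end language mentioned earlier); the function name is illustrative, but the arithmetic follows the 600- and 200-point setups described above.

```python
# Minimal sketch of the abundance/scarcity logic described above; the function
# name and structure are illustrative, not taken from the study's actual code.
def is_environment_abundant(total_points: int, n_players: int, reference_point: int) -> bool:
    """An environment is abundant when it holds enough resources for every
    decision-making unit to reach the reference point."""
    return total_points >= n_players * reference_point

# The two environmental conditions used in the game:
print(is_environment_abundant(600, 3, 150))  # EA condition -> True  (600 >= 450)
print(is_environment_abundant(200, 3, 150))  # ES condition -> False (200 <  450)
```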
Personal Abundance/Scarcity (PA/PS) Manipulations
(In PA, a decision-maker is able to reach their reference point; in PS, they are unable to meet it.)
Participants were randomly assigned to a condition in which they would either meet or fail to
meet their need of 150 points, the reference point. Theoretically speaking, if someone ends up
higher than this reference point, they are in a situation of personal abundance (PA); if lower,
personal scarcity (PS). Each condition had a set of predetermined points regardless of which door
was opened. For instance, if a player (P1) assigned to the PS condition opened Door A, they would
earn 20 points; another player assigned to the same condition who opened Door B would also earn
20 points. This was done to make the number of points earned by each participant depend on their
condition (PA/PS). However, participants were told that the points behind each door were fixed,
so that if they opened Door A they would get 20 points, and the same rule would apply to any
other player. This was done to make participants believe that the points behind the doors were
fixed, so that they could meaningfully share this information with others at the end of the game.
These point values were the same between both conditions except for one door, where PA
received 40 points and PS received 20. This difference yielded a total of 140 points for PS and
160 for PA. Therefore, PA finished 10 points above the reference point of 150 and PS finished 10
points below it.
By the end of round four, those in the personal resource scarcity condition were unable to
meet their target of 150, as they had accumulated only 140 points; those in the abundance
condition achieved 160. Throughout this activity, participants could see the points they had
earned, the doors they had opened, the number of rounds remaining, and the number of players in
their game session. Several interviews and rounds of pilot testing were conducted to check whether
these conditions would create a sense of scarcity and abundance (results are presented at the end of
this section). There is, however, a very important point to note here: the order in which participants
receive the points must not create a sense of complete assurance that they will win the game, nor
should it make them feel so hopeless that they completely lose interest. For instance, those in PS
should feel scarcity in every round but remain hopeful that they could still meet their target.
Therefore, after several rounds of testing and interviews, the following order was determined.
PA = 30, 55, 35, 40 [opened doors]; 20, 25, 45, 50 [unopened doors]
PS = 30, 20, 55, 35 [opened doors]; 25, 40, 45, 50 [unopened doors]
The participants could take as much time as they wanted to complete the game.
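A minimal sketch of these predetermined schedules, assuming they were stored as simple per-condition lists (an assumption about implementation, not a description of the actual portal code), shows how the PA and PS orders sum to 160 and 140 points relative to the 150-point reference.

```python
# Sketch of the predetermined per-round point schedules described above;
# variable names are illustrative.
POINT_SCHEDULE = {
    "PA": [30, 55, 35, 40],  # personal abundance: sums to 160 (> 150 reference)
    "PS": [30, 20, 55, 35],  # personal scarcity:  sums to 140 (< 150 reference)
}
REFERENCE_POINT = 150

for condition, rounds in POINT_SCHEDULE.items():
    total = sum(rounds)
    outcome = "meets" if total >= REFERENCE_POINT else "misses"
    print(f"{condition}: total = {total}, {outcome} the 150-point reference")
```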
Web Portal Design, Visual, and Interaction with Research Participants
Since participants were made to believe that they were in a game with others, the
experimental portal was designed to give that impression. For instance, time delays were added
at various points to show that the participants were being connected to a virtual setup. The
following pages contain relevant descriptions and details on variables’ manipulations.
Participant Login and Instructions
Once a participant had accessed the URL, they saw a sign-up page. This was followed by
a set of instructions. Participants could choose to go back and forth between pages by using
“Back” and “Next” buttons on the bottom left and right of the page respectively. Once they
clicked “Next” on the page shown below, they saw more instructions which pertained to
environmental resources.
Figure 4. 1 Instructions page 1.
Figure 4. 2 Instructions page 2
To make sure that the participants understood the instructions and what they meant, they
were also shown how the main game interface would look.
Figure 4. 3 Participants were shown how the interface would look.
Attention Check. After these instructions were completed, participants took an attention
check test. This was to ensure that they understood the instructions and the basic requirements
of the game. It was also used to remind them of the reference point, i.e., 150. If participants gave
one or more incorrect answers, they were not allowed to proceed and were asked to read the
instructions again.
Figure 4. 4 Attention check.
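The gating rule described above can be sketched as follows; the specific questions and answer keys are placeholders, since the actual attention-check items appear only in the figure.

```python
# Illustrative gating logic for the attention check; the questions and answer
# keys here are placeholders, not the study's actual items.
def passed_attention_check(answers: dict, answer_key: dict) -> bool:
    """Participants proceed only if every answer is correct."""
    wrong = sum(1 for q, correct in answer_key.items() if answers.get(q) != correct)
    return wrong == 0

answer_key = {"target_points": "150", "rounds": "4"}
print(passed_attention_check({"target_points": "150", "rounds": "4"}, answer_key))  # True  -> proceed
print(passed_attention_check({"target_points": "100", "rounds": "4"}, answer_key))  # False -> re-read instructions
```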
Time Delay Impression
Once the test was passed, participants were shown a page with a time delay on it.
This was done to give the impression that they were actually logging on to a virtual game.
Figure 4. 5 Connection page with time delay.
Environmental Abundance/Scarcity Manipulation
Participants then saw a game description which was tied to the environmental resource
manipulation (see content below). If they wanted to proceed to the game, they had to click “I am
ready. Let’s play”.
Text-Based Environmental Abundance Manipulation. “Wow! In this game session,
the total available points are enough for all the players to reach their targets of collecting 150
points. This is because three players (including you) are trying to collect 150 points each from a
set of 600 points! All three players might be able to reach the targets of 150 points if they select
the right combination of doors. Play your best game! Good luck!”
Text-Based Environmental Scarcity Manipulation. “Oh! In this game session, the total
available points aren’t enough for all the players to reach their targets of collecting 150 points.
This is because three players (including you) are trying to collect 150 points each from a set of
200 points! Only one player out of three might be able to reach the target of 150 points if they
select the right combination of doors. Play your best game! Good luck!”
Figure 4. 6 Environmental scarcity manipulation.
Connecting to the Game
When they had indicated they were ready to play, they were taken to another page which
had a time delay on it. This was done to suggest that they were being connected to a system
where other people were also present.
Figure 4. 7 Game loading page.
Main Task Page
The main task page showed eight doors, out of which participants could select four.
They could see the number of rounds completed, their target, and the number of points they had
earned on top of the page. Any doors opened were listed in the panel on the left. This was done
to ensure that the participants would be able to have all the information at hand instead of relying
on memory. There was a small envelope icon on the top right corner of the screen to indicate that the
system could send messages. It was also added to give credibility to the process of receiving a
notification from another “participant”.
At the top of the page, participants could see the “Total Available Points” represented by a
bar. Although EA and ES conditions were manipulated by text on earlier pages, this bar was
placed to serve as a reminder that, in addition to the resources they accumulated for themselves
(personal context), they were embedded in an environment where there were (or were not) enough
points. After every round, participants could see the remaining total number of points being
recalculated and displayed on the top right corner of their screen. The goal here was to make sure
that the decrease seemed reasonable and did not affect the feeling that EA and ES should produce.
These deductions and the cues provided were pilot tested. The following logic was used to
manipulate EA or ES: decrease 600 (EA) or 200 (ES) by a multiple of five such that the total
points remaining are always greater than the points earned by the user at any given time.
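A hedged sketch of this decrement rule is shown below. The only constraints stated in the text are that the displayed total decreases by a multiple of five and always remains greater than the user’s accumulated points; the cap on the step size is an illustrative choice.

```python
import random

def decrement_total(total_remaining: int, user_points: int) -> int:
    """Return a new displayed total: reduced by a random multiple of five but
    always kept strictly greater than the points the user has earned so far."""
    headroom = (total_remaining - user_points - 1) // 5   # how many 5-point steps are safe
    if headroom < 1:
        return total_remaining                            # no safe deduction this round
    step = 5 * random.randint(1, min(headroom, 10))       # the cap of 10 steps is an illustrative choice
    return total_remaining - step

# Example: ES condition starts with 200 total points; the user has 30 after round one.
print(decrement_total(200, 30))   # a multiple-of-five reduction; the result stays above 30
```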
As participants opened doors, they were able to see the doors and the points they
unlocked on a panel on the left side of the screen. Once all the rounds were completed,
participants were reminded of how they did.
The colors of the interface were tested using in-person interviews at USC Annenberg
(compensated with a $5 Starbucks gift card) and surveys of Amazon Mechanical Turkers.
Figure 4. 8 Main task page.
Once the participants completed the game, they were shown either of the following
messages based on the experimental condition they were assigned to.
EAPA. “Yayy! There were enough points available in this game session for all the
players to collect 150 points each, and you made it!”
ESPS. “Aww! There weren’t enough points available in this game session for all the
players to collect 150 points each, and you didn’t make it.”
A Message from a New Participant
Once the game was over, participants were notified that they had received a message. In
order for participants to see the message, they had to click “Open Message”. This was done to
create a sense of engagement and make the participants believe that they were in a virtual
environment which can link people with each other. To reduce social desirability bias associated
with helping or harming behaviors, participants were also told that “all players are anonymous,
and your conversations are not publicly available”.
Figure 4. 9 Message from a new participant.
After opening this message, participants saw the following note: “Hi there! I need to get
150 points. Can you tell me which door has the most points?”
Participants could choose to Respond to or Ignore this request. If they chose “Ignore”, the
study ended, and the participants were taken to a survey page. If they chose “Respond”, they
were taken to another page to share (or not share) information (see below).
Figure 4. 10 Choice to respond or ignore the message.
Information Sharing Page
Once participants had chosen “Respond”, they were taken to the main information
sharing page which showed them all the doors, and the points behind them. This page gave them
full authority to share any information they liked. They were not confined to answering the
question in a certain way. If they changed their minds and decided not to share information, they
could do so by clicking “Prefer not to say” or by pressing “Submit” without entering any
information. Participants were not forced to make a choice, and they could move on at any time.
In the following case, the door with the most points is C and the door with the least points is H.
Figure 4. 11 Information sharing page.
Receiving Message
As described in the preceding and the current chapter, the dependent variable in this
exercise is sharing information that can benefit or hurt someone else. However, as per the
boundary condition (see chapter three), it was important to eliminate (or reduce greatly) the
expectation of reciprocity. It was also important to maintain the sense that the participants were
in a larger gaming environment where other players are present. If the study simply gave
participants an option to share information with someone else, then that could surprise them
and/or create feelings of inequity where they might find it unfair to have to give something when
they did not have the chance to request it themselves. Therefore, participants were told in the
instructions that Open Sesame sometimes connects players with others.
This was done to give the participants an idea that some connection may occur, but not
necessarily so.
Figure 4. 12 Possible connection with other participants.
Post-game Survey
Once they clicked “Submit”, they were taken to a post-game survey after being shown the
following message.
Experimental Process
This research design focused on allowing participants to make their own choices at every
step; they were not forced to make choices. Figure 4. 13 represents how the procedure would
unfold for a participant depending on the choices they made.
Figure 4. 13 Stepwise process for participants in the baseline study.
Pilot Test
The development of a web portal such as this requires consistent testing and user
feedback. It was designed to engage users who may be logging in from anywhere in the United
States. The likelihood of these users speaking English was thought to be high; however, it was
assumed that language comprehension, general analytical abilities, ages, political orientations,
and experience using Amazon Mechanical Turk would vary greatly within the participant
population. Therefore, it was important that the study manipulations and game design should
work for most people and any technical or logistical issues should be limited.
A pilot test was run to examine the experimental manipulations, user perceptions of the game
design, the front end (i.e., user interface), and back-end (i.e., server and database) issues. Eighty-four
Amazon Mechanical Turkers were randomly assigned to a 2 (Environmental Resource Context:
EA vs. ES) x 2 (Personal Resource Context: PA vs. PS) between-subjects factorial design. Data
from 73 participants were included in the final analyses (some data were lost due to
technical issues). Participation criteria were the following: a. participants must be residing in the
United States at the time of the study, and b. participants must be 18 years or older. Results from
the pilot test are presented as follows.
Environmental Abundance & Scarcity Manipulations
The EA and ES manipulations should allow the participants to feel or acknowledge that
there were/were not enough points for all the players to reach their target. PA and PS conditions
should have no effect on this perception. This was measured by one item: “There were enough
points for all the players to collect 150 points each,” which used a six-point Likert-type scale
where 1 = Strongly Disagree and 6 = Strongly Agree, with a ‘No Opinion’ option offered at the
end of the scale. (All Likert-type scales used in this dissertation followed the same format.) This
kind of Likert scale was used because providing a “neither agree nor disagree” option in the
middle of the scale has been shown to increase the likelihood that this option is selected, which
negatively impacts the effect size (see Nadler, Weston & Voyles, 2015). The results showed that
those assigned to EA (M = 5.50, SD = 0.23, N = 42) agreed and those in ES (M = 1.96, SD =
0.27, N = 31) disagreed with this statement, a significant difference, whereas those in PA (M =
3.68, SD = 0.25, N = 36) and PS (M = 3.82, SD = 0.25, N = 37) did not differ significantly.
Personal Abundance & Scarcity Manipulations
The PA/PS manipulations should allow the participants to feel or acknowledge that they
were/were not able to collect 150 points. This was measured by one item: “I managed to collect
the 150 points I needed.” EA and ES conditions should have no effect on this perception. Results
showed this to be the case: those assigned to PA (M = 5.71, SD = 0.13, N = 36) and PS (M =
1.14, SD = 0.12, N = 37) significantly agreed or disagreed with this statement, respectively,
whereas those in EA (M = 1.14, SD = 0.12, N = 42) and ES (M = 3.33, SD = 0.14, N = 31) did
not show any significant difference (Table 1).
User Perceptions of the Game Design
Instructions. It was very important that the participants found the instructions easy to
understand. This effect should not have varied by experimental conditions. This was measured
by the following item: “The game instructions were easy to understand”. Most users found the
game instructions easy to understand and this did not vary by conditions (M = 5.72, SD = 0.30, N
= 73). Open-ended responses mentioned adding an image that could show participants what was to
come in terms of game design. This suggestion was incorporated by showing the participants how
the interface would look at the end of the instructions.
Aesthetics. During the development of the web portal, several participants were recruited
and interviewed about their impressions of the portal in an in-person setting.
The above sections describe the EA/ES and PA/PS manipulations, the design of the
experimental portal, and the various steps a user went through. This virtual system was designed
to represent EAPA and ESPS resource environments at a basic level and therefore, this serves as
the baseline condition. The next section describes the incorporation of the intelligent assistant in
this system.
Part II. Designing and Incorporating an Intelligent Assistant
Study two investigates how nudges by an intelligent assistant can affect human behavior
across abundant (EAPA) and scarce (ESPS) resource environments (see chapter three for
hypotheses). The main study task (i.e. collect 150 points), EAPA and ESPS manipulations and
instructions, and participant login procedure were the same as the baseline study. The only
change was the addition of an intelligent assistant in those environments.
In chapter two of this dissertation, we discussed that for a technology to be characterized
as an intelligent assistant, it must have three features: a. must use or be enabled by AI, b. must be
able to interact with humans (or machines), and c. must provide assistance to a human user. The
intelligent assistant developed for this study was designed to give the users a sense of all the
above-mentioned features. Each of these features and relevant manipulations were woven into
the interface and the design of the intelligent assistant. In the following sections, various aspects
of the design of the intelligent assistant, its introduction to the resource environment, and
interaction with the users are described.
The Design of the Intelligent Assistant
This study was conducted in a virtual setting, and the intelligent assistant communicated
with users using push buttons and text. Intelligent assistants which use text-based
communication are limited to the extent to which they can create an interactive experience with a
user. This is primarily so because text is less engaging and provides fewer cues. Commercially
available intelligent assistants such as Alexa or Google Home use verbal communication to
interact with users. Additionally, they (mostly) use female voices and names for themselves.
Voice and names are powerful cues that incorporate a great deal of human-ness without much
effort, and this positively affects user engagement. Text-based assistants, however, can face more
difficulty in establishing relationships with users.
In order to preempt the issues that text-based intelligent assistants face and to increase user
engagement, special emphasis was placed on the design of the intelligent assistant. Past research
shows that adding human-like elements to interactive technologies such as robots and virtual
agents increases the trust people place in them (Eyssel et al., 2012; Mathur & Reichling, 2009).
Put simply, when an interactive, intelligent technology gives some human-like cues or cultivates
some sense of anthropomorphism, it is more likely to be trusted and used.
Gender & Physical Appearance
Most technology companies present their intelligent assistants as female; examples include,
but are not limited to, Alexa, Siri, and Cortana. If the assistants are not explicitly female, companies
use phonetic cues to suggest gender. A good example is IBM’s Cora, which was designed for the Royal
Bank of Scotland (RBS). Cora is a feminine name, and IBM describes Cora as a ‘she’ on their
webpage (Raising Cora, n.d.). RBS, however, does not use any pronouns for Cora. Yet if
you chat with Cora and ask, “Are you male or female?”, Cora will reply: “I am a bot.” It may
be a bot, but subtle hints such as a feminine name go far and increase human-ness.
Intelligent assistants that use gender cues may be perceived as high on human-ness.
In addition to gender, physical appearance and facial features also matter when it comes
to designing interactive technology. For the purposes of this study, several designs and names for
the intelligent assistant were pilot tested. Participants consistently suggested that the intelligent
assistant be made “...more human”, have “eyes”, “be bigger in size” (with respect to the screen
size).
In addition to these, some suggested making the assistant “female”. Overall, people were
more likely to say that they would “interact with”, “trust”, or “ask help” from a female than from a
male or gender-neutral intelligent assistant. After several rounds of online testing and in-person
interviews, the intelligent assistant Alana was developed. The gender was implied via the color
pink, which is traditionally used to represent females, and the name Alana (a humble ode to Alan
Turing). Research shows that in the English language, phonological elements such as a or i at the
end of a name cue the listener to associate it with the female gender (Cassidy et al., 1999). The
following section presents details of the pilot tests.
Pilot Tests
Round 1
In the first round, an intelligent assistant without facial features or an identifiable gender
was tested for various attributes and possibilities of interpersonal connections. Participants were
asked to rate the assistant on various attributes on a seven-point bipolar scale.
They were also asked open-ended questions such as: “what changes would you suggest
we make to the intelligent assistant?” A few themes were consistent across the responses
received, where people suggested making the assistant “more human” and “cute”. They also
suggested adding “color”, “interaction”, “eyes”, and “eyebrows” (see Figure 14).
Table 4. 1 Participant feedback on the design of the intelligent assistant (N = 30).

Attributes                                                                      Mean    SD
Trustworthy                                                                     3.63    1.21
Intelligent                                                                     3.36    1.37
Friendly                                                                        4.03    1.24
Happy                                                                           4.13    1.13
Not Silly                                                                       2.76    1.38
Human-like                                                                      1.83    1.38

Likelihood and evaluations of future interpersonal interaction
I would interact with this assistant if it was available to me on a website.    3.92    1.01
I would seek assistance from this assistant to navigate a website.              3.82    1.65
I would trust recommendations given by this assistant on a website.             4.17    1.82
Figure 4. 14 Round 1. A word cloud depicting user feedback on features to be added or edited to
the design of the assistant.
Round 2
Participant feedback was used to design another intelligent assistant. Some open-ended
responses suggested making the assistant “female”. Therefore, a one-factor (gender: male vs.
female vs. gender neutral) between-subjects experiment was run to gather feedback. It was
important that the assistant was perceived to be trustworthy and had a higher likelihood of user
engagement with it. Overall, the gender-neutral assistant did not do well on those parameters
compared to the female and the male assistants (Table 2). The female assistant Alana was
perceived to be more trustworthy (d = 0.27) than Charles. Users also agreed that they would
interact with her (d = 0.43) and use her suggestions (d = 0.39). It should be noted that the female
assistant was perceived as less intelligent compared to the male assistant Charles (d = -0.22).
However, the “perception” of intelligence was not used to select the assistant because
intelligence cannot be seen; it is shown. It was demonstrated by building it into the assistant’s
performance, interaction, and text, all of which are described below. Therefore, as a result of these
tests, and given the prevalence of female assistants in the commercial market, the female
intelligent assistant was selected (see Appendix A for images).
Table 4. 2 Participant feedback on the design of the intelligent assistant.

                                               Alana - Female   Charles - Male   Nimos - Gender Neutral
                                               (N = 19)         (N = 20)         (N = 20)
Attributes                                     M (SD)           M (SD)           M (SD)
Trustworthy                                    4.42 (0.67)      4.20 (0.93)      4.05 (0.97)
Intelligent                                    3.89 (0.72)      4.25 (0.83)      3.90 (0.89)
Friendly                                       4.32 (0.98)      4.20 (0.98)      4.20 (0.93)
Happy                                          4.16 (1.04)      4.45 (0.67)      4.10 (0.89)
Not Silly                                      3.37 (1.18)      3.55 (0.73)      3.75 (0.89)
Human-like                                     3 (1.41)         2.95 (1.47)      3.30 (1.38)

Likelihood and evaluations of future interpersonal interaction
I would interact with [name] if [she/he/it]
was available to me on a website.              5.47 (0.50)      5.10 (1.09)      5.25 (0.70)
I would ask [name] to help me win the game.    5.37 (0.81)      5.50 (1.07)      5.05 (0.80)
If [name] gave me any suggestions to help
me during a game, I would use them.            5.47 (0.75)      5.15 (0.85)      5.10 (0.94)
[Name] is probably more intelligent than
an average human.                              4.84 (1.39)      4.90 (1.37)      5.00 (1.05)
Building in Artificial Intelligence, Interaction & Assistance in System
A defining feature of an intelligent assistant is its ability to interact with users. Alana’s
interaction was modelled after commercially deployed intelligent assistants which use a
combination of text and push buttons to communicate with users on virtual interfaces.
However, just calling an entity an intelligent assistant is not enough from a user’s
perspective; there must be some proof for them to perceive it as such. Unsurprisingly, a
technology’s performance is a big factor in determining how people trust and use it (see Hoff &
Bashir, 2014). Research from the areas of automated decision-making (ADM) and human-robot
interaction has consistently shown that when people find a technology such as a robot or an expert
system to be reliable or high performing, they tend to trust it and, of course, use it (Bailey &
Scerbo, 2007; Madhavan & Wiegmann, 2007; Hancock et al., 2011). Arguably, high
performance leads to trust, which will invariably be linked to the extent to which advice from a
technology is accepted. Therefore, a technology must perform well and provide good assistance.
As mentioned earlier, this study is about looking into the future. The future will have
better and more intelligent assistants. To create the sense that Alana used AI and could assist
users, participants were told that she could help them select one door during the game. However,
that would only happen after round one was completed. This technique was used to allow
participants to have a sense of agency and ownership in their environment, where they were the
ones calling the shots; the intelligent assistant was only meant to help. Note that it was left up to
the participants to choose whether or not they wanted to ask Alana for help; this was not enforced
upon them. This was modelled on how intelligent assistants in the market, such as Siri or Alexa,
appear and function on interfaces such as browsers or screens. These intelligent assistants signal
to users that they are available but do not force users to use them. For example, Google Assistant
on an Android phone shows the following message: “Say ‘Hey Google’.” This serves as a
reminder to a user that an assistant is present; however, it does not force the user to take its help.
Since this study gave the users the option to take Alana’s help (or not) during
the game, the order in which the doors opened changed compared to the baseline. If a user
asked for Alana’s help in selecting a door, she named a door, albeit without mentioning its
points. If the user chose that door, they got 55 points. If the user did not choose that door, the 55
points were held until the user either selected the door recommended by Alana or opened their last
door. This ensured that a user got 55 points regardless of whether they took Alana’s help or not
during the game. However, the reader should note that in this study the doors unlocked and
locked had the same number of points for all the participants as those in the baseline condition.
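A small sketch of this holding rule, under the assumption that the remaining (non-recommended) point values were simply served in their scheduled order, is given below; the function and variable names are illustrative.

```python
# Sketch of the door-recommendation rule described above. The scheduling of the
# non-recommended point values is simplified; only the "55 points are held until
# the recommended door or the user's last pick" rule is shown.
def simulate_game(doors_opened, recommended_door, other_points):
    """Return per-round points: the 55-point door is held until the user opens
    the recommended door or reaches their final pick."""
    points, gave_55 = [], False
    queue = list(other_points)
    for i, door in enumerate(doors_opened):
        last_pick = (i == len(doors_opened) - 1)
        if not gave_55 and (door == recommended_door or last_pick):
            points.append(55)
            gave_55 = True
        else:
            points.append(queue.pop(0))
    return points

print(simulate_game(["A", "B", "C", "D"], "H", [30, 35, 40]))  # 55 arrives last:  [30, 35, 40, 55]
print(simulate_game(["H", "B", "C", "D"], "H", [30, 35, 40]))  # 55 arrives first: [55, 30, 35, 40]
```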
The following pages show how the virtual environment appeared to those in the AI
conditions.
Introduction to the Participants
The virtual system opened with the intelligent assistant introducing herself to the
participants. The purpose of this introduction was to let participants know that they were
interacting with an AI-enabled intelligent assistant. The script was written to represent the three
key features of intelligent assistants discussed in chapter two. It used phrasing that is rather typical
of intelligent assistants present in the commercial market (Figure 15): “Hey there! I am Alana,
your intelligent assistant. I use artificial intelligence which means that I learn from data to solve
problems, communicate, and give recommendations. You are going to play a game called Open
Sesame. I will assist you throughout this exercise. Sounds good?”
A time delay was added to the appearance of the “Yes, I understand.” button on the
bottom right of the page. This was done so that participants took a few seconds to actually read
the description before being allowed to move on. This was followed by Alana telling them that
rules and instructions were to be shown to them. Alana was not placed on any instruction pages.
Figure 4. 15 Alana introduces herself.
Intelligent Assistant and Attention Check
The intelligent assistant also appeared before the attention check. This was done to
maintain a sense of interaction with the participants.
Figure 4. 16 Alana and attention check.
Rules, instructions, and manipulations were the same as in the control conditions in terms of
the EA/ES and PA/PS manipulations. However, one layer of instruction was added to tell the
participants that Alana could help them select one door during the game. This manipulation
directly invokes two of the three core features of an intelligent assistant: AI and assistance.
Figure 4.18. Instructions on Alana’s availability during the game
Main Task Page
Once a participant was on the main task page, they could see Alana on the bottom right of
the page.
Figure 4. 17 Alana on the main task page.
If they were to click on it, they would be shown a message: “Sorry, I will need more data
before I can help you. Please check back after round 1.” Thus, participants could only take her
help after round one was complete.
Figure 4. 18 Reminding participants to check back after round 1.
Once round one was completed, Alana appeared to become active and told the participants:
“I can help you select a good door. Just ask!” This message was added to mimic how
intelligent assistants currently function and interact with users across web browsing applications
or physical spaces. They initiate interaction, or ping users that they are available, but they do not
push them to take the assistant’s suggestions. Thus, across these instances, a user was not forced
to take help from Alana in selecting a door.
Figure 4. 19 Alana and help options.
Participants had choices here. They could choose not to engage with Alana at all and continue
selecting doors. They could also click the button “I don’t need your help.” In that case, Alana
would show the message: “OK, I am still here if you need me.” This gave the participants the
option to take her help if they changed their minds. The intelligent assistant would also light up
when a user interacted with her. This feature was added to resemble commercially available
intelligent assistants such as Alexa, which light up when a user interacts with them.
Figure 4. 20 Alana reminds participants that she will be available to help.
If participants decided to take her help, she would suggest a door after a time
delay to signal that she was processing data.
Figure 4. 21 Alana recommends a door to the user.
Post-game Reminder of Alana’s Contribution
Once the game was over, participants were shown the same messages as in the control
conditions. Since some people won and others lost despite Alana guiding them to the best door,
it was possible that the win or loss would be attributed to the intelligent assistant. Such an
attribution would not be accurate, since both the AI and non-AI conditions showed the same doors
for the PA and PS conditions. It was also possible that the participants would forget how Alana
had helped them. Therefore, participants were reminded of what Alana did for them.
Here, two further possibilities had to be accounted for: a. a user asks Alana for a
recommendation but does not apply her advice, or b. a user never asks for her recommendation. If
either of these possibilities occurred, then Alana recommended or assigned the second-best door,
i.e., 50 points. The following messages were shown depending on the possibilities discussed
above.
Table 4. 3 Script used by Alana after the game was completed.

1. A user asked for help and chose the door recommended by Alana (outcome: PA or PS): “I am glad we got a chance to interact. If you look to your left, you’ll see that I recommended Door H with 55 points which was the best option.”
2. A user asked for help but did not choose the door recommended by Alana (outcome: PA or PS): “I am glad we got a chance to interact. If you look to your left, you’ll see that I recommended Door X with 50 points which was one of the best options.”
3. A user did not ask for any help (outcome: PA or PS): “We did not get a chance to interact. But just so you know, I would have recommended Door X with 50 points which was one of the best options.”
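The script selection in Table 4.3 can be sketched as a simple branch on two booleans; the message strings are quoted from the table, while the function itself is an illustrative reconstruction rather than the portal’s actual code.

```python
# Sketch of the post-game script selection shown in Table 4.3; message text is
# quoted from the table, and the function structure is illustrative.
def post_game_message(asked_for_help: bool, followed_recommendation: bool) -> str:
    if asked_for_help and followed_recommendation:
        return ("I am glad we got a chance to interact. If you look to your left, "
                "you'll see that I recommended Door H with 55 points which was the best option.")
    if asked_for_help:
        return ("I am glad we got a chance to interact. If you look to your left, "
                "you'll see that I recommended Door X with 50 points which was one of the best options.")
    return ("We did not get a chance to interact. But just so you know, I would have "
            "recommended Door X with 50 points which was one of the best options.")

print(post_game_message(asked_for_help=True, followed_recommendation=False))
```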
AI Information Nudge
Once the game was over, participants saw the same message from an anonymous player.
Here, we gave the participants the agency to decide whether they should “Respond” or “Ignore”.
If they clicked “Ignore”, then the game was over for them, and they headed to the post-game
survey. If they clicked “Respond”, then Alana would give them advice on how to respond to the
anonymous player.
a. Share the best information: “Looks like a new player wants some information regarding
the game. My intelligent opinion is that you should tell them Door [Name] and its [55] points.”
b. Share the worst information: “Looks like a new player wants some information regarding
the game. My intelligent opinion is that you should tell them Door [Name] and its [20]
points.”
c. Share nothing: “Looks like a new player wants some information regarding the game. My
intelligent opinion is that you should not say anything at all.”
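A hedged sketch of how these three nudge conditions could map onto Alana’s message templates is shown below; the dictionary keys and the {door} placeholder are illustrative assumptions.

```python
# Sketch of how the three nudge conditions map to Alana's message; the door
# letters are placeholders filled in at runtime in the real portal.
NUDGE_TEMPLATES = {
    "share_best": ("Looks like a new player wants some information regarding the game. "
                   "My intelligent opinion is that you should tell them Door {door} and its 55 points."),
    "share_worst": ("Looks like a new player wants some information regarding the game. "
                    "My intelligent opinion is that you should tell them Door {door} and its 20 points."),
    "share_nothing": ("Looks like a new player wants some information regarding the game. "
                      "My intelligent opinion is that you should not say anything at all."),
}

print(NUDGE_TEMPLATES["share_worst"].format(door="H"))
```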
Just like in the baseline conditions, participants had the chance to respond and share
information, or to submit without sharing. It is important for the reader to note that the participants
were under no time pressure and were not forced into making a decision; the intelligent assistant did
not push the participant by displaying additional messages. After they made their decision, they went
on to the post-survey page.
Figure 4. 22 Message received from a new player.
Figure 4. 23 Alana preparing to give advice on information sharing.
Figure 4. 24 Alana’s nudge for information sharing.
Figure 4. 25 Stepwise process for participants in the intelligent assistant experimental
conditions.
The Next Steps: Results
This chapter has detailed the main study procedures, the development of the virtual
experimental system, and other relevant features. The next chapter presents the results from
the baseline study (Resource environment: EAPA vs. ESPS) and the second study (2 [Resource
environment: EAPA vs. ESPS] x 3 [Intelligent assistant nudge: share best information vs. share
worst information vs. share no information]).
Chapter 5. Results = Study 1 (Baseline) + Study 2 (Intelligent Assistant)
Results are divided into two studies: the baseline/no-AI model (one factor: EAPA vs.
ESPS), and study two, which combines intelligent assistant nudges with resource environments
in a 2 (Resource environment: EAPA vs. ESPS) x 3 (AI nudge: share best vs. share worst vs.
share nothing) between-subjects factorial design. All tests were performed by bootstrapping
10,000 samples when applicable. Since this is a behavioral study in which individuals made choices
at various stages, sample sizes became uneven between conditions, and therefore Welch’s t-test
was used. (Reported t values are square roots of Welch’s statistic. It has been argued that Student’s
t-test should be replaced by Welch’s test because the former’s assumptions are too strict and hard
to meet, especially when sample sizes are small and uneven between groups; see Marie, Daniel, &
Christophe, 2017.)
For count (i.e., choice) data, the Chi-Square test of independence (where the expected
frequencies were at least 5 for 80% of the cells) was used when the following conditions were
met:
i. There were two categorical variables with two levels each.
ii. Observations were independent of each other.
Study methods and hypotheses were pre-registered at Open Science Framework.
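The analytic choices above can be illustrated with a short sketch. The study’s actual analysis software is not named here, so the use of scipy and numpy is an assumption; the rating data below are made-up placeholders, while the 2 x 2 counts reuse the Ignore/Respond frequencies reported later in Table 5.2.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
eapa = rng.normal(5.5, 0.8, size=63)      # placeholder manipulation-check ratings
esps = rng.normal(3.0, 2.0, size=63)

# Welch's t-test: unequal variances, suitable for uneven group sizes
t, p = stats.ttest_ind(eapa, esps, equal_var=False)

# Bootstrapped 95% confidence interval for the mean difference (10,000 resamples)
boot = [rng.choice(eapa, eapa.size).mean() - rng.choice(esps, esps.size).mean()
        for _ in range(10_000)]
ci = np.percentile(boot, [2.5, 97.5])

# Chi-square test of independence for 2 x 2 choice counts (Ignore vs. Respond),
# reusing the frequencies reported in Table 5.2
counts = np.array([[21, 42],    # EAPA: ignore, respond
                   [33, 30]])   # ESPS: ignore, respond
chi2, p_chi, dof, expected = stats.chi2_contingency(counts, correction=False)

print(f"Welch t = {t:.2f}, p = {p:.4f}; bootstrap 95% CI = {np.round(ci, 2)}")
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_chi:.3f}")
```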
Study 1. The Baseline Model
Procedure & Participants
This study tests how information sharing is affected as a function of EAPA or ESPS; this
is the baseline (or No AI, where “AI” refers to the intelligent assistant) condition. A total of 126
individuals, recruited from Amazon Mechanical Turk and paid $1.75 in compensation,
participated. They were provided with a link to log on to the web portal and were randomly
assigned to the EAPA or ESPS condition. Game principles and procedures have been described in
detail in chapter four.
Manipulation Checks
Manipulation checks were performed to test if the environmental abundance and scarcity
and personal abundance and scarcity were effective (see Table 1). Environmental abundance and
scarcity were measured by the item: In this game, there were enough points for all the players to
reach their target of 150. Personal abundance and scarcity were measured by the item: I
managed to collect the 150 points needed in this game. These items used a six-point Likert-type
scale to get self-report data where 1 = Strongly Disagree and 6 = Strongly Agree with a No
Opinion option offered at the end of the scale. A Welch’s t-test showed that those who were in
the EAPA condition were more likely to think that the environment had enough points for
everyone (M = 5.79, SD = 0.69, N = 63, t(10.04), p < 0.05, d = 1.76 compared to those in ESPS
(M = 3.05, SD = 2.07, N = 63). Similarly, participants were also more likely to agree that they
were able to collect the points they needed in EAPA (M = 5.77, SD = 0.745, N = 63, t(11.30), p <
0.05, d = 2.05) compared to those in ESPS (M = 2.72, SD = 1.95, N = 63).
Table 5. 1 Baseline/No AI: Manipulation checks for EAPA & ESPS

                                        EAPA                     ESPS
                                        N    M     SD            N    M     SD       t         df      d
In this game, there were enough
points for all the players to reach
their target of 150.                    61   5.79  0.60          63   3.05  2.07     10.04**   72.91   1.76
I managed to collect the 150 points
needed in this game.                    60   5.77  0.75          61   2.72  1.95     11.38**   77.38   2.05
Information Withholding
For the baseline study, information withholding was measured at two instances. The first
was the decision made by a participant to click “Ignore” to a new player’s message which asked
them to share information about the door with the most points. If the participants clicked
“Ignore”, the study ended for them and they were asked to fill out the post-study survey. Therefore,
“Ignore” counts as the choice to withhold or not share any information. Since clicking “Respond”
does not necessarily mean that they shared information, it was considered as openness to sharing.
Hypothesis 1 predicted that individuals would be more likely to withhold i.e., not share
information when they experience scarcity in both personal and environmental contexts. Results
show that this hypothesis was supported, as those in the ESPS condition (52.38%) were more
likely to ignore the message compared to those in EAPA (33.33%), χ² = 4.66 (df = 1, p < .05,
φ = 0.19; see Table 2 & Figure 1).
If participants clicked “Respond”, they were taken to the main information sharing
page (see Figure 1). At this juncture, participants still had the choice not to share any
information; this was counted as “Shared Nothing.” A combined analysis of “Ignore” and
“Shared Nothing” was also done to examine the differences between EAPA and ESPS in terms of
information withholding. Results showed that those in ESPS (57.1%) were more likely to
withhold information compared to individuals in EAPA (39.7%), χ² = 3.85 (df = 1, p = .05,
φ = 0.17).
Figure 5. 1 Baseline/No AI: Information withholding vs. openness to sharing.
Table 5. 2 Baseline/No AI: Information withholding vs. openness to sharing information

            EAPA                       ESPS
            Frequency    %             Frequency    %             χ²       df    φ
Ignore      21           33.33%        33           52.38%        4.66*    1     0.19
Respond     42           66.66%        30           47.61%
Total       63                         63
*p < 0.05
Data Coding and Description
The next step in this data analysis is focused on the nature of information sharing i.e., its
prosocial and antisocial dimensions (see chapter 3 for definition). A sender could share
information that could help the recipient do well or gain an advantage. However, they could also
send information which could negatively impact a recipient. In order to understand this better,
the data were coded and structured.
Participants could have offered information regarding one of eight doors: A, B, C, D, E,
F, G, and H. Each door contained 20, 25, 30, 35, 40, 45, 50 or 55 points. These were ranked in
the order of the door with the least number of points to the highest number of points: 1, 2, 3, 4, 5,
6, 7, or 8. If an individual decided to share no information, this was coded as 0. This procedure
was applied to the data produced by participants’ behaviors on the main information sharing
page, i.e., those who had selected “Respond” to a new participant’s request. Based on these values,
a door that unlocked 20 points was coded as 1, and a door with 55 points as 8; these can be
described as the worst and best doors, respectively. Based on participant behaviors, the following
categories were created: 0 = Shared Nothing, 1 = Shared Worst, and 8 = Shared Best. To illustrate
this further, consider an example where a participant sees the following points associated with each
door: [A = 55, B = 50, C = 45, D = 40, E = 35, F = 30, G = 25, H = 20].
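A minimal sketch of this coding scheme, using the example door-to-points mapping above (the dictionary and function names are illustrative), is:

```python
# Sketch of the coding scheme described above; names are illustrative, but the
# 0 / 1..8 coding follows the text.
DOOR_POINTS = {"A": 55, "B": 50, "C": 45, "D": 40, "E": 35, "F": 30, "G": 25, "H": 20}

# Rank doors from the fewest points (code 1) to the most points (code 8).
RANKS = {door: rank for rank, (door, _) in
         enumerate(sorted(DOOR_POINTS.items(), key=lambda kv: kv[1]), start=1)}

def code_response(shared_door):
    """Return 0 for sharing nothing, 1 for the worst door (20 pts), 8 for the best (55 pts)."""
    return 0 if shared_door is None else RANKS[shared_door]

print(code_response("A"))   # 8 -> Shared Best
print(code_response("H"))   # 1 -> Shared Worst
print(code_response(None))  # 0 -> Shared Nothing
```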
Shared Best - Prosocial Behavior
The first type of information behavior examined refers to sharing the piece of information
that would provide the most benefit to the recipient. This is prosocial information sharing. This
behavior was said to occur when a participant shared the door with the highest value, i.e., the one
with the most points. Giving information about this door was also the correct answer to the
information request and therefore also counts as the truth. For instance, if a participant shared
Door A, this can be stated as:
Ds = Dc = 8,
where Ds = the value of the door shared and Dc = the value of the door with the highest points,
i.e., the correct door. This will be referred to as “Shared Best” in terms of data presentation.
Shared Worst - Antisocial Behavior
Using the same logic as above, if the shared door is the one with the lowest value (Ds = 1), then
this is “Shared Worst”.
Shared Nothing - Lack of Prosocial Behavior
If a participant does not share anything, then this will be referred to as “Shared Nothing”.
Information Sharing - Distribution of Behaviors
Research question 1 asked how abundant and scarce resource environments affect
antisocial information sharing and the lack of information sharing. Results show that there is not
much difference in terms of withholding or sharing the worst information between those in EAPA
(2%) and ESPS (3%). However, there was a difference between EAPA and ESPS in that the
frequency of sharing the correct and best information was higher among those assigned to the
former (71%) compared to the latter (53%) (see Table 3). Although the data are ordinal in nature,
they were also examined on a continuous scale across conditions to address RQ1, which asked how
people share information under EAPA and ESPS. EAPA data showed the following statistics:
M = 6.76, SD = 2.63, Mdn = 8, Mode = 8, N = 42, skewness = -1.96 [0.36], 95% CI = 5.89-7.54;
ESPS: M = 6.26, SD = 2.62, Mdn = 8, Mode = 8, N = 30, skewness = -1.64 [0.42], 95% CI =
5.28-7.24 (see Table 3). In line with prior research on truth telling (see Serota & Levine, 2015;
Serota et al., 2010), the data were highly skewed towards the truth, i.e., when people shared
information, the frequency of giving correct answers was higher than that of any other type of
information sharing. This was true for both EAPA and ESPS.
An analysis including all the participants, including those who clicked Ignore, showed
that those in EAPA (47.61%) were significantly more likely to share the best information compared
to those in ESPS (25.30%), χ² = 6.71 (df = 1, p < .05, φ = 0.23).
Table 5. 3 Baseline/No AI: Descriptive statistics and percentages of sharing best, worst, and no
information by resource environments.

Experimental    Mode   Mdn   M        Skewness   CI       CI       Shared     Shared    Shared
Condition                    (SD)     (SE)       upper    lower    Best %     Worst %   Nothing %
EAPA            8      8     6.71     -1.98      0.95     0.47     71%        2%        9.5%
(N = 42)                     (2.63)   (0.36)                       (n = 30)   (n = 1)   (n = 4)
ESPS            8      8     6.26     -1.64      0.79     0.26     53%        3%        10%
(N = 30)                     (2.62)   (0.42)                       (n = 16)   (n = 1)   (n = 3)
Figure 5. 2 Baseline/No AI: Distribution of door values shared by resource environment. Shared
nothing = 0, shared the worst = 1, sharing the best (and accurate) = 8.
Study 2. Intelligent Assistant and Resource Environments
Participants & Procedure
A total of 356 individuals participated who were recruited from Amazon Mechanical
Turk and paid $1.75 in compensation. Participants were randomly assigned to a 2 (Resource
environment: abundant [EAPA] vs. scarce [ESPS]) x 3 (AI nudge: share best vs. share worst vs.
share nothing) between-subjects experiment. Participants logged on to the online portal via a link
to take part in the study. Since there is a risk that one person may participate more than once in
an online study, the web portal blocked any Amazon worker if they had been a part of any pilot
tests or the baseline study conducted for this project
56
.
Manipulation Checks
Manipulation checks were done to test whether the EAPA and ESPS stimuli produced the
intended effects in the intelligent assistant conditions. A Welch’s t-test showed that those who
were in the EAPA condition were more likely to think that the environment had enough points
for everyone (M = 5.44, SD = 1.05, N = 166; t = 14.23, p < 0.05, d = 1.76) compared to those in
ESPS (M = 2.84, SD = 2.11, N = 166). Similarly, participants were also more likely to agree that
they were able to collect the points they needed in EAPA (M = 5.60, SD = 0.79, N = 163;
t = 21.44, p < 0.05, d = 2.05) compared to those in ESPS (M = 2.27, SD = 1.83, N = 163).
The results are divided by AI nudge (share best, share worst, and share nothing). Within
each group, behaviors from both EAPA and ESPS conditions are discussed and presented.
56
The study advertisement informed Amazon workers prior to the sign-up that the portal will
reject them if they were found to have participated before.
116
AI Nudge – Share Best
Both the EAPA and ESPS conditions had an intelligent assistant who nudged participants to share
the best information by saying: “My intelligent opinion is that you share Door X and its 55
points.” Results show that this nudge pushed 77% of those in EAPA to share the best door,
compared to 90.24% in ESPS. Resultantly, the data were still highly negatively skewed, with
more people giving correct information than otherwise. Although not hypothesized, a test of
independence was conducted, which found no significant difference between EAPA and ESPS in
sharing the best information, χ² = 2.44 (df = 1, p = n.s., φ = 0.11).
Table 5. 4 AI nudge – share best: Descriptive statistics and frequency of sharing best, worst, and
no information.

Resource        Mode   Mdn   M        Skewness   CI       CI       Shared     Shared    Shared
Environment                  (SD)     (SE)       lower    upper    Best %     Worst %   Nothing %
EAPA            8      8     6.95     -1.98      6.29     7.62     77%        2%        2%
(N = 45)                     (2.21)   (0.35)                       (n = 35)   (n = 1)   (n = 1)
ESPS            8      8     7.60     -3.90      7.16     8.04     90.02%     2%        -
(N = 41)                     (1.39)   (0.36)                       (n = 37)   (n = 1)   (n = 0)
Figure 5. 3 AI nudge - share best: Distribution of door values shared by resource environment.
Shared nothing = 0, shared the worst = 1, sharing the best (and accurate) = 8.
AI Nudge – Share Worst
Hypothesis 2 predicted that individuals in ESPS would be more likely than those in
EAPA to share the worst information when nudged by an intelligent assistant to do the same.
EAPA and ESPS conditions had an intelligent assistant who nudged them to share the worst
information by saying: “My intelligent opinion is that you share Door X and its 20 points”.
In the EAPA condition, 25% of participants accepted the intelligent assistant’s nudge and
shared the worst, inaccurate information. The percentage of people who made the same choice in
ESPS was 27.91%. To conduct a test of independence, the data were coded such that each incidence
of sharing the worst information was compared against all other choices. There was no
significant difference between EAPA and ESPS in terms of sharing the worst information,
χ² = 0.53 (df = 1, p = n.s., φ = 0.53). This shows that EAPA and ESPS had a similar pattern of
sharing the worst and inaccurate information, and thus the null hypothesis could not be rejected.
Sharing the worst information noticeably affected the skewness of the distribution of
responses, with the data showing peaks for sharing both the best and the worst information. This
is because, in both the EAPA and ESPS conditions, people also shared the best information.
Overall, sharing accurate and helpful information decreased for both EAPA and ESPS when the
AI nudged them to share the worst rather than the best information.
Table 5. 5 AI nudge – share worst: Descriptive statistics and frequency of sharing best, worst,
and no information.

Resource        Mode   Mdn   M        Skewness   CI       CI       Shared      Shared     Shared
Environment                  (SD)     (SE)       lower    upper    Best %      Worst %    Nothing %
EAPA            8      8     5.64     -0.82      4.62     6.65     51.28%      25%        2%
(N = 39)                     (3.12)   (0.37)                       (n = 20)    (n = 10)   (n = 1)
ESPS            8      7     7.60     -0.42      4.12     6.06     41%         27.91%     2.3%
(N = 43)                     (1.39)   (0.36)                       (n = 18)    (n = 12)   (n = 1)
Figure 5. 4 AI nudge – share worst: Distribution of door values shared by resource environment.
Shared nothing = 0; 1 = shared worst, and 8 = shared best.
AI Nudge – Share Nothing
Hypothesis 3 predicted that individuals in ESPS would be more likely than those in
EAPA to share no information when nudged by an intelligent assistant to do the same. The EAPA
and ESPS conditions had an intelligent assistant who nudged them to share no information
by saying: “My intelligent opinion is that you share nothing.” In EAPA (N = 49), 38% shared no
information, and in ESPS (N = 42) the same behavior was observed in 35.4% of decision-makers.
The data were also coded such that each incidence of sharing nothing was compared against all
other choices, which were collapsed into one level. There was no significant difference between
EAPA and ESPS in terms of choosing to share no information, χ² = 0.09 (df = 1, p = n.s.,
φ = 0.03). This shows that both EAPA and ESPS accepted the intelligent assistant’s nudge to the
same extent. Given these results, the null hypothesis could not be rejected. Thus, these findings
are similar to those for the condition where the nudge was to share the worst information.
When information was shared, it was more negatively skewed for those in ESPS than
EAPA since the incidence of sharing best information was higher for the former than the latter.
Table 5. 6 AI nudge – share nothing: Descriptive statistics and frequency of sharing best, worst,
and no information.

Resource        Mode   Mdn   M        Skewness   CI       CI       Shared      Shared     Shared
Environment                  (SD)     (SE)       lower    upper    Best %      Worst %    Nothing %
EAPA            8      4     4.02     -0.06      2.97     5.06     32%         4%         38%
(N = 49)                     (3.12)   (0.34)                       (n = 16)    (n = 2)    (n = 19)
ESPS            8      8     4.90     -0.50      3.71     6.09     54.76%      4.72%      35.4%
(N = 42)                     (1.39)   (0.36)                       (n = 23)    (n = 2)    (n = 15)
Figure 5. 5 AI nudge – share nothing: Distribution of door values shared by resource
environment. Shared nothing = 0, shared the worst = 1, sharing the best (and accurate) = 8.
Figure 5. 6 Sharing of best, worst, and no information by AI nudge and the environment resource
condition.
Who Accepts and Rejects Nudges?
Hypothesis 4 predicted that those in EAPA would be less likely to accept an intelligent
assistant’s nudges compared to those in ESPS. Results show that within EAPA (N = 133), 48.1%
accepted the intelligent assistant’s nudge compared to 50.8% in ESPS (N = 126). However, the
acceptance of nudges did not vary significantly by condition, χ² = 0.185 (df = 1, p = n.s.,
φ = 0.02). Therefore, the null hypothesis could not be rejected.
Replicating Information Withholding for Intelligent Assistant Conditions
For the baseline model, it was predicted that information withholding would be more likely
to occur in the ESPS than in the EAPA condition. To conduct this analysis, the frequencies of
“Ignore” and “Respond” choices were analyzed. The hypothesis was supported, and significant
effects were found. In order to replicate the findings, the data were also analyzed for the EAPA
and ESPS conditions that had an intelligent assistant. Results show that a higher percentage of
individuals in the ESPS condition (30.80%) ignored the message, i.e., withheld information, than
those in EAPA (23.60%). However, this difference was not significant, χ² = 2.33 (df = 1,
p = n.s., φ = 0.08). To understand this finding better, additional tests and analyses were
conducted.
The data from the baseline and intelligent assistant studies were collapsed together and split by
EAPA and ESPS. This showed a significant difference such that more people were likely to
withhold information in ESPS (36.30%) compared to EAPA (26.2%), χ² = 5.78, df = 1, p < .05,
φ = 0.11. However, when the data were analyzed by baseline versus intelligent assistant, ignoring
resource environment as a variable, an overall significant effect was found: those in the baseline
were more likely to withhold information (42.90%) compared to those who had an intelligent
assistant available to them (27.0%), χ² = 10.54, df = 1, p < .05, φ = 0.14. Thus, it is very
interesting to observe that being in EAPA or ESPS with an intelligent assistant may have caused
a behavioral change in information withholding.
Table 5.7 Information withholding and sharing by resource condition and intelligent assistant across both studies.

Experimental Condition | Ignore % | Respond % | χ² | df | φ
EAPA (Baseline + AI), N = 237 | 26.2% | 73.80% | 5.78** | 1 | 0.11
ESPS (Baseline + AI), N = 245 | 36.30% | 63.70% | | |
Baseline (EAPA + ESPS), N = 126 | 42.9% | 57.1% | 10.54** | 1 | 0.14
Intelligent Assistant (EAPA + ESPS), N = 356 | 27.2% | 72.8% | | |
*p < .10, ** p < .05, + p < .01
Figure 5.7 Information withholding and openness to sharing by baseline/no AI model and intelligent assistant segmented by resource environments.
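As a rough cross-check on the collapsed comparisons above, the chi-square values in Table 5.7 can be approximately reproduced by converting the reported percentages back into cell counts. The sketch below is illustrative only: the counts are reconstructed from the rounded percentages and Ns rather than taken from the raw data, and SciPy's standard contingency-table test (without continuity correction) is used, so small discrepancies from the reported values are expected.

from math import sqrt
from scipy.stats import chi2_contingency

# Rows are conditions; columns are [ignored, responded].
# Counts are reconstructed from the percentages and Ns reported in Table 5.7.
eapa_vs_esps = [[62, 175],    # EAPA (baseline + AI), N = 237, ~26.2% ignored
                [89, 156]]    # ESPS (baseline + AI), N = 245, ~36.3% ignored
baseline_vs_ai = [[54, 72],   # Baseline (EAPA + ESPS), N = 126, ~42.9% ignored
                  [97, 259]]  # Intelligent assistant (EAPA + ESPS), N = 356, ~27.2% ignored

for label, table in [("EAPA vs. ESPS", eapa_vs_esps), ("Baseline vs. AI", baseline_vs_ai)]:
    chi2, p, df, _ = chi2_contingency(table, correction=False)
    n = sum(sum(row) for row in table)
    phi = sqrt(chi2 / n)  # effect size for a 2 x 2 table
    print(f"{label}: chi2({df}) = {chi2:.2f}, p = {p:.4f}, phi = {phi:.2f}")
# Both tests land close to the statistics reported in Table 5.7 (5.78 and 10.54).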
Summary
In the above sections, results from two studies were discussed.
a. The baseline/no AI study detailed information sharing and withholding patterns by EAPA and ESPS. This study served as a baseline model where no intelligent assistant was present. It detailed how people withheld information, shared the best vs. the worst information, and deviated from sharing accurate information. Overall, it was found that people were more likely to withhold information and not share the best information when they were under the ESPS condition than the EAPA condition.
b. Study two detailed how an intelligent assistant's nudges (to share the best, share the worst, or share no information) under the EAPA and ESPS conditions affected information withholding, the type of information shared, and the degree to which deviation from the truth or the best information occurred. Overall, we see that nudges by an intelligent assistant seem to work across the EAPA and ESPS conditions. However, sharing the best information is higher for ESPS and seems to be driving the overall significant effect.
c. Additionally, although EAPA rejected the nudges more often than ESPS, this difference is not significant.
Comparing Baseline and Intelligent Assistant Conditions: Changes in Information Sharing
The next series of analyses is concerned with comparing the differences in patterns of
information sharing between the baseline and intelligent assistant conditions.
Benevolence
Hypothesis 5 predicted that individuals would be more likely to share the best information compared to the baseline when nudged by an intelligent assistant to do the same, regardless of the resource environment. An intelligent assistant was used as an intervention to see if it could nudge people to change their behaviors compared to the baseline version. There was not a substantial difference in sharing the best information between the two groups (baseline vs. intelligent assistant) in the EAPA conditions. However, those in ESPS showed a remarkable difference: when nudged by an intelligent assistant, 90.24% of people shared the best information compared to only 53% in the baseline model. The overall percentage change for ESPS was 131.35% compared to merely 16.66% in EAPA. Ignoring the resource environment, the incidence of giving accurate and best information was significantly higher when nudged by an intelligent assistant to do the same (83.7%) compared to the baseline model (63.9%), χ² = 8.15 (df = 1, p < 0.01, φ = 0.22). Given these findings, the null hypothesis was rejected. Thus, there is some evidence that an intelligent assistant can nudge people to be more benevolent--however, it should be noted that this seems to vary by their resource environments.
Table 5.8 Sharing best information in the baseline and AI nudge – share best conditions divided by resource environments.

EAPA | | ESPS |
Experimental Condition | Shared Best | Experimental Condition | Shared Best
AI Nudge – Share Best (N = 45) | 77% (n = 35) | AI Nudge – Share Best (N = 41) | 90.24% (n = 37)
Baseline – No Nudge (N = 42) | 71% (n = 30) | Baseline – No Nudge (N = 30) | 53% (n = 16)
Percentage Change | 16.66% | Percentage Change | 131.35%
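Although not stated explicitly in the text, the percentage changes in Tables 5.8, 5.9, and 5.10 appear to be computed on the counts of participants making each choice rather than on the percentages themselves; under that assumption, the EAPA figure above follows as:

\[
\%\,\text{change} \;=\; \frac{n_{\text{nudge}} - n_{\text{baseline}}}{n_{\text{baseline}}} \times 100
\;=\; \frac{35 - 30}{30} \times 100 \;\approx\; 16.66\%,
\]

with the ESPS column following the same logic.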
Sabotage [57]
Can an intelligent assistant nudge people to be malicious? Hypothesis 6 predicted that individuals would be more likely to share the worst information compared to the baseline when nudged by an intelligent assistant to do the same, controlling for the resource environment. The baseline model showed that the incidence of such behavior was very low in both the EAPA and ESPS conditions. It turns out that when an intelligent assistant nudged people to share inaccurate, worst-case information with others, both EAPA and ESPS showed a much higher incidence of making such choices. The percentage change was 900% for EAPA and 1200% for ESPS (Table 5.9). This behavior altered the data distribution such that the negative skew decreased. Thus, unlike former studies (e.g. Serota et al., 2010) and the baseline findings from this project, the data reflected a higher incidence of lying. Ignoring the resource environment, the incidence of giving inaccurate and harmful information was significantly higher when nudged by an intelligent assistant to do the same (26.83%) compared to the baseline model (2.78%), χ² = 16.85 (df = 1, p < 0.001, φ = 0.33). The null hypothesis was rejected.
[57] Beastie Boys (Diamond, Horvitz, & Yauch, 1994). Credit should be given where credit is due.
Table 5.9 Sharing worst information in the baseline and AI nudge – share worst conditions by resource environments.

EAPA | | ESPS |
Experimental Condition | Shared Worst % | Experimental Condition | Shared Worst %
AI Nudge – Share Worst (N = 39) | 25.64% (n = 10) | AI Nudge – Share Worst (N = 43) | 27.91% (n = 12)
Baseline/No AI (N = 42) | 2% (n = 1) | Baseline/No AI (N = 30) | 3% (n = 1)
Percentage Change | 900% | Percentage Change | 1200%
Silence
Hypothesis 7 predicted that individuals would be more likely to not share information compared to the baseline when nudged by an intelligent assistant to do the same, controlling for the resource environment. In the baseline condition, the incidence of not sharing any information after a participant had indicated "Respond" was very low. When an intelligent assistant nudged people to not share any information in the EAPA and ESPS conditions, both showed a very similar pattern of information withholding, where 38.77% of the former and 38.09% of the latter did not share any information. Compared to the baseline, this yielded percentage changes of 375% and 433.33% respectively (Table 5.10). Ignoring the resource environment, the incidence of giving no information was significantly higher when nudged by an intelligent assistant to do the same (21.4%) compared to the baseline model (4.3%), χ² = 15.96 (df = 1, p < 0.001, φ = 0.31). This led to the rejection of the null hypothesis.
Table 5.10 Sharing no information in the baseline and AI nudge – share none conditions by resource environments.

EAPA | | ESPS |
Experimental Condition | Shared Nothing | Experimental Condition | Shared Nothing
AI Nudge – Share Nothing (N = 49) | 38.77% (n = 19) | AI Nudge – Share Nothing (N = 42) | 38.09% (n = 16)
Baseline/No AI (N = 42) | 9.50% (n = 4) | Baseline/No AI (N = 30) | 10% (n = 3)
Percentage Change | 375% | Percentage Change | 433.33%
Across all conditions, an intelligent assistant was able to nudge behaviors successfully compared to a baseline. This is consistent with hypotheses 5, 6, and 7. Interestingly, those in EAPA also showed a change in behaviors compared to the baseline, but the change was larger for sharing the worst information or not sharing anything.
Figure 5.8 Incidence of best, worst, and no information sharing by intelligent assistant and baseline conditions segmented by resource environments.
The Next Steps: Discussion
This chapter has detailed results from two iterations: the baseline study, which was a one-factor (Resource environment: EAPA [abundant] vs. ESPS [scarce]) between-subjects design. This was followed by a 2 (Resource environment: EAPA [abundant] vs. ESPS [scarce]) x 3 (Intelligent assistant nudge: share best vs. share worst vs. share nothing) design. The next chapter discusses these findings in detail.
Chapter 6. Discussion
This chapter first explicates the findings from the baseline/no AI (Resource environment: EAPA [abundant] vs. ESPS [scarce]) study. This is followed by a discussion of the results from the second study, a 2 (Resource environment: EAPA [abundant] vs. ESPS [scarce]) x 3 (Intelligent nudge: share best vs. share worst vs. share nothing) design. Implications for theory, research, and application, and the limitations of these studies, are presented at the end.
Baseline Model: Resource Environments’ Effects on Information Sharing
The goal of this first-of-its-kind study was to understand how abundant and scarce resource environments affect prosocial and antisocial behaviors as they pertain to information. The particular type of information this project looks into is information that, when received, can help or hurt a recipient. Within the context of this dissertation, the act of sharing information which can help or hurt a recipient in achieving their goal, in response to a request for information, is described as prosocial or antisocial behavior respectively. The instance where a decision-maker chooses to not share any information, i.e. withholds information, is characterized as simply a lack of prosocial behavior. The baseline study investigated the effects of abundant (EAPA) and scarce (ESPS) environments on the incidence of information withholding, i.e. choosing not to share information by ignoring requests for it.
The first hypothesis predicted that people in scarce resource environments would be more likely to withhold information. To investigate this hypothesis, the study design allowed participants to either "respond" to or "ignore" a message which requested information that could help them gain the most advantage. This implies a straightforward decision, i.e. making a choice between two alternatives: explicitly ignore the request or respond to it. The results showed that, in response to a request, those in ESPS (52.38%) were more likely to make the choice of not sharing any information compared to 33.33% in EAPA. This choice was deliberate and intentional, and therefore can be referred to as information withholding, i.e. the decision to retain information [58]. Thus, when decision-makers chose to not share any information, i.e. they refrained from action, it shows a lack of prosocial behavior on their part.
The core argument that led up to this hypothesis was that navigating a scarce environment is a cognitively taxing experience (see Mani & Mullainathan, 2013) which leads people to conserve physical and mental effort. Thus, when people are asked to do something which requires effort, in this case sharing information, they choose to ignore the request. While the conservation-of-effort account provides us with a useful explanation, there may be another way to think about this. The experience of scarcity is one of loss or deprivation, and therefore it is a good strategy for any decision-maker to limit future "losses" by saving something now or not sharing it with others. However, in the context of this study, the decision to not share information does not bring any benefit or loss to a participant. Furthermore, keeping information to oneself does not really save it since it is an intangible resource. Yet, people still chose to withhold it. It is possible that a false sense of rationality may have emerged where information withholding was equated with a sound strategy in a scarce resource environment, or information may have been treated as a tangible resource which could be saved.
In this study, participants could also choose to share information after they had shown openness to sharing by clicking "Respond". This took them to a page where they could see all the options and enter any information they liked. They also had a chance to not share any information. This design ensured that people always had a choice to change their minds and did not feel forced to share information. Additionally, it also gave participants a chance to understand the consequences of their actions since all the doors and their points were visible to them. At this stage, some participants made the choice to not share any information. The incidence of this behavior was also counted as information withholding. The data were analyzed to account for these instances of information withholding, and results show that of all the participants in ESPS, 57% chose to share nothing at all compared to 39.68% in EAPA. Again, this ties to the theoretical explanations provided earlier--scarcity is a taxing experience which increases conservation of effort, which is reflected in not sharing anything. Additionally, people may have chosen to not share information--even if keeping it provided them no material benefit--by engaging in the naive rationalization that it could serve them some purpose later on.
[58] Note that information withholding is conceptually different from a mere "lack" of information. A lack of information can be a product of information withholding, but may also be attributable to other factors such as medium issues, comprehension on the part of the recipients, etc.
Interestingly, these findings explicitly take on the under-studied factor of resource abundance and its effects on prosocial and antisocial behaviors. In the past, abundance in an environment was shown to have negative consequences as it increased unethical behaviors for personal gain (Gino & Pierce, 2009). In chapter three, it was argued that having abundance in an environmental context is different from having abundance in the personal context. However, the Gino and Pierce (2009) research design might have created a condition where there was abundance in the environment, but not for the person. From the perspective of equity theory, this could have resulted in unethical behavior (Adams, 1963). In this study, both the environmental and personal resource contexts had abundance, i.e. a condition of EAPA. This yielded remarkably different findings as more people engaged in prosocial behaviors [59].
[59] This is being said while fully recognizing that the behavior in the Gino and Pierce (2009) study is different from the ones investigated here.
If a participant chose to share information, this behavior could take on prosocial or antisocial forms. In the context of this study, antisocial behavior referred to sharing the kind of information which could hurt a recipient in the pursuit of their goals. On the flip side, prosocial
behavior referred to sharing information which helped the recipient. The participant received a
request to share information about the door with the “most points.” For a sender, there was only
one correct answer to this question, and this answer would yield the highest payoff or benefit to
the recipient. However, the decision-maker also had knowledge of other options which varied in
terms of points and thus, in the extent to which their knowledge could help or hurt a recipient. If
a sender gave information about a door with the least number of points, then that was a deliberate
choice intended to hurt the recipient.
The results show that, of the information which was actually shared, a majority was accurate. This resulted in a very skewed distribution where a higher percentage of people, i.e. 71% in EAPA and 53% in ESPS, shared the best information. It was also observed that the number of people who shared the best information was lower in ESPS than in EAPA. The data distribution is in line with prior research which argued and demonstrated that lying is not as prevalent as we think, and most people tell the truth most of the time (Serota et al., 2010). In the context of this study, we see that reflected as well, especially since telling the truth and being benevolent incurs neither benefit nor loss to a decision-maker.
To further examine the difference between EAPA and ESPS in sharing the best and accurate information, the data were analyzed by combining the participants who had clicked "Ignore" with those who shared nothing; under this coding, 47.61% of those in EAPA and only 25.30% of those in ESPS shared the best information. In terms of information withholding, 39.68% of EAPA and 57% of ESPS engaged in such behaviors. The moral of the story is simple: scarce resource environments increased information withholding and decreased the incidence of sharing accurate and helpful information. This was not true for an abundant resource environment, where the incidence of information withholding was lower. These findings tie into prior research which shows that scarcity often decreases the occurrence of prosocial behaviors (Prediger et al., 2014).
The worst information was conceptualized as information which would hurt the recipient
the most. In this case, it was the door with the least number of points. The results show that the
incidence of such behavior was 2% and 3% in EAPA and ESPS respectively. In this study,
engaging in such a behavior had no benefit to the self. Furthermore, the sender had no
knowledge of the recipient which removed the possibility that maliciousness was extended due
to poor interpersonal relationships (e.g. disliking another person), sense of threat, or competition.
Therefore, the question is: why would someone share the worst information, which could hurt someone, especially when there is no benefit to the self and the recipient is unknown?
The incidence of senseless malicious or antisocial behavior is often low in a population compared to other types of behaviors. Variations in the base rate of malicious behaviors could be a function of personality disorders, geographic location, and/or types of behaviors (e.g. robbery vs. homicide). In the context of this study, antisocial behavior is said to occur when someone gave information which would hurt a recipient's chances of success or goal attainment the most. Determining a base rate for such a specific type of behavior is very hard. However, if we need to find some kind of a "real-world" base rate of malicious information sharing, then it might be interesting to look at the prevalence of Antisocial Personality Disorder (ASPD). People with ASPD have a tendency to engage in criminal and irresponsible behaviors, and they disregard others' feelings without remorse (Fisher & Hany, 2019). In the United States, the prevalence of antisocial personality disorder is roughly between 3 to 5% (Werner et al., 2015). In the case of the baseline study, results show that the overall incidence of this behavior was only 2.5% (combined average of EAPA and ESPS). If we assume that there is a link between the occurrence of antisocial behavior and personality traits--since other reasons do not hold much ground--then perhaps it can be suggested that the base rate of sharing information which would hurt a recipient the most hovers close to the incidence of ASPD in the United States. This is not to say that those who engaged in malicious behavior had ASPD, but rather that the incidence of malicious behavior corresponds to the prevalence of ASPD.
The above findings show us a very interesting and intuitively appealing pattern. Resource abundance increased openness to sharing information, and resource-scarce environments encouraged information withholding. Furthermore, those who initially showed openness to sharing information [60] had a chance to actually share it: the incidence of sharing the best information (prosocial behavior) was the highest, followed by not sharing anything (lack of prosocial behavior) and sharing the worst information (antisocial behavior), across both abundant and scarce resource environments. This can be written as:
Sharing the best information (prosocial behavior) > not sharing/information withholding > sharing the worst information (antisocial behavior)
[60] This is excluding those who did not agree to respond to a message.
Although the overall frequency of these behaviors varied by resource environment, where more people in EAPA shared information compared to those in ESPS, the above pattern remained the same. To an extent, this makes sense. When people are asked for information that can help others, and if giving this information brings no loss to them, then compliance with such a request will be high. This reflects the idea that when lying, i.e. willfully misrepresenting, serves no purpose, many people just tell the truth (see Levine, 2014). This behavior was followed by refraining from action, i.e. not saying anything at all, and then antisocial behavior. For most people who have a choice, refraining from an action is much easier than deliberately hurting someone, especially when there is no good reason. This is demonstrated here as well, where more people simply refrain from action than explicitly hurt others.
Extrapolation to Other Contexts
At a larger level, and outside of the realm of the current study, these findings are scratching the surface of a much bigger issue: how variation or differences in environmental and personal resource contexts affect interpersonal relationships, trust, information exchange, and economic/material outputs. Research consistently finds that more people in the Nordic countries tend to say that they trust each other (Ortiz-Ospina & Roser, 2016). Why may this be so? The difference between the personal and environmental resource contexts may not be large amongst the people in those countries. The Nordic states are rich and generate more than enough capital for all their people. The people themselves also live in abundance, as reflected by their high income (OECD Better Life Index, n.d.). Arguably, there is abundance in both the environmental and personal resource contexts for most people in these countries, which reflects an abundant environment. In these environments, most people are not as taxed and perhaps not too worried about diminishing resources, all of which increases prosocial behaviors and interpersonal trust.
Compare this to other countries such as India or Brazil, where research consistently finds lower levels of trust. On the surface, these countries have massive economies; however, on a micro-level, most people are poor. For instance, research from economic games done in Indian villages has shown that people engaged in spiteful behavior and reduced other people's rewards consistently--even when it decreased their own payoff (Fehr et al., 2008). In some of the literature reviewed in chapter three, a similar pattern was observed in a Namibian village where people showed more spite and reduced the other person's reward (Prediger et al., 2014). Both the Indian and Namibian research sites can be described as ESPS even though these countries are radically different in cultural, geographical, and racial dimensions. It is thus conceivable that scarce environments can increase antisocial behaviors or reduce prosocial behaviors [61].
Although this is a macro-level discussion, a similar rationale can be applied to other decision-making systems such as organizations, teams, and families. It should be said here that "scarce" resource environments do not mean that there is an explicit show of poverty or low income--but rather a difference between one's current level of resources and some higher or more desirable reference point (see chapter three). Thus, organizations which "seem" to be abundant because they signal resource abundance via grand architecture could also have high levels of interpersonal mistrust and/or a lack of trust amongst their members if those members perceive scarcity (and/or competition).
Summary
Overall, this study has found that the incidence of information withholding was lower, and openness to information sharing was higher, in EAPA. However, in ESPS, an opposing pattern was observed where fewer people showed openness to sharing information and more made the choice of withholding information. Of those who had initially shown openness to sharing information, those in EAPA had a higher incidence of giving accurate and the best information compared to ESPS. These findings contribute to the literature not only in terms of explicating the concepts of abundance and scarcity, but also in terms of how our behaviors pertaining to information are affected by our environments.
[61] It is important to note that religious and cultural practices along with family values affect prosocial behaviors and, therefore, can reduce/limit or amplify the effects of resource environments.
Intelligent Assistants in Resource Environments: Effects on Information Sharing
The baseline study was designed to help us understand the effects of resource
environments on prosocial and antisocial information sharing. The findings gave us an idea of
how abundant resource environments create higher instances of information sharing compared to
scarce resource environments. A second study was done to see how an intelligent assistant can
serve as an intervention in these resource environments to nudge humans into making prosocial
or antisocial decisions. The goals of this study were: a. to investigate how we can increase information sharing and decrease information withholding in resource-scarce environments, and b. to find the extent to which an intelligent assistant can nudge people to engage in malicious behaviors across abundant and scarce resource environments.
Past research on the effects of algorithms/ADM/AI investigated how people prefer machines to humans or vice versa (e.g. Araujo et al., 2020; Sundar & Kim, 2019; Logg et al., 2019). However, these studies did not incorporate or look into how AI affects our decisions that affect others. Furthermore, most research in the area of the effects of AI-enabled technology on human behaviors is decontextualized in nature, as it does not explicitly factor in environmental parameters which can affect how people deal with and are influenced by technology. The current study addresses these issues and contributes to the literature by investigating how an intelligent assistant can nudge humans into making prosocial and antisocial choices that affect others, and the role of abundant and scarce resource environments in these behaviors.
To conduct this study, an intelligent assistant interacted with and helped humans to navigate abundant and scarce resource environments. The study concept and the variation of abundance and scarcity were the same as in the baseline study. After finishing the game, the users received a message from an anonymous player who asked them to share the door with the most points. Just like in the baseline study, users could click "Ignore" or "Respond" to this message. If they chose the former option, the study ended for them. However, if the users clicked "Respond", i.e. showed openness to sharing, then they were taken to an information sharing page. At this stage, an intelligent assistant nudged them to: a. share the best information, b. share the worst information, or c. share no information. It should be noted that the assistant merely provided a nudge and did not force the participants to make a choice.
Users in abundant (EAPA) and scarce (ESPS) resource environments were nudged to share the best information with others. It was theorized that those in scarcity would be more cognitively taxed, and therefore more likely to accept nudges from the "cognition-of-sorts".
Results showed that this nudge encouraged 90.24% of people in ESPS to give the best
information. However, this effect was 77% for those in an abundant resource environment. To
put things into perspective, it is interesting to note here that at the baseline level, those in
abundant environments were already sharing the best information at 71%. With a nudge, they
only went up to 77% and this was not a significant increase from the baseline. However, those in
scarce resource environments jumped from 53% to a whopping 90.2%. This is a substantial
increase and it provides some evidence that scarce environments tax cognition which makes it
easier for people to accept nudges from other cognition-of-sorts in the environment i.e. the
intelligent assistant. However, what is more intriguing is that although the nudge was to do something prosocial, those in EAPA did not reach the same level as ESPS in the frequency of accepting this nudge. For instance, 9% of those in ESPS and 22% of those in EAPA rejected the nudge to share the best information. Although the difference between 9% and 22% is not significant, it hints that EAPA and ESPS may approach an intelligent assistant's nudges differently. Could this mean that those in EAPA are simply more resistant to nudges than those in ESPS? If yes, then we should find a lower incidence of accepting nudges for EAPA compared to ESPS for the nudges to share the worst and no information.
Hypothesis 2 predicted that those in ESPS would be more likely to accept the nudge to
share the worst information compared to EAPA. The theoretical rationale was the same as above
where it was argued that those ESPS would be more cognitively taxed, and hence, more
accepting of the intelligent assistant. Contrary to the prediction, the incidence of sharing the
worst information was similar in EAPA (25%) and ESPS (27.91%). These results hint that
EAPA may not be as immune to the nudges of an intelligent assistant as previously thought.
Compare these results to the baseline study where 2.3% in EAPA and 3.33% in ESPS shared the
worst information. This is a significant and serious finding as it shows that 25% of people in a system can be encouraged to be malicious simply because of a nudge, compared to a baseline of only 2.3%. Despite the rather depressing findings, one could take the liberty here to
say that the data are beautiful. The results from the baseline study show that the difference
between EAPA and ESPS is small in terms of sharing the worst information, but EAPA had a
slightly lower incidence of malicious behavior compared to ESPS (2.3% vs. 3.33%). This pattern
reemerged for the nudge (share worst) condition where the difference between EAPA and ESPS
in terms of malicious behavior is small. However, the incidence of sharing the worst information
was slightly lower in EAPA compared to ESPS (25% vs. 27.91%). These data have proven to be
a bit of a stumbling block. Those in EAPA seem to accept the antisocial nudge as much as those in ESPS, but they do not extend the same courtesy to the prosocial nudge.
Hypothesis 3 predicted that individuals in ESPS would be more likely than those in
EAPA to share no information when nudged by an intelligent assistant to do the same. Again,
contrary to the prediction, the difference between EAPA and ESPS in terms of choosing not to share information was negligible. In EAPA, the incidence of choosing to not share information was 38.77%, and for ESPS it was 38.09%. If we compare this behavior with the results from the baseline study, a similar pattern holds: the difference between EAPA and ESPS was also negligible, with 9.5% and 10% respectively not sharing.
Hypothesis 4 predicted that those in EAPA would be less likely to accept nudges from an intelligent assistant compared to those in ESPS. Results showed that although EAPA rejected the nudges more often than ESPS, the difference between the two groups was not significant. Any difference between these groups is primarily attributable to the higher incidence of those in EAPA rejecting the nudge to share the best information. Hypotheses 5, 6, and 7 predicted that individuals would be more likely to share the best, worst, and no information when nudged by an intelligent assistant to do the same, controlling for the resource environments. The results showed significant effects, and therefore the three null hypotheses were rejected. However, the interesting thing to note here is that amongst all the nudges, the biggest percentage change came for sharing the worst information, where EAPA showed a 900% increase and ESPS a 1200% increase. This is also attributable to the fact that, at the baseline, this was the least occurring behavior.
Overall, people tended to accept nudges that encouraged benevolence which was
followed by inaction (i.e. sharing no information), and then maliciousness. This could be
described as:
Sharing the best information (prosocial behavior) > not sharing/information withholding
> sharing worst information (antisocial behavior).
Interestingly, these behaviors occurred in the same order in the baseline study. Overall,
we see that an intelligent assistant was able to push the distribution of behaviors to reflect its
nudge compared to a baseline.
The results from study two were also analyzed to replicate whether more people "ignore", i.e. explicitly withhold information, when they are in ESPS compared to EAPA. Surprisingly, although ESPS (31%) had a higher incidence of choosing "Ignore" than EAPA (24%), the difference between them had become smaller and insignificant compared to the baseline. To understand the data better, more analyses were conducted. The data were collapsed across the two studies by EAPA and ESPS. This showed that ESPS withheld more information than EAPA (Table 5.7). However, when the data were split by baseline and intelligent assistant, results showed that the incidence of "Ignore" at baseline was 42.9% but 27.2% for the intelligent assistant conditions. This was an unexpected discovery. It has the potential to explain why EAPA and ESPS did not differ much in terms of accepting the nudges from an intelligent assistant, i.e. why they accepted them at the same level, although it was predicted that ESPS would be more taxed, and thus more likely to use the technology's suggestion.
The intelligent assistant was embedded in the abundant and scarce environments. It helped people navigate these environments and helped them select the door with the most points. Even though some people fell short of reaching their targets, i.e. collected less than 150 points (ESPS), the interaction and assistance provided by an intelligent assistant in that scarce environment may have prevented them from accruing a higher cognitive tax. Thus, the technology could have functioned like a cognition-of-sorts whose presence and assistance helped limit some of the adverse effects of scarcity on cognition.
One of the goals of this study was to see if we can nudge people to share more
information especially when they are in resource scarce environments. The findings in this study
show that it is possible for an intelligent assistant to push those in ESPS to be prosocial.
However, what is puzzling (and troubling) is that antisocial nudges by an intelligent assistant
were accepted by both EAPA and ESPS at similar levels; yet--although not significant--those in
EAPA seemed to “reject” the prosocial nudge more than ESPS (22% vs. 9%). What is interesting
here is that participants are engaging in these behaviors even when they do not bring a benefit or
loss to them.
Human societies all over the world have strict rules and regulations in place to control
behavior. Prosocial behaviors are encouraged and rewarded, and bad behaviors are discouraged
and punished. Institutions such as religion, state, family, and education also play an extensive
role in regulating these decisions. If we are told to not hurt others throughout our lives, and
assuming many of us do not intentionally engage in such behaviors on a regular basis, then how
can a mere nudge from an intelligent assistant make us behave maliciously---and that too without
a reason? As a reminder, it should be noted that EAPA went from 2% to 25% in sharing the worst information while ESPS went from 3% to 27% for the same behavior.
It is About Them
The effects observed above may have to do with how people think of computers, machines, ADM, and AI [62]. Recent research has shown that people perceive machines to be autonomous, unbiased, objective, and fair (Araujo et al., 2020). Behavioral research has found that people tend to trust computers, which invariably leads them to disclose personal information to the machine (Sundar & Kim, 2019). People have also been shown to adjust their own judgement to match an algorithm's (Logg et al., 2019). These explanations can only partially answer why people accepted an intelligent assistant's nudges and made antisocial choices. This is because, if the intelligent assistant was indeed perceived to be fair and objective, then the rejection of its nudges should not really vary by the nudge or environment type. However, we found that, in terms of frequency, the difference between EAPA and ESPS in accepting the nudge to share the best information was bigger than for the nudges to share the worst or no information. At this point, it is important to remind ourselves that the increase in behaviors as a function of the nudge mirrored the baseline incidence of the same behaviors, i.e. sharing accurate and best information was highest, followed by sharing no information, and then the worst information. This data pattern may be hinting at something deeper.
[62] Since there is not much research on intelligent assistants per se, investigations on ADM, AI, computers, and algorithms were used to make these arguments.
It is About Us
We could think about any number of the technology's attributes, or human biases, or the relationship between humans and the assistant to find an explanation for these findings. However, there is one other explanation that we might be overlooking. In any given population, there may be some number or distribution of decision-makers who can be nudged into antisocial behaviors more easily compared to the rest. These are just regular people who seem to do fine usually, but given a situation and the right nudge, they can hurt others easily. There are instances, not only throughout human history but also in everyday life, where seemingly normal and educated people recruit themselves into following antisocial nudges and ideas across social and cultural contexts. For any social or organizational group, such a population can exist. When given the "permission" or nudge to be malicious, these people might just go for it. If there exists some distribution of people who will accept antisocial nudges from an AI-enabled technology, then we should explicitly identify that as a factor to explain our findings, in addition to exploring other variables such as attributes of the technology or the individual in future research.
Hope
The purpose of the above discussion is not to focus on the doom and gloom. There is much hope. We found that an intelligent assistant nudged those in ESPS to change their behaviors in a prosocial direction, as they went from 53% at the baseline to 90.24% with a nudge. This is an encouraging sign and shows that AI can be used for social good. It should also be remembered that although 25% (EAPA) accepted the nudge to share the worst information, 75% did not. Furthermore, it should be noted that although the difference between EAPA and ESPS in "rejecting" an intelligent assistant's nudges is small--it exists. This is a glimmer of hope. If we can replicate these effects or find ways to amplify them, then perhaps we can alter the choice structure of an environment to reflect abundance, and thus limit the acceptance of antisocial nudges.
Overall, it is argued here that an intelligent assistant functions like a cognition-of-sorts in our environments, and thus may also be perceived as such. It is a cognition-of-sorts because it has the ability to learn, adapt, communicate, assist, and change the environment. This not only makes this technology distinct from other types of technologies, but also puts it higher on the cognitive front than other mammals. Thus, humans may be more likely to accept its suggestions and be influenced by it.
Implications for Theory, Research and Application
The decision to withhold information can have far-reaching consequences which can affect both senders and recipients by increasing or limiting output, growth, and trust. The results from these studies show that abundant environments encourage information sharing, but scarce environments increase information withholding. These findings suggest that we should explicitly theorize and study information behaviors by contextualizing them in some environment. This will be important for theory development in the future. In terms of applications, these findings inform us that it is important to think about how members or individuals within a group, organization, market, or any collaborative/interactive setting experience abundance and scarcity within their environments. This knowledge can help us design and introduce specific measures that can encourage information sharing and decrease information withholding.
The implications of the study on intelligent assistants are broad. It shows us that intelligent assistants can nudge us to be prosocial and antisocial towards others even when there is no benefit or loss to the self. It shows the extent to which people can rely on AI's judgement. This can have far-reaching consequences for any system. We need more empirical research, and subsequently better theoretical frameworks, to understand how, when, and why AI technology affects prosocial and antisocial behaviors. If we are going to develop theories and do research in this area, it is critical that we explicitly define and describe the contexts under which decision-making occurs. There is extensive research (see chapter three), along with the findings from this study, which suggests that our environments matter. From a practical perspective, this research could
help researchers and developers design better intelligent assistants that can be more successful at
reorienting behavior toward prosocial goals. It could also assist us in testing the functionality
and applicability of intelligent assistants under different environmental conditions that can easily
apply to a range of computer-supported cooperative work such as: multi-team projects,
development of collaboration tools, information exchange avenues, virtual marketplaces, and
crowdsourcing platforms to name a few.
Limitations
In chapter three of this dissertation, some boundary conditions were laid out. The studies
were designed to remove expectations of reciprocity. There was no benefit or loss to a participant
as a function of their actions. Furthermore, participants had no knowledge of or relationship with a recipient. While these boundary conditions are a strength, they are also limitations. Reciprocity,
benefit or loss to the self, and knowledge of recipients can alter how we behave within these
resource environments, and the extent to which we could accept nudges from an intelligent
assistant. It is arguable that in contexts outside the lab setting, these factors would play a crucial
role in determining how and when people share information with each other. Another limitation
of this study is that it is experimental in nature and done in a virtual setting. This kind of design
and methodological approach has its own challenges and can limit what we can learn about
human behavior outside of its parameters.
These findings should be seen in context. The system developed for this project
incorporated an intelligent assistant who interacted with people and helped them during the task.
It is possible that the effects found are attributable to these features of the intelligent assistant and
will not universally hold for other types of AI-enabled technology or intelligent assistants with
differing features. It is also likely that over time, people will be able to better identify nudges of
an intelligent assistant, and exercise caution in accepting them.
Another limitation of this study is that it was conducted in a virtual setting using participants from Amazon Mechanical Turk. There is a need to conduct research in non-virtual settings with research participants drawn from other pools to create both theoretical and methodological richness. Additionally, some conditions had smaller sample sizes, primarily because people chose to click "Ignore" and never proceeded to the main information sharing page. This decreased the probability of finding significant effects.
Future behavioral research can use the current studies’ limitations and explore them
further. At this point, what is most important is to replicate these findings in different contexts
via other methods or using the same method. Studies such as these (and others within this
domain) can have far-reaching consequences not only for theory, but also for applications. Therefore, it is crucial that more research goes into investigating these phenomena. After all, good science is about replication.
The Next Steps: Notes on Human Rationality and Conclusion
The next (and final) chapter takes what we have learned so far and synthesizes it to
reflect on the changing nature of human rationality. It also presents concluding remarks.
Chapter 7. Bounded Technological Rationality
This dissertation set up two models. The baseline model investigated the impact of abundant (EAPA) and scarce (ESPS) resource environments on individual decisions to share or not share information. Specifically, it measured the incidence of sharing information that was accurate and also yielded the most benefit to a requester of that information. It also tested whether people withheld helpful information, and the incidence of truth telling across these environments. Sharing information that could have helped someone, i.e. given them the most benefit in pursuit of their goal, was characterized as prosocial behavior. However, sharing information that could have harmed someone the most was considered antisocial behavior. If a participant chose to not share any information, then this was described as simply a lack of prosocial behavior.
The second model incorporated an intelligent assistant in these environments, and it nudged people into sharing the best information (i.e. the most beneficial), sharing the worst information (i.e. the most detrimental), or sharing no information. The
intelligent assistant displayed the three features (see chapter two): AI, interaction, and assistance
while engaging with the participant across these environments.
Results showed that participants’ prosocial and antisocial behaviors pertaining to
information were affected by the nudge of an intelligent assistant when compared to the
baseline. Hence, human decision-making concerning the same variable (information) showed
different patterns across these studies. What does this say about human rationality? How can we
think about the observed changes? In the following sections, these studies and their results will
be explained under the meta-perspective of bounded rationality. The commentary will begin by
describing bounded rationality and some of the history behind it. This will be followed by a discussion of the baseline model within its scope. The core thesis proposed here is that human rationality is changing and evolving at the intersection of AI, cognition, and environment.
Bounded Rationality: Cognition and Environment
The homo economicus (Latin for ‘economic man’) model of rationality argued that
decision-makers are only motivated by their self-interests, and therefore, make decisions that can
help maximize rewards or pay-offs without any consideration for others (see Sen, 1977). In line
with this argument, theory and research in classical normative microeconomics was built on the
assumption that a decision-maker had a set of ordered utilities and had complete information
regarding alternatives [63] (see March & Simon, 1958). These ideas turned out to be provocative for
Herbert Simon, whose interest in organizations prompted him to criticize the notion of rational man. His focus instead was on the '...instability and complexity of choices that affect decision making' (1959, p. 256). Simon argued that the economic models of rationality ignored the role of human cognition and its limitations. In his study of organizations, Simon presented the notion of 'the administrative man', which he contrasted with homo economicus. According to Simon, the study of human decision-making was too focused on the normative ideals of rationality, i.e. it investigated how people should make decisions. A normative model of decision-making that undergirded the rational man approach set an ideal or benchmark (e.g. rank ordering alternatives) against which individual decisions were compared. He argued that this approach deprived
against which individual decisions were compared with. He argued that this approach deprived
us of the descriptive understanding of decision-making which would instead reflect on describing
how people actually make decisions.
[63] The idea of "rational man" has been rejected thanks to a corpus of literature which has shown that people display generosity, cooperation, and altruism to benefit others across cultures (see Heinrich et al., 2001).
According to Simon, an individual has cognitive limitations which prevent them from determining their subjective utilities or the differences between various alternatives at all times [64]. He also argued that a decision-maker with limited cognitive capacity was also often placed in environments where they faced time constraints or information overload [65]. He wrote: "... if an organism is confronted with the problem of behaving approximately rationally, or adaptively, in a particular environment, the kinds of simplifications that are suitable may depend not only on the characteristics – sensory, neural, and other – of the organism, but equally upon the structure of the environment." (Simon, 1956, p. 130). He differentiated between two kinds of environments: objective and subjective. The latter differs from the former in terms of the perception of the organism and not necessarily the actuality. The environment could be an internal or external state where individuals make decisions.
In his critique of the rational man or homo economicus, Simon also suggested that people use heuristics, i.e. mental shortcuts [66], and try to satisfice (a combination word: satisfy + suffice) (Simon, 1990; Simon & March, 1958) instead of optimizing. This means that most individuals try to make the best possible or good enough choice. March and Simon described the distinction between satisficing and optimizing as follows: "Most human decision making, whether individual or organization is concerned with the discovery and selection of satisfactory alternatives, only in exceptional cases is it concerned with the discovery and selection of optimal alternatives. To optimize requires processes several orders of magnitude more complex than those required to satisfice. An example is the difference between searching a haystack to find the sharpest needle in it and searching the haystack to find a needle sharp enough to sew with." (March & Simon, 1958, pp. 140-141). Simon's core thesis was that human rationality emerged at the intersection of cognition [67] and the environment.
[64] Simon argued and focused on 'limits of cognition' across many of his works and writings (Simon, 1972; 1990).
[65] These were the two environmental parameters that Simon focused on the most while developing his ideas.
[66] The word "heuristic" has been attributed to Simon. It was a core feature of how he thought about and developed AI programs.
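To make Simon's satisficing-optimizing distinction concrete, here is a minimal illustrative sketch (the option values and the aspiration level are hypothetical, and the code is not drawn from Simon's own formal treatment):

# Minimal illustration of satisficing vs. optimizing over a list of options.
# The option values and the aspiration level are hypothetical, chosen only
# to show the difference between the two decision rules.

def optimize(options):
    """Examine every option and return the single best one."""
    return max(options)

def satisfice(options, aspiration):
    """Return the first option that is 'good enough' (meets the aspiration level).
    Falls back to the last option examined if none meets it."""
    for option in options:
        if option >= aspiration:
            return option
    return options[-1]

needles = [3, 5, 7, 9, 6, 2]   # hypothetical "sharpness" of needles in the haystack
print(optimize(needles))        # 9 -> requires scanning the whole haystack
print(satisfice(needles, 5))    # 5 -> stops at the first needle sharp enough to sew with

The optimizer must examine every option before it can answer, whereas the satisficer stops at the first option that clears its aspiration level--the computational gap that March and Simon's haystack example points to.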
It is quite interesting to note that, over time, bounded rationality as a perspective has hardly been criticized. In fact, the consensus has been positive, as people expect it to work because it makes intuitive sense. However, over the years that followed, bounded rationality as a meta-perspective gave rise to two areas of research--both of which were concerned with the effects of uncertainty and environmental variation on human judgement and decision-making. One group argued that human biases interfere with and affect decision-making, mostly to the detriment of decision-makers. This group produced works such as prospect theory (Kahneman & Tversky, 1980), and generated a huge programmatic line of research that falls under the "heuristics and biases" program developed by Daniel Kahneman, Amos Tversky, and others (see Kahneman, 2003; Kahneman, 2011).
The other side was led by Gerd Gigerenzer, Peter Todd, the ABC Research Group, and their colleagues, who developed the "fast & frugal" heuristics program and ideas on ecological rationality (Gigerenzer, Todd, & the ABC Research Group, 1999). Unlike the "heuristics & biases" camp, this group focused on how heuristics can help people make good enough choices. Researchers associated with this camp claim that heuristics help people survive and do well. Some heuristics this group has identified are take-the-best, imitate-the-majority, etc. (Gigerenzer, 2000; Goldstein & Gigerenzer, 2002).
[67] Cognition can be described as: "... those processes by which the sensory input is transformed, reduced, elaborated, stored, and recovered, and used" (Neisser, 1967, p. 4).
The differences between the two camps have led to many debates over the past two decades (see Gigerenzer, 1991; Kahneman & Tversky, 1996). The unifying aspect of this research is its focus on uncertainty or "constraint" in an environment. This should not be surprising since Simon himself was focused on the concept of "constraint". However, environments may also not be constraining. Unfortunately, this aspect of the environment is
often ignored, and thus, its effects on decision-making remain understudied. This is apparent in
research (see chapter three) where “scarcity” is studied more often than abundance when
investigating prosocial and antisocial behaviors. This research contributes to the bounded
rationality perspective by explicitly factoring in an abundant resource environment to help us
think about how constraining environments can be compared to less constraining ones.
The purpose of this thesis is not to take sides between the two camps described above.
The (very) brief history is only meant to help us see how the idea of bounded rationality influenced research, and to use what we know to expand Simon's original conceptualization and add something new to it.
Baseline Model: Resource Environments and Information Behaviors
The purpose of the baseline project was to understand how abundant and scarce resource
environments affect prosocial and antisocial decisions which pertain to information. Within the
realm of this project, the decision-makers had a complete choice to either share information or not, and, if sharing, to choose the best or the worst kind of information (or something in between). If they chose to share no information, then that counted as simply a lack of prosocial behavior. If they gave information which could help the recipient gain an advantage, that was an indicator of prosocial or benevolent behavior. In the case where they shared the worst information, this counted as antisocial or malicious behavior. Results showed that in the abundant environment condition, more people shared information and gave accurate answers. However, in scarce environments, people withheld information, and thus the incidence of sharing the best information was lower.
In sum, it was observed that people's decisions to share or not share information were linked to their abundant or scarce resource environments. Thus, there is evidence that differences in resource environments affect decision-making.
Bounded Technological Rationality: The Intersection Between AI, Cognition, and
Environment
Intelligent assistants are increasingly being embedded in our complex and constraining resource environments to help humans and to make decisions with and for them. To understand how they can serve as interventions to influence human decisions that affect others, a second study was conducted to test whether they can nudge people to make prosocial and antisocial choices pertaining to information within abundant and scarce resource environments. The intelligent assistant designed for this study was given interactive and assistive features. It helped users across two environments: abundant and scarce. It nudged people to share the best, the worst, or no information. The nudges altered human decision-making compared to the baseline, with more people sharing the best, the worst, or no information depending on the nudge. As an example, consider the following numbers. At the baseline level, 53% of those in a scarce environment shared the best information. The nudge to do the same pushed it to 90.2%. At the baseline level, 2% of people shared the worst information within an abundant environment, but the nudge pushed it to 25% (see more details in chapter 5). In light of these findings (and the research on how AI can affect our judgement--see chapter 3), it is proposed here that an intersection of AI, cognition, and the environment is changing and evolving human judgement and decision-making. This is bounded technological rationality.
Bounded technological rationality is pitched as a perspective on human decision-making.
It can help us develop specific theories and models involving AI and the environment and their
effects on behaviors, thereby furthering behavioral research. For this perspective to be explored
further, a model will have to specify the following:
a. Decision problem: Decision-making refers to a choice, i.e., a selection between two or
more alternatives. For instance: sharing or not sharing information, following a robot or
not, or adjusting one’s preferences to an algorithm’s judgement or not. Thus, we will need
to focus on describing the specific problem and its nature.
b. Artificial intelligence: AI refers to an agent that can learn from data, adapt, and make
changes to its environment (Russell & Norvig, 2003). However, not all AI is created
equal. Future research should specify the type of AI and its parameters when
investigating its effects on behaviors.
c. Environment: The concept of environment can be difficult to grasp (Simon, 1956). It can
be conceptualized and varied as a physical space. For instance, a shopping mall is a
specific environment different from the Grand Canyon. However, the environment can
also be internal to the stimulus or related to the task. In this project, an environment is a
resource environment--an integration of environmental and personal contexts where
resources are more or less than what is desired. Future research should lay out the
environmental features so we can have a better understanding of how they affect us. A
minimal sketch of how these three components could be recorded follows this list.
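As one way of making these three components explicit, the sketch below records a decision problem, a type of AI, and a resource environment for a single observation. The field names are hypothetical illustrations offered here, not a specification taken from this dissertation.

from dataclasses import dataclass

# Minimal sketch of the three components a bounded technological rationality
# model would need to specify. Field names are hypothetical illustrations,
# not a fixed specification from this dissertation.

@dataclass
class DecisionProblem:
    description: str        # e.g., "share or withhold information"
    alternatives: tuple     # the two or more options available

@dataclass
class ArtificialIntelligence:
    kind: str               # e.g., "intelligent assistant", "robot", "algorithm"
    capabilities: tuple     # e.g., ("learning", "interaction", "assistance")
    nudge: str              # the recommendation it gives, if any

@dataclass
class ResourceEnvironment:
    environmental: str      # "abundant" or "scarce" at the environmental level
    personal: str           # "abundant" or "scarce" at the personal level

@dataclass
class Observation:
    problem: DecisionProblem
    ai: ArtificialIntelligence
    environment: ResourceEnvironment
    choice: str             # the decision the person actually made

# Example: a scarce resource environment (ESPS) with a nudge toward the best information.
obs = Observation(
    problem=DecisionProblem("share or withhold information",
                            ("best", "worst", "intermediate", "none")),
    ai=ArtificialIntelligence("intelligent assistant",
                              ("learning", "interaction", "assistance"), "best"),
    environment=ResourceEnvironment("scarce", "scarce"),
    choice="best",
)
print(obs.environment, obs.ai.nudge, obs.choice)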
Conclusion
Let’s imagine a scenario in the (not too distant) future. A financier has an intelligent
assistant that helps her within the scope of a financial market. This financier is trading in stocks
one day and the market seems to be fluctuating--but it does not look too problematic as of yet.
Her intelligent assistant “foresees” trouble and more uncertainty and conducts some predictive
analyses. The intelligent assistant recommends that the financier quietly sell her stocks
immediately. The financier sells her stocks. However, the market is an environment where both
humans and their intelligent assistants are integrated and connected. The decision to sell stocks is
observed by other intelligent assistants who start nudging their users to change their behaviors.
The outcome for the market is predictable.
Humans have always considered themselves at the top of the rational food chain.
However, they are in the middle of a shakeup (of their own making) as AI-enabled technology
has started to make decisions with and for them. This dissertation explored how prosocial and
antisocial behaviors pertaining to information emerge within abundant and scarce resource
environments. It also investigated whether an intelligent assistant could serve as an intervention
to nudge people into sharing or not sharing information that can help or hurt others across these
resource environments. This project specifically investigated the type of information that, when
received, can help a recipient gain an advantage or reach their goals.
To help us understand the specific technology used in this dissertation, chapter two traced
the historical developments of the past seventy years pertaining to the evolution of intelligent
assistants. Using a historical approach, this chapter defined this technology and presented the
three features that set it apart from other types of technologies: AI, interaction, and assistance.
Chapter three tackled two research questions: a. how do abundant and scarce resource
environments affect prosocial and antisocial information sharing, and b. can an intelligent
assistant nudge people’s behavior pertaining to information that can help or hurt others. To this
end, the chapter presented an extensive review of the literature on abundance and scarcity and its
effects on prosocial and antisocial behaviors. It argued that most research either fails to explicitly
identify environmental and personal resource contexts or conflates them when studying the
effects of abundance and scarcity on prosocial and antisocial behaviors. This often leads to
contradictory or inconsistent findings, where it remains unclear how and when abundance and
scarcity affect these behaviors. To present a solution to these conceptual and empirical problems,
the chapter added to the research literature on the subject by proposing environmental resource
abundance and scarcity as a construct separate from personal resource abundance and scarcity. It
was argued that an interaction of environmental and personal resource contexts yields a “resource
environment”. The two types of environments studied in this project reflected instances where
resources were either abundant (EAPA) or scarce (ESPS) in both environmental and personal
contexts. Using the resource environment as a factor, it was argued that navigating a scarce
resource environment taxes cognition, which pushes people to conserve energy by limiting
action. This materializes when people choose to withhold information, i.e., ignore requests for
information.
This chapter then introduced the concept of the intelligent assistant as a cognition-of-sorts.
It argued that this technology is increasingly embedded in our environments and is equipped with
capabilities of learning, adapting, interacting, assisting, and collaborating. These capabilities set
this technology apart from others and can, therefore, allow it to influence human decision-
making. This argument was applied to behaviors pertaining to information sharing. Three types
of behaviors were investigated: a. choosing to not share information in response to a request, b.
choosing to share the best piece of information, i.e., the one that would help, and c. choosing to
share the worst piece of information, i.e., the one that would hurt a recipient. These choices were
categorized as a lack of prosocial behavior, prosocial behavior, and antisocial behavior,
respectively. Using findings from research on scarcity and abundance, and on the effects of
algorithms on individual judgement, several hypotheses were presented in this chapter. It was
predicted that individuals in a scarce environment would be more likely to accept prosocial and
antisocial nudges from an intelligent assistant than those in an abundant resource environment.
Furthermore, compared to a baseline, the behaviors pertaining to information sharing (or lack
thereof) across these environments would reflect the type of nudge an intelligent assistant gave to
the decision-makers.
Chapter four provided a detailed description of the study methods and the design of a web
portal that was used to invite hundreds of virtual participants to these behavioral studies. This
chapter also discussed the design and incorporation of the intelligent assistant in the virtual
interface.
Chapter five presented results from the baseline and intelligent assistant studies. Results
from the baseline study showed that people shared more information that can help others in an
abundant environment, whereas more people chose to withhold information in the scarce
environment. The data also showed that the incidence of antisocial behavior, i.e., sharing the
worst information, was low: 2% in the abundant and 3% in the scarce environment displayed this
behavior. The study also replicated prior research by finding that most of the information that
was shared was accurate (see Serota et al., 2010).
Results from the second study showed that, overall, an intelligent assistant was able to
affect the data distribution by increasing both prosocial and antisocial behaviors compared to the
baseline. Furthermore, there was not much of a difference between those in abundant and scarce
environments in accepting its nudges. However, those in a scarce environment showed a
remarkable increase, going from 53% at baseline to 90.24% sharing the best information when
nudged by an intelligent assistant.
Chapter six discussed these findings in detail and presented implications for theory,
research, and practice. It also described the limitations of these studies.
Human societies are transforming, and they will continue to do so as AI becomes the
cognition-of-sorts in our environments. As creators of this technology, we need to know it better
than it knows itself. We must figure out how this technology affects our rationality because it
will help us not only understand our behaviors, but also design technology for social good.
References
Adams, J. S. (1963). Toward an understanding of inequity. Journal of Abnormal and Social
Psychology, 67, 422–436.
Aksoy, B., & Palma, M. A. (2019). The effects of scarcity on cheating and in-group favoritism.
Journal of Economic Behavior & Organization, 165, 100–117.
Allen, J. F., Byron, D. K., Dzikovska, M., Ferguson, G., Galescu, L., & Stent, A. (2001). Toward
conversational human-computer interaction. AI magazine, 22(4), 27-27.
Araujo, T., Helberger, N., Kruikemeier, S., & de Vreese, C. H. (2020). In AI we trust?
Perceptions about automated decision-making by artificial intelligence. AI & Society.
https://doi.org/10.1007/s00146-019-00931-w
Bailey, N. R., & Scerbo, M. W. (2007). Automation-induced complacency for monitoring highly
reliable systems: the role of task complexity, system experience, and operator trust.
Theoretical Issues in Ergonomics Science, 8(4), 321–348.
Barwise, J., & Cooper, R. (1981). Generalized quantifiers and natural language. Linguistics and
Philosophy, 4(2), 159–219.
Batson, C. D., & Powell, A. A. (2003). Altruism and prosocial behavior. In T. Millon, M. Lerner
& I. B. Weiner (Eds.), Handbook of Psychology (pp. 463-484). John Wiley & Sons, Inc.
ben-Aaron, D. (1985, April 09). Weizenbaum examines computers and society. The Tech.
Retrieved November 01, 2019, from http://tech.mit.edu/V105/N16/weisen.16n.html
Berry, P. M., Gervasio, M. T., Peintner, B., Uribe, T. E., & Yorke-Smith, N. (2006, March).
Multi-criteria evaluation in user-centric distributed scheduling agents. In AAAI Spring
Symposium: Distributed Plan and Schedule Management (pp. 151-152).
Bogers, T., Al-Basri, A. A. A., Ostermann Rytlig, C., Bak Møller, M. E., Juhl Rasmussen, M.,
Bates Michelsen, N. K., & Gerling Jørgensen, S. (2019). A study of usage and usability
of intelligent personal assistants in Denmark. Information in Contemporary Society, 79–
90.
Bourbakis, N. G., & Kavraki, D. (2001). An intelligent assistant for navigation of visually
impaired people. Proceedings 2nd Annual IEEE International Symposium on
Bioinformatics and Bioengineering (BIBE 2001), 230–235.
Canbek, N. G., & Mutlu, M. E. (2016). On the track of Artificial Intelligence: Learning with
Intelligent Personal Assistants. Journal of Human Sciences, 13(1), 592–601.
Cannon, C., Goldsmith, K., & Roux, C. (2019). A Self‐Regulatory Model of Resource Scarcity.
Journal of Consumer Psychology, 29(1), 104–127.
Capraro, V., & Cococcioni, G. (2015). Social setting, intuition and experience in laboratory
experiments interact to shape cooperative decision-making. Proceedings. Biological
Sciences / The Royal Society, 282(1811). https://doi.org/10.1098/rspb.2015.0237
Chaudhri, V. K., Cheyer, A., Guili, R., Jarrold, B., Myers, K. L., & Niekarsz, J. (2006). A case
study in engineering a knowledge base for an intelligent personal assistant. Proceedings
of the 5th International Conference on Semantic Desktop and Social Semantic
Collaboration - Volume 202, 25–32.
Christakis, N. A. (2019, March 4). How AI will rewire us. The Atlantic. Retrieved from
https://www.theatlantic.com/magazine/archive/2019/04/robots-human-
relationships/583204/
CIMON brings AI to the International Space Station. (n.d.). Retrieved October 20, 2019, from
IBM Innovation Explanations website: https://www.ibm.com/thought-
leadership/innovation_explanations/article/cimon-ai-in-space.html
Clark, B., Robert, B., & Hampton, C. (2016). The technology effect: How perceptions of
technology drive excessive optimism. Journal of Business and Psychology, 31(1), 87-
102. doi: 10.1007/s10869-015-9399-4
Cohen, J. (1969). Statistical power analysis for the behavioral sciences. New York: Academic
Press.
Cooper, R. S., McElroy, J. F., Rolandi, W., Sanders, D., Ulmer, R. M., & Peebles, E. (2004).
Personal Virtual Assistant. US Patent No. 6757362
Cortana - Your personal productivity assistant. (n.d.). Cortana - Your personal Productivity
assistant. Retrieved May 26, 2020, from https://www.microsoft.com/en-us/cortana
Cowan, B. R., Pantidi, N., Coyle, D., Morrissey, K., Clarke, P., Al-Shehri, S., Earley, D., &
Bandeira, N. (2017). What can I help you with? Infrequent users’ experiences of
intelligent personal assistants. Proceedings of the 19th International Conference on
Human-Computer Interaction with Mobile Devices and Services, 43.
Crook, J. (2017). Amazon is putting Alexa in the office. Retrieved from
https://techcrunch.com/2017/11/29/amazon-is-putting-alexa-in-the-office/
Currans, K. G., & Bertani, J. A. (2004). Personal digital assistant with streaming information
display. US Patent No. 6727930.
https://patentimages.storage.googleapis.com/f1/70/9b/75076219718b1f/US6727930.pdf
Darwin, C. (1958). The origin of species. Penguin.
Davis, K. H., Biddulph, R., & Balashek, S. (1952). Automatic recognition of spoken digits. The
Journal of the Acoustical Society of America, 24(6), 637–642.
Denes, P., & Mathews, M. V. (1960). Spoken digit recognition using time‐frequency pattern
matching. The Journal of the Acoustical Society of America, 32(11), 1450–1455.
Dersch, W. C., Jr. (1969). Signal translating apparatus. US Patent No. 3470321.
https://patentimages.storage.googleapis.com/d1/1a/2c/a12d2b8d153773/US3470321.pdf
Desai, M., Medvedev, M., Vázquez, M., McSheehy, S., Gadea-Omelchenko, S., Bruggeman, C.,
Steinfeld, A., & Yanco, H. (2012). Effects of changing reliability on trust of robot
systems. 2012 7th ACM/IEEE International Conference on Human-Robot Interaction
(HRI), 73–80.
Diamond, J. M. (1999). Guns, germs, and steel: The fates of human societies. W.W. Norton &
Co.
Diamond, M., Horovitz, A., & Yauch, A. (1994). Sabotage. On Ill Communication. Grand Royal.
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: people erroneously
avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114–
126.
Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism.
Science Advances, 4(1), eaao5580.
Dubois, D., Rucker, D. D., & Galinsky, A. D. (2015). Social class, power, and selfishness: when
and why upper and lower class individuals behave unethically. Journal of Personality
and Social Psychology, 108(3), 436–449.
Dudley, H., & Balashek, S. (1958). Automatic recognition of phonetic patterns in speech. The
Journal of the Acoustical Society of America, 30, 721–732.
Dufournet, D., & Jouenne, P. (1997). MADRAS, an intelligent assistant for noise recognition.
INTERNOISE, 3, 1313–1316.
Dunis, C. L., Middleton, P. W., Karathanasopolous, A., & Theofilatos, K. (Eds.). (2016).
Artificial intelligence in financial markets. Palgrave Macmillan, London.
Dzindolet, M. T., Pierce, L. G., Beck, H. P., & Dawe, L. A. (2002). The perceived utility of
human and automated aids in a visual detection task. Human Factors, 44(1), 79-94.
Edwards, J., Liu, H., Zhou, T., Gould, S. J. J., Clark, L., Doyle, P., & Cowan, B. R. (2019).
Multitasking with Alexa: How using intelligent personal assistants impacts language-
based primary task performance. Retrieved from http://arxiv.org/abs/1907.01925
Empson, R. (2011, October 5). Siri-ous mind blowing: Video evidence Of Apple’s prophetic past
— circa 1987. TechCrunch. https://social.techcrunch.com/2011/10/05/siri-ous-mind-
blowing-video-evidence-of-apples-prophetic-past-circa-1987/
Epstein, J., & Klinkenberg, W. D. (2001). From Eliza to Internet: a brief history of computerized
assessment. Computers in Human Behavior, 17(3), 295–314.
Evers, M., Northway, D., Christopher, D., Thorpe, Z., Utigard, L., & Yoshimoto, M. (2001).
Personal digital assistant device. US Patent No. D443612:S1.
Eyssel, F., De Ruiter, L., Kuchenbrandt, D., Bobinger, S., & Hegel, F. (2012). “If you sound like
me, you must be more human”: On the interplay of robot and user features on human-
robot acceptance and anthropomorphism. 2012 7th ACM/IEEE International Conference
on Human-Robot Interaction (HRI), 125–126.
Fehr, E., Hoff, K., & Kshetramade, M. (2008). Spite and development. The American Economic
Review, 98(2), 494–499.
Feigenbaum, E. A. (1963). Artificial intelligence research. IEEE Transactions on Information
Theory / Professional Technical Group on Information Theory, 9(4), 248–253.
Feigenbaum, E. A. (1977). The art of artificial intelligence: I. Themes and case studies of
knowledge engineering [Technical Report]. Stanford University.
Foa, E. B., & Foa, U. G. (1980). Resource theory. In K. J. Gergen, M. S. Greenberg, & R. H.
Willis (Eds.), Social Exchange: Advances in Theory and Research (pp. 77–94). Springer
US.
Foa, U. G. (1971). Interpersonal and economic resources. Science, 171(3969), 345–351.
Følstad, A., & Brandtzæg, P. B. (2017). Chatbots and the new world of HCI. Interactions, 24(4),
38-42.
Gatebox, Inc. (n.d.). Azuma Hikari. Retrieved November 10, 2019, from
https://www.gatebox.ai/en/hikari
Gelman, A., Altman, S., Pallakoff, M., Doshi, K., & Manago, C. (1988). FRM: An Intelligent
Assistant for Financial Resource Management. AAAI.
Ghandeharioun, A., McDuff, D., Czerwinski, M., & Rowan, K. (2018). Emma: An emotionally
intelligent personal assistant for improving wellbeing. arXiv preprint arXiv:1812.11423.
Gigerenzer, G. (1991). How to Make Cognitive Illusions Disappear: Beyond “Heuristics and
Biases.” European Review of Social Psychology, 2(1), 83–115.
Gigerenzer, G. (2000). Adaptive thinking: Rationality in the real world. Oxford University Press.
Gigerenzer, G., & Goldstein, D. G. (1996). Reasoning the fast and frugal way: models of
bounded rationality. Psychological Review, 103(4), 650.
Gigerenzer, G., Todd, P. M., & the ABC Research Group. (1999). Simple heuristics that make us
smart. Oxford University Press.
Gino, F., & Pierce, L. (2009). The abundance effect: Unethical behavior in the presence of
wealth. Organizational Behavior and Human Decision Processes, 109(2), 142–155.
Girimonte, D., & Izzo, D. (2007). Artificial Intelligence for Space Applications. In A. J. Schuster
(Ed.), Intelligent Computing Everywhere (pp. 235–253). Springer London.
Gleick, J. (2011). The information: A history, a theory, a flood. Vintage.
Global Intelligent Virtual Assistant (IVA) Market 2019-2025: Industry Size, Share & Trends -
ResearchAndMarkets.com. (2019, August 22). Businesswire. Retrieved October 20,
2019. https://www.businesswire.com/news/home/20190822005478/en/Global-Intelligent-
Virtual-Assistant-IVA-Market-2019-2025
Gneezy, U., & Imas, A. (2017). Lab in the field: Measuring preferences in the wild. In A. V.
Banerjee & E. Duflo, Handbook of Economic Field Experiments (pp. 439-464). Elsevier.
Goksel, N. & Mutlu, M. (2016). On the track of artificial intelligence: Learning with intelligent
personal assistants. Journal of Human Sciences, 13(1), 592-601.
doi:10.14687/ijhs.v13i1.3549
Goldstein, D. G., & Gigerenzer, G. (2002). Models of ecological rationality: The recognition
heuristic. Psychological Review, 109(1), 75-90.
Gong, L. (2003). Intelligent social agents. US Patent No. 20030163311:A1.
Griffin, R. W., & Lopez, Y. P. (2005). “Bad Behavior” in organizations: A review and typology
for future research. Journal of Management, 31(6), 988–1005.
Gruber, T. R., Brigham, C. D., Keen, D. S., Novick, G., & Phipps, B. S. (2018). Using context
information to facilitate processing of commands in a virtual assistant. US Patent No.
9858925.
https://patentimages.storage.googleapis.com/56/6a/d0/671c625c6fd4d2/US9858925.pdf
Gruber, T. R., Cheyer, A. J., Kittlaus, D., & Guzzoni, D. R. (2016). Intelligent automated
assistant. US Patent No. 9318108B2.
Guzman, A. L., & Lewis, S. C. (2020). Artificial intelligence and communication: A Human–
Machine Communication research agenda. New Media & Society, 22(1), 70–86.
Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., de Visser, E. J., & Parasuraman,
R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human
Factors, 53(5), 517–527.
Hare, B., Call, J., & Tomasello, M. (2006). Chimpanzees deceive a human competitor by hiding.
Cognition, 101(3), 495–514.
Hauswald, J., Laurenzano, M. A., Zhang, Y., Li, C., Rovinski, A., Khurana, A., Dreslinski, R.
G., Mudge, T., Petrucci, V., Tang, L. & Mars, J. (2015). Sirius: An open end-to-end voice
and vision personal assistant and its implications for future warehouse scale computers.
In Proceedings of the Twentieth International Conference on Architectural Support for
Programming Languages and Operating Systems. p.223-238. ACM.
Heilweil, R. (2019, December 12). Artificial intelligence will help determine if you get your next
job. Vox. https://www.vox.com/recode/2019/12/12/20993665/artificial-intelligence-ai-
job-screen
Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., Gintis, H., & McElreath, R. (2001). In
search of homo economicus: behavioral experiments in 15 small-scale societies. The
American Economic Review, 91(2), 73–78.
Hill, J., Ford, W. R., & Farreras, I. G. (2015). Real conversations with artificial intelligence: A
comparison between human–human online conversations and human–chatbot
conversations. Computers in Human Behavior, 49, 245-250.
Honan, M. (2013, August 5). Remembering the Apple Newton’s prophetic failure and lasting
impact. Wired. https://www.wired.com/2013/08/remembering-the-apple-newtons-
prophetic-failure-and-lasting-ideals/
Hoy, M. B. (2018). Alexa, Siri, Cortana, and More: An Introduction to Voice Assistants. Medical
Reference Services Quarterly, 37(1), 81–88.
Huff, K. E., & Lesser, V. R. (1987). A plan-based intelligent assistant that supports the process
of programming.
IBM Archives: IBM Shoebox. (n.d).
https://www.ibm.com/ibm/history/exhibits/specialprod1/specialprod1_7.html
Imrie, P., & Bednar, P. (2013). Virtual personal assistant. ItAIS 2013.
https://researchportal.port.ac.uk/portal/en/publications/virtual-personal-
assistant(99a40f22-17ab-4343-9610-b2fb0e8160a3).html
Inthorn, J., Tabacchi, M. E., & Seising, R. (2015). Having the final Say: machine support of
ethical decisions of doctors. In S. P. van Ryswyk & M. Pontier (Eds), Machine Medical
Ethics (pp. 181–206).
Jepson, A. D., & Richards, W. (1993). What is a percept? Toronto, Canada: Department of
Computer Science, University of Toronto. Retrieved from
https://www.cs.toronto.edu/~jepson/papers/jepson_richards_percept.pdf
Jones, T. M. (1991). Ethical decision making by individuals in organizations: an issue-contingent
model. Academy of Management Review, 16(2), 366–395.
Kahneman, D., & Tversky, A. (1980). Prospect theory. Econometrica, 12.
Kahneman, D., & Tversky, A. (1996). On the reality of cognitive illusions. Psychological
Review, 103(3), 582–591.
Kaiser, G. E., & Feiler, P. H. (1987). An architecture for intelligent assistance in software
development. Proceedings of the 9th International Conference on Software Engineering,
180–188.
Kalousis, A., & Theoharis, T. (1999). Noemon: Design, implementation and performance results
of an intelligent assistant for classifier selection. Intelligent Data Analysis, 3(5), 319–337.
Këpuska, V., & Bohouta, G. (2018). Next-generation of virtual personal assistants (Microsoft
Cortana, Apple Siri, Amazon Alexa and Google Home). 2018 IEEE 8th Annual
Computing and Communication Workshop and Conference (CCWC), 99–103. IEEE.
Kim, K., Boelling, L., Haesler, S., Bailenson, J., Bruder, G., & Welch, G. F. (2018). Does a
digital assistant need a body? The influence of visual embodiment and social behavior on
the perception of intelligent virtual agents in AR. 2018 IEEE International Symposium on
Mixed and Augmented Reality (ISMAR), 105–114.
Kim, K., de Melo, C. M., Norouzi, N., Bruder, G., & Welch, G. F. (2020). Reducing task load
with an embodied intelligent virtual assistant for improved performance in collaborative
decision making. IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 529–
538.
Kleinberg, S. (2018, January). 5 ways voice assistance is shaping consumer behavior. Mountain
View, CA. Retrieved from https://www.thinkwithgoogle.com/consumer-insights/voice-
assistance-consumer-experience/?_ga=2.226615381.99407529.1525160997-
781502433.1525160997
Leavitt, D. (2006). The man who knew too much: Alan Turing and the invention of the computer.
WW Norton & Company.
Leviathan, Y., & Matias, Y. (2018, May 08). Google Duplex: An AI System for Accomplishing
Real-World Tasks Over the Phone. Google AI Blog. Retrieved November 21, 2019, from
https://ai.googleblog.com/2018/05/duplex-ai-system-for-natural-conversation.html
Levine, T. R. (2014). Truth-Default Theory (TDT): A theory of human deception and deception
detection. Journal of Language and Social Psychology, 33(4), 378–392.
Levine, T. R., Ali, M. V., Dean, M., Abdulla, R. A., & Garcia-Ruano, K. (2016). Toward a pan-
cultural typology of deception motives. Journal of Intercultural Communication
Research, 45(1), 1–12.
Levine, T. R., Kim, R. K., & Hamel, L. M. (2010). People lie for a reason: Three experiments
documenting the principle of veracity. Communication Research Reports, 27(4), 271–
285.
Liang, P. (2005). Advanced search, file system, and intelligent assistant agent. US Patent No.
20050144162:A1.
Liddy, E. D. (2001). Natural language processing. In M. A. Drake (Ed.), Encyclopedia of Library
and Information Science (pp. 2126–2136). Marcel Dekker.
Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer
algorithmic to human judgment. Organizational Behavior and Human Decision
Processes, 151, 90–103.
Lopatovska, I., Griffin, A. L., Gallagher, K., Ballingall, C., Rock, C., & Velazquez, M. (2020).
User recommendations for intelligent personal assistants. Journal of Librarianship and
Information Science, 52(2), 577–591.
López, G., Quesada, L., & Guerrero, L. A. (2018). Alexa vs. Siri vs. Cortana vs. Google
Assistant: A comparison of speech-based natural user interfaces. Advances in Human
Factors and Systems Interaction, 241–250. Springer International Publishing.
Luger, E., & Sellen, A. (2016). “Like having a really bad PA”: The gulf between user
expectation and experience of conversational agents. Proceedings of the 2016 CHI
Conference on Human Factors in Computing Systems, 5286–5297. New York, NY, USA:
ACM.
Madhavan, P., & Wiegmann, D. A. (2007). Similarities and differences between human–human
and human–automation trust: an integrative review. Theoretical Issues in Ergonomics
Science, 8(4), 277–301.
Makridakis, S. (2017). The forthcoming Artificial Intelligence (AI) revolution: Its impact on
society and firms. Futures. Retrieved from
https://www.sciencedirect.com/science/article/pii/S0016328717300046
Mani, A., Mullainathan, S., Shafir, E., & Zhao, J. (2013). Poverty impedes cognitive function.
Science, 341(6149), 976-980. doi: 10.1126/science.1238041
Martin, T. B. (1976). Practical applications of voice input to machines. Proceedings of the IEEE,
64(4), 487–501.
Mathur, M. B., & Reichling, D. B. (2016). Navigating a social world with robot partners: A
quantitative cartography of the Uncanny Valley. Cognition, 146, 22–32.
Matsuyama, Y., Bhardwaj, A., Zhao, R., Romeo, O., Akoju, S., & Cassell, J. (2016). Socially-
aware animated intelligent personal assistant agent. Proceedings of the 17th Annual
Meeting of the Special Interest Group on Discourse and Dialogue, 224–227.
McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A Proposal for the
Dartmouth summer research project on artificial intelligence, August 31, 1955. AI
Magazine, 27(4), 12. https://doi.org/10.1609/aimag.v27i4.1904
McCornack, S. A., Morrison, K., Paik, J. E., Wisner, A. M., & Zhu, X. (2014). Information
manipulation theory 2: A propositional theory of deceptive discourse production. Journal
of Language and Social Psychology, 33(4), 348–377.
Miksch, S., Cheng, K., & Hayes-Roth, B. (1997, February). An intelligent assistant for patient
health care. In Proceedings of the first international conference on Autonomous agents
(pp. 458-465).
Miller, L. C., Shaikh, S. J., Jeong, D. C., Wang, L., Gillig, T. K., Godoy, C. G., Appleby, P. R.,
Corsbie-Massay, C. L., Marsella, S., Christensen, J. L., & Read, S. J. (2019). Causal
inference in generalizable environments: Systematic representative design. Psychological
Inquiry, 30(4), 173-202.
Miner, A. S., Milstein, A., Schueller, S., Hegde, R., Mangurian, C., & Linos, E. (2016).
Smartphone-Based conversational agents and responses to questions About mental
health, interpersonal violence, and physical health. JAMA Internal Medicine, 176(5),
619–625.
Mitchell, R. W., & Anderson, J. R. (1997). Pointing, withholding information, and deception in
capuchin monkeys (Cebus apella). Journal of Comparative Psychology, 111(4), 351–361.
Miyazaki, A. D., & Rodriguez, A. A. (2009). Price, scarcity, and consumer willingness to
purchase pirated media products. Journal of Public Policy & Marketing, 28(1).
https://journals.sagepub.com/doi/abs/10.1509/jppm.28.1.71
Moskvitch, K. (2017, February, 15). The machines that learned to listen. BBC.
https://www.bbc.com/future/article/20170214-the-machines-that-learned-to-listen
Muir, B. M., & Moray, N. (1996). Trust in automation. Part II. Experimental studies of trust and
human intervention in a process control simulation. Ergonomics, 39(3), 429–460.
Myers, K. L., & Yorke-Smith, N. (2005, November). A cognitive framework for delegation to an
assistive user agent. In Proc. of AAAI 2005 Fall Symposium on Mixed-Initiative Problem-
Solving Assistants (pp. 94-99).
Nagel, J., & Seni, G. (2007). Text input method for personal digital assistants and the like. US
Patent No. 7158678.
https://patentimages.storage.googleapis.com/32/e1/f8/d2e1732e7b8b4d/US7158678.pdf
Newell, A., & Simon, H. (1956). The logic theory machine--A complex information processing
system. IRE Transactions on Information Theory, 2(3), 61–79.
Newell, A., & Simon, H. A. (1961). Computer simulation of human thinking. Science,
134(3495), 2011-2017.
Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-
hall.
Newell, A., Shaw, J. C., & Simon, H. A. (1958). Elements of a theory of human problem solving.
Psychological Review, 65(3), 151-166.
Noble, C. E. (1957). Human trial-and-error learning. Psychological Reports, 3(2), 377–398.
Oberquelle, H., Kupka, I., & Maass, S. (1983). A view of human—machine communication and
co-operation. International Journal of Man-Machine Studies, 19(4), 309–333.
Oppenheimer, J. A. (2008). Rational choice theory. The Sage Encyclopedia of Political Theory.
London Sage Publications.
Ortiz-Ospina, E., & Roser, M. (2016). Trust. Our World in Data. Retrieved from
https://ourworldindata.org/trust
Panzarino, M. (2016, October 5). Samsung acquires Viv, a next-gen AI assistant built by the
creators of Apple’s Siri. TechCrunch. https://social.techcrunch.com/2016/10/05/samsung-
acquires-viv-a-next-gen-ai-assistant-built-by-creators-of-apples-siri/
Paraiso, E. C., & Barthès, J.-P. A. (2006). An intelligent speech interface for personal assistants
in R&D projects. Expert Systems with Applications, 31(4), 673–683.
Pauker, S. G., Gorry, G. A., Kassirer, J. P., & Schwartz, W. B. (1976). Towards the simulation of
clinical cognition: taking a present illness by computer. The American Journal of
Medicine, 60(7), 981-996.
Penner, L. A., Dovidio, J. F., Piliavin, J. A., & Schroeder, D. A. (2005). Prosocial behavior:
multilevel perspectives. Annual Review of Psychology, 56, 365–392.
Personal Digital Assistant - Cortana Home Assistant - Microsoft. (n.d.). Retrieved October 20,
2019, from Microsoft Cortana, your intelligent assistant website:
https://www.microsoft.com/en-us/cortana
Pradhan, A., Mehta, K., & Findlater, L. (2018, April). Accessibility came by accident: Use of
voice-controlled intelligent personal assistants by people with disabilities. In Proceedings
of the 2018 CHI Conference on Human Factors in Computing Systems (p. 459). ACM.
Prahl, A., & Van Swol, L. (2017). Understanding algorithm aversion: When is advice from
automation discounted? Journal of Forecasting, 36(6), 691-702.
Prediger, S., Vollan, B., & Herrmann, B. (2014). Resource scarcity and antisocial behavior.
Journal of Public Economics, 119, 1–9.
Priest, D., Dyson, T. & Martin, T. (2019, September 17). The complete list of Alexa commands
so far. CNET. https://www.cnet.com/how-to/the-complete-list-of-alexa-commands-for-
your-amazon-echo/
Purington, A., Taft, J. G., Sannon, S., Bazarova, N. N., & Taylor, S. H. (2017). “Alexa is my
new BFF”: Social roles, user satisfaction, and personification of the Amazon Echo.
Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in
Computing Systems, 2853–2859.
Reeves, B., & Nass, C. I. (1996). The media equation: How people treat computers, television,
and new media like real people and places. Cambridge University Press.
Rizzo, A., Forbell, E., Lange, B., Galen Buckwalter, J., Williams, J., Sagae, K., & Traum, D.
(2012). Simcoach: an online intelligent virtual human agent system for breaking down
barriers to care for service members and veterans. Healing War Trauma: A Handbook of
Creative Approaches. Taylor & Francis.
Rizzo, A., Shilling, R., Forbell, E., Scherer, S., Gratch, J., & Morency, L. P. (2016). Autonomous
virtual human agents for healthcare information support and clinical interviewing. In
Artificial intelligence in behavioral and mental health care (pp. 53-79). Academic Press.
rkarena. (2007, April 8). Apple’s 1987 Knowledge Navigator Video. Youtube.
https://www.youtube.com/watch?v=HGYFEI6uLy0
Robinette, P., Howard, A. M., & Wagner, A. R. (2017a). Effect of robot performance on human-
robot trust in time-critical situations. IEEE Transactions on Human-Machine Systems,
47(4), 425–436.
Robinette, P., Li, W., Allen, R., Howard, A. M., & Wagner, A. R. (2016, March). Overtrust of
robots in emergency evacuation scenarios. In The Eleventh ACM/IEEE International
Conference on Human Robot Interaction (pp. 101-108). IEEE Press.
Roediger, H. L., III. (2004). What happened to behaviorism. APS Observer, 17(3).
Roemmele, B. (2019). The 1961 IBM Shoebox—The First Voice First device created.
https://www.youtube.com/watch?v=gQqCCzrS5_I
Ross, J. M., Szalma, J. L., Hancock, P. A., Barnett, J. S., & Taylor, G. (2008). The effect of
automation reliability on user automation trust and reliance in a search-and-rescue
scenario. Proceedings of the Human Factors and Ergonomics Society Annual Meeting.
https://doi.org/10.1177/154193120805201908
Roux, C., Goldsmith, K., & Bonezzi, A. (2015). On the psychology of scarcity: When reminders
of resource scarcity promote selfish (and generous) behavior. The Journal of Consumer
Research, 42(4), 615–631.
Russell, S. J., & Norvig, P. (2015). Artificial intelligence: a modern approach. Pearson
Education Limited.
Sakagami, Y., Watanabe, R., Aoyama, C., Matsunaga, S., Higaki, N., & Fujimura, K. (2002).
The intelligent ASIMO: System overview and integration. In IEEE/RSJ international
conference on intelligent robots and systems (Vol. 3, pp. 2478–2483). IEEE.
Scott, A. C., Clayton, J. E., & Garnier, J. (1987). Intelligent assistant for using and operating
computer system capabilities to solve problems. US Patent No 4713775A.
Searle, J. R. (1980). Minds, brains, and programs. The Behavioral and Brain Sciences, 3(3),
417–424.
Segal, R. B., & Kephart, J. O. (1999). MailCat: an intelligent assistant for organizing e-mail.
AAAI/IAAI, 925–926.
Sen, A. K. (1977). Rational fools: A critique of the behavioral foundations of economic theory.
Philosophy & Public Affairs, 317-344.
Serota, K. B., & Levine, T. R. (2015). A few prolific liars: variation in the prevalence of lying.
Journal of Language and Social Psychology, 34(2), 138–157.
Serota, K. B., Levine, T. R., & Boster, F. J. (2010). The prevalence of lying in America: Three
studies of self-reported lies. Human Communication Research, 36(1), 2–25.
Shah, A. K., Mullainathan, S., & Shafir, E. (2012). Some consequences of having too little.
Science, 338(6107), 682–685.
Shah, A. K., Shafir, E., & Mullainathan, S. (2015). Scarcity frames value. Psychological Science,
26(4), 402–412.
Shah, H., Warwick, K., Vallverdú, J., & Wu, D. (2016). Can machines talk? Comparison of Eliza
with modern dialogue systems. Computers in Human Behavior, 58, 278-295.
Shaikh, S.J., & Cruz, I. (2019). 'Alexa, do you know anything?' The impact of an intelligent
assistant on team interactions and creative performance under time scarcity. arXiv, arXiv-
1912.
Sharma, E., Mazar, N., Alter, A. L., & Ariely, D. (2014). Financial deprivation selectively shifts
moral standards and compromises moral decisions. Organizational Behavior and Human
Decision Processes, 123(2), 90–100.
Siitonen, L., & Ronkka, R. (1997). Personal digital assistant with real time search capability. US
Patent No. 6049796A.
Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L.,
Kumaran, D., Graepel, T., Lillicrap, T., Simonyan, K., & Hassabis, D. (2018). A general
reinforcement learning algorithm that masters chess, shogi, and Go through self-play.
Science, 362(6419), 1140–1144.
Simon, H. A. (1955). A behavioral model of rational choice. The Quarterly Journal of
Economics, 69(1), 99-118.
Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological
review, 63(2), 129.
Simon, H. A. (1972). Theories of bounded rationality. Decision and Organization, 1(1), 161-176.
Simon, H. A. (1976). Administrative Behavior. New York, NY: The Free Press.
Simon, H. A. (1978). Rationality as process and as product of thought. The American Economic
Review, 1-16.
Simon, H. A. (1979). Rational decision making in business organizations. The American
Economic Review, 493-513.
Simon, H. A. (1982). Models of bounded rationality: Empirically grounded economic reason
(Vol. 3). MIT press.
Simon, H. A. (1986). Rationality in psychology and economics. Journal of Business, S209-S224
Simon, H. A. (1990). Reason in human affairs. Stanford University Press.
Simon, H. A. (1991). Bounded rationality and organizational learning. Organization Science,
2(1), 125-134.
Simon, H. A. (1996). The sciences of the artificial. The MIT Press
Singh, S. (1999). The code book: The science of secrecy from ancient Egypt to quantum
cryptography. Doubleday.
Solomonoff, R. J. (1966). Some recent work in artificial intelligence. Proceedings of the IEEE,
54(12), 1687–1697.
Suchman, L. A. (1987). Plans and situated actions: The problem of human-machine
communication. Cambridge University Press.
Sundar, S. S. (2020). Rise of machine agency: a framework for studying the psychology of
human–AI interaction (HAII). Journal of Computer-Mediated Communication,
25(1), 74–88.
Sundar, S. S., & Kim, J. (2019). Machine heuristic: When we trust computers more than humans
with our personal information. Proceedings of the 2019 CHI Conference on Human
Factors in Computing Systems, 1–9.
Thaler, R. H., & Sunstein, C. R. (2009). Nudge: Improving decisions about health, wealth, and
happiness. Penguin.
Todd, P. M., & Gigerenzer, G. (2000). Précis of simple heuristics that make us smart. Behavioral
and Brain Sciences, 23(5), 727-741.
Topol, E. J. (2019). High-performance medicine: the convergence of human and artificial
intelligence. Nature Medicine, 25(1), 44–56.
Tou, F. N., Williams, M. D., Fikes, R., Henderson, D. A., Jr, & Malone, T. W. (1982). RABBIT:
An Intelligent Database Assistant. AAAI, 314–318.
Tulshan, A. S., & Dhage, S. N. (2019). Survey on virtual assistant: Google Assistant, Siri,
Cortana, Alexa. Advances in Signal Processing and Intelligent Recognition Systems,
190–201. Springer Singapore.
Turing, A. M. (1950). Computing machinery and intelligence. Mind; a Quarterly Review of
Psychology and Philosophy, 59(236), 433–460.
Turing, A. M. (1951). Intelligent machinery, a heretical theory. In B. J. Copeland (Ed.), The
Essential Turing: The Ideas that Gave Birth to the Computer Age (pp. 472–475). Oxford,
UK: Clarendon Press, 2004.
Vardy, T., & Atkinson, Q. D. (2019). Property damage and exposure to other people in distress
differentially predict prosocial behavior after a natural disaster. Psychological Science,
30(4), 563–575.
Vincent, J. (2018, November 27). This is when AI’s top researchers think artificial general
intelligence will be achieved. The Verge.
https://www.theverge.com/2018/11/27/18114362/ai-artificial-general-intelligence-when-
achieved-martin-ford-book
Wall, M. (2018, June). Meet CIMON, the 1st robot with artificial intelligence to fly in space.
Retrieved from https://www.space.com/41041-artificial-intelligence-cimon-space-
exploration.html
Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. W.
H. Freeman and Company.
Weizenbaum, J. (1966). ELIZA – A computer program for the study of natural language
communication between man and machine. Communications of the ACM, 9(1), 36–45.
Werner, K. B., Few, L. R., & Bucholz, K. K. (2015). Epidemiology, comorbidity, and behavioral
genetics of Antisocial Personality Disorder and Psychopathy. Psychiatric Annals, 45(4),
195–199.
Wilson, J. H., & Daugherty, P. R. (2018, July 1). How humans and AI Are working together in
1,500 Companies. Harvard Business Review. Retrieved from
https://hbr.org/2018/07/collaborative-intelligence-humans-and-ai-are-joining-forces
Winkler, R., Söllner, M., Neuweiler, M. L., Conti Rossini, F., & Leimeister, J. M. (2019, May).
Alexa, can you help us solve this problem? How conversations with smart personal
assistant tutors increase task group outcomes. In Extended Abstracts of the 2019 CHI
Conference on Human Factors in Computing Systems (pp. 1-6).
Yam, K. C., Reynolds, S. J., & Hirsh, J. B. (2014). The hungry thief: Physiological deprivation
and its effects on unethical behavior. Organizational Behavior and Human Decision
Processes.
Zimmermann, M., Stolcke, A., & Shriberg, E. (2006). Joint segmentation and classification of
dialog acts in multiparty meetings. 2006 IEEE International Conference on Acoustics
Speech and Signal Processing Proceedings, 1, I – I.
Appendix – Intelligent Assistant Designs
Figure A. Round 1. Intelligent assistant design.
Figure B. Alana – female intelligent assistant.
Figure C. Round 2. Charles – male intelligent assistant.
Figure D. Round 2. Nimos – gender-neutral intelligent assistant.