TECHNOLOGY, BEHAVIOR TRACKING, AND THE FUTURE OF WORK
ROSHNI RAVEENDHRAN
University of Southern California
Marshall School of Business
Department of Management and Organization
701 Exposition Blvd – Hoffman Hall 431
Los Angeles, CA 90089
raveendh@usc.edu
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(BUSINESS ADMINISTRATION)
August 2018
ABSTRACT
This dissertation aims to advance current understanding of the psychological impact of novel
technologies on individuals in organizations. Leveraging theory on how individuals experience
social contexts, this dissertation proposes and tests the central idea that technology reduces
people’s social evaluation concerns, thereby attenuating the evaluative aspect of social situations
while highlighting their informational aspect. Thus, interacting with or through technology (as
opposed to with other humans) shifts people’s focus from evaluation to information, leading to
profound implications for organizational actors. I build evidence for this claim by testing this
idea in the context of two rapidly spreading types of new technologies – behavior tracking
products and virtual reality. First, I examine how the shift in focus from evaluation to
information influences people’s willingness to adopt and use behavior tracking products. Second,
I build on the idea that technology shifts people’s focus from evaluation to information to
examine when and why managers use virtual reality to monitor their employees. Specifically, my
findings show that technology lowers people’s social evaluation concerns and, as a result, (1)
increases their willingness to adopt behavior tracking products, and (2) influences their
willingness to use virtual reality for monitoring employees. I argue that these findings offer
fundamental insights about the psychological impact of technology and inform our
understanding of people’s behaviors in relation to new technologies. More broadly, this work
provides a novel theoretical perspective that highlights the social psychological functions of
technology and opens avenues for exploring the implications of these effects in various social
and work-related contexts.
TABLE OF CONTENTS

ABSTRACT
LIST OF TABLES
LIST OF FIGURES
ACKNOWLEDGEMENTS
CHAPTER 1 – THE PSYCHOLOGICAL IMPACT OF NOVEL TECHNOLOGIES
    SOCIAL EVALUATION – A DEFINING ASPECT OF SOCIAL CONTEXTS
    TECHNOLOGY AND SOCIAL EVALUATION
    DISSERTATION OUTLINE
CHAPTER 2 – BEHAVIOR TRACKING PRODUCTS: ADOPTION, USE, AND EFFECTS
    ABSTRACT
    ADVANCES IN BEHAVIOR TRACKING TECHNOLOGY
    THE AVERSIVENESS OF TRACKING
    BEHAVIOR TRACKING TECHNOLOGY, SOCIAL EVALUATION, AND ANTICIPATED AUTONOMY
        Concerns about Social Evaluation
        Social Evaluation and Anticipated Autonomy
    OVERVIEW OF THE PRESENT RESEARCH
    EXPERIMENT 1
        Method
            Participants
            Materials and Procedure
        Results
    EXPERIMENT 2
        Method
            Participants
            Materials and Procedure
        Results
    PRODUCT TYPICALITY POST TEST
        Method
            Participants
            Materials and Procedure
        Results
    EXPERIMENT 3
        Method
            Participants
            Materials and Procedure
        Results
    EXPERIMENT 4
        Method
            Participants
            Materials and Procedure
        Results
    EXPERIMENT 5
        Part 1 – Method
            Participants
            Materials and Procedure
        Part 2 – Method
            Participants
            Materials and Procedure
        Results
            Part 1 – Results
            Part 2 – Results
    EXPERIMENT 6
        Method
            Participants
            Materials and Procedure
        Results
    EXPERIMENT 7
        Method
            Participants
            Materials and Procedure
        Results
    GENERAL DISCUSSION
        Contributions to Theory and Practice
        Limitations and Future Directions
    CONCLUSION
CHAPTER 3 – VIRTUAL REALITY IN MANAGEMENT: DRIVERS AND CONSEQUENCES
    ABSTRACT
    FREQUENT MONITORING AS A SOURCE OF NEGATIVE EVALUATION
    POWER AND THE DESIRE TO AVOID NEGATIVE EVALUATION
    TECHNOLOGY, PSYCHOLOGICAL DISTANCING, AND SOCIAL PRESENCE
    OVERVIEW OF THE PRESENT RESEARCH
    EXPERIMENT 1
        Method
            Participants
            Materials and Procedure
        Results and Discussion
    EXPERIMENT 2
        Personality Differences
        Typicality of Frequent Monitoring
        Method
            Participants and Design
            Materials and Procedure
        Results and Discussion
            Mediation Analyses
            Personality Effects
    GENERAL DISCUSSION
        Contributions to Theory and Practice
        Limitations and Future Directions
    CONCLUSION
CHAPTER 4 – CONCLUSION
    IMPLICATIONS FOR MONITORING
    IMPLICATIONS FOR COMMUNICATION
    IMPLICATIONS FOR THE ABANDONMENT OF NOVEL TECHNOLOGIES
    IMPLICATIONS FOR GOAL PURSUIT
    IMPLICATIONS FOR PRIVACY
    CONCLUDING THOUGHTS
REFERENCES
LIST OF TABLES

TABLE 1 – Mediation Results for Experiment 2 (Chapter 2)
TABLE 2 – Mediation Results for Experiment 3 (Chapter 2)
TABLE 3 – Mediation Results for Experiment 6 (Chapter 2)
TABLE 4 – Mediation Results for Experiment 7 (Chapter 2)
TABLE 5 – Mediation Results for Experiment 2 (Chapter 3)
LIST OF FIGURES

FIGURE 1 – Theoretical Model Describing the Process Underlying the Adoption of Behavior Tracking Products
FIGURE 2 – Anticipated Autonomy Mediates the Relationship Between Technology-backed Behavior Tracking and Willingness to Use Behavior Tracking Product
FIGURE 3 – Anticipated Autonomy and Anticipated Social Evaluation Concerns Serially Mediate the Relationship Between Technology-backed Behavior Tracking and Willingness to Use Behavior Tracking Product
FIGURE 4 – Anticipated Autonomy and Anticipated Social Evaluation Concerns Serially Mediate the Relationship Between Technology-backed Behavior Tracking and Willingness to Use Behavior Tracking Product (Elderly Sample)
FIGURE 5 – Anticipated Autonomy and Anticipated Social Evaluation Concerns Serially Mediate the Relationship Between Technology-backed Behavior Tracking and Job Preference
FIGURE 6 – Anticipated Negative Evaluation Mediates the Relationship Between Frequent Monitoring and Preference for Avatars
ACKNOWLEDGEMENTS
Many people have contributed to making me the person and scholar I am today. I am certain
that I could not have completed this dissertation without their support and encouragement. I
would like to take this opportunity to convey my deepest gratitude to all the people who made
this possible.
First and foremost, I would like to thank my advisor and dissertation committee chair, Nate
Fast. Nate’s passion for pursuing relevant and impactful research and his commitment to science
have deeply inspired me and shaped my scholarship. His insightful mentorship and guidance
enabled me to find meaning in this process, especially in times of self-doubt and dejection. Nate
has supported me in countless ways, both intellectually and emotionally, throughout my time in
the program. His encouragement and thoughtfulness kept me motivated every time I hit a
roadblock and helped me regain my confidence during the most difficult phases of this process.
By working with Nate, I learned how to develop interesting and relevant ideas, pursue them with
rigor, and communicate them effectively through writing, presentations, and conversations.
Nate’s mentorship is the primary reason why a career in academia was possible for me. I am
extremely fortunate to have Nate as my advisor and I thank him from the bottom of my heart for
everything.
I would also like to thank the rest of my dissertation committee, for each one of them has
inspired me, shaped my thinking, and provided feedback and support at various points in the
development of this dissertation. First, I thank Kyle Mayer for being a constant source of
strength and optimism throughout my tenure in the program. He taught me how to navigate
academia and enabled me to find my voice as a scholar. He was one of the first people to teach
me the importance of developing a research identity and staying true to that identity. I learned a
great deal from Kyle about managing relationships, communicating with diverse audiences, and
navigating the job market. Kyle’s guidance, mentorship, encouragement, and belief in me are
some of the main reasons for where I am today.
I thank Cheryl Wakslak for teaching me the very basics of research including experimental
design, statistics, precision and rigor, and good writing. Cheryl showed me the importance of
persistence in this process and motivated me to go for the big ideas with grit. Her ability to
manage multiple projects simultaneously and effectively balance work and life has always
amazed me. Thank you, Cheryl, for always rooting for me.
Leigh Tost has inspired me in numerous ways. Leigh taught me the importance of paring
down to the most critical aspect of an idea and creating the building blocks of a paper from there.
By working with Leigh and just watching her think, I learned a great deal about developing good
theory, for she showed me the magic of the creative process. Every meeting with Leigh has left
me feeling completely excited about being a researcher. I thank Leigh for being a true
inspiration.
Jonathan Gratch is an invaluable member of my committee. Being an expert in both
computer science and psychology, Jon helped me understand exactly how some of these novel
technologies work and how I could begin thinking about the psychological impact of such
technologies. I am forever grateful to Jon for showing me how to navigate the grant-writing process
and how to manage research teams, both immensely valuable skills that I learned by working
with him. I thank Jon for his time, generosity, and encouragement.
I would also like to thank my many mentors at USC who have guided and supported me
in numerous ways. I thank Nandini Rajagopalan for being the beacon of light during the most
trying times in the program. Nandini’s incredible kindness, thoughtful guidance, and astute
advice helped me find my purpose when I thought it was lost. I am eternally grateful to Nandini
for teaching me a great deal about life and career, and for inspiring me to be a better person and
scholar. I thank Peer Fiss for always believing in me, cheering me on, and genuinely caring
about my success and happiness. Peer’s unending support throughout my tenure in the program
allowed me to persist through the hard times and emerge stronger on the other end. I am grateful
to Scott Wiltermuth for showing me how to be an effective teacher in an MBA classroom, for
providing numerous resources to help me become a better writer, statistician, and scholar, and
for offering honest advice and feedback at various points during this process. I thank Sarah
Bonner for motivating me to go for the “big potatoes”, for making me strive for accuracy and
appropriateness in my experimental designs, for guiding me and other graduate students through
the process of creating a new PhD organization, and for including me in her Thanksgiving
dinners.
There are several other people who made USC MOR a truly nurturing and supportive
place. I thank Joe Raffiee, Nan Jia, Sarah Townsend, Eric Anicich, Paul Adler, and Tom
Cummings for generously offering me their time, providing feedback and advice, and helping me
in many ways at various points during the last six years. I thank Queenie Taylor, Ruth Joya,
Martha Maimone, and Marie Dolittle for helping me with various administrative requests during
my tenure in the program. Without their promptness and efficiency, my day-to-day life at MOR
would not have gone so smoothly.
My friends were my pillars of support and they greatly influenced my development in the
last six years. I thank Derek Harmon for inspiring me, believing in me, and for showing me that
there is, in fact, light at the end of the tunnel. I thank Adele Xing for always being there for me
and for making this process enjoyable by going through it with me. I thank Jake Grandy for the
many fun conversations and for supporting me during the job market. I thank Arianna Uhalde for
being one of the most supportive people I know, and for cheering me on at every stage. I thank
Stephanie Smallets for her incredible optimism, love, and friendship. I thank Priyanka Joshi for
her thoughtfulness and friendship. This process would not have been half as much fun without
my friends.
I owe a great deal of thanks to my family for their constant love and support, and for
making me who I am today. I thank my mom for giving up so much and for doing everything in
her power to give me access to some of the best opportunities I had only dreamed of. I thank my
brother, Naren, for his unending love and for always being my rock. Amma, Naren – thank you
both for being so patient with me these last few years. I thank our dog, Nala, for bringing so
much love and joy into our lives and for always reminding me to stop and smell the roses, even
when things get a little crazy. I am so grateful for his cuddles, kisses, and hugs – they are, quite
literally, the most effective panacea ever.
Most of all, I am grateful to my husband, Sathish, who was the absolute driving force
behind me pursuing a doctoral degree. His passion for science, his drive for innovation, and his
constant commitment to learning have deeply inspired me. Through his unmatched love and
support, Sathish helped maintain my sanity during my lowest points in this process and made me
a better, happier person. He never once let me give up on myself, and for that I am eternally
grateful. I thank him for always knowing how to make me laugh, for showing me the silver
lining in every dark cloud, for patiently enduring me even when I was not a fun person to be
around, and for always putting my needs and wants before his own. Sathish, you are the
foundation of my existence and you inspire me every day to be a better human being. I dedicate
this dissertation to you.
CHAPTER 1
THE PSYCHOLOGICAL IMPACT OF NOVEL TECHNOLOGIES
In recent years, an unprecedented proliferation of technological devices has led to marked
changes in human behavior. This is especially evident in modern workplaces that leverage
advances in numerous areas such as text analytics, natural-language processing, data science, and
the Internet of Things (IoT) to create novel technological tools that can influence employee
behaviors and organizational outcomes (Cain, 2016). For example, collaboration tools (e.g.,
Slack, Google Drive) have expanded the limits of teamwork by allowing employees from
different parts of the world to work remotely with each other. Similarly, immersive technologies
such as virtual reality (VR) and augmented reality (AR) enable employees to virtually interact
and work with each other in a digital workplace. In addition to enabling new ways for employees
to connect, novel workplace technologies are also transforming how employees are being
managed. Managers now have access to a variety of behavior tracking technological tools such
as applications on employees’ phones and computers, sociometric badges equipped with
microphones and sensors, and intelligent software systems that allow them to track and monitor
employees’ behaviors more closely than ever before. From these examples, it is evident that
technological advances have the potential to upend and transform traditional workplaces by
altering the ways in which organizational actors engage with each other and with their work.
Technological innovations have motivated organizational scholars to consider how these
innovations might influence various organizational processes and outcomes. For example, a large
body of extant research on technology in organizational behavior examines how technological
tools influence group performance and satisfaction (e.g., Straus & McGrath, 1994), turnover
intentions (e.g., Golden, Veiga & Dino, 2008), perceptions of organizational fairness (e.g.,
Chapman, Uggerslev & Webster, 2003), and the effectiveness of interpersonal processes (e.g.,
Maruping & Agarwal, 2004). Other organizational research on technology focuses on
understanding the effect of technological innovations on organizational processes such as
monitoring (e.g., Alge, 2001; Alge, Ballinger, & Green, 2004), collaboration (e.g., Vignovic &
Thompson, 2010), leadership (e.g., Hoch & Kozlowski, 2014), decision making (e.g., Colquitt,
Hollenbeck & Ilgen, 2002) and communication (e.g., Gajendran & Joshi, 2012; Mesmer-Magnus
& DeChurch, 2009). More recently, organizational scholarship on technology is beginning to
explore issues such as redesigning work for a digital workforce (e.g., Colbert, Yee & George,
2016) and the ethicality of new technologies (e.g., Bonnefon, Shariff & Rahwan, 2016; Greene,
2016). Thus, the existing approach to studying technology in organizational contexts treats
technology as a tool that organizational actors can leverage to impact organizational
processes and outcomes. A key limitation of this approach is that it overlooks the social
psychological functions of technology, particularly the psychological effects that it has on
individuals who operate in social contexts.
This dissertation addresses this limitation by aiming to advance current understanding of
the psychological impact of novel technologies on individuals. In particular, this dissertation
examines how the use of novel technologies to track and connect with people influences
organizational actors’ perceptions, decisions, and behaviors. To address this goal, I ground this
examination in individuals operating in social contexts (especially organizations) and explore
how technology shapes people’s social and psychological experiences in such contexts.
Specifically, I propose and test the idea that technology reduces people’s social evaluation
concerns, thereby attenuating the evaluative aspect of social situations while highlighting the
informational aspect of such situations. I build evidence for this claim by testing this idea in the
context of two rapidly spreading types of new technologies – behavior tracking products and
virtual reality. These two technologies are among the top ten technological trends that are
expected to have a significant strategic impact on organizations in 2018 (Gartner, 2017).
Consistent with this expectation, it is also predicted that worldwide spending on behavior
tracking products and virtual reality will together exceed $200 billion by 2020 (Gartner,
2016; IDC, 2017). Considering the potential organizational and societal impact of these
technologies, this dissertation seeks to understand the psychological processes that drive people’s
decisions to adopt and use these technologies, and aims to shed light on the behavioral
consequences of using these technologies. In the remainder of this chapter, I describe how social
evaluation is a defining characteristic of social contexts and outline my central argument around
how technology attenuates evaluative pressures prevalent in social contexts.
Social Evaluation: A Defining Aspect of Social Contexts
When people interact with others or operate in the presence of an audience, they feel
concerned about being negatively evaluated by others (Schlenker & Leary, 1982). These
concerns result from being in an evaluative situation where one’s behavior can be scrutinized by
others and can possibly be rated as inadequate. In social interactions where people become the
focus of others’ attention, the prospect of interpersonal evaluation leads them to perceive a lower
likelihood of obtaining satisfactory judgments from others (Schlenker & Leary, 1982). In this
way, social situations inherently allow for possible evaluation by others and can make people
focus on the possibility of being negatively evaluated by others (Leary, 1983; Van Boven,
Loewenstein & Dunning, 2005). Potential negative evaluations can make people feel inadequate
in evaluative situations (Muller & Butera, 2007).
The perception that one may possibly be negatively evaluated by others in a social
situation is psychologically aversive to people, as it affects how others perceive and treat them
(Goffman, 1959; Leary & Kowalski, 1990), and also affects how people view themselves (Leary
& Baumeister, 2000). Negative social evaluation is also psychologically aversive as it leads to a
range of negative feelings including feelings of embarrassment (Modigliani, 1971), social
anxiety (Schlenker & Leary, 1982), and shame (Tangney, 1992). For example, in social
situations that entail performing before a competent (versus incompetent) audience where the
possibility of negative evaluation is more salient, people report experiencing greater tension and
nervousness (Jackson & Latane, 1981) and behave in ways indicative of embarrassment (Brown
& Garland, 1971; Garland & Brown, 1972). Similarly, perceived negative evaluation of one’s
global self by others leads to feelings of shame. Shame, in turn, is often associated with a feeling
of being exposed to others such that people think about how their defective self would appear to
others (Tangney, 1999). Social situations also result in social anxiety when people are motivated
to make a specific impression on others, but expect that others will react unfavorably toward
them or negatively evaluate them (Schlenker & Leary, 1982).
In addition to being psychologically aversive, the possibility of being negatively
evaluated by others is a physiological stressor for individuals. Cortisol is a hormone produced
in response to threat experiences, and increases in cortisol levels have been linked to receiving
negative social feedback (Koslov, Mendes, Pajtas, & Pizzagalli,
2011; Jamieson & Mendes, 2016). In a meta-analysis of 208 acute stressor studies (Dickerson &
Kemeny, 2004), performance tasks characterized by social evaluative threat (e.g., presence of an
evaluative audience) were associated with cortisol responses more than four times larger than
tasks without these evaluative elements. Taken together, these results suggest that social-
evaluative contexts that may potentially result in negative evaluation by others lead to conditions
that can be both psychologically and physiologically aversive.
Technology and Social Evaluation
Given the increasing prevalence of novel technologies in our society, it is important to
consider how technology can influence people’s experiences of social evaluation. To begin
understanding the effect of technology on experiences of social evaluation, we must consider
how people experience social situations. Social situations have two functional aspects – an
informational aspect and a controlling aspect (Ryan, 1982). A situation that is experienced as
informational is one that provides people with behaviorally relevant information in the absence
of pressure for a particular outcome. On the other hand, a situation that is experienced as
controlling is one that pressures people to act in a specific manner. Various contextual factors,
either objectively or in the way they are construed, can influence whether a particular situation
is experienced as informational or as controlling. The relative salience of the informational
aspect and the controlling aspect of a given situation directly impacts how that situation is
perceived (Ryan, 1982). That is, when the salience of the controlling aspect of a situation is
reduced, that situation is likely to be experienced as more informational, and vice versa.
Based on the literature on social evaluation, it is evident that social situations allow for
possible negative social evaluation by others. Therefore, in those situations, individuals focus on
the possibility of negative evaluation (Leary, 1983; Van Boven, Loewenstein & Dunning, 2005).
This awareness of the potential for negative evaluation in social situations imposes external
pressures on people to behave in certain ways. Particularly, people are constrained by the need to
avoid making a negative impression on others (Nicholls, 1984; Ryan & Connell, 1989). Thus,
individuals operating in social contexts focus on the evaluative aspect of such contexts.
In social contexts, interacting through or with technology can switch people’s focus from
the evaluative aspect of such contexts to the informational aspect. Technology eliminates the
salience of humans in social contexts and, in doing so, attenuates undesirable social cues that
may otherwise be present in social interactions. Thus, technology mitigates social risks
associated with evaluation, and as a result, reduces people’s concerns about social evaluation.
This is consistent with research suggesting that social contextual cues such as facial expressions,
moods, demeanor, and body language are minimized in technology-mediated interactions (e.g.,
Ang, Cummings, Straub & Earley, 1993; Kiesler, Siegel & McGuire, 1984; Sproull & Kiesler,
1986). Therefore, technology reduces the strength of social evaluative pressures and acts as a
buffer for individuals in situations where they anticipate negative evaluation. By reducing the
salience of the evaluative aspect of social situations, technology allows people to experience the
informational aspect of such situations more saliently. Thus, technology attenuates the evaluative
aspect of social contexts while highlighting the informational aspect.
Dissertation Outline
I test this central idea in two different contexts. First, in Chapter 2, I examine this idea in
the context of behavior tracking products, one of the most widespread and popular novel
technologies. I explore the psychological drivers of using behavior tracking products in social
contexts, especially organizations. In this chapter, I examine how the shift in focus from
evaluation to information influences people’s willingness to adopt and use behavior tracking
products. Second, in Chapter 3, I build on the idea that technology switches people’s focus from
evaluation to information to examine when and why people choose virtual reality tools for social
interactions in organizational contexts. This chapter addresses this question by exploring the
psychological process that motivates managers to turn to virtual reality tools to interact with
subordinates in extreme monitoring situations. Finally, Chapter 4 highlights the organizational
implications of this dissertation and puts forth broader conclusions that aim to motivate future
research around novel technologies in the workplace.
CHAPTER 2
BEHAVIOR TRACKING PRODUCTS: ADOPTION, USE, AND EFFECTS
Abstract
The use of behavior tracking products is spreading rapidly, in spite of the fact that people may
experience tracking as psychologically aversive. The present research takes a step toward
illuminating this phenomenon by examining the conditions under which people are more likely
to adopt behavior tracking products and services. Seven experiments tested the predictions that
people are more likely to adopt behavior tracking products when these products are technology-
operated versus human-operated, and that this tendency is mediated by expectations that they
will feel less evaluated and more autonomous when using technological (rather than human)
assistance. The results supported my predictions, revealing that people preferred behavior
tracking products that were technology-operated, as opposed to human-operated (Experiments 1-
7), and this preference was driven by increased expectations of autonomy (Experiments 2, 3, 5,
6, and 7). Further, consistent with my theorizing, the anticipated autonomy associated with
technology was driven by reduced social evaluation concerns (Experiments 3-7).
Would you allow a person, or a team of people, to keep track of your every move,
keystroke, and physical location at work, or all of your personal health- and sleep-related
behaviors and habits at home? If you are like most people, the answer is no. In fact, intensive
tracking can undermine people’s expectations of self-determination and can be experienced as
psychologically aversive (Niehoff & Moorman, 1993; Sewell & Barker, 2006). Given people’s
desire for self-determination, it is interesting to note the rapidly spreading use of behavior
tracking products that closely and continuously track users’ behaviors. In the present research, I
seek to better understand the psychology behind this puzzling tendency for people to expect
intensive tracking to be aversive, on the one hand, while freely embracing behavior tracking
products on the other.
Advances in Behavior Tracking Technology
Recent years have seen an unprecedented proliferation of technological devices
that are fundamentally altering the human experience. Reports suggest that by 2020, people will
be using more than 40 billion devices that are connected to the Internet, allowing them to
transmit data wirelessly (ABI Research, 2014). This phenomenon, characterized by a network of
physical objects that contain embedded technology to interact with their environments, is
referred to as the ‘Internet of Things (IoT)’ (Gartner, 2013). One of the most popular
manifestations of the IoT is that of behavior tracking products. This class of technological
devices includes smart watches, health-related smart wristbands, virtual-reality headsets and
smart glasses, to name a few. Behavior tracking products continuously track information about
the user and have the potential to provide feedback based on that information.
In 2014, over 70% of American consumers were aware of such technological devices and
about 15% reported using a behavior tracking product (Nielsen, 2014). The increasing popularity
of these devices is evident in the rapid rate at which these products are being adopted. Recent
reports suggest that sales from wearable devices generated $28.7 billion in revenue in 2016 and
this is expected to grow to $61.7 billion by 2020 (Gartner, 2016). Importantly, organizations are
beginning to integrate behavior tracking products into the workplace to leverage them for
motivating employees and improving productivity. In fact, organizations handed out over 12
million wearable behavior tracking devices in 2016 and this number is expected to reach around
83 million by 2021 (ABI Research, 2016).
These trends suggest that behavior tracking products are becoming increasingly common,
both in people’s work and personal lives. To fully understand the specific psychological
mechanisms that underlie people’s decisions to adopt behavior tracking products, it is important
to begin by examining why tracking may be experienced as aversive, and how behavior tracking
products have the potential to sidestep these aversive effects when they are technology-operated
(rather than human-operated).
The Aversiveness of Tracking
When people’s behaviors are tracked, information pertaining to them may be accessible
to others. People are reticent about sharing their personal information and may be unwilling to let
others track them. Indeed, extant research suggests that people are inherently opposed to tracking
(Chalykoff & Kochan, 1989) and perceive it as autonomy-reducing (Niehoff & Moorman, 1993),
privacy-invading (Kulik & Ambrose, 1993), denigrating and stress-inducing (Nussbaum &
duRivage, 1986). Similarly, research on organizational surveillance indicates that when
employees’ behaviors are tracked closely by the organization, they may experience surveillance
as coercive (Sewell & Barker, 2006). Moreover, closely tracking employees’ behaviors is
expected to diminish their perceptions of fairness (Alge, Ballinger & Green, 2004) and could
lead to psychological reactance. Taken together, these findings suggest that tracking can
potentially be experienced as psychologically aversive.
Given people’s desire for self-determination, it is noteworthy that three out of five people
who responded to the State of Workplace Productivity Survey said that they would be willing to
try behavior tracking wearable devices if they helped them do their job better (Corsello, 2013).
These trends lead to an important question – why are people willing to adopt behavior tracking
products when doing so would subject them to increasingly intensive tracking?
Behavior Tracking Technology, Social Evaluation, and Anticipated Autonomy
One of the reasons people adopt new products is to gain an increased sense of control in
relation to their goals. People seek control in order to feel competent (Deci & Ryan, 1987),
achieve outcomes (Schifter & Ajzen, 1985), and improve performance and well-being (Ajzen &
Madden, 1986). However, although technology can increase control by helping individuals better
achieve their goals (Griffith, 2011), it can also lead to dependence and psychological stress
(Kushlev & Dunn, 2015; Mick & Fournier, 1998; Weil & Rosen, 1997). Moreover, it is
important to note that assistance from humans also leads to increases in control. Yet, I suggest
that people are more likely to embrace behavior tracking products that are automated over those
that have human involvement. To provide theoretical support for this claim, I move beyond
perceived control as an explanatory variable and instead examine the construct of autonomy. I
propose that concerns about being evaluated negatively by other people influence the subjective
experience of autonomy which, in turn, leads to preferences for products that are technology-
operated rather than human-operated.
Concerns about Social Evaluation
Social situations (i.e., situations in which people interact with others or are aware of an
audience) allow for possible evaluation by others where one’s behaviors can be scrutinized and
judged. As a result, these situations can cause people to focus on the possibility of being
negatively evaluated by others (Leary, 1983; Van Boven, Loewenstein & Dunning, 2005) and can
trigger concerns about negative evaluation (Schlenker & Leary, 1982). The perception that one
may be negatively evaluated by others is psychologically aversive (Goffman, 1959; Leary &
Baumeister, 2000; Leary & Kowalski, 1990) and leads to a range of negative feelings, including
feelings of embarrassment (Miller, 1995; Modigliani, 1971), social anxiety (Schlenker & Leary,
1982), and shame (Tangney, 1992). In addition, the possibility of being negatively evaluated by
others is also a physiological stressor. Increases in cortisol (the hormone that is produced as a
response to threat) in the body have been linked to receiving negative social feedback (Koslov,
Mendes, Pajtas, & Pizzagalli, 2011; Jamieson & Mendes, 2016). Thus, social-evaluative contexts
can be both psychologically and physiologically aversive.
Social Evaluation and Anticipated Autonomy
Social evaluation has implications for autonomy, which is characterized as experiencing
perceived freedom to choose one’s actions, or “an inner endorsement of one’s actions, the sense
that they emanate from oneself and are one’s own” (Deci & Ryan, 1987, p. 1025). Being
autonomous signifies that one experiences oneself as the initiator of one’s own behaviors, selects
desired outcomes, and decides how to achieve them without being constrained by external
pressure. By contrast, when one lacks a true sense of choice and feels that one has to do what
one is doing (owing to external factors), there is a lack of autonomy and a sense of being
controlled (Rotter, 1954). A lack of autonomy prevails even when
people intentionally seek to achieve certain outcomes, but in reality, are constrained by certain
factors to obtain those desired outcomes (deCharms, 1968; Deci & Ryan, 1987).
Cognitive evaluation theory (Ryan, 1982; Deci & Ryan, 1985) offers a comprehensive
framework for determining when and why contextual factors facilitate or restrict autonomy, and
consequently, intrinsic motivation. According to this theory, all contextual factors can be viewed
as having two functional aspects: a controlling aspect and an informational aspect. When one
perceives that a contextual factor introduces pressure to attain a particular behavioral outcome, it
is construed as controlling and, as a result, diminishes perceived autonomy (Ryan, 1982).
Alternatively, when a contextual factor is perceived as a source for obtaining behaviorally
relevant information, in the absence of pressure to obtain particular outcomes, it is construed as
informational. When this informational aspect of the context is salient, it makes people perceive
an internal locus of causality for their behaviors and does not hinder autonomy.
In determining which contexts are autonomy-supportive (i.e., allow for autonomy) and
which contexts restrict autonomy, it is important to note that the way in which people construe
contextual factors is critical (Deci & Ryan, 1985). Construing a contextual factor as
informational fosters autonomy, whereas construing it as controlling diminishes autonomy.
Contextual factors (e.g., the offer of a reward, the existence of a deadline, positive feedback) can
facilitate or restrict autonomy depending on whether they are construed as informational (i.e., as
supporting autonomy) or controlling (i.e., as thwarting autonomy), respectively (Deci, Connell &
Ryan, 1989). Studies have indicated that, in many circumstances, the availability of choice
(Zuckerman, Porac, Lathin, Smith & Deci, 1978) and positive feedback (Blanck, Reis &
Jackson, 1984; Deci, 1971) are factors that are construed as informational (i.e., autonomy-
supportive), while factors such as task-contingent rewards (e.g., Ryan, Mims & Koestner, 1983),
the existence of deadlines (Amabile, DeJong & Lepper, 1976) and threats of punishment (Deci &
Cascio, 1972) are construed as controlling (i.e., undermining autonomy).
Although several contextual factors facilitate or restrict autonomy depending on how they
are construed, I focus my attention on a core factor that characterizes social situations: the
possibility of social evaluation. As noted earlier, the perception that one is being negatively
evaluated by others is both (a) psychologically aversive, leading to feelings of embarrassment
(Miller, 1995; Modigliani, 1971), social anxiety (Schlenker & Leary, 1982), and shame
(Tangney, 1992), and (b) physiologically stressful (e.g., Dickerson & Kemeny, 2004). As a
result, when operating in social contexts or when allowing others to closely observe their
behaviors, people are constrained by the need to obtain positive evaluative feedback or avoid
making a negative impression on others. In this way, people often experience an external
pressure to obtain particular outcomes (Deci & Ryan, 1987).
Based on cognitive evaluation theory (Deci & Ryan, 1985; Ryan, 1982), when a situation
introduces external pressure to obtain a particular outcome, individuals perceive that situation as
‘controlling.’ Given that one’s behavior in an evaluative situation is constrained by an external
factor (i.e., the possibility of negative evaluation), one experiences the lack of a true sense of
choice to choose one’s own behavior in that situation (Deci & Ryan, 1987). Such a situation
reduces one’s ability to feel an internal locus of control for one’s own behavior and, in so doing,
hinders perceived autonomy (Deci & Ryan, 1987; Ryan & Connell, 1989). In fact, studies show
that the mere presence of an evaluator, even without rewards or aversive consequences, can
undermine autonomy (e.g., Deci & Ryan, 1987; Ryan, 1982). In contrast, when a situation
provides individuals with behaviorally relevant information without the external pressure for
obtaining an outcome, it is perceived as ‘informational’ and does not hinder perceived autonomy
(Ryan, 1982).
Building on these arguments, I propose that using human assistance to track behavior and
pursue goals makes people feel subject to social evaluation and, as a result, hinders perceived
autonomy. In contrast, products backed by technology offer informational feedback without
inducing external pressure to obtain certain outcomes and, in so doing, sidestep the autonomy-
reducing effects associated with social evaluation. Thus, I argue that people are more likely to
adopt technology-operated behavior tracking products (relative to human-operated behavior
tracking products) because they anticipate greater autonomy due to lower social evaluation
concerns (See Figure 1).
Figure 1. Theoretical model describing the relationship between social evaluation,
anticipated autonomy and the willingness to adopt behavior tracking products that are
technology-operated versus human-operated.
Overview of the Present Research
I conducted seven experiments to test the following predictions: (a) people prefer
technology-operated behavior tracking products compared to human-operated behavior tracking
products, (b) this preference is mediated by increased anticipated autonomy, and (c) the
anticipated autonomy associated with technological assistance is driven by reduced social
evaluation concerns. For each experiment, I collected all data in single, complete batches and did
not conduct any analyses until all data for a given experiment were collected. I report all
measures, manipulations and exclusions. Based on recent recommendations for increased power
(e.g., Simmons, Nelson, & Simonsohn, 2013) and a power analysis (G*Power software; Faul,
Erdfelder, Buchner & Lang, 2009) that revealed that I needed at least 156 participants in
between-subjects designs and at least 82 participants in within-subjects designs to have adequate
power (.80) to detect medium effects (0.4 – 0.5), I recruited large sample sizes in each study. I
collected these data between mid-2015 and early 2018.
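To make this power calculation concrete, the sketch below approximates it in Python with statsmodels rather than G*Power. The test family and the two-tailed α = .05 are assumptions on my part, so the resulting sample sizes are approximate rather than an exact reproduction of the G*Power output:

```python
# Approximate a priori power analysis (sketch; the original analysis used G*Power).
# Assumptions not stated in the text: two-tailed t-tests at alpha = .05.
from statsmodels.stats.power import TTestIndPower, TTestPower

alpha, power = .05, .80

# Between-subjects design: an independent-samples t-test with a medium effect
# (d ~ .45) requires roughly 78 participants per condition, i.e., ~156 in total.
n_between = TTestIndPower().solve_power(effect_size=0.45, alpha=alpha, power=power)
print(round(n_between) * 2)

# Within-subjects design: a paired t-test on difference scores. Note that d = .40
# yields ~52 here; the reported minimum of 82 implies somewhat different
# G*Power settings or a smaller assumed effect.
n_within = TTestPower().solve_power(effect_size=0.4, alpha=alpha, power=power)
print(round(n_within))
```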
Experiment 1
In Experiment 1, I examined my first hypothesis that people are more likely to adopt
technology-operated behavior tracking products compared to human-operated behavior tracking
products.
Method
Participants
I recruited 201 undergraduates (52.2% female; M_age=21.24) from the University of
Southern California’s Marshall School of Business (USC Marshall) to participate for course
credit. I excluded one participant who participated in the study more than once. In order to
ensure that participants were not subject to preconceived biases about the product I used in my
manipulation, I screened the data for participants who reported owning/using a similar product,
resulting in the exclusion of seventeen participants, for a final sample size of 183 (50.8% female;
M_age=21.31). Participants were randomly assigned to either a technology condition (n=88) or a
human condition (n=95).
Materials and Procedure
Participants were informed that they were participating in a study about a behavior
tracking product that would monitor and improve time management by collecting and analyzing
data about work habits and behaviors. I mentioned that the product was developed by a team of
university alumni and that they were surveying students to gauge their interest in the product.
The product description was modified to manipulate whether the product was technology-
operated or human-operated. Participants in the technology [human] condition read the following
description of the product:
GradTime is a new computer application designed to help college students manage their time
during college effectively. Students can easily install GradTime on their computers and smart
devices (smartphones, tablets, smartwatches etc.) and track how they manage their time
during the semester.
• GradTime is an application that is fully controlled by a computer algorithm designed to
help you (the user) [fully controlled by a technical analyst who is assigned to you (the
user)].
• This algorithm will [person will work remotely to] monitor how you manage your time in
the following ways:
1. It [He/she] will provide detailed daily reports on how much time you spent on
different websites and applications when your GradTime is turned on.
2. It [He/she] will summarize your work patterns and send you a daily report on how
productive you were and whether you met your daily work goals.
3. It [He/she] will also offer daily suggestions on how to better manage your time
based on your current work habits.
4. It [He/she] will provide daily reminders about your various appointments.
5. It [He/she] will also offer weekly suggestions on how to better manage your
social time depending on your work engagements that week.
Next, participants responded to the following three questions with yes/no responses: “Do
you want to be notified when GradTime is released later this semester?”; “Do you want to be
emailed with a link to download GradTime when it is released later this semester?”; “Would you
be interested in participating in GradTime’s pilot test?” (I informed participants that they would
have the opportunity to download and use the beta version of the application on their computers
and smart devices). I also asked participants to rate their willingness to adopt the product on two
continuous measures (“I can see myself using GradTime for managing my time”; “I can imagine
using GradTime during the semester for being more effective in managing my time”) on a scale
from 1 (strongly disagree) to 7 (strongly agree). The three dichotomous items and the two
continuous items were standardized and combined to form a willingness to adopt measure
(α=.84), my primary dependent variable of interest. In addition to age and gender, ethnicity was
also measured.
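Because this composite mixes yes/no responses with Likert ratings, a brief sketch of the scoring may be useful. The column names and toy data below are hypothetical placeholders, but the procedure (z-scoring each item, averaging, and checking reliability) follows the description above:

```python
import pandas as pd

# Toy data standing in for the five items above: three yes/no items coded 0/1
# and two 1-7 Likert ratings. Column names are hypothetical placeholders.
df = pd.DataFrame({
    "notify": [1, 0, 1, 1, 0], "email": [1, 0, 0, 1, 0], "pilot": [0, 0, 1, 1, 0],
    "use_1":  [6, 2, 5, 7, 3], "use_2": [5, 3, 5, 6, 2],
})

# Standardize each item, then average the five z-scores per participant.
z = (df - df.mean()) / df.std(ddof=1)
willingness_to_adopt = z.mean(axis=1)

# Cronbach's alpha for the standardized items (reported as .84 above).
k = z.shape[1]
alpha = k / (k - 1) * (1 - z.var(ddof=1).sum() / z.sum(axis=1).var(ddof=1))
print(willingness_to_adopt.round(2).tolist(), round(alpha, 2))
```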
Results
Results revealed systematic differences in participants’ willingness to adopt the product.
Consistent with my hypothesis, participants in the technology condition were more willing to
adopt the product (M=.13; SD=.79) than those in the human condition (M=-.12; SD=.74,
t(181)=2.24, p=.027, 95% CI of the difference=[.03, .48], d=0.33). These results were robust
when controlling for age, gender, and ethnicity. I also examined whether this pattern existed
when considering the dichotomous and the continuous measures separately. Results of the
dichotomous measures (standardized and combined) revealed that there was a marginal
difference between conditions such that participants in the technology condition were more
willing to adopt the product (M=.12; SD=.84) than those in the human condition (M=-.11;
SD=.80), F(1,181)=3.39, p=.067, ηp²=0.02. Similarly, results of the continuous measures
(combined as a scale) revealed that there was a significant difference between conditions such
that participants in the technology condition were more willing to adopt the product (M=.16;
SD=.99) than those in the human condition (M=-.14; SD=.92), F(1,181)=4.55, p=.03, ηp²=0.03.
Thus, the findings supported my prediction that people will prefer behavior tracking products
that are technology-operated over those that are human-operated.
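The analyses just reported can be sketched as follows; the simulated data and variable names are hypothetical stand-ins, but the two steps (an independent-samples t-test, then an OLS model adding the demographic controls as a robustness check) mirror the description above:

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Simulated stand-in for the Experiment 1 data (n = 183; names hypothetical).
rng = np.random.default_rng(0)
n = 183
df = pd.DataFrame({
    "condition": rng.integers(0, 2, n),            # 0 = human, 1 = technology
    "age": rng.normal(21.3, 2.0, n),
    "gender": rng.choice(["female", "male"], n),
    "ethnicity": rng.choice(["A", "B", "C"], n),
})
df["willingness"] = 0.25 * df["condition"] + rng.normal(0, 0.77, n)

# Independent-samples t-test (the text reports t(181)=2.24, p=.027, d=0.33).
t, p = stats.ttest_ind(df.loc[df.condition == 1, "willingness"],
                       df.loc[df.condition == 0, "willingness"])
print(f"t = {t:.2f}, p = {p:.3f}")

# Robustness check: condition effect controlling for age, gender, and ethnicity.
fit = smf.ols("willingness ~ condition + age + C(gender) + C(ethnicity)",
              data=df).fit()
print(fit.params["condition"], fit.pvalues["condition"])
```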
In Experiment 1, I also collected data on two additional measures for exploratory
purposes – perceived usefulness and perceived monitoring. Neither measure mediated
the relationship between technology and product desirability. I describe these exploratory
analyses below:
Perceived Usefulness. I measured the perceived usefulness of the product using the
following two items: “GradTime seems like it will be useful in helping me manage my time in
college” and “GradTime will be an effective tool to help me improve my time management
skills” (α=.91). Results revealed that there was a marginal difference in participants’ ratings of
perceived usefulness of the product (M_technology=4.86, SD=1.43; M_human=4.49, SD=1.30,
t(181)=1.83, p=.07, 95% CI of the difference=[-.03, .77], d=0.27).
Perceived Monitoring. I measured the extent to which people perceived they would be
monitored when using the product with the following two items: “I will feel uncomfortable using
GradTime for monitoring how I manage my time” and “I will be concerned that GradTime will
be constantly monitoring how I spend my time” (α=.56). Results revealed that there was a
marginal difference in participants’ ratings of perceived monitoring when using the product
(M_technology=4.24, SD=1.36; M_human=4.56, SD=1.34, t(181)=-1.60, p=.11, 95% CI of the
difference=[-.71, .07], d=-0.24). Additionally, when controlling for perceived usefulness,
participants in the technology condition anticipated feeling marginally less monitored than those
in the human condition (M_technology=4.21, SE=.14 versus M_human=4.59, SE=.14;
F(1,180)=3.68, p=.057, ηp²=.02). I examined the implications of these findings more closely in
Experiments 2 and 3.
Experiment 2
In Experiment 2, I examined why people prefer technology-operated behavior tracking
products over human-operated behavior tracking products, exploring the role of anticipated
autonomy as a possible mechanism. I also assessed perceived control and motivation as possible
alternative mechanisms for my findings.
Method
Participants
I recruited participants (53.5% female; M_age=28.59) from two different sources (N=301):
undergraduates (N=149; 59.1% female; M_age=20.83) from USC Marshall who participated for
course credit, and individuals recruited from Amazon’s Mechanical Turk (N=152; 48% female;
M_age=36.15) who received $0.75. Participants from both sources were recruited at the same time
and data were collected simultaneously. I selected participants from these two samples in order
to improve the generalizability of my findings.
I screened the data for participants who reported completing a similar survey, which had
been conducted for a related but separate project. This resulted in the exclusion of seventeen
participants, for a final sample size of 284 participants (54.6% female; M_age=28.61).
Participants were randomly assigned to either a technology condition (n=139) or a human
condition (n=145).
Materials and Procedure
Participants read the description of a new behavior tracking product and indicated how
they anticipated they would feel if they were to use the product. In order to construct a
conservative test of my prediction, participants in both the technology and human conditions
read product descriptions of a smartwatch that would keep track of their health goals/behaviors.
In the technology (human) condition, I stated that a computer (person) remotely controlled the
watch. In this way, the product described in the two conditions was exactly the same with the
single exception of human involvement (importantly, the human involvement was remote, rather
than in person, thus maintaining consistency across the two conditions).
Participants in the technology [human] condition read:
Company creates new smartwatch [smartcompanion], makes some majorly lofty promises.
A company has just released information about its innovative new smartwatch
[smartcompanion program] that is aimed at improving people’s overall health and effectiveness.
You can wear a watch that is controlled by a computer [remotely controlled by a person] and it
can help you improve all facets of your life. This new watch [companion] will “know you better
than you know yourself” and it [he/she] will “help you be a better human,” according to the
company that makes it.
So how exactly does the smartwatch [smartcompanion program] work?
You will be wearing the watch that is operated by a computer [remotely by a person]. The watch
will [Using the watch, the person will]:
1) Monitor your heart rate.
2) Track your runs and biking with GPS.
3) Monitor ultraviolet levels and tell you whether you need to wear sunscreen.
4) Provide guided workouts from Gold's Gym, Shape, and Men's Fitness, urging you to
do pushups and other exercises.
5) Show you how many calories you are burning.
6) Tell you how well you're sleeping.
7) Sync with your smart phone to display your calendar appointments and text messages.
8) Tell you the time.
After reading the product description, participants indicated how much autonomy they
anticipated having while using the product as well as the desirability of the product. They also
rated their ability to exercise control over health and their motivation to take care of their health
when using the product. All ratings were given on Likert scales anchored by 1 (Strongly
disagree) and 7 (Strongly agree).
Anticipated autonomy. Anticipated autonomy was measured using a six-item scale
developed for this study. Below are the items in the scale:
1. Using this product makes me feel like my behaviors are being dictated by someone or
something other than myself (Reverse-coded).
2. Using this product makes me feel controlled (Reverse-coded).
3. Using this product feels like I am being too closely observed, monitored and
evaluated (Reverse-coded).
4. Using this product makes me feel micromanaged² (Reverse-coded).
5. Using this product makes me feel interrupted because of being constantly checked in
on about my health (Reverse-coded).
6. Using this product makes me feel like I have complete freedom in how I take care of
my health.
² Reviewers were concerned that this item could be confounded with the manipulation. Per
reviewers’ request, I ran all analyses excluding this item from my autonomy scale and found that
all my results were robust.
An exploratory factor analysis revealed that all the items loaded onto one factor (loadings > .51).
Moreover, the scale had high reliability, α=.90.
Product desirability. Product desirability was measured using a three-item scale (“This
product is highly desirable to me”, “I favor using this product”, “I definitely want to use this
product”). The scale had high reliability, α=.97, and the three items loaded onto one factor in an
exploratory factor analysis (loadings >.96).
Perceived control. Perceived control over health was measured using a three-item scale (“This product would make me feel confident about my ability to take care of my health”, “This product would make me feel self-assured about my capabilities to take care of my health”, “When using this product, I will have mastered the skills necessary to take care of my health”). Results from an exploratory factor analysis indicated that all three items loaded onto one factor (loadings > .92), and the scale also had high reliability, α=.93.
Motivation. Motivation was measured using a four-item scale (“I would be highly
motivated to take care of my health,” “I would work hard to maintain good health,” “I would
heed to all the instructions that I receive in order to take care of my health,” “I would invest as
much effort as needed to maintain good health”). The scale had high reliability, α=.92.
Moreover, all the items loaded onto one factor in an exploratory factor analysis (loadings > .83).
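For transparency about the reliability figures reported above, the following minimal Python sketch shows how Cronbach’s alpha can be computed for a multi-item scale. The ratings matrix below is a small hypothetical example, not data from this study.

import numpy as np

def cronbach_alpha(items):
    # items: respondents x scale-items matrix of Likert ratings
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings: 5 respondents on a 4-item, 7-point scale
ratings = [[5, 6, 5, 6],
           [3, 3, 4, 3],
           [7, 6, 7, 7],
           [4, 5, 4, 4],
           [2, 2, 3, 2]]
print(round(cronbach_alpha(ratings), 2))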
Results
Consistent with my predictions, participants in the technology condition rated the product as more desirable (M_technology=4.01; SD=1.74) than those in the human condition (M_human=3.21; SD=1.78), t(282)=3.83, p<.001, 95% CI of the difference=[.39, 1.21], d=0.45. Also as expected, participants in the technology condition anticipated having higher autonomy (M_technology=4.00; SD=1.53) than those in the human condition (M_human=3.01; SD=1.27), t(282)=5.96, p<.001, 95% CI of the difference=[.66, 1.32], d=0.70.
There were no differences in participants’ ratings of perceived control (M_technology=4.24; SD=1.49 vs. M_human=4.01; SD=1.56), t(282)=1.25, p=.21, 95% CI of the difference=[-.13, .58], d=0.14. There was, however, a marginal difference in participants’ ratings of their motivation to take care of their health (M_technology=4.69; SD=1.31 vs. M_human=4.36; SD=1.48), t(282)=1.96, p=.051, 95% CI of the difference=[-.002, .65], d=0.24.
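The independent-samples comparisons above can be reproduced with standard tools; the following Python sketch illustrates the test and the Cohen’s d computation on simulated ratings (the actual study data are not reproduced here, so the printed values will not match those reported).

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tech = rng.normal(4.01, 1.74, 142)   # simulated technology-condition ratings
human = rng.normal(3.21, 1.78, 142)  # simulated human-condition ratings

t, p = stats.ttest_ind(tech, human)

# Cohen's d using the pooled standard deviation
n1, n2 = len(tech), len(human)
pooled_sd = np.sqrt(((n1 - 1) * tech.var(ddof=1) + (n2 - 1) * human.var(ddof=1)) / (n1 + n2 - 2))
d = (tech.mean() - human.mean()) / pooled_sd
print(f"t({n1 + n2 - 2}) = {t:.2f}, p = {p:.3f}, d = {d:.2f}")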
To test whether people’s preference for technological assistance was mediated by anticipated autonomy, I conducted bootstrapping analyses of the direct and indirect effects using the PROCESS macro (model 4; Hayes, 2013). The bootstrap results, based on a resampling size of 10,000, indicated that the total effect of technology on product desirability (b=.80, SE=.21, p<.001) decreased to a non-significant direct effect (b=.19, SE=.19, p=.33) when anticipated autonomy was included as the mediator. Moreover, the 95% bias-corrected confidence interval for the indirect effect through anticipated autonomy did not include zero (.40, .86), indicating that people’s preference for technological assistance was mediated by anticipated autonomy.
anticipated autonomy. In line with recommendations for reporting unstandardized coefficients
when independent variables are dichotomous (Darlington & Hayes, 2017), I present the
unstandardized regression coefficients for each pathway in Figure 2. The 95% bias-corrected
confidence intervals for the direct and indirect effects are presented in Table 1.
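Conceptually, the PROCESS model 4 analysis estimates the indirect effect as the product of the condition → autonomy and autonomy → desirability paths and bootstraps its confidence interval. A minimal Python analog is sketched below; it uses a percentile rather than a bias-corrected interval, and the data are simulated stand-ins rather than the actual study data.

import numpy as np

def bootstrap_indirect(x, m, y, n_boot=10000, seed=1):
    # Percentile bootstrap CI for the a*b indirect effect in X -> M -> Y
    rng = np.random.default_rng(seed)
    n, estimates = len(x), []
    for _ in range(n_boot):
        i = rng.integers(0, n, n)
        a = np.polyfit(x[i], m[i], 1)[0]                     # X -> M path
        design = np.column_stack([np.ones(n), x[i], m[i]])
        b = np.linalg.lstsq(design, y[i], rcond=None)[0][2]  # M -> Y path, controlling for X
        estimates.append(a * b)
    return np.percentile(estimates, [2.5, 97.5])

# Simulated stand-ins for condition, anticipated autonomy, and desirability
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 284).astype(float)
m = 3.0 + 1.0 * x + rng.normal(0, 1.4, 284)
y = 2.0 + 0.6 * m + rng.normal(0, 1.2, 284)
print(bootstrap_indirect(x, m, y))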
Because the manipulation had a marginal effect on motivation, I assessed it as a possible
mechanism. In contrast to anticipated autonomy, however, motivation did not mediate the
relationship between technology and product desirability. When I included motivation as a
mediator using the PROCESS macro (model 4; Hayes, 2013), results based on a resampling size of 10,000 revealed that the total effect of technology on product desirability (b=.80, SE=.21, p<.001) did not reduce to a non-significant direct effect (b=.55, SE=.17, p=.001), and the 95% bias-corrected confidence interval for the indirect effect through motivation included zero (.00, .51).
Figure 2. Anticipated autonomy mediates the effect of technology on product desirability. Unstandardized regression coefficients and standard errors for each path are reported. R² = .28 (Experiment 2).
Table 1. Mediation results for the hypothesized Technology → Anticipated Autonomy → Product Desirability path (Experiment 2)
It is possible that a behavior tracking product such as the smartwatch in Experiment 2 may be perceived differently by participants in terms of its typicality (i.e., how normative it seemed) depending on whether it was operated by technology or by humans. In order to address the issue of typicality, I conducted a follow-up study (reported below) in which I asked an independent sample of participants to rate the product on typicality. No significant difference between the technology and human conditions emerged (M_technology=5.40; SD=.95 vs. M_human=5.11; SD=1.18), t(145)=1.68, p=.10, 95% CI of the difference=[-.05, .64], d=0.27, suggesting that people perceived the technology-operated and the human-operated behavior tracking products as similarly typical. These results cast doubt on typicality as an alternative explanation for my results, particularly in light of the strong support for autonomy as a mediator.
Product Typicality Post-Test
It is possible that the product description I used in my manipulation in Experiment 2 varied in typicality such that people perceived the technology-backed behavior-tracking product (in this case, the smartwatch) as more typical than the one backed by humans. These perceptions of typicality could affect people’s preferences for adopting the technology-backed behavior-tracking product over the one backed by humans if they perceived the human-backed product as atypical and, as a result, overly intrusive. In order to address this possibility, I
conducted a follow-up experiment in which I asked an independent sample of participants to read
the description of the behavior-tracking product and rate the typicality of the product.
Method
Participants
To ensure that my sample was similar to the sample in Experiment 2, I recruited U.S. individuals (51.8% female; M_age=27.39) from two different sources to participate in this study (N=197): (1) undergraduates (N=97; 52.6% female; M_age=20.92) who were offered course credit for their participation, and (2) individuals recruited from Amazon’s Mechanical Turk (N=100; 51% female; M_age=33.67) who were offered monetary compensation ($0.50) for their participation. Participants from both sources were recruited at the same time and data were collected simultaneously. In order to ensure that participants were not subject to preconceived biases about the product I used in my manipulation, I screened the data for participants who reported owning/using a similar product, resulting in the exclusion of forty-eight participants (24.4%). I also screened the data for outliers (values more than 3 standard deviations from the mean) on typicality and excluded two participants (1%). These exclusions resulted in a final sample size of 147 (50.3% female; M_age=27.5). Participants were randomly assigned to either a technology condition (n=73) or a human condition (n=74).
Materials and Procedure
Participants read the original product description of the behavior tracking product that
was used in Experiment 2. The product description was either consistent with the technology
condition or the human condition (see the section above for the full product descriptions).
Following this, I measured the typicality of the product on a four-item scale (“This watch is a
feasible product that is doable for a company in the health and fitness industry”, “Companies in
the health and fitness industry typically create products like this watch”, “This watch is a typical
example of a class of products that help manage people’s health and fitness”, and “Behavior
tracking products like this are becoming common in the health and fitness industry”, α=.73).
Ratings were given on Likert scales anchored by 1 (Strongly disagree) and 7 (Strongly agree).
Results
Results revealed no significant difference in participants’ ratings of typicality across conditions (M_technology=5.40; SD=.95 vs. M_human=5.11; SD=1.18), t(145)=1.68, p=.10, 95% CI of the difference=[-.05, .64], d=0.27. Furthermore, both means were well above the scale midpoint of 4, indicating that the products in both conditions were seen as high in typicality. It is important to note, however, that although the difference between conditions was not significant, it was marginal (p=.10). Thus, I do not claim to rule this alternative out completely, though the results do cast doubt on typicality as a possible alternative explanation for my results. Importantly, these results, combined with my mediation results, suggest that a typicality-based explanation is unlikely to account for the findings in this chapter.
Experiment 3
In Experiment 3, I further examined the link between people’s increased anticipated
autonomy and their preference for technology. Specifically, I predicted that people’s
expectations about social evaluation concerns associated with technology versus humans would
determine the extent to which they feel autonomous in a given situation.
Method
Participants
I recruited 222 U.S. individuals (55.4% female; M_age=26.59) from two different sources: undergraduates (N=122; 48.4% female; M_age=20.69) from USC Marshall who participated for course credit, and individuals recruited from Amazon’s Mechanical Turk (N=100; 62% female; M_age=33.79) who received $0.75. Similar to Experiment 2, participants from both sources were recruited at the same time and data were collected simultaneously.
I also excluded twelve participants who reported doing a similar survey in the past³, resulting in a final sample size of 210 (55.2% female; M_age=26.69). Participants were randomly assigned to either a technology (n=105) or human (n=105) condition.
Materials and Procedure
I used the same manipulations of the smartwatch used in Experiment 2; participants in the
technology condition read that a computer remotely controlled the watch and participants in the
human condition read that a person remotely controlled the watch.
Next, participants were asked to vividly imagine being able to use the product. They then
rated their anticipated concerns about social evaluation as well as how much autonomy they
anticipated. They also rated the desirability of the product, which served as my dependent
measure. Based on previous research, I had clear theoretical reason to posit that anticipated
social evaluation concerns would mediate changes in anticipated autonomy and not the other way
around (Deci & Ryan, 1987; Ryan, 1982; Ryan & Connell, 1989), thus ruling out the reverse
relationship (see Hayes, 2012). Given the importance of perceived control, I asked participants to
rate their ability to exercise control over health. I did not measure motivation in this experiment
because I only found a marginally significant difference in motivation between the two
conditions in Experiment 2 and more importantly, motivation did not mediate the effect of
technology on product desirability.
³ A similar survey was conducted for a related but separate project.
Anticipated social evaluation concerns. I measured concerns about social evaluation
using a four-item scale directly adapted from the Brief Version of Fear of Negative Evaluation
Scale (Leary, 1983). Items include “When using this product, I will worry about what other
people will think of me even when I know it doesn’t make a difference”, “When using this
product, I will be frequently afraid of other people noticing my shortcomings”, “When using this
product, I will be afraid that others will not approve of me”, “When using this product, I will be
afraid that other people will find faults with me.” All four items loaded onto one factor (loadings
> .90). The scale also had high reliability, α=.95.
Anticipated autonomy. Anticipated autonomy was measured using the six-item scale (α=.91) used in Experiment 2. All the items loaded onto one factor in an exploratory factor analysis (loadings >.43).
Product desirability. Product desirability was measured using the three-item scale used in Experiment 2 (α=.98). An exploratory factor analysis revealed that all the items loaded onto one factor (loadings >.98).
Perceived control. Perceived control over one’s health was measured using a three-item scale (α=.87). In Experiment 2, the items used to measure perceived control were based more on
the usefulness of the information received from the product to take care of one’s health rather
than people’s ability to control or choose their own behavior when taking care of their health. In
order to address this, I substituted one item from the previous scale (“This product would make
me feel self-assured about my capabilities to take care of my health”) with a new item (“When
using this product, I will have complete control over how I take care of my health”) to directly
measure people’s anticipated behavioral control in this scenario. The other two items were
identical to those used in Experiment 2. Results from an exploratory factor analysis indicated
that all items loaded onto one factor (loadings >.86).
Results
Consistent with my hypotheses, participants in the technology condition rated the product as more desirable (M_technology=3.93; SD=1.79) than those in the human condition (M_human=3.28; SD=1.85), t(208)=2.59, p=.01, 95% CI of the difference=[.16, 1.15], d=0.36. Additionally, participants in the technology condition had lower concerns about social evaluation (M_technology=2.17; SD=1.28) than those in the human condition (M_human=3.04; SD=1.68), t(208)=-4.24, p<.001, 95% CI of the difference=[-1.28, -.47], d=-0.58. Moreover, consistent with my findings in Experiment 2, participants in the technology condition anticipated higher autonomy (M_technology=4.04; SD=1.61) than those in the human condition (M_human=3.15; SD=1.58), t(208)=4.02, p<.001, 95% CI of the difference=[.45, 1.32], d=0.56. There were no differences in participants’ ratings of perceived control (M_technology=4.16; SD=1.40 vs. M_human=4.02; SD=1.55), t(208)=.70, p=.48, 95% CI of the difference=[-.26, .54], d=0.09.
I conducted bootstrapping analyses using the PROCESS macro (model 6) (Hayes, 2013)
to test whether preference for technology-operated behavior tracking products was mediated by a
two-step process through anticipated social evaluation concerns and anticipated autonomy. I used
a serial multiple mediator model in these analyses for two reasons: (1) I theorized that
anticipated social evaluation concerns would mediate changes in anticipated autonomy as
evaluation makes individuals experience situations as controlling and therefore, autonomy-
hindering (e.g., Deci & Ryan, 1987; Ryan, 1982), and (2) a parallel mediator model that allows
the mediators to co-vary in the model would be inappropriate in this case as I had theoretically
justified predictions about the direction of causal influence between mediators (Hayes, 2012).
Moreover, I do not view anticipated autonomy and anticipated social evaluation concerns as
competing mechanisms of the hypothesized effect because I can theoretically rule out the reverse
relationship (i.e., the likelihood of anticipated autonomy mediating changes in anticipated social
evaluation concerns).
Results from the bootstrapping analyses based on a resampling size of 10,000 revealed that the total effect of technology on product desirability (b=.65, SE=.25, p=.01) decreased to a non-significant direct effect (b=.01, SE=.20, p=.97) when anticipated social evaluation concerns and anticipated autonomy were serially included as mediators. The 95% bias-corrected confidence interval for the indirect effect of technology on product desirability through anticipated social evaluation concerns and anticipated autonomy excluded zero (.14, .46), supporting the hypothesized two-step mediation (see Figure 3 for the unstandardized regression coefficients of each pathway). Unstandardized regression coefficients and the 95% bias-corrected confidence intervals for the direct and indirect effects are presented in Table 2. Thus, these results further explicate the effect of technology on product desirability: people anticipate lower concerns about social evaluation when using technology, consequently anticipate higher autonomy, and therefore show an increased preference for technology-operated over human-operated behavior tracking products.
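The serial (model 6) analysis extends the model 4 logic: the two-step indirect effect is the product of three paths — condition → evaluation concerns (a1), concerns → autonomy controlling for condition (d21), and autonomy → desirability controlling for condition and concerns (b2). A minimal Python sketch of the bootstrapped estimate follows, again using a percentile interval and simulated stand-in data rather than the actual study data.

import numpy as np

def serial_indirect(x, m1, m2, y, n_boot=10000, seed=2):
    # Percentile bootstrap CI for the a1*d21*b2 serial indirect effect
    rng = np.random.default_rng(seed)
    n, ones, estimates = len(x), np.ones(len(x)), []
    for _ in range(n_boot):
        i = rng.integers(0, n, n)
        a1 = np.linalg.lstsq(np.column_stack([ones, x[i]]), m1[i], rcond=None)[0][1]
        d21 = np.linalg.lstsq(np.column_stack([ones, x[i], m1[i]]), m2[i], rcond=None)[0][2]
        b2 = np.linalg.lstsq(np.column_stack([ones, x[i], m1[i], m2[i]]), y[i], rcond=None)[0][3]
        estimates.append(a1 * d21 * b2)
    return np.percentile(estimates, [2.5, 97.5])

# Simulated stand-ins: condition, evaluation concerns, autonomy, desirability
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 210).astype(float)
m1 = 3.0 - 0.9 * x + rng.normal(0, 1.4, 210)   # concerns drop under technology
m2 = 5.0 - 0.6 * m1 + rng.normal(0, 1.2, 210)  # lower concerns, higher autonomy
y = 1.5 + 0.7 * m2 + rng.normal(0, 1.3, 210)
print(serial_indirect(x, m1, m2, y))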
Figure 3. Anticipated social evaluation concerns and anticipated autonomy serially mediate the effect of technology on product desirability. Unstandardized regression coefficients and standard errors for each path are reported. R² = .47 (Experiment 3).
Table 2. Mediation results for the hypothesized Technology → Anticipated Social Evaluation Concerns → Anticipated Autonomy → Product Desirability path (Experiment 3)
Experiment 4
In Experiment 4, I sought to obtain stronger evidence of the mediating process by directly
manipulating the mediator – anticipated social evaluation concerns. I predicted that people would
be more likely to prefer technology-operated behavior tracking products over human-operated
behavior tracking products in situations where social evaluative threat is high compared to
situations where such threat is low.
Method
Participants
I recruited 205 undergraduates (34.7% female; M_age=20.83) from USC Marshall who participated for course credit. I excluded six participants who participated in the study more than once, resulting in a final sample size of 199 (34.7% female; M_age=20.73). Participants were randomly assigned to either a ‘high social evaluative threat’ condition (n=99) or a ‘low social evaluative threat’ condition (n=100).
Materials and Procedure
I informed participants that a new start-up was developing a wearable behavior tracking
product and was interested in understanding people’s reactions to the product. In the high social
evaluative threat condition, I described the product as one that would allow people to track and
monitor their sexual activities. In the low social evaluative threat condition, I described the
product as one that would allow participants to track and monitor their cooking activities. In both
conditions, participants read that the company had developed two versions of the product – (1) a
technology-based version in which users’ behaviors would be tracked and monitored by a
computer program, and (2) a human-based version in which users’ behaviors would be tracked
and monitored by a human analyst. In my descriptions, I specified that both versions of the
product were equally effective and indicated that the company was conducting research to see
which version of the product people preferred.
Participants in the high social evaluative threat condition read the following description
of the product:
A new startup (currently in stealth mode) is developing a wearable product that will allow
people to track and monitor their sexual activities and receive feedback on how to
improve performance. You wear the monitor on your wrist and it will track behavioral
and biological factors that can lead to better sexual habits. There are two versions of the
product that are equally effective and we are conducting research to see which version
people prefer. They are as follows:
1. Human-based: your behavior will be monitored and assessed by a human analyst
2. Technology-based: your behavior will be monitored and assessed by an automated
computer program
Participants in the low social evaluative threat condition read the following description of the
product:
A new startup (currently in stealth mode) is developing a wearable product that will allow
people to track and monitor their cooking activities and receive feedback on how to
improve performance. You wear the monitor on your wrist and it will track behavioral
and biological factors that can lead to better cooking habits. There are two versions of the
product that are equally effective and we are conducting research to see which version
people prefer. They are as follows:
1. Human-based: your behavior will be monitored and assessed by a human analyst
2. Technology-based: your behavior will be monitored and assessed by an automated
computer program
After reading the product description, participants in both conditions were asked to
indicate their preference for tracking their behaviors (sexual activities or cooking activities) via
the technology-based version of the product or the human-based version. Participants rated their
preference for the two versions on the following three items adapted from the product
desirability scale used in Experiments 2 and 3: “Which version of the product is more desirable
to you?”, “Which version of the product would you favor?”, and “Which version of the product
would you most want to use?”. Participants rated their preference on a seven-point scale in which
lower numbers on the scale (1, 2, 3) indicated a preference for the technology-based version, the
mid-point of the scale (4) indicated no preference, and higher numbers (5, 6, 7) indicated a
preference for the human-based version. I reverse-coded all three items in the scale to create a
‘preference for technology’ measure, my primary dependent variable. The scale had high reliability, α=.96, and the three items loaded onto one factor in an exploratory factor analysis (loadings >.95).
Results
First, I sought to examine whether participants indicated an overall preference for one of the two versions of the product. Therefore, I conducted a one-sample t-test on my entire sample, comparing participants’ preference ratings to a test value of 4 that represented “no preference”. Results revealed that participants’ preference ratings were significantly different from the test value and that, in general, participants showed an overall preference for the technology-based version of the product, M=4.83 (SD=1.97), t(198)=5.93, p<.001, 95% CI of the difference=[.55, 1.10], d=0.42.
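This one-sample comparison against the scale midpoint can be sketched as follows; the ratings below are simulated placeholders for the actual data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
prefs = rng.normal(4.83, 1.97, 199)          # simulated preference ratings

t, p = stats.ttest_1samp(prefs, popmean=4)   # 4 = "no preference" midpoint
d = (prefs.mean() - 4) / prefs.std(ddof=1)   # one-sample Cohen's d
print(f"t({len(prefs) - 1}) = {t:.2f}, p = {p:.4f}, d = {d:.2f}")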
Next, I conducted an independent-samples t-test to examine whether there were systematic differences in participants’ preference for the technology-based version of the product (over the human-based version) between the high social evaluative threat condition and the low social evaluative threat condition. Consistent with my predictions, participants in the high social evaluative threat condition indicated a higher preference for the technology-based version of the product (M=5.21; SD=1.88) than those in the low social evaluative threat condition (M=4.45; SD=1.99), t(197)=2.75, p=.006, 95% CI of the difference=[.21, 1.30], d=0.39. These results suggest that when people anticipate having higher concerns about social evaluation, they are more likely to prefer technological tracking over human-based tracking. Thus, by directly manipulating social evaluative threat, I obtained further evidence supporting my mediating mechanism.
Experiment 5
In Experiment 5, I sought to test my predictions in the laboratory within a context that
allowed for assessing behaviors with real consequences for participants. I also assessed whether
people’s preference for technological assistance (over human assistance) and their concerns
about social evaluation changed with use over time. Experiment 5 was set up as a two-part study
in which participants completed the first part of the study online and the second part of the study
in the laboratory.
Part 1
Method
Participants
I recruited 120 undergraduate students from USC Marshall to participate for course
credit. In Part 1 of the study, I did not collect any demographic information from the participants.
I excluded three participants who participated in the study more than once, for a final sample size
of 117. Part 1 of the study was set up as a within-subjects design such that participants read
about two versions of the same product: a human version, and a technology version.
Materials and Procedure
I informed participants that this was a two-part study and that they must complete Part 1
online before participating in Part 2. Participants who completed Part 1 were asked to sign up for
Part 2 and were informed that the second part would be conducted in the laboratory 1 – 2 days
after Part 1. At the beginning of Part 1, I informed participants that a team of university alumni had created a start-up company called MasterTests with the goal of creating innovative
computer applications that enable college students to become better at taking standardized tests
such as the GRE, GMAT, and LSAT. Following this, I informed participants that the company
had come out with its first computer application, called Aptitude Tracker. This application was described as one that could offer specific feedback to test takers about their strengths and weaknesses by tracking their behaviors while they take tests. I then informed participants that the company sought to pilot test the application with USC Marshall students before formally launching it, and that they had been recruited to participate in that pilot test.
Next, I described that the company created two versions of the new application and that
one of the goals of the pilot test was to assess people’s reactions to the two versions of the new
application. One version was described as an automated version that was fully controlled by a
computer algorithm (technology-operated) and the other version was described as one that was
fully controlled by a person who was an analyst in the company (human-operated). Participants
read the following descriptions of the two versions of the new application:
AptitudeTracker - Automated Version
This version of AptitudeTracker is fully controlled by a computer algorithm that
is designed to automatically perform data analytics.
The computer algorithm:
1. Tracks the amount of time a participant spends on each question in the aptitude
test.
2. Calculates the overall time left at any given point in the aptitude test.
3. Tracks the number of times a participant switches from the test screen to browse
the internet.
4. Tracks the number of times a participant uses online tools like calculators and
dictionaries.
5. Calculates the percentage of incorrect answers.
6. Calculates the overall percentile rank of the participant relative to other users.
All the information that is collected by the app is analyzed by the computer algorithm.
The algorithm will provide feedback to participants during the test.
In the beta tests of AptitudeTracker conducted by MasterTests, the algorithm version is
shown to boost performance by 65%.
AptitudeTracker - Human Version
This version of AptitudeTracker is fully controlled by a person who is an analyst in the
data analytics team at MasterTests.
This analyst:
1. Tracks the amount of time a test taker spends on each question in the aptitude test.
2. Calculates the overall time left at any given point in the aptitude test.
3. Tracks the number of times a test taker switches from the test screen to browse
the internet.
4. Tracks the number of times a test taker uses online tools like calculators and
dictionaries.
5. Calculates the percentage of incorrect answers.
6. Calculates the overall percentile rank of the test taker relative to other users.
All the information that is collected by the app is analyzed by this person in the data
analytics team. The analyst will provide feedback to participants during the test.
In the beta tests of AptitudeTracker conducted by MasterTests, the human version is
shown to boost performance by 65%.
To control for perceived effectiveness and perceived quality of the product, I described the application in both conditions as one that was shown to boost performance by the same amount (65%).
After reading the descriptions of both versions of the application, participants were asked
to rate their preference for the two versions of the application on three-item scales that were
adapted from the product desirability scale in Experiments 2 and 3. Participants rated their
preference for the automated version on the following three items (α=.91): “The app controlled by the computer algorithm is highly desirable to me”, “I favor using the app controlled by the computer algorithm”, “I definitely want to use the app controlled by the computer algorithm”. Participants also rated their preference for the human version on the following three items (α=.91): “The app controlled by the human analyst is highly desirable to me”, “I favor using the
app controlled by the human analyst”, “I definitely want to use the app controlled by the human
analyst”. All ratings were made on a seven-point scale from 1 (strongly disagree) to 7 (strongly
agree).
Following this, I informed participants that Part 2 of the study would be conducted within 1 – 2 days and that they would complete a standardized test in the laboratory during the second part of the study. I informed them that their behaviors during the test would be tracked by Aptitude Tracker and asked them to select between the automated version and the human version of Aptitude Tracker to use during the second part of the study. The two options were randomized
and presented as below:
Please select the version of Aptitude Tracker that you would like to use in the second part of
this study when you will be completing an aptitude test.
• Aptitude Tracker – Automated Version (controlled by the computer algorithm)
• Aptitude Tracker – Human Version (controlled by a human analyst)
After participants made their selection, I informed them that they would receive real-time feedback from the application via text messages when they took the aptitude test. I asked them to provide their cell phone numbers in order to directly receive this feedback. After they provided their cell phone numbers, I reminded them to be present in the laboratory for the second part of the study in the next few days.
I decided to provide feedback to participants during Part 2 via text messages in order to
address another possible factor that might influence participants’ reactions towards the product –
perceived execution of service. In prior experiments, although I described the product as having
exact same functionalities in both the technology and the human conditions, participants might have imagined that those functionalities would be executed differently when the product was operated by technology versus by humans. I addressed this concern in Part 2 of the
study by providing feedback to participants through the exact same method (i.e., text messages)
in both conditions.
Part 2
Participants
Participants who completed Part 1 received reminders to sign up for Part 2, which was scheduled 1 – 2 days after Part 1. Of the 117 participants who completed Part 1, 91 completed Part 2 of the study (48.4% female; M_age=20.86). Part 2 was designed as a between-subjects experiment and participants were randomly assigned to one of two conditions: (1) the technology condition, or (2) the human condition.
Materials and Procedure
Once participants arrived in the laboratory, I verified that they had completed Part 1 of the study before allowing them to participate in Part 2. I then informed participants that they would be asked to complete a standardized aptitude test lasting 20 minutes and that they would be assigned to individual testing rooms where the test was set up on the computer in that room. I also informed participants that Aptitude Tracker, the application that tracks test takers, was installed on the computers in the testing rooms. Before going to the testing rooms, I asked participants to verify that the cell phone numbers they had provided in Part 1 were correct and informed them that they would be receiving real-time feedback from Aptitude Tracker in the form of text messages. Participants were asked to begin the test as soon as they entered the testing rooms.
At the beginning of the test, participants read the following prompt: “Each computer in
this lab has Aptitude Tracker installed. The app will start tracking you as soon as you begin the
test”. Following this, participants were reminded that Aptitude Tracker came in two versions –
(1) an automated version that was fully controlled by a computer algorithm designed to perform
data analytics, and (2) a human version that was fully controlled by a person in the data analytics
team at the company. They were also reminded that, in Part 1 of the study, they had selected one
of the two versions of the application to use in Part 2. As I randomly assigned participants to
either the technology condition or the human condition, I expected that some participants would
encounter the version of the application that they selected while others would encounter the
version of the application that they did not select. To address this, I informed participants that I
had tried my best to accommodate their selection but because this was a pilot test in which the
company was testing each version of the application in equal proportion, there was a slight
chance that they might have to use the version of the application that they did not select.
Next, participants in the technology [human] condition read the following:
You are now being tracked by the Automated version [Human version] of
Aptitude Tracker. This version is run and monitored by a Computer
Algorithm that is designed to automatically perform data analytics [an analyst
who is part of the data analytics team at Master Tests].
Aptitude Tracker's algorithm [analyst in the data analytics team] will analyze
your data and offer feedback about your performance during the test. The
algorithm will send the automated feedback [the analyst will send the feedback] to
your cell phone through text messages.
Following this, participants read that they had a total of 20 minutes to complete the test. The test
had two parts. I informed participants that after 10 minutes, the first part of the test would automatically close and that they would be directed to the second part of the test. I also informed participants that they would receive three points for every correct answer and lose two points for every incorrect answer. In the first part of the test, I included three quantitative
reasoning questions and six verbal reasoning questions. These questions were adapted from
sample GRE test questions.
Participants in both conditions received three text messages during the first part of the
test. These messages were crafted in such a way that they would be perceived as real-time
feedback from Aptitude Tracker. The first text message was sent at minute 5, the second text
message was sent at minute 7, and the third text message was sent at minute 9 during the first
part of the test. Participants in the technology condition received the following text messages:
Message 1 (At minute 5)
You should know that you have spent 10% more time on the first set of
questions compared to the average test taker. You should try and speed things up.
Message 2 (At minute 7)
You tend to spend more time on math questions compared to verbal reasoning questions.
Message 3 (At minute 9)
68% of the participants have now completed the first part of the aptitude test. You have
little time left.
In the human condition, the messages were modified slightly so that they would appear to be
coming from a person. Participants in the human condition received the following text messages:
Message 1 (At minute 5)
Hey, this is Steve. You should know that you have spent 10% more time on the first set
of questions compared to the average test taker. You should try and speed things up.
Message 2 (At minute 7)
I'm noticing that you tend to spend more time on math questions compared to verbal
reasoning questions.
Message 3 (At minute 9)
I just wanted to let you know that 68% of the participants have now completed the first
part of the aptitude test. You have little time left.
I sought to understand whether participants’ preference for the technology-operated
behavior tracking product versus the human-operated behavior tracking product changed with
use over time. To assess this, I asked participants to decide whether they wanted Aptitude
Tracker to continue tracking them during the second part of the test. At the end of the first part of
the test, participants were asked to select between the following two options:
• Yes, I want Aptitude Tracker to continue tracking me
• No, I do not want Aptitude Tracker to continue tracking me
If participants decided to continue using the application, they were asked to reply to
Aptitude Tracker via text with the following message: “Yes, continue tracking”. Alternatively, if
participants decided to stop using the application, they were asked to reply to Aptitude Tracker
via text with the following message: “No, don’t continue tracking”.
Following this, participants went on to complete the second part of the aptitude test. This
part included three quantitative reasoning questions and nine verbal reasoning questions, all of
which were adapted from sample GRE questions. After completing the second part of the
aptitude test, participants were asked to answer the following open-ended question: “Please
indicate any thoughts or reactions you have about your experience with the Aptitude Tracker
app”. The purpose of this open-ended question was to gauge whether participants experienced
any social evaluation concerns during the test when they believed that they were being tracked
by Aptitude Tracker. After this, I informed participants that they had reached the end of the study
and collected their demographic information. Finally, I debriefed participants by informing them
that Aptitude Tracker was not a real computer application and that any personal information that
was collected during the study would be immediately destroyed and would not be used in the
future.
Results
Part 1
Consistent with the previous studies and supporting my predictions, participants indicated a greater preference for the automated version of the application that was fully controlled by the computer algorithm (M=5.10; SD=1.06) than for the version that was fully controlled by a human (M=3.81; SD=1.31), t(116)=6.95, p<.001, 95% CI of the difference=[.92, 1.65], d=.64. Participants were also asked to select one of the two versions of the application to use in Part 2 of the study. Results from a one-sample chi-square test revealed that 79.4% of the participants selected the technology-operated, automated version of the application (compared to the expected 50%), χ²(1, N=117)=40.69, p<.001. My findings from Part 1 reveal that, in addition to differences in anticipated preference for technological assistance over human assistance, there are real differences in adoption behavior: people are more likely to adopt technology-operated behavior tracking products relative to human-operated ones.
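The one-sample chi-square test compares the observed selection counts to an even 50/50 split; with 117 participants and a 79.4% selection rate, the counts are approximately 93 and 24. A minimal Python sketch (counts reconstructed from the reported percentage):

from scipy.stats import chisquare

observed = [93, 24]         # chose automated vs. human version (approx. counts)
expected = [58.5, 58.5]     # even split expected under the null
chi2, p = chisquare(observed, f_exp=expected)
print(f"chi2(1) = {chi2:.2f}, p = {p:.5f}")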
Part 2
To test whether there were differences in participants’ likelihood of continuing to use the behavior tracking application depending on whether it was technology-operated versus human-operated, I conducted a binary logistic regression on the decision-to-continue variable with condition as the predictor. Interestingly, I found no differences in participants’ likelihood of continuing to use the application between the technology and the human conditions. Specifically, 76.6% of the participants in the technology condition decided to continue using the application compared to 77.3% of the participants in the human condition, b = -.04, SE = .50, p = .94. These results suggest that, although people are more likely to adopt a technology-operated behavior tracking product over a human-operated one, they continue to use the products at equal rates once they start.
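A binary logistic regression of this kind can be sketched in Python with statsmodels; the condition and decision data below are simulated placeholders, not the actual study data.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
condition = rng.integers(0, 2, 91).astype(float)  # 0 = human, 1 = technology
continued = rng.binomial(1, 0.77, 91)             # simulated continue/stop decisions

X = sm.add_constant(condition)
fit = sm.Logit(continued, X).fit(disp=0)
print(fit.params[1], fit.bse[1])   # b and SE for the condition effect
print(np.exp(fit.params[1]))       # odds ratio for the condition effect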
In prior experiments, I found that people had lower concerns about social evaluation
when using technological (versus human) assistance and that this led to an increased preference
for technology-operated behavior tracking products. In this experiment, I sought to examine
whether differences in people’s concerns about social evaluation when using technological
(versus human) assistance persisted with use over time. To do so, I coded participants’ open-
ended responses where they described their reactions to using the Aptitude Tracker application.
Two research assistants who were blind to condition independently coded each of the open-
ended responses for social evaluation concerns. Responses were coded as ‘evaluation (1)’ if there
was anything in the response that suggested that the participant felt evaluated or judged. For
example, statements about not wanting to be compared to others, or those about having concerns
regarding being watched, and those about emotions caused by being evaluated were coded as
‘evaluation’. On the other hand, responses that did not have any statements about feeling
evaluated or judged were coded as ‘no evaluation (0)’. For example, statements about
suggestions for the researcher, or those about feeling pressed for time were coded as ‘no
evaluation’. As responses were coded for ‘evaluation’ and ‘no evaluation’ in a binary manner, I
sought complete agreement between the raters. Therefore, the two raters were asked to code each
response independently and then resolve any discrepancies through discussion. The two raters initially disagreed on 14 of the 91 responses and subsequently resolved all of these discrepancies.
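Although all coding discrepancies were resolved through discussion, initial inter-rater agreement can also be quantified. The following sketch computes simple percent agreement and Cohen’s kappa for two raters’ binary codes; the codes shown are hypothetical, not the actual ratings.

import numpy as np

def cohen_kappa(r1, r2):
    # Chance-corrected agreement for two raters' binary (0/1) codes
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = np.mean(r1 == r2)                   # observed agreement
    p1, p2 = r1.mean(), r2.mean()
    pe = p1 * p2 + (1 - p1) * (1 - p2)       # agreement expected by chance
    return (po - pe) / (1 - pe)

# Hypothetical codes (1 = evaluation, 0 = no evaluation)
rater1 = [1, 0, 0, 1, 0, 1, 0, 0]
rater2 = [1, 0, 1, 1, 0, 1, 0, 0]
print(np.mean(np.array(rater1) == np.array(rater2)), cohen_kappa(rater1, rater2))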
To test whether there were differences in participants’ social evaluation concerns between conditions, I conducted a binary logistic regression on the evaluation variable with condition as the predictor. Results revealed that participants in the technology condition were less likely to report having concerns about social evaluation. Specifically, 17% of the participants in the technology condition described feeling concerned about social evaluation compared to 36.4% of the participants in the human condition, b = -1.03, SE = .50, p = .04. The odds ratio indicated that participants in the technology condition were 64% less likely to experience concerns about social evaluation than those in the human condition. These results suggest that concerns about social evaluation when using human-operated behavior tracking products persisted even with use over time.
Experiment 6
In Experiments 1 through 5, I tested my hypotheses in samples of younger adults whose
average age ranged between 20.78 (in my college student samples) and 36.19 (in my mTurk
samples). A notable limitation of testing my hypotheses using samples of younger adults pertains
to their inherent comfort with technology given the extent to which they perceive technology as
integral to their lives (Hershatter & Epstein, 2010). To address this concern, I sought to test my
hypotheses in samples of people who are expected to be on the other end of the spectrum with
regard to the extent of their comfort with technology – older adults over the age of 55.
Method
Participants
I recruited 301 U.S. adults over the age of 55 (65.8% female; M_age=63.18) from Amazon’s Mechanical Turk using the portal’s age-based screening. These adults received $0.75 as compensation for their participation. Participants were randomly assigned to either a technology (n=151) or human (n=150) condition.
Materials and Procedure
I used the same manipulations of the smartwatch used in Experiments 2 and 3.
Participants in the technology condition read that a computer remotely controlled the watch and
participants in the human condition read that a person remotely controlled the watch. I made two
minor changes to the description of the smartwatch [the smartcompanion program]. First, I
sought to control for differences in perceived effectiveness and perceived quality of the
smartwatch by including the following statement in both conditions: “In beta tests conducted by
this company the smartwatch [smartcompanion program] was shown to boost health and
effectiveness by 67%.” Next, I slightly modified the statement in the description of the
smartwatch [smartcompanion program] about guided workouts from popular gyms. I removed
the names of specific gyms such as Gold’s Gym, Shape, and Men’s Fitness in order to ensure
that the smartwatch [smartcompanion program] would not come across as disproportionately
more or less appealing to male or female participants. Instead, I modified that particular
statement to be: “Provide guided workouts, urging you to do pushups and other exercises”.
Next, I asked participants to vividly imagine being able to use the product. They then
rated their anticipated concerns about social evaluation as well as their anticipated autonomy. I
measured anticipated concerns about social evaluation using the same four-item scale (α=.96), and my exploratory factor analyses revealed that all items loaded onto one factor (loadings >.93). I measured anticipated autonomy using the same six-item scale (α=.96) used in Experiments 2 and 3, and all six items loaded onto one factor (loadings >.78). In order to mask the true purpose of the study, I also counterbalanced the order in which the anticipated social evaluation scale and the anticipated autonomy scale were displayed to the participants. Finally, participants also rated the desirability of the product on the same three-item scale (α=.99) used in Experiments 2 and 3.
All three items loaded onto one factor (loadings >.98). All ratings were made on a seven-point scale from 1 (Strongly disagree) to 7 (Strongly agree).
Results
Consistent with the previous studies and supporting my hypotheses, participants in the technology condition rated the product as more desirable (M_technology=3.79; SD=1.93) than those in the human condition (M_human=2.68; SD=1.76), t(299)=5.23, p<.001, 95% CI of the difference=[.70, 1.54], d=0.60. Moreover, participants in the technology condition had lower concerns about social evaluation (M_technology=1.78; SD=1.20) than those in the human condition (M_human=2.73; SD=1.78), t(299)=-5.42, p<.001, 95% CI of the difference=[-1.29, -.60], d=-0.63. Participants in the technology condition also anticipated having higher autonomy (M_technology=3.47; SD=1.79) than those in the human condition (M_human=2.41; SD=1.47), t(299)=5.62, p<.001, 95% CI of the difference=[.69, 1.43], d=0.65.
I tested mediation by conducting bootstrapping analyses using the PROCESS macro
(model 6) (Hayes, 2013). Similar to Experiment 3, I used a serial multiple mediator model.
Results based on a resampling size of 10,000 revealed that the total effect of technology on product desirability (b=1.12, SE=.21, p<.001) decreased to a non-significant direct effect (b=.25, SE=.15, p=.10) when anticipated social evaluation concerns and anticipated autonomy were serially
included as mediators. The 95% bias-corrected confidence interval for the indirect effect of
technology on product desirability through anticipated social evaluation concerns and anticipated
autonomy excluded zero (.11, .33), supporting the hypothesized two-step mediation (See Figure
4 for the unstandardized regression coefficients of each pathway). Unstandardized regression
coefficients and the 95% bias-corrected confidence intervals for the direct and indirect effects are
presented in Table 3.
Thus, I obtained further evidence supporting my hypotheses from a unique sample of
participants – adults over the age of 55 – who are not expected to be as predisposed to feeling
comfortable with technology as younger adults. This study increases the generalizability of my central claim that people are more likely to prefer technological assistance over human assistance, and that this preference is driven by expectations of feeling less evaluated and more autonomous when using technological (rather than human) assistance.
Figure 4. Anticipated social evaluation concerns and anticipated autonomy serially mediate the effect of technology on product desirability. Unstandardized regression coefficients and standard errors for all the paths are reported. R² = .62 (Experiment 6).
Table 3. Mediation results for the hypothesized Technology → Anticipated Social Evaluation Concerns → Anticipated Autonomy → Product Desirability path (Experiment 6)

Direct effect of Technology Condition on Product Desirability:
Technology Condition → Product Desirability: B=1.12, SE=.21, t=5.23, p<.001, 95% CI [.70, 1.54]

Direct effect of Technology Condition on Product Desirability with Anticipated Social Evaluation Concerns and Anticipated Autonomy included as mediators:
Technology Condition → Product Desirability: B=.25, SE=.15, t=1.64, p=.10, 95% CI [-.05, .54]

Indirect effects of Technology Condition on Product Desirability:
Total indirect effect: B=.87, SE=.17, 95% CI [.53, 1.22]
Technology Condition → Anticipated Social Evaluation Concerns → Product Desirability: B=-.06, SE=.04, 95% CI [-.15, .02]
Technology Condition → Anticipated Social Evaluation Concerns → Anticipated Autonomy → Product Desirability: B=.20, SE=.06, 95% CI [.11, .33]
Technology Condition → Anticipated Autonomy → Product Desirability: B=.72, SE=.17, 95% CI [.38, 1.07]
Experiment 7
In Experiment 7, I sought to examine when and why people decide to enter contexts
where behavior tracking products are already in use. Organizations are a context where people’s
behaviors are constantly tracked. Given the proliferation of behavior tracking products in
organizations, I wanted to examine how people’s preference for technology-operated behavior
tracking products over those operated by humans translated to their decisions to select into
contexts where those products are used. I used a within-subjects experimental design in this
study to test my predictions that technology (compared to humans) would reduce social
evaluation concerns and increase anticipated autonomy, leading to an increased preference for
contexts where technology-operated behavior tracking products are used.
Method
Participants
Two hundred and fifteen MBA students (31.6% female; M_age=28.23) enrolled in the USC Marshall full-time MBA program participated in this study. This study was included as part of a survey instrument that was administered to all the MBA students of the incoming class at the end of the semester. I used a within-subjects design in which participants viewed two job descriptions and indicated the extent to which they were willing to accept each job. Given that I had access to 215 MBA students, I sought to ensure a priori that this would be a sufficient sample size, based on the effect sizes observed in Experiment 3 (Cohen’s d=.3 – .5) and the within-subjects experimental design. A power analysis (G*Power software; Faul, Erdfelder, Buchner & Lang, 2009) revealed that I needed at least 82 participants to have adequate power (.80) to detect medium effects similar to Experiment 3.
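As a rough analog of the G*Power calculation, the required sample size for a within-subjects (paired) t-test can be approximated in Python. The effect size of d=.31 below is an assumption within the reported .3 – .5 range, so the result will be close to, but not necessarily identical to, the G*Power figure.

from statsmodels.stats.power import TTestPower

# Power analysis for a paired/one-sample t-test
analysis = TTestPower()
n = analysis.solve_power(effect_size=0.31, alpha=0.05, power=0.80)
print(round(n))   # required number of participants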
Materials and Procedure
I asked participants to evaluate two different jobs in the technology industry that they
wanted to work in, both in the same city and both offering the same pay and benefits. I informed
participants that the main difference between the jobs was how each company monitored its
employees. Next, participants read the following information about sociometric badges that both
companies were beginning to use to monitor their employees:
Both companies have begun to use sociometric badges. The sociometric badge is a
wearable electronic device that is used to enable employees to become more productive and effective in their jobs. These companies are experimenting with sociometric badges to
monitor their employees and help improve their productivity and effectiveness.
Following this, I provided the specific descriptions of how sociometric badges were used
in each company. Here, I manipulated whether the sociometric badge was controlled by a
computer algorithm (akin to the technology condition in Experiments 1 – 3) or by an analyst
(akin to the human condition in Experiments 1 – 3). Aside from this difference, the specifics of
how the sociometric badges functioned were kept exactly the same. I counterbalanced the order
in which these descriptions were displayed to the participants.
Company A
Company A asks that all its employees wear a sociometric badge while at work. This
badge is controlled by a computer algorithm and is not monitored by any person.
The sociometric badge:
1. Measures how long the employee is at his/her desk.
2. Measures the amount of face-to-face interaction that the employee has with
other people.
3. Measures the employee’s conversational patterns in meetings (i.e., the amount
of time the employee speaks in a meeting).
4. Provides feedback about the employee’s social interactions at work.
All the information about the employees that is collected by the sociometric badge is
analyzed by the computer and it provides automated reports to employees with feedback
about their productivity and suggestions for how to improve their effectiveness.
Company B
Company B asks that all its employees wear a sociometric badge while at work. This
badge is controlled by a person who works as an analyst in the company’s human
resources team.
The sociometric badge:
1. Measures how long the employee is at his/her desk.
2. Measures the amount of face-to-face interaction that the employee has with other
people.
3. Measures the employee’s conversational patterns in meetings (i.e., the amount of
time the employee speaks in a meeting).
4. Provides feedback about the employee’s social interactions at work.
All the information that is collected by the sociometric badge is analyzed by the analyst
on the human resources team and he/she personally provides reports to employees with
feedback about their productivity and suggestions for how to improve their effectiveness.
After reading about the companies, participants were asked to rate both jobs on three
scales – anticipated concerns about social evaluation, anticipated autonomy, and likelihood of
accepting the job. Participants rated their anticipated concerns about social evaluation and their
anticipated autonomy while wearing the sociometric badge for each job on a scale from
1 (Strongly disagree) to 7 (Strongly agree). I measured anticipated concerns about social
evaluation using the same four-item scale and anticipated autonomy using the same six-item
scale used in Experiment 3. Given the repeated measures design, and in order to mask the true
purpose of the study, I counterbalanced the order in which the anticipated social evaluation scale
and the anticipated autonomy scale were displayed to the participants. Finally, participants rated
their likelihood of accepting the job offer from Company A and the job offer from Company B
on a scale from 1 (Not at all) to 7 (Very much).
Results
Consistent with the previous studies, participants rated that they were more likely to accept the job offer from Company A, where the sociometric badge was controlled by a computer algorithm (M=3.54; SD=1.64), than the job offer from Company B, where the sociometric badge was controlled by a human (M=2.66; SD=1.47), t(214)=7.83, p<.001, 95% CI of the difference=[.66, 1.11], d=.54. Moreover, participants indicated that they had lower concerns about social evaluation while wearing Company A’s sociometric badge (M=3.95; SD=1.67) than while wearing Company B’s sociometric badge (M=4.70; SD=1.60), t(214)=-7.98, p<.001, 95% CI of the difference=[-.94, -.57], d=-.54. Participants also anticipated having greater autonomy while wearing Company A’s sociometric badge (M=2.71; SD=1.21) than while wearing Company B’s sociometric badge (M=2.29; SD=1.10), t(214)=6.77, p<.001, 95% CI of the difference=[.30, .55], d=.45. Thus, as expected, these results were consistent with the findings from Experiments 2 and 3.
I tested mediation by conducting bootstrapping analyses using the MEMORE macro (serial mediation model; Montoya & Hayes, 2016). Results based on a resampling size of 10,000 revealed that the 95% bias-corrected confidence interval for the indirect effect on job preference through anticipated social evaluation concerns and anticipated autonomy excluded zero (.07, .19), indicating that anticipated social evaluation concerns and anticipated autonomy serially mediated the effect of technology on job preference. The total effect of technology on job preference (b = .88, SE = .11, p < .001) was reduced but did not decrease to non-significance (b = .30, SE = .11, p = .007), indicating partial mediation (see Figure 5 for the unstandardized regression coefficients of each pathway). Unstandardized regression coefficients and the 95% bias-corrected confidence intervals for the direct and indirect effects are presented in Table 4. Through this experiment, I obtained converging evidence supporting my hypotheses that technology reduces people’s anticipated concerns about social evaluation, increases their anticipated autonomy, and consequently increases their preference for technology-operated behavior tracking products over human-operated products. However, the partial mediation also leaves open additional mechanisms, pointing to the need for future research.
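For readers who want the bootstrap logic spelled out, the sketch below illustrates a percentile bootstrap for a two-condition within-subject serial indirect effect in the spirit of MEMORE (Montoya & Hayes, 2016). It is a simplified, hypothetical illustration on simulated difference scores, not the published macro: the variable names and effect sizes are invented, and refinements such as bias correction and the macro’s additional regression terms are omitted.

import numpy as np

def serial_indirect(m1_d, m2_d, y_d):
    # Path a: within-subject effect of condition on the first mediator
    a = m1_d.mean()
    # Path d21: second-mediator differences regressed on first-mediator differences
    d21 = np.polyfit(m1_d, m2_d, 1)[0]
    # Path b2: outcome differences regressed on both mediator differences
    X = np.column_stack([np.ones_like(m1_d), m1_d, m2_d])
    b2 = np.linalg.lstsq(X, y_d, rcond=None)[0][2]
    return a * d21 * b2  # serial indirect effect

rng = np.random.default_rng(0)
n = 215
# Hypothetical Company A minus Company B difference scores
sec_d = rng.normal(-0.75, 1.4, n)                                 # evaluation concerns
aut_d = 0.4 - 0.3 * sec_d + rng.normal(0, 1.0, n)                 # anticipated autonomy
pref_d = 0.3 - 0.2 * sec_d + 0.5 * aut_d + rng.normal(0, 1.2, n)  # job preference

# Percentile bootstrap with 10,000 resamples of participants
boot = np.empty(10_000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)
    boot[i] = serial_indirect(sec_d[idx], aut_d[idx], pref_d[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% bootstrap CI for the serial indirect effect: [{lo:.3f}, {hi:.3f}]")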
Figure 5. Anticipated social evaluation concerns and anticipated autonomy serially mediate the
effect of technology on job preference. Unstandardized regression coefficients and standard
errors for each path are reported. R² = .59.
Table 4. Mediation results for the hypothesized Technology → Anticipated Social Evaluation Concerns → Anticipated Autonomy → Job Preference path (Experiment 7)
General Discussion
Across seven studies, I find that people are more likely to adopt technology-operated behavior tracking products than human-operated behavior tracking products. Results from Study 1 indicated that participants were more willing to use computer applications that constantly tracked their work habits when those applications were operated by technology (versus humans), establishing that
people preferred technology-operated behavior tracking products over human-operated behavior
tracking products. In Study 2, I examined the psychological mechanism that drives people’s
preference for technology-operated behavior tracking products and found that anticipated
autonomy mediates this preference. Participants anticipated greater autonomy when using
behavior tracking products that were operated by technology (versus humans) and their
subjective sense of autonomy drove their preference for adopting these products. In Studies 3
and 4, I further examined the link between people’s increased anticipated autonomy and their
preference for technology by exploring how social evaluation concerns influenced this
relationship. Results from both studies established that people anticipated having lower social
evaluation concerns when tracked by technology-operated behavior tracking products (compared
to human-operated behavior tracking products). As a result, they anticipated being more
autonomous in that situation and had a greater preference for technology-operated behavior
tracking products.
In Study 5, I tested these ideas within a context that allowed me to assess participants’
behavioral choices pertaining to behavior tracking products. I also examined whether people’s
preference for behavior tracking products operated by technology (versus humans) and their
social evaluation concerns changed with use over time. My results indicated that, in addition to
rating technology-operated behavior tracking products as more desirable compared to human-
operated behavior tracking products (consistent with Studies 1 – 4), participants also actually
adopted technology-operated (versus human-operated) behavior tracking products when making
a behavioral choice. I also found that differences in people’s concerns about social evaluation persisted with use over time, such that participants who were exposed to the
technology-operated behavior tracking products were less concerned about social evaluation than
those who were exposed to the human-operated behavior tracking products. However, I found
that participants continued to use both technology-operated and human-operated behavior
tracking products at equal rates – an effect that raises interesting questions for future research in
this area, as I discuss later in this section.
In Study 6, I explored participants’ comfort with technology as a boundary condition of my effects and examined whether these effects held in a population that may inherently feel less comfortable with technology (adults over the age of 55).
Consistent with my prior findings, I found that participants in this study also had a greater
preference for technology-operated behavior tracking products over human-operated behavior
tracking products, and that this preference was driven by reduced social evaluation concerns and
increased anticipated autonomy. In Study 7, I sought to examine people’s decisions to select into
contexts where behavior tracking products are used. Consistent with my predictions, I found that
participants were more likely to enter contexts where behavior tracking products were
technology-operated rather than human-operated.
Contributions to Theory and Practice
The present research makes several novel contributions to theory. First, my findings
provide a clear psychological account for the adoption and diffusion of behavior tracking
technologies by highlighting that social evaluation concerns and anticipated autonomy are
important drivers of this phenomenon. Specifically, I suggest that technology has the potential to
sidestep people’s concerns about social evaluation and allow them to receive informational
feedback about their behaviors in various contexts. I also posit that, by switching people’s
attention from evaluation to information, technology enables people to experience greater
autonomy in a given situation. Thus, I elucidate the underlying psychological mechanism that
drives one of the most widespread societal trends in recent times – the adoption and diffusion of
behavior tracking products.
Second, I contribute to research on cognitive evaluation theory (Deci & Ryan, 1985;
Ryan, 1982) by addressing the recent call for understanding the impact of advanced technologies
on people’s experiences of autonomy (Deci, Olafsen & Ryan, 2017). In the present research, I
clarify when and why people perceive differences in autonomy in the context of advanced
technologies, particularly behavior tracking products. I suggest that behavior tracking
technologies that closely and constantly track individuals’ behaviors are perceived as autonomy-
supportive (rather than autonomy-reducing) as people have lower social evaluation concerns
when using these technologies. Specifically, I propose that behavior tracking technologies can
offer informational feedback without inducing the pressure to avoid negative evaluation.
Through my findings, I highlight the role of technology as an important contextual factor that
can influence whether people perceive a situation as informational or controlling. Given that a
key application of cognitive evaluation theory is in the domain of feedback, I also contribute to
research in this area by emphasizing that the medium through which feedback is offered can
influence whether that feedback is perceived as informational or controlling.
Third, I aim to contribute to the growing body of scientific knowledge in the emerging area of the psychology of technology (e.g., Waytz & Norton, 2014; Epley, Schroeder & Waytz, 2013). Prior research in this area suggests that when autonomous technologies have human-like features, people perceive such technologies more favorably (Waytz, Heafner & Epley, 2014). I extend this emerging work by suggesting that although human-like features enhance people’s perceptions of technology, those very features may cause people to feel socially evaluated. In fact, studies show that
anthropomorphizing non-human agents increases the social influence of those agents and
constrains people to act in accordance with socially desirable norms (Waytz, Cacioppo & Epley,
2010). Therefore, people may respond favorably to technologies with human-like features except in situations where their behaviors could be evaluated; in such situations, they may find those features objectionable.
From a practical standpoint, my findings suggest that when describing new products to
employers or consumers, those wishing to pitch the products would be well served to focus on
how they allow for autonomy. My findings also have implications for employees in the
workplace. The use of technology-mediated methods for organizational practices such as hiring, training, and monitoring employees is becoming increasingly common. These practices
have received research attention and studies indicate that using technology-mediated methods for
hiring and monitoring employees can be bias-fraught and apprehension-provoking in certain
situations and for certain types of people (e.g., Behrend, Toaddy, Foster Thompson & Sharek,
2012; Watson, Foster Thompson, Rudolph, Whelan, Behrend & Gissel, 2013). My research
suggests that using products that are fully technology-operated (without human involvement) to
track behaviors can allow for increased anticipated autonomy and reduce evaluative
apprehensions associated with technology-mediated monitoring. Moreover, organizational
leaders may find a greater willingness by employees to receive performance feedback from
technology-based products relative to human managers. Finally, and more broadly, my findings
have implications for society as it faces the proliferation of devices in the category of the Internet
of Things. Many of these devices will introduce information security challenges, and making
thoughtful choices about them will be paramount to leading effective, healthy, and meaningful
lives. My research indicates that people may tend to anticipate that adopting such products will
provide greater autonomy than may actually be the case.
Limitations and Future Directions
This research is not without limitations. I examined the psychological factors that drive
people’s decision to adopt behavior tracking products. However, I did not explore important
consequences of this decision, including implications for privacy. Behavior tracking products
closely and continuously track users and, in doing so, collect large volumes of personal, sensitive
data about them. Manufacturers of these products often store and transmit these data and may
also share these data with other third parties (Chester, 2017), thus creating numerous risks related
to information security and privacy. In fact, the widespread sharing of personal information via
technology coupled with people’s limited focus on privacy-related issues have led information
security experts to suggest that people are becoming vulnerable to innumerable privacy-related
risks such as identity theft, misuse of personal information, stalking, and even extortion (Kang,
Dabbish, Fructer & Kiesler, 2015). Privacy-related risks are especially likely when people use
hardware-based behavior tracking devices. These devices are fully trackable, and one in five behavior tracking applications transmits user-generated data, including consumers’ names, email addresses, and passwords, without encryption (Hunt, 2015). Given the increasing proliferation of
behavior tracking products in our society, it is important to understand how the adoption and use
of such products might influence people’s privacy-related perceptions and behaviors.
Studies show that people’s perception of privacy in a given context is determined by their
assessment of possible risk (in the form of negative evaluation, deterioration of perceived self-
image, and possibility of embarrassment) in that context (Norberg, Horne & Horne, 2007).
Moreover, extant research on privacy and self-disclosure reveals that people are more comfortable
revealing sensitive, personal information during computer-administered assessments (versus
human-administered assessments) due to an illusion of privacy (Weisband & Kiesler, 1996) and
a perception of increased anonymity when disclosing to computers rather than to humans
(Joinson, 2001). Taken together, these results suggest that people may have higher perceptions of
privacy when using technology-operated behavior tracking products due to the reduced salience
of the presence of other humans who may be able to access their data. This illusion of privacy
may be exaggerated by the assumption that organizations that have access to the data collected
from behavior tracking products may only use and analyze aggregate data from groups of users,
rather than identify and use individual-level data. Thus, people may think that automated systems
such as technology-operated behavior tracking products pose a lower risk for privacy. However,
it is not clear whether people make calculated assessments of privacy-related risks and have
accurate privacy perceptions when using automated technologies. Future research in this area should examine whether people are more likely to trust automated technological products and to disclose more information about themselves to those products than to human-operated technologies, and should explore how their assessments of privacy-related risks influence their self-disclosure behaviors.
In addition to examining how automated technologies influence people’s perceptions of
privacy, it is important to understand whether engaging with those technologies can impact
people’s privacy-related behaviors. My findings indicate that one of the main factors that drive
people’s decision to adopt technology-operated behavior tracking products is low social
evaluation concerns. In light of these findings, future research should consider how low social
evaluation concerns associated with using technology-operated behavior tracking products can
influence people’s privacy-related behaviors. Extant literature on privacy highlights that, among
other reasons, people seek control over their personal information by desiring privacy in order to
maintain a positive social identity and avoid undesirable consequences of disclosing personal
information to others (e.g., Alge, 2001; Pedersen, 1997; Tolchinsky, McCuddy, Adams, Ganster
& Woodman, 1981). This suggests that people may be less likely to engage in privacy-seeking
behaviors in situations where social evaluative threat is low. In one study, participants were more
willing to share details about their engagement in illegal or ethically questionable behaviors
when they were informed that most participants admitted to having engaged in the same
behaviors (Acquisti, John & Loewenstein, 2012). Thus, when people know that certain behaviors
are common in a given context, they believe that they are less likely to be negatively evaluated
by others for engaging in those behaviors. Therefore, they feel comfortable sharing sensitive,
personal information about themselves in that context. It would be interesting to extrapolate
these ideas to the context of behavior tracking products and explore whether people who adopt
technology-operated behavior tracking products are less likely to engage in privacy-seeking
behaviors.
It would also be interesting to examine whether social evaluation concerns have
implications for technology abandonment. My findings suggest that there are systematic
differences between people’s adoption behaviors in relation to technology-operated and human-
operated behavior tracking products. Interestingly, even though people experience higher social
evaluation concerns when using human-operated behavior tracking products, they continue to
use both the technology-operated and the human-operated behavior tracking products at equal
rates after adoption. Thus, although I find systematic differences in people’s adoption behaviors
with a marked preference for technology-operated behavior tracking products, it is unclear what
motivates people to continue using human-operated behavior tracking products even when they
experience social evaluation concerns. It is possible that, when pursuing goals in the presence of other people, quitting involves a psychological cost in the form of aversive feelings (e.g., embarrassment) that stem from being negatively evaluated by others. That is, one might
feel negatively evaluated for abandoning the product and feel embarrassed to quit. Therefore, it
is likely that, after adoption, people continue to use human-operated behavior tracking products
because of possible negative evaluation associated with quitting. This is consistent with prior
research suggesting that the presence of an audience during goal pursuit can make individuals
feel compelled to persist and continually invest resources toward attaining the goal in order to
save face and avoid negative evaluations (Brockner, Rubin & Lang, 1981). Future research
examining when and why people abandon automated technologies while continuing to use
human-mediated technologies could yield interesting results that can shed light on the underlying
psychology that drives these effects.
Although my studies provided evidence of mechanisms, future research should explore
possible factors that may act as boundary conditions of the current findings. In the present
research, I examined the role of an important boundary condition – people’s inherent comfort
with technology. Future research should examine other relevant user-related characteristics that
can influence people’s preference for technological assistance over human assistance.
Specifically, it will be important to explore factors that may influence people to seek out human
involvement over technology. One such factor is people’s need for social connectedness.
Technology-operated behavior tracking products and the sense of autonomy that people
experience when using these products can reduce their likelihood of seeking out social
interactions when pursuing goals. Extant research suggests that when people rely on technology
at the expense of real human interactions, they experience lower feelings of social connection
and belonging (Kushlev, Proulx & Dunn, 2017; Sandstrom & Dunn, 2014). Future research can examine how people’s need for social connectedness and belonging influences their choice between situations where they can use technology-operated products and services and situations that afford more opportunities for social interaction.
In addition to examining participant characteristics as boundary conditions, it will be
interesting to explore how situational factors influence people’s preference for technology-
operated behavior tracking products over human-operated behavior tracking products. For
example, the type of goal that people pursue can influence their preference for technological
versus human assistance. Particularly, the extent to which a goal creates social evaluation
pressures can have an impact on people’s willingness to adopt technological assistance. Research
on social comparison and evaluation has suggested that when individuals are motivated by self-
enhancement goals (i.e., goals that are aimed at improving oneself on a particular dimension),
they prefer positive evaluation outcomes and are more likely to engage in downward social
comparisons or avoid social comparison altogether (Wood, 1989). In my studies, I have focused
particularly on goals that are self-enhancing (e.g., becoming better at managing time, improving
one’s health). Therefore, it is plausible that the self-enhancing nature of these goals led to a
preference for situations involving little or no social evaluation and consequently, a preference
for technological assistance to pursue these self-enhancing goals. In contrast, when
individuals are motivated by self-evaluation goals (i.e., goals that are aimed at attaining a more
accurate understanding of oneself), they are receptive to all comparative information that can be
obtained from social evaluation (Wood, 1989). In such situations, social evaluations are
perceived as necessary rather than as pressuring or autonomy-reducing. Therefore,
when pursuing self-evaluation goals, people may gain various benefits from adopting human
assistance over technological assistance.
Finally, future research can also examine other possible alternative explanations for my
findings. For example, it would be interesting to explore how perceptions of imposing on another
person affect people’s preference for technological assistance. Although I did not examine this
variable in my experiments, participants who considered the human-operated product may have
thought about the extent to which they may need to impose on another individual when using the
product. In fact, research on helping behavior suggests that, in addition to experiencing feelings
of vulnerability and embarrassment, one of the main reasons for people’s reluctance to solicit
help from others is their tendency to attend to the instrumental costs (e.g., time, effort) they may
impose on others (Flynn & Lake, 2008; Bohns & Flynn, 2010). Therefore, in addition to social
evaluation concerns and anticipated autonomy, perceptions of imposing on another individual
could also be a factor that may contribute to reducing people’s preference for human-operated
behavior tracking products.
Conclusion
Since the advent of the Internet 28 years ago, the U.S. Patent and Trademark Office has
granted over four million patents for technological inventions (U.S. Patent and Trademark Office,
2016). In the face of such exponential growth, it is imperative to understand the psychological
determinants of the adoption of new technologies. The present research identifies the
psychological drivers that facilitate the adoption of behavior tracking technologies: reduced
social evaluation concerns and increased anticipated autonomy. As people become ever more
enmeshed with technology, it is my hope that this work will help shed light on this rapid
proliferation as well as inspire further research related to the psychology of technology.
CHAPTER 3
VIRTUAL REALITY IN MANAGEMENT: DRIVERS AND CONSEQUENCES
Abstract
The present research examines the conditions under which managers prefer interacting with their
subordinates virtually, via computer avatars, as opposed to through traditional face-to-face
conversations. Two experiments, conducted using computer avatars (graphical computer representations of humans), tested the prediction that managers prefer to use technological interaction tools to interact with their subordinates when they are concerned about negative social evaluation. This prediction is based on the idea that technology acts as a psychological buffer, allowing people to avoid face-to-face interactions in uncomfortable social contexts. Results supported this prediction, indicating that participants in
managerial/leadership roles avoided face-to-face interactions and preferred virtual interactions in
situations where they were concerned about negative social evaluation. Specifically, in Study 1
(N = 101), participants in a managerial role, who were asked to monitor their subordinates
frequently, preferred interacting with their subordinates via computer avatars. In Study 2 (N =
196), participants, in their roles as team leaders, preferred interacting with their team members
via computer avatars and this effect was mediated by reduced concerns about negative
evaluation. I also explored how various individual-level and situational variables moderated this
effect. I discuss the theoretical implications of these findings and highlight future directions for
research on the psychology of technology.
One of the most popular emerging technologies in social and work settings is virtual reality (VR). The utility and popularity of VR are evident in the numerous ways in which people use this technology in their work and personal
lives. For example, in 2017, a couple in the UK hosted their wedding in a virtual reality social
platform where their friends and family from across the globe joined them using VR headsets
(Fortson, 2017). In work settings, VR is used for recruiting and training in various organizations
including the US Navy, the British Army, and Walmart (Chandler, 2017). In a recent survey of
18,000 professionals and students from 19 countries, respondents cited VR as the technology that
is most likely to revolutionize their work in the coming decade (Bresman & Rao, 2017).
Consistent with this expectation, VR was predicted to be one of the top technological trends that
would have a strategic impact for organizations in 2018 (Gartner, 2017). The rising popularity of
this technology is also reflected in forecasts suggesting that worldwide spending on VR will
increase from $13.9 billion in 2017 to over $143 billion in 2020 (IDC, 2017).
Virtual reality (VR) technology can be described as a computer-generated simulation of a
three-dimensional (3D) environment “that surrounds a user and responds to that individual’s
actions in a natural way” (Gartner, 2013). VR replaces physical reality with computer-generated
environments and allows users to experience these virtual environments visually through devices
such as VR headsets, in a tactile manner through devices such as VR gloves, and in a fully
immersive manner through avatars – real-time, digital representations of human beings
(Bailenson & Blascovich, 2004).
The popularity of avatars in the workplace is evident from the increasing number of ways
in which they are being used. For example, avatars are used by employees to engage in
immersive virtual meetings and to access a shared virtual workspace (Colbert, Yee & George,
2016). Avatars are also used for training employees in high-risk professions in virtual
environments, for enabling remote employees to collaboratively work together, and for providing
diversity and sensitivity training by asking employees to virtually experience life through avatars
of another gender or race (e.g., Bessiere, Ellis & Kellogg, 2009).
The emergence of VR as a management tool in organizations raises an important question
– when and why do managers prefer virtual interactions through VR over face-to-face
interactions? In this chapter, I examine this question specifically in the context of monitoring and
seek to understand the underlying psychological mechanism that drives managers’ preference for
monitoring subordinates through avatars. I begin by proposing that technology can attenuate the
strength of social evaluative pressures by acting as a psychological buffer. Based on this idea, I
posit that managers will prefer monitoring their subordinates through avatars in situations where
they feel threatened, so that they can distance themselves from their subordinates. I tested these
ideas in the context of monitoring as it is a managerial responsibility that may sometimes elicit
negative evaluation from subordinates.
Frequent Monitoring as a Source of Negative Evaluation
Monitoring is a critical aspect of managers’ jobs that allows them to obtain information
about the performance of subordinates (Komaki, Zlotnick, & Jensen, 1986) and use this
information to differentiate between high and low performers to appropriately administer
contingent rewards (Komaki, 1986). Monitoring influences several key managerial and
employee outcomes such as perceived managerial effectiveness (Komaki, 1986), employee
performance (Chalykoff & Kochan, 1989), individual accountability (Brewer & Ridgway, 1998),
organizational citizenship behaviors and justice perceptions (Niehoff & Moorman, 1993). Due to
the positive effects associated with monitoring, scholars have included the construct in various
taxonomies of effective managerial behaviors (e.g., Komaki et al., 1986; Yukl, 1989).
Although monitoring is a critical aspect of the managerial role, it is a function that might
sometimes elicit negative evaluation from subordinates. Adams (1976) noted that too much
managerial monitoring of employee behaviors could lead to distrust and negative evaluation.
Thus, managers may recognize that monitoring is an important part of their jobs, but may not
always feel comfortable doing it, especially when they are required to monitor more frequently
than they would prefer. However, frequent monitoring may sometimes be situationally
necessary. For instance, managers may be required to closely and frequently monitor poor
performers, or subordinates who are new at their jobs. Similarly, while working on an important
project, they may be pressured by top management to frequently monitor every aspect of the
project. In some contexts, the organizational culture may require frequent monitoring. For
example, supervisors at Walt Disney World were encouraged to monitor employees closely to
ensure that employees displayed desired emotions as a rude employee could potentially cost
Disney future guests. Disney supervisors, therefore, recognized that close monitoring was
necessary and mandated (Van Maanen & Kunda, 1989). In addition to organizational culture,
certain tasks may also demand frequent monitoring. For example, a study on project
management styles revealed that frequent monitoring was considered vital for reducing
uncertainty during the early stages of a project and for pushing the project to completion during
the later stages (Lewis, Welsh, Dehler & Green, 2002).
In these circumstances, managers may experience significant discomfort engaging in
frequent monitoring. Such discomfort may stem from the belief that frequent monitoring may
signal a lack of trust (Adams, 1976; Langfred, 2004) and consequently cast them in a negative
light. Being subject to negative evaluation can be psychologically aversive, especially for people
in positions of power. In the following sections, I outline the key reasons for why managers may
be particularly motivated to avoid situations in which they may be evaluated negatively, and
describe how technologies such as VR can serve as a psychological buffer in such situations.
Power and the Desire to Avoid Negative Evaluation
Social situations, in which people interact with others or operate in the presence of an
audience, trigger concerns about being negatively evaluated by others (Schlenker & Leary,
1982). These concerns about negative evaluation arise when people perceive that their actions
will create undesired impressions of themselves or will garner unsatisfactory reactions from their
audience (Schlenker & Leary, 1982). The perception that one may possibly be negatively
evaluated by others in a social situation is psychologically aversive to people as it affects how
others perceive, evaluate and treat them (Goffman, 1959; Leary & Kowalski, 1990) and also how
people view themselves (Leary & Baumeister, 2000). Negative social evaluation is also
psychologically aversive as it leads to a range of negative feelings including feelings of
embarrassment (Miller, 1995; Modigliani, 1971), social anxiety (Schlenker & Leary, 1982), and
shame (Tangney, 1992).
The possibility of negative evaluation can be especially aversive if one is in a position of
power. Such positions inherently come with higher expectations for people to behave in ways
that inspire positive social evaluation. For instance, studies show that individuals in positions of
power experience salient expectations that they possess and display high levels of competence
(Fast, Burris & Bartel, 2014; Mintzberg, 2009), and failing to meet these expectations leads to
ego threat (Cho & Fast, 2012; Fast & Chen, 2009). Moreover, when individuals in positions of
power are evaluated negatively, they face the possibility of severe repercussions in the form of
loss of status and influence (Marr & Thau, 2014). They may also experience negative
relationship outcomes (Anicich, Fast, Halevy, & Galinsky, 2016; Fast, Halevy & Galinsky,
2012) and even be perceived as illegitimate occupants of high-power positions (Magee &
Galinsky, 2008). Thus, the possibility of negative evaluation may be quite salient for managers,
given their positions of power over others.
The idea that managers may be especially averse to situations that can garner negative
evaluation is evident in the communication context. A survey of 616 managers conducted by
Interact (a communication consultancy) and Harris Poll in 2016 revealed that 69% of managers
were uncomfortable communicating with their employees (Interact Report, 2015; Solomon,
2016). This survey also revealed that 37% of managers were especially uncomfortable in
situations where they were expected to give direct feedback or criticism about their employees’
performance to which employees might respond negatively. Moreover, 20% of managers
reported feeling uncomfortable demonstrating vulnerability (e.g., sharing that they might have
made a mistake), and 16% reported feeling uncomfortable communicating with their employees
face-to-face (Interact Report, 2015; Solomon, 2016). These results highlight that managers are
uncomfortable communicating with employees, especially in situations where they might be
perceived negatively or be subject to negative evaluation.
Technology, Psychological Distancing, and Social Presence
Although managers might recognize that monitoring is an important responsibility
associated with their roles, they may also be aware that it can sometimes elicit negative
evaluation. Despite anticipating psychological discomfort in such situations, managers may find
themselves in circumstances where frequent monitoring is either required or mandated. Given
that the possibility of negative evaluation can be aversive, especially for those in positions of
power, managers who must engage in frequent monitoring may be motivated to minimize the
psychological aversion it engenders. As a result, they may resort to distancing themselves from
the situation as a way of avoiding negative evaluation.
This is consistent with research showing that evaluative concerns lead to defensive
distancing. A study examining intergroup interactions revealed that participants who believed
that they exhibited more enthusiasm about a possible relationship than their interaction partner
were more likely to distance themselves from the situation to avoid potential embarrassment
(Vorauer & Sakamoto, 2006). Such defensive distancing can alleviate the evaluative threat posed
by the situation (Peetz, Gunn & Wilson, 2010) and, thus, help people restore self-integrity by
downplaying the significance of the threat for the self (e.g., Harris & Napper, 2005). In line with
this argument, other studies also show that people are motivated to avoid negative evaluation by
distancing themselves from the situation (e.g., Snyder, Lassegard & Ford, 1986; Curtis & Miller,
1986).
In the modern workplace, one way in which managers may seek to distance themselves
from subordinates in situations where they anticipate negative evaluation is by using technology
to interact with them. When people interact with each other face-to-face, they experience higher
levels of social risk. Technology can mitigate these risks for the interacting parties by
reducing the availability of relevant social contextual cues that may be evident in face-to-face
interactions such as facial expressions, moods, demeanor, and body language (e.g., Ang,
Cummings, Straub & Earley, 1993; Kiesler, Siegel & McGuire, 1984; Sproull & Kiesler, 1986). I
propose that, in addition to attenuating relevant social contextual cues, technology sidesteps
concerns about negative evaluation by reducing people’s perceptions of social presence. Social
presence refers to the degree of perceived tangibility and proximity of other people in a given
context (Short, Williams, & Christie, 1976). Extant research suggests that, in general, social
presence is lower when people interact through technology compared to when they interact face-
to-face (e.g., Bailenson, Blascovich, Beall & Loomis, 2003; Joinson, 2004; McLeod, Baron,
Marti & Yoon, 1997; Postmes, Spears, Sakhel & de Groot, 2001). When social presence is high,
people pay more attention to their interaction partners’ behaviors and are more likely to be
influenced by them, while people are less likely to experience others’ reactions or be affected by
them when social presence is low (Short, Williams, & Christie, 1976).
Studies comparing face-to-face communication and technology-mediated communication
have offered support for this idea across various contexts. For example, shy individuals are
reported to be more comfortable engaging with others in a virtual environment due to reduced
social presence (Joinson, 2004). In another study, McLeod and colleagues argued that the expression of deviant minority opinions would be greatest when social presence is lowest, as others’ negative reactions are least likely to be felt in such situations (McLeod et al., 1997).
Thus, by reducing perceived social presence, technology reduces the strength of social evaluative
pressures and acts as a buffer that individuals can use to distance themselves from situations in
which they anticipate negative evaluation.
Providing support for my prediction that managers will use technology as a buffer from
potential negative evaluation, studies on interpersonal communication and disclosure have found
that people prefer to disclose negative, personally sensitive information about themselves to
technological tools such as embodied conversational agents (ECAs) over human interviewers
(e.g., Lucas, Gratch, King & Morency, 2014; Pickard, Roster & Chen, 2016). These results
indicate that interacting with or through computer avatars could reduce anticipated social
evaluation concerns that may be present in face-to-face interactions. In fact, scholars have
suggested that “the possibility that people would tell an impartial machine personal or
embarrassing things about themselves, without fear of negative evaluation, has been raised since
the first uses of computers for communication” (Weisband & Kiesler, 1996, p. 3).
Building on these ideas, I posit that in situations that call for frequent monitoring (and,
thus, pose the risk of negative evaluation for managers), managers will be less likely to prefer
traditional face-to-face interactions with subordinates. Rather, in such situations, managers may
be more likely to use technologies such as VR as a medium to interact with their subordinates.
Technology serves as a psychological buffer as it helps sidestep concerns about negative
evaluation by reducing social presence.
Overview of the Present Research
I conducted two experiments to test the following predictions: (a) managers are likely to
move away from face-to-face interactions and choose to interact through computer avatars in
situations that call for frequent monitoring, and (b) this effect is mediated by the extent to which
anticipate being evaluated negatively in that situation. For each experiment, I collected all data in
single, complete batches and did not conduct any analyses until all data for a given experiment
were collected. I report all measures, manipulations and exclusions. Based on recent
recommendations for increased power (e.g., Simmons, Nelson & Simonsohn, 2013), I recruited large samples (N = 101 in Experiment 1 and N = 196 in Experiment 2).
Experiment 1
In Experiment 1, participants were asked to place themselves in the role of a manager
who would be overseeing a subordinate working on a project. Participants read detailed
descriptions of the project requirements and were informed that they had to monitor their
subordinate a certain number of times during the length of the project. Following this, they were
asked to choose their preferred method for overseeing the subordinate – either face-to-face or via
a computer avatar.
Method
Participants
One hundred and one U.S. individuals (41.6% female; mean age = 34.75) recruited from Amazon’s Mechanical Turk (mTurk) participated in exchange for $0.50. This data source has been verified to yield research data that are at least as reliable as those obtained via traditional methods and includes a participant pool that is significantly more diverse than typical American college student samples (Buhrmester, Kwang & Gosling, 2011). A majority (82.5%) of the
participants reported that they were currently employed. Additionally, 58.5% of the participants
reported that they held at least a bachelor’s degree and 79.8% reported that they had managerial
experience. Participants were randomly assigned to one of the two conditions – minimal
monitoring (n = 50) and frequent monitoring (n = 51).
Materials and Procedure
I informed participants that, in this study, I would be simulating typical interactions
between managers and subordinates in organizations. I asked participants to imagine that they
were playing the role of a manager in the marketing department of an organization and that they
were to oversee a subordinate who would be working on a marketing project. Then I provided
detailed descriptions of the company and the project that the subordinate would allegedly be
working on. The project involved creating a print advertisement for a new product (a smart
watch) that the company was about to launch. Following this, I informed participants that their
subordinate would have 4 hours to work on the project and that it was their responsibility as a
manager to ensure that the project was of high quality.
Participants then received specific instructions pertaining to managing their subordinates.
Participants in the minimal monitoring condition read: “In this scenario, you want to check in on
your subordinate once during the 4-hour period.” Participants in the frequent monitoring
condition read: “In this scenario, you want to check in on your subordinate every half hour, for a
total of 8 times during the 4-hour period.”
After reading these instructions pertaining to how frequently they would need to monitor
their subordinates, participants in both conditions read the following descriptions of the different
methods of monitoring that were available to them:
Face-to-face:
“You may walk up to your subordinate’s office down the hall and ask him/her for an update on
the project face-to-face. Your subordinate will be able to respond to your update request face-to-
face.”
Computer Avatar:
“You may walk up to the computer room down the hall and send your avatar to ask him/her for
an update on the project. Your avatar is a personalized computer video graphic that represents
you and can be seen on your subordinate’s computer screen. Your subordinate will be able to
respond to your update request with their own avatar that will appear on your computer screen.”
In both descriptions, I intentionally controlled for ease/convenience of using the avatar in
order to rule out this explanation as a possible reason for why participants may prefer monitoring
through the avatar over monitoring subordinates face-to-face. To do this, I specified that for both
methods of monitoring, participants had to walk down the hall either to the subordinate’s room
(in the face-to-face method) or to the computer room (in the computer avatar method) to monitor
their subordinate. Following this, I asked participants to select between the face-to-face method
and the computer avatar method to accurately reflect their most preferred method for monitoring
their subordinate in this context.
In addition to selecting between the face-to-face method and the computer avatar method,
I asked participants to report whether they had any prior managerial experience. I expected that
the experience of being in a managerial position would increase the salience of potential negative
evaluations for participants. Therefore, I anticipated that the effect of frequent monitoring on
preference for using computer avatars would be stronger for participants with managerial
experience.
In addition to age, gender, employment status, educational level and managerial
experience, I also asked participants to report their ethnicity.
Results and Discussion
Results revealed differences between conditions in participants’ preference for using the computer avatar to monitor subordinates. Specifically, a chi-square test indicated that 10% of participants in the minimal monitoring condition preferred using the computer avatar method to monitor their subordinates, and this proportion nearly tripled, to 27.5%, in the frequent monitoring condition, a significant difference, χ²(1, N = 101) = 5.034, p = .025.
Thus, managers in a situation calling for frequent monitoring were more likely to move away
from face-to-face interactions and instead preferred interacting via a computer avatar.
Next, I examined possible differences among participants based on prior managerial
experience. Perhaps, for example, the effects were driven primarily by individuals who lacked
managerial experience. However, I found precisely the opposite. When I split my sample into
those who did and did not have managerial experience, I found that my prediction was supported
among participants who reported having managerial experience but not among those who
reported having no managerial experience. Among participants with managerial experience, a chi-square test revealed that 27% of participants in the frequent monitoring condition opted for using the computer avatar to monitor their subordinate, compared to 9.5% of participants in the minimal monitoring condition, χ²(1, N = 79) = 4.133, p = .042. However, among participants with no managerial experience, there was no significant difference in preference for using the computer avatar method depending on monitoring frequency, χ²(1, N = 20) = .22, p = .64; 23.1% of participants in the frequent monitoring condition opted for using the computer avatar, compared to 14.3% in the minimal monitoring condition. This pattern adds further validity to my findings, as the effects emerged
among those with actual managerial experience.
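As an illustration of the chi-square tests reported in this section, the short Python sketch below runs the overall 2 x 2 test on the counts implied by the reported percentages (10% of n = 50 is 5 participants; 27.5% of n = 51 is 14). With no continuity correction, it recovers the reported χ²(1, N = 101) = 5.034; the same call applies to the managerial-experience subgroups.

from scipy.stats import chi2_contingency

# Rows: minimal vs. frequent monitoring; columns: chose avatar vs. face-to-face
table = [[5, 45],    # minimal monitoring: 10% of 50 chose the avatar
         [14, 37]]   # frequent monitoring: 27.5% of 51 chose the avatar

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-square({dof}, N = 101) = {chi2:.3f}, p = {p:.3f}")  # 5.034, p = .025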
Experiment 2
Next, I sought to replicate my findings from Experiment 1 using a different manipulation
of monitoring frequency. I also examined the prediction that managers’ preference for using
computer avatars in situations that called for frequent monitoring is mediated by the extent to
which they anticipate being evaluated negatively in those situations. Finally, I sought to extend
my findings from Experiment 1 by exploring how certain individual-level and situational-level
variables affected the relationship between monitoring frequency and preference for using
technology to monitor subordinates. Specifically, I focused on personality differences at the
individual level and on contexts that encouraged managerial control through frequent monitoring
at the situational level.
Personality Differences
Individual differences such as age, gender and personality traits influence how people use
and interact with technology (Rosengren, 1974). As research on technology use has burgeoned,
numerous scholars have examined the influence of personality traits on individuals’ use of newer
forms of technology including social media (e.g., Hughes, Rowe, Batey & Lee, 2012; Ross, Orr,
Sisic, Arseneault, Simmering & Orr, 2009; Ryan & Xenos, 2009), blogging and personal
websites (e.g., Guadagno, Okdie & Eno, 2008; Marcus, Machilek & Schutz, 2006), and computer
avatars (Fong & Mar, 2015). Given the importance of personality traits in the context of studying
technology use, I sought to examine how personality differences influenced people’s preferences
for adopting technology for monitoring subordinates. Specifically, I focused on differences in the
Big-Five personality traits (extraversion, agreeableness, conscientiousness, neuroticism and
openness to experience), dominance motivation, and need for belonging.
The five-factor model of personality (McCrae & Costa, 1997) has been widely used in
research pertaining to technology use. Numerous studies show that extraversion and neuroticism are significantly related to Internet usage (e.g., Amichai-Hamburger, 2002; Amichai-Hamburger & Ben-Artzi, 2003; Amichai-Hamburger, Wainapel & Fox, 2002). Individuals low in extraversion and high in neuroticism used the Internet more heavily than their extraverted, less neurotic counterparts, as they felt that they could express their real selves better when communicating with others online rather than offline (Amichai-Hamburger et al., 2002).
Moreover, studies show that individuals high in extraversion (compared to introverts) are more
likely to use social media and other forms of technology as social tools but not as a substitute for
real world interactions (e.g., Amiel & Sargent, 2004; Ross et al., 2009). Studies also show that
individuals high in neuroticism (compared to those low in neuroticism) preferred online tools
(e.g., chat rooms, social media etc.) for communication over face-to-face interactions as such
tools offered more time to contemplate their responses and control their communication with
others (e.g., Butt & Philips, 2008; Correa, Hinsley & de Zuniga, 2009). Based on these findings,
I expect that extraversion will be negatively related and neuroticism will be positively related to
preference for using computer avatars to monitor subordinates.
Conscientiousness reflects the degree to which an individual is diligent and scrupulous.
Prior research has shown that conscientiousness is negatively related to the use of Internet and
other forms of computer mediated communication (e.g., Butt & Philips, 2008) as individuals
high in conscientiousness are dutiful and responsible and may consider social media and
computer-mediated communication tools as sources of distraction (e.g., Ross et al., 2009). Based
on this idea, I expect that individuals high in conscientiousness will be less likely to prefer using
technology to monitor subordinates. Prior research suggests that the relationship between
agreeableness and technology use is quite unclear. Some researchers show that agreeableness is
negatively related to Internet use (Landers & Lounsbury, 2006) while others show that
agreeableness has no effect on social media use (e.g., Ross et al., 2009). In the context of
monitoring subordinates, I expect that agreeableness may be negatively related to preference for
avatar use as individuals high in agreeableness may feel more secure expressing themselves in
face-to-face interactions where they can minimize the possibility of misunderstanding. Finally,
given that computer avatars are a fairly novel form of technology in the context of monitoring
subordinates, I expect a positive relationship between openness to experiences and preference for
using avatars to monitor subordinates. This is consistent with prior research showing that
openness to experience positively predicted blogging behavior (Guadagno et al., 2008) and social
media use (Correa et al., 2009).
In addition to the Big-Five personality factors, I sought to examine how differences in
dominance motivation influenced preference for using avatars to monitor subordinates.
Dominance is a social influence strategy in which people use their power to control others,
irrespective of others’ desire to follow (Mead & Maner, 2012). Individuals high in dominance
motivation use social interaction as a channel to exert their power to control others and maintain
their dominance. In the context of monitoring subordinates, I expect that dominance motivation
will be negatively related to preference for using avatars to monitor subordinates. I reason that
individuals high in dominance motivation will seek opportunities to exert their power to control
others and face-to-face interactions offer more salient opportunities for such a display of
dominance compared to interactions through technology.
Need for belonging is another personality factor that might influence preferences for
using avatars to monitor subordinates because individual differences in belonging motivation are
shown to drive behaviors focused on obtaining/maintaining interpersonal acceptance and are associated with people’s concerns about others’ evaluations of them (Leary, Kelly, Cottrell &
Schreindorfer, 2013). Given my prediction that people’s preference for using avatars to monitor
subordinates will be mediated by the extent to which they anticipate being evaluated negatively
in a given situation, I expect that individuals with a high need for belonging will be more likely
to use avatars to monitor subordinates when they feel concerned about others’ evaluations of
them (such as in situations that call for frequent monitoring).
Typicality of Frequent Monitoring
In this experiment, I also sought to explore the role of the typicality of frequent
monitoring as a possible boundary condition in the relationship between monitoring frequency
and preference for using avatars for monitoring. Specifically, I posited that in contexts where
frequent monitoring is considered typical, the relationship between monitoring frequency and
preference for using technology for monitoring would be weakened. In such contexts, I expected that engaging in frequent monitoring would be considered appropriate and therefore would not lead
to concerns about being evaluated negatively by subordinates. When managers do not anticipate
being evaluated negatively by subordinates (due to the increased appropriateness of frequent
monitoring in contexts where frequent monitoring is considered typical), they would not have to
use technology as a psychological buffer to distance themselves from their subordinates. In order
to test this prediction, I manipulated the typicality of frequent monitoring.
Method
Participants and Design
One hundred and ninety-six undergraduates recruited from USC Marshall (40.1% female; mean age = 20.50) participated in this experiment for course credit. I screened for incomplete responses (n = 4) and excluded participants who did not complete the study. In this experiment, I included an attention check (cf. Oppenheimer, Meyvis & Davidenko, 2009) that asked participants to report information they directly received, in order to make sure they were reading the information I provided, and excluded participants who failed the attention check (n = 28, 14.3%), for a final sample size of one hundred and sixty-four (37.8% female; mean age = 20.52). In this sample, 31.7% of participants reported that they were currently employed and 68.3% reported that they had some managerial experience.
Participants were randomly assigned to one of four conditions in a 2 (Monitoring
frequency: minimal vs. frequent) x 2 (Typicality of frequent monitoring: typical vs. atypical)
between-subjects design. (ns: minimal monitoring, frequent monitoring is atypical = 41;
minimal monitoring, frequent monitoring is typical = 39; frequent monitoring, frequent
monitoring is atypical = 40; frequent monitoring, frequent monitoring is typical = 44).
Materials and Procedure
I informed participants that I would be simulating typical classroom interactions that occur between students who work together on in-class projects. Participants were asked to
vividly imagine that they were to complete an in-class project in teams for their Marketing class.
Before beginning work on the project, I informed participants that they were to complete a
personality survey so that I could better understand how students’ personality characteristics
impacted the way they worked together on projects. Following this, participants completed the
Big Five Personality Measure (BFI-10; Rammstedt & John, 2007). The BFI-10 is an abbreviated
version of the BFI wherein each of the five personality factors – extraversion, agreeableness,
conscientiousness, neuroticism and openness – is measured using two items (one true-scored
item and one reverse-scored item). The BFI-10 has been demonstrated to have good convergence
with the Big-Five Inventory-44 (BFI-44) and has good test-retest reliability (Rammstedt &
John, 2007). Participants were asked to rate themselves on a scale from 1 (Strongly disagree) to 7
(Strongly agree) to accurately describe how they generally see themselves on a total of ten items
– two items pertaining to each of the five personality factors.
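As a concrete sketch of how a BFI-10-style measure is scored, the Python snippet below averages one true-scored and one reverse-scored item per factor on the 7-point scale described above. The item identifiers, item-to-factor key, and example ratings are hypothetical, not the actual BFI-10 key.

def score_bfi10(responses, scale_max=7):
    # Each factor: mean of one true-scored item and one reverse-scored item.
    # The item-to-factor key below is illustrative only.
    key = {
        "extraversion": ("e_true", "e_rev"),
        "agreeableness": ("a_true", "a_rev"),
        "conscientiousness": ("c_true", "c_rev"),
        "neuroticism": ("n_true", "n_rev"),
        "openness": ("o_true", "o_rev"),
    }
    reverse = lambda x: (scale_max + 1) - x  # reverse-code on a 1-to-7 scale
    return {factor: (responses[t] + reverse(responses[r])) / 2
            for factor, (t, r) in key.items()}

example = {"e_true": 6, "e_rev": 3, "a_true": 5, "a_rev": 2, "c_true": 7,
           "c_rev": 1, "n_true": 2, "n_rev": 6, "o_true": 4, "o_rev": 4}
print(score_bfi10(example))  # factor means on the original 1-7 metric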
Next, I asked participants to complete the seven-item dominance motivation subscale (α = .91) of the Achievement Motivation Scale (AMS; Cassidy & Lynn, 1989; Maner & Mead, 2010). This subscale assessed a person’s desire for power
and authority (“I like to give orders and get things going,” “I would enjoy having authority over
people,” “I prefer to direct group activities myself rather than having someone else organize
them,” “I would make a good leader,” “I am usually leader of my group,” “People take notice of
what I say,” and “I enjoy planning things and deciding what other people should do”).
Participants provided their ratings on a scale from 1 (Strongly disagree) to 7 (Strongly agree).
Following this, I asked participants to complete the ten-item Need to Belong scale
(α = .75; Leary, Kelly, Cottrell, & Schreindorfer, 2013). This scale measured a person’s belonging
motivation, a trait that is said to go beyond a mere desire to affiliate/socialize to reflect one’s
“desire to be accepted, form relationships and belong to social groups” (Leary et al., 2013, p.
611). Sample items include: “I want other people to accept me”, “I have a strong ‘need to
belong’”, “My feelings are easily hurt when I feel that others do not accept me”. Participants
provided their ratings on a scale from 1 (Not at all) to 5 (Extremely) to indicate the degree to
which each statement is characteristic of them.
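Because internal-consistency coefficients (α) are reported for several of the scales in this experiment, a brief sketch of how such a coefficient is computed from an item-response matrix may be useful. The function below implements the standard Cronbach's alpha formula and is generic rather than tied to any one scale; the sample responses are hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_participants x n_items) response matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical seven-item dominance-motivation responses for five participants:
responses = np.array([[6, 5, 6, 7, 5, 6, 6],
                      [3, 2, 3, 4, 3, 2, 3],
                      [5, 5, 4, 5, 6, 5, 5],
                      [2, 3, 2, 2, 3, 3, 2],
                      [7, 6, 7, 6, 7, 7, 6]])
print(round(cronbach_alpha(responses), 2))
```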
Next, I provided additional detail about the in-class project and informed participants that
they were assigned to be the team leader of a team of four students and that they needed to
complete the in-class project in one class session. They were informed that one of the main
responsibilities associated with the role of team leader was to oversee other students in their
teams to ensure the timely completion of the project. I also indicated that their projects would be
graded on four criteria: the timely completion of the project, the quality of the project, team
members’ contributions and effectiveness of the team leader. Following this, I described the
specific tasks involved in completing the project – i.e., to create a print advertisement for a new
product that an organization was about to launch.
After providing instructions about the project, I informed participants that they would be
sent to another room at the end of the hall with the other team leaders and that their teams would be
working in the classroom. Next, I manipulated the typicality of frequent monitoring in the
following way: Participants in the atypical condition read: “Team leaders typically check in on
their teams at least one time during the 1 hour and 20-minute period.” In contrast, participants in
the typical condition read: “Team leaders typically check in on their team at most ten
times during the 1 hour and 20-minute period.” Through this manipulation, I intended to convey
that a certain degree of monitoring (minimal vs. frequent) was considered typical and would be
acceptable in this situation. I expected that the extent to which participants would anticipate
being negatively evaluated for frequently monitoring their subordinates would be influenced by
the extent to which such frequent monitoring was considered typical in that situation.
Following this, I manipulated the frequency of monitoring: Participants in the minimal
monitoring condition read: “In this scenario, you decide that you want to check in on your
team one time during the 1 hour and 20-minute period.” Participants in the frequent monitoring
condition read: “In this scenario, you decide that you want to check in on your team every 10
minutes, for a total of 8 times during the 1 hour and 20-minute period.”
After reading these instructions, participants responded to the following set of two items
that I included to measure anticipated negative evaluation (α = .75): “I will worry that team
members will think negatively of me” and “I will be concerned that my team members will think
that I don’t trust them”. All ratings were made on a scale from 1 (Strongly disagree) to 7
(Strongly agree).
After answering these questions, participants read about the two possible methods for
monitoring team members – face-to-face and computer avatars. The descriptions of the two
methods were the same as in Experiment 1. Participants then rated their preference for using the
face-to-face method and the computer avatar method in this scenario on a scale from 1 (Not at
all) to 7 (Very much). I combined the two items (“To what extent do you prefer checking in on
your team face-to-face?” (reverse-coded) and “To what extent do you prefer checking in on your
team via the computer avatar?”) to create the preference for using avatars measure (α = .56) – my
primary dependent variable of interest. In addition, I included a dichotomous measure of
preference for using avatars for monitoring and asked participants to choose between monitoring
their team members face-to-face or via a computer avatar (same as the dependent measure
included in Experiment 1).
Results and Discussion
I conducted a series of 2 (monitoring frequency: minimal vs. frequent) x 2 (typicality of
frequent monitoring: atypical vs. typical) ANOVAs on the dependent measures – (a) preference
for using avatars for monitoring, and (b) anticipated negative evaluation. Consistent with my
predictions and with Experiment 1, results revealed a significant main effect of monitoring
frequency on preference for using avatars for monitoring, F(1, 160) = 6.84, p = .01, ηp² = .04.
Participants in the frequent monitoring condition had a greater preference for using avatars for
monitoring compared to those in the minimal monitoring condition (M frequent monitoring = 2.63,
SE = .12 vs. M minimal monitoring = 2.18, SE = .12). However, typicality of frequent monitoring did not have
a significant main effect on preference for using avatars for monitoring, F(1, 160) = .36, p = .55,
ηp² = .002. There was also no significant interaction between monitoring frequency and typicality
of frequent monitoring on preference for using avatars for monitoring, F(1, 160) = .27, p = .60,
ηp² = .002. This suggests that the extent to which a certain degree of monitoring (minimal vs.
frequent) was considered typical in a given situation did not affect people’s preferences for using
avatars in that situation.
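To illustrate how such a 2 x 2 between-subjects ANOVA can be specified, a minimal sketch in Python using statsmodels appears below. The data frame, column names, and simulated values are illustrative assumptions, not my data or the software actually used; only the model structure mirrors the analysis reported here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Simulated stand-in data: one row per participant, with the two manipulated
# factors and the continuous preference-for-avatars composite (hypothetical).
rng = np.random.default_rng(0)
n = 164
df = pd.DataFrame({
    "frequency":  rng.choice(["minimal", "frequent"], size=n),
    "typicality": rng.choice(["atypical", "typical"], size=n),
})
df["avatar_pref"] = (rng.normal(2.2, 1.0, size=n)
                     + 0.45 * (df["frequency"] == "frequent"))

# 2 (frequency) x 2 (typicality) between-subjects ANOVA, Type II sums of squares.
model = ols("avatar_pref ~ C(frequency) * C(typicality)", data=df).fit()
table = sm.stats.anova_lm(model, typ=2)
# Partial eta squared per effect: SS_effect / (SS_effect + SS_residual).
table["eta_p2"] = table["sum_sq"] / (table["sum_sq"] + table.loc["Residual", "sum_sq"])
print(table)
```

The same model specification, with anticipated negative evaluation as the outcome, yields the second ANOVA reported below.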
I also examined the effect of monitoring frequency on preference for using avatars for
monitoring through the dichotomous outcome variable. As expected, and replicating the results
of Experiment 1, while only 2.5% of the participants in the minimal monitoring condition
indicated that they would prefer monitoring their team members via a computer avatar, this
increased to 13.1% of the participants in the frequent monitoring condition, χ²(1, N = 164) =
6.30, p = .012. Next, I examined how monitoring frequency affected participants’ preference for
using avatars for monitoring depending on typicality of frequent monitoring in that situation.
Among participants in the condition where frequent monitoring was atypical, a chi-square test
revealed that 17.5% of participants in the frequent monitoring condition opted to use the
computer avatar for monitoring their subordinates, compared to 4.9% of the participants in the
minimal monitoring condition, χ²(1, N = 81) = 3.27, p = .07. Among participants in the condition
where frequent monitoring was typical, a chi-square test revealed that 9.1% of participants in
the frequent monitoring condition opted to use the computer avatar, compared to 0% of the
participants in the minimal monitoring condition, χ²(1, N = 83) = 3.73, p = .054. Consistent with
the ANOVA results on the continuous measure of preference for technology adoption, these
results also suggest that typicality of frequent monitoring did not affect the relationship between
monitoring frequency and preference for avatars.
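The chi-square tests on this dichotomous measure can be recomputed from a simple 2 x 2 count table, as in the sketch below. The cell counts are back-calculated from the reported percentages (2.5% of the 80 minimal-monitoring participants ≈ 2; 13.1% of the 84 frequent-monitoring participants ≈ 11) and should be treated as an approximate reconstruction rather than the raw data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Approximate counts for the full sample (N = 164), reconstructed from the
# reported avatar-choice rates; rows = condition, columns = choice.
table = np.array([[2, 78],    # minimal monitoring:  avatar, face-to-face
                  [11, 73]])  # frequent monitoring: avatar, face-to-face

# correction=False requests the uncorrected Pearson chi-square statistic.
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2({dof}, N = {table.sum()}) = {chi2:.2f}, p = {p:.3f}")
```

With these counts, the test closely reproduces the reported χ²(1, N = 164) = 6.30, p = .012.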
Following this, I conducted a 2 (monitoring frequency: minimal vs. frequent) x 2
(typicality of frequent monitoring: atypical vs. typical) ANOVA to examine differences between
conditions on anticipated negative evaluation. Consistent with my predictions, results revealed a
significant main effect of monitoring frequency on anticipated negative evaluation, F(1, 160) =
20.23, p < .001, ηp² = .11. Participants in the frequent monitoring condition anticipated more
negative evaluation (M frequent monitoring = 3.89, SE = .15) than those in the minimal monitoring
condition (M minimal monitoring = 2.92, SE = .15). Again, typicality of frequent monitoring did not have
a significant main effect on anticipated negative evaluation, F(1, 160) = 1.08, p = .30, ηp² = .007.
There was also no significant interaction between monitoring frequency and typicality of
frequent monitoring on anticipated negative evaluation, F(1, 160) = .07, p = .80, ηp² < .001.
Contrary to my expectation that typicality of frequent monitoring would moderate participants’
expectations of negative evaluation in contexts where frequent monitoring is considered
typical, I found that this factor did not affect the extent to which people anticipated negative
evaluation when they engaged in frequent monitoring.⁴
Mediation Analyses
I predicted that the effect of monitoring frequency on preference for using avatars for
monitoring would be mediated by the extent to which managers anticipated being evaluated
negatively in frequent monitoring situations. Given that I used a measurement-of-mediation
design to test my prediction, I wanted to ensure that my process measure and my outcome
measure were seen as distinct constructs, both theoretically and empirically. Anticipated negative
evaluation measures the extent to which people expect that others will evaluate them negatively in
a given situation. Preference for using avatars for monitoring was measured as the extent to
which participants preferred using a computer avatar to monitor subordinates. In this mediation
design, I can consider anticipated negative evaluation as the triggering factor in a situation to
which people respond by preferring to use avatars. Moreover, anticipated negative evaluation
and preference for using avatars are empirically distinct constructs, as indicated by the modest
positive correlation between the two measures (r = .38, p < .01).
⁴ I also examined whether managers’ hesitance about seeing their team members react to
their frequent monitoring could be a possible alternative mechanism explaining the
relationship between monitoring frequency and increased preference for using avatars for
monitoring in situations of frequent monitoring. Therefore, I included the following item: “I will
not want to see how my team members react.” There were no significant main effects of either
monitoring frequency, F(1, 160) = .87, p = .35, ηp² = .005, or typicality of frequent monitoring,
F(1, 160) = .07, p = .80, ηp² < .001, on participants’ ratings of hesitance about seeing team
members’ reactions to their level of monitoring. Moreover, there was no significant interaction
effect of monitoring frequency and typicality of frequent monitoring on this dependent measure,
F(1, 160) = 1.96, p = .16, ηp² = .012.
I conducted bootstrapping analyses following procedures for testing direct and indirect
effects using the PROCESS macro (Model 4; Hayes, 2013) to test whether team leaders’
preference for using avatars to monitor their team members in frequent monitoring situations was
mediated by anticipated negative evaluation. The bootstrap results, based on 5,000 resamples,
indicated that the total effect of monitoring frequency on preference for using
avatars for monitoring (b = .44, SE = .17, p = .01) decreased to non-significance (b = .19, SE =
.17, p = .27) when anticipated negative evaluation was included as the mediator. Moreover, the
95% bias-corrected confidence intervals for the indirect effect through anticipated negative
evaluation did not include zero (.13, .45). These results suggest that team leaders’ preference for
using avatars to monitor team members in situations that called for frequent monitoring was
mediated by anticipated negative evaluation. In line with recommendations for reporting
unstandardized coefficients when independent variables are dichotomous (Darlington & Hayes,
2017), I have presented the unstandardized regression coefficients for each pathway in Figure 6.
The 95% bias-corrected confidence intervals for the direct and indirect effects are presented in
Table 5.
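PROCESS is a macro for SPSS and SAS, so for readers working in other environments, the sketch below shows the core of the same analysis – a bootstrap test of the indirect effect in a simple mediation model (the analogue of PROCESS model 4) – in Python. The simulated data and column names are assumptions for illustration, and the sketch uses percentile rather than bias-corrected intervals, so its bounds will differ slightly from those reported here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated stand-in data: X = frequent (0/1), M = neg_eval, Y = avatar_pref,
# with population paths loosely matched to the coefficients reported above.
n = 164
frequent = rng.integers(0, 2, n)
neg_eval = 2.9 + 0.96 * frequent + rng.normal(0, 1.3, n)
avatar_pref = 1.4 + 0.26 * neg_eval + 0.19 * frequent + rng.normal(0, 1.0, n)
df = pd.DataFrame({"frequent": frequent, "neg_eval": neg_eval,
                   "avatar_pref": avatar_pref})

# Resample participants with replacement and recompute the indirect effect a*b.
boot_ab = np.empty(5000)
for i in range(boot_ab.size):
    b = df.sample(frac=1.0, replace=True, random_state=rng)
    a_path = smf.ols("neg_eval ~ frequent", data=b).fit().params["frequent"]
    b_path = smf.ols("avatar_pref ~ neg_eval + frequent",
                     data=b).fit().params["neg_eval"]
    boot_ab[i] = a_path * b_path
lo, hi = np.percentile(boot_ab, [2.5, 97.5])
print(f"indirect effect: mean = {boot_ab.mean():.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The indirect effect is judged significant when the bootstrap confidence interval excludes zero, which is the criterion applied to the (.13, .45) interval above.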
Figure 6. Anticipated negative evaluation mediates the effect of frequent monitoring on avatar
preference. Unstandardized regression coefficients and standard errors for each path are reported.
R² = .39.
Table 5: Mediation Results for the Hypothesized Frequent Monitoring → Anticipated
Negative Evaluation → Preference for Avatars Path (Experiment 2)

Predictor                          B     SE    t      p      95% LLCI  95% ULCI

Outcome: Anticipated Negative Evaluation
Constant                           2.93  .15   19.02  .000   2.62      3.23
Frequent Monitoring Condition      .96   .21   4.48   .000   .54       1.39

Outcome: Preference for Avatars
Constant                           1.41  .21   6.81   .000   .99       1.81
Anticipated Negative Evaluation    .26   .06   4.56   .000   .15       .38
Frequent Monitoring Condition      .19   .17   1.10   .27    -.15      .52

Direct and Indirect Effects
Direct effect of Frequent Monitoring Condition on Preference for Avatars:
B = .19, SE = .17, t = 1.10, p = .27, 95% CI [-.15, .52]
Total indirect effect of Frequent Monitoring Condition on Preference for Avatars
through Anticipated Negative Evaluation:
B = .26, Boot SE = .08, Boot 95% CI [.13, .45]
Personality Effects
I was also interested in exploring how individual-level personality differences affected
the relationship between monitoring frequency and preference for using avatars for monitoring.
Specifically, I anticipated that individual differences would interact with the need for monitoring
to affect preference for avatars.
Results of these exploratory analyses revealed that several personality factors affected
participants’ preference for using avatars for monitoring. When each of the Big Five personality
factors was included independently as a predictor, extraversion had a significant negative effect
(b = -.15, SE = .07, p = .02), conscientiousness had a significant negative effect (b = -.22, SE =
.08, p = .01), openness to experience had a significant negative effect (b = -.15, SE = .07, p =
.02), and neuroticism had a significant positive effect (b = .20, SE = .07, p = .004) on preference
for using avatars for monitoring. When controlling for the other Big Five personality measures,
only neuroticism retained its significant positive effect (b = .15, SE = .07, p = .037) on
preference for using avatars for monitoring. Conscientiousness had a marginal negative effect (b
= -.16, SE = .09, p = .08) on preference for using avatars for monitoring when controlling for the
other Big Five personality measures. There were no significant interactions of any of the Big
Five personality measures with monitoring frequency on preference for using avatars for
monitoring.
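These exploratory analyses follow a single pattern: regress the preference composite on each trait alone, then on all five traits simultaneously, then test each trait x condition interaction. A minimal sketch is below; the data frame and column names are hypothetical, carried over from the earlier ANOVA sketch, and the dominance motivation and need-for-belonging models described next follow the same template with the relevant column swapped in.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
traits = ["extraversion", "agreeableness", "conscientiousness",
          "neuroticism", "openness"]

# Simulated stand-in data; only the analysis structure is meant to match.
n = 164
df = pd.DataFrame({t: rng.normal(4.0, 1.0, n) for t in traits})
df["frequency"] = rng.choice(["minimal", "frequent"], n)
df["avatar_pref"] = 2.0 + 0.20 * df["neuroticism"] + rng.normal(0, 1.0, n)

# Each trait entered alone as a predictor ...
for t in traits:
    fit = smf.ols(f"avatar_pref ~ {t}", data=df).fit()
    print(f"{t}: b = {fit.params[t]:.2f}, p = {fit.pvalues[t]:.3f}")

# ... all five traits entered simultaneously ...
all_five = smf.ols("avatar_pref ~ " + " + ".join(traits), data=df).fit()

# ... and a trait x condition interaction test (shown here for neuroticism).
interaction = smf.ols("avatar_pref ~ neuroticism * C(frequency)", data=df).fit()
print(interaction.params)
```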
Next, I explored the role of dominance motivation in predicting people’s preference for
using avatars for monitoring. When I included dominance motivation as a predictor in the model,
I found that it had a significant negative effect (b = -.24, SE = .08, p = .004) on preference for
using avatars for monitoring. However, there was no significant interaction between dominance
motivation and monitoring frequency on preference for using avatars for monitoring (b = -.07,
SE = .08, p = .36).
Finally, I explored the role of need for belonging in predicting people’s preference for
using technology for monitoring. When I included need for belonging as a predictor in the
model, I found that it did not predict people’s preference for using avatars for monitoring (b =
.11, SE = .15, p = .46). Moreover, need for belonging did not moderate the relationship between
monitoring frequency and preference for using avatars for monitoring as indicated by the non-
significant interaction (b = .12, SE = .15, p = .43).
In sum, these results suggest that participants had a greater preference for using avatars
for monitoring in situations that call for frequent monitoring. This effect was mediated by
participants (in their roles as managers/team leaders) anticipating negative evaluations from their
subordinates/team members in such situations. The typicality of frequent monitoring in a given
context did not influence participants’ preference for using avatars for monitoring. I also found
that individuals who rated themselves highly on neuroticism were, in general, more likely to
prefer using avatars to monitor subordinates and that individuals who rated themselves highly on
conscientiousness, extraversion and openness, and individuals with a high dominance motivation
were, in general, less likely to prefer using avatars to monitor subordinates. Agreeableness and
need for belonging did not predict participants’ preference for using avatars for monitoring.
General Discussion
In this chapter, I addressed a number of questions related to when and why managers use
novel technological tools such as virtual reality in the form of computer avatars for monitoring. I
examined the role of both situational factors and personality differences in influencing managers’
decisions to use computer avatars for monitoring. The present findings indicate that, in situations
that call for frequent monitoring, managers prefer monitoring their subordinates using avatars
(Experiments 1 and 2) and that this preference is driven by the extent to which they anticipate
being evaluated negatively in those situations (Experiment 2). Thus, my results suggest that, at
least in some cases, managers use avatars in order to buffer themselves from negative evaluation.
I expected that managers’ preference for using avatars in frequent monitoring situations would
be moderated by the extent to which frequent monitoring was considered typical in a given
context. However, my results suggest that the typicality of frequent monitoring in a given context
influenced neither the extent of negative evaluation managers anticipated nor the relationship
between frequent monitoring and managers’ preference for using avatars (Experiment 2).
Finally, on exploring the role of individual level personality differences, I found that neuroticism
was positively related to preference for using avatars for monitoring while extraversion,
conscientiousness, openness to experience, and dominance motivation were all negatively related
to preference for using avatars for monitoring.
Contributions to Theory and Practice
The present research provides several contributions to both theory and practice. First, by
elucidating the psychological process that underlies managers’ preferences for using avatars for
monitoring, I seek to provide a clear psychological account for the increasing prevalence of
novel technological tools such as virtual reality in modern workplaces. In doing so, I move
beyond factors such as ease and convenience of use (e.g., Davis, 1989), individual attitudes and
social norms (e.g., Bonnefon, Shariff, & Rahwan, 2016; Venkatesh, Morris, & Ackerman, 2000), to
shed light on the role of an important psychological determinant of technology adoption –
anticipated negative evaluation.
Second, by highlighting the role of technology as a psychological buffer that reduces
anticipated negative evaluation, I contribute to the literature on psychological safety (e.g.,
Edmondson, 1999) by offering a novel perspective on how managers can use technology to feel
psychologically safe by effectively navigating evaluative concerns. Psychological safety is
experienced when one feels that one can express oneself freely and take interpersonal risks
without fear of negative consequences such as embarrassment, rejection or punishment (e.g.,
Edmondson, 1999; Kahn, 1990). The anticipation that one’s behaviors may engender potential
negative evaluation creates the sense that the context is psychologically unsafe for expressing
oneself freely. My findings suggest that individuals may be able to create a psychological buffer
to distance themselves from psychologically unsafe contexts by using technology to interact with
others. In this way, the use of technology can influence the extent of psychological safety
employees experience in a given context and, consequently, their likelihood of speaking up with
suggestions for organizational improvement (e.g., Burris, 2012; Detert & Burris, 2007) and
engaging in activities related to innovation and learning (e.g., Baer & Frese, 2003; Edmondson,
1999).
Third, I aim to contribute to the growing body of scientific knowledge in the emerging
area of psychology of technology (e.g., Epley, Schroeder, & Waytz, 2013; Waytz & Norton, 2014).
As technology is increasingly becoming a substitute for humanity, research in this area has
primarily focused on increasing people’s sense of connectedness to their social environments
through technology. For example, studies show that anthropomorphizing technology by ascribing
a human-like mind to it increases the extent to which people trust technology to competently
perform its function (Waytz, Heafner & Epley, 2014). In addition to engendering greater levels
of trust, technologies which incorporate a certain degree of humanness are known to elicit more
positive attitudes (e.g., Li, Kizilcec, Bailenson & Ju, 2016) and greater user enjoyment (e.g.,
Weibel, Wissmath, Habegger, Steiner & Groner, 2008). I aim to extend this line of work by
highlighting the importance of focusing on the role of technology as a distancing tool that can be
a useful psychological buffer in certain situations. In doing so, I attempt to shift the focus from
examining how technology can improve people’s sense of connectedness to their social
environments to exploring when and why it may be important for people to consider using
technology to psychologically distance themselves from their social environments.
As virtual reality becomes increasingly common in the workplace (Future Workplace
Study, 2016), the present research also has important implications for practice. My findings
suggest that these technologies can act as a psychological buffer and reduce evaluative concerns. This
insight may motivate the use of such technologies in organizations for recruitment, training,
teamwork, and performance evaluations to enable individuals to express themselves freely in
interpersonal interactions. My findings also have implications for designers and marketers of
novel technologies such as VR. Given that one of the reasons why people adopt these
technologies is to psychologically buffer themselves from undesirable situations, designers and
marketers may be well served to highlight the distancing aspect of these technologies in addition
to focusing on the immersive experiences that they offer. As the line between technology and
humanity continues to blur in fields such as business, medicine, law
enforcement, and the armed forces, my findings have implications for decision makers who are
faced with decisions about when it may be permissible to replace humans with technology to
perform certain functions. As technology serves as a psychological buffer and allows individuals
to express themselves more freely, decision makers can consider using technological tools in
place of humans in situations where open communication can be beneficial. For example, using
technological tools such as computer avatars in medical or psychological interviews can
encourage people to be more honest and disclose more information about themselves, without
evaluative concerns. Using technological tools in these situations can enhance the quality of
interpersonal interactions, while also offering economic benefits by reducing costs.
Limitations and Future Directions
The present research offers numerous opportunities for future research. The proposition
that managers hide behind technology stems from the notion that interactions with subordinates
can be awkward and that the preference for interacting through technology reflects a desire to
manage that awkwardness. If so, it suggests an interesting paradox in technology preference: the
technology choice could lead to additional awkwardness, the very thing it was meant to
avoid. For example, using an avatar may create the impression that the manager is avoiding the
interaction and the problem. This, in turn, may exacerbate the awkwardness, producing a spiral
of awkwardness and social distance afforded by hiding behind the technology. The paradox is
born of the effort to manage awkwardness, an effort that ultimately exacerbates it.
Future studies may more directly manipulate the degree of expected awkwardness
in the manager-subordinate interaction. With a higher degree of awkwardness (e.g., having to give
pointed, negative performance feedback), there should be greater reliance on distancing
technology.
In my studies, I examined the relationship between monitoring frequency and preference
for using avatars for monitoring. Although it was important to use the same type of technology
across studies so that I could control for unique technological features, this choice prevented me
from exploring important characteristics of the technology that can act as moderators of or boundary
conditions for the current findings. For example, an important characteristic of this type of
technology that may influence my findings is the agency of virtual humans. Research in this area
specifies the distinctive behavioral impact of two types of virtual humans – embodied agents and
avatars – distinguished by their level of agency (e.g., Bailenson, Blascovich, Beall, & Loomis,
2003). Agency, in this context, refers to the extent to which users believe they are interacting
with another human being (Guadagno, Blascovich, Bailenson & McCall, 2007). Avatars –
virtual, real-time representations of humans – are considered high in agency, and embodied
agents – virtual representations controlled entirely by computers – are considered low in agency
(Bailenson et al., 2003). Future research can explore how agency of virtual humans may
influence the extent to which individuals feel buffered from evaluative concerns when using such
technologies.
In my experiments, I limited the scope of participants’ autonomy with respect to the
number of times they monitored their subordinates. Although this allowed me to systematically
vary monitoring frequency between experimental conditions, it would be interesting to explore how
individual discretion in monitoring frequency affects the extent to which people anticipate being
negatively evaluated in a given situation and consequently, their preferences for using
technology for monitoring.
Conclusion
The influx of new technologies in the workplace is dramatically changing how employees
interact with each other. With a majority of employees believing that face-to-face interactions
will become obsolete in the near future due to the integration of novel technologies in the
workplace (Future Workplace Study, 2016), it is imperative to understand the psychological
determinants of technology use in work contexts. The present research identifies anticipated
negative evaluation as an important psychological factor that drives managers’ preference for
using avatars for monitoring and suggests that technology can act as a psychological buffer in
evaluative situations. As organizations continue to spend billions of dollars on creating smart
offices with novel technologies, it is my hope that this work will allow organizational decision
makers to consider the psychological underpinnings of technology use in the workplace.
CHAPTER 4
CONCLUSION
This dissertation advances knowledge regarding the psychological impact of novel
technologies on individuals operating in social contexts, especially organizations. Leveraging
theory on how individuals experience social situations, this dissertation proposes and tests the
central idea that technology reduces people’s social evaluation concerns, thereby attenuating the
evaluative aspect of social situations while highlighting the informational aspect. In Chapter 1, I
outline this broader theoretical argument and describe how this work draws attention to how
technology influences individuals’ psychological experiences in social contexts. In Chapter 2, I
test this theoretical argument in the context of behavior tracking products, one of the most
widespread, popular, and controversial technologies in recent times. In this chapter, I specifically
focus on understanding the psychological process that drives people to adopt behavior tracking
products. I build on the idea that technology reduces people’s concerns about social evaluation to
argue that this leads to increased anticipated autonomy when using technology-operated behavior
tracking products (relative to human-operated behavior tracking products) and, consequently,
increases people’s willingness to adopt those products. In Chapter 3, I test the broader theoretical
argument that technology reduces social evaluation concerns in the context of virtual reality
(VR), another popular and impactful new technology that is becoming increasingly prevalent in
organizations. Considering the emergence of VR as a management tool, this chapter explores
when and why managers prefer virtual interactions through
VR over face-to-face interactions. In this chapter, I further unpack the core theoretical argument
that technology reduces social evaluation concerns to suggest that reduced social presence leads
to this effect. Examining this effect in a monitoring context, I build on this idea to argue that
technology allows for psychological distancing in uncomfortable social situations by acting as a
psychological buffer. Taken together, this dissertation provides a novel theoretical perspective
for understanding the social psychological functions of technology and has important
implications for various domains.
Implications for Monitoring
Monitoring is a fundamental aspect of management (Komaki, 1986; Niehoff &
Moorman, 1993; Tenbrunsel & Messick, 1999) and has an impact on several key managerial and
employee outcomes such as perceived managerial effectiveness (Komaki, 1986), employee
performance (Chalykoff & Kochan, 1989), individual accountability (Brewer & Ridgway, 1998),
organizational citizenship behaviors and justice perceptions (Niehoff & Moorman, 1993).
Technology has significantly influenced monitoring processes and outcomes in organizations.
Indeed, recent advances in technology have dramatically transformed both the nature and scope
of monitoring. New technologies increase the scope of monitoring by enabling managers to use
fine-grained, digital tools that gather a wide variety of information about employees such as their
physical movements and location, proximity to other people, facial expressions, conversational
patterns, and even moods and emotions (e.g., Kim, McFee, Olguin, Waber & Pentland, 2012).
Novel technologies such as behavior tracking products can influence not only the scope
of monitoring (i.e., the amounts and types of information that can be collected about people’s
behaviors) but also the very nature of monitoring. In this dissertation, I posit that technology
reduces social evaluation concerns and attenuates the evaluative aspect of social situations. This
central idea has important implications for monitoring. Extrapolating this idea to the domain of
monitoring, one can anticipate that automated monitoring through technology will likely be
experienced as psychologically different from human-based monitoring. Particularly,
technology-based monitoring will switch individuals’ focus from the evaluative aspect to the
informational aspect of monitoring. Theoretically, this idea offers a novel way to conceptualize
the role of technology in monitoring. Extant research on monitoring suggests that people are
inherently opposed to being monitored (Chalykoff & Kochan, 1989; Kulik & Ambrose, 1993).
However, by highlighting how technology-based monitoring can be psychologically different
from human-based monitoring, this dissertation offers important insights about when and why
people may show an increased willingness to be monitored and how technology can influence
monitoring decisions. This work suggests that technology-based monitoring may be perceived as
informational and autonomy-supportive owing to reduced social evaluation concerns, while
human-based monitoring may be perceived as controlling and autonomy-impinging. In doing so,
this work emphasizes how the medium through which monitoring occurs can influence whether it
is perceived as autonomy-enhancing or autonomy-impinging.
Reducing social evaluation through technology may also influence subordinates’ attitudes
toward monitoring. Although monitoring is a key component of organizational control, close
supervision of subordinates through monitoring is known to reduce perceived autonomy and
sense of self-responsibility (Deci, 1975). Studies show that monitoring discourages employees
from engaging in extra-role organizational citizenship behaviors (i.e., behaviors that are above
and beyond one’s roles and responsibilities and have a positive effect on the organization) as
they might believe that those behaviors will not be evaluated positively by their managers
(Niehoff & Moorman, 1993). Technology can mitigate these negative effects of monitoring by
reducing employees’ concerns about social evaluation associated with monitoring. This has
important implications for the extent to which employees have favorable attitudes toward
technologies used for monitoring. Thus, behavior-tracking products may be more positively
received by subordinates when they know that these devices can reduce the experience of social
evaluation that is prevalent in direct human-based monitoring by managers. Finally, employees
may show a greater willingness to receive performance feedback from technological products
relative to human managers, due to reduced social evaluation concerns.
Implications for Communication
In the context of communication, reducing social evaluation through technology can have
important implications for both managers and subordinates. Given that technology can act as a
psychological buffer that reduces evaluative concerns, organizations can implement novel
technologies such as virtual reality to enable individuals to express themselves freely in
interpersonal interactions. In fact, communication research suggests that technology is quite
effective in reducing communication apprehension. For example, shy individuals experience less
communication apprehension when they interact via virtual reality in virtual worlds (Hammick &
Lee, 2014). Compared to face-to-face interactions, virtual environments are described as quite
effective in reducing people’s likelihood of detecting negative or inhibitory feedback cues from
others (Stritzke, Nguyen & Durkin, 2004). Reducing employees’ communication apprehension
and concerns about negative evaluation will be critical for ensuring they speak up and offer
feedback and suggestions intended to improve organizational functioning. This is important,
given that employee voice behavior is an important component of effective organizations (Detert
& Burris, 2007).
For managers, novel technologies such as virtual reality reduce concerns about social
evaluation when they engage in behaviors that may be perceived negatively by their
subordinates. This is particularly evident in the communication context. A survey of 616
managers conducted by Interact (a communication consultancy) and Harris Poll in 2016 revealed
that 69% of managers were uncomfortable communicating with their employees (Interact Report,
2015; Solomon, 2016). Novel technologies like VR/AR can be particularly helpful in buffering
managers from the discomfort associated with communicating with employees. Moreover,
research suggests that technology can have a positive effect on subordinates by reducing
evaluation apprehension and engendering flexibility in communication between managers and
subordinates (Avolio & Kahai, 2003; Kahai, Sosik, & Avolio, 2003). Furthermore, typical
impression management tactics that people use in face-to-face interactions to manage or avoid
negative evaluation are minimized when interacting via technology (DeRosa, Hantula, Kock &
D’Arcy, 2004). Therefore, when interacting via novel technologies like virtual reality, managers
may more easily facilitate coordination of work without having to pay attention to interpersonal
impression management behaviors. Thus, reduced social evaluation through technology has
several important implications for communication for both managers and subordinates in
organizations.
Implications for the Abandonment of Novel Technologies
Novel technologies such as behavior-tracking products are often quickly abandoned by users,
diminishing the potential benefits that can be gained with ongoing use. For example, a recent
survey of 9,592 individuals from the United States, Australia and the United Kingdom revealed
that 30% of the users of wearable fitness trackers abandon them in the first six months (Gartner,
2016b). This rapid decline in the use of behavior-tracking technologies is a puzzling
phenomenon that warrants further attention.
Technological products may be abandoned for many reasons: for example, users are likely to
abandon a product that they find difficult to use, no longer useful, or boring. A study on assistive
technologies for individuals with disabilities revealed that 29.3% of all assistive devices were
abandoned by users and the most common reasons for abandonment were a lack of consideration
of user opinion in selection, easy device procurement, poor device performance, and changes in
user needs or priorities (Phillips & Zhao, 1993).
Most of the common reasons cited for the abandonment of technological products pertain
to objective aspects of the product itself while ignoring subjective psychological experiences of
the users. I suggest that the potential for technology to reduce social evaluation concerns is an
important factor that can explain people’s abandonment of novel technologies. When people use
technological products, they do not feel negatively evaluated for discontinuing use. As a result,
there is no psychological cost to quitting the technology. That is, reducing concerns about
negative evaluation through technology also reduces people’s commitment toward using the
technology as there are no negative psychological or social effects associated with abandoning
the technology in such cases.
Implications for Goal Pursuit
In addition to being psychologically aversive (e.g., Schlenker & Leary, 1982) and
physiologically stressful (e.g., Dickerson & Kemeny, 2004), the prospect of negative social
evaluation in a situation can also affect how we select, perceive, and pursue goals. Studies show
that when pursuing goals related to performance, people are motivated by a need to demonstrate
competence either by seeking favorable or avoiding negative evaluations from others and that
these motivations have distinct implications for how individuals choose goals and pursue them
(Elliot & Harackiewicz, 1996; Middleton & Midgley, 1997). Thus, when pursuing goals related
to performance (especially in the presence of others), people’s goals are oriented toward
avoiding negative judgments or toward obtaining positive judgments.
Beyond influencing people’s motivations during goal pursuit, social situations also
impact the salience of people’s goals and their commitment toward attaining those goals. For
example, Shah (2003) found that even mere mental representations of significant others
increased the salience of the goals to which they are closely associated and motivated individuals
to persist in attaining those goals. Similarly, Brockner, Rubin and Lang (1981) found that the
presence of an audience during goal pursuit can make individuals feel compelled to persist and
continually invest resources toward attaining the goal in order to save face and avoid negative
evaluations (even when the likelihood of goal attainment is low). Thus, pursuing goals in the
presence of others compels individuals to persist in attaining those goals to avoid the likelihood
of being negatively evaluated in that social situation. Given that technology reduces concerns
about social evaluation, employees may be more likely to slack at work or expend less effort
when they know that they are monitored solely through technology (e.g., behavior-tracking
products).
Implications for Privacy
Privacy is an important antecedent condition for individuals to maintain a positive social
identity as it pertains to controlling which groups and individuals one interacts with and how one
is viewed by them (Alge, 2001). One of the main benefits of privacy is anonymity, which allows
people to do what they want to do without fear of social evaluation (Pedersen, 1997). In
situations where people feel less concerned about social evaluation, such as when using
technology, they are likely to feel more in control of their social identity and, therefore, their
sensitivity to privacy concerns is likely to decrease. In fact, studies show that increased
perceived control decreases people’s concerns about privacy and increases their likelihood of
disclosing sensitive personal information (Brandimarte, Acquisti & Loewenstein, 2013).
According to a recent survey assessing Americans’ attitudes about privacy, security and
surveillance, 93% of respondents reported that being in control of who can get information about
them is very important, 90% of respondents reported that controlling what information is
collected about them is important and 55% of respondents supported the idea of online
anonymity for certain activities (Pew Research Center, 2015). A key underlying motivation for
seeking control over both the content of information that others can access, and the audience that
receives this information pertains to concerns about being evaluated (potentially negatively) by
others. Similarly, desiring anonymity also pertains to avoiding negative evaluation by others.
Thus, when technology reduces concerns about social evaluation, people may pay less attention
to privacy threats and may be more likely to divulge personal information through
technology (compared to face-to-face interactions).
Concluding Thoughts
As modern organizations continue to be transformed by new technologies, employees
will work in a digital mesh of intelligent systems that can act autonomously. Artificial
intelligence and machine learning will encompass systems that learn, adapt and function
autonomously. These systems will have the potential to drive digital innovation in several
business areas. Virtual personal assistants may become more prevalent in the workplace and
reduce employees’ workloads by enabling more efficient coordination. Autonomous robots in the
workplace may help make work processes more efficient by performing tasks that are difficult or
dangerous without creating liabilities. Entire businesses may be created on digital technology
platforms with a fully digital workforce.
Technology can reduce concerns about social evaluation. However, we know that social
evaluation has both benefits and downsides in the workplace. Therefore, it will be critical for
organizations to consider the implications for social evaluation when deciding to integrate novel
technologies in their workplaces. Organizations must carefully consider how the characteristics
of their workforce, their organizational culture, and the nature of tasks influence the pertinence
of social evaluation in a given situation and choose technological solutions appropriately, based
on these considerations.
REFERENCES
ABI Research. (2014, August 20). More Than 30 Billion Devices Will Drive Wireless Connected
Devices to 40.9 Billion in 2020. Retrieved from: https://www.abiresearch.com/press/the-
internet-of-things-will-drive-wireless-connect/.
ABI Research. (2016, September 29). New Workplace Wearables Bridge Communication Gap
Between Employees and Systems. Retrieved from:
https://www.abiresearch.com/press/new-workplace-wearables-bridge-communication-
gap-b/.
Acquisti, A., John, L. K., & Loewenstein, G. (2012). The impact of relative standards on the
propensity to disclose. Journal of Marketing Research, 49(2), 160-174.
Adams, J. S. (1976). The structure and dynamics of behavior in organizational boundary
roles. Handbook of Industrial and Organizational Psychology, 1175–1199.
Ajzen, I., & Madden, T. J. (1986). Prediction of goal-directed behavior: Attitudes, intentions, and
perceived behavioral control. Journal of Experimental Social Psychology, 22(5), 453-
474.
Alge, B. J. (2001). Effects of computer surveillance on perceptions of privacy and procedural
justice. Journal of Applied Psychology, 86(4), 797.
Alge, B. J., Ballinger, G. A., & Green, S. G. (2004). Remote control: Predictors of electronic
monitoring intensity and secrecy. Personnel Psychology, 57(2), 377-410.
Amabile, T. M., DeJong, W., & Lepper, M. R. (1976). Effects of externally imposed deadlines
on subsequent intrinsic motivation. Journal of Personality and Social Psychology, 34(1),
92.
Amichai-Hamburger, Y. (2002). Internet and personality. Computers in Human Behavior, 18(1),
1-10.
Amichai-Hamburger, Y., Wainapel, G., & Fox, S. (2002). “On the Internet no one knows I’m an
introvert”: Extroversion, neuroticism, and Internet interaction. CyberPsychology &
Behavior, 5(2), 125-128.
Amichai-Hamburger, Y., & Ben-Artzi, E. (2003). Loneliness and Internet use. Computers in
Human Behavior, 19(1), 71-80.
Amiel, T., & Sargent, S. L. (2004). Individual differences in Internet usage motives. Computers
in Human Behavior, 20(6), 711-726.
Ang, S., Cummings, L. L., Straub, D. W., & Earley, P. C. (1993). The effects of information
technology and the perceived mood of the feedback giver on feedback
seeking. Information Systems Research, 4(3), 240-261.
Anicich, E. M., Fast, N. J., Halevy, N., & Galinsky, A. D. (2015). When the bases of social
hierarchy collide: Power without status drives interpersonal conflict. Organization
Science, 27(1), 123-140.
Avolio, B. J., & Kahai, S. S. (2003). Adding the “E” to E-leadership: How it may impact your
leadership. Organizational Dynamics, 31(4), 325-338.
Baer, M., & Frese, M. (2003). Innovation is not enough: Climates for initiative and
psychological safety, process innovations, and firm performance. Journal of
Organizational Behavior, 24(1), 45-68.
Bailenson, J. N., Blascovich, J., Beall, A. C., & Loomis, J. M. (2003). Interpersonal distance in
immersive virtual environments. Personality and Social Psychology Bulletin, 29(7), 819-
833.
Bailenson, J. N., & Blascovich, J. (2004). Avatars. In Encyclopedia of Human-Computer
Interaction. Berkshire Publishing Group.
Behrend, T., Toaddy, S., Thompson, L. F., & Sharek, D. J. (2012). The effects of avatar
appearance on interviewer ratings in virtual employment interviews. Computers in
Human Behavior, 28(6), 2128-2133.
Bessière, K., Ellis, J. B., & Kellogg, W. A. (2009, April). Acquiring a professional Second Life:
Problems and prospects for the use of virtual worlds in business. In CHI'09 Extended
Abstracts on Human Factors in Computing Systems (pp. 2883-2898). ACM.
Blanck, P. D., Reis, H. T., & Jackson, L. (1984). The effects of verbal reinforcement of intrinsic
motivation for sex-linked tasks. Sex Roles, 10(5-6), 369-386.
Bohns, V. K., & Flynn, F. J. (2010). “Why didn’t you just ask?” Underestimating the discomfort
of help-seeking. Journal of Experimental Social Psychology, 46(2), 402-409.
Bonnefon, J. F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous
vehicles. Science, 352(6293), 1573-1576.
Brandimarte, L., Acquisti, A., & Loewenstein, G. (2013). Misplaced confidences: Privacy and
the control paradox. Social Psychological and Personality Science, 4(3), 340-347.
Bresman, H. & Rao,V.D. (2017, August 25). A Survey of 19 Countries Shows How Generations
X, Y, and Z Are – and Aren’t – Different. Retrieved from: https://hbr.org/2017/08/a-
survey-of-19-countries-shows-how-generations-x-y-and-z-are-and-arent-different.
Brewer, N., & Ridgway, T. (1998). Effects of supervisory monitoring on productivity and quality
of performance. Journal of Experimental Psychology: Applied, 4(3), 211.
Brockner, J., Rubin, J. Z., & Lang, E. (1981). Face-saving and entrapment. Journal of
Experimental Social Psychology, 17(1), 68-79.
Brown, B. R., & Garland, H. (1971). The effects of incompetency, audience acquaintanceship,
and anticipated evaluative feedback on face-saving behavior. Journal of Experimental
Social Psychology, 7(5), 490-502.
Buhrmester, M., Kwang, T., & Gosling, S. D. (2011). Amazon's Mechanical Turk: A new source
of inexpensive, yet high-quality, data?. Perspectives on Psychological Science, 6(1), 3-5.
Burris, E. R. (2012). The risks and rewards of speaking up: Managerial responses to employee
voice. Academy of Management Journal, 55(4), 851-875.
Butt, S., & Phillips, J. G. (2008). Personality and self reported mobile phone use. Computers in
Human Behavior, 24(2), 346-360.
Cain, M. (2016, November 2). Top 10 emerging technologies in the digital workplace. Retrieved
from: https://www.forbes.com/sites/gartnergroup/2016/11/02/top-10-emerging-
technologies-in-the-digital-workplace/#1af99ffc1e48.
Cassidy, T., & Lynn, R. (1989). A multifactorial approach to achievement motivation: The
development of a comprehensive measure. Journal of Occupational Psychology.
Chalykoff, J., & Kochan, T. A. (1989). Computer-aided monitoring: Its influence on employee
job satisfaction and turnover. Personnel Psychology, 42(4), 807-834.
Chandler, S. (2017, September 14). When VR Training Makes the Job Look Better than It Is.
Retrieved from: https://www.wired.com/story/when-vr-training-makes-the-job-look-
better-than-it-is/.
Chapman, D. S., Uggerslev, K. L., & Webster, J. (2003). Applicant reactions to face-to-face and
technology-mediated interviews: A field investigation. Journal of Applied
Psychology, 88(5), 944.
Chester, J. (2017, August 29). New Report: Health Wearable Devices Pose New Consumer and
Privacy Risks. Retrieved from: https://www.democraticmedia.org/CDD-Wearable-
Devices-Big-Data-Report
Cho, Y., & Fast, N. J. (2012). Power, defensive denigration, and the assuaging effect of gratitude
expression. Journal of Experimental Social Psychology, 48(3), 778-782.
Colbert, A., Yee, N., & George, G. (2016). The digital workforce and the workplace of the
future. Academy of Management Journal, 59(3), 731-739.
Colquitt, J. A., Hollenbeck, J. R., Ilgen, D. R., LePine, J. A., & Sheppard, L. (2002). Computer-
assisted communication and team decision-making performance: The moderating effect
of openness to experience. Journal of Applied Psychology, 87(2), 402.
Correa, T., Hinsley, A. W., & De Zuniga, H. G. (2010). Who interacts on the Web?: The
intersection of users’ personality and social media use. Computers in Human
Behavior, 26(2), 247-253.
Corsello, J. (2013, January 1). What the Internet of Things Will Bring to the Workplace.
Retrieved from:
https://www.wired.com/insights/2013/11/what-the-internet-of-things-will-bring-to
theworkplace/.
Curtis, R. C., & Miller, K. (1986). Believing another likes or dislikes you: Behaviors making the
beliefs come true. Journal of Personality and Social Psychology, 51(2), 284.
Darlington, R.B., & Hayes, A.F. (2017). Regression analysis and linear models: Concepts,
applications and implementation. New York, NY: Guilford Press.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of
information technology. MIS quarterly, 319-340.
DeCharms, R. (1968). Personal causation: The internal affective determinants of behavior. New
York: Academic Press.
Deci, E. L. (1971). Effects of externally mediated rewards on intrinsic motivation. Journal of
Personality and Social Psychology, 18(1), 105.
Deci, E. L., & Cascio, W. F. (1972, April). Changes in intrinsic motivation as a function of
negative feedback and threats. Paper presented at the meeting of the Eastern
Psychological Association, Boston, MA.
Deci, E.L. (1975). Intrinsic Motivation. New York, NY: Plenum.
Deci, E. L., & Ryan, R. M. (1985). The general causality orientations scale: Self-determination
in personality. Journal of Research in Personality, 19(2), 109-134.
Deci, E. L., & Ryan, R. M. (1987). The support of autonomy and the control of
behavior. Journal of Personality and Social Psychology, 53(6), 1024.
Deci, E. L., Connell, J. P., & Ryan, R. M. (1989). Self-determination in a work
organization. Journal of Applied Psychology, 74(4), 580.
Deci, E. L., Olafsen, A. H., & Ryan, R. M. (2017). Self-determination theory in work
organizations: The state of a science. Annual Review of Organizational Psychology and
Organizational Behavior, 4, 19-43.
DeRosa, D. M., Hantula, D. A., Kock, N., & D'Arcy, J. (2004). Trust and leadership in virtual
teamwork: A media naturalness perspective. Human Resource Management, 43(2-3),
219-232.
Detert, J. R., & Burris, E. R. (2007). Leadership behavior and employee voice: Is the door really
open?. Academy of Management Journal, 50(4), 869-884.
Dickerson, S. S., & Kemeny, M. E. (2004). Acute stressors and cortisol responses: a theoretical
integration and synthesis of laboratory research. Psychological Bulletin, 130(3), 355.
Edmondson, A. (1999). Psychological safety and learning behavior in work
teams. Administrative Science Quarterly, 44(2), 350-383.
Elliot, A. J., & Harackiewicz, J. M. (1996). Approach and avoidance achievement goals and
intrinsic motivation: A mediational analysis. Journal of Personality and Social
Psychology, 70, 461-475.
Epley, N., Schroeder, J., & Waytz, A. (2013). Motivated mind perception: Treating pets as
people and people as animals. In Objectification and (De)Humanization (127 – 152).
Springer New York.
Fast, N. J., & Chen, S. (2009). When the boss feels inadequate: Power, incompetence, and
aggression. Psychological Science, 20(11), 1406-1413.
Fast, N. J., Halevy, N., & Galinsky, A. D. (2012). The destructive nature of power without
status. Journal of Experimental Social Psychology, 48(1), 391-394.
Fast, N. J., Burris, E. R., & Bartel, C. A. (2014). Managing to stay in the dark: Managerial self-
efficacy, ego defensiveness, and the aversion to employee voice. Academy of
Management Journal, 57(4), 1013-1034.
Faul, F., Erdfelder, E., Buchner, A., & Lang, A. G. (2009). Statistical power analyses using G*
Power 3.1: Tests for correlation and regression analyses. Behavior Research
Methods, 41(4), 1149-1160.
Flynn, F. J., & Lake, V. K. (2008). If you need help, just ask: Underestimating compliance with
direct requests for help. Journal of Personality and Social Psychology, 95(1), 128.
Fong, K., & Mar, R. A. (2015). What does my avatar say about me? Inferring personality from
avatars. Personality and Social Psychology Bulletin, 41(2), 237-249.
Fortson, D. (2017, July 30). Virtual Reality Pioneer Runs Out of Cash. Retrieved from:
https://www.thetimes.co.uk/article/virtual-reality-pioneer-runs-out-of-cash-sx8thpq35.
Future Workplace Study. (2016, July 18). Dell and Intel Future Workforce Study Provides Key
Insights into Technology Trends Shaping the Modern Global Workplace. Retrieved from:
http://www.dell.com/learn/us/en/uscorp1/press-releases/2016-07-18-future-workforce-
study-provides-key-insights.
Gajendran, R. S., & Joshi, A. (2012). Innovation in globally distributed teams: The role of LMX,
communication frequency, and member influence on team decisions. Journal of Applied
Psychology, 97(6), 1252.
Garland, H., & Brown, B. R. (1972). Face-saving as affected by subjects' sex, audiences' sex and
audience expertise. Sociometry, 280-289.
Gartner. (2013). Internet of Things. Retrieved from: http://www.gartner.com/it-glossary/internet-
of-things/.
Gartner. (2013). Virtual Reality (VR). Retrieved from: www.gartner.com/it-glossary/vr-virtual-
reality/.
Gartner. (2016, February 2). Gartner Says Worldwide Wearable Devices Sales to Grow 18.4
Percent in 2016. Retrieved from: http://www.gartner.com/newsroom/id/3198018.
Gartner. (2016b, December 7). Gartner Survey Shows Wearable Devices Need to be More
Useful. Retrieved from: http://www.gartner.com/newsroom/id/353711.
Gartner. (2017, October 4). Gartner identifies the top ten strategic technology trends for 2018.
Retrieved from: www.gartner.com/newsroom/id/3812063.
Goffman, E. (1959). The moral career of the mental patient. Psychiatry, 22(2), 123-142.
Golden, T. D., Veiga, J. F., & Dino, R. N. (2008). The impact of professional isolation on
teleworker job performance and turnover intentions: Does time spent teleworking,
interacting face-to-face, or having access to communication-enhancing technology
matter?. Journal of Applied Psychology, 93(6), 1412.
Greene, J. D. (2016). Our driverless dilemma. Science, 352(6293), 1514-1515.
Griffith, T. L. (2011). The plugged-in manager: Get in tune with your people, technology, and
organization to thrive. San Francisco, CA: Jossey-Bass.
Guadagno, R. E., Blascovich, J., Bailenson, J. N., & Mccall, C. (2007). Virtual humans and
persuasion: The effects of agency and behavioral realism. Media Psychology, 10(1), 1-22.
Guadagno, R. E., Okdie, B. M., & Eno, C. A. (2008). Who blogs? Personality predictors of
blogging. Computers in Human Behavior, 24(5), 1993-2004.
Hammick, J. K., & Lee, M. J. (2014). Do shy people feel less communication apprehension
online? The effects of virtual reality on the relationship between personality
characteristics and communication outcomes. Computers in Human Behavior, 33, 302-
310.
Harris, P. R., & Napper, L. (2005). Self-affirmation and the biased processing of threatening
health-risk information. Personality and Social Psychology Bulletin, 31(9), 1250-1263.
Hayes, A. F. (2012). PROCESS: A versatile computational tool for observed variable mediation,
moderation, and conditional process modeling [White paper]. Retrieved from
http://www.afhayes.com/public/process2012.pdf
Hayes, A. F. (2013). Introduction to mediation, moderation, and conditional process analysis: A
regression-based approach. New York, NY: Guilford Press.
Hershatter, A., & Epstein, M. (2010). Millennials and the world of work: An organization and
management perspective. Journal of Business and Psychology, 25(2), 211-223.
Hoch, J. E., & Kozlowski, S. W. (2014). Leading virtual teams: Hierarchical leadership,
structural supports, and shared team leadership. Journal of Applied Psychology, 99(3),
390.
Hughes, D. J., Rowe, M., Batey, M., & Lee, A. (2012). A tale of two sites: Twitter vs. Facebook
and the personality predictors of social media usage. Computers in Human
Behavior, 28(2), 561-569.
Hunt. (2015, February 5). Experts: Wearable tech tests our privacy limits. USA Today.
Retrieved from: http://www.usatoday.com/story/tech/2015/02/05/tech-wearables-privacy/22955707/.
IDC. (2017, February 27). Worldwide Spending on Augmented and Virtual Reality Forecast to
Reach $13.9 Billion in 2017, According to IDC. Retrieved from:
http://www.idc.com/getdoc.jsp?containerId=prUS42331217.
Interact Report. (2015, February). New Interact Report: Many leaders shrink from straight talk
with employees. Retrieved from: http://interactauthentically.com/new-interact-report-
many-leaders-shrink-from-straight-talk-with-employees/.
Jackson, J. M., & Latané, B. (1981). All alone in front of all those people: Stage fright as a
function of number and type of co-performers and audience. Journal of Personality and
Social Psychology, 40(1), 73.
Jamieson, J. P., & Mendes, W. B. (2016). Social stress facilitates risk in youths. Journal of
Experimental Psychology: General, 145(4), 467.
Joinson, A. N. (2001). Self-disclosure in computer-mediated communication: The role of self-
awareness and visual anonymity. European Journal of Social Psychology, 31(2), 177-
192.
Joinson, A. N. (2004). Self-esteem, interpersonal risk, and preference for e-mail to face-to-face
communication. CyberPsychology & Behavior, 7(4), 472-478.
Kahai, S. S., Sosik, J. J., & Avolio, B. J. (2003). Effects of leadership style, anonymity, and
rewards on creativity-relevant processes and outcomes in an electronic meeting system
context. The Leadership Quarterly, 14(4), 499-524.
Kahn, W. A. (1990). Psychological conditions of personal engagement and disengagement at
work. Academy of Management Journal, 33(4), 692-724.
Kang, R., Dabbish, L., Fruchter, N., & Kiesler, S. (2015). “My data just goes everywhere”: User
mental models of the Internet and implications for privacy and security. In Proceedings
of the Symposium on Usable Privacy and Security (SOUPS) 2015.
Kiesler, S., Siegel, J., & McGuire, T. W. (1984). Social psychological aspects of computer-
mediated communication. American Psychologist, 39(10), 1123.
Kim, T., McFee, E., Olguin, D. O., Waber, B., & Pentland, A. (2012). Sociometric badges:
Using sensor technology to capture new forms of collaboration. Journal of
Organizational Behavior, 33(3), 412-427.
Komaki, J. L. (1986). Toward effective supervision: An operant analysis and comparison of
managers at work. Journal of Applied Psychology, 71(2), 270.
Komaki, J. L., Zlotnick, S., & Jensen, M. (1986). Development of an operant-based taxonomy
and observational index of supervisory behavior. Journal of Applied Psychology, 71(2),
260.
Koslov, K., Mendes, W. B., Pajtas, P. E., & Pizzagalli, D. A. (2011). Asymmetry in resting
intracortical activity as a buffer to social threat. Psychological Science, 22(5), 641-649.
Kulik, C. T., & Ambrose, M. L. (1993). Category-based and feature-based processes in
performance appraisal: Integrating visual and computerized sources of performance
data. Journal of Applied Psychology, 78(5), 821.
Kushlev, K., & Dunn, E. W. (2015). Checking email less frequently reduces stress. Computers in
Human Behavior, 43, 220-228.
Kushlev, K., Proulx, J. D., & Dunn, E. W. (2017). Digitally connected, socially disconnected:
The effects of relying on technology rather than other people. Computers in Human
Behavior, 76, 68-74.
Landers, R. N., & Lounsbury, J. W. (2006). An investigation of Big Five and narrow personality
traits in relation to Internet usage. Computers in Human Behavior, 22(2), 283-293.
Langfred, C. W. (2004). Too much of a good thing? Negative effects of high trust and individual
autonomy in self-managing teams. Academy of Management Journal, 47(3), 385-399.
Leary, M. R. (1983). A brief version of the Fear of Negative Evaluation Scale. Personality and
Social Psychology Bulletin, 9(3), 371-375.
Leary, M. R., & Kowalski, R. M. (1990). Impression management: A literature review and two-
component model. Psychological Bulletin, 107(1), 34.
Leary, M. R., & Baumeister, R. F. (2000). The nature and function of self-esteem: Sociometer
theory. Advances in Experimental Social Psychology, 32, 1-62.
Leary, M. R., Kelly, K. M., Cottrell, C. A., & Schreindorfer, L. S. (2013). Construct validity of
the need to belong scale: Mapping the nomological network. Journal of Personality
Assessment, 95(6), 610-624.
Lewis, M. W., Welsh, M. A., Dehler, G. E., & Green, S. G. (2002). Product development
tensions: Exploring contrasting styles of project management. Academy of Management
Journal, 45(3), 546-564.
Li, J., Kizilcec, R., Bailenson, J., & Ju, W. (2016). Social robots and virtual agents as lecturers
for video instruction. Computers in Human Behavior, 55, 1222-1230.
Lucas, G. M., Gratch, J., King, A., & Morency, L. P. (2014). It’s only a computer: Virtual
humans increase willingness to disclose. Computers in Human Behavior, 37, 94-100.
Magee, J. C., & Galinsky, A. D. (2008). Social hierarchy: The self-reinforcing nature of power
and status. Academy of Management Annals, 2(1), 351-398.
Marcus, B., Machilek, F., & Schütz, A. (2006). Personality in cyberspace: Personal web sites as
media for personality expressions and impressions. Journal of Personality and Social
Psychology, 90(6), 1014.
Marr, J. C., & Thau, S. (2014). Falling from great (and not-so-great) heights: How initial status
position influences performance after status loss. Academy of Management
Journal, 57(1), 223-248.
Maruping, L. M., & Agarwal, R. (2004). Managing team interpersonal processes through
technology: A task-technology fit perspective. Journal of Applied Psychology, 89(6),
975.
McCrae, R. R., & Costa Jr, P. T. (1997). Personality trait structure as a human
universal. American Psychologist, 52(5), 509.
McLeod, P. L., Baron, R. S., Marti, M. W., & Yoon, K. (1997). The eyes have it: Minority
influence in face-to-face and computer-mediated group discussion. Journal of Applied
Psychology, 82(5), 706.
Mead, N. L., & Maner, J. K. (2012). On keeping your enemies close: Powerful leaders seek
proximity to ingroup power threats. Journal of Personality and Social
Psychology, 102(3), 576.
Mesmer-Magnus, J. R., & DeChurch, L. A. (2009). Information sharing and team performance: a
meta-analysis. Journal of Applied Psychology, 94(2), 535.
Mick, D. G., & Fournier, S. (1998). Paradoxes of technology: Consumer cognizance, emotions,
and coping strategies. Journal of Consumer Research, 25(2), 123-143.
Middleton, M. J., & Midgley, C. (1997). Avoiding the demonstration of lack of ability: An
underexplored aspect of goal theory. Journal of Educational Psychology, 89(4), 710.
Miller, R. S. (1995). On the nature of embarrassability: Shyness, social evaluation, and social
skill. Journal of Personality, 63(2), 315-339.
Mintzberg, H. (2009). Managing. Oakland, CA: Berrett-Koehler Publishers.
Modigliani, A. (1971). Embarrassment, facework, and eye contact: Testing a theory of
embarrassment. Journal of Personality and Social Psychology, 17(1), 15.
Montoya, A. K., & Hayes, A. F. (2016). Two condition within-participant statistical mediation
analysis: A path-analytic framework. Psychological Methods.
Muller, D., & Butera, F. (2007). The focusing effect of self-evaluation threat in coaction and
social comparison. Journal of Personality and Social Psychology, 93(2), 194.
Nicholls, J. G. (1984). Achievement motivation: Conceptions of ability, subjective experience,
task choice, and performance. Psychological Review, 91(3), 328.
Niehoff, B. P., & Moorman, R. H. (1993). Justice as a mediator of the relationship between
methods of monitoring and organizational citizenship behavior. Academy of Management
Journal, 36(3), 527-556.
Nielsen. (2014, March 20). Tech Styles: Are Consumers Really Interested in Wearing Tech on
Their Sleeves? Retrieved from: http://www.nielsen.com/us/en/insights/news/2014/tech-
styles-are-consumers-really-interested-in-wearing-tech-on-their-sleeves.html.
Norberg, P. A., Horne, D. R., & Horne, D. A. (2007). The privacy paradox: Personal information
disclosure intentions versus behaviors. Journal of Consumer Affairs, 41(1), 100-126.
Nussbaum, K., & DuRivage, V. (1986). Computer monitoring: Mismanagement by remote
control. Business and Society Review, 56, 16-20.
Oppenheimer, D. M., Meyvis, T., & Davidenko, N. (2009). Instructional manipulation checks:
Detecting satisficing to increase statistical power. Journal of Experimental Social
Psychology, 45(4), 867-872.
Pedersen, D. M. (1999). Model for types of privacy by privacy functions. Journal of
Environmental Psychology, 19(4), 397-405.
Peetz, J., Gunn, G. R., & Wilson, A. E. (2010). Crimes of the past: Defensive temporal
distancing in the face of past in-group wrongdoing. Personality and Social Psychology
Bulletin, 36(5), 598-611.
Pew Research Center. (2015, May 20). Americans’ attitudes about privacy, security and
surveillance. Retrieved from: http://www.pewinternet.org/2015/05/20/americans-
attitudes-about-privacy-security-and-surveillance/.
Phillips, B., & Zhao, H. (1993). Predictors of assistive technology abandonment. Assistive
Technology, 5(1), 36-45.
Pickard, M. D., Roster, C. A., & Chen, Y. (2016). Revealing sensitive information in personal
interviews: Is self-disclosure easier with humans or avatars and under what
conditions?. Computers in Human Behavior, 65, 23-30.
Postmes, T., Spears, R., Sakhel, K., & De Groot, D. (2001). Social influence in computer-
mediated communication: The effects of anonymity on group behavior. Personality and
Social Psychology Bulletin, 27(10), 1243-1254.
Rammstedt, B., & John, O. P. (2007). Measuring personality in one minute or less: A 10-item
short version of the Big Five Inventory in English and German. Journal of Research in
Personality, 41(1), 203-212.
Rosengren, K. E. (1974). Uses and gratifications: A paradigm outlined. The uses of mass
communications: Current perspectives on gratifications research, 3, 269-286.
Ross, C., Orr, E. S., Sisic, M., Arseneault, J. M., Simmering, M. G., & Orr, R. R. (2009).
Personality and motivations associated with Facebook use. Computers in Human
Behavior, 25(2), 578-586.
Rotter, J. B. (1954). Social learning and clinical psychology. Englewood Cliffs, NJ: Prentice-
Hall.
Ryan, R. M. (1982). Control and information in the intrapersonal sphere: An extension of
cognitive evaluation theory. Journal of Personality and Social Psychology, 43(3), 450.
Ryan, R. M., Mims, V., & Koestner, R. (1983). Relation of reward contingency and interpersonal
context to intrinsic motivation: A review and test using cognitive evaluation
theory. Journal of Personality and Social Psychology, 45(4), 736.
Ryan, R. M., & Connell, J. P. (1989). Perceived locus of causality and internalization: examining
reasons for acting in two domains. Journal of Personality and Social Psychology, 57(5),
749.
Ryan, T., & Xenos, S. (2011). Who uses Facebook? An investigation into the relationship
between the Big Five, shyness, narcissism, loneliness, and Facebook usage. Computers in
Human Behavior, 27(5), 1658-1664.
Sandstrom, G. M., & Dunn, E. W. (2014). Is efficiency overrated? Minimal social interactions
lead to belonging and positive affect. Social Psychological and Personality Science, 5(4),
437-442.
Schifter, D. E., & Ajzen, I. (1985). Intention, perceived control, and weight loss: an application
of the theory of planned behavior. Journal of Personality and Social Psychology, 49(3),
843.
Schlenker, B. R., & Leary, M. R. (1982). Social anxiety and self-presentation: A
conceptualization and model. Psychological Bulletin, 92(3), 641.
Sewell, G., & Barker, J. R. (2006). Coercion versus care: Using irony to make sense of
organizational surveillance. Academy of Management Review, 31(4), 934-961.
Shah, J. (2003). Automatic for the people: how representations of significant others implicitly
affect goal pursuit. Journal of Personality and Social Psychology, 84(4), 661.
Short, J., Williams, E., & Christie, B. (1976). The social psychology of telecommunications.
London, UK: John Wiley & Sons.
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2013, January 17-19). Life after p-hacking.
Paper presented at the Meeting of the Society for Personality and Social Psychology,
New Orleans, LA.
Snyder, C. R., Lassegard, M., & Ford, C. E. (1986). Distancing after group success and failure:
Basking in reflected glory and cutting off reflected failure. Journal of Personality and
Social Psychology, 51(2), 382.
Solomon, L. (2016, March 9). Two-thirds of managers are uncomfortable communicating with
employees. Harvard Business Review. Retrieved from: https://hbr.org/2016/03/two-
thirds-of-managers-are-uncomfortable-communicating-with-employees.
Sproull, L., & Kiesler, S. (1986). Reducing social context cues: Electronic mail in organizational
communication. Management Science, 32(11), 1492-1512.
Straus, S. G., & McGrath, J. E. (1994). Does the medium matter? The interaction of task type
and technology on group performance and member reactions. Journal of Applied
Psychology, 79(1), 87.
Stritzke, W. G., Nguyen, A., & Durkin, K. (2004). Shyness and computer-mediated
communication: A self-presentational theory perspective. Media Psychology, 6(1), 1-22.
Tangney, J. P. (1992). Situational determinants of shame and guilt in young
adulthood. Personality and Social Psychology Bulletin, 18(2), 199-206.
Tangney, J. P. (1999). The self-conscious emotions: Shame, guilt, embarrassment and pride. In
T. Dalgleish & M. J. Power (Eds.), Handbook of cognition and emotion (pp. 541-568).
Chichester, UK: John Wiley & Sons.
Tenbrunsel, A. E., & Messick, D. M. (1999). Sanctioning systems, decision frames, and
cooperation. Administrative Science Quarterly, 44(4), 684-707.
Tolchinsky, P. D., McCuddy, M. K., Adams, J., Ganster, D. C., Woodman, R. W., & Fromkin,
H. L. (1981). Employee perceptions of invasion of privacy: A field simulation
experiment. Journal of Applied Psychology, 66(3), 308.
Van Boven, L., Loewenstein, G., & Dunning, D. (2005). The illusion of courage in social
predictions: Underestimating the impact of fear of embarrassment on other
people. Organizational Behavior and Human Decision Processes, 96(2), 130-141.
Van Maanen, J., & Kunda, G. (1989). “Real feelings”: Emotional expression and organizational
culture. Research in Organizational Behavior, 11, 43-103.
Venkatesh, V., Morris, M. G., & Ackerman, P. L. (2000). A longitudinal field investigation of
gender differences in individual technology adoption decision-making
processes. Organizational Behavior and Human Decision Processes, 83(1), 33-60.
Vignovic, J. A., & Thompson, L. F. (2010). Computer-mediated cross-cultural collaboration:
Attributing communication errors to the person versus the situation. Journal of Applied
Psychology, 95(2), 265.
Vorauer, J. D., & Sakamoto, Y. (2006). I thought we could be friends, but… Systematic
miscommunication and defensive distancing as obstacles to cross-group friendship
formation. Psychological Science, 17(4), 326-331.
Watson, A. M., Foster Thompson, L., Rudolph, J. V., Whelan, T. J., Behrend, T. S., & Gissel, A.
L. (2013). When big brother is watching: Goal orientation shapes reactions to electronic
monitoring during online training. Journal of Applied Psychology, 98(4), 642.
Waytz, A., Cacioppo, J., & Epley, N. (2010). Who sees human? The stability and importance of
individual differences in anthropomorphism. Perspectives on Psychological
Science, 5(3), 219-232.
Waytz, A., Heafner, J., & Epley, N. (2014). The mind in the machine: Anthropomorphism
increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52,
113-117.
Waytz, A., & Norton, M. I. (2014). Botsourcing and outsourcing: Robot, British, Chinese, and
German workers are for thinking—not feeling—jobs. Emotion, 14(2), 434.
Weibel, D., Wissmath, B., Habegger, S., Steiner, Y., & Groner, R. (2008). Playing online games
against computer- vs. human-controlled opponents: Effects on presence, flow, and
enjoyment. Computers in Human Behavior, 24(5), 2274-2291.
Weil, M. M., & Rosen, L. D. (1997). TechnoStress: Coping with technology @work @home
@play. New York, NY: John Wiley & Sons.
Weisband, S., & Kiesler, S. (1996, April). Self-disclosure on computer forms: Meta-analysis and
implications. In Proceedings of the SIGCHI Conference on Human Factors in Computing
Systems (pp. 3-10). ACM.
Wood, J. V. (1989). Theory and research concerning social comparisons of personal
attributes. Psychological Bulletin, 106(2), 231.
Yukl, G. (1989). Managerial leadership: A review of theory and research. Journal of
Management, 15(2), 251-289.
Zuckerman, M., Porac, J., Lathin, D., & Deci, E. L. (1978). On the importance of self-
determination for intrinsically-motivated behavior. Personality and Social Psychology
Bulletin, 4(3), 443-446.