The Morality of Technology
by
David T. Newman
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
MANAGEMENT AND ORGANIZATION
May 2021
Acknowledgments
This dissertation would not have been possible without support from my committee, including
Scott S. Wiltermuth, Nathanael J. Fast, and Ali Abbas.
Empirical research underlying the work in Chapter II was conducted under the supervision of
Nathanael J. Fast and Jesse Graham, and in Chapter III under the supervision of Nathanael J. Fast.
Table of Contents
Acknowledgments
Table of Contents
Dissertation Abstract
CHAPTER I
INTRODUCTION
TOWARD A MORAL PSYCHOLOGY OF TECHNOLOGY
CONCLUSION
CHAPTER II
INTRODUCTION
TRANSHUMAN TECHNOLOGIES
OVERVIEW OF THE PRESENT RESEARCH
STUDY 1: MORAL FOUNDATIONS AND ATTITUDES ABOUT TRANSHUMANISM
STUDY 2: RESTORATIVE VS. AUGMENTATIVE TRANSHUMANISM
STUDY 3: TRANSCRANIAL DIRECT CURRENT STIMULATION
GENERAL DISCUSSION
CONCLUSION
CHAPTER III
INTRODUCTION
OVERVIEW OF THE PRESENT RESEARCH
STUDY 1: COGNITIVE ENHANCEMENT, FAIRNESS, AND MORAL CHARACTER
STUDY 2: COGNITIVE ENHANCEMENT, FAIRNESS, AND WORK MOTIVATION
STUDY 3: CHEATING AS A REACTION TO COMPETITIVE COGNITIVE ENHANCEMENT
GENERAL DISCUSSION
CONCLUSION
REFERENCES
Dissertation Abstract
Technology is the means by which human beings take persistent and decisive control over their
environment—and so, gradually, over their own destinies. However, the acceleration of
technological innovation in recent years has enabled humanity to exert far more direct influence
over its own social, psychological, and biological nature. Technology is therefore of central
interest to behavioral scientists in general and morality scholars in particular. Across three
chapters, this dissertation explores the emergent field of the psychology of technology, with a
specific eye toward ethical implications and the role of moral psychology. Chapter I presents an
overview of literature pertinent to the moral psychology of novel technologies. Chapter II
consists of three empirical studies of the moral psychology specific to transhumanism, a
movement that advocates integrating humanity with advanced technology to transcend our
biological limitations. Chapter III consists of three further empirical studies of the moral
psychology specific to competitive cognitive enhancement.
CHAPTER I
The Moral Psychology of Novel Technology: An Overview
ABSTRACT
Technology is the means by which human beings take persistent and decisive control over their
environment—and so, gradually, over their own destinies. However, the acceleration of
technological innovation in recent years has enabled humanity to exert far more direct influence
over its own social, psychological, and biological nature. Technology is therefore of central
interest to behavioral scientists in general and morality scholars in particular. This paper explores
the emergent field of the psychology of technology, with a specific eye toward ethical implications
and the development of an approach that recognizes the weight of moral psychology in predicting
the effects of novel technologies as we enter a future in which they are an inextricable component
of what it means to be human.
INTRODUCTION
Can technology be moral or immoral? What makes people embrace or reject novel
technology for ethical reasons? The interaction of psychology with technology has been of
interest to behavioral researchers at least since the early 20th century, when Taylor’s
(1911) project of “scientific management” sought to comprehend and improve upon the
sequences of tasks involved in the operation of technical procedures. However, a generalized and
empirically testable account of individual psychological reactions to novel technologies was not
to come until the technological explosion following the dawn of the computing era, with the
development of the Technology Acceptance Model (Davis, 1986). Under the Model, various
design features of a technology interact with each other to yield two different clusters of
perceptions in end users: (1) usefulness and (2) ease of use. Accordingly, it is the combination
(or absence) of these perceptions that results in acceptance (or rejection) of technology.
In the field of management studies, there is evidence that perceived usefulness is
empirically a stronger influence than ease of use in producing technology acceptance, although
both are important factors (Davis, Bagozzi, & Warshaw, 1989). Usefulness is perceived if a
technology is believed to enhance an individual’s performance, and a series of longitudinal field
studies (Venkatesh & Davis, 2000) identified several key determinants of this belief:
(1) Subjective norms regarding whether people important to you think that you should
adopt the technology.
(2) Image concerns about the degree to which usage of the technology is seen as enhancing
status within a social system.
(3) Job relevance, or the extent to which the technology is directly applicable to the work
or tasks in question.
(4) Output quality: how well the technology or system performs the tasks for which it is
intended.
(5) Result demonstrability, defined as the degree to which the covariation of technology usage and
positive results is readily discernible.
On the other side of the equation, ease of use is perceived when a technological system
appears to be relatively free of effort, and this factor may in turn be disaggregated into its own
set of components (Venkatesh, 2000):
(1) Internal control, also referred to as self-efficacy: belief in one’s own competence at
using the technology.
(2) External control, also known as facilitating conditions, which include the availability
of helpful resources and the compatibility of the technology with existing systems.
(3) Intrinsic motivation, which is to say feeling playful, spontaneous, creative, and
flexible when using the technology.
(4) Technology-specific anxiety.
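To make the two-cluster structure of the model concrete, the sketch below expresses it as a toy scoring function. This is purely illustrative: the equal weighting of determinants and the 0.6/0.4 split between usefulness and ease of use are assumptions chosen for exposition, not parameters drawn from Davis (1986) or Venkatesh (2000), who estimate such relationships empirically.

```python
# Illustrative sketch of the Technology Acceptance Model as a toy linear score.
# All weights are hypothetical assumptions, not empirical estimates.

def perceived_usefulness(subjective_norms, image, job_relevance,
                         output_quality, result_demonstrability):
    """Average the five determinants identified by Venkatesh & Davis (2000)."""
    return (subjective_norms + image + job_relevance
            + output_quality + result_demonstrability) / 5

def perceived_ease_of_use(internal_control, external_control,
                          intrinsic_motivation, anxiety):
    """Average the components from Venkatesh (2000); anxiety enters negatively."""
    return (internal_control + external_control
            + intrinsic_motivation + (1 - anxiety)) / 4

def acceptance_intention(usefulness, ease_of_use,
                         w_usefulness=0.6, w_ease=0.4):
    """Weight usefulness more heavily, per Davis, Bagozzi, & Warshaw (1989)."""
    return w_usefulness * usefulness + w_ease * ease_of_use

# Example: ratings on a 0-1 scale for a hypothetical workplace tool.
u = perceived_usefulness(0.8, 0.5, 0.9, 0.7, 0.6)
e = perceived_ease_of_use(0.7, 0.6, 0.4, 0.3)
print(f"usefulness={u:.2f}, ease={e:.2f}, intention={acceptance_intention(u, e):.2f}")
```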
Anxiety merits special attention, as it is the aspect of the model most closely associated
with negative emotional responses, which in turn are associated with the ethical lens of interest
to the present work (cf. Haidt, 2001). The Technology Acceptance Model was initially
understood to be addressing the deleterious impact of “computer anxiety,” a well-documented
affective phenomenon that is influenced by experience with the technology (Maurer, 1994) as
well as by technological literacy and general self-efficacy, among other factors (Beckers &
Schmidt, 2001). Technology anxiety can be reduced by certain minimalist design principles like
training on real tasks, getting started quickly, reading instructions in any order, exploiting prior
knowledge and familiar comparisons, and so on (Reznich, 1996). Additionally, anxiety responses
may be ameliorated by the quality of the technical interface and users’ interactions therewith, as
well as by specialized training (Igbaria & Chakrabarti, 2007).
However, aside from the partial role of negative affect in mitigating perceived ease of
use, and the similar mitigation of perceived usefulness by subjective norms (specifically, about
whether important cohorts believe an individual should use the technology), the Technology
Acceptance Model does not stake strong claims about the role of moral psychology in
determining people’s reactions to technology. One notable exception involves the study of
people’s acceptance of novel food technologies, which has implicated the positive impact of
perceived naturalism (i.e., whether a technology is derived from something found in nature) in
technology acceptance (Cox & Lease, 2007), alongside perceived benefits and risks (Siegrist,
2008).
In contrast, more contemporary approaches to the emerging psychology of
technology have explicitly invoked its ethical dimensions, drawing on disciplines
ranging from evolutionary biology and neuroscience to cognitive psychology and philosophy to
study technology’s role in numerous phenomena; brain development, hedonistic technology design,
unethical online behaviors, and human identity comprise only a few of the topics of interest (e.g.,
Kool & Agrawal, 2016). Nonetheless, attempts to model psychological acceptance of novel
technologies as a function of moral psychology writ broad have been limited, although there has
been some success in using morality theory to predict responses to the technological
modification of human nature (e.g., Newman, Fast, & Graham, 2016). And despite the relative
paucity of unifying theories of the moral psychology of technology, there is a growing body of
work researching discrete technological intersections with elements of the moral domain.
TOWARD A MORAL PSYCHOLOGY OF TECHNOLOGY
One challenge for generalist theories about technology on the whole is that while
technology is ever-changing, the architecture of the human brain is laid out according to
blueprints drafted on an evolutionary timescale. Psychologists tend to see this as putting
humanity at a significant disadvantage in a protean world of accelerating distraction,
interruption, and information saturation (e.g., Gazzaley & Rosen, 2016). Indeed, careers have
been built and names made on the premise that technology must be humanized in order to
safeguard our time, our health, and even social institutions such as our democracy (cf. Center for
Humane Technology, 2020; CHT has reached over 100 million people globally through its
documentary The Social Dilemma). However, while vast programs of research have been
dedicated to uncovering the acute and chronic impacts of technology (particularly media
multitasking) on our brains, there is nevertheless evidence to suggest that intentional design has
the capacity to promote healthy effects on human neuroplasticity (Ziegler, Mishra, & Gazzaley,
2015). As with any complex subject matter, the ethical reality on the ground is not black-and-
white but highly context-dependent.
Along these lines, one promising avenue for elucidating people’s moral relationship to
technology is to employ a social-identity model of motivation; a combination of global self-
identity (local/global identification and parochialism/cosmopolitanism) and self-regulation
(prevention and promotion focus) has been found to predict technology readiness and use, with
global-identified, cosmopolitan, and promotion-focused individuals more likely to accept
technology—even across cultures (Westjohn, Arnold, Magnusson, Zdravkovic, & Zhou, 2009).
Self- and social-identity, as well as self-regulation, are intrinsically linked to ethics through the
notion of moral identity, a “self-regulatory mechanism that motivates moral action” (Aquino &
Reed, 2002, p. 1423); crucially, there is evidence that moral identity is of central importance to
personal identity in its entirety (Strohminger & Nichols, 2014). As a result, it may be concluded
that people are strongly motivated to accept or reject novel technologies as a consequence of
their moral implications.
The nature of those implications, however, will differ considerably between technologies
and across demographic conditions. For example, there is a well-known tendency for digital
environments to replicate preexisting gender disparities through a combination of negative
stereotypes and social-role expectations (Atwell, 2001; Joiner, Stewart, & Beaney, 2015). At the
same time, not all differences are similarly categorical; although casual demographers often
bracket people into distinct age groups (e.g., millennials vs. baby boomers), cohort effects on
technology-related values are in fact more linear than generational (Rosen & Lara-Ruiz, 2015).
Additionally, there is substantial variety in the ethical consequences of technologies applied to
different cultural institutions and social milieux. Several subdomains of present interest to
researchers in this burgeoning field are briefly reviewed below.
Boundary Management and Privacy
As novel technologies continue to reshape temporal, spatial, and relational boundaries
(for example, between work and family), potentially reinforcing existent socioeconomic
inequities to unforeseeable degrees, new frameworks and boundary management practices are
required to actively address such developments (Ollier-Malaterre, Jacobs, & Rothbard, 2019).
Collectively, the “awareness, motivation, and skill needed to perform technology management”
of this kind may be thought of as “digital cultural capital,” an internalized competency applied to
the tasks of managing one’s connectivity, privacy, and online self-presentation (Ollier-Malaterre
et al., 2019, pp. 427, 433). Adaptation to the constant reinvention of technology’s significance in
human life necessitates a conceptual shift from regarding technologies uncritically as neutral
instruments toward viewing them as “contradictory and ambivalent” artifacts “fully entangled in
how we imagine and practice our daily existence” (Chimirri & Schraube, 2019, p. 54); in other
words, technology is inherently embodied, political, and moral, and the psychology of
technology cannot effectively be studied as though it were otherwise.
However, to examine technology’s boundary-dissolving potential with a critical lens is
not necessarily to abandon all possibility of optimism. In particular, both measurement and
intervention within the field of positive psychology appear likely to see radical innovation
through informational and biological technologies over the next decade (Yaden, Eichstaedt, &
Medaglia, 2018). As boundaries to resources become ever more porous, human civilization may
continue its ongoing trend of diminished violence, poverty, and disease (e.g., Pinker, 2018) and
substantially improve lifespans, decision making, and wellbeing (cf. Harari, 2016). The news is
not all bad. On the other hand, there are many areas in which we do not yet know whether the
impact of technology will be positive or negative. For example, although people increasingly
control their environments by giving orders to hierarchically subordinate artificial intelligences,
it remains unclear whether this kind of power over AI assistants will cause the same
psychological outcomes as experiencing power over humans (Fast & Schroeder, 2020). While
the feeling of power over other humans tends to make people more goal-oriented and attentive to
social role expectations, power over machines could alternatively make users “[more] vulnerable
to manipulation by companies that control the digital assistants” and “care less about maintaining
their social role (e.g. behaving competently) when they know that their interaction partner is not
human, even when it acts in human-like ways” (Fast & Schroeder, 2020, p. 174). Could the ease
of misrepresentation in digital life lead to misunderstandings and misevaluations between
humans in “real” life?
Recent research has provided compelling support for the distinct psychological
experience of interacting with technology, which alleviates the social evaluation concerns
typically associated with human interactions (Raveendhran & Fast, 2019). Nor is this only the
case for low-status individuals hoping to avoid direct evaluation by their peers and supervisors.
In fact, there is evidence that leaders are also concerned about negative social evaluation and
prefer to interact with their subordinates via avatars in contexts that entail frequent monitoring
(Raveendhran, Fast, & Carnevale, 2020). Hence our society may be reaching an inflection point
at which the very technologies that have lowered boundaries to efficient information access
and decision making may begin to raise boundaries between human beings. One need only look
at adaptations to the isolating nature of the COVID-19 pandemic to see the proliferation of
videoconference-based relationship management (e.g., Wind, Rijkeboer, Andersson, & Riper,
2020); in the coming years, even after widespread vaccinations, we are likely to see some
entrenchment of the ground gained by these and other forms of arm’s-length digital interactivity
solutions.
The substitution of technology for traditional forms of interaction has paradoxically driven
people farther apart, socially and physically, even as it has made the world smaller. And yet,
simultaneously, we know more about each other than ever before. The human need for
connection has engendered a pervasive desire in individuals to make highly public disclosures on
Internet platforms, especially through social media and social networking sites; while neither
wholly pernicious nor beneficial, this phenomenon is certainly a double-edged sword (Archer,
Christofides, Nosko, & Wood, 2015). Although the right to privacy has historically been defended
as a cherished value, the conveniences of sharing information digitally in order to receive various
social benefits and personalized products—coupled with the seeming unlikelihood of
experiencing a harmful data breach—have led to a broad belief that the age of privacy is already
over (Fast & Jago, 2020). And if the average person lacks the motivation to take the steps
necessary to safeguard their own privacy, it is tempting to agree with the fatalistic view.
As standard practices of notice and consent currently do little to meaningfully inform
privacy-concerned users of their risks, and as unconcerned users may actually prefer to sacrifice
a little privacy for the immediate and tailor-made benefits of psychologically targeted
notifications and services, some scholars have argued for a shift in designers’ mindset toward
transparent controls that make it easy for users to act in line with their privacy goals and freely
choose their personally desired level of disclosure (Matz, Appel, & Kosinski, 2020).
The need for privacy is always balanced against the need for self-presentation and
exploration, but it remains an open question whether social media users present their true selves
online or take shelter under anonymity and psychological distance to proffer false or idealized
self-portraits (Bartsch & Subrahmanyam, 2015). There is some evidence that emerging media
have created an environment that increases individual and cultural narcissism by permitting
people to inflate their self-esteem via grandiosity in online self-presentation (Campbell &
Twenge, 2015). At the same time, some individuals, including members of vulnerable
populations such as adolescents, use social networking displays to showcase health-risk
behaviors including sexual activity, substance abuse, and violence (Moreno & Pumper, 2015).
Because not all online interactions can be taken at face value, the world of cyberspace has
provided scholars with fertile soil for cultivating psychological research.
Online Interactions and Cyberspace Cadets
All forms of human activity have the potential to implicate moral considerations, but the
relative novelty of online interactions presents a specific set of ethical challenges. In particular,
the elision of public and private spaces, the necessity of informed consent, and the benefits and
drawbacks of anonymity have all come to the fore in psychological research, and these concerns
are especially relevant within online forums catering to at-risk populations (Gavin & Rodham,
2015). The emergence of online interactions over the past half-century has created a distinct
form of communication characterized not only by greater anonymity but by relative absence of
nonverbal cues, diverse opportunities to form and strengthen social ties, and widespread
dissemination of information (Lieberman & Schroeder, 2020). Moreover, as “these interactions
often bleed into one another,” online communication methods may disrupt offline life in some
ways (e.g., by reducing sociality and increasing feelings of loneliness) while simultaneously
enhancing it in others (e.g., by bolstering relationships and fostering collaboration) (Lieberman
& Schroeder, 2020, p. 18; see also, e.g., Waytz & Gray, 2018).
One area in which the online environment may excel at enhancing offline experiences
involves the interactions of neurodiverse groups; for instance, individuals on the autism spectrum
can communicate through social media as effectively as neurotypical users—showing
empathizing abilities often found to be deficient in offline exchanges (Brosnan & Gavin, 2015).
At the same time, the phenomenon of cyberbullying has been on the rise in recent years, becoming a
public health concern exacerbated by the physical absence of bystanders and causing numerous
suicides, especially among younger people (e.g., Rice, Petering, Rhoades, Winetrobe, Goldbach,
Plant, Montoya, & Kordic, 2015; Kowalski & Whittaker, 2015).
Another prominent public health issue in this sphere is the spread of misinformation
about medical science over social media (e.g., Waszak, Kasprzycka-Waszak, & Kubanek, 2018).
People’s susceptibility to this kind of “fake news” has become especially dangerous with the
proliferation of misleading reports about the COVID-19 pandemic, resulting in reductions in
vaccination and public safety compliance (Roozenbeek, Schneider, Dryhurst, Kerr, Freeman,
Recchia, Van der Bles, & Van der Linden, 2020). Although these specific problems may in time
be addressed with proper interventions (e.g., Van der Linden, Roozenbeek, & Compton, 2020),
the technology itself remains intertwined with the broader ethical issue of people’s “digital
media literacy,” which is to say their capacity to assess the credibility of information on the
Internet (Cheever & Rokkum, 2015).
The widening overlap of cyberspace and offline life has given Internet users more
information than ever before about their friends and colleagues, blurring the boundary between
professional and personal identities and invoking serious reputational concerns (Ollier-Malaterre,
Rothbard, & Berg, 2013). To address these concerns, individuals engage in a variety of boundary
management behaviors, which may be categorized in terms of their preference for segmenting
versus integrating these identities as well as by their motivations toward self-verification versus
self-enhancement—with differential consequences for respect, liking, and difficulty of
behavioral maintenance (Ollier-Malaterre et al., 2013, p. 652). However, the balance of these
strategies is likely to shift over time as cyberspace continues to encroach upon physical space. As
both mobile phones (Drouin, Kaiser, & Miller, 2015) and the Internet itself (Vondráčková &
Šmahel, 2015) have been shown to be addictive, it is probable that online and offline
environments will become increasingly and inextricably enmeshed. Additionally, there is
evidence that the mere presence of a smartphone is a distraction that limits cognitive capacity
(Ward, Duke, Gneezy, & Bos, 2017), suggesting that—unless adequate solutions are
developed—it may be very difficult for people to make good decisions about managing their
boundaries in an increasingly technological world. For better or worse, the machines are already
inside our brains.
Dehumanization, Anthropomorphism, and Machine Responsibility
The combination of (1) telecommunication (i.e., interacting with people through machine
interfaces) and (2) the blending of digital and physical life (i.e., associating personal identity
with technological representations) is cause for profound ethical concern, as “[t]reating a human
mind like a machine is an essential component of dehumanization” (Schroeder & Epley, 2016, p.
1427). In particular, the imbrication of humanity and technology facilitates mechanistic
dehumanization: the practice of treating people rigidly and superficially, as fungible, inert, and
without interpersonal warmth or individuality (Haslam, 2006). Like all forms of dehumanization,
this practice perpetuates injustice by excluding people from moral consideration (Opotow, 1990).
Hence, to the extent that novel technologies mechanize human interactions, they have the
potential not only to promote behaviors by moral agents that many might view as immoral, but
also to limit who is or is not deemed a moral patient, that is, a proper target of moral
duties at all (see Gray, Young, & Waytz, 2012).
Text-based communication is one such technology whose widespread adoption may
gradually be eroding our ability to recognize each other’s common humanity. The removal of
humanlike vocal, visual, and paralinguistic cues makes people more likely to mistake a human
interlocutor for a machine (Schroeder & Epley, 2016), and text communication especially
facilitates the dehumanization of those with whom people already have conflicts or
disagreements, by concealing the existence of a thoughtful and reflective mind (Schroeder,
Kardas, & Epley, 2017). Text messaging research remains a nascent field of study, but scholars
are attentive to its capacity to damage relationships (Grace & Kemp, 2015). That said, because
children and younger cohorts are strikingly more fluent in “textese” compared to older adults, it
is possible that society will comfortably adapt to its language features over time, although not
without consequences for general writing ability (Waldron, Kemp, Plester, & Wood, 2015). In
other words, the machines—with their streamlined forms of communication—are rapidly making
us sound more like them. Meanwhile, the machines are becoming more and more like us.
The psychology of anthropomorphism suggests three considerations that facilitate seeing
technologies, such as artificial intelligence, in a more human light: (1) accessibility and
applicability of anthropocentric knowledge (i.e., humanlike qualities), (2) effectance motivation
(i.e., to explain and understand the behavior of other agents), and (3) sociality motivation (i.e.,
desire for social contact and affiliation) (Epley, Waytz, & Cacioppo, 2007). Of these, humanlike
qualities are possibly the simplest to instantiate from a design perspective. For example, users
display greater trust in an autonomous vehicle outfitted with anthropomorphic features such as a
name, gender, and voice (Waytz, Heafner, & Epley, 2014). Concordantly, people are more
willing to disclose personal information to digital assistants when they communicate with the
device by talking rather than by texting (Schroeder & Schroeder, 2018). Thus the trust
lost between human beings through dehumanizing forms of communication may be recaptured by
companies that make efforts to humanize their information systems.
This is nothing new. The field of human-computer interaction has, since its inception,
identified a number of humanizing design factors that include ease and speed of use, flexibility in
dealing with different types of people and conditions, offering choices and overrides, letting
individuals inspect and correct their information, respect for privacy, and the absence of
deception, manipulation, and secrecy (Sterling, 1975). There are so many ways to ameliorate the
mechanistic aspects of technology that designers can pick and choose whichever features suit
them. In point of fact, because “cute” entities are hyper-mentalized (i.e., humanized), eliciting
anthropomorphic reactions may be as simple as putting an adorable smile on an interface (cf.
Sherman & Haidt, 2011).
As machines (e.g., self-driving cars, digital assistants, AI security systems, surgical
robots, etc.) become increasingly autonomous and agentic, they will have increasing
opportunities to influence human lives in ways typically regarded as having ethical valence. But
will people see them as having actual responsibility to pursue good outcomes and avoid
wrongdoing? Moral psychology suggests that people will see machines as responsible to the
extent that they seem (1) to be “aware of the moral concerns inherent in the situation,” (2) to be
“capable of intentionality, that is, holding a belief that an action will have a certain outcome,” (3)
to have “free will” or “the ability to independently implement actions,” (4) to have “human-like
bodies, human-like voices, and human-like faces,” and (5) to cause “damage and suffering,”
which “lead people to search for an intentional agent to hold responsible” (Bigman, Waytz,
Alterovitz, & Gray, 2019, pp. 366-367). One promising approach to proactively address the
growing ethical responsibility of artificial intelligences is to create an algorithmic social contract
conceived of as society-in-the-loop, which adapts human-in-the-loop paradigms found in
machine learning systems while negotiating for agreement between “stakeholders with
conflicting interests and values” (Rahwan, 2018, p. 9). However, with as few as 19% of
Americans trusting self-driving vehicles, the most resilient obstacles to widespread adoption of
morally entangled autonomous technologies may be psychological rather than technological
(Shariff, Bonnefon, & Rahwan, 2017).
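Because the five responsibility cues above lend themselves to a structured representation, the hypothetical sketch below encodes them as a small data type with an additive score. The equal-weight averaging is an assumption for illustration only; Bigman et al. (2019) propose no such formula.

```python
# Hypothetical encoding of the five cues from Bigman, Waytz, Alterovitz, &
# Gray (2019). The equal-weight average is an illustrative assumption.

from dataclasses import dataclass, fields

@dataclass
class MachineCues:
    moral_awareness: float     # aware of the moral concerns in the situation
    intentionality: float      # holds beliefs that actions yield outcomes
    free_will: float           # can independently implement actions
    humanlike_features: float  # human-like body, voice, face
    harm_caused: float         # damage and suffering invite blame-seeking

def perceived_responsibility(cues: MachineCues) -> float:
    """Average the five cue ratings, each on a 0-1 scale."""
    values = [getattr(cues, f.name) for f in fields(cues)]
    return sum(values) / len(values)

# Example: a self-driving car with a synthetic voice involved in a collision.
car = MachineCues(moral_awareness=0.4, intentionality=0.5, free_will=0.7,
                  humanlike_features=0.3, harm_caused=0.9)
print(f"perceived responsibility = {perceived_responsibility(car):.2f}")
```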
Technology in Labor, Education, and Human Evaluation
For a long time, the primary ethical concern regarding technology in the workplace
involved the threat (or boon) of automation driving a need for new employee competencies and
crowding out unskilled labor (e.g., Adler, 1992). However, as the employment-population ratio
(before COVID-19) has remained essentially the same as it was thirty years ago (U.S. Bureau of
Labor Statistics, 2020, p. 7), fears of widespread layoffs may be misplaced. The actual effects of
automation and artificial intelligence on labor may be difficult to prognosticate, as scientists lack
access to high-quality data about human-machine complementarity, the interaction of
technologies with economic dynamics, and even the changing nature of work itself (Frank,
Autor, Bessen, Brynjolfsson, Cebrian, Deming, Feldman, Groh, Lobo, Moro, Wang, Youn, &
Rahwan, 2019). Accurately forecasting the impact of technology on the future of work will
require scholars and policymakers to prioritize “data collection that is detailed, responsive to
real-time changes in the labor market, and respects regional variability,” including “skills data
from resumes and job postings along with new indicators for both intercity and intracity labor
dependencies” (Frank et al., 2019, p. 6537). However, while a machine-learning-powered crystal
ball to foresee labor distribution may be years away, the existing research on technology at work
and in education has illuminated a few other areas of interest for ethicists and moral
psychologists, beyond what has already been discussed.
One prominent area of study is multitasking. Although multitasking has been called a
“myth” (e.g., Rosen, 2008), there is a strong inverse correlation between age and self-reported
multitasking, largely owing to the proliferation of technologies that permit or promote it, which
are more frequently used among younger generations (Carrier, Kersen, & Rosen, 2015). The
mythic label is supported to the extent that while multitasking—especially media multitasking—
has increased, people do not appear to be improving at it, with the difficulty ratings of specific
task combinations highly correlated across generations (Carrier, Cheever, Rosen, Benitez, &
Chang, 2009); while some young people report ease with multitasking, they tend to perform
objectively worse on individual tasks (e.g., Cheever, Peviani, & Rosen, 2018). If trends in
distributed work and virtual teams continue (e.g., Eisenberg & Krishnan, 2018), the availability
of multitasking in work-from-home arrangements may spell problems for nationwide or global
productivity going forward. Additionally, multitasking is associated with negative mental health
outcomes such as depression and social anxiety (Becker, Alzahabi, & Hopwood, 2013), and it is
especially destructive in safety-critical environments wherein interruptions have the potential to
disrupt the performance of essential tasks (Werner, Cades, & Boehm-Davis, 2015).
A further moral concern is that the technologically facilitated impulse to multitask also
has the potential to impede human flourishing and development. In education, the rise of
multitasking is particularly relevant to the performance of college students, who are frequent
multitaskers not only in their home and social lives but in the classroom, often engaging in off-
task technology use that diminishes learning outcomes (Bowman, Waite, & Levine, 2015; Wood
& Zivcakova, 2015). However, there are legitimate educational uses of multitasking, such as
technology-immersive learning environments and note-taking, which may offer advantages for
students’ working memory and cognitive load, although more research is necessary to fully
contrast different kinds of multi- and single-tasking behaviors (Lin & Bigenho, 2015). While
multitasking preference tends to be associated with lower academic performance overall,
controlled analyses in at least one recent study indicate that it may be media and technology
usage specifically, and not multitasking per se, that drives this negative relationship (Uzun &
Kilis, 2019).
One final contested area in this domain is the role of algorithms, or automated computer
processes—including artificial intelligences—in workplace decision making (e.g., Kellogg,
Valentine, & Christin, 2020), which could easily be extended to decisions made in educational
settings. Broadly, algorithms are increasingly used to “direct workers by restricting and
recommending, evaluate workers by recording and rating, and discipline workers by replacing
and rewarding” (Kellogg et al., 2020, p. 368). Algorithms can add substantial economic and
social value to organizations, for instance by outperforming human stock traders (Heaton,
Polson, & Witte, 2018), predicting customer preferences (Gomez-Uribe & Hunt, 2016), and
improving radiologists’ interpretations of body scans (Hosny, Parmar, Quackenbush, Schwartz,
& Aerts, 2018)—to name only a few. However, while algorithmic decisions may be perceived as
acceptable for purely mechanical tasks, when making more socially-oriented or “human”
decisions, they are seen as less fair and trustworthy, and they provoke more negative emotions
compared to human decisions (Lee, 2018). This affective response may be driven by the view
that it is “dehumanizing to use machines to judge a person” (Lee, 2018, p. 12). In a similar vein,
there is evidence that people regard algorithmic evaluations as less fair and less accurate than
evaluations by humans because they seemingly neglect the holistic and qualitative dimensions of
human activity in favor of a reductionistic and quantitative approach to decision making
(Newman, Fast, & Harmon, 2020). Despite the accelerating entwinement of humanity and
technology, people may retain persistent beliefs about the need to separate these two estates.
CONCLUSION
While no single theory yet captures the multifariousness of the moral psychology of
novel technology, when one examines the application of psychological and behavioral theories to
distinctive technologies—and vice versa—in conjunction with the ethical implications thereof, a
complex picture begins to emerge. Reactions to technology are generally pluralistic and context-
dependent, pointing to the need for psychological frameworks that readily admit a high degree of
intrapsychic and interpersonal nuance. The most obvious candidate for a broad analysis of ethical
judgments of technology is therefore Moral Foundations Theory, which posits a suite of adaptive
moral intuitions that are elicited in response to different kinds of environmental triggers (e.g.,
Graham, Haidt, Koleva, Motyl, Iyer, Wojcik, & Ditto, 2013).
Notably absent from the foregoing discussion of the psychology of technology is the
notion of technologically modifying humanity itself. However, as all forms of technology to
some degree represent human modification, the assessment of various technologies in this
chapter is in fact an inductive (i.e., bottom-up) exploration of this very notion. Going forward, it
is through tapping into the higher-order concept of human modification that we may extrapolate
more completely about human responses to technologies that have yet to be promulgated or even
imagined. Thus the following chapters will engage with this topic more squarely and
experimentally, first by employing Moral Foundations Theory in an investigation of altering
humanity writ broad (Chapter II) and then by focusing on the moral foundation of fairness, or
justice, to examine judgments specifically of human cognitive enhancement in competitive
settings (Chapter III). It is the aim of the following body of empirical work to shed light on the
question of the morality of technology, by providing a generalizable account of the moral
intuitions activated when novel technologies intervene in human nature.
CHAPTER II
Is Technology Good or Evil? Moral Attitudes About Altering Humanity
ABSTRACT
Technology is developing at an accelerating pace and rapidly transforming civilization. It is
therefore important to understand how people will respond as technology increasingly alters what
it means to be human. Across three studies, we use Moral Foundations Theory to examine the
influence of moral psychology on reactions to transhumanism, a cultural and intellectual
movement that advocates integrating humanity with advanced technology to transcend our
biological limitations. In Study 1, we found that negativity toward transhumanism is most
associated with endorsement of the Purity foundation. In Study 2, we found that transhuman
technologies were perceived as more immoral when their function was framed as augmentative
(enhancing human abilities beyond normal) compared to restorative (helping people with deficits
reach the human baseline). The interaction of Care endorsements with condition predicted
perceived immorality for augmentative technologies. Study 3 shifted the focus to a specific
transhuman technology: transcranial direct current stimulation (tDCS), which we framed in a 2x2
design according to function (augmentative or restorative) and form (embeddable or wearable).
We found that augmentative tDCS was perceived as more immoral than restorative tDCS,
embeddable tDCS was perceived as more immoral than wearable tDCS, and the interaction of
augmentation and embedment caused the greatest increase in perceived immorality. These findings
show systematic moral reactions to a novel topic, lending unique insight into moral judgment.
“The first victim of transhumanism might be equality.” – Francis Fukuyama
“Man is something that shall be overcome.” – Friedrich Nietzsche
INTRODUCTION
Human biological evolution is measured in hundreds of millennia, and cultural evolution
takes place over generations, yet our technological evolution seems to be accelerating over years
and months, faster than our social systems can accommodate (Kurzweil, 2001; Hård & Jamison,
2005). Tools and technology have been part of the human experience since the dawn of
civilization, and new technologies have often been the subject of ethical debate (Potter, 1970;
Reich, 1978; Engelhardt, 1986; Feenberg, 1991). How and why these debates are resolved one
way or another is unclear. Some technologies, such as human cloning, engender widespread
disapproval (Kass, 1997; Nisbet, 2004). Others, such as nuclear power, meet with censure in
some contexts and praise in others (Gamson & Modigliani, 1989). Society institutionalizes
technologies as ethical or unethical, so to understand how people moralize technology, we must
examine a new category of technologies for which people as yet lack preformed moral opinions.
Here, we investigate transhumanism, a cultural and intellectual movement that advocates
integrating humanity with advanced technology to transcend our biological limitations. Not only
is transhumanism important because it is a novel lens through which to view technology, but its
constituent technologies are rapidly becoming central to human existence. Understanding the
moral psychology underpinning reactions to this movement may be crucial to our future.
Many have noted that technological advances are radically changing what it means to be
human (e.g., More, 1994; Greenfield, 2004; Bostrom, 2009; Al-Rodhan, 2011; Sandu, 2015).
The ethical implications of such changes loom large; the U.S. Office of Naval Research recently
allocated $7.5 million to teach morality to robots (Tucker, 2014, The Atlantic), and individual
philanthropists have furnished even greater sums to mitigate possible catastrophes from next-
generation technology (Vanian, 2015, Fortune). Interestingly, little is known about how humans
themselves will respond to the increasing integration of technology into everyday human
experience. Advanced technologies promise to make us faster, stronger, more resistant to
disease, longer-lived, and even more intelligent (More, 2013). People’s reactions to such
developments will either pave the way for further technological advancement or serve as a
roadblock that hinders future growth. The present research aims to understand when and why
people will be most likely to reject, rather than embrace, technological growth.
Transhumanism, a movement at the forefront of technological advancement, offers a
perfect opportunity to begin exploring attitudes toward technology, as it celebrates and seeks to
advance the use of technology to alter human capabilities. Because attitudes about the proper
direction of advancement are intimately connected to moral concerns about right and wrong, we
contend that the tools to better understand and predict reactions to technology may be found in
moral psychology.
TRANSHUMAN TECHNOLOGIES
Technology increasingly offers possibilities to redefine humanity. Not only are targeted
gene-editing programs now commonplace in biological research (Sander & Joung, 2013), but
Chinese scientists have already reported altering the genomes of human embryos (Liang et al.,
2015). Mechanical exoskeletons can augment our physical capabilities (Yang et al., 2008), while
brain-machine interfaces promise to make technology respond directly to our thoughts (Lebedev
& Nicolelis, 2006). Computer-mediated brain-to-brain communication (hyperinteraction) is
already a reality (Grau et al., 2014), and machine telepathy may have implications for social
interaction in the future. Over time, multidisciplinary advancements will recombine to yield an
increasingly diverse array of technologies (Arthur, 2009). Human modification is very likely
only in its infancy.
Certain modifications are already available for retail purchase over the Internet.
Transcranial direct current stimulation devices (which operate by running a weak electric current
through the brain) have been used to enhance human cognitive function in any number of
domains, including but not limited to motor learning (Reis & Fritsch, 2011), language learning
(Flöel et al., 2008), attention and memory formation (Coffman et al., 2014), and emotion
processing (Nitsche et al., 2012).
Broadly speaking, these advancements and their advocates fall under the banner of
transhumanism, a cultural and intellectual movement propounding the belief that it is both
possible and desirable to overcome biological limitations on cognition, emotion, and physical
and sensory capabilities by integrating humanity with advanced technology (cf. More, 2013).
Although no single philosophy may completely articulate the ideals and aims of all
transhumanists, they are united by the common goal of transcending the boundaries of human
experience. Some see transhumanism as progression toward a “posthuman” state, the result of
human beings who have applied enhancement technologies to themselves or their descendants
(Bostrom, 2005). While it is arguable that our species has been undergoing the process of
transhumanism ever since we first began to use sticks and stones as rudimentary tools, some fear
that the current accelerating pace of technological discovery may instigate a quantum leap in
human development, after which the transhumans of tomorrow may bear only some resemblance
to the humans of today.
These concerns are dramatic, but dozens of subtler changes have already transformed
how people live and work. The Internet has altered how we express ourselves and connect with
others, and its capacities for data collection, surveillance, and communication continue to exert
an increasing influence on our behavior (Ward, 2013). It is estimated that smartphones reached 2
billion units by the end of 2014 (Angarita, 2015), providing users with access to nearly the sum
total of human knowledge. The day may not be far off when such power will be granted by
nanotechnology implanted directly into the brain (Lynch, 2009). Already brain-computer
interfaces exist that afford people direct mental control over computer programs (Wolpaw et al.,
2002) and robotic limbs (McFarland & Wolpaw, 2008). As these and other technologies cross
the boundary from speculation to reality, they invite scholars to grapple with an evolving set of
ethical implications (Nordmann & Rip, 2009).
Attitudes Toward Transhuman Technologies
The concept of using technology to become transhuman raises profound moral questions,
as the integration of humanity with technology forces us to reconsider our answers to the
question, “What does it mean to be human?” Although transhuman technologies have the
potential to radically improve our way of life, they are equally capable of destruction. Hence the
transhumanism movement involves an inherent moral polarization between those who take a
positive view of our technological future and others who, more skeptical of this prospect, warn
us not to invest in what may ultimately be a danger to humanity (Birnbacher, 2009). Pointing out
that transhuman technologies may reinforce existing power imbalances, some have called them a
“threat to the idea of equality” (Fukuyama, 2004). However, while scientists and philosophers
debate the ethical ramifications of a distant future, do-it-yourself “biohackers,” impatient with
the slow pace of society’s legitimating institutions, are already experimenting with genetic
engineering in small laboratories outside official auspices and performing kitchen-sink surgeries
to augment their bodies (e.g., Alper, 2009). For example, some biohackers embed rare-earth
magnets in their fingertips, which not only enable them to sense magnetic fields but could
theoretically be used to operate a novel tactile human-machine interface (Hameed, 2010).
Technology’s transhumanist capabilities often simultaneously inspire a sense of wonder
and a certain degree of existential dread. Some have suggested that there is wisdom in abhorring
technologies that arouse our repugnance (Kass, 1997). The ubiquitous fear that the tools of our
rapid rise to prominence on Earth will also be the instruments of our downfall indicates the
centrality of moral concerns to questions of transhuman advancement. The mere fact that we can
do something does not entail that we should. Navigating ethical quandaries surrounding the
integration of technology with human identity will comprise one of the great challenges of the 21st century (Greenfield, 2008).
Transhuman technologies will eventually push their way to the forefront of public debate.
However, if psychologists wait until then to understand the general population’s attitude toward
these dramatic technological advancements, they will have waited too long. So far, the voices of
experts have sounded much louder in the debate over human enhancement than have surveys of
public opinion (Dijkstra & Schuijff, 2015). Such surveys exist only with respect to narrow
domains of specific technologies: for example, prescription cognitive enhancers (Banjo et al.,
2010) and life extension (Partridge et al., 2009). The results show that while many people
support these technologies, fewer would actually use them. Those surveyed about life extension
(Partridge et al., 2011) point to a variety of moral issues regarding such technologies, most
commonly that they are “unnatural,” but also including concerns about welfare (e.g.,
overpopulation) and fairness (e.g., unequal access to technologies).
The imprint of morality is pervasive in human meaning-making (Janoff-Bulman, 2013);
hence arguments over what it means to be human are likely driven by distinct moral intuitions.
The pro- and anti-transhumanism factions each bring to the table their own ideals about human
perfection and dignity, rooted in different moral values (Roduit et al., 2013). The relationship
between human and posthuman dignity can be perceived in many ways (Jotterand, 2010);
perhaps the former naturally leads to the latter, but the latter could just as easily be seen as
erasing and supplanting the former. There is evidence that people are neuroessentialists (“I am
my own brain”) and that cognitive enhancement threatens people’s psychological essentialism
(Reiner, 2013). If many people are uncomfortable modifying their neural architecture by
occasionally popping a pill, how will they react to the possibility of permanently inserting
technology into the brain? We have yet to uncover the full psychological impact of transhuman
technologies, and such a project is complicated by the fact that reactions to transhumanism
appear to lack cohesion and consistency.
Some scholars of transhumanism, such as Bostrom (2009), eagerly await the integration
of humanity with advanced technology, pointing to its capacity to enhance human flourishing by
letting us take control of our future (Hopkins, 2008). Other ethicists (e.g., Sorgner, 2009) have
explicitly connected the aims of transhumanism to the Nietzschean vision of humankind
eventually overcoming its own limitations and frailties. Although proponents of transhumanism
acknowledge that it increases the risk of human self-annihilation, transhuman progress may
nonetheless offer the safest overall route into the future compared to various policies of
technological relinquishment; transhumanism could be the “safest unsafe” option for
ameliorating human suffering (Verdoux, 2009, p. 60). One further, optimistic possibility is that
transhuman technologies will in fact rescue civilization through the enhancement of moral virtue
(Persson & Savulescu, 2010).
Other scholars have questioned the utopian claims of transhumanists, suggesting that
their promises to increase happiness and variety in the human experience overlook our affective
system’s need for meaningful interaction with the environment (Bergsma, 2000). Still others
suggest that promulgating transhuman technologies is a slippery slope leading to destructive
practices such as eugenics (McNamee & Edwards, 2006; Koch, 2010). Francis Fukuyama (2002,
2004) declared transhumanism the world’s most dangerous idea, arguing that it threatens our
common metaphysical essence and thereby undermines the inherent value that makes each
human life deserving of equal rights. Some have declared transhumanism to be a kind of
theology, one that transcends the human condition by transgressing against human nature
(Bishop, 2010). However, others have argued that transhumanism should not be understood as a
religion, though its methods and ideals may be incorporated into religious aims (Hopkins, 2005).
In particular, the work of the Jesuit philosopher Teilhard de Chardin (1973) anticipates how
technology might facilitate the amplification of human intelligence into a transcendent cosmic
super-intelligence (Steinhart, 2008).
Hence the debate over the morality of transhuman technologies is marked by a peculiar
lack of clarity. It is not certain whether these experts are looking at different technologies,
different aspects of the same technologies, or if they are appealing to different moral concerns.
This uncertainty points to a need for a broader look at people’s attitudes toward technology and
their basis in moral psychology. In particular, a psychological theory of moral pluralism may
help us understand people’s strong and disparate reactions to this novel but steadily emerging
topic. The present work represents an initial piece of a larger program aimed at comprehending
the psychosocial effects of transhuman technologies through investigation of the interplay
between technology, humanity, and morality.
Examining moral reactions to this novel topic calls for a pluralist approach to moral judgment, in
order to understand the multiple moral values and concerns that may come into conflict over human-
altering technologies. In the current studies we rely on one such pluralist approach, Moral Foundations
Theory (MFT; Graham et al., 2013; Graham & Haidt, 2010; Haidt & Joseph, 2004). MFT has sought to
expand the range of moral judgments and concerns under scientific scrutiny, combining evolutionary
theories with the anthropological work of Richard Shweder (Shweder et al., 1997) and concentrating on
five related but distinct moral foundations upon which cultures build specific virtues and vices:
Care/harm, Fairness/cheating, Loyalty/betrayal, Authority/subversion, and Purity/degradation. MFT’s
constructs and measures have been used to: investigate different moral concerns across ideological groups
(Graham, Haidt, & Nosek, 2009; Iyer et al., 2012), gender and cultures (Graham et al., 2011);
conceptualize and measure moral stereotypes different groups have about each other and themselves
(Graham, Nosek, & Haidt, 2012); predict social attitudes (Koleva et al., 2012) and attachment styles
(Koleva et al., in press); predict and causally influence social distancing (Dehghani et al., in press); and
frame persuasive appeals to different groups (Day & Fiske, in press; Feinberg & Willer, 2013; 2015).
Anecdotal comparisons of the rhetoric in pro- and anti-transhumanist arguments suggest different moral
concerns underlying support and resistance to these technologies. For instance, supportive arguments
commonly highlight the benefits of reduced suffering, while critical arguments often invoke concerns of
contamination and violation of sacred existential boundaries (e.g., between human and divine).
OVERVIEW OF THE PRESENT RESEARCH
In Study 1, we used an exploratory approach to better understand what makes people
accept or reject transhuman technologies. We were interested in using the five moral foundations
to predict two things: (1) attitudes toward transhumanism and (2) the types of arguments people
come up with both for and against transhumanism. We measured the relationship between
general attitudes about transhumanism and endorsements of the five moral foundations.
Additionally, we asked participants to give arguments in favor of and against transhumanism, to
paint a more detailed picture of the concerns and tensions evoked by human-altering technology.
We were particularly interested in seeing if people deployed different kinds of reasons when
making arguments in favor of versus against transhumanism.
The purpose of Study 2 was to shift personal opinions on this novel topic. To accomplish
this, we tested whether framing technology as either restorative (alleviating suffering or
disability) or augmentative (improving ability beyond normal function) led to predictable
differences in moral attitudes toward technology. We also investigated the notion that the
perceived immorality of transhumanism might be related to concerns about impurity and
degradation, regardless of framing. Additionally, we were curious whether concerns about harm
and suffering might yield perceptions of transhumanism’s immorality only under augmentative
framing conditions.
In Study 3, we asked participants to judge the immorality of a specific transhuman
technology, transcranial direct current stimulation (tDCS). This allowed us to go beyond Study 2
in two ways. First, we were able to examine attitudes about a specific technology, rather than a
technological movement. Second, by focusing on tDCS, we were able to clearly contrast four
distinct framings of the technology as defined by two dimensions: (1) function (i.e., restorative
versus augmentative) and (2) form (i.e., wearable versus embeddable). The inclusion of a form-
based frame enabled us to home in on purity concerns. In addition to the immorality of tDCS, we
measured judgments of its fairness in various contexts, whether it was seen as appropriate for use
in different institutions, and how much it was perceived to disrupt the identity of its users.
STUDY 1: MORAL FOUNDATIONS AND ATTITUDES ABOUT TRANSHUMANISM
In Study 1, we investigated attitudes toward transhumanism and their relationship to
different moral concerns. We anticipated that people would feel some degree of ambivalence
about transhumanism, perceiving it to have both positive and negative qualities consonant with
the values of distinct moral intuitions. For this reason, we treated both positive and negative
attitudes as separate constructs, and we asked participants to generate arguments both for and
against transhumanism. Because concerns about affronts to nature and human dignity
predominate among those opposed to technologies that alter human nature (Kass, 1997;
Partridge, 2011) we hypothesized that resistance to transhumanism would be positively related to
the strength of individuals’ Purity intuitions.
Participants. 128 undergraduate students (40% female, mean age = 20.39) attending a
large private West Coast university completed an online, multiple-measure survey designed to
measure their attitudes toward transhumanism. They received classroom credit in return for their
participation. The final composition of the sample by race or ethnicity was 46.1% Caucasian,
36.7% Asian or Asian American, 8.6% Hispanic, 3.1% African American, and 5.5% Other.
Procedure. After reading a passage describing transhumanism (based on real-world
descriptions in books (e.g., More, 2013) and popular press outlets such as Psychology Today (Istvan,
2014), Forbes (Hicks, 2014), and PBS (Michels, 2014)), participants completed measures of (1)
positive, negative, and ambivalent views toward transhumanism; (2) arguments in favor of and
against transhumanism; (3) feelings of meaning and purpose regarding transhumanism; and (4)
the Moral Foundations Questionnaire (MFQ; Graham et al., 2011). The passage read as follows:
Following is an excerpt from an article about transhumanism. Please read the passage carefully and
then answer the questions that follow.
What is transhumanism?
Transhumanism is a cultural and intellectual movement that believes we can, and should, improve the
human condition through the use of advanced technologies. One of the core concepts in transhumanist
thinking is life extension: Through genetic engineering, nanotech, cloning, and other emerging
technologies, eternal life may become possible. Likewise, transhumanists are interested in the ever-
increasing number of technologies that can boost physical, intellectual, and psychological capabilities
beyond what humans are naturally capable of (thus the term transhuman). Transcranial direct current
stimulation (tDCS), for example, which speeds up reaction times and learning speed by running a very
weak electric current through the brain, has already been used by the US military to train snipers. On the
more extreme side, transhumanism deals with the concepts of mind uploading (to a computer), and what
happens when we finally craft a computer with greater-than-human intelligence (the technological
singularity).
Although it may come across as extreme, the passage above was shaped to reflect actual
descriptions of transhumanism and related technologies as closely as possible. Moreover, we
wanted to ensure that participants experienced an emotional reaction to this novel topic.
Attitudes. Attitudes toward transhumanism were measured on 5-point Likert scales. The
items consisted of the following three questions: “How positive is your view of transhumanism?
(aside from any negative feelings about it),” “How negative is your view of transhumanism?
(aside from any positive feelings about it),” and “To what degree do you feel your reactions to
transhumanism are mixed?” Given that ratings for ambivalence were higher than ratings for
positivity or negativity (see results, below), we kept all scores separate rather than combining
positive and negative ratings.
Arguments. Participants were next asked to provide arguments in favor of and against
transhumanism. Specifically, they were instructed: “Please provide all the arguments you can in
favor of transhumanism. List as many as you can think of,” and “Please provide all the
arguments you can against transhumanism. List as many as you can think of.” Enough space to
write a paragraph was provided after each item. Arguments in favor of and against
transhumanism were coded by three independent coders (blind to hypotheses and study design)
for implicit or explicit reference to moral foundations drawn from Graham et al. (2013): Care,
Fairness, Loyalty, Authority, and Purity, as well as candidate foundation Autonomy (Iyer et al.,
2012). Arguments that did not clearly reference any of the moral foundations were coded as
evoking nonmoral reasons or non-specific moral reasons.
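To make the coding scheme concrete, the percentages reported in the Results below are simply the proportion of participants whose arguments received each code. A minimal Python sketch, assuming the coders' judgments have been reduced to one row per participant with 0/1 indicators per foundation (all file and column names are hypothetical, not part of the original materials):

    import pandas as pd

    codes_for = pd.read_csv("arguments_for_coded.csv")          # hypothetical coded data
    codes_against = pd.read_csv("arguments_against_coded.csv")  # hypothetical coded data

    foundations = ["care", "fairness", "loyalty", "authority", "purity", "autonomy"]
    pct_for = (100 * codes_for[foundations].mean()).round(1)          # % citing each, in favor
    pct_against = (100 * codes_against[foundations].mean()).round(1)  # % citing each, against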
Meaning. Participants filled out the Meaning in Life Questionnaire, which assesses the
presence of and search for meaning and purpose in life (Steger, Frazier, Oishi, & Kaler, 2006).
However, these measures are not relevant to the current work, and the results are not included in
this report.
Moral Foundations. Participants filled out the MFQ (Graham et al., 2011), a 32-item
scale (including two filler items) that measures endorsement of five moral foundations: Care,
Fairness, Loyalty, Authority, and Purity.
Results
Attitudes toward transhumanism are ambivalent. Participants’ views were more positive
(M = 2.90, S.D. = 0.95) than negative (M = 2.37, S.D. = 1.00; d = 0.33, t(127) = 3.79, p < .001), but
more ambivalent (M = 3.29, S.D. = 1.00) than positive (d = 0.27, t(127) = 3.06, p = .003).
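For concreteness, the comparison above amounts to two paired t-tests plus effect sizes. The following sketch illustrates the computation; the file and column names (study1_attitudes.csv, positive, negative, ambivalent) are hypothetical stand-ins, not part of the original materials.

    import pandas as pd
    from scipy import stats

    df = pd.read_csv("study1_attitudes.csv")  # hypothetical file of 1-5 ratings

    # Paired t-tests mirroring the two comparisons reported above
    t_pn, p_pn = stats.ttest_rel(df["positive"], df["negative"])
    t_ap, p_ap = stats.ttest_rel(df["ambivalent"], df["positive"])

    def cohens_d(x, y):
        # Standardized mean difference using the average of the two variances
        pooled_sd = ((x.var(ddof=1) + y.var(ddof=1)) / 2) ** 0.5
        return (x.mean() - y.mean()) / pooled_sd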
Arguments in favor of transhumanism are dominated by concerns about Care. When
making arguments in favor of transhumanism, 77% of participants referenced Care (“less
disease,” “avoid suffering,” “save lives”), and 27% generated nonmoral reasons (“increase
productivity”). Autonomy (“more control of life”) was a distant third at 8%, followed by
Fairness (“more equality”) at about 4%.
Arguments against transhumanism evince concerns for both Care and Purity. When
making arguments against transhumanism, 40% referenced Care (“harmful”), 35% Purity
(“unnatural”), 33% Autonomy (“humans being controlled by computers”), 16% Fairness
(“creates inequality”), 15% non-moral (“very costly”), and 12% of participants offered non-
specific moral reasons (“it is unethical”).
These results suggest that individuals are particularly drawn to transhumanism’s promise
to reduce human suffering and enhance welfare. However, when considering its downsides, they
think not only of its potential for harm but of its unnatural qualities and capacity to impair self-
determination or to aggravate existing injustices. It is also worth noting that the large majority of
arguments on both sides were moral arguments; comparatively few participants offered purely
non-moral reasons (e.g., “it’s too expensive”).
Negative attitudes toward transhumanism are driven by concerns about Purity. Of the
moral foundations, only Purity was a significant predictor of attitudes about transhumanism,
specifically predicting negative attitudes (r = .30, p < .001). Political conservatism (as measured
by a 7-point Likert scale from Very Liberal to Very Conservative) was also significantly related
to negativity (r = .26, p = .003) about transhumanism, as was religious affiliation (i.e., any
answer other than “atheist/agnostic”; r = .20, p = .029).
Multiple regression controlling for the effects of political conservatism and religious
affiliation revealed that Purity was a unique predictor (p = .015) of negative attitudes about
transhumanism. Conservatism remained a marginal predictor (p = .053) in this model.
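A minimal sketch of this regression, using statsmodels' formula interface with hypothetical column names (neg_attitude, mfq_purity, conservatism, and a 0/1 religious indicator), might look as follows; none of these names come from the original materials.

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("study1.csv")  # hypothetical file

    # Negative attitudes regressed on Purity, controlling for political
    # conservatism and religious affiliation (0 = atheist/agnostic, 1 = affiliated)
    model = smf.ols("neg_attitude ~ mfq_purity + conservatism + religious", data=df).fit()
    print(model.summary())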
Discussion
Regression analysis yielded Purity as the primary driver of negative attitudes about
transhumanism, whereas the open-ended “arguments” measure demonstrated that concerns about
Care are predominant among reasons marshaled both in favor of and against transhumanism.
[Figure: Percentage of participants using each type of MFT argument (Care, Fairness, Autonomy, Purity) when arguing in favor of versus against transhumanism.]
These preliminary results suggest that intuitions about care and purity play a large role in
attitudes toward transhumanism and offer implications regarding how people may respond to
continued technological integration with human life. Given that intuitions about care and harm
offer both support for (“less disease”) and opposition to (“harmful”) transhuman technologies, it
is necessary to parse how transhumanism resonates across different contexts. In certain contexts,
transhumanism may resonate as caring and compassionate, while in other contexts it may come
across as harmful and dangerous. Concordantly, we theorize that people will be most supportive
of technology when it serves the function of restoration, or is used to help individuals with
limitations to catch up with the rest of society. In contrast, there will be less support for
augmentation, or the use of technology to advance performance beyond current human
limitations. We test these hypotheses in Study 2.
STUDY 2: RESTORATIVE VS. AUGMENTATIVE TRANSHUMANISM
The results of Study 1, combined with the ambivalence of attitudes surrounding the
transhumanism movement, suggest that different ways of framing new technologies may lead to
different responses. The arguments advanced in favor of and against transhumanism by
participants in Study 1 offer important clues. When arguing in favor of transhumanism,
participants most frequently framed their reasoning in a restorative context, pointing to the
potential for technology to reduce human suffering. When arguing against transhumanism,
participants appeared to be thinking about technology as a tool for augmenting human
capabilities, highlighting the capacity of technology to magnify cruelty, facilitate oppression, and
exceed the essential limits of human nature. These patterns suggest that framing technology as a
tool for restoring people to good health or a baseline level of physical or mental functioning may
be viewed positively, whereas framing technology as a tool for augmenting human capacities
beyond what they could otherwise achieve may be eschewed as morally wrong.
Accordingly, in the present study, we framed transhumanism as either restorative or
augmentative. Additionally, we moved beyond general affective attitudes toward transhumanism
to focus specifically on participants’ moral attitudes. Because the values of the care/harm
foundation appear to be salient in people’s arguments both for and against transhumanism,
whereas the values of the purity/degradation foundation are particularly relevant to people’s
negative attitudes about transhumanism, we expected these intuitions to respond differently to
restorative or augmentative frames. In particular, we hypothesized that (1) augmentative framing
of transhumanism would lead to increased perceptions of immorality, (2) Care intuitions would
be associated with increased perceptions of immorality only for augmentative transhumanism,
and (3) Purity intuitions would be associated with increased perceptions of immorality,
regardless of framing.
Participants. 156 undergraduates (63% female, mean age = 21.38) attending a large
private West Coast university completed an online survey designed to measure moral attitudes
toward two different framings of transhumanism. They received classroom credit in return for
their participation. The final composition of the sample by race or ethnicity was 30.1%
Caucasian, 54.9% Asian or Asian American, 5.9% Hispanic, 3.3% African American, and 5.9%
Other.
Procedure. Participants were randomly sorted into either the restorative condition (N =
81) or the augmentative condition (N = 75). After reading a passage describing transhumanism as
either restorative or augmentative, participants completed measures of (1) their perception of the
immorality of transhumanism and (2) the MFQ.
Augmentative Condition
Transhumanism is a cultural and intellectual movement that believes in the use of advanced technologies to
enhance human capabilities, both physical and cognitive, beyond their current limitations. From
prosthetic devices to digital implants, Transhumanism holds that technology can and should enable human
beings to become stronger, faster, more resistant to disease, and even more intelligent.
One of the core values of the movement is Life Extension. Through genetic engineering, nanotechnologies,
cloning, and other emerging scientific advances, much longer lifespans may become possible.
Transhumanists believe that we should pursue all means available to us to make that a reality. Likewise,
Transhumanists are interested in the ever-increasing number of technologies that can boost physical,
intellectual, and psychological capabilities beyond what humans are naturally capable of. Transcranial
direct current stimulation (tDCS), for example, speeds up learning and reaction times by running a very
weak electric current through the brain, and can thus be used to enhance mental performance.
In short, Transhumanism promises to move the human race to new levels of achievement in all areas of
life.
Restorative Condition
Transhumanism is a cultural and intellectual movement that believes in the use of advanced technologies to
restore the physical and cognitive capabilities of people struggling with various limitations. From
prosthetic devices to digital implants, Transhumanism holds that technology can and should enable human
beings to recover functions that have been reduced or lost due to bodily damage, illnesses and defects.
One of the core values of the movement is Life Extension. Through genetic engineering, nanotechnologies,
cloning, and other emerging scientific advances, a full lifespan may become possible for a higher
percentage of people. Transhumanists believe that we can and should pursue all means available to us to
make that a reality. Likewise, Transhumanists are interested in the ever-increasing number of technologies
that can boost physical, intellectual, and psychological capabilities in order to help individuals with
disabilities and various forms of brain damage. Transcranial direct current stimulation (tDCS), for
example, speeds up learning and reaction times by running a very weak electric current through the brain,
and can thus be used to enhance mental performance by individuals suffering from a brain injury or
learning disability.
In short, Transhumanism promises to help more members of the human race fulfill their natural potential
in all areas of life.
Moral Foundations. As in Study 1, the Moral Foundations were assessed using the
Moral Foundations Questionnaire.
Perceived immorality of transhumanism. Participants rated their agreement with the
statement “I think transhumanism is an immoral movement” on a 7-point Likert scale.¹
¹ Participants further indicated their agreement with the statement “I think transhumanism is a moral movement.” Because Study 1 showed effects primarily on negative attitudes, and because of the ambiguity of the term “moral movement” (i.e., does it mean morally positive or simply morally salient?), we did not include this item as a dependent variable. However, the results of this study hold when controlling for this item as a covariate.
Results
Augmentative transhumanism is perceived to be more immoral than restorative
transhumanism. As predicted, our manipulation influenced participants’ likelihood of rejecting
transhumanism as morally wrong. The measure yielded a mean of 3.06 in the restorative
condition and 3.92 in the augmentative condition. ANOVA testing showed that this difference
was significant (η² = .092, F(1, 154) = 15.69, p < .001).
Perceived immorality of transhumanism is predicted by endorsement of Purity, Loyalty,
and Authority, as well as by Care when moderated by augmentation. The perceived immorality
of transhumanism across both conditions was predicted by MFQ-Purity (r = .27, p < .001),
MFQ-Loyalty (r = .24, p = .002), and MFQ-Authority (r = .22, p = .007). A two-step regression
of MFQ-Care with the framing manipulation yielded a significant interactive effect on perceived
immorality (p = .045). MFQ-Care predicted views of immorality in the augmentative condition
(r = .23, p = .048). MFQ-Care showed no significant relationship to perceived immorality in the
restorative condition (r = –.10, p = .394), but the sign flip comports with expectations regarding
the appeal of restorative technologies.
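The two-step (moderated) regression described above can be sketched as follows, assuming a 0/1 indicator for the augmentative condition and a mean-centered MFQ-Care score; all file and variable names are hypothetical.

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("study2.csv")  # hypothetical file
    df["care_c"] = df["mfq_care"] - df["mfq_care"].mean()  # center the predictor

    step1 = smf.ols("immoral ~ care_c + augmentative", data=df).fit()
    step2 = smf.ols("immoral ~ care_c * augmentative", data=df).fit()  # adds Care x framing
    print(step2.compare_f_test(step1))  # F-test of the interaction's incremental contribution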
Discussion
Although participants did not regard transhumanism as especially immoral, the results
reveal that the framing of the movement as either restorative or augmentative substantially
influences moral views by tapping distinct moral intuitions within the care/harm foundation.
Specifically, restorative technologies activate intuitions about care and compassion, leading to
positive moral judgments, while augmentative technologies activate intuitions about harm and
danger, leading to negative moral judgments.
Concerns for the purity and sanctity of human nature impel the greatest resistance to
transhumanism, followed by concerns about group cohesiveness and traditional hierarchical
relationships. These moral endorsements showed no interaction with the framing conditions,
suggesting that people who endorse such values may be deontologically opposed to human
alteration in and of itself, irrespective of its beneficial or detrimental results. On the other hand,
concerns about care and harm predicted resistance to transhumanism only when the movement
was framed as promoting human augmentation, suggesting that people who endorse such values
judge technology by reasoning teleologically about its possible consequences. Augmentative
technologies are likely seen to have greater potential for misuse, abuse, and injury.
STUDY 3: TRANSCRANIAL DIRECT CURRENT STIMULATION
Taken together, the first two studies illustrated that care and purity are important
predictors of people’s moral attitudes about new technologies. In Study 3, we sought to extend
the previous findings by manipulating factors that have implications for both foundations. In
order to do so, we chose to focus on a single transhuman technology that is both powerful and
already being used: transcranial direct current stimulation, or tDCS. By running a weak electric
current through the brain, tDCS can enhance learning, attention, and memory (Coffman et al.,
2014). This technology can be used restoratively (e.g., to ameliorate senescence) or
augmentatively (e.g., to improve concentration and work productivity). Additionally, while tDCS
involves noninvasive wearable hardware, it is conceivable that a tDCS device could be
miniaturized and biologically embedded. These possibilities enabled us to investigate the moral
psychology of transhuman technologies that vary on two dimensions: function (restorative vs.
augmentative; relevant to the care foundation) and form (wearable vs. embeddable; relevant to
the purity foundation).
Given our theory and previous findings, we hypothesized that (1) augmentative tDCS
would be perceived as more immoral than restorative tDCS, (2) embeddable tDCS would be
perceived as more immoral than wearable tDCS, and (3) the augmentation and embedment
conditions would interact to increase perceptions of the immorality of the tDCS device.
Additionally, we wanted to explore one possible mechanism underlying people’s purity-based
resistance to technologies that alter human nature: discontinuity of personal identity. We
hypothesized (4) that the more people perceived tDCS as disrupting the continuity of an
individual’s identity, the more they would judge tDCS to be immoral.
Because tDCS is currently available for retail purchase and has been marketed as a
gaming enhancer (MacDonald, 2014), we contemplated the possibility that it would have important
fairness implications in various contexts such as employment, education, casual competition, and
professional competition. In particular, we hypothesized (5) that the function of tDCS
(augmentative or restorative) would influence whether the technology was seen as fair or unfair,
while form of tDCS (wearable or embeddable) would have no impact on perceptions of fairness.
Participants. 199 participants (35.7% female, mean age = 33.4) recruited from
Amazon’s Mechanical Turk worker pool completed an online, multiple-measure survey designed
to measure moral attitudes toward transcranial direct current stimulation (tDCS). They received
compensation of $0.50 in return for their participation. The final composition of the sample by
race or ethnicity was 74.9% Caucasian, 8.5% Asian or Asian American, 10.1% Hispanic, 5%
African American, and 1.5% Other.
Procedure. Participants were randomly sorted into one of four conditions in a 2x2
design. Participants read about a company developing a tDCS device that varied in function (i.e.,
either restorative or augmentative) and form (i.e., either wearable as an electronic headband or
embeddable as microchips in the brain):
Transcranial direct current stimulation (tDCS) is a process by which a weak electric current is run
through the brain in order to [enhance cognitive abilities beyond those of individuals with
normal brain function/restore cognitive abilities of individuals with impaired brain
function].
A company is developing a tDCS device consisting of [an electronic headband that, when worn
on the forehead/microchips and wires that, when implanted in the brain], can
[improve/recover] the reaction times, learning capabilities, and intelligence of people [who want
to boost their mental performance/suffering mental deficits due to injury or disability].
After reading about the tDCS device, participants completed measures of (1) their general
moral views about the device, (2) the fairness of the device in various contexts, (3) their personal
willingness to use the device if given the opportunity, (4) the appropriateness of the device for
use by various institutions, (5) the extent to which use of the device causes discontinuity of
identity, and (6) the MFQ30.
Immorality of tDCS. Participants rated their agreement with the statement “I think this
device is morally bad” on a 7-point Likert scale.²
² Participants further rated their agreement with the statement “I think this device is morally good” on a 7-point Likert scale. Given the preponderance of ambivalent feelings toward transhuman technologies, and the focus on negativity and immorality highlighted in Studies 1 and 2, we chose to examine “morally bad” as the dependent variable. However, the results of the analyses below hold even when controlling for ratings of “morally good.”
Fairness. On a 7-point Likert scale, participants indicated their agreement with
statements that it would be fair for someone to use the tDCS device (1) at work, (2) in school, (3)
in a professional game for money and career success, and (4) in a casual game among friends and
acquaintances.
Identity discontinuity. In an adaptation of previous research (Strohminger & Nichols,
2014), participants rated how much they thought use of tDCS changes a person’s identity on a
slider array from 0 (“They’re the same person as before”) to 100 (“They’re completely different
now”).
Moral Foundations. As in the previous studies, the Moral Foundations were assessed
using the Moral Foundations Questionnaire.
Results
Augmentative function of tDCS increases perceived immorality. ANOVA testing
revealed that the main effect of the function manipulation was significant, such that augmentative
tDCS was perceived to be more immoral than restorative tDCS (η² = .093, F(1, 197) = 20.26, p
< .001). The interaction of the two manipulations was also significant (p = .003), suggesting that tDCS is
perceived as more immoral when it is both augmentative and embeddable.
Embeddable form of tDCS increases perceived immorality. ANOVA testing revealed
that the main effect of the form manipulation was significant, such that embeddable tDCS was
perceived to be more immoral than wearable tDCS (η² = .023, F(1, 197) = 4.61, p = .033).
Interaction of augmentation and embedment yields greatest perceptions of immorality.
ANOVA testing revealed that the interaction effect of the function and form manipulations was
significant, such that augmentative-embeddable tDCS was perceived to be the most immoral
(η² = .040, F(1, 195) = 9.30, p = .003). The finding of this predicted interaction suggests an
elicitation of combined intuitions about harm and impurity more powerful than either moral
intuition on its own.
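For reference, the full 2x2 analysis can be sketched in a few lines, assuming a long-format DataFrame with one row per participant and categorical factors for function and form (the file and column names are hypothetical):

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df = pd.read_csv("study3.csv")  # hypothetical file

    # Between-subjects ANOVA: both main effects plus the function x form interaction
    model = smf.ols("immoral ~ C(function) * C(form)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))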
[Figure: Perceived immorality of tDCS by form (Wearable vs. Embeddable) and function (Restorative vs. Augmentative). Note. Error bars represent one standard error in either direction from the mean.]
Form of tDCS has no effect on perceptions of fairness. The main effect of embedment
alone was not significant for any of the fairness items. This result was consonant with our
expectation that embedment taps intuitions about purity (hence the increase in overall perceived
immorality demonstrated above) but does not activate intuitions about fairness.
Augmentative tDCS is seen as less fair than restorative tDCS. The main effect of the
function manipulation showed that augmentation significantly decreased perceptions of fairness
in each context: work (r = –.18, p = .010), school (r = –.28, p < .001), casual games (r = –.19, p
= .006), and professional games (r = –.23, p = .001), suggesting that augmentative tDCS may be
seen to create undesirable inequalities if used as a cognitive enhancer by individuals with normal
mental faculties.
Function and form of tDCS both individually and interactively influenced perceptions
of identity discontinuity. ANOVA testing showed that both embedment (p = .025) and
augmentation (p = .048) significantly increased perceptions that tDCS caused identity
discontinuity. Analysis of variance revealed significant differences between all four conditions (p
= .016), and planned contrast testing showed that perceived identity discontinuity was
significantly higher in the Embeddable-Augmentative condition than in the Wearable-
Restorative condition (p = .003), the Wearable-Augmentative condition (p = .018), and the
Embeddable-Restorative condition (p = .028).
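A planned contrast of this kind can be expressed directly on the cell means of the ANOVA model; a brief sketch reusing the hypothetical df from the block above (the condition labels are likewise hypothetical):

    import statsmodels.formula.api as smf

    # Cell-means coding so each of the four conditions has its own coefficient
    cells = smf.ols("discontinuity ~ C(condition) - 1", data=df).fit()
    # Contrast: Embeddable-Augmentative vs. Wearable-Restorative
    print(cells.t_test("C(condition)[emb_aug] - C(condition)[wear_rest] = 0"))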
[Figure: Identity discontinuity from tDCS by form (Wearable vs. Embeddable) and function (Restorative vs. Augmentative). Note. Error bars represent one standard error in either direction from the mean.]
In light of findings by Strohminger and Nichols (2014) that moral characteristics are
considered to be the traits most essential to personal identity, we expected moral views of tDCS
to fully mediate the effects of augmentation and embedment on identity discontinuity. Mediation
testing following Preacher and Hayes (2008) confirmed this expectation:
[Path model: Augmentation → Immorality (a = .305***) → Identity discontinuity (b = .323***); direct effect c′ = .046, n.s. (total effect c = .14*).]
[Path model: Embedment → Immorality (a = .151*) → Identity discontinuity (b = .323***); direct effect c′ = .112, n.s. (total effect c = .159*).]
These results support the inference that, to the extent that technological modification such
as tDCS is considered immoral, it is also seen as altering an individual’s identity.
However, we also found evidence that perceived identity discontinuity partially mediated
the effect of augmentation on perceived immorality (a = .14*; b = .323***; c’ = .265***; c
= .305***) and fully mediated the effect of embedment on perceived immorality (a = .159*; b
= .323***; c’ = .103, n.s.; c = .151*). These results support the inference that, to the extent that
technological modification such as tDCS is seen as altering an individual’s identity, it is also
considered immoral. Overall, this pattern of bidirectional mediation suggests a cyclical model of
morality and identity in which the immorality of technological modification and its propensity to
cause identity discontinuity influence each other in a positive feedback loop.
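As an illustration, indirect effects of this kind can be estimated with a percentile bootstrap in the spirit of Preacher and Hayes (2008). The sketch below tests immorality as a mediator of the augmentation effect on identity discontinuity; all file and variable names are hypothetical, and the same template applies to the reverse model by swapping mediator and outcome.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("study3.csv")  # hypothetical file

    def indirect_effect(data):
        # a-path: condition -> mediator; b-path: mediator -> outcome, controlling for condition
        a = smf.ols("immoral ~ augmentative", data=data).fit().params["augmentative"]
        b = smf.ols("discontinuity ~ immoral + augmentative", data=data).fit().params["immoral"]
        return a * b

    boots = [indirect_effect(df.sample(frac=1.0, replace=True)) for _ in range(5000)]
    ci_low, ci_high = np.percentile(boots, [2.5, 97.5])  # mediation indicated if CI excludes zero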
Perceived immorality of tDCS is predicted by moral foundations endorsements. The
perceived immorality of tDCS was positively associated with MFQ-Purity (r = .24, p < .001),
MFQ-Authority (r = .15, p = .036), and marginally with MFQ-Loyalty (r = .14, p = .058); two-step
regression showed that tDCS immorality was further associated with the interaction of MFQ-
Care and Augmentation (β = 1.171, t(195) = 3.07, p = .002), confirming the results of Study 2.
Additionally, two-step regression revealed that tDCS immorality was positively associated with
the interaction of MFQ-Fairness and Augmentation (β = 1.047, t(195) = 2.65, p = .009), while
MFQ-Fairness alone was negatively correlated with perceived immorality (r = –.17, p = .018),
suggesting that people highly concerned with injustice appreciate the potential for tDCS devices
to level the playing field but worry that augmentative tDCS may exacerbate inequality.
Perceived identity discontinuity due to tDCS is also predicted by moral foundations.
The perceived identity discontinuity caused by tDCS was positively associated with MFQ-Purity
(r = .30, p < .001), -Authority (r = .26, p < .001), and -Loyalty (r = .20, p = .005). For people
who endorse these values, questions of identity are closely linked to concerns about bodily and
spiritual integrity, traditional social structures, and communal relationships, all of which may be
seen as threatened by human-altering technologies such as tDCS.
Perceived immorality of and identity discontinuity due to tDCS are also predicted by
concerns about Autonomy. The perceived immorality of tDCS was significantly predicted by
political libertarianism as measured on a 7-point Likert scale (r = .20, p = .005), a proxy for
concerns about Autonomy (Iyer et al., 2012). Libertarianism also predicted perceptions of
identity discontinuity (r = .21, p = .002). These results may seem at odds with a laissez-faire
libertarian moral philosophy, but they could be attributable to libertarians’ strong sense of
individuality, which they may fear will be compromised by tDCS devices, especially if such
devices are imposed by external authority. Such concerns would align with those raised by
participants in Study 1; when arguing against transhumanism, 33% of respondents referenced
concerns related to autonomy and oppression.
Supplemental Measures and Results
Willingness to use. Participants indicated how likely they would be to use the tDCS
device, on a 1–7 scale from Very Unlikely to Very Likely. The augmentation condition had no
effect on willingness to use the device, whereas the embedment condition significantly decreased
willingness to use the device (r = –.15, p = .04). There was no interaction effect.
Institutional appropriateness. Participants indicated how appropriate it would be (on a
1–7 scale from Very Inappropriate to Very Appropriate) for various institutions to make use of
tDCS: Military, Hospitals, Corporations, Nonprofits, and Universities. Ratings on these items (α
= .89) were averaged to yield a measure of perceived institutional appropriateness.
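The reported reliability follows the standard Cronbach's alpha formula, alpha = (k/(k-1)) * (1 - (sum of item variances)/(variance of the total score)); a small sketch with hypothetical file and item names:

    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        k = items.shape[1]
        return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

    df = pd.read_csv("study3.csv")  # hypothetical file
    cols = ["military", "hospitals", "corporations", "nonprofits", "universities"]
    alpha = cronbach_alpha(df[cols])  # should approximate the .89 reported above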
The embeddable condition marginally decreased institutional appropriateness (r = –.13, p
= .076). This effect was driven primarily by a significant decrease in the appropriateness of tDCS
use by corporations (r = –.17, p = .018).
The augmentative condition also significantly decreased institutional appropriateness (r =
–.17, p = .02). This effect was driven primarily by significant decreases in the appropriateness of
tDCS use by Hospitals (r = –.24, p < .001), Universities (r = –.25, p < .001), and Nonprofits (r =
–.17, p = .014). No relationship was found between the augmentative condition and
appropriateness for the Military (r = –.01, p = .857) or Corporations (r = –.02, p = .783).
GENERAL DISCUSSION
Three studies found that the endorsement of specific moral foundations predicted
psychological reactions to novel technologies associated with the transhumanism movement,
which advocates integrating humanity with advanced technology so that people may transcend
their biological limitations. We discovered that such technologies elicited various, sometimes
conflicting moral intuitions depending on the context in which they were presented. By framing
transhuman technologies in several ways according to their use and form, we were able to
activate some moral foundations more strongly than others, thereby influencing the extremity of
people’s moral judgments about this topic.
Study 1 took an exploratory approach to better understand what makes people accept or
reject transhuman technologies. We found that negative attitudes toward transhumanism were
predicted by endorsement of moral concerns related to purity and degradation, even controlling
for the effects of political conservatism and religious affiliation (which likewise predicted
negative reactions). Additionally, we examined how people employed different kinds of moral
reasoning when arguing in favor of and against transhumanism. In particular, arguments in favor
of transhumanism overwhelmingly drew support from concerns related to caring for human
welfare and reducing human suffering. Conversely, arguments against transhumanism showed a
mixture of foundational moral concerns, prominently including the potential of technology not
only to cause harm but to violate the natural order and to impinge on human autonomy.
Anecdotal comparison of the reasons invoked in favor of and against transhumanism suggested
that people would be more likely to reject transhuman technologies that augment human
capabilities beyond their natural limits, as opposed to technologies that restore users with
disabilities to the ordinary human baseline.
Study 2 deployed the distinction between augmentative and restorative technologies to
shift moral judgments on this novel topic. Overall, framing transhuman technology as
augmentative (improving ability beyond normal function) compared to restorative (alleviating
suffering or disability) yielded more severe judgments that such technology was immoral.
Additionally, we found that people who were predisposed to endorse concerns about care and
harm showed proportionately harsher moral judgments of transhuman technologies only when
they were used for human augmentation rather than restoration. On the other hand, people who
were predisposed to endorse concerns about purity and degradation showed proportionately
harsher moral judgments of transhuman technologies regardless of their proposed use. This result
indicated that such technologies might invoke negative purity intuitions simply by virtue of their
tendency to alter what it means to be human, even if they would serve a compassionate purpose.³
³ One limitation of this study is that we do not know how naturally competent people feel regarding the various tasks implicated by the description of transhumanism. Extending this work, one could find that those who feel naturally unskilled at something might support an assistive technology more than those who are already competent in that domain.
Study 3 asked participants to judge the immorality of a specific transhuman technology,
transcranial direct current stimulation (tDCS), which enhances cognitive abilities by running a
weak electric current through the brain. In addition to manipulating the augmentative-or-
restorative function of tDCS to invoke intuitions about harm and care, we also manipulated the
form of tDCS as wearable on the head or embeddable in the brain to invoke intuitions about
purity and degradation. As expected, both manipulations showed main effects increasing the
perceived immorality of tDCS, and the interaction of function and form demonstrated that
augmentative, embeddable tDCS was judged to be the most immoral of all. We further
investigated the role of concerns related to the purity of human nature by examining how the
functions and forms of tDCS influenced perceptions of identity discontinuity (i.e., the belief that
applying tDCS changes who you are). Mediation analysis revealed a close bidirectional link
between the perceived immorality of technology and the belief that it disrupts personal identity.
Theoretical Contributions
Psychology of technology. These findings have several implications for theory and
research in the nascent field of the psychology of technology. First, we demonstrate that attitudes
about technology are enmeshed with moral intuitions, which are the fundamental building blocks
underlying systems of ethics, law, and culture. Emphasizing the intersection of morality and
technology opens up numerous avenues for further research into how scientific breakthroughs
are perceived, as well as when and by whom radical innovations will be adopted or opposed. In
particular, we invite a deeper investigation of the psychological antecedents and consequences of
ongoing debates in both bioethics and public opinion surrounding the necessity or permissibility
of technologies that alter the course of human nature and social development. Technology has
always shaped our understanding of humanity and its potential; throughout history we have
continually adapted our ways of thinking and acting to keep pace with our technical capabilities.
However, our rate of technological advancement is now accelerating like never before, offering
opportunities to dramatically redefine the very essence of what it means to be human. We stand
on the cusp of momentous change to our physical and social lives, but our ability to navigate the
uncertainties of the technological future will be constrained by the contours of moral mindsets
developed through the far slower processes of biological and cultural evolution.
Second, we introduce social psychology to the study of transhumanism, a movement that
exemplifies contemporary society’s fascination with the entanglement of technology and human
progress. Whether the implications of transhumanism fill you with anxiety or excitement, terror
or awe, there is no denying that the notion of transcending human limitations is one that will
continue to shape discourse on the psychology of technology for many years to come. To date,
scholarship on transhumanism has been largely confined to niche philosophical debates. We
believe that these debates stand to be substantially enriched through contributions from social
psychology. Concomitantly, we suggest that the psychology of technology, as a field, has a great
deal to gain by incorporating transhumanism both as a superordinate category of human-altering
technologies and as a conceptual framework for guiding future research.
Finally, we bring integrative complexity to the psychological study of technology by
elucidating the importance of framing considerations, which highlight particular aspects of
technology and its applications and thereby drive cognitive-affective reactions. Specifically, with
regard to technologies that integrate with and alter human biological characteristics, we identify
two potentially generative framing dimensions: function (which we operationalize as
augmentation versus restoration) and form (which we operationalize as adornment versus
embedment). These two dichotomies are by no means intended as a complete account of either
the categories or intervals along which technology may be framed. Nonetheless, they provide a
useful starting point from which to consider how the contextualization of technology contributes
to the way it is perceived. We further contend that framing considerations interact with people’s
particularized conceptual and motivational architecture to yield a diverse array of cognitive,
emotive, and behavioral responses to any given technology.
Moral psychology. These findings extend work in moral psychology by testing Moral
Foundations Theory in a new domain and showing how the moral foundations predict the
complexity of individual reactions to novel moral conflicts centered on emerging technologies.
Although many may assume that some people are anti-technology while others are pro-
technology, the morality of technology is actually much more nuanced. Technology can be
framed in a variety of ways according to the purpose of its use and the form of its integration
with the human experience, which will activate different types of moral intuitions. This
highlights the need for a pluralistic approach to understanding people’s moral judgments about
advanced technology, as people endorse different moral domains to different degrees. The
framing of technology interacts with the foundations to which people most closely adhere,
producing distinct moral reactions in each unique context.
Our work also applies the theory of moral identity as a mechanism for the moral
judgment of technology that alters humanity. Certain technologies, particularly those that are
biologically embedded and augment human abilities beyond their current limitations, are seen to
change people’s personal identity, yielding judgments that such technologies are immoral.
Moreover, to the extent that technologies are seen ex ante as morally offensive, these
technologies are proportionately considered to alter human identity. These findings reify the
central importance of morality to considerations of identity and vice versa. For many people,
transcending human limitations may be equivalent to transgressing against human dignity.
Practical Implications
This research program offers fresh insight into seemingly intractable conflicts over
technologies that alter or remove natural constraints on biological function, including not only
nascent technologies such as gene editing and brain-machine interfaces but well understood yet
controversial practices like stem-cell research and cognitive enhancement. Moreover, because
most of the technologies that invite these kinds of controversies have not yet been invented, we
intend to increase the predictability and manageability of novel conflicts as they continue to
arise. In some respects, these conflicts may be attributable to differences in foundational moral
endorsements, and in others, they may be the result of differences in the contextual framing of
the underlying technologies. By understanding the psychological mechanisms that influence
moral judgments in the technological domain, we hope to bring a new perspective to debates
over bioethics in both the expert community and the public at large.
This perspective has important implications for the formation of public policies
regulating technological development and for the allocation of research funding from both the
public sector and private concerns. If technology researchers and innovators are not careful in
how they communicate the form and function of their projects, the aims of scientific progress
may be stymied by unanticipated moral disapprobation. On the other hand, the force of
opposition to certain technologies (e.g., human cloning) may at times indicate that there is good
reason to be cautious of advancing too quickly. Our moral intuitions have been carefully selected
over evolutionary history and cultivated by many generations of socialization to help us make
peace not only with nature but with each other. To ignore them completely in the name of
inexorable progress may be to court existential risks the likes of which we have not yet faced.
We further suggest that it is crucial to distinguish between people’s moral judgments of
novel technologies and their likelihood of adopting them. Although participants in Study 3
judged augmentative tDCS to be more immoral than restorative tDCS, they showed no difference
in their willingness to use the technology for one purpose over the other. The practical
advantages of transhuman modification may be difficult to deny, even when acknowledging their
potential to cause harm or undermine equality. The temptation to abuse prescription cognitive
enhancers to outperform competitors at work or school is well documented; absent thoughtful
regulation, the day may not be too far off when tDCS becomes a permissible or even mandatory
means of boosting productive output in the military, in corporations, and in other organizations
that seek to gain an edge in accomplishing their goals.
At the same time, we must highlight the more positive moral judgments induced by a
restorative framing of transhuman technology. Moral intuitions about caring for the vulnerable
and redressing unfair disadvantages resonate favorably with technologies that enable people
suffering from disabilities or injuries to achieve normative levels of physical and cognitive
function. Our work implies that the strongest moral case for transhumanism is one that
emphasizes its capacity to help people reach their full potential. Policymakers and technologists
alike should keep this at the forefront of their minds when considering how to approach the
contentious issue of integrating humanity with technology. In the restorative context, technology
does not inhibit but actually facilitates the goal of respecting human dignity.
Directions for Future Research
Future research could extend the use of moral psychology to understand the ways in
which technology changes our personal selves, our interpersonal relationships, our organizations
and affiliations, and even society writ large. While transhuman technologies remain a useful lens
through which to understand the general morality of technology, the true scope of this field is
very broad. How do people moralize autonomous machines such as self-driving cars? How do
we judge behaviors executed through social media or telepresence devices? What about the
morality of interactions within virtual or augmented realities? Our research shows that the
answers to these kinds of questions will be much more nuanced than mere report of the
percentage of people supporting or opposing any given technology.
CONCLUSION
Whether or not transhumanism has entered mainstream consciousness, the effects of
transhuman technologies can already be felt. The opportunities and challenges presented by these
developments are not easily disentangled from one another. No matter what choices we make as
a civilization, certain moral goods will have to be privileged over others in order to make sense
of our collective relationship to technology. Over time, these issues will grow only more
complex alongside the accelerating proliferation of options for modifying human biology and
sociality. It is impossible to say with certainty where these advancements will ultimately lead. By
refining the tools to understand the psychological impact of such changes, we may gain at least
some insight into recurring patterns of moral responses that they elicit. This work represents one
piece of a larger project: modeling the moral psychology of novel technologies, so that we may
come one step closer to envisioning the technological evolution of humanity.
CHAPTER III
Stimulation Demotivation: How Unfair is Competitive Cognitive Enhancement?
ABSTRACT
From coffee to electronic training, from prescription drugs to brain stimulation devices, technology
increasingly offers methods for individuals to enhance their own cognitive performance. In
competitive contexts, this could mean the ability to succeed above unenhanced colleagues and
rivals. And in the post-COVID era, with distributed work arrangements becoming ever more
commonplace, opportunities for unmonitored cognitive enhancement will only continue to grow.
Across three studies, we examined perceptions of competitive cognitive enhancement strategies.
In Study 1, participants found pharmaceutical enhancement and brain stimulation to be less fair
than studying and coffee, with downstream negative impact on perceptions of users’ moral
character. In Study 2, the comparative unfairness of a competitor using brain stimulation to get
ahead (relative to coffee) led to a decrease in participants’ work motivation. And in Study 3, participants
who believed they were competing against someone using brain stimulation (regardless of whether
it was augmentative or restorative) reported less success in an incentivized task designed to permit
deceptive self-appraisals. Implications and future directions of this topic are discussed.
INTRODUCTION
The field of management is at an inflection point as regards the use of stimulants in the
workforce, with mounting evidence indicating that employees are engaging in nonprescribed use
of pharmacological cognitive enhancement to supplement their performance (Leon, Harms, &
Gilmer, 2019). Although such stimulants are “easily available, there is little stigma attached to
use, and there is little information regarding the long-term safety effects of use…almost no
formal research has been executed in the realm of organizational behavior” (Leon et al., 2019, p.
67). As the COVID-19 pandemic has dramatically accelerated the development of distributed (or
“virtual”) work arrangements, in which employees perform tasks in distracting home
environments under conditions of relatively low supervision, the topic of competitive cognitive
enhancement has never been more timely. Central to this topic is the following ethical question:
is cognitive enhancement a form of cheating?
Ethics of Neuroenhancement
Ethicists of the Welfarist school of thought contend that human enhancement is any
change in the biology or physiology of a person which increases the chances of leading a good
life (Savulescu, 2006). In competitive contexts, an incentive system that rewards (or penalizes)
better (or worse) performance often motivates individuals to seek unfair advantage, so as to
maximize their life outcomes. However, one way to reduce cheating is to make safe
enhancement legal; few would argue that drinking coffee is an unethical strategy. Indeed, in
many circumstances fairness actually necessitates enhancement in order to level the playing
field. For example, as a society we must get as many people as possible up to the minimum level
of cognitive function necessary for an adequate chance of a good life. The question thus
becomes: where do we set the ethical threshold for enhancement?
Aside from fairness concerns, cognitive enhancement technologies implicate a range of
ethical issues including notions of authenticity, the good life, and the role of medicine in our
lives (Bostrom & Sandberg, 2009). While enhancement may promote authenticity by
deemphasizing menial tasks and enabling the pursuit of more meaningful accomplishments, fears
surrounding human enhancement also invoke concerns about meaning, such as apprehension
about making human nature into a technological project (e.g., Kass, 2002). Nevertheless, some
scholars regard such fears as misplaced, on the grounds that society historically appears to adapt
to the sensible application of new knowledge (e.g., Gazzaniga, 2005).
Assuming the existence of safe and effective cognitive enhancers, should students in fact
be positively encouraged toward enhancement for the same reasons they are advised to take
detailed notes and thoroughly review their subject matter (Bostrom & Sandberg, 2009)? Some
experts (e.g., Harris, 2011) contend that widespread use of prescription cognitive enhancers
creates a coercive and corrosive environment, in which the competitive pressure to enhance
performance proves irresistible for individuals hoping to meet the new baseline. Others,
however, argue that prescription cognitive enhancement (PCE) does not raise any ethical issues
apart from the broader domain of brain-state neuroenhancement (Zohny, 2015). As ethicists are
likely motivated to express opinions of neuroenhancement that maintain consistency with their
publicly espoused prescriptive moral philosophies, perhaps it would be more instructive to turn
to descriptive accounts of public attitudes at large.
Public Attitudes Toward Cognitive Enhancement
A 2014 overview of 40 empirical studies found that public concerns regarding PCE
largely mirror those propounded by normative academic discussions, most commonly: medical
safety, coercion, and fairness (subdivided into equality of opportunity, honesty, and authenticity)
(Schelle, Faulmüller, Caviola, & Hewstone, 2014). Certain attitudes (such as coercion) were
consistent across studies, whereas others (such as authenticity) yielded mixed results.
Additionally, different groups (such as users, nonusers, students, parents, and healthcare
providers) perceived PCE in different ways: for example, nonusers showed greater concern for
medical safety and fairness than did users. Moreover, numerous cognitive biases appear to
influence moral judgments about PCE, including status quo bias, loss aversion, risk aversion,
omission bias, scope insensitivity, nature bias, and optimistic bias; one study found more well-
documented biases likely to cause irrational aversion to PCE than the opposite, suggesting that
public attitudes about PCE are mostly negatively biased (Caviola, Mannino, Savulescu, &
Faulmüller, 2014).
One study of various forms of enhancement found that participants were more tolerant of
cognitive enhancers than of drugs that affected athletic performance, and that they felt it was less
unfair to allow the drug if it affected only the bottom 10% of performers than if it affected
everyone; these results indicated a nuanced appreciation for the competitive context and
remediating potential of human enhancement (Sabini & Monterosso, 2005). One explanation for
the favorability of PCE may lie in the perception that cognitive enhancement can be used to
reduce cognitive test anxiety (CTA); a study of German university students in 2010 found that
increased CTA increased the prevalence of PCE use over various time windows (Sattler &
Wiegel, 2013). The same study found that risk-liking students were more willing to accept risks
to achieve potential increases in their cognitive performance.
When predicting moral judgments of the unacceptability of PCE, Faber, Savulescu, and
Douglas (2016) found that unfairness judgments were the only significant predictor, explaining
about 36% of the variance, with neither hollowness nor undeservingness showing explanatory
power above and beyond unfairness, leading the authors to endorse an Unfairness-
Undeservingness model as superior to a Hollowness-Undeservingness model. However,
individuals may hold varying standards for supporting cognitive enhancement depending on who
is using PCE and why: Conrad, Humphries, and Chatterjee (2019) found that participants were
more likely to support the use of PCE by others than by themselves, and especially so when the
use of enhancement by others was framed with a fuel metaphor rather than a steroid metaphor
(metaphoric framing did not influence individuals’ attitudes toward their own use). Additionally,
and crucially for the present work, participants supported the use of cognitive enhancement by
employees more than by students or athletes. This result suggests that the context of cognitive
enhancement may hold differential implications for the moral character of those who choose to
be enhanced.
The public may in fact be biopolitically moderate, cautiously accepting cognitive
enhancement even while recognizing its potential dangers (Fitz, Nadler, Manogaran, Chong, &
Reiner, 2014). Acknowledging the four cardinal concerns elucidated by neuroethicists (safety,
pressure, fairness, and authenticity), individuals appear to endorse both meritocratic principles
and the intrinsic value of hard work, and while the public may find success to be a primary
determinant of worthiness, they also view successful people who work hard without
enhancement as significantly worthier still (Fitz et al., 2014). Hence, while public opinion does
not seem to vehemently condemn all enhancement (so long as the bottom line is successful
performance), people remain sensitive enough to authentic achievement to perceive it as showing
greater merit.
In keeping with their prevailing concern for authenticity, people show a tendency to
dehumanize those who augment their cognitive faculties beyond ordinary levels, but they do not
dehumanize those who use identical products to restore faculties that have been lost—or those
who emphasize a prosocial motivation for enhancement (Castelo, Schmitt, & Sarvary, 2019).
This suggests that cognitive enhancement as a medical treatment for people with mental deficits,
or as a means for neurotypical individuals to help others, may in fact be seen to facilitate
authentic achievement. In a similar vein, physicians report increasing comfort with the
prescription of cognitive enhancers as patients increase in age—however, there remain persistent
concerns regarding the balance between benefit and safety (Banjo, Nadler, & Reiner, 2010).
The beneficial applications of cognitive enhancement, even for healthy adults, range from
improvements to quality of life and work productivity to the prevention of ordinary and
pathological cognitive declines; as a result, some scholars have called for physicians to address
the growing demand for enhancement by shifting from a narrow view of medicine-as-healing to
a broad view of medicine as helping people live better and achieve their goals (as is currently the
case in plastic surgery, dermatology, sports medicine, and fertility medicine) (Greely, Sahakian,
Harris, Kessler, Gazzaniga, Campbell, & Farah, 2008). When aggregated, the benefits of
enhancement may yield positive reforms for entire social systems: for example, in the justice
system, wherein enhancement of memory functions and analytic abilities for lawyers, judges,
and jurors could mean the difference between fair and unfair imprisonment (Sandberg, Sinnott-
Armstrong, & Savulescu, 2011).
Non-Pharmacological Cognitive Enhancement
Cognitive enhancement has long been recognized as an “everyday event,” with common
substances such as sugar and caffeine shown to enhance memory retention (White, 1998).
Moreover, people rarely question or impugn the moral status of cognitively-enhancing external
artifacts such as journals, calculators, textbooks, models, maps, and so on; this is likely because
such artifacts are not perceived to have the same impact on personal identity (Heersmink, 2017).
In part, this distinction may be one of prevalence, with selectively distributed (and expensive)
enhancers seen as morally suspect compared to those that are widely accessible (and affordable).
Indeed, the Internet’s public availability and broad beneficence for societal institutions (e.g.,
academia, journalism, transportation, etc.) are central to its value as an external memory system
(Heersmink, 2016).
In some cases, the empirical data have suggested that non-pharmacological enhancement
approaches including nutrition, exercise, sleep, meditation, mnemonics, computer training, and
brain stimulation may be more effective than prescription pharmaceuticals (Dresler, Sandberg,
Ohla, Bublitz, Trenado, Mroczko-Wąsowicz, Kühn, & Repantis, 2013). Electronic or
videogame-based cognitive training has been the subject of considerable attention, generating
broad evidence of improvement in trained cognitive tasks—however, the evidence that such
training yields significant enhancement on untrained tasks (even within the trained cognitive
domain) is sparse (Jak, Seelye, & Jurick, 2013). In fact, different genres of videogames may
differentially enhance cognitive abilities according to the differences in game mechanics
(Dobrowolski, Hanusz, Sobczyk, Skorko, & Wiatrow, 2015). Research into the cognitive
benefits of both electronic and athletic training has created a third approach, “designed sports
training,” which has shown that the benefits of traditional cognitive training and physical
exercise may be effectively combined in a single activity (Moreau & Conway, 2013). To the
extent that studying and exercise are physiologically noninvasive and socially deployable at
scale, such training programs are unlikely to raise some of the moral qualms typically associated
with prescription cognitive enhancement.
Brain stimulation, on the other hand, may represent the field’s next moral frontier. A
growing body of evidence shows that techniques such as transcranial direct current stimulation
(tDCS) can enhance cognition in both clinical and healthy populations, by running a weak
electric current through specific cortical regions (Pisoni, Mattavelli, Papagno, Rosanova, Casali,
& Romero Lauro, 2018). Noninvasive brain stimulation has been explored as a form of treating
stroke aftereffects, mood and anxiety disorders, and chronic pain, as well as a form of improving
learning, working memory, and other executive functions (Farah, Smith, Ilieva, & Hamilton,
2014). There are also indications that specific applications of tDCS can yield long-lasting
improvements in mathematical thinking (Cohen Kadosh, Soskic, Iuculano, Kanai, & Walsh,
2010) and even dramatically enhance creative problem solving, with one study showing that 10
minutes of noninvasive brain stimulation enabled 40% of participants to solve a problem that no
participants could solve without stimulation or with sham stimulation (Chi & Snyder, 2012).
Neuroethicists have pointed out that tDCS raises unique concerns within the broader
debate over cognitive enhancement (Cohen Kadosh, Levy, O’Shea, Shea, & Savulescu, 2012):
for example, tDCS devices are inexpensive and portable, usable by almost anyone, at any time,
for any function. And while FDA approval is required for specific marketing strategies such as
claims of medical benefit, regulators cannot currently prevent widespread use of tDCS; several
retailers already offer such devices for personal home use. Lastly, because tDCS is external
hardware rather than an ingested pharmaceutical, there may be a (mis)perception that such
devices are less morally problematic, lowering the threshold for premature or otherwise careless
usage. A recent study of several hundred tDCS purchasers (typically wealthy, educated, liberal,
forty-something American males who self-describe as early adopters of technology) found that
almost three-fourths indicated using tDCS for enhancement and one-fourth for restoration, with
40% reporting their usage as a form of treatment (Wexler, 2018).
OVERVIEW OF THE PRESENT RESEARCH
The use and abuse of prescription cognitive enhancers (e.g., Adderall, Modafinil) to
outperform competitors in educational and occupational settings is well known. With the rise of
work-from-home arrangements in the post-COVID era, especially for cognitively taxing
nonphysical labor, the social barriers to cognitive enhancement have never been lower, and the
incentives toward enhancement are on the rise. Accordingly, we expect that now-unfamiliar
cognitive enhancers such as tDCS devices will eventually make their way into familiar
competitive settings. Although retail tDCS devices are currently marketed as mere novelties or
“gaming” enhancers (largely for regulatory reasons), the day may not be far off when some
individuals use tDCS to enhance their concentration and performance in school and at work. In
fact, that day may have already arrived. Before long, tDCS devices could become
commonplace or even normative in organizational contexts, as firms rely on them to boost
employee productivity. In this series of exploratory experimental studies, we ask how
competition with individuals using cognitive enhancers affects people who are reliant solely on
their natural ability and effort.
In Study 1, we examined the extent to which a competitor’s use of four different
enhancement strategies (practice, coffee, pharmaceuticals, and tDCS) influenced participants’
judgments of fairness, with downstream effects on judgments of the competitor’s moral
character. Building on these results, Study 2 investigated how participants—in particular, office
employees—judged the fairness of a work colleague getting ahead by use of coffee versus tDCS,
with downstream effects on participants’ work motivation. Finally, Study 3 showed differential
effects on participants’ cheating behavior (in a monetarily incentivized laboratory task) as a
reaction to various forms of competitive cognitive enhancement, including both augmentative
and restorative brain stimulation.
STUDY 1: COGNITIVE ENHANCEMENT, FAIRNESS, AND MORAL CHARACTER
In this study, 204 undergraduate student participants (43.1% female, mean age = 20.52)
imagined that they had lost out on a job opportunity that was instead given to a fictitious
classmate who used one of four performance-enhancing strategies (practice, coffee, a “smart
pill,” or tDCS). We hypothesized that the non-normative, less available strategies (smart pill and tDCS) would be perceived as less fair than the normative, widely available strategies
(practice and coffee), and that this decrease in perceived fairness would mediate a decrease in the
perceived moral character of the fictitious classmate. We further expected that judgments of the
classmate’s moral character as a result of performance-enhancing strategy choice would directly
influence participants’ reported likelihood of using the same strategy in future job searches.
Participants. Two hundred four undergraduate students from the USC Marshall student subject pool participated in a four-cell study design.
Procedure. Participants filled out an online survey using a computer. First, all
participants read the following vignette:
Imagine that you were selected as one of 25 finalists competing for a job with a top consulting
firm which was conducting interviews here on campus.
The last step in the interview process was a creativity task completed in a closed room. The task
required demonstration of the ability to be highly creative under time constraints.
You did not get the job. You learn, instead, that it went to a classmate named Lauren.
Following this prompt, participants were randomly assigned to one of four conditions
(control, coffee, pill, or tDCS). In each condition, participants were told “This is Lauren:” and
were presented with a photograph. In the tDCS condition, the woman in the photograph was
wearing a tDCS device attached to her forehead. Following the photograph was additional
information:
Control: In order to maximize her performance on the creativity task, Lauren practiced creativity-related tasks ahead of time. Practicing these kinds of tasks has been shown to enhance creativity.

Coffee: In order to maximize her performance on the creativity task, Lauren drank a cup of coffee before the task began. Coffee has been shown to enhance creativity.

Pill: In order to maximize her performance on the creativity task, Lauren took a new kind of “smart pill” before the task began. This smart pill has been shown to enhance creativity.

tDCS: In order to maximize her performance on the creativity task, Lauren used a new transcranial direct current stimulation (tDCS) device during the task. This tDCS device has been shown to enhance creativity.

All conditions: On the next page, you will be asked some questions about this performance-maximizing strategy.
After reading the above materials, participants completed survey measures to indicate
their judgments of (1) the fairness of Lauren’s strategy, (2) Lauren’s moral character, and (3) the
likelihood that they will use the same strategy in future job searches.
Fairness. Participants rated their agreement on a 1–7 Likert scale (“very little” to “very
much”) with four statements: “Lauren completed the creativity task unfairly,” “It was unjust that
Lauren got the job,” “Lauren’s performance-maximizing strategy was fair,” and “Lauren
deserved her success.”
Moral character. Participants rated their agreement on a 1–7 Likert scale (“very little” to
“very much”) with four statements: “Lauren makes immoral decisions,” “Lauren is unethical,”
“Lauren has upstanding moral character,” and “Lauren is a good person.”
Note: Although participants across all conditions were told that the strategy under evaluation had been shown to enhance creativity, one limitation of this work is that it is unknown how much participants believed this assertion in each instance (e.g., do people actually believe that coffee improves creativity?).
Likelihood of use. Participants indicated the probability on a 1–7 Likert scale (“very
low” to “very high”) that they “would use Lauren’s performance-maximizing strategy in future
job searches.”
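Because each four-item scale mixes accusatory and approving items, computing a single composite score presumably requires reverse-scoring before averaging. The following minimal Python sketch illustrates one such scoring scheme; the column names are hypothetical, and the reverse-scoring step is an assumption rather than a procedure reported here.

import pandas as pd

def fairness_composite(responses: pd.DataFrame) -> pd.Series:
    # "completed unfairly" and "unjust" point toward unfairness, so they are
    # reverse-scored (8 - x on a 1-7 scale); higher composite = fairer.
    flipped = 8 - responses[["completed_unfairly", "unjust_got_job"]]
    kept = responses[["strategy_was_fair", "deserved_success"]]
    return pd.concat([flipped, kept], axis=1).mean(axis=1)

# Hypothetical usage: the first respondent sees the strategy as fair
df = pd.DataFrame({
    "completed_unfairly": [2, 6], "unjust_got_job": [1, 5],
    "strategy_was_fair": [6, 2], "deserved_success": [7, 3],
})
print(fairness_composite(df))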
Results
Fairness. Analysis of variance yielded significant differences (F(3, 200) = 82.64, p
< .001) across all conditions considering the fairness of using practice (M = 5.5, SD = 1.26),
coffee (M = 5.68, SD = 1.1), a pill (M = 2.73, SD = 1.22), and tDCS (M = 3.43, SD = 1.06). Post-
hoc Tukey tests for mean differences showed that (1) practice was considered fairer than the pill
(p < .001) and tDCS (p < .001), (2) coffee was considered fairer than the pill (p < .001) and
tDCS (p < .001), and (3) tDCS was considered fairer than the pill (p = .015). No differences in
fairness perceptions were observed between use of practice and use of coffee.
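For concreteness, the omnibus and post-hoc procedure used throughout these results can be sketched in a few lines of Python. The data below are simulated so that cell means and standard deviations loosely echo those reported above; the column names and values are illustrative, not the study's own.

import numpy as np
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
n = 51  # 51 participants per cell, 204 in total
df = pd.DataFrame({
    "condition": np.repeat(["practice", "coffee", "pill", "tdcs"], n),
    "fairness": np.concatenate([
        rng.normal(5.50, 1.26, n),  # cell means/SDs loosely echo Study 1
        rng.normal(5.68, 1.10, n),
        rng.normal(2.73, 1.22, n),
        rng.normal(3.43, 1.06, n),
    ]),
})

# Omnibus test across the four conditions
groups = [g["fairness"].to_numpy() for _, g in df.groupby("condition")]
F, p = f_oneway(*groups)
print(f"F(3, {len(df) - 4}) = {F:.2f}, p = {p:.4f}")

# Pairwise mean differences with familywise error control
print(pairwise_tukeyhsd(df["fairness"], df["condition"]))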
Moral character. Analysis of variance yielded significant differences (F(3, 200) = 42.11,
p < .001) across all conditions considering the moral character of someone using practice (M =
5.04, SD = 1.06), coffee (M = 5.13, SD = 0.82), a pill (M = 3.41, SD = 0.9), and tDCS (M = 4.1,
SD = 0.8). Post-hoc Tukey tests for mean differences showed that (1) practice yielded higher
ratings of moral character compared to the pill (p < .001) and tDCS (p < .001), (2) coffee yielded
higher ratings of moral character compared to the pill (p < .001) and tDCS (p < .001), and (3)
tDCS yielded higher ratings of moral character compared to the pill (p = .001). No differences in
moral character ratings were observed between use of practice and use of coffee.
Likelihood of use. Analysis of variance yielded significant differences (F(3, 200) =
15.11, p < .001) across all conditions considering participants’ self-reported likelihood of use of
practice (M = 5.69, SD = 1.53), coffee (M = 5.1, SD = 1.64), a pill (M = 3.73, SD = 1.86), and
tDCS (M = 3.9, SD = 1.9). Post-hoc Tukey tests for mean differences showed that (1)
participants reported greater likelihood of using practice compared to the pill (p < .001) and
tDCS (p < .001), and (2) participants reported greater likelihood of using coffee compared to the
pill (p < .001) and tDCS (p = .003). Participants reported no difference in likelihood of using (1)
practice compared to coffee, or (2) the pill compared to tDCS.
Mediation by fairness. Preacher and Hayes’s (2008) bootstrap test of mediation was employed
to examine the relationship between fairness (as a mediating variable) and moral character and
likelihood of use (as outcome variables). As predicted, fairness fully mediated the effect on
moral character between common performance-maximizing strategies (practice, coffee) and
uncommon performance-maximizing strategies (pill, tDCS): the bias-corrected confidence
interval was between 0.89 and 1.49. Similarly, fairness fully mediated the effect on likelihood of
use between common strategies (practice, coffee) and uncommon strategies (pill, tDCS): the
bias-corrected confidence interval was between 1.13 and 2.08.
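The bootstrapped indirect effect can likewise be sketched in Python. The sketch below uses simulated data and hypothetical variable names, and it computes a percentile interval for brevity where the analyses reported here used bias-corrected intervals.

import numpy as np
import statsmodels.api as sm

def indirect_effect(x, m, y):
    # a-path: predictor -> mediator; b-path: mediator -> outcome, controlling x
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]
    return a * b

def bootstrap_ci(x, m, y, reps=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    effects = []
    for _ in range(reps):
        i = rng.integers(0, n, n)  # resample cases with replacement
        effects.append(indirect_effect(x[i], m[i], y[i]))
    return np.percentile(effects, [2.5, 97.5])

# Hypothetical data: x = 1 for a common strategy (practice/coffee),
# 0 for an uncommon one (pill/tDCS); m = perceived fairness; y = character
rng = np.random.default_rng(1)
x = rng.integers(0, 2, 204).astype(float)
m = 3.1 + 2.5 * x + rng.normal(0, 1, 204)
y = 1.5 + 0.7 * m + rng.normal(0, 1, 204)
lo, hi = bootstrap_ci(x, m, y)
print(f"95% CI for the indirect effect: [{lo:.2f}, {hi:.2f}]")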
Discussion
The results indicate a moral preference for common performance-maximizing strategies
such as practice and coffee over the use of more novel technologies such as pharmaceuticals and
tDCS, not only in terms of fairness perceptions but in terms of perceptions of users’ moral
character, as well as participants’ willingness to avail themselves of such technologies. Practice
and coffee were seen as morally indistinguishable according to the measurements of this study.
However, while tDCS was considered fairer than the pill and less damaging to perceptions of the user’s moral character, these differences did not translate into a difference in participants’ likelihood of use. It is possible that
differences between tDCS and the pill may be explained by a combination of factors, including
(1) the ingestible nature of the pill (which may be seen as more transformative) compared to the
wearable nature of tDCS, as well as (2) the pill’s familiar antisocial dynamics (which connote a
history of recreational and competitive usage in conflict with legitimate medical practices),
compared to the unknown societal implications of transcranial direct current stimulation.
The result that perceptions of fairness mediated participants’ views of users’ moral
character, as well as participants’ own likelihood of use, suggests that the perceived fairness of
performance-maximizing technologies may play a critical role in the promulgation, acceptance,
and downstream consequences of such technologies’ integration within the workplace.
STUDY 2: COGNITIVE ENHANCEMENT, FAIRNESS, AND WORK MOTIVATION
In the management literature, equity theory (e.g., Adams, 1963, 1965) suggests that if
someone outperforms you unfairly, you will adjust your behavior either by working less or by
lying about your performance. In this study, we wished to test the idea that the perceived
unfairness of using tDCS to get ahead at work leads people to become unmotivated and put forth
less effort. We hypothesized that using tDCS as a workplace productivity strategy, compared to using coffee, would be seen as unfair and would consequently lead non-users to feel less motivation to work.
Participants. 202 participants (37.1% female, mean age = 35.11) from the Amazon
Mechanical Turk worker pool participated in a two-cell study design. Of these participants, 126
(62.4%) responded “Yes” to the question “Do you work in an office (outside your home)?”
Procedure. Participants filled out an online survey using a computer. Participants were
randomly assigned to read one of two versions (“coffee” or “tDCS”) of the following vignette:
Imagine that you work in an office. You have a typical white-collar desk job that involves working
at a computer, speaking with clients on the phone, participating in meetings, and preparing written
reports for your supervisor. You have been working there for just over a year.
You have a coworker on your floor who started working at the company the same week that you
did. You’ve noticed that this coworker comes into work early every day to get a jump-start on the
day and [drink a cup of coffee/wear a transcranial direct current stimulation (tDCS) brain-
stimulating headband] to enhance performance. Throughout the day, this person will
occasionally [have another cup of coffee/wear the brain-stimulation headband] when needing
to concentrate and perform at a higher level.
Today your supervisor has announced to the office that your coworker will receive a promotion
and significant raise in recognition of this person’s high performance.
Participants then filled out measures of perceived fairness and work motivation.
Fairness. Participants were instructed, “Think about the way in which your coworker went about getting the promotion. How does this situation make you feel?” They then indicated their agreement on a 1–7 Likert scale (“strongly disagree” to “strongly agree”) with each of four
statements: “My coworker completed the work unfairly,” “It was unjust that my coworker
received this promotion,” “My coworker’s performance-maximizing strategy was fair,” and “My
coworker deserved the promotion.”
Work motivation. Participants were instructed, “Think about the way in which your coworker went about getting the promotion. How does this situation make you feel?” They then indicated their agreement on a 1–7 Likert scale (“strongly disagree” to “strongly agree”) with
each of four statements: “I would be willing to work harder at my job,” “I would be willing to
work longer hours to get more accomplished for my supervisor,” “I would put in less effort at the
office,” and “I would feel reduced motivation at my work.”
Results
Fairness. Analysis of variance indicated that a coworker’s use of coffee (M = 5.95, SD =
1.17) was considered significantly fairer than the use of tDCS (M = 5.36, SD = 1.51) (F(1, 200) = 9.61, p = .002).
Work motivation. Analysis of variance indicated that a coworker’s use of coffee (M =
5.47, SD = 1.24) caused participants to report significantly greater work motivation compared to
a coworker’s use of tDCS (M = 5.08, SD = 1.22) (F(1, 200) = 5.28, p = .023).
Mediation by fairness. Preacher and Hayes’s (2008) bootstrap test of mediation was employed
to determine the relationship of fairness to work motivation as caused by a coworker’s coffee
versus tDCS. Fairness fully mediated the effect of condition on motivation: the bias-corrected
confidence interval ranged between 0.12 and 0.57.
Moderation by employment. The effects of new technology (i.e., tDCS) on fairness and
motivation were more pronounced among participants who reported working in an office outside
the home. In the case of fairness perceptions, participants’ employment status significantly
moderated the main effect (F(1, 198) = 6.95, p = .009) and in fact appeared to completely drive
the effect of new technology on fairness perceptions.
In the case of work motivation, participants’ employment status appeared to enhance the
main effect of new technology but did not show evidence of significant moderation (F(1, 198) =
1, p = .32).
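This moderation test amounts to a two-by-two ANOVA with an interaction term, as the following minimal Python sketch illustrates. The data are simulated to echo the observed pattern, and the column names are hypothetical.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "condition": rng.choice(["coffee", "tdcs"], 202),
    "office": rng.choice(["yes", "no"], 202),
})
# Simulated pattern: tDCS lowers perceived fairness mainly for office workers
df["fairness"] = (5.9
                  - 0.8 * ((df["condition"] == "tdcs") & (df["office"] == "yes"))
                  + rng.normal(0, 1.3, 202))

model = smf.ols("fairness ~ condition * office", data=df).fit()
print(anova_lm(model, typ=2))  # main effects plus the interaction term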
Discussion
The results of this study indicate that the use of novel technologies (e.g., tDCS) to
enhance performance in the workplace may decrease non-users’ motivation to work, especially
to the extent that use of such technologies is considered unfair. The results of the mediation
analysis suggest that the less fair a performance-enhancing technology is perceived to be, the
greater its negative impact on non-users’ motivation. However, it should be noted that this study
did not address people’s perceptions of the fairness of their own use of enhancement
technologies; the results are limited to participants’ observations of others availing themselves of
performance enhancement.
Within the current sample, participants employed as office workers were particularly sensitive to the unfair nature of such technologies, with concomitantly strong effects on their workplace motivation. Although the motivation of non-office-workers may also be negatively affected by others’ usage of tDCS, such an impact may be less related to concrete views of fairness than to other factors. Speculatively, one additional reason for a negative impact on motivation could be the belief that performance-enhanced work is of greater value to the organization than unenhanced work, with the consequence that unenhanced workers feel less compelled to put forth their best effort.
STUDY 3: CHEATING AS A REACTION TO COMPETITIVE COGNITIVE
ENHANCEMENT
As noted above, equity theory (e.g., Adams, 1963, 1965) predicts that a worker who is unfairly outperformed will respond either by working less or by lying about performance. While Study 2 examined the effect of competitive cognitive
enhancement on work motivation, the present study tested the hypothesis that being outcompeted
by someone who used a tDCS device would lead non-users to lie about their performance in
order to level the playing field. To create an opportunity for deception, participants were
incentivized to self-report their success on a word search task (cf. the “taguan” anagram task in
Wiltermuth, 2011).
Participants. 199 undergraduate students (47.7% female, mean age = 19.88) from the
USC Marshall student subject pool participated in a four-cell study design carried out online.
Procedure. Prior to completing the study, participants were told that the research
concerned the effects of pressure on competition and performance. Participants were instructed:
In this study, you will be asked to perform a 2.5-minute word search task. You will be looking for
words exactly 4 letters long.
A grid of letters will conceal these words, which may be spelled out forward, backward, top-down, bottom-
up, or diagonally. On average, people in our pretests found 3 qualifying words in the time allotted.
After these initial instructions, participants were randomly assigned to one of four conditions:
Control: You have been randomly selected to compete against an individual who will complete the puzzle in our lab.

Expert: You have been randomly selected to compete against a word puzzle expert who will complete the puzzle in our lab. Experts practice puzzles regularly and, in preliminary tests, tended to find, on average, 6 words.

Augmentative: You have been randomly selected to compete against an individual who will complete the puzzle in our lab while wearing a brain stimulation device that elevates cognitive performance and, in preliminary tests, enabled people to find, on average, 6 words. This technology has been shown to be more effective than prescription medications such as Adderall.

Restorative: You have been randomly selected to compete against an individual who has a cognitive disability and will complete the puzzle in our lab while wearing a brain stimulation device that assists cognitive performance and, in preliminary tests, enabled people with disabilities to perform at an average level. This technology has been shown to be more effective than prescription medications such as Adderall.

All conditions: If you win the competition with this person you will receive 10 lottery tickets for a chance to win a $150 Amazon gift card (if the other person wins, the tickets will go to him/her). In addition, each participant will receive one ticket per word found for a chance to win this prize.

On the next page, you will be asked some questions about this task.
Perceived fairness (mediating variable). Participants were next presented with the
following prompt: “Considering the competitive task just described, which you will perform in a
moment, please indicate how much you agree (“very little” to “very much”) with each of the
following statements: (1) “This competitive task is fair,” (2) “It would be unjust for my
competitor to win the task,” (3) “The rules of the competition are fair,” (4) “The winner of the
task deserves his or her success.” These items were rated on a scale from 1–7.
Word search instructions. Participants were then reminded of the upcoming task:
On the next page, you will encounter the competitive word-search task. You have 2.5 minutes to complete
it.
You are searching for words exactly 4 letters long. They may be spelled out forward, backward, top-down,
bottom-up, or diagonally.
Between you and your competitor, whoever finds the most words will receive 10 tickets for a lottery to win
$150. Additionally, you will receive 1 ticket for each word you find.
Please keep track in your head of how many words you find (you will be asked to enter this number after
the task). The survey will automatically skip the word search grid after 2.5 minutes.
Word search task. Participants were presented with a large word-search grid (not reproduced here). After 2.5 minutes, the survey automatically advanced to the reporting page.
Self-reported word-search success, or cheating (dependent variable). Participants
were asked “How many words of exactly 4 letters were you able to find?” and given the
opportunity to answer with a menu ranging from 0 to 10.
Results
Fairness. Analysis of variance indicated significant differences in the perceived fairness of the competitive task across all conditions (F(3, 195) = 3.32, p = .021).
Additionally, when conditions were collapsed into brain-stimulation (restorative plus
augmentative) versus non-brain-stimulation (control plus expert), perceived fairness was
significantly lower when participants were told that they were matched up against a competitor
using a brain stimulation device (M = 4.97, SD = 1.02) compared to the other conditions (M =
5.4, SD = 1.03) (F(1, 197) = 8.71, p = .004).
Self-reported word-search success, or cheating. Analysis of variance revealed
significant differences across all conditions in terms of the number of exactly-4-letter words
participants claimed to have discovered on the word-search task (F(3, 195) = 3.78, p = .011).
Additionally, when conditions were collapsed into brain-stimulation (restorative plus
augmentative) versus non-brain-stimulation (control plus expert), self-reported number of words
discovered was significantly lower when participants were told that they were matched up
against a competitor using a brain stimulation device (M = 5.47, SD = 1.41) compared to the
other conditions (M = 6.32, SD = 2.47) (F(1, 197) = 9.16, p = .003).
Mediation by fairness. Preacher and Hayes’s (2008) bootstrap test of mediation was employed
to determine the relationship of fairness to self-reported number of words discovered, as caused
by a competitor’s purported use of a brain stimulation device. There was no evidence that
fairness mediated the effect of condition on self-reported number of words discovered: the bias-
corrected confidence interval ranged between –0.15 and 0.14.
Discussion
Contrary to expectations, competing against someone using augmentative brain stimulation did not lead participants to inflate their self-reported performance on the word search task. In fact,
competing against someone using brain stimulation led to lower numbers of self-reported words
discovered, even if the brain stimulation technology was used restoratively. This counterintuitive
result could be a consequence of self-handicapping (e.g., Jones & Berglas, 1978), whereby
participants simply disengaged from the task in order to protect their self-image. However, while
participants clearly believed that it was unfair to compete against individuals using brain
stimulation devices, it cannot be concluded from the foregoing mediation analysis that this
perceived unfairness led to differential claims of word search success; further research is
required.
It is possible that participants found the prospect of competition against someone using
brain stimulation to be inherently demotivating, regardless of the fairness-related implications.
This demotivation could have then led to less effortful attention to the word-search task, with the
consequence that participants reported fewer words discovered. This would have profound
implications for the future of work in a world in which brain stimulation devices become
commonplace.
GENERAL DISCUSSION
Across three studies, participants exposed to others’ competitive cognitive enhancement
described enhancement behaviors (specifically, pharmaceuticals and tDCS) as unfair, with the
consequences of (a) viewing enhancing individuals as having lower moral character (Study 1)
and (b) reporting decreased motivation to compete against them (Study 2). Additionally, when
told they were competing against individuals using either restorative or augmentative tDCS in an
incentivized task, participants self-reported lower scores on the task, even though they had ample
opportunity to deceive. These results paint a complex picture of responses to novel cognitive
enhancement technologies; the perception that one is competing against people availing
themselves of brain stimulation may be so demotivating that workers cannot be bothered even to
lie about their performance. This was the case even when the enhanced competitor was described
as using the technology as treatment to reach the cognitive baseline.
Collectively, these studies concretize the claims of equity theory (e.g., Adams, 1963,
1965) as applied to the domain of seemingly unfair cognitive enhancements. Under this theory, a
worker suffering under conditions of unfairness will compensate either by cheating or by
working less. While initially it seemed more plausible that unobserved workers would lie about
their performance to keep up with cognitively enhanced individuals, the results indicate that the
actual response is a decrease in motivation more broadly. The practical consequence of this
demotivation is that as brain stimulation and other cognitive enhancements become more
widespread and publicized, non-enhancing individuals may put forth less effort. As long as enhancers remain a minority, this demotivation implies a net decrease in economic output for organizations subject to these dynamics.
One limitation of the present work is that participants had no opportunity to enhance
themselves, although in Study 1 participants claimed they were relatively unlikely to avail
themselves of pharmaceuticals or brain stimulation. Future work ought to offer participants a
seemingly live option to enhance, even if the pill or device is factually inert. These studies were
also conducted in the pre-COVID era, when unemployment was lower and fewer employees
worked from home. It would be illuminating to examine whether these attitudes toward cognitive enhancement are changing, as well as how the spread of technologies like tDCS might be facilitated by the emergence of a comparatively asocial organizational culture and by the rise of the gig economy, in which workers are less connected to overarching company policies and to each other.
The present work also sheds light on the debate in neuroethics over different forms of
cognitive enhancement. Although the public may be biopolitically moderate on the topic overall,
it appears that attitudes toward enhancement are more clearly negative in competitive contexts
such as the academy and the economy. Crucially, in Study 1, brain stimulation was seen as less
unfair—and less damaging to moral character—than pharmaceutical enhancement, which
comports with ethicists’ assumptions (e.g., Cohen Kadosh et al., 2012) that the public may be
more receptive to external forms of enhancement as compared to ingestible compounds. Future
researchers in this area might further explore the psychological distinctions made by observers of
these different enhancements. Brain stimulation may be seen as less immoral than
pharmaceutical enhancement as a result of various factors in combination, including but not
limited to (a) assumptions about the transformation of personal identity (cf. Chapter 2 of this
dissertation), (b) the ability to turn off a tDCS device at will, (c) the selective gatekeeping of
pharmaceuticals by the medical establishment, and (d) the prevalence of media attention to the
use and abuse of prescription drugs.
CONCLUSION
The history of humankind is the story of cognitive enhancement. From cooperation to
language, from the doctor’s office to the Internet, people have always found ways to improve
their cognitive performance. In the present day, the acceleration of technological innovation, increasing economic pressures, and the rise of distributed work arrangements have created a perfect storm for the proliferation of novel forms of enhancement, including not only tried-and-tested designer drugs but also unregulated nootropic supplements and electrical brain stimulation machines. The present work represents an early step
in modeling how people will respond, morally and motivationally, to the deployment of
unfamiliar enhancements in familiar competitive environments. But much more work remains to
be done if we are to predict and address the socio-psychological consequences of these rapidly
evolving strategies.
REFERENCES
Adams, J. S. (1963). Towards an understanding of inequity. The Journal of Abnormal and Social
Psychology, 67(5), 422.
Adams, J. S. (1965). Inequity in social exchange. In Advances in Experimental Social
Psychology (Vol. 2, pp. 267–299). Academic Press.
Adler, P. (1992). Technology and the future of work. Oxford University Press.
Al-Rodhan, N. R. F. (2011). The politics of emerging strategic technologies: Implications for
geopolitics, human enhancement and human destiny. Palgrave Macmillan.
Alper, J. (2009). Biotech in the basement. Nature Biotechnology, 27, 1077–1078.
Angarita, F. A. (2015). Incorporating smartphones into clinical practice. Annals of Medicine and
Surgery, 4, 187–188.
Aquino, K., & Reed, A. (2002). The self-importance of moral identity. Journal of Personality
and Social Psychology, 83(6), 1423–1440.
Archer, K., Christofides, E., Nosko, A., & Wood, E. (2015). Exploring disclosure and privacy in
a digital age: Risks and benefits. In L. D. Rosen, N. A. Cheever, & L. M. Carrier (Eds.),
The Wiley handbook of psychology, technology, and society (1st ed., pp. 301–320). John
Wiley & Sons.
Arthur, W. B. (2009). The nature of technology: What it is and how it evolves. Simon &
Schuster.
Atwell, P. (2001). The first and the second digital divide. Sociology of Education, 74(3), 171–
191.
Banjo, O. C., Nadler, R., & Reiner, P. B. (2010). Physician attitudes towards pharmacological
cognitive enhancement: safety concerns are paramount. PLoS One, 5(12), e14322.
Bartsch, M., & Subrahmanyam, K. (2015). Technology and self-presentation: Impression
management online. In L. D. Rosen, N. A. Cheever, & L. M. Carrier (Eds.), The Wiley
handbook of psychology, technology, and society (1st ed., pp. 339–357). John Wiley &
Sons.
Becker, M. W., Alzahabi, R., & Hopwood, C. J. (2013). Media multitasking is associated with
symptoms of depression and social anxiety. Cyberpsychology, Behavior, and Social
Networking, 16(2), 132–135.
Beckers, J. J., & Schmidt, H. G. (2001). The structure of computer anxiety: A six-factor model.
Computers in Human Behavior, 17(1), 35–49.
Bergsma, A. (2000). Transhumanism and the wisdom of old genes: Is neurotechnology a source
of future happiness? Journal of Happiness Studies, 1, 401–417.
Bigman, Y. E., Waytz, A., Alterovitz, R., & Gray, K. (2019). Holding robots responsible: The
elements of machine morality. Trends in Cognitive Sciences, 23(5), 365–368.
Birnbacher, D. (2009). Posthumanity, transhumanism and human nature. The International
Library of Ethics, Law and Technology, 2, 95–106.
Bishop, J. P. (2010). Transhumanism, metaphysics, and the posthuman god. Journal of Medicine
and Philosophy, 0, 1–12.
Bostrom, N. (2005). In defense of posthuman dignity. Bioethics, 19, 202–214.
Bostrom, N. (2009). Why I want to be a posthuman when I grow up. In B. Gordijn & R.
Chadwick (Eds.), Medical Enhancement and Posthumanity. Springer Netherlands.
Bostrom, N., & Sandberg, A. (2009). Cognitive enhancement: methods, ethics, regulatory
challenges. Science and engineering ethics, 15(3), 311–341.
Bowman, L. L., Waite, B. M., & Levine, L. E. (2015). Multitasking and attention: Implications
for college students. In L. D. Rosen, N. A. Cheever, & L. M. Carrier (Eds.), The Wiley
handbook of psychology, technology, and society (1st ed., pp. 388–403). John Wiley &
Sons.
Brosnan, M., & Gavin, J. (2015). Are “friends” electric? Why those with Autism Spectrum
Disorder (ASD) thrive in online cultures but suffer in offline cultures. In L. D. Rosen, N.
A. Cheever, & L. M. Carrier (Eds.), The Wiley handbook of psychology, technology, and
society (1st ed., pp. 250–270). John Wiley & Sons.
Campbell, K. W., & Twenge, J. M. (2015). Narcissism, emerging media, and society. In L. D.
Rosen, N. A. Cheever, & L. M. Carrier (Eds.), The Wiley handbook of psychology,
technology, and society (1st ed., pp. 358–370). John Wiley & Sons.
Carrier, L. M., Cheever, N. A., Rosen, L. D., Benitez, S., & Chang, J. (2009). Multitasking
across generations: Multitasking choices and difficulty ratings in three generations of
Americans. Computers in Human Behavior, 25(2), 483–489.
Carrier, L. M., Kersten, M., & Rosen, L. D. (2015). Searching for Generation M: Does
multitasking practice improve multitasking skill? In L. D. Rosen, N. A. Cheever, & L. M.
Carrier (Eds.), The Wiley handbook of psychology, technology, and society (1st ed., pp.
371–387). John Wiley & Sons.
Castelo, N., Schmitt, B., & Sarvary, M. (2019). Human or robot? Consumer responses to radical
cognitive enhancement products. Journal of the Association for Consumer Research,
4(3), 217–230.
Caviola, L., Mannino, A., Savulescu, J., & Faulmüller, N. (2014). Cognitive biases can affect
moral intuitions about cognitive enhancement. Frontiers in Systems Neuroscience, 8, 195.
Center for Humane Technology (2020). Who we are: Our story. Center for Humane Technology,
accessed on December 23, 2020 at http://www.humanetech.com/who-we-are.
Cheever, N. A., Peviani, K., & Rosen, L. D. (2018). Media multitasking and mental health. In M.
A. Moreno, & A. Radovic (Eds.), Technology and Adolescent Mental Health (1st ed., pp.
101–112). Springer.
Cheever, N. A., & Rokkum, J. (2015). Internet credibility and digital media literacy. In L. D.
Rosen, N. A. Cheever, & L. M. Carrier (Eds.), The Wiley handbook of psychology,
technology, and society (1st ed., pp. 56–73). John Wiley & Sons.
Chi, R. P., & Snyder, A. W. (2012). Brain stimulation enables the solution of an inherently
difficult problem. Neuroscience Letters, 515(2), 121–124.
Chimirri, N. A., & Schraube, E. (2019). Rethinking psychology of technology for future society:
Exploring subjectivity from within more-than-human everyday life. In K. O’Doherty, L.
Osbeck, E. Schraube, & J. Yen (Eds.), Psychological studies of science and technology
(1st ed., pp. 49–76). Palgrave Macmillan.
Coffman, B. A., Clark, V. P., & Parasuraman, R. (2014). Battery powered thought: Enhancement
of attention, learning, and memory in healthy adults using transcranial direct current
stimulation. NeuroImage, 85, 895–908.
Conrad, E. C., Humphries, S., & Chatterjee, A. (2019). Attitudes toward cognitive enhancement:
the role of metaphor and context. AJOB neuroscience, 10(1), 35–47.
Cox, D. N., & Lease, E. H. J. (2007). The influence of information and beliefs about technology
on the acceptance of novel food technologies: A conjoint study of farmed prawn
concepts. Food Quality and Preference, 18(5), 813–823.
Davis, F. D. (1986). A technology acceptance model for empirically testing new end-user
information systems: Theory and results. Doctoral Dissertation at the Sloan School of
Management, Massachusetts Institute of Technology.
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer
technology: A comparison of two theoretical models. Management Science, 35(8), 982–
1003.
Dehghani, M., Johnson, K., Hoover, J., Sagi, E., Garten, J., Parmar, N.J., Vaisey, S., Iliev, R., &
Graham, J. (2016). Purity homophily in social networks. Journal of Experimental
Psychology: General, 145, 366–375.
Dijkstra, A. M. & Schuijff, M. (2015). Public opinions about human enhancement can enhance
the expert-only debate: A review study. Public Understanding of Science, 1–15.
Dobrowolski, P., Hanusz, K., Sobczyk, B., Skorko, M., & Wiatrow, A. (2015). Cognitive
enhancement in video game players: The role of video game genre. Computers in Human
Behavior, 44, 59–63.
Dresler, M., Sandberg, A., Ohla, K., Bublitz, C., Trenado, C., Mroczko-Wąsowicz, A., Kühn, S.,
& Repantis, D. (2013). Non-pharmacological cognitive enhancement.
Neuropharmacology, 64, 529–543.
Drouin, M., Kaiser, D., & Miller, D. A. (2015). Mobile phone dependency: What’s the buzz all
about? In L. D. Rosen, N. A. Cheever, & L. M. Carrier (Eds.), The Wiley handbook of
psychology, technology, and society (1st ed., pp. 192–206). John Wiley & Sons.
Eisenberg, J., & Krishnan, A. (2018). Addressing virtual work challenges: Learning from the
field. Organization Management Journal, 15(2), 78–94.
Engelhardt, H. T. (1986). The foundations of bioethics. Oxford University Press.
Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: a three-factor theory of
anthropomorphism. Psychological Review, 114(4), 864–886.
Faber, N. S., Savulescu, J., & Douglas, T. (2016). Why is cognitive enhancement deemed
unacceptable? The role of fairness, deservingness, and hollow achievements. Frontiers in
Psychology, 7, 232.
Farah, M. J., Smith, M. E., Ilieva, I., & Hamilton, R. H. (2014). Cognitive enhancement. Wiley
Interdisciplinary Reviews: Cognitive Science, 5(1), 95–103.
Fast, N. J., & Jago, A. S. (2020). Privacy matters… Or does it? Algorithms, rationalization, and
the erosion of concern for privacy. Current Opinion in Psychology, 31, 44–48.
Fast, N. J., & Schroeder, J. (2020). Power and decision making: New directions for research in
the age of artificial intelligence. Current Opinion in Psychology, 33, 172–176.
Feenberg, A. (1991). Critical Theory of Technology. Oxford University Press.
Feinberg, M. & Willer, R. (2013). The moral roots of environmental attitudes. Psychological
Science, 24, 56–62.
Feinberg, M. & Willer, R. (2015). From gulf to bridge: When do moral arguments facilitate
political influence? Personality and Social Psychology Bulletin, 41, 1665–1681.
Fitz, N. S., Nadler, R., Manogaran, P., Chong, E. W., & Reiner, P. B. (2014). Public attitudes
toward cognitive enhancement. Neuroethics, 7(2), 173–188.
Flöel, A., Rösser, N., Michka, O., Knecht, S., & Breitenstein, C. (2008). Noninvasive brain
stimulation improves language learning. Journal of Cognitive Neuroscience, 20, 1415–
1422.
Frank, M. R., Autor, D., Bessen, J. E., Brynjolfsson, E., Cebrian, M., Deming, D. J., Feldman,
M., Groh, M., Lobo, J., Moro, E., Wang, D., Youn, H., & Rahwan, I. (2019). Toward
understanding the impact of artificial intelligence on labor. Proceedings of the National
Academy of Sciences, 116(14), 6531–6539.
Fukuyama, F. (2002). Our posthuman future: Consequences of the biotechnology revolution.
Farrar, Straus, Giroux.
Fukuyama, F. (2004). Transhumanism. Foreign Policy, 144, 42–43.
Gamson, W. A. & Modigliani, A. (1989). Media discourse and public opinion on nuclear power:
A constructionist approach. American Journal of Sociology, 95, 1–37.
Gavin, J., & Rodham, K. (2015). Navigating psychological ethics in shared multi-user online
environments. In L. D. Rosen, N. A. Cheever, & L. M. Carrier (Eds.), The Wiley
handbook of psychology, technology, and society (1st ed., pp. 105–116). John Wiley &
Sons.
Gazzaley, A., & Rosen, L. D. (2016). The distracted mind: Ancient brains in a high-tech world.
MIT Press.
Gazzaniga, M. S. (2005). The Ethical Brain. Dana Press.
Gomez-Uribe, C. A., & Hunt, N. (2016). The Netflix recommender system: Algorithms, business
value, and innovation. ACM Transactions on Management Information Systems (TMIS),
6(4), 13.
Grace, A., & Kemp, N. (2015). Assessing the written language of text messages. In L. D. Rosen,
N. A. Cheever, & L. M. Carrier (Eds.), The Wiley handbook of psychology, technology,
and society (1st ed., pp. 207–231). John Wiley & Sons.
Graham, J. & Haidt, J. (2010). Beyond beliefs: Religions bind individuals into moral
communities. Personality and Social Psychology Review, 14, 140–150.
Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S. P., & Ditto, P. H. (2013). Moral
foundations theory: The pragmatic validity of moral pluralism. Advances in Experimental
Social Psychology, 47, 55–130.
Graham, J., Haidt, J., & Nosek, B. A. (2009). Liberals and conservatives rely on different sets of
moral foundations. Journal of Personality and Social Psychology, 96, 1029–1046.
Graham, J., Nosek, B. A., Haidt, J., Iyer, R., Koleva, S., & Ditto, P. H. (2011). Mapping the
moral domain. Journal of Personality and Social Psychology, 101, 366–385.
Graham, J., Nosek, B. A., & Haidt, J. (2012). The moral stereotypes of liberals and
conservatives: Exaggeration of differences across the political spectrum. PLoS ONE, 7,
e50092.
Grau, C., Ginhoux, R., Riera, A., Nguyen, T. L., Chauvat, H., Berg, M., Amengual, J. L.,
Pascual-Leone, A., & Ruffini, G. (2014). Conscious brain-to-brain communication in
humans using non-invasive technologies. PLoS ONE, 9, e105225.
Gray, K., Young, L., & Waytz, A. (2012). Mind perception is the essence of
morality. Psychological Inquiry, 23(2), 101–124.
Greely, H., Sahakian, B., Harris, J., Kessler, R. C., Gazzaniga, M., Campbell, P., & Farah, M. J.
(2008). Towards responsible use of cognitive-enhancing drugs by the healthy. Nature,
456(7223), 702–705.
Greenfield, S. (2004). Tomorrow’s people: How 21st-century technology is changing the way we think and feel. Penguin Books.
Greenfield, S. (2008). ID: The quest for meaning in the 21st century. Hodder & Stoughton.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral
judgment. Psychological Review, 108(4), 814–834.
Haidt, J. & Joseph, C. (2004). Intuitive ethics: How innately prepared intuitions generate
culturally variable virtues. Daedalus, 133, 55–66.
Hameed, J., Harrison, I., Gasson, M. N., & Warwick, K. (2010). A novel human-machine interface using subdermal magnetic implants. Cybernetic Intelligent Systems (CIS), 2010 IEEE 9th International Conference on, pp. 1–5.
Harari, Y. N. (2016). Homo deus: A brief history of tomorrow. Harvill Secker.
Hård, M. & Jamison, A. (2005). Hubris and hybrids: A cultural history of technology and
science. Taylor & Francis Group.
Harris, J. (2011). Chemical cognitive enhancement: Is it unfair, unjust, discriminatory, or
cheating for healthy adults to use smart drugs? In Oxford Handbook of Neuroethics, Vol.
1.
Haslam, N. (2006). Dehumanization: An integrative review. Personality and Social Psychology
Review, 10(3), 252–264.
Heaton, J. B., Polson, N., & Witte, J. H. (2017). Rejoinder to ‘deep learning for finance: Deep
portfolios’. Applied Stochastic Models in Business and Industry, 33(1), 19–21.
Heersmink, R. (2016). The internet, cognitive enhancement, and the values of cognition. Minds
and Machines, 26(4), 389–407.
Heersmink, R. (2017). Extended mind and cognitive enhancement: Moral aspects of cognitive
artifacts. Phenomenology and the Cognitive Sciences, 16(1), 17–32.
Hicks, J. (2014, March 15). Move over hackers, biohackers are here. Forbes. Retrieved from
http://www.forbes.com.
Hopkins, P. D. (2005). Transcending the animal: How transhumanism and religion are and are
not alike. Journal of Evolution and Technology, 14, 13–28.
Hopkins, P. D. (2008). A moral vision for transhumanism. Journal of Evolution and Technology,
19, 3–7.
Hosny, A., Parmar, C., Quackenbush, J., Schwartz, L. H., & Aerts, H. J. (2018). Artificial
intelligence in radiology. Nature Reviews Cancer, 18(8), 500–510.
Igbaria, M., & Chakrabarti, A. (2007). Computer anxiety and attitudes towards microcomputer
use. Behaviour & Information Technology, 9(3), 229–241.
Istvan, Z. (2014, July 6). Interview with transhumanist biohacker Rich Lee. Psychology Today.
Retrieved from http://www.psychologytoday.com.
Iyer, R., Koleva, S., Graham, J., Ditto, P. H., & Haidt, J. (2012). Understanding libertarian
morality: The psychological dispositions of self-identified libertarians. PLoS ONE, 7,
e42366.
Janoff-Bulman, R. (2013). Meaning and morality: A natural coupling. In K. Markman, T. Proulx,
& M. Lindberg (Eds.), The Psychology of Meaning. APA.
Jak, A. J., Seelye, A. M., & Jurick, S. M. (2013). Crosswords to computers: a critical review of
popular approaches to cognitive enhancement. Neuropsychology review, 23(1), 13–26.
Joiner, R., Stewart, C., & Beaney, C. (2015). Gender digital divide: Does it exist and what are
the explanations? In L. D. Rosen, N. A. Cheever, & L. M. Carrier (Eds.), The Wiley
handbook of psychology, technology, and society (1st ed., pp. 74–88). John Wiley &
Sons.
Jones, E. E., & Berglas, S. (1978). Control of attributions about the self through self-
handicapping strategies: The appeal of alcohol and the role of underachievement.
Personality and Social Psychology Bulletin, 4(2), 200–206.
Jotterand, F. (2010). Human dignity and transhumanism: do anthro-technological devices have
moral status? American Journal of Bioethics, 10, 45–52.
Kadosh, R. C., Soskic, S., Iuculano, T., Kanai, R., & Walsh, V. (2010). Modulating neuronal
activity produces specific and long-lasting changes in numerical competence. Current
Biology, 20(22), 2016–2020.
Kass, L. (1997, June 2). The wisdom of repugnance. The New Republic. Retrieved from
http://web.stanford.edu/~mvr2j/sfsu09/extra/Kass2.pdf.
Kass, L. (2002). Life, Liberty, and Defense of Dignity: The Challenge for Bioethics. Encounter
Books.
Koch, T. (2010). Enhancing who? Enhancing what? Ethics, bioethics, and transhumanism.
Journal of Medicine and Philosophy, 35, 685–699.
Koleva, S. P., Graham, J., Iyer, R., Ditto, P. H., & Haidt, J. (2012). Tracing the threads: How
five moral concerns (especially Purity) help explain culture war attitudes. Journal of
Research in Personality, 46, 184–194.
Kool, V. K., & Agrawal, R. (2016). Psychology of technology. Springer International Publishing.
Kowalski, R. M., & Whittaker, E. (2015). Cyberbullying: Prevalence, causes, and consequences.
In L. D. Rosen, N. A. Cheever, & L. M. Carrier (Eds.), The Wiley handbook of
psychology, technology, and society (1st ed., pp. 142–157). John Wiley & Sons.
Kurzweil, R. (2001, March 7). The law of accelerating returns. Accelerating Intelligence.
Retrieved from http://www.kurzweilai.net/the-law-of-accelerating-returns.
Kurzweil, R. (2005). The singularity is near: When humans transcend biology. Penguin Books.
Lebedev, M. A. & Nicolelis, M. A. L. (2006). Brain-machine interfaces: past, present and future.
Trends in Neurosciences, 29, 536–546.
Lee, M. K. (2018). Understanding perception of algorithmic decisions: Fairness, trust, and
emotion in response to algorithmic management. Big Data & Society, 5(1),
2053951718756684.
Leon, M. R., Harms, P. D., & Gilmer, D. O. (2019). PCE use in the workplace: The open secret
of performance enhancement. Journal of Management Inquiry, 28(1), 67–70.
Liang, P., Xu, Y., Zhang, X., Ding, C., Huang, R., Zhang, Z., Lu, J., Xie, X., Chen, Y., Li, Y.,
Sun, Y., Bai, Y., Songyang, Z., Ma, W., Zhou, C., & Huang, J. (2015). CRISPR/Cas9-
mediated gene editing in human tripronuclear zygotes. Protein & Cell, 6, 363–372.
Lieberman, A., & Schroeder, J. (2020). Two social lives: How differences between online and
offline interaction influence social outcomes. Current Opinion in Psychology, 31, 16–21.
Lin, L., & Bigenho, C. (2015). Multitasking, note-taking, and learning in technology-immersive
learning environments. In L. D. Rosen, N. A. Cheever, & L. M. Carrier (Eds.), The Wiley
handbook of psychology, technology, and society (1st ed., pp. 420–435). John Wiley &
Sons.
Lynch, Z. (2009). The neuro revolution: How brain science is changing our world. St. Martin’s
Press.
MacDonald, K. (2014, February 5). Can an electronic headset make you a better video gamer?
The Guardian. Retrieved from http://www.theguardian.com.
Matz, S. C., Appel, R. E., Kosinski, M. (2020). Privacy in the age of psychological targeting.
Current Opinion in Psychology, 31, 116–121.
Maurer, M. M. (1994). Computer anxiety correlates and what they tell us: A literature review.
Computers in Human Behavior, 10(3), 369–376.
McFarland, D. J. & Wolpaw, J. R. (2008). Brain-computer interface operation of robotic and
prosthetic devices. Computer, 10, 52–56.
McNamee, M. J. & Edwards, S. D. (2005). Transhumanism, medical technology and slippery
slopes. Journal of Medical Ethics, 32, 513–518.
Michels, S. (2014, September 23). What is biohacking and why should we care? PBS NewsHour.
Retrieved from http://www.pbs.org.
More, M. (1994). On becoming posthuman. Free Inquiry, 14, 38–41.
More, M. (2013). The philosophy of transhumanism. In M. More & N. Vita-More (Eds.), The
Transhumanist Reader. John Wiley & Sons.
Moreau, D., & Conway, A. R. (2013). Cognitive enhancement: a comparative review of
computerized and athletic training programs. International Review of Sport and Exercise
Psychology, 6(1), 155–183.
Moreno, M. A., & Pumper, M. A. (2015). Sex, alcohol, and depression: Adolescent health
displays on social media. In L. D. Rosen, N. A. Cheever, & L. M. Carrier (Eds.), The
Wiley handbook of psychology, technology, and society (1st ed., pp. 287–300). John
Wiley & Sons.
Newman, D. T., Fast, N. J., & Graham, J. (2016). Transhumanism and morality: Reactions to
enhancing human abilities with technology. International Journal of Psychology, 51,
1043.
Newman, D. T., Fast, N. J., & Harmon, D. J. (2020). When eliminating bias isn’t fair:
Algorithmic reductionism and procedural justice in human resource
decisions. Organizational Behavior and Human Decision Processes, 160, 149–167.
Nisbet, M. C. (2004). Public opinion about stem cell research and human cloning. Public
Opinion Quarterly, 68, 131–154.
Nitsche, M. A., Koschack, J., Pohlers, H., Hullemann, S., Paulus, W., & Happe, S. (2012).
Effects of frontal transcranial direct current stimulation on emotional state and processing
in healthy humans. Frontiers in Psychiatry, 3, 58.
Nordmann, A., & Rip, A. (2009). Mind the gap revisited. Nature Nanotechnology, 4, 273–274.
Ollier-Malaterre, A., Rothbard, N. P., & Berg, J. M. (2013). When worlds collide in cyberspace:
How boundary work in online social networks impacts professional relationships.
Academy of Management Review, 38(4), 645–669.
Ollier-Malaterre, A., Jacobs, J. A., & Rothbard, N. P. (2019). Technology, work, and family:
Digital cultural capital and boundary management. Annual Review of Sociology, 45, 425–
447.
Opotow, S. (1990). Moral exclusion and injustice: An introduction. Journal of Social
Issues, 46(1), 1–20.
Partridge, B., Lucke, J., Bartlett, H., & Hall, W. (2009). Ethical, social, and personal implications
of extended human lifespan identified by members of the public. Rejuvenation Research,
12, 351–357.
Partridge, B., Lucke, J., Bartlett, H., & Hall, W. (2011). Public attitudes towards human life
extension by intervening in ageing. Journal of Aging Studies, 25, 73–83.
Persson, I., & Savulescu, J. (2010). Moral transhumanism. Journal of Medicine and Philosophy,
35, 656–669.
Pinker, S. (2018). Enlightenment now: The case for reason, science, humanism, and progress.
Penguin Books.
Pisoni, A., Mattavelli, G., Papagno, C., Rosanova, M., Casali, A. G., & Romero Lauro, L. J.
(2018). Cognitive enhancement induced by anodal tDCS drives circuit-specific cortical
plasticity. Cerebral Cortex, 28(4), 1132–1140.
Potter, V. R. (1970). Bioethics, the science of survival. Perspectives in Biology and Medicine,
14, 127–153.
Rahwan, I. (2018). Society-in-the-loop: Programming the algorithmic social contract. Ethics and
Information Technology, 20(1), 5–14.
Raveendhran, R., & Fast, N. J. (2019). Technology and social evaluation: Implications for
individuals and organizations. In R. N. Landers (Ed.), The Cambridge handbook of
technology and employee behavior (1st ed., pp. 921–943). Cambridge University Press.
Raveendhran, R., Fast, N. J., & Carnevale, P. J. (2020). Virtual (freedom from) reality:
Evaluation apprehension and leaders’ preference for communicating through avatars.
Computers in Human Behavior, 111, 106415.
Reich, W. T. (Ed.). (1978). Encyclopedia of bioethics. Free Press.
Reiner, P. B. (2013). The biopolitics of cognitive enhancement. In Cognitive enhancement.
Springer Netherlands.
Reis, J., & Fritsch, B. (2011). Modulation of motor performance and motor learning by
transcranial direct current stimulation. Current Opinion in Neurology, 24, 590–596.
Reznich, C. B. (1996). Applying minimalist design principles to the problem of computer
anxiety. Computers in Human Behavior, 12(2), 245–261.
Rice, E., Petering, R., Rhoades, H., Winetrobe, H., Goldbach, J., Plant, A., Montoya, J., &
Kordic, T. (2015). Cyberbullying perpetration and victimization among middle-school
students. American Journal of Public Health, 105(3), e66–e72.
Roduit, J. A., Baumann, H., & Heilinger, J. C. (2013). Human enhancement and perfection.
Journal of Medical Ethics, 39, 647–650.
Roozenbeek, J., Schneider, C. R., Dryhurst, S., Kerr, J., Freeman, A. L. J., Recchia, G., Van der
Bles, A. M., & Van der Linden, S. (2020). Susceptibility to misinformation about
COVID-19 around the world. Royal Society Open Science, 7(10), 201199.
Rosen, C. (2008). The myth of multitasking. The New Atlantis, 20, 105–110.
Rosen, L. D., & Lara-Ruiz, J. M. (2015). Similarities and differences in workplace, personal, and
technology-related values, beliefs, and attitudes across five generations of Americans. In
L. D. Rosen, N. A. Cheever, & L. M. Carrier (Eds.), The Wiley handbook of psychology,
technology, and society (1st ed., pp. 20–55). John Wiley & Sons.
Sabini, J., & Monterosso, J. (2005). Judgments of the fairness of using performance enhancing
drugs. Ethics & Behavior, 15(1), 81–94.
Sandberg, A., Sinnott-Armstrong, W., & Savulescu, J. (2011). Cognitive enhancement in courts.
In J. Illes & B. J. Sahakian (Eds.), The Oxford handbook of neuroethics (pp. 273–285).
Oxford University Press.
Sander, J. D., & Joung, J. K. (2014). CRISPR-Cas systems for editing, regulating and targeting
genomes. Nature Biotechnology, 32, 347–355.
Sandu, A. (2015). The anthropology of immortality and the crisis of posthuman conscience.
Journal for the Study of Religions and Ideologies, 14, 3–26.
Sattler, S., & Wiegel, C. (2013). Cognitive test anxiety and cognitive enhancement: The influence
of students’ worries on their use of performance-enhancing drugs. Substance Use &
Misuse, 48(3), 220–232.
Savulescu, J. (2006). Justice, fairness, and enhancement. Annals of the New York Academy of
Sciences, 1093, 321.
Schelle, K. J., Faulmüller, N., Caviola, L., & Hewstone, M. (2014). Attitudes toward
pharmacological cognitive enhancement—a review. Frontiers in Systems Neuroscience,
8, 53.
Schroeder, J., & Epley, N. (2016). Mistaking minds and machines: How speech affects
dehumanization and anthropomorphism. Journal of Experimental Psychology: General,
145(11), 1427–1437.
Schroeder, J., Kardas, M., & Epley, N. (2017). The humanizing voice: Speech reveals, and text
conceals, a more thoughtful mind in the midst of disagreement. Psychological
Science, 28(12), 1745–1762.
Shariff, A., Bonnefon, J. F., & Rahwan, I. (2017). Psychological roadblocks to the adoption of
self-driving vehicles. Nature Human Behaviour, 1(10), 694–696.
Sherman, G. D., & Haidt, J. (2011). Cuteness and disgust: The humanizing and dehumanizing
effects of emotion. Emotion Review, 3(3), 245–251.
Shweder, R. A., Much, N. C., Mahapatra, M., & Park, L. (1997). The “big three” of morality
(autonomy, community, divinity) and the “big three” explanations of suffering. In A. M.
Brandt & P. Rozin (Eds.), Morality and health (pp. 119–169). Routledge.
Siegrist, M. (2008). Factors influencing public acceptance of innovative food technologies and
products. Trends in Food Science & Technology, 19(11), 603–608.
Sorgner, S. L. (2009). Nietzsche, the overhuman, and transhumanism. Journal of Evolution and
Technology, 20, 29–42.
Steger, M. F., Frazier, P., Oishi, S., & Kaler, M. (2006). The meaning in life questionnaire:
Assessing the presence of and search for meaning in life. Journal of Counseling
Psychology, 53, 80–93.
Steinhart, E. (2008). Teilhard de Chardin and transhumanism. Journal of Evolution and
Technology, 20, 1–22.
Sterling, T. D. (1975). Humanizing computerized information systems. Science, 190(4220),
1168–1172.
Strohminger, N., & Nichols, S. (2014). The essential moral self. Cognition, 131(1), 159–171.
Taylor, F. W. (1911). Scientific management. Harper & Brothers.
Teilhard de Chardin, P. (1973). Toward the future (R. Hague, Trans.). Harcourt.
Tucker, P. (2014, May 14). The military wants to teach robots right from wrong. The Atlantic.
Retrieved from http://www.theatlantic.com.
U.S. Bureau of Labor Statistics. (2020). Charting the labor market: Data from the Current
Population Survey (CPS). Retrieved December 28, 2020, from
https://www.bls.gov/web/empsit/cps_charts.pdf.
Uzun, A. M., & Kilis, S. (2019). Does persistent involvement in media and technology lead to
lower academic performance? Evaluating media and technology use in relation to
multitasking, self-regulation and academic performance. Computers in Human
Behavior, 90, 196–203.
Van der Linden, S., Roozenbeek, J., & Compton, J. (2020). Inoculating against fake news about
COVID-19. Frontiers in Psychology, 11, 2928.
Vanian, J. (2015, July 1). Why Elon Musk is donating millions to make artificial intelligence
safer. Fortune. Retrieved from http://fortune.com.
Venkatesh, V. (2000). Determinants of perceived ease of use: Integrating control, intrinsic
motivation, and emotion into the technology acceptance model. Information Systems
Research, 11(4), 342–365.
Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance
model: Four longitudinal field studies. Management Science, 46(2), 186–204.
Verdoux, P. (2009). Transhumanism, progress and the future. Journal of Evolution and
Technology, 20, 49–69.
Vondráčková, P., & Šmahel, D. (2015). Internet addiction. In L. D. Rosen, N. A. Cheever, & L.
M. Carrier (Eds.), The Wiley handbook of psychology, technology, and society (1st ed.,
pp. 469–485). John Wiley & Sons.
Waldron, S., Kemp, N., Plester, B., & Wood, C. (2015). Texting behavior and language skills in
children and adults. In L. D. Rosen, N. A. Cheever, & L. M. Carrier (Eds.), The Wiley
handbook of psychology, technology, and society (1st ed., pp. 232–249). John Wiley &
Sons.
Ward, A. F. (2013). Supernormal: How the Internet is changing our memories and our minds.
Psychological Inquiry, 24, 341–348.
Ward, A. F., Duke, K., Gneezy, A., & Bos, M. W. (2017). Brain drain: The mere presence of
one’s own smartphone reduces available cognitive capacity. Journal of the Association
for Consumer Research, 2(2), 140–154.
Waszak, P. M., Kasprzycka-Waszak, W., & Kubanek, A. (2018). The spread of medical fake
news in social media–the pilot quantitative study. Health Policy and Technology, 7(2),
115–118.
Waytz, A., & Gray, K. (2018). Does online technology make us more or less sociable? A
preliminary review and call for research. Perspectives on Psychological Science, 13(4),
473–491.
Werner, N. E., Cades, D. M., & Boehm-Davis, D. A. (2015). Multitasking and interrupted task
performance: From theory to application. In L. D. Rosen, N. A. Cheever, & L. M. Carrier
(Eds.), The Wiley handbook of psychology, technology, and society (1st ed., pp. 436–
452). John Wiley & Sons.
Westjohn, S. A., Arnold, M. J., Magnusson, P., Zdravkovic, S., & Zhou, J. X. (2009).
Technology readiness and usage: A global-identity perspective. Journal of the Academy
of Marketing Science, 37(3), 250–265.
Wexler, A. (2018). Who uses direct-to-consumer brain stimulation products, and why? A study
of home users of tDCS devices. Journal of Cognitive Enhancement, 2(1), 114–134.
Wiltermuth, S. S. (2011). Cheating more when the spoils are split. Organizational Behavior and
Human Decision Processes, 115(2), 157–168.
Wind, T. R., Rijkeboer, M., Andersson, G., & Riper, H. (2020). The COVID-19 pandemic: The
‘black swan’ for mental health care and a turning point for e-health. Internet
Interventions, 20, 100317.
Wolpaw, J. R., Birbaumer, N., McFarland, D. J., Pfurtscheller, G., & Vaughan, T. M. (2002).
Brain-computer interfaces for communication and control. Clinical Neurophysiology,
113, 767–791.
Wood, E., & Zivcakova, L. (2015). Understanding multimedia multitasking in educational
settings. In L. D. Rosen, N. A. Cheever, & L. M. Carrier (Eds.), The Wiley handbook of
psychology, technology, and society (1st ed., pp. 404–419). John Wiley & Sons.
Yaden, D. B., Eichstaedt, J. C., & Medaglia, J. D. (2018). The future of technology in positive
psychology: Methodological advances in the science of well-being. Frontiers in
Psychology, 9, 962.
Yang, C. J., Zhang, J. F., Chen, Y., Dong, Y. M., & Zhang, Y. (2008). A review of exoskeleton-
type systems and their key technologies. Proceedings of the Institution of Mechanical
Engineers, Part C: Journal of Mechanical Engineering Science, 222, 1599–1612.
Ziegler, D. A., Mishra, J., & Gazzaley, A. (2015). The acute and chronic impact of technology
on our brain. In L. D. Rosen, N. A. Cheever, & L. M. Carrier (Eds.), The Wiley handbook
of psychology, technology, and society (1st ed., pp. 1–19). John Wiley & Sons.
Zohny, H. (2015). The myth of cognitive enhancement drugs. Neuroethics, 8(3), 257–269.
Abstract
Technology is the means by which human beings take persistent and decisive control over their environment—and so, gradually, over their own destinies. However, the acceleration of technological innovation in recent years has enabled humanity to exert far more direct influence over its own social, psychological, and biological nature. Technology is therefore of central interest to behavioral scientists in general and morality scholars in particular. Across three Chapters, this dissertation explores the emergent field of the psychology of technology, with a specific eye toward ethical implications and the role of moral psychology. Chapter I presents an overview of literature pertinent to the moral psychology of novel technologies. Chapter II consists of three empirical studies of the moral psychology specific to transhumanism, a movement that advocates integrating humanity with advanced technology to transcend our biological limitations. Chapter III consists of three further empirical studies of the moral psychology specific to competitive cognitive enhancement.