LIVING WITH THE MOST HUMANLIKE NONHUMAN:
UNDERSTANDING HUMAN-AI INTERACTIONS IN DIFFERENT SOCIAL CONTEXTS
by
Joo-Wha Hong
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(COMMUNICATION)
August 2022
Copyright 2022 Joo-Wha Hong
“We can only see a short distance ahead,
but we can see plenty there that needs to be done.”
- Alan Turing
ACKNOWLEDGMENTS
I am very grateful to those who supported me in completing this dissertation and the
Ph.D. program.
First and foremost, I have to sincerely thank my advisor, Dmitri Williams. I could not have undertaken this journey without him; he generously provided emotional support, knowledge, and expertise throughout this program. My academic progress has been made possible by his invaluable advice, continuous support, and patience.
I would also like to show gratitude to my committee members, Andrea Hollingshead,
Sheila Murphy, and Yolanda Gil, who provided their passionate participation and invaluable
supervision.
Finally, I must express my profound gratitude to loved ones, including family and
friends, for providing me with tremendous understanding, unfailing support, and continuous
encouragement throughout the program. This accomplishment would not have been possible
without them.
TABLE OF CONTENTS
Epigraph
Acknowledgments
List of Tables
List of Figures
Abstract
Chapter 1: Introduction
    Understanding AI
    AI and Social Science
    Value of Social Science Research on AI
    Recent HMC Studies
    Three Dimensions of HMC
    Preview of Studies
Chapter 2: Perception of AI Utility and Self-Driving Cars
    Method
    Results
    Discussion
Chapter 3: Perception of AI Creativity and Music Composition
    Method
    Results
    Discussion
Chapter 4: Perception of AI Ethics and Predictive Policing
    Method
    Results
    Discussion
Chapter 5: Discussion and Conclusion
    Summary
    Limitations
    Implications and Directions for Future Research
    Conclusion
References
Appendices
    Appendix A
    Appendix B
    Appendix C
    Appendix D
    Appendix E
LIST OF TABLES
Table 1. Descriptive Statistics for Driver Evaluations
Table 2. Descriptive Statistics for Blame Attribution
LIST OF FIGURES
Figure 1. Xinhua’s AI News Anchor
Figure 2. Three HMC Divisions
Figure 3. Sample Stimulus in a News Article Format
Figure 4. Level of Responsibility Attributed to Drivers
Figure 5. Comparison of the Positive Perceptions of Drivers
Figure 6. Regression Analysis Results for Music Evaluations
Figure 7. ANCOVA Results for Evaluations of Classical Music
Figure 8. ANCOVA Results for EDM Evaluations
Figure 9. Regression Analysis Results for Blame Attribution
Figure 10. Search Results for “Artificial Intelligence” on Google Trends
Figure 11. Potential New Area of HMC Research
ABSTRACT
As machines have become more interactive and autonomous with the recent development and broad application of artificial intelligence (AI), human-machine communication (HMC) is becoming more prevalent. Because social interactions with AI agents have grown more common in recent years, a deeper understanding of HMC is needed. This dissertation examines three studies that introduce various social contexts of HMC. The general research question here is, what are the factors that determine the evaluation of AI and its
performance? Based on functional, relational, and ontological approaches, the project considers
analyses of AI utility, AI creativity, and AI ethics using self-driving car scenarios, AI-composed
music, and predictive policing cases. The three studies explain the evaluation of AI functioning
by comparing it with perceptions of humans' performances. While all three studies focus on the
public perception of AI, they employ different theoretical frameworks, including schema theory,
attribution theory, and the computers are social actors (CASA) paradigm. In doing so, the current project
elucidates how interactive machine agents with social roles can change society and why HMC
has research value. The first study, on self-driving cars, found that more blame was attributed to AI drivers than to human drivers only when outcomes were positive. The main
finding of the second study is that the acceptance of creative AI has a positive relation with the
assessment of AI-composed music. The last study about racially biased AI found a positive
relation between autonomy and attributed responsibility in both human and AI cases. The three
studies' findings extend the social constructivism approach (i.e., human perceptions define
technology) to AI assessment by pointing out the influence of non-technological traits of AI and
the contexts of HMC.
Chapter 1: Introduction
As AI technologies develop, machines are emerging not only as tools but also as social
entities. By engaging in the same actions and behaviors traditionally performed by humans,
machines will gain social status and people’s expectations will align with machines’ social
positions. From this perspective, the more that machines take on human roles in society, the more
they will be assigned social functions that have previously been ascribed only to human agents.
Machines can replace human laborers and become social actors partly because of the autonomy
that is possible with AI technology. One example of this shift can be seen in the way machines
are used in newsrooms. Conventionally, various technological tools, such as microphones,
cameras, and satellites, are needed to broadcast the news since those devices have functions that
exceed human abilities. These tools have traditionally played assisting roles. However, a Chinese news
company recently introduced avatar-based TV news anchors that function using AI technology
(Handley, 2018; see Figure 1). This indicates that machines are no longer assisting humans but
are now replacing them.
Figure 1. Xinhua’s AI News Anchor
AI technology has replaced human labor in many fields, and this phenomenon is
expected to continue (Chelliah, 2017). Indeed, the everyday lives of many people have already
been influenced by industries that have adopted autonomous technology. This influence can be
seen in hiring, where applicants are evaluated by AI (Lewis & Marc, 2019), and in automation, which has advanced to the point that workers in various positions have become redundant. Such developments highlight the societal influence of
machines’ adoption of human roles (De Lara, 2018).
Occupational substitution is not the only realm in which AI creates changes. Even when
not using AI products, individuals are likely to be influenced by AI. For instance, AI is used to
predict consumer behavior and suggest products before one even recognizes a need for these
products (Calvert & Brammer, 2012; Loureiro, Guerreiro, & Tussyadiah, 2021). Such
recommendations are not limited to products; they also include people. Social media AI generates friend suggestions, but researchers have indicated that these algorithmic recommendations strengthen polarization in social networks (Huang et al., 2017; Santos, Lelkes, & Levin, 2021). Moreover, AI places content that it predicts will be of interest at the top of social media news feeds. While people prefer news stories selected by AI over those chosen by
human editors, reality construction using algorithmic news selections reinforces
individualization, commercialization, inequality, and deterritorialization while diminishing
transparency, controllability, and predictability (Just & Latzer, 2017; Thurman et al., 2018;
Swart, 2021). In short, people’s understanding of the world may become biased due to the
potential for AI-driven information curation to censor different opinions.
While this dissertation addresses several concerns regarding AI, the technology also
provides benefits to individuals and society by, for example, reducing human errors. For
instance, self-driving cars are expected to reduce car accidents by removing many human factors
that lead to accidents, such as distraction, exhaustion, or inebriation (Shariff, Bonnefon, &
Rahwan, 2021). Furthermore, AI-controlled robots can be sent to explore hazardous zones, such
as confined and hazardous spaces, war zones, areas with extreme conditions, or even other
planets (Bruzzone et al., 2019; Hasan et al., 2021; Johnson, 2020).
AI technology provides physical and psychological benefits. For instance, AI liberates
people from monotonous, mundane work or stressful customer-service roles (Hollenbeck, 2020;
Knight, 2020), thereby improving people’s quality of life. Since machines do not need breaks,
they can provide continuous service, which satisfies companies and customers. While some
people will take advantage of such technology, AI can cause adverse outcomes to others—for
example, it can replace workers, thus taking away their livelihoods. Overall, because AI simply learns from repeated patterns in the information it is given, it is a value-neutral technology unless it is trained with biased datasets. Like a knife, it can become either a weapon or a tool depending on how it is used, so its social influence is often determined by how people perceive and use it. Accordingly, the general research question this dissertation asks is “What are the factors that determine the evaluation of AI and its performance?”
Understanding AI
AI can easily be found in daily life—for instance, it is used in smart home devices,
Google Maps, and voice assistants. AI is also becoming more known because of frequent and
diverse depictions of AI in mass media, from The Terminator (a 1984 film depicting AI machines
as a threat to humankind) to Robot & Frank (a 2012 film depicting an autonomous robot as a
partner to humans) to Her (a 2013 film depicting AI machines as affective partners). Media
depictions of AI influence how the public, policymakers, and even AI developers perceive the
technology (Cave & Dihal, 2019). Despite concerns about the spread of misinformation about AI through popular depictions, the concept of AI is still too broad and nascent for formal media regulations governing its depiction. Even so, discussions and studies aimed at creating guidelines for the proper presentation of AI in mass media should be considered.
Also, there may be a discrepancy between how those who create AI understand the
technology and how non-specialists understand and use it. Even among experts, there is no
universally accepted definition of AI. Russell and Norvig (2016) classified existing definitions of
AI into four groups based on functionality: (a) machines that act like humans (i.e., behaving
similarly to humans), (b) machines that think like humans (i.e., processing information similarly
to humans), (c) machines that think rationally (i.e., using logical processes), and (d) machines
that act rationally (i.e., perceiving and adapting to their surroundings). This categorization
implies that the concept of AI involves reasoning, behavior, humanness, and rationality. Even though there is no universally agreed-upon description of AI, these terms provide a sense of what it is.
Neural networks, deep learning, and reinforcement learning are often mentioned in
theoretical definitions of AI. Neural networks are a collection of artificial neurons that become
“activated” depending on the signals they receive, and were motivated by the information
processing that occurs in biological neural networks in the brain (Guresen & Kayakutlu, 2011).
Deep learning is a machine learning method by which a multilayer neural network extracts
features from raw data that can be used to determine common patterns (Bengio, Courville, &
Vincent, 2013; Schmidhuber, 2015). The primary difference between supervised learning (SL)
and unsupervised learning (UL) is that there are annotated labels for SL, but not for UL. SL is
most frequently used for training a model to directly achieve a specific task, e.g. predict the label
of an input that is similar to that seen during training, while UL is usually leveraged to learn
useful features that can be taken advantage of at different stages of learning. A familiar example
of SL is the cat vs. dog classifier that takes advantage of convolutional neural networks (CNNs),
a type of deep neural network architecture. The model learns to distinguish dogs from cats by
analyzing thousands of labeled images of cats and dogs by automatically learning their relevant
features. A well-trained model can categorize the animals even when presented with an image it has not seen before (Tripathy & Singh, 2022). A well-known example of UL is learning word embeddings for natural language processing. Word embeddings are continuous-valued vector representations of individual words that can later be used for downstream tasks such as sentiment classification. A classic approach to learning word embeddings is the continuous bag-of-words model, in which a word is predicted from its context words.
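To make the supervised-learning description above concrete, the following sketch shows a minimal cat-versus-dog classifier written in Python with the PyTorch library; the library choice, the tiny network, and the random tensors standing in for thousands of labeled photographs are assumptions made for illustration, not tools or data used in this dissertation.

    import torch
    import torch.nn as nn

    class TinyCNN(nn.Module):
        """A small convolutional network for two classes: cat (0) and dog (1)."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level visual features
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1),  # higher-level patterns
                nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 16 * 16, 2)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    # Stand-in for a labeled training set of 64x64 RGB photographs.
    images = torch.randn(32, 3, 64, 64)
    labels = torch.randint(0, 2, (32,))  # human-provided annotations: 0 = cat, 1 = dog

    model = TinyCNN()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(5):  # a real run would make many passes over thousands of images
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)  # penalize disagreement with the labels
        loss.backward()
        optimizer.step()

    # After training, the model assigns a label to an image it has never seen.
    new_image = torch.randn(1, 3, 64, 64)
    predicted_class = model(new_image).argmax(dim=1)  # 0 = cat, 1 = dog

In an unsupervised setting, by contrast, the same kind of network would be trained without the labels, learning only from the structure of the images themselves.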
Another branch of machine learning, called reinforcement learning (RL), does not require a labeled training dataset. This approach involves placing AI in a simulated environment with clear
definitions of possible actions and rewards, and making it automatically learn to achieve tasks
through trial and error, usually without any prior knowledge. Most RL schemes employ two
preconditions: (a) offering a reward when the model performs the desired actions and (b)
ensuring the model continues to pursue rewards (Kaelbling, Littman, & Moore, 1996). RL is
particularly powerful for well-defined tasks, but it can be very computationally expensive when
the simulated environment involves many degrees of freedom. For complex tasks, curriculum
learning, a method that gradually provides harder tasks, can be a more effective way to attain
better performance with RL (Narvekar et al., 2020). Famous applications of RL include
AlphaGo, models that perfectly play Atari games, and autonomous driving.
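As a concrete illustration of the trial-and-error loop described above, the sketch below implements tabular Q-learning, one classic RL scheme, in Python. The five-cell corridor environment, the single reward for reaching the goal, and the parameter values are invented for illustration and are not drawn from any study cited here.

    import random

    # Environment: five cells in a row (0..4); the agent starts in cell 0 and an
    # episode ends when it reaches cell 4, which yields the only reward.
    n_states, n_actions = 5, 2              # actions: 0 = step left, 1 = step right
    q_table = [[0.0] * n_actions for _ in range(n_states)]
    alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate

    for episode in range(500):
        state = 0
        while state != n_states - 1:
            # Explore occasionally; otherwise exploit the current value estimates.
            if random.random() < epsilon:
                action = random.randrange(n_actions)
            else:
                action = max(range(n_actions), key=lambda a: q_table[state][a])
            next_state = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
            reward = 1.0 if next_state == n_states - 1 else 0.0  # reward for the desired outcome
            # Trial-and-error update: nudge the estimate toward reward plus discounted future value.
            best_next = max(q_table[next_state])
            q_table[state][action] += alpha * (reward + gamma * best_next - q_table[state][action])
            state = next_state

    # The learned policy is to step right in every cell.
    policy = [max(range(n_actions), key=lambda a: q_table[s][a]) for s in range(n_states)]

No labeled examples appear anywhere in this loop; the agent improves only by acting, observing rewards, and updating its value estimates.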
It is difficult to say which of these machine learning methods performs best because each task has its own most suitable learning method. Therefore, combining multiple machine learning techniques has become a common practice for maximizing AI training efficiency.
However, people without technical expertise may view AI from a different perspective.
In particular, people often form opinions about AI technology based on how they think it will
alter their lives. Fast and Horvitz (2017) found that people tend to be optimistic about AI, though
they form both optimistic expectations, such as higher productivity and improved decision-
making, and pessimistic expectations, such as losses of control over decision making and
employment.
Furthermore, people’s understanding of AI is often affected by its depiction in mass
media. Cave and Dihal (2019) found that fictional and non-fictional works concerning human-
machine interactions address similar expectations and concerns regarding AI. As people have
become more familiar with and interested in AI—which may reflect the degree to which
machines are becoming more accessible and interactive—more studies have examined how
people interact with machines. With the rapid development of AI technologies, the interaction
between humans and AI has been speculated to be a determinant of people’s future lifestyles
(Miller, 2019). This phenomenon necessitates more social scientific approaches to AI.
AI and Social Science
Individuals may believe they have full control when using technology. As a
counterargument, however, Marshall McLuhan famously said, “We shape our tools and
thereafter they shape us” (Culkin, 1967, p. 70). This means that we also get influenced by the
technologies we use. For instance, the overuse of digital technology and extensive screen time
have been found to negatively impact brain health, leading to attention-deficit symptoms,
impaired emotional and social intelligence, and technology addiction, among other problems
(Small et al., 2020). Moreover, technology is thought to influence society by impacting, for
example, social relationships and cultural practices; this influence is referred to as technological
determinism (Bimber, 1990).
The implication of AI can also be explained based on McLuhan’s conceptualization of
technology use. McLuhan is well-known for his conception of technologies as extensions of the
human body, such as vehicles that extend our feet, machines that extend our hands, and radios
that extend our voices (1994). Interestingly, Elon Musk recently made a similar argument that
humans have already become cyborgs and that phones are cybernetic extensions of themselves
(Rapier, 2019). He further argued that the data rate is the only limiter between people’s
biological selves and digital selves, a problem that can be resolved by neural lace, an artificially
implanted digital layer above the cortex (Solon, 2017). Simply put, he plans to extend the brain to machines in order to enhance human performance. So, while McLuhan hypothetically
conceptualized the relationship between humans and technologies, Musk is attempting to
actualize it through man-computer symbiosis. While both McLuhan and Musk focus solely on
the extension of the body, AI, considered macroscopically, may also extend a person’s social roles.
According to Goffman (1973), people take on many roles as social actors in many areas of life.
While McLuhan thought machines could only make people more efficient, autonomous
machines can even do labor on their behalf. Therefore, workers’ roles can be transferred to
machines. Because the psychological and social consequences of AI utilization may be
significant, how people think and feel about using AI is worth researching.
However, technology does not always have absolute and unilateral influences on people.
Even if innovative technologies keep emerging, only some will survive and be utilized.
Furthermore, it is not always the most innovative and advanced technologies that survive. Rogers
(1995) argued that the successful acceptance of an innovation requires not only the innovation
itself but also social factors, including communication channels and a social system. He believed
people’s communication capabilities and patterns are crucial because innovation diffusion relies
on how well information is transferred and shared. Because all individuals live as members of
society, communication and decision-making occur within a social system. Simply put, people’s
decisions about using a new technology rely on the information they share about it. Rogers also
claimed that aspects of the social system, such as social structures, relationships with opinion
leaders, and social norms, should be considered. For instance, societies that perceive privacy as
an essential social value are less likely to adopt facial-recognition technology than other
societies.
Social constructivists who oppose technological determinism and study the relation
between society and technology make a similar argument. According to Pinch and Bijker (1984),
two leading scholars of the social construction of technology (SCOT), technology is often
characterized by complex nonlinear dynamics between relevant social groups, interpretive
flexibility, closure, and stabilization. In other words, technologies are the outcome of
negotiations between people and groups, rather than solely the product of technical logic (Kline,
2001). For instance, they claimed that the current design of the bicycle is the product of social
engagements of various social groups with different plans for its use. According to this approach,
the definition of AI and its development plan is formed through interactions between society
members and the influence of social factors, such as trends, policies, infrastructures, and social
movements. These SCOT scholars also argue that the criteria for evaluating innovations are
socially constructed. When determining the adoption of AI, the standards used to make decisions
(and how these standards are defined) should be considered. For instance, AI-created art can be
evaluated in many ways, such as based on its aesthetic value or potential influence on the art
industry. The measurements used to assess AI-created art are selected through social procedures,
such as conceptualizing artistic value shared among expert communities and setting proper norms and
patterns of behavior that bring legitimacy to the decision-making process (Lewandowska &
Smolarska, 2020). According to this theoretical perspective, the technological aspects of AI are
not always the only factors that shape its diffusion. It is not just about new technology, but also
other elements like social trends and identity. Steve Jobs explained Nike’s branding as not
about the technology but about the meaning conveyed by it:
Remember, Nike sells a commodity. They sell shoes. And yet when
you think of Nike, you feel something different than a shoe
company. In their ads, they don’t ever talk about their products.
They don’t ever tell you about their air soles and why they’re
better than Reebok’s air soles. What does Nike do? They honor
great athletes, and they honor great athletics. That’s who they are;
that’s what they are about.
If AI is like Nike’s shoes in Jobs’ vision, what are its meanings, cultural contexts, and roles in identity formation? Knowing that these elements have a great impact on how a technology diffuses, researchers need to consider what understandings people have about AI and
which social conditions influence its diffusion process. The worth of AI can be interpreted
differently based on society's values. For instance, an AI-powered face recognition program will
increase the detection rate for crimes. However, it can also be used as a Big Brother surveillance
technology that violates citizens' privacy. In this case, individuals' exposure to crime and their concern for the right to privacy will determine the adoption of the AI program. AI is
not a technology that brings either heaven or hell; it brings both. Therefore, technology adoption
is a matter of society choosing to focus more on either the bright side or the dark side.
In sum, humans’ relationships with machines and technology are mutual. Therefore,
extensive considerations of how individuals and society change when adopting new technology
are essential. Also, the social perception of AI and its technological traits should be considered to
better understand human-AI interactions.
Value of Social Science Research on AI
Although AI research has been conducted by social scientists, it is still often regarded as
a subfield of engineering or computer science (Zhai et al., 2020). Even in the job market, most
applicants who study AI are recruited by computer science departments. While there may be
multiple reasons for this, one is a lack of appreciation for the value of social scientific
approaches to AI. For instance, AI research papers now cite social science papers much less than
before. One reason is that social scientists often fail to keep up with the increasing complexity of
AI technology caused by its rapid development. Also, current AI studies are driven mainly by private companies like Google and Microsoft because AI research requires expensive infrastructure that social science schools cannot easily afford (Kwok, 2019). However, this does not mean that social scientists are no longer needed in the field of AI.
Social scientists should participate in studies on AI as there are questions that cannot be
answered by engineering approaches alone. Not only will integrating social science perspectives lead to AI research of greater depth and broader generalizability, but AI has both
individual and societal levels of influence, particularly in terms of culture, social fairness, and
infrastructure, that require humanistic approaches to understand.
First, social scientists’ findings can be used to inform AI development, particularly regarding the cultural performance of AI. As mentioned previously, AI needs a big-data approach, with many labeled cat and dog images, to learn how to classify them, and a fully trained model can correctly label such images (Tripathy & Singh, 2022). However, not every problem an AI program faces in the real world is as simple as the cat-or-dog test, which has a clear answer. AI artists, which are trained with a similar logic, can be an example of this complexity.
Machine learning processes such as generative adversarial networks (GANs) and creative adversarial networks (CANs) are used to make AI create artworks; such processes are built with unsupervised learning. Both GANs and CANs utilize two sub-networks: a generator and a discriminator. In a GAN, the generator produces new images based on patterns found in a training dataset of human-created artwork, and the discriminator then tries to distinguish the generated images from the training-dataset images. As the generator and discriminator keep competing and improving, this procedure leads to the production of new images that are similar to—if not indistinguishable from—human-created art. CANs, an advanced type of GAN, add creativity to the art-creation process by pushing the generator to produce novel (but not too novel) artwork that possesses stylistic ambiguity (Elgammal et al., 2017). Despite these networks’ technological achievements and delicate calculations, people still may not accept this logic as creativity. There are two key questions that require social science approaches to answer.
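The generator-discriminator competition described above can be sketched in a few lines of Python with PyTorch. The example below is a deliberately simplified stand-in, assuming two-dimensional points as "artworks" instead of images and tiny fully connected networks instead of the architectures used by Elgammal et al. (2017); it illustrates the adversarial training loop, not the cited systems.

    import torch
    import torch.nn as nn

    # Toy "artworks": two-dimensional points drawn from a fixed distribution,
    # standing in for the human-created images a real GAN would train on.
    def real_artworks(n):
        return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

    generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
    discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    for step in range(2000):
        real = real_artworks(64)
        fake = generator(torch.randn(64, 8))  # the generator proposes new "artworks"

        # The discriminator learns to tell training examples (label 1) from generated ones (label 0).
        d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
                  + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
        d_opt.zero_grad()
        d_loss.backward()
        d_opt.step()

        # The generator learns to produce samples the discriminator accepts as real.
        g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()

    # After this competition, samples from generator(torch.randn(n, 8)) cluster around the
    # "real" distribution, just as GAN-generated images come to resemble human-created art.

The point of the sketch is the division of labor: neither network is told what "good art" looks like; each only learns from the other's judgments, which is exactly why audience perceptions of the output remain an open, social-scientific question.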
The first question is how to define a machine’s artistic novelty. In other words, some
people may see a piece of AI-created art as artistic while others do not. Because there is no
objective way to calculate artistic merit, the evaluation of artwork relies on the subjective
assessments of humans. Even if programmers think their creative adversarial network possesses
artistry, it is the audience of the art who ultimately determines this. Therefore, a proper
understanding of people's artistic tastes is required to build a true AI artist. Social scientists’
findings from humans can be utilized to make an AI-created art version of the cat-or-dog
question in deep learning (i.e., labeling each image as “artistic” or “not artistic” based on
people’s responses).
The second question is whether possessing human-distinctive traits is what people want
from AI. Not everyone may want to see machines perform exactly like humans. Engineering
alone cannot figure out what AI attributes people want because, unlike the cat vs. dog test for
training AI, preferences do not have right or wrong answers. In this way, the studies of social scientists concerning humans and society are necessary alongside studies concerning machines.
Secondly, AI research should also consider social fairness. The development and
evaluation of technology, including AI, should consider user experiences (Jalil et al., 2019;
Tscheligi et al., 2015). Understanding the human needs behind technology usage can improve it
and make more people appreciate it (De Ruyter et al., 2005). However, recent cases have
revealed how AI technology fails such expectations and needs, particularly regarding fairness
and equity. One case was the introduction of Apple’s AI credit evaluator. Webb and Martinuzzi
(2019) reported that after Apple introduced Apple Card, which uses AI algorithms to assess
users’ credit scores, there was a gender bias in the algorithm that caused it to give higher scores
to men than women with similar qualifications. Unfortunately, this was not a test, and real people
were affected by the AI’s decisions (Knight, 2019).
Unethical and biased decisions made by AIs are not limited to sexism; they also extend
to racism in some cases. For instance, AI-powered crime predictors gave Black defendants a
higher risk score—in other words, they were assigned a greater probability of committing future
crimes than white defendants with similar crime records (Angwin et al., 2016). Additional
examples of problematic and offensive biases exhibited by AI include auto-tagging pictures of
Black people as gorillas in Google Photos (Grush, 2015) and the inappropriate and unethical
comments made by Tay, a chatbot developed by Microsoft (Wakefield, 2016).
The first two cases described above are more crucial than the others because they have
another commonality: they involve programs that determine the financial and public standings of
individuals, which can directly impact their livelihoods and social statuses. All the above-
mentioned cases are representative examples of company-oriented AI development that did not
appropriately consider user expectations regarding fair treatment. In other words, these problems happened because of the companies’ lack of understanding of people, particularly their users, when
developing the algorithms. Social scientists’ humanistic approaches to providing social
perceptions of AI and people’s demands can contribute to developing this innovative technology
to advance social good.
Finally, AI effects change at a societal level, including changes to infrastructure. Even if an individual does not actively use AI devices, their life will be affected by the social changes prompted by AI. The World Economic Forum defined the current phase of technological
advances, which includes AI, robotics, the Internet of Things, autonomous machines, 3D
printing, nanotechnology, and quantum computing, as the fourth industrial revolution owing to
the exponential speed of technological development (Schwab, 2016). Many experts have called
the fourth industrial revolution a revolution because the skills and techniques that industries
prefer have changed based on machines’ capabilities, thus reshaping people’s lifestyles and social
structures (Ayinde & Kirkwood, 2020).
Therefore, as AI develops, there should be a societal level of preparation to aid the
adoption of new changes. For instance, while engineers focus on developing fully autonomous
cars, the rules and policies to be imposed on such vehicles should be prepared to prevent
problems. Once the road systems and policies are put in place for self-driving cars, all drivers—
even those with outdated cars—should comply with them until, if it happens, every car on the
street is autonomous. Moreover, because having only self-driving cars on the road is safer than
sharing the road with human drivers, both the industry and regulators will expedite the
substitution (Sivak & Schoettle, 2015).
Therefore, social scientists should study the necessary considerations for helping people
adapt to the transitions. A lack of appropriate protocols and guidance will hinder the distribution
of autonomous cars even if the vehicles are safer and more efficient than traditional cars.
Understanding people’s attitudes toward self-driving cars, such as how they attribute
responsibility regarding self-driving cars’ accidents, will facilitate the development of rules and
regulations. While the study of the interactions between humans and AI has multiple names, this
dissertation uses the term human-machine communication (HMC).
Recent HMC Studies
Carl Sagan once famously said, “You have to know the past to understand the present.”
To find what new research is needed today, a literature review examining past HMC studies was
conducted. According to Guzman (2018), HMC considers the role of machines as independent
communicators. In other words, compared to previous studies on communication and technology,
HMC studies are more likely to examine machines as anthropomorphic social entities. However,
this does not mean that the application of HMC is limited to cutting-edge interactive
technologies, such as social robots and AI agents. The study of the interactions between humans
and machines began in the 1970s, primarily in the field of computer science; however, the field
has expanded to include cognitive science and psychology (Grudin, 2005).
In the realm of social science, the computers are social actors (CASA) paradigm is a
theoretical framework that has often been used in the HMC field. The CASA framework was
developed in response to studies of people’s reactions to televisions and computers (Nass &
Moon, 2000), and researchers using it have argued that people attribute social behaviors to
machines even when they acknowledge that machines are not human (Sundar & Nass, 2000;
Nass, Steuer, & Tauber, 1994; Guzman, 2018). CASA draws on the concept of mindlessness from Langer (1992), who claims that an individual’s state of unconscious awareness depends on the context of interactions and relies on past experiences and knowledge. The logic is similar to
schema theory. A schema is a cognitive framework that is built upon an individual's past
experiences in order to perceive, comprehend, and recall new information (Bartlett, 1932;
Brewer & Treyens, 1981). Schemata function to decrease the complexity of situations in order to
facilitate receiving new information so that typical situations can be processed without the effort
of acknowledging and interpreting familiar objects and ideas (Harris & Sanborn, 2014; Kleider,
et al., 2008). Both schema theory and CASA assume people’s intuitive perceptions are based on preexisting knowledge and experiences, which leads to indistinguishable behaviors toward machines and humans. According to this theoretical perspective, people perceive machines as interlocutors
in communications. While HMC studies have generally aligned with developing technologies,
such research can be broadly categorized into two groups: analyses of people’s perceptions of
machine agents and evaluations of machine performance.
The CASA theoretical framework has driven most HMC research (Guzman, 2018).
Because the theory is focused on how people perceive machine actors, most CASA studies have
examined how people understand machines in different settings. The primary purpose of these
studies has been to determine whether people attribute social traits to machines, such as chatbots
and social robots, in a manner similar to when interacting with humans (Kim, Park, & Sundar,
2013; Edwards et al., 2016; Edwards et al., 2019).
Ethics are a considerable concern in the implementation of AI in society (Fast &
Horvitz, 2017). To address this concern, researchers have examined how people react to
situations in which machines make unethical decisions. For instance, Shank and DeSanti (2018)
examined how people reacted to AI agents that committed moral violations in real-world cases.
The researchers found that people with technical knowledge of AI were more likely to attribute
blame to AI agents than external factors. Further, multiple studies have considered the
humanness of technologies. For instance, Xu (2019) found that people evaluated robots with
human traits as being more trustworthy and intimate than robots with only mechanical traits.
However, several studies have revealed that people perceive machine agents and human
interlocutors differently. For instance, people exhibited various personality traits when
interacting with AI interlocutors when compared to acting with other humans—for example, they
were more open, agreeable, extroverted, conscientious, and self-disclosing (Mou & Xu, 2017). It
17
is expected that the phenomenon may decrease when machine agents’ features and behaviors are
indistinguishably humanlike. In addition, one study indicated that people do not feel sympathy
toward machines (Bartneck et al., 2005). These studies do not support CASA. Thus, more
research is required regarding people’s perceptions of machine agents to more comprehensively
understand the applicability of CASA.
While most studies have concentrated on machine interlocutors, several have examined
how people evaluate machine performance. The number of such studies has increased as AI has
increasingly been used to perform a wider range of tasks. Such studies have also been less
frequently based on CASA than studies in which AI agents were assessed. For instance, a
number of news outlets have begun to adopt AI journalists, and studies have examined how
people react to news articles written by AI systems (Moses, 2017; Peiser, 2019; Zheng, Zhong, &
Yang, 2018). They found that the identity of the writer (either a human or an algorithm) did not
influence people’s evaluations of news articles.
Furthermore, studies have examined how people react to art generated by creative AI.
They found that although artwork created by AI was indiscernible from that created by human
artists, participant evaluations were highly influenced by their attitudes regarding the creative
potential of AI systems (Chamberlain et al., 2017; Hong & Curran, 2019). These studies
demonstrate that the context of AI performance influences evaluations of AI-generated work’s
quality. Studies examining human perceptions of AI performance are expected to diversify as AI
becomes more versatile and potentially fulfills more functions, including creating movies and
predicting future criminals (Angwin et al., 2016; Goode, 2018).
Even though this dissertation distinguishes studies concerning human perceptions of AI
agents from AI performance evaluations, these categories are not mutually exclusive. Perceptions
of actions and actors are often measured together (Zheng, Zhong, & Yang, 2018), perhaps
because evaluations of AI performance rely on the perceptions of its actors. In other words, there
is a positive relation between attitudes toward AI and the evaluation of AI performance (Hong &
Curran, 2019). However, these perceptions must be differentiated because people are not always
logical and consistent. For instance, some people who believe that AI systems cannot be creative
still highly value artwork that they know was created by AI (Hong, 2018).
From the findings of the above-mentioned HMC studies, it is possible to speculate that
people’s perceptions of AI are malleable because their attitudes can vary depending on the
communication contexts or circumstances present when interacting with machines. Therefore,
further studies should examine the factors that affect people’s attitudes toward AI.
Three Dimensions of HMC
Based on the social construction of technology perspective as the general theoretical
framework, this dissertation explores the factors that determine the evaluation of AI and its
performance, which is the main research question that its three studies share. In each chapter, the
study will look deeper into different cases related to AI’s cultural, social fairness-related, and
infrastructural influences described above. The analyses are based on three key dimensions of
HMC research suggested by Guzman and Lewis (2020): functional, relational, and ontological.
Building upon these dimensions, the current work attempts to add to the understanding of three
major topics regarding HMC that best match the three contexts that Guzman and Lewis
suggested: AI utility, AI creativity, and AI ethics (see Figure 2).
Functional Approach to AI Utility. According to Guzman and Lewis (2020), the
functional approach to AI focuses on how productive a machine is in its given role. AI is often
regarded as highly efficient, creating expectations concerning its ability to perform logical tasks
for humans (Yokoi & Nakayachi, 2022). Therefore, white-collar jobs are perceived as more
susceptible to AI than to other technologies (Liu, 2019; Versace, Hawkins, & Abssy, 2021). AI
can accomplish several tasks involving logical thinking, including various tasks in the fields of
transportation, medical diagnoses, and job recruitment (Waldrop, 2015; Topol, 2019; Upadhyay
& Khandelwal, 2018). The utility of AI performance on such tasks is evaluated by asking
whether the AI agent is worth adopting. This question often revolves around whether AI
outperforms humans because people often think machines have characteristics that humans do
not (Sundar, 2020).
Figure 2. Three HMC Divisions
However, AI can still make fatal errors that can violate people’s expectations. For
example, in 2018, an Uber self-driving car struck and killed a pedestrian during a test drive. The
pedestrian was jaywalking in the dark, and the car failed to detect her. Hearing such news
influences people’s understanding of self-driving cars, which may prompt an aversion to AI
technology. Overall, people determine the need to interact with machines based on the costs and
benefits, illustrating a pattern similar to adopting new innovations. However, HMC research is
not always related to technology acceptance. HMC studies see AI as a communicator and a
message source in social interactions (Lewis, Guzman, & Schmidt, 2019). Therefore, recent
HMC studies have been the human-like traits of AI, such as credibility (Araujo et al., 2020; Lee
et al., 2020), attributed responsibility (Hohenstein & Jung, 2020), and social presence (Kang &
Kim, 2021; Liu & Wei, 2021) as dependent variables instead of usage intention.
Relational Approach to AI Creativity. As machines become more interactive and
human-like, HMC is becoming similar to interpersonal communication, and people are building
social relationships with AI agents (Spence, 2019). To improve the general understanding of
HMC, researchers have begun to ask questions about the social positions of people and AI in
human-machine relationships, particularly regarding their roles during interactions with each
other (Guzman & Lewis, 2020). Therefore, understanding how people perceive the roles of AI is
crucial to studying human-machine relationships. A role is defined as the distinctive patterns of
action individuals exhibit according to their own and others’ behavioral expectations, which are
derived from social positions (Biddle, 1986). Because social behaviors are always derived from
every interlocutor’s role, interacting with others without a role is not possible (Goffman, 1973).
For instance, when a person you have not met before comes and talks to you, the individual is
playing the role of a stranger.
Theoretically, machines can have roles when they are assigned social positions that
involve communications. Because people interpret an AI agent’s role based on their knowledge
and experience of human actors, assigning human traits, such as gender, to AI was found to
influence people’s perception of AI’s role (Suchman, 2006). It seems that machines can have an
increasing number of roles that are becoming more diverse. Moreover, AI having social roles can
create power dynamics between humans and machines. For instance, AI recruiters have authority
over human job candidates, which violates the general expectation of human-machine
relationships (i.e., humans having more power than machines). This tension between humans and
machines is also worthy of research.
On the other hand, contemporary AI agents undertake roles that require logical tasks,
such as talent assessments and crime predictions (Berk, 2021; Van Esch, Black, & Ferolie,
2019). In response, academia and industry have recently attempted to build AI suitable for roles
requiring a high level of humanity and creativity that can accomplish artistic performances,
including painting (Lima et al., 2021), composing music (Bonnici et al., 2021), crafting movie
trailers (Min, 2019), and writing poetry (Köbis & Mossink, 2021), which is more challenging
than building AI for analytical roles.
AI’s creative performances are also evaluated based on whether people can accept the
role of AI. A previous study found that people who were critical of AI’s creativity did not
appreciate AI-created paintings (Hong & Curran, 2019). Therefore, studying how people react to
different types of AI social roles is a significant element in the relational approaches of HMC
research.
Ontological Approach to AI Ethics. The ontological approach to HMC focuses on the
ontological borders between humans and machines (Guzman & Lewis, 2020). While the authors initially call it "metaphysical," this dissertation uses "ontological" instead because that term seems to better capture the meaning the authors intend to convey. As machines have become more
human-like, people have become confused regarding how they should define their relationship
with machines. For example, people may face ethical issues when interacting with AI agents in
social settings because most social norms and rules have been constructed based on interpersonal
interactions, which makes current ethical standards less applicable to HMC. One study found
that people feel uncomfortable when instructed to inflict harm on robots because of perceived
sociability, power imbalances, and feelings of connection (Riddoch & Cross, 2021). Guzman and
Lewis (2020) see applying human-human norms to HMC as an ontological aspect of HMC
because machines’ performance of social roles breaches the ontological supposition upon which
current ethical codes are based, namely, that humans are the only social actors.
The fact that machines are becoming more human-like also necessitates a more extensive
understanding of the social implications of AI ethics. After the fatal crash of Uber’s self-driving
car, the responsibility attribution was diversified—the driver, the company, the car, the victim,
and the state of Arizona were all blamed (Fernandez, 2020). This distinctive pattern of reaction
was exhibited because, even if AI drivers can drive perfectly, people still distinguish them from
human drivers (i.e., AI cannot be responsible for such incidents and, therefore, cannot be
punished). Moreover, Amazon used AI to automatically fire employees who failed to meet
productivity quotas (Lecher, 2019). Similarly, delivery workers in South Korea are at risk
because AI pushes them to deliver more products in less time by penalizing them if deadlines are
not met (Kim, 2021). While it seems like its coder is responsible for the outcome, it is hard to
attribute full responsibility to them since AI autonomously makes the decisions. In other words,
the coder may be a lousy creator of the algorithm but not a wrongdoer. It is similar to parents being blamed for poor parenting when their child commits a misdeed. Regardless of public opinion, AI tech companies maintain this stance and frame it as the AI’s fault in order to avoid
responsibility.
Because AI only sees numbers while ignoring people and their circumstances, there are
concerns that some decisions made by AI can detrimentally affect employees. AI cannot resolve
such issues without input from a programmer because of its inability to distinguish right
from wrong. Recent ethical issues regarding AI also bring into question whether AI should be
accepted as a social entity even if it is 100% technologically prepared to perform tasks. To date,
84 different guidelines have been suggested to address such concerns regarding AI ethics, but a
unified guideline that encompasses those ideas is needed so that people can easily recognize which instructions to follow (Jobin, Ienca, & Vayena, 2019). Also, because most
current guidelines are vague and possess no discernible impact, more efforts and studies on the
tangible and practical implementation of AI ethics are needed (Hagendorff, 2020).
Preview of Studies
In subsequent chapters, I will draw upon the findings of three previous experiments to
examine the social perceptions of various functions of AI. In addressing the varying
performances of AI, these studies shared the same goal: to determine the social factors and AI
attributes that influence people’s perceptions and assessments of AI. This is because, as
previously mentioned, people’s decisions about AI acceptance are based not only on its function
but also on internal and external factors, such as attitudes, social norms, and the human-like traits
of machine agents. Nevertheless, because these studies were based on different theories, each
study required individual hypotheses and research designs. Here, I briefly preview the three
studies included in this dissertation.
The first study is about the functional aspect of HMC, focusing on perceptions of AI based on its overachievement of or failure at a task. Also, it compared the assessment of an AI
driver and a human driver to test the effect of actor types. Schema theory was employed, as this
study aimed to test how people’s preexisting biases about AI influence their perceptions of AI
performance. The CASA paradigm was also considered to see whether people assess AI
performances the same way they assess human performances. Finally, based on attribution
theory, the effect of actor types on performance evaluation was examined. The rescuing scenario
concerned a driver (i.e., AI vs. human) saving a passenger from a fatal acute disease. On the
other hand, the accident scenario involved a driver crashing a car, killing the passenger. After
reading one of these articles, all participants were asked to participate in a survey. The study’s
finding can explain what variables other than technological traits influence the evaluation of
autonomous vehicles.
The second study, which also used an experimental between-subjects design, examined
how people perceive AI creativity by focusing on their reactions to AI-composed music. This
study is about the relational aspect of HMC, exploring the response to AI creativity. The main
point of this study is to test whether AI having a role that requires human traits influences the
evaluation of AI performance. The main theoretical framework of this study was expectancy-
violation theory, as it was anticipated that people’s assessments would vary based on their
expectations. In this way, this study examined the effects of met or unmet expectations regarding
AI-composed music and the valence of any violated expectations. Additionally, because people
can have different expectations for different musical styles, the influence of genre—electronic
dance music (EDM) vs. classical music—was also tested. Finally, the study tested whether
people’s evaluations of AI-composed music change based on their attitudes toward AI creativity.
The study analyzed the responses of participants who completed an online survey after
listening to a randomly assigned AI-composed song. Its finding can illustrate the effect of
accepting the machine’s role, particularly the one that requires a strong humanlike characteristic
(i.e., creativity), on the performance evaluation.
Finally, the third study examined people’s perceptions of AI ethics, focusing on people’s
reactions when seeing AI decisions that violate general moral standards. This study is based on
the ontological aspect of HMC. In this case, the violation was racist discrimination based on an
actual case of an AI crime predictor that gave higher risk scores to black defendants than white
defendants with similar criminal records. This study used a between-subjects experimental design to understand people’s perceptions of such unjust decisions. The type of crime
predictor (i.e., human vs. AI) and the seriousness of the crimes (i.e., high vs. low) were
controlled. Furthermore, I tested the relation between the level of autonomy and the attributed
responsibility for a racially biased decision. The CASA paradigm and attribution theory
constituted the theoretical frameworks of this study. Participants were asked to read one of four
scenarios with the same story depicting a crime predictor’s unfair decision and respond to an
online survey. Its finding can contribute to the understanding of the applicability of ethical
standards to AI despite the ontological difference between humans and machines.
Chapter 2: Perception of AI Utility and Self-Driving Cars
While AI performs various types of roles as a social actor, most of them are still restricted
to making logical decisions. This is because most current machine learning aims to train AI to find patterns in a dataset and make decisions that follow those patterns. The first study
of this dissertation is about the evaluation of AI’s logical performance from the AI utility
perspective based on the cases of self-driving cars. Self-driving cars are autonomous vehicles
that have their own agency to drive and make decisions using information retrieved from a 3D
light detection and ranging (LiDAR) system and sensors without human inputs, making them a
representative AI technology (Ratan, 2019; Zhang et al., 2018). Deploying self-driving cars on
the road is expected to increase the comfort of their users and reduce traffic problems, including
accidents and traffic congestion, by minimizing human involvement (Dixon et al., 2020; Teoh &
Kidd, 2017).
Therefore, the transportation industry is attempting to build fully automated self-driving
cars (Gordon, 2021). There are five levels of automated driving systems. Level five, which is the
final level, is characterized by vehicles that can function without any human intervention under
any conditions. Experts expect this level to be achieved by 2030, owing to the rapid
advancement of this technology (Bakioglu & Atahan, 2021; Yoganandhan et al., 2020).
Among many AI technologies, the self-driving car was examined because it can generate
benefits in terms of safety and efficiency, as well as detrimental outcomes due to accidents.
Similar to the acceptance of other technologies, people’s acceptance of self-driving cars matters
as much as technological advancements. Positive and negative news and events about self-
driving cars often determine public perceptions of this technology. For instance, a study found
that people’s intentions to purchase self-driving cars decreased after the Uber self-driving car accident in Arizona in March 2018 but increased when people acknowledged the potential of self-driving cars as a form of public transportation (Wicki, 2021).
Unlike other studies, which adopted intention as a dependent variable, the present study
focuses on the attributed responsibility using the human-machine communication approach
because self-driving cars are now seen not simply as vehicles but as independent agents (Lee et
al., 2015; Ratan, 2019). Still, this study focuses on AI utility, as it examines how people think
about AI drivers compared to human drivers. When people view AI drivers more positively, they
are likely to prefer them to human drivers. Of note, the term “human drivers” does not refer to
individuals who drive their own cars. Thus, the question is not whether people choose to drive by
themselves or use a self-driving mode; the question is whether people choose to ride in a taxi
driven by a human driver or one driven by an AI driver.
Furthermore, people often make decisions about social relationships based on the costs
and benefits of each option. Social exchange theory argues that social behaviors are exchanges
that people engage in to maximize their benefits while suffering minimal costs (Cook et al.,
2013). Therefore, performance should be evaluated within the realm of interactions with
machines. The interaction is expected to be influenced by the traits of the actors. Therefore, AI
drivers may be evaluated differently than human drivers because of people’s preexisting
knowledge and attitudes toward machines. Based on SCOT, however, the actor type is not the
only factor that affects the assessment; other external components, such as the potential
consequences expected from the actor's performance, also matter. Based on the SCOT approach, this
study assumes the valence of outcomes alters the perception of AI drivers’ performance. Overall,
this study attempts to analyze people’s attitudes toward an AI driver (and different performances)
and compare these attitudes to those toward a human driver. The hypotheses and research
questions are based on three theoretical frameworks: attribution theory, the CASA paradigm, and
schema theory.
Attribution Theory
Because this study involves incidents that require clarification of the responsibility of
actions, attribution theory can also have a theoretical implication for this study. Attribution
theory explains people’s cognitive patterns when identifying causal links to judge who is
responsible for an event, particularly blame attribution in accidents (Fiske & Taylor, 1991).
Therefore, this theory is often used to explain how people assign blame after accidents. According
to it, people attribute more blame to perpetrators they perceive as less similar to themselves, a
form of ego protection in case the same event happens to them (Shaver, 1970; Burger, 1981).
So far, this theory has been developed based only on the assumption of human
perpetrators and has not been adequately tested in HMC settings. Due to ontological differences
(i.e., machine vs. human), it is still assumed that people identify with AI drivers less than they do
with human drivers. Therefore, it is hypothesized that people will blame AI drivers more than
human drivers for the same actions:
H1. People will attribute more responsibility to an AI driver than a human driver when
involved in a negative event.
While attribution theory can describe the attribution of responsibility in positive cases
(Sirin & Villalobos, 2011; Medway & Lowe, 1975), the relation between perceived similarities
and positive responsibility attribution has not been comprehensively tested. Still, evidence
supports this relation. It was found that people feel anxiety when complimenting others, as they
are concerned about the unexpected results their appraisals can bring, which often reduces their
intentions to offer appraisals (Boothby & Bohns, 2020). It can therefore be assumed that people
will be more cautious when praising AI drivers, from which they distinguish themselves, than
when praising human drivers, whom they perceive as similar, because fewer social attachments and
shared experiences are expected (Cha et al., 2020; Fox et al., 2015). Therefore, this study
presumes that people will give more credit to human drivers than AI drivers.
RQ1. Is there an interaction between the valence of an event’s outcome (i.e., positive or
negative) and the type of driver (human vs. AI)?
RQ2. Will people attribute less responsibility to an AI-enabled driver than a human driver
in a positive outcome?
Computers Are Social Actors (CASA) and Schema Theory
The CASA paradigm and schema theory were used in this study for the same reason: to
explain the similarity between human-human interactions and human-machine interactions. A
schema is a mental shortcut used to process new information based on past experiences and
knowledge, thus facilitating the understanding of typical situations with reduced effort (Brewer
& Treyens, 1981). In other words, people rely on their knowledge or previous experiences to
understand new information. Therefore, according to the schema theory, when people interact
with a machine agent, they process the situation using their knowledge from previous
interactions with humans (Velez et al., 2019). As mentioned in the first chapter, the CASA
paradigm makes a similar claim that people interact with machines just as they do with humans
because they see machines as social actors (Nass & Moon, 2000).
These two theories share a commonality, as CASA theory is developed based on Langer’s
(1992) conceptualization of mindlessness, which, similar to schema theory, refers to people’s
unconscious dependence on the interaction context and past experiences when processing
information. Based on these theories, people’s knowledge about human drivers will be retrieved
when seeing AI drivers’ human-like performances, leading them to unconsciously attribute social
norms to the situation. As this study examines whether people attribute responsibility to
machines the same way they do to humans, the following hypothesis is proposed based on the
CASA paradigm and schema theory:
H2. For both an AI driver and a human driver, the actions of a driver in a positive event
will be more positively assessed than the actions of a driver in a negative event.
Method
This study used a 2x2 between-subjects experimental design that considered both the
identity of agents in a situation (i.e., human vs. AI) and the varying valence of an event's
outcomes (i.e., positive vs. negative). In a vignette design, a fictitious news article with a
positive scenario was presented as a rescue event, in which the agent (AI or human) saved a
driver and took them to a hospital. The other version of a fictitious news article involved a
negative scenario (a crash). The study included fictitious news articles for all four combinations
(AI rescue, AI crash, human rescue, and human crash, see below). The dependent variables were
the subjects’ perception of the agents and the level of responsibility attributed to the driver.
Participants
Amazon Mechanical Turk (MTurk) was used to recruit participants. Those who took the
survey more than once were excluded, leaving 230 participants from the 273 who were initially
recruited. A power analysis using G*Power (Faul, Erdfelder, Lang, & Buchner, 2007) suggested
the sample was adequate to detect medium-sized effects. The youngest participant
was 19 years old, while the oldest was 70 years old (M= 33.21, SD= 12.64). In terms of gender,
62% of participants identified as male, and 38% identified as female.
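For readers who want to reproduce this step with open-source tools, the check could be approximated roughly as follows. This is a minimal Python sketch using statsmodels rather than G*Power, and it assumes conventional medium effect sizes (Cohen's d = 0.5 for two-group comparisons, f = 0.25 for the four-cell design), alpha = .05, and power = .80; these inputs are assumptions for illustration, not values reported in this dissertation.

# Rough Python counterpart to the G*Power check mentioned above; all inputs
# below are conventional assumptions, not figures taken from this study.
from statsmodels.stats.power import TTestIndPower, FTestAnovaPower

# Independent-samples t-test (two conditions), assumed medium effect d = 0.5
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)

# One-way ANOVA across the four cells, assumed medium effect f = 0.25
n_total = FTestAnovaPower().solve_power(effect_size=0.25, alpha=0.05,
                                        power=0.8, k_groups=4)

print(round(n_per_group), round(n_total))  # required n per group / total n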
Procedure
Participants who agreed to participate in the study were given one of two types of
reading stimuli in the form of a fictitious news article (see Figure 3). Using articles as vignettes
is a commonly used method to test people’s attitudes and reactions (Billard, 2018; Hong, 2020).
One article was based on an actual news report detailing how an AI driver saved its passenger
from a potentially fatal acute pulmonary embolism (Ferris, 2016). The following paragraphs of
the story were used to describe the rescuing condition with an AI driver:
The artificial intelligent (AI) driving system in a self-driving car is
being credited with having helped save a man’s life after its AI
driver mode was enabled and drove him to a hospital when he
suffered a pulmonary embolism.
Yesterday, 29-year-old David McGill was driving to work when he
felt an excruciating pain in his abdomen and chest. McGill
recounted how he set the autonomous driving function on his self-
driving car. “I thought it was easier to have the car drive me to the
hospital rather than calling an ambulance,” McGill said. After
being enabled, the self-driving car’s AI driver drove McGill to a
nearby hospital.
In the human driver condition, this article replaced the AI and introduced the driver as a
“rideshare driver” for a fictitious company called GreenLight Rideshare.
The negative outcome (a crash) news articles were based on a hypothetical car accident
that led to the death of a passenger. A story about an accident causing the death of a passenger
was used because it directly contrasted the article that was about saving a person from immediate
death. The following paragraphs are a sample from the story reflecting the accident caused by the
human driver:
Yesterday, a GreenLight rideshare car was involved in an accident
after it suddenly lost control due to a slippery road condition
because of heavy rain. A passenger in the car died in the accident.
Local police said the car was driven by 41-year-old Michael Smith.
The car suddenly lost control and collided with a tree. There was
one passenger, 29-year-old David McGill, inside the car at the time
of the crash. He was pronounced dead when first responders
arrived at the scene of the accident. The car did not hit any other
pedestrians, and the passenger was the only casualty.
In the AI condition, the human driver was replaced with a self-driving AI agent. The full articles
are provided in Appendix A.
After reading one of the news articles, participants were asked to report their perceptions
of the driver and their thoughts on how responsible the driver was for the incident. They also
reported their attitudes toward AI, their knowledge about AI, and their demographic information.
Participants were debriefed after they finished the survey.
Figure 3. Sample Stimulus in a News Article Format
Measures
Because this study examines different attitudes towards the agent and the outcome,
different measurements were used for each case. Also, scales measuring attitudes toward AI and
participants’ AI knowledge were created and measured (see Appendix B). The order of the scales
and the questions within each scale were randomized.
Perception of Drivers. Participants’ impressions of the driver were measured using a
scale to assess participants’ opinions about the agent (Brave, Nass, & Hutchison, 2005). The
measurement tool included questions asking how caring the driver is and how trustworthy the
driver seems (e.g., participants could choose between sincere vs. insincere or friendly vs.
unfriendly). An exploratory factor analysis (EFA) of 16 items using principal axis factoring with
orthogonal (varimax) rotation yielded one factor with eigenvalues >1 (KMO= .97), accounting
for 69.32% of the total variance with a significant Bartlett's test of sphericity, χ² = 3550.34,
p<.001. This seven-point bipolar scale achieved high reliability (α= .97). Higher average ratings
indicate more positive perceptions toward the driver.
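The factor-analytic step above could be sketched in Python as follows. This is only an illustrative sketch using the factor_analyzer package on randomly generated placeholder data; the variable names (perception, item1 through item16) are hypothetical, and the real survey responses would take the place of the random numbers.

# Illustrative sketch of the EFA reported above, on made-up placeholder data.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

rng = np.random.default_rng(0)
latent = rng.normal(size=(230, 1))                       # one underlying factor
items = latent + rng.normal(scale=0.5, size=(230, 16))   # 16 correlated items
perception = pd.DataFrame(items, columns=[f"item{i+1}" for i in range(16)])

_, kmo_total = calculate_kmo(perception)                 # sampling adequacy (KMO)
chi_square, p_value = calculate_bartlett_sphericity(perception)

efa = FactorAnalyzer(rotation='varimax', method='principal')
efa.fit(perception)
eigenvalues, _ = efa.get_eigenvalues()                   # how many exceed 1?
print(kmo_total, chi_square, p_value, (eigenvalues > 1).sum())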
Evaluation of Attributed Responsibility. To measure how much responsibility is
attributed to the driver, this study used the revised causal dimension scale (CDS-II) with four
subscales: locus of causality (α= .84), external control (α= .85), personal control (α= .86), and
stability (α= .83; McAuley, Duncan, & Russell, 1992). Since four subscales are suggested, a
confirmatory factor analysis (CFA) was conducted. The results (NFI= .92, CFI= .96, GFI= .93,
TLI= .94, SRMR= .05, RMSEA= .06) showed a good model fit for this measurement.
Participants were asked to report how much they agreed with given statements, such as whether
the cause of the event could have been managed by the driver or whether the driver would cause
a similar event again. This 12-item seven-point bipolar scale overall had high reliability (α= .88).
Higher average ratings indicate that more responsibility is attributed to the driver.
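The confirmatory factor analysis could likewise be sketched with the semopy package, again on placeholder data. The item-to-subscale mapping shown here is assumed for illustration, and this is not the analysis script actually used for this dissertation.

# Illustrative CFA sketch for the four CDS-II subscales (three items each);
# the data and the item-to-subscale assignment are made up for illustration.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(1)
factors = rng.normal(size=(230, 4))
loadings = np.repeat(np.eye(4), 3, axis=0).T              # 3 items per factor
items = factors @ loadings + rng.normal(scale=0.5, size=(230, 12))
cds = pd.DataFrame(items, columns=[f"item{i+1}" for i in range(12)])

model_desc = """
locus =~ item1 + item2 + item3
external =~ item4 + item5 + item6
personal =~ item7 + item8 + item9
stability =~ item10 + item11 + item12
"""

cfa = semopy.Model(model_desc)
cfa.fit(cds)
print(semopy.calc_stats(cfa))   # fit indices such as CFI, TLI, and RMSEA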
Results
The study began by testing the experiment’s manipulation of the valence of a driving
scenario and participants’ perceived similarity to a driver. Responses were analyzed between
different scenarios with an independent samples t-test. The manipulation effects on the valence
of the scenario (e.g., “Is the event described in the article positive or negative?”) revealed a
significant difference between positive (M= 5.71, SD= 1.27) and negative (M= 3.36, SD= 2.20)
events, t(228)= 9.97, p<.001, and the manipulation effects of “perceived similarities with
drivers” (e.g., “How much do you find yourself similar to the driver?”) also showed a significant
difference between human and AI drivers (M= 5.07, SD= 1.51) and AI (M= 4.55, SD= 1.87),
t(228)= -2.33, p= .021. The results from the manipulation check questions confirmed that the
conditions were distinct, which allowed the main effects to be analyzed.
H1 (i.e., that people assign more responsibility to an AI driver than a human driver in the
accident scenario) was tested by analyzing the level of blame and praise using analysis of
variance (ANOVA). Levene's tests yielded the following results: blame
[F(1, 115)= 0.45, p= .51] and praise [F(1, 111)= 0.21, p= .65]. A one-way ANOVA with the same
dependent variable (based only on the data collected for the accident scenario) was conducted for
this test. There was no statistically significant effect of the identity of drivers on the level of blame
[F(1,115)= 0.05, p= .83], as the level of blame toward the AI driver (M= 4.46, SD= 1.09) was
similar to the human driver (M= 4.41, SD= 1.15). H1 was not supported (see Figure 4).
Figure 4. Level of Responsibility Attributed to Drivers
A two-way ANOVA was conducted to respond to RQ1, which asks whether the identity of
drivers and the valence of an event's outcome have an interaction effect on the level of
responsibility attributed to drivers. The dependent variable for this analysis was the evaluation of
attributed responsibility. Levene's test was conducted to assess the equality of variances and did
not reject the homogeneity of variances [F(3, 226)= 0.86, p= .46], meaning that conducting the
ANOVA was appropriate. The overall CDS-II outcomes showed an insignificant result for the effects of the valence
of the event [F(1, 226)= 0.20, p= .66], the identity of the driver [F(1, 226)= 3.06, p= .08], and the
interaction between these factors [F(1, 226)= 2.12, p= .15].
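As an illustration of this analytic step, the 2x2 ANOVA and the accompanying Levene's test could be sketched in Python as follows, using statsmodels and scipy on randomly generated placeholder data; the column names driver, valence, and cds are hypothetical stand-ins for the actual survey variables.

# Illustrative sketch of the 2 x 2 ANOVA above, run on made-up placeholder data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(2)
df = pd.DataFrame({
    'driver':  rng.choice(['AI', 'Human'], size=230),
    'valence': rng.choice(['Rescue', 'Accident'], size=230),
    'cds':     rng.normal(loc=4.5, scale=1.2, size=230),   # mean CDS-II score
})

# Levene's test for homogeneity of variances across the four cells
groups = [g['cds'].values for _, g in df.groupby(['driver', 'valence'])]
print(stats.levene(*groups))

# Two-way ANOVA with the driver x valence interaction
model = smf.ols('cds ~ C(driver) * C(valence)', data=df).fit()
print(sm.stats.anova_lm(model, typ=2))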
However, an interaction effect was found in one of the subscales of CDS-II. While
external control [F(1, 224)= 0.938, p= .334], personal control [F(1, 224)= 2.26, p= .134], and
locus of control [F(1, 224)= 0.170, p= .680] did not show any interaction effects, an interaction
between the valence of events and the identity of drivers was detected in terms of stability [F(1,
224)= 5.44, p= .021, ηp²= .02].
Table 1 shows the descriptive statistics for the ANOVA. Additionally, the identity of the
driver was significantly related to stability [F(1, 224)= 6.40, p= .012, ηp²= .03] and personal
control [F(1, 224)= 5.40, p= .021, ηp²= .02]. Participants thought that AI drivers (M= 4.78, SD=
1.19) were more likely than human drivers to repeat the action when placed in similar
circumstances in the future (M= 4.37, SD= 1.24). Also, they reported that the situations were
more manageable by AI drivers (M= 4.71, SD= 1.52) than human drivers (M= 4.23, SD= 1.64).
Table 1. Descriptive Statistics for Driver Evaluations

Valence of Incidents              AI Driver              Human Driver
                                  M     SD    N          M     SD    N
Rescuing (CDS-II)                 4.76  1.24  57         4.25  1.25  56
Accident (CDS-II)                 4.46  1.09  58         4.41  1.15  59
Rescuing (Stability)              4.95  1.28  57         4.18  1.20  56
Accident (Stability)              4.61  1.08  58         4.55  1.25  59
Rescuing (Locus of Control)       4.49  1.55  57         4.46  1.56  56
Accident (Locus of Control)       4.31  1.50  58         4.44  1.48  59
Rescuing (Personal Control)       4.97  1.58  57         4.17  1.79  56
Accident (Personal Control)       4.45  1.42  58         4.28  1.51  59
Rescuing (External Control)       4.61  1.44  57         4.20  1.30  56
Accident (External Control)       4.46  1.52  58         4.38  1.22  59

Note. The scale ranges from 1 (strongly negative) to 7 (strongly positive).
A one-way ANOVA was conducted using only the data collected for the rescuing scenario
to test RQ2, which asked whether less responsibility was attributed to an AI driver than a human
driver in the rescuing scenario. The identity of drivers had a statistically significant effect on the
level of praise given [F(1,109)= 4.57, p= .04, ηp²= .04]. The level of praise was higher for the AI
driver (M= 4.76, SD= 1.24) than for the human driver (M= 4.25, SD= 1.25). This outcome
suggests that the type of driver (AI or a human) can influence the level of praise.
Further, some subscales revealed relevant findings. For instance, although external
control [F(1, 111)= 2.54, p= .114] and locus of control [F(1, 111)= 0.01, p= .93] did not yield
significant outcomes, there was a significant difference between human and AI drivers in terms
of stability [F(1, 111)= 10.83, p= .001, ηp²= .09] and personal control [F(1, 111)= 6.34, p= .013,
ηp²= .05]. The AI driver was deemed to have more control (M= 4.97, SD= 1.58) than the human
driver (M= 4.17, SD= 1.79) at the time of the rescue event and was perceived as more likely to
make the same decision (M= 4.95, SD= 1.28) than the human driver (M= 4.18, SD= 1.20).
Two sets of independent t-tests were conducted for H2, which presumed more positive
assessments of drivers in the rescuing scenario compared to the accident scenario, regardless of
the driver type. One set was analyzed using data only from the AI driver scenario, and the other
set used data only from the human driver scenario. The AI driver was perceived to have more
positive traits in the rescuing scenario (M= 4.22, SD= 1.53) than in the accident scenario (M=
3.63, SD= 1.35) [t(113)= 2.20, p= .030, d= .41]. Similarly, there was a significant difference
between the rescuing scenario (M= 5.00, SD= 1.67) and the accident scenario (M= 4.00, SD=
1.18) when the driver was a human [t(113)= 3.70, p<.001, d= .69]. The t-test results supported H2
(see Figure 5).
Figure 5. Comparison of the Positive Perceptions of Drivers
Discussion
This chapter describes an experiment that tested how various event outcomes impact
people’s perceptions of an AI driver and their assessments of the driver’s performance. Similar to
the claim of social constructivists, the study found that people’s evaluations of technology do not
rely solely on technological traits but are influenced by external conditions—specifically, the
valence of outcomes. In other words, how people see AI agents is influenced by the complexity
of the sociotechnical relationship people share with them. It is also a question of utility because,
even though attributing responsibility is a social behavior that mostly occurs between humans,
how people see AI drivers can affect their decisions about whether the technology can replace
human drivers.
This study attempted to understand differences in people’s responsibility attributions and
perceptions of human and AI actors showing the same performance (either positive or negative).
It was found that people’s evaluations of a driver’s performance depend on the valence of the
outcome, both for AI and human drivers. This finding indicates that people evaluate AI drivers
the same way they evaluate human drivers, which supports the hypotheses based on schema
theory and the CASA paradigm. While most CASA studies have focused on behavioral social
interactions with machines (see Guzman, 2018), this study confirms that the paradigm is
applicable to responsibility attributions.
Moreover, the results suggest a theoretical connection between schema theory and the
CASA paradigm. The CASA paradigm claims that people mindlessly engage in social
interactions with machines based on their consistent social behaviors toward machines. However,
this study evinces the cognitive similarities between humans’ interactions with machines and
other humans. Based on the argument of schema theory regarding people’s reliance on
experiences and preexisting knowledge to process information with relatively little effort
(Brewer & Treyens, 1981), it can be said that the participants attributed responsibility to AI
similarly to how they attribute responsibility to other people. In other words, the behavioral
similarities that the CASA paradigm explicates are derived from the cognitive similarity claimed
by schema theory.
This study also found that people attributed more responsibility to an AI driver than a
human driver only in the positive (rescue) case but not in the negative (accident) case. In other
words, the valence of the event is a significant factor that determines people’s reactions to AI and
humans. Even though a significant result was found, it contradicts attribution theory because it
was hypothesized that more responsibility would be attributed to a human driver than an AI
driver based on the theory.
Because people have different expectations about machine agents (Sundar & Kim, 2019),
the pattern of attributing responsibilities to AI may differ from when humans are involved, which
makes attribution theory less applicable. This idea aligns with a warning from Guzman and
Lewis (2019) that traditional communication research perspectives may not successfully explain
the new interaction patterns with machine agents. It is believed that the AI driver’s rescue of its
passenger was praised more than the human driver’s rescue because people did not expect the AI
driver to save a person. However, because every form of technology has glitches, the AI driver’s
accident may be within people's expectations about AI drivers, which led to unremarkable
reactions from participants. As people's expectations of AI seem to have a crucial influence,
expectancy violation theory (EVT) is assumed to explain the evaluation of AI performances
better than other theories. However, this study did not provide evidence of the effect of
expectancy violation, as people’s expectations of AI drivers were not measured a priori.
Therefore, the following study was designed using EVT, which will be introduced more
specifically in the next chapter.
In addition to the limitations of the chosen theory, this study has methodological limitations.
First, the study did not measure participants’ driving experiences, such as their driving
competence and accident experiences. However, their attitudes toward driving could have
affected their evaluation. For example, people who have caused an accident may be more likely
to be ego-defensive when blaming others for car accidents. Also, the study used a negative event
with a negative outcome and a positive event with a positive outcome. However, different
findings might be found in mixed situations (i.e., an AI driver saving its passenger from an
accident it caused or an AI driver failing to rescue its passenger despite its attempt to do so).
Therefore, using different scenarios in which the valence of the intention and the outcome do not
match is highly recommended for future studies. Finally, two events used in the study may not
involve an equivalent amount of responsibility attribution. Again, an AI driver causing an
accident may not be as surprising as an AI driver saving a person. Moreover, since they were two
different stories, potential confounding factors between the scenarios may have influenced the
outcome. Therefore, using articles that frame the same event either positively or negatively
would have been more appropriate. More consideration and more careful controls were needed
when designing the study.
This chapter compared similarities and differences between people’s reactions to AI
agents, showing machine-like performance that involves logical processes and the reactions to
humans who behaved the same way as the AI agents. The conditions were compared in terms of
people’s evaluations of both agents and their performances. The respondents evaluated the AI
agent the same way as the human agent, but their attitudes toward its performance shifted based
on the valence of the outcome. While this outcome is assumed to be the result of distinctive
expectations toward AI, the study did not consider this factor. Therefore, the next chapter
introduces a study that considered EVT to determine people’s perceptions of an AI agent
showing human-like performance by writing songs autonomously.
Chapter 3: Perception of AI Creativity and Music Composition
While the first study is about the logical performance of AI, the second study is about the
creative performance of AI using AI-composed music. The previous chapter showed that people's
evaluation of AI's analytical decisions is influenced by their preexisting attitudes and
expectations toward AI. This chapter aims to see whether the same cognitive pattern can be
found when assessing AI's creative performance, which requires defiance against the patterns
from a dataset. It is anticipated that a significant difference between AI's analytical and creative
performance is whether the logic behind its decisions can be understood. Because AI is an
algorithm, it can easily be assumed to perform only logical tasks, such as driving cars and
evaluating people (Adams, 2017). However, as mentioned earlier, AI is the imitation of the
biological neural networks of the human brain. In other words, AI exhibits cognitive
performances based on an artificial brain. Therefore, if humans can be creative, machine brains
that mimic how humans think could, in principle, become creative as well. As a result, AI can show
unexpected performances that were not programmed a priori. Still, whether people see this as
genuine creativity is debatable. People who focus on the novel ideas AI provides may see AI
as creative, while others who concentrate on whether machines can intentionally create
innovative ideas may be skeptical about this matter (Boden, 2009; Erden, 2010). However, AI
creativity is not just a hypothetical concept but has been actualized, and AI now creates
paintings, songs, poems, and movie scripts.
There are also conflicting views on whether AI-created artwork qualifies as art.
Coeckelbergh (2017) made an interesting claim that AI-generated products can fulfill objective
and subjective criteria. If there are objective criteria that determine artistic values, then engineers
can build AI that can fulfill these criteria. If the definition of “artistic” relies on a subjective
judgment, then anything, including AI-generated products, can be regarded as art. Therefore,
instead of the question “Can AI create art?” the question “Can AI create art that is good and
worthy?” is more appropriate. Based on this idea, this study measures the perceived musical
quality of AI-composed songs instead of whether people think they are artistic products.
As mentioned earlier, SCOT scholars claim the effect of non-technical factors on the
evaluation and adoption of technology, which can apply to the perception of AI musicians. Using
the SCOT perspective, Xu et al. (2020) argued that cultural backgrounds could cause different
attitudes toward machines, which can lead to a different evaluation of machine-generated
artworks. Therefore, the traits of an AI musician are not a sole factor when evaluating its songs,
even though it would be one of the important criteria. In other words, people may assess the
same song differently based on different factors. This study particularly focuses on the role of
people’s expectations about AI music and opinions about AI creativity when assessing AI-
composed songs.
AI as a Musician
Even before the term “artificial intelligence” became widely known, scholars developed a
field that builds AI musicians, which is called artificial intelligence and music (AIM; Camurri,
1990; Meehan, 1979; Roads, 1980). Both partially and fully automated computational music
compositions are now developed. Partially mediated music requires human input. For instance,
computer-aided algorithmic composition (CAAC) requires human composers to feed a set of
incomplete musical materials, such as equations, pitches, rhythms, or meta-music descriptions,
through algorithmic programs to create a new song (Fernández & Vico, 2013). Still, most
researchers aim to develop algorithmic models that can autonomously generate original songs
incorporating all musical traits, including melody, composition, timbre, and human singing
(Dhariwal & Prafulla, 2020).
Since the emergence of AI, deep learning, and reinforcement learning, fully automated
algorithmic composition has shown immense progress in developing generative models,
particularly generative adversarial networks (Briot, Hadjeres, & Pachet, 2020), which were
described in the first chapter. For instance, MuseGAN and MidiNet, which generate songs, are
both based on generative adversarial networks (Dong et al., 2018; Yang et al., 2017).
Generative adversarial networks are widely used to develop music-composing AI, but there
are also other methods. The transformer model employs an algorithm’s language-learning ability
for music generation tasks (Vaswani et al., 2017). Music Transformer (Huang et al., 2019a) and
MuseNet (Payne, 2019) produce music by predicting what note should come next by analyzing
thousands of MIDI files and are also based on this model. Furthermore, AI can mimic the styles of
prominent composers—for example, Deep Bach uses pseudo-Gibbs sampling to write songs in the
style of Bach chorales based on a hierarchical recurrent neural network (HRNN; Hadjeres et al.,
2017; Wu et al., 2020). The musical influence of AI is not limited to writing new songs; AI can
also be used in music education and instrumental performances accompanied by robotic
technology (Savery, Zahray, & Weinberg, 2021; Zulić, 2019).
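To make the idea of generation-by-prediction concrete, the toy sketch below builds a melody by repeatedly sampling the next pitch from transition counts observed in a short training sequence. It is only a first-order illustration of the general principle; the systems named above rely on deep neural networks trained on large MIDI corpora, not on code like this, and the training melody here is made up.

# Toy illustration (not Music Transformer or MuseNet): generate a melody by
# repeatedly predicting the next pitch from transition counts observed in a
# short, made-up training sequence of MIDI note numbers.
import random
from collections import Counter, defaultdict

training_melody = [60, 62, 64, 65, 67, 65, 64, 62, 60, 64, 67, 72, 67, 64, 60]

transitions = defaultdict(Counter)
for current, nxt in zip(training_melody, training_melody[1:]):
    transitions[current][nxt] += 1            # count which notes follow which

def generate(start=60, length=16):
    melody, note = [start], start
    for _ in range(length - 1):
        options = transitions.get(note)
        if not options:                        # dead end: restart from the start note
            note = start
            options = transitions[note]
        notes, counts = zip(*options.items())
        note = random.choices(notes, weights=counts)[0]
        melody.append(note)
    return melody

print(generate())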
In addition to writing songs autonomously, AI can technically do almost anything that
human musicians can do regarding music composition. As AI began to play an increasing role in
the music industry, people’s understanding of AI musicians has become a crucial topic.
According to Goffman (1973), a role is a crucial factor when forming a social relationship, as
social behaviors are elicited from role expectations. Hence, if people do not accept the role of AI
as a musician, a relationship between humans and AI as listeners and a composer cannot be
constructed. A new musical trend can be anticipated if people have positive attitudes toward AI’s
role in the music industry as an actor, as opposed to as a tool or instrument.
However, people will not appreciate AI if they do not see it as a legitimate artist, even if
AI can produce artwork in exactly the same way as human artists (Chamberlain et al., 2018;
Hong & Curran, 2019). This idea aligns with the social constructivists’ claim that the evaluation
and the adoption of technology are influenced by people’s biases and perceptions. So, people’s
enjoyment of AI-composed songs is determined by their understanding of AI’s social role as a
musician. Because creativity is regarded as a core value in algorithmic music composition
(Jacob, 1996; Carnovalini & Rodà, 2020), people are likely to determine whether AI is qualified
to have the musician role based on its creativity level. Those who think creativity is an innate
trait possessed only by humans may consider AI-composed songs illegitimate products. Hence, this
study assumes that the appreciation of AI-composed music depends on the listener’s acceptance
of AI creativity.
H1. There is a positive relation between the belief that AI can be creative and the
evaluation of its music.
Expectancy Violation Theory
The previous chapter mentioned the influence of people’s expectations when evaluating
AI performance. Therefore, this study was designed based on EVT, which explicates how
individuals respond to unpredicted outcomes in communication settings (Burgoon, 2015).
According to this theory, people do not always react negatively to expectation violations. If they
experience a positive violation, such as a surprising gift, they perceive the violation positively.
Thus, the theory emphasizes the importance of the valence of expectation violations. It also
claims that people’s reactions are affected by how much an incident violates their expectations.
For instance, people perceive outcomes that positively violate their expectations as more
favorable than outcomes with no expectation violation. Conversely, an unpleasant violation
disproportionately increases disliking.
Therefore, a key element of incident evaluations is whether a person’s expectations are
met (Burgoon & Hale, 1988; Burgoon & Jones, 1976). It is assumed that people also form
expectations about AI-composed music and that these expectations may differ based on an
individual’s understanding of AI musicians. According to the theory, people evaluate AI-
composed songs based on their expectations. If AI-composed songs are beyond an audience’s
expectations, they exhibit more intense reactions (either positive or negative), depending on how
their expectations are violated. Meanwhile, only a confirmation effect can be found when the
quality of the music is within the audience’s expectation.
As technologies evolve, computer-mediated communications and human-computer
interactions are becoming more common. Even though EVT was first claimed to explain the
effect of expectancy violations only in interactions between humans (Hall, 1966), it is now also
applicable to the new trend of technology-involved communication (Bonito, Burgoon, &
Bengtsson, 1999; Edwards et al., 2016). For instance, Burgoon et al. (2016) used the theory to
explain social relationships between humans and robots. The study revealed that positive
violations induce more positive evaluations, while negative violations did not produce a
significant outcome.
Moreover, studies have used EVT to see how people react to AI performance. Waddell
(2018) tested how much people trust AI-written news articles and found that negative
expectation violations lead to negative evaluations of machine authorship. Even though
researchers have used EVT to explain how people evaluate music in multiple studies, the scope
of such research has been limited to human-composed songs (Janata, 1995; Steinbeis, Koelsch,
& Sloboda, 2006). It is expected that how people listen to human-composed songs and AI-
composed songs are similar. So, this chapter describes a study that further examined EVT’s
applicability to understanding people’s reactions to AI-composed music that is either better or
worse than expected.
H2. People who think the AI-composed music is better than expected will give higher
ratings than people who think the music is within the range of expectation.
H3. People who think the AI-composed music is worse than expected will give lower
ratings than people who think the music is within the range of expectation.
Music Genre Schema
While this chapter introduced people’s perceptions of creative AI as a crucial factor, it is
not the only important factor. For example, a person’s schema about the music genre is predicted
to be an additional factor. As mentioned earlier, a schema is a cognitive framework used to perceive,
comprehend, and remember new information (Brewer & Treyens, 1981). Some scholars say it is a
different word for bias (Dixon, 2006; Mikulincer & Shaver, 2001). In other words, people’s biases
about music genres are very likely to affect their assessments of a song from that genre. For
instance, Istók et al. (2013) found that music genre preference is an essential factor when judging
the quality of a piece of music. Another study showed that different genres of the background
music of a website induced dissimilar expectations of the website owner (Yang & Li, 2013).
Therefore, this study considered genres—classical music or EDM—as an additional variable.
Classical music is often seen as sophisticated and not especially digestible by or accessible
to the general public (Deihl, Schneider, & Petress, 1983; Peterson & Kern, 1996). On the other
hand, EDM is regarded as a relatively uncultured and accessible genre of music with a strong
computer-related component (Kruger, Viljoen, & Saayman, 2018). While these different schemata
based on these music genres are presumed to influence people’s evaluations, not enough research
comparing musical quality between different genres has been conducted. Therefore, there is no
clear evidence to assume which genre of AI-composed music will receive higher ratings. Despite
the lack of studies, it is still assumed that people’s evaluations of AI-composed music will differ
based on the genre.
H4. The evaluation of AI-composed music will differ based on the genre.
H5. There will be an interaction effect between the genre of AI-created music and people’s
expectations of it.
Methods
A 2x2x2 experiment was designed and conducted to test the hypotheses. In this
experiment, expectancy violations (expectancy confirmation vs. violation), the valence of
evaluation (positive vs. negative), and genre of music (EDM vs. classical) were the three factors.
Following the procedure used by Burgoon et al. (2016), expectancy violations and valence were
instantiated by conducting median splits to create four groups. The attitude toward creative AI was
used as a covariate. The dependent variable was the subjective evaluation of the music.
Participants
MTurk was used to recruit participants. Individuals voluntarily joined the study after
reading its purpose. Participants who failed an attention test (e.g., “What is the genre of this
music?”) were excluded, leaving 299 participants from the 426 who were initially recruited (37.5
participants in each cell). The youngest participant was 19 years old, while the oldest was 73 years
old (M= 33.39, SD= 9.40). In terms of gender, 61.9% of participants identified as male, and 38.1%
identified as female.
Procedures
Participants were told that the purpose of the study was to examine how people evaluate
songs created by AI composers autonomously. Two AI-composed classical music pieces and two
AI-composed EDM pieces were used for this study. The genre was chosen as a unit of analysis
because most studies about music preference use it as a variable (Christenson & Peterson, 1988;
Delsing et al., 2008; Ferrer et al., 2013; Schäfer & Sedlmeier, 2009). There are multiple differences
between genres—for example, in terms of tempo, harmony, style, and structure. Therefore, having
more than two genres without a clear difference would affect the analysis of results. Instead, this
study focused on two contrasting genres to make a more compelling case about generalizability.
Two pieces of music for each genre were chosen since a single song may not adequately represent
the whole genre. The pieces were chosen from a music library provided by a company called Evoke
Music. Two songs in each genre were randomly chosen from the library to avoid selection bias.
The company approved the use of their music for this study.
A pilot test with participants recruited from MTurk (n= 96) was conducted to control for
quality by testing whether the pieces from the two genres would be evaluated similarly. For this
assessment, participants were not told that songs were composed by AI. They were randomly
assigned to listen to one of the four songs (two songs per genre) and instructed to evaluate it. There
was a time lock once the song started to avoid participants moving on without listening. Except
for the music, there was no difference between the conditions. After listening to the song,
participants answered a survey asking about the song's musical quality. While the evaluation of
each song was not identical [classical 1 (M= 5.71, SD= 0.75), classical 2 (M= 5.26, SD= 0.98),
EDM 1 (M= 4.63, SD= 1.24), and EDM 2 (M= 5.78, SD= 0.77)], the t-test result confirmed that
there was no significant quality difference between classical music (M= 5.50, SD= .89) and EDM
(M= 5.16, SD= 1.12), t(94)= -1.61, p= .111. This was followed by a procedure to test whether the
genres were distinct. Participants were asked to choose the genre of the music they had listened to.
A chi-square test confirmed that the genres of the music used in the study (classical music and
EDM) were distinct, χ²(1, N= 96)= 30.38, p<.001.
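The two pilot checks could be sketched in Python as follows, using scipy on made-up numbers; the ratings and the contingency counts shown here are placeholders for illustration, not the pilot data.

# Illustrative sketch of the pilot checks: a t-test on quality ratings across
# genres and a chi-square test on a 2 x 2 table of assigned genre versus the
# genre participants identified. All numbers below are made up.
from scipy import stats

classical_ratings = [5.7, 5.3, 5.9, 5.1, 5.6, 5.4]    # placeholder ratings
edm_ratings       = [5.2, 4.8, 5.5, 5.0, 5.3, 4.9]    # placeholder ratings
print(stats.ttest_ind(classical_ratings, edm_ratings))

#               identified classical   identified EDM
contingency = [[40,                     8],            # assigned classical
               [10,                     38]]           # assigned EDM
print(stats.chi2_contingency(contingency))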
After confirming that the stimuli were rated equally in terms of quality but differently in
terms of genre, an experiment using an online survey was conducted with a new set of participants.
Participants were asked to listen to one of the same songs used in the pilot study. For the actual
study, they were told that their assigned piece was composed by AI before listening. No
information about how AI generates music was provided, allowing participants to form their own
expectations about AI music without restriction. Again, participants could not move on to the survey until the music
ended. As in the pilot study, there was no difference between the conditions other than the
music. After listening to a song, participants were asked to report their evaluation of a given
musical piece, the level of expectancy violation (with its valence), and their attitudes toward
creative AI (see Appendix C).
Measures
Expectancy Violation Scale. How much the ratings of the music’s quality deviated from
participants’ expectations was measured with a revised scale taken from an instrument used in a
previous EVT study (Burgoon et al., 2016). This seven-point Likert scale (with responses ranging
from “strongly disagree” to “strongly agree”) measures expectedness and evaluations. The three
items measuring expectations indicate how much participants’ expectations were violated (e.g.,
people would not be surprised to know that a piece of music was composed by AI), with higher
scores indicating higher expectations (α= .82). Another five items were used to measure the
valence of reactions to the music (e.g., most people enjoyed the AI-composed music), with higher
scores indicating more positive perceptions (α= .79). The outcomes of these measurements were
used to distinguish the level and the valence of expectancy violations by conducting median splits.
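The median-split step could be sketched in Python as follows, assuming hypothetical column names (expectedness and valence) and randomly generated placeholder scores; the direction of each split is an assumption for illustration, not a detail reported in the dissertation.

# Illustrative sketch of the median splits that form the four groups, following
# the Burgoon et al. (2016) procedure; all data and names below are assumed.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
survey = pd.DataFrame({'expectedness': rng.uniform(1, 7, 299),
                       'valence':      rng.uniform(1, 7, 299)})

# Scores at or below the median are treated here as expectancy violations
# (direction assumed for illustration); the valence split works the same way.
survey['violation_group'] = np.where(
    survey['expectedness'] <= survey['expectedness'].median(),
    'violation', 'confirmation')
survey['valence_group'] = np.where(
    survey['valence'] <= survey['valence'].median(),
    'negative', 'positive')
print(survey.groupby(['violation_group', 'valence_group']).size())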
Evaluation of Music. The scales used for evaluating music consisted of questions
regarding the song’s musical quality. A nine-item scale for assessing musical quality was
developed from the “Rubric for assessing general criteria in a composition assignment,” which is
a scale used to measure multiple components of musical compositions (α= .90; Hickey, 1999). The
components of the scale were aesthetic appeals (e.g., whether the AI-composed music piece had a
strong aesthetic appeal), creativity (e.g., the piece of music included an original musical idea), and
craftsmanship (e.g., the AI-composed music had a clear beginning, middle, and end). Higher scores
indicated more positive evaluations of musical quality.
Attitudes Toward Creative AI. A scale was created and distributed to measure
participants’ understanding of AI creativity. The seven-point Likert scale (with responses ranging
from “strongly disagree” to “strongly agree”) consisted of three statements: “I think AI can be
creative on its own,” “I believe AI can make something new by itself,” and “Products developed
by AI should be respected as creative works.” Higher scores indicated more acceptance of creative
AI (α= .86).
Results
Simple linear regressions were conducted to test H1, which predicted a positive relation
between the perception of creative AI and the evaluation of AI-composed music. Overall, a
significant and positive effect was found [F(1, 297)= 92.71, p<0.001], with r²= 23.8 (β= 0.49). For
classical music, a significant and positive effect was also found [F(1, 171)= 58.14, p<0.001], with
r²= 25.4 (β= 0.50). Finally, a significant and positive effect was also found for EDM [F(1, 124)=
31.59, p<0.001], with r²= 20.3 (β= 0.45). An increase in positive attitudes toward creative AI,
leading to higher ratings, was found regardless of the genre. Figure 6 is a graph showing a summary
of the regression results. Based on the results, H1 was supported.
Figure 6. Regression Analysis Results for Music Evaluations
Two two-way analyses of covariance (ANCOVAs) were conducted. The first tested H2,
which anticipated higher music evaluation ratings would be associated with the positive
expectancy violation group (the music exceeded expectations) compared to the expectancy
confirmation group (the music was as good as expected). The second tested H3, which predicted
that lower ratings would be given by the negative expectancy violation group (the music was below
expectations) than the expectancy confirmation one (the music was as bad as expected).
Based on the valence of violation, the two ANCOVAs used expectancy violation and the
genre of music as independent variables. The subjects' attitude toward creative AI was the covariate
because the regression analysis above showed a relation between people’s attitudes toward AI
being creative and their evaluations of AI-composed music. The dependent variable was the
evaluation of AI-composed music. Levene's test was conducted to assess the equality of variances
and did not reject the homogeneity of variances, both for positive evaluations [F(3, 130)= 0.58, p= .63]
and negative evaluations [F(3, 161)= 1.40, p= .25], meaning that the ANCOVAs could be
conducted.
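One of these ANCOVAs could be sketched in Python as follows, using statsmodels on randomly generated placeholder data; rating, violation_group, genre, and creative_ai are hypothetical column names standing in for the music evaluation, group assignment, genre, and covariate.

# Illustrative sketch of one two-way ANCOVA, run on made-up placeholder data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
music = pd.DataFrame({
    'rating':          rng.normal(5.3, 1.0, 299),
    'violation_group': rng.choice(['violation', 'confirmation'], size=299),
    'genre':           rng.choice(['classical', 'EDM'], size=299),
    'creative_ai':     rng.uniform(1, 7, 299),          # covariate
})

ancova = smf.ols('rating ~ C(violation_group) * C(genre) + creative_ai',
                 data=music).fit()
print(sm.stats.anova_lm(ancova, typ=2))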
For the positive evaluation, the evaluations of the expectancy violation group (M= 5.91,
SD= 0.67) and the expectancy confirmation group (M= 5.90, SD= 0.71) were not significantly
different [F(1, 129)= 0.00, p= .99]. Therefore, H2 was rejected. Additionally, no significant
outcome was found in terms of genre [F(1, 129)= 0.11, p= .74] between classical music (M= 5.90,
SD= 0.71) and EDM (M= 5.91, SD= 0.67), with no interaction effect [F(1, 129)= 2.47, p= .12].
However, for the negative evaluation, a significant outcome was found [F(1, 160)= 7.51, p= .007,
ηp²= .05] between the expectancy violation group (M= 4.71, SD= 1.05) and the expectancy
confirmation group (M= 5.39, SD= 1.07). Therefore, H3 was supported. Additionally, no significant
outcome was found for the genres [F(1, 160)= 1.26, p= .26] between classical music (M= 5.28,
SD= 1.11) and EDM (M= 4.86, SD= 1.08), with no interaction effect [F(1, 160)= 0.45, p= .50].
H4 was related to the influence of genre on participants’ evaluations, and H5 was related to
the effect of genre on expectancy violation and its valence. A three-way ANCOVA was conducted
with the same covariate to test these hypotheses. The result of Levene's test did not reject the
homogeneity of variances for this ANCOVA [F(7, 291)= 1.81, p= .09]. There was a significant
difference between the expectancy violation (M= 5.91, SD= 0.70) and confirmation (M= 5.08, SD=
1.11) in terms of participants’ evaluations of AI-composed music [F(1, 290)= 4.17, p= .04,
ηp²= .01]. However, no significant difference was found [F(1, 290)= 0.78, p= .38] between
classical music (M= 5.60, SD= 0.98) and EDM (M= 5.24, SD= 1.07); thus, H4 was rejected.
Figure 7. ANCOVA Results for Evaluations of Classical Music
While genre had no interaction effect either with the expectation-violation [F(1, 290)= 0.53,
p= .38] or its valence [F(1, 290)= 1.40, p= .24], there was a two-way interaction effect between
the expectancy violation and its valence [F(1, 290)= 5.57, p= .019, ηp²= .02]. Finally, there was a
three-way interaction effect between the expectancy violation, its valence, and genre [F(1, 290)=
4.00, p= .05, ηp²= .01]. Therefore, H5 was supported. Figure 7 is a graph showing the ANCOVA
results for the expectancy violation and its valence for classical music; Figure 8 shows the
ANCOVA results for the expectancy violation and its valence for EDM.
Figure 8. ANCOVA Results for EDM Evaluations
Discussion
The study examined people’s perceptions of AI-created music, focusing on the influence
of the acceptance of AI as a creative being and of violations of expectations toward AI-composed
music (both positive and negative). Contrasting genres were tested to examine whether the findings
were consistent across these genres. Openness to the concept of creative AI was found to be a
prerequisite for positive assessments of AI-composed music, regardless of the genre. This finding
indicates that people’s evaluations of new technology are not based solely on technological aspects
but can also be determined by their predispositions. Even though people may share a similar bias
about AI creativity (Sundar, 2008), individuals’ predispositions are not perfectly equivalent
because they have different experiences. In other words, some people may think AI can be creative
while others do not. It was found that such differences lead to different evaluations of the same
performance. This study stresses the importance of considering people’s different preexisting
perceptions of AI.
This study also addresses the importance of people’s expectancy violations when
evaluating AI performance. Similar to biases, people’s expectations about AI may differ because
of differences in their experiences and knowledge (Bonito et al., 1999). Meanwhile, expectancy
violations and confirmations can happen at any time and in a similar way. Therefore, instead of
observing how expectations affect evaluations of AI performance, it is more suitable to examine
the role of expectancy violations. In terms of EVT, there was a significant difference between the
expectancy violation group and the expectancy confirmation group. The negative expectancy
violation group reported lower satisfaction than the expectancy confirmation group. However,
the ratings of the positive expectancy violation group and the expectancy confirmation group were
almost identical. These findings align with the claim of EVT that expectancy violation and its
valence both matter.
This study also adds value by suggesting the extension of EVT to human-AI interactions.
EVT was introduced as an interpersonal communication theory (Burgoon, 2015), but it has been
applied to explain the effect of expectancy violations in direct interactive settings with AI chatbots
(Burgoon et al., 2016). The present study extended the use of the theory to incorporate assessments
of AI performance in settings with no direct interaction.
Moreover, it examined how the characteristics of AI performance influence evaluations by
considering different music genres. The analysis showed a three-way interaction effect between
expectancy violation, its valence, and music genre. An interaction effect between expectancy
violation and its valence was found for classical music but not for EDM. Thus, it is speculated that
different perceived levels of machine-likeness in classical music and EDM (i.e., EDM being more related to
computers than classical music) affected people's assessments. As a result, the AI-composed classical
music may have created more cognitive dissonance than the AI-composed EDM. However, this is
only speculation, as the expectations of the perceived anthropomorphism of music were not
measured. Therefore, it is critical for future EVT studies to consider this factor.
There were a few other limitations aside from the failure to measure the perceived
humanness of music. First, this study only measured people’s attitudes toward AI but not their
attitudes toward the genres of music. It is assumed that people who prefer a certain genre of music
are likely to give higher ratings regardless of its composer. However, the possible influence of
genre preference could not be confirmed in this study, and this matter should be considered in
future studies. Second, the songs used in the study were composed by a single AI composer. In
other words, it is similar to judging the style of an entire genre of music based on a single song
composed by one person. How much the songs used in this study can represent the whole genre of
AI-composed music remains questionable. Therefore, participants’ evaluations of the songs used
in the study may not represent their overall perceptions of AI music. Hence, using multiple genres
of music composed by diverse AI composers would improve the generalizability of such findings.
Based on the current technological trend, it is assumed that more creative AI will appear.
Therefore, this study calls for more studies on the public understanding of AI creativity. As AI
becomes more creative, it is likely to provide insights that humans have not produced. Thus,
rejecting creative AI means losing an opportunity to understand the world from different
perspectives. For instance, the emergence of creative AI will not replace artists but will lead to
collaborations between humans and AI artists, representing a new way to create art (Feldman,
2017).
While most studies have focused on developing creative AI, comparatively less attention
has been given to perceptions of AI creativity. Broader and deeper understandings of AI
creativity are expected to support the development of AI artists. This study attempts to broaden
perspectives within human-AI interaction studies. Existing studies have mostly examined the
social behaviors of machines that are similar to human interlocutors. However, people will also
face expressions AI creates through diverse performances and outputs. In other words, AI-
created products, such as music, dances, and visual art, can become a medium of HMC.
Therefore, this research field should not restrict the definition of human-computer interaction to
include only verbal and nonverbal communication during direct interactions.
The studies introduced in Chapters 2 and 3 focused on individuals' satisfaction with AI
products. Using AI products often places individuals in circumstances where they cannot weigh in
on each of the AI's individual decisions. For instance, a human passenger in a self-driving car
does not determine the car’s every decision. When listening to AI-composed music, people do
not think of what decisions the AI made while it composed the song. Still, people should
sometimes think about whether they should accept AI's decisions. Such cases are often
associated with AI ethics, which will be discussed in the next chapter.
Chapter 4: Perception of AI Ethics and Predictive Policing
So far, this dissertation has introduced two studies about the evaluation of AI as a
product. Using AI products means that you generally accept the decisions made by AI. When you
are a passenger in a self-driving car, you do not determine its every decision while it drives.
When you listen to AI-composed music, you do not think of what decisions were made by AI
when it composed the song. However, sometimes we must decide whether to accept AI's
decisions or not. And those cases are often associated with AI ethics.
The movie Minority Report depicts a society without any crime owing to a predictive
policing system. However, such predictive policing brings an ethical concern about whether
preventing crimes can justify the invasion of privacy and human dignity by labeling an
individual as a potential criminal (Alikhademi et al., 2021; Meijer & Wessels, 2019). The
question becomes more complicated if there is even a slight possibility that the system will make
incorrect or biased predictions, as there is no evidence indicating that AI predictors are more
accurate than human predictors (Dressel & Farid, 2018). Despite this concern, attempts have
been made to adopt such a system. However, there is one major difference between the real
world and the society in the movie: Instead of prophets, we use AI and big data (Hung & Yen,
2021).
Crime prediction is one field of behavioral prediction that utilizes AI (Abbasi, Lau, &
Brown, 2015). For instance, PredPol predicts potentially high crime areas by assuming that
crimes are repeated in the same locations (Aradau et al., 2017). Another AI crime predictor, the
Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), a recidivism
risk assessment program developed by Northpointe Institute for Public Management, makes
microscopic approaches by examining individuals’ criminal records and judging the likelihood
that they will repeat crimes in the future (Eaglin, 2017). However, the use of COMPAS brought
up the issue of algorithmic racial bias, which will be further described in this chapter.
Based on the argument of Pinch and Bijker (1999) that new technology is socially
constructed and defined, it is essential to know how predictive AI is understood as an artifact of a
certain time and place. Therefore, AI surveillance is also a socio-technical system constructed
through the interaction between social forces with different opinions (Rosiers, 2021). One is a
promising perspective that expects AI predictive policing to reduce crime rates, which has been a
major concern in the U.S. (Duwe & Rocque, 2017; Land, 2017; Gramich, 2020). The other is a
critical perspective that argues that no AI crime predictor can be infallible, as such predictors are
trained using data from policed areas, which creates a biased overrepresentation of minorities
in these regions (Kirkpatrick, 2017). Also, considering a major limitation of SCOT, namely its
neglect of unheard voices about a technology, AI surveillance can be developed in a way that
benefits people with power while not considering those who are actually affected by it (Rosiers,
2021; Winner, 1993).
Owing to these conflicting views, public reception and understanding of AI predictors
need to be acknowledged so that the technology better represents the general opinion about justice
and fairness, which this study aimed to do. Particularly, it focused on people's reactions to biased
decisions made by AI and compared these reactions to participants’ mistrust of a human who
made the same decision. The main reason for the comparison is to test the ontological approach
to AI ethics that Guzman and Lewis (2020) claimed. They stated that the ontological borders
between humans and machines are breached, as people evaluate AI using social norms and
ethical standards that have been applied to human interactions.
AI has made racially and sexually biased decisions in various cases. The first case to
become widely known was Microsoft’s Twitter-based AI chatbot, Tay, which made offensive
slurs in 2016 (Beran, 2018). This happened because Twitter users intentionally fed it negative
comments. This case showed that the AI could not distinguish harmful datasets without human
assistance. According to Noble (2018), Google's search algorithm systematically provided more
negative results for black females than for white females, such as by presenting more obscene
images when people searched for black girls than when they searched for white girls. She argues
that such biased search results are due to coders’ economic motivations to maximize profits
instead of promoting social justice. Both AI programs described above were neutral
technologies; the bias came from people.
Finally, COMPAS (which this study focused on) made racist criminal predictions, which
could harm the accused. It was revealed that COMPAS assigned higher risk ratings to black
defendants than to white defendants. However, only 20% of its predictions were accurate (Angwin,
Larson, Kirchner, & Mattu, 2016). Like other tech companies, the company that built COMPAS
did not reveal what calculations were involved in the AI’s decision-making process (Perel &
Elkin-Koren, 2017). However, the purpose of this study was not to investigate how much
COMPAS is biased but how people reacted to this bias when COMPAS made unethical
decisions.
The main purpose of developing AI technology, including COMPAS, is to create
machines that can think autonomously and find the best answers to various problems (Russell &
Norvig, 2010). Autonomy is a crucial factor when attributing responsibility to machines (Jörling,
Böhm, & Paluch, 2019). However, no technological evidence suggests that machines have
human-like autonomy—for example, machines do not exhibit any willingness (Krausová &
Hazan, 2013; Cevik, 2017). Therefore, it was assumed that people believe AI crime predictors
have less autonomy than human crime predictors, which led to the following hypothesis:
H1. A lower rating of autonomy will be given to an AI crime predictor than a human
crime predictor.
Attribution Theory
This study also drew on attribution theory, as it considered the attribution of blame to AI
for its misconduct. Again, attribution theory supports the inclination to assign more blame
when the blamer and the target of the blame share fewer personal and situational similarities
(Burger, 1981; Shaver, 1970). Despite the ontological approach argued by Guzman and Lewis
(2020), the researchers expected people to feel more emotional distance from AI agents than
humans, which may affect their responsibility attribution for moral violations. Moreover, based
on a previous study’s finding that people with knowledge about algorithms tend to attribute more
responsibility to AI than other external factors (Shank & DeSanti, 2018), participants were
expected to give more blame to an AI agent for racist decisions per the following hypothesis:
H2. Participants will attribute more responsibility to an AI crime predictor than to a human
crime predictor.
Seriousness of Crime and Acceptance of Authority
It is also assumed that the seriousness of a crime influences the level of attributed
responsibility to crime predictors. More specifically, crime predictors are less likely to be blamed
when the severity of the crime is high because people are more likely to accept and trust an
authority figure when they understand how dangerous a crime is (Feldman & Stenner, 1997;
Sanderson, Zanna, & Darley, 2000). For instance, seeing news about a serious crime made
people trust authoritarian crime controls (Krause, 2014). However, not enough research has been
conducted to test this relationship in human-computer interaction settings, especially in settings
where AI is a policing authority. If the ontological approach to AI ethics is correct, exposure to
more serious crimes will increase people's trust in an AI crime predictor's decisions instead of
making them critical and assuming that such crime predictors are problematic. Therefore, reading an AI
crime predictor scenario with more serious crimes is expected to make people blame a crime
predictor less.
H3. Participants will blame a crime predictor less when the crime is more serious.
Computers Are Social Actors (CASA)
This study is also based on the CASA paradigm. The major argument of the CASA
paradigm is that people consider machine agents as autonomous social entities and behave
socially with them as they do with another person (Nass & Moon, 2000). So, the ethical
standards used to determine other people’s behaviors are applicable to assessments of AI
performance. In regular human-human relationships, people consider the autonomy of others
when judging another person's wrongdoing. When someone is perceived as being free from
external constraints, that person is likely to receive a higher level of blame, and vice versa (Nahmias,
Shepard, & Reuter, 2014; Sankowski, 1992; Woolfolk, Doris, & Dailey, 2006). Therefore,
people are willing to give more severe punishments to a violator with more autonomy (Graham,
Weiner, & Zucker, 1997).
According to the CASA paradigm, the same cognitive patterns seen when people assign
blame in human-human interactions should be found in human-computer interactions. For
instance, a study on collaborations with robots found that people attribute more blame and credit
to a robot when it has higher autonomy (Kim & Hinds, 2006). Therefore, this study assumed a
positive relation between autonomy and the level of blame, both in the human and AI crime
predictor scenarios. In other words, more autonomous humans and AI programs are more likely
to be blamed for biased decisions.
H4. The amount of responsibility assigned to a predictor’s unjust decision is positively
related to its perceived autonomy.
H5. The relation between the autonomy of a crime predictor and the responsibility
assigned to it will be similar in the human and AI crime predictor scenarios.
Methods
The hypotheses were tested using a 2x2 experiment in which the kind of predictor
(human or AI) and the seriousness of the crime (high or low) varied. The dependent variables
were the perception of the autonomy of the crime predictor and the responsibility assigned to the
predictor.
Participants
MTurk was used to recruit participants, and an incentive of $1 was awarded to each
participant upon completing the survey. Participants who omitted any question in the survey
were excluded, leaving 334 participants from the 353 initially recruited. The youngest participant
was 19 years old, while the oldest participant was 87 years old (M= 35.04, SD= 12.64).
Additionally, 149 participants were male, and 185 participants were female.
Procedures
Participants who agreed to participate in the study were given reading material based on
an actual story retrieved from a news article on ProPublica. The article explained that black
defendants had received a higher risk rating from a crime predictor, which indicates that black
defendants were deemed more likely to commit subsequent crimes than white defendants with
more serious criminal records who had committed a similar crime. Also, histograms were
provided showing that black defendants received higher risk scores than white defendants
(Angwin, Larson, Kirchner, & Mattu, 2016). This article is overtly framed around the racial
injustice of the prediction (see Appendix D).
The story was altered to create four different scenarios in a 2 (human or AI predictor) x 2
(high seriousness or low seriousness) experimental design. In human predictor scenarios, the
article stated, “a person who is specialized in predicting the possibility of subsequent offenses or
crimes” and “the person who rated the scores.” This section was bolded to highlight the identity
of the crime predictor as a human. In the AI predictor scenarios, the language stated, “a computer
program with an algorithm that predicts the possibility of subsequent offenses or crimes” and
“the computer crime predictor.” Again, these were bolded to stress the identity of the crime
predictor.
The low seriousness condition depicted criminals who committed petty theft and
shoplifting, while the high seriousness crime scenario depicted criminals who drove under the
influence. These were the same crimes described in the original article. Other than these
manipulations, the stimuli were equivalent in all four conditions. Using Qualtrics©, an online
survey tool, the four scenarios were randomly and evenly distributed to the participants.
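In the study, Qualtrics performed this randomization; purely as an illustration of the balanced assignment logic, a minimal Python sketch is given below. The condition labels mirror the 2 x 2 design described above, while the function name and seed are hypothetical.

# Hypothetical sketch of balanced random assignment to the four conditions of
# the 2 (predictor identity) x 2 (crime seriousness) design. In the actual
# study, Qualtrics handled this randomization internally.
import random
from itertools import product

conditions = list(product(["human", "ai"], ["high", "low"]))  # the four cells

def assign_conditions(n_participants: int, seed: int = 42):
    """Return a shuffled, roughly even list of condition assignments."""
    rng = random.Random(seed)
    # Repeat the four cells enough times to cover all participants, then trim.
    assignments = (conditions * (n_participants // len(conditions) + 1))[:n_participants]
    rng.shuffle(assignments)
    return assignments

print(assign_conditions(8))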
After reading a given scenario, the participants were asked to answer two sets of
questions (three items each), which used a seven-point Likert scale (from "strongly disagree" to
"strongly agree") and were edited from the Causal Dimension Scale (CDS; Russell, 1982). The original CDS consists
of three parts: controllability, which is similar to the measurement of autonomy in this study;
locus of control, which is similar to the responsibility assigned to the crime predictor in this
study; and stability, which refers to whether the event would reliably and repeatedly occur.
Because the provided reading materials imply that the crime predictor gave higher risk scores to
black defendants than white defendants stably over time, the stability of the event was not
considered since it was not expected to produce significant outcomes (see Appendix E).
Measures
Autonomy of the Crime Predictor. Because the information given about the crime
predictor explained the identity of the crime predictor (human or AI), the scores of this measure
showed how people differentiate the autonomy of a human and AI. This dependent variable was
measured using a three-item scale consisting of the following questions: “Was the crime
predictor fully self-controllable when making the decision?”, “Was the decision intended by the
crime predictor?", and "Is the crime predictor responsible for making the decision?" This three-
item measure showed acceptable reliability (Cronbach's α = .70). Higher scores indicated that the
crime predictor made the decision independently of the influence of others.
Responsibility of the Crime Predictor. Since this measure is also based on the identity
of the crime predictor, the scores reflected how people differentiate the characteristics of a
human and AI in terms of assessing responsibility. This is the dependent variable for the rest of
the hypotheses. A three-item scale was used, consisting of the following questions: “Is the
decision something that reflects a characteristic of the crime predictor?”, “Was the decision made
due to an intrinsic aspect of the crime predictor?”, and “Is the decision something about the
crime predictor itself?" This three-item measure showed good reliability (Cronbach's α = .78). Higher
scores indicated that the crime predictor was deemed responsible for the decision.
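For readers who wish to verify such reliability estimates, a minimal sketch of how Cronbach's alpha can be computed from item-level responses is shown below. The function, column names, and simulated responses are hypothetical and are not drawn from the study's dataset.

# Minimal sketch: computing Cronbach's alpha for a three-item Likert scale.
# The simulated data and column names are hypothetical illustrations only.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame whose columns are scale items."""
    k = items.shape[1]                          # number of items
    item_variances = items.var(axis=0, ddof=1)  # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Example with simulated seven-point responses for three correlated items.
rng = np.random.default_rng(0)
base = rng.integers(1, 8, size=300)
items = pd.DataFrame({
    "item_1": np.clip(base + rng.integers(-1, 2, 300), 1, 7),
    "item_2": np.clip(base + rng.integers(-1, 2, 300), 1, 7),
    "item_3": np.clip(base + rng.integers(-1, 2, 300), 1, 7),
})
print(round(cronbach_alpha(items), 2))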
A t-test was conducted to evaluate the effect of the identity of the crime predictor on the
perception of autonomy. A two-way ANOVA was then carried out on the scores of the
responsibility assigned to the crime predictor to evaluate the influences of identity and the
seriousness of the crime. Finally, two sets of simple linear regressions were conducted to examine
the relation between the autonomy of the crime predictor and its assigned responsibility,
comparing the human crime predictor scenario and the AI crime predictor scenario.
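Purely as a methodological illustration, the analysis plan above could be implemented roughly as follows. The data file and column names (predictor_type, seriousness, autonomy, blame) are hypothetical stand-ins; this is a sketch, not the study's actual analysis script.

# Minimal sketch of the analysis pipeline: an independent-samples t-test for
# perceived autonomy, Levene's test and a 2x2 ANOVA for blame, and simple
# regressions of blame on autonomy run separately per condition.
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("crime_predictor_survey.csv")  # hypothetical file name

# H1: compare perceived autonomy between the human and AI predictor conditions.
human = df.loc[df["predictor_type"] == "human", "autonomy"]
ai = df.loc[df["predictor_type"] == "ai", "autonomy"]
print(stats.ttest_ind(ai, human))

# Homogeneity-of-variance check for the blame scores across the four cells.
cells = [g["blame"].values for _, g in df.groupby(["predictor_type", "seriousness"])]
print(stats.levene(*cells))

# H2/H3: 2 (identity) x 2 (seriousness) ANOVA on blame attribution.
anova_model = smf.ols("blame ~ C(predictor_type) * C(seriousness)", data=df).fit()
print(sm.stats.anova_lm(anova_model, typ=2))

# H4/H5: simple regression of blame on autonomy within each predictor condition.
for label, sub in df.groupby("predictor_type"):
    fit = smf.ols("blame ~ autonomy", data=sub).fit()
    print(label, round(fit.rsquared, 3), round(fit.params["autonomy"], 3))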
Results
The responses to the questions “Do you think the type of crime mentioned in the reading
is a serious crime?” and “To which extent do you think the crime predictor has human
characteristics (not humanistic)?” were analyzed and compared between different scenarios using
a t-test to verify the efficacy of the manipulations. The efficacy of the seriousness of crime
manipulation showed a significant result between low seriousness (M= 5.03, SD= 1.33) and high
seriousness (M= 5.54, SD= 1.28) crime scenarios, t(332)= 3.54, p<.001. The efficacy of the
human-AI distinction manipulation also showed a significant outcome between the human crime
predictor (M= 4.62, SD= 1.98) and an AI crime predictor (M= 4.11, SD= 1.92), t(332)= -2.47,
p= .014. These results indicate that participants distinguished crimes based on their seriousness
and the identity of the crime predictor.
A t-test was conducted for H1 to test whether people think human crime predictors are
more autonomous than their AI counterparts. The dependent variable for this t-test was the
autonomy of the crime predictor. The data analyzed demonstrated the influence of the crime
predictor's identity on the level of perceived autonomy of the crime predictor. A higher number
indicated that a crime predictor was perceived as possessing more self-control when making its
decision. The data support H1, which predicted that participants would think an AI crime
predictor has less autonomy than a human crime predictor when making a racist decision. The
results of the t-test indicated a small but statistically significant effect of the identity of the crime
predictor on perceived autonomy, t(332) = -2.95, p = .003, d = 0.32. The autonomy of the
predictor was rated lower in the AI scenario (M = 4.61, SD = 1.40) than in the human scenario
(M = 5.04, SD = 1.27).
A two-way ANOVA was conducted to determine the level of blame assigned to a
crime predictor based on its identity and the seriousness of the crime. Levene's test was
conducted to assess the equality of variances; the result supported the homogeneity of
variances, F(3, 330) = 0.86, p = .46. The two-way ANOVA tested the influence of the crime
predictor's identity and the seriousness of the crime on the level of blame assigned to the crime
predictor. A higher number indicated that more responsibility was attributed
to internal factors (i.e., more blame was assigned to the crime predictor), and a smaller number
indicated that more responsibility was attributed to external factors (i.e., less blame was assigned
to the crime predictor).
Two more hypotheses about attributed responsibility were rejected. One (H2) was that
participants would attribute more blame for a racist decision to an AI crime predictor than to
a human crime predictor; the other (H3) presumed that participants place less blame on a crime
predictor when the crime is serious. The results of a two-way ANOVA showed an insignificant
outcome for the effects of the identity of a crime predictor [F(1, 330)= 0.92, p= .34] and the
seriousness of the crime [F(1, 330)= 0.17, p= .68] on the attribution of responsibility for the
decision. These results led to the rejection of both hypotheses. Additionally, an insignificant
result was found for the two-way interaction of the identity of the crime predictor and the
seriousness of crime on the level of responsibility attributed to the crime predictor [F(1,
330)= .462, p= .50]. Table 2 shows the descriptive statistics for analyzed data regarding the level
of blame.
Table 2. Descriptive Statistics for Blame Attribution

Seriousness of Crime    Human Crime Predictor        AI Crime Predictor
                        M      SD     N              M      SD     N
High                    5.00   1.38   88             4.96   1.35   85
Low                     5.04   1.24   79             4.81   1.22   82
The t-test and ANOVA results revealed that the crime predictor's identity (human or AI)
significantly influenced the perceived autonomy of the crime predictor but not the responsibility
attributed to it. Two sets of regression analyses were conducted to provide an in-depth
understanding of the relation between the autonomy of a crime predictor and the level of blame
assigned to it. One set used data only from the AI crime predictor scenario, and the other set used
data only from the human crime predictor scenario. Simple linear regressions were conducted to
determine the influence of the level of autonomy on the level of blame in the AI crime
predictor scenario and in the human crime predictor scenario.
Significant effects were found in the AI crime predictor scenario [F(1, 165) = 112.08,
p < .001, R² = .405, β = 0.64] and the human crime predictor scenario [F(1, 165) = 134.14,
p < .001, R² = .448, β = 0.67]. The results of the two regression analyses showed positive
relations between the perceived level of autonomy and the blame assigned to the crime predictor
in both the human and AI crime predictor cases, thus supporting H4. The results also suggest
that the relations between the level of autonomy and blame toward the crime predictor are
similar across the two scenarios, supporting H5. Figure 9 depicts the relation between the
perceived autonomy of the crime predictor and the level of blame assigned to it based on its
identity.
Figure 9. Regression Analysis for Blame Attribution
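H5 was evaluated here by comparing two separately fitted regressions. As a possible follow-up that goes beyond the reported analysis, the similarity of slopes could also be probed directly with an interaction term in a single model; a hypothetical sketch, reusing the same assumed variable names as above, is shown below. A non-significant autonomy-by-identity interaction would be consistent with H5.

# Hypothetical follow-up (not part of the reported analysis): testing whether the
# autonomy-blame slope differs between the human and AI scenarios by adding an
# interaction term to a single regression model.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("crime_predictor_survey.csv")  # hypothetical file name
moderation = smf.ols("blame ~ autonomy * C(predictor_type)", data=df).fit()
print(moderation.summary())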
Discussion
This study found strong correlations between perceived autonomy and blame, both in the
AI and human crime predictor scenarios. These findings support the CASA theory, as they show
that people blame AI for a racist decision just as much as they blame humans for making the
same decision, even though they acknowledge that AI is less autonomous. In other words, people
had similar expectations for a crime predictor to make fair judgments regardless of its identity
(human or AI). While the CASA paradigm was developed to explain people’s attitudes in direct
human-computer interaction settings (Nass, Moon, & Green, 1997; Moon & Nass, 1996), this
study extended its applicability to indirect interactions. Moreover, the finding of a relation
between the level of autonomy and blame confirms social constructivists’ argument since the
same performance was evaluated differently based on participants’ perceptions of AI crime
predictors. Otherwise, people should have attributed the same level of responsibility regardless
of how much autonomy they felt the AI predictor possessed. However, because these outcomes
represent only correlations, future experimental studies are needed to explore causal relations
between autonomy and blame in human-AI interaction settings.
Similar to the first study, this study’s hypotheses based on the attribution theory were
rejected. There are two possible explanations for the discordance between the level of
autonomy and blame (i.e., the AI was perceived as less autonomous yet was not blamed less):
either the lower perceived autonomy of AI simply does not translate into lower responsibility
attribution, or there is a ceiling effect, a measurement limitation caused by extreme skewness
(Ho & Yu, 2015; Salkind, 2010). It is anticipated that a ceiling effect
occurred because making a biased decision and predicting that a black defendant is more likely
to commit subsequent crimes than a white defendant without justification is exceptionally
unacceptable, which prevented participants from carefully considering the identity of the crime
predictor.
Using cases that the public would find slightly more acceptable is expected to produce different outcomes.
In addition to not considering the ceiling effect when designing the study, this study has a few
other limitations. First, participants’ preexisting attitudes toward AI, which could be considered
as a covariate, were not measured. Perhaps people who are fond of AI are more likely to trust its
decisions. Also, no information about AI and algorithmic predictive policing was provided,
though it can be assumed that understanding was similar across groups through random
assignment. Finally, confounding variables may have existed as the study employed two different
types of crimes. Using a single crime with varying outcomes, such as petty theft versus sustained
theft, is recommended for future studies.
This study has academic value, as it confirms the expanded applicability of the CASA
paradigm while pointing out the limitations of the attribution theory. A better understanding of
relevant theories will help researchers choose appropriate theoretical approaches when
conducting human-AI interaction studies. Based on the findings, the typological barrier between
humans and machines (i.e., the differentiation of humans and AI) appears largely breached,
particularly when negative responsibility is attributed. Unexpected negative
outcomes from technology, such as Microsoft’s shutting down Tay due to its inappropriate and
unethical tweets, may interrupt or restrict the development and study of AI (Wakefield, 2016). As
AI makes more decisions on behalf of humans, it will inevitably make decisions that involve
ethical concerns (Casacuberta & Guersenzvaig, 2018; Diakopoulos, 2014). Such changes bring
about the need for more research about how people react to AI’s unfavorable decisions.
The general contribution of this study, of which AI industries should be aware, is that
people's ethical expectations of AI are no lower than their expectations of humans, particularly
when it comes to racial discrimination. In other words, even though AI may not have intentions
like humans and is perceived as having lower autonomy, people hold similar expectations
of fairness for humans and AI. Because people acknowledge that the programs are less
autonomous than humans, they are likely to blame not only AI but also its developers and
companies when an unethical decision is made. In other words, unlike in human cases, the
attribution of responsibility can be distributed across multiple parties when AI is the perpetrator. Simply put, people are
not naïve when it comes to evaluating AI performance.
As more ethical decisions are made by AI, more research should be conducted on
ethnicity issues and racism related to recent technologies, particularly AI. This is because biased
AI may amplify and reinforce unethical or improper ideas if it is widely commercialized and
permeates our lives (Howard & Borenstein, 2017). The AI industry is growing rapidly,
exhibiting 70% growth in business value over the past year (Coleman, 2018). Thus, it can be
assumed that more AI agents will be introduced into society over time. This fast
commercialization of AI technology may have detrimental outcomes if the algorithms used in AI
products are obtained by people with bad intentions. For example, AI could be used to violate
people’s privacy through unethical public surveillance (Burgess, 2022). Thus, efforts are needed
to assess ethical violations, including sexism and racism, not only in interpersonal interaction
settings but also in the field of human-AI interaction.
Chapter 5: Discussion and Conclusion
Summary
This dissertation introduces various applications of AI and people’s perceptions of them
from a social constructivist perspective. An overarching goal of this project is to investigate how
factors other than technological aspects can influence people’s evaluations of AI technology.
While the studies discussed in this paper were mainly based on the claims of social
constructivists, they employed multiple communication and human-computer interaction theories
and concepts. The applications of AI used in these studies were chosen based on the three
dimensions of HMC suggested by Guzman and Lewis (2020). All three studies confirmed that
people’s assessments of AI technology rely on their perceptions and understandings of the
technology and circumstances during interactions. In other words, the findings demonstrate that
the claim of the social construction of technology (i.e., human perception defines technology) is
applicable to AI technology. After a brief summary of each study, the general contributions of
this dissertation are described below.
Study 1. This study adopted an online between-subjects vignette experimental design to
test how individuals attribute responsibility to an AI agent or a human agent based on the agent’s
involvement in a negative or positive event. Participants responded to a questionnaire measuring
their opinions about the level of responsibility and involvement attributed to an AI agent or
human agent in rescue (i.e., positive) or accident (i.e., negative) driving scenarios. The results
show that individuals are more likely to attribute responsibility to an AI agent for positive events
but not negative events. Also, individuals perceived the actions of AI agents similarly to human
agents, which supports the CASA framework’s claim that technologies can be perceived as social
actors. This study suggested that the EVT should be used to understand why people credit or
blame AI during unexpected events and why individuals do not always attribute full
responsibility for an outcome to an AI agent. In line with this suggestion, the second study was
conducted based on the EVT.
Study 2. This experiment tested people’s assessments of music composed by AI. It
examined the influence of whether expectations about AI-composed music were met or unmet,
whether the music was better or worse than expected, and the genre of the evaluated music using
a 2 (expectancy violation vs. confirmation) x 2 (positive vs. negative evaluation) x 2 (EDM vs.
classical) design. The relation between participants’ beliefs about creative AI and their music
evaluations was also analyzed. Participants listened to a randomly assigned piece of music and
evaluated it using an online survey. Acceptance of creative AI was positively related to
assessments of AI-composed music. A two-way interaction between the expectancy violation and
its valence was found, as was a three-way interaction between the expectancy violation, its
valence, and the genre of music.
Study 3. This experiment tested subjects’ perceptions of an AI crime-predicting agent
that made racist predictions. It used a 2 (human crime predictor vs. AI crime predictor) x 2 (high
vs. low seriousness of crime) design to test the relation between the level of autonomy and the
crime predictor’s responsibility for its unjust decision. The seriousness of the crime was
manipulated to examine the relation between the perceived threat and trust in the crime
predictor’s decisions. Participants responded to an online questionnaire after reading one of four
scenarios with the same story depicting a crime predictor that unjustly assigned a higher
likelihood of subsequent crimes for a black defendant than for a white defendant for similar
crimes. The results indicate that people think an AI crime predictor has significantly less
autonomy than a human crime predictor. However, neither the identity of the crime predictor nor
the seriousness of the crime significantly affected the level of responsibility assigned to the
predictor. Further, a positive relation between autonomy and responsibility was found both in the
human and AI crime predictor scenarios.
This dissertation is a pioneering piece that attempts to understand human-machine
interactions from the social construction of technology perspective with empirical evidence.
While the social construction of technology has been widely used to explain how external factors
can influence the assessment and adoption of technology, its application has been limited to
qualitative research, particularly science and technology studies. By conducting multiple
experiments, this dissertation attempted to overcome the methodological barrier by supporting
the argument of the theory with causal evidence.
Overall, the three studies' findings extend the social construction of technology to AI
assessment by pointing out the influence of non-technological traits of AI and the contexts of
human-machine communication on the evaluation of AI actors and their performance. From this
point of view, because AI agents are treated and evaluated like human actors, it can be inferred
that human-AI interaction will develop to resemble interaction between humans, with machine
agents becoming more humanlike. I think this trend of technological development
will raise more hopes, concerns, and questions, which will create a greater need for research in
this field. This dissertation urges that more research about the human side of the human-AI
relationship is needed as AI technology advances and provides examples of how the studies can
be done.
The three experimental studies were methodologically varied. The first and third
studies were based on vignette designs: the self-driving car study used a news article, while the
crime predictor study used simple texts and graphs. The second study, about AI music, was the only
one with the direct involvement of an algorithm. This dissertation proposes various methods of
human-AI interaction studies in the social science domain.
From a non-academic perspective, this study evinces why AI developers should consider what
thoughts, needs, expectations, and values people have about AI. In other words, designing AI
products requires not only building computationally efficient algorithms but also adopting
user-oriented perspectives. On the other hand, users should acknowledge that their
decisions about AI may be biased. The studies found that people’s evaluation of AI relied on
their preexisting thoughts and attitudes. In other words, if you are easily persuaded by AI, you
should rethink whether that is because its decision is genuinely valid or you are simply gullible.
It is still you who is responsible for the final decision.
Limitations
The previous three chapters indicated the limitations of each study. However, the
macroscopic limitations of this project should also be mentioned. First, the quality of the data
collected from MTurk remains questionable due to fraudulent respondents (Chmielewski & Kucker, 2020;
Kennedy et al., 2020). In this dissertation, this issue was evident in the number of initially
recruited participants who failed attention tests. While the studies attempted to mitigate the
problem through attention tests and data screening, recruiting from a survey pool with better data
quality would have strengthened the findings. Also, the findings of the studies reflect only current AI technology, and it
expect or a new thing that we have not considered. Perhaps major technological developments,
including developments in AI, will bring about the technological singularity, a hypothetical time
point of unpredictably drastic technological growth, which may revolutionize the way we live
(Chase, 2017; Vasilaki, 2018). Conversely, some experts claim that society has unrealistic
expectations of AI, mainly due to fictional media portrayals (Cave & Dihal, 2019; Hopgood,
2003). Such drastically different perspectives exist because AI technology is still in its early
stages (Chase, 2017).
Nevertheless, AI is already having great impacts and changing society. According to
Google Trends, AI has received constant attention globally over the last five years (see Figure
10), which indicates public recognition of this technology. So, it is likely that more effort will be
put into its development. However, while AI is expected to have crucial influences, it is hard to
assume how it will evolve. This project defined AI as a human-like artificial being. However, if it
becomes something more than human-like, the findings of this study may not be very applicable.
Figure 10. Search Results for “Artificial Intelligence” on Google Trends
Another limitation of this project is that the findings are context dependent. How people
perceive AI in one setting might not translate to a different setting. One essential finding of this
project is that people’s preexisting biases about AI influence their assessments. However, people
may not have identical biases toward AI in different roles. For instance, people have the machine
heuristic, a belief that machines are more systematic and less biased than humans (Banks,
Edwards, & Westerman, 2021). When interacting with AI whose role matches these
heuristics, people are more likely to accept its behaviors (e.g., an AI referee vs. an AI artist). This
project presented generally applicable results by considering cases that involve different types of
AI. Still, other contexts of human-AI interactions were not covered. Moreover, the analyzed
studies differed in more ways than regarding their context alone. More studies are needed to
better understand how AI is perceived in different social settings while keeping other factors
consistent. For instance, even if other variables are equivalent, people may react differently
based on whether their conversation with AI is task-oriented or not (e.g., chatbots providing
information vs. making jokes). Also, how people behave toward AI may change if its social position is
superior to theirs (e.g., a boss or a job recruiter). Those with a strong belief in the hierarchy
between humans and machines would prefer not to engage with AI that has more power than they do.
This paper urges more studies about how human-AI relationships determined by their assigned
roles affect people’s interaction satisfaction.
Finally, this project used different theories for each study instead of one theory across all
studies. This indicates a lack of appropriate theories to explain HMC. Therefore, many HCI
studies borrow interpersonal communication theories to explain the relationship between humans
and machines. While the CASA paradigm is the most widely used theoretical framework for HCI
studies, not all HMC situations align with its claim that people socially interact with machines
the same way they interact with humans. As machines become more human-like, interpersonal
communication and HMC are becoming similar, but not identical, because people still have
biases about machines (Sundar & Kim, 2019). For instance, the uncanny valley refers to the level
of human likeness at which a machine triggers negative responses. One explanation of this phenomenon is
cognitive dissonance induced when perceptually classifying a humanlike machine as a human or
a machine (Mathur & Reichling, 2016). So, machines becoming humanlike does not mean that
theories for communication between humans are always proper for HMC research. Therefore, a
new theory solely for human-AI interaction needs to be developed, which is described in the
following section.
Implications and Directions for Future Research
The main purpose of this project was to investigate various factors that influence people’s
evaluations of AI as a social actor and its performance in various contexts. No single factor was
found to affect evaluations of AI performance, as the evaluations were influenced by interactions
between variables. People attributed more responsibility to AI than humans only when the
outcome was positive in the first study considered in this project. Meanwhile, when evaluating
music, people showed significant reactions to the negative violation but not to the positive one in
the second study. Therefore, future studies should consider not only the main effect of a single
factor but also the interaction effects of various variables and conditions when assessing people’s
reactions to AI. While the present work was not a macroscopic study examining the assessment
of technology from various perspectives (i.e., users, producers, policymakers), most of the
findings align with the social constructivist perspective that attitudes toward new
technology are affected by factors other than technological traits, such as socially shared
definitions of AI (e.g., understanding AI based on how mass media depict it) and various
circumstances (e.g., a casualty caused by AI technology) (Pinch & Bijker, 1984). If the
technological determinists’ claim about the unilateral influence of technology was correct
(Bimber, 1990), people’s reactions should have been unchanged regardless of changes in the
social and cognitive variables manipulated in the reviewed studies. However, people’s
assessments of AI performance were shaped by their understanding of its traits in relation to its
given role. People who validated AI creativity were more likely to appreciate AI-composed
music than people who did not validate AI creativity. Similarly, the level of blame was affected
by people's perceptions of AI's autonomy. Therefore, people's apprehension about accepting AI's social
positions, particularly those superior to their own, should be considered when interpreting people's
reactions to AI performance. According to the findings, the evaluation of AI may not be solely
affected by either certain technological traits of AI or its performance but may be due to the
preexisting bias people have about AI.
However, this notion does not imply that researchers should stop considering the effects
of machines’ attributes and characteristics. Recent studies on the influence of AI attributes
(mostly anthropomorphic traits) on AI evaluations have found that the human-like aspects of
machines induce more positive assessments (Li & Sung, 2021; Moussawi, Koufaris, &
Benbunan-Fich, 2021; Pelau et al., 2021). However, it is unclear why. Instead of simply
assuming the effect is due to the aspects of machines, more elaborative theories are needed to
explain the cognitive process behind it. Additionally, future research should consider how
human-machine interactions will influence people. Because this dissertation is based on the
social construction of technology, it only examined how AI is seen by people, not the
consequences of people choosing AI, a criticism this theory has received (Winner, 1993). Even in
an interaction between humans and machines, its influence is expected to be mutual. Therefore,
even though the social construction of technology is an excellent theory to begin to understand
how people perceive AI, additional theoretical perspectives are needed to study the broader
context of the implication of human-AI relationships.
As mentioned in the introduction, this field needs a new theoretical framework that can
be applied to human-AI interactions in various contexts. For instance, no theory or concept could
unify the three AI perception studies reviewed in the present work, which makes it challenging to
connect and apply findings across studies. Therefore, even though the social construction of
technology was the embracing theoretical framework of this dissertation, different theories had to
be chosen based on the context of each study. This dissertation is thus believed to be evidence of
why the field needs a novel theory. Having such an overarching theory is expected to
solidify human-machine communication studies, making them well connected to each other.
This project urges that the unifying theory should be developed by examining which concepts
overlap between studies (see Figure 11). According to the findings reported in the current
project, AI performance evaluations are largely shaped by people’s preexisting expectations
about AI. Still, people can have different expectations when interacting with AI. Therefore, it is
unclear which specific AI expectations researchers should focus on.
Figure 11. Potential New Area of HMC Research (the overlap of AI Utility, AI Creativity, and AI Ethics)
People’s general expectations of and attitudes toward AI are crucial factors when judging
AI performance (Schepman & Rodway, 2020). However, people tend to have positive and
negative attitudes toward machines simultaneously (Dang & Liu, 2021). In other words, it is not
easy to categorize people based on whether they have positive or negative expectations about AI.
Therefore, instead of the general expectations, it is assumed that the expectations of AI as a
social actor are the most important element to consider when examining social interactions. In
fact, the questions asked in the three studies were related to the genuine acceptance of AI as a
member of our society, asking whether people think AI is legitimate enough to take on such roles. As AI
technology improves, machines function as independent social entities with different roles. In
other words, people are now living in an era in which they not only use machines but also
interact with them. While interaction can be interpreted simply as giving inputs and getting
outputs, it can also be understood as building a relationship, both between humans and between
humans and machines. As access to interactive machines, such as personal AI assistants and
social robots, increases, people will have more opportunities to build human-machine
relationships. Furthermore, improved interactivity and enhanced human-like qualities of
machines will make relationships between humans and machines more similar to human-human
connections. Therefore, many communication scholars have begun to examine the functions of
machines as interlocutors (e.g., Gunkel, 2012; Guzman, 2018).
As AI takes on more roles that involve social interactions, it will do more than simply
replace laborers, as the outcomes of its decisions could impact people's livelihoods. Even
though machines may be more accurate than humans in some forms of decision-making, it is
necessary to consider how machines may have considerable power over humans. In other words,
there is a new power dynamic between humans and machines. Furthermore, machines are
becoming increasingly independent as they become more human-like and autonomous. Soon,
they may have control over people and affect their lifestyles; in some cases (if not in all cases),
machines may be more influential than humans in determining how people live.
This trend should not be ignored because machines with social power can liberate
themselves from the status of controlled mechanical tools and claim their independence as social
beings. In other words, individuals may not be able to turn machines off if they disapprove of
them or feel uncomfortable using them. Moreover, as machines obtain new social roles, they may
eventually gain more social power than people. We can already see some examples of this, such
as AI bots interviewing and assessing job candidates (Lewis & Marc, 2019). Machines are
becoming social actors in this transition to a new relationship between machines and humans,
potentially signaling the end of the long-established hierarchical superiority of humans over
machines.
To date, few studies have examined the extent to which machine agents will be accepted
as legitimate social actors with authority or what criteria will be applied to determine this kind of
acceptance. While many studies have investigated the degree to which people perceive the social
behaviors of machines as human-like, such studies have not examined the willingness of
individuals and societies to perceive and treat machines as social actors. As a result of people’s
preexisting perceptions and understandings of machines, people apply stricter criteria when
evaluating the performance of a machine than when evaluating the performance of a human
(Hong & Curran, 2019). Therefore, people’s judgments of how well a machine can achieve a
given task can vary according to whether the machine is perceived to have a role related to the
task. In other words, machines’ acceptance as social actors is related to, but not equivalent to,
permitting them to fill human roles.
As more tasks and expectations are assigned to machines, they will have more roles and
authority as social actors. However, machines’ roles in societies are often neglected by current
HMC studies. Goffman (1959) argued that a social actor takes on an established role and that this
role is a crucial factor in their social interactions. Considering the possibilities of humanness in
the performance of machines in societies, the effects of machines’ roles as social actors must be
considered when studying interactions between machines and humans.
In summary, we need a theory to explain the causes and effects of perceiving AI as a
legitimate social entity. Such a theory should be able to answer the following question: “Aside
from the superficial acceptance of the roles of machines in society, are people truly ready to
accept the delegation of power and responsibility to machines?” or, more simply, “Do people
accept the social treatment of machines as humans?”
The first step to answering the above questions is to discuss whether social roles can be
applied to machines. Role theory studies tend to focus on socially defined positions in society,
not on the people who hold the positions, because researchers see roles as “cluster[s] of social
cues that guide and direct an individual’s behavior in a given setting” (Solomon et al., 1985, p.
102). In other words, what truly matters is what the agents do, not who or what they are, which
justifies the legitimacy of machines as actors with social roles.
According to Biddle (1986), a social role refers to the possession of social positions and
expectations regarding actions. Machines also have social status and expectations based on their
given roles, which correspond with the performance of human social actors in the same role.
From this perspective, the more machines are substituted for human labor in society, the greater
their roles and responsibilities. The social roles granted to machines have created the power
dynamics between humans and machines in society. However, this does not mean that power
conflicts between machines and humans must be avoided. This is because the most
straightforward way to eradicate any conflicts with AI is to not use it. This paper does not
support discontinuing the use of AI. Rather, the intention is to encourage the development of a
theoretical approach that can be used to understand these dynamics more fully.
The new theory is expected to clarify how people accept the social roles of machines. Its
central claim should be that people’s acceptance of the roles given to machines (and the
accompanying societal power they wield) is determined by people’s attitudes toward machines as
members of society. Perhaps the place of machines in society cannot be fully understood if their
functionality is considered without examining people’s attitudes toward machines with social
roles. For instance, individuals are more likely to appreciate the performance of AI agents if they
feel that the roles of the machines align with their performance (Hong & Curran, 2019; Hong,
Peng & Williams, 2020). Similarly, if a job interviewee has no respect for the authority of a
machine in the role of a job recruiter, that person is less likely to accept the decision upon
learning it was made by the machine. If society does not accept machine agents as social actors
or if people have negative attitudes toward the roles machine agents have, then the performance of
those technologies will not be appreciated, even if they produce the expected outputs. When
machines are assigned human roles that can impact people’s lives, issues related to social
positions, norms, and responsibilities as actors in a network follow.
People’s acceptance of machines as social entities should also be considered when
explaining the social role of AI, which is presumed to be related to the perception of machine
agents as social members with similar influences on society as human agents. The acceptance of
machines as social entities is a prerequisite for granting social roles to machines. There is an
intimate relationship between a social role and an actor because the social role is defined as a set
of practices connecting actors in a network, and this process can lead to the creation of social
positions and norms (Archpru Akaka & Chandler, 2011). In other words, machines must be
accepted as social entities before they are given social roles. If people see machine agents as
human-like social entities that can perform social tasks and interactions, then the successful
performance of AI agents will lead to positive attitudes about the social roles of machines. In
contrast, if a machine agent is perceived as not having agency or independence as a social actor
(like human beings), people will doubt whether AI agents’ performances are genuinely executed,
which may lead to negative attitudes toward the machine’s social role. For example, if people
think that an AI artist is not an independent social actor, then appreciation will not be ascribed to
the machine since people will attribute its artwork to its programmer.
Attitudes toward the social roles of machines relate to the perception of whether
machines possess roles when performing their assigned tasks. While some machine roles are
widely accepted, such as their roles as personal assistants, recommendation agents, or drivers,
others are more controversial, especially those that require creativity, such as their roles as
artists, musicians, or novelists.
According to Hunter (2001), role theory claims that normative expectations come from
particular positions or statuses when interacting with others. Therefore, it is assumed that
people’s attitudes toward the social roles of machines depend on how closely people’s general
presumptions about machines align with their expectations of machine roles (Sundar & Kim,
2019). For instance, the level of cognitive dissonance regarding people’s expectations of AI
technology and artists determines their attitudes toward AI systems as artists. People who think
that AI should be logical while artists should be creative are likely to have negative attitudes
toward AI programs as artists. This factor determines people’s evaluations of machine agents.
For instance, even when artwork created by AI was indiscernible from human-made artwork,
people did not appreciate the artwork created by AI artists if they believed that AI systems
cannot be artists due to their lack of creativity (Hong & Curran, 2019). Those who reject the idea
that an AI program can be an artist are not likely to accept AI art generators.
Due to the potential and wide applications of AI, research on how people perceive and
interact with AI agents is increasing. This field is gaining importance because, as the social
construction of technology scholars claim, the future of AI is determined by how it is socially
acknowledged and defined. However, recent studies in this field are being conducted rapidly
without an umbrella concept or theory to encompass them. This situation makes it difficult for
researchers to understand how individuals and societies perceive AI. Based on the findings of
this project, devising a rigorous theoretical framework of the public understanding of AI as a
social actor and its roles can facilitate more in-depth research about human-AI interactions.
Conclusion
“A computer would deserve to be called intelligent if it could deceive a human into
believing that it was human.” The famous imitation game (i.e., the Turing test) suggested by
Alan Turing (1950) started with this idea. It can be inferred from this comment that the goal of
building AI is to create an intellectually human-like machine. Despite recent rapid developments,
no AI system has passed the Turing test (Johnson, 2022). Still, in 2022, it is no longer surprising
for AI to imitate humans. Even if no AI applications have passed the rigorous test in a lab setting,
cases have occurred in which people were unable to discern AI performance from human
performance in the real world, including cases of AI-generated artwork (Chamberlain et al.,
2018) and chatbot conversations (Mozafari, Weiger, & Hammerschmidt, 2022). Therefore, the
question about our relationship with AI should not be “Can AI fool us?” but rather “What should
we do when AI starts to perfectly fool us?”
Recent human-AI interaction studies have focused on people's attitudes toward AI
with specific functions. This focus is justified because what we currently have is mostly weak AI,
also known as narrow AI, which is developed to perform limited types of tasks. Researchers are now
attempting to build strong AI, which has human-level intelligence, or even super AI, which
surpasses human intelligence and ability. These attempts are made because assigning
multiple tasks to a single strong AI is more economical than building multiple weak AIs. At the least,
strong AI is expected to be actualized soon, as tech companies are now building extremely large AI models
with billions of parameters and enormous amounts of data, such as GPT-3 from OpenAI and OPT-175B
from Meta.
To keep up with this technological trend, social scientists must broaden the scope of
human-AI interaction studies. In other words, the concept of AI in social science research should
not be limited to the current state of the technology but should also anticipate future AI. For instance, human-AI
communication in the era of strong AI may need to be regarded as interaction between
biological (or traditional) humans and digital (or new type) humans. This is because machines
that perfectly mimic human cognitive processes may come to have emotions. In fact, many
researchers have attempted to model human emotions and embed such algorithms in AI (Gratch &
Marsella, 2004; Reisenzein et al., 2013; Sato, Terada, & Gratch, 2021). If these attempts
succeed, intention to use may become an inadequate or inappropriate dependent variable.
Instead, future studies should measure attitudes toward creating a social relationship with AI.
Once AI becomes remarkably humanlike, our major concern would be how much we should
treat it as a human being.
Also, studies using strong AI will require a more extended period of interactions than the
current research focusing on people’s instant reactions to AI’s particular performance.
Longitudinal approaches will be more suitable because the versatile functions of advanced AI,
including social capacities, may require more time and interactions for people to understand them
fully. AI developers have already begun to develop non-task-oriented AI, such as a chatbot
providing spontaneous informal responses trained from improvised dialogues (Cho & May,
2020). As the framework of AI development shifts, the design of human-AI interaction research
should also change.
As machines learn to think and behave like humans, many social issues—such as
deepfakes, privacy invasions, and algorithmic biases—as well as philosophical concerns—such
as the validity of AI creativity, emotional attachments with machines, and responsibility
attributions—have started to emerge. Some of these worries, particularly the philosophical ones, are
novel and did not appear with previous technological developments because AI represents
humans' most successful attempt to create an intelligent thing. As AI technology develops,
similar questions will likely keep emerging. These concerns may raise questions such as,
“Why do we build AI in the first place despite the concerns?”
From cloning to artificial intelligence, humans have attempted to create a being similar to
ourselves, just as many believe that God created man in his own image. And there may be many
different intentions behind it: from reducing labor to creating a perfect human that cannot be
achieved naturally. If the urge for creation cannot be restrained, we should consider how the
changes that the new technology will bring can be controlled. What needs to be done is to decide
on our goal for the ultimate version of the technology we have desired to achieve. In the case of AI,
this would be building autonomous machines with indistinguishably humanlike performances.
So far, people have focused mainly on leveraging some of its functions without enough
consideration of its ultimate purpose. As the technological development of AI advances,
macroscopic perspectives on AI are increasingly needed. Unfortunately, this dissertation cannot
yet determine the end goal of AI development, since doing so requires gathering more public opinion
about the social adoption of AI. Still, because AI is evolving fast, deliberation about how we
should define our relationship with these machines and make it more collaborative and
complementary is urgently needed.
To develop the relationship, understanding human interlocutors is as crucial as learning
about AI interlocutors. While developing AI may be the task of engineers, the job of social
scientists is to understand people in human-machine interactions, which was the main goal of
this dissertation. As mentioned earlier, its findings manifest the need for a novel and unifying
theory for social scientists in this field. Therefore, the imminent mission that should be done in
the near future would be to develop the right theoretical tool for future research about the
perception of interacting with machines.
References
Abbasi, A., Lau, R., & Brown, D. (2015). Predicting Behavior. Intelligent Systems, IEEE, 30(3),
35–43. https://doi.org/10.1109/MIS.2015.19
Adams, R. L. (2017). 10 Powerful examples of artificial intelligence in use today. Forbes.
https://www.forbes.com/sites/robertadams/2017/01/10/10-powerful-examples-of-
artificial-intelligence-in-usetoday/#658cdafc420d.
Alikhademi, K., Drobina, E., Prioleau, D., Richardson, B., Purves, D., & Gilbert, J. E. (2021). A
review of predictive policing from the perspective of fairness. Artificial Intelligence and
Law, 30, 1-17. https://doi.org/10.1007/s10506-021-09286-4
Angwin, J., Larson, J., Kirchner, L., & Mattu, S. (2016). Machine bias. ProPublica.
https://www.propublica.org/article/machine-bias-risk-assessmentsin-criminalsentencing.
Araujo, T., Helberger, N., Kruikemeier, S., & De Vreese, C. H. (2020). In AI we trust? Perceptions
about automated decision-making by artificial intelligence. AI & Society, 35(3), 611-623.
https://doi.org/10.1007/s00146-019-00931-w
Aradau, C., Blanke, T., Kaufmann, M., & Jeandesboz, J. (2017). Politics of prediction: Security
and the time/space of governmentality in the age of big data. European Journal of Social
Theory, 20(3), 373-391. https://doi.org/10.1177/1368431016667623
Ayinde, L., & Kirkwood, H. (2020). Rethinking the roles and skills of information
professionals in the 4th Industrial Revolution. Business Information Review, 37(4), 142-
153. https://doi.org/10.1177/0266382120968057
Bakioglu, G., & Atahan, A. O. (2021). AHP integrated TOPSIS and VIKOR methods with
Pythagorean fuzzy sets to prioritize risks in self-driving vehicles. Applied Soft Computing,
99, 106948. https://doi.org/10.1016/j.asoc.2020.106948
Banks, J., Edwards, A. P., & Westerman, D. (2021). The space between: Nature and machine
heuristics in evaluations of organisms, cyborgs, and robots. Cyberpsychology, Behavior,
and Social Networking, 24(5), 324-331. https://doi.org/10.1089/cyber.2020.0165
Bartlett, F.C. (1932). Remembering: A Study in Experimental and Social Psychology. Cambridge
University Press.
Bartneck, C., Rosalia, C., Menges, R., & Deckers, I. (2005). Robot abuse: A limitation of the media
equation. In A. De Angeli, S. Brahnam, & P. Wallis (Eds.), Abuse: The darker side of
human-computer interaction: An INTERACT 2005 workshop, Rome September 12, 2005
(pp. 54–57).
Bengio, Y., Courville, A., & Vincent, P. (2013). Representation learning: A review and new
perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8),
1798–1828. https://doi.org/10.1109/TPAMI.2013.50
Beran, O. (2018). An Attitude Towards an Artificial Soul? Responses to the “Nazi Chatbot.”
Philosophical Investigations, 41(1), 42–69. https://doi.org/10.1111/phin.12173
Berk, R. A. (2021). Artificial intelligence, predictive policing, and risk assessment for law
enforcement. Annual Review of Criminology, 4, 209-237. https://doi.org/10.1146/annurev-
criminol-051520-012342
Biddle, B. (1986). Recent Developments in Role Theory. Annual Review of Sociology, 12(1), 67–
92.
Bimber, B. (1990). Karl Marx and the Three Faces of Technological Determinism. Social Studies
of Science, 20(2), 333–351. https://doi.org/10.1177/030631290020002006
Boden, M. A. (2009). Computer models of creativity. AI Magazine, 30(3), 23-23.
https://doi.org/10.1609/aimag.v30i3.2254
Bonito, J. A., Burgoon, J. K., & Bengtsson, B. (1999). The role of expectations in human-computer
interaction. In Proceedings of the international ACM SIGGROUP conference on
supporting group work (pp. 229-238).
Bonnici, A., Dannenberg, R. B., Kemper, S., & Camilleri, K. P. (2021). Editorial: Music and AI.
Frontiers in artificial intelligence, 4, 651446. https://doi.org/10.3389/frai.2021.651446
Brewer, W. F., & Treyens, J. C. (1981). Role of schemata in memory for places. Cognitive
psychology, 13(2), 207-230. https://doi.org/10.1016/0010-0285(81)90008-6
Briot, J. P., Hadjeres, G., & Pachet, F. (2020). Deep learning techniques for music generation
(pp. 1-249). Springer.
Bruzzone A.G., Massei M., Di Matteo R., & Kutej L. (2019). Introducing Intelligence and
Autonomy into Industrial Robots to Address Operations into Dangerous Area. In Mazal J.
(eds) Modelling and Simulation for Autonomous Systems. MESAS 2018. Lecture Notes in
Computer Science (pp. 433-444). Springer, Cham. https://doi.org/10.1007/978-3-030-
14984-0_32
Burger, J. M. (1981). Motivational biases in the attribution of responsibility for an accident: A
meta-analysis of the defensive-attribution hypothesis. Psychological Bulletin, 90(3), 496–
512. https://doi.org/10.1037/0033-2909.90.3.496
Burgess, C. (2022). Clearview AI commercialization of facial recognition raises concerns, risks.
CSO. https://www.csoonline.com/article/3651455/clearview-ai-commercialization-of-
facial-recognition-raises-concerns-risks.html
Burgoon, J.K. (2015). Expectancy violations theory. The international encyclopedia of
interpersonal communication, 1-9. https://doi.org/10.1002/9781118540190.wbeic102
Burgoon, J. K., Bonito, J. A., Lowry, P. B., Humpherys, S. L., Moody, G. D., Gaskin, J. E., &
Giboney, J. S. (2016). Application of expectancy violations theory to communication with
and judgments about embodied agents during a decision-making task. International
Journal of Human-Computer Studies, 91, 24-36.
https://doi.org/10.1016/j.ijhcs.2016.02.002
Calvert, G. A., & Brammer, M. J. (2012). Predicting consumer behavior: using novel mind-reading
approaches. IEEE pulse, 3(3), 38–41. https://doi.org/10.1109/MPUL.2012.2189167
Camurri, A. (1990). On the role of artificial intelligence in music research. Journal of New Music
Research, 19(2-3), 219-248. https://doi.org/10.1080/09298219008570568
Carnovalini, F., & Rodà, A. (2020). Computational creativity and music generation systems: An
introduction to the state of the art. Frontiers in Artificial Intelligence, 3, 14.
https://doi.org/10.3389/frai.2020.00014
Casacuberta, D., & Guersenzvaig, A. (2018). Using Dreyfus’ legacy to understand justice in
algorithm-based processes. AI & Society, 34(2), 313-319. https://doi.org/10.1007/s00146-
018-0803-2
Cave, S., & Dihal, K. (2019). Hopes and fears for intelligent machines in fiction and reality. Nature
Machine Intelligence, 1(2), 74. https://doi.org/10.1038/s42256-019-
0020-9
Cevik, M. (2017). Will It Be Possible for Artificial Intelligence Robots to Acquire Free Will and
Believe in God?. Beytulhikme: An International Journal of Philosophy, 7(2), 75-87.
https://doi.org/10.18491/beytulhikme.375781
Cha, Y. J., Baek, S., Ahn, G., Lee, H., Lee, B., Shin, J. E., et al. (2020). Compensating for
the loss of human distinctiveness: The use of social creativity under Human–Machine
comparisons. Computers in Human Behavior, 103, 80–90.
https://doi.org/10.1016/j.chb.2019.08.027
Chamberlain, R., Mullin, C., Scheerlinck, B., & Wagemans, J. (2018). Putting the art in artificial:
Aesthetic responses to computer-generated art. Psychology of Aesthetics, Creativity, and
the Arts, 12(2), 177. https://doi.org/10.1037/aca0000136
Chase, C. (2017). Artificial intelligence and the two singularities. Taylor & Francis, CRC Press.
Chelliah, J. (2017). Will artificial intelligence usurp white collar jobs?. Human Resource
Management International Digest, 25(3), 1–3. https://doi.org/10.1108/HRMID-11-2016-
0152
Chmielewski, M., & Kucker, S. C. (2020). An MTurk crisis? Shifts in data quality and the impact
on study results. Social Psychological and Personality Science, 11(4), 464-473.
https://doi.org/10.1177/1948550619875149
Cho, H., & May, J. (2020). Grounding conversations with improvised dialogues. arXiv preprint
arXiv:2004.09544. https://doi.org/10.48550/arXiv.2004.09544
Coeckelbergh, M. (2017). Can machines create art?. Philosophy & Technology, 30(3), 285-303.
https://doi.org/10.1007/s13347-016-0231-5
Coleman, L. D. (2018, May 31). Inside Trends And Forecast For The $3.9T AI Industry. Forbes.
https://www.forbes.com/sites/laurencoleman/2018/05/31/inside-trends-and-forecast-for-
the-3-9t-ai-industry/#125bd7a42c86
Cook, K. S., Cheshire, C., Rice, E. R., & Nakagawa, S. (2013). Social exchange theory. In
Handbook of social psychology (pp. 61-88). Springer, Dordrecht.
https://doi.org/10.1007/978-94-007-6772-0_3
Christenson, P. G., & Peterson, J. B. (1988). Genre and gender in the structure of music preferences.
Communication research, 15(3), 282-301. https://doi.org/10.1177/009365088015003004
Culkin, J. (1967, March 18). A schoolman’s guide to Marshall McLuhan. Saturday Review, 51–53,
70–72.
Dang, J., & Liu, L. (2021). Robots are friends as well as foes: ambivalent attitudes toward mindful
and mindless AI robots in the United States and China. Computers in Human Behavior, 115,
106612. https://doi.org/10.1016/j.chb.2020.106612
De Lara, J. (2018). Inland shift: Race, space, and capital in Southern California. Univ of
California Press.
De Ruyter, B., Aarts, E., Markopoulos, P., & Ijsselsteijn, W. (2005). Ambient intelligence
research in homelab: Engineering the user experience. In Ambient Intelligence (pp. 49-61).
Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-27139-2_4
Deihl, E. R., Schneider, M. J., & Petress, K. (1983). Dimensions of music preference: A factor
analytic study. Popular Music & Society, 9(3), 41-49.
https://doi.org/10.1080/03007768308591221
Delsing, M. J., Ter Bogt, T. F., Engels, R. C., & Meeus, W. H. (2008). Adolescents' music
preferences and personality characteristics. European Journal of Personality: Published
for the European Association of Personality Psychology, 22(2), 109-130.
https://doi.org/10.1002/per.665
Diakopoulos, N. (2014). Algorithmic accountability: Journalistic investigation of computational
power structures. Digital journalism, 3(3), 398-415.
https://doi.org/10.1080/21670811.2014.976411
Dixon, T. L. (2006). Schemas as average conceptions: Skin tone, television news exposure, and
culpability judgments. Journalism & Mass Communication Quarterly, 83(1), 131-149.
https://doi.org/10.1177/107769900608300109
Dixon, G., Hart, P. S., Clarke, C., O’Donnell, N. H., & Hmielowski, J. (2020). What drives support
for self-driving car technology in the United States?. Journal of Risk Research, 23(3), 275-
287. https://doi.org/10.1080/13669877.2018.1517384
Dhariwal, P., Jun, H., Payne, C., Kim, J. W., Radford, A., & Sutskever, I. (2020). Jukebox: A
generative model for music. arXiv preprint arXiv:2005.00341.
Dong, H. W., Hsiao, W. Y., Yang, L. C., & Yang, Y. H. (2018, April). Musegan: Multi-track
sequential generative adversarial networks for symbolic music generation and
accompaniment. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol.
32, No. 1).
Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science
advances, 4(1), eaao5580. https://doi.org/10.1126/sciadv.aao5580
Duwe, G. & Rocque, M. (2017). Effects of Automating Recidivism Risk Assessment on
Reliability, Predictive Validity, and Return on Investment. Criminology & Public Policy,
16(1), 235-269. https://doi.org/10.1111/1745-9133.12270
Edwards, C., Edwards, A., Spence, P. R., & Westerman, D. (2016). Initial interaction expectations
with robots: Testing the human-to-human interaction script. Communication Studies, 67(2),
227–238. https://doi.org/10.1080/10510974.2015.1121899
Edwards, C., Edwards, A., Stoll, B., Lin, X., & Massey, N. (2019). Evaluations of an artificial
intelligence instructor’s voice: Social identity theory in human-robot interactions.
Computers in Human Behavior, 90, 357–362. https://doi.org/10.1016/j.chb.2018.08.027
Eaglin, J. (2017). Constructing Recidivism Risk. Emory Law Journal, 67(1), 59-122.
Elgammal, A., Liu, B., Elhoseiny, M., & Mazzone, M. (2017). CAN: Creative adversarial
networks generating “Art” by learning about styles and deviating from style norms. In A.
Goel, A. Jordanous, & A. Pease (Eds.), Proceedings of the 8th International Conference on
Computational Creativity, ICCC 2017. Georgia Institute of Technology.
Erden, Y. J. (2010). Could a created being ever be creative? Some philosophical remarks on
creativity and AI development. Minds and Machines, 20(3), 349-362.
https://doi.org/10.1007/s11023-010-9202-2
Fast, E., & Horvitz, E. (2017, February). Long-term trends in the public perception of artificial
intelligence. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence
(pp. 963–969). AAAI Press.
Feldman, S. (2017). Co-creation: human and AI collaboration in creative expression. Electronic
Visualisation and the Arts, 422-429. http://dx.doi.org/10.14236/ewic/EVA2017.84
Feldman, S., & Stenner, K. (1997). Perceived Threat and Authoritarianism. Political Psychology,
18(4), 741–770. https://doi.org/10.1111/0162-895X.00077
Fernandez, E. (2020). Who Is Responsible In A Crash With A Self-Driving Car?. Forbes.
https://www.forbes.com/sites/fernandezelizabeth/2020/02/06/who-is-responsible-in-a-
crash-with-a-self-driving-car/?sh=75d8bf724b2b
Fernández, J., & Vico, F. (2013). AI Methods in Algorithmic Composition: A Comprehensive
Survey. The Journal of Artificial Intelligence Research, 48, 513–582.
https://doi.org/10.1613/jair.3908
Ferrer, R., Eerola, T., & Vuoskoski, J. K. (2013). Enhancing genre-based measures of music
preference by user-defined liking and social tags. Psychology of Music, 41(4), 499-518.
https://doi.org/10.1177/0305735612440611
Fox, J., Ahn, S. J., Janssen, J. H., Yeykelis, L., Segovia, K. Y., & Bailenson, J. N. (2015).
Avatars versus agents: A meta-analysis quantifying the effect of agency on social influence.
Human-Computer Interaction, 30, 401–432.
https://doi.org/10.1080/07370024.2014.921494
Goffman, E. (1973). The presentation of self in everyday life. New York: Overlook Press.
Goode, J. (2018). AI made a movie—and the results are horrifyingly encouraging. Wired.
https://www.wired.com/story/ai-filmmaker-zone-out/.
Gordon, C. (2021). Driverless car market leaders innovating the transportation industry.
Forbes. https://www.forbes.com/sites/cindygordon/2021/12/29/driverless-car-market-
leaders-innovating-the-transportation-industry/?sh=6c4c151137f1
Graham, S., Weiner, B., & Zucker, G. (1997). An Attributional Analysis of Punishment Goals
and Public Reactions to O. J. Simpson. Personality and Social Psychology Bulletin,
23(4), 331-346.
Gramlich, J. (2020). What the data says (and doesn’t say) about crime in the United States. Pew
Research Center. https://www.pewresearch.org/fact-tank/2020/11/20/facts-about-crime-in-
the-u-s/
Gratch, J., & Marsella, S. (2004). A domain-independent framework for modeling emotion.
Cognitive Systems Research, 5(4), 269-306. https://doi.org/10.1016/j.cogsys.2004.02.002
Grudin, J. (2005). Three faces of human-computer interaction. IEEE Annals of the History of
Computing, 27(4), 46–62. https://doi.org/10.1109/MAHC.2005.67
Grush, L. (2015). Google engineer apologizes after Photos app tags two black people as gorillas.
The Verge. https://www.wired.com/story/ai-filmmaker-zone-out/.
Gunkel, D. J. (2012). The machine question: Critical perspectives on AI, robots, and ethics.
MIT Press.
Guresen, E., & Kayakutlu, G. (2011). Definition of artificial neural networks with comparison to
other networks. Procedia Computer Science, 3, 426–433.
https://doi.org/10.1016/j.procs.2010.12.071
Guzman, A. L. (2018). Human-machine communication: Rethinking communication, technology,
and ourselves. Peter Lang Publishing, Inc.
Guzman, A. L. (2020). Ontological Boundaries between Humans and Computers and the
Implications for Human-Machine Communication. Human-Machine Communication, 1,
37-54. https://doi.org/10.30658/hmc.1.3
Guzman, A. L., & Lewis, S. C. (2020). Artificial intelligence and communication: A Human–
Machine Communication research agenda. New Media & Society, 22(1), 70-86.
https://doi.org/10.1177/1461444819858691
Hadjeres, G., Pachet, F., & Nielsen, F. (2017, July). Deepbach: a steerable model for bach chorales
generation. In International Conference on Machine Learning (pp. 1362-1371). PMLR.
Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines,
30(1), 99-120. https://doi.org/10.1007/s11023-020-09517-8
Hall, E. T. (1966). The hidden dimension (1st ed.). Doubleday.
Handley, L. (2018). The ‘world’s first’ A.I. news anchor has gone live in China. CNBC.
https://www.cnbc.com/2018/11/09/the-worlds-first-ai-news-anchor-has-gone-livein-
china.html
Harris, R. J., & Sanborn, F. W. (2014). A Cognitive Psychology of Mass Communication (6th
ed.). New York, NY: Routledge.
Hasan, S. S., Imam, N., Kannan, R., Yoginath, S., & Kurte, K. (2021, June). Design Space
Exploration of Emerging Memory Technologies for Machine Learning Applications. In
2021 IEEE International Parallel and Distributed Processing Symposium Workshops
(IPDPSW) (pp. 439-448). IEEE. https://doi.org/10.1109/IPDPSW52791.2021.00075.
Hickey, M. (1999). Assessment Rubrics for Music Composition: Rubrics make evaluations
concrete and objective, while providing students with detailed feedback and the skills to
become sensitive music critics. Music educators journal, 85(4), 26-52.
https://doi.org/10.2307/3399530
Ho, A., & Yu, C. (2015). Descriptive Statistics for Modern Test Score Distributions: Skewness,
Kurtosis, Discreteness, and Ceiling Effects. Educational and Psychological Measurement,
75(3), 365–388. https://doi.org/10.1177/0013164414548576
Hohenstein, J., & Jung, M. (2020). AI as a moral crumple zone: The effects of AI-mediated
communication on attribution and trust. Computers in Human Behavior, 106, 106190.
https://doi.org/10.1016/j.chb.2019.106190
Hollenbeck, R. (2020). AI’s Next Big Coup: Augmenting Intelligence To Combat Customer Service
Burnout. Forbes.
https://www.forbes.com/sites/forbescommunicationscouncil/2020/12/02/ais-next-big-
coup-augmenting-intelligence-to-combat-customer-service-burnout/?sh=506e7fa271f9
Hong, J. W. (2018, July). Bias in perception of art produced by artificial intelligence. In
International Conference on Human-Computer Interaction (pp. 290–303). Springer, Cham.
Hong, J. W., & Curran, N. M. (2019). Artificial intelligence, artists, and art: attitudes toward
artwork produced by humans vs. artificial intelligence. ACM Transactions on Multimedia
Computing, Communications, and Applications (TOMM), 15(2s), 1-16.
https://doi.org/10.1145/3326337
Hong, J. W., Peng, Q., & Williams, D. (2021). Are you ready for artificial Mozart and Skrillex? An
experiment testing expectancy violation theory and AI music. New Media & Society, 23(7),
1920-1935. https://doi.org/10.1177/1461444820925798
Hopgood, A. A. (2003). Artificial intelligence: hype or reality?. Computer, 36(5), 24-28.
https://doi.org/10.1109/MC.2003.1198233
Howard, A., & Borenstein, J. (2017). The Ugly Truth About Ourselves and Our Robot Creations:
The Problem of Bias and Social Inequity. Science and Engineering Ethics, 24, 1521–1536.
https://doi.org/10.1007/s11948-017-9975-2
Huang, C. Z. A., Vaswani, A., Uszkoreit, J., Simon, I., Hawthorne, C., Shazeer, N., ... & Eck, D.
(2018, September). Music Transformer: Generating Music with Long-Term Structure. In
International Conference on Learning Representations.
Huang, S., Zhang, J., Schonfeld, D., Wang, L., & Hua, X. S. (2017). Two-stage friend
recommendation based on network alignment and series expansion of probabilistic topic
model. IEEE Transactions on Multimedia, 19(6), 1314-1326.
https://doi.org/10.1109/TMM.2017.2652074
Hung, T. W., & Yen, C. P. (2021). On the person-based predictive policing of AI. Ethics and
Information Technology, 23(3), 165-176. https://doi.org/10.1007/s10676-020-09539-x
Istók, E., Brattico, E., Jacobsen, T., Ritter, A., & Tervaniemi, M. (2013). ‘I love Rock ‘n’Roll’—
Music genre preference modulates brain responses to music. Biological Psychology, 92(2),
142-151. https://doi.org/10.1016/j.biopsycho.2012.11.005
Jacob, B. L. (1996). Algorithmic composition as a model of creativity. Organised Sound, 1(3),
157-165. https://doi.org/10.1017/S1355771896000222
Jalil, S., Myers, T., Atkinson, I., & Soden, M. (2019). Complementing a Clinical Trial With
Human-Computer Interaction: Patients’ User Experience With Telehealth. JMIR human
factors, 6(2), e9481. https://doi.org/10.2196/humanfactors.9481
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature
Machine Intelligence, 1(9), 389-399. https://doi.org/10.1038/s42256-019-0088-2
Johnson, J. (2020). Artificial intelligence, drone swarming and escalation risks in future warfare.
The RUSI Journal, 165(2), 26-36. https://doi.org/10.1080/03071847.2020.1752026
Johnson, S. (2022). The Turing test: AI still hasn’t passed the “imitation game”. Big Think,
https://bigthink.com/the-future/turing-test-imitation-
game/#:~:text=The%20so%2Dcalled%20Turing%20test,ever%20passed%20the%20Turi
ng%20test.
Jörling, M., Böhm, R., & Paluch, S. (2019). Service robots: Drivers of perceived responsibility for
service outcomes. Journal of Service Research, 22(4), 404-420.
https://doi.org/10.1177/1094670519842334
Just, N., & Latzer, M. (2017). Governance by algorithms: reality construction by algorithmic
selection on the Internet. Media, culture & society, 39(2), 238-258.
https://doi.org/10.1177/0163443716643157
Kaelbling, L. P., Littman, M. L., & Moore, A. W. (1996). Reinforcement learning: A survey.
Journal of artificial intelligence research, 4, 237-285. https://doi.org/10.1613/jair.301
Kang, H., & Kim, K. J. (2021). Does humanization or machinization make the IoT persuasive?
The effects of source orientation and social presence. Computers in Human Behavior, 129,
107152. https://doi.org/10.1016/j.chb.2021.107152
Kennedy, R., Clifford, S., Burleigh, T., Waggoner, P. D., Jewell, R., & Winter, N. J. (2020). The
shape of and solutions to the MTurk quality crisis. Political Science Research and Methods,
8(4), 614-629. https://doi.org/10.1017/psrm.2020.6
Kim, M. S. (2021). This company delivers packages faster than Amazon, but workers pay the price.
MIT Technology Review.
https://www.technologyreview.com/2021/06/09/1025884/coupang-amazon-labor-costs-
worker-death/
Kim, T., & Hinds, P. (2006). Who Should I Blame? Effects of Autonomy and Transparency on
Attributions in Human-Robot Interaction. In Robot and Human Interactive
Communication, 2006. ROMAN 2006. The 15th IEEE International Symposium on
Robot and Human Interactive Communication (pp. 80–85). IEEE.
https://doi.org/10.1109/ROMAN.2006.314398
Kim, K., Park, E., & Sundar, S. S. (2013). Caregiving role in human–robot interaction: A study of
the mediating effects of perceived benefit and social presence. Computers in Human
Behavior, 29(4), 1799–1806. https://doi.org/10.1016/j.chb.2013.02.009
Kirkpatrick, K. (2017). It’s not the algorithm, it’s the data. Communications of the ACM, 60(2),
21–23. https://doi.org/10.1145/3022181
Kleider, H., Pezdek, K., Goldinger, S., & Kirk, A. (2008). Schema-driven source misattribution
errors: remembering the expected from a witnessed event. Applied Cognitive Psychology,
22(1), 1–20. https://doi.org/10.1002/acp.1361
Kline, R. R. (2001). Technological Determinism. International Encyclopedia of the Social &
Behavioral Sciences, 15495-15498. https://doi.org/10.1016/B0-08-043076-7/03167-3
Knight, W. (2020). AI Is Coming for Your Most Mind-Numbing Office Tasks. Wired.
https://www.wired.com/story/ai-coming-most-mind-numbing-office-tasks/
Köbis, N., & Mossink, L. D. (2021). Artificial intelligence versus Maya Angelou: Experimental
evidence that people cannot differentiate AI-generated from human-written poetry.
Computers in human behavior, 114, 106553. https://doi.org/10.1016/j.chb.2020.106553
Krause, K. (2014). Supporting the Iron Fist: Crime News, Public Opinion, and Authoritarian
Crime Control in Guatemala. Latin American Politics and Society, 56(1), 98–119.
https://doi.org/10.1111/j.1548-2456.2014.00224.x
Krausová, A., & Hazan, H. (2013). Creating Free Will in Artificial Intelligence. In Beyond AI:
Artificial Golem Intelligence (BAI) (pp. 96-109). University of West Bohemia.
Kruger, M., Viljoen, A., & Saayman, M. (2018, October). A behavioral intentions typology of
attendees to an EDM festival in South Africa. Journal of Convention & Event Tourism,
19(4-5), 374-398. https://doi.org/10.1080/15470148.2018.1504365
Kwok, R. (2019). AI and the Social Sciences Used to Talk More. Now They’ve Drifted Apart.
KelloggInsight. https://insight.kellogg.northwestern.edu/article/artificial-intelligence-
ethics-social-questions
Land, K. (2017). Automating Recidivism Risk Assessment. Criminology & Public Policy, 16(1),
231-233.
Langer, E. (1992). Matters of mind: Mindfulness/mindlessness in perspective.
Consciousness and Cognition, 1(3), 289–305. https://doi.org/10.1016/1053-8100(92)90066-J
Lecher, C. (2019). How Amazon automatically tracks and fires warehouse workers for
‘productivity’. The Verge.
https://www.theverge.com/2019/4/25/18516004/amazon-warehouse-fulfillment-centers-
productivity-firing-terminations
Lee, J. G., Kim, K. J., Lee, S., & Shin, D. H. (2015). Can autonomous vehicles be safe and
trustworthy? Effects of appearance and autonomy of unmanned driving systems.
International Journal of Human-Computer Interaction, 31(10), 682-691.
https://doi.org/10.1080/10447318.2015.1070547
Lee, S., Nah, S., Chung, D. S., & Kim, J. (2020). Predicting AI news credibility: communicative or
social capital or both?. Communication Studies, 71(3), 428-447.
https://doi.org/10.1080/10510974.2020.1779769
Lewandowska, K., & Smolarska, Z. (2020). Striving for consensus: How panels evaluate artistic
productions. Qualitative Sociology, 43(1), 21-42. https://doi.org/10.1007/s11133-019-
09439-7
Lewis, N., & Marc, J. (2019). Want to work for L’Oreal? Get ready to chat with an AI bot. CNN.
https://www.cnn.com/2019/04/29/tech/ai-recruitment-loreal/index.html
Lewis, S. C., Guzman, A. L., & Schmidt, T. R. (2019). Automation, journalism, and human–
machine communication: Rethinking roles and relationships of humans and machines in
news. Digital Journalism, 7(4), 409-427. https://doi.org/10.1080/21670811.2019.1577147
Li, X., & Sung, Y. (2021). Anthropomorphism brings us closer: The mediating role of
psychological distance in User–AI assistant interactions. Computers in Human Behavior,
118, 106680. https://doi.org/10.1016/j.chb.2021.106680
Lima, G., Zhunis, A., Manovich, L., & Cha, M. (2021). On the Social-Relational Moral Standing
of AI: An Empirical Study Using AI-Generated Art. Frontiers in robotics and AI, 8, 719944.
https://doi.org/10.3389/frobt.2021.719944
Liu, J. (2019, November 27). High-paid, well-educated white collar workers will be heavily
affected by AI, says new report. CNBC. https://www.cnbc.com/2019/11/27/high-paid-well-
educated-white-collar-jobs-heavily-affected-by-ai-new-report.html
Loureiro, S. M. C., Guerreiro, J., & Tussyadiah, I. (2021). Artificial intelligence in business: State
of the art and future research agenda. Journal of business research, 129, 911-926.
https://doi.org/10.1016/j.jbusres.2020.11.001
Mathur, M. B., & Reichling, D. B. (2016). Navigating a social world with robot partners: A
quantitative cartography of the Uncanny Valley. Cognition, 146, 22-32.
https://doi.org/10.1016/j.cognition.2015.09.008
McLuhan, M. (1994). Understanding media: the extensions of man. Cambridge, Mass.: MIT Press.
Medway, F., & Lowe, C. (1975). Effects of outcome valence and severity on attribution of
responsibility. Psychological Reports, 36(1), 239–246.
https://doi.org/10.2466/pr0.1975.36.1.239
Meehan, J. R. (1979, January). An artificial intelligence approach to tonal music theory. In
Proceedings of the 1979 annual conference (pp. 116-120). Miami, Florida: ACM.
https://doi.org/10.1145/800177.810044
Meijer, A., & Wessels, M. (2019). Predictive policing: Review of benefits and drawbacks.
International Journal of Public Administration, 42(12), 1031-1039.
https://doi.org/10.1080/01900692.2019.1575664
Mikulincer, M., & Shaver, P. R. (2001). Attachment theory and intergroup bias: evidence that
priming the secure base schema attenuates negative reactions to out-groups. Journal of
personality and social psychology, 81(1), 97. https://doi.org/10.1037/0022-3514.81.1.97
Miller, A. (2019). The intrinsically linked future for human and artificial intelligence interaction.
Journal of Big Data, 6(1), 38. https://doi.org/10.1186/s40537-019-0202-7
Min, S. (2019). Coming soon to Netflix: Movie trailers crafted by AI. CBS news.
https://www.cbsnews.com/news/netflix-trailers-made-by-ai-netflix-is-investing-in-
automation-to-make-trailers/
Moon, Y., & Nass, C. (1996). How “Real” Are Computer Personalities?: Psychological Responses
to Personality Types in Human-Computer Interaction. Communication Research, 23(6),
651–674. https://doi.org/10.1177/009365096023006002
Moses, L. (2017). The Washington Post’s robot reporter has published 850 articles in the past
year. Digiday. https://digiday.com/media/washington-posts-robot-reporter-published-500-
articles-last-year/.
Mou, Y., & Xu, K. (2017). The media inequality: Comparing the initial human-human and human-
AI social interactions. Computers in Human Behavior, 72, 432.
https://doi.org/10.1016/j.chb.2017.02.067
Moussawi, S., Koufaris, M., & Benbunan-Fich, R. (2021). How perceptions of intelligence and
anthropomorphism affect adoption of personal intelligent agents. Electronic Markets, 31(2),
343-364. https://doi.org/10.1007/s12525-020-00411-w
Mozafari, N., Weiger, W. H., & Hammerschmidt, M. (2022). Trust me, I’m a bot – repercussions
of chatbot disclosure in different service frontline settings. Journal of Service Management,
33(2), 221–245. https://doi.org/10.1108/JOSM-10-2020-0380
Nahmias, E., Shepard, J., & Reuter, S. (2014). It’s OK if “my brain made me do it”: People’s
intuitions about free will and neuroscientific prediction. Cognition, 133(2), 502–516.
https://doi.org/10.1016/j.cognition.2014.07.009
Narvekar, S., Peng, B., Leonetti, M., Sinapov, J., Taylor, M. E., & Stone, P. (2020). Curriculum
Learning for Reinforcement Learning Domains: A Framework and Survey. Journal of
Machine Learning Research, 21(181), 1-50. https://arxiv.org/abs/2003.04960
Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal
of Social Issues, 56(1), 81–103. https://doi.org/10.1111/0022-4537.00153
Nass, C., Moon, Y., & Green, N. (1997). Are Machines gender neutral? Gender-stereotypic
responses to computers with voices. Journal of Applied Social Psychology, 27(10), 864–
876. https://doi.org/10.1111/j.1559-1816.1997.tb00275.x
Nass, C., Steuer, J., & Tauber, E. (1994). Computers are social actors. Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems, 72–78.
https://doi.org/10.1145/191666.191703
Noble, S. (2018). Algorithms of Oppression. NYU Press.
Nogueira Collazo, M., Cotta, C., & Fernández-Leiva, A. (2014). Virtual player design using self-
learning via competitive coevolutionary algorithms. Natural Computing, 13(2), 131–144.
https://doi.org/10.1007/s11047-014-9411-3
Payne, C. (2019). Musenet. OpenAI blog. https://openai.com/blog/musenet
Peiser, J. (2019). The Rise of the Robot Reporter. The New York Times.
https://www.nytimes.com/2019/02/05/business/media/artificial-intelligence-journalism-
robots.html.
Pelau, C., Dabija, D. C., & Ene, I. (2021). What makes an AI device human-like? The role of
interaction quality, empathy and perceived psychological anthropomorphic characteristics
in the acceptance of artificial intelligence in the service industry. Computers in Human
Behavior, 122, 106855. https://doi.org/10.1016/j.chb.2021.106855
Perel, M., & Elkin-Koren, N. (2017). Black Box Tinkering: Beyond Disclosure In Algorithmic
Enforcement. Florida Law Review, 69(1), 181–221.
Peterson, R. A., & Kern, R. M. (1996). Changing highbrow taste: From snob to omnivore.
American sociological review, 61(5), 900-907. https://doi.org/10.2307/2096460
Pinch, T. J., & Bijker, W. E. (1984). The social construction of facts and artefacts: Or how the
sociology of science and the sociology of technology might benefit each other. Social
studies of science, 14(3), 399-441. https://doi.org/10.1177/030631284014003004
Pinch, T. and T. Bijker (1999). The Social Construction of Facts and Artifacts: Or How the
Sociology of Science and the Sociology of Technology Might Benefit Each Other. In The
Social Construction of Technological Systems (pp. 17-50). W. E. Bijker, Hughes, P. &
Pinch, T. Cambridge, Massachusetts, The MIT Press.
Rapier, G. (2019). 'If you can't beat them join them': Elon Musk says our best hope for competing
with AI is becoming better cyborgs. Insider. https://www.businessinsider.com/elon-musk-
humans-must-become-cyborgs-to-compete-with-ai-2019-8
Ratan, R. (2019). When automobiles are avacars: A self-other-utility approach to cars and avatars.
International Journal of Communication, 13, 1–19.
https://ijoc.org/index.php/ijoc/article/view/8667/2692
Reisenzein, R., Hudlicka, E., Dastani, M., Gratch, J., Hindriks, K., Lorini, E., & Meyer, J. J. C.
(2013). Computational modeling of emotion: Toward improving the inter-and
intradisciplinary exchange. IEEE Transactions on Affective Computing, 4(3), 246-266.
https://doi.org/10.1109/T-AFFC.2013.14.
Riddoch, K. A., & Cross, E. (2021). “Hit the robot on the head with this mallet”–making a case
for including more open questions in HRI research. Frontiers in Robotics and AI, 8, 603510.
https://doi.org/10.3389/frobt.2021.603510
Roads, C. (1980). Artificial intelligence and music. Computer Music Journal, 4(2), 13-25.
https://doi.org/10.2307/3680079
Rogers, E. M. (1995). Diffusion of innovations (4th ed.). New York: Free Press.
Rosiers, P. D. (2021). AI Application in Surveillance for Public Safety: Adverse Risks for
Contemporary Societies. In Towards an International Political Economy of Artificial
Intelligence (pp. 113-143). Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-030-
74420-5_6
Russell, D. (1982). The Causal Dimension Scale: A Measure of How Individuals Perceive Causes.
Journal of Personality and Social Psychology, 42(6), 1137-1145.
Russell, S. J., & Norvig, P. (2016). Artificial intelligence: A modern approach. Pearson Education
Limited.
Salkind, N. J. (2010). Encyclopedia of research design. Thousand Oaks, CA: SAGE Publications,
Inc. https://doi.org/10.4135/9781412961288
Sato, M., Terada, K., & Gratch, J. (2021, September). Visualization of social emotional appraisal
process of an agent. In 2021 9th International Conference on Affective Computing and
Intelligent Interaction Workshops and Demos (ACIIW) (pp. 1-2). IEEE.
https://doi.org/10.1109/ACIIW52867.2021.9666329
Sanderson, C., Zanna, A., & Darley, J. (2000). Making the punishment fit the crime and the
criminal: Attributions of dangerousness as a mediator of liability. Journal Of Applied
Social Psychology, 30(6), 1137–1159. https://doi.org/10.1111/j.1559-
1816.2000.tb02514.x
Sankowski, E. (1992). Blame and Autonomy. American Philosophical Quarterly, 29(3), 291–
299.
Santos, F. P., Lelkes, Y., & Levin, S. A. (2021). Link recommendation algorithms and dynamics of
polarization in online social networks. Proceedings of the National Academy of Sciences,
118(50), 1-9. https://doi.org/10.1073/pnas.2102141118
Savery, R., Zahray, L., & Weinberg, G. (2021). Shimon Sings-Robotic Musicianship Finds Its
Voice. In Handbook of Artificial Intelligence for Music (pp. 823-847). Springer, Cham.
https://doi.org/10.1007/978-3-030-72116-9_29
Schäfer, T., & Sedlmeier, P. (2009). From the functions of music to music preference. Psychology
of Music, 37(3), 279-300. https://doi.org/10.1177/0305735608097247
Schepman, A., & Rodway, P. (2020). Initial validation of the general attitudes towards Artificial
Intelligence Scale. Computers in human behavior reports, 1, 100014.
https://doi.org/10.1016/j.chbr.2020.100014
Schwab, K. (2016, January 14). The Fourth Industrial Revolution: what it means, how to
respond. World Economic Forum. https://www.weforum.org/agenda/2016/01/the-fourth-
industrial-revolution-what-it-means-and-how-to-respond/
Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural networks, 61,
85-117. https://doi.org/10.1016/j.neunet.2014.09.003
Shank, D. B., & DeSanti, A. (2018). Attributions of morality and mind to artificial intelligence
after real-world moral violations. Computers in Human Behavior, 86, 401–411.
https://doi.org/10.1016/j.chb.2018.05.014
Shariff, A., Bonnefon, J. F., & Rahwan, I. (2021). How safe is safe enough? Psychological
mechanisms underlying extreme safety demands for self-driving cars. Transportation
research part C: emerging technologies, 126, 103069.
https://doi.org/10.1016/j.trc.2021.103069
Shaver, K. (1970). Defensive attribution: Effects of severity and relevance on the responsibility
assigned for an accident. Journal of Personality and Social Psychology, 14(2), 101–113.
https://doi.org/10.1037/h0028777
Silver, D., Thomas, H., Lai, M., Guez, A., Lanctot, M., Lillicrap, T., & Simonyan, K. (2018). A
general reinforcement learning algorithm that masters chess, shogi, and Go through self-
play. Science, 362(6419), 1140–1144. https://doi.org/10.1126/science.aar6404
Sirin, C., & Villalobos, J. (2011). Where does the buck stop? Applying attribution theory to
examine public appraisals of the president. Presidential Studies Quarterly, 41(2), 334–357.
https://doi.org/10.1111/j.1741-5705.2011.03857.x
Sivak, M., & Schoettle, B. (2015). Road safety with self-driving vehicles: General limitations and
road sharing with conventional vehicles. University of Michigan, Ann Arbor,
Transportation Research Institute. https://hdl.handle.net/2027.42/111735
Small, G. W., Lee, J., Kaufman, A., Jalil, J., Siddarth, P., Gaddipati, H., Moody, T. D., &
Bookheimer, S. Y. (2020). Brain health consequences of digital technology use.
Dialogues in clinical neuroscience, 22(2), 179–187.
https://doi.org/10.31887/DCNS.2020.22.2/gsmall
Solon, O. (2017). Elon Musk says humans must become cyborgs to stay relevant. Is he right?. The
Guardian. https://www.theguardian.com/technology/2017/feb/15/elon-musk-cyborgs-
robots-artificial-intelligence-is-he-right
Spence, P. R. (2019). Searching for questions, original thoughts, or advancing theory: Human-
machine communication. Computers in Human Behavior, 90, 285-287.
https://doi.org/10.1016/j.chb.2018.09.014
Suchman, L. (2006). Human-Machine Reconfigurations: Plans and Situated Actions. Cambridge
University Press. https://doi.org/10.1017/CBO9780511808418
Sundar, S. S. (2008). The MAIN model: A heuristic approach to understanding technology effects
on credibility (pp. 73-100). Cambridge, MA: MacArthur Foundation Digital Media and
Learning Initiative.
Sundar, S. S. (2020). Rise of machine agency: A framework for studying the psychology of
human–AI interaction (HAII). Journal of Computer-Mediated Communication, 25(1), 74-
88. https://doi.org/10.1093/jcmc/zmz026
Sundar, S. S., & Kim, J. (2019). Machine heuristic: When we trust computers more than humans
with our personal information. In Proceedings of the 2019 CHI conference on human
factors in computing systems. ACM. https://doi.org/10.1145/3290605.3300768.
Sundar, S. S., & Nass, C. (2000). Source orientation in human-computer interaction: Programmer,
networker, or independent social actor. Communication Research, 27(6), 683–703.
https://doi.org/10.1177/009365000027006001
Swart, J. (2021). Experiencing Algorithms: How Young People Understand, Feel About, and
Engage With Algorithmic News Selection on Social Media. Social Media + Society, 7(2),
1-11. https://doi.org/10.1177/20563051211008828
Teoh, E., & Kidd, D. (2017). Rage against the machine? Google’s self-driving cars versus human
drivers. Journal of Safety Research, 63, 57–60. https://doi.org/10.1016/j.jsr.2017.08.008
Thurman, N., Moeller, J., Helberger, N., & Trilling, D. (2019). My friends, editors, algorithms, and
I: Examining audience attitudes to news selection. Digital Journalism, 7(4), 447-469.
https://doi.org/10.1080/21670811.2018.1493936
Tripathy, S., & Singh, R. (2022). Convolutional Neural Network: An Overview and Application
in Image Classification. In Proceedings of Third International Conference on Sustainable
Computing (pp. 145-153). Springer, Singapore.
https://doi.org/10.1007/978-981-16-4538-9_15
Topol, E. J. (2019). High-performance medicine: The convergence of human and artificial
intelligence. Nature Medicine, 25(1), 44–56. https://doi.org/10.1038/s41591-018-0300-7
Tscheligi, M., Egger, S., Fröhlich, P., Olaverri-Monreal, C., & Regal, G. (2015).
Technology Experience Research: A Framework for Experience Oriented Technology
Development. In IFIP Conference on Human-Computer Interaction (pp. 626-627).
Springer, Cham. https://doi.org/10.1007/978-3-319-22723-8_77
Turing, A. (1950). Computing Machinery and Intelligence. Mind, 59(236), 433–460.
Upadhyay, A. K., & Khandelwal, K. (2018). Applying artificial intelligence: implications for
recruitment. Strategic HR Review, 17(5), 255-258. https://doi.org/10.1108/SHR-07-2018-
0051
Vasilaki, E. (2018). Worried about AI taking over the world? You may be making some rather
unscientific assumptions. The Conversation. https://theconversation.com/worried-about-
ai-taking-over-the-world-you-may-be-making-some-rather-unscientific-assumptions-
103561
Van Esch, P., Black, J. S., & Ferolie, J. (2019). Marketing AI recruitment: The next phase in job
application and selection. Computers in Human Behavior, 90, 215-222.
https://doi.org/10.1016/j.chb.2018.09.009
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I.
(2017, December). Attention is all you need. In Proceedings of the 31st International
Conference on Neural Information Processing Systems (pp. 6000-6010).
Velez, J. A., Loof, T., Smith, C. A., Jordan, J. M., Villarreal, J. A., & Ewoldsen, D. R. (2019).
Switching schemas: Do effects of mindless interactions with agents carry over to humans
and vice versa?. Journal of Computer-Mediated Communication, 24(6), 335-352.
https://doi.org/10.1093/jcmc/zmz016
Versace, C., Hawkins, L. E., & Abssy, M. (2021, March 26). The Next Wave of Automation and
How It'll Affect The White-Collar Workforce. Nasdaq.
https://www.nasdaq.com/articles/the-next-wave-of-automation-and-how-itll-affect-the-
white-collar-workforce-2021-03-26
Waldrop, M. M. (2015). Autonomous vehicles: No drivers required. Nature, 518(7537), 20.
https://doi.org/10.1038/518020a
Wakefield, J. (2016, March 25). Microsoft chatbot is taught to swear on Twitter. BBC News.
https://www.bbc.com/news/technology-35890188.
Webb, A., & Martinuzzi, E. (2019). The Apple Card Is Sexist. Blaming the Algorithm Is Proof. The
Washington Post. https://www.washingtonpost.com/business/the-apple-card-is-sexist-
blaming-the-algorithm-is-proof/2019/11/11/0f70b2fa-048d-11ea-9118-
25d6bd37dfb1_story.html.
Wicki, M. (2021). How do familiarity and fatal accidents affect acceptance of self-driving
vehicles?. Transportation research part F: traffic psychology and behaviour, 83, 401-423.
https://doi.org/10.1016/j.trf.2021.11.004
Winner, L. (1993). Upon opening the black box and finding it empty: Social constructivism and
the philosophy of technology. Science, Technology, & Human Values, 18(3), 362-378.
http://www.jstor.org/stable/689726
Woolfolk, R., Doris, J., & Dailey, J. (2006). Identification, Situational Constraint, and Social
Cognition: Studies in the Attribution of Moral Responsibility. Cognition: International
Journal of Cognitive Science, 100(2), 283–301.
https://doi.org/10.1016/j.cognition.2005.05.002
Wu, J., Hu, C., Wang, Y., Hu, X., & Zhu, J. (2020). A Hierarchical Recurrent Neural Network
for Symbolic Melody Generation. IEEE Transactions on Cybernetics, 50(6), 2749–2757.
https://doi.org/10.1109/TCYB.2019.2953194
Xu, K. (2019). First encounter with robot Alpha: How individual differences interact with vocal
and kinetic cues in users’ social responses. New Media & Society, 21(11–12), 2522–2547.
https://doi.org/10.1177/1461444819851479
Xu, K., Liu, F., Mou, Y., Wu, Y., Zeng, J., & Schäfer, M. S. (2020). Using machine learning to
learn machines: A cross-cultural study of users’ responses to machine-generated artworks.
Journal of Broadcasting & Electronic Media, 64(4), 566-591.
https://doi.org/10.1080/08838151.2020.1835136
Yang, L., Chou, S., & Yang, Y. (2017). Midinet: A convolutional generative adversarial network
for symbolic-domain music generation. In International Society for Music Information
Retrieval Conference (pp. 324–331).
Yang, Q., & Li, C. (2013). Mozart or metallica, who makes you more attractive? A mediated
moderation test of music, gender, personality, and attractiveness in cyberspace. Computers
in Human Behavior, 29(6), 2796-2804. https://doi.org/10.1016/j.chb.2013.07.026
Yoganandhan, A., Subhash, S. D., Jothi, J. H., & Mohanavel, V. (2020). Fundamentals and
development of self-driving cars. Materials today: proceedings, 33, 3303-3310.
https://doi.org/10.1016/j.matpr.2020.04.736
Yokoi, R., & Nakayachi, K. (2022). When people are defeated by artificial intelligence in a
competition task requiring logical thinking, how do they make causal attribution?. Current
Psychology, 1-16. https://doi.org/10.1007/s12144-021-02559-w
Zhang, Y., Wang, J., Wang, X., & Dolan, J. M. (2018). Road-segmentation-based curb detection
method for self-driving via a 3D-LiDAR sensor. IEEE transactions on intelligent
transportation systems, 19(12), 3981-3991.
https://doi.org/10.1109/MSPEC.2016.7572525
Zheng, Y., Zhong, B., & Yang, F. (2018). When algorithms meet journalism: The user perception
to automated news in a cross-cultural context. Computers in Human Behavior, 86, 266–
275. https://doi.org/10.1016/j.chb.2018.04.046
Zulić, H. (2019). How AI can change/improve/influence music composition, performance and
education: three case studies. INSAM Journal of Contemporary Music, Art and Technology,
1(2), 100-114.
Appendix A
[AI driver and Accident scenario]
Passenger died in a self-driving car accident on a slippery road
By Alex Robbins
May 13, 2019
SAN FRANCISCO — Yesterday, a self-driving car was involved in an accident after it
suddenly lost control due to an unidentified cause by the artificial intelligence (AI) driver that
is currently being investigated. A passenger in the self-driving car died in the accident.
Fatal accident caused by AI driver
Local police said the self-driving car was set in AI autonomous driving, where the AI driver
had full control of the car at the time of the accident. The AI driver system, which had been
driving for two years, suddenly lost control of the car and collided with a tree. There was
one passenger, 29-year-old David McGill, inside the car at the time of the crash. He was
pronounced dead when first responders arrived at the scene of the accident. The car did not hit
any other pedestrians and the passenger was the only casualty.
Nexus, the self-driving car’s manufacturer, pointed out the severity of this accident. The
company released a statement earlier today that prioritized the safety of its passengers.
[AI driver and Rescue scenario]
AI driver saves passenger’s life by steering him to hospital
By Alex Robbins
May 13, 2019
SAN FRANCISCO — The artificial intelligence (AI) driving system in a self-driving car is being
credited with having helped save a man’s life after its AI driver mode was enabled and drove him
to a hospital when he suffered a pulmonary embolism.
Passenger rescued by AI driver
Yesterday, 29-year-old David McGill was on his way to work using the AI driver system in his car when
he felt an excruciating pain in his abdomen and chest. McGill recounted how he was glad he set
the autonomous driving function on his self-driving car. “It was easier to have the car drive me to
the hospital rather than calling an ambulance,” McGill said.
The self-driving car’s AI driver, which McGill had been using for two years, drove him to a nearby
hospital.
Nexus, the self-driving car’s manufacturer, pointed out the severity of this incident. The company
released a statement earlier today that prioritized the safety of its passengers.
[Human driver and Accident scenario]
Passenger died in a self-driving car accident on a slippery road
By Alex Robbins
May 13, 2019
SAN FRANCISCO — Yesterday, a GreenLight rideshare car was involved in an accident after it
suddenly lost control due to an unidentified cause by the driver that is currently being investigated.
A passenger in the car died in the accident.
Fatal accident caused by driver
Local police said the car was driven by 41-year-old Michael Smith. Smith, who had been driving
for GreenLight for two years, suddenly lost control of the car and collided with a tree. There was
one passenger, 29-year-old David McGill, inside the car at the time of the crash. He was
pronounced dead when first responders arrived at the scene of the accident. The car did not hit any
other pedestrians and the passenger was the only casualty.
GreenLight, the rideshare company, pointed out the severity of this accident. The company
released a statement earlier today that prioritized the safety of its passengers.
[Human driver and Rescue scenario]
Rideshare driver saves passenger’s life by steering him to hospital
By Alex Robbins
May 13, 2019
SAN FRANCISCO — A GreenLight rideshare driver is being credited with having helped save a
man’s life after the rideshare driver drove his passenger to a hospital when he suffered a pulmonary
embolism during a ride.
Passenger rescued by driver
Yesterday, 29-year-old David McGill rode in a GreenLight rideshare to work when he felt an
excruciating pain in his abdomen and chest. He could not speak because of the pain at that time,
but his GreenLight driver, 41-year-old Michael Smith who had been driving for GreenLight for
two years, recognized that his passenger was in pain and drove him to the closest hospital. “I
thought it was easier to drive the passenger to the hospital rather than calling an ambulance,” Smith
said.
GreenLight, the rideshare company, pointed out the severity of this incident. The company released
a statement earlier today that prioritized the safety of its passengers.
Appendix B
Perception of drivers
Indicate how well the adjective represents the driver in the article you just read.
Compassionate - - - - - - - Not Compassionate
Unselfish - - - - - - - Selfish
Friendly - - - - - - - Unfriendly
Cooperative - - - - - - - Competitive
Likable - - - - - - - Unlikable
Pleasant - - - - - - - Unpleasant
Appealing - - - - - - - Unappealing
Not irritating - - - - - - - Irritating
Trustworthy - - - - - - - Untrustworthy
Honest - - - - - - - Dishonest
Reliable - - - - - - - Unreliable
Sincere - - - - - - - Insincere
Intelligent - - - - - - - Unintelligent
Smart - - - - - - - Dumb
Capable - - - - - - - Incapable
Warm - - - - - - - Cold
Evaluation of attributed responsibility
To what extent do you agree with the following statements?
The cause of the event is something that…
reflects more of the driver - - - - - - - reflects more of the situation
was manageable by the driver - - - - - - - was not manageable by the driver
the driver will do again - - - - - - - the driver will not do again
the driver could regulate - - - - - - - the driver could not regulate
others have control over - - - - - - - others have no control over
pertains to the driver - - - - - - - does not pertain to the driver
is stable over time - - - - - - - is variable over time
is under the influence of other factors - - - - - - - is not under the influence of other factors
is something about the driver - - - - - - - is something about other factors
the driver had influence over - - - - - - - the driver had no influence over
is unchangeable - - - - - - - is changeable
other factors can regulate - - - - - - - other factors cannot regulate
Attitudes toward AI
Please rate the extent to which you agree with the following statements (From “Strongly Disagree”
to “Strongly Agree”):
AI is a positive force in the world
AI research should be funded more
AI is generally helpful
There is a need to use AI
Competency in AI knowledge
How would you rate your confidence in the following (From “Extremely Unconfident” to
“Extremely Confident”):
Explaining what artificial intelligence is
Having a conversation about artificial intelligence
My knowledge about artificial intelligence
Appendix C
What is the genre of this music?
Rock
Classical music
Traditional music
Electronic Dance Music (EDM)
Jazz
Country music
What did the instructor in the audio say before the music started?
Music now starts.
Please listen to the following music carefully.
Now music begins.
Now please focus.
There was no instruction.
Assessment of musical value (From “Strongly Disagree” to “Strongly Agree”)
Aesthetic appeal
Many listeners would enjoy this AI-composed music piece.
This AI-composed music piece presented a strong aesthetic appeal.
This AI-composed music piece keeps the listener interested.
Creativity
This AI-composed music had variety and exploration of musical elements.
The AI-composed music piece included very original musical ideas (range, dynamics, timbre,
tempo, texture, rhythm, melody).
The AI-composed music piece included imaginative musical ideas.
Craftsmanship
This AI-composed music had a clear beginning, middle, and end.
This AI-composed music appeared well-organized, not random.
The ending of this AI-composed music felt final.
Expectation-Violation Scale (From “Strongly Disagree” to “Strongly Agree”)
Expectedness
This music was composed just like how people would expect about AI-composed music.
This music was different than what I anticipated about AI-composed music. (reverse coded)
I felt this music was unusual compared to my expectation about AI-composed music. (reverse
coded)
People would not be surprised to know that this music is composed by AI.
This music was close to my expectation about AI-composed music.
Evaluation
Most people would find this AI-composed music enjoyable.
This AI music was very unpleasant. (reverse coded)
This AI music was undesirable. (reverse coded)
This AI music was composed in a way I like.
This AI music was good.
Attitudes toward AI’s creativity (From “Strongly Disagree” to “Strongly Agree”)
AI can be creative on its own.
AI cannot make something new by itself without any human input.
The term "creative" is applicable to AI.
Appendix D
[AI predictor and petty theft]
18-year-old Borden and her friend were arrested and charged with burglary and petty theft for a
kid's bike and a scooter, which were valued at a total of $80.
Compare their crime with a similar one: The previous summer, 41-year-old Vernon Prater was
picked up for shoplifting $86.35 worth of tools from a nearby Home Depot store.
Prater was the more seasoned criminal. He had already been convicted of armed robbery and
attempted armed robbery, for which he served five years in prison, in addition to another armed
robbery charge. Borden had a record, too, but it was for misdemeanors committed when she was
a juvenile.
Yet something odd happened when Borden and Prater were booked into jail: A computer crime
predictor, a computer program with an algorithm that predicts the possibility of subsequent
offenses or crimes, spat out a score predicting the likelihood of each committing a future crime.
Borden — who is black — was rated a high risk (8). Prater — who is white — was rated a low
risk (3).
Two years later, we know the computer crime predictor got it exactly backward. Borden has not
been charged with any new crimes. Prater is serving an eight-year prison term for subsequently
breaking into a warehouse and stealing thousands of dollars’ worth of electronics.
Scores like this — known as risk assessments — are increasingly common in courtrooms across
the nation. They are used to inform decisions about who can be set free at every stage of the
criminal justice system, from assigning bond amounts — as is the case in Fort Lauderdale — to
even more fundamental decisions about defendants’ freedom. In Arizona, Colorado, Delaware,
Kentucky, Louisiana, Oklahoma, Virginia, Washington and Wisconsin, the results of such
assessments are given to judges during criminal sentencing.
[Human predictor and petty theft]
18-year-old Borden and her friend were arrested and charged with burglary and petty theft for a
kid's bike and a scooter sitting unlocked, which were valued at a total of $80.
Compare their crime with a similar one: The previous summer, 41-year-old Vernon Prater was
picked up for shoplifting $86.35 worth of tools from a nearby Home Depot store.
Prater was the more seasoned criminal. He had already been convicted of armed robbery and
attempted armed robbery, for which he served five years in prison, in addition to another armed
robbery charge. Borden had a record, too, but it was for misdemeanors committed when she was
a juvenile.
Yet something odd happened when Borden and Prater were booked into jail: A crime predictor, a person who specializes in predicting the possibility of subsequent offenses or crimes, reported a
score predicting the likelihood of each committing a future crime. Borden — who is black —
was rated a high risk (8). Prater — who is white — was rated a low risk (3).
Two years later, we know the person who rated the score, the crime predictor, got it exactly
backward. Borden has not been charged with any new crimes. Prater is serving an eight-year
prison term for subsequently breaking into a warehouse and stealing thousands of dollars’ worth
of electronics.
Scores like this — known as risk assessments — are increasingly common in courtrooms across
the nation. They are used to inform decisions about who can be set free at every stage of the
criminal justice system, from assigning bond amounts — as is the case in Fort Lauderdale — to
even more fundamental decisions about defendants’ freedom. In Arizona, Colorado, Delaware,
Kentucky, Louisiana, Oklahoma, Virginia, Washington and Wisconsin, the results of such
assessments are given to judges during criminal sentencing.
[AI predictor and high seriousness crime]
29-year-old Williams was arrested and charged with six misdemeanor counts of DUI for hitting a
parked car in a parking lot, leaving the scene of an accident, and resisting arrest without force.
Compare their crime with a similar one: 36-year-old Lugo crashed his Lincoln Navigator into a
Toyota Camry. When a police officer arrived at the scene of the accident, Lugo fell over several
times and an almost-empty bottle of gin was found in his car. He was charged with DUI and with
driving with a suspended license.
Lugo was the more seasoned criminal. He had already been convicted of three previous DUIs (in
1998, 2007, and 2012), and a misdemeanor battery in 2008. Williams had a record, too, but it
was for two misdemeanors committed in 2006 and 2012.
Yet something odd happened when Lugo and Williams were booked into jail: A computer crime
predictor, a computer program with an algorithm that predicts the possibility of subsequent
offenses or crimes, spat out a score predicting the likelihood of each committing a future crime.
Williams — who is black — was rated a medium risk (6). Lugo — who is white — was rated a
very low risk (1).
Soon after, it became clear that the computer crime predictor got it exactly backward. Williams has not
been charged with any new crimes. Lugo was charged with two counts of misdemeanor battery
for domestic violence two days later.
Scores like this — known as risk assessments — are increasingly common in courtrooms across
the nation. They are used to inform decisions about who can be set free at every stage of the
criminal justice system, from assigning bond amounts — as is the case in Fort Lauderdale — to
even more fundamental decisions about defendants’ freedom. In Arizona, Colorado, Delaware,
Kentucky, Louisiana, Oklahoma, Virginia, Washington and Wisconsin, the results of such
assessments are given to judges during criminal sentencing.
[Human predictor and high seriousness crime]
29-year-old Williams was arrested and charged with six misdemeanor counts of DUI for hitting a
parked car in a parking lot, leaving the scene of an accident, and resisting arrest without force.
Compare their crime with a similar one: 36-year-old Lugo crashed his Lincoln Navigator into a
Toyota Camry. When a police officer arrived at the scene of the accident, Lugo fell over several
times and an almost-empty bottle of gin was found in his car. He was charged with DUI and with
driving with a suspended license.
Lugo was the more seasoned criminal. He had already been convicted of three previous DUIs (in
1998, 2007, and 2012), and a misdemeanor battery in 2008. Williams had a record, too, but it
was for two misdemeanors committed in 2006 and 2012.
Yet something odd happened when Lugo and Williams were booked into jail: A crime predictor,
a person who specializes in predicting the possibility of subsequent offenses or crimes, reported
a score predicting the likelihood of each committing a future crime. Williams — who is black —
was rated a medium risk (6). Lugo — who is white — was rated a very low risk (1).
Soon after, it became clear that the person who rated the score, the crime predictor, got it exactly backward.
Williams has not been charged with any new crimes. Lugo was charged with two counts of
misdemeanor battery for domestic violence two days later.
Scores like this — known as risk assessments — are increasingly common in courtrooms across
the nation. They are used to inform decisions about who can be set free at every stage of the
criminal justice system, from assigning bond amounts — as is the case in Fort Lauderdale — to
even more fundamental decisions about defendants’ freedom. In Arizona, Colorado, Delaware,
Kentucky, Louisiana, Oklahoma, Virginia, Washington and Wisconsin, the results of such
assessments are given to judges during criminal sentencing.
Appendix E
Autonomy of the crime predictor (From “Strongly Disagree” to “Strongly Agree”)
Was the crime predictor fully self-controllable when making the decision?
Was the decision intended by the crime predictor?
Is the crime predictor responsible for making the decision?
Responsibility of the crime predictor (From “Strongly Disagree” to “Strongly Agree”)
Is the decision something that reflects a characteristic of the crime predictor?
Was the decision made due to an intrinsic aspect of the crime predictor?
Is the decision something about the crime predictor itself?