Using Virtual Humans During Clinical Interviews:
Examining Pathways to Self-Disclosure
by
Laura Garcia-Cardona
A Dissertation presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the Requirements for the Degree
DOCTOR OF PHILOSOPHY
PSYCHOLOGY
December 2020
Copyright 2020 Laura Garcia
A Computational Design
Love will take us up and down
The empathy training of all kinds
To see and be seen from afar
In different coordinates in space and time
In complete balance and harmony
All the instruments joining at once
Lighting up a wave of events
Each with beauty beyond blind spots
Uniting in crying and laughter
We dance with our feats and masks
Transcending the struggle of the day
To travel, create, and play
Piece by piece mirroring the whole
A lesson to humans and humanoids
For when it's time to fly back
from our flow state of mind
Our reality will expand
And through empathic art
AMOR FATI will shine
Acknowledgments
I am extremely grateful for:
• Gerald Davison -- For his support and guidance, but most importantly, for giving me the
opportunity to express my curiosity for technology and psychology in my research. For
adopting me when I was an academic orphan and letting me share in his lineage of
brilliant ideas and minds.
• Albert “Skip” Rizzo -- For opening my mind to the world of therapeutic VR and, with
one personable phone call, clarifying the path of my entire research and professional life.
For being an inspiring and admirable human being whose steps I would be honored to
follow.
• Jonathan Gratch -- For his commitment and enthusiasm in providing me with the
technology and the academic tools to complete this dissertation project. For his research,
our stimulating meetings, and the thoughtful feedback that was vital in the development
of my perspectives.
• Gale Lucas -- For her eagerness to share with me her passion for research and her
interesting theories and ideas. For laying down the groundwork for my project and being
willing to support me with my statistical analyses.
• Mohammad Soleymani -- For his enthusiasm and willingness to assist me in the
completion of this project.
• David Schwartz & Morteza Dehghani -- For supporting my research efforts by being
part of my committee and guiding this project's incubation process.
• USC ICT Community -- For their continued research and development efforts that not
only contribute meaningfully to the field but that also brought Ellie to life.
• Research assistants -- For their dedication and hard work in collecting data for this
dissertation project. A special thanks to Alexis DeFendis and Victoria Finnegan for being
an integral part of the process throughout and granting me the honor to offer them
mentorship and friendship.
• My cohort and Lab-mates -- For the invaluable emotional support of Sohyun Han,
Hannah Khoddam, Gabby Lewine, and Luiza Mali. For Justin Hummer and Sylvanna
Vargas who contributed to this dissertation with insightful perspectives and valuable
feedback. For the many laughs and tears we shared.
• Undergraduate advisors -- For their encouragement and trust that I could one day attain
this degree.
• Marientina Gotsis and Vangelis Lympouridis -- For their mentorship, guidance, and
friendship. For all the creative projects, meetings, and world-building. You both showed
me that work can be about learning, creating, and playing.
• Family and Friends -- For their constant support and for sharing with me the struggles
and joys of my PhD time. We worked, we laughed, we traveled, we suffered, but above
all, we showed love to each other.
Throughout this document, I use the pronoun “we” to acknowledge all of these and the many
other individuals without whom this research would have never been possible. Thank you all for
making my past seven years the most rewarding academic experience of my lifetime.
Table of Contents

Dedication
Acknowledgments
List of Tables
List of Figures
Abstract
Introduction
    Virtual Human Technology and Self-disclosure
        Anonymity and Self-disclosure
        Rapport-building and Self-disclosure
        Self-disclosure with Avatars vs. Agents
    Rapport and Self-disclosure with Virtual Humans
    Social Presence and Self-disclosure with Virtual Humans
    Overview of Study
Methods
    Participants and Recruitment
    Design and Procedure
    Measures
Results
    Correlations
    Main Effects of Condition on Self-Disclosure
    Moderation Analyses
        Rapport
        Social Presence
        Fear of Negative Evaluation
Discussion
    Virtual Humans and Depth of Self-disclosure
    Virtual Humans and Amount of Self-disclosure
    Current Study vs. Previous Research with Virtual Human Interviewers
    Overall Self-disclosure Patterns with Virtual Humans
    Theoretical Implications
    Implementation and Design Implications
    Limitations
    Conclusion
References
Tables
Figures
Appendices
    Appendix A
    Appendix B
List of Tables

Table 1
Table 2
Table 3
Table 4
Table 5
Table 6
Table 7
Table 8
Table 9
Table 10
Table 11
Table 12
Table 13
Table 14
Table 15
Table 16
List of Figures

Figure 1
Figure 2
Figure 3
Figure 4
Figure 5
Figure 6
Figure 7
Abstract
Virtual human interviewers have the potential to reduce treatment-seeking barriers and
transform the mental health screening process for college students. Our research examined the
potential use of virtual humans to increase self-disclosure in clinical assessment interviews.
Virtual humans are digital characters controlled either by a human (an avatar) or by a computer
program (an agent). We sought to understand how virtual humans may impact self-disclosure by
examining two conditions of anonymity: 1) perceived invisibility and 2) the belief that one is interacting with an
agent vs. an avatar. Furthermore, we assessed whether conditions of anonymity moderated the
individual relationships between rapport (the mutual attention, closeness, and coordination
experienced with another being), social presence (an individual’s sense of awareness that they
are interacting with another being), and fear of negative evaluation and our outcome measures of
self-disclosure. Participants (N=144) were randomized into one of three conditions. They were
led to believe that they would interact with either (1) an avatar using their real-time audio and
video, (2) an avatar using their real-time audio only, or (3) a fully automated agent. In reality,
all participants in the three conditions interacted with an agent. Our results demonstrated that
individuals reported self-disclosing more intimate information to an avatar when they believed
the human could only hear them compared to when they believed the human could see and hear
them. No significant differences were found for reported intimacy of self-disclosure between the
agent and either of the avatar conditions. Additionally, no differences were present in the amount
of self-disclosure across our conditions. Moderation analyses indicated that greater rapport,
social presence, and fear of negative evaluation were associated with greater amounts of self-
disclosure when participants believed they were interacting with the agent only. These findings
suggest major differences in the function and the process of eliciting self-disclosure with virtual
human technologies. We describe theoretical and design implications of our findings. Our study
contributes to the nascent but growing body of research regarding the effective design and
implementation of virtual human interviewers.
Keywords: technology-based assessments, virtual humans, avatar, agent, clinical
assessments, clinical interviews, self-disclosure, rapport, social presence.
Using Virtual Humans During Clinical Interviews: Examining Pathways to Self-Disclosure
Introduction
College students are at great risk of experiencing mental health concerns (Eisenberg et
al., 2009; Saleem et al., 2013). In the latest National College Health Assessment survey, 23% of
college students reported an ongoing diagnosis of anxiety and 19% reported a diagnosis of
depression (American College Health Association, 2020), both rates reflecting an ongoing trend
of increasing mental health issues in this population. Specifically, Duffy, Twenge and Joiner
(2019) compared data from two national surveys on mental health from 2011-2012 to 2017-2018
and determined that anxiety rates increased by 24% and depression rates by 34%. Rates of
intentional self-injury increased by 47%, and those of suicidal ideation and suicide-related
injuries increased by 76% and 58%, respectively. It is likely due to this increasing psychological
distress that suicide is now the second leading cause of death in this population (Centers for
Disease Control and Prevention, 2018). Alarmingly, from 2010 to 2011, only approximately
36% of college students with a mental health diagnosis had sought professional help (Eisenberg
et al., 2011). Though rates of past-year treatment appear to be increasing for the general college
population (i.e., from 18.7% in 2007 to 33.8% in 2017; Lipson et al., 2018), continuing to
address the gap between treatment need and treatment-seeking can have beneficial implications
for college students.
One way to reduce the treatment gap is by addressing negative attitudes towards
treatment-seeking. Multiple factors have been shown to contribute to these attitudes in college
students, including a lack of perceived utility, a tendency to conceal information, and perceived
risks of emotional disclosure (Vogel & Wester, 2003). Anticipated risks of self-disclosure may
include potential rejection and/or expected feelings of shame or embarrassment (Baxter &
Montgomery, 1996). A recent meta-analysis demonstrated a consistent negative association
between the anticipated risks of self-disclosure and help-seeking intentions in college students
(Li et al., 2014). Since perceived risks of self-disclosure not only impact college students’
attitudes but their intentions to seek treatment as well, reducing these risks as barriers for
treatment is of high importance. Even if treatment seeking were to increase, mental health care
service structures may become strained and unable to respond to the increasing demands (Lipson
et al., 2018). Therefore, another way to reduce the gap is by developing avenues to identify
students’ symptoms and treatment needs in order to adequately distribute resources of care and
maximize gains. Screening measures alone could be used to identify college students in distress
and those who are likely to seek treatment and respond to outreach efforts (Nordberg et al.,
2013). Other beneficial and complementary screening processes include initial clinical
interviews, where self-disclosure of emotional content and sensitive information to a provider is
essential (Keith-Lucas, 1994). Since these interviews may trigger a fear of self-disclosure, the
development of clinical interviewing methods that can reduce perceptions of risk is needed.
Given that college students are highly familiar with computer-mediated communication
(Vrocharidou & Efthymiou, 2012), the use of technology has been proposed as a way to address
the increasing mental health needs of this population (Eisenberg, 2019). In a web-based survey
about attitudes toward online mental health services, 60% of college students endorsed
willingness to use online services (Dunbar et al., 2018). This willingness and openness to
technology-based mental health services opens the door for the possibility of using technologies
to reduce the perceived risks of self-disclosure during clinical interviews. Our study examined
the pathways through which the use of virtual humans could enhance the clinical interviewing
process and potentially provide effective mental health care screenings to college students.
Virtual Human Technology and Self-disclosure
Virtual humans are virtual representations of humans or digital characters that portray
human-like traits and behaviors. They can be categorized based on whether they are controlled
by a human (referred to as avatars) or by a computer program (referred to as agents). Virtual
humans may reduce perceived risks of self-disclosure by 1) providing anonymity and 2)
preserving the feelings of rapport relevant in clinical interviews (Lucas et al., 2014). However,
only a few studies have examined the differences between avatars and agents in eliciting self-disclosure. The following sections address relevant background information.
Anonymity and Self-disclosure
Anonymity refers to the degree of identifiability: a continuum along which an individual can
perceive themselves to be fully identifiable or fully unidentifiable (Anonymous, 1998). Though theories
explaining social outcomes of computer-mediated communication (CMC) are numerous
(Walther, 2011) and out of the scope of the current review, there is consensus that CMC may
afford individuals a higher sense of anonymity, which then translates into behaviors that defy the
social norms of face-to-face communication (Suler, 2004). Current evidence regarding the
impact of the anonymity afforded by computer-administered assessments on self-disclosure is
inconsistent. Some meta-analyses have indicated a positive relationship between the anonymity
afforded in CMC and self-disclosure, such that computer-administered assessments led to
increases in self-disclosure across multiple studies compared to face-to-face interviews,
especially when disclosing sensitive information (Clark-Gordon et al., 2019; Weisband &
Kiesler, 1996). Other reviews, however, have demonstrated that self-disclosure is greater in face-to-face interactions than in CMC (Kim & Dindia, 2011); in particular, depth of self-disclosure was
found to be greater in face-to-face interactions across studies (Ruppel et al., 2017). One potential
reason for these mixed findings is the fact that previous research on anonymity and computer-
administered assessments has not focused on the degree to which individuals perceive
themselves to be anonymous (Clark-Gordon et al., 2019). This is especially the case for one type
of anonymity: physical anonymity, or invisibility, which describes situations when a message
source lacks a physical presence (Anonymous, 1998) and often a type of anonymity afforded by
CMC (e.g. text-based CMC, phone calls, etc.). This anonymity could be the perception of the
self as invisible--perceived invisibility--or the perception that the source of a message is
invisible--visual anonymity (Anonymous, 1998; Joinson, 2001). For example, individuals who
work towards a common goal while interacting via a text-chat have been shown to self-disclose
more than those who interact via video conferencing (Joinson, 2001). This effect may be
different during interviews, as there might not be a salient shared goal of the interaction. In fact,
some research has suggested that though individuals may perceive themselves as invisible during
telephone interviews, their interviewers are visually anonymous as well, and this may lead
interviewees to be less cooperative and more suspicious of the interviewer (Holbrook et al.,
2003). In sum, previous research does not provide a full understanding of the unique effect of
perceived invisibility and of visual anonymity on self-disclosure. This distinction is relevant, as
recent research has demonstrated that the perceived lack of eye contact afforded by invisibility is
the mechanism by which anonymous assessments may reduce inhibitions (Lapidot-Lefler &
Barak, 2012).
Indeed, one way in which avatars may facilitate self-disclosure is by establishing
invisibility during interactions between humans. Kang and Gratch (2010) compared levels of
self-disclosure towards humans under three different conditions: a raw video, a degraded video,
and an avatar. Participants were anonymized in the same way as their corresponding condition.
Therefore, they were either represented in raw video (i.e., able to readily be seen), in a degraded
video (i.e., edge-detector filtered view of their face), or by an avatar. Participants were asked a
set of 10 questions designed to trigger intimate self-disclosure, including “What have you done
in your life that you feel most guilty about?” and “What is your most common sexual fantasy?”
Based on inter-rater codings, results of this study demonstrated that individuals high in social
anxiety expressed a larger quantity of ideas and disclosed more intimate information about the
self with the avatar compared to the other conditions. Furthermore, the avatar also triggered
higher quantities of expressed words than the other two conditions. When interacting through
avatars, participants’ identity and that of their interaction partner were completely disguised by
an avatar. Therefore, the invisibility afforded by the avatar might have led to more self-
disclosure.
Agents, on the other hand, represent a form of human-computer interaction in which
invisibility is implicit, as there is no human observer from whom to be invisible. Therefore, anonymity
with agents is related to the actual absence of an interviewer, which has been shown to
increase self-disclosure of sensitive behaviors (Chang & Krosnick, 2009, 2010; Ye et al., 2011).
This anonymity effect was demonstrated to be stronger in computer-administered assessments as
opposed to paper-and-pencil assessments (Gnambs & Kaspar, 2015). Therefore, agents may
increase self-disclosure by being perceived as impartial and anonymous (Joinson, 1999; Richman
et al., 1999; Trau et al., 2013), and thereby trigger less fear of negative evaluation (Lucas et
al., 2014).
Rapport-building and Self-disclosure
Though computer-assisted interviewing formats may provide the benefits of greater
anonymity, individuals may feel less socially connected (DeVault et al., 2014; Gratch et al.,
2007, 2013). Social connection, or rapport, refers to the mutual attention, closeness, and
coordination experienced with another being (Tickle-Degnen & Rosenthal,1990). Development
of rapport has been shown to be key in eliciting self-disclosure (Hall et al., 1995; Miller et al.,
1983) and positive relational outcomes in clinical contexts (Miller, 2019). Only until recently,
computerized assessments have been able to incorporate rapport-building techniques. For
example, avatar systems are interactive in nature and often allow for verbal and nonverbal
communication in a human-like manner (Bente et al., 2008). Therefore, interactive avatars may
foster feelings of social presence (i.e. an individual’s sense of awareness that they are interacting
with another being; Fabri et al., 2007) and rapport (Gratch et al., 2006). Similarly, with the
advancement of automatic conversational speech and gesture abilities, agents are now able to foster
social connections and elicit social behaviors during interactions (Eyssel & Hegel, 2012; Reeves
& Nass, 1996). In fact, agents that behave socially (e.g. demonstrating turn-taking and nodding
responses) have been shown to elicit feelings of rapport (Gratch et al., 2013; Gratch et al., 2007;
Kang & Gratch, 2010). Greater rapport with agents, in turn, may increase the self-disclosure of
intimate and private information (Dijkstra, 1987; Gratch et al., 2007, 2013). Therefore, agents’
pre-programmed rapport-building capabilities may allow them to facilitate self-disclosure in
therapeutic contexts where human interviewers might be impractical.
Importantly, an agent’s ability to develop rapport might give it an advantage over other
widely-used computer-administered assessments (Lucas et al., 2017; Zalake & Lok, 2018).
Lucas et al. (2017) examined whether virtual agents increased self-disclosure of mental health
symptoms by comparing active-duty military members’ responses across three different
assessments: the official Post-deployment Health Assessment (PDHA), an anonymized version
of the PDHA, and similar questions provided by a virtual agent. Results indicated that
individuals endorsed more symptoms when asked by the agent than in the official or anonymized
PDHA. Researchers theorized that the rapport-building abilities of the agent triggered greater
disclosure and symptom-endorsement. Though there is a possibility that these outcomes were
due to the fact that the agent presented open-ended questions rather than the forced-choice
questions from the PDHA versions, it could be argued that open-ended questions lead to more
rapport than closed-ended questions (Patton, 1990). The Lucas et al. (2017) study supports the
notion that beyond the perception of anonymity or unidentifiability in an assessment, an agent’s
combination of a lack of an interviewer and heightened rapport-building may promote the
greatest self-disclosure. Interestingly, the perception of interacting with an agent alone may
impact how individuals behave socially during interactions (Fox et al., 2015), including their
patterns of self-disclosure (Lucas et al., 2014).
Self-disclosure with Avatars vs. Agents
Agents have been shown to elicit greater self-disclosure compared to avatars (Lucas et
al., 2014). Lucas et al. (2014) examined the effect of using virtual humans during a semi-
structured screening interview. This interview entailed a wide range of questions to gather
information about clinical symptoms while potentially eliciting feelings of rapport. In their study,
military service members were led to believe that they were interacting with an agent or with an
avatar. In reality, some of the individuals who believed they were interacting with an agent were
instead interacting with an avatar. Participants who believed they were interacting with the agent
showed a greater willingness to disclose sensitive information than participants who believed
they were interacting with an avatar. These participants also showed more intense expressions
of sadness and reported less fear of negative evaluation and impression management. These
findings held irrespective of whether participants were actually interacting with an agent or an avatar.
It is likely that when individuals believed there was no interviewer in the background, they
perceived fewer risks of self-disclosure. Thus, this study demonstrated that the mere belief that
an individual is interacting with an agent can have an impact on self-disclosure. Conversely, it is
possible that the differences in self-disclosure in the Lucas et al. (2014) study might also be
explained by invisibility. Participants who believed they were interacting with the avatar had the
perception that a human was observing them visually and hearing their responses during the
interview. On the other hand, participants in the agent condition were led to believe that no
human was observing them at the moment of the interview (i.e., invisibility), but that their
recordings would be accessed by researchers at a later time. Thus, all participants in this study
were aware that their information was going to be eventually observed or even judged by
humans. Perhaps, in the moment of the interaction, individuals in the agent condition
experienced perceived invisibility, and in turn self-disclosed more intimately towards the agent.
Therefore, to better understand how avatars and agents independently impact self-disclosure, it is
imperative to disentangle the effect of believing one is interacting with an agent from that of mere
invisibility on self-disclosure.
The aforementioned research suggests that while invisibility may promote self-disclosure
with avatars, the lack of an interviewer may promote self-disclosure with agents. Furthermore,
the independent effects of perceived invisibility and of the belief that one is interacting with an agent on
self-disclosure have not been addressed in the past. It was our goal to understand how virtual
humans may elicit self-disclosure during clinical interviews. Therefore, we sought to expand
previous literature by examining two major conditions of anonymity that might explain virtual
humans’ effect on self-disclosure: an interviewee’s perceived invisibility and whether the
interviewer is perceived to be an agent. It could be argued that these conditions of anonymity are
static perceptions that shape the expectations of an interaction from beginning to end (i.e. an
individual will feel invisible for the entire duration of an interview). Regarding rapport-building,
the literature shows that both avatars and agents can facilitate a perceived social connection.
However, this experience of rapport might be more complex, flexible, and fluid than conditions
of anonymity during an interaction. In fact, rapport-building behaviors elicit experiences of
connection that may strongly relate to self-disclosure across all conditions of anonymity (e.g.
face-to-face, audio-based, etc.). In the case of virtual humans, rapport-building capabilities may
trigger interview processes such as rapport and social presence that may lower the perceived risks of self-disclosure during clinical assessments. Furthermore, given that agents have been shown to
trigger less fear of negative evaluation than avatars, an individual’s tendency to fear negative
evaluation itself may be more strongly related to self-disclosure with agents than with avatars.
Understanding how relevant interview processes and individual characteristics relate to self-
disclosure with avatars and agents not only provides insights into clinical interviews with virtual
humans, but also those conducted by human interviewers in general.
Rapport and Self-disclosure with Virtual Humans
Because of the recent use of virtual humans for clinical interviewing, research addressing
the relationship between rapport and self-disclosure in this area is limited. However, current
evidence demonstrates that avatars and agents may facilitate rapport differently. Participants in a
study described in DeVault et al. (2014) and Rizzo et al. (2016a) either interacted with an agent,
an avatar, or face-to-face with a clinician during a semi-structured clinical interview. In their
experiment, individuals in the avatar condition were led to believe that they were interacting with
an agent instead of an avatar. Their results suggested that participants interacting with the avatar
in reality reported more rapport than individuals interacting with the actual agent or face-to-face
with the clinician. In other words, individuals experienced greater rapport when their perceived
agent was an avatar in reality compared to a fully automated agent. Interestingly, the experiences
of rapport were no different between the actual agent and face-to-face conditions. It is
conceivable that the avatar condition in this study was better able to elicit rapport. Assuming a
strong relationship between rapport and intimacy of self-disclosure, we should expect self-
disclosure to be greater towards the avatar--even if perceived as an agent--than the actual agent.
However, in the Lucas et al. (2014) study, there were no differences in self-disclosure between
individuals who interacted with an agent or an avatar when they were both perceived to be an
agent. Therefore, greater experiences of rapport might not readily translate into more willingness
to self-disclose when comparing an agent with an avatar. These findings could indicate that the
relationship between rapport and self-disclosure may vary in strength or direction depending on
an avatar or agent interaction. For this reason, we explored the relationship between rapport and
self-disclosure under conditions of anonymity with virtual humans.
Social Presence and Self-disclosure with Virtual Humans
It is well known that humans are able to experience social presence when interacting with
virtual humans (Kim & Welch, 2015). Greater behavioral realism or more social cues have been
found to increase this experience of social presence (Guadagno et al., 2007; von der Pütten et al.,
2010). Interestingly, the mere belief to be interacting with an avatar has been shown to lead to
greater feelings of social presence than the belief to be interacting with an agent (Appel et al.,
2012). Greater social presence with an avatar may also translate into greater fears of evaluation
and less disclosure. Conversely, greater social presence with an agent may be perceived as more
positive, as there is no risk of human judgment (Oh et al., 2018). This is consistent with Kang
and Gratch (2014), who showed that an agent's behavioral realism was related to greater self-disclosure of attitudes, desires, or values. It is possible that higher behavioral realism triggers
greater social presence and promotes more self-disclosure with agents. Even though the direct
relationship between social presence and self-disclosure with virtual humans has not been
explored, social presence may contribute to both greater self-disclosure with agents and lower
self-disclosure with avatars (Sacau et al., 2008). In other words, the relationship between social
presence and self-disclosure may also vary depending on anonymity with virtual humans.
Therefore, we explored the relationship between social presence and self-disclosure at different
levels of anonymity, vis-à-vis the condition manipulations.
Overview of Study
In summary, the purpose of the current study was to examine the conditions in which the
use of virtual humans could increase self-disclosure among college students in a semi-structured
clinical interview. Our first aim was to better understand how virtual humans increase self-
disclosure by disentangling the effect of two different conditions of anonymity: 1) perceived
invisibility and 2) the belief to be interacting with an agent. Based on previous research, we
hypothesized that both the belief to be interacting with an agent and invisibility would result in
greater levels of self-disclosure. We speculated that virtual humans’ differential conditions of
anonymity might impact the relationship between interview processes, such as rapport and social
presence, and self-disclosure. Furthermore, it is possible that individuals’ general tendencies to
fear negative evaluation may also relate differently to self-disclosure depending on anonymity.
Therefore, our second aim was to explore whether the relationship between each of these factors
and self-disclosure differed across conditions of anonymity. In other words, we examined the
moderating role of anonymity in the relationship between interview-related factors (i.e. rapport,
social presence, and fear of negative evaluation) and self-disclosure during clinical assessments
with virtual humans. Understanding these relationships can help elucidate how interacting with
these interviewing systems may increase self-disclosure. Considering the exploratory nature of
this second aim, we did not make any specific hypotheses.
To accomplish our aims, participants in our study were randomized into one of three
conditions. Participants were led to believe that they would participate in a semi-structured
clinical interview with one of the following: 1) a virtual avatar tele-operated by a human with access to their real-time audio and video, 2) a virtual avatar tele-operated by a human with access to their real-time audio only, or 3) a fully automated virtual agent. To maximize experimental control and
ensure that we were assessing perceived invisibility only, all participants interacted with the
agent in reality. In other words, in the avatar conditions, participants were led to believe that the
agent was an avatar. Within those two avatar conditions, participants were also led to believe that
a human could both hear and see them or that the human could only hear them. Therefore, we
modified the mere belief to be interacting with an avatar under different perceived visibilities
(Figure 1). The methodology to induce different beliefs with the same interacting system was
previously used in Lucas et al. (2014), where it was demonstrated that the mere belief to be
interacting with an agent or avatar was sufficient to influence levels of self-disclosure.
Methods
Participants and recruitment
A total of 169 college students majoring in Psychology at the University of Southern
California participated in this current study. Participants were recruited through the USC subject
pool and received course credit as compensation. Data collection for this study lasted a total of
two years. Our final sample, following exclusions due to a manipulation check (described below)
and technology difficulties, was comprised of 144 participants. This final sample had a mean age
of 20.10 years and was composed of females (68%) and males (31%). The ethnic breakdown of this sample was 38% White, 36% Asian, 16% Hispanic, 4% Black or African American, <1% Native Hawaiian or Other Pacific Islander, and 4% Other.
Design and Procedure
Following consent to participate in the study, participants completed an online baseline
assessment with multiple questionnaires and a demographic form. They were then scheduled to
participate in the lab portion of our study where they were randomized to one of the three
conditions. Once randomized, participants received a description of the system they would be
interacting with. Conditions were based on the manipulation of participants’ belief to be
interacting with the agent or either of the virtual avatars. Virtual avatars were described as tele-
operated by a psychotherapist with access to video and audio or access to audio only.
Irrespective of the manipulated belief, all participants were alone in the room while interacting
with the agent platform. The following are the condition descriptions:
Virtual Agent
The agent platform used was “SimSensei”, a computer-driven virtual human system that uses
automatic behavioral analysis, voice recognition, and programmed scripts to interact with
humans (Figure 2; Appendix A). SimSensei is a computer system able to read non-verbal
information from its users to infer their internal states and react based on a set of programmed
responses. The software used for this study was identical to the software previously used for the
Lucas et al. (2014) study and described in depth in DeVault et al. (2014). However, due to
technical issues, smile mimicry--one element of the agent’s rapport-building behaviors--had to
be disabled across all conditions. Participants in this condition were told they would interact with
a computer / virtual human and were expected to believe that their agent was automated and
controlled by a computer program. They were also told that the audio of their interaction would
be reviewed by a therapist to ensure the interview had been done correctly and for transcription
purposes. The following is the manipulation prompt for this condition.
Agent Condition. “For the following section of the study, you will be interviewed by our
virtual human. Our virtual human uses artificial intelligence to have a conversation with you.
The system gets audio and visual input from you. It uses a Speech Recognition tool to
understand what you’re saying, then uses a complex series of equations to choose the best way to
respond. Even though your interview will be video recorded, only the audio will be analyzed
later by a therapist for transcription purposes. The therapist will not have access to any of your
identifiable information, therefore preserving your anonymity.”
Virtual Avatar Conditions
The avatar platforms were described as software through which a therapist controlled an avatar
through a background computer interface. Participants interacted with a 3D female virtual human
named “Ellie”. Participants in these conditions were told that, through the avatar computer
interface, the therapist had access to the real-time video and/or audio of the interaction and
engaged with the participant by sending pre-recorded verbal responses and non-verbal responses
through a selection menu of behavioral expressions. They were also told that their interaction
would be video recorded for transcription purposes only. As indicated above, in reality, Ellie’s
responses were driven by the virtual agent platform. The following are the manipulation prompts
for each of these conditions.
Avatar with Video Condition (Avatar Video). “Our virtual human is designed to be
like a puppet. Our computer software allows a therapist to have a real-time conversation with
you through our virtual human. My colleague will be sitting in the adjacent room and will be
able to see and hear from the feed transmitted through your webcam and microphone. The
therapist has access to a set of pre-recorded questions and behavioral responses that will be
mirrored by Ellie to have a conversation with you. The therapist has no access to any of your
identifiable information, therefore preserving your anonymity.”
Avatar with Audio-only Condition (Avatar Audio). “Our virtual human is designed to
be like a puppet. Our computer software allows our therapist to have a real-time conversation
with you through our virtual human. My colleague will be sitting in the adjacent room and will
ONLY be able to hear your responses from the feed transmitted through your microphone. The
therapist has access to a set of pre-recorded questions and behavioral responses that will be
mirrored by Ellie to have a conversation with you. Even though your interview will be video
recorded, only the audio will be analyzed later for transcription purposes. The therapist has no
access to any of your identifiable information, therefore preserving your anonymity.”
Immediately following the interview, participants in all three conditions completed a
manipulation check. They were asked to select either “computer agent” or “therapist with access
to video and audio” or “therapist with access to audio only” depending on which condition they
believed to have participated in. Those who failed this manipulation check were excluded from
the current study (n = 17). Participants who experienced computer system malfunctions were
also excluded (n = 8). Final condition sample sizes were as follows: avatar video (n = 49), avatar
audio (n = 44) and agent (n = 51). Following this check, participants completed another round of
questionnaires while in the lab and were debriefed on the details of the study, as well as the
actual condition they had participated in (i.e., a virtual agent platform).
Measures
Perceived Self-disclosure
The modified version of the Revised Self-Disclosure Scale (RSDS; Wheeless & Grotz, 1978) was used to measure two dimensions of reported self-disclosure: depth and breadth
(Leung, 2002). Depth is often referred to as the degree of intimacy of a self-disclosure. Intimate
self-disclosure may include the discussion of private information about the self, including
emotions such as fear, shame, regret, and/or admission of behaviors or traits that an individual
would rarely share with others. Depth was assessed with the Control of Depth of Self-Disclosure
subscale (7 items) of the RSDS. A sample item is: “Once I got started, I intimately and fully
revealed myself in my self-disclosures.” Breadth refers to the quantity, amount, or duration of
information disclosed and was measured with the Amount of Self-Disclosure subscale (3 items)
of the RSDS. A sample item is the reverse coded version of: “My statements of my feelings were
usually brief.” Participants responded to these subscales on a 7-point Likert scale (1 = strongly
disagree to 7 = strongly agree). In the current study, the depth subscale demonstrated good reliability (Cronbach’s α = .84), while the reliability of the amount subscale was more modest (Cronbach’s α = .62).
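The reliability coefficients reported throughout this section follow the standard Cronbach's alpha formula: α = k/(k−1) · (1 − Σ item variances / variance of the total score). A minimal sketch of the computation, using hypothetical Likert responses rather than the study's actual data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) response matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total score))
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 7-point Likert responses from 5 respondents to a 3-item subscale.
demo = np.array([
    [5, 6, 5],
    [2, 3, 2],
    [4, 4, 5],
    [6, 7, 6],
    [3, 3, 4],
])
print(round(cronbach_alpha(demo), 2))  # → 0.96
```

Values near 1 indicate that the items covary strongly relative to their individual variances, which is the sense in which the depth subscale (α = .84) was more internally consistent than the amount subscale (α = .62).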
Word Count
The Word Count (WC) descriptor variable from the computerized text analysis method
Linguistic Inquiry and Word Count (LIWC) was used as a proxy for self-disclosure (Pennebaker
et al., 2015). We used this measure under the assumption that greater frequency of words would
be a manifestation of greater amount of self-disclosure.
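As a rough illustration, the WC descriptor is essentially a token count over the interview transcript. Assuming plain-text transcripts, whitespace tokenization approximates it (LIWC applies its own tokenization rules, so this is only a sketch):

```python
def word_count(transcript: str) -> int:
    """Approximate LIWC's WC descriptor: number of whitespace-delimited tokens."""
    return len(transcript.split())

print(word_count("I guess I have been feeling a bit stressed lately."))  # → 10
```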
Rapport
Rapport was assessed with the Rapport and Connection subscale of the 29-item Rapport
Scale (von der Pütten et al., 2010). This subscale is composed of 10 items responded to on a 7-point Likert scale (1 = strongly disagree to 7 = strongly agree). This scale demonstrated
modest internal consistency in the current study (Cronbach’s α = .61). Sample items include:
“I think the listener and I established rapport” and “I felt I had a connection with the listener.”
Social Presence
Social presence was measured with the Co-Presence Questionnaire (BCPQ; Bailenson et al., 2001). The scale is composed of five items responded to
on a 7-point Likert scale (1= strongly disagree to 7= strongly agree). Internal consistency of this
measure was adequate in the current study (Cronbach’s α = .81). Sample items include: “I felt
that the person was watching me and was aware of my presence” and “The person appeared to be
sentient (conscious and alive) to me.”
Fear of negative evaluation
Fear of negative evaluation was assessed with the 8-item version of the Brief Fear of
Negative Evaluation (BFNE-S) scale (Leary, 1983; Rodebaugh et al., 2004; Weeks et al., 2005).
Items are rated on a 5-point Likert scale (1 = not at all characteristic of me to 5 = extremely
characteristic of me). In this study, the scale demonstrated adequate internal consistency
(Cronbach’s α = .94). A sample item is: “When I am talking to someone, I worry about what they
may be thinking about me.”
Results
Correlations
Means and standard deviations of all study variables are presented in Table 1.
Independent-samples t-tests on each of these variables and on demographic information, with Bonferroni correction, did not reveal any significant differences between groups.
Exploratory correlation analyses were conducted with the entire sample (Table 1) as well as with
each condition separately (Table 2) to examine associations not assessed in our moderation
analyses. Results of these Pearson correlations indicated that reported amount of self-disclosure
was not associated with depth of self-disclosure but was associated with total interaction word
count in the entire sample (r(142) = .43, p < .01), as well as in each independent condition
(avatar video: r(47) = .30, p = .02; avatar audio: r(42) = .51, p < .001; agent: r(49) = .53, p <
.001). These findings are promising in that participants’ subjective reports of the amount of their self-disclosure were corroborated by the objective quantity of words they expressed during the interview. There was a trend demonstrating that a
higher word count was associated with greater reported depth of self-disclosure in the agent
condition only (r(49) = .27, p = .06). Results of Fisher's z-transformation demonstrated that this
relationship was significantly stronger in the agent condition compared to the avatar video
condition [z = 2.21, p = .01 (one-tailed)], suggesting that the relationship between depth and word
count is likely influenced by both invisibility and the belief to be interacting with an agent.
Furthermore, rapport was positively associated with social presence in the avatar audio (r(42) =
.37, p = .02) and agent (r(49) = .54, p < .001) conditions. This relationship was significantly
stronger in the agent condition compared to the avatar video condition [z = 2.25, p < .05 (one-tailed)]. Based on these correlations, it appears that the relationship between rapport and social
presence might also be influenced by both perceptions of invisibility and the belief to be
interacting with an agent.
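The Fisher z-tests used above compare two independent correlations: each r is transformed with the inverse hyperbolic tangent, and the difference is divided by the pooled standard error sqrt(1/(n1−3) + 1/(n2−3)). A sketch of the computation, using the reported agent correlation (r = .27, n = 51) and a purely hypothetical avatar-video value, since that r is not reported in the text:

```python
import math

def fisher_z_diff(r1: float, n1: int, r2: float, n2: int) -> float:
    """z statistic for the difference between two independent correlations."""
    z1, z2 = math.atanh(r1), math.atanh(r2)      # Fisher r-to-z transform
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))  # pooled standard error
    return (z1 - z2) / se

# Agent condition (reported) vs. avatar video condition (r assumed here
# purely for illustration).
z = fisher_z_diff(0.27, 51, -0.18, 49)
print(round(z, 2))  # → 2.22
```

A one-tailed p-value then follows from the standard normal distribution of z under the null hypothesis of equal population correlations.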
Main Effects of Condition on Self-Disclosure
Our first aim was to explore differences in depth and amount of self-disclosure based on
differing perceptions of visibility and the belief to be interacting with an agent. Traditional
methods for testing differences across groups (e.g. ANOVA F-test) are often limited in
accounting for outliers, non-normality, and heteroscedasticity--suffering from low power and
increased likelihood of Type I errors (Wilcox & Keselman, 2012). Robust methods comparing
trimmed means have been shown to address these limitations. Therefore, the significance of our main effects was evaluated with a percentile bootstrap method based on the results of Liu and Singh (1997) and described by Wilcox (2017). Pairwise comparisons were evaluated with the bootstrap version of Yuen's test for trimmed means. All analyses were
performed with the software R using functions outlined in the R package WRS (Mair & Wilcox,
2019).
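In outline, the robust procedure compares 20% trimmed means between groups and judges significance from a percentile-bootstrap confidence interval on their difference. The following Python sketch only mirrors the logic of the R/WRS analysis (it is not the WRS implementation, and the data below are simulated, not the study's):

```python
import numpy as np

def trimmed_mean(a, trim=0.2):
    """Mean after discarding the lowest and highest `trim` fraction of values."""
    a = np.sort(np.asarray(a, dtype=float))
    g = int(np.floor(trim * a.size))
    return a[g:a.size - g].mean()

def bootstrap_trimmed_diff(x, y, trim=0.2, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for the difference in trimmed means of x and y.

    The difference is declared significant at level `alpha` when the CI
    excludes zero.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, size=x.size, replace=True)  # resample each group
        yb = rng.choice(y, size=y.size, replace=True)  # independently
        diffs[b] = trimmed_mean(xb, trim) - trimmed_mean(yb, trim)
    lo, hi = np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return trimmed_mean(x, trim) - trimmed_mean(y, trim), (lo, hi)

# Simulated depth-of-disclosure scores; group sizes mirror the avatar audio
# (n = 44) and avatar video (n = 49) conditions.
rng = np.random.default_rng(1)
audio = rng.normal(32, 5, 44)
video = rng.normal(24, 5, 49)
diff, (lo, hi) = bootstrap_trimmed_diff(audio, video)
print(diff, (lo, hi))
```

Trimming makes the location estimate resistant to outliers, and resampling avoids the normality and equal-variance assumptions of the classical F-test.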
There were significant main effects of condition on self-disclosure depth. Specifically,
participants in the avatar audio condition reported significantly greater depth of self-disclosure
than participants in the avatar video condition (𝜓 = -4.62; t(91) = -2.41, p = .01, CI (-7.62, -1.09); Figure 3). No other significant differences across groups in depth of self-disclosure
emerged (Table 3). Furthermore, no significant differences were found for the self-reported
amount of self-disclosure and objective word count across groups.
The only significant difference supports the notion that, when individuals believe they are interacting with humans, they self-disclose more intimate information under conditions of invisibility. Though the other comparisons were not statistically significant, it still appeared that as perceived anonymity increased, so did the comfort of self-disclosing (e.g., depth of self-disclosure means were higher, albeit non-significantly, for both of
the invisible conditions than for the visible condition; Table 2). Interestingly, it appears that,
under conditions of perceived invisibility, individuals self-disclosed more intimately when they
believed they were interacting with an avatar rather than an agent. It is possible that perceived
invisibility alone might increase the impression of anonymity, yet, other processes specific to
human interactions (e.g. rapport or social presence) might promote further intimate self-
disclosure when interacting with avatars as opposed to agents.
Moderation Analyses
To better understand how virtual humans elicit self-disclosure, we focused on examining
the moderating effect of conditions of anonymity on the relationships between self-disclosure and each of three constructs: 1) rapport, 2) social presence, and 3) fear of negative evaluation. All of these
constructs have been shown to influence levels of self-disclosure in human and computer-
mediated communication. We conducted hierarchical regression analyses to determine whether
the relationship between each of these variables and self-disclosure (depth and two measures of
amount) varied as a function of our conditions. Continuous variables were centered, and our
group variable was dummy coded. Main effects of each of these moderators were evaluated in
the first step and interaction effects were evaluated in the second step of these regressions. Age
was used as a covariate to account for any age-related differences in comfort and trust with
virtual human technology (Ho et al., 2005; Hoff & Bashir, 2015; Pak et al., 2012; Sanchez et al.,
2004).
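In outline, each moderation model is a two-step design-matrix setup: step 1 enters age, the centered predictor, and condition dummy codes; step 2 adds predictor-by-dummy interaction terms. The sketch below uses simulated data and hypothetical effect sizes (the study's analyses were run in R), with the agent condition as the reference group:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 144
rapport = rng.normal(0, 1, n)     # continuous predictor
cond = rng.integers(0, 3, n)      # 0 = agent, 1 = avatar audio, 2 = avatar video
age = rng.normal(20, 2, n)        # covariate
# Hypothetical outcome: rapport predicts depth only in the agent condition.
depth = 1.0 * rapport * (cond == 0) + 0.1 * age + rng.normal(0, 1, n)

rap_c = rapport - rapport.mean()  # mean-center the continuous predictor
d1 = (cond == 1).astype(float)    # dummy codes, agent as reference
d2 = (cond == 2).astype(float)

# Step 1: covariate, centered predictor, and condition main effects.
X1 = np.column_stack([np.ones(n), age, rap_c, d1, d2])
b1, *_ = np.linalg.lstsq(X1, depth, rcond=None)

# Step 2: add predictor-by-condition interaction terms.
X2 = np.column_stack([X1, rap_c * d1, rap_c * d2])
b2, *_ = np.linalg.lstsq(X2, depth, rcond=None)

# In step 2, b2[2] is the rapport slope in the reference (agent) condition;
# b2[5] and b2[6] are the slope differences for the avatar conditions.
print(np.round(b2[[2, 5, 6]], 2))
```

A significant interaction coefficient in step 2 indicates that the predictor's slope differs between that condition and the reference group, which is how the condition-specific relationships in the following subsections were evaluated.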
Rapport
Significant relationships and interactions emerged when evaluating rapport as a predictor
of both depth and amount of self-disclosure across our conditions. Regarding depth of self-
disclosure, greater rapport was positively associated with greater depth of self-disclosure
irrespective of condition (β = .24, p < .01; Table 4). No significant interaction emerged between
rapport and condition on depth of disclosure. Though results indicated a trend in which the
relationship between rapport and amount of self-disclosure was different between the invisible
conditions--avatar audio and agent (β = -.40, p = .06; Table 5), no significant relationships
emerged when examining this relationship in each condition separately (Table 6). However, it
appeared that this relationship ran in opposite directions across conditions: greater rapport was related to more self-disclosure in the agent condition but to less self-disclosure in the avatar audio
condition (Figure 4). This pattern held when examining the total interaction word count. Results
indicated a significant difference in the relationship between rapport and word count between the
avatar audio and agent conditions (β = -.44, p = .04; Table 7). Specifically, as the experience of rapport increased, only individuals in the agent condition showed a tendency to utter more words (t(129) = 2.19, p = .03; Table 8; Figure 5). It is likely that for depth of self-disclosure,
experiencing rapport is relevant irrespective of invisibility or the belief to be interacting with an
agent. However, for the subjective and objective amount of self-disclosure, greater rapport may
prevent individuals from self-disclosing greater quantities of information towards a human. This
might be true even when participants perceived themselves to be invisible to this human.
Social Presence
No significant relationships or interactions emerged when evaluating social presence as a
predictor of depth of self-disclosure across our conditions (Table 9). When examining the self-
reported amount of self-disclosure, results indicated a main effect of social presence on the
amount of self-disclosure such that greater social presence was associated with greater reported
amounts of self-disclosure (β = .24, p < .01; Table 10). When interaction effects were added to
this model, a trend emerged in which the relationship between social presence and amount of
self-disclosure was different between the avatar video and agent conditions (β = .38, p = .06;
Table 10). As the experience of social presence increased, only individuals in the agent condition
reported a significant tendency to self-disclose more (t(137) = 3.37, p < .01; Table 11; Figure 6).
When examining the total word count, greater social presence was significantly related to a greater word count irrespective of condition (β = .18, p = .04; Table 12). These findings support
the possibility that the experience of social presence may increase the amount of self-disclosure,
especially when interacting with agents. Based on these findings, it appears that as social
presence increased, individuals reported sharing a larger amount of personal information, but not necessarily information of a more intimate nature.
Fear of negative evaluation
No significant relationships or interactions emerged when evaluating individuals’
tendency to fear negative evaluation as a predictor of depth and amount of self-disclosure across
our conditions. When examining the total interaction word count, results indicated that the
relationship between fear of negative evaluation and amount of expressed words was
significantly different between the avatar video and agent conditions (β = .48, p = .02; Table 15).
Additionally, there was a trend showing a difference in this relationship between the avatar audio
and agent conditions (β = -.40, p = .06, with agent as the reference group). The
relationship emerged as significant only in the agent condition (t(129) = .32, p = .03; Table 16;
Figure 7) such that greater fear of negative evaluation was associated with higher word count.
Though not statistically significant, this relationship had the opposite direction for both of the
avatar conditions, with a stronger magnitude for the avatar video condition. Therefore,
participants with greater tendencies to fear negative evaluations expressed more words when
interacting with an agent and potentially fewer words when interacting with the avatar under
visible conditions. Being high in fear of negative evaluation may prevent individuals from
disclosing towards humans but may increase their comfort when disclosing towards agents, as
these may be perceived as unable to form evaluations.
Discussion
The present research adds to our understanding of how conditions of anonymity using
virtual humans contribute to greater self-disclosure in clinical assessments. Specifically, we
examined the impact of the belief to be interacting with an agent and perceived invisibility on
college students’ levels of self-disclosure during semi-structured clinical interviews.
Furthermore, we assessed the moderating role of these conditions of anonymity in the individual
relationships between interview-related factors (i.e. rapport, social presence, and fear of negative
evaluation) and self-disclosure. The careful design of our study provided an opportunity to
elucidate the process of self-disclosure with virtual humans. Our study contributes to the current
scientific understanding of these systems by demonstrating an effect of invisibility on self-
disclosure towards humans and seemingly paradoxical relationships between the belief to be
interacting with an agent and two different types of self-disclosure: depth and amount.
Virtual Humans and Depth of Self-disclosure
Reported depth of self-disclosure appeared to be strongly influenced by invisibility.
Specifically, individuals reported self-disclosing more intimate information when they believed
they were interacting with the avatar audio (i.e. invisibility) compared to the avatar video (i.e.
visibility) condition. This finding is consistent with the online disinhibition effect, wherein
individuals lower their behavioral inhibitions as invisibility from an interaction partner increases (Joinson, 2003; Joinson & Paine, 2007; Lapidot-Lefler & Barak, 2012). However,
previous studies assessing invisibility did not measure the extent to which individuals perceive
themselves to be invisible. Rather, most of the research focused on the interaction partner’s (or in
our case the interviewer’s) visibility (Clark-Gordon et al., 2019). One of the strengths of the
current study design is that it demonstrates that the perception of visibility alone on the part of
the interviewee has an impact on the perceived anonymity of an interaction.
We did not find support for our expectation that the belief to be interacting with an agent
had a significant effect on depth of self-disclosure. This is consistent with Hasler et al. (2013),
who demonstrated that during research interviews about religion (which also could be considered
a sensitive topic), avatars elicited greater response rates than agents but did not differ in the quality or intimacy of the disclosures elicited. Similarly, Newman et al. (2002) demonstrated that individuals
would report more stigmatized behaviors with an audio-based computer-assisted interview but
more psychological distress with a human face-to-face. Given that our interview focused on
distress-related content, it is possible that individuals felt more comfortable sharing this
information with a human rather than a computer. Furthermore, though invisibility alone might
support more intimate self-disclosure, other processes such as rapport might have given the
perceived human an advantage over the agent with regards to reported depth of self-disclosure.
Some research studies have suggested that individuals experience greater rapport towards
perceived humans than perceived agents (Lucas et al., 2018; Yokotani et al., 2018). In our study,
we found a strong relationship between greater experiences of rapport and greater depth of self-
disclosure irrespective of condition. Therefore, greater rapport with an avatar might explain
greater depth of self-disclosure compared to an agent. Indeed, our participants reported
greater rapport when they believed they were interacting with the avatar audio than with the
agent condition. Though these differences were not statistically significant, it is quite possible
that even a marginal rapport difference would have led to differences in depth of self-disclosure
favoring the human as opposed to the computer.
Virtual Humans and Amount of Self-disclosure
No main effects of condition were observed for either measure of self-disclosure amount (i.e., breadth). However, conditions of anonymity moderated individual relationships between social presence, rapport, and fear of negative evaluation and the amount of self-disclosure. First, greater rapport was associated with a greater quantity of expressed words in the agent condition only, and this relationship was significantly stronger in the agent condition than in the avatar audio condition. Second, social presence emerged as a significant positive predictor of the frequency of words expressed during the interview as well as perceived amount of self-disclosure, and this latter relationship was significantly stronger in the agent condition than in the avatar video condition. Thus, greater reported feelings of rapport and social presence when interacting with computers (but not with humans) were associated with a greater amount of self-disclosure during interviews. Based on previous theories, when experiencing such social connection with a computer, individuals may perceive fewer risks in disclosing a greater quantity of information than they would with a human (Lucas et al., 2014). By contrast, when interacting with humans, and despite their increased sense of social connection, the perception of risk (e.g., being negatively appraised, socially rejected, or betrayed) may prevent individuals from disclosing. In support of this interpretation, a greater tendency to fear negative evaluation was related to more expressed words in the agent condition only, suggesting that computers might be perceived as lacking the capacity for negative judgment. Therefore, the examined relationship patterns appear to favor computers over humans regarding the amount of self-disclosure during clinically oriented interviews.
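The moderation pattern described above amounts to testing an interaction term in a regression model. The following sketch is purely illustrative and is not the study's actual analysis: all variable names, the sample size, and the effect sizes are hypothetical assumptions, simulated only to show how a "rapport predicts word count in the agent condition only" pattern appears as an interaction coefficient.

```python
# Hypothetical illustration of a moderation (interaction) analysis.
# condition: 0 = avatar, 1 = agent; all values below are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 400
condition = rng.integers(0, 2, n)      # perceived interviewer type
rapport = rng.normal(0.0, 1.0, n)      # standardized rapport score
# Simulate the reported pattern: rapport predicts word count
# only when participants believe they face an agent.
word_count = 100 + 15 * rapport * condition + rng.normal(0.0, 10.0, n)

# Design matrix: intercept, rapport, condition, rapport x condition.
# The last column carries the moderation effect of interest.
X = np.column_stack([np.ones(n), rapport, condition, rapport * condition])
beta, *_ = np.linalg.lstsq(X, word_count, rcond=None)
print(f"rapport slope (avatar condition): {beta[1]:.2f}")
print(f"rapport x condition interaction:  {beta[3]:.2f}")
```

A significant interaction coefficient (here, the simulated slope difference between conditions) is what "conditions of anonymity moderated the relationship" refers to in the text above.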
Current Study vs. Previous Research with Virtual Human Interviewers
Our findings regarding the amount of self-disclosure are consistent with previous research demonstrating the possible advantages of agents over avatars in self-disclosure, yet in prior studies (Lucas et al., 2014, 2017), an agent's advantage was also reflected in greater depth or intimacy of self-disclosure. Our results regarding depth of self-disclosure may be specific to self-reported intimacy rather than systematic observer codings. A meta-analysis by Fox et al. (2015) examined whether agents or avatars elicited different social influence outcomes and demonstrated that studies conducted on a desktop using objective measures (such as Lucas et al., 2014) showed greater effects for agents than those using subjective measures (as in the current study). Furthermore, there were major differences in participant pools between our study
and the Lucas et al. studies. First, participants in our study were college students rather than
military service members or veterans. A cross-national systematic review found that individuals
in the military ranked stigma as a barrier to seeking treatment much higher than any other
population, including students. Furthermore, disclosure and confidentiality concerns were the
most commonly reported barrier to help-seeking in this population (Clement et al., 2015). Greater stigma around disclosing intimate information to humans may make veterans more likely to disclose to agents than to avatars. Second, while the veteran sample in Lucas et al. (2014) was approximately 62% male, our sample was only 31% male. Males are more likely to disclose sensitive information in computerized assessments than females (Joinson
more likely to disclose sensitive information in computerized assessments than females (Joinson
et al., 2010). Females are also more sensitive to differences between recorded and synthetic
speech than males (Nass et al., 2003), which may indicate that females may be more sensitive to
differences in social cues between perceived agents and perceived avatars, and thus, more likely
to notice inconsistencies in social cues, especially in our study given the absence of smile
27
mimicry. This may have prevented females from disclosing in depth with an agent because of
reductions in social presence. Third, the mean age of our sample was 20 years of age while all
previously mentioned studies comparing avatars with agents reported sample mean ages that
ranged from 32-44 years of age. Zhang et al. (2015) showed that older individuals’ self-
disclosure patterns tended to be more affected by survey method than younger individuals.
Therefore, our younger participants might have been less receptive to the effects of our
manipulations. Considering our focus was on understanding the use of virtual humans with
college students, the current findings retain utility.
Overall Self-disclosure Patterns with Virtual Humans
Our examination of conditions of anonymity and disclosure-related relationships highlights major differences in the process of eliciting self-disclosure by virtual humans during clinically oriented interviews with college students. First, perceived invisibility with avatars was directly related to greater reported depth of self-disclosure, whereas the belief that one was interacting with an agent moderated the relationships between relevant variables and amount of self-disclosure rather than affecting self-disclosure directly. Second, greater rapport predicted depth of self-disclosure, whereas greater social presence predicted amount of self-disclosure. Third, the bivariate association between the reported amount and depth of self-disclosure was non-significant; in fact, based on bivariate correlations alone, greater word count was marginally associated with depth of self-disclosure in the agent condition only. Fourth, rapport and social presence were strongly correlated in the agent condition only. Although unexpected, the
lack of relationship between amount and depth is consistent with previous research studies
(Cozby, 1973). It is possible that depth and amount of self-disclosure serve different functions
during clinical interviews, for example, as clinically relevant information for a clinician or as
mechanisms that may facilitate therapeutic change or even as potentially therapy-interfering
behavior. The clinical utility of these constructs under conditions such as those in the current
study remains a fruitful direction for future research. When participants interacted with agents, both the relationship between rapport and social presence and the relationship between amount and depth of self-disclosure strengthened. This pattern suggests that the processes and triggers of self-disclosure may differ substantially between interactions with avatars and interactions with agents.
Theoretical Implications
The Omarzu (2000) Disclosure Decision Model may provide insight into our findings.
The model describes three major types of self-disclosure: 1) breadth, which refers to the number of topics related to the self; 2) duration, which refers to the amount, word total, or persistence of self-disclosure; and 3) depth, which refers to the intimacy of the information disclosed. Under our conceptualization of self-disclosure, what Omarzu called duration we refer to as the amount of self-disclosure. According to the Disclosure Decision Model, individuals make
decisions about the quantity and intimacy of their self-disclosures based on two factors:
subjective utility and subjective risk. Subjective utility is the perceived value and associated
reward for disclosing, which may include self-expression, social connection, or social control.
The utility of a goal can be shaped by individual differences and situational cues. Subjective risk
is the level of risk of social rejection, betrayal, or of causing discomfort to the listener when the
individual discloses. This risk is shaped by the listener’s evaluative power. According to the
model, depth of self-disclosure is most influenced by perceived subjective risk, as individuals
would want to control their levels of expressed vulnerability depending on that risk. On the other
hand, amount/duration is not thought to vary significantly as a function of risk, as individuals could self-disclose at great length about superficial topics to achieve a goal, but with little intimacy or expression of vulnerability.
Based on our findings, it appears that individuals 1) had different perceptions regarding the utility of self-disclosing to the avatar versus the agent and/or 2) perceived lower risks of disclosing intimately to the audio-only avatar than to the agent. Regarding the utility of self-disclosing, when individuals believed they were interacting with the therapist controlling the avatar, the rewards of social approval and relief of distress through talking about problems might have been activated. It is theorized that humans assume computers are unable to engage in the same emotional tasks as other humans (Madhavan et al., 2006). Therefore, the rewards of social approval and distress relief would not have been attainable with the agent, as the agent would not have had the capacity to approve of or empathically understand the interviewee (Reis et al., 2017). This would explain why participants were not willing to provide more intimate information to the agent in our study.
Regarding the risk of self-disclosure, it is possible that perceived risk was lower in the avatar condition than in the agent condition because of differential effects of rapport and social presence on trust. First, rapport-building might work differently when interacting with humans as opposed to computers. Research has shown that rapport-building or dialog-building techniques in agents are beneficial only when there is a shift from less trust to more trust during an interaction. Khashe, Lucas, Becerik-Gerber, and Gratch (2019) demonstrated that individuals who were initially more reluctant to accept an agent's request were more likely to accept it after the agent had used rapport-building techniques. Therefore, in situations where there is a lack of trust or social connection, rapport-building can increase social influence (i.e., elicited self-disclosure). Given that an interaction with a perceived human is initially thought of as riskier than the interaction
with an agent, greater rapport over time might have led to more behaviors consistent with the
goal of social approval or distress relief: greater depth of self-disclosure. On the other hand,
rapport might not have had the same impact on depth of self-disclosure in the agent condition
because of this lack of shift in trust. It is conceivable that, when interacting with the avatar,
individuals may have become more trusting as they experienced more rapport. Indeed, Hasler et al. (2013) indicated that participants, especially those with higher familiarity with the technology, had more negative affective responses toward an avatar than toward an agent. A shift in these affective responses as a result of greater rapport could explain differences in self-disclosure favoring the avatar. Additionally, individuals with more social anxiety report feeling less rapport and more embarrassment with so-called non-contingent agents, those that do not produce timely nonverbal feedback in response to a human's behaviors (Kang et al., 2008). Considering the lack
of smile mimicry in our agent, it is likely that our agent was consistently non-contingent
throughout the interview. This might explain our finding that greater fear of negative evaluation
was related to greater expression of words with the agent only. Possibly, individuals high in fear
of negative evaluation might have experienced more embarrassment with the agent and actively
used a disclosure strategy (e.g., expressing more words) that prevented them from
expressing intimate information. Future research is needed to examine the role of the trajectory
of rapport during an interaction and its effect on self-disclosure with virtual humans.
Second, individuals may become less trusting as they begin to experience more social presence when interacting with agents. Greater social presence with a human is believed to trigger greater fear of negative evaluation, and greater experiences of social presence with agents may have a similar influence (Reeves & Nass, 1996). For example, Bente et al. (2008) demonstrated that interpersonal emotional trust was greater when interacting with
humans under perceived invisibility conditions than with avatars with high behavioral realism.
Similarly, it has been shown that chatbots (text-based agents) that have better conversational
abilities may inhibit self-disclosure compared to unresponsive chatbots (Schuetzler et al., 2018).
This would explain why, in Kang and Gratch (2014), an agent's behavioral realism was related to greater disclosure of attitudes, desires, or values but not to disclosure of feelings, emotions, and fears (i.e., more intimate information). They also demonstrated that individuals rated their self-disclosure as more personal and intimate at low behavioral realism, and hence at lower social presence with an agent. Interestingly, we did not find any significant differences in social presence across conditions, though the magnitude seemed greater in the avatar audio condition. A shift in rapport with a perceived human may buffer the risk of evaluation triggered by the experience of higher social presence.
Conversely, agents may not benefit from a buffering effect of rapport change, and thus greater social presence may translate into less trust. Differences in trust might explain why, in our study, individuals reported greater depth of self-disclosure toward avatars. However, previous research has also suggested that when agents are unable to produce social feedback that matches meaningful events during an interaction, individuals may feel more embarrassed or less trusting (Gratch & Marsella, 2013, p. 194). Therefore, it is possible that the combination of low behavioral realism and appropriate timing of feedback may allow greater social presence with agents to translate into greater trust. Future studies could assess the effects of feedback timing and behavioral realism on social presence and self-disclosure. Research is also needed to understand the effects of rapport and social presence on experiences of trust. These effects may change over the span of multiple sessions as greater rapport develops and, in turn, fosters more trust in an agent. Our moderation results suggest that the experience of trust could be a mediator of the effect of conditions of anonymity on self-disclosure. Therefore, further examination of increased trust as a mechanism of disclosure with virtual humans would be valuable.
Implementation and Design Implications
Our study highlights significant opportunities for the use of virtual humans during clinical assessments with college students. First, our results indicate that disclosing to a human under conditions of invisibility is beneficial for depth of self-disclosure, highlighting the value of increasing perceptions of anonymity (e.g., invisibility) through the use of avatars. In particular, invisibility can affect self-disclosure through the lack of eye contact, which has been empirically associated with the online disinhibition effect (Lapidot-Lefler & Barak, 2012). Research has shown that during clinical interactions, patients exhibited a tendency to follow a clinician's eye gaze but not vice versa (Montague et al., 2011). Direct eye contact from a clinician may therefore increase patients' attention to social cues, affiliative behaviors, and socially desirable responses, as well as reduce self-disclosure. Even early thinkers in the field (e.g., Freud) believed in the benefits of sitting behind a client to allow a sense of freedom when expressing intimate psychological content (Friedberg & Linn, 2012). Similarly, the use of invisibility in confessional booths has elicited disclosure of sensitive information for centuries. Greater anonymity, therefore, can benefit the clinical assessment process for college students by reducing perceptions of risk and increasing self-disclosure during screening assessments.
However, greater anonymity can also impact the development of the client-therapist relationship needed beyond the assessment process. Previous research on non-verbal behaviors during therapeutic interactions suggests that direct eye contact, and even face-to-face
body orientation, are positively associated with impressions of a clinician's empathy, patients'
feelings of rapport, and self-disclosure (Hall et al., 1995; Robinson, 2006). Notably, the benefits
of these behaviors in therapy are maximized when there is an appropriate balance, as prolonged
direct eye contact may have detrimental effects on rapport (Rotenberg et al., 2003). Given that
clinical assessments are not only used for information gathering but also to build rapport,
understanding the interpersonal contexts that improve both self-disclosure and alliance
development with both agents and avatars is imperative. Furthermore, our research findings
emphasize the need to address clients’ individual differences in clinical assessment. For example,
individuals with social anxiety (i.e., those more prone to fear of negative evaluation) may benefit from the use of virtual humans (Kang & Gratch, 2010a).
Our study also demonstrated that interacting with an agent may be beneficial in eliciting
greater amounts of self-disclosure, which can be useful in multiple contexts (e.g., interviews about finances, general health, risky behaviors, or drug use). Though interacting with a human
may lead to the greatest depth of disclosure, which is likely most integral in therapeutic settings,
there is potential to increase the depth of self-disclosure elicited by agents. Based on Omarzu’s
model and the power of subjective utility in self-disclosure patterns, studies could test whether making the goals of an interaction with an agent more salient changes individuals' approach to disclosure. For example, overtly describing the interaction with an agent as an opportunity to relieve stress may trigger greater willingness to disclose intimately.
Furthermore, design changes in anthropomorphism or behavioral realism, dialog sequencing, and rapport-building techniques could have an impact on the development of trust with an agent (Kulms & Kopp, 2019). For example, more dialogue glitches at the beginning of an interaction with an agent could initially lower trust; as the frequency of these glitches lessens, greater social presence and rapport-building may in turn lead to greater comfort in disclosing intimate information. It is important to keep in mind that self-disclosure effects may change as individuals, especially younger adults, become more accustomed to virtual human interviewers. Greater exposure to these virtual human interviewers may trigger social behaviors similar to those demonstrated with human interviewers, including less self-disclosure (Hasler et al., 2013). More research is needed to understand the function of trust and familiarity and how their progression during an interaction affects depth and amount of self-disclosure with virtual humans.
Limitations
It is important to acknowledge our study's limitations. Two of our measures of self-disclosure were based solely on self-report. Though an objective measure of amount of self-disclosure (word count) was related to the subjective measure, this validation was not possible for depth of self-disclosure. This is relevant considering that previous studies (Lucas et al., 2014) demonstrated differences in objective measures of intimacy. Therefore, future research on objective coding of intimate self-disclosure could demonstrate different patterns of findings with college students. Nevertheless, the level of intimacy of self-disclosure is often subjective, and self-reported depth of self-disclosure may also be a valid way to assess self-disclosure. Furthermore, given the experimental approach of our study, generalizability to other contexts and populations may be reduced. This is compounded by the fact that, by the time this study was conducted, the technology driving our agent was older and presented many technical difficulties. One of these technical issues was the agent's inability to mirror participants' facial expressions (i.e., smile mimicry), though this lack of mimicry was consistent across conditions. Even with
these limitations, our current study contributes to our understanding of self-disclosure with
virtual humans and provides insights for the direction of future research and design.
Conclusion
The use of virtual humans for clinical interviewing could revolutionize current mental health care for college students. In particular, once developed, virtual humans are a low-cost and accessible way to reach students who perceive greater barriers to healthcare (Kazdin & Blase, 2011). These virtual humans or assistants could not only increase perceptions of anonymity and lead to greater self-disclosure, but could also enhance beneficial social processes by matching students' preferred language, ethnicity, personality, background story, and so on. These systems could also become standardized and provide objective metrics that improve current clinical assessments, reducing the burden on healthcare providers (Rizzo et al., 2016b). In spite of these opportunities, virtual human assistants require substantial development costs and large-scale interdisciplinary and industry collaborations. Additionally, privacy concerns need to be addressed, as none of these systems is currently compliant with health information privacy regulations. Our study contributes to the growing body of research regarding the effective design
and implementation of technology-based care through virtual human interviewers, particularly
within mental health settings on college campuses.
References
American College Health Association. (2020). American College Health Association-National
College Health Assessment II: Undergraduate Student Reference Group Data Report
Fall 2019. Retrieved from https://www.acha.org/documents/ncha/NCHA-
III_FALL_2019_UNDERGRADUATE_REFERENCE_GROUP_DATA_REPORT.pdf
Anonymous. (1998). To reveal or not to reveal: A theoretical model of anonymous
communication. Communication Theory, 8(4), 381–407.
Appel, J., von der Pütten, A., Krämer, N. C., & Gratch, J. (2012). Does humanity matter?
Analyzing the importance of social cues and perceived agency of a computer system for
the emergence of social reactions during human-computer interaction. Advances in
Human-Computer Interaction, 2012.
Bailenson, J. N., Blascovich, J., Beall, A. C., & Loomis, J. M. (2001). Equilibrium theory
revisited: Mutual gaze and personal space in virtual environments. Presence:
Teleoperators & Virtual Environments, 10(6), 583-598.
Baxter, L. A., & Montgomery, B. M. (1996). Relating: Dialogues and dialectics. Guilford Press.
Bente, G., Rüggenberg, S., Krämer, N. C., & Eschenburg, F. (2008). Avatar-mediated
networking: Increasing social presence and interpersonal trust in net-based
collaborations. Human Communication Research, 34(2), 287–318.
Centers for Disease Control and Prevention. (2018). Web based injury statistics query and
reporting system. 2016.
Chang, L., & Krosnick, J. A. (2009). National surveys via RDD telephone interviewing versus
the Internet: Comparing sample representativeness and response quality. Public Opinion
Quarterly, 73(4), 641-678.
Chang, L., & Krosnick, J. A. (2010). Comparing oral interviewing with self-administered computerized questionnaires: An experiment. Public Opinion Quarterly, 74(1), 154–167.
Clark-Gordon, C. V., Bowman, N. D., Goodboy, A. K., & Wright, A. (2019). Anonymity and online self-disclosure: A meta-analysis. Communication Reports, 32(2), 98–111.
https://doi.org/10.1080/08934215.2019.1607516
Clement, S., Schauman, O., Graham, T., Maggioni, F., Evans-Lacko, S., Bezborodovs, N.,
Morgan, C., Rüsch, N., Brown, J. S. L., & Thornicroft, G. (2015). What is the impact of
mental health-related stigma on help-seeking? A systematic review of quantitative and
qualitative studies. Psychological Medicine, 45(1), 11–27.
https://doi.org/10.1017/S0033291714000129
Cozby, P. C. (1973). Self-disclosure: A literature review. Psychological Bulletin, 79(2), 73.
DeVault, D., Artstein, R., Benn, G., Dey, T., Fast, E., Gainer, A., ... & Lucas, G. (2014, May).
SimSensei Kiosk: A virtual human interviewer for healthcare decision support.
In Proceedings of the 2014 international conference on Autonomous agents and multi-
agent systems (pp. 1061-1068).
Dijkstra, W. (1987). Interviewing style and respondent behavior: an experimental study of the
survey-interview. Sociological Methods & Research, 16(2), 309–334.
https://doi.org/10.1177/0049124187016002006
Duffy, M. E., Twenge, J. M., & Joiner, T. E. (2019). Trends in mood and anxiety symptoms and suicide-related outcomes among U.S. undergraduates, 2007–2018: Evidence from two national surveys. Journal of Adolescent Health, 65(5), 590–598.
https://doi.org/10.1016/j.jadohealth.2019.04.033
Dunbar, M. S., Sontag-Padilla, L., Kase, C. A., Seelam, R., & Stein, B. D. (2018). Unmet mental
health treatment need and attitudes toward online mental health services among
community college students. Psychiatric Services, 69(5), 597–600.
Eisenberg, D. (2019). Countering the troubling increase in mental health symptoms among US
college students. Journal of Adolescent Health, 65(5), 573–574.
https://doi.org/10.1016/j.jadohealth.2019.08.003
Eisenberg, D., Downs, M. F., Golberstein, E., & Zivin, K. (2009). Stigma and help seeking for
mental health among college students. Medical Care Research and Review, 66(5), 522–
541.
Eisenberg, D., Hunt, J., Speer, N., & Zivin, K. (2011). Mental health service utilization among
college students in the United States. The Journal of Nervous and Mental Disease,
199(5), 301–308.
Eyssel, F., & Hegel, F. (2012). (S)he's got the look: Gender stereotyping of robots. Journal of Applied Social Psychology, 42(9), 2213–2230.
Fabri, M., Elzouki, S. Y. A., & Moore, D. (2007, July). Emotionally expressive avatars for
chatting, learning and therapeutic intervention. In International conference on human-
computer interaction (pp. 275-285). Springer, Berlin, Heidelberg.
https://doi.org/10.1007/978-3-540-73110-8_29
Fox, J., Ahn, S. J. (Grace), Janssen, J. H., Yeykelis, L., Segovia, K. Y., & Bailenson, J. N.
(2015). Avatars versus agents: a meta-analysis quantifying the effect of agency on social
influence. Human–Computer Interaction, 30(5), 401–432.
https://doi.org/10.1080/07370024.2014.921494
Friedberg, A., & Linn, L. (2012). The couch as icon. The Psychoanalytic Review, 99(1), 35–62.
Gnambs, T., & Kaspar, K. (2015). Disclosure of sensitive behaviors across self-administered
survey modes: A meta-analysis. Behavior Research Methods, 47(4), 1237–1259.
https://doi.org/10.3758/s13428-014-0533-4
Gratch, J., Hartholt, A., Dehghani, M., & Marsella, S. (2013). Virtual humans: A new toolkit for
cognitive science research. Applied Artificial Intelligence, 19, 215–233.
Gratch, J., & Marsella, S. (Eds.). (2013). Social emotions in nature and artifact. Oxford
University Press.
Gratch, J., Okhmatovskaia, A., Lamothe, F., Marsella, S., Morales, M., van der Werf, R. J., &
Morency, L. P. (2006, August). Virtual rapport. In International Workshop on Intelligent
Virtual Agents (pp. 14-27). Springer, Berlin, Heidelberg.
https://doi.org/10.1007/11821830_2
Gratch, J., Wang, N., Gerten, J., Fast, E., & Duffy, R. (2007, September). Creating rapport with
virtual agents. In International workshop on intelligent virtual agents (pp. 125-138).
Springer, Berlin, Heidelberg.
Guadagno, R. E., Blascovich, J., Bailenson, J. N., & McCall, C. (2007). Virtual humans and
persuasion: The effects of agency and behavioral realism. Media Psychology, 10(1), 1–
22.
Hall, J. A., Harrigan, J. A., & Rosenthal, R. (1995). Nonverbal behavior in clinician—Patient
interaction. Applied and Preventive Psychology, 4(1), 21–37.
https://doi.org/10.1016/S0962-1849(05)80049-6
Hasler, B. S., Tuchman, P., & Friedman, D. (2013). Virtual research assistants: Replacing human
interviewers by automated avatars in virtual worlds. Computers in Human Behavior,
29(4), 1608–1616. https://doi.org/10.1016/j.chb.2013.01.004
Ho, G., Wheatley, D., & Scialfa, C. T. (2005). Age differences in trust and reliance of a
medication management system. Interacting with Computers, 17(6), 690–710.
https://doi.org/10.1016/j.intcom.2005.09.007
Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407–434.
https://doi.org/10.1177/0018720814547570
Holbrook, A. L., Green, M. C., & Krosnick, J. A. (2003). Telephone versus face-to-face
interviewing of national probability samples with long questionnaires: Comparisons of
respondent satisficing and social desirability response bias. Public Opinion Quarterly,
67(1), 79–125. https://doi.org/10.1086/346010
Joinson, A. (1999). Social desirability, anonymity, and internet-based questionnaires. Behavior
Research Methods, Instruments, & Computers, 31(3), 433–438.
https://doi.org/10.3758/BF03200723
Joinson, A. N. (2001). Self-disclosure in computer-mediated communication: The role of self-
awareness and visual anonymity. European Journal of Social Psychology, 31(2), 177–
192. https://doi.org/10.1002/ejsp.36
Joinson, A. N. (2003). Understanding the psychology of Internet behaviour: Virtual worlds, real
lives. Revista Iberoamericana de Educación a Distancia, 6(2), 190.
Joinson, A. N., & Paine, C. B. (2007). Self-disclosure, privacy and the Internet. In The Oxford handbook of Internet psychology (pp. 237–252).
Joinson, A. N., Reips, U.-D., Buchanan, T., & Schofield, C. B. P. (2010). Privacy, trust, and self-
disclosure online. Human–Computer Interaction, 25(1), 1–24.
Kang, S. H., & Gratch, J. (2010a). Virtual humans elicit socially anxious interactants' verbal self‐
disclosure. Computer Animation and Virtual Worlds, 21(3‐4), 473-482.
https://doi.org/10.1002/cav.345
Kang, S. H., & Gratch, J. (2010b). The effect of avatar realism of virtual humans on self-
disclosure in anonymous social interactions. In CHI'10 Extended Abstracts on Human
Factors in Computing Systems (pp. 3781-3786).
https://doi.org/10.1145/1753846.1754056
Kang, S. H., Gratch, J., Wang, N., & Watt, J. H. (2008, May). Does the contingency of agents'
nonverbal feedback affect users' social anxiety?. In Proceedings of the 7th international
joint conference on Autonomous agents and multiagent systems-Volume 1 (pp. 120-127).
International Foundation for Autonomous Agents and Multiagent Systems.
Kazdin, A. E., & Blase, S. L. (2011). Rebooting psychotherapy research and practice to reduce
the burden of mental illness. Perspectives on Psychological Science, 6(1), 21–37.
https://doi.org/10.1177/1745691610393527
Keith-Lucas, A. (1972). Giving and taking help (2nd ed.). North Amer Assn of.
Khashe, S., Lucas, G., Becerik-Gerber, B., & Gratch, J. (2019). Establishing social dialog
between buildings and their users. International Journal of Human–Computer
Interaction, 35(17), 1545–1556. https://doi.org/10.1080/10447318.2018.1555346
Kim, J., & Dindia, K. (2011). Online self-disclosure: A review of research. Computer-Mediated
Communication in Personal Relationships, 156–180.
Kim, K., & Welch, G. (2015, September). Maintaining and enhancing human-surrogate presence
in augmented reality. In 2015 IEEE International Symposium on Mixed and Augmented
Reality Workshops (pp. 15-19). IEEE.
Kulms, P., & Kopp, S. (2019). More human-likeness, more trust? The effect of
anthropomorphism on self-reported and behavioral trust in continued and interdependent
human-agent cooperation. In Proceedings of Mensch und Computer 2019 (pp. 31-42).
https://doi.org/10.1145/3340764.3340793
Lapidot-Lefler, N., & Barak, A. (2012). Effects of anonymity, invisibility, and lack of eye-
contact on toxic online disinhibition. Computers in Human Behavior, 28(2), 434–443.
https://doi.org/10.1016/j.chb.2011.10.014
Leary, M. R. (1983). A brief version of the Fear of Negative Evaluation Scale. Personality and
Social Psychology Bulletin, 9(3), 371–375.
Leung, L. (2002). Loneliness, self-disclosure, and ICQ ("I seek you") use. CyberPsychology &
Behavior, 5(3), 241-251. https://doi.org/10.1089/109493102760147240
Li, W., Dorstyn, D. S., & Denson, L. A. (2014). Psychosocial correlates of college students’
help-seeking intention: A meta-analysis. Professional Psychology: Research and
Practice, 45(3), 163.
Lipson, S. K., Lattie, E. G., & Eisenberg, D. (2018). Increased rates of mental health service
utilization by US college students: 10-year population-level trends (2007–2017).
Psychiatric Services, 70(1), 60–63. https://doi.org/10.1176/appi.ps.201800332
Liu, R. Y., & Singh, K. (1997). Notions of limiting P values based on data depth and bootstrap.
Journal of the American Statistical Association, 92(437), 266–277.
Lucas, G. M., Gratch, J., King, A., & Morency, L.-P. (2014). It’s only a computer: Virtual
humans increase willingness to disclose. Computers in Human Behavior, 37, 94–100.
https://doi.org/10.1016/j.chb.2014.04.043
Lucas, G. M., Krämer, N., Peters, C., Taesch, L.-S., Mell, J., & Gratch, J. (2018). Effects of
perceived agency and message tone in responding to a virtual personal trainer. In
Proceedings of the 18th International Conference on Intelligent Virtual Agents (pp. 247–
254). https://doi.org/10.1145/3267851.3267855
Lucas, G. M., Rizzo, A., Gratch, J., Scherer, S., Stratou, G., Boberg, J., & Morency, L. P. (2017).
Reporting mental health symptoms: breaking down barriers to care with virtual human
interviewers. Frontiers in Robotics and AI, 4, 51.
https://doi.org/10.3389/frobt.2017.00051
Madhavan, P., Wiegmann, D. A., & Lacson, F. C. (2006). Automation failures on tasks easily
performed by operators undermine trust in automated aids. Human Factors, 48(2), 241–
256.
Mair, P., & Wilcox, R. (2019). Robust statistical methods in R using the WRS2 package.
Behavior Research Methods, 1–25.
Miller, C. (2019). Interviewing strategies, rapport, and empathy. In Diagnostic interviewing (pp.
29-53). Springer, New York, NY. https://doi.org/10.1007/978-1-4939-9127-3_2
Miller, L. C., Berg, J. H., & Archer, R. L. (1983). Openers: Individuals who elicit intimate self-
disclosure. Journal of Personality and Social Psychology, 44(6), 1234.
Montague, E., Xu, J., Asan, O., Chen, P., Chewning, B., & Barrett, B. (2011). Modeling eye gaze
patterns in clinician–patient interaction with lag sequential analysis. Human Factors,
53(5), 502–516.
Nass, C., Robles, E., Heenan, C., Bienstock, H., & Treinen, M. (2003). Speech-based disclosure
systems: Effects of modality, gender of prompt, and gender of user. International Journal
of Speech Technology, 6(2), 113–121.
Newman, J. C., Des Jarlais, D. C., Turner, C. F., Gribble, J., Cooley, P., & Paone, D. (2002).
The differential effects of face-to-face and computer interview modes. American Journal
of Public Health, 92(2), 294–297.
Nordberg, S. S., Hayes, J. A., McAleavey, A. A., Castonguay, L. G., & Locke, B. D. (2013).
Treatment utilization on college campuses: Who seeks help for what? Journal of College
Counseling, 16(3), 258–274. https://doi.org/10.1002/j.2161-1882.2013.00041.x
Oh, C. S., Bailenson, J. N., & Welch, G. F. (2018). A systematic review of social presence:
Definition, antecedents, and implications. Frontiers in Robotics and AI, 5.
https://doi.org/10.3389/frobt.2018.00114
Omarzu, J. (2000). A disclosure decision model: Determining how and when individuals will
self-disclose. Personality and Social Psychology Review, 4(2), 174–185.
https://doi.org/10.1207/S15327957PSPR0402_05
Pak, R., Fink, N., Price, M., Bass, B., & Sturre, L. (2012). Decision support aids with
anthropomorphic characteristics influence trust and performance in younger and older
adults. Ergonomics, 55(9), 1059–1072. https://doi.org/10.1080/00140139.2012.691554
Patton, M. Q. (1990). Qualitative evaluation and research methods (2nd ed.). Newbury Park,
CA: Sage.
Pennebaker, J. W., Boyd, R. L., Jordan, K., & Blackburn, K. (2015). The development and
psychometric properties of LIWC2015. Austin, TX: University of Texas at Austin.
Reeves, B., & Nass, C. I. (1996). The media equation: How people treat computers, television,
and new media like real people and places. Cambridge University Press.
Reis, H. T., Lemay Jr, E. P., & Finkenauer, C. (2017). Toward understanding understanding: The
importance of feeling understood in relationships. Social and Personality Psychology
Compass, 11(3), e12308.
Richman, W. L., Kiesler, S., Weisband, S., & Drasgow, F. (1999). A meta-analytic study of
social desirability distortion in computer-administered questionnaires, traditional
questionnaires, and interviews. Journal of Applied Psychology, 84(5), 754.
Rizzo, A., Scherer, S., DeVault, D., Gratch, J., Artstein, R., Hartholt, A., ... & Stratou, G.
(2016b). Detection and computational analysis of psychological signals using a virtual
human interviewing agent. Journal of Pain Management (Nov. 2016), 311-321.
Rizzo, A., Shilling, R., Forbell, E., Scherer, S., Gratch, J., & Morency, L. P. (2016a).
Autonomous virtual human agents for healthcare information support and clinical
interviewing. In Artificial intelligence in behavioral and mental health care (pp. 53-79).
Academic Press. https://doi.org/10.1016/B978-0-12-420248-1.00003-9
Robinson, J. D. (2006). Nonverbal communication and physician–patient interaction. The Sage
handbook of nonverbal communication, 437-459.
https://doi.org/10.4135/9781412976152.n23
Rodebaugh, T. L., Woods, C. M., Thissen, D. M., Heimberg, R. G., Chambless, D. L., & Rapee,
R. M. (2004). More information from fewer questions: The factor structure and item
properties of the original and brief fear of negative evaluation scale. Psychological
Assessment, 16(2), 169.
Rotenberg, K. J., Eisenberg, N., Cumming, C., Smith, A., Singh, M., & Terlicher, E. (2003). The
contribution of adults’ nonverbal cues and children’s shyness to the development of
rapport between adults and preschool children. International Journal of Behavioral
Development, 27(1), 21–30.
Ruppel, E. K., Gross, C., Stoll, A., Peck, B. S., Allen, M., & Kim, S.-Y. (2017). Reflecting on
connecting: Meta-analysis of differences between computer-mediated and face-to-face
self-disclosure. Journal of Computer-Mediated Communication, 22(1), 18–34.
https://doi.org/10.1111/jcc4.12179
Sacau, A., Laarni, J., & Hartmann, T. (2008). Influence of individual factors on presence.
Computers in Human Behavior, 24(5), 2255–2273.
Saleem, S., Mahmood, Z., & Naz, M. (2013). Mental health problems in university students: A
prevalence study. FWU Journal of Social Sciences, 7(2).
Sanchez, J., Fisk, A. D., & Rogers, W. A. (2004). Reliability and age-related effects on trust and
reliance of a decision support aid. In Proceedings of the Human Factors and Ergonomics
Society Annual Meeting, (Vol. 48, No. 3, pp. 586-589). Sage CA: Los Angeles, CA: Sage
Publications. https://doi.org/10.1177/154193120404800366
Schuetzler, R. M., Giboney, J. S., Grimes, G. M., & Nunamaker, J. F. (2018). The influence of
conversational agent embodiment and conversational relevance on socially desirable
responding. Decision Support Systems, 114, 94–102.
https://doi.org/10.1016/j.dss.2018.08.011
Suler, J. (2004). The online disinhibition effect. Cyberpsychology & Behavior, 7(3), 321–326.
Tickle-Degnen, L., & Rosenthal, R. (1990). The nature of rapport and its nonverbal correlates.
Psychological Inquiry, 1(4), 285–293.
Trau, R. N., Härtel, C. E., & Härtel, G. F. (2013). Reaching and hearing the invisible:
Organizational research on invisible stigmatized groups via web surveys. British Journal
of Management, 24(4), 532–541.
Vogel, D. L., & Wester, S. R. (2003). To seek help or not to seek help: The risks of self-
disclosure. Journal of Counseling Psychology, 50(3), 351–361.
https://doi.org/10.1037/0022-0167.50.3.351
von der Pütten, A. M., Krämer, N. C., Gratch, J., & Kang, S.-H. (2010). “It doesn’t matter what
you are!” Explaining social effects of agents and avatars. Computers in Human Behavior,
26(6), 1641–1650. https://doi.org/10.1016/j.chb.2010.06.012
Vrocharidou, A., & Efthymiou, I. (2012). Computer mediated communication for social and
academic purposes: Profiles of use and University students’ gratifications. Computers &
Education, 58(1). http://doi.org/10.1016/j.compedu.2011.09.015
Walther, J. B. (2011). Theories of computer-mediated communication and interpersonal
relations. The Handbook of Interpersonal Communication, 4, 443–479.
Weeks, J. W., Heimberg, R. G., Fresco, D. M., Hart, T. A., Turk, C. L., Schneier, F. R., &
Liebowitz, M. R. (2005). Empirical validation and psychometric evaluation of the Brief
Fear of Negative Evaluation Scale in patients with social anxiety disorder. Psychological
Assessment, 17(2), 179.
Weisband, S., & Kiesler, S. (1996, April). Self-disclosure on computer forms: Meta-analysis and
implications. In Proceedings of the SIGCHI conference on human factors in computing
systems (pp. 3-10).
Wheeless, L. R., & Grotz, J. (1978). Revised self-disclosure scale. Communication research
measures: A sourcebook, 322-326.
Wilcox, R. (2011). Modern statistics for the social and behavioral sciences: A practical
introduction. CRC Press.
Wilcox, R. R., & Keselman, H. (2012). Modern regression methods that can substantially
increase power and provide a more accurate understanding of associations. European
Journal of Personality, 26(3), 165–174.
Ye, C., Fulton, J., & Tourangeau, R. (2011). More positive or more extreme? A meta-analysis of
mode differences in response choice. Public Opinion Quarterly, 75(2), 349–365.
Yokotani, K., Takagi, G., & Wakashima, K. (2018). Advantages of virtual agents over clinical
psychologists during comprehensive mental health interviews using a mixed methods
design. Computers in Human Behavior, 85, 135–145.
https://doi.org/10.1016/j.chb.2018.03.045
Zalake, M., & Lok, B. (2018, November). Non-Responsive Virtual Humans for Self-Report
Assessments. In Proceedings of the 18th International Conference on Intelligent Virtual
Agents (pp. 347-348). https://doi.org/10.1145/3267851.3267893
Zhang, J. (2015). Voluntary information disclosure on social media. Decision Support Systems,
73, 28–36.
Table 1
Relationship Between Variables with Entire Sample (N = 144)
Variable                        M        SD      α      1      2      3      4      5      6     7
1. Age                        20.10     2.57    __     __
2. Self-disclosure depth      27.18     8.17   .84   -.05     __
3. Self-disclosure amount     12.54     3.58   .62   -.03    .08     __
4. Social presence            20.57     3.46   .81   -.02    .11    .23*    __
5. Rapport                    40.08     7.89   .61   -.11    .22*   .07    .12     __
6. Fear of negative eval.     24.15     8.53   .94   -.12    .07    .06   -.01    .09     __
7. Word count               1491.3    1087.2    __   -.03    .01    .43**  .11    .01   -.00    __
Note: +p < .07. *p < .05. **p < .01.
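The reliabilities (α) and zero-order correlations summarized in Table 1 are standard computations. As a minimal illustrative sketch — using simulated, hypothetical item responses, not the study's data — Cronbach's alpha for a multi-item subscale and a zero-order correlation can be computed as:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of the subscale total
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(0)
# Hypothetical 7-item responses on a 1-7 scale for 144 respondents (simulated)
latent = rng.normal(4.0, 1.0, size=(144, 1))
items = np.clip(np.round(latent + rng.normal(0.0, 1.0, size=(144, 7))), 1, 7)

alpha = cronbach_alpha(items)
depth = items.sum(axis=1)                 # subscale total, as summarized in Table 1
other = rng.normal(0.0, 1.0, size=144)    # stand-in for another measured variable
r = np.corrcoef(depth, other)[0, 1]       # zero-order correlation
```

Because the data are simulated, the resulting α and r values are illustrative only; the study's values are those reported in the table.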
Table 2
Relationship Between Variables by Virtual Human Condition
Avatar Video (N = 49)
Variable                        M        SD      1      2      3      4      5      6     7
1. Age                        19.84     1.66    __
2. Self-disclosure depth      25.47     7.80   .04     __
3. Self-disclosure amount     13.76     2.23   .03    .06     __
4. Social presence            19.82     3.34  -.01    .14    .05     __
5. Rapport                    39.16     7.90  -.26    .17    .14    .20     __
6. Fear of negative eval.     25.46     7.97  -.05    .02    .01    .14    .15     __
7. Word count               1484.73  1032.02  -.02   -.19    .30*   .04    .06   -.15    __

Avatar Audio (N = 44)
Variable                        M        SD      1      2      3      4      5      6     7
1. Age                        20.09     1.85    __
2. Self-disclosure depth      28.98     8.43   .09     __
3. Self-disclosure amount     13.34     2.45   .04   -.00     __
4. Social presence            21.43     3.69  -.13    .26    .11     __
5. Rapport                    40.86     7.43  -.09    .25   -.18    .37*    __
6. Fear of negative eval.     23.48     8.36  -.07   -.08    .19    .18   -.02    __
7. Word count               1255.58   781.64  -.07    .07    .51**   .03   -.16   -.17    __

Agent (N = 51)
Variable                        M        SD      1      2      3      4      5      6     7
1. Age                        20.37     1.63    __
2. Self-disclosure depth      27.27     8.11  -.18     __
3. Self-disclosure amount     13.31     1.89  -.08    .14     __
4. Social presence            20.55     3.22   .01   -.06    .18     __
5. Rapport                    40.29     8.31  -.22    .33*   .22    .54**   __
6. Fear of negative eval.     23.51     9.19   .05    .22   -.02   -.03    .17     __
7. Word count               1698.77  1321.94  -.11    .27+   .53**  .13    .28+   .26    __
Note: +p < .07. *p < .05. **p < .01.
Table 3
Self-Disclosure Depth Main Differences Bootstrap Results and Pairwise Comparisons
Condition        N   Trimmed M
Avatar Video    49     25.00
Avatar Audio    44     29.36
Agent           51     26.64

Pairwise comparison             LSPB ψ̂   Yuen's t (SE)    df      p    Low CI  High CI
Avatar Video vs. Avatar Audio    -4.62   -2.41 (1.72)   56.29   .01    -7.62    -1.09
Avatar Video vs. Agent           -2.16    -.89 (1.80)   58.75   .35    -5.29     2.00
Avatar Audio vs. Agent            2.45    1.42 (1.85)   56.93   .17    -1.17     6.60
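The pairwise comparisons in Table 3 rest on Yuen's t-test, which compares 20% trimmed means using winsorized variances (cf. Wilcox, 2011; Mair & Wilcox, 2019). A minimal sketch with simulated scores — the group means and SDs below are illustrative stand-ins, not the study's raw data — using SciPy's trimmed t-test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated self-disclosure depth scores for two conditions (illustrative only)
avatar_video = rng.normal(25, 8, size=49)
avatar_audio = rng.normal(29, 8, size=44)

# 20% trimmed means: the robust location estimates reported in Table 3
tm_video = stats.trim_mean(avatar_video, 0.2)
tm_audio = stats.trim_mean(avatar_audio, 0.2)

# Yuen's test: Welch-style t on 20% trimmed means with winsorized variances
res = stats.ttest_ind(avatar_video, avatar_audio, equal_var=False, trim=0.2)
```

The dissertation's analyses additionally report a percentile-bootstrap interval for the contrast (as provided by the WRS2 package in R); the SciPy call above yields only the Yuen t statistic, degrees of freedom, and p value.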
Table 4
Regression Analyses Examining Relationship Between Rapport and Depth of Self-Disclosure Across Conditions
                                Self-Disclosure Depth             Model Fit
                               B     SE   t ratio    p       F       df      R²   adj. R²    p
Simple Effects Model                                       3.46*  (4,139)   .01     .09     .01*
  Age                        -.01    .08    -.15    .88
  Agent vs. Avatar Video     -.19    .19    -.97    .34
  Agent vs. Avatar Audio      .19    .20     .95    .34
  Rapport                     .24    .08    2.96    .00
Interaction Model                                          2.39*  (6,137)   .03     .09     .03*
  Age                        -.01    .08    -.12    .91
  Agent vs. Avatar Video     -.20    .20   -1.00    .32
  Agent vs. Avatar Audio      .19    .20     .94    .35
  Rapport                     .30    .13    2.28    .02*
  Avatar Video x Rapport     -.15    .19    -.76    .45
  Avatar Audio x Rapport     -.03    .21    -.16    .88
Note: Conditions were represented as three dummy variables with Agent serving as the reference
group. ∆F (2,137) = .31, ∆R² = .00, p = .74.
+p < .07. *p < .05. **p < .01.
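As the note under Table 4 describes, condition was dummy-coded with Agent as the reference group, and the interaction models add condition × rapport terms. A sketch of that model specification — on simulated stand-in data with hypothetical variable names, not the study's dataset — using statsmodels' formula interface:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 144
# Simulated stand-in data mirroring the design: 3 conditions, rapport, age, depth
df = pd.DataFrame({
    "condition": rng.choice(["Agent", "AvatarVideo", "AvatarAudio"], size=n),
    "rapport": rng.normal(40, 8, size=n),
    "age": rng.normal(20, 2, size=n),
})
df["depth"] = 27 + 0.2 * (df["rapport"] - 40) + rng.normal(0, 8, size=n)

# Treatment (dummy) coding with Agent as the reference group, plus the
# condition x rapport interactions of the Table 4 interaction model
model = smf.ols(
    "depth ~ age + C(condition, Treatment(reference='Agent')) * rapport",
    data=df,
).fit()
```

The fitted model carries one coefficient per dummy (each condition vs. Agent), a rapport slope for the reference group, and two interaction terms testing whether that slope differs by condition — the same structure, though not the same estimator, as the robust regressions reported here.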
Table 5
Regression Analyses Examining Relationship Between Rapport and Amount of Self-Disclosure Across Conditions
                                Self-Disclosure Amount            Model Fit
                               B     SE   t ratio    p       F       df      R²   adj. R²    p
Simple Effects Model                                        .43   (4,139)   .01    -.02     .79
  Age                        -.05    .09    -.52    .60
  Agent vs. Avatar Video     -.14    .20    -.67    .50
  Agent vs. Avatar Audio     -.12    .21    -.57    .57
  Rapport                     .07    .09     .85    .40
Interaction Model                                           .94   (6,137)   .04     .00     .47
  Age                        -.03    .09    -.36    .72
  Agent vs. Avatar Video     -.12    .20    -.61    .54
  Agent vs. Avatar Audio     -.09    .21    -.42    .67
  Rapport                     .21    .14    1.51    .14
  Avatar Video x Rapport     -.06    .20    -.33    .74
  Avatar Audio x Rapport     -.40    .21   -1.90    .06+
Note: Conditions were represented as three dummy variables with Agent serving as the reference
group. ∆F (2,136) = 1.96, ∆R² = .03, p = .14.
+p < .07.
Table 6
Relationship Between Rapport and Amount of Self-Disclosure by Condition
                                   95% Confidence Interval
Condition        Estimate    SE      Lower     Upper       t       p
Avatar Video        .14      .14      -.14       .42      .97     .33
Avatar Audio       -.19      .16      -.51       .12    -1.20     .23
Agent               .20      .13      -.06       .47     1.50     .14
Table 7
Regression Analyses Examining Relationship Between Rapport and Word Count Across Conditions
                                Word Count                        Model Fit
                               B     SE   t ratio    p       F       df      R²   adj. R²    p
Simple Effects Model                                       1.43   (4,131)   .04     .01     .23
  Age                        -.06    .09    -.72    .47
  Agent vs. Avatar Video     -.20    .20    -.97    .34
  Agent vs. Avatar Audio     -.42    .21   -1.97    .05
  Rapport                     .09    .09    1.07    .29
Interaction Model                                          1.69   (6,129)   .07     .03     .13
  Age                        -.04    .09    -.52    .60
  Agent vs. Avatar Video     -.20    .20    -.99    .32
  Agent vs. Avatar Audio     -.40    .21   -1.86    .07
  Rapport                     .31    .14    2.19    .03*
  Avatar Video x Rapport     -.27    .20   -1.34    .18
  Avatar Audio x Rapport     -.44    .21   -2.04    .04*
Note: Conditions were represented as three dummy variables with Agent serving as the reference
group. ∆F (2,129) = 2.17, ∆R² = .03, p = .12.
+p < .07. *p < .05. **p < .01.
Table 8
Relationship Between Rapport and Total Interaction Word Count by Condition
                                   95% Confidence Interval
Condition        Estimate    SE      Lower     Upper       t       p
Avatar Video        .05      .14      -.24       .33      .32     .75
Avatar Audio       -.13      .16      -.45       .20     -.77     .44
Agent               .31      .14       .03       .59     2.19     .03*
Note: +p < .07. *p < .05. **p < .01.
Table 9
Regression Analyses Examining Relationship Between Social Presence and Depth of Self-Disclosure Across Conditions
                                     Self-Disclosure Depth            Model Fit
                                   B     SE   t ratio    p       F       df      R²   adj. R²    p
Simple Effects Model                                           1.48   (4,139)   .04     .01     .21
  Age                            -.05    .08    -.56    .57
  Avatar Video vs. Avatar Audio   .41    .21    2.00    .05
  Avatar Video vs. Agent          .22    .20    1.09    .28
  Social Presence                 .09    .08    1.03    .30
Interaction Model                                              1.22   (6,137)   .05     .01     .30
  Age                            -.04    .08    -.49    .63
  Avatar Video vs. Avatar Audio   .43    .21    2.09    .04
  Avatar Video vs. Agent          .23    .20    1.13    .26
  Social Presence                -.02    .16    -.14    .89
  Avatar Audio x Social Presence  .04    .23     .15    .88
  Agent x Social Presence         .22    .20    1.07    .29
Note: Conditions were represented as three dummy variables with Avatar Video serving as the
reference group. ∆F (2,137) = .31, ∆R² = .00, p = .74.
Table 10
Regression Analyses Examining Relationship Between Social Presence and Amount of Self-Disclosure Across Conditions
                                     Self-Disclosure Amount           Model Fit
                                   B     SE   t ratio    p       F       df      R²   adj. R²    p
Simple Effects Model                                           2.35   (4,139)   .06     .04     .06+
  Age                            -.03    .08    -.31    .75
  Avatar Video vs. Avatar Audio  -.02    .20    -.10    .92
  Avatar Video vs. Agent          .11    .20     .57    .57
  Social Presence                 .24    .08    2.89    .00
Interaction Model                                              2.25   (6,137)   .09     .05     .04*
  Age                            -.01    .08    -.17    .86
  Avatar Video vs. Avatar Audio   .01    .20     .05    .96
  Avatar Video vs. Agent          .13    .20     .68    .50
  Social Presence                 .03    .16     .20    .84
  Avatar Audio x Social Presence  .14    .22     .63    .53
  Agent x Social Presence         .38    .20    1.93    .06+
Note: Conditions were represented as three dummy variables with Avatar Video serving as the
reference group. ∆F (2,137) = 1.99, ∆R² = .03, p = .14.
+p < .07. *p < .05. **p < .01.
59
Table 11
Relationship Between Social Presence and Amount of Self-disclosure by Condition
Conditions 95% Confidence Interval
Estimate SE Lower Upper t p
Avatar Video .03 .15 -.28 .33 .20 .84
Avatar Audio .17 .16 -.15 .49 1.06 .29
Agent .41 .12 .17 .65 3.37 .00**
Note:
+
p < .07. *p < .05. **p < .01.
60
Table 12
Regression Analyses Examining Relationship Between Social Presence and Word Count Across Conditions
                                     Word Count                       Model Fit
                                   B     SE   t ratio    p       F       df      R²   adj. R²    p
Simple Effects Model                                           2.23   (4,131)   .06     .04     .07
  Age                            -.06    .08    -.66    .51
  Avatar Video vs. Avatar Audio  -.26    .21   -1.21    .23
  Avatar Video vs. Agent          .18    .20     .88    .38
  Social Presence                 .18    .09    2.07    .04
Interaction Model                                              1.97   (6,129)   .08     .04     .07
  Age                            -.05    .08    -.54    .59
  Avatar Video vs. Avatar Audio  -.22    .21   -1.06    .29
  Avatar Video vs. Agent          .19    .20     .94    .35
  Social Presence                -.01    .16    -.03    .97
  Avatar Audio x Social Presence  .12    .24     .51    .61
  Agent x Social Presence         .32    .20    1.63    .11
Note: Conditions were represented as three dummy variables with Avatar Video serving as the
reference group. ∆F (2,129) = 1.41, ∆R² = .02, p = .25.
+p < .07. *p < .05. **p < .01.
Table 13
Regression Analyses Examining Relationship Between Fear of Negative Evaluation and Depth of Self-disclosure
Across Conditions
                                     Self-Disclosure Depth            Model Fit
                                   B     SE   t ratio    p       F       df      R²   adj. R²    p
Simple Effects Model                                           1.17   (4,130)   .03     .01     .33
  Age                            -.06    .08    -.69    .49
  Avatar Video vs. Avatar Audio   .45    .21    2.13    .03*
  Avatar Video vs. Agent          .24    .20    1.21    .23
  Fear of negative evaluation
    (FNE)                         .06    .08     .76    .45
Interaction Model                                              1.26   (6,128)   .08     .01     .28
  Age                            -.07    .08    -.78    .44
  Avatar Video vs. Avatar Audio   .43    .21    2.04    .04*
  Avatar Video vs. Agent          .25    .20    1.23    .22
  FNE                             .01    .16     .08    .94
  Avatar Audio x FNE             -.10    .22    -.45    .65
  Agent x FNE                     .20    .20     .97    .34
Note: Conditions were represented as three dummy variables with Avatar Video serving as the
reference group. ∆F (2,128) = 3.05, ∆R² = .04, p = .05+.
+p < .07. *p < .05. **p < .01.
Table 14
Regression Analyses Examining Relationship Between Fear of Negative Evaluation and Amount of Self-disclosure
Across Conditions
                                     Self-Disclosure Amount           Model Fit
                                   B     SE   t ratio    p       F       df      R²   adj. R²    p
Simple Effects Model                                            .31   (4,137)   .01    -.02     .87
  Age                            -.05    .09    -.64    .52
  Avatar Video vs. Avatar Audio   .04    .21     .17    .87
  Avatar Video vs. Agent          .15    .21     .72    .47
  Fear of negative evaluation
    (FNE)                         .05    .09     .63    .53
Interaction Model                                               .40   (6,135)   .02    -.03     .88
  Age                            -.05    .09    -.61    .55
  Avatar Video vs. Avatar Audio   .04    .21     .19    .85
  Avatar Video vs. Agent          .14    .21     .65    .51
  FNE                             .01    .16     .05    .96
  Avatar Audio x FNE              .19    .22     .84    .40
  Agent x FNE                    -.02    .21    -.11    .91
Note: Conditions were represented as three dummy variables with Avatar Video serving as the
reference group. ∆F (2,135) = .58, ∆R² = .01, p = .56.
Table 15
Regression Analyses Examining Relationship Between Fear of Negative Evaluation and Word Count Across Conditions
                                     Word Count                       Model Fit
                                   B     SE   t ratio    p       F       df      R²   adj. R²    p
Simple Effects Model                                           1.17   (4,130)   .03     .01     .33
  Age                            -.08    .08    -.90    .37
  Avatar Video vs. Avatar Audio  -.20    .22    -.95    .35
  Avatar Video vs. Agent          .20    .21     .98    .33
  Fear of negative evaluation
    (FNE)                         .05    .09     .53    .60
Interaction Model                                              1.82   (6,128)   .08     .04     .09
  Age                            -.09    .08   -1.10    .27
  Avatar Video vs. Avatar Audio  -.25    .21   -1.17    .24
  Avatar Video vs. Agent          .18    .20     .89    .38
  FNE                            -.16    .15   -1.04    .30
  Avatar Audio x FNE              .08    .22     .37    .71
  Agent x FNE                     .48    .21    2.29    .02*
Note: Conditions were represented as three dummy variables with Avatar Video serving as the
reference group. ∆F (2,128) = 3.05, ∆R² = .04, p = .05+.
+p < .07. *p < .05. **p < .01.
Table 16
Relationship Between Fear of Negative Evaluation and Word Count by Condition
                                   95% Confidence Interval
Condition        Estimate    SE      Lower     Upper       t       p
Avatar Video       -.16      .15      -.46       .14    -1.04     .30
Avatar Audio       -.08      .16      -.39       .24     -.49     .62
Agent               .32      .14       .04       .59     2.26     .03*
Note: +p < .07. *p < .05. **p < .01.
Figure 1
Condition Descriptions

Figure 2
SimSensei Platform

Figure 3
Boxplot showing differences in depth of self-disclosure between conditions.

Figure 4
Simple slopes of rapport in relation to self-disclosure amount by condition.

Figure 5
Simple slopes of rapport in relation to total word count by condition.

Figure 6
Simple slopes of social presence in relation to self-disclosure amount by condition.

Figure 7
Simple slopes of fear of negative evaluation in relation to total word count by condition.
Appendix A
SimSensei Interview Questions
1. How are you doing today?
2. Where are you from originally?
3. What are some things you really like about L.A.?
4. What are some things you don't really like about L.A.?
5. Do you travel a lot?
6. What do you enjoy about traveling?
7. I'd like to hear about one of your trips.
8. What's one of your most memorable experiences?
9. What was your favorite subject in school?
10. What do you do now?
11. If you could do something else, what would be your dream job?
12. Do you consider yourself more shy or outgoing?
13. Tell me about your relationship with your family.
14. What do you do to relax?
15. How are you at controlling your temper?
16. When was the last time you argued with someone and what was it about?
17. Tell me about a situation that you wish you had handled differently.
18. What's something you feel guilty about?
19. Tell me about the hardest decision you've ever had to make.
20. What's something you regret?
21. Tell me about an event or something that you wish you could erase from your memory.
22. How have you been feeling lately?
23. Have you noticed any changes in your behavior or thoughts lately?
24. How easy is it for you to get a good night's sleep?
25. What are you like when you don't sleep well?
26. Do you feel therapy is useful?
27. What advice would you have given yourself ten or twenty years ago?
28. When was the last time you felt really happy?
29. Who's someone that's been a positive influence in your life?
30. How would your best friend describe you?
31. Tell me about something you did recently that you really enjoyed.
32. What are you most proud of in your life?
Appendix B
Measures
Revised Self-Disclosure Scale (RSDS)
Indicate the degree to which the following statements reflect how you communicated with our
virtual human. We would appreciate your honest responses.
Scale: 1 (strongly disagree) – 7 (strongly agree)
Amount Subscale
1. My statements of my feelings were usually brief.
2. My conversation lasted the least time when I was discussing myself.
3. I often talked about myself.
Depth Subscale
1. I did not often talk about myself.
2. I usually talked about myself for fairly long periods at a time.
3. I often discussed my feelings about myself.
4. Once I got started, my self-disclosures lasted a long time.
5. I often disclosed intimate, personal things about myself without hesitation.
6. I felt that I sometimes did not control my self-disclosure of personal or intimate things I
told about myself.
7. Once I got started, I intimately and fully revealed myself in my self-disclosures.
Rapport Scale
Indicate to what extent you agree with each of these statements regarding your experience during
the interview with our virtual human.
Scale: 1 (strongly disagree) – 8 (strongly agree)
1. I felt I was able to engage the listener with my story. (1)
2. I think the listener and I established a rapport. (2)
3. I felt that the listener was interested in what I was saying. (3)
4. I felt I had no connection with the listener. (4)
5. I felt that the listener and I understood each other. (5)
6. The listener's body language encouraged me to continue talking. (6)
7. I felt that I was unable to engage the listener with my story. (7)
8. The listener was warm and caring. (8)
9. Seeing the listener helped me focus on telling the story. (9)
10. I felt I had a connection with the listener. (11)
Social Presence Survey
Indicate to what extent you agree with each of these statements regarding your experience during
the interview with our virtual human.
Scale: 1 (strongly disagree) – 7 (strongly agree)
1. I perceived that I was in the presence of another person in the chat with me.
2. I felt that the person was watching me and was aware of my presence.
3. The thought that the person is not a real person crossed my mind often.
4. The person appeared to be sentient (conscious and alive) to me.
5. I perceived the person as being only a computerized image, not a real person.
Brief Version of the Fear of Negative Evaluation Scale
Indicate to what extent you agree with each of these statements
Scale: 0 (not at all characteristic of me) - 5 (extremely characteristic of me)
1. I worry about what other people will think of me even when I know it doesn't make a
difference.
2. I am frequently afraid of other people noticing my shortcomings.
3. I am afraid that others will not approve of me.
4. I am afraid that other people will find fault with me.
5. When I am talking to someone, I worry about what they may be thinking about me.
6. I am usually worried about what kind of impression I make.
7. Sometimes I think I am too concerned with what other people think of me.
8. I often worry that I will say or do the wrong things.