Title Page:
“You Move, Therefore I Am”: The Combinatorial Impact of Kinesthetic Motion Cues on Social
Perception.
By
David C. Jeong
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the Requirements for the Degree
DOCTOR OF PHILOSOPHY (COMMUNICATION)
December 2017
“Cogito ergo sum”
- Descartes
Acknowledgements
I am forever grateful to two women who gave me the tools to complete this dissertation.
To my mother, thank you for your love, patience, and undying support. I love you, mom.
To my advisor and second mother, Lynn, my academic career has been made possible with your
unconditional love and support. Thank you for giving me the tools to conduct, write, and refine
this dissertation and giving me the confidence to undertake any challenge in life.
I also owe so much to teachers like Michael Cody, Tom Goodnight, Stacy Marsella, and Steve
Read who provided invaluable mentorship to me at various stages of my academic progress.
Table of Contents
I. Abstract 6
II. Chapter 1: Introduction 7
i. Overview 7
ii. Literature Review 15
iii. Preview of Studies 25
III. Chapter 2: 1st Person Social Perception of Text-Based Representation of Physical
Movement using Situational DIAMONDS Measurement 30
i. Introduction 30
ii. Method 31
iii. Results 33
iv. Discussion 57
IV. Chapter 3: 3rd Person Sequential Social Perception of Text-Based Representation of
Physical Movement using Situational DIAMONDS Measurement 61
i. Introduction 61
ii. Method 62
iii. Results 65
iv. Discussion 82
V. Chapter 4: Using SmartBody to Examine the Role of Movement in Social
Perception 83
i. Introduction 83
ii. Method 83
iii. Results 87
iv. Discussion 104
VI. Chapter 5: The Transformative Effects of Spatial Distance in an Immersive VR Learning
Environment 107
i. Introduction 107
ii. Method 112
iii. Results 118
iv. Discussion 132
VII. Chapter 6: Full Discussion and Conclusion 140
i. Full Discussion 140
ii. Conclusion 152
VIII. Reference List 154
IX. Appendix 171
i. Study 1 Appendix 171
ii. Study 2 Appendix 187
iii. Study 3 Appendix 233
Abstract
This dissertation introduces the concept of Kinesthetic Motion Cues and examines, across 4 studies, the combinatorial effects of these cues on a specific type of social perception – namely, generating inferences and judgments of social situations. An overarching goal of this project is the construction of a taxonomy of the normative social inferences people make given the same basic action components. By normative social inferences, I refer to inferences that fall within a normal distribution of inferences given the same basic action components. While I focus on specific movement and visual cues previously tied to the construction of social meaning, this line of inquiry is novel in both method and conceptual significance in its analysis of the effects of layered movement combinations. Studies 1-3, in particular, demonstrate the iterative process by which meaning is constructed and re-constructed as different movement and action components are added and removed within social situations.
A significant consideration from this dissertation is the potential for attaining normative data on
reliable patterns of how users make inferences from combinations of these cues across multiple
extended narratives. This dissertation is the first step towards a new methodology and approach
to understanding the elements of our communication that construct social meaning. I focus my
attention here on the role of the body and its movement on coloring this social meaning. If we
understood and could constrain the meaning of movements, and had a way to more automatically
code them, this could lead to a transformation in our ability to understand a more limited range
of “alternative interpretations” of changing social situations “online” and in real-time.
Chapter 1: Introduction
Overview
You notice a young man waiting as a train pulls into the station. As people disembark, he
suddenly orients to a young woman on the crowded platform and briskly walks directly towards
her. His eyes widen and he immediately smiles from ear to ear as he makes eye contact. But, the
woman doesn’t smile back, tilting her head from side to side and pulling away. At that moment, another young woman who looks much like the first taps the man on the shoulder; he turns around and, after a wide opening of his upper eyelids and a wide, vertically elongated opening of his mouth, smiles broadly anew and hugs this woman. She smiles and hugs the man back. The
young man turns around and puts his hands up and says something to the first woman. The first
woman puts her hand to her mouth, smiles, and nods. Most of us might infer that the man
initially mistook a stranger for his girlfriend and that the first young woman is relieved, and now
understands the man approached her in error.
In this and every social scenario, our physical actions generate social messages. Every movement, gesture, glance, and facial expression, in combination and in sequence, generates different meanings given the context. That is, a smile could be generally accepted as a “positive” action – but a smile that follows a slow approach to a significant other on a secluded beach would arguably be interpreted differently from a smile that follows a fast approach to a stranger in a crowded place. Every moment of our social lives is
layered with these unspoken actions and movements that build levels of complexity in social
meaning and intent. In fact, every moment in a single self-contained social interaction is layered
with levels upon levels of social meaning that transform with every visual action or movement.
At any given moment in an interaction, people may – intentionally or unintentionally – inject a multiplicity of visual cues into an interaction. In this dissertation, I attempt to demonstrate that with every injection of these visual cues – be they nonverbal or movement-based (the social variable) – into an interaction, individuals’ social perceptions and inferences (the social equation) drastically transform. I will begin by establishing a comprehensive conceptual understanding of the respective Independent and Dependent Variables, and their relationship, for this dissertation. I first discuss what I refer to as Kinesthetic Motion Cues (used here as the independent variable), before discussing the impact of Kinesthetic Motion Cues on social meaning and judgment (used here as the dependent variable). Finally, I briefly introduce the DIAMONDS measure, which I used as a measure of social meaning in this dissertation.
Kinesthetic Motion Cues
Kinesthetic Motion Cues are defined here as movement cues generated by an individual
or agent in relation to one or more other social agents. Kinesthetic Motion Cues as a concept
involve three levels of conceptualization, including the form, the nature, and the change. First,
the form refers to the specific type of human movement or nonverbal behavior that can be used
as a cue for social judgment. It is worth noting that as a concept, Kinesthetic Motion Cues draw
from two distinct fields of study: (1) Animacy and agency from Developmental Psychology, and
(2) nonverbal communication from Communication. I expand on the typology used for this
dissertation below. Second, I emphasize that Kinesthetic Motion Cues involve change in one or a combination of nonverbal behaviors that involve movement (e.g., facial expression, gaze pattern, gesture, proxemics (use of space), body movement (kinesics)). Finally, Kinesthetic Motion Cues involve not only change, but also the nature of that change (e.g., rapid, slow) in an agent’s relative location vis-à-vis other agents.
Psychology. Categories and typologies of movement to analyze were determined based on a wide range of prior literature on point-light displays and geometric shapes depicting movement. For example, point-light displays were used to distinguish between functional and social types of movements (Dittrich, 1993). Further, infant studies revealed the significance of causality, trajectory, and contingency in human movement perception (Rakison & Poulin-Dubois, 2001). Among studies examining movement represented by geometric shapes, perception of animacy was linked with changes of speed and direction (Tremoulet & Feldman, 2000), and perception of intentionality was examined in terms of degree of movement, distance, and eye gaze duration. Finally, event segmentation of geometric movement simulations was also predicted by movement features such as position, velocity, and acceleration (Zacks, 2004). In order to parsimoniously account for the movement categories represented in the above studies and typologies, I focus on three main categories of movement: Distance, Direction, and Speed. Each of these categories was designated two levels: Far/Near (Distance), Towards/Away (Direction), and Slow/Fast (Speed).
Communication. The concept of social distance has figured prominently in the field of
Nonverbal Communication, conceptualized as “Proxemics”. Proxemics, or interpersonal
distance between communicators, highly impacts the perception of meaning in all forms of
human social interaction. Hall (1966) identified 4 types of interpersonal distance zones with
varying distances and social meaning: the intimate zone (0–45 cm), the personal–casual zone
(45–120 cm), the socio-consultive zone (120–360 cm), and the public zone (360–750 cm).
Notably, Proxemics is but one component of a series of visual, non-verbal cues as conceptualized
by Communication scholars.
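To make these zones concrete, here is a minimal sketch (illustrative only; the zone boundaries come from the ranges quoted above, while the function name and the handling of distances beyond 750 cm are my own assumptions) that classifies an interpersonal distance in centimeters into one of Hall's four zones.

```python
def hall_zone(distance_cm: float) -> str:
    """Classify an interpersonal distance (cm) into Hall's (1966) zones.

    Boundaries follow the ranges cited above; treating distances beyond
    750 cm as 'outside public range' is an assumption.
    """
    if distance_cm < 0:
        raise ValueError("distance must be non-negative")
    if distance_cm < 45:
        return "intimate"          # 0-45 cm
    if distance_cm < 120:
        return "personal-casual"   # 45-120 cm
    if distance_cm < 360:
        return "socio-consultive"  # 120-360 cm
    if distance_cm <= 750:
        return "public"            # 360-750 cm
    return "outside public range"  # beyond Hall's catalogued zones


print(hall_zone(30))   # intimate
print(hall_zone(200))  # socio-consultive
```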
Among these additional visual, non-verbal components of communication, the significance of hand/arm gestures cannot be overstated (Ekman & Friesen, 1972). While the function and utility of certain gestures are largely culturally specific and socially learned (emblems, which are symbolic, and illustrators, which augment speech), other gestures serve more basic, implicit self-needs and emotions (adaptors). In order to account for the significance of hand/arm gesture while being mindful of cultural and social differences, I added the two levels of Open and Closed Gesture to the previously discussed movement categories. These hypothetical orientations of an agent’s gesture signified some general positive or negative orientation or disposition, but were intended to be vague enough to neutralize cultural differences. As such, of interest here as independent variables -- in addition to the movement categories identified above (Distance (Far/Near), Direction (Towards/Away), Speed (Slow/Fast)) -- was also Gesture (Open/Closed).
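As an illustrative aside rather than part of the original study materials, the four binary cue dimensions just described define a space of 2^4 = 16 movement combinations; the short sketch below simply enumerates that space.

```python
from itertools import product

# The four Kinesthetic Motion Cue dimensions, each with two levels,
# as defined in the text above.
CUES = {
    "Distance": ["Far", "Near"],
    "Direction": ["Towards", "Away"],
    "Speed": ["Slow", "Fast"],
    "Gesture": ["Open", "Closed"],
}

combinations = list(product(*CUES.values()))
print(len(combinations))  # 16
for combo in combinations[:3]:
    print(dict(zip(CUES.keys(), combo)))
```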
Movement and its Impact on Social Meaning
Thus far, I have described the unique typology (form) in the concept of Kinesthetic
Motion Cues as it draws from literature from two traditionally separate fields in Developmental
Psychology and Communication. It is also worth noting that the methodology used in this
dissertation to examine physical movements and nonverbal cues is novel in two ways. First,
prior related work has utilized physical cues and nonverbal communication primarily as
measured or dependent variables. In correlational studies, for example, researchers might assess
the social inferences participants make about others’ cues (e.g., Vinciarelli et al., 2012), or
correlate personality scores on an attribute (e.g., dominance) with raters’ coding of video
recorded interaction nonverbal behaviors of these same individuals (e.g., Gifford, 1994). Where
researchers have manipulated nonverbal behaviors displayed, they typically do so one at a time:
For example, Tiedens and Fragale (2003) manipulated dominance versus submission of a
confederate (with postural expansion versus contraction) and examined participants’
complementarity or mimicry responses (as well as attributions). However, in some nonverbal
domains (e.g., voice), researchers can and have manipulated sound features (e.g., speech rate,
pitch contour, voice quality, etc.) that affect judgments (e.g., of affect) (Juslin & Scherer, 2008).
Nonetheless, regarding motion cues, there has not been a systematic attempt to manipulate
combinations of these to examine their resultant effects.
The current approach differs from most prior work in that I utilize my concept of
Kinesthetic Motion Cues as an independent variable. The significance of this reversal of
convention is that we are in a better position to make causal inferences about the meaning of
these individual and collective cues, and how they impact social inference. This leads me to my
second point regarding relative methodological novelty. Prior literature in nonverbal
communication (Ekman & Friesen, 1972; Hall, 1966) and visual cues has predominantly focused
on the effects of one or a small number of single action or movement cues at a time. This
dissertation, however, introduces a novel methodology to studying the combinatorial effects of
various types of human movement in a narrative. I examine each of the combinations of
Kinesthetic Motion Cue components in a step-wise fashion through which I attempt to
demonstrate the iterative changes in social meaning as we layer one visual cue over another.
While here I limit my focus to 4 specific types of movement cues, the sheer magnitude of the
combinatorial effects of just these 4 cues is a testament to the incredible complexity under which
even the simplest human interactions operate. Ultimately, the aim of this dissertation is to
establish a path that suggests that just as a small set of letters in an alphabet can produce nearly
infinite numbers of words and meanings in speech, a small set of movements – including but not exclusive to those explored here – might create a similar language of meaning. This approach
obviously reaches beyond the relatively simple interactions described in this dissertation and
ultimately begins to approach a model and an understanding of what defines and characterizes
communication in its more complex forms.
There is a long history of interest in the role of motion cues on social inferences,
including inferences about agents’ goals, intentions, and emotions. This interest in social
perception has its 20th century roots in work using animated movements of abstract shapes (e.g.,
triangles, squares) conducted by Heider and Simmel (1944). That work (and the Heider-Simmel
stimuli) demonstrated that typical humans made inferences about the goals, intentions, and
emotions of even inanimate objects like triangles and squares when those objects moved relative
to one another in a context (e.g., opening and closing fence). That work attracted the attention of
researchers -- primarily in developmental and clinical psychology -- focused on how human
minds are wired developmentally to use physical movements alone – even without facial
expressions -- to infer intentions, goals, emotions of “others” (references). Understanding
underlying developmental mechanisms and processes also involved examining individual
humans who departed from typical (e.g., autistic individuals) (Baron-Cohen, 1989; Baron-
Cohen, Rogers, 1999). It also involved understanding if and when humans respond to inanimate
entities (e.g., robots, and intelligent agents) as if they have intentions, goals, and emotions
(references). This prior work, reviewed in greater detail below, makes it clear that: (1) humans
can and do respond to movement cues alone in making inferences about other humans’ goals,
intentions, and emotions, (2) humans can, under specific conditions (e.g., virtual interactions),
respond even to inanimate objects and non-human agents in virtual environments as if those
entities had intentions, goals, and emotions. But, while these literatures lay important
groundwork for the current work, they are primarily focused on developmental questions (e.g.,
about the development of “theory of mind”). Left relatively unaddressed is the following
question: Might humans use a codifiable communication (or language-like) system for inferring
social meaning over time using combinations of movement cues? If we systematically
manipulated combinations of movement cues, how would that affect social perceptions? These
are the primary questions that are the focus of this dissertation.
Inferred Social Meaning (Dependent Variable)
To provide some focus for the current dissertation, I concentrate mostly on one category of inferred social meaning: the combinations of social cues that may underlie inferences about
different types of social situations. Psychologists in personality and social psychology (Pervin,
1978; Miller & Read, 1991; Read & Miller 1995) as well as communication scholars (Miller,
Cody, & McLaughlin, 1994) have a long history of trying to understand and discriminate among
social situations. Although work on inferring social situations has not focused on combinations of movement cues as proposed here, Rauthmann and colleagues (2014) recently introduced the “DIAMONDS” taxonomy of situational dimensions, which provides the dependent variable for Studies 1-3 reported here. The DIAMONDS consist of situational categories where
the situation may involve duty (e.g. work, tasks), intellect (e.g., aesthetic, profound), adversity
(e.g., threat, criticism), mating (e.g. romance, sexuality), positivity (e.g., pleasant, nice),
negativity (e.g., unpleasant, bad), deception (e.g., deceit, lies), and/or sociality (e.g.,
communication, interaction). While the original DIAMONDS scale was a 32-item Q-Sort
method measurement, a condensed 8-item Likert-scale version was introduced (Rauthmann &
Sherman, 2016a). That said, this condensed version still required multiple measurements
conducted throughout a day. To mitigate the need for multiple daily measurements, an even
further simplified version was introduced to describe present situations (Rauthmann & Sherman,
2016b). This simplified version was tested in an analysis of 20 million tweets to identify daily
and weekly situational trends, as well as differences in what types of situations men and women
were sharing online (Serfass & Sherman, 2015).
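For quick reference, the eight DIAMONDS dimensions and the example descriptors listed above can be collected into a simple lookup structure; the sketch below is purely illustrative (the Python representation is mine, not part of any DIAMONDS instrument).

```python
# The eight DIAMONDS situational dimensions with the example
# descriptors given in the text above (Rauthmann et al., 2014).
DIAMONDS = {
    "Duty": ["work", "tasks"],
    "Intellect": ["aesthetic", "profound"],
    "Adversity": ["threat", "criticism"],
    "Mating": ["romance", "sexuality"],
    "pOsitivity": ["pleasant", "nice"],
    "Negativity": ["unpleasant", "bad"],
    "Deception": ["deceit", "lies"],
    "Sociality": ["communication", "interaction"],
}

# The initial letters spell the acronym (with O taken from pOsitivity).
print("".join("O" if k == "pOsitivity" else k[0] for k in DIAMONDS))  # DIAMONDS
```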
Below, we discuss work on inferring social situations, and on the DIAMONDS, in greater detail. Although the DIAMONDS provides a useful starting place for thinking about how movement combinations impact social inferences, I also consider how we can begin to better understand
how movements might impact a broader range of inferences (e.g., affect, attribution) that have
often been a focus in psychology, communication, and by researchers attempting to create
realistic animated agents (Cross et al., 2015). The DIAMONDS, as a measure, is not without its
flaws. In addition to its inability to account for the full range of all social situations we
experience, there is certainly a weakness in its ability to provide fine-grained insight into the
situations it does in fact measure. Part of this lack of specificity lies in the general bifurcation of
its existing measured situations into a positive/negative valence. Consequently, the situations
may be unwittingly assigned to more generalized positive/negative categories as opposed to
situation-specific categories. That being said, the DIAMONDS provides the most current and
best possible measurement today that attempts to explain social situations. Indeed, a larger overarching theme of this dissertation hints at the sheer magnitude and richness that color each of our social experiences and encounters. In the sections that follow, I provide a review of the conceptual developments that underpin this dissertation's independent and dependent variables. First, I begin by
making the case for an improvement on existing conceptions of social situations. Then, I trace
the significance of movement cues in human development from infancy to adulthood as they lay
the foundation for our perception of animacy, agency, causality, and intentionality.
Literature Review
Meaning Construction of Social Situations
What are situations? Of what are they composed? How do we make meaning of them?
These have long been complex questions in the social sciences: The dynamics of situations are
certainly difficult to capture (Miller, Cody, & McLaughlin, 1994). Pervin (1978) proposed that
three components of situations were critical: who is involved, where is the action occurring, and
what activities are involved. Read and Miller (1998) argued that social concepts (including
situations) are composed of one or more combinations of scenarios that are detailed
representations of who (or what) did what to whom (or what) under what circumstance, why,
where, how, with what effect (e.g., emotional outcome). Even non-human animals, such as dogs,
seem wired to react to and make sense out of action sequences, and to have episodic memories
about them (Fugazza, Pogany, & Miklosi, 2016). Furthermore, for humans, Read and Miller
(1995) argued that the structure of stories is the structure of human action. Historically, they
noted that many psychologists (e.g., Barker, 1963; Barker & Wright, 1955; Miller, Galanter, &
Pribram, 1960) argued that chains of behavioral episodes contain actor’s goals, why they had
them, how (e.g., with plans) they would achieve them, and plan and goal outcomes (Read &
Miller, 1995). Similarly, work on text comprehension points to comparable story and narrative structures (see, for example, Mandler, 197; Mandler & Johnson, 1977; Rumelhart, 1977; Schank & Abelson, 1977).
In important ways then, stories (and social representations) are “cleaned-up event structures”
(Read & Miller, 1995). And, our brain may leverage these event structure mechanisms in
developing a range of “cleaned up” or economical social concept representations, including
social situation representations (Read & Miller, 1998) and language. Indeed, Wierzbicka (1992), a linguist, argued that across human cultures the most basic “scripts” involve actions of one to another: that is, a “mini-language” whose connotative and denotative meanings humans universally share, namely “I want this, you do this, this happened, this person did something bad, and something bad happened because of this”.
Situational components taking into account these event structures in different combinations could create some interesting alternative “situations” (Miller, Cody, & McLaughlin, 1994). Indeed, narratives, composed of such event structures, could differ in terms
of the agent’s activity, his or her potency and the evaluation of the sequence (e.g., as a negative
or positive outcome from a given audience perspective). Indeed, these three dimensions were a
key basis for the Osgood semantic differential (Osgood & Tannenbaum, 1955), a classic model
underlying dimensions of social perception (i.e., evaluation, potency, and activity).
The long-term goal of this area of study is to move towards a taxonomy of social
interactions and situations by pinpointing the most basic action components needed for a subject
to make normative and plausible inferences about a given ambiguous social situation. The latter
goal will follow the Social Dynamics Model (Miller & Read, 1991; Read & Miller 1995), which
argues that a coherent goal-oriented narrative of others’ actions is requisite to comprehension of
others’ minds -- mindreading. The model is based on the assumptions that during narrative construction, goal-based knowledge structures are necessary to understand others (Schank &
Abelson, 1975) and that constraint satisfaction processes in neural networks organize knowledge
structures into a coherent narrative that allows for the optimal explanation of the other’s actions
(Miller & Read, 1991; Read & Miller, 1993). If we understood and could constrain the meaning
of movements, and had a way to more automatically code them, this could lead to a
transformation in our ability to understand a more limited range of “alternative interpretations”
of changing social situations “online” and in real-time. But, a first step remains: More
systematically understanding the meaning of action and movement.
Movement
There is a strong connection between one’s own experience with one’s own movement and
the capacity to construe the social meaning of movement by others. Beginning early in infancy,
children begin to orient to change in other’s and objects’ movements (Rakison & Poulin-Dubois,
2001; Gelman & Opfer, 2002), but the perception of different types of motion (e.g., lateral,
vertical, towards/away; slow and fast velocities) that may underlie social perception, depends on
experience, maturation, and a child’s own emerging capacities that are not present at birth (e.g.,
depth perception) as well as, and this is critical, his or her own experience moving (Kellman et
al., 1986). Indeed, mirror neurons (Iacoboni, 2008) appear to be one mechanism by which
individuals’ own movements help enable them to understand similar movements by others, and
the absence of or impairment in this evolved mechanism may play a causal role in departures
from species-typical patterns of social perception (e.g., in autism) (Baron-Cohen, 1989; Baron-Cohen, Rogers, 1999). Typically, even in young childhood and in species-typical adults, in natural human interaction as well as in virtual spaces, combinations and sequences of kinesthetic movements (actions) are an important basis for social perceptions and inferences.
Surprisingly, as suggested above, the species-typical perceptual system appears ready early
in development to make social inferences, for example about social situations and related
concepts (e.g., emotions, attitudes), on the basis of movements alone. Humans can make such
inferences about movements, inferring goals and intentions to entities, even if those movements
are made by inanimate objects. Heider’s and Simmel’s (1944) landmark study demonstrated that
people can relate to two-dimensional geometric shapes in social and anthropomorphic
terms. According to Heider (1958), people’s mentalistic attributions to non-human geometric shapes reflected a commonsense, or folk, psychology of human minds, in which perceptions of action link to beliefs, desires, and other dispositional properties that underlie social perception. This
capacity to make inferences and predictions about mental states, intentions, and goals was later
referred to as “Theory of Mind” (Premack & Woodruff, 1978). The essential takeaways from the Heider-Simmel simulation/experiment were that: (1) humans can make inferences from the
movements of entities, and (2) faces and facial expressions are not required to generate
inferences about the triangles’ actions, goals, intentions, and emotions. Rather, we can generate
such inferences purely by virtue of the triangles’ (and presumably humans’) movements: rate of
speed, particular patterns of movement, whether they are moving towards or away from other
shapes, and so on, and (3) humans can make inferences about objects “as if” they were animate,
even if they are not, if they meet certain conditions.
Regarding the first two points above, to date, inference making in social perception has
been studied primarily with facial expressions, and various measurements of empathy and
mentalizing (Dijksterhuis & Bargh, 2001; Niedenthal et al., 2005). While faces are indeed
important in social perception and the generation of inferences (Allison et al., 2000; Schultz,
2005; Zebrowitz & Montepare, 2008), I argue that body movements and actions also play a
tremendous role in the generation of inferences in social perception, and aim to more holistically
understand the body and its movements as a source of emotion/intention/inference construction.
The third point suggests that in addition to systematically manipulating descriptions of
combinations of human movements, researchers could also manipulate the actions of non-human
stimuli (e.g., shapes; animated human agents).
Presented here is a comprehensive framework for examining social perception as it
happens in the real world. Prior to a full discussion of movement as it pertains to social judgments and perceptions, I will first elaborate on the significance of the Heider-Simmel
experiment. Specifically, I will further draw upon the developmental and neuroscience
literatures to make the connection between perceptions of movement and perceptions of agency
and intentionality—all closely tied components of Theory of Mind.
Theory of Mind
To reiterate, this paper attempts to connect physical actions and movements to social
inferences and judgments. This cognitive processing of physical cues can be generally referred
to as social perception, first discussed by Heider (1944). Social perception is a multi-level
process that may involve interpretations of intentionality (Heider & Simmel, 1944), causality
(Michotte, 1962), agency (Leslie, 1984; Baron-Cohen, 1995), and animacy (Scholl & Tremoulet,
2000). Each of these concepts is critical in different ways to assumptions about human
behavior and attribution of mental states – at various points referred to as commonsense
psychology, naive theory of action, theory of mind, and folk psychology (Bruner, 1990;
Churchland, 1981; Heider 1958; Sellars, 1956; Leslie, 1987; Perner, 1991). Premack and
Woodruff (1978) first coined the term more widely used today, "theory of mind", in reference to
the capacity to impute causal mental states in behavior prediction and explanation. While there
are many different approaches and interpretations to the concept of theory of mind, two
conceptualizations in particular (Leslie, 1984; Baron-Cohen, 1995) underscore the significance
of action-related components of social perception (intentionality, causality, agency).
Agency and Causality. Leslie’s (1984) Theory of Agency argues that a child’s mind
contains 3 subsystems for detecting relevant information: (1) Theory of Body (Mechanical
Agency) that is tuned to mechanical properties of inanimate objects (the advanced media technologies of today, i.e., social media, are interesting cases representing inanimate objects that retain properties of social actors)
, (2) Theory of Mind-1
(Actional Agency) tuned to goal-directed actions such as eye gaze, and (3) Theory of Mind-2
(Attitudinal Agency) tuned to cognitive properties in the world, such as how mental states can
drive goal-directed behavior. The Theory of Body module deals with the understanding of
physical causality in a mechanical sense, accounting for causal explanations for “billiard ball”
launching displays (Michotte, 1962).
While the mechanical level of Theory of Body focuses on the contiguous relationships of
mechanical force representations in time and space, Actional Agency suggests that agents may
also take on actional properties through the pursuit of goals and perception-based reaction to the
environment. Most importantly, Agents act and react to circumstances at a distance in space and
time. In other words, objects simply exist, whereas agents act, react, and interact. Leslie refers to
this Actional Agency as the Theory of Mind Mechanism, which maintains two sub-types: system
1 and 2. While system 1 makes explicit the information the Agent physically acts to bring about,
system 2 allows truth properties to be based on mental states. That is, system 2 is required to
understand that others hold beliefs different from our own knowledge or from the observable
world, for understanding different perspectives, and for understanding pretense.
Agency. Baron-Cohen’s (1995) Theory of Mind model assumes two forms of perceptual
input: (1) Stimuli with self-propelled motion and (2) Stimuli with eye-like shapes. Baron-
Cohen’s Theory of Mind system can be deconstructed into four modules: (1) Intentionality detector, which describes the basic movements of approach and avoidance, (2) Eye-direction detector, which interprets eye gaze direction as a perceptual state, (3) Shared attention mechanism, which specifies that the self and the external agent are attending to the same
perceptual object/event, and (4) Theory of mind mechanism, which allows for the suspension of
normal truth, allowing for knowledge states that are not necessarily true nor match the direct
knowledge/perception of the self. The Intentionality Detector (ID) and the Eye-Direction
Detector (EDD) are dyadic representations that are combined to produce the triadic
representation of the Shared Attention Mechanism (SAM), allowing the interpretation of eye
direction as an intention-driven goal state.
Intention. A discussion of Theory of Mind would be remiss without noting the
contributions of Heider (1944; 1958). The landmark experiment by Heider and Simmel (1944)
continues to spark fascination over our understanding of theory of mind, particularly in terms of understanding what criteria are necessary to attribute intentionality to an object. Indeed,
Heider (1958) was first to identify distinctions between explanations of intentional and
unintentional behavior, often based on the agent’s reasons for engaging in the
behavior. Subsequent work in social psychology has focused on delineating the differences
between person causes of behavior and situation/environment causes of behavior (Malle, 2004). According to Malle (2004), while explicit behaviors such as blushing are clearly unintentional, the explanations people attribute to these unintentional behaviors mirror the explanations people give for inanimate objects’ behavior.
According to Baron-Cohen (1995), the Heider-Simmel experiment demonstrates the
existence of an “Intentionality Detector” that interprets “almost anything with self-propelled motion… as a query agent with goals and desires” (p. 34). Notably, this perception of agents
within social situations as intentional (intent-driven) is immediate (Scholl & Leslie, 1999). What is more, this capacity to attribute intentionality happens as early as infancy (Gergely et al., 1995;
Csibra et al., 1999; Dasser et al., 1989). In all, these experiments suggest that the theory of mind
system contains neural circuitry mechanisms adapted to enable goal/desire detection (Nichols &
Stich, 2003).
Intentionality and Movement
Our bodies, our gestures, our movements in general add richness that makes human
interaction so complex and nuanced. The role of the body and movement in generating
inferences about social information has been discussed extensively as it pertains to embodiment
of social perceptual information (Niedenthal et al., 2005), embodied simulations of language
(Barsalou et al.,2009), and the role of mirror neurons as embodied simulations of others’ actions
(Iacoboni et al, 2008). In this dissertation, I am specifically interested in movement as a source
of information about goals and intentions. Judgments about intentions based on motion allow for fast inference-making about the future behavior of others. Indeed, judgments of
intentionality have been shown to be largely automatic (Scholl & Tremoulet, 2000).
Specific applications of human body movement in social perception include studies of
point-light displays of the human body (Johansson, 1973; Walk & Homan, 1984; Dittrich, 1993; Dittrich et al., 1996; Blythe, Todd, & Miller, 1999) and extensions of the Heider-Simmel simulation of moving geometric shapes (Tremoulet & Feldman, 2000; Abell, Happe, & Frith, 2000; Barrett et al., 2005; Gao et al., 2009; Roemmele et al., 2016).
Point-light displays. Human “biological motion”, a term first coined by Johansson (1973), was shown to be conveyed by 10-12 individual points of light that depict human movement (Kozlowski & Cutting, 1977). These movement patterns, then, could be used to
identify certain dances and emotions (Walk & Homan, 1984). In fact, points of light could be
used to determine social perception of positive and negative emotion (Dittrich et al., 1996).
Dittrich (1993) identified three categorizations of such point-light based motions and found that
actions were more accurately recognized as being locomotory (walking) and social (dancing,
greeting) than instrumental (hammering, ball bouncing).
Perception of human movement is closely tied to the ability to distinguish between
animate agents and inanimate objects, a capacity that begins in infancy (Rakison & Poulin-
Dubois, 2001; Gelman & Opfer, 2002). According to Rakison and Poulin-Dubois (2001) infants
can begin to perceive motion (self-propelled vs. caused motion), trajectory (smooth vs. erratic),
causality (action at a distance vs. action from contact), contingency (vs. noncontingent), and role
(agent vs. recipient). In the case of depictions of human biological motion, walking is a goal-
directed movement (Carey, 1999; Opfer & Gelman, 1998) that is clearly animate (Blythe, Todd,
& Miller, 1999). Dynamic information such as walking contains more abstract information about agency and intentionality (Gelman & Opfer, 2002). Indeed, even 3-year-olds can determine intention based on a simple motion pattern (Montgomery & Montgomery, 1999).
Geometric shapes. In more recent years, attempts have been made to simplify the
process of understanding agency and intentionality. Specifically, scholars have moved back
towards the Heider-Simmel simulation as a basis for examining how the mind perceives agency
and intent from movement. Tremoulet and Feldman (2000) find that greater changes in speed
and direction among moving dots and rectangles are associated with greater perceptions of
animacy, possibly suggesting that perception of intentionality is a bottom-up process. Further,
representations of an agent and its goal are closely tied to perceptions of animacy and
intentionality (Dittrich & Lea, 1994).
Dennett (1978) first argued that testing false belief, or understanding the capacity to hold
false belief, represents the optimal test of a child’s understanding of belief—in essence, a test for
“theory of mind”. Subsequently, “false-belief tests” were designed for normal children (Baron-
Cohen et al., 1985; Perner et al., 1989; Perner & Wimmer, 1985; Sullivan et al., 1994) and for
children with autism and children with Down’s Syndrome (Baron-Cohen, Leslie, & Frith,
1985). False belief tasks, however, tax executive function capacities such as inhibitory control
(Leslie & Thaiss, 1992); such executive dysfunction may cause autistic children to fail false
belief tasks. Finally, false belief tasks fail to capture the nuances of high-functioning individuals
with autism who fail to mentalize and generate accurate inferences about others despite passing
false belief tasks (Frith, 1994).
Abell, Happe, & Frith (2000) attempted to mitigate the limitations of false belief tasks by
designing movement-based triangle stimuli that evoked mental state attributions. They based these stimuli on previous studies (Oatley and Yuill, 1985; Rime et al., 1985) that demonstrated
that particular patterns of movement are more influential in social perception than appearance of
the moving characters (geometric shapes vs. human silhouettes). While individuals with autism
had been shown to be able to pass standard false belief tasks (Happe, 1995), Abell, Happe, &
Frith (2000) suggest that such performance may not necessarily signify an accurate ability to
attribute mental states of movement-based information in real time.
Barrett et al. (2005) designed a 2-person video game where each player controlled a V-shaped arrowhead that pointed in the direction of its movement. They instructed participants to depict the movement patterns of 6 basic categories of intentional movement, as conceptualized by Blythe, Todd, & Miller (1999): (1) pursuit and evasion, (2) fighting, (3) courtship, (4) leading and following, (5) guarding and invading, and (6) playing. Barrett et al. (2005) found that at any
point, a moving agent can be categorized based on position, velocity, heading (direction), and
change in heading. Specifically, low velocities were associated with courtship, following, and
being followed, whereas high changes in heading were associated with evasion and fighting.
Preview of Studies
In the current work, I attempt to add to our understanding of the role of human movement and motion in social perception and inference via several different approaches. The goal of most
of these studies is to examine how combinations of different animated agents’ movements (e.g.,
away from or towards the viewer) affect raters’ judgements of the motives and characteristics of
the agent. In different studies the perspective shifts (e.g., third person or first person) to assess
potential generalizability of findings.
Situational components in different combinations could create some interesting
alternative “situations.” Indeed, narratives could differ in terms of the agent’s activity, his or her
potency and the evaluation of the sequence (e.g., as a negative or positive outcome from a given
audience perspective). Indeed, these three dimensions were a key basis for the Osgood semantic
differential (Osgood & Tannenbaum, 1974), a classic model underlying dimensions of social
perception (i.e., evaluation, potency, and activity). In these examples, what in these cues is most likely to produce the perception of threat? Probably a strong agent quickly moving towards another with negative affect (the combination may also involve an interaction, for example, such that speed may polarize the affect).
I manipulate stimuli according to the below combinations of visual movement-related
cues (making up specific situations), and then ask participants to make various situational
ratings. Throughout the studies 1-3, participants were asked to make situational judgments using
what is referred to as the DIAMONDS. The DIAMONDS is perhaps the leading taxonomy of
situations (Rauthmann et al., 2014) and consists of a series of situational categories where the
situation may involve duty (e.g. work, tasks), intellect (e.g., aesthetic, profound), adversity (e.g.,
threat, criticism), mating (e.g. romance, sexuality), positivity (e.g., pleasant, nice), negativity
(e.g., unpleasant, bad), deception (e.g., deceit, lies), and/or sociality (e.g., communication,
interaction). Participants across studies 1-3 are asked to indicate the degree to which each
respective combination of visual cues/situation matches a variation of DIAMONDS. Study 4
differs from the others in that it measures differences in participants’ affective, causal, and
physical movement responses to differences in interpersonal distance to a virtual character in a
virtual environment. I will now provide a quick preview of the 4 studies included in this
dissertation.
Study 1
The aim of this study is to understand, at the 1st person level, fundamentals of how
various combinations of social-perception based movements of another person underlie
inferences about the social situation indicated by those movements (using the DIAMOND
taxonomy). To address this, we examined how combinations of four behavioral indicators
(those used most prominently in the literature; those that were highly visible, those that appeared
to represent fundamental movements) with two levels of each indicator (e.g., fast versus slow;
far versus near) affected social perceptions. The four dimensions examined were speed (slow,
fast), movement (towards, away), distance (near, far), gesture (open, closed). Therefore, we
could examine the main effects for each behavioral indicator on judgments regarding 8 diamond
situational inferences (duty, intellect, adversity, romance, positivity, negativity, deception, and
sociality). In addition, we could examine the two-way, three-way, and four-way interactions that
might differentially affect social judgments. This online study (n = 817 recruited, analyzing those participants who had complete data) was an all within-subjects design with order of dependent variables (i.e.,
diamond dimension) as well as the order of combinations of movements randomly presented via
software (i.e., Qualtrics) within and across participants.
Study 2
The aim of study 2 is to examine how the perceived interactions of two individuals affect
social inferences regarding DIAMOND situational inferences. In other words, participants were
provided a combination of movements made by one person and were then presented with a
combination of potential reactionary movements made by a secondary person before being
instructed to make judgments about the particular 2-person interaction. This sequential social
perception study was necessary because of the importance of contingency of behaviors between
two agents in dynamic cues (Rakison & Poulin-Dubois, 2001). That is, actions are contingently
linked to another action or response, which is essentially what defines a “social interaction”.
Watson (1972) found that 20-month-old infants responded to both “contingent” caregivers and mobiles with an equal amount of smiling, perhaps evidence that social responsiveness evolved
based on contingent stimuli rather than necessarily just the physical human face (Watson &
Ramey, 1972).
The logic here is that although the social meaning assigned based on one person’s movements is indeed interesting to note, social inferences rarely happen in a one-person vacuum, and the multiplicity of potential reactions made by a secondary person certainly transforms the entire social meaning that would have been present with just one person. Although
we examined all the behavioral combinations and their impact on DIAMOND inferences, given
the combinatorics involved, we had to reduce the task for a given participant who would
otherwise have had to make over 1000 individual judgments.
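One plausible reading of the combinatorics mentioned above (an assumption on my part, not a figure taken from the study materials) is that with four binary cues per person there are 16 movement combinations for the first actor and 16 possible reactions for the second, so rating every pairing on all 8 DIAMONDS dimensions would require on the order of two thousand judgments per participant:

```python
# Illustrative arithmetic only; the actual task reduction used in Study 2
# is described in Chapter 3, not here.
combos_per_person = 2 ** 4           # Distance, Direction, Speed, Gesture -> 16
pairings = combos_per_person ** 2    # first actor x reacting actor -> 256
judgments = pairings * 8             # 8 DIAMONDS ratings per pairing -> 2048
print(combos_per_person, pairings, judgments)  # 16 256 2048
```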
Study 3
While the results of Study 1 are extremely valuable for obtaining insight into the different
combinations of body movement that allow people to make inferences about internal mental
states and the assignment of social meaning, Study 1 has the drawback of being entirely based on
text-based narrative stimuli. In other words, various body movements were described in the form of text, but of course this form of stimulus presents a few issues. First, we do not experience
our perceptual world via text and so the case can be made that the results of Study 1 do not
generalize to real-world assignment of social meaning. Second, text is subject to the
interpretation and imagination of the study participant, posing a risk to reliability.
In order to mitigate the issues raised by a text-based stimulus, I replicated Study 1 using
visual simulations of body movement using SmartBody (Feng, Huang, Kallmann, & Shapiro,
2012; Shapiro, 2011; Thiebaux, Marsella, Marshall, & Kallmann, 2008), a virtual character
animation platform written in C++ originally developed at the USC Institute for Creative
Technologies. SmartBody allows manipulation of movement, gaze, and nonverbal behavior, among other features, making it an excellent tool for use in social perception research.
Study 4
Study 4 seeks to apply some tenets of motion-based social perception in a specific social
situation using an immersive virtual reality environment. This particular study attempts to apply
one of the Kinesthetic Motion Cue variables (Distance) in a simulated real-life interaction that
takes place in an immersive virtual reality environment. This study follows the model of a
previous study that applied the Kinesthetic Motion cue of movement Direction in the same
virtual environment (Feng et al., 2017). That said, this study departs from the model followed by
Studies 1-3 in this dissertation in that the Kinesthetic Motion Cue variable acts as both an
Independent Variable and a Dependent Variable. Specifically, this study examines the
behavioral (causal attribution, affect), physiological (Skin conductance), and physical (head
movement direction) impact of interpersonal distance (Independent variable) behavior in a VR-
based learning environment. Essentially, I measured the degree to which participants moved
their head forward or backwards in response to a very close or far virtual agent.
The interactive virtual environment used in this study simulates an acting class scenario,
where the participants take on the role of an acting student rehearsing a role with a virtual
professor who provides instructions and feedback. We used a virtual environment, where male or
female virtual professors either stand stationary or approach the participant, so as to invade
participants’ personal space while providing negative feedback. We also had separate conditions
where the professor stood stationary, but was positioned either far or very close to the
participant. We also observed greater internal attribution tendencies when a professor stood at
an uncomfortably close distance. The results of the present study have numerous implications
for the design of virtual agents for learning outcomes as well as the methodological design of
studies utilizing virtual agents in virtual environments.
Chapter 2: 1st Person Social Perception of Text-Based Representation of Physical
Movement using Situational DIAMONDS Measurement
Introduction
The aim of this study is to understand, at the 1st person level, fundamentals of how
various combinations of social-perception based movements of another person underlie
inferences about the social situation indicated by those movements (using the DIAMOND
taxonomy). To address this, we examine how combinations of four behavioral indicators (those
used most prominently in the literature; those that were highly visible, those that appeared to
represent fundamental movements) with two levels of each indicator (e.g., fast versus slow; far
versus near) affected social perceptions. The four dimensions examined were speed (slow, fast),
movement (towards, away), distance (near, far), gesture (open, closed).
Manipulating these movement-related features, we examine the main effects for each
behavioral indicator on judgments regarding 8 diamond situational inferences (duty, intellect,
adversity, romance, positivity, negativity, deception, and sociality). In addition, we examine the
two-way, three-way, and four-way interactions that might differentially affect social judgments.
This online study (n = 817 recruited, including in analyses only participants who had complete data) was an all
within-subjects design with order of dependent variables (i.e., diamond dimension) as well as the
order of combinations of movements randomly presented via software (i.e., Qualtrics) within and
across participants.
Current Study
The current study is the first attempt to analyze the combinatory effects of individual
variations of body movement, orientation, and gesture on the social perception of situational
inferences. Through this study, we hope to provide objective conclusions as to the physical
movement components that entail a selected variety of social situations. Social situations
represent the fundamental basis upon which all our interactions and social lives operate. As
such, this study attempts to provide a somewhat predictive framework for identifying and parsing
out the physical components of a social situation – at least from the perspective of a 1st person
recipient.
Method
Participants
A total of 817 participants were recruited from the University of Southern California Psychology subject pool over the course of three academic semesters from Spring 2016 to Spring 2017. Of the 817 participants who started the study, 507 completed the study in its entirety. As the nature of the method of analysis requires completion of all measures, the analytical sample for this study was 507. Of these participants, 359 were female (70.8%) and 148 were male (29.2%).
Design
The statistical design of this study was a 2x2x2x2 4-way repeated-measures MANOVA examining the effect of different combinations of movement, each at 2 levels (Far-Near, Towards-Away, Open-Closed, Slow-Fast), on ratings of the 8 situational DIAMONDS (Duty, Intellect, Adversity, Mating (Romance), pOsitivity, Negativity, Deception, Sociality). The individual dimensions of physical movement as well as the differentiated effects of 2-way, 3-way, and 4-way combinations of physical motion are examined.
Figure 1. Experimental Design
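The multivariate tests reported in the Results were presumably produced with a standard statistics package; as a minimal sketch of how one univariate follow-up from this 2x2x2x2 within-subjects design could be reproduced, the code below uses statsmodels in Python (the long-format column names, the file name, and the restriction to a single DIAMONDS dimension per model are my own assumptions).

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Assumed long format: one row per participant x movement combination,
# with a 0-7 rating for a single DIAMONDS dimension (here, Duty).
# Expected columns: participant, distance, direction, gesture, speed, rating
df = pd.read_csv("study1_duty_long.csv")  # hypothetical file name

model = AnovaRM(
    data=df,
    depvar="rating",
    subject="participant",
    within=["distance", "direction", "gesture", "speed"],
)
print(model.fit())  # 4-way repeated-measures ANOVA table for one dimension
```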
Materials
In an online questionnaire on Qualtrics, participants were exposed to text-based descriptions of 16 different randomized combinations of body movement and indicated the likelihood that each set of movements would involve a DIAMONDS situation. For instance, “Person far away moves towards you slowly with open arms.” See Table 1 in the Appendix for a full list of stimulus descriptions presented to participants. Participants were asked to respond to each of the DIAMONDS situational dimensions for each set of movement stimuli on a graphical slider that ranged from 0-7. The decision to use a graphical slider as opposed to a conventional Likert scale was to mitigate the reliability issues that arise from forcing participants to select whole integers as values of judgment. An example of an item that addressed the Duty element of the DIAMONDS was, “How likely is this situation to involve work, tasks, or duties?”
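To illustrate how the 16 text stimuli and the condensed DIAMONDS items fit together, the sketch below generates sentences in the style of the quoted example and shows the Duty item; the level phrasings other than the quoted example are assumptions for illustration, and the actual wordings are listed in the Appendix.

```python
from itertools import product

# Two levels per Kinesthetic Motion Cue dimension, as described above.
# Only the first combination's wording is quoted in the text; the other
# phrasings are illustrative assumptions.
DISTANCE = ["far away", "nearby"]
DIRECTION = ["towards you", "away from you"]
SPEED = ["slowly", "quickly"]
GESTURE = ["open", "closed"]

def stimulus(distance, direction, speed, gesture):
    return f"Person {distance} moves {direction} {speed} with {gesture} arms."

stimuli = [stimulus(*combo) for combo in product(DISTANCE, DIRECTION, SPEED, GESTURE)]
print(len(stimuli))  # 16
print(stimuli[0])    # "Person far away moves towards you slowly with open arms."

# Item wording quoted above for the Duty dimension; responses were given
# on a 0-7 graphical slider.
duty_item = "How likely is this situation to involve work, tasks, or duties?"
```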
The wording of the DIAMONDS measurement items for this study was derived from
Serfass and Sherman (2015), which is a condensed version of the original DIAMONDS tested in
an analysis of 20 million tweets to identify daily and weekly situational trends, as well as
differences in what types of situations men and women were sharing online. While the original
DIAMONDS scale was a 32-item Q-Sort method measurement, a condensed 8-item Likert-scale
version was introduced (Rauthmann & Sherman, 2016a). To mitigate the need for multiple daily
measurements, an even further simplified version was introduced to describe present situations
(Rauthmann & Sherman, 2016b). That being said, a drawback of this shortened version is that as
only one DV exists per DIAMONDS element, reliability cannot be assessed.
Results
Multivariate Effects
Multivariate main effects (see Table 1) were observed for all 4 movement dimensions.
This means that if we collapse across the collection of Diamonds’ judgments (e.g., Duty,
Intellect, Non-Adversity, Positivity, Non-Negativity, Non-Deception, and Sociality), participants
rated the situations when the actor’s action (1) was near versus far as generally more indicative
of both positive outcomes (e.g., intellect, romance, positivity, and sociality) and more negative
outcomes (e.g., negativity, deception, adversity). [Note, all differences for the original variables
would be negative had we not reverse coded these items as in Table 6]. That is, whatever the
situation, it’s more polarized near versus far. (2) For towards versus away, participants rated the
more positive situations (e.g., duty, intellect, romance, positivity, sociality) as higher for towards
and the more negative situations (e.g., adversity, negativity, deception) as higher for away. That
is, positive is associated with approach and negative is associated with away [see Table 7]. (3)
For open versus closed, we get another pattern. Adversity, negativity, deception and duty show a
similar pattern: All of them are associated with a more closed than open movement whereas
intellect, romance, positivity, and sociality are all more associated with a more open than closed
movement [Table 8]. (4) For slow versus fast, some situations are associated with slow
movement (e.g., intellect) while others are associated with faster movement (e.g., adversity,
negativity, deception, sociality). Interestingly, each of these action possibilities divides the
DIAMONDS in a different way.
2-way multivariate interaction effects were observed for Far-Near and Towards-Away,
Towards-Away and Open-Closed, Far-Near and Slow-Fast, Towards-Away and Slow-Fast, and
Open-Closed and Slow-Fast. Each of these two-way interactions takes a somewhat different
form depending upon the DIAMOND situation involved. These are therefore discussed further
in the univariate analyses and mean tables below. A significant 3-way multivariate interaction effect was observed for Towards-Away, Open-Closed, and Slow-Fast, with a marginal 3-way effect for Far-Near, Open-Closed, and Slow-Fast (see Table 1). Again, the particular form of these interactions depended upon the specific DIAMOND. Thus, each, where significant at the univariate level, is delineated further below. There was no 4-way interaction.
Table 1
Multivariate Tests of Movement Dimensions

Effect                                      Value    F           Hypothesis df   Error df   Sig.
Intercept                                   .989     5342.936b   8.000           475.000    0.000
Distance                                    .161     11.881c     8.000           497.000    .000
Direction                                   .638     109.661c    8.000           497.000    .000
Gesture                                     .727     165.315c    8.000           497.000    .000
Speed                                       .249     20.562c     8.000           497.000    .000
Distance * Direction                        .091     5.916b      8.000           475.000    .000
Distance * Gesture                          .022     1.349b      8.000           475.000    .217
Direction * Gesture                         .577     81.033b     8.000           475.000    .000
Distance * Direction * Gesture              .022     1.305b      8.000           475.000    .239
Distance * Speed                            .036     2.213b      8.000           475.000    .025
Direction * Speed                           .074     4.764b      8.000           475.000    .000
Distance * Direction * Speed                .024     1.432b      8.000           475.000    .181
Gesture * Speed                             .175     12.561b     8.000           475.000    .000
Distance * Gesture * Speed                  .031     1.915b      8.000           475.000    .056
Direction * Gesture * Speed                 .094     6.176b      8.000           475.000    .000
Distance * Direction * Gesture * Speed      .016     .948b       8.000           475.000    .477
Univariate Effects
Univariate main effects. Univariate main effects of Distance were observed for Intellect, Adversity, Romance, Positivity, Negativity, Deception, and Sociality. Univariate main effects of Direction and Gesture were observed for all 8 DIAMONDS dimensions. Univariate main effects of Speed were observed for Intellect, Adversity, Negativity, Deception, and Sociality.
Univariate 2-way interaction effects. A univariate 2-way interaction effect of Distance
and Direction was observed for Duty, Adversity, Romance, Negativity, Deception, and Sociality.
A univariate 2-way interaction effect of Distance and Gesture was observed for Romance and
Negativity. A univariate 2-way interaction effect of Direction and Gesture was observed for
Duty, Intellect, Adversity, Romance, Positivity, Negativity, Deception, and Sociality. A
univariate 2-way interaction effect of Distance and Speed was not observed for any
DIAMONDS. A univariate 2-way interaction effect of Direction and Speed was observed for
Romance, Positivity, Negativity, Deception, and Sociality. A univariate 2-way interaction effect
of Gesture and Speed was observed for Intellect, Adversity, Romance, Positivity, Negativity,
Deception, and Sociality.
Univariate 3-way interaction effects. A univariate 3-way interaction effect of Distance, Direction, and Speed was not observed. A univariate 3-way interaction effect of Distance, Gesture, and Speed was observed for Duty and Adversity only. A univariate 3-way interaction effect of Direction, Gesture, and Speed was observed for Intellect, Romance, Positivity, Negativity, Deception, and Sociality. A marginal univariate 3-way interaction effect of Distance, Direction, and Gesture was observed for Romance, F(1, 504) = 3.857, p = .05.
Table 3
Univariate main effects of movement variables on DIAMONDS
Movement   DIAMONDS   Mean Square   F   Sig.   Partial Eta Squared   Observed Power
Distance Duty 3.052 2.032 .155 .004 .296
Intellect 14.156 10.660 .001 .022 .903
Adversity 17.562 9.630 .002 .020 .872
Romance 47.033 29.189 .000 .058 1.000
Positivity 22.526 15.912 .000 .032 .978
Negativity 31.118 17.171 .000 .035 .985
Deception 8.109 4.577 .033 .009 .570
Sociality 132.968 57.837 .000 .108 1.000
Direction Duty 150.712 50.238 .000 .095 1.000
Intellect 561.118 219.533 .000 .315 1.000
Adversity 63.529 14.641 .000 .030 .968
Romance 2339.558 616.136 .000 .563 1.000
Positivity 2985.357 687.157 .000 .590 1.000
Negativity 612.541 137.475 .000 .223 1.000
Deception 390.730 114.832 .000 .194 1.000
Sociality 2871.052 520.376 .000 .521 1.000
Gesture Duty 921.277 209.273 .000 .304 1.000
Intellect 491.394 139.842 .000 .226 1.000
Adversity 6082.410 751.942 .000 .611 1.000
Romance 4970.162 835.276 .000 .636 1.000
Positivity 8036.862 1147.115 .000 .706 1.000
Negativity 8356.609 960.197 .000 .668 1.000
Deception 2943.125 512.827 .000 .518 1.000
Sociality 2280.466 391.131 .000 .450 1.000
Speed Duty 2.760 1.068 .302 .002 .178
Intellect 90.974 40.405 .000 .078 1.000
Adversity 223.496 81.959 .000 .146 1.000
Romance 4.672 2.460 .117 .005 .347
Positivity .078 .034 .853 .000 .054
Negativity 117.897 41.507 .000 .080 1.000
Deception 17.755 8.670 .003 .018 .836
Sociality 12.475 6.005 .015 .012 .686
Table 4
Significant 2-Way Interactions
2-way   DIAMONDS   Mean Square   F   Sig.   Partial Eta Squared   Observed Power
Distance * Direction
Duty 6.539 4.844 .028 .010 .594
Intellect .025 .020 .889 .000 .052
Adversity 34.421 17.787 .000 .036 .988
Romance 13.054 9.794 .002 .020 .878
Positivity 3.771 2.882 .090 .006 .395
Negativity 42.069 23.464 .000 .047 .998
Deception 18.752 9.689 .002 .020 .874
Sociality 14.324 7.705 .006 .016 .791
Distance * Gesture
Duty .058 .040 .842 .000 .055
Intellect 1.634 1.263 .262 .003 .202
Adversity 2.204 1.238 .266 .003 .199
Romance 7.456 5.635 .018 .012 .659
Positivity 3.826 2.868 .091 .006 .394
Negativity 6.874 4.441 .036 .009 .557
Deception 1.156 .765 .382 .002 .141
Sociality 1.904 1.055 .305 .002 .176
Direction * Gesture
Duty 273.005 113.477 .000 .192 1.000
Intellect 67.039 34.340 .000 .067 1.000
Adversity 642.444 191.997 .000 .287 1.000
Romance 1679.893 531.569 .000 .527 1.000
Positivity 1660.657 500.030 .000 .511 1.000
Negativity 507.717 174.653 .000 .268 1.000
Deception 102.660 44.791 .000 .086 1.000
Sociality 608.360 229.749 .000 .325 1.000
Distance * Speed
Duty 2.586 1.621 .204 .003 .246
Intellect .338 .239 .625 .000 .078
Adversity 1.110 .554 .457 .001 .115
Romance 1.238 .900 .343 .002 .157
Positivity 3.012 2.312 .129 .005 .329
Negativity 3.569 1.894 .169 .004 .279
Deception .680 .377 .540 .001 .094
Sociality .997 .565 .453 .001 .117
Direction * Speed
Duty .251 .149 .700 .000 .067
Intellect .053 .038 .846 .000 .054
Adversity 2.484 1.150 .284 .002 .188
Romance 22.281 13.601 .000 .028 .957
Positivity 24.601 15.372 .000 .031 .975
Negativity 13.547 6.672 .010 .014 .732
Deception 30.114 17.303 .000 .035 .986
Sociality 20.073 11.959 .001 .024 .932
Gesture * Speed
Duty 4.428 2.402 .122 .005 .340
Intellect 31.882 19.595 .000 .039 .993
Adversity 111.611 51.874 .000 .098 1.000
Romance 59.916 35.322 .000 .069 1.000
Positivity 98.625 54.840 .000 .103 1.000
Negativity 115.044 54.272 .000 .102 1.000
Deception 33.239 20.853 .000 .042 .995
Sociality 32.383 16.583 .000 .034 .982
Table 5
Significant 3-Way Interaction
3-way   DIAMONDS   Mean Square   F   Sig.   Partial Eta Squared   Observed Power
Distance * Gesture * Speed
Duty 7.734 4.948 .027 .010 .603
Intellect .767 .652 .420 .001 .127
Adversity 8.751 4.805 .029 .010 .590
Romance .578 .413 .521 .001 .098
Positivity .826 .722 .396 .002 .136
Negativity .785 .449 .503 .001 .103
Deception .224 .154 .695 .000 .068
Sociality .167 .106 .745 .000 .062
Direction * Gesture * Speed
Duty 1.448 .918 .339 .002 .159
Intellect 5.649 4.013 .046 .008 .516
Adversity 5.127 2.271 .132 .005 .324
Romance 42.364 25.729 .000 .051 .999
Positivity 48.856 34.741 .000 .068 1.000
Negativity 22.166 11.071 .001 .023 .913
Deception 11.445 6.413 .012 .013 .715
Sociality 37.774 21.909 .000 .044 .997
Mean Differences of Movement Variables
Distance, Direction, Gesture, and Speed (Complete Model). All movement variable means were examined in terms of the DIAMONDS to determine patterns and trends. These 4-way mean scores are represented graphically in Figure 2 (graph), Figure 3 (graph, reverse-coded), and Figure 4 (cell-based matrix).
Figure 2
Graphical Representation of Movement Combination Mean Scores
[Bar graph of mean scores (0-6) for each movement combination; series: Duty, Intellect, Adversity, Mating, pOsitivity, Negativity, Deception, Sociality]
Figure 3
Graphical Representation of Movement Combination Mean Scores (Reverse-Coded)
[Bar graph of mean scores (0-7) for each movement combination; series: Duty, Intellect, Non-Adversity, Mating, pOsitivity, Non-Negativity, Non-Deception, Sociality]
Upon a scan of Figure 2, we can observe that regardless of starting Distance, movement Towards with an Open gesture and Fast speed is associated with Positivity and Sociality. This is also the context within which mating opportunities occur, especially Near, Towards, and Open (whether Slow or, especially, Fast). Interestingly, Intellect is most activated as an interpretation with greater openness in positive, social contexts (Near, Towards, Open). This is consistent with work by Fredrickson (2004) on "broaden and build," which associates attentional processes involving creativity with positive emotions. There is also work of an evolutionary vein that links intellect with mating situations, perhaps because both involve moving Towards with Openness (Buss, 1994; 1995).
In contrast, a Closed gesture with Fast movement is associated with Negativity, Adversity, and Deception, with Deception particularly high at a Near Distance while moving Away; we expect the least mating opportunities and the lowest levels of intellect operating in these contexts. Another notable pattern is that Duty tends to be elevated under Adversity (moving Towards, Closed gesture). Finally, moving Away with an Open gesture is the combination that contains the most ambiguity and uncertainty about what type of social situation is occurring.
A clear pattern that emerges upon graphing the reverse-coded mean score combinations (Figure 3) is a distinct alignment of all the DIAMONDS with the general changes in the movement combinations. Indeed, circling back to the original graph in Figure 2 demonstrates a bifurcation of the DIAMONDS across positively and negatively valenced items. This is interesting to note for several reasons, which are addressed in the Discussion. The sections that follow provide an itemized description of the movement combination patterns observed for each of the DIAMONDS across all 4 movement dimensions.
Figure 4
4-Way Mean Scores: Complete Model

Distance  Direction  Gesture  Speed     Duty    Intellect  Adversity  Mating  pOsitivity  Negativity  Deception  Sociality
Far       Towards    Open     Slow      1.978   2.665      5.399      3.598   4.19        5.364       5.428      4.265
Far       Towards    Open     Fast      1.922   2.644      5.478      4.116   4.665       5.593       5.649      4.771
Far       Towards    Closed   Slow      2.884   2.196      3.464      1.544   1.662       3.223       4.202      2.934
Far       Towards    Closed   Fast      3.057   1.729      2.79       1.307   1.366       2.611       3.992      2.905
Far       Away       Open     Slow      1.946   2.033      4.869      1.951   2.386       4.669       5.009      2.686
Far       Away       Open     Fast      1.929   1.885      4.774      1.959   2.246       4.382       4.912      2.666
Far       Away       Closed   Slow      2.2     1.726      3.93       1.294   1.314       3.29        4.099      2.173
Far       Away       Closed   Fast      2.405   1.43       3.364      1.291   1.139       2.791       3.856      2.132
Near      Towards    Open     Slow      1.908   2.756      5.564      3.977   4.367       5.461       5.473      4.542
Near      Towards    Open     Fast      1.87    2.721      5.455      4.411   4.894       5.595       5.714      4.938
Near      Towards    Closed   Slow      2.98    2.213      3.386      1.608   1.725       3.114       4.186      3.118
Near      Towards    Closed   Fast      3.013   1.883      2.869      1.507   1.493       2.696       4.024      2.974
Near      Away       Open     Slow      2.05    2.062      4.758      2.079   2.382       4.389       4.941      3.05
Near      Away       Open     Fast      2.127   1.897      4.481      2.007   2.437       4.282       4.7        3.005
Near      Away       Closed   Slow      2.374   1.867      3.676      1.435   1.345       2.906       3.955      2.48
Near      Away       Closed   Fast      2.319   1.572      3.112      1.266   1.18        2.467       3.633      2.507
Duty. The highest values for Duty were observed for Moving Towards, Slowly, with
Closed Gesture for both Far (m = 3.06) and Near Distance (m = 3.01). The lowest values for
Duty tended to be Moving Towards, Quickly, with Open Gesture for both Far (m = 1.92) and
Near Distance (m =1.87).
Intellect. High values for Intellect tended to be Movement Towards with Open Gesture
for either Slow or Fast Speed and either Far or Near Distance. That said, the highest values were
observed to be from Moving Towards from a Near Distance with Open Gesture for either Slow
(m = 2.76) or Fast Speed (m = 2.72). The lowest values were consistently for Movement Away,
with Fast Speed, and With Closed Gesture for both Far (m = 1.43) and Near Distance (m = 1.57).
Non-Adversity. High values for Non-Adversity tended to be for Movement Towards
with Open Gesture for either Slow or Fast Speed and for either Far or Near Distance. In other
words, Non-Adversity can be identified strictly with Movement Towards and Open Gesture,
regardless of Speed and Distance. The lowest values for Non-Adversity were consistently for
Movement Towards with Fast Speed and Closed Gesture, for both Far (m = 2.79) and Near
Distance (m = 2.87).
Romance. High values for Romance generally tended to be for Movement Towards with
Open Gesture, although highest values were observed for Movement Towards with Open
Gesture and Fast Speed, for both Far (m = 4.12) and Near Distance (m = 4.41). Lowest values
for Romance generally tended to be for Movement Away with Closed Gesture, although lowest
values were observed for Movement Away with Closed Gesture and Fast Speed, for both Far (m
= 1.29) and Near Distance (m = 1.27).
Positivity. High values of Positivity generally tended to be for Movement Towards with
Open Gesture (regardless of Speed or Distance), but the highest values for Positivity were for
Movement Towards with Open Gesture and Fast Speed for both Far (m = 4.67) and Near
Distance (m = 4.89). Low values for Positivity were observed for any instance of Closed
Gesture, although the lowest values were observed for Movement Away with Closed Gesture,
and Fast Speed, for both Far (m = 1.31) and Near Distance (m = 1.18).
Non-Negativity. High values of Non-Negativity generally tended to be for Movement
Towards with Open Gesture (regardless of Speed or Distance), but the highest values for Non-
Negativity were for Movement Towards with Open Gesture and Fast Speed for both Far (m =
5.59) and Near Distance (m = 5.60). Low values for Non-Negativity were observed for any
instance of Closed Gesture, although the lowest values were observed for Movement Towards
with Closed Gesture, and Fast Speed, for both Far (m = 2.61) and Near Distance (m = 2.47).
Non-Deception. High values of Non-Deception generally tended to be for Movement
Towards with Open Gesture (regardless of Speed or Distance), but the highest values for Non-
Deception were for Movement Towards with Open Gesture and Fast Speed for both Far (m =
5.65) and Near Distance (m = 5.71). Lowest values were observed for Movement Away with
Closed Gesture, and Fast Speed, for both Far (m = 3.86) and Near Distance (m = 3.63).
Sociality. High values of Sociality generally tended to be for Movement Towards with
Open Gesture (regardless of Speed or Distance), but the highest values for Sociality were for
Movement Towards with Open Gesture and Fast Speed for both Far (m = 4.77) and Near
Distance (m = 4.94). Low values of Sociality tended to be for Movement Away with Closed
Gesture (regardless of Distance and Speed), but the lowest values of Sociality were observed for
Movement Away with Closed Gesture from a Far Distance, for both Fast (m = 2.17) and Slow
Speed (m = 2.13).
Mean Differences: Distance. Table 6 depicts the mean differences between the simple
dichotomy of Far (1) versus Near (2) distance. Negative values of mean differences indicate a
greater mean value for Near relative to Far while positive values indicate a greater mean value
for Far relative to Near.
The negative dimensions of the DIAMONDS (Adversity, Negativity, Deception) were reverse-coded in order to manage the issue of polarity between positively and negatively valenced items; all other categories remained the same.
Significant mean differences exist between judgments of Far versus Near across virtually all dimensions of the DIAMONDS. For instance, Non-Adversity (reverse-coded) is associated with Far distance more so than Near, which is an issue of extremity: Adversity, then, is more associated with Near than Far. In fact, with the exception of Duty, all dimensions of the DIAMONDS (in their original coding) show significantly higher ratings for Near relative to Far, indicating a more polarized pattern of judgments when the other person is Near. It is worth noting that the lack of a significant mean difference for Duty may be attributable to the fact that Duty as a social situation is more ambiguous, or is in itself polarized in its valence: individuals may hold a variety of different judgments and attitudes about what constitutes a "dutiful" situation and the valence of that situation. As a general conclusion about the Far versus Near dichotomy, then, when another person is near, individuals engage in a more polarized and extreme set of situational inferences.
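For concreteness, the reverse coding and the Far versus Near contrast can be sketched in a few lines of Python (a minimal illustration under assumed column names; the DataFrame `ratings` is hypothetical, and this is not the analysis software actually used):

```python
# A minimal sketch, assuming a hypothetical long-format pandas DataFrame `ratings`
# with one row per participant x movement condition, a 'distance' column coded
# 'far'/'near', and one 0-7 slider column per DIAMONDS dimension.
import pandas as pd

NEGATIVE_DIMS = ["adversity", "negativity", "deception"]
ALL_DIMS = ["duty", "intellect", "adversity", "romance",
            "positivity", "negativity", "deception", "sociality"]

def reverse_code(df, scale_max=7.0):
    """Reverse the negatively valenced DIAMONDS (e.g., Adversity -> Non-Adversity)."""
    out = df.copy()
    for dim in NEGATIVE_DIMS:
        out[dim] = scale_max - out[dim]  # 0-7 slider: reversed score = 7 - original
    return out

def far_minus_near(df, dims=ALL_DIMS):
    """Far minus Near marginal mean difference per dimension (cf. Table 6)."""
    means = df.groupby("distance")[dims].mean()
    return means.loc["far"] - means.loc["near"]
```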
Table 6
Mean differences of Far (I) versus Near (J) (Adversity, Negativity, and Deception were reversed)

Measure           Mean Difference (I-J)   Std. Error   Sig.
Duty              -.033                   .027         .230
Intellect         -.090*                  .025         .000
Non-Adversity      .090*                  .030         .003
Romance           -.160*                  .028         .000
Positivity        -.116*                  .026         .000
Non-Negativity     .120*                  .030         .000
Non-Deception      .064*                  .029         .029
Sociality         -.265*                  .033         .000
Mean Differences: Direction. Table 7 depicts the mean differences between judgments of movement Towards versus Away from the participant. Positive values of mean differences indicate a greater mean value for Towards relative to Away, while negative values indicate a greater mean value for Away relative to Towards. As can be seen in Table 7, all the DIAMONDS dimensions were positive when accounting for the Towards-Away difference. Recall, however, that the negatively valenced items (Adversity, Negativity, Deception) were recoded to account for polarity. After accounting for the reverse coding, then, the DIAMONDS dimensions that were more associated with Towards than Away included only the non-negatively valenced items (Duty, Intellect, Romance, Positivity, and Sociality). What is fascinating here is a clear bifurcation of the DIAMONDS categories between negatively and non-negatively valenced items (some items are ambiguous in valence). In other words, non-negative DIAMONDS dimensions tend to be associated with movement Towards while negative DIAMONDS dimensions tend to be associated with movement Away. Another way to interpret this pattern is that more polarizing/competitive effects are observed for negative situations with movement Away, and for non-negative situations with movement Towards.
Table 7
Mean differences of Towards (I) versus Away (J) (Adversity, Negativity, and Deception were reversed)

Measure           Mean Difference (I-J)   Std. Error   Sig.
Duty               .283*                  .039         .000
Intellect          .542*                  .036         .000
Non-Adversity      .180*                  .047         .000
Romance           1.098*                  .044         .000
Positivity        1.242*                  .047         .000
Non-Negativity     .560*                  .048         .000
Non-Deception      .446*                  .042         .000
Sociality         1.219*                  .053         .000
Mean Differences: Gesture. The Open versus Closed dichotomy reveals a pattern of judgments unique from those observed thus far. For this comparison, a negative value indicates a mean difference in favor of Closed, whereas a positive value indicates a mean difference in favor of Open. Here, we observe a significant negative mean difference only for Duty and significant positive mean differences for all other dimensions. Recall, however, that the negatively valenced dimensions were reverse-coded, which indicates that in addition to Duty, the original categories of Adversity, Negativity, and Deception also have negative mean differences. That is, Duty, Adversity, Negativity, and Deception are all competing for judgments of Closed arm gestures, while Intellect, Romance, Positivity, and Sociality are all competing for judgments of Open arm gestures. Here, we can begin to learn what separates Duty as a situational category from the others. While Duty is not overtly negative in valence, it clearly does not follow the pattern of the other non-negatively valenced dimensions. With Duty, we are faced with a situational category that describes an act or behavior that we may not necessarily want to do, but rather are forced to do. It is this lack of situational agency that may begin to explain the association of Duty with a Closed arm gesture along with the more explicitly negatively valenced categories of Adversity, Negativity, and Deception.
Table 8
Mean differences of Open (I) versus Closed (J) (Adversity, Negativity, and Deception were reversed)

Measure           Mean Difference (I-J)   Std. Error   Sig.
Duty              -.688*                  .048         .000
Intellect          .506*                  .043         .000
Non-Adversity     1.773*                  .065         .000
Romance           1.606*                  .055         .000
Positivity        2.043*                  .060         .000
Non-Negativity    2.080*                  .067         .000
Non-Deception     1.235*                  .054         .000
Sociality         1.087*                  .055         .000
Mean Differences: Speed. In the Slow versus Fast comparison, negative mean differences indicate greater judgments for Fast while positive mean differences indicate greater judgments for Slow. Here, we see nonsignificant negative mean differences for Duty, Romance, and Positivity. Before accounting for reverse coding, we see a significant negative mean difference only for Sociality; after accounting for it, we also see significant negative mean differences for Adversity, Negativity, and Deception. That is, Sociality, Adversity, Negativity, and Deception are all associated with Fast movements while Intellect is associated with Slow movements. To reiterate, Duty, Romance, and Positivity revealed nonsignificant mean differences, which seems to suggest competition among judgments and a need to examine the interactions of Speed with other movement dimensions (e.g., movement Towards versus Away). It is interesting that the explicitly positively valenced dimension of Sociality competes with the explicitly negative dimensions (Adversity, Negativity, Deception) for judgments of Fast movements, which suggests that rate of movement on its own does not provide rich social perceptual cues. Indeed, the nonsignificant mean differences for the other dimensions indicate a need to more closely examine the interactions of rate of movement with the other movement dimensions.
Table 9
Mean differences of Slow (I) versus Fast (J) (Adversity, Negativity, and Deception were reversed)

Measure           Mean Difference (I-J)   Std. Error   Sig.
Duty              -.040                   .036         .272
Intellect          .219*                  .034         .000
Non-Adversity      .340*                  .038         .000
Romance           -.047                   .031         .130
Positivity        -.006                   .034         .860
Non-Negativity     .250*                  .038         .000
Non-Deception      .102*                  .033         .002
Sociality         -.081*                  .033         .013
Multi-level Mean Ratings
Thus far we’ve seen that there are significant mean differences of DIAMONDS ratings
according to different levels of the separate categories of movement. To further examine the
combinatory effects of the separate categories of movement and how they relate to how people
categorize situations, we next examined the 2-, 3-, and 4- way representations of all the mean
ratings for each DIAMONDS. Tables depicting mean ratings may be found in the Appendix.
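To make the aggregation behind these tables concrete, the sketch below builds a small toy long-format table and computes multi-way cell means with pandas (a minimal illustration with hypothetical column names and simulated ratings; it is not the software pipeline actually used):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Toy long-format data: one simulated 0-7 rating per movement combination for two
# illustrative DIAMONDS columns (the real data contain many participants and all 8).
cells = pd.MultiIndex.from_product(
    [["far", "near"], ["towards", "away"], ["open", "closed"], ["slow", "fast"]],
    names=["distance", "direction", "gesture", "speed"],
).to_frame(index=False)
ratings = cells.assign(duty=rng.uniform(0, 7, len(cells)),
                       sociality=rng.uniform(0, 7, len(cells)))

dims = ["duty", "sociality"]
two_way = ratings.groupby(["distance", "direction"])[dims].mean()   # e.g., Distance x Direction
four_way = ratings.groupby(["distance", "direction", "gesture", "speed"])[dims].mean()  # 16 cells, as in Figure 4
print(four_way.round(2))
```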
Distance and Direction. Mean scores for combinations of Distance and Direction were
examined across all the DIAMONDS dimensions to determine trends and patterns. Direction,
particularly the Towards movement, appeared to generate much convergence in the high mean
scores across multiple dimensions. Conversely, Distance did not seem to play a major role in
this interaction, corroborating the stark difference in the strength of the multivariate main effects
observed for Distance, F(8, 497) = 11.881, p < .001, and Direction, F(8, 497) = 109.661, p <
.001.
Duty. The highest ratings of Duty were for movement Towards, regardless of Far (m = 2.46) or Near (m = 2.44) Distance. Lower ratings were observed for movement Away at both Far (m = 2.12) and Near (m = 2.22) Distance. As such, we can conclude that movement Towards, particularly from a Far Distance, is associated with a situation that involves Duty.
Intellect. Movement Towards was associated with greater Intellect than movement Away,
regardless of Far (m = 2.31) or Near (m = 2.93) Distance.
Romance. Movement Towards was associated with greater Romance than movement Away,
regardless of Far (m = 2.64) or Near (m = 2.88) Distance.
Positivity. Movement Towards was associated with greater Positivity than movement Away,
regardless of Far (m = 2.97) or Near (m = 3.12) Distance.
Non-Negativity. Movement Towards was associated with greater Non-Negativity than
movement Away, regardless of Far (m = 4.20) or Near (m = 4.21) Distance.
Sociality. Movement Towards was associated with greater Sociality than movement
Away, regardless of Far (m = 3.72) or Near (m = 3.89) Distance.
Distance and Gesture. Mean scores for combinations of Distance and Gesture were examined across all the DIAMONDS dimensions to determine trends and patterns. Unlike Distance and Direction, where a relatively consistent pattern of high ratings was observed for a single level (Towards), Distance and Gesture showed a more mixed combination of patterns. That being said, other than Duty, all DIAMONDS dimensions for this set of observations were associated with Open Gesture as opposed to Closed Gesture. This pattern was largely consistent regardless of Distance.
Duty. The highest ratings of Duty were for Closed Gesture, regardless of Far (m = 2.64) or Near (m = 2.67) Distance. Lower ratings were observed for Open Gesture at both Far (m = 1.94) and Near (m = 1.99) Distance. As such, we can conclude that Closed Gesture, particularly from a Near Distance, is associated with a situation that involves Duty.
Intellect. Open Gesture was associated with greater Intellect than Closed Gesture, regardless of Far (m = 2.31) or Near (m = 2.36) Distance.
Non-Adversity. Open Gesture was associated with greater Non-Adversity (less Adversity) than
Closed Gesture, regardless of Far (m = 5.13) or Near (m = 5.065) distance. Here, we see a
significant difference that indicates that the orientation of one’s gestures is a distinct indicator of
the presence or absence of threat in a given situation.
Romance. Open Gesture was associated with greater Romance than Closed Gesture, regardless of Far (m = 2.91) or Near (m = 3.12) Distance.
Positivity. Open Gesture was associated with greater Positivity than Closed Gesture, regardless
of Far (m = 3.37) or Near (m = 3.52) distance.
Non-Negativity. Not surprisingly, Non-Negativity was also associated with Open Gesture more so than Closed Gesture, regardless of Far (m = 5.00) or Near (m = 4.93) Distance.
Non-Deception. Open Gesture was associated with greater Non-Deception than Closed Gesture
regardless of Far (m = 5.25) or Near (m = 5.21) Distance.
Sociality. Open Gesture was associated with greater Sociality than Closed Gesture, but
highest Sociality was observed with Open Gesture at a Near Distance (m = 3.88) as opposed to
Open Gesture at a Far Distance (m = 3.60).
Distance and Speed. Mean scores for combinations of Distance and Speed were
examined across all the DIAMONDS dimensions to determine trends and patterns.
Nonsignificant mean differences were observed across the multiple combinations of Distance
and Speed, corroborating the lack of a significant univariate interaction effect observed for
Distance and Speed on any of the DIAMONDS. A clear explanation for this lack of effect is that Speed presupposes a movement, which in turn presupposes a movement Direction. As such, Speed as a movement dimension can only be meaningfully examined in conjunction with a Direction, and no further mean comparisons involving Speed without Direction were conducted.
Direction and Speed. Mean scores for combinations of Direction and Speed were
examined across all the DIAMONDS dimensions to determine trends and patterns. Generally,
we observe a stronger effect of Direction relative to Speed, particularly in the Towards Direction.
These corroborate the stronger multivariate main effects observed for Direction, F(8, 497) =
109.66, p < .001, relative to Speed, F(8,497) = 20.56, p < .001.
Duty. The highest ratings of Duty were for movement Towards, regardless of Slow (m = 2.44) or Fast (m = 2.47) Speed. The strength of the effect of Direction relative to Speed corroborates the presence of a univariate main effect of Direction for Duty, F(1, 497) = 50.24, p < .001, and a corresponding absence of a univariate main effect of Speed for Duty, F(1, 497) = 1.07, p = .30.
Intellect. The highest ratings of Intellect were for movement Towards, regardless of
Slow (m = 2.46) or Fast (m = 2.24) Speed. The highest rating of Intellect was observed for Slow
movement Towards whereas the lowest rating of Intellect was observed for Fast movement
Away. The strength of the effect of Direction relative to Speed corroborates the strength of the univariate main effect of Direction for Intellect, F(1, 497) = 219.53, p < .001, relative to the univariate main effect of Speed for Intellect, F(1, 497) = 40.41, p < .001.
Non-Adversity. Slow movement was associated with greater Non-Adversity (less Adversity) than Fast movement, regardless of Towards (m = 4.45) or Away (m = 4.31) Direction. The lowest rating of Non-Adversity was observed for Fast movement Away from the participant.
Romance. Movement Towards was associated with greater Romance than movement Away, regardless of Slow (m = 2.68) or Fast (m = 2.84) Speed. The strength of the effect of Direction relative to Speed corroborates the strength of the univariate main effect of Direction for Romance, F(1, 497) = 616.14, p < .001, relative to the absence of a univariate main effect of Speed for Romance, F(1, 497) = 2.46, p = .117.
Positivity. Movement Towards was associated with greater Positivity than movement Away, regardless of Speed, although the highest rating was observed for Fast movement Towards (m = 3.10). The strength of the effect of Direction relative to Speed corroborates the strength of the univariate main effect of Direction for Positivity, F(1, 497) = 687.16, p < .001, relative to the absence of a univariate main effect of Speed for Positivity, F(1, 497) = .034, p = .853.
Non-Negativity. Movement Towards was associated with greater Non-Negativity than Movement Away, regardless of Speed, although the highest rating was observed for Slow Movement Towards (m = 4.29). Univariate main effects of both Direction, F(1, 497) = 137.48, p < .001, and Speed, F(1, 497) = 41.51, p < .001, were observed for Non-Negativity, reflecting the strength of the mean values for each cell representing these categories.
Non-Deception. Movement Towards was associated with greater Non-Deception than Movement Away, regardless of Speed. The lowest ratings of Non-Deception were observed for Fast Movement Away from the participant (m = 4.275). The strength of the effect of Direction relative to Speed corroborates the strength of the univariate main effect of Direction for Deception, F(1, 497) = 114.83, p < .001, relative to the univariate main effect of Speed for Deception, F(1, 497) = 8.67, p = .003.
Sociality. Movement Towards was associated with greater Sociality than Movement Away, regardless of Slow (m = 3.72) or Fast (m = 3.90) Speed. The strength of the effect of Direction relative to Speed corroborates the much larger univariate main effect of Direction for Sociality, F(1, 497) = 520.38, p < .001, relative to the univariate main effect of Speed for Sociality, F(1, 497) = 6.01, p = .012.
Direction and Gesture. Mean scores for combinations of Direction and Gesture were examined across all the DIAMONDS dimensions to determine trends and patterns. Generally, we observe a stronger effect of Gesture relative to Direction, with the highest ratings tending to occur for movement Towards. This corroborates the stronger multivariate main effect observed for Gesture, F(8, 497) = 165.32, p < .001, relative to Direction, F(8, 497) = 109.66, p < .001.
Duty. The highest ratings of Duty were specifically for movement Towards with a
Closed Gesture (m = 2.98), and the lowest ratings were for movement Towards with an Open
Gesture (m = 1.92). This pattern was consistent for movement Away – albeit with a smaller
effect. Movement Away with a Closed Gesture (m = 2.32) had higher ratings than movement
Away with an Open Gesture (m = 2.01).
Intellect. Though movement Towards tended to have higher ratings than Movement
Away, the highest ratings were observed for Movement Towards with an Open Gesture (m =
2.70) and the lowest ratings were observed for movement Away with a Closed Gesture (m =
1.65).
Non-Adversity. Open Gesture was generally associated with Non-Adversity, particularly
in conjunction with movement Towards (m = 5.47). The lowest ratings of Non-Adversity were observed for Closed Gesture with movement Towards (m = 3.13). The strength of the effect of
Gesture relative to Direction confirms the relative strength of the univariate main effect of
Gesture for Non-Adversity, F(1, 497) = 751.94, p < .001, compared to the main effect of
Direction for Non-Adversity, F(1, 497) = 14.64, p < .001.
Romance. Although Open Gesture generally received higher ratings of Romance than
Closed Gesture, the highest ratings of Romance were observed for Movement Towards with an
Open Gesture (m = 4.03).
Positivity. Although Open Gesture generally received higher ratings of Positivity than Closed Gesture, the highest ratings of Positivity were observed for Movement Towards with an Open Gesture (m = 4.53). The lowest ratings of Positivity were observed for Movement Away with a Closed Gesture.
Non-Negativity. Open Gesture received higher ratings of Non-Negativity than Closed Gesture for both Towards (m = 5.50) and Away Movement (m = 4.43). The strength of the effect of Gesture relative to Direction confirms the relative strength of the univariate main effect of Gesture for Non-Negativity, F(1, 497) = 960.20, p < .001, compared to the main effect of Direction for Non-Negativity, F(1, 497) = 137.48, p < .001.
Non-Deception. Open Gesture received higher ratings of Non-Deception than Closed Gesture for both Towards (m = 5.57) and Away Movement (m = 4.89). The lowest rating for Non-Deception was observed for Movement Away with a Closed Gesture. The strength of the effect of Gesture relative to Direction confirms the relative strength of the univariate main effect of Gesture for Non-Deception, F(1, 497) = 512.83, p < .001, compared to the main effect of Direction for Non-Deception, F(1, 497) = 114.83, p < .001.
Sociality. The highest rating for Sociality was observed for Movement Towards with
Open Gesture (m = 4.63) whereas the lowest rating for Sociality was observed for Movement
Away with a Closed Gesture. No other clear pattern was observed for this set of variables.
Gesture and Speed. Mean scores for combinations of Gesture and Speed were
examined across all the DIAMONDS dimensions to determine trends and patterns. Speed, not
surprisingly, accounted for little effect in the mean scores of DIAMONDS. As mentioned
previously, Speed pre-supposes a movement, which also pre-supposes a movement Direction.
As such, Speed as a movement dimension can only be examined in conjunction with a Direction.
The results of this combination of movements heavily favored Gesture across all dimensions.
Duty was associated with Closed Gesture, and all other dimensions were associated with Open
Gesture regardless of Speed.
Distance, Direction, and Gesture (3-way). Based on the strength of the multivariate main effect of Gesture relative to Direction and Distance, the expectation is that Gesture will account for most of the effect in high ratings of the DIAMONDS. Generally, most of the DIAMONDS were associated with Open Gesture in conjunction with Movement Towards.
Duty. Across the board, high ratings of Duty tended to involve Closed Gesture as
opposed to Open Gesture. The two highest ratings were associated with Movement Towards
with a Closed Gesture, regardless of Distance. The highest ratings of Duty were observed for
Movement Towards with a Closed Gesture from a Near Distance (m = 2.997). The lowest
ratings of Duty were observed for Movement Towards with Open Gesture from a Near Distance
(m = 1.889).
Intellect. Open Gesture generally tended to be associated with greater Intellect than
Closed Gesture. The highest values of Intellect were observed for Movement Towards with
Open Gesture, regardless of Far (m = 2.65) or Near (m = 2.74) Distance. The lowest values of
Intellect were observed for Movement Away with a Closed Gesture for both Far (m = 1.58) and
Near (m = 1.72).
Non-Adversity. Open Gesture generally tended to be associated with greater Non-
Adversity than Closed Gesture. The highest values of Non-Adversity were observed for
Movement Towards with Open Gesture, regardless of Far (m = 5.44) or Near (m = 5.51)
Distance. The lowest values of Non-Adversity were observed for Movement Towards with a
Closed Gesture for both Far (m = 3.13) and Near (m = 3.13).
Romance. Open Gesture generally tended to be associated with greater Romance than
Closed Gesture. The highest values of Romance were observed for Movement Towards with
Open Gesture, regardless of Far (m = 3.86) or Near (m = 4.19) Distance. The lowest values of
Romance were observed for Movement Away with a Closed Gesture for both Far (m = 1.29) and
Near (m = 1.35).
Positivity. Open Gesture generally tended to be associated with greater Positivity than
Closed Gesture. The highest values of Positivity were observed for Movement Towards with
Open Gesture, regardless of Far (m = 4.43) or Near (m = 4.63) Distance. The lowest values of
Positivity were observed for Movement Away with a Closed Gesture for both Far (m = 1.23) and
Near (m = 1.26).
Non-Negativity. Open Gesture generally tended to be associated with greater Non-
Negativity than Closed Gesture. The highest values of Non-Negativity were observed for
Movement Towards with Open Gesture, regardless of Far (m = 5.48) or Near (m = 5.53)
Distance. The lowest values of Non-Negativity were observed for Movement Away with a
Closed Gesture from a Near Distance (m = 2.69).
Non-Deception. Open Gesture generally tended to be associated with greater Non-
Deception than Closed Gesture. The highest values of Non-Deception were observed for
Movement Towards with Open Gesture, regardless of Far (m = 5.54) or Near (m = 5.59)
Distance. The lowest values of Non-Deception were observed for Movement Away with a
Closed Gesture from a Near Distance (m = 3.80), although a similar pattern was observed for
Movement Away with a Closed Gesture from a Far Distance (m = 3.98).
Sociality. Open Gesture generally tended to be associated with greater Sociality than
Closed Gesture. The highest values of Sociality were observed for Movement Towards with
Open Gesture, regardless of Far (m = 4.52) or Near (m = 4.74) Distance. The lowest values of
Sociality were observed for Movement Away with a Closed Gesture from a Far Distance (m =
2.15), although a similar pattern was observed for Movement Away with a Closed Gesture from
a Near Distance (m = 2.49).
Discussion
The aim of this study was to understand, at the 1st person level, fundamentals of how various combinations of social-perception-based movements of another person underlie inferences about the social situation indicated by those movements (using the DIAMONDS taxonomy). To address this, we examined how combinations of four behavioral indicators (those used most prominently in the literature, those that were highly visible, and those that appeared to represent fundamental movements), with two levels of each indicator (e.g., fast versus slow; far versus near), affected social perceptions. The four dimensions examined were speed (slow, fast), direction of movement (towards, away), distance (near, far), and gesture (open, closed).
This is the first attempt to analyze the combinatory effects of individual variations of body movement, orientation, and gesture on the social perception of situational inferences. Through this study, we hope to provide objective conclusions as to the physical movement components that entail a selected variety of social situations. Social situations represent the fundamental basis upon which all our interactions and social lives operate. As such, this study attempts to provide a somewhat predictive framework for identifying and parsing out the physical components of a social situation, at least from the perspective of a 1st person recipient.
Implications
As noted above, the results of the mean score combinations demonstrate a bifurcation of the DIAMONDS across positively and negatively valenced items. This is interesting for several reasons. First, there is a clearly apparent polarization of judgments for negatively versus positively valenced DIAMONDS, indicating competition among multiple judgments for specific movement combination patterns. In other words, the 4 movement variables used here provide some degree of predictive value, at least to the level of distinguishing negative from positive. That being said, additional levels of abstraction (more cues as variables) may be required to attain a more fine-grained analysis. Second, this positive/negative pattern may overarch situational inferences in a reptilian, approach-avoidance fashion (Roth & Cohen, 1986; Elliot & Thrash, 2002), which has been demonstrated to contain hierarchical components (Elliot & Church, 2006). Relatedly, this positive/negative bifurcation of judgments may be evidence of the need for a more fine-grained measurement of social situations. This leads me to the final implication of this positive/negative distinction, which is in fact a limitation of the DIAMONDS measurement. Specifically, these results suggest we need a closer examination of the DIAMONDS components. Indeed, there seems to be a clear nesting of the DIAMONDS components into larger categories and perhaps sub-categories. A multi-layer typology with Negative versus Positive at the top level would perhaps provide a more fine-grained understanding of the distinctions among the other DIAMONDS (Duty, Intellect, Adversity, Mating, Deception, Sociality).
Future Directions
This study is one of several in a series of studies that attempt to add to our understanding of the role of human movement and motion in the social perception of inferences via several different approaches. The goal of most of these studies is to examine how different animated agents' movements (e.g., away from or towards the viewer) and perspective (e.g., third person or first person) affect raters' judgements of the motives and characteristics of the agent.
2-Person Social Perception. The aim is to examine how the perceived interactions of two individuals affect DIAMONDS situational inferences. In other words, participants were provided a combination of movements made by one person and were then presented with a combination of potential reactionary movements made by a second person before being instructed to make judgments about the particular 2-person interaction. The logic here is that although the social meaning assigned on the basis of one person's movements is indeed interesting to note, social inferences rarely happen in a one-person vacuum, and the many potential reactions made by a second person can transform the social meaning that would have been present with just one person. Although we examined all the behavioral combinations and their impact on DIAMONDS inferences, given the combinatorics involved, we had to reduce the task for a given participant, who would otherwise have had to make over 1,000 individual judgments.
Conclusion
In this study we introduce a basic yet novel framework for examining the fundamental elements that define a social situation. We demonstrate the complex array of possible inferences that arise from an extremely raw and basic input of 4 sets of physical movement, each limited to just 2 levels. Clearly, human interaction is much more nuanced and dynamic than what we have shown here. That said, this study points to the exciting opportunities that new software and technology afford us in our goal of constructing a predictive taxonomy of physical movements and their impact on social inferences, intentions, goals, attitudes, and situational meanings.
Chapter 3: 3rd Person Sequential Social Perception of Text-Based Representation of Physical Movement using Situational DIAMONDS Measurement
Introduction
Study 1 examined how combinations of four behavioral indicators of speed (slow, fast), movement (towards, away), distance (near, far), and gesture (open, closed) underlie inferences about the social situation indicated by those movements from the 1st person perspective of a single communicator. Human communication, however, clearly involves more than a one-way perception of a single communicator. Indeed, the social meaning attached to a given social scenario transforms from moment to moment as individuals within an interaction co-create meaning via sequential responses to one another. The following study aimed to offer a sampling of the complexities that arise in social meaning simply by introducing a second communicator to the framework defined in Study 1. That is, while one person's series of movement cues may engage a particular series of (DIAMONDS-based) social inferences, a second person's hypothetical set of reactionary kinesthetic motion cues to each of the first person's potential motion cues (Study 1) creates an elaborate matrix of new possible social inferences that either replicate, shift, or drastically transform the social inferences gleaned from a single person's set of actions.
Recall that infants can begin to distinguish animate agents with intention from inanimate objects based on the contingency of motion/action cues. This importance of action contingency in dynamic cues thus represents a justification for the present study at the fundamental level of understanding the perception of intentionality. Watson (1972) found that 20-month-old infants responded to both "contingent" caregivers and contingent mobiles with an equal amount of smiling, perhaps evidence that social responsiveness evolved based on contingent stimuli rather than just the physical human face (Watson & Ramey, 1972). The present study was virtually identical in form to Study 1 with the exception of the inclusion of a second agent/person and the perspective taken by the participant. Specifically, Study 1 described an agent's actions directed at the participant from a 1st person perspective. Study 2, on the other hand, described the actions of two agents toward one another, with the participant as a passive 3rd person observer. Other than these distinctions, the identical 4 movement categories (Distance, Direction, Speed, Gesture) and the 8 DIAMONDS social situational categories (Duty, Intellect, Adversity, Mating, pOsitivity, Negativity, Deception, Sociality) from Study 1 were re-used in the present study. Naturally, the addition of the second agent in Study 2 transformed the methodological design used in Study 1; see details in the Method section.
The long-term goal of this line of work is to examine how different agents' movements (e.g., away from or towards the viewer) and perspective (e.g., third person or first person) affect raters' judgements of the motives and characteristics of the agent. We aim to accomplish this by examining the meaning derived from a 2-part sequence of an interaction. We note that most social situations warrant more than a single hypothetical movement/goal/action/intent from one individual. Specifically, a person's reaction to a given situation changes the social meaning attributed to the social interaction. Essentially, we aim to demonstrate that these 2-step interactions change the meaning of a given situation. Participants took on a 3rd person perspective as they observed an interaction taking place between two hypothetical social agents.
Method
Participants
A total of 282 participants were recruited from the University of Southern California Psychology subject pool over the course of two academic semesters, from Fall 2016 to Spring 2017. Participants were given the choice of taking one of two versions of the study: Study A or Study B. Study A consisted of interactions that take place at a Far Distance and Study B consisted of interactions that take place at a Near Distance. Of the 282 participants, 150 belonged to Study A and 132 belonged to Study B. The reasoning for this split into two sub-studies is explained in the Design section below.
Figure 1
Model of study design: Person 1 (2x2x2) x Person 2 (2x2x2) across Distance (Near and Far)
Design
The statistical design of this study consisted of two separate 2x2x2x2x2x2 6-way repeated measures MANOVAs, each crossing Person 1 movements (2x2x2) with Person 2 movements (2x2x2). As seen in Figure 1, the movement categories examined included Direction, Speed, and Gesture for both Person 1 and Person 2. These movement combinations were examined for their effect on ratings of the 8 situational DIAMONDS: Duty, Intellect, Adversity, Mating (Romance), pOsitivity, Negativity, Deception, and Sociality. Unlike in Study 1, Distance was not factored into the MANOVA model, for two reasons. First, Distance is assumed to be constant between two communicators; that is, if a hypothetical Person 1 is far away from a hypothetical Person 2, Person 2 is naturally also far away from Person 1. Second, adding Distance would complicate an already highly complex 6-way model. Not only would this make the analysis difficult to interpret, a major concern was the feasibility of participants reliably providing ratings of over 1,000 separate scenarios. As such, we decided to separate the study into two sub-studies that examined the above 6-way model at a Far Distance and at a Near Distance, respectively.
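To make the judgment load behind this split concrete, the counts implied by the design can be verified with a few lines of arithmetic (a sketch; the variable names are ours):

```python
# Number of within-subject conditions and slider judgments implied by the design.
levels = 2                 # each movement factor has 2 levels
person1_factors = 3        # Direction, Gesture, Speed for Person 1
person2_factors = 3        # Direction, Gesture, Speed for Person 2
diamonds_items = 8         # one slider item per DIAMONDS dimension

conditions = levels ** (person1_factors + person2_factors)    # 2**6 = 64 per sub-study
judgments_per_substudy = conditions * diamonds_items          # 64 * 8 = 512
judgments_if_combined = judgments_per_substudy * 2            # 1,024 if Distance were crossed in as well
print(conditions, judgments_per_substudy, judgments_if_combined)
```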
Materials
Participants were exposed to text-based descriptions of 64 different randomized combinations of body movements and indicated the likelihood that each set of movements would involve a given DIAMONDS situation in an online questionnaire on Qualtrics. For instance, "Person 1 moves towards Person 2 slowly with open arms. Person 2 moves towards Person 1 quickly with closed arms." See Table 1 in the Appendix for a full list of stimuli descriptions presented to participants. Participants were asked to respond to each of the DIAMONDS situational dimensions for each set of movement stimuli on a graphical slider that ranged from 0 to 7. The decision to use a graphical slider as opposed to a conventional Likert scale was to mitigate the reliability issues that arise from forcing participants to select whole integers as values of judgement. An example of an item that addressed the Duty element of the DIAMONDS was, "How likely is this situation to involve work, tasks, or duties?" The wording of the DIAMONDS measurement items for this study was derived from Serfass and Sherman (2015).
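As an illustration, the 64 two-person stimulus descriptions can be generated from the factor levels with a short script (a sketch that assumes the sentence template shown above; the "moves away from" phrasing and the helper function are our own guesses, not taken from the actual Qualtrics survey):

```python
from itertools import product
import random

DIRECTIONS = {"towards": "moves towards", "away": "moves away from"}
SPEEDS = ["slowly", "quickly"]
GESTURES = ["open", "closed"]

def clause(actor, target, direction, speed, gesture):
    # e.g., "Person 1 moves towards Person 2 slowly with open arms."
    return f"{actor} {DIRECTIONS[direction]} {target} {speed} with {gesture} arms."

one_person = list(product(DIRECTIONS, SPEEDS, GESTURES))   # 2 x 2 x 2 = 8 movement combinations
stimuli = [clause("Person 1", "Person 2", d1, s1, g1) + " " + clause("Person 2", "Person 1", d2, s2, g2)
           for (d1, s1, g1), (d2, s2, g2) in product(one_person, repeat=2)]

random.shuffle(stimuli)    # randomized presentation order, as described above
print(len(stimuli))        # 64
```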
Results
Mean Comparisons
Figures 2 (Far) and 3 (Near) graphically represent the collective mean scores for each of the 6-way movement combinations according to Distance. The most noticeable pattern for both graphs, and particularly for Far Distance, is that greater polarity and competition in judgments is observed when Person 1 moves Towards Person 2 as opposed to Away.
The sheer number of conditions in this study precludes a complete report of all the individual data points that resulted from the 6-way repeated measures MANOVA. For instance, the 64 separate conditions for each of Far and Near Distance mean that there are 64 separate mean scores for the complete 6-way model of movements for each of the 8 DIAMONDS (512 different mean scores for each of Far/Near, and 1,024 different mean scores in total). Clearly, it would not be prudent to report each individual mean score. Instead, we report on the general patterns of mean scores observed for a particular selection of the DIAMONDS. Specifically, we report on how the high and low mean scores of Study 1, which reflect the movements of only one communicator, change once the second communicator's reactions are added. To maximize explanatory power while maintaining efficiency, we report only on the 6-level mean scores of movements from a Far Distance for Duty, Adversity, and Mating (Romance). Complete tables of the remaining results may be found in the Appendix.
Figure 2
Graphical representation of mean scores (Far Distance)
[Bar graph of mean DIAMONDS ratings for each Person 1 x Person 2 movement combination at a Far Distance]
Figure 3
Graphical representation of mean scores (Near Distance)
[Bar graph of mean DIAMONDS ratings for each Person 1 x Person 2 movement combination at a Near Distance]
Duty. In Study 1, the highest values for Duty were observed when one person moved Towards, Slowly, with a Closed Gesture from both Far (m = 3.06) and Near Distance (m = 3.01).
In Study 2, the highest value for Duty was also observed when Person 1 moved Towards, Slowly, with a Closed Gesture. That said, Person 2's reactionary movements drastically changed the degree to which participants deemed the situation to be characterized by Duty. The highest value here for Duty (m = 3.035) occurred when Person 2 moved Towards, Quickly, with a Closed Gesture. This may reflect a hypothetical scenario in which Person 1 represents a person of power or authority slowly approaching Person 2 with a duty or task, represented by the closed gesture. Person 2's duty and responsibility as a subordinate may warrant the fast rate of movement, while the movement Towards represents compliance and the closed gesture again represents duty. While moving Towards, Slowly, with a Closed Gesture represents Duty in a one-person scenario, Person 2's alternative sequential actions demonstrate the variability in the collective social meaning of even a simple 2-person interaction. For instance, Person 2 moving Towards, Slowly, with an Open Gesture curiously had the effect of decreasing participants' ratings of Duty (m = 2.413) to levels below the observed rating in Study 1 (m = 3.06).
The lowest values for Duty in Study 1 tended to occur when one person moved Towards, Quickly, with an Open Gesture from both Far (m = 1.92) and Near Distance (m = 1.87). Generally, a pattern of low ratings was observed for all potential Person 2 reactions to this sequence of movements, though some degree of variability was observed. As noted above, Person 2 moving Towards, Slowly, with a Closed Gesture again had the effect of raising ratings of Duty (m = 2.401), even in response to a Person 1 action set that represented the lowest rating of Duty in Study 1. In contrast, Person 2 moving Towards, Quickly, with an Open Gesture produced relatively low ratings of Duty (m = 1.93).
The absolute lowest rating of Duty across all 2-person movement interactions was observed when Person 1 moved Towards, Slowly, with an Open Gesture and Person 2 reacted by moving Towards, Quickly, with an Open Gesture (m = 1.85). Generally, an open gesture had the effect of decreasing ratings of Duty, indicating that Duty runs counter to the social inferences attached to an open gesture. Openness in general is an abstract and non-specific description of physical gesture, but it certainly indicates a degree of freedom and flexibility. Situations involving Duty, by contrast, involve a certain restriction of personal control as one acts to fulfill a need as opposed to a personal want.
Adversity. In study 1, high values for adversity tended to be for when a person moved
Towards, Quickly, with Closed Gesture. Indeed, in study 2, we observed a relatively consistent pattern of high ratings of Adversity when person 1 moved as specified above. That said, there
was some variation based on person 2’s reactionary movement sequences. For example, person
2 moving Away, Quickly, with a Closed Gesture yielded high ratings of Adversity (m = 3.82).
On the other hand, when person 2 was described as moving Towards, Slowly, with Open gesture,
this action sequence negated the degree to which the situation was deemed “threatening” (m =
2.93).
Also in study 1, low values for Adversity were observed for movement Towards with Open
Gesture – regardless of Speed (as well as Distance). Study 2 revealed the full range of
combinatorial possibilities of interpretations based on a second person’s response to this
relatively simple set of movements. For example, the highest individual rating of Adversity in
study 2 was observed when person 1 moved Towards, Slowly, with Open Gesture and person 2
responded by moving Away, Quickly, with Closed Gesture (m = 4.158). Interestingly, the same
action sequence from person 1 also yielded the lowest individual rating of Adversity in study 2.
In this action sequence, person 2 responded by moving Towards, Slowly, with Open Gesture (m
= 1.42). In fact, person 2’s speed of movement seemed to have a minor impact on this effect as
moving Towards, Quickly, with Open Gesture also had the effect of creating low ratings of
Adversity (m = 1.44). Generally, this pattern of low Adversity when person 2 moved Towards
with Open Gesture and high Adversity when person 2 moved Away with Closed Gesture was
also observed when Person 1 moved Fast (see Table 1).
Table 1
Mean scores: Adversity; Person 1 Towards, Open, Slow and Fast.
DIAMONDS Person 1 (Direction, Gesture, Speed) Person 2 (Direction, Gesture, Speed) Mean
Adversity Towards, Open, Slow Towards, Open, Slow 1.418
Towards, Open, Fast 1.441
Towards, Closed, Slow 2.858
Towards, Closed, Fast 2.74
Away, Open, Slow 2.771
Away, Open, Fast 3.068
Away, Closed, Slow 3.707
Away, Closed, Fast 4.158
Towards, Open, Fast Towards, Open, Slow 1.909
Towards, Open, Fast 1.457
Towards, Closed, Slow 2.898
Towards, Closed, Fast 2.859
Away, Open, Slow 2.925
Away, Open, Fast 3.083
Away, Closed, Slow 3.607
Away, Closed, Fast 4.153
A general pattern observed was that person 1’s movement Away had the effect of neutralizing
any tendency to rate a situation as one of Adversity. All ratings of person 1 moving Away,
regardless of person 2’s reaction to this, had a middling effect where participants did not seem to
feel the action sequences warranted either high or low Adversity.
Romance. In study 1, high values for Romance generally tended to be for Movement
Towards with Open Gesture, although highest values were observed for Movement Towards
with Open Gesture and Fast Speed, for both Far (m = 4.12) and Near Distance (m = 4.41). Table 2 depicts the full range of shifts in meaning that occur when person 2's action sequences are
added in reaction to person 1’s action sequences. Person 2’s movement Away, Quickly, with a
Closed Gesture received low ratings of Romance (m = 2.05), intuitively demonstrating the
aversion that person 2 feels. On the contrary, person 2 moving Towards, Quickly, with Open
Gesture had the effect of enhancing ratings of Romance above study 1 levels (m = 4.75). In fact,
this set of 2-person action sequences (Person 1 moves Towards, Slowly, Open Gesture; Person 2
moves Towards, Quickly, with Open Gesture) represents the highest observed rating of Romance
across all available conditions.
Table 2
Mean scores: Romance (Mating); Person 1 Towards, Open, Slow and Fast.
DIAMONDS Person 1 (Direction, Gesture, Speed) Person 2 (Direction, Gesture, Speed) Mean
Romance Towards, Open, Slow Towards, Open, Slow 4.234
Towards, Open, Fast 4.752
Towards, Closed, Slow 2.73
Towards, Closed, Fast 3.039
Away, Open, Slow 2.425
Away, Open, Fast 2.318
Away, Closed, Slow 2.205
Away, Closed, Fast 2.051
Towards, Open, Fast Towards, Open, Slow 3.755
Towards, Open, Fast 4.558
Towards, Closed, Slow 2.535
Towards, Closed, Fast 2.816
Away, Open, Slow 2.422
Away, Open, Fast 2.508
Away, Closed, Slow 2.38
Away, Closed, Fast 2.156
In study 1, the lowest values for Romance generally tended to be for Movement Away with
Closed Gesture, although lowest values were observed for Movement Away with Closed Gesture
and Fast Speed, for both Far (m = 1.29) and Near Distance (m = 1.27). Indeed, in study 2,
person 1 moving Away had the general effect of creating low perceived associations with
Romance-based situations, regardless of other person 1 or person 2 action sequences.
Interestingly however, the lowest overall rating for Romance in study 2 was observed in an
action sequence involving person 1 moving Towards. As can be seen in Table 3, the lowest rating of Romance was observed when person 1 moved Towards, Slowly, with Closed Gesture,
and person 2 moved Away, Quickly, with Closed Gesture (m = 1.74). Interestingly, this person
2 action sequence for low Romance is identical to the person 2 action sequence for high
Adversity, suggesting a somewhat inverse nature in how individuals physically perceive
romantic and threatening social situations.
Table 3
Mean scores: Romance (Mating); Person 1 Towards, Closed, Slow and Fast.
DIAMONDS Person 1 (Direction, Gesture, Speed) Person 2 (Direction, Gesture, Speed) Mean
Romance Towards, Closed, Slow Towards, Open, Slow 2.696
Towards, Open, Fast 2.724
Towards, Closed, Slow 2.031
Towards, Closed, Fast 2.057
Away, Open, Slow 2.272
Away, Open, Fast 2.156
Away, Closed, Slow 1.896
Away, Closed, Fast 1.741
Towards, Closed, Fast Towards, Open, Slow 2.805
Towards, Open, Fast 2.692
Towards, Closed, Slow 2.176
Towards, Closed, Fast 2.215
Away, Open, Slow 2.204
Away, Open, Fast 2.235
Away, Closed, Slow 2.38
Away, Closed, Fast 2.156
Ad-Hoc Romance Analysis. Recall that one of the large take-aways from Study 1 was the polarization of negative and positive judgments. It is worth noting that this identical pattern is observed in the present study, but again, the sheer number of conditions makes any complete graph or table difficult to decipher. As such, Figure 4 presents a small subset of Figure 2 above, comparing the judgments for Romance and Adversity (Threat), two situations that may be confused with one another in the absence of sufficient information. As can be seen in Figure 4,
Romance and Threat have a nearly inverse relationship with one another, further strengthening
the argument for needing more fine-grained analysis tools to understand the complexities of
situational judgments and inference-making.
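To make the "nearly inverse" claim concrete, the short sketch below computes a correlation between the Adversity and Romance cell means reported in Tables 1 and 2 for the Person 1 Towards, Open, Slow condition. It is an illustrative calculation only, not part of the original analysis, and the variable names are my own.

```python
import numpy as np

# Cell means taken from Tables 1 and 2 (Person 1: Towards, Open, Slow), ordered by
# the same eight Person 2 action sequences (Towards/Away x Open/Closed x Slow/Fast).
romance_means = np.array([4.234, 4.752, 2.730, 3.039, 2.425, 2.318, 2.205, 2.051])
adversity_means = np.array([1.418, 1.441, 2.858, 2.740, 2.771, 3.068, 3.707, 4.158])

# A strongly negative coefficient is consistent with the nearly inverse pattern
# between Romance and Threat described in the text.
r = np.corrcoef(romance_means, adversity_means)[0, 1]
print(f"Pearson r between Romance and Adversity cell means: {r:.2f}")
```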
Figure 4
Mean score comparison of Romance and Threat (From Near Distance)
Another significant consideration in the current study is a comparison with the
situation-specific results of Study 1. Figures 5 and 7 note the highest and lowest points attributed to Romance, respectively. The major goal of the current study that was left
unanswered by Study 1 is to examine the impact of a second person’s sequential movement
response to the first person. Figure 6 directly compares the movement combination from Study 1 that garnered the highest rating for Romance with the range of possible Person 2 responses from the current study. While there is a convergence in the results of the two studies when it involves two people moving Towards each other with Open Gesture, we clearly see a reduction in Romance ratings across the other possible Person 2 movement combinations. Essentially, the range of possibilities for Person 2's movements effectively transforms the situational inferences for what was the highest rating of Romance when it involved just one person's movement combination. Interestingly, Figure 8 shows no convergence between Study 1 and Study 2, as the inclusion of a second person, regardless of movement combination, indicates an elevated perception of Romance across the board.
Figure 5
Highest rating of Romance from Study 1
Figure 6
Comparison of Highest Romance Study 1 with Study 2 Possible Responses
Figure 7
Lowest rating of Romance from Study 1
Figure 8
Comparison of Lowest Romance Study 1 with Study 2 Possible Responses
Univariate Interaction Effects
Although the full range of person 1 and person 2 actions were analyzed in a repeated
measures MANOVA, recall we are mainly interested in the sequential transformation of social
inferences from one person to two people. As such, only interaction effects involving both
person 1 and person 2 actions will be reported here.
A significant 2-way univariate interaction effect of person 1’s movement Direction and
person 2’s movement Direction was observed for Intellect, F(131) = 11.117, p = .001,
Adversity, F(131) = 75.489, p < .001, Romance, F(131) = 36.706, p < .001, Positivity, F(131) =
100.246, p < .001, Negativity, F(131) = 89.251, p < .001, and Deception, F(131) = 65.835, p <
.001. A significant 5-way univariate interaction effect of person 1’s Direction and Speed and
person 2’s Direction, Gesture, and Speed was observed for Intellect, F(131) = 7.004, p = .009. A
full range of significant univariate interaction effects can be found in Table 1.
Table 1
Significant Interaction Effects for Person 1 and Person 2 Action Sequences (Far Distance Only)
Source df Mean
Square
F Sig. Partial
Eta
Squared
P1Direction
*
P2Direction
Duty 1 4.896 1.413 .237 .011
Intellect 1 17.995 11.117 .001 .078
Adversity 1 420.030 75.489 .000 .366
Romance 1 143.617 36.706 .000 .219
Positivity 1 301.112 100.246 .000 .434
Negativity 1 403.593 89.251 .000 .405
Deception 1 295.092 65.835 .000 .334
Sociality 1 .108 .023 .881 .000
P1Gesture
*
P2Direction
Duty 1 3.357 2.256 .135 .017
Intellect 1 8.002 6.226 .014 .045
Adversity 1 120.989 49.610 .000 .275
Romance 1 107.009 46.569 .000 .262
Positivity 1 183.740 88.366 .000 .403
Negativity 1 130.986 55.858 .000 .299
Deception 1 68.639 26.842 .000 .170
Sociality 1 2.726 1.631 .204 .012
P1Direction * P1Gesture * P2Direction
Duty 1 13.486 8.798 .004 .063
Intellect 1 9.327 7.969 .006 .057
Adversity 1 96.269 42.132 .000 .243
Romance 1 100.278 64.380 .000 .330
Positivity 1 119.705 69.216 .000 .346
Negativity 1 134.414 68.817 .000 .344
Deception 1 40.821 20.307 .000 .134
Sociality 1 2.243 1.077 .301 .008
P1Speed *
P2Direction
Duty 1 1.504 1.172 .281 .009
Intellect 1 1.333 1.144 .287 .009
Adversity 1 9.520 6.176 .014 .045
Romance 1 11.112 10.421 .002 .074
Positivity 1 10.339 10.503 .002 .074
Negativity 1 18.344 11.023 .001 .078
Deception 1 16.073 11.703 .001 .082
Sociality 1 .316 .186 .667 .001
P1Gesture
* P1Speed
*
P2Direction
Duty 1 .089 .110 .741 .001
Intellect 1 .152 .153 .696 .001
Adversity 1 4.581 3.024 .084 .023
Romance 1 8.822 6.844 .010 .050
Positivity 1 10.545 8.820 .004 .063
Negativity 1 6.936 5.086 .026 .037
Deception 1 16.282 12.165 .001 .085
Sociality 1 .937 .712 .400 .005
P1Direction
*
P2Gesture
Duty 1 3.337 1.542 .216 .012
Intellect 1 5.007 3.757 .055 .028
Adversity 1 75.612 37.333 .000 .222
Romance 1 47.147 28.818 .000 .180
Positivity 1 137.498 66.973 .000 .338
Negativity 1 162.517 74.987 .000 .364
Deception 1 65.085 40.126 .000 .234
Sociality 1 14.545 10.307 .002 .073
P1Gesture
*
P2Gesture
Duty 1 8.189 3.695 .057 .027
Intellect 1 4.540 3.358 .069 .025
Adversity 1 147.458 52.621 .000 .287
Romance 1 24.246 7.422 .007 .054
Positivity 1 168.554 53.438 .000 .290
Negativity 1 167.160 49.416 .000 .274
Deception 1 100.762 41.352 .000 .240
Sociality 1 3.439 1.868 .174 .014
P1Direction * P1Gesture * P2Gesture
Duty 1 .234 .225 .636 .002
Intellect 1 2.656 2.224 .138 .017
Adversity 1 19.243 12.149 .001 .085
Romance 1 23.897 14.864 .000 .102
Positivity 1 56.600 37.618 .000 .223
Negativity 1 35.460 23.735 .000 .153
Deception 1 17.211 8.656 .004 .062
Sociality 1 5.658 4.754 .031 .035
P1Speed *
P2Gesture
Duty 1 .238 .166 .684 .001
Intellect 1 1.197 1.178 .280 .009
Adversity 1 3.250 1.823 .179 .014
Romance 1 5.192 4.570 .034 .034
Positivity 1 .768 .730 .394 .006
Negativity 1 12.949 10.635 .001 .075
Deception 1 2.290 1.662 .200 .013
Sociality 1 .143 .152 .698 .001
P2Direction
*
P2Gesture
Duty 1 29.498 21.815 .000 .143
Intellect 1 3.932 3.338 .070 .025
Adversity 1 10.061 4.177 .043 .031
Romance 1 152.973 64.276 .000 .329
Positivity 1 138.470 58.941 .000 .310
Negativity 1 27.980 14.569 .000 .100
Deception 1 2.315 1.457 .230 .011
Sociality 1 8.536 6.298 .013 .046
P1Direction
*
P2Direction
*
P2Gesture
Duty 1 1.086 .656 .419 .005
Intellect 1 1.038 1.043 .309 .008
Adversity 1 8.288 4.002 .048 .030
Romance 1 40.846 25.616 .000 .164
Positivity 1 75.034 65.688 .000 .334
Negativity 1 22.137 13.875 .000 .096
Deception 1 20.989 12.933 .000 .090
Sociality 1 6.883 4.965 .028 .037
P1Gesture
*
P2Direction
*
P2Gesture
Duty 1 .126 .095 .759 .001
Intellect 1 1.214 .892 .347 .007
Adversity 1 1.768 1.017 .315 .008
Romance 1 39.217 21.620 .000 .142
Positivity 1 54.468 40.176 .000 .235
Negativity 1 18.722 11.789 .001 .083
Deception 1 30.572 29.237 .000 .182
Sociality 1 17.736 12.812 .000 .089
P1Direction * P1Gesture * P2Direction * P2Gesture
Duty 1 .001 .001 .981 .000
Intellect 1 .681 .550 .460 .004
Adversity 1 .144 .131 .717 .001
Romance 1 29.357 24.057 .000 .155
Positivity 1 23.788 15.553 .000 .106
Negativity 1 4.191 3.148 .078 .023
Deception 1 .036 .022 .883 .000
Sociality 1 1.604 1.484 .225 .011
P1Speed *
P2Direction
*
P2Gesture
Duty 1 3.346 3.766 .054 .028
Intellect 1 .455 .572 .451 .004
Adversity 1 2.117 1.243 .267 .009
Romance 1 3.317 3.108 .080 .023
Positivity 1 2.935 2.908 .091 .022
Negativity 1 1.649 1.516 .220 .011
Deception 1 1.035 .669 .415 .005
Sociality 1 4.727 5.278 .023 .039
P1Gesture
* P1Speed
*
P2Direction
*
P2Gesture
Duty 1 .186 .140 .709 .001
Intellect 1 7.286 7.972 .005 .057
Adversity 1 .472 .336 .563 .003
Romance 1 .494 .417 .519 .003
Positivity 1 .389 .343 .559 .003
Negativity 1 .466 .322 .571 .002
Deception 1 .059 .040 .841 .000
Sociality 1 1.536 1.238 .268 .009
P1Gesture
* P2Speed
Duty 1 .368 .335 .563 .003
Intellect 1 .002 .002 .966 .000
Adversity 1 .001 .001 .975 .000
Romance 1 11.898 12.870 .000 .089
Positivity 1 .932 .675 .413 .005
Negativity 1 .342 .262 .610 .002
Deception 1 .295 .231 .632 .002
Sociality 1 2.878 1.991 .161 .015
P2Direction
* P2Speed
Duty 1 .809 .918 .340 .007
Intellect 1 1.157 1.336 .250 .010
Adversity 1 9.632 7.642 .007 .055
Romance 1 18.680 15.636 .000 .107
Positivity 1 22.912 19.436 .000 .129
Negativity 1 9.796 8.208 .005 .059
Deception 1 3.215 2.572 .111 .019
Sociality 1 6.313 5.584 .020 .041
P1Direction
*
P2Direction
* P2Speed
Duty 1 .508 .487 .487 .004
Intellect 1 1.478 1.793 .183 .014
Adversity 1 21.773 16.252 .000 .110
Romance 1 9.849 8.381 .004 .060
Positivity 1 15.404 12.065 .001 .084
Negativity 1 11.430 6.408 .013 .047
Deception 1 6.635 4.781 .031 .035
Sociality 1 .756 .604 .438 .005
P1Gesture
*
P2Direction
* P2Speed
Duty 1 .219 .253 .616 .002
Intellect 1 .013 .016 .899 .000
Adversity 1 1.062 .722 .397 .005
Romance 1 13.442 10.842 .001 .076
Positivity 1 4.704 4.000 .048 .030
Negativity 1 .215 .140 .709 .001
Deception 1 .847 .757 .386 .006
Sociality 1 .336 .281 .597 .002
P1Direction
*
P1Gesture
*
P2Direction
* P2Speed
Duty 1 1.879 2.448 .120 .018
Intellect 1 .532 .693 .407 .005
Adversity 1 8.480 6.401 .013 .047
Romance 1 4.562 3.884 .051 .029
Positivity 1 1.068 .987 .322 .007
Negativity 1 3.456 2.911 .090 .022
Deception 1 8.605 9.668 .002 .069
Sociality 1 7.781 5.258 .023 .039
P1Speed *
P2Direction
* P2Speed
Duty 1 .355 .387 .535 .003
Intellect 1 .373 .321 .572 .002
Adversity 1 1.960 1.278 .260 .010
Romance 1 .098 .123 .727 .001
Positivity 1 2.187 2.099 .150 .016
Negativity 1 .003 .002 .962 .000
Deception 1 6.562 6.625 .011 .048
Sociality 1 .209 .225 .636 .002
P2Gesture
* P2Speed
Duty 1 .134 .145 .704 .001
Intellect 1 .023 .030 .862 .000
Adversity 1 2.204 1.257 .264 .010
Romance 1 4.520 3.612 .060 .027
Positivity 1 4.908 5.378 .022 .039
Negativity 1 3.746 3.846 .052 .029
Deception 1 1.752 1.367 .244 .010
Sociality 1 .091 .071 .790 .001
P1Gesture
*
P2Gesture
* P2Speed
Duty 1 2.700 2.211 .139 .017
Intellect 1 .131 .144 .705 .001
Adversity 1 .000 .000 .985 .000
Romance 1 9.317 8.471 .004 .061
Positivity 1 .275 .297 .587 .002
Negativity 1 .161 .131 .718 .001
Deception 1 .080 .067 .796 .001
Sociality 1 .935 .681 .411 .005
P1Direction
*
P1Gesture
*
P2Gesture
* P2Speed
Duty 1 2.514 2.359 .127 .018
Intellect 1 .152 .178 .674 .001
Adversity 1 5.783 4.637 .033 .034
Romance 1 .012 .013 .909 .000
Positivity 1 2.022 2.349 .128 .018
Negativity 1 .005 .003 .953 .000
Deception 1 .003 .003 .959 .000
Sociality 1 .039 .034 .854 .000
P1Speed *
P2Gesture
* P2Speed
Duty 1 1.064 .926 .338 .007
Intellect 1 6.574 8.325 .005 .060
Adversity 1 .895 .914 .341 .007
Romance 1 .259 .249 .619 .002
Positivity 1 2.449 1.785 .184 .013
Negativity 1 .852 .530 .468 .004
Deception 1 7.479 6.599 .011 .048
Sociality 1 3.893 4.053 .046 .030
P1Direction
* P1Speed
*
P2Gesture
* P2Speed
Duty 1 1.290 1.489 .224 .011
Intellect 1 .171 .201 .654 .002
Adversity 1 2.236 2.606 .109 .020
Romance 1 .758 .611 .436 .005
Positivity 1 .002 .002 .961 .000
Negativity 1 6.927E-05 .000 .994 .000
Deception 1 4.869 4.082 .045 .030
Sociality 1 .007 .006 .938 .000
P1Direction
*
P1Gesture
* P1Speed
*
P2Gesture
* P2Speed
Duty 1 1.458 1.144 .287 .009
Intellect 1 1.486 1.863 .175 .014
Adversity 1 9.367 6.781 .010 .049
Romance 1 3.132 3.055 .083 .023
Positivity 1 4.103 3.478 .064 .026
Negativity 1 .000 .000 .990 .000
Deception 1 .098 .093 .761 .001
Sociality 1 .009 .009 .924 .000
P2Direction
*
P2Gesture
* P2Speed
Duty 1 .583 .511 .476 .004
Intellect 1 .075 .091 .764 .001
Adversity 1 4.350 3.933 .049 .029
Romance 1 .004 .003 .955 .000
Positivity 1 .104 .111 .739 .001
Negativity 1 3.323 2.521 .115 .019
Deception 1 4.639 4.571 .034 .034
Sociality 1 1.999 1.782 .184 .013
P1Gesture
*
P2Direction
*
P2Gesture
* P2Speed
Duty 1 .470 .519 .472 .004
Intellect 1 .002 .002 .960 .000
Adversity 1 .227 .186 .667 .001
Romance 1 .525 .517 .473 .004
Positivity 1 5.460 4.780 .031 .035
Negativity 1 .177 .125 .724 .001
Deception 1 .396 .337 .562 .003
Sociality 1 4.259 3.947 .049 .029
P1Direction
*
P1Gesture
*
P2Direction
*
P2Gesture
* P2Speed
Duty 1 2.532 3.174 .077 .024
Intellect 1 .485 .582 .447 .004
Adversity 1 4.858 4.063 .046 .030
Romance 1 1.391 1.249 .266 .009
Positivity 1 .221 .216 .643 .002
Negativity 1 4.447 4.647 .033 .034
Deception 1 2.037 2.314 .131 .017
Sociality 1 .006 .004 .947 .000
P1Direction
* P1Speed
*
P2Direction
*
P2Gesture
* P2Speed
Duty 1 .245 .240 .625 .002
Intellect 1 6.278 7.004 .009 .051
Adversity 1 1.006 .771 .381 .006
Romance 1 .000 .000 .989 .000
Positivity 1 .051 .073 .788 .001
Negativity 1 1.608 1.783 .184 .013
Deception 1 .621 .517 .474 .004
Sociality 1 1.936 1.722 .192 .013
P1Direction
*
P1Gesture
* P1Speed
*
P2Direction
*
P2Gesture
* P2Speed
Duty 1 1.165 .821 .367 .006
Intellect 1 1.933 2.238 .137 .017
Adversity 1 .001 .001 .977 .000
Romance 1 .552 .599 .440 .005
Positivity 1 4.179 3.862 .052 .029
Negativity 1 1.755 1.654 .201 .012
Deception 1 .010 .006 .937 .000
Sociality 1 .115 .089 .766 .001
Discussion
The major goal of the current study that was left unanswered by Study 1 is to examine the
impact of a second person’s sequential movement response to the first person. Analysis
demonstrates that the range of possibilities for Person 2's movements effectively transforms the situational inferences attached to any of Person 1's movement combinations, in both Study 1 and the current study.
While the results of the current study are extremely valuable for obtaining insight into the
different combinations of body movement that allow people to make inferences about internal
mental states and the assignment of social meaning, the current study has the drawback of being
entirely based on text-based narrative stimuli. In other words, various body movements were described in the form of text, but this form of stimuli presents a few issues. First, we
do not experience our perceptual world via text and so the case can be made that the results of
the current study do not generalize to real-world assignment of social meaning. Second, text is
subject to the interpretation and imagination of the study participant, posing a risk to reliability.
In order to mitigate the issues raised by a text-based stimulus, I replicated the current study with visual simulations of body movement generated in SmartBody (Feng, Huang, Kallmann, & Shapiro,
2012; Shapiro, 2011; Thiebaux, Marsella, Marshall, & Kallmann, 2008), a virtual character
animation platform written in C++ originally developed at the USC Institute for Creative
Technologies. SmartBody allows manipulation of movement, gaze, and other nonverbal behaviors, making it an excellent tool for research on social perception. Study 3, which uses these SmartBody-generated stimuli, is reported in the next chapter.
Chapter 4: Using SmartBody to Examine the Role of Movement in Social Perception
Introduction
The current study attempts to expand on the results from Study 1 and Study 2 by utilizing
software to generate visual stimuli. The goal of this study is consistent with the previous studies
in that it attempts to analyze the combinatory effects of individual variations of body movement,
orientation, and gesture on the social perception of situational inferences. Through this study,
we hope to provide objective conclusions as to the physical movement components that entail a
selected variety of social situations. Social situations represent the fundamental basis upon
which all our interactions and social lives operate. As such, this study attempts to provide a
somewhat predictive framework for identifying and parsing out the physical components of a
social situation – at least from the perspective of a 1st-person recipient.
Method
Participants
A total of 197 participants were recruited from the University of Southern California Psychology
subject pool over the course of one academic semester.
Figure 1
Model of study design: (2x2x2x2)
Design
The statistical design of this study was a 2x2x2x2 4-way repeated measures MANOVA
to examine the effect of different combinations of movement, each at 2 levels (Far-Near, Towards-Away, Averted-Direct Gaze, Slow-Fast), on ratings of the 8 situational DIAMONDS (Duty, Intellect, Adversity, Mating (Romance), pOsitivity, Negativity, Deception, Sociality). The
individual dimensions of physical movement as well as the differentiated effects of 2-way, 3-
way, and 4-way combinations of physical motion are examined.
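For concreteness, the 16 within-subject stimulus conditions implied by this design can be enumerated as follows (an illustrative sketch; the factor labels mirror the two levels described above, and the ordering is arbitrary):

```python
from itertools import product

distance = ["Far", "Near"]
direction = ["Towards", "Away"]
gaze = ["Averted", "Direct"]
speed = ["Slow", "Fast"]

# Every participant saw all 16 combinations, presented in randomized order.
conditions = list(product(distance, direction, gaze, speed))
for i, cond in enumerate(conditions, start=1):
    print(i, cond)
print(len(conditions), "conditions")  # 16
```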
Materials
The participants were exposed to 16 different randomized stimuli on Qualtrics, which
included combinations of 4 types of body movement. Participants then provided ratings on the
likelihood that the above visual stimuli would involve a DIAMONDS situation on a graphical
slider that ranged from 0-7. The decision to use a graphical slider as opposed to a conventional
Likert-scale was to mitigate the reliability issues that arise from forcing participants to select
whole integers as values of judgement. An example of an item that addressed the Duty element
of the DIAMONDS was, “How likely is this situation to involve work, tasks, or duties?” The
wording of the DIAMONDS measurement items for this study was derived from Serfass and
Sherman (2015).
Figure 2
Screenshot of stimuli viewed by participants (actual stimuli consisted of moving characters)
Stimuli. The visual stimuli used in this experiment were generated using the animation software SmartBody, developed at the Institute for Creative Technologies at the University of Southern California. SmartBody (Feng, Huang, Kallmann, & Shapiro, 2012; Shapiro, 2011; Thiebaux, Marsella, Marshall, & Kallmann, 2008) is an open-source modular framework for animating virtual humans and other embodied characters, written in C++ and designed around the SAIBA (Situation, Agent, Intention, Behavior, Animation) framework's Behavior Markup
Language (BML) standard. Essentially, Smartbody is an engine that allows BML behavior
descriptions to be converted into real-time animations (Shapiro, 2011). Further information
about the standardization of virtual human construction into the Behavior Markup Language may
be found elsewhere (Kopp et al., 2006).
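To make the BML-to-animation pipeline more concrete, the fragment below sketches how the four movement variables in this study might be expressed as BML-style behavior strings that an engine such as SmartBody could consume. It is a hypothetical illustration only: the tag names, attribute names, and character identifiers are my own assumptions, not SmartBody's documented API.

```python
# Hypothetical sketch: composing BML-like behavior strings for one stimulus condition.
# Tag and attribute names are illustrative assumptions, not verified SmartBody BML.

def build_bml(character: str, direction: str, speed: str, gaze: str, target: str) -> str:
    """Return a BML-style string describing locomotion and gaze for one condition."""
    manner = "walk" if speed == "Slow" else "jog"
    # Move toward the viewer's position or away from it.
    locomotion = f'<locomotion target="{target}" manner="{manner}" facing="{direction.lower()}"/>'
    # Direct gaze at the viewer, or avert it to a neutral off-stage point.
    gaze_target = target if gaze == "Direct" else "offstage_point"
    gaze_tag = f'<gaze target="{gaze_target}"/>'
    return f'<bml character="{character}">{locomotion}{gaze_tag}</bml>'

print(build_bml("Character1", "Towards", "Fast", "Direct", "Viewer"))
```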
SmartBody can generate human-like, movement-based animations for a digital 3D character, such as locomotion (walking, jogging), facial expressions, gaze, gestures, head nods, and eye saccades. After generating these behavioral visualizations of a 3D character via SmartBody, the animations can be fed into existing game engines such as Unity and Ogre for the creation of character-based games. The graphical user interface (GUI) of SmartBody can be seen in Figure 3. For the purposes of this study, we adapted an ultra-user-friendly GUI (Figure 4) for SmartBody that allowed for simple manipulation of the movement variables relevant to this study (Distance, Direction, Gaze, Speed).
Figure 3
Screenshot of Smartbody GUI
Figure 4
Ultra-user-friendly GUI depicting buttons to manipulate Distance, Direction, Speed, and Gaze.
Results
Multivariate Effects
Multivariate main effects were observed for all 4 movement dimensions. Two-way multivariate interaction effects were observed for Gaze and Direction, F(8, 189) = 2.734, p = .007, and for Speed and Direction, F(8, 189) = 5.913, p < .001. A significant 3-way multivariate interaction effect was observed for Distance, Speed, and Direction, F(8, 189) = 3.81, p < .001. Finally, the sole 4-way interaction effect, of Distance, Speed, Gaze, and Direction, was significant, F(8, 189) = 2.641, p = .009.
Table 1
Multivariate Tests of Movement Dimensions
Value F Hypothesis df Error df Sig.
Intercept .912 243.504 8.000 189.000 .000
Distance .080 2.041 8.000 189.000 .044
Speed .099 2.587 8.000 189.000 .011
Gaze .086 2.220 8.000 189.000 .028
Direction .562 30.367 8.000 189.000 .000
Distance * Speed .074 1.894 8.000 189.000 .063
Distance * Gaze .049 1.229 8.000 189.000 .284
Speed * Gaze .033 .805 8.000 189.000 .598
Distance * Speed * Gaze .070 1.779 8.000 189.000 .083
Distance * Direction .031 .758 8.000 189.000 .640
Speed * Direction .200 5.913 8.000 189.000 .000
Distance * Speed * Direction .139 3.808 8.000 189.000 .000
Gaze * Direction .104 2.734 8.000 189.000 .007
Distance * Gaze * Direction .070 1.773 8.000 189.000 .085
Speed * Gaze * Direction .046 1.150 8.000 189.000 .332
Distance * Speed * Gaze * Direction .101 2.641 8.000 189.000 .009
Univariate Effects
Univariate main effects. Univariate main effects of Distance were observed for
Positivity, F(1, 196) = 7.44, p = .007, and Sociality, F(1, 196) = 8.28, p = .004. Univariate main
effects of Gaze were observed for Negativity, F(1, 196) = 6.30, p = .013, Deception, F(1, 196) =
6.59, p = .011, and Sociality, F(1, 196) = 6.77, p = .01. Univariate main effects of Direction
were observed for all 8 DIAMONDS. No main effects of Speed were observed (See Table 2).
Table 2
Univariate main effects of movement variables on DIAMONDS
Movement DIAMONDS Mean Square F Sig. Partial Eta Squared Observed Power
Distance Duty .001 .000 .986 .000 .050
Intellect .443 .368 .545 .002 .093
Adversity 5.231 2.221 .138 .011 .317
Romance 1.190 .824 .365 .004 .147
Positivity 11.430 7.442 .007 .037 .775
Negativity .084 .044 .835 .000 .055
Deception .454 .224 .637 .001 .076
Sociality 15.594 8.279 .004 .041 .817
Speed Duty 9.661 3.793 .053 .019 .491
Intellect 2.062 1.395 .239 .007 .217
Adversity 6.522 1.700 .194 .009 .254
Romance .032 .017 .896 .000 .052
Positivity 7.941 3.898 .050 .020 .502
Negativity .645 .177 .675 .001 .070
Deception 3.566 1.096 .297 .006 .181
Sociality .861 .408 .524 .002 .097
Gaze Duty .568 .334 .564 .002 .089
Intellect 2.716 2.382 .124 .012 .336
Adversity 5.431 2.076 .151 .010 .300
Romance 1.399 1.111 .293 .006 .182
Positivity 1.867 1.337 .249 .007 .210
Negativity 15.349 6.295 .013 .031 .704
Deception 14.973 6.590 .011 .033 .724
Sociality 14.626 6.768 .010 .033 .735
Direction Duty 352.586 66.646 .000 .254 1.000
Intellect 429.089 98.907 .000 .335 1.000
Adversity 473.936 63.851 .000 .246 1.000
Romance 591.186 99.859 .000 .338 1.000
Positivity 1229.664 194.279 .000 .498 1.000
Negativity 1127.414 137.069 .000 .412 1.000
Deception 958.226 123.011 .000 .386 1.000
Sociality 1693.712 133.526 .000 .405 1.000
Univariate 2-way interaction effects. A univariate 2-way interaction effect of Distance
and Speed was observed for Romance (Mating), F(1, 196) = 4.39, p = .037. A univariate 2-way
interaction effect of Direction and Speed was observed for Intellect, F(1, 196) = 7.94, p = .005,
Adversity, F(1, 196) = 7.72, p = .006, Romance, F(1, 196) = 9.46, p = .002, Positivity, F(1, 196)
= 22.03, p < .001, Negativity, F(1, 196) = 23.12, p < .001, Deception, F(1, 196) = 38.29, p < .001, and Sociality, F(1, 196) = 7.82, p = .006. A univariate 2-way interaction effect of Gaze and Direction was
observed for Adversity, F(1, 196) = 5.88, p = .016, Negativity, F(1, 196) = 7.67, p = .006,
Deception, F(1, 196) = 17.40, p < .001, and Sociality, F(1, 196) = 7.43, p = .007. No other 2-
way interaction effects were observed (Table 3).
Table 2
Significant 2-Way Interactions
2-way DIAMONDS Mean Square F Sig. Partial Eta Squared Observed Power
Distance * Speed Duty 6.517 3.295 .071 .017 .439
Intellect 1.537 1.685 .196 .009 .253
Adversity .437 .176 .675 .001 .070
Romance 6.459 4.388 .037 .022 .550
Positivity 5.948E-05 .000 .995 .000 .050
Negativity .011 .005 .945 .000 .051
Deception 5.478 2.793 .096 .014 .383
Sociality 1.412 .921 .338 .005 .159
Distance
* Gaze
Duty
.411 .212 .646 .001 .074
Intellect 3.964 2.997 .085 .015 .406
Adversity 3.353 1.405 .237 .007 .218
Romance .320 .176 .675 .001 .070
Positivity 9.702E-05 .000 .993 .000 .050
Negativity .014 .007 .934 .000 .051
Deception 3.925 1.967 .162 .010 .287
Sociality .410 .268 .605 .001 .081
Speed *
Gaze
Duty
.606 .390 .533 .002 .095
Intellect .878 .897 .345 .005 .156
Adversity .144 .071 .790 .000 .058
Romance
1.654 1.074 .301 .005 .178
Positivity .322 .230 .632 .001 .076
Negativity .017 .008 .929 .000 .051
Deception 3.352 2.263 .134 .011 .322
Sociality .295 .172 .679 .001 .070
Distance
*
Direction
Duty
.915 .516 .473 .003 .110
Intellect 2.530 2.202 .139 .011 .315
Adversity .001 .001 .980 .000 .050
Romance .159 .131 .718 .001 .065
Positivity .325 .193 .661 .001 .072
Negativity 1.464 .596 .441 .003 .120
Deception .193 .102 .750 .001 .062
Sociality 2.127 1.383 .241 .007 .216
Direction
* Speed
Duty
.254 .113 .737 .001 .063
Intellect 10.424 7.935 .005 .039 .800
Adversity 30.191 7.719 .006 .038 .790
Romance 12.822 9.458 .002 .046 .864
Positivity 34.757 22.033 .000 .101 .997
Negativity 88.353 23.124 .000 .106 .998
Deception 102.669 38.291 .000 .163 1.000
Sociality 15.253 7.819 .006 .038 .795
Gaze *
Direction
Duty
.271 .154 .695 .001 .068
Intellect 1.286 .913 .341 .005 .158
Adversity 17.508 5.875 .016 .029 .674
Romance .771 .487 .486 .002 .107
Positivity 5.559 2.885 .091 .015 .394
Negativity 21.314 7.666 .006 .038 .787
Deception 51.783 17.404 .000 .082 .986
Sociality 16.998 7.429 .007 .037 .774
Univariate 3-way interaction effects. A univariate 3-way interaction effect of Distance,
Gaze, and Speed was observed for Positivity, F(1, 196) = 5.73, p = .018, Negativity, F(1, 196) =
8.85, p = .003, and Deception, F(1, 196) = 3.89, p = .05 – albeit marginally. A univariate 3-way
interaction effect of Direction, Distance, and Speed was observed for Adversity, F(1, 196) =
13.77, p < .001, Positivity, F(1, 196) = 7.80, p = .006, Negativity, F(1, 196) = 6.91, p = .009, and
Deception, F(1, 196) = 21.76, p < .001.
Table 3
Significant 3-Way Interaction
3-way DIAMONDS Mean Square F Sig. Partial Eta Squared Observed Power
Distance * Gaze * Speed Duty 1.088 .658 .418 .003 .127
Intellect .151 .143 .706 .001 .066
Adversity 8.348 3.256 .073 .016 .435
Romance .375 .271 .603 .001 .081
Positivity 7.225 5.732 .018 .028 .664
Negativity 19.097 8.853 .003 .043 .842
Deception 6.958 3.887 .050 .019 .501
Sociality 5.336 3.427 .066 .017 .453
Direction * Distance * Speed Duty .038 .020 .888 .000 .052
Intellect 2.168 1.507 .221 .008 .231
Adversity 34.256 13.766 .000 .066 .958
Romance 2.434 1.741 .189 .009 .259
Positivity 10.533 7.803 .006 .038 .794
Negativity 13.706 6.907 .009 .034 .744
Deception 39.761 21.757 .000 .100 .996
Sociality 7.051 3.205 .075 .016 .429
Mean Differences of Movement Variables
[Figure: Mean ratings on the eight DIAMONDS dimensions (Duty, Intellect, Adversity, Mating, pOsitivity, Negativity, Deception, Sociality) for each Distance x Speed x Gaze x Direction combination of the SmartBody stimuli.]
Mean Differences: Distance. Table 4 depicts the mean differences between the simple
dichotomy of Far (1) versus Near (2) distance. Negative values of mean differences indicate a
greater mean value for Near relative to Far while positive values indicate a greater mean value
for Far relative to Near.
[Figures: Mean ratings comparing Adversity (Threat) vs. Mating (Romance), and pOsitivity vs. Negativity, across the SmartBody movement conditions.]
As seen in Table 4, significant mean differences exist between judgments of Far vs. Near
specifically for Positivity and Sociality. In other words, participants judged the virtual human to
be more Positive and Social when they were Near as opposed to Far.
Table 4
Mean differences of Distance: Far (I) versus Near (J)
Measure (I) Far (J) Near Mean Difference (I-J) Std. Error Sig. (b)
Duty 1 2 .001 .055 .986
Intellect 1 2 -.024 .039 .545
Adversity 1 2 .081 .055 .138
Romance 1 2 -.039 .043 .365
Positivity 1 2 -.120* .044 .007
Negativity 1 2 .010 .049 .835
Deception 1 2 -.024 .051 .637
Sociality 1 2 -.141* .049 .004
*. The mean difference is significant at the .05 level.
b. Adjustment for multiple comparisons: Least Significant Difference (equivalent to no adjustments).
Mean Differences: Speed. Table 5 depicts the mean differences between judgments of
the Speed of the virtual human’s movements per Slow (I) and Fast (J). Positive values of mean
differences indicate a greater mean value for Slow relative to Fast while negative values indicate
a greater mean value for Fast relative to Slow. DIAMONDS associated with Slow movement
included Intellect, Romance, and Negativity. DIAMONDS associated with Fast movement
included Duty, Adversity, Positivity, Deception, and Sociality. What this indicates is that as far
as Speed is concerned, there is a great deal of competition among the DIAMONDS judgments
characteristics for both Slow and Fast Speed. In other words, simply considering Speed on its
own may not be enough to attain a normative inference or judgment of the social meaning.
Intuitively, additional variables of movements such as Direction and Distance would be required
to attain a closer understanding of a pattern in social inference making.
Among the DIAMONDS, there were no significant differences between ratings of Slow
and Fast. That said, marginally significant differences between Slow and Fast were observed for
Duty and Positivity. In other words, participant judgments of Fast-moving virtual humans were marginally more strongly associated with Duty and Positivity than their judgments of Slow-moving virtual humans.
Table 5
Mean differences of Speed: Slow (I) versus Fast (J)
Measure (I) Slow (J) Fast Mean Difference (I-J) Std. Error Sig. (b)
Duty 1 2 -.111 .057 .053
Intellect 1 2 .051 .043 .239
Adversity 1 2 -.091 .070 .194
Romance 1 2 .006 .048 .896
Positivity 1 2 -.100* .051 .050
Negativity 1 2 .029 .068 .675
Deception 1 2 -.067 .064 .297
Sociality 1 2 -.033 .052 .524
*. The mean difference is significant at the .05 level.
b. Adjustment for multiple comparisons: Least Significant Difference (equivalent to no adjustments).
Mean Differences: Gaze. Table 6 depicts the mean differences between judgments of
the virtual human’s Gaze per the dichotomy of Averted (I) and Direct (J). Positive values of
mean differences indicate a greater mean value for Averted relative to Direct while negative
values indicate a greater mean value for Direct relative to Averted. Interestingly, all
DIAMONDS characteristics demonstrated negative mean differences, indicating that all
DIAMONDS characteristics were associated with Direct Gaze. Of these, significant mean
differences were observed between Averted and Direct Gaze for Negativity, Deception, and
Sociality. That is, participant judgments of Direct Gaze were significantly greater than
judgments of Averted Gaze for Negativity, Deception, and Sociality. The patterns of mean
differences for Gaze may both point to a potential methodological flaw in the stimuli and offer some insight into baseline assumptions that individuals have about social situations. Having every DIAMONDS characteristic associated more strongly with Direct Gaze than with Averted Gaze may indicate that individuals have a de facto assumption that social situations should involve eye contact from a social other. This seems rather intuitive in the sense that eye contact is necessary to engage in communication at the most fundamental level. That said, this pattern may be so obvious that some may argue the variable should have been removed from the initial set of independent variables.
Table 6
Mean differences of Gaze: Averted (I) versus Direct (J)
Measure (I) Averted (J) Direct Mean Difference (I-J) Std. Error Sig. (b)
Duty 1 2 -.027 .046 .564
Intellect 1 2 -.059 .038 .124
Adversity 1 2 -.083 .058 .151
Romance 1 2 -.042 .040 .293
Positivity 1 2 -.049 .042 .249
Negativity 1 2 -.140* .056 .013
Deception 1 2 -.138* .054 .011
Sociality 1 2 -.136* .052 .010
*. The mean difference is significant at the .05 level.
b. Adjustment for multiple comparisons: Least Significant Difference (equivalent to no adjustments).
Mean Differences: Direction. Table 7 depicts the mean differences between judgments
of the Direction of the virtual human’s movement per the dichotomy of Away (I) and Towards
(J). Positive values of mean differences indicate a greater mean value for Away relative to
Towards while negative values indicate a greater mean value for Towards relative to Away.
DIAMONDS characteristics associated with movement Away included Adversity, Negativity,
and Deception whereas DIAMONDS characteristics associated with movement Towards
included Duty, Intellect, Romance, Positivity, and Sociality. Not only do we see a clear alignment between DIAMONDS positive/negative valence and movement Direction, but these mean differences were statistically significant for every DIAMONDS dimension.
Table 7
Mean differences of Direction: Away (I) versus Towards (J)
Measure (I) Away (J) Towards Mean Difference (I-J) Std. Error Sig. (b)
Duty 1 2 -.669* .082 .000
Intellect 1 2 -.738* .074 .000
Adversity 1 2 .776* .097 .000
Romance 1 2 -.866* .087 .000
Positivity 1 2 -1.249* .090 .000
Negativity 1 2 1.196* .102 .000
Deception 1 2 1.103* .099 .000
Sociality 1 2 -1.466* .127 .000
*. The mean difference is significant at the .05 level.
b. Adjustment for multiple comparisons: Least Significant Difference (equivalent to no adjustments).
Multi-level Mean Ratings
Thus far we’ve seen that there are significant mean differences of DIAMONDS ratings
according to different levels of the separate categories of movement. To further examine the
combinatory effects of the separate categories of movement and how they relate to how people
categorize situations, we next present the 4-way representations of all the mean ratings for each DIAMONDS. Tables depicting the 2-way and 3-way mean ratings may be found in the Appendix but are not reported on below.
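As a rough sketch of how such 4-way cell means can be computed from long-format rating data (the column names and toy values below are hypothetical, not the study's actual data file or analysis script):

```python
import pandas as pd

# Hypothetical long-format data: one row per participant x condition, with one DIAMONDS rating.
ratings = pd.DataFrame({
    "participant": [1, 1, 2, 2],
    "distance": ["Far", "Near", "Far", "Near"],
    "speed": ["Fast", "Slow", "Fast", "Slow"],
    "gaze": ["Direct", "Averted", "Direct", "Averted"],
    "direction": ["Towards", "Away", "Towards", "Away"],
    "duty": [2.7, 1.8, 2.8, 1.7],
})

# 4-way cell means: average the Duty rating within each Distance x Speed x Gaze x Direction cell.
cell_means = (ratings
              .groupby(["distance", "speed", "gaze", "direction"])["duty"]
              .mean())
print(cell_means)
```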
Distance, Direction, Gaze, and Speed (Complete Model). All movement variable means
were examined in terms of the DIAMONDS to determine patterns and trends.
Duty. The highest values for Duty were observed for Moving Towards, Fast, Direct Gaze, from
a Far Distance (m = 2.738). As can be seen in Table 8, Duty was generally associated with
movement Towards, and there appears to be little effect of the nature of the Gaze, corroborating the lack of Gaze-related results above. The lowest value for Duty was observed for movement
Away, Slowly, with Averted Gaze, from a Far Distance.
Table 8
4-way mean scores for Duty
Measure Distance Speed Gaze Direction Mean
Duty Far Slow Averted Away 1.743
Towards 2.423
Direct Away 1.764
Towards 2.540
Fast Averted Away 2.121
Towards 2.578
Direct Away 1.839
Towards 2.738
Near Slow Averted Away 1.794
Towards 2.581
Direct Away 1.975
Towards 2.480
Fast Averted Away 1.859
Towards 2.536
Direct Away 1.972
Towards 2.542
Intellect. The highest value for Intellect was for movement Towards, Fast, with Direct Gaze,
from a Far Distance (m = 2.27). The lowest value for Intellect was for movement Away, Fast,
with Averted Gaze, from a Far Distance (m = 1.16). As can be seen in Table 9, Direction
Towards was more associated with Intellect than Direction Away. No other clear-cut patterns
were observed for Intellect.
Table 9
4-way mean scores for Intellect
Measure Distance Speed Gaze Direction Mean
Intellect Far Slow Averted Away 1.336
Towards 1.906
Direct Away 1.389
Towards 2.073
Fast Averted Away 1.160
Towards 2.028
Direct Away 1.215
Towards 2.272
Near Slow Averted Away 1.432
Towards 2.115
Direct Away 1.437
Towards 1.991
Fast Averted Away 1.296
Towards 1.965
Direct Away 1.257
Towards 2.075
Adversity. Highest values for Adversity were observed for movement Away, Fast, from a
Far Distance with Direct Gaze (m = 3.5). The lowest value for Adversity was observed for
movement Towards, Slowly with Direct Gaze, from a Close Distance (m = 2.11). As can be
seen in Table 10, Adversity is generally associated with movement Away, particularly with
Averted Gaze.
Table 10
4-way mean scores for Adversity
Measure Distance Speed Gaze Direction Mean
Adversity Far Slow Averted Away 2.691
Towards 2.456
Direct Away 2.887
Towards 2.377
Fast Averted Away 3.112
Towards 2.084
Direct Away 3.503
Towards 2.169
Near Slow Averted Away 2.691
Towards 2.456
Direct Away 2.887
Towards 2.377
Fast Averted Away 3.112
Towards 2.084
Direct Away 3.503
Towards 2.169
Romance. None of the 16 virtual human conditions were rated highly in terms of
Romance, raising an unforeseen limitation of the stimuli in that they involve only one virtual human
character. That said, among the 16 conditions, the highest rating for Romance was observed for
movement Towards, Quickly, with Averted Gaze from a Near Distance (m = 2.25). The lowest
rating for Romance was observed for movement Away, Quickly, with Direct Gaze from a Far
Distance (m = 1.09).
Table 11
4-way mean scores for Romance
Measure Distance Speed Gaze Direction Mean
Romance Far Slow Averted Away 1.323
Towards 1.974
Direct Away 1.435
Towards 2.122
Fast Averted Away 1.108
Towards 2.131
Direct Away 1.091
Towards 2.138
Near Slow Averted Away 1.308
Towards 1.970
Direct Away 1.208
Towards 2.162
Fast Averted Away 1.246
Towards 2.249
Direct Away 1.294
Towards 2.196
Positivity. As can be seen in Table 12, the highest value for Positivity was observed for
movement Towards, Fast, with Direct Gaze, from a Near Distance (m = 2.85). The lowest value
for Positivity was observed for movement Away, Slowly, with Averted Gaze, from a Far
Distance (m = 1.303).
Table 12
4-way mean scores for Positivity
Measure Distance Speed Gaze Direction Mean
Positivity Far Slow Averted Away 1.303
Towards 2.214
Direct Away 1.434
Towards 2.411
Fast Averted Away 1.217
Towards 2.732
Direct Away 1.069
Towards 2.744
Near Slow Averted Away 1.457
Towards 2.491
Direct Away 1.330
Towards 2.565
Fast Averted Away 1.399
Towards 2.599
Direct Away 1.401
Towards 2.847
Negativity. As seen in Table 13, the highest value for Negativity was observed for
movement Away, Fast, with Direct Gaze, from a Far Distance (m = 3.98). The lowest value for
Negativity was observed for movement Towards, Fast, with Averted Gaze from a Far Distance
(m = 2.06).
Table 13
4-way mean scores for Negativity
Measure Distance Speed Gaze Direction Mean
Negativity Far Slow Averted Away 3.295
Towards 2.685
Direct Away 3.355
Towards 2.592
Fast Averted Away 3.554
Towards 2.057
Direct Away 3.980
Towards 2.238
Near Slow Averted Away 3.229
Towards 2.435
Direct Away 3.758
Towards 2.479
Fast Averted Away 3.564
Towards 2.338
Direct Away 3.765
Towards 2.106
Deception. As seen in Table 14, the highest values of Deception were observed for
movement Away, Fast, with Direct Gaze from a Far Distance (m = 3.56). The lowest value for
Deception was observed for movement Towards, Fast, with Direct Gaze, from a Far Distance (m
= 1.59).
Table 14
4-way mean scores for Deception
Measure Distance Speed Gaze Direction Mean
Deception Far Slow Averted Away 2.596
Towards 2.116
Direct Away 2.557
Towards 1.971
Fast Averted Away 3.063
Towards 1.632
Direct Away 3.562
Towards 1.585
Near Slow Averted Away 2.594
Towards 2.003
Direct Away 3.191
Towards 1.881
Fast Averted Away 2.753
Towards 1.870
Direct Away 3.273
Towards 1.709
Sociality. As seen in Table 16, highest values for Sociality were observed for movement
Towards, Fast, with Direct Gaze, from a Far Distance (m = 3.93). Lowest values for Sociality
were observed for movement Away, Fast, with Direct Gaze from a Far Distance (m = 1.877).
Table 16
4-way mean scores for Sociality
Measure Distance Speed Gaze Direction Mean
Sociality Far Slow Averted Away 2.087
Towards 3.247
Direct Away 2.223
Towards 3.632
Fast Averted Away 2.117
Towards 3.572
Direct Away 1.877
Towards 3.926
Near Slow Averted Away 2.427
Towards 3.483
Direct Away 2.164
Towards 3.847
Fast Averted Away 2.080
Towards 3.686
Direct Away 2.404
Towards 3.715
Discussion
The aim of this study was to add to our understanding of how various combinations of
social-perception-based movements of another person underlie inferences about the social situation indicated by those movements (using the DIAMONDS taxonomy). To do so, we used SmartBody software to generate a virtual human that moved in 16 different combinations of four behavioral indicators, with two levels of each indicator. The four dimensions examined were speed (slow, fast), direction (away, towards), distance (near, far), and gaze (averted, direct).
This study adds to the body of work explored in Study 1, which was limited by its use of
text-based narrative stimuli. Text, of course, is bounded by imagination as well as the sequential
order in which components of the stimuli are presented.
Summary of Findings
DIAMONDS associated with Slow movement included Intellect, Romance, and
Negativity. DIAMONDS associated with Fast movement included Duty, Adversity, Positivity,
Deception, and Sociality. What this indicates is that as far as Speed is concerned, there is a
great deal of competition among the DIAMONDS characteristics for both Slow and Fast Speed.
Additional variables of movements would be required to attain a closer understanding of a
pattern in social inference making.
The patterns of mean differences for Gaze may both point to a potential methodological flaw in the stimuli and offer some insight into baseline assumptions that individuals have about social situations. Having every DIAMONDS characteristic associated more strongly with Direct Gaze than with Averted Gaze may indicate that individuals have a de facto assumption that social situations should involve eye contact from a social other. This seems rather intuitive in the sense that eye contact is necessary to engage in communication at the most fundamental level. That said, this pattern may be so obvious that some may argue the variable should have been removed from the initial set of independent variables.
Not only do we see a clear alignment of DIAMONDS positive/negative valence and
movement Direction, these mean differences for each of the DIAMONDS were statistically
significant. In other words, when the virtual human moved Away, participants associated this
with all the negatively valenced DIAMONDS and when the virtual human moved Towards,
participants associated this with all the positively valenced DIAMONDS – although Duty and
Intellect are arguably neither positive nor negative in valence. These Direction-based findings are among the few results that replicate Study 1's results, demonstrating general agreement between the text-based and visual effects of movement Direction in terms of social inferences.
Future Directions
This study, in conjunction with Studies 1 and 2, attempts to add to our understanding of the role of human movement and motion in social perception and inference-making via several different approaches. The goal of most of these studies is to examine how different animated agents’
movements (e.g., away from or towards the viewer) and perspective (e.g., third person or first
person) affect raters’ judgements of the motives and characteristics of the agent. By using
Smartbody, we can construct social scenarios of varying levels of richness to further understand
the impact of various patterns of movement on social inference-making.
Neural Networks. The results of the current study suggest there tends to be competition
and confusion when it comes to assigning social meaning based on certain combinations of body
movement. For example, it is difficult to make a statistically significant conclusion that a
situation involving a person who is close, moving towards you quickly with your arms open is
106
solely a romantic situation. Such a neural network model of social situations based on the
Competition Model of Social Perception (Read & Miller, 1998) should allow for more accurate
judgments of internal mental states as well as more accurate predictions of future behavior—
topics of high relevance for those constructing deep learning/artificial intelligence technology.
We are modeling the results of the above studies in a Neural Network using the modeling
software Emergent. The goal here would be to create a neural network of social situations and
the combinations of body motion and movement that predict the inference making that comes
with social perception.
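As a loose illustration of the intended mapping, the sketch below is a generic feed-forward network written in PyTorch rather than Emergent; the input coding, layer sizes, and example values are assumptions for demonstration, not the model actually being built. Movement cues are coded as binary inputs and the eight DIAMONDS ratings are predicted as outputs.

```python
import torch
import torch.nn as nn

# Inputs: binary codes for the movement cues of both actors
# (e.g., P1/P2 direction, gesture, speed, plus starting distance) -> 7 features here.
# Outputs: predicted ratings on the 8 situational DIAMONDS.
model = nn.Sequential(
    nn.Linear(7, 16),
    nn.ReLU(),
    nn.Linear(16, 8),
)

loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One hypothetical training example: P1 Towards/Open/Slow, P2 Towards/Open/Fast, Near distance.
x = torch.tensor([[1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0]])
y = torch.tensor([[2.4, 2.1, 1.4, 4.8, 4.0, 1.8, 1.6, 4.2]])  # illustrative DIAMONDS ratings

optimizer.zero_grad()
pred = model(x)
loss = loss_fn(pred, y)
loss.backward()
optimizer.step()
print(float(loss))
```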
Conclusion
In this study we introduce a basic yet novel framework for examining the fundamental
elements that define a social situation. We demonstrate the complex array of possible inferences
that arise from an extremely raw and basic input of 4 sets of physical movement each limited to
simply 2 levels. Clearly, human interaction is much more nuanced and dynamic than what
we’ve shown here. That said, this study presents the exciting opportunities that new software
and technology affords us in our goal of constructing a predictive taxonomy of physical
movements and their impact on social inferences, intentions, goals, attitudes, and situational
meanings.
Chapter 5: The Transformative Effects of Spatial Distance in an Immersive VR Learning
Environment
Introduction
In natural human interaction as well as in virtual spaces, combinations and sequences of
kinesthetic movements (actions) are an important basis for social perceptions and inferences.
This study seeks to apply some tenets of motion-based social perception in a specific social
situation using an immersive virtual reality environment. This particular study attempts to apply
one of the Kinesthetic Motion Cue variables (Distance) in a simulated real-life interaction that
takes place in an immersive virtual reality environment. This study follows the model of a
previous study that applied the Kinesthetic Motion cue of movement Direction in the same
virtual environment (Feng et al., 2017). That said, this study departs from the model of Studies
1-3 in this dissertation in that the Kinesthetic Motion Cue variable acts as both an Independent
Variable and a Dependent Variable. Specifically, this study examines the behavioral (causal
attribution, affect), physiological (skin conductance), and physical (head movement direction) impact of interpersonal distance (the independent variable) in a VR-based learning environment. Essentially, I measured the degree to which participants moved their heads forward or backward in response to a very close or far virtual agent.
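A minimal sketch of how this head-movement measure could be quantified from HMD tracking samples follows; the log format, sample values, and axis convention are hypothetical, not the study's actual pipeline.

```python
import numpy as np

# Hypothetical HMD samples: head position (in meters) along the axis pointing
# toward the virtual instructor, recorded before and during the feedback delivery.
baseline_z = np.array([0.02, 0.01, 0.02, 0.03])      # before the instructor approaches
feedback_z = np.array([-0.05, -0.08, -0.07, -0.06])  # during close-distance feedback

# Negative displacement indicates leaning or backing away from the instructor.
displacement = feedback_z.mean() - baseline_z.mean()
print(f"Mean head displacement toward instructor: {displacement:.3f} m")
```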
This study explores these questions with a 2 (Proxemic Distance) X 2 (Virtual Instructor Gender) between-subjects design. In this experiment, participants (n = 118) actively engage in a learning task with a male or female virtual instructor who provides negative feedback while standing either close to or far from the participant. Based on the different deliveries of the negative feedback, the study aims to identify the sources of variation in participant reactions to the negative feedback, namely patterns of attribution and both behavioral and physiological measurements of emotion. The results of the present study have numerous implications for the design of virtual agents for learning outcomes as well as the methodological design of studies utilizing virtual agents in virtual environments.
Virtual Environments
Virtual environments (VE) have been widely explored in the context of learning and
pedagogy. Utilizing virtual environments as an educational tool has long been discussed: virtual environments have been investigated as educational tools that encourage active participation and individualization (Pantelidis, 1993) and that foster a constructivist educational approach (Bricken, 1990; Winn, 1993). Bailenson and colleagues (2008a) conceptualize an immersive virtual
presence or actually being within it” (p. 104). A simple example of an IVE is a virtual reality
(VR) environment. Relative to conventional virtual environments, the sensory and perceptual
cues within an IVE are more salient and engaging while actual sensory information outside of
the IVE (real-world) is minimized.
The technological characteristics of three-dimensional (3D) spatial Virtual Reality (VR) provide pedagogical advantages through the construction of virtual environments (VE),
multimodal channels, and environments allowing user immersion (Mikropoulos & Bellou, 2006).
For a comprehensive review of the use of virtual environments and virtual reality in education,
see Mikropoulos and Natsis (2010).
Embodied Pedagogical Agents
Research examining learning within virtual environments has mostly made use of computer-driven embodied agents (Bailenson & Blascovich, 2004). To have optimal learning effects using virtual agents, studies have underscored the need to integrate socio-emotional and
relational variables such as embodiment and nonverbal behavior (Krämer & Bente, 2010).
These studies have traditionally focused on the effects of positive feedback from virtual agents in
a virtual learning environment (Krämer et al., 2010; Krämer et al., 2016; Gratch et al., 2006;
Wang et al., 2009). For instance, Wang et al. (2008) found that an agent who uses polite
requests had a more positive impact on learning than a more direct agent. Further, Krämer et al.
(2016) found a significant improvement in participants' performance when they interacted with same-gender virtual agents that rapidly responded to them with positive non-verbal behavior. Departing from these previous works, Feng et al. (2017) focused on students' direct
response to purely negative feedback from virtual instructors, and found that students attribute
greater self-blame (internal attribution) for their purported poor performance when interacting
with the female virtual instructor than when interacting with the male virtual instructor.
Presence
Examining the proxemics-based effects of virtual agents in a virtual environment
warrants a measurement to ensure that participants are perceiving the virtual agents’ actions as
real and actual. The concept of presence (Steuer, 1992; Sheridan, 1992; Biocca, 1997; Lombard
& Ditton, 1997; Lee, 2004) denotes a reduced perception of mediation and is a useful resource for generalizing results of a study utilizing virtual environments to actual human communication and learning. Specifically, the category of social presence refers to the degree to which a mediated interaction is experienced as an unmediated encounter with another social actor.
Negative Feedback
The effects of negative feedback in educational contexts, first examined by Dweck (1975), have long been debated. Some argue that negative feedback benefits learning (Kluger
et al., 1996) while others argue that it leads to a “learned helplessness” that hampers learning
(Dweck et al., 1978). At a fundamental level, negative feedback has been shown to lower
motivation (Vallerand et al., 1984). That said, students may employ strategies to counteract the
negative feedback, such as increasing effort (Carver et al., 1998) and lowering goals and
expectations (Kluger et al., 1996). Such goal regulation strategies have been observed both for
legitimate and manipulated feedback (Ilies et al., 2005). Further, negative feedback has been
observed to act as a motivator for tasks that are required (Hattie et al., 2007).
Attribution
A crucial response to negative feedback in an educational context is one's attribution of
blame or responsibility. That is, does the student attribute blame to their own poor abilities or do
they attribute blame to the instructor's poor teaching abilities? Attribution theory has long been
discussed in the field of education (Weiner et al., 1978). While students' success is often
attributed to the self, failures are typically attributed to others (Klein et al., 2001). In fact,
students tend to ignore negative feedback that contrasts with their own assessments of their
performance (Campbell et al., 1983).
Attribution in education involves two categories of learning goals. A mastery goal
involves a belief that effort is linked with achievement, or mastery (Weiner, 1979). In contrast, a
performance goal is linked to avoidance of challenging tasks (Dweck, 1988), negative affect in
response to failure, and a subsequent judgment that one lacks ability (Jacacinski, 1987). When
trying hard does not lead to success, the expenditure of effort can become a threat to one's self-
concept of ability (Covington, 1979), which has previously been identified as a mediator of
cognitive and behavioral variables when students adopt a performance goal (Dweck, 1986).
What remains to be seen, however, is an understanding of how negative feedback
transforms and shifts students’ attribution tendencies based on the gender of a teacher and the
interpersonal distance between the student and the teacher.
Proxemics
Proxemics, or interpersonal distance between communicators, highly impacts the
perception of meaning in all forms of human social interaction. Hall (1966) identified 4 types of
interpersonal distance zones with varying distances and social meaning: the intimate zone (0–
45 cm), the personal–casual zone (45–120 cm), the socio-consultive zone (120–360 cm), and the
public zone (360–750 cm). Hall notes that somewhat of a gradient of familiarity exists across
these spatial categories, with the intimate zone being for romantic partners, close friends, or
family members and the public zone reserved for public speech and/or stage performance. The
relationship between proxemics and physiological responses was first examined by McBride et
al. (1965) and later extended to linking invasion of space with discomfort and a rise in Galvanic
Skin Response (GSR) (Sommer, 1969).
Management of and responses to interpersonal distance have also been extended to non-
human agents such as virtual characters (Bailenson et al., 2001, 2003; Gillath et al., 2008;
Llobera et al., 2010; Wilcox et al., 2006) and robotics (Gockley, Forlizzi, & Simmons, 2007;
Pacchierotti, Christensen, & Jensfelt, 2005; Walters et al., 2005). As Bailenson et al. (2001) note,
studies about proxemics have historically been fraught with issues of reliability and validity
across participants. Virtual environments offer an opportunity to reliably test precisely defined
proxemics while also maximizing realism (Loomis et al., 1999; Bailenson et al., 2001; Blascovich
et al., 2002).
Hypotheses
We anticipate the proxemic distance of the virtual instructor to have a wide impact on
participants’ experiences in this virtual learning environment. Specifically, we expect the closer
interpersonal distance to be associated with poorer evaluations of performance, greater negative
affect, lower attributional control, external attributional tendencies, and greater head movements
(HMD). Instructor gender has previously been linked with varying attributional tendencies.
This study will further examine these gender effects in a negative feedback-based virtual
learning environment. Specifically, we will explore, in a between-subjects design, the role of
instructor gender (male or female), and potential interaction effects of instructor gender with
both instructor proxemic distance (near or far) and student gender (male or female participant) as
they impact the above outcomes.
Method
Participants
118 students from two universities (54 men and 64 women), with an average age of 20.94
(SD = 2.77), participated in this study and were randomly assigned to one of 4 conditions in a 2
(Virtual Instructor Gender) × 2 (Proxemic Distance: Close/Far) between-subjects design. The distribution of
participants across conditions was as follows: there were 28 participants in condition 1 (Male
Close), 24 participants in condition 2 (Female Close), 30 participants in condition 3 (Male Far),
and 36 participants in condition 4 (Female Far).
Measures
Positive and Negative Affect Schedule-Expanded Form (PANAS-X). The general
dimension PANAS-X scales of positive and negative affect were included in this study (Watson
& Clarke, 1999). (Note: reliability scores for each of the measures used in this study are reported
post-factor analysis in the Results section.) Items were measured on a 5-point Likert scale ranging from (1) Very
slightly or not at all to (5) Extremely. Positive affect items included active, alert, attentive,
determined, enthusiastic, excited, inspired, interested, proud, and strong. Negative affect items
included afraid, scared, nervous, jittery, irritable, hostile, guilty, ashamed, upset, and distressed.
Participant responses were collected pre- and post-experiment. For the current study, each
item of the PANAS-X scale will be analyzed at the univariate level.
Causal Dimension Scale II. The Revised Causal Dimension Scale (CDSII) (McAuley,
Duncan, & Russell, 1992) was used to measure assignment of causal attributions after the
conclusion of the experiment. The CDSII consists of four individual dimensions, “locus of
causality” (internality), “stability”, “personal control”, and “external control”. Responses are
made on a 9-point semantic differential scale with anchoring statements at either end of the scale.
In addition to the original items grouped under each dimension of the scale, items tailored for
this experiment were included as slight modifications to the existing items.
Rosenberg Self-Esteem Scale (RSE). The Rosenberg Self-Esteem Scale (RSE) was used
to measure pre-test self-esteem levels among participants (Rosenberg, 1965). The RSE consists
of 10 items. People were asked to evaluate each item on a 4-point Likert scale from Strongly
Agree (3) to Strongly Disagree (0). Five items on this scale are negatively worded and were
thus reverse coded to align all items in the same direction.
Ad-Hoc Items. Upon completion of the experiment, participants were also asked to
respond to a series of ad-hoc items designed to attain a more holistic understanding of
participants’ experiences with the task, the instructor, and the feedback they received. We first
asked the participants to evaluate the virtual human regarding valence (e.g., likable), dominance,
activity, attributions of the professor's behavior (e.g. due to his/her personality or participants'
underperformance). As these items do not represent a scale or combined measure, each item will
be analyzed at the univariate level.
Social Presence. Social presence was measured using a modified version of the Temple
Presence Inventory (Lombard, Ditton, & Weinstein, 2009). The original Temple Presence
Inventory includes eight dimensions covering a wide scope of presence measures. For the
purposes of this study, only the dimensions relevant to social presence were retained and tailored
for this study. The dimensions included in this study were Social Presence-Actor within
Medium (7 items), Passive Social Presence (5 items), Active Social Presence (3 items), Mental
Immersion (engagement) (5 items), and Social Richness (7 items). The items within these
dimensions included 7-point Likert scale items that ranged from (1) Not at all to (7) Very much
or (1) Never to (7) Always.
Head Mounted Display (HMD) Movements. To reiterate, this study involved the use
and application of a three-dimensional virtual environment, which was essentially a model of a
superordinate space. In this three-dimensional space, the environment was not designed to
appear bounded to a specific space, such as an enclosed room. Rather, the characters in the
virtual environment appeared and interacted with the participants in a seemingly limitless
environment, with every point in the space identified by three coordinates (x, y, and z), with each
point representing a specific data point for analysis. The movement of each participant's head
along the x, y, and z planes in this study was tracked by a three-axis sensing system integrated
within the head mounted display.
Materials
Virtual Environment Design. A mismatch between resources and demands can create
threat states that can potentially account for performance outcomes (Blascovich et al., 1999).
The inability to satisfy the person of power in this interaction is suggested to lead to feelings of
helplessness and lack of motivation, key components of negative feedback. Using this scenario
as a template, we created a virtual environment in which individuals are assigned to a task that
they are incapable of completing to the satisfaction of an instructor. This system enables the
researchers to develop a keener understanding of how participants respond, both verbally and
physiologically, to the experience of negative feedback. This information can then be used to
create intervention strategies to "buffer" participants against the instructor's negative feedback:
These student tactics could be taught through successive iterations of the virtual scene. Thus, the
first step before constructing such an intervention is to create a specific learning context where
the instructor provides negative feedback and within which participants' responses can be
assessed.
Specifically, the interactive virtual environment here simulated an acting class scenario.
One of the virtual characters was designed to be the instructor in the scene. Each participant and
other non-player characters (NPC) were students who were asked to rehearse 'Romeo and Juliet:
Act 3, Scene 3'. The researcher told the participants that their goal is to finish their rehearsal in a
limited amount of time. Each time the participant finished reading a line, the virtual instructor
provided negative feedback in several ways including harsh language, negative non-verbals,
encroaching on personal space, and ridiculing the participants' performance. Although the
negative feedback from the virtual instructor was scripted and identical for all participants,
participants were told the feedback was tailored based on their performance and they should
follow the instructor's directions to the best of their ability. After the experiment, participants
were debriefed about the scripted and non-authentic nature of the "feedback".
To invoke negative affect, the system utilized social interaction and an impossible task
framework as mechanisms. All feedback given by the virtual instructor, regardless of actual
performance, was designed to be negative and variable in nature. For example: "Woah woah
woah, stop. You are sounding way too excited."; "Ugh, stop. You sound like a dead fish... Let's
do it again, and put a little more energy into it."; "Hang on. You are giving it too much energy.
Try bringing it down a notch, okay?" In order to get the participants engaged in the experiment,
the participants were given a time limit for each line they read. They would be interrupted by the
virtual instructor if they could not finish it in time.
Figure 1
Example of virtual agents that interacted with participants
The nonverbal behaviors of the virtual agents, such as gesture, facial expression, gaze, and
posture, were generated by Cerebella (Lhommet & Marsella, 2013) to convey negative affect.
Cerebella is an intelligent framework that takes a communicative intent as input and generates
multimodal nonverbal behavior commands using the Behavior Markup Language (BML).
Taking advantage of the 3D environment, the proxemics between virtual characters could also
be manipulated.
Apparatus. The 3D virtual environment was developed using Unity3D. The virtual human
and character animation framework was developed based on the Virtual Human Toolkit. The
head-mounted display (HMD) was an Oculus Rift Development Kit 2.
Figure 2
Apparatus set-up on participant
Experiment Procedure
Prior to arrival, participants were randomly assigned to one of four conditions (Male
Close, Female Close, Male Far, Female Far). Participants were informed about the experiment
and their role in the study before being instructed to read the informed consent form. As they
began to read the informed consent form, participants were fitted with the E4 skin conductance
bracelet. Participants were informed that they would be evaluated by a virtual professor
based on their acting performance and that they should react and adjust their performance
according to the feedback being received. After completing this briefing session, participants
were asked to fill out the PANAS-X (pre-test) and the Rosenberg Self-Esteem Scale. After
completing those two questionnaires, participants were fitted with the HMD and headphones at
an appropriate distance of about 5 feet from the HMD sensor. When the participant reached a
comfortable state and indicated readiness, the virtual acting rehearsal began. Upon completing
the experiment, the participants responded to additional questionnaire measurements including
the PANAS-X (post-test) and the CDS II. A secondary function of the post-test was to allow the
participants to rest for at least 3 minutes to collect the post-experiment physiological data. Each
session for a given participant lasted no more than 30 minutes. More detailed information on the
experimental procedure may be found elsewhere (Feng et al., 2017).
Figure 3
Timeline of Experiment Protocol
Results
Data Preparation
CDS II. Factor analysis was conducted on individual subscales that make up the Causal
Dimension Scale II. Five items under the “Locus of causality” dimension of the CDSII were
examined via principal components analysis using varimax rotation as the primary purpose was
to establish and compute composite variables for each subscale of the CDSII. All five items
loaded onto one factor and were retained under a “locus of causality” composite measure
(Cronbach’s α = .86). Three items under the “personal” dimension of the CDSII were examined
via principal components analysis and were all found to load on one factor (Cronbach’s α = .87).
Three items under the “stability” dimension of the CDSII were examined via principal
components analysis using varimax rotation, and all loaded on one factor (Cronbach’s α = .73).
Six items under the “external” dimension of the CDSII were examined via principal components
analysis using varimax rotation. Three items did not load on the first factor and were dropped
from the composite “external” measure (Cronbach’s α = .76).
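As an illustration of the composite-construction procedure just described, the following is a minimal sketch in Python of an unrotated principal components check and a Cronbach's alpha computation. The data file and column names (cdsii_items.csv, loc1-loc5) are hypothetical, the varimax rotation reported above is not reproduced, and this is not the exact pipeline used in the study.

```python
import pandas as pd
from sklearn.decomposition import PCA


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of scale items (rows = respondents)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)


# Hypothetical CDS II "locus of causality" items.
df = pd.read_csv("cdsii_items.csv")                    # assumed file name
locus = df[["loc1", "loc2", "loc3", "loc4", "loc5"]]   # assumed column names

# Unrotated PCA as a quick check that the items load on a single component;
# a varimax rotation (as in the chapter) would be applied to the loadings.
pca = PCA(n_components=2).fit(locus)
print("Variance explained:", pca.explained_variance_ratio_)

# If the items hang together, average them into a composite and report alpha.
df["locus_composite"] = locus.mean(axis=1)
print("Cronbach's alpha:", round(cronbach_alpha(locus), 2))
```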
Social Presence. Seven items under the Spatial Presence dimension were examined via
principal components analysis using varimax rotation. Two items of the Spatial dimension did
not load on the first factor and were removed from the composite Spatial dimension (Cronbach’s
α = .96). Seven items under the Parasocial Social Presence dimension were examined via
principal components analysis using varimax rotation. Three items of the Parasocial Social
Presence dimension did not load on the first factor. The first and second factors for this
dimension accounted for 42.82% and 15.3% of the variance of the initial eigenvalues,
respectively. That said, all items of this measure had higher reliability when consolidated into
one composite than when separated into separate sub-dimensions. The four items that made up
the first factor had a reliability of .74, and the three items on the second factor had a particularly
low reliability of .58. As such, all items of the original Parasocial Social Presence measure were
included in the composite (Cronbach’s α = .77).
Four items in the Passive Interpersonal Social Presence were examined via principal
components analysis using varimax rotation. All four items loaded on one factor and were
retained under this dimension to form a composite measure (Cronbach’s α = .70). Five items of
the Mental Immersion dimension were examined via principal components analysis using
varimax rotation. All five items loaded on one factor and were retained (Cronbach’s α =
.80). Seven items of the Social Richness dimension were examined via principal components
analysis using varimax rotation. Two items did not load on the first factor and were removed
from the composite Social Richness measure (Cronbach’s α = .79).
Head-Mounted Display (HMD). HMD data coordinates for x-, y-, and z-axes were
recorded at 25 separate time points over the course of the actual acting experiment. HMD
movement was analyzed as aggregate (bi-directional) movement. Head movement was calculated
by summing up the absolute values of the differences between each pair of the sequential data
points. In other words, the absolute value of the difference between time 1 and time 2 was added
with the absolute value of the difference between time 2 and time 3, and so on up to the absolute
value of the difference between time 24 and time 25. By summing up the absolute values of
each time point for each axis, we computed aggregate movement variables for each axis. The
formulas for the movement variables of each axis are depicted below.
Movement (x): x-movement = |x1 − x2| + |x2 − x3| + |x3 − x4| + … + |x24 − x25|
Movement (y): y-movement = |y1 − y2| + |y2 − y3| + |y3 − y4| + … + |y24 − y25|
Movement (z): z-movement = |z1 − z2| + |z2 − z3| + |z3 − z4| + … + |z24 − z25|
(where the numeric subscripts index the 25 sequential time points)
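For concreteness, the aggregation above could be computed as in the following sketch, assuming the 25 sampled head positions per axis are available as numeric arrays; the values below are simulated placeholders rather than study data.

```python
import numpy as np


def aggregate_movement(samples) -> float:
    """Sum of absolute differences between consecutive time points."""
    samples = np.asarray(samples, dtype=float)
    return float(np.sum(np.abs(np.diff(samples))))


# Simulated HMD coordinates for one participant: 25 time points per axis.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.01, 25)
y = rng.normal(1.6, 0.01, 25)
z = rng.normal(0.0, 0.01, 25)

x_move, y_move, z_move = (aggregate_movement(a) for a in (x, y, z))
print(f"x-movement: {x_move:.3f}  y-movement: {y_move:.3f}  z-movement: {z_move:.3f}")
```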
Manipulation check
Negative Affect. Manipulation checks were performed to verify manipulations were
being interpreted accurately and as intended. Critical to the present study was the impact and
reception of the negative feedback messages from the Virtual Instructor. As such, we tested the
effectiveness of the negative feedback by determining the level of negative affect that the
feedback generated. We conducted a series of paired samples t-tests to examine this emotional
impact of the negative feedback message. Significant mean differences between pre- and post-
test measurements of PANAS-X were observed for the negative affect items of Upset, t(114) = -
5.74, p < .001, Guilty, t(114) = -2.252, p = .026, Hostile, t(114) = -4.041, p < .001, Irritable,
t(114) = -2.52, p = .013, Ashamed, t(114) = -4.54, p < .001, and Nervous, t(114) = 3.495, p =
.001. Here, we see clear indication that the experimental negative feedback was generally
successful in communicating its meaning and intent. Significant mean differences were also
observed for the positive affect items of Enthusiastic, t(114) = 2.62, p = .01, Proud, t(114) =
2.82, p = .006, but the direction of the mean differences indicate a decrease in enthusiasm and
pride, providing further support that the “negativity” of the feedback was accurately
perceived.
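The paired-samples comparisons above could be reproduced with a short script along the following lines; the file and column names (panas_pre_post.csv, upset_pre, upset_post, and so on) are hypothetical placeholders, and listwise-complete data are assumed.

```python
import pandas as pd
from scipy import stats

# Assumed wide-format file with one pre and one post column per PANAS-X item.
df = pd.read_csv("panas_pre_post.csv")

for item in ["upset", "guilty", "hostile", "irritable", "ashamed", "nervous"]:
    pre, post = df[f"{item}_pre"], df[f"{item}_post"]
    t, p = stats.ttest_rel(pre, post)                  # paired-samples t-test
    print(f"{item:>10}: t({len(df) - 1}) = {t:.2f}, p = {p:.3f}")
```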
Experiment Authenticity. Another significant factor in the present study was the degree
to which participants truly believed that the negative feedback they were receiving was tailored
and specific to each person. Although efforts were made to make the virtual environment
authentic, some participants could pick up on the actual non-intelligent nature of the virtual
environment. That is, although participants were (falsely) told that the virtual instructor would
be tailoring their feedback to the participants’ performance, not all participants deemed the
virtual environment to be authentic. As such, a manipulation check was delivered to the
participants in the form of two items on a 7-point Likert scale (anchors extremely inauthentic and
extremely authentic): “To what extent did you feel that the instructor's feedback was
authentic/real?” and “To what extent did you feel that the virtual environment was
authentic/real?”
Both items were normally distributed and no outliers were identified, enabling all
participants to be included for analysis. Participants generally were mixed in their judgments of
the authenticity of the instructor feedback (M = 3.92, SD = 1.7) and generally felt the virtual
environment was more authentic than inauthentic (M = 4.43, SD = 1.5). Further, the medians for
each of the two items were 5, which corresponds to “Slightly authentic” on the 7-point Likert
scale.
A dichotomous Feedback Authenticity variable was constructed by splitting the responses
at the median and examined in a 2-way MANOVA with the original independent variables of
Proxemic Distance and Virtual Instructor Gender. A main effect of Feedback Authenticity was
observed on multiple Ad-hoc items, including task difficulty, F(6,89) = 6.624, p = .012,
feedback helpfulness, F(6, 89) = 15.99, p < .001, feedback accuracy, F(6, 89) = 22.126, p < .001,
affected by professor’s reactions, F(6, 89) = 4.57, p = .036, feedback attributed to own
underperformance, F(6, 89) = 12.433, p < .001, and feedback attributed to professor having a bad
day, F(6, 89) = 4.83, p = .031.
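A sketch of how the median split and the follow-up MANOVA could be run (here with statsmodels rather than SPSS) is shown below; the data file and variable names are hypothetical, and the exact set of dependent variables would mirror the ad-hoc items described above.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("study4_adhoc.csv")  # assumed file and column names

# Median split on the feedback authenticity rating (0 = at/below median, 1 = above).
df["authenticity"] = (
    df["feedback_authentic"] > df["feedback_authentic"].median()
).astype(int)

# MANOVA of Feedback Authenticity with the original manipulations on selected ad-hoc items.
manova = MANOVA.from_formula(
    "task_difficulty + feedback_helpful + feedback_accurate ~ "
    "C(authenticity) * C(proxemic_distance) * C(instructor_gender)",
    data=df,
)
print(manova.mv_test())
```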
Table 1
MANOVA of Feedback Authenticity on Ad-hoc items
Independent Variable(s)                  Dependent Variable                          df   Mean Square       F     Sig.
Feedback Authenticity                    Difficulty of task                           1       6.542      6.624    .012
                                         Helpfulness of feedback                      1       9.836     15.999    .000
                                         Accuracy of feedback                         1      13.464     22.126    .000
                                         Affected by feedback                         1       3.758      4.565    .036
                                         Attribute to own underperformance            1      25.367     12.433    .001
                                         Attribute to professor having a bad day      1       9.770      4.843    .031
Proxemic Distance * Feedback             Difficulty of task                           1       4.069      4.120    .046
  Authenticity
Proxemic Distance * Feedback             Helpfulness of feedback                      1       3.504      5.700    .019
  Authenticity * Virtual                 Affected by feedback                         1       8.333     10.124    .002
  Instructor Gender

Presence. A concept related to participants’ perceptions of experiment authenticity was
the degree of presence they felt in the virtual environment. A 3-way MANOVA was conducted
examining the effects of Proxemic Distance, Virtual Instructor Gender, and Participant Gender
on the individual dimensions of Social Presence. A multivariate main effect for Proxemic
Distance was observed for Social Presence, F(5, 75) = 2.762, p = .024. Further, a multivariate
interaction effect of Virtual Instructor Gender and Participant Gender was observed for Social
Presence, F(5, 75) = 2.455, p = .041. As can be seen in Table 2, univariate main effects for
Proxemic Distance were observed for the factor analyzed dimensions of Spatial Presence, F(1,
87) = 7.265, p = .009, Passive Interpersonal Social Presence, F(1, 87) = 8.072, p = .006, and
Mental Immersion, F(1, 87) = 5.503, p = .021.
A univariate interaction effect was observed for the Virtual Instructor Gender and
Participant Gender on the Passive Interpersonal Social Presence dimension, F(1, 87) = 5.603, p
= .02. As can be seen in Figure 4, participants reported experiencing greater Passive Interpersonal
Social Presence when interacting with an instructor of the opposite gender. To be clear, the
items of this dimension asked participants about the degree to which they could observe the
facial expressions, changes in tone of voice, style of dress, and body language of the virtual
instructor.
Figure 4
Interaction effect of Virtual Instructor Gender and Participant Gender on Social Presence
Table 2
Main effects for Proxemic Distance on Social Presence dimensions
Dependent Variable                         df   Mean Square       F     Sig.
Spatial Presence                            1       9.989      7.265    .009
Parasocial Social Presence                  1       1.510      1.772    .187
Passive Interpersonal Social Presence       1       7.785      8.072    .006
Mental Immersion                            1       4.564      5.503    .021
Social Richness                             1        .340       .570    .453
Statistical Analysis
CDSII. A 2-way MANOVA was conducted examining the effects of Proxemic Distance
and Virtual Instructor Gender on the factor analyzed composite CDS II dimensions of
Locus of Causality, Personal Control, Stability, and External Control. A multivariate main effect
of Proxemic Distance on the CDS Dimensions was observed, F(4, 117) = 7.15, p < .001. No
other main effects or interaction effects were found. Univariate main effects of Proxemic
Distance were observed for Stability, F(3, 117) = 20.69, p < .001, Personal Control, F(3, 117) =
4.88, p = .029, and External Control, F(3, 117) = 7.91, p = .006. No other main effects or
interaction effects were observed. As each independent variable was limited to 2 levels, post-hoc
tests were not conducted.
Regardless of Virtual Instructor Gender, those who interacted with a Close instructor
reported significantly higher levels of External Control. In other words, the participants in the
Close conditions tended to report that people outside of themselves (the professor) had a more
impactful role in their performance. Further, those who interacted with a Close instructor
reported significantly lower levels of personal control, or one’s own ability to regulate and
manage one’s performance, as well as significantly higher levels of stability, deeming the current
situation of negative feedback to be more permanent, stable, and unchangeable.
Table 3
Main effect for Proxemic Distance on CDS Stability, Personal Control, and External Control
Dependent Variable    Type III Sum of Squares   df   Mean Square        F     Sig.
CDS Stability                       61.322       1      61.322     20.686    .000
CDS Causality                        1.127       1       1.127       .382    .538
CDS Personal                        20.418       1      20.418      4.879    .029
CDS External                        25.829       1      25.829      7.910    .006
PANAS-X. A 3-way MANOVA was conducted to examine the 3-way effects of
Proxemic Distance, Gender of the Virtual Instructor, and Participant Gender on the individual
post-test measurements of PANAS-X. As each independent variable was limited to 2 levels,
post-hoc tests were not conducted. Univariate main effects of Participant Gender were found on
Interested, F(3, 117) = 4.52, p = .036, Excited, F(3,117) = 7.46, p = .007, Enthusiastic, F(3, 117)
= 4.135, p = .044, Inspired, F(3, 117) = 8.39, p = .005, Determined, F(3, 117) = 8.68, p = .004,
and Active, F(3, 117) = 13.96, p < .001. That is, male participants in general reported being more
interested, excited, enthusiastic, determined, and active than female participants after receiving
the negative feedback. This indicates the presence of a gender-based pattern in which male
participants seemingly “bounce-back” in reaction to harsh negative feedback.
Further, an interaction effect between Proxemic Distance and Participant Gender was
observed for Irritable, F(3, 117) = 5.57, p = .02. As can be seen in Figure 5, male participants
were far more irritated by the instructor at a Far distance than were female participants.
Figure 5
Interaction effect of Proxemic Distance and Participant Gender on PANAS-X: “Irritable”
Ad-Hoc Analyses
Ad-Hoc Items. A 3-way MANOVA was conducted examining the effects of Proxemic
Distance, Virtual Instructor Gender, and Participant Gender on the individual Ad-hoc items. As
the Ad-hoc items did not constitute a composite measurement scale, each item was examined at
the univariate level. Univariate main effects of Proxemic Distance were observed for the
helpfulness of the feedback, F(3, 117) = 7.69, p = .007, the likability of the professor, F(3, 117)
= 23.74, p < .001, and the level of effort put into the task, F(3, 117) = 27.46, p < .001. That is,
participants in the Close condition perceived the feedback to be less helpful, the professor to be
less likable, and tried harder to complete the task than participants in the Far condition did.
Further, participants in the Close condition attributed the professor’s reactions to his/her
personality more so than those in the Far conditions, F(3, 117) = 28.23, p < .001.
Table 4
Main effect for Proxemic Distance on Ad-hoc items
Dependent Variable                           df   Mean Square        F     Sig.
Helpfulness of feedback                       1       9.078      7.687    .007
Accuracy of feedback                          1       2.676      2.928    .090
Attribute to professor’s personality          1      88.264     28.226    .000
Attribute to professor having a bad day       1       5.398      2.424    .122
Level of Effort                               1      23.509     27.460    .000
Likability of professor                       1      39.971     23.738    .000
An interaction effect of Participant Gender and Proxemic Distance was observed for the level of
effort placed on the acting task, F(3, 117) = 5.304, p = .023. That is, male participants in the
Close condition tried much harder on the task than the males in the Far condition. The difference
in effort between Close and Far conditions was not as pronounced for the female participants.
See figure below.
Figure 6
Interaction effect of Participant Gender and Proxemic Distance on effort put into the task
Self Esteem. A bivariate correlation analysis was conducted to examine the effects of
one’s pre-test ratings of self-esteem on evaluations of the experiment feedback on the Ad-hoc
Questionnaire. Higher ratings of self-esteem were associated with higher ratings of feedback
helpfulness, r(117) = .221, p = .017, higher degree of effort, r(117) = .426, p < .001, and lower
ratings of professor “likability”, r(117) = -.297, p = .001. Further, higher ratings of self-esteem
were associated with higher professor-based attributions of the professor’s negative feedback, namely
to the professor’s personality, r(117) = .528, p < .001, or to the professor having a bad day,
r(117) = .183, p = .048.
Moderation Analysis
Because Proxemic Distance and Self-Esteem both significantly predicted causal
attribution ratings, we investigated whether Self-Esteem moderates the link between
Proxemic Distance and the 4 CDS dimensions (Locus of Causality, Personal Control, Stability,
External Control). To do this we tested a series of bias-corrected, bootstrapped (at 10,000
samples) moderation models using logistic regressions with model 1 of the PROCESS macro for
SPSS (Hayes, 2013). In these moderation models, the b1 path denotes the effect of Proxemic
Distance (x) on CDS (y), the b2 path denotes the effect of Self Esteem (m) on CDS, and the b3
path denotes the effect of the interaction between Proxemic Distance and Self Esteem (x*m) on
CDS. The overall moderation model for CDS Stability was significant, F(3, 113) = 13.5076, p
< .001, R² = .27. That is, 27% of the variance was due to these 3 predictors (Self Esteem,
Proxemic Distance, Interaction). Before proceeding with the moderation analysis results, we
will review and reiterate the nature of the variable values and their interpretations. First,
Proxemic Distance is a dichotomous variable with the value 1 representing Close distance, and
the value 2 representing Far distance. As such, high values of Proxemic Distance indicate Far
distance, and low values indicate Close distance. Second, Self-Esteem was a conventional scale
measure with high and low values representing high and low Self-Esteem, respectively.
Finally, low values of CDS Stability refer to feelings of permanence and lack of control whereas
high values of CDS Stability refer to feelings of flexibility and greater control.
Each of the predictors also significantly predicted CDS Stability. First, Self Esteem
significantly predicted CDS Stability, b = 1.12, t(113) = 2.86, p = .005. That is, for every 1 unit
increase in Self-Esteem, there was a 1.12 unit increase in CDS Stability. Second, Proxemic
Distance significantly predicted CDS Stability, b = -1.17, t(113) = -3.60, p < .001. That is, for
every 1 unit increase in Proxemic Distance, there was a 1.17 unit decrease in CDS Stability.
Finally, the interaction between Proxemic Distance and Self Esteem also significantly predicted
CDS Stability, b = 1.66, t(113) = 2.11, p = .037. In order to interpret the significant interaction
effect, we examined the conditional effect of x on y. To achieve this, Self-Esteem was centered
at 0, and examined according to low, average, and high Self-Esteem. The average value, 0, here
represents the mean of Self-Esteem (M = 2.51), and low and high represent 1 standard deviation
below and above this mean value, respectively. For low Self-Esteem, Proxemic Distance significantly
predicted CDS Stability, b = -1.90, t(113) = -4.05, p = .0001. That is, for low Self-Esteem,
every one point increase in Proxemic Distance leads to a 1.90 reduction in CDS Stability. Next,
Proxemic Distance also predicted CDS Stability for average Self-Esteem, b = -1.74, t(113) = -
3.60, p = .0005. That is, for average Self-Esteem, every one point increase in Proxemic Distance
leads to a 1.74 reduction in CDS Stability. Finally, Proxemic Distance did not significantly
predict CDS Stability for high Self-Esteem individuals, b = -.451, t(113) = -.9422, p = .3481,
meaning for these individuals, there is no relationship between Proxemic Distance and CDS
Stability.
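Outside of SPSS, the PROCESS model 1 logic used here reduces to an OLS regression with an interaction term followed by simple-slope probing. The sketch below illustrates that logic with statsmodels; the file and column names are hypothetical placeholders, and the bias-corrected bootstrapping described above is not reproduced.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study4_moderation.csv")  # assumed file and column names

# Mean-center the moderator so simple slopes are interpretable.
df["se_c"] = df["self_esteem"] - df["self_esteem"].mean()

# Moderation (PROCESS model 1 analogue): Distance x Self-Esteem on CDS Stability.
model = smf.ols("cds_stability ~ proxemic_distance * se_c", data=df).fit()
print(model.summary())

# Conditional (simple-slope) effects of distance at low, average, and high self-esteem.
sd = df["se_c"].std()
for label, m in [("low (-1 SD)", -sd), ("average", 0.0), ("high (+1 SD)", sd)]:
    slope = (model.params["proxemic_distance"]
             + model.params["proxemic_distance:se_c"] * m)
    print(f"Effect of Proxemic Distance at {label} self-esteem: {slope:.2f}")
```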
Figure 7
Conditional effect of Proxemic Distance (x) on CDS Stability (y)
Johnson-Neyman Technique. The Johnson-Neyman Technique was used to further
examine the nature of the moderation model. Here, we found that the effect of Proxemic
Distance on CDS Stability began to be significant for individuals with ratings of Self-Esteem
.2486 above the mean (2.51) and below. Beginning at .3178 above the mean, however,
Proxemic Distance and CDS Stability cease to be related. To reiterate, when reported Self-
Esteem is at or below 2.7586 (2.51 + .2486), Proxemic Distance and CDS Stability are significantly
related, t(113) = -1.98, p = .05, b = -.76. As Self-Esteem decreases, the relationship between
Proxemic Distance and CDS Stability grows stronger (more negative), reaching its strongest at the
lowest self-esteem value (min = 1.63), b = -2.83, t(113) = -3.35, p = .001. Conversely, as Self-Esteem
increases (max = 3.38), the relationship between Proxemic Distance and CDS Stability ceases to
exist, b = .081, t(113) = .12, p = .91. As can be seen in Figure 7, high Self-Esteem has the flattest
slope for Proxemic Distance and low Self-Esteem has the steepest slope for Proxemic Distance.
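Continuing from the moderation sketch above, a rough Johnson-Neyman style scan can be written by hand: compute the conditional slope and its standard error across the observed range of the centered moderator and note where the two-tailed p-value crosses .05. The variable names remain hypothetical, and the output will not exactly reproduce the PROCESS macro.

```python
import numpy as np
from scipy import stats

# Pieces of the fitted moderation model (see the sketch above).
params, cov = model.params, model.cov_params()
b1 = params["proxemic_distance"]
b3 = params["proxemic_distance:se_c"]
v1 = cov.loc["proxemic_distance", "proxemic_distance"]
v3 = cov.loc["proxemic_distance:se_c", "proxemic_distance:se_c"]
c13 = cov.loc["proxemic_distance", "proxemic_distance:se_c"]

# Scan centered self-esteem values for the region where the conditional
# effect of Proxemic Distance on CDS Stability is significant at p < .05.
for m in np.linspace(df["se_c"].min(), df["se_c"].max(), 41):
    slope = b1 + b3 * m
    se = np.sqrt(v1 + 2 * m * c13 + m ** 2 * v3)
    p = 2 * stats.t.sf(abs(slope / se), model.df_resid)
    flag = "*" if p < .05 else " "
    print(f"centered self-esteem = {m:+.2f}  slope = {slope:+.2f}  p = {p:.3f} {flag}")
```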
Head-Mounted Display (HMD)
HMD Movement. A 3-way MANOVA was conducted examining the effects of
Proxemic Distance, Virtual Instructor Gender, and Participant Gender on the separate
measurements of HMD movement on the x, y, and z axes. A multivariate main effect for
Proxemic Distance was observed for HMD movement, F(3, 105) = 2.983, p = .035. Further, a 2-
way multivariate interaction effect was observed between Participant Gender and Virtual
Instructor Gender, F(3, 105) = 5.334, p = .002. Finally, a 3-way multivariate interaction effect
was observed for Proxemic Distance, Participant Gender, and Virtual Instructor Gender, F(3,
105) = 2.994, p = .034.
Univariate analyses revealed main effects of Proxemic Distance on x-axis movement, F(1, 115)
= 8.498, p = .004, and z-axis movement, F(1, 115) = 4.105, p = .045. In other words, there was
a significant difference in side-to-side movement (x) and front-back movement (z) depending on
the Virtual Instructor’s Proxemic Distance. A similar trend was observed for up-down
movement (y), but this main effect was not significant, F(1, 115) = 3.069, p = .083.
A univariate 2-way interaction effect between Participant Gender and Virtual Instructor Gender
was observed on only the x-axis movement, F(1,115) = 9.598, p = .002. As seen on Figure 8,
male participants’ x-axis head movements shot up in response to interacting with a female virtual
instructor, whereas female participants’ x-axis head movement declined when interacting with a
male virtual instructor. No other significant univariate main effects or interaction effects were
observed.
Figure 8
Interaction effect between Participant Gender and Virtual Instructor Gender on the x-axis
movement (HMD)
Discussion
Summary of Findings
Attribution. The most compelling finding in this study must be the transformative
effects of Proxemic Distance. Manipulating a controlled negative feedback message according
to Far or Close distance had the effect of categorically transforming the social perception of the
message. Specifically, delivering a series of negative feedback messages from a close distance
had the effect of lowering one’s sense of personal control and ability to change the present
situation while simultaneously raising a sense of the critic’s (the Virtual Instructor) control.
This finding has tremendous implications for learning and pedagogy, suggesting that a unique
pattern of attribution exists in response to criticism and negative feedback delivered at different
proxemics distances. Quite simply, negative feedback delivered at a proximal distance appears
to have the effect of stripping the student of a sense of control and agency, thus debilitating the
student from making the necessary adjustments needed to address the purported root of the
negative feedback.
Affect. The most compelling affect-based findings in the present study were the male
participants’ distinct tendency as students to “bounce-back” in response to the negative feedback
from the virtual instructor, reporting significantly higher levels of interest, excitement,
enthusiasm, determination, and activity. As observed in the initial manipulation check of the
negative feedback, the harshly critical nature of the negative feedback had the effect of reducing
various positive affect items and raising various negative affect items from pre to post. That
said, male participants exhibited a curiously unexpected pattern of asserting what must be
categorized as a defiant resilience, and a refusal to be negatively impacted by the criticisms. This
pattern of behavior may be explained by the tendency of negative feedback to enforce a sense of
accountability, thereby generating the attention and motivation needed to complete the task
successfully. This suggests male participants may require straightforward, even harsh, negative
feedback in a learning situation as opposed to a teaching strategy that simply compliments and
reinforces the student.
An additional interaction effect was observed across Proxemic Distance and Participant
gender on ratings of “Irritable”. Specifically, male participants seemed to report feeling far more
irritated by instructors providing negative feedback at a far distance than close distances,
whereas the inverse effect was seen among female participants. The male effect may be
attributable to the cognitive dissonance experienced with an extremely critical message coupled
with a perception of a less engaged body language (far distance). This irritation experienced by
male participants is likely associated with the tendency for male participants to try harder on the
task in the close conditions. This effect is discussed further in the following section.
Ad-hoc items. The results of the ad-hoc items demonstrated that the interpersonal distance of
the professor had clear impact on perceptions of the likability of the professor, the helpfulness of
the feedback, as well as the subsequent effort put into the task. Although participants in the
close conditions perceived the instructor to be less likable and the feedback to be less helpful,
they did report trying harder in the task than did participants in the far condition. The close
distance seems to play the role of raising the degree of accountability in the task – particularly
for male participants, who demonstrate a sizable drop off in effort when interacting with a far
virtual instructor. This pattern of behavior somewhat corroborates the above tendency for the
male participants to be more interested, excited, enthusiastic, determined, and active in
response to the instructor’s negative feedback.
Social Presence. Proxemic Distance also revealed a clear effect when it came to degree
of presence felt by participants across distance conditions. Specifically, participants in the close
conditions reported significantly greater feelings of physical transportation into the
virtual environment, ability to observe subtle cues such as body language, and a sense of
immersion and engagement.
Head Movements. Close distance resulted in significantly greater side-to-side
movement (x) and front-back movement (z) than far distance. Further, male participants’ x-axis
head movements shot up in response to interacting with a female virtual instructor, whereas
female participants’ x-axis head movement declined when interacting with a male virtual
instructor. The male participants’ head movements corroborated their defiant “bounce-back”
tendencies as evidenced in their PANAS ratings. Female participants, on the other hand,
responded in a more expected fashion, as one might expect participants receiving criticisms to
sink their heads in discouragement.
Further research should examine the gender-specific patterns observed here. While
questionnaire items are admittedly subjective, HMD data are objective. The objective
gender-specific patterns observed here may have greater implications for how we study
face-to-face human interaction.
Implications and Limitations
The present study has major implications for educational technology, the design of virtual
agents, and for presence researchers.
Teaching Style. Design of the negative feedback was scripted following the model of
cinematic arts courses that invoke a great deal of repetition and straightforward feedback. In
post-experiment interviews, many subjects reported feeling that the degree of repetition had an
annoying effect. In other words, the participants felt that the repetition was not helpful from a
pedagogical standpoint of learning “why” and “how” to improve. The experiment, of course, was
designed to deny success.
Suspending Disbelief. By the end of the experiment, most participants had some inkling
of the non-intelligent design of the experiment stimuli. That is, whether due to limitations in the
design of the messages or the design of the environment, most came to a gradual realization that
the feedback could not be entirely legitimate or authentic. What some participants reported in
post-experimental interviews, however, was that they could not suppress their emotions regardless of
this realization. Indeed, the various effects observed in this study highlight the participants’
inability to suspend disbelief over the medium – a significant issue in presence research.
Although presence measures were originally included in this experiment to test for the
immersiveness of the virtual environment, an overlooked aspect of the experiment was the
presence-based effects of the feedback messages themselves. In other words, the negative
feedback in this experiment was designed under the assumption that it would be received at
face value. The general sharpness and quick eye of participants to pick up on the inauthentic and
non-tailored nature of the feedback in effect revealed an interesting finding about the
permanence of negative feedback and negative affect: That is, even in the face of denial and
rejection of the legitimacy of the negative feedback, participants ultimately reacted in the
expected patterns of causality and emotion associated with authentic negative feedback.
Distance and Authenticity. One interesting null result was the lack of any
distinguishable pattern among the independent variables of distance, participant gender, and
instructor gender in dictating the perception of feedback and/or environment authenticity. That
is, there were no significant differences in perception of feedback or environment authenticity for
any of the independent variables. The lack of a significant difference between close and far
conditions was rather surprising as one would intuitively anticipate that the close distance would
elicit a more realistic perception of the interaction and in turn, the feedback. What is even more
surprising however, was when simply comparing the ratings of feedback authenticity/realism,
participants in the far condition rated to the feedback to be greater in authenticity than
participants in the close condition. Although the difference is nonsignificant, the perception of
greater realism of the far instructor conditions may have two possible explanations.
First, the farther virtual instructor is displayed completely in terms of physical
orientation, movements, and gestures whereas the closer virtual instructor requires intentional
movement of the head-mounted display to observe different parts of the instructor’s physical
orientation and gestures. The full representation of the body and all its inference-generating
affordances may have provided participants with a greater sense that the instructor’s feedback
was authentic.
Second, the farther virtual instructors may be representing a step prior to reaching the
uncanny valley, while the close virtual instructors – with their more closely depicted graphical
features—may be representing the uncanny valley itself. When the virtual characters are farther
away, some participants noted that they could not make out the facial expressions of the
characters. Others noted that the farther characters could not necessarily be definitively
identified as virtual characters, suggesting a possibility that the characters may represent an
actual human displayed in the virtual environment. In other words, the far distance may have
had the unintended effect of eliciting perceptions of greater realism simply by virtue of their lack
of realism. As a result of this relative lack of realism in the far conditions, participants would
have also had to “fill in” the facial expressions of characters and fixate more on the speech
and gesture of the character. Essentially, any confounding effect of the uncanny valley of the
close distance characters on the perception of the negative feedback message (“This is obviously
just a virtual character, so the feedback is probably not real”) would have been nullified by the
far distance.
Conclusion
This study departs from the model of Studies 1-3 in this dissertation in that the
Kinesthetic Motion Cue variable acts as both an Independent Variable and a Dependent Variable.
Specifically, this study examines the behavioral (causal attribution, affect), physiological (skin
conductance), and physical (head movement direction) impact of interpersonal distance
(independent variable) in a VR-based learning environment. Essentially, I measured
the degree to which participants moved their head forward or backwards in response to a very
close or far virtual agent.
The results of the present study have numerous implications for the design of virtual
agents for learning outcomes as well as the methodological design of studies utilizing virtual
agents in virtual environments. The simple difference of manipulating the interpersonal distance
between a participant and a virtual agent had the effect of transforming the attributional,
affective, immersion-related, effort-related, and even head-movement reactions of the participants in a
virtual environment. As mentioned earlier, social situations in human-human communication
naturally fluctuate in valence and nature. The results of the present study demonstrate evidence
that distinct response patterns exist among female and male users of virtual environments and
these differences should be accounted for when designing games and interventions that simulate
negatively-valenced and/or emotionally-charged social situations.
This project has numerous implications for classroom learning and teaching, mental
health, decision-making, skill-based training, and AI/machine learning. Since the effects of
negative feedback on learning have been well-established, teachers and educators tend to avoid
negative feedback in the learning process. That said, this does not preclude all educators from
employing negative feedback in their teaching. Some teachers have bad days and some teachers
are quite simply ineffective teachers. By simulating a negative feedback teaching situation in a
virtual environment, we present the potential for a more precise understanding of the effects of
negative feedback on students' learning, emotional state, attribution patterns, and even their
nonverbal reactions to the negative feedback.
The benefit of using immersive VR in this domain of research is that it allows the
reliable and precise replication of any social context while maximizing physical and social
realism. For obvious reasons, it simply would not be practical or feasible to test for these effects
of negative feedback in an actual teaching environment. Although many studies using VR target
prosocial themes and goals, real-world social interactions certainly involve a wider spectrum of
contexts that are by no means exclusively positive. VR environments allow us to test the
specific patterns in which people respond and react in these negatively valenced contexts and
offer the potential for designing interventions and training programs for positive social and
health outcomes.
Chapter 6: Full Discussion and Conclusion
Full Discussion
Summary of Findings and Limitations
This dissertation introduces the concept of Kinesthetic Motion Cues and examines the
combinatorial effects of these cues on a specific type of social perception – namely, generating
inferences and judgements of social situations. An overarching goal of this project is the
construction of a taxonomy that affords normative social inferences and distribution of
inferences people make given the same basic action components. By normative social inferences
I refer to inferences that fall within a normal distribution of inferences given the same basic
action components. While I focus on specific movement and visual cues previously tied to the
construction of social meaning, this line of inquiry is novel in both method and conceptual
significance with its analysis of the effects of layers of movement combinations. Indeed, Studies
1-3 demonstrate the iterative process in which meaning is constructed and re-constructed as
different movement and action components are added and removed within social situations.
Study 1. Study 1 was my first venture into the question of examining the impact of
physical cues on social meaning, using 4 different Kinesthetic Motion Cues (Distance, Direction,
Speed, Gesture) to generate 16 different text-based stimuli. In a 2x2x2x2 within-subjects design,
participants rated the degree to which they judged each 4-way movement cue combinations to be
characteristic of the DIAMONDS elements (Duty, Intellect, Adversity, Mating, pOsitivity,
Negativity, Deception, and Sociality). From this I was able to generate mean scores for each of
the movement combinations according to each of the DIAMONDS elements, thereby attain a
rough estimate of the highs and lows for each of the DIAMONDS. That is, I found the 4-way
movement combinations that participants generally deemed to be high in a particular
DIAMONDS, as well as low. The results of this mean score comparison revealed a pattern
where positive-valenced DIAMONDS, such as Intellect, Romance, Positivity and Sociality, were
generally associated with Movement Towards with Open Gesture. Negative-valenced items, on
the other hand, were generally associated with Movement Away with Closed gesture. As
reiterated earlier, this study is novel in its approach of analyzing the effects of multiple
movement and physical cues in combination. That being said, the tendency of the DIAMONDS
to bifurcate into positive versus negative judgments presents a lack of specificity that will be
addressed in the Limitations section below. A subsequent repeated measures MANOVA
analysis allowed further insight into the multi-level effects of each of the Kinesthetic Motion Cue
combination either on their own, with another, with 2, or with all 4 analyzed together.
Study 2. The results from Study 1 provided fascinating clues to the cues that are pieced
together to form judgments of social situations from a one-way 1st-person perspective. That is,
the stimuli in Study 1 were presented such that participants rated judgments based on a
hypothetical person engaging in a hypothetical set of behaviors. What was left unanswered,
however, was a sense of the manner in which the meaning of the social situation could be
subsequently changed by the reactionary movement responses of a second person. As such, a
2x2x2x2x2x2 within-subjects design was constructed to examine participants’ social judgments
(DIAMONDS ratings) based on a 2-person “interaction” where Person 1 engaged in a particular
movement (Direction, Speed, and Gesture) and Person 2 sequentially engaged in their own set of
movements (Direction, Speed, and Gesture). Distance, which acts as a constant variable across
two individuals, was treated as a separate between-subjects factor. In other words, the
design of this study consisted of a within-subjects 2x2x2 (Person 1) x 2x2x2 (Person 2) in a
between-subjects 2x2 (Close and Far Distance). The results of Study 2 revealed empirical
evidence for the transformation of social meaning with the various different combinations of a
second person’s reactionary movements to a first person’s various movement combinations.
Study 3. While Study 1 and Study 2 were interesting first steps to exploring the way
people form judgments about social situations based on a combination of Kinesthetic Motion
Cues, one lingering limitation of the 2 studies were that the stimuli containing the movement
cues was purely text-based. Indeed, other studies exploring the social perceptual impact of
movement have examined movement of geometric shapes, as well as point-light displays of
walking human silhouettes. To capture the nuances of perceiving the movements of an actual
human, Study 3 relied on Smartbody to design virtual agents that moved in the manner of the
text-based stimuli of Study 1 and Study 2. While using virtual agents has tremendous potential
for exploring human movement in a testable environment, the software presented a few
limitations. First, movements could only be generated in sequence. That is, each particular
movement command would have to occur one at a time. Now, this may not be so terrible as it
mirrors reality. That said, while text-based stimuli allow for an imagined sequence, an explicit
sequence would necessitate accounting for order effects. In other words, does a walking towards
before a gesture mean something different from gesturing before walking towards? Clearly, there
is a difference—albeit small—and this would need to be accounted for. This relates closely to
the second limitation of Study 3. Smartbody as software does not allow for locomotion
(walking) to occur simultaneously with a gesture. As such, Gesture as conceptualized in
Studies 1 and 2 had to be scrapped in Study 3. To replace Gesture, I added the variable of Gaze
(direct versus averted), which has figured prominently in developmental psychology (Baron-
Cohen, 1995) and nonverbal communication (Ekman & Friesen, 1969). It should be noted that
the design of Study 3 mirrored that of Study 1 (2x2x2x2) with the exception of Gaze replacing
Gesture as one of the independent variables. A third limitation of Study 3 was the use of only
one virtual agent character, and thus the use of only one gender. Clearly, gender and appearance
(in addition to culture) closely impact the decoding of Kinesthetic Motion Cues to social
meaning. Unfortunately for Study 3, the stimuli were presented in a very raw, basic form.
While this may be useful in terms of experimental control, it is worth considering the effects of
Kinesthetic Motion Cues in an actual interaction – thus warranting Study 4.
Study 4. Study 4 differs from Studies 1-3 in two primary ways. First, Study 4 demonstrates
an applied analysis of Kinesthetic Motion Cues in an “actual” (virtual) social interaction. Even
though Study 4 only focuses on one of the 4 (5 if including Gaze from Study 3) Kinesthetic
Motion Cues, we can begin to understand the challenges scholars face in examining such visual,
often implicit cues with reliability. To reiterate, it is for the sake of this reliability and to
effectively “contain” the analysis of these social situations that I limit my cue variables, both in
form (4) and in levels (2). The second way Study 4 differs from Studies 1-3 in this dissertation is
that the Kinesthetic Motion Cue variable acts as both an Independent Variable and a Dependent
Variable. Specifically, this study examines the behavioral (causal attribution, affect),
physiological (Skin conductance), and physical (head movement direction) impact of
interpersonal distance (Independent variable) behavior in a VR-based learning environment.
Essentially, I measured the degree to which participants moved their head forward or backwards
in response to a very close or far virtual agent. While Studies 1-3 established a causal
explanation of the effects of movement combinations on social judgments, Study 4 introduces
two different interpersonal distances and measures the different patterns of psychological,
physiological, and physical reactions. Among these, the most compelling and relevant finding to
this dissertation is the analysis of participants’ head movements. Specifically, I find that when
presented with a virtual agent in close proximity, participants move forward if the virtual agent is
the same gender while moving backwards if the virtual agent is the opposite gender. This is the
first known examination of the subtle head movement patterns in response to physical
interpersonal distance. The benefit of using immersive VR in this domain of research is that it
allows the reliable and precise replication of any social context while maximizing physical and
social realism. It simply would not be practical or feasible to test for these movements in an
actual physical environment.
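As an illustration of how such a measure can be derived from head-tracking data, the sketch below computes mean head displacement along the axis facing the agent, relative to a resting baseline, so that positive values indicate leaning toward the agent and negative values indicate leaning away. It is a simplified sketch under assumed conventions (sampling rate, axis orientation, baseline window), not the exact analysis pipeline used in Study 4.

import numpy as np

def mean_forward_displacement(head_z, baseline_samples=30):
    # head_z: head positions (metres) along the axis toward the virtual agent,
    # sampled over the trial. The first `baseline_samples` readings are treated
    # as the resting position; positive output = approach, negative = withdrawal.
    head_z = np.asarray(head_z, dtype=float)
    baseline = head_z[:baseline_samples].mean()
    return float((head_z[baseline_samples:] - baseline).mean())

# Hypothetical 90 Hz trace: a resting period followed by a slight lean forward.
trace = np.concatenate([np.random.normal(0.00, 0.002, 30),
                        np.random.normal(0.03, 0.002, 270)])
print(mean_forward_displacement(trace))  # roughly 0.03 m toward the agent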
DIAMONDS-specific Limitations
The results of Studies 1-3 provide insight into both the implications and the limitations
of the DIAMONDS measurement. Specifically, there is a clearly apparent polarization of judgments
for negatively versus positively valenced DIAMONDS, indicating competition amongst multiple
judgments for specific movement combination patterns. In other words, the 4 movement variables
used here provide some degree of predictive value, at least to the level of distinguishing negative
from positive situations. That being said, additional levels of abstraction (more cues as
variables) may be required to attain a more fine-grained analysis. Second, this positive/negative
pattern may organize overall situational inferences in a reptilian approach-avoidance fashion
(Roth & Cohen, 1986; Elliot & Thrash, 2002), a motivational system that has been shown to contain
hierarchical components (Elliot & Church, 1997). Relatedly, this positive/negative bifurcation of
judgments may be evidence of the need for a more fine-grained measurement of social situations.
This leads me to the final implication of this positive/negative distinction, which is in fact a
limitation of the DIAMONDS measurement. Specifically, these results suggest we need a closer
examination of the DIAMONDS components. Indeed, the DIAMONDS components appear to nest into larger
categories and perhaps sub-categories. A multi-layer typology, with Negative vs. Positive at the
top level, would perhaps then provide a more fine-grained understanding of the distinction between
the other DIAMONDS dimensions (Duty, Intellect, Adversity, Mating, Deception, Sociality).
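A minimal Python sketch of such a two-level read-out appears below. It assumes that the top level is decided by comparing the Positivity and Negativity ratings and that the remaining six dimensions form the fine-grained profile within that branch; the structure is an illustrative assumption, not a validated typology.

def classify_situation(ratings):
    # ratings: dict mapping each DIAMONDS dimension to a numeric rating.
    # Top level: Positive vs. Negative, decided from the two valence ratings.
    top = "positive" if ratings["positivity"] >= ratings["negativity"] else "negative"
    # Lower level: the remaining six dimensions reported as a fine-grained profile.
    fine = {d: ratings[d] for d in
            ("duty", "intellect", "adversity", "mating", "deception", "sociality")}
    return top, fine

example = {"duty": 2.4, "intellect": 2.3, "adversity": 1.7, "mating": 2.9,
           "positivity": 3.1, "negativity": 1.8, "deception": 1.2, "sociality": 3.9}
print(classify_situation(example))  # ('positive', {...fine-grained profile...})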
Implications and Future Directions
A significant consideration from this dissertation is the potential for attaining normative
data on reliable patterns of how users make inferences from combinations of these cues across
multiple extended narratives. Indeed, as the studies in this dissertation focused on only a single
presentation of each simple movement combination, future studies could focus both on accounting
for the order effects of presenting the movement cues in different sequences and on
examining the effects of the movement cues in extended story-like narratives. The normative
versions of these inferences might then provide the basis for thinking about how atypical groups,
such as individuals on the autism spectrum, might differ in how they process these movement
cues.
I reiterate that movement and action as social cues are associated with some of the most
basic elements of human social perception and cognition – namely, detection of agency,
intentionality, and causality. Indeed, the Heider-Simmel experiment (1944) laid the groundwork
for all related lines of inquiry that examine the effects of movement on perceptions of others'
mental states and emotions. Evidence from developmental psychology demonstrates that some of
these capacities and competencies develop rather early in human development. For example,
Meltzoff (1995) found that while infants attribute goals to people engaging in a particular action,
they do not do so to a mechanical device that mimicked the same human action, suggesting that
physical movements by a non-human machine do not retain the same meaning. Johnson (2000)
took this idea a step further and found that infants follow the gaze of a fuzzy, anthropomorphic
object, but only if the object demonstrated a rudimentary form of interaction (i.e., beeping when
the infant babbles). As we consider the question of normative inference-making, it is worth
considering the specific cognitive and developmental differences among individuals in the
autism spectrum that may generate non-normative inferences based on movement cues.
Autism Application. One of the more prominent applications of Baron-Cohen’s (1995)
model of agency and Theory of Mind has been to the development of social skills in autism, which
involves a range of deficits that can be explained by some combination of impairment in the Shared
Attention Mechanism (SAM) and/or the Theory of Mind Mechanism, with the basic functioning of the
Intentionality Detector and Eye-Direction Detector intact. For example, most children with
autism do not exhibit main forms of joint-attention behavior such as gaze monitoring,
“protodeclarative” pointing gestures (Baron-Cohen, 1989), and other declarative showing
gestures, which are critical to the functioning of shared attention between the self and other.
According to Baron-Cohen (1995), SAM impairments leave children with autism with no
output to trigger the Theory of Mind Mechanism (ToMM). As a result of impairments in
ToMM, children with autism have difficulty understanding the mental state of pretending,
distinguishing seeing from knowing, distinguishing between ambiguous emotions, identifying
the mentalistic role of the brain, making ontological distinctions between mental and physical
entities, and distinguishing appearance (rock) from reality (sponge shaped like a rock).
Imitation in autism. There is considerable evidence that autism entails a significant
deficit in imitative ability (Rogers & Pennington, 1991; Smith & Bryson, 1994; Rogers, 1999;
Whiten & Brown, 1999); it is worth noting that imitation is one of the precursors to a
fully-functioning Theory of Mind (Malle, 2004). Specifically, researchers have found that people
with autism maintain
imitative deficits that are 1.5 standard deviations below non-autistic people (Rogers et al.,
1996). Further, Rogers (1999) notes that caretakers have difficulty in teaching young children
with autism to imitate the action of brushing teeth with an imaginary toothbrush. In addition,
individuals with autism have demonstrated difficulties imitating meaningless gestures (Merians
et al., 1997), difficulties imitating uncommon actions with common objects (Smith & Bryson,
1994), and difficulties imitating reversed hand positions (Perner, 1996). Ultimately, difficulty in
imitation may represent a deficit among individuals with autism to map actions of others onto
oneself (Whiten & Brown, 1999).
Mirror neurons. Williams et al. (2001) suggest that a mirror neuron system delay or
dysfunction at a young age can be the first domino that results in a complex cascade of
dysfunctions that characterize autism. Essentially, a dysfunction in the mirror neuron system
would interfere with imitation capacities at a macro level and coordination of self-other
representations at a more fundamental level (Rogers & Pennington, 1991). Consequently,
autistic individuals will fail to develop communication competencies such as shared attention,
gesture recognition, empathy, and a complete Theory of Mind.
A representational understanding of desire maintains that a single entity (object, event,
situation) can appear desirable to some and undesirable to others (Phillips et al., 1995), and such
understanding requires that desires be seen as subjective (Astington & Gopnik, 1991).
Scholars like Baron-Cohen (1989) and Carruthers (1996) argue, based on appearance/reality tasks,
that autistic children lack self-awareness. An example of an appearance/reality task is when
children are asked to play with a sponge that resembles a rock. While most 3-year olds
understand that the sponge is indeed a sponge, autistic children have difficulty understanding that
the sponge is not a rock. Similarly, Frith and Happe (1999) argue that adults with Asperger’s
syndrome also lack self-awareness. Nichols and Stich (2003) refute the above claims, suggesting
that individuals with autism are indeed aware of and have access to their own beliefs, desires, thoughts, and
emotions. Instead, Nichols and Stich (2003) argue that the “inner lives” of individuals with
autism are starkly different from that of individuals without autism. Specifically, individuals
with autism exhibit less inner speech and significantly less time thinking about others’ inner
thoughts. Much as a novice on the topic of birds has far less to say about birds than an expert,
individuals with autism are, on this view, novices in their reports of their own mental states.
Neural Networks. The results of this dissertation hint at the tremendous complexity
that colors human interaction. As evidenced by Study 2, even the most rudimentary two-person
social interaction involves a 128-condition design, which stretches the limits of both human
comprehension and calculation. Even without considering verbal speech, the movement combinations
addressed in this dissertation point to the challenge of fully grasping and analyzing the nuances
of human communication. With these challenges in mind, I am attempting to model the results of
Studies 1-3 in a neural network using the modeling software Emergent. The goal is to create a
neural network of social situations in which combinations of body motion and movement predict the
inference making that comes with social perception.
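As a proof of concept (and not the Emergent model itself), the minimal sketch below maps the four binary cue variables onto eight DIAMONDS ratings with a single hidden layer trained by plain gradient descent. The target ratings here are random placeholders; in practice they would be the mean ratings obtained in Studies 1-3.

import numpy as np

rng = np.random.default_rng(0)

# Inputs: the 16 combinations of the four binary cues (distance, direction,
# gesture, speed). Targets: eight DIAMONDS ratings rescaled to 0-1
# (placeholders for illustration).
X = np.array(list(np.ndindex(2, 2, 2, 2)), dtype=float)
Y = rng.random((16, 8))

W1 = rng.normal(0, 0.5, (4, 12)); b1 = np.zeros(12)   # hidden-layer weights
W2 = rng.normal(0, 0.5, (12, 8)); b2 = np.zeros(8)    # output-layer weights

def forward(X):
    H = np.tanh(X @ W1 + b1)
    P = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))           # sigmoid output ratings
    return H, P

for step in range(5000):                                # plain gradient descent
    H, P = forward(X)
    dZ2 = (P - Y) * P * (1 - P)                         # output-layer error signal
    dW2, db2 = H.T @ dZ2, dZ2.sum(axis=0)
    dZ1 = (dZ2 @ W2.T) * (1 - H ** 2)                   # back-propagated to hidden layer
    dW1, db1 = X.T @ dZ1, dZ1.sum(axis=0)
    W1 -= 0.05 * dW1; b1 -= 0.05 * db1
    W2 -= 0.05 * dW2; b2 -= 0.05 * db2

print(np.abs(forward(X)[1] - Y).mean())                 # mean absolute error after training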
The results of this dissertation suggest there tends to be competition and confusion when
it comes to assigning social meaning based on certain combinations of body movement. For
example, it is difficult to conclude with statistical confidence that a situation involving a
person who is close and moving towards you quickly with their arms open is solely a romantic
situation. Such a neural network model of social situations based on the Competition Model of
Social Perception (Read & Miller, 1998) should allow for more accurate judgments of internal
mental states as well as more accurate predictions of future behavior—topics of high relevance
for those constructing deep learning/artificial intelligence technology.
Computational modeling affords precise measurements of the perceptual correlates of
Theory of Mind components such as intention. Intention, for instance, may be measured in terms
of degree of movement, distance, and eye gaze duration for Frith-Happé animations (Roux,
Passerieux, & Ramus, 2013). Additionally, movement features such as position, velocity, and
acceleration predict subjects' segmentation of events depicted in Heider-Simmel simulations
(Zacks, 2004). Others show that perception of the animacy of a single moving shape is based on
differences in orientation/direction, changes in direction, and changes in speed (Tremoulet &
Feldman, 2000).
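To make this concrete, the minimal sketch below extracts such low-level motion features from a shape's trajectory. The feature definitions are generic assumptions for illustration; the cited studies each use their own feature sets and parameters.

import numpy as np

def motion_features(xy, dt=1 / 30):
    # xy: (n, 2) array of positions sampled every dt seconds.
    xy = np.asarray(xy, dtype=float)
    velocity = np.diff(xy, axis=0) / dt                 # frame-to-frame velocity
    speed = np.linalg.norm(velocity, axis=1)
    acceleration = np.diff(speed) / dt                  # change in speed
    heading = np.arctan2(velocity[:, 1], velocity[:, 0])
    turning = np.abs(np.diff(np.unwrap(heading)))       # change of direction (radians)
    return speed, acceleration, turning

# Toy trajectory: straight motion that then curves and speeds up.
t = np.linspace(0, 2, 61)
path = np.c_[t, np.where(t < 1, 0.0, (t - 1) ** 2)]
speed, acceleration, turning = motion_features(path)
print(speed.mean(), np.abs(acceleration).max(), turning.max())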
Figure 1
Data from Neural Network Analysis Demonstrates Learning Difficulty
Figure 2
Visual Representation of Neural Network Analysis of Study 1
That being said, we face a challenge in the current technology of machine learning in that
computers are just now beginning to be able to identify objects and predict rudimentary
behavior. Something as complex as generating social inferences in neural networks has simply never
been done and is arguably the goal of computer scientists who study artificial intelligence today.
Face versus Body. Another question drawn from Studies 1 and 2 is the degree to which the
face plays a role in the perception of movement cues. That is, how important are facial cues when
we generate social meaning from physical, body-based cues? I am in the process of beginning data
collection on a study testing whether the realism and accuracy of the
representation of kinesthetic movements may be seen as a closer determinant of authenticity than
realism and accuracy of the face when it comes to generating inferences about intentions and
ideas. Take, for example, the popularity of the Nintendo Wii in the 2000s. In spite of its lack of
image realism, the accuracy with which one's movement of the controller was represented in the
video game sparked its mass popularity. This “accuracy” may be referred to as
kinesthetic fidelity.
The inspiration for this study comes from a study by Yovel and O'Toole (2016), which
compared body motion and faces as a source of person recognition in the superior temporal
sulcus region of the brain. As the authors note, person recognition had traditionally been studied
with static images of faces, when in fact face recognition is part of a larger system
responsible for recognizing people dynamically in a naturalistic environment (not a still
photograph!).
In much the same vein, social perception research has focused primarily on facial
expressions and somewhat on gestures when, in actuality, body movements and
orientations play a major role in the assignment of social meaning. As such, I pose the following
research questions: Does a conception of a full body (albeit virtual) change the perceived
authenticity of the interaction as opposed to a mere face? While faces provide much meta data
about inferences, the body clearly provides a great deal of additional data. Are certain
emotions/intentions better represented/communicated by the body and others better
communicated by the face? By gaining a more holistic understanding of the body as a site of
inference-making, we may gain more insight into the significance of kinesthetic fidelity in social
perception.
Social Perception of Social Media Messages. When considering the impact on message
cues in social interactions, it is important to acknowledge that much of our interaction today and
certainly in the future takes place virtually, in a non-visual, non-physical environment. Social
media scholars would argue that the move towards the virtual reduces the need for the depth to
which I analyze Kinesthetic Motion Cues in this dissertation. That said, I would argue that
movement cues persist in a non-physical environment. In the social media era, emojis, emoticons,
stickers, and memes provide a richer, more satisfying mode of computer-mediated communication.
Emojis and stickers that represent the entire body or additional body parts may be perceived as
more satisfying, and as more accurately depicting a state of mind or emotion, than emojis
representing only the face. Additional indicators on emojis such as hearts, tears, smoke, and other cartoonish
personifications of actual emotions are likely deemed more satisfying to CMC users because
they more accurately represent their emotional states and intentions. In other words, there is
greater agreement and overlap in users’ interpretations of these emojis – thus forming a separate
“language” of emojis. A simple comparison of the emojis and emoticons that represent body
movement/gestures with those that include only the face will be tested across the above conditions
in order to provide further evidence of the importance of bodily representations in
social media interactions.
Further, the sequencing of messages in CMC, whether a message coupled with an emoji or a
message on its own, clearly impacts the perceived social meaning of the
message. Using the methods of this dissertation as a starting point, it is worth exploring the
sequential ordering of text and visual messages in CMC.
Conclusion
Rooted in human communication are intent and purpose. Every communicative act has an
intention, and rooted in this intention is an actionable thought, an idea with an action potential.
At the most fundamental level, then, communication presumes movement and motion. Communication is
your declaration of agency, and at the heart of that agency lies the extraordinary possibility
space of kinesthetic movement. Your capacity to engage in free and autonomous locomotion
(physically or virtually) is what defines you as a living organism with agency. Your capacity to
use such motion to either explicitly or implicitly structure and re-define your communicative
intent is what makes you a human with agency.
This dissertation is the first step towards a new methodology and approach to
understanding the elements of our communication that construct social meaning. I focus my
attention here on the role of the body and its movement on coloring this social meaning. If we
understood and could constrain the meaning of movements, and had a way to more automatically
code them, this could lead to a transformation in our ability to understand a more limited range
of “alternative interpretations” of changing social situations “online” and in real-time. The
methods used in this dissertation are novel, but not without their limitations. Future work is
needed in computational modeling and machine learning, in further analysis of the role of physical
movement sequences, and in the exploration of lengthier narratives and verbal messages. With this
dissertation, we are but at the tip of the iceberg.
Reference List
Abell, F., Happé, F., & Frith, U. (2000). Do triangles play tricks? Attribution of mental states to
animated shapes in normal and abnormal development. Cognitive Development, 15(1), 1–
16. https://doi.org/10.1016/S0885-2014(00)00014-9
Agyei, S. B., Weel, V. D., R. (Ruud), F., Meer, V. D., & H, A. L. (2016). Development of Visual
Motion Perception for Prospective Control: Brain and Behavioral Studies in Infants.
Frontiers in Psychology, 7. https://doi.org/10.3389/fpsyg.2016.00100
Allison, T., Puce, A., & McCarthy, G. (2000). Social perception from visual cues: role of the
STS region. Trends in Cognitive Sciences, 4(7), 267–278. https://doi.org/10.1016/S1364-
6613(00)01501-1
Astington, J. W., & Gopnik, A. (1991). Theoretical explanations of children’s understanding of
the mind. British Journal of Developmental Psychology, 9(1), 7–31.
https://doi.org/10.1111/j.2044-835X.1991.tb00859.x
Bailenson, J. N., Blascovich, J., Beall, A. C., & Loomis, J. M. (2001). Equilibrium Theory
Revisited: Mutual Gaze and Personal Space in Virtual Environments. Presence:
Teleoperators and Virtual Environments, 10(6), 583–598.
https://doi.org/10.1162/105474601753272844
Bailenson, J. N., Blascovich, J., Beall, A. C., & Loomis, J. M. (2003). Interpersonal Distance in
Immersive Virtual Environments. Personality and Social Psychology Bulletin, 29(7),
819–833. https://doi.org/10.1177/0146167203029007002
Barnaud, M.-L., Morgado, N., Palluel-Germain, R., Diard, J., & Spalanzani, A. (2014). Proxemics
models for human-aware navigation in robotics: Grounding interaction and personal space
models in experimental data from psychology. In Proceedings of the 3rd IROS'2014 workshop
"Assistance and Service Robotics in a Human Environment." Chicago, United States. Retrieved
from https://hal.archives-ouvertes.fr/hal-01082517
Baron-Cohen, S. (1989). The Autistic Child’s Theory of Mind: a Case of Specific
Developmental Delay. Journal of Child Psychology and Psychiatry, 30(2), 285–297.
https://doi.org/10.1111/j.1469-7610.1989.tb00241.x
Baron-Cohen, S., Campbell, R., Karmiloff-Smith, A., Grant, J., & Walker, J. (1995). Are
children with autism blind to the mentalistic significance of the eyes? British Journal of
Developmental Psychology, 13(4), 379–398. https://doi.org/10.1111/j.2044-
835X.1995.tb00687.x
Baron-Cohen, S., Leslie, A. M., & Frith, U. (1985). Does the autistic child have a “theory of
mind” ? Cognition, 21(1), 37–46. https://doi.org/10.1016/0010-0277(85)90022-8
Barrett, H. C., Todd, P. M., Miller, G. F., & Blythe, P. W. (2005). Accurate judgments of
intention from motion cues alone: A cross-cultural study. Evolution and Human
Behavior, 26(4), 313–331. https://doi.org/10.1016/j.evolhumbehav.2004.08.015
Barsalou, L. W. (2009). Simulation, situated conceptualization, and prediction. Philosophical
Transactions of the Royal Society of London B: Biological Sciences, 364(1521), 1281–
1289. https://doi.org/10.1098/rstb.2008.0319
Blascovich, J., Loomis, J., Beall, A. C., Swinth, K. R., Hoyt, C. L., & Bailenson, J. N. (2002).
TARGET ARTICLE: Immersive Virtual Environment Technology as a Methodological
Tool for Social Psychology. Psychological Inquiry, 13(2), 103–124.
https://doi.org/10.1207/S15327965PLI1302_01
Blascovich, J., Mendes, W. B., Hunter, S. B., & Salomon, K. (1999). Social “facilitation” as
challenge and threat. Journal of Personality and Social Psychology, 77(1), 68–77.
https://doi.org/10.1037/0022-3514.77.1.68
Blythe, P. W., Todd, P. M., & Miller, G. F. (1999). How motion reveals intention: Categorizing
social interactions. In Simple heuristics that make us smart (pp. 257–285). New York,
NY, US: Oxford University Press.
Buss, D. M. (1995). Evolutionary Psychology: A New Paradigm for Psychological Science.
Psychological Inquiry, 6(1), 1–30. https://doi.org/10.1207/s15327965pli0601_1
Carruthers, P., & Smith, P. K. (1996). Theories of Theories of Mind. Cambridge University Press.
Michotte, A. (1962). Causalité, permanence et réalité phénoménales [Phenomenal causality,
permanence and reality]. Oxford, England: Publications Universitaires.
Chessa, M., Maiello, G., Borsari, A., & Bex, P. J. (2016). The Perceptual Quality of the Oculus
Rift for Immersive Virtual Reality. Human–Computer Interaction, 0(0), 1–32.
https://doi.org/10.1080/07370024.2016.1243478
CollisionAvoidance_BoenschEtAl_2015.pdf. (n.d.). Retrieved from http://vr.rwth-
aachen.de/media/papers/CollisionAvoidance_BoenschEtAl_2015.pdf
Cross, E. S., Ramsey, R., Liepelt, R., Prinz, W., & Hamilton, A. F. de C. (2016). The shaping of
social perception by stimulus and knowledge cues to human animacy. Phil. Trans. R. Soc.
B, 371(1686), 20150075. https://doi.org/10.1098/rstb.2015.0075
Dijksterhuis, A., & Bargh, J. A. (2001). The perception-behavior expressway: Automatic effects
of social perception on social behavior. Advances in Experimental Social Psychology, 33,
1–40. https://doi.org/10.1016/S0065-2601(01)80003-4
Dittrich, W. H. (1993). Action Categories and the Perception of Biological Motion. Perception,
22(1), 15–22. https://doi.org/10.1068/p220015
Dittrich, W. H., & Lea, S. E. G. (1994). Visual Perception of Intentional Motion. Perception,
23(3), 253–268. https://doi.org/10.1068/p230253
Dittrich, W. H., Troscianko, T., Lea, S. E. G., & Morgan, D. (1996). Perception of Emotion from
Dynamic Point-Light Displays Represented in Dance. Perception, 25(6), 727–738.
https://doi.org/10.1068/p250727
Ekman, P., & Friesen, W. V. (1969). The repertoire of nonverbal behavior: Categories, origins,
usage, and coding. semiotica, 1(1), 49-98.
Ekman, P., & Friesen, W. V. (1972a). Hand Movements. Journal of Communication, 22(4), 353–
374. https://doi.org/10.1111/j.1460-2466.1972.tb00163.x
Elliot, A. J., & Church, M. A. (1997). A hierarchical model of approach and avoidance
achievement motivation. Journal of Personality and Social Psychology, 72(1), 218–232.
https://doi.org/10.1037/0022-3514.72.1.218
Elliot, A. J., & Thrash, T. M. (2002). Approach-avoidance motivation in personality: Approach
and avoidance temperaments and goals. Journal of Personality and Social Psychology,
82(5), 804–818. https://doi.org/10.1037/0022-3514.82.5.804
Falck-Ytter, T., Bölte, S., & Gredebäck, G. (2013). Eye tracking in early autism research.
Journal of Neurodevelopmental Disorders, 5(1), 28. https://doi.org/10.1186/1866-1955-
5-28
Feng, A., Huang, Y., Kallmann, M., & Shapiro, A. (2012). An Analysis of Motion Blending
Techniques. In Motion in Games (pp. 232–243). Springer, Berlin, Heidelberg.
https://doi.org/10.1007/978-3-642-34710-8_22
Feng, A. W., Xu, Y., & Shapiro, A. (2012). An Example-based Motion Synthesis Technique for
Locomotion and Object Manipulation. In Proceedings of the ACM SIGGRAPH
Symposium on Interactive 3D Graphics and Games (pp. 95–102). New York, NY, USA:
ACM. https://doi.org/10.1145/2159616.2159632
Fletcher, G. J. O., & Fincham, F. D. (2013). Cognition in Close Relationships. Psychology Press.
Frith, U. (1994). Autism and theory of mind in everyday life. Social Development, 3(2), 108–
124. https://doi.org/10.1111/j.1467-9507.1994.tb00031.x
Frith, U., & Happé, F. (1999). Theory of Mind and Self-Consciousness: What Is It Like to Be
Autistic? Mind & Language, 14(1), 82–89. https://doi.org/10.1111/1468-0017.00100
Gao, T., McCarthy, G., & Scholl, B. J. (2010). The Wolfpack Effect: Perception of Animacy
Irresistibly Influences Interactive Behavior. Psychological Science, 21(12), 1845–1853.
https://doi.org/10.1177/0956797610388814
Gao, T., Newman, G. E., & Scholl, B. J. (2009). The psychophysics of chasing: A case study in
the perception of animacy. Cognitive Psychology, 59(2), 154–179.
https://doi.org/10.1016/j.cogpsych.2009.03.001
Gelman, S. A., & Opfer, J. E. (2002). Development of the Animate–Inanimate Distinction. In U.
Goswami (Ed.), Blackwell Handbook of Childhood Cognitive Development (pp. 151–
166). Blackwell Publishers Ltd. https://doi.org/10.1002/9780470996652.ch7
Gifford, R. (1994). A lens-mapping framework for understanding the encoding and decoding of
interpersonal dispositions in nonverbal behavior. Journal of Personality and Social
Psychology, 66(2), 398.
Gillath, O., McCall, C., Shaver, P. R., & Blascovich, J. (2008). What Can Virtual Reality Teach
Us About Prosocial Tendencies in Real and Virtual Environments? Media Psychology,
11(2), 259–282. https://doi.org/10.1080/15213260801906489
Goal attribution without agency cues: the perception of “pure reason” in infancy. (n.d.).
Retrieved June 20, 2017, from
http://www.sciencedirect.com/science/article/pii/S0010027799000396
Gockley, R., Forlizzi, J., & Simmons, R. (2007). Natural Person-following Behavior for Social
Robots. In Proceedings of the ACM/IEEE International Conference on Human-robot
Interaction (pp. 17–24). New York, NY, USA: ACM.
https://doi.org/10.1145/1228716.1228720
Grassinger, R., & Dresel, M. (2017). Who learns from errors on a class test? Antecedents and
profiles of adaptive reactions to errors in a failure situation. Learning and Individual
Differences, 53, 61–68. https://doi.org/10.1016/j.lindif.2016.11.009
Hall, E. T. (1966). The hidden dimension, 1st ed (Vol. xii). New York, NY, US: Doubleday &
Co.
Happé, F. G. E. (1995). The Role of Age and Verbal Ability in the Theory of Mind Task
Performance of Subjects with Autism. Child Development, 66(3), 843–855.
https://doi.org/10.1111/j.1467-8624.1995.tb00909.x
Hartholt, A., Traum, D., Marsella, S. C., Shapiro, A., Stratou, G., Leuski, A., … Gratch, J.
(2013). All Together Now. In Intelligent Virtual Agents (pp. 368–381). Springer, Berlin,
Heidelberg. https://doi.org/10.1007/978-3-642-40415-3_33
Heider, F., & Simmel, M. (1944). An Experimental Study of Apparent Behavior. The American
Journal of Psychology, 57(2), 243–259. https://doi.org/10.2307/1416950
Hirschfeld, L. A., & Gelman, S. A. (1994). Mapping the Mind: Domain Specificity in Cognition
and Culture. Cambridge University Press.
Iacoboni, M. (2009). Imitation, Empathy, and Mirror Neurons. Annual Review of Psychology,
60(1), 653–670. https://doi.org/10.1146/annurev.psych.60.110707.163604
Johansson, G. (1973). Visual perception of biological motion and a model for its analysis.
Perception & Psychophysics, 14(2), 201–211. https://doi.org/10.3758/BF03212378
Johnson, S. C. (2000). The recognition of mentalistic agents in infancy. Trends in Cognitive
Sciences, 4(1), 22–28. https://doi.org/10.1016/S1364-6613(99)01414-X
Juslin, P. N., & Scherer, K. R. (2008). Vocal expression of affect. In J. A. Harrigan, R.
Rosenthal, & K. R. Scherer (Eds.), The new handbook of methods in nonverbal behavior research
(pp. 65). New York, NY: Oxford University Press.
Kamide, H., Mae, Y., Takubo, T., Ohara, K., & Arai, T. (2014). Direct comparison of
psychological evaluation between virtual and real humanoids: Personal space and
subjective impressions. International Journal of Human-Computer Studies, 72(5), 451–
459. https://doi.org/10.1016/j.ijhcs.2014.01.004
Kastanis, I., & Slater, M. (2012). Reinforcement Learning Utilizes Proxemics: An Avatar Learns
to Manipulate the Position of People in Immersive Virtual Reality. ACM Trans. Appl.
Percept., 9(1), 3:1–3:15. https://doi.org/10.1145/2134203.2134206
Kellman, P. J., Spelke, E. S., & Short, K. R. (1986). Infant Perception of Object Unity from
Translatory Motion in Depth and Vertical Translation. Child Development, 57(1), 72–86.
https://doi.org/10.2307/1130639
Kopp, S., Krenn, B., Marsella, S., Marshall, A. N., Pelachaud, C., Pirker, H., … Vilhjálmsson, H.
(2006). Towards a Common Framework for Multimodal Generation: The Behavior
Markup Language. In Intelligent Virtual Agents (pp. 205–217). Springer, Berlin,
Heidelberg. https://doi.org/10.1007/11821830_17
Leslie, A. M. (1984). Spatiotemporal Continuity and the Perception of Causality in Infants.
Perception, 13(3), 287–305. https://doi.org/10.1068/p130287
Leslie, A. M. (1987). Pretense and representation: The origins of “theory of mind.”
Psychological Review, 94(4), 412–426. https://doi.org/10.1037/0033-295X.94.4.412
Leslie, A. M., & Thaiss, L. (1992). Domain specificity in conceptual development:
Neuropsychological evidence from autism. Cognition, 43(3), 225–251.
https://doi.org/10.1016/0010-0277(92)90013-8
Lhommet, M., & Marsella, S. C. (2013). Gesture with Meaning. In Intelligent Virtual Agents (pp.
303–312). Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-40415-3_27
Llobera, J., Spanlang, B., Ruffini, G., & Slater, M. (2010). Proxemics with Multiple Dynamic
Characters in an Immersive Virtual Environment. ACM Trans. Appl. Percept., 8(1), 3:1–
3:12. https://doi.org/10.1145/1857893.1857896
Loomis, J. M., Blascovich, J. J., & Beall, A. C. (1999). Immersive virtual environment
technology as a basic research tool in psychology. Behavior Research Methods,
Instruments, & Computers, 31(4), 557–564. https://doi.org/10.3758/BF03200735
Loomis, J. M., da Silva, J. A., Fujita, N., & Fukusima, S. S. (1992). Visual space perception and
visually directed action. Journal of Experimental Psychology: Human Perception and
Performance, 18(4), 906–921. https://doi.org/10.1037/0096-1523.18.4.906
Malle, B. F. (2004). How the mind explains behavior: Folk explanations, meaning, and social
interaction (Vol. viii). Cambridge, MA, US: MIT Press.
Malle, B. F., Moses, L. J., & Baldwin, D. A. (2001). Intentions and Intentionality: Foundations
of Social Cognition. MIT Press.
McAuley, E., Duncan, T. E., & Russell, D. W. (1992). Measuring Causal Attributions: The
Revised Causal Dimension Scale (CDSII). Personality and Social Psychology Bulletin,
18(5), 566–573. https://doi.org/10.1177/0146167292185006
Meltzoff, A. N. (1995). Understanding the intentions of others: Re-enactment of intended acts by
18-month-old children. Developmental Psychology, 31(5), 838–850.
https://doi.org/10.1037/0012-1649.31.5.838
Merians, A. S., Clark, M., Poizner, H., Macauley, B., Rothi, L. J. G., & Heilman, K. M. (1997).
Visual-imitative dissociation apraxia. Neuropsychologia, 35(11), 1483–1490.
https://doi.org/10.1016/S0028-3932(97)00064-X
mig12compare.pdf. (n.d.). Retrieved from http://www.arishapiro.com/mig12/mig12compare.pdf
Miller, L., Cody, M. J., & McLaughlin, M. . (1994). Situations and goals as fundamental
constructs in interpersonal communication research. In Handbook of interpersonal
communication. Thousand Oaks, CA: Sage.
Movement and Mind: A Functional Imaging Study of Perception and Interpretation of Complex
Intentional Movement Patterns. (n.d.). Retrieved June 14, 2017, from
http://www.sciencedirect.com/science/article/pii/S1053811900906128
Nichols, S., & Stich, S. P. (2003). Mindreading: An integrated account of pretence, self-
awareness, and understanding other minds. New York, NY, US: Clarendon Press/Oxford
University Press. https://doi.org/10.1093/0198236107.001.0001
Niedenthal, P. M., Barsalou, L. W., Winkielman, P., Krauth-Gruber, S., & Ric, F. (2005).
Embodiment in Attitudes, Social Perception, and Emotion. Personality and Social Psychology
Review, 9(3), 184–211. https://doi.org/10.1207/s15327957pspr0903_1
Oatley, K., & Yuill, N. (1985). Perception of personal and interpersonal action in a cartoon film.
British Journal of Social Psychology, 24(2), 115–124. https://doi.org/10.1111/j.2044-
8309.1985.tb00670.x
Osgood, C. E., & Tannenbaum, P. H. (1955). The principle of congruity in the prediction of
attitude change. Psychological Review, 62(1), 42–55. https://doi.org/10.1037/h0048153
Ozonoff, S., Rogers, S. J., & Pennington, B. F. (1991). Asperger’s Syndrome: Evidence of an
Empirical Distinction from High-Functioning Autism. Journal of Child Psychology and
Psychiatry, 32(7), 1107–1122. https://doi.org/10.1111/j.1469-7610.1991.tb00352.x
Pacchierotti, E., Christensen, H. I., & Jensfelt, P. (2005). Human-robot embodied interaction in
hallway settings: a pilot user study. In ROMAN 2005. IEEE International Workshop on
Robot and Human Interactive Communication, 2005. (pp. 164–171).
https://doi.org/10.1109/ROMAN.2005.1513774
Pervin, L. A. (1978). Definitions, measurements, and classifications of stimuli, situations, and
environments. Human Ecology, 6(1), 71–105. https://doi.org/10.1007/BF00888567
Phillips, K. A., Atala, K. D., & Albertini, R. S. (1995). Case Study: Body Dysmorphic
Disorder in Adolescents. Journal of the American Academy of Child & Adolescent
Psychiatry, 34(9), 1216–1220. https://doi.org/10.1097/00004583-199509000-00020
Premack, D., & Woodruff, G. (1978). Does the chimpanzee have a theory of mind? Behavioral
and Brain Sciences, 1(4), 515–526. https://doi.org/10.1017/S0140525X00076512
Rakison, D. H., & Poulin-Dubois, D. (2001). Developmental origin of the animate–inanimate
distinction. Psychological Bulletin, 127(2), 209–228. https://doi.org/10.1037/0033-
2909.127.2.209
Rascle, O., Le Foll, D., Charrier, M., Higgins, N. C., Rees, T., & Coffee, P. (2015). Durability
and generalization of attribution-based feedback following failure: Effects on
expectations and behavioral persistence. Psychology of Sport and Exercise, 18, 68–74.
https://doi.org/10.1016/j.psychsport.2015.01.003
Rauthmann, J. F., Gallardo-Pujol, D., Guillaume, E. M., Todd, E., Nave, C. S., Sherman, R. A.,
… Funder, D. C. (2014). The Situational Eight DIAMONDS: A taxonomy of major
dimensions of situation characteristics. Journal of Personality and Social Psychology,
107(4), 677–718. https://doi.org/10.1037/a0037250
Rauthmann, J. F., & Sherman, R. A. (2016a). Measuring the Situational Eight DIAMONDS
characteristics of situations: An optimization of the RSQ-8 to the S8*. European Journal
of Psychological Assessment, 32(2), 155–164. https://doi.org/10.1027/1015-
5759/a000246
Rauthmann, J. F., & Sherman, R. A. (2016b). Situation Change: Stability and Change of
Situation Variables between and within Persons. Frontiers in Psychology, 6.
https://doi.org/10.3389/fpsyg.2015.01938
Rauthmann, J. F., & Sherman, R. A. (2016c). Ultra-brief measures for the Situational Eight
DIAMONDS domains. European Journal of Psychological Assessment, 32(2), 165–174.
https://doi.org/10.1027/1015-5759/a000245
Read, S. J., & Miller, L. C. (1993). Rapist or "Regular Guy": Explanatory Coherence in the
Construction of Mental Models of Others. Personality and Social Psychology Bulletin, 19(5),
526–540. https://doi.org/10.1177/0146167293195005
Read, S. J., & Miller, L. C. (1998). Connectionist Models of Social Reasoning and Social
Behavior. Psychology Press.
Read, S. J., Vanman, E. J., & Miller, L. C. (1997). Connectionism, Parallel Constraint
Satisfaction Processes, and Gestalt Principles: (Re)Introducing Cognitive Dynamics to
Social Psychology. Personality and Social Psychology Review, 1(1), 26–53.
https://doi.org/10.1207/s15327957pspr0101_3
Rimé, B., Boulanger, B., Laubin, P., Richir, M., & Stroobants, K. (1985). The perception of
interpersonal emotions originated by patterns of movement. Motivation and Emotion,
9(3), 241–260. https://doi.org/10.1007/BF00991830
Roemmele, M., Morgens, S.-M., Gordon, A. S., & Morency, L.-P. (2016). Recognizing Human
Actions in the Motion Trajectories of Shapes. In Proceedings of the 21st International
Conference on Intelligent User Interfaces (pp. 271–281). New York, NY, USA: ACM.
https://doi.org/10.1145/2856767.2856793
Rogers, S. J. (1999). An examination of the imitation deficit in autism. In J. Nadel & G.
Butterworth (Eds.), Imitation in infancy (pp. 254–283). New York, NY, US: Cambridge
University Press.
Roth, S., & Cohen, L. J. (1986). Approach, avoidance, and coping with stress. American
Psychologist, 41(7), 813–819. https://doi.org/10.1037/0003-066X.41.7.813
Ruhland, K., Andrist, S., Badler, J., Peters, C., Badler, N., Gleicher, M., … Mcdonnell, R.
(2014). Look me in the eyes: A survey of eye and gaze animation for virtual agents and
artificial systems. Eurographics State-of-the-Art Report, 69–91.
https://doi.org/10.2312/egst.20141036
Saygin, A. P., Chaminade, T., & Ishiguro, H. (2010). The Perception of Humans and Robots:
Uncanny Hills in Parietal Cortex. In S. Ohlsson & R. Catrambone (Eds.), Proceedings of
the 32nd Annual Conference of the Cognitive Science Society. Cognitive Science Society.
Schank, R. C., & Abelson, R. P. (1975). Scripts, Plans, and Knowledge. In Proceedings of the
4th International Joint Conference on Artificial Intelligence - Volume 1 (pp. 151–157).
San Francisco, CA, USA: Morgan Kaufmann Publishers Inc. Retrieved from
http://dl.acm.org/citation.cfm?id=1624626.1624649
Scholl, B.J., & Tremoulet, P. D. (2000). Perceptual causality and animacy. Trends in Cognitive
Sciences, 4(8), 299–309. https://doi.org/10.1016/S1364-6613(00)01506-0
Scholl, Brian J., & Leslie, A. M. (1999). Modularity, Development and “Theory of Mind.” Mind
& Language, 14(1), 131–153. https://doi.org/10.1111/1468-0017.00106
Schultz, R. T. (2005). Developmental deficits in social perception in autism: the role of the
amygdala and fusiform face area. International Journal of Developmental Neuroscience,
23(2), 125–141. https://doi.org/10.1016/j.ijdevneu.2004.12.012
Serfass, D. G., & Sherman, R. A. (2015). Situations in 140 Characters: Assessing Real-World
Situations on Twitter. PLOS ONE, 10(11), e0143051.
https://doi.org/10.1371/journal.pone.0143051
Shapiro, A. (2011). Building a Character Animation System. In Motion in Games (pp. 98–109).
Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-25090-3_9
Smith, I. M., & Bryson, S. E. (1994). Imitation and action in autism: A critical review.
Psychological Bulletin, 116(2), 259–273. https://doi.org/10.1037/0033-2909.116.2.259
Fredrickson, B. L. (2004). The broaden-and-build theory of positive emotions. Philosophical
Transactions of the Royal Society B: Biological Sciences, 359(1449), 1367–1377.
https://doi.org/10.1098/rstb.2004.1512
Sofia, R. C., Saeik, F., & Mendes, P. (2016). A Proxemics Analysis with NSense: Characterizing
Human Personal Spaces via Pervasive Sensing - UNDER SUBMISSION. Retrieved from
http://siti2.ulusofona.pt:8085/xmlui/handle/20.500.11933/625
Sofia, R., Firdose, S., Lopes, L. A., Moreira, W., & Mendes, P. (2016). NSense: A people-
centric, non-intrusive opportunistic sensing tool for contextualizing nearness. In 2016
IEEE 18th International Conference on e-Health Networking, Applications and Services
(Healthcom) (pp. 1–6). https://doi.org/10.1109/HealthCom.2016.7749490
Sousa, M., Mendes, D., Ferreira, A., Pereira, J. M., & Jorge, J. (2015). Eery Space: Facilitating
Virtual Meetings Through Remote Proxemics. In Human-Computer Interaction –
INTERACT 2015 (pp. 622–629). Springer, Cham. https://doi.org/10.1007/978-3-319-
22698-9_43
Sousa, M., Mendes, D., Medeiros, D., Ferreira, A., Pereira, J. M., & Jorge, J. (2016). Remote
Proxemics. In C. Anslow, P. Campos, & J. Jorge (Eds.), Collaboration Meets Interactive
Spaces (pp. 47–73). Springer International Publishing. https://doi.org/10.1007/978-3-319-
45853-3_4
SS07-07-004.pdf. (n.d.). Retrieved from http://www.aaai.org/Papers/Symposia/Spring/2007/SS-
07-07/SS07-07-004.pdf
Stainton, R. J. (2000). Perspectives in the Philosophy of Language: A Concise Anthology.
Broadview Press.
Sullivan, K., Zaitchik, D., & Tager-Flusberg, H. (1994). Preschoolers can attribute second-order
beliefs. Developmental Psychology, 30(3), 395–402. https://doi.org/10.1037/0012-
1649.30.3.395
Taking the intentional stance at 12 months of age. (n.d.). Retrieved June 20, 2017, from
http://www.sciencedirect.com/science/article/pii/001002779500661H
The Perception of Intention - ProQuest. (n.d.). Retrieved June 20, 2017, from
http://search.proquest.com/docview/213538752?pq-origsite=gscholar
Thiebaux, M., Marsella, S., Marshall, A. N., & Kallmann, M. (2008). SmartBody: Behavior
Realization for Embodied Conversational Agents. In Proceedings of the 7th International
Joint Conference on Autonomous Agents and Multiagent Systems - Volume 1 (pp. 151–
158). Richland, SC: International Foundation for Autonomous Agents and Multiagent
Systems. Retrieved from http://dl.acm.org/citation.cfm?id=1402383.1402409
Tiedens, L. Z., & Fragale, A. R. (2003). Power moves: complementarity in dominant and
submissive nonverbal behavior. Journal of personality and social psychology, 84(3), 558.
Tremoulet, P. D., & Feldman, J. (2000). Perception of Animacy from the Motion of a Single
Object. Perception, 29(8), 943–951. https://doi.org/10.1068/p3101
Vinciarelli, A., Salamin, H., Polychroniou, A., Mohammadi, G., & Origlia, A. (2012). From
nonverbal cues to perception: personality and social attractiveness. Cognitive
Behavioural Systems, 60-72.
Walk, R. D., & Homan, C. P. (1984). Emotion and dance in dynamic light displays. Bulletin of
the Psychonomic Society, 22(5), 437–440. https://doi.org/10.3758/BF03333870
Walters, M. L., Dautenhahn, K., Boekhorst, R. te, Koay, K. L., Kaouri, C., Woods, S., … Werry,
I. (2005). The influence of subjects’ personality traits on personal spatial zones in a
human-robot interaction experiment. In ROMAN 2005. IEEE International Workshop on
Robot and Human Interactive Communication, 2005. (pp. 347–352).
https://doi.org/10.1109/ROMAN.2005.1513803
Watson, J. S. (1972). Smiling, cooing, and "the game." Merrill-Palmer Quarterly of Behavior and
Development, 18(4), 323–339.
Watson, J. S., & Ramey, C. T. (1972). Reactions to response-contingent stimulation in early
infancy. Merrill-Palmer Quarterly of Behavior and Development, 18(3), 219–227.
Wilcox, L. M., Allison, R. S., Elfassy, S., & Grelik, C. (2006). Personal Space in Virtual Reality.
ACM Trans. Appl. Percept., 3(4), 412–428. https://doi.org/10.1145/1190036.1190041
Williams, J. H. G., Whiten, A., Suddendorf, T., & Perrett, D. I. (2001). Imitation, mirror neurons
and autism. Neuroscience & Biobehavioral Reviews, 25(4), 287–295.
https://doi.org/10.1016/S0149-7634(01)00014-8
Wimmer, H., & Perner, J. (1983). Beliefs about beliefs: Representation and constraining function
of wrong beliefs in young children’s understanding of deception. Cognition, 13(1), 103–
128. https://doi.org/10.1016/0010-0277(83)90004-5
Wyer, R. S. (2014). Knowledge and Memory: the Real Story: Advances in Social Cognition.
Psychology Press.
Appendix
Study 1 Appendix
Table 1
Text-based stimuli depicting movement combinations
Person far away moves towards you slowly with open arms.
Person far away moves towards you quickly with open arms.
Person far away moves towards you slowly with arms crossed.
Person far away moves towards you quickly with arms crossed.
Person far away moves away from you slowly with open arms.
Person far away moves away from you quickly with open arms.
Person far away moves away from you slowly with arms crossed.
Person far away moves away from you quickly with arms crossed.
Person nearby moves towards you slowly with open arms.
Person nearby moves towards you quickly with open arms.
Person nearby moves towards you slowly with arms crossed.
Person nearby moves towards you quickly with arms crossed.
Person nearby moves away from you slowly with open arms.
Person nearby moves away from you quickly with open arms.
Person nearby moves away from you slowly with arms crossed.
Person nearby moves away from you quickly with arms crossed.
Participants were asked to respond to each text-based stimulus by rating it on the DIAMONDS dimensions.
Table 10
2-way mean scores : Distance * Direction (overall multivariate is significant)
Measure Distance Direction Mean Std.
Error
Duty* Far Towards 2.460 .055
Away 2.120 .059
Near Towards 2.443 .056
Away 2.217 .057
Intellect Far Towards 2.308 .053
Away 1.768 .055
Near Towards 2.393 .058
Away 1.849 .055
Non-Adversity* Far Towards 4.283 .048
Away 4.234 .059
Near Towards 4.319 .048
Away 4.007 .061
Romance* Far Towards 2.641 .050
Away 1.624 .056
Near Towards 2.875 .049
Away 1.697 .056
Positivity Far Towards 2.971 .044
Away 1.771 .053
Near Towards 3.120 .043
Away 1.836 .053
Non-Negativity* Far Towards 4.198 .045
Away 3.783 .060
Near Towards 4.217 .046
Away 3.511 .059
Non-Deception* Far Towards 4.818 .050
Away 4.469 .063
Near Towards 4.849 .052
Away 4.307 .062
Sociality* Far Towards 3.719 .058
Away 2.414 .062
Near Towards 3.893 .054
Away 2.760 .064
Table 11
2-way mean scores: Distance * Gesture
Measure Distance Gesture Mean Std.
Error
Duty Far Open 1.944 .056
Closed 2.636 .061
Near Open 1.989 .055
Closed 2.672 .062
Intellect Far Open 2.307 .057
Closed 1.770 .053
Near Open 2.359 .060
Closed 1.883 .055
Non-Adversity Far Open 5.130 .054
Closed 3.387 .060
Near Open 5.065 .056
Closed 3.261 .063
Romance Far Open 2.906 .056
Closed 1.359 .054
Near Open 3.118 .057
Closed 1.454 .055
Positivity Far Open 3.372 .052
Closed 1.370 .052
Near Open 3.520 .053
Closed 1.436 .051
Non-Negativity Far Open 5.002 .053
Closed 2.979 .063
Near Open 4.932 .054
Closed 2.796 .060
Non-Deception Far Open 5.249 .053
Closed 4.037 .063
Near Open 5.207 .055
Closed 3.950 .064
Sociality Far Open 3.597 .055
Closed 2.536 .063
Near Open 3.884 .057
Closed 2.770 .065
Table 12
2-Way Mean Scores: Direction * Speed
Measure Direction Speed Mean Std.
Error
Duty Towards Slow 2.438 .057
Fast 2.465 .057
Away Slow 2.143 .059
Fast 2.195 .061
Intellect Towards Slow 2.457 .057
Fast 2.244 .056
Away Slow 1.922 .059
Fast 1.696 .054
Non-Adversity Towards Slow 4.453 .051
Fast 4.148 .049
Away Slow 4.308 .060
Fast 3.933 .061
Romance Towards Slow 2.682 .053
Fast 2.835 .050
Away Slow 1.690 .056
Fast 1.631 .056
Positivity Towards Slow 2.986 .048
Fast 3.104 .044
Away Slow 1.857 .053
Fast 1.751 .054
Non-Negativity Towards Slow 4.291 .050
Fast 4.124 .048
Away Slow 3.813 .060
Fast 3.480 .060
Non-Deception Towards Slow 4.822 .052
Fast 4.845 .051
Away Slow 4.501 .062
Fast 4.275 .062
Sociality Towards Slow 3.715 .059
Fast 3.897 .055
Away Slow 2.597 .061
Fast 2.577 .063
Table 13
2-Way Mean Scores: Direction * Gesture
Measure Direction Gesture Mean Std.
Error
Duty Towards Open 1.920 .059
Closed 2.984 .064
Away Open 2.013 .058
Closed 2.324 .063
Intellect Towards Open 2.696 .064
Closed 2.005 .057
Away Open 1.969 .059
Closed 1.649 .054
Adversity Towards Open 5.474 .056
Closed 3.127 .065
Away Open 4.721 .064
Closed 3.521 .065
Romance Towards Open 4.025 .065
Closed 1.491 .055
Away Open 1.999 .064
Closed 1.322 .056
Positivity Towards Open 4.529 .060
Closed 1.561 .055
Away Open 2.363 .065
Closed 1.244 .050
Negativity Towards Open 5.503 .054
Closed 2.911 .065
Away Open 4.430 .066
Closed 2.863 .064
Deception Towards Open 5.566 .052
Closed 4.101 .063
Away Open 4.890 .063
Closed 3.886 .067
Sociality Towards Open 4.629 .057
Closed 2.983 .066
Away Open 2.851 .067
Closed 2.323 .067
Table 14
2-Way Mean Scores: Gesture * Speed
Measure Gesture Speed Mean Std. Error
Duty Open Slow 1.971 .056
Fast 1.962 .058
Closed Slow 2.609 .062
Fast 2.698 .064
Intellect Open Slow 2.379 .060
Fast 2.287 .059
Closed Slow 2.000 .058
Fast 1.653 .054
Non-Adversity Open Slow 5.148 .054
Fast 5.047 .058
Closed Slow 3.614 .064
Fast 3.034 .063
Romance Open Slow 2.901 .059
Fast 3.123 .056
Closed Slow 1.470 .055
Fast 1.343 .056
Positivity Open Slow 3.331 .056
Fast 3.561 .055
Closed Slow 1.512 .052
Fast 1.294 .052
Non-Negativity Open Slow 4.971 .054
Fast 4.963 .057
Closed Slow 3.133 .064
Fast 2.641 .063
Non-Deception Open Slow 5.213 .054
Fast 5.244 .055
Closed Slow 4.111 .063
Fast 3.876 .065
Sociality Open Slow 3.636 .057
Fast 3.845 .056
Closed Slow 2.676 .063
Fast 2.629 .064
2-Way Mean Scores: Distance * Speed
Measure Distance Speed Mean Std. Error
Duty Far Slow 2.238 .056
Fast 2.316 .056
Near Slow 2.303 .055
Fast 2.316 .056
Intellect Far Slow 2.135 .054
Fast 1.905 .051
Near Slow 2.215 .057
Fast 2.004 .053
Adversity Far Slow 4.424 .052
Fast 4.102 .051
Near Slow 4.363 .053
Fast 3.982 .052
Romance Far Slow 2.083 .051
Fast 2.155 .048
Near Slow 2.266 .050
Fast 2.291 .049
Positivity Far Slow 2.368 .047
Fast 2.337 .043
Near Slow 2.443 .046
Fast 2.494 .043
Negativity Far Slow 4.140 .052
Fast 3.845 .050
Near Slow 3.979 .050
Fast 3.767 .049
Deception Far Slow 4.693 .054
Fast 4.600 .054
Near Slow 4.646 .054
Fast 4.519 .055
Sociality Far Slow 3.008 .056
Fast 3.110 .053
Near Slow 3.298 .057
Fast 3.349 .055
Table 15
3-Way Mean Scores: Distance * Direction * Gesture
Measure Distance Direction Gesture Mean Std. Error
Duty Far Towards Open 1.950 .063
Closed 2.970 .069
Away Open 1.938 .065
Closed 2.302 .070
Near Towards Open 1.889 .065
Closed 2.997 .071
Away Open 2.088 .063
Closed 2.346 .070
Intellect* Far Towards Open 2.654 .067
Closed 1.963 .060
Away Open 1.959 .064
Closed 1.578 .058
Near Towards Open 2.738 .074
Closed 2.048 .064
Away Open 1.980 .064
Closed 1.719 .059
Non-Adversity Far Towards Open 5.439 .061
Closed 3.127 .072
Away Open 4.821 .070
Closed 3.647 .073
Near Towards Open 5.510 .062
Closed 3.127 .072
Away Open 4.620 .073
Closed 3.394 .075
Romance* Far Towards Open 3.857 .073
Closed 1.426 .060
Away Open 1.955 .070
Closed 1.293 .059
Near Towards Open 4.194 .071
Closed 1.557 .060
Away Open 2.043 .071
Closed 1.351 .060
Positivity Far Towards Open 4.428 .068
Closed 1.514 .059
Away Open 2.316 .071
Closed 1.227 .054
Near Towards Open 4.631 .066
Closed 1.609 .059
Away Open 2.410 .073
Closed 1.262 .054
Non-Negativity* Far Towards Open 5.479 .059
Closed 2.917 .072
Away Open 4.525 .072
Closed 3.040 .074
Near Towards Open 5.528 .059
Closed 2.905 .070
Away Open 4.335 .075
Closed 2.686 .069
Non-Deception* Far Towards Open 5.539 .056
Closed 4.097 .070
Away Open 4.960 .070
Closed 3.977 .075
Near Towards Open 5.594 .057
Closed 4.105 .071
Away Open 4.820 .070
Closed 3.794 .075
Sociality* Far Towards Open 4.518 .066
Closed 2.920 .073
Away Open 2.676 .075
Closed 2.153 .071
Near Towards Open 4.740 .063
Closed 3.046 .072
Away Open 3.027 .076
Closed 2.494 .076
Table 16
3-Way: Distance * Direction * Speed
Measure Distance Direction Speed Mean Std. Error
Duty Far Towards Slow 2.431 .062
Fast 2.489 .063
Away Slow 2.073 .066
Fast 2.167 .068
Near Towards Slow 2.444 .063
Fast 2.441 .064
Away Slow 2.212 .064
Fast 2.223 .067
Intellect Far Towards Slow 2.431 .062
Fast 2.186 .059
Away Slow 1.879 .063
Fast 1.658 .060
Near Towards Slow 2.484 .066
Fast 2.302 .064
Away Slow 1.964 .065
Fast 1.735 .058
Adversity Far Towards Slow 4.432 .060
Fast 4.134 .055
Away Slow 4.399 .068
Fast 4.069 .069
Near Towards Slow 4.475 .057
Fast 4.162 .059
Away Slow 4.217 .068
Fast 3.797 .071
Romance Far Towards Slow 2.571 .062
Fast 2.712 .056
Away Slow 1.622 .060
Fast 1.625 .063
Near Towards Slow 2.792 .058
Fast 2.959 .057
Away Slow 1.757 .063
Fast 1.637 .062
Positivity Far Towards Slow 2.926 .056
Fast 3.015 .051
Away Slow 1.850 .059
Fast 1.692 .060
Near Towards Slow 3.046 .054
Fast 3.194 .051
Away Slow 1.864 .061
Fast 1.809 .059
Negativity Far Towards Slow 4.294 .056
Fast 4.102 .053
Away Slow 3.979 .069
Fast 3.586 .071
Near Towards Slow 4.287 .056
Fast 4.146 .056
Away Slow 3.647 .068
Fast 3.374 .068
Deception Far Towards Slow 4.815 .058
Fast 4.821 .057
Away Slow 4.554 .070
Fast 4.384 .071
Near Towards Slow 4.830 .060
Fast 4.869 .058
Away Slow 4.448 .069
Fast 4.166 .071
Sociality Far Towards Slow 3.600 .069
Fast 3.838 .062
Away Slow 2.429 .068
Fast 2.399 .070
Near Towards Slow 3.830 .063
Fast 3.956 .061
Away Slow 2.765 .069
Fast 2.756 .073
Table 17
3-Way: Distance * Gesture * Speed
Measure Distance Gesture Speed Mean Std. Error
Duty Far Open Slow 1.962 .063
Fast 1.925 .063
Closed Slow 2.542 .070
Fast 2.731 .072
Near Open Slow 1.979 .062
Fast 1.998 .064
Closed Slow 2.677 .066
Fast 2.666 .071
Intellect Far Open Slow 2.349 .064
Fast 2.264 .063
Closed Slow 1.961 .062
Fast 1.580 .057
Near Open Slow 2.409 .068
Fast 2.309 .066
Closed Slow 2.040 .065
Fast 1.727 .061
Adversity Far Open Slow 5.134 .061
Fast 5.126 .063
Closed Slow 3.697 .071
Fast 3.077 .071
Near Open Slow 5.161 .060
Fast 4.968 .067
Closed Slow 3.531 .073
Fast 2.991 .071
Romance Far Open Slow 2.774 .066
Fast 3.037 .063
Closed Slow 1.419 .058
Fast 1.299 .060
Near Open Slow 3.028 .068
Fast 3.209 .064
Closed Slow 1.521 .060
Fast 1.386 .060
Positivity Far Open Slow 3.288 .064
Fast 3.455 .062
Closed Slow 1.488 .057
Fast 1.252 .056
Near Open Slow 3.375 .064
Fast 3.666 .062
Closed Slow 1.535 .056
Fast 1.336 .056
Negativity Far Open Slow 5.017 .061
Fast 4.987 .064
Closed Slow 3.257 .073
Fast 2.701 .072
Near Open Slow 4.925 .061
Fast 4.938 .064
Closed Slow 3.010 .068
Fast 2.581 .070
Deception Far Open Slow 5.218 .059
Fast 5.281 .061
Closed Slow 4.151 .070
Fast 3.924 .072
Near Open Slow 5.207 .060
Fast 5.207 .062
Closed Slow 4.071 .069
Fast 3.828 .074
Sociality Far Open Slow 3.476 .064
Fast 3.718 .062
Closed Slow 2.554 .070
Fast 2.519 .069
Near Open Slow 3.796 .067
Fast 3.971 .064
Closed Slow 2.799 .070
Fast 2.740 .073
Table 18
3-Way: Direction * Gesture * Speed
Measure Distance Gesture Speed Mean Std. Error
Duty Far Open Slow 1.943 .061
Fast 1.919 .062
Closed Slow 2.533 .069
Fast 2.713 .070
Near Open Slow 1.958 .060
Fast 1.988 .062
Closed Slow 2.649 .065
Fast 2.644 .069
Intellect Far Open Slow 2.329 .062
Fast 2.239 .062
Closed Slow 1.942 .061
Fast 1.571 .055
Near Open Slow 2.395 .067
Fast 2.295 .065
Closed Slow 2.036 .063
Fast 1.713 .059
Adversity Far Open Slow 5.129 .060
Fast 5.121 .061
Closed Slow 3.719 .071
Fast 3.083 .071
Near Open Slow 5.172 .059
Fast 4.977 .065
Closed Slow 3.553 .072
Fast 2.988 .069
Romance Far Open Slow 2.763 .064
Fast 3.021 .061
Closed Slow 1.403 .056
Fast 1.288 .058
Near Open Slow 3.020 .066
Fast 3.191 .063
Closed Slow 1.513 .059
Fast 1.392 .059
Positivity Far Open Slow 3.271 .063
Fast 3.440 .061
Closed Slow 1.465 .055
Fast 1.233 .054
Near Open Slow 3.368 .062
Fast 3.662 .061
Closed Slow 1.518 .054
Fast 1.326 .054
Negativity Far Open Slow 5.013 .060
Fast 4.981 .063
Closed Slow 3.267 .072
Fast 2.710 .070
Near Open Slow 4.932 .059
Fast 4.948 .062
Closed Slow 3.026 .067
Fast 2.585 .068
Deception Far Open Slow 5.223 .058
Fast 5.277 .060
Closed Slow 4.162 .069
Fast 3.923 .071
Near Open Slow 5.218 .058
Fast 5.216 .060
Closed Slow 4.074 .068
Fast 3.821 .073
Sociality Far Open Slow 3.457 .063
Fast 3.701 .062
Closed Slow 2.559 .069
Fast 2.520 .068
Near Open Slow 3.789 .066
Fast 3.964 .062
Closed Slow 2.808 .069
Fast 2.734 .071
Table 19
4-Way Mean Scores: Complete Model
Measure Distance Direction Gesture Speed Mean
Duty Far Towards Open Slow 1.978
Fast 1.922
Closed Slow 2.884
Fast 3.057
Away Open Slow 1.946
Fast 1.929
Closed Slow 2.2
Fast 2.405
Near Towards Open Slow 1.908
Fast 1.87
Closed Slow 2.98
Fast 3.013
Away Open Slow 2.05
Fast 2.127
Closed Slow 2.374
Fast 2.319
Intellect Far Towards Open Slow 2.665
Fast 2.644
Closed Slow 2.196
Fast 1.729
Away Open Slow 2.033
Fast 1.885
Closed Slow 1.726
Fast 1.43
Near Towards Open Slow 2.756
Fast 2.721
Closed Slow 2.213
Fast 1.883
Away Open Slow 2.062
Fast 1.897
Closed Slow 1.867
Fast 1.572
Non-Adversity Far Towards Open Slow 5.399
Fast 5.478
Closed Slow 3.464
Fast 2.79
Away Open Slow 4.869
Fast 4.774
Closed Slow 3.93
Fast 3.364
Near Towards Open Slow 5.564
Fast 5.455
Closed Slow 3.386
Fast 2.869
Away Open Slow 4.758
Fast 4.481
Closed Slow 3.676
Fast 3.112
Romance Far Towards Open Slow 3.598
Fast 4.116
Closed Slow 1.544
Fast 1.307
Away Open Slow 1.951
Fast 1.959
Closed Slow 1.294
Fast 1.291
Near Towards Open Slow 3.977
Fast 4.411
Closed Slow 1.608
Fast 1.507
Away Open Slow 2.079
Fast 2.007
Closed Slow 1.435
Fast 1.266
Positivity Far Towards Open Slow 4.19
Fast 4.665
Closed Slow 1.662
Fast 1.366
Away Open Slow 2.386
Fast 2.246
Closed Slow 1.314
Fast 1.139
Near Towards Open Slow 4.367
Fast 4.894
Closed Slow 1.725
Fast 1.493
Away Open Slow 2.382
Fast 2.437
Closed Slow 1.345
Fast 1.18
Negativity Far Towards Open Slow 5.364
Fast 5.593
Closed Slow 3.223
Fast 2.611
Away Open Slow 4.669
Fast 4.382
Closed Slow 3.29
Fast 2.791
Near Towards Open Slow 5.461
Fast 5.595
Closed Slow 3.114
Fast 2.696
Away Open Slow 4.389
Fast 4.282
Closed Slow 2.906
Fast 2.467
Deception Far Towards Open Slow 5.428
Fast 5.649
Closed Slow 4.202
Fast 3.992
Away Open Slow 5.009
Fast 4.912
Closed Slow 4.099
Fast 3.856
Near Towards Open Slow 5.473
Fast 5.714
Closed Slow 4.186
Fast 4.024
Away Open Slow 4.941
Fast 4.7
Closed Slow 3.955
Fast 3.633
Sociality Far Towards Open Slow 4.265
Fast 4.771
Closed Slow 2.934
Fast 2.905
Away Open Slow 2.686
Fast 2.666
Closed Slow 2.173
Fast 2.132
Near Towards Open Slow 4.542
Fast 4.938
Closed Slow 3.118
Fast 2.974
Away Open Slow 3.05
Fast 3.005
Closed Slow 2.48
Fast 2.507
Study 2 Appendix
Figure 1
6-way Mean Scores of DIAMONDS ratings based on Person 1 and Person 2 movement at a Far Distance
Measure P1Direction P1Gesture P1Speed P2Direction P2Gesture P2Speed Mean Std. Error
Duty Towards Open Slow
Towards
Open
Slow
2.02
4
0.16
2
Fast
1.84
8
0.15
8
Closed
Slow
2.13
4
0.14
8
Fast
2.26
2
0.16
5
Away Open Slow
2.22
8
0.16
4
Fast 2.03
0.15
3
Closed
Slow
2.05
5
0.14
8
Fast
2.15
1
0.15
3
Fast
Towards
Open
Slow
2.14
5
0.14
5
Fast
1.93
4
0.16
3
Closed
Slow
2.40
1
0.15
2
Fast
2.17
3
0.14
7
Away
Open
Slow
2.32
3
0.15
2
Fast
2.24
8
0.16
1
Closed
Slow
2.29
7
0.15
Fast
2.27
5
0.16
Closed
Slow
Towards
Open
Slow
2.41
3
0.14
9
Fast
2.54
3
0.15
9
Closed
Slow
2.96
9
0.16
7
Fast
3.03
5
0.16
7
Away
Open
Slow
2.47
4
0.16
Fast
2.57
5
0.15
2
Closed
Slow
2.60
5
0.17
4
Fast
2.62
1
0.17
6
Fast Towards
Open
Slow
2.46
4
0.15
6
Fast
2.52
3
0.15
4
Closed Slow
2.79
6
0.16
Fast
2.80
2
0.15
9
Away
Open
Slow
2.41
6
0.15
3
Fast
2.45
5
0.16
1
Closed
Slow
2.70
9
0.15
9
Fast 2.46 0.17
Away
Open
Slow
Towards
Open
Slow
2.17
8
0.14
4
Fast 2.32
0.14
9
Closed
Slow
2.35
5
0.14
4
Fast
2.49
7
0.16
1
Away
Open
Slow
2.12
1
0.14
9
Fast
2.25
2
0.15
7
Closed
Slow
2.13
4
0.15
3
Fast
2.04
4
0.14
8
Fast
Towards
Open
Slow
2.20
9
0.14
2
Fast
2.22
3
0.14
Closed
Slow
2.23
8
0.14
2
Fast 2.44
0.15
3
Away
Open
Slow
2.06
8
0.13
9
Fast
2.15
7
0.15
2
Closed
Slow
2.19
5
0.14
6
Fast
2.12
7
0.14
7
Closed Slow Towards Open
Slow
2.26
3
0.14
7
Fast
2.29
2
0.15
Closed
Slow
2.49
1
0.15
7
Fast
2.54
8
0.15
7
Away
Open
Slow
2.30
4
0.16
Fast
2.28
8
0.14
8
Closed
Slow
2.29
5
0.15
4
Fast
2.25
4
0.15
5
Fast
Towards
Open
Slow
2.17
8
0.13
8
Fast
2.32
1
0.14
7
Closed
Slow
2.59
9
0.15
6
Fast
2.49
5
0.15
4
Away
Open
Slow
2.25
1
0.15
4
Fast
2.26
4
0.14
9
Closed
Slow 2.38
0.15
8
Fast
2.42
8
0.16
Intellect Towards Open Slow
Towards
Open
Slow
2.66
3
0.16
1
Fast
2.52
8
0.15
6
Closed
Slow
2.22
7
0.14
3
Fast
2.30
2
0.14
5
Away
Open
Slow 2.17
0.15
1
Fast
1.89
7
0.13
3
Closed
Slow
1.91
7
0.13
4
Fast
1.92
7
0.14
1
Fast
Towards
Open
Slow
2.79
8
0.15
2
Fast 2.84
0.16
5
Closed
Slow 2.41
0.14
7
Fast
2.24
5
0.14
2
Away
Open
Slow
2.24
4
0.14
1
Fast
2.32
3
0.15
3
Closed
Slow
1.98
5
0.12
9
Fast
1.73
3
0.12
9
Closed
Slow
Towards
Open
Slow
2.33
9
0.13
5
Fast
2.44
7
0.14
Closed
Slow
2.26
6
0.13
8
Fast
2.21
3
0.14
1
Away
Open
Slow
2.27
3
0.13
1
Fast
2.04
1
0.13
3
Closed
Slow
2.00
1
0.13
4
Fast
1.90
6
0.14
2
Fast
Towards
Open
Slow
2.55
7
0.14
7
Fast
2.44
9
0.14
1
Closed
Slow
2.09
4
0.13
6
Fast
2.27
7
0.14
2
Away Open
Slow 2.12
0.13
6
Fast
2.23
6
0.14
4
Closed
Slow 2.25
0.14
4
Fast
2.04
6
0.14
7
Away
Open
Slow
Towards
Open
Slow
2.58
1
0.14
9
Fast
2.52
3
0.14
1
Closed
Slow
2.17
1
0.12
8
Fast
2.27
6
0.14
1
Away
Open
Slow
2.12
7
0.13
7
Fast 2.13 0.14
Closed
Slow
2.11
6
0.13
9
Fast
2.17
3
0.13
9
Fast
Towards
Open
Slow
2.24
3
0.13
4
Fast 2.35
0.14
7
Closed
Slow
2.18
4
0.13
5
Fast
2.12
7
0.13
8
Away
Open
Slow
2.26
5
0.14
3
Fast
2.28
4
0.15
1
Closed
Slow
2.07
4
0.14
5
Fast
2.01
1
0.13
4
Closed Slow
Towards
Open
Slow
2.37
4
0.15
Fast
2.22
5
0.13
3
Closed
Slow
2.11
7
0.14
2
Fast 2.16 0.14
Away Open Slow
2.20
9
0.15
1
Fast
2.11
4
0.13
1
Closed
Slow
1.96
5
0.14
2
Fast
1.93
7
0.14
Fast
Towards
Open
Slow
2.32
8
0.14
5
Fast
2.37
4
0.14
8
Closed
Slow
2.31
9
0.13
5
Fast 2.19
0.14
1
Away
Open
Slow
2.21
4
0.15
1
Fast
2.17
1
0.12
9
Closed
Slow
2.00
9
0.14
Fast
2.11
4
0.14
7
Adversit
y
Towards Open
Slow
Towards
Open
Slow
1.41
8
0.12
7
Fast
1.44
1
0.15
1
Closed
Slow
2.85
8
0.15
6
Fast 2.74
0.16
6
Away
Open
Slow
2.77
1
0.17
5
Fast
3.06
8
0.16
7
Closed
Slow
3.70
7
0.16
5
Fast
4.15
8
0.16
7
Fast Towards
Open
Slow
1.90
9
0.14
3
Fast
1.45
7
0.13
2
Closed Slow
2.89
8
0.15
5
Fast
2.85
9
0.16
Away
Open
Slow
2.92
5
0.15
6
Fast
3.08
3
0.17
7
Closed
Slow
3.60
7
0.14
2
Fast
4.15
3
0.15
5
Closed
Slow
Towards
Open
Slow
3.01
1
0.15
Fast
3.13
1
0.15
3
Closed
Slow
3.64
4
0.16
2
Fast
3.57
3
0.16
5
Away
Open
Slow
3.46
1
0.15
8
Fast
3.63
1
0.15
4
Closed
Slow
3.69
9
0.15
4
Fast
4.01
5
0.16
Fast
Towards
Open
Slow
2.93
3
0.14
6
Fast
3.09
2
0.14
7
Closed
Slow
3.50
3
0.15
3
Fast
3.40
2
0.15
7
Away
Open
Slow
3.38
6
0.14
7
Fast
3.39
2
0.15
6
Closed
Slow
3.53
8
0.14
3
Fast
3.81
8
0.15
9
Away Open Slow Towards Open Slow 2.41
0.14
8
Fast
2.41
1
0.15
4
Closed
Slow 3.11
0.15
2
Fast
3.08
7
0.14
6
Away
Open
Slow 2.56
0.17
5
Fast
2.50
3
0.15
8
Closed
Slow
3.00
6
0.16
5
Fast
3.22
8
0.15
9
Fast
Towards
Open
Slow
2.53
1
0.15
7
Fast
3.08
7
0.16
4
Closed
Slow
3.20
2
0.14
6
Fast
3.32
5
0.15
2
Away
Open
Slow
2.57
7
0.16
3
Fast
2.57
8
0.16
4
Closed
Slow
3.08
2
0.15
9
Fast
2.97
1
0.16
5
Closed Slow
Towards
Open
Slow
3.00
6
0.14
6
Fast
3.09
2
0.14
8
Closed
Slow
3.12
9
0.14
6
Fast
3.41
5
0.16
2
Away
Open
Slow 2.81
0.16
3
Fast
3.16
2
0.14
3
Closed Slow 3.03
0.16
2
Fast
3.14
9
0.16
9
Fast
Towards
Open
Slow
3.24
2
0.14
9
Fast
3.09
7
0.14
5
Closed
Slow
3.29
1
0.14
7
Fast
3.37
1
0.15
1
Away
Open
Slow 3.05
0.17
5
Fast
2.82
2
0.15
7
Closed
Slow
3.21
5
0.16
8
Fast
3.33
5
0.17
4
Romance Towards Open
Slow
Towards
Open
Slow
4.23
4
0.15
2
Fast
4.75
2
0.15
8
Closed
Slow 2.73
0.16
6
Fast
3.03
9
0.17
3
Away
Open
Slow
2.42
5
0.16
1
Fast
2.31
8
0.14
9
Closed
Slow
2.20
5
0.15
3
Fast
2.05
1
0.15
2
Fast
Towards
Open
Slow
3.75
5
0.15
8
Fast
4.55
8
0.17
2
Closed
Slow
2.53
5
0.16
1
Fast
2.81
6
0.15
1
Away Open Slow
2.42
2
0.14
4
Fast
2.50
8
0.16
6
Closed
Slow 2.38
0.14
8
Fast
2.15
6
0.14
7
Closed
Slow
Towards
Open
Slow
2.69
6
0.15
Fast
2.72
4
0.15
5
Closed
Slow
2.03
1
0.14
2
Fast
2.05
7
0.15
4
Away
Open
Slow
2.27
2
0.15
7
Fast
2.15
6
0.15
Closed
Slow
1.89
6
0.14
3
Fast
1.74
1
0.13
4
Fast
Towards
Open
Slow
2.80
5
0.16
3
Fast
2.69
2
0.15
2
Closed
Slow
2.17
6
0.15
8
Fast
2.21
5
0.15
2
Away
Open
Slow
2.20
4
0.14
9
Fast
2.23
5
0.16
Closed
Slow
2.04
7
0.14
3
Fast
1.97
8
0.15
5
Away Open Slow Towards
Open
Slow
2.73
2
0.15
1
Fast
3.05
3
0.16
2
Closed Slow
2.23
2
0.14
3
Fast
2.26
1
0.14
6
Away
Open
Slow
2.21
8
0.15
6
Fast
2.35
8
0.15
7
Closed
Slow
2.16
6
0.14
8
Fast 2.07
0.15
1
Fast
Towards
Open
Slow
2.59
1
0.14
8
Fast
2.65
4
0.15
6
Closed
Slow
2.20
5
0.15
1
Fast
2.25
4
0.14
4
Away
Open
Slow
2.32
6
0.15
5
Fast
2.38
5
0.17
Closed
Slow
2.13
7
0.15
Fast
2.02
4
0.14
9
Closed
Slow
Towards
Open
Slow
2.70
7
0.15
9
Fast
2.62
9
0.15
9
Closed
Slow
2.05
2
0.14
5
Fast
2.10
2
0.14
7
Away
Open
Slow
2.21
8
0.15
9
Fast
2.18
7
0.13
8
Closed
Slow
1.91
7
0.14
2
Fast
2.02
3
0.14
6
Fast Towards Open Slow
2.61
8
0.14
7
Fast
2.59
1
0.15
3
Closed
Slow
2.24
6
0.14
2
Fast
2.21
9
0.14
9
Away
Open
Slow
2.33
7
0.16
Fast
2.26
5
0.14
9
Closed
Slow
2.03
1
0.14
7
Fast
2.00
5
0.14
8
Positivity Towards Open
Slow
Towards
Open
Slow
4.71
2
0.14
Fast
4.95
8
0.16
2
Closed
Slow
2.54
4
0.15
9
Fast
2.83
7
0.16
1
Away
Open
Slow 2.58
0.16
4
Fast
2.16
3
0.14
8
Closed
Slow
1.92
1
0.13
8
Fast 1.67
0.13
2
Fast
Towards
Open
Slow
4.28
4
0.13
4
Fast 4.89
0.16
2
Closed
Slow
2.52
9
0.12
9
Fast
2.77
9
0.14
5
Away
Open
Slow
2.50
4
0.15
1
Fast
2.59
9
0.17
3
Closed Slow
1.94
8
0.13
7
Fast
1.84
3
0.13
6
Closed
Slow
Towards
Open
Slow
2.56
2
0.14
1
Fast
2.93
1
0.14
7
Closed
Slow
2.08
1
0.14
4
Fast
2.27
2
0.15
4
Away
Open
Slow
2.11
1
0.14
5
Fast
2.08
4
0.13
8
Closed
Slow
2.00
2
0.14
7
Fast
1.70
4
0.13
5
Fast
Towards
Open
Slow 2.86
0.15
1
Fast
2.79
8
0.14
Closed
Slow
2.10
3
0.15
Fast
2.24
4
0.14
4
Away
Open
Slow
2.07
4
0.13
4
Fast
2.28
1
0.15
1
Closed
Slow
2.00
8
0.13
4
Fast
1.80
6
0.15
1
Away Open Slow
Towards
Open
Slow
2.92
3
0.14
9
Fast
3.19
9
0.15
9
Closed
Slow
2.47
5
0.14
9
Fast
2.36
4
0.14
5
Away Open Slow
2.50
3
0.15
Fast
2.42
3
0.15
3
Closed
Slow
2.10
1
0.14
6
Fast
2.14
5
0.14
8
Fast
Towards
Open
Slow
2.69
2
0.15
1
Fast
2.76
8
0.15
5
Closed
Slow
2.17
6
0.14
3
Fast
2.12
1
0.13
4
Away
Open
Slow
2.44
7
0.15
Fast
2.49
2
0.16
8
Closed
Slow
2.15
9
0.14
7
Fast 1.99
0.13
9
Closed
Slow
Towards
Open
Slow
2.57
9
0.14
6
Fast
2.37
6
0.13
9
Closed
Slow
2.05
1
0.13
6
Fast
2.10
8
0.14
7
Away
Open
Slow
2.23
5
0.16
Fast
2.21
2
0.13
9
Closed
Slow
1.98
2
0.14
1
Fast
1.97
1
0.14
4
Fast Towards
Open
Slow
2.37
6
0.14
3
Fast
2.43
2
0.14
2
Closed Slow
2.30
1
0.14
5
Fast
2.24
2
0.14
9
Away
Open
Slow
2.23
9
0.15
2
Fast
2.25
1
0.14
4
Closed
Slow
2.05
3
0.14
9
Fast
1.97
4
0.14
6
Negativit
y
Towards
Open
Slow
Towards
Open
Slow
1.59
4
0.12
9
Fast
1.37
9
0.13
1
Closed
Slow
3.22
2
0.15
4
Fast
3.23
7
0.16
2
Away
Open
Slow 3.27 0.17
Fast
3.43
2
0.17
Closed
Slow
4.25
8
0.16
Fast
4.56
7
0.15
6
Fast
Towards
Open
Slow
1.94
3
0.13
7
Fast
1.63
8
0.14
1
Closed
Slow
3.46
9
0.15
9
Fast
3.21
5
0.15
6
Away
Open
Slow
3.27
7
0.15
5
Fast
3.33
9
0.17
5
Closed
Slow
3.99
4
0.14
1
Fast
4.23
9
0.14
9
Closed Slow Towards Open
Slow
3.44
8
0.14
7
Fast
3.31
3
0.14
5
Closed
Slow
4.04
5
0.15
6
Fast
3.91
8
0.15
7
Away
Open
Slow
3.76
1
0.14
9
Fast
3.91
7
0.13
3
Closed
Slow
4.19
9
0.15
1
Fast
4.33
7
0.14
9
Fast
Towards
Open
Slow
3.25
8
0.14
2
Fast 3.4
0.13
9
Closed
Slow
3.87
8
0.15
8
Fast
3.77
1
0.14
9
Away
Open
Slow
3.81
7
0.14
7
Fast
3.56
8
0.15
Closed
Slow
3.81
6
0.15
3
Fast
4.27
4
0.15
Away Open Slow
Towards
Open
Slow
2.76
4
0.14
9
Fast
2.84
9
0.15
1
Closed
Slow
3.50
2
0.14
4
Fast
3.62
4
0.14
Away
Open
Slow
3.04
5
0.16
5
Fast
3.03
2
0.16
4
Closed
Slow
3.40
5
0.15
4
Fast
3.53
1
0.14
6
Fast
Towards
Open
Slow
3.16
4
0.15
Fast
3.33
6
0.15
9
Closed
Slow
3.56
4
0.13
4
Fast
3.58
1
0.14
4
Away
Open
Slow
3.10
1
0.15
9
Fast
3.00
8
0.16
8
Closed
Slow
3.41
1
0.15
6
Fast
3.49
9
0.16
Closed
Slow
Towards
Open
Slow
3.46
3
0.14
Fast
3.47
9
0.15
4
Closed
Slow
3.67
9
0.14
3
Fast
3.65
7
0.15
6
Away
Open
Slow
3.42
6
0.15
6
Fast
3.46
9
0.13
9
Closed
Slow
3.73
4
0.16
5
Fast
3.62
5
0.15
5
Fast
Towards
Open
Slow
3.73
3
0.14
Fast
3.56
6
0.14
Closed
Slow
3.73
2
0.14
3
Fast
3.72
2
0.14
1
Away Open
Slow
3.53
4
0.16
3
Fast
3.51
5
0.15
5
Closed
Slow
3.66
2
0.16
Fast
3.76
5
0.17
3
Deceptio
n
Towards
Open
Slow
Towards
Open
Slow
1.51
6
0.12
5
Fast
1.44
8
0.13
6
Closed
Slow
2.89
2
0.14
9
Fast
2.58
1
0.15
5
Away
Open
Slow
2.88
9
0.16
6
Fast
3.12
1
0.16
5
Closed
Slow 3.53
0.16
4
Fast
3.79
2
0.17
3
Fast
Towards
Open
Slow
1.93
5
0.13
8
Fast
1.64
4
0.14
3
Closed
Slow 2.92
0.16
5
Fast
2.84
9
0.15
8
Away
Open
Slow
2.82
8
0.15
3
Fast
2.75
7
0.16
3
Closed
Slow
3.32
4
0.15
1
Fast
3.59
1
0.16
8
Closed Slow Towards
Open
Slow
2.96
5
0.15
8
Fast
3.03
1
0.15
3
Closed
Slow
3.46
1
0.15
8
Fast
3.27
7
0.16
5
Away
Open
Slow
3.35
6
0.14
9
Fast
3.40
9
0.14
7
Closed
Slow
3.69
3
0.15
4
Fast
3.73
5
0.15
8
Fast
Towards
Open
Slow
2.94
5
0.14
3
Fast
2.89
6
0.14
7
Closed
Slow
3.12
7
0.15
4
Fast
3.20
6
0.15
2
Away
Open
Slow 3.45
0.15
1
Fast
3.12
4
0.14
3
Closed
Slow
3.42
3
0.14
4
Fast
3.65
6
0.15
3
Away Open
Slow
Towards
Open
Slow
2.52
1
0.15
2
Fast
2.51
4
0.14
7
Closed
Slow
3.09
9
0.14
6
Fast
2.96
2
0.14
7
Away
Open
Slow 2.7
0.16
3
Fast
2.68
6
0.16
Closed
Slow
3.04
6
0.16
Fast
3.11
4
0.15
5
Fast Towards Open
Slow
2.62
1
0.14
4
Fast 2.91
0.16
2
Closed
Slow
3.05
5
0.14
4
Fast
3.19
3
0.14
6
Away
Open
Slow
2.76
9
0.15
8
Fast
2.53
1
0.16
6
Closed
Slow
2.94
7
0.16
Fast
2.92
5
0.16
Closed
Slow
Towards
Open
Slow
3.19
2
0.15
2
Fast
3.12
8
0.15
1
Closed
Slow
3.06
1
0.15
1
Fast
3.15
5
0.15
Away
Open
Slow
2.80
2
0.15
9
Fast
2.99
6
0.14
8
Closed
Slow
3.13
4
0.16
8
Fast
3.20
2
0.16
Fast
Towards
Open
Slow
3.20
7
0.14
3
Fast
3.27
1
0.13
8
Closed
Slow
3.16
4
0.15
5
Fast
3.20
6
0.15
4
Away
Open
Slow 2.94
0.16
2
Fast
2.92
2
0.15
7
Closed
Slow
3.23
3
0.16
6
Fast
3.33
9
0.16
9
Sociality Towards
Open
Slow
Towards
Open
Slow
5.05
9
0.14
Fast
5.26
3
0.14
1
Closed
Slow
4.46
1
0.14
4
Fast
4.56
6
0.14
1
Away
Open
Slow 4.29
0.16
2
Fast
4.12
6
0.15
3
Closed
Slow 4.1
0.15
8
Fast
4.42
3
0.15
2
Fast
Towards
Open
Slow
5.04
3
0.13
Fast
5.10
8
0.16
1
Closed
Slow
4.67
3
0.13
5
Fast
4.62
9
0.14
5
Away
Open
Slow
4.30
2
0.15
Fast
4.34
8
0.15
9
Closed
Slow
4.19
8
0.14
9
Fast
4.11
8
0.16
8
Closed Slow
Towards
Open
Slow
4.45
3
0.13
8
Fast
4.52
6
0.13
8
Closed
Slow 4.24
0.14
7
Fast
4.25
7
0.15
1
Away Open
Slow
4.21
6
0.15
2
Fast
3.96
6
0.15
5
Closed
Slow
3.94
1
0.16
2
Fast
3.73
7
0.15
7
Fast
Towards
Open
Slow
4.60
4
0.13
2
Fast 4.67
0.13
2
Closed
Slow
4.26
4
0.15
Fast 4.39 0.15
Away
Open
Slow 4.1
0.15
1
Fast
4.06
3
0.15
2
Closed
Slow 3.89
0.15
7
Fast
3.68
9
0.16
6
Away Open
Slow
Towards
Open
Slow
4.09
8
0.15
8
Fast
4.33
8
0.14
3
Closed
Slow
3.95
1
0.14
3
Fast
4.14
8
0.14
2
Away
Open
Slow
3.57
5
0.17
9
Fast
3.46
7
0.18
3
Closed
Slow
3.42
6
0.17
5
Fast
3.71
3
0.17
3
Fast
Towards
Open
Slow
3.77
3
0.14
7
Fast
4.10
2
0.15
1
Closed
Slow
3.86
5
0.15
Fast
3.95
6
0.14
3
Away Open Slow 3.41
0.17
1
Fast
3.33
1
0.17
1
Closed
Slow
3.46
7
0.17
7
Fast
3.35
2
0.17
2
Closed
Slow
Towards
Open
Slow
4.01
3
0.14
3
Fast
4.08
1
0.14
9
Closed
Slow
3.77
4
0.14
6
Fast
3.86
4
0.15
1
Away
Open
Slow
3.35
7
0.17
5
Fast
3.63
9
0.14
3
Closed
Slow
3.19
4
0.17
6
Fast
3.30
6
0.17
5
Fast
Towards
Open
Slow
4.00
3
0.15
Fast
4.04
6
0.13
9
Closed
Slow 3.99
0.14
8
Fast 3.9
0.14
7
Away
Open
Slow
3.56
4
0.17
2
Fast
3.59
6
0.16
4
Closed
Slow
3.29
7
0.17
6
Fast
3.28
3
0.16
6
Figure 2
6-way Mean Scores of DIAMONDS ratings based on Person 1 and Person 2 movement at a Far Distance
Measure P1Direction P1Gesture P1Speed P2Direction P2Gesture P2Speed Mean Std. Error
Duty Towards
Open
Slow
Towards
Open
Slow
2.03
1
.148
Fast
2.00
0
.156
Closed
Slow
2.38
6
.153
Fast
2.07
3
.142
Away
Open
Slow
2.19
5
.139
Fast
2.06
4
.152
Closed
Slow
1.91
3
.148
Fast
2.00
2
.155
Fast
Towards
Open
Slow
2.03
0
.144
Fast
2.10
8
.161
Closed
Slow
2.55
7
.153
Fast
2.28
5
.141
Away
Open
Slow
2.35
0
.149
Fast
2.25
6
.150
Closed
Slow
2.25
1
.151
Fast
2.08
9
.154
Closed Slow Towards Open
Slow
2.62
1
.150
Fast
2.50
9
.153
Closed
Slow
3.21
7
.158
Fast
3.02
5
.162
Away
Open
Slow
2.54
1
.158
Fast
2.63
0
.151
Closed
Slow
2.61
5
.159
Fast
2.39
2
.162
Fast
Towards
Open
Slow
2.61
1
.153
Fast
2.70
1
.155
Closed
Slow
2.92
6
.155
Fast
3.04
6
.152
Away
Open
Slow
2.50
2
.144
Fast
2.50
4
.149
Closed
Slow
2.70
2
.153
Fast
2.66
2
.158
Away Open Slow
Towards
Open
Slow
2.44
7
.156
Fast
2.62
3
.156
Closed
Slow
2.46
6
.144
Fast
2.49
5
.154
Away
Open
Slow
2.61
0
.164
Fast
2.46
6
.148
Closed
Slow
2.40
6
.151
Fast
2.32
2
.144
Fast
Towards
Open
Slow
2.65
3
.154
Fast
2.55
7
.156
Closed
Slow
2.44
2
.145
Fast
2.60
4
.147
Away
Open
Slow
2.65
2
.157
Fast
2.47
0
.156
Closed
Slow
2.52
3
.154
Fast
2.52
6
.156
Closed
Slow
Towards
Open
Slow
2.56
8
.156
Fast
2.57
5
.143
Closed
Slow
2.82
8
.150
Fast
2.73
9
.152
Away
Open
Slow
2.57
0
.152
Fast
2.66
4
.151
Closed
Slow
2.89
0
.164
Fast
2.69
6
.163
Fast
Towards
Open
Slow
2.55
3
.153
Fast
2.68
2
.147
Closed
Slow
2.83
2
.164
Fast
2.64
0
.151
Away Open
Slow
2.60
9
.146
Fast
2.77
5
.159
Closed
Slow
2.65
3
.156
Fast
2.67
1
.153
Intellect Towards
Open
Slow
Towards
Open
Slow
2.44
8
.158
Fast
2.54
3
.158
Closed
Slow
2.08
3
.137
Fast
1.88
0
.126
Away
Open
Slow
2.11
5
.131
Fast
2.03
2
.138
Closed
Slow
1.79
9
.140
Fast
1.69
4
.134
Fast
Towards
Open
Slow
2.55
9
.154
Fast
2.69
0
.167
Closed
Slow
2.41
7
.143
Fast
2.23
3
.136
Away
Open
Slow
2.24
3
.138
Fast
2.23
6
.150
Closed
Slow
2.06
9
.142
Fast
1.87
8
.141
Closed Slow Towards
Open
Slow
2.39
4
.139
Fast
2.67
2
.145
Closed
Slow
2.50
5
.141
Fast
2.38
6
.141
Away
Open
Slow
2.18
8
.143
Fast
2.24
4
.140
Closed
Slow
2.31
3
.149
Fast
1.98
4
.141
Fast
Towards
Open
Slow
2.49
3
.139
Fast
2.51
1
.149
Closed
Slow
2.44
5
.140
Fast
2.47
6
.147
Away
Open
Slow
2.45
5
.138
Fast
2.24
7
.144
Closed
Slow
2.34
1
.153
Fast
2.26
1
.147
Away Open
Slow
Towards
Open
Slow
2.56
0
.143
Fast
2.69
1
.152
Closed
Slow
2.18
7
.130
Fast
2.40
0
.145
Away
Open
Slow
2.35
5
.141
Fast
2.24
7
.142
Closed
Slow
2.26
3
.134
Fast
2.37
1
.150
Fast Towards Open
Slow
2.57
3
.148
Fast
2.41
1
.137
Closed
Slow
2.32
0
.141
Fast
2.29
3
.138
Away
Open
Slow
2.41
8
.144
Fast
2.40
9
.152
Closed
Slow
2.27
5
.138
Fast
2.33
6
.145
Closed
Slow
Towards
Open
Slow
2.40
0
.144
Fast
2.80
9
.148
Closed
Slow
2.49
2
.138
Fast
2.54
4
.142
Away
Open
Slow
2.22
6
.139
Fast
2.47
9
.142
Closed
Slow
2.48
4
.147
Fast
2.47
0
.151
Fast
Towards
Open
Slow
2.52
4
.144
Fast
2.50
5
.132
Closed
Slow
2.62
0
.147
Fast
2.50
5
.149
Away
Open
Slow
2.52
9
.145
Fast
2.41
2
.148
Closed
Slow
2.34
0
.141
Fast
2.30
3
.140
Adversit
y
Towards
Open
Slow
Towards
Open
Slow
1.62
9
.127
Fast
1.49
4
.136
Closed
Slow
2.75
2
.146
Fast
2.77
7
.160
Away
Open
Slow
2.66
2
.157
Fast
2.72
7
.165
Closed
Slow
3.39
1
.168
Fast
3.83
4
.172
Fast
Towards
Open
Slow
1.66
6
.128
Fast
1.61
8
.145
Closed
Slow
2.82
8
.143
Fast
2.76
1
.162
Away
Open
Slow
2.56
1
.145
Fast
2.83
0
.160
Closed
Slow
3.30
8
.154
Fast
3.77
9
.168
Closed Slow
Towards
Open
Slow
2.87
3
.154
Fast
2.67
8
.139
Closed
Slow
3.49
6
.152
Fast
3.60
7
.157
Away Open
Slow
3.05
6
.156
Fast
3.13
6
.142
Closed
Slow
3.50
9
.167
Fast
3.75
0
.149
Fast
Towards
Open
Slow
2.82
7
.147
Fast
2.89
1
.160
Closed
Slow
3.37
8
.154
Fast
3.41
8
.161
Away
Open
Slow
2.89
7
.147
Fast
3.06
1
.152
Closed
Slow
3.48
5
.150
Fast
3.80
9
.159
Away Open
Slow
Towards
Open
Slow
2.39
6
.148
Fast
2.62
7
.154
Closed
Slow
2.91
1
.142
Fast
3.09
4
.153
Away
Open
Slow
2.41
4
.158
Fast
2.55
5
.152
Closed
Slow
2.72
1
.152
Fast
2.91
8
.157
Fast Towards
Open
Slow
2.65
6
.151
Fast
2.67
0
.155
Closed
Slow
3.05
3
.145
Fast
3.16
2
.139
Away
Open
Slow
2.56
4
.157
Fast
2.66
0
.155
Closed
Slow
3.14
3
.142
Fast
2.99
5
.143
Closed
Slow
Towards
Open
Slow
2.96
9
.148
Fast
3.13
4
.143
Closed
Slow
3.31
6
.144
Fast
3.36
2
.140
Away
Open
Slow
2.85
8
.148
Fast
2.92
4
.148
Closed
Slow
3.16
8
.153
Fast
3.38
5
.160
Fast
Towards
Open
Slow
2.79
5
.142
Fast
2.91
5
.139
Closed
Slow
3.29
7
.138
Fast
3.27
2
.147
Away
Open
Slow
2.93
8
.147
Fast
2.92
4
.149
Closed
Slow
3.39
9
.148
Fast
3.32
8
.157
Romance Towards Open Slow Towards Open
Slow
4.08
5
.158
Fast
4.52
9
.150
Closed
Slow
2.50
0
.150
Fast
2.68
0
.158
Away
Open
Slow
2.54
3
.167
Fast
2.50
5
.170
Closed
Slow
2.18
0
.150
Fast
2.08
3
.158
Fast
Towards
Open
Slow
3.96
7
.149
Fast
4.59
7
.160
Closed
Slow
2.68
5
.149
Fast
2.65
7
.151
Away
Open
Slow
2.71
1
.142
Fast
2.68
3
.160
Closed
Slow
2.25
2
.145
Fast
2.09
7
.145
Closed Slow
Towards
Open
Slow
2.72
8
.147
Fast
2.96
5
.150
Closed
Slow
2.23
2
.151
Fast
2.13
0
.147
Away
Open
Slow
2.34
5
.154
Fast
2.25
9
.144
Closed
Slow
2.00
3
.146
Fast
2.00
0
.141
Fast
Towards
Open
Slow
2.89
4
.150
Fast
3.01
8
.155
Closed
Slow
2.07
7
.141
Fast
2.21
1
.141
Away
Open
Slow
2.52
2
.148
Fast
2.41
1
.148
Closed
Slow
2.15
3
.144
Fast
2.16
5
.148
Away Open
Slow
Towards
Open
Slow
2.86
5
.155
Fast
3.10
5
.156
Closed
Slow
2.38
7
.148
Fast
2.45
6
.158
Away
Open
Slow
2.53
2
.157
Fast
2.49
3
.152
Closed
Slow
2.28
9
.140
Fast
2.34
6
.152
Fast
Towards
Open
Slow
2.88
0
.159
Fast
3.00
3
.160
Closed
Slow
2.40
2
.143
Fast
2.35
1
.143
Away Open
Slow
2.74
3
.155
Fast
2.63
9
.156
Closed
Slow
2.12
8
.141
Fast
2.62
0
.153
Closed
Slow
Towards
Open
Slow
2.76
5
.152
Fast
2.95
2
.149
Closed
Slow
2.43
7
.150
Fast
2.43
2
.149
Away
Open
Slow
2.48
6
.144
Fast
2.57
9
.156
Closed
Slow
2.35
0
.150
Fast
2.44
3
.157
Fast
Towards
Open
Slow
2.68
9
.139
Fast
2.89
9
.150
Closed
Slow
2.44
9
.138
Fast
2.48
6
.148
Away
Open
Slow
2.51
3
.151
Fast
2.63
5
.154
Closed
Slow
2.50
1
.147
Fast
2.40
0
.145
Positivity Towards Open Slow Towards
Open
Slow
4.37
7
.161
Fast
4.79
6
.150
Closed
Slow
2.45
3
.140
Fast
2.49
2
.143
Away
Open
Slow
2.48
8
.152
Fast
2.58
0
.167
Closed
Slow
1.69
1
.132
Fast
1.64
5
.134
Fast
Towards
Open
Slow
4.12
7
.150
Fast
4.79
1
.162
Closed
Slow
2.55
6
.136
Fast
2.72
7
.143
Away
Open
Slow
2.62
7
.144
Fast
2.63
0
.159
Closed
Slow
2.06
7
.140
Fast
1.87
5
.141
Closed
Slow
Towards
Open
Slow
2.63
7
.136
Fast
2.84
6
.153
Closed
Slow
2.21
7
.136
Fast
2.02
1
.143
Away
Open
Slow
2.18
6
.139
Fast
2.25
6
.140
Closed
Slow
1.99
2
.146
Fast
1.77
7
.132
Fast Towards Open
Slow
2.75
1
.138
Fast
2.94
8
.141
Closed
Slow
2.26
9
.140
Fast
2.35
5
.145
Away
Open
Slow
2.59
0
.148
Fast
2.48
4
.148
Closed
Slow
2.16
7
.140
Fast
2.10
3
.147
Away Open
Slow
Towards
Open
Slow
3.09
7
.144
Fast
3.19
1
.151
Closed
Slow
2.46
7
.133
Fast
2.37
0
.143
Away
Open
Slow
2.73
5
.159
Fast
2.66
7
.150
Closed
Slow
2.38
8
.139
Fast
2.28
2
.136
Fast
Towards
Open
Slow
3.09
6
.159
Fast
2.94
7
.144
Closed
Slow
2.55
3
.139
Fast
2.38
5
.137
Away
Open
Slow
2.82
1
.149
Fast
2.87
1
.153
Closed
Slow
2.32
4
.138
Fast
2.39
0
.142
Closed
Slow
Towards
Open
Slow
2.65
2
.158
Fast
2.67
7
.144
Closed
Slow
2.49
1
.146
Fast
2.56
4
.143
Away
Open
Slow
2.50
2
.141
Fast
2.45
1
.149
Closed
Slow
2.31
7
.154
Fast
2.39
0
.150
Fast
Towards
Open
Slow
2.66
3
.146
Fast
2.79
5
.147
Closed
Slow
2.46
2
.150
Fast
2.43
5
.149
Away
Open
Slow
2.56
2
.153
Fast
2.63
6
.151
Closed
Slow
2.28
3
.146
Fast
2.19
2
.145
Negativit
y
Towards Open Slow
Towards
Open
Slow
1.76
9
.133
Fast
1.54
3
.134
Closed
Slow
3.29
8
.143
Fast
3.33
4
.152
Away Open
Slow
2.96
8
.153
Fast
3.24
1
.166
Closed
Slow
4.04
8
.150
Fast
4.28
1
.161
Fast
Towards
Open
Slow
2.07
5
.132
Fast
1.63
4
.134
Closed
Slow
3.23
2
.139
Fast
3.02
7
.147
Away
Open
Slow
3.06
4
.153
Fast
3.11
3
.159
Closed
Slow
3.83
1
.141
Fast
3.99
8
.158
Closed
Slow
Towards
Open
Slow
3.16
7
.140
Fast
3.03
3
.136
Closed
Slow
3.61
4
.143
Fast
3.72
3
.144
Away
Open
Slow
3.44
4
.140
Fast
3.50
4
.125
Closed
Slow
3.97
8
.151
Fast
4.08
3
.137
Fast Towards
Open
Slow
3.02
6
.140
Fast
3.17
8
.147
Closed
Slow
3.70
1
.144
Fast
3.69
8
.153
Away
Open
Slow
3.15
3
.133
Fast
3.41
0
.141
Closed
Slow
3.75
4
.136
Fast
3.95
9
.146
Away
Open
Slow
Towards
Open
Slow
2.70
2
.140
Fast
2.79
8
.143
Closed
Slow
3.18
6
.138
Fast
3.37
9
.146
Away
Open
Slow
2.92
5
.147
Fast
2.83
9
.150
Closed
Slow
3.33
2
.147
Fast
3.40
4
.147
Fast
Towards
Open
Slow
2.89
0
.147
Fast
2.94
2
.139
Closed
Slow
3.19
0
.140
Fast
3.53
5
.123
Away
Open
Slow
3.05
5
.154
Fast
2.99
5
.151
Closed
Slow
3.57
4
.136
Fast
3.34
9
.139
Closed Slow Towards Open
Slow
3.43
2
.135
Fast
3.42
2
.143
Closed
Slow
3.52
3
.138
Fast
3.65
9
.139
Away
Open
Slow
3.26
3
.147
Fast
3.31
1
.146
Closed
Slow
3.69
4
.152
Fast
3.64
1
.142
Fast
Towards
Open
Slow
3.31
2
.133
Fast
3.36
1
.142
Closed
Slow
3.52
0
.135
Fast
3.72
7
.148
Away
Open
Slow
3.30
2
.144
Fast
3.20
6
.144
Closed
Slow
3.71
3
.134
Fast
3.66
7
.149
Deceptio
n
Towards Open Slow
Towards
Open
Slow
1.72
7
.124
Fast
1.66
1
.140
Closed
Slow
2.76
5
.145
Fast
2.84
0
.153
Away
Open
Slow
2.59
3
.147
Fast
2.80
8
.150
Closed
Slow
3.24
3
.156
Fast
3.63
5
.151
Fast
Towards
Open
Slow
2.00
4
.138
Fast
1.68
7
.136
Closed
Slow
2.92
4
.146
Fast
2.75
3
.151
Away
Open
Slow
2.66
9
.142
Fast
2.76
2
.148
Closed
Slow
3.23
1
.156
Fast
3.43
4
.158
Closed
Slow
Towards
Open
Slow
2.86
3
.140
Fast
2.72
4
.149
Closed
Slow
3.19
5
.145
Fast
3.24
7
.156
Away
Open
Slow
2.87
9
.142
Fast
3.03
8
.136
Closed
Slow
3.37
5
.161
Fast
3.39
6
.153
Fast
Towards
Open
Slow
2.68
5
.146
Fast
2.81
5
.148
Closed
Slow
3.37
2
.149
Fast
3.30
5
.151
Away Open
Slow
2.80
4
.133
Fast
2.95
0
.143
Closed
Slow
3.21
5
.147
Fast
3.51
9
.156
Away
Open
Slow
Towards
Open
Slow
2.60
0
.144
Fast
2.47
6
.145
Closed
Slow
2.88
6
.139
Fast
2.93
1
.150
Away
Open
Slow
2.59
2
.151
Fast
2.64
7
.152
Closed
Slow
2.95
2
.151
Fast
3.00
4
.146
Fast
Towards
Open
Slow
2.62
7
.148
Fast
2.65
6
.145
Closed
Slow
2.99
7
.147
Fast
2.95
6
.142
Away
Open
Slow
2.76
9
.147
Fast
2.69
5
.158
Closed
Slow
3.13
7
.146
Fast
2.84
1
.144
Closed Slow Towards
Open
Slow
3.04
1
.154
Fast
2.97
9
.147
Closed
Slow
3.21
0
.142
Fast
3.27
1
.144
Away
Open
Slow
2.91
1
.149
Fast
3.05
6
.154
Closed
Slow
3.18
1
.153
Fast
3.22
3
.153
Fast
Towards
Open
Slow
2.95
2
.139
Fast
3.08
7
.146
Closed
Slow
3.28
9
.148
Fast
3.30
9
.144
Away
Open
Slow
3.10
1
.148
Fast
3.04
0
.153
Closed
Slow
3.26
0
.147
Fast
3.28
8
.154
Sociality Towards Open
Slow
Towards
Open
Slow
5.07
1
.135
Fast
5.22
7
.130
Closed
Slow
4.22
1
.145
Fast
4.33
5
.145
Away
Open
Slow
4.29
0
.148
Fast
4.18
1
.142
Closed
Slow
3.86
6
.161
Fast
4.03
3
.164
Fast Towards Open
Slow
4.96
1
.127
Fast
5.34
4
.140
Closed
Slow
4.37
1
.140
Fast
4.51
6
.134
Away
Open
Slow
4.24
8
.159
Fast
4.33
6
.151
Closed
Slow
4.01
0
.154
Fast
4.02
4
.161
Closed
Slow
Towards
Open
Slow
4.47
9
.140
Fast
4.61
7
.141
Closed
Slow
4.23
8
.141
Fast
4.16
1
.149
Away
Open
Slow
4.15
3
.137
Fast
4.09
9
.153
Closed
Slow
4.05
0
.162
Fast
3.82
4
.168
Fast
Towards
Open
Slow
4.73
2
.125
Fast
4.54
7
.140
Closed
Slow
4.17
8
.151
Fast
4.28
3
.153
Away
Open
Slow
4.00
6
.147
Fast
4.02
4
.156
Closed
Slow
4.02
9
.162
Fast
4.00
4
.149
Away
Open
Slow
Towards
Open
Slow
4.22
5
.153
Fast
4.51
4
.149
Closed
Slow
4.02
0
.150
Fast
4.18
0
.146
Away
Open
Slow
3.75
6
.178
Fast
3.74
6
.179
Closed
Slow
3.63
4
.167
Fast
3.92
5
.158
Fast
Towards
Open
Slow
4.37
3
.140
Fast
4.33
7
.137
Closed
Slow
3.91
2
.148
Fast
4.19
0
.144
Away
Open
Slow
4.09
5
.159
Fast
3.95
7
.169
Closed
Slow
3.82
4
.158
Fast
3.91
3
.160
Closed Slow
Towards
Open
Slow
4.29
2
.147
Fast
4.32
5
.135
Closed
Slow
3.76
3
.143
Fast
4.14
0
.136
Away Open
Slow
3.78
8
.161
Fast
3.89
8
.164
Closed
Slow
3.76
3
.163
Fast
4.04
0
.161
Fast
Towards
Open
Slow
4.08
2
.146
Fast
4.22
2
.134
Closed
Slow 3.99
0.14
8
Fast 3.9
0.14
7
Away
Open
Slow
3.56
4
0.17
2
Fast
3.59
6
0.16
4
Closed
Slow
3.29
7
0.17
6
Fast
3.28
3
0.16
6
Study 3 Appendix
4-Way Mean Scores: Complete Model
Measure Distance Speed Gesture Direction Mean
Duty Far Slow Averted Away 1.743
Towards 2.423
Direct Away 1.764
Towards 2.540
Fast Averted Away 2.121
Towards 2.578
Direct Away 1.839
Towards 2.738
Near Slow Averted Away 1.794
Towards 2.581
Direct Away 1.975
Towards 2.480
Fast Averted Away 1.859
Towards 2.536
Direct Away 1.972
Towards 2.542
Intellect Far Slow Averted Away 1.336
Towards 1.906
Direct Away 1.389
Towards 2.073
Fast Averted Away 1.160
Towards 2.028
Direct Away 1.215
Towards 2.272
Near Slow Averted Away 1.432
Towards 2.115
Direct Away 1.437
Towards 1.991
Fast Averted Away 1.296
Towards 1.965
Direct Away 1.257
Towards 2.075
Adversity Far Slow Averted Away 2.691
Towards 2.456
Direct Away 2.887
Towards 2.377
Fast Averted Away 3.112
Towards 2.084
Direct Away 3.503
Towards 2.169
Near Slow Averted Away 2.767
Towards 2.188
Direct Away 3.109
Towards 2.114
Fast Averted Away 2.993
Towards 2.330
Direct Away 2.992
Towards 2.133
Romance Far Slow Averted Away 1.323
Towards 1.974
Direct Away 1.435
Towards 2.122
Fast Averted Away 1.108
Towards 2.131
Direct Away 1.091
Towards 2.138
Near Slow Averted Away 1.308
Towards 1.970
Direct Away 1.208
Towards 2.162
Fast Averted Away 1.246
Towards 2.249
Direct Away 1.294
Towards 2.196
Positivity Far Slow Averted Away 1.303
Towards 2.214
Direct Away 1.434
Towards 2.411
Fast Averted Away 1.217
Towards 2.732
Direct Away 1.069
Towards 2.744
Near Slow Averted Away 1.457
Towards 2.491
Direct Away 1.330
Towards 2.565
Fast Averted Away 1.399
Towards 2.599
Direct Away 1.401
Towards 2.847
Negativity Far Slow Averted Away 3.295
Towards 2.685
Direct Away 3.355
Towards 2.592
Fast Averted Away 3.554
Towards 2.057
Direct Away 3.980
Towards 2.238
Near Slow Averted Away 3.229
Towards 2.435
Direct Away 3.758
Towards 2.479
Fast Averted Away 3.564
Towards 2.338
Direct Away 3.765
Towards 2.106
Deception Far Slow Averted Away 2.596
Towards 2.116
Direct Away 2.557
Towards 1.971
Fast Averted Away 3.063
Towards 1.632
Direct Away 3.562
Towards 1.585
Near Slow Averted Away 2.594
Towards 2.003
Direct Away 3.191
Towards 1.881
Fast Averted Away 2.753
Towards 1.870
Direct Away 3.273
Towards 1.709
Sociality Far Slow Averted Away 2.087
Towards 3.247
Direct Away 2.223
Towards 3.632
Fast Averted Away 2.117
Towards 3.572
Direct Away 1.877
Towards 3.926
Near Slow Averted Away 2.427
Towards 3.483
Direct Away 2.164
Towards 3.847
Fast Averted Away 2.080
Towards 3.686
Direct Away 2.404
Towards 3.715