EXPERIENCE MODULATES NEURAL ACTIVITY DURING ACTION
UNDERSTANDING: EXPLORING SENSORIMOTOR AND SOCIAL COGNITIVE
INTERACTIONS
by
Sook-Lei Liew
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(OCCUPATIONAL SCIENCE)
August 2012
Copyright 2012 Sook-Lei Liew
DEDICATION
For my grandmother and my parents,
with love and gratitude
for taking risks, making sacrifices,
and encouraging me to embrace every experience.
ACKNOWLEDGEMENTS
The first thing I did when I started thinking about my dissertation was create a
document entitled “Acknowledgements,” in a vain attempt to thank everyone I
have been so fortunate to be encouraged and mentored by throughout my
graduate education. One of my biggest concerns in this entire document is that I
leave someone out or do not do justice in expressing my gratitude; to that end, I
hope you will understand the depths of my appreciation, as well as the extent of
my current memory and page limitations.
First, I owe an enormous debt of gratitude to my primary advisor, Dr. Lisa Aziz-
Zadeh, who took a chance on me and let me join the lab when I had no
background in science and no experience in research. Thank you, Lisa, for your
kindness and enthusiasm, for teaching me to pursue questions I am passionate
about, for helping me to develop a love for research, and for guiding and
mentoring me so that I could find my own feet. You have been a role model and
a friend, and I have been so fortunate to learn and grow with you. I am also
grateful for all of my professors, who have taken the time to teach me what they
are passionate about and guide me along this journey. In particular, I have
heartfelt appreciation for Dr. Florence Clark, whose handwritten note of
recruitment brought me to LA and whose vision and support have been a guiding
force throughout my career ever since; Dr. Nancy Bagatell, who introduced me to
neuroscience in my first summer of occupational therapy school and whose
kindness and understanding led me to my current path; Dr. Michael Arbib, whose
wealth of knowledge (and witticisms) has consistently kept me on my feet and
challenged me to produce better science; Dr. Sharon Cermak, for thought-
provoking discussions, support, and Bisquick recipes; Dr. Ann Neville-Jan, for
constant encouragement and enthusiasm about the research I am pursuing; Dr.
Hanna Damasio, for patiently teaching me about the brain’s incredible, and
incredibly complex, anatomy and for being as excited about our current project
as I am; Dr. Marco Iacoboni, for giving me the opportunity to collaborate and
learn from him and his excitingly diverse lab; and Dr. Shihui Han, who welcomed
me into his lab and mentored me through my first ever fMRI study, and who has
been a mentor ever since.
In addition to formal mentors and professors, I am also thankful to those who
have informally mentored me and dedicated their time to patiently teaching and
helping me learn the tricks of the trade. In particular, I thank Dr. Justin Haldar, Dr.
Savio Wong, Dr. Jonas Kaplan, Dr. Kaspar Meyer, Dr. Dimitrios Pantazis, Dr.
Choi Deblieck, and Dr. Gui Xue for their wealth of knowledge and willingness to
share it. Heartfelt thanks are owed also to my colleagues and labmates, who
have helped me to learn and encouraged me to keep going. Dr. Sir David
Pitcher, thank you for every well-chosen word of advice you have given me and
for being an amazing friend, inspiration, encourager, and ‘pacer’ in this marathon
of a journey; I look forward to being able to pass all of this on to others as
generously as you have done with me.
Thank you to all of my wonderful labmates in the A-Z lab, at the Brain and
Creativity Institute, in the USC Neuroscience Graduate Program, and beyond,
who have made doing research more fun and full of laughs than I ever could
have anticipated. It has been such a joy to work in an atmosphere of
camaraderie, creativity, and general good will. Thank you (in alphabetical order):
Andrea, Bradford, Chao, Dave Clewett, David Hans, Farhan, Fei, Glenny, Helder,
all the iLab folks who are at HNB late at night, John Shen, Julie, Katie, Kingson,
Komperda, Meghen, Madeline, Mona, Natalie, Nichole, Shaner, Sims, Tong, &
last but certainly not least, Vilay. Special thanks to Mona Sobhani, for your
encouragement, t-rex hugs, and introducing me to jujubes, Simren Dulai, for
keeping me sane, introducing me to running, and Souplantation study sessions,
and Katie Garrison, for letting me learn, collaborate, and grow with you, for your
friendship, and for teaching me to be more concise (with mild success). Thank
you also to all of my colleagues at the Culture and Social Cognitive Neuroscience
Laboratory at Peking University, who taught me how to run fMRI studies—in
Chinese, no less, to Megan May Daadler, for being excited about bringing
the Mirrorbox into a crazy blend of art + science, to Henryk Bukowski, for
teaching me how to effectively deceive participants and encouraging me to be
hardcore about work, play, and even burger making, and to all of my
collaborators, colleagues, and research assistants over the years: Alicia, Ahra,
Danny, Erika, Franny, Leo, Mark, Mustafa, Pavitra, Peter, Sarah, Zara.
Thank you also to my dear friends who have supported me both inside and
outside of the lab: Elyse Bealer, Erica Jehling, Eddy Chao, Gloria McCahon – a
million thanks for reminding me that there is life outside of the basement and for
your best friendship and support.
Finally, my greatest appreciation and gratitude goes to my family for their
unconditional love and support; thank you to my big sister Wei, for being a role
model and my cheerleader throughout the years. I am unendingly grateful to my
grandparents, who worked hard so that their kids could have better opportunities,
and to my parents, Kien and Mooi Liew, who worked even harder, so that we
could have even better opportunities. Thank you most of all, Mom and Dad, for
encouraging me to ‘make significant contributions to society,’ and for believing
that I can do so.
My graduate work was supported by the National Science Foundation East Asia
and Pacific Summer Institutes under Grant No. 0813067, National Science
Foundation Graduate Research Fellowship under Grant No. 2009072048, the
University of Southern California Provost’s PhD Fellowship, the Division of
Occupational Science and Occupational Therapy at the Herman Ostrow School
of Dentistry, the Dornsife Neuroimaging Center, and the Brain and Creativity
Institute. Portions of this work were additionally supported by the American Heart
Association National Scientist Development Grant (10SDG3510062), and Award
Number R03HD067475 from the Eunice Kennedy Shriver National Institute Of
Child Health & Human Development. The content is solely the responsibility of
the authors and does not necessarily represent the official views of the Eunice
Kennedy Shriver National Institute Of Child Health & Human Development or the
National Institutes of Health.
TABLE OF CONTENTS
DEDICATION ii
ACKNOWLEDGEMENTS iii
LIST OF TABLES x
LIST OF FIGURES xi
ABBREVIATIONS xiv
ABSTRACT xvi
CHAPTER 1. Introduction 1
1.1 Specific Aims 1
1.2 Overview 6
1.3 Two Systems for Understanding 9
1.4 The Mirror Neuron System 13
1.5 The Mentalizing System 25
1.6 Background for Specific Aims 36
1.7 Aim 1: Perceptual and Motor Experiences
and Action Understanding 37
1.7.1 Background 37
1.7.2 Gestures: Action and Language 46
1.7.3 Conceptual Model of
Gestural Understanding 56
1.8 Aim 2: Visual Experience and Action Understanding 61
1.8.1 Background 61
1.8.2 Conceptual Model of Unusual and Novel
Effectors 65
1.9 Aim 3: Real-life Experiences and Action Understanding 69
1.9.1 Background 69
1.9.2 Conceptual Model of Real-life Experience 72
1.10 Methods: Functional Magnetic Resonance
Imaging (fMRI) 75
CHAPTER 2: Perceptual and Motor Experiences Affect
Action Understanding Regions 88
2.1 Introduction 89
2.2 Materials and Methods 94
2.3 Results 101
2.4 Discussion 114
CHAPTER 3: Visual Experience Affects Action Understanding
Regions 125
3.1 Introduction 126
3.2 Materials and Methods 128
3.3 Results 140
3.4 Discussion 150
CHAPTER 4: Real-life Experience Affects Action Understanding
Regions 162
4.1 Introduction 163
4.2 Materials and Methods 167
4.3 Results 169
4.3.1 CJ Analyses 182
4.4 Discussion 185
CHAPTER 5: Current & Future Work 205
5.1 Attention, Experience, and the MNS 206
5.2 Using the Mirrorbox to Induce Embodiment and Empathy 217
5.3 Experience & Motor Systems in Individuals After Stroke 226
5.4 The Mirror Neuron System: Innovations and
Implications for Occupational Therapy 238
CHAPTER 6: Discussion 259
REFERENCES 277
APPENDICES 325
Appendix A: Brief List of Neural Regions and Functions 325
Appendix B: Cultural Experiences Modulate Social Perception 330
LIST OF TABLES
Table 2-1. Localization of brain activations from random
effects analysis. 105
Table 3-1. Localization of brain activations in novices during
the PRE condition. 144
Table 3-2. Localization of brain activations in novices during
the POST condition. 146
Table 4-1. Localization of brain activations in experienced OTs
during the PRE condition. 174
Table 4-2. Localization of brain activations in experienced OTs
during the POST condition. 176
LIST OF FIGURES
Figure 1-1. Basic schematic of the putative human mirror
neuron system (left) and mentalizing system (right). 11
Figure 1-2. The adapted FARS model for grasping. 19
Figure 1-3. The MNS Model. 20
Figure 1-4. The MNS2 Model. 22
Figure 1-5. A model of praxis processing. 24
Figure 1-6. The Mental State Inference model. 33
Figure 1-7. An integrated conceptual model of mentalizing and
mirror systems. 34
Figure 1-8. A model of bidirectional processing between visual
and social information. 35
Figure 1-9. An integrated conceptual model from action
observation networks and social-networks. 36
Figure 1-10. Conceptual model for Aim 1. 60
Figure 1-11. Conceptual model for Aim 2. 68
Figure 1-12. Conceptual model for Aim 3. 75
Figure 1-13. Raw BOLD timeseries from one voxel. 77
Figure 1-14. Processing stream for fMRI data within a single
subject using FSL. 80
Figure 1-15. Convolution of a canonical HRF with predicted
neural activity. 81
Figure 1-16. Resulting T-map of neural activity during one condition. 83
Figure 1-17. Sample contrast matrix. 84
Figure 1-18. A thresholded activation map comparing two
conditions (A>B). 85
Figure 2-1. Examples of still images of the stimuli. 96
Figure 2-2. Study paradigm and questions addressed in this study. 98
Figure 2-3. Brain responses to observations of gestures versus
still images. 103
Figure 2-4. Race-driven and experience-driven brain responses. 107
Figure 2-5. Correlations between Multigroup Ethnic Identity
Measure (MEIM) scores and neural activity. 108
Figure 2-6. Beta values from MNS and mentalizing ROIs. 110
Figure 2-7. BOLD results of interaction effects between race
and familiarity. 112
Figure 2-8. Overlap between action execution and other experimental
conditions. 113
Figure 3-1. Action observation run paradigm. 131
Figure 3-2. Overall experimental paradigm. 134
Figure 3-3. Mirror neuron system localizer run paradigm. 135
Figure 3-4. Results from the mirror neuron system localizer. 136
Figure 3-5. fMRI results in novice viewers. 143
Figure 3-6. Percent signal change in ROIs for novices. 148
Figure 3-7. Correlation between percent signal change in right inferior
parietal lobule during residual limb observation and
empathic concern scores. 150
Figure 4-1. Ratings of experience correlate with age across novice
and experienced OTs. 170
Figure 4-2. fMRI results in experienced occupational therapists. 173
Figure 4-3. Percent signal change in ROIs for experienced OTs. 177
Figure 4-4. ANOVA between Experience (Novice, Experienced OTs)
and MNS ROI. 178
Figure 4-5. Correlation between percent signal change in left
inferior frontal gyrus during residual limb observation and
familiarity with residual limbs in experienced OTs. 179
Figure 4-6. Between groups comparison for Novices > Experienced OTs. 181
Figure 4-7. Main effect of Age across Novices & Experienced OT groups. 182
Figure 4-8. fMRI results in CJ. 184
Figure 4-9. Comparison of groups means across MNS ROIs. 185
Figure 4-10. Conceptual model to explain group differences in data. 190
Figure 4-11. A hypothesized relationship between BOLD response
and action familiarity. 199
Figure 5-1. Example of the Mirrorbox paradigm. 218
Figure 5-2. Examples of cavitation and broad lesion tracings
in BrainVox. 233
Figure 5-3. ROI correlations between %BROAD lesion volume and
beta values in the L IFG (left) and L PMv (right)
from observing possible (left hand) actions. 234
Figure 5-4. Whole brain correlation between BOLD activity for
Observing Impossible Actions (right hand) > Rest
and lesion volume. 235
Figure 5-5. Diffusion weighted imaging in a participant with
chronic stroke. 236
Figure 5-6. BOLD results in one participant with right MCA stroke. 237
Figure 5-7. The putative human mirror neuron system. 242
ABBREVIATIONS
ACC, anterior cingulate cortex
ACQ, Augmented Competitive Queuing model
AIP, anterior inferior parietal
BA, Brodmann area
BOLD, blood-oxygen-level dependent signal
dlPFC, dorsal lateral prefrontal cortex (BA 46)
dmPFC, dorsal medial prefrontal cortex
DTI, diffusion tensor imaging
EBA, extrastriate body area
FARS, Fagg-Arbib-Rizzolatti-Sakata model
FFA, fusiform face area
fMRI, functional magnetic resonance imaging
GAEM, Grasp Affordance Learning Model
GCM, Granger causality mapping
IFG, inferior frontal gyrus
ILGM, Infant Learning to Grasp Model
IPL, inferior parietal lobule
IT, inferotemporal cortex
MFG, middle frontal gyrus
MNS, Mirror Neuron System model
MNS2, Mirror Neuron System model 2
MSI, Mental State Inference model
MTG, middle temporal gyrus
PCC, posterior cingulate cortex
PFC, prefrontal cortex
PMC, premotor cortex
MNS, putative mirror neuron system
PPA, parahippocampal place area
pSTS, posterior superior temporal sulcus
SFG, superior frontal gyrus
SMG, supramarginal gyrus
SPL, superior parietal lobule
STG, superior temporal gyrus
TMS, transcranial magnetic stimulation
TPJ, temporoparietal junction
vmPFC, ventral medial prefrontal cortex
ABSTRACT
Our experiences shape who we are. They affect everything that we say, do, and
think, at both conscious and subconscious levels. It is no surprise that our
experiences affect how we interact with others—what we perceive, whether we
understand or empathize with one another. This thesis explores how experience
modulates neural regions involved in both motor function and social cognition. In
particular, it examines the putative human mirror neuron system, a network of
premotor and parietal brain regions that are active both when performing and
observing actions, and how these regions are engaged when observing new,
unfamiliar or impossible actions. Furthermore, this work explores how different
types of experience modulate activity in these networks, along with other
networks engaged in social cognitive processes, such as perspective taking. The
overarching aim of this thesis is to understand how we make sense of actions
that are beyond our own abilities and how our everyday experiences shape our
neural responses.
CHAPTER 1. Introduction
1.1 SPECIFIC AIMS
Every encounter affects how we understand, relate to, and interact with one
another. Whether it is the shared motor experience of learning the same skill as
another, or a common visual experience of having seen the same scene, our
past experiences influence our future social and motor abilities. Recent
technological advances in human neuroimaging have allowed researchers to
explore the neural bases of many of these social and motor processes, which
often occur concurrently in real life (e.g., understanding someone’s social
intentions from his or her observed motor actions). This research suggests that
we understand others’ actions by mapping them onto our own bodies
(Rizzolatti & Craighero, 2004), utilizing our own prior experiences to understand
another’s movements and intentions.
However, there is currently no commonly accepted model for how experience
affects specific regions associated with both social and motor comprehension,
and how these factors and regions interact to support overall human cognition.
Such a model holds unparalleled potential not only for understanding complex
social cognitive processes in typically developed humans, but also for developing
ways to use experience to enhance an individual’s social or motor abilities when
impaired. This may be particularly useful to the field of occupational science (OS)
and occupational therapy (OT), which promotes rehabilitation of impaired social
and/or motor functioning in order to enhance or restore an individual’s
engagement in meaningful activities. Thus, the current body of research
systematically examines how different types of experience modulate activity in
neural regions underlying social cognition and sensorimotor processing during
action understanding, in order to determine how these regions work together and what
each region specifically contributes. In doing so, I put forth a working
conceptual model of how experience drives social and motor aspects of
cognition in the brain during action understanding, how regions integrate
information between these two main networks associated with social
cognition and sensorimotor processing, and how different modalities of
training (visual, motor, and real-life interactions) may enhance cognitive
processing—findings which may subsequently contribute to the
development of new methods of neural-based rehabilitation in a wide range
of affected populations.
While there are many regions associated with social cognitive processing, two
networks in particular have been highly studied: (1) the human putative mirror
neuron system (MNS; Gallese, Keysers, & Rizzolatti, 2004; Rizzolatti &
Craighero, 2004; Iacoboni et al., 2005), and (2) the mentalizing system (Fletcher
et al., 1995; Saxe, 2006; Frith & Frith, 2006). The interplay of these two networks
is most prominent during the social task of understanding another’s intentions
from his or her actions, with the former engaging sensorimotor regions and the
latter involving higher cognitive processing regions. Functional magnetic
resonance imaging (fMRI) has demonstrated modulations of blood-oxygen-level
dependent (BOLD) activity in these two networks when individuals are asked to
observe actions in different contexts (e.g., familiar versus unfamiliar actions), or
asked to process the actions on different levels of abstraction (e.g., ‘what is he
doing?’ versus ‘why is he doing it?’; Calvo-Merino, Glaser, Grezes, Passingham,
& Haggard, 2005; de Lange, Spronk, Willems, Toni, & Bekkering, 2008; Spunt,
Satpute, & Lieberman, 2010). This occurs in response to many stimuli, ranging
from goal-directed movements (e.g., reaching for a cup) to communicative,
intransitive gestures, such as a thumbs up. These movements seem to engage
both systems (Gallagher & Frith, 2004; Gentilucci & Dalla Volta, 2008; Villarreal
et al., 2008; Schippers, Gazzola, Goebel, & Keysers, 2009; Skipper, Goldin-
Meadow, Nusbaum, & Small, 2009) and are sensitive to one’s level of experience
(Liew, Aziz-Zadeh, & Han, 2011; Molnar-Szakacs, Wu, Robles, & Iacoboni, 2007;
Newman-Norlund, van Schie, van Hoek, Cuijpers, & Bekkering, 2010). In this
work, I explore the effects of three levels and types of experience on
modulating activity in action understanding networks: (1) how are regions
associated with action understanding modulated by both motor and perceptual
familiarity with an action, (2) how does the introduction of visual experience
modulate action understanding regions when observing novel human effectors,
and (3) how do more dynamic types of experience (real-life interactions,
personal experience) modulate action understanding regions when observing
bodies different from our own?
Aim 1: To demonstrate how MNS and mentalizing regions are modulated by
both motor and perceptual familiarity. I hypothesize that:
A. Motor familiarity with an action, such as a known gesture, will more
strongly activate MNS regions during action observation, while unfamiliar
gestures will engage mentalizing regions.
B. Perceptual familiarity with an individual, such as an actor of one’s own
race when one has limited exposure to other races, will more strongly
activate both MNS and mentalizing regions than a less familiar race.
Aim 2: To examine how the introduction of visual experience modulates
activity in MNS and mentalizing regions during observation of novel human
effectors. I hypothesize that:
A. A visually novel human effector, such as the residual upper limb of a
woman with congenital amputations, will activate one’s own sensorimotor
regions to a lesser degree than a familiar effector (e.g., a hand).
B. Visual experience with the novel effector will allow for the learned mapping
of the new effector onto the individual’s own motor representations, thus
increasing sensorimotor activity after visual training.
Aim 3: To explore how different modalities of experience (real-life
interactions, personal experience) modulate MNS and mentalizing regions
when observing individuals different from oneself. I hypothesize that:
A. As one’s level of experience with someone different from oneself
increases, so does MNS activity, particularly in the IFG, where the goal of
the action is encoded.
B. As the experience becomes more dynamic (e.g., from visual to real-life to
personal experience), additional social factors such as attention,
motivation, and emotions may also become more important and play a
role in modulating activity in action understanding brain regions.
The studies here aim to elucidate and test a working model of the neural
mechanisms active during social understanding based on action
observation. In addition, these studies intend to understand how experience
drives changes in activity among these regions and how these regions may be
modulated as a function of different types of experience and training,
specifically increased visual, motor, or real-life experience. Such knowledge has
the potential to provide a neurological understanding of how real-life engagement
in occupations (which provide, among other things, visual, motor, and personal
experiences) can alter the neural circuitry that allows us to better understand and
interact with one another. These findings may then be utilized to direct research
on neural-based rehabilitation methods for individuals with disorders of either, or
both, social and motor functioning.
1.2 OVERVIEW
Understanding others is a crucial component of human interactions. However,
the process of understanding others is enormously complex and can be
modulated by our perceptions of the individual with whom we are interacting, our
own cultural backgrounds, and the context in which we are interacting, among
many other variables. One great challenge in understanding the neural bases of
human social cognition is that each one of these factors may alter the
interactions among the network of neural regions supporting various aspects of
social cognition in order to optimize processing. While this flexibility is
maximally advantageous for our daily social interactions, it greatly complicates
any attempts to quantitatively study social cognition. Thus, in the current work, I
define a subset of social cognitive processing to systematically test. Here I focus
specifically on the social cognitive processes involved in understanding another’s
intentions from observing his or her actions. Two neural networks in particular—
the putative mirror neuron system (MNS) and the mentalizing system—have
been studied extensively in this task, and each network on its own has been
implicated in a large number of social cognitive tasks including empathy,
imitation, self-representation, perspective taking, goal-understanding, and social
communication (for a review, see Liew & Aziz-Zadeh, 2011a; Liew & Aziz-Zadeh,
in press). Understanding the interactions of these two networks, which also have
connections with emotion-related regions (such as the amygdala and insula),
among other regions, may thus provide us with a maximally informative model
about how a multitude of regions interact during social cognitive processing. In
addition, by utilizing a task that has been shown to differentially recruit both
networks of social cognition, we might better dissociate the roles of not only
individual networks but also individual regions within these networks. Finally, in
order to clearly understand the complex interactions that occur during social
cognition, I propose to manipulate the task along one main factor known to affect
neural activity in these regions: experience.
Experience. Our life experiences shade our perceptions of the world around us.
How we take in new scenery, what stands out to us when reading a story, and
how we interact with others can all be affected by our previous experiences. In
addition, experiences can consist of a variety of modalities – visual experience
from having seen a desert sunrise, auditory experience from having heard the
sounds of a symphony, motor experience from having thrown a baseball, cultural
experience from acting within a specific social setting with cultural norms, or
personal experience that bridges all of these aspects together. Our repertoire of
experiences may thus affect our conscious behaviors and choices, as well as the
subconscious, automatic neural processes that underlie those conscious
behaviors. In this work, I aim to understand how different types of experience
affect the neural circuits associated with understanding others. More specifically,
I focus on 1) understanding abstract intentions from observed motor actions (i.e.,
communicative gestures), which integrates social understanding across both
motor and semantic levels, and 2) understanding motor intentions from ‘abstract’
motor representations—that is, bodies that are different from our own and are
therefore at some level ‘abstracted’ from our own concrete motor
representations. Furthermore, I examine modulations of these regions before and
after the introduction of visual experience during the experiment. Altogether, this
work may bridge the findings of previous studies, which have found potentially
conflicting results regarding experience-driven modulations of specific neural
regions, and answer new questions about the role of experience as it is neurally
represented during social understanding.
In the first chapter of this proposal, I provide an extended review of background
literature relevant to the proposed studies. First, I review the literature on the
mirror neuron system, including several original studies performed in macaques
as well as later findings in humans, and summarize a number of key models
relevant to action generation and observation. Then I review the mentalizing
system and models that integrate both the mentalizing and/or mirror systems to
support higher level mental state inference and social understanding. Following
this, I provide background information specific to each one of my three aims, and
at the end of each section, I propose a hypothesized conceptual model that I will
be testing in that study. Finally, I present a brief summary of the neuroimaging
methods I use. In the second through fourth chapters, I provide a detailed account
of each study, including a more specific introduction, materials and methods,
results, and discussion. The fifth chapter explores current studies and future
directions that have emerged from this dissertation work, and the sixth chapter
summarizes and integrates the results from these studies, providing an extended
and integrated discussion across these chapters. Finally, a complete list of
references, a brief appendix of regions of interest, and data and results from a
fourth study exploring the effects of cultural experience on social perception
(which, while relevant to the topic of experience, did not fit within the main body
of work presented here) conclude this dissertation.
1.3 TWO SYSTEMS FOR UNDERSTANDING
While many neural regions have been implicated in understanding others, there
are two neural networks in particular that have received a large
amount of attention for their roles in action understanding and social cognition,
thus garnering our focus in this proposal (see Figure 1-1). The human putative
mirror neuron system (MNS), located in the inferior frontal gyrus (IFG) and
inferior parietal lobule (IPL), is thought to be active both for the execution of a
motor action and for the observation of the same or similar actions
(Gallese et al., 2004; Rizzolatti & Craighero, 2004; Iacoboni et al., 2005), with the
IPL thought to provide possible affordances for the observed motor action and
the IFG associating motor patterns with plausible goals. The resonance between
observed and executed actions led researchers to propose that we may possibly
understand others’ actions by internally replicating them ourselves (Keysers &
Gazzola, 2006), provided these actions are within our own motor repertoire even
if they are performed differently (e.g. eating performed by a human versus a
monkey; Buccino et al., 2004a). In contrast, the mentalizing system, located in
the medial prefrontal cortex (mPFC), bilateral temporoparietal junctions (TPJ),
and posterior cingulate cortex (PCC), is thought to underlie one’s efforts to
consciously and effortfully reason about the mental states of another (Fletcher et
al., 1995; Saxe, 2006; Frith & Frith, 2006). A cursory review of the literature
suggests that the mPFC is associated with self- and other-reflections and mental
state attribution, the TPJ with effortful perspective taking and modulation of
attention, and the PCC with self-reflection and autobiographical memory
(Fletcher et al., 1995; Saxe & Kanwisher, 2003; Gallagher & Frith, 2003; Saxe,
2006; Saxe & Powell, 2006; Frith & Frith, 2006). However, I will review all of
these regions, and their proposed functional correlates, further on.
Figure 1-1. Basic schematic of the putative human mirror neuron system
(left) and mentalizing system (right). Approximate locations of the left inferior
parietal lobule (L IPL) and left inferior frontal gyrus (L IFG) are denoted by
orange circles, and approximate locations of the medial prefrontal cortex
(mPFC), posterior cingulate cortex (PCC), and left and right temporoparietal
junctions (L TPJ; R TPJ) denoted by blue circles.
While the process of understanding another person requires the effort of
numerous interconnected brain regions, these two networks in particular have
attracted attention as they appear to play complementary roles in action
understanding (Keysers & Gazzola, 2007; Thioux, Gazzola, & Keysers, 2008; de
Lange et al., 2008; Hesse, Sparing, & Fink, 2009; Van Overwalle & Baetens,
2009). In addition, specific regions within each network, as well as each network
as a whole, have been implicated in a number of diseases with social
components, such as autism (Iacoboni & Dapretto, 2006; Dapretto et al., 2006;
Perkins, Stokes, McGillivray, & Bittar, 2010; Fan, Decety, Yang, Liu, & Cheng,
2010), schizophrenia (Arbib & Mundhenk, 2005; Greicius, 2008; Lynall et al.,
2010), and stroke (Foundas et al., 1995; Mukherjee et al., 2000; Heath, Roy,
Black, & Westwood, 2001; Buxbaum et al., 2008; Damoiseaux & Greicius, 2009).
In addition, several studies hypothesize that engaging MNS or mentalizing
regions through other tasks may result in improvements in social functioning at
large (Iacoboni & Dapretto, 2006; Buccino, Solodkin, & Small, 2006; Iacoboni &
Mazziotta, 2007; Garrison, Winstein, & Aziz-Zadeh, 2010).
However, many of the studies have produced conflicting results, and it has been
difficult to dissociate the exact roles of each system in individuals with both
typical and atypical development, as the stimuli and task instructions vary widely
from experiment to experiment. Recent studies are beginning to use new
experimental and statistical methods that highlight the interplay between these
two systems by using longer time periods for analysis and more naturalistic tasks
than previously examined (Zaki, Weber, Bolger, & Ochsner, 2009; Spunt et al.,
2010; Zaki, Hennigan, Weber, & Ochsner, 2010; Schippers, Roebroeck, Renken,
Nanetti, & Keysers, 2010). While these studies suggest that the neural
mechanisms for MNS and mentalizing systems may have complex interactions
during natural human social interaction, much about these possible linkages
remains unknown, including the possible timecourse of information flow between
these regions and the differential stimulation of each region in response to
different modalities of stimuli and training.
1.4 THE MIRROR NEURON SYSTEM
1.4.1 Background
Mirror neurons were originally discovered in macaque monkeys from single-cell
recordings of premotor neurons in area F5 that fired when the monkey reached
for a piece of food as well as when the monkey observed the experimenter reach
for a piece of food (di Pellegrino, Fadiga, Fogassi, Gallese, & Rizzolatti, 1992;
Rizzolatti, Fadiga, Gallese, & Fogassi, 1996a; Gallese, Fadiga, Fogassi, &
Rizzolatti, 1996). Researchers hypothesized that the firing of these motor
neurons in response to visually observed actions allowed macaques to encode
the goals of the observed actions and thus predict and make sense of the
completed action (Rizzolatti et al., 1996a; Gallese et al., 1996; Rizzolatti &
Craighero, 2004). In addition, these neurons can be divided into strictly and
broadly congruent categories, with approximately 1/3 of mirror neurons being
strictly congruent (i.e., they respond to observations of a specific grasp that they
also execute) and 2/3 being broadly congruent (i.e., they respond to observations
of more diverse grasps, not just the one that they motorically encode; Gallese et
al., 1996; Rizzolatti & Craighero, 2004). In support of this, there is evidence that
premotor mirror neurons in F5 also fire for actions even when the end goal of the
action is hidden (e.g. watching a hand move to grasp a piece of food that is
hidden behind a screen; Umilta et al., 2001). The fact that the neurons fired in
anticipation of the goal, even when the goal itself was occluded, suggested that
the end goal had already been predicted well before the action was complete.
Another study demonstrated that mirror neurons may be multimodal, that is to
say, some premotor neurons fired in response to the sound of actions, as
opposed to the sight (Kohler et al., 2002). This novel discovery further
emphasized that this type of neuron may be more attuned to the comprehensive
properties of an action than to a specific modality or parameter.
Mirror neurons are not only limited to premotor region F5, but are also found in a
subset of parietal neurons in area 7b (or PF; a rostral portion of the inferior
parietal lobule; Fogassi, Gallese, Fadiga, & Rizzolatti, 1998; Gallese, Fogassi,
Fadiga, & Rizzolatti, 2002; Rizzolatti & Craighero, 2004; Fogassi et al., 2005).
Neurons in this region typically respond to sensory stimuli, although some also
fire during execution of motor actions. In addition, a subset of visual neurons
here fire specifically for action observations, and a subset of these contain mirror
properties (Gallese et al., 2002). Anatomically, visual input regarding human
biological movement is processed in the superior temporal sulcus (STS), which is
also active for observation of others’ actions, and then passed to the inferior
parietal lobule (area 7b), which then relays the information to premotor area
F5 (Perrett et al., 1989; Perrett, Mistlin, Harries, & Chitty, 1990; Jellema, Baker,
Wicker, & Perrett, 2000; Rizzolatti & Craighero, 2004).
1.4.2 The Putative Human Mirror Neuron System
A similar mirror system has been proposed in humans, and a wide body of
neuroimaging research has provided support in favor of a comparable, if not
more evolved, system. This putative mirror neuron system (MNS) in humans is
commonly thought to be located in the inferior frontal gyrus (IFG) and inferior
parietal lobule (IPL), which are the human homologues to the macaque mirror
neuron regions (Rizzolatti & Craighero, 2004), with a recent study using a
repetition-suppression paradigm providing evidence of their existence (Kilner &
Frith, 2008). Interestingly, new single-cell recordings in humans suggest
that neurons with mirror-like properties (i.e., neurons that fire for both the execution and
observation of similar actions) may exist not only in lateral frontoparietal regions,
but also in medial frontal and temporal regions (Mukamel, Ekstrom, Kaplan,
Iacoboni, & Fried, 2010). Such findings provide evidence that a single-cell mirror-
like mechanism exists in humans, but are otherwise difficult to interpret without
further information regarding the precise function and locations of such neurons.
The vast majority of research on mirror neurons in humans has utilized
functional neuroimaging techniques and defined MNS regions loosely as areas
that are active both when humans perform actions and when they observe
actions. These regions have generally been found in the IFG, premotor gyrus
(ventral and dorsal), and IPL (particularly the supramarginal gyrus and anterior
inferior parietal sulcus; Van Overwalle & Baetens, 2009). In addition, the
posterior superior temporal gyrus (pSTG) is thought to encode biological
movement which is then passed to mirror regions (Perrett et al., 1989; Perrett et
al., 1990; Jellema et al., 2000). Thus, while other regions of the brain may
contain mirror-like properties (e.g., for a review of shared circuits for
somatosensory input, see Keysers, Kaas, & Gazzola, 2010), for the purposes of
this proposal, I will use these commonly studied regions when I refer to the
human MNS, unless otherwise noted.
One interesting difference between the macaque MNS and the human MNS is
that, while macaque mirror neurons tend to fire only for object-oriented transitive
actions, such as reaching for a piece of food, in humans, the MNS is active both
for transitive, goal-directed actions and intransitive actions, such as a
communicative gesture. Transitive actions, such as reaching for a piece of food,
tend to be less abstract than intransitive actions, such as a hand shape
symbolizing a semantic or conceptual meaning. The exception to this trend is a
subset of macaque mouth mirror neurons, which in fact fire when the monkey
observes communicative mouth actions (e.g., lipsmacking; Ferrari, Gallese,
Rizzolatti, & Fogassi, 2003), providing evidence of a suggestive link between
goal-directed actions and communicative abilities (Rizzolatti & Craighero, 2004).
However, in humans, mirror regions demonstrate activity whether reaching for an
object or making a symbolic hand gesture (Gentilucci & Dalla Volta, 2008;
Villarreal et al., 2008; Liew, Han, et al., 2011; Schippers et al., 2009; Skipper et
al., 2009). The plausible evolution of this system from concrete actions to
abstract gestures in humans has led some to propose that the evolution of the MNS
and its interaction with other brain regions played a role in the formation and
development of language (Rizzolatti & Arbib, 1998; Gallese & Lakoff,
2005; Arbib, 2005; Fadiga, Craighero, & D'Ausilio, 2009; Arbib, 2010). In
addition, the encoding of even intransitive gestures suggests that the MNS may
be able to encode more abstract goals. Work on the goal-specific nature of the
MNS has found that there is a stronger MNS response in the posterior IFG and
ventral PMC for actions embedded in a context (e.g. picking up a cup to clean it
after having tea) than for the same motor action performed outside of a context (e.g.
simply picking up a cup), attributing a high level of specificity and intentionality to
the premotor portion of the MNS (Iacoboni et al., 2005). In addition, regions in the
IFG and IPL also demonstrate different responses to meaningful and
meaningless object-directed actions, again promoting the idea of regions of the
MNS being modulated by high-level action goals (Newman-Norlund et al., 2010).
1.4.3 Models of the Mirror System
It is necessary to note that while the inferior parietal lobule and inferior frontal
gyrus are the two main regions identified as the mirror neuron system, there are many
additional regions that contribute to action understanding in conjunction with
these regions (see Fagg and Arbib, 1998 for an example of F5 canonical neurons
in grasping). Many models have been conceptualized to attempt to understand
the complex relationships between the regions supporting our ability to grasp,
reach, imitate, and understand others’ grasps. These models involve many
neural regions, a few of which are briefly described in Appendix A along with
some of their hypothesized functions.
In order to examine modulations of these regions with various manipulations, as
proposed in the specific aims, it is important to first conceptualize how these
regions might work together to support action understanding. A series of models
provide a useful basis for understanding these relationships (Fagg & Arbib, 1998;
Oztop & Arbib, 2002; Bonaiuto, Rosta, & Arbib, 2007; Arbib, Bonaiuto, Jacobs, &
Frey, 2009; Bonaiuto & Arbib, 2010), which I very briefly review here. To begin
with, the adapted FARS model (Fagg & Arbib, 1998; Arbib et al., 2009) provides
a theoretical framework for how F5 canonical neurons work with many of the
above mentioned regions to transform visual information about objects into the
generation of hand grasping actions. In this model, input from the visual cortex
feeds into areas V6A/VIP, cIPS, and IT to provide different information about the
object (object spatial location, object features, and object identity, respectively).
This information then flows to AIP, which selects possible motor affordances with
which to interact with the object. This information then goes to the premotor
cortex, and specifically F5 canonical neurons, to select a grasp motor program
(although many other regions in the premotor/prefrontal cortices (such as areas
F6, F2, and 46) assist with this selection based on context, need, constraints,
etc.). Finally, motor neurons in the primary motor cortex execute the action. See
Figure 1-2 below.
Figure 1-2: The adapted FARS model for grasping (from Arbib et al., 2009,
Figure 6).
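To make the direction of information flow concrete, the following toy Python sketch walks through the FARS-style pipeline just described: parallel visual analyses, affordance extraction in AIP, grasp selection by F5 canonical neurons, and execution in M1. The function names, the dictionary "object," and the selection rules are my own illustrative assumptions, not the published computational model.

```python
# Illustrative sketch only: a toy, feedforward rendering of the FARS-style
# information flow described above (not the published computational model).

def analyze_visual_input(obj):
    """Parallel analyses attributed to V6A/VIP, cIPS, and IT:
    spatial location, physical features, and object identity."""
    return {
        "location": obj["location"],   # V6A/VIP: where the object is
        "features": obj["features"],   # cIPS: size, shape, orientation
        "identity": obj["identity"],   # IT: what the object is
    }

def aip_extract_affordances(visual_info):
    """AIP proposes candidate grasps afforded by the object's features."""
    if visual_info["features"]["width_cm"] < 3:
        return ["precision_pinch", "side_grasp"]
    return ["power_grasp", "side_grasp"]

def f5_select_grasp(affordances, context):
    """F5 canonical neurons select one grasp; prefrontal/premotor areas
    (e.g., F6, area 46) bias the choice by task context."""
    preferred = context.get("preferred_grasp")
    return preferred if preferred in affordances else affordances[0]

def m1_execute(grasp, location):
    """Primary motor cortex issues the final motor command."""
    return f"executing {grasp} at {location}"

# Example run on a hypothetical mug
mug = {"identity": "mug", "location": (0.3, 0.1), "features": {"width_cm": 8}}
info = analyze_visual_input(mug)
grasps = aip_extract_affordances(info)
print(m1_execute(f5_select_grasp(grasps, {"preferred_grasp": "power_grasp"}),
                 info["location"]))
```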
Next, the Mirror Neuron System (MNS) model (Oztop & Arbib, 2002) provides
a basis for how F5 mirror neurons may be incorporated into grasping to learn
action recognition of grasp patterns from F5 canonical neurons (Oztop & Arbib,
2002). This model builds upon the FARS model by demonstrating how F5 mirror
neurons might develop learned associations between objects and the motor
programs used for grasping. This may occur through connections between F5
mirror neurons and F5 canonical neurons, as well as between parietal mirror
neurons and AIP/STS, all of which become strengthened over time. These
learned associations then form the basis for action recognition by mirror
neurons (see Figure 1-3 below).
Figure 1-3: The MNS model (from Oztop & Arbib, 2002, Figure 5).
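As a rough illustration of the core idea in the MNS model, namely that recognition arises from learned associations between observed hand states and one's own grasp programs, here is a minimal sketch; the stored "templates" and the similarity measure are simplifying assumptions for illustration only.

```python
# Toy sketch of the core idea in the MNS model: action recognition emerges
# from learned associations between the observed hand state and the grasp
# programs one can execute. The templates and matching rule are assumptions.

# Learned associations: grasp program -> typical final grip aperture (cm)
learned_grasps = {"precision_pinch": 2.0, "power_grasp": 9.0}

def recognize_grasp(observed_aperture_cm):
    """Match the observed hand state against one's own grasp repertoire
    and return the closest motor program (the 'mirror' response)."""
    return min(learned_grasps,
               key=lambda g: abs(learned_grasps[g] - observed_aperture_cm))

print(recognize_grasp(2.5))   # -> precision_pinch
print(recognize_grasp(8.0))   # -> power_grasp
```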
Bonaiuto and Arbib (2007; 2010) subsequently updated this model with the
MNS2 model, which utilizes computationally less-demanding recurrent neural
networks to constantly update information about the grasp (see Figure 1-4
below). The MNS2 model also takes into consideration how audio-visual mirror
neurons (Kohler et al., 2002) and hidden grasps (Umilta et al., 2001) are
incorporated into the mirror neuron system. To address the first, auditory
information about an action is associated with action recognition via Hebbian
learning (Bonaiuto et al., 2007). To address the second, two working memory
components, hypothesized to be stored in the dlPFC (area 46), are added to the
model: object working memory and hand working memory. Thus, even if the final
portion of the grasp is occluded, the model is able to predict the grasp if the
remembered object position matches the appropriate hand state (affordances,
position) prior to its disappearance behind a screen (Bonaiuto et al., 2007). Most
recently, this model was updated to also include a new role for mirror neurons –
that of recognizing one’s own actions, being activated by one’s own actions, and
being able to evaluate the desirability and executability of one’s actions towards
a specific goal (Bonaiuto & Arbib, 2010). These updates introduce a new model,
known as Augmented Competitive Queuing, in which the mirror system is able
to represent one’s own multiple actions (the efference copy from the desired
action and the somatosensory feedback from the actual executed action) and
determine the desirability of a given action (as updated by the hypothalamus,
which evaluates the body’s need states) and executability of an action (as
updated by a comparison of desired and executed actions; Bonaiuto & Arbib,
2010). This model thus allows one to more quickly learn successful actions,
even if they were not the original, intended actions (Bonaiuto & Arbib, 2010).
Figure 1-4: The MNS2 model (from Bonaiuto & Arbib, 2007, Figure 3).
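The Augmented Competitive Queuing idea described above can be sketched in a few lines: actions compete on the product of learned desirability and executability, and executability is updated by comparing the intended action with what was actually achieved. The values and update rule below are illustrative assumptions, not the equations of Bonaiuto and Arbib (2010).

```python
# Toy sketch of Augmented Competitive Queuing (ACQ)-style action selection.
# The numbers, update rule, and learning rate are illustrative assumptions.

desirability = {"reach": 0.9, "rake_with_tool": 0.6, "ignore": 0.1}
executability = {"reach": 0.7, "rake_with_tool": 0.8, "ignore": 1.0}

def select_action():
    """Choose the action with the highest desirability * executability."""
    return max(desirability, key=lambda a: desirability[a] * executability[a])

def update_executability(action, succeeded, lr=0.3):
    """Mirror-system-style self-recognition: compare the intended action with
    what was actually achieved and nudge executability toward the outcome."""
    target = 1.0 if succeeded else 0.0
    executability[action] += lr * (target - executability[action])

# Example: direct reaching keeps failing, so the raking action takes over
for _ in range(5):
    action = select_action()
    succeeded = action != "reach"          # pretend only reaching fails
    update_executability(action, succeeded)
    print(action, round(executability[action], 2))
```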
In addition to these models representing the mirror neuron system, additional
models representing grasp learning are also relevant to the current project. The
Infant Learning to Grasp Model (ILGM) demonstrates a simple circuit through
which an infant explores its environment, unintentionally initiates contact with an
object, develops hand/wrist/finger states to interact with the object, grasps the
object, and evaluates whether or not the grasp was successful via
somatosensory feedback (Oztop, Bradley, & Arbib, 2004). This model is then
complemented by the Grasp Affordance Learning Model (GAEM), which adds
in how visual input can be utilized by the AIP to extract the affordances that are
then learned and reinforced for successful grasping (Oztop, Imamizu, Cheng, &
Kawato, 2006; Arbib et al., 2009).
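A toy sketch of the exploratory learning loop described for the ILGM is given below: hand shapes that happen to end in a stable grasp, as signaled by somatosensory feedback, are reinforced over repeated trials. The probabilities and update rule are illustrative assumptions rather than the published model.

```python
# Toy sketch of ILGM-style exploratory grasp learning: hand shapes that lead
# to successful grasps (somatosensory success signal) are reinforced.
import random

random.seed(0)
grasp_preference = {"pinch": 0.5, "palm_wrap": 0.5}   # exploration weights
success_rate = {"pinch": 0.2, "palm_wrap": 0.7}       # hidden world dynamics

for trial in range(200):
    # Explore: pick a hand shape in proportion to current preference
    total = sum(grasp_preference.values())
    grasp = random.choices(list(grasp_preference),
                           [w / total for w in grasp_preference.values()])[0]
    # Somatosensory feedback: did the object stay in the hand?
    succeeded = random.random() < success_rate[grasp]
    # Reinforce hand shapes that led to stable grasps
    grasp_preference[grasp] += 0.05 if succeeded else -0.02
    grasp_preference[grasp] = max(grasp_preference[grasp], 0.05)

print({g: round(w, 2) for g, w in grasp_preference.items()})
```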
Finally, two additional models will be useful for our conceptualization of gestural
understanding. First, Rothi et al. (1991) proposed a model of praxis processing
and demonstrated how different types of input (auditory/verbal versus
visual/gestural) may result in different types of apraxia. In this model, there is a
direct path from visual input to object recognition to semantics. There are also
pathways from visual input to motor response (imitation) and from auditory input
to motor response (act on verbal command; Gonzalez Rothi, Ochipa, & Heilman,
1991). This model demonstrates a possible mechanism by which an individual
may have ideomotor, but not ideational, apraxia or vice versa, as it delineates
parallel streams for action processing based on modality, as shown below in
Figure 1-5.
Figure 1-5: A model of praxis processing (from Rothi et al., 1991, Figure 5).
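To illustrate the parallel-route logic of this praxis model, the sketch below routes visual/gestural and auditory/verbal input to a motor response separately, so that a simulated "lesion" of one route spares performance through the other. The route names and lesion flags are simplifications for illustration, not the full model.

```python
# Illustrative sketch of the parallel-route idea in Rothi et al.'s (1991)
# praxis model: separate visual/gestural and auditory/verbal routes to a
# motor response, so damage to one route can spare the other.

def process_command(modality, content, lesioned_routes=()):
    if modality == "visual_gesture":
        # visual input -> gesture analysis -> (semantics) -> motor output
        route = "visual_to_motor"
    elif modality == "verbal_command":
        # auditory input -> lexical/semantic analysis -> motor output
        route = "auditory_to_motor"
    else:
        raise ValueError("unknown modality")
    if route in lesioned_routes:
        return f"apraxic error: cannot perform '{content}' from {modality}"
    return f"performs '{content}' in response to {modality}"

# Intact imitation but impaired response to verbal command: one dissociation
# the model is meant to capture
print(process_command("visual_gesture", "wave goodbye",
                      lesioned_routes=("auditory_to_motor",)))
print(process_command("verbal_command", "wave goodbye",
                      lesioned_routes=("auditory_to_motor",)))
```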
A final conceptual model demonstrates the plausible evolution from a mirror system
for actions to a mirror system for words, which links the visual/auditory input of a
word to schemas for the word’s abstract meaning and perceptual-motor
representations, which develop over the course of evolution (Arbib, 2006; Arbib,
2010). In the current proposal, I draw upon these prior models in order to suggest
how familiarity modulates action understanding networks when observing
intransitive gestures, how novel effectors are represented neurally both before
and after visual training, and how training with different modalities (visual, motor,
semantic) might strengthen the connections between different regions in the
MNS and mentalizing systems. I will incorporate these models into the
hypothesized conceptual models that I propose when discussing the background
related to each specific aim.
1.5 THE MENTALIZING SYSTEM
1.5.1 Background
The MNS is clearly not the only neural network involved in understanding others,
and much research has attempted to discern the role of the MNS as it works with
several other networks associated with social cognition. Other multimodal
regions have been heavily implicated in social cognition such as the medial
prefrontal cortex (mPFC), posterior cingulate cortex (PCC), and temporoparietal
junctions (TPJ; commonly known collectively as the mentalizing system;
Fletcher et al., 1995; Gallagher & Frith, 2003; Frith & Frith, 2006). These regions
are thought to be involved in effortful reasoning about another’s mind, with a role
for the mPFC in general mentalizing abilities, the PCC in autobiographical and
episodic memory retrieval, and the bilateral TPJ in directing attention and taking
others’ perspectives (Frith & Frith, 2006). In addition, all of these regions have
reciprocal connections with one another (Pandya, Van Hoesen, & Mesulam,
1981; Seltzer & Pandya, 1989b; Seltzer & Pandya, 1994).
The mentalizing system has been shown to be active when inferring others’
intentions, regardless of the type of stimuli (story, cartoon, visually observed
actions), suggesting that it participates in higher level reasoning regardless of the
input modality (Gallagher et al., 2000; de Lange et al., 2008; Spunt et al., 2010).
It also has been shown to respond to emotional face observations (Schulte-
Rüther, Markowitsch, Fink, & Piefke, 2007), and in fact has been modulated by
the race of the individual whose intentions one is trying to extract (Adams et al.,
2009, J Cogn Neurosci), suggesting that it is heavily context-dependent. In
addition, a recent study demonstrated that different levels of intentions can
differentially recruit mentalizing regions, with private intentions (i.e., actions that
only require one social agent, such as watering a plant) recruiting the PCC and
right TPJ, social prospective intentions (i.e., actions requiring two or more social
agents for an event occurring in the future, such as planning a dinner party for
another person) recruiting the PCC, right TPJ, and mPFC, and social
communicative intentions (i.e., actions requiring two or more social agents
communicating in the present, such as a communicative gesture) recruiting all
four regions (mPFC, bilateral TPJ, and PCC; Ciaramidaro et al., 2007). Such
findings suggest that inferring intentions from present communicative actions
recruits the greatest amount of mentalizing activity, with the PCC and right TPJ
providing general mentalizing abilities regardless of private or social intentions,
the mPFC providing socially-driven mentalizing abilities, and the L TPJ providing
understanding of communicative intentions in particular (Ciaramidaro et al.,
2007). These findings match prior results suggesting that the R TPJ plays a
major role in taking others’ perspectives, more so than other regions in the
mentalizing network (Saxe & Wexler, 2005).
Interestingly, two components of the mentalizing system (the PCC and mPFC)
are also implicated in what is known as the default mode network, a network of
cortical midline structures that are active during rest, such as when no task is
being presented (Raichle et al., 2001; Greicius, Krasnow, Reiss, & Menon, 2003;
Raichle & Snyder, 2007; Greicius et al., 2007). Resting-state activity in the
default mode network is thought to represent general internal processing, in the
absence of external stimuli, and implicates the PCC and mPFC
in self-representation and self-reflection. In addition, activity in these regions
has been correlated with a number of disease states, including Alzheimer’s disease and
depression (Greicius, Srivastava, Reiss, & Menon, 2004; Greicius et al., 2007;
Greicius, 2008; Stevens, Hasher, Chiew, & Grady, 2008). The PCC has also
been strongly implicated in the representation of autobiographical memory
(Cabeza & St Jacques, 2007), which may provide one’s own past experiences
with an individual or with a social situation during any and all mentalizing
processes, explaining its activation for both general and specific mental
inferencing as seen in Ciaramidaro et al. (2007).
The bilateral TPJ have been implicated in both mentalizing and attentional
modulation processes (Saxe & Kanwisher, 2003; Saxe & Wexler, 2005; Saxe &
Powell, 2006; Decety & Lamm, 2007; Liepelt, Von Cramon, & Brass, 2008;
Schippers et al., 2009). Some claim that the right TPJ is uniquely involved in
social cognition and perspective taking (Saxe & Kanwisher, 2003; Saxe &
Wexler, 2005), while others posit that it is involved generally in attentional
modulation, which is required during many social tasks such as perspective
taking (Decety & Lamm, 2007). A recent meta-analysis suggests that the TPJ
may represent more transient intention/goal states while the mPFC represents
more enduring traits, such as personal characteristics (Van Overwalle, 2009). In
addition, the TPJ is thought to be a link between mentalizing and mirror systems,
due to its close proximity and connections with the parietal mirror regions, and it
may be active in representing and selecting possible goal states of the other
(Van Overwalle & Baetens, 2009). Another meta-analysis concluded that the TPJ
orients the individual “to externally generated behaviors with the aim of identifying
the possible end-state of these behaviors” (Van Overwalle & Baetens, 2009).
While each region of the mentalizing system may possess specific roles in social
cognition, they also work together and are often co-activated during social
processes including false-belief tasks, mental state attribution, and social conflict
resolution, among others (Saxe, 2006; Saxe & Powell, 2006; Adams et al., 2009,
J Cogn Neurosci; Frith & Frith, 2006; Fletcher et al., 1995; Zaki et al., 2010;
Gallagher et al., 2000; Spunt et al., 2010). Although much more can be explored
about the mentalizing system on its own, I have reviewed only the main functions of
these regions as they pertain to the current studies.
1.5.2 Mentalizing Interactions with Mirror Regions
Importantly, many recent studies suggest that brain regions in mentalizing and
mirroring work together in complementary roles to promote understanding of
others’ intentions from their actions (for a review, see Van Overwalle &
Baetens, 2009). One study showed that observing an action activated the MNS,
particularly the IFG, regardless of whether participants were instructed to
attend to the intention of the action or to the way in which the action was completed
(de Lange et al., 2008). In contrast, attending to the intention of the action, or
“why” the action was completed, activated the mentalizing system in the mPFC,
PCC, and right pSTS. This finding was replicated in another study in which
participants watched fingers manipulate a cube (Hesse et al., 2009). Again,
increased MNS activity in the bilateral ventral PMC and IPL was found when
participants attended to the means of the action compared to the end goal (e.g.,
where the cube was eventually placed), and increased mentalizing activity in the
bilateral TPJ was found when participants attended to the end goal of the action
compared to the means. Building on that, another study demonstrated that as the
level of understanding became more abstract (e.g., from “What is he doing?” to “Why is
he doing it?”), mirror region activation stayed roughly the same, while activity in
mentalizing regions of the mPFC (dorsal and ventral segments), PCC, and
temporal poles increased (Spunt et al., 2010).
These studies suggest that both MNS and mentalizing regions support action
understanding, yet the MNS is more active when attending to visuomotor
properties of an observed action, while the mentalizing system is more active
when trying to infer higher-level goals of an observed action (de Lange et al.,
2008; Van Overwalle & Baetens, 2009; Spunt et al., 2010).
A recent meta-analysis categorized the activity of the MNS and brain regions
associated with mentalizing during action understanding tasks in brain imaging
research and concluded that the MNS contributes to understanding motor goals
and predicting future motor actions while mentalizing regions contribute to the
understanding of abstract goals based on the observed actions. For example, the
MNS may encode the understanding that someone is extending an outstretched
arm with fingers abducted to pick up a glass of milk, whereas the mentalizing
system may help to understand that they are reaching for the glass of milk
because they are thirsty after eating too many cookies (Van Overwalle &
Baetens, 2009). However, further research is needed to continue to tease apart
the contributions of the various brain regions and systems supporting social
cognition, particularly under different tasks, with different stimuli, and with
different levels of experience.
1.5.3 Models of Mentalizing
The mentalizing system generally consists of the medial prefrontal cortex, the
posterior cingulate and precuneus, and the bilateral temporoparietal junctions
(Fletcher et al., 1995; Gallagher & Frith, 2003; Frith & Frith, 2003; den Ouden,
Frith, Frith, & Blakemore, 2005; Saxe, 2006; Frith & Frith, 2006; Ciaramidaro et
al., 2007). There are bidirectional anatomical connections between these four
main regions, as demonstrated in anatomical studies of the rhesus monkey
(Seltzer & Pandya, 1978; Pandya et al., 1981; Barbas & Pandya, 1989; Seltzer &
Pandya, 1989b; Morecraft, Cipolloni, Stilwell-Morecraft, Gedney, & Pandya,
2004). The temporal poles are also often associated with these processes (Frith
& Frith, 2003; Frith & Frith, 2006) but are not consistently recruited during
mentalizing tasks (Van Overwalle & Baetens, 2009). A list of mentalizing
regions and their hypothesized functions can also be found in Appendix A.
In addition, although there are no commonly used models of how regions of the
mentalizing system work together (the complex function of mentalizing evokes
activity that appears to be highly task-dependent), many researchers have
proposed models of how mental state inferences and attributions are made,
drawing on both action understanding (e.g., MNS) and higher-level cognitive
networks (e.g., mentalizing).
Oztop et al. (2005) proposed the only computational model of mental state
inference that is based on activity in sensorimotor regions, including the MNS.
This model, which is proposed as a precursor to more abstract mentalizing
abilities, provides a means by which the parietal, premotor, and prefrontal
cortices interact to support both action execution and action understanding. The
Mental State Inference (MSI) model engages a feedback mechanism, using a
forward model in the premotor cortex that allows the observer to predict (via
mental simulation) the actor’s movements and then compare the predicted motor
pattern from the premotor cortex with the actual observed motor pattern from the
pSTS and parietal regions. This model is flexible in that during action execution,
the forward model generated in the PMC is useful in compensating for temporal
delays in sensory feedback. However, during action observation, the same
forward model generates a mental simulation that is compared with the parietal
output (observed state), resulting in an error that can be systematically minimized
(Oztop, Wolpert, & Kawato, 2005). Importantly, this model allows for
interpretation without prior experience with an action, as long as one has a
matching effector with which to represent the actor’s actions. In the event of a
novel effector, such as an amputated limb as seen in Study 2, the individual may
need additional resources in order to mentally simulate the action as mental
simulation by itself may not be adequate. In the event of a novel action, however,
this model provides a means through which an individual might be able to learn
an unfamiliar action, via increased mapping between the updated parietal output
(actual action) and one’s own existing patterns (desired/predicted action), thus
possibly integrating the novel action into one’s own existing patterns (see Figure
1-6).
Figure 1-6: The Mental State Inference model (from Oztop et al., 2005,
Figure 2).
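To make the control loop of the MSI model concrete, the following is a minimal
sketch in Python of the forward-model comparison it describes: a simulated
(predicted) trajectory is compared against the observed trajectory, and the
mismatch is iteratively reduced. The linear forward model, the single intention
parameter, and the learning rate are illustrative assumptions for this sketch and
are not part of Oztop et al.'s actual implementation.

import numpy as np

def forward_model(intention, t):
    # Hypothetical premotor forward model: predicts the effector's position
    # over time given an internal movement parameter (the inferred intention).
    return intention * t

def infer_mental_state(observed, t, lr=0.5, n_iter=50):
    # Iteratively adjust the inferred intention so that the mentally simulated
    # trajectory (premotor prediction) matches the observed trajectory
    # (parietal/pSTS input), mirroring the error-minimization step of the model.
    estimate = 0.0
    for _ in range(n_iter):
        predicted = forward_model(estimate, t)   # mental simulation
        error = observed - predicted             # mismatch with observation
        estimate += lr * np.mean(error * t)      # step that reduces the squared error
    return estimate

# Toy example: the actor moves with a "true" parameter of 0.8, unknown to the observer.
t = np.linspace(0.0, 1.0, 20)
observed_trajectory = 0.8 * t
print(infer_mental_state(observed_trajectory, t))  # converges toward ~0.8

In the same spirit, during action execution the same forward model would be
compared against delayed sensory feedback rather than against another person's
observed movements.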
While there are no other computational models of mental attribution processes to
my knowledge, many others have proposed conceptual models for social
cognition from observed actions. One model proposes two types of processing:
1) pre-reflective processing, which engages implicit, automatic neural
representations in the insula, premotor, parietal, and secondary somatosensory
cortices, to represent sensorimotor and emotion-related body states, and 2)
reflective processing, which engages effortful higher-level cognition of either the
self (in the ventromedial prefrontal cortex) or others (in the dorsomedial prefrontal
cortex) (Keysers & Gazzola, 2007). Pre-reflective and reflective representations
can be used to understand not only oneself but also others, using the visible
state of others to evoke pre-reflective representations via mirror regions, and
then using these simulated pre-reflective representations to evoke simulated
reflective representations (see Figure 1-7).
Figure 1-7: An integrated conceptual model of mentalizing and mirror
systems (from Keysers & Gazzola, 2007, Figure 1).
In another model, both MNS and mentalizing networks are characterized as
contributing to theory of mind processes, with MNS regions providing implicit
information about another individual and mentalizing regions providing more
explicit information about mental states (Teufel, Fletcher, & Davis, 2010).
Importantly, this model represents bidirectional connections between incoming
sensory information and higher-level processing, suggesting a dynamic model of
social cognition rather than a hierarchical, bottom-up one (see Figure 1-8).
Figure 1-8: A model of bidirectional processing between visual and social
information (from Teufel et al., 2010, Figure 2).
This is similar to Grafton’s model of action observation networks (AON; e.g.,
mirror regions, somatosensory regions) and social-networks (S-N; e.g.,
mentalizing regions, emotion-related regions), which also posits bidirectional
influences between the two (AON, S-N) networks to support social cognition (see
Figure 1-9).
Figure 1-9: An integrated conceptual model from action observation
networks and social-networks (from Grafton, 2009, Figure 2).
1.6 BACKGROUND FOR SPECIFIC AIMS
Functional MRI allows researchers to investigate changes in the blood-oxygen-
level-dependent (BOLD) signal in neural regions throughout the brain while
participants perform tasks, as described in more detail in section 1.10. Many
regions associated with action understanding and social understanding appear to
be modulated by one’s experience with the actions and/or stimuli being
observed, as well as the type of experience (visual, motor, real-life interactions)
obtained. Currently, very little is known about how these networks may interact
during social tasks that require the integration of both conceptual and motor
information to understand another’s mental states, as found in gestural
inferences. In addition, here I suggest how different forms of experience and
familiarity may alter neural activity in regions associated with these two
processes (mirroring and mentalizing), which are often studied separately.
Moreover, there is little research on how these networks might interact during
training, as one’s level of experience with the stimuli and actions changes, or on
how these areas are functionally and anatomically connected. Thus, in this
dissertation, I aim to answer three questions: (1) how are MNS and mentalizing
regions modulated by motor familiarity (such as experience with a gesture)
and perceptual familiarity (such as with the race of an actor), (2) how are
MNS and mentalizing regions active when observing visually novel human
effectors and how does the introduction of visual experience change these
activation patterns, and (3) how do different, more dynamic types of
experience (such as real-life interactions or personal experience) alter the
activity of regions involved in action understanding of someone different
from oneself?
1.7 AIM 1: PERCEPTUAL AND MOTOR EXPERIENCES AND
ACTION UNDERSTANDING
1.7.1 Background
The first question posed is, “How are MNS and mentalizing regions modulated by
both motor familiarity (e.g., with a familiar gesture) and perceptual familiarity
(e.g., with a race of an actor)?” To answer this question, I use intransitive,
communicative gestures, which uniquely combine motor representations
(activating the MNS; Villarreal et al., 2008; Skipper et al., 2009; Straube et al.,
2009) with higher-level conceptual and semantic knowledge (activating the
mentalizing system; Gallagher & Frith, 2004). In addition, here I modulate the
familiarity with an action, demonstrating what occurs when one has both a motor
and conceptual representation of an action, compared to when one does not.
This study is thus pivotal for establishing a basis of how familiarity with an action
modulates MNS and mentalizing regions, and I build on these findings in the next
two studies.
A wealth of literature has demonstrated that, during passive observation, actions
that one has either seen before (visually familiar) or done before (motorically
familiar) generate greater MNS activity than unfamiliar actions (Calvo-Merino et
al., 2005; Calvo-Merino, Grezes, Glaser, Passingham, & Haggard, 2006; Cross,
Hamilton, & Grafton, 2006; Cross, Kraemer, Hamilton, Kelley, & Grafton, 2009),
with both the IPL and ventral PMC being specifically modulated by one’s
experience with the actions (Cross et al., 2006). In addition to exact actions that
one has actually performed or seen before, actions in a style with which one is
familiar (e.g., a ballerina watching a sequence of ballet moves, as
opposed to capoeira moves) also generate greater MNS activity, in the bilateral
PMC, intraparietal sulcus, right superior parietal lobule, and left pSTS, during
passive observation (Calvo-Merino et al., 2005). Additionally, such experience
can be gained in a matter of days or even hours, as participants who learned
simple dance patterns had greater MNS activity for their newly learned patterns
than for unlearned patterns (Cross et al., 2009). Thus, it may be hypothesized
that even a ballerina watching new ballet moves, as long as they are in the same
style (ballet) and use the same fundamental components, will have greater MNS
activity during passive observations.
In fact, many aspects of an experience may generate MNS activity that is
possibly associated with that experience—for instance, studies of embodied
semantics have found that hearing the sounds of actions (e.g., a peanut shell
breaking) triggered activity in mirror neurons in macaques (Kohler et al., 2002) as
well as in humans (Gazzola, Aziz-Zadeh, & Keysers, 2006), and in humans,
action words (e.g., “kick the ball”) activated motor regions associated with the act
of kicking (Aziz-Zadeh, Wilson, Rizzolatti, & Iacoboni, 2006b). Our experiences,
and even different components and modalities of those experiences, may affect
our neural responses, a finding which suggests interesting hypotheses about the link
between action and language, as discussed in the next section.
Yet, it is unclear from the existing literature whether these increases in MNS
activity occur regardless of the task. For instance, passive observation of an
unfamiliar action may not generate MNS activity, but observation of an unfamiliar
action, with an intent to imitate it, may (Vogt et al., 2007). Studies of imitation and
learning demonstrate activation of the ventral premotor and IPL regions during
observation of unfamiliar guitar chords when participants are preparing to imitate
them (Buccino et al., 2004b; Vogt et al., 2007). Relatedly, research demonstrates
that musicians with experience imitating hand movements, and likely with more
detailed attention to hand and finger fine-motor movements, perform better on
gestural imitation tasks than non-musicians (Spilka, Steele, & Penhune, 2010).
Clearly the modulatory role of experience on MNS activation depends largely on
the goal of the observation as well as the individual’s prior experiences.
In contrast, lack of experience, or difficult-to-understand actions, may recruit
regions that are more strongly associated with effortful reasoning and intention
understanding, such as those found in the medial prefrontal cortex (mPFC),
posterior cingulate cortex (PCC), and bilateral temporoparietal junctions (TPJ;
Brass, Schmitt, Spengler, & Gergely, 2007; Liepelt et al., 2008). In some
situations, the MNS may be insufficient to make sense of the observed actions.
For instance, contextually-appropriate actions, such as using one’s knee to flip
up a light switch when one’s hands are full of books, may generate less activity in
reasoning-related regions (e.g., the left pSTS and mPFC) than the same action in
a contextually-inappropriate situation, such as using one’s knee to flip a light
switch when one’s hands are free (Brass et al., 2007; Kilner & Frith, 2008). In
addition, implausible, non-stereotypic finger movements were also shown to
activate mentalizing regions in the TPJ and angular gyrus, as well as the pSTS
(likely contributing to increased visual activity for the unusual movement), as
opposed to MNS regions (Liepelt et al., 2008). However, these results are in
direct contrast to prior literature suggesting that the IFG still represents observed
actions, even when they are biologically implausible, while the IPL selectively
represents plausible, but not implausible, actions (Costantini et al., 2005). These
findings would be in line with the understanding that the IPL extracts affordances
and thus cannot extract biologically implausible affordances, while the IFG
functions in recognizing actions whether or not they can be carried out.
However, this delineation between the MNS and mentalizing systems is not only
based on experience or context. Task instructions of attending to the movement
versus attending to the goal of the movement have also been shown to modulate
this neural activity, with attention to the movement generating greater MNS
activity and attention to the goal of the movement generating greater mentalizing
activity (de Lange et al., 2008; Hesse et al., 2009) as discussed briefly
previously. Such studies grossly dissociate MNS and mentalizing activity based
on task goals, attributing more basic motor recognition to the MNS (e.g., how a
movement is performed) and higher-level reasoning to the mentalizing system
(e.g., why an action is performed). Supporting this, mentalizing activity has been
shown to be parametrically increased as the mentalizing task becomes more
abstract (e.g., from “what is he doing?” to “why is he doing it?”; Spunt et al.,
2010), again demonstrating that the goal of the observation strongly modulates
the neural regions involved in processing an action.
Furthermore, in addition to experience and task goals, social group affiliations
such as one’s racial group, cultural group, or social identity, may also modulate
MNS activity. One facet of an early study by Buccino et al. (2004) considered
action observation of conspecifics compared to non-conspecifics (Buccino et al.,
2004a). Human participants observed humans, monkeys, or dogs perform mouth
actions and showed the greatest MNS activity in response to mouth actions
performed by humans, followed by monkeys, followed by dogs. That is, MNS
activity during the observation of mouth actions decreased as humans observed
species that were less and less similar (Buccino et al., 2004a). In terms of motor
simulation, this suggests that the MNS may match visual and kinesthetic features
of actions such that the more physically similar one is to the actor, and the more
an action is within the scope of one’s own motor repertoire, the more MNS
activity that occurs. Cultural experiences may also affect one’s focus and interest
during observation (e.g., perceiving actions as socially appropriate/inappropriate
or rewarding based on one’s culture; Freeman, Rule, Adams, & Ambady, 2009).
While the current work does not directly examine cultural experiences on action
observation networks, a preliminary step towards understanding the effects of
cultural experience on social perception can be found in a completed behavioral
study that is related to this work (see Appendix B; Liew, Ma, Han, & Aziz-Zadeh,
2011).
Beyond the social aspects of culture, in humans, race is a highly automatic and
implicitly encoded social group affiliation (Phelps & Thomas, 2003; Chiao et al.,
2008). Experimental modulation of race between the actor and the observer has
been shown to affect an array of neural responses depending on the task,
including empathy for another’s pain (Xu, Zuo, Wang, & Han, 2009), fear
responses to others (Chiao et al., 2008), and social liking (Phelps & Thomas,
2003). These data suggest that racial in-group/out-group associations can
powerfully modulate neural responses to others in a variety of contexts. While
little has been studied regarding race and the MNS, several studies suggest that
there is a complex race-based modulation of mirror regions (Desy & Theoret,
2007; Molnar-Szakacs et al., 2007). Using transcranial magnetic stimulation
(TMS), Molnar-Szakacs et al. (2007) found increased corticospinal excitability
when Euro-American participants observed actors of their own race compared to
another race (e.g., Nicaraguan) perform the same actions, suggesting a racial in-
group bias during action observation. However, Desy et al. (2007) found an
opposite pattern, with greater motor responses for the other race versus one’s
own race during observation of finger movements. These findings demonstrate
the complexity of studying a construct such as race, as there are many possible
explanations for these incongruent findings – for instance, it may be that there
are different modulations between racial groups, as Molnar-Szakacs et al. (2007)
employed Caucasian and Nicaraguan individuals, while Desy et al. (2007)
employed White and Black Americans. Stimuli also differed, showing faces of
individuals as they performed gestures (Molnar-Szakacs et al., 2007) or black or
white hands only performing simple movements (Desy & Theoret, 2007). Finally,
the task differed, from inferring gestural intentions to passively observing hand
movements. Thus, clearly there are many controls necessary when studying a
construct such as race.
In addition to possible race-related modulations of the MNS, the
MNS also responds differently to observations of the self versus others.
Researchers found that repetitive TMS (rTMS) applied to the right IPL
significantly decreased the participants’ abilities to distinguish their own face from
the faces of others (Uddin, Molnar-Szakacs, Zaidel, & Iacoboni, 2006). Notably,
only the right hemisphere demonstrated this effect in decreased self-other
distinction, leading researchers to surmise that a possible role for the MNS in
self-processing may occur largely in the right hemisphere. Further support for this
hypothesis was found in an fMRI study which found that the right hemisphere
MNS was more active for the self even across modalities (Kaplan, Aziz-Zadeh,
Uddin, & Iacoboni, 2008). Both observations of one’s own face versus a friend’s
face and listening to one’s own voice versus a friend’s voice generated increased
activity in the right IFG, suggesting that the MNS may play a role in distinguishing
the self from others across both visual and auditory systems.
Based on this research, I use functional magnetic resonance imaging (fMRI) to
manipulate two critical factors related to familiarity (perceptual familiarity with
the race of the actor, motor familiarity from experience with the action)
during the task of inferring an actor’s intentions. The aim is to understand the
specific contributions of individual regions within the MNS and mentalizing
systems when perceptual and motor familiarity with the stimuli are
manipulated. In order to explore the social influences on action understanding in
these two neural systems, the current study utilizes symbolic, or intransitive,
gestures (e.g. thumbs up), which are learned and familiarized through one’s
cultural experiences (Archer, 1997). As these gestures require an integration of
visuomotor representations with abstract intentions, they have been shown to
activate both MNS and mentalizing networks (Gallagher & Frith, 2004; Villarreal
et al., 2008; Liew, Han, et al., 2011; Schippers et al., 2009; Straube, Green,
Weis, Chatterjee, & Kircher, 2009; Skipper et al., 2009). The results of this study
allow us to better understand the interactions between mentalizing and mirror
regions during observation of socially communicative actions, when given a task
of inferring another person’s mental state, and how these interactions might be
modulated at a very general level by one’s motor familiarity with a gesture or
perceptual familiarity with a certain race.
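As a concrete illustration of the factorial logic of this design (and not the actual
analysis files used in Study 1), the short Python sketch below enumerates a
hypothetical 2 x 2 crossing of perceptual familiarity (actor race: own vs. other)
with motor familiarity (gesture: familiar vs. unfamiliar) and builds an example
contrast vector for the main effect of motor familiarity; the condition labels and
weights are illustrative assumptions.

from itertools import product

# Hypothetical 2 x 2 factorial conditions: perceptual familiarity (actor race)
# crossed with motor familiarity (gesture familiarity).
races = ["own_race", "other_race"]
gestures = ["familiar_gesture", "unfamiliar_gesture"]
conditions = [f"{race}_{gesture}" for race, gesture in product(races, gestures)]

# Example contrast: main effect of motor familiarity (familiar > unfamiliar),
# collapsing across actor race.
contrast = [1 if "unfamiliar" not in condition else -1 for condition in conditions]
print(dict(zip(conditions, contrast)))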
1.7.2 Gestures: Action and language
Before continuing on, special attention should be given to the study of gesture,
praxis, and language. There are many types of gestures, including co-speech
gestures (i.e., gestures that naturally accompany speech), pantomimes (i.e.,
gestures that mimic actual object or tool use or other actions), intransitive,
communicative gestures (i.e., gestures that connote abstract meanings, which
are also known as emblems), and sign language (i.e., gestures that hold
linguistic meaning; McNeill, 1992; McNeill, 2005). In the present studies, I use
the third class of gestures—that is, intransitive, communicative gestures or
emblems—to best engage both MNS and mentalizing systems in typically-
developed populations. Communicative gestures have strong ties to language
and many studies have examined the neural activity evoked by communicative
gestures, showing activation of mirror, mentalizing, and language-related regions
based on different task and stimulus conditions (Gallagher & Frith, 2004;
Villarreal et al., 2008; Straube et al., 2009; Skipper et al., 2009; Flaisch, Schupp,
Renner, & Junghofer, 2009; Schippers et al., 2010; Liew & Aziz-Zadeh, 2011b).
For a review of literature linking action and speech over several types of
gestures, see Willems and Hagoort (2007). Particularly, communicative gestures
have been shown to share an overlapping network with their spoken description
(e.g. a thumbs up and the phrase “it’s good”) in the left IFG and bilateral pMTG
extending into the pSTS, with greater left-sided activation (Xu, Gannon,
Emmorey, Smith, & Braun, 2009). Uniquely, however, gestures versus speech
activated the fusiform gyrus and inferior temporal cortex bilaterally, while speech
versus gestures additionally activated the anterior STS, MTG and bilateral STG,
closer to the auditory cortex. In addition, functional connectivity between these
regions demonstrated a similar pattern, with gestures activating a network
between the left IFG and left ventral temporal regions, and speech activating a
network between the left IFG and pMTG and STS (Xu et al., 2009). Xu et al.
(2009) suggested that such results support the idea of a modality-independent
communication system that is not specifically tied to language processing but is
more general for communication of many types, although this may additionally be
affected by one’s contextual experiences and factors associated with the use of
gestures versus speech. This is in line with suggestions by several researchers
that language-related regions, such as Broca’s area in the IFG, may be
functionally involved in a variety of networks that allow a single region to partake
in many flexible processes, as opposed to being only speech-specific or even
language-specific (Corina & Knapp, 2006; Willems & Hagoort, 2007).
While I choose to use general communicative gestures in the current study, there
is a wealth of literature that supports these hypotheses regarding the relationship
between action and gesture. One well-studied class of gestures are the
conventionalized signs used in manual systems designed specifically for
communication, as found in sign language used by native signers (American
Sign Language, British Sign Language, etc.). Observing sign language in
congenitally deaf native signers generally produced activations along the
perisylvian cortex (including Broca’s and Wernicke’s areas) and the pSTS,
regions that are also associated with processing spoken language in hearing
individuals (Corina, Vaid, & Bellugi, 1992b; Corina et al., 1992a; MacSweeney et
al., 2002b; MacSweeney et al., 2002a; MacSweeney et al., 2004; Corina &
Knapp, 2006; Willems & Hagoort, 2007), with activity generally lateralized to the
left hemisphere (Corina et al., 1992b).
Interestingly, in addition to speech/language-related regions, several other
regions have been associated with sign language comprehension, namely, the
left IPL. Observations of British Sign Language (BSL) have been shown to
activate the IFG, middle and superior temporal cortex, as well as the IPL
(supramarginal gyrus) more strongly than watching another manualized
communication system that did not have linguistic meaning, which indicated that
these regions are not simply active for gestural observations of any sort
(MacSweeney et al., 2004). Left parietal activity is commonly found in a number
of sign language studies (Corina et al., 1999; MacSweeney et al., 2002b;
MacSweeney et al., 2002a; Emmorey et al., 2004; MacSweeney et al., 2004;
Emmorey et al., 2005; Corina & Knapp, 2006), with researchers proposing that
the IPL and SPL may play an important role in extracting hand configurations as
well as hand/arm spatial positions (MacSweeney et al., 2002a). In addition,
lesions in the supramarginal gyrus (SMG) in deaf American Sign Language users
impair sign comprehension, suggesting that this region plays a unique role in
representing semantic meaning attached to signs (Chiarello, Knight, & Mandel,
1982; Corina et al., 1992a).
In addition to the left IPL, the left inferior frontal cortex is also active during sign
language observation (MacSweeney et al., 2002b; MacSweeney et al., 2002a;
MacSweeney et al., 2004), as well as during observation of communicative
gestures and even co-speech gestures (Willems & Hagoort, 2007; Willems et al.,
2007). Lesions in the IFG produce deficits in sign
production, similar to Broca’s aphasia in hearing individuals, but comparatively
minor deficits for sign comprehension (Poizner, Klima, & Bellugi, 1987; Corina &
Knapp, 2006), suggesting that the frontal component is not necessarily needed
for sign language comprehension. These results were also demonstrated using
cortical stimulation mapping in an individual undergoing treatment for a seizure
disorder: stimulation of Broca’s area impaired motor execution of signs, whereas
stimulation of the SMG produced sign comprehension deficits and difficulties with
semantic-phonological decisions, suggesting a role for the parietal component in
binding linguistic features (Corina et al., 1999).
Neuroimaging results also support the role of a generalized fronto-parietal
network in sign language production and comprehension that is stronger in the
left hemisphere but, under certain conditions, represented bilaterally
(MacSweeney et al., 2002b; MacSweeney et al., 2002a; MacSweeney et al.,
2004; Corina & Knapp, 2006; Villarreal et al., 2008; Straube et al., 2009; Skipper
et al., 2009). Regardless, it appears that manualized language may involve both
regions typically involved in language processing in hearing individuals, as well
as unique contributions from the supramarginal gyrus among other regions,
which may contribute to the binding of linguistic and motor activity, similar but not
identical to that found in speech production.
One might expect a similar finding, then, for intransitive, communicative gestures
observed by typically developed individuals, and indeed, this is generally
supported by the literature (Molnar-Szakacs et al., 2007; Villarreal et al., 2008;
Schippers et al., 2009; Straube et al., 2009; Straube, Green, Jansen, Chatterjee,
& Kircher, 2010). Most studies find activity in IFG, pSTS, and IPL for gestural
observation, with one study finding greater BOLD activity in the left IFG when
watching intransitive communicative gestures as compared to pantomimes of
transitive gestures, suggesting a possible modulation of this region by abstract
meaning (Villarreal et al., 2008). In fact, evidence of a shared single
communication system that integrates gesture and speech comes from
observations that producing an emblem and saying the word that describes the
emblem has cross-modal effects that are not seen when either producing the
emblem or saying the word separately (Bernardis & Gentilucci, 2006). That is to
say, when participants pronounced words and performed emblems at the same
time, the acoustic range of the word increased while the duration of the gesture’s
movement patterns decreased, suggesting that the two are not contained in
isolated systems but interact within the brain. Following up on this finding, the
same research group then used TMS to disrupt the left IFG, which modulated the
acoustic production of the word pronounced in response to a symbolic gesture,
an effect which was not seen when TMS was delivered to the right IFG or not
delivered at all (Gentilucci, Bernardis, Crisi, & Dalla Volta, 2006). These findings
suggest a role for the left IFG in multimodal semantic processing.
In addition to increased activity in the IFG, greater hippocampal activity was
found when observing metaphoric, compared to free or
unrelated, gestures that accompanied speech, also with a strong left
lateralization suggesting that the left hemisphere may be involved in semantic
integration of gesture with speech (Straube et al., 2009). However, even co-
speech gestures, which accompany speech and may not contain meaning in
themselves, demonstrate increased processing in Broca’s area when verbal and
gestural information do not match, potentially indicating integration of action and
language processing at this region (Willems, Ozyurek, & Hagoort, 2007).
Moreover, co-speech gestures can reflect the grammar of the language they
accompany. In an EEG experiment, when subjects were presented with either a
word or a gesture that did not fit the context of a sentence, an N400 effect
(associated with difficulty processing a word based on the surrounding context)
was found (Ozyurek, Willems, Kita, & Hagoort, 2007). Importantly, the timing of
this effect did not differ between speech and gesture conditions, suggesting that
the timecourse of integration for a gesture is similar to that of integration for a
spoken word (Ozyurek et al., 2007). Supporting this is recent evidence that
incongruent speech and co-speech gestures elicit the N400 effect only within a
certain time window (e.g., when speech and gesture are presented
simultaneously or within 160 msec, but not at 360 msec), suggesting that the two
are processed in an integrative manner in order to resolve each other (Habets, Kita,
Shao, Ozyurek, & Hagoort, 2010). In addition, these effects were found to
increase activation in the left IFG for both speech and gesture, suggesting a
common neural basis for integrating semantic information, regardless of the
modality (Willems et al., 2007). Such co-speech gestures were also found to
increase accuracy of memory retrieval for stories and have been shown to
increase functional connectivity between MNS regions in both the IFG and IPL
and anterior regions of the superior temporal sulcus, which have been associated
with semantic aspects of language comprehension (Skipper et al., 2009).
Furthermore, task instructions or contextual components of the gesture may also
increase mentalizing activity during gestural observation, as found in several studies
(Straube et al., 2010; Liew, Han, et al., 2011; Schippers et al., 2009). The task
of inferring others’ intentions from charades evoked activity in both MNS
(premotor, parietal) and mentalizing (TPJ) regions (Schippers et al., 2009).
Granger causality between the charades actor and the charades guesser also
demonstrated that MNS activity in the actor’s brain was Granger-causally related
to both MNS and mentalizing activity in the guesser’s brain (Schippers et al.,
2010). In addition, Liew, Han, et al. (2011) demonstrated greater mentalizing
activity when participants inferred the meaning of familiar gestures, but greater
MNS activity when participants inferred the meaning of unfamiliar gestures,
suggesting that the task demands, as well as prior experience with the gestures,
modulated the activity of MNS and mentalizing systems in understanding the
gestures. Greater mentalizing activity was also found for gestures that are
expressive (e.g., “I am angry”) compared to motor-related (e.g., “Come here”),
particularly in the anterior paracingulate cortex, bilateral temporal poles, right
pSTS, and the amygdala (Gallagher & Frith, 2004). In contrast, and in line with
the reviewed literature, motor-related gestures compared to expressive gestures
more strongly activated a left-lateralized frontoparietal system associated with
language and motor imitation. Finally, it appears that social cues, such as
face/body orientation, may influence the neural regions related to processing
gestures, due to top-down modulations by the mentalizing system (Straube et al.,
2010). Thus it appears clear that, while MNS regions are commonly activated in
response to observations of many types of gestures, mentalizing regions may
also be activated based on the social context, task, and content of the gestures
themselves.
Notably, these findings are reinforced by studies of individuals with apraxia, in
which damage to specific portions of the brain can produce difficulty in
performing and/or comprehending actions, including actions involving conceptual
knowledge of tool use, imitation of movements, pantomimed actions with tool
use, and/or symbolic gestures, among many other types (Heilman, Schwartz, &
Geschwind, 1975; Geschwind, 1975; Heilman, Rothi, & Valenstein, 1982; Rothi &
Heilman, 1984; Rothi, Heilman, & Watson, 1985). Some patients with apraxia
were found to have specific deficits in the comprehension of intransitive gestures
(see Heath et al., 2001). Interestingly, deficits in gestural comprehension were
linked to limb apraxia arising from lesions of the left IFG and were not apparent
when individuals had lesions in the left IPL or the right IFG/IPL, a finding at odds
with the existing literature but explained by differences between this novel
gestural recognition task and the task used in prior studies (Pazzaglia,
Smania, Corato, & Aglioti, 2008).
Finally, it appears that the links between language and action are found not only
in gesture, but also in the conceptual representation evoked by words
themselves (Pulvermuller, Hauk, Nikulin, & Ilmoniemi, 2005; Tettamanti et al.,
2005; Pulvermuller & Hauk, 2006; Aziz-Zadeh et al., 2006b; Gentilucci & Dalla
Volta, 2008; Aziz-Zadeh & Damasio, 2008). That is to say, hearing certain words
(notably, action verbs) will evoke sensorimotor representations associated with
the actual motor performance of those words (e.g., “kick the ball” activates leg
regions, “pick the cup” activates hand regions, and so on; Aziz-Zadeh et al.,
2006b; Pulvermuller, Shtyrov, & Ilmoniemi, 2005; Tettamanti et al., 2005). Such
embodied semantics not only show somatotopic properties, mapping onto the
specific regions associated with the actions, but also are body-specific, such that
right-handed individuals activate left premotor cortex when hearing action words,
while left-handed individuals activate right premotor cortex for the same actions
(Willems, Hagoort, & Casasanto, 2010). In addition, individuals with apraxia are
also impaired in their ability to match action sounds with the appropriate action
photo, and this effect is body-part-specific such that individuals with limb apraxia
are impaired in matching limb action sounds/pictures, while individuals with
buccofacial apraxia are impaired in matching mouth action sounds/pictures
(Pazzaglia, Pizzamiglio, Pes, & Aglioti, 2008). This body-specific representation
is also true during motor imagery (Willems, Toni, Hagoort, & Casasanto, 2009),
suggesting that words can evoke one’s own motor representations during
comprehension. There is a wealth of literature on embodied semantics that is not
discussed here for the purposes of brevity (for a review see (Aziz-Zadeh &
Damasio, 2008), but overall these findings suggest that there are multimodal
representations of conceptual information that may involve sensorimotor
information.
While each of these topics can be widely expanded to delve even further into the
relationship between action and language, this brief summary of findings
demonstrates the highly flexible and interconnected nature of regions supporting
not only motor representations but also language and semantics. In the current
studies, I aim primarily to understand MNS and mentalizing regions associated
with gestural understanding; however, many of these studies on experience may
be extended to understand how familiarity with a stimulus or action increases
one’s conceptual representation of it, as possibly demonstrated through studies of
embodied semantics. Lastly, these studies highlight the interplay between key
regions of interest, including the IFG, IPL, and pSTS, along with the MTG and
anterior/posterior inferotemporal cortices, as well as the laterality of regions, as
they support abstract understanding from motor representations.
1.7.3 Conceptual model of gestural understanding
Based on previous models of action understanding (Fagg & Arbib, 1998; Oztop &
Arbib, 2002; Arbib, 2006; Bonaiuto et al., 2007; Bonaiuto & Arbib, 2010) and
praxis (Gonzalez Rothi et al., 1991), here I propose a model of gestural
recognition during normal processing (see Figure 1-10). In this model, the visual
input is divided into visual analysis regarding the gesture (e.g., the movement
parameters, hand shape, hand position in space, body kinematics) and the
social/context information (e.g., the individual’s facial expression, eye
movements, race, and appearance, the background scene and objects in the
scene). Visual gesture input is likely processed in the pSTS, and in particular, in
the extrastriate body area (EBA), which have been associated with the
57
processing of biological movement and human movement, respectively (Perrett
et al., 1989; Perrett et al., 1990; Downing, Jiang, Shuman, & Kanwisher, 2001).
Visual contextual information regarding the observed individual’s face may be
processed in the fusiform face area (FFA; Kanwisher, McDermott, & Chun, 1997), while
contextual location and object information in the scene may activate regions of
the parahippocampal place area (PPA) (Epstein & Kanwisher, 1998; Epstein,
Harris, Stanley, & Kanwisher, 1999) and inferotemporal cortex (IT) (Gross,
Bender, & Rocha-Miranda, 1969; Gross, Rocha-Miranda, & Bender, 1972; Fujita,
Tanaka, Ito, & Cheng, 1992; Ito, Tamura, Fujita, & Tanaka, 1995) respectively.
Information about the body movements comprising the gesture is then passed
from visual regions detecting motion (pSTS, EBA) to the IPL where hand and
arm configurations in body space related to the observed gesture are extracted
(Rizzolatti, Fogassi, & Gallese, 1997; Oztop et al., 2005). This information is then
processed by the IFG, with canonical neurons providing one’s own motor
patterns that match the desired hand configurations, and mirror neurons
supporting action recognition and possibly motor goal attribution to the observed
movements (Rizzolatti et al., 1996a; Gallese et al., 1996; Fagg & Arbib, 1998;
Oztop & Arbib, 2002; Oztop et al., 2004; Stepniewska, Preuss, & Kaas, 2006;
Willems et al., 2007; Villarreal et al., 2008; Van Overwalle & Baetens, 2009).
From here, information about motor-related goal states may then be integrated
with semantic or conceptual information regarding the recognized action, as
processed in a number of gesture-related regions as discussed previously,
including classic language areas (Broca’s area, Wernicke’s area), areas
associated with naming objects or actions, and areas underlying recognition of
emblematic gestures (Damasio et al., 2001; Emmorey et al., 2003; Damasio,
Tranel, Grabowski, Adolphs, & Damasio, 2004; Xu et al., 2009; Straube et al.,
2010). Most likely, this binding of motor and semantic/conceptual information
occurs in the inferior frontal gyrus, roughly around Broca’s area (Gentilucci et al.,
2006; Willems & Hagoort, 2007; Willems et al., 2007; Gentilucci & Dalla Volta,
2008; Xu et al., 2009). This information is then passed to higher-level mentalizing
regions to allow for mental state attribution based on the hypothesized intentions.
At the same time, information from visual contextual and social input is
processed in parallel with gestural visual information, activating regions
associated with emotional processing such as the insula, amygdala, and other
limbic structures, along with higher level reasoning regions such as the midline
cortical structures (mPFC, PCC), involved in self/other representation and
autobiographical memory, and the ACC, associated with conflict detection
(Maddock, Garrett, & Buonocore, 2001; Frith & Frith, 2003; Walter et al., 2004;
Cabeza & St Jacques, 2007; Ciaramidaro et al., 2007; Greicius, Supekar,
Menon, & Dougherty, 2009). This information, along with input from the IFG and
from semantic/conceptual centers, is integrated in mentalizing regions, many of
which are multimodal association cortices and hold reciprocal connections with
diverse regions across the brain (Seltzer & Pandya, 1980; Barbas & Pandya,
1989; Morecraft et al., 2004; Stepniewska et al., 2006; Parvizi, Van Hoesen,
Buckwalter, & Damasio, 2006; Greicius et al., 2009; Seltzer & Pandya, 2009;
Damoiseaux & Greicius, 2009).
Importantly, the model contains a number of reciprocal connections between
areas. Contextual information is thus hypothesized not only to affect the
inference of another’s intentions, but also to influence, and be
influenced by, basic action understanding (e.g., in the IPL) in a bidirectional
manner, extending the proposal by Teufel et al. (2010) that sensory input
interacts dynamically with higher-level abstract and social reasoning. This may
provide one way in which context modulates MNS and mentalizing networks.
However, the exact directionality of information flow requires additional methods
that cannot be captured using BOLD fMRI and may in future studies require
additional neuroimaging modalities with better temporal resolution.
Figure 1-10. Conceptual model demonstrating the flow of information
during gestural observation. Regions associated with motor representations
are displayed in blue, contextual information in green, and
semantic/conceptual/abstract representations in red.
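For readers who prefer a compact summary, the hypothesized routes in Figure
1-10 can also be written down as a small directed graph. The Python sketch below
is purely illustrative: the node labels and edges paraphrase the model described
above, and the final edge stands in for the proposed top-down, reciprocal
influence of contextual processing on basic action understanding; no computation
over the graph is implied.

# Illustrative encoding of the hypothesized information flow in Figure 1-10.
# This is a summary of the conceptual model, not a simulation of it.
edges = [
    ("visual_input", "pSTS/EBA"),         # gesture kinematics and body movement
    ("visual_input", "FFA/PPA/IT"),       # face, scene, and object context
    ("pSTS/EBA", "IPL"),                  # extract hand/arm configurations in body space
    ("IPL", "IFG"),                       # match to one's own motor programs
    ("IFG", "semantic_regions"),          # bind motor and semantic/conceptual meaning
    ("semantic_regions", "mentalizing"),  # mental state attribution
    ("FFA/PPA/IT", "limbic/midline"),     # emotional and contextual appraisal
    ("limbic/midline", "mentalizing"),
    ("mentalizing", "IPL"),               # proposed top-down contextual modulation
]

def downstream(region):
    # List the regions that receive input from `region` in this sketch.
    return [target for source, target in edges if source == region]

print(downstream("visual_input"))  # ['pSTS/EBA', 'FFA/PPA/IT']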
1.8 AIM 2: VISUAL EXPERIENCE AND ACTION
UNDERSTANDING
1.8.1 Background
In the first study, I propose to ask how perceptual familiarity with the race of an
actor and motor familiarity with an action may affect MNS and mentalizing
activity. In the second, I ask how the MNS responds to visually novel human
effectors and how this response may be modulated by visual experience.
Interestingly, MNS activation can be modulated not only by one’s amount of
experience with the action, or the race of the person performing the action, but
also by the effector used to complete the action (e.g. hand, mouth, foot, robotic
claw, tool). Neuroimaging data suggest that we can have MNS activity when
observing actions whose goals we can accomplish, even when we do not
possess the effector being used for those actions (Gazzola, Rizzolatti, Wicker, &
Keysers, 2007a; Gazzola et al., 2007b). For instance, an individual without arms
will demonstrate a mirror response for hand actions based on which effector (feet
or hands) he or she uses to complete the same goal (Gazzola et al., 2007b). In
addition, individuals demonstrate mirror responses to actions performed by a
robotic claw, which possesses different kinematics from a human hand, in their own
hand regions, as the hand is the effector they themselves use to achieve the same
goal (e.g., picking up a cup; Gazzola et al., 2007a). Furthermore, the IFG may
represent biomechanically impossible actions, while the IPL will respond only to
biomechanically possible, but not impossible, actions (Costantini et al., 2005).
However, a preliminary fMRI study conducted by our lab suggests that when
typically developed individuals observe a woman without
arms use her upper arm stump to complete actions, such as turning a book page,
they do not show significant activity within the MNS. Instead, they show increased
activity in visual regions, as well as emotion-related regions such as the
amygdala (Liew, Sheng, & Aziz-Zadeh, 2010). In contrast, when they observe
other typically developed individuals carry out the same actions with their hands
(e.g., turn a book page), they do show significant activity in the MNS. Thus although
the MNS is generally a goal-matching system, it seems that in some instances,
there is no mirror activity despite one’s ability to accomplish the observed goals.
This may be due to either: (1) too large a degree of physical dissimilarity between
the actor and the observer, or (2) a lack of visual familiarity with the novel
effector, thus preventing it from being mapped onto one’s own motor representations.
While it is yet unclear why this occurs, there is evidence in favor of the latter
hypothesis. First, prior studies have shown MNS activity in
humans for observations of different species and even for non-living machines
(Buccino et al., 2004a; Gazzola et al., 2007a), as discussed previously. In
addition, whether or not an observer possesses the same body parts as the actor
should not matter, since individuals with amputated limbs demonstrate MNS
activity for hand actions, even though they do not have these physical body parts
(Gazzola et al., 2007b). However, these individuals have a breadth of prior
experience observing individuals with hands performing actions, and visual
familiarity with another limb may also allow one to incorporate that limb into
one’s own body representation. On the other hand, individuals with typically
developed hands, who have never seen amputated limbs before, may demonstrate
a remarkably different pattern when observing them.
Additionally, research on tool use demonstrates that with repeated visual training,
mirror neurons in monkeys may respond more strongly to observations of tools (a
stick, pliers) performing a grasping task than to a human effector (e.g., a hand;
Ferrari, Rozzi, & Fogassi, 2005). Monkeys can similarly be trained to learn how
to ‘use’ tools themselves, with some motor neurons demonstrating goal-specific
firing (as opposed to movement-specific firing; Umilta et al., 2008). There is
also evidence in humans that mirror regions in the anterior inferior parietal lobule
(AIP) and ventral PMC fire for representations of either hand or tool use when the
same goal is achieved (Jacobs,
Danielmeier, & Frey, 2009). It has thus been proposed that, through training,
tools can be integrated into one’s body scheme, replacing the prior end-effector
(e.g., a hand) with the new extension (e.g., a tool; Arbib et al., 2009). Thus, it
seems that with adequate visual training, one might be able to similarly
proximalize an upper-arm stump as the end effector and thus generate a MNS
response for such images. The second study addresses this question by
examining neural activity in response to observation of unfamiliar upper
residual limb actions compared to familiar hand actions, and assessing
whether there are changes in the neural networks underlying these actions
after prolonged visual exposure to the novel effector. To date, no research
has addressed whether goal-based encoding of actions performed by dissimilar
others is affected by one’s visual familiarity with the effector. It is possible that as
long as the kinematics of the observed effector can be mapped onto one’s own
motor schema, as has been proposed (Oztop & Arbib, 2002), it can be
represented in the MNS. Even if the effector cannot be directly mapped onto
one’s own motor schema, such as when observing robotic machines or other
novel tools, if the concept of the observed object is familiar (e.g. this is a machine
that performs X task), it might be quickly learned (Gazzola et al., 2007a).
However, in comparison, what if the end effector is biological but novel, such as
an individual’s amputated limb? This situation presents a unique question as 1)
the effector cannot be directly matched to one’s own motor schema, and 2) the
observer is not visually familiar with, and has no pre-existing concept for, this
effector. Thus, this study asks: how do we represent actions performed by
uncommon biological effectors with which we are unfamiliar, and can
these representations be modulated to resemble those of familiar actions through
increased visual experience?
1.8.2 Conceptual model of unusual or novel effectors
In order to better understand how observations of novel effectors might be
represented in the mirror system, I present the following conceptual model with
expected modifications for both (A) an individual using an unusual effector (feet,
in the absence of hands), and (B) an individual using a novel effector (an upper
arm stump, in the absence of hands; see Figure 1-11). The top images are
meant simply as examples of actions that each individual might perform using
unusual or novel effectors.
In this model, I suggest that visual information about the action can be
segregated into three parts: the effector, the object being manipulated, and the
context. At this stage of visual processing, an unusual effector may receive
additional visual attention as it is unexpected given the object and the unusual
affordances for grasping the object. In addition, a novel effector may require even
greater visual attention, as there are no pre-existing affordances for grasping
using this effector. The unexpected nature of an unusual or novel effector may
evoke top-down modulation, directing greater attention to these effectors during
the initial observation. However, with visual exposure, these modulations may
decrease and attention may be equally divided between the effector, object, and
context as one might see normally.
At the IPL, information about the object and effector are integrated to extract
possible affordances (Fagg & Arbib, 1998; Heath et al., 2001; Gallese et al.,
2002; Oztop & Arbib, 2002). Again, here, an unusual effector may require
additional processing, as well as some higher-level reasoning, in order to
ascertain possible options since one rarely sees such an effector performing this
action. Novel effectors may require even greater processing as there are no pre-
existing representations for this effector and thus no known affordances. One’s
own AIP may have extracted affordances that one would ordinarily use for this
task; however a mismatch between the predicted affordances and the observed
affordances would result in a large error, thus increasing visual attention towards
extracting appropriate affordances or learning new affordances (Oztop et al.,
2005; Bonaiuto et al., 2007; Bonaiuto & Arbib, 2010). Increased visual
processing may, over time, provide learned associations between the visually
observed action and end goals.
Finally, at the IFG, information about the possible affordances may then be used
to recognize the action. This may happen in one of two ways: 1) a mapping of the
observed effector’s joint kinematics and movement parameters onto one’s own
motor kinematics for one’s own preferred effector (e.g. mapping the foot grasping
action onto the hand grasping motor program) using recognition of hand state,
similar to that described in Oztop & Arbib (2002), which is likely for the unusual
effector such as a foot that has similar kinematics to a hand, or 2) a learned
association develops directly between the visual information and end-goal over
repeated visual exposure without mapping it onto one’s own body (as is likely the
case with a novel effector such as an amputated limb).
In these ways, we might expect to see modulations of the visual system and
action understanding networks, particularly in the MNS, when observing unusual
or novel effectors during object-directed tasks.
Figure 1-11. Conceptual model of unexpected or novel effectors during
action understanding. Top portion demonstrates (A) an individual using an
unusual effector (feet) to perform a goal in the absence of hands, and (B) an
individual using a novel effector (upper arm stump) and an unusual effector
(mouth) to perform a goal in the absence of hands. The middle portion provides a
conceptual model representing the flow of visual information during action
understanding. The bottom portion (in the yellow box) suggests three points in
the model at which one might experience modulations due to the unusual or
novel effector.
1.9 AIM 3: REAL-LIFE EXPERIENCE AND ACTION
UNDERSTANDING
1.9.1 Background
While the first two questions explored how MNS and/or mentalizing regions are
modulated by discrete types of familiarity and experience (visual, perceptual,
motor), the last question asks how different, and seemingly more dynamic, types
of experience alter these networks involved in action understanding. While the
prior study examines how we understand individuals before and after visual
experience, a further step is to understand how our actual, real-life experiences
with individuals shape how we understand people like them. While this factor is
‘messier’ and more difficult to control for than the previous condition of visual
experience (which is systematically introduced during the experiment), examining
real-life interactions allows for greater generalizability of these findings to
real-world applications and situations. Thus, this third aim attempts to examine
modulations of action understanding regions through real-life interactions with
individuals and through the personal experience of having a different body oneself.
These modulations may provide more insight into the relative contributions of
MNS and mentalizing regions together, and may also provide dissociable
information about how the type and amount of experience drives activity in
different regions of each network.
The large majority of studies have demonstrated that the MNS and mentalizing
systems are activated separately, based on different contexts and contrasts, with
very little concurrent activation occurring. However, three recent studies using
novel methods of analysis have demonstrated that, over longer time courses, both
systems are activated. One study employed a novel empathic accuracy (EA)
paradigm in which participants were shown 2-3 minute videos of people talking
about events in their lives (Zaki et al., 2009). As they watched, participants were
asked to continuously rate 1) how positive or negative they felt at each moment
of the video, and 2) how positive or negative they thought the speaker felt at
each moment in the video. Participants’ ratings were then compared to the
speaker’s own ratings of their positive/negative emotions while talking, with a
very close match indicating that the participant was able to accurately identify the
speaker’s emotions (high EA) and a poor match indicating the opposite.
Participants’ EA scores were then used as a parametric modulator for neural
activity during the task. Results demonstrated that increased EA was predicted
by activity in three regions of the mentalizing system (the dorsal and rostral
regions of medial prefrontal cortex (mPFC) and the superior temporal sulcus
(STS)) as well as two regions in the MNS (the right inferior parietal lobule (IPL)
and the bilateral dorsal premotor cortex (dPMC)), showing for the first time
activation of both systems together.
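The core of such an empathic accuracy score can be illustrated with a short
Python sketch: correlate the observer's continuous ratings with the target's own
continuous ratings of the same video, yielding one accuracy value per video that
could then serve as a parametric regressor. This is a simplified illustration of the
general logic, assuming the two rating series are already aligned on a common
timebase, and is not Zaki et al.'s exact procedure.

import numpy as np

def empathic_accuracy(observer_ratings, target_ratings):
    # Correlate the observer's continuous affect ratings with the target's own
    # ratings of the same video; a higher correlation indicates higher accuracy.
    observer = np.asarray(observer_ratings, dtype=float)
    target = np.asarray(target_ratings, dtype=float)
    return np.corrcoef(observer, target)[0, 1]

# Hypothetical ratings, one value per time bin of a short video clip.
target_ratings = [1, 2, 2, 3, 5, 4, 3, 2]    # speaker's own affect ratings
observer_ratings = [1, 1, 2, 3, 4, 4, 3, 3]  # participant's moment-to-moment inference
print(round(empathic_accuracy(observer_ratings, target_ratings), 2))  # ~0.87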
A second study utilized a novel dual-EEG set-up in which two participants
engaged in a motor imitation task where one individual performed an action and
the other imitated it, and vice versa (Dumas, Nadel, Soussignan, Martinerie, &
Garnero, 2010). Using this set-up, EEG data could be recorded from both
participants as they interacted, allowing for the careful temporal examination of
inter-brain phase synchrony during the social imitation task. Importantly, activity
in the alpha-mu, beta, and gamma bands was synchronized between participants, with the greatest synchrony in the alpha-mu band over parietal regions in both brains, thought to represent MNS activity (Dumas et al., 2010). In addition, activity in the right TPJ was synchronous between individuals, suggesting a coordinated mentalizing of the other's perspective and goals during the social task (Dumas et al., 2010). These findings overall provide evidence of a new
method for studying real-time social interactions between individuals and
recording from both individuals as they interact, suggesting that a combination of
mirror and mentalizing regions participate in synchronizing brains during a social
task.
Finally, Schippers et al. (2010) used between-subject Granger causality to study
causal patterns of activation between two participants as they engaged in
gestural communication during the game of charades. One participant was the
guesser, inside the MR scanner, while the other participant (the gesturer) was
outside being videotaped while performing gestures (Schippers et al., 2010).
These videos were streamed onto a screen so that the guesser could participate
in the fMRI scan while guessing his or her partner’s actions. The between-brain
Granger causality mapping suggested that the gesturer's own actions influenced activity in the ventral mPFC (mentalizing system) as well as in the middle temporal gyrus and in parietal and premotor regions (MNS) of the guesser's brain. This suggests
that MNS activity, as well as mentalizing activity, may resonate across brains and
that more complex social interactions may contribute to the observation of
multiple regions and multiple networks firing in conjunction.
Altogether, these studies indicate that in more complex, involved, or personally meaningful situations, both mentalizing and mirror neuron regions may be activated together. Thus, in the current study, we aim to understand how more personal forms of experience, whether through interactions with people who have physical differences or through having a physical difference oneself, further modulate the activity of these networks, and how this compares to modulations driven by pure visual experience.
1.9.2 Conceptual model of real-life experience
Referring again to the conceptual model on familiarity affecting mentalizing and
mirror neuron systems, I now turn to posit specific hypotheses regarding the
effects of different modalities of training on these regions in terms of real-life
experiences (Figure 1-12). In this model, there are three critical ways in which
the neural regions supporting understanding another’s intentions might be
modulated. First, real-life interactions with an individual with a different body will likely activate regions associated with observing an unfamiliar body part (e.g., a residual limb), particularly visual regions such as the pSTS and parietal regions such as the IPL. These regions may allow one to extract the new hand/arm configurations, joint kinematics, and locations in body space from the observed action, which is necessary in order to understand the new limb, even if one has a pre-existing model for something similar to it. Thus, both the visual cortex and IPL may show enhanced activation when observing novel body parts, such as when people with real-life experience observe a new individual with a different body and extract the kinematics of the new limb. The second site of modulation is likely the frontal mirror region in the IFG (and possibly dorsal/ventral premotor regions), which encode complex motor goals and motor planning (Preuss, Stepniewska, & Kaas, 1996; Molnar-Szakacs, Iacoboni, Koski, & Mazziotta, 2005; Stepniewska et al., 2006; Vogt et al., 2007; Hesse et al., 2009; Bonini et al., 2010). This area is likely to be active once an individual
has more experience with a different type of body, since once the observer
recognizes the action, he or she may then recall predicted goals that match that
particular action pattern. The IFG may thus participate in attempting to match the
motorically-modeled gesture with pre-existing motor patterns with known goals,
possibly attempting to bind the motor representation with a conceptual
representation from the individual’s own motor repertoire or based on the
individual’s own prior experiences. Finally, if actions of the residual limb are at
least somewhat familiar, such as in individuals with more experience with people
who have residual limbs, then these individuals may also strongly activate
regions associated with intention inference, such as the mPFC, PCC, and
bilateral TPJ, as the action may be recognized by MNS regions and additionally
activate regions associated with one’s own personal memories of other
individuals doing those same actions. In all these scenarios, contextual
information is present and may influence the overall inference of the other's intentions, including in regions related to emotional processing. A future study may wish to examine the role of contextual types of experience (e.g., with a personally familiar or unfamiliar individual, with a liked or disliked individual, with emotional facial expressions, etc.) on these network modulations. However, in the present work, I focus primarily on the interactions of the mirror and mentalizing systems in supporting intention understanding, keeping contextual information as consistent across conditions as possible.
Figure 1-12. Conceptual model demonstrating possible effects of real-life
experience. Model (as seen in Figure 1-10) may also be modulated by real-life
experience with different bodies, specifically during action representation (IPL),
action recognition (IFG), and inferring others’ intentions (mPFC, PCC, bilateral
TPJ).
1.10 METHODS: FUNCTIONAL MAGNETIC RESONANCE
IMAGING (FMRI)
Functional magnetic resonance imaging (fMRI) will be used to examine functional
activation of neural regions, including those in the MNS and mentalizing systems.
In fMRI, a blood-oxygen-level dependent (BOLD) signal is noninvasively
recorded, driven by task-dependent neural activity within a region. These data can then be analyzed to contrast an individual's responses between one task and another, or to compare means across groups. Functional MRI is considered an indirect measure of brain activity. Because the MR scanner provides an extremely strong (in this case, 3-Tesla) static magnetic field, small changes in magnetic properties can be detected in the rate at which atoms return to their alignment within this magnetic field after being perturbed with radiofrequency pulses. Importantly, oxygenated and deoxygenated hemoglobin have different magnetic properties (diamagnetic vs. paramagnetic, respectively), thus allowing
MRI to distinguish between the two forms. As neuronal activity requires oxygen
as fuel, the proportion of oxygenated to deoxygenated hemoglobin (or, the blood-
oxygen level dependent (BOLD) signal) in the brain may be one indirect marker
of brain activity within a given space.
For quantitative purposes, the brain space is divided up into small cubic regions
known as voxels, or volume elements (in this case, 3.5mm x 3.5mm x 3.5mm).
Signal intensity can be calculated at each voxel, thus providing a detailed
account of brain activity throughout the brain at each time point. Below is an
example of a raw dataset from one voxel across 233 timepoints.
Figure 1-13. Raw BOLD timeseries from one voxel. BOLD response at one
voxel during a condition over the timecourse of the experimental run.
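For concreteness, the sort of single-voxel time series shown above can be pulled from a 4D functional image with a few lines of Python; this is only an illustrative sketch, and the nibabel library, filename, and voxel indices are assumptions rather than part of the pipeline described here.

import nibabel as nib   # common Python library for reading NIfTI images

# Hypothetical 4D functional run (x, y, z, time); the filename is a placeholder.
img = nib.load("sub01_run1_bold.nii.gz")
data = img.get_fdata()             # array of shape (X, Y, Z, T)

# Pull the raw BOLD time series from one arbitrarily chosen voxel.
x, y, z = 30, 40, 25               # placeholder voxel indices
voxel_timeseries = data[x, y, z, :]
print(voxel_timeseries.shape)      # e.g., (233,) timepoints for this run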
As this raw data is extremely noisy, it must undergo a series of preprocessing steps before it can be used as the dependent variable in the statistical analysis, and several of these steps directly affect the analysis itself. First, the data are corrected for motion over the timecourse of data acquisition, which can be anywhere from 20 minutes to 2 hours. During this time, the head may sink into the foam padding, or the participant may make subtle movements on the order of 1-2 mm. Images are sampled at a fixed rate (in this case, every 2 seconds) over the timecourse of each run (e.g., 8 minutes), and motion parameters must therefore be estimated. Six parameters, corresponding to translation in x, y, and z and to roll, yaw, and pitch, are then added to the GLM to ensure that simple movement does not account for any significant "activation." Oftentimes, if the movement is larger than one voxel, the participant is excluded from further analyses. Next, slice timing correction accounts for the fact that the slices within a volume are acquired at different times (many scanners sample the brain in an interleaved slice pattern so that signal from one slice does not contaminate the next) by temporally realigning the slices. Thus, the images are reassembled into a proper volume and corrected for both motion-related and temporal disturbances.
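To illustrate how the six motion estimates enter the model, here is a minimal sketch of appending them to a design matrix as nuisance regressors; the arrays are placeholders, and this is not the pipeline used in this dissertation.

import numpy as np

n_timepoints = 233
rng = np.random.default_rng(0)

# Placeholder task regressors (e.g., two conditions already convolved with an HRF).
task_regressors = rng.normal(size=(n_timepoints, 2))

# Placeholder motion estimates: translations (x, y, z) and rotations (roll, yaw, pitch),
# one row per volume, as produced by the motion-correction step.
motion_params = rng.normal(size=(n_timepoints, 6))

# Append the motion parameters as nuisance columns so that variance explained by
# head movement is not attributed to the task regressors.
design_matrix = np.column_stack([task_regressors, motion_params])
print(design_matrix.shape)   # (233, 8)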
Spatial filtering is then employed to increase the signal to noise ratio in the data.
Since each voxel is not truly independent from each other voxel (e.g. the signal in
one voxel of tissue is likely influenced by the same veins and arteries as the
neighboring voxel), spatial smoothing allows us to take a weighted average of
signal intensity at each voxel, thereby decreasing noise. Typically, a full-width-
half-max (FWHM) Gaussian smoothing kernel on the order of 5-8 mm is
employed, depending on one's voxel size and the hypothesized spatial extent of the effect. Spatial smoothing is not effective if the smoothing kernel is larger than the spatial extent of the effect; however, that is generally not the case. This preprocessing step has important
implications for the later thresholding, as each voxel can no longer be considered
to be an independent sample.
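The FWHM-to-sigma conversion behind this step can be sketched in a few lines; this is a simplified, isotropic illustration using SciPy on a single placeholder volume, not the smoothing routine built into FSL or SPM.

import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_volume(volume, fwhm_mm=6.0, voxel_size_mm=3.5):
    """Apply an isotropic Gaussian smoothing kernel specified by its FWHM."""
    # Convert the FWHM to the standard deviation of the Gaussian, in voxel units.
    sigma_mm = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    sigma_vox = sigma_mm / voxel_size_mm
    return gaussian_filter(volume, sigma=sigma_vox)

# Placeholder 3D volume of random values standing in for one functional image.
volume = np.random.rand(64, 64, 32)
smoothed = smooth_volume(volume, fwhm_mm=6.0, voxel_size_mm=3.5)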
Temporal filtering can then be applied to the data, using both high pass and low
pass filters to remove noise from the sample due to high-frequency fluctuations
and low-frequency temporal drifts. Finally, as the BOLD signal is dependent on a
number of factors including physiological state, individual differences, and more,
the absolute values of signal intensity can vary widely from participant to
participant, and between sessions within the same subject. Thus a final step may
include global intensity normalization, in which each session’s data is normalized
to the grand mean for more accurate comparisons between sessions and
subjects.
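The high-pass portion of this filtering can be sketched as follows; the cutoff period and filter order are illustrative assumptions, and real pipelines typically use the filtering implemented in FSL or SPM rather than this simplified version.

import numpy as np
from scipy.signal import butter, filtfilt

def highpass_filter(timeseries, tr_s=2.0, cutoff_s=100.0):
    # Remove slow drifts with periods longer than cutoff_s from one voxel's time series.
    fs = 1.0 / tr_s                       # sampling frequency in Hz
    cutoff_hz = 1.0 / cutoff_s            # cutoff frequency in Hz
    b, a = butter(2, cutoff_hz / (fs / 2.0), btype="highpass")
    return filtfilt(b, a, timeseries)

# Placeholder voxel time series with an added slow linear drift.
t = np.arange(233)
voxel = np.random.rand(233) + 0.01 * t
filtered = highpass_filter(voxel, tr_s=2.0, cutoff_s=100.0)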
After all of these steps are completed, we can then use this preprocessed fMRI
data as the dependent variable in the GLM, as shown in the comprehensive
diagram below.
Figure 1-14. Processing stream for fMRI data within a single subject using
FSL. Figure depicting the typical processing stream for analyzing the BOLD
response in single subjects (adapted from the FSL tutorial).
Data from each voxel are modeled in a separate GLM, such that the Y for one GLM is the timecourse from one voxel (in this case, a vector of 233 timepoints). The design matrix, or X, consists of the predicted model for each condition. The predicted model is based on the onset times of stimulus presentation within a condition (e.g., condition A occurred at 4 s, 30 s, 50 s, and 90 s and lasted 30 seconds on each trial). However, the expected signal is not a simple boxcar model because it is biologically based and, as such, depends on the physiological properties of blood flow. This means that the expected signal is in fact a much more complex waveform, commonly known as the hemodynamic response function (HRF). The hemodynamic response function is a canonical pattern of blood flow seen in response to a period of brief neural stimulation, reflecting an initial decrease in the concentration of oxygenated hemoglobin due to metabolism, followed by an
overshoot of oxygenated hemoglobin approximately 6 seconds later. One can
then expect to observe the increase in cerebral blood flow, several seconds after
the stimulus is shown, in the activated region. Based on this, the end model is
the predicted model convolved with the hemodynamic response function, as
shown in yellow in the bottom portion of the figure below.
Figure 1-15. Convolution of a canonical HRF with neural activity. The top
row demonstrates a canonical hemodynamic response function (HRF) which is
then convolved with a boxcar function symbolizing predicted neural activity at
discrete timepoints. The bottom image demonstrates the results of the
convolution (adapted from the FSL tutorial).
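A minimal numerical sketch of this convolution is given below, using a commonly used double-gamma approximation of the canonical HRF; the exact HRF parameters and onset times are illustrative assumptions rather than values from this study.

import numpy as np
from scipy.stats import gamma

TR = 2.0                                   # repetition time in seconds
n_scans = 233

# Double-gamma approximation of the canonical HRF (peak near 6 s, undershoot near 16 s).
t = np.arange(0, 32, TR)
hrf = gamma.pdf(t, 6) - (1.0 / 6.0) * gamma.pdf(t, 16)
hrf /= hrf.sum()

# Boxcar of predicted neural activity: onsets at 4, 30, 50, and 90 s, each lasting 30 s.
boxcar = np.zeros(n_scans)
for onset_s in (4, 30, 50, 90):
    start = int(onset_s / TR)
    boxcar[start:start + int(30 / TR)] = 1.0

# Convolve and truncate to the scan length to obtain the expected BOLD regressor.
regressor = np.convolve(boxcar, hrf)[:n_scans]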
For illustrative purposes, assume a study on visual creativity, in which there were
5 experimental conditions (creative task, control task, waiting period, true rest,
and answer period). We also model the temporal derivative of each condition, which allows the model to fit even when the timing does not match up exactly (e.g., the predicted response occurs slightly before/after the model), resulting in a total of 10 task regressors. Finally, we add the 6 motion parameters as described in the preprocessing, for a total of 16 regressors. This design matrix, then, is 233 x 16, with each stimulus onset time convolved with the canonical HRF. This model yields 16 beta values, or parameter estimates, one for each regressor. These betas are transformed into t-statistics by normalizing the beta values by their standard errors. A larger t-statistic therefore represents a greater signal-to-noise ratio for the parameter estimate, that is, more activity. A map of t-statistics across all voxels in the brain can then be generated for each condition as
shown below.
Figure 1-16. Resulting T-map of neural activity during one condition.
Parameter estimates at each voxel are normalized to their standard deviations
and visualized as a 3-D image (adapted from the FSL tutorial).
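To make the estimation step concrete, here is a minimal ordinary-least-squares sketch of fitting one voxel's GLM and forming a t-statistic for a single regressor; it uses placeholder data and is an illustration of the idea rather than the FSL estimator itself.

import numpy as np

n_timepoints, n_regressors = 233, 16
rng = np.random.default_rng(0)

# Placeholder design matrix X (HRF-convolved regressors plus motion parameters)
# and one voxel's preprocessed time series y.
X = rng.normal(size=(n_timepoints, n_regressors))
y = X @ rng.normal(size=n_regressors) + rng.normal(size=n_timepoints)

# Ordinary least squares: beta = (X'X)^-1 X'y.
beta, resid_ss, _, _ = np.linalg.lstsq(X, y, rcond=None)

# Residual variance and the standard error of the first parameter estimate.
dof = n_timepoints - n_regressors
sigma2 = resid_ss[0] / dof
XtX_inv = np.linalg.inv(X.T @ X)
se_beta0 = np.sqrt(sigma2 * XtX_inv[0, 0])

# t-statistic for the first regressor: parameter estimate scaled by its standard error.
t_stat = beta[0] / se_beta0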
However, this data on its own does not provide much information about brain
activity, as it simply represents voxels in the brain that have a t-statistic that is
higher than chance—that is, these are regions that are supposedly active for
each condition. Viewed in isolation, this activity could be due to a number of different factors: the participant's physiological state, the scanner environment, and so on. Thus, it is important to compare one condition to another, as each condition is assumed to share the same basic qualities in terms of external variables but will yield different activity patterns, which can then be attributed primarily to changes in brain activity. These comparisons are called contrasts of parameter estimates, or COPEs, and indicate which voxels in the brain are significantly more or less active during one condition versus another. A COPE is a simple linear combination of parameter estimates; for example, a [1 -1] contrast examines condition 1 minus condition 2, while a [1 0] contrast examines condition 1 alone:
Figure 1-17. Sample contrast matrix. A sample contrast matrix demonstrating
the general linear model indicated by each contrast (adapted from the FSL
tutorial).
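Continuing the illustrative single-voxel fit above, a contrast, its variance, and the corresponding t- and z-statistics could be computed as sketched here; again, this is a simplified stand-in for what FSL does internally.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_timepoints, n_regressors = 233, 16

# Placeholder single-voxel GLM fit (as in the previous sketch).
X = rng.normal(size=(n_timepoints, n_regressors))
y = X @ rng.normal(size=n_regressors) + rng.normal(size=n_timepoints)
beta, resid_ss, _, _ = np.linalg.lstsq(X, y, rcond=None)
dof = n_timepoints - n_regressors
sigma2 = resid_ss[0] / dof
XtX_inv = np.linalg.inv(X.T @ X)

# Contrast vector comparing the first two regressors (condition 1 > condition 2).
c = np.zeros(n_regressors)
c[0], c[1] = 1.0, -1.0

cope = c @ beta                          # contrast of parameter estimates
varcope = sigma2 * (c @ XtX_inv @ c)     # variance of that contrast
t_stat = cope / np.sqrt(varcope)

# Standardize the t-statistic to a z-statistic via its one-sided p-value.
z_stat = stats.norm.isf(stats.t.sf(t_stat, dof))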
The COPE is then divided by the square root of its variance (the VARCOPE) to generate another t-statistic. This t-statistic can then be standardized to a z-statistic to allow for comparisons across sessions and subjects. A final step involves thresholding.
Given that the GLM is performed independently on each voxel in the brain, a
correction for multiple comparisons must be used. However, as noted before, a
Gaussian smoothing kernel is applied to the data to increase the signal to noise
ratio, which decreases the number of independent observations (e.g. each voxel
is no longer an independent observation). Thus, a Bonferroni correction, which
assumes independent observations at each voxel, may be far too stringent and
conservative. There are two different methods for correcting for multiple
comparisons that may be relevant for neuroimaging data. The first is known as Gaussian Random Field Theory, in which the total number of resels (number of resels = total search volume / volume of one resolution element, defined by the FWHM smoothing kernel in each dimension) is calculated and used to determine the Euler characteristic for the image. The Euler characteristic estimates the number of clusters expected in a smooth statistical map at a given threshold, and it can be calculated for different thresholds as long as the number of resels is known. This is then used to determine the height threshold at each voxel. The Gaussian
Random Field Theory is used in a number of neuroimaging analysis programs,
most notably, SPM (Statistical Parametric Mapping). FSL, however, uses a
different method in which a threshold is initially applied to the entire image, and
then the probability of getting a cluster of a certain spatial extent and height is
determined. The image can then be cluster thresholded at a specific p-value,
resulting in a statistically significant activation map. The figure below
demonstrates one such thresholded activation map, with brighter colors
indicating higher Z-scores, for the contrast of creative conditions compared to
control conditions.
Figure 1-18. A thresholded activation map comparing two conditions (A>B).
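As a back-of-the-envelope illustration of the resel calculation described above (all numbers are assumptions, not values from this study):

# Rough resel count for a whole-brain analysis (illustrative numbers only).
n_voxels = 50000                       # voxels inside the brain mask
voxel_volume_mm3 = 3.5 ** 3            # 3.5 mm isotropic voxels
fwhm_mm = 6.0                          # estimated smoothness of the data

search_volume_mm3 = n_voxels * voxel_volume_mm3
resel_volume_mm3 = fwhm_mm ** 3        # volume of one resolution element in 3D

n_resels = search_volume_mm3 / resel_volume_mm3
print(round(n_resels))                 # effective number of independent resolution elements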
Higher level analyses, such as averaging parameter estimates across sessions
(in this case, 4 sessions per subject) or across subjects (in this case, 13 subjects
total), can be performed in FSL using a program called FLAME, which estimates
the inter-session or inter-subject random-effects component of mixed-effects
variance. The fixed-effects variance is simply the variance within each session,
across the time course. The random-effects variance is the variance across
sessions, or across subjects. Thus, the mixed-effects variance in these higher
level analyses is simply the combination of fixed effects variance (from each
session) and random effects variance (across the comparison group).
Averaging across the sessions for a single subject requires a concatenation of all
the first-level copes and first-level varcopes. The result from this second-level
analysis, again subjected to the GLM, is a mean parameter estimate across
several sessions. We can also average across subjects similarly, using the
second-level analyses (copes and varcopes) as input for the third-level analysis,
which results in a group mean for each variable. In addition, at this higher level,
we can not only use the GLM to compute a single group average with a one-sample t-test, but we can also apply the GLM to compare groups with a two-sample unpaired t-test (e.g., a patient group versus a control group) or a paired t-test (e.g., one group during two different conditions), depending on the group structure. We can also run an F-test to assess whether any of the groups activates a given region of the brain. Finally, we can run an ANOVA on the group data by setting up the design matrix so that each factor and level is represented appropriately, examining not only main effects of each factor but also interaction effects between factors.
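A simplified sketch of the group-level idea follows, treating each subject's lower-level COPE for one contrast as a single observation at one voxel and testing the group mean against zero; FSL's FLAME additionally carries the VARCOPEs forward and weights each input accordingly, which this illustration ignores.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Placeholder: one COPE value per subject for a single voxel and contrast.
n_subjects = 13
subject_copes = rng.normal(loc=0.5, scale=1.0, size=n_subjects)

# Simple random-effects group analysis: one-sample t-test against zero.
t_stat, p_value = stats.ttest_1samp(subject_copes, popmean=0.0)

# A two-group comparison (e.g., patients vs. controls) would instead use an
# unpaired t-test on the two sets of subject-level COPEs.
group_a = rng.normal(loc=0.5, scale=1.0, size=13)
group_b = rng.normal(loc=0.0, scale=1.0, size=13)
t_group, p_group = stats.ttest_ind(group_a, group_b)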
CHAPTER 2. Perceptual and Motor Experiences Affect
Action Understanding Regions
ABSTRACT
Recent research suggests that the inference of others’ intentions from their
observed actions is supported by two neural systems that perform
complementary roles. The human putative mirror neuron system (MNS) is
thought to support automatic motor simulations of observed actions, with
increased activity for previously experienced actions, while the mentalizing
system provides reflective, non-intuitive reasoning of others’ perspectives,
particularly in the absence of prior experience. In the current fMRI study, we
show how motor familiarity with an action and perceptual familiarity with the race
of an actor uniquely modulate these two systems. Chinese participants were
asked to infer the intentions of actors performing symbolic gestures, an important
form of non-verbal communication that has been shown to activate both
networks. Stimuli were manipulated along two dimensions: 1) actor’s race
(Caucasian vs. Chinese actors) and 2) participants’ level of experience with the
gestures (familiar or unfamiliar). We found that observing all gestures compared
to observing still images was associated with increased activity in key regions of
both the MNS and mentalizing systems. Additionally, observations of one's own race generated greater activity in posterior MNS-related regions and the insula than observations of a different race. Surprisingly, however, familiar
gestures more strongly activated regions associated with mentalizing, while
unfamiliar gestures more strongly activated the posterior region of the MNS, a
finding that is contrary to prior literature and demonstrates the powerful
modulatory effects of both motor and perceptual familiarity on MNS and
mentalizing regions when asked to infer the intentions of intransitive gestures.
2.1 INTRODUCTION
How do we efficiently infer others' intentions by observing their actions? Recent
research indicates that intention understanding engages two complementary
systems: the putative human mirror neuron system (MNS) and the mentalizing
system (Keysers and Gazzola, 2007; Uddin et al., 2007; de Lange et al., 2008;
Hesse et al., 2009). The MNS, composed of motor-related brain regions in the
inferior frontal gyrus (IFG) and inferior parietal lobule (IPL), is activated both
when an individual makes an action and when he or she observes another
person make the same action (Gallese et al., 1996; Rizzolatti and Craighero,
2004; Aziz-Zadeh et al., 2006). It has been proposed that mapping observed
actions onto one’s own motor representations supports motor simulations of an
observed action, allowing the observer to then predict others’ intentions (Gallese
et al., 2004; Rizzolatti and Craighero, 2004; Iacoboni et al., 2005). In contrast,
the mentalizing system is composed of regions in the medial prefrontal cortex
(mPFC), posterior cingulate cortex (PCC), and the bilateral temporal-parietal
junctions (TPJ), and is thought to be involved in non-intuitive reflections of others’
mental states (Frith and Frith, 2006; Saxe and Powell, 2006; Saxe, 2006). These
regions have been linked to perspective-taking and tend to be activated by a
conscious effort to infer others’ intentions, across a variety of stimuli including
stories, cartoons, and images of others (Gallagher et al., 2000; Saxe and
Kanwisher, 2003; Frith and Frith, 2006).
A recent meta-analysis (Van Overwalle and Baetens, 2009) revealed that MNS
regions tend to be active when observing biological movement, with stronger
activity when observing familiar actions for which one has a pre-existing motor
representation (Calvo-Merino et al., 2005; Cross et al., 2006; Van Overwalle and
Baetens, 2009). In contrast, mentalizing regions are activated for higher-level
goal inferences, regardless of the presence of visual biological stimuli, and tend
to be active in the absence of existing motor representations, such as during
observation of movements that are unplanned, out-of-context, or biomechanically
impossible (Brass et al., 2007; Kilner and Frith, 2008; Liepelt et al., 2008; Van
Overwalle and Baetens, 2009). Thus, the existing body of research suggests that
these systems serve complementary roles, with mentalizing regions more active
during novel contexts and MNS regions more active during familiar contexts (Van
Overwalle and Baetens, 2009).
Although the task of understanding others’ actions is heavily social in nature, little
is known about how social factors affect the contributions and interactions of
MNS and mentalizing regions. Prior research has demonstrated that one’s race,
culture, religion, and even political affiliation can modulate cognitive and
sensorimotor processing during passive observation (Han and Northoff, 2008;
Serino et al., 2009; Xu et al., 2009). However, to date, no studies have explored
how these powerful factors modulate neural activity when asked to infer
intentions from observed human movements.
The current functional magnetic resonance imaging (fMRI) study manipulated
two critical factors related to familiarity (perceptual familiarity with the race of the
actor, motor familiarity from experience with the action) during the task of
inferring an actor’s intentions. The aim was to understand the specific
contributions of individual regions within the MNS and mentalizing systems when
perceptual and motor familiarity with the stimuli are manipulated. In order to
explore the social influences on action understanding in these two neural
systems, the current study utilized symbolic, or intransitive, gestures (e.g. thumbs
up), which are learned and familiarized through one’s cultural experiences
(Archer, 1997). As these gestures require an integration of visuomotor
representations with abstract intentions, they have been shown in separate
studies to activate regions of both MNS (Villarreal et al., 2008; Skipper et al.,
2009; Straube et al., 2009) and mentalizing networks (Gallagher and Frith, 2004).
There is sparse existing research on how perceptual familiarity modulates action
understanding. In the current study, we examined the effect of perceptual
familiarity on action understanding by manipulating the race of the actors used in
the stimuli (Chinese, Caucasian). To this end, we recruited Chinese individuals
living in mainland China who have limited exposure to, and thus less perceptual
familiarity with, Caucasian individuals compared to Chinese individuals. For the
sake of clarity, in this paper, we refer to this factor related to perceptual familiarity
specifically as “race,” while experience with an action (motor familiarity) will be
referred to simply as “familiarity.” However, we acknowledge that the construct of
race may also include many other factors not explored here, most notably, in-
group/out-group effects, and that these larger constructs related to race should
be further studied using a design comparing participants from two or more racial
groups.
Previous studies on the effects of race on the MNS demonstrate conflicting
evidence, with two transcranial magnetic stimulation (TMS) studies revealing
opposite results: one demonstrated increased corticospinal excitability during
observation of actors of one’s own race versus a different race (Molnar-Szakacs
et al., 2007), while the other found a reverse pattern (Desy and Theoret, 2007).
In support of the former results, an fMRI study showed greater activation in
regions associated with the MNS for more physically similar others than
physically dissimilar others (Buccino et al., 2004a). In addition, there is increased
activity in mentalizing regions when observing the eyes of one’s own race versus
the eyes of another race, suggesting increased higher-level processing of one’s
own race (Adams et al., 2009). Notably, these same-race effects may also be
due to one’s increased perceptual familiarity with one’s own race compared to
another race. We thus hypothesized that observation of same-race individuals,
who are more perceptually familiar than different-race individuals, ought to evoke
stronger activity in regions associated with both motor simulation and mentalizing
than observation of different-race individuals.
Furthermore, research on passive observation of familiar or unfamiliar actions
suggests that experience with the action increases MNS activity for one’s own
expert skilled actions, such as expert dancers watching their own dance form
versus an unfamiliar dance form (Calvo-Merino et al., 2005; Cross et al., 2006).
On the other hand, passive observation of actions that are biomechanically
impossible or that do not make sense within a context has been associated with
increased activation in mentalizing regions (e.g. turning on a light-switch with
one’s knee when one’s hands are free versus when one’s hands are occupied;
Brass et al., 2007; Kilner and Frith, 2008; Liepelt et al., 2008). Thus we
hypothesized that observations of familiar gestures, which are within one’s own
motor repertoire, would be easier to simulate and thus more strongly involve
MNS regions, while observations of unfamiliar gestures, which lack an existing
motor representation and may require additional reasoning capabilities, would
more strongly involve mentalizing regions.
2.2 MATERIALS AND METHODS
Participants
Eighteen healthy Chinese adults (10 males and 8 females, 18 to 30 years of age,
mean ± SD = 23.0 ± 2.28), born in and living in China, were recruited for this study. Participants were scanned while observing familiar and unfamiliar
gestures performed identically by two actors, one Caucasian and one Chinese.
All participants were right-handed, had normal or corrected-to-normal vision, and
had no neurological or psychiatric history. Written informed consent was obtained
from all participants before inclusion in the study. This study was approved by a
local ethics committee and the University of Southern California Institutional
Review Board and was performed in accordance with the 1964 Declaration of
Helsinki.
Stimuli
Action observation. The visual stimuli consisted of 2-second movie clips. Half of
the clips depicted a Caucasian actor performing expressive hand gestures that were
either familiar (i.e. thumbs up) or unfamiliar (i.e. “quail” in American Sign
Language) with his right hand. The other half depicted a Chinese actor making
identical gestures. Both actors were male, in their mid-20s, right-handed and of
similar physical build. While performing gestures, actors maintained a neutral
affect with gaze held directly forward and no additional eye movements, to avoid providing additional social cues. In addition, both actors were equally
familiar or unfamiliar with the gestures they were asked to perform and rehearsed
all gestures prior to filming. To assess the potential differences in familiarity with
gestures performed by Chinese and Caucasian actors, in a separate behavioral
study, we asked 64 Chinese participants to rate how familiar they were with the
gestures performed by the actors using a 3-point Likert scale (1= familiar,
3=unfamiliar). We found that there were no significant differences between the
gestures performed by Chinese and Caucasian actors; familiar gestures
performed by the two actors were judged as being equally familiar (Chinese:
1.35±.24, Caucasian: 1.38± .28, p = .43) while unfamiliar gestures performed by
both actors were judged as being equally unfamiliar (Chinese: 2.80±.09,
Caucasian: 2.77±.11, p = .51). Still photos of some of the different stimuli are illustrated in Figure 2-1. Each actor was filmed completing 6 different familiar
gestures and 6 different unfamiliar gestures, resulting in 12 clips per actor and 24
different clips total. A control for action observation consisted of a 2-second
presentation of a still photo made from the first frame of the video clips (6 stills
per actor and 12 different stills total).
Action execution. A cue for action execution trials consisted of a stimulus with
500 ms of a black box outlined in red, followed by 1500 ms of the red outline
around the still images used in control trials.
Figure 2-1 Examples of still images of the stimuli. Participants observed
2-second videos of familiar gestures (left panel), unfamiliar gestures (middle
panel), and control still images (right panel). Each gesture and still image
was performed by an actor of the participants’ own race (Chinese) and an
actor of a different race (Caucasian). Original videos were presented in full
color.
Task Design & Procedure
Action observation. Prior to scanning, in order to try to engage both the MNS
and mentalizing regions (de Lange et al., 2008), participants were instructed to
observe the video clips as though the actors were performing the gestures
directly to them and were asked to think about the actor’s intentions in doing
each gesture. Also prior to scanning, they were shown still photos of both actors
for 30 seconds each in order to become familiar with the actors’ faces. They were
finally instructed to actively infer the actor’s intentions by attending to the actor’s
hand movements, rather than the actor’s face, for the duration of the clips shown
during the scanning session and were informed that they would be asked the
meaning of each gesture immediately after the scanning session, as an
additional motivation to actively think about the intentions of each gesture clip.
Action execution. Participants were instructed to rest their right hand next to,
but not on, a button box. When cued by the red-outlined action execution stimuli,
participants moved their hand to the button box, using their index fingers to
repeatedly press the button for the duration of the clip.
General procedure and design. The video clips were presented through a
projector onto a rear-projection screen located at the subject’s head. Each movie
clip subtended a visual angle of 21.4°×17.1° at a viewing distance of 80 cm.
Each condition was shown for 18 trials per run for 3 runs, for a total of 54 trials
per condition, with the exception of the action execution condition, which was
shown for 9 trials per run for a total of 27 trials. All conditions, including action
observation and action execution conditions, were combined and evenly
distributed across three functional runs of 340 seconds (170 TRs) each.
Following an event-related design, each run used an optimized random
sequence generated in Optseq (http://surfer.nmr.mgh.harvard.edu/optseq/) with
an interstimulus interval between successive clips that was jittered between 0-5
seconds, with a mean of 2 seconds. A schematic of the general design can be
found in Figure 2-2.
Figure 2-2 Study paradigm and questions addressed in this study. On the
left is a representation of the experimental design. Each trial type (Action
Observation, Action Execution, Still) is displayed with a jittered rest after every
trial. Listed below the trial types are the subcategories per type (e.g., Action
Observation may be of a Chinese actor performing a familiar gesture, a Chinese
actor performing an unfamiliar gesture, a Caucasian actor performing a familiar
gesture, or a Caucasian actor performing an unfamiliar gesture). The number of
trials per condition is detailed in the methods section of the manuscript. The box
on the right displays the questions addressed in this study.
Behavioral methods. Following scanning, subjects were shown the gesture
stimuli on a computer outside the scanner and were asked to rate how familiar
they were with each gesture, using a Likert-type scale where 1 indicated
extremely unfamiliar and 10 indicated extremely familiar. They were also asked
how positive/negative they felt each gesture was and what they thought each
gesture meant, using a 3-point scale for positive, negative, or neutral for the
former and an open response for the latter. Finally, they were asked to rate how
much they liked each actor on a Likert-type scale where 1 indicated not liking the
actor at all and 10 indicated liking the actor very much. These scores were later
computed to ensure the stimuli were accurately perceived as either familiar or
unfamiliar and positive or negative and to ensure both actors were similarly
perceived. In addition, participants were given the Multigroup Ethnic Identity
Measure (MEIM), a self-report measure designed to examine one’s sense of
ethnic identity (Roberts et al., 1999, modified from Phinney, 1992).
fMRI Image Acquisition and Analysis
Scanning was performed at Peking University First Hospital on a GE 3-T scanner
with a standard head coil. Thirty-two transverse slices of functional images
covering the whole brain were acquired using a gradient-echo echo-planar pulse
sequence (64 × 64 × 32 matrix with a spatial resolution of 3.4 x 3.4 x 4.4 mm, repetition time=2000 ms, echo time=30 ms, FOV=24 x 24 cm, flip angle=90°). Anatomical images were obtained using a 3D FSPGR T1 sequence (256 x 256 x 128 matrix with a spatial resolution of 0.938 x 0.938 x 1.4 mm, TR=7.4 ms, TI=450 ms, TE=3.0 ms, flip angle=20°).
Imaging data was analyzed using SPM2 (Statistical Parametric Mapping 2; the
Wellcome Department of Cognitive Neurology, London, United Kingdom)
implemented in MATLAB (Mathworks Inc., Sherborn, MA, USA). The functional
data were first time-corrected to compensate for delays associated with
acquisition time differences between slices during the sequential imaging. The
functional images were then realigned to the first scan to correct for head motion
between scans. All six movement parameters (translation: x, y, z and rotation:
pitch, roll, yaw) were included in the statistical model. The anatomical image was
co-registered with the mean functional image produced during the process of
realignment. All images were normalized to a 2 x 2 x 2 mm³ Montreal Neurological Institute (MNI) template. Functional images were spatially smoothed
using a Gaussian filter with the full-width/half-maximum parameter (FWHM) set
to 8 mm. In addition, high pass temporal filtering with a cut-off of 180s was
applied. The event-related neural activity was modeled using a canonical
hemodynamic response function (HRF) with temporal derivative. Effects at each
voxel were estimated, and regionally specific effects were compared with linear contrasts in individual participants using a fixed-effects analysis.
A group-level random effects analysis was then conducted, taking into account
between-subject variability (Penny et al., 2004). A priori regions of interest (ROIs)
for the MNS (left IFG and IPL) and the mentalizing systems (dmPFC, PCC,
bilateral TPJ) were defined independently of the current dataset in order to avoid
circularity (Kriegeskorte et al., 2009). Functional definitions were taken from two
relevant papers on the MNS (Buccino et al., 2004b) and mentalizing system (den
Ouden et al., 2005), with the criteria that each paper reported activity in all ROIs within the given system and was well cited within the field. We employed a
small volume correction with a mask defining the 6 regions with 10-mm radius
spheres with centers at the peak activations from these papers. Results were
reported at the p < .05 level, FDR-corrected for multiple comparisons over the 6 ROIs, and with a cluster threshold of 8 contiguous voxels (k ≥ 8). Non-a priori
regions of significant activation were reported at the whole-brain level using a
threshold of p < 0.001 (uncorrected) and a cluster threshold of 8 contiguous
voxels (k ≥ 8).
ROI analyses were then performed by extracting beta-values from group-level
results within each of the previously defined 10 mm ROIs. A 2x2 repeated
measures ANOVA was performed on each ROI with the factors of familiarity and
race using the R statistical package (Ihaka and Gentleman, 1996), and results
were subjected to a Bonferroni correction for multiple comparisons.
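The analysis itself was run in R; purely as an illustration of the 2x2 within-subject design, a roughly analogous repeated measures ANOVA could be sketched in Python with statsmodels, where the subject identifiers, factor labels, and beta values below are placeholders.

import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(2)

# Placeholder long-format table: one mean ROI beta value per subject and condition.
subjects = range(18)
races = ["same", "different"]
familiarity = ["familiar", "unfamiliar"]
rows = [
    {"subject": s, "race": r, "familiarity": f, "beta": rng.normal()}
    for s in subjects for r in races for f in familiarity
]
data = pd.DataFrame(rows)

# 2x2 repeated measures ANOVA with within-subject factors of race and familiarity.
anova = AnovaRM(data, depvar="beta", subject="subject",
                within=["race", "familiarity"]).fit()
print(anova)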
2.3 RESULTS
Behavioral Results
Participants rated gestures from the category “familiar gestures” as significantly
more familiar than gestures from the category “unfamiliar gestures” (familiar:
9.40±1.27; unfamiliar: 3.17±1.96; p<0.001). Participants also accurately identified
all familiar gestures and were unable to accurately identify any of the unfamiliar
gestures. Participants also rated half of the gestures as neutral (51.7%), followed
by positive (28.3%) and negative (20.0%). Furthermore, there was no significant
difference in subjects’ responses to the question “How much do you like [actor’s
name]?” as subjects reported liking Caucasian and Chinese actors equally
(Caucasian: 6.78±2.05; Chinese: 6.17±1.54; p>0.3). All of the participants
reported having had limited interactions with Caucasian individuals, primarily through the media. In addition, participants' scores on the MEIM were
correlated with the fMRI data, as described below.
fMRI Results
All gestures versus control still images. Observation of all gestures versus
control still images activated a priori MNS regions of interest in the left dorsal
inferior frontal gyrus (IFG) and the left inferior parietal lobe (IPL). At the whole
brain level, the right posterior cingulate cortex, left middle temporal gyrus
(MT/V5), right fusiform gyrus, bilateral superior parietal lobules, left precentral gyrus, and left posterior superior temporal gyrus (pSTG), including Wernicke's
area, were active (see Figures 2-3/4 for fMRI results, Table 2-1 for all peak
activations, and Figure 2-5 for bar diagrams of beta values from ROIs).
Figure 2-3 Brain responses to observations of gestures versus still images.
All images displayed at p < .001 uncorrected for visualization purposes; x = -51.
(A) Observation of all gestures across familiarity and races versus still images
evoked greater activity in components of the MNS [the left dorsal inferior frontal
gyrus (IFG) and dorsal premotor cortex and inferior parietal lobule (IPL)], as well
as the posterior superior temporal sulcus (pSTS) and posterior cingulate cortex
(PCC; not shown). (B) Observation of the same race versus still (red) evoked
activity in the left IPL and pSTS, while observation of a different race versus still
(green) evoked activity in the left dorsal premotor cortex and pSTS. (C)
Observation of familiar gestures versus still images (red) evoked greater activity
in the left pSTS, while unfamiliar gestures versus still images (green) evoked
activity in dorsal IFG, IPL, and pSTS.
Same and different race observations. Observations of gestures performed by the same race (Chinese) and different race (Caucasian) actors were separately
contrasted to the control condition (still images). The same race (Chinese) actors
compared to stills displayed activity in a priori MNS regions of interest (the left
IFG and the left IPL) along with a large region of activity in the left postcentral
gyrus and bilaterally in the pSTS, SPL and the V5/MT region. In contrast, the
different race (Caucasian) versus stills resulted in no significant activity in a priori
regions. Whole-brain analyses at the p < .001 uncorrected level revealed activity
in the dorsal precentral gyrus (BA 6) bilaterally, the left pSTS, and the middle
occipital gyrus bilaterally.
The direct comparison of same race versus different race observations did not
reveal any significant activation in a priori regions of interests. However, at the
whole-brain level, the same versus different race contrast demonstrated activity
in the left anterior IPL (supramarginal gyrus) and the right posterior insula.
Different race versus same race showed increased activity only in the bilateral
fusiform gyri and left middle occipital gyrus extending into the middle temporal
gyrus (V5/MT; see Figure 2-4).
In addition, scores on the MEIM were correlated with fMRI activity during
observations of one’s own race versus a different race. Higher MEIM scores
were positively correlated with activity in the dmPFC when observing one’s own
race versus a different race, and with activity in the left dorsal premotor cortex
when observing a different versus same race (see Figure 2-5).
Table 2-1. Localization of brain activations from random effects analysis. A
priori regions (in bold) reported at p < .05 FDR, whole brain results reported at
p<.001 uncorrected at the voxel level, cluster threshold >8.
Anatomical Region BA T-Value Cluster Size Coordinates [x y z]
All Gestures > Still Photo
L Inferior parietal lobule 40 4.80 515 [-52 -32 34]
L Inferior frontal gyrus 44 4.09 336 [-48 10 38]
L V5/MT 18 11.86 1747 [-48 -70 0]
R Fusiform gyrus 37 8.89 1867 [50 -58 -18]
R Superior parietal lobule 7 5.22 106 [32 -56 60]
L Superior parietal lobule 7 5.17 156 [-26 -56 64]
L Precentral gyrus 6 4.73 21 [-46 2 50]
R Posterior cingulate cortex 30/31 4.10 15 [6 -38 12]
L Inferior frontal gyrus 44 4.09 26 [-48 10 38]
R Posterior superior temporal gyrus 22 3.93 10 [68 -34 14]
Same Race > Different Race
R Insula 4.387 20 [38 -2 -6]
L Inferior parietal lobule 2 4.20 37 [-58 -22 32]
Different Race > Same Race
L Middle occipital gyrus 19 5.64 57 [-28 -86 2]
L Fusiform gyrus 37 4.61 95 [-32 -66 -16]
R Fusiform gyrus 37 4.36 20 [30 -78 -10]
Familiar Gestures > Unfamiliar Gestures
R Posterior cingulate cortex 23 5.38 498 [6 -38 32]
L Temporoparietal junction 39 4.85 334 [-50 -66 38]
L Dorsal medial prefrontal cortex 32/9 4.09 421 [-4 44 26]
R Temporoparietal junction 39 3.56 267 [52 -68 40]
R Posterior cingulate cortex 23 7.39 178 [6 -34 36]
L Lingual gyrus 17/18 6.02 505 [-4 -82 2]
L Posterior cingulate cortex 31 5.38 113 [-4 -16 48]
L Temporoparietal junction 39 4.85 101 [-50 -66 38]
R Angular gyrus 40 4.36 33 [62 -54 34]
L Middle frontal gyrus 10 4.30 35 [-28 48 26]
R Posterior cingulate cortex 23 4.17 19 [10 -4 48]
L Dorsal medial prefrontal cortex 9 4.09 35 [-4 44 26]
R Calcarine gyrus 17 4.01 40 [14 -80 14]
Unfamiliar Gestures > Familiar Gestures
L Inferior parietal lobule 40 6.90 515 [-52 -30 36]
L Superior parietal lobule 7 10.25 1817 [-20 -70 60]
L Middle occipital gyrus 19/18 9.09 484 [-38 -78 4]
R V5/MT 18 7.75 671 [50 -72 2]
L Thalamus 4.50 13 [-14 -28 0]
L Superior frontal gyrus 6 4.23 9 [-20 -6 68]
R Superior parietal lobule 7 4.16 54 [18 -66 60]
Figure 2-4 Race-driven and experience-driven brain responses. All images
displayed at p < .001 uncorrected for visualization purposes. (A) Observations of
another race versus one’s own race (DifferentRace > SameRace) evoked greater
activity in the occipital cortex bilaterally in the fusiform gyrus and middle temporal
gyrus (area V5/MT; not shown; z = -11). (B) Observations of one’s own race
versus another race (SameRace > DifferentRace) evoked greater activity in the
left IPL and right posterior insula (not shown; x = -59). (C) Observations of
familiar gestures versus unfamiliar gestures (Familiar > Unfamiliar) evoked
greater activity in the dorsal medial prefrontal cortex (dMPFC), the posterior
cingulate (PCC), the cuneus, and the bilateral temporoparietal junctions (not
shown), regions associated with mentalizing and reasoning processes (x = -4).
(D) Observations of unfamiliar gestures versus familiar gestures (Unfamiliar >
Familiar) evoked greater activity in the left IPL and postcentral gyrus and the
bilateral middle temporal gyri (area V5/MT) in the putative extrastriate body area
(EBA; x = -53).
Figure 2-5. Correlations between Multigroup Ethnic Identity Measure
(MEIM) scores and neural activity. Participants’ scores on the MEIM, a
measure of ethnic identity, were correlated with neural activity from Same Race >
Different Race (in red) and Different Race > Same Race (in green) contrasts.
High MEIM scores correlated with Same Race > Different Race activations (in
red) in regions associated with mentalizing processes, including the dorsal
medial prefrontal cortex (dmPFC; as shown in panel A, x = 7). In contrast, high
MEIM scores correlated with Different Race > Same Race activations (in green)
in the left dorsal premotor region (x = -55), often associated with the MNS. These
findings suggest that there is some effect of racial identification on the neural
activity when observing one’s own, versus a different, race.
Familiar and unfamiliar gestures. Familiar gestures compared to still images
did not demonstrate significant activity in a priori regions of interest, but did
reveal activity in the left dorsal IFG, left pSTS and bilateral visual cortices
(V5/MT, middle occipital gyri) at the whole brain level. Unfamiliar gestures,
compared to still images, activated a priori regions of interest in the left dorsal
IFG and left IPL. They also activated the left pSTS, bilateral SPL, and bilateral
lateral middle temporal gyrus (V5/MT) into the putative extrastriate body area
(EBA), as well as in the fusiform gyri, at the whole brain level.
Familiar versus unfamiliar gestures revealed activity in a priori regions of interest,
the posterior cingulate cortex (PCC), the dorsal portion of the medial prefrontal
cortex (dmPFC) and the bilateral TPJ. Whole-brain analysis further demonstrated
activation in the bilateral occipital gyri within the primary visual cortex (BA 17/18).
In contrast, unfamiliar versus familiar gestures generated activity in an a priori
region of interest in the left IPL, along with additional activity at the whole brain
level along a large vertical region of the left postcentral gyrus, from the dorsal
aspect of the postcentral gyrus to the ventral portion of the supramarginal gyrus.
Additional activity also appeared in the bilateral SPL and bilateral V5/MT in the
putative EBA and fusiform gyri (see Figure 2-4).
Region of interest analyses. Beta values from our regions of interest in the
MNS (L IFG, L IPL) and mentalizing systems (mPFC, PCC, L TPJ, R TPJ), as
previously defined, were then extracted (see Figure 2-6) and analyzed in 2x2
repeated measures ANOVAs with factors of race and familiarity, corrected for
multiple comparisons, to examine whether there was a main effect or interaction
effect between race and familiarity in these regions.
Figure 2-6. MNS and mentalizing ROIs. Beta values from six ROIs were plotted
to visualize the fMRI results. Mentalizing regions of interest (in blue) in the left
TPJ, right TPJ, mPFC, and PCC demonstrated significantly higher beta values
for familiar (left) than unfamiliar (right) gestures. MNS region of interest (in
orange) in the left IPL demonstrated significantly higher beta values for unfamiliar
(right) than familiar (left) gestures. The left IFG also demonstrated this trend but
did not approach significance. (*) indicates a significance of p < .05. Top row: L
TPJ (p<.0015), R TPJ (p<.00046), mPFC (p<.0016). Bottom row: PCC (p<.042),
L IPL (p<.0062), L IFG (ns). In each graph, familiar gestures are plotted on the
left and unfamiliar gestures on the right.
We found a main effect of familiarity in the IPL, with beta values significantly greater for unfamiliar than familiar conditions (F = 18.45, p < .00012), whereas in three of the four mentalizing ROIs beta values were significantly greater for familiar than unfamiliar conditions, with the fourth nearing significance (mPFC: F = 15.35, p < .00058; L TPJ: F = 9.89, p < .010; R TPJ: F = 12.79, p < .0022; PCC: F = 5.67, p < .11). Additionally, the IPL
demonstrated a significant effect of race with beta values for observations of the
same-race actor greater than those of the different-race actor (F = 7.0, p < .05).
While none of the ROIs demonstrated a significant interaction effect between
race and familiarity with the gesture, two fMRI contrasts exploring interactions
between race and familiarity (Same Race + Familiar > Different Race +
Unfamiliar; Different Race + Unfamiliar > Same Race + Familiar) found significant
results in regions of the MNS and mentalizing systems (see Figure 2-7). In
addition, post-hoc analyses demonstrated a significant overlap between the BOLD signal from the action execution and action observation conditions, further supporting the presence of MNS activity identified using independently defined ROIs in the small volume correction (see Figure 2-8).
Figure 2-7 BOLD results of interaction effects between race and familiarity.
The possible interaction effects between Race and Familiarity are explored in
this analysis in the following contrasts: A) Same Race + Familiar Gesture >
Different Race + Unfamiliar Gesture (shown in red), demonstrating increased
activity in the bilateral TPJ, PCC (not shown), and the dmPFC (z =22), and
B) Different Race + Unfamiliar Gesture > Same Race + Familiar Gesture,
demonstrating increased activity in the bilateral MT/v5 and left SPL (z = 43;
no significant results in a priori ROIs in either contrast; results reported at p <
.001 uncorrected, whole brain analysis). These results suggest that
combining both racial similarity with motor familiarity heightens activity in the
prefrontal cortex, PCC, and bilateral TPJ, which are thought to be part of the
mentalizing network. In contrast, racial dissimilarity with motor unfamiliarity
heightens activity in visual regions and the SPL, associated with the planning
and monitoring of arm movements.
Figure 2-8 Overlap between action execution and other experimental
conditions. Results from Action Execution > Still contrast (shown in red) and,
clockwise from top left: A) Same Race > Different Race, B) Different Race >
Same Race, C) Familiar > Unfamiliar, and D) Unfamiliar > Familiar. All 4 action
observation contrasts listed are shown in green, with the overlap between the
execution and observation shown in yellow (notably, this only occurs in A and D).
This post-hoc analysis demonstrates more directly that similar voxels are active
during both Action Execution and the Unfamiliar > Familiar contrast, as well as
Action Execution and the Same Race > Different Race contrast, in line with the
results reported in the manuscript using a small volume correction with ROIs based on independently defined coordinates from a prior paper.
2.4 DISCUSSION
Abstract Gestures
Observations of all gestures compared to still images generated activity within
both the left dorsal IFG and the left IPL, which comprise the human MNS, as well
as the right PCC, which is thought to be a component of the mentalizing system
(Van Overwalle and Baetens, 2009), in line with our initial hypotheses. Previous
studies have focused on the task-dependent activity of either MNS or mentalizing
regions during gesture observation, with results reported in one system or the
other (Gallagher and Frith, 2004; Villarreal et al., 2008; Skipper et al., 2009;
Straube et al., 2009). The current findings, however, support recent literature
demonstrating activity of both systems during the general process of
understanding the intentions behind an observed gesture (Schippers et al.,
2009). Furthermore, these data support previous findings indicating that regions
of the human MNS are involved in the processing of manual gestures and
abstract communication (Corina and Knapp, 2006; Willems et al., 2007;
Gentilucci and Dalla Volta, 2008). The activation of the PCC, a region commonly
associated with the mentalizing system as well as with episodic and
autobiographical memory retrieval (Maddock et al., 2001), may be involved in
interpreting the actor’s intentions and/or comparing the observed stimulus to prior
memories in order to understand the gesture’s meaning. Altogether, our findings
suggest that observing symbolic gestures requires the interplay between regions
from both MNS and mentalizing regions.
Processing Perceptual Familiarity in Individuals of the Same versus a
Different Race
Observations of the same race compared to still images demonstrated significant
activity in MNS regions of interest (IFG, IPL). Observations of a different race
compared to still images generated no significant activity in any a priori regions of
interest. However, there was activity in regions associated with the MNS (e.g.,
the dorsal premotor cortex and pSTS at the whole-brain level; Van Overwalle and
Baetens, 2009), suggesting a less robust signal when observing a different race compared to one's own race, and possibly activity in different regions of the MNS than those engaged when observing one's own race. In addition, in accordance with our hypothesis,
observations of same-race actors directly contrasted with different-race actors
demonstrated greater activity in the posterior component of the MNS (the anterior
IPL), further contributing to the suggestion that actions of more perceptually
familiar and/or physically similar individuals are more readily mapped onto
sensorimotor representations of the self. Additional activity was found in the
insula and may indicate enhanced emotional processing for individuals of the
same race. These results are consistent with prior research suggesting that
greater shared physical properties are associated with increased activity in the
MNS (Buccino et al., 2004a; Molnar-Szakacs et al., 2007). Furthermore, prior
research has found that racial group membership increases emotional responses
to members of one’s own group (Xu et al., 2009).
In contrast, observations of different-race actors versus same-race actors
generated greater visual activity within the fusiform gyrus bilaterally, which is
thought to support processing of face stimuli (Kanwisher et al., 1997), as well as
in the middle occipital gyrus extending into area V5/MT which is the putative
extrastriate body area (EBA) and thought to support processing of body
movements (Astafiev et al., 2004). These findings are also in accordance with
prior research demonstrating that physically different others generate greater
activity in visual regions (Buccino et al., 2004a).
Interestingly, participants’ self-reported identification with their Chinese ethnicity correlated with greater mentalizing activity for one’s own race versus a different race and greater motor-related activity for a different race versus one’s own race. These results suggest that the more one identifies with one’s own race, the more one utilizes mentalizing regions to process one’s own race versus another race, a finding in accordance with previous research (e.g., Adams et al., 2009). In contrast, the more one identifies with one’s own ethnic group, the more motor-related activity one shows when observing different-race individuals. This result seems to conflict with our previous suggestion that we more readily map those who are perceptually familiar onto our own sensorimotor representations than those who are less familiar. Thus, it is likely that other variables come into play when we incorporate ethnic identification into the analysis.
Altogether, these results suggest that we are more apt to process the actions of those who are more perceptually familiar by engaging our own sensory-motor representations and emotional responses more strongly, as seen here when Chinese participants viewed actors of their own race. Thus, it appears that activity in MNS
and mentalizing regions may be modulated by social factors such as perceptual
familiarity and, in this case, race. This effect may be strengthened by one’s daily
life practice, particularly if one has limited experience or perceptual familiarity
with another racial group, as found in our pool of participants.
By contrast, when observing actors who are perceptually less similar to ourselves (e.g., actors of a different race), we may engage in increased visual
processing, particularly of individuals’ faces and body movements, as these often
may provide additional information that might assist us in understanding the
“other.” Notably, these results are seen despite the fact that, in the current study,
participants were asked to attend to the hand gesture rather than to the face of
the actor, thus decreasing the amount of direct attention to race, while in many
prior studies on race, participants are instructed to observe the faces of actors,
thus increasing the explicit attention to racial information.
Thus, although the effects of race may have been minimized by the task
instructions of specifically asking participants to focus on the hand gestures
rather than on the race of the actor, these results indicate an implicit, automatic
difference in neural processing despite an attentional focus elsewhere. Further
research employing eye tracking may be useful to assess whether diverted
attention to visual processing of different race individuals is in fact responsible for
decreased MNS activity. In addition, a better understanding of whether these
neural patterns of activation can be correlated with stereotyping or prejudiced
behavior would be beneficial.
Finally, it should be noted that these observed effects may be influenced by
cultural or racial factors. Recent cultural neuroscience studies have shown
increasing evidence that sociocultural contexts can influence or modulate neural
substrates of human cognition (Chiao & Ambady, 2007; Han & Northoff, 2008; Ito
& Bartholow, 2009). Culture-specific neural processes have been observed across many aspects of human cognition, including perception, attention, and emotion; for example, one recent study demonstrated culture-specific modulation of automatic fear responses (Chiao et al., 2008). As our participants were all
Chinese individuals, living in China, future research may use additional diverse
subject pools to explore cross-cultural differences in race-related effects on
action understanding networks to assess whether these results may be
modulated by the culture or race of the participants.
Gesture Familiarity
While both familiar and unfamiliar conditions activated regions of the MNS when
compared to observations of still images, during a direct comparison, familiar
gestures more strongly activated all four a priori regions associated with the
mentalizing network (mPFC, PCC, and bilateral TPJs), while unfamiliar gestures
more strongly activated parietal sensory-motor regions and the putative
extrastriate body area (EBA). These findings, which are reversed from our initial
hypothesis, seem to suggest that when observing, and likely trying to understand
the intentions of, an actor making a familiar action, we activate the MNS and additionally recruit components of the mentalizing network.
One explanation is that familiarity with the movement may not only provide
existing visually and motorically-based representations but also existing semantic
and episodic memories associated with the observed action. This is notable, as
participants were not explicitly tested on their motoric familiarity with the
observed gestures, and therefore may have seen and recognized—but never
personally performed—the familiar gestures. However, as prior studies have
found that both visual experience and personal motoric experience with an action
sequence can increase motor representations in MNS regions when observing
familiar actions (Cross et al., 2009), it may be that either visual or motoric familiarity is sufficient to produce the effect observed here. Thus,
regardless of whether or not participants had physically performed the gestures
themselves, it appears that as long as they were familiar with the gesture, they
more heavily recruited activity in regions associated with mentalizing processes.
This includes the dorsal mPFC, which is associated with general mentalizing
tasks and triadic social interactions in which two people jointly attend to a third
item or action (Saxe & Powell, 2006; Saxe, 2006). In addition, the bilateral TPJ,
involved in the direction of attention as well as perspective-taking, emotional
meaning, and linguistic associations (Saxe & Kanwisher, 2003), and the posterior
cingulate cortex, associated with monitoring the external environment and
episodic autobiographical memory retrieval (Gusnard & Raichle, 2001; Maddock
et al., 2001), may assist in taking the perspective of the actor and possibly linking
the familiar gesture with existing memories and experiences respectively. The
recruitment of these areas may reflect the individual’s ability to retrieve higher-
level intentions and goals from observed familiar actions based on prior
experiences with the familiar actions.
By contrast, this additional cognitive processing may not occur when observing
unfamiliar gestures, for which one has no prior experiences, memories, or
knowledge of meanings. Instead, when we try to understand an actor making an
unfamiliar action, motor-related regions within the MNS network, in particular the
IPL, become more active. Interestingly, these findings are similar to previous
studies demonstrating that observation of an unfamiliar action, with the intent to
imitate the observed action, generates greater activity in MNS regions than
familiar actions (Buccino et al., 2004b; Vogt et al., 2007). While participants were
not explicitly instructed to imitate the gestures, it is possible that understanding
unfamiliar gestures may implicitly recruit regions involved in imitation of the novel
action, in order to make sense of it. Thus, when inferring the higher-level goals of an action that is unfamiliar but that we are capable of performing, we may attempt to simulate the observed action, particularly using the IPL, which may contain pre-existing motor affordances (Oztop & Arbib, 2002), and the postcentral gyrus, reflecting greater motor readiness, in order to generate a basic understanding of the observed action (Vogt et al., 2007).
Interestingly, the IPL, which is a multi-modal region commonly associated with
grasp affordances, motor attention, body awareness and action planning (Oztop
& Arbib, 2002; Fogassi et al., 2005), showed increased activation both in response to unfamiliar gestures and in response to one’s own race. Thus the IPL may be involved in developing a sensorimotor representation of the observed action that is sensitive both to prior experiences with an individual and to prior experiences with a given action. Activity in the IPL may be increased automatically, via connections with the visual cortex and top-down modulation by context, when there is greater perceptual familiarity with an actor (i.e., when simulating someone of one’s own race), and via a more effortful cognitive process when an action is unfamiliar, directing attention to the manner in which the unfamiliar action is performed in order to extract basic motor goals and
intentions. In addition, the unpredictable nature of the unfamiliar gestures may
also explain the increased activation in the bilateral superior parietal lobule, a
region associated with error-monitoring and thought to be involved in building on-
line motor predictions of observed actions (Wolpert et al., 1998). Activity in this
region bilaterally may indicate a reaction to unexpected, unfamiliar movements
during internal simulation of the movement.
These results may seem contradictory to previous findings, as studies on passive
observations of actions that are unusual or inconsistent within a context have
demonstrated increased activity in mentalizing regions (Brass et al., 2007; Kilner
and Frith, 2008; Liepelt et al., 2008; Van Overwalle & Baetens, 2009). Such
results suggest that the mentalizing regions become active when a passively
observed goal does not have a matching motor representation, as in the case of
an unfamiliar action. However, such results may be heavily influenced by task
instructions, such as focusing on the goal of an action rather than on how the
action is performed (de Lange et al., 2008; Hesse et al., 2009). As our study is, to our knowledge, the first to employ a task instructing participants to actively infer intentions during observation of both familiar and unfamiliar human
movements, we propose that the unique social demands of inferring an
unfamiliar action’s intention—a scenario that occurs often in real life, such as
when in a new country or learning a new sport—modulates neural activity in MNS
and mentalizing regions differently than during passive observation. Furthermore,
observing familiar but contextually implausible actions (e.g., observing an actor
turn on a light switch with their elbow; Brass et al., 2007) may not require
additional sensorimotor representations but higher level reasoning capabilities
instead. Thus, the current results suggest that while understanding a familiar
action may involve regions associated with the mentalizing system,
understanding an unfamiliar action, for which one has no abstract, experience-
specific, or contextually-relevant information, may activate sensorimotor regions,
such as in the IPL, more heavily. Additional research may help us to elucidate
whether these results are due to the strategy employed in understanding the
observed actor’s gesture (e.g. trying to simulate the gesture versus trying to
objectively reason about the gesture), and the extent to which sociocultural
influences might determine the strategies involved.
CONCLUSION
Our results reveal a complex influence of both perceptual and motor familiarity with an actor or an action on the neural regions likely underlying action observation and intention understanding. This interplay requires contributions from both MNS and mentalizing regions during gesture observation and may be strongly modulated by a variety of social factors. Specifically, our
results suggest that observations of actors that are perceptually familiar are
associated with increased sensory-motor processing, whereas observations of
actors who are perceptually unfamiliar to oneself are associated with increased
visual processing. Furthermore, our data indicate that understanding familiar
gestures increases activity in the mPFC, PCC, and bilateral TPJs, suggesting an
increased engagement of mentalizing processes during this task. In contrast,
understanding unfamiliar gestures more strongly activates the posterior
component of the MNS (the IPL), possibly reflecting the generation of a motor-
based representation of the unfamiliar action, which may then provide basic
information about the goal of the observed action. Activity in the IPL may also
reflect violations of expectations during simulation, in accordance with the
region’s proposed role in the error-monitoring of actions.
In line with prior research, our findings also demonstrate that the MNS and
mentalizing regions are differentially modulated, supporting the idea that the two
systems largely perform complementary roles (Van Overwalle and Baetens,
2009). Our data suggest that such roles can further be dissociated along the
bases of familiarity with race or with actions, and that the activity in these regions
may be heavily dependent on the particular task and stimuli employed.
CHAPTER 3. Visual Experience Affects Action
Understanding Regions
ABSTRACT
We activate our own sensorimotor regions when simply observing actions
performed by others, with evidence that we engage specific regions of our body
representation in accordance with the specific body parts we observe. This
resonance between one’s own body and another’s is increased when we observe
actions within our own motor repertoire, with research from the prior chapter
suggesting we may also engage our own body when observing actions with
which we are unfamiliar. It is unclear, however, whether we engage our own
body representations when watching a body with which we are unfamiliar—one that is physically different from and novel to our own—and whether experience can lead to an increased sensorimotor response to the different body. Using fMRI, we
scanned typically-developed participants as they observed actions performed by
a novel biological effector (the residual limb of a woman born without arms) and a
familiar biological effector (a hand). Surprisingly, participants demonstrated
greater activity in their own sensorimotor regions when observing actions made
by the residual limb compared to the hand, with more empathic participants
activating their own sensorimotor regions more strongly. Participants were then provided with extended visual experience with each effector and scanned again, after which they demonstrated similar neural responses during residual limb and hand action observation. Altogether, these results suggest that we engage our own body
representations more when observing those who have different bodies from
ourselves and that visual experience attenuates the neural response to bodies
extremely different from our own.
3.1 INTRODUCTION
Regions of one’s own sensorimotor system become active when simply
observing actions performed by another, engaging specific cortical motor
representations that correspond to the observed body parts (Fadiga, Fogassi,
Pavesi, & Rizzolatti, 1995; Buccino et al., 2001). This ‘motor resonance’ between
observed actions and one’s own motor representation occurs in a network of
regions in the inferior frontal gyrus, ventral premotor cortex and inferior parietal
lobule collectively referred to as the action observation network (Rizzolatti &
Craighero, 2004; Caspers, Zilles, Laird, & Eickhoff, 2010). This network in
humans is related to the mirror neuron system, first identified in single-neuron recordings in macaque monkeys, where individual neurons fired both when the monkey performed and when it observed actions (Rizzolatti et al., 1996a; Gallese et al., 1996). Evidence
suggests that our own motor experiences affect how we activate this network,
with increased activity when observing individuals more similar to ourselves
(Buccino et al., 2004a; Molnar-Szakacs et al., 2007) or actions with which we are
more familiar (Calvo-Merino et al., 2005; Cross et al., 2006), leading some to
suggest we utilize our own motor representations to help understand others’
actions (Keysers & Gazzola, 2007). How then do we understand actions made
with a body that differs from our own?
Recent studies demonstrate that we may also engage motor regions when
observing actions beyond our own abilities (Aziz-Zadeh, Sheng, Liew, &
Damasio, 2011; Liew, Han, & Aziz-Zadeh, 2011), but these studies used actions or effectors that were visually familiar to the observer. As visual experience, even without
motor practice, can still allow one to incorporate novel actions into one’s own
motor repertoire, individuals demonstrate increased sensorimotor activity when
observing actions they have either performed or seen before (Cross et al., 2009).
Thus, what remains to be explored is how we process actions made by
individuals with body parts we do not have and have not seen before, and the
role of experience in modulating these responses. These questions hold
important implications not only for the scientific community but also for our
increasingly diverse society. In 2007, over 1.7 million individuals in the United
States alone had limb differences such as amputations (Center, 2011), and many
more have other uncommon physical differences. Such individuals cite perceived
social stigma as a major barrier to participating in their communities, affecting
their quality of life (Frank, 2000; Murray, 2009). Given that the average, typically
developed individual has limited exposure to individuals with physical differences,
can experience change how we represent bodies unlike our own?
To answer these questions, we scanned participants who had no prior
experience with individuals with amputations (novices) as they observed a
woman with bilateral arm amputations perform actions with her residual upper
limb, which extends several inches past her shoulder. They also observed a
typically developed woman perform the same actions with her hand. We scanned
them at initial viewing of residual limb and hand actions, as well as after they
received visual experience with both effectors.
3.2 MATERIALS AND METHODS
Participants
Nineteen healthy, typically-developed participants (9 females, 10 males; mean ±
SD = 24.8 ± 4.8 years) who had minimal to no prior experience with individuals
with amputations as assessed by a self-report questionnaire were recruited to
participate in this study. Amount of experience was briefly quantified during the
initial screening and further elaborated upon with an extensive behavioral
questionnaire after the fMRI scanning procedure. Detailed questions were not
asked prior to the fMRI experiment to avoid biasing participants to the goal of the
study. All participants were right-handed, had normal or corrected-to-normal
vision, and were safe for fMRI. Written informed consent was obtained from all
participants before inclusion in the study. This study was approved by the
University of Southern California Institutional Review Board and was performed
in accordance with the 1964 Declaration of Helsinki. Due to technical difficulties, data from three participants were incomplete and were excluded, resulting in a final sample of 16 novice participants.
Stimuli
Action observation runs. Stimuli consisted of 2-second video clips of goal-
matched actions performed by an individual with physical differences using her
upper residual limb and typically developed women using their hands (see Figure
3-1 for the experimental paradigm). Hand action observation (HAO) and residual
limb action observation (RLAO) clips both contained the same actions (e.g., flip
book page), performed by the right hand or right residual limb respectively.
Control stimuli consisted of still images of the hand (hand still; HS) or upper
residual limb (residual limb still; RLS) and were also presented for 2 seconds.
Catch trials consisted of a red frame outlining an image of a hand, presented for 2 seconds, indicating that participants should press a button on the button box they were given. Participants were informed that the button-press stimuli were included to ensure
they paid careful attention to the stimuli throughout each run. A fixation cross was
presented during rest trials and jittered between 2 and 8 seconds in duration.
Visual exposure runs. To provide participants with increased visual exposure to
both hand and residual limb actions, visual exposure stimuli consisted of 16-
second blocks of short 4-second video clips of different actions of each effector,
cropped to provide more of the body and context for each of the actions. Hand
visual exposure observations included actions such as a hand and arm twisting
off a bottle cap and threading a needle through a piece of fabric, while residual
limb visual exposure actions included using the residual limb to push objects and
using the residual limb plus mouth to manipulate a pencil.
MNS localizer run. In order to identify neural regions that were active both
during action observation and during action execution, participants performed
one MNS localizer run at the end of the scanning session (see Figure 3-2). This
entailed observing 3-second videos of hands picking up objects (e.g., keys, a
mug), still images of a hand next to an object, rest trials with a fixation cross, and
action execution trials. Action execution trials were cued by a red box flashing
briefly for 500 ms before a static image of a hand was presented for the
remaining 2500 ms. This cued participants to perform a basic hand action for the
duration of the clip.
Figure 3-1. Action observation run paradigm. Action observation runs
included observing hand actions, residual limb actions, hand still images, residual
limb still images, and jittered rests in an event-related design.
Task Design & Procedure
For a complete schema of the experimental session, see Figure 3-2. Participants
were provided short training runs outside of the scanner, prior to the scanning
session. For action observation runs, participants were asked to watch the
actions performed on the screen and pay attention to the movements and actions
that they saw. They were further informed that at the end of each run, they might
be asked “What was the last action you saw?” During the action observation
training, they were shown 4 clips each of hand actions and residual limb actions
to familiarize them with the format of stimulus presentation and the effectors they
would be seeing in the scanner, and to train them to respond to the catch trials.
An additional purpose of these clips was to lessen any initial emotional or attentional effects the unfamiliar effector might evoke. Action observation runs
consisted of 16 trials of each condition, plus a jittered rest period between 2 and
8 seconds. The trial order of the design was then optimized using a genetic
algorithm (Wager & Nichols, 2003). The total run time was 5 min and 36 seconds
(168 TRs), for each of 4 runs. For the analyses, the first two action observation
runs (PRE) and the second two action observation runs (POST) were averaged
for a total of 32 trials per condition in each analysis.
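As a rough illustration of how such a jittered event-related sequence fits within the stated run length, the sketch below assembles one possible trial order under simplifying assumptions (catch trials are ignored and the rest budget is spread in 2-second steps). The helper names are hypothetical; the study itself optimized trial order with a genetic algorithm (Wager & Nichols, 2003), which this sketch does not reproduce.

```python
import random

# Condition labels and timing taken from the text; everything else is illustrative.
TR_S = 2.0
RUN_LENGTH_S = 336.0                       # 168 TRs per action observation run
CONDITIONS = ["HAO", "RLAO", "HS", "RLS"]  # 2-s stimuli, 16 trials each
TRIALS_PER_CONDITION = 16
STIM_S = 2.0
MIN_REST_S, MAX_REST_S, STEP_S = 2.0, 8.0, 2.0

def jittered_rests(n, total_rest_s, seed=0):
    """Spread a fixed rest budget over n inter-trial intervals in 2-s steps,
    keeping each rest between 2 and 8 s (a crude stand-in for design optimization)."""
    rng = random.Random(seed)
    rests = [MIN_REST_S] * n
    budget = total_rest_s - MIN_REST_S * n
    while budget >= STEP_S:
        i = rng.randrange(n)
        if rests[i] + STEP_S <= MAX_REST_S:
            rests[i] += STEP_S
            budget -= STEP_S
    return rests

def build_run(seed=0):
    trials = CONDITIONS * TRIALS_PER_CONDITION
    random.Random(seed).shuffle(trials)
    rests = jittered_rests(len(trials), RUN_LENGTH_S - len(trials) * STIM_S, seed)
    onsets, t = [], 0.0
    for condition, rest in zip(trials, rests):
        onsets.append((t, condition))      # (onset in seconds, condition label)
        t += STIM_S + rest
    return onsets, t

if __name__ == "__main__":
    onsets, total_s = build_run()
    print(f"{len(onsets)} trials, run length {total_s:.0f} s = {total_s / TR_S:.0f} TRs")
```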
After two action observation runs (PRE), there was a visual exposure run, during
which participants observed longer video clips (16 seconds each) consisting of
several actions, followed by longer rest trials with a fixation cross (12 seconds).
Following this run, participants observed another two action observation runs
(POST).
For the MNS localizer run, participants were asked to watch the actions
performed on the screen and pay attention to the movements and actions that
they saw. When they saw the red frame indicating an action execution trial, they
were asked to move their right hand as though picking up a wine glass several
times, for the duration of the clip (generally 2-3 times; for a schema of the MNS
localizer paradigm, see Figure 3-3). They were further instructed to remain still
for all other times. Once in the scanner, participants were monitored for
extraneous movements via an MRI-safe mirror placed next to the scanner bed,
which allowed for monitoring of hand movements from the control room. Any
non-task-related movements were then recorded by an experimenter in sync with
the stimulus presentation. Trials with movement were removed from further
analyses.
After the scanning session, participants completed a demographics questionnaire
with in-depth questions about their prior experiences with individuals with
physical differences and impressions of the observed videos. They also rated on
a 10-point Likert scale how familiar they were with each effector, whether they
felt watching the videos helped them to understand hand/residual limb actions
better, and whether they felt watching the videos made them any more likely to
interact with an individual with hands/residual limbs (1= very unfamiliar;
unhelpful; unlikely to interact; 10 = very familiar; helpful; likely to interact). Finally,
participants also completed the Interpersonal Reactivity Index (IRI; Davis, 1983), a self-report behavioral measure of both cognitive and emotional empathy.
Figure 3-2. Overall experimental paradigm.
Figure 3-3. Mirror neuron system localizer run paradigm. The MNS localizer
run included observing hand actions, hand still images, executing actions, and
jittered rests in an event-related design.
Figure 3-4. Results from the mirror neuron system localizer. Results from the
overlap between action observation and action execution from the separate MNS
localizer run, masked with the Harvard-Oxford probabilistic atlas definitions for
the bilateral inferior frontal gyri/ventral premotor cortices and the bilateral inferior
parietal lobule. These four regions of activation were then used as MNS regions
of interest for the main analyses.
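A minimal sketch of how an overlap ROI of this kind might be constructed with nibabel and NumPy is shown below. The input file names, the 25% atlas probability cutoff, and the assumption that all maps are already resampled to the same standard-space grid are illustrative choices, not details taken from the study.

```python
import nibabel as nib
import numpy as np

# Hypothetical inputs: thresholded z-statistic maps from the MNS localizer run
# (observation and execution) and a Harvard-Oxford probability map (0-100 scale),
# all assumed to be in the same 2 mm standard space.
obs_img = nib.load("localizer_observation_zstat.nii.gz")
exe_img = nib.load("localizer_execution_zstat.nii.gz")
atlas_img = nib.load("harvardoxford_ifg_pmv_prob.nii.gz")

Z_THRESH = 2.3      # cluster-forming threshold used in the study
PROB_THRESH = 25    # assumed atlas probability cutoff (not stated in the text)

# Voxels active during BOTH observation and execution, within the anatomical mask
overlap = (
    (obs_img.get_fdata() > Z_THRESH)
    & (exe_img.get_fdata() > Z_THRESH)
    & (atlas_img.get_fdata() > PROB_THRESH)
)

roi_img = nib.Nifti1Image(overlap.astype(np.uint8), obs_img.affine, obs_img.header)
nib.save(roi_img, "mns_roi_ifg_pmv.nii.gz")
print(f"ROI contains {int(overlap.sum())} voxels")
```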
Scanning Procedure
The images were presented through a projector onto a rear-projection screen
attached to the head coil and located above the subject’s head. As described
previously, participants’ actions were monitored via an MRI-safe mirror placed
adjacent to the scanner bed, allowing the experimenter to observe movements
from the control room. The experiment utilized an event-related design in the
Action Observation runs in which all conditions (HAO, RLAO, HS, RLS, rest)
were evenly distributed across 4 runs, which lasted 336 seconds (168 TRs). The
Visual Exposure run utilized a block design with 16-second blocks,
counterbalanced across participants, with 12-second rests in between and lasted
464 seconds (232 TRs). The MNS Localizer run utilized an optimized event-related design, with 30 trials of each observation condition (AO, S) and 15 trials of action execution, as execution tends to produce a more robust signal and thus requires fewer trials. This run lasted 984 seconds (492 TRs).
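For reference, these run durations follow directly from the reported volume counts and the 2-second TR given under Image Acquisition; a trivial check of that arithmetic, assuming the same TR applies to all functional runs:

```python
TR_S = 2.0  # repetition time reported under Image Acquisition

runs = {
    "Action observation (each of 4 runs)": 168,   # TRs
    "Visual exposure": 232,
    "MNS localizer": 492,
}

for name, n_trs in runs.items():
    print(f"{name}: {n_trs} TRs x {TR_S:.0f} s/TR = {n_trs * TR_S:.0f} s")
# Expected output: 336 s, 464 s, and 984 s respectively.
```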
Image Acquisition
All images were acquired using a Siemens MAGNETOM Trio 3T MRI scanner
with standard head coil. A high resolution T1-weighted anatomical volume was
acquired from each participant (208 coronal slices, 256 x 256 x 208 matrix with a
spatial resolution of 1 x 1 x 1 mm, TR=1950 ms, TE=2.56 ms, FOV = 256 mm;
flip angle=90º). Functional volumes were acquired while participants performed
the action observation, visual exposure, and MNS localizer runs. Thirty-seven
axial slices of functional images covering the whole brain were acquired using a
gradient-echo echo-planar pulse sequence (64 × 64 ×37 matrix with a spatial
resolution of 3.5 x 3.5 x 3.5 mm, TR=2000 ms, TE=30 ms, FOV=224 mm, flip
angle=90°). Functional volumes were acquired using Siemens' prospective
acquisition correction (PACE) technique for motion correction in which head
movements were calculated by comparing successively acquired volumes and
were corrected online (Thesen, Heid, Mueller, & Schad, 2000).
Data Processing and Analyses
Functional data processing was carried out using FEAT (FMRI Expert Analysis
Tool) Version 5.98, part of FSL (FMRIB's Software Library,
www.fmrib.ox.ac.uk/fsl). The following pre-statistics processing steps were applied to
individual subjects: motion correction using MCFLIRT (Jenkinson, Bannister,
Brady, & Smith, 2002), slice-timing correction using Fourier-space time-series
phase-shifting; non-brain removal using BET (Smith, 2002), spatial smoothing
using a Gaussian kernel of FWHM 5mm, grand-mean intensity normalisation of
the entire 4D dataset by a single multiplicative factor, and highpass temporal
filtering (Gaussian-weighted least-squares straight line fitting, with sigma=65.0s)
(Smith, 2002; Jenkinson et al., 2002). For each subject, a time-series statistical
analysis was carried out using FILM GLM with local autocorrelation correction
(Woolrich, Ripley, Brady, & Smith, 2001). Z (Gaussianised T/F) statistic images
were then thresholded at p=0.001 (uncorrected), and registered to a high
resolution standard space image (2 x 2 x 2 mm³ Montreal Neurological Institute (MNI) space) using FLIRT (FSL’s Linear Image Registration Tool) (Jenkinson &
Smith, 2001; Jenkinson et al., 2002). A second-level analysis to average across
the two runs in PRE and POST conditions respectively was carried out using a
fixed effects model, by forcing the random effects variance to zero in FLAME
(FMRIB's Local Analysis of Mixed Effects) (Beckmann, Jenkinson, & Smith,
2003; Woolrich, Behrens, Beckmann, Jenkinson, & Smith, 2004; Woolrich, 2008).
A group-level analysis was then completed using FLAME stage 1, which
employed a mixed effects model that includes both fixed effects and random
effects from cross session/subject variance (Beckmann et al., 2003; Woolrich et
al., 2004; Woolrich, 2008). Z (Gaussianised T/F) statistic images at this level
were thresholded using clusters determined by Z > 2.3 and a (corrected) cluster
significance threshold of P=0.05 (Jezzard, Matthews, & Smith, 2001).
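The steps above were run through FEAT’s standard pipeline; purely as an illustrative sketch, a roughly comparable sequence could be scripted against FSL’s command-line tools as below. File names are placeholders, slice-timing correction and grand-mean scaling are omitted, the smoothing sigma is FWHM 5 mm divided by 2.355, and the highpass sigma of 65 s corresponds to 32.5 volumes at TR = 2 s; this is not the exact pipeline the study used.

```python
import subprocess

def run(cmd):
    """Run one FSL command-line call, echoing it and failing loudly on error."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

func = "sub01_actionobs_run1"           # hypothetical 4D functional image (prefix)
std = "MNI152_T1_2mm_brain.nii.gz"      # FSL standard-space template

# Motion correction (MCFLIRT) and brain extraction (BET in 4D/functional mode)
run(["mcflirt", "-in", f"{func}.nii.gz", "-out", f"{func}_mc"])
run(["bet", f"{func}_mc.nii.gz", f"{func}_brain.nii.gz", "-F"])

# Spatial smoothing: FWHM 5 mm -> Gaussian sigma of roughly 5 / 2.355 = 2.12 mm
run(["fslmaths", f"{func}_brain.nii.gz", "-s", "2.12", f"{func}_smooth.nii.gz"])

# Gaussian-weighted highpass filter: sigma = 65 s / 2 s per volume = 32.5 volumes
run(["fslmaths", f"{func}_smooth.nii.gz", "-bptf", "32.5", "-1",
     f"{func}_filt.nii.gz"])

# Linear registration to 2 mm MNI space with FLIRT
run(["flirt", "-in", f"{func}_filt.nii.gz", "-ref", std,
     "-out", f"{func}_std.nii.gz", "-omat", f"{func}_func2std.mat"])
```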
Region of interest analyses were performed for a priori regions in the MNS
(bilateral IFG/PMv and IPL). These regions were defined by the overlap between
action observation and action execution during the MNS localizer run, and further
masked by anatomical definitions based on the probabilistic Harvard-Oxford atlas
of the IFG/PMv and IPL respectively. IFG and ventral premotor regions were
combined into one region of interest as prior meta-analyses of the MNS suggest
that both comprise the frontal component of the MNS (Van Overwalle & Baetens,
2009). Percent signal change (%SC) for the observation of each effector
(HAO/RLAO) compared to the control still image (HS/RLS) was then extracted
using Featquery in FSL. These values were contrasted through paired t-tests.
Finally, correlation analyses between the %SC values from the ROI analyses and
scores on the Interpersonal Reactivity Index empathy questionnaire and the
demographics questionnaire were run in SPSS (Release Version 18.0, © SPSS,
Inc., 2009, Chicago, IL, www.spss.com).
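As a schematic of the statistics described here, the following snippet runs a paired t-test and a brain-behavior correlation in SciPy on fabricated percent-signal-change and IRI values for 16 hypothetical participants; it stands in for the Featquery extraction and the SPSS analyses and uses no real data from the study.

```python
import numpy as np
from scipy import stats

# Fabricated example values: percent signal change in one MNS ROI (e.g., R IPL)
# per participant for each observation contrast, plus IRI Empathic Concern scores.
psc_rlao = np.array([0.42, 0.31, 0.55, 0.28, 0.47, 0.36, 0.51, 0.40,
                     0.33, 0.45, 0.38, 0.29, 0.49, 0.35, 0.44, 0.30])
psc_hao = np.array([0.30, 0.27, 0.41, 0.22, 0.35, 0.31, 0.38, 0.29,
                    0.26, 0.37, 0.30, 0.24, 0.36, 0.28, 0.33, 0.25])
iri_ec = np.array([22, 18, 26, 17, 24, 20, 25, 21,
                   19, 23, 20, 16, 25, 19, 22, 18])

# Within-subject comparison of the two observation conditions (paired t-test)
t_stat, p_two_tailed = stats.ttest_rel(psc_rlao, psc_hao)
print(f"paired t = {t_stat:.2f}, two-tailed p = {p_two_tailed:.4f}")

# Brain-behavior correlation: ROI percent signal change vs. empathy score
r, p_corr = stats.pearsonr(psc_rlao, iri_ec)
print(f"Pearson r = {r:.2f}, p = {p_corr:.4f}")
```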
3.3 RESULTS
Behavioral Results
Based on the demographics questionnaires completed after the scan, novice participants rated themselves as significantly more likely to interact with an individual with an upper residual limb after watching the videos than to interact with an individual with typically developed hands (RL: 5.69±2.52, H: 4.43±2.61, t=-2.21, p=.043). In addition, the videos helped novice
participants to understand residual limb actions significantly more than they
helped participants to understand hand actions (RL: 8.13±1.93, H: 5.188±2.95,
t=-5.74, p<.00005). They also demonstrated a range of scores on the empathy
measure, on the IRI total and the Perspective Taking and Empathic Concern
subscales (IRI TOTAL: 65.75±9.67, IRI Perspective Taking: 19.75±3.30; IRI
Empathic Concern: 21.00±3.76).
fMRI Results
Novice, Action Observation (PRE-Visual Exposure). In novices during the
PRE-visual exposure action observation runs, Residual Limb Action Observation
versus Residual Limb Still images (RLAO>RLS) activated the right dorsal and
ventral premotor cortices, the bilateral inferior and superior parietal lobules, and
the bilateral lateral occipital cortices from the posterior middle temporal gyrus
(MT/V5) into the medial lingual gyri (BA 17/18; see Tables 1-4 for all coordinates
of significant clusters of activation). Hand Action Observation versus Hand Still
images (HAO>HS) generated a similar pattern of activity in bilateral inferior and
superior parietal lobules and the bilateral occipital cortices (MT/V5 into V1). In the
direct contrast between Residual Limb and Hand Action Observation
(RLAO>HAO), there was activity in the bilateral IPL including the supramarginal
gyrus, postcentral gyrus, and extending into the superior parietal lobules and the
posterior middle temporal gyrus (MT/V5; see Figure 3-5). Hand versus Residual
Limb Action Observation (HAO>RLAO) generated activity in the bilateral occipital
poles (BA 17/18) only (see Figure 3-5).
Novices, Action Observation (POST-Visual Exposure). After the Visual
Exposure run, novices viewing Residual Limb Action Observation versus
Residual Limb Still images (RLAO>RLS) activated the bilateral inferior and
superior parietal lobules and bilateral occipital cortices from MT/V5 extending into
V1, similar to the pre-visual exposure runs. Hand Action Observation versus
Hand Still images (HAO>HS) resulted in a similar pattern of activity with clusters
of activity in bilateral superior parietal regions, left inferior parietal lobule, right
posterior superior temporal sulcus into the inferior parietal lobule, and strong
bilateral occipital activation (MT/V5 into V1). Residual Limb versus Hand Action
Observation (RLAO>HAO) in the POST run demonstrated activity in the right
superior parietal lobule and bilateral posterior middle temporal gyri (MT/V5; see
Figure 3-5). In contrast, Hand versus Residual Limb Action Observation
(HAO>RLAO) again generated activity in the bilateral occipital poles (BA 17/18)
only (see Figure 3-5).
Figure 3-5. fMRI results when novice viewers observe residual limb and
hand actions for the first time (PRE novices) and after visual exposure
(POST novices). TOP – PRE. ORANGE: Residual Limb > Hand activated
sensorimotor regions, including the bilateral inferior and superior parietal lobules,
and occipital regions. BLUE: Hand > Residual Limb generated significant
activation in the bilateral occipital poles. BOTTOM – POST. ORANGE: Residual
Limb > Hand activated the right superior parietal lobule and occipital regions,
including MT/V5. BLUE: Hand > Residual Limb generated significant activation in
the bilateral occipital poles. All results thresholded at Z > 2.3, p < 0.05 (cluster
corrected for multiple comparisons).
Table 3-1. Localization of brain activations in novices during the PRE condition. Group-level random effects analyses, thresholded at Z>2.3, p<.05, cluster corrected for multiple comparisons.

Coordinates [x y z] | Anatomical Region | Z-Stat | Cluster Size [2 mm³ voxels] | Cluster Index

Residual Limb Action Observation > Still Photo of Residual Limb
[50 -66 0] | R lateral occipital cortex (MT/V5) | 5.67 | 20946 | 3
[-48 -74 6] | L lateral occipital cortex (MT/V5) | 5.62 | - | 3
[62 -34 22] | R inferior parietal lobule / supramarginal gyrus | 5.11 | - | 3
[-58 -22 20] | L inferior parietal lobule / supramarginal gyrus | 4.77 | - | 3
[34 -44 50] | R superior parietal lobule (BA 7) | 4.58 | 1307 | 2
[52 6 38] | R ventral precentral gyrus | 3.56 | 484 | 1
[44 2 48] | R dorsal precentral gyrus | 3.54 | - | 1

Hand Action Observation > Still Photo of Hand
[-50 -76 -2] | L lateral occipital cortex (MT/V5) | 5.91 | 15589 | 3
[48 -60 0] | R lateral occipital cortex (MT/V5) | 5.57 | - | 3
[66 -26 32] | R inferior parietal lobule | 3.26 | - | 3
[-58 -24 22] | L inferior parietal lobule / supramarginal gyrus | 3.95 | 936 | 2
[-58 -30 36] | L postcentral gyrus | 3.55 | - | 2
[-48 -38 22] | L posterior superior temporal gyrus | 3.40 | - | 2
[-32 -50 62] | L superior parietal lobule (BA 7) | 3.96 | 344 | 1

Residual Limb Action Observation > Hand Action Observation
[42 -70 2] | R lateral occipital cortex (MT/V5) | 5.62 | 13686 | 3
[24 -60 -8] | R lingual gyrus (BA 17/18) | 4.89 | - | 3
[62 -28 20] | R inferior parietal lobule | 3.98 | - | 3
[44 -38 60] | R superior parietal lobule | 3.01 | - | 3
[-50 -70 4] | L lateral occipital cortex (MT/V5) | 4.45 | 5120 | 2
[-18 -60 64] | L superior parietal lobule | 4.04 | - | 2
[-50 -30 34] | L inferior parietal lobule / supramarginal gyrus | 4.12 | 1169 | 1

Hand Action Observation > Residual Limb Action Observation
[28 -102 4] | R occipital pole | 4.10 | 959 | 2
[-18 -94 -20] | L occipital pole | 3.85 | 569 | 1
Table 3-2. Localization of brain activations in novices during the POST condition. Group-level random effects analyses, thresholded at Z>2.3, p<.05, cluster corrected for multiple comparisons.

Coordinates [x y z] | Anatomical Region | Z-Stat | Cluster Size [2 mm³ voxels] | Cluster Index

Residual Limb Action Observation > Still Photo of Residual Limb
[-46 -76 2] | L lateral occipital cortex (MT/V5) | 5.30 | 15172 | 4
[44 -78 2] | R lateral occipital cortex (MT/V5) | 5.23 | - | 4
[56 -30 18] | R inferior parietal lobule / supramarginal gyrus | 3.94 | - | 4
[-44 -34 20] | L inferior parietal lobule / supramarginal gyrus | 3.75 | 816 | 3
[34 -50 60] | R superior parietal lobule (BA 5/7) | 4.09 | 765 | 2
[-34 -50 54] | L superior parietal lobule (BA 5/7) | 3.68 | 422 | 1

Hand Action Observation > Still Photo of Hand
[-46 -76 0] | L lateral occipital cortex (MT/V5) | 5.81 | 19348 | 3
[46 -64 0] | R lateral occipital cortex (MT/V5) | 5.22 | - | 3
[38 -56 54] | R superior parietal lobule (BA 7) | 3.24 | - | 3
[-34 -52 56] | L superior parietal lobule (BA 5/7) | 4.01 | 1100 | 2
[-40 -30 34] | L inferior parietal lobule / supramarginal gyrus | 3.18 | - | 2
[60 -32 20] | R temporoparietal junction / posterior superior temporal gyrus | 3.88 | 520 | 1

Residual Limb Action Observation > Hand Action Observation
[50 -78 8] | R lateral occipital cortex (MT/V5) | 4.90 | 4071 | 2
[22 -60 54] | R superior parietal lobule (BA 5/7) | 3.37 | - | 2
[-36 -88 16] | L lateral occipital cortex (BA 18/19) | 3.60 | 701 | 1
[-50 -68 -2] | L lateral occipital cortex (MT/V5) | 3.43 | - | 1

Hand Action Observation > Residual Limb Action Observation
[32 -94 -6] | R occipital pole | 3.89 | 908 | 2
[-20 -98 -12] | L occipital pole | 4.01 | 744 | 1
Parameter Estimates
In order to ascertain whether the differences between Residual Limb and Hand
contrasts in PRE and POST runs were due to a global decrease in signal in the
POST run, percent signal change was then calculated for each of the contrasts
(Hand Action Observation > Hand Still; Residual Limb Action Observation >
Residual Limb Still) over four a priori ROIs in the MNS (bilateral IFG/PMv and
IPL), which were defined by a functional localizer run and masked with a
probabilistic anatomical map.
For novices, a one-tailed paired samples t-test revealed a significant decrease in percent signal change from PRE to POST for RLAO > RLS at the L IPL (t=2.18, p=.02) and a marginally significant decrease at the R IFG (t=1.6, p=.06) and R IPL (t=1.6, p=.06). There were no significant differences in percent signal change between PRE and POST runs for HAO > HS. For results, see Figure 3-6.
Figure 3-6. Percent signal change in regions of interest from fMRI results in
novices observing Hand and Residual Limb actions pre and post visual
exposure. Percent signal change in MNS regions of interest from fMRI results in
novices when observing A) Hand Actions > Hand Still images (HAO > HS) and B)
Residual Limb Action Observation > Residual Limb Still images (RLAO > RLS) at
PRE and POST visual exposure. Only the residual limb condition demonstrated a difference in MNS regions between PRE and POST observations, significant at the L IPL (p < .05) and marginally significant at the R IFG and R IPL (p = .06).
Correlations
Percent signal change in MNS regions of interest positively correlated with
several scores on the Interpersonal Reactivity Index empathy subscales. Novices
in the PRE condition demonstrated significant positive correlations between activity in the R IPL during RLAO > RLS and both the empathic concern (IRI EC; r=.66, p=.006; see Figure 3-7) and perspective taking (IRI PT) subscales (r=.54, p=.03). This also occurred at the R IPL for the contrast RLAO > HAO with empathic concern (IRI EC; r=.51, p=.04). There were no significant correlations
for novices in the POST condition with empathy.
Figure 3-7. Correlation between percent signal change in right inferior
parietal lobule during residual limb observation and empathic concern
scores. Percent signal change in the R IPL when novices observed Residual
Limb Action Observation > Residual Limb Still (RLAO > RLS) in the PRE
condition correlated with Empathic Concern scores on the Interpersonal
Reactivity Index (r=.66, p=.006). Correlations also occurred between the R IPL
and the Perspective Taking subscale of the IRI, and the R IPL and Empathic
Concern during RLAO > HAO.
3.4 DISCUSSION
The current study examined how we respond to actions made by bodies different
from our own and how this changes with experience, using fMRI as participants
observed an individual with residual limbs and an individual with hands perform
actions before and after visual exposure. The results suggest that we initially
engage our own sensorimotor regions more when observing individuals with
different bodies, despite the fact that the observed effector has not been seen
before and lacks corresponding motor representations in the observer’s body.
More empathic individuals demonstrate this increased sensorimotor response to
a greater degree than less empathic individuals, and experience—even simply
through visual exposure—attenuates the neural response to the novel effector,
resulting in similar neural responses to both residual limb and hand actions.
Increased Sensorimotor Activity for the Novel Effector
Observing the novel effector activated one’s own sensorimotor regions more than
observing the familiar effector. This challenges the traditional notion that we
activate our own sensorimotor regions more for actions within our motor
repertoire, as found in expert ballet dancers watching ballet compared to
capoeira, or individuals watching dance moves they had rehearsed versus not
rehearsed (Calvo-Merino et al., 2005; Cross et al., 2006; Cross et al., 2009).
These studies concluded that action observation engages one’s existing motor
repertoire to understand others’ actions, with more activation when observing
actions for which one has a greater motor representation. However, here we find
the opposite, with more activation when observing actions for which one has less
of a motor representation. This occurs despite the fact that the novel actions are
not only beyond one’s current motor abilities, but also beyond one’s future
capabilities, as they are performed by a body part that one does not possess.
Increased activation occurred in both the inferior and superior parietal lobules
when novices observed the novel versus familiar effector. These regions are
associated with a multitude of tasks, including directing attention, spatial
integration of visuomotor information, and encoding the kinematics and
orientation of movements within body space (Rizzolatti et al., 1997; Colby &
Goldberg, 1999; Oztop et al., 2006). We propose that novice individuals engage
their own sensorimotor regions in an attempt to match the unfamiliar kinematics
of the observed, novel actions to a corresponding body part in their own motor
schema. In particular, they may use their existing knowledge of motor kinematics,
based on their own bodily experiences, to map the affordances of the residual
limb by dissecting the kinematics of each part of the effector onto the closest
match in their own body—in this case, possibly matching the residual limb to
one’s upper or lower arm (Arbib et al., 2009; Bonaiuto & Arbib, 2010). Similar to
the suggestion that tool use may extend our internal body representations distally
to incorporate the tool into our internal motor schema (i.e., “distalization of the
end effector,” (Arbib et al., 2009)), we may also “proximize” our end effector to
match the shortened length of the observed residual limb. However, rather than
update the internal model through sensory and motor feedback from using the
tool, we may evoke an internal representation to simulate sensory and motor
feedback in order to make accurate predictions about the residual limb’s shortened
kinematics. Through generating an internal model via action observation and
internal simulation, we may then develop correspondences between observed
actions and our own motor abilities.
Importantly, this match does not need to be perfect. Previous research has
demonstrated that training which pairs action observation with incongruent motor performance (e.g., observing finger flexion while performing finger extension) can still result in greater motor excitability for the paired stimulus, regardless of whether it is anatomically corresponding or not (Catmur, Walsh, & Heyes, 2007). Thus, while in the current
study, participants were not given explicit instructions to map the residual limb to
their own bodies, it appears that one way of understanding a novel body is to
subconsciously map it onto one’s own. The plasticity of action observation
regions, which allows for the ability to pair an observed action with an
incongruent motor performance, also warrants further exploration. Specifically, if
individuals can be trained to consciously ‘match’ an observed effector, such as a
residual limb, to a specific body part (e.g., their own elbow or upper arm), this
may result in increased understanding of the different body. In addition,
qualitative methods may be useful in studying changes in attitude and behaviors
towards individuals with different bodies (such as level of comfort around the
other individual, feeling of understanding of the other individual, likeliness to
interact with the other individual, etc.) in order to assess how changes in neural
activity in one’s own motor-related regions may result in changes in behaviors
and attitudes.
Emerging literature also supports the idea that increased sensorimotor activity
towards the novel body part may reflect an increased effort to use one’s own motor regions to understand the observed body. Recent work has shown that
observing novel compared to familiar gestures also increases activation in one’s
own parietal regions (Liew, Han, et al., 2011) while another study demonstrated
increased activation of both premotor and parietal regions when observing robot-
like compared to human-like movements (Cross et al., 2011). A third study found
activity in premotor, parietal, and dorsolateral prefrontal regions when observing
novel versus familiar guitar patterns, which the authors suggested reflects an
increased effort towards observational learning (Vogt et al., 2007). We propose
that similar processes are occurring here, with participants mapping the observed
novel actions onto their own sensorimotor representations, possibly through covert imitation, to better learn and understand the kinematics and affordances of the novel limb. Overall, these data suggest that one’s own sensorimotor regions are flexibly engaged to support action understanding across many different types of actions, both within one’s realm of expertise and beyond
what one has ever seen before.
Increased Visual Activity for the Novel Effector
In addition to parietal regions, residual limb compared to hand actions also
activated the lateral posterior middle temporal gyrus (MT/V5), corresponding to
previous reports of the putative extrastriate body area (EBA) (Downing et al.,
2001), and the neighboring posterior superior temporal sulcus (pSTS), which is
commonly active in conjunction with regions of the MNS during action
observation (Keysers & Gazzola, 2007; Engel, Burke, Fiehler, Bien, & Rosler,
2008; Liew, Han, et al., 2011). As part of the dorsal visual stream, both area
MT/V5 and the pSTS have reciprocal connections with the parietal cortex to
support spatial awareness and are particularly active in response to observed
biological movements (Seltzer & Pandya, 1989a; Perrett et al., 1989; Perrett et
al., 1990; Seltzer & Pandya, 1994; Downing et al., 2001). Thus, it is not
surprising that these regions are also more active during observation of novel
biological movement compared to familiar movements, as novel biological
movements may require more visual attention to encode the new movements, as
suggested by one study demonstrating greater EBA activation for extreme body postures performed by a contortionist compared to normal body postures (Cross, Mackie, Wolford, & de C. Hamilton, 2010). Similar to the increased parietal activation, these regions
may be more effortfully engaged in order to encode the visual properties of novel
actions, particularly dorsal stream visual properties (e.g., where/how the action is
occurring). In addition, these are the regions most consistently found to be more active for the residual limb than for the hand in novices, both before and after experience.
As discussed further below, while parietal activation changes with experience,
these results suggest that visual attention to the different body part persists even
after some amount of experience, possibly because it remains more visually
novel than the hand.
Increased Parietal Activity Correlates with Empathy in Novices
If understanding another’s actions relies in part on our ability to engage our own
corresponding sensorimotor regions, then individuals who are more empathic
may also engage their own body representations more when observing others.
This has been demonstrated in a number of studies, with greater sensorimotor
activity correlating with higher scores on trait empathy, such as perspective
taking (Kaplan & Iacoboni, 2006; Gazzola et al., 2006; Aziz-Zadeh, Sheng, &
Gheytanchi, 2010). These studies suggest that more empathic individuals
engage their own motor regions more than less empathic individuals when
perceiving others’ actions. Similarly, here we find that participants who score
higher on either cognitive or affective measures of empathy demonstrate more
right inferior parietal activation when observing residual limb actions. This
correlation is especially notable when novices initially view the novel effector, and
is no longer significant once the effector becomes more familiar. This suggests
that our ability to map the actions of unfamiliar, dissimilar others onto our own
sensorimotor representations may support our capacity to understand and
empathize with individuals who are different from ourselves. Put another way, the more empathic individuals are, the more they may attempt to map very different bodies onto their own, as though literally “putting oneself in another’s shoes.”
Experience and Attention Modulate Sensorimotor Activity
After visual exposure to the novel limb, novices demonstrated reduced
sensorimotor activity during action observation of the residual limb compared to
their initial viewing, instead demonstrating activity only in the right superior
parietal and occipitotemporal regions. This suggests that even the brief visual experience introduced in the experiment was sufficient to attenuate the sensorimotor response to the novel effector. Importantly, this attenuation due to
experience was unique to the residual limb; there was no difference in neural
activity for the hand before and after visual experience, indicating that the
difference between pre- and post-experience cannot be explained by habituation
alone.
Experience with the observed effector and actions may thus modulate how much we use our own sensorimotor representations to map the kinematics of others’ actions. In novices, the initial viewing requires more extensive parietal activation to generate a basic kinematic representation of the novel effector, whereas after experience, novices require only fine-tuning of their residual limb model, reflected in right superior parietal activation, which may play a role in updating one’s internal model of an effector with specific kinematic parameters (Wolpert, Goodbody, & Husain, 1998; Iacoboni & Zaidel, 2004).
The role of attention in driving these results also warrants discussion. Experience
may modulate the sensorimotor response by driving attention to the more novel
effector initially, and then to both effectors equally after novelty wears off. Studies
directly examining the effects of attention on motor resonance during action
observation suggest that selective attention increases motor resonance (Bach,
Peatfield, & Tipper, 2007; Chong, Cunnington, Williams, & Mattingley, 2009).
Here we propose that the observed effects are the result of a complex interplay
between 1) attention and 2) one’s existing motor abilities based on prior
experiences. It is likely that these two factors modulate sensorimotor activity together: at initial viewing, the residual limb is both more novel than the hand, driving attention to it, and requires more effort to map its kinematics onto one’s own body representations, since one has no prior model for it. After visual exposure, the residual limb already has matching correspondences with one’s own body representations and no longer holds the novelty it did initially.
Thus, experience decreases novelty and increases motor representation, resulting in similar patterns of activity for both the residual limb and the hand.
This proposed interplay between experience and attention also explains the
apparent contradiction between these findings and prior research. In particular,
sensorimotor activity may increase for expert or practiced actions partially
because individuals tend to pay more attention to actions within their own
interests. Conversely, at other times, we may pay more attention to things that
are novel or different to us—such as a new body part, robotic movements, or
uncommon hand gestures (Liew, Han, et al., 2011; Cross et al., 2011). Thus,
while attention is likely not the only determinant of the observed effects, we
suggest that we engage our own sensorimotor regions depending on an intricate
interplay between our prior experiences and our current attentional focus, neither
of which can be cleanly separated from the other. Future studies might directly
explore the interaction between attention and motor experience in driving
sensorimotor activity during action observation. Altogether, gaining experience
with a novel effector not only gives one the visual input needed to incorporate the
body part into one’s own motor schema, but also reduces the novelty of the different limb, leading to similar neural representations both for body parts one does and
does not have.
Behavioral Changes after Experience
Finally, novice participants reported that the videos of residual limb actions
helped them to understand residual limb actions better and that they were
significantly more likely to interact with individuals with residual limbs after
watching these videos. This is notable, as participants watched less than 5 minutes of video clips of residual limb actions, which suggests that even a short amount of media exposure or other similar experience could help improve typically developed individuals’ attitudes towards people with physical differences.
While future research should explore the scope of these behavioral changes in
greater depth, these preliminary ratings are promising as they suggest that one
easy way to improve societal attitudes and reach many novice individuals could
be through increased inclusion of individuals with physical differences in the
media.
CONCLUSION
We encounter individuals who differ from ourselves along a wide range of
characteristics, including, in the most extreme cases, differences in physical
bodies. Beyond limb differences, thousands more individuals have other physical differences that manifest either in different body parts or in different ways of moving, including those resulting from stroke, cerebral palsy, Parkinson’s disease, or Huntington’s disease,
to name a few. For many of these individuals, integration into society is difficult,
in part due to the perceived societal attitudes emerging from the general public
(Bishop, 2005; Murray, 2009). Understanding how we respond to individuals with
different bodies on a neural level, and how these responses change with
experience, is thus a step towards developing ways of improving these societal
responses and mediating how we perceive differently bodied individuals. The
current study demonstrates that experience, even simply through videos, allows
individuals to represent novel body parts onto their own sensorimotor regions in a
similar manner as familiar ones. Despite the age-old adage to not stare at others,
here we suggest that visual input, whether through the media or through
meaningful real-life interactions, may be crucial in providing the necessary
information to better understand how the actions of those unlike ourselves relate
to our own actions. In doing so, we may better grasp how someone with a very
different body is, in fact, just like ourselves.
CHAPTER 4. Real-life Experience Affects Action
Understanding Regions
ABSTRACT
As seen in the previous chapter, initial observations of someone with a novel,
different body, such as someone with upper residual limbs instead of hands,
overwhelmingly activate one's own sensorimotor regions, as though one is
representing the different body onto one's own sensorimotor regions to better
understand them. Supporting this, after individuals have increased visual
experience with the unfamiliar body, they then show a decrease in these
sensorimotor regions, which is possibly due to the fact that they no longer need
to put forth the effort to generate a new motor program for the different effector.
However, the sheer visual experience of watching someone with a different body
perform actions differs considerably from real-life interactions with such
individuals. Thus, the current study aimed to understand how different types of
real-life experience modulate activity in neural regions associated with action
understanding. Using the same design as the previous study in which
participants observed residual limb and hand actions, the current study recruited
three subject groups: (1) 11 occupational therapists (OTs) who had a range of
prior experiences working with people with residual limbs; (2) 1 individual who
was born with bilateral below elbow amputations (CJ), and (3) 13 older novice
participants (median age: 52 years). Experienced OTs demonstrate similar responses
to the residual limb and hand actions upon initial viewing, differing primarily
in the right superior parietal lobule, a region associated with updating one's
internal model of an action with new kinematics or parameters. This small
difference in activation is attenuated even further after visual experience.
Furthermore, experience correlates with activation of the left inferior frontal gyrus
during observation of residual limbs, while age and experience correlate with
activation of the medial prefrontal cortex. In contrast, participant CJ demonstrates
greater activation in all regions of the MNS when observing the residual limb
compared to the hand, with little difference after visual experience. Altogether, these
results show that increased sensorimotor activation during observation of
novel actions or effectors is associated with generating an internal motor
representation of the observed action, which is no longer required after sufficient
experience. In addition, these results hint at distinct modulations of the
frontal versus parietal MNS, the left versus right hemispheres, and the
medial prefrontal cortex by different levels of experience.
4.1 INTRODUCTION
Recent research suggests that one way we understand people who are unlike
ourselves is by representing their actions onto our own bodies, as though to
literally 'put ourselves in their shoes' (Liew et al., submitted). This occurs
primarily in parietal regions, associated with the integration of visuomotor and
spatial information about movement kinematics and limb affordances of an
observed limb or one’s own (Rizzolatti et al., 1997; Colby & Goldberg, 1999;
Oztop et al., 2006), and seems to reflect an individual’s concerted effort to
directly match the novel body parts that one is observing with one’s own body
representation. After individuals gain experience with a different body, even
through watching less than 5 minutes of video clips of the unfamiliar body, they
demonstrate a reduction in the sensorimotor response, with activity remaining
only in the right superior parietal region and visual areas. The right superior
parietal lobule (SPL) is thought to play a role in updating one's internal model of
an effector with specific kinematic parameters (Wolpert et al., 1998; Iacoboni &
Zaidel, 2004) – for instance, updating one's motor program for throwing a dart
at a target after feedback that the dart landed slightly to the left, so that the
next attempt aims slightly to the right. In this case, the right SPL activity after
experience may reflect a 'fine-tuning' of participants' models of the unfamiliar
body, which they generated through parietal activity in the initial viewings.
Altogether, the change in activation suggests that
visual experience allows participants to incorporate a novel body part into their
own motor representations, thus resulting in similar activation patterns for both
the novel body and one like their own by the end of the experiment.
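To make the 'fine-tuning' idea concrete, the dart example above can be written as a simple error-correction update to an internal model's aim parameter. The sketch below is a hypothetical illustration only, not part of the study's methods; the aim variable, target, and learning rate are invented for clarity.

```python
# Hypothetical illustration (not from the study): a minimal forward-model
# update rule in the spirit of Wolpert et al. (1998). After each throw, the
# aim point is nudged against the observed error, so a dart landing to the
# left shifts the next attempt slightly to the right.

def update_aim(aim: float, landed: float, target: float, rate: float = 0.5) -> float:
    """Return a corrected aim point given where the last dart landed."""
    error = landed - target          # negative = landed left of target
    return aim - rate * error        # shift aim opposite to the error

aim = 0.0                            # current aim (arbitrary horizontal units)
for landed in (-2.0, -0.8, -0.3):    # dart keeps landing left, less so each time
    aim = update_aim(aim, landed, target=0.0)
    print(f"next aim: {aim:+.2f}")
```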
However, our interactions with individuals who differ from the self often extend
beyond pure visual experience into real-life interactions. Previous research has
demonstrated that both visual and motor experience with an action or sequence
of actions may generate activity in one’s own sensorimotor regions during
observation (Cross et al., 2009), with slightly more activity when one has motor
experience (Calvo-Merino et al., 2006). However, these studies used actions that
the observer could perform with body parts that the observer possessed. No
studies have yet examined the effects of our real-life experiences with individuals
whose bodies differ from our own, and whose actions we cannot perform, when
we observe them. To this end, we scanned participants with a range of real-life
experience with individuals with amputations as they also observed an individual
without arms perform actions with her residual limb and an individual with arms
perform actions with her hand. Using the same design as the previous study
which examined this effect in novices (Liew et al., submitted), in the current
study, we recruited participants who were all occupational therapists or
occupational therapy students who had completed at least one fieldwork
experience in the area of physical rehabilitation and had experience with
individuals with residual limbs. Participants varied greatly in their experience with
individuals with residual limbs, including, in the most extreme case, an individual
with residual limbs himself. Despite this range of experience, none had viewed the
individual with residual limbs shown in the stimuli prior to the study. In
addition, to examine whether observed effects were due primarily to age or to
experience—factors which were confounded in the experienced OT group—we
additionally scanned 13 older adults who had little experience with individuals
with residual limbs.
We anticipated that experienced occupational therapists, upon initially viewing
the individual with residual limbs, would demonstrate a pattern of activity similar
to that of novices after visual experience. That is, we hypothesized that both
visual and real-life experience with an individual with a different body from one’s
own would generate similar neural patterns when viewing that individual. In
particular, we expected that experienced occupational therapists would
demonstrate some parietal activation, possibly in the right superior parietal
lobule, which would reflect an update of their current model of residual limb
kinematics with the specific kinematics of the viewed limb, and that after visual
exposure, experienced OTs would demonstrate almost identical patterns of
activation to both the residual limb and hand actions. We further anticipated that
CJ, having similar but not identical experiences, might activate his own
sensorimotor regions more when observing the residual limb due to interest in
someone similar, but still novel, to himself. Finally, we expected that specific
experience with individuals with residual limbs would modulate one’s neural
responses more than just general age-related experience.
4.2 MATERIALS AND METHODS
Participants
Eleven healthy, typically developed participants (9 females, 2 males; mean ± SD
= 33.9 ± 11.5 years), and one healthy participant who was born with bilateral
below elbow amputations (male; age 22; referred to henceforth as participant
CJ), were recruited to participate in this study. All participants were occupational
therapists or occupational therapy students and had moderate to extensive prior
experience working with individuals with amputations. Amount of experience was
briefly quantified during the initial screening and further elaborated upon with an
extensive behavioral questionnaire after the fMRI scanning procedure. Detailed
questions were not asked prior to the fMRI experiment to avoid biasing
participants to the goal of the study. In addition, thirteen healthy, typically
developed females age (median: 52 years old; range: 36-68 years old) from a
range of occupations participated in a related study, using a subset of the current
stimuli. A portion of the data from these participants was reported previously
(Aziz-Zadeh et al., 2011) and is thus not included in the current study. Due to slight
differences in study design, the current study analyzes a subset of the existing
data that is relevant to the current aims. All participants were right-handed, had
normal or corrected-to-normal vision, and were safe for fMRI. Written informed
consent was obtained from all participants before inclusion in the study. This
study was approved by the University of Southern California Institutional Review
Board and was performed in accordance with the 1964 Declaration of Helsinki.
Procedure and Analyses
All stimuli, task designs, procedures, and analyses for the first two groups
(Experienced OTs and participant CJ) were performed identically to the
previously reported study (see Chapter 3; Liew et al., submitted). All group
comparisons utilized data from the 16 novice participants from the previous study
and the 11 experienced occupational therapists from the current study (not including
participant CJ). Data from participant CJ were analyzed as a single-subject case,
also using FEAT (FMRI Expert Analysis Tool) Version 5.98, part of FSL (FMRIB's
Software Library, www.fmrib.ox.ac.uk/fsl). As CJ is only one participant, cross-run
analyses were carried out using a fixed effects model, forcing the random effects
variance to zero in FLAME (FMRIB's Local Analysis of Mixed Effects; Beckmann et
al., 2003; Woolrich et al., 2004; Woolrich, 2008); a group-level random effects
analysis was not employed, and random effects from cross-session variance were
thus not modeled. Despite the small
sample size, Z (Gaussianised T/F) statistic images for CJ were still thresholded
using clusters determined by Z > 2.3 and a (corrected) cluster significance
threshold of P=0.05 (Jezzard, Matthews, & Smith, 2001) to provide a
conservative estimate of effects. Finally, procedures and data analysis for the 13
older novice participants are detailed in Aziz-Zadeh et al. (2011). A post-hoc
open-ended question was administered to these participants to measure the amount
of prior experience with individuals with residual limbs.
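For readers unfamiliar with the cluster-based thresholding referenced above, the following is a minimal sketch of the cluster-forming step, assuming a Z-statistic image exported from FEAT (the file name zstat1.nii.gz is a placeholder). It thresholds voxels at Z > 2.3 and labels contiguous clusters; it does not reproduce FSL's Gaussian-random-field cluster-level p-values, which underlie the corrected P = 0.05 threshold reported here.

```python
# A minimal sketch (assumptions noted in the text above) of cluster forming:
# voxels in a Z-statistic map are thresholded at Z > 2.3 and grouped into
# contiguous clusters. Corrected cluster-level inference is left to FSL FEAT.

import nibabel as nib                 # pip install nibabel
import numpy as np
from scipy import ndimage

z_img = nib.load("zstat1.nii.gz")     # hypothetical Z-stat image from FEAT
z = z_img.get_fdata()

mask = z > 2.3                                      # cluster-forming threshold
labels, n_clusters = ndimage.label(mask)            # default face connectivity
sizes = ndimage.sum(mask, labels, index=range(1, n_clusters + 1))

for i, size in enumerate(sizes, start=1):
    peak = np.unravel_index(np.argmax(np.where(labels == i, z, -np.inf)), z.shape)
    print(f"cluster {i}: {int(size)} voxels, peak Z = {z[peak]:.2f} at {peak}")
```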
4.3 RESULTS
Behavioral Results
As expected, experienced OTs rated being significantly more familiar with
residual limbs than novices (E: 7.18±1.99; N: 1.31±.48; t= -9.58, p<0.0001;
TOTAL: 3.70±3.21; CJ: 10). Experienced participants similarly reported that they
understood residual limb actions significantly better than hand actions after the
videos (RL: 5.64±3.50, H: 4.73±2.94, t=-2.65, p=0.02) but unlike novices, they
were not significantly more likely to interact with individuals with either effector
after watching the videos (RL: 4.82±3.12, H: 4.09±3.33, t=-2.70, p=.12). Across
the 27 participants in the novice and experienced OT groups, age was correlated
with experience with individuals with RL (R²=.60, p<.001); however, with three
outliers removed, there was no longer a significant correlation (R²=0.09, p=0.14;
see Figure 4-1 below;
see also Discussion). Novices and experienced occupational therapists did not
significantly differ in their scores on the empathy measure (IRI TOTAL: N:
65.75±9.67, E: 70.46±6.59, t=-1.40, p=.17; IRI Perspective Taking: N:
19.75±3.30, E: 19.00±3.79, t=.55, p=.85; IRI Empathic Concern: N: 21.00±3.76,
E: 22.00±2.886, t=-0.75, p=.30). Of the 13 older novice participants, 7 responded
to the post-hoc questionnaire, all reporting minimal experience (brief visual
exposure in the community or through the media only).
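As a point of reference for the age-experience correlation reported above, a minimal sketch of that analysis is shown below, assuming invented placeholder values rather than the study's data: an ordinary least-squares fit yields R² and p, which are then recomputed after removing flagged outliers.

```python
# A minimal sketch (hypothetical data) of the age-experience correlation:
# fit, report R^2 and p, then refit after dropping flagged outliers.

import numpy as np
from scipy import stats

age        = np.array([24, 27, 29, 31, 33, 35, 52, 58, 63])   # years (invented)
experience = np.array([2, 3, 2, 4, 5, 4, 9, 8, 10])           # 1-10 familiarity (invented)

fit = stats.linregress(age, experience)
print(f"all participants: R^2 = {fit.rvalue**2:.2f}, p = {fit.pvalue:.3f}")

keep = age < 50                                                # e.g., drop the oldest outliers
fit_trim = stats.linregress(age[keep], experience[keep])
print(f"outliers removed:  R^2 = {fit_trim.rvalue**2:.2f}, p = {fit_trim.pvalue:.3f}")
```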
Figure 4-1. Ratings of experience correlate with age across Novice and
Experienced OT groups. TOP: Age correlates significantly with amount of
experience, as measured by a self-report Likert scale of how familiar participants
felt with individuals with residual limbs (1=very unfamiliar, 10=very familiar),
R²=.60, p<.001. BOTTOM: With the three outliers removed, there was no longer
a significant correlation between age and experience (R²=0.09, p=0.14).
fMRI Results
Experienced OTs, Action Observation (PRE-Visual Exposure). In the PRE-
visual exposure action observation runs, experienced occupational therapists
observing Residual Limb Action Observation versus Residual Limb Still images
(RLAO>RLS) activated the left inferior frontal gyrus and bilateral premotor
cortices, parietal cortices (inferior into superior lobules), and occipital cortices
(MT/V5 into V1). Hand Action Observation versus Hand Still images (HAO>HS)
activated a similar pattern, with activation in the bilateral inferior frontal gyri and
premotor cortices (dorsal and ventral), mid-anterior cingulate cortex, bilateral
parietal cortices (inferior into superior), and bilateral occipital cortices (MT/V5 into
V1). Comparing Residual Limb versus Hand Action Observation (RLAO>HAO)
resulted in activity in the right posterior middle temporal gyri (MT/V5 into V1) and
bilateral superior parietal lobules, with stronger activation on the right side, and in
the left cerebellum (see Figure 4-2). Hand versus Residual Limb Action
Observation (HAO>RLAO) generated activity in the bilateral occipital poles (BA
17/18) only (see Figure 4-2).
Experienced OTs - Action Observation (POST-Visual Exposure). After the
visual exposure run, experienced viewers observing Residual Limb Action
Observation versus Residual Limb Still images (RLAO>RLS) activated the left
inferior parietal lobule, right superior parietal lobule, and bilateral lateral occipital
cortices (MT/V5). Hand Action Observation versus Hand Still images (HAO>HS)
generated activity in the bilateral occipital cortices, from MT/V5 into V1. Residual
Limb versus Hand Action Observation (RLAO>HAO) in the POST run
demonstrated activity in the right occipital cortex, from MT/V5 into V1 and into the
superior lateral occipital cortex corresponding to V3 (see Figure 4-2). In contrast,
Hand versus Residual Limb Action Observation (HAO>RLAO) generated no
significant activity.
Parameter Estimates
In order to ascertain whether the differences between Residual Limb and Hand
contrasts in PRE and POST runs were due to a global decrease in signal in the
POST run, percent signal change was then calculated for each of the contrasts
(Hand Action Observation > Hand Still; Residual Limb Action Observation >
Residual Limb Still) over four a priori ROIs in the MNS (bilateral IFG/PMv and
IPL), which were defined by a functional localizer run and masked with a
probabilistic anatomical map.
While for novices, a one-tailed paired samples t-test revealed a significant
decrease in percent signal change between PRE-POST sessions for observation
of the residual limb only (not the hand), for experienced OTs, a one-tailed paired
samples t-test demonstrated a significant decrease for both residual limb and
hand action observations between PRE-POST sessions. In particular, for
novices, the PRE-POST change for RLAO > RLS was significant at the L IPL
(t=2.18, p=.02) and marginally significant at the R IFG (t=1.6, p=.06), while no
significant changes were found for HAO>HS between PRE-POST sessions. In
contrast, experienced OTs demonstrated significant differences between PRE
and POST runs for both effectors, with a significant decrease in percent signal
change between PRE-POST RLAO > RLS at the L IFG (t=2.73, p=.01), L IPL
(t=3.08, p=.006), R IFG (t=2.02, p=.04), and R IPL (t=2.07, p=.03). There was
also a significant decrease in percent signal change between PRE-POST HAO >
HS at the L IFG (t=2.40, p=.02), L IPL (t=2.44, p=.02), R IFG (t=2.24, p=.03),
and R IPL (t=1.95, p=.03). For these results, see Figure 4-3.
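A minimal sketch of this ROI analysis is given below, assuming percent signal change values have already been extracted per participant and ROI (the arrays are invented placeholders): a one-tailed paired-samples t-test compares PRE and POST runs in each of the four MNS ROIs.

```python
# A minimal sketch (assumed data layout) of the PRE vs. POST ROI comparison:
# one-tailed paired-samples t-tests on percent signal change per ROI.

import numpy as np
from scipy import stats

rois = ["L_IFG", "L_IPL", "R_IFG", "R_IPL"]
rng = np.random.default_rng(0)

# shape: (n_participants, n_rois); invented percent-signal-change values
pre  = rng.normal(0.30, 0.10, size=(11, 4))
post = pre - rng.normal(0.08, 0.05, size=(11, 4))   # simulate a PRE > POST decrease

for i, roi in enumerate(rois):
    # one-tailed test of the directional hypothesis PRE > POST
    t, p = stats.ttest_rel(pre[:, i], post[:, i], alternative="greater")
    print(f"{roi}: t = {t:.2f}, one-tailed p = {p:.3f}")
```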
Figure 4-2. fMRI results when experienced occupational therapists observe
residual limb and hand actions for the first time (PRE experienced OTs) and
after visual exposure (POST). TOP – PRE. ORANGE: Residual Limb > Hand
activated the right superior parietal lobule and occipital regions. BLUE: Hand >
Residual Limb generated significant activation in the bilateral occipital poles.
BOTTOM – POST. ORANGE: Residual Limb > Hand activated the right occipital
regions, including MT/V5. BLUE: Hand > Residual Limb did not generate any
significant activation. All results thresholded at Z > 2.3, p < 0.05 (cluster
corrected for multiple comparisons).
Table 4-1. Localization of brain activations in experienced occupational
therapists during the PRE condition. Group-level random effects analyses,
thresholded at Z>2.3, p<.05, cluster corrected for multiple comparisons.
Columns: Coordinates [x y z]; Anatomical Region; Z-Stat; Cluster Size [2mm³ voxels]; Cluster Index.

Residual Limb Action Observation > Still Photo of Residual Limb
[-46 -72 0]    L lateral occipital cortex (MT/V5)                 4.68   19479   3
[58 -28 24]    R inferior parietal lobule / supramarginal gyrus   4.56   -       3
[52 -64 8]     R lateral occipital cortex (MT/V5)                 4.40   -       3
[-54 -28 38]   L inferior parietal lobule / supramarginal gyrus   3.96   -       3
[-54 10 20]    L inferior frontal gyrus                           4.25   855     2
[44 0 54]      R dorsal precentral gyrus                          3.50   758     1

Hand Action Observation > Still Photo of Hand
[-50 -70 10]   L lateral occipital cortex (MT/V5)                 4.73   21784   3
[-56 -34 24]   L inferior parietal lobule / supramarginal gyrus   4.41   -       3
[58 -28 22]    R inferior parietal lobule / supramarginal gyrus   4.37   -       3
[54 -64 -2]    R lateral occipital cortex (MT/V5)                 4.35   -       3
[48 6 14]      R inferior frontal gyrus                           3.39   -       3
[-52 8 18]     L inferior frontal gyrus                           4.13   2231    2
[2 -2 38]      R mid-anterior cingulate cortex                    3.83   1840    1

Residual Limb Action Observation > Hand Action Observation
[4 -82 -2]     R lingual gyrus (BA 17/18)                         4.88   6406    2
[44 -66 8]     R lateral occipital cortex (MT/V5)                 4.38   -       2
[31 35 61]     R superior parietal lobule (BA 7/19)               3.14   -       2
[-26 -72 -36]  L cerebellum                                       2.82   -       2
[-18 -74 54]   L superior parietal lobule (BA 7/19)               3.36   344     1

Hand Action Observation > Residual Limb Action Observation
[32 -98 -8]    R occipital pole                                   4.11   1150    2
[-32 -94 -10]  L occipital pole                                   3.54   425     1
Table 4-2. Localization of brain activations in experienced occupational
therapists during the POST condition. Group-level random effects analyses,
thresholded at Z>2.3, p<.05, cluster corrected for multiple comparisons.
Columns: Coordinates [x y z]; Anatomical Region; Z-Stat; Cluster Size [2mm³ voxels]; Cluster Index.

Residual Limb Action Observation > Still Photo of Residual Limb
[-50 -72 10]   L lateral occipital cortex (MT/V5)                 4.81   5254   4
[52 -72 8]     R lateral occipital cortex (MT/V5)                 4.81   3036   3
[30 -48 64]    R superior parietal lobule (BA 7)                  3.89   687    2
[-58 -28 44]   L inferior parietal lobule / supramarginal gyrus   3.58   325    1

Hand Action Observation > Still Photo of Hand
[50 -70 0]     R lateral occipital cortex (MT/V5)                 5.02   4816   2
[-46 -70 0]    L lateral occipital cortex (MT/V5)                 4.24   1611   1

Residual Limb Action Observation > Hand Action Observation
[20 -70 -12]   R fusiform gyrus (BA 17/18)                        4.41   2173   2
[46 -68 8]     R lateral occipital cortex (MT/V5)                 3.95   -      2
[24 -86 32]    R superior lateral occipital cortex                3.84   495    1
Figure 4-3. Percent signal change in regions of interest from fMRI results in
experienced OTs observing Hand and Residual Limb actions pre and post
visual exposure. Percent signal change in MNS regions of interest from fMRI
results in experienced OTs when observing A) Hand Actions > Hand Still images
(HAO > HS) and B) Residual Limb Action Observation > Residual Limb Still
images (RLAO > RLS) at PRE and POST visual exposure. All ROIs
demonstrated a significant difference for PRE compared to POST observations.
Interaction Between Novice and Experienced Viewers and Frontal/Parietal
MNS Regions
As the parietal MNS is associated with encoding action kinematics and
the frontal MNS is associated with goal representation, it follows that novice
viewers may activate the parietal regions more when initially viewing a novel
body part, in order to generate a kinematic representation of the novel effector,
while experienced viewers may activate premotor regions more when initially
viewing a residual limb, in order to extract action goals from the observed
actions. To test this hypothesis, we used a 2x2 mixed measures ANOVA
with MNS ROI (bilateral IFG/PMv, bilateral IPL) as the within-subject factor and
Experience (novice, experienced OTs) as the between-subjects factor. There was
no main effect of ROI alone in either bilateral or right hemisphere only
comparisons (bilateral: F=0.431, p=.53; right hemisphere only: F=.315, p=.58).
There was also no main effect of Experience (F=0.310, p=.58). There was a
marginally significant interaction between these two (F=3.77,p=.058). When
comparing the right hemisphere ROIs only (right IFG/PMv and IPL), the
interaction with Experience was significant (F=5.899, p=.023; see Figure 4-4).
Figure 4-4. ANOVA between Experience (Novice, Experienced OTs) and
MNS Region (Frontal, Parietal) during observation of residual limb versus
hand actions. Percent signal change in the right frontal and parietal MNS ROIs
show a significant interaction with MNS region and Experience (experienced OTs
versus novices; F=5.899, p=.023).
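For illustration, the 2x2 mixed-measures ANOVA described above could be run on long-format data as sketched below, here using the pingouin package; the column names and values are invented placeholders, not the study's data.

```python
# A minimal sketch (assumed long-format data) of the 2x2 mixed-measures ANOVA:
# MNS ROI (frontal vs. parietal) within subjects, Experience between subjects.

import pandas as pd
import pingouin as pg        # pip install pingouin

df = pd.DataFrame({
    "subject":    ["s01"]*2 + ["s02"]*2 + ["s03"]*2 + ["s04"]*2 + ["s05"]*2 + ["s06"]*2,
    "experience": ["novice"]*6 + ["expert"]*6,
    "roi":        ["frontal", "parietal"] * 6,
    "psc":        [0.05, 0.22, 0.02, 0.18, 0.04, 0.20,   # novices: parietal > frontal
                   0.20, 0.07, 0.17, 0.05, 0.22, 0.06],  # experts: frontal > parietal
})

aov = pg.mixed_anova(data=df, dv="psc", within="roi",
                     subject="subject", between="experience")
print(aov[["Source", "F", "p-unc"]])
```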
Correlations
In contrast to novices, experienced OTs showed no significant correlations between
activity and empathy in either the PRE or POST conditions. However, experienced
participants reported being more familiar with residual limb actions than novices,
and familiarity ratings showed a marginally significant correlation with activity in the
R IFG across the entire group (R²=0.14, p=0.058, 2-tailed). This correlation became
significant when restricted to the experienced OT group only (R²=0.40, p=0.03,
2-tailed; see Figure 4-5).
Figure 4-5. Correlation between percent signal change in left inferior frontal
gyrus during residual limb observation and familiarity with residual limbs
in experienced OTs. Percent signal change in the L IFG when experienced OTs
observed Residual Limb Action Observation > Residual Limb Still (RLAO > RLS)
in the PRE condition correlated with familiarity with the residual limb (r=.51,
p=.05).
Between Groups Analysis with Covariates of Age and Gender
A direct between-groups comparison examining the differences between
experienced OTs and novices, with age and gender measures added as
regressors, demonstrated more activation for Novices > Experienced OTs in
bilateral parietal and occipital regions (middle temporal gyrus into V5) in the PRE
condition (see Figure 4-6) at an uncorrected threshold, and no significant
activation for Experienced OTs > Novices at the whole brain level. These
differences were significantly reduced in the POST condition, with activity only in
the right parietal and occipital regions (not shown).
A main effect of age across groups was noted in the ventral medial prefrontal
cortex (mPFC). Since age and experience were confounded in the current study
(see Behavioral Results), an analysis was run on a separate group of 13 older
adult novice viewers (median age: 52 years old) for the contrast of Residual Limb
> Rest. Contrary to the main effect of age, older adult novice participants did not
demonstrate medial prefrontal activation, even at greatly reduced thresholds
(p=.01, uncorrected; see Figure 4-7).
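A minimal sketch of the kind of group-level design implied above is shown below, assuming invented participant values: two group regressors plus demeaned age and gender covariates, with contrasts for Novices > Experienced OTs and for the main effect of age. This mirrors, but does not reproduce, the FEAT design used in the study.

```python
# A minimal sketch of a two-group design matrix with nuisance covariates.
# Participant values are invented placeholders, not the study's data.

import numpy as np

rng = np.random.default_rng(0)

group  = np.array([1]*16 + [0]*11)                          # 1 = novice, 0 = experienced OT
age    = np.concatenate([rng.normal(23, 3, 16), rng.normal(34, 11, 11)])
gender = rng.integers(0, 2, size=27)                        # 0 = male, 1 = female (invented)

X = np.column_stack([
    group,                      # EV1: novice group membership
    1 - group,                  # EV2: experienced OT group membership
    age - age.mean(),           # EV3: age, demeaned so it acts as a covariate
    gender - gender.mean(),     # EV4: gender, demeaned
])

contrast_novice_gt_exp = np.array([1, -1, 0, 0])            # Novices > Experienced OTs
contrast_age_effect    = np.array([0, 0, 1, 0])             # main effect of age
print("design matrix shape:", X.shape)
```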
Figure 4-6. Between groups comparison for Novices > Experienced OTs
when watching residual limb and hand actions for the first time (PRE), with
age and gender added as regressors. Novices > Experienced OTs generated
widespread bilateral activity primarily in parietal and occipital regions. All results
thresholded at Z > 2.3, shown here uncorrected for multiple comparisons.
Figure 4-7. Main effect of Age across novices and Experienced OT groups
when observing Residual Limb > Hand actions, compared to older adult
novices' activity during Residual Limb > Rest. LEFT: The main effect of Age
across novices and Experienced OTs when observing Residual Limb > Hand
actions, shown at Z > 2.3, corrected for multiple comparisons, generated
widespread bilateral activity primarily in parietal and occipital regions.
Experienced OTs compared to novices did not generate any significant
activation, even at lower thresholds (shown at p < 0.001, uncorrected).
4.3.1 CJ ANALYSES
Whole-brain fMRI Results
CJ, Action Observation (PRE-Visual Exposure). In the PRE-visual exposure
action observation runs, observation of Residual Limb Action Observation versus
Residual Limb Still images (RLAO>RLS) activated the right inferior frontal gyrus
extending into the bilateral dorsal and ventral premotor cortices, the left ventral
premotor cortex, and bilateral parietal cortices (inferior into superior lobules),
along with bilateral occipital cortices (MT/V5 into V1). Hand Action Observation
versus Hand Still images (HAO>HS) activated only bilateral occipital cortices
(MT/V5 into V1), although at a lower threshold, similar regions of bilateral
premotor and parietal regions were active. Comparing Residual Limb versus
Hand Action Observation (RLAO>HAO) resulted in activity in the bilateral inferior
frontal gyri into the bilateral premotor cortices, bilateral inferior and superior
parietal lobules, with stronger activation on the right side, the midline precuneus,
and in the bilateral occipital regions (MT/V5 into V1) (see Figure 4-8). Hand
versus Residual Limb Action Observation (HAO>RLAO) did not generate any
significant activity.
CJ, Action Observation (POST-Visual Exposure). After the visual exposure
run, CJ observing Residual Limb Action Observation versus Residual Limb Still
images (RLAO>RLS) activated the left inferior parietal lobule, right superior
parietal lobule, and bilateral lateral occipital cortices (MT/V5). Hand Action
Observation versus Hand Still images (HAO>HS) generated activity in the
bilateral occipital cortices, from MT/V5 into V1. Residual Limb versus Hand
Action Observation (RLAO>HAO) in the POST run demonstrated a small cluster
of activity in the right occipital cortex, roughly corresponding to the occipital
fusiform gyrus, extending into the left hemisphere. In contrast, Hand versus
Residual Limb Action Observation (HAO>RLAO) generated no significant activity.
Figure 4-8. fMRI results when CJ observed residual limb and hand actions
for the first time (PRE). Residual Limb > Hand activated the bilateral inferior
frontal gyri, premotor cortices, inferior and posterior parietal lobules, and occipital
regions. All results thresholded at Z > 2.3, p < 0.05 (cluster corrected for multiple
comparisons).
Parameter Estimates
Using the same MNS ROIs applied to novices and experienced OTs, parameter
estimates for Residual Limb > Hand Action Observation were derived for CJ and
descriptively displayed in comparison to the other two groups. CJ demonstrated
greater activity in both left IFG and left IPL regions of interest compared to the
mean of both the experienced and novice groups (see Figure 4-9 below).
However, CJ demonstrated a similar amount of activity in R IFG, and less activity
in R IPL, as compared to the group means.
Figure 4-9. Comparison of groups means across MNS regions of interest.
Boxplots of mean percent signal change for novices (N), experienced OTs (E), and CJ
at each MNS region of interest (L IFG and L IPL on the left of the image; R IFG and
R IPL on the right of the image) during Residual Limb > Hand Action Observation,
demonstrating that CJ’s activation is higher than the group mean for novice and
experienced OT groups at the L IFG and L IPL.
4.4 DISCUSSION
We encounter individuals who differ from ourselves along a wide range of
characteristics, including, in the most extreme cases, differences in physical
bodies. In the United States alone, over 1.7 million individuals reported limb
differences in 2007, with more than 185,000 new amputations performed each
year (Center, 2011), and an increasing number of service men and women
currently returning with amputations (Stansbury, Lalliss, Branstetter, Bagg, &
Holcomb, 2008). Beyond limb differences, many more individuals have other
physical differences that manifest either in different body parts or different ways
of moving, including movement disorders in cerebral palsy, Parkinson’s, or
Huntington’s diseases, to name a few. For many of these individuals, integration
into society is difficult, in part due to the perceived societal attitudes emerging
from the general public (Livneh & Antonak, 1997; Bishop, 2005; Murray, 2010).
The current study addressed how our everyday interactions with people who
differ from ourselves affect our neural responses when we observe their actions.
In particular, we examined a variety of types of real-life experience, including a
range of experience with individuals with residual limbs and having personal
experience with residual limbs oneself.
In the current study, when a group of occupational therapists with a range of real-
life interactions with individuals with residual limbs initially viewed stimuli of a
woman with residual limbs performing actions, they generated a pattern of
activity similar to that of novice viewers after visual experience. That is, both
visual experience and real-life experience generated similar patterns of activity
only in the right superior parietal lobule and visual regions when observing an
individual with residual limbs, suggesting that experience allows one to internally
map the observed actions of different bodies onto one's own. Amount of experience
correlated with activation of the frontal MNS, particularly in the right IFG,
suggesting that more experienced participants may attend more strongly to the
goal of the action as opposed to how it is performed. Age-related effects also
include activation of the ventral mPFC, associated with mental state attribution,
which may indicate that older and more experienced individuals rely more on
their own prior personal experiences when observing an unfamiliar individual,
instead of generating a new model of the observed individual. Importantly, this
mPFC activation was not found in a separate cohort of less experienced older
adults, suggesting that it is an effect of specific experience with different-bodied
individuals rather than simply age. Finally, a case study of CJ, an individual with
bilateral congenital amputations below the elbow, revealed overwhelming
sensorimotor activation when observing the residual limb compared to the hand.
Altogether, these results support a U-shaped model of experience in modulating
the action observation network (Cross et al., 2011), with both novices and very
experienced individuals demonstrating more sensorimotor activity than moderately
experienced individuals. Taken together, these results support the hypothesis
that we activate our own sensorimotor regions in order to generate and update
internal models of observed actions, even those we cannot perform, and suggest
that different regions of the putative mirror system are selectively modulated by
different types of experience.
Different Types of Experience Attenuate the Sensorimotor Response to
Residual Limbs
Experienced OTs observing the residual limb stimuli for the first time
demonstrate a pattern of activation similar to that of novices after they receive
visual exposure—namely increased activation in the right superior parietal lobule
and visual regions only. While novices in the PRE condition demonstrated
significantly more right inferior parietal activation when observing residual limbs
versus hands, suggesting a greater effort to match the observed effector to their
own body representations, novices in the POST condition and experienced OTs
in the PRE condition demonstrate similar patterns of activation to both effectors.
As discussed in Liew et al. (submitted), it is likely that the right superior parietal
activation supports participants’ ability to update their pre-existing internal models
of residual limb actions (Wolpert et al., 1998), which they generated either
through visual or real-life experience, with the specific kinematic parameters of
the residual limb shown in the stimuli. Thus, experienced OTs in the PRE
condition show only this right superior parietal activation upon initial viewing, as
they have already developed a basic representation of residual limbs from prior
experiences. This supports the hypothesis that experience attenuates the
difference between how we represent bodies similar to our own versus those
dissimilar to our own, and suggests that this applies across different types of
experience.
In addition to the right superior parietal lobule, experienced OTs observing
residual limb compared to hand actions also activated the lateral posterior middle
temporal gyrus (MT/V5), which corresponds to the putative extrastriate body area
(EBA; Downing et al., 2001) and extends into the posterior superior temporal
sulcus (pSTS), an area often found active along with regions of the MNS during
action observation (Keysers & Gazzola, 2007; Engel et al., 2008; Liew, Han, et
al., 2011). As discussed previously, both area MT/V5 and the pSTS have
reciprocal connections with the parietal cortex to support spatial awareness and
are particularly active in response to observed biological movements (Seltzer &
Pandya, 1989a; Perrett et al., 1989; Perrett et al., 1990; Seltzer & Pandya, 1994;
Downing et al., 2001). Thus, it is not surprising that experienced occupational
therapists also activate these regions during observation of the new residual limb
compared to hands, as the residual limb may require more visual attention to the
specific movement kinematics of the novel limb. This activation is only present in
occupational therapists for the PRE condition and decreases after experienced
OTs receive visual exposure, suggesting that the response to the different
body part may in part be attention-driven. That is to say, experienced OTs may
initially find the residual limb actions more visually salient, and thus attend to
them more than the hands; after novelty wears off in the POST condition, they
then attend equally to both hands and residual limbs. As seen in the decreased
parameter estimates for both HAO>HS and RLAO>RLS, experienced OTs may
demonstrate a general decrease in attention and interest in the stimuli in the
POST condition, likely due to their familiarity with both effectors. This is true also
for participant CJ.
Increased Frontal MNS Activity Correlates with Familiarity in Experienced
OTs
While novices primarily activated parietal regions when observing residual limbs
for the first time, experienced OTs also demonstrated modulations of the frontal
node of the MNS in the inferior frontal gyrus. This supports a model of the action
observation system in which parietal activity is a necessary first step to encode
the kinematics of a novel effector, followed by activation of the inferior frontal and
premotor regions to associate the observed movements with a motor goal or plan
(see Figure 4-10 below, adapted from Chapter 1: Background).
Figure 4-10. Conceptual model to explain group differences in data.
Adapted from Chapter 1: Background.
Thus, since experienced OTs have access to prior knowledge of residual limb
kinematics, they may engage both superior parietal regions to update their model
with the current kinematics of the observed residual limb and frontal regions to
extract the goal of observed residual limb actions (both steps 2 and 3 in the
above model). While this does not appear in the whole brain analyses at the
cluster-corrected level, possibly due to the limited sample size and variance
across the sample, frontal activity does appear for experienced OTs at a lower,
uncorrected threshold—a finding which is not evident in novices. In addition, the
parameter estimates from the ROI analysis also demonstrate this trend, and
there is a significant interaction when experience (Novices, Experienced OTs)
and MNS node (Frontal, Parietal) are examined in a 2x2 ANOVA. Specifically,
experienced OTs activate the frontal MNS in the IFG more than the parietal MNS
(IPL) when initially viewing residual limb versus hand actions, while novices
activate the IPL more than the IFG during the same initial viewing, and this effect
is stronger in the right than the left hemisphere. This suggests that experience
may affect which part of our own action system we engage to help us understand
others' actions, particularly when observing actions that we cannot perform
ourselves. If it is something that we have never seen before, we may primarily
recruit regions that help us develop a basic kinematic representation of the
action, while if it is something more familiar to us, we may also rely on our
knowledge of how the observed effector interacts with objects to achieve motor
goals, such as picking up a pen or moving an object to the side. This hypothesis
remains to be directly tested, but one might anticipate that novices who receive
more extensive visual or real-life experience with an individual with residual limbs
may, over time, begin to develop a repertoire of motor plans that the residual limb
can carry out. For instance, after seeing an individual with residual limbs
manipulate a pencil several times, they may be able to anticipate this action
when viewing the residual limb move towards a pencil with specific kinematics.
This goal and corresponding motor program may be stored in the frontal node of
the MNS, allowing for later goal attribution during future action observation of
individuals with residual limbs. This is supported by a wealth of research
suggesting that the frontal component of the MNS (ventral premotor and inferior
frontal gyri) supports the encoding of the action goal, while the parietal regions
encode the action kinematics and the object affordances of the observed action
in addition to the goal (Van Overwalle & Baetens, 2009; Bonini et al., 2010).
Thus, individuals who have more experience with the residual limb may attend
more to encoding the goal of the actions, rather than the kinematics of the
residual limb actions, while individuals with less experience may initially activate
parietal regions more strongly to encode the novel kinematics of the body part.
In support of this theory, experienced OTs also demonstrate a significant positive
correlation between activity in the right IFG and the amount of experience they
have with residual limb actions. That is, the more experience an individual has
with residual limb actions, the more he or she activates the right IFG when
observing residual limb versus hand actions. This pattern is not seen for
either of the parietal ROIs or the left IFG, and suggests a specific modulation
of the right IFG by experience with impossible actions. The specificity of the
right, versus left, IFG occurred despite the fact that individuals observed right
hand/residual limb actions, for which we might expect to see contralateral (or left)
activation. This is consistent with the hypothesis that the right hemisphere is
associated with body integration and body and space representations, processes
which are needed when observing a novel effector and trying to incorporate it
into one’s own body schema (Roth, 1949; McGeoch et al., 2011). As discussed
further below, left hemisphere activation may be more strongly associated with
specific motor matching when two bodies correspond, while the right hemisphere
may be associated more with general motor matching of a different type of body
into one’s own representations.
While these findings begin to provide evidence of dissociable experience-
dependent modulations of specific parts of the action observation system, further
research, with both larger sample sizes and a wider range of experience, is
needed to directly explore this issue.
Experience, Age, and the mPFC
One possible limitation of the current study is the demographic differences
between the two groups. Experienced OTs had a higher mean age and more
females than novice participants. However, the main finding of increased parietal
activation for novices in the PRE condition, but not for experienced OTs in the
same condition, still holds true when the groups are compared in a direct,
between-groups analysis with age and gender added as regressors. This
suggests that experience, rather than age and gender, is the primary driver of the
increased parietal activation observed in novices for unfamiliar effectors.
Interestingly, however, age, even when collapsed across the groups,
demonstrated a significant main effect in medial prefrontal cortex (mPFC). Older
participants observing residual limb versus hand actions generated more activity
in the mPFC, a region that plays a strong role in self-reflection, autobiographical
memory, and attributing mental states and intentions to others (Gusnard,
Akbudak, Shulman, & Raichle, 2001; Van Overwalle, 2009). This region is also
commonly activated as part of the mentalizing network, which performs higher-
level cognitive reasoning about one’s own and others’ mental states (Frith &
Frith, 2003; Ciaramidaro et al., 2007). As it is not possible to run a similar
regression for Experience (as there is not a range of experience in the novice
group), based on this finding, there are two possible explanations for the age-
related mPFC activation: 1) compensatory recruitment of mPFC to assist
inadequate activation of action observation regions, and 2) utilization of a
different cognitive strategy that relies more on one’s store of prior experiences as
opposed to generating new kinematic models.
The former explanation is in line with research suggesting that as the brain ages,
it begins to recruit more neural regions to perform tasks. Studies of action
observation typically recruit a college age population, and these results
demonstrate that mentalizing regions are generally recruited separately from, but
in complement with, action observation regions. However, it is possible that the
aging brain recruits regions from both networks simultaneously to process the
observed actions of others. While this may be a small factor, results from the
separate group of older adults (median age: 52 years) who did not have
extensive prior experience with individuals with residual limbs did not
demonstrate any medial prefrontal activation when observing residual limb
actions, nor did they demonstrate activity in any other mentalizing regions, even
at lower thresholds. In addition, while another study on empathy across the
lifespan suggests that older adults score lower on empathy tests than younger
adults, these results were likely confounded with cohort effects rather than
specific to a universal aging process (Grühn, Rebucal, Diehl, Lumley, &
Labouvie-Vief, 2008). Overall, it appears that age may drive differences in social
cognitive processing, but that these differences may be determined more by
individual factors, such as one's personal experiences, personality, and
environment, than by age alone. These issues, too, warrant further study.
The latter suggestion, which ties age with experience, may thus be a more
accurate explanation of the observed effects. Specifically, older individuals, who
have more experience with people with residual limbs, may draw more heavily
upon their own prior experiences when observing new actions than on their
actual motor representations. For instance, when viewing a person with a
residual limb flip the page of a book, an experienced OT may think about prior
experiences with individuals who have done the same thing, compare how this
individual flips the page with how they have seen other individuals with residual
limbs do the same action, or attribute intentions to the observed individual based
on their own prior experiences. This theory corresponds with the increased
activation of the frontal MNS for more experienced individuals, as the frontal
MNS may extract the motor goal or program that then is utilized by the mPFC to
allow participants to infer the higher-level intentions of the observed action.
Interestingly, this same pattern of both MNS and mentalizing activity when
observing impossible, but familiar, actions is seen also in a separate case study
of a woman who was born without limbs herself (Aziz-Zadeh et al., 2011). In
particular, when she observes hand actions that are impossible for her to perform
yet visually familiar (such as using scissors), she demonstrates this same pattern
of activity seen in experienced OTs (activity in both her MNS and in the
mentalizing system, particularly in the mPFC and precuneus). This suggests that
as individuals age and gain experience with other limbs, they engage both their
own body regions and additional regions associated with their own memories of
such actions and mental attributions of others to process these impossible but
familiar actions.
To the best of our knowledge, no studies have directly examined the role of age
in typically developed older adults as it impacts action observation and intention-
related networks. In the current study, one limitation is that age and experience
are correlated, and this correlation is primarily driven by the effect of three
individuals who are significantly older and have significantly more experience
than the rest of the participants across both groups. Furthermore, since the two
groups have a non-parametric range of experience, it is not possible to run a
similar across-groups comparison to specifically test for the main effect of
experience. The results from the 13 older adult novice participants suggest,
however, that these results are not due only to age but also require some level of
experience for mPFC regions to be activated. To fully answer this question, an
additional cohort of younger individuals with extensive experience with people
who have residual limbs might be recruited in a future study. However, these
preliminary findings provide interesting hypotheses that may be examined further
in future studies.
Extreme Expertise Increases Sensorimotor Activity
Finally, a case study of an individual (CJ) born with bilateral below elbow
amputations provides more information about how experience—in many ways,
extreme experience—affects sensorimotor activation when observing someone
with a different body. As someone with a different body himself, CJ demonstrates
extensive activation in all of his sensorimotor regions (bilateral inferior frontal,
premotor, and parietal regions) when observing residual limb compared to hand
actions. This activation is similar to novices upon their initial viewing of the stimuli
but with the addition of left-hemisphere premotor and parietal regions. At first
glance, these results may appear to conflict with the pattern of reduced activation
as experience increases, as shown between novices in the PRE condition, which
is greater than novices in the POST condition, which is greater than experienced
OTs in the PRE condition. However, upon closer examination of the entire body
of literature demonstrating the role of experience on action observation networks,
the prior results plus CJ's results fit well with the U-shaped model of experience
recently proposed by Cross et al. (2011). In this model (see Figure 4-11 below),
activation in the action observation network demonstrates a non-linear relationship
with experience. In particular, situations of both extreme
unfamiliarity (novices) and extreme familiarity (experts) demonstrate greater
BOLD activity within action observation regions than actions that are generally
familiar.
Figure 4-11. A hypothesized relationship between BOLD response and
action familiarity. (Adapted from Cross et al., 2011, Figure 7).
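As a hypothetical illustration of the U-shaped model (not an analysis performed in this study), the relationship could be expressed as a quadratic fit of BOLD response on familiarity, where a positive quadratic term indicates that both extremes of familiarity predict higher activity than the mid-range. All values below are invented.

```python
# Hypothetical illustration of the U-shaped familiarity model: BOLD response
# modeled as a quadratic function of familiarity, with invented values.

import numpy as np

familiarity = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=float)   # 1 = novel, 10 = expert
bold = np.array([0.9, 0.7, 0.5, 0.35, 0.3, 0.3, 0.4, 0.55, 0.75, 0.95])  # invented % signal change

a, b, c = np.polyfit(familiarity, bold, deg=2)       # quadratic fit
print(f"fit: BOLD = {a:.3f}*f^2 + {b:.3f}*f + {c:.3f}")
print("U-shaped (positive quadratic term):", a > 0)
print("minimum near familiarity =", round(-b / (2 * a), 1))
```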
Novices in the PRE condition fall on the left of the proposed model, as they are
observing very unfamiliar and novel actions. This is similar to recent findings of
greater sensorimotor activation when individuals observe novel or unfamiliar
gestures, or odd robot-like movements compared to human movements (Liew,
Han, et al., 2011; Cross et al., 2011). All of these situations are novel, interesting,
and unfamiliar, and activate one’s own sensorimotor regions more in order to
generate a corresponding motor representation. Experienced OTs, and novices
after visual experience, may be considered generally familiar – that is, they have
prior experience with the residual limb actions and demonstrate an interest in
them, but have neither the novelty of novices in the PRE condition, nor the
expertise of CJ with his own matching motor repertoire, to generate a
significantly increased sensorimotor response to the observed actions.
Completing the other end of the graph, CJ, in many ways, can be thought of as
an ‘expert’ in understanding residual limb actions. He not only has extensive
visual and real-life experience with individuals who have residual limbs, but he
also has first-hand motor experience of residual limb actions. While his limb does
not exactly match the residual limb viewed in the stimuli, he still has a greater
degree of personal motor experience with a similar type of end effector. Thus,
when he sees an individual with residual limbs that are not exactly like his own
but share similar kinematics, he may activate his own motor regions more
strongly, since he is paying more attention to the nuances of her actions and
comparing them to how he might perform them himself. Accordingly, he is likely more
interested in watching how another individual with residual limbs performs
actions than how someone with hands performs them, and he has a greater
motor representation of this type of effector through both visual and personal
motor experience. These results resonate with the prior studies of expertise, in
which ballet or capoeira dancers observed the dance style within their own expertise
as well as one that was visually similar but not their expertise (Calvo-Merino et
al., 2005). In both situations, the observers are expert in the type of movement
they are observing, but have not seen these exact stimuli before. Thus they may
attend more to these stimuli because they are interested in how this action, within
their realm of expertise, is precisely performed, and may have more extensive
motor representations for the observed actions. Similarly, people who watch
actions they have practiced extensively may activate their corresponding motor
regions more because they are more interested in these recently practiced
actions and more focused on the nuances of the actions compared to those that
they have not learned and do not need to learn (Cross et al., 2009). Thus, fitting
this model with the current and previous data, we suggest that experience—and
likely, related factors such as attention, interest, motivation, and novelty—
modulate BOLD activity in one's own sensorimotor regions when observing a
wide variety of actions and effectors in a non-linear, U-shaped manner.
Lastly, several details from CJ’s activation might be added to fine-tune this
model. First, supporting the argument that the IFG and premotor regions may be
active in encoding motor goals and programs associated with residual limb
actions, and that these regions are modulated by experience as seen in
experienced OTs, CJ also activates the frontal MNS more than novices, likely
due to his pre-existing understanding of residual limb motor plans. Second, CJ
demonstrates bilateral activity of his sensorimotor regions, while both novices
and experienced OTs tend to activate the right hemisphere more strongly. In
particular, he demonstrates greater left-hemisphere activation of both the left IFG
and left IPL than either novices or experienced OTs when observing residual limb
versus hand actions. While the literature on the laterality of action observation
networks is sparse, with studies reporting bilateral activation during action
observation (Calvo-Merino et al., 2005; Aziz-Zadeh, Koski, Zaidel, Mazziotta, &
Iacoboni, 2006a), part of the reason for this may be the use of whole body
actions in which both sides of the body are moving in the observed action, thus
activating both hemispheres. In other instances, simple actions, which naturally
evoke reciprocal activations between the two hemispheres, may elicit this same
effect without an overwhelming need to activate one side more than the other.
However, in the studies presented here, it appears that the left hemisphere may
be more active for novel motor programs that are possible or within the realm of
performance for the observer, while the right hemisphere may be more active
when actions require less of a direct motor match and more of an incorporation
into one's general body space or motor schema (Wolpert et al., 1998; Iacoboni
& Zaidel, 2004). For example, when observing gestures, individuals tend to
activate the contralateral, or left, hemisphere more strongly both when observing
gestures in general and when observing unfamiliar versus familiar gestures
(Liew, Han, et al., 2011). Similarly, a recent study demonstrated that individuals
after stroke activate the contralateral hemisphere to the observed actions when
asked to observe an action with the intent to imitate it, particularly when it is an
action that requires more effort for them to perform (Garrison, 2011). That is,
when an observed action can be performed by the observer and requires more
effort for the observer to physically perform, he or she may activate the
contralateral hemisphere more strongly as it may require a more concentrated
effort at activating matching motor representations. In contrast, when observing
actions that one cannot do, even with a great amount of effort, such as actions
performed with body parts that one does not have, one tends to activate the right
hemisphere, which, as discussed earlier, may be associated with incorporating
actions into one’s own body schema (Liew et al., submitted). While the current
study is not designed to directly address this question, future studies may explore
whether the laterality of the action observation system is related to the motoric,
compared to social, nature of the task at hand.
CONCLUSION
Our everyday experiences shape our neural responses to individuals unlike
ourselves. The current study demonstrates that real-life experience with
individuals who differ from ourselves attenuates the neural response in our own
sensorimotor regions when observing them and preferentially engages frontal
regions of our own motor system, possibly to encode the goals, rather than the
kinematics, of the observed actions. Furthermore, experience, as related to age,
is also associated with the recruitment of additional regions, such as the
medial prefrontal cortex, which is associated with trying to understand another
person’s perspective. On the other end of the spectrum, a case examination of
CJ, an individual with congenital below elbow amputations, demonstrates that
extreme experience and motor familiarity—particularly for something that is not
common in the general population—may in fact increase sensorimotor activation
in the observer. Altogether, these results support a recently-proposed U-shaped
model of experience-related modulations on the action observation network, in
which both extreme novelty and extreme expertise generate more activity in
one’s own sensorimotor regions than a general, mid-range level of experience.
Thus, it appears that our own real-life experiences, along with our attention,
interests, and motivations, may modulate complex interactions between neural
regions supporting our ability to understand and make sense of one another.
CHAPTER 5. Current and Future Work
This work on experience, while provocative, leaves many questions unanswered
and suggests many possible applications for using experience to modulate neural
responses in both social and motor contexts and, hopefully, to enhance social
and/or motor abilities. Current and future directions explore several of these
questions, and this chapter highlights four areas in particular that stem from the
current body of work. The background, methods
and preliminary results (where applicable) are provided for each of the first three
studies, with the fourth providing a full literature review and future directions.
These include:
1. The role of attention or interest in modulating the MNS. This study is
currently being conducted using transcranial magnetic stimulation (TMS)
in collaboration with Professor Marco Iacoboni at UCLA.
2. The utility of this system as a social mediator. This study is currently
being conducted in collaboration with artist and designer Megan Daadler,
using an installation called the Mirrorbox to induce empathy between
individuals.
3. The utility of this system in motor enhancement after motor impairment.
This study is currently being conducted with individuals who have
hemiparesis after chronic stroke in collaboration with Professors Carolee
Winstein and Hanna Damasio, as well as colleagues Kathleen Garrison,
PhD and Justin Haldar, PhD at USC.
4. Additional ways in which the MNS may be applied to clinical practice in
occupational therapy, including an explication of promising directions for
research between neuroscience and occupational therapy.
5.1 ATTENTION, EXPERIENCE AND THE MNS
INTRODUCTION
The studies presented here suggest that attention may play a key role in
modulating sensorimotor activity. Experience may modulate the sensorimotor
response in the direction observed in the current work by driving attention to the
more novel action or effector initially, as shown in these studies of unfamiliar
gestures and novel effectors. Then, in the case of the second and third studies
on novel effectors, once visual experience accumulates and the novelty wears off,
attention is driven to both effectors almost equally. However, in these studies, experience and
attention are intertwined and cannot be directly separated. Several studies have
specifically examined the effects of attention on modulating motor resonance
during action observation and suggest that attention selectively increases motor
resonance to the observed, attended movement (Bach et al., 2007; Chong,
Williams, Cunnington, & Mattingley, 2008; Muthukumaraswamy & Singh, 2008;
Chong et al., 2009). If indeed the observed effects in this dissertation work result
from a complex interplay between 1) attention and 2) one's existing motor abilities
based on prior experience, this may further help to explain discrepancies in the
existing literature on experience and action understanding. Specifically, these two
factors may jointly modulate sensorimotor activity: the initial viewing of a residual
limb or a new gesture is more novel than viewing the hand or a familiar gesture,
drawing attention to it, and it also requires more effort to map its kinematics onto
one's own body representations, since one has no prior experience with it. Both
factors lead to increased sensorimotor activity for the novel action or limb. Then, in the case of the limb,
after visual exposure, the residual limb is more readily incorporated into one’s
own body representations and no longer holds the novelty it did initially. Thus,
with moderate experience, there is less activation during residual limb action
observation, resulting in similar patterns for both the residual limb and hand. In
contrast, for experts, such as CJ, observing a novel version of a familiar type of
effector (e.g., a residual limb he has not seen before) generates increased
sensorimotor activity, likely due to both increased interest and attention, and
a more detailed motor representation.
While attention is not the only determinant of the observed effects, it may be that
we engage our own sensorimotor regions depending on an intricate interplay
between our prior experiences and our current attentional focus, neither of which
can be cleanly separated from the other in the current studies. Thus, in a
separate study, I use single-pulse transcranial magnetic stimulation (TMS) to
measure cortical motor excitability and directly explore the interaction between
attention and motor experience on sensorimotor activity during action
observation.
BACKGROUND
Experience and the MNS
The MNS is a putative network of premotor and parietal regions that are active
both when we perform an action and when we observe others perform the same,
or similar, actions (Fadiga et al., 1995; Gallese et al., 1996; Rizzolatti et al.,
1996b; Rizzolatti & Craighero, 2004). This shared neural system for both doing
and observing actions may be one way in which we are able to recognize and
understand the actions of others (Rizzolatti & Craighero, 2004; Iacoboni et al.,
2005). The plausible convergence of observed and performed actions onto a
common neural substrate suggests that being able to represent others’ actions
may depend on our own existing motor repertoire. Several studies have explored
this possibility and demonstrate that the more experience one has with an action,
the more MNS activity one has when observing others perform those actions
(Calvo-Merino et al., 2005; Cross et al., 2006). For instance, dancers have more
MNS activity when watching their own form of dance than when watching
another, unlearned form of dance, with similar kinematic properties (Calvo-
Merino et al., 2005). In addition, participants had more MNS activity when
observing dance patterns that they had learned versus those they had not
learned (Cross et al., 2006). Importantly, while physical experience with an action
yields greater MNS activity than only visual experience with the action (Calvo-Merino et
al., 2006), visual experience alone can also modulate MNS activations. One
study revealed that both physical practice with an action and visual practice
(simply observing the action) yield an increase in MNS activity when observing
learned versus unlearned actions (Cross et al., 2009). In addition, actions that
are not possible sometimes do not activate the MNS (Liepelt et al., 2008),
suggesting that actions beyond one’s motor repertoire are processed by other
neural regions (e.g., the temporoparietal junction (TPJ) in the mentalizing
network, which is associated with higher-level cognitive reasoning).
However, these studies do not represent the entire picture. Newer studies are
finding converse results: observing novel or impossible actions activates the
MNS, while observing familiar actions activates the mentalizing network (Liew,
Han, et al., 2011; Cross et al., 2011; Liew et al., submitted). In addition, some
studies are finding that observing actions that are beyond one's own motor
abilities in fact activates the MNS more than observing actions that are in one's
motor repertoire (Aziz-Zadeh, Sheng, Liew, & Damasio, 2011; Garrison, 2011).
These findings are not entirely surprising as the MNS is also known to play a role
in imitation (Iacoboni et al., 1999; Iacoboni, 2005) and is active when learning
new actions (Vogt et al., 2007)—processes which may be evoked when
observing novel actions or effectors and which may similarly drive up
sensorimotor activity.
Task Modulations and the MNS
It seems highly plausible that the activation of the MNS is the product of an
interaction between one’s prior experience with the observed actions and one’s
current goals of observing. For instance, passively observing actions may result
in more MNS activity when one observes familiar actions, as there may be some
subconscious, automatic motor resonance with these actions that are within
one’s own repertoire. However, actively observing actions with an intent to
imitate or make sense of them may result in more MNS activity when observing
novel actions, through a deliberate, top-down engagement of one’s own motor-
related regions (MNS included) to represent the new observed actions.
Several researchers have specifically examined the role of an external task to
direct attention to either the way in which an action is performed or the goal of
the action, and how task modulates MNS and mentalizing system activity during
action observation (de Lange et al., 2008; Hesse et al., 2009; Spunt et al., 2010).
These studies suggest that the MNS is more active when participants attend
specifically to the means of the action—that is, how the action was performed
(e.g., what grasp is he using, what effector?)—whereas the mentalizing system is
more active when participants attend to the goal of the action (e.g., what is he
doing?) (de Lange et al., 2008; Hesse et al., 2009). In addition, mentalizing
activity appears to increase as the inference becomes more abstract (e.g., why
is he doing that, versus what is he doing?), while MNS activity stays the same
(Spunt et al., 2010).
Not only does an external task modulate MNS and mentalizing activity, but one’s
own internal motivations can also modulate MNS activity. For instance, one study
found greater MNS activity when hungry participants observed eating-related
actions (e.g., grasping food) than when satiated participants observed the same
actions (Cheng, Meltzoff, & Decety, 2007), suggesting that one’s internal drives
may help to direct attention and thus MNS activation towards salient stimuli.
Selective Attention and the MNS
In addition to these more indirect studies of attention, there are several existing
studies specifically on the role of attention on the MNS (Bach et al., 2007; Chong et
al., 2008; Muthukumaraswamy & Singh, 2008; Chong et al., 2009; Schuch,
Bayliss, Klein, & Tipper, 2010). These studies have focused primarily on the role
of selective attention in modulating the MNS, and generally demonstrate that
attending to the specific effector being moved affects the amount of motor
resonance with the overall observed action. These findings are not surprising;
prior TMS studies have also shown that motor cortex excitability and plasticity, as
measured by motor-evoked potentials (MEPs), are increased by attending to the
limb being stimulated (Stefan, Wycislo, & Classen, 2004; Conte et al., 2007).
Current Study Goals
Research supports the idea that one’s attentional focus, whether directly or
indirectly modulated, and internally or externally driven, may play a role in driving
the MNS response. As more complex studies of social cognition emerge, it is not
always clear what activating the MNS to a greater or lesser degree actually
represents. Prior studies have found increased MNS activity for more familiar
actions, a finding taken to suggest that the MNS “integrates observed actions of
others with an individual’s personal motor repertoire, and suggest that the human
brain understands actions by motor simulation” (Calvo-Merino et al., 2005).
However, in some cases it seems that the MNS may map novel actions
onto existing motor programs, as when learning or imitating new actions (Liew,
Han, et al., 2011; Liew et al., submitted).
Thus, the aim of the current study is to disentangle the complex interactions
between one’s own prior experiences and one’s current attentional focus in
modulating MNS activity. To do this, I propose a novel TMS study that modulates
both the participants’ experience with observed actions (novel, learned) and their
attentional focus (through monetary incentives) during observation to
demonstrate that one’s attention to the task may matter more than whether an
action is within one’s motor repertoire or not. Specifically, I will have people
practice a set of finger tapping sequences, and then observe a wider set of
sequences that includes both practiced (familiar) and novel sequences. I will then
motivate attentional focus via monetary incentives by prefacing each observation
with a monetary amount gained by accurately performing the following sequence.
I expect that:
1. When passively observing learned and unlearned complex finger
tapping sequences, participants will have more cortical motor
excitability (CME) for the sequences they attend to more, which should be
the ones they find more interesting (for some individuals, the familiar
sequences; for others, the novel ones; and for some, both equally). Level
of interest will be measured post hoc with a
questionnaire.
2. When told that knowing certain finger tapping sequences will provide a
monetary reward if performed correctly, participants will pay more
attention to these sequences, and thus have more CME for them,
regardless of whether they have practiced them or not.
MATERIALS AND METHODS
Participants
Twelve healthy, right-handed participants will be recruited for this study. All
participants should have no prior experience with manual sign languages. In
addition, all participants should be safe for TMS in accordance with the UCLA
safety protocol, including no prior history of seizures or seizure-related disorders.
Informed consent will be obtained from all participants.
Stimuli
The stimuli will consist of 10-second long videos of right-hand, unimanual finger
tapping sequences filmed from the first person perspective. All patterns will
engage the first dorsal interosseous (FDI) muscle. A total of 30 complex finger
tapping sequences will be used in this study, and each sequence will be
repeated twice, with TMS delivered on the second presentation of each stimulus,
so that participants already know whether or not they have learned the sequence
by the time the pulse is delivered.
Task Design and Procedure
Electromyographic (EMG) activity will be recorded from participants' right first
dorsal interosseous (FDI) muscle using surface electrodes. TMS will be used to
stimulate the left motor cortex over the region that elicits the highest motor-evoked
potential (MEP) from the right FDI, using a Magstim 200² monophasic stimulator.
The location that consistently evokes the highest MEP from the right FDI (>5 out
of 10 trials) will be marked. Single-pulse magnetic stimuli will be delivered at this
spot at 110% of the individual's resting motor threshold using a figure-of-8 coil
(High Power 90 mm remote coil). TMS pulses will be timed to the midpoint of the
observed movement, at the greatest point of FDI activation, and synced with the
onset of the video stimuli through the presentation
software (Presentation). Attention to each video may be measured through the
use of sporadic catch trials (response to an additional stimulus), and, if available,
physiological measurements (heart rate and skin conductance). Attention may
also be explored via the number of blinks during each video. Ability to perform
each sequence quickly and accurately will be measured at the end of the study.
Participants will learn half of the sequences through intensive practice, and will
watch half for the first time. I anticipate that participants will demonstrate better
performance on those they have previously practiced.
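To make the dependent measure concrete, a minimal sketch of how a single trial's MEP amplitude could be computed from the recorded EMG is shown below, assuming the sweep has already been exported as a NumPy array with a known sampling rate. In practice, MEP amplitudes will be recorded and measured in Signal, so the window boundaries, simulated data, and variable names here are illustrative only.

import numpy as np

def mep_peak_to_peak(emg, fs, pulse_time, win_start=0.015, win_end=0.045):
    """Peak-to-peak MEP amplitude from one EMG sweep, in the same units as emg.
    emg: 1-D array of EMG samples; fs: sampling rate (Hz);
    pulse_time: time of the TMS pulse (s) relative to sweep onset;
    win_start/win_end: search window (s) after the pulse (assumed values)."""
    i0 = int((pulse_time + win_start) * fs)
    i1 = int((pulse_time + win_end) * fs)
    window = emg[i0:i1]
    return float(window.max() - window.min())

# Example with simulated data: a 0.5-s sweep at 5 kHz with a pulse at 0.1 s
fs = 5000
t = np.arange(0, 0.5, 1 / fs)
emg = 0.02 * np.random.randn(t.size)        # baseline EMG noise (mV)
mep = (t > 0.12) & (t < 0.14)               # a fake MEP roughly 20-40 ms post-pulse
emg[mep] += np.sin(np.linspace(0, np.pi, mep.sum()))
print(mep_peak_to_peak(emg, fs, pulse_time=0.1))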
The study will consist of 4 segments. These segments will be 1) LEARN
SEQUENCES, 2) PASSIVELY OBSERVE ALL SEQUENCES, 3)
OBSERVATION + INCENTIVE TO LEARN, 4) POST HOC MEASURES.
1. LEARN SEQUENCES (15 minutes) – Participants will be instructed to
learn 15 complex finger tapping sequences through both observation
and practice them for 15 minutes via a computer program that provides
feedback on performance. Participants will tap out sequences on a
laptop keyboard.
2. PASSIVELY OBSERVE ALL SEQUENCES (10 minutes) –
Participants will be instructed to passively observe each stimulus
presented in a randomized order. MEPs will be recorded for each
stimulus presentation.
3. OBSERVATION + INCENTIVE TO LEARN (10 minutes) – Participants
will be instructed to passively observe each stimulus presented in a
randomized order. However, they will also be informed that they will be
asked to perform some of the stimuli at the end of the experiment, and
that the speedy and accurate performance of these particular stimuli
will result in a monetary incentive ($1 extra per sequence). Incentivized
stimuli will be marked with a red cue screen before the presentation of
the stimulus, and will be randomized (half from the familiar stimuli and
half from the novel; a sketch of this assignment follows the list). If possible, we will also record eye blinks and
physiological information (heart rate, skin conductance) as additional
measures of attention.
4. POST HOC MEASURES (20 minutes) – Participants will be asked to
perform all the sequences at the end of the experiment, and will be
rewarded for the accurate and fast performance of the sequences that
were incentivized (up to $10 extra). They will also be asked to rate
their interest in each sequence, whether they found the familiar or
novel sequences more interesting, and whether they found the
incentivized sequences more or less interesting.
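As a small illustration of how the incentive cues could be assigned (step 3 above), the sketch below randomly selects the incentivized sequences, drawing half from the practiced set and half from the novel set. The total of ten incentivized sequences is an assumption based on the "up to $10 extra" reward cap, and the identifiers and function names are placeholders.

import random

def assign_incentives(familiar_ids, novel_ids, n_incentivized=10, seed=None):
    """Randomly mark which sequences carry the red incentive cue ($1 each),
    drawing half of the incentivized set from the familiar sequences and half
    from the novel ones. Returns {sequence id: True/False}."""
    rng = random.Random(seed)
    half = n_incentivized // 2
    chosen = set(rng.sample(familiar_ids, half))
    chosen |= set(rng.sample(novel_ids, n_incentivized - half))
    return {seq: (seq in chosen) for seq in familiar_ids + novel_ids}

# 15 practiced and 15 novel sequences, as in the proposed design
familiar = [f"fam{i:02d}" for i in range(1, 16)]
novel = [f"nov{i:02d}" for i in range(1, 16)]
cues = assign_incentives(familiar, novel, seed=1)
print(sum(cues.values()), "incentivized sequences")   # prints 10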
Analyses
Peak amplitudes of each MEP will be recorded in Signal and analyzed in SPSS.
Familiar versus Novel sequences during the passive observation condition will be
compared in a paired t-test. Familiar vs. Novel, and Incentivized vs. Non-
Incentivized MEPs from the Observation + Incentivized Learning run will be
compared in an ANOVA. Finally, height of MEPs can be correlated with
measurements of ability to perform each gesture, along with other behavioral
measures.
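For concreteness, the planned comparisons could be expressed roughly as in the sketch below, which assumes the MEP amplitudes have been exported from Signal into a long-format table with one row per trial. The actual analysis will be run in SPSS; the file name, column names, and condition labels used here are placeholders.

import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format export: one row per trial, with columns
# subject, run ('passive' or 'incentive'), familiarity, incentive, mep
df = pd.read_csv("mep_amplitudes.csv")

# Passive observation run: paired t-test, Familiar vs. Novel (mean MEP per subject)
passive = (df[df.run == "passive"]
           .groupby(["subject", "familiarity"])["mep"].mean().unstack())
t, p = stats.ttest_rel(passive["familiar"], passive["novel"])
print(f"Familiar vs. Novel (passive): t = {t:.2f}, p = {p:.3f}")

# Observation + incentive run: 2 x 2 repeated-measures ANOVA (Familiarity x Incentive)
incent = (df[df.run == "incentive"]
          .groupby(["subject", "familiarity", "incentive"], as_index=False)["mep"].mean())
print(AnovaRM(incent, depvar="mep", subject="subject",
              within=["familiarity", "incentive"]).fit())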
5.2 USING THE MIRRORBOX TO INDUCE EMBODIMENT AND
EMPATHY INTRODUCTION
From the outset, the work presented in this dissertation has had the overarching
aim of not just explaining how experience modulates neural activity during action
understanding, but of actually using experiences to facilitate changes in neural
activity to help people understand one another better. To this end, I am currently
pursuing a study that gives participants an experience in which they see
themselves embodied in another person, and vice-versa. This illusion of
embodiment is accomplished through an apparatus known as a Mirrorbox,
created by artist and designer Megan Daadler, in which two people stand facing
one another and see their faces superimposed on each other's to differing degrees
depending on the transparency of the mirror at any given moment (see Figure 5-
1). The aim of this study is to demonstrate that the experience of embodiment
can then lead to increased sensorimotor representations for another individual
with whom one has been ‘embodied,’ and that this then leads to behavioral
changes (e.g., increased empathy for the other, or increased liking or more positive
perception of the other). If successful, such a simple manipulation might have an
application in mediating negative perceptions between individuals, such as
individuals of different races or beliefs who dislike one another.
Figure 5-1: Example of the Mirrorbox paradigm. Participants will stand facing
one another with a two-way mirror separating their faces. Pre-programmed
lighting changes alter the transparency of the mirror, allowing individuals to see
their own faces overlapped to differing degrees with their partner’s face.
BACKGROUND
We have countless social interactions every day, some deeply meaningful and
others quickly forgotten. What makes an interaction important, and what makes
us feel good towards another person? Prior research suggests that similarity
between individuals is one basis for empathy and liking of one another
(Chartrand & Bargh, 1999). That is, we tend to like individuals who are more like
ourselves, who share our same mannerisms, habits, and tics. While this seems
like common sense, the effects are astounding: simply mimicking another’s
subtle habits and behaviors makes that person like you more, even between
strangers. There may be a neural explanation behind this behavioral effect,
dubbed the Chameleon Effect. A network of brain regions in motor-related areas
(premotor and parietal cortices) known as the mirror neuron system (MNS) was
originally discovered in macaque monkeys and becomes active both when
monkeys perform an action and when they simply observe someone else, like an
experimenter, perform an action (di Pellegrino et al., 1992; Rizzolatti et al., 1996;
Gallese et al., 1996; Rizzolatti & Craighero, 2004). A similar system is
hypothesized to exist in humans, with numerous neuroimaging reports
demonstrating overlapping regions of the brain when people do and see actions
(Aziz-Zadeh & Ivry, 2009; Van Overwalle & Baetens, 2009; Liew, Han, et al.,
2011). This led researchers to suggest that one component of understanding
other individuals might be through internally simulating their actions onto our own
motor regions, and thus understanding their actions from a first-hand perspective
(Rizzolatti & Craighero, 2004).
In keeping with this hypothesis, activation of the putative human MNS has been
linked to empathy, with individuals who score higher on measures of trait
empathy also demonstrating more MNS activity when perceiving others (Kaplan
& Iacoboni, 2006; Gazzola et al., 2006; Shamay-Tsoory et al., 2008; Aziz-Zadeh
et al., 2010; Liew et al., submitted). Extending the Chameleon Effect hypothesis,
this research suggests that individuals who empathize more are also more likely
to simulate or embody others’ actions when they see them. Representing others’
onto one’s own body thus might be a basis for empathizing and understanding
them. Numerous studies have looked at the role of the MNS in embodiment
(Aziz-Zadeh & Damasio 2008) and demonstrate MNS activity not only when
viewing others’ actions, but also when hearing others’ actions or even just
hearing words that describe actions (Pulvermuller et al., 2005; Pulvermuller,
2005; Aziz-Zadeh et al., 2006; Gazzola et al., 2006; Aziz-Zadeh & Damasio, 2008;
Aziz-Zadeh & Ivry, 2009). A wide body of literature has explored the role of
embodiment, using various illusions and experimental designs.
Simulated embodiment of others has a basis in the Rubber Hand Illusion (RHI),
wherein a subject views a rubber hand that is exposed to synchronous touch in
the same area as their own hand. The subject comes to feel the rubber hand as
their own, causing a drift in proprioception of their real hand and a decrease in the
body temperature of the hand (Botvinick & Cohen, 1998). Embodiment illusions
using synchronized visuotactile stimulation similar to that of the RHI have produced
embodiment of the face and even the entire body (Ehrsson, 2008; Slater et al.,
2010; Lenggenhager et al., 2007; Lenggenhager et al., 2009; Mazzurega et al.,
2011; Tsakiris, 2008; Paladino et al., 2010; Sforza et al., 2010). The illusions have
been used to isolate brain regions active during the embodiment of others, most
notably the left extrastriate body area (lEBA) and the right temporoparietal
junction (rTPJ) (Arzy et al., 2006), and applying TMS over the rTPJ has been
shown to reduce indicators of embodiment (Schulte-Rüther et al., 2007; Tsakiris
et al., 2008; Pitcher et al., 2008). The illusion of embodying another person has
thus been well documented.
What remains to be seen is whether embodiment illusions are capable of producing
lasting effects in subjects. Studies focusing on facial embodiment, dubbed
“enfacement,” have measured the effects that this illusion creates, including a
decreased level of self-recognition and increased levels of closeness, self-
projection, and physical resemblance, along with conflicting reports of increased
attraction and conformity to the enfacement partner (Paladino et al., 2010;
Mazzurega et al., 2011). However, no attempt has yet been made to measure
whether the increases in empathy and closeness produced by embodiment
illusions last over time. The cognitive processes underlying embodiment also show a capability
for generalization. The RHI has been modified to show that hands of different
skin color and size can still be embodied as long as they resemble a hand
(Pavani & Zampini, 2007; Haans et al., 2008). It could thus be that individuals of
different races and genders are capable of embodying one another's faces. Given
embodiment's link to empathy (Sforza et al., 2010; Schulte-Ruther et al., 2007),
enfacement has the potential to bridge relationships between diverse, and even
disparate, groups. Beyond race and gender, routine practice of embodiment in
individuals may lead to a generalized increase in empathy.
MATERIALS AND METHODS
Participants
This experiment will take place in two phases. The first phase aims to establish
baseline measures of the Mirrorbox effects and will recruit forty healthy
individuals to perform the Mirrorbox paradigm (20 participants) or a control
paradigm (20 participants) with a race- and gender-matched confederate to avoid
possible confounds from these two factors, as described below. The second
phase aims to explore whether the Mirrorbox generates different effects between
individuals from different racial backgrounds and will recruit forty participants
(twenty pairs of different-race participants). All participants will be right-handed
as measured by a modified Edinburgh Handedness Inventory (Oldfield, 1971),
have normal or corrected-to-normal vision, and no neurological or psychiatric
history. Written informed consent will be obtained from all participants before
inclusion in the study. Though this study has been approved by the University of
Southern California Institutional Review Board (IRB) in its original version, we will
submit a new amendment and await IRB approval before proceeding. The study will
be performed in accordance with the 1964 Declaration of Helsinki.
Task Design and Procedure
The experiment will consist of three parts: 1) Pre-Mirrorbox Assessments, 2)
Mirrorbox Paradigm/Control Paradigm, and 3) Post-Mirrorbox Assessments.
Pre-Mirrorbox Assessments
Upon arriving on the day of the experiment, participants will first have their photo
taken, which will be processed for the face morphing assessment (see below).
We expect that the extent to which an individual experiences effects from the
Mirrorbox will be affected by factors such as their personality traits and trait
empathy. Thus, they will be asked to complete a battery of assessments
designed to examine whether attitudes towards their partner change after the
behavioral paradigm, as well as the Implicit Association Test (IAT). They will
complete these again post-Mirrorbox. In line with previous studies examining
embodiment, individuals will fill out self-reports of perceived attractiveness,
closeness, and self-projection to assess overt attitudes towards their partner. To
assess subconscious attitudes, we will also have participants perform a self-face
identification task in which they will see images of their face morphed with their
partner’s face along a gradient (e.g., 10% self, 90% other to 90% self, 10% other)
and be asked to estimate the percentage of their own face shown in the
composite face. As we hypothesize that the experience of embodying another’s
face might also lead to more generalized effects (e.g., an overall decrease in
self-face identification), we will also have them perform the questionnaires and
the face morph task in relation to a stranger. If the effect is generalized, they
should show a similar change in responses to both their partner and the stranger
in the post assessments.
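For illustration, the self-other continuum for the face morphing assessment could be generated roughly as in the sketch below, which simply cross-fades two pre-aligned, same-size face photographs with Pillow. This is a simplified stand-in for however the stimuli are actually produced (a true morph would also warp facial landmarks), and the file names and blend steps are placeholders.

from PIL import Image

def make_morph_continuum(self_path, other_path, steps=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Cross-fade two pre-aligned face photos.

    Returns a list of (proportion_self, PIL.Image) pairs; e.g., 0.9 is a composite
    that is 90% the participant's own face and 10% the partner's. Simple alpha
    blending only; landmark-based warping is not included."""
    self_img = Image.open(self_path).convert("RGB")
    other_img = Image.open(other_path).convert("RGB").resize(self_img.size)
    continuum = []
    for p_self in steps:
        # Image.blend(a, b, alpha) returns a*(1 - alpha) + b*alpha
        composite = Image.blend(self_img, other_img, alpha=1.0 - p_self)
        continuum.append((p_self, composite))
    return continuum

for p_self, img in make_morph_continuum("self_face.png", "partner_face.png"):
    img.save(f"morph_self{int(round(p_self * 100)):02d}.png")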
Mirrorbox/Control Paradigm
Participants in the first cohort will be randomly assigned to either the Mirrorbox or
Control paradigm (n=20 per group). Participants will complete the same tasks
with a confederate in either paradigm; the only difference is that in the Mirrorbox
paradigm participants will face one another in the Mirrorbox, during which their
face is morphed with their partner’s face (see Figure XX below), while in the
Control paradigm, participants will face one another directly. The Control
paradigm is designed to provide the same face-to-face interaction that the
Mirrorbox provides, without providing direct embodiment of one face onto the
other.
Participants will complete either the Mirrorbox or Control paradigm for 4 minutes
with a trained confederate, during which the confederate will initiate an array of
tasks to promote motor synchrony between the individuals. These include
reciting the alphabet and counting together, moving faces in tandem, making
facial expressions in synchrony with one another (e.g., “Both smile. Both frown.”)
and attempting to line up facial features. The confederate will be trained to
loosely follow a script during the 4 minute process while the experimenter gives
instructions to both the confederate and participant. Notably, the control condition
will also control for aspects of pure imitation (as opposed to embodiment) during
these joint tasks. Apple boxes will be used as platforms so that both participants
are at eye-level with one another.
Post-Mirrorbox Assessments
Participants will again complete the battery of assessments from the pre-
Mirrorbox session to measure whether attitudes towards their partner changed.
These attitudes will include the IAT and the assessment measuring notions of
agency, location, and ownership, as well as measurements of bonding such as
attractiveness, likeability and closeness. Ownership will be further tested in the
face morphing measurement. Participants will also be asked to complete several
measures of personality and empathic abilities which will be made available via a
link on Qualtrics. These include the Myers-Briggs Type Indicator (MBTI) and the
Interpersonal Reactivity Index (IRI). In the second experiment, individuals will
also be asked to fill out the Multigroup Ethnic Identity Measure (MEIM) to assess
racial group identification, along with the Implicit Association Test to assess
subconscious attitudes towards other racial groups. One week later, they will be
asked to complete the same battery of tests online via Qualtrics to examine
whether the paradigm introduced longer-lasting effects.
5.3 EXPERIENCE AND MOTOR SYSTEMS IN INDIVIDUALS
AFTER STROKE
One of the most interesting facets of the MNS is its versatile role in both
social and motor cognition. While the previous study using the Mirrorbox
attempts to manipulate action understanding between individuals to improve
social behaviors, another use of the current findings is to examine whether
experience can be used to enhance impaired motor abilities, such as after a
stroke. As both premotor and parietal regions become active when we perform
actions and when we watch others perform the same actions, the experience of
observing actions may activate these action observation networks and may allow
us to stimulate motor-related regions through observation alone, without overt
movement. This may represent one way to engage damaged or perilesional
motor regions when motor performance is impaired (Liew, Garrison, Werner, &
Aziz-Zadeh, 2012). Recent research has proposed and begun to examine this,
with preliminary results showing the existence of a similar action observation
network in participants with hemiparesis due to chronic stroke, which is engaged
when they observe actions performed by the counterpart to the paretic limb
(Garrison, 2011). In addition, some results suggest that action observation in
addition to normal therapy improves motor gains of individuals with hemiparesis
(Ertelt et al., 2007; Franceschini et al., 2010). While this novel line of work is still
preliminary, it may be a very powerful form of adjunct therapy in the not-too-
distant future. However, many studies must be conducted in order to establish
this promising line of research.
Thus, the study detailed here is currently underway to begin to address how the
structural properties of the MNS after stroke affect the functional activation of
motor-related regions in the MNS during action observation, and how this in turn
modulates and is modulated by one's experiences and abilities post-stroke.
Specifically, do the extent of the lesion damage (lesion size), the hemisphere in
which the lesion occurs (laterality), and the integrity of connections between motor
regions (white matter integrity) affect whether and how these motor-related
regions are activated, and whether the activation of these motor-related regions
leads to changes in actual motor performance and ability?
Answering these questions will be a step towards assessing the generalizability
of using an action observation-based therapy with the diverse population of
individuals with hemiparesis. This information may subsequently allow us to tailor
such therapies with optimal parameters for each individual in the future. As space
does not permit a full explication of each of these ideas, the current summary
provides a brief overview of the project goals, methods, and preliminary results.
For a detailed background
on the theory and applicability of using action observation to evoke motor-related
gain in a therapeutic setting, see the next section of this chapter (Section 5.4). In
addition, more information about structural and functional correlations can be
found in Liew, Garrison, Haldar, Winstein, Damasio, and Aziz-Zadeh (2011).
The first part of this study is aimed at understanding how the extent of structural
neural damage in participants with chronic stroke correlates with functional
activation of motor-related networks in the brain when observing meaningful
actions. A second component examines how lesion size and location affect the
structural connectivity between regions of the motor system that may support
action observation. The third part of this study is aimed at examining how the
laterality of the lesion affects the functional activation of these same networks. I
hypothesize that: 1) individuals with larger lesions post-stroke will activate
perilesional components of the action observation network, not commonly
activated in non-lesioned participants, during observation, 2) structural measures
of white matter integrity using diffusion weighted imaging will correlate with lesion
size, lesion location, and behavioral motor performance and functional activity
patterns, and 3) individuals will activate the damaged hemisphere when
observing actions they can no longer perform, regardless of which hemisphere
(left/right, dominant/non-dominant) is affected.
MATERIALS AND METHODS
Participants
Twelve individuals with right-hemisphere middle cerebral artery stroke (mean age
= 65 years old) have participated in the study (Garrison, 2011), and an additional
12 individuals with left-hemisphere middle cerebral artery stroke (mean age = 65
years old) will be recruited. All participants were right-handed prior to the stroke
and have normal or corrected-to-normal vision. In addition, all participants have a
chronic middle cerebral artery (MCA) stroke that occurred more than 3 months
prior to the experiment, and moderate to severe motor impairments of the contralateral
upper extremity.
Task and Procedures
Scanning Procedures. A portion of these procedures has been reported
previously (Garrison, 2011). The entire study takes 1.5 hours and includes an
MP-RAGE to examine anatomical structure, a fluid attenuated inversion recovery
(FLAIR) scan to examine lesion characteristics and extent, an arterial spin
labeling (ASL) scan to examine vasculature, a diffusion weighted imaging (DWI)
sequence to examine the integrity of structural connections, and several echo
planar imaging (EPI) sequences to examine functional neural activity during
visual stimulus presentation.

Of the EPI sequences, the “MNS Localizer” maps the MNS for grasp in each
participant using four 12-s blocked conditions: a) action observation – observe an
actor grasp a button box with his right or left hand; b) execution – grasp and press
buttons on a button box placed at the subject's right and left sides 4 times in
response to a visual cue (a green circle in the right or left visual field to indicate
the right or left hand); c) visual control – static images of hands and the button
box; d) fixation – rest. Participants will practice the procedure prior to MRI. Each
task condition is repeated 6 times followed by rest, randomized across one 7-min
run, acquiring 216 EPI volumes (gradient echo, TR=2 s).

The Main fMRI Procedure includes four 12-s blocked observation conditions: a)
“possible” actions – an actor grasping objects using his right and left hands; b)
“no longer possible” actions – an actor grasping objects using his right or left hand
(for stroke patients, this is the paretic limb); c) visual control – static images of
hands and objects; and d) fixation – rest. All observed actions are adapted from
Wolf Motor Function Test [16] items that are typically not possible for participants
with stroke to perform skillfully using their paretic limb (e.g., stacking checkers).
Observed actions are considered “possible” if performed using the non-paretic
limb, and “no longer possible” if performed using the paretic limb. Participants are
instructed to remain still and to pay attention to the actions and to which hand the
actor uses (videos are shown from the first person perspective), as they will be
asked to imitate those actions using the same hand after their MRI, and they will
be asked a question about the actions after each MRI run. Each task condition is
repeated 15 times followed by rest, randomized across three 6-min runs,
acquiring 180 EPI volumes per run (gradient echo, TR=2 s).

Behavioral Task: All participants will perform the observed actions using each
hand, outside of the scanner after the MRI, while
being videotaped for offline scoring.

The high-resolution MPRAGE provides a 1 mm³ anatomical volume of the whole
brain for each participant (208 coronal slices, resolution=1mm³, TR=2350ms,
TE=3.09ms). The HARDI 144-direction diffusion sequence provides a
high-resolution diffusion volume to examine the structural integrity of white matter
tracts between brain regions (b=2500 s/mm², TR=8742ms, TE=115ms, 6 b=0
images; data obtained separately on 4 subjects
with R MCA stroke). The ASL and FLAIR scans are used to examine lesion
characteristics and effects of the lesion on cerebral vasculature, which may have
subsequent effects on BOLD activity.
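As an illustration of the blocked timing described above, the sketch below builds a randomized block order and onset times for the four localizer conditions, which is the kind of timing information that would feed the GLM. The 6-s rest interval after each block is an assumption chosen only because 24 blocks of 12 s plus 6 s of rest fill the 216 volumes at TR = 2 s; it is not a parameter confirmed by the protocol, and the condition labels are placeholders.

import random

def build_block_onsets(conditions, n_reps=6, block_dur=12.0, rest_dur=6.0, seed=0):
    """Randomized block order and onset times (in seconds) for a blocked fMRI run."""
    rng = random.Random(seed)
    blocks = [c for c in conditions for _ in range(n_reps)]
    rng.shuffle(blocks)
    onsets, t = [], 0.0
    for cond in blocks:
        onsets.append((t, cond))          # (onset in seconds, condition label)
        t += block_dur + rest_dur
    return onsets, t                      # block list and total run duration

onsets, total = build_block_onsets(
    ["action_observation", "execution", "visual_control", "fixation"])
print(f"run duration: {total:.0f} s ({int(total / 2)} volumes at TR = 2 s)")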
Behavioral Measures. After scanning, participants are administered a Modified
Wolf Motor Function Test consisting of the 4 items observed during the fMRI
scanning portion, completing the test with each hand. They are also administered
the Fugl-Meyer Motor Assessment (UE) to assess arm function for the paretic
limb.
Analyses
Functional analyses are run using SPM and FSL neuroimaging analysis
software. Importantly for this analysis, beta values for regions of interest (ROI) in
the MNS [left and right inferior frontal gyrus (IFG), ventral premotor cortex (PMv),
and inferior parietal lobule (IPL)] are extracted. Beta values are extracted both in
native subject space and in standardized template space. Whole brain
correlations are also performed using a portion of the data collected by Garrison
(2011).
All structural analyses are performed in BrainVox (Frank, Damasio &
Grabowski, 1997) in subject native space. High resolution anatomical images for
each participant are first manually skull-stripped, with the whole brain volume
traced from intact gyrus to neighboring intact gyrus, without dipping into
intermediate sulci so as to provide the most accurate measurement of atrophy for
future analyses. Lesioned tissue is then manually traced with the following
categories: 1) cavitation only (CAV), and 2) broader lesion, including cavitation
plus surrounding damaged tissue (BROAD). For the CAV tracings, a separate
ROI is defined for each discrete cavitation. Similarly, for BROAD tracings, a
separate ROI is defined for each discrete region of damaged tissue. BROAD
ROIs always contain at least one CAV ROI (e.g., the cavitation and surrounding
damaged tissue). Some BROAD ROIs contain more than one CAV ROI (e.g., when
the lesioned tissue is continuous but separate cavitations occur throughout the
volume). For IC+ participants, lesions that expand beyond the cortex into the
cerebrospinal fluid are traced until the next closest intact gyrus. For examples of
the two types of tracing, see Figure 5-2 below.
Figure 5-2: Examples of cavitation and broad lesion tracings in BrainVox.
Left: a participant with a cortical + internal capsule lesion; Right: an internal
capsule only lesion.
From the structural images, the following values are calculated:
• Total brain volume (after brain extraction; in mm³)
• Lesion volume – CAV
• Lesion volume – BROAD
• Percent of whole brain volume that is CAV lesion (%CAV)
• Percent of whole brain volume that is BROAD lesion (%BROAD)
• Percent of BROAD lesion that is the CAV lesion (%BC)
These values are then correlated with beta values from MNS ROIs (described
above) and with whole brain BOLD activation. All diffusion weighted imaging
analyses are performed using in-house software for image denoising and
processing (Haldar et al., in press), and visualized in TrackVis (Wang, Benner,
Sorensen, & Wedeen, 2007).
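For concreteness, the correlation step could look roughly like the sketch below, assuming the traced lesion volumes and extracted ROI beta values have been assembled into one per-participant table. The actual volumes come from the BrainVox tracings and the betas from the SPM/FSL pipeline; the file and column names are placeholders.

import pandas as pd
from scipy.stats import pearsonr

# Hypothetical per-participant table with columns:
# brain_vol_mm3, cav_vol_mm3, broad_vol_mm3, beta_L_IFG, beta_L_PMv, beta_L_IPL
df = pd.read_csv("lesion_and_betas.csv")

# Lesion metrics described above
df["pct_cav"] = 100 * df.cav_vol_mm3 / df.brain_vol_mm3      # %CAV
df["pct_broad"] = 100 * df.broad_vol_mm3 / df.brain_vol_mm3  # %BROAD
df["pct_bc"] = 100 * df.cav_vol_mm3 / df.broad_vol_mm3       # %BC

# Correlate each lesion metric with each ROI beta value
for lesion in ["pct_cav", "pct_broad", "pct_bc"]:
    for roi in ["beta_L_IFG", "beta_L_PMv", "beta_L_IPL"]:
        r, p = pearsonr(df[lesion], df[roi])
        print(f"{lesion} vs {roi}: r = {r:+.2f}, p = {p:.3f}")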
PRELIMINARY RESULTS
1. Structural Correlations
For the ROI correlations in native space, both %CAV and %BROAD
correlated negatively with beta values from the L IFG [%CAV: r = -0.58;
%BROAD: r = -0.62] and L PMv [%CAV: r = -0.55; %BROAD: r = -0.61] when
observing “possible” (left hand) actions (see Figure 5-3 below). There were no
other significant correlations with native space ROIs.
Figure 5-3. ROI correlations between %BROAD lesion volume and beta
values. Shown in the L IFG (left) and L PMv (right) from observing possible (left
hand) actions. X-axis: %BROAD lesion volume out of whole brain volume; Y-
axis: beta values.
For the ROI correlations in normalized data, %CAV and %BROAD correlated
with activity in L BA 44 when observing “possible” (left hand) actions [%CAV: r = -
.38; %BROAD: r = -.40]. This is similar to the above analysis in native space but
with a slightly smaller effect.
For the whole brain BOLD correlation in normalized data, using the structural
lesion data as a regressor in the second level model, the lesion values (both
%CAV/BROAD and lesion volumes CAV/BROAD) correlate with activity in the left
IFG (pars triangularis, BA 45) during “impossible” (right hand) action observation
(p<.001, uncorrected; shown in Figure 5-4 below).
Figure 5-4. Whole brain correlation between BOLD activity and lesion
volume. Whole brain correlation between the contrast Impossible Actions (right
hand) > Rest and lesion volume. A significant correlation was found
at the left IFG in pars triangularis, adjacent to the more “canonical” MNS ROI in
pars opercularis (p<.001, uncorr; image from KG).
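The whole brain correlation itself is performed in the SPM/FSL pipeline, but for illustration, an equivalent second-level regression with lesion volume as a covariate could be sketched in Python with nilearn as below. The contrast image paths and lesion values are hypothetical placeholders, not study data.

import numpy as np
import pandas as pd
from nilearn.glm.second_level import SecondLevelModel

# One first-level contrast image per participant (Impossible Actions > Rest),
# plus that participant's lesion metric (placeholder file names and values)
contrast_imgs = [f"sub{i:02d}_impossible_gt_rest.nii.gz" for i in range(1, 13)]
lesion_pct = np.random.uniform(0.5, 8.0, size=len(contrast_imgs))  # illustrative only

design = pd.DataFrame({"lesion_pct": lesion_pct,
                       "intercept": np.ones(len(contrast_imgs))})

# Fit the group model and test the lesion covariate voxel-wise
model = SecondLevelModel().fit(contrast_imgs, design_matrix=design)
z_map = model.compute_contrast("lesion_pct", output_type="z_score")
z_map.to_filename("lesion_volume_regression_zmap.nii.gz")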
2. Diffusion Imaging
Functional ROIs defined by activation of PMv and M1 during observation of
actions corresponding to paretic and non-paretic limbs may also be useful in
examining white matter tract integrity in relation to functional patterns.
Differences in existing corticospinal tracts (CST) between lesioned and intact
hemispheres become evident when comparing ROIs in L and R M1 (Fig 5-5A)
and fibers connecting L vs. R M1 and PMv (Fig 5-5B; all ROIs defined by BOLD
fMRI activation during action observation of the contralateral hand and masked
with a probabilistic anatomical mask for M1 or PMv respectively).
Figure 5-5. Diffusion weighted imaging in chronic stroke. Diffusion weighted
imaging reveals the corticospinal tract (CST) in left and right hemispheres (left)
and the tracts connecting ventral premotor (PMv) and primary motor cortex (M1)
on left and right hemispheres. Shown on one participant with chronic MCA stroke
in native space.
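As an illustration of how such functional ROIs could be constructed, the sketch below thresholds a BOLD statistic map and intersects it with a probabilistic anatomical mask using nibabel. Both images are assumed to be in the same space on the same voxel grid, and the thresholds and file names are placeholders rather than the parameters actually used.

import nibabel as nib
import numpy as np

def functional_roi(stat_map_path, anat_mask_path, z_thresh=2.3, prob_thresh=0.25,
                   out_path="roi.nii.gz"):
    """Intersect a thresholded BOLD statistic map with a probabilistic anatomical mask
    to produce a binary seed ROI for tractography."""
    stat_img = nib.load(stat_map_path)
    anat_img = nib.load(anat_mask_path)
    stat = stat_img.get_fdata()
    prob = anat_img.get_fdata()
    roi = ((stat > z_thresh) & (prob > prob_thresh)).astype(np.uint8)
    nib.save(nib.Nifti1Image(roi, stat_img.affine), out_path)
    return out_path

# e.g., a left M1 seed from the contralateral-hand action observation contrast
functional_roi("zstat_observe_right_hand.nii.gz", "M1_left_prob.nii.gz",
               out_path="L_M1_seed.nii.gz")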
3. Hemispheric Similarities
While data have been collected from only 4 subjects with right-hemisphere MCA
stroke (non-dominant hemisphere, resulting in left-hand impairment), the patterns
seen in each individual on a single-subject level are promising and suggest that
regardless of the hemisphere affected, individuals activate their own
sensorimotor regions in the damaged hemisphere more when observing actions
performed by the counterpart to their paretic limb versus nonparetic limb. A
single individual’s native space anatomical scan and coregistered BOLD results
highlighting this pattern are displayed below (see Figure 5-6).
Figure 5-6. BOLD results in one participant with right MCA stroke. BOLD
results in one participant with chronic right MCA stroke resulting in moderate
hemiparesis when the participant observed actions performed by the counterpart
to the paretic, versus nonparetic, hand. Results presented in native space,
thresholded at Z=.05, uncorrected.
PRELIMINARY DISCUSSION
Altogether, these results suggest: 1) a relationship between lesion volume post-
stroke and functional activation of the frontal MNS, 2) a promising method for
examining white matter integrity between motor-related regions associated with
action observation, and 3) a consistent pattern of activation of the affected
hemisphere during action observation of the counterpart to the paretic limb,
regardless of which hemisphere has been affected. In particular, it appears that
as lesion volume increases, the brain recruits perilesional regions of the action
observation system to support observation of the affected limb on the affected
hemisphere. In addition, white matter integrity may provide telling indicators of
who may be able to respond best to this type of action observation therapy,
depending on who has intact connections between regions of the action
observation network (in the IFG, premotor, and IPL cortices) and primary motor
cortex (M1). Finally, preliminary results in participants with right-hemisphere
lesions suggest that there is a consistent pattern regardless of the hemispheric
laterality of the stroke: actions that one can no longer do consistently activate the
affected hemisphere more strongly than actions that one can do. This is
promising as it suggests that action observation networks are plastic and
moldable by experience, and that they are preserved even after extensive
neurologic injury. Complete results from these preliminary efforts will shed
greater light on how action observation may be used as an important form of
therapy for individuals with motor difficulties due to neurological injury.
5.4 THE MIRROR NEURON SYSTEM: INNOVATIONS
AND IMPLICATIONS FOR OCCUPATIONAL THERAPY
Finally, in addition to understanding how the underlying structural neuroanatomy
affects the engagement of action observation networks after neural damage,
research is also needed to explore how different types of actions maximally
engage the action observation system to evoke the strongest response possible
from action observation therapy. For instance, understanding the factors that
influence motor excitability during action observation, such as attention (from
Section 5.1), has significant clinical implications for issues of motor control. Importantly,
many characteristics of the mirror neuron system suggest that it is optimally
engaged in goal-directed, context-dependent settings, such as those often found
in occupations used by occupational therapists during therapy. While preliminary
work has been done towards this effort, much more about using action
observation as an adjunct therapy to enhance motor gains from normal
occupational therapy has yet to be discovered. This section provides a detailed
review of this literature and potential future directions for using the MNS in
occupational therapy for motor rehabilitation, including stroke, as well as for
exploring dyspraxia and other disorders (Liew et al., 2012).
ABSTRACT
Occupational therapy has traditionally championed the use of meaningful
occupations in rehabilitation. Now, emerging research in neuroscience about the
putative human mirror neuron system may provide empirical support for the use
of occupations to improve outcomes in rehabilitation. The current paper provides
an interdisciplinary framework for understanding the mirror neuron system—a
network of motor-related brain regions activated during the production and
perception of the same actions—in relation to occupational therapy. We present
an overview of recent research on the mirror neuron system, highlighting features
that are relevant to clinical practice in occupational therapy. We then discuss the
potential use of the mirror neuron system in motor rehabilitation, and how the
mirror neuron system may be deficient in populations served by occupational
therapy, including individuals with dyspraxia, multisensory integration disorders,
and social interaction difficulties. Throughout, we propose ways for occupational
therapy to translate these neuroscience findings on the mirror neuron system into
clinical applications and suggest that future research in neuroscience would
benefit from integrating the occupational therapy perspective.
BACKGROUND
The recent discovery of the putative human mirror neuron system (MNS)
suggests that we activate motor-related brain regions both by doing actions and
also by seeing actions (Rizzolatti & Craighero, 2004). This finding has led to a
compelling conversation between neuroscientists and health professionals on
how to best utilize the MNS in a therapeutic context (e.g., Garrison, Winstein, &
Aziz-Zadeh, 2010). This overview will introduce the MNS and its relation to
occupational therapy, and discuss current research and practical applications of
the MNS for clinical populations served by occupational therapists. We note how
neuroscience research of the MNS may benefit by being informed by the tenets
of occupational therapy. Overall, we propose that the MNS may provide scientific
support for practical methods commonly used in occupational therapy and,
likewise, that occupational therapy may offer a framework for additional studies
and clinical applications of the MNS.
The Mirror Neuron System
Mirror neurons were discovered as researchers recorded activity from single cells
in the macaque monkey brain using electrodes (Gallese, Fadiga, Fogassi, &
Rizzolatti, 1996). Some neurons in motor-related brain regions fired vigorously
both when the monkey manipulated an object and when the monkey observed the
experimenter manipulate an object, such as grasping a piece of food. Because
the ability to conduct single-cell recordings in humans is limited, we rely on
functional brain imaging methods, such as functional magnetic resonance
imaging (fMRI), to describe a similar mirror neuron system (MNS) in the human
brain (Rizzolatti & Craighero, 2004). The putative human MNS (Figure 5-7) is
comprised of frontal and parietal motor regions. The frontal MNS occupies the
pars opercularis of the inferior frontal gyrus and adjacent ventral premotor cortex,
regions important for motor planning, action selection, and the representation of
action goals. The parietal MNS occupies the inferior parietal lobule, involved in
body and object representation. Action observation and execution generate
activity in premotor and parietal brain regions, which are involved in motor control
and performance (Rizzolatti & Craighero, 2004).
Figure 5-7. The putative human mirror neuron system. The putative human
mirror neuron system is considered to occupy the inferior frontal gyrus and
ventral premotor cortex (frontal MNS) and inferior parietal lobule (parietal MNS);
circled. This image is derived from functional brain imaging data as individuals
observe a hand action as compared to observing still images of hands.
The MNS and Occupations
A founding principle of occupational therapy is the use of meaningful everyday
activities or occupations to promote health and quality of life. An interdisciplinary
research effort into the putative human MNS has described features of this
system that are relevant to clinical applications of occupations, including that the
MNS is: (a) goal-directed, (b) context-dependent, (c) modulated by experience,
and (d) multimodal.
First, the MNS is goal-directed, in that motor regions within this neural system
show stronger activations when a person observes actions directed toward a
goal or object as compared to actions without a goal or object (Enticott, Kennedy,
Bradshaw, Rinehart, & Fitzgerald, 2010; Jonas et al., 2007; Rizzolatti &
Craighero, 2004). For example, observing actions such as lifting a pen reliably
elicits brain activity in the MNS (Aziz-Zadeh, Sheng, Liew, & Damasio, 2011);
however, actions without a goal, such as meaningless finger movements,
produce less MNS activation (Jonas et al., 2007). An exception to this bias
toward object-directed actions is the observation of gestures, in which the shape
of the hand is considered the action goal, which again activates the MNS (Liew,
Han, et al., 2011).
The goal-directedness of occupations emphasized in occupational therapy
(Nelson, 1997; Reilly, 1962) may relate to the preference for goal-directed
actions in activating the MNS. Greater MNS activity during observation of actions
directed toward a specific goal may provide a neural basis for a recovery
advantage from using occupations—as opposed to rote exercise—in therapy
(Gray, 1998). Motor rehabilitation may benefit by using task practice of
meaningful occupations in part because premotor and parietal activity in the MNS
is greater for purposeful actions. Future studies may use brain imaging as
individuals observe meaningful, as compared to meaningless, occupations that
are tailored to the individual’s own motivations to understand how personally
meaningful actions may affect the engagement of motor activity in the MNS.
Second, the MNS is context-dependent, in that these brain regions show
significantly more activity during observation of actions embedded in meaningful
contexts. One fMRI study revealed that observing actions performed in a context,
for example picking up a mug from a table set for breakfast, elicits greater activity
in the frontal MNS than observing actions performed without context, for example
picking up a mug from an empty table (Iacoboni et al., 2005). Observing actions
in some contexts generates stronger activations in the frontal MNS as compared
to other contexts, such as picking up a mug to drink versus picking up a mug to
clean. This suggests a context-dependence of frontal MNS activity. Intrinsic
motivations such as hunger may also affect MNS activity when observing actions
that relate to the body state, for example there is greater MNS activity in hungry
compared to satiated participants when observing food-related action videos
(Cheng, Meltzoff, & Decety, 2007). Thus activity in the MNS may support action
understanding by using context to aid inference of others’ intentions when we
observe actions (Iacoboni et al., 2005). In occupational therapy, MNS activity
may be modulated by actions embedded in contextual occupations.
Third, the MNS shows stronger activations during observation of actions for
which the observer has prior experience, likely because those actions are highly
represented in the motor system of the observer. A recent fMRI study showed
that classical ballet dancers demonstrate stronger activity in the MNS during
observation of classical ballet moves as compared to observation of dance
moves from the Brazilian martial art, capoeira, which are visually similar, but for
which the observers have no prior experience (Calvo-Merino, Glaser, Grezes,
Passingham, & Haggard, 2005). Action representation in the MNS may also be
acquired through visual experience alone, with increased MNS activity for motor
sequences that one has viewed before but not rehearsed, compared to
sequences that were neither viewed nor rehearsed (Cross, Kraemer, Hamilton,
Kelley, & Grafton, 2009). Thus, an individual’s motor and visual experience
affects action representation in the MNS, and brain activity in the motor system
will differ during observation of actions for which the individual has more or less
experience. Therefore, an individual’s occupational experiences may be relevant
to rehabilitative methods utilizing the MNS, for example by using occupations for
which an individual has visual or motor experience in order to promote positive
outcomes.
Finally, the MNS is multimodal, in that brain activity in the MNS occurs as we
observe or perform actions, and also as we hear action sounds or words
describing actions. For instance, the MNS shows strong activity in response to
action sounds such as a peanut cracking (e.g., in hand-related motor regions;
Kohler et al., 2002). Hearing action-related words also activates the MNS,
specifically the part of the motor cortex controlling the body part involved in the
action word—hearing “kick” activates leg-related motor regions and hearing “hit”
activates hand-related motor regions (Aziz-Zadeh, Wilson, Rizzolatti, & Iacoboni,
2006). MNS activity in response to multimodal action representations suggests a
role for the MNS in integrating multiple sensory inputs, as well as a possible
disruption in MNS activity in individuals who have sensory integration
impairments, a hypothesis which warrants further study. This property of the
MNS also suggests that we may use several means to access similar neural
regions in therapy (e.g., through visual, auditory, or semantic inputs), either as
complements or alternatives, if one modality is impaired in an individual (see
Multimodal Rehabilitative Strategies, below).
We may thus consider the MNS as a goal-directed, context-dependent,
experience-driven component of the cortical motor system, involved in both the
perception and production of action, and accessible by multiple sensory inputs.
Each of these features contributes to a conceptual understanding of the MNS as
being activated more strongly by meaningful actions, and supports the use of
occupations to promote health and well-being. Functional brain imaging studies
have shown that observation of actions with familiar meanings, such as a hand
pressing down on a stapler, activates regions of the MNS more strongly than
observation of actions without familiar meanings, such as a foot pressing down
on a stapler (Newman-Norlund, van Schie, van Hoek, Cuijpers, & Bekkering,
2010). The use of occupations both as an end goal and as a means to achieve
the goal (Gray, 1998) suggests that including occupations, which may be
considered context-rich activities that fall within an individual's motor repertoire and relate to an individual's motivations, may improve outcomes. Overall, these
features of the MNS lend support to the use of occupations in clinical applications
to more strongly engage cortical motor regions. However, the proposed
effectiveness of using meaningful occupations in rehabilitation, and the
assumption that such cortical motor activity will promote better outcomes,
warrants further collaborative study between neuroscientists and occupational
therapists.
Practical Applications of the Mirror Neuron System in Occupational
Therapy
Forms of therapy that activate the MNS. Engaging the motor system when we
observe actions without performing them may be useful to promote motor
learning or relearning of motor skills after injury. The MNS may be especially
relevant to motor rehabilitation in occupational therapy. Activating cortical motor
regions for motor recovery, for example after stroke, usually involves intensive
motor practice. However, motor practice may be fatiguing or frustrating to
patients with limited mobility. Methods that activate the MNS with minimal or no
motor practice—such as action observation—provide another way to activate the
motor system that may be especially useful in patients with limited mobility
(Garrison et al., 2010).
In addition to action observation and action performance, brain regions in the
MNS are also active during imitation (combined action observation and
performance). Research in humans suggests that imitation activates the MNS
more strongly than action observation or execution alone (Iacoboni et al., 1999).
Activity in the motor system beyond the MNS (e.g., primary motor cortex) is
facilitated by combined action observation and performance (Brass, Bekkering,
Wohlschlager, & Prinz, 2000). Action observation with an intent to imitate the
observed actions also leads to greater MNS activity as compared to passive
observation (Buccino et al., 2004) and may be a useful alternative for individuals
unable to imitate. Activity in the MNS may be engaged in therapy by asking
individuals to imitate or attempt to imitate actions modeled by the therapist, or by
asking individuals with limited mobility to imagine imitating actions. Such brain
activity may drive plasticity in regions of the motor system upon subsequent
motor practice.
Motor impairments. A deficit in neural processing in the MNS may underlie
impairments in imitation, motor planning or motor learning, as found in dyspraxia
or developmental coordination disorder (DCD; Werner, Aziz-Zadeh, & Cermak,
2011). Although this hypothesis has not been tested directly, recent research
provides some empirical evidence for disordered motor and praxis-related
imagery in children with DCD (Wilson, Maruff, Ives, & Currie, 2001). Wilson and
colleagues (2001) measured the speed-accuracy trade-off in real and imagined movements in typically developing children and children with DCD. The group with DCD did not maintain the expected speed-accuracy relationship in imagined
movements (despite typical performance in real movements), leading the
researchers to conclude that motor imagery was impaired in this group. Children
with DCD have also been found to have impairments in the ability to generate
accurate internal representations of movements (Wilson et al., 2004). The
internal modeling deficit proposed by these researchers, combined with imitation
impairments in individuals with DCD, is consistent with our understanding of the
human MNS. Because the MNS supports imitation, it is reasonable to speculate
that imitation impairments in dyspraxia or DCD may in part be a consequence of
MNS dysfunction.
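As a point of reference for the speed-accuracy trade-off described above, such trade-offs are commonly formalized with a Fitts'-law-type relation. The equation below is included only as an illustration and does not reproduce the specific task parameters used by Wilson and colleagues:

MT = a + b log2(2D / W),

where MT is movement time, D is the distance to the target, W is the target width, and a and b are empirically fitted constants. The expectation is that imagined movement times scale with task difficulty (the logarithmic term) in the same way that real movement times do; the group with DCD failed to show this scaling for imagined movements.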
Although there is relatively little data on the neural mechanisms of DCD, a
handful of fMRI studies have been conducted with children with DCD using
various motor tasks (Kashiwagi, Iwaki, Narumi, Tamai, & Suzuki, 2009; Zwicker,
Missiuna, Harris, & Boyd, 2011). Brain regions implicated in DCD in
these studies overlap with MNS regions. More neuroimaging studies are needed
to elucidate the role of the MNS in DCD and developmental dyspraxia,
particularly regarding functions related to the MNS, such as imitation and motor
planning. Occupational therapists may contribute to both research and practice in this area by providing quantitative and descriptive measures of imitation impairments.
Motor rehabilitation. Compelling evidence for utilizing the MNS in motor
rehabilitation suggests that the MNS is involved in building motor memories.
Stefan et al. (2005) show that action observation drives reorganization of motor
representations in primary motor cortex to form a specific motor memory of the
observed action, similar to those induced by motor practice (Classen, Liepert,
Wise, Hallett, & Cohen, 1998). Such motor memories may be a prerequisite to
motor learning (Pascual-Leone et al., 1995) and motor relearning in rehabilitation
(Krakauer, 2006). Motor memory formation is facilitated by concurrent action
observation and execution (i.e., imitation) compared to motor practice alone
(Stefan, Classen, Celnik, & Cohen, 2008). These effects are preserved after
brain injury, such as stroke (Celnik, Webster, Glasser, & Cohen, 2008). Further,
brain regions involved in motor memory formation and motor learning are similar
to regions that may contribute to motor recovery after stroke (Ward, 2006).
However, our ability to form motor memories declines with age, whether by motor
practice (Sawaki, Yaseen, Kopylev, & Cohen, 2003) or by action observation
(Celnik et al., 2006). This may influence motor learning abilities (Harrington &
Haaland, 1992) and contribute to challenges with motor rehabilitation in older
adult populations (Butefisch, 2004). Applied research on methods that activate
the MNS for motor rehabilitation is promising but limited. Some important work
has been done to promote the use of action observation (Ertelt et al., 2007) and,
more so, motor imagery (Liu, 2009; Page, Levine, & Leonard, 2007) to improve
outcomes after stroke.
Because imitation is an effective means for learning motor skills (Schaal, Ijspeert,
& Billard, 2003), it may be useful in therapy. For instance, occupational therapists
may use imitation in daily practice in the form of task demonstration, using their
own movements as instruction. Studies suggest that a component of learning
how to successfully imitate others is the experience of being imitated (Ingersoll,
2010). Because the MNS may develop through the repeated pairing of perceived
and produced actions (Heyes, 2010), we may speculate that reciprocal imitation
training improves imitation skills through the strengthening of neural circuitry in
the MNS. Some preliminary work has examined the feasibility of training imitation
in populations in which imitation is impaired (e.g., autism), with positive results
(Ingersoll, 2010). Further studies are needed to evaluate the role of the MNS in
reciprocal imitation training, and how improvements in imitation ability relate to
MNS activity. In particular, it has yet to be determined whether reciprocal imitation is also useful for training imitation and motor skill in individuals with impairments resulting from acquired brain injury, or in other populations, such as those with schizophrenia.
Multimodal rehabilitative strategies. Because the MNS is multimodal, it may
be useful to combine sensory inputs when applying MNS-based methods, for
instance, by using combined speech and motor practice to promote rehabilitation
of motor or language deficits. The left frontal MNS overlaps with Broca’s area, a
region of the brain essential for speech production and also for imitation (Heiser,
Iacoboni, Maeda, Marcus, & Mazziotta, 2003). Language and motor systems
may therefore share functional neural resources within the MNS. A recent study
in occupational therapy utilized this action-language link through semantic
priming, or the vocalization of words related to a motor task prior to moving, to
enhance subsequent motor performance (Grossi, Maitra, & Rice, 2007).
Participants who read aloud “grasp, cup, shelf” prior to grasping a cup and
placing it on a shelf showed decreased time-to-grasp and increased grasp
velocity during motor performance as compared to motor performance without
vocalization. In a related study, hearing words with emotional tones activated
similar regions within the MNS as speaking words aloud with the same emotional
tones (Aziz-Zadeh, Sheng, & Gheytanchi, 2010). These studies suggest that
vocalization may be useful in rehabilitation; occupational therapists may ask
clients to vocalize immediate goals before or during action execution, or to
practice producing emotional speech in order to better perceive it.
One multimodal strategy to activate language and motor regions that is
consistent with this research is offered by Cognitive Orientation to daily
Occupational Performance (CO-OP; Polatajko, Mandich, Miller, & Macnab,
2001a), which promotes physical coordination and motor control by “talking
through” performance strategies upon encountering movement problems in daily
occupations. Evidence suggests that this combination of verbal and cognitive
strategies with action modeling may be effective in promoting motor learning, for
example in DCD (Sangster, Beninger, Polatajko, & Mandich, 2005; Ward &
Rodger, 2004). CO-OP is one example of many available approaches, and is
highlighted here to demonstrate how language-action links in the MNS may be
applied in clinical practice. Future studies may assess how different treatment
approaches (e.g., CO-OP versus DIR/Floortime) modulate MNS activity.
In complement, we may use motor practice to improve language performance.
Early work in occupational therapy by Ayres and Mailloux (1981) and Kantner et
al. (1982) showed that sensory integration, particularly vestibular stimulation,
enhanced language acquisition and expression in children with language delays.
Engaging the motor system for language rehabilitation may also be used to treat
aphasias. A recent study by Meinzer et al. (2010) showed that engaging the
motor system by standing rather than sitting during language performance
improved word retrieval in patients with aphasia, suggesting that the stronger cortical motor activity elicited by standing leads to greater multimodal activation in MNS regions also associated with language ability.
Overall, these studies show that a shared substrate for language and motor
systems in the MNS may be used to improve one function by engaging the other.
Engaging regions of the MNS by using one intact sensory or motor input may
improve a related but impaired sensory or motor modality. We may attend to the
use of language during occupations, or suggest movement during speech
therapy, to facilitate improved occupational performance. By doing so, multiple
sensory inputs are activated toward the same goal.
Beyond the Motor System: The Mirror Neuron System and Social Cognition
Another area of research into the putative MNS that is relevant to occupational
therapy is social cognition, specifically, the neural processes that support our
ability to understand and interact with others. Studies of social cognition may
similarly benefit from being informed by occupational therapy. While a consensus
on the role of the MNS in social cognition is lacking, theoretical models suggest
that the MNS cooperates with other multimodal brain systems to support higher
cognitive functions such as action understanding, perspective taking, and
empathy (Keysers & Gazzola, 2007). One hypothesis is that individuals who
show greater MNS activity during action observation may be better at mapping
others’ actions onto their own bodies, and may in turn be more empathic toward
observed individuals. For example, greater MNS activity as typically developed
participants watched actions made by an individual with aplasia, born without
arms or legs, correlated with higher scores on standard empathy scales (Liew et
al., submitted). It may be that more empathic individuals make a greater effort to
map the actions of dissimilar individuals onto their own body representations in
the MNS.
It follows that the MNS may be related to disordered social interactions such as
those associated with schizophrenia or autism (Dapretto et al., 2006; Iacoboni &
Dapretto, 2006). However, the role of the MNS in social cognition and social
disorders is debated: while some studies report a negative correlation between
MNS activity and impairments associated with autism such as deficits in
perspective taking (Oberman, Ramachandran, & Pineda, 2008), other
researchers express concern that the MNS may be an over-simplified
explanation for the complex autism spectrum disorders (Arbib, 2007; Hamilton,
2009). Because remediating impairments in social interactions is an important
domain of practice, future occupational therapy research on the MNS, and social
cognitive disorders in particular, can examine how activating the MNS may play a
role in improving social interactions. Iacoboni and Dapretto (2006) present a
more in depth review of social cognition and the MNS.
Limitations and Future Directions for Occupational Therapy
Our current understanding of the basic neuroscience of the MNS is based on
functional neuroimaging studies of the healthy brain. Research in clinical
populations is needed to provide insight on how this basic science can be
translated into clinical applications. One limitation of recent research is that most
brain imaging studies use young, college-aged participants, and few brain
imaging studies on the MNS have examined neural processing in older adults.
Studies are needed to better understand how aging affects activity in the cortical
motor system during action observation, motor imagery, and imitation. For
example, it may be more effective to combine methods (e.g., by using imitation
as compared to action observation alone) to modulate MNS activity in the older
brain. Applied studies of the MNS are limited and typically have small sample sizes; further studies are needed in varied clinical populations to
assess the efficacy of MNS-based interventions, and such studies should be
sufficiently powered. We have suggested several hypothesis-driven studies of
the MNS related to occupational therapy. We argue that occupational therapists
are well-positioned to contribute to this research.
This article suggests studies of the MNS that evaluate potential applications in
occupational therapy. These studies require an interdisciplinary approach,
incorporating both basic neuroscience and clinical perspectives. Of interest are
studies assessing the efficacy of rehabilitative methods that use meaningful
occupations to augment cortical motor activity in the MNS, and studies that relate
activity in the MNS to behavioral and cognitive performance or rehabilitation
outcomes. We have highlighted several populations who may either benefit from
using the MNS in rehabilitation, such as after brain injury, or who may have
impairments related to the MNS, such as impaired imitation in dyspraxia, or
impaired social cognition in autism or schizophrenia. A better understanding of
activation patterns in the MNS in individuals with different clinical characteristics,
and across different interventions, may clarify the usefulness of MNS-based
methods to heterogeneous clinical populations. We may also gain insight into the
neural function of the MNS under varied circumstances related to the individual,
the environment, and the context. Finally, occupational therapists may contribute
clinical expertise in selecting appropriate behavioral assessments in relation to
MNS activity, for example to understand how outcome measures such as
engagement in activities of daily living are related to MNS activity. Overall, we
suggest that research of the MNS is relevant to clinical practice, and emergent
studies may contribute to evidence-based practice in occupational therapy.
Likewise, principles and practices of occupational therapy should inform further
research and applications of the MNS.
CONCLUSION
This article provides an overview of research on the putative human MNS and
suggests the relevance of this research to occupational therapy, as well as how
occupational therapy may contribute to studies of the MNS. We argue that recent
findings in neuroscience about the MNS support several principles of
occupational therapy, such as the emphasis in clinical practice on goal-directed,
context-rich, meaningful actions to improve outcomes. Such actions have been
found to strongly activate the MNS during action observation, and may contribute
to effective rehabilitation. In addition, the salience of such actions for activating
the MNS echoes the emphasis in occupational therapy on methods that consider
interactions between the individual, environment, and context (Law et al., 1996).
The MNS may support learning or relearning of valued occupations, thereby
improving health and quality of life. Interdisciplinary research that combines the neuroscience perspective on modulating activity in the MNS with the clinical perspective from occupational therapy on effective treatments that address personalized goals holds potential to generate new innovations for client-centered practice.
CHAPTER 6. Discussion
The current work suggests that we understand other people’s actions by
engaging our own motor regions in order to gain a simulated form of motor
experience, simply through observation. In a series of studies, we examine how
different types of experience—motor, perceptual, visual, and real life—
differentially modulate activity in neural regions associated with action
perception, motor performance, and social cognition. While the prior body of
literature had suggested that we activate our own sensorimotor regions more for
actions we have practiced or experienced ourselves, the body of work presented
in this dissertation provides consistent evidence that we also activate our own
sensorimotor regions when observing actions we have not seen before or cannot
perform ourselves. This finding demonstrates the enormous flexibility of the
human mirror system in representing actions beyond our own abilities and
suggests a further role of these regions in acquiring and generating internal motor patterns for new, even impossible, actions. These results support a non-linear model in which experience and attention together modulate neural activity in action observation regions. Furthermore, these results hint toward dissociable roles for different types of experience in activating specific regions of the MNS and mentalizing networks, as needed to fit each individual's situation.
Consistent Results Across Experiences
The most consistent finding across all of these studies and chapters is that we
activate our own sensorimotor regions more when observing unfamiliar, novel, or
impossible actions compared to familiar, possible actions. This finding may initially
seem to be in contrast with many prior studies showing that we activate our own
motor regions more when observing actions in which we are experts or for which
we have prior motor or visual experience (Calvo-Merino et al., 2005; Calvo-
Merino et al., 2006; Cross et al., 2006; Cross et al., 2009). The assumption from
these studies is that action observation engages our own corresponding motor
regions in accordance with our own motor repertoire—thus, we activate our own
motor regions more when observing actions for which we have a larger motor
repertoire. Indeed, several studies show that even when observing actions that
we cannot perform, such as those performed by an effector we do not possess
(e.g., a robotic claw), we still activate the motor regions that we ourselves would use to achieve the goal of the action (Gazzola et al.,
2007a; Gazzola et al., 2007b). Even when individuals born without arms
observed hand actions, they represented these actions onto the corresponding
part of their own bodies that they used to complete the action (e.g., their feet).
These studies suggest that the MNS is not effector-specific but goal-specific;
that is, we use our body maps to simulate the action goals, regardless of the
means. In this way, we are able to match the goal of the observed action to the
motor program we use to achieve the same goal.
The studies presented here contribute a new dimension to this body of studies on
experience by suggesting that for some new actions or effectors, we also engage
our sensorimotor regions to perform an even more basic task than trying to
match goals. Instead, in some cases, we may be trying to actually break down
observed movements into their component kinematic parts and represent each of
these basic movements onto our own bodies. This may be the first step in
understanding very novel actions; that is, before we can assume higher level
goals or intentions, we may first need to generate an internal motor
representation of what we are seeing. Rather than contradict the prior results, it
is possible that these findings, combined with the prior findings, support a non-
linear model of neural activation in action observation regions, as discussed in
Chapter 4 and in more detail later in this chapter. Briefly, it is likely that many
factors modulate activation of action observation regions, with the extreme ends of these continua generating more activity than intermediate levels. For example, both very familiar and very unfamiliar actions generate more MNS activity, as both categories are highly engaging and may capture attention. In addition, the prior studies typically used whole-body actions embedded in complex movement sequences, while the studies presented here used stimuli limited to the movements of an isolated effector. Thus, the level of
detail with which one is attending to an action, or series of actions, may also
affect the activation of action observation regions.
Summary of MNS Results
The first study reveals that participants activate their own parietal regions when
observing symbolic gestures that they have not seen before. In this study, they
are asked to infer the intention of each viewed gesture, whether they are familiar with it or not. If an internal motor representation of observed actions were
not necessary for action understanding, we might expect these participants to
directly activate higher level regions associated with reasoning, mental state
attribution or perspective taking when observing these unfamiliar gestures.
However, we find the opposite result: when they observe unfamiliar gestures,
they demonstrate significantly more activity in their own parietal regions as
though representing these observed actions onto their own motor regions. This
occurs despite the fact that they have not been asked to perform the action or
told they will need to imitate or recognize these gestures in the future. Without
any instruction to focus on the way the action is performed, participants still
demonstrate the greatest BOLD response in parietal regions associated with
body space, limb kinematics, and reach and grasp abilities. This suggests that
even abstract tasks require some level of physical or motor embodiment before one can infer higher-level cognitive and social intentions.
In the second study, we see that individuals activate their own sensorimotor
regions, again most strongly in the parietal lobule, when observing an individual
with a completely different body from their own. In this study, participants with
typically developed hands observe an individual with residual upper limbs
manipulate objects, such as turning the page of a book. In these cases, they
activate their own sensorimotor regions more when observing the individual with
residual limbs perform actions than when observing an individual with hands
perform the same actions with her hands. Clearly, participants have a much
wider motor repertoire for hand actions as compared to residual limbs. However,
they consistently demonstrate increased parietal activation for residual limb
actions. In order to examine whether this is an effect of experience, participants
are then shown extended video clips of each individual (woman with residual
limbs, woman with hands) in order to provide participants with greater visual
experience. If we assume that the parietal activation during residual limb
observation is due to participants’ effort to generate an internal motor
representation for the novel effector, then we may predict that after sufficient
experience, which allows them to generate an adequate internal model, this
parietal activation will decrease. Indeed, this is exactly what we find. After
participants observe several minutes of extended video clips of both residual limb
and hand actions, they then demonstrate an attenuation of the difference in parietal activation between hand and residual limb actions. They continue to
demonstrate right superior parietal activation, which suggests that after
experience, they continue to update their model of residual limb actions based on
their predictions and the visual feedback they receive, as though ‘fine-tuning’
their model to better fit the observed stimuli.
We then predicted that beyond sheer visual experience, real-life experience
would show similar results with more nuanced activation of particular
components of mirror and mentalizing systems. That is, since participants who
have prior real-life experience with individuals with residual limbs have already
generated an internal model of residual limb actions through their varied
experiences, they should not demonstrate the large amount of parietal activation
that was associated with generating a new internal model (seen in novices in the
PRE condition). Instead, we anticipated that they may show some superior
parietal activation, again to help ‘fine tune’ their existing residual limb model with
the specific kinematics of the observed limb, and possibly some frontal MNS
activity (in the inferior frontal gyrus and premotor cortices), associated with goal
attribution for observed actions. Since they already have experience with a
variety of residual limb actions, they may then be able to use their existing
models to predict the goals of the observed actions—tasks that are thought to be
supported by these frontal regions. Indeed, results from the third study confirmed
these hypotheses as well. Experienced occupational therapists, who had a range
of experience working with individuals with residual limbs, demonstrated only
right superior parietal and visual activation upon initially viewing the residual limb
stimuli, suggesting that they updated pre-existing models for residual limb actions based on the new kinematics of these particular stimuli. Furthermore, a correlation
between experience and BOLD activity suggests that the more experience
individuals had with people with residual limbs, the more they activated their right
IFG, one of the MNS regions associated with predicting and understanding the
goal of observed actions.
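To make the nature of this brain-behavior relationship concrete, the following minimal sketch shows one way such a correlation could be computed, assuming that years of experience and a region-of-interest parameter estimate (beta) have already been extracted for each participant. The variable names and values are hypothetical placeholders and do not reproduce the actual data or analysis pipeline used in the study.

import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-participant values (placeholders, not real data)
years_experience = np.array([2, 5, 8, 12, 15, 20, 25])
right_ifg_beta = np.array([0.11, 0.18, 0.22, 0.31, 0.35, 0.42, 0.55])  # ROI-averaged betas, residual limb > hand contrast

# Pearson correlation between experience and right IFG activity
r, p = pearsonr(years_experience, right_ifg_beta)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")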
Summary of Mentalizing Results
Action observation also elicited activation of mentalizing regions, particularly
when participants had prior personal experience with the actions. This was
observed first during the observation of familiar gestures, when trying to guess
the actions’ intentions. In the third study, older experienced OTs with prior
experience with individuals with residual limbs also demonstrated a main effect of
age when observing residual limb actions. This main effect was not seen in a
group of older novice individuals, suggesting that as individuals get older, they
may rely more on their own prior experiences when trying to understand new
actions rather than expending energy to generate a new internal representation
for the observed action. A similar pattern is seen in an older individual born
without arms when she observes others perform impossible hand actions, such
as using scissors (Aziz-Zadeh et al., 2011). It is likely that as individuals accumulate a greater wealth of experience, they are more likely to rely on these prior
experiences and memories than on their own internal motor representations.
Interest, Attention, and Experience
Of course, factors such as interest, attention, novelty, motivation and
engagement may play a crucial role in modulating these responses as well (Bach
et al., 2007; Cheng et al., 2007; Chong et al., 2008; Chong et al., 2009). In many
cases, the relationship between experience and attention is confounded. For
instance, observing novel actions, like an unfamiliar gesture or the movements of someone with a different body, may be much more interesting than observing a familiar
gesture or body that one has seen many times before. On the other hand,
observing something within one’s own expertise, like a form of dance one has practiced extensively, may be more interesting than observing
a dance form that one has not practiced and is not interested in. A final case
study begins to shed light on these questions.
As reported in Chapter 4, a case study was performed with participant CJ, a
young man who was born with bilateral arm differences below the elbow (as
opposed to above the elbow as seen in the woman in the stimuli). He had a
lifetime of both visual and personal motor experience using residual limbs in the
place of hands and fingers, and while his end effector differed slightly from that of
the woman shown in the stimuli (e.g., he demonstrated far more dexterity with his
upper limbs and had below, as opposed to above, elbow amputations), he still
might be considered an ‘expert’ in residual limb actions. In accordance with
previous literature on expert dancers watching their own form of dance versus
that of another, CJ also demonstrated greater sensorimotor activation in both
parietal and premotor regions, bilaterally, when observing the residual limb
compared to the hand. This finding is in contrast to experienced OTs, who had
prior experience seeing other individuals with residual limbs but only activated
superior parietal regions. This difference may be explained by two factors. First, CJ, having residual limbs himself, may have been especially interested in viewing the residual limb actions of another individual, while experienced OTs, for whom such actions were less novel, may have had less to attend to beyond what they had already seen before. Second, CJ may have a much richer motor representation to match to the observed actions, and much more nuanced attention to the details of the observed movements, than experienced OTs, who could at most map a rough approximation of the residual limb onto their own arms.
The former suggestion is supported by data from the woman depicted in the
stimuli herself, and reported in a separate study (Aziz-Zadeh et al., 2011). When
she observed her own residual limb actions and other people’s hand actions, she
in fact demonstrated greater sensorimotor activation for others’ hands than her
own residual limbs, despite the fact that she had a much more extensive motor
representation for her own residual limb actions. However, despite having a nearly similar level of expertise, CJ activated his own sensorimotor regions more for
the observed residual limb actions than the woman who has those limbs herself.
This suggests that we consistently activate our own sensorimotor regions more
for actions we are interested in, whether they are similar to our own or not. While
this is not directly tested in the current study, this may explain the two case
studies, as the woman observing her own actions was more interested in observing others’ actions (despite these being less similar to her own body representations), while the man observing her actions was more interested in observing her actions (which are more similar to his own body representations, but still novel to him) than in observing hand actions. Altogether, we might surmise that individuals interested
in examining their own action performance may demonstrate more MNS activity
when observing themselves than others, while if they are more interested in
another person, they will show more MNS activity for that individual. We find in
the first study that Chinese participants who observe an actor from their own race
demonstrate more sensorimotor activation for him than for someone from the
other race, possibly because they find him more interesting; however, another recent study demonstrated that individuals who identify as Jewish show more activity in pain-related regions when observing a neo-Nazi receive painful injections than when observing someone who is not (Fox, Sobhani &
Aziz-Zadeh, submitted). In this case, it is possible that individuals want people
they dislike to experience more pain than those they like. While these studies
engage different systems (motor-related regions in the former, pain-related
regions in the latter), they provide preliminary evidence of context-dependent
attentional modulations. These questions should be directly tested in future
studies.
The latter suggestion, that CJ demonstrates more sensorimotor activity when observing residual limbs in part because of his own extensive motor representation for residual limbs, matches the conclusion of prior studies of
dance or skilled experience (Calvo-Merino et al., 2005; Haslinger et al., 2005;
Calvo-Merino et al., 2006; Cross et al., 2006; Cross et al., 2009). That is, the
more experience one has, the more one can pay attention to details and nuances
of actions and the more precisely one can match the actions of the other person
onto his or her own body. In support of this, these studies typically find bilateral
activation in both premotor and parietal regions, similar to the activation found in
CJ upon observation of these actions here. This may reflect the simultaneous
processing of both action kinematics and action goals, which individuals with extensive experience may be able to do, as well as more widespread representation of actions across different components of one’s motor system.
A U-shaped Model of Experience and Attention
All of these results converge to support a recently proposed U-shaped model of
experience and the action observation network. In addition to the work presented
here, another study has demonstrated greater sensorimotor activity when
observing individuals performing robot-like, compared to human-like, actions, and
even more activity when observing robots perform robot-like movements than
humans performing the same movements (Cross et al., 2011). A proposed model
from this work suggests that we activate our own sensorimotor regions more
either when observing something very familiar (in which we are an expert) or
something very unfamiliar (in which we are a novice). However, the mid-level of
experience demonstrates less BOLD activity in sensorimotor regions. Indeed, our
results fit this pattern: novices observing unfamiliar gestures or effectors demonstrate more BOLD activity than novices after experience or experienced OTs, and activity increases again for an individual who is very familiar with residual limb actions. However, these results suggest an important caveat to the predicted relationship: attention, and the related constructs of interest, motivation, task, and context, interact with experience to modulate sensorimotor activation, perhaps more than experience alone. To this end, a study using single-pulse transcranial magnetic stimulation (TMS) to directly examine the effects of attention and motivation on motor resonance is currently underway (see Chapter 5), and hopefully future research will also
systematically examine these subtle but powerful modulators.
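One minimal way to formalize this proposed relationship, offered here only as an illustration and not drawn from any of the cited studies, is a regression model in which observed activity depends on the squared distance of familiarity from an intermediate anchor and on its interaction with attention:

BOLD_i = β0 + β1 (f_i - f0)^2 + β2 a_i + β3 a_i (f_i - f0)^2 + ε_i,

where f_i is observer i's familiarity with the observed action, f0 is the intermediate level of familiarity at which activity is lowest, a_i indexes attention or interest, ε_i is residual error, and the β terms are fitted coefficients. A positive β1 captures the U shape, and a positive β3 captures the suggestion that attention amplifies the effect of being at either extreme of familiarity.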
In addition, as mentioned previously, it is possible that the processes
engaged at each end of the continuum are different. For instance, on the far left,
where previous studies found that very unfamiliar actions activate MNS regions,
it is also true that ‘single-unit’ actions (e.g., simple hand movements, isolated
effectors) activate the MNS more, in part possibly because the participant is
paying attention to how muscles within a hand or effector are moving. On the
other end, at the far right, there are complex representations of movement
sequences. In these cases, individuals who are more experienced with the
components of these complex sequences (e.g., ballerinas who are familiar with
each turn and transition from one movement to the next) may demonstrate more
MNS activity than those who are unfamiliar and view the actions as a whole.
Specific Regions within the System
Clearly there are modulations of neural regions supporting action understanding
that are common across different types of experience. However, while it is
tempting to view regions of the MNS and the mentalizing systems as whole
systems that perform blanket functions, these results also point to the specific recruitment of individual regions within each system that may support
particular functions, in accordance with the observer’s interest, experiences and
abilities. While the data supporting this section is largely preliminary and requires
the confirmation of additional follow-up studies, it is, at the least, one way to view
the body of work presented in the current dissertation.
Several comparisons emerge from the current work, including comparisons between frontal and parietal regions, left and right hemispheres, and MNS
versus mentalizing regions. These results are briefly summarized in Figure 6.1
below.
Figure 6.1. Summary chart of the current findings as related to lateral and frontal/parietal activation patterns of the action observation system:
LEFT hemisphere: possible actions; actions that one wants to perform or thinks about performing.
RIGHT hemisphere: impossible actions; actions that one cannot perform; actions without a direct motor match (abstract or impossible).
FRONTAL (IFG/PMv/PMd): familiar actions; actions for which one has pre-existing knowledge of possible goals. Examples: familiar gestures (vs. stills); familiar effectors one has (CJ observing RL, similarities); familiar effectors one does not have (experienced OTs observing RL).
PARIETAL (IPL/SPL): unfamiliar actions; actions for which one needs to generate a kinematic representation. Examples: unfamiliar gestures; unfamiliar effectors one has (CJ observing RL, differences); unfamiliar effectors one does not have (novices observing RL).
Premotor versus Parietal: Goal versus Kinematics and Familiarity
As mentioned previously, many studies have examined the relative contributions
of premotor and parietal regions to action understanding, and suggest that the
premotor nodes of the action observation system are more strongly associated
with encoding the goal of an observed action (Fagg & Arbib, 1998; Gallese et al.,
2002; Costantini et al., 2005; Newman-Norlund et al., 2010; Bonini et al., 2010;
Molenberghs, Cunnington, & Mattingley, 2012). In accordance with this, actions
that are familiar, including gestures with which one is familiar, compared to still
images of hands, and effectors with which one is familiar, activate premotor
regions in the inferior frontal gyrus and dorsal premotor cortex. These categories
of familiarity versus unfamiliarity do not always apply, of course, and they, like everything else, may also depend on the goals and focus of the observer. For instance, if the observer is not trying to infer the intention of the actor, or is paying more attention to how the action is done, one may see parietal activation instead of frontal activation, even for a familiar action. In contrast, if the observer is trying to understand the goal of the action rather than how it is done, one may expect to see premotor regions more active than parietal regions, even if the action is less familiar.
However, the familiarity of performing the observed action is one way to think
about the differential activation of premotor and parietal regions within the
context of the present studies.
Hemispheric Laterality and Action Feasibility
Action performance primarily activates the hemisphere contralateral to the moving hand. However, action perception demonstrates variable responses, often in a
bilateral pattern (Aziz-Zadeh et al., 2006a). One trend that appears in the current
data, however, is the differential activation of left and right hemispheres in
relation to the observer’s ability to perform the viewed action. In particular, when
observing actions that one can perform, such as symbolic hand gestures, the left
hemisphere is more active than the right. In contrast, when observing actions that
one cannot perform, such as residual limb actions when one does not have a
residual limb, the right hemisphere tends to be more active. This is not to say that
the other hemisphere is not active as well; indeed, in instances such as CJ
observing actions that are similar to his own but not exactly the same, he also
activates his right IPL and IFG along with his left IPL and IFG. However, his
BOLD response on the left is far higher than that of the participants with hands,
while his right-hemisphere activation is within the range of the others’. This
suggests that the left (or contralateral) hemisphere may participate when the
action is one that is feasible for the observer, and the right hemisphere is more
active when it is something that is less possible or requires more integration into
one’s own body space as opposed to a direct motor match between the observed
actions and one’s own motor abilities. The role of the right hemisphere in body
space integration has been most directly examined through lesion studies, in
which patients with lesions of the right hemisphere demonstrate difficulties with
body ownership and proprioception of their limbs within space (Roth, 1949; Rothi
et al., 1985; McGeoch et al., 2011). These cases point towards the utility of the
right hemisphere in integrating different parts of the body, and suggest that it may also
play a role in integrating new body parts into one’s own body schema. However,
these hypotheses for the role of the right versus left hemispheres during action
understanding also warrant further exploration.
Mentalizing and Mirror Regions
Finally, as discussed more extensively in Chapter 2, mentalizing and mirror
regions also appear to perform different roles, with mentalizing regions
supporting action understanding via one’s own memories and personal
experiences, and mirror regions supporting action understanding by generating
an internal motor representation of the observed actions. Thus, actions that are
completely unfamiliar may not directly activate mentalizing regions without first
generating an internal motor representation of the actions, while actions that are
familiar may activate either mentalizing or mirror regions depending on the task
and attentional focus. The results from Chapter 4 confirm this pattern, with
evidence that older individuals who have more experience with people with
residual limbs activate the medial prefrontal cortex, associated with self-
reflection, personal memories, and mental state attribution. While age and
experience were confounded in that population, additional results from a
separate cohort of older but novice individuals demonstrate no mPFC activation when observing the dissimilar individual, likely because these individuals do not have a pre-existing store of personal memories to draw upon when observing the woman with residual limbs, while the experienced individuals do.
CONCLUSION
Altogether, these results support a view of action understanding networks that
are flexibly engaged to encode both actions we can and cannot do, as well as
ones that we have and have not seen before. As more studies emerge examining
the specific characteristics of these intricate neural networks, we are likely to find
even more complex but predictable patterns regarding how we engage our own
motor systems when observing others’ actions. Most importantly, this information
is pivotal for understanding how we might optimally engage these regions in
order to effect behavioral changes, both in terms of improving impaired motor
performance, such as after chronic stroke, and in generating improved social
understanding between different individuals through experimental manipulations
that enhance embodiment of others. These practical applications, described
briefly in the section on current and future studies, thus aim to bring this body of basic science, which explores how experience affects our neural responses to others’ actions, to bear on real-life needs and situations. While our everyday
experiences may be something we often take for granted, this work may help to
demonstrate the power of our experiences in shaping how we perceive, and
ultimately interact with, the world around us. Knowing this, one overarching goal
of this work is to encourage individuals to engage in more diverse experiences in
an attempt to better understand those around us, and to follow the words of novelist Paulo Coelho: “Be brave. Take risks. Nothing can substitute
experience.”
REFERENCES
Adams, R., Rule, N., Franklin, R., Wang, E., Stevenson, M., Yoshikawa, S., et al.
(2010). Cross-cultural reading the mind in the eyes: An fMRI investigation. J Cogn Neurosci, 22(1), 97-108.
Arbib, M. A. (2005). From monkey-like action recognition to human language: An
evolutionary framework for neurolinguistics. Behav Brain Sci, 28(2), 105-24;
discussion 125-67.
Arbib, M. A. (2006). Aphasia, apraxia and the evolution of the language-ready
brain. Aphasiology, 20, 1-30.
Arbib, M.A. (2007). Autism—more than the mirror system. Clinical
Neuropsychiatry, 4, 208-222.
Arbib, M. A. (2010). Mirror system activity for action and language is embedded
in the integration of dorsal and ventral pathways. Brain Lang, 112(1), 12-24.
Arbib, M. A., Bonaiuto, J. B., Jacobs, S., & Frey, S. H. (2009). Tool use and the
distalization of the end-effector. Psychol Res, 73(4), 441-462.
Arbib, M. A., & Mundhenk, T. N. (2005). Schizophrenia and the mirror system: An
essay. Neuropsychologia, 43(2), 268-280.
Archer, D. (1997). Unspoken diversity: Cultural differences in gestures.
Qualitative sociology, 20(1), 79-105.
Arzy, S., Thut, G., Mohr, C., Michel, C.M., & Blanke, O. (2006). Neural basis of
embodiment: Distinct contributions of temporoparietal junction and
extrastriate body area. Journal of Neuroscience, 26(31), 8074-81.
Astafiev, S. V., Stanley, C. M., Shulman, G. L., & Corbetta, M. (2004).
Extrastriate body area in human occipital cortex responds to the
performance of motor actions. Nature Neuroscience, 7(5), 542-548.
Ayres, A.J. & Mailloux, Z. (1981). Influence of sensory integration procedures on
language development. American Journal of Occupational Therapy, 35,
383-390.
Aziz-Zadeh, L., & Damasio, A. (2008). Embodied semantics for actions: Findings
from functional brain imaging. J Physiol Paris, 102(1-3), 35-39.
Aziz-Zadeh, L., Iacoboni, M., Zaidel, E., Wilson, S., & Mazziotta, J. (2004). Left
hemisphere motor facilitation in response to manual action sounds. Eur J
Neurosci, 19(9), 2609-2612.
Aziz-Zadeh, L., & Ivry, R. B. (2009). The human mirror neuron system and
embodied representations. Adv Exp Med Biol, 629, 355-376.
Aziz-Zadeh, L., Koski, L., Zaidel, E., Mazziotta, J., & Iacoboni, M. (2006a).
Lateralization of the human mirror neuron system. Journal of Neuroscience,
26(11), 2964-2970.
Aziz-Zadeh, L., Sheng, T., & Gheytanchi, A. (2010). Common premotor regions
for the perception and production of prosody and correlations with empathy
and prosodic ability. PLoS One, 5(1), e8759.
Aziz-Zadeh, L., Sheng, T., Liew, S. L., & Damasio, H. (2011). Understanding
otherness: The neural bases of action comprehension and pain empathy in
a congenital amputee. Cereb Cortex, doi: 10.1093/cercor/bhr139.
Aziz-Zadeh, L., Wilson, S. M., Rizzolatti, G., & Iacoboni, M. (2006b). Congruent
embodied representations for visually presented actions and linguistic
phrases describing actions. Curr Biol, 16(18), 1818-1823.
Bach, P., Peatfield, N. A., & Tipper, S. P. (2007). Focusing on body sites: the role
of spatial attention in action perception. Exp Brain Res, 178(4), 509-517.
Barbas, H., & Pandya, D. N. (1989). Architecture and intrinsic connections of the
prefrontal cortex in the rhesus monkey. J Comp Neurol, 286(3), 353-375.
Beckmann, C. F., Jenkinson, M., & Smith, S. M. (2003). General multilevel linear
modeling for group analysis in FMRI. Neuroimage, 20(2), 1052-1063.
Bernardis, P., & Gentilucci, M. (2006). Speech and gesture share the same
communication system. Neuropsychologia, 44(2), 178-190.
Bishop, M. (2005). Quality of life and psychosocial adaptation to chronic illness
and disability: Preliminary analysis of a conceptual and theoretical
synthesis. Rehabilitation Counseling Bulletin, 48, 219-231.
Blonder, L. X., Bowers, D., & Heilman, K. M. (1991). The role of the right
hemisphere in emotional communication. Brain, 114, 1115-1127.
Bochner, S. (1994). Cross-cultural differences in the self concept: A test of
Hofstede's individualism/collectivism distinction. Journal of Cross-Cultural
Psychology, 25(2), 273-283.
Bonaiuto, J., & Arbib, M. A. (2010). Extending the mirror neuron system model, II:
What did I just do? A new role for mirror neurons. Biol Cybern, 102(4), 341-
359.
Bonaiuto, J., Rosta, E., & Arbib, M. (2007). Extending the mirror neuron system
model, I. Audible actions and invisible grasps. Biol Cybern, 96(1), 9-38.
Bonini, L., Rozzi, S., Serventi, F. U., Simone, L., Ferrari, P. F., & Fogassi, L.
(2010). Ventral premotor and inferior parietal cortices make distinct
contribution to action organization and intention understanding. Cereb
Cortex, 20(6), 1372-1385.
Borod, J. C., Cicero, B. A., Obler, L. K., Welkowitz, J., Erhan, H. M., Santschi, C.
et al. (1998). Right hemisphere emotional perception: Evidence across
multiple channels. Neuropsychology, 12(3), 446-458.
Botvinick, M., & Cohen, J. (1998). Rubber hands 'feel' touch that eyes see.
Nature, 391, 756.
Brass, M., Bekkering, H., Wohlschlager, A., & Prinz, W. (2000). Compatibility
between observed and executed finger movements: comparing symbolic,
spatial, and imitative cues. Brain Cogn, 44(2), 124-143.
Brass, M., Schmitt, R. M., Spengler, S., & Gergely, G. (2007). Investigating
action understanding: Inferential processes versus action simulation. Curr
Biol, 17(24), 2117-2121.
Brockner, J., & Chen, Y. R. (1996). The moderating roles of self-esteem and self-
construal in reaction to a threat to the self: Evidence from the People's
Republic of China and the United States. J Pers Soc Psychol, 71(3), 603-
615.
Buccino, G., Binkofski, F., Fink, G. R., Fadiga, L., Fogassi, L., Gallese, V. et al.
(2001). Action observation activates premotor and parietal areas in a
somatotopic manner: an fMRI study. Eur J Neurosci, 13(2), 400-404.
Buccino, G., Lui, F., Canessa, N., Patteri, I., Lagravinese, G., Benuzzi, F. et al.
(2004a). Neural circuits involved in the recognition of actions performed by
nonconspecifics: An fMRI study. J Cogn Neurosci, 16(1), 114-126.
Buccino, G., Solodkin, A., & Small, S. L. (2006). Functions of the mirror neuron
system: Implications for neurorehabilitation. Cogn Behav Neurol, 19(1), 55-
63.
Buccino, G., Vogt, S., Ritzl, A., Fink, G. R., Zilles, K., Freund, H. J. et al. (2004b).
Neural circuits underlying imitation learning of hand actions: An event-
related fMRI study. Neuron, 42(2), 323-334.
Butefisch, C. M. (2004). Plasticity in the human cerebral cortex: Lessons from the
normal brain and from stroke. Neuroscientist, 10, 163-173.
Buxbaum, L. J., Haaland, K. Y., Hallett, M., Wheaton, L., Heilman, K. M.,
Rodriguez, A. et al. (2008). Treatment of limb apraxia: Moving forward to
improved action. Am J Phys Med Rehabil, 87(2), 149-161.
Cabeza, R., & St Jacques, P. (2007). Functional neuroimaging of
autobiographical memory. Trends Cogn Sci, 11(5), 219-227.
Calvo-Merino, B., Glaser, D. E., Grezes, J., Passingham, R. E., & Haggard, P.
(2005). Action observation and acquired motor skills: an fMRI study with
expert dancers. Cereb Cortex, 15(8), 1243-1249.
Calvo-Merino, B., Grezes, J., Glaser, D. E., Passingham, R. E., & Haggard, P.
(2006). Seeing or doing? Influence of visual and motor familiarity in action
observation. Current Biology, 16(19), 1905-1910.
Caspers, S., Zilles, K., Laird, A. R., & Eickhoff, S. B. (2010). ALE meta-analysis
of action observation and imitation in the human brain. Neuroimage, 50(3),
1148-1167.
Catmur, C., Walsh, V., & Heyes, C. (2007). Sensorimotor learning configures the
human mirror system. Curr Biol, 17(17), 1527-1531.
Celnik, P., Stefan, K., Hummel, F., Duque, J., Classen, J., & Cohen, L. G. (2006).
Encoding a motor memory in the older adult by action observation.
Neuroimage, 29, 677-684.
Celnik, P., Webster, B., Glasser, D. M., & Cohen, L. G. (2008). Effects of action
observation on physical training after stroke. Stroke, 39, 1814-1820.
Chartrand, T. L., & Bargh, J. A. (1999). The chameleon effect: The perception-
behavior link and social interaction. Journal of personality and social
psychology, 76, 893-910.
Chen, G., & Chung, J. (1994). Confucian impact on organizational
communication. Communication Quarterly, 42(2), 93-105.
Cheng, Y., Meltzoff, A. N., & Decety, J. (2007). Motivation modulates the activity
of the human mirror-neuron system. Cereb Cortex, 17(8), 1979-1986.
Chiao, J.Y., Ambady, N. (2007). Cultural neuroscience: Parsing universality and
diversity across levels of analysis. In: Kitayama S, Cohen D, editors.
Handbook of Cultural Psychology, New York, NY: Guilford Press. pp 237-
254.
Chiao, J. Y., Harada, T., Komeda, H., Li, Z., Mano, Y., Saito, D. et al. (2009).
Neural basis of individualistic and collectivistic views of self. Hum Brain
Mapp, 30(9), 2813-2820.
Chiao, J. Y., Harada, T., Komeda, H., Li, Z., Mano, Y., Saito, D. et al. (2010).
Dynamic cultural influences on neural representations of the self. J Cogn
Neurosci, 22(1), 1-11.
Chiao, J. Y., Iidaka, T., Gordon, H. L., Nogawa, J., Bar, M., Aminoff, E. et al.
(2008). Cultural specificity in amygdala response to fear faces. Journal of
Cognitive Neuroscience, 20(12), 2167-2174.
Chiarello, C., Knight, R., & Mandel, M. (1982). Aphasia in a prelingually deaf
woman. Brain, 105(Pt 1), 29-51.
Chong, T. T., Cunnington, R., Williams, M. A., & Mattingley, J. B. (2009). The role
of selective attention in matching observed and executed actions.
Neuropsychologia, 47(3), 786-795.
Chong, T. T., Williams, M. A., Cunnington, R., & Mattingley, J. B. (2008).
Selective attention modulates inferior frontal gyrus activity during action
observation. Neuroimage, 40(1), 298-307.
Chua, H. F., Boland, J. E., & Nisbett, R. E. (2005). Cultural variation in eye
movements during scene perception. Proc Natl Acad Sci U S A, 102(35),
12629-12633.
Ciaramidaro, A., Adenzato, M., Enrici, I., Erk, S., Pia, L., Bara, B. G. et al. (2007).
The intentional network: How the brain reads varieties of intentions.
Neuropsychologia, 45(13), 3105-3113.
Classen, J., Liepert, J., Wise, S. P., Hallett, M., & Cohen, L. G. (1998). Rapid
plasticity of human cortical movement representation induced by practice.
Journal of Neurophysiology, 79, 1117-1123.
Colby, C. L., & Goldberg, M. E. (1999). Space and attention in parietal cortex.
Annu Rev Neurosci, 22, 319-349.
Conte, A., Gilio, F., Iezzi, E., Frasca, V., Inghilleri, M., & Berardelli, A. (2007).
Attention influences the excitability of cortical motor areas in healthy
humans. Exp Brain Res, 182(1), 109-117.
Corina, D. P., & Knapp, H. (2006). Sign language processing and the mirror
neuron system. Cortex, 42, 529-539.
Corina, D. P., McBurney, S. L., Dodrill, C., Hinshaw, K., Brinkley, J., & Ojemann,
G. (1999). Functional roles of Broca's area and SMG: Evidence from cortical
stimulation mapping in a deaf signer. Neuroimage, 10(5), 570-581.
Corina, D. P., Poizner, H., Bellugi, U., Feinberg, T., Dowd, D., & O'Grady-Batch,
L. (1992a). Dissociation between linguistic and nonlinguistic gestural
systems: A case for compositionality. Brain Lang, 43(3), 414-447.
Corina, D. P., Vaid, J., & Bellugi, U. (1992b). The linguistic basis of left
hemisphere specialization. Science, 255(5049), 1258-1260.
Costantini, M., Galati, G., Ferretti, A., Caulo, M., Tartaro, A., Romani, G. L. et al.
(2005). Neural systems underlying observation of humanly impossible
movements: An FMRI study. Cereb Cortex, 15(11), 1761-1767.
Cross, E. S., Hamilton, A. F., & Grafton, S. T. (2006). Building a motor simulation
de novo: Observation of dance by dancers. Neuroimage, 31(3), 1257-1267.
Cross, E. S., Kraemer, D. J. M., Hamilton, A. F. C., Kelley, W. M., & Grafton, S.
T. (2009). Sensitivity of the action observation network to physical and
observational learning. Cerebral Cortex, 19(2), 315.
Cross, E. S., Liepelt, R., de C. Hamilton, A. F., Parkinson, J., Ramsey, R., Stadler, W.
et al. (2011). Robotic movement preferentially engages the action
observation network. Hum Brain Mapp, doi:10.1002/hbm.21361.
Cross, E. S., Mackie, E. C., Wolford, G., & de C. Hamilton, A. F. (2010). Contorted and
ordinary body postures in the human brain. Exp Brain Res, 204(3), 397-407.
Damasio, H., Grabowski, T. J., Tranel, D., Ponto, L. L., Hichwa, R. D., &
Damasio, A. R. (2001). Neural correlates of naming actions and of naming
spatial relations. Neuroimage, 13(6 Pt 1), 1053-1064.
Damasio, H., Tranel, D., Grabowski, T., Adolphs, R., & Damasio, A. (2004).
Neural systems behind word and concept retrieval. Cognition, 92(1-2), 179-
229.
Damoiseaux, J. S., & Greicius, M. D. (2009). Greater than the sum of its parts: A
review of studies combining structural connectivity and resting-state
functional connectivity. Brain Struct Funct, 213(6), 525-533.
Dapretto, M., Davies, M. S., Pfeifer, J. H., Scott, A. A., Sigman, M., Bookheimer,
S. Y. et al. (2006). Understanding emotions in others: mirror neuron
dysfunction in children with autism spectrum disorders. Nat Neurosci, 9(1),
28-30.
Davis, M. H. (1983). Measuring individual differences in empathy: Evidence for a
multidimensional approach. Journal of Personality and Social Psychology,
44(1), 113-126.
de Lange, F. P., Spronk, M., Willems, R. M., Toni, I., & Bekkering, H. (2008).
Complementary systems for understanding action intentions. Curr Biol,
18(6), 454-457.
Decety, J., & Lamm, C. (2007). The role of the right temporoparietal junction in
social interaction: How low-level computational processes contribute to
meta-cognition. Neuroscientist, 13(6), 580-593.
den Ouden, H. E. M., Frith, U., Frith, C., & Blakemore, S. J. (2005). Thinking
about intentions. Neuroimage, 28(4), 787-796.
Desy, M. C., & Theoret, H. (2007). Modulation of motor cortex excitability by
physical similarity with an observed hand action. PLoS One, 2(10), e971.
Devinsky, O. (2000). Right cerebral hemisphere dominance for a sense of
corporeal and emotional self. Epilepsy & Behavior, 1(1), 60-73.
di Pellegrino, G., Fadiga, L., Fogassi, L., Gallese, V., & Rizzolatti, G. (1992).
Understanding motor events: A neurophysiological study. Exp Brain Res,
91(1), 176-180.
Downing, P. E., Jiang, Y., Shuman, M., & Kanwisher, N. (2001). A cortical area
selective for visual processing of the human body. Science, 293(5539),
2470-2473.
Dumas, G., Nadel, J., Soussignan, R., Martinerie, J., & Garnero, L. (2010). Inter-
brain synchronization during social interaction. PLoS ONE, 5(8), e12166.
Emmorey, K., Grabowski, T., McCullough, S., Damasio, H., Ponto, L., Hichwa, R.
et al. (2004). Motor-iconicity of sign language does not alter the neural
systems underlying tool and action naming. Brain Lang, 89(1), 27-37.
Emmorey, K., Grabowski, T., McCullough, S., Damasio, H., Ponto, L. L., Hichwa,
R. D. et al. (2003). Neural systems underlying lexical retrieval for sign
language. Neuropsychologia, 41(1), 85-95.
Emmorey, K., Grabowski, T., McCullough, S., Ponto, L. L., Hichwa, R. D., &
Damasio, H. (2005). The neural correlates of spatial language in English
and American Sign Language: A PET study with hearing bilinguals.
Neuroimage, 24(3), 832-840.
Engel, A., Burke, M., Fiehler, K., Bien, S., & Rosler, F. (2008). How moving
objects become animated: the human mirror neuron system assimilates
non-biological movement patterns. Soc Neurosci, 3(3-4), 368-387.
Enticott, P. G., Kennedy, H. A., Bradshaw, J. L., Rinehart, N. J., & Fitzgerald, P.
B. (2010). Understanding mirror neurons: evidence for enhanced
corticospinal excitability during the observation of transitive but not
intransitive hand gestures. Neuropsychologia, 48(9), 2675-2680.
Epstein, R., Harris, A., Stanley, D., & Kanwisher, N. (1999). The
parahippocampal place area: Recognition, navigation, or encoding? Neuron,
23(1), 115-125.
Epstein, R., & Kanwisher, N. (1998). A cortical representation of the local visual
environment. Nature, 392(6676), 598-601.
Ertelt, D., Small, S., Solodkin, A., Dettmers, C., McNamara, A., Binkofski, F. et al.
(2007). Action observation has a positive impact on rehabilitation of motor
deficits after stroke. Neuroimage, 36 Suppl 2, T164-73.
Fadiga, L., Craighero, L., & D'Ausilio, A. (2009). Broca's area in language, action,
and music. Ann N Y Acad Sci, 1169, 448-458.
Fadiga, L., Fogassi, L., Pavesi, G., & Rizzolatti, G. (1995). Motor facilitation
during action observation: a magnetic stimulation study. J Neurophysiol,
73(6), 2608-2611.
Fagg, A. H., & Arbib, M. A. (1998). Modeling parietal-premotor interactions in
primate control of grasping. Neural Netw, 11(7-8), 1277-1303.
Fan, Y. T., Decety, J., Yang, C. Y., Liu, J. L., & Cheng, Y. (2010). Unbroken
mirror neurons in autism spectrum disorders. J Child Psychol Psychiatry.
Ferrari, P. F., Gallese, V., Rizzolatti, G., & Fogassi, L. (2003). Mirror neurons
responding to the observation of ingestive and communicative mouth
actions in the monkey ventral premotor cortex. Eur J Neurosci, 17(8), 1703-
1714.
Ferrari, P. F., Rozzi, S., & Fogassi, L. (2005). Mirror neurons responding to
observation of actions made with tools in monkey ventral premotor cortex. J
Cogn Neurosci, 17(2), 212-226.
Fiske, A. P., Kitayama, S., Markus, H. R., & Nisbett, R. E. (1998). The cultural
matrix of social psychology. In D. T. Gilbert, S. T. Fiske, & G. Lindzey
(Eds.), Handbook of social psychology. (pp. 915-981). Boston: McGraw-
Hill.
Flaisch, T., Schupp, H. T., Renner, B., & Junghofer, M. (2009). Neural systems of
visual attention responding to emotional gestures. Neuroimage, 45(4), 1339-
1346.
Fletcher, P. C., Happe, F., Frith, U., Baker, S. C., Dolan, R. J., Frackowiak, R. S.
et al. (1995). Other minds in the brain: A functional imaging study of "theory
of mind" in story comprehension. Cognition, 57(2), 109-128.
Fogassi, L., Ferrari, P. F., Gesierich, B., Rozzi, S., Chersi, F., & Rizzolatti, G.
(2005). Parietal lobe: From action organization to intention understanding.
Science, 308(5722), 662-667.
Fogassi, L., Gallese, V., Fadiga, L., & Rizzolatti, G. (1998). Neurons responding
to the sight of goal directed hand/arm actions in the parietal area PF (7b) of
the macaque monkey. Social Neuroscience Abst, 24, 257.
Foundas, A. L., Macauley, B. L., Raymer, A. M., Maher, L. M., Heilman, K. M., &
Rothi, L. J. (1995). Gesture laterality in aphasic and apraxic stroke patients.
Brain Cogn, 29(2), 204-213.
Fox, G., Sobhani, M., & Aziz-Zadeh, L. (Submitted). Empathy for the enemy?
Threat modulates neural pain processing.
Franceschini, M., Agosti, M., Cantagallo, A., Sale, P., Mancuso, M., & Buccino,
G. (2010). Mirror neurons: action observation treatment as a tool in stroke
rehabilitation. Eur J Phys Rehabil Med, 46(4), 517-523.
Frank, G. (2000). Venus on wheels: Two decades of dialogue on disability,
biography, and being female in America. Los Angeles, CA: University of
California Press.
Frank, R.J., Damasio, H., & Grabowski, T.J. (1997). Brainvox: An interactive,
multimodal visualization and analysis system for neuroanatomical imaging.
Neuroimage, 5(1), 13-30.
Freeman, J. B., Rule, N. O., Adams, R. B. J., & Ambady, N. (2009). Culture
shapes a mesolimbic response to signals of dominance and subordination
that associates with behavior. Neuroimage, 47(1), 353-359.
Frith, C. D., & Frith, U. (2006). The neural basis of mentalizing. Neuron, 50(4),
531-534.
Frith, U., & Frith, C. D. (2003). Development and neurophysiology of mentalizing.
Philos Trans R Soc Lond B Biol Sci, 358(1431), 459-473.
Fujita, I., Tanaka, K., Ito, M., & Cheng, K. (1992). Columns for visual features of
objects in monkey inferotemporal cortex. Nature, 360(6402), 343-346.
Gallagher, H. L., & Frith, C. D. (2003). Functional imaging of 'theory of mind'.
Trends Cogn Sci, 7(2), 77-83.
Gallagher, H. L., & Frith, C. D. (2004). Dissociable neural pathways for the
perception and recognition of expressive and instrumental gestures.
Neuropsychologia, 42(13), 1725-1736.
Gallagher, H. L., Happe, F., Brunswick, N., Fletcher, P. C., Frith, U., & Frith, C.
D. (2000). Reading the mind in cartoons and stories: An fMRI study of
'theory of mind' in verbal and nonverbal tasks. Neuropsychologia, 38(1), 11-
21.
Gallese, V., Fadiga, L., Fogassi, L., & Rizzolatti, G. (1996). Action recognition in
the premotor cortex. Brain, 119(Pt 2), 593-609.
Gallese, V., Fogassi, L., Fadiga, L., & Rizzolatti, G. (2002). Action representation
and the inferior parietal lobule. In W. Prinz & B. Hommel (Eds.), Attention &
Performance XIX. Common Mechanisms in Perception and Action. (pp.
247-266). Oxford, UK: Oxford Univ. Press.
Gallese, V., Keysers, C., & Rizzolatti, G. (2004). A unifying view of the basis of
social cognition. Trends in Cognitive Sciences, 8(9), 396-403.
Gallese, V., & Lakoff, G. (2005). The brain's concepts: The role of the sensory-
motor system in conceptual knowledge. The Multiple Functions of Sensory-
Motor Representations, 22(3/4), 455.
Garrison, K.A. (2011). Modulating the motor system by action observation:
Implications for stroke rehabilitation. Ph.D. dissertation, University of
Southern California.
Garrison, K.A., Winstein, C.J., & Aziz-Zadeh, L. (2010). The mirror neuron
system: A neural substrate for methods in stroke rehabilitation. Neurorehabil
Neural Repair, 24(5), 404-412.
Gazzola, V., Aziz-Zadeh, L., & Keysers, C. (2006). Empathy and the somatotopic
auditory mirror system in humans. Curr Biol, 16(18), 1824-1829.
Gazzola, V., Rizzolatti, G., Wicker, B., & Keysers, C. (2007a). The
anthropomorphic brain: The mirror neuron system responds to human and
robotic actions. Neuroimage, 35(4), 1674-1684.
Gazzola, V., van der Worp, H., Mulder, T., Wicker, B., Rizzolatti, G., & Keysers,
C. (2007b). Aplasics born without hands mirror the goal of hand actions with
their feet. Current biology, 17(14), 1235-1240.
Gentilucci, M., Bernardis, P., Crisi, G., & Dalla Volta, R. (2006). Repetitive
transcranial magnetic stimulation of Broca's area affects verbal responses
to gesture observation. J Cogn Neurosci, 18(7), 1059-1074.
Gentilucci, M., & Dalla Volta, R. (2008). Spoken language and arm gestures are
controlled by the same motor control system. The Quarterly Journal of
Experimental Psychology, 61(6), 944-957.
Geschwind, N. (1975). The apraxias: neural mechanisms of disorders of learned
movement. Am Sci, 63(2), 188-195.
Gonzalez Rothi, L. J., Ochipa, C., & Heilman, K. M. (1991). A cognitive
neuropsychological model of limb praxis. Cognitive Neuropsychology, 8(6),
443-458.
Gray, J. M. (1998). Putting occupation into practice: Occupation as ends,
occupation as means. American Journal of Occupational Therapy, 52(5),
354-364.
Greicius, M. (2008). Resting-state functional connectivity in neuropsychiatric
disorders. Curr Opin Neurol, 21(4), 424-430.
Greicius, M. D., Flores, B. H., Menon, V., Glover, G. H., Solvason, H. B., Kenna,
H. et al. (2007). Resting-state functional connectivity in major depression:
Abnormally increased contributions from subgenual cingulate cortex and
thalamus. Biol Psychiatry, 62(5), 429-437.
Greicius, M. D., Krasnow, B., Reiss, A. L., & Menon, V. (2003). Functional
connectivity in the resting brain: a network analysis of the default mode
hypothesis. Proc Natl Acad Sci U S A, 100(1), 253-258.
Greicius, M. D., Srivastava, G., Reiss, A. L., & Menon, V. (2004). Default-mode
network activity distinguishes Alzheimer's disease from healthy aging:
Evidence from functional MRI. Proc Natl Acad Sci U S A, 101(13), 4637-
4642.
Greicius, M. D., Supekar, K., Menon, V., & Dougherty, R. F. (2009). Resting-state
functional connectivity reflects structural connectivity in the default mode
network. Cereb Cortex, 19(1), 72-78.
Gross, C. G., Bender, D. B., & Rocha-Miranda, C. E. (1969). Visual receptive
fields of neurons in inferotemporal cortex of the monkey. Science, 166(910),
1303-1306.
Gross, C. G., Rocha-Miranda, C. E., & Bender, D. B. (1972). Visual properties of
neurons in inferotemporal cortex of the macaque. J Neurophysiol, 35(1), 96-
111.
Grossi, J.A., Maitra, K.K., & Rice, M.S. (2007) Semantic priming of motor task
performance in young adults: Implications for therapy. American Journal of
Occupational Therapy, 61, 311-320.
Grühn, D., Rebucal, K., Diehl, M., Lumley, M., & Labouvie-Vief, G. (2008).
Empathy across the adult lifespan: Longitudinal and experience-sampling
findings. Emotion, 8(6), 753-765.
Gusnard, D. A., Akbudak, E., Shulman, G. L., & Raichle, M. E. (2001). Medial
prefrontal cortex and self-referential mental activity: relation to a default
mode of brain function. Proc Natl Acad Sci U S A, 98(7), 4259-4264.
Gusnard, D. A., & Raichle, M. E. (2001). Searching for a baseline: Functional
imaging and the resting human brain. Nature Reviews Neuroscience, 2(10),
685-694.
Habets, B., Kita, S., Shao, Z., Ozyurek, A., & Hagoort, P. (2010). The role of
synchrony and ambiguity in speech-gesture integration during
comprehension. J Cogn Neurosci.
Haldar, J. P., Wedeen, V. J., Nezamzadeh, M., Dai, G., Weiner, M. W., Schuff,
N., & Liang, Z.-P. (In press). Improved diffusion imaging through SNR-
enhancing joint reconstruction. Magnetic Resonance in Medicine 2012.
Hamilton, A.C.F. (2009). Goals, intentions and mental states: Challenges for
theories of autism. Journal of Child Psychology and Psychiatry, 50, 881-
892.
Han, S., & Northoff, G. (2008). Culture-sensitive neural substrates of human
cognition: A transcultural neuroimaging approach. Nat Rev Neurosci, 9(8),
646-654.
Han, S., & Northoff, G. (2009). Understanding the self: a cultural neuroscience
approach. Prog Brain Res, 178, 203-212.
Harrington, D. L., & Haaland, K. Y. (1992). Skill learning in the elderly:
Diminished implicit and explicit memory for a motor sequence. Psychology
& Aging, 7, 425-434.
Haslinger, B., Erhard, P., Altenmuller, E., Schroeder, U., Boecker, H., &
Ceballos-Baumann, A. O. (2005). Transmodal sensorimotor networks during
action observation in professional pianists. J Cogn Neurosci, 17(2), 282-
293.
Heath, M., Roy, E. A., Black, S. E., & Westwood, D. A. (2001). Intransitive limb
gestures and apraxia following unilateral stroke. J Clin Exp Neuropsychol,
23(5), 628-642.
Heilman, K. M., Rothi, L. J., & Valenstein, E. (1982). Two forms of ideomotor
apraxia. Neurology, 32(4), 342-346.
Heilman, K. M., Schwartz, H. D., & Geschwind, N. (1975). Defective motor
learning in ideomotor apraxia. Neurology, 25(11), 1018-1020.
Heiser, M., Iacoboni, M., Maeda, F., Marcus, J., & Mazziotta, J. C. (2003). The
essential role of Broca's area in imitation. European Journal of
Neuroscience, 17, 1123-1128.
Hesse, M. D., Sparing, R., & Fink, G. R. (2009). End or means--the "what" and
"how" of observed intentional actions. J Cogn Neurosci, 21(4), 776-790.
Heyes, C. (2010). Where do mirror neurons come from? Neuroscience &
Biobehavioral Reviews, 34, 575-583.
Hofstede, G. (1980). Culture's consequences: International differences in work-
related values. Beverly Hills, CA: Sage.
Iacoboni, M. (2005). Neural mechanisms of imitation. Current Opinion in
Neurobiology, 15(6), 632-637.
Iacoboni, M., & Dapretto, M. (2006). The mirror neuron system and the
consequences of its dysfunction. Nat Rev Neurosci, 7(12), 942-951.
Iacoboni, M., & Mazziotta, J. C. (2007). Mirror neuron system: Basic findings and
clinical applications. Ann Neurol, 62(3), 213-218.
Iacoboni, M., Molnar-Szakacs, I., Gallese, V., Buccino, G., Mazziotta, J. C., &
Rizzolatti, G. (2005). Grasping the intentions of others with one's own mirror
neuron system. PLoS Biol, 3(3), e79.
Iacoboni, M., Woods, R. P., Brass, M., Bekkering, H., Mazziotta, J. C., &
Rizzolatti, G. (1999). Cortical mechanisms of human imitation. Science,
286(5449), 2526-2528.
Iacoboni, M., & Zaidel, E. (2004). Interhemispheric visuo-motor integration in
humans: the role of the superior parietal cortex. Neuropsychologia, 42(4),
419-425.
Ihaka, R., & Gentleman, R. (1996). R: A language for data analysis and graphics.
Journal of computational and graphical statistics, 299-314.
Ikeda, K., & Huckfeldt, R. (2001). Political communication and disagreement
among citizens in Japan and the United States. Political Behavior, 23(1),
23-51.
Ingersoll, B. (2010). Pilot randomized controlled trial of reciprocal imitation
training for teaching elicited and spontaneous imitation to children with
autism. Journal of Autism & Developmental Disorders, 40, 1154-1160.
Ionta, S., Gassert, R., & Blanke, O. (2011) Multi-sensory and sensorimotor
foundation of bodily self-consciousness - an interdisciplinary approach.
Front Psychol, 2, 383. doi: 10.3389/fpsyg.2011.00383
Ito, T.A. & Bartholow, B.D. (2009). The neural correlates of race. Trends Cogn
Sci 13, 524-531.
Ito, M., Tamura, H., Fujita, I., & Tanaka, K. (1995). Size and position invariance
of neuronal responses in monkey inferotemporal cortex. J Neurophysiol,
73(1), 218-226.
Jacobs, S., Danielmeier, C., & Frey, S. H. (2009). Human anterior intraparietal
and ventral premotor cortices support representations of grasping with the
hand or a novel tool. J Cogn Neurosci.
Jellema, T., Baker, C. I., Wicker, B., & Perrett, D. I. (2000). Neural representation
for the perception of the intentionality of actions. Brain Cogn, 44(2), 280-
302.
Jenkinson, M., Bannister, P., Brady, M., & Smith, S. (2002). Improved
optimization for the robust and accurate linear registration and motion
correction of brain images. Neuroimage, 17(2), 825-841.
Jenkinson, M., & Smith, S. (2001). A global optimisation method for robust affine
registration of brain images. Med Image Anal, 5(2), 143-156.
Jezzard, P., Matthews, P. M., & Smith, S. M. (Eds.). (2001). Statistical analysis of
activation images. Oxford: Oxford University Press.
Jonas, M., Siebner, H. R., Biermann-Ruben, K., Kessler, K., Baumer, T., Buchel,
C. et al. (2007). Do simple intransitive finger movements consistently
activate frontoparietal mirror neuron areas in humans? Neuroimage, 36
Suppl 2, T44-53.
Kantner, R.M., Clark, D.L., Atkinson, J., & Paulson, G. (1982). Effects of
vestibular stimulation in seizure-prone children. Physical Therapy, 62, 16-
21.
Kanwisher, N., McDermott, J., & Chun, M. M. (1997). The fusiform face area: A
module in human extrastriate cortex specialized for face perception. Journal
of Neuroscience, 17(11), 4302-4311.
Kaplan, J. T., Aziz-Zadeh, L., Uddin, L. Q., & Iacoboni, M. (2008). The self across
the senses: An fMRI study of self-face and self-voice recognition. Soc Cogn
Affect Neurosci, 3(3), 218-223.
Kaplan, J. T., & Iacoboni, M. (2006). Getting a grip on other minds: Mirror
neurons, intention understanding, and cognitive empathy. Soc Neurosci,
1(3-4), 175-183.
Kashiwagi, M., Iwaki, S., Narumi, Y., Tamai, H., & Suzuki, S. (2009). Parietal
dysfunction in developmental coordination disorder: A functional MRI
study. Neuroreport, 20, 1319-1324.
Keenan, J. P., Freund, S., Hamilton, R. H., Ganis, G., & Pascual-Leone, A. (2000). Hand response differences in a self-face identification task.
Neuropsychologia, 38, 1053-1074.
Keenan, J. P., McCutcheon, B., Freund, S., Gallup, G. G., Sanders, G., & Pascual-Leone, A. (1999). Left hand advantage in a self-face recognition
task. Neuropsychologia, 37, 1421-1425.
Keenan, J. P., Nelson, A., O'Connor, M., & Pascual-Leone, A. (2001). Self-
recognition and the right hemisphere. Nature, 409(6818), 305.
Keysers, C., & Gazzola, V. (2006). Towards a unifying neural theory of social
cognition. Prog Brain Res, 156, 379-401.
Keysers, C., & Gazzola, V. (2007). Integrating simulation and theory of mind:
From self to social cognition. Trends in Cognitive Sciences, 11(5), 194-196.
Keysers, C., Kaas, J. H., & Gazzola, V. (2010). Somatosensation in social
perception. Nat Rev Neurosci, 11(6), 417-428.
Kilner, J. M., & Frith, C. D. (2008). Action observation: Inferring intentions without
mirror neurons. Curr Biol, 18(1), R32-3.
Kirkman, B. L., & Shapiro, D. L. (2001). The impact of team members' cultural
values on productivity, cooperation, and empowerment in self-managing
work teams. Journal of Cross-Cultural Psychology, 32(5), 597-617.
Kohler, E., Keysers, C., Umilta, M. A., Fogassi, L., Gallese, V., & Rizzolatti, G.
(2002). Hearing sounds, understanding actions: Action representation in
mirror neurons. Science, 297(5582), 846-848.
Krakauer, J. W. (2006). Motor learning: Its relevance to stroke recovery and
neurorehabilitation. Current Opinion in Neurology, 19, 84-90.
Law, M., Cooper, B., Strong, S., Stewart, D., Rigby, P., & Letts, L. (1996). The
Person-Environment-Occupation Model: A transactive approach to
occupational performance. Canadian Journal of Occupational Therapy, 63,
9-23.
Leary, M. R. (1983). A brief version of the fear of negative evaluation scale.
Personality and Social Psychology Bulletin, 9, 518-530.
Lenggenhager, B., Mouthon, M., & Blanke, O. (2009). Spatial aspects of bodily
self-consciousness. Consciousness and Cognition, 18(1), 110-17.
Lenggenhager, B., Tadi, T., Metzinger, T., & Blanke, O. (2007). Video ergo sum:
Manipulating bodily self-consciousness. Science, 317(5841), 1096-99.
Leong, F. T., Hardin, E. E., & Gupta, A. (2010). A cultural formulation approach
to career assessment and career counseling with Asian American clients.
Journal of Career Development, 37(1), 465-486.
Liepelt, R., Von Cramon, D. Y., & Brass, M. (2008). How do we infer others' goals
from non-stereotypic actions? The outcome of context-sensitive inferential
processing in right inferior parietal and posterior temporal cortex.
Neuroimage, 43(4), 784-792.
Liew, S.-L. & Aziz-Zadeh, L. (2011a). The mirror neuron system and social
cognition. [Book chapter]. In: From DNA to Social Cognition. Editors: Ebstein,
Shamay-Tsoory, & Chew.
Liew, S.-L., & Aziz-Zadeh, L. (2011b). The neuroscience of language in
occupations: A review of findings from brain and behavioral sciences. Journal
of Occupational Science, 18(2), 97-114.
Liew, S.-L. & Aziz-Zadeh, L. (In press). The mirror neuron system, social control,
and language. [Book chapter]. In: Handbook of Neurosociology. Editors:
Franks, D. & Turner, J.H.
Liew, S.-L., Garrison, K.A., Haldar, J., Winstein, C.J., Damasio, H., & Aziz-
Zadeh, L. (2011). Structural neuroanatomy of lesioned brains in chronic
stroke patients and correlations with functional activation of action
observation networks. Proc Soc for Neurosci 41st Annual Meeting, Washington, DC, 589.10.
Liew, S. L., Garrison, K. A., Werner, J., & Aziz-Zadeh, L. (2012). The mirror
neuron system: Innovations and implications for occupational therapy.
OTJR: Occupation, Participation, and Health, DOI: 10.3928/15394492-
20111209-01.
Liew, S. L., Han, S., & Aziz-Zadeh, L. (2011). Familiarity modulates mirror neuron
and mentalizing regions during intention understanding. Hum Brain Mapp,
32(11), 1986-1997.
Liew, S. L., Ma, Y., Han, S., & Aziz-Zadeh, L. (2011). Who's afraid of the boss:
Cultural differences in social hierarchies modulate self-face recognition in
Chinese and Americans. PLoS One, 6(2), e16901.
Liew, S.-L., Sheng, T., & Aziz-Zadeh, L. (2010). The neural correlates of stigma
for physical differences. Paper presented at the Cognitive Neuroscience
Society Annual Meeting.
Liew, S.L., Sheng, T., & Aziz-Zadeh, L. (Submitted). Experience with a congenital
amputee modulates one’s own sensorimotor response during action
observation.
Liu, K. P. (2009). Use of mental imagery to improve task generalisation after a
stroke. Hong Kong Medical Journal, 15, 37-41.
Livneh, H., & Antonak, R. F. (1997). Psychosocial adaptation to chronic illness
and disability. Gaithersburg, MD: Aspen.
Loh, J., Restubog, S. L. D., & Zagenczyk, T. J. (2010). Consequences of
workplace bullying on employee identification and satisfaction among
Australians and Singaporeans. Journal of Cross-Cultural Psychology,
41(2), 236-252.
Longo, M.R., Schuur, F., Kammers, M.P.M., Tsakiris, M., & Haggard, P. (2009). Self awareness and the body image. Acta Psychologica, 132(2), 166-172.
Lynall, M. E., Bassett, D. S., Kerwin, R., McKenna, P. J., Kitzbichler, M., Muller,
U. et al. (2010). Functional connectivity and brain networks in
schizophrenia. J Neurosci, 30(28), 9477-9487.
Ma, Y., & Han, S. (2009). Self-face advantage is modulated by social threat –
Boss effect on self-face recognition. Journal of Experimental Social
Psychology, 45, 1048-1051.
Ma, Y., & Han, S. (2010). Why we respond faster to the self than to others? An
implicit positive association theory of self-advantage during implicit face
recognition. J Exp Psychol Hum Percept Perform, 36(3), 619-633.
MacSweeney, M., Campbell, R., Woll, B., Giampietro, V., David, A. S., McGuire,
P. K. et al. (2004). Dissociating linguistic and nonlinguistic gestural
communication in the brain. Neuroimage, 22(4), 1605-1618.
MacSweeney, M., Woll, B., Campbell, R., Calvert, G. A., McGuire, P. K., David,
A. S. et al. (2002a). Neural correlates of British sign language
comprehension: Spatial processing demands of topographic language. J
Cogn Neurosci, 14(7), 1064-1075.
MacSweeney, M., Woll, B., Campbell, R., McGuire, P. K., David, A. S., Williams,
S. C. et al. (2002b). Neural systems underlying British Sign Language and
audio-visual English processing in native users. Brain, 125(Pt 7), 1583-
1593.
Maddock, R. J., Garrett, A. S., & Buonocore, M. H. (2001). Remembering familiar
people: The posterior cingulate cortex and autobiographical memory
retrieval. Neuroscience, 104(3), 667-676.
Markus, H., & Kitayama, S. (1998). The cultural psychology of personality.
Journal of Cross-Cultural Psychology, 29(1), 63-87.
Markus, H. R., & Kitayama, S. (1991). Culture and the self: Implications for
cognition, emotion, and motivation. Psychol Rev, 98, 224-253.
Masuda, T., & Nisbett, R. E. (2001). Attending holistically versus analytically:
comparing the context sensitivity of Japanese and Americans. J Pers Soc
Psychol, 81(5), 922-934.
Mazzurega, M., Pavani, F., Paladino, M.P., & Schubert, T.W. (2010). Self-other
bodily merging in the context of synchronous but arbitrary-related
multisensory inputs. Experimental Brain Research, 213(2-3), 213-21.
McGeoch, P. D., Brang, D., Song, T., Lee, R. R., Huang, M., & Ramachandran,
V. S. (2011). Xenomelia: a new right parietal lobe syndrome. J Neurol
Neurosurg Psychiatry, 82(12), 1314-1319.
McNeill, D. (1992). Hand and Mind: What Gestures Reveal About Thought.
Chicago: University of Chicago Press.
McNeill, D. (2005). Gesture and Thought. Chicago, IL: The University of Chicago
Press.
Meinzer, M., Breitenstein, C., Westerhoff, U., Sommer, J., Rosser, N., Rodriguez,
A.D., et al. (2011). Motor cortex preactivation by standing facilitates word
retrieval in aphasia. Neurorehabilitation & Neural Repair, 25, 178-187.
Miyamoto, Y., Nisbett, R. E., & Masuda, T. (2006). Culture and the physical
environment. Holistic versus analytic perceptual affordances. Psychol Sci,
17(2), 113-119.
Molenberghs, P., Cunnington, R., & Mattingley, J. B. (2012). Brain regions with
mirror properties: a meta-analysis of 125 human fMRI studies. Neurosci
Biobehav Rev, 36(1), 341-349.
Molnar-Szakacs, I., Iacoboni, M., Koski, L., & Mazziotta, J. C. (2005). Functional
segregation within pars opercularis of the inferior frontal gyrus: Evidence
from fMRI studies of imitation and action observation. Cereb Cortex, 15(7),
986-994.
Molnar-Szakacs, I., Wu, A. D., Robles, F. J., & Iacoboni, M. (2007). Do you see
what I mean? Corticospinal excitability during observation of culture-specific
gestures. PLoS One, 2(7), e626.
Morecraft, R. J., Cipolloni, P. B., Stilwell-Morecraft, K. S., Gedney, M. T., &
Pandya, D. N. (2004). Cytoarchitecture and cortical connections of the
posterior cingulate and adjacent somatosensory fields in the rhesus
monkey. J Comp Neurol, 469(1), 37-69.
Moskowitz, D. S., Suh, E. J., & Desaulniers, J. (1994). Situational influences on
gender differences in agency and communion. J Pers Soc Psychol, 66(4),
753-761.
Mukamel, R., Ekstrom, A. D., Kaplan, J., Iacoboni, M., & Fried, I. (2010). Single-
neuron responses in humans during execution and observation of actions.
Curr Biol.
Mukherjee, P., Bahn, M. M., McKinstry, R. C., Shimony, J. S., Cull, T. S.,
Akbudak, E. et al. (2000). Differences between gray matter and white matter
water diffusion in stroke: Diffusion-tensor MR imaging in 12 patients.
Radiology, 215(1), 211-220.
Murray, C. (2010). Understanding adjustment and coping to limb loss and
absence through phenomenologies of prosthesis use. In C. Murray (Ed.),
Amputation, prosthesis use, and phantom limb pain: An interdisciplinary
perspective (pp. 81-99). New York, NY: Springer Science+Business Media,
LLC.
Murray, C. D. (2009). Being like everybody else: the personal meanings of being
a prosthesis user. Disabil Rehabil, 31(7), 573-581.
Muthukumaraswamy, S. D., & Singh, K. D. (2008). Modulation of the human
mirror neuron system during cognitive activity. Psychophysiology, 45(6),
896-905.
Mychack, P., Kramer, J. H., Boone, K. B., & Miller, B. L. (2001). The influence of
right frontotemporal dysfunction on social behavior in frontotemporal
dementia. Neurology, 56(11 Suppl 4), S11-5.
National Limb Loss Information Center. (2011). Limb Loss FAQ's. Retrieved
December 28, 2011, from http://www.amputee-coalition.org/nllic_faq.html.
Nelson, D. L. (1997). Why the profession of occupational therapy will flourish in
the 21st century. The 1996 Eleanor Clarke Slagle Lecture. American
Journal of Occupational Therapy, 15(1), 11-24.
Newman-Norlund, R., van Schie, H. T., van Hoek, M. E., Cuijpers, R. H., &
Bekkering, H. (2010). The role of inferior frontal and parietal areas in
differentiating meaningful and meaningless object-directed actions. Brain
Res, 1315, 63-74.
Ng, S. H., Han, S., Mao, L., & Lai, J. C. (2010). Dynamic bicultural brains: fMRI
study of their flexible neural representation of self and significant others in response to culture primes. Asian Journal of Social Psychology, 13, 83-91.
Nisbett, R. E., & Miyamoto, Y. (2005). The influence of culture: holistic versus
analytic perception. Trends Cogn Sci, 9(10), 467-473.
Nisbett, R. E., Peng, K., Choi, I., & Norenzayan, A. (2001). Culture and systems
of thought: holistic versus analytic cognition. Psychol Rev, 108(2), 291-
310.
Oberman, L. M., Ramachandran, V. S., & Pineda, J. A. (2008). Modulation of mu
suppression in children with autism spectrum disorders in response to
familiar or unfamiliar stimuli: the mirror neuron hypothesis.
Neuropsychologia, 46(5), 1558-1565.
Oetzel, J. G. (1998). Explaining individual communication processes in
homogeneous and heterogeneous groups through individualism-
collectivism and self-construal. Human Communication Research, 25(2),
202-224.
Offermann, L. R., & Hellmann, P. S. (1997). Culture's consequences for leadership
behavior: National values in action. Journal of Cross-Cultural Psychology,
28(3), 342-351.
Oldfield, R. C. (1971). The assessment and analysis of handedness: The
Edinburgh inventory. Neuropsychologia, 9(1), 97-113.
Oztop, E., & Arbib, M. A. (2002). Schema design and implementation of the
grasp-related mirror neuron system. Biol Cybern, 87(2), 116-140.
Oztop, E., Bradley, N. S., & Arbib, M. (2004). Infant grasp learning: A
computational model. Exp Brain Res, 158(4), 480-503.
Oztop, E., Imamizu, H., Cheng, G., & Kawato, M. (2006). A computational model
of anterior intraparietal (AIP) neurons. Neurocomputing, 69, 1354-1361.
Oztop, E., Wolpert, D., & Kawato, M. (2005). Mental state inference using visual
control parameters. Brain Res Cogn Brain Res, 22(2), 129-151.
Ozyurek, A., Willems, R. M., Kita, S., & Hagoort, P. (2007). On-line integration of
semantic information from speech and gesture: Insights from event-related
brain potentials. J Cogn Neurosci, 19(4), 605-616.
Page, S. J., Levine, P., & Leonard, A. (2007). Mental practice in chronic stroke:
Results of a randomized, placebo-controlled trial. Stroke, 38, 1293-1297.
Paladino, M.-P., Mazzurega, M., Pavani, F., & Schubert, T.W. (2010).
Synchronous multisensory stimulation blurs self-other boundaries.
Psychological Science, 21(9), 1202-1207.
Pandya, D. N., Van Hoesen, G. W., & Mesulam, M. M. (1981). Efferent
connections of the cingulate gyrus in the rhesus monkey. Exp Brain Res,
42(3-4), 319-330.
Parvizi, J., Van Hoesen, G. W., Buckwalter, J., & Damasio, A. (2006). Neural
connections of the posteromedial cortex in the macaque. Proc Natl Acad Sci
U S A, 103(5), 1563-1568.
Pascual-Leone, A., Dang, M., Cohen, L. G., Brasil-Neto, P., Cammarota, A., &
Hallett, M. (1995). Modulation of muscle responses evoked by transcranial
magnetic stimulation during the acquisition of new fine motor skills.
Journal of Neurophysiology, 74, 1037-1045.
Pavani, F. & Zampini, M. (2007). The role of hand size in the fake-hand illusion
paradigm. Perception, 36(10), 1547-1554.
Pazzaglia, M., Pizzamiglio, L., Pes, E., & Aglioti, S. M. (2008). The sound of
actions in apraxia. Curr Biol, 18(22), 1766-1772.
Pazzaglia, M., Smania, N., Corato, E., & Aglioti, S. M. (2008). Neural
underpinnings of gesture discrimination in patients with limb apraxia. J
Neurosci, 28(12), 3030-3041.
Penny, W. D., Holmes, A. P., & Friston, K. J. (2004). Random effects analysis.
Human brain function, 843-850.
Perkins, T., Stokes, M., McGillivray, J., & Bittar, R. (2010). Mirror neuron
dysfunction in autism spectrum disorders. J Clin Neurosci.
Perrett, D. I., Harries, M. H., Bevan, R., Thomas, S., Benson, P. J., Mistlin, A. J.
et al. (1989). Frameworks of analysis for the neural representation of
animate objects and actions. J Exp Biol, 146, 87-113.
Perrett, D. I., Mistlin, A. J., Harries, M. H., & Chitty, A. J. (1990). Understanding
the visual appearance and consequence of hand actions. In M. A. Goodale
(Ed.), Vision and Action: The Control of Grasping (pp. 163-342). Norwood,
NJ: Ablex.
Petkova, V.I. & Ehrsson, H.H. (2008). If I were you: Perceptual illusion of body
swapping. PLoS ONE 3(12), e3832.
Phelps, E. A., & Thomas, L. A. (2003). Race, behavior, and the brain: The role of
neuroimaging in understanding complex social behaviors. Political
Psychology, 24(4), 747-758.
Phinney, J. (1992). The multigroup ethnic identity measure: A new scale for use
with adolescents and young adults from diverse groups. Journal of
Adolescent Research 7, 156-176.
Pitcher, D., Garrido, L., Walsh, V., & Duchaine, B.C. (2008). Transcranial
magnetic stimulation disrupts the perception and embodiment of facial
expressions. Journal of Neuroscience, 28(36), 8929-8933.
Polatajko, H.J., Mandich, A.D., Miller, L.T., & Macnab, J.J. (2001a). Cognitive
orientation to daily occupational performance (CO-OP): The evidence.
Physical & Occupational Therapy in Pediatrics, 20, 83-106.
Poizner, H., Klima, E. S., & Bellugi, U. (1987). What the Hands Reveal about the
Brain. Cambridge, MA: MIT Press.
Power, D., Schoenherr, T., & Samson, D. (2010). The cultural characteristic of
individualism/collectivism: A comparative study of implications for
investment in operations between emerging Asian and industrialized
Western countries. Journal of Operations Management, 28, 206-222.
Preuss, T. M., Stepniewska, I., & Kaas, J. H. (1996). Movement representation in
the dorsal and ventral premotor areas of owl monkeys: A microstimulation
study. J Comp Neurol, 371(4), 649-676.
Pulvermuller, F. (2005). Brain mechanisms linking language and action. Nat Rev
Neurosci, 6(7), 576-582.
Pulvermuller, F., & Hauk, O. (2006). Category-specific conceptual processing of
color and form in left fronto-temporal cortex. Cerebral Cortex, 16(8), 1193.
Pulvermuller, F., Hauk, O., Nikulin, V. V., & Ilmoniemi, R. J. (2005). Functional
links between motor and language systems. European Journal of
Neuroscience, 21(3), 793-797.
Pulvermuller, F., Shtyrov, Y., & Ilmoniemi, R. (2005). Brain signatures of meaning
access in action word recognition. J Cogn Neurosci, 17(6), 884-892.
Raichle, M. E., MacLeod, A. M., Snyder, A. Z., Powers, W. J., Gusnard, D. A., &
Shulman, G. L. (2001). A default mode of brain function. Proc Natl Acad Sci
U S A, 98(2), 676-682.
Raichle, M. E., & Snyder, A. Z. (2007). A default mode of brain function: a brief
history of an evolving idea. Neuroimage, 37(4), 1083-90; discussion 1097-9.
Reilly, M. (1962). Occupational therapy can be one of the great ideas of 20th
century medicine. The 1961 Eleanor Clarke Slagle Lecture. American
Journal of Occupational Therapy, 16(1), 87-105.
Riley, J.D., Le, V., Der-Yeghiaian, L., See, J., Newton, J.M., et al. (2011).
Anatomy of stroke injury predicts gains from therapy. Stroke, 42, 421-426.
Rizzolatti, G., & Arbib, M. A. (1998). Language within our grasp. Trends
Neurosci, 21(5), 188-194.
Rizzolatti, G., & Craighero, L. (2004). The mirror-neuron system. Annu Rev
Neurosci, 27, 169-192.
Rizzolatti, G., Fadiga, L., Gallese, V., & Fogassi, L. (1996a). Premotor cortex and
the recognition of motor actions. Cognitive brain research, 3(2), 131-141.
Rizzolatti, G., Fadiga, L., Matelli, M., Bettinardi, V., Paulesu, E., Perani, D. et al.
(1996b). Localization of grasp representations in humans by PET: 1.
Observation versus execution. Exp Brain Res, 111(2), 246-252.
Rizzolatti, G., Fogassi, L., & Gallese, V. (1997). Parietal cortex: From sight to
action. Curr Opin Neurobiol, 7(4), 562-567.
Roberts, R., Phinney, J., Masse, L., Chen, Y., Roberts, C., & Romero, A. (1999).
The structure of ethnic identity in young adolescents from diverse
ethnocultural groups. Journal of Early Adolescence 19, 301-322.
Roth, M. (1949). Disorders of the body image caused by lesions of the right
parietal lobe. Brain, 72, 89-111.
Rothi, L. J., & Heilman, K. M. (1984). Acquisition and retention of gestures by
apraxic patients. Brain Cogn, 3(4), 426-437.
Rothi, L. J., Heilman, K. M., & Watson, R. T. (1985). Pantomime comprehension
and ideomotor apraxia. J Neurol Neurosurg Psychiatry, 48(3), 207-210.
Sanchez-Burks, J., Lee, F., Choi, I., Nisbett, R., Zhao, S., & Koo, J. (2003).
Conversing across cultures: East-West communication styles in work and
nonwork contexts. J Pers Soc Psychol, 85(2), 363-372.
Sangster, C. A., Beninger, C., Polatajko, H. J., & Mandich, A. (2005). Cognitive
strategy generation and children with developmental coordination
disorder: The impact of a cognitive-oriented treatment approach.
Canadian Journal of Occupational Therapy, 72, 67-77.
Sawaki, L., Yaseen, Z., Kopylev, L., & Cohen, L. G. (2003). Age-dependent
changes in the ability to encode a novel elementary motor memory.
Annals of Neurology, 53, 521-524.
Saxe, R. (2006). Uniquely human social cognition. Curr Opin Neurobiol, 16(2),
235-239.
Saxe, R., & Kanwisher, N. (2003). People thinking about thinking people. The
role of the temporo-parietal junction in "theory of mind". Neuroimage, 19(4),
1835-1842.
Saxe, R., & Powell, L. J. (2006). It's the thought that counts: Specific brain
regions for one component of theory of mind. Psychol Sci, 17(8), 692-699.
Saxe, R., & Wexler, A. (2005). Making sense of another mind: The role of the
right temporo-parietal junction. Neuropsychologia, 43(10), 1391-1399.
Schaal, S., Ijspeert, A., & Billard, A. (2003). Computational approaches to motor
learning by imitation. Philosophical Transactions of the Royal Society, B:
Biological Sciences, 358, 537-547.
Schutz-Bosbach, S., Tausche, P., & Weiss, C. (2009) Roughness perception
during the rubber hand illusion. Brain and Cognition, 70(1), 136-44.
Schippers, M. B., Gazzola, V., Goebel, R., & Keysers, C. (2009). Playing
charades in the fMRI: Are mirror and/or mentalizing areas involved in
gestural communication? PLoS One, 4(8), e6801.
Schippers, M. B., Roebroeck, A., Renken, R., Nanetti, L., & Keysers, C. (2010).
Mapping the information flow from one brain to another during gestural
communication. Proc Natl Acad Sci U S A, 107(20), 9388-9393.
Schnell, K., Bluschke, S., Konradt, B., & Walter, H. (2010). Functional relations of
empathy and mentalizing: An fMRI study on the neural basis of cognitive
empathy. Neuroimage.
Schuch, S., Bayliss, A. P., Klein, C., & Tipper, S. P. (2010). Attention modulates
motor system activation during action observation: evidence for inhibitory
rebound. Exp Brain Res, 205(2), 235-249.
Schulte-Rüther, M., Markowitsch, H. J., Fink, G. R., & Piefke, M. (2007). Mirror
neuron and theory of mind mechanisms involved in face-to-face
interactions: A functional magnetic resonance imaging approach to
empathy. Journal of Cognitive Neuroscience, 19(8), 1354-1372.
Seltzer, B., & Pandya, D. N. (1978). Afferent cortical connections and
architectonics of the superior temporal sulcus and surrounding cortex in the
rhesus monkey. Brain Res, 149(1), 1-24.
Seltzer, B., & Pandya, D. N. (1980). Converging visual and somatic sensory
cortical input to the intraparietal sulcus of the rhesus monkey. Brain Res,
192(2), 339-351.
Seltzer, B., & Pandya, D. N. (1989a). Intrinsic connections and architectonics of
the superior temporal sulcus in the rhesus monkey. J Comp Neurol, 290(4),
451-471.
Seltzer, B., & Pandya, D. N. (1989b). Frontal lobe connections of the superior
temporal sulcus in the rhesus monkey. J Comp Neurol, 281(1), 97-113.
Seltzer, B., & Pandya, D. N. (1994). Parietal, temporal, and occipital projections
to cortex of the superior temporal sulcus in the rhesus monkey: a retrograde
tracer study. J Comp Neurol, 343(3), 445-463.
Seltzer, B., & Pandya, D. N. (2009). Posterior cingulate and retrosplenial cortex
connections of the caudal superior temporal region in the rhesus monkey.
Exp Brain Res, 195(2), 325-334.
Serino, A., Giovagnoli, G., & Ladavas, E. (2009). I feel what you feel if you are
similar to me. PLoS One, 4(3), e4930.
Sforza, A., Bufalari, I., Haggard, P., & Aglioti, S.M. (2010). My face in yours:
Visuo-tactile facial stimulation influences sense of identity. Social
Neuroscience, 5(2), 148-162.
Shamay-Tsoory, S. G., Aharon-Peretz, J., & Perry, D. (2008). Two systems for
empathy: A double dissociation between emotional and cognitive empathy
in inferior frontal gyrus versus ventromedial prefrontal lesions. Brain.
Singelis, T. M., & Sharkey, W. F. (1995). Culture, self-construal, and
embarrassability. Journal of Cross-Cultural Psychology, 26(6), 622-644.
Skipper, J. I., Goldin-Meadow, S., Nusbaum, H. C., & Small, S. L. (2009).
Gestures orchestrate brain networks for language understanding. Curr Biol,
19(8), 661-667.
Slater, M., Spanlang, B., Sanchez-Vives, M.V., & Blanke, O. (2010). First person
experience of body transfer in virtual reality. PLoS ONE, 5(5), e10564.
Smith, S. M. (2002). Fast robust automated brain extraction. Hum Brain Mapp,
17(3), 143-155.
Spilka, M. J., Steele, C. J., & Penhune, V. B. (2010). Gesture imitation in
musicians and non-musicians. Exp Brain Res, 204(4), 549-558.
Spunt, R. P., Satpute, A. B., & Lieberman, M. D. (2010). Identifying the what,
why, and how of an observed action: An fMRI study of mentalizing and
mechanizing during action observation. J Cogn Neurosci.
Stansbury, L. G., Lalliss, S. J., Branstetter, J. G., Bagg, M. R., & Holcomb, J. B.
(2008). Amputations in U.S. military personnel in the current conflicts in
Afghanistan and Iraq. J Orthop Trauma, 22(1), 43-46.
Stefan, K., Wycislo, M., & Classen, J. (2004). Modulation of associative human
motor cortical plasticity by attention. J Neurophysiol, 92(1), 66-72.
Stepniewska, I., Preuss, T. M., & Kaas, J. H. (2006). Ipsilateral cortical
connections of dorsal and ventral premotor areas in New World owl
monkeys. J Comp Neurol, 495(6), 691-708.
Stevens, W. D., Hasher, L., Chiew, K. S., & Grady, C. L. (2008). A neural
mechanism underlying memory failure in older adults. J Neurosci, 28(48),
12820-12824.
Straube, B., Green, A., Jansen, A., Chatterjee, A., & Kircher, T. (2010). Social
cues, mentalizing and the neural processing of speech accompanied by
gestures. Neuropsychologia, 48(2), 382-393.
Straube, B., Green, A., Weis, S., Chatterjee, A., & Kircher, T. (2009). Memory
effects of speech and gesture binding: Cortical and hippocampal activation
in relation to subsequent memory performance. J Cogn Neurosci, 21(4),
821-836.
Stefan, K., Classen, J., Celnik, P., & Cohen, L. G. (2008). Concurrent action
observation modulates practice-induced motor memory formation.
European Journal of Neuroscience, 27, 730-738.
Stefan, K., Cohen, L. G., Duque, J., Mazzocchio, R., Celnik, P., Sawaki,
L.,...Classen, J. (2005). Formation of a motor memory by action
observation. Journal of Neuroscience, 25, 9339-9346.
Sui, J., & Han, S. (2007). Self-construal priming modulates neural substrates of
self-awareness. Psychol Sci, 18(10), 861-866.
Sui, J., Liu, C. H., & Han, S. (2009). Cultural difference in neural mechanisms of
self-recognition. Soc Neurosci, 4(5), 402-411.
Tettamanti, M., Buccino, G., Saccuman, M. C., Gallese, V., Danna, M., Scifo, P.
et al. (2005). Listening to action-related sentences activates fronto-parietal
motor circuits. Journal of Cognitive Neuroscience, 17(2), 273-281.
Teufel, C., Fletcher, P. C., & Davis, G. (2010). Seeing other minds: Attributed
mental states influence perception. Trends Cogn Sci, 14(8), 376-382.
Thesen, S., Heid, O., Mueller, E., & Schad, L. R. (2000). Prospective acquisition
correction for head motion with image-based tracking for real-time fMRI.
Magn Reson Med, 44(3), 457-465.
Thioux, M., Gazzola, V., & Keysers, C. (2008). Action understanding: How, what
and why. Current Biology, 18(10), 431-434.
Triandis, H. C., & Gelfand, J. (1998). Converging measurement of horizontal and
vertical individualism and collectivism. J Pers Soc Psychol, 74(1), 118-
128.
Tsakiris, M. (2008) Looking for myself: Current multisensory input alters self-face
recognition. PLoS One, 3(12), e4040.
Tsakiris, M., Costantini, M., & Haggard, P. (2008). The role of the right temporo-
parietal junction in maintaining a coherent sense of one's body.
Neuropsychologia 46(12), 3014-18.
Uddin, L. Q., Iacoboni, M., Lange, C., & Keenan, J. P. (2007). The self and social
cognition: The role of cortical midline structures and mirror neurons. Trends
Cogn Sci, 11(4), 153-157.
Uddin, L. Q., Kaplan, J. T., Molnar-Szakacs, I., Zaidel, E., & Iacoboni, M. (2005).
Self-face recognition activates a frontoparietal "mirror" network in the right
hemisphere: an event-related fMRI study. Neuroimage, 25(3), 926-935.
Uddin, L. Q., Molnar-Szakacs, I., Zaidel, E., & Iacoboni, M. (2006). rTMS to the
right inferior parietal lobule disrupts self-other discrimination. Social
cognitive and affective neuroscience, 1(1), 65.
Umilta, M. A., Escola, L., Intskirveli, I., Grammont, F., Rochat, M., Caruana, F. et
al. (2008). When pliers become fingers in the monkey motor system. Proc
Natl Acad Sci U S A, 105(6), 2209-2213.
Umilta, M. A., Kohler, E., Gallese, V., Fogassi, L., Fadiga, L., Keysers, C. et al.
(2001). I know what you are doing: A neurophysiological study. Neuron,
31(1), 155-165.
Van Overwalle, F. (2009). Social cognition and the brain: A meta-analysis. Hum
Brain Mapp, 30(3), 829-858.
Van Overwalle, F., & Baetens, K. (2009). Understanding others' actions and
goals by mirror and mentalizing systems: A meta-analysis. Neuroimage,
48(3), 564-584.
Villarreal, M., Fridman, E. A., Amengual, A., Falasco, G., Gerscovich, E. R.,
Ulloa, E. R. et al. (2008). The neural substrate of gesture recognition.
Neuropsychologia, 46(9), 2371-2382.
Vogt, S., Buccino, G., Wohlschläger, A. M., Canessa, N., Shah, N. J., Zilles, K. et
al. (2007). Prefrontal involvement in imitation learning of hand actions:
Effects of practice and expertise. Neuroimage, 37(4), 1371-1383.
Wager, T. D., & Nichols, T. E. (2003). Optimization of experimental design in
fMRI: a general framework using a genetic algorithm. Neuroimage, 18(2),
293-309.
Walter, H., Adenzato, M., Ciaramidaro, A., Enrici, I., Pia, L., & Bara, B. G. (2004).
Understanding intentions in social interaction: The role of the anterior
paracingulate cortex. J Cogn Neurosci, 16(10), 1854-1863.
Wang, R., Benner, T., Sorensen, A.G., & Wedeen, V.J. (2007) Diffusion toolkit: A
software package for diffusion imaging data processing and tractography.
In: Proc Int Soc Magn Reson Med, 3720.49-51.
Ward, N. S. (2006). The neural substrates of motor recovery after focal damage
to the central nervous system. Archives of Physical Medicine &
Rehabilitation, 87, S30-35.
Ward, A., & Rodger, S. (2004). The application of cognitive orientation to daily
occupational performance (CO-OP) with children 5-7 years with
developmental coordination disorder. British Journal of Occupational
Therapy, 67, 256-264.
Wedeen, V.J., Wang, R.P., Schmahmann, J.D., Benner, T., Tseng, W.Y., et al.
(2008). Diffusion spectrum magnetic resonance imaging (DSI)
tractography of crossing fibers. Neuroimage, 41, 1267-1277.
Weisz, J. R., Rothbaum, F. M., & Blackburn, T. C. (1984). Standing out and
standing in: The psychology of control in America and Japan. American
Psychologist, 39(9), 955-969.
Werner, J., Aziz-Zadeh, L., & Cermak, S. (2011, June). Neural correlates of
imitation in DCD. Poster session and brief presentation presented at the
Developmental Coordination Disorder International Conference IX,
Lausanne, Switzerland.
Willems, R. M., & Hagoort, P. (2007). Neural evidence for the interplay between
language, gesture, and action: A review. Brain Lang, 101(3), 278-289.
Willems, R. M., Hagoort, P., & Casasanto, D. (2010). Body-specific
representations of action verbs: Neural evidence from right- and left-
handers. Psychol Sci, 21(1), 67-74.
Willems, R. M., Ozyurek, A., & Hagoort, P. (2007). When language meets action:
The neural integration of gesture and speech. Cereb Cortex, 17(10), 2322-
2333.
Willems, R. M., Toni, I., Hagoort, P., & Casasanto, D. (2009). Body-specific
motor imagery of hand actions: Neural evidence from right- and left-
handers. Front Hum Neurosci, 3, 39.
Wilson, P.H., Maruff, P., Butson, M., Williams, J., Lum, J., & Thomas, P.R.
(2004). Internal representation of movement in children with
developmental coordination disorder: a mental rotation task.
Developmental Medicine & Child Neurology, 46, 754-759.
Wilson, P.H., Maruff, P., Ives, S., & Currie, J. (2001). Abnormalities of motor and
praxis imagery in children with DCD. Human Movement Science, 20, 135-
159.
Wolpert, D. M., Goodbody, S. J., & Husain, M. (1998). Maintaining internal
representations: The role of the human superior parietal lobe. Nature
neuroscience, 1, 529-533.
Woolrich, M. (2008). Robust group analysis using outlier inference. Neuroimage,
41(2), 286-301.
Woolrich, M. W., Behrens, T. E., Beckmann, C. F., Jenkinson, M., & Smith, S. M.
(2004). Multilevel linear modelling for FMRI group analysis using Bayesian
inference. Neuroimage, 21(4), 1732-1747.
Woolrich, M. W., Ripley, B. D., Brady, M., & Smith, S. M. (2001). Temporal
autocorrelation in univariate linear modeling of FMRI data. Neuroimage,
14(6), 1370-1386.
Xu, J., Gannon, P. J., Emmorey, K., Smith, J. F., & Braun, A. R. (2009a).
Symbolic gestures and spoken language are processed by a common
neural system. Proc Natl Acad Sci U S A, 106(49), 20664-20669.
Xu, X., Zuo, X., Wang, X., & Han, S. (2009b). Do you feel my pain? Racial group
membership modulates empathic neural responses. Journal of
Neuroscience.
Yum, J. (1988). The impact of Confucianism on interpersonal relationships and
communication patterns in East Asia. Communication Monographs, 55,
374-388.
Zaki, J., Hennigan, K., Weber, J., & Ochsner, K. N. (2010). Social cognitive
conflict resolution: contributions of domain-general and domain-specific
neural systems. J Neurosci, 30(25), 8481-8488.
Zaki, J., Weber, J., Bolger, N., & Ochsner, K. (2009). The neural bases of
empathic accuracy. Proceedings of the National Academy of Sciences.
Zhu, Y., Zhang, L., Fan, J., & Han, S. (2007). Neural basis of cultural influence
on self-representation. Neuroimage, 34(3), 1310-1316.
Zwicker, J.G., Missiuna, C., Harris, S.R., & Boyd, L.A. (2011). Brain activation
associated with motor skill practice in children with developmental
coordination disorder: An fMRI study. International Journal of
Developmental Neuroscience, 29, 145-152.
APPENDIX A. Brief list of regions and functions
This is a brief list of regions to provide a quick reference for some of the areas
referred to in this dissertation. While it is far from comprehensive, it is meant to
highlight a few salient properties of each region.
• Posterior superior temporal sulcus (pSTS): recognition of biological
motion, active also during self-generation of motion, connects with IPL
among other regions, but not with the premotor cortex directly
• Extrastriate body area (EBA): visual region in the pSTS/pMTG,
more active for body movements than other movements, including
face movements
• Inferior temporal lobule (IT): visual information on object identity
• Caudal inferior parietal sulcus (cIPS): recognition of object features
which are then passed to AIP for recognition of object affordances
• Parietal cortex: production and perception of one’s own body movements
in space, as well as perception of others’ bodies; extracts affordances,
hand configurations, and spatial positions; portions of the parietal cortex connect
with occipital, temporal, and frontal regions, as well as subcortical
structures
• Anterior inferior parietal lobule (AIP): receives input from IT,
cIPS; extracts affordances for interacting with an object and
encodes object properties, has strong reciprocal connections with
F5
• Parietal area 7a: analysis of spatial parameters between hand and
object (from STS and cIPS)
• Parietal area 7b: associations between hand state and object
affordances (from STS and AIP); contains mirror neurons
• Superior parietal lobule: affordances for reaching, monitoring of
self-generated actions (efference copy); often implicated in
abnormal body representations such as Body Integrity Identity
Disorder (BIID)
• Premotor cortex: motor planning for movements, active during
perception of others’ movements as well as during imitation, context-
dependent processing related to the end motor goal; outputs movement parameters to achieve desired actions, as calculated by the parietal cortex
• F5 canonical neurons: motor programs for executed actions such
as grasping, finger coordination
• F5 mirror neurons: recognition of motor programs for grasping;
goal-driven; fire for sounds of actions as well as observation of hidden actions; the region is considered homologous with Broca's area, which is
involved in speech production and active during communicative
gestures
• Dorsal pars opercularis (PO): more active for imitation than for observation
• Ventral PO: active during imitation but not during observation; may be involved in forward modeling of actions with posterior parietal and cerebellar regions
• F2 dorsal premotor (PMd): motor planning for complex
movements, incorporates context, instructions; receives
somatosensory and visual inputs and outputs to primary motor
cortex, may participate in planning and selecting actions to output
• F4 ventral premotor (PMv): motor planning with wrist movements,
specifies effectors and postures during motor planning and
transforms object locations into motor plans, receives input from
posterior parietal cortex and sends descending projections to brain
stem and spinal cord; space here is encoded in body-centric
coordinates
• Primary motor cortex (F1/M1): motor execution
• Basal ganglia (BG): sequence organization for actions, incorporates the
desirability of the action (from the hypothalamus, which provides need
states), and the executability of the action (from the motor patterns from
the F5 canonical and mirror neurons) to select action order; provides
feedback to the AIP regarding which affordances to use
• Cerebellum: motor coordination, active both during executed and
observed actions
• Primary somatosensory cortex (S1): receives somatosensory input across four subregions (BA 3a, 3b, 1, 2): BA 3a encodes proprioceptive input; BA 3b encodes tactile input; BA 1 receives input from BA 3b (a second level of tactile processing); BA 2 receives inputs from BA 3a, 3b, and 1 (possibly a third level of somatosensory processing)
• Supplementary motor area (SMA): motor planning, particularly planning
of internally generated action sequences, sequence retrieval
• F6 (pre-SMA): movement timing, sequence organization, acquiring new
sequences
• Secondary somatosensory cortex (SII): vicariously activated for the
sight of others being touched, encodes quality/intensity of touch rather
than somatotopic location
• Prefrontal cortex (PFC): decision making, context processing, higher-level processing and regulation of choices; influences the functionality of the premotor/parietal cortices based on context, “operational space” (extrinsic, kinematic-based signals), and task (Oztop et al., 2005)
• Medial prefrontal cortex (mPFC): representation of self and others (ventral and dorsal portions, respectively), active during mental state attribution; part of the default mode/resting state network; moral judgments are also attributed to this region
• Dorsolateral prefrontal cortex, area 46 (dlPFC): working
memory, could contain recently used grasps and affordances;
active during imitation learning of non-practiced actions
• Insular cortex: emotional and visceral processing (affective and somatic
pain, disgust, happiness)
• Anterior cingulate cortex (ACC): conflict detection, pain sensation in self
and others
• Amygdala: fear, learning, memory, disgust processing
• Hippocampus: memory consolidation, learning
• Posterior cingulate cortex (PCC): autobiographical memory; default
mode network; mind wandering
• Right temporoparietal junction (rTPJ): effortful perspective taking,
active during false belief tasks, general mentalizing
• Left temporoparietal junction (lTPJ): effortful perspective taking,
directing attention, communicative interactions
• Anterior temporal poles: semantic social information; active during
naming of objects and actions, activated for animation-based mentalizing
tasks
APPENDIX B. Cultural Experiences Modulate Social
Perception
ABSTRACT
Human adults typically respond faster to their own face than to the faces of
others. However, in Chinese participants, this self-face advantage is lost in the
presence of one’s supervisor, and they respond faster to their supervisor’s face
than to their own. While this “boss effect” suggests a strong modulation of self-
processing in the presence of influential social superiors, the current study
examined whether this effect was true across cultures. Given the wealth of
literature on cultural differences between collectivist, interdependent versus
individualistic, independent self-construals, we hypothesized that the boss effect
might be weaker in independent than interdependent cultures. Twenty European
American graduate students were asked to identify orientations of their own face or
their supervisors’ face. We found that European Americans, unlike Chinese
participants, did not show a “boss effect” and maintained the self-face advantage
even in the presence of their supervisor’s face. Interestingly, however, their self-
face advantage decreased as their ratings of their boss’s perceived social status
increased, suggesting that self-processing in Americans is influenced more by
one’s social status than by one’s hierarchical position as a social superior. In
addition, when their boss’s face was presented with a labmate’s face, American
participants responded faster to the boss’s face, indicating that the boss may
represent general social dominance rather than a direct negative threat to
oneself, in more independent cultures. Altogether, these results demonstrate a
strong cultural modulation of self-processing in social contexts and suggest that
the very concept of social positions, such as a boss, may hold markedly different
meanings to the self across Western and East Asian cultures.
INTRODUCTION
“At home, a young man should be dutiful towards his parents; going outside, he
should be respectful towards his elders.”
-Confucius (Chinese philosopher, 551 – 479 BC)
“Your real boss is the one who walks under your hat.”
-Napoleon Hill (American author, 1883-1970)
Cultural differences play a key role, not only in how people understand
themselves, but also in how they relate to others. This is exemplified in the above
quotations, with the former Chinese quote emphasizing the importance of
respecting one’s elders both at home and in public while the latter American one
affirms one’s independence and autonomy above all else. Several decades of
both behavioral and neuroimaging research suggest that self-concept is largely
determined by one’s culture, with notable differences between East Asian and
Western cultures (Markus & Kitayama, 1991; Fiske, Kitayama, Markus, & Nisbett,
1998; Markus & Kitayama, 1998; Oetzel, 1998; Han & Northoff, 2009; Chiao et
al., 2009). In particular, people from Western countries tend to be more
individualistic and have what is known as an independent self-construal (Markus
& Kitayama, 1991). In these cases, the self is thought of as an isolated unit that
strives to be unique, autonomous, and assertive, functioning in parallel with, but
not dependent upon, others. In contrast, those from more collectivist cultures,
such as East Asians, tend to demonstrate an interdependent self-construal, in
which the self is conceptualized in terms of its relationship to others, which blurs
the distinction between self and other and allows the self to be easily modulated
by dynamic social contexts, such as the presence of one’s supervisor (Markus &
Kitayama, 1991).
There are a significant number of findings that attribute differences in both
cognitive processes and affective states to these noted cultural differences in
self-construals (Markus & Kitayama, 1991; Singelis & Sharkey, 1995; Fiske et al.,
1998; Markus & Kitayama, 1998; Oetzel, 1998; Zhu, Zhang, Fan, & Han, 2007;
Han & Northoff, 2009; Chiao et al., 2009). For instance, individuals with
independent self-construals tend to be more assertive and use competitive
conflict tactics in group work settings, while individuals with interdependent self-
construals are more likely to shy away from conflict and use cooperative tactics
(Oetzel, 1998). In addition, the interdependent self-construal was positively
correlated with ease of embarrassment while the independent self-construal was
negatively correlated; Asian Americans were more easily embarrassed than
European Americans (Singelis & Sharkey, 1995). Neuroimaging studies have
also shown that while Americans activate neural regions associated with self-
processing (e.g., the medial prefrontal cortex) only when thinking about oneself,
Chinese participants activate these self-processing regions both when thinking
about oneself and one’s close family members, like one’s mother (Zhu et al.,
2007). Similarly, an EEG study showed that images of one’s own face, compared
to familiar faces, elicited greater fronto-central activity, related to self-processing,
in British participants but less fronto-central activity in Chinese (Sui, Liu, & Han,
2009), demonstrating the blurred distinction between self and other in Chinese
individuals. In addition, when looking across cultures, neural activity in the mPFC
was predictive of how individualistic or collectivist participants were (Chiao et al.,
2009). These results suggest that interdependent individuals are much more
affected by social contexts than independent individuals, and that interdependent
self-construals encompass other individuals while independent self-construals
largely include only the self.
Cultural differences in self-construals also affect relationships in work
environments, where individuals must navigate complex social hierarchies. In line
with the idea of independent self-construals, Americans are generally
encouraged to be socially dominant, competitive, and assertive, while East
Asians tend to value subordinance, cooperation, and harmony (Weisz,
Rothbaum, & Blackburn, 1984; Moskowitz, Suh, & Desaulniers, 1994; Triandis &
Gelfand, 1998). Weisz et al. (1984) elaborate on these differences as desiring
primary control (e.g., social dominance, as found typically in Americans) versus
secondary control (e.g., social subordination, as found typically in East Asians),
and note that these cultural differences affect a myriad of social activities
including work, child-rearing, and religious involvement. A recent neuroimaging
study provided support for these findings by demonstrating that Americans show
neural activity in reward-related brain regions in response to signals of
dominance, while Japanese participants show neural activity in these same
reward-related brain regions in response to signals of subordination (Freeman,
Rule, Adams, & Ambady, 2009). In addition, self-construal appears to play a role
in mediating social interactions. One study found that the higher self-esteem an
individual has, the more strongly he or she demonstrates positive self-protective
behaviors in response to negative feedback from others (Brockner & Chen,
1996). However, this was true only in American participants and in Chinese
participants who demonstrated a more independent self-construal. Chinese
participants who were more interdependent did not demonstrate self-protective
behaviors in relation to negative feedback, suggesting that one’s self-construal
affects how one interprets and reacts to social threats.
The self-construal has been studied in a number of ways, with a wealth of
literature suggesting that one’s own face is even processed differently from faces
of others (Keenan et al., 1999; Keenan, Freund, Hamilton, Ganis, & Pascual-Leone, 2000; Keenan, Nelson, O'Connor, & Pascual-Leone, 2001; Uddin,
Kaplan, Molnar-Szakacs, Zaidel, & Iacoboni, 2005; Ma & Han, 2010). Behavioral
studies show faster reaction times (RTs) to one’s own face compared to
another’s face during either explicit face–recognition tasks requiring judgments of
face identity (Keenan et al., 1999; Keenan et al., 2000) or implicit face
recognition tasks requiring determination of whether a face is oriented to the right
or left (Ma & Han, 2010). Notably, these effects are most significant on left-hand
responses, leading researchers to suggest that this is reflective of self-
processing, which is thought to occur in the right hemisphere (Keenan et al.,
1999; Keenan et al., 2000; Ma & Han, 2009; Ma & Han, 2010). Ma and Han
(2010) suggest that this self-face RT advantage may be due to implicit positive
associations with the self. In a series of 4 experiments, they demonstrated that
self-concept threat priming (i.e., deciding whether negative trait words describe
oneself) weakens one’s implicit positive associations with oneself and decreases
the self-face RT advantage, an effect that is seen in left-hand (but not right-hand)
responses (Ma & Han, 2010). These interesting results suggest that threats to
one’s self-concept, which weaken one’s implicit positive association with oneself,
may reduce any advantages in self-referential processing.
A recently discovered phenomenon, known as the “boss effect,” supports these
findings. Ma and Han (2009) found that Chinese graduate students demonstrated
a typical self-face advantage (i.e., faster RT to one’s own face than another’s
face) when their faces were presented in a block with a familiar faculty member’s
face. However, when their faces were presented with their boss’s face,
participants lost the self-face advantage and demonstrated significantly faster
RTs to their boss’s face, which the authors termed the “boss effect” (Ma & Han,
2009). Notably, participants did not demonstrate significant RT differences when
shown blocks with faces of their boss and a labmate, suggesting that the boss
effect was specific to the social threat incurred when one’s own face was paired
with the presence of one’s boss.
The current study assessed whether the “boss effect” on self-face recognition is
culturally universal or specific to cultures dominated by interdependent self-
construals. We hypothesized that this boss effect might be modulated by cultural
influences on participants’ self-construals. In support of this, Ma and Han (2010)
found that while similar effects of self-concept threat are observed in both
Chinese and American participants, they occurred to a much lesser extent in
Americans as compared to Chinese. In addition, Sui et al. (2009) found that
event related potentials recorded from British participants showed larger
amplitudes to self-face than to a friend’s face whereas a reverse pattern was
observed in Chinese participants, suggesting greater social salience of self-face
in people with independent-self construals compared to those with
interdependent self-construals. Thus, here we surmised that more independent
selves would be less affected by social contexts and hierarchies, and therefore
less susceptible to any social threat induced by seeing one’s boss. We
anticipated that American participants would demonstrate a self-face advantage
in all contexts, whether paired with their boss (high social threat) or another
faculty member (low social threat).
However, we anticipated that it would still be possible to see a faster RT to the boss’s face when the boss was paired with others (not including the self, such as a labmate), due to his or her general social dominance over the labmate. This
would demonstrate that the boss evokes a faster RT in general social situations,
despite not directly impacting the individual’s self-processing. The current study
replicated the study from Ma and Han (2009) with European American
participants to test these hypotheses. We report data from both the current study
and the previous study (Ma & Han, 2009) for cross-cultural comparisons.
MATERIALS AND METHODS
Participants
Twenty healthy European American graduate students in America (10 females /
10 males, mean age of all participants = 26.6, SD = 3.05) and twenty healthy
Chinese graduate students in China (10 females / 10 males, mean age of all
participants = 24.8, SD = 1.94) participated in this study. All were right-handed
and had normal or corrected-to-normal vision. In addition, all had worked with
their advisors for more than one year (13 – 60 months), and advisors were of the
same race as the student to avoid confounds due to the social influences of race.
Written informed consent was obtained from all participants before inclusion in
the study. This study was approved by the University of Southern California
Institutional Review Board and by the local ethics committee in Beijing, China
and was performed in accordance with the 1964 Declaration of Helsinki.
Questionnaire Measurement
Participants were given a modified Brief Fear of Negative Evaluation (Brief-FNE)
scale (Leary, 1983) to assess their fear of being negatively evaluated by both
their advisor and another faculty member who worked for the same department
but was not in the participant’s lab (e.g., I am afraid that Professor XXX will not
approve of me). Participants used a 5-point Likert scale (1 = not at all
characteristic, 5 = extremely characteristic) in response to each item, reporting
how well each statement described them with respect to 1) their advisor and 2) the
other faculty member. In addition, participants were asked to rate each
professor’s (advisor, other faculty member) social status, which was defined as
the individual’s ability to exert influence over other people and institutions, on an
11-point Likert scale (0 = not at all dominant, 10 = extremely dominant).
Stimuli and Procedure
Ten digital face images were taken from each participant, his/her faculty advisor,
another faculty member, and one of his/her labmates prior to the experiment.
Half of the faculty advisors and other faculty members were of the same gender
as the participant, and half were of a different gender from the participant.
Participants knew both the faculty advisor and faculty member for the same
length of time. In addition, an advisor for one participant might be used as the
other faculty member for another participant, so as to match perceptual features
of the stimuli.
Five of the images of each individual were oriented to the left (varied from 30° to
90°) and the other five were oriented to the right. Participants were instructed to
look directly ahead and maintain a neutral facial expression. Control stimuli were scrambled versions of the face images, created in Matlab by dividing each face image into a 10 x 10 array and randomly rearranging the pieces. These images
were presented with a gray bar on either the left or the right. For an example of
all stimuli and the experimental paradigm, see Figure 1. The participants in this
figure have given written informed consent (as outlined in the PLoS consent
form) to the publication of their photographs. All images were calibrated in
luminance and contrast and subtended a visual angle of 2.13° x 2.17° at a
viewing distance of 70 cm. Images were presented for 200 ms each at the center
of the screen, with a varying intertrial interval of 800 to 1200 ms during which a
fixation cross was presented. Participants were instructed to indicate whether
faces were oriented to the left or the right, or whether the gray bar of scrambled
images was on the left or the right, by pressing two keys using the index and
middle fingers. Task instructions emphasized both speed and accuracy.
Each block of trials contained 40 face images and 20 scrambled images. The
block design is illustrated in Figure 1. Self-face was presented in a high-threat
context (20 trials each of self-face, advisor’s face in each block) for two blocks
and in a low-threat context (20 trials each of self-face, other faculty member’s
face in each block) for two blocks. In addition, two blocks used 20 trials each of a
labmate’s face and the advisor’s face in order to discern whether the advisor’s
face generated increased processing speed when paired with non-self faces. For
each stimulus condition, participants responded with the left hand in one block
and the right hand in the other block. The order of responding hands and
conditions was counterbalanced across participants.
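To make the trial structure concrete, a minimal sketch of a single trial is given below in Python, using PsychoPy as an assumed, illustrative presentation library (the original software is not specified in the text). The window settings, stimulus file name, and response keys are hypothetical, and the timings follow the description above; a full implementation would also handle monitor calibration and responses made during the brief stimulus window.

```python
# Minimal sketch of one orientation-judgment trial; PsychoPy is an assumed choice.
import random
from psychopy import visual, core, event

win = visual.Window(size=(1024, 768), color="grey", units="deg", fullscr=False)
fixation = visual.TextStim(win, text="+", height=1.0)

def run_trial(image_path):
    """Show a jittered fixation, a 200 ms stimulus, then collect a left/right response."""
    fixation.draw()
    win.flip()
    core.wait(random.uniform(0.8, 1.2))      # intertrial fixation, 800-1200 ms

    stim = visual.ImageStim(win, image=image_path, size=(2.13, 2.17))  # degrees of visual angle
    stim.draw()
    win.flip()
    clock = core.Clock()                     # RT measured from stimulus onset
    core.wait(0.2)                           # 200 ms stimulus duration
    win.flip()                               # clear the screen

    keys = event.waitKeys(keyList=["left", "right"], timeStamped=clock)
    return keys[0]                           # (key, rt) for the first response

key, rt = run_trial("self_face_left_30deg.png")  # hypothetical stimulus file
print(key, rt)
core.quit()
```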
Figure 1. Examples of the stimuli, experimental paradigm, & block design.
Participants were shown images of themselves/their labmate, their boss/faculty
member, and scrambled images of faces for 200 ms, separated by a fixation
cross that lasted between 800-1200 ms (left diagram). Blocks consisted of the
following three stimuli sets (right diagram): self/boss/scrambled,
self/faculty/scrambled, labmate/boss/scrambled, and were performed with both
left and right hands, for a total of 6 blocks. Starting response hand and stimuli
sets were counterbalanced across participants.
RESULTS
Subjective Ratings
Both European American and Chinese participants’ subjective reports indicated
comparable perceived social status of their advisors and the other faculty
members (European Americans: 5.90 ± 2.29 vs. 6.0 ± 1.89, t(1,19) = -0.276, p =
0.79; Chinese: 8.30 ± 1.45 vs. 7.85 ± 1.57, t(1,19) = 1.690, p = 0.107). In
addition, the results of the Brief-FNE scale suggested that both European
American and Chinese participants were significantly more afraid of negative
evaluation from their advisors than from the other faculty members (European
Americans: 2.56 ± 0.44 vs. 2.24 ± 0.39, t(1,19) = 3.482, p = 0.0025; Chinese:
3.38 ± 0.73 vs. 2.41 ± 0.66, t(1,19) = 5.265, p < 0.001). However, a 2-factor
mixed-effects analysis of variance (ANOVA) with Culture (Chinese, American) x
Threat (Boss, Faculty Member) demonstrated an interaction effect between
Chinese and American participants’ reports of negative evaluation from their
boss versus their faculty member (F(1,19) = 9.536, p = 0.004; see Figure 2), with
Chinese participants reporting higher fear of negative evaluation from their
bosses than European American participants.
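For readers who wish to reproduce this type of analysis, a minimal sketch in Python follows. It assumes a hypothetical long-format table of Brief-FNE scores (columns: subject, culture, target, rating); the file name, column names, and the choice of the scipy and pingouin libraries are illustrative assumptions rather than the analysis software actually used in the study.

```python
# Minimal sketch of the subjective-ratings analysis, under assumed column names.
import pandas as pd
from scipy import stats
import pingouin as pg

fne = pd.read_csv("fne_ratings.csv")  # hypothetical file: subject, culture, target, rating

# Within-culture paired t-tests: fear of negative evaluation, boss vs. other faculty member
for culture, grp in fne.groupby("culture"):
    wide = grp.pivot(index="subject", columns="target", values="rating")
    t, p = stats.ttest_rel(wide["boss"], wide["faculty"])
    print(f"{culture}: t = {t:.3f}, p = {p:.4f}")

# 2-factor mixed ANOVA: Culture (between-subjects) x Threat/target (within-subjects)
aov = pg.mixed_anova(data=fne, dv="rating", within="target",
                     subject="subject", between="culture")
print(aov)
```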
Figure 2. Chinese and American ratings of fear of negative evaluation from
bosses versus faculty members. Participants’ ratings of fear of negative
evaluation from the Brief-Fear of Negative Evaluation (B-FNE) questionnaire are
presented for the boss (left; Americans in blue, Chinese in red) and for the other
faculty member (right; Americans in blue, Chinese in red).
RT results
Response accuracy was high for both European American and Chinese
participants in face orientation judgment tasks (European Americans: 97.42% ±
2.21%; Chinese: 94.96% ± 2.43%). Only RTs from correct responses within three standard deviations of the mean were analyzed. As used by two of the authors in a previous
study (Ma & Han, 2010), RTs were normalized by dividing RTs to self/other faces
by RTs to scrambled images to rule out the influence of differences in response selection and execution between different blocks. Response accuracies and normalized RTs were then subjected to repeated-measures ANOVAs with Hand
(left vs. right hand), Face (self vs. other faces), and Threat (high- vs. low-threat)
as independent within-subject variables. Results from Chinese participants have
been reported previously (Ma & Han, 2009). Thus, here we first report results
from European American participants, followed by cross-cultural comparisons
with results from Chinese participants.
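A minimal sketch of this normalization and the 2 x 2 x 2 repeated-measures ANOVA is given below, assuming a hypothetical trial-level table; the file name, column names, and the use of pandas and statsmodels are illustrative choices, not the authors’ actual analysis code.

```python
# Minimal sketch of RT normalization and the Hand x Face x Threat repeated-measures ANOVA,
# assuming a trial table with columns: subject, hand, threat, identity, rt
# (identity is "self", "other", or "scrambled"); correct trials within 3 SD are kept upstream.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

trials = pd.read_csv("rt_trials.csv")  # hypothetical file name

# Mean RT to scrambled images per subject and block (a block pairs one hand with one threat level)
scram = (trials[trials.identity == "scrambled"]
         .groupby(["subject", "hand", "threat"])["rt"].mean()
         .rename("scrambled_rt").reset_index())

# Mean RT to self/other faces per subject, hand, and threat condition
faces = (trials[trials.identity != "scrambled"]
         .groupby(["subject", "hand", "threat", "identity"])["rt"].mean()
         .rename("face_rt").reset_index())

# Normalize face RTs by the scrambled RTs from the same block
norm = faces.merge(scram, on=["subject", "hand", "threat"])
norm["norm_rt"] = norm["face_rt"] / norm["scrambled_rt"]

# Repeated-measures ANOVA with Hand, Face (identity), and Threat as within-subject factors
res = AnovaRM(norm, depvar="norm_rt", subject="subject",
              within=["hand", "identity", "threat"]).fit()
print(res)
```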
European American RT results
While none of the response accuracies showed significant effects (p > 0.05),
ANOVAs of normalized RTs showed a significant effect of Face (F(1,19) =
11.403, p = 0.003), with normalized RTs to one’s own face being faster than RTs
to other faces. There were no significant interaction effects, and the finding of a
Face x Threat interaction in Chinese subjects (F(1,19) = 58.469, p < 0.001) (Ma
& Han, 2009) was not found with European Americans (F(1,19) = 1.911, p = 0.182), suggesting a comparable self-face RT advantage in Americans whether the self-face was presented with the boss or with the other faculty member.
Normalized RTs to faces of labmates and advisors were also subjected to an
ANOVA with Hand (left vs. right hand) and Face (labmate vs. advisor) as
independent within-subject variables. While this analysis did not yield significant
results in Chinese participants (Ma & Han, 2009), it did yield a significant
interaction effect between Hand and Face in European American participants
(F(1,19) = 6.618, p = 0.018). A post-hoc analysis revealed that normalized RTs
were significantly faster for the advisor’s face on left-hand trials (0.88 ± 0.148 vs. 0.91 ± 0.162; t(1,19) = 1.78, p = 0.045) but not on right-hand trials (0.90 ± 0.174 vs. 0.91 ± 0.180; t(1,19) = -0.32, p = 0.38).
Correlation analysis
To assess whether subjective evaluation of social threat from others affected
these behavioral performances associated with self-face recognition, we
correlated mean ratings from the Brief-FNE scale related to advisors and the
differential RTs (normalized RTs to self-face minus normalized RTs to advisor’s
face). We did not find any significant correlations between either left, right, or
combined hand responses and these scores (ps > 0.05). We then assessed
whether subjective ratings of perceived social status correlated with differential
RTs (normalized RTs to self-face minus normalized RTs to advisor’s face). We
found a significant positive correlation between boss’s perceived social status
and left-hand responses (r = 0.475, p = 0.034), as shown in Figure 3. This effect
was not found for right-hand responses (r = 0.282, p = 0.228). Additionally, this
effect was not found when correlating the social status of the other faculty
member with differential RTs (normalized RTs to self-face minus normalized RTs
to other faculty member’s face) for either hand (ps > 0.05).
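A minimal sketch of this correlation analysis is shown below; it assumes a hypothetical per-subject summary table, with the file name and column names as illustrative placeholders.

```python
# Minimal sketch of the status-by-differential-RT correlation, under assumed column names.
import pandas as pd
from scipy import stats

# Hypothetical per-subject table: boss_status (0-10 rating of the boss's social status) and
# diff_rt_left (normalized RT to self-face minus normalized RT to boss's face, left hand only)
subj = pd.read_csv("subject_summary.csv")

r, p = stats.pearsonr(subj["boss_status"], subj["diff_rt_left"])
print(f"r = {r:.3f}, p = {p:.3f}")  # the study reports r = 0.475, p = 0.034 for left-hand responses
```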
Figure 3. Correlation between boss’s perceived social status and
normalized RT difference in European Americans (boss-self). Participants’
ratings of their boss’s social status (x-axis) correlate positively with normalized RT differences (self minus boss; y-axis), R² = 0.225, p = 0.034.
Cross-Cultural RT results
To assess differences between European American and Chinese participants, a
mixed-design ANOVA was conducted with Culture (European American vs.
Chinese) as a between-subjects factor, and Hand (left vs. right hand), Face (self
vs. other faces), and Threat (high- vs. low-threat) as independent within-subject
factors. The four factor ANOVA revealed a marginally significant interaction effect
of Culture x Face x Threat (F(1,19) = 3.616, p = 0.073), as the interaction of Face
x Threat was more salient in Chinese subjects (F(1,19) = 58.469, p < 0.001) than
in American subjects (F(1,19) =1.911, p = 0.182). There was also a significant
interaction effect of Culture x Face (F(1,19) = 12.409, p = 0.002), with faster
normalized RTs to one’s own face in European Americans (F(1,19) = 11.403, p =
0.003) than in Chinese participants (F(1,19) = 0.712; p = 0.409).
Given prior findings suggesting that the self-face advantage has a more
significant effect on left-hand responses (Keenan et al., 1999; Keenan et al.,
2000; Ma & Han, 2009; Ma & Han, 2010), we then analyzed data from left-hand
responses. Using left-hand responses only, we found a significant interaction
between Culture x Face x Threat (F(1,19) = 7.003, p = 0.018). As demonstrated
in Figure 4, while the normalized RTs were significantly faster to the self in the
high-threat condition for European Americans, normalized RTs were significantly
faster to the boss in the high-threat condition for the Chinese participants. This
pattern of self-face advantage persisted in European Americans during the low-
threat condition, while Chinese participants regained self-face advantage during
the low-threat condition.
Figure 4. Bar graphs depicting Culture x Face x Threat normalized RTs (left
hand only). American participants demonstrated a self-face advantage in both
high threat (self and boss) and low threat (self and other faculty member) blocks
shown on the left (A). Chinese participants demonstrated a boss-face advantage
in the high threat block (self and boss), but a self-face advantage in the low
threat block (self and other faculty member), shown on the right (B).
DISCUSSION
The current study examined how cultural differences in self-construal affect one’s
implicit self-processing in different social contexts. We compared normalized RTs
of American and Chinese participants during an implicit face orientation task and
discovered that, while both groups show a self-face RT advantage when self-face
was presented with a faculty member’s face (low-threat condition), only Chinese
participants showed a loss of self-face advantage, replaced with a boss-face
advantage, when self-face was presented with the boss’s face (high-threat
condition). In contrast, American participants maintained a self-face RT
advantage in both low and high threat conditions, in accordance with our
hypothesis that self-processing in Americans is not influenced by the social threat
of one’s boss. Interestingly, the correlation results show a modulation of this
effect in Americans by their boss’s perceived social status, so that the self-face
advantage decreased as subjective ratings of the boss’s social status
increased. Overall, these results demonstrate that culture modulates how self-
processing is affected by the presence of a social threat and that the very
concept of a “boss” may hold vastly different meanings in different cultures (i.e.,
negative threat in interdependent cultures versus social dominance in
independent cultures).
Cultural Selves and Social Threats
The results of the questionnaire measurements suggest that both European
American and Chinese participants reported significantly greater fear of negative
evaluation from their advisor than from another faculty member, despite giving
comparable ratings of social status to both advisors and faculty members. This
suggests a culturally universal pattern in which advisors, who have direct
influence over our participant pool of graduate students, constitute a greater
social threat than other faculty members, despite equal social status. However,
these fear-of-negative-evaluation ratings were greater overall in Chinese participants, suggesting that Chinese
participants are more likely to fear negative evaluation from their bosses than
American participants. This is in line with the idea that interdependent self-
construals are more sensitive to fear of negative evaluation from others than
independent self-construals (Hofstede, 1980; Markus & Kitayama, 1991) and
holds implications for multicultural work environments in which individuals may
be more or less sensitive to different forms of evaluation and feedback from their
supervisors, depending on their cultural self-construals (Hofstede, 1980; Yum,
1988; Chen & Chung, 1994).
In line with this finding, European Americans’ self-face advantage was not
diminished by the presence of their advisors during the high-threat conditions, as
was found in Chinese participants. Instead, European American participants had
faster RTs to their own face in both low-threat (self and faculty member) and
high-threat (self and advisor) conditions, maintaining the self-face advantage
regardless of social context. Numerous studies on Western versus East Asian
culture have associated East Asian culture with greater collectivism and attention
to context and Western culture with greater individualism and attention to focal
points (Nisbett, Peng, Choi, & Norenzayan, 2001; Masuda & Nisbett, 2001;
Chua, Boland, & Nisbett, 2005; Nisbett & Miyamoto, 2005; Miyamoto, Nisbett, &
Masuda, 2006). Our results correspond with these prior findings, suggesting that
Westerners are less influenced by the presence of social context (e.g., the other
faces in the block) than East Asians during a self-face recognition task. This may
be due to the robustness of European Americans’ self-concept, which is
individualistically defined, as compared to the holistic representation of self found
in Chinese participants, which often takes into account not only the self but also
others within one’s social circle (Zhu et al., 2007; Sui et al., 2009).
Cultural Variations of the Boss
While European American participants did not show a boss-face advantage,
faster RTs to the boss’s face compared to their own face were correlated with the
boss’s perceived social status and relative social influence. That is to say,
advisors with higher perceived social status had a stronger effect on participants’
self-face response than advisors with lower social status. This is in stark contrast
to Chinese participants, all of whom showed a loss of self-face advantage during
high-threat blocks, regardless of the boss’s social status. Interestingly, in
Chinese individuals, faster RTs to the boss’s face compared to their own face
were correlated with how much they feared negative evaluation from their boss.
Thus, in both cultures, faster RTs to the boss’s face compared to self-face were
correlated with a behavioral measure—but these measures are very different.
For Chinese participants, it was fear of negative evaluation from the boss, while
for Americans, it was the boss’s social dominance.
These findings lead us to suggest that the very concept of the “boss” holds
different social meanings in independent versus interdependent cultures.
Namely, the boss may represent a social threat related to the fear of negative
evaluation in more interdependent cultures, particularly where there are more
set, hierarchical relationships with greater “power distances” between positions
(Hofstede, 1980; Weisz et al., 1984; Triandis & Gelfand, 1998). In contrast, in
cultures with more independent self-construals and less distance between the
levels of power of the boss and the employee (Hofstede, 1980), the boss may
represent varying degrees of social dominance, which is dependent upon his or
her perceived social status. It appears that one’s cultural conceptualizations of
oneself mediate this attitude, as Americans tend to focus on primary control,
emphasizing autonomy, the self-made man, and personal goals above work
goals, while the Japanese focus on secondary control, emphasizing teamwork,
the good of the team above all else, and distinct hierarchical levels (Weisz et al.,
1984). This is also reinforced by the neuroimaging finding that mesolimbic reward
regions in the caudate nucleus and the medial prefrontal cortex are active during
observation of signals related to social dominance for Americans and social
subordination for Japanese participants, indicating that cultural differences in
social attitudes are personally rewarding (Freeman et al., 2009).
In addition, these findings are in accordance with Ma & Han’s (2010) implicit
positive association (IPA) theory of self-face advantage. In one of their
experiments, they demonstrated that both Americans and Chinese participants
showed an elimination of the self-face advantage after negative threat-to-self-
concept priming, although Americans did not demonstrate as great a decrease in
self-face advantage as Chinese participants, suggesting that Americans were
more robust to threats to self (Ma & Han, 2010). Here, we showed that
Americans do not demonstrate a loss of self-face advantage in the presence of
the boss, suggesting that the boss does not constitute a threat to oneself for
American participants, while it does for Chinese participants, who did lose the
self-face advantage in the presence of the boss.
While the boss may not constitute a direct negative threat to one’s self-concept in
American participants, thus not eliminating one’s self-face advantage, it is
notable that American participants did demonstrate faster responses to their boss when his or her images were presented with a labmate’s images. This indicates that there may be an effect of one’s boss in more general social settings, an effect that is not found in self-related settings. Notably, this reaction was
only found in left-hand blocks, suggesting a lateralization of the effect to the
right hemisphere, which has been associated with emotional communication
(Blonder, Bowers, & Heilman, 1991; Borod et al., 1998) and social behavior and
social interactions (Devinsky, 2000; Mychack, Kramer, Boone, & Miller, 2001).
Thus, while one’s social superior may not necessarily influence American
participants’ self-perception, he or she may affect more general social situations
in which social dominance plays a role. Under this line of reasoning, it follows
that the presence of one’s boss might affect overall social perception, such as
having a faster reaction to someone who is more socially dominant (boss) than
someone who is socially inferior (labmate). Altogether, these findings shed light
on the role that a social superior may play in different cultural settings. While a
boss may constitute a social threat in interdependent cultures, it appears that a
boss represents general social dominance in more independent cultures, a
finding that holds significant consequences for cross-cultural relationships, both
in the workplace and beyond.
Future Directions
While the current study examined cross-cultural behavioral effects on self-face
recognition in social contexts, future studies may explore the neural mechanisms
underlying these observed effects and how culture modulates the related neural
activity. Prior research suggests that several brain regions may be involved in
these processes, namely the medial prefrontal cortex (mPFC) (Zhu et al., 2007; Sui &
Han, 2007; Sui et al., 2009; Ng, Han, Mao, & Lai, 2010), and the right parietal
cortices (Uddin et al., 2005; Uddin, Molnar-Szakacs, Zaidel, & Iacoboni, 2006).
Cultural neuroscience studies of self-traits (Zhu et al., 2007; Ng et al., 2010) and
self-face recognition (Sui & Han, 2007; Sui et al., 2009) indicate that the medial
prefrontal cortex, which is involved in self-representation and tends to be more
active in response to oneself than to others during trait judgments, also
represents close and familiar others (e.g., one’s mother) in East Asian cultures
but not in Western cultures. In addition, this effect can be modulated by culture-
specific priming, with priming towards more interdependent ideals enhancing the
representation of close others in the mPFC and priming towards more
independent ideals decreasing mPFC activity (Sui & Han, 2007; Ng et al., 2010).
Applied to the current study, it is possible that strong neural representations of
the self in brain areas such as the mPFC in American participants resist the influence of social contexts to a greater degree than in Chinese participants, thus not demonstrating a boss effect on the typical self-face
advantage. In addition, the right parietal region has also been implicated in self-
other distinctions, as shown by repetitive transcranial magnetic stimulation to the
right parietal cortex disrupting performance in a self-other face recognition task
(Uddin et al., 2006) and an fMRI study demonstrating right hemisphere activation
in the parietal, frontal and occipital regions during self-face recognition (Uddin et
al., 2005). These results are consistent with our findings of a stronger effect on
left- than right-hand responses, and suggest that the right parietal region may
also play a role in the cultural modulation of this effect. Finally, as discussed by
Ma and Han (2010), emotion-related regions, such as the anterior cingulate and
anterior insula, may also affect self-versus-boss representations. Future
neuroimaging research may help to better understand the neural regions
responsible for these sociocultural effects.
CONCLUSION
The current study demonstrated the strong effects of culture on self-processing in
the presence of one’s social superior. Not only do these results reveal the ways
in which self-construals are affected by social threats, with interdependent self-
construals more strongly influenced than independent ones, but they also suggest
that what constitutes a social threat may differ across cultures. Specifically, we
suggest that the concept of a “boss” may hold vastly different meanings for
individuals from East Asian versus Western cultures, representing a personal
social threat in the former and general social dominance in the latter. Research
on cultural differences has already noted that cultural tendencies (e.g., collectivist
vs. individualist; high vs. low power distance) have an impact on leadership
behavior (Hofstede, 1980; Offermann & Hellmann, 1997), political communication
(Ikeda & Huckfeldt, 2001), career counseling (Leong, Hardin, & Gupta, 2010),
work team dynamics (Kirkman & Shapiro, 2001), and investments and economic
outcomes (Power, Schoenherr, & Samson, 2010), to name a few. Future
research may explore whether and how cultural differences in relation to one’s
social superior present themselves in the workplace and political arenas, and
whether there are ways to effectively mediate these differences, as one study
has suggested that the self demonstrates cultural plasticity and can be
modulated by cultural priming (Chiao et al., 2010). These findings are particularly
salient as globalization increases, and along with it, the prevalence of
multicultural work, political, and public environments, particularly between East
Asian and Western partners. Studies of multicultural workplaces indicate intricate
dynamics between in- and out-group members based on their
individualistic/collectivist tendencies (Bochner, 1994; Loh, Restubog, &
Zagenczyk, 2010), communication styles, and interpersonal relationships (Yum,
1988; Chen & Chung, 1994; Sanchez-Burks et al., 2003). Better understanding of
cross-cultural social relationships and social hierarchies may elucidate the ways
in which individuals hold different culture-based social understandings and
expectations. These enhanced understandings may in turn help to foster
smoother and more productive global collaborations and exchanges.
ARTICLE REFERENCES
Blonder, L. X., Bowers, D., & Heilman, K. M. (1991). The role of the right
hemisphere in emotional communication. Brain, 114, 1115-1127.
Bochner, S. (1994). Cross-cultural differences in the self concept: A test of
Hofstede's individualism/collectivism distinction. Journal of Cross-Cultural
Psychology, 25(2), 273-283.
Borod, J. C., Cicero, B. A., Obler, L. K., Welkowitz, J., Erhan, H. M., Santschi, C.
et al. (1998). Right hemisphere emotional perception: Evidence across
multiple channels. Neuropsychology, 12(3), 446-458.
Brockner, J., & Chen, Y. R. (1996). The moderating roles of self-esteem and self-
construal in reaction to a threat to the self: Evidence from the People's
Republic of China and the United States. J Pers Soc Psychol, 71(3), 603-
615.
Chen, G., & Chung, J. (1994). Confucian impact on organizational
communication. Communication Quarterly, 42(2), 93-105.
Chiao, J. Y., Harada, T., Komeda, H., Li, Z., Mano, Y., Saito, D. et al. (2009).
Neural basis of individualistic and collectivistic views of self. Hum Brain
Mapp, 30(9), 2813-2820.
Chiao, J. Y., Harada, T., Komeda, H., Li, Z., Mano, Y., Saito, D. et al. (2010).
Dynamic cultural influences on neural representations of the self. J Cogn
Neurosci, 22(1), 1-11.
Chua, H. F., Boland, J. E., & Nisbett, R. E. (2005). Cultural variation in eye
movements during scene perception. Proc Natl Acad Sci U S A, 102(35),
12629-12633.
Devinsky, O. (2000). Right cerebral hemisphere dominance for a sense of
corporeal and emotional self. Epilepsy & Behavior, 1(1), 60-73.
Fiske, A. P., Kitayama, S., Markus, H. R., & Nisbett, R. E. (1998). The cultural
matrix of social psychology. In D. T. Gilbert, S. T. Fiske, & G. Lindzey (Eds.), Handbook of social psychology (pp. 915-981). Boston: McGraw-
Hill.
Freeman, J. B., Rule, N. O., Adams, R. B. J., & Ambady, N. (2009). Culture
shapes a mesolimbic response to signals of dominance and subordination
that associates with behavior. Neuroimage, 47(1), 353-359.
Han, S., & Northoff, G. (2009). Understanding the self: a cultural neuroscience
approach. Prog Brain Res, 178, 203-212.
Hofstede, G. (1980). Culture's consequences: International differences in work-
related values. Beverly Hills, CA: Sage.
Ikeda, K., & Huckfeldt, R. (2001). Political communication and disagreement
among citizens in Japan and the United States. Political Behavior, 23(1),
23-51.
Keenan, J. P., Freund, S., Hamilton, R. H., Ganis, G., & Pascual-Leone, A. (2000). Hand response differences in a self-face identification task.
Neuropsychologia, 38, 1053-1074.
Keenan, J. P., McCutcheon, B., Freund, S., Gallup, G. G., Sanders, G., & Pascual-Leone, A. (1999). Left hand advantage in a self-face recognition
task. Neuropsychologia, 37, 1421-1425.
Keenan, J. P., Nelson, A., O'Connor, M., & Pascual-Leone, A. (2001). Self-
recognition and the right hemisphere. Nature, 409(6818), 305.
Kirkman, B. L., & Shapiro, D. L. (2001). The impact of team members' cultural
values on productivity, cooperation, and empowerment in self-managing
work teams. Journal of Cross-Cultural Psychology, 32(5), 597-617.
Leary, M. R. (1983). A brief version of the fear of negative evaluation scale.
Personality and Social Psychology Bulletin, 9, 518-530.
Leong, F. T., Hardin, E. E., & Gupta, A. (2010). A cultural formulation approach
to career assessment and career counseling with Asian American clients.
Journal of Career Development, 37(1), 465-486.
Loh, J., Restubog, S. L. D., & Zagenczyk, T. J. (2010). Consequences of
workplace bullying on employee identification and satisfaction among
Australians and Singaporeans. Journal of Cross-Cultural Psychology,
41(2), 236-252.
Ma, Y., & Han, S. (2009). Self-face advantage is modulated by social threat –
Boss effect on self-face recognition. Journal of Experimental Social
Psychology, 45, 1048-1051.
Ma, Y., & Han, S. (2010). Why we respond faster to the self than to others? An
implicit positive association theory of self-advantage during implicit face
recognition. J Exp Psychol Hum Percept Perform, 36(3), 619-633.
Markus, H., & Kitayama, S. (1998). The cultural psychology of personality.
Journal of Cross-Cultural Psychology, 29(1), 63-87.
Markus, H. R., & Kitayama, S. (1991). Culture and the self: Implications for
cognition, emotion, and motivation. Psychol Rev, 98, 224-253.
Masuda, T., & Nisbett, R. E. (2001). Attending holistically versus analytically:
comparing the context sensitivity of Japanese and Americans. J Pers Soc
Psychol, 81(5), 922-934.
Miyamoto, Y., Nisbett, R. E., & Masuda, T. (2006). Culture and the physical
environment. Holistic versus analytic perceptual affordances. Psychol Sci,
17(2), 113-119.
Moskowitz, D. S., Suh, E. J., & Desaulniers, J. (1994). Situational influences on
gender differences in agency and communion. J Pers Soc Psychol, 66(4),
753-761.
Mychack, P., Kramer, J. H., Boone, K. B., & Miller, B. L. (2001). The influence of
right frontotemporal dysfunction on social behavior in frontotemporal
dementia. Neurology, 56(11 Suppl 4), S11-5.
Ng, S. H., Han, S., Mao, L., & Lai, J. C. (2010). Dynamic bicultural brains: fMRI
study of their flexible neural representation of self and significant others in response to culture primes. Asian Journal of Social Psychology, 13, 83-91.
Nisbett, R. E., & Miyamoto, Y. (2005). The influence of culture: holistic versus
analytic perception. Trends Cogn Sci, 9(10), 467-473.
Nisbett, R. E., Peng, K., Choi, I., & Norenzayan, A. (2001). Culture and systems
of thought: holistic versus analytic cognition. Psychol Rev, 108(2), 291-
310.
Oetzel, J. G. (1998). Explaining individual communication processes in
homogeneous and heterogeneous groups through individualism-
collectivism and self-construal. Human Communication Research, 25(2),
202-224.
Offermann, L. R., & Hellmann, P. S. (1997). Culture's consequences for leadership
behavior: National values in action. Journal of Cross-Cultural Psychology,
28(3), 342-351.
Power, D., Schoenherr, T., & Samson, D. (2010). The cultural characteristic of
individualism/collectivism: A comparative study of implications for
investment in operations between emerging Asian and industrialized
Western countries. Journal of Operations Management, 28, 206-222.
Sanchez-Burks, J., Lee, F., Choi, I., Nisbett, R., Zhao, S., & Koo, J. (2003).
Conversing across cultures: East-West communication styles in work and
nonwork contexts. J Pers Soc Psychol, 85(2), 363-372.
Singelis, T. M., & Sharkey, W. F. (1995). Culture, self-construal, and
embarrassability. Journal of Cross-Cultural Psychology, 26(6), 622-644.
Sui, J., & Han, S. (2007). Self-construal priming modulates neural substrates of
self-awareness. Psychol Sci, 18(10), 861-866.
Sui, J., Liu, C. H., & Han, S. (2009). Cultural difference in neural mechanisms of
self-recognition. Soc Neurosci, 4(5), 402-411.
Triandis, H. C., & Gelfand, M. J. (1998). Converging measurement of horizontal and
vertical individualism and collectivism. J Pers Soc Psychol, 74(1), 118-
128.
Uddin, L. Q., Kaplan, J. T., Molnar-Szakacs, I., Zaidel, E., & Iacoboni, M. (2005).
Self-face recognition activates a frontoparietal "mirror" network in the right
hemisphere: an event-related fMRI study. Neuroimage, 25(3), 926-935.
Uddin, L. Q., Molnar-Szakacs, I., Zaidel, E., & Iacoboni, M. (2006). rTMS to the
right inferior parietal lobule disrupts self-other discrimination. Soc Cogn
Affect Neurosci, 1(1), 65-71.
Weisz, J. R., Rothbaum, F. M., & Blackburn, T. C. (1984). Standing out and
standing in: The psychology of control in America and Japan. American
Psychologist, 39(9), 955-969.
Yum, J. (1988). The impact of Confucianism on interpersonal relationships and
communication patterns in East Asia. Communication Monographs, 55,
374-388.
Zhu, Y., Zhang, L., Fan, J., & Han, S. (2007). Neural basis of cultural influence
on self-representation. Neuroimage, 34(3), 1310-1316.