Mechanisms of Motor Learning in Immersive Virtual Reality
and Their Influences on Retention and Context Transfer
by
Julia M. Juliano
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(NEUROSCIENCE)
August 2022
Copyright 2022 Julia M. Juliano
Acknowledgements
It is a privilege to be able to acknowledge all the people who have been so
important to me throughout my Ph.D. journey.
I would first like to express my sincere gratitude to my Ph.D. advisor, Dr. Sook-Lei
Liew. Thank you, Lei, for providing me with so much support and guidance over these
years. You have given me more opportunities than I can count, which have helped shape me into a researcher and scientist and allowed me to enjoy the journey along the way.
You encouraged and advocated for me when I would announce my latest goals or ideas,
and you never discouraged my interests or doubted my capabilities. You have created a
lab culture that is fun and family-like, and as my ‘academic mom’ you are always going above
and beyond what is expected from an advisor. Thank you for your inspiration and
mentorship, for having confidence in me, and for continually pushing me to be my best. I
will always be grateful to you.
Next, I would like to thank my committee members for your mentorship and support
throughout this process. Thank you, Dr. Nicolas Schweighofer, for taking the time and
having the patience to help me think through concepts and code, and for pushing me to
think like a scientist. Thank you, Dr. James Finley, for inspiring and challenging me to
think logically and to ask thoughtful questions, and for always providing helpful feedback.
Thank you, Dr. Beth Fisher, for teaching me about motor learning, and for being someone
I can turn to for help and encouragement. Thank you, Dr. Kristian Leech, for helping me
understand motor learning concepts, and for being kind and supportive. I am also
wholeheartedly grateful to several other mentors who have been instrumental both before
and during graduate school: Dr. Carolee Winstein, Dr. Cheryl Conrad, Dr. Linda Caterino,
Dr. James Middleton, Dr. James Vicich, and Dr. Matthew Rhodes. Thank you all for your
guidance, support, letters of recommendation, wisdom, and kindness.
I would next like to thank my colleagues and friends, who have made these years
enjoyable. Thank you to my friends at the NPNL, especially to (in alphabetical order):
Anisha, Artemis, Cathy, Chris, Coralie, David, Lily, Meghan, Miranda, Octavio, and
Stephanie. You have all made my time in lab so memorable and I will always cherish the
fun we had, the food we ate, and all the laughs we shared. A special thanks to Lily for our
walks and talks and jumping rope, and for being someone I could always lean on for
support; to Octavio for bringing me pica fresa and for always being willing to listen and
help; and to Artemis for all your Ph.D. wisdom. Also, thank you to my other friends both
in and outside of research. Aram, Chung, and Natalia, thank you for your invaluable advice and friendship. Abby and Andrew, Alice and Kevin, Cindi and Andy, Jamie and Dan, and Larry, thank you for making weekends fun by helping me ‘get away from work’. And a
special thanks to Justine, for keeping me sane during this process; merci ma bonne amie.
I would also like to thank all the individuals who took the time to participate in my
studies; without your help none of this research would have been possible. A special
thanks to my siblings who drove to California to participate in my studies. And a very
special thanks to Dave, who is always willing to help and is an inspiration.
Thank you to the Neuroscience Graduate Program for providing a wonderful
graduate school experience and for your funding support through the NGP T32 Training
Grant. And thank you to my other funding sources, the Rose Hills Foundation, and the
Link Foundation, for providing resources to fund this research.
Most importantly, I would like to thank my family for their endless love and
continuous support. Mom, thank you for your selflessness, for being my confidant, and
for allowing me to take on challenges, despite knowing I would likely return with scraped
knees. Dad (Steve), thank you for always being there for me, for teaching me integrity,
and for believing in me. Dad (Glen), thank you for encouraging me to get up when I fall
and to keep fighting. Logan, Tucker, Anna, thank you for being wonderful brothers and
an amazing sister; you are all an inspiration to me, and I am so grateful for the
relationships we have. Pappy and Baba, Grandma and Grandpa, thank you for all the
lessons you taught and the love you gave me. Jane and Daniel, thank you both for the
love and support you have given me over the years, for your guidance and wisdom, and
for welcoming me into your family.
Finally, I would like to express my appreciation to my husband, Joseph. Your
unwavering belief in me has given me the strength to see this through. You are always
there to encourage me when I am discouraged and put a smile on my face when I am
down. Thank you for being my teammate and closest friend, and for not giving me too
much grief for ‘running our heater 24/7’ when writing this dissertation. The road I have
been traveling on has been full of wonderful adventures; thank you for being on this
journey with me - I am so excited for the next chapter in our adventure book.
Table of Contents
Acknowledgements .......................................................................................................... ii
List of Tables ................................................................................................................. viii
List of Figures .................................................................................................................. ix
Abstract ............................................................................................................................ x
Chapter 1: Overview ......................................................................................................... 1
1.1 Introduction ............................................................................................................. 1
1.2 Specific Aims .......................................................................................................... 3
1.3 Significance ............................................................................................................. 5
Chapter 2: Background ..................................................................................................... 7
2.1 Goal-Directed Movements ...................................................................................... 7
2.2 Motor Learning ........................................................................................................ 7
2.2.1 Motor Adaptation .......................................................................................................... 8
2.2.2 Dual-State, Multi-Rate Model of Motor Adaptation ..................................................... 10
2.2.3 Neural Mechanisms of Motor Adaptation ................................................................... 11
2.2.4 Motor Learning in HMD-VR ........................................................................................ 13
2.3 Cognitive Processes in Motor Learning ................................................................ 14
2.3.1 Working Memory ........................................................................................................ 15
2.3.2 Attention and Motor Planning ..................................................................................... 17
2.3.3 Cognitive Load ........................................................................................................... 19
2.3.4 Cognitive Load in HMD-VR ........................................................................................ 20
2.4 Visual Processing for Action ................................................................................. 21
2.4.1 Visual Processing of Real 3D and 2D Objects ........................................................... 23
2.4.2 Visual Processing in HMD-VR ................................................................................... 24
Chapter 3: Context Transfer of Motor Skills in HMD-VR ................................................ 26
3.1 Abstract ................................................................................................................. 26
3.2 Introduction ........................................................................................................... 27
3.3 Methods ................................................................................................................ 30
3.4 Results .................................................................................................................. 38
3.5 Discussion ............................................................................................................. 48
3.6 Conclusion ............................................................................................................ 52
3.7 Supplemental Material .......................................................................................... 53
Chapter 4: Cognitive Load, Retention, and Context Transfer of Visuomotor Adaptation in
HMD-VR ......................................................................................................................... 57
4.1 Abstract ................................................................................................................. 57
4.2 Introduction ........................................................................................................... 58
4.3 Methods ................................................................................................................ 62
4.4 Results .................................................................................................................. 71
4.5 Discussion ............................................................................................................. 82
4.6 Conclusions .......................................................................................................... 88
4.7 Supplementary Material ........................................................................................ 89
Chapter 5: Visual Processing of Action Towards Virtual 3D Objects in HMD-VR .......... 93
5.1 Abstract ................................................................................................................. 93
5.2 Introduction ........................................................................................................... 94
5.3 Methods ................................................................................................................ 98
5.4 Results ................................................................................................................ 103
5.5 Discussion ........................................................................................................... 107
5.6 Conclusion .......................................................................................................... 112
Chapter 6: Discussion .................................................................................................. 114
6.1 Summary of Key Findings ................................................................................... 114
6.1.1 Main Findings from Aim 1 ........................................................................................ 114
6.1.2 Main Findings from Aim 2 ........................................................................................ 115
6.1.3 Main Findings from Aim 3 ........................................................................................ 115
6.1.4 Main Findings from Aim 4 ........................................................................................ 116
6.2 Implications and Significance of Findings ........................................................... 117
6.2.1 Research Implications .............................................................................................. 117
6.2.2 Clinical Implications .................................................................................................. 119
6.3 Limitations ........................................................................................................... 121
6.4 Future Directions and Conclusion ....................................................................... 123
References ................................................................................................................... 125
Appendix A: Visuomotor Adaptation in HMD-VR and CS ............................................. 159
A.1 Abstract .............................................................................................................. 159
A.2 Introduction ......................................................................................................... 160
A.3 Methods .............................................................................................................. 165
A.4 Results ................................................................................................................ 171
A.5 Discussion .......................................................................................................... 176
Appendix B: Embodiment on a Brain–Computer Interface in HMD-VR ........................ 180
B.1 Abstract .............................................................................................................. 180
B.2 Introduction ......................................................................................................... 181
B.3 Methods .............................................................................................................. 184
B.4 Results ................................................................................................................ 196
B.5 Discussion .......................................................................................................... 203
B.6 Conclusions ........................................................................................................ 212
B.7 Supplementary Material ...................................................................................... 213
Appendix C: List of Relevant Publications .................................................................... 216
List of Tables
Table 3.1 ......................................................................................................................... 44
Table 3.2 ......................................................................................................................... 46
Table 3.S1 ...................................................................................................................... 53
Table 3.S2 ...................................................................................................................... 54
Table 3.S3 ...................................................................................................................... 55
Table 3.S4 ...................................................................................................................... 56
Table 4.S1 ...................................................................................................................... 89
Table 4.S2 ...................................................................................................................... 89
Table 4.S3 ...................................................................................................................... 90
Table 4.S4 ...................................................................................................................... 91
Table 4.S5 ...................................................................................................................... 92
Table B.1 ...................................................................................................................... 193
Table B.S1 .................................................................................................................... 213
List of Figures
Figure 1.1 ......................................................................................................................... 6
Figure 2.1 ....................................................................................................................... 13
Figure 2.2 ....................................................................................................................... 20
Figure 2.3 ....................................................................................................................... 22
Figure 3.1 ....................................................................................................................... 32
Figure 3.2 ....................................................................................................................... 33
Figure 3.3 ....................................................................................................................... 41
Figure 3.4 ....................................................................................................................... 43
Figure 3.5 ....................................................................................................................... 45
Figure 3.6 ....................................................................................................................... 47
Figure 4.1 ....................................................................................................................... 64
Figure 4.2 ....................................................................................................................... 72
Figure 4.3 ....................................................................................................................... 74
Figure 4.4 ....................................................................................................................... 77
Figure 4.5 ....................................................................................................................... 79
Figure 4.6 ....................................................................................................................... 81
Figure 5.1 ....................................................................................................................... 99
Figure 5.2 ..................................................................................................................... 100
Figure 5.3 ..................................................................................................................... 104
Figure 5.4 ..................................................................................................................... 105
Figure 5.5 ..................................................................................................................... 107
Figure A.1 ..................................................................................................................... 164
Figure A.2 ..................................................................................................................... 168
Figure A.3 ..................................................................................................................... 175
Figure B.1 ..................................................................................................................... 185
Figure B.2 ..................................................................................................................... 189
Figure B.3 ..................................................................................................................... 197
Figure B.4 ..................................................................................................................... 198
Figure B.5 ..................................................................................................................... 200
Figure B.6 ..................................................................................................................... 205
Figure B.S1 ................................................................................................................... 214
Figure B.S2 ................................................................................................................... 215
Abstract
Immersive virtual reality using a head-mounted display (HMD-VR) has been increasing in use
for motor learning purposes. This increased use in areas such as motor rehabilitation and
surgical training is largely driven by the potential for clinicians and educators to have
greater control and customization of the training environment. However, there is
conflicting evidence on the effectiveness of these devices, particularly on whether the
motor skills learned in an HMD-VR environment transfer to the real world. Importantly,
the mechanisms driving and limiting HMD-VR context transfer remain unclear, as well as
how differences between HMD-VR and more conventional training environments affect
the motor learning process. One noteworthy difference is that HMD-VR seems to increase
cognitive load during complex motor learning tasks. However, it is unclear how increased
cognitive load affects motor memories (i.e., retention and context transfer), nor is it clear
what may cause increased cognitive load in HMD-VR. The overall goal of this dissertation
work is to address gaps in our understanding of what makes upper extremity motor
learning and movements in HMD-VR different from more conventional training
environments and how these differences could influence the formation of motor
memories. Together, this work bridges motor learning mechanisms with a theoretical
framework of cognitive load to examine the impact of cognitive load on motor memory
formation. The findings discussed will influence the increasing use of HMD-VR in motor
learning applications, such as motor rehabilitation and surgical training.
Chapter 1: Overview
1.1 Introduction
In a technologically advancing world, there is a need to understand how we interact with technology so that it may be effectively used for learning and training purposes.
One technology increasing in use for motor learning purposes, such as surgical training
and motor rehabilitation, is immersive virtual reality using a head-mounted display (HMD-
VR) (Huber et al., 2017; Levin, 2020). Despite this increase in popularity, there is
conflicting evidence about whether motor skills learned in HMD-VR will transfer to the real
world (Levac et al., 2019). These inconsistent findings suggest that there may be personal
factors (e.g., presence) or circumstances (e.g., task complexity) in which the use of HMD-VR
could potentially result in less effective motor learning. Understanding what makes motor
learning in HMD-VR different from the real world and what reasons underlie a lack of
contextual transfer can better inform the design of HMD-VR applications so that they can
be more effective for training purposes.
Converging evidence suggests that motor skills are learned differently between
HMD-VR and conventional training environments (Levac et al., 2019). A potential
difference is the underlying motor learning mechanisms used in each of the environments.
My previous work found that individuals performing a visuomotor adaptation task in HMD-
VR relied more on explicit, cognitive mechanisms and less on implicit mechanisms in
order to adapt. In contrast, individuals performing the same task on a conventional
computer screen (CS) relied more on implicit mechanisms and less on explicit, cognitive
mechanisms to adapt (Appendix A). These findings suggest that motor learning in HMD-
VR may require additional cognitive processing compared to conventional training
environments. Supporting this is evidence showing that complex motor tasks in HMD-VR
increase cognitive load compared to a CS environment (Frederiksen et al., 2020).
However, it is unclear whether visuomotor adaptation in HMD-VR also increases cognitive
load and whether this increase in cognitive load is related to explicit, cognitive
mechanisms.
Cognitive load is the amount of information that can be held in working memory at
one time. The theoretical construct of cognitive load suggests that novel information (e.g.,
a new visuomotor mapping) can be encoded in long-term memory when the load on
working memory is within working memory limits (Orru & Longo, 2019). Increased
cognitive load during complex motor tasks in HMD-VR is shown to decrease motor
performance compared to a CS environment (Frederiksen et al., 2020). However, it is
unclear whether HMD-VR-related increases in cognitive load and decreases in motor
performance result in decreased long-term motor memory formation.
Motor learning in HMD-VR is largely guided by vision. Visually guided movements
towards real three-dimensional (3D) objects use selective attention to relevant
information, ignoring irrelevant information unrelated to the movement. In other words,
visual processing for action in the real world is not typically affected by perceptual effects,
such as processing object dimensions irrelevant to performing the action. Processing
object dimensions as a unified whole is referred to as holistic processing. While
perception does not typically intrude on movements towards 3D objects, converging
evidence suggests that visually guided movements towards two-dimensional (2D) objects
rely on holistic processing (Freud & Ganel, 2015; Ganel et al., 2020). HMD-VR
environments can be perceived as seemingly flat and therefore may be processed more
as 2D compared to real world 3D objects (Kelly et al., 2017; Phillips et al., 2009).
However, it is unclear whether visually guided movements towards virtual 3D objects in
HMD-VR are also susceptible to perceptual effects, and whether objects are processed
more as 2D than 3D by the brain.
1.2 Specific Aims
A better understanding of motor learning in HMD-VR is needed to effectively inform
the design of HMD-VR applications. If HMD-VR is used for training or rehabilitation
purposes, it is important to understand how the brain interprets the information provided
by the headset and how increased cognitive load in HMD-VR affects long-term motor
memory formation. This work seeks to address important questions that could help gain
a better understanding of motor learning in HMD-VR. Questions are addressed through
three experiments, where the first experiment addresses Aim 1, the second experiment
addresses Aims 2 and 3, and the third experiment addresses Aim 4.
Aim 1: Determine whether personal factors affect immediate transfer
between HMD-VR and CS contexts. To address this aim, healthy young adults were
trained on an established skilled motor learning task, called the Sequential Visual
Isometric Pinch Task (SVIPT), in either a CS or HMD-VR environment. Then we
examined whether the motor skills immediately transferred from HMD-VR to CS, and vice
versa. After completing the task, participants responded to questionnaires collecting
personal factors (i.e., presence, age, gender, video game use, and previous HMD-VR
experience). We found that the motor skills acquired in HMD-VR did not immediately
transfer to CS and that this decrease in motor skill performance could be predicted by
presence. Conversely, motor skills acquired in CS not only transferred but improved in
HMD-VR, and this increase in motor skill performance could be predicted by presence,
gender, age, and video game use.
Aim 2-1: Determine whether visuomotor adaptation in HMD-VR increases
cognitive load compared to CS. To address this aim, healthy young adults were trained
on an established visuomotor adaptation task in either a CS or HMD-VR environment. To
estimate cognitive load, a dual-task probe measuring attentional demands was used
throughout adaptation. We found that cognitive load was greater in HMD-VR compared
to CS across adaptation.
Aim 2-2: Examine the relationship between cognitive load during visuomotor
adaptation and explicit mechanisms. To address this aim, explicit and implicit
mechanisms were measured throughout adaptation. Then we examined the relationship
between cognitive load during adaptation and the explicit process in early and late
adaptation, as well as across the course of adaptation. We found that higher cognitive
load was related to decreased explicit, cognitive mechanisms, specifically early in
adaptation.
Aim 3-1: Determine whether visuomotor adaptation in HMD-VR leads to
decreased long-term retention and context transfer. To address this aim, participants
from Aim 2 returned after a 24-hour retention interval to complete a retention test (i.e., no
feedback) in either the same or a separate context from training. We found that
visuomotor adaptation in HMD-VR led to decreased long-term retention and context
transfer, which appeared to be due to greater forgetting of explicit processes.
Aim 3-2: Examine the relationship between cognitive load during visuomotor
adaptation and long-term retention and context transfer. To address this aim, we
examined the relationship between cognitive load during adaptation and long-term
retention and context transfer. We found that increased cognitive load was related to
decreased long-term motor memory formation.
Aim 4: Determine whether visual processing for action in HMD-VR is
susceptible to perceptual effects. To address this aim, we used an established Garner
interference task to examine whether vision-for-action directed towards virtual 3D objects
in HMD-VR relies on holistic processing of object shape in healthy young adults. We found
that unlike real-world processing, visual processing for action in HMD-VR involves holistic
processing and is therefore susceptible to perceptual effects.
1.3 Significance
Overall, this work has important implications for the development of motor learning
applications in HMD-VR. As technology advances, mediums such as HMD-VR will be
increasingly incorporated for motor learning and training purposes. These applications
may range from training a surgeon to perform complex procedures in a high-risk
environment or a stroke survivor to use their own brain activity to move a virtual arm
(Figure 1.1). In each example, the motor skills learned in HMD-VR will only be effective if
they are transferred to the real world. Understanding the mechanisms involved in
facilitating this transfer of motor skills is imperative to developing effective training
applications. This research provides critical insight into motor learning in these emerging
technologies and guides the design of future applications in an array of disciplines.
Figure 1.1. Example of a stroke survivor training on a brain
computer interface called REINVENT (Rehabilitation
Environment using the Integration of Neuromuscular-based
Virtual Enhancements for Neural Training). REINVENT is
designed for individuals with severe motor impairments and
provides brain (electroencephalography) and/or muscle
(electromyography) neurofeedback in HMD-VR. See
Appendix B for more details on REINVENT.
Chapter 2: Background
2.1 Goal-Directed Movements
When we interact with the world, there are certain things that need to happen in
order for us to move around in a meaningful way. First, the central nervous system takes
in sensory inputs from both the world (extrinsic information) and the body (intrinsic
information) (Desmurget et al., 1998). Then, these sensory inputs go through a series of
sensorimotor transformations in order to generate a motor command (Soechting &
Flanders, 1989). Sensorimotor transformations are computed through neural circuits
maintaining internal representations, called internal models, that relate sensory inputs to
motor commands (Markov et al., 2021). There are two types of internal models: a forward
model which estimates future sensory inputs based on motor output and an inverse model
which calculates motor outputs from sensory inputs (Wolpert & Kawato, 1998). Internal
models are controlled through a combination of feedforward and feedback control
(Desmurget & Grafton, 2000). Feedforward control generates a motor command by
anticipating the movement needed to achieve the goal but does not correct for any errors
during online movements (Kawato, 1999). Feedback control compares the sensory inputs
to the goal during online movements and then makes adjustments to compensate for any
errors (Rachael D. Seidler et al., 2004). Resulting movement errors are then used to
update feedforward and feedback models to improve future performance (Maeda et al.,
2018; Wagner & Smith, 2008; Yousif & Diedrichsen, 2012).
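To make the feedforward and feedback contributions concrete, the short Python sketch below simulates a one-dimensional reach in which an imperfect inverse model plans a feedforward command before movement and a proportional feedback term corrects sensed deviations from the desired trajectory online. The gains, the linear plant, and the reference trajectory are hypothetical simplifications chosen only to illustrate the distinction drawn above, not a model used in this dissertation.

# Minimal 1D reaching sketch (hypothetical parameters, for illustration only):
# a feedforward command is planned from an imperfect inverse model, and a
# proportional feedback controller corrects sensed errors during the movement.

TARGET = 10.0      # goal position (arbitrary units)
TRUE_GAIN = 1.0    # actual mapping from motor command to movement
MODEL_GAIN = 0.8   # the internal model's (inaccurate) estimate of that mapping
K_FB = 0.15        # feedback correction gain per time step
N_STEPS = 50

def simulate_reach(use_feedback: bool) -> float:
    position = 0.0
    planned_command = TARGET / MODEL_GAIN          # feedforward: inverse model output
    for t in range(1, N_STEPS + 1):
        desired = TARGET * t / N_STEPS             # where the hand should be by now
        step = TRUE_GAIN * planned_command / N_STEPS
        if use_feedback:
            step += K_FB * (desired - position)    # online correction of sensed error
        position += step
    return position

print(f"feedforward only: {simulate_reach(False):.2f}")  # overshoots because the model is wrong
print(f"with feedback   : {simulate_reach(True):.2f}")   # ends much closer to the target

Because the inverse model's gain is wrong, the open-loop (feedforward-only) reach misses the target, and the feedback term compensates for most of the resulting error online; the residual error is the kind of signal described above as driving updates to the feedforward and feedback models.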
2.2 Motor Learning
Motor learning is the acquisition and maintenance of motor skills (Krakauer et al.,
2019). When acquiring new motor skills, we move through a series of overlapping stages
in which movements that were initially effortful become increasingly effortless. There are three proposed overlapping stages to explain the course of motor skill acquisition: the cognitive stage, the associative stage, and the autonomous stage (Fitts & Posner, 1967). The cognitive stage generally involves the explicit
knowledge of a strategy and increased attentional resources. Then with practice, learning
moves into the associative stage where basic movement patterns begin to form, and
attention can begin to focus on the details of movement patterns. After extensive practice,
learning then moves into the autonomous stage and movements become effortless and
automatic (Eversheim & Bock, 2001; Fitts, 1964). Throughout this process, neural
changes occur to allow for faster and more accurate motor skill performance that requires
increasingly less cognitive effort (Diedrichsen & Kornysheva, 2015; Taylor & Ivry, 2012).
However, once these motor skills become more automatic, changes in the environment
may force the motor system to adapt in order to maintain motor skill performance.
The process by which the motor system updates on a trial-to-trial basis in response to sensory prediction errors (i.e., the difference between actual and predicted movements) is called motor adaptation.
2.2.1 Motor Adaptation
Motor skill acquisition is thought to be separate from motor adaptation. While motor
skill acquisition refers to improvements in accuracy or speed, motor adaptation refers to
the recalibration of well-trained movements to changes in the environment (Diedrichsen
& Kornysheva, 2015; Kitago & Krakauer, 2013). However, both motor skill acquisition and
motor adaptation have similar learning curves where initial improvements in performance
increase rapidly and then slow to a gradual increase as practice continues (Julien Doyon
& Benali, 2005).
Motor adaptation can be studied in several ways including force-field adaptation,
prism adaptation, saccade adaptation, and visuomotor adaptation. Force-field adaptation
paradigms are used to investigate the motor system's ability to adapt to external kinetic distortions, whereas prism, saccade, and visuomotor adaptation paradigms are used to investigate the motor system's ability to adapt to external kinematic distortions (Bo et al.,
2008). While the neural mechanisms underlying these motor adaptation paradigms seem
to overlap, they have largely been examined through visuomotor adaptation paradigms
(Diedrichsen et al., 2005; Krakauer et al., 2019).
Visuomotor adaptation is thought to be driven by competing explicit and implicit
processes. During a visuomotor adaptation task, participants reach to targets on a screen
that occludes the vision of their arm but provides feedback of their arm location in the
form of a cursor. After normal reaching towards targets, a large, unexpected perturbation
may occur, causing the cursor to be rotated a certain angle away from the target,
resulting in decreased performance. Then, either by being explicitly given a strategy or
by developing a strategy on their own, participants will adapt to the perturbation and
performance will improve (Mazzoni & Krakauer, 2006; Taylor et al., 2014). However, as
the trials continue, performance will slowly and incrementally begin to decrease as
reaching moves towards the location of the target. This stereotypical adaptation curve is
thought to occur because of competing explicit, cognitive strategies and implicit motor
processes that update an internal forward model. Explicit learning is driven by the
difference between the location of the target and the visual feedback and implicit learning
is driven by the difference between the predicted reach location and the visual feedback,
also known as sensory prediction errors (Taylor et al., 2014). Therefore, while cognitive
processes may lead to flexible compensatory strategies early in visuomotor adaptation,
gradual adaptation of an internal model, driven by implicit mechanisms, appears later in
adaptation (Taylor & Ivry, 2011). An updated internal model is typically reflected by the
presence of an aftereffect, which can be observed after adaptation in trials with no visual
feedback.
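Restated in symbols, as an illustrative paraphrase of the two error signals defined above (with reach and cursor locations expressed as directions; this notation is not taken from the cited studies):

Explicit error_t = θ_target − θ_cursor,t        (target direction minus cursor feedback direction)
SPE_t = θ_predicted,t − θ_cursor,t              (predicted cursor direction minus actual feedback direction)

The SPE term is the same sensory prediction error that drives the model described in the next section.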
2.2.2 Dual-State, Multi-Rate Model of Motor Adaptation
Explicit mechanisms reflect the cognitive strategies used to adapt to experienced
perturbations while implicit mechanisms reflect a recalibration of an internal model (Miall
& Wolpert, 1996; Taylor & Ivry, 2013b). There is increasing evidence that explicit and
implicit processes are homologous with the fast and slow processes of a dual-state, multi-
rate model of sensorimotor learning (Equations 2.1-3) (McDougle et al., 2015).
Specifically, the fast process predicts early, explicit learning and the slow process predicts
later, implicit learning. Each process is composed of a learning rate modulated by sensory
prediction errors (B term in Equations 2.1 and 2.2) and a forgetting rate modulated by
prior learning (A term in Equations 2.1 and 2.2). As observed in Equations 2.1 and 2.2,
the fast and slow processes compete for sensory prediction errors (SPE). The fast
process responds strongly to error early in adaptation but exhibits fast forgetting, while
the slow process increases gradually as adaptation continues and contributes to motor
memory formation (Joiner & Smith, 2008; M. A. Smith et al., 2006). Importantly, the
magnitude of the slow process at the end of adaptation predicts the amount of information
retained at long-term retention, suggesting that the magnitude of implicit adaptation may
also predict long-term retention (Schweighofer et al., 2011).
F_{t+1} = A_F × F_t + B_F × SPE_t        (2.1)
S_{t+1} = A_S × S_t + B_S × SPE_t        (2.2)
U_{t+1} = F_{t+1} + S_{t+1}              (2.3)
Equations 2.1-2.3. Dual-state, multi-rate model. Variables: F = Fast Process; S = Slow Process; A = Forgetting Rate; B = Learning Rate; SPE = Sensory Prediction Error; U = Motor Command; t = trial.
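To make the interplay of the two states concrete, the short Python sketch below simulates Equations 2.1-2.3 under a constant perturbation. The parameter values are hypothetical and chosen only so that the fast state learns and forgets quickly while the slow state learns and forgets slowly, as described above.

# Simulation of the dual-state, multi-rate model (Equations 2.1-2.3).
# Parameter values are hypothetical illustrations: the fast process has a high
# learning rate (B_F) but low retention (A_F); the slow process has the reverse.

A_F, B_F = 0.92, 0.20    # fast process: forgets quickly, learns quickly
A_S, B_S = 0.996, 0.02   # slow process: retains well, learns slowly
ROTATION = 30.0          # constant visuomotor perturbation (degrees)

F, S = 0.0, 0.0
for t in range(200):                 # 200 adaptation trials
    U = F + S                        # net adaptive output on this trial (Eq. 2.3)
    SPE = ROTATION - U               # sensory prediction error driving both states
    F = A_F * F + B_F * SPE          # fast state update (Eq. 2.1)
    S = A_S * S + B_S * SPE          # slow state update (Eq. 2.2)
    if t in (0, 19, 199):
        print(f"trial {t + 1:3d}: fast = {F:5.1f}, slow = {S:5.1f}, total = {F + S:5.1f}")

Consistent with the description above, the fast state dominates the output early and then decays, while the slow state accumulates gradually and accounts for most of the asymptotic adaptation, the component associated with long-term retention.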
The dual-state, multi-rate model can predict retention, savings, anterograde
interference, and spontaneous recovery for single motor adaptations (S. T. Albert &
Shadmehr, 2018; Ethier et al., 2008; Joiner & Smith, 2008; Sing & Smith, 2010; M. A.
Smith et al., 2006). However, this model cannot account for multiple motor adaptations.
Converging evidence suggests that multiple fast and slow processes may be more
accurate models of behavior, and the number of states can depend on whether there are multiple adaptations or longer adaptation time periods (Forano & Franklin, 2019; Inoue et
al., 2015; S. Kim et al., 2015; J. Y. Lee & Schweighofer, 2009).
2.2.3 Neural Mechanisms of Motor Adaptation
Motor adaptation largely depends on the cerebellum (Tseng et al., 2007). The
cerebellum uses sensory prediction errors to update an internal forward model and
improve performance. When the cerebellum is damaged, implicit mechanisms are
impaired (Butcher et al., 2017). Individuals with cerebellar degeneration are shown to
have difficulty adapting to perturbations and are only able to do so when given an explicit
aiming strategy (Taylor et al., 2010). While the cerebellum plays a critical role in motor
adaptation, cortical structures as well as the basal ganglia also contribute to the
adaptation of sensorimotor perturbations (Figure 2.1) (Krakauer et al., 2019; Rachael D.
Seidler et al., 2006). During the early stages of adaptation, structures such as the
prefrontal cortex (PFC; specifically the dorsolateral PFC), premotor cortex,
supplementary motor area, and basal ganglia are activated (Anguera et al., 2008;
Krakauer et al., 2004; McDougle et al., 2016; Rachael D. Seidler et al., 2006; Tzvi et al.,
2020). Similarly, the fast process has been found to be associated with structures such
as the PFC, the parietal cortex and the posterior part of the cerebellum (S. Kim et al.,
2015). The activation of frontal associative regions during early learning suggests that
explicit, cognitive mechanisms are engaged during this stage (Taylor & Ivry, 2013b). As
learning continues, activity over wide areas of the cerebellum decreases, and engagement
shifts from frontal associative to sensorimotor regions (Julien Doyon et al., 2009; Julien
Doyon & Benali, 2005; Imamizu et al., 2000; Taylor & Ivry, 2013b). Specifically, activity in
the premotor cortex, sensorimotor cortex, posterior parietal cortex, and deep cerebellar
nuclei increases during the later stages of adaptation (Della-Maggiore & Mcintosh, 2005;
Krakauer et al., 2004; Nezafat et al., 2001). The cortical shift in engagement from frontal
associative to sensorimotor regions as learning continues potentially reflects the idea that
early learning involves areas important for goal-selection and planning and late learning
involves areas important for movement specification and execution (Taylor & Ivry, 2013b).
As learning is consolidated and retained, the cerebellum and sensorimotor regions remain
engaged and are important for the formation of long-term motor memories (Julien Doyon
et al., 2009; Hadipour-Niktarash et al., 2007; Nezafat et al., 2001; Tzvi et al., 2020).
Figure 2.1. The theoretical framework of early and late learning in motor adaptation.
During the early stages of learning when errors are large, the cortico-striato-cerebellar
loop is recruited for reducing errors and increasing performance. As learning becomes
asymptotic and motor memory encoding takes place in the late stages of learning and
retention, activity shifts to the cortico-cerebellar loop. Figure adapted from Doyon et al.,
2009.
2.2.4 Motor Learning in HMD-VR
Immersive virtual reality using a head-mounted display (HMD-VR) has been
increasing in use for motor learning purposes (e.g., surgical training and motor
rehabilitation) (Huber et al., 2017; Levin, 2020). Recent developments in HMD-VR
technology have made these devices obtainable at low costs. Driving factors for using
HMD-VR in motor learning include the ability to replicate and even go beyond the real
world, allowing for researchers, instructors, and clinicians to have increased control and
individual personalization of the training environment (Levin, 2020). Applications of HMD-
VR are shown to be engaging and motivating (Laver et al., 2017; Levin & Demers, 2020;
Zimmerli et al., 2013) with promising results suggesting HMD-VR applications are
comparable or in some cases superior to conventional training environments (Devos et
al., 2009; Howard, 2017; Nemani et al., 2018). However, while several studies have
reported effective use of HMD-VR for motor learning purposes, there are also
reports on the limitations of these devices (Levac & Jovanovic, 2017; Massetti et al., 2018;
Müssgens & Ullén, 2015). Of particular importance is conflicting evidence about whether
the motor skills learned in HMD-VR will transfer to the real world (Levac et al., 2019), with
some studies finding successful context transfer from HMD-VR to the real world (A. Kim
et al., 2019; Petri et al., 2019), and other studies finding poor context transfer (Carlson et
al., 2015; Gavish et al., 2015; Kozak et al., 1993). Additionally, motor learning in HMD-
VR has been shown to rely on different learning processes compared to conventional
training environments. For example, visuomotor adaptation in HMD-VR has been found
to recruit greater explicit, cognitive strategies compared to a conventional computer
screen (Appendix A). This evidence suggests that motor learning in HMD-VR may rely
more on cognitive processes than motor learning in conventional training environments
which could affect the formation of motor memories.
2.3 Cognitive Processes in Motor Learning
Cognitive processes involved in motor learning can include working memory,
attention, and motor planning (T. D. Lee et al., 1994; Rachael D. Seidler & Carson, 2017).
Working memory is the temporary storage and management of information for later
processing (Zhang, 2019). There are thought to be two types of working memory: verbal
working memory and spatial working memory. Verbal working memory maintains
phonological information in an active state, and spatial working memory manipulates and maintains
spatial and visual information in an active state (De Beni et al., 2005). Verbal working
memory is thought to be primarily localized to the left hemisphere (e.g., left posterior
parietal cortex) and spatial working memory is thought to be primarily localized to the right
hemisphere (E. E. Smith & Jonides, 1997). Spatial working memory is often relied upon
in motor learning. Neural processes of spatial working memory are thought to include the
prefrontal cortex (PFC; specifically the right dorsolateral PFC), the premotor cortex
(specifically in the right hemisphere), the parietal cortex (specifically the posterior parietal
cortex; PPC), and the basal ganglia (Jonides et al., 1993; Rachael D. Seidler et al., 2006,
2012; E. E. Smith & Jonides, 1997; van Asselen et al., 2006). Attention is the process of
selectively concentrating on a particular object, action, or thought while managing and
ignoring competing perceivable information (Zhang, 2019). Neural processes of attention
are thought to include the anterior cingulate cortex and the parietal cortex (specifically the
PPC) (Jenkins et al., 1994). Motor planning is the integration of sensory information and
internal models in order to execute a series of appropriate movements (Peters et al.,
2015). Neural processes of planning are thought to include the supplementary motor area
(SMA), the pre-SMA, the premotor cortex, and the PPC (Brandi et al., 2014; Tanji, 2001;
Torres et al., 2013).
2.3.1 Working Memory
Working memory is a necessary cognitive process for motor learning. In long bouts
of motor skill learning (e.g., 2 hours), working memory is found to be maintained despite
self-reports of fatigue and increased perception of effort, whereas other cognitive
functions, such as motivation and visuomotor tracking abilities, are found to be
decreased. These findings suggest that working memory may be prioritized over other
cognitive processes for the learning of motor skills (Solianik et al., 2018). However, as
learning becomes more automatic, activity in the PFC decreases, suggesting that working
memory may not be as necessary for the later stages of motor learning (Jenkins et al.,
1994). Working memory capacity is also important in the acquisition and retention of
complex motor skill learning. For example, in a study examining complex movements in
volleyball, working memory capacity was found to be the best predictor for motor
performance (Bisagno & Morra, 2018). Similarly, in a study examining basketball
shooting, children with high working memory capacity displayed improved retention
compared to other children with low working memory capacity (Buszard et al., 2017).
These studies suggest that working memory capacity is important for the formation of
motor memories.
Working memory capacity is also shown to predict the rate of learning for
visuomotor adaptation. The neural correlates of spatial working memory are thought to
include the dorsolateral PFC (DLPFC), the premotor cortex, and the posterior parietal
cortex and have been shown to specifically be localized to the right hemisphere (Jonides
et al., 1993; E. E. Smith & Jonides, 1997). Neural correlates of early visuomotor
adaptation also include the right middle frontal gyrus, the right inferior parietal lobule, the
right middle temporal gyrus, bilateral premotor cortices, the supplementary motor area,
the medial cerebellum, and bilateral basal ganglia circuitries (Rachael D. Seidler et al.,
2006). In younger adults, the neural correlates during spatial working memory that overlap
with the neural correlates early in visuomotor adaptation are the right DLPFC and bilateral
inferior parietal lobules (Anguera et al., 2008). In younger adults, depleting spatial working
memory capacity negatively affects the rate of early, but not late, learning in visuomotor
adaptation (Anguera et al., 2012). However, enhancing working memory capacity through
training does not increase the rate of learning in visuomotor adaptation, suggesting there
may be other limiting factors affecting this learning rate. In older adults, the neural
correlates during spatial working memory do not overlap with the neural correlates in
either early or late visuomotor adaptation (Anguera et al., 2011), suggesting that a failure
to engage spatial working memory early in visuomotor adaptation may be related to the
adaptation deficits seen in older adults. Additionally, individuals post-stroke with
visuospatial working memory deficits are unable to maintain spatial information in memory
for a long period of time (van Asselen et al., 2006). Visuospatial working memory deficits
have been shown to result in a decreased initial rate of learning in visuomotor adaptation
(Schweighofer et al., 2011), further supporting the important role that spatial working memory and related neural correlates have in visuomotor adaptation, especially in the
early, explicitly driven stage.
2.3.2 Attention and Motor Planning
Attention is used to monitor movements during the early stages of motor learning.
Novice learners acquiring complex motor skills, such as putting in golf or kicking in soccer,
show an increase in performance when using online attentional mechanisms to control
movements. However, when irrelevant distractions are present during the early stages of
learning, performance is shown to be negatively affected (Beilock et al., 2002).
Additionally, performance also decreases when novice learners are asked to perform a
motor skill under fast conditions, which tends to restrict attentional monitoring (Beilock et
al., 2004). When motor skills become more automatic, there is less of a demand for
attentional monitoring (Shiffrin & Schneider, 1977). This suggests that a shift in attentional
cognitive resources needed occurs as motor skills are acquired. Moreover, if there is too
much attentional monitoring involved in automatic movements, performance can
decrease (Beilock et al., 2002, 2004; Overney et al., 2008).
Attention and motor planning have also been shown to affect the rate of learning in
visuomotor adaptation (Anguera et al., 2012). Attention and working memory are thought
to share limited neural resources (Feng et al., 2012; Todd & Marois, 2004). Specifically,
the parietal cortex is thought to be a shared capacity-limited neural mechanism (Fusser
et al., 2011). Divided attention during early learning of visuomotor adaptation can disrupt
learning (Eversheim & Bock, 2001). Specifically, divided attention has been shown to
impair trial-by-trial learning of force-field adaptation, but not feedback corrections during
movement (Taylor & Thoroughman, 2007). While patients with right parietal cortex
damage exhibit normal visuomotor adaptation and aftereffects, patients with left parietal
cortex damage exhibit impaired accuracy when adapting to a perturbation as well as limited
aftereffects (Mutha et al., 2011). While the left parietal cortex has been shown to be
involved in attention (Rushworth et al., 2001), it has also been shown to contribute to the
planning of movement sequences in either arm (Haaland et al., 2004). Planning is also
thought to play an important role in the early stages of visuomotor adaptation. When
preparation time is limited (i.e., less time to generate a motor plan) in trials early in
adaptation, there is an increase in errors compared to trials where there are no restrictions
on preparation time (Haith et al., 2015). However, as learning continues, performance
between trials with and without limited preparation time is similar, suggesting that
planning is not as necessary late in visuomotor adaptation.
2.3.3 Cognitive Load
Cognitive load is part of a theoretical construct which suggests that novel
information (e.g., novel visuomotor mapping) can be encoded in long-term memory when
the load on working memory is within working memory limits (Figure 2.2A) (Orru & Longo,
2019). There are suggested to be three types of cognitive load that contribute to working memory demands: intrinsic load, extraneous load, and germane load. Intrinsic load refers to the cognitive resources allocated to essential elements of learning a task. Extraneous load refers to the cognitive resources allocated to nonessential elements of learning a task. Germane load refers to the cognitive resources allocated to forming and updating a schema (e.g., a motor memory) for long-term memory. Learning is suggested to result when germane load is
optimized, which can occur when intrinsic load matches the expertise of the learner and
extraneous load is minimized. The limitations of working memory constrain the amount of
the three types of cognitive load such that when cognitive load exceeds the limits of
working memory, cognitive overload occurs, and learning is impaired (Figure 2.2B).
However, when intrinsic load matches the learner and extraneous load decreases,
germane load increases and results in learning (Figure 2.2C).
Figure 2.2. (A) A theoretical representation of memory architecture and the encoding
and retrieval of motor memories. While long-term memory is thought to be unlimited,
working memory is limited. Retrieval of a motor memory can occur in either the same
context as training (long-term retention) or a separate context from training (context
transfer). (B) Increased extraneous load results in cognitive overload. (C) Decreased extraneous load is thought to result in increased germane load. Figure adapted from Orru
& Longo, 2019.
2.3.4 Cognitive Load in HMD-VR
One way to objectively measure cognitive load is by using a dual-task probe
paradigm (Goh et al., 2014; Jang et al., 2005). In dual-task probe paradigms, a secondary
task (e.g., a reaction time (RT) task) is given in parallel to the primary learning task and
cognitive load is measured by the performance of the secondary task (e.g., RT) (Park &
Brünken, 2017). Converging evidence suggests that cognitive load, measured by the RT
of a secondary task, increases during highly stressful and complex motor tasks in HMD-
VR compared to a conventional computer screen (CS) (Baumeister et al., 2017;
Frederiksen et al., 2020). This increase in cognitive load during complex motor tasks is
found to be related to a decrease in motor performance in HMD-VR compared to CS
(Levac et al., 2019). A potential explanation for this cognitive load increase is an increase
in extraneous load as a result of factors related to wearing the HMD-VR, such as the
HMD-VR field of view or distractors in the HMD-VR environment (Baumeister et al., 2017;
Frederiksen et al., 2020). One potential mechanism facilitating this increase is that vision-
for-action in HMD-VR may involve holistic processing. Holistic processing means that
information unrelated to the task, such as irrelevant distractors in the HMD-VR
environment, may also be processed during visuomotor control. As a failure to filter out
irrelevant information during movement has been shown to result in increased cognitive
load, actions in HMD-VR involving holistic processing could potentially explain an
increase in cognitive load (Emami & Chau, 2020; D. J. Harris et al., 2019; Jost & Mayr,
2016; Thoma & Henson, 2011).
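As a concrete illustration of how a dual-task probe yields a cognitive load index, the short R sketch below uses invented reaction times; one common way to express the index is the slowing of secondary-task RT under dual-task conditions relative to a single-task baseline, though the exact computation in the cited studies may differ.

```r
# Hypothetical illustration of a dual-task probe analysis: cognitive load is
# indexed by the slowing of secondary-task reaction times (RTs) under dual-task
# conditions relative to a single-task baseline. All values are invented.
single_task_rt <- c(310, 295, 330, 305)  # baseline RTs (ms), one per participant
dual_task_rt   <- c(420, 380, 510, 450)  # RTs while also performing the motor task

rt_cost     <- dual_task_rt - single_task_rt   # absolute slowing (ms)
rt_cost_pct <- 100 * rt_cost / single_task_rt  # relative slowing (%)

data.frame(participant = 1:4, rt_cost, rt_cost_pct)
```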
2.4 Visual Processing for Action
There are thought to be two interacting visual streams for higher level processing
of visual information: a ventral stream important for processing perception and a dorsal
stream important for processing action (Milner, 2017; Rossetti et al., 2017). The ventral
stream is thought to project from the primary visual area (V1) to the inferior temporal
cortex and the dorsal stream is thought to project from V1 to the posterior parietal cortex
(Figure 2.3) (Creem & Proffitt, 2001; Goodale, 2014; Goodale et al., 2005; Milner et al.,
1991). In addition to the separate but interacting neural representations, the computations
for vision-for-perception and vision-for-action are also thought to be different. Vision-for-
perception is suggested to rely on holistic processing, meaning that individual features
and their spatial relations are perceived as a combined whole, whereas vision-for-action
is suggested to rely on analytical processing, in which only the relevant features are
considered without being influenced by other, irrelevant information (Ganel & Goodale,
2003; Goodale, 2014).
Figure 2.3. Pathways involved in visual processing for perception (red) and in visual
processing for action (blue). AIP: anterior intraparietal cortex; FEF: frontal eye field; IT:
inferior temporal cortex; LIP: lateral intraparietal cortex; MIP: medial intraparietal cortex;
MST: medial superior temporal cortex; MT: middle temporal cortex; PF: prefrontal
cortex; PMd, PMv: dorsal and ventral premotor cortices; TEO: occipitotemporal cortex;
VIP: ventral intraparietal cortex; V1-V4: areas of visual cortex. Figure adapted from
Kandel et al., 2012.
The perception–action distinction has largely been examined through behavioral
experiments involving a psychophysical principle called Weber’s law and a selective
attention paradigm called Garner interference. Weber's law states that the minimum
amount of change needed to detect a difference between two objects, called the just noticeable difference
(JND), depends on the magnitude of the object. That is, smaller objects have
smaller JNDs whereas larger objects have larger JNDs. In experiments using
Weber’s law, either grasps or estimations of perceptual size are made for objects of
varying lengths, and the JND is used as the outcome measure (e.g., Ganel et al., 2008).
In these experiments, violation of Weber’s law suggests that actions are processed in an
efficient manner and are not susceptible to perceptual effects. Garner interference is a
selective attention paradigm that is used to investigate the psychological constructs of
holistic and analytical processing. A Garner interference effect occurs when irrelevant object
dimensions interfere with the processing of relevant object dimensions. In experiments
using Garner interference, either grasps or estimations of perceptual size are made for
rectangular objects in two separate conditions: a ‘baseline’ condition where the width
(relevant dimension) varies while the length (irrelevant dimension) remains constant, and
a ‘filtering’ condition where both the width and length vary. Greater reaction times and
response variability in the ‘filtering’ condition compared to the ‘baseline’ condition indicate
a Garner interference effect, and are used as the outcome measure (e.g., Ganel &
Goodale, 2003). In these experiments, the presence of a Garner interference effect suggests
holistic processing of a single dimension relative to the object's other dimensions,
whereas its absence suggests analytical processing.
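For concreteness, the R sketch below uses invented data to illustrate how the two outcome measures described above are typically computed: a Weber fraction (JND divided by object size, roughly constant when Weber's law holds) and a Garner interference score (filtering-condition RT minus baseline-condition RT).

```r
# Invented data illustrating the two outcome measures described above.

# Weber's law: under adherence, the JND scales with object size, so the
# Weber fraction (JND / size) stays roughly constant across sizes.
object_size_mm <- c(20, 40, 60, 80)
jnd_mm         <- c(1.1, 2.0, 3.1, 3.9)
round(jnd_mm / object_size_mm, 3)   # Weber fractions

# Garner interference: longer responses in the 'filtering' condition (width
# and length both vary) than in the 'baseline' condition (only width varies)
# indicate interference from the irrelevant dimension.
baseline_rt_ms  <- c(410, 395, 430, 405)   # one mean RT per participant
filtering_rt_ms <- c(455, 400, 470, 440)
filtering_rt_ms - baseline_rt_ms           # per-participant interference effect
t.test(filtering_rt_ms, baseline_rt_ms, paired = TRUE)
```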
2.4.1 Visual Processing of Real 3D and 2D Objects
The perception–action distinction is demonstrated with real three-dimensional (3D)
objects (Goodale, 2011, 2014). Vision-for-perception for real 3D objects has been shown
to adhere to Weber's law, with the JND increasing linearly with object size (Ganel et al.,
2008; Heath et al., 2017; Heath & Manzone, 2017). However, vision-for-action with real
3D objects does not adhere to Weber's law: the JND is unaffected by changes in
object size (Ayala et al., 2018; Ganel, 2015; Ganel et al., 2008, 2014; Heath et al., 2017).
Similarly, a Garner interference effect is observed when making speeded judgements
(vision-for-perception) of real 3D objects. However, a Garner interference effect is not
observed when performing speeded grasps (vision-for-action) towards real 3D objects
(Ganel & Goodale, 2003, 2014).
Separate from real 3D objects, vision-for-action directed towards virtual two-
dimensional (2D) objects is shown to be susceptible to perceptual effects (Ganel et al.,
2020). Specifically, vision-for-action with 2D objects has been shown to adhere to
Weber's law (Holmes & Heath, 2013; Ozana, Namdar, et al., 2020; Ozana & Ganel, 2018,
2019a, 2019b). Adherence to Weber's law suggests that actions are susceptible to
perceptual effects. Similarly, vision-for-action with 2D objects has been shown to
generate a Garner interference effect (Freud & Ganel, 2015; Ganel et al., 2020). Actions
producing a Garner interference effect suggest that actions are influenced by holistic
processing.
2.4.2 Visual Processing in HMD-VR
Movements in HMD-VR are performed in 3D; however, the stereoscopic
presentation is in 2D. One consequence of this is that distances in HMD-VR are often
underestimated and perceived as seemingly flat (Kelly et al., 2017; Phillips et al., 2009).
Given that scenes and objects in HMD-VR can be perceived as flatter than in the real
world, this may mean that visual processing for movement in HMD-VR may also be
susceptible to perceptual interferences (D. J. Harris et al., 2019; Kelly et al., 2017; Phillips
et al., 2009). Supporting this, vision-for-action towards virtual 3D objects has been shown
to adhere to Weber's law (Ozana et al. 2018, 2020a). This suggests that objects
in HMD-VR may be processed more like 2D objects than 3D ones.
Chapter 3: Context Transfer of Motor Skills in HMD-VR
This chapter is adapted from:
Juliano J.M. & Liew S.L. (2020) Transfer of motor skill between virtual reality viewed using
a head-mounted display and conventional screen environments. Journal of
NeuroEngineering and Rehabilitation, 17(1), 1-13.
3.1 Abstract
Virtual reality viewed using a head-mounted display (HMD-VR) has the potential
to be a useful tool for motor learning and rehabilitation. However, when developing tools
for these purposes, it is important to design applications that will effectively transfer to the
real world. Therefore, it is essential to understand whether motor skills transfer between
HMD-VR and conventional environments and what factors predict transfer. We
randomized 70 healthy participants into two groups. Both groups trained on a well-
established measure of motor skill acquisition, the Sequential Visual Isometric Pinch Task
(SVIPT), either in HMD-VR or in a conventional environment (i.e., computer screen). We
then tested whether the motor skills transferred from HMD-VR to the computer screen,
and vice versa. After the completion of the experiment, participants responded to
questions relating to their presence in their respective training environment, age, gender,
video game use, and previous HMD-VR experience. Using multivariate and univariate
linear regression, we then examined whether any personal factors from the
questionnaires predicted individual differences in motor skill transfer between
environments. Our results suggest that motor skill acquisition of this task occurs at the
same rate in both HMD-VR and conventional screen environments. However, the motor
skills acquired in HMD-VR did not transfer to the screen environment. While this decrease
in motor skill performance when moving to the screen environment was not significantly
predicted by self-reported factors, there were trends for correlations with presence and
previous HMD-VR experience. Conversely, motor skills acquired in a conventional screen
environment not only transferred but improved in HMD-VR, and this increase in motor
skill performance could be predicted by self-reported factors of presence, gender, age,
and video game use. These findings suggest that personal factors may predict who is
likely to have better transfer of motor skill to and from HMD-VR. Future work should
examine whether these and other predictors (i.e., additional personal factors such as
immersive tendencies and task-specific factors such as fidelity or feedback) also apply to
motor skill transfer from HMD-VR to more dynamic physical environments.
3.2 Introduction
The use of virtual reality (VR) in rehabilitation has been growing exponentially over
recent years (Cano Porras et al., 2018; Keshner et al., 2019). Clinical applications of VR
have been shown to be engaging and motivating (Reid, 2002; Zimmerli et al., 2013) with
promising results suggesting VR interventions are comparable (Nemani et al., 2018) or in
some cases superior (Devos et al., 2009; Howard, 2017) to conventional rehabilitation.
However, while a number of studies have reported benefits of using VR for cognitive and
motor rehabilitation, there are also reports on the limitations of using these devices for
clinical applications (Laver et al., 2017; Levin, Weiss, et al., 2015). In particular, some
studies have shown that VR interventions are not effective at improving motor
performance in the real world due to a lack of motor skill transfer (i.e., the application of
a motor skill in a novel task or environment (Müssgens & Ullén, 2015)) (Levac &
Jovanovic, 2017; Massetti et al., 2018).
Concerns about motor skill transfer from virtual to real environments are even
greater when specifically considering the use of VR viewed using a head-mounted display
(HMD-VR). HMD-VR provides a more immersive experience compared to conventional
environments (e.g., computer screens) and results in increased levels of presence (i.e.,
the illusion of actually being present in the virtual environment) and embodiment (i.e., the
perceptual ownership of a virtual body in a virtual space) (Osimo et al., 2015; Slater &
Sanchez-Vives, 2016) that modulate behavior (Kilteni, Normand, et al., 2012) and impact
performance on motor learning and rehabilitation applications (e.g., gait, balance,
neurofeedback tasks) (Iruthayarajah et al., 2017; Juliano et al., 2019; Tieri et al., 2018).
Additionally, motor learning in HMD-VR (e.g., upper extremity visuomotor adaptation) has
been shown to rely on different learning processes compared to a conventional screen
environment (J. M. Anglin et al., 2017). Given the differences in immersive experiences
and learning processes between HMD-VR and conventional environments, it can be
assumed that individuals may experience these environments as separate contexts.
Studies have found the context of the training environment to affect the transfer of motor
skills (Taylor & Ivry, 2013a), where motor performance may decrease when testing occurs
in an environment different from training (S. M. Smith & Vela, 2001). However, only a
small number of studies have specifically explored motor skill transfer from an HMD-
VR training environment to a more conventional environment (e.g., computer screen or
real world) (Carlson et al., 2015; Gavish et al., 2015; A. Kim et al., 2019; Kozak et al.,
1993; Petri et al., 2019). Among these studies, there are again conflicting results, with
some studies finding successful motor skill transfer from HMD-VR to the real world (A.
Kim et al., 2019; Petri et al., 2019), and others not (Carlson et al., 2015; Gavish et al.,
2015; Kozak et al., 1993).
There is also large interindividual variability within the results, and this variability
suggests there may be particular tasks or particular individuals that will be more
successful in transferring HMD-VR motor skills to the real world. The task-related and
personal factors that mediate learning and transfer from HMD-VR environments
should be examined in order to understand what makes HMD-VR interventions effective.
One advantage of HMD-VR over conventional screen environments is the ability to
realistically simulate the real world which allows for greater task specificity (Gerig et al.,
2018). Task-related factors such as fidelity (i.e., imitation of the real environment) and
dimensionality (i.e., matching dimensions between virtual and real environments)
between HMD-VR and the real world have been shown to influence lower extremity motor
performance (A. Kim et al., 2018) and have been suggested to influence both lower and
upper extremity motor skill transfer (Fluet & Deutsch, 2013; Levac
et al., 2019). Individual differences in personal factors such as gender, age, video game
experience, prior technical computer literacy, and computer efficacy seemed to influence
transfer from HMD-VR to the real world in studies examining the transfer of spatial
knowledge acquired in an HMD-VR environment (Carlson et al., 2015; Waller et al.,
1998). However, individual differences in both task-related and personal factors have
not been extensively examined in HMD-VR motor skill transfer. We begin to address this
gap by examining whether individual personal factors facilitate better transfer of upper
extremity motor skills acquired in HMD-VR to a conventional screen environment.
In the current study, we examined: (1) whether transfer of upper extremity motor
skills occurs between HMD-VR and conventional screen environments, and (2) what
personal factors predict transfer between environments. Given the variability of motor skill
learning and transfer in previous studies (Carlson et al., 2015; Gavish et al., 2015; A. Kim
et al., 2019; Kozak et al., 1993; Levac et al., 2019; Petri et al., 2019), we hypothesized
that individual motor performance would vary after transfer to a novel environment, and
that this variability could be predicted by individual differences in variables such as
presence in the training environment, prior experience with HMD-VR, or non-VR video
games.
3.3 Methods
3.3.1 Participants
Seventy-four healthy adults were recruited. Participants were randomized into two
groups (Train-HMD-VR, Train-Screen). Three participants in the Train-Screen group were
excluded from the analysis as a result of performing all trials in the Baseline training block
incorrectly (see Analyses) and one participant in the Train-HMD-VR group was excluded
from the analysis as a result of being an outlier, which was defined as being beyond three
standard deviations from the group mean motor skill in at least one of the blocks. This
resulted in a total of seventy participants (53 females/16 males/1 other, aged: M = 25.81,
SD = 4.71) with thirty-five participants in each group included in the analysis. A statistical
power analysis was performed for sample size estimation based on data from a pilot study
of this work (N = 12) (Julia M. Anglin et al., 2017). The effect size in this study was d =
0.38. With an alpha = 0.05 and power = 0.60, the projected sample size needed with this
effect size was approximately N = 35. Eligibility criteria were being healthy, self-reported
right-handed, and having no previous experience with the motor skill task (see
Experimental design). Written informed consent was obtained from all subjects. The
experimental protocol was approved by the University of Southern California Institutional
Review Board and performed in accordance with the 1964 Declaration of Helsinki.
3.3.2 Experimental Design
Figure 3.1A provides an overview of the experimental design. The experiment
consisted of training and testing blocks in which participants completed a modified version
of the Sequential Visual Isometric Pinch Task (SVIPT) (Reis et al., 2009). In this task,
participants were instructed to apply varying degrees of isometric force between their
thumb and index finger to a small pinch force sensor (Futek Pinch Sensor FSH01465;
Futek IPM FSH03633; Figure 3.1B) to move a cursor between numbered colored gates
as quickly and accurately as possible (Figure 3.1C). A small circle at the bottom of the
screen changed from red to green to indicate the start of each trial. For each trial, no time
limit was given and trial completion time was recorded. At the end of each trial, the small
circle at the bottom of the screen changed from green to red and participants received
auditory feedback (a pleasant “ding” if the cursor correctly entered all the gates or an
unpleasant “buzz” if the cursor missed one or more of the gates). A two second time
interval was given between each trial.
Figure 3.1. Experimental paradigm. (A) Experimental design. (B) Pinch force between
the thumb and index finger was applied to a small force transducer to move the cursor
in the SVIPT. (C) Sequential Visual Isometric Pinch Task (SVIPT) display. Participants
were asked to apply force to the force transducer, which translated into the movement
of a small black cursor (shown at the home position in the white bar) moving horizontally
to the right in the environment. The cursor moved left by reducing force. Instructions
were to move the cursor between the gates, in order from 1 to 5, as quickly and
accurately as possible, without over- or under-shooting any of the gates.
Participants completed 4 training blocks (Training Blocks 1-4) consisting of 30
SVIPT trials either in an HMD-VR (Figure 3.2A; Train-HMD-VR) environment or on a
computer screen (Figure 3.2B; Train-Screen). Block 1 was considered the Baseline
training block for each group. After completion of the training blocks, all participants
completed 2 counter-balanced testing blocks consisting of 20 SVIPT trials in an HMD-VR
environment and on a computer screen. We defined the testing block that matched the
training condition as the “Acquired Skill” testing block (e.g., train in HMD-VR, test in HMD-
VR). There was no difference in either group between the last block of training (Block 4)
and the Acquired Skill testing block; thus, the Acquired Skill testing block was used as a
proxy for total amount of motor skill within the assigned training environment. We defined
the testing block that was different from the training condition as the “Transfer” block (e.g.,
train in HMD-VR, test in Screen). Importantly, there was no difference in the order of
testing block completion for either group. Lastly, after completion of both training and
testing blocks, participants were asked to complete three questionnaires (see
Questionnaires).
Figure 3.2. Training and testing environments. Participants were trained on a
motor skill task in either the HMD-VR or Screen environment and were then
tested on the same task in both environments. (A) HMD-VR environment; the
stimulus shown in the HMD-VR display is also shown on the computer screen.
(B) Screen environment.
3.3.3 Training and Testing Environments
The environments for all blocks were designed using the game engine
development tool, Unity 3D (Version 5.6.6). In the blocks where participants were in the
HMD-VR environment, participants performed the task in a head-mounted display
(Oculus Rift DK2). In the blocks where participants were in the Screen environment,
participants performed the task on a 17.3 inch, 1920x1080 pixel resolution laptop
computer (ASUS ROG G751JY-DH71). The HMD-VR environment was created based on a
fixed coordinate system that did not depend on the participant’s head position. All
participants were physically seated in the same location for all blocks and used the same
force transducer for the task. The only difference between HMD-VR and Screen blocks
was that participants put on the HMD-VR headset for the HMD-VR blocks.
3.3.4 Questionnaires
Participants were asked to complete two Likert-scale questionnaires regarding
their reaction to the training environment; the first questionnaire related to participants’
simulator sickness and the second questionnaire related to participants’ level of presence.
The first questionnaire was the simulator sickness questionnaire, adapted from Kennedy,
Lane, Berbaum, & Lilienthal (1993) (Kennedy et al., 1993), and consisted of a series of
questions to gauge participant sickness level and was given both before and after the
task. Questions were collapsed along four main themes: nausea, oculomotor reactions,
disorientation, and overall simulator sickness. The second questionnaire was the
presence questionnaire, which was adapted from Witmer & Singer (1998) (Witmer &
Singer, 1998) and revised by the UQO Cyberpsychology Lab (2004). It consisted of a
series of questions to gauge the participant’s sense of presence in the training
environment. Questions were measured along five main themes: realism, possibility to
act, quality of interface, possibility to examine, and self-evaluation of performance.
Participants were also asked questions regarding their gender, age, whether or not they
played video games, and whether or not they had previous experience using HMD-VR.
Both these questions and the presence questionnaire were administered at the end of the
experiment.
3.3.5 Analyses
All analyses were completed in R (Version 3.5.3) using R Studio (Version 1.1.423).
We assessed the normality of each variable using skewness and kurtosis of the
distribution. Given our sample size, we considered a variable with an absolute skewness
or kurtosis value of less than 1.96 as normally distributed (H.-Y. Kim, 2013). Only
simulator sickness variables were considered non-normally distributed (see Simulator
sickness, presence, and subjective measures). Significance was defined as p < 0.05
unless corrected for multiple comparisons using a Bonferroni correction; these
corrections are specified for each analysis.
3.3.5.1 Simulator Sickness, Presence, and Subjective Measures
To examine any differences between groups in the variables calculated based on
simulator sickness questionnaire responses (i.e., nausea, oculomotor reactions,
disorientation, and overall simulator sickness), we used a Mann-Whitney U test as either
skewness or kurtosis values were greater than 1.96 for each variable. To examine any
differences between groups in the variables calculated based on presence and subjective
measures questionnaire responses, we used an unpaired t-test for quantitative variables
(i.e., realism, possibility to act, quality of interface, possibility to examine, self-evaluation
of performance, age) and a chi-squared test for qualitative variables (i.e., gender, video
game use, and previous HMD-VR experience). Significance was defined as p < 0.0125
for the four simulator sickness variables and defined as p < 0.0055 for the nine presence
and subjective variables.
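A hypothetical R sketch of these analysis choices is given below; the data frame `df` and its columns are invented for the example, and e1071 is one package providing skewness and kurtosis functions (the thesis does not specify which package was used).

```r
# Hypothetical sketch of the normality check and group comparisons described
# above; the data frame `df` and its columns are invented for the example.
library(e1071)  # one package providing skewness() and kurtosis()

is_normal <- function(x) abs(skewness(x)) < 1.96 && abs(kurtosis(x)) < 1.96

compare_groups <- function(x, group, alpha = 0.05, n_tests = 1) {
  # t-test if approximately normal, otherwise Mann-Whitney U (Wilcoxon rank-sum)
  test <- if (is_normal(x)) t.test(x ~ group) else wilcox.test(x ~ group)
  list(p.value = test$p.value,
       significant = test$p.value < alpha / n_tests)  # Bonferroni correction
}

# e.g., four simulator sickness variables -> alpha / 4 = 0.0125:
# compare_groups(df$nausea, df$group, n_tests = 4)
# Qualitative variables (gender, video game use, previous HMD-VR experience)
# would instead use chisq.test(table(df$gender, df$group)).
```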
3.3.5.2 Motor Skill Acquisition and Motor Skill Transfer
Motor skill was calculated based on a formula first presented in Reis et al. 2009
(Reis et al., 2009), which measures the ratio of speed to accuracy over the trials in each
block. Motor skill for each block is calculated as:
Motor Skill = ln[ (1 − error rate) / (error rate × (ln(duration))^b) ],
where error rate and duration are averaged over trials in each block and b is a free
parameter that equals 5.424 (Reis et al., 2009). If error rate = 1 for a given block (i.e., all
trials were incorrect), the resulting motor skill calculation would be undefined. For each
group, we assessed motor skill acquisition and motor skill transfer; motor skill acquisition
is the increased performance in the trained environment and motor skill transfer is the
maintained performance in the untrained environment after training. To calculate motor
skill acquisition, we compared motor skill from the Baseline training block to the Acquired
Skill testing block for each group. Additionally, we compared the motor skill on the
Baseline training block to the Transfer testing block to assess whether training in one
environment had an effect on motor skill performance in the other environment. To
calculate motor skill transfer we compared the Acquired Skill testing block to the Transfer
testing block for each group. A paired t-test was used for these within-group comparisons
with significance defined as p < 0.025 since comparisons were made for each group.
Lastly, to quantify individual motor skill transfer, we took the difference in motor skill
between the Transfer testing block and the Acquired Skill testing block across individuals
and compared overall motor skill transfer between the two groups; an unpaired t-test was
used for this between group comparison.
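As an illustration of this skill measure, the minimal R sketch below computes block-level motor skill from invented trial data, using b = 5.424 as reported above and the formula as reconstructed above; note that the units used for duration affect the absolute skill values.

```r
# Minimal sketch of the block-level motor skill measure reconstructed above.
# Trial data are invented; b = 5.424 as in Reis et al. (2009). The measure is
# undefined when error rate equals 0 or 1.
motor_skill <- function(errors, durations, b = 5.424) {
  error_rate <- mean(errors)      # proportion of trials with >= 1 missed gate
  duration   <- mean(durations)   # mean trial completion time
  log((1 - error_rate) / (error_rate * log(duration)^b))
}

# Example block of 30 trials: 10 error trials, mean duration around 3 time units
set.seed(1)
errors    <- rep(c(1, 0), times = c(10, 20))
durations <- runif(30, min = 2.5, max = 3.5)
motor_skill(errors, durations)
```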
3.3.5.3 Identifying Individual Factors that Predict Motor Skill Transfer
Finally, to examine which individual factors predicted motor skill transfer, we
considered the nine presence and subjective variables in a multivariate linear regression
model (i.e., realism, possibility to act, quality of interface, possibility to examine, self-
evaluation of performance, age, gender, video game use, and previous HMD-VR
experience) for each group. To identify variables that strongly predicted motor skill
transfer, we used the regularization technique lasso (Tibshirani, 1996) with 10-fold cross-
validation, which shrinks some coefficients and sets others to zero. Shrinking coefficient
estimates through lasso can reduce the variance at the cost of a small increase in bias
(James et al., 2013) and has been suggested for datasets with a similar sample size to
predictor ratio (Kirpich et al., 2018; Lu & Petkova, 2014). Specifically, we trained a lasso
model with cross-validation on 75% of the dataset using the glmnet R function (Friedman
et al., 2010). Then, using the tuning parameter lambda that produced the minimum mean
square error (MSE), we calculated the prediction error on the remaining 25% of the
dataset and refit the lasso model using the full dataset. This resulted in a sparse linear
model that is more interpretable and only includes a subset of the variables included in
the initial linear model. Variance inflation factor (VIF) was calculated for each predictor to
check for multicollinearity; we considered a VIF value of less than 3.3 as indicating no
problematic multicollinearity (Kock & Lynn, 2012). For exploratory purposes, we then
individually examined the variables in a univariate linear regression model to determine
whether any variables on their own could explain motor skill transfer. Qualitative
predictors (i.e., gender, video game use, and previous HMD-VR experience) in both
multivariate and univariate linear regression models were encoded as dummy variables.
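A rough R sketch of this variable selection procedure is shown below, using the glmnet package cited above and synthetic stand-in data (the simulated predictors and outcome are hypothetical, not the study data); car::vif is one common way to compute variance inflation factors, though the thesis does not specify the function used.

```r
# Rough sketch of the lasso-based variable selection described above, using
# synthetic stand-in data (the real predictors were the nine questionnaire
# variables with dummy-coded qualitative factors).
library(glmnet)
library(car)  # vif() is one common way to compute variance inflation factors

set.seed(42)
n <- 35
predictors <- matrix(rnorm(n * 9), nrow = n,
                     dimnames = list(NULL, paste0("x", 1:9)))
transfer <- as.numeric(predictors %*% c(0.5, -0.3, rep(0, 7)) + rnorm(n))

# 10-fold cross-validated lasso on 75% of the data
train_idx <- sample(seq_len(n), size = floor(0.75 * n))
cv_fit <- cv.glmnet(predictors[train_idx, ], transfer[train_idx],
                    alpha = 1, nfolds = 10)

# Prediction error (MSE) on the held-out 25% using the lambda minimizing CV error
pred <- predict(cv_fit, newx = predictors[-train_idx, ], s = "lambda.min")
mean((transfer[-train_idx] - pred)^2)

# Refit on the full dataset; nonzero coefficients define the sparse linear model
full_fit <- glmnet(predictors, transfer, alpha = 1, lambda = cv_fit$lambda.min)
coef(full_fit)

# Check multicollinearity among the predictors in an ordinary linear model
vif(lm(transfer ~ ., data = data.frame(transfer = transfer, predictors)))
```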
3.4 Results
3.4.1 No Differences in Simulator Sickness, Presence, and Subjective
Measures Between Environments
To assess differences in simulator sickness level between training environments,
we compared scores of nausea, oculomotor reactions, disorientation, and overall
simulator sickness between groups and found no significant difference for each of the
measures (Supplementary Table 3.S1), suggesting that this type of motor skill training
does not produce any additional simulator sickness side effects in HMD-VR compared to
what is experienced when training on a computer screen. Additionally, we compared each
of the nine prediction variables (realism, possibility to act, quality of interface, possibility
to examine, self-evaluation of performance, age, gender, video game use, and previous
HMD-VR experience) and found no significant differences between the two groups
(Supplementary Table 3.S2).
3.4.2 No Differences in Motor Skill Acquisition Between Training
Environments
To assess initial and end of training performance between training environments,
we compared motor skill between groups at the Baseline training block (Block 1) and the
Last training block (Block 4). There were no differences between groups in Baseline
training blocks (t(34) = 0.48, p = 0.6319; Train-HMD-VR: M = -4.83, SD = 1.12; Train-
Screen: M = -4.95, SD = 1.13) and in the Last training blocks (t(34) = 0.24, p = 0.8113;
Train-HMD-VR: M = -2.93, SD = 1.06; Train-Screen: M = -3.00, SD = 0.90). To compute
individual participant acquisition rates, we applied a linear-log regression to motor
skill across the four training blocks (Benoit, 2011). We found similar acquisition rates (i.e.,
slopes from regression model) between the two groups (t(34) = -0.47, p = 0.6386; Train-
HMD-VR: M = 1.36, SD = 0.84; Train-Screen: M = 1.43, SD = 0.56), suggesting that motor
skill acquisition occurred at a similar rate across HMD-VR and Screen groups.
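For illustration, a minimal R sketch of this per-participant estimate is given below, with invented skill values; "linear-log" is taken here to mean regressing motor skill on the natural log of block number, with the slope serving as the acquisition rate.

```r
# Minimal sketch of a per-participant acquisition rate: motor skill regressed
# on log(block number); the slope is the acquisition rate. Skill values invented.
block <- 1:4
skill <- c(-4.9, -3.9, -3.3, -3.0)

fit <- lm(skill ~ log(block))
unname(coef(fit)["log(block)"])  # acquisition rate (slope)
```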
3.4.3 Motor Skill Acquisition and Motor Skill Transfer
3.4.3.1 Motor Skill Acquisition Occurs in Both Environments
To ensure that motor skill acquisition occurred in both environments, we compared
the motor skill between the Baseline training block and the Acquired Skill testing block for
the Train-HMD-VR group and for the Train-Screen group separately. On average, we
found motor skill acquisition occurred after training in HMD-VR (Train-HMD-VR: t(34) = -
11.42, p < 0.0001; Baseline (Block 1): M = -4.83, SD = 1.12, Acquired Skill (HMD-VR):
M = -2.88, SD = 1.14; Figure 3.3A) and after training on a computer screen (Train-Screen:
t(34) = -9.68, p < 0.0001; Baseline (Block 1): M = -4.95, SD = 1.13, Acquired Skill
(Screen): M = -3.14, SD = 0.92; Figure 3.3B). This suggests that motor skill acquisition
on an isometric pinch force task can occur both in HMD-VR as well as on a more
conventional screen environment.
To assess whether training in HMD-VR had an effect on motor skill performance
on a computer screen, we compared the motor skills on the Baseline training block to the
Transfer testing block for the Train-HMD-VR group. We found a significant difference in
performance between the HMD-VR Baseline training block (M = -4.83, SD = 1.12) and
the computer screen Transfer testing block (M = -3.20, SD = 0.96; t(34) = -9.12; p <
0.0001; Figure 3.3A), suggesting that motor skill training in HMD-VR increased the motor
skill performance on a computer screen, compared to if no HMD-VR training occurred.
To assess whether training on a computer screen had an effect on motor skill
performance in HMD-VR, we compared the motor skills on the Baseline training block to
the Transfer testing block for the Train-Screen group. We found a significant difference in
performance between the computer screen Baseline training block (M = -4.95, SD = 1.13)
and the HMD-VR Transfer testing block (M = -2.73, SD = 1.01; t(34) = -12.52; p < 0.0001;
Figure 3.3B), suggesting that motor skill training on a computer screen increased the
motor skill performance in HMD-VR, compared to if no training on a computer screen
occurred.
3.4.3.2 Motor Skill Transfer to Computer Screen: Performance Decreases
To assess motor skill transfer to a computer screen after training in HMD-VR, we
compared the Acquired Skill testing block (HMD-VR) to the Transfer testing block
(Screen) and found a significant difference (t(34) = 2.83, p = 0.0078; Figure 3.3A), where
motor skill was lower in the Transfer testing block (M = -3.20, SD = 0.96) compared to the
Acquired Skill testing block (M = -2.88, SD = 1.14). This suggests that performance
decreased after transfer to the untrained computer screen environment. As a reminder,
there was no significant difference in motor skill based on the order of the Acquired Skill
and Transfer blocks, which were counterbalanced across individuals.
Figure 3.3. Motor skill shown for the Train-HMD-VR group in (A) and the Train-
Screen group in (B). Light yellow blocks are HMD-VR training blocks, dark yellow
blocks are HMD-VR testing blocks. Light blue blocks are Screen training blocks,
dark blue blocks are Screen testing blocks. (A) Motor skill across training blocks
in Train-HMD-VR group and both corresponding testing blocks. Participants
increased their motor skill after training in HMD-VR (t(34) = -11.42, p < 0.0001).
Transfer to a computer screen occurred as a result of HMD-VR training (t(34) =
-9.12; p < 0.0001); however, the motor skill transferred to a computer screen
was less than the motor skill in HMD-VR (t(34) = 2.83, p = 0.0078). (B) Motor
skill across training blocks in Train-Screen group and both corresponding testing
blocks. Participants increased their motor skill after training on a computer
screen (t(34) = -9.68, p < 0.0001). Transfer to HMD-VR occurred as a result of
computer screen training (t(34) = -12.52; p < 0.0001); however, the motor skill
transferred to HMD-VR was greater than the motor skill on a computer screen
(t(34) = -2.59, p = 0.0142). Indicators of significance: p < 0.05*, p < 0.01**, p <
0.0001****.
3.4.3.3 Motor Skill Transfer to HMD-VR: Performance Increases
We then assessed motor skill transfer to HMD-VR after training on a computer
screen. To examine this, we compared the Acquired Skill testing block (Screen) to the
Transfer testing block (HMD-VR) and found a significant difference (t(34) = -2.59, p =
0.0142; Figure 3.3B), where motor skill was higher in the Transfer testing block (M = -
2.73, SD = 1.01) compared to the Acquired Skill testing block (M = -3.14, SD = 0.92). This
suggests that performance increased after transfer to the untrained HMD-VR
environment. In this group, there was also no significant difference in motor skill based
on the order of the Acquired Skill and Transfer blocks, which were counterbalanced
across individuals.
3.4.3.4 Individual Motor Skill Transfer
In the Train-HMD-VR group, a greater proportion of participants performed worse
on the Transfer testing block compared to the Acquired Skill testing block (Figure 3.4A,
left). Conversely, in the Train-Screen group, a greater proportion of participants
performed better on the Transfer testing block compared to the Acquired Skill testing block
(Figure 3.4A, right). To examine group and individual differences in transfer for each
group, we first calculated the amount of motor skill transfer for each individual. To do this,
we took the difference in motor skill between the Transfer testing block and the Acquired
Skill testing block for each individual. At the group level, we compared the average motor
skill transfer between the two groups and found a significant difference (t(61.5) = 3.75, p
= 0.0004; Figure 3.4B), where the motor skill transfer to a computer screen in the Train-
HMD-VR group (M = -0.31, SD = 0.67) was significantly lower than the motor skill transfer
to HMD-VR in the Train-Screen group (M = 0.41, SD = 0.94). This suggests that the type
of training environment during motor skill acquisition may affect the overall transfer of the
motor skills to another environment; specifically, training in an HMD-VR environment may
not transfer to a conventional environment. However, as seen in Figure 3.4A, not all
participants had similar transfer, suggesting that individual differences may predict the
transfer of motor skill acquisition between environments.
Figure 3.4. (A) Individual motor skill differences on Acquired Skill and Transfer testing
blocks for both the Train-HMD-VR group (left) and the Train-Screen group (right).
Purple represents individuals with greater motor skill on the Acquired Skill testing block
and green represents individuals with greater motor skill on the Transfer testing block.
(B) The y-axis shows “Motor Skill Transfer”, which is defined as the motor skill on the
Transfer block minus the motor skill on the Acquired Skill block for each individual.
There was a significant difference in average motor skill transfer between the Train-
HMD-VR group (left) and Train-Screen group (right; t(61.5) = 3.75, p = 0.0004). Dots
represent individuals, the box represents the first and third quartiles, and the line
represents the median. p < 0.001***.
3.4.4 Predicting Motor Skill Transfer
Given the interindividual variability of motor skill transfer (Figure 3.4A), we were
interested in whether any self-reported measurements collected (i.e., realism, possibility
to act, quality of interface, possibility to examine, self-evaluation of performance, age,
gender, video game use, and previous HMD-VR experience) could predict the motor skill
transfer in each group. Using lasso with cross-validation to select the penalty term
lambda, we performed variable selection to examine which of the nine variables most
strongly predicted individual motor skill transfer (see Analyses). Additionally, we
examined variables individually in each group with a univariate linear regression model
for exploratory purposes.
3.4.4.1 Predicting HMD-VR Motor Skill Transfer to a Computer Screen
For the Train-HMD-VR group, the resulting multivariate linear regression model
retained four variables (Table 3.1) and explained 25.5% of the variance but was not
statistically significant (F(4,30) = 2.57, R² = 0.255, p = 0.0580). The model contained two
presence variables predicting the motor skill transfer: positively correlated possibility to
act and negatively correlated self-evaluation of performance. Multicollinearity was not an
issue as the VIF for each variable was < 3.3.
Table 3.1. Results from a multivariate regression model for the Train-HMD-VR
group.
Predictor Estimate Std. Error t-value p-value
(Intercept) -0.7396 0.7741 -0.9555 0.3470
Possibility to Act 0.0835 0.0382 2.1880 0.0366*
Self-Evaluation of Performance -0.1515 0.0716 -2.1170 0.0426*
Video Game Play = Yes 0.1138 0.2380 0.4779 0.6362
Previous HMD-VR Experience = Yes 0.2836 0.2284 1.2420 0.2240
Possibility to act and self-evaluation of performance significantly predicted the amount of
transfer from HMD-VR to a computer screen. p < 0.05*.
We did not find any significant results in examining variables individually in the
univariate linear regression models (Supplementary Table 3.S3). However, there was
non-significant evidence of a difference in motor skill transfer in reported previous HMD-
VR experience (F(1,33) = 2.90, R² = 0.081, p = 0.0982) where individuals with previous
HMD-VR experience had higher motor skill transfer (M = -0.19, SD = 0.69) compared to
individuals who had never tried HMD-VR (M = -0.58, SD = 0.57; Figure 3.5). Although
these results are weak, they provide a preliminary suggestion that individual
characteristics in these areas may explain why a reduction in motor skill may occur during
HMD-VR transfer to a conventional environment. However, further research is needed to
confirm these findings in a larger sample and with multiple tasks.
Figure 3.5. Train-HMD-VR Group: Individuals with previous
HMD-VR experience had higher motor skill transfer to the
screen compared to individuals who had never tried HMD-
VR (F(1,33) = 2.90, R² = 0.081, p = 0.0982). p < 0.1†.
3.4.4.2 Predicting Computer Screen Motor Skill Transfer to HMD-VR
For the Train-Screen group, the resulting multivariate linear regression model
retained all nine variables (Table 3.2) and explained 59.7% of the variance (F(10,24) =
3.55, R² = 0.597, p = 0.0053) with quality of interface, gender, age, and video game use
significantly predicting the motor skill transfer, suggesting that the combination of these
variables may be important for predicting computer screen motor skill transfer to HMD-
VR. Multicollinearity was not an issue as the VIF for each variable was < 3.3.
Table 3.2. Results from a multivariate regression model for the Train-Screen group.
Predictor Estimate Std. Error t-value p-value
(Intercept) 1.205 1.191 1.012 0.3218
Realism -0.04472 0.02562 -1.745 0.09372
Possibility to Act -0.06161 0.05833 -1.056 0.3014
Quality of Interface 0.1027 0.0493 2.084 0.04798*
Possibility to Examine 0.07857 0.04586 1.713 0.09958
Self-evaluation of Performance 0.1428 0.08006 1.784 0.08715
Age -0.0543 0.02388 -2.274 0.03222*
Gender = Male 0.9471 0.3572 2.652 0.01396*
Gender = Other -0.1832 0.7773 -0.2357 0.8157
Video Game Use = Yes -0.8933 0.2903 -3.077 0.005164**
Previous HMD-VR Experience = Yes -0.2622 0.2848 -0.9208 0.3663
Quality of interface, gender, age, and video game use significantly predicted the amount
of transfer from a computer screen to HMD-VR. p < 0.05*, p < 0.01**.
Examining variables individually in the univariate linear regression models, we
found significant results for age and video game use. Age was negatively correlated with
motor skill transfer (F(1,33) = 4.75, R² = 0.126, p = 0.0366; Figure 3.6A), suggesting that
younger age may facilitate transfer of the acquired motor skill to an HMD-VR environment.
Additionally, there was significant evidence of a difference in motor skill transfer in
reported video game use (F(1,33) = 4.15, R² = 0.112, p = 0.0498; Figure 3.6B) where
individuals who did not play video games had overall higher motor skill transfer (M = 0.82,
SD = 0.78) compared to individuals who played video games (M = 0.17, SD = 0.96).
Furthermore, we found a non-significant positive trend between the quality of interface
and motor skill transfer (F(1,33) = 3.61, R² = 0.099, p = 0.0663; Figure 3.6C), suggesting
that this presence variable may be important in predicting computer screen motor skill
transfer to HMD-VR; however, this should be further examined. Univariate linear
regression results for Train-Screen can be found in Supplementary Table 3.S4.
Figure 3.6. (A) Younger age was significantly related to increased screen-based motor
skill transfer to HMD-VR (F(1,33) = 4.75, R² = 0.126, p = 0.0366). (B) Individuals who
did not play video games had overall higher motor skill transfer to HMD-VR than
individuals who played video games (F(1,33) = 4.15, R² = 0.112, p = 0.0498). (C) Higher
reports on the quality of the interface during training on a computer screen were related
to increased computer screen motor skill transfer to HMD-VR; however, this result was
not statistically significant (F(1,33) = 3.61, R² = 0.099, p = 0.0663). p < 0.05*.
3.5 Discussion
In this study, we examined motor skill transfer from an HMD-VR environment to a
conventional environment (i.e., computer screen), and vice-versa. First, we confirmed that
motor skill acquisition occurs in both HMD-VR and conventional screen environments and
demonstrated that acquisition occurs at a similar rate in both environments, suggesting
that task difficulty was not different between the environments. We then demonstrated
that while motor skill transfer occurs after training in either environment, there are
individual differences in the amount of motor skill that transferred.
In examining whether motor skills acquired during training in HMD-VR transferred
to a conventional screen environment, we found a significant decrease in motor skill
performance as a result of the transfer. To see if this decrease in motor skill transfer could
be explained, we examined whether individual differences in five presence themes
(realism, possibility to act, quality of interface, possibility to examine, self-evaluation of
performance), age, gender, video game use, and previous HMD-VR experience could be
used as predictors. We found trending but nonsignificant evidence that a combination of
two presence themes, positively correlated possibility to act and negatively correlated
self-evaluation of performance, best predicted this observed decrease in motor skill.
Additionally, we found trending evidence that previous experience using HMD-VR
independently may predict the decrease in the motor skill transfer. Overall, these results
suggest that while the motor skills acquired in HMD-VR may not transfer to a conventional
environment, the factors mentioned could mitigate this decrease.
We also examined whether motor skills acquired during training on a conventional
screen environment transferred to HMD-VR. We found that motor skills learned in a
conventional screen environment transfer to HMD-VR; however, not only do the motor
skills transfer, but performance seems to improve in the novel HMD-VR environment. We
found that the combination of the quality of interface, gender, age, and video game use
best predicted this motor skill transfer. Additionally, we found evidence that age and video
game use independently may predict the increase in motor skill transfer between
computer screen and HMD-VR. This supports previous findings that age and video game
use affect acquisition and transfer in non-immersive virtual environments (Richardson et
al., 2011; Rachael D. Seidler, 2007). We also found trending evidence that the quality of
interface independently may predict the increase in motor skill transfer between a
computer screen and HMD-VR, further supporting the involvement of presence in the
transfer of motor skill. These predictors may be useful to consider in cases when an HMD-
VR rehabilitation intervention is introduced after motor skills have already been acquired
in the real world.
Our work adds to the limited knowledge of personal factors that could potentially
drive motor acquisition in HMD-VR and the transfer of motor skill to other environments.
While other studies have identified potential mechanisms for HMD-VR transfer by
examining existing literature (Howard, 2017), there is inconclusive evidence for why motor
skill acquisition in HMD-VR and transfer to other environments may be more effective for
some individuals compared to others. The two presence themes identified support
previous findings that levels of presence relate to motor performance in an HMD-VR
environment (Stevens & Kincaid, 2015) and extend these findings to the transfer of motor
skill acquisition. Additionally, previous experience with the training device, which is HMD-
VR in the present case, supports findings that the transfer of spatial knowledge is
influenced by previous experience with the environment (Waller et al., 1998). Increased
exposure to HMD-VR may decrease the novelty of the environment and the attention it
evokes during the task, factors which may otherwise decrease motor performance. Future studies should examine
whether individuals with more HMD-VR experience have greater motor skill transfer to
the real world.
In addition to the personal factors that we have examined in this study, there are
undoubtedly more mechanisms that could either drive or predict HMD-VR motor skill
transfer, and this should be further explored. Future studies should also consider other
personal factors such as participants' immersive tendencies (i.e., the likelihood that an
individual will feel immersed in a new environment) (Slater, 1999), as well as avatar
embodiment, if applicable (Gonzalez-Franco & Peck, 2018). In addition to personal
factors, task-related factors likely contribute to differences in motor skill acquisition and
transfer from HMD-VR to conventional environments, and vice versa. Previous findings
have suggested that fidelity and dimensionality influence the transfer of motor skills from
HMD-VR environments (Levac et al., 2019). In the current study, a possible explanation
for the decrease in performance on a computer screen could be that the visual
representation of the HMD-VR environment did not reflect what individuals expected and
therefore, motor skill performance was not maintained with transfer. Future studies should
consider examining the level of fidelity and dimensionality in HMD-VR needed to optimize
motor skill transfer to the real world, and vice versa. Motor skill transfer has also been
shown to be influenced by other task-related factors such as task variability, engagement,
and feedback (Jung et al., 2017; Lohse et al., 2016). In the current study, the increase in
performance in HMD-VR could be a result of an increase in attention or engagement after
transfer from the computer screen. Future studies should also examine how these task-
related factors influence HMD-VR motor transfer to the real world, and vice versa.
It has also been suggested that HMD-VR may require additional cognitive
resources and that additional information and stimuli must be processed in order to solve
tasks in virtual reality (Neguţ et al., 2016). One study found that the motor skills acquired
in HMD-VR through reliance on spatial cognitive capabilities did not transfer to the
same task in the real world (Kozak et al., 1993). Our own previous work has shown that
visuomotor adaptation in HMD-VR requires a greater reliance on cognitive strategies than
performing the same task on a computer screen (J. M. Anglin et al., 2017). Taken
together, this suggests that the decrease in motor skill transfer observed when moving to
a conventional screen environment could also be due to less engagement of the cognitive
processes used when in HMD-VR. The utilization of these cognitive processes during
performance in either environment could be influenced by any of the personal or task-
related factors described. Future work should examine whether specific cognitive
processes have a role in HMD-VR motor skill transfer to the real world.
One limitation of this study was the use of a computer screen as the transfer
condition from HMD-VR. Although this was purposefully designed to provide the most
well-controlled and subtle differences between HMD-VR and the conventional
environment, and previous studies have reported significant differences between HMD-
VR and computer screen environments (J. M. Anglin et al., 2017; Dan & Reiner, 2017;
Slobounov et al., 2015; Subramanian & Levin, 2011), future work should examine whether
presence, gender, age, video game use, or previous HMD-VR experience has an effect
on HMD-VR motor skill transfer to more dynamic, real world physical applications (e.g.,
throwing a ball in HMD-VR versus throwing a ball in real life). Future research should also
look to see if the identified factors apply to different clinical populations and examine
whether mechanisms such as functional independence or cognitive status could predict
success of HMD-VR rehabilitation interventions (Dušica et al., 2015; Ween et al., 1996).
Another limitation was that our definition of motor skill transfer reflects the transfer of
motor skill acquisition rather than motor skill learning. Experimental designs of motor skill
learning typically examine transfer after a retention interval and compare transfer
performance to baseline performance in the transfer context (A. Kim et al., 2019). Future
studies should examine whether the personal factors identified here are also predictors
for this type of experimental design. Lastly, the use of a subjective questionnaire to
measure presence is also a limitation; future work should use alternative objective
measures, such as physiological responses, in addition (Wiederhold et al., 2001). Overall,
despite these limitations, we believe that the work presented in this study provides an
initial examination into the transfer of motor skills between HMD-VR and conventional
screen environments as well as insight into the factors that may mediate this transfer.
3.6 Conclusion
Both HMD-VR and conventional screen environments resulted in the acquisition
of a motor skill at a similar rate, as well as transfer to a different environment. However,
motor skill performance decreased when transferring from HMD-VR to a conventional
screen environment, while motor skill performance increased when transferring from a
conventional screen environment to HMD-VR. Furthermore, themes of presence, gender,
age, and video game use significantly predicted the motor skill transfer in individuals
training on the screen, while themes of presence and previous HMD-VR experience were
loosely related to the motor skill transfer in individuals training in HMD-VR. As HMD-VR
becomes an increasingly popular medium for motor learning and rehabilitation
applications, it is important to understand how to optimize interventions to ensure the
complete transfer of motor skills to the target environment. Future studies should examine
individual differences in other personal factors and in task-related factors.
3.7 Supplemental Material
Table 3.S1. Differences in simulator sickness level between Train-HMD-VR and
Train-Screen.
U n1 = n2 p-value Train-HMD-VR Train-Screen
Nausea 542 35 0.3741 M = 2.18, SD = 2.18 M = 4.63, SD = 13.16
Oculomotor Reactions 675 35 0.4586 M = 12.56, SD = 16.84 M = 8.45, SD = 17.52
Disorientation 560 35 0.4881 M = 7.16, SD = 18.91 M = 6.76, SD = 16.67
Overall Simulator Sickness 609 35 0.9715 M = 8.98, SD = 15.77 M = 7.80, SD = 16.31
There were no significant differences in levels of nausea, oculomotor reactions,
disorientation, and overall simulator sickness between groups.
Table 3.S2. Differences in themes of presence (top) and other self-reported
measures (bottom) between Train-HMD-VR and Train-Screen.
t-value df p-value Train-HMD-VR Train-Screen
Realism 1.03 68.0 0.3082 M = 34.86, SD = 7.31 M = 33.06, SD = 7.36
Possibility to Act -0.28 67.3 0.7772 M = 22.20, SD = 3.53 M = 22.43, SD = 3.19
Quality of Interface 0.43 67.9 0.6655 M = 13.34, SD = 3.38 M = 13.00, SD = 3.23
Possibility to Examine 0.77 60.0 0.4430 M = 15.00, SD = 2.59 M = 14.40, SD = 3.80
Self-Evaluation of Performance -0.20 68.0 0.8435 M = 11.20, SD = 1.81 M = 11.29, SD = 1.81
Age -0.84 61.0 0.4069 M = 25.34, SD = 3.84 M = 26.29, SD = 5.46
χ² df p-value Train-HMD-VR Train-Screen
Gender 0.47 2 0.7919 25 female, 10 male 18 female, 6 male, 1 other
Video Game Use 2.27 1 0.1322 23 Yes, 12 No 23 Yes, 13 No
Previous HMD-VR Experience 0.04 1 0.8351 23 Yes, 12 No 24 Yes, 11 No
There were no significant differences in realism, possibility to act, quality of interface,
possibility to examine, self-evaluation of performance, age, gender, video game use, and
previous HMD-VR experience between groups.
Table 3.S3. Train-HMD-VR results from univariate analysis of predicting HMD-VR
motor skill transfer to a computer screen.
Predictor Estimate Std. Error t-value p-value
(Intercept) -0.3270 0.5700 -0.5737 0.5700
Realism 0.0002 0.0160 0.0094 0.9926
(Intercept) -1.3640 0.7226 -1.8880 0.0679
Possibility to Act 0.0470 0.0322 1.4600 0.1537
(Intercept) -0.1755 0.4761 -0.3687 0.7147
Quality of Interface -0.0110 0.0347 -0.3166 0.7535
(Intercept) -0.7376 0.6842 -1.0780 0.2888
Possibility to Examine 0.0278 0.0450 0.6164 0.5418
(Intercept) 0.4028 0.7219 0.5580 0.5806
Self-Evaluation of Performance -0.0647 0.0637 -1.0170 0.3168
(Intercept) 0.0408 0.7784 0.0524 0.9585
Age -0.0143 0.0304 -0.4710 0.6408
(Intercept) -0.3772 0.1354 -2.7860 0.0088
Gender = Male 0.1939 0.2533 0.7654 0.4495
(Intercept) -0.5319 0.1919 -2.7720 0.0091
Video Game Use = Yes 0.3197 0.2367 1.3510 0.1860
(Intercept) -0.5826 0.189 -3.0820 0.0041
Previous HMD-VR Experience = Yes 0.3968 0.2331 1.7020 0.0982†
There was non-significant evidence of a difference in motor skill transfer in reported
previous HMD-VR experience. p < 0.1†.
Table 3.S4. Train-Screen results from univariate analysis of predicting computer
screen motor skill transfer to HMD-VR.
Predictor Estimate Std. Error t-value p-value
(Intercept) 0.2554 0.7540 0.3388 0.7369
Realism 0.0047 0.0223 0.2121 0.8333
(Intercept) 0.4707 1.1630 0.4047 0.6883
Possibility to Act -0.0026 0.0514 -0.0513 0.9594
(Intercept) -0.7799 0.6458 -1.2080 0.2358
Quality of Interface 0.0917 0.0483 1.8990 0.0663†
(Intercept) -0.2126 0.6326 -0.3361 0.7389
Possibility to Examine 0.0434 0.0425 1.0200 0.3153
(Intercept) -0.8375 1.0130 -0.8269 0.4142
Self-Evaluation of Performance 0.1107 0.0886 1.2490 0.2206
(Intercept) 2.0190 0.7530 2.6810 0.0114
Age -0.0611 0.0281 -2.1780 0.0366*
(Intercept) 0.3288 0.1740 1.8890 0.0680
Gender = Male 0.6367 0.4143 1.5370 0.1342
Gender = Other -0.9192 0.9372 -0.9807 0.3341
(Intercept) 0.8152 0.2500 3.2610 0.0026
Video Game Use = Yes -0.6420 0.3153 -2.0360 0.0498*
(Intercept) 0.4776 0.2880 1.6590 0.1067
Previous HMD-VR Experience = Yes -0.0963 0.3478 -0.2767 0.7837
Significant results were found with age and video game use. Additionally, a non-significant
positive trend between the quality of interface and motor skill transfer was found. p <
0.05*, p < 0.1†.
Chapter 4: Cognitive Load, Retention, and Context Transfer
of Visuomotor Adaptation in HMD-VR
This chapter is adapted from:
Juliano J.M., Schweighofer N., & Liew S.L. (2022) Increased cognitive load in immersive
virtual reality during visuomotor adaptation is associated with decreased long-term
retention and context transfer. Manuscript submitted for publication.
4.1 Abstract
Complex motor tasks in immersive virtual reality using a head-mounted display
(HMD-VR) have been shown to increase cognitive load and decrease motor performance
compared to conventional computer screens (CS). Separately, visuomotor adaptation in
HMD-VR has been shown to recruit more explicit, cognitive strategies, resulting in
decreased implicit mechanisms thought to contribute to motor memory formation.
However, it is unclear whether visuomotor adaptation in HMD-VR increases cognitive
load and whether cognitive load is related to explicit mechanisms and long-term motor
memory formation. We randomized 36 healthy participants into three equal groups. All
groups completed an established visuomotor adaptation task measuring explicit and
implicit mechanisms, combined with a dual-task probe measuring cognitive load. Then,
all groups returned after 24 hours to measure retention of the overall adaptation. One
group completed both training and retention tasks in CS (measuring long-term retention
in a CS environment), one group completed both training and retention tasks in HMD-VR
(measuring long-term retention in an HMD-VR environment), and one group completed
the training task in HMD-VR and the retention task in CS (measuring context transfer from
an HMD-VR environment). A Generalized Linear Mixed-Effect Model (GLMM) was used
to compare cognitive load between CS and HMD-VR during visuomotor adaptation, t-
tests were used to compare overall adaptation and explicit and implicit mechanisms
between CS and HMD-VR training environments, and ANOVAs were used to compare
group differences in long-term retention and context transfer. Cognitive load was found
to be greater in HMD-VR than in CS. This increased cognitive load was related to
decreased use of explicit, cognitive mechanisms early in adaptation. Moreover, increased
cognitive load was also related to decreased long-term motor memory formation. Finally,
training in HMD-VR resulted in decreased long-term retention and context transfer. Our
findings show that cognitive load increases in HMD-VR and relates to explicit learning
and long-term motor memory formation during motor learning. Future studies should
examine what factors cause increased cognitive load in HMD-VR motor learning and
whether this impacts HMD-VR training and long-term retention in clinical populations.
4.2 Introduction
Immersive virtual reality using a head-mounted display (HMD-VR) has been
increasingly used for motor learning and rehabilitation purposes (Levin, 2020). Recent
technological developments in HMD-VR have made these devices obtainable at relatively
low costs. Driving factors for using HMD-VR in motor rehabilitation include the ability to
replicate and even go beyond the real world, allowing for researchers and clinicians to
have increased control of the training environment and tailor it for each patient’s needs
(Levin, 2020). Moreover, use of these devices has been shown to increase motivation
and engagement and allow for patients to spend more time actively engaged in therapy
(Laver et al., 2017; Levin & Demers, 2020; Zimmerli et al., 2013). However, while some
applications of interventions performed in HMD-VR have been shown to be either comparable
or superior to the same intervention performed in a conventional rehabilitation setting
(Devos et al., 2009; Howard, 2017; Nemani et al., 2018), other applications have been
less effective than conventional environments (Levac & Jovanovic, 2017; Massetti et al.,
2018; Müssgens & Ullén, 2015). There is also inconclusive and conflicting evidence about
whether motor skills learned in HMD-VR will transfer to the real world, and reasons
underlying a lack of contextual transfer are unclear (Juliano & Liew, 2020; Levac et al.,
2019). These findings suggest that there may be instances when the use of HMD-VR
could potentially result in less effective motor learning. Understanding what makes
learning motor skills in HMD-VR different from learning motor skills in the real world can
better inform the design of HMD-VR applications so that they can be transferred to new
environments and be more effective for clinical populations.
Evidence suggests that motor skills are learned differently between HMD-VR and
conventional environments (Levac et al., 2019). A first potential difference is the
movement kinematics within each environment. Studies comparing movements made by
individuals both with and without motor impairments found movements in an HMD-VR
environment to be slower and less smooth compared to a real-world environment (Levin,
Magdalon, et al., 2015; Magdalon et al., 2011). These results suggest that movement
parameters, especially when a virtual avatar is present, should be monitored in HMD-VR
motor learning applications. A second potential difference is the underlying motor learning
mechanisms used in each environment. HMD-VR has been shown to recruit greater
explicit, cognitive strategies during visuomotor adaptation compared to a conventional
computer screen (CS), suggesting that the process by which motor skills are acquired in
HMD-VR may be different than in conventional environments (J. M. Anglin et al., 2017).
Specifically, these findings suggest that motor learning in HMD-VR may require additional
cognitive processing compared to conventional environments.
Converging evidence supports this and shows that cognitive load increases during
highly stressful and complex motor tasks in HMD-VR compared to CS environments
(Baumeister et al., 2017; Frederiksen et al., 2020; Funk et al., 2016). Cognitive load refers to the demand a task places on working memory, which can hold only a limited amount of information at one time. The theoretical
construct of cognitive load suggests that novel information (e.g., a new visuomotor
mapping) can be encoded in long-term memory when the load on working memory is
within working memory limits (Orru & Longo, 2019). Increased cognitive load in HMD-VR,
measured by the attentional demands of a secondary task during complex motor tasks,
has been shown to decrease motor performance compared to CS (Frederiksen et al., 2020).
However, it is unclear whether HMD-VR-related increases in cognitive load and
decreases in motor performance result in decreased long-term motor memory formation,
which is critical for clinical uses. Based on the theoretical framework of cognitive load, we
hypothesized that an increase in cognitive load during HMD-VR motor tasks would be
related to a decrease in motor memory formation, resulting in both decreased long-term
retention and context transfer.
To examine this hypothesis, we first aimed to examine differences in cognitive load
between HMD-VR and CS during a visuomotor adaptation task. Visuomotor adaptation is
thought to be an error-driven process governed by competing explicit and implicit
mechanisms updated on a trial-by-trial basis (Krakauer et al., 2019; Tseng et al., 2007).
Explicit mechanisms are thought to be important early in adaptation and reflect the
cognitive strategies used to adapt to experienced perturbations. Implicit mechanisms, on
the other hand, develop over the course of adaptation and reflect a recalibration of an
internal model, thought to represent a mapping between the desired goal and the
appropriate motor response to accomplish the goal (Miall & Wolpert, 1996; Taylor & Ivry,
2013b). Explicit processes are also found to be affected by a secondary cognitive task,
while implicit processes are not affected by the interference (Redding et al., 1992). As
previously mentioned, HMD-VR has been shown to recruit greater explicit, cognitive
strategies during visuomotor adaptation. However, it is not known whether the recruitment
of greater cognitive strategies is related to increased cognitive load. Thus, the second
aim of this study is to examine the relationship between explicit mechanisms and
cognitive load during visuomotor adaptation. This was done by combining an established
visuomotor adaptation task with a dual-task probe measuring attentional demands to
assess cognitive load and by training individuals in either an HMD-VR or CS environment.
After training on the visuomotor adaptation task with the dual-task probe, participants
returned after a 24-hour retention period to measure long-term retention as well as HMD-
VR context transfer. To address the third aim of this study, we then examined the
relationship between cognitive load and long-term motor memory and context transfer of
the overall adaptation. By measuring attentional demands throughout the visuomotor
adaptation process, we then examined whether the cognitive load during training is
related to long-term retention and context transfer.
4.3 Methods
4.3.1 Participants
Forty-one participants were recruited for the study. Two participants did not
complete the experiment due to an inability to follow instructions (CS: N = 2). From the
thirty-nine participants who completed the experiment, one was removed due to a
previous knee injury that hindered their ability to press the foot pedal on the dual-task
probe (CS: N = 1) and two were removed due to missing greater than 25% of the dual-
task probe trials (HMD-VR: N = 2). This resulted in thirty-six participants included in the
final training analysis, with N = 12 training in the CS environment and N = 24 training in
the HMD-VR environment (22 female/14 male; age: M = 26.3 years, SD = 4.6). For the final
retention analysis, participants training in the HMD-VR environment were equally divided
into two groups, where half of participants continued in the HMD-VR environment (N =
12), and the other half transferred to the CS environment (N = 12; see Retention for
details). Eligibility criteria included right-handed individuals with no neurological
impairments and normal or corrected-to-normal vision. Data were collected in-person
during the COVID-19 pandemic, and all participants wore surgical masks for the duration
of the experiment. Written informed consent was electronically obtained from all
participants to minimize in-person exposure time at the lab. The experimental protocol
was approved by the USC Health Sciences Campus Institutional Review Board and
performed in accordance with the 1964 Declaration of Helsinki.
4.3.2 Experimental Apparatus
Participants completed an established visuomotor adaptation task modified with a
dual-task probe to measure attentional demands (Figure 4.1A) (J. M. Anglin et al., 2017;
Goh et al., 2014; Taylor et al., 2014). The task was completed in either a CS or HMD-VR
environment. The HMD-VR environment used an Oculus Quest (Figure 4.1B) and
showed an environment modeled after the CS environment (Figure 4.1C). In both
environments, participants grasped a digitized stylus with their right hand and reached
for one of eight pseudorandomized targets using a tablet (Wacom Intuos4 Extra Large).
Movement trajectories were sampled at 60 Hz in both CS and HMD-VR environments.
To control for potential differences in movement kinematics, participants were unable to
see their bodies in either environment (i.e., bodies were covered using a cloth cover in
CS and no virtual avatar was provided in HMD-VR). Instead, visual feedback of the stylus
was provided in the form of a red circular cursor (5 mm diameter) and displayed on an
upright computer monitor set on a large box which was placed over the tablet. The
computer monitor located in the CS environment was a 24.1 inch, 1920x1200 pixel
resolution computer monitor (HP) located 23 cm above the tablet. The HMD-VR
environment replicated the dimensions of the computer monitor as well as all the other
aspects of the room and was designed using the game engine development tool, Unity
3D (version 2019.4.11f1). There were no differences between CS and HMD-VR
environments in how participants were able to move. Participants were given the
opportunity to explore the virtual environment before beginning the task.
Figure 4.1. Experimental Paradigm. (A) Experimental design. Participants trained on a
visuomotor adaptation task in either (B) an HMD-VR environment or (C) a CS
environment. (D) Visuomotor adaptation task with a 45° counterclockwise rotation (E)
combined with dual-task probe. After finding the start circle, a blue target with numbers
flanking the target would appear, and participants were asked to report where they
planned to aim. Participants remained at the start circle for 3000 ms as the target
changed from blue to yellow to green, and then made quick reaching movements
through the target. Once crossing the invisible outer circle, the endpoint location of the
reach (or rotated reach) would be displayed as a red cursor. On some trials, an auditory cue was played 500 ms after the target turned yellow, and participants responded by quickly pressing a foot pedal under their right foot.
The reaction time of the foot pedal press was used as the measure of cognitive load.
4.3.3 Experimental Design
The experiment took place over two days (Figure 4.1A). On the first day,
participants completed two long training blocks followed by a short retention block (Blocks
1-3). On the second day, participants completed a 24-hour retention block (Block 4)
followed by an adaptation wash-out block (Block 5). Participants completed 8
familiarization trials before starting Block 1.
4.3.3.1 Training
On the first day, participants completed familiarization and training (8 trials followed
by Blocks 1-2, which spanned a total of 392 trials). At the start of each trial, participants
moved to the start circle (7 mm diameter) located at the center of the monitor using a
guiding circle that got smaller as they got closer to the center. After remaining at the start
circle for 1000 milliseconds (ms), the cursor appeared along with one of eight
pseudorandom targets (10 mm in diameter) flanked by positive and negative numbers
spaced 5.625° apart (Figure 4.1D). The targets were located on an invisible outer circle
with a diameter of 28 cm and spaced 45° apart (0°, 45°, 90°, 135°, 180°, 225°, 270°,
315°). During each trial, the target turned three different colors. The target initially
appeared blue, signaling participants to verbally report to the experimenter where they
planned to aim. After 1500 ms, the target then turned yellow, signaling participants to “get
ready”. Lastly, after another 1500 ms, the target turned green, signaling participants to
reach for the target. Participants were instructed to make “fast, slicing movements”
towards the target and to shoot through the target. Reaction time (RT) was defined as the
time between when the target turned green and when the cursor left the start circle, and
movement time (MT) was defined as the time between when the cursor left the start circle
and when the cursor crossed the invisible outer circle. If the MT exceeded more than 500
ms, participants were given an auditory warning that said, “Too Slow”. After crossing the
invisible outer circle, participants received auditory feedback based on the accuracy of
their reach (i.e., a pleasant “ding” if the cursor crossed the target or an unpleasant “buzz”
if the cursor did not cross the target). Participants were also given visual endpoint
feedback at the location where the cursor crossed the invisible outer circle for 1000 ms
before starting the next trial (Figure 4.1E).
Pseudorandomly, once per cycle (8 trials/cycle), an auditory cue (i.e., a horn)
would sound 500 ms after the target turned yellow. Participants were instructed to press
a foot pedal located under their right foot as quickly as possible, with the goal being to
still reach for the target with their stylus as soon as it turned green. This dual-task probe was adapted from Goh et al. (2014), who validated it as a measure of cognitive load (attentional demands) during a discrete motor task.
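To illustrate how the probe responses described above can be reduced to a single cognitive load measure, the following sketch computes the pedal-press reaction time for each probed trial and averages it per participant and block. It is written in R to match the analysis software used in this chapter; the object and column names (e.g., probe_trials, cue_onset_ms) and all values are illustrative assumptions, not the study's actual code or data.

# Illustrative probe data; times are in ms from trial onset (arbitrary values)
probe_trials <- data.frame(
  participant  = rep(c("p01", "p02"), each = 4),
  block        = rep(c("Baseline", "Rotation"), each = 2, times = 2),
  cue_onset_ms = 2000,  # auditory cue played 500 ms after the target turned yellow
  press_ms     = 2000 + c(750, 800, 700, 760, 880, 910, 860, 900)
)

# Cognitive load on each probed trial: foot pedal reaction time to the cue
probe_trials$probe_rt <- probe_trials$press_ms - probe_trials$cue_onset_ms

# Average probe reaction time per participant and block
aggregate(probe_rt ~ participant + block, data = probe_trials, FUN = mean)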
Before beginning training, participants watched an instructional video describing
the experimental design and providing an example of a trial. Participants were instructed
to begin by aiming directly at the target but told that at some point in the task, they may
need to aim somewhere other than the target to make the cursor land on the target. After
completing a familiarization cycle (8 trials) where they were given clarifying instructions if
needed, participants then began training with the Baseline block (Block 1: 72 trials) where
they made unperturbed reaches to targets. Without additional instruction, the Rotation
block then began, in which a 45° counterclockwise perturbation was introduced. Participants
needed to counteract the perturbation for the cursor to land on the target (Block 2: 320
trials). Participants were given a two-minute break every 100 trials. During these breaks,
participants in the HMD-VR group were allowed to remove the HMD-VR headset if
desired.
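To make the perturbation concrete, the sketch below applies a standard two-dimensional rotation to the hand position to obtain the displayed cursor position; the function name and coordinate convention are illustrative assumptions rather than the study's task code.

# Rotate a hand position (x, y), expressed relative to the start circle at the
# origin, counterclockwise by theta_deg to obtain the displayed cursor position.
rotate_ccw <- function(x, y, theta_deg = 45) {
  theta <- theta_deg * pi / 180
  c(x = cos(theta) * x - sin(theta) * y,
    y = sin(theta) * x + cos(theta) * y)
}

# A reach aimed straight at a target along +x is displayed 45° counterclockwise
# of the target, so participants must aim roughly 45° clockwise to compensate.
rotate_ccw(x = 1, y = 0)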
4.3.3.2 Retention
After completing training, participants completed a No Feedback (Immediate)
block split into Strategy and No-Strategy cycles, counterbalanced across participants
(Block 3: 16 trials). In the Strategy cycle, participants were instructed to continue using
whatever strategy they developed at the end of training to get the cursor to land on the
target. In the No-Strategy cycle, participants were told to refrain from using any aiming
strategy and instead aim directly at the target. In both cycles, numbers flanking the target
still appeared; however, participants were told to no longer report their aim and that no
feedback would be provided. The average movement angle in the Strategy cycle was
used to determine how much of the adaptation was retained. The average movement
angle in the No-Strategy cycle was used to examine the implicit processes contributing
to retention. The difference between the average movement angles in the Strategy and
No-Strategy cycles was calculated to examine the explicit processes contributing to
retention. Use of the Strategy and No-Strategy cycles, called the process dissociation
procedure, assumes that participants can disengage from using a cognitive strategy when
instructed to do so (Bouchard & Cressman, 2021; Werner et al., 2019). This concluded
the first day of the experiment. Participants returned the next day, after a 24-hour retention
interval period, and completed No Feedback (Delayed) (Block 4: 16 trials) and Washout
(Block 5: 40 trials) blocks. The No Feedback (Delayed) block was identical to the No
Feedback (Immediate) block with Strategy and No Strategy cycles, and the Washout
block was identical to the Baseline block, with the exception that participants no longer
had to report their aim.
Participants who trained in the CS environment completed both No Feedback
blocks as well as the Washout block (Blocks 3-5) in the CS environment to measure long-
term retention (CS-R). Half of the participants who trained in the HMD-VR environment
completed the two No Feedback and Washout blocks in the HMD-VR environment to
measure long-term retention of HMD-VR (HMD-VR-R), while the other half completed
these blocks in the CS environment to measure context transfer from HMD-VR to CS
(HMD-VR-T). As noted above, participants were randomly assigned to groups.
4.3.3.3 Questionnaires
After training, participants were asked to complete two Likert-scale questionnaires
similar to what was collected in Chapter 3: a virtual reality simulator sickness
questionnaire (VRSQ) and a presence questionnaire (IPQ). The VRSQ consisted of a
series of questions to gauge participant sickness level in the training environment and
were collapsed along three main themes: oculomotor reactions, disorientation, and
overall simulator sickness (H. K. Kim et al., 2018). The IPQ consisted of a series of
questions to gauge the participant’s sense of presence in the training environment and
were collapsed along four main themes: general presence, spatial presence, involvement,
and realism (Schwind et al., 2019). Also similar to Chapter 3, participants were asked
questions regarding their gender, age, whether they played video games, and whether
they had previous experience using HMD-VR. As an exploratory analysis, participants
were also asked to complete two other Likert-scale questionnaires: a sense of agency
(SoA) questionnaire and an intrinsic motivation inventory (IMI). The SoA questionnaire
consisted of three questions to gauge participants' sense of agency in the training environment (D’Angelo et al., 2018). The IMI consisted of a series of questions to gauge perceived motivation while completing the task, and these were collapsed along four main themes: interest/enjoyment, perceived competence, effort/importance, and pressure/tension (Lourenço et al., 2008). All questionnaires were collected through REDCap (P. A. Harris et al., 2009).
4.3.4 Movement Analysis
All kinematic data were recorded by Unity 3D for both CS and HMD-VR environments. To assess overall adaptation, we used the endpoint hand angle, measured at the moment the cursor crossed the invisible outer circle. Targets
were rotated to a common reference angle set at 0°, and endpoint hand angle was
calculated as the difference between the reference angle and the line between the origin
and the endpoint of the hand. To assess explicit adaptation, we used aiming angle, which
was measured as the reported aim multiplied by 5.625° (i.e., the degrees separating each
number on the invisible outer circle). Positive angles indicate a clockwise direction from
the target and negative angles indicate a counterclockwise direction from the target. To
assess implicit adaptation, we subtracted the aiming angle from the hand
angle. Changes in hand angle, aiming angle, and implicit adaptation were calculated as
individual means across 8 trials per cycle. All data are reported in endpoint hand angles,
not target errors.
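A minimal sketch of these trial-level computations is given below. The data frame and column names are assumptions for illustration only; the sign convention follows the text (positive angles are clockwise from the target), and implicit adaptation is computed as hand angle minus aiming angle, consistent with the values reported in Supplementary Table 4.S1.

# trials is assumed to contain, per trial: end_x, end_y (endpoint relative to the
# start circle), target_deg (target direction), and reported_number (verbal aim).
compute_adaptation_measures <- function(trials, deg_per_number = 5.625) {
  # Direction of the hand when the cursor crossed the invisible outer circle
  endpoint_deg <- atan2(trials$end_y, trials$end_x) * 180 / pi

  # Hand angle after rotating each target to a common 0° reference;
  # positive values are clockwise from the target
  raw <- trials$target_deg - endpoint_deg
  trials$hand_angle <- ((raw + 180) %% 360) - 180

  # Explicit component: reported aiming number scaled by the number spacing
  trials$aiming_angle <- trials$reported_number * deg_per_number

  # Implicit component: adaptation not accounted for by the explicit aim
  trials$implicit_adaptation <- trials$hand_angle - trials$aiming_angle
  trials
}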
4.3.5 Statistical Analysis
Statistical analyses were conducted using R (version 3.6.3). Trials were excluded
if participants failed to report the aiming angle (0.45% of trials), moved before the target
turned green (2.92% of trials), or movements were made in the wrong direction (i.e., >
120° from target or rotation angle; 0.69% of trials) (Vaswani et al., 2015). We also
removed trials where the reaction time or movement time was greater than 3 standard
deviations from the participant's mean (2.87% of trials) (Butcher & Taylor, 2018; McDougle
& Taylor, 2019). The two HMD-VR groups were combined for the training analysis but
separated for the retention analysis.
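These exclusion rules could be implemented roughly as follows; the column names (e.g., angular_error_deg) are illustrative assumptions rather than the study's code.

# Assumed per-trial columns: participant, reported_aim (NA if not reported),
# rt_ms, mt_ms, and angular_error_deg (movement direction relative to the
# target or rotation solution). Thresholds follow the rules described above.
exclude_trials <- function(trials) {
  keep <- !is.na(trials$reported_aim) &      # aiming angle was reported
    trials$rt_ms > 0 &                       # did not move before the go signal
    abs(trials$angular_error_deg) <= 120     # movement not in the wrong direction

  # Remove trials with RT or MT more than 3 SDs from that participant's mean
  for (v in c("rt_ms", "mt_ms")) {
    z <- ave(trials[[v]], trials$participant,
             FUN = function(x) (x - mean(x, na.rm = TRUE)) / sd(x, na.rm = TRUE))
    keep <- keep & abs(z) <= 3
  }
  trials[keep, , drop = FALSE]
}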
To compare attentional demands between CS and HMD-VR during visuomotor
adaptation, we used a Generalized Linear Mixed-Effect Model (GLMM) with individual
participants as a random-effect variable. The RT (ms) on the dual-task probe (i.e.,
cognitive load) was used as the response variable, while Training Environment, Cycle,
and a Training Environment x Cycle interaction term were used as fixed-effect variables.
We used the function “glmer” with an Inverse Gaussian family in the lme4 R package (Lo
& Andrews, 2015). The significance of each parameter was assessed by the Wald z-
statistic.
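A sketch of how such a model can be specified with lme4 is shown below. The simulated data, variable names, and the identity link are illustrative assumptions (Lo and Andrews, 2015, discuss link choices for reaction time GLMMs); they are not the study's actual data or model code.

library(lme4)

# Simulated probe reaction times (ms); values and structure are illustrative only
set.seed(1)
d <- expand.grid(participant = factor(1:36), cycle = 1:40)
d$environment <- ifelse(as.integer(d$participant) <= 12, "CS", "HMD-VR")
d$rt_ms <- 800 + 150 * (d$environment == "HMD-VR") - 0.8 * d$cycle +
  rnorm(nrow(d), sd = 80)

# Inverse Gaussian GLMM of probe RT with participant as a random intercept;
# fixed effects are Training Environment, Cycle, and their interaction
fit <- glmer(rt_ms ~ environment * cycle + (1 | participant),
             data = d, family = inverse.gaussian(link = "identity"))
summary(fit)  # fixed effects assessed with Wald z-statistics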
To quantify training, we used unpaired t-tests and compared the mean hand angle,
aiming angle, implicit adaptation, RT, and MT between CS and HMD-VR environments
during Baseline and Rotation blocks. We also examined differences between training
environments at the End of Baseline, defined as the last cycle of the Baseline block, as
well in Early and Late Adaptation, defined as the mean of the first and last four cycles of
the Rotation block, respectively.
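The Early and Late Adaptation summaries and the between-environment comparisons could be computed along the following lines (assumed column names; Welch t-tests, which are R's default):

# cycles is assumed to hold one row per participant x cycle, with the cycle-mean
# hand angle, the block label, and the training environment.
early_late <- function(cycles, n_cycles = 4) {
  rot <- cycles[cycles$block == "Rotation", ]
  do.call(rbind, lapply(split(rot, rot$participant), function(d) {
    d <- d[order(d$cycle), ]
    data.frame(participant = d$participant[1],
               environment = d$environment[1],
               early = mean(head(d$hand_angle, n_cycles)),   # first 4 cycles
               late  = mean(tail(d$hand_angle, n_cycles)))   # last 4 cycles
  }))
}

# el <- early_late(cycles)
# t.test(early ~ environment, data = el)   # Early Adaptation, CS vs. HMD-VR
# t.test(late  ~ environment, data = el)   # Late Adaptation, CS vs. HMD-VR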
To quantify retention and context transfer, we examined both immediate and
delayed forgetting, calculated by subtracting Late Adaptation from the No Feedback
(Immediate) and No Feedback (Delayed) blocks, respectively (French et al., 2021). One-
factor ANOVAs were used to compare group differences, and individuals with a movement angle greater than two standard deviations from the group mean in either the Strategy or No-Strategy cycles of the No Feedback blocks were excluded from this part of the analysis. All measures are reported as mean ± standard deviation in Supplementary Tables 4.S1 and 4.S2, and significance was considered at p < 0.05.
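A sketch of the forgetting measure and group comparison is shown below; the data frame and column names are assumptions for illustration only and do not reproduce the study's code.

# summary_df is assumed to hold one row per participant with: group (CS-R,
# HMD-VR-R, or HMD-VR-T), strategy_angle (mean Strategy-cycle hand angle in the
# relevant No Feedback block), and late_adaptation (mean of the last 4 Rotation cycles).
forgetting_anova <- function(summary_df) {
  # Forgetting: No Feedback (Strategy) minus Late Adaptation
  summary_df$forgetting <- summary_df$strategy_angle - summary_df$late_adaptation

  # Exclude individuals more than 2 SDs from their group mean in the Strategy cycle
  z <- ave(summary_df$strategy_angle, summary_df$group,
           FUN = function(x) (x - mean(x)) / sd(x))
  kept <- summary_df[abs(z) <= 2, , drop = FALSE]

  # One-factor ANOVA comparing the three groups
  summary(aov(forgetting ~ group, data = kept))
}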
To examine any differences between training conditions in the variables calculated
based on the VRSQ, we used a Mann-Whitney U test, as either skewness or kurtosis values
were greater than 1.96 for each variable. To examine any differences between training
conditions in the personal factor variables from the IPQ, SoA, IMI and subjective
measures questionnaires, we used an unpaired t-test for quantitative variables and a chi-
squared test for qualitative variables. Significance was defined as p < 0.05 unless
corrected for multiple comparisons using a Bonferroni correction.
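The corresponding R tests might look roughly as follows; the data frame and its columns are simulated placeholders, not the study's data.

set.seed(2)
q <- data.frame(
  environment    = rep(c("CS", "HMD-VR"), times = c(12, 24)),
  vrsq_total     = c(rpois(12, 7), rpois(24, 11)),   # skewed VRSQ scores
  ipq_realism    = c(rnorm(12, 2.5, 1), rnorm(24, 3.4, 1)),
  video_game_use = sample(c("Yes", "No"), 36, replace = TRUE)
)

wilcox.test(vrsq_total ~ environment, data = q)      # Mann-Whitney U for VRSQ measures
t.test(ipq_realism ~ environment, data = q)          # unpaired t-test for quantitative variables
chisq.test(table(q$environment, q$video_game_use))   # chi-squared test for qualitative variables
0.05 / 4                                             # Bonferroni-corrected alpha for the four IPQ themes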
4.4 Results
4.4.1 Cognitive Load is Greater Across Visuomotor Adaptation in HMD-VR
Compared to CS
Cognitive load was larger in HMD-VR than in CS, as shown by the significant
coefficient of Training Environment (β₂ = 167±38, p < 0.0001). In addition, cognitive load decreased over the course of training in both environments, as shown by the negative coefficient of Cycle (β₁ = -0.83±0.32, p = 0.009) (Figure 4.2A; Supplementary Table 4.S3).
Cognitive load was also greater in HMD-VR than in CS during the Baseline block (Figure
4.2B; t(27.5) = -3.12, p = 0.004) and during the Rotation block (Figure 4.2C; t(30.4) = -
2.96, p = 0.006). These results suggest that the attentional demands of participants in
HMD-VR were greater, both with and without perturbed reaches, compared to CS
visuomotor adaptation.
Figure 4.2. Results between HMD-VR and CS training environments on dual-task
probe. (A) Cognitive load measured as the reaction time to the dual-task probe across
the visuomotor adaptation task for participants training in either CS or HMD-VR
environments. GLMM indicated both Cycle (𝛃
9
𝟏
= -0.83, p = 0.009) and Training
Environment (𝛃
9
𝟐
= 167.39, p < 0.0001) were significantly related to cognitive load where
training in HMD-VR was related to increased cognitive load. Dots represent the average
cognitive load across individuals for each cycle. Furthermore, average cognitive load
was greater for the HMD-VR environment during (B) the Baseline block (t(27.5) = -3.12,
p = 0.004) and (C) the Rotation block (t(30.4) = -2.96, p = 0.006) compared to the CS
environment; dots represent individual averages across blocks and error bars indicate
standard error. p < 0.05*
4.4.2 Overall Visuomotor Adaptation Between HMD-VR and CS
Environments
To examine overall visuomotor adaptation between training environments, we
compared hand angle between HMD-VR and CS across the entire Baseline block and
the entire Rotation block (Figure 4.3A).
4.4.2.1 Baseline
Performance was similar between environments before the perturbation was
introduced, with no difference in hand angle at the End of Baseline (t(34.0) = 1.55, p =
0.130). Additionally, there were no significant differences between training environments
at the End of Baseline for RT (t(33.8) = -1.19, p = 0.241) and MT (t(18.8) = -0.91, p =
0.376).
4.4.2.2 Rotation
There was a significant difference in hand angle across the entire Rotation block,
where overall visuomotor adaptation was greater in CS than in HMD-VR (t(32.8) = 2.15,
p = 0.039). In Early Rotation, the hand angle was larger in CS than in HMD-VR (t(27.8) =
2.31, p = 0.028). However, there was no difference in hand angle in Late Rotation (t(29.6)
= -1.19, p = 0.244). There were no significant differences between training environments
in RT (t(29.9) = -1.09, p = 0.286) and MT (t(21.5) = -0.67, p = 0.512) across the entire
Rotation block. Lastly, there were no significant differences between training
environments for RT (t(32.2) = -1.07, p = 0.293) and MT (t(29.0) = -1.67, p = 0.105) in
Early Rotation, nor in Late Rotation (RT: t(33.9) = -1.14, p = 0.263; MT: t(19.2) = -0.34, p
= 0.739).
Figure 4.3. Results between HMD-VR and CS training environments on visuomotor
adaptation task. Means (M) and standard errors (SE) are plotted across cycles. Inset bar graphs show group M and SE as well as individual means during early (first 4
cycles) and late (last 4 cycles) adaptation. (A) Hand angle, measured as the angle at
which the hand crossed the outer circle, for the HMD-VR and CS training environments.
The hand angle was significantly larger for CS compared to HMD-VR early in adaptation
(t(27.8) = 2.31, p = 0.028); however, by late adaptation there were no differences
between groups (t(29.6) = -1.19, p = 0.244). (B) Aiming angle, measured as the aiming
number reported by the participant, for the HMD-VR and CS training environments. The
aiming angle was significantly larger for CS early in adaptation (t(23.7) = 2.09, p =
0.047) but was significantly larger for HMD-VR late in adaptation (t(14.5) = -2.28, p =
0.038). (C) Implicit adaptation, measured as the difference between aiming angle and
hand angle, for the HMD-VR and CS training environments. While there was no
significant difference in implicit adaptation early in adaptation (t(18.7) = 0.47, p = 0.642),
there were significant group differences late in adaptation (t(17.0) = 2.14, p = 0.047),
where implicit adaptation in CS was larger than HMD-VR. p < 0.05*
4.4.3 Explicit and Implicit Contributions to Visuomotor Adaptation Between
HMD-VR and CS Environments
Before examining the relationship between explicit mechanisms and cognitive
load, we examined the relative contributions of explicit and implicit mechanisms across
the visuomotor adaptation task between training environments. Aiming angle was used
to measure the contributions of explicit, cognitive mechanisms, and implicit adaptation
was used to measure the contributions of implicit mechanisms.
4.4.3.1 Baseline
There were no differences in aiming angle between environments at the End of
Baseline (t(18.8) = 0.48, p = 0.640). There was also no difference in implicit adaptation
between environments at the End of Baseline (t(30.0) = 1.11, p = 0.277). These results
suggest that explicit and implicit mechanisms were similar between environments before
the perturbation was introduced.
4.4.3.2 Rotation
We examined aiming angle across the entire Rotation block and found no
difference between training environments (t(18.1) = -0.27, p = 0.787). However, in Early
Rotation, aiming angle was larger in CS than in HMD-VR (t(23.7) = 2.09, p = 0.047).
Conversely, in Late Rotation, aiming angle was larger in HMD-VR than in CS (t(14.5) = -
2.28, p = 0.038). These results show that while no differences are seen across the entire
adaptation block in aiming angle, this is the result of aiming angle being greater early in
adaptation for the CS environment but greater later in adaptation for the HMD-VR
environment (Figure 4.3B).
Throughout the Rotation block, implicit adaptation was larger in CS than in HMD-
VR (t(17.1) = 2.57, p = 0.020). In Early Rotation, there was no significant difference
between environments (t(18.7) = 0.47, p = 0.642). In Late Rotation, implicit adaptation
was significantly larger in CS than in HMD-VR (t(17.0) = 2.14, p = 0.047). These results
suggest that the differences in implicit adaptation between training environments are
driven by differences developed later in adaptation (Figure 4.3C).
4.4.4 Relationship Between Cognitive Load and Explicit Mechanisms
During Visuomotor Adaptation
We hypothesized that cognitive load influences the explicit, cognitive component
of adaptation. Therefore, we examined the relationship between cognitive load and
aiming strategy in early and late adaptation, as well as across the course of adaptation.
There was a significant relationship between average cognitive load and aiming angle in
Early Rotation (Figure 4.4; F(1,34) = 6.85, R² = 0.168, p = 0.013), where a higher cognitive load was related to a decreased use of an explicit, cognitive strategy early in adaptation. There were no significant relationships in Late Rotation (F(1,34) = 0.31, R² = 0.009, p = 0.579) or throughout Rotation (F(1,34) = 1.97, R² = 0.055, p = 0.170). These results
suggest that the increased cognitive load may affect the explicit, cognitive component of
adaptation, specifically during the early stages of learning.
Figure 4.4. Average cognitive load plotted against aiming
angle during Early Rotation. Increased cognitive load was related to decreased use of the explicit, cognitive mechanism early in adaptation (F(1,34) = 6.85, R² = 0.168, p = 0.013).
4.4.5 Immediate and Long-Term Retention and HMD-VR Context Transfer
To examine retention and HMD-VR context transfer, we compared both immediate
and delayed 24-hour forgetting between CS-R, HMD-VR-R, and HMD-VR-T groups.
There was no significant difference between groups for immediate forgetting (Figure 4.5A;
F(2,27) = 0.81, p = 0.455). There was a trending difference between groups for delayed
24-hour forgetting (Figure 4.5B; F(2,27) = 3.01, p = 0.066). Given our a priori hypothesis
that cognitive load would affect long-term retention and context transfer, we further
explored these results with post hoc analyses at delayed 24-hour forgetting. HMD-VR-R
showed significantly more forgetting than CS-R (t(13.4) = 2.42, p = 0.031), and HMD-VR-
T showed a trend of more forgetting than CS-R (t(9.2) = 2.23, p = 0.052). No differences
were observed between HMD-VR-R and HMD-VR-T (t(14.2) = 0.51, p = 0.618). While
these results are not strong, they suggest that training in HMD-VR could possibly result
in less long-term retention of the adaptation, independent of whether participants were
measured in the same (HMD-VR-R) or in a different (HMD-VR-T) context.
We then examined whether the differences in long-term retention and context
transfer could be explained by differences in explicit or implicit processes. There was a
significant difference between groups for the amount of explicit processes forgotten after
the 24-hour retention interval (Figure 4.5C; F(2,27) = 3.45, p = 0.046). Post hoc analysis
showed both HMD-VR-R and HMD-VR-T groups had significantly greater forgetting of the
explicit process compared to CS-R (HMD-VR-R: t(18.9) = 2.47, p = 0.023; HMD-VR-T:
t(15.0) = 2.23, p = 0.042). No differences were observed between HMD-VR-R and HMD-
VR-T (t(16.6) = 0.02, p = 0.984). Separately, there was no significant difference between
groups for the amount of implicit process forgotten after the 24-hour retention interval
(Figure 4.5D; F(2,27) = 0.42, p = 0.663).
Figure 4.5. Results between HMD-VR and CS training environments on retention and
context transfer. (A) Overall visuomotor adaptation at immediate forgetting was not
significantly different between groups. (B) At delayed 24-hour forgetting, there was
more forgetting in HMD-VR-R than in CS-R (t(13.4) = 2.42, p = 0.031) and a trend of
more forgetting in HMD-VR-T than in CS-R (t(9.2) = 2.23, p = 0.052). (C) Differences in
delayed 24-hour retention could be explained by more explicit process forgetting in
HMD-VR-R than in CS-R (t(18.9) = 2.47, p = 0.023) and in HMD-VR-T than in CS-R
(t(15.0) = 2.23, p = 0.042). (D) Forgetting of implicit process at delayed 24-hour
retention was not significantly different between groups. Dots represent individual
participants and error bars indicate standard error. p < 0.05*, p < 0.1†
We then examined the relationship between forgetting of the explicit process and
overall 24-hour forgetting and found that greater forgetting of explicit processes was
related to decreased long-term retention (Figure 4.6A; F(1,28) = 43.38, R² = 0.608, p < 0.0001). These results suggest that the differences observed in long-term retention between CS and HMD-VR groups can be explained by greater forgetting of explicit processes.
4.4.6 Relationship Between Cognitive Load and Long-Term Motor Memory
Formation
Since we hypothesized that cognitive load would influence long-term motor
memories, we examined the relationship between cognitive load and 24-hour forgetting
across all groups. There was a significant relationship between average cognitive load
and overall 24-hour forgetting (Figure 4.6B; F(1,28) = 4.31, R² = 0.133, p = 0.047), where
a higher cognitive load was related to increased 24-hour forgetting. These results suggest
that increased cognitive load may affect the retention of the learned adaptation.
Figure 4.6. (A) Greater forgetting of the explicit process at delayed 24-hour forgetting
was related to decreased long-term retention, measured by more forgetting of the
overall adaptation (F(1,28) = 43.38, R² = 0.608, p < 0.0001). (B) Increased cognitive load was related to decreased long-term retention, measured by more forgetting of the overall adaptation (F(1,28) = 4.31, R² = 0.133, p = 0.047).
4.4.7 Differences in Simulator Sickness, Presence, Sense of Agency,
Motivation and Subjective Measures Between Training Environments
To assess differences in simulator sickness between training environments, we
compared reports of oculomotor reactions, disorientation, and overall simulator sickness
between training conditions and found no significant differences for any of the measures.
These findings are similar to what was found in Anglin et al. 2017 (Appendix A),
suggesting that visuomotor adaptation does not produce any additional simulator
sickness side effects in HMD-VR compared to CS. Additionally, we compared personal
factor variables from the IPQ and subjective measures questionnaires and found greater
reports of spatial presence and realism in HMD-VR than in CS (Supplementary Table 4.S4). There were no significant differences in general presence and involvement between the two training conditions. As an exploratory analysis, we also compared personal factor variables
from the SoA and IMI questionnaires and found no significant differences between the
two training conditions (Supplementary Table 4.S5).
4.4.8 No Relationship Between Personal Factors and Cognitive Load or
Long-Term Retention and Context Transfer
As an exploratory analysis, we also examined whether any personal factor
variables from the IPQ, SoA, IMI and subjective measures questionnaires predicted either
cognitive load or long-term retention and context transfer but found no significant
correlations among any of the variables.
4.5 Discussion
The purpose of this study was to examine whether cognitive load increases in
HMD-VR during visuomotor adaptation compared to a conventional computer screen
(CS) environment, and whether increased cognitive load relates to long-term retention
and context transfer. This was the first study to our knowledge that compared cognitive
load in HMD-VR with known motor learning mechanisms and examined the relationship
between cognitive load in HMD-VR and long-term motor memory formation. We found
four main results. First, we showed that cognitive load is greater in HMD-VR compared
to CS across adaptation. Second, we showed that higher cognitive load is related to
decreased explicit, cognitive mechanisms, specifically early in adaptation. Third, we
showed that visuomotor adaptation in HMD-VR leads to decreased long-term retention
and context transfer, which appears to be due to greater forgetting of explicit processes.
Fourth, we showed that increased cognitive load is related to decreased long-term motor
memory formation. These findings have important implications for the development of
clinical and motor learning applications in HMD-VR.
4.5.1 Cognitive Load During Visuomotor Adaptation is Greater in HMD-VR
than CS and Related to Decreased Explicit Processes Early in Adaptation
HMD-VR has been shown to increase cognitive load while performing complex
motor skill tasks (Frederiksen et al., 2020). Here, we show that HMD-VR also increases
cognitive load during a specific type of motor learning (i.e., visuomotor adaptation).
Visuomotor adaptation is thought to be driven by explicit and implicit mechanisms. Explicit
mechanisms are important early in adaptation and are thought to rely more on cognitive
brain areas such as the dorsolateral prefrontal and premotor cortices (Anguera et al.,
2008; S. Kim et al., 2015; R. D. Seidler & Noll, 2008). Implicit mechanisms, on the other hand, develop over the course of adaptation and reflect new visuomotor
mappings driven by the anterior-medial cerebellum (S. Kim et al., 2015; Liew et al., 2018).
These mechanisms are thought to work together in order to drive overall adaptation.
In this study, we found that early in adaptation, greater cognitive load was related
to decreased explicit processes and that explicit processes—and subsequently, overall
adaptation performance—were lower in HMD-VR than in CS. One interpretation of these
findings is that greater cognitive load limits the use of explicit processes at the time when
they are the primary drivers of overall adaptation. Put another way, increased cognitive
load in HMD-VR limits the engagement of explicit processes specifically when they are most important for adaptation (i.e., early in adaptation). If this interpretation is true, then cognitive load may have the strongest effects early in the motor learning process. Motor
learning in the real world has been shown to facilitate subsequent motor learning
processes in HMD-VR, suggesting that HMD-VR may be more effectively used in later
stages of motor learning (Takeo et al., 2021). Thus, initial training done without the use
of HMD-VR may then increase the effectiveness of HMD-VR applications.
Another interpretation of these findings is that the engagement of explicit
processes is limited at times when cognitive load is beyond working memory limits. We
found that, while cognitive load was greater in HMD-VR than in CS across adaptation,
overall cognitive load decreased over the course of training. Therefore, cognitive load
may have limited the cognitive resources dedicated to the visuomotor adaptation task
when they were most needed, early in adaptation. If this interpretation is true, then cognitive
load may affect the motor learning process whenever the load on working memory is
beyond working memory limits. HMD-VR applications may need to be designed to
decrease cognitive load throughout training, or training in HMD-VR may need to be
extended to reduce the early effects of cognitive load. Future work should systematically
test these two hypotheses as both interpretations would affect how HMD-VR is effectively
designed and used for motor learning applications.
Furthermore, recent evidence suggests that a history of task errors, as
opposed to a history of sensory prediction errors, is necessary for encoding memories
(Leow et al., 2020). In sensorimotor adaptation, explicit processes are thought to be
driven by task errors and implicit processes are thought to be driven by sensory prediction
errors (Huberdeau et al., 2015). Therefore, a decreased use of explicit processes early in
adaptation in HMD-VR suggests that task errors were less relied on to update overall
adaptation during this time. This could potentially explain how greater cognitive load, by reducing the use of explicit processes early on, could then result in decreased long-term retention.
Similar to our findings in Anglin et al., 2017 (Appendix A), here we again found that
at the end of adaptation, performance was the same whether training in HMD-VR or in
CS, but the mechanisms driving performance were different between environments.
Specifically, at the end of adaptation, the HMD-VR group showed a greater reliance on
explicit mechanisms while the CS group showed a greater reliance on implicit
mechanisms, although the net performance was the same across groups. Our
interpretation of these results is that adaptation in HMD-VR relies more on explicit,
cognitive strategies. If HMD-VR does rely more on explicit processes than implicit
processes, then this can potentially explain why the performance was lower in HMD-VR
than in CS early in adaptation, when explicit processes might have been affected by
increased cognitive load.
One potential explanation for why cognitive load may be higher in HMD-VR could
be how the brain processes vision for action. Vision for action is typically processed
through the dorsal mode of control; however, artificial presentations of depth information
in HMD-VR may cause a shift from a dorsal to ventral mode of control (Ganel & Goodale,
2003; D. J. Harris et al., 2019). A ventral mode of control is thought to be dependent on
visual perception and increased cognitive processes and therefore could potentially
explain increased cognitive load in HMD-VR during motor learning (Goodale et al., 2005).
Separately, depth information has been found to uniquely affect explicit processes and
therefore could also explain why HMD-VR may rely more on explicit, cognitive strategies
(Campagnoli et al., 2021). Further research is needed to examine whether HMD-VR relies
more on a ventral mode of control and whether this shift in control could explain a greater
reliance of explicit processes in HMD-VR.
4.5.2 Visuomotor Adaptation in HMD-VR Leads to Decreased Long-Term
Retention and Context Transfer
In this study, we found that training in HMD-VR resulted in decreased long-term
retention. Importantly, a decrease in retention occurred whether participants remained in
an HMD-VR environment or transferred to a new context. Although the context transfer
results reported here are relatively weak, this was likely due to large variability observed
at delayed 24-hour forgetting (Figure 4.5B). Both findings from long-term retention and
context transfer suggest that training in HMD-VR may lead to less efficient motor
memory formation. That is, retention following training in HMD-VR cannot be explained
by a context interference effect (i.e., better retention in the same environment as training),
but is rather best explained by the training process itself.
Converging evidence suggests that explicit and implicit processes are homologous
with the fast and slow processes of a dual-state model of sensorimotor learning
(McDougle et al., 2015). The fast process generally dominates early in adaptation,
responding strongly to error but exhibiting fast forgetting, while the slow process
increases gradually, becoming stable over time and contributing to motor memory
formation (Joiner & Smith, 2008; M. A. Smith et al., 2006). Importantly, the slow process
is thought to predict long-term retention, suggesting that implicit adaptation may also
predict long-term retention. Implicit adaptation at the end of training was lower in HMD-
VR than in CS and could potentially explain the decreased long-term retention and
context transfer. We also found that long-term retention was related to greater forgetting
of explicit processes. Taken together, these findings suggest that an increased reliance
of explicit processes in visuomotor adaptation may lead to less efficient motor memory
formation, explained by fast forgetting of explicit processes.
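For reference, the dual-state model referred to above is commonly written in the following standard form (M. A. Smith et al., 2006); it is included here only to illustrate the fast and slow dynamics described in this paragraph, not as an analysis performed in this study:

x(n) = x_f(n) + x_s(n)
x_f(n + 1) = A_f · x_f(n) + B_f · e(n)
x_s(n + 1) = A_s · x_s(n) + B_s · e(n)

where x(n) is the net adaptation on trial n, e(n) is the error, the fast process learns more from error but retains less (B_f > B_s, A_f < A_s), and the slow process learns more slowly but retains more, paralleling the explicit and implicit processes discussed above.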
4.5.3 Cognitive Load is Related to Decreased Long-Term Motor Memory
Formation
Cognitive load was found to be related to decreased motor memory formation.
While this relationship was relatively weak, this finding indicates that the cognitive load
increased by HMD-VR could directly affect the retention of a motor memory. Given that
cognitive load during motor learning seems to have a negative effect on motor memory
formation, HMD-VR motor learning applications should consider measuring cognitive
load. Importantly, here we found that for both training environments, the cognitive load
decreased over the course of adaptation. This finding has important implications for HMD-
VR applications because it suggests that even though cognitive load may initially be high,
it can decrease with practice or exposure. Motor learning applications and rehabilitation
interventions in HMD-VR may need more training compared to conventional training
environments to develop similar levels of motor skills. Future studies should examine whether extended practice in HMD-VR could increase retention by further decreasing cognitive load.
4.5.4 Limitations
A limitation of this study was the use of a computer screen to measure context
transfer from HMD-VR. Using a computer screen allowed for a well-controlled study
design as the only difference between HMD-VR and CS environments was the head-
mounted display, which allowed us to control for any transfer effects that may have
occurred due to a task change. However, future studies should examine whether
increased cognitive load in HMD-VR during motor learning affects transfer to more
dynamic, interactive real-world tasks such as manipulating physical objects, like a cup or
a ball. Similarly, because visuomotor adaptation is a specific type of motor learning, these findings may not generalize to the broader domain of motor skill learning (Krakauer et al., 2019). Future studies should examine whether an increased cognitive load
in HMD-VR during motor skill learning also affects long-term retention.
4.6 Conclusions
We show that cognitive load was greater across visuomotor adaptation in HMD-
VR compared to CS and related to decreased explicit processes early in adaptation.
Cognitive load was also found to be related to decreased long-term motor memory
formation, and visuomotor adaptation in HMD-VR resulted in decreased long-term
retention and context transfer. These findings suggest that increased cognitive load in
HMD-VR during motor learning may affect long-term motor memory formation. This study
bridges motor learning mechanisms with a theoretical framework of cognitive load and
examines the impact of cognitive load on motor memory formation. While these findings
suggest that cognitive load may be increased in HMD-VR during motor learning, the
reasons driving this increase remain unclear. Future studies should aim to determine factors
that may lead to increased cognitive load in HMD-VR motor learning.
4.7 Supplementary Material
Table 4.S1. Summary of statistics for visuomotor adaptation training.
Cognitive Load t-value df p-value CS HMD-VR
Baseline -3.12 27.5 0.004 752±108ms 883±138ms
Rotation -2.96 30.4 0.006 744±107ms 876±157ms
Hand Angle t-value df p-value CS HMD-VR
End of Baseline 1.55 34.0 0.130 -0.53±1.18° -1.45±2.38°
Rotation 2.15 32.8 0.039* 44.96±3.24° 41.82±5.45°
Early Rotation 2.31 27.8 0.028* 30.87±11.5° 20.46±14.9°
Late Rotation -1.19 29.6 0.244 45.21±2.5° 46.40±3.5°
Aiming Angle t-value df p-value CS HMD-VR
End of Baseline 0.48 18.8 0.640 0.09±1.2° -0.09±1.0°
Rotation -0.27 18.1 0.787 37.14±7.1° 37.78±5.6°
Early Rotation 2.09 23.7 0.047* 30.47±12.8° 20.74±13.8°
Late Rotation -2.28 14.5 0.038* 34.13±8.0° 39.77±4.5°
Implicit Adaptation t-value df p-value CS HMD-VR
End of Baseline 1.11 30.0 0.277 -0.62±1.62° -1.36±2.3°
Rotation 2.57 17.1 0.020* 7.82±4.5° 4.04±3.3°
Early Rotation 0.47 18.7 0.642 0.40±4.3° -0.29±3.6°
Late Rotation 2.14 17.0 0.047* 11.08±6.4° 6.64±4.7°
Mean and standard deviations are reported. p < 0.05*
Table 4.S2. Summary of statistics for visuomotor adaptation retention.
Hand Angle F-value df p-value CS-R HMD-VR-R HMD-VR-T
Immediate Forgetting 0.81 2,27 0.455 1.51±2.5° 0.11±2.2° 1.27±3.4°
24-hour Forgetting 3.01 2,27 0.066† -0.57±2.4° -5.35±6.0° -7.05±8.4°
Explicit Process F-value df p-value CS-R HMD-VR-R HMD-VR-T
24-hour Forgetting 3.45 2,27 0.046* 4.30±5.2° -1.89±6.3° -1.95±6.8°
Implicit Process F-value df p-value CS-R HMD-VR-R HMD-VR-T
24-hour Forgetting 0.42 2,27 0.663 -4.87±4.1° -3.46±5.2° -5.10±3.6°
Mean ± standard deviations are reported. p < 0.05*, p < 0.1†
Table 4.S3. Results from GLMM examining cognitive load across the visuomotor
adaptation task.
Parameter Estimate Std. Error z value Pr(>|z|)
β₀ Intercept 805.07 31.05 25.93 < 0.0001***
β₁ Cycle -0.83 0.32 -2.61 0.009**
β₂ Environment:HMD-VR 167.39 38.41 4.36 < 0.0001***
β₃ Cycle Number x Environment:HMD-VR -0.80 0.42 -1.89 0.059
Significance for fixed effects p < 0.0001***, p < 0.01**
Table 4.S4. Differences in personal factors between HMD-VR and CS.
U N p-value CS HMD-VR
Oculomotor Reactions 107.5 36 0.216 M = 12.5, SD = 6.6 M = 21.5, SD = 19.5
Disorientation 123.5 36 0.467 M = 0.97, SD = 1.1 M = 1.4, SD = 1.4
Overall Simulator Sickness 106 36 0.206 M = 6.7, SD = 3.5 M = 11.5, SD = 10.0
t-value df p-value CS HMD-VR
General Presence -1.66 21.5 0.1109 M = 3.8, SD = 1.3 M = 4.5, SD = 1.3
Spatial Presence -3.00 24.6 0.006* M = 3.6, SD = 0.9 M = 4.5, SD = 1.0
Involvement -0.26 17.1 0.800 M = 3.2, SD = 1.5 M = 3.3, SD = 1.1
Realism -2.52 22.4 0.019* M = 2.5, SD = 1.0 M = 3.4, SD = 1.0
Age -0.55 24.3 0.585 M = 25.7, SD = 4.3 M = 26.5, SD = 4.8
χ² df p-value CS HMD-VR
Gender 0.06 1 0.809 7 female, 5 male 15 female, 9 male
Video Game Use 0.9 1 0.343 9 Yes, 3 No 21 Yes, 3 No
Previous HMD-VR Experience 0.9 1 0.343 11 Yes, 1 No 19 Yes, 5 No
Significance corrected for multiple comparisons is considered p < 0.0167* for the VRSQ and p < 0.0125* for the IPQ
Table 4.S5. Differences in exploratory personal factors between HMD-VR and CS.
SoA t-value df p-value CS HMD-VR
Cause 0.08 16.9 0.935 M = 5.1, SD = 1.6 M = 5.0, SD = 1.1
Control 1.46 33.4 0.154 M = 5.5, SD = 0.8 M = 5.0, SD = 1.4
Will -0.06 20.2 0.950 M = 4.2, SD = 1.9 M = 4.2, SD = 1.7
IMI t-value df p-value CS HMD-VR
Interest/Enjoyment 0.21 25.3 0.835 M = 4.7, SD = 1.2 M = 4.6, SD = 1.4
Perceived Competence 0.23 18.1 0.817 M = 3.5, SD = 1.4 M = 3.6, SD = 1.1
Effort/Importance 0.60 28.9 0.554 M = 5.7, SD = 0.9 M = 5.5, SD = 1.2
Pressure/Tension 0.14 30.8 0.889 M = 2.9, SD = 0.9 M = 2.8, SD = 1.3
Significance corrected for multiple comparisons is considered p < 0.0167* for the SoA and p < 0.0125* for the IMI
Chapter 5: Visual Processing of Action Towards Virtual 3D
Objects in HMD-VR
This chapter is adapted from:
Juliano J.M., Phanord C., & Liew S.L. (2022) Visual processing of action directed toward
three-dimensional objects in immersive virtual reality may involve holistic processing of
object shape. Manuscript submitted for publication.
5.1 Abstract
Immersive virtual reality using a head-mounted display (HMD-VR) is increasing in
use for motor learning and motor skill training. However, it remains unclear how visual
information for action is processed in an HMD-VR environment. In the real world, actions
towards three-dimensional (3D) objects are processed analytically and are immune to
perceptual effects, such as processing object dimensions irrelevant to performing the
action (i.e., holistic processing). However, actions towards two-dimensional (2D) objects
are processed holistically and are susceptible to perceptual effects. In HMD-VR,
distances are often underestimated, and the environment can appear flatter compared to
the real world. Thus, actions towards virtual 3D objects in HMD-VR may be processed
more like 2D objects and involve holistic processing, which is susceptible to perceptual
effects. In an initial study, we used a Garner interference task to examine whether vision-
for-action in HMD-VR without haptic feedback is processed holistically and hypothesized
that vision-for-action towards virtual 3D objects in HMD-VR would result in a Garner
interference effect, suggesting holistic processing. We found Garner interference effects
for reaction times to reach maximum grip aperture and to complete movement. These
results show that visual processing of actions towards virtual 3D objects in HMD-VR may
involve holistic processing of object shape. These findings demonstrate that visual
information for action in HMD-VR without haptic feedback is processed differently
compared to real 3D objects and is susceptible to perceptual effects, which could affect
motor skill training in HMD-VR. Future studies should examine whether haptic feedback
in HMD-VR-based grasping avoids perceptual effects.
5.2 Introduction
Visual processing for perception (vision-for-perception) and visual processing for
action (vision-for-action) are thought to rely on two distinct but interacting cortical
processing routes (Goodale, 2011; Goodale & Milner, 1992). Evidence from both lesion
and imaging studies have demonstrated that vision-for-perception relies heavily on the
ventral stream, projecting from the primary visual area (V1) to the inferior temporal cortex,
while vision-for-action relies heavily on the dorsal stream, projecting from V1 to the
posterior parietal cortex (Creem & Proffitt, 2001; Goodale, 2014; Goodale et al., 2005;
Milner et al., 1991). The computations for vision-for-perception are also thought to be
different compared to vision-for-action. Vision-for-perception is thought to rely on holistic
processing, meaning that individual features and their spatial relations are perceived as
a combined whole. However, vision-for-action is suggested to rely on analytical
processing, where only the relevant features are considered without being influenced by
other irrelevant information (Ganel & Goodale, 2003; Goodale, 2014).
Accumulating evidence for the perception–action distinction has been shown
through behavioral experiments involving real three-dimensional (3D) objects (Goodale,
2011, 2014). One such piece of experimental evidence can be observed through a
psychophysical principle known as Weber’s law. Weber’s law states that the minimum
amount to detect a difference between two objects, called the just noticeable difference
(JND), depends on the magnitude of the object (e.g., the larger the object, the larger the
JND). Vision-for-perception for real 3D objects are shown to adhere to Weber’s law and
the JND increases linearly with object size (Ganel et al., 2008; Heath et al., 2017; Heath
& Manzone, 2017). However, vision-for-action for real 3D objects does not adhere to
Weber’s law and the JND is unaffected by changes in object size (Ayala et al., 2018;
Ganel, 2015; Ganel et al., 2008, 2014; Heath et al., 2017). Evading the influence of
Weber’s law suggests that actions are processed in an efficient manner and are not
susceptible to perceptual effects. Other experimental evidence can be observed through
a selective attention paradigm known as Garner interference, in which irrelevant
information about an object interferes with the processing of relevant information.
interference effect is found when making speeded judgements (vision-for-perception) of
real 3D objects, but not when performing speeded grasps (vision-for-action) towards real
3D objects (Ganel & Goodale, 2003, 2014). Garner interference and a lack thereof
suggests holistic and analytical processing of a single dimension relative to the object’s
other dimensions, respectively. Together, this work demonstrates that analytical
processing of vision-for-action for real 3D objects is not susceptible to perceptual effects.
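For concreteness, Weber's law as described above can be written as a proportional relationship between the just noticeable difference and stimulus magnitude; the following formulation is a standard textbook expression rather than one taken from the cited studies:

\[ \Delta I = k \, I \quad \Longleftrightarrow \quad \frac{\Delta I}{I} = k \]

where \( \Delta I \) is the JND, \( I \) is the magnitude of the relevant object dimension (e.g., its width), and \( k \) is the Weber fraction. Adherence to Weber's law therefore predicts a JND that grows linearly with object size, whereas a violation predicts a JND that stays approximately constant across sizes, as reported for grasping real 3D objects.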
In a technologically advancing world, it is common to interact with two-dimensional
(2D) objects such as smartphones and tablets. As opposed to real 3D objects,
accumulating evidence suggests that vision-for-action directed at 2D objects is
susceptible to perceptual effects (Ganel et al., 2020). Unlike real 3D objects, action
towards 2D objects has been shown to adhere to Weber’s law, indicating a susceptibility
to perceptual effects (Holmes & Heath, 2013; Ozana, Namdar, et al., 2020; Ozana &
Ganel, 2018, 2019a, 2019b). Action towards 2D objects has also been shown to produce
a Garner interference effect, indicating holistic processing (Freud & Ganel, 2015; Ganel
et al., 2020). Holistic processing suggests that irrelevant information may also be
processed during visuomotor control. Together, this work shows that vision-for-action
when movements are directed at 2D objects involves holistic processing and is
susceptible to perceptual effects.
A particular technology increasing in use for motor learning purposes (e.g., motor
rehabilitation and surgical training) is virtual reality using a head-mounted display (HMD-
VR) (Huber et al., 2017; Levin, 2020). A driving factor for using HMD-VR for motor
learning purposes includes the ability to replicate the real world, allowing for users to
interact with virtual 3D objects in a fully adaptable and controlled environment. However,
it is not clear whether vision-for-action directed towards virtual 3D objects is similar to that
towards real 3D objects. Several pieces of evidence suggest that vision-for-action in HMD-VR may
shift users away from dorsal stream processing to more ventral stream processing (D. J.
Harris et al., 2019). A first is that distances in HMD-VR are often underestimated and the
environment is perceived as seemingly flat, suggesting that the HMD-VR environment may be processed
more as 2D compared to the real world (Kelly et al., 2017; Phillips et al., 2009). A second
is recent findings that grasping virtual 3D objects without haptic feedback adheres to
Weber’s law, and is therefore susceptible to perceptual effects (Ozana et al., 2018;
Ozana, Berman, et al., 2020). Taken together, these findings suggest that vision-for-
action for grasping virtual 3D objects in HMD-VR may be processed more like 2D objects
and importantly, that actions in HMD-VR may involve holistic processing and be
susceptible to perceptual effects. This has implications for motor skill training in HMD-VR
as differences in underlying motor learning mechanisms between HMD-VR and the real
world have been shown to result in a lack of contextual transfer (D. J. Harris et al., 2019;
Juliano et al., 2021).
Research has shown that haptic-free HMD-VR is a useful platform to study reach-
to-grasp movements (Mangalam et al., 2020). Moreover, motor learning applications in
HMD-VR often do not provide haptic feedback because the technology is either expensive
or unable to provide realistic feedback (Borst & Volz, 2005; Burdea, 2000; Seth et al.,
2011). Therefore, in this initial study we examined whether grasping virtual 3D objects
without haptic feedback in HMD-VR produces a Garner interference effect. Participants
completed a Garner interference task similar to the one used by Ganel and Goodale
(2003). In this task, participants reach to 3D rectangular objects of varying widths and
lengths, grasping the virtual object by the width. The task includes two different
conditions: a Baseline condition where the width (relevant dimension) varies while the
length (irrelevant dimension) remains constant, and a Filtering condition where both the
width and length vary. Greater reaction times and response variability in the Filtering
condition compared to the Baseline condition indicate a Garner interference effect (Ganel
et al., 2008; Ganel & Goodale, 2003, 2014). We hypothesized that vision-for-action
towards virtual 3D objects without haptic feedback in HMD-VR would result in a Garner
interference effect, suggesting holistic processing.
5.3 Methods
5.3.1 Participants
Eighteen participants were recruited for the study (10 female/8 male; age: M =
26.7 years, SD = 4.8). Eligibility criteria included right-handed individuals with no neurological
impairments and normal or corrected-to-normal vision. Data was collected in-person
during the COVID-19 pandemic and all participants wore surgical masks for the duration
of the experiment. Written informed consent was electronically obtained from all
participants to minimize in-person exposure time at the lab. The experimental protocol
was approved by the USC Health Sciences Campus Institutional Review Board and
performed in accordance with the 1964 Declaration of Helsinki.
5.3.2 Experimental Apparatus
The task was programmed using the game engine development tool, Unity 3D
(version 2019.4.11f1), and participants wore an Oculus Quest headset (1,832 x 1,920
pixels per eye, 60 Hz) while completing the task. When wearing the headset, participants
were immersed in an environment modeled after the real-world room they were located
in and sat in a chair in front of a virtual table (Figure 1A). Participants also observed their
right hand, which was tracked by the built-in cameras located on the outside of the
headset. The positions of the index finger, thumb, and wrist were recorded at each frame
for the duration of the session. A 60 Hz sampling rate was used where the index finger
and thumb positions were measured at the fingertips and the wrist position was measured
at the wrist center. The Euclidean distance between the index finger and thumb was used
to measure grip aperture and was transformed from Unity 3D coordinates to millimeters
(mm).
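To make this computation concrete, a minimal sketch is given below in Python. It is illustrative only: the task itself was implemented in Unity 3D, and the function name, the array layout, and the assumption that Unity world units correspond to meters are ours rather than details reported above.

import numpy as np

def grip_aperture_mm(index_tip, thumb_tip, units_to_mm=1000.0):
    """Euclidean distance between index fingertip and thumb tip, converted to mm.

    index_tip, thumb_tip : arrays of shape [n_frames, 3] sampled at 60 Hz.
    units_to_mm : assumed scale factor (1 Unity unit = 1 m = 1000 mm); the actual
    conversion depends on the scene's scale.
    """
    index_tip = np.asarray(index_tip, dtype=float)
    thumb_tip = np.asarray(thumb_tip, dtype=float)
    aperture = np.linalg.norm(index_tip - thumb_tip, axis=-1)  # per-frame distance
    return aperture * units_to_mm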
Figure 5.1. (A) The real-world room modelled in the HMD-VR environment. (B) Each
trial began by participants placing their index finger and thumb on a virtual button
located on a virtual table. (C) After pinching on the button for 3000 ms, a virtual
rectangular object appeared and participants quickly moved to grab the object by the
width, squeezing the edges and holding the position until it disappeared, ending the trial
(2000 ms).
The participant's virtual hand could interact with two types of virtual objects: a button (Figure
1B) and four rectangular objects (Figure 1C). The dimensions of the button were 10 mm
in diameter and 12 mm in height. The dimensions of the four virtual rectangular objects
were created from a factorial combination of two different widths (narrow: 30 mm, wide:
35.7 mm) with two different lengths (narrow: 63 mm, wide: 75 mm) (Figure 2A) (Freud &
Ganel, 2015; Ganel & Goodale, 2003). The heights of the rectangular objects were 15
mm. Participants did not experience haptic feedback when coming in contact with the
virtual objects; however, they were instructed to treat all objects as if they were real.
Moreover, participants were able to influence both types of virtual objects when coming into
contact with them through colliders in Unity 3D. Specifically, pinching the button
would change the color to green and trigger a timer, but if either the index finger or thumb
left the button before a rectangular object appeared (after 3000 ms of pinching), the button
would change to red, and the timer would restart until the button was pinched again.
Similarly, grasping the rectangular object would trigger a timer, but if either the index
finger or thumb left the object before it disappeared (after 2000 ms of grasping), the timer
would restart until the object was grasped again. This was done through the Unity 3D physics
engine, which uses colliders to determine when the index finger and thumb interacted with
either the button or the rectangular objects and gave participants the ability to influence the
virtual objects.
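As an illustration of this interaction logic, the sketch below reframes it as a simple per-frame state machine in Python. This is a schematic reconstruction rather than the Unity code used in the experiment; the state names, function signature, and contact flags are invented for illustration, while the 3000 ms and 2000 ms hold durations come from the description above.

from enum import Enum, auto

class TrialState(Enum):
    WAIT_FOR_PINCH = auto()    # waiting for index finger and thumb to touch the button
    HOLD_BUTTON = auto()       # pinch must be held for 3000 ms before the object appears
    REACH_AND_GRASP = auto()   # object visible; waiting for a grasp on its width
    HOLD_OBJECT = auto()       # grasp must be held for 2000 ms to end the trial
    TRIAL_COMPLETE = auto()

def update_trial(state, held_ms, pinching_button, grasping_object, dt_ms):
    """Advance the trial by one frame; hold timers restart whenever contact is lost."""
    if state == TrialState.WAIT_FOR_PINCH:
        return (TrialState.HOLD_BUTTON, 0.0) if pinching_button else (state, 0.0)
    if state == TrialState.HOLD_BUTTON:
        if not pinching_button:               # button turns red and the timer restarts
            return TrialState.WAIT_FOR_PINCH, 0.0
        held_ms += dt_ms
        return (TrialState.REACH_AND_GRASP, 0.0) if held_ms >= 3000 else (state, held_ms)
    if state == TrialState.REACH_AND_GRASP:
        return (TrialState.HOLD_OBJECT, 0.0) if grasping_object else (state, 0.0)
    if state == TrialState.HOLD_OBJECT:
        if not grasping_object:               # grasp lost, so the timer restarts
            return TrialState.REACH_AND_GRASP, 0.0
        held_ms += dt_ms
        return (TrialState.TRIAL_COMPLETE, 0.0) if held_ms >= 2000 else (state, held_ms)
    return state, held_ms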
Figure 5.2. (A) The dimensions of the four virtual rectangular objects, created
from a factorial combination of two different widths (Narrow Width: 30 mm,
Wide Width: 35.7 mm) and two different lengths (Narrow Length: 63 mm,
Wide Length: 75 mm). Participants grasped objects along the width;
therefore, the width is considered the relevant dimension while the length
is considered the irrelevant dimension. (B) Participants completed
baseline and filtering condition blocks, counterbalanced across
participants. In the baseline condition, only the width varied across trials.
In the filtering condition, both width and length varied across trials.
5.3.3 Experimental Design
Participants completed two baseline condition blocks and two filtering condition
blocks, counterbalanced across participants (Figure 2B). Each block started with 4
practice trials which were excluded from the analysis. Then, participants completed 32
trials where, in each trial, participants reached for a virtual rectangular object. In the
baseline blocks, rectangular objects randomly varied between trials only in the width
dimension relevant for grasping, and participants completed one block with the two
narrow length objects and one block with the two wide length objects. In the filtering
blocks, rectangular objects randomly varied between trials in both the relevant width
dimension and the irrelevant length dimension and were divided equally between two
blocks.
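To make the block structure concrete, the sketch below generates illustrative trial lists in Python. The object dimensions, trial counts, and practice trials follow the description above; the function itself is an assumption for illustration and uses simple random sampling rather than the exact randomization scheme of the experiment.

import random

WIDTHS = {"narrow": 30.0, "wide": 35.7}    # relevant (grasped) dimension, mm
LENGTHS = {"narrow": 63.0, "wide": 75.0}   # irrelevant dimension, mm

def make_block(condition, fixed_length=None, n_trials=32, n_practice=4):
    """Build one block of (width, length) trials.

    Baseline blocks vary only the width at a fixed length; filtering blocks vary
    both dimensions. Practice trials are prepended and excluded from analysis.
    """
    if condition == "baseline":
        objects = [(w, LENGTHS[fixed_length]) for w in WIDTHS.values()]
    else:  # filtering: all four width-by-length combinations
        objects = [(w, l) for w in WIDTHS.values() for l in LENGTHS.values()]
    return [random.choice(objects) for _ in range(n_practice + n_trials)]

# For example, one baseline block using the two narrow-length objects
# and one filtering block using all four objects:
baseline_block = make_block("baseline", fixed_length="narrow")
filtering_block = make_block("filtering")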
At the start of each trial, participants placed their index finger and thumb on the
virtual button located on the edge of the virtual table (Figure 1B). Participants were
instructed to pinch the walls of the button as if they were pinching a real button. After
pinching the button for 3000 ms, a virtual rectangular object appeared on the table
approximately 30 cm away from the participant’s head. Participants were instructed to
move as quickly as possible to grab the rectangular object by the width (Figure 1C). When
grasping the rectangular object, participants were instructed to pretend that they were
about to pick up the object by squeezing the edges, holding the position until it
disappeared and the trial ended (2000 ms).
5.3.4 Movement Analysis
All kinematic data was recorded by Unity 3D. The reaction time (RT) to initiate
movement was measured as the time between when the rectangular object appeared
and the time when both index finger and thumb left the button. The RT to complete
movement was measured as the time between when both index finger and thumb left the
button and the time when both index finger and thumb collided with the rectangular object.
RT to reach maximum grip aperture (MGA) was measured as the time between when both index
finger and thumb left the button and the time when the distance between the index
finger and thumb was greatest during the movement trajectory.
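These definitions can be expressed directly in terms of the recorded event times and the per-frame aperture trace. The Python sketch below is illustrative only; the variable names and data layout are assumptions rather than the analysis code used in the study.

import numpy as np

def reaction_times(t_object_appeared, t_left_button, t_object_contact,
                   frame_times, aperture_mm):
    """Compute the three RT measures in the same time units as the inputs.

    t_object_appeared : time the rectangular object appeared
    t_left_button     : time both index finger and thumb left the button
    t_object_contact  : time both digits collided with the rectangular object
    frame_times, aperture_mm : NumPy arrays of per-frame timestamps and grip aperture
    """
    rt_initiate = t_left_button - t_object_appeared
    rt_complete = t_object_contact - t_left_button

    # Restrict to the movement itself, then locate the frame of maximum grip aperture.
    moving = (frame_times >= t_left_button) & (frame_times <= t_object_contact)
    t_mga = frame_times[moving][np.argmax(aperture_mm[moving])]
    rt_mga = t_mga - t_left_button
    return rt_initiate, rt_mga, rt_complete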
We also calculated the response precision, measured by the within-subject
standard deviation of MGA in baseline and filtering blocks (Freud & Ganel, 2015; Ganel
& Goodale, 2014). To do this, the standard deviations for each of the four rectangular
objects were computed and then averaged across participants for baseline and filtering
blocks.
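Concretely, this precision measure reduces to a per-object standard deviation of MGA, averaged within each condition. The short sketch below assumes a long-format table with one row per trial and is illustrative rather than the study's analysis code.

import pandas as pd

def response_precision(trials: pd.DataFrame) -> pd.Series:
    """Within-subject SD of MGA, averaged over the four objects, per condition.

    `trials` is assumed to have columns 'participant', 'condition', 'object', 'mga_mm'.
    """
    sd_per_object = (trials
                     .groupby(["participant", "condition", "object"])["mga_mm"]
                     .std())
    return sd_per_object.groupby(level=["participant", "condition"]).mean()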
To validate that the spatial resolution recorded from the Oculus Quest was
sufficient to accurately characterize subtle differences in movement trajectories in
baseline and filtering conditions, we assessed whether MGAs were sensitive to width size
in both conditions. We also calculated the grip aperture for both narrow and wide width
rectangular objects across the movement trajectories in both conditions. Movements were
segmented into 10 normalized time points from movement initiation (0%) to movement
completion (100%), binned in 10% increments. Then, movement trajectories were
averaged for narrow and wide width rectangular objects and collapsed across conditions
(Freud & Ganel, 2015).
5.3.5 Statistical Analysis
Statistical analyses were conducted using R (version 3.6.3). Participant RTs and
MGAs greater than three times the standard deviation of the mean were removed from
the final analyses (2% of trials). Paired t-tests were used to compare RTs and within-
subject standard deviation of MGA between the baseline and filtering conditions. All
measures are reported as mean ± standard deviations and significance was considered
at p < 0.05.
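The outlier rule and the paired comparisons map onto standard routines. The sketch below uses Python's SciPy rather than the R code actually used for the analyses, and the variable names are assumptions for illustration.

import numpy as np
from scipy import stats

def remove_outliers(values):
    """Drop trials more than three standard deviations from the mean."""
    values = np.asarray(values, dtype=float)
    keep = np.abs(values - values.mean()) <= 3 * values.std()
    return values[keep]

def compare_conditions(baseline_means, filtering_means, alpha=0.05):
    """Paired t-test on per-participant means (e.g., RT to reach MGA per condition)."""
    t_stat, p_value = stats.ttest_rel(baseline_means, filtering_means)
    return t_stat, p_value, p_value < alpha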
5.4 Results
RTs to initiate movement, to reach MGA, and to complete movement were
averaged for each participant for both baseline and filtering conditions (Figure 3). Average
RT to reach MGA in the filtering condition was significantly slower than in the baseline
condition (Figure 3B; t(17) = 2.17, p = 0.044; Baseline: 462±143.4 ms; Filtering:
505±185.9 ms). Similarly, average RT to complete movement in the filtering condition
was significantly slower than in the baseline condition (Figure 3C; t(17) = 2.29, p = 0.035;
Baseline: 784±254.8 ms; Filtering: 858±282.2 ms). These results show that grasping
virtual 3D objects in HMD-VR is susceptible to a Garner interference effect, and thus is
subserved by a holistic representation. In contrast, average RTs to initiate movement
were similar between the filtering and baseline conditions (Figure 3A; t(17) = 1.17, p =
0.260; Baseline: 479±47.7 ms; Filtering: 485±44.7 ms). This result shows that initiating
movement to virtual 3D objects in HMD-VR may not be susceptible to a Garner
interference effect.
Figure 5.3. (A) Average time to initiate movement was similar between conditions (t(17)
= 1.17, p = 0.260). (B-C) Average time to reach MGA (t(17) = 2.17, p = 0.044) and
average time to complete movement (t(17) = 2.29, p = 0.035) were significantly slower
in the filtering condition than in the baseline condition. Red lines represent individuals
with RT greater in baseline condition than in filtering condition and blue lines represent
individuals with RT greater in filtering condition than in baseline condition. Error bars
represent standard error. * p < 0.05.
Response precision was similar between the filtering condition and the baseline
condition (Figure 4; t(17) = -0.59, p = 0.561; Baseline: 11.0±4.4 mm; Filtering: 10.6±4.0
mm). This result shows no evidence for variability-based Garner interference effects for
grasping virtual 3D objects in HMD-VR.
A two-way repeated measures ANOVA was used to examine the main effects and
interactions of width size (Narrow, Wide) and condition (Baseline, Filtering) on average
MGA. Sensitivity to object width size was similar for baseline and filtering conditions
(Figure 5A). There was a main effect of width size (F(1,17) = 28.63, p < 0.0001) but no
main effect of condition or interaction between width size and condition (p > 0.5). Post
hoc analysis showed MGAs to be greater for the wide width compared to the narrow width
for baseline (t(17) = 4.65, p < 0.001) and filtering (t(17) = 3.01, p = 0.008) conditions.
These results are consistent with previous studies using similar analyses of movements towards
real 3D objects (e.g., Ganel et al. 2012; Freud et al. 2015) and show that the resolution
of the Oculus Quest tracking system is sufficient to accurately characterize subtle
differences in reaching and grasping during baseline and filtering conditions.
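For readers who want to reproduce this type of analysis, a two-way repeated measures ANOVA of this form can be run, for example, with statsmodels' AnovaRM in Python (the study's analyses were run in R). The sketch below assumes a long-format table with one mean MGA value per participant, width size, and condition.

import pandas as pd
from statsmodels.stats.anova import AnovaRM

def rm_anova_mga(mga: pd.DataFrame):
    """Two-way repeated measures ANOVA on MGA.

    `mga` is assumed to have columns 'participant', 'width' (Narrow/Wide),
    'condition' (Baseline/Filtering), and 'mga_mm', aggregated to one value per cell.
    """
    model = AnovaRM(data=mga, depvar="mga_mm", subject="participant",
                    within=["width", "condition"])
    return model.fit()   # .anova_table contains the F statistics and p values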
Figure 5.4. Average within-subject standard
deviation of MGA was similar between
conditions (t(17) = -0.59, p = 0.561). Red lines
represent individuals with a greater standard deviation of MGA
in the baseline condition than in the filtering condition
and blue lines represent individuals with a greater standard
deviation of MGA in the filtering condition than in the baseline
condition. Error bars represent standard error.
Movement towards 3D virtual objects also showed behavior similar to movement
toward 3D real-world objects. That is, grip aperture for narrow width objects was smaller
than for wide width objects (Figure 5B). A three-way repeated measures ANOVA was
used to examine the main effects and interactions of width size (Narrow, Wide), condition
(Baseline, Filtering), and normalized movement time on grip aperture. There were main
effects of width size (F(1,17) = 41.49, p < 0.0001) and movement time (F(9,153) = 84.53,
p < 0.0001), but not of condition (p > 0.5). As expected, there were significant two-way
interactions between width size and movement time (F(9,153) = 25.09, p < 0.0001) and between
condition and movement time (F(9,153) = 2.37, p = 0.015), but not between width size
and condition (p > 0.5). There was also a significant three-way interaction between width
size, condition, and movement time (F(9,153) = 3.95, p = 0.0002). These results further
show that recorded grip aperture sufficiently characterized the subtle differences in
movement trajectories during baseline and filtering conditions between narrow and wide
widths. Moreover, MGA was reached around 70% of the movement, similar to movement
trajectories towards real 3D objects (e.g., Ganel et al. 2012). Taken together, while there
were no direct comparisons made to grasping 3D objects in the real world, these results
resemble those of movements previously reported in similarly designed studies (e.g., Freud
and Ganel 2015).
Figure 5.5. (A) Sensitivity to object width size was found for both baseline (t(17) =
4.65, p < 0.001) and filtering (t(17) = 3.01, p = 0.008) conditions, with larger MGAs for
the wide width compared to the narrow width. Purple lines represent individuals with
average MGA greater with narrow width compared to wide width and green lines
represent individuals with average MGA greater with wide width compared to narrow
width. (B) Average grip aperture across the movement trajectory. After normalizing
movements into 10 time points, movement trajectories were averaged for narrow and
wide width rectangular objects and collapsed across conditions. Maximum grip aperture
reached a peak at around 70% of the movement time, resembling movement
trajectories toward real 3D objects. Error bars represent standard error. ** p < 0.01, *** p < 0.001.
5.5 Discussion
The purpose of this study was to examine whether vision-for-action directed toward
virtual 3D objects without haptic feedback in HMD-VR is processed holistically. We found
that grasping virtual 3D objects in HMD-VR produced a Garner interference effect for
reaction times to reach maximum grip aperture and to complete movement. This
demonstrates that vision-for-action in HMD-VR may involve holistic processing during
movement. However, both reaction times to initiate movement and the response precision
of maximum grip aperture did not produce a Garner interference effect. The lack of
significant Garner interference for movement initiation is not surprising, as this is also
observed for grasping directed at 2D objects and is possibly due to grasping in HMD-VR
being hybrid in nature (Freud & Ganel, 2015). The lack of Garner interference for
response precision could also be because grasping in HMD-VR is hybrid in nature.
However, this could also be explained by a ceiling effect resulting from how finger position
was measured, discussed in detail in the limitations. Overall, these findings from reaction
times demonstrate that certain aspects of visual processing for action in HMD-VR without
haptic feedback are processed differently compared to real 3D objects and involve holistic
processing, which is susceptible to perceptual effects.
HMD-VR has been increasing in use for motor learning purposes, such as in
surgical training (Mao et al., 2021), sports training (Bird, 2020), and motor rehabilitation
(Levin, 2020). Reasons for this increase include the ability for HMD-VR to replicate a real-
world environment while simultaneously increasing control of the training environment.
However, recent evidence suggests that motor learning in HMD-VR may differ from the
real world (Levac et al., 2019). One difference is that the underlying motor learning
mechanisms used in HMD-VR seem to differ from a conventional training environment.
Specifically, HMD-VR has been shown to recruit greater explicit, cognitive strategies during
visuomotor adaptation compared to a conventional computer screen (J. M. Anglin et al.,
2017). The results of the current study add further support that the underlying
mechanisms driving movement in HMD-VR differ from the real world.
Converging evidence has also found cognitive load to be greater during motor
learning tasks in HMD-VR compared to conventional training environments (Frederiksen
et al., 2020; Juliano et al., 2021). While untested, a potential reason for increased cognitive
load could be that vision-for-action in HMD-VR involves holistic processing. Holistic
processing could suggest that information unrelated to the task, such as irrelevant objects
in the HMD-VR environment, may also be processed during visuomotor control. Studies
have found that a failure to filter out irrelevant information during movement can result in
increased cognitive load (Jost & Mayr, 2016; Thoma & Henson, 2011). Therefore, if the
holistic processing observed here results from a failure to filter out irrelevant information,
then this could potentially explain increased cognitive load sometimes observed during
motor learning in HMD-VR. Increased cognitive load in HMD-VR during motor learning
has been shown to negatively affect long-term retention and context transfer (Juliano et
al., 2021). Action towards 2D objects is also affected by irrelevant perceptual
information and has been shown to result in context-dependent learning (Ozana et al., 2018).
Thus, if visuomotor control in HMD-VR is similarly affected by irrelevant perceptual
information, then contextual transfer from HMD-VR could be limited, potentially explaining
decreased context transfer of motor learning in HMD-VR to the real world (Juliano & Liew,
2020; Levac et al., 2019).
One potential explanation for a mechanistic shift to more ventral stream visuomotor
control is the artificial presentation of distance cues used to perceive depth in an HMD-
VR environment (refer to Harris et al. 2019 for a more detailed discussion). Briefly, depth
perception occurs from incorporating both monocular cues (e.g., texture, shadows) and
binocular cues (e.g., disparity, convergence). Monocular cues are thought to be primarily
processed by the ventral stream and binocular cues are thought to be primarily processed
by the dorsal stream (Minini et al., 2010; Parker, 2007). Binocular cues are shown to be
important for effective online control of hand movements in depth (Hu & Knill, 2011). While
we provided depth cues in our HMD-VR environment (see Figure 1), depth perception
relying on binocular cues has been shown to result in greater inaccuracies and misestimations
in HMD-VR compared to the real world (Hornsey & Hibbard, 2021; Jamiy & Marsh, 2019a,
2019b; Ping et al., 2019). Therefore, HMD-VR may require more of a reliance on
monocular cues in order to compensate for inaccurate binocular cues (Jamiy & Marsh,
2019b). A reliance on monocular cues during prehension has been shown to result in
longer movement times (Mon-Williams & Dijkerman, 1999). Our results similarly show
longer reaction times specifically during movement. Therefore, inaccurate binocular cues
in HMD-VR could potentially explain why perceptual effects were isolated to the outcome
measures related to movement.
Realism also plays a role in depth perception and in making accurate distance
estimations (Hibbard et al., 2017). Imaging studies have found that reaching towards and
grasping real 3D objects and 2D objects rely on different neural representations.
Specifically, the left anterior intraparietal sulcus, important in the control of voluntary
attention and visually guided grasping, is thought to process object realness during
planning and generate a forward model for visuomotor control of real 3D objects but not
2D images (Freud et al., 2018; Geng & Mangun, 2009). This finding emphasizes the role
of the dorsal visual stream as being important in “vision for real actions upon real objects”
(Freud et al., 2018). As opposed to 2D objects, our results did not show a Garner
interference effect for precision of the response (Freud & Ganel, 2015). One potential
explanation for this is that virtual 3D objects resemble real 3D objects more closely than
2D objects and therefore allow for more precise and realistic movements than 2D objects do.
Future work should examine whether realism in HMD-VR is related to holistic processing.
There were multiple limitations to this study. The first was the lack of haptic
feedback given to participants when grasping virtual 3D objects. This was purposefully
done given that haptic-free HMD-VR is shown to be a useful platform to study reach-to-
grasp movements and current HMD-VR-compatible haptic technology has limitations
(Borst & Volz, 2005; Burdea, 2000; Mangalam et al., 2020; Seth et al., 2011). However,
haptic feedback may allow for more efficient analytical processing during grasping and
thus could avoid the perceptual effects observed in the current study (Ozana & Ganel,
2019b). It is therefore critical for future studies to examine whether including haptic
feedback in HMD-VR-based grasping would avoid a Garner interference effect. If so, this
would be a strong case that motor learning in HMD-VR would benefit greatly from the use
of haptic feedback or interaction with physical objects. Relatedly, to avoid any interference
with objects in the real world and to generate an experience similar to that of a number of
HMD-VR applications, our experimental design did not have participants come into
contact with any of the objects in the virtual room. This absence of tactile feedback could
also potentially explain the perceptual effects observed in this study. Future studies
should examine whether this type of feedback could also avoid a Garner interference
effect in HMD-VR-based grasping.
Another limitation of this study was that the location where the index finger and
thumb positions were measured was at the fingertips. As a result, the grip aperture
appears artificially wide, which explains the large precision errors observed (i.e., > 10
mm). While this measurement has no effect on the reaction time results, it does affect
the precision results and thus, the lack of Garner interference observed could partially be
explained by a ceiling effect. Future studies should measure index finger and thumb
positions at the finger pads to examine whether precision does result in a Garner
interference effect in HMD-VR-based grasping.
Another limitation of this study was the use of the built-in cameras located on the
outside of the Oculus Quest headset to track hand movements. This was used as it
reflected the data that generated the virtual hand observed by participants in the HMD-
VR environment and previous evidence has found this type of hand tracking to be suitable
for research (Holzwarth et al., 2021). As a validation for the spatial resolution, we showed
that this type of tracking system was sufficient to accurately characterize subtle
differences in movement trajectories in baseline and filtering conditions. However, as the
sampling rate was lower compared to similarly designed studies, there may have been
undetected effects that could have otherwise been observed with superior motion
tracking systems. Future studies should utilize devices with a higher sampling rate in
addition to the Oculus Quest to compare and validate these findings.
Despite the above limitations, we believe the findings from this study offer an initial
examination of visual processing of actions directed towards 3D objects in HMD-VR and
show that movements without haptic feedback in HMD-VR involve holistic processing
of object shape.
5.6 Conclusion
We show that grasping virtual 3D objects without haptic feedback in HMD-VR is
susceptible to a Garner interference effect. However, not all outcome measures typically
used for this experimental design reflected this effect, suggesting that grasping in HMD-
VR may be hybrid in nature. This study offers an initial examination of vision-for-action in
HMD-VR and shows that visual processing of action directed toward virtual 3D objects in
HMD-VR may involve holistic processing of object shape. Furthermore, this study offers
supporting evidence that movements in HMD-VR may be susceptible to perceptual
effects. Future studies should examine whether perceptual effects could be avoided with
the inclusion of haptic feedback and whether holistic processing affects motor learning
applications in HMD-VR.
Chapter 6: Discussion
Immersive virtual reality using a head-mounted display (HMD-VR) is increasingly
being used for motor learning purposes. These devices have the potential to be useful
tools in motor training and rehabilitation as they allow for researchers and clinicians to
control and individualize the virtual environment. However, we do not yet know how to
maximize these devices' potential because the mechanisms underlying how we move and
learn motor skills in them are unclear. The overall goal of this dissertation work is to
address gaps in our understanding of what makes movements and motor learning in
HMD-VR different from more conventional training environments and how these
differences could influence the formation of motor memories.
6.1 Summary of Key Findings
6.1.1 Main Findings from Aim 1
In Aim 1, I investigated whether motor skills immediately transfer to and from an
HMD-VR environment and identified personal factors related to immediate context
transfer. We found that the motor skills acquired in HMD-VR did not immediately transfer
to a conventional computer screen environment, and this decrease in motor skill
performance was predicted by personal factors of presence. Conversely, the motor skills
acquired in a conventional computer screen environment not only immediately transferred
but improved in HMD-VR, and this increase in motor skill performance was predicted by
personal factors of presence, gender, age, and video game use. These results suggest
that these personal factors may predict who is likely to have better immediate transfer of
motor skills to and from HMD-VR.
6.1.2 Main Findings from Aim 2
In Aim 2, I investigated whether visuomotor adaptation in HMD-VR increases
cognitive load compared to a conventional computer screen environment. By utilizing a
dual-task probe paradigm to objectively measure cognitive load, we found cognitive load
to be greater in HMD-VR compared to a conventional computer screen across visuomotor
adaptation training. This result demonstrates that cognitive load increases even during
simpler motor tasks in HMD-VR, extending previous findings that cognitive load increases
during complex motor tasks in HMD-VR (Frederiksen et al., 2020; Qadir et al., 2019).
Importantly, this result also allowed for subsequent examination of the mechanisms
related to and the implications of increased cognitive load during motor learning. Thus,
Aim 2 also sought to examine the relationship between cognitive load during visuomotor
adaptation and explicit, cognitive mechanisms. We found that during the early stages of
adaptation, greater cognitive load was related to a decreased use of explicit mechanisms
and that the explicit mechanisms—and subsequently, overall adaptation—were lower in
HMD-VR than in a conventional computer screen environment.
6.1.3 Main Findings from Aim 3
In Aim 3, I investigated the implications of increased cognitive load during motor
learning and examined whether visuomotor adaptation in HMD-VR leads to decreased
long-term retention and context transfer. We found that training in HMD-VR resulted in
decreased long-term retention and importantly that a decrease in retention occurred
whether participants remained in an HMD-VR environment or transferred to a new context
(i.e., conventional computer screen). These results suggest that training in HMD-VR may
lead to less efficient motor memory formation that cannot be explained by a context
interference effect but rather the training process itself. Aim 3 also sought to examine the
relationship between cognitive load during visuomotor adaptation and long-term retention
and context transfer. We found cognitive load to be related to decreased motor memory
formation, indicating that the increased cognitive load in HMD-VR affects the retention of
a motor memory.
6.1.4 Main Findings from Aim 4
A potential explanation for why cognitive load may be higher in HMD-VR could be
how the brain processes visual information for action in an immersive virtual environment.
Vision-for-action is typically processed through the dorsal mode of control; however, the
artificial presentation of depth information in HMD-VR may cause a shift from a dorsal to
ventral mode of control (Ganel & Goodale, 2003; D. J. Harris et al., 2019). A ventral mode
of control is thought to be dependent on visual perception and increased cognitive
processes as it involves processing irrelevant information (i.e., holistic processing). Thus,
Aim 4 sought to determine whether visual processing for action in HMD-VR involves
holistic processing and is therefore susceptible to perceptual effects. We found evidence
that grasping virtual objects in HMD-VR was susceptible to perceptual effects. This result
suggests that vision-for-action in HMD-VR is processed differently than in the real world
and involves holistic processing of object shape. This may be one identifiable factor that
impacts the increased cognitive load in HMD-VR.
6.2 Implications and Significance of Findings
6.2.1 Research Implications
HMD-VR offers researchers the ability to manipulate and standardize a
participant’s environment while maintaining high experimental control. However, the
findings from this dissertation work suggest that researchers need to carefully consider
the ecological validity of these devices in upper extremity motor learning. In corroboration
with previous findings, I demonstrate that cognitive load is greater during visuomotor
adaptation in HMD-VR compared to conventional training environments. However, I
extend previous findings by showing that increased cognitive load during visuomotor
adaptation in HMD-VR is related to decreased long-term retention and context transfer.
These findings suggest that increased cognitive load during motor learning in HMD-VR
may affect motor memory formation and suggest that researchers using HMD-VR in
their experimental designs consider measuring cognitive load. Importantly, these findings
demonstrate the importance of preventing cognitive overload (see Section 6.2.2 for
further discussion) as well as the need to understand why cognitive load may increase
during upper extremity motor learning in HMD-VR.
The findings from this dissertation work offer a potential explanation for why
cognitive load may be greater in HMD-VR compared to conventional training
environments. I demonstrate that visual processing of action in HMD-VR involves holistic
processing of object shape which means that information unrelated to the movement is
also being processed during visuomotor control. As processing irrelevant information
during movement is shown to increase cognitive load (Jost & Mayr, 2016; Thoma &
Henson, 2011), these findings suggest that the intrusion of task-irrelevant information
could potentially increase cognitive load during visuomotor control in HMD-VR. However,
future work should examine whether this relationship exists and whether there is a causal
relationship between holistic processing and cognitive load.
In visuomotor adaptation, explicit mechanisms are important during the early
stages of adaptation. I demonstrate that early in adaptation, increased cognitive load is
related to decreased explicit mechanisms and that explicit mechanisms—and
subsequently, overall adaptation—are lower in HMD-VR than in a conventional training
environment. An interpretation of this finding is that increased cognitive load limits the use
of explicit mechanisms at the time when they are the primary drivers of overall adaptation.
This would suggest that cognitive load has the strongest effects early in the motor learning
process. Another interpretation of these findings is that the engagement of explicit
mechanisms is limited at times when cognitive load is beyond working memory limits. I
demonstrate that while cognitive load is greater in HMD-VR than in a conventional training
environment across adaptation, overall cognitive load decreased over the course of
training. This would suggest that cognitive load may affect the motor learning process
whenever the load on working memory is beyond working memory limits. These
interpretations would allow researchers and clinicians to focus on preventing cognitive
overload either during the early stages of motor learning or throughout the motor learning
process. Future work should examine these potential interpretations as the results would
provide a better understanding of how explicit mechanisms affect overall adaptation and
motor memory formation.
The conflicting evidence regarding the effectiveness of upper extremity motor
learning in HMD-VR may come from the mechanistic differences used to learn in these
devices. I demonstrate that visuomotor adaptation in HMD-VR relies more on explicit
mechanisms. I also demonstrate that visuomotor adaptation in HMD-VR leads to
decreased long-term retention and context transfer, which appears to be due to greater
forgetting of explicit mechanisms. These findings suggest that an increased reliance on
explicit mechanisms in visuomotor adaptation may lead to less efficient motor memory
formation, explained by the forgetting of explicit mechanisms. Overall, the findings
discussed align with previous research showing that upper extremity motor learning in
HMD-VR is different from conventional training environments and extend these findings
by providing evidence for mechanisms of learning and their consequences.
6.2.2 Clinical Implications
A necessary component of using HMD-VR in motor rehabilitation is that the motor
skills learned in the immersive virtual environment will be retained and transferred to the
real world. However, in this dissertation work I demonstrate that for upper extremity motor
learning, retention and context transfer may not always occur and a potential mechanism
is increased cognitive load during motor learning in HMD-VR. These findings advocate
for the need to monitor and reduce the cognitive load during upper extremity motor
learning and rehabilitation in HMD-VR.
To monitor cognitive load, researchers and clinicians can use either
electroencephalography (EEG) or functional near-infrared spectroscopy (fNIRS) which
have been used to measure brain activity and estimate cognitive load in immersive virtual
reality environments (Qadir et al., 2019; Unni et al., 2017). Researchers and clinicians
can also use eye tracking to measure pupil diameter and saccades to estimate cognitive
load in HMD-VR (Babu et al., 2019; Souchet et al., 2021). EEG, fNIRS, and eye tracking
in HMD-VR allow for real-time estimates of cognitive load that do not require changes to
motor learning tasks, as was required with the dual-task probe paradigm used in this
dissertation work. Other ways to monitor cognitive load also include self-report measures,
such as the Multidimensional Cognitive Load Scale for Virtual Environments (MCLSVE)
which has been validated in immersive virtual reality and includes measurements of
extraneous cognitive load (M. S. Andersen & Makransky, 2021).
To reduce cognitive load, researchers and clinicians can modify practice conditions
for motor learning applications in HMD-VR. Repeated and distributed practice as opposed
to massed practice has been found to decrease cognitive load during repeated motor
learning tasks in HMD-VR (S. A. W. Andersen et al., 2016, 2018). In this dissertation
work, I demonstrate that cognitive load decreases after continued training, suggesting
that while cognitive load may initially be high early in the motor learning process, it can
decrease with practice or exposure. Motor learning and rehabilitation applications in
HMD-VR may need more training compared to conventional training environments to
develop similar levels of motor skills. Another modification can be to carefully design the
virtual environment to reduce cognitive load during motor learning in HMD-VR. Filtering
visual information by blurring insignificant regions in a person’s field of view has been
shown to reduce cognitive load (Tsurukawa et al., 2015). Virtual avatars have also been
shown to alleviate the mental load of spatial rotation exercises in HMD-VR (Steed et al.,
2016). In this dissertation work, virtual avatars were not used in order to control for
potential differences in movement kinematics between HMD-VR and conventional training
environments; however, future work should examine whether filtering visual information
or including virtual avatars would reduce cognitive load during upper extremity motor
learning in HMD-VR.
6.3 Limitations
This dissertation work has several limitations. In the first study (Aim 1) where I
examined whether motor skills immediately transferred to and from an HMD-VR
environment, a limitation was that motor skill transfer reflected the transfer of motor skill
acquisition rather than motor skill learning. Motor skill learning typically examines transfer
after a retention interval and compares transfer performance to a baseline performance.
To begin addressing this limitation, the second study in this dissertation work (Aims 2 and
3) included a retention interval. However, we found that personal factors did not predict
long-term retention or context transfer. This could potentially be explained by either task
or sample size differences. Future studies should continue to examine whether personal
factors, such as presence and previous HMD-VR experience, influence motor learning in
HMD-VR.
Another limitation of both the first and second studies was the use of a conventional
computer screen as the transfer context. While this was done intentionally to provide the
most well-controlled and subtle differences between HMD-VR and a conventional training
environment, future work should examine whether motor learning in HMD-VR transfers
to more dynamic, real-world environments.
In the second study where I examined cognitive load in an HMD-VR environment,
another limitation was that the experimental design did not provide a mechanistic
understanding for why cognitive load may increase during visuomotor adaptation in HMD-
VR. To begin addressing this limitation, the third study of this dissertation work (Aim 4)
examined visual processing of action in HMD-VR and found evidence that vision-for-
action in HMD-VR may involve holistic processing of object shape and is therefore
susceptible to perceptual effects. However, while this provides a potential explanation for
increased cognitive load, cognitive load was not directly compared or manipulated. Future
research should examine whether the holistic processing of object shape facilitates
increased cognitive load during motor learning in HMD-VR.
Additionally, while the dual-task probe paradigm is commonly used as an objective
measure of cognitive load (e.g., Frederiksen et al., 2020), it is not always possible or
desirable to integrate with motor learning tasks. Moreover, the dual-task probe paradigm
specifically measures attentional demands and therefore, other cognitive domains (e.g.,
working memory capacity or motor planning) could also influence motor learning in HMD-
VR. Future research should examine whether other measures of cognitive load such as
brain activity, eye tracking, or physiological response measures also reflect increased
cognitive load during motor learning in HMD-VR.
In the third study where participants were asked to reach and grasp for virtual
objects in HMD-VR, a limitation was the lack of haptic feedback provided. As haptic
feedback has been shown to reduce perceptual effects (Ozana & Ganel, 2019b), future
studies should examine whether including haptic feedback in HMD-VR motor applications
would reduce holistic processing of object shape. If this type of feedback does reduce the
holistic processing of object shape in HMD-VR, then motor learning applications in HMD-
VR could potentially benefit from haptic feedback integration. Separately, holistic
processing of object shape does not necessarily mean ventral stream processing. While
vision-for-perception largely relies on the ventral stream, and vision-for-perception
involves holistic processing, the inference that holistic processing implies ventral stream
processing is inconclusive. Future studies should use brain imaging to examine whether
motor learning in HMD-VR leads to a greater reliance on ventral mode control.
6.4 Future Directions and Conclusion
The next steps of this work would be to examine the neural mechanisms underlying
motor learning in HMD-VR. The COVID-19 pandemic prevented the collection of brain
activity using EEG which was originally intended for a portion of this dissertation work.
Identifying specific neural mechanisms could potentially address whether motor control
in HMD-VR causes a shift from a dorsal to ventral mode of control. Measuring brain
activity during visuomotor adaptation in HMD-VR could also potentially identify specific
neural mechanisms that reflect a change in the use of explicit processes, which could
provide a deeper understanding of the cognitive processes involved in adaptation.
Next steps would also be to further examine why motor learning in HMD-VR
increases cognitive load, by determining whether visual processing of action in HMD-VR
is related to cognitive load. Other next steps would also be to examine how cognitive load
during motor rehabilitation in HMD-VR affects clinical populations and how cognitive
overload could be avoided. Lastly, the findings from this dissertation work raise broader
questions about the role of cognitive processes in motor learning, such as whether the
consequences of increased cognitive load found in this work during visuomotor adaptation
are unique to HMD-VR or extend to motor learning in general.
These next steps would provide a deeper understanding of the role of cognitive processes
in motor learning and influence the increasing use of HMD-VR in motor learning
applications.
Overall, this dissertation sheds new light on upper extremity motor learning in
HMD-VR and paves the way for understanding the impact of cognitive load on motor
memory formation as well as how the brain interprets information viewed through
technologies such as HMD-VR. The findings in this dissertation will influence the
increasing use of HMD-VR in motor learning applications, such as motor rehabilitation
and surgical training. In future studies, we can further explore the effects of cognitive
overload in motor learning as well as identify neural mechanisms underlying motor
learning in HMD-VR.
References
Albert, N. B., Robertson, E. M., Miall, R. C., & Hall, G. (2009). The Resting Human
Brain and Motor Learning. Current Biology, 19(12), 1023–1027.
https://doi.org/10.1016/j.cub.2009.04.028
Albert, S. T., & Shadmehr, R. (2018). Estimating properties of the fast and slow adaptive
processes during sensorimotor adaptation. Journal of Neurophysiology, 119(4),
1367–1393. https://doi.org/10.1152/jn.00197.2017
Albiol-Pérez, S., Gil-Gomez, J. a, Llorens, R., Alcañiz, M., Font, C. C., & Gil-Gómez, J.-
A. (2014). The Role of Virtual Motor Rehabilitation: A Quantitative Analysis Between
Acute and Chronic Patients With Acquired Brain Injury. IEEE Journal of Biomedical
and Health Informatics, 18(1), 391–398. https://doi.org/10.1109/JBHI.2013.2272101
Alimardani, M., Nishio, S., & Ishiguro, H. (2016). The importance of visual feedback
design in BCIs; from embodiment to motor imagery learning. PLoS ONE, 11(9),
e0161945. https://doi.org/10.1371/journal.pone.0161945
Andersen, M. S., & Makransky, G. (2021). The validation and further development of a
multidimensional cognitive load scale for virtual environments. Journal of Computer
Assisted Learning, 37(1), 183–196. https://doi.org/10.1111/jcal.12478
Andersen, S. A. W., Konge, L., & Sørensen, M. S. (2018). The effect of distributed virtual
reality simulation training on cognitive load during subsequent dissection training.
Medical Teacher, 40(7), 684–689. https://doi.org/10.1080/0142159X.2018.1465182
Andersen, S. A. W., Mikkelsen, P. T., Konge, L., Cayé-Thomasen, P., & Sørensen, M. S.
(2016). Cognitive load in distributed and massed practice in virtual reality
mastoidectomy simulation. Laryngoscope, 126(2), E74–E79.
https://doi.org/10.1002/lary.25449
Ang, K. K., Guan, C., Phua, K. S., Wang, C., Zhou, L., Tang, K. Y., Ephraim Joseph, G.
J., Kuah, C. W. K., & Chua, K. S. G. (2014). Brain-computer interface-based robotic
end effector system for wrist and hand rehabilitation: results of a three-armed
randomized controlled trial for chronic stroke. Frontiers in Neuroengineering, 7, 30.
https://doi.org/10.3389/fneng.2014.00030
Anglin, J. M., Sugiyama, T., & Liew, S. L. (2017). Visuomotor adaptation in head-mounted
virtual reality versus conventional training. Scientific Reports, 7, 45469.
https://doi.org/10.1038/srep45469
Anglin, Julia M., Saldana, D., Schmiesing, A., & Liew, S.-L. (2017). Transfer of a skilled
motor learning task between virtual and conventional environments. 2017 IEEE
Virtual Reality (VR), 401–402. https://doi.org/10.1109/VR.2017.7892346
Anguera, J. A., Bernard, J. A., Jaeggi, S. M., Buschkuehl, M., Benson, B. L., Jennett, S.,
Humfleet, J., Reuter-Lorenz, P. A., Jonides, J., & Seidler, R. D. (2012). The effects
of working memory resource depletion and training on sensorimotor adaptation.
Behavioural Brain Research, 228(1), 107–115.
https://doi.org/10.1016/j.bbr.2011.11.040
Anguera, J. A., Reuter-Lorenz, P. A., Willingham, D. T., & Seidler, R. D. (2008).
Contributions of spatial working memory to visuomotor adaptation. Journal of
Cognitive Neuroscience, 22(9), 172.
Anguera, J. A., Reuter-Lorenz, P. A., Willingham, D. T., & Seidler, R. D. (2011). Failure
to engage spatial working memory contributes to age-related declines in visuomotor
learning. Journal of Cognitive Neuroscience, 23(1), 11–25.
https://doi.org/10.1162/jocn.2010.21451
Ayala, N., Binsted, G., & Heath, M. (2018). Hand anthropometry and the limits of aperture
separation determine the utility of Weber’s law in grasping and manual estimation.
Experimental Brain Research, 236(8), 2439–2446. https://doi.org/10.1007/s00221-
018-5311-6
Babu, M. D., JeevithaShree, D. V., Prabhakar, G., Saluja, K. P. S., Pashilkar, A., &
Biswas, P. (2019). Estimating Pilots’ Cognitive Load From Ocular Parameters
Through Simulation and In-Flight Studies. Journal of Eye Movement Research,
12(3), 1–16. https://doi.org/10.16910/JEMR.12.3.3
Bailey, J. O., Bailenson, J. N., & Casasanto, D. (2016). When does virtual embodiment
change our minds? Presence, 25(3), 222–233.
https://doi.org/10.1162/PRES_a_00263
Ballester, B. R., Oliva, L. S., Duff, A., & Verschure, P. (2015). Accelerating Motor
Adaptation by Virtual Reality Based Modulation of Error Memories. Rehabilitation
Robotics (ICORR), 2015 IEEE International Conference On, 623–629.
Banakou, D., Groten, R., & Slater, M. (2013). Illusory ownership of a virtual child body
causes overestimation of object sizes and implicit attitude changes. Proceedings of
the National Academy of Sciences, 110(31), 12846–12851.
https://doi.org/10.1073/pnas.1306779110
Banakou, D., Hanumanthu, P. D., & Slater, M. (2016). Virtual embodiment of white people
in a black virtual body leads to a sustained reduction in their implicit racial bias.
Frontiers in Human Neuroscience, 10, 601.
https://doi.org/10.3389/fnhum.2016.00601
Baumeister, J., Ssin, S. Y., Elsayed, N. A. M., Dorrian, J., Webb, D. P., Walsh, J. A.,
Simon, T. M., Irlitti, A., Smith, R. T., Kohler, M., & Thomas, B. H. (2017). Cognitive
Cost of Using Augmented Reality Displays. IEEE Transactions on Visualization and
Computer Graphics, 23(11), 2378–2388.
https://doi.org/10.1109/TVCG.2017.2735098
Beilock, S. L., Bertenthal, B. I., McCoy, A. M., & Carr, T. H. (2004). Haste does not always
make waste: Expertise, direction of attention, and speed versus accuracy in
performing sensorimotor skills. Psychonomic Bulletin and Review, 11(2), 373–379.
https://doi.org/10.3758/BF03196585
Beilock, S. L., Carr, T. H., MacMahon, C., & Starkes, J. L. (2002). When paying attention
becomes counterproductive: Impact of divided versus skill-focused attention on
novice and experienced performance of sensorimotor skills. Journal of Experimental
Psychology: Applied, 8(1), 6–16. https://doi.org/10.1037/1076-898X.8.1.6
Benoit, K. (2011). Linear Regression Models with Logarithmic Transformations. London
School of Economics, 22(1), 23–36.
Biasiucci, A., Leeb, R., Iturrate, I., Perdikis, S., Al-Khodairy, A., Corbet, T., Schnider, A.,
Schmidlin, T., Zhang, H., Bassolino, M., Viceic, D., Vuadens, P., Guggisberg, A. G.,
& Millán, J. d. R. (2018). Brain-actuated functional electrical stimulation elicits lasting
arm motor recovery after stroke. Nature Communications, 9(1), 2421.
https://doi.org/10.1038/s41467-018-04673-z
Bird, J. M. (2020). The use of virtual reality head-mounted displays within applied sport
psychology. Journal of Sport Psychology in Action, 11(2), 115–128.
https://doi.org/10.1080/21520704.2018.1563573
Blanco-Mora, D. A., Almeida, Y., Vieira, C., & Badia, S. B. i. (2019). A study on EEG
power and connectivity in a virtual reality bimanual rehabilitation training system.
2019 IEEE International Conference on Systems, Man and Cybernetics (SMC),
2818–2822. https://doi.org/10.1109/SMC.2019.8914190
Bo, J., Langan, J., & Seidler, R. D. (2008). Cognitive Neuroscience of Skill Acquisition.
Advances in Psychology, 139, 101–112. https://doi.org/10.1016/S0166-
4115(08)10009-7
Borst, C. W., & Volz, R. A. (2005). Evaluation of a haptic mixed reality system for
interactions with a virtual control panel. Presence: Teleoperators and Virtual
Environments, 14(6), 677–696. https://doi.org/10.1162/105474605775196562
Bouchard, J. M., & Cressman, E. K. (2021). Intermanual transfer and retention of
visuomotor adaptation to a large visuomotor distortion are driven by explicit
processes. PLoS ONE, 16(1 January), e0245184.
https://doi.org/10.1371/journal.pone.0245184
Brandi, M. L., Wohlschläger, A., Sorg, C., & Hermsdörfer, J. (2014). The neural correlates
of planning and executing actual tool use. Journal of Neuroscience, 34(39), 13183–
13194. https://doi.org/10.1523/JNEUROSCI.0597-14.2014
Brooks, B. M. (1999). Route Learning in a Case of Amnesia: A Preliminary Investigation
into the Efficacy of Training in a Virtual Environment. Neuropsychological
Rehabilitation, 9(1), 63–76. https://doi.org/10.1080/713755589
Bryanton, C., Bossé, J., Brien, M., McLean, J., McCormick, A., & Sveistrup, H. (2006).
Feasibility, motivation, and selective motor control: virtual reality compared to
conventional home exercise in children with cerebral palsy. Cyberpsychology &
Behavior, 9(2), 123–128. https://doi.org/10.1089/cpb.2006.9.123
Burdea, G. C. (2000). Haptics issues in virtual environments. Proceedings of Computer
Graphics International Conference, CGI, 295–302.
https://doi.org/10.1109/cgi.2000.852345
Butcher, P. A., Ivry, R. B., Kuo, S. H., Rydz, D., Krakauer, J. W., & Taylor, J. A. (2017).
The cerebellum does more than sensory prediction error-based learning in
sensorimotor adaptation tasks. Journal of Neurophysiology, 118(3), 1622–1636.
https://doi.org/10.1152/jn.00451.2017
Butcher, P. A., & Taylor, J. A. (2018). Decomposition of a sensory prediction error signal
for visuomotor adaptation. Journal of Experimental Psychology: Human Perception
and Performance, 44(2), 176–194. https://doi.org/10.1037/xhp0000440
Campagnoli, C., Domini, F., & Taylor, J. A. (2021). Taking aim at the perceptual side of
motor learning: Exploring how explicit and implicit learning encode perceptual error
information through depth vision. Journal of Neurophysiology, 20(7), 413–426.
https://doi.org/10.1152/jn.00153.2021
Cano Porras, D., Siemonsma, P., Inzelberg, R., Zeilig, G., & Plotnik, M. (2018).
Advantages of virtual reality in the rehabilitation of balance and gait. Neurology,
90(22), 1017–1025. https://doi.org/10.1212/WNL.0000000000005603
Carlson, P. E., Peters, A., Gilbert, S. B., Vance, J. M., & Luse, A. (2015). Virtual training:
Learning transfer of assembly tasks. IEEE Transactions on Visualization and
Computer Graphics, 21(6), 770–782. https://doi.org/10.1109/TVCG.2015.2393871
Carrasco, D. G., & Cantalapiedra, J. A. (2016). Effectiveness of motor imagery or mental
practice in functional recovery after stroke: a systematic review. Neurología (English
Edition), 31(1), 43–52. https://doi.org/10.1016/j.nrleng.2013.02.008
Caspar, E. A., Cleeremans, A., & Haggard, P. (2015). The relationship between human
agency and embodiment. Consciousness and Cognition, 33, 226–236.
https://doi.org/10.1016/j.concog.2015.01.007
Cincotti, F., Pichiorri, F., Aricò, P., Aloise, F., Leotta, F., De Vico Fallani, F., Millán, J. d.
R., Molinari, M., & Mattia, D. (2012). EEG-based Brain-Computer Interface to support
post-stroke motor rehabilitation of the upper limb. 2012 Annual International
Conference of the IEEE Engineering in Medicine and Biology Society, 4112–4115.
https://doi.org/10.1109/EMBC.2012.6346871
Creem, S. H., & Proffitt, D. R. (2001). Defining the cortical visual systems: “What”,
“Where”, and “How”. Acta Psychologica, 107(1–3), 43–68.
https://doi.org/10.1016/S0001-6918(01)00021-X
Cunningham, H. A. (1989). Aiming error under transformed spatial mappings suggests a
structure for visual-motor maps. Journal of Experimental Psychology. Human
Perception and Performance, 15(3), 493–506. https://doi.org/10.1037/0096-
1523.15.3.493
D’Angelo, M., di Pellegrino, G., Seriani, S., Gallina, P., & Frassinetti, F. (2018). The sense
of agency shapes body schema and peripersonal space. Scientific Reports, 8(1).
https://doi.org/10.1038/s41598-018-32238-z
Dan, A., & Reiner, M. (2017). EEG-based cognitive load of processing events in 3D virtual
worlds is lower than processing events in 2D displays. International Journal of
Psychophysiology, 122, 75–84. https://doi.org/10.1016/J.IJPSYCHO.2016.08.013
De Beni, R., Pazzaglia, F., Gyselinck, V., & Meneghetti, C. (2005). Visuospatial working
memory and mental representation of spatial descriptions. European Journal of
Cognitive Psychology, 17(1), 77–95. https://doi.org/10.1080/09541440340000529
Dechent, P., Merboldt, K.-D., & Frahm, J. (2004). Is the human primary motor cortex
involved in motor imagery? Cognitive Brain Research, 19(2), 138–144.
https://doi.org/10.1016/j.cogbrainres.2003.11.012
Della-Maggiore, V., & McIntosh, A. R. (2005). Time Course of Changes in Brain Activity
and Functional Connectivity Associated With Long-Term Adaptation to a Rotational
Transformation. Journal of Neurophysiology, 93, 2254–2262.
https://doi.org/10.1152/jn.00984.2004
Delorme, A., & Makeig, S. (2004). EEGLAB: an open source toolbox for analysis of single-
trial EEG dynamics including independent component analysis. Journal of
Neuroscience Methods, 134(1), 9–21.
https://doi.org/10.1016/j.jneumeth.2003.10.009
Desmurget, M., & Grafton, S. (2000). Forward modeling allows feedback control for fast
reaching movements. Trends in Cognitive Sciences, 4(11), 423–431.
https://doi.org/10.1016/S1364-6613(00)01537-0
Desmurget, M., Pélisson, D., Rossetti, Y., & Prablanc, C. (1998). From eye to hand:
Planning goal-directed movements. Neuroscience and Biobehavioral Reviews,
22(6), 761–788. https://doi.org/10.1016/S0149-7634(98)00004-9
Devos, H., Akinwuntan, A. E., Nieuwboer, A., Tant, M., Truijen, S., De Wit, L., Kiekens,
C., & De Weerdt, W. (2009). Comparison of the effect of two driving retraining
programs on on-road performance after stroke. Neurorehabilitation and Neural
Repair, 23(7), 699–705. https://doi.org/10.1177/1545968309334208
Diedrichsen, J., Hashambhoy, Y., Rane, T., & Shadmehr, R. (2005). Neural correlates of
reach errors. Journal of Neuroscience, 25(43), 9919–9931.
https://doi.org/10.1523/JNEUROSCI.1874-05.2005
Diedrichsen, J., & Kornysheva, K. (2015). Motor skill learning between selection and
execution. Trends in Cognitive Sciences, 19(4), 227–233.
https://doi.org/10.1016/j.tics.2015.02.003
Doyon, J., Gaudreau, D., Laforce, R., & Castonguay, M. (1997). Role of the Striatum,
Cerebellum, and Frontal Lobes in the Learning of a Visuomotor Sequence. Brain
and Cognition, 34, 218–245.
Doyon, J., Bellec, P., Amsel, R., Penhune, V., Monchi, O., Carrier, J., Lehéricy, S., &
Benali, H. (2009). Contributions of the basal ganglia and functionally related brain
structures to motor learning. Behavioural Brain Research, 199(1), 61–75.
https://doi.org/10.1016/J.BBR.2008.11.012
Doyon, J., & Benali, H. (2005). Reorganization and plasticity in the adult brain during
learning of motor skills. Current Opinion in Neurobiology, 15(2), 161–167.
https://doi.org/10.1016/j.conb.2005.03.004
Dušica, S.-P. S., Devecerski, G. V., Jovićević, M. N., & Platiša, N. M. (2015). Stroke
rehabilitation: Which factors influence the outcome? Annals of Indian Academy of
Neurology, 18(4), 484–487. https://doi.org/10.4103/0972-2327.165480
Emami, Z., & Chau, T. (2020). The effects of visual distractors on cognitive load in a motor
imagery brain-computer interface. Behavioural Brain Research, 378, 112240.
https://doi.org/10.1016/J.BBR.2019.112240
Ethier, V., Zee, D. S., & Shadmehr, R. (2008). Spontaneous recovery of motor memory
during saccade adaptation. Journal of Neurophysiology, 99(5), 2577–2583.
https://doi.org/10.1152/jn.00015.2008
Eversheim, U., & Bock, O. (2001). Evidence for processing stages in skill acquisition: A
dual-task study. Learning and Memory, 8(4), 183–189.
https://doi.org/10.1101/lm.39301
Feng, J., Pratt, J., & Spence, I. (2012). Attention and visuospatial working memory share
the same processing resources. Frontiers in Psychology, 3, 103.
https://doi.org/10.3389/fpsyg.2012.00103
Finkelstein, S., Nickel, A., Lipps, Z., Wartell, Z., Barnes, T., & Suma, E. A. (2011).
Astrojumper: Motivating Exercise with an Immersive Virtual Reality Exergame.
Presence: Teleoperators and Virtual Environments, 20(1), 78–92.
Fitts, P. M., & Posner, M. I. (1967). Human performance.
Fitts, P. M. (1964). Perceptual-Motor Skill Learning. In Categories of Human Learning
(pp. 243–285). Academic Press. https://doi.org/10.1016/b978-1-4832-3145-
7.50016-9
Fluet, G. G., & Deutsch, J. E. (2013). Virtual Reality for Sensorimotor Rehabilitation Post-
Stroke: The Promise and Current State of the Field. Current Physical Medicine and
Rehabilitation Reports, 1(1), 9–20. https://doi.org/10.1007/s40141-013-0005-2
Forano, M., & Franklin, D. W. (2019). Timescales of motor memory formation in dual-
adaptation. BioRxiv, 698167. https://doi.org/10.1101/698167
Frederiksen, J. G., Sørensen, S. M. D., Konge, L., Svendsen, M. B. S., Nobel-Jørgensen,
M., Bjerrum, F., & Andersen, S. A. W. (2020). Cognitive load and performance in
immersive virtual reality versus conventional virtual reality simulation training of
laparoscopic surgery: a randomized trial. Surgical Endoscopy, 34(3), 1244–1252.
https://doi.org/10.1007/s00464-019-06887-8
French, M. A., Morton, S. M., & Reisman, D. S. (2021). Use of explicit processes during
a visually guided locomotor learning task predicts 24-h retention after stroke. Journal
of Neurophysiology, 125(1), 211–222. https://doi.org/10.1152/JN.00340.2020
Freud, E., Aisenberg, D., Salzer, Y., Henik, A., & Ganel, T. (2015). Simon in action: the
effect of spatial congruency on grasping trajectories. Psychological Research, 79(1),
134–142. https://doi.org/10.1007/s00426-013-0533-5
Freud, E., & Ganel, T. (2015). Visual control of action directed toward two-dimensional
objects relies on holistic processing of object shape. Psychonomic Bulletin and
Review, 22(5), 1377–1382. https://doi.org/10.3758/s13423-015-0803-x
Freud, E., Macdonald, S. N., Chen, J., Quinlan, D. J., Goodale, M. A., & Culham, J. C.
(2018). Getting a grip on reality: Grasping movements directed to real objects and
images rely on dissociable neural representations. Cortex, 98, 34–48.
https://doi.org/10.1016/j.cortex.2017.02.020
Friedman, J., Hastie, T., & Tibshirani, R. (2010). Regularization paths for generalized
linear models via coordinate descent. Journal of Statistical Software, 33(1), 1–22.
https://doi.org/10.18637/jss.v089.i03
Frolov, A. A., Mokienko, O., Lyukmanov, R., Biryukova, E., Kotov, S., Turbina, L.,
Nadareyshvily, G., & Bushkova, Y. (2017). Post-stroke rehabilitation training with a
motor-imagery-based Brain-Computer Interface (BCI)-controlled hand exoskeleton:
A randomized controlled multicenter trial. Frontiers in Neuroscience, 11, 400.
https://doi.org/10.3389/fnins.2017.00400
Funk, M., Kosch, T., & Schmidt, A. (2016). Interactive worker assistance: Comparing the
effects of in-situ projection, head-mounted displays, tablet, and paper instructions.
UbiComp 2016 - Proceedings of the 2016 ACM International Joint Conference on
Pervasive and Ubiquitous Computing, 934–939.
https://doi.org/10.1145/2971648.2971706
Fusser, F., Linden, D. E. J., Rahm, B., Hampel, H., Haenschel, C., & Mayer, J. S. (2011).
Common capacity-limited neural mechanisms of selective attention and spatial
working memory encoding. European Journal of Neuroscience, 34(5), 827–838.
https://doi.org/10.1111/j.1460-9568.2011.07794.x
Ganel, T. (2015). Weber’s law in grasping. Journal of Vision, 15(8), 1–2.
https://doi.org/10.1167/15.6.1
Ganel, T., Chajut, E., & Algom, D. (2008). Visual coding for action violates fundamental
psychophysical principles. Current Biology, 18(14), R599–R601.
https://doi.org/10.1016/j.cub.2008.04.052
Ganel, T., Freud, E., Chajut, E., & Algom, D. (2012). Accurate visuomotor control below
the perceptual threshold of size discrimination. PLoS ONE, 7(4), e36253.
https://doi.org/10.1371/journal.pone.0036253
Ganel, T., Freud, E., & Meiran, N. (2014). Action is immune to the effects of Weber’s law
throughout the entire grasping trajectory. Journal of Vision, 14(7), 1–11.
https://doi.org/10.1167/14.7.11
Ganel, T., & Goodale, M. A. (2003). Visual control of action but not perception requires
analytical processing of object shape. Nature, 426(6967), 664–667.
https://doi.org/10.1038/nature02156
Ganel, T., & Goodale, M. A. (2014). Variability-based Garner interference for perceptual
estimations but not for grasping. Experimental Brain Research, 232(6), 1751–1758.
https://doi.org/10.1007/s00221-014-3867-3
Ganel, T., Ozana, A., & Goodale, M. A. (2020). When perception intrudes on 2D grasping:
evidence from Garner interference. Psychological Research, 84(8), 2138–2143.
https://doi.org/10.1007/s00426-019-01216-z
Gavish, N., Gutiérrez, T., Webel, S., Rodríguez, J., Peveri, M., Bockholt, U., & Tecchia,
F. (2015). Evaluating virtual reality and augmented reality training for industrial
maintenance and assembly tasks. Interactive Learning Environments, 23(6), 778–
798. https://doi.org/10.1080/10494820.2013.815221
Geng, J. J., & Mangun, G. R. (2009). Anterior intraparietal sulcus is sensitive to bottom-
up attention driven by stimulus salience. Journal of Cognitive Neuroscience, 21(8),
1584–1601. https://doi.org/10.1162/jocn.2009.21103
Gerig, N., Mayo, J., Baur, K., Wittmann, F., Riener, R., & Wolf, P. (2018). Missing depth
cues in virtual reality limit performance and quality of three dimensional reaching
movements. PLoS ONE, 13(1). https://doi.org/10.1371/journal.pone.0189275
Gilbert, F., Cook, M., O’Brien, T., & Illes, J. (2019). Embodiment and estrangement:
Results from a first-in-human “intelligent BCI” trial. Science and Engineering Ethics,
25(1), 83–96. https://doi.org/10.1007/s11948-017-0001-5
Goh, H. T., Gordon, J., Sullivan, K. J., & Winstein, C. J. (2014). Evaluation of attentional
demands during motor learning: Validity of a dual-task probe paradigm. Journal of
Motor Behavior, 46(2), 95–105. https://doi.org/10.1080/00222895.2013.868337
Gonzalez-Franco, M., & Peck, T. C. (2018). Avatar embodiment. Towards a standardized
questionnaire. Frontiers in Robotics and AI, 5, 74.
https://doi.org/10.3389/frobt.2018.00074
Goodale, M. A. (2011). Transforming vision into action. Vision Research, 51(13), 1567–
1587. https://doi.org/10.1016/j.visres.2010.07.027
Goodale, M. A. (2014). How (and why) the visual control of action differs from visual
perception. Proceedings of the Royal Society B: Biological Sciences, 281(1785).
https://doi.org/10.1098/rspb.2014.0337
Goodale, M. A., Króliczak, G., & Westwood, D. A. (2005). Dual routes to action:
Contributions of the dorsal and ventral streams to adaptive behavior. Progress in
Brain Research, 149, 269–283. https://doi.org/10.1016/S0079-6123(05)49019-6
Goodale, M. A., & Milner, A. D. (1992). Separate Visual Pathways for Perception and
Action. Essential Sources in the Scientific Study of Consciousness.
https://doi.org/10.7551/mitpress/2834.003.0016
Groen, J., & Werkhoven, P. J. (1998). Visuomotor Adaptation to Virtual Hand Position in
Interactive Virtual Environments. Presence: Teleoperators and Virtual Environments,
7(5), 429–446. https://doi.org/10.1162/105474698565839
Guerra, Z. F., Bellose, L. C., Danielli Coelho De Morais Faria, C., & Lucchetti, G. (2018).
The effects of mental practice based on motor imagery for mobility recovery after
subacute stroke: Protocol for a randomized controlled trial. Complementary
Therapies in Clinical Practice, 33, 36–42. https://doi.org/10.1016/j.ctcp.2018.08.002
Haaland, K. Y., Elsinger, C. L., Mayer, A. R., Durgerian, S., & Rao, S. M. (2004). Motor
sequence complexity and performing hand produce differential patterns of
hemispheric lateralization. Journal of Cognitive Neuroscience, 16(4), 621–636.
https://doi.org/10.1162/089892904323057344
Hadipour-Niktarash, A., Lee, C. K., Desmond, J. E., & Shadmehr, R. (2007). Impairment
of retention but not acquisition of a visuomotor skill through time-dependent
disruption of primary motor cortex. Journal of Neuroscience, 27(49), 13413–13419.
https://doi.org/10.1523/JNEUROSCI.2570-07.2007
Haith, A. M., Huberdeau, D. M., & Krakauer, J. W. (2015). The influence of movement
preparation time on the expression of visuomotor learning and savings. Journal of
Neuroscience, 35(13), 5109–5117. https://doi.org/10.1523/JNEUROSCI.3869-
14.2015
Harris, D. J., Buckingham, G., Wilson, M. R., & Vine, S. J. (2019). Virtually the same?
How impaired sensory information in virtual reality may disrupt vision for action.
Experimental Brain Research, 237(11), 2761–2766. https://doi.org/10.1007/s00221-
019-05642-8
Harris, P. A., Taylor, R., Thielke, R., Payne, J., Gonzalez, N., & Conde, J. G. (2009).
Research electronic data capture (REDCap)-A metadata-driven methodology and
workflow process for providing translational research informatics support. Journal of
Biomedical Informatics, 42(2), 377–381. https://doi.org/10.1016/j.jbi.2008.08.010
Heath, M., & Manzone, J. (2017). Manual estimations of functionally graspable target
objects adhere to Weber’s law. Experimental Brain Research, 235(6), 1701–1707.
https://doi.org/10.1007/s00221-017-4913-8
Heath, M., Manzone, J., Khan, M., & Davarpanah Jazi, S. (2017). Vision for action and
perception elicit dissociable adherence to Weber’s law across a range of ‘graspable’
target objects. Experimental Brain Research, 235(10), 3003–3012.
Hibbard, P. B., Haines, A. E., & Hornsey, R. L. (2017). Magnitude, precision, and realism
of depth perception in stereoscopic vision. Cognitive Research: Principles and
Implications, 2(1). https://doi.org/10.1186/s41235-017-0062-7
Holmes, S. A., & Heath, M. (2013). Goal-directed grasping: The dimensional properties
of an object influence the nature of the visual information mediating aperture shaping.
Brain and Cognition, 82(1), 18–24. https://doi.org/10.1016/j.bandc.2013.02.005
Holzwarth, V., Gisler, J., Hirt, C., & Kunz, A. (2021).
Comparing the Accuracy and Precision of SteamVR Tracking 2.0 and Oculus Quest
2 in a Room Scale Setup. ICVARS 2021, 1–5.
https://doi.org/10.1145/3463914.3463921
Hornsey, R. L., & Hibbard, P. B. (2021). Contributions of pictorial and binocular cues to
the perception of distance in virtual reality. Virtual Reality, 25(4), 1087–1103.
https://doi.org/10.1007/s10055-021-00500-x
Howard, M. C. (2017). A meta-analysis and systematic literature review of virtual reality
rehabilitation programs. Computers in Human Behavior, 70, 317–327.
https://doi.org/10.1016/j.chb.2017.01.013
Hu, B., & Knill, D. C. (2011). Binocular and monocular depth cues in online feedback
control of 3D pointing movement. Journal of Vision, 11(7), 23–23.
https://doi.org/10.1167/11.7.23
Huber, T., Paschold, M., Hansen, C., Wunderling, T., Lang, H., & Kneist, W. (2017). New
dimensions in surgical training: immersive virtual reality laparoscopic simulation
exhilarates surgical staff. Surgical Endoscopy, 31(11), 4472–4477.
https://doi.org/10.1007/s00464-017-5500-6
Huberdeau, D. M., Krakauer, J. W., & Haith, A. M. (2015). Dual-process decomposition
in human sensorimotor adaptation. Current Opinion in Neurobiology, 33, 71–77.
https://doi.org/10.1016/j.conb.2015.03.003
IJsselsteijn, W. A., Kort, Y. A. W. de, Westerink, J., Jager, M. de, & Bonants, R. (2006).
Virtual Fitness: Stimulating Exercise Behavior through Media Technology. Presence:
Teleoperators and Virtual Environments, 15(6), 688–698.
https://doi.org/10.1162/pres.15.6.688
Imamizu, H., Miyauchi, S., Tamada, T., Sasaki, Y., Takino, R., Pütz, B., Yoshioka, T., &
Kawato, M. (2000). Human cerebellar activity reflecting an acquired internal model
of a new tool. Nature, 403(6766), 192–195. https://doi.org/10.1038/35003194
Inoue, M., Uchimura, M., Karibe, A., O’Shea, J., Rossetti, Y., & Kitazawa, S. (2015). Three
timescales in prism adaptation. Journal of Neurophysiology, 113(1), 328–338.
https://doi.org/10.1152/jn.00803.2013
Iruthayarajah, J., McIntyre, A., Cotoi, A., Macaluso, S., & Teasell, R. (2017). The use of
virtual reality for balance among individuals with chronic stroke: A systematic review
and meta-analysis. Topics in Stroke Rehabilitation, 24(1), 68–79.
https://doi.org/10.1080/10749357.2016.1192361
Jackson, P. L., Lafleur, M. F., Malouin, F., Richards, C. L., & Doyon, J. (2003). Functional
cerebral reorganization following motor sequence learning through mental practice
with motor imagery. Neuroimage, 20(2), 1171–1180. https://doi.org/10.1016/S1053-
8119(03)00369-0
James, G., Witten, D., Hastie, T., & Tibshirani, R. (2013). An Introduction to Statistical
Learning (Vol. 102). Springer. https://doi.org/10.1016/j.peva.2007.06.006
Jamiy, F. El, & Marsh, R. (2019a). Survey on depth perception in head mounted displays:
Distance estimation in virtual reality, augmented reality, and mixed reality. IET Image
Processing, 13(5), 707–712. https://doi.org/10.1049/iet-ipr.2018.5920
Jamiy, F. El, & Marsh, R. (2019b). Distance estimation in virtual reality and augmented
reality: A survey. IEEE International Conference on Electro Information Technology,
2019-May, 063–068. https://doi.org/10.1109/EIT.2019.8834182
Jang, S. H., You, S. H., Hallett, M., Cho, Y. W., Park, C.-M., Cho, S.-H., Lee, H.-Y., &
Kim, T.-H. (2005). Cortical Reorganization and Associated Functional Motor
Recovery After Virtual Reality in Patients With Chronic Stroke: An Experimenter-
Blind Preliminary Study. Archives of Physical Medicine and Rehabilitation, 86(11),
2218–2223. https://doi.org/10.1016/j.apmr.2005.04.015
Jenkins, I. H., Brooks, D. J., Nixon, P. D., Frackowiak, R. S. J., & Passingham, R. E.
(1994). Motor sequence learning: A study with positron emission tomography.
Journal of Neuroscience, 14(6), 3775–3790. https://doi.org/10.1523/jneurosci.14-06-
03775.1994
Johnson, N. N., Carey, J., Edelman, B. J., Doud, A., Grande, A., Lakshminarayan, K., &
He, B. (2018). Combined rTMS and virtual reality brain–computer interface training
for motor recovery after stroke. Journal of Neural Engineering, 15(1), 016009.
https://doi.org/10.1088/1741-2552/aa8ce3
Joiner, W. M., & Smith, M. A. (2008). Long-term retention explained by a model of short-
term learning in the adaptive control of reaching. Journal of Neurophysiology, 100(5),
2948–2955. https://doi.org/10.1152/jn.90706.2008
Jonides, J., Smith, E. E., Koeppe, R. A., Awh, E., Minoshima, S., & Mintun, M. A. (1993).
Spatial working memory in humans as revealed by PET. Nature, 363(6430), 623–
625. https://doi.org/10.1038/363623a0
Jost, K., & Mayr, U. (2016). Switching between filter settings reduces the efficient
utilization of visual working memory. Cognitive, Affective and Behavioral
Neuroscience, 16(2), 207–218. https://doi.org/10.3758/s13415-015-0380-5
Juliano, J. M., & Liew, S. L. (2020). Transfer of motor skill between virtual reality viewed
using a head-mounted display and conventional screen environments. Journal of
NeuroEngineering and Rehabilitation, 17(1), 1–13. https://doi.org/10.1186/s12984-
020-00678-2
Juliano, J. M., Schweighofer, N., & Liew, S.-L. (2021). Increased cognitive load in
immersive virtual reality during visuomotor adaptation is associated with decreased
long-term retention and context transfer. Research Square (Preprint, Version 1).
https://doi.org/10.21203/rs.3.rs-1139453/v1
Juliano, J. M., Spicer, R. P., Lefebvre, S., Jann, K., Ard, T., Santarnecchi, E., Krum, D.
M., & Liew, S.-L. (2019). Embodiment improves performance on an immersive brain
computer interface in head-mounted virtual reality. BioRxiv, 578682.
https://doi.org/10.1101/578682
Jung, J., Yu, J., & Kang, H. (2017). Virtual and augmented reality based balance and gait
training. White Paper, February. https://doi.org/10.1589/jpts.24.1133
Just, M. A., Stapley, P. J., Ros, M., Naghdy, F., & Stirling, D. (2014). A comparison of
upper limb movement profiles when reaching to virtual and real targets using the
Oculus Rift: implications for virtual-reality enhanced stroke rehabilitation. Journal of
Pain Management, 9(2), 277–281.
Kandel, E. R., Schwartz, J. H., Jessell, T. M., Siegelbaum, S. A., & Hudspeth, A. J. (2012).
Principles of Neural Science (5th ed.). McGraw-Hill.
Kawato, M. (1999). Internal models for motor control and trajectory planning. In Current
Opinion in Neurobiology (Vol. 9, Issue 6, pp. 718–727). Elsevier Current Trends.
https://doi.org/10.1016/S0959-4388(99)00028-8
Kelly, J. W., Cherep, L. A., & Siegel, Z. D. (2017). Perceived space in the HTC vive. ACM
Transactions on Applied Perception, 15(1), 1–16. https://doi.org/10.1145/3106155
Kennedy, R. S., Lane, N. E., Berbaum, K. S., & Lilienthal, M. G. (1993). Simulator sickness
questionnaire: an enhanced method for quantifying simulator sickness. International
Journal of Aviation Psychology, 3(3), 203–220.
https://doi.org/10.1207/s15327108ijap0303_3
Keshner, E. A., Weiss, P. T., Geifman, D., & Raban, D. (2019). Tracking the evolution of
virtual reality applications to rehabilitation as a field of study. Journal of
NeuroEngineering and Rehabilitation, 16(1), 76. https://doi.org/10.1186/s12984-019-
0552-6
Kilteni, K., Bergstrom, I., & Slater, M. (2013). Drumming in immersive virtual reality: the
body shapes the way we play. IEEE Transactions on Visualization and Computer
Graphics, 19(4), 597–605. https://doi.org/10.1109/TVCG.2013.29
Kilteni, K., Groten, R., & Slater, M. (2012). The Sense of Embodiment in Virtual Reality.
Presence: Teleoperators and Virtual Environments, 21(4), 373–387.
Kilteni, K., Normand, J.-M., Sanchez-Vives, M. V., & Slater, M. (2012). Extending body
space in immersive virtual reality: a very long arm illusion. PLoS ONE, 7(7), e40867.
https://doi.org/10.1371/journal.pone.0040867
Kim, A., Kretch, K. S., Zhou, Z., & Finley, J. M. (2018). The quality of visual information
about the lower extremities influences visuomotor coordination during virtual
obstacle negotiation. Journal of Neurophysiology, 120, 839–847. https://doi.org/10.1152/jn
Kim, A., Schweighofer, N., & Finley, J. M. (2019). Locomotor skill acquisition in virtual
reality shows sustained transfer to the real world. Journal of NeuroEngineering and
Rehabilitation, 16(1), 113. https://doi.org/10.1186/s12984-019-0584-y
Kim, H.-Y. (2013). Statistical notes for clinical researchers: assessing normal distribution
(2) using skewness and kurtosis. Restorative Dentistry & Endodontics, 38(1), 52–54.
https://doi.org/10.5395/rde.2013.38.1.52
Kim, H. K., Park, J., Choi, Y., & Choe, M. (2018). Virtual reality sickness questionnaire
(VRSQ): Motion sickness measurement index in a virtual reality environment. Applied
Ergonomics, 69, 66–73. https://doi.org/10.1016/j.apergo.2017.12.016
Kim, S., Ogawa, K., Lv, J., Schweighofer, N., & Imamizu, H. (2015). Neural Substrates
Related to Motor Memory with Multiple Timescales in Sensorimotor Adaptation.
PLoS Biology, 13(12), e1002312. https://doi.org/10.1371/journal.pbio.1002312
Kirpich, A., Ainsworth, E. A., Wedow, J. M., Newman, J. R. B., Michailidis, G., & McIntyre,
L. M. (2018). Variable selection in omics data: A practical evaluation of small sample
sizes. PLoS ONE, 13(6). https://doi.org/10.1371/journal.pone.0197910
Kitago, T., & Krakauer, J. W. (2013). Motor learning principles for neurorehabilitation. In
Handbook of Clinical Neurology (Vol. 110, pp. 93–103). Elsevier.
https://doi.org/10.1016/B978-0-444-52901-5.00008-3
Kizony, R., Katz, N., & Weiss, P. L. (2003). Adapting an immersive virtual reality
system for rehabilitation. The Journal of Visualization and Computer Animation, 14,
261–268. https://doi.org/10.1002/vis.323
Klem, G. H., Lüders, H. O., Jasper, H. H., & Elger, C. (1999). The ten-twenty electrode
system of the International Federation. Electroencephalography and Clinical
Neurophysiology, 52, 3–6.
Kock, N., & Lynn, G. S. (2012). Lateral collinearity and misleading results in variance-
based SEM: An illustration and recommendations. Journal of the Association for
Information Systems, 13(7), 546–580. https://doi.org/10.17705/1jais.00302
Kozak, J. J., Hancock, P. A., Arthur, E. J., & Chrysler, S. T. (1993). Transfer of training
from virtual reality. Ergonomics, 36(7), 777–784.
Krakauer, J. W. (2009). Motor Learning and Consolidation: The Case of Visuomotor
Rotation. Advances in Experimental Medicine and Biology, 629, 405–421. https://doi.org/10.1007/978-0-
387-77064-2_21
Krakauer, J. W., Ghilardi, M. F., Mentis, M., Barnes, A., Veytsman, M., Eidelberg, D., &
Ghez, C. (2004). Differential Cortical and Subcortical Activations in Learning
Rotations and Gains for Reaching: A PET Study. Journal of Neurophysiology, 91(2),
924–933. https://doi.org/10.1152/jn.00675.2003
Krakauer, J. W., Hadjiosif, A. M., Xu, J., Wong, A. L., & Haith, A. M. (2019). Motor
learning. Comprehensive Physiology, 9(2), 613–663.
https://doi.org/10.1002/cphy.c170043
Laver, K. E., Lange, B., George, S., Deutsch, J. E., Saposnik, G., & Crotty, M. (2017).
Virtual reality for stroke rehabilitation. Cochrane Database of Systematic Reviews,
2017(11). https://doi.org/10.1002/14651858.CD008349.pub4
Lee, J., Kim, M., & Kim, J. (2017). A study on immersion and VR sickness in walking
interaction for immersive virtual reality applications. Symmetry, 9(5), 78.
https://doi.org/10.3390/sym9050078
Lee, J. Y., & Schweighofer, N. (2009). Dual adaptation supports a parallel architecture of
motor memory. Journal of Neuroscience, 29(33), 10396–10404.
https://doi.org/10.1523/JNEUROSCI.1294-09.2009
Lee, T. D., Swinnen, S. P., & Serrien, D. J. (1994). Cognitive effort and motor learning.
Quest, 46(3), 328–344. https://doi.org/10.1080/00336297.1994.10484130
Leow, L. A., Marinovic, W., de Rugy, A., & Carroll, T. J. (2020). Task errors drive
memories that improve sensorimotor adaptation. The Journal of Neuroscience,
40(15), 3075–3088. https://doi.org/10.1101/538348
Levac, D. E., Huber, M. E., & Sternad, D. (2019). Learning and transfer of complex motor
skills in virtual reality: a perspective review. Journal of NeuroEngineering and
Rehabilitation, 16(1), 121. https://doi.org/10.1186/s12984-019-0587-8
Levac, D. E., & Jovanovic, B. B. (2017). Is children’s motor learning of a postural reaching
task enhanced by practice in a virtual environment? 2017 International Conference
on Virtual Rehabilitation (ICVR), 1–7. https://doi.org/10.1109/ICVR.2017.8007489
Levin, M. F. (2020). What is the potential of virtual reality for post-stroke sensorimotor
rehabilitation? Expert Review of Neurotherapeutics, 20(3), 195–197.
https://doi.org/10.1080/14737175.2020.1727741
Levin, M. F., & Demers, M. (2020). Motor learning in neurological rehabilitation. Disability
and Rehabilitation, 1–9. https://doi.org/10.1080/09638288.2020.1752317
Levin, M. F., Magdalon, E. C., Michaelsen, S. M., & Quevedo, A. A. F. (2015). Quality of
Grasping and the Role of Haptics in a 3-D Immersive Virtual Reality Environment in
Individuals With Stroke. IEEE Transactions on Neural Systems and Rehabilitation
Engineering, 23(6), 1047–1055. https://doi.org/10.1109/TNSRE.2014.2387412
Levin, M. F., Weiss, P. L., & Keshner, E. A. (2015). Emergence of virtual reality as a tool
for upper limb rehabilitation: incorporation of motor control and motor learning
principles. Physical Therapy, 95(3), 415–425.
Lewis, G. N., Woods, C., Rosie, J. a, & McPherson, K. M. (2011). Virtual reality games
for rehabilitation of people with stroke: perspectives from the users. Disability and
Rehabilitation. Assistive Technology, 6(5), 453–463.
https://doi.org/10.3109/17483107.2011.574310
Liew, S. L., Thompson, T., Ramirez, J., Butcher, P. A., Taylor, J. A., & Celnik, P. A. (2018).
Variable neural contributions to explicit and implicit learning during visuomotor
adaptation. Frontiers in Neuroscience, 12, 610.
https://doi.org/10.3389/fnins.2018.00610
Lo, S., & Andrews, S. (2015). To transform or not to transform: using generalized linear
mixed models to analyse reaction time data. Frontiers in Psychology, 6, 1171.
https://doi.org/10.3389/fpsyg.2015.01171
Lohse, K. R., Boyd, L. A., & Hodges, N. J. (2016). Engaging Environments Enhance Motor
Skill Learning in a Computer Gaming Task. Journal of Motor Behavior, 48(2), 172–
182. https://doi.org/10.1080/00222895.2015.1068158
Lourenço, C. B., Azeff, L., Sveistrup, H., & Levin, M. F. (2008). Effect of environment on
motivation and sense of presence in healthy subjects performing reaching tasks.
2008 Virtual Rehabilitation, IWVR, 93–98.
https://doi.org/10.1109/ICVR.2008.4625143
Lu, F., & Petkova, E. (2014). A comparative study of variable selection methods in the
context of developing psychiatric screening instruments. Statistics in Medicine, 33(3),
401–421. https://doi.org/10.1002/sim.5937
Luu, T. P., He, Y., Brown, S., Nakagame, S., & Contreras-Vidal, J. L. (2016). Gait
adaptation to visual kinematic perturbations using a real-time closed-loop brain-
computer interface to a virtual reality avatar. Journal of Neural Engineering, 13(3),
036006. https://doi.org/10.1088/1741-2560/13/3/036006
Maeda, R. S., Cluff, T., Gribble, P. L., & Pruszynski, J. A. (2018). Feedforward and
feedback control share an internal model of the arm’s dynamics. Journal of
Neuroscience, 38(49), 10505–10514. https://doi.org/10.1523/JNEUROSCI.1709-
18.2018
Magdalon, E. C., Michaelsen, S. M., Quevedo, A. A., & Levin, M. F. (2011). Comparison
of grasping movements made by healthy subjects in a 3-dimensional immersive
virtual versus physical environment. Acta Psychologica, 138(1), 126–134.
https://doi.org/10.1016/j.actpsy.2011.05.015
Mangalam, M., Yarossi, M., Furmanek, M. P., & Tunik, E. (2020). Control of aperture
closure during reach-to-grasp movements in immersive haptic-free virtual reality.
BioRxiv. https://doi.org/10.1101/2020.08.01.232470
Mao, R. Q., Lan, L., Kay, J., Lohre, R., Ayeni, O. R., Goel, D. P., & de SA, D. (2021).
Immersive Virtual Reality for Surgical Training: A Systematic Review. Journal of
Surgical Research, 268, 40–58. https://doi.org/10.1016/j.jss.2021.06.045
Markov, D. A., Petrucco, L., Kist, A. M., & Portugues, R. (2021). A cerebellar internal
model calibrates a feedback controller involved in sensorimotor control. Nature
Communications, 12(1), 1–21. https://doi.org/10.1101/2020.02.12.945956
Massetti, T., Fávero, F. M., de Menezes, L. D. C., Alvarez, M. P. B., Crocetta, T. B.,
Guarnieri, R., Nunes, F. L. S., Monteiro, C. B. de M., & Silva, T. D. da. (2018).
Achievement of virtual and real objects using a short-term motor learning protocol in
people with duchenne muscular dystrophy: a crossover randomized controlled trial.
Games for Health Journal, 7(2), 107–115. https://doi.org/10.1089/g4h.2016.0088
Mazzoni, P., & Krakauer, J. W. (2006). An implicit plan overrides an explicit strategy
during visuomotor adaptation. Journal of Neuroscience, 26(14), 3642–3645.
https://doi.org/10.1523/JNEUROSCI.5317-05.2006
McDougle, S. D., Bond, K. M., & Taylor, J. A. (2015). Explicit and implicit processes
constitute the fast and slow processes of sensorimotor learning. Journal of
Neuroscience, 35(26), 9568–9579. https://doi.org/10.1523/JNEUROSCI.5061-
14.2015
McDougle, S. D., Ivry, R. B., & Taylor, J. A. (2016). Taking Aim at the Cognitive Side of
Learning in Sensorimotor Adaptation Tasks. Trends in Cognitive Sciences, 20(7),
535–544. https://doi.org/10.1016/j.tics.2016.05.002
McDougle, S. D., & Taylor, J. A. (2019). Dissociable cognitive strategies for sensorimotor
learning. Nature Communications, 10(1). https://doi.org/10.1038/s41467-018-07941-
0
McEwen, D., Taillon-Hobson, A., Bilodeau, M., Sveistrup, H., & Finestone, H. (2014).
Virtual reality exercise improves mobility after stroke: An inpatient randomized
controlled trial. Stroke, 45(6), 1853–1855.
https://doi.org/10.1161/STROKEAHA.114.005362
McMahon, M., & Schukat, M. (2018). A low-cost, open-source, BCI-VR game control
development environment prototype for game based neurorehabilitation. IEEE
Games, Entertainment, Media Conference (GEM), 1–9.
https://doi.org/10.1109/gem.2018.8516468
Miall, R. C., & Wolpert, D. M. (1996). Forward models for physiological motor control.
Neural Networks, 9(8), 1265–1279. https://doi.org/10.1016/S0893-6080(96)00035-4
Milner, A. D. (2017). How do the two visual streams interact with each other?
Experimental Brain Research, 235(5), 1297–1308. https://doi.org/10.1007/s00221-
017-4917-4
Milner, A. D., Perrett, D. I., Johnston, R. S., Benson, P. J., Jordan, T. R., Heeley, D. W.,
Bettucci, D., Mortara, F., Mutani, R., Terazzi, E., & Davidson, D. L. W. (1991).
Perception and action in “visual form agnosia.” Brain, 114(1), 405–428.
https://doi.org/10.1093/brain/114.1.405
Minini, L., Parker, A. J., & Bridge, H. (2010). Neural modulation by binocular disparity
greatest in human dorsal visual stream. Journal of Neurophysiology, 104(1), 169–
178. https://doi.org/10.1152/jn.00790.2009
Mon-Williams, M., & Dijkerman, H. C. (1999). The use of vergence information in the
programming of prehension. Experimental Brain Research, 128(4), 578–582.
https://doi.org/10.1007/s002210050885
Müssgens, D. M., & Ullén, F. (2015). Transfer in motor sequence learning: Effects of
practice schedule and sequence context. Frontiers in Human Neuroscience, 9,
642. https://doi.org/10.3389/fnhum.2015.00642
Mutha, P. K., Sainburg, R. L., & Haaland, K. Y. (2011). Left parietal regions are critical for
adaptive visuomotor control. Journal of Neuroscience, 31(19), 6972–6981.
https://doi.org/10.1523/JNEUROSCI.6432-10.2011
Naito, E., Kochiyama, T., Kitada, R., Nakamura, S., Matsumura, M., Yonekura, Y., &
Sadato, N. (2002). Internally simulated movement sensations during motor imagery
activate cortical motor areas and the cerebellum. The Journal of Neuroscience,
22(9), 3683–3691. https://doi.org/20026282
Neguţ, A., Matu, S.-A., Sava, F. A., & David, D. (2016). Task difficulty of virtual reality-
based assessment tools compared to classical paper-and-pencil or computerized
measures: A meta-analytic approach. Computers in Human Behavior, 54, 414–424.
https://doi.org/10.1016/j.chb.2015.08.029
Nemani, A., Ahn, W., Cooper, C., Schwaitzberg, S., & De, S. (2018). Convergent
validation and transfer of learning studies of a virtual reality-based pattern cutting
simulator. Surgical Endoscopy, 32(3), 1265–1272. https://doi.org/10.1007/s00464-
017-5802-8
Neuper, C., Wörtz, M., & Pfurtscheller, G. (2006). ERD/ERS patterns reflecting
sensorimotor activation and deactivation. Progress in Brain Research, 159, 211–222.
https://doi.org/10.1016/S0079-6123(06)59014-4
Nezafat, R., Shadmehr, R., & Holcomb, H. H. (2001). Long-term adaptation to dynamics
of reaching movements: A PET study. Experimental Brain Research, 140(1), 66–76.
https://doi.org/10.1007/s002210100787
Ono, T., Shindo, K., Kawashima, K., Ota, N., Ito, M., Ota, T., Mukaino, M., Fujiwara, T.,
Kimura, A., Liu, M., & Ushiba, J. (2014). Brain-computer interface with
somatosensory feedback improves functional recovery from severe hemiplegia due
to chronic stroke. Frontiers in Neuroengineering, 7(19).
https://doi.org/10.3389/fneng.2014.00019
Orru, G., & Longo, L. (2019). The Evolution of Cognitive Load Theory and the
Measurement of Its Intrinsic, Extraneous and Germane Loads: A Review.
Communications in Computer and Information Science, 1012, 23–48.
https://doi.org/10.1007/978-3-030-14273-5_3
Osimo, S. A., Pizarro, R., Spanlang, B., & Slater, M. (2015). Conversations between self
and self as Sigmund Freud—A virtual body ownership paradigm for self counselling.
Scientific Reports, 5, 13899. https://doi.org/10.1038/srep13899
Overney, L. S., Blanke, O., & Herzog, M. H. (2008). Enhanced temporal but not attentional
processing in expert tennis players. PLoS ONE, 3(6), 2380.
https://doi.org/10.1371/journal.pone.0002380
Ozana, A., Berman, S., & Ganel, T. (2018). Grasping trajectories in a virtual environment
adhere to Weber’s law. Experimental Brain Research, 236(6), 1775–1787.
https://doi.org/10.1007/s00221-018-5265-8
Ozana, A., Berman, S., & Ganel, T. (2020). Grasping Weber’s Law in a Virtual
Environment: The Effect of Haptic Feedback. Frontiers in Psychology,
11, 1–15. https://doi.org/10.3389/fpsyg.2020.573352
Ozana, A., & Ganel, T. (2018). Dissociable effects of irrelevant context on 2D and 3D
grasping. Attention, Perception, and Psychophysics, 80(2), 564–575.
https://doi.org/10.3758/s13414-017-1443-1
Ozana, A., & Ganel, T. (2019a). Obeying the law: speed–precision tradeoffs and the
adherence to Weber’s law in 2D grasping. Experimental Brain Research, 237(8),
2011–2021. https://doi.org/10.1007/s00221-019-05572-5
Ozana, A., & Ganel, T. (2019b). Weber’s law in 2D and 3D grasping. Psychological
Research, 83(5), 977–988. https://doi.org/10.1007/s00426-017-0913-3
Ozana, A., Namdar, G., & Ganel, T. (2020). Active visuomotor interactions with virtual
objects on touchscreens adhere to Weber’s law. Psychological Research, 84(8),
2144–2156. https://doi.org/10.1007/s00426-019-01210-5
Park, B., & Brünken, R. (2017). Secondary Task as a Measure of Cognitive Load.
Cognitive Load Measurement and Application, 75–92.
https://doi.org/10.4324/9781315296258-6
Parker, A. J. (2007). Binocular depth perception and the cerebral cortex. Nature Reviews
Neuroscience, 8(5), 379–391. https://doi.org/10.1038/nrn2131
Pavone, E. F., Tieri, G., Rizza, G., Tidoni, E., Grisoni, L., & Aglioti, S. M. (2016).
Embodying others in immersive virtual reality: electro-cortical signatures of
monitoring the errors in the actions of an avatar seen from a first-person perspective.
The Journal of Neuroscience, 36(2), 268–279.
https://doi.org/10.3389/fpsyg.2016.01260
Peters, S., Handy, T. C., Lakhani, B., Boyd, L. A., & Garland, S. J. (2015). Motor and
visuospatial attention and motor planning after stroke: Considerations for the
rehabilitation of standing balance and gait. Physical Therapy, 95(10), 1423–1432.
https://doi.org/10.2522/ptj.20140492
Petri, K., Emmermacher, P., Danneberg, M., Masik, S., Eckardt, F., Weichelt, S., Bandow,
N., & Witte, K. (2019). Training using virtual reality improves response behavior in
karate kumite. Sports Engineering, 22(1), 2. https://doi.org/10.1007/s12283-019-
0299-0
Pfurtscheller, G. (2000). Spatiotemporal ERD/ERS patterns during voluntary movement
and motor imagery. Supplements to Clinical Neurophysiology, 53, 196–198.
https://doi.org/10.1016/S1567-424X(09)70157-6
Pfurtscheller, G., & Neuper, C. (1997). Motor imagery activates primary sensorimotor
area in humans. Neuroscience Letters, 239(2–3), 65–68.
https://doi.org/10.1016/S0304-3940(97)00889-6
Phillips, L., Ries, B., Interrante, V., Kaeding, M., & Anderson, L. (2009). Distance
perception in immersive virtual environments, revisited. Proceedings - APGV 2009:
Symposium on Applied Perception in Graphics and Visualization, 11–14.
https://doi.org/10.1145/1620993.1620996
Pichiorri, F., Morone, G., Petti, M., Toppi, J., Pisotta, I., Molinari, M., Paolucci, S.,
Inghilleri, M., Astolfi, L., Cincotti, F., & Mattia, D. (2015). Brain-computer interface
boosts motor imagery practice during stroke recovery. Annals of Neurology, 77(5),
851–865. https://doi.org/10.1002/ana.24390
Pine, Z. M., Krakauer, J. W., Gordon, J., & Ghez, C. (1996). Learning of scaling factors
and reference axes for reaching movements. In Neuroreport (Vol. 7, Issue 14, pp.
2357–2361). https://doi.org/10.1097/00001756-199610020-00016
Ping, J., Liu, Y., & Weng, D. (2019). Comparison in depth perception between virtual
reality and augmented reality systems. 26th IEEE Conference on Virtual Reality and
3D User Interfaces, VR 2019 - Proceedings, 1124–1125.
https://doi.org/10.1109/VR.2019.8798174
Porcino, T. M., Clua, E., Trevisan, D., Vasconcelos, C. N., & Valente, L. (2017).
Minimizing cyber sickness in head mounted display systems: Design guidelines and
applications. 2017 IEEE 5th International Conference on Serious Games and
Applications for Health (SeGAH), 1–6.
https://doi.org/10.1109/SeGAH.2017.7939283
Qadir, Z., Chowdhury, E., Ghosh, L., & Konar, A. (2019). Quantitative Analysis of
Cognitive Load Test While Driving in a VR vs Non-VR Environment. Lecture Notes
in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and
Lecture Notes in Bioinformatics), 11942 LNCS, 481–489.
https://doi.org/10.1007/978-3-030-34872-4_53
Ramos-Murguialday, A., Broetz, D., Rea, M., Läer, L., Yilmaz, Ö., Brasil, F. L., Liberati,
G., Curado, M. R., Garcia-Cossio, E., Vyziotis, A., Cho, W., Agostini, M., Soares, E.,
Soekadar, S., Caria, A., Cohen, L. G., & Birbaumer, N. (2013). Brain-machine
interface in chronic stroke rehabilitation: a controlled study. Annals of Neurology,
74(1), 100–108. https://doi.org/10.1002/ana.23879
Redding, G. M., Rader, S. D., & Lucas, D. R. (1992). Cognitive load and prism adaptation.
Journal of Motor Behavior, 24(3), 238–246.
https://doi.org/10.1080/00222895.1992.9941619
Reid, D. T. (2002). Benefits of a virtual play rehabilitation environment for children with
cerebral palsy on perceptions of self-efficacy: a pilot study. Pediatric Rehabilitation,
5(3), 141–148. https://doi.org/10.1080/136384902100003934
Reis, J., Schambra, H. M., Cohen, L. G., Buch, E. R., Fritsch, B., Zarahn, E., Celnik, P.
A., & Krakauer, J. W. (2009). Noninvasive cortical stimulation enhances motor skill
acquisition over multiple days through an effect on consolidation. Proceedings of the
National Academy of Sciences, 106(5), 1590–1595.
Richardson, A. E., Powers, M. E., & Bousquet, L. G. (2011). Video game experience
predicts virtual, but not real navigation performance. Computers in Human Behavior,
27(1), 552–560. https://doi.org/10.1016/j.chb.2010.10.003
Robertson, E. M., & Miall, R. C. (1999). Visuomotor adaptation during inactivation of the
dentate nucleus. NeuroReport, 10(5), 1029–1034.
Rossetti, Y., Pisella, L., & McIntosh, R. D. (2017). Rise and fall of the two visual systems
theory. Annals of Physical and Rehabilitation Medicine, 60(3), 130–140.
https://doi.org/10.1016/j.rehab.2017.02.002
Roy, A. K., Soni, Y., & Dubey, S. (2013). Enhancing Effectiveness of Motor
Rehabilitation Using Kinect Motion Sensing Technology. Global Humanitarian
Technology Conference: South Asia Satellite (GHTC-SAS), 2013 IEEE, 298–304.
https://doi.org/10.1109/GHTC-SAS.2013.6629934
Ruddle, R. A., Payne, S. J., & Jones, D. M. (1999). Navigating Large-Scale Virtual
Environments: What Differences Occur Between Helmet-Mounted and Desk-Top
Displays? Presence, 8(2), 157–168.
Rushworth, M. F. S., Krams, M., & Passingham, R. E. (2001). The attentional role of the
left parietal cortex: The distinct lateralization and localization of motor attention in the
human brain. Journal of Cognitive Neuroscience, 13(5), 698–710.
https://doi.org/10.1162/089892901750363244
Saposnik, G., Teasell, R., Mamdani, M., Hall, J., McIlroy, W., Cheung, D., Thorpe, K. E.,
Cohen, L. G., & Bayley, M. (2010). Effectiveness of virtual reality using wii gaming
technology in stroke rehabilitation: A pilot randomized clinical trial and proof of
principle. Stroke, 41(7), 1477–1484.
https://doi.org/10.1161/STROKEAHA.110.584979
Schweighofer, N., Lee, J.-Y., Goh, H.-T., Choi, Y., Shin Kim, S., Campbell Stewart, J.,
Lewthwaite, R., & Winstein, C. J. (2011). Mechanisms of the contextual interference
effect in individuals poststroke. Journal of Neurophysiology, 106, 2632–2641.
https://doi.org/10.1152/jn.00399.2011
Schwind, V., Knierim, P., Haas, N., & Henze, N. (2019). Using presence questionnaires
in virtual reality. Conference on Human Factors in Computing Systems -
Proceedings, 1–12. https://doi.org/10.1145/3290605.3300590
Seidler, R. D., & Noll, D. C. (2008). Neuroanatomical correlates of motor acquisition and
motor transfer. Journal of Neurophysiology, 99(4), 1836–1845.
https://doi.org/10.1152/jn.01187.2007
Seidler, R. D. (2007). Aging affects motor learning but not savings at transfer of
learning. Learning and Memory, 14, 17–21. https://doi.org/10.1101/lm.394707
Seidler, R. D., Bo, J., & Anguera, J. A. (2012). Neurocognitive contributions to motor
skill learning: The role of working memory. Journal of Motor Behavior, 44(6), 445–
453. https://doi.org/10.1080/00222895.2012.672348
Seidler, R. D., & Carson, R. G. (2017). Sensorimotor Learning: Neurocognitive
Mechanisms and Individual Differences. Journal of NeuroEngineering and
Rehabilitation, 14(1), 74. https://doi.org/10.1186/s12984-017-0279-1
Seidler, R. D., Noll, D. C., & Chintalapati, P. (2006). Bilateral basal ganglia
activation associated with sensorimotor adaptation. Experimental Brain Research,
175(3), 544–555. https://doi.org/10.1007/s00221-006-0571-y
Seidler, R. D., Noll, D. C., & Thiers, G. (2004). Feedforward and feedback
processes in motor control. NeuroImage, 22(4), 1775–1783.
https://doi.org/10.1016/j.neuroimage.2004.05.003
Seth, A., Vance, J. M., & Oliver, J. H. (2011). Virtual reality for assembly methods
prototyping: A review. Virtual Reality, 15(1), 5–20. https://doi.org/10.1007/s10055-
009-0153-y
Shiffrin, R. M., & Schneider, W. (1977). Controlled and automatic human information
processing: II. Perceptual learning, automatic attending and a general theory.
Psychological Review, 84(2), 127–190. https://doi.org/10.1037/0033-295X.84.2.127
Sing, G. C., & Smith, M. A. (2010). Reduction in learning rates associated with
anterograde interference results from interactions between different timescales in
motor adaptation. PLoS Computational Biology, 6(8).
https://doi.org/10.1371/journal.pcbi.1000893
Slater, M. (1999). Measuring presence: A response to the Witmer and Singer presence
questionnaire. Presence, 8(5), 560–565. https://doi.org/10.1162/105474699566477
Slater, M. (2018). Immersion and the illusion of presence in virtual reality. British Journal
of Psychology, 109(3), 431–433. https://doi.org/10.1111/bjop.12305
Slater, M., & Sanchez-Vives, M. V. (2016). Enhancing our lives with immersive virtual
reality. Frontiers in Robotics and AI, 3, 74. https://doi.org/10.3389/frobt.2016.00074
Slobounov, S. M., Ray, W., Johnson, B., Slobounov, E., & Newell, K. M. (2015).
Modulation of cortical activity in 2D versus 3D virtual reality environments: An EEG
study. International Journal of Psychophysiology, 95(3), 254–260.
https://doi.org/10.1016/J.IJPSYCHO.2014.11.003
Smith, E. E., & Jonides, J. (1997). Working memory: A view from neuroimaging. Cognitive
Psychology, 33(1), 5–42. https://doi.org/10.1006/cogp.1997.0658
Smith, M. A., Ghazizadeh, A., & Shadmehr, R. (2006). Interacting Adaptive Processes
with Different Timescales Underlie Short-Term Motor Learning. PLoS Biology, 4(6),
e179. https://doi.org/10.1371/journal.pbio.0040179
Smith, S. M., & Vela, E. (2001). Environmental context-dependent memory: A review and
meta-analysis. Psychonomic Bulletin and Review, 8(2), 203–220.
https://doi.org/10.3758/BF03196157
Soechting, J. F., & Flanders, M. (1989). Sensorimotor representations for pointing to
targets in three-dimensional space. Journal of Neurophysiology, 62(2), 582–594.
https://doi.org/10.1152/jn.1989.62.2.582
Souchet, A. D., Philippe, S., Lourdeaux, D., & Leroy, L. (2021). Measuring Visual Fatigue
and Cognitive Load via Eye Tracking while Learning with Virtual Reality Head-
Mounted Displays: A Review. International Journal of Human-Computer Interaction,
00(00), 1–24. https://doi.org/10.1080/10447318.2021.1976509
Spicer, R., Anglin, J. M., Krum, D. M., & Liew, S.-L. (2017). REINVENT: A low-cost, virtual
reality brain-computer interface for severe stroke upper limb motor recovery. IEEE
Virtual Reality, 385–386. https://doi.org/10.1109/VR.2017.7892338
Steed, A., Pan, Y., Zisch, F., & Steptoe, W. (2016). The impact of a self-avatar on
cognitive load in immersive virtual reality. Proceedings - IEEE Virtual Reality, 2016-
July, 67–76. https://doi.org/10.1109/VR.2016.7504689
Stevens, J. A., & Kincaid, J. P. (2015). The relationship between presence and
performance in virtual simulation training. Open Journal of Modelling and Simulation,
3, 41–48. https://doi.org/10.4236/ojmsi.2015.32005
Subramanian, S. K., & Levin, M. F. (2011). Viewing medium affects arm motor
performance in 3D virtual environments. Journal of NeuroEngineering and
Rehabilitation, 8(1), 36. https://doi.org/10.1186/1743-0003-8-36
Subramanian, S. K., Lourenço, C. B., Chilingaryan, G., Sveistrup, H., & Levin, M. F.
(2013). Arm motor recovery using a virtual reality intervention in chronic stroke:
randomized control trial. Neurorehabilitation and Neural Repair, 27(1), 13–23.
https://doi.org/10.1177/1545968312449695
Takeo, Y., Hara, M., Shirakawa, Y., Ikeda, T., & Sugata, H. (2021). Sequential motor
learning transfers from real to virtual environment. Journal of NeuroEngineering and
Rehabilitation, 18(1), 1–8. https://doi.org/10.1186/s12984-021-00903-6
Tanji, J. (2001). Sequential Organization of Multiple Movements: Involvement of Cortical
Motor Areas. Annual Review of Neuroscience, 24(1), 631–651.
https://doi.org/10.1146/annurev.neuro.24.1.631
Taylor, J. A., & Ivry, R. B. (2011). Flexible cognitive strategies during motor learning.
PLoS Computational Biology, 7(3). https://doi.org/10.1371/journal.pcbi.1001096
Taylor, J. A., & Ivry, R. B. (2012). The role of strategies in motor learning. Annals of the
New York Academy of Sciences, 1251(1), 1–12. https://doi.org/10.1111/j.1749-
6632.2011.06430.x
Taylor, J. A., & Ivry, R. B. (2013a). Context-dependent generalization. Frontiers in Human
Neuroscience, 7, 171. https://doi.org/10.3389/fnhum.2013.00171
Taylor, J. A., & Ivry, R. B. (2013b). Implicit and Explicit Processes in Motor Learning.
Action Science, 63–87. https://doi.org/10.7551/mitpress/9780262018555.003.0003
Taylor, J. A., Klemfuss, N. M., & Ivry, R. B. (2010). An explicit strategy prevails when the
cerebellum fails to compute movement errors. Cerebellum, 9(4), 580–586.
https://doi.org/10.1007/s12311-010-0201-x
Taylor, J. A., Krakauer, J. W., & Ivry, R. B. (2014). Explicit and implicit contributions to
learning in a sensorimotor adaptation task. Journal of Neuroscience, 34(8), 3023–
3032. https://doi.org/10.1523/JNEUROSCI.3619-13.2014
Taylor, J. A., & Thoroughman, K. A. (2007). Divided attention impairs human motor
adaptation but not feedback control. Journal of Neurophysiology, 98(1), 317–326.
https://doi.org/10.1152/jn.01070.2006
Thoma, V., & Henson, R. N. (2011). Object representations in ventral and dorsal visual
streams: FMRI repetition effects depend on attention and part-whole configuration.
NeuroImage, 57(2), 513–525. https://doi.org/10.1016/j.neuroimage.2011.04.035
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the
Royal Statistical Society. Series B (Methodological), 58(1), 267–288.
Tieri, G., Morone, G., Paolucci, S., & Iosa, M. (2018). Virtual reality in cognitive and motor
rehabilitation: facts, fiction and fallacies. Expert Review of Medical Devices, 15(2),
107–117. https://doi.org/10.1080/17434440.2018.1425613
Todd, J. J., & Marois, R. (2004). Capacity limit of visual short-term memory in human
posterior parietal cortex. Nature, 428(6984), 751–754.
https://doi.org/10.1038/nature02466
Torres, E. B., Quian Quiroga, R., Cui, H., & Buneo, C. (2013). Neural correlates of
learning and trajectory planning in the posterior parietal cortex. Frontiers in
Integrative Neuroscience, 7(39), 1–20. https://doi.org/10.3389/fnint.2013.00039
Tseng, Y. W., Diedrichsen, J., Krakauer, J. W., Shadmehr, R., & Bastian, A. J. (2007).
Sensory prediction errors drive cerebellum-dependent adaptation of reaching.
Journal of Neurophysiology, 98(1), 54–62. https://doi.org/10.1152/jn.00266.2007
Tsurukawa, J., Al-Sada, M., & Nakajima, T. (2015). Filtering visual information for
reducing visual cognitive load. UbiComp and ISWC 2015 - Proceedings of the 2015
ACM International Joint Conference on Pervasive and Ubiquitous Computing and the
Proceedings of the 2015 ACM International Symposium on Wearable Computers,
33–36. https://doi.org/10.1145/2800835.2800852
Tung, S. W., Guan, C., Ang, K. K., Phua, K. S., Wang, C., Zhao, L., Teo, W. P., & Chew,
E. (2013). Motor imagery BCI for upper limb stroke rehabilitation: an evaluation of
the EEG recordings using coherence analysis. 2013 35th Annual International
Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 261–
261. https://doi.org/10.1109/EMBC.2013.6609487
Turolla, A., Dam, M., Ventura, L., Tonin, P., Agostini, M., Zucconi, C., Kiper, P., Cagnin,
A., & Piron, L. (2013). Virtual reality for the rehabilitation of the upper limb motor
function after stroke: a prospective controlled trial. Journal of Neuroengineering and
Rehabilitation, 10, 85. https://doi.org/10.1186/1743-0003-10-85
Tzvi, E., Koeth, F., Karabanov, A. N., Siebner, H. R., & Krämer, U. M. (2020). Cerebellar
– Premotor cortex interactions underlying visuomotor adaptation. NeuroImage, 220.
https://doi.org/10.1016/j.neuroimage.2020.117142
Uktveris, T., & Jusas, V. (2018). Development of a modular board for EEG signal
acquisition. Sensors (Switzerland), 18(7), 2140. https://doi.org/10.3390/s18072140
Unni, A., Ihme, K., Jipp, M., & Rieger, J. W. (2017). Assessing the driver’s current level
of working memory load with high density functional near-infrared spectroscopy: A
realistic driving simulator study. Frontiers in Human Neuroscience, 11.
https://doi.org/10.3389/fnhum.2017.00167
van Asselen, M., Kessels, R. P. C., Neggers, S. F. W., Kappelle, L. J., Frijns, C. J. M., &
Postma, A. (2006). Brain areas involved in spatial working memory.
Neuropsychologia, 44(7), 1185–1194.
https://doi.org/10.1016/j.neuropsychologia.2005.10.005
Vaswani, P. A., Shmuelof, L., Haith, A. M., Delnicki, R. J., Huang, V. S., Mazzoni, P.,
Shadmehr, R., & Krakauer, J. W. (2015). Persistent residual errors in motor
adaptation tasks: Reversion to baseline and exploratory escape. Journal of
Neuroscience, 35(17), 6969–6977. https://doi.org/10.1523/JNEUROSCI.2656-
14.2015
Vecchiato, G., Tieri, G., Jelic, A., De Matteis, F., Maglione, A. G., & Babiloni, F. (2015).
Electroencephalographic correlates of sensorimotor integration and embodiment
during the appreciation of virtual architectural environments. Frontiers in Psychology,
6, 1944. https://doi.org/10.3389/fpsyg.2015.01944
Vourvopoulos, A., & Bermúdez i Badia, S. (2016). Motor priming in virtual reality can
augment motor-imagery training efficacy in restorative brain-computer interaction: a
within-subject analysis. Journal of NeuroEngineering and Rehabilitation, 13(1), 69.
https://doi.org/10.1186/s12984-016-0173-2
Vourvopoulos, A., Jorge, C., Abreu, R., Figueiredo, P., Fernandes, J. C., & Bermúdez i
Badia, S. (2019). Efficacy and brain imaging correlates of an immersive motor
imagery BCI-driven VR system for upper limb motor rehabilitation: A clinical case
report. Frontiers in Human Neuroscience, 13, 244.
https://doi.org/10.3389/fnhum.2019.00244
Vourvopoulos, A., Pardo, O. M., Lefebvre, S., Neureither, M., Saldana, D., Jahng, E., &
Liew, S. L. (2019). Effects of a brain-computer interface with virtual reality (VR)
neurofeedback: A pilot study in chronic stroke patients. Frontiers in Human
Neuroscience, 13, 210. https://doi.org/10.3389/fnhum.2019.00210
Wagner, M. J., & Smith, M. A. (2008). Shared internal models for feedforward and
feedback control. Journal of Neuroscience, 28(42), 10663–10673.
https://doi.org/10.1523/JNEUROSCI.5479-07.2008
Waller, D., Hunt, E., & Knapp, D. (1998). The transfer of spatial knowledge in virtual
environment training. Presence, 7(2), 129–143.
https://doi.org/10.1162/105474698565631
Webster, J. S., McFarland, P. T., Rapport, L. J., Morrill, B., Roades, L. A., & Abadee, P.
S. (2001). Computer-assisted training for improving wheelchair mobility in unilateral
neglect patients. Archives of Physical Medicine and Rehabilitation, 82(6), 769–775.
https://doi.org/10.1053/apmr.2001.23201
Ween, J. E., Alexander, M. P., D’Esposito, M., & Roberts, M. (1996). Factors predictive
of stroke outcome in a rehabilitation setting. Neurology, 47(2), 388–392.
https://doi.org/10.1212/WNL.47.2.388
Welch, P. D. (1967). The use of fast Fourier transform for the estimation of power spectra:
A method based on time averaging over short, modified periodograms. IEEE
Transactions on Audio and Electroacoustics, 15(2), 70–73.
https://doi.org/10.1109/TAU.1967.1161901
Werner, S., Strüder, H. K., & Donchin, O. (2019). Intermanual transfer of visuomotor
adaptation is related to awareness. PLoS ONE, 14(9).
https://doi.org/10.1371/journal.pone.0220748
Wiederhold, B. K., Jang, D. P., Kaneda, M., Cabral, I., Lurie, Y., May, T., Kim, I. Y.,
Wiederhold, M. D., & Kim, S. I. (2001). An investigation into physiological responses
in virtual environments: an objective measurement of presence. Towards
CyberPsychology: Mind, Cognitions and Society in the Internet Age, 175–184.
Witmer, B. G., Jerome, C. J., & Singer, M. J. (2005). The factor structure of the Presence
Questionnaire. Presence, 14(3), 298–312.
Witmer, B. G., & Singer, M. J. (1998). Measuring presence in virtual environments: a
presence questionnaire. Presence, 7(3), 225–240.
https://doi.org/10.1162/105474698565686
Wolpert, D. M., & Kawato, M. (1998). Multiple paired forward and inverse models for motor
control. Neural Networks, 11(7–8), 1317–1329. https://doi.org/10.1016/S0893-
6080(98)00066-5
Yee, N., & Bailenson, J. N. (2007). The Proteus effect: The effect of transformed self-
representation on behavior. Human Communication Research, 33(3), 271–290.
https://doi.org/10.1111/j.1468-2958.2007.00299.x
Yousif, N., & Diedrichsen, J. (2012). Structural learning in feedforward and feedback
control. Journal of Neurophysiology, 108(9), 2373–2382.
https://doi.org/10.1152/jn.00315.2012
Zhang, J. (2019). Cognitive Functions of the Brain: Perception, Attention and Memory.
ArXiv Preprint, 1907.02863. http://arxiv.org/abs/1907.02863
Zimmerli, L., Jacky, M., Lünenburger, L., Riener, R., & Bolliger, M. (2013). Increasing
patient engagement during virtual reality-based motor rehabilitation. Archives of
Physical Medicine and Rehabilitation, 94(9), 1737–1746.
https://doi.org/10.1016/j.apmr.2013.01.029
Appendix A: Visuomotor Adaptation in HMD-VR and CS
This chapter is adapted from:
Anglin J.M., Sugiyama T., & Liew S.-L. (2017). Visuomotor adaptation in head-mounted
virtual reality versus conventional training. Scientific Reports, 7, 45469.
A.1 Abstract
Immersive, head-mounted virtual reality (HMD-VR) provides a unique opportunity
to understand how changes in sensory environments affect motor learning. However,
potential differences in mechanisms of motor learning and adaptation in HMD-VR versus
a conventional computer screen (CS) environment have not been extensively explored.
Here, we investigated whether adaptation on a visuomotor rotation task in HMD-VR yields
adaptation effects similar to those in CS and whether these effects are achieved through similar
mechanisms. Specifically, recent work has shown that visuomotor adaptation may occur
via both an implicit, error-based internal model and a more cognitive, explicit strategic
component. We sought to measure both overall adaptation and balance between implicit
and explicit mechanisms in HMD-VR versus CS. Twenty-four healthy individuals were
placed in either HMD-VR or CS and trained on an identical visuomotor adaptation task
that measured both implicit and explicit components. Our results showed that the overall
timecourse of adaptation was similar in both HMD-VR and CS. However, HMD-VR participants utilized a greater cognitive strategy than CS participants, while CS participants engaged in greater implicit learning. These results suggest that while both conditions produce similar results in overall adaptation, the mechanisms by which visuomotor adaptation occurs in HMD-VR appear to be more reliant on cognitive strategies.
A.2 Introduction
Virtual reality (VR) provides a computer-generated environment that allows users
to engage in virtual experiences. VR has been used for simulations and training across a
wide range of disciplines including military, education, and healthcare applications. While
previous versions of VR have traditionally consisted of a participant sitting in front of a
large screen, exciting recent technological advances have produced new forms of VR,
such as head-mounted displays (HMD), which allow for more immersive experiences.
HMDs (e.g., Oculus Rift, HTC Vive) have recently become commercially available and
affordable and are thought to induce greater feelings of embodiment and immersion
compared to traditional forms of VR (Ruddle et al., 1999). VR using head-mounted
displays (HMD-VR) thus provides a novel medium for examining the effects of context
and experience on human behavior.
Previous research has shown benefits of VR in healthcare, specifically for motor
rehabilitation. Traditional VR, which uses large monitors and motion capture equipment,
such as the Microsoft Kinect or Wii, has been used to improve clinical motor performance
in individuals with stroke and cerebral palsy (Kizony et al., 2003; Roy et al., 2013), and has shown positive effects attributed to increased motivation and participation (Bryanton et al., 2006; Lewis et al., 2011; Roy et al., 2013). Evidence suggests that some forms of VR in rehabilitation can produce similar (Brooks, 1999; Bryanton et al., 2006; Webster et al., 2001) or greater (Turolla et al., 2013) therapeutic gains compared to conventional rehabilitation, and the use of VR in motor rehabilitation has led to improved motor recovery and mobility-related outcomes (McEwen et al., 2014; Reid, 2002; Saposnik et
al., 2010; Turolla et al., 2013). However, certain patient characteristics, such as one’s
severity of motor impairment or the amount of time since the loss of motor function, can
affect how much a patient benefits from VR-based interventions (Albiol-Pérez et al., 2014;
Subramanian et al., 2013).
Aside from the use of VR to improve rehabilitation gains by increasing patient
motivation and engagement in a task, VR, especially HMD-VR, could be a powerful tool
to experimentally manipulate context and feedback, and examine these effects on human
motor adaptation, learning, and recovery. HMD-VR can provide a similarly engaging and
motivating environment with the additional potential to increase one’s feeling of
embodiment in the virtual environment (Finkelstein et al., 2011; IJsselsteijn et al., 2006),
which may have additional effects on motor performance. For example, healthy
individuals that are trained in HMD-VR with a body that has longer arms than their own
learn to interact with the world as though they have longer arms, and this behavior persists
briefly outside of the HMD-VR environment (Kilteni, Normand, et al., 2012). Similarly,
healthy adults that are shown to have a child’s body in HMD-VR begin to exhibit more
childlike behaviors than adults shown an adult body (Banakou et al., 2013). Thus,
embodiment through HMD-VR may be a powerful tool for manipulating and enhancing
motor learning by showing individuals movements in HMD-VR that they are not able to
perform in reality.
However, before jumping straight into using HMD-VR for motor learning and
rehabilitation, it is important to understand the potential differences and mechanisms
underlying motor performance between these two environments so that virtual conditions
can be tailored to maximize learning and rehabilitation outcomes. Given the recent
availability of HMD-VR, very few studies have systematically examined how motor
learning in an immersive environment using an HMD affects mechanisms of motor
adaptation compared to a conventional computer screen (CS). Researchers found that
discrepancies between real and virtual hand movements during a manipulation task in HMD-VR versus CS did not affect motor performance (Groen & Werkhoven, 1998), with
some suggesting that visuomotor adaptation occurs similarly across virtual and CS
environments (Ballester et al., 2015). However, another study showed that an arm
reaching task in both HMD-VR and CS resulted in similar movement profiles, but the
HMD-VR system produced longer movement times (Just et al., 2014).
We aimed to examine the mechanisms of visuomotor adaptation in HMD-VR
versus CS through use of a visuomotor adaptation paradigm. Visuomotor adaptation is a
form of sensorimotor learning that has been widely studied and generally consists of
participants learning to adapt, or correct for, an external perturbation (Cunningham, 1989;
Krakauer, 2009; Pine et al., 1996). For instance, individuals make reaching movements
towards a target, and a visual perturbation is applied to the cursor (e.g., the cursor is
rotated 45° clockwise). Individuals must learn to compensate for this perturbation by reaching in the opposite direction (e.g., reach 45° counter-clockwise).
Learning in this model is largely thought to rely on the formation of an internal model that
calculates the difference between anticipated errors of the intended movement and actual
errors from sensory feedback, which is then used to plan one’s next movement. This type
of learning is often referred to as implicit, or error-based, learning (Mazzoni & Krakauer,
2006), and is thought to be supported by neural activity in the cerebellum (Tseng et al.,
2007). Recent work, however, has shown that visuomotor adaptation may occur via both
an implicit, error-based mechanism and a more cognitive, explicit strategic mechanism
(Taylor et al., 2014). That is, when adapting to a perturbation, an explicit strategy may be
used in order to achieve the desired movement outcome. In the example given above, an
individual might see that their cursor went 45 degrees clockwise of where they
anticipated, and explicitly reason that their next movement should be 45 degrees counter-
clockwise to correct for this. Research suggests that both implicit and explicit mechanisms
may contribute to visuomotor adaptation (Taylor et al., 2014).
In this study, we investigated whether visuomotor adaptation in immersive HMD-
VR results in adaptation effects similar to those in CS and whether these effects are achieved
through similar mechanisms. We adapted a paradigm used in Taylor et al. 2014 to
measure both overall adaptation and the balance between implicit and explicit
mechanisms in each environment. In this paradigm, participants reported their planned
aim and made reaching movements to different targets on a computer screen located in
either the virtual or real world (see Figure A.1). After training on the task, a perturbation
was introduced; we then measured implicit and explicit learning by comparing the
difference between participants’ reported aiming direction and actual hand position. After
training with the perturbation, feedback was removed while participants continued to aim
at the targets to explore any aftereffects. This paradigm was replicated in both HMD-VR
and a conventional computer screen (CS) environment, and participants were
randomized to either the HMD-VR group or the CS group. We hypothesized that while
participants in either environment should adapt similarly, the mechanisms by which they
achieve this adaptation may be different. Based on previous work showing effects of VR
on cognitive aspects such as engagement and motivation, we anticipated that the HMD-
VR environment may increase participants’ reliance on an explicit, cognitive strategy,
while CS would show greater reliance on implicit, error-based mechanisms.
Figure A.1. Experimental environments. Top: Visuomotor
adaption task in conventional computer screen (CS). Bottom:
Visuomotor adaptation task in head-mounted virtual reality (HMD-
VR). The environment in virtual reality was designed to emulate
the environment of the CS condition in the size and lighting of the
room, the furniture placed around the room such as a refrigerator
and a desk, the computer monitor, and more.
A.3 Methods
A.3.1 Participants
A total of 29 individuals were recruited to participate in this experiment. Of this
total, 2 individuals were excluded from the study as a result of technical difficulties and 3
individuals were excluded from the study as a result of being beyond two standard
deviations away from the mean of target error or aiming direction, resulting in 24
individuals (15 females/9 males; age: M=23.9 years, SD=3.85; 12 per group) included in the
analysis. The 2 participants excluded for technical reasons were in the HMD-VR condition
and of the 3 participants who performed poorly, 2 were from the CS condition and 1 was
from the HMD-VR condition. Excluding the 3 participants who performed poorly did not significantly change the results of the study. Eligibility criteria included healthy, right-handed individuals with no previous experience with visuomotor adaptation, and informed consent was obtained from all subjects. The experimental protocol was approved by the Institutional Review Board at the University of Southern California and performed in
accordance with the 1964 Declaration of Helsinki.
A.3.2 Experimental Apparatus
Participants were placed in either a virtual reality environment using the Oculus
Rift (DK2) head-mounted display (HMD-VR) or in a conventional computer screen (CS)
environment. In both the HMD-VR and CS conditions, we adapted the paradigm used by
Taylor and colleagues (2014) in which participants used a digitizing pen and tablet
(Wacom Intuos4 Extra Large) to reach for one of eight pseudo-randomized targets
located on an upright computer monitor located in either the HMD-VR or CS environment
(Figure A.1). Participants were unable to see their hands or forearms in either condition for the duration of the task, but visual feedback was provided in the form of a red circular cursor (5 mm diameter) on the computer screen. Observation of one’s hand and forearm movements was occluded in the CS environment with a large box that was placed over the tablet, on top of which the monitor was placed. This box and monitor set-up
was replicated in the HMD-VR condition, and there was no virtual body shown in HMD-
VR. Movement trajectories were sampled at 60 Hz in both the HMD-VR and CS
conditions. In the CS condition, the stimuli were presented on a 24.1 inch, 1920 x 1200
pixel resolution computer monitor (HP) located 23 cm above the tablet. The HMD-VR
environment was designed using the game engine development tool, Unity 3D, to
replicate the visual characteristics of the CS environment, and was delivered via a head-
mounted VR display (Oculus Rift DK2). The HMD-VR environment was designed to
emulate features of the physical room where both groups performed the task (e.g., the
size and lighting of the room, the furniture, the computer monitor, and more were all
similar to the CS environment). The HMD-VR environment was created based on a fixed
coordinate system that did not depend on the participant’s head position, and all participants in both groups were physically seated in the same location to keep the set-up consistent.
A.3.3 Reaching Task
Participants were randomly assigned into one of two conditions resulting in twelve
participants per group. Participants completed five distinct blocks that spanned a total of
312 trials (Figure A.2A). At the start of each trial, participants located the starting circle (7
mm diameter) at the center of the screen using a second guiding circle. After staying
within the starting circle for 1 second, a cursor appeared (5 mm diameter) along with one
of eight pseudo-random green target circles (10 mm diameter). These target circles were
located on an invisible ring with a radius of 14 cm and spaced 45° apart (0°, 45°, 90°, 135°, 180°, -135°, -90°, -45°). During the initial baseline trials (Block 1: 56 trials), participants were
asked to reach towards a target flanked by numbers (Figure A.2B) and complete their
reach within 500 milliseconds. Reaction time (RT) was defined as the time between when the target appeared and when the cursor exited the starting circle, and movement time (MT) was defined as the time between when the cursor exited the starting circle and when the
cursor crossed the border of the invisible ring. To encourage faster movements,
participants were given a warning via an audio clip saying “Too Slow!” if MT exceeded
500 milliseconds. Participants were instructed to make a fast, “slicing”
movement through the target. Once they passed the invisible ring, participants received
auditory feedback (either a pleasant “ding” if the cursor crossed the target or an
unpleasant “buzz” if the cursor did not cross the target). Visual feedback of the cursor’s
endpoint position was displayed for 1 second before starting the next block or trial.
Excluding the no feedback trials (Block 4), participants were given endpoint feedback, in
which they were only shown the cursor when their hand crossed the invisible outer ring.
Figure A.2. Experimental paradigm. (A) Experimental design with blocks
and number of trials in each block. (B) Visuomotor adaptation task with
45° counterclockwise rotation adapted from Taylor et al. 2014. After
finding the start circle, participants made quick reaching movements
through the targets. Once crossing the outer circle, the endpoint location
where their hand crossed the circle appeared as a red circle. During the
baseline + report and rotation blocks, participants reported where they
were aiming before each reach.
Participants began the paradigm with the baseline block (Block 1: 56 trials) and
were instructed to reach for the targets within the set time. In the baseline + report block
(Block 2: 16 trials), participants were instructed to continue reaching for the targets within
the set time but were additionally asked to explicitly say the number that they were aiming at prior to reaching towards the target. In this way, their explicit aim was recorded, and
the difference between their explicit aim and actual hand endpoint position was measured
as the implicit component. In the rotation + report block (Block 3: 160 trials), a 45°
counterclockwise perturbation was introduced as participants continued the task without
any new instructions. In Blocks 4 and 5, the cursor rotation was removed, and numbers
no longer flanked the targets. Following the completion of the rotation + report block,
participants started the no feedback block (Block 4: 40 trials) where they were asked to
reach directly to the targets and told that feedback would not be provided. In the washout
block (Block 5: 40 trials), participants continued to aim for the targets without receiving
additional instructions. During this phase, both visual and auditory feedback were provided, similar to the baseline block (Block 1).
A.3.4 Behavior
Following the completion of the reaching task, participants were asked to complete
two questionnaires about their physical reaction to the environment. The first
questionnaire, adapted from Witmer & Singer (1998) and revised by the UQO
Cyberpsychology Lab (2004), asked participants a series of questions to gauge their
sense of presence in the training environment. Questions were collapsed along five main
themes: realism, possibility to act, quality of interface, possibility to examine, and self-evaluation of performance. The second questionnaire was adapted from Kennedy, Lane,
Berbaum, & Lilienthal (1993) and revised by the UQO Cyberpsychology Lab (2013) and
asked participants a series of questions to gauge their sickness level as a result of being
in the training environment. Questions were measured along two main themes: nausea
and oculo-motor reactions.
A.3.5 Movement Analysis
All kinematic data were recorded by Unity 3D (5.1.2, Unity Technologies, San
Francisco, CA) for the HMD-VR condition and by MATLAB (MATLAB R2013b, The
MathWorks Inc., Natick, MA) for the CS condition. The horizontal direction was set at 0°,
and counterclockwise direction was defined as the positive direction. Hand angle was
defined as the angle made by the horizontal line (0°) and the line between the origin and
the endpoint of the hand. Cursor angle was identical to the hand angle except during the
rotation + report block where 45° clockwise was added to the hand angle. For both
environments, cursor angle was computed by calculating the intersection point between
the invisible ring and a line drawn between the points sampled before and after crossing
the invisible ring. Target error was calculated by subtracting hand angle from target angle.
Aiming angle was calculated by multiplying the reported aiming number by 5.625 degrees
(the number of degrees per number on the invisible ring; 64 numbers in total). Update of
the implicit adaptation (IA) was measured by subtracting the participant’s explicitly stated
aiming angle from the hand angle. To examine changes in target error, aiming, and IA
over the rotation + report block (Block 3), we calculated epochs of 8 movements per
epoch and normalized each individual’s performance across the rotation + report trials to
the last epoch (8 movements) of baseline + report trials (Block 2). To study whether there
were differences in early versus late adaptation, we also analyzed the mean of just the
first six epochs (early adaptation) and the mean of just the last six epochs (late
adaptation). Finally, we computed the aftereffects of the rotation + report block as the first
epoch of no feedback trials (Block 4), normalized to the last epoch of baseline + report
trials.
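To make these definitions concrete, the following is a minimal sketch of the kinematic measures in Python; the original analyses were performed in Unity 3D and MATLAB, so the function and variable names below (hand_angle, implicit_adaptation, epoch_means, and so on) are illustrative assumptions rather than the actual analysis code.

import numpy as np

DEGREES_PER_AIM_NUMBER = 360.0 / 64.0  # 5.625 degrees per reported aiming number

def hand_angle(endpoint_x, endpoint_y):
    # Angle of the hand endpoint; 0 degrees is horizontal, counterclockwise is positive
    return np.degrees(np.arctan2(endpoint_y, endpoint_x))

def target_error(target_angle_deg, hand_angle_deg):
    # Target error = target angle minus hand angle, per trial
    return target_angle_deg - hand_angle_deg

def implicit_adaptation(hand_angle_deg, reported_aim_number):
    # IA = hand angle minus the explicitly reported aiming angle
    aiming_angle_deg = reported_aim_number * DEGREES_PER_AIM_NUMBER
    return hand_angle_deg - aiming_angle_deg

def epoch_means(trial_values, trials_per_epoch=8):
    # Average a trial-by-trial measure into epochs of 8 movements
    values = np.asarray(trial_values, dtype=float)
    n_epochs = len(values) // trials_per_epoch
    return values[:n_epochs * trials_per_epoch].reshape(n_epochs, trials_per_epoch).mean(axis=1)

def normalize_to_baseline(rotation_epochs, baseline_epochs):
    # Normalize rotation-block epochs to the last epoch of the baseline + report block
    return np.asarray(rotation_epochs) - np.asarray(baseline_epochs)[-1]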
A.3.6 Statistical Analyses
Statistical analyses for demographics and questionnaires were conducted using R
(3.2.2, The R Foundation for Statistical Computing, Vienna, Austria) and statistical
analyses for kinematics and motor performance were conducted using MATLAB
(MATLAB R2013b, The MathWorks Inc., Natick, MA). To assess differences in the
demographics and questionnaires mentioned above, either a two-sample unpaired t-test
for interval data or a chi-squared test for nominal data was performed on each measure
across both groups. Similarly, two sample unpaired t-tests were used to examine
differences between groups (HMD-VR, CS) on target error, aiming (explicit component),
IA (implicit component), and aftereffects.
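A minimal sketch of these group comparisons, written in Python with SciPy rather than the R and MATLAB routines actually used, is shown below; the group arrays and category counts are hypothetical placeholders, not study data.

import numpy as np
from scipy import stats

# Hypothetical placeholder values for one interval measure per participant (not study data)
hmd_vr_scores = np.array([1.2, 0.8, 1.5, 0.9, 1.1, 1.3, 0.7, 1.0, 1.4, 0.6, 1.2, 0.9])
cs_scores = np.array([0.9, 1.1, 0.8, 1.0, 1.2, 0.7, 0.9, 1.3, 0.8, 1.1, 1.0, 0.9])

# Two-sample unpaired t-test for interval data (Welch's version can be requested with equal_var=False)
t_stat, p_value = stats.ttest_ind(hmd_vr_scores, cs_scores)

# Chi-squared test for nominal data, e.g., counts per response category in each group (hypothetical counts)
contingency_table = np.array([[3, 5, 2, 2],   # HMD-VR group
                              [4, 4, 3, 1]])  # CS group
chi2, p_chi, dof, expected = stats.chi2_contingency(contingency_table)

print(f"t({len(hmd_vr_scores) + len(cs_scores) - 2}) = {t_stat:.2f}, p = {p_value:.3f}")
print(f"chi-squared({dof}) = {chi2:.2f}, p = {p_chi:.3f}")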
A.4 Results
A.4.1 Demographics and Subjective Experience Were Not Significantly Different Between Groups
A total of 24 participants (n=12 per group) participated in this study. There were
no significant differences between groups for age, gender, education, or previous VR use
between HMD-VR and CS (age: t(16)=0.16, p=0.878, HMD-VR: M=24±5.0; CS:
M=23.75±2.5; gender: χ²(1, N=24) = 0, p = 1; education: χ²(3, N=24) = 7.06, p = 0.07; previous VR use: χ²(3, N=24) = 1.09, p = 0.78). To examine the differences in the physical
effects of either environment, participants were asked to fill out post-experiment
questionnaires looking at their sense of presence and their sickness level in each
environment. While differences in self-reported answers from the presence questionnaire
were not significant (realism: t(21.2)=0.20, p=0.841, HMD-VR: M=29.83±8.2, CS:
M=29.08±9.9; possibility to act: t(20.3)=0.57, p=0.573, HMD-VR: M=19.0±4.2, CS:
M=17.83±5.7; possibility to examine: t(19.2)=-1.14, p=0.269, HMD-VR: M=11.5±3.1, CS:
M=13.33±4.6; self evaluation of performance: t(21.9)=0.55, p=0.586, HMD-VR:
M=9.42±2.2; CS: M=8.9±2.3), questions addressing the quality of interface approached
significance (quality of interface: t(21.2)=1.95, p=0.064, HMD-VR: M=12.92±2.7; CS:
M=10.5±3.3), suggesting that participants in the HMD-VR and CS groups may have felt
differences in the HMD-VR versus CS interfaces. Self-reported answers from the
simulator sickness questionnaire were also not significantly different between groups
(nausea: t(20.9)=-0.67, p=0.511, HMD-VR: M=1.67±2.1; CS: M=2.33±2.7; oculo-motor:
t(18.6)=-1.16, p=0.262, HMD-VR: M=3.25±2.9; CS: M=5.08±4.6). These results suggest
that individuals in the VR condition did not experience any additional sense of presence
or other side effects from being in an immersive virtual environment compared to
conventional training during this task, but that there was a trend towards a difference in
the quality of the interface.
A.4.2 HMD-VR and CS Produce Similar Results for Overall Visuomotor
Adaptation
The primary outcome measure for the study was target error, which was measured
as the difference between the target angle and the participant’s hand angle. This was used to measure overall adaptation throughout each block in the visuomotor adaptation
task (Figure A.2A). We compared the target error between training environments while
participants reported and reached for targets at baseline (without the perturbation, Block
2: baseline + report) and during the rotation block (with the perturbation, Block 3: rotation
+ report), as well as after all feedback was removed (Block 4: aftereffect; for a full
explanation of the experimental protocol, see Methods). We also compared the reaction
time (RT), or the time it took to start a movement from when the target appeared, and the
movement time (MT), or the time it took from the start of a movement until the target area
was hit. Baseline + Report. We compared baseline + report (Block 2) performance
between groups to ensure that there were no group differences in performance before
the visual rotation occurred. There were no significant differences between environments
(HMD-VR vs. CS) in target error (t(22)=0.71, p=0.48; HMD-VR: M=-1.6°±1.5°; CS: M=-
1.0°±2.3°), RT (t(22)=0.28, p=0.78; HMD-VR: M=1.3s±0.9s; CS: M=1.4s±1.1s), or MT
(t(22)=0.56, p=0.58; HMD-VR: M=0.54s±0.50s; CS: M=0.67s±0.63s) during baseline +
report. Rotation + Report. Then, we compared performance between groups during the
rotation + report block (Block 3), and again found that there were no significant differences
between HMD-VR and CS in target error (t(22)=1.38, p=0.18; HMD-VR: M=-0.41°±2.0°;
CS: M=-2.0°±3.4°), RT (t(22)=0.19, p=0.85; HMD-VR: M=0.68s±0.84s; CS:
M=0.61s±0.76s), or MT (t(22)=0.85, p=0.40; HMD-VR: M=-0.10s±0.24s; CS: M=-
0.24s±0.51s). In order to understand whether there were differences between groups
during early or late adaptation, we also analyzed just the first 6 epochs (early adaptation)
and just the last 6 epochs (late adaptation) of the rotation block. For early adaptation, we
found that there were no significant differences between HMD-VR and CS in target error
(t(22)=1.33, p=0.196, HMD-VR: M=-7.20°±5.2°, CS: M=-11.84°±10.9°), RT (t(22)=-0.32,
p=0.75; HMD-VR: M=0.65s±0.50s; CS: M=0.72s±0.66s), or MT (t(22)=-0.29, p=0.77;
HMD-VR: M=1.32s±0.76s; CS: M=1.45s±1.27s). Similarly, during late adaptation, we also
found no significant differences between groups in target error (t(22)=0.90, p=0.379,
HMD-VR: M=2.92°±2.8°, CS: M=1.99°±2.2°), RT (t(22)=0.38, p=0.70; HMD-VR:
M=0.69s±1.12s; CS: M=0.53s±0.84s), or MT (t(22)=0.26, p=0.80; HMD-VR:
M=1.37s±0.94s; CS: M=1.26s±1.11s). Aftereffects. There were also no significant
differences between HMD-VR and CS in the no feedback block (Block 4) in target error
(t(22)=0.37, p=0.71; HMD-VR: M=6.7°±5.7°; CS: M=7.4°±3.9°), RT (t(22)=0.64, p=0.53;
HMD-VR: M=-0.55s±0.94s; CS: M=-0.80s±1.00s), or MT (t(22)=0.88, p=0.39; HMD-VR:
M=-0.19s±0.26s; CS: M=-0.37s±0.67s). Note that RT in the no feedback block (Block 4) was much shorter than in the baseline + report and rotation + report blocks (Blocks 2 and 3, respectively) because participants did not report the aiming direction during the no
feedback block. The comparison of target error between environments and across trials
is shown in Figure A.3A.
Figure A.3. Results between head-mounted virtual reality (HMD-VR) and conventional
computer screen (CS). Inset bar graphs show group means and standard errors as well
as individual means during the rotation block. (A) Target error, measured by subtracting
hand angle from target angle, for the HMD-VR group (orange) and the CS group (blue).
No significant differences were found between the groups during the baseline + report,
rotation + report, or no feedback blocks (Blocks 2-4). (B) Aiming angle, measured as
the aiming number reported by the participant, for HMD-VR (orange) and CS (blue).
The aiming angle was significantly larger (t(22)=4.00, p<0.001) for HMD-VR compared
to the CS group during the rotation + report block (Block 3). (C) Implicit adaptation
(aiming angle and rotation subtracted from the target error) for HMD-VR (orange) and
CS (blue). The IA was significantly smaller (t(22)=3.67, p=0.001) for HMD-VR
compared to the CS environment during the rotation + report block (Block 3).
A.4.3 HMD-VR Relies More on Explicit Learning, CS Relies More on
Implicit Learning
We were also interested in the relative contributions of implicit and explicit
mechanisms during adaptation. The implicit adaptation (IA) was measured as the
difference between the aiming angle and the hand angle, and the explicit component was
measured as the self-reported aiming angle. Similar to the analysis of target error, we
compared both groups during the baseline, rotation, and no feedback blocks of the
visuomotor adaptation task. Baseline + Report. We compared aiming during the baseline
+ report (Block 2) and found that there was no significant difference between
environments (t(22)=0.30, p=0.77; HMD-VR: M=-0.2°±0.8°; CS: M=-0.4°±1.1°). In
addition, we similarly found that there was no significant difference between environments
in IA (t(22)=0.70, p=0.49; HMD-VR: M=-1.3°±1.2°; CS: M=-0.7°±2.8°). Rotation + Report.
We compared the effects of training environment on aiming angle and IA during the
rotation + report block (Block 3), and found that aiming angle was significantly larger
(t(22)=4.00, p<0.001) for HMD-VR (M=41.4°±4.1°) compared to CS (M=33.6°±5.4°;
Figure A.3B). In addition, the size of the IA was also significantly smaller (t(22)=3.67,
p=0.001) for HMD-VR (M=3.3°±4.2°) than CS (M=9.5°±4.0°) (Figure A.3C). We also
compared the differences between groups during early or late adaptation for aiming angle
and IA. For aiming angle, we found there were significant differences between HMD-VR
and CS in both early (t(22)=2.59, p=0.017, HMD-VR: M=37.82°±6.0°, CS:
M=25.56°±15.3°) and late adaptation (t(22)= 3.20, p=0.004, HMD-VR: M=41.47°±5.1°,
CS: M=34.79°±5.2°). We also found significant differences between groups in IA for both
early (t(22)=-3.94, p=0.0007, HMD-VR: M=-0.10°±4.3°, CS: M=7.66°±5.3°) and late
adaptation (t(22)=-2.44, p=0.023, HMD-VR: M=6.59°±5.3°, CS: M=12.29°±6.1°). These
results suggest that although the overall target error was the same between groups,
HMD-VR participants utilized a greater cognitive strategy than CS participants, and CS
participants engaged in greater implicit learning than HMD-VR participants (Figure
A.3B,C).
A.5 Discussion
In this study, we sought to examine whether visuomotor adaptation, and the
mechanisms supporting it, are similar in an immersive virtual reality environment using a
head-mounted display compared to a conventional computer screen (CS) environment.
We show that, while the overall adaptation is similar across both groups, individuals in
the HMD-VR group relied more on an explicit, cognitive
strategy to adapt, while individuals in the CS group relied more on implicit mechanisms.
These findings have implications for how HMD-VR is used to study and manipulate motor
learning and rehabilitation.
A.5.1 HMD-VR Produces Similar Adaptation Effects to CS
In this study, we show that individuals in both HMD-VR and CS environments are
able to successfully adapt to a visuomotor rotation. This is a critical initial validation, as it
indicates that either environment can promote, and be used to study, visuomotor
adaptation. Visuomotor adaptation has played an important role in helping scientists
uncover the basic neural processes by which the brain supports new motor programs in
response to a changing environment (N. B. Albert et al., 2009; Anguera et al., 2008; J
Doyon et al., 1997; Robertson & Miall, 1999). Our findings open up new potential for
studying visuomotor adaptation in novel, virtual environments or with virtual manipulations
that have not previously been possible in conventional paradigms.
A.5.2 Adaptation in HMD-VR May Rely on More of a Cognitive Strategy
While the reduction of errors during adaptation occurred similarly across both
groups, the mechanisms supporting the adaptation appear to differ. In particular,
individuals placed in an immersive VR environment appear to be more reliant on explicit
cognitive strategies, while individuals in a conventional, real environment appear to be
more reliant on implicit, error-based mechanisms. There may be several potential reasons
for this. First, despite the fact that the paradigm and visual environment were designed to be as similar as possible between HMD-VR and CS, the sheer novelty of being in an
HMD-VR environment may have increased participants’ attention and engagement and
therefore increased reliance on cognitive strategies. To this end, an important follow-up study might examine whether individuals with greater experience in HMD-VR continue to utilize more cognitive mechanisms over time, or if familiarization with an HMD-VR environment results in levels of cognitive engagement similar to those in CS over time.
It should also be noted that as there were differences between groups in explicit
aiming, we would in addition expect to see a difference between groups in the aftereffect
magnitude. However, we do not see this difference and only see a decrease in
aftereffect magnitude for the CS condition and not for the HMD-VR condition. A recent
study by Day et al. (2009) found that IA generalizes around the most frequent aiming
location and that, therefore, the aftereffect should be measured at the mean point of the most frequent aim instead of the original target location, as was done here. Thus, one
possible explanation for this lack of decreased aftereffect is that HMD-VR may not
generalize in the traditional way, but rather in a more global manner. Further studies
would be needed to probe whether this interpretation is correct by measuring the
aftereffects at different locations.
Overall, it appears that visuomotor adaptation occurs in both HMD-VR and CS,
although the strategies employed to reduce target error differ between environments. It is also
worth noting that these results could change if the task was less reliant on cognition.
Evidence suggests that endpoint feedback may utilize a more cognitive strategy
compared to online feedback, which seems to utilize more of an implicit strategy (Taylor
et al., 2014). This task utilized endpoint feedback, thus biasing individuals towards a more
cognitive strategy. Immersive HMD-VR could potentially increase cognitive reliance when
cognition contributes to task performance, but these differences may not be as apparent
if a cognitive strategy was not as important (e.g., with online feedback). Similarly, if the
novelty of the environment affected these results, then individuals with more experience
in an immersive VR environment may show patterns in HMD-VR that are more similar to
those of CS. These questions, among others, remain to be explored.
A.5.3 The Future Role of Immersive Virtual Reality in Motor Learning and
Rehabilitation
As HMD-VR becomes more ubiquitous, it is likely to become a powerful tool for
asking previously inaccessible questions in motor learning and rehabilitation. For
instance, learning in dangerous, riskier, or rapidly changing environments can now be
explored. However, much research remains to be done to systematically understand the
differences between HMD-VR and conventional training on basic aspects of motor
learning and adaptation. Our findings provide an initial step towards understanding how
immersive VR could be used and interpreted for motor learning studies and clinical
rehabilitation. In particular, it is important to note that, while motor learning paradigms
may produce similar results in HMD-VR as in conventional training, they may not be
accomplished in the same way. HMD-VR may also be a more powerful tool for motor
learning and rehabilitation paradigms that engage a strong cognitive component. As
researchers and clinicians increase the use of immersive VR for motor rehabilitation,
understanding the mechanisms underlying effects produced by this novel environment is
critical for designing paradigms that are maximally beneficial to its users.
Appendix B: Embodiment on a Brain–Computer Interface in
HMD-VR
This chapter is adapted from:
Juliano, J. M., Spicer, R. P., Vourvopoulos, A., Lefebvre, S., Jann, K., Ard, T., ... & Liew,
S. L. (2020). Embodiment is related to better performance on a brain–computer interface
in immersive virtual reality: A pilot study. Sensors, 20(4), 1204.
B.1 Abstract
Electroencephalography (EEG)-based brain–computer interfaces (BCIs) for motor
rehabilitation aim to “close the loop” between attempted motor commands and sensory
feedback by providing supplemental information when individuals successfully achieve
specific brain patterns. Existing EEG-based BCIs use various displays to provide
feedback, ranging from displays considered more immersive (e.g., head-mounted display
virtual reality (HMD-VR)) to displays considered less immersive (e.g., computer screens).
However, it is not clear whether more immersive displays improve neurofeedback
performance and whether there are individual performance differences in HMD-VR
versus screen-based neurofeedback. In this pilot study, we compared neurofeedback
performance in HMD-VR versus a computer screen in 12 healthy individuals and
examined whether individual differences on two measures (i.e., presence, embodiment)
were related to neurofeedback performance in either environment. We found that, while
participants’ performance on the BCI was similar between display conditions, the
participants’ reported levels of embodiment were significantly different. Specifically,
participants experienced higher levels of embodiment in HMD-VR compared to a
computer screen. We further found that reported levels of embodiment positively
correlated with neurofeedback performance only in HMD-VR. Overall, these preliminary
results suggest that embodiment may relate to better performance on EEG-based BCIs
and that HMD-VR may increase embodiment compared to computer screens.
B.2 Introduction
Neurofeedback training produces beneficial changes in motor function and has
been shown to be successful in motor rehabilitation for clinical populations, such as
individuals with stroke (Ramos-Murguialday et al., 2013). Electroencephalography (EEG)
can be used to measure brain activity, which brain–computer interfaces (BCIs) use to provide sensory feedback that rewards specific brain activity patterns. This feedback can then be
used to control a robotic or computerized device (e.g., movement of an object on a
computer screen) to train individuals to control their own brain activity. BCIs designed for
the rehabilitation of individuals with severe motor impairment attempt to “close the loop”
between motor commands and sensory feedback by providing supplemental sensory
information when individuals successfully establish specific brain patterns. Successfully
closing the loop is achieved when individuals perceive the control of supplemental
sensory information as their own (i.e., embodiment). Perceived embodiment can influence
an individual’s sense of agency (Caspar et al., 2015), where greater embodiment may
result in a sense of increased control, but a lack of embodiment may result in a sense of
decreased control or distress and lead to a distortion of capabilities (Gilbert et al., 2019).
Given that individuals with severe motor impairment cannot generate active
volitional movement, a primary neurofeedback approach is to use imagined movement
(i.e., motor imagery) to drive the BCI. Motor imagery (MI) is thought to engage areas that
modulate movement execution (Dechent et al., 2004; Jackson et al., 2003; Naito et al.,
2002). MI has been shown to be an effective intervention for motor rehabilitation,
especially when it is coupled with physical practice (Carrasco & Cantalapiedra, 2016;
Guerra et al., 2018). Previous work has shown that BCIs employing MI can produce
clinically meaningful improvements in motor function in individuals with motor
impairments (Ang et al., 2014; Biasiucci et al., 2018; Cincotti et al., 2012; Frolov et al.,
2017; Pichiorri et al., 2015; Tung et al., 2013). These BCIs have used a variety of displays
to provide feedback, ranging from devices that provide an immersive and compelling
experience (e.g., projected limbs, robotic orthoses, or exoskeletons) (Cincotti et al., 2012;
Frolov et al., 2017; Pichiorri et al., 2015; Ramos-Murguialday et al., 2013) to devices that
are considered less immersive (e.g., computer screens) (Ang et al., 2014; Tung et al.,
2013). Recently, BCIs have also begun to incorporate immersive virtual reality using a
head-mounted display (HMD-VR) in order to provide a more immersive and realistic
environment (McMahon & Schukat, 2018) and to provide more biologically relevant
feedback (Vourvopoulos & Bermúdez i Badia, 2016). However, it is not known whether
HMD-VR improves neurofeedback performance compared to feedback provided on a
screen. It is also unclear whether neurofeedback provided in HMD-VR increases
embodiment, which in the context of HMD-VR can be defined as the perceptual ownership
of a virtual body in a virtual space (Banakou et al., 2013; Kilteni, Groten, et al., 2012),
compared to screen-based neurofeedback.
Studies have shown that HMD-VR facilitates the embodiment of a virtual body and
that the observation of this virtual body in the first person perspective is enough to induce
a strong feeling of embodiment for the virtual body’s actions (Banakou et al., 2013; Kilteni
et al., 2013; Kilteni, Normand, et al., 2012; Osimo et al., 2015; Yee & Bailenson, 2007).
In HMD-VR, individuals exhibit behaviors that match those of a digital self-representation,
such as overestimating object sizes when an adult has been given a virtual child body
(Banakou et al., 2013) or exhibiting a reduction in implicit racial bias when given a body
of a different race (Banakou et al., 2016). Initially coined the Proteus effect (Yee &
Bailenson, 2007), this sense of embodiment that arises from viewing a virtual limb has
the potential to alter one’s own neurophysiology and behavior. Regarding motor behavior,
an increased level of embodiment has been shown to be related to increased
sensorimotor rhythms (SMR) desynchronization (Vecchiato et al., 2015). In related work,
observing the actions of virtual limbs in virtual reality has been shown to increase SMR
desynchronization (Pavone et al., 2016). In addition, the immersive nature of HMD-VR
has also been shown to increase an individual’s sense of presence in the virtual
environment (Vecchiato et al., 2015), which in the context of HMD-VR can be defined as
the illusion of actually being present in the virtual environment (Slater, 2018). It is also
unclear whether neurofeedback provided in HMD-VR increases one’s feeling of presence
compared to screen-based neurofeedback. Here we examined the role of both
embodiment and presence on neurofeedback performance using HMD-VR versus a
computer screen through the use of qualitative questionnaires previously used to
measure these concepts (Bailey et al., 2016; Banakou et al., 2013; Witmer & Singer,
1998).
We have created a hybrid brain–computer interface for individuals with severe
motor impairments called REINVENT (Rehabilitation Environment using the Integration
of Neuromuscular-based Virtual Enhancements for Neural Training), which provides brain
(EEG) and/or muscle (electromyography (EMG)) neurofeedback in HMD-VR. Although
we designed REINVENT as an EEG-based BCI device for individuals with severe motor
impairments, in this pilot study, we first wanted to examine whether providing
neurofeedback in HMD-VR improves neurofeedback performance compared to receiving
the same neurofeedback on a computer screen in healthy adults. Furthermore, we
wanted to examine whether there were differences in the levels of embodiment and
presence induced by HMD-VR versus a computer screen and how individual differences
in these features relate to neurofeedback performance in each environment. As
embodiment and presence play an important role in increasing SMR desynchronization
and HMD-VR induces high levels of embodiment and presence, we predicted that
participants would show better neurofeedback performance in an HMD-VR environment
compared to a computer screen, and that improved performance would be related to
increased embodiment and presence.
B.3 Methods
B.3.1 Participants
Twelve healthy participants were recruited for this experiment (7 females/5 males;
age: M = 24.4 years, SD = 2.7 years) where all participants underwent the same
experimental design (see Section B.3.3). Eligibility criteria included healthy, right-handed
individuals, and informed consent was obtained from all participants. Eight participants
reported being naive to immersive virtual reality using a head-mounted display; the four
participants with previous use of head-mounted displays reported using the device no
more than four times. The experimental protocol was approved by the University of
Southern California Health Sciences Campus Institutional Review Board and performed
in accordance with the 1964 Declaration of Helsinki.
B.3.2 REINVENT Hardware, Software, Online Processing, and Data
Integration
The REINVENT system (see Figure B.1A for an example) is a brain–computer
interface (BCI) composed of four main components: EEG, EMG, an inertial measurement
unit (IMU), and a HMD-VR system (Spicer et al., 2017). While the current study only
utilized the EEG and IMU components, we describe the entire system here.
Figure B.1. Example of the REINVENT system. (A) REINVENT hardware used here is
composed of electroencephalography (EEG), electromyography (EMG), inertial
measurement units (IMUs), and a head-mounted display virtual reality (HMD-VR)
system. Written informed consent for the publication of this image was obtained from
the individual depicted. (B) The environment participants observed on both a computer
screen and in HMD-VR; arm movements are goal-oriented such that when the arm
reaches a target position, it interacts with an object (e.g., hitting a beach ball). On EEG
blocks (Screen, HMD-VR), participants would attempt to move their virtual arm (right
arm) to the orange target arm (left arm) by thinking about movement. On the IMU block,
the virtual arm would match participants’ actual arm movements.
B.3.2.1 Electroencephalography (EEG) and Electromyography (EMG)
The EEG/EMG component of REINVENT is composed of hardware from OpenBCI
(www.openbci.com), a low-cost solution for measuring brain and muscle activity. The
EEG component consists of reusable dry EEG electrodes, and the EMG component
consists of snap electrode cables connected to mini disposable gel electrodes (Davis
Medical Electronics, Inc.). Both EEG and EMG wires were connected to a 16-channel,
32-bit v3 processor (Cyton + Daisy Biosensing Open BCI Board) and sampled at 125 Hz.
Twelve EEG locations based on the international 10-20 system (Klem et al., 1999) and
concentrated over the prefrontal and motor cortex are used to record brain activity (F3,
F4, C1, C2, C3, C4, CP1, CP2, CP5, CP6, P3, and P4); however, in the current study we
were primarily interested in channels closest to the left motor network (i.e., C1, C3, and
CP1). Ground and reference electrodes are located at the right and left earlobes,
respectively. EMG is recorded from four electrodes placed on the wrist flexors and
extensors on the muscle bellies of the right forearm, with a reference electrode on the
bony prominence of the elbow. In the current experiment, muscle activity from EMG was
collected during the experiment to examine EEG–EMG coherence, but as these analyses
are beyond the scope of the current study, they are not included here.
Custom software is used to control the BCI and provide users with real-time
neurofeedback of a virtual arm. The neurofeedback, composed of the summed
desynchronization from C1, C3, and CP1, was used to drive the movement of a virtual
right arm towards a target arm. EEG signals were recorded from electrodes of interest
over the left motor cortex (i.e., C1, C3, and CP1, based on the international 10-20 system)
with both ear lobes used as the ground and reference electrodes, and sent to the
REINVENT software. Data processing occurred online. Individual channels were high-pass filtered using a second-order Butterworth filter with a cutoff of 3 Hz, and a sliding window consisting of 125 incoming samples was fast Fourier transformed (FFT). Power was then computed within the 8-24 Hz frequency range, capturing the broad activity
in alpha and beta bands that may correspond to motor imagery (i.e., sensorimotor
desynchronization). The virtual arm direction updated every second and moved towards
the target in response to sensorimotor desynchronization, measured as a decrease in
amplitude compared to the baseline recording of the left sensorimotor area (i.e., the
combined C1, C3, and CP1).
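The sketch below illustrates this online processing chain (high-pass filtering, a one-second window, FFT-based band power in the 8-24 Hz range, and comparison to baseline) in Python. The actual REINVENT software is custom, so the function names, data layout, and any constants not stated above are assumptions for illustration only.

import numpy as np
from scipy.signal import butter, lfilter

FS = 125                 # EEG sampling rate in Hz
WINDOW = 125             # sliding window of 125 incoming samples (about one second)
LOW, HIGH = 8.0, 24.0    # alpha/beta band used to capture sensorimotor desynchronization

# Second-order Butterworth high-pass filter with a 3 Hz cutoff
b, a = butter(N=2, Wn=3.0, btype='highpass', fs=FS)

def band_power(window_samples):
    # Mean spectral power of one channel's window within the 8-24 Hz band
    filtered = lfilter(b, a, window_samples)
    spectrum = np.abs(np.fft.rfft(filtered)) ** 2
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / FS)
    in_band = (freqs >= LOW) & (freqs <= HIGH)
    return spectrum[in_band].mean()

def desynchronization(channel_windows, baseline_power):
    # Summed 8-24 Hz power across C1, C3, and CP1 compared to the resting baseline;
    # a positive value (power below baseline) would drive the virtual arm toward the target
    current_power = sum(band_power(w) for w in channel_windows)
    return baseline_power - current_power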
B.3.2.2 Inertial Measurement Unit (IMU)
The IMU component of REINVENT is composed of two nine-degrees-of-freedom
IMUs, with one placed on the hand and the other placed on the wrist of the right arm. To
foster a sense of embodiment between the participant and the virtual arm, the REINVENT
system allows for the participant’s own arm movements to be recorded. Before beginning
this experiment, the participant’s arm was passively moved by the experimenter, and the
virtual representation of the arm was shown on the computer screen and in HMD-VR. In
this way, a sensorimotor contingency was developed between the participant’s own arm
and the virtual arm they were subsequently asked to control.
B.3.3 Experimental Design
We used a within-subject experimental design where all participants underwent
the same protocol (Figure B.2). Prior to the experiment, participants underwent pre-
assessments that included a simulator sickness questionnaire (see Section B.3.6) and a
resting EEG baseline recording. The resting EEG baseline recording lasted three minutes
and was recorded while the HMD-VR was removed. For the duration of the recording,
participants were instructed to keep their eyes open and fixed on a location at the center
of the computer screen and asked to think about a stationary object and to stay as still as
possible. The recording was used to provide the baseline EEG values for the experiment.
Following the resting EEG baseline recording, participants completed three blocks of 30
trials (90 trials in total) where each block was a separate condition. The conditions were
(1) controlling the virtual arm with brain activity on a conventional computer screen
(Screen), (2) controlling the virtual arm with brain activity in a head-mounted display
virtual reality (HMD-VR) system, and (3) controlling the virtual arm with actual arm
movements in a head-mounted display (IMU). Participants completed the conditions in
the following block order: Block 1 (Screen), Block 2 (HMD-VR), Block 3 (IMU), with Blocks
1 and 2 (Screen, HMD-VR) counterbalanced. In this experiment, the IMU condition strictly
provided a control condition of real movement instead of neurofeedback; these data are
briefly reported but not focused on in this paper.
Figure B.2. Experimental timeline. Prior to the experimental blocks, participants
completed a questionnaire relating to simulator sickness and then completed a resting
EEG recording for three minutes with eyes open. Participants then completed the three
experimental blocks where the first two blocks were counterbalanced; during Blocks 1
and 2 (Screen, HMD-VR), participants were asked to think about movement in order to
move their virtual arm to a virtual target arm on either a computer screen or in HMD-
VR. After the Screen condition and after the HMD-VR condition, participants completed
a resting EEG recording for three minutes with eyes open and then completed a series
of questionnaires relating to simulator sickness, presence, and embodiment. During
Block 3 (IMU), participants were asked to move their physical arm to a virtual target arm
in HMD-VR, as a control condition.
Before starting the experimental conditions, participants were given instructions on
how to control their virtual arm (i.e., “You will see two right arms. One is orange and that
is the target arm that moves to different positions. The other is your arm. We want you to
move it to match the target arm’s position. You can move your arm in two ways. First, you
will complete 60 trials of moving the virtual arm with just your thoughts by thinking about
moving; 30 of the trials will be on the computer screen, without the head-mounted display,
and 30 trials will be with the head-mounted display. Then you will complete 30 trials of
moving the virtual arm using your actual arm movements.”). Instructions were repeated
at the start of each condition. For each EEG neurofeedback condition (Screen, HMD-VR),
participants were instructed to stay as still as possible. After the completion of each EEG
neurofeedback condition, a resting-EEG acquisition of three minutes was recorded while
the HMD-VR was removed; participants were again instructed to keep their eyes open
and fixed on the center of the screen for the duration of the recording. For the duration of
the experiment, participants were seated and asked to rest their hands comfortably on a
pillow placed across their lap.
B.3.4 Individual Trials
At the start of each trial, a target arm animated a wrist extension pose in one of
three target positions. Once the target arm stopped moving, participants were instructed
to move their virtual arm to match the position of the target arm given the current condition
(i.e., in the case of the two EEG neurofeedback conditions (Screen, HMD-VR), they
were asked to think about moving; in the case of the IMU condition, they were asked to
actually move their arm to the target location). During the EEG neurofeedback condition
trials, the virtual hand incremented either forward or backward, as determined by the sum
of the three channel EEG desynchronization compared to baseline. Most of the time, the
EEG activity was significantly above or below the baseline; however, if the sensorimotor
activity was hovering around the baseline, the arm would move back and forth. The
duration of each trial was 15 seconds. If the target arm was reached within this time
constraint, a successful auditory tone was played; however, if the target arm was not
reached, then an unsuccessful auditory tone was played. At the completion of each trial,
the target and virtual arms returned to their starting position.
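A hedged outline of this per-trial logic is sketched below in Python; the step size and the callback names (get_desynchronization, move_arm, play_tone) are hypothetical and stand in for the REINVENT system's own update, rendering, and audio code.

import time

TRIAL_DURATION_S = 15   # each trial lasted 15 seconds
UPDATE_INTERVAL_S = 1   # the virtual arm direction updated every second
STEP = 0.1              # hypothetical fraction of the path covered per update

def run_trial(get_desynchronization, move_arm, play_tone):
    # Advance the virtual arm toward the target while sensorimotor power is below baseline
    progress = 0.0
    start = time.monotonic()
    while time.monotonic() - start < TRIAL_DURATION_S:
        time.sleep(UPDATE_INTERVAL_S)
        # Positive value = power below baseline (desynchronization): increment forward;
        # otherwise increment backward, so activity hovering near baseline moves the arm back and forth
        progress += STEP if get_desynchronization() > 0 else -STEP
        progress = min(max(progress, 0.0), 1.0)
        move_arm(progress)
        if progress >= 1.0:
            play_tone(success=True)   # target arm reached within the time limit
            return True
    play_tone(success=False)          # target arm not reached before the trial ended
    return False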
B.3.5 Displays and Neurofeedback
For all conditions, participants observed the virtual arm from a first person
perspective. For the HMD-VR and IMU conditions, we used the Oculus CV1, which
includes positional and rotational tracking to display the stimuli. For the Screen condition,
we used a 24.1 inch, 1920 × 1200 pixel resolution computer monitor (Hewlett-Packard)
to display the stimuli. In both displays, participants observed a scene that included two
virtual arms: (1) one virtual arm that represented the participant’s own arm and (2) a
second virtual arm, colored in orange, that provided different target arm positions that
participants were asked to move their own virtual arm towards (Figure B.1B). For all
conditions, the target arm generated wrist extension movements to different target
locations; participants were asked to either think about moving their arm to these
locations (EEG neurofeedback conditions: Screen, HMD-VR) or actually move their arm
to these locations (IMU condition). In the IMU condition of the experiment, participants
were required to actually perform wrist extension movements to match the virtual arm in
the HMD-VR display.
B.3.6 Subjective Questionnaires
Prior to the experiment, participants were given a series of standard questions
about their baseline comfort levels (simulator sickness questionnaire; adapted from
Kennedy et al., 1993). After participants completed each EEG neurofeedback condition
(Screen, HMD-VR), they were given the same simulator sickness questionnaire to
examine changes following each block. Responses were reported on a 0 to 3-point scale,
and questions were collapsed along three main features: Nausea, Oculomotor, and
Disorientation. In addition, after completing both the Screen and HMD-VR conditions,
participants were also asked questions pertaining to their overall sense of presence and
embodiment in each respective environment. The Presence Questionnaire was adapted
from Witmer and Singer (1998) and revised by the UQO Cyberpsychology Lab (2004)
and asked participants a series of questions to gauge their sense of presence in each
environment. Responses were reported on a 1 to 7-point scale, and questions were
collapsed along five main features: Realism, Possibility to Act, Quality of Interface,
Possibility to Examine, and Self-Evaluation of Performance. The Presence Questionnaire
is a validated questionnaire (Witmer et al., 2005) that has been used in research using
HMD-VR (J. Lee et al., 2017). The Embodiment Questionnaire was adapted from Bailey
et al. (2016) and Banakou et al. (2013) and asked participants a series of questions to
gauge their sense of embodiment. Responses were reported on a 1 to 10-point scale and
questions were averaged to generate an overall Embodiment feature. In addition, we also
collapsed questions relating to either Self Embodiment or Spatial Embodiment to
generate two embodiment sub-features. Self Embodiment describes the extent to which
participants felt the virtual arm was an extension of their own arm, and Spatial
Embodiment describes the extent to which participants felt that they were in the virtual
environment. Table C.1 includes individual questions asked on the Embodiment
Questionnaire.
Table C.1. Individual questions on Embodiment Questionnaire
Type    | Question                                                                                                                         | Referenced            | Scoring Scale
Self    | To what extent did you feel that the virtual arm was your own arm?                                                              | Own Arm               | Not at all/Very much (1…10)
Self    | How much did the virtual arm’s actions correspond with your commands?                                                           | Arms Actions          | Not at all/Very much (1…10)
Self    | To what extent did you feel if something happened to the virtual arm it felt like it was happening to you?                      | Happening to Arm      | Not at all/Very much (1…10)
Self    | How much control did you feel you had over the virtual arm in this virtual environment?                                         | Amount of Arm Control | No control/Full control (1…10)
Self    | How much did you feel that your virtual arm resembled your own (real) arm in terms of shape, skin tone or other visual features? | Resembled Arm         | Smaller/Larger and Not at all/Very much (1…10)
Self    | Did the virtual arm seem bigger, smaller or about the same as what you would expect from your everyday experience?              | Size of Arm           | Smaller/Larger (1…10)
Spatial | To what extent did you feel like you were really located in the virtual environment?                                            | Location              | None/Completely (1…10)
Spatial | To what extent did you feel surrounded by the virtual environment?                                                              | Surrounded            | None/Completely (1…10)
Spatial | To what extent did you feel that the virtual environment seemed like the real world?                                            | Real World            | None/Completely (1…10)
Spatial | To what extent did you feel like you could reach out and touch the objects in the virtual environment?                          | Reach Out and Touch   | None/Completely (1…10)
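As a worked illustration of how the questionnaire features described above are derived, the short Python sketch below averages a set of hypothetical item responses (keyed by the "Referenced" labels in Table C.1) into the overall Embodiment feature and the Self and Spatial Embodiment sub-features. The response values are invented for the example.

```python
# Hypothetical responses on the 1-10 Embodiment Questionnaire items (Table C.1)
responses = {
    "Own Arm": 7, "Arms Actions": 8, "Happening to Arm": 5,
    "Amount of Arm Control": 8, "Resembled Arm": 4, "Size of Arm": 6,
    "Location": 6, "Surrounded": 7, "Real World": 4, "Reach Out and Touch": 5,
}

SELF_ITEMS = ["Own Arm", "Arms Actions", "Happening to Arm",
              "Amount of Arm Control", "Resembled Arm", "Size of Arm"]
SPATIAL_ITEMS = ["Location", "Surrounded", "Real World", "Reach Out and Touch"]

def mean(values):
    return sum(values) / len(values)

# Overall feature: average of all items; sub-features: averages of their item groups
overall_embodiment = mean(list(responses.values()))
self_embodiment = mean([responses[item] for item in SELF_ITEMS])
spatial_embodiment = mean([responses[item] for item in SPATIAL_ITEMS])

print(overall_embodiment, self_embodiment, spatial_embodiment)
```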
B.3.7 Analyses
B.3.7.1 Post-Hoc EEG Analysis on Activity During Task
In addition to the online processing (see Section B.3.2), post-hoc EEG signals
were processed offline using MATLAB® (R2017a, The MathWorks, MA, USA) with the
EEGLAB toolbox (Delorme & Makeig, 2004). After importing the data and channel
information, a high-pass filter at 1 Hz was applied to remove the baseline drift followed by
line-noise and harmonics removal at 60 Hz. Furthermore, bad channels were rejected,
while any potential missing channels were interpolated before the re-referencing stage.
Additionally, all channels were re-referenced to the average. Next, data epoching was
performed by extracting the trials from the EEG neurofeedback conditions (Screen, HMD-
VR) for each participant. Artifact rejection was performed using independent component
analysis (ICA) over the epoched data and visual inspection of the time-series. Finally, the
baseline data (180 seconds) were extracted from the resting-state session that occurred
before the task.
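The offline preprocessing above was performed in MATLAB with EEGLAB; the sketch below re-expresses the same sequence of steps with MNE-Python purely for illustration. The file name, event code, bad-channel list, and excluded ICA component are placeholders, not values from the study.

```python
# A minimal MNE-Python sketch of the offline pipeline described above (assumed
# equivalent of the EEGLAB processing; all specific values are placeholders).
import mne

raw = mne.io.read_raw_fif("neurofeedback_raw.fif", preload=True)  # hypothetical file

# 1 Hz high-pass to remove slow drift, then notch at 60 Hz line noise and a harmonic
raw.filter(l_freq=1.0, h_freq=None)
raw.notch_filter(freqs=[60, 120])

# Reject bad channels (marked by inspection), interpolate them, then average re-reference
raw.info["bads"] = ["C4"]                      # placeholder bad channel
raw.interpolate_bads(reset_bads=True)
raw.set_eeg_reference("average")

# Epoch around trial onsets (assumes a stim channel; event code 1 is a placeholder)
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id={"trial": 1}, tmin=0.0, tmax=15.0,
                    baseline=None, preload=True)

# ICA-based artifact rejection, with components chosen by visual inspection
ica = mne.preprocessing.ICA(n_components=6, random_state=0)
ica.fit(epochs)
ica.exclude = [0]                              # placeholder artifact component
ica.apply(epochs)
```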
For computing the average spectral power, Welch’s method for power spectral
density (PSD) of the power spectrum (Welch, 1967) was used across the online frequency
range (8-24 Hz) and for the alpha (8-12 Hz) and beta (13-24 Hz) bands. PSD was
extracted from both the epoched motor-related data and the baseline. Because only a
single movement class with one degree of freedom was detected, no classifier training was required.
We used the target location C3 to represent the primary motor cortex; thus, the band
power was extracted over the C3 electrode location and calculated using the following
formula:
\( PSD_{Band} = Power^{C3}_{Motor\ Activity} - Power^{C3}_{Baseline} \)    (C.1)
Similarly, for analyzing the ipsilateral primary motor cortex, we used the target
location C4 (Neuper et al., 2006; G. Pfurtscheller, 2000; Gert Pfurtscheller & Neuper,
1997).
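The band-power computation in Equation C.1 can be sketched as follows. This Python version uses SciPy's Welch estimator rather than the authors' MATLAB code, and the sampling rate, segment length, and use of dB units are assumptions made for the example, not details taken from the study.

```python
# Sketch of Equation C.1: task-period band power at C3 minus baseline band power at C3.
import numpy as np
from scipy.signal import welch

FS = 250  # assumed sampling rate in Hz

def band_power(signal, fs, fmin, fmax):
    """Mean Welch power spectral density within [fmin, fmax], in dB (assumed units)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= fmin) & (freqs <= fmax)
    return 10 * np.log10(np.mean(psd[mask]))

def relative_band_power(c3_task, c3_baseline, fmin, fmax, fs=FS):
    """Equation C.1: motor-activity band power at C3 minus baseline band power at C3."""
    return band_power(c3_task, fs, fmin, fmax) - band_power(c3_baseline, fs, fmin, fmax)

# Example with synthetic data for the online band (8-24 Hz), alpha, and beta
rng = np.random.default_rng(0)
c3_task = rng.standard_normal(FS * 60)       # placeholder task-period C3 signal
c3_base = rng.standard_normal(FS * 180)      # placeholder 180 s baseline C3 signal
for name, (lo, hi) in {"8-24 Hz": (8, 24), "alpha": (8, 12), "beta": (13, 24)}.items():
    print(name, relative_band_power(c3_task, c3_base, lo, hi))
```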
B.3.7.2. Statistical Analysis
All data used in statistical results can be found in the supplementary materials
(supporting datasets). Statistical analysis for neurofeedback performance, subjective
experience from questionnaires, and EEG activity during the task was analyzed using the
statistical package R (Version 3.2.2) using R Studio (Version 1.1.423). For each variable,
we checked for normality using a Shapiro–Wilk test. To assess statistical differences in
performance, subjective experience, and average spectral power during the task between
the two EEG conditions (Screen, HMD-VR), a paired t-test was performed on each
measure found to be normally distributed, and a Mann–Whitney U test was performed on
each measure found to not be normally distributed. Means (M), standard deviations (SD),
and skewness are reported for each measure (Supplementary Table B.S1). To confirm
that neurofeedback based on motor imagery was successfully used to increase
performance, we ran a simple linear regression on neurofeedback performance based on
PSD. Lastly, we examined the relationship between neurofeedback performance and
responses from the Presence Questionnaire and the Embodiment Questionnaire using
regression analysis. For both questionnaires, we first tested for normality using a
Shapiro–Wilk normality test. For the Presence Questionnaire, we ran a multiple
regression analysis on neurofeedback performance based on the five presence features
for each condition (Screen, HMD-VR). For the Embodiment Questionnaire, we first ran a
simple linear regression analysis on neurofeedback performance based on the overall
Embodiment feature for each condition. Then, we ran a multiple regression analysis on
neurofeedback performance based on the two embodiment sub-features (Self
Embodiment and Spatial Embodiment) for each condition. For all regression analyses,
adjusted R² is reported. Finally, all participants completed the control IMU condition with
100% accuracy, and therefore this condition is not included in further analysis.
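The decision logic of the statistical analysis (normality check, then paired t-test or Mann–Whitney U, plus a regression reporting adjusted R²) could look roughly like the Python sketch below. The study itself used R; the data arrays here are synthetic placeholders generated only to make the example runnable.

```python
# Sketch of the per-measure comparison logic and the performance-on-PSD regression.
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(1)
screen = rng.normal(80, 9, 12)     # placeholder: neurofeedback performance, Screen
hmd_vr = rng.normal(83, 15, 12)    # placeholder: neurofeedback performance, HMD-VR

def compare_conditions(a, b, alpha=0.05):
    """Paired t-test if both samples pass Shapiro-Wilk, otherwise Mann-Whitney U."""
    if stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha:
        return "paired t-test", stats.ttest_rel(a, b)
    return "Mann-Whitney U", stats.mannwhitneyu(a, b)

print(compare_conditions(screen, hmd_vr))

# Simple linear regression of performance on PSD, reporting adjusted R^2
psd = rng.normal(-4.5, 3, 12)      # placeholder relative PSD values
model = sm.OLS(hmd_vr, sm.add_constant(psd)).fit()
print(model.rsquared_adj, model.pvalues)
```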
B.4 Results
B.4.1 Relationship Between Power Spectral Density and Neurofeedback
Performance
To confirm the relationship between PSD in the 8-24 Hz frequency range and the
corresponding neurofeedback performance, we ran a simple linear regression of
neurofeedback performance based on PSD across the combined EEG neurofeedback
conditions (i.e., Screen and HMD-VR). We found a significant relationship between PSD
and neurofeedback performance (Figure B.3; F(1,22) = 9.328, p = 0.006; R² = 0.266),
where an increased sensorimotor desynchronization corresponded to better
neurofeedback performance.
Figure B.3. Relationship between power spectral density
and neurofeedback performance. There was a significant
relationship between power spectral density (PSD) and
neurofeedback performance across the EEG
neurofeedback conditions (combined Screen and HMD-
VR).
B.4.2 Comparison of Neurofeedback Performance and Time to Complete
Successful Trials Between Screen and HMD-VR
The proportion of correct trials completed was similar between the two conditions
(Figure B.4A; t(11) = -0.46, p = 0.656, d = 0.19; Screen: M = 80.95%, SD = 9.1%, and
HMD-VR: M = 83.33%, SD = 14.9%). These results suggest that participants seemed to
perform similarly independent of whether neurofeedback was provided in HMD-VR or on
a conventional computer screen. In addition, the time to complete each of the successful
trials was also similar between the two conditions (Figure B.4B; t(11) = 0.54, p = 0.597, d
= 0.19; Screen: M = 4.347 s, SD = 1.17 s, and HMD-VR: M = 3.996 s, SD = 2.41 s). These
results suggest that when participants were able to increment the virtual arm towards the
target with their brain activity, the efficiency of control was similar whether viewing the
arm in the HMD-VR environment or on a conventional computer screen. The distribution
of the data was not significantly different from a normal distribution for neurofeedback
performance and time to complete successful trials.
Figure B.4. Average performance on trials and time to complete successful trials
between conditions. (A) The analysis showed no significant differences in performance
between Screen (left, blue) and HMD-VR (right, yellow) conditions. (B) The analysis
showed no significant differences in time on successful trials between Screen (left, blue)
and HMD-VR (right, yellow) conditions.
B.4.3 Comparison of Power Spectral Density Between Screen and HMD-
VR
Similar to the neurofeedback performance results, we did not find significant
differences in group-level PSD between the Screen and HMD-VR conditions across the
8-24 Hz frequency range (Figure B.5A; t(11) = 0.475, p = 0.644, d = 0.12; Screen: M = -
4.69, SD = 2.96, and HMD-VR: M = -4.32, SD = 3.41). We also explored alpha and beta
bands separately and did not find significant differences in group-level PSD between the
Screen and HMD-VR conditions in either band (alpha: Figure B.5B, t(11) = 1.363, p =
0.200, d = 0.35, Screen: M = -1.84, SD = 2.90, and HMD-VR: M = -2.89, SD = 3.04; beta:
Figure B.5C, t(11) = -1.141, p = 0.278, d = 0.29, Screen: M = -5.88, SD = 3.08, and HMD-
VR: M = -4.92, SD = 3.63). This further suggests that participants had similar levels of
sensorimotor activity whether neurofeedback was provided in HMD-VR or on a
conventional computer screen. The distribution of the data was not significantly different
from a normal distribution for each power spectral density variable. Additionally, we have
included two supplementary figures reporting individual participant EEG activity in alpha
and beta bands for both C3 (Supplementary Figure B.S1; contralateral to, and controlling,
the virtual hand) and C4 (Supplementary Figure B.S2; ipsilateral to the virtual hand)
recordings.
B.4.4 Comparison of Simulator Sickness Between Screen and HMD-VR
We checked for normality for each feature obtained by the simulator sickness
questionnaire (i.e., Nausea, Oculomotor, Disorientation). For two of the features, the
distribution of the data was significantly different from a normal distribution; thus, we used
a nonparametric Mann–Whitney U test for each of the comparisons.
We found no significant differences in reports of simulator sickness between the
Screen (Nausea: M = 1.59, SD = 8.94; Oculomotor: M = 9.48, SD = 12.15; Disorientation:
M = 4.64, SD = 17.13) and the HMD-VR (Nausea: M = 2.39, SD = 5.93; Oculomotor: M =
9.45, SD = 9.76; Disorientation: M = 3.48, SD = 8.65) conditions (Nausea: U = 66, p =
0.730, d = 0.10; Oculomotor: U = 68, p = 0.837, d = 0.00; Disorientation: U = 67.5, p =
0.745, d = 0.09). These results suggest that HMD-VR neurofeedback does not cause
additional adverse effects beyond using a conventional computer screen in healthy
individuals.
Figure B.5. Average power spectral density during trials between conditions. (A) The
relative group-level PSD for the target electrode C3, representing the left motor cortex
(8-24 Hz) between the Screen (left, blue) and HMD-VR (right, yellow) conditions was
not significantly different. (B) The relative group-level alpha between the Screen (left,
blue) and HMD-VR (right, yellow) conditions was also not significantly different. (C) The
relative group-level beta between the Screen (left, blue) and HMD-VR (right, yellow)
conditions was also not significantly different.
B.4.5 Comparison of Presence and Embodiment Between Screen and
HMD-VR
We checked for normality for each feature obtained by the Presence Questionnaire
(i.e., Realism, Possibility to Act, Quality of Interface, Possibility to Examine, and Self-
Evaluation of Performance) and Embodiment Questionnaire (i.e., Embodiment, Self
Embodiment, Spatial Embodiment). For each feature, the distribution of the data was not
significantly different from a normal distribution.
There was a significant difference in reports of embodiment between the two
conditions (t(11) = -2.21, p = 0.049, d = 0.48; Screen: M = 4.68, SD = 1.27, and HMD-VR:
M = 5.4, SD = 1.71) where individuals reported higher levels of Embodiment in the HMD-
VR condition. We then examined the sub-features of embodiment and found a significant
difference in reports of Spatial Embodiment between the two conditions (t(11) = -3.77, p
= 0.003, d = 0.87; Screen: M = 3.60, SD = 2.04, and HMD-VR: M = 5.35, SD = 2.00)
where individuals reported higher levels of Spatial Embodiment in the HMD-VR condition.
However, there was no significant difference in reports of Self Embodiment between the
two conditions (t(11) = -0.10, p = 0.922, d = 0.03; Screen: M = 5.39, SD = 1.17, HMD-VR:
M = 5.43, SD = 1.76). These results suggest that neurofeedback presented in a first
person perspective in HMD-VR may increase one’s feeling of embodiment compared to
neurofeedback presented on a conventional computer screen.
In addition, there were no significant differences between reports of presence in
the two conditions (Realism: t(11) = -1.95, p = 0.078, d = 0.47, Screen: M = 30.00, SD =
6.35, HMD-VR: M = 33.00, SD = 6.40; Possibility to Act: t(11) = -1.37, p = 0.199, d = 0.44,
Screen: M = 18.17, SD = 3.70, HMD-VR: M = 19.92, SD = 4.19; Quality of Interface: t(11)
= − 0.62, p = 0.548, d = 0.19, Screen: M = 12.83, SD = 3.07, HMD-VR: M = 13.42, SD =
2.97; Possibility to Examine: t(11) = − 2.01, p = 0.070, d = 0.72, Screen: M = 13.17, SD
= 2.59, HMD-VR: M = 14.92, SD = 2.27; Self-Evaluation of Performance: t(11) = -1.24, p
= 0.241, d = 0.49, Screen: M = 10.0, SD = 1.95, HMD-VR: M = 11.00, SD = 2.13). This
suggests that HMD-VR neurofeedback may specifically increase embodiment but not
presence in healthy individuals.
B.4.6 Relationship Between Embodiment, Presence, and Neurofeedback
Performance
We next examined whether individual differences in embodiment related to
neurofeedback performance for each condition. We ran a simple linear regression of
neurofeedback performance based on the overall Embodiment feature. For the HMD-VR
condition, we found a significant relationship between embodiment and neurofeedback
performance (F(1,10) = 8.293, p = 0.016; R² = 0.399). However, for the Screen condition,
we did not find a significant relationship between embodiment and neurofeedback
performance (F(1,10) = 0.434, p = 0.525; R² = -0.054). These results suggest that the level
of embodiment is specifically related to neurofeedback performance only in HMD-VR and
not on a conventional computer screen (Figure B.6A).
To better understand whether specific sub-features of embodiment also related to
neurofeedback performance, we then examined if participants’ levels of self and spatial
embodiment related to their neurofeedback performance for each condition (Screen,
HMD-VR). We ran a multiple linear regression of neurofeedback performance based on
the two embodiment sub-features (i.e., Self Embodiment, Spatial Embodiment). For the
HMD-VR condition, we found a near significant relationship between the two embodiment
sub-features and neurofeedback performance (F(2,9) = 3.858, p = 0.0617; R² = 0.342).
For the Screen condition, we did not find a significant relationship between the two
embodiment sub-features and neurofeedback performance (F(2,9) = 0.706, p = 0.519;
R² = -0.056). These results further suggest that the level of embodiment is specifically related to
HMD-VR neurofeedback performance. Figures B.6B and B.6C show regression lines for
both Self Embodiment and Spatial Embodiment, respectively.
Although there were no differences in presence between the Screen and HMD-VR
conditions, we also explored whether individual differences in presence related to
neurofeedback performance for each condition (Screen, HMD-VR). We ran a multiple
linear regression of neurofeedback performance based on the five presence features (i.e.,
Realism, Possibility to Act, Quality of Interface, Possibility to Examine, and Self-
Evaluation of Performance). We did not find a significant relationship between the five
presence features and neurofeedback performance for either the Screen or HMD-VR
condition (HMD-VR: F(5,6) = 0.476, p = 0.452; R² = 0.039; Screen: F(5,6) = 0.840,
p = 0.567; R² = -0.078). These results suggest that the level of presence does not seem to
be significantly related to either HMD-VR or computer screen neurofeedback
performance.
B.5 Discussion
The current pilot study examined whether neurofeedback from a motor-related
brain–computer interface provided in HMD-VR could lead to better neurofeedback
performance compared to the same feedback provided on a standard computer screen.
In addition, differences in embodiment and presence between Screen and HMD-VR
conditions were examined. Finally, we explored whether individual differences in
embodiment and presence related to neurofeedback performance in each condition.
Overall, we found preliminary evidence that healthy participants showed similar
levels of neurofeedback performance in both Screen and HMD-VR conditions; however,
we found a trend for better performance in the HMD-VR condition. Additionally,
participants reported greater embodiment in the HMD-VR versus Screen condition and
higher reported levels of embodiment related to better neurofeedback performance in the
HMD-VR condition only. These preliminary results suggest that HMD-VR-based
neurofeedback may rely on an individual’s sense of embodiment for successful use and
improved performance. This is in line with our theoretical framework, which is further
detailed in (Spicer et al., 2017; Vourvopoulos, Pardo, et al., 2019), in which we propose
that greater embodiment of a virtual avatar should lead to larger changes in neural activity
in the direction of the desired feedback. These results suggest that greater embodiment
of HMD-VR neurofeedback essentially augments the weight of the neurofeedback,
creating a more effective and responsive closed-loop neurofeedback system. This has
important implications for a number of more recent HMD-VR neurofeedback systems
(Blanco-Mora et al., 2019; Johnson et al., 2018; Luu et al., 2016; Vourvopoulos, Jorge, et
al., 2019; Vourvopoulos & Bermúdez i Badia, 2016). However, future studies should
explore these findings with a larger sample size over a longer period of time.
Figure B.6. Relationship between subjective experience and neurofeedback
performance in Screen (blue) and HMD-VR (yellow). Participants reported their level of
Embodiment on a scale from 1 to 10 (Table C.1). (A) Embodiment: For the HMD-VR
condition, embodiment was significantly related to performance. However, for the
Screen condition, embodiment did not significantly relate to neurofeedback
performance. (B) Self Embodiment and (C) Spatial Embodiment: For the HMD-VR
condition, we found a near significant relationship between the two embodiment sub-
features and neurofeedback performance. However, for the Screen condition, we did
not find a significant relationship between the two embodiment sub-features and
neurofeedback performance.
B.5.1 Similar Neurofeedback Performance and Time to Complete
Successful Trials Between Screen and HMD-VR
Regardless of condition (Screen, HMD-VR), we found that on average, individuals
were able to accurately modulate their brain activity to successfully control a virtual arm
on over 80 percent of trials. These results suggest that neurofeedback based on motor
imagery, using biologically relevant stimuli, can occur either on a conventional computer
screen or in immersive virtual reality using a head-mounted display. However, as seen in
Figure B.4A and B.4B, there is a trend towards better performance and faster time to
complete a successful trial in the HMD-VR condition compared to the Screen condition,
which may not have reached significance because of our limited dataset (further discussed in
Section B.5.7). This trend towards greater sensorimotor desynchronization can also be
observed in the individual subject data (Supplementary Figures B.S1 and B.S2), with
more individuals showing greater sensorimotor desynchronization in the HMD-VR condition than in the
Screen condition. Additionally, there is a larger range of interindividual variability in both
performance and average time to complete a successful trial in the HMD-VR condition,
suggesting that some individuals may benefit more from HMD-VR than others. This
suggestion is further supported by the correlation between performance and embodiment,
in which we show that individuals who had greater embodiment had better performance
in HMD-VR only (further discussed in Section B.5.5). An important future question to
examine is whether neurofeedback performance in a clinical population (e.g., individuals
with stroke) also shows no differences between HMD-VR and Screen conditions.
B.5.2 Similar Power Spectral Density Between a Computer Screen and
HMD-VR
Similarly, regardless of condition (Screen, HMD-VR), we found that on average,
individuals had similar levels of sensorimotor activity, as measured by PSD between 8-
24 Hz and when divided into alpha and beta frequency bands. This was expected as the
sensorimotor desynchronization used to calculate PSD was also used to drive the virtual
arm in the task. However, similar to the performance results, we see a trend for greater
desynchronization in the alpha band for the HMD-VR condition (Figure B.5B). While we
do not see a trend for greater desynchronization in the beta band for the HMD-VR
condition (Figure B.5C), these results may indicate a neurofeedback-based effect for the
different displays, suggesting that feedback type may be able to alter brain activity. We
also showed a significant relationship between PSD and neurofeedback performance,
where increased desynchronization corresponded to increased performance.
B.5.3 Similar Simulator Sickness Between a Computer Screen and HMD-
VR
As expected, regardless of condition (Screen, HMD-VR), we found that on
average, individuals had similar reported levels of simulator sickness. These results were
expected given that our experimental design did not involve factors found to result in
adverse simulator sickness (e.g., highly accelerated movements, jumping movements, long
usage time, etc.) (Porcino et al., 2017) and suggest that HMD-VR neurofeedback does
not cause additional adverse effects beyond using a conventional computer screen in
healthy individuals.
B.5.4 A Higher Level of Embodiment in HMD-VR Compared to Screen
After performing the neurofeedback task in each condition (Screen, HMD-VR),
participants reported having higher levels of embodiment in HMD-VR compared to a
conventional computer screen. This is in agreement with previous research showing that
HMD-VR is effective for inducing embodiment (Osimo et al., 2015; Slater & Sanchez-
Vives, 2016). However, while it has been intuitively suggested that viewing a virtual body
in HMD-VR should induce greater embodiment than viewing the same virtual body on a
conventional computer screen, to our knowledge, there has been little empirical evidence
to demonstrate this. Here, we address this gap by providing evidence that HMD-VR does
seem to in fact increase embodiment compared to a conventional computer screen during
a neurofeedback task.
B.5.5 Greater Embodiment is Related to Better Neurofeedback
Performance in HMD-VR
In line with our hypothesis, we show that greater embodiment was positively
related to better neurofeedback performance in HMD-VR. This uniqueness to HMD-VR
could possibly be explained by an increased range of embodiment levels in the HMD-VR
condition compared to the Screen condition. These results are consistent with previous
research where embodiment has been shown to lead to neurophysiological and
behavioral changes based on the virtual body’s characteristics, such as overestimating
object distances after being given an elongated virtual arm in HMD-VR (Kilteni, Normand,
et al., 2012). These results are also consistent with previous research showing that embodiment
of a humanoid arm compared to a robotic arm improves motor imagery skills (Alimardani
et al., 2016). While our findings do not support causality, they are important because they
suggest that embodiment may have the potential to improve an individual’s
neurofeedback performance, and HMD-VR may be able to increase the level of
embodiment of an individual, beyond that of a conventional computer screen. This
suggests that if individuals were to encounter a ceiling effect while controlling
neurofeedback on a conventional computer screen, they might be able to show greater
improvements, beyond this ceiling, if they show greater embodiment in HMD-VR.
B.5.6 Future Clinical Implications
We designed REINVENT as an EEG-based BCI with HMD-VR neurofeedback for
individuals with severe motor impairments, such as stroke. However, before exploring the
effectiveness of this device in a population with severe motor impairments, we first
examined whether providing neurofeedback in HMD-VR improves performance
compared to receiving the same neurofeedback on a conventional computer screen in
healthy adults. Our findings suggest that increased embodiment may improve individuals’
neurofeedback performance, which could potentially improve patients’ recovery.
Furthermore, our results suggest that HMD-VR may facilitate an increased level of
embodiment, beyond what might be seen with traditional screen-based BCIs.
The findings in this pilot study are important considering that a patient’s perception
of BCI control can have an effect on their own perceived capabilities (Gilbert et al., 2019).
Patients exhibiting a lack of embodiment may experience distress or a loss of control
(Gilbert et al., 2019), which could then stifle recovery. Therefore, it is important to measure
patients’ sense of embodiment in EEG-based BCI therapy to avoid feelings of
estrangement. Future work might explore whether measures of embodiment,
administered prior to HMD-VR neurofeedback training, could predict embodiment and
neurofeedback performance. If so, these “pre-assessments” of embodiment potential
could be used to predict and personalize EEG-based BCI therapy. As previous brain–
computer interfaces have been shown to produce positive changes in muscle and
sensorimotor brain activity in post-stroke individuals, even when using screen-based
environments (Ono et al., 2014), we anticipate that embodiment in HMD-VR may lead to
even greater improvements. However, as these data are preliminary, more data are
needed to explore this hypothesis.
B.5.7 Limitations
Our pilot study has several limitations. First was the limited sample size of 12
individuals and the limited number of trials collected per condition (i.e., 30 trials per
condition). However, even with this limited sample, we were still able to extract the PSD,
calculate relative PSD to baseline, and find a significant relationship between PSD and
neurofeedback performance. However, this limited power may have resulted in the lack
of significance seen between conditions in performance, PSD, and presence, where we
see trends towards increased performance, greater sensorimotor desynchronization, and
increased presence in the HMD-VR condition compared to the Screen condition, but
these trends do not reach significance. Future research should explore this with greater
power both in the number of participants and in the number of trials collected. Additionally,
as our study was a within-subject experimental design, we were not able to examine
differences in age and gender, which could influence EEG-based BCI performance. In
addition to increasing the number of participants and trials collected, future studies should
also consider a between-subjects experimental design.
A second limitation was the use of only eight channels of dry electrodes to collect
sensorimotor activity and the broad frequency band used (8-24 Hz). Given that our
system was initially designed to provide a low-cost rehabilitation intervention, we chose
to drive the neurofeedback-based device with a limited number of dry electrodes as
previous studies have found dry electrodes to be suitable for neurofeedback applications
(McMahon & Schukat, 2018; Uktveris & Jusas, 2018). However, we recognize that the
signal quality of these electrodes can be noisy, and even though we were able to
successfully extract power spectral density, in future studies, we plan to use higher quality
electrodes (e.g., active gel electrodes), which would also allow us to narrow the frequency
band and personalize the feedback across individuals. In addition, although the low
resolution from eight channels, primarily clustered around bilateral sensorimotor regions,
facilitated a faster application of the EEG cap, it also limited our post-hoc analyses. Future
research studies should utilize more channels for higher resolution. This would enable
topographical analyses of whole brain activity during neurofeedback training as well as
the ability to examine brain activity in non-motor regions as control regions.
A third limitation was the use of two separate questionnaires to examine presence
and embodiment. Although presence and embodiment are often discussed separately,
there are certainly overlapping features of the two constructs (e.g., greater embodiment
often leads to greater presence). Future work may consider the use of standardized
questionnaires that combine measures of presence and embodiment into a single
questionnaire, such as the recently proposed questionnaire by Gonzalez-Franco and
Peck (2018).
A fourth limitation is that here we studied only healthy individuals. This is notable
as the effects observed may be smaller than those of a clinical population, who may have
more room to improve. Specifically, the healthy individuals in our study showed, on
average, 80% accuracy with the EEG-based BCI within a short time frame, which may reflect
their intact sensorimotor control. However, individuals with severe motor impairments
may start with lower scores and have greater room for improvement due to damage to
these same networks. Future work may examine extended training with the HMD-VR
environment to see if it is possible for individuals to improve beyond their current levels
with greater time in the environment, as well as the effects of embodiment on EEG-based
BCI performance in individuals with stroke, which may provide a greater range of abilities
and thus greater potential effects with immersive virtual reality. Additionally, future work
may examine whether EEG-based BCI training has an effect on real world movements
and whether these effects are different between a conventional computer screen or HMD-
VR environments. Future work should build upon these modest results and explore the
effects of embodiment on HMD-VR neurofeedback performance with large samples and
in clinical populations.
B.6 Conclusions
This preliminary work suggests that individuals have higher levels of embodiment
when given immersive virtual reality-based neurofeedback compared to the
neurofeedback displayed on a computer screen. Furthermore, this increased sense of
embodiment in immersive virtual reality neurofeedback has the potential to improve
neurofeedback performance in healthy individuals over their performance on a computer
screen. HMD-VR may provide a unique medium for improving EEG-based BCI
performance, especially in clinical settings related to motor recovery. Future work will
explore ways to increase presence and embodiment in immersive virtual reality and
examine these effects on motor rehabilitation in patients with severe motor impairment.
B.7 Supplementary Material
Table B.S1. Mean, SD, and skewness for all variables
Variable                           | Screen: Mean | SD      | Skewness | HMD-VR: Mean | SD      | Skewness
Neurofeedback performance          | 80.95%       | 9.1%    | -0.27    | 83.33%       | 14.9%   | -0.32
Time to complete successful trials | 4.347 s      | 1.17 s  | 0.26     | 3.996 s      | 2.41 s  | 0.07
PSD band (8-24 Hz)                 | -4.69        | 2.96    | -0.07    | -4.32        | 3.41    | -0.48
Alpha band (8-12 Hz)               | -1.84        | 2.90    | 0.26     | -2.89        | 3.04    | -0.37
Beta band (13-24 Hz)               | -5.88        | 3.08    | -0.19    | -4.92        | 3.63    | -0.46
Nausea                             | 1.59         | 8.94    | 0.36     | 2.39         | 5.93    | -0.15
Oculomotor                         | 9.48         | 12.15   | 0.96     | 9.45         | 9.76    | -0.22
Disorientation                     | 4.64         | 17.13   | 2.39     | 3.48         | 8.65    | 2.22
Realism                            | 30.00        | 6.35    | 0.41     | 33.00        | 6.40    | 1.07
Possibility to Act                 | 18.17        | 3.70    | -0.57    | 19.92        | 4.19    | 0.21
Quality of Interface               | 12.83        | 3.07    | 0.16     | 13.42        | 2.97    | -0.32
Possibility to Examine             | 13.17        | 2.59    | 0.17     | 14.92        | 2.27    | 0.45
Self-Evaluation of Performance     | 10.0         | 1.95    | 0.69     | 11.00        | 2.13    | -0.18
Embodiment                         | 4.68         | 1.27    | 1.01     | 5.4          | 1.71    | 0.64
Self Embodiment                    | 5.39         | 1.17    | -0.13    | 5.43         | 1.76    | 0.17
Spatial Embodiment                 | 3.60         | 2.04    | 1.23     | 5.35         | 2.00    | 0.29
Figure B.S1. Individual participant EEG activity for C3. (A) Analysis of the relative PSD
to baseline of C3 for the alpha band at the group-level (left) and at the individual-level
(right). For individual subjects, for relative alpha levels during the task, there were
significant differences between Screen and HMD-VR in nine participants. From those,
three participants had significantly lower alpha (greater desynchronization) during the
Screen condition and six participants had significantly lower alpha (greater
desynchronization) during the HMD-VR condition. (B) Analysis of the relative PSD to
baseline of C3 for the beta band at the group-level (left) and at the individual-level
(right). For individual subjects, for relative beta levels during the task, there were
significant differences between Screen and HMD-VR in ten participants. From those,
five participants had significantly lower beta during the Screen condition and five
participants had significantly lower beta during the HMD-VR condition.
Figure B.S2. Individual participant EEG activity for C4. (A) Analysis of the relative PSD
to baseline of C4 for the alpha band at the group-level (left) and at the individual-level
(right). There were significant differences between Screen and HMD-VR in eleven
participants. From those, four participants had significantly lower alpha during the
Screen condition and seven participants had significantly lower alpha during the HMD-
VR condition. (B) Analysis of the relative PSD to baseline of C4 for the beta band at the
group-level (left) and at the individual-level (right). There were significant differences in
the beta band between Screen and HMD-VR in eleven participants. From those, five
participants had significantly lower beta during the Screen condition and six participants
had significantly lower beta during the HMD-VR condition.
Appendix C: List of Relevant Publications
Published:
1. Juliano, J.M., & Liew, S.-L. (2020). Transfer of motor skill between virtual reality
viewed using a head-mounted display and conventional screen environments.
Journal of NeuroEngineering and Rehabilitation, 17, 1-13.
2. Juliano J.M., Spicer R., Vourvopoulos A., Lefebvre S., Jann K., Santarnecchi E.,
Krum D., Liew S.-L. (2020). Embodiment is related to better performance on a brain-
computer interface in immersive virtual reality: A pilot study. Sensors, 20(4), 1204.
3. Anglin J.M., Sugiyama T., & Liew S.-L. (2017). Visuomotor adaptation in head-
mounted virtual reality versus conventional training. Scientific Reports, 7, 45469.
Manuscripts submitted for publication:
4. Juliano J.M., Schweighofer N., Liew S.L. (2022) Increased cognitive load in
immersive virtual reality during visuomotor adaptation is associated with decreased
long-term retention and context transfer. Manuscript submitted for publication.
5. Juliano J.M., Phanord C., Liew S.L. (2022) Visual processing of action directed
toward three-dimensional objects in immersive virtual reality involves holistic
processing of object shape. Manuscript submitted for publication.
Abstract
Immersive virtual reality using a head-mounted display has been increasing in use for motor learning purposes. This increased use in areas such as motor rehabilitation and surgical training is largely driven by the potential for clinicians and educators to have greater control and customization of the training environment. However, there is conflicting evidence on the effectiveness of these devices, particularly on whether the motor skills learned in an HMD-VR environment transfer to the real world. Importantly, the mechanisms driving and limiting HMD-VR context transfer remain unclear, as is how differences between HMD-VR and more conventional training environments affect the motor learning process. One noteworthy difference is that HMD-VR seems to increase cognitive load during complex motor learning tasks. However, it is unclear how increased cognitive load affects motor memories (i.e., retention and context transfer), nor is it clear what may cause increased cognitive load in HMD-VR. The overall goal of this dissertation work is to address gaps in our understanding of what makes upper extremity motor learning and movements in HMD-VR different from more conventional training environments and how these differences could influence the formation of motor memories. Together, this work bridges motor learning mechanisms with a theoretical framework of cognitive load to examine the impact of cognitive load on motor memory formation. The findings discussed will influence the increasing use of HMD-VR in motor learning applications, such as motor rehabilitation and surgical training.