DECODING LIMB MOVEMENT FROM POSTERIOR PARIETAL CORTEX
IN A REALISTIC TASK
by
Markus Hauschild
______________________________________________________________________
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(BIOMEDICAL ENGINEERING)
May 2010
Copyright 2010 Markus Hauschild
Table of Contents

List of Tables
List of Figures
Abstract
Preface

Chapter 1: Introduction – relevant motor control basics
Neural control of movement
Motor execution – online control of limb trajectories
The role of forward estimation in motor control
Behavioral evidence for the existence of forward estimation
Key central nervous system components involved in motor control
The role of posterior parietal cortex in motor control
Anatomy of posterior parietal cortex

Chapter 2: Decoding movement from PPC
Introduction
Materials and Methods
Behavioral training
Head holder implant surgery
Array implant surgery
Recording arrays
Signal Processing, Spike Sorting
Behavioral Task
Workspace Considerations
Offline decode algorithms
Model assessment
Closed‐loop brain‐control decode
Temporal tuning analysis
Analysis of adaptation to temporal perturbations
EMG Recordings
Latency measurements
Results
Offline decoding
Online decoding ‐ brain control
Temporal tuning of PPC neurons
Adaptation to temporal perturbations
Microelectrode recording array performance
Discussion
Restoring motor function using cortical signals: state of the art
Unique characteristics make posterior parietal cortex an attractive target region for the extraction of signals to operate prosthetic assist‐devices
Summary and discussion of results
Conclusions

Chapter 3: A Virtual Reality Environment for Designing and Fitting Neural Prosthetic Limbs
Key Requirements
VR Environment ‐ Overview
VRE Architecture

Bibliography
List of Tables
Table 2‐1: Array implant locations for monkey G and R
Table 2‐2: Electrode specifications for short type and long type arrays
Table 2‐3: R² position decode performance of linear regression, ridge filter, and Kalman filter decode algorithms
Table 2‐4: Kalman filter R² prediction performance for position, velocity, and acceleration
List of Figures

Figure 1‐1: Surfer riding a wave – an example of voluntary movement in a complex motor task
Figure 1‐2: Observer control structure
Figure 1‐3: Major CNS components involved in motor control
Figure 1‐4: Brain areas involved in motor control
Figure 1‐5: Simplified network model of visually guided reaching and grasping
Figure 2‐1: Anatomical location of targeted brain areas
Figure 2‐2: Closed loop feedback paradigm
Figure 2‐3: 3D virtual reality setup for monkeys with limb and eye tracking hardware
Figure 2‐4: Gray Matter Research single asymmetric type titanium head holder
Figure 2‐5: MRI images of monkey R
Figure 2‐6: Surgery plan for monkey R
Figure 2‐7: Floating microelectrode arrays
Figure 2‐8: Well isolated neurons recorded on a single electrode
Figure 2‐9: Virtual 3D workspace
Figure 2‐10: Artificially imposed dynamics alter the trajectory of the virtual 3D cursor
Figure 2‐11: Single reach sequence to 6 targets displayed in Cartesian coordinates
Figure 2‐12: Offline decoding performance for trajectory reconstruction
Figure 2‐13: Ridge filter reconstruction of hand position
Figure 2‐14: Kalman filter reconstruction of hand position
Figure 2‐15: Kalman filter reconstruction of hand velocity
Figure 2‐16: Kalman filter reconstruction of hand acceleration
Figure 2‐17: Comparison of Kalman filter R² prediction performance for hand position, velocity, and acceleration
Figure 2‐18: Neuron dropping curves comparing Kalman and Ridge filter decoding efficiencies
Figure 2‐19: Brain control performance improvement over single sessions
Figure 2‐20: Brain control performance improvement over multiple sessions
Figure 2‐21: Examples of successful brain control trajectories
Figure 2‐22: Examples of successful brain control trajectories in the absence of real‐limb movement
Figure 2‐23: Control sequence of training reaches to 8 targets with EMG leads attached
Figure 2‐24: Brain control performance over multiple sessions with the limb immobilized
Figure 2‐25: Brain control in the absence of limb movement results in increased R² offline decode performance
Figure 2‐26: Temporal tuning distribution of PPC neurons
Figure 2‐27: Temporal tuning of the same neuron for position, velocity and acceleration
Figure 2‐28: Changes of temporal tuning observed in a single neuron while the animal learned to control the task with novel dynamics superimposed
Figure 2‐29: Neurons exposed to novel dynamics update their representation of ongoing movement
Figure 2‐30: Microelectrode recording array performance reported over one year
Figure 3‐1: VRE Overview
Figure 3‐2: High level diagram of the VR environment used for prosthesis design and early patient training
Figure 3‐3: VR configuration ‐ block diagram
Figure 3‐4: Top layer of the real‐time simulation environment in SIMULINK
Figure 3‐5: Architectural diagram of MSMS
Abstract
Neural activity in posterior parietal cortex (PPC) can be harnessed not only to estimate the
endpoint of a reach (Musallam, Corneil et al. 2004) but also to control the continuous trajectory
of an end‐effector (Mulliken, Musallam et al. 2008). Here we expand on this work by showing
that trajectory information can be extracted robustly from PPC neurons in more realistic, less
constrained tasks. Although it is thought that the visuo‐motor areas in PPC rely on gaze‐
centered reference frames to encode movement‐related parameters, for the first time we were
able to show that hand movement can be decoded accurately without constraining gaze.
Furthermore, to evaluate the potential of PPC signals for controlling prosthetic limbs under
realistic conditions we increased the complexity of the task by studying point‐to‐point reaches in
a 3D workspace instead of relying on the classic lower‐dimensional 2D center‐out task.
Specifically, we trained two monkeys to perform arm movements to guide a 3D cursor on a
computer display to targets presented at random locations. We found that we could accurately
reconstruct the trajectory of the cursor using a relatively small ensemble of simultaneously
recorded PPC neurons. We also tested whether we could decode trajectories during closed‐loop
brain control sessions, in which the real‐time position of the cursor was determined solely by
the monkeys' neural activity in PPC. The monkeys learned to perform brain control trajectories
at up to 100% success rate after just a few sessions. This improvement in behavioral
performance was accompanied by an increase in off‐line decoding performance of the PPC
ensemble. In addition to these spatial learning effects we observed learning in the temporal
domain. In a task where the animal’s cursor was intentionally perturbed by superimposed
artificial dynamics of a realistic prosthetic limb model, we found gradual adaptation of single‐
neuron activity, suggesting that PPC neurons are able to update their representation of limb
movement when its dynamics change. Both spatial and temporal learning will be crucial for
achieving satisfactory results when the central nervous system is forced to adapt to an assist‐
device with initially unknown kinematic and dynamic characteristics.
Based on our findings we conclude that PPC is a strong candidate brain region for the extraction
of signals to control neural prosthetic limbs. The decoding accuracies found are similar to
results reported from the more commonly targeted motor areas, but unlike the motor areas PPC
provides access to additional variables such as gaze and intended reach‐targets.
Preface
This thesis summarizes scientific and engineering results of my research conducted at the University of Southern California and the California Institute of Technology. The overall goal of this research
effort was to provide novel insight into how the brain controls voluntary movement. Posterior
parietal cortex, the brain region we focused on, performs a wide variety of functions covering
the entire spectrum from attention to intention, but it is thought to play a key role in motor
planning and execution. The research presented here investigates its involvement in limb
trajectory control during movement execution.
Chapter 1, “Introduction – relevant motor control basics”, introduces a few basic motor control
concepts. The emphasis is on parietal cortex and its role in motor execution, i.e. the concepts
the following research sections rely on.
In Chapter 2, “Decoding limb movement from posterior parietal cortex”, we present experiments
designed to reveal the role of posterior parietal cortex in online guidance of hand movement.
We introduce and test mathematical models predicting limb trajectories based on the neural
ensemble activity recorded from posterior parietal cortex. In a separate series of experiments
we show how these mathematical models can serve to control a robotic manipulator, an
approach that may be applicable in the field of cortically controlled prosthetic limbs to restore
movement in paralyzed patients and amputees. The section concludes with a brief encoding
study of neurons in posterior parietal cortex adapting to altered movement dynamics,
demonstrating the involvement of parietal cortex in temporal aspects of motor control in
addition to the spatial aspects analyzed in the preceding decoding analyses.
Chapter 3, “A virtual reality environment for designing and fitting neural prosthetic limbs”,
summarizes our efforts to develop a universal platform for the design and testing of novel
prosthetic limbs and algorithms and for training patients to use their artificial limbs. The
experiments presented in this thesis utilized a limited set of features offered by this platform,
whose features, implementation, and potential applications are presented in more detail in this
section.
Chapter 1 Introduction – relevant motor control basics
Neural control of movement
The ability to move in a rapid, coordinated manner has developed over millions of years from
rather primitive reflex responses in simple vertebrates to an impressive repertoire of complex,
skilful movements in humans.
The surfer in Figure 1‐1 demonstrates execution of a highly complex motor task, involving well
coordinated activity of the entire musculoskeletal system including legs, arms, and torso. The
small surfboard of an experienced surfer offers maximum maneuverability but minimum
stability and therefore requires not only highly accurate timing and coordination of limb
movements, but also a well trained sense of gravity and balance.

Figure 1‐1 Surfer riding a wave – an example of voluntary movement in a complex motor task

Although surfing represents a highly specialized and demanding task clearly beyond the motor skills required in average
human life, it can serve as a useful example to illustrate important aspects of voluntary motor
control:
1. It requires Motor Planning, the process by which the desired movement states are specified,
given a motor goal: The surfer, for example, aims for an approaching wave, considering the
current state of his body (position, velocity, heading, …), and environmental conditions such as
wave size and shape. During motor planning, his central nervous system (CNS) attempts to find
a solution to a problem with a potentially infinite number of solutions by reducing the degrees
of freedom from neural signals to movement kinematics. One attempt to explain how our CNS
solves this complex problem is the optimal control framework. It hypothesizes that the CNS
attempts to eliminate redundancy in the system under control (the musculoskeletal plant) by
evaluating quantitatively a cost function (Bryson and Ho 1975). The cost function is usually
defined as the integral of an instantaneous cost, over a certain time interval, and the aim is to
minimize the value of this cost function. It has been demonstrated, for example, that a model
minimizing jerk, the first derivative of acceleration, successfully converges to a unique solution
that accurately approximates real movement (Hogan 1984; Flash and Hogan 1985). The
minimized cost function, therefore, identifies the movement plan with the lowest associated
cost, i.e. the optimal approach for performing a particular movement. It explains why the skilled
surfer chooses a particular approach, a particular kinematic limb configuration, and muscle
activation, to catch and ride the approaching wave seemingly without effort, whereas the less
experienced beginner, although with the same goal in mind, and possibly achieving a similar,
although less elegant outcome, chooses a different, less optimal strategy, struggling
considerably and exhausting himself quickly. Evidence from perturbed visual feedback studies indicates that humans (and perhaps other primates) plan movement trajectories in visual,
kinematic coordinates (Wolpert, Ghahramani et al. 1994; Flanagan and Rao 1995; Wolpert,
Ghahramani et al. 1995); force field studies suggest that movements are planned in the
kinematic, instead of the dynamic domain (Lackner and Dizio 1994; Shadmehr and Mussa‐Ivaldi
1994). However, knowledge of the body dynamics is used in planning.
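To make the optimal control idea concrete, the minimum‐jerk model can be stated compactly (a sketch in our own notation summarizing the published result of Flash and Hogan, not a reproduction of their derivation). The cost functional is

J = \frac{1}{2} \int_0^T \| \dddot{x}(t) \|^2 \, dt ,

and for a point‐to‐point reach from x_0 to x_T with zero velocity and acceleration at both endpoints, its unique minimizer is the quintic polynomial

x(t) = x_0 + (x_T - x_0)\,(10\tau^3 - 15\tau^4 + 6\tau^5), \qquad \tau = t/T ,

whose bell‐shaped speed profile closely resembles measured point‐to‐point reaches.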
2. Motor planning is usually followed by Motor Execution, the computational process that
monitors and guides movement online, and that determines appropriate motor commands to
correct for deviations from the original motor plan: The surfer, for example, while riding a wave,
may end up in a position less than ideal with respect to the wave that propels him, either due to
environmental factors such as strong currents abruptly changing the shape of the wave, or due
to his own sub‐optimal choice of strategy, for example possibly caused by his center of mass
being in a non‐ideal position. In order to continue riding the wave he needs to make precisely
timed adjustments, typically by shifting his weight. The feedback involved in noticing such deviations
is primarily visual, proprioceptive (relying primarily on muscle stretch and velocity information
from muscle spindles, and on muscle force information from Golgi tendon organs), and in the
surfer’s case vestibular to obtain body position and acceleration with respect to the
gravitational field. Corrective actions could, therefore, be accounted for by a feedback
controller. However, frequently there is not enough time for sensory feedback signals to reach
the brain for computing a corrective action. This is particularly true in case of the inherently
unstable system of a surfer on a small surfboard riding a wave continuously changing its shape,
but also in very basic motor tasks for example when a person performs rapid reaching
movements. This represents strong evidence against pure feedback control, and multiple
studies indicate that online corrections that are based on sensory feedback can produce
unstable trajectories, e.g. (Hollerbach 1982; Gerdes and Happee 1994). Therefore, it was
hypothesized that movement is controlled in a feedforward manner. The equilibrium point
hypothesis is one feedforward approach that initially had multiple supporters. It states that
trajectories are created by defining equilibrium points which the limb tracks due to the elastic
spring‐like properties of the musculoskeletal plant (Feldman 1966). Soon, however, it was found
that the equilibrium point hypothesis does not account properly for various movement
characteristics (Bizzi, Accornero et al. 1984), despite multiple extensions, e.g. (McIntyre and Bizzi
1993). In general, a major disadvantage of feedforward control is the need to issue accurate
and appropriate motor commands to achieve the desired outcome. This implies detailed
knowledge of the response characteristics of the motor apparatus and of its initial state.
Although this is possible for basic musculoskeletal plant structures such as the eye which has
relatively simple dynamics, it is unlikely to be the case for the multi‐joint arm (Polit and Bizzi
1979), (Ghez, Gordon et al. 1990).
The current working hypothesis is that our CNS continuously holds and updates an expectation
about ongoing processes it needs to monitor. In the case of motor control, it is thought that the
brain relies on an internal estimate of the ongoing movement in order to be able to respond to
motor errors rapidly, in the presence of delayed sensory feedback information. The concept is
similar to observer‐based control, a hybrid feedforward‐feedback control scheme originating
from engineering (Goodwin and Sin 1984) where continuously updated internal feedback loops
replace direct sensory feedback, thus resolving the latency issue of pure feedback control (Fig.
1‐2). These loops rely on a forward model that predicts the consequences of motor commands
sent to the limb based on motor outflow. The prediction is integrated with the sensory inflow to
form an accurate estimate of the current movement state (Wolpert 1997). In such a model, the
probable position and velocity of an effector can be estimated with negligible delays and even
predicted in advance, thus making feedback strategies possible for fast reaching movements
(Desmurget and Grafton 2000).
3. The skills necessary to perform a potentially demanding motor task are acquired through
Motor Learning: Motor control relies on kinematic and dynamic transformations. As pointed
out under “motor execution”, these transformations may be contained in forward and inverse
models of the limb under control. The parameters of these transformations are not static but
change throughout life both on a short time scale, due to interactions with objects in the
environment (e.g. when the surfer switches from a small surfboard to a larger surfboard with
different dynamics), and on a longer time scale (e.g. due to growth and injury). Internal models
must therefore be adaptable to changes in the properties of the limb. Motor learning is the
process by which the CNS identifies changes in movement‐related parameters and adjusts to
them while practicing the movement sequence to be learned or even while mentally practicing
the imagined movement (Hall, Buckolz et al. 1992; Yue and Cole 1992). Several computational
approaches to learning internal models have been proposed (for a review see (Atkeson 1989;
Jordan 1995)).
Motor execution – online control of limb trajectories
The idea that neural control of limb movement encompasses both feedforward motor
commands and sensory feedback has been around since before the turn of the 20th century
(Mott and Sherrington 1895), (Woodworth 1899). Nevertheless, how these components are
integrated to produce a motor command to the muscles that results in accurate limb movement
to a desired goal is still being intensely debated (Bhushan and Shadmehr 1999), (Desmurget and
Grafton 2000), (Todorov and Jordan 2002), (Wolpert and Kawato 1998). Recent developments
in the field of sensorimotor integration have provided a new approach to understanding how
sensory and motor signals are combined to provide an estimate of the state of both the world
and one’s own body (Wolpert, Ghahramani et al. 1995; Wolpert 1997). The approach originates
from the powerful observer framework in engineering (Goodwin and Sin 1984). This framework
is relevant when there is a system under control, in this case the body, and the goal is to
estimate the state of the system (e.g. joint angles, hand position, velocity, etc.). To produce
optimal state estimates, the observer framework requires a system that monitors both the
inputs and outputs of the system, i.e. motor commands and sensory feedback. The observer
then uses this information to produce online state estimates, which are updated as further
sensory and motor signals arrive (Fig. 1‐2). In engineering terminology the mechanism that
predicts sensory consequences of motor commands is the internal forward model. It processes
recent motor commands (efference copy) using a model of the movement dynamics thereby
predicting the upcoming state of the effector (e.g. the limb) and compensating for delays in the
system. The prediction is integrated with sensory feedback (visual and somatosensory) to form
an optimal estimate of the current state of the limb (Jordan and Rumelhart 1992; Kawato 1999).
In other words, by including the feedforward model within an internal negative feedback loop,
an internal feedback signal is made available, thus avoiding the delays of the actual sensory
feedback signal. High gain negative feedback control is therefore possible if the CNS relies on
observer estimates instead of using sensory signals directly. The Kalman filter (Kalman 1960) is
an example of a recursively operating observer. It provides a method for obtaining state
estimates by combining two processes. The first process is based upon internal simulation of the
motor system while the second uses sensory feedback to correct the internal simulation. The
relative contributions of the internal simulation and sensory correction processes to the final
estimate are modulated across time so as to provide optimal state estimates.
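To make the observer concept concrete, the following minimal sketch implements such a recursive estimator for a one‐dimensional limb in Python. It is illustrative only: the 10ms time step, the noise covariances, and the assumption that sensors report position alone are our own choices, not parameters used in this thesis.

import numpy as np

dt = 0.01  # assumed 10 ms update interval
A = np.array([[1.0, dt], [0.0, 1.0]])  # internal model: position integrates velocity
B = np.array([[0.0], [dt]])            # efference copy enters as an acceleration-like command
H = np.array([[1.0, 0.0]])             # sensors report position only
Q = 1e-4 * np.eye(2)                   # assumed process (model) noise
R = np.array([[1e-2]])                 # assumed sensory noise

def observer_step(x_hat, P, u, y=None):
    # Internal simulation: predict the next state from the efference copy u.
    x_hat = A @ x_hat + B * u
    P = A @ P @ A.T + Q
    # Sensory correction: applied only when (delayed) feedback y has arrived.
    if y is not None:
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
        x_hat = x_hat + K * (y - (H @ x_hat).item())
        P = (np.eye(2) - K @ H) @ P
    return x_hat, P

x_hat, P = np.zeros((2, 1)), np.eye(2)
x_hat, P = observer_step(x_hat, P, u=1.0)          # feedback still in transit
x_hat, P = observer_step(x_hat, P, u=1.0, y=0.02)  # a delayed observation arrives

Between sensory samples the estimate is carried forward by the internal model alone; this is precisely the internal feedback signal that substitutes for delayed sensory feedback in the observer scheme described above.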
Numerous studies support the idea that the brain constructs internal forward (and inverse) models to control movement (see (Kawato, Furukawa et al. 1987), (Atkeson 1989), (Jordan and Rumelhart 1992), (Jordan 1995), (Wolpert, Ghahramani et al. 1995), (Desmurget, Epstein et al. 1999), (Pisella, Grea et al. 2000), (Shadmehr and Krakauer 2008) for review).

Figure 1‐2 Observer control structure
Efference copy signals serve as input to the forward prediction, and sensory signals are used to update a potentially inaccurate prediction. The CNS may rely on a similar concept to perform high‐gain negative feedback control despite delayed sensory feedback.

Two distinct
groups of internal forward models are thought to exist: (1) forward models containing a causal
representation of the motor system (Jordan and Rumelhart 1992), where motor commands
serve as inputs to the forward model predicting the next state of a movement based on
knowledge about the present state; (2) models of the behavior of an external environment used
to perform predictions about changes in this environment, e.g. predicting the trajectory of a ball
we wish to catch, (Lacquaniti and Maioli 1989). Entirely different from these two classes of
forward models, inverse internal models, (Atkeson 1989), invert the causal flow of the motor
system. An inverse dynamic model of the arm could, for example, estimate the motor
command that caused a particular movement.
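In symbols (a notational summary of this distinction, not an equation taken from the cited papers): a forward model maps the current state and motor command to a predicted next state, while an inverse model recovers the command required to produce a desired transition,

\hat{x}_{t+1} = f(x_t, u_t) \quad \text{(forward model)}, \qquad \hat{u}_t = g(x_t, x_{t+1}^{desired}) \quad \text{(inverse model)} .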
The role of forward estimation in motor control
During execution of movement, information about the state of the limb is essential for accuracy;
subjects deprived of proprioceptive and cutaneous cues are severely impaired (Ghez, Gordon et al.
1990; Gordon, Ghilardi et al. 1995; Miall 1996). However, the available sensory signals may not
directly provide an adequate, timely state estimate.
Internal feedback to overcome time delays
Feedback control is robust, as the controller need not be precisely matched to the motor
apparatus – any error in the motor output will be sensed and corrected. Its principal
disadvantage is that feedback control is sensitive to intrinsic delays in the sensorimotor loop,
and, therefore, must be designed to avoid high gain at high frequencies. This reduces the speed
of its response. In motor control delays arise in sensory transduction, central processing, and in
the motor output. Sensory transduction latencies are most noticeable in the visual system
where the retina introduces a delay of 30‐60ms. Overall delays range from about 30ms for a spinal reflex loop to 200‐300ms for a visually guided response (Miall 1996). Motor
responses for kinesthetic targets (proprioceptively driven control) are known to start with
150ms delay (Flanders and Cordo 1989), whereas motor responses for visual targets (visually
driven control) start with 250ms delay (Georgopoulos, Kalaska et al. 1981), (Flanders and Cordo
1989). As fast arm movements can last less than 200ms, conventional feedback control is not
suitable. Pure feedforward control, on the other hand, implies detailed knowledge of the
response characteristics of the motor apparatus, an assumption that is realistic only for
structures with simple dynamic characteristics such as the eye. The mechanism the CNS most
likely relies on, therefore, is an approach conceptually similar to observer‐based control,
providing timely estimates of the outcome of motor commands which can be used for negative
feedback control. In other words, by including the forward estimation within an internal
negative feedback loop, it provides an internal feedback signal that is available much more
rapidly than the actual feedback signals resulting from the movement.
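The destabilizing effect of loop delay is easy to reproduce numerically. In the toy simulation below (a sketch under assumed parameters; the integrator plant, the gain of 10, and the 200ms delay are illustrative choices, not values measured in this work), high‐gain feedback acting on 200ms‐old observations oscillates and fails to settle, whereas the same gain applied to a forward‐model estimate of the current state converges:

dt, T = 0.01, 2.0
delay = int(0.2 / dt)          # assumed 200 ms sensory delay (20 time steps)
gain, target = 10.0, 1.0

def run(use_forward_model):
    x, x_hat = 0.0, 0.0          # plant state and internal estimate
    history = [0.0] * delay      # buffer modeling sensory transmission delay
    for _ in range(int(T / dt)):
        delayed_obs = history[-delay]
        state = x_hat if use_forward_model else delayed_obs
        u = gain * (target - state)  # high-gain proportional controller
        x += dt * u                  # simple integrator plant
        x_hat += dt * u              # forward model driven by the efference copy
        history.append(x)
    return x

print("forward-model control:", run(True))    # settles at the target
print("delayed feedback only:", run(False))   # oscillates, does not settle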
Cancellation of sensory reafference
From a behavioral viewpoint it is necessary to distinguish between signals from self‐generated
movement and changes in the environment. Although the afferent (from environment) and
reafferent (from self movement) signals have distinct causes, they are carried by the same
sensory channels. By generating an estimate of the sensory consequences of a motor
command, an internal forward model can be used to cancel reafferent sensory signals, and thus
allow the external environment‐related signals to be recovered.
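Schematically (our notation): if \hat{s}_{reaff} = h(f(x_t, u_t)) denotes the forward model's prediction of the sensory consequences of the outgoing command u_t, the externally caused component of the sensory stream can be recovered as s_{ext} \approx s_{sensed} - \hat{s}_{reaff}.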
Mental simulation of motor tasks
State prediction can be useful to extrapolate into the future, for example in a task where the
task dynamics are known. Mental practice and planning can lead to improved performance, and
it is suggested that mental rehearsal allows performance to be monitored and motor learning to
take place in the absence of real action, (Hall, Buckolz et al. 1992; Yue and Cole 1992). Based on
the relationship between the desired movement and its predicted outcome given by the model,
a controller could select between possible actions, or the controller could itself adapt. Hence,
internal prediction could also be involved in motor planning.
Distal supervised learning
The CNS faces the problem that the goal and outcome of a movement are often defined in a task‐
related reference frame. In order to perform the task, the CNS needs to translate task‐related
goals and errors into the appropriate intrinsic signals (motor commands, motor errors). The
forward relationship between motor signals and sensory signals can be captured by a forward
model. Jordan and Rumelhart (Jordan and Rumelhart 1992) have shown how such a forward
model can then be applied to estimate the motor errors during performance by
backpropagation of sensory errors through the model. They demonstrated that a forward
model could be used to transform errors between the desired and actual sensory outcome of a
movement into the corresponding error in the motor command, thereby providing appropriate
signals for motor learning.
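A minimal sketch of this idea in a linear setting may help (the dimensions, the least‐squares fit, and the learning rate are our own illustrative choices; Jordan and Rumelhart's formulation used nonlinear networks trained by backpropagation):

import numpy as np

rng = np.random.default_rng(0)
W_true = rng.normal(size=(2, 3))       # unknown plant: 3-D commands -> 2-D sensory outcomes
plant = lambda u: W_true @ u

# Step 1: learn a forward model from observed (command, outcome) pairs.
U = rng.normal(size=(3, 500))
W_fwd = plant(U) @ np.linalg.pinv(U)   # least-squares forward model

# Step 2: push a sensory (task-space) error back through the model to obtain
# a motor-space error; for a linear model, backpropagation is the transpose.
u = rng.normal(size=(3, 1))
y_desired = np.array([[0.5], [-0.2]])
sensory_err = y_desired - plant(u)
motor_err = W_fwd.T @ sensory_err
u = u + 0.05 * motor_err               # one supervised-learning step on the command
print(np.linalg.norm(y_desired - plant(u)) < np.linalg.norm(sensory_err))  # True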
Behavioral evidence for the existence of forward estimation
Evidence for internal forward estimation is mainly drawn from
1. the speed of human movements, which are too fast to be controlled by sensory feedback due
to the intrinsic delays of sensory processing (Pisella, Grea et al. 2000), (Miall 1996), (Wolpert and
Miall 1996), (Desmurget and Grafton 2000).
2. studies of the timing of coordinated movement of multiple effectors, such as in hand‐eye
coordination (Ariff, Donchin et al. 2002; Nanayakkara and Shadmehr 2003), (Vercher and
Gauthier 1988), (Vercher and Gauthier 1992), or in coordination of reach and grasp (Johansson
and Westling 1984), (Johansson and Cole 1992), (Flanagan and Wing 1997).
3. attempts to model perturbed reaches (Shadmehr and Mussa‐Ivaldi 1994), (Thoroughman
and Shadmehr 1999), (Thoroughman and Shadmehr 2000), (Wolpert, Ghahramani et al. 1995).
In summary, there is substantial human behavioral evidence supporting the idea that a form of
direct transformation scheme, one that makes use of forward models to estimate the state of
the limb, can conceivably support the rapid, online control of movement.
Key central nervous system components involved in motor control
Planning and execution of voluntary movements involve obtaining sensory information from the environment, evaluating its significance, forming a movement strategy, and generating the actual motor behavior. These tasks of considerable complexity require smooth interaction of multiple cortical areas, including the motor areas and posterior parietal cortex (PPC), with subcortical structures, the spinal cord, and the musculoskeletal system (Fig. 1‐3).
The spinal cord is on the lowest level of this hierarchical organization of neural processing. It
contains the neuronal circuits that mediate a variety of reflexes and rhythmic behavior such as
locomotion. Motor neurons, innervating muscle and thus triggering contractions, receive input
either directly from a primary sensory neuron or from interneurons and axons descending from
higher areas, conveying commands to coordinate motor actions from cortical areas to the spinal
cord.
At the mid‐brain level, the thalamus connects two other important structures involved in goal‐directed movement, the basal ganglia and the cerebellum, to the higher‐level cerebral cortex, and to the lower‐level spinal cord via the brain stem. The thalamus receives input from the cerebral cortex and projects it to the spinal cord; furthermore, it closes feedback loops between cerebral cortex and
basal ganglia, and between cerebral cortex and cerebellum. Visual, vestibular, and
somatosensory information is integrated and relayed to higher cortical structures via the
thalamus. The precise role of the cerebellum and basal ganglia in motor control is not clear; however, both are necessary for smooth movement and posture control.
At the highest, cortical level, the primary motor cortex (M1), several premotor areas (mainly PMd), and some somatosensory areas project to the spinal cord, modulating the action of motor neurons directly or indirectly. Premotor areas are interconnected with the posterior parietal
areas and exchange information during movement planning and execution.
The role of posterior parietal cortex in motor control
Parietal areas support highly cognitive functions related to attention, intention, and decision‐making.
Traditionally, posterior parietal cortex (PPC) was understood as an association area that
combined information from different sensory modalities to form a unified representation of
space (Mountcastle, Lynch et al. 1975; Hyvarinen 1982). This theory was based on lesion studies
which produce a broad range of deficits depending on the location and the extent of the
damage.

Figure 1‐3 Major CNS components involved in motor control (cerebral cortex, thalamus, basal ganglia, cerebellum, brain stem, spinal cord)

Extinction (the lack of awareness of objects on the affected side of the visual field
when there are competing stimuli toward the healthy side) is more common with lesions of the
superior parietal lobe, whereas profound neglect is more common with lesions of the inferior
parietal lobe (Milner 1997), (Vallar and Perani 1986; Mattingley, Husain et al. 1998). Later the
deficits resulting from parietal lesions were interpreted differently: it was found that deficits
observed following lesions of the PPC are consistent with the area playing an intermediate role
in sensorimotor integration. Patients with PPC lesions do not suffer from primary sensory or
motor deficits. However, numerous defects become apparent when they attempt to connect
perception with action, for instance during sensory‐guided movement. Such patients often
suffer from optic ataxia. They experience difficulty in estimating the location of stimuli in three‐
dimensional space and in reaching to targets, resulting in pronounced reaching errors (Balint 1909), (Rondot, de Recondo et al. 1977), (Perenin and Vighetto 1988). Their ability to make
corrective movements (Pisella, Grea et al. 2000), (Grea, Pisella et al. 2002), and to maintain an
estimate of the internal state of the arm (Wolpert, Goodbody et al. 1998) is limited. Patients
with PPC lesions also often show one or more of the apraxias, a class of deficits characterized by
the inability to plan movements (Geshwind and Damasio 1985). These defects can range from a
complete inability to follow verbal commands for simple movements to difficulty in performing
sequences of movements. Furthermore, PPC lesions have been shown to disrupt the mental
simulation of movement (Sirigu, Duhamel et al. 1996), an effect not seen with lesions of M1
(Sirigu, Cohen et al. 1995), cerebellum (Kagerer, Bracha et al. 1998) or basal ganglia (Dominey,
Decety et al. 1995). Another study showed that a parieto‐occipital lesion disrupted the
cancellation of sensory reafference (Haarmeier, Thier et al. 1997). The study of these
syndromes, together with more recent functional imaging investigations of individuals without
brain injury (Clower, Hoffman et al. 1996), (Ehrsson, Fagergren et al. 2003), transcranial
magnetic stimulation (TMS) in trajectory correction tasks (Desmurget, Epstein et al. 1999), force
field adaptation studies (Della‐Maggiore, Malfait et al. 2004), and electrophysiology studies in
monkeys (Mulliken, Musallam et al. 2008), (Duhamel, Colby et al. 1992), (Batista, Buneo et al.
1999), (Gnadt and Andersen 1988) has led to the current view that PPC, located at an
intermediate stage in the sensorimotor pathway, is heavily involved in the transformations from
sensory to motor representations during planning and execution of movement (Buneo and
Andersen 2006), so‐called sensorimotor integration. Directly related, these studies also support
the notion of PPC being involved in monitoring ongoing movement using forward estimation.
Strong connectivity between the frontal motor areas and PPC (Jones and Powell 1970; Jones,
Coulter et al. 1978; Strick and Kim 1978; Zarzecki, Strick et al. 1978; Petrides and Pandya 1984)
may provide the efference copy signals that relay replicas of recent movement commands from
downstream motor areas back to PPC with little or no delay (Kalaska, Caminiti et al. 1983),
whereas projections from visual and somatosensory areas (Jones and Powell 1970; Jones,
Coulter et al. 1978; Johnson, Ferraina et al. 1996) may provide sensory feedback information,
thus supporting the model of an observer, estimating the current state of the ongoing limb
movement, implemented in posterior parietal cortex. Despite the evidence from lesion studies,
there is ongoing debate about whether PPC merely integrates forward model information that
may originate from other brain areas such as the cerebellum (e.g. (Kawato 1999)) with
proprioceptive and visual feedback from the sensory areas to maintain an accurate
representation of the current arm state (see (Shadmehr and Krakauer 2008) for a review), or if
PPC performs the role of a state observer, thus performing both forward model computation
and integration with sensory feedback (Wolpert, Goodbody et al. 1998).
Parietal cortex is highly differentiated with many functional subdivisions. Several areas of PPC
are considered to be primarily visual in terms of sensory inputs (e.g. LIP), whereas others are
considered primarily somatosensory (dorsal area 5), or somatosensory and visual (e.g. VIP). The
spatial representation in PPC is, therefore, formed from a variety of modalities, including vision,
somatosensation, audition, and vestibular sensation. This representation can be used to
convert the sensory locations of stimuli into the appropriate motor coordinates required for
making directed movements. Zipser & Andersen (Zipser and Andersen 1988), for example, in an
artificial neural network (ANN) modeling study, showed that when eye and retinal position
signals are converted to a map of the visual field in head‐centered coordinates, the hidden units
that perform this transformation develop activity patterns very similar to those demonstrated in
the posterior parietal cortex.
Anatomy of posterior parietal cortex
Anatomically the posterior parietal cortex can be subdivided into the superior parietal lobe
(Brodmann areas 5 and 7) and the inferior parietal lobe (Brodmann areas 39 and 40), separated
by the intraparietal sulcus (IPS). The IPS and adjacent areas are essential for the guidance of limb and eye movements and, based on cytoarchitectural and functional differences, are further divided into medial (MIP), lateral (LIP), ventral (VIP), and anterior (AIP) areas (Fig. 1‐4).
MIP receives input from visual areas and projects to movement‐processing areas in the frontal
cortex (Marconi, Genovesio et al. 2001). Cells in area MIP have activity related to plans for limb
movements, but not eye movements (Snyder, Batista et al. 1997).
LIP receives visual input and has strong projections to the frontal eye fields. It is specialized for
saccadic eye movements (Barash, Bracewell et al. 1991). In instructed‐delay tasks LIP neurons
typically have stronger activity when monkeys are planning saccades rather than reaches,
indicating that a large component of this activity is related to saccade planning (Snyder, Batista
et al. 2000), and not attention or reach planning.
VIP lies in the floor of the intraparietal sulcus. This region appears to be highly multi‐modal,
even for sensory stimuli that are not task relevant (Duhamel, Colby et al. 1998; Bremmer,
Schlack et al. 2001). In particular, many neurons have somatosensory fields on the head, and respond when visual stimuli approach those head locations (Duhamel, Colby et al. 1998).

Figure 1‐4 Brain areas involved in motor control
A, in the human cerebral cortex; B, in the monkey’s posterior parietal cortex (PPC).
AIP is specialized in grasping (Sakata, Taira et al. 1995). Cells in this area respond to shapes of
objects and the configuration of the hand for grasping the objects.
PRR is an arm movement area of the PPC that consists of the MIP subdivision, and possibly parts
of the parieto‐occipital area, and area 5.
Anatomical studies showed that PPC receives strong input from primary somatosensory cortex
(SI) (Jones and Powell 1970; Jones, Coulter et al. 1978). Projections to the primary motor cortex,
premotor cortex, and supplementary motor cortex were demonstrated by anatomical and
electrophysiological methods (Jones and Powell 1970; Jones, Coulter et al. 1978; Strick and Kim
1978; Zarzecki, Strick et al. 1978; Petrides and Pandya 1984). However, the projections between
PPC and the precentral gyrus are all reciprocal (Johnson, Ferraina et al. 1993; Johnson, Ferraina
et al. 1996), so that PPC activity is in turn subject to modulation by corticocortical outputs from
precentral cortex. PPC also projects heavily to a number of subcortical motor structures,
including the basal ganglia and pontine cerebellar nuclei (Hyvarinen 1982). It is therefore ideally
positioned to influence motor processes throughout the motor system (Fig. 1‐5).
Figure 1‐5 Simplified network model of visually guided reaching and grasping.
The parietal area PRR is an arm movement area involved in reach planning and error correction during reach execution, whereas AIP performs similar functions to coordinate grasping, and LIP (not shown) to control saccadic eye movements. Cerebellar and basal ganglia loops are not shown for simplification.
Chapter 2 Decoding movement from PPC
Introduction
Restoring movement in patients suffering from high‐level spinal cord injury, later stages of
amyotrophic lateral sclerosis (ALS), and severe forms of cerebral palsy and brain stem stroke is a
challenge because easily accessible control signals such as EMG are not available. A small
patient sub‐population is completely locked‐in, i.e. they are quadriplegic, some of them unable
to communicate, but otherwise cognitively‐intact individuals. These patients frequently still
have substantial motor control of eye, facial, and tongue muscles, and usually some neck muscles, which can be used to operate an assistive device, but the operation of an assist device based on such
signals is rather cumbersome and unintuitive. Its use will also be limited to a highly specific task.
Recordings from cortical areas that are known to be involved in the generation of movement, on
the other hand, may provide a richer control signal, thus allowing patients to perform more
complex and diverse tasks more intuitively via prosthetically actuated limbs. If the gain in
performance is significant enough, the invasiveness of the cortical approach may be deemed
clinically appropriate. The cost/benefit bar for hemiplegia is higher than for quadriplegia, but
once the technology is established, this larger patient population may eventually benefit as
well. Our neuroprosthetic effort spans several related directions, including cognitive‐based
brain machine interfaces (BMIs) (Andersen, Burdick et al. 2004), decoding from local field
potentials (LFPs) (Scherberger, Jarvis et al. 2005), identification of alternative cognitive control
signals, electrophysiological recording advances, and development of new algorithms (see
(Nenadic, Rizzuto et al. 2007) for a review). A first series of closed‐loop real‐time decoding
experiments was performed by Meeker and colleagues (Meeker, Cao et al. 2002) in which single
cells were recorded in a delayed‐reach task from the parietal reach region (PRR), a sub‐region of
posterior parietal cortex (PPC). This study later was expanded to include activity of multiple
single cells recorded simultaneously with the animals positioning the cursor in more than two
locations (Musallam, Corneil et al. 2004). Recently Mulliken and colleagues demonstrated that
trajectory decodes from PRR are feasible (Mulliken, Musallam et al. 2008).
Here we present results from a study where we targeted PRR and area 5 to obtain the control
signals to operate a robotic limb. The overall goal of this series of experiments was to
demonstrate that the decoding of trajectories from PPC is feasible in a relatively unconstrained,
realistic and more natural task: In contrast to previous studies
1. the task was implemented in a three‐dimensional workspace,
2. the animal was required to perform point‐to‐point reaches to targets at pseudo‐random
locations,
3. the animal was not trained for gaze fixation, i.e. we did not disrupt natural hand‐eye
coordination patterns.
This simulates a realistic prosthetic‐limb‐control application where objects need to be
manipulated throughout the entire three‐dimensional workspace in presence of natural hand‐
eye coordination patterns. While in the study preceding this series of experiments central
gaze fixation was imposed (Mulliken, Musallam et al. 2008) in order to maintain a constant
visual reference frame and to rule out the possibility of decoding activity related to eye position
or saccades, here we demonstrate that PPC ensembles are sufficient for real‐time decoding of
movement with only a minimum of constraints (head fixation) imposed.
Materials and Methods
Two male rhesus monkeys (Macaca mulatta; 8–11kg) were used in this study. All experiments
were performed in compliance with the guidelines of the Caltech Institutional Animal Care and
Use Committee and the National Institutes of Health Guide for the Care and Use of Laboratory
Animals. Spike and LFP signals were recorded from awake, behaving animals using microwire
arrays (Musallam, Bak et al. 2007). After some initial behavioral training, each animal was first
implanted with a headpost for head fixation. Once the animal was fully behaviorally trained, the
electrode arrays were implanted stereotaxically using magnetic resonance imaging (MRI) to
guide the implantation. Four arrays with a total of 128 electrodes were placed in the medial
bank of the intraparietal sulcus (IPS), a portion of PRR and area 5 (also in PPC) (Fig. 2‐1). The
arrays were designed to equally access the deeper regions in IPS (2 arrays with electrode length 1.5‐7mm) and the area 5 surface regions (2 arrays with electrode length 1.2‐1.8mm).

Figure 2‐1 Anatomical location of targeted brain areas
A, PPC areas targeted for array implantation. B, coronal section showing the location of area 5 and MIP.

Spike‐
sorting was performed online using a 128 channel multi‐channel data acquisition system (MAP;
Plexon, Inc.) and later verified using the Plexon Offline Sorter. LFP activity was recorded as well.
Figure 2‐2 Closed loop feedback paradigm
All experiments were performed in a closed‐loop feedback environment in which the animal obtained immediate visual feedback about the consequences of its actions, both in hand‐control mode, where the animal had to steer a virtual cursor using hand movements, and in cortical‐control mode, where the animal had to control the cursor by modulating the neural ensemble activity. Initially, training data were collected with the cursor driven by hand movements and used to train the decode model; thereafter, the cursor was driven by cortical activity through the decode model.
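In outline, the paradigm of Figure 2‐2 amounts to a two‐phase pipeline. The sketch below is purely illustrative (the decode algorithms actually used, together with bin widths and model details, are described later in this chapter; the function names and the implied spike‐count binning are hypothetical):

import numpy as np

def fit_linear_decoder(spike_counts, hand_positions):
    # Phase 1: regress hand kinematics on binned ensemble activity
    # recorded while the cursor is driven by the hand.
    X = np.column_stack([spike_counts, np.ones(len(spike_counts))])  # bias term
    W, *_ = np.linalg.lstsq(X, hand_positions, rcond=None)
    return W

def brain_control_step(W, spike_bin):
    # Phase 2: closed loop. The cursor position shown to the animal is
    # computed from neural activity alone and fed back visually.
    return np.append(spike_bin, 1.0) @ W  # decoded 3-D cursor position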
All experiments were conducted in a virtual reality environment (VRE) (Hauschild, Davoodi et al.
2007) under closed‐loop, real‐time visual feedback conditions (Fig. 2‐2). The VRE employed here
was derived from a general‐purpose VRE, originally developed for human subjects (see Chapter 3
for a description of the original VRE).
Eye position was monitored using infrared video eye trackers (ETL200; ISCAN, Inc.) and hand
position was recorded using an infrared 3D motion capture system (OPTOTRAK 3020; Northern
Digital, Inc.).
3D shutter glasses (NuVision 60GX; MacNaughton, Inc.) and a 22” CRT monitor (P220f;
ViewSonic, Corp.) were used to enable 3D stereoscopic vision, and a mirror projected the image
from the overhead monitor into the monkey’s eyes (Fig. 2‐3).

Figure 2‐3 3D virtual reality setup for monkeys with limb and eye tracking hardware

To increase the degree of immersion, the monkey sat in a completely dark room during the experiment. The mirror
prevented the animals from observing their own arms and arm movements under any
experimental conditions; therefore, visual feedback about the movement was provided through
the virtual limb endpoint cursor only. The viewing distance (total distance eye‐CRT monitor)
was 580mm for monkey G, and 440mm for monkey R. Virtual objects were scaled accordingly to
appear at the same location and with identical size for both monkeys. To provide realistic visual
disparity (the main 3D cue in virtual environments) MSMS (Davoodi, Urata et al. 2007), the 3D
visualization client, considered the actual viewing distances and eye separations to compute the
correct offset between left and right eye visual channel.
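For reference, the underlying off‐axis projection geometry can be sketched as follows (an illustrative calculation, not MSMS source code; the 33mm eye separation is a placeholder, not a measured value):

def screen_x(point_x, point_z, eye_x, screen_z):
    # Project a point at lateral position point_x and depth point_z (eyes at z = 0)
    # onto a screen at depth screen_z, as seen by an eye displaced laterally by eye_x.
    return eye_x + (point_x - eye_x) * screen_z / point_z

ipd = 33.0        # assumed inter-pupillary distance in mm
screen_z = 440.0  # viewing distance used for monkey R (from the text)
xL = screen_x(0.0, 600.0, -ipd / 2, screen_z)  # left-eye image of a point 600 mm away
xR = screen_x(0.0, 600.0, +ipd / 2, screen_z)  # right-eye image of the same point
print("on-screen disparity (mm):", xL - xR)    # nonzero whenever point_z != screen_z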
Behavioral training
The animals were first trained to perform a basic reaching task. The animal sat in a primate
chair and viewed reach targets presented under computer control. The animal’s behavior was
shaped in daily training sessions, usually lasting 3 to 4 hours (corresponding to approximately
1000‐3000 trials per day). A fluid reward (water or juice, depending on the animal’s preferences)
was provided to the animal to reinforce correct behavior. The complexity of the reaching task
was increased gradually (accuracy required, number of targets to reach for in a rewarded
sequence). Eye movements were recorded but not constrained during training sessions. In case
of inappropriate reaches the trials were cancelled without reward, and thus the animal quickly
learned the correct behavior. Training took approximately 3 months. The initial training in a
two‐dimensional workspace was performed without head fixation, whereas the second phase of
training in a three‐dimensional workspace required surgical implantation of a head holder in
order to attach the 3D‐vision‐enabling shutter glasses to the monkey’s head.
Head holder implant surgery
The head holder (Gray Matter Research, MT) used to fixate the monkey’s head rigidly for the
duration of the experiment was machined from medical grade titanium and employed an
orthopedic rail system that allowed it to be mounted without the use of acrylic cement (Fig. 2‐
4). The head post (i.e. the portion that sits above the skin) is the element that interfaces with a
chair‐mounted extension to fixate the animal’s head. Below the post are a smooth round neck, where the skin interfaces, and four rails or legs. The rails of the head holder incorporate countersunk holes that fit flat‐head medical‐grade titanium bone screws, thereby reducing irritation and possible necrosis of the overlying skin by protruding screw heads. The entire assembly was
coated using a type‐2 Titanium anodization (Danco Metal Surfacing, CA). The resulting surface
finish facilitates osseointegration and tissue adhesion and thereby improved the stability and
longevity of the implant.
The head holder was placed as anterior as possible, with the front legs extending to near the
animal’s brow ridges to keep the skull surface near posterior parietal cortex unobstructed. The
head holder was surgically implanted under general anesthesia with the monkey’s head held in
a stereotaxic frame. An incision was made over the area where the head post was implanted.
The skin was blunt dissected away from the skull and the skull cleaned of tissue. The rails of the
head post were bent during surgery with orthopedic bending bars to allow them to lie flush with the curvature of the skull. Twelve titanium bone screws were used to secure the head post to the skull. The skin was sutured over the rails of the head post up to the neck so that only the upper restraint portion was exposed through the skin. Skin sutures of non‐absorbable material were placed to close the incision. Skin sutures were removed 7‐14 days post‐op.
Array implant surgery
Once the animal was fully trained to perform the reaching task in a three‐dimensional
workspace while head‐fixed, a set of microelectrode recording arrays was implanted. We
implanted four arrays, each containing 34 electrodes on a 4 mm X 2 mm footprint. The
locations of the arrays and of the craniotomy were determined based on anatomical MRI images
(Fig. 2‐5). Figure 2‐6 shows a schematic representation of the monkey’s left hemisphere with both the craniotomy and the planned implant coordinates.
Figure 2‐4 Gray Matter Research single asymmetric type titanium head holder (labeled parts: head post, neck at the skin interface, and rails with countersunk screw holes)
During the surgery the animal was placed on the ventilator to control CO2 and subsequently reduce the potential for brain swelling during the surgery. Blood gases were monitored. Two
incisions were made in a “T” shape to encompass the craniotomy and the location of an acrylic
island holding the electrical connectors. The craniotomy was created following the surgery
planning diagram (Fig. 2‐6). A section of dura was opened directly over the site of array
implantation to allow for easier penetration of the electrodes. The arrays were placed
according to the surgery planning diagram, however small adjustments were made to avoid
puncturing of local blood vessels (see Table 2‐1 for actual array locations).
Figure 2‐5 MRI images of monkey R
Left: axial slice, 34 mm superior; right: coronal slice, 3 mm posterior, showing the intraparietal sulcus (IPS), the medial intraparietal area (MIP), and the lateral intraparietal area (LIP). The intersection of the green axes indicates the origin of the interaural stereotaxic coordinate frame used.
The arrays were
lowered with a mechanical inserter at a rate that kept dimpling of the brain tissue and pressure
on the cortex to a minimum. After the arrays had been inserted, an artificial dura replacement
(DuraGen, Integra Life Sciences Corp.) was used to cover the area to protect the cortex.
Craniofacial mesh (Osteomed) was shaped to cover the craniotomy and secured with titanium
bone screws. Once the arrays were in place, with the array cables exiting the cranial vault and
running along the skull, covered by the titanium mesh, a small acrylic island was built to support
the array connectors. The acrylic island was designed to avoid a full size acrylic head cap that
would have covered the arrays and the craniotomy. It allowed us to isolate and secure the
connectors away from the craniotomy site. The acrylic island was located contra‐lateral to the
array implant location in order to maximize spatial separation and to reduce the risk of an
infection migrating from the wound margin to the array implantation site. Six to eight titanium
bone screws were used to secure the acrylic island to the skull. The array connectors were
housed under a removable protective plastic cap that was seated in the acrylic island. The
plastic cap could be removed in the lab for access to the connectors for daily recording
experiments. The skin was closed and sutured over the craniotomy and around the acrylic
island. Skin sutures were removed 7‐14 days post‐op.
Monkey   Array   Position      Angle
G        L1      9.0P/10.0L    18°
G        L2      12.5P/6.1L    4°
G        S1      1.6A/13.4L    surface normal
G        S2      4.0P/12L      surface normal
R        L1      9.2P/6.7L     5°
R        L2      4.6P/11.8L    30°
R        S1      0.8P/10.9L    surface normal
R        S2      2.1A/7.8L     surface normal
Table 2‐1 Array implant locations for monkeys G and R.
All measurements were taken using a stereotaxic manipulator. Position measurements are in mm; angle measurements are in degrees, measured in the coronal plane.
[Figure 2‐6 diagram: craniotomy corners at 8L/7A, 19L/7A, 19L/4P, 12L/14P, 2L/14P, and 2L/2P; planned array insertions at (5.5L, 7.5P, 5°), (5L, 10P, 5°), (9.5L, 7.5P, 37°), and (10.5L, 5P, 37°), each spanning electrode depths of 1.5‐7 mm.]
Figure 2‐6 Surgery plan for monkey R
The polygon indicates the size and location of the craniotomy with coordinates indicating the
corners. The dark filled circles represent the location of IPS where visible as determined
from MRI data to facilitate orientation during surgery. The light filled circles represent the
location of the IPS where covered by other brain structures. Arrays Long 1 (L1), Long 2 (L2)
are shown with coordinates for position (in mm) and insertion angle. The insertion angle was
determined based on MRI data to target the deeper structures of MIP. Arrays Short 1 (S1) and Short 2 (S2) were inserted normal to the brain surface, anterior to array L2 along the IPS.
Recording arrays
The floating microelectrode arrays (MicroProbe, Inc.) used were designed to collect high‐density
neural data and to ‘float’ with respect to the skull through the incorporation of a flexible cable
between the array and the skull‐mounted connector (Fig. 2‐7). Each array had 32 recording
electrodes, and two independent reference electrodes. During the microelectrode
manufacturing process (Loeb, Peck et al. 1995) parylene‐coated Pt/Ir (70/30) microelectrodes
made from 0.003” wire were attached to a ceramic substrate. Gold wire (0.001” diameter, 99.99% purity) was used to connect the array electrodes to the connectors feeding the signals to the external amplifier. The 3‐4 µm Parylene C coating on the electrodes was ablated at the tip of
each electrode using an Excimer laser to expose the metal surface for recording. Table 2‐2
specifies length and exposure of the electrodes used in short type and long type arrays.
Figure 2‐7 Floating microelectrode arrays
Array assembly diagram (left), long type electrode array before implantation (right).
The arrays for monkey R were tested before implantation. Impedance measurements were
taken at 1kHz in physiological 0.9% saline. Electrodes with impedances > 4 MΩ were classified as
defective. The arrays for monkey R contained 4 defective electrodes, the remaining 124
electrodes had impedances of 254 kΩ ± 106 kΩ (MEAN ± SD). In vivo measurements two weeks
post surgery indicated that 23 electrodes had impedances > 4 MΩ. It is possible that connecting
wires to the affected electrodes broke during surgery. The remaining 105 electrodes had
impedances of 929 kΩ ± 769 kΩ. The higher mean and standard deviation of the impedances
may have been a result of slight surface oxidation and beginning encapsulation due to the
body’s immune response and/or varying conductivity of the surrounding tissue. No pre‐implantation impedance measurements were available for the arrays for monkey G. The arrays implanted in monkey G contained 63 defective electrodes (impedance > 4 MΩ, based on in vivo impedance measurements two weeks post surgery); the remaining 65 electrodes had impedances of 1.66 MΩ ± 876 kΩ. The large number of defective electrodes was a result of array manufacturing issues that have been resolved in later array generations. The higher impedance (compared to the impedance measured in monkey R) resulted from the smaller exposed electrode tip (Table 2‐2).

Array type   # of electrodes   Electrode use   Electrode length   Tip exposure
Short        2                 reference       6 mm               6 mm (bare wire)
Short        2                 ground          6 mm               6 mm (bare wire)
Short        6                 recording       1.2 mm             20/25 µm
Short        16                recording       1.5 mm             20/25 µm
Short        10                recording       1.8 mm             20/25 µm
Long         2                 reference       6 mm               6 mm (bare wire)
Long         4                 recording       1.5 mm             20/25 µm
Long         4                 recording       2.3 mm             20/25 µm
Long         4                 recording       3.1 mm             20/25 µm
Long         4                 recording       3.9 mm             20/25 µm
Long         4                 recording       4.7 mm             20/25 µm
Long         4                 recording       5.5 mm             20/25 µm
Long         4                 recording       6.3 mm             20/25 µm
Long         4                 recording       7.1 mm             20/25 µm
Table 2‐2 Electrode specifications for short type and long type arrays.
Two short type and two long type arrays were implanted. Tip exposure was 20 µm in animal G (aiming for 500 kΩ electrode impedance); tip exposure was 25 µm in animal R (aiming for 250 kΩ electrode impedance).
Signal Processing, Spike Sorting
The raw, differentially recorded neural signals from all electrodes were band‐pass filtered (154 Hz – 8.8 kHz), hardware‐amplified (x1000), and analog‐to‐digital converted (40 kHz sampling rate) using signal processing hardware (Multichannel Acquisition Processor; Plexon, Inc.) for real‐time spike sorting and storage to hard disk.
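For illustration only, a software equivalent of this spike‐band filtering stage might be sketched as follows (the Butterworth type and 4th order are assumptions; the actual front end was analog hardware):

import numpy as np
from scipy.signal import butter, filtfilt

FS = 40000.0                 # digitization rate [Hz]
LOW, HIGH = 154.0, 8800.0    # spike-band corner frequencies [Hz], as in the text

# 4th-order Butterworth band-pass (filter type/order assumed for illustration).
b, a = butter(4, [LOW / (FS / 2), HIGH / (FS / 2)], btype="band")

def spike_band(raw_trace):
    # Zero-phase filter a raw voltage trace to isolate the spike band.
    return filtfilt(b, a, raw_trace)

# Example: filter one second of synthetic broadband noise.
filtered = spike_band(np.random.randn(int(FS)))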
A broad range of spike detection and sorting algorithms ranging from basic thresholding
(amplitude discrimination) to sophisticated, computationally demanding algorithms exists
(Lewicki 1998; Quiroga, Nadasdy et al. 2004). Although better performance with superior
discrimination between multiple units is generally achievable using complex algorithms, we
chose to rely on the window discriminator approach implemented in the Plexon recording
software, which assigns the spikes crossing two windows to the same neuron. The window
discriminator provides reasonable spike sorting performance and can still be implemented in
real‐time for closed‐loop experiments, but requires manual adjustment of the windows by the
user. Neurons were manually classified as single units if waveform and signal to noise ratio
allowed isolation using the window discriminator approach (e.g. Fig. 2‐8), whereas other units that could not be isolated, due either to poor signal‐to‐noise ratio or to a waveform very similar to that of other neurons on the same electrode, were manually classified as multi‐units. Due to the difficulty in isolating multi‐unit activity, it is possible that a substantial amount of noise was recorded together with the multi‐unit activity, even though electrodes lacking any signs of neural activity were generally excluded from recording.
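The two‐window rule can be sketched as follows (a minimal illustration with hypothetical window coordinates; in practice the windows were set manually in the Plexon software):

import numpy as np

def passes_window(waveform, t_idx, v_min, v_max):
    # A waveform 'crosses' a window if its voltage at sample t_idx falls
    # inside the window's voltage range.
    return v_min <= waveform[t_idx] <= v_max

def assign_unit(waveform, windows):
    # Assign a threshold-crossing waveform to a unit if it passes through
    # BOTH time-voltage windows defined for that unit.
    for unit, (w1, w2) in windows.items():
        if passes_window(waveform, *w1) and passes_window(waveform, *w2):
            return unit
    return "multi/noise"  # unsortable events treated as multi-unit activity

# Hypothetical windows: (sample index, min voltage, max voltage) pairs.
windows = {"unit1": ((10, -180.0, -100.0), (22, 40.0, 120.0))}
spike = np.zeros(32); spike[10], spike[22] = -150.0, 80.0
print(assign_unit(spike, windows))  # -> "unit1"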
Offline principal‐component‐analysis‐based spike sorting (Abeles and Goldstein 1977) was used to verify the accuracy of the window‐discriminator sorting.
Figure 2‐8 Well isolated neurons recorded on a single electrode (waveforms of Single Unit 1, Single Unit 2, and noise; voltage [µV] versus time [ms])
Behavioral Task
Both monkeys were trained to perform a 3D reaction‐time task in which they were required to
guide a cursor in a 3D display to an illuminated reach target. The monkey controlled the cursor either using movements of his right hand via real‐time motion tracking or using cortical neural signals processed by a decoding algorithm trained to predict movement intention in real time.
A reaching sequence was initiated when the first reach target (green) appeared at a random 3D
location. The monkey had five seconds to move his cursor (white) to the green target. After
successful target acquisition the target extinguished and a new target appeared at a different,
random 3D location, representing the next reach target. The monkey was rewarded with water
after having completed a sequence of reaches to eight targets. The monkey was free to choose
his preferred initial hand position before a sequence started, and he was allowed to take breaks
between sequences.
Workspace Considerations
The virtual workspace scaled 1:1 with the monkey’s physical workspace, and the location was
adjusted to match the monkey’s limb length. Both animals had to extend their arm close to the
maximum in order to reach for the distal targets, whereas maximal flexion (limited by
mechanical setup constraints) was required to reach for the proximal targets.
One general limitation of virtual workspaces presented to subjects using 3D stereoscopic visualization technology is that, depending on the configuration used, they put strain on the oculomotor system and require adaptation. In a non‐virtual (real) workspace, if an observer wishes to change fixation from a distant object to one near (or vice versa), the retinal image of the target
object is initially defocused (blur), and there is a fixation error between the image of the target
and the fovea (disparity). In order to bring clarity to the retinal image the eye must focus
(accommodate), and to overcome disparity the eyes must change vergence angle to maintain
fixation within corresponding retinal areas (if noncorresponding points of the retina are
stimulated then double vision will result). Normally the accommodation and vergence
responses are neurally cross‐linked, i.e. accommodation produces vergence eye movements and
vice versa. In a stereoscopic virtual reality environment, however, subjects are required to
maintain a constant level of accommodation (fixation on physical surface of the display/screen
plane), whereas the vergence angle changes when fixation changes between near and far
objects (Wann and Mon‐Williams 2002). To reduce the conflict between continuous fixation on
the screen plane and changing binocular disparity depending on the depth location of the area
of interest, intermediate‐depth targets were located in the screen plane (no conflict), whereas proximal and distal targets were configured to appear in front of or behind the screen plane, thus creating only a balanced and mild conflict between fixation depth and disparity (Fig. 2‐9). To further facilitate 3D orientation, objects scaled with depth, and transparency allowed occluded or intersecting objects to be perceived.
Reach targets were presented as semitransparent spheres (d=32mm), the monkey’s cursor was
represented by a solid white sphere (d=31mm). A reach was detected as successful if the animal
kept the center of the hand cursor within < 20 mm of the center of the target for a minimum of
300 ms. These values were chosen as a tradeoff between avoiding exhausting the animal by a
task requiring extreme accuracy and, on the other hand, requiring the animal to perform well‐planned and coordinated reaches.
Figure 2‐9 Virtual 3D workspace
The workspace consisted of a 3x3x3 grid of equally spaced potential target locations (targets in the screen surface plane: zero disparity, fixation plane; targets behind the screen surface plane: negative binocular disparity; targets in front of the screen surface plane: positive binocular disparity; 50 mm scale bar). The first target was presented at a randomly chosen location; the following target was chosen from the remaining 26 targets. The process was repeated until all 27 targets had been presented. This approach guaranteed that targets, although presented in random order, were shown to the animal with identical frequency across a daily recording session.
Offline decode algorithms
We recorded from a maximum of 128 electrodes simultaneously. Channels known to be connected to defective electrodes were disabled to reduce the amount of data, as were channels connected to intact electrodes that recorded noise only. The remaining channels recorded either one or sometimes multiple well‐isolated single units, or inseparable multi‐unit activity. All units were sorted using a window discriminator algorithm.
Simultaneously the behavior (limb movement, eye movement), and events (reward, target
location) were recorded. To convert the raw spike events to firing rates, all channels of raw
neural data were processed using 90ms non‐overlapping bins. All channels with average firing
rates below 1 Hz were eliminated. The firing rates for the remaining channels were then
standardized by first subtracting the neurons’ mean firing rates and then dividing by their
standard deviations. Using an ensemble of standardized firing rates along with concurrently
recorded behavioral data from the training segment where the animal was using hand
movements to control the cursor, we trained a mathematical decoding model to reconstruct the
monkeys’ trajectories offline. We implemented a linear ridge regression (Hoerl and Kennard
1970) decode algorithm, predicting the instantaneous 3D cursor position, and a Kalman filter
(Kalman 1960), predicting the 3D cursor state (position, velocity, and acceleration). All trials
from the training segment were shuffled. 80% of the shuffled training data was used to train
the model and 20% was used to validate the model (Mulliken, Musallam et al. 2008). The shuffling, training, and validation procedure was repeated 100 times to obtain a mean ± standard deviation offline reconstruction performance.
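The preprocessing described above, binning, elimination of low‐rate channels, standardization, and the shuffled 80/20 split, can be summarized in a short sketch (array shapes and names are illustrative assumptions):

import numpy as np

BIN = 0.090  # 90 ms non-overlapping bins

def bin_and_standardize(spike_times, duration):
    # spike_times: list of 1D arrays of spike times [s], one per channel.
    edges = np.arange(0.0, duration + BIN, BIN)
    rates = np.stack([np.histogram(st, edges)[0] / BIN for st in spike_times])
    rates = rates[rates.mean(axis=1) >= 1.0]   # drop channels firing < 1 Hz
    mu = rates.mean(axis=1, keepdims=True)
    sd = rates.std(axis=1, keepdims=True)
    return (rates - mu) / np.maximum(sd, 1e-9)  # z-scored firing rates

def shuffled_split(n_trials, train_frac=0.8, rng=np.random.default_rng(0)):
    # Shuffle trials and split 80/20 into training and validation sets.
    order = rng.permutation(n_trials)
    cut = int(train_frac * n_trials)
    return order[:cut], order[cut:]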
We quantify the offline reconstruction performance of the decoding algorithms using a statistical criterion, the coefficient of determination, R². R² values for the X, Y, and Z directions are determined independently and can be averaged to give a single R² value for position.
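Concretely, with the conventional definition R² = 1 − SS_res/SS_tot applied independently per axis, the combined position score can be computed as in this minimal sketch:

import numpy as np

def r2_per_axis(x_true, x_pred):
    # x_true, x_pred: (n_samples, 3) arrays of actual and decoded positions.
    ss_res = ((x_true - x_pred) ** 2).sum(axis=0)
    ss_tot = ((x_true - x_true.mean(axis=0)) ** 2).sum(axis=0)
    return 1.0 - ss_res / ss_tot   # R^2 for X, Y, Z independently

def r2_combined(x_true, x_pred):
    # Average the per-axis values to obtain a single position R^2.
    return r2_per_axis(x_true, x_pred).mean()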
Ridge regression
We first constructed a linear model of the instantaneous 3D cursor position as a function of the standardized firing rates r(t) of N simultaneously recorded neural units. Firing rates were computed for non‐overlapping 90 ms bins. Each sample of the behavioral state vector, x(t), was modeled as a function of the vector of ensemble firing rates measured for four successive time bins. Only the four causal bins immediately preceding the movement were used, i.e. the firing rates used in conjunction with the behavioral state x(t) were centered at {r(t‐315 ms), r(t‐225 ms), r(t‐135 ms), r(t‐45 ms)}. An estimate of the 3D cursor position, $\hat{x}(k)$, was constructed as a linear combination of the ensemble of firing rates, r, sampled at four leading binning intervals according to

$$\hat{x}(k) = \sum_{j=0}^{N} \beta_j r_j(k), \qquad (1)$$

where k denotes the discretized 90 ms time steps, and N represents the total number of neural inputs, including four successive bins per channel.
The least squares solution for β yields the minimum‐variance unbiased estimator. However, a zero‐bias estimator often suffers from high mean squared error (MSE) due to a large variance component of the error. Therefore, instead of relying on a basic linear regression, we used a ridge regression, which penalized the average size of the coefficients in β in order to reduce the variance component of the error while allowing a small increase in the bias (Hoerl and Kennard 1970). The ridge regression added a complexity term to the optimization problem, which penalized coefficients for having large weights (Hastie, Tibshirani et al. 2001), such that

$$\hat{\beta}^{\mathrm{ridge}} = \arg\min_{\beta} \left\{ \underbrace{\sum_{k=1}^{M} \Big( x(k) - \sum_{j=0}^{N} \beta_j r_j(k) \Big)^2}_{\text{least squares term}} + \underbrace{\lambda \sum_{j=0}^{N} \beta_j^2}_{\text{complexity term}} \right\}, \qquad (2)$$

where M is the number of training samples used in a session. The regularization parameter, λ, was chosen iteratively to minimize the MSE; a gradient descent algorithm was used to find the optimal λ. For a given value of λ, a unique solution for the ridge coefficients can be expressed in matrix notation, when estimating the 3D cursor position, as

$$\hat{\beta}^{\mathrm{ridge}} = (R^T R + \lambda I)^{-1} R^T X, \qquad (3)$$

where $R \in \mathbb{R}^{M \times N}$ is the standardized firing rate matrix sampled at the four lead/lag time steps, $X \in \mathbb{R}^{M \times 3}$ is the mean‐subtracted 3D position matrix, and $\beta \in \mathbb{R}^{N \times 3}$ are the model coefficients unique to a particular λ. For λ = 0, the ridge filter is equivalent to the least squares solution.
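The closed‐form solution (3), combined with a search for λ, can be sketched as follows (a minimal illustration in which a simple validation‐error grid search stands in for the gradient descent on λ used in the actual analysis):

import numpy as np

def fit_ridge(R, X, lam):
    # Closed-form ridge solution, eq. (3): beta = (R'R + lam*I)^-1 R'X.
    n = R.shape[1]
    return np.linalg.solve(R.T @ R + lam * np.eye(n), R.T @ X)

def choose_lambda(R_tr, X_tr, R_val, X_val, lams):
    # Pick the regularizer minimizing validation MSE (grid search standing
    # in for the gradient-descent search on lambda).
    best = min(lams, key=lambda lam:
               np.mean((X_val - R_val @ fit_ridge(R_tr, X_tr, lam)) ** 2))
    return best, fit_ridge(R_tr, X_tr, best)

# Rows of R: standardized rates of the N units at the four causal bins
# preceding each 90 ms step; rows of X: mean-subtracted 3D cursor positions.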
Kalman filter
In addition to ridge regression we implemented a discrete Kalman filter (Kalman 1960). The use of the Kalman filter for decoding continuous trajectories from M1 ensembles was originally proposed by Wu and colleagues (Wu, Black et al. 2002).
Unlike the feedforward ridge filter, our Kalman filter implementation estimated the current state of the movement, including velocity and acceleration,

$$x_k = [x, y, z, \dot{x}, \dot{y}, \dot{z}, \ddot{x}, \ddot{y}, \ddot{z}]^T. \qquad (5)$$

It provided a recursive estimate of hand kinematics from the firing rates of non‐overlapping 90 ms bins using two governing equations: an observation equation that modeled the firing rates (observation) as a function of the state of the cursor, $x_k$, and a process equation that propagated the state of the cursor forward in time as a function of only the most recent state, $x_{k-1}$ (Welch and Bishop 2006). Both models were assumed to be linear stochastic functions with additive Gaussian white noise:

$$x_k = A_{k-1} x_{k-1} + B u_{k-1} + w_{k-1} \quad \text{(process equation)}, \qquad (6)$$

$$r_k = H_k x_k + v_k \quad \text{(observation equation)}. \qquad (7)$$
The control term, u, was assumed to be unidentified and was therefore set to zero in our model,
excluding B from the process model.
One simplifying assumption was that the process noise ($w \in \mathbb{R}^{9 \times 1}$), observation noise ($v \in \mathbb{R}^{N \times 1}$), transition matrix ($A \in \mathbb{R}^{9 \times 9}$), and observation matrix ($H \in \mathbb{R}^{N \times 9}$) were fixed in time (Wu, Black et al. 2002), thus simplifying (6) and (7) to

$$x_k = A x_{k-1} + w, \qquad (8)$$

$$r_k = H x_k + v, \qquad (9)$$

where A and H were solved for using least squares regression.
To estimate the state of the cursor at each time step k, the output of the process model (8), $\hat{x}_k^-$ (i.e. the a priori estimate), was linearly combined with the difference between the actual neural measurement and the output of the observation model (9) (i.e. the neural innovation) using an optimal scaling factor, the Kalman gain, $K_k$, to produce an a posteriori estimate of the state of the cursor,

$$\hat{x}_k = \hat{x}_k^- + K_k \left( r_k - H \hat{x}_k^- \right). \qquad (11)$$

The entire two‐step discrete estimation process, consisting of an a priori time update and an a posteriori measurement update, was iterated recursively to generate an estimate of the state of the cursor at each time step in the trajectory.
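A minimal sketch of this recursion follows; the matrices A and H and the noise covariances W and Q are assumed to have been fit by least squares on training data, and P denotes the state‐estimate covariance:

import numpy as np

def kalman_decode(rates, A, H, W, Q, x0, P0):
    # rates: (T, N) binned firing rates; A: (9,9) state transition matrix;
    # H: (N,9) observation matrix; W, Q: process/observation noise
    # covariances estimated from training residuals.
    x, P, states = x0, P0, []
    for r in rates:
        # A priori time update (process model, eq. 8).
        x = A @ x
        P = A @ P @ A.T + W
        # Kalman gain, then a posteriori measurement update (eq. 11).
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Q)
        x = x + K @ (r - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        states.append(x.copy())
    return np.array(states)  # (T, 9): position, velocity, acceleration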
Model assessment
We quantified the offline reconstruction performance of each decoding algorithm using a single statistical criterion, the coefficient of determination, R². R² values for the X, Y, and Z directions were averaged to give a single R² value for position, velocity, and acceleration. We also constructed neuron‐dropping curves for each session to quantify how well trajectories could be reconstructed using PPC ensembles of different sizes. To assess the performance of a particular ensemble size q, we randomly selected and removed N − q neural units from the original ensemble, which contained N total neural units. 80% of the data were used for training and 20% for validation using the remaining q units to obtain an R² value. The random selection, training, and validation procedure was repeated 100 times to obtain an average R² for each ensemble size q.
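The neuron‐dropping procedure can be sketched as follows (train_and_score is a hypothetical helper standing in for the full train/validate cycle of whichever decoder is being assessed):

import numpy as np

def neuron_dropping_curve(rates, behavior, train_and_score,
                          n_repeats=100, rng=np.random.default_rng(0)):
    # rates: (N_units, T) firing rates; behavior: (T, 3) positions. For each
    # ensemble size q, randomly keep q of the N units and score the
    # remaining ensemble, repeating n_repeats times.
    N = rates.shape[0]
    curve = {}
    for q in range(1, N + 1):
        scores = [train_and_score(rates[rng.choice(N, q, replace=False)],
                                  behavior)
                  for _ in range(n_repeats)]
        curve[q] = (np.mean(scores), np.std(scores))  # mean/SD of R^2
    return curve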
Closed‐loop brain‐control decode
To assess behavioral performance during brain control, we computed the success rate as a function of trial number. If the monkey managed to acquire a target within 10 seconds and held the cursor within the target zone for at least 180 ms, a trial was counted as successful; otherwise the trial was marked as unsuccessful. To obtain a smoothed success rate, the trial outcome point process was convolved with a Gaussian kernel (SD = 5 trials). Success rates obtained during the brain control sessions were compared with chance level performance. To calculate the chance level for success rate, we randomly shuffled firing rate bin samples for a given neural unit recorded during brain control, effectively preserving each neural unit’s mean firing rate but breaking its temporal structure. Chance trajectories were then generated by simulation, iteratively applying the actual ridge or Kalman filter to the shuffled ensemble of firing rates to generate a series of pseudo‐cursor positions. Each chance trajectory was allowed 10 seconds for the cursor to reach the target zone and hold it for at least 180 ms, the same criteria used during actual brain control trials. This procedure was repeated hundreds of times to obtain a distribution of chance performances for each session, from which a mean and SD were derived.
Temporal tuning analysis
To understand the temporal tuning characteristics of PPC neurons, we performed decoding analyses separately for each recorded unit. First, to obtain instantaneous firing rates for the neuron under consideration, we convolved the point process of spike time events with a Gaussian kernel (σ = 30 ms). Second, to determine instantaneous tuning, we performed a linear regression analysis between firing rate and the movement parameter to be analyzed (i.e. hand position, velocity, or acceleration in the x, y, or z direction). The resulting R² performance thus described the correlation between a movement parameter and the neural activity of a single cell when they were temporally aligned (as recorded during the experimental session). Third, we repeated the same analysis after introducing a temporal offset between neural activity and motor behavior. The underlying idea of this approach was to determine the exact temporal tuning of the cell: a proprioceptively driven neuron, for example, will not show strong tuning when its neural activity is aligned with the motor behavior, because latencies cause information about the movement to become available only with significant delays. However, strong correlations will be detected once the cell is realigned with the movement. We called the offset necessary to temporally realign a cell to obtain maximum tuning the “Optimal Lag Time” (OLT). We temporally shifted all cells from τ = ‐300 ms (to capture sensory activity, i.e. neural activity lagging the movement) to τ = +600 ms (to capture motor command activity, i.e. neural activity leading the movement) in 10 ms increments to obtain a high‐resolution map of OLTs for all neurons. The analysis was performed using 80% of the available data samples for training (linear regression) and the remaining 20% for testing. The analysis was repeated 500 times with randomly chosen samples to obtain the mean and standard deviation of the temporal tuning curves. To determine whether the obtained tuning was significant, we compared it with chance tuning. Only tuning separated from chance tuning by at least 2σ was considered significant.
Analysis of adaptation to temporal perturbations
We performed a set of recording sessions where the animal’s virtual cursor was perturbed by
artificially superimposed dynamics. Although cursor movement was coupled to hand
movement, hand movement did not translate to identically timed and scaled cursor movement;
instead the dynamics model of a prosthetic limb introduced a small lag and altered the shape of
the 3D trajectory, essentially performing low‐pass filtering (Fig. 2‐10).
Figure 2‐10 Artificially imposed dynamics alter the trajectory of the virtual 3D cursor
The dynamics are imposed in real‐time using a computationally accurate model of the DARPA “Revolutionizing Prosthetics 2009” Proto 1 prosthetic limb (panels show x‐, y‐, and z‐position of target, hand, and dynamic cursor over time).
We monitored changes in neuron tuning, potentially indicating adaptation to the imposed perturbation, by performing the temporal tuning analysis described in the previous section. To recognize tuning changes occurring during a single recording session, however, we performed the analysis in multiple steps: for the first analysis cycle only samples from reach sequences 1‐10 were used (one sequence consisted of reaches to 8 targets); for the second analysis cycle the samples from sequences 2‐11 were used. The analysis was repeated until the last sequence was reached. Although changes occurring within the 10 sequences analyzed in a single analysis cycle cannot be captured using this approach, it allowed us to detect trends of tuning changes occurring across an entire recording session.
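The sliding‐window analysis can be sketched on top of the lag sweep above (sequence_samples is a hypothetical list mapping each reach sequence to its sample indices; optimal_lag_time refers to the previous sketch):

import numpy as np

def tuning_trend(rate, behavior, sequence_samples, window=10):
    # Slide a 10-sequence window across the session and recompute the
    # optimal lag time / peak R^2 for each window position to expose slow
    # tuning changes across the recording session.
    trend = []
    for start in range(len(sequence_samples) - window + 1):
        idx = np.concatenate(sequence_samples[start:start + window])
        trend.append(optimal_lag_time(rate[idx], behavior[idx]))
    return trend  # list of (OLT, peak R^2) per window position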
EMG Recordings
The main purpose of the brain‐control closed‐loop decode was to demonstrate that PPC may
provide signals to control prosthetic limbs in paralyzed or amputee patients. As a control, to make sure that we were decoding neural activity related to movement intention, the possibility that the decoding algorithm relied on sensory feedback from the monkey’s limb generating neural activity in PPC had to be ruled out.
immobilizing the limb during the brain‐control decode session, and (2) monitoring the
electromyographic (EMG) activity of the muscle groups typically involved in reaching
movements. We recorded EMG activity from various limb muscles involved in reaching while
the monkey performed the task in the working setup. EMG recordings were made via small percutaneous hook electrodes commercially available in single‐use, pre‐packaged, sterilized form (paired hook‐wire electrode, 30 mm x 27 ga; VIASYS Healthcare, Inc.).
Percutaneous EMG is a standard means to acutely record EMG in both humans and monkeys,
and involves injecting small hook electrodes into the muscle belly via a 27 gauge needle. Prior
to insertion, the area of interest was scrubbed clean with an alcohol swab. The small size of the
needle allowed for minimum discomfort to the animal. After insertion, the needle was
withdrawn over the wire, leaving the electrodes in the muscle bed. The wires were then
connected to a differential amplifier (EMG electrodes, wide spaced pads with integrated ground;
B&L engineering) to record the EMG signal. Following injection, the leads from the electrode
were stabilized by taping them to the monkey’s shoulder. The monkey’s other arm was
prevented from accessing the electrodes via physical barriers attached to the primate chair.
Recordings with four EMG electrodes were taken simultaneously from four muscles in a single
experimental session, sampling from the deltoid, rhomboideus, biceps and triceps muscles. To
verify proper placement and function of the EMG electrodes, recordings were taken prior to the
brain control session and after the brain control session for a series of hand‐controlled reaching
sequences where the monkey was required to move his limb. These sequences were analyzed
for characteristic EMG features indicating muscle activity.
Latency measurements
Visuo‐motor tasks are sensitive to delays. Although our closed‐loop system relied on a real‐time operating system to process all data, there are inevitably some delays in the hardware and software: the Optotrak motion tracking system tracking the monkey’s limb movements was found to introduce a consistent delay of 17 ms. This delay was compensated for when realigning the motion tracking data with the neural recordings. The 3D visualization showed a total delay (rendering + CRT image buildup) of 15‐20 ms, which was taken into account for cue onset removal.
Results
Restoring movement in paralyzed or amputee patients is a challenge because suitable control signals are difficult to obtain, interpret, and record continuously over multiple years, particularly under realistic conditions representative of tasks of daily living, where artificially imposed constraints, such as the requirement for central gaze fixation or instructed movement withholding and execution, are not acceptable. In our study we were able to show that we can extract signals robustly representing hand movement from posterior parietal cortex under an only minimally constrained experimental paradigm. The algorithms employed were simple enough to be suitable for later implementation on a low‐power embedded CPU integrated in a prosthetic limb, did not require any sophisticated spike sorting, and allowed the animal to steadily improve task performance in brain control mode through behavioral adjustments and gradually improving tuning of the neural ensemble. We also report some progress in recording electrode array development: the most recent array generation, used in the second animal, provided satisfactory and stable recording performance for more than one year.
Offline decoding
The objective of offline decoding was to identify decoding models that estimated movement
parameters at multiple time steps in a trajectory from parallel observations of PPC neural
activity. Training sets used were constructed from ≈ 220 reaches per session. We analyzed a
total of 19 different recording sessions offline and found that, although the neural activity of individual neurons generally appeared to be only weakly correlated with movement (e.g. Fig. 2‐11), we could reconstruct reach trajectories with very good accuracy using ensemble activity from PPC, accounting for 60% of the variance in the 3D cursor position during a single session (R² = 0.59, combined R² for the x, y, and z degrees of freedom (DoF); x: 0.61, y: 0.68, z: 0.45). The neural activity recorded included a combination of single units and multi‐units (30 ± 4.09 single units, 37.21 ± 3.81 multi‐units, mean ± SD). Note that the R² tuning for the z‐DoF (horizontal) was noticeably weaker than the tuning for the x‐ and y‐DoF (also see Figs. 2‐13 – 2‐15). Even though we occasionally had recording sessions with stronger z‐DoF tuning, the decoding results in the z‐direction in both animals were generally slightly weaker than in the x‐ and y‐directions. Although this has not yet been verified, we hypothesize that this may be related to monkeys being significantly more ambidextrous than humans: because they tend to perform reaches to objects in the right half of their workspace with their right hand, and reaches to objects in the left half using their left hand, it is possible that the spatial representation in the left hemisphere of PPC, known to coordinate movement of the animal’s right limb, did not properly represent movements to the other (left) half of the workspace, thus resulting in a weaker decode in the z‐direction.
Position decoding performance: model comparisons
We evaluated the performance of two decoding algorithms, the ridge regression and the
Kalman filter, in comparison to a basic linear regression decode algorithm. In general, we found that the Kalman filter outperformed the ridge filter (Fig. 2‐12), and both algorithms performed better than the basic linear regression decode. The ridge filter performed 6% better (R² = 0.4196 ± 0.0507 average performance; R² = 0.4992 ± 0.0436 best day performance (mean ± SD)) than the linear regression decode (R² = 0.4035 ± 0.0497 average performance; R² = 0.4796 ± 0.0482 best day performance), whereas the Kalman filter performed 24% better on average (R² = 0.5044 ± 0.0725 average performance; R² = 0.5802 ± 0.0737 best day performance). The superior performance of the ridge filter in direct comparison to the linear regression reflects the benefit of regularization for high‐dimensional input spaces that exhibited multicollinearity among neural units and/or contained little or no tuned activity on some recorded channels. It is very likely that the superior performance of the Kalman filter decode results from its consideration of position, velocity, and acceleration information, possibly encoded by the ensemble of recorded neurons, whereas the linear regression and the ridge filter have to rely on position encoding exclusively.
Figures 2‐13 and 2‐14 illustrate the position reconstruction performance of the ridge filter and the Kalman filter using the time course of the same single reach sequence. As predicted by the single best day and across‐session average R² performance values, the Kalman filter performs slightly better than the ridge filter.
Figure 2‐11 Single reach sequence to 6 targets displayed in Cartesian coordinates
Six targets (blue) were presented to the animal sequentially at random locations. Once the animal had acquired a target successfully, the next target lit up immediately. The animal was rewarded at the end of the sequence after having reached to six targets successfully. While the animal performed the task, the hand trajectory was recorded in 3D Cartesian coordinates (red; x: depth, y: vertical, z: horizontal). Simultaneously, PPC neural activity from the implanted microelectrode arrays was recorded. The spike raster plots show the three neurons that were most strongly correlated with hand position in the x, y, or z DoF.
Figure 2‐12 Offline decoding performance for trajectory reconstruction
A, Single session R² values for position estimation: performances of linear regression, ridge, and Kalman filter applied to neural data with the visual cue onset removed (black bars) and not removed (white bars). B, Average R² performances across 19 sessions.
Figure 2‐13 Ridge filter reconstruction of hand position from PPC neural activity during a single reach sequence to 6 targets
Actual behavior (red) and decoded estimates of hand position (black); the decode algorithm updated the position estimate every 90 ms (marked with ●). Reach targets are indicated in blue. Note: to facilitate comparison, Figures 2‐13 – 2‐16 are based on reach data from the same sequence.
The role of the visual cue onset response
Neurons in PPC frequently show a vigorous response to appearing visual stimuli regardless of the motor plan they generally encode (Gail and Andersen 2006). For the control of prosthetic devices this response is undesirable and needs to be taken into consideration to avoid misinterpreting neural responses evoked by appearing visual stimuli as motor control signals. Because prosthetic devices for real‐world applications will not have the luxury of knowing when stimuli are being presented or attended to, we tested whether decoding can be
performed without removal of the cue‐onset phase. We compared the decoding performance
of ridge and Kalman filter applied to the neural signals with the 200ms cue‐onset response
transient removed with the decoding performance of the same filters applied to the neural
signals with the cue onset response not removed. Although the visual response is known to dominate over any motor‐related neural activity during a brief cue onset period in individual cells, we were able to decode movement from a population of cells with the cue onset interval present, without a loss in decoding accuracy (Table 2‐3, Fig. 2‐12).
Figure 2‐14 Kalman filter reconstruction of hand position from PPC neural activity during a single reach sequence to 6 targets
Decoding of velocity and acceleration
Although we mostly evaluate the performance of decode algorithms based on their accuracy in
predicting position, the Kalman filter decode implicitly provides estimates for velocity and
acceleration (and/or any other variable it has been designed to estimate). Table 2‐4 shows that,
despite noisy measurements of the actual limb trajectory that particularly affect acceleration
linear regression ridge regression Kalman filter
R
2
, cue onset interval removed 0.3951 ± 0.0545 0.4171 ± 0.0550 0.491 ± 0.0743
R
2
, cue onset interval NOT removed 0.4035 ± 0.0497 0.4196 ± 0.0507 0.5044 ± 0.0725
Table 2‐3 R
2
position decode performance of linear regression, ridge filter, and Kalman
filter decode algorithms applied to recordings with the visual cue onset phase removed
and not removed.
58
but also velocity obtained from limb position recordings through numerical differentiation, the
Kalman filter algorithm provides reliable estimates for these variables.
Figures 2‐15 and 2‐16 show, using the reach sequence data from the position plots in Figures 2‐13 and 2‐14, that the Kalman‐filter‐based reconstructions of velocity and acceleration followed the experimental values reasonably closely, despite noisy data caused by the numerical differentiation used to obtain velocity and acceleration from the hand position tracking data.
Figure 2‐15 Kalman filter reconstruction of hand velocity from PPC neural activity during a single reach sequence to 6 targets (x‐, y‐, and z‐velocity of hand and prediction over time)
Figure 2‐16 Kalman filter reconstruction of hand acceleration from PPC neural activity during a single reach sequence to 6 targets
Although hand acceleration was obtained from hand position tracking data through numerical differentiation, the Kalman filter algorithm predicts features of hand movement in the acceleration domain with fair accuracy.
                                           position          velocity          acceleration
R² performance (single session)            0.5802 ± 0.0737   0.5126 ± 0.0383   0.2872 ± 0.0348
R² performance (average across sessions)   0.5044 ± 0.0725   0.5397 ± 0.0369   0.3308 ± 0.0384
Table 2‐4 Kalman filter R² prediction performance for position, velocity, and acceleration.
Figure 2‐17 indicates that position, velocity and acceleration can be estimated reliably using the
Kalman filter algorithm. Although acceleration appears to be represented less strongly than
position and velocity, it is possible that the Kalman filter predicts all three variables with similar
accuracy, whereas the acceleration result reported here is weaker mainly due to the noise
observed in the acceleration signal caused by numerical differentiation.
Figure 2‐17 Comparison of Kalman filter R² prediction performance for hand position, velocity, and acceleration
A, single best session: both position and velocity are predicted reliably, whereas the prediction of acceleration is less accurate. B, average across all 19 sessions.
Decoding performance for neural populations of different size
We also sought to test how decoding performance varied as a function of the PPC ensemble
size. Figure 2‐18 A shows neuron‐dropping curves for the single best session, which plot R² for decoding cursor position as a function of ensemble size. To verify that these single‐session
trends were representative of the decoding efficiency in general, we averaged neuron dropping
curves from 19 sessions (Fig. 2‐18 B). Note that when averaging performance across all sessions,
the maximum possible number of neural units considered was constrained by an upper limit of
56, the smallest ensemble size of any of the recording‐sessions we considered. As expected,
session‐averaged neuron‐dropping curves were shifted downward from the single best session
curves due to variation in decoding performance across multiple sessions. Nonetheless, trends
analogous to those reported for the single best session were preserved after averaging across
sessions, indicating that performance steadily increases as the number of neurons increases.
Figure 2‐18 Neuron‐dropping curves comparing Kalman and ridge filter decoding efficiencies
A, single best day performance; B, across‐sessions performance average.
Online decoding ‐ brain control
The R² values reported in the previous sections indicate good, but not perfect, correlation between the recorded neural activity and limb movement. Generally there will be no simple, direct mapping between cortical signals and the actuators and linkages of a prosthetic limb being operated by a paralyzed patient; therefore, performance will depend on the design of the control algorithm and the “learnability” of the system. Importantly, the prosthetic assist device
will be operated closed‐loop, i.e. the patient will immediately experience the consequences of
issued control commands, and can issue corrective commands when observing a mismatch
between intended trajectory and actual artificial limb trajectory. The implication is that a
patient, therefore, will likely attempt to learn strategies to modify the neural command signal to
the prosthetic device. Over time this may cause the tuning characteristics of the neural
ensemble to adjust, thus providing superior control commands, resulting in a higher success
rate. Here we present the results from a set of brain‐control experiments where the animal was
required to control on‐screen 3D‐cursor movement using PPC neural activity while obtaining
immediate real‐time visual feedback about the evolving cursor trajectory, thus being able to
correct for and to learn from errors. The ridge and Kalman filter algorithms employed to drive
the cursor were identified immediately before each brain‐control session using data recorded
during a brief segment of reach trials with the cursor under manual control, as described in
previous sections for offline decoding. We were able to show that performance, which was
always higher than chance level, did increase gradually. The initial rapid performance increase
observed during the first session was presumably due to the animal learning the concept of the
reach task under direct brain control. Furthermore, we show that the later, continuous, but
more gradual increase in task performance can be attributed to neural plasticity causing the
neural ensemble to develop an encoding pattern more compatible with the linear decoding
algorithms.
Behavioral performance
We found that the monkey was able to successfully guide the cursor to the target using brain
control at a level of performance much higher than would be expected by chance. Figure 2‐19 A
illustrates the monkey’s behavioral performance for the first brain control session. The 10‐trial
moving average success rate was 26% for 27 targets, which was significantly higher than the
chance level calculated for that session (chance = 5.2 ± 1.2%, mean ± SD). However, after a few
additional training sessions, the monkey’s performance had improved substantially, reaching
100% during certain periods of a session (Figs. 2‐19 B, 2‐20). Simultaneously the mean time to
target (time from target appearance to reach successfully completed) decreased with practice
from initially 2.76 ± 2.66 s to eventually 1.89 ± 2.03 s (Fig. 2‐20). Consistent with these findings
the animal showed other signs of improved control over the brain‐control task: Figure 2‐21 A
presents three examples of brain‐control reach trials recorded during the first brain‐control
session. These trajectories were inaccurate, required multiple corrections and took an
increased amount of time to complete, whereas trials recorded during a later session (Fig. 2‐21
B) were executed more accurately, faster, and rarely required corrective movements.
Figure 2‐19 Brain control performance improvement over single sessions
A, Ten‐trial averaged success rate during the first closed‐loop brain control session, obtained using the ridge filter; the dashed line denotes the chance level calculated for that session. B, Improved brain control success rate measured during a later session using the Kalman filter.
Figure 2‐20 Brain control performance improvement over multiple sessions
A, ridge filter performance reported over 19 days: the top plot indicates actual success rate and chance success rate; the bottom plot shows the time required to complete single brain‐controlled reach trials. B, Kalman filter performance reported over 6 days. Note that the Kalman filter evaluation was performed after 19 brain‐control sessions using the ridge filter; skills acquired during the ridge filter brain‐control sessions therefore likely account, at least partially, for the high initial Kalman filter performance, which peaked at 100% after only a few sessions.
Figure 2‐21 Examples of successful brain control trajectories
The gray areas between reach trials indicate one‐second breaks for reward (the animal was rewarded for each successfully completed reach). A, trajectories recorded during the first brain control session: the trajectories were inaccurate and required multiple corrections before successful target acquisition. B, trajectories recorded during a later brain control session when the animal had adjusted to the online decode: generally, reaches were performed more rapidly and accurately.
Brain‐control in the absence of real limb movement
Patients equipped with a brain‐controlled robotic assist device are typically left without sensory feedback from the periphery due to their injury; therefore, the neural signals recorded from the brain are assumed to be of central, i.e. purely cortical, origin. This is different from the situation
we studied in our experiments using able‐bodied monkeys. The animals’ feedback pathways
were fully intact, and because PPC, and area 5 in particular, have access to proprioceptive
signals through S1, it cannot be ruled out that the cortical activity harnessed by the array
implants in PPC was generated partially by peripheral feedback. Because proprioceptive
feedback is relatively easy to modulate by adjustments in behavior (i.e. by altering limb
movement patterns), it is possible that the increase in performance reported is a direct result of
the monkey learning to adjust his behavior resulting in superior task performance. To rule out
this possibility we mechanically immobilized the monkey’s limb and monitored the EMG signals
of multiple muscles involved in reaching, thus generating a situation where feedback from the
limb is at least minimized. The monkey was required to perform the same brain‐control 3D
reach task introduced previously. Initially we observed reach‐performance dropping to chance
level, presumably because the animal needed to learn the concept of having full control over
cursor movement without actually moving the real limb. Eventually performance recovered,
and although it did not reach the online performance reported from sessions where the limb was not constrained, and frequently resulted in jerky trajectories (Fig. 2‐22), we report consistent performance above chance level (Fig. 2‐24).
Figure 2‐22 Examples of successful brain control trajectories in the absence of real‐limb movement
EMG signals were recorded simultaneously from 4 muscles normally involved in reaching movements to verify the absence of muscle contractions (for comparison, see the EMG recordings taken while the animal was performing real reaches, Fig. 2‐23). Note that although the trajectories were not as smooth as previously recorded samples (with the real limb free to move), and frequently required multiple corrective cursor movements before target acquisition, the animal was able to perform the task.
Figure 2‐23 Control sequence of training reaches to 8 targets with EMG leads attached, to verify EMG lead function (EMG recorded from biceps, triceps, deltoid, and trapezius while hand and target positions were tracked)
Brain control learning effects
The brain‐control sessions with limb movement suppressed, in particular, represent a novel motor control task to which the CNS had not previously been exposed. In this scenario, the 3D cursor is to be driven by centrally generated neural activity in the complete absence of the sensory feedback which, under normal conditions where limb movement is not suppressed, is assumed to contribute to the overall cortical activity in PPC. To increase performance, the central parts of the motor control system need to adapt to this novel task. This may gradually, over multiple brain‐control sessions, induce plastic changes in the brain. Here we report changes in neural activity in parallel with behavioral performance trends by analyzing PPC population activity recorded during each
session’s training segment (immediately preceding the online brain‐control segment). We
observed learning effects resulting in a steady increase in R² performance from 0.4036 ± 0.0504 to 0.5398 ± 0.0555, a 34% increase after only 5 days (Fig. 2‐25). This result suggests that when presented with continuous visual feedback about the decoded position of the cursor during brain control, PPC neurons were able to collectively modify and improve their encoding properties (as evidenced by an increase in offline decoding performance), effectively making more information available to the ridge filter for controlling the cursor during subsequent brain control sessions. Interestingly, we did not observe a similar learning effect during the phase of brain‐control trials with limb movement allowed. Here the R² performance approximately followed the trend for the number of neural units (single units + multi‐units) recorded from simultaneously in PPC, which suggests that, due to the similarity of the brain‐control task with hand movement allowed and real‐world reaching, no learning occurred during these trials, and the R² performance reported from the offline training trials was mostly determined by the number of neural units recorded from.
Figure 2‐24 Brain control performance over multiple sessions with the limb immobilized
Although the accuracy requirements were reduced to encourage the animal to perform (3.5 cm target radius vs. 3 cm for brain‐control sessions without movement restraint), the animal performed robustly above the chance performance level.
Figure 2‐25 Brain‐control in absence of limb movement results in increased R² offline decode performance
Note the one‐day offset between online task performance (top plot) and the plots for offline R² decoding performance (center plot) and number of neural units (bottom plot). This offset aligns the offline R² performance of an individual training session with the online brain‐control performance of the preceding session, which may have caused the encoding changes in the neural ensemble that resulted in the increased offline R² performance reported for the next session. Improving quality of the signals recorded from the array implants can be ruled out as a trivial explanation for the performance increase (bottom plot, showing an approximately constant number of neural units).
Temporal tuning of PPC neurons
The temporal encoding properties of PPC neurons have been studied before for continuous
movement tasks (Ashe and Georgopoulos, 1994; Averbeck et al., 2005; Mulliken et al., 2008).
Recently, it was suggested that PPC neurons best encode the changing movement direction (and
velocity) with approximately zero lag time (Mulliken et al., 2008). That is, the firing rates of PPC
neurons were best correlated with the current state of the movement direction, a property consistent with the operation of a forward model for sensorimotor control (Jordan and Rumelhart, 1992; Wolpert et al., 1995). These results were obtained in a central‐fixation, center‐out task.
We obtained similar results in a free‐gaze, point‐to‐point reaching task. Figure 2‐26 shows the
temporal tuning distribution of a set of neurons recorded during a single session. 2 neurons
were classified as “sensory” type, i.e. their peak tuning lagged the limb movement for at least 30
ms (the earliest proprioceptive information in PPC has been shown to arrive with a delay of at
least 30 ms, whereas the earliest visual information was found after 90 ms (Flanders and Cordo
1989; Wolpert and Miall 1996; Ringach, Hawken et al. 1997; Petersen, Christensen et al. 1998)).
Thirteen neurons were classified as “motor” or “command” type, i.e. their peak tuning led the limb movement by at least 90 ms (Ashe and Georgopoulos 1994; Paninski, Fellows et al. 2004). The
timing of the third class of 9 neurons is explainable neither by sensory nor by motor activity. Instead, their timing, falling between sensory and motor (lag time τ > −30 ms, i.e. less lag than “sensory”, and τ < 90 ms, i.e. less lead than “motor”), was classified as “instantaneous”. Because “instantaneous”
timing is incompatible with the notion of motor command activity and sensory feedback
information, these neurons are assumed to carry information about an internal prediction of the
ongoing movement.
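The classification rule just described can be summarized in a short sketch (Python; the thresholds follow the text, while the unit IDs and OLT values shown are hypothetical):

```python
def classify_unit(olt_ms):
    """Classify a unit by its optimal lag time (OLT), in ms.
    Positive OLT = peak tuning leads the movement; negative = lags."""
    if olt_ms < -30.0:        # lags by more than the earliest
        return "sensory"      # proprioceptive feedback delay
    if olt_ms > 90.0:         # leads by more than typical motor
        return "motor"        # command lead times
    return "instantaneous"    # candidate forward-model timing

# Hypothetical OLT values for three units, in ms:
olts = {"unit_a": -55.0, "unit_b": 120.0, "unit_c": 10.0}
labels = {u: classify_unit(t) for u, t in olts.items()}
# -> {'unit_a': 'sensory', 'unit_b': 'motor', 'unit_c': 'instantaneous'}
```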
The distributions of peak temporal tuning for velocity and acceleration are qualitatively similar
to the distribution reported for position (Fig. 2‐26). These findings are consistent with Mulliken and colleagues’ findings (Mulliken, Musallam et al. 2008). They concluded that the optimal lag time for decoding limb velocity was 10 ms in the future, whereas limb position was best decoded slightly further into the future, at an OLT of 40 ms.
[Figure axes: # of neurons vs. optimal lag time τ [ms], spanning −180 to 300 ms, with dashed thresholds at −30 ms and 90 ms delimiting the sensory (τ < −30 ms), instantaneous (−30 ≤ τ ≤ 90 ms), and motor (τ > 90 ms) ranges.]
Figure 2‐26 Temporal tuning distribution of PPC neurons
Out of 24 neurons significantly tuned for position in a single degree of freedom (y‐axis, i.e. vertical DoF), 2 neurons showed sensory timing, 13 neurons showed motor tuning, and the remaining 9 neurons were “instantaneous”, or “predictive”. The red dashed lines indicate the thresholds chosen to distinguish “sensory”, “instantaneous”, and “motor” tuning, based on the optimal lag time (OLT), i.e. the lead/lag at which maximal tuning was found.
Direct comparison of temporal tuning in the position, velocity and acceleration domain reveals
one important difference: while position tuning was broad in time, velocity and acceleration
were significantly more narrowly tuned (Fig. 2‐27). Interestingly, entire reach trials were
frequently completed in < 0.5 seconds. Although we have shown that limb position can be
decoded accurately from PPC neurons, this suggests that other, less dynamic variables are
simultaneously encoded by the same neurons. One candidate variable potentially explaining the broad temporal tuning is target position.
Adaptation to temporal perturbations
The technological development of prosthetic limbs to restore movement in amputees is still at
an early stage. Today’s state of the art artificial limbs are limited by their kinematics, dynamics,
and control modes, i.e. they are very dissimilar to the previously lost limb. This suggests that a
substantial degree of learning is necessary to achieve reasonable performance. The previous
sections have demonstrated that repeated brain‐control training in a virtual environment
resulted in a significant increase in task performance. The animals learned to reach for targets
that were initially out of reach, the time‐to‐target decreased, trajectories were increasingly
smooth, and position/velocity/acceleration R² performance increased, i.e. the learning observed occurred primarily in the spatial domain. Learning to control a robotic manipulator such as a prosthetic limb, however, requires mastery not only of its kinematics, but also of its
dynamics, which are almost certainly different from the dynamics of the previously lost limb that the CNS was accustomed to controlling. Because state‐of‐the‐art prosthetic limbs do not provide proprioceptive feedback, vision, which is available to the brain only after a significant delay, is the only feedback signal available for control. We therefore tested whether parietal neurons show signs of adaptation in a task where novel dynamics were introduced to the animal via visual feedback.
[Figure axes: tuning (normalized R²) vs. lag time τ [ms].]
Figure 2‐27 Temporal tuning of the same neuron for position, velocity and acceleration
To allow direct comparison, the R² tuning curves obtained were normalized. Notice the broad tuning for position in contrast to the much sharper tuning for velocity and acceleration.
Similar to the experimental paradigm for decoding spatial aspects of limb movement, the
monkey was required to guide the on‐screen cursor using real‐limb movement. Here, however,
limb movement did not translate 1:1 to cursor movement; instead, the dynamics of a real prosthetic limb introduced a lag and altered the shape of the on‐screen trajectory.
Although the effects of this dynamic perturbation were relatively mild (Fig. 2‐10, methods section), the animal had to adjust his behavior in order to complete reach trials successfully. We recorded neural activity from PPC during the first sessions when the animal was exposed to the new task, presumably learning to control the novel dynamic limb representation. We found multiple neurons where tuning increased significantly over time with practice (Fig. 2‐28).
[Figure panels: A, position tuning (R²); B, velocity tuning (R²).]
Figure 2‐28 Changes of temporal tuning observed in a single neuron while the animal learned to control the task with novel dynamics superimposed
R² tuning for position (A) and velocity (B) increases significantly and almost steadily from early analysis cycles (early during the recording session, animal not familiar with the task dynamics) to later analysis cycles (later during the recording session, animal presumably more familiar with the task dynamics). The cell did not show any significant acceleration tuning. The cell chosen shows approximately “instantaneous” position tuning, whereas velocity tuning leads the movement.
Because hand movement and cursor movement are not identical in this task, an important
question is whether adaptation causes the representation of hand movement to strengthen in
the presence of the altered visually experienced dynamics, or alternatively whether adaptation
results in a stronger representation of the dynamically perturbed visual cursor. Due to the
strong correlation between real limb movement and virtual cursor movement, an increase in
tuning for real‐limb movement almost certainly results in a tuning increase for virtual‐cursor
movement. Direct comparison of both, however, reveals that in multiple neurons the increase
in virtual‐cursor tuning is stronger than the increase in real‐limb tuning; in other words, these neurons appear to have learned to estimate movement parameters under the imposed dynamics (Fig. 2‐29). Although the effect is generally weak, presumably due to the small difference between dynamic cursor movement and hand movement (i.e. hand and cursor movement are highly correlated), the results are highly significant (two‐sided t‐test).
Importantly, such changes in tuning characteristics are observable not only in sensory neurons
where visual feedback from the dynamic cursor could account for the shift, but also in non‐sensory (i.e. “instantaneous” or “motor”) neurons. The result therefore suggests that an
internal prediction about the ongoing movement, performed by these neurons in PPC, adjusts to
represent the additionally imposed dynamics, perceived visually. This indirectly provides
evidence supporting the idea of forward estimation in parietal cortex (Mulliken, Musallam et al.
2008): if there really is forward estimation, it is absolutely essential that the representation
updates whenever limb dynamics change (e.g. due to growth, tool use, or to learn new types of
movement).
[Figure panels: A, peak hand & cursor tuning (R² vs. analysis cycle); B, cursor tuning minus hand tuning (R² difference vs. analysis cycle).]
Figure 2‐29 Neurons exposed to novel dynamics update their representation of ongoing
movement
A, The R² tuning for cursor and hand are initially identical. While the monkey is
performing the task with the novel dynamics superimposed, the neuron shows
adaptation to the new task. Both tuning for hand and cursor increase (solid lines:
mean; dashed lines: standard deviation). B, The increasing difference between cursor
tuning and hand tuning indicates that the neuron develops a better representation of
the dynamic cursor.
Microelectrode recording array performance
It has been reported previously that the quality of single cell signals in most channels of fixed‐
geometry implanted electrode arrays degrades noticeably after a few weeks or months
(Rousche and Normann 1998). Factors contributing to this deleterious loss of signal include
reactive gliosis (Bovolenta, Wandosell et al. 1992; Turner, Shain et al. 1999) resulting from electrode movement in the tissue (owing to tissue movement caused by respiratory or circulatory pressure variations (Avezaat and van Eijndhoven 1986) and mechanical shocks due to body movements (Fee 2000)), and bio‐incompatibility of the electrode’s surface material (Edell, Toi et al. 1992; Schmidt, Horch et al. 1993). Stability of the neural recordings in both animals was
monitored for 12 months. Neural activity throughout a single experimental session tended to
be stationary; it was, however, not unusual that between sessions on subsequent days, neural
activity changed gradually, thus typically requiring daily adjustments in spike sorting. Most
electrode impedances remained stable throughout the lifetime of the implant. Only a few electrodes showed abruptly increasing impedances (most likely cause: wire breakage) or
abruptly dropping impedances (most likely cause: failure of electrode insulation). We were able to record a relatively constant number of single and multi‐units until approximately month 7, when the number of well‐isolated single units in particular started to decrease gradually. By month 9, the recordable neural activity (single units and multi‐units) had declined to approximately 60% of the initial recordings. The loss of neural activity was most
noticeable on higher impedance electrodes (2‐4MΩ), whereas the yield was relatively stable on
electrodes with impedances below 2 MΩ. The arrays for the 2nd animal were therefore designed with a lower target impedance (300 kΩ vs. 1 MΩ). The tradeoff between lowering the impedance, thereby reducing noise associated with the microelectrode (Loeb, Peck et al. 1995)
and improving the signal‐to‐noise ratio (SNR) on the one hand, and the reduction in spatial selectivity due to the larger recording area on the other, was acceptable because the number of cells recorded from simultaneously never reached a threshold where spike sorting was no longer possible. Overall, the neural activity recorded one year after array implantation was still
sufficient to perform highly accurate decodes, both offline and online. The floating array design
(Musallam, Bak et al. 2007), where the array is allowed to ‘float’ with the cortex instead of being anchored to the skull, seems to effectively prevent tissue damage due to electrode movement relative to the surrounding tissue, and biocompatibility seems to be sufficient to maintain healthy tissue in the immediate vicinity of the electrodes over extended periods of time. The
number of neural units recorded from plotted over time (Fig. 2‐30) indicates no significant loss
of recording performance over one year.
Figure 2‐30 Microelectrode recording array performance reported over one year after implantation in our 2nd animal
Due to an accident, the recording connector assembly became detached from the monkey’s skull at month 3.5. The period during which the animal was unavailable for recording due to repair surgery and recovery is marked in gray. The lower neuron count following the event is assumed to be a result of a number of electrical connections having been compromised. Importantly, the recording performance did not show any signs of further deterioration, and we therefore conclude that the drop in recording performance is exclusively a result of the accident and cannot be attributed to gradually progressing gliosis.
Discussion
Restoring motor function using cortical signals: state of the art
Although there are examples of successful neuroprosthetic devices, the field in general is still in
its infancy. The few success stories have in common that they are all stimulating applications,
such as the cochlear implant, surgically implanted to restore hearing (Loeb 1990), or deep brain
stimulation devices to alleviate the symptoms of Parkinson’s disease (Krack, Batir et al. 2003;
Bittar, Burn et al. 2005). In contrast, there are no examples of clinical success for recording neuroprosthetic devices. Despite massive efforts in engineering and science, no useful device that relies on brain signals to restore or replace motor function exists; to date, only one clinical trial has been conducted in which a patient was able to perform rudimentary tasks (Hochberg, Serruya et al. 2006).
We believe that there are two primary reasons for this complete lack of success:
1. No reliable interface technology to record from multiple neurons simultaneously in the
central nervous system over an extended period of time currently exists. The electrode implants
typically used for recording and stimulating frequently suffer from the fundamental problem of
insufficient biocompatibility (Yuen, Agnew et al. 1987; Shain, Spataro et al. 2003; Szarowski,
Andersen et al. 2003). Although this does not appear to be a major limitation for a stimulating
neuroprosthetic device, because stimulation currents can usually be increased to reach more distant neural tissue when a local immune response has resulted in electrode encapsulation, it makes recording from neurons very difficult.
2. Both stimulating and recording prosthetic applications suffer from our currently limited
understanding of the underlying neural substrate, its feature space, and encoding schemes
utilized by individual neurons. In the case of stimulating neural interfaces such as the cochlear implant, however, it turns out that the brain is relatively tolerant of suboptimal stimulation and “learns to make sense” of the presented distorted information (Loeb 2005). This is
fundamentally different in recording applications where poorly understood information has to
be extracted from the nervous system, interpreted, and converted into command signals to
operate a prosthetic assist device, although the brain, to a limited extent, may learn to generate
the activity required to drive a poorly identified decoding algorithm.
Restoring motor function using motor cortical neural activity
Almost all recent approaches for restoring motor function using single unit and multi‐unit neural
activity targeted the motor areas (Serruya, Hatsopoulos et al. 2002; Taylor, Tillery et al. 2002;
Carmena, Lebedev et al. 2003; Santhanam, Ryu et al. 2006). Although these brain areas are
among the most studied, they are still poorly understood (Churchland and Shenoy 2007; Scott
2008). Consensus has not been reached on even the seemingly most fundamental questions
such as whether M1 is involved in low‐level individual‐muscle force and single‐joint control
(Evarts 1968), or if it is involved in motor control on a much higher level, coordinating entire‐
limb trajectories (Georgopoulos, Kalaska et al. 1982).
A common approach to facilitate the interpretation of the recorded signals therefore still is to
eliminate degrees of freedom in the motor task studied, or to constrain the workspace, e.g.
(Serruya, Hatsopoulos et al. 2002; Taylor, Tillery et al. 2002; Carmena, Lebedev et al. 2003).
As a result, although the reported correlations between neural activity and movement parameters were typically reasonable, none of the decoding algorithms used were suitable for
driving a prosthetic assist device to be used in a meaningful task in a reasonably‐sized
workspace.
Unique characteristics make posterior parietal cortex an attractive target region for the
extraction of signals to operate prosthetic assist‐devices
Unlike the previously mentioned studies targeting the motor areas, our approach relies on
signals from posterior parietal cortex. We believe that the higher‐level, abstract nature of
motor planning and execution signals extracted from PPC are highly compatible with the
requirements of neural prosthetic applications. As demonstrated in our work and that of Mulliken and colleagues, the quality of hand trajectory information extracted from this brain area is comparable to accuracy levels reported from the motor areas (Mulliken, Musallam et al. 2008). Furthermore, due to its unique functional position between sensory and motor processing, PPC is engaged in multiple motor‐related and higher‐level planning and execution tasks (Mulliken and Andersen 2009), which may provide other useful information to control additional features of a motor prosthesis or to further increase decoding accuracy:
(1) PRR and area 5 are known to encode motor variables in gaze‐centered coordinates
(Batista, Buneo et al. 1999; Buneo, Jarvis et al. 2002). We may, therefore, be able to decode
information about gaze to either drive a neural prosthetic device such as an on‐screen cursor
directly, or in order to increase accuracy of the hand trajectory decode, exploiting the strong
and highly stereotypical coupling between hand and eye movement typically observed under
natural conditions (Crawford, Medendorp et al. 2004).
(2) Information about intended reach goals is known to be encoded much more strongly in
PPC (Snyder, Batista et al. 1997) than in M1 (Crammond and Kalaska 2000). Goal information
can either serve to increase the accuracy of a trajectory decode (Mulliken, Musallam et al.
2008), or it can be used as a discrete variable to be decoded itself, for example to choose
between multiple menu options in a computer‐screen‐based assist device (Musallam, Corneil
et al. 2004).
(3) Parietal cortex is known to play a crucial role in motor planning not only for movements
to be executed, but also for mentally simulated movements, whereas this does not appear to
be the case for the primary motor areas (Sirigu, Duhamel et al. 1996). The initial identification
of a decoding algorithm in patients will be greatly hampered by the inability of the patients to
perform a set of voluntary movements that are normally used to derive the initial decode in
intact non‐human primates. The robust representation of imagined movements in parietal
cortex may therefore facilitate initial decode algorithm identification.
(4) Other, higher level variables (e.g. the expected value of an action) are known to be
represented in PPC (Musallam, Corneil et al. 2004). Although not directly useful as control signals, such variables may help monitor the motivational state of the patient. This may be
particularly valuable in the severely “locked‐in” patients whose global lack of communication
channels makes them likely candidates for emergent neural prosthetic technologies.
(5) Robust LFP signals in PPC are known to provide information about state transitions
during reach planning and execution (Pesaran, Pezaris et al. 2002; Scherberger, Jarvis et al.
2005). It is unclear whether similar signals can be decoded from the primary motor areas.
State machines will be an absolute necessity for safe operation of any type of cortically driven prosthetic device, whether the solution targets parietal or motor‐area neurons. They will be of
particular importance in robotic assist devices in order to decode a “go” or “no‐go” decision
allowing clear discrimination between signals intended to result in movement and noise, which,
if decoded by the same algorithm, would result in undesired and potentially unsafe prosthetic
limb movement.
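One possible minimal form of such a gating state machine is sketched below (Python; the assumption that planning and “go” transitions are decoded from LFP activity follows the studies cited above, but the decoder interface shown is hypothetical):

```python
from enum import Enum, auto

class ReachState(Enum):
    IDLE = auto()   # no movement intended; trajectory output suppressed
    PLAN = auto()   # planning activity detected; still no limb motion
    MOVE = auto()   # "go" decoded; trajectory decode drives the limb

def next_state(state, plan_detected, go_detected):
    """Advance the gating state machine by one step. Only in MOVE is
    the continuously decoded trajectory allowed to reach the prosthesis,
    so decoder noise cannot produce unintended limb movement."""
    if state is ReachState.IDLE and plan_detected:
        return ReachState.PLAN
    if state is ReachState.PLAN and go_detected:
        return ReachState.MOVE
    if state is ReachState.MOVE and not go_detected:
        return ReachState.IDLE
    return state
```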
Summary and discussion of results
The intention behind the work presented here was to evaluate the feasibility of decoding limb
trajectories in a 3‐dimensional point‐to‐point reaching task with only a minimum of constraints imposed. We demonstrated that 3D computer cursor trajectories controlled by hand
movements could be reliably reconstructed from the activity of PPC neurons. We were able to
account for approximately 60% of the variance in cursor position when decoding offline. This
level of accuracy was obtained despite unconstrained gaze. We showed that the decoder
performed robustly even in presence of the visual cue onset interval. In addition, we verified
that state‐space models (Kalman filter) outperformed feedforward linear decoders (least‐
squares linear regression, ridge regression). These findings are compatible with results reported
from M1 decoding studies (Wu, Black et al. 2003), and from PPC decoding studies under
centrally fixed gaze (Mulliken, Musallam et al. 2008). Furthermore, we demonstrated that PPC
neural ensembles are appropriate for controlling 3D cursor trajectories in real‐time for neural
prosthetic applications. Significant and rapid learning effects in PPC during brain‐control
enabled the monkey to improve performance substantially over several experimental sessions.
Finally, we report that the incremental improvements of our neural recording arrays eventually
allowed us to record neural activity for > 1 year without a significant loss in signals.
Considerations for off‐line decoding of trajectories from PPC
Offline reconstruction of 2D trajectories from PPC activity has been reported in two earlier
studies (Carmena, Lebedev et al. 2003; Mulliken, Musallam et al. 2008). Mulliken and
colleagues reported R² = 0.53 on average and R² = 0.71 for the single best session for the G‐Kalman filter,
which on average was reported to perform 17% better than the standard Kalman filter that was
used here. Mulliken and colleagues obtained their results in a task requiring central gaze
fixation; average performance results were based on recordings from 7.8 ± 5.2 single units and
107 ± 27 multi‐units, whereas single best day performance was obtained while recording from
approx. 180 units. We report slightly weaker but overall comparable performance of R² = 0.5044 ± 0.0725 on average and R² = 0.5802 ± 0.0737 for the single best session, obtained using
30 ± 4.09 single units, 37.21 ± 3.81 multi‐units and 28 single units, 41 multi‐units, respectively. A
single cause for the slightly weaker decode performance may be hard to identify due to multiple
differences in experimental design: 3D vs 2D workspace, point‐to‐point reaches vs center‐out
reaches, and unconstrained gaze vs central fixation. Furthermore, it is generally difficult to
compare results obtained from different animals with different recording electrode implants.
However, the overall more challenging task and the elimination of gaze fixation constraints
make the small decrease in performance appear realistic. We did not, however, observe the dramatic performance gain reported by Mulliken and colleagues when using a Kalman filter vs. a ridge filter (they reported a 42% improvement, whereas we observed only a 25% improvement). It is not unlikely that this is due to differences in the behavioral task, which in Mulliken’s case consisted of relatively stereotyped 2D center‐out reaches. The predictive part of the Kalman filter may have been able to capture these exceptionally well, whereas the less stereotyped nature of point‐to‐point reaches in our task may have been less suitable for prediction. The
Kalman filter used here may therefore have been forced to rely less on movement prediction,
and to produce a movement state estimate relying more on measured neural activity, instead.
This would explain the only slightly better performance in comparison to the ridge regression
which performs movement state estimation based on measurements of neural activity only.
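The distinction drawn here between prediction and measurement can be made concrete with the standard Kalman recursion (a generic NumPy sketch; the matrices A, W, H, and Q are assumed to have been identified from training data, and the exact state definition used in this work is described in the methods section):

```python
import numpy as np

def kalman_step(x, P, z, A, W, H, Q):
    """One predict/update cycle of a standard Kalman decoder.
    x: kinematic state estimate; P: its covariance;
    z: vector of binned firing rates for the current time step."""
    # Predict: the movement model extrapolates the previous state.
    x_pred = A @ x
    P_pred = A @ P @ A.T + W
    # Update: the Kalman gain K weights the neural measurement against
    # the prediction; a less reliable movement model (larger W) shifts
    # the estimate toward measured activity, as argued in the text.
    S = H @ P_pred @ H.T + Q
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```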
Carmena and colleagues, in contrast, decoded from 64 neural units in PPC (single units and
multi‐units) under free gaze conditions, and concluded that they could reconstruct cursor
position with only relatively poor accuracy using PPC activity and least‐squares linear regression (R² = 0.25, single session). Although the behavioral task underlying this study is very similar to (but more demanding than) Carmena’s (point‐to‐point reaches in a 3D instead of a 2D workspace), note that the performance we report here is dramatically better. In addition, Carmena and
colleagues did not observe a significant improvement in R² performance when using a Kalman filter instead of a least‐squares model, whereas we report slightly better performance. It remains unclear why the decoding performance Carmena and colleagues reported is poor in comparison to our results and to the performance Mulliken and colleagues reported.
Neuron dropping curves allowed us to compare decoding performance for the same number of
neurons between different studies. For the ridge filter we report noticeably higher R² performance than Mulliken and colleagues for the same number of neurons (R² = 0.4 vs. R² = 0.15, both reported for 56 units), although caution is advised when directly comparing these
results for two reasons: (1) the studies were performed with different animals with different
implants; (2) the recordings we compiled to generate the neuron dropping curves were
obtained from training sessions preceding same‐day brain‐control sessions, whereas Mulliken
and colleagues compiled data from training‐sessions without subsequent brain‐control sessions
for their neuron dropping curves. Our results indicate that repeated exposure to the brain‐control task resulted in gradually increasing offline R² performance. This may, therefore, have contributed to the R² performance reported here being higher than the R² performance reported by Mulliken and colleagues. Despite these considerations, we conclude that the higher‐dimensional, more challenging, more realistic point‐to‐point 3D reaching task under
unconstrained gaze does not result in weaker decoding performance. 3D trajectories can
therefore be reconstructed very reliably using PPC activity under realistic conditions from a
relatively small ensemble of PPC neurons.
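For reference, the neuron dropping procedure amounts to repeatedly refitting the decoder on random unit subsets of a given size (a sketch; fit_and_score is a hypothetical wrapper around the actual train/test pipeline, which is not reproduced here):

```python
import numpy as np

def neuron_dropping_curve(fit_and_score, n_units, sizes, n_repeats=20, seed=0):
    """R^2 decoding performance as a function of ensemble size.
    fit_and_score(unit_indices) -> R^2 is assumed to retrain and
    cross-validate the decoder using only the selected units."""
    rng = np.random.default_rng(seed)
    curve = {}
    for n in sizes:
        scores = [fit_and_score(rng.choice(n_units, size=n, replace=False))
                  for _ in range(n_repeats)]
        curve[n] = (float(np.mean(scores)), float(np.std(scores)))
    return curve  # e.g. curve[56] -> (mean R^2, std) over 56-unit subsets
```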
Reference frame and feature space considerations
We found that a rather basic linear decode applied in a head, world, or body‐centered reference
frame (for decoding purposes they are identical due to the animal being head‐fixed) provides
robust results, despite the fact that the underlying neural substrate in PRR and area 5 is known
to encode motor planning and execution variables in gaze‐centered reference frames (Batista,
Buneo et al. 1999; Buneo, Jarvis et al. 2002). This raises a set of new questions: Does posterior
parietal cortex rely on a head‐ or body‐centered encoding scheme when gaze is not
constrained? Because gaze‐centered reference frames were found in studies performed under
gaze fixation, whereas reference frames have not been studied systematically under free gaze,
this is theoretically possible. Or, do single neurons in posterior parietal cortex carry information
about gaze? This explanation appears to be more likely than populations of neurons switching
their encoding schemes between “gaze‐centered” under constrained‐gaze conditions and “body‐centered” under free‐gaze conditions, and we therefore hypothesize that gaze‐related
information is encoded by single cells, and therefore can be compensated for when decoding
from a population of cells in a head‐centered reference frame. This would reconcile our
decoding results under free‐gaze conditions with results from previous research suggesting
gaze‐centered reference frames. One indication supporting this hypothesis is that the correlation between single‐cell firing patterns and movement parameters was always extremely weak (e.g. Fig. 2‐11), whereas consistently strong R² performance was obtained when decoding from a
population of cells. This suggests that neurons in PPC rely on a complex encoding scheme in a
high‐dimensional feature space where parameters of interest can be decoded with reasonable
accuracy only when relying on information from the neural ensemble, but not when considering
single neuron activity only. We speculate that the same population of cells providing accurate
limb movement decoding results would therefore allow us to decode gaze as well.
Generally, it will be important to further understand the feature space encoded by PPC cell
populations. The most obvious extension, for example, is the generalization of the free gaze
paradigm from allowing eye movements only to allowing eye and head movements. Previous
research suggests that eye position (Andersen, Essick et al. 1985), as well as head position
(Brotchie, Andersen et al. 1995) are represented in posterior parietal cortex. Neural network
studies (Zipser and Andersen 1988) suggest that neurons in parietal cortex rely on linear
mechanisms (gain fields) to encode these variables. The functionally rather central location of
PPC between sensory and motor processing suggests that other variables are encoded as well.
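The gain‐field mechanism just mentioned can be illustrated with a toy unit whose retinotopic tuning curve is multiplicatively scaled by a linear function of eye position (a sketch in Python with illustrative parameters, not values fit to data):

```python
import numpy as np

def gain_field_rate(target_deg, eye_deg, pref=0.0, sigma=20.0,
                    gain_slope=0.02, gain_offset=1.0):
    """Toy gain-field unit: Gaussian tuning for retinal target position,
    multiplicatively modulated by a planar (linear) function of eye
    position, the mechanism suggested by Zipser and Andersen (1988)."""
    tuning = np.exp(-0.5 * ((target_deg - pref) / sigma) ** 2)
    gain = gain_slope * eye_deg + gain_offset
    return tuning * max(gain, 0.0)  # firing rates are non-negative

# Same retinal target, two eye positions -> different firing rates,
# which lets a downstream population recover eye-position information:
r_left = gain_field_rate(target_deg=5.0, eye_deg=-15.0)
r_right = gain_field_rate(target_deg=5.0, eye_deg=+15.0)
```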
Once the feature space is well understood, we may be able to obtain superior decode
performance by clever design of state space decoding approaches that explicitly consider such
variables. A good example of the superiority of this approach is Mulliken and colleagues’ successful demonstration of trajectory decodes using a G‐Kalman filter (Mulliken, Musallam et
al. 2008). The G‐Kalman filter is an extension of the standard Kalman filter. It includes
information about the intended reach goal to increase the accuracy of limb trajectory estimates.
Information about intended reach goals is known to be encoded in PPC, and due to strong
correlations between reach goal location and reach‐trajectory toward this location, goal
information facilitates the decoding of accurate trajectories. Using this strategy, Mulliken and
colleagues reported an average performance increase of 17% over the conventional Kalman
filter. The approach is particularly appealing due to the availability of intended goal location
from easy to record and long‐lasting LFP signals (Scherberger, Jarvis et al. 2005).
Eventually, once comprehensive understanding of PPC’s feature space is established, it may be
worthwhile to extend currently employed linear decode schemes to nonlinear approaches such
as artificial neural networks, particle filters, or nonlinear versions of the Kalman filter (Schwartz,
Cui et al. 2006).
Considerations for online decoding of trajectories from PPC
Learning effects in PPC became evident particularly during brain‐control sessions when real‐limb movement was suppressed, with offline R² performance increasing by 34% in just 5 sessions, whereas the R² performance remained at the same level for sessions with unconstrained real‐limb movement during brain‐control. However, we report significant increases in behavioral
performance, improving from initially 26% of the targets acquired successfully to eventually
100%. These results are comparable to learning effects observed by Mulliken and colleagues in
their PPC brain‐control study under gaze fixation (Mulliken, Musallam et al. 2008). Studies in
M1 have reported brain‐control learning effects of comparable magnitude (Taylor, Tillery et al.
2002; Carmena, Lebedev et al. 2003).
Interestingly, when free to move the real limb during brain‐control sessions, the monkeys never
voluntarily ceased to move their hand. Other groups reported that their animals, when exposed
to similar tasks under M1 brain‐control, also kept pushing against the mechanical barrier, at
least initially (Taylor, Tillery et al. 2002). Typically only spot checks of EMG recordings were
performed, and it is therefore unclear whether the animals used in M1 studies ever ceased to
move the limb completely and consistently (Taylor, Tillery et al. 2002; Carmena, Lebedev et al.
2003). Multimedia material documenting actual brain‐control trials (Schwartz 2009) suggests
the opposite: note that (a) similar to the approach taken here, limb movement had to be
suppressed using a mechanical barrier, whereas the animals did not appear to cease moving
their limb voluntarily; (b) regular and vigorous limb movement against the barrier is noticeable,
although occasionally reach trials appear to be completed without visible real‐limb movement.
This is comparable to our results, where the monkeys only ceased using their real‐limb if forced
to do so by a mechanical restraint. We would expect neural activity recorded from PPC to be relatively independent of actual movement execution, owing to PPC’s functional distance from execution, especially in comparison to M1, where such decoupling should be harder to achieve due to M1’s functional proximity to movement execution.
Note, however, that the decoding algorithm used to drive the brain‐control cursor was
identified from a set of training reaches involving real‐limb movement. Although the parietal
regions we recorded from are primarily known for their involvement in visually guided reaching,
proprioceptive inputs are known to exist, in particular in area 5 (Jones and Powell 1970; Jones,
Coulter et al. 1978). It is therefore not unlikely that proprioceptive information, resulting from real limb movement, was included in the decoding algorithm identification. Not surprisingly,
the brain‐control decode therefore performed better when real limb movement was allowed,
whereas a slight deterioration in performance was observable when real‐limb movement was
suppressed. This also explains why animals generally preferred not to stop performing real‐limb
movements while performing the task in brain‐control mode. We believe that for the control of
neural prosthetic limbs in patients this is not a limiting factor because candidate patients to be
equipped with such a device will not have proprioceptive feedback from the limb due to their
injury. The algorithm identification approach involving real‐limb training reaches to identify
decoding algorithms will need to be replaced by an identification routine relying on imagined
reaches. While the patients will be asked to mentally imagine movements demonstrated in a
display (motor imagery), an initial decoding algorithm will be identified based on correlations
between the imagined trajectory and simultaneously recorded neural activity. It is to be expected that the initially identified algorithm will allow only poor decode performance, and it will therefore need to be updated incrementally. It is, however, free of any proprioceptive influence.
Similarly, we may be able to obtain better brain‐control performance in non‐human primate
experiments in complete absence of limb movement if we identify the initial decode algorithm
relying on real‐limb movements, and then, after immobilizing the limb, incrementally update the
algorithm to adapt to the neural signals not containing proprioceptive information. Musallam
and colleagues demonstrated successful implementation of incrementally updating decoding
algorithms in a brain‐control intended reach‐target decoding task (Musallam, Corneil et al.
2004).
When observing behavioral improvement during brain‐control tasks, e.g. increase in success
rate, and/or decrease in time‐to‐target, one important question relates to the cause of this
behavioral improvement. To demonstrate usefulness for future prosthetic applications, one
must show that the improvement is not a result of (a) better quality neural recordings (more
units, higher signal‐to‐noise ratio); or (b) the animal learning to manipulate behavioral variables
(e.g. through exaggerated limb movement, or systematic gaze shifts) to optimize decoding
performance. (a) was ruled out by continuous monitoring of the underlying neural ensemble.
Although a correlation between neural ensemble size and decoding performance exists (see
neuron dropping curve (Fig. 2‐18)), we have shown that performance does increase with a constant‐size underlying neural ensemble (Fig. 2‐25). Mulliken and colleagues in their analysis went a step further and demonstrated that their increase in R² performance was a result of increases in both tuning depth and tuning spread, thus making more information available to the decoding algorithm (Mulliken, Musallam et al. 2008). Despite the reported changes in neuron
tuning it cannot be determined where in the central nervous system the plastic changes
occurred. Although this is not important for the operation of prosthetic limbs as long as the
region of plasticity is not affected by the injury, it poses a challenging scientific question which
we may only be able to answer in future extensive network modeling studies in combination with experimental studies where potential regions of plastic change in the CNS are selectively
inactivated. To rule out (b), candidate behavioral parameters potentially modulating the neural ensemble’s firing pattern need to be monitored. Although real‐limb movement does modulate neural activity in PPC (see discussion in the previous paragraph), we were able to show that a brain‐
control decode is feasible in complete absence of limb movement. The elimination of systematic gaze shifts as a potential behavioral strategy for optimizing task performance is less crucial, because patients using the prosthetic device generally retain full control over gaze and could therefore adopt a similar strategy; such control, however, would be cumbersome and unnatural. As
discussed previously, gaze shifts appear to be encoded by PPC neurons (Andersen, Essick et al.
1987; Cohen, Batista et al. 2002). Because decoding algorithms were identified under free gaze
conditions, the recorded hand‐eye coordination patterns were implicitly taken into account by
the decoding algorithm. It is therefore highly unlikely that the monkey developed a very unnatural and difficult‐to‐learn strategy of systematically using gaze shifts in order to obtain
improved online performance. Inspection of eye tracking recordings during training and brain‐
control sessions allowed us to rule out this possibility: the animals showed normal hand‐eye
coordination patterns under both experimental conditions. Furthermore, Mulliken and
colleagues demonstrated that brain‐control trials can be performed successfully while animals
are required to fixate centrally (Mulliken, Musallam et al. 2008).
Considerations for the analysis of temporal tuning characteristics of PPC neurons
In order to operate a prosthetic‐assist device in brain‐control mode, causality of the recorded neural activity is a necessity: although sensory information can be used to obtain higher tuning results in offline analyses, it is generally not available in patients relying on the device. We
therefore performed a temporal tuning analysis, determining the alignment of peak tuning of all
units recorded with respect to the ongoing movement. Although we analyzed one single
recording session only, the distribution of temporal tuning observed was similar to results
reported previously (Mulliken, Musallam et al. 2008). We found relatively few neurons showing
“sensory” tuning, i.e. their peak tuning lagged the movement by at least 30 ms for proprioceptive information, or 90 ms for visual information (Flanders and Cordo 1989; Wolpert and Miall 1996; Ringach, Hawken et al. 1997; Petersen, Christensen et al. 1998). The majority of neurons either displayed motor command timing characteristics, i.e. their peak tuning led the movement by at least 90 ms (Ashe and Georgopoulos 1994; Paninski, Fellows et al. 2004), or were temporally aligned with the movement. The temporal tuning characteristics of the broad
majority of neurons in parietal cortex therefore appear to be suitable for online control of prosthetic devices.
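In outline, the temporal tuning analysis shifts each unit's binned firing rate against the kinematic trace and records the tuning R² at every lead/lag, taking the OLT as the argmax (a simplified sketch using a single kinematic variable and a squared-correlation R²; the dissertation's actual binning and regression details may differ):

```python
import numpy as np

def optimal_lag_time(rate, kin, lags_ms, bin_ms=90.0):
    """Scan lead/lag values; positive lag means firing leads movement.
    rate, kin: equally binned firing-rate and kinematic time series."""
    r2 = []
    for lag in lags_ms:
        s = int(round(lag / bin_ms))    # lag expressed in bins
        if s > 0:                       # firing leads: pair rate[t]
            r, k = rate[:-s], kin[s:]   # with later kinematics
        elif s < 0:                     # firing lags: pair rate[t]
            r, k = rate[-s:], kin[:s]   # with earlier kinematics
        else:
            r, k = rate, kin
        r2.append(np.corrcoef(r, k)[0, 1] ** 2)
    olt = lags_ms[int(np.argmax(r2))]
    return olt, np.asarray(r2)
```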
Comparison of temporal tuning curves for position, velocity and acceleration derived from
activity of the same neuron revealed that velocity and acceleration tuning is narrow in time,
whereas temporal tuning for position is broad. Position tuning was significant for the entire
interval (from −300 ms, sensory lag, to +600 ms, motor lead), although a well‐trained animal frequently completes an entire reach trial in 500 ms. This observation is in agreement with our finding that multiple consecutive bins of neural activity are necessary to perform accurate position decodes (for the ridge filter we used 4 consecutive 90 ms bins, thus capturing neural activity from 0 ms, aligned with the movement, to 360 ms, leading the movement). The extremely broad tuning suggests that the neuron carries substantial information about task‐related but relatively stationary variables. We speculate that the broad tuning is a result of these neurons
carrying substantial target location information, because previous research has shown that
there is a strong reach‐target representation in PPC, allowing highly accurate decodes (Musallam, Corneil et al. 2004). The much narrower tuning for velocity and acceleration, in
contrast, is more compatible with the notion of the studied cell carrying information about
ongoing dynamic movement.
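The lagged ridge filter referred to above has a simple closed form; the sketch below builds the design matrix from the current bin and the three preceding 90 ms bins (Python; the ridge penalty lam is a placeholder, and its selection is not reproduced here):

```python
import numpy as np

def fit_lagged_ridge(rates, kin, n_lags=4, lam=1.0):
    """rates: (T, n_units) binned firing rates; kin: (T, d) kinematics.
    Each kinematic sample at bin t is regressed on neural activity in
    bins t, t-1, ..., t-(n_lags-1), i.e. 0 to 360 ms of neural lead
    for 90 ms bins and n_lags = 4."""
    T = rates.shape[0]
    X = np.hstack([rates[n_lags - 1 - l : T - l] for l in range(n_lags)])
    y = kin[n_lags - 1:]
    XtX = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ y)  # closed-form ridge weights

# Decoding applies the same lagged stacking to new data: kin_hat = X_new @ W
```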
Considerations for the adaptation of PPC neural activity to temporal perturbations
For this study we perturbed the dynamics of the virtual cursor movement artificially by filtering
the monkey’s real limb movement using the dynamics model of a realistically performing
prosthetic limb. The monkey was required to guide the virtual cursor using real‐limb
movement. The real‐limb movement, however, did not translate 1:1 to cursor movement; instead, the dynamics of the superimposed prosthetic limb model altered the shape of the on‐screen trajectory. Although the effects of this dynamic perturbation were relatively
mild, the animal had to adjust his behavior to complete reach trials successfully. Neural data
recorded while the animal was learning the new task suggests that multiple neurons in PPC
gradually altered their tuning, developing an improved representation of cursor movement.
Because such updating was observed not only in “sensory”‐type neurons, we believe that these neurons carry information regarding an expectation about, or an internal prediction of, the ongoing movement.
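The flavor of such a dynamic perturbation can be reproduced by driving a damped second‐order system with the recorded hand trajectory (a stand‐in sketch in Python; the actual prosthetic‐limb dynamics model and its parameters are those described in the methods section, not the illustrative values here):

```python
import numpy as np

def perturb_dynamics(hand_xyz, dt, wn=8.0, zeta=0.9):
    """Filter a hand trajectory (T, 3) through a damped second-order
    tracking system: the cursor chases the hand with natural frequency
    wn [rad/s] and damping ratio zeta, introducing the lag and shape
    changes described above. Parameter values are illustrative."""
    cursor = np.zeros(hand_xyz.shape)
    pos = hand_xyz[0].astype(float).copy()
    vel = np.zeros(hand_xyz.shape[1])
    for t, target in enumerate(hand_xyz):
        acc = wn ** 2 * (target - pos) - 2.0 * zeta * wn * vel
        vel = vel + acc * dt
        pos = pos + vel * dt
        cursor[t] = pos
    return cursor
```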
The updating of predictions observed here is highly desirable for the operation of any type of brain‐machine interface. Instead of forcing a patient to rely on slow visual feedback only, PPC neural activity, being able to learn the visually perceived dynamics of movement, may provide instantaneous control signals compatible with the characteristics of the prosthetic device.
Estimation and prediction of motor parameters for online trajectory control is one of the
fundamental concepts in motor control theory. There is ongoing debate about the location of
the neural substrate implementing such a predictive mechanism (Shadmehr and Krakauer 2008; Wolpert, Goodbody et al. 1998). The fact that we found predictive information adjusting in response to exposure to novel movement dynamics supports the idea of forward estimation in motor control. It does not, however, allow us to conclude that its implementation is in PPC.
Conclusions
The intention behind the work presented here was to evaluate the feasibility of decoding limb
trajectories from posterior parietal cortex in a realistic, only minimally constrained task.
For the first time we demonstrated successful decodes in a 3‐dimensional point‐to‐point
reaching task without requiring gaze fixation. Furthermore we observed significant and rapid
learning effects in the spatial domain, resulting in better workspace coverage, reduced time‐to‐
target, and smoother trajectories in brain‐control mode. Similarly, we described learning occurring in the temporal domain: neurons exposed to novel limb‐movement dynamics appeared to contain an increasing amount of information about the previously unknown dynamics while learning occurred.
These results were obtained without requiring recordings from large (>100 neurons) neural
populations. Incremental improvements of our neural recording arrays eventually allowed us to
record sufficient neural activity for > 1 year without a significant loss in signals.
We conclude that posterior parietal cortex is highly suitable for the extraction of command
signals to control prosthetic limbs. In addition to trajectory information, other signals about eye
movements, go/no‐go decisions and intended reach‐goals are easily accessible and can be
decoded robustly. Although the neural prosthetics community until recently mostly focused on
the motor areas to extract command signals for driving assist‐devices, posterior parietal cortex
may eventually prove to be the more suitable area to target.
Regardless of the clinical success of the proposed method, the series of experiments was an
important opportunity to formulate and test fundamental neuroscientific hypotheses. In particular, the realistic, unconstrained, self‐paced task provided an opportunity to further understand the role of PPC in the online control of motor behavior.
Chapter 3: A Virtual Reality Environment for Designing and Fitting Neural
Prosthetic Limbs
Being able to design truly functional prosthetic devices for the restoration of movement would
be a major breakthrough, but building and testing such devices is expensive, risky, and time‐
consuming. To facilitate and accelerate such development, we therefore designed a
virtual reality environment (VRE) in which subjects can operate a simulated arm system to
interact with virtual objects. Such a configuration is also helpful for the early patient training
phase. The level of complexity can be increased gradually, and the parameters can be adjusted
to correct for errors and limitations. This section describes the VRE architecture and how it can
be used throughout the design process for FES and prosthetic devices, starting with prototyping
of software and hardware components, all the way to fitting the device to the patient and
patient training. Currently, the VRE is actively used by a number of research groups, and the
goal is to introduce a user‐friendly version to clinicians in the fields of rehabilitation and
prosthetology.
Key Requirements
The VRE must support the following different types of research and development and clinical
activities:
- study of normal reaching to set performance criteria for prostheses (research and development);
- behavioral environment for tests of brain–machine interfaces in primates (research and development);
- hardware design, control algorithm design (research and development);
- prosthesis fitting (clinical);
- patient training to master complexity in stages (clinical).
To address these goals, the platform must allow the patient or subject to operate a simulated
prosthetic limb to perform common activities of daily living. Reaching and grasping of, and
interaction with virtual objects must be possible in a realistic environment. To achieve good
performance online, a fairly high level of immersion, which necessitates high‐quality feedback
channels, is mandatory. Specifically, the VRE must offer:
- realistic real‐time‐executable models of prosthetic arms and musculoskeletal components;
- a rich, customizable task environment similar to the environment the patient is usually exposed to;
- high‐quality visual feedback, requiring a high‐resolution, minimum‐delay, 3‐D stereoscopic visualization technology, a rich set of visual cues, and head tracking to update the view;
- support for multiple configurations such as different skeletal and prosthetic arm models, a wide variety of input devices (sensors to record from the patient), and different output devices (for feedback to the patient).
Furthermore, the platform should be affordable and supportable over the long term. This
suggests that it should be assembled from commonly available hardware and software
components. Because of the range of users, tasks, and environments, the graphic user interface (GUI) should be readily customizable.
VR Environment ‐ Overview
Figure 3‐1 depicts the proposed VRE in a motion tracking configuration as it is typically used for
training adaptive control systems using intact subjects. The subject performs a reaching
movement to a virtual target, and the motion capture system tracks the arm movement. Some components of the voluntary movement are used for command signals while others are used as performance criteria.
[Figure labels: VR PCs (signal acquisition, control & dynamics, rendering & visualization).]
Figure 3‐1 VRE Overview
In the VRE, subjects can operate a simulated prosthetic arm to interact with virtual objects. Multiple input modalities such as motion tracking systems and EMG/EEG electrodes provide maximum flexibility when evaluating different control approaches. The figure shows a subject operating a prosthetic arm prototype in VR (right side). The subject controls the arm via real‐time motion tracking (left side), and 3D visual feedback is provided via stereoscopic goggles for closed loop operation.
Other configurations of the same VRE can be used with subjects equipped
with myoelectric or neural prosthetic interfaces (see below). Real time algorithms determine
the resulting virtual arm trajectory. A 3‐D head mounted display (HMD) provides visual
feedback of the animated arm from the subject’s perspective. Figure 3‐2 shows the interaction and signal flow between subject/patient, control code, virtual arm, and feedback hardware.
Typically, the subject wears multiple sensors (motion tracking, EMG, head tracking for human
subjects, or cortical implants, EMG, motion tracking, eye tracking for nonhuman primates) which
provide the input to the real‐time control code. The output of that control code drives the virtual limb actuators, which can be muscles for FES limbs or electromechanical actuators for prosthetic limbs. These virtual actuators interact with the dynamic model of the virtual limb to cause movement, which is displayed in a 3‐D animation. Haptic feedback hardware (robotic manipulanda, tactor arrays, etc.) may be used to provide additional feedback.
[Figure diagram: subject/patient sensors 1…n (EMG, EEG, motion tracking, cortical recordings) supply raw signals to the real‐time control code (signal preprocessing, pattern recognition, control algorithms), whose command signals drive the virtual actuators and virtual structure/dynamics of the prosthesis model; the resulting virtual movement is returned as visual feedback (3D HMD) and haptic/tactile feedback via haptic feedback hardware, with virtual sensors closing the loop and a prosthetist’s graphic user interface supervising.]
Figure 3‐2 High level diagram of the VR environment used for prosthesis design and early patient training.
VRE Architecture
A: Hardware: The VRE components, as briefly described previously, are distributed in a multiple
PC environment. Some algorithms need to be executed in real‐time but do not require
sophisticated video output. The visualization of the VRE requires high quality 3‐D video output
but does not need to be executed in real‐time as long as minimum delay and high frame update
rate can be ensured. These fundamentally incompatible specifications require the separation of
these two classes of algorithms to different PCs, one with a real‐time kernel for fast real‐time
code execution and other Windows machines with good video performance but poor real‐time
capabilities. The VRE, as shown in Figure 3‐3, consists of 3 PCs and multiple input/output
hardware components. Our system currently supports magnetic motion tracking (Flock of Birds,
Ascension Technology Corporation) and alternatively optical motion tracking (Optotrak,
Northern Digital Inc.) to capture arm movement, but the VRE system can accept properly
formatted data from any motion capture technology. A gyro‐based three‐axis sensor (3DM‐
GX1, Microstrain) captures head movement to adjust the point of view for the 3‐D animation.
EMG electrodes and additional inputs can be connected to a general‐purpose data acquisition board (PCI‐6040E, National Instruments). An interface to a real‐time neural data acquisition
system (Multichannel Acquisition Processor, Plexon Inc.) has been developed to support spike
and local field potential inputs. The real‐time xPC in Figure 3‐3 samples all these inputs,
executes the prosthetic arm control code, and drives a real‐time dynamic model of
musculoskeletal and mechatronic arm components. The resulting output is sent to the
Visualization PCs for animation of the model limb. Multiple PCs can be used for visualization.
Our configuration employs one PC for subject visualization (PC II) via 3‐D head mounted display
(HMD) (NVISOR SX, NVIS), and a second PC (PC I) for operator visualization from any desired
perspective. Virtually any OpenGL (a graphics programming interface) compatible visualization
technology is supported. The Subject PC provides a stereoscopic view of the ongoing
experiment to the subject, whereas the operator PC only displays a 2‐D view for experiment
supervision purposes. Simultaneously the operator PC provides a user interface for experiment
control and online parameter tuning. The operator PC is also used for code development, and to
up‐ and download programs and files to and from the real‐time xPC.
B: Software: Three major software packages are incorporated in the VRE.
1) Matlab/SIMULINK is used as a platform for the development of control‐ and dynamics‐related algorithms. Many powerful tools for mechanical dynamics (e.g., SimMechanics), visualization, and control are already available on this platform. SIMULINK itself is not real‐time capable, but the xPC‐Target toolbox allows execution of SIMULINK code on a separate real‐time PC (the xPC) with only minor modifications.
2) MSMS (Musculoskeletal Modeling Software), a Java‐based program developed in our lab (Davoodi, Urata et al. 2004), is used to create and visualize the simulated arm.
3) xPCLabDesk (Bajcinca), a LabVIEW‐based solution for the efficient design of GUIs, is utilized as the primary human–machine interface (HMI) for parameter tuning and experiment control.
1) Real-Time Code: Figure 3‐4 shows the top level of a typical real‐time implementation of a
prosthetic device design environment in SIMULINK as it is initially developed on the operator PC
and then downloaded to the real‐time xPC. The entire block diagram is library‐based, i.e. each
block can be chosen from a library, depending on the requirements of the prosthesis to be
tested. Standardized busses ensure that, independent from the configuration to be tested, the
appropriate signals are always routed through the entire system.
[Figure diagram: the Operator PC (PC I: 2D operator visualization, xPC interface/control panel, Matlab, C compiler; experiment development, code download, and parameter updates over TCP/IP), the Subject PC (PC II: 3D subject visualization via NVISOR SX 3D goggles), and the xPC (PC‐based real‐time hardware: FoB data acquisition over 6x RS232, controller code, setup timing); FoB arm tracking and Microstrain head tracking feed the xPC, which streams joint angles, head tracking, and object parameters to the visualization PCs via UDP/IP.]
Figure 3‐3 VR configuration ‐ block diagram
The system consists of subject PC, operator PC, the real‐time PC (xPC), and a motion tracking system for head and arm movements.
The “Input” subsystem is chosen based on the underlying input hardware and contains the
drivers to support hardware such as motion tracking systems, EMG acquisition devices and
neural recording systems. The “Signal Analysis” subsystem contains algorithms to pre‐process
the raw input signals from the “Input” subsystem. The library contains a broad range of
algorithms from simple filtering to sophisticated pattern recognition. The “Control” and “Plant”
subsystems represent a model of the prosthetic device to be tested in VR and the corresponding
control algorithms. Various models of artificial limbs and FES limbs have been implemented.
The “Presentation” subsystem contains the drivers for all output hardware. It includes a real‐
time network interface for visual feedback. Other feedback modalities such as haptic and tactile
are supported as well. The “Environment” block contains the code for the implementation of a
task to evaluate the performance of the prosthesis being simulated. Environments can vary in complexity from simple (e.g. presentation of simple targets in 3D space) to realistic (e.g. real‐world tasks with realistic objects to be manipulated in a complex environment with obstacles). The “Head Tracking” block updates the 3D view if head movement is allowed.
Figure 3‐4 Top layer of the real‐time simulation environment in SIMULINK
showing the library‐based Input, Signal Analysis, Control, Plant, Presentation, Head‐Tracking, and Environment subsystems
2) Visualization: As shown in the VR setup diagram (Fig. 3‐3), the output of the real‐time PC
consists of angles describing arm and head movements, updated in real‐time. These data are
transmitted via a UDP/IP network connection to multiple visualization PCs (PC I and PC II in Fig. 3‐3), allowing the subject/patient to obtain 3‐D visual feedback, and the operator or physician to
monitor the experiment. We are using MSMS (Fig. 3‐4) to animate musculoskeletal and
prosthetic systems based on the transmitted angles within a virtual experimental environment.
This software was developed primarily to provide researchers and clinicians with a user‐friendly
environment to model and simulate the behavior of complex neural prosthetic systems. It
consists of a sophisticated graphic user interface to create, manipulate, and save
musculoskeletal models in MSMS or import them into MSMS from SIMM (Delp and Loan 1995),
an older model creation system. The MSMS models can be animated by motion data from a
saved file, by motion data streamed in real‐time from a motion capture system, or by motion
data computed by a dynamics engine.
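As a rough sketch of such a real‐time stream, the example below packs one frame of joint angles into a single UDP datagram and sends it to a visualization PC. The host address, port, and packet layout are illustrative assumptions, not the actual VRE protocol.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

public class AngleStreamer {
    public static void main(String[] args) throws Exception {
        // One frame of joint angles for a hypothetical 7-DOF arm (radians).
        double[] jointAngles = {0.12, -0.40, 0.95, 0.20, 0.00, 0.31, 0.08};
        ByteBuffer buf = ByteBuffer.allocate(Double.BYTES * jointAngles.length);
        for (double a : jointAngles) {
            buf.putDouble(a); // pack the frame as big-endian doubles
        }
        byte[] payload = buf.array();
        try (DatagramSocket socket = new DatagramSocket()) {
            InetAddress viz = InetAddress.getByName("192.168.0.10"); // e.g., PC II
            socket.send(new DatagramPacket(payload, payload.length, viz, 5005));
        }
    }
}

UDP is the natural choice here: a lost frame is simply superseded by the next one a few milliseconds later, and the stream never stalls waiting for retransmission.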
3) Graphic User Interface: A complex development environment such as the VRE, consisting of
multiple PCs and software packages, requires a centralized graphic user interface to control
code execution on all machines from a single desktop. Depending on the targeted user group,
the GUI must grant access to different levels of complexity: a prosthetist, for example, needs
only a basic interface for parameter tuning and updating, whereas developers usually require
access to the entire system. Manually implementing a user interface that supports this range
of complexity is not feasible. Instead we use xPCLabDesk, which was designed as a user‐friendly
GUI for xPC real‐time applications. It offers tools for parameter tuning and updating,
visualization of signals and parameters, and signal recording. The user interface is created in a
simple drag‐and‐drop manner, and all control and monitoring elements on the front panel are
connected to the corresponding xPC variables using LabVIEW graphical programming. End
users (clinicians) can run the entire VRE without purchasing Matlab/SIMULINK and LabVIEW,
because xPCLabDesk was designed as standalone software that handles all communication with
the xPC. Matlab, SIMULINK, and LabVIEW licenses are only necessary for code development and
modifications.
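To make the parameter‐update path concrete, here is a hypothetical sketch of a GUI‐side update pushed to the real‐time target over TCP/IP (cf. Fig. 3‐3). The "name=value" text format, host name, port, and parameter name are illustrative assumptions; xPCLabDesk actually communicates with the target through the xPC interface rather than any such format.

import java.io.PrintWriter;
import java.net.Socket;

public class ParameterTuner {
    // Send a single "name=value" update, e.g. a controller gain, to the target.
    static void setParameter(String host, int port, String name, double value)
            throws Exception {
        try (Socket socket = new Socket(host, port);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            out.println(name + "=" + value); // one update per line
        }
    }

    public static void main(String[] args) throws Exception {
        setParameter("xpc-target.local", 22222, "controller.gain", 0.75);
    }
}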
Figure 3‐5 Architectural diagram of MSMS
MSMS comprises a GUI (Java), a 3D rendering engine (Java/OpenGL), a modeling and simulation core (Java/C), and an XML model database, and it interfaces with SIMULINK/xPC Target and with third‐party CAD software (SolidWorks). The GUI establishes the link between the user and the modeling and simulation core. Models can be created or imported (either from other modeling software packages or from CAD software), edited, and saved in the XML database. The built‐in SIMULINK support allows automatic derivation of the equations of motion in SimMechanics for offline simulation and real‐time execution of models with or without the patient/hardware in the loop.
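For reference, the equations of motion that such a dynamics engine derives take the standard rigid‐body form (stated here generically, not as the specific arm model used in this work):

\[ M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + g(q) = \tau \]

where \(q\) is the vector of joint angles, \(M(q)\) the configuration‐dependent inertia matrix, \(C(q,\dot{q})\dot{q}\) the Coriolis and centripetal terms, \(g(q)\) the gravitational torques, and \(\tau\) the vector of applied joint torques.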
Bibliography
Abeles, M. and M. Goldstein (1977). "Multispike train analysis." Proc. IEEE 65(5): 762–773.
Andersen, R. A., J. W. Burdick, et al. (2004). "Cognitive neural prosthetics." Trends Cogn Sci
8(11): 486‐493.
Andersen, R. A., G. K. Essick, et al. (1985). "Encoding of spatial location by posterior parietal
neurons." Science 230(4724): 456‐458.
Andersen, R. A., G. K. Essick, et al. (1987). "Neurons of area 7 activated by both visual stimuli and
oculomotor behavior." Exp Brain Res 67(2): 316‐322.
Ariff, G., O. Donchin, et al. (2002). "A real‐time state predictor in motor control: study of
saccadic eye movements during unseen reaching movements." J Neurosci 22(17): 7721‐7729.
Ashe, J. and A. P. Georgopoulos (1994). "Movement parameters and neural activity in motor
cortex and area 5." Cereb Cortex 4(6): 590‐600.
Atkeson, C. G. (1989). "Learning arm kinematics and dynamics." Annu Rev Neurosci 12: 157‐183.
Avezaat, C. J. and J. H. van Eijndhoven (1986). "The role of the pulsatile pressure variations in
intracranial pressure monitoring." Neurosurg Rev 9(1‐2): 113‐120.
Bajcinca, N. "The LabVIEW interface to xPC target." from http://www.xpclabdesk.com.
Balint, R. (1909). "Die Seelenlähmung des "Schauens", optische Ataxia, räumliche Störung der
Aufmerksamkeit." Monatsschr Psychol Neurol 25: 51‐81.
Barash, S., R. M. Bracewell, et al. (1991). "Saccade‐related activity in the lateral intraparietal
area. I. Temporal properties; comparison with area 7a." J Neurophysiol 66(3): 1095‐1108.
Batista, A. P., C. A. Buneo, et al. (1999). "Reach plans in eye‐centered coordinates." Science
285(5425): 257‐260.
Bhushan, N. and R. Shadmehr (1999). "Computational nature of human adaptive control during
learning of reaching movements in force fields." Biol Cybern 81(1): 39‐60.
Bittar, R. G., S. C. Burn, et al. (2005). "Deep brain stimulation for movement disorders and pain."
J Clin Neurosci 12(4): 457‐463.
Bizzi, E., N. Accornero, et al. (1984). "Posture control and trajectory formation during arm
movement." J Neurosci 4(11): 2738‐2744.
Bovolenta, P., F. Wandosell, et al. (1992). "CNS glial scar tissue: a source of molecules which
inhibit central neurite outgrowth." Prog Brain Res 94: 367‐379.
Bremmer, F., A. Schlack, et al. (2001). "Polymodal motion processing in posterior parietal and
premotor cortex: a human fMRI study strongly implies equivalencies between humans and
monkeys." Neuron 29(1): 287‐296.
Brotchie, P. R., R. A. Andersen, et al. (1995). "Head position signals used by parietal neurons to
encode locations of visual stimuli." Nature 375(6528): 232‐235.
Bryson, A. E. and Y. C. Ho (1975). Applied Optimal Control. New York, NY, Wiley.
Buneo, C. A. and R. A. Andersen (2006). "The posterior parietal cortex: sensorimotor interface
for the planning and online control of visually guided movements." Neuropsychologia 44(13):
2594‐2606.
Buneo, C. A., M. R. Jarvis, et al. (2002). "Direct visuomotor transformations for reaching." Nature
416(6881): 632‐636.
Carmena, J. M., M. A. Lebedev, et al. (2003). "Learning to control a brain‐machine interface for
reaching and grasping by primates." PLoS Biol 1(2): E42.
Churchland, M. M. and K. V. Shenoy (2007). "Temporal complexity and heterogeneity of single‐
neuron activity in premotor and motor cortex." J Neurophysiol 97(6): 4235‐4257.
Clower, D. M., J. M. Hoffman, et al. (1996). "Role of posterior parietal cortex in the recalibration
of visually guided reaching." Nature 383(6601): 618‐621.
Cohen, Y. E., A. P. Batista, et al. (2002). "Comparison of neural activity preceding reaches to
auditory and visual stimuli in the parietal reach region." Neuroreport 13(6): 891‐894.
Crammond, D. J. and J. F. Kalaska (2000). "Prior information in motor and premotor cortex:
activity during the delay period and effect on pre‐movement activity." J Neurophysiol 84(2):
986‐1005.
Crawford, J. D., W. P. Medendorp, et al. (2004). "Spatial transformations for eye‐hand
coordination." J Neurophysiol 92(1): 10‐19.
Davoodi, R., C. Urata, et al. (2007). "Model‐based development of neural prostheses for
movement." IEEE Trans Biomed Eng 54(11): 1909‐1918.
Davoodi, R., C. Urata, et al. (2004). Development of clinician‐friendly software for
musculoskeletal modeling and control. IEEE EMBS, San Francisco, IEEE.
Della‐Maggiore, V., N. Malfait, et al. (2004). "Stimulation of the posterior parietal cortex
interferes with arm trajectory adjustments during the learning of new dynamics." J Neurosci
24(44): 9971‐9976.
Delp, S. L. and J. P. Loan (1995). "A graphics‐based software system to develop and analyze
models of musculoskeletal structures." Comput Biol Med 25(1): 21‐34.
Desmurget, M., C. M. Epstein, et al. (1999). "Role of the posterior parietal cortex in updating
reaching movements to a visual target." Nat Neurosci 2(6): 563‐567.
Desmurget, M. and S. Grafton (2000). "Forward modeling allows feedback control for fast
reaching movements." Trends Cogn Sci 4(11): 423‐431.
Dominey, P., J. Decety, et al. (1995). "Motor imagery of a lateralized sequential task is
asymmetrically slowed in hemi‐Parkinson's patients." Neuropsychologia 33(6): 727‐741.
Duhamel, J. R., C. L. Colby, et al. (1992). "The updating of the representation of visual space in
parietal cortex by intended eye movements." Science 255(5040): 90‐92.
Duhamel, J. R., C. L. Colby, et al. (1998). "Ventral intraparietal area of the macaque: congruent
visual and somatic response properties." J Neurophysiol 79(1): 126‐136.
Edell, D. J., V. V. Toi, et al. (1992). "Factors influencing the biocompatibility of insertable silicon
microshafts in cerebral cortex." IEEE Trans Biomed Eng 39(6): 635‐643.
Ehrsson, H. H., A. Fagergren, et al. (2003). "Evidence for the involvement of the posterior
parietal cortex in coordination of fingertip forces for grasp stability in manipulation." J
Neurophysiol 90(5): 2978‐2986.
Evarts, E. V. (1968). "Relation of pyramidal tract activity to force exerted during voluntary
movement." J Neurophysiol 31(1): 14‐27.
Fee, M. S. (2000). "Active stabilization of electrodes for intracellular recording in awake
behaving animals." Neuron 27(3): 461‐468.
Feldman, A. G. (1966). "Central and reflex mechanisms of motor control." Biofizika 11(667).
Flanagan, J. R. and A. K. Rao (1995). "Trajectory adaptation to a nonlinear visuomotor
transformation: evidence of motion planning in visually perceived space." J Neurophysiol 74(5):
2174‐2178.
Flanagan, J. R. and A. M. Wing (1997). "The role of internal models in motion planning and
control: evidence from grip force adjustments during movements of hand‐held loads." J
Neurosci 17(4): 1519‐1528.
Flanders, M. and P. J. Cordo (1989). "Kinesthetic and visual control of a bimanual task:
specification of direction and amplitude." J Neurosci 9(2): 447‐453.
Flash, T. and N. Hogan (1985). "The coordination of arm movements: an experimentally
confirmed mathematical model." J Neurosci 5(7): 1688‐1703.
Gail, A. and R. A. Andersen (2006). "Neural dynamics in monkey parietal reach region reflect
context‐specific sensorimotor transformations." J Neurosci 26(37): 9376‐9384.
Georgopoulos, A. P., J. F. Kalaska, et al. (1982). "On the relations between the direction of two‐
dimensional arm movements and cell discharge in primate motor cortex." J Neurosci 2(11):
1527‐1537.
Georgopoulos, A. P., J. F. Kalaska, et al. (1981). "Spatial trajectories and reaction times of aimed
movements: effects of practice, uncertainty, and change in target location." J Neurophysiol
46(4): 725‐743.
Gerdes, V. G. and R. Happee (1994). "The use of an internal representation in fast goal‐directed
movements: a modeling approach." Biol Cybern 70(6): 513‐524.
Geschwind, N. and A. R. Damasio (1985). Apraxia. Handbook of clinical neurology. P. J. Vinken, G.
W. Bruyn and H. L. Klawans. Amsterdam, Elsevier: 423‐432.
Ghez, C., J. Gordon, et al. (1990). "Roles of proprioceptive input in the programming of arm
trajectories." Cold Spring Harb Symp Quant Biol 55: 837‐847.
Gnadt, J. W. and R. A. Andersen (1988). "Memory related motor planning activity in posterior
parietal cortex of macaque." Exp Brain Res 70(1): 216‐220.
Goodwin, G. C. and K. S. Sin (1984). Adaptive filtering prediction and control. Englewood Cliffs,
NJ, Prentice Hall.
Gordon, J., M. F. Ghilardi, et al. (1995). "Impairments of reaching movements in patients without
proprioception. I. Spatial errors." J Neurophysiol 73(1): 347‐360.
Grea, H., L. Pisella, et al. (2002). "A lesion of the posterior parietal cortex disrupts on‐line
adjustments during aiming movements." Neuropsychologia 40(13): 2471‐2480.
Haarmeier, T., P. Thier, et al. (1997). "False perception of motion in a patient who cannot
compensate for eye movements." Nature 389(6653): 849‐852.
Hall, C., E. Buckolz, et al. (1992). "Imagery and the acquisition of motor skills." Can J Sport Sci
17(1): 19‐27.
Hastie, T., R. Tibshirani, et al. (2001). The Elements of Statistical Learning. New York, NY,
Springer‐Verlag New York.
Hauschild, M., R. Davoodi, et al. (2007). "A virtual reality environment for designing and fitting
neural prosthetic limbs." IEEE Trans Neural Syst Rehabil Eng 15(1): 9‐15.
Hochberg, L. R., M. D. Serruya, et al. (2006). "Neuronal ensemble control of prosthetic devices
by a human with tetraplegia." Nature 442(7099): 164‐171.
Hoerl, A. E. and R. W. Kennard (1970). "Ridge regression: biased estimation for nonorthogonal
problems." Technometrics 12(1): 55‐67.
Hogan, N. (1984). "An organizing principle for a class of voluntary movements." J Neurosci 4(11):
2745‐2754.
Hollerbach, J. M. (1982). "Computers, brains and the control of movement." Trends Neurosci 5:
189–192.
Hyvarinen, J. (1982). "Posterior parietal lobe of the primate brain." Physiol Rev 62(3): 1060‐
1129.
Johansson, R. S. and K. J. Cole (1992). "Sensory‐motor coordination during grasping and
manipulative actions." Curr Opin Neurobiol 2(6): 815‐823.
Johansson, R. S. and G. Westling (1984). "Roles of glabrous skin receptors and sensorimotor
memory in automatic control of precision grip when lifting rougher or more slippery objects."
Exp Brain Res 56(3): 550‐564.
Johnson, P. B., S. Ferraina, et al. (1996). "Cortical networks for visual reaching: physiological and
anatomical organization of frontal and parietal lobe arm regions." Cereb Cortex 6(2): 102‐119.
Johnson, P. B., S. Ferraina, et al. (1993). "Cortical networks for visual reaching." Exp Brain Res
97(2): 361‐365.
Jones, E. G., J. D. Coulter, et al. (1978). "Intracortical connectivity of architectonic fields in the
somatic sensory, motor and parietal cortex of monkeys." J Comp Neurol 181(2): 291‐347.
Jones, E. G. and T. P. Powell (1970). "An anatomical study of converging sensory pathways
within the cerebral cortex of the monkey." Brain 93(4): 793‐820.
Jordan, M. I. (1995). Computational aspects of motor control and motor learning. Handbook of
Perception and Action: Motor Skills. H. Heuer and S. Keele. New York, NY, Academic Press.
Jordan, M. I. and D. E. Rumelhart (1992). "Forward Models: Supervised Learning with a Distal
Teacher." Cognitive Science 16: 307‐354.
Kagerer, F. A., V. Bracha, et al. (1998). "Ataxia reflected in the simulated movements of patients
with cerebellar lesions." Exp Brain Res 121(2): 125‐134.
Kalaska, J. F., R. Caminiti, et al. (1983). "Cortical mechanisms related to the direction of two‐
dimensional arm movements: relations in parietal area 5 and comparison with motor cortex."
Exp Brain Res 51(2): 247‐260.
Kalman, R. E. (1960). "A new approach to linear filtering and prediction problems." Transactions
of the ASME‐‐Journal of Basic Engineering 82(Series D): 35‐45.
Kawato, M. (1999). "Internal models for motor control and trajectory planning." Curr Opin
Neurobiol 9(6): 718‐727.
Kawato, M., K. Furukawa, et al. (1987). "A hierarchical neural‐network model for control and
learning of voluntary movement." Biol Cybern 57(3): 169‐185.
Krack, P., A. Batir, et al. (2003). "Five‐year follow‐up of bilateral stimulation of the subthalamic
nucleus in advanced Parkinson's disease." N Engl J Med 349(20): 1925‐1934.
Lackner, J. R. and P. Dizio (1994). "Rapid adaptation to Coriolis force perturbations of arm
trajectory." J Neurophysiol 72(1): 299‐313.
Lacquaniti, F. and C. Maioli (1989). "Adaptation to suppression of visual information during
catching." J Neurosci 9(1): 149‐159.
Lewicki, M. S. (1998). "A review of methods for spike sorting: the detection and classification of
neural action potentials." Network 9(4): R53‐78.
Loeb, G. E. (1990). "Cochlear prosthetics." Annu Rev Neurosci 13: 357‐371.
Loeb, G. E. (2005). We made the deaf hear. Now what? Toward Replacement Parts for the Brain:
Implantable Biomimetic Electronics as Neural Prostheses. T. W. Berger and D. L. Glanzman.
Cambridge, MA, The MIT Press: 3‐13.
Loeb, G. E., R. A. Peck, et al. (1995). "Toward the ultimate metal microelectrode." J Neurosci
Methods 63(1‐2): 175‐183.
Marconi, B., A. Genovesio, et al. (2001). "Eye‐hand coordination during reaching. I. Anatomical
relationships between parietal and frontal cortex." Cereb Cortex 11(6): 513‐527.
Mattingley, J. B., M. Husain, et al. (1998). "Motor role of human inferior parietal lobe revealed in
unilateral neglect patients." Nature 392(6672): 179‐182.
McIntyre, J. and E. Bizzi (1993). "Servo Hypotheses for the Biological Control of Movement." J
Mot Behav 25(3): 193‐202.
Meeker, D., S. Cao, et al. (2002). Rapid plasticity in the parietal reach area demonstrated with a
brain‐computer interface. Soc. Neurosci.
Miall, R. C. (1996). "Task‐Dependent Changes in Visual Feedback Control: A Frequency Analysis
of Human Manual Tracking." J Mot Behav 28(2): 125‐135.
Milner, A. D. (1997). Neglect, extinction, and the cortical streams of visual processing. Parietal
lobe contributions to orientation in 3D space. P. Thier and H.‐O. Karnath. Heidelberg, Springer.
Mott, F. W. and C. S. Sherrington (1895). "Experiments upon the influence of sensory nerves
upon movement and nutrition of the limbs." Proc. R. Soc. London (B) 157: 481–488.
Mountcastle, V. B., J. C. Lynch, et al. (1975). "Posterior parietal association cortex of the
monkey: command functions for operations within extrapersonal space." J Neurophysiol 38(4):
871‐908.
Mulliken, G. H. and R. A. Andersen (2009). Forward models and state estimation in posterior
parietal cortex. The Cognitive Neurosciences. M. S. Gazzaniga. Cambridge, MA, The MIT Press.
IV: 599‐611.
Mulliken, G. H., S. Musallam, et al. (2008). "Decoding trajectories from posterior parietal cortex
ensembles." J Neurosci 28(48): 12913‐12926.
Mulliken, G. H., S. Musallam, et al. (2008). "Forward estimation of movement state in posterior
parietal cortex." Proc Natl Acad Sci U S A 105(24): 8170‐8177.
Musallam, S., M. J. Bak, et al. (2007). "A floating metal microelectrode array for chronic
implantation." J Neurosci Methods 160(1): 122‐127.
Musallam, S., B. D. Corneil, et al. (2004). "Cognitive control signals for neural prosthetics."
Science 305(5681): 258‐262.
Nanayakkara, T. and R. Shadmehr (2003). "Saccade adaptation in response to altered arm
dynamics." J Neurophysiol 90(6): 4016‐4021.
Nenadic, Z., D. S. Rizzuto, et al. (2007). Advances in cognitive neural prosthesis: recognition of
neural data with an information‐theoretic objective. Toward Brain‐Computer Interfacing. G.
Dornhege, J. R. Millan, T. Hinterberger, D. J. McFarland and K.‐R. Muller. Cambridge,
Massachusetts, The MIT Press: 175‐190.
Paninski, L., M. R. Fellows, et al. (2004). "Spatiotemporal tuning of motor cortical neurons for
hand position and velocity." J Neurophysiol 91(1): 515‐532.
Perenin, M. T. and A. Vighetto (1988). "Optic ataxia: a specific disruption in visuomotor
mechanisms. I. Different aspects of the deficit in reaching for objects." Brain 111 ( Pt 3): 643‐
674.
Pesaran, B., J. S. Pezaris, et al. (2002). "Temporal structure in neuronal activity during working
memory in macaque parietal cortex." Nat Neurosci 5(8): 805‐811.
Petersen, N., L. O. Christensen, et al. (1998). "Evidence that a transcortical pathway contributes
to stretch reflexes in the tibialis anterior muscle in man." J Physiol 512 ( Pt 1): 267‐276.
Petrides, M. and D. N. Pandya (1984). "Projections to the frontal cortex from the posterior
parietal region in the rhesus monkey." J Comp Neurol 228(1): 105‐116.
Pisella, L., H. Grea, et al. (2000). "An 'automatic pilot' for the hand in human posterior parietal
cortex: toward reinterpreting optic ataxia." Nat Neurosci 3(7): 729‐736.
Polit, A. and E. Bizzi (1979). "Characteristics of motor programs underlying arm movements in
monkeys." J Neurophysiol 42(1 Pt 1): 183‐194.
Quiroga, R. Q., Z. Nadasdy, et al. (2004). "Unsupervised spike detection and sorting with
wavelets and superparamagnetic clustering." Neural Comput 16(8): 1661‐1687.
Ringach, D. L., M. J. Hawken, et al. (1997). "Dynamics of orientation tuning in macaque primary
visual cortex." Nature 387(6630): 281‐284.
Rondot, P., J. de Recondo, et al. (1977). "Visuomotor ataxia." Brain 100(2): 355‐376.
Rousche, P. J. and R. A. Normann (1998). "Chronic recording capability of the Utah Intracortical
Electrode Array in cat sensory cortex." J Neurosci Methods 82(1): 1‐15.
Sakata, H., M. Taira, et al. (1995). "Neural mechanisms of visual guidance of hand action in the
parietal cortex of the monkey." Cereb Cortex 5(5): 429‐438.
Santhanam, G., S. I. Ryu, et al. (2006). "A high‐performance brain‐computer interface." Nature
442(7099): 195‐198.
Scherberger, H., M. R. Jarvis, et al. (2005). "Cortical local field potential encodes movement
intentions in the posterior parietal cortex." Neuron 46(2): 347‐354.
Schmidt, S., K. Horch, et al. (1993). "Biocompatibility of silicon‐based electrode arrays implanted
in feline cortical tissue." J Biomed Mater Res 27(11): 1393‐1399.
Schwartz, A. B. (2009). MotorLab ‐ Multimedia.
Schwartz, A. B., X. T. Cui, et al. (2006). "Brain‐controlled interfaces: movement restoration with
neural prosthetics." Neuron 52(1): 205‐220.
Scott, S. H. (2008). "Inconvenient truths about neural processing in primary motor cortex." J
Physiol 586(5): 1217‐1224.
Serruya, M. D., N. G. Hatsopoulos, et al. (2002). "Instant neural control of a movement signal."
Nature 416(6877): 141‐142.
Shadmehr, R. and J. W. Krakauer (2008). "A computational neuroanatomy for motor control."
Exp Brain Res 185(3): 359‐381.
Shadmehr, R. and F. A. Mussa‐Ivaldi (1994). "Adaptive representation of dynamics during
learning of a motor task." J Neurosci 14(5 Pt 2): 3208‐3224.
Shain, W., L. Spataro, et al. (2003). "Controlling cellular reactive responses around neural
prosthetic devices using peripheral and local intervention strategies." IEEE Trans Neural Syst
Rehabil Eng 11(2): 186‐188.
Sirigu, A., L. Cohen, et al. (1995). "Congruent unilateral impairments for real and imagined hand
movements." Neuroreport 6(7): 997‐1001.
Sirigu, A., J. R. Duhamel, et al. (1996). "The mental representation of hand movements after
parietal cortex damage." Science 273(5281): 1564‐1568.
Snyder, L. H., A. P. Batista, et al. (1997). "Coding of intention in the posterior parietal cortex."
Nature 386(6621): 167‐170.
Snyder, L. H., A. P. Batista, et al. (2000). "Intention‐related activity in the posterior parietal
cortex: a review." Vision Res 40(10‐12): 1433‐1441.
Strick, P. L. and C. C. Kim (1978). "Input to primate motor cortex from posterior parietal cortex
(area 5). I. Demonstration by retrograde transport." Brain Res 157(2): 325‐330.
Szarowski, D. H., M. D. Andersen, et al. (2003). "Brain responses to micro‐machined silicon
devices." Brain Res 983(1‐2): 23‐35.
Taylor, D. M., S. I. Tillery, et al. (2002). "Direct cortical control of 3D neuroprosthetic devices."
Science 296(5574): 1829‐1832.
Thoroughman, K. A. and R. Shadmehr (1999). "Electromyographic correlates of learning an
internal model of reaching movements." J Neurosci 19(19): 8573‐8588.
Thoroughman, K. A. and R. Shadmehr (2000). "Learning of action through adaptive combination
of motor primitives." Nature 407(6805): 742‐747.
Todorov, E. and M. I. Jordan (2002). "Optimal feedback control as a theory of motor
coordination." Nat Neurosci 5(11): 1226‐1235.
Turner, J. N., W. Shain, et al. (1999). "Cerebral astrocyte response to micromachined silicon
implants." Exp Neurol 156(1): 33‐49.
Vallar, G. and D. Perani (1986). "The anatomy of unilateral neglect after right‐hemisphere stroke
lesions. A clinical/CT‐scan correlation study in man." Neuropsychologia 24(5): 609‐622.
Vercher, J. L. and G. M. Gauthier (1988). "Cerebellar involvement in the coordination control of
the oculo‐manual tracking system: effects of cerebellar dentate nucleus lesion." Exp Brain Res
73(1): 155‐166.
Vercher, J. L. and G. M. Gauthier (1992). "Oculo‐manual coordination control: ocular and manual
tracking of visual targets with delayed visual feedback of the hand motion." Exp Brain Res 90(3):
599‐609.
Wann, J. P. and M. Mon‐Williams (2002). Measurement of visual aftereffects following virtual
environment exposure. Handbook of Virtual Environments. K. M. Stanney. Mahwah, New Jersey,
Lawrence Erlbaum Associates, Inc.
Welch, G. and G. Bishop (2006). An introduction to the Kalman filter.
Wolpert, D. M. (1997). "Computational approaches in motor control." Trends Cogn. Sci. 1(6):
209‐216.
Wolpert, D. M., Z. Ghahramani, et al. (1994). "Perceptual distortion contributes to the curvature
of human reaching movements." Exp Brain Res 98(1): 153‐156.
Wolpert, D. M., Z. Ghahramani, et al. (1995). "Are arm trajectories planned in kinematic or
dynamic coordinates? An adaptation study." Exp Brain Res 103(3): 460‐470.
Wolpert, D. M., Z. Ghahramani, et al. (1995). "An internal model for sensorimotor integration."
Science 269(5232): 1880‐1882.
Wolpert, D. M., S. J. Goodbody, et al. (1998). "Maintaining internal representations: the role of
the human superior parietal lobe." Nat Neurosci 1(6): 529‐533.
Wolpert, D. M. and M. Kawato (1998). "Multiple paired forward and inverse models for motor
control." Neural Netw 11(7‐8): 1317‐1329.
Wolpert, D. M. and R. C. Miall (1996). "Forward Models for Physiological Motor Control." Neural
Netw 9(8): 1265‐1279.
Woodworth, R. S. (1899). "The accuracy of voluntary movement." Psychological Review
Monographs 3(2): 1‐114.
Wu, W., M. J. Black, et al. (2002). Inferring hand motion from multi‐cell recordings in motor
cortex using a Kalman filter. SAB'02‐Workshop on Motor Control in Humans and Robots: On the
Interplay of Real Brains and Artificial Devices. Edinburgh, Scotland: 66‐73.
Wu, W., M. J. Black, et al. (2003). Neural decoding of cursor motion using a Kalman filter.
Advances in Neural Information Processing Systems. 15: 133‐140.
Yue, G. and K. J. Cole (1992). "Strength increases from the motor program: comparison of
training with maximal voluntary and imagined muscle contractions." J Neurophysiol 67(5): 1114‐
1123.
Yuen, T. G., W. F. Agnew, et al. (1987). "Tissue response to potential neuroprosthetic materials
implanted subdurally." Biomaterials 8(2): 138‐141.
Zarzecki, P., P. L. Strick, et al. (1978). "Input to primate motor cortex from posterior parietal
cortex (area 5). II. Identification by antidromic activation." Brain Res 157(2): 331‐335.
Zipser, D. and R. A. Andersen (1988). "A back‐propagation programmed network that simulates
response properties of a subset of posterior parietal neurons." Nature 331(6158): 679‐684.
Abstract
Neural activity in posterior parietal cortex (PPC) can be harnessed not only to estimate the endpoint of a reach (Musallam, Corneil et al. 2004) but also to control the continuous trajectory of an end-effector (Mulliken, Musallam et al. 2008). Here we expand on this work by showing that trajectory information can be extracted robustly from PPC neurons in more realistic, less constrained tasks. Although the visuo-motor areas in PPC are thought to rely on gaze-centered reference frames to encode movement-related parameters, we show for the first time that hand movement can be decoded accurately without constraining gaze. Furthermore, to evaluate the potential of PPC signals for controlling prosthetic limbs under realistic conditions, we increased the complexity of the task by studying point-to-point reaches in a 3D workspace instead of relying on the classic lower-dimensional 2D center-out task.