Copyright 2020 Jonathan Andrew Cheung
THE BEHAVIORAL AND NEURAL BASES OF
TACTILE OBJECT LOCALIZATION
by
Jonathan Andrew Cheung
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
NEUROSCIENCE
August 2020
Acknowledgements
I would first and foremost like to thank my advisor, Samuel Andrew Hires, for his dedicated
mentorship throughout my doctoral career. I would not have succeeded and grown as a scientist if it were
not for your fearless, bold, and trusting nature. You encouraged me to pursue my passions and trusted I
would use that time well to make fruitful discoveries. Your trust in my talents and work transformed this
pursuit of knowledge from a tireless endeavor to one filled with joy and excitement. It was a joy to share
my successes and, importantly, my failures as you welcomed them as a challenge we would embark on
together. These challenges we walked through together taught me fearlessness and tactfulness in the
articulation of thoughts. It has truly been a joy being your first student and working with you these past
years. I thank you deeply for all you have taught me.
Thank you to my past and present committee members and faculty Huizhong Tao, Bartlett Mel,
Michelle Povinelli, David McKemy, Judith Hirsch, Sarah Bottjer, and Fritz Sommer for their feedback,
support, and encouragement. You have all been willing to lend an ear when experiments were not going
as planned and offer an air of encouragement.
Thank you to all members of the Hires lab and fellow NGP students, especially Jinho Kim,
Phillip Maire, and Rachel Yuan. It has been a joy and pleasure to walk through these past years with you
not just as colleagues, but as friends. I will cherish the many memories of scientific discussions,
conversations, shared meals, laughter, concerts, conferences, and adventures we have embarked on. I say
this not with an air of sadness for what has passed but with a joyful hope that our friendship will continue
to flourish over the years to come.
Thank you to my beloved wife, Jessica Chow. You have been a rock and a model for joy;
pointing me always back to Christ.
Thank you to my supportive and loving family. You have always given me the space to pursue
my passion, loving me every step of the way and never asking anything in return.
Thank you to my community of friends: Alex Ouligian, Edward Marcus, Ernest Chow, Jordan
Ho, Ruskin Cua and Ryan Jirapong. I thank you all for celebrating with me through my victories and
walking with me throughout my failures.
This work is truly the summation of what you all have poured into me. For this, I am eternally
grateful.
Abstract
Tactile object localization is a crucial aspect of mammalian behavior. Yet our understanding of how the
brain generates this percept is still in its infancy. In this thesis two projects are designed and implemented
to understand how the primary somatosensory cortex represents the location of objects by touch. In the
first project, we rely on rodent models to understand behavior during tactile localization. From six
debated models in the field, we provide evidence that a simple model utilizing two features, the number
of touches and the midpoint of whisker motion, is enough to recapitulate behavioral outcomes. In the
second project we record from the deep layers of primary somatosensory cortex and discover the
representation of self-motion and touched object location by single neurons. These findings demonstrate
sensorimotor integration to generate a percept of object locations by touch and provide a foundation for
future studies in cortical computation and tactile perception.
Table of Contents
Acknowledgements ............................................................................................................... ii
Abstract ................................................................................................................................ iv
Chapter 1: The neural computation of touch localization ....................................................... 1
Touch localization .........................................................................................................................1
The brain and touch perception. ...................................................................................................1
Studying the neural representations of touch ................................................................................3
Inputs and outputs of primary somatosensory cortex ...................................................................5
Neural representations of touch in S1 ...........................................................................................7
Chapter 2: The sensorimotor basis of whisker-guided anteroposterior object localization in
head-fixed mice ..................................................................................................................... 9
Summary ......................................................................................................................................9
Introduction..................................................................................................................................9
Results ........................................................................................................................................ 12
Materials and Methods ............................................................................................................... 30
Supplemental Figures ................................................................................................................. 38
Chapter 3: Active touch remaps barrel cortex output from a representation of self-motion to
object location. .................................................................................................................... 43
Summary .................................................................................................................................... 43
Introduction................................................................................................................................ 43
Results ........................................................................................................................................ 45
Discussion ................................................................................................................................... 56
Materials and Methods ............................................................................................................... 58
Supplemental Figures ................................................................................................................. 66
Chapter 4: Concluding Remarks .......................................................................................... 72
Literature Cited ................................................................................................................... 74
Chapter 1: The neural computation of touch localization
Touch localization
The sense of touch is ubiquitous in everything we do but is generally overlooked. It encapsulates
sensations of vibration, pressure, pain, temperature and muscle movements, all of which are essential to
many unnoticed actions such as fishing through your pockets for coins or reaching for a cup of coffee.
Despite its broad importance, our knowledge of how the brain computes a perception of touch is still in its
infancy. In particular, how localization by touch is represented by the brain is unknown. By combining
motion and touch information into a percept of location, localization lets us manipulate the
objects around us, avoid environmental dangers, and build an internal map of the external world
when vision is unavailable. Though many studies over the past decades have highlighted primary
somatosensory cortex (S1) as a critical node in processing touch information, we have yet to understand
how touch localization is computed by neurons in this region of the brain.
Below, I first introduce studies that demonstrate the importance of S1 for touch perception in
humans and macaques. I then show why rodents serve as a powerful model to study touch perception in
cortex and review literature about the neural circuitry of S1 and the features of touch it encodes. These
studies will highlight a gap in knowledge regarding a fundamental component of touch perception that my
thesis aims to address: object localization by touch in S1. Chapters 2 and 3 are two
manuscripts I have written on touched object localization behavior and its neural
representation in S1.
The brain and touch perception
The human cerebral cortex is believed to underlie our remarkable perceptual abilities
(Gazzaniga et al., 2008). It accounts for approximately 40% of the brain’s weight and contains 14-16 billion neurons
giving rise to approximately 140 trillion synapses. In humans, S1 is defined as Brodmann areas (BA) 1,
2, and 3. Brodmann areas have been proposed and refined over the past century based on cytoarchitecture,
neuronal organization, and functional neuroimaging. Studies in both humans and macaques have
implicated cortical region S1 as a crucial node in touch perception.
Damage to S1 in humans leads to deficits in touch perception and related behaviors. Patients
with lesions in BA 1, 2, and/or 3 show reduced pressure sensitivity, two-point
discrimination, point localization, position sense, and tactual object recognition (Corkin et al., 1970,
Roland 1987). Lesions in S1 also lead to an inability to form a mental 3D percept of touched objects
(Okuda et al., 1995). These deficits go beyond discriminations and object recognition. Patients with
lesions exhibit randomized motor movements and clumsy complex finger movements during exploration
but not during imitation, suggesting a crucial role for S1 in touch feedback (Pause et al., 1989). Taken
together, these studies highlight the importance of human S1 in processing touch information necessary
for perception and motor actions.
Many of the conclusions in humans have been corroborated and expanded upon in non-human
primates. Non-human primates offer researchers in-vivo observations and finer control over manipulations
of S1. In one of the first studies to investigate the role of different subregions of S1, Mary Randolph and
Josephine Semmes lesioned BA 1, 2, or 3 and assessed the effects on a tactile discrimination task
(Randolph and Semmes 1978). In this task, macaques were trained to reach into a covered box and
discriminate between lever pairs of varying shapes, textures, and sizes. Lesioning BA 3 revealed an
inability to discriminate between any lever pair, BA 2 lesions led to deficits in shape discrimination, and
BA 1 lesions led to deficits in texture perception. More recent work using finer stimulus control and
recordings in S1 finds that BA 3 and BA 1 neurons are tuned to touched object orientation, indented bars,
and vibration, supporting the role of BA 1 in texture discrimination (Hyvarinen and Poranen 1978,
Lebedev and Nelson 1996, Hsiao et al., 2002, Bensmaia et al., 2008a, Bensmaia et al., 2008b). In BA 2,
neurons were found tuned to bar curvature, a primary component in shape recognition (Yau et al., 2013).
These data support a conclusion that subregions of S1 may play distinct roles in touch perception.
Studying the neural representations of touch
The above examples of S1 lesions in human and non-human primates highlight that large portions
of S1 are necessary for tactile perception but fail to answer whether smaller subregions may be sufficient.
Recent work across humans, non-human primates, and rodents suggests that sensory perception
relies on computations by distinct populations of neurons. These studies have highlighted how optogenetic
stimulation of subsets ranging from one neuron (Buchan and Rowland 2018) to a few hundred neurons in mouse
(Huber et al., 2008) or non-human primates (May et al., 2014) may be sufficient to influence or drive
tactile perception. Further, electrically stimulating large portions of S1 in humans may actually disrupt
and lead to delays in touch detection compared to peripheral touch stimulation (Caldwell et al., 2019).
Taken together, these recent works suggest that components of touch are encoded in distinct populations
of neurons rather than whole portions of S1.
Questions regarding cortical representations of touch and the algorithms underlying their
composition are poised for studies using the rodent whisker system. Rodents depend on their facial
whiskers, similar to how humans use their fingers, to gather touch information and have genetic and
technological advantages unavailable in humans or non-human primates. Importantly, cortical circuit
motifs are preserved in rodents, and S1 stands as a key node in rodent touch perception.
Rodent cerebral cortex contains 8-10 million neurons compared to 10-15 billion in humans.
Though the number of neurons is orders of magnitude smaller in rodents, the neuronal diversity and basic
structure of cortex remain preserved. Neurons in the brain fall into two major classes, excitatory and
inhibitory, each containing further subclasses. For example, the three subclasses of
inhibitory units in rodent cortex are parvalbumin (PV), somatostatin (SST), and vasoactive intestinal
peptide (VIP) expressing neurons. Each of these subclasses exhibits diverse morphological, physiological,
and molecular profiles (Pfeffer et al., 2013, Wall et al., 2016). A large-scale comparative study between
human and mouse cortex reveals that homologous subclasses such as these show similar diversity
across species (Hodge et al., 2019). The lamination of six layers of cortex, along with the hodology
(conserved connectivity rules between cell types), a key identifier of mammalian cortices, is present in
both rodents and humans (Harris and Shepherd 2013). Taken together, these data suggest that core
mammalian cortical components and circuit motifs are preserved between the two species.
As in studies of humans and macaques, lesions and manipulations of S1 in rodents lead to
altered touch perception. Touch perception in rodents can be evaluated via tasks such as object detection
and localization, gap-crossing, and texture discrimination. All of these tasks require whiskers to solve,
and perturbations of S1 through lesions, aspiration, muscimol, or even optical inhibition lead to deficits
in task performance (Guic-Robles et al., 1992, O’Connor et al., 2010a, O’Connor and Hires et al., 2013,
Hong et al., 2018, Rema and Chadhary 2018). Many studies like the above, manipulating S1 and probing
behavior, highlight S1 as a crucial node in processing touch information relevant for behavior in rodents.
Further, the rodent whisker system offers two ethological advantages: one behavioral and one
anatomical. Behaviorally, rodents depend on touch from their whiskers, much as humans use their
fingers, as a primary sense to explore and navigate the space around them (Vincent 1912). This heavy
dependence on whisker touch is reflected in the fact that 70% of rodent S1 (i.e.,
barrel cortex), about 13% of the entire cortical area, is dedicated to processing whisker touch information (Dawson and Killackey 1987, Lee and
Erzurumlu 2005). This system is organized with a 1:1 mapping where columns of neurons in S1 are
primarily dedicated to processing information for a single whisker (i.e. principal whisker) (Woolsey and
Van der Loos 1970). The combination of behavior and anatomical organization provides a convenient
way to assess cortical activity in vivo during active whisker behaviors (O’Connor et al., 2010).
Rodents offer several advantages not present in humans or non-human primates. Technological
advances have provided tools to observe and manipulate specific cell types and circuits in the brain
unavailable in humans and macaque. These advantages are manifested in over 300 mouse lines with
different genetic backgrounds to allow targeting of specific cell types and populations of neurons (Gerfen
et al., 2013). Two-photon calcium imaging and electrophysiology provide access to thousands of neurons
at sub-second temporal and sub-millimeter spatial resolution, or sampling of single neurons at
sub-millisecond temporal resolution (Stosiek et al., 2003, De Kock and Sakmann 2007). In parallel with cell-
type specific targeting via genetic lines, improvements in indicators and opsins open the door for higher
temporal resolution of neural activity and finer manipulation of specific circuits (Pegard et al., 2017,
Marshel et al., 2019). Taken together, the rodent whisker system serves as a powerful model to study
touch and its representation in cortex.
Inputs and outputs of primary somatosensory cortex
Whisker information is somatotopically organized as it traverses to S1. The somatosensory
circuitry begins with forces on the whiskers and whisker follicle from self-motion or object contact.
Neurons in the trigeminal ganglion, with mechanosensitive nerve endings in the follicle, faithfully
transmit touch and motion information to neurons in the principal trigeminal brainstem nucleus (Pr5). In
the Pr5, neurons are somatotopically arranged in what are known as barrelettes. Whisker information
from Pr5 is passed to the ventral posterior medial nucleus of the thalamus (VPM) where, similar to Pr5,
somatotopic barreloids innervate barrels (denoted by an increased density of neurons in
L4) within S1 in a largely one-to-one fashion.
The cortical column has long been hypothesized to be the fundamental processing unit of cortex
(Mountcastle 1957, Hubel and Wiesel 1962, Hubel and Wiesel 1968). Each of the six laminar layers has
distinct densities of excitatory and inhibitory neurons along with input and output projection patterns
(Ramon y Cajal 1904, Lefort et al., 2009). The canonical microcircuit was first proposed in 1989 to
explain the flow of excitation throughout the cortical column (Douglas et al., 1989). In S1, these
individual columns are known as barrel columns; tools such as viral tracers and subcellular
channelrhodopsin assisted circuit mapping (sCRACM) have allowed researchers to finely assess
functional connectivity of the canonical microcircuit within each barrel and its output target regions. In
brief, the excitatory canonical microcircuit routes information from L4 -> L2/3 -> L5 (Douglas and Martin
2004, Petreanu et al., 2009, Hooks et al., 2010, Feldmeyer et al., 2012, Petersen 2009, Figure 1.1).
Thalamic input from VPM to S1 primarily targets neurons in L4 with minor targets to L5B
(Constantinople and Bruno 2013). Following the canonical circuit, L4 input synapses onto L2/3 excitatory
neurons. These neurons in L2/3 have dendrites that stretch up to L1, where they can sample from other
sensory cortices or higher order cortical regions, such as primary motor cortex or secondary
somatosensory cortex. These neurons are typically classified as intertelencephalic (IT), whose axonal
projections target mainly S1, striatum, and contralateral S1. L2/3 excitatory neurons target L5 neurons,
which have large dendritic arbors reaching up to L1. Within L5 two classes of projection neurons exist: IT
and pyramidal tract (PT). Distinct from IT neurons, PT neurons target subcortical regions such as
brainstem, thalamus, and basal ganglia. PT neurons are found exclusively in L5B and IT neurons are
preferentially located in L5A. L6 excitatory neurons are primarily classified as corticothalamic (CT) and
their major output targets are VPM and the posterior medial nucleus of the thalamus (POm). In brief,
the major outputs of S1 (accounting for 70% of its total output) are striatum, secondary somatosensory
cortex, vibrissa motor cortex, thalamus, and superior colliculus, ordered from greatest to least (Mao et al.,
2011, Zingg et al., 2014).
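As a compact reference, the laminar routing and projection classes just described can be written down as a small lookup structure. The groupings below simply restate the text in Python shorthand (a sketch for orientation, not a connectome):

```python
# Canonical excitatory flow within a barrel column, per the summary above.
canonical_flow = {
    "VPM": ["L4", "L5B"],   # thalamic input targets L4, with minor input to L5B
    "L4": ["L2/3"],
    "L2/3": ["L5"],
}

# Dominant projection class of each output layer, as described in the text.
projection_class = {
    "L2/3": "IT", "L5A": "IT", "L5B": "PT", "L6": "CT",
}

# Major extracortical/cortical targets of each projection class.
class_targets = {
    "IT": ["S1", "striatum", "contralateral S1"],
    "PT": ["brainstem", "thalamus", "basal ganglia"],
    "CT": ["VPM", "POm"],
}

def downstream(layer):
    """Look up a layer's major output targets via its projection class."""
    return class_targets[projection_class[layer]]

print(downstream("L5B"))  # ['brainstem', 'thalamus', 'basal ganglia']
```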
Figure 1.1 Cortical microcircuit
A) Laminar-specific input from 5 regions into and within S1 evaluated via subcellular channelrhodopsin-assisted
circuit mapping (sCRACM). Sources of projections are denoted by rectangles, stars, and triangles. Horizontal bars
denote projection targets from each source. Thickness of bars denotes relative strength of projection. B) Summary of the
major inputs onto L3, L5A, and L5B excitatory neurons. Thickness of bars denotes relative strength of input.
Adapted from Petreanu et al., 2009.
Neural representations of touch in S1
In the past two decades, advanced tools applied to well-designed behavioral tasks have
illuminated how populations of a few hundred neurons or even single neurons within S1 represent self-
motion and touch features. Almost all layers of cortex contain single units that are found responsive to
touch (Simons, 1978, Armstrong-James et al., 1992, De Kock et al., 2007, O’Connor et al., 2010b, Peron
et al., 2015) and/or whisking (Fee et al., 1997, Curtis and Kleinfeld 2009, Xu et al., 2012). Evidence has
also been found for encoding of texture (Jadhav et al., 2009, Isett et al., 2018), object distance from face
(Pammer et al., 2011, Sofroniew et al., 2015), touch direction (Simons and Carvell 1989), touch
amplitude and velocity (Simons 1978), and even multimodal features such as running (Ayaz et al., 2019).
Leveraging advances in optical tools, viruses, and genetic lines in rodents, studies have demonstrated the
necessity and sufficiency of S1 in touch perception. Stimulation of barrel cortex via optical tools can
generate a detectable percept (Huber et al., 2008), create a false percept of touch (O’Connor and Hires
2013, Sofroniew et al., 2015), and inhibit touch detection (O’Connor et al., 2010b, Hong et al., 2018).
Taken together, these results highlight the numerous motion and touch features S1 encodes and its
necessity and sufficiency in touch perception.
The finer circuits underlying touch in S1 have been observed by leveraging cell-type specificity
of certain mouse lines for targeted recordings. For example, it has been shown that L4 excitatory units
encode touch information with low noise and that, across the population, touch can be decoded within milliseconds
of the true touch time (Hires et al., 2015). Insight into how touch could be faithfully encoded in this
low-noise manner can be gained from Yu and colleagues, who performed in-vivo whole-cell recordings from
excitatory neurons in VPM thalamus and parvalbumin interneurons in L4 during active behavior. They
found that these inhibitory L4 units filtered out reafferent whisking information from thalamocortical
projections onto excitatory units (Yu et al., 2016); this mechanism may be responsible for the low noise
encoding of touch observed in L4 excitatory neurons (Gutnisky et al., 2017).
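This low-noise population code helps explain why touch time can be read out within milliseconds: summing counts across many reliable neurons makes the touch bin stand out above background. A toy sketch with synthetic Poisson spike counts (all rates and population sizes are invented for illustration, not values from Hires et al., 2015):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_population(n_neurons=100, n_bins=200, touch_bin=120,
                        bg_rate=0.01, touch_rate=0.5):
    """Poisson spike counts in 1-ms bins; the rate jumps at the touch bin."""
    rates = np.full((n_neurons, n_bins), bg_rate)
    rates[:, touch_bin] += touch_rate
    return rng.poisson(rates)

def decode_touch_time(spikes):
    """Estimate the touch bin as the peak of the summed population count."""
    return int(np.argmax(spikes.sum(axis=0)))

spikes = simulate_population()
print(decode_touch_time(spikes))  # recovers the true touch bin (120)
```

With ~1 expected background spike per bin across the population versus ~50 at touch, the peak bin identifies touch time at millisecond resolution despite noisy single neurons.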
While these studies have made great progress in investigating the neural representations of
different tactile features, the functional significance of these representations during active behavior
remains understudied. One crucial function of the tactile system is to aid in the localization of objects
relative to self, which requires integrating information of self-motion and touch. Despite the numerous
studies in rodents, how rodents localize objects and how the cortex represents touched object location are
not known. The next two chapters present two notable findings I have made during my doctorate.
Chapter 2 of this thesis focuses on characterizing the precision and algorithm deployed by rodents for
object localization. Chapter 3 details recordings from subgranular excitatory neurons in S1 and their
representations of self-motion and touched object locations.
Chapter 2: The sensorimotor basis of whisker-guided
anteroposterior object localization in head-fixed mice
Summary
Active tactile perception combines directed motion with sensory signals to generate mental
representations of objects in space. Competing models exist for how mice use these signals to determine
the precise location of objects along their face. We tested six of these models using behavioral
manipulations and statistical learning in head-fixed mice. Trained mice used a whisker to locate a pole in
a continuous range of locations along the anteroposterior axis. Mice discriminated locations to ≤0.5mm
(<2°) resolution. Their motor program was noisy, adaptive to touch, and directed to the rewarded range.
This exploration produced several sets of sensorimotor features that could discriminate location.
Integration of two features, touch count and whisking midpoint at touch, was the simplest model that
explained behavior best. These results show how mice locate objects at hyperacute resolution using a
learned motor strategy and minimal set of mentally accessible sensorimotor features.
Introduction
Locating objects through the sense of touch is an essential behavior across animal species. In
humans and rodents, tactile object localization is an active process that combines directed sensor motion
with mechanosensory signals. Rodents sweep their large whiskers back and forth and use the resulting
tactile sensations to locate (Krupa et al., 2001) and orient to objects (Schroeder and Ritt 2016), and guide
navigation (Vincent 1912). Identifying the motor strategies deployed and the resulting sensorimotor
features that underlie object location perception during these behaviors is an essential step to
understanding algorithms and neural circuit implementations of active sensory perception (Marr 1982,
Krakauer et al., 2017).
Head-fixed preparations are advantageous for investigating object localization due to their
exquisite level of experimental control, including the ability to monitor motion with high precision
(Evarts 1968, Guo et al., 2014). Rodents can determine the location of objects by active exploration with
whiskers (Knutsen et al., 2006), even when head-fixed (O’Connor et al., 2010). High speed videography
(Clack et al., 2012) and physical models (Hires et al., 2013, Voigts et al., 2015, Hires et al., 2016, Belli et
al., 2017) can quantify motion and forces that drive whisker input with submillisecond resolution during
behavior (Boubenec et al., 2012, Vaxenburg et al., 2018). This input is transformed and integrated in a
topographic arrangement of columns in primary somatosensory cortex (S1) which have a one-to-one
mapping to individual whiskers. Examination of the activity patterns within and across these cortical
columns has revealed how sensorimotor features of tactile exploration are represented and processed in
the brain (Chen et al., 2015, Pluta et al., 2017, Gutnisky et al, 2017, Isett et al., 2018, Zuo et al, 2019).
Whiskers project from an array of follicles arranged in columns and rows across the face. From
posterior to anterior positions within a row, each large whisker (i.e. macrovibrissa) launches from its
follicle at progressively greater azimuthal angles, with about 20° of angular difference between
neighboring whiskers. Thus, discriminating object locations separated by ≥20° along the anteroposterior
(i.e. horizontal) axis is trivial using multiple whiskers and a labelled-line code based on touch presence
(Knutsen et al., 2008). However, head-fixed mice with a full whisker field can do better than this, and
discriminate object location to at least 6° of resolution (O’Connor et al., 2010). Achieving this hyperacute
localization resolution is not trivial. Head-fixed mice can also discriminate well-separated anteroposterior
locations (~15°) with a single whisker (Mehta et al., 2007) using motor strategies that establish large
differences in touch likelihood (O’Connor et al., 2013) or direction (Campagner et al., 2019) between
locations. The perceptual limits, motor strategies, and sensorimotor features that drive hyperacute object
location perception remain unclear.
Several plausible models have been proposed for how rodents achieve location hyperacuity along
the anteroposterior axis (Figure 2.1A), all of which have gaps in experimental support. These models
differ in the sensorimotor features gathered and used to construct location perception. In a roll angle
model (Figure 2.1B), rodents sense how much the whisker has rotated on its long axis at time of touch
through a differing pattern of mechanoreceptor activation (Knutsen et al., 2008, Yang et al., 2016). In a
whisk latency model (Figure 2.1C), rodents measure the time of touch referenced to the time from
maximum retraction. More anterior objects take longer to reach (Szwed et al., 2003). Similarly, in a cue
latency model (Figure 2.1C), mice measure the time from cue-triggered whisking onset to the time of
touch across one or more whisks. In a touch count model (Figure 2.1D), rodents direct their whisking to a
location range, so objects more central to this range generate more touches and consequently more spikes
in primary somatosensory cortex (S1). Location is read out by spike count in S1 with more central objects
represented by more spikes (O’Connor et al., 2013, Hires et al., 2015). In a radial distance model (Figure
2.1E), rodents measure the distance between follicle and object by comparing the angle of the normal
force relative to the angle of the follicle. This varies depending on where along the whisker touch occurs
due to increasing whisker flexibility with distance (Birdwell et al., 2007, Solomon et al., 2011, Pammer et
al., 2013, Bagdasarian et al., 2013). In a Hilbert recomposition model (Figure 2.1F), rodents integrate
three Hilbert components of whisker motion (amplitude, midpoint, and phase), to compute the azimuthal
angle of the whisker at time of touch (Kleinfeld et al., 2011). Amplitude and midpoint originate from
primary motor cortex (M1) as efference copy (Hill et al., 2011), while phase is encoded in a reafferent
sensory signal from the whisker follicle (Fee et al., 1997, Curtis and Kleinfeld 2009, Moore et al., 2015).
Despite this proliferation of models, no consensus has emerged for which approach, or mix thereof, is
actually used.
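The Hilbert recomposition model can be made concrete in a few lines of SciPy: decompose a whisker-angle trace into midpoint, amplitude, and phase, then recombine them. The trace below is synthetic and all its parameters are invented for illustration; a real analysis would band-pass filter the whisking signal before applying the transform.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000                       # 1 kHz, matching high-speed videography
t = np.arange(0, 1.0, 1 / fs)
midpoint = 10 + 5 * t           # slowly drifting setpoint in degrees (assumed)
amplitude = 15.0                # whisk amplitude in degrees (assumed)
theta = midpoint + amplitude * np.cos(2 * np.pi * 12 * t)  # ~12 Hz whisking

# Estimate the slow midpoint with a one-cycle moving average, then take the
# analytic signal of the residual for instantaneous amplitude and phase.
win = fs // 12
mid_est = np.convolve(theta, np.ones(win) / win, mode="same")
analytic = hilbert(theta - mid_est)
amp_est = np.abs(analytic)      # instantaneous amplitude
phase_est = np.angle(analytic)  # instantaneous phase

# Recomposition: the three components fully specify the azimuthal angle at
# any moment (e.g., time of touch); amp*cos(phase) is exactly the residual.
theta_rec = mid_est + amp_est * np.cos(phase_est)
```

Because the analytic signal's real part equals the residual, the reconstruction is exact, which is the model's point: amplitude, midpoint, and phase jointly determine whisker angle at the moment of touch.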
Here, we use behavioral manipulations and statistical learning to identify the simplest model that best
explains localization hyperacuity with a single whisker. We quantify the perceptual limits of
anteroposterior localization, and determine what motor strategies are deployed, how they shape the
sensorimotor information gathered, and how this information influences location perception. We identify
a two-stage classifier that combines touch count with whisking midpoint at touch as the simplest, best
performing model. This provides insight into where and how active location computations are performed
by neural circuits and a foundation from which natural object localization can be better understood.
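One minimal reading of such a two-feature scheme is a two-stage decision rule: touch count gates the decision, and whisking midpoint at touch refines it. The sketch below uses entirely synthetic trial statistics; the distributions and thresholds are assumptions for illustration, not values from this chapter.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500  # synthetic trials per class

# Hypothetical trials: poles in the rewarded (go) range draw more touches
# and shift the whisking midpoint forward (all values assumed).
go_counts   = rng.poisson(6.0, n)
go_midpts   = rng.normal(12.0, 2.0, n)
nogo_counts = rng.poisson(2.0, n)
nogo_midpts = rng.normal(5.0, 2.0, n)

def classify(count, midpoint, mid_thresh=8.5):
    """Stage 1: no touches -> nogo. Stage 2: midpoint threshold decides."""
    if count == 0:
        return "nogo"
    return "go" if midpoint > mid_thresh else "nogo"

preds = ([classify(c, m) for c, m in zip(go_counts, go_midpts)] +
         [classify(c, m) for c, m in zip(nogo_counts, nogo_midpts)])
truth = ["go"] * n + ["nogo"] * n
accuracy = np.mean([p == t for p, t in zip(preds, truth)])
print(f"accuracy: {accuracy:.2f}")
```

Even this crude rule separates the synthetic classes well, illustrating how two mentally accessible features can suffice without richer kinematic information.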
Figure 2.1 Models of anteroposterior object localization
A) Schematic of task geometry. The whisker is actively swept back and forth to locate a pole (black circle). Angle is the
azimuthal angle of the whisker at the follicle relative to the mediolateral axis of the animal. B) Position is
discriminated by how much the follicle has rotated at the moment of touch. C) Position is discriminated by when
spikes occur relative to onset of whisking. D) Position is discriminated by the number of touches during a bout of
directed exploration. E) Position is discriminated by the degree to which the normal force of object is pointing
laterally versus toward the follicle launch angle. F) Position is discriminated by which neurons are activated by
touch at specific angles. Angle is uniquely specified by the amplitude, midpoint and phase of a whisk cycle. Activity
from primary sensory neurons that are modulated by phase cycle is combined with an internal representation of
whisking amplitude and midpoint to activate distinct sets of neurons in S1 at the moment of touch, depending on the
azimuthal angle at which touch occurs.
Results
Task design and animal performance
To investigate the sensorimotor basis of object location perception, we used a variation of a
go/nogo whisker-guided localization task in head-fixed mice (O’Connor et al., 2010) that requires precise
knowledge of object location to achieve maximum performance. We trained water-restricted head-fixed
mice (n=15) to discriminate the location of a smooth vertical pole randomly presented in contiguous
ranges of go (0-5mm) and nogo (5-10mm) positions along the anteroposterior axis of the animal, about
8mm lateral from the whisker pad (Figure 2.2A). Mice were trimmed to a single whisker at the start of
training, and this trim was maintained until task mastery. We traced whisker motion, touch, and deflection from an overhead
view at 1000 fps (Figure 2.2B).
For each trial, the pole was presented for at least 2 seconds in the go range (50% trials) or nogo
range (50% trials). The pole moved vertically into and out of the field; the onset of motion was accompanied
by a 250ms pneumatic valve sound that cued mice to the trial structure. Mice voluntarily explored
the pole and their surroundings with their whisker (i.e. whisking) during the sampling period (0.75s
duration) and reported their perception of object location by licking or not licking during the answer
period (1.25s duration; Figure 2.2C). The response choice led to different trial outcomes based on pole
location (Figure 2.2D). On hit trials, mice were rewarded with a water droplet (4-8µL) and on false alarm
trials mice were punished with a 2 second timeout. Correct rejection and miss trials were neither rewarded
nor punished. Licking extended the duration of the pole presentation.
In all analyses, we only consider sensorimotor behavior (e.g. whisks and touches) that contributed
to a decision by including only data before the decision lick. The decision lick is defined as the first lick
in the answer period or, on no-lick trials, the median of the decision lick times. This cutoff excludes post-
decision motor activity that is driven by rhythmic licking on hit and false alarm trials. The reaction time
between the first touch in a trial and the decision lick was 736ms ± 240ms (741±249ms on hit trials and
690±243ms on false alarm trials; Figure 2.2E). Whisker motion resulted in a change of azimuthal angle
(i.e. angle) of the whisker base relative to the mediolateral axis of the mouse. Across all mice, this angle
at onset of touch (i.e. touch angle) spanned 49.4±8.8° from extreme posterior to anterior pole positions.
Touch angle for the go and nogo range varied across sessions and was affected by the radial distance of
the pole presentation axis and translation of the follicle during whisking. Across all touched pole
positions, the follicle translated by 1.5±0.3mm total (1.3±0.4mm in the anteroposterior axis and
0.7±0.1mm in the mediolateral axis). Mice performed 485±179 trials per session. It took 8194±1816 trials
(Figure 2.2F, excluding one outlier of 19923 trials) to reach expert performance, defined as >75%
accuracy over 200 trials.
To determine the spatial precision of localization we examined trials near the go/nogo
discrimination boundary. On average, there was a significant change in lick probability between go and
nogo trials when the pole was presented ≤1mm (3.8±0.5° mean angle difference, 29%±11% lick
difference, p=4.4e-5, 2-tail ttest) or ≤0.5mm (1.9±0.4° mean angle difference, 18%±20% lick difference,
p=0.03, 2-tail ttest) from the boundary (Figures 2.2G and 2.2H). This indicates mice can discriminate
above chance with sub-millimeter precision along the anteroposterior axis with a single whisker. The
mean number of pre-decision touches per trial (i.e. touch count) decreased from most posterior (6.8±2.6)
to anterior bin (1.1±0.9; Figure 2.2G).
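The lick-probability curves analyzed here are psychometric functions of pole position. A minimal sketch of how such a curve might be fit is shown below; the sigmoid form, lapse parameter, and all numeric values are illustrative assumptions, not the actual Methods or data of this thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(pos, boundary, slope, lapse):
    """Sigmoid lick probability vs. pole position (mm) with a symmetric
    lapse rate; posterior (go) positions have high lick probability."""
    return lapse + (1 - 2 * lapse) / (1 + np.exp(slope * (pos - boundary)))

# Illustrative data: 11 pole positions (mm) and synthetic lick probabilities
positions = np.linspace(0, 10, 11)
p_lick = psychometric(positions, 5.0, 1.2, 0.05)   # synthetic ground truth

params, _ = curve_fit(psychometric, positions, p_lick,
                      p0=(4.0, 1.0, 0.1),
                      bounds=([0, 0, 0], [10, 10, 0.5]))
boundary_est = params[0]   # recovered discrimination boundary (~5 mm)
```

Fitting the boundary and slope in this way gives a quantitative handle on discrimination precision near the go/nogo boundary.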
Figure 2.2 Head-fixed task and performance
A) Trained mice report the perceived location of a pole presented along the anteroposterior axis via licking (go) or
not licking (nogo). B) Overhead view of tracked whisker for two trials. To eliminate variation from fur, azimuthal
angle is determined at the intersection of mask and whisker trace. C) Trial structure with example imaging frames at
top. Pole presentation is triggered 500ms from session start and takes ~200ms to come into reach. Azimuthal angle
time-series for 15 consecutive trials are overlaid with the sampling period (750ms duration), answer period (1250ms
duration) and licks. D) Possible trial outcomes based on pole presentation and mouse choice. E) The average
reaction time for each individual mouse (gray circles) and the mean ± SEM for all mice (black circle). F) Learning
rates for this task highlighting 7,000 trials before and 1,000 trials after reaching expert (75% accuracy over 200
trials). Inset, number of trials required to reach expert for each mouse in gray and population in black (mean ± std
8194±1816 trials). G) Psychometric performance curves for individual mice (gray) and across the population (black)
expert in the task (n=15 mice). Bars denote the mean number of touches prior to decision for go (blue)
and nogo (red) trials. H) Performance between go-nogo pairs of bins with the max distance of 0.5, 1, 2, 3, 4 and 5
mm from the discrimination boundary. Circles denote individual mice. X denote mean ± SEM across the population.
P-values comparing population hit trials to false alarm trials: 0.5mm p=9.2e-3, 1mm p=5.2e-13, 2mm p=1.5e-15,
3mm p=1.5e-15, 4mm p=1.5e-15, 5mm p=1.5e-15; 2-sample t-test. (t-stat, degrees of freedom: 0.5mm=2.6, 276;
1mm=7.4, 552; 2mm=20.0, 596; 3mm=21.8, 569; 4mm=26.4, 578; 5mm=31.8, 594)
Behavior is consistent with closed loop integration of sensorimotor cues for hyperacute object
localization
Active exploration during object localization was intentional, adaptive, directed, and noisy. Mice
initiated whisking in a stereotyped manner throughout a session regardless of trial outcome (Figure
2.3A). Mice held their whiskers steady outside of the period of pole availability and began vigorously
whisking to the sound of pole-in with short latency (60ms±16ms; Figure 2.3B). This shows that active
exploration for object localization is an intentional process triggered by a cue.
Mice made 2.5±1.2 whisks before the first touch on trials with touch, and 1.9±1.2 whisks
before the median time of those first touches on trials without touch. There was no significant difference
in the two distributions (Kld 0.04; Figure 2.3C), which shows that failure to touch on a trial is not due to
a failure to initiate a motor program. However, mice made many more whisks (6.5±3.2) after the first
touch on trials with touch, compared to the number of whisks (2.5±2.1, Kld 1.12; Figure 2.3C) after the
median time of those first touches on trials without touch. This demonstrates that mice deploy an
exploration strategy that is adaptive to sensory feedback, consistent with a closed-loop model of tactile
perception (Ahissar et al., 2016, Zuo et al., 2019a, Zuo et al., 2019b).
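The comparison of whisk-count distributions above can be quantified with the Kullback-Leibler divergence over the two histograms. A minimal sketch follows; the epsilon smoothing is an assumed implementation detail and may differ from the thesis Methods.

```python
import numpy as np

def kl_divergence(p_counts, q_counts, eps=1e-9):
    """D(P || Q) between two empirical whisk-count histograms.
    eps guards against empty bins (assumed smoothing choice)."""
    p = np.asarray(p_counts, dtype=float) + eps
    q = np.asarray(q_counts, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Similar histograms (e.g. whisks before first touch on touch vs.
# no-touch trials) yield a small divergence; dissimilar histograms
# (whisks after first touch) yield a larger one.
low = kl_divergence([5, 20, 30, 20, 5], [6, 19, 29, 21, 5])
high = kl_divergence([5, 20, 30, 20, 5], [30, 20, 10, 10, 10])
```

A divergence near zero (as for whisks before first touch) supports identical motor programs, while a large divergence (as for whisks after first touch) indicates touch-triggered adaptation.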
We quantified the motor strategy and its precision by the angle of maximum protraction for each
whisk cycle. The first two whisks on go and nogo trials were targeted to the discrimination boundary
(Figure 2.3D). On these whisks, the pole was generally still ascending and not yet in reach. From the
third whisk, the go and nogo trials diverged. On go trials, the peak protraction settled around 10 degrees
posterior to the decision boundary, due to physical restriction by the pole (Figure 2.3D). On nogo trials,
average peak protraction was maintained at the discrimination boundary. If this average motor strategy
was executed with no variance, it would result in at least one touch for go positions and zero touches for
nogo positions on each trial; essentially transforming our precise discrimination task into an active
detection task. However, the whisk to whisk variance was large (10.9° mean stdev), suggesting a noisy
execution of the motor plan. This variance resulted in mice touching the pole on 94.6±1.5% SEM go trials
and 54.9±6.1% SEM nogo trials (Figure 2.3E). A logistic classifier based on the presence or absence of
touch discriminated go from nogo locations with 70.5%±9.5% accuracy, but mice significantly
outperformed this, correctly discriminating 81.2±5.7% of trials (Figure 2.3F). Furthermore, mice licked
on 94.7±3.6% of go trials with at least one touch (i.e. touch trials), but only licked on 43.6±13.2% of nogo
touch trials (Figure 2.3G).
These data show that mice direct their exploration to produce a difference in probability of touch
between go and nogo positions. If they fail to touch, they rarely lick, similar to a detection task. However,
if they do touch, they still are able to discriminate the object location. Thus, they must be interpreting
additional sensorimotor features of touch to locate the object.
Figure 2.3 Motor strategy and its influence on patterns of touch
A) Heat map of whisking amplitude for one mouse. Trials are sorted with first at the bottom and grouped by trial
outcome. White dots are time points of first touch. Magenta circles show time points of first lick after onset of pole
presentation. B) Whisking amplitude relative to time of pole onset for each mouse (gray) and average for all mice
(black). Mean ± std of whisking onset from cue is 60ms±16ms. C) Left, population distribution for the number of
whisks before first touch. Right, population distribution of the number of whisks after first touch and before
decision. For no-touch trials, the median first touch time for that mouse was used. Distribution difference is
quantified using Kullback-Leibler divergence (see Methods). D) Mean ± std of the peak protraction relative to the
discrimination boundary for each whisk in a go (blue) or nogo (red) trial before decision. E) Proportion of trials
with touch for each mouse. Bars = SEM. F) Trial type prediction performance of a logistic classifier based on touch
presence compared to each mouse’s trial type discrimination performance. Bars = SEM. G) The proportion of go or
nogo trials in which licking occurs conditioned on whether touch occurred on that trial. Bars = STD.
Sensorimotor features at touch that discriminate location and choice
What features of touch could mice possibly use to discriminate location? We examined how six
sensorimotor features associated with proposed hyperacute localization models (Figure 2.1) were
distributed at the instant of touch. The torsional roll angle, quantified by the apparent whisker curvature
1ms prior to touch , was greater for nogo versus go locations (Figure 2.4A). More posterior locations had
shorter average time from whisk onset to touch in each whisk cycle (Figure 2.4B). Similarly, go positions
were associated with shorter latency from cue to first touch, since mice tended to need fewer whisks
before hitting the pole (Figure 2.4C). There were more touches on go trials than nogo trials (Figure
2.4D). The radial distance from follicle to pole was greater for nogo trials (Figure 2.4E). The azimuthal
angle at touch was more protracted on nogo trials (Figure 2.4F).
Using supervised learning, we built a logistic classifier to identify trial type using each of the
above features on touch trials. By definition, touch presence had no discrimination power on these trials,
while anteroposterior pole location discriminated perfectly. We quantify performance using the Matthews
correlation coefficient (MCC) to account for the unbalanced distribution of touch trials between go and
nogo positions in the training set (Figures S2.1A and S2.1B, Methods). Unsurprisingly, radial distance
(MCC 0.98±0.02; accuracy 98.9%±0.1%) and azimuthal angle at touch (MCC 0.93±0.04; accuracy
97.0%±1.4%) were the best discriminators, since they had the least overlap and were dependent on the
task geometry rather than behavior. Roll angle (MCC 0.23±0.26; accuracy 71.3%±9.4%) and whisk
latency (MCC 0.25±0.19; accuracy 71.4%±5.9%) were the worst predictors. Cue latency (MCC
0.33±0.15; accuracy 73.1%±6.9%) and touch count (MCC 0.43±0.07; accuracy 77.4%±2.1%) were
significantly better than chance (Figures 2.4G and S1C).
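The classifier-and-metric pipeline described above can be sketched as follows. The feature values and class sizes are synthetic stand-ins, not the behavioral data; the point is that MCC, unlike raw accuracy, stays informative when go and nogo touch trials are imbalanced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import matthews_corrcoef

rng = np.random.default_rng(0)

# Synthetic single-feature data: e.g. touch count per trial, with go
# (label 1) touch trials outnumbering nogo (label 0) touch trials.
x = np.concatenate([rng.normal(5.0, 2.0, 300),    # go trials
                    rng.normal(2.0, 2.0, 120)])   # nogo trials
y = np.concatenate([np.ones(300), np.zeros(120)]).astype(int)

clf = LogisticRegression().fit(x.reshape(-1, 1), y)
pred = clf.predict(x.reshape(-1, 1))

accuracy = float((pred == y).mean())
mcc = matthews_corrcoef(y, pred)   # penalizes always-guess-majority
```

Note that a degenerate classifier that always predicts "go" would still score roughly 71% accuracy on this imbalanced set, but its MCC would be 0, which is why MCC is the more honest summary here.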
What features of touch do mice actually use to discriminate location? We built a logistic classifier
to predict choice using each of the above features. Two models based on actual pole position or all other
features combined were reference standards. Touch count, radial distance, and azimuthal angle classifiers
predicted choice best, significantly better than shuffled models (Figure 2.4H). Due to the
multicollinearity of features, a combined classifier with mean normalized features and L1 regularization
allowed us to determine which features were most predictive in each mouse (see Methods). In the
combined classifier, the average feature weight of touch count, distance, and angle were significantly
different from zero (Figure S2.2). This supports that touch count and correlates of radial distance or
azimuthal angle at touch, but not roll angle or timing based cues, are used to refine choice on touch trials.
Figure 2.4 The distribution of sensorimotor features and their utility for predicting trial type and choice
A-F) Distribution of six sensorimotor features on go and nogo trials associated with six localization models for one
example mouse. Lines show optimal logistic classifier for discriminating trial type from feature distribution. G) Trial
type prediction performance of logistic classifiers for all mice based on each of the six features. Bars = SEM. Touch
trials only. (See also Figure S1). H) Choice prediction performance of logistic classifiers for all mice trained on pole
position, each of the six features or all six features combined. Bars = SEM. Touch trials only. Significance based on
Wilcoxon signed rank vs. shuffled models. (See also Figure S2).
Identifying the simplest model that predicts choice best
Having narrowed down potential sensorimotor drivers of choice, we sought to find the simplest
set of features that predicted choice best. Touch count was the only choice-predictive feature under active
control of the mouse (distance and touch angle are primarily dependent on geometry in this task).
Therefore, we reasoned that mice may exclusively use touch count to drive choice.
If touch count was the only feature that drove choice, then mice should lick at equal probability
on go and nogo trials which have identical numbers of touches. This was not the case. Reminiscent of the
results on touch presence (Figure 2.3G), mice were significantly more likely to lick on go than on nogo
trials with equal touch counts (Figure 2.5A). Surprisingly, greater stimulus sampling (i.e. more touches)
decreased the difference in lick probability between trials with equal touches. Since touch count varies
with pole position (Figure 2.2G), we isolated the effect of touch count on choice by comparing the
difference between actual and average number of touch counts for that pole position. Trials with a higher
number of touches than usual for that position had higher lick probability (Figure 2.5B), particularly on
nogo trials. This shows that touch count has a direct effect on choice, but additional features also must be
used.
Figure 2.5 Mice discriminate location using more than touch count
A) Population average of touch count distributions and associated lick probabilities for all mice in go (blue) and
nogo (red) trials. (P-values for 0-5 touches = 0.64, 4.4e-4, 1.7e-3, 1.1e-4, 7.0e-4, 5.8e-3; 2-tailed paired t-test. [t-stat,
degrees of freedom: 0 touches=0.48, 13; 1 touch=4.9, 11; 2 touches=3.9, 13; 3 touches=5.3, 4; 4 touches=4.4, 13; 5
touches=3.5, 10]) B) Touch count influence on licking controlled for pole position. Number of touches normalized
to mean number of touches for each pole position plotted against lick probabilities for go (blue) and nogo (red)
trials. Lick probabilities are shown as mean ± 95% confidence intervals.
The two remaining choice-predictive features, radial distance and touch angle, are tightly
correlated in this anteroposterior localization task. To disentangle their influence on choice, we
introduced a task variation which decomposed the discrimination to solely depend on distance or angle.
To establish the baseline context, five expert mice were first presented 120 trials of the anteroposterior
task. Then they were presented with randomly interleaved distance and angle trials (Figure 2.6A).
Distance trials matched the contiguous distribution of radial distance while holding azimuthal angle fixed
at the value of the anteroposterior discrimination boundary. Angle trials were vice versa (Figure 2.6B).
Psychometric performance curves for angle trials were indistinguishable from anteroposterior trials in all
mice (n=5 mice, 15 sessions), while performance fell to chance levels on distance trials, with a small bias
towards licking (Figures 2.6C and 2.6D). This demonstrates that mice do not use distance to the pole to
achieve anteroposterior location hyperacuity. Instead, they use features that co-vary with the azimuthal
position of the pole.
Figure 2.6 Mice discriminate location using features correlated to azimuthal angle rather than radial distance
A) Task design. After 120 trials of anteroposterior pole presentation, angle or distance trials were presented with
50% probability. B) The angle presentation positions (blue) held distance to the discrimination boundary constant
while varying azimuthal angle across the anteroposterior task range. The distance presentation positions (cyan) held
azimuthal angle fixed to the discrimination boundary angle while varying distance across the anteroposterior task
range. Go positions spanned a range of 31±1.6° or 8-10mm distant while nogo positions spanned 19±3° or
10-13mm distant. C) Mean psychometric performance curves ± SEM for each class of trials across the population (n=5
mice, 15 sessions). D) Mean accuracy ± std across the population for each task. The mean accuracy for the angle
trials was not significantly different from the anteroposterior (p = 0.26; one-way ANOVA). Distance trial
performance was at chance, and significantly different from the anteroposterior and angle task (anteroposterior
p=9.6e-10, angle p=9.5e-10; one-way ANOVA [F-value, degrees of freedom = 96.5, 36]).
This leaves the perplexing question of how azimuthal angle at touch can influence choice, since
the azimuthal angle of the whisker is not directly encoded by primary sensory afferents (Leiser et al.,
2007, Khatri et al., 2009, Bush et al., 2016, Campagner et al., 2016, Wallach et al., 2016, Severson et al.,
2017). An influential model has been proposed for how the brain could compute this angle from mentally
accessible time-varying features. Whisker angle motion can be losslessly transformed into three
components, amplitude, midpoint, and phase, using the Hilbert transform (Figure 2.7A). These
components vary in their characteristic time-scales, with phase changing completely during a single whisk
cycle, and midpoint remaining most similar across multiple whisk cycles (Figure 2.7B). Neural correlates
of amplitude and midpoint are found in areas of M1 that project to S1 (Hill et al., 2011), while correlates
of phase are found in ascending projections from the follicle to S1 (Fee et al., 1997). The Hilbert
recomposition model supposes that these components of time-varying angle are combined with touch
time to produce a precise, unambiguous representation of azimuthal angle at touch (Kleinfeld and
Deschenes 2011) (Figure 2.1F). We tested whether the Hilbert recomposition model is consistent with
behavior by training a choice classifier on the average of each of these three components for all pre-
decision touches in each trial (Figure 2.7C). To avoid difficulties associated with fitting a periodic
variable, phase, with a logistic function, we only considered trials that had protraction touches (89.9±5.9%
of the touch trials). This Hilbert recomposition classifier performed similarly (MCC 0.51±0.17) to
angle at touch (MCC 0.48±0.19; Figure 2.7D). Thus, this model, while complex, is a plausible means of
computing whisker angle to construct location perception.
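The decomposition above can be sketched with the analytic signal. This is a minimal version assuming a band-pass around typical whisking frequencies; the cutoffs, filter order, and sampling rate are illustrative, not the thesis's exact parameters.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def decompose_whisk(angle, fs=1000.0, band=(6.0, 30.0)):
    """Split a whisker angle trace (deg, sampled at fs Hz) into
    amplitude, midpoint, and phase via the Hilbert transform."""
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    whisk = filtfilt(b, a, angle)       # oscillatory whisking component
    analytic = hilbert(whisk)
    amplitude = np.abs(analytic)        # envelope of each whisk cycle
    phase = np.angle(analytic)          # position within the whisk cycle
    midpoint = angle - whisk            # slowly varying set point
    return amplitude, midpoint, phase

# Lossless recomposition: angle == midpoint + amplitude * cos(phase)
t = np.arange(0, 2.0, 1 / 1000.0)
angle = 10.0 + 15.0 * np.cos(2 * np.pi * 10.0 * t)   # 10 Hz synthetic whisk
amp, mid, ph = decompose_whisk(angle)
recomposed = mid + amp * np.cos(ph)
```

The recomposition is exact by construction, since amplitude times the cosine of phase is just the real part of the analytic signal, which is the band-passed whisking component itself.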
To assess whether a simpler model could explain choice equally well on trials with touch, we
trained choice classifiers on only one Hilbert component at a time. On average, these classifiers
performed similarly to each other, and worse than angle or touch counts alone (Figure 2.7E). Which
classifier performed best varied between mice. To determine if touch count provided redundant or
complementary information about choice, we tested the combination of each component with touch count
on touch trials (Figure 2.7F). Adding counts improved each classifier’s performance. Midpoint + touch
count achieved the highest average performance (MCC 0.59±0.04 SEM). This was indistinguishable from
the performance of angle + touch count (MCC 0.61±0.05 SEM). Angle + touch count performance
remained significantly better than phase or amplitude + touch count (MCC 0.48±0.07 SEM; MCC
0.48±0.07 SEM). Thus, touch count provides complementary location information which, when combined
with one Hilbert component, whisking midpoint at touch, predicts mouse choice as well as models that
compute or use the exact touch angle.
While these classifiers predicted choice well for trials with protraction touch, some trials have no
touches or exclusively retraction touches. To fully assess classifier performance, we applied the same
classifiers to all trials. Since sensorimotor features at touch are undefined on trials without touch, the
classifiers were implemented in two steps. For trials without touch, choice was predicted using touch
count alone, which invariably predicted ‘no lick’ for those trials. For trials with touch, either angle or
midpoint at touch was combined with touch count to predict choice (Figures 2.7G and S2.3). These two
classifiers predicted choice equally well (midpoint + count MCC 0.71±0.03 SEM, accuracy 87.2%±3.9%;
angle + count MCC 0.72±0.04 SEM, accuracy 87.5%±4.7%) with little difference in performance between mice
(MCC r2 = 0.87; Figure 2.7H), and essentially the same performance as the protraction touch only trials.
Amplitude or phase with touch count also performed reasonably well, though significantly worse than
angle with touch count (Figure S2.4).
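The two-stage scheme can be sketched as follows. The variable names and synthetic data are illustrative; only the structure, touch count alone on no-touch trials and midpoint plus count on touch trials, follows the text.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_two_stage(touch_count, midpoint, lick):
    """Stage 1: trials without touch are predicted from count alone
    (invariably 'no lick' here). Stage 2: trials with touch use whisking
    midpoint at touch combined with touch count."""
    touched = touch_count > 0
    X = np.column_stack([midpoint[touched], touch_count[touched]])
    stage2 = LogisticRegression().fit(X, lick[touched])

    def predict(counts, midpoints):
        choice = np.zeros(len(counts), dtype=int)   # no touch -> no lick
        m = counts > 0
        if m.any():
            choice[m] = stage2.predict(
                np.column_stack([midpoints[m], counts[m]]))
        return choice

    return predict

# Synthetic session: lick iff touch occurred and midpoint was protracted
rng = np.random.default_rng(2)
counts = rng.integers(0, 6, 400)
mids = rng.normal(0.0, 1.0, 400)
licks = ((counts > 0) & (mids > 0)).astype(int)

predict = fit_two_stage(counts, mids, licks)
accuracy = float((predict(counts, mids) == licks).mean())
```

Splitting the classifier this way sidesteps the problem that sensorimotor features at touch are undefined on trials without touch.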
To determine if midpoint and touch count provide sufficient information about pole location to
account for mouse performance, we trained trial type classifiers on either midpoint + touch count or angle
+ touch count and compared their predictions to psychometric performance curves of mice. We found that
both classifiers provide sufficient information about pole position, but the midpoint + touch classifier
better fit the psychometric curves in 14/15 mice (Figures 2.7I and S2.5). Furthermore, the discrimination
resolution of midpoint + touch count was a better match to mouse performance than angle + touch count
(Figure 2.7J). Together, these data best support a simple model of active tactile perception, where mice
deploy targeted, noisy, adaptive exploration and use their sense of touch count combined with whisking
midpoint to locate objects with submillimeter precision.
Figure 2.7 Choice can be best predicted by a combination of touch count and whisking midpoint at touch
A) Time varying azimuthal angle can be transformed to the Hilbert components amplitude, midpoint, and phase.
Example exploration bout for go (blue) and nogo (red) trial. B) Average autocorrelation across all mice for angle,
amplitude, midpoint and phase. C) Choice prediction space for one mouse using Hilbert features. D) Classifier
performance measured using MCC between angle and Hilbert features (p=0.76; Wilcoxon signed-rank test). E)
Performance (MCC) of classifiers trained with individual model components versus angle at touch. Significant
differences: angle to phase (p=1.5e-2), amplitude (p=6.7e-3), and midpoint (p=1.2e-2). N.S. differences: phase to
amplitude (p=0.23), phase to midpoint (p=0.80) and amplitude to midpoint (p=0.52). All compared using Wilcoxon
signed-rank test. F) Performance (MCC) of classifiers trained with individual model components plus touch count,
versus angle at touch plus touch count. Significant differences: angle to phase (p=3.4e-3) and amplitude (p=2.0e-3).
N.S. differences: phase to amplitude (p=0.64), phase to midpoint (p=0.19), amplitude to midpoint (p=0.12) and angle
to midpoint (p=0.64). All compared using Wilcoxon signed-rank test. G) Heatmap of one sorted session task
structure, sensorimotor inputs, classifier predictions, and mouse choice. Continuous variables (pole position, touch
count, midpoint at touch, angle at touch, midpoint+touch count choice prediction, and angle+touch count choice
prediction) are normalized from minimum (-1) to maximum (+1). NaN data is gray. Categorical variables (trial type,
primary touch direction, mouse choice) are colored as in the legend. (See also Figure S3). H) Comparison of
midpoint+touch count and angle+touch count classifiers for all trials across individual mice (gray) and the
population mean±std (turquoise). (p=0.64; Wilcoxon signed-rank test). Black arrow denotes mouse shown in
example in G. (See also Figure S4). I) Psychometric curves for optimal trial type discrimination performance using
midpoint+counts and angle+counts compared against mouse choice for example mouse in G. (See also Figure S5) J)
Comparison of discrimination resolution between optimal trial type classifiers and mouse performance from Figure
2H. Shading denotes distance from discrimination boundary.
Discussion
We assessed how well the sensorimotor features associated with six models of active tactile
perception (Figure 2.1) could discriminate object position and predict choice during head-fixed
anteroposterior object localization. Mice achieved hyperacuity with a single whisker, discriminating
locations separated by ≤0.5mm and <2° (Figure 2.2). The directed and adaptive search strategy used by
mice (Figure 2.3) made the number and characteristics of touches predictive of object location and
choice (Figure 2.4). Mice discriminated location on trials with equal numbers of touches, suggesting
location perception was refined by other sensorimotor features when touch occurred (Figure 2.5). By
independently manipulating the distance or angle of the presented object during localization, we found
that azimuthal angle, but not radial distance, also drove choice (Figure 2.6). A model for computing
azimuthal angle from three Hilbert components of whisker motion predicted choice as well as angle on
touch trials (Figure 2.7). When combined with touch count, a single Hilbert component, midpoint,
predicted choice as well as azimuthal angle with touch count, showing that computing azimuthal angle is
not necessary (Figure 2.7). This supports a model where neural correlates of touch count and a single
motor feature, midpoint, are integrated to produce hyperacute perception of object location along the
anteroposterior axis of the mouse face.
We note several limitations of our work. We relied on a single overhead view of whisker
curvature to estimate torsional roll angle, which is subject to greater measurement noise than a 3D
reconstruction from multiple camera angles. We also did not account for inertial bending during
whisking. Despite these caveats, a prior study showed a tight linear relationship between whisker
curvature in an overhead projection and roll angle in this range of protraction angles in rats (Knutsen et
al., 2008), suggesting that curvature is an adequate proxy for roll angle here.
While azimuthal angle at touch nearly perfectly predicted pole location, the best classifiers only
achieved 87% performance in predicting choice. This remaining unpredictable variability may reflect
internal changes in motivation, attention, satiety, or frustration that were uncontrolled. Pupillometry and
facial expression tracking may allow a more precise accounting for the role of internal state changes on
choice in future work. Alternatively, reward or choice history may influence choice during these types of
tasks. A recent study of whisker-based object localization found that choice history had a significant
influence on error trials (Campagner et al., 2019). However, we were unable to find an influence of
choice history on choice in our current study. This may be because we restricted analysis to expert mice
in a contiguous block of 200 trials in the middle of each session, where motor engagement and
performance was high and stable.
The insights this work provides about how mice naturally explore the world are necessarily
limited by our experimental constraints: the mice are trimmed to a single whisker, head-fixed, and highly
trained within a stable environmental context. In freely moving mice, the initial strikes of an object along
a side of the face are likely to be with a single whisker due to a number of factors. Many whiskers are
often missing from cohabiting mice, due to trimming associated with social hierarchy in C57BL/6 mice
(Long 1972). The progressive length of whiskers across the pad (Hires et al., 2016) prevents short
whiskers from reaching distant objects. Rodents use tactile feedback to adapt their whisking pattern,
minimizing impingement during object contact (Mitchinson and Prescott 2013). Initial whisker contacts
are followed by a shift to asymmetric whisking attempting to bring more macrovibrissae into contact and
an orienting head movement to explore more carefully with their microvibrissae (Mitchinson et al., 2007).
Head-fixed tasks are thus relevant to the localization computations that guide contact-induced whisking
asymmetry and head orientation to objects.
Classifiers based on roll angle, whisk latency or cue latency each performed relatively poorly at
discriminating location and predicting choice. While mice roll their whiskers through cycles of
protraction and retraction, our results suggest the variance in this rotation is too great to be useful for
precise location discrimination. Experimental support for whisk latency models is primarily based on
electrically evoked artificial whisking in anesthetized rats (Brunton et al., 2013), which has minimal trial
to trial variance in whisker motion. Our results suggest that it is difficult to use timing of touch referenced
to a point of the whisk cycle for precise location discrimination during active whisking due to the
variability in amplitude, midpoint, and velocity across whisking cycles (Khatri et al., 2009). Likewise, cue
latency requires less whisking variance to be useful for precise location discrimination. On the other hand,
classifiers based directly on angle, or angle computed from Hilbert components discriminated location
and predicted choice well. Adding touch count as a feature improved choice prediction, showing its
importance for driving choice. Since choice classifiers trained on midpoint + touch count equaled the
performance of angle + touch count (Figure 2.7F-H), we conclude that mental computation of the exact
angle of the whisker at the exact time of touch (Kleinfeld and Deschenes 2011) is unnecessary to
precisely locate objects.
A pure touch count model is supported by prior work showing that the number of optogenetic
stimulation pulses applied to layer 4 (L4) of S1, but not their millisecond precise timing, influences
illusory perception of object location (O’Connor et al., 2013). Yet, we show that mice discriminate
location when identical numbers of touches occur (Figure 2.5). In the same prior study, stimulation
needed to be coincident with whisking to influence perception. This suggests that touch signals may be
referenced to a slowly changing motor variable to refine location perception during active exploration.
The use of midpoint, which is auto-correlated across whisk cycles (Figure 2.7B), may provide mice with a
way to average out the variability in whisking during bouts of exploration. Since trial type classifiers
trained on midpoint + touch count matched the discrimination performance of mice better than angle +
touch count (Figures 2.7I and 2.7J), we conclude that whisking midpoint is more likely to be used than
angle to precisely locate objects.
Perhaps our most surprising observation was that mice had a higher false alarm rate when they
made more touches (Figure 2.5). This was contrary to our expectation that those trials would either show
higher performance, since pole position would be sampled more times, or at least the same performance,
as in a decision-making model where evidence is accumulated until a confidence boundary is reached.
Rats performing active texture discrimination also vary the number of touches prior to a decision, but
their performance is independent of touch count (Zuo et al., 2019b), consistent with bounded evidence
accumulation seen in tactile (Zuo et al., 2019a), auditory (Shadlen and Newsome 2001), and visual
discrimination (Hill et al., 2008) tasks.
A key difference between active object localization and these tasks explains our surprising
observation. In texture discrimination, mice must touch the object to gather evidence. In active
localization, both the presence and absence of touch provide evidence of object location. Directed
exploration makes the act of touching, not just the properties of the touch, a location informative feature.
This distinction also explains why mice increase their motor engagement if touch occurs (Figure 2.3C).
More whisking causes more touches, but noisy, directed whisking (Figure 2.3D) makes the chance of
touching less for nogo positions than for go positions. Thus, touch count per se informs the mouse’s
choice and the closed-loop motor response to touch tends to increase the separation in distributions of this
decision informative feature between go and nogo trials.
Our finding that midpoint and touch count together best predict choice in this task has
implications for the origin and site of integration of sensorimotor signals that drive location perception.
Whisking midpoint is correlated to neural activity in M1 (Hill et al., 2011), reflects the relative activity of
intrinsic and extrinsic muscles in the whisker pad (Petreanu et al., 2009), and changes over timescales of
hundreds of milliseconds (Figure 2.7B). M1 axons strongly excite the tuft and proximal dendrites of thick
tufted L5B neurons in S1 (Petreanu et al., 2012). Calcium responses in the axons of these projection
neurons (Xu et al., 2012) and the dendritic tuft of L5B recipient neurons in S1 correlate to object location
(Ranganathan et al., 2018). Activity of these L5B neurons forms a distributed representation of object
location (Hooks et al., 2011). Meanwhile, touch count is tightly correlated to the spike count in L4
excitatory neurons of the primary whisker barrel in S1 (Hires et al., 2015). These L4 S1 neurons strongly
excite L5B proximal dendrites of S1 (Xu et al., 2012). Combining this evidence with our new behavioral
results suggests L5B neurons in S1 as a prime candidate for where midpoint and touch count signals are
integrated to drive perception of object location.
Materials and Methods
LEAD CONTACT AND MATERIALS AVAILABILITY
Further information and requests for resources and reagents should be directed to and will be
fulfilled by the Lead Contact, Samuel Andrew Hires (shires@usc.edu). This study did not
generate new unique reagents or mouse lines.
EXPERIMENTAL MODEL AND SUBJECT DETAILS
Fifteen VGAT/ChR2/EYFP mice (JAX B6.Cg-Tg), both male and female, of at least 3 months of
age were used for the following experiments. A complete description of head-plate installation, water
restriction procedure and behavioral apparatus has been described in previous work (O’Connor et al.,
2010, Guo et al., 2013). Following head-plate installation, mice were housed with littermates and singly
housed if fighting occurred. Mice were provided food ad libitum. Seven days prior to training, mice were
water restricted to 1mL of water per day. During this period, a daily health and weight assessment was
completed to ensure mice were healthy. All procedures were approved under USC IACUC protocols
20169 and 20731.
METHOD DETAILS
Object localization task
Mice were trained in a whisker-based go-nogo localization task. Using a single whisker (C2),
mice learned to identify a smooth 0.6mm diameter pole 7-12mm lateral from the whisker pad as either a
posterior rewarded location (go) or anterior unrewarded location (nogo). Pole positions were presented
across a continuous range of 10mm along the anteroposterior axis with a go/nogo discrimination
boundary at the center of this range. The pole was positioned by a pair of stepper linear actuators with
99nm resolution, 25µm accuracy and <5µm repeatability (Zaber NA11B30-T4). To avoid potential pole
motion duration cues to position, between trials the motors first moved to the discrimination boundary
then to the presentation location. To avoid potential ultrasonic cues associated with stepper motor
function, the pole location was randomly jittered 0-127 microsteps (0-25µm) on each trial. The pole was
vertically lifted into reach by a pneumatic linear slider (Festo) which also provided a sound cue on pole
presentation onset. The position of this slider and the valve, and thus the location and amplitude of this
cue sound, were fixed for all trials, as confirmed by audio recording with an Earthworks M50 ultrasonic
microphone. Mice made their decisions by licking or withholding licking to an electrical port during
stimulus presentation. Four trial outcomes were possible: hit or miss on go trials, and false alarm or
correct rejection on nogo trials, depending on whether the mouse licked. On hit trials, a water reward (4-8µL) was dispensed. The total
amount of water dispensed in a session was limited only by the number of trials the mice chose to
perform. False alarm trials led to a 2 second timeout that reset upon each lick. Correct rejection and miss
trials were unpunished.
Each trial was 4000ms or longer. The pole was triggered to rise at 500ms from trial start and
came into touch range within ~200ms. The sampling period was 0-750ms after pole onset. Licking within
this time block had no effect. The answer period was 1250-2000ms. Licking within this time block led to
Hit or False Alarm outcome. Licking in this time also prolonged the period of pole presentation to
provide the opportunity for additional sensory feedback to help learning. The extended presentation time
did not affect any analyses, since only pre-lick touches are considered in this work. The inter-trial
interval was 2000ms.
To quantify learning rates, all sessions leading up to the expert session were used, excluding one
to two rig acclimation sessions. Expert threshold was set at >75% accuracy, smoothed across 200 trials.
Training
15 mice were trained in the object localization task. In the first sessions, the farthest go position
was set ~30 degrees anterior of the resting whisker position. Optimal learning was achieved by first
setting a gap between go and nogo ranges and slowly reducing that gap as performance improved. The
initial gap between go and nogo ranges was 4mm. Once mice reached >75% accuracy over 200 trials,
this gap was reduced in 1mm increments until the go and nogo ranges were contiguous, with their shared
border defined as the discrimination boundary.
Five expert mice in the object localization task were tested on the angle/distance task. Angles and
distances were calculated from the estimated follicle position at the discrimination boundary to the full
range of pole positions in the object localization task. During the angle/distance task, 120 trials of the
object localization task were first presented to establish baseline performance levels. Next, angle trials or
distance trials were presented at 50% chance levels for the remainder of the session.
Whisker motion acquisition and analysis
Whisker behavior was captured for 4 seconds, spanning from before pole onset to the response
window. Video frames were acquired at 1000fps using a Basler acA200-340kmNIR camera and an Edmund
Optics 0.18X ½" GoldTL™ Telecentric Lens (Model # 52-258) under 940nm illumination on Streampix
6 software. Whisker position was tracked using Janelia Whisker Tracker (https://www.janelia.org/open-
science/whisk-whisker-tracking; Clack et al., 2012). A mask was traced along the edge of the fur, and the
whisker follicle position was estimated 1mm back from the mask. The whisker's azimuthal angle was quantified at
the point of intersection of the mask and whisker trace, to avoid tracking noise in the fur. Whisking
midpoint, amplitude and phase were decomposed from this angle using the Hilbert transform. Hilbert
decompositions were calculated from band-pass filtered (6-60Hz, Butterworth) whisker angle time-series.
Whisking amplitude is defined as the magnitude of the Hilbert transform of the filtered whisker angle.
Whisking midpoint is defined as the difference between the raw whisker angle time-series and the
band-pass filtered (6-60Hz) signal. Whisking phase is defined as the phase angle of the Hilbert
transform of the filtered whisker angle time-series. Whisker curvature was measured at 3-5mm out from
the mask.
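The decomposition described above can be sketched as follows. The original pipeline used MATLAB; this is an illustrative Python version using SciPy, in which the function name, filter order, and the synthetic trace are assumptions for demonstration only.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def decompose_whisker_angle(angle, fs=1000.0, band=(6.0, 60.0)):
    """Split a whisker angle time series (degrees, sampled at fs Hz) into
    whisking amplitude, midpoint, and phase via the Hilbert transform."""
    # Band-pass filter (6-60 Hz Butterworth) to isolate the whisking oscillation.
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
    whisk = filtfilt(b, a, angle)          # zero-phase filtering
    analytic = hilbert(whisk)
    amplitude = np.abs(analytic)           # envelope of the whisking cycle
    phase = np.angle(analytic)             # position within the whisk cycle
    midpoint = angle - whisk               # slowly varying set-point of whisking
    return amplitude, midpoint, phase

# Synthetic 2 s trace: 15 Hz whisking around a slowly drifting midpoint.
t = np.arange(0, 2, 0.001)
angle = 10 * np.sin(2 * np.pi * 15 * t) + 5 * t + 20
amp, mid, ph = decompose_whisker_angle(angle)
```

On this synthetic trace, the recovered amplitude hovers near the 10-degree oscillation envelope and the midpoint tracks the slow drift, mirroring how the decomposition separates fast whisking from set-point changes.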
The precise millisecond of touch was determined through custom MATLAB software
(https://github.com/hireslab/HLab_Whiskers) using distance to pole and change in whisker curvature,
followed by manual curation of images of uncertain whisker and pole intersections.
QUANTIFICATION AND STATISTICAL ANALYSIS
In all analyses, we considered only whisker motion and touch before the decision lick, the first
lick of the answer period. On trials without licking, the median decision lick time on lick trials was used
as the decision point. Licks before the answer period were ignored. To minimize the effects of changing
internal states of motivation, attention, satiety or frustration, the set of the 200 highest-performing
contiguous trials in a single session per mouse was used for all analyses. Trials (0-15 per session) where the animal
was grooming or the video dropped 1 or more frames were removed from this set of 200.
Adaptive whisking analyses
Pre-touch windows were defined as the time from stimulus onset to first touch. Post-touch
windows were set as time of first touch to the first lick. If no first touch or first lick was present, the
median first touch time or median first lick time of the session was used. A whisk is defined as a
whisking peak with a whisking amplitude of 5 or greater. The difference in distributions was
quantified using Kullback-Leibler divergence, computed with the kl_div function from MATLAB Central
(https://www.mathworks.com/matlabcentral/fileexchange/20688-kullback-leibler-divergence).
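The whisk-counting and distribution-comparison steps above can be sketched in Python (the original analysis used MATLAB's kl_div; the function names and threshold handling here are illustrative assumptions):

```python
import numpy as np
from scipy.signal import find_peaks

def count_whisks(angle_band, amplitude, min_amp=5.0):
    """Count whisk cycles: peaks in the band-passed whisker angle whose
    instantaneous whisking amplitude is at least min_amp."""
    peaks, _ = find_peaks(angle_band)
    return int(np.sum(amplitude[peaks] >= min_amp))

def kl_divergence(p_counts, q_counts, eps=1e-10):
    """Discrete KL divergence D(P||Q) between two histograms of whisk counts
    (e.g. pre-touch vs. post-touch windows across trials), in bits."""
    p = p_counts / p_counts.sum()
    q = q_counts / q_counts.sum()
    p, q = p + eps, q + eps    # avoid log(0) for empty bins
    return float(np.sum(p * np.log2(p / q)))
```

For identical histograms the divergence is ~0; it grows as the pre- and post-touch whisk-count distributions separate.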
Trial type and choice prediction
Retraction and protraction touches occur with an offset of ~π radians in phase, which makes phase
difficult to express as a linear function. Therefore, we excluded retraction touches, and trials with
exclusively retraction touches, from the Hilbert transform decoders (Figures 2.7C-2.7F). For all other
analyses, retraction touches were included.
The list of features used to predict trial type (go/nogo) or choice (lick/no lick) and their
description are:
• motor position (the horizontal motor position in microsteps for each trial)
• touch presence (the presence or absence of a touch pre-decision)
• touch counts (the number of touches pre-decision)
• roll angle (the mean whisker curvature 1ms prior to touch for each trial)
• whisk latency (the mean time in milliseconds from the nearest whisking trough
prior to touch for each trial)
• cue latency (the time of first touch from cue onset in milliseconds)
• radial distance (the mean radial distance from follicle at touch to pole position for
each trial)
• angle (the mean whisker angle at touch for each trial)
• phase (the mean phase of the whisker at touch for each trial)
• amplitude (the mean amplitude of the whisker at touch for each trial)
• midpoint (the mean midpoint of the whisker at touch for each trial)
• combined (curvature, cue latency, whisk latency, touch counts, radial distance
and angle for each trial)
• Hilbert decomposition (phase, amplitude, and midpoint)
For features using multiple predictors, each feature was mean normalized using the following
equation:
$$x' = \frac{x - \mathrm{mean}(x)}{\max(x) - \min(x)}$$
The logistic classifier was adapted from Andrew Ng’s Machine Learning Course
(https://www.coursera.org/learn/machine-learning) and modified to include lasso regularization.
Sigmoid link function:
$$h_\theta(x) = g(\theta^{T} x), \quad \text{where } g(z) = \frac{1}{1 + e^{-z}}$$
Cost function:
$$\mathrm{Cost}(h_\theta(x), y) =
\begin{cases}
-\log(h_\theta(x)) & \text{if } y = 1 \\
-\log(1 - h_\theta(x)) & \text{if } y = 0
\end{cases}$$

$$J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \left[ -y_i \log(h_\theta(x_i)) - (1 - y_i) \log(1 - h_\theta(x_i)) \right] + \text{Regularization}$$
L1 lasso regularization equation:
$$\lambda \sum_{i=1}^{N} |\theta_i|$$

where $\lambda$ is the regularization parameter, $\theta_i$ are the partial regression coefficients of the model, and $N$ is the number of parameters.
Gradient (partial derivative of the cost function):
$$\frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x_i) - y_i \right) x_j^{(i)}$$
The cost function was minimized with the MATLAB function fmincg, whose inputs are the cost $J(\theta)$ and the gradient $\partial J(\theta) / \partial \theta_j$.
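The classifier described above can be sketched compactly in Python. The original implementation minimized the cost with MATLAB's fmincg; plain gradient descent with an L1 subgradient stands in here, the mean normalization follows the equation above, and all names are illustrative rather than the study's code.

```python
import numpy as np

def sigmoid(z):
    """Logistic link g(z) = 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def cost_grad(theta, X, y, lam):
    """Cross-entropy cost J(theta) plus an L1 (lasso) penalty, and its
    (sub)gradient. X is (m, n) with a leading column of ones; y is in {0, 1};
    the bias term theta[0] is left unpenalized."""
    m = len(y)
    h = np.clip(sigmoid(X @ theta), 1e-12, 1 - 1e-12)  # guard against log(0)
    J = -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))
    J += lam * np.sum(np.abs(theta[1:]))
    grad = X.T @ (h - y) / m
    grad[1:] += lam * np.sign(theta[1:])               # subgradient of L1 term
    return J, grad

def fit(X, y, lam=0.01, lr=0.5, iters=2000):
    """Minimize J by plain gradient descent (the original work used fmincg)."""
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        _, grad = cost_grad(theta, X, y, lam)
        theta -= lr * grad
    return theta

def mean_normalize(x):
    """x' = (x - mean(x)) / (max(x) - min(x)), applied per feature column."""
    return (x - x.mean(axis=0)) / (x.max(axis=0) - x.min(axis=0))
```

On a toy separable problem (one informative feature plus a bias column), the fitted classifier recovers the decision boundary, which is all the sketch is meant to demonstrate.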
Classifier model evaluation
For each set of features, the optimal regularization parameter λ, classifier performance, and partial
regression coefficients θ were evaluated across 20 iterations of 5-fold stratified cross-validation. The
optimal λ was chosen as the mean of the peak λ and the first λ one SEM away from the peak.
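The λ selection rule could be sketched as below, assuming the "one SEM away" λ is sought on the stronger-regularization side of the peak (this direction is an assumption; the text does not specify it, and the function name and example numbers are illustrative):

```python
import numpy as np

def choose_lambda(lambdas, mcc_mean, mcc_sem):
    """Pick the regularization strength from a cross-validated MCC-vs-lambda
    curve: average the peak-MCC lambda with the first larger lambda whose
    mean MCC falls at least one SEM below the peak."""
    lambdas = np.asarray(lambdas, dtype=float)
    mcc_mean = np.asarray(mcc_mean, dtype=float)
    peak = int(np.argmax(mcc_mean))
    threshold = mcc_mean[peak] - mcc_sem[peak]
    for j in range(peak + 1, len(lambdas)):
        if mcc_mean[j] <= threshold:
            return (lambdas[peak] + lambdas[j]) / 2.0
    return lambdas[peak]  # no one-SEM drop in range; keep the peak lambda

# Illustrative CV performance over a lambda sweep.
lams = [1e-3, 1e-2, 1e-1, 1.0, 10.0]
mcc_curve = [0.50, 0.70, 0.60, 0.40, 0.20]
sems = [0.05, 0.05, 0.05, 0.05, 0.05]
opt = choose_lambda(lams, mcc_curve, sems)  # mean of 0.01 (peak) and 0.1
```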
Classifier performance was calculated using the Matthews correlation coefficient (MCC). MCC
provides an unbiased metric of model performance on imbalanced datasets (Boughorbel et al.,
2017). MCC values range from -1 to 1, with 1 meaning perfect prediction, 0 chance-level prediction,
and -1 total disagreement between prediction and observation. The MCC was calculated using the following equation:
$$MCC = \frac{TP \times TN - FP \times FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}$$
where TP, TN, FP, and FN are the numbers of true positive, true negative, false positive, and false
negative predictions, respectively.
In order to interpret the weight of the logistic classifier, partial regression coefficients were
converted to odds ratios using the following equation:
$$\mathrm{Odds\;Ratio} = e^{\theta}$$
Odds ratios were normalized between 0 and 1 and multiplied by their respective sign for each
cross-validation step and averaged to calculate the normalized weight of each feature in prediction.
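Both computations are small enough to sketch directly; an illustrative Python version (not the original MATLAB code, and the function names are assumptions):

```python
import numpy as np

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return num / den if den > 0 else 0.0  # conventionally 0 if any margin is empty

def odds_ratio(theta):
    """Convert logistic partial regression coefficients to odds ratios."""
    return np.exp(np.asarray(theta, dtype=float))
```

Perfect prediction gives an MCC of 1, pure chance 0, and total disagreement -1, matching the description above; a coefficient of 0 maps to an odds ratio of 1 (no effect).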
DATA AND CODE AVAILABILITY
The datasets generated during this study are available on the Hires Lab Dropbox repository at
(https://www.dropbox.com/sh/bjla01r0bzt49j7/AAAzMjaq2mZSH5Gp8sf_UY5ga).
The code generated during this study and any updated links to the datasets are available on
GitHub at (https://github.com/hireslab/Pub_LocalizationBehavior).
Supplemental Figures
Figure S2.1. MCC versus accuracy for the trial type prediction models, Related to Figure 4. A) P-value
comparisons of trial type predictive accuracy between pairs of features using a one-way ANOVA. B) In a balanced
distribution of predicted values (e.g. trial type), accuracy and MCC provide complementary information about
model performance (r = 0.93, R2 = 0.85). C) Model accuracy reported instead of MCC for Figure 4G.
Figure S2.2. Combined model weights and lasso feature reduction in choice prediction, Related to Figure 4.
A) Comparison of normalized odds for predicting choice in the combined model. Gray dots denote the weights for
each mouse and turquoise dots denote the mean ± SEM across the population. B) Penalty plots for the population
mean. C) MCC performance as a function of lambda. Green dots denote peak MCC value, blue dots denote MCC
values 1 SEM away from peak, and red dot denotes mean lambda of peak and 1 SEM away (i.e. chosen optimal
lambda). D) Penalty plots for individual mice. E) p-values comparing classifier weights; one-way ANOVA.
Figure S2.3. Heatmap of model comparisons as in Figure 7G for each mouse, Related to Figure 7.
Figure S2.4. Comparison of touch count + touch angle classifier performance versus touch count + each
Hilbert component individually, Related to Figure 7.
Figure S2.5. Comparison of models for optimal trial type discrimination and mouse behavior, Related to
Figure 7. A) Psychometric curves for optimal trial type discrimination performance using counts, midpoint+counts
and angle+counts compared against mouse choice. B) Comparison of mean absolute error between models and
mouse choice for counts (0.19 ± 0.07 SD), angle+counts (0.17 ± 0.07 SD) and midpoint+counts (0.13 ± 0.06 SD)
classifiers (paired t-test). Red filled dot denotes example in Figure 7I.
Chapter 3: Active touch remaps barrel cortex output from a
representation of self-motion to object location.
Summary
During active tactile exploration, the dynamic patterns of touch are transduced to electrical
signals and transformed by the brain into a mental representation of the object under investigation. This
transformation from sensation to perception is thought to be a major function of the mammalian cortex. In
primary somatosensory cortex (S1) of mice, layer 5 (L5) pyramidal neurons are major outputs to
downstream areas that influence perception, decision-making, and motor control. We investigated self-
motion and touch representations in layer 5 of S1 with juxtacellular loose-seal patch recordings of
optogenetically identified excitatory neurons. We found that during rhythmic whisker movement, 66% of
neurons represent self-motion. This population was significantly more modulated by whisker angle than
by phase. Upon active touch, a distinct pattern of activity was evoked across L5, which represented the
whisker angle at the time of touch. Object location was decodable with submillimeter precision from the
touch-evoked spike counts of a randomly sampled handful of these neurons. These representations of
whisker angle during self-motion and touch were independent, both in the selection of which neurons
were active, and in the angle-tuning preference of co-active neurons. Thus, the output of S1 transiently
shifts from a representation of self-motion to an independent representation of explored object location
during active touch.
Introduction
A major function of the mammalian cortex is to integrate sensory input with self-knowledge to
form mental representations of the external world to guide flexible behavior (Francis and Wonham 1976,
Wolpert et al., 1995). Object location is one such representation, and it is essential for skillful navigation
and object interaction (Vincent 1912, Sofroniew et al., 2015, Høydal et al., 2019). Object locations can be
rapidly and accurately identified via active touch (Lederman and Klatzky 1987, Knutsen et al., 2006,
Mehta et al., 2007, Horev et al., 2011, Cheung et al., 2019). In active touch, mechanosensory input is
thought to be referenced to the movement and position of tactile sensors to produce a mental percept not
of the self, but of the object under investigation (Wolpert et al., 1995). Determining where and how these
sensory and motor signals are transformed by neural circuits into a representation of the external world
would improve our understanding of brain function.
Head-fixed mice are an excellent model system to investigate the neural basis of object
localization. They can locate objects along the anteroposterior axis of the face with submillimeter
precision by sweeping a single whisker back and forth (i.e. whisking; Cheung et al., 2019) and
interpreting the mechanically-evoked neural activity patterns transduced in the follicle which holds the
whisker (Li et al., 2011; Severson et al., 2017; Furuta et al., 2020). Similar sensorimotor mechanisms may
underlie texture discrimination in rodents (Jadhav et al., 2009, Isett et al., 2018, Zuo et al., 2019a) and
tactile sensing with tools in humans (Chan & Turvey 1991; Miller et al., 2019). High-speed whisker
imaging and mechanical models of whisker deformation provide rich knowledge of the sensory input and
motor program underlying the computation of object location (Knutsen et al., 2008; Clack et al., 2012,
Hires et al., 2013, Pammer et al., 2013, Hires et al., 2016, Vaxenburg et al., 2018). Early cortical
processing of tactile input is topographically organized into columns of primary somatosensory cortex
(S1) that have a one-to-one correspondence with large facial whiskers (Woolsey and Van der Loos 1970).
Intrinsic signal imaging allows whisker-specific neural activity to be targeted for electrical recording
(Masino et al., 1993; O’Connor et al., 2010). Furthermore, transgenic mouse lines allow assignment of
observed neural activity patterns to neurons of specific types (Aronoff and Petersen 2008, Gerfen et al.,
2013). Thus, mice allow a dissection of self-motion and object location representation at behavioral,
perceptual, computational, and neural circuit levels.
A prime candidate for the construction of neural representations of object location are layer 5
(L5) pyramidal neurons of S1. S1 activity is required for whisker-based anteroposterior object localization
(O’Connor et al., 2010) (though not object detection; Hong et al., 2018). L5 pyramids contain the major
output of S1 to cortical and subcortical targets involved in decision making, action selection, and motor
control (Lévesque et al., 1996, Kita and Kita 2012, Shepherd 2013; Gerfen et al., 2013). Distinct cellular
compartments of L5 pyramids receive sensorimotor features that are assembled in models of object
location representation and perception (Kleinfeld & Deschenes 2011; Cheung et al., 2019). These features
include sensory representations of self-motion from ventral posteromedial nucleus of the thalamus (VPM)
(Armstrong-James, et al., 1992; Fee et al., 1997) of touch from VPM and Layer 3 and 4 (L3, L4) of S1
(de Kock et al., 2007; Petreanu et al. 2009, Hires et al., 2015), and efference copy from primary motor
cortex (M1) (Hill et al., 2011; Petreanu et al., 2012). Object location-specific calcium responses have
been observed in tuft dendrites (Xu et al., 2012), apical trunk, and soma (Ranganathan et al., 2018) of L5
pyramids. Thus, L5 pyramidal neurons have the appropriate inputs and outputs to transform sensation into
an object location representation that guides flexible behavior.
Here, we use single-unit juxtacellular electrophysiology to investigate the neural representation of
sensory input, self-motion, and object location in L5 excitatory neurons during behavior. Over half of the
active neurons encode self-motion during free-whisking, while a third encode the location of touched
objects. This encoding does not require specialized training. Population responses to touch can decode
object location with submillimeter accuracy. Contrary to expectations, the cellular identity and positional
preferences of the touch-evoked object location representation were uncorrelated with the self-motion
representation during free-whisking. Thus, touch activates an independent representation of object
location in L5, rather than amplifying an underlying representation of self-motion. These data suggest that
a perceptual transformation from self to sensed object is accomplished by neural circuits interacting with
L5 pyramidal neurons of S1.
Results
Experimental design
To investigate the organization of neural representations in L5B during whisker-mediated
exploration, we used variations of a go/no-go whisker-guided object localization task in head-fixed mice
(O’Connor et al., 2010). Water-restricted mice (n=16 Vgat-ChR2-EYFP) were trained to whisk and
contact a smooth vertical pole presented randomly across a contiguous range (10 mm) of pole positions
along the anteroposterior axis about 8mm lateral from the whisker pad (Figure 3.1A). Mice were
trimmed to a single whisker (C2) across all training and recording sessions. Whisker motion and object
interactions were tracked from an overhead view at 1000 fps (Figure 3.1B). Whisker traces were
converted to time series of whisker azimuthal angle (i.e. angle), the Hilbert decomposition of amplitude,
midpoint and phase (Curtis & Kleinfeld, 2009), and touch (Figure 3.1D). We serially recorded
optogenetically tagged single neurons via blind juxtacellular loose-seal patch in and around L5B of the
C2 whisker representation of S1 (Figures 3.1D, S3.1A and S3.1B; Methods). Each trial consisted of a 0.5 s
pre-pole period, followed by a 0.75 s stimulus sampling period, and then a 1.25 s answer period where
licks triggered water dispensing or a brief time out (Figure 3.1E). We recorded from 156 single units
during active touch behavior. Twenty units were silent and 14 others were putative inhibitory neurons,
based on short latency spiking in response to illumination of S1 with 473nm light (Figure S3.1C),
leaving 122 active excitatory neurons. To quantify the neural representation of sensorimotor features, we
correlated these features to the times of detected action potentials.
Figure 3.1 Head-fixed task and in-vivo juxtacellular electrophysiology
A) Schematic of task. Mice sweep a whisker forward and backward to locate a pole (black cylinder) presented along
the anteroposterior axis. Angle is the azimuthal angle of the whisker at the follicle relative to the mediolateral axis of
the animal. B) Overhead view of whisker traces captured from a single trial. A mask (gray) crops traces near fur. C)
Angle time series can be decomposed into the Hilbert components amplitude, midpoint, and phase. D) Selected
excitatory flow into L5 neurons of S1 (border depth in µm and dendritic arbors from Lefort et al., 2009). E) Trial
structure with example traces of recorded stimuli and spikes. Phase masked to periods of amplitude > 5. Pole
presentation is triggered 500 ms from trial start and takes ~200 ms to come into reach. Pole exits at varying times
based on trial events.
Representation of self-motion
We first examined the neural representation of self-motion during free-whisking (Figure S3.2A).
Most neurons (96 of 122 active units) were significantly (Chi-squared test) modulated (positively or
negatively) by whisking, with the mean firing rate significantly increased from 5.0 ± 5.6 spks/s (mean ±
SD) during non-whisking to 6.0 ± 7.1 spks/s (mean ± SD) during whisking (Figure 3.2A). The bulk of
this increase occurred among neurons with non-whisking firing rates in excess of 5 spks/s. Whisking
tuned neurons were relatively uniformly distributed across the recorded depth (Figure S3.2B). Since
whisking was volitional (Cheung et al., 2019), we could not dictate the exploration time or range of the
mouse (Figures S3.2C and S3.2D), but many neurons (60/122) were significantly modulated with
respect to whisker angle within the chosen range of whisking (Figures 3.2B and 3.2C). Across the
population, preferred angles spanned the range of whisking (Figures 3.2C and S3.2E). Most of these
neurons (46/60) were also modulated by phase in the whisker cycle (Figures 3.2B, 3.2D and S3.2F) with
representations tiling the phase space. Across the population of 122 active units, 46 were co-tuned to
whisker phase and angle, 14 to angle only, 0 to phase only, and 62 to neither (Figure 3.2E). Among
neurons tuned to at least one, the mean depth of modulation to angle was significantly greater than to
phase (p = 8.9e-6, Figures 3.2F and 3.2G, Methods). Greater modulation to angle was more correlated
with greater modulation to whisking midpoint than to amplitude, phase or velocity (Figures 3.2H and
3.2I). The absolute modulation depth of midpoint was most similar to that of angle (Figures 3.2J). This
suggests that midpoint-correlated inputs are important for constructing an angle-tuned representation,
which is consistent with the importance of midpoint in predicting choice during object localization (Cheung et
al., 2019). These data show that during free-whisking, L5 excitatory neurons encode a representation of
self-motion that is more specific to whisker angle than to phase of the whisk cycle.
Figure 3.2 L5B excitatory neurons encode a representation of self-motion during free-whisking
A) Firing rates for non-whisking (5.0 ± 5.6 Hz) and whisking periods (6.0 ± 7.1 Hz) (p=2.9e-2, t-stat 2.2, df 121,
paired sample t-test). Data are represented as mean ± S.D. B) Three example units tuned to both whisking angle
(blue) and phase (red). C) Population heat map for units tuned to whisker angle sorted by peak angle response. D)
Population heat map of phase tuned units sorted by peak phase response. E) Pie chart of self-motion tuning across
the L5B excitatory population. Phase (red, 0/122), angle (blue, 14/122), co-tuned (black, 46/122) and not tuned to
either (gray, 62/122). F) Normalized positional preference for the 3 examples in B, phase (red), angle (blue). G)
Absolute modulation depth (Methods) comparison between free-whisking phase and angle tuning. Red dot and error
bars denote phase/angle mean ± SEM (4.2/7.1 ± 0.5/0.8 spks/s, p = 8.88e-6, t-stat= -4.87, df = 59; paired t-test). H)
Modulation depth of angle, phase, amplitude, and midpoint for all angle tuned units. I) Contingency table of Pearson
correlation coefficients for modulation depths across angle and motor variables. J) Difference in absolute
modulation between angle and motor variables (motor – angle modulation). Phase to angle (mean ± SEM = 2.8 ±
0.6, p = 8.9e-6, t-stat = 4.9, df = 59). Amplitude to angle (mean ± SEM = 1.7 ± 0.6, p = 8.2e-3, t-stat = 2.7, df = 59).
Midpoint to angle (mean ± SEM = 1.2 ± 0.5, p = 0.01, t-stat = 2.7, df = 59). Velocity to angle (mean ± SEM = 2.6 ±
0.6, p = 2.4e-5, t-stat = 4.6, df = 59). All compared using paired t-test.
Representation of object location
We then examined sensorimotor representations in the same neurons during active touch. Of the
active excitatory neurons, 54 out of 122 were excited by touch. Touch responses were temporally sharp
with short latency (Table 3.1, Figure S3.3A). In 42 of the 54 touch neurons, the number of spikes evoked
depended on the anteroposterior position of the pole (Figure 3.3A). These touch location tuned
neurons were concentrated between 690-890µm from pia (Figure 3.3B), roughly corresponding to Layer
5B (Figure 3.1D; Lefort et al., 2009). There was minimal shift in follicle position across numerous
touches in a trial (Figure S3.3B and S3.3C). Touch location tuning was driven by a greater probability of
spiking and a greater number of spikes evoked per touch (Figures S3.3D and S3.3E). Across the tuned
population, the preferred object location spanned the entire range of touched pole locations (Figures 3.3C
and 3.3D). The mean half-max response width was 1.8 mm (~9.2° of azimuth) (Figure 3.3E). Thus, this
subpopulation of L5B excitatory neurons forms a distributed neural code for touched object location.
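A half-max response width like the one reported above could be computed from a location tuning curve as in this illustrative Python sketch; the function name and the Gaussian example are assumptions, and the exact procedure used in the study may differ:

```python
import numpy as np

def half_max_width(positions, rates):
    """Width (in position units) of the contiguous region around the peak of a
    tuning curve where the response exceeds half of its peak above baseline."""
    rates = np.asarray(rates, dtype=float)
    baseline = rates.min()
    half = baseline + 0.5 * (rates.max() - baseline)
    above = rates >= half
    # Walk outward from the peak while the curve stays above half-max.
    i = j = int(np.argmax(rates))
    while i > 0 and above[i - 1]:
        i -= 1
    while j < len(rates) - 1 and above[j + 1]:
        j += 1
    return positions[j] - positions[i]

# Gaussian tuning curve over a 10 mm range of pole positions (0.1 mm grid).
pos = np.linspace(0, 10, 101)
rates = 20 * np.exp(-((pos - 5.0) ** 2) / (2 * 0.8 ** 2))
width = half_max_width(pos, rates)  # close to the analytic FWHM of ~2.35*sigma
```

Restricting the width to the contiguous region around the peak avoids inflating it with secondary bumps elsewhere along the tuning curve.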
Figure 3.3 L5B S1 excitatory units are tuned to object location at touch.
A) Raster for example neuron (top) tuned to far object locations (bottom). B) Average firing rate vs. depth from pia
for active non-location (black), location (gold), and silent (gray) units. C) Touch peri-stimulus time histogram (left)
and location tuning curves (right) for three example units tuned to far (top), middle (middle), and close (bottom)
pole positions. Data are represented as mean ± SEM. D) Population heat map of object-location tuned units, sorted
by preferred location. White spaces are insufficiently sampled pole locations. E) Shape of normalized tuning curves
across all object-location tuned units. Data are represented as mean ± SEM. Mean half-max width response was 1.8
mm (~9.2° of azimuth).
Table 3.1: Table comparing properties of non-touch (n=68), non-location touch units (n=12), and location touch
units (n=42).
                                               |      Non-touch units (n=68)       |  Non-location touch units (n=12)  |   Location touch units (n=42)
                                               | Mean   SD     Median  min - max   | Mean   SD    Median  min - max    | Mean   SD     Median  min - max
Whisking (Hz)                                  | 4.47   5.81   1.38    .01 - 23.86 | 4.14   4.33  2.67    .20 - 26.33  | 9.10   10.20  5.44    .06 - 41.70
Quiet (Hz)                                     | 4.27   5.26   2.15    .04 - 43.89 | 3.79   4.38  1.53    .10 - 29.05  | 6.65   6.37   4.41    .04 - 28.81
Proportion of spikes evoked by touch           | 0.19   0.17   0.13    .00 - .72   | 0.33   0.23  0.26    .05 - .72    | 0.41   0.23   0.33    .09 - .99
Proportion of spikes evoked by touch + whisking| 0.45   0.19   0.42    .10 - .88   | 0.52   0.21  0.56    .18 - .86    | 0.60   0.20   0.60    .16 - .99
Touch onset latency (ms)                       |                                   | 12.33  6.33  11.00   6.00 - 26.00 | 10.12  4.39   9.50    4.00 - 22.00
Touch response duration (ms)                   |                                   | 17.25  8.84  16.00   4.00 - 34.00 | 18.52  9.75   17.50   4.00 - 43.00
Spikes in response window (#)                  |                                   | 0.40   0.33  0.37    .04 - 1.25   | 0.74   0.80   0.52    .07 - 3.60
Probability of touch response                  |                                   | 0.29   0.19  0.26    .03 - .64    | 0.43   0.29   0.43    .05 - .93
Probability of response at peak bin            |                                   | 0.36   0.19  0.33    .07 - .84    | 0.55   0.27   0.59    .09 - .99
Probability of response at trough bin          |                                   | 0.21   0.23  0.12    .00 - .65    | 0.26   0.24   0.18    .00 - .88
Response at peak bin (Hz)                      |                                   | 24.97  10.32 22.44   48.37 - 233.33 | 51.87 35.49 42.15   45.45 - 200.00
Response at trough bin (Hz)                    |                                   | 11.58  10.47 6.99    45.88 - 200.00 | 19.97 21.60 13.79   27.00 - 200.00
Touch location tuning did not require training in whisker-guided location discrimination. We
performed recordings in two related tasks. In 92 naïve recording sessions with untrained mice (n=10),
water rewards were given randomly on 50% of the trials, regardless of pole location, while in 30 trained
sessions, trained mice (n=6) were first trained to discriminate go and nogo locations as in Cheung et al.,
2019 (Figures 3.4A and 3.4B) with water only available in the posterior go range. Trained mice made
significantly more touches with less time spent whisking than naïve mice (Figure S3.4A). However, there
was no significant difference in the proportion of touch responsive units that were tuned to object location
between naïve (n=26/35, 74.3%) and trained (n=14/19, 73.7%) mice (p = 1.0, Fisher’s exact test; Figure
3.4C), though we did observe a larger proportion of touch responsive units in trained animals (Figure
S3.4B). The width of the tuning was indistinguishable between the groups (Figure 3.4D), and the
preferred locations spanned the full range of presented locations in both naïve and trained mice (Figures
3.4E and 3.4F).
Figure 3.4 Object location tuning does not require specialized training.
A) Schematic of two tasks. Mice were presented a pole randomly in a 10 mm range, 7 - 12 mm from the face. In the
naïve task, reward was available on 50% of trials, regardless of pole position. In the trained task, reward was
exclusively available in the 0-5 mm proximal go range. B) Performance on naïve (49.3% ± 3.1% mean ± SD) and trained
(66.9% ± 8.1% mean ± SD) recording sessions (p = 2.4e-34, t-stat = 17.2, df = 120, unpaired t-test). C) Proportion
of touch units that are location tuned for naïve (left; 77.1%) vs trained (right; 78.9%) animals. D) Shape of
normalized tuning curves for touch location units from naïve (gray) and trained (red/blue) mice. E) Population
heatmap of touch location units from naïve (top 27 units) and trained (bottom 15 units) animals, sorted by preferred
object location. Each row denotes a single location neuron. F) Histogram of positional preference of touch location
units compared between naïve and trained animals. (p=0.12, t-stat = -1.6, df = 40; two-sample t-test)
To access location information from a distributed representation, downstream neurons must
sample multiple members of the representing population. However, the number of possible inputs to a
neuron is limited. Thus, we wondered how accurately the object location could be determined from
varying numbers of randomly sampled object-location tuned neurons. We constructed a multinomial
generalized linear model (GLM) to predict the location of the pole from the distribution of the number of
spikes evoked by single touches (Methods, Figure S3.3F). A linear classifier pooling the touch-evoked
spike counts from 25 of our location tuned neurons (the subset with > 75 touches in > 80% of binned pole
positions) predicted the pole location to < 0.5 mm distance from actual on 60.5% ± 1.3% (mean ± S.D.) of
touches (Figures 3.5A and 3.5B).
Our prior work showed that expert mice discriminate location to < 0.5 mm resolution in this task
(Cheung et al., 2019). How many location tuned neurons are required to meet or exceed the psychometric
performance of these expert mice? We constructed neurometric performance curves from the predicted
object locations (Figure 3.5C). Random sampling from five or more location tuned neurons produced
virtual performance that met or exceeded expert behavior (Figures 3.5D, Methods). This suggests that
downstream neurons that sample from at least five location-tuned 5B neurons have access to a touch-by-
touch object-location representation that meets or exceeds the behavioral performance of the mouse.
Figure 3.5 Object location is decodable to <0.5 mm precision from touch-evoked spike counts.
A) Contingency table of pole location decoding performance from 25 pooled unique touch location units using a
multinomial GLM. B) Performance as a function of pooled neuron count. C) Average psychometric curves from 15
expert mice (gray; Cheung et al., 2019) and neurometric curves from varying numbers of sampled location units. D)
Performance from neurometric curves compared to expert mice. Data are represented as mean ± S.D. Solid black
lines denote points significantly different (p < 0.05; two-sample t-test) from expert mouse performance.
Active touch remaps, rather than amplifies self-motion tuning
Does touch amplify an underlying whisker angle representation during free whisking? Or, does
touch evoke an object-location representation that is independent of the free-whisking representation?
Multiple lines of evidence support the independent model. First, 50% of neurons were tuned to angle
during free whisking, and 36% were tuned to angle during touch, but only 20% of neurons were tuned to
angle under both conditions (Figures 3.6A). Thus, tuning during free whisking is neither necessary nor
sufficient to exhibit angle tuning during touch. Moreover, this co-tuned overlap is nearly identical to the
expected overlap (18%) if the two representations were independently distributed across the population.
We compared angle tuned responses between free-whisking and touch in individual neurons
using spike integration windows derived from each neuron’s touch-evoked response (Figures 3.6B and
S3.5A). Whisker angle at touch is tightly correlated with object location (Figure S3.3B) and serves as a proxy
for object location in this analysis (Figure S3.3C, Methods). The average absolute modulation depth was 3.6x
greater for touch (14.6 ± 1.7 Hz; mean ± SEM) than for whisking (4.04 ± 0.5 Hz; mean ± SEM) (Figures
3.6C). Note that since touch-evoked responses tended to be much larger, many neurons with higher
absolute modulation to touch than to whisking were not significantly touch angle tuned (p < 0.01, ANOVA
across angle bins). The shapes of normalized angle-tuning curves in each neuron were uncorrelated
between the two conditions, and not significantly different from a randomly shuffled population (Figures
3.6D). Finally, the angles of maximum response during free-whisking and at touch were also uncorrelated
(Figures 3.6E). Repeating these analyses using phase instead of angle as the independent variable showed
similar results (Figure S3.6). We conclude that rather than amplifying an underlying tuning to whisker
angle during free-whisking, active touch remaps L5B population activity from a representation of self-
motion to an independent representation of object location.
Figure 3.6 Active touch unmasks a distinct population code for object position in Layer 5 of S1.
A) Proportion of units tuned to whisker angle during free-whisking (blue, 36/122), at touch (gold, 19/122), co-tuned
(black, 25/122), or not-tuned (gray, 44/122). B) Absolute (top) and normalized (bottom) tuning curves for angle
responses during free whisking (blue) and at touch (gold). C) Absolute modulation depth for angle tuning during
free-whisking and touch for each class in A. D) Shape correlation between whisking and touch tuning curves for all
units tuned to whisking and/or touch (blue and gold hash) compared to shuffled responses (gray). Kolmogorov-
Smirnov p=0.18. E) Preferred angle during free-whisking vs. at touch. Single-tuned units on histograms, co-tuned
units on plot. Distance from midline for co-tuned units: (mean ± SD = 12.6 ± 10.9°, p = 9.7e-6, t-stat = 5.6, df = 24;
one sample t-test)
Discussion
We quantified sensorimotor representations in L5B excitatory neurons during active whisker
exploration and touch using juxtacellular electrophysiology (Figures 3.1). Most active neurons
represented self-motion during free whisking (Figures 3.2), with greater modulation by whisker angle
than whisking phase. A third of L5B excitatory neurons were highly modulated by touched object
location (Figure 3.3). This location tuning did not require training (Figures 3.4). Pooling activity of five
random location-tuned units discriminated object location with equal or better skill than expert mice
(Figures 3.5), suggesting that neurons in downstream areas need only sample a handful of S1 outputs to
access behaviorally relevant representations of object location. The representations of whisker angle and
phase during free-whisking and at touch were uncorrelated at population and within-cell levels (Figures
3.6 and S3.6). Together these data indicate that active touch remaps S1 output from a sensory
representation of self-motion to a perceptual representation of object location.
Limitations and advantages of the research
We note several limitations of our work. We targeted recordings to L5B (Lefort et al., 2009)
using axial penetration distance from pia to estimate cell depth, but cell-types do not strictly respect layer
boundaries and this depth estimate is only accurate to within ± 30 μm (O’Connor et al., 2010b). Due to
recording across multiple days during behavior, we did not attempt to recover cell morphology by
juxtacellular filling. Thus, we could not determine which recordings were from thin vs. thick-tufted L5
pyramids or their projection patterns. Use of projection specific L5 cre-lines (e.g. intertelencephalic (IT)
vs pyramidal tract (PT); Gerfen et al., 2013) could parse this in future work. Finally, we did not establish
a causal role for location coding neurons in driving perceptual choice during object localization. Recent
developments in structured illumination and optogenetics (Pegard, et al., 2017; Marshel et al., 2019) may
allow testing this in the future.
However, our approach also had several advantages over prior investigations of S1 activity during
whisker-guided object localization (Curtis & Kleinfeld 2009; Ranganathan et al., 2018). Optogenetic
tagging allowed us to identify excitatory vs. inhibitory units. Juxtacellular loose-seal recording, considered
a gold-standard for extracellular single unit isolation (de Kock et al., 2008), allowed us to sample activity
with high accuracy and temporal fidelity, without bias from firing rate, while avoiding misassignment of
synchronous touch-evoked spikes (Hires et al., 2015) and the false negative responses common in
calcium imaging when scanning population-sized fields of view (Huang et al., 2019). The high temporal
resolution of electrophysiology allowed us to determine whisker angle and phase tuning during free-
whisking (Figures 3.2) and its relationship to tuning at touch (Figures 3.3 and 3.6), which was not
investigated in prior studies using calcium imaging (Xu et al., 2012; Ranganathan et al., 2018).
The transformation from self-motion to object-location representation
Our investigation yielded two major surprises. First, L5B excitatory neurons show greater
modulation to whisker angle than to phase during free-whisking (Figures 3.2). This is surprising because
prior work showed that the vast majority of rapidly touch excited neurons across layers of rat S1 were
phase, not angle tuned during free-whisking (Curtis & Kleinfeld, 2009). This suggests that reafferent
phase modulation (Fee et al., 1997) in input layers of S1 (e.g. L4; Hires et al., 2015; Yu et al., 2016) is
combined with other whisker position information (e.g. efference copy from M1; Hill & Kleinfeld 2011;
Kleinfeld & Deschenes 2011) to produce angle-specific tuning in S1 output during free whisking.
Furthermore, the rarity of angle-specific tuning across other layers of S1 (Curtis & Kleinfeld, 2009)
suggests this combination of positional information occurs in the output layer (L5B) rather than across
multiple intermediate layers of S1.
The second surprise is that within L5B, the cellular identity of angle tuned neurons (Figures 3.6A
and 3.6C) and their angle preferences (Figures 3.6B and 3.6E) were uncorrelated between free-whisking
and at touch. The same was true for phase tuning (Figure S3.6). This is surprising, because prior work
across layers of rat S1 showed that touch responses are highly amplified when they occur at the peak of
free-whisking phase tuning and suppressed when occurring at non-preferred phases (Curtis & Kleinfeld
2009). Moreover, the preferred phase during free-whisking and at touch were tightly correlated (Curtis &
Kleinfeld 2009). Our observation of uncorrelated tuning again suggests that, upon touch, additional
sources of positional information are integrated in L5B S1 to construct a representation of touched object
location that is independent of the self-motion representation during free-whisking.
How is this remapping from self-motion to object location representation accomplished? At least
three mechanisms could play a role. First, touch-induced follicle stresses differ from those during free-
whisking (Severson et al., 2017), so distinct patterns of mechanosensory transduction (Furuta et al., 2020)
likely underlie at least part of remapped tuning. Second, touch and whisking are encoded by largely
distinct populations in superficial layers (Peron et al., 2015) which project to L5B (Lefort et al., 2009;
Hooks et al., 2011). Thus, touch recruits a new set of interlaminar S1 projections that influence L5B
responses. Third, touch could enhance integration of distant inputs on L5B dendrites (Ranganathan et al.,
2018) by transient changes in dendritic conductances (Larkum et al. 1999). M1 input is strongest in
electrically distant tuft dendrites of L5 neurons (Petreanu et al., 2009). Thus, touch could transiently
increase the influence of efference copy from M1 on L5B activity, further contributing to the remapped
tuning. Determining the extent to which each of these possible mechanisms contributes to object location
tuning in L5B of S1 may reveal more general principles for how the transformation from sensation to
perception is accomplished by cortical circuits.
Materials and Methods
LEAD CONTACT AND MATERIALS AVAILABILITY
Further information and requests for resources and reagents should be directed to and will be
fulfilled by the Lead Contact, Samuel Andrew Hires (shires@usc.edu).
EXPERIMENTAL MODEL AND SUBJECT DETAILS
Sixteen VGAT/ChR2/EYFP mice (JAX B6.Cg-Tg), both male and female, of at least 3 months of
age were used for the following experiments. A complete description of the head-plate procedure has
been documented in previous work (Guo et al., 2014). Post-op, mice were housed with littermates or
singly housed if fighting occurred. Mice were provided food ad libitum and water restricted to 1 mL per
day for one week before training and recording. A daily health and weight assessment was completed to
ensure mice were healthy. All procedures were approved under USC IACUC protocols 20169 and 20731.
METHOD DETAILS
Object localization task
Mice were trained in a whisker based go/no-go object localization task. Using a single whisker
(C2), water-restricted mice were motivated to whisk and identify the location of a smooth vertical pole
(0.6 mm diameter) 7-12 mm lateral from the whisker pad. The pole was positioned along the anteroposterior
axis across a 10 mm range using stepper linear actuators with 99 nm resolution, 25 μm accuracy, and <5
μm repeatability (Zaber NA11B30-T4). To avoid potential ultrasonic cues associated with stepper motor
movement, the pole was jittered 0-127 microsteps (0-25 μm) on each trial. A pneumatic linear slider
(Festo) was used to raise the pole vertically into touch reach for each trial. The Festo also provided a
sound cue on pole presentation onset.
Specific pole locations rewarded mice with water (4-8 μL), punished mice with a timeout (2 s), or
had no effect based on the mouse’s decision to lick or withhold licking. In a go/no-go paradigm, four trial
outcomes exist. In the minority of sessions where the animals were trained, the close posterior 5 mm of
pole positions (go) were rewarded with water upon licking (hit) or had no effect if mice withheld
licking (miss). The far anterior 5 mm of pole positions (no-go) were punished with a timeout (false alarm)
or had no effect if mice withheld licking (correct rejection). For the remaining sessions, rewards and
punishment were given regardless of the pole location - go trials and no-go trials had overlapping pole
locations.
Behavior, videography, and electrophysiology
Animal behavior, videography and electrophysiology were synchronized and captured during task
performance using EPHUS (https://www.janelia.org/open-science/ephus). A single computer running
BControl (MATLAB 2007b) was used to initiate each trial of the object localization task and synchronize
video and electrophysiology recordings via a second computer running EPHUS. Trial onset triggered
high-speed video capture of whisker motion (1000 fps) and electrophysiology recording of single unit
activity (MultiClamp 700b).
Whisker motion was captured from an overhead view over 4 seconds, spanning the period from before
pole onset through the response window. Video frames were acquired using a Basler acA200-340kmNIR
camera and Edmund Optics 0.18X 1⁄2’’ GoldTL Telecentric Lens (Model # 52-258) under 940 nm
illumination on Streampix 6 software. Whisker shape and position were traced and tracked using Janelia
Farm’s Whisker Tracker (https://www.janelia.org/open-science/whisk-whisker-tracking). A mask was
traced around the edge of the fur to reduce tracking noise. Whisker angle is quantified as the intersection
between the mask and the whisker. The whisker midpoint, phase, and amplitude were decomposed from the
band-pass filtered (6-60 Hz, Butterworth) whisker angle time series using the Hilbert transform
(MATLAB 2018b: hilbert). Whisking amplitude and phase are defined as the magnitude and phase angle
(radians) of the Hilbert transform of the whisker angle time series, respectively. Whisking midpoint is
the difference between the whisker angle time series and the band-pass filtered (6-60 Hz) signal. Whisker
curvature is the amount of bending of the whisker measured 3-5 mm lateral from the whisker mask.
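The Hilbert decomposition described above was performed in MATLAB; the following is a minimal Python sketch of the same steps, with filter order and a synthetic whisking trace chosen for illustration:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def decompose_whisking(angle, fs=1000.0, band=(6.0, 60.0)):
    """Decompose a whisker angle trace (deg, sampled at fs Hz) into
    whisking amplitude, phase (radians), and midpoint via the Hilbert
    transform, following the band-pass (6-60 Hz) approach above."""
    nyq = fs / 2.0
    b, a = butter(2, [band[0] / nyq, band[1] / nyq], btype="bandpass")
    filtered = filtfilt(b, a, angle)   # zero-phase 6-60 Hz whisking component
    analytic = hilbert(filtered)       # complex analytic signal
    amplitude = np.abs(analytic)       # instantaneous whisking amplitude
    phase = np.angle(analytic)         # instantaneous phase, -pi..pi
    midpoint = angle - filtered        # slow set-point component
    return amplitude, phase, midpoint

# Example: synthetic 10 Hz whisking around a 20 degree set point
fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
angle = 20.0 + 15.0 * np.sin(2 * np.pi * 10.0 * t)
amp, phase, mid = decompose_whisking(angle, fs)
```

Away from the filter edge transients, the recovered amplitude tracks the 15° oscillation and the midpoint tracks the 20° set point.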
The precise millisecond of touch was determined through custom MATLAB software via
distance to pole and change in whisker curvature. This was followed by manual curation of images of
uncertain whisker and pole intersections.
In-vivo loose seal juxtacellular recordings
All animals used in this study were adult male or female transgenic mice (VGAT-ChR2-EYFP)
expressing channelrhodopsin in inhibitory units. Following head-plate surgery, mice were trimmed to one
whisker (C2), and intrinsic signal imaging was used to target the associated barrel column. A single
whisker was maintained throughout training and recording. Prior to recording, animals were anesthetized
(2% isoflurane) and a small craniotomy (200-300 μm) was made above the barrel column associated
with the C2 whisker. On the first day of recording, animals were allowed to recover for 1 hour before
recording. Recordings were repeated for 4.8 ± 1.5 sessions (mean ± SD) per animal.
To sample single unit spiking activity in a manner unbiased by firing rate, blind juxtacellular
loose-seal patch recordings were targeted to L5 (600 μm - 950 μm from pia; Lefort et al., 2009) neurons
using patch pipettes (Warner Instruments; 5-8 MΩ) filled with 0.9% saline (Growcells). Electrical
recordings (n=156 neurons) were acquired and amplified using MultiClamp 700b and Headstage CV-7B.
The pipette axis was aligned parallel to the C2 barrel column at 35°. To perform an unbiased sampling of
L5B, we recorded from any isolated unit. An isolated unit was identified by an increase in resistance to
15-20 MΩ. Once a unit was isolated, 10 trials of the behavioral task were run to test for spikes during
performance. If spikes were observed, the isolated unit was maintained for at least 100 trials (137 ± 57;
mean ± SD). Upon recording completion, 10 trials of a 10 Hz pulse of blue light (480 nm, 10, 20 ms at
15-20 mW) were used to test whether the unit was an interneuron. Short-latency spiking (or inhibition)
to a 490 nm, 10 Hz, 5 ms light indicated that the neuron was inhibitory (or excitatory) (Figure S3.1).
Fourteen units were inhibitory and excluded from analysis. Conversely, if an isolated unit did not
spike after 10 trials, a current pulse (100 μs, 20 nA) was injected to check whether a unit was indeed
patched. If a burst of spikes was observed, we deemed that neuron a silent cell.
Histology
DiI (ThermoFisher D282) was coated onto a patch pipette and inserted at the recording site on the final
day of recording to mark the location of recordings. DiI-coated pipettes were inserted 1000 μm deep and
left in place for 5 minutes to ensure proper labeling of the recording location. Two hours after dye
application, animals were deeply anesthetized with a ketamine (110 mg/kg) -
xylazine (10 mg/kg) cocktail before perfusion with 0.1 M sodium phosphate buffer, followed by 4%
paraformaldehyde (PFA, in 0.1M sodium phosphate buffer). The fixed brain was then flattened along the
axis perpendicular to the barrel column.
The flattened brain was immersed in 4% PFA for 1 hour post-perfusion, transferred to 20%
sucrose solution for 1 day, and then 30% sucrose for 1 day. 100 μm slices were cut tangentially and
cytochrome oxidase staining was performed to reveal the barrel columns. Fluorescence imaging was done
to recover the location of the DiI track. Recording location was determined by overlapping fluorescent
track on top of bright field imaging of barrel columns.
QUANTIFICATION AND STATISTICAL ANALYSIS
Defining touch response window
A smoothed (Bayesian Adaptive Regression Splines [BARS]; Wallstrom et al., 2008) response from -50
ms to 50 ms around touch was used to evaluate the touch response window. The touch response window
is defined as any time point from 5 to 50 ms post-touch at which the smoothed response exceeded baseline
(-50 to 0 ms pre-touch) ± the 95% confidence interval. Two criteria were imposed to ensure an accurate
response window was captured: 1) the mean firing rate of the touch response had to be > 2 Hz; 2) the
touch response window had to be greater than 4 ms. A touch neuron is defined as any neuron
that had a touch response window.
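A simplified Python sketch of this window detection follows. Gaussian smoothing stands in for the BARS fit used in the original MATLAB analysis, and the bin layout (1 ms bins, touch at bin 50) is an assumption for illustration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def touch_response_window(psth_hz, min_rate_hz=2.0, min_dur_ms=4):
    """Locate a touch response window in a 1 ms-binned PSTH spanning
    -50..+50 ms around touch (100 bins; bin 50 = touch onset).

    Gaussian smoothing is a simplified stand-in for the BARS fit used
    in the text; the threshold is baseline mean + 95% CI.
    Returns (onset_ms, offset_ms) post-touch, or None.
    """
    smoothed = gaussian_filter1d(np.asarray(psth_hz, dtype=float), sigma=2)
    baseline = smoothed[:50]                                  # -50..0 ms
    thresh = baseline.mean() + 1.96 * baseline.std(ddof=1) / np.sqrt(baseline.size)
    post = smoothed[55:100]                                   # 5..50 ms post-touch
    idx = np.flatnonzero(post > thresh)
    if idx.size == 0:
        return None
    onset, offset = 5 + idx[0], 5 + idx[-1] + 1               # ms post-touch
    window = post[idx[0]:idx[-1] + 1]
    if window.mean() < min_rate_hz or (offset - onset) <= min_dur_ms:
        return None
    return int(onset), int(offset)

# Example: noisy ~2 Hz baseline with a ~30 Hz bump 10-25 ms after touch
rng = np.random.default_rng(0)
psth = 2.0 + 0.5 * rng.random(100)
psth[60:75] += 28.0
window = touch_response_window(psth)
```

A PSTH with no touch-evoked elevation (or one whose mean rate stays below 2 Hz) returns None, matching the two criteria above.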
Tuning curves
For a single neuron, 5% of sampled touches or 5% of total whisking time points were used to
define each point along the touch or whisking tuning curve. This method ensured 20 equally sampled bins.
Stimulus values are defined as the median of each stimulus bin and response values as the mean of
each response bin. For touch tuning, the response bins include the firing rates within the touch response
window as defined above. For whisking tuning, the same response window as touch was used. If a neuron
was not tuned to touch, the median touch response window was used to evaluate whisking tuning. The
median touch response window was 10 to 28 ms post-touch. Tuning curves were generated by
smoothing the binned histograms using Bayesian Adaptive Regression Splines. Neurons with mean
whisking responses less than 2 Hz were not evaluated.
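The equal-count binning step above can be sketched in a few lines of Python (the original analysis was in MATLAB, and the BARS smoothing is omitted; the example neuron's Gaussian tuning is hypothetical):

```python
import numpy as np

def equal_count_tuning_curve(stimulus, response, n_bins=20):
    """Adaptive binning as described above: sort samples by stimulus value
    and split into n_bins bins of ~equal count (5% of samples each for
    n_bins=20). Returns the median stimulus and mean response per bin."""
    stimulus, response = np.asarray(stimulus), np.asarray(response)
    order = np.argsort(stimulus)
    stim_bins = np.array_split(stimulus[order], n_bins)
    resp_bins = np.array_split(response[order], n_bins)
    x = np.array([np.median(b) for b in stim_bins])
    y = np.array([b.mean() for b in resp_bins])
    return x, y

# Example: a hypothetical neuron with Gaussian location tuning peaking at 5 mm
rng = np.random.default_rng(0)
pole_mm = rng.uniform(0.0, 10.0, 2000)                   # touched pole locations
rate_hz = 20.0 * np.exp(-((pole_mm - 5.0) ** 2) / 2.0) + rng.poisson(1.0, 2000)
x, y = equal_count_tuning_curve(pole_mm, rate_hz)
```

Equal-count bins guard against sparsely sampled stimulus values dominating the curve, unlike fixed-width bins.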
We used a one-way analysis of variance (ANOVA) at alpha level of 0.01 to quantify whether a
neuron was tuned or not to a whisking or touch parameter. To further ensure that the tuning we observed
was not due to noise in neural responses, we shuffled touch/whisking responses 1000 times and evaluated
F-values from a one-way ANOVA. If our observed F-value was above the 95th percentile of the shuffled
population distribution of F-values we deemed the neuron as tuned.
Tuning preference is the location of the peak response of the tuning curve. To define the width of
the tuning, a multiple comparison test using the Tukey-Kramer critical value was used to identify the
first bins in both directions that were significantly different from the peak value. If no bins were
significant, no modulation width was defined. Maximum and minimum responses were calculated from
BARS fitted tuning curves.
When computing tuning curves using whisker angle at touch instead of object location, we find that two
more units qualify as tuned (Figures S3.3E and S3.5C). Upon closer inspection, those two units exhibit a
non-linear, second-order polynomial relationship between whisker angle and pole location. This fit leads
to non-linear increases in whisker angle for incremental gains in pole location, giving those two units
tuning to far locations not seen when using pole location.
Modulation
The absolute modulation depth and modulation depth for each tuning curve is calculated as:
𝑎 𝑏 𝑠 𝑜 𝑙𝑢 𝑡 𝑒 𝑚𝑜 𝑑 𝑢 𝑙𝑎 𝑡 𝑖𝑜 𝑛 𝑑 𝑒 𝑝 𝑡 ℎ = m a x 𝑟𝑒 𝑠 𝑝 𝑜 𝑛 𝑠 𝑒 − m i n 𝑟𝑒 𝑠 𝑝 𝑜 𝑛 𝑠 𝑒
𝑚𝑜 𝑑 𝑢 𝑙𝑎 𝑡 𝑖𝑜 𝑛 𝑑 𝑒 𝑝 𝑡 ℎ =
m a x 𝑟𝑒 𝑠 𝑝 𝑜 𝑛 𝑠 𝑒 − m i n 𝑟𝑒 𝑠 𝑝 𝑜 𝑛 𝑠 𝑒 m a x 𝑟𝑒 𝑠 𝑝 𝑜 𝑛 𝑠 𝑒 + m i n 𝑟𝑒 𝑠 𝑝 𝑜 𝑛 𝑠 𝑒
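The two modulation measures defined in this section reduce to a few lines; a minimal sketch (function name is illustrative):

```python
def modulation_depths(tuning_curve):
    """Absolute and normalized modulation depth of a tuning curve,
    per the two formulas above."""
    mx, mn = max(tuning_curve), min(tuning_curve)
    absolute = mx - mn
    normalized = (mx - mn) / (mx + mn) if (mx + mn) != 0 else 0.0
    return absolute, normalized

# Example: a tuning curve ranging from 10 to 30 Hz
abs_depth, norm_depth = modulation_depths([10.0, 18.0, 30.0, 22.0])
# abs_depth = 20.0 Hz, norm_depth = 0.5
```

The normalized depth is bounded between 0 and 1 for non-negative firing rates, which makes it comparable across neurons with very different absolute rates.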
Neural decoding
We used multinomial logistic regression to decode pole location, implemented using glmnet
(Qian et al., 2013). Only touch units that sampled at least 80% of the pole position range were used for
decoding. Each unit’s tuning curve was interpolated to 40 bins to estimate location at 0.25 mm
resolution. At each bin, 50 samples were drawn from a Poisson pdf with λ equal to the mean of each
interpolated bin. We justified drawing from a Poisson pdf because we found that the number of
spikes generated in the touch response window at touch had a Fano factor of 0.94 ± 0.22 (mean ± SD, Figure
S3.3F). In the design matrix, each row is a location bin, each column a single neuron, and each entry a
sampled neural response for the associated neuron.
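The sampling and decoding pipeline described above was implemented in MATLAB with glmnet; the following is a minimal Python sketch using scikit-learn's multinomial logistic regression with L1 regularization, on synthetic Gaussian tuning curves (all tuning parameters here are hypothetical, chosen for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_bins, n_neurons, n_samples = 40, 25, 50      # 0.25 mm bins, pooled units

# Hypothetical interpolated tuning curves: each unit's mean spike count
# per touch (lambda) peaks at a randomly chosen preferred location bin.
preferred = rng.uniform(0, n_bins, n_neurons)
bins = np.arange(n_bins)
lam = 0.2 + 3.0 * np.exp(-((bins[:, None] - preferred[None, :]) ** 2) / (2 * 6.0 ** 2))

# Draw Poisson spike counts: one row per simulated touch, one column per unit.
X = rng.poisson(np.repeat(lam, n_samples, axis=0)).astype(float)
y = np.repeat(bins, n_samples)                 # true location bin per touch

# Multinomial logistic regression with L1 (lasso-style) regularization,
# standing in for the glmnet fit used in the original analysis.
clf = LogisticRegression(penalty="l1", solver="saga", C=1.0, max_iter=3000)
train = rng.random(y.size) < 0.7               # 70/30 train/test split
clf.fit(X[train], y[train])
pred = clf.predict(X[~train])

# Decoding resolution: fraction of test touches decoded to within 2 bins (0.5 mm).
within_half_mm = np.mean(np.abs(pred - y[~train]) <= 2)
```

Drawing counts from a Poisson pdf is justified only because the measured Fano factor was near 1; for over-dispersed spike counts a negative binomial would be more appropriate.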
The decoder was run for 10 iterations. During each iteration, a random 70% of trials were
allocated for training and the remaining 30% for testing. Lasso regularization (alpha parameter 0.95) was
used to reduce over-fitting. To identify the number of units required, we sampled varying numbers of
neurons, with replacement, 500 times from the units used to train the original model. The indices of the
selected neurons were used to build a new population design matrix and coefficient matrix from the
original design matrix and learned coefficients. The prediction probabilities of location were
computed as:

h_θ(x) = g(θᵀx), where g(z) = 1 / (1 + e^(-z))

where h_θ(x) is the hypothesis function, θᵀ are the learned coefficients, x is the input design
matrix, and g(z) is the logistic function used to calculate prediction probabilities.
The predicted location was chosen as the location with the highest probability. Model evaluation
of accuracy and resolution was performed on the test set. Model accuracy is defined as the total number
of correct predictions divided by the total number of predictions. A confusion matrix made from true and
predicted locations was normalized across the total number of given true cases and used to define the
decoding resolution and neurometric curves. Decoding resolution is defined as the total number of
predictions within n bins of the diagonal, where each bin was 0.25 mm. Neurometric curves, defined here
as the choice to lick given neural activity, were computed as the sum of predictions along true values for
the go predictions (left half of the confusion matrix). Simulated neurometric curve licks were defined as
any lick probability that exceeded 50%.
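The confusion-matrix metrics above can be sketched as follows (a Python illustration of the MATLAB analysis; the go range is assumed to be the first half of the location bins):

```python
import numpy as np

def decoding_metrics(true_bins, pred_bins, n_bins=40, n_within=2):
    """Metrics described above, computed from a normalized confusion matrix.

    resolution: fraction of predictions within n_within bins (0.25 mm each)
    of the true location. neurometric: per true location, the summed
    probability of predicting a go location, assumed here to be the first
    n_bins // 2 bins (the left half of the confusion matrix).
    """
    true_bins, pred_bins = np.asarray(true_bins), np.asarray(pred_bins)
    cm = np.zeros((n_bins, n_bins))
    for t, p in zip(true_bins, pred_bins):
        cm[t, p] += 1
    row_sums = cm.sum(axis=1, keepdims=True)
    cm = np.divide(cm, row_sums, out=np.zeros_like(cm), where=row_sums > 0)
    resolution = float(np.mean(np.abs(true_bins - pred_bins) <= n_within))
    neurometric = cm[:, : n_bins // 2].sum(axis=1)   # P(predict go | location)
    return resolution, neurometric
```

Normalizing each row by its number of true cases makes the neurometric curve a per-location lick probability, directly comparable to the behavioral psychometric curves.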
DATA AND CODE AVAILABILITY
Please contact the Lead Contact for data structures used in the analyses above. All analyses were
computed in MATLAB 2018b and all code for generating figures can be found at
https://github.com/hireslab/Pub_LocationCode.
Supplemental Figures
Figure S3.1 Recording targeting, recovery, and opto-tagging.
A) Intrinsic signal imaging highlighting region of activity during whisker stimulation (top) overlaid with skull
vasculature. B) 4x (top) and 10x (bottom) zoom of recovered DiI on top of cytochrome oxidase labeling of barrel
field. C) Example trace of single stimulation (480nm 10Hz pulse) trial (top) with zoom of first 500 milliseconds of
stimulation (bottom).
Figure S3.2 Whisker angle during free-whisking tuning
A) Example whisker trace with spikes overlaid for one example cell tuned to whisker angle during free-whisking. B)
Average firing rate and depth from pia for active non-location (black), whisking angle (blue), and silent (gray) units.
C) Scatter of mean ± SD (1.4 ± 0.5) for time (seconds) spent whisking for each recorded neuron. D) Cumulative
distribution function of whisker angle sampling during free whisking for all recorded units (gray) and population
average ± SEM (red). E) Free-whisking angle tuning across the population of significantly tuned units (n=60). F)
Phase preference with modulation depth (Methods) across the population of phase-tuned units (n=46).
Figure S3.3 Whisker angle during touch tuning
A) 3 example touch units and their responses from touch onset. B) The change in follicle relative to the first touch
along the anteroposterior axis (left) and mediolateral axis (right). C) Cumulative distribution function showing the
number of touches made for each recording session (gray, n=122) and the population average ± S.D. (red). D)
Probability of generating a response above baseline ± 95% CI at the most preferred location versus the
least preferred location (p = 1.2e-14, t-stat = 11.7, df = 41, paired t-test). E) Same as D but for the firing rate of
responses (p = 2.9e-11, t-stat = 8.9, df = 41, paired t-test). F) Justification for modeling spikes using a Poisson
process. Black dots denote scatter of spike count (0.73 ± 0.81, mean ± SEM) against Fano factor (0.94 ± 0.24, mean
± SEM) for each point along the angle at touch tuning curve (n = 784 points). Red dots denote average for each
individual location at touch tuned neuron.
Figure S3.4 Naïve vs trained animals comparison
A) Comparison of number of touches made per trial (left, p=3.0e-4) and proportion of time whisking (right, p =
5.5e-4) between naive (gray) and trained animals (red/blue hash). Both compared using two-sample Kolmogorov-
Smirnov test. B) The distribution of non-touch units, touch location units, and touch non-location units compared
between recordings from naïve (n=92) and trained (n=30) animals.
Figure S3.5 Co-tuning of whisker angle during free-whisking and touch
A) Tuning curves with observed firing rates (top) and normalized firing rates (bottom) for co-tuned (n=25), touch
tuned only (n=19), and whisking tuned only units (n=36). Solid lines and dashed lines denote tuning and not tuning
respectively. B) Whisker angle at touch is tightly correlated with anteroposterior object location. Three example
sessions are shown. C) Population heat map of angle tuned units, sorted by preferred angle at touch. White spaces
are insufficiently sampled pole locations.
Figure S3.6 Co-tuning of whisker phase during free-whisking and touch
A) Pie chart highlighting proportion of units phase tuned (maroon, 21/122), at touch (gold, 25/122), co-tuned (black,
25/122), or not-tuned (gray, 51/122). B) Tuning curves with observed firing rates (top) and normalized firing rates
(bottom) for co-tuned (n=25), touch tuned only (n=25), and whisking tuned only units (n=21). Solid and dashed lines denote tuned and untuned responses, respectively. C) Absolute modulation depth of phase tuning during free-whisking and touch for each class in A. Average absolute modulation depth was roughly 7x greater for touch (16.4 ± 1.8 Hz; mean ± SEM) than for whisking (2.5 ± 0.4 Hz; mean ± SEM). D) Shape correlation between whisking and touch
tuning curves for all units tuned to whisking and/or touch (maroon and gold hash) compared to shuffled responses
(gray; two-sample Kolmogorov-Smirnov test, p = 0.11). E) Scatter of phase preference during free-whisking and touch for co-tuned units
(mean ± SD; 0.7 ± 0.5 radians, p = 4.6e-7, t-stat = 6.8, df = 24; one sample t-test). Histograms denote phase
preference for units tuned to either touch or free-whisking phase.
Chapter 4: Concluding Remarks
Object localization is a crucial aspect of tactile sensing. Rodents serve as powerful models to
answer questions regarding the behavior and the cortical representations of object localization. Chapter 2
focuses on characterizing this behavior and identifying the key features of touch used to locate objects.
My findings make a key contribution to the field by directly comparing six proposed models of localization and elucidating which features rodents use. Understanding which features are most relevant for localization has implications for the origin and site of integration of the sensorimotor signals in the brain that drive location perception. Using a combination of fine stimulus control, high-speed imaging, feature extraction, and interpretable machine learning models, we find that rodents can localize with 0.5 mm acuity and that a two-step model using the number of touches and the whisker midpoint at touch best explains behavior. These findings suggest that a neural substrate of touched object location exists and that, ideally, this representation can decode location to 0.5 mm.
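The structure of such a two-step readout can be sketched as follows. Everything here is a hypothetical illustration: the function name, logistic form, and weights are placeholders, not the fitted model from Chapter 2.

```python
import numpy as np

def two_step_lick_probability(touch_count, midpoint_at_touch,
                              w_count=1.5, w_mid=-0.8, bias=0.0):
    """Hypothetical two-step readout of anteroposterior object location.

    Step 1: with no touches, the animal has no location evidence and
            the model predicts no lick.
    Step 2: given at least one touch, a logistic readout combines the
            number of touches with the whisker midpoint at touch.
    Weights are illustrative placeholders, not fitted values."""
    if touch_count == 0:
        return 0.0
    drive = bias + w_count * np.log1p(touch_count) + w_mid * midpoint_at_touch
    return 1.0 / (1.0 + np.exp(-drive))
```

The structural point, independent of the placeholder weights, is that touch count gates the decision before the midpoint at touch is read out; with the negative midpoint weight chosen here, larger midpoint angles lower the predicted lick probability.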
How does the brain represent self-motion and touch features during a localization task? Prior
work has highlighted S1 as a key region in touch processing and more specifically, L5B neurons in S1 as
a hub for integrating both sensory and motor information. In Chapter 3 I find that the sensorimotor
representation of location is encoded by single neurons in L5B during whisking and touch. Further, this
representation at touch can decode location to 0.5 mm and meet or exceed behavioral thresholds using 5-7
neurons. These findings make two notable contributions. First, I find support that L5B is a hub for
integrating sensorimotor features. As a reminder, whisker angle can be losslessly computed by a linear
combination of the Hilbert components of phase, amplitude, and midpoint. The input regions to L5B
show tuning to one or more of the Hilbert components but not angle - L4 and VPM thalamus both show
tuning to phase, and L5B apical dendrites show tuning to midpoint and amplitude. L5B excitatory neurons show the greatest modulation by whisker angle during whisking, suggesting that the angle computation occurs within L5B excitatory neurons. Second, I find that tuning for self-motion and tuning for touch are independent, both in which neurons are active and in the preferences of co-active neurons.
Since whisking and touch are behaviorally intertwined, distinct population responses encoding whisking
and touch location could be advantageous in distributing location information to downstream targets
without ambiguity.
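The lossless relationship referred to above is that whisker angle can be written as θ(t) = midpoint(t) + amplitude(t)·cos(phase(t)). A minimal sketch of this Hilbert decomposition is below; the sampling rate, filter orders, and 4–25 Hz whisking band are illustrative assumptions, not the exact parameters used in this work.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def decompose_whisker_angle(theta, fs=1000.0, band=(4.0, 25.0)):
    """Split a whisker angle trace into Hilbert components such that
    theta ~= midpoint + amplitude * cos(phase).

    theta : whisker angle in degrees, sampled at fs Hz
    band  : assumed whisking frequency band in Hz (illustrative)"""
    nyq = fs / 2.0
    # Midpoint: the slow setpoint of whisking, below the whisking band
    b, a = butter(2, band[0] / nyq, btype='low')
    midpoint = filtfilt(b, a, theta)
    # Band-pass isolates the rapid whisking oscillation
    b, a = butter(2, [band[0] / nyq, band[1] / nyq], btype='band')
    whisk = filtfilt(b, a, theta)
    analytic = hilbert(whisk)
    phase = np.angle(analytic)     # instantaneous phase in [-pi, pi]
    amplitude = np.abs(analytic)   # instantaneous whisking amplitude
    return phase, amplitude, midpoint

# Synthetic 10 Hz whisking bout: 10-degree setpoint, 15-degree amplitude
t = np.arange(0.0, 2.0, 1.0 / 1000.0)
theta = 10.0 + 15.0 * np.cos(2.0 * np.pi * 10.0 * t)
phase, amplitude, midpoint = decompose_whisker_angle(theta)
reconstruction = midpoint + amplitude * np.cos(phase)
```

Away from filter edge effects the reconstruction tracks the original angle, and the three components can then serve as separate regressors for the tuning analyses described above.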
The detailed dissection of localization behavior and the discovery of the neural substrate encoding location information provide an intriguing avenue for future studies. For example, one could identify the circuits at the cellular level responsible for location perception. In this work, we find that units encoding location information reside in L5B, but we do not identify their projection classes. IT and PT neurons both inhabit L5B, and their projection targets differ greatly. Two-photon calcium imaging, holographic photostimulation, and tracing of these two projection classes during active behavior could uncover the circuits at play during localization. Ideally, these results would help identify the underlying circuits not just for object detection but for building an internal representation of the external environment in downstream targets such as entorhinal cortex (Høydal et al., 2019). Taken together, the results presented
here identify fundamental encoding of touch in the brain and lay the groundwork for future studies to
uncover how sensory information is internalized to create perception and build a mental representation of
the external environment.
Literature Cited
Adibi, M. (2019). Whisker-mediated touch system in rodents: from neuron to behavior. Frontiers in
systems neuroscience, 13, 40.
Ahissar E., and Assa E. (2016) Perception as a closed-loop convergence process. Elife 5, e12830
Armstrong-James, M., Fox, K., & Das-Gupta, A. (1992). Flow of excitation within rat barrel cortex on
striking a single vibrissa. Journal of neurophysiology, 68(4), 1345-1358.
Aronoff, R., & Petersen, C. (2008). Layer, column and cell-type specific genetic manipulation in mouse
barrel cortex. Frontiers in neuroscience, 2, 1.
Bagdasarian, K., Szwed, M., Knutsen, P.M., Deutsch, D., Derdikman, D., Pietr, M., Simony, E. and
Ahissar, E. (2013). Pre-neuronal morphological processing of object location by individual
whiskers. Nature neuroscience 16(5), p.622.
Belli, H. M., Yang, A. E., Bresee, C. S., and Hartmann, M. J. (2016). Variations in vibrissal geometry
across the rat mystacial pad: base diameter, medulla, and taper. Journal of
neurophysiology 117(4), 1807-1820.
Bensmaia, S. J., Denchev, P. V., Dammann, J. F., Craig, J. C., & Hsiao, S. S. (2008). The representation
of stimulus orientation in the early stages of somatosensory processing. Journal of
Neuroscience, 28(3), 776-786.
Bensmaia, S. J., Hsiao, S. S., Denchev, P. V., Killebrew, J. H., & Craig, J. C. (2008). The tactile
perception of stimulus orientation. Somatosensory & motor research, 25(1), 49-59.
Birdwell, J.A., Solomon, J.H., Thajchayapong, M., Taylor, M.A., Cheely, M., Towal, R.B., Conradt, J.,
and Hartmann, M.J. (2007). Biomechanical models for radial distance determination by the rat
vibrissal system. Journal of Neurophysiology 98(4), 2439-2455.
Boubenec, Y., Shulz, D.E., and Debrégeas, G. (2012). Whisker encoding of mechanical events during
active tactile exploration. Frontiers in behavioral neuroscience 6, 74.
Boughorbel, S., Jarray, F., and El-Anbari, M. (2017). Optimal classifier for imbalanced data using
Matthews Correlation Coefficient metric. PloS one 12(6), e0177678.
Brunton, B.W., Botvinick, M.M., and Brody, C.D. (2013). Rats and humans can optimally accumulate
evidence for decision-making. Science 340(6128), 95-98.
Buchan, M. J., & Rowland, J. M. (2018). Stimulation of Individual Neurons Is Sufficient to Influence
Sensory-Guided Decision-Making. The Journal of Neuroscience, 38(30), 6609.
Bush, N. E., Schroeder, C. L., Hobbs, J. A., Yang, A. E., Huet, L. A., Solla, S. A., and Hartmann, M. J.
(2016). Decoupling kinematics and mechanics reveals coding properties of trigeminal ganglion
neurons in the rat vibrissal system. Elife 5, e13969.
Caldwell, D. J., Cronin, J. A., Wu, J., Weaver, K. E., Ko, A. L., Rao, R. P., & Ojemann, J. G. (2019).
Direct stimulation of somatosensory cortex results in slower reaction times compared to
peripheral touch in humans. Scientific reports, 9(1), 3292.
Campagner, D., Evans, M. H., Bale, M. R., Erskine, A., and Petersen, R. S. (2016). Prediction of primary
somatosensory neuron activity during active tactile exploration. Elife 5, e10696
Campagner, D., Evans, M.H., Chlebikova, K., Colins-Rodriguez, A., Loft, M.S., Fox, S., Pettifer, D.,
Humphries, M.D., Svoboda, K., and Petersen, R.S. (2019). Prediction of choice from competing
mechanosensory and choice-memory cues during active tactile decision making. Journal of
Neuroscience 39(20), 3921-3933.
Chan, T. C., & Turvey, M. T. (1991). Perceiving the vertical distances of surfaces by means of a hand-
held probe. Journal of experimental psychology: Human perception and performance, 17(2), 347.
Chen, J.L., Margolis, D.J., Stankov, A., Sumanovski, L.T., Schneider, B.L. and Helmchen, F. (2015).
Pathway-specific reorganization of projection neurons in somatosensory cortex during
learning. Nature Neuroscience 18(8), 1101.
Cheung, J., Maire, P., Kim, J., Sy, J. and Hires, S.A., (2019). The sensorimotor basis of whisker-guided
anteroposterior object localization in head-fixed mice. Current Biology, 29(18), 3029-3040.
Clack, N.G., O'Connor, D.H., Huber, D., Petreanu, L., Hires, A., Peron, S., Svoboda, K. and Myers, E.W.
(2012). Automated tracking of whiskers in videos of head fixed rodents. PLoS computational
biology 8(7), e1002591.
Corkin, S., Milner, B., & Rasmussen, T. (1970). Somatosensory thresholds: contrasting effects of postcentral-gyrus and posterior parietal-lobe excisions. Arch. Neurol. 23, 41–58.
Curtis, J.C., and Kleinfeld, D. (2009). Phase-to-rate transformations encode touch in cortical neurons of a
scanning sensorimotor system. Nat. Neurosci. 12, 492–501.
De Kock, C. P. J., & Sakmann, B. (2008). High frequency action potential bursts (≥ 100 Hz) in L2/3 and
L5B thick tufted neurons in anaesthetized and awake rat primary somatosensory cortex. The
Journal of physiology, 586(14), 3353-3364.
De Kock, C. P. J., Bruno, R. M., Spors, H., & Sakmann, B. (2007). Layer‐and cell‐type‐specific
suprathreshold stimulus representation in rat primary somatosensory cortex. The Journal of
physiology, 581(1), 139-154.
Evarts, E. V. (1968). Relation of pyramidal tract activity to force exerted during voluntary movement. J
Neurophysiol 31(1), 14-27.
Fee, M. S., Mitra, P. P., and Kleinfeld, D. (1997). Central versus peripheral determinants of patterned
spike activity in rat vibrissa cortex during whisking. Journal of neurophysiology, 78(2) 1144-
1149.
Francis, B. A., & Wonham, W. M. (1976). The internal model principle of control
theory. Automatica, 12(5), 457-465.
Furuta, T., Bush, N.E., Yang, A.E.T., Ebara, S., Miyazaki, N., Murata, K., Hirai, D., Shibata, K.I. and
Hartmann, M.J., 2020. The Cellular and Mechanical Basis for Response Characteristics of
Identified Primary Afferents in the Rat Vibrissal System. Current Biology.
Gazzaniga, M. S. (2008). Human: The science behind what makes us unique.
Gerfen, C. R., Paletzki, R., & Heintz, N. (2013). GENSAT BAC cre-recombinase driver lines to study the
functional organization of cerebral cortical and basal ganglia circuits. Neuron, 80(6), 1368-1383.
Guic-Robles E, Jenkins WM, Bravo H (1992) Vibrissal roughness discrimination is barrel cortex-
dependent. Behav Brain Res 48: 145–152
Guo, Z.V., Hires, S.A., Li, N., O'Connor, D.H., Komiyama, T., Ophir, E., Huber, D., Bonardi, C.,
Morandell, K., Gutnisky, D., Peron, S., et al. (2014). Procedures for behavioral experiments in
head-fixed mice. PloS one 9(2), e88678.
Gutnisky, D.A., Yu, J., Hires, S.A., To, M.S., Bale, M., Svoboda, K. and Golomb, D. (2017).
Mechanisms underlying a thalamocortical transformation during active tactile sensation. PLoS
computational biology 13(6), e1005576.
Harris, K. D., & Shepherd, G. M. (2015). The neocortical circuit: themes and variations. Nature
neuroscience, 18(2), 170.
Hastie, T., & Qian, J. (2014). Glmnet vignette. Retrieved from https://web.stanford.edu/~hastie/Papers/Glmnet_Vignette.pdf. Accessed September 20, 2016.
Hill, D. N., Bermejo, R., Zeigler, H. P., and Kleinfeld, D. (2008). Biomechanics of the vibrissa motor
plant in rat: rhythmic whisking consists of triphasic neuromuscular activity. Journal of
Neuroscience 28(13), 3438-3455.
Hill, D. N., Curtis, J. C., Moore, J. D., and Kleinfeld, D. (2011). Primary motor cortex reports efferent
control of vibrissa motion on multiple timescales. Neuron 72(2), 344-356.
Hires, S. A., Gutnisky, D. A., Yu, J., O'Connor, D. H., and Svoboda, K. (2015). Low-noise encoding of
active touch by layer 4 in the somatosensory cortex. Elife 4, e06619.
Hires, S. A., Pammer, L., Svoboda, K., and Golomb, D. (2013). Tapered whiskers are required for active
tactile sensation. Elife 2, e01350.
Hires, S. A., Schuyler, A., Sy, J., Huang, V., Wyche, I., Wang, X., & Golomb, D. (2016). Beyond cones:
an improved model of whisker bending based on measured mechanics and tapering. Journal of
neurophysiology, 116(2), 812-824.
Hodge, R. D., Bakken, T. E., Miller, J. A., Smith, K. A., Barkan, E. R., Graybuck, L. T., ... & Yao, Z.
(2019). Conserved cell types with divergent features in human versus mouse
cortex. Nature, 573(7772), 61-68.
Hong, Y. K., Lacefield, C. O., Rodgers, C. C., & Bruno, R. M. (2018). Sensation, movement and learning
in the absence of barrel cortex. Nature, 561(7724), 542.
Hooks, B.M., Hires, S.A., Zhang, Y.X., Huber, D., Petreanu, L., Svoboda, K., and Shepherd, G.M.
(2011). Laminar analysis of excitatory local circuits in vibrissal motor and sensory cortical
areas. PLoS biology 9(1), e1000572.
Horev, G., Saig, A., Knutsen, P. M., Pietr, M., Yu, C., & Ahissar, E. (2011). Motor–sensory convergence
in object localization: a comparative study in rats and humans. Philosophical Transactions of the
Royal Society B: Biological Sciences, 366(1581), 3070-3076.
Høydal, Ø. A., Skytøen, E. R., Andersson, S. O., Moser, M. B., & Moser, E. I. (2019). Object-vector
coding in the medial entorhinal cortex. Nature, 568(7752), 400-404.
Hsiao, S. S., Lane, J., & Fitzgerald, P. (2002). Representation of orientation in the somatosensory
system. Behavioural brain research, 135(1-2), 93-103.
Huang, L., Knoblich, U., Ledochowitsch, P., Lecoq, J., Reid, R.C., de Vries, S.E., Buice, M.A., Murphy,
G.J., Waters, J., Koch, C. and Zeng, H., (2019). Relationship between spiking activity and
simultaneously recorded fluorescence signals in transgenic mice expressing GCaMP6. bioRxiv,
p.788802.
Huber, D., Petreanu, L., Ghitani, N., Ranade, S., Hromádka, T., Mainen, Z., & Svoboda, K. (2008).
Sparse optical microstimulation in barrel cortex drives learned behaviour in freely moving
mice. Nature, 451(7174), 61-64.
Hwang, K., Bertolero, M. A., Liu, W. B., & D'Esposito, M. (2017). The human thalamus is an integrative
hub for functional brain networks. Journal of Neuroscience, 37(23), 5594-5607.
Hyvärinen, J., & Poranen, A. (1978). Movement‐sensitive and direction and orientation‐selective
cutaneous receptive fields in the hand area of the post‐central gyrus in monkeys. The Journal of
Physiology, 283(1), 523-537.
Isett, B. R., Feasel, S. H., Lane, M. A., & Feldman, D. E. (2018). Slip-based coding of local shape and
texture in mouse S1. Neuron, 97(2), 418-433.
Jadhav, S. P., Wolfe, J., & Feldman, D. E. (2009). Sparse temporal coding of elementary tactile features
during active whisker sensation. Nature neuroscience, 12(6), 792
Khatri, V., Bermejo, R., Brumberg, J.C., Keller, A., and Zeigler, H.P. (2009). Whisking in air: encoding
of kinematics by trigeminal ganglion neurons in awake rats. Journal of neurophysiology 101(4),
1836-1846.
Kita, T., & Kita, H. (2012). The subthalamic nucleus is one of multiple innervation sites for long-range
corticofugal axons: a single-axon tracing study in the rat. Journal of Neuroscience, 32(17), 5990-
5999.
Kleinfeld, D., & Deschênes, M. (2011). Neuronal basis for object location in the vibrissa scanning
sensorimotor system. Neuron, 72(3), 455-468.
Knutsen, P.M., Biess, A., and Ahissar, E. (2008). Vibrissal kinematics in 3D: tight coupling of azimuth,
elevation, and torsion across different whisking modes. Neuron 59(1), 35-42.
Knutsen, P.M., Pietr, M. and Ahissar, E. (2006). Haptic object localization in the vibrissal system:
behavior and performance. Journal of Neuroscience 26(33), 8451-8464.
Krakauer, J. W., Ghazanfar, A. A., Gomez-Marin, A., MacIver, M. A., and Poeppel, D. (2017).
Neuroscience needs behavior: correcting a reductionist bias. Neuron 93(3), 480-490.
Krupa, D. J., Matell, M. S., Brisben, A. J., Oliveira, L. M., and Nicolelis, M. A. (2001). Behavioral
properties of the trigeminal somatosensory system in rats performing whisker-dependent tactile
discriminations. Journal of Neuroscience 21(15), 5752-5763.
Larkum, M. E., Zhu, J. J., & Sakmann, B. (1999). A new cellular mechanism for coupling inputs arriving
at different cortical layers. Nature, 398(6725), 338.
Lebedev, M. A., & Nelson, R. J. (1996). High-frequency vibratory sensitive neurons in monkey primary
somatosensory cortex: entrained and nonentrained responses to vibration during the performance
of vibratory-cued hand movements. Experimental brain research, 111(3), 313-325.
Lederman, S. J., & Klatzky, R. L. (1987). Hand movements: A window into haptic object
recognition. Cognitive psychology, 19(3), 342-368.
Lefort, S., Tomm, C., Sarria, J. C. F., & Petersen, C. C. (2009). The excitatory neuronal network of the
C2 barrel column in mouse primary somatosensory cortex. Neuron, 61(2), 301-316.
Leiser S. C., and Moxon K. A. (2007) Responses of trigeminal ganglion neurons during natural whisking
behaviors in the awake rat. Neuron 53(1):117-33.
Lévesque, M., Charara, A., Gagnon, S., Parent, A., & Deschênes, M. (1996). Corticostriatal projections
from layer V cells in rat are collaterals of long-range corticofugal axons. Brain research, 709(2),
311-315.
Li, L., Rutlin, M., Abraira, V.E., Cassidy, C., Kus, L., Gong, S., Jankowski, M.P., Luo, W., Heintz, N.,
Koerber, H.R. and Woodbury, C.J., (2011). The functional organization of cutaneous low-
threshold mechanosensory neurons. Cell, 147(7), pp.1615-1627.
Long, S.Y. (1972). Hair-nibbling and whisker-trimming as indicators of social hierarchy in mice. Animal
behaviour 20(1), 10-12.
Marr, D. (1982). Vision: A computational investigation into the human representation and processing of
visual information, Henry Holt and Co. Inc., New York, NY, 2(4.2).
Marshel, J.H., Kim, Y.S., Machado, T.A., Quirin, S., Benson, B., Kadmon, J., Raja, C., Chibukhchyan,
A., Ramakrishnan, C., Inoue, M. and Shane, J.C., (2019). Cortical layer–specific critical
dynamics triggering perception. Science, 365(6453), p.eaaw5202.
Masino, S. A., Kwon, M. C., Dory, Y., & Frostig, R. D. (1993). Characterization of functional
organization within rat barrel cortex using intrinsic signal optical imaging through a thinned
skull. Proceedings of the National Academy of Sciences, 90(21), 9998-10002.
May, T., Ozden, I., Brush, B., Borton, D., Wagner, F., Agha, N., Sheinberg, D.L. and Nurmikko, A.V.,
(2014). Detection of optogenetic stimulation in somatosensory cortex by non-human primates-
towards artificial tactile sensation. PloS one, 9(12).
Mehta, S. B., Whitmer, D., Figueroa, R., Williams, B. A., & Kleinfeld, D. (2007). Active spatial
perception in the vibrissa scanning sensorimotor system. PLoS biology, 5(2).
Miller, L.E., Fabio, C., Ravenda, V., Bahmad, S., Koun, E., Salemme, R., Luauté, J., Bolognini, N.,
Hayward, V. and Farnè, A., (2019). Somatosensory cortex efficiently processes touch located
beyond the body. Current Biology.
Mitchinson, B., and Prescott, T.J. (2013). Whisker movements reveal spatial attention: a unified
computational model of active sensing control in the rat. PLoS Computational Biology 9(9),
e1003236.
Moore, J. D., Lindsay, N. M., Deschênes, M., and Kleinfeld, D. (2015). Vibrissa self-motion and touch
are reliably encoded along the same somatosensory pathway from brainstem through
thalamus. PLoS biology 13(9), e1002253.
O'Connor, D. H., Clack, N. G., Huber, D., Komiyama, T., Myers, E. W., and Svoboda, K. (2010).
Vibrissa-based object localization in head-fixed mice. Journal of Neuroscience 30(5), 1947-1967.
O'Connor, D. H., Peron, S. P., Huber, D., & Svoboda, K. (2010). Neural activity in barrel cortex
underlying vibrissa-based object localization in mice. Neuron, 67(6), 1048-1061.
O'Connor, D.H., Hires, S.A., Guo, Z.V., Li, N., Yu, J., Sun, Q.Q., Huber, D., and Svoboda, K. (2013).
Neural coding during active somatosensation revealed using illusory touch. Nature
neuroscience 16(7), 958.
Pammer, L., O'Connor, D. H., Hires, S. A., Clack, N. G., Huber, D., Myers, E. W., and Svoboda, K.
(2013). The mechanical variables underlying object localization along the axis of the
whisker. Journal of Neuroscience 33(16), 6726-6741.
Pause, M., Kunesch, E., Binkofski, F. & Freund, H. J. Sensorimotor disturbances in patients with lesions
of the parietal cortex. Brain 112(Pt 6), 1599–1625 (1989).
Pégard, N. C., Mardinly, A. R., Oldenburg, I. A., Sridharan, S., Waller, L., & Adesnik, H. (2017). Three-
dimensional scanless holographic optogenetics with temporal focusing (3D-SHOT). Nature
communications, 8(1), 1228.
Peron, S. P., Freeman, J., Iyer, V., Guo, C., & Svoboda, K. (2015). A cellular resolution map of barrel
cortex activity during tactile behavior. Neuron, 86(3), 783-799.
Petersen, C. C. (2019). Sensorimotor processing in the rodent barrel cortex. Nature Reviews
Neuroscience, 20(9), 533-546.
Petreanu, L., Mao, T., Sternson, S.M., and Svoboda, K. (2009). The subcellular organization of
neocortical excitatory connections. Nature 457(7233), 1142.
Petreanu, L., Gutnisky, D.A., Huber, D., Xu, N.L., O’Connor, D.H., Tian, L., Looger, L., and Svoboda,
K. (2012). Activity in motor–sensory projections reveals distributed coding in
somatosensation. Nature 489(7415), 299.
Pluta, S. R., Lyall, E. H., Telian, G. I., Ryapolova-Webb, E., and Adesnik, H. (2017). Surround
integration organizes a spatial map during active sensation. Neuron 94(6), 1220-1233.
Ranganathan, G.N., Apostolides, P.F., Harnett, M.T., Xu, N.L., Druckmann, S., and Magee, J.C. (2018).
Active dendritic integration and mixed neocortical network representations during an adaptive
sensing behavior. Nature neuroscience 21(11), 1583-1590.
Rema, V., & Chaudhary, R. (2018). Deficits in behavioral functions of intact barrel cortex following
lesions of homotopic contralateral cortex. Frontiers in systems neuroscience, 12, 57.
Roland, P. E. Somatosensory detection in patients with circumscribed lesions of the brain. Exp. Brain
Res. 66, 303–317 (1987).
Romo, R., Hernández, A., Zainos, A., & Salinas, E. (1998). Somatosensory discrimination based on cortical microstimulation. Nature, 392(6674), 387.
Schroeder, J. B., and Ritt, J. T. (2016). Selection of head and whisker coordination strategies during goal-
oriented active touch. Journal of neurophysiology 115(4), 1797-1809.
Severson, K. S., Xu, D., Van de Loo, M., Bai, L., Ginty, D. D., and O’Connor, D. H. (2017). Active touch
and self-motion encoding by merkel cell-associated afferents. Neuron 94(3), 666-676.
Shadlen, M.N., and Newsome, W.T. (2001). Neural basis of a perceptual decision in the parietal cortex
(area LIP) of the rhesus monkey. Journal of neurophysiology 86(4), 1916-1936.
Shepherd, G. M. (2013). Corticostriatal connectivity and its role in disease. Nature Reviews
Neuroscience, 14(4), 278.
Sofroniew, N. J., Vlasov, Y. A., Hires, S. A., Freeman, J., & Svoboda, K. (2015). Neural coding in barrel
cortex during whisker-guided locomotion. Elife, 4, e12559.
Solomon, J.H., and Hartmann, M.J. (2011). Radial distance determination in the rat vibrissal system and
the effects of Weber's law. Philosophical Transactions of the Royal Society B: Biological
Sciences 366(1581), 3049-3057.
Stosiek, C., Garaschuk, O., Holthoff, K., & Konnerth, A. (2003). In vivo two-photon calcium imaging of
neuronal networks. Proceedings of the National Academy of Sciences, 100(12), 7319-7324.
Szwed M., Bagdasarian K., and Ahissar E. (2003). Encoding of vibrissal active touch. Neuron 40(3), 621-
30.
Vaxenburg R., Wyche I., Svoboda K., Efros A. L., and Hires S. A. (2018). Dynamic cues for whisker-
based object localization: An analytical solution to vibration during active whisker touch. PLoS
Comput Biol. 14(3)
Vincent, S. B. (1912). The function of vibrissae in the behavior of the white rat. Behavior Monographs
1(5): 1-82.
Voigts, J., Herman, D. H., and Celikel, T. (2014). Tactile object localization by anticipatory whisker
motion. Journal of neurophysiology, 113(2), 620-632.
Wallach, A., Bagdasarian, K., and Ahissar, E. (2016). On-going computation of whisking phase by
mechanoreceptors. Nature neuroscience 19(3), 487.
Wallstrom, G., Liebner, J., & Kass, R. E. (2008). An implementation of Bayesian adaptive regression
splines (BARS) in C with S and R wrappers. Journal of Statistical Software, 26(1), 1.
Wolpert, D. M., Ghahramani, Z., & Jordan, M. I. (1995). An internal model for sensorimotor
integration. Science, 269(5232), 1880-1882.
Woolsey, T. A., & Van der Loos, H. (1970). The structural organization of layer IV in the somatosensory
region (SI) of mouse cerebral cortex: the description of a cortical field composed of discrete
cytoarchitectonic units. Brain research, 17(2), 205-242.
Xu, N. L., Harnett, M. T., Williams, S. R., Huber, D., O'Connor, D. H., Svoboda, K., and Magee, J. C.
(2012). Nonlinear dendritic integration of sensory and motor input during an active sensing
task. Nature 492(7428), 247.
Yang, A. E., and Hartmann, M. J. (2016). Whisking kinematics enables object localization in head-
centered coordinates based on tactile information from a single vibrissa. Frontiers in behavioral
neuroscience 10, 145.
Yu, J., Gutnisky, D. A., Hires, S. A., & Svoboda, K. (2016). Layer 4 fast-spiking interneurons filter
thalamocortical signals during active somatosensation. Nature neuroscience, 19(12), 1647.
Zuo, Y., & Diamond, M. E. (2019). Texture identification by bounded integration of sensory cortical
signals. Current Biology, 29(9), 1425-1435.
Zuo, Y., and Diamond, M. E. (2019). Rats generate vibrissal sensory evidence until boundary crossing
triggers a decision. Current Biology 29(9), 1415-1424.
Abstract
Tactile object localization is a crucial aspect of mammalian behavior. Yet our understanding of how the brain generates this percept is still in its infancy. In this thesis two projects are designed and implemented to understand how the primary somatosensory cortex represents the location of objects by touch. In the first project, we rely on rodent models to understand behavior during tactile localization. From six debated models in the field, we provide evidence that a simple model utilizing two features, the number of touches and the midpoint of whisker motion, is enough to recapitulate behavioral outcomes. In the second project we record from the deep layers of primary somatosensory cortex and discover the representation of self-motion and touched object location by single neurons. These findings demonstrate sensorimotor integration to generate a percept of object locations by touch and provide a foundation for future studies in cortical computation and tactile perception.