SYNAPTIC INTEGRATION IN DENDRITES –
THEORIES AND APPLICATIONS
by
Lei Jin
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(NEUROSCIENCE)
December 2018
Copyright 2018 Lei Jin
Epigraph
Whereof one cannot speak, thereof one must be silent.
_____________________
Ludwig Wittgenstein
Dedication
To my family
Acknowledgements
I wish to thank my advisor, Dr. Bartlett W. Mel for his great mentorship,
guidance and support in finishing the work in this thesis. His constant curiosity
and optimism encouraged me to power through the difficult times, and his
acuity in logical thinking always helped improve the work in very constructive
ways. I am also deeply grateful to my committee members, Dr. Gerald E. Loeb
and Dr. Judith Hirsch, for their instruction, advice, and encouragement in the
completion of this work. This thesis also benefited a lot from discussions with
other faculty members in the Neuroscience Graduate Program, as well as the lab
members in the Laboratory of Neural Computation. Finally, I’m very thankful to
my family for their sacrifice and patience.
Table of Contents
Epigraph
Dedication
Acknowledgements
List of Figures
Abstract
Chapter 1: Overview
Chapter 2: Classical-contextual interactions in V1 may rely on dendritic computations
2.1 Introduction
2.2 Results
2.3 Discussion
2.4 Methods
Chapter 3: The spatial dimensionality of dendritic computation
3.1 Introduction
3.2 Results
3.3 Discussion
3.4 Methods
Chapter 4: Integrating synaptic parameters into a single strength measure
4.1 Introduction
4.2 Results
4.3 Discussion
4.4 Methods
References
Appendix A: Multi-compartment Model Parameters
List of Figures
1.1 Models of synaptic integration in pyramidal neurons
2.1 Classical-contextual interactions in V1: phenomenon and proposed conceptual circuit
2.2 Measurement of a classical-contextual interaction function (CC-IF) from human-labeled natural images
2.3 Responses of a biophysically-detailed compartmental model of a layer 2/3 PN in response to two spatially-separated excitatory inputs delivered to a basal dendrite
2.4 Nonlinear structure of the image-derived CC-IF compared to a dendritic proximal-distal interaction function, and a conventional sigmoidal neural activation function
2.5 Analysis of dendritic response asymmetries due to differences other than spatial bias
2.6 Variations in dendritic response functions resulting from changes in parameters of the compartmental model
2.7 Dendritic spatial processing capabilities can be leveraged by local inhibitory circuits
2.8 Elaboration of simulation run leading to multiplicative scaling of i/o curves through combined effects of SOM and PV inhibition (and disinhibition)
2.9 Applying the CC-IF as a nonlinear image filter
3.1 Quantification of the spine density profile
3.2 Fitting arbitrary patterns of excitation with overlapping cosine-shaped basis functions
3.3 Prediction quality for models of increasing dimension
3.4 Sample distribution of synapses used to train and test the multi-segment model
3.5 Linear model performance on predicting somatic firing rates
3.6 Scatter plots showing the performance of the multi-dimensional models
3.7 Summary of prediction performance for linear and multi-dimensional models
3.8 Quantification of prediction performance by multi-dimensional models
3.9 The effect of spine neck resistance. Left to right: 0.03 µm, 0.05 µm, and 0.1 µm spine neck diameter cases
3.10 The effect of presynaptic firing rates
3.11 The combined effect of dendritic length and presynaptic firing rate
4.1 Interactions between presynaptic firing rate, peak synaptic conductance, and number of synapses
4.2 Selection of samples that have the same mono-synaptic response
4.3 The somatic responses in a 2-focal stimulation for different synaptic input configurations
4.4 Conceptual diagram of the next-generation augmented 2-layer model
Abstract
The brain is a complex machine that performs a wide range of computations. Within the cortex, the key processing units that carry different streams of information and integrate them are pyramidal neurons (PNs). Attempts at modeling synaptic integration have evolved from a simple notion of a linear summation unit to an augmented two-layer model, with a first layer describing the nonlinear interactions within the dendritic subunit and the second layer representing the axo-somatic activation function. The importance of dendrites in mediating spatial synaptic integration has gained increasing attention as these models have evolved. However, the functional relevance of such dendrite-mediated computations remains unclear. In this thesis, we started by looking for a plausible functional role for nonlinear dendritic integration. We noted that in the sensory cortex, the main feedforward projection, which defines the layer 2/3 PN's classical receptive field (CRF), accounts for only a small fraction of the excitatory contacts onto those cells. In contrast, 60%-70% of the contacts arise from "horizontal" connections. The horizontal network provides contextual information that helps modulate the cell's response. Many of these modulations are nonlinear, and little is known about their detailed mechanisms.
We chose to study the classical-contextual interaction between the center and
flanker edge elements in simple cells in the primary visual cortex (V1). An earlier
study showed that the presence of aligned edge elements in the flanking
receptive field can "multiplicatively" enhance the V1 cell's response to its direct input within the CRF. However, the literature provided the cell's response under only a limited number of conditions. In order to gather more information,
we conducted a human-labeling experiment with natural image patches to map
out the 2-input integration between the center and flanker inputs. We gathered a
two-dimensional surface of ground truth contour probability, which is an
example of classical-contextual interaction function (CC-IF). The CC-IF can be
considered as the function a V1 simple cell is supposed to compute in order to
reach its functional goal of contour detection. Using a detailed compartmental
model with two focal inputs spatially separated on a single dendrite, we
reproduced this CC-IF in terms of the somatic response to proximal and distal
inputs. We demonstrated that the effect was NMDA-dependent, and the spatial
separation was critical for such interaction to happen. Furthermore, we showed
that by altering the synaptic dynamics in the model, we could extend the CC-IF
to a rich set of 2-D functions, including AND/OR-like operations. The flexibility
could be further extended by accommodating the recently discovered SOM/PV
interneuron motif. We showed that with the inhibitory circuitry, a single
dendrite was capable of computing pure multiplication and certain types of
nonmonotonic interactions. Our findings support a model in which the nonlinear
synaptic integration effects in PNs contribute directly to fine-grained contextual
processing in V1. Moreover, the modeling study provided a valuable framework
for studying other sensory modalities and other types of 2-input interactions.
Next, we turned our attention back to the theoretical question of how exactly two or more spatially distinct input streams converge on a single dendrite. The key question we asked is how many independent dimensions of spatial
analog processing are necessary to accurately describe the branch’s input-output
profile. We proposed two approaches to address this question. The first
approach used a set of spatial activation patterns defined by some pre-set cosine-
shaped pathways. We attempted to apply low-dimensional models to predict the
somatic response of arbitrary activation patterns generated by high-dimensional
models, and quantified the performance of the predictions. The results showed
that a single dendrite may be represented by roughly three independent
dimensions, and the marginal gain of adding more dimensions diminished
significantly. Although the first approach adopted assumptions that were more closely constrained by biological conditions, it did not sufficiently represent the spatial variability of activation patterns. In order to seek the theoretical upper limit
of a dendrite’s spatial processing power, we proposed a second approach. In the
second approach, we used spatial activation patterns consisting of a fixed
number of synapses within clusters to maximize the variability of possible inputs.
In addition, we developed a set of non-overlapping, segmentation-based multi-
dimensional models that predicted the somatic response based on simple
synaptic counts. A linear-weight model was also developed in this approach to
produce sets or “slices” of patterns that received the same linear predictions.
These “slices” were used to further test the capability of the multi-dimensional
models in capturing specific variances due to nonlinear interactions.
Furthermore, we expanded the study with a set of different biophysical
parameters, including neck resistance, overall input intensity, and length of the
dendrite. With the second approach, we observed that a four- or five-
dimensional model would generally be sufficient in accurately predicting the
somatic response of an arbitrary input pattern. We also noted that certain
biophysical specifications tended to increase the likelihood of nonlinear dendritic
integration. Specifically, a lower neck resistance may in general increase the
nonlinear spatial processing power of the dendrite, and it was common to see
certain input intensity levels trigger more nonlinear integration than others. Finally, we further extended our model to encompass focal synaptic
variations. We systematically changed three focal synaptic parameters, namely
number of synaptic contacts, peak conductance and presynaptic firing rate, and
looked into the interaction between these variables. Then we tried to test the
possibility of using a unifying measure to describe the dynamics between the
three focal strength variables and found that the mono-input current response
could be a good candidate for this purpose. We hope our theoretical study
regarding the dimensionality of spatial processing in dendrites may facilitate the
interpretation of experimental results involving active dendritic activity, provide new hypotheses for future experimental research, and help guide new designs
of future neuromorphic hardware systems.
Chapter 1
Overview
A typical layer 2/3 pyramidal neuron has an extensive basal dendritic arbor,
which may receive thousands of inputs from many different sources. Modeling attempts at how a single dendrite integrates this information have been made since the 1960s. The classical view of a pyramidal neuron is a "point-neuron model", which is still widely adopted in the artificial neural network literature. In this model, the output is computed as the weighted sum of all inputs passed through a single nonlinearity. The weight on each input is analogous to the strength of an individual synaptic contact, and the nonlinearity is a representation of the axo-somatic activation function.
Since the 1990s, more evidence has been collected, revealing that dendrites in
fact have active properties and are capable of producing local dendritic spikes
mediated by NMDA channels. This discovery was later formalized as the two-
layer model of synaptic integration (Poirazi, Brannon, & Mel, 2003). In addition
to the axo-somatic nonlinearity, the two-layer model captures the local dendritic
interactions with an additional layer of nonlinearity. In the first layer, the
weighted sum of the inputs was governed by a thresholded dendritic subunit.
The outputs, represented as currents flowing to the soma, were then summed and run through the axo-somatic nonlinearity in the second layer to generate a
response (Figure 1.1a).
Figure 1.1 Models of synaptic integration in pyramidal neurons. a. The 2-layer model. Inputs
were summed in the first dendritic layer and run through an individually thresholded local nonlinearity to obtain the current flowing to the soma. Then the sum of the currents passed
through the axo-somatic activation function to generate the output. b. The augmented 2-layer
model, where the 1-dimensional nonlinearity in the first layer was replaced with an asymmetric
2-dimensional function mapping the interactions between the proximal and distal inputs.
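To make the distinction between these two abstractions concrete, a minimal numerical sketch is given below. It is an illustration only: the sigmoid parameters, synaptic weights, and subunit assignments are placeholders, not values from the models cited above.

```python
import numpy as np

def sigmoid(x, threshold=0.5, gain=10.0):
    """Generic thresholded activation function (placeholder parameters)."""
    return 1.0 / (1.0 + np.exp(-gain * (x - threshold)))

def point_neuron(inputs, weights):
    """Classical point-neuron model: one weighted sum, one output nonlinearity."""
    return sigmoid(np.dot(weights, inputs))

def two_layer_neuron(inputs, weights, subunit_of_input):
    """Two-layer abstraction: each dendritic subunit applies its own
    thresholded nonlinearity to the weighted sum of its inputs; the
    resulting 'currents' are summed and passed through the axo-somatic
    activation function."""
    currents = []
    for subunit in np.unique(subunit_of_input):
        mask = subunit_of_input == subunit
        currents.append(sigmoid(np.dot(weights[mask], inputs[mask])))
    return sigmoid(np.sum(currents))

# Toy example: six synapses spread over two dendritic subunits
x = np.array([0.2, 0.8, 0.9, 0.1, 0.0, 0.3])   # presynaptic drive per synapse
w = np.full(6, 1.0 / 3.0)                      # synaptic weights
branch = np.array([0, 0, 0, 1, 1, 1])          # which subunit each synapse targets
print(point_neuron(x, w), two_layer_neuron(x, w, branch))
```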
More recently, the two-layer model was further expanded to incorporate
interactions between two inputs landing proximally and distally on the dendrite.
The main observation is that instead of a single 1-dimensional nonlinearity, the
input-output function of a single dendrite with proximal and distal inputs took
the shape of a 2-dimensional asymmetric nonlinear surface, leading to an
augmented 2-layer model (Behabadi, Polsky, Jadi, Schiller, & Mel, 2012; Figure
1.1b). Furthermore, studies have been made to reveal the effect of inhibitory
inputs on tailoring the excitatory input/output functions, with a mixture of gain
and threshold effects (M. Jadi, Polsky, Schiller, & Mel, 2012, also see reviews in M.
P. Jadi, Behabadi, Poleg-Polsky, Schiller, & Mel, 2014). These early studies
suggested that dendrites may be more powerful than previously thought, especially in mediating nonlinear computations through NMDA-dependent spatial synaptic integration. However, the answers to the following two questions remain unclear:
1. What functional roles could the dendritic computational substrate serve in
cortical computation?
2. What is the maximum capability of a single dendrite in mediating
nonlinear computations?
We address the first question in Chapter 2 and the second one in Chapter 3.
In Chapter 2, we provided a model for understanding the mechanism of
classical-contextual interaction in the primary visual cortex (V1), which utilized
the power of 2-input spatial interactions on a single dendrite. A signature feature
of the neocortex is the dense network of horizontal connections through which
pyramidal neurons exchange "contextual" information. In primary visual cortex
(V1), which has been extensively examined by studies on sensory integration,
horizontal connections are thought to facilitate object boundary detection, a
crucial operation for shape-based recognition. But how horizontal connections
modulate PN responses for this purpose remains obscure. To gain traction, we
collected natural image data to better characterize the problem faced by a
putative boundary-detecting cell in V1. We found that the function a neuron should use to compute boundary probability from aligned edge elements within and outside its receptive field has an asymmetric 2-D sigmoidal form, which provides the first normative explanation for a decades-old neurophysiological observation (Kapadia et al., 1995). We showed using a detailed neuron model that
this peculiar classical-contextual interaction function optimized for boundary
detection could be directly computed by NMDA receptor-dependent spatial
interactions in PN dendrites – the site where classical and contextual inputs first
converge in the cortex. We further showed that in conjunction with local
interneuron circuitry, the spatial computing capabilities of PN dendrites afford the neocortex a powerful and highly flexible substrate for processing
contextual information.
Having learned in Chapter 2 that synaptic integration in dendrites may play a critical role in cortical computation, it is valuable to seek the upper limit of this computational capability. While the 2-input interaction gives rise to a 2-dimensional input-output function that can be utilized to facilitate classical-contextual interactions, it is natural to ask whether additional inputs, or highly complex spatial patterns of activation, can lead to a higher-dimensional I/O relationship.
In Chapter 3, we proposed a theoretical framework to investigate this issue. We
developed a dimensionality-reduction approach that allowed us to predict a
dendrite’s time-averaged response to an arbitrary spatial pattern of excitation,
and as a side benefit, to determine the maximum dimensionality of the spatial
processing within a PN dendrite. We proposed two approaches to answer this
question. In the first approach, we defined a family of N cosine-shaped spatial
basis functions covering the length of a dendrite, each representing the density of
excitation delivered to that region. Using biophysically detailed compartmental
simulations, we estimated the number of basis functions needed to accurately
predict the dendritic response to arbitrary spatially varying excitation patterns.
We found that a PN thin dendrite had, within its ~200 µm length, roughly N=3
independent dimensions of spatial analog processing capability, meaning that a
branch’s input-output function was in fact a 3-D rather than a 1-D sigmoid. The
first approach, while more closely related to biological constraints, did suffer from the limitation that the model did not maximize the diversity of spatial
activation profiles, and tended to underestimate the true dimensionality. In the
second attempt, we defined the activation patterns with a fixed number of
synapses systematically grouped in clusters. We partitioned the dendrite into N
segments and developed a training-testing scheme based on the count of
synapses landing in each partition. In addition, we developed a linear model and
generated slices of data that had the same linear predictions so that the multi-
dimensional model could be tested on mostly nonlinear variances. We also
accommodated other biophysical factors including spine neck resistance,
presynaptic input frequency, and length of the dendrite. In all cases, the 4-D or 5-
D model performed well in predicting both linear and nonlinear variations in the
firing rate output. In addition, we found that there existed certain biophysical
setups or “sweet-spots” (e.g. overall input intensity levels) that may intrinsically
lead to higher levels of cluster sensitivity, thus having higher nonlinear
processing capabilities. Specifically, a low neck resistance may increase the
capability of a dendrite in facilitating nonlinear interactions.
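As a rough illustration of the first approach, the sketch below builds a small family of overlapping cosine-shaped basis functions over a 200 µm branch and expresses one excitation profile in that basis. The exact basis shapes, overlap, and normalization used in Chapter 3 may differ; only the cosine shape and the ~200 µm length are taken from the text.

```python
import numpy as np

def cosine_basis(n_basis, length_um=200.0, n_points=201):
    """Overlapping half-cosine bumps tiling a dendrite of the given length.
    Each basis function stands for the density of excitation delivered to
    one region of the branch (illustrative construction)."""
    x = np.linspace(0.0, length_um, n_points)
    centers = np.linspace(0.0, length_um, n_basis)
    half_width = length_um / max(n_basis - 1, 1)
    basis = np.zeros((n_basis, n_points))
    for i, c in enumerate(centers):
        bump = np.cos((x - c) * np.pi / (2.0 * half_width))
        bump[np.abs(x - c) > half_width] = 0.0   # restrict each bump's support
        basis[i] = np.clip(bump, 0.0, None)
    return x, basis

# An arbitrary spatial excitation profile expressed in an N = 3 basis
x, B = cosine_basis(n_basis=3)
coefficients = np.array([0.2, 1.0, 0.5])        # per-region excitation levels
excitation_profile = coefficients @ B           # density of excitation along the branch
```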
Lastly, in Chapter 4, we further extended our spatial integration model to
accommodate non-standard synaptic specifications. We looked into three
different parameters – number of synapses, peak conductance and presynaptic
firing rate, and tested how different configurations changed the input efficacy by
measuring the mono-input current contribution to the soma. We found that these
parameters interacted cooperatively in general, but we didn’t find any intuitive
method to describe the interaction between them. Furthermore, our results
showed that when we used the cases with the same mono-input-response and
ran them through the multi-dimensional model, the somatic responses showed
little variability, suggesting that the mono-input-current-response was a good
candidate as the unifying focal strength measure. This result completed our
dendritic integration model with the flexibility in accommodating detailed non-
uniform focal or synaptic specifications.
An overarching goal of this thesis is to construct simplified models of complex
(real) neurons that capture the essential processing capabilities but abstract away
from the biological details as much as possible. The long-term goal of the lab is to be able to predict accurately the response of a pyramidal neuron to an arbitrary pattern of excitatory and inhibitory synaptic input within a few steps of model-based
calculations.
Chapter 2
Classical-contextual interactions in V1
may rely on dendritic computations
2.1 Introduction
In the primary visual cortex, the main feedforward pathway rises vertically from
the input layer (L4) to the next stage of processing in layer 2/3 (Poirazi, Brannon,
& Mel, 2003). These "driver" inputs establish the L2/3 pyramidal neurons (PNs)
classical receptive fields, but account for only a small fraction of the excitatory
contacts innervating those cells (<10%) (Binzegger, Douglas, & Martin, 2004). A
much larger number of contacts (>60%) (Binzegger et al., 2004; Stepanyants,
Martinez, Ferecskó, & Kisvárday, 2009) arises from the massive network of
horizontal connections (HCs) through which cortical PNs exchange contextual
information (Angelucci et al., 2002; Bosking, Zhang, Schofield, & Fitzpatrick,
1997; Boucsein, 2011; Chisum, Mooser, & Fitzpatrick, 2003; McGuire, Gilbert,
Rivlin, & Wiesel, 1991; Rockland & Lund, 1982). Despite their large numbers and
undoubted importance, relatively little is known regarding the HCs'
contributions to behavior, the functional form(s) of the classical-contextual
interactions they give rise to, or the biophysical mechanisms that underlie their
modulatory effects.
Object boundary detection provides an attractive framework for studying
classical-contextual interactions in visual cortex, given that object contours are
known to contain essential information for recognition (Biederman, 1987; Field,
Hayes, & Hess, 1993), and neurons in the first visual cortical area (V1) already
show strong boundary-related contextual modulation effects (C.-C. Chen & Tyler,
2001; Chisum et al., 2003; Kapadia, Ito, Gilbert, & Westheimer, 1995; Kapadia,
Westheimer, & Gilbert, 2000; Nelson & Frost, 1985; Polat, Mizobe, Pettet,
Kasamatsu, & Norcia, 1998) (Figure 2.1a). With the aim of linking this behaviorally
relevant computation to underlying neural mechanisms, we took the following
unconventional approach. First, to gain a better understanding of the problem
faced by a putative boundary-detecting neuron in V1, we collected oriented filter
responses on and off object boundaries in human-labeled natural images. This
allowed us to construct the first (that we know of) natural image-derived
classical-contextual interaction function (CC-IF) that captures quantitatively how
aligned boundary elements within and outside a neuron's receptive field
together determine object boundary probability. As described below, the CC-IF,
which has an asymmetric 2-D sigmoidal form, provides the first normative
explanation for three hallmark features of boundary-related classical-contextual
interactions that have been described in the neurophysiological
literature (Kapadia et al., 1995, 2000).
Second, we noticed that the asymmetric sigmoidal CC-IF we extracted from
natural images is strikingly similar in form to the input-output functions
resulting from NMDAR-dependent spatial interactions between synapses
targeting proximal vs. distal sites on a PN thin dendrite (Behabadi et al.,
2012) (Figure 2.1b). We therefore attempted to explicitly fit the input-output
behavior of a detailed neuron model to the image-derived CC-IF, as a test of the
hypothesis that PN dendrites could be the neural substrate where boundary-
related CC-IFs are computed in V1. We show that the fit is remarkably good,
supporting the idea that nonlinear synaptic integration effects in PN dendrites
could contribute to classical-contextual processing in V1.
Third, we carried out prospective simulations to assess the generality and
expressive power of this dendrite-based analog computing mechanism. This was
essential since the exact form of the boundary-related CC-IF we collected is tied
to assumptions about the inputs available to the pyramidal neuron (in our case,
two aligned contour elements, one within, and the other outside the cell’s CRF),
and the task the neuron is supposed to perform based on those inputs (detecting
object boundaries). Since either of these assumptions could have been different,
leading to a different CC-IF, it is critical to assess the sources of flexibility in the
cortex that could allow HCs to produce many different types of classical-
contextual interactions. We therefore carried out additional simulations to
explore the spectrum of CC interactions attainable through variations in single
neuron and circuit-level parameters. We conclude that PN dendrites, forming the
core of the cortical circuit, provide a powerful computing substrate through
which HCs can flexibly modulate neural responses depending on context.
2.2 Results
2.2.1 Deriving a contour-related CC-IF from human-labeled natural
images
In studying the boundary-related responses of V1 neurons, a typical observation
is that a cell's response to an oriented contour element inside its classical
receptive field (CRF) is boosted (often 2-3 fold) by aligned "flankers" lying
outside the CRF, whereas the flankers produce little or no response on their own
(C. C. Chen, Kasamatsu, Polat, & Norcia, 2001; Kapadia et al., 1995, 2000; Polat et
al., 1998) (Figure 2.1a). This nonlinear facilitatory interaction between center and
flanker stimuli accords well with psychophysical effects (Kapadia et al., 2000;
Polat & Sagi, 1994), but also seems intuitive, in that evidence for an object
boundary within a cell's CRF ought to be "amplified" when corroborated by
evidence from nearby locations. Our progress in understanding the biophysical
mechanisms underlying this type of classical-contextual interaction has been
hampered, though, by the lack of a method for quantitatively predicting the form
of the CC-IF under different assumptions about a neuron's goal and available
inputs. The ability to predict CC-IFs would be valuable in two ways: it would
provide a reference to which a measured CC-IF could be compared, and a target
towards which biophysical modeling efforts could be aimed.
Figure 2.1. Classical-contextual interactions in V1: phenomenon and proposed conceptual circuit.
a. Schematic of key result of Kapadia et al. (1995). Cell recorded in monkey V1 showed modest
response (~20 Hz) to bar stimulus in CRF (dashed box); ~no response to flanking bar stimulus in
surround, but strongly boosted response when flanking bar was paired with the CRF stimulus.
b. Circuit model potentially accounting for the “multiplicative” classical-contextual interaction in
a. Driver input representing the CRF stimulus rises vertically from layer 4, terminating with a
distal bias on basal dendrite of a layer 2/3 PN. Horizontal input from neighboring V1 cell
representing the flanker stimulus terminates on same PN dendrite with a proximal bias. From
Behabadi et al (2012), the proximal modulator is expected to multiplicatively boost the dendrite’s
response to the driver input.
With this goal in mind, we turned to natural images to obtain an empirical CC-IF
involving two aligned boundary elements, one inside and one outside the CRF of
a virtual V1 neuron (by analogy with the stimulus configuration of Kapadia et al.
1995 – see Figure 2.1a). We first constructed a 3x5-pixel oriented edge filter (Figure
2.2a) loosely inspired by the receptive field structure of a canonical even-
symmetric V1 "simple cell" (Hubel & Wiesel, 1962). The filter returns a value r ∈
[0,1] when applied at any position/orientation in an image, signifying the
strength of the oriented luminance contrast at that site. Image patches (100x100
pixels) were collected at random from the Corel image database, and a CRF
(dashed rectangle in Figure 2.2b), was placed (virtually) at the center of each
image patch (i.e., was not actually drawn). Two filter values were computed for
each patch: rcenter, measured within the virtual CRF, and rflanker from an aligned
position just outside the CRF (Figure 2.2b). Image patches were then sorted
based on this pair of measured filter responses. An 11x11 grid of image bins was
defined over the 2-D space of center-flanker response pairs. These bins were
centered at regularly spaced values {0, 0.1, 0.2 … 1.0} along each of the two filter
dimensions, with a bin width of ±0.005 around each center. Image patches were collected until each
of the 121 bins contained a minimum of 30, but typically 100 image patches. A
few of the image bins are illustrated schematically in Figure 2.2c. Image patches
that did not fall into any bin were discarded. To collect responses from human
labelers, patches in each of the 121 bins were presented on a video monitor in
pseudorandom order. Labelers were told to focus on a red box contained within
the CRF (Figure 2.2b), and asked to assign a score ranging from 1 to 5 (without
time pressure) indicating their level of confidence that all of the following were
true: an object contour entered one end of the red box, exited the other end, remained within the box throughout, and was unoccluded at the center (see Figure 2.2d). Scores were linearly converted to a [0,1] range and averaged within each
bin, yielding a plot of ground truth "boundary probability" over the 2-D space of
center-flanker score pairs (Figure 2.2e).
Figure 2.2. Measurement of a classical-contextual interaction function (CC-IF) from human-labeled natural images. a. Schematic of oriented edge filter. The filter response was obtained by computing pairwise differences (PDs) for five pixel pairs, each separated by an unsampled pixel. Each PD was passed through a sigmoidal nonlinearity, given by x/(0.2+x) for x ≥ 0, and x/(0.2-x) for x < 0, and the results were summed. Sign indicated edge polarity. b. Sample image patch shown with the CRF of a virtual V1 neuron superimposed (dashed box). Each patch was characterized by the responses of two aligned filters, rcenter within the CRF and rflanker just outside the CRF. c. Image patches were drawn at random from a natural image database and binned based on rcenter and rflanker values, forming a 2-dimensional space of bins. Only 3 of the 11 bins are shown along each axis. d. Human labelers were asked to judge whether an object contour was present in the red box that entered one end, exited the other, remained always within the box, and was unoccluded at the center. Scores were assigned as follows: 1 = "definitely no", 2 = "probably no", 3 = "can't decide", 4 = "probably yes", 5 = "definitely yes". Examples of patches that received each label are shown. e. Boundary probability within each image bin was plotted as a function of rcenter and rflanker.
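For clarity, the saturating nonlinearity defined in the Figure 2.2a caption is written out below. The choice of the five pixel pairs and any final normalization of the summed response to the [0,1] range are not specified here and should be treated as assumptions.

```python
import numpy as np

def pd_nonlinearity(x):
    """x/(0.2+x) for x >= 0 and x/(0.2-x) for x < 0, i.e. the symmetric
    saturating function x / (0.2 + |x|) applied to each pairwise difference."""
    return x / (0.2 + np.abs(x))

def edge_filter_response(pixel_pairs):
    """Sum of saturated pairwise differences over the filter's five pixel
    pairs; the sign of the result indicates edge polarity. Any rescaling
    to the [0,1] range used in the text is left out here."""
    pds = np.array([a - b for a, b in pixel_pairs])
    return float(np.sum(pd_nonlinearity(pds)))
```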
Figure 2.3. Responses of a biophysically-detailed compartmental model of a layer 2/3 PN in response to two spatially-separated excitatory inputs delivered to a basal dendrite. Model neuron was from macaque V1 (see Methods for details). a. Two excitatory input "pathways", each consisting of 30 identical excitatory spine synapses, were placed in clusters at 60 and 120 µm from the soma. Each spine contained both an NMDA and an AMPA-type conductance (see Methods), and was stimulated by a regular spike train (with random phase) with frequency ranging from 0-20 Hz. Voltage was recorded at the soma and in the stimulated dendrite 90 µm from the soma. b. Dendritic and somatic recordings for 3 representative cases: (1) 17.5 Hz proximal and 2.5 Hz distal stimulation; (2) 2.5 Hz proximal and 17.5 Hz distal; and (3) both proximal and distal at 17.5 Hz. All traces started from -70 mV. c. Firing rates were averaged over 10 1-second runs. Bold outer frame is identical to that in Figure 2.2e.
The plot in Figure 2.2e illustrates three features typical of a CC-IF (Kapadia et al.,
1995). First, as the value of rcenter increases when rflanker is near zero, boundary
probability rises to a modest level (40.5%). In contrast, as the value of rflanker
increases when rcenter is near zero, boundary probability remains low (8.9%).
Despite this weak effect on its own, a strong flanker input more than doubles the
gain of the response to the center input, leading to a boundary probability of 98.3%
when both center and flanker filters are strongly activated.
While other spatial configurations of two or more filters, or different filter
designs, or different labeling criteria could have been used, it is notable that an
arrangement of just two aligned filters used to sort image patches into bins,
coupled with a simple definition of an object boundary, already allowed us to
capture the hallmark features of a contour-related CC-IF (Kapadia et al., 1995;
Polat et al., 1998).
2.2.2 Could PN dendrites be the site where the CC-IF is computed?
What biophysical mechanisms might be capable of producing this peculiar type
of functional interaction? The fact that horizontal and vertical inputs first
converge on the basal and apical oblique dendrites of layer 2/3 PNs raises the
question as to what role PN dendrites might play in nonlinearly integrating
classical and contextual inputs. A previous combined neurophysiological and
modeling study (Behabadi et al., 2012) showed that NMDAR-dependent
interactions between proximal and distal synapses on PN basal dendrites
produce asymmetric 2-D sigmoidal interaction functions similar to the plot of
Figure 2.2e (see also (M. P. Jadi et al., 2014)). This similarity led us to ask whether
a spatially-biased projection pattern in which horizontal axons project closer to
the soma where they can exert a more multiplicative effect, and vertical inputs
from layer 4 connect more distally (Figure 2.1b), could reproduce the boundary-
related CC-IF we had extracted from natural images (Figure 2.2e).
To test this, we ran compartmental simulations of a PN stimulated by groups of
30 proximal and distal synapses (Figure 2.3a). Examples of simultaneous
dendritic and somatic recordings are shown in Figure 2.3b for 3 different input
intensity combinations. Dendritic traces show either no active response (case 1),
intermittent slow dendritic spikes (case 2), or plateaus (case 3), with fast back-
propagating somatic action potentials superimposed. We systematically varied
proximal and distal input rates from 0 to 20 Hz, and recorded the firing rate at
the soma. The resulting 2-D neural response function (Figure 2.3c) closely
matched the natural image-derived boundary probability plot shown in Figure
2.2e; the bold outer frames are identical in the two plots. All three hallmark
features of a classical-contextual interaction were again present: the distal input
alone drove the cell to fire at a moderate rate (11.02 Hz). In contrast, the proximal
input alone drove the cell to fire only weakly (3.16 Hz), but significantly boosted
the gain of the response to the distal input when the two pathways were
activated together (25.96 Hz at full activation). Thus, we concluded that the
analog nonlinear processing capabilities of PN dendrites are well suited to
produce the asymmetric interaction between CRF and extra-classical inputs for
purposes of boundary detection in natural images.
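To indicate how such a two-pathway rate sweep might be organized, a heavily simplified NEURON (Python) sketch is given below. It is not the thesis model: it substitutes generic Exp2Syn synapses for the NMDA+AMPA spine synapses described in the Methods, uses toy geometry and channel settings, and every numerical value is a placeholder.

```python
from neuron import h

h.load_file("stdrun.hoc")

# Toy morphology: an active soma with a single passive basal dendrite
soma = h.Section(name="soma")
dend = h.Section(name="dend")
dend.connect(soma)
soma.L = soma.diam = 20
dend.L, dend.diam, dend.nseg = 200, 1, 101
soma.insert("hh")
dend.insert("pas")

def make_pathway(loc, rate_hz, n_syn=30):
    """Place n_syn synapses at a normalized location on the dendrite and
    drive each with a regular presynaptic spike train at rate_hz."""
    objects = []
    for _ in range(n_syn):
        syn = h.Exp2Syn(dend(loc))
        syn.tau1, syn.tau2, syn.e = 0.5, 5.0, 0.0
        stim = h.NetStim()
        stim.number, stim.start, stim.noise = 1000, 10, 0
        stim.interval = 1000.0 / rate_hz if rate_hz > 0 else 1e9
        nc = h.NetCon(stim, syn)
        nc.weight[0] = 0.0005
        objects += [syn, stim, nc]
    return objects

def somatic_rate(prox_hz, dist_hz, tstop_ms=1000.0):
    """Run one trial and return the somatic firing rate in Hz."""
    pathways = make_pathway(0.3, prox_hz) + make_pathway(0.6, dist_hz)
    spike_times = h.Vector()
    apc = h.APCount(soma(0.5))
    apc.record(spike_times)
    h.finitialize(-70)
    h.continuerun(tstop_ms)
    return spike_times.size() / (tstop_ms / 1000.0)

# Sweep proximal and distal input rates to build a 2-D response surface
rates = [0, 5, 10, 15, 20]
surface = [[somatic_rate(p, d) for d in rates] for p in rates]
```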
2.2.3 On the nature and potential sources of the neural response
asymmetry
Could the asymmetry of the CC-IF be produced by a different (and especially a
simpler) mechanism, not involving dendritic spatial integration? A common
abstraction of a neuron's (or dendrite's) input-output function is a weighted sum
of inputs followed by a sigmoidal nonlinearity,
y = sigmoid( Σ_i w_i · x_i )
This conventional neural activation function produces a nonlinear interaction
between its inputs x_i by virtue of the output nonlinearity, and can even produce an asymmetric nonlinear interaction by assigning different weights (w_i) to each
input. Can such a model, properly parameterized, capture the type of
asymmetric nonlinear interaction expressed by the natural image-derived CC-IF,
eliminating the need to consider more complex dendritic integration mechanisms?
To address this question, we fit each iso-modulator slice of the CC-IF with
logistic functions whose threshold, slope, and asymptote were allowed to vary
arbitrarily (see Figure 2.4 caption for formula and parameters used). As shown in
Figure 2.4a, the best-fitting sigmoids (red curves) followed a progression wherein,
as the threshold (i.e. x-coordinate of the steepest point) moved leftward under
the influence of an increasing modulatory input, both the maximum slope and
amplitude of the sigmoidal curves increased. These correlated changes in
threshold, slope, and amplitude are summarized graphically by the upward-
leftward shift and steepening of the black bars marking the maximum slopes
moving from low to high modulation levels. The same progressive increase in
slope and amplitude was seen in the proximal-distal dendritic interaction
function (Figure 2.4b; note the i/o curves are plotted over a greater range of
inputs than in Figure 2.4a to more fully visualize the curves' sigmoidal form).
The existence of changes in slope and amplitude from curve to curve rules out
that the CC-IF can be represented by a conventional sigmoidal activation
function, for which the maximum slope and amplitude of the i/o curves remain
unchanged across modulation levels (Figure 2.4c; this is true regardless of which
input is considered the driver and which the modulator). Based on these results,
we conclude that the natural image-derived CC-IF has nonlinear structure that
falls outside the representational scope of a conventional neural activation
function.
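The slice-fitting procedure can be sketched as follows. The exact parameterization and fitting options used for Figure 2.4 may differ; driver_levels and ccif_surface are placeholder inputs representing the driver axis and the 2-D interaction surface (one row per modulator level).

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, amplitude, threshold, slope, offset):
    """4-parameter logistic with variable amplitude, threshold, slope, and y-offset;
    its steepest point sits at x = threshold with slope amplitude*slope/4."""
    return offset + amplitude / (1.0 + np.exp(-slope * (x - threshold)))

def fit_iso_modulator_slices(driver_levels, ccif_surface):
    """Fit each iso-modulator slice of the interaction surface with a logistic
    function; returns one (amplitude, threshold, slope, offset) row per slice."""
    fits = []
    for slice_values in ccif_surface:
        p0 = [slice_values.max() - slice_values.min(),
              float(np.median(driver_levels)), 5.0, slice_values.min()]
        params, _ = curve_fit(logistic, driver_levels, slice_values,
                              p0=p0, maxfev=10000)
        fits.append(params)
    return np.array(fits)
```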
Having established that a conventional sigmoidal activation function lacks the
fundamental asymmetric structure needed to represent the natural image-
derived CC-IF, we next asked whether a proximal-distal separation of driver and
modulator pathways on PN dendrites is necessary to achieve the type of
asymmetric nonlinear interaction seen in the CC-IF, or whether other types of
asymmetries between the two input pathways can produce a similar type of
interaction.
Figure 2.4. Nonlinear structure of the image-derived CC-IF compared to a dendritic proximal-
distal interaction function, and a conventional sigmoidal neural activation function. a. Each CC-IF
slice (blue) was fit by a logistic function (red) with variable threshold, slope, amplitude, and y-
offset. Steepest slope of each fit is marked by an asterisk; black bars at lowest and highest
modulation levels help visualize progression of threshold and slope values. Progression of
inflection points has a significant non-zero slope (two sided test, p<0.01) b. Same as a but for
responses of compartmental model with a distal driver and proximal modulatory input. Slices
are plotted over a larger range of inputs than in a for better visualization of the sigmoidal form. c.
Progression of i/o curves of a conventional sigmoidal activation function with two inputs; peak
slopes remain constant regardless of “modulation” level.
While it would be impossible to consider all possible alternatives, we identified
three types of differences between two pathways, other than dendritic location,
that could potentially produce the type of amplitude+slope boosting interaction
seen in the CC-IF. We then ran simulations in which: (1) the classical and
contextual synapses were co-mingled on the dendrite, thus eliminating their
spatial asymmetry, (2) one of the pathways retained its original biophysical
characteristics and was called the "Standard input"; and (3) the other pathway
was altered in one of three ways:
1. Increased peak synaptic conductance. Rationale: Increasing the peak
conductance of an input pathway effectively lowers its threshold for NMDAR
activation, which could lead to an enhanced superlinear interaction between
the two pathways;
2. Reduced NMDA-AMPA ratio. Rationale: by making one (e.g. the driver)
pathway less superlinear on its own, it might benefit more from the nonlinear
excitability boost provided by the modulatory pathway;
3. Increased spine neck resistance. Rationale: higher spine neck resistances
amplify spine voltages, and are said to "encourage electrical interaction
among coactive inputs" and "promote nonlinear dendritic processing"
(Harnett, Makara, Spruston, Kath, & Magee, 2012).
The results of these manipulations are shown in Figure 2.5. In each row, the
surface plot at left shows the 2-D interaction with the "Standard" input (same in
all 3 rows) plotted on the left abscissa and the "Altered" input plotted on the right
abscissa. The 2-D interaction surface is shown sliced along both cardinal
directions in the middle and right plots of each row, and the maximum slope
points of the slices are again marked by black bars. While the interaction
functions take on various forms, in none of the cases or for either direction of
slicing do we observe conjoint amplitude+slope increases with increasing value
of the modulator. Based on these negative results, we conclude that the peculiar
type of nonlinear interaction between two inputs that arises from a dendritic
location asymmetry, which closely matches the nonlinear structure of the CC-IF,
cannot be easily reproduced by simply modifying the relative excitability of one
of the two pathways in the absence of a dendritic location asymmetry.
Figure 2.5. Analysis of dendritic response asymmetries due to differences other than spatial bias.
In all cases, both Standard and Altered inputs were located 60 µm from the soma. Only one synaptic parameter was changed in the Altered input; Standard input was always the same. a. Peak synaptic conductance was increased 3-fold in the Altered input. Panel a1 shows the 2-D response surface; panels a2 and a3 show slice views of the same data. Black bars again show steepest points of individual i/o curves. Dashed arrow shows direction of increasing modulatory level. b. Same as a but Altered input has NMDA/AMPA ratio lowered from 2 to 0.1. Total conductance was increased to produce roughly the same amount of activation compared to the Standard input. c. Same as a but Altered input had spine neck resistance increased 3.5-fold to ~500 MΩ.
2.2.4 Interneuron circuits provide additional flexibility for
tailoring classical-contextual interactions
Importantly, the natural image-derived CC-IF (Figure 2.2e) that has guided our
search for underlying mechanisms is just one of an essentially unlimited number
of different interaction functions that could be needed in different cortical areas,
which must process very different kinds of information, and in different animal
species, which must perform well in very different kinds of environments. We
therefore set out to more fully explore the spectrum of CC-IFs that could be
produced by varying anatomical and physiological parameters available to
cortical neurons and circuits. As a first step we generated dendritic interaction
functions that deviated in various ways from the standard shown in Figure 2.3c
(reproduced in Figure 2.6a over a larger range of inputs), achieved by: (1)
reducing the separation distance of the classical and contextual inputs from
90 µm to 30 µm (Figure 2.6b); (2) altering the NMDA conductance model to
eliminate post-synaptic receptor saturation (Figure 2.6c); and (3) increasing spine
neck resistance from 100 MΩ to 500 MΩ. In all three cases, unlike those in Figure
2.5, synapses in both the driver and modulator pathways were identical but for
their dendritic location. The gallery of cases in Figure 2.6 illustrates that even
when driver and modulator synapses have identical properties, different spatial
biases in their projections to PN thin dendrites can produce CC-IFs that are
widely varying in functional form.
Figure 2.6. Variations in dendritic response functions resulting from changes in parameters of the
compartmental model. a. 2-D response surface for a standard seed condition (proximal-distal separation = 90 µm, spine neck resistance = 100 MΩ, synaptic peak conductance = 2 nS, and synaptic saturation "cap" = 100%, meaning that a single presynaptic release event saturated all available channels at the synapse; a cap of 200% meant that through repeated stimulation, temporal facilitation could produce up to twice the base conductance; a cap of ∞ meant the synapse did not saturate at any rate). b. Spatial separation of the two inputs was reduced to 30 µm. c. Synaptic saturation cap was set to ∞. d. Spine neck resistance was set to ~500 MΩ.
An additional source of flexibility that could be used by the cortex to tailor
classical-contextual interactions lies in the parameters that govern how inhibitory
interneurons activated by horizontal axons affect PNs both directly and
indirectly. We focused on the circuit motif shown in Figure 2.7a, a subset of the
full interneuron circuit summarized in (Pfeffer, Xue, He, Huang, & Scanziani,
2013; Tremblay, Lee, & Rudy, 2016). Our reason for focusing on the SOM->PV->PN subcircuit stems from the fact that SOM interneurons are strongly activated
by HCs (Adesnik, Bruns, Taniguchi, Huang, & Scanziani, 2012), and are therefore
particularly relevant for understanding contextual modulation. Furthermore,
SOM interneurons have the interesting property that they inhibit PN dendrites
directly, but also inhibit PV interneurons, which leads to a disinhibition of PNs
perisomatically. The net effect of this arrangement is that activation of HCs
produces a shift of inhibition away from the soma and towards the dendrites of
their target PNs (Pfeffer et al., 2013). What might be the functional role of this
local PV-SOM circuit motif?
Figure 2.7. Dendritic spatial processing capabilities can be leveraged by local inhibitory circuits.
a. Conceptual model modified from Figure 2.1b to include a SOM->PV interneuron circuit. SOM
interneuron inhibits PN dendrite distally; PV interneuron inhibits PN perisomatic region, and is
inhibited by SOM interneuron. SOM neuron is driven by horizontally offset modulator. b.
Dendritic response function with pure excitatory spatial interaction, similar to Figure 2.3c with
input range extended. c. Slices of surface in b (only even-numbered lines are shown). d, e. Same
as b and c but with inhibitory effects included. Progression of i/o curves follows a relatively pure
multiplicative scaling (as evidenced by aligned thresholds) with increased dynamic range. f.
Activation curves of PV (green) and SOM (blue) interneurons used for the simulation shown in d
and e. g, h. An alternative set of PV and SOM activation curves, providing an example of a non-
monotonic modulatory effect.
Previous work has shown that excitatory (E) and inhibitory (I) synapses interact
in complex ways in dendrites, depending on their absolute and relative locations
(Gidon & Segev, 2012; M. Jadi et al., 2012; Koch, Poggio, & Torre, 1983; Vu &
Krasne, 1992). Extrapolating from the findings of M. Jadi et al. (2012), who
focused on proximal-distal E-I interactions in active dendrites, we expected that
increasing SOM activation by HCs should have two specific effects on a PN's
dendritic input-output curves, namely: (1) a progressive increase in the threshold
(i.e. right-shifting) of the cell’s dendritic i/o curves, caused by the increasing
distal inhibition; and (2) a progressive increase in the effective gain of the cell's
i/o curves, caused by the cell's gradual alleviation from proximal (i.e.
perisomatic) inhibition. As a “control” condition, the 2-D response surface
produced by the compartmental model with no inhibition is shown in Figure
2.7b, and the corresponding 1-D slices are shown in Figure 2.7c (consisting of a
subset of those in Figure 2.4b). The response surface with intact SOM+PV
inhibition is shown in Figure 2.7d, and the corresponding 1-D slices are shown in
Figure 2.7e.
Interestingly, when a modulatory input engages the interneuron subcircuit,
leading to coupled threshold and gain increases in the dendritic input-output
curves (as shown in Figure 2.7d,e), the net effect can be to produce (1) a more
purely multiplicative scaling of the dendritic response curves, as evidenced by
the more vertical alignment of the thresholds in Figure 2.7e compared to Figure
2.7c, along with (2) an expanded dynamic range from low to high modulation
levels, as evidenced by the greater vertical spread of asymptotes in the red vs.
blue slices. The functions that map the modulation intensity to PV (perisomatic)
and SOM (dendritic) inhibitory firing rates in this example are shown in Figure
2.7f, and a detailed analysis of the separate and combined effects of the SOM and
PV inhibitory components can be found in Figure 2.8. It is important to note that
the two curves shown in Figure 2.7f were designed through trial and error to
achieve a pure multiplicative effect, so that pure multiplicative modulation is by
no means an inevitable outcome of this type of circuit, nor are the curves in
Figure 2.7f likely to be unique in producing a multiplicative effect. Rather, the
example is intended to illustrate the flexibility that even a simple interneuron
circuit adds to an already richly expressive classical-contextual modulation
capability based on pure excitatory dendritic location effects (Figure 2.4). A final
example shows that modulation of a PN's responses by a horizontal pathway is
not restricted even to monotonic (i.e. facilitating or suppressive) effects: the SOM
and PV activation curves shown in Figure 2.7g, modified relative to those in
Figure 2.7f, lead to the non-monotonic 2-D interaction surface shown in Figure
2.7h. This type of interaction function might be appropriate in cases where (1) a
driver input provides evidence for a particular preferred feature within the cell's
CRF (call it feature A), and (2) a contextual input provides contextual support for
A up to a point, which increases the cell's responsiveness to its primary driver
input; but after that point, begins to provide stronger contextual support for a
(non-preferred) feature B, which eventually reduces the cell's responsiveness to
its primary driver input.
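The logic of this subcircuit can be caricatured with a purely phenomenological sketch. This is not the compartmental model: the activation curves and every number below are invented solely to illustrate how a threshold shift from dendritic (SOM) inhibition, combined with a ceiling increase from perisomatic (PV) disinhibition, can approximate multiplicative scaling of the i/o curves.

```python
import numpy as np

def dendritic_io(driver, threshold, gain, ceiling):
    """Phenomenological sigmoidal i/o curve in the driver input."""
    return ceiling / (1.0 + np.exp(-gain * (driver - threshold)))

def modulated_response(driver, modulation, som_curve, pv_curve):
    """Combined effect sketched in Figure 2.8: SOM (dendritic) inhibition
    raises the threshold, while PV (perisomatic) disinhibition raises the
    response ceiling; together they mimic multiplicative scaling."""
    som_rate = som_curve(modulation)             # increases with the horizontal input
    pv_rate = pv_curve(modulation)               # decreases as SOM shuts PV down
    threshold = 8.0 + 0.5 * som_rate             # right-shift from distal inhibition
    ceiling = 40.0 / (1.0 + 0.1 * pv_rate)       # ceiling rises as PV inhibition wanes
    return dendritic_io(driver, threshold, gain=0.6, ceiling=ceiling)

# Hypothetical stand-ins for the activation curves of Figure 2.7f
som_curve = lambda m: 10.0 * m
pv_curve = lambda m: 10.0 * (1.0 - m)

driver = np.linspace(0, 40, 81)
io_curves = [modulated_response(driver, m, som_curve, pv_curve)
             for m in (0.0, 0.5, 1.0)]           # low, medium, high modulation
```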
Figure 2.8. Elaboration of simulation run leading to multiplicative scaling of i/o curves through
combined effects of SOM and PV inhibition (and disinhibition). Each row shows circuit modeled;
slices of 2-D response surface; and progressions of amplitude and threshold of best-fitting logistic
function across modulation levels. a. Excitation-only circuit, same as Figure 2.7b. Amplitudes
increase and thresholds decrease for increasing modulation levels. b. Circuit with disinhibition
caused by inhibition of PV neuron by SOM neuron. Main effect is to increase dynamic range of
response amplitudes as PV interneuron is progressively shut down. c. Circuit now including only
dendritic inhibition by SOM interneuron. Main effect is threshold increase (and therefore gradual
alignment of thresholds) across modulatory levels. d. Circuit combining both the dynamic range
increase from b and threshold alignment from c. Effect is now a relatively pure multiplicative
scaling of i/o curves over a large dynamic range, as in Figure 2.7d-e.
To summarize, we have shown that the boundary-related CC-IF derived from
natural images has a functional form which is outside the representational scope
of a conventional neural activation function, but is readily produced by
NMDAR-dependent spatial interactions in PN dendrites – at the very site where
classical and contextual signals first converge in the cortex. Probing this
mechanism in greater depth, we showed that variations in multiple synapse-
related parameters, including spine neck resistance, peak synaptic conductance,
degree of synaptic saturation, degree of synaptic facilitation, and NMDA-AMPA
ratio, greatly expand the space of CC-IFs that can be produced by PN dendrites
based on purely excitatory synaptic interactions. Finally, we showed that the
spectrum of realizable classical-contextual interaction functions is further
enriched when local interneuron circuits are taken into consideration. As
examples, we showed activation curves for SOM and PV interneurons that lead
to pure multiplicative modulation, and others that produce more complex, non-
monotonic forms of modulation (Figure 2.7).
2.3 Discussion
Our overarching goal in this work has been to gain insight into how the massive
network of horizontal connections in the cortex could, by engaging dendrites,
provide a means for PNs to modulate each other's responsivity to their primary
CRF driver inputs through the back and forth exchange of contextual
information. To gain traction, we focused on the problem of boundary detection,
and on the mechanisms that V1 neurons may use to combine classical and
contextual cues for this purpose. Why boundary detection, and why V1?
Detecting object boundaries is a core process in biological vision, and is a crucial
precursor to rapid object recognition (Biederman, 1987; Potter, 1976). As for V1,
psychophysical and neurophysiological studies suggest V1 is heavily involved in
the early stages of boundary processing (Adini, Sagi, & Tsodyks, 1997; Angelucci
et al., 2002; C. C. Chen et al., 2001; Dresp, 1993; Field et al., 1993; Grosof, Shapley,
& Hawken, 1993; Grosof et al., 1993; Ito & Gilbert, 1999; Kapadia et al., 1995, 2000;
Levitt & Lund, 2002; W. Li & Gilbert, 2002; Mizobe, Polat, Pettet, & Kasamatsu,
2001; Polat & Sagi, 1994; Sceniak, Ringach, Hawken, & Shapley, 1999) and the
network of horizontal axons, by linking cells with similar orientation preferences
(Bosking et al., 1997; Gilbert & Wiesel, 1989) and co-linear or co-circular receptive
fields (Chisum et al., 2003; Schmidt, Goebel, Löwel, & Singer, 1997) seems
“designed” with long-range contour integration in mind.
By way of motivating our approach, it is useful to consider what could and could
not be learned from a conventional study of classical-contextual interactions in
V1, such as the seminal study of Kapadia et al. (1995). In one of their key
neurophysiological findings (whose results are schematized in Figure 2.1a), the
authors showed that ~40% of V1 neurons exhibit an asymmetric nonlinear
"facilitatory" interaction between a stimulus in the CRF that acts as a driver and
an aligned flanker in the extra-classical RF that acts as a modulator (i.e. does not
drive the cell by itself, but “multiplies” the cell's response to its driver input).
What their study could not answer were two questions that correspond to our
main findings here:
1) "What should be the interaction between the center and the flanker inputs?"
2) "What biophysical mechanisms are capable of generating the appropriate
classical-contextual interaction?"
We discuss our efforts to answer each of these questions below.
On the various uses of the natural image-derived CC-IF
Our approach to answering question 1 flows from the fact that, if we are willing
to assume a neuron's goal is to detect object boundaries, then through a
mechanical ground-truth labeling process applied to natural images, we can
determine precisely what function the neuron should use to detect boundaries
based on a given set of cues. The result of this process for the center and flanker
cues shown in Figure 2.2b is the CC-IF shown in Figure 2.2e.
Having the natural image-derived CC-IF in hand provides two major benefits.
First, the CC-IF creates a solid link between a classical-contextual interaction of a
type that has been reported in the neurophysiological literature (Kapadia et al.,
1995) and a specific natural sensory classification problem that the function helps
to solve. This is the first normative account for a classical-contextual interaction
in the cortex that we are aware of. Second, in relation to question (2) above, the
CC-IF provides a well-founded target towards which biophysical modeling
efforts can be aimed; this is how the CC-IF was used in the initial fitting exercise
of Figure 2.3. Furthermore, the natural image-derived CC-IF contains sufficient
detail and structure that it can help distinguish among competing mechanistic
models; this is how the CC-IF was used in the analyses of Figures 2.4 and 2.5.
The CC-IF can be looked at in a third way: as a crude "algorithm" for detecting
object boundaries, since it effectively scores every location in an image for
boundary probability based on the responses of two aligned oriented filters. We
would not expect the algorithm to perform well, given that it receives input from
just two of a large number of filters in the neighborhood that could provide
information about the presence or absence of an object boundary (In contrast, full
blown models of contour detection in V1 typically include inputs from many
filters (Z. Li, 1998; Loxley & Bettencourt, 2011; Pettet, McKee, & Grzywacz, 1998;
Ursino & La Cara, 2004; Yen & Finkel, 1998)). Nonetheless, when applied to
natural images as a nonlinear filter in its own right, the CC-IF should at least
show some capability for boundary detection. To verify this, we processed images
with the local edge filter shown in Figure 2.2a, collected r_center and r_flanker values at
each image location at 8 orientations and in the two complementary
configurations shown in Figure 2.8a (in red and blue). We then used the two
center-flanker value pairs as inputs to the CC-IF, and the scores obtained were
averaged to yield a composite boundary probability measure at each
location/orientation. Boundary images were generated by plotting the maximum
boundary probability across all 8 orientations at each pixel. Examples of original,
local edge, and boundary images are shown in Figure 2.8b-d. Higher probability
is indicated by darker pixels. In comparison to local edge images, boundary
images emphasized longer, well-formed object contours while suppressing
textures, resulting in images that more closely resemble line drawings. These
images in effect represent how neurons that compute the CC-IF would "see" the
world.
Figure 2.9. Applying the CC-IF as a nonlinear image filter. a. Two pairs (red and blue) of center
and flanker edge filters in configurations like that shown in Figure 2.2b were evaluated at every
pixel at 8 orientations (illustrated here for the horizontal orientation). Each pair of filter values
was run through the CC-IF, and the results from the two configurations were averaged at each
pixel/orientation. The contour response at each pixel was the maximum across all 8 orientations.
b. Original images. c. Local edge images. Maximum edge score across all orientations at each
pixel is indicated by the darkness of the pixel. d. Contour image. As for local edges, darker pixels
indicated stronger contours. Contour images emphasized spatially-extended object contours and
suppressed textures.
Relationship to previous studies of object boundary statistics
Ours is not the first analysis of natural image statistics pertaining to object
boundaries: Geisler et al. (Geisler, Perry, Super, & Gallogly, 2001) collected
co-occurrence statistics of local boundary elements in natural images, and
showed that they predict human contour grouping performance. While the
image statistics collected by Geisler et al. also relate to object boundaries, the type
of data they collected and their uses of that data are different. First, they began
with a different premise: they assumed local boundary elements had already
been detected. They then collected statistics such as (1) the probability that a
second boundary element is found at all possible offsets in position and
orientation relative to a first boundary element, and (2) the log likelihood ratio
comparing the probability that a second boundary element at a given offset in
position and orientation is part of the same or different object as a first boundary
element. In contrast, our focus has been on the problem of discriminating object
boundaries from non-boundaries at a given location based on a particular
configuration of cues. This difference in objective is reflected in the different
ways the natural image data is represented: our data is represented by a function,
the CC-IF, which describes how two scalar oriented contrast measurements
should be used to compute boundary probability at a location. The grouping
statistics collected by Geisler et al. are represented as scalar values linking pairs
of locations/orientations. In terms of its application, Geisler et al. used their data
to explain human psychophysical phenomena, whereas we have used our data to
help constrain neural models. In a study similar to that of Geisler et al., Sigman et al. (Sigman, Cecchi, Gilbert, & Magnasco, 2001) also histogrammed the co-
occurrence probabilities of pre-detected boundary elements at all
position/orientation offsets from a reference boundary location. Their results
were again expressed in terms of scalar values relating pairs of offset boundary
elements. While interesting, their main conclusion – that boundary elements in
natural images tend to lie on common circles – is not directly informative as to
the computation needed to detect boundaries in the first place, nor to the neural
mechanisms that may carry out those computations.
Relationship to previous V1-inspired models of contour detection
Several V1-inspired models of contour detection have appeared in the literature
(Z. Li, 1998; Loxley & Bettencourt, 2011; Pettet et al., 1998; Ursino & La Cara, 2004;
Yen & Finkel, 1998), with one of two objectives (or both): (1) to explain human
contour detection performance as a function of various stimulus parameters (e.g.,
element spacing, contour length, open vs. closed contours, density of distractors,
etc.) (Z. Li, 1998; Pettet et al., 1998; Yen & Finkel, 1998), or (2) to perform well at
detecting contours in noisy artificial and natural images (Z. Li, 1998; Loxley &
Bettencourt, 2011; Ursino & La Cara, 2004). In all cases, these models were
assembled using “off the shelf” neurally-inspired components and operations,
including weighted sums, sigmoids, thresholds, divisive normalization and
winner-take-all operations, etc. The core elements and operations that find their
way into such models can generally be traced back to the earliest days of neural
modeling (Rosenblatt, 1962; Rumelhart, Hinton, & McClelland, 1986), when a
high premium was placed on the use of simple mathematical (or logical)
operations. This long-established tradition explains the continued widespread
use of weighted sums to represent synaptic summation, as well as the strong
preference for nonlinearities produced by compact algebraic expressions.
In contrast to previous cortically-inspired models of boundary detection, the core
computing operation of our model (represented by the CC-IF) was not assumed
to take on any particular form, let alone a commonly used or mathematically
compact one, but rather was derived from natural image data under the
normative assumptions that object boundary detection was the task, and certain
specific measurements were available to the neuron to solve the task.
Interestingly, one of our key findings is that the core operation represented by
the CC-IF is not readily expressible in the conventional neural vernacular, or
with compact mathematical expressions of any kind. This challenge motivated
our second main activity, which was to search for neural mechanisms capable of
producing the unconventional type of nonlinear interaction we uncovered. Thus,
proceeding from normative assumptions actively pushed us away from the
simple mathematical operations used in most previous cortical models.
Dendrites provide a parsimonious neural implementation of the CC-IF
One of our key findings is the close match between the image-derived CC-IF and
the neural response function arising from NMDAR dependent proximal-distal
synaptic interactions in PN dendrites. This match supports the prediction that
PN basal and apical oblique dendrites contribute to contextual processing in V1,
and perhaps elsewhere in the cortex, by providing a flexible analog computing
substrate in which behaviorally-relevant nonlinear interactions between
horizontal, vertical and potentially other input pathways can take place.
This prediction has three main preconditions: (1) appropriate physiological
machinery; (2) appropriate anatomical connectivity; and (3) sufficient flexibility
to support the tremendous range of computing capabilities that the cortex is
evidently capable of providing (i.e. including multiple types of sensory
processing, motor control, language, planning, emotion, etc.). Regarding the
appropriate physiological machinery, a previous study carried out in brain slices
established that NMDAR-dependent proximal-distal interactions in PN
dendrites are capable of providing the type of asymmetric pathway interaction
needed to fit the image-derived CC-IF ((Behabadi et al., 2012); reviewed in(M. P.
Jadi et al., 2014)). Regarding the appropriate anatomical connectivity, the first
requirement is that classical (vertical) and contextual (horizontal) axons should
target at least some of the same dendrites of PNs. Existing data strongly supports
this: both horizontal and vertical axons terminate on PN dendrites throughout
cortical layers 2-3 (Binzegger et al., 2004; Chisum & Fitzpatrick, 2004; Jennifer S. Lund et al., 2003; Yoshimura, Sato, Imamura, & Watanabe, 2000). The more
demanding requirement is that horizontal and vertical axons project to PN
dendrites with different spatial biases, especially along the proximal-distal
extents of individual basal or apical oblique dendrites. Very little connectivity
data is currently available at this sub-dendrite scale, though several observations,
taken together, suggest that within-dendrite biases of this kind are biologically
feasible: (1) the axonal projections of inhibitory neurons show famously strong
spatial biases at the sub-dendrite scale (Bloss et al., 2016; DeFelipe, Ballesteros-
Yáñez, Inda, & Muñoz, 2006; Karube, Kubota, & Kawaguchi, 2004; Tremblay et
al., 2016); (2) excitatory pathways have well known spatial biases of other kinds,
for example, they can selectively target dendrites in specific layers or parts of
layers (Harris & Shepherd, 2015; J. S. Lund, 1988; Petreanu, Mao, Sternson, &
Svoboda, 2009); (3) excitatory axons are subject to activity-dependent clustering,
producing a tendency for co-activated axons to form contacts on nearby spines
(DeBello et al., 2014; Iacaruso, Gasler, & Hofer, 2017; Lee, Soares, Thivierge, &
Béïque, 2016; van Bommel & Mikhaylova, 2016; Weber et al., 2016); (4)
individual excitatory axons can show strongly biased projections at the sub-
dendrite scale (Bloss et al., 2018; Morgan, Berger, Wetzel, & Lichtman, 2016); (5)
proximal vs. distal synapses can be subject to different plasticity rules, which
could lead to a spatial sorting-out of functionally distinct input pathways
(Froemke, Poo, & Dan, 2005; Gordon, Gribble, Syrett, & Granato, 2012; Sandler,
Shulman, & Schiller, 2016); and (6) differences in EPSP rise times suggest
horizontal vs. vertical axons (Yoshimura et al., 2000) and near vs. far horizontal
connections onto pyramidal neurons (Schnepel, Kumar, Zohar, Aertsen, &
Boucsein, 2014) do, on average, terminate at different distances from the soma.
Regarding the flexibility to produce a rich spectrum of CC-IFs in the cortex, as
our compartmental simulations show, on top of the inherent spatial processing
capabilities of PN dendrites, variations in multiple synapse-related parameters,
including spine neck resistance, peak synaptic conductance, degree of synaptic
saturation, degree of synaptic facilitation, and NMDA-AMPA ratio, significantly
expand the space of CC-IFs that can be produced by PN dendrites – even when
limited to purely excitatory synaptic interactions. The spectrum of realizable CC-
IFs is then greatly expanded when the parameters of local interneuron circuits
are brought into play.
To summarize, PN dendrites have the appropriate capabilities, are located at the
appropriate place, and are part of a circuit with the appropriate flexibility to
contribute centrally to the integration of classical and contextual signals in V1,
and potentially other cortical areas. Whether this powerful and flexible
computing resource is used in the cortex for contextual processing remains an
open question, but one that is answerable with currently available experimental
techniques.
2.4 Methods
Natural image labeling
The pair-difference filter was directly inherited from an earlier study (Zhou &
Mel, 2008) with all parameters unchanged. Image patches were selected based on
their PD filter response centered at regularly spaced values {0, 0.1, 0.2 … 1.0}
along each of the two filter dimensions, with a bin width +/-0.005. Image patches
were collected until each of the 121 bins contained at least 30, but no more than
100 patches. The image patches were displayed on a 21-inch monitor and shown to the labeling participants through a MATLAB program. Each patch was shown with a red box representing the center receptive field (CRF) and a center dot, as shown in Figure 2.2b. The labeler was given the following printed rules for judging whether a contour was present within the center receptive field:
(1) an object contour was present in the red box.
(2) the contour entered one end of the box and exited the other while
always remaining within the box.
(3) the contour was unoccluded at the box center, indicated by the red dot.
The labeler also received instructions to give scores in the following way:
(1) No contour
(2) Not likely a contour - some structured elements were seen within the
CRF but not likely forming a contour going through it
(3) Likely a contour - contour seen but either occluded at the center of the
CRF, or is not aligned with the orientation of the CRF
(4) Almost certainly a contour – contour seen but occluded at non-center
positions or is slightly curvy.
(5) Surely a contour with aligned orientation
The labeler pressed 1-5 on the keyboard to score each patch and the data were
recorded automatically. The patches were shown in pseudorandom order. If the
labeler made a mistake, they could stop the program by pressing Esc and go back to the last patch they labeled to change their score. Each experimental session
lasted for about an hour, during which the labeler was able to label 600-1,000
different patches. A total of 16,000 labels were collected to generate figure 2.2c.
Compartmental simulation:
Simulations were run within the NEURON simulation environment (version 7.5 standard distribution). Unless otherwise specified, the compartmental model, biophysical parameters and ion channel parameters for the NMDAR, AMPAR and GABA-A conductances were the same as in two earlier studies ((Behabadi et al., 2012) Table 1 and (M. Jadi et al., 2012) Table 2). A 3D-reconstructed layer 3
pyramidal neuron morphology ((Amatrudo et al., 2012), cell name “Jul16IR2b-
V1”, source from Neuromorpho.org) was used in producing the fitting results in
figure 2.3. For the remaining results, a layer 5 PN morphology from prior studies (“j4”) was used (Behabadi et al., 2012). A Gaussian noise current was injected at the soma with a mean of 1.0 nA and a standard deviation of 0.75 nA. Under this condition the output firing rate was linearly related to the current reaching the soma over the range of 0-150 Hz, which provided an effective way to remove the somatic nonlinearity. All parameters used in the compartmental model are
summarized in Appendix A. Neuron files are available upon request.
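For concreteness, the sketch below shows one way such a noisy current injection can be set up in NEURON's Python interface. It is a minimal illustration rather than the exact code used in this work: the bare soma section, the 10-second duration, and the sampling of the Gaussian trace onto a fixed time grid are placeholder choices.

from neuron import h
import numpy as np

h.load_file("stdrun.hoc")

# Placeholder soma; in the actual simulations this would be the soma of the
# full reconstructed morphology.
soma = h.Section(name="soma")
soma.L = soma.diam = 20
soma.insert("hh")

# Current clamp whose amplitude is driven by a pre-computed Gaussian trace.
stim = h.IClamp(soma(0.5))
stim.delay, stim.dur = 0, 1e9            # amplitude is supplied by Vector.play

dt = 0.1                                  # ms
t = np.arange(0, 10000 + dt, dt)          # 10 s of simulated time
i_noise = np.random.normal(1.0, 0.75, t.size)   # nA: mean 1.0, SD 0.75

tvec, ivec = h.Vector(t), h.Vector(i_noise)
ivec.play(stim._ref_amp, tvec, True)      # interpolate the trace onto stim.amp

h.dt = dt
h.finitialize(-70)
h.continuerun(10000)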
In generating figure 2.3c, the x and y axes were elongated according to the relationship x′ = x^1.5. The motivation was to model synaptic depression, whereby the effect of a stimulus whose presynaptic frequency increases linearly may grow only sub-linearly. The fitting process was also conducted in a dendrite from cell “j4” without this compression operation, and the results were as good as those in figure 2.3c.
Spines were modeled as two cylindrical compartments. Unless otherwise
specified, the morphology was as follows: neck, cylinder shape, with a height of
1 µm and a diameter of 0.05 µm; head, cylinder shape, with a height of 0.5 µm
and a diameter of 0.5 µm. The two compartments were then attached to a parent dendrite. In figure 2.5c, when simulating the “increased neck resistance” case, the diameter of the neck compartment was set to 0.025 µm. Only excitatory synapses (AMPAR/NMDAR) were modeled with spine morphology. Inhibitory synapses (GABA-A) were modeled as directly innervating the dendritic shaft, without spines.
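To make the geometry explicit, the sketch below builds one such spine as two cylindrical sections (neck and head) and attaches it to a parent dendrite in NEURON. The bare dendrite and the ExpSyn point process are placeholders standing in for the reconstructed branch and the AMPAR/NMDAR mechanisms actually used.

from neuron import h

dend = h.Section(name="dend")             # stand-in for a reconstructed branch
dend.L, dend.diam = 200, 1

def add_spine(parent, loc, neck_diam=0.05):
    """Attach a two-compartment spine (neck + head) at position loc in [0, 1]."""
    neck = h.Section(name="spine_neck")
    neck.L, neck.diam = 1.0, neck_diam    # 1 um long neck
    head = h.Section(name="spine_head")
    head.L, head.diam = 0.5, 0.5          # 0.5 um x 0.5 um head
    neck.connect(parent(loc), 0)          # neck base onto the dendritic shaft
    head.connect(neck(1), 0)              # head onto the distal end of the neck
    return neck, head

neck, head = add_spine(dend, 0.5)
syn = h.ExpSyn(head(0.5))                 # placeholder for the AMPAR/NMDAR synapse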
All simulations were done with computational resources of the University of
Southern California high performance computing center.
Chapter 3
The spatial dimensionality of dendritic
computation
3.1 Introduction
In Chapter 2, we focused on long-range boundary detection in V1 and the role of
the massive network of horizontal connections carrying “contextual” information
back and forth between pyramidal neurons. It is an excellent example to
illustrate the potential power of NMDA-dependent dendritic integration on
cortical computation. This also advanced our general view on standard
abstraction of a neuron’s input-output function, from simple thresholded linear
units to encompass an interesting class of multi-dimensional nonlinear
interaction functions. However, in that study, we only considered the integration
of just two excitatory inputs (center+flanker), and we showed that the resulting
neural responses can be modeled within the existing framework of two-input
spatial interactions on a single dendrite (Behabadi & Mel, 2014; Behabadi et al.,
2012; M. Jadi et al., 2012). A natural question to ask is: what is the form of the
dendritic input-output function if we allow more than two spatially-separated
inputs on the same dendrite? For example, if we have 3 independent and
spatially distinct inputs on the same dendrite, does this configuration lead to a 3-
D input-output function? Does a 10-input case produce a bona fide 10-D function,
and so on ad nauseam? It is unclear whether these additional pathways may
carry any additional spatial processing power. Since we know that the
asymmetry and nonlinearity of the dendritic input-output function are critically
dependent on spatial separation, when an increasing number of inputs converge
on one dendrite, the spatial distinguishability of each input can be compromised.
As a result, with an increasing number of inputs, there should be an upper limit
on the number of distinguishable pathways, which we call the spatial
“dimensionality” of a dendrite. To date, however, this question has not been asked, let alone a method developed to quantify the upper limit of a dendrite’s capability to mediate multi-dimensional computation. This is the key
problem that we will solve in this chapter.
3.2 Results
We investigated the spatial processing power of a dendrite using two approaches
in succession. Results from both are presented in this section. Given the
complexity of the problem to be revealed later, we think both approaches
provide some insight into dendritic integration and its potential computational
capabilities.
To restate the goal of this exercise, we would like to be able to predict with the
simplest possible model the response of a real dendrite to an arbitrary spatial
pattern of excitation delivered to the dendrite. Rather than carry out the
experiments on a real dendrite, though, which would currently be impossible (as
will become clear below), we use a heavily validated detailed compartmental
model as a surrogate for a real dendrite, and attempt to predict the response of
the compartmental model to a vast number of automatically generated stimulus
patterns. The logic of this exercise is that the compartmental model acts like the
real neuron, and if we are able to accurately predict the responses of the
compartmental model (which contains thousands of coupled nonlinear
differential equations that must be numerically integrated to produce a result)
using a model so simple that it could practically be computed by hand in seconds,
then we will have captured the essence of the dendrite’s interesting behavior.
This is essentially a statement of Occam’s razor.
Both approaches described below involve a tabulated approach to solving the
problem, in which a stimulus pattern is mapped through a set of N spatial basis
functions to a set of N integer indices, which are then used to index into an N-
dimensional table where response predictions will be stored and read out. The
table must first be filled with “predictions” by running many “training“ stimulus
cases through the compartmental model, and storing the results of each case in
the appropriate table cell (where it is averaged with the other results in that cell).
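The tabulation step itself is simple; the following Python sketch shows the general fill-and-average logic, with a hypothetical simulate_firing_rate() standing in for the full compartmental simulation and to_indices() standing in for whichever basis-function or bin-count mapping is in use.

from collections import defaultdict

def build_table(training_patterns, to_indices, simulate_firing_rate):
    """Fill an N-D lookup table: average the simulated firing rate of all
    training patterns that map to the same tuple of integer indices."""
    sums, counts = defaultdict(float), defaultdict(int)
    for pattern in training_patterns:
        key = to_indices(pattern)                   # e.g. per-bin synapse counts
        sums[key] += simulate_firing_rate(pattern)  # ground-truth compartmental run
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}

def predict(pattern, table, to_indices):
    """Read a prediction back out; cells never visited in training return None."""
    return table.get(to_indices(pattern))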
3.2.1 Approach 1: Representing (and generating) dendritic spatial activity
patterns using overlapping cosine basis functions
In the first approach, we began with the natural spatial distribution of spines on
a pyramidal neuron dendrite (Araya, Eisenthal, & Yuste, 2006) (Figure 3.1). We
then fit this distribution with a set of N compact, slightly overlapping spatial
basis functions (Figure 3.2, bottom row). Specifically, we defined a family of basis
functions B_i(x) of a half-cosine shape, where x is the dendritic location, and we found the best set of coefficients w_i on each basis function such that Σ_{i=1..N} w_i B_i(x) equals the natural distribution of spines, r(x). Thus, the fully weighted basis
function in each partition represented the maximal intensity of excitation that it
is possible to deliver to that region of the dendrite by an independently drivable
excitatory pathway targeting that region. By varying the weight on each of the N
basis functions, we were able to generate a set of activation density curves
representing all possible excitation patterns for that value of N. We referred to a
model with N basis functions and their weights as an N-pathway model. When
more basis functions were available, it was possible to generate (or fit) more
spatially articulated patterns of excitation.
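A minimal sketch of this fitting step is given below, assuming the spine density profile r(x) has been sampled on a grid and the half-cosine bumps sit at evenly spaced centers; the placeholder density curve, the exact spacing and overlap, and the absence of a non-negativity constraint are all simplifications relative to the fits used here.

import numpy as np

def half_cosine_basis(x, centers, width):
    """B_i(x): half-cosine bumps centered at `centers`, zero beyond +/- width."""
    d = np.abs(x[:, None] - centers[None, :])
    return np.where(d < width, np.cos(0.5 * np.pi * d / width), 0.0)

x = np.linspace(0, 200, 201)                        # positions along the dendrite (um)
r = np.interp(x, [0, 40, 200], [0.2, 1.0, 0.6])      # placeholder spine-density curve

N = 4                                                # number of pathways / bumps
centers = np.linspace(0, 200, N)
B = half_cosine_basis(x, centers, width=200.0 / (N - 1))

# Least-squares weights w_i such that sum_i w_i * B_i(x) approximates r(x)
w, *_ = np.linalg.lstsq(B, r, rcond=None)
fit = B @ w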
Figure 3.1. Quantification of the spine density profile. The upper panel (adapted from Elston and DeFelipe, 2002) shows spine densities as a function of distance from the soma across many different cortical areas in monkey. The lower panel shows the calculated average of the density curves in the upper panel.
For each given activation pattern, we ran compartmental simulations using the
NEURON simulation environment to obtain somatic current and firing rate data.
The simulation conducted was based on a platform previously developed in the
lab (Behabadi et al., 2012). In our simulations, the activation pattern was “applied”
to a dendrite with synapses spaced inversely proportional to the activation
density curve. Thus, the synapses were placed closer to each other in high
activation density regions and were more separated in low activation density
regions. All synapses were modeled with “standard” AMPA and NMDA
conductances on the dendritic shaft, and were activated by 50 Hz regular trains
with randomized phase. In addition, a current injection was applied to the soma
so that the somatic (F-I) curve was rendered approximately linear. By removing
the somatic nonlinearity in this way, any nonlinearity present in our results
should be due to spatial dendritic interactions. All simulations were run for 10
seconds and the average somatic firing rate was recorded.
Next, we generated a test set consisting of 5,000 activation patterns of varying
spatial complexity, randomly sampled by varying the coefficients of different N-pathway models (1,000 each for N_test = 6, 7, 8, 9, 10). Some examples are shown in
Figure 3.2. This test set would be used to check the prediction efficacy of each N-
pathway model later. For each of the activation patterns in the test set, we first
ran a simulation to obtain the “actual” somatic firing rate. Then we found the
least squares fit to the activation pattern using the basis functions from an N-
pathway model. In this fitting process, only the “height” of the basis functions
could change (Figure 3.2, lower rows). In the meantime, we built an N-
dimensional table for each N-pathway model. We did so by allowing each basis
function to vary its height to 10 different levels, and generated all possible
combinatory activation patterns. For example, for a 3-pathway model, there were 10^3 = 1,000 activation patterns. Then, we collected firing rate data for each
pattern by running compartmental simulations. Finally, we “predicted” the
firing rate response of a particular complex pattern by referring to the input-
output table of an N-pathway model, based on the least squares fit of the complex pattern. Since the fit could give fractional index numbers, we used multi-linear
interpolation to produce intermediate values.
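The table lookup with fractional indices can be written compactly with SciPy's grid interpolator, as sketched below; the 10 x 10 x 10 table is filled with random placeholder values, whereas in practice each entry holds a simulated firing rate.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

n_pathways, n_levels = 3, 10
levels = np.arange(n_levels)                 # activation levels 0..9 per pathway

# Placeholder table; in practice each cell holds the simulated somatic firing
# rate for that combination of pathway activation levels.
table = np.random.rand(n_levels, n_levels, n_levels) * 120.0

lookup = RegularGridInterpolator((levels,) * n_pathways, table)

# The least-squares fit of a test pattern typically yields fractional levels:
predicted_rate = lookup([[2.4, 7.1, 5.8]])[0]    # multilinear interpolation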
Figure 3.2. Fitting arbitrary patterns of excitation with overlapping cosine-shaped basis functions.
A large number of spatial patterns of excitation were delivered to a single PN dendrite and the
somatic firing rate was recorded. Examples are shown in the first column, with the resulting
somatic firing rates shown numerically on top of the pattern. A set of fits to the excitation
patterns is shown at right, using varying numbers of spatial basis functions. Simulations were run
accordingly, and firing rates are shown.
To quantify the prediction performance, we took several subsets of the complex
activation patterns within a narrow range of predicted firing rates generated by
the 1-pathway model. In other words, each subset involved patterns with
roughly equal numbers of activated synapses distributed according to different
patterns in space. Sample subsets centered at 30 Hz, 65 Hz and 100 Hz, with a 2 Hz bin width, are shown. Note that the 1-pathway model was not able to spatially
vary activation density across the length of a dendrite, so this mainly captures
the level of overall activation strength, marked as “low”, “medium”, and “high”
(Figure 3.3a, top row). In the 1-pathway case, we can see that the actual firing
rate deviated considerably from the prediction; however, as the number of
pathways in the prediction model increased, and the prediction table gained
dimensions, the accuracy of the prediction increased significantly. We used the
coefficient of determination (R²) to quantify the performance of multi-
dimensional model predictions. All simulation samples were then sorted into 14
activation strength bins, covering the range of 0 to 120 Hz with a step size of 8 Hz.
The mean R² values across all activation strength levels were calculated and
graphed in Figure 3.3b & 3.3c. We found that a 3-pathway model would be able
to, on average, explain more than 88% of the variations in the actual firing rate
within each bin, and additional complexity (as in a 4-pathway or 5-pathway
model) brought little marginal gain in the prediction performance. With these
results, we concluded that a PN thin dendrite can accommodate, within its ~200 µm length, roughly three independent regions of excitatory input (proximal, mid,
and distal), or three dimensions of spatial analog processing capability. In other
words, the somatic response to any activation pattern, defined by the summation
of basis functions activated at different levels, can be captured by a 3-
dimensional table describing the input-output function of the 3-pathway model.
Figure 3.3. Prediction quality for models of increasing dimension. a. Sample prediction
performances for grouped activation patterns. The stimulus patterns were grouped by a linear
predictor according to raw stimulus strength (represented by each column). Variations in firing
rate within group were therefore due entirely to nonlinear spatial interactions. Firing rate
predictions are shown for 3 sample groups (“Low”, “Medium”, and “High”). Each row
represents a prediction model with an increasing number of dimensions. The plot represents the
actual firing rate vs. the predicted firing rate. The R² value is shown to represent the quality of
the fit. b. Summary plot of prediction performance over stimulus intensity level. Different colored
lines represent the prediction performances of the models with an increasing number of
dimensions over different ranges of stimulus intensity. The bin width for each intensity level was
8 Hz. c. A summary of prediction performances for all five multi-pathway models tested. As the
number of pathways increased, the marginal increment in performance diminished. The results
show that a 3-pathway model can explain 88% of the variations in the actual firing rate data, and
additional pathways don’t improve the performance significantly.
Limitations of the Approach
While this first approach shed some light on the dimensionality of dendritic
computation, we found it suffered from two major drawbacks.
First, we used overlapping basis functions of fixed shape to build the prediction
model and generate the test set of activation patterns. This approach may have (1)
inappropriately restricted the space of activation patterns used to estimate
dimensionality, excluding features to which a dendrite may be sensitive, and in
particular may have excluded stimulus patterns varying in their fine-scale
clusteriness (to which dendrites can be sensitive), and (2) led to an
underestimation of the maximum spatial processing capacity of a dendrite due to
the overlap between basis functions – which was not a strongly motivated
assumption and would have introduced unwanted/uncontrolled spatial
correlations between inputs.
The second drawback of our first approach is that all simulations were
conducted with synaptic conductances placed directly on the dendritic shaft.
Recent studies revealed that dendritic spines may play important roles in
facilitating and/or modulating the location-dependent synaptic integration
(Harnett et al., 2012; Smith, Smith, Branco, & Häusser, 2013). Specifically, some
experiments showed that high spine neck resistances will increase the
nonlinearity within a certain intensity range of excitatory inputs (Harnett et al.,
2012). Others claimed that neck resistance may equalize all synapses and reduce
nonlinearity. The simplification made in the first approach may therefore have prevented us from discovering additional “dimensions” of dendritic computation.
3.2.2 Approach 2: Representing dendritic spatial activity patterns by non-
overlapping bin counts, and generating spatial activity patterns that explicitly
vary in spatial clusteriness
To address the limitations previously mentioned, we carried out additional
modeling studies with the following two major modifications to our protocol.
First, we built a new set of excitatory activation patterns where the goal was to
minimize the ability of a linear model to predict the dendrite’s response by
simply measuring pattern “strength”. To try to achieve this, we fixed the total
number of activated synapses in every pattern to 48; the synapses were all
identical in strength; and their input firing rates were constant and identical. In
this way, inputs were allowed to vary only in their spatial patterning along the
length of the dendrite. (Interestingly, despite controlling the number, strength,
and rate of the activated synapses, this strategy did not by itself produce patterns
of fixed strength according to a linear predictor, as will be shown below, along
with a remedy that does achieve that goal). To achieve a full sampling of
different degrees of spatial clusteriness, the 48 synapses were uniformly
distributed in clusters of different sizes, including 1, 2, 3, 4, 6, 8, 12, 16, 24, and 48
(Figure 3.4). Within each cluster, synapses were evenly spread with a 0.5 µm gap
(corresponding roughly to the distance to a neighboring spine). In different
simulations, the spine neck resistance, and/or the input firing rates, were varied
for the entire set of activated synapses (see methods for details), allowing us to
test the effects of these variables on prediction performances over plausible
biological ranges.
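The sketch below shows one way such patterns can be generated, assuming a fixed dendritic length, a 0.5 µm intra-cluster spacing, and uniformly random cluster placement; clusters are allowed to overlap here, which may differ in detail from the placement rules used in our simulations.

import numpy as np

def make_pattern(n_syn=48, cluster_size=8, length=220.0, spacing=0.5, rng=None):
    """Return sorted synapse positions (um from the branch origin) arranged in
    equal-size clusters dropped at random locations along the dendrite."""
    rng = np.random.default_rng() if rng is None else rng
    assert n_syn % cluster_size == 0
    n_clusters = n_syn // cluster_size
    extent = (cluster_size - 1) * spacing            # span of a single cluster
    starts = rng.uniform(0.0, length - extent, size=n_clusters)
    positions = np.concatenate(
        [s + spacing * np.arange(cluster_size) for s in starts])
    return np.sort(positions)

# One example pattern at every clustering level used in the study
patterns = {k: make_pattern(cluster_size=k)
            for k in (1, 2, 3, 4, 6, 8, 12, 16, 24, 48)}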
Figure 3.4. Sample distribution of synapses used to train and test the multi-segment model.
Left: Figure showing a thalamocortical cell receiving excitatory inputs from five different
retinal ganglion cell axons innervating different parts of the dendritic arbor. Figure adapted from Morgan et al. (2016), Figure 2.6G. Right: Samples of the newly proposed activation
patterns. The red dots or bars represent clusters of synapses. The total number of synapses
was kept the same (in the case of this figure, 24). These samples were generated with
randomly inserted synapse clusters of different sizes (1 up to 24) onto the dendrite. The multi-segment model (shown at the bottom) is an exhaustive, non-overlapping decomposition of dendritic locations. Different colors represent different segments, and the segments can vary in size. The activation strength in each case was calculated based on the number of synapses
falling into each segment.
Second, we conceptually partitioned the dendrite into N non-overlapping
dendritic segments, or “bins”, where N varied from 2 to 5. A pattern could then
be represented abstractly by counting the number of activated synapses within
each bin. For example, in a 2-bin model, an activation pattern might have 10
synapses landing in the proximal bin and 38 in the distal bin. To generate a
response prediction for such a pattern (and all other possible bin value
combinations), we used a cross-validation training and testing scheme. A typical
training process involved a large set of “training patterns” (typically 20,000 cases
generated by the above-described scheme). The somatic firing rates for these
training patterns were collected by running a compartmental simulation for each
case, and tabulating the resulting firing rate in an N-dimensional table indexed
by the N bin counts. Thus, in the above example, the result for the pattern with
10 proximal and 38 distal synapses would be stored in the cell (10, 38) in a 2-
dimensional table. After “training”, to generate a firing rate prediction for an
arbitrary new activation pattern, the synapses were binned according to their
locations on the dendrite to yield the N bin counts, and the prediction was
computed by averaging the firing rates of all cases stored in the appropriate table
cell, that is, all previously seen training cases with the same bin counts. In the
above example, that would consist of all examples in which there were 10
proximal and 38 distal synapses, without regard to synapse location within each
bin. To quantify the prediction performance of any given bin count and
arrangement, after constructing the N-dimensional tabulated model, we
simulated 2,000 test patterns, and correlated the predictions of the tabulated
model to the firing rates actually generated by the ground truth simulations. R² values were calculated to quantify the prediction performance. For each N-
dimensional model (N = 2,3,4,5), there were many different ways to partition a
single dendrite into a fixed number of non-overlapping segments. We
exhaustively generated all possible partitions with a minimum resolution of 20 µm in length. We then tested the prediction quality of each partitioning, and
selected a partition that led to the best prediction performance (see methods). To
increase the robustness of our selection of partition points, the process of training,
testing, and selecting partitions was repeated at least 5 times with independent
training and testing samples. The “gold standard” partitioning for each of the N-
dimensional tabulated models was then determined by taking the numerical
average of the best partitions found across trials (which could lead to partitions at finer resolution than 20 µm steps), and the best R² performances were recorded
(see methods for more details).
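The partition search is simple enough to sketch, under the simplifying assumptions that candidate boundaries lie on a 20 µm grid and that each partition is scored by R² on held-out patterns; fit_table() and score() below are hypothetical stand-ins for the training and testing steps just described.

from itertools import combinations
import numpy as np

def candidate_partitions(length=220.0, step=20.0, n_bins=3):
    """All ways to split [0, length] into n_bins segments with boundaries
    restricted to a `step`-um grid."""
    interior = np.arange(step, length, step)
    for cuts in combinations(interior, n_bins - 1):
        yield (0.0, *cuts, length)

def bin_counts(positions, edges):
    counts, _ = np.histogram(positions, bins=np.asarray(edges))
    return tuple(int(c) for c in counts)

def best_partition(train_patterns, test_patterns, n_bins, fit_table, score):
    """Exhaustively evaluate every candidate partition; keep the best R^2."""
    best_r2, best_edges = -np.inf, None
    for edges in candidate_partitions(n_bins=n_bins):
        to_idx = lambda pos, e=edges: bin_counts(pos, e)
        table = fit_table(train_patterns, to_idx)
        r2 = score(test_patterns, table, to_idx)
        if r2 > best_r2:
            best_r2, best_edges = r2, edges
    return best_edges, best_r2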
Predicting the variance unexplained by a linear model
Recall that the goal of this work is to quantify a dendrite’s capability in
performing nonlinear spatial processing. As a reference point, it is therefore
worthwhile to first determine how well a linear model performs in predicting the
above-described datasets. In the previous step, we ran simulations for a large
number of activation patterns with different synaptic clustering levels and
collected their somatic firing rate output. This allowed us to perform a linear
regression on those data to reverse-engineer the optimal linear weights assigned
to each synapse location in order to best predict firing rates.
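A sketch of this regression is shown below, assuming each pattern is summarized by its active-synapse counts in fine (here 10 µm wide) spatial bins plus a bias term; rates would be the simulated somatic firing rates, and the fitted per-bin coefficients play the role of the optimal spatial weights plotted in Figure 3.5a.

import numpy as np

def design_matrix(patterns, length=220.0, bin_width=10.0):
    """Rows: patterns; columns: active-synapse counts per spatial bin, plus bias."""
    edges = np.arange(0.0, length + bin_width, bin_width)
    X = np.stack([np.histogram(p, bins=edges)[0] for p in patterns]).astype(float)
    return np.hstack([X, np.ones((X.shape[0], 1))])

def fit_linear_weights(patterns, rates):
    X = design_matrix(patterns)
    w, *_ = np.linalg.lstsq(X, np.asarray(rates), rcond=None)
    return w                                   # last entry is the bias term

def r_squared(y, y_hat):
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)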
In the first example shown below, we collected somatic firing rates for 40,000
patterns consisting of 48 active synapses on a 220-µm-long dendrite, each driven
at 12.5 Hz. We calculated the optimal weight along the length of the dendrite in
order to predict the firing rate output with a minimal root-mean-squared error
(Figure 3.5a). Interestingly, we noted that the linear model performed quite well
in this set of samples, with R² = 0.73. Thus, despite our best attempts to control
the “strength” of the patterns by fixing the number, strength, and firing rate of
the activated synapses, fluctuations in the locations of the activated synapses
from pattern to pattern led effectively to strength changes that could be predicted
by a linear model. In other words, a pattern that happens to contain a high
density of activated synapses in a dendritic region with higher linear weight
values (e.g. at a distance of 70 µm from the soma; see Figure 3.5a) will tend to
produce a stronger response than a pattern whose synapses are accidentally
clustered in regions of low linear weight (e.g. 40 µm or 160 µm from the soma).
The differences were not small: training patterns in this example generated
actual somatic firing rates ranging from 8 to 40 Hz.
On the other hand, we found there was significant response variation that could
not be predicted by a linear model, corresponding to subsets of cases with widely
varying responses despite nearly identical linear predictions. For example,
consider the slice of data with linear predictions around 15 Hz, for which the
actual firing rates ranged from 7 to 31 Hz (Figure 3.5b, leftmost orange vertical data slice). The response variation in these cases could only be explained by
differences in the spatial patterning of activated synapses, rather than their
aggregate linear “strength”. Incidentally, the existence of such data subsets demonstrates in a new way that a dendrite can be sensitive to differences in spatial patterning under conditions of tightly controlled input strength (see also B. W. Mel, 1992; Bartlett W. Mel, 1992).
In order to focus our study on the capability of a dendrite to discriminate inputs
according to their fine-scale spatial patterning, rather than linear strength, we
adopted the strategy of quantifying prediction performance only within subsets
of the data for which the linear prediction was nearly constant – corresponding
to the colored vertical slices through the entire data set shown in Figure 3.5b. A
total of 8 slices are shown in figure 3.5b, each containing 190 patterns drawn quasi-uniformly across the vertical range of actual firing rates (see
methods for further details).
Figure 3.5. Linear model performance in predicting somatic firing rates. Results were based on simulations of 40,000 cases with 48 synapses, each with a 0.05 µm neck diameter and receiving
12.5 Hz presynaptic input. a. Optimal spatial weights for the linear model to minimize prediction
error in the output firing rate. Results were generated through linear regression. b. The linear
model prediction (x-axis) compared to the actual firing rate output (y-axis). The blue cloud
represents all 40,000 data points; each of the 8 colored slices of data represents 190 cases that have
the same linear model prediction, reflecting the response variance unexplained by the linear
model.
Next, we measured prediction performance on various datasets using our
trained N-dimensional (N = 2,3,4,5) models. As a reference point, we tested each
model on an entire dataset of 40,000 samples (which included linear strength as a
cue). Not surprisingly, prediction performance was high overall, but significantly
increased as the number of dendritic bins (hereafter “dimensions”) increased –
similar to what was seen in our first modeling approach (see Figure 3.3). The R² value between the predicted firing rates and the actual values increases from 0.60 for a 2-D model to 0.94 for a 5-D model (scatter plots are shown in figure 3.6; R² values are summarized in figure 3.7, blue line). Note that the 2-D model performed worse than the linear predictor (Figure 3.7, black reference line). This
is because the bin-based models did not have access to information about
synapse location within each bin, so that when the number of bins was small,
and the uncertainty about spatial location was therefore large, prediction
performance suffered. On the other hand, the bin-based models with N greater
than or equal to 3 all outperformed the linear model, indicating that even lacking
the “knowledge” contained in the linear weights, the capacity to represent
arbitrary functions – not just linear functions – is helpful in predicting dendritic
responses.
Figure 3.6. Scatter plots showing the performances of the multi-dimensional models. Upper row:
output firing rate prediction vs. actual firing rate for all activation patterns. Lower row: output
firing rate prediction vs. actual firing rate for slices of activation patterns that received the same linear prediction value. The color code for slice samples is the same as in figure 3.5b.
The “acid test”, however, was to measure prediction performance of the bin-
based models on the constant-linear-response data sets for which the linear
model prediction performance was – by construction – essentially zero. As
shown in the lower row of Figure 3.6, nonlinear prediction performance
remained generally high, increasing with increasing model dimension. The color
coding of the data subsets is the same as in Figure 3.5b. A summary of these data
is shown in Figure 3.7. First, linear model performance on the total dataset
(which contains linear strength variation) is high, shown as a black horizontal
line (R² = 0.73), whereas linear model prediction performance averaged over the 8 colored data subsets was effectively zero (R² = 0.03). This low R² value confirmed
that the internal variance within each selected slice was unexplained by the
linear model. The blue line in turn shows the R² performance of the bin-based models predicting the entire data set (a relatively easy task), while the red line shows the average R² performance for predicting the linear-unpredictable sliced
data (a relatively hard task). It is worth noting that the red line, in all cases, is
significantly higher than the green baseline, showing that multi-dimensional bin-
based models have the capability to explain the variance not captured by the linear model. In addition, the performance increases with an increasing number of
dimensions in the model. For example, a 5-D model could, on average, explain
up to 69% (R² = 0.69) of the variance that is unexplained by the linear predictor.
The performance started to plateau as the dimension increased from 4-D to 5-D.
Besides the R² value, we also calculated the root mean squared prediction error
and absolute percentage error of the prediction. In both cases the error dropped
significantly for the multi-dimensional bin-based models, and the improvements
in the performance also saturated at 5-D (Figure 3.8b,c; bold lines show the averages, thin lines show the performance on each slice; color code same as in figures 3.5 and 3.6).
Figure 3.7. Summary of prediction performance for linear and multi-dimensional model.
Prediction performances were measured in R² values. Black horizontal line - linear model
performance on all data; green horizontal line - linear model performance on sliced data; blue
curve - multi-D model performance on all data; red curve - average multi-D model performance
on sliced data.
Figure 3.8. Quantification of prediction performance by multi-dimensional models. For all panels: the x-axis is the dimensionality of the model, and the y-axis represents different performance measures. Red lines represent the average performance in predicting the 8 slices of data; thinner lines represent the performance for each slice. The color code is consistent with figures 3.5 and 3.6. a. R² performance; the red line is the same as in figure 3.7.
The effect of spine neck resistance
Next, we looked into the issue of spine neck resistances. A single synapse’s input
resistance consists of two parts, the dendritic resistance due to the cable-like
structure of dendrite, and the spine neck resistance due to spine morphology.
The first part strongly increases moving away from the soma, as the serial
resistance of a cable builds up with increasing distance. It is also worth noting
that dendrites tend to be thinner distally, which also increases the dendritic input resistance. Previous studies have suggested that distal input
resistance can be several-fold higher than proximal R_in (Araya et al., 2006; Nevian,
Larkum, Polsky, & Schiller, 2007). It is a reasonable prediction that a high neck
resistance tends to bridge the difference in total input resistance between
proximal and distal synapse, thus “equalizing” each individual input, and
potentially making the system less variant and easier to predict in general.
In the base case example (described in Figure 3.5-3.8), we used a typical neck
diameter of 0.05 µm, corresponding to a neck resistance of 125 MΩ, within the range estimated for cortical neurons. We repeated our simulation experiments with two different choices of neck diameter – a thinner one (0.03 µm neck, equivalent to 400 MΩ neck resistance), and a thicker one (0.1 µm neck, equivalent to 30 MΩ neck resistance). The results are summarized in Figure 3.9.
The upper row shows the linear prediction results through scatter plots of the
predicted firing rates vs. the actual firing rates (method the same as in Figure
3.5b). We noted that with an increasing neck diameter (or a decreasing neck resistance), the variance of the distribution increased. The linear model performance, measured in R², dropped from 0.9 with a 0.03 µm neck diameter to 0.66 with a 0.05 µm diameter, and further to 0.60 with a 0.1 µm diameter (black line,
lower row of Figure 3.9). This result is consistent with our prediction that a
thinner spine tends to “linearize” the system, making it more easily predictable
by the linear model. When the multi-dimensional bin-based model was applied
(Figure 3.9, lower row, the method and color code are the same as in figure 3.7),
we noted that the maximal capability of the bin-based model (5-D version) in
predicting the output was very close to the linear model in the thin spine case,
with R² = 0.92 compared to 0.90. Moreover, on the sliced data, the multi-
dimensional models did not explain more than 50% of the variance unexplained
by the linear predictor. This is not surprising as the linear predictor already had a
very good performance in the high-resistance spine case, so the multi-
dimensional model may not be necessary since “nonlinear” interactions are
relatively suppressed in such a scenario. On the other hand, in the medium and
low neck resistance cases (middle and right columns in Figure 3.9), the multi-
dimensional model, with 3 or more dimensions, generally performed better than
the linear model, suggesting that the nonlinear spatial interaction was likely to
happen with relatively low neck resistances. In both cases, the 5-D model could
capture more than 60% of the variance unexplained by the linear predictor, and
the marginal gain of increasing dimensions started to plateau around 4-D to 5-D.
Figure 3.9. The effect of spine neck resistance. Left to right: 0.03 µm, 0.05 µm and 0.1 µm spine neck diameter cases. Upper row: the scatter plots showing the predicted firing rates by
the linear model vs. the actual firing rate. Lower row: the quantification of prediction
performances by the multi-dimensional models. Color code the same as in figure 3.7.
The effect of presynaptic firing rate
Next, we examined the effect of varying the overall input intensity. We did this
through modulating the presynaptic firing rates to all active synapses in the
model. In the two-layer model, a dendrite is modeled as a one-dimensional, independently thresholded computational unit. Earlier studies showed that the input/output function can have different threshold and gain effects depending on
where the input lands on the dendrite, resulting in a series of sigmoidal curves
(M. P. Jadi et al., 2014). When the input is randomly distributed across the full
dendritic length, as in our studies, it is hard to predict in advance what general
form the input/output function would take. However, due to the specific
electrophysiological properties of the NMDA channel, we know that when the
overall input strength is close to the threshold (i.e. the critical voltage at which the magnesium block of the NMDA channel is relieved), small alterations in input can
lead to relatively high variations in output. In other words, if the overall input
intensity is too low, all activation patterns would be most likely in the sub-
threshold range; if the overall input intensity is too high, then most patterns
produce super-threshold responses instead. In both cases, it would be easy for a
linear predictor to predict the output. When the overall input intensity is set to
intermediate levels (the “sweet-spot” intensity), where the likelihood of triggering nonlinear interactions is highest, we expect to see a drop in the
performance of the linear predictor and a more significant advantage in using the
multi-dimensional bin-based model.
Again, we started from our base case example (described in Figure 3.5-3.8), and
increased the presynaptic firing rate from 12.5 Hz to 25 Hz, and further to 37.5
Hz. The results were summarized in Figure 3.10 (same method and color code as
in Figure 3.5b and 3.6). We noted that with an increasing firing rate, the
performance of the linear predictor actually increased (R² = 0.66 for 12.5 Hz, R² = 0.73 for 25 Hz and R² = 0.88 for 37.5 Hz). This indicated that the “sweet-spot”
input intensity in this case is likely around 12.5 Hz, which led to the highest level
of nonlinear interactions that could not be captured by the linear model. When
applying multi-dimensional models to this set of data, we saw a similar effect
as before, namely (1) the 5-D model was at least as good as the linear model, and
better in most cases; (2) when the linear model performed poorly, the multi-
dimensional model could still do a good job in predicting the variance in the
sliced data and capturing the nonlinear variations; and (3) when the linear model
performed well, the multi-dimensional bin-based model tended to have
relatively inferior performance in predicting variance in the sliced data,
suggesting a low likelihood of nonlinear interactions in these scenarios. Given these results, a natural question is whether the “sweet-spot” always lies near the low end of the input intensity spectrum. The answer is no, as we will see in the next section. When the total length of the dendrite is varied, the “sweet-spot” intensity may shift away from the range we found for this 200-µm-long dendrite.
Figure 3.10. The effect of presynaptic firing rates. Left to right, 12.5 Hz, 25 Hz and 37.5 Hz
presynaptic firing rate cases. Upper row: the scatter plots showing the predicted firing rates by
the linear model vs. the actual firing rate. Lower row: the quantification of prediction
performances by the multi-dimensional model. Color code the same as in figure 3.7.
The effect of the length of the dendrite
Finally, we investigated the effect of dendritic length on nonlinear synaptic
integration. A typical basal dendrite is about 200 µm in length but the range
varies significantly in different cortical areas as well as in different species. Little
work has been done to investigate how the dendritic length affects nonlinear
synaptic integration. To gain traction, we artificially constructed shorter and longer branches (see methods for details). We created two different versions of the base case dendrite, with lengths of 110 µm and 420 µm. The same simulation experiments were then run, and the results are summarized in figure 3.11 (only the performance summary is shown for each case; the method and color code are the same as in figure 3.7).
Interestingly, for the shorter branch, the linear model's predictions were worst in the 25 Hz input case among the three input strengths tested (figure 3.11, left column), compared to the 12.5 Hz and 37.5 Hz cases. Following the logic of the previous section, it seems that the “sweet-spot” for the 110-µm-long branch is close to 25 Hz, different from the range around 12.5 Hz for the 200-µm-long branch. In addition, the linear model yielded very good predictions in the 420 µm case in general, such that the linear model alone was enough to provide an accurate prediction of the output. Why this is the case is intriguing, and we continue the discussion of this issue in the next section.
Figure 3.11. The combined effect of dendritic length and presynaptic firing rate. Each panel of the 3-by-3 grid shows a performance summary with the same layout and color code as in figure 3.7. Top to bottom rows: 12.5 Hz, 25 Hz and 37.5 Hz presynaptic firing rate cases. Left to right: 110, 200, and 420 µm long dendrites.
3.3 Discussion
Comparing our two approaches to dendritic dimensionality estimation
In Approach 2, we utilized a novel method to construct the activation patterns
that maximized the spatial variation, and used bin-based multi-dimensional
tabulated models to investigate the spatial dimensionality of dendritic
integration. These methods overcome the shortcomings of Approach 1, including
the underestimation of dimensionality due to (1) inadequate spatial variability in
the input activation patterns, and (2) overlap between the separate pathways. In
Approach 2, we also extended our study across the parameter space of spine neck resistance, presynaptic firing rate and the length of the dendrite. Overall, we find
that the maximum dimensionality measured with Approach 2 is around four to
five, after which the marginal gain in prediction performance becomes small. The
estimate is somewhat higher than was obtained in Approach 1, where the
dimensionality saturated around N = 3 in our experiments.
The capability of multi-dimensional bin-based models to capture linear vs.
nonlinear variation
In this approach, we “dissected” the variance in responses for all activation
patterns into two parts: the “linear” part which could be captured by a linear-
weight predictor, and the “nonlinear” part which was the residual variance
unexplained by the linear model. In order to test the capability of our multi-
dimensional models in capturing both types of variance, we conducted
predictions on the sliced data that received the same linear prediction, in
addition to predictions over the entire dataset. Our hypothesis was that if the
multi-dimensional model performs well on the sliced data set, it can capture the
variance left unexplained by the linear model, i.e., the “nonlinear” part of the
variance in integration. We noted that
there existed cases where the linear model yielded very good predictions of the
output over all patterns, with performance values R² > 0.9 (e.g., the 200-µm
dendrite, 37.5 Hz input, 0.05 µm neck diameter; figure 3.7, right column, etc.). In these
cases, a 5-D model performed very similarly to the linear predictor. When
applied to the sliced data, the multi-dimensional model showed only moderate
power in predicting the variance unexplained by the linear model, with a typical
5-D R² value close to 0.5-0.6. One explanation is that when the
linear predictor performs well, the sliced data tends to span only a narrow internal range of responses.
Since there were multiple sources of randomness in the simulation setup,
including the arrangement of presynaptic firing events and the somatic current
injection condition (see Methods), it is reasonable to assume that most of the
remaining variance not captured by the 5-D model was due to this intrinsic
randomness rather than a systematic failure of the model.
On the other hand, we also noted multiple cases where the linear model did not
perform very well, with an R² < 0.7 (e.g., the 200-µm dendrite, 12.5 Hz input, 0.1 µm
neck diameter, etc.). In these cases, the multi-dimensional models with N greater
than or equal to three dimensions could outperform the linear model. In addition,
when the bin-based model was applied to the sliced data in these cases, the 5-D
model typically captured 80% or more of the variance left unexplained by the linear model. An
alternative interpretation is that for some conditions, the likelihood of triggering
nonlinear variations was higher than that of other conditions, thus leading to
differences in prediction performance of linear and multi-dimensional models. In
this study we have sampled a considerable range of these parameters, including the
spine neck diameter (resistance), the presynaptic firing rate and the length of the
dendrite. It remains an experimental challenge to determine the actual
physiological range of operation in the cortex.
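For concreteness, the decomposition referred to throughout this subsection can be written out as a notational sketch (the symbols are ours; y_i is the measured somatic rate for pattern i, the two predictors are the linear-weight model and the N-dimensional table model, and the mean rate over all patterns is the baseline):

R^2_{\mathrm{lin}} = 1 - \frac{\sum_i \left(y_i - \hat{y}_i^{\mathrm{lin}}\right)^2}{\sum_i \left(y_i - \bar{y}\right)^2}, \qquad R^2_{N\text{-D}} = 1 - \frac{\sum_i \left(y_i - \hat{y}_i^{(N)}\right)^2}{\sum_i \left(y_i - \bar{y}\right)^2}.

This convention is consistent with the 1-D baseline used in this chapter (predicting every pattern with the mean rate gives R² = 0). When the same quantity is computed within a “slice” of patterns that share a common linear prediction, it measures how much of the residual, “nonlinear” variance the table model recovers.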
Low spine neck resistance “facilitates” nonlinear interactions in synaptic
integration
Another interesting observation is that a lower neck resistance typically left a
higher variance unexplained by the linear prediction. There have been multiple
studies on the role of spine neck resistance in synaptic integration. Experimental
studies suggested that spines increase the “cooperativity” between synapses
(Harnett et al., 2012) or “linearize” dendritic summation (Araya, Eisenthal, &
Yuste, 2006). Our results are partially consistent with these findings, as the cases
with high neck resistance tended to be predicted more accurately by the linear
model than the cases with lower neck resistance. We think this is because a high
neck resistance raised the local input resistance for all synapses, making them
more equal when considering their overall effect on the soma. On the other hand,
a dendrite with lower spine neck resistances is evidently more capable of
carrying out nonlinear computations than a dendrite with higher spine neck
resistances. More recent studies estimated that the actual neck resistance,
calculated based on a diffusion model, is lower than what was measured
before (Kubota, Hatada, Kondo, Karube, & Kawaguchi, 2007; Miyazaki & Ross,
2017). Along with our results, this suggests that reducing neck resistances might
be an important tool for the brain to unlock nonlinear processing capability in
the cortex.
On the relationship between the capacity for nonlinear integration, the overall
input intensity and the dendritic length
In the Results section, we mentioned the existence of a “sweet-spot” of input
presynaptic firing rate for triggering nonlinear interactions. We noted that for
different morphologies, this “sweet-spot” could vary. For example, our base case
with a 200-µm dendrite and a 0.05 µm neck diameter had a “sweet-spot” close to
12.5 Hz, which introduced the largest variations unexplained by the linear model
prediction (figure 3.9), while in the case of a 110 µm dendrite and a 0.05 µm neck
diameter, the “sweet spot” landed around 25 Hz (figure 3.11, left column).
We know that for a focal input with an increasing input intensity, the
input/output relationship follows a sigmoidal function. Earlier studies have
shown that this function changes in its shape (threshold and asymptote)
according to the location of the input (Behabadi et al., 2012; Schiller, Major,
Koester, & Schiller, 2000). It is natural to think that when many different input
sources are distributed across the dendrite, the overall input/output
relationship is close to an “average” of all the sigmoidal curves, with a virtual
threshold and asymptote. The “sweet-spot” we discussed above could be the
place where the overall input intensity was closest to the overall input/output
function’s threshold. From previous studies, we learned that due to the
biophysical features of the dendrite, proximal inputs tend to have both a higher
threshold and a higher asymptote than the distal ones. For that reason, the
nonlinearity in the output should happen most commonly in the range of high
input intensity for a proximal input. This is consistent with our result that a
shorter dendrite had a higher “sweet-spot” than the longer ones (Figure 3.9),
since the nonlinear interaction can only happen under high input. It is also worth
noting that the 420 µm dendrite case does not seem to have a “sweet-spot” that
leads to a high level of nonlinear effects. In all three cases presented, the prediction
performances of the linear model had an R² > 0.9, which means most of the
variance in the output could be explained by the linear model. This is a surprising
result at first glance, but we think it is reasonable given our experimental
design. Recall that in the 200 µm case, the optimal input intensity for triggering
the maximum nonlinear effect was 12.5 Hz. As mentioned above, a proximal
input tends to have a higher threshold and a higher asymptote. In other words, if
the input is randomly distributed across the dendrite without any spatial bias
toward the proximal or distal end, a longer dendrite can only have an overall
input/output relationship with a lower threshold than a shorter one. Thus,
the optimal presynaptic firing rate for the 420-µm dendrite to trigger nonlinear
integration should be lower than the optimal input level for the 200-µm dendrite.
However, with an even lower input, the somatic firing rate may be too low to be
distinguished from baseline noise, so those simulation results were not included
in this study. Note that under biophysical conditions in vivo, a long dendrite may
have all of its input segregated in one or two clusters along its path, and thus a
more spatially biased condition than our assumptions allow. In the extreme case, a
420-µm-long dendrite can act similarly to a 110-µm-long one if it receives all of its
input within the first 110 µm. Having extra length therefore does not itself limit
the capability to perform nonlinear integration; rather, it was our assumption that
the input had to be randomly distributed across the entire length that prevented
us from seeing a rich set of nonlinear effects within the input ranges tested.
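As a notational sketch of the argument above (the functional form and symbols are ours, not taken from the simulations), let g_x(s) denote the input/output curve for a focal input of intensity s delivered at distance x from the soma, and let the overall curve for uniformly distributed input over a dendrite of length L be its spatial average:

g_x(s) = \frac{A(x)}{1 + e^{-(s - \theta(x))/k}}, \qquad \bar{g}_L(s) \approx \frac{1}{L}\int_0^L g_x(s)\,dx,

with both the asymptote A(x) and the threshold θ(x) decreasing with distance x from the soma. Under this reading, lengthening the dendrite adds low-threshold distal terms to the average, pulling the effective threshold of the averaged curve downward and hence lowering the “sweet-spot” input intensity, as argued above for the 420 µm case.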
3.4 Methods
Biophysically-detailed compartmental simulation
For both Approach 1 and Approach 2, simulations were run within the NEURON
simulation environment (version 7.5, standard distribution). Unless otherwise
specified, the compartmental model, biophysical parameters and ion channel
parameters for the NMDAR and AMPAR were the same as in an earlier
published study (Behabadi et al., 2012, Table 1). All the simulations in this chapter
started with a resting potential of -70 mV, and the simulations ran for at least 2
seconds with a 1 ms integration step size. A layer 5 PN morphology from prior
studies (“j4”) was used (Behabadi et al., 2012). As in Chapter 2, a Gaussian
current was injected at the soma, with a mean of 1.0 nA and a standard
deviation of 0.75 nA, in order to linearize the F/I relationship of the axo-somatic
compartment. The morphology was altered when needed so as to create longer
or shorter branches in the related studies with Approach 2. All parameters used
in the compartmental model are summarized in Appendix A. NEURON files are
available upon request.
As in Chapter 2, spines were modeled as two cylindrical compartments.
Unless otherwise specified, the morphology was as follows: neck, cylindrical,
with a height of 1 µm and a diameter of 0.05 µm; head, cylindrical, with a
height of 0.5 µm and a diameter specified separately for each condition. The two
compartments were then attached to a parent dendrite.
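As a concrete illustration, a spine of this geometry could be built and attached with NEURON's Python interface as in the following minimal sketch (this is not the original .hoc code; the attachment location and the spine's passive and synaptic properties, which are omitted here, would still need to be set):

from neuron import h

def add_spine(parent, loc, head_diam, neck_diam=0.05):
    """Attach a cylindrical neck + head spine to `parent` at `loc` (0-1)."""
    neck = h.Section(name='spine_neck')
    head = h.Section(name='spine_head')
    neck.L, neck.diam = 1.0, neck_diam   # 1 um long, 0.05 um diameter neck
    head.L, head.diam = 0.5, head_diam   # 0.5 um long head; diameter set per condition
    neck.connect(parent(loc))            # neck 0-end onto the dendrite
    head.connect(neck(1))                # head onto the distal end of the neck
    return neck, head

# Example with illustrative values:
# dend = h.Section(name='dend'); neck, head = add_spine(dend, 0.3, head_diam=0.5)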
Creating activation patterns with random clusters of synapses
In Approach 2, we used a computer program to generate the set of activation
patterns used for training and testing. Each activation pattern consisted of 48
synapses. We generated a set of samples with 10 different cluster sizes, i.e.,
1, 2, 3, 4, 6, 8, 12, 16, 24 and 48 synapses per cluster; the position of each cluster
was chosen randomly along the length of the dendrite. Within a cluster, the
synapses were positioned linearly, 0.5 µm apart. Each sample set contained 100
samples for each cluster size, totaling 1,000 samples per set. The simulations were
run with the specified activation patterns and the somatic firing rate output was recorded.
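The generator described above can be summarized by the following sketch (our reconstruction in Python, not the original program; the dendritic-length argument and the use of Python's random module are our assumptions):

import random

CLUSTER_SIZES = [1, 2, 3, 4, 6, 8, 12, 16, 24, 48]
N_SYNAPSES = 48
SPACING = 0.5                      # um between neighboring synapses in a cluster

def make_pattern(cluster_size, dend_length=200.0):
    """Return 48 synapse positions (um) grouped into clusters of `cluster_size`."""
    positions = []
    n_clusters = N_SYNAPSES // cluster_size
    for _ in range(n_clusters):
        extent = (cluster_size - 1) * SPACING
        start = random.uniform(0.0, dend_length - extent)   # random cluster origin
        positions.extend(start + i * SPACING for i in range(cluster_size))
    return positions

def make_sample_set(dend_length=200.0, samples_per_size=100):
    """100 patterns per cluster size -> 1,000 patterns per set."""
    return [make_pattern(cs, dend_length)
            for cs in CLUSTER_SIZES for _ in range(samples_per_size)]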
Training of the multi-dimensional table and the prediction process
In the training and testing phases, typically 20 sets of data (20,000 samples) collected
in the previous step were used for training and two held-out sets (2,000 samples)
were used for testing. We searched through all possible partition methods to
break a single dendrite spatially into different segments. In an N-dimensional
model, the placement of the N-1 bin boundaries had a minimal separation of 10
µm between neighboring boundaries for the 110-µm-long dendrite. The
minimum separation was 20 µm for the 200-µm-long dendrite and 40 µm for the
420-µm-long dendrite, respectively. With each combination of bin boundaries, an
N-dimensional (N = 2,3,4,5) table was set up, with 48 levels in each dimension.
We sorted the actual firing rate of each activation pattern into the table entry
determined by its 48 synaptic locations. After sorting all the training samples,
the table was ready to make firing rate predictions. For each testing sample, we first counted the
number of synapses within each segment to find the corresponding entry in the
table. If within that entry there were more than three training samples recorded,
we proceeded to make a prediction; otherwise the testing sample was
abandoned due to insufficient data. For all testing scenarios, the percentage of
testing cases that were abandoned was lower than 20% and did not affect the
results and conclusions. For every possible combination
of bin boundaries, the performance of the prediction was recorded. The training
and testing processes were repeated at least five times for each testing case. In
producing the line graphs as in figure 3.7, it was assumed that the 1-D model
predicted all cases with one output firing rate, i.e., the mean of the actual firing
rate of all cases, so the prediction performance was R² = 0.
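A minimal sketch of the bin-based table model described above (our reconstruction; the dictionary-based table and the variable names are implementation choices, not taken from the original code):

from collections import defaultdict
from bisect import bisect_right
import statistics

def count_key(positions, boundaries):
    """Reduce a pattern to an N-tuple of synapse counts per spatial segment."""
    counts = [0] * (len(boundaries) + 1)
    for x in positions:
        counts[bisect_right(boundaries, x)] += 1
    return tuple(counts)

def train_table(patterns, rates, boundaries):
    table = defaultdict(list)
    for pos, r in zip(patterns, rates):
        table[count_key(pos, boundaries)].append(r)
    return table

def predict(table, positions, boundaries):
    entry = table.get(count_key(positions, boundaries), [])
    if len(entry) <= 3:          # require more than three training samples
        return None              # otherwise the testing sample is abandoned
    return statistics.mean(entry)

The search over bin boundaries then amounts to evaluating this predictor for every placement of the N-1 boundaries that respects the minimum separation, and recording the performance of each partition.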
Slicing of the data predicted by the linear model
The weights were calculated based on a linear regression between the location of
the synaptic input and the output firing rate, using at least 40 sets of training data
(40,000 samples). The centers of the slices were evenly divided into eight levels
between the 10% and 90% points of the dataset's range. The width of each slice
was always 1 Hz, and the samples were chosen at 19 different levels evenly
divided over the 5% to 95% range of the actual firing rate (y-axis). At each of the
19 actual firing rate levels, the 10 closest samples were chosen, making a total of
190 samples for each set.
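A sketch of the linear-weight predictor and of the slicing step (our interpretation; the 1 µm feature bins and the absence of an intercept term are assumptions not stated in the text):

import numpy as np

def features(positions, dend_length=200.0, bin_um=1.0):
    """Synapse counts in fine location bins along the dendrite."""
    edges = np.arange(0.0, dend_length + bin_um, bin_um)
    counts, _ = np.histogram(positions, bins=edges)
    return counts

def fit_linear_weights(patterns, rates, dend_length=200.0):
    """Least-squares location weights from training patterns and firing rates."""
    X = np.array([features(p, dend_length) for p in patterns], dtype=float)
    y = np.asarray(rates, dtype=float)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def slice_indices(linear_preds, center, width=1.0):
    """Indices of patterns whose linear prediction falls in a 1-Hz-wide slice."""
    preds = np.asarray(linear_preds)
    return np.where(np.abs(preds - center) <= width / 2)[0]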
Creating artificial longer and shorter branches
In most cases, we used a single dendrite, “a10_11”, in our study, which was
225.36 µm in length. It was referred to as a “200 µm” dendrite since we only
placed synapses on the first 200 µm of its length. The detailed morphology was
stored as a vector of values indicating the length, diameter and angular rotation
for connecting each segment. In order to generate a shorter or longer dendrite,
we linearly scaled the length of the original dendrite while retaining all the
information regarding the diameter and the angular rotation at each connection
point. This was done manually by modifying the morphology files in NEURON
(.hoc files); the other parts of the morphology were not changed. The NEURON
files are available upon request.
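Expressed with NEURON's Python interface rather than by hand-editing the .hoc files, the scaling manipulation amounts to the following sketch (equivalent in spirit to what was done, but not the procedure literally used):

from neuron import h

def scale_branch_length(sections, scale):
    """Linearly scale the length of every section in `sections` by `scale`."""
    for sec in sections:
        sec.L *= scale   # e.g. scale ~0.55 for the ~110 um and ~2.1 for the ~420 um version
        # diameters and the connection topology are intentionally left unchanged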
Chapter 4
Integrating synaptic parameters into a
single strength measure
4.1 Introduction
In chapter 3, we conducted simulation experiments with identical “standard”
synapses. We varied the presynaptic input frequency to all synapses in order to
control the overall input intensity. In reality, the activation strength at a specific
dendritic location is determined by multiple factors, including presynaptic firing
rate, peak conductance and short-term synaptic dynamics. For example, one may
wonder if the input strength of 20 synapses with presynaptic firing of 20 Hz is
equivalent to the strength of 40 synapses with presynaptic firing of 10 Hz, and if
not, which one is stronger than the other. In the classical-contextual interaction
project (Chapter 2), we discussed that altering the local synaptic parameters
may change the shape of the 2-D input-output function of a single dendrite
considerably (Figure 2.6). It is therefore necessary to understand how the
local synaptic strength factors interact with each other and what is their role in
modulating the nonlinear spatial dendritic processing. In this chapter, we
designed a framework to develop a single focal strength measure that would
accommodate the variability of these three factors, i.e., number of synapses,
presynaptic firing rate, and channel conductance. Although the conclusion in this
chapter covers only a small range of possible variations, and has only considered
two-dimensional spatial integration, we think it provides an intermediate step in
unifying various input conditions into a single strength measure that can be fed
into the spatial prediction model developed in Chapter 3. This will allow us to
have a complete model that can accurately predict the overall response of the
dendrite considering both spatial distribution of activation and focal synaptic
strength variability.
Figure 4.1. Interactions between presynaptic firing rate, peak synaptic conductance, and number
of synapses. a. Setup for mono-synaptic compartmental simulation. b. 2-D and 1-D slices that
represent the 3-D input-output function describing the relationship between synaptic input
variables (N – number of synapses, G – peak conductance, and F – presynaptic firing rate), with
each row representing a single peak conductance level.
4.2 Results
We tested the interactions among three variables related to the net strength
of focal synaptic activation: (1) number of synapses, (2) peak conductance, and (3)
presynaptic firing frequency. We used the same simulation environment,
NEURON, as in previous chapters. First, we started with focal activation at a
single dendritic input site and measured the somatic current. We systematically
varied these three parameters, at 10 discrete levels for each, and generated the 3-
D input-output function for a single dendrite (Figure 4.1a & 4.1b). We observed
that the three strength variables interacted cooperatively in general, but this
interaction could not be easily captured by a simple analytical description, such
as summation or multiplication.
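The sweep behind Figure 4.1 can be sketched as follows (our reconstruction; measure_somatic_current stands in for a full NEURON run that activates one focal site and returns the somatic current, and the commented grid values are illustrative rather than the exact levels used):

import numpy as np

def sweep_focal_strength(measure_somatic_current, n_levels, g_levels, f_levels):
    """Return a 3-D array of somatic currents indexed by (N, G, F) levels."""
    out = np.zeros((len(n_levels), len(g_levels), len(f_levels)))
    for i, n in enumerate(n_levels):
        for j, g in enumerate(g_levels):
            for k, f in enumerate(f_levels):
                out[i, j, k] = measure_somatic_current(n, g, f)
    return out

# Example grid (10 levels per variable, values illustrative):
# io3d = sweep_focal_strength(run_sim, list(range(4, 44, 4)),
#                             np.linspace(0.1, 1.0, 10), np.linspace(10, 100, 10))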
Figure 4.2. Selection of samples that have the same mono-synaptic response. For both the
proximal and the distal site, we selected three different levels of response, measured as the
current flowing to the soma and represented by the red, green and blue dots. Each dot represents
a case that generates that somatic response with a corresponding combination of synaptic
variables (N – number of synapses, G – peak conductance, and F – presynaptic firing rate). The
~30 cases at each level were marked and selected.
In order to see the effect of focal synaptic strength variability on location-
dependent interaction, we hypothesized that the mono-synaptic response can be
a good local activation strength measure. To verify this hypothesis, we extended
our simulation to a two-focal-input condition, with one input at the proximal site
(60 µm) and the other at the distal site (150 µm). We selected ~30 cases from the
mono-synaptic input-output function that led to the same level of mono-synaptic
response (same horizontal level). We repeated the selection process for three
different mono-synaptic response levels for both the proximal and distal input
sites, and the data are shown in Figure 4.2. This allowed us to vary the focal
input strength proximally and distally at three different levels, which gave nine
cases carrying different activation strengths. For each case, we could put one
strength specification from each set of ~30 cases into the proximal site and
another specification into the distal site. We ran at least 50 such combinations of
strength specifications for each case and observed the variance in the current
response. The results, shown in Figure 4.3a & 4.3b, indicated that the variance
left unexplained by our primitive strength measure was relatively small. This
demonstrated that the three variables tested could be collapsed into one net
strength variable without substantially increasing the unexplained variance in
response (Figure 4.3b).
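Selecting the ~30 “same-strength” (N, G, F) specifications per response level from the mono-synaptic sweep could look like the following sketch (our reconstruction; the tolerance value is an assumption):

import numpy as np

def iso_response_specs(io3d, n_levels, g_levels, f_levels, target_current, tol=0.02):
    """Return the (N, G, F) triplets whose mono-synaptic somatic current
    lies within `tol` of `target_current`."""
    specs = []
    for (i, j, k), current in np.ndenumerate(io3d):
        if abs(current - target_current) <= tol:
            specs.append((n_levels[i], g_levels[j], f_levels[k]))
    return specs

Each proximal level and each distal level then gets its own list of specifications; pairing one proximal specification with one distal specification yields the two-focal test cases summarized in Figure 4.3.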
Figure 4.3. The somatic responses in a two-focal stimulation for different synaptic input
configurations. a. The “virtual” 2-D surface capturing the proximal-distal interaction. The red
dots represent the mean activities, and the vertical blue bars represent the standard deviations of
the test samples. b. Enlarged view of the 3-by-3 grid in a. Simulations were run by configuring
different strength specifications at a proximal (60 µm from soma) and a distal (150 µm from soma)
location. A total of 50 cases were run for each of the nine input-strength combinations. The
resulting somatic activities were plotted on the 3-D graph with respect to proximal and distal
input levels.
4.3 Discussion
In this study, we considered three focal synaptic parameters (number of spines,
peak conductance and presynaptic firing rate) and we only tested the strength
measure’s efficacy on a two-focal stimulation model sampled at three different
strength levels. Though not exhaustive, this study provided a framework
through which we could define a multi-dimensional description of how the three
synaptic variables integrated. We found that the three variables interacted in a
“cooperative” way, but such interaction could not be easily described by simple
analytical operations. In addition, we were able to make good predictions of the
response to the proximal-distal interactions, based on the measurement of the
mono-input current responses. This result confirmed our hypothesis that it is
possible to define a unifying strength measure at focal locations, in this case, the
mono-input current response, in order to capture the interactions between
different synaptic and focal specifications.
For future work, the model can be extended to include parameters related to
short-term synaptic dynamics. In this study, we assumed there was no
conductance “cap” for each synapse, that is, there were virtually unlimited
AMPA/NMDA channels on the post-synaptic site so that every presynaptic
event injected the exact same amount of conductance into the post-synaptic site.
In reality, the efficacy of high-frequency presynaptic events can be compromised
due to saturation of postsynaptic channels or depletion of neurotransmitters. The
overall effect of synaptic depression may lead to a diminishing marginal return
in rapid current injections through synapses.
The goal of Chapters 3 and 4 is to build another “two-layer” model for a single
dendrite. The first layer consists of a conversion function that turns the
synaptic strength specifications at different locations into a single conductance
measure. The second layer takes the inputs from the first layer and feeds
them into the multi-dimensional model built in Chapter 3 (Figure 4.4). This “model
fitting” approach would allow us to significantly broaden the parameter space
over which accurate predictions of average spike rates could be generated by a
model so simple that it could be practically evaluated by hand.
Figure 4.4. Conceptual diagram of the next-generation augmented two-layer model. The proposed
model includes the unification of the synaptic strength variables, the dendritic spatial
computation (represented by a multi-dimensional sigmoid), the summation at the soma, and the
axo-somatic f-I curve.
4.4 Methods
Compartmental simulation
Simulations were run within the NEURON simulation environment (version 7.1,
standard distribution). Unless otherwise specified, the compartmental model,
biophysical parameters and ion channel parameters for the NMDAR, AMPAR
and GABA-A receptors were the same as in two earlier studies (Behabadi et al.,
2012, Table 1; M. Jadi et al., 2012, Table 2). The biophysical parameters used
are summarized in Appendix A. A 3D-reconstructed layer 5 pyramidal neuron
morphology from prior studies (“j4”) was used (Behabadi et al., 2012). Unlike in
previous chapters, no current was injected at the soma when measuring the
somatic response. In all cases in which the mono-input current response was
measured, no somatic firing was allowed. NEURON files are available upon
request.
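The thesis does not state the exact recording configuration for the somatic current; one plausible way to read out such a response in NEURON, assuming a somatic voltage clamp held at the resting potential (so that no somatic firing occurs), is sketched below. This is illustrative only, not necessarily the method actually used:

from neuron import h

def somatic_clamp_current(soma, v_hold=-70.0, dur_ms=2000.0):
    """Record the clamp current at the soma as a proxy for net synaptic drive."""
    clamp = h.SEClamp(soma(0.5))
    clamp.amp1, clamp.dur1 = v_hold, dur_ms   # hold at rest for the whole run
    i_rec = h.Vector().record(clamp._ref_i)   # clamp current (nA)
    t_rec = h.Vector().record(h._ref_t)
    return clamp, i_rec, t_rec                # keep references alive during the run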
References
Adesnik, H., Bruns, W., Taniguchi, H., Huang, Z. J., & Scanziani, M. (2012). A neural circuit
for spatial summation in visual cortex. Nature, 490(7419), 226–231.
https://doi.org/10.1038/nature11526
Adini, Y., Sagi, D., & Tsodyks, M. (1997). Excitatory–inhibitory network in the visual
cortex: Psychophysical evidence. Proceedings of the National Academy of
Sciences, 94(19), 10426–10431.
Amatrudo, J. M., Weaver, C. M., Crimins, J. L., Hof, P. R., Rosene, D. L., & Luebke, J. I.
(2012). Influence of Highly Distinctive Structural Properties on the Excitability of
Pyramidal Neurons in Monkey Visual and Prefrontal Cortices. Journal of
Neuroscience, 32(40), 13644–13660. https://doi.org/10.1523/JNEUROSCI.2581-
12.2012
Angelucci, A., Levitt, J. B., Walton, E. J. S., Hupé, J.-M., Bullier, J., & Lund, J. S. (2002).
Circuits for Local and Global Signal Integration in Primary Visual Cortex. The
Journal of Neuroscience, 22(19), 8633–8646.
Araya, R., Eisenthal, K. B., & Yuste, R. (2006). Dendritic spines linearize the summation of
excitatory potentials. Proceedings of the National Academy of Sciences of the
United States of America, 103(49), 18799–18804.
https://doi.org/10.1073/pnas.0609225103
Behabadi, B. F., & Mel, B. W. (2014). Mechanisms underlying subunit independence in
pyramidal neuron dendrites. Proceedings of the National Academy of Sciences,
111(1), 498–503. https://doi.org/10.1073/pnas.1217645111
Behabadi, B. F., Polsky, A., Jadi, M., Schiller, J., & Mel, B. W. (2012). Location-Dependent
Excitatory Synaptic Interactions in Pyramidal Neuron Dendrites. PLoS Comput
Biol, 8(7), e1002599. https://doi.org/10.1371/journal.pcbi.1002599
Biederman, I. (1987). Recognition-by-components: A theory of human image
understanding. Psychological Review, 94(2), 115–147.
Binzegger, T., Douglas, R. J., & Martin, K. A. C. (2004). A Quantitative Map of the Circuit
of Cat Primary Visual Cortex. The Journal of Neuroscience, 24(39), 8441–8453.
https://doi.org/10.1523/JNEUROSCI.1400-04.2004
Bloss, E. B., Cembrowski, M. S., Karsh, B., Colonell, J., Fetter, R. D., & Spruston, N. (2016).
Structured Dendritic Inhibition Supports Branch-Selective Integration in CA1
Pyramidal Cells. Neuron, 89(5), 1016–1030.
https://doi.org/10.1016/j.neuron.2016.01.029
Bloss, E. B., Cembrowski, M. S., Karsh, B., Colonell, J., Fetter, R. D., & Spruston, N. (2018).
Single excitatory axons form clustered synapses onto CA1 pyramidal cell
dendrites. Nature Neuroscience, 21(3), 353–363.
https://doi.org/10.1038/s41593-018-0084-6
Bosking, W. H., Zhang, Y., Schofield, B., & Fitzpatrick, D. (1997). Orientation Selectivity
and the Arrangement of Horizontal Connections in Tree Shrew Striate Cortex.
The Journal of Neuroscience, 17(6), 2112–2127.
Boucsein, C. (2011). Beyond the cortical column: abundance and physiology of
horizontal connections imply a strong role for inputs from the surround.
Frontiers in Neuroscience, 5. https://doi.org/10.3389/fnins.2011.00032
Chen, C. C., Kasamatsu, T., Polat, U., & Norcia, A. M. (2001). Contrast response
characteristics of long-range lateral interactions in cat striate cortex. Neuroreport,
12(4), 655–661.
Chen, C.-C., & Tyler, C. W. (2001). Lateral sensitivity modulation explains the flanker
effect in contrast discrimination. Proceedings of the Royal Society of London B:
Biological Sciences, 268(1466), 509–516.
https://doi.org/10.1098/rspb.2000.1387
Chisum, H. J., & Fitzpatrick, D. (2004). The contribution of vertical and horizontal
connections to the receptive field center and surround in V1. Neural Networks,
17(5), 681–693. https://doi.org/10.1016/j.neunet.2004.05.002
Chisum, H. J., Mooser, F., & Fitzpatrick, D. (2003). Emergent Properties of Layer 2/3
Neurons Reflect the Collinear Arrangement of Horizontal Connections in Tree
Shrew Visual Cortex. The Journal of Neuroscience, 23(7), 2947–2960.
DeBello, W. M., McBride, T. J., Nichols, G. S., Pannoni, K. E., Sanculi, D., & Totten, D. J.
(2014). Input clustering and the microscale structure of local circuits. Frontiers in
Neural Circuits, 8, 112. https://doi.org/10.3389/fncir.2014.00112
DeFelipe, J., Ballesteros-Yáñez, I., Inda, M. C., & Muñoz, A. (2006). Double-bouquet cells
in the monkey and human cerebral cortex with special reference to areas 17 and
18. Progress in Brain Research, 154, 15–32. https://doi.org/10.1016/S0079-
6123(06)54002-6
Dresp, B. (1993). Bright lines and edges facilitate the detection of small light targets.
Spatial Vision, 7(3), 213–225. https://doi.org/10.1163/156856893X00379
Field, D. J., Hayes, A., & Hess, R. F. (1993). Contour integration by the human visual
system: Evidence for a local “association field.” Vision Research, 33(2), 173–193.
https://doi.org/10.1016/0042-6989(93)90156-Q
Froemke, R. C., Poo, M.-M., & Dan, Y. (2005). Spike-timing-dependent synaptic plasticity
depends on dendritic location. Nature, 434(7030), 221–225.
https://doi.org/10.1038/nature03366
Geisler, W. S., Perry, J. S., Super, B. J., & Gallogly, D. P. (2001). Edge co-occurrence in
natural images predicts contour grouping performance. Vision Research, 41(6),
711–724.
Gidon, A., & Segev, I. (2012). Principles governing the operation of synaptic inhibition in
dendrites. Neuron, 75(2), 330–341.
https://doi.org/10.1016/j.neuron.2012.05.015
Gilbert, C. D., & Wiesel, T. N. (1989). Columnar specificity of intrinsic horizontal and
corticocortical connections in cat visual cortex. The Journal of Neuroscience, 9(7),
2432–2442.
Gordon, L. R., Gribble, K. D., Syrett, C. M., & Granato, M. (2012). Initiation of synapse
formation by Wnt-induced MuSK endocytosis. Development, 139(5), 1023–1033.
https://doi.org/10.1242/dev.071555
Grosof, D. H., Shapley, R. M., & Hawken, M. J. (1993). Macaque V1 neurons can signal
“illusory” contours. Nature, 365(6446), 550–552.
https://doi.org/10.1038/365550a0
Harnett, M. T., Makara, J. K., Spruston, N., Kath, W. L., & Magee, J. C. (2012). Synaptic
amplification by dendritic spines enhances input cooperativity. Nature,
491(7425), 599–602. https://doi.org/10.1038/nature11554
Harris, K. D., & Shepherd, G. M. G. (2015). The neocortical circuit: themes and variations.
Nature Neuroscience, 18(2), 170–181. https://doi.org/10.1038/nn.3917
Hubel, D. H., & Wiesel, T. N. (1962). Receptive fields, binocular interaction and
functional architecture in the cat’s visual cortex. The Journal of Physiology, 160,
106–154.
Iacaruso, M. F., Gasler, I. T., & Hofer, S. B. (2017). Synaptic organization of visual space
in primary visual cortex. Nature, 547(7664), 449–452.
https://doi.org/10.1038/nature23019
Ito, M., & Gilbert, C. D. (1999). Attention modulates contextual influences in the primary
visual cortex of alert monkeys. Neuron, 22, 593–604.
Jadi, M. P., Behabadi, B. F., Poleg-Polsky, A., Schiller, J., & Mel, B. W. (2014). An
Augmented Two-Layer Model Captures Nonlinear Analog Spatial Integration
Effects in Pyramidal Neuron Dendrites. Proceedings of the IEEE, 102(5), 782–798.
https://doi.org/10.1109/JPROC.2014.2312671
Jadi, M., Polsky, A., Schiller, J., & Mel, B. W. (2012). Location-Dependent Effects of
Inhibition on Local Spiking in Pyramidal Neuron Dendrites. PLoS Comput Biol, 8(6),
e1002550. https://doi.org/10.1371/journal.pcbi.1002550
Kapadia, M. K., Ito, M., Gilbert, C. D., & Westheimer, G. (1995). Improvement in visual
sensitivity by changes in local context: Parallel studies in human observers and in
V1 of alert monkeys. Neuron, 15(4), 843–856. https://doi.org/10.1016/0896-
6273(95)90175-2
Kapadia, M. K., Westheimer, G., & Gilbert, C. D. (2000). Spatial Distribution of
Contextual Interactions in Primary Visual Cortex and in Visual Perception. Journal
of Neurophysiology, 84(4), 2048–2062.
Karube, F., Kubota, Y., & Kawaguchi, Y. (2004). Axon Branching and Synaptic Bouton
Phenotypes in GABAergic Nonpyramidal Cell Subtypes. Journal of Neuroscience,
24(12), 2853–2865. https://doi.org/10.1523/JNEUROSCI.4814-03.2004
Koch, C., Poggio, T., & Torre, V. (1983). Nonlinear interactions in a dendritic tree:
localization, timing, and role in information processing. Proceedings of the
National Academy of Sciences of the United States of America, 80(9), 2799–2802.
Kubota, Y., Hatada, S., Kondo, S., Karube, F., & Kawaguchi, Y. (2007). Neocortical
inhibitory terminals innervate dendritic spines targeted by thalamocortical
afferents. The Journal of Neuroscience, 27(5), 1139–1150.
Lee, K. F. H., Soares, C., Thivierge, J.-P., & Béïque, J.-C. (2016). Correlated Synaptic Inputs
Drive Dendritic Calcium Amplification and Cooperative Plasticity during Clustered
Synapse Development. Neuron, 89(4), 784–799.
https://doi.org/10.1016/j.neuron.2016.01.012
Levitt, J. B., & Lund, J. S. (2002). The spatial extent over which neurons in macaque
striate cortex pool visual signals. Visual Neuroscience, 19(04), 439–452.
https://doi.org/10.1017/S0952523802194065
Li, W., & Gilbert, C. D. (2002). Global Contour Saliency and Local Colinear Interactions.
Journal of Neurophysiology, 88(5), 2846–2856.
https://doi.org/10.1152/jn.00289.2002
Li, Z. (1998). A neural model of contour integration in the primary visual cortex. Neural
Computation, 10(4), 903–940.
Loxley, P. N., & Bettencourt, L. M. (2011). Visually-salient contour detection using a V1
neural model with horizontal connections. ArXiv:1103.3531 [Physics]. Retrieved
from http://arxiv.org/abs/1103.3531
Lund, J. S. (1988). Anatomical organization of macaque monkey striate visual cortex.
Annual Review of Neuroscience, 11, 253–288.
https://doi.org/10.1146/annurev.ne.11.030188.001345
Lund, Jennifer S., Angelucci, A., & Bressloff, P. C. (2003). Anatomical Substrates for
Functional Columns in Macaque Monkey Primary Visual Cortex. Cerebral Cortex,
13(1), 15–24. https://doi.org/10.1093/cercor/13.1.15
McGuire, B. A., Gilbert, C. D., Rivlin, P. K., & Wiesel, T. N. (1991). Targets of horizontal
connections in macaque primary visual cortex. The Journal of Comparative
Neurology, 305(3), 370–392. https://doi.org/10.1002/cne.903050303
Mel, B. W. (1992). The clusteron: toward a simple abstraction for a complex neuron. In J.
Moody, S. Hanson, & R. Lippmann (Eds.), Advances in neural information
processing systems (Vol. 4, pp. 35–42). San Mateo, CA: Morgan Kaufmann.
Mel, Bartlett W. (1992). NMDA-Based Pattern Discrimination in a Modeled Cortical
Neuron. Neural Computation, 4(4), 502–517.
https://doi.org/10.1162/neco.1992.4.4.502
Miyazaki, K., & Ross, W. N. (2017). Sodium dynamics in pyramidal neuron dendritic
spines: synaptically evoked entry predominantly through AMPA receptors and
removal by diffusion. Journal of Neuroscience, 1758–17.
https://doi.org/10.1523/JNEUROSCI.1758-17.2017
Mizobe, K., Polat, U., Pettet, M. W., & Kasamatsu, T. (2001). Facilitation and suppression
of single striate-cell activity by spatially discrete pattern stimuli presented
beyond the receptive field. Visual Neuroscience, 18(03), 377–391.
Morgan, J. L., Berger, D. R., Wetzel, A. W., & Lichtman, J. W. (2016). The Fuzzy Logic of
Network Connectivity in Mouse Visual Thalamus. Cell, 165(1), 192–206.
Nelson, J. I., & Frost, B. J. (1985). Intracortical facilitation among co-oriented, co-axially
aligned simple cells in cat striate cortex. Experimental Brain Research, 61(1), 54–
61. https://doi.org/10.1007/BF00235620
Nevian, T., Larkum, M. E., Polsky, A., & Schiller, J. (2007). Properties of basal dendrites of
layer 5 pyramidal neurons: a direct patch-clamp recording study. Nature
Neuroscience, 10(2), 206–214. https://doi.org/10.1038/nn1826
Petreanu, L., Mao, T., Sternson, S. M., & Svoboda, K. (2009). The subcellular organization
of neocortical excitatory connections. Nature, 457(7233), 1142–1145.
https://doi.org/10.1038/nature07709
Pettet, M. W., McKee, S. P., & Grzywacz, N. M. (1998). Constraints on long range
interactions mediating contour detection. Vision Research, 38(6), 865–879.
https://doi.org/10.1016/S0042-6989(97)00238-1
Pfeffer, C. K., Xue, M., He, M., Huang, Z. J., & Scanziani, M. (2013). Inhibition of
inhibition in visual cortex: the logic of connections between molecularly distinct
interneurons. Nature Neuroscience, 16(8), 1068–1076.
https://doi.org/10.1038/nn.3446
Poirazi, P., Brannon, T., & Mel, B. W. (2003). Pyramidal neuron as two-layer neural
network. Neuron, 37(6), 989–999.
Polat, U., Mizobe, K., Pettet, M. W., Kasamatsu, T., & Norcia, A. M. (1998). Collinear
stimuli regulate visual responses depending on cell’s contrast threshold. Nature,
391(6667), 580–584. https://doi.org/10.1038/35372
Polat, U., & Sagi, D. (1994). The architecture of perceptual spatial interactions. Vision
Research, 34(1), 73–78. https://doi.org/10.1016/0042-6989(94)90258-5
Potter, M. C. (1976). Short-term conceptual memory for pictures. Journal of
Experimental Psychology: Human Learning and Memory, 2(5), 509–522.
https://doi.org/10.1037/0278-7393.2.5.509
Rockland, K. S., & Lund, J. S. (1982). Widespread periodic intrinsic connections in the
tree shrew visual cortex. Science, 215(4539), 1532–1534.
https://doi.org/10.1126/science.7063863
Rosenblatt, F. (1962). Principles of Neurodynamics: Perceptrons and the Theory of Brain
Mechanisms. Spartan Books.
Rumelhart, D. E., Hinton, G. E., & McClelland, J. L. (1986). A general framework for
parallel distributed processing. In D. E. Rumelhart & J. L. McClelland (Eds.),
Parallel distributed processing: explorations in the microstructure of cognition
(Vol. 1, pp. 45–76). Cambridge, MA: Bradford.
Sandler, M., Shulman, Y., & Schiller, J. (2016). A Novel Form of Local Plasticity in Tuft
Dendrites of Neocortical Somatosensory Layer 5 Pyramidal Neurons. Neuron,
90(5), 1028–1042. https://doi.org/10.1016/j.neuron.2016.04.032
Sceniak, M. P., Ringach, D. L., Hawken, M. J., & Shapley, R. (1999). Contrast’s effect on
spatial summation by macaque V1 neurons. Nature Neuroscience, 2(8), 733–739.
https://doi.org/10.1038/11197
Schiller, J., Major, G., Koester, H. J., & Schiller, Y. (2000). NMDA spikes in basal dendrites
of cortical pyramidal neurons. Nature, 404(6775), 285–289.
https://doi.org/10.1038/35005094
Schmidt, K. E., Goebel, R., Löwel, S., & Singer, W. (1997). The Perceptual Grouping
Criterion of Colinearity is Reflected by Anisotropies of Connections in the Primary
Visual Cortex. European Journal of Neuroscience, 9(5), 1083–1089.
https://doi.org/10.1111/j.1460-9568.1997.tb01459.x
Schnepel, P., Kumar, A., Zohar, M., Aertsen, A., & Boucsein, C. (2014). Physiology and
Impact of Horizontal Connections in Rat Neocortex. Cerebral Cortex, bhu265.
https://doi.org/10.1093/cercor/bhu265
Sigman, M., Cecchi, G. A., Gilbert, C. D., & Magnasco, M. O. (2001). On a common circle:
Natural scenes and Gestalt rules. Proceedings of the National Academy of
Sciences, 98(4), 1935–1940. https://doi.org/10.1073/pnas.98.4.1935
Smith, S. L., Smith, I. T., Branco, T., & Häusser, M. (2013). Dendritic spikes enhance
stimulus selectivity in cortical neurons in vivo. Nature, 503(7474), 115–120.
https://doi.org/10.1038/nature12600
Stepanyants, A., Martinez, L. M., Ferecskó, A. S., & Kisvárday, Z. F. (2009). The fractions
of short- and long-range connections in the visual cortex. Proceedings of the
National Academy of Sciences, 106(9), 3555–3560.
https://doi.org/10.1073/pnas.0810390106
Tremblay, R., Lee, S., & Rudy, B. (2016). GABAergic Interneurons in the Neocortex: From
Cellular Properties to Circuits. Neuron, 91(2), 260–292.
https://doi.org/10.1016/j.neuron.2016.06.033
Ursino, M., & La Cara, G. E. (2004). A model of contextual interactions and contour
detection in primary visual cortex. Neural Networks: The Official Journal of the
International Neural Network Society, 17(5–6), 719–735.
https://doi.org/10.1016/j.neunet.2004.03.007
van Bommel, B., & Mikhaylova, M. (2016). Talking to the neighbours: The molecular and
physiological mechanisms of clustered synaptic plasticity. Neuroscience and
Biobehavioral Reviews, 71, 352–361.
https://doi.org/10.1016/j.neubiorev.2016.09.016
Vu, E. T., & Krasne, F. B. (1992). Evidence for a computational distinction between
proximal and distal neuronal inhibition. Science (New York, N.Y.), 255(5052),
1710–1712.
Weber, J. P., Andrásfalvy, B. K., Polito, M., Magó, Á., Ujfalussy, B. B., & Makara, J. K.
(2016). Location-dependent synaptic plasticity rules by dendritic spine
cooperativity. Nature Communications, 7, 11380.
https://doi.org/10.1038/ncomms11380
Yen, S.-C., & Finkel, L. H. (1998). Extraction of perceptually salient contours by striate
cortical networks. Vision Research, 38(5), 719–741.
https://doi.org/10.1016/S0042-6989(97)00197-1
Yoshimura, Y., Sato, H., Imamura, K., & Watanabe, Y. (2000). Properties of Horizontal
and Vertical Inputs to Pyramidal Cells in the Superficial Layers of the Cat Visual
Cortex. The Journal of Neuroscience, 20(5), 1931–1940.
Zhou, C., & Mel, B. W. (2008). Cue combination and color edge detection in natural
scenes. Journal of Vision, 8(4), 4. https://doi.org/10.1167/8.4.4
Appendix A
Multi-compartment Model Parameters
Passive Properties
Rm: dendrites 10 kΩ·cm²; node 50 kΩ·cm²; other 20 kΩ·cm²
Cm: myelination 0.05 µF/cm²; soma 1 µF/cm²; dendrites 2 µF/cm²
Ra: 100 Ωm
Active Properties
ḡNa: soma 25 pS/µm²; dendrites (∆ḡNa/∆L) 0.003 pS/µm³; axon IS, hillock and nodes 100 pS/µm²; myelination 0.6 pS/µm²
ḡK: soma 3 pS/µm²; dendrites 0.03 pS/µm²; axon IS, hillock and nodes 5 pS/µm²; myelination 200 pS/µm²
Synaptic Conductances
AMPA: ḡAMPA = 1.5 nS; τrise = 0.05 ms, τfall = 0.5 ms
NMDA: unless otherwise specified, ḡNMDA = 2 × ḡAMPA; τrise = 2.1 ms, τfall = 18.8 ms
GABAA: biexponential model (Figures 2.7 & 2.8); short: τ1 = 0.5 ms, τ2 = 100 ms; long: τ1 = 0.5 ms, τ2 = 2 ms
Abstract
The brain is a complex machine that performs a wide range of computations and processing. Within the cortex, the key processing units that carry different streams of information and integrate them are pyramidal neurons (PNs). Recent attempts at modeling synaptic integration have evolved from the simple notion of a linear summation unit to an augmented two-layer model, with a first layer describing the nonlinear interactions within the dendritic subunit and a second layer representing the axo-somatic activation function. The importance of dendrites in mediating spatial synaptic integration has gained more and more attention in the evolution of these models. However, the functional relevance of such dendrite-mediated computations remains unclear. In this thesis, we started by looking for a plausible functional role for nonlinear dendritic integration. We noted that in the sensory cortex, the main feedforward projection, which defines the layer 2/3 PN’s classical receptive field (CRF), accounts for only a small fraction of the excitatory contacts onto those cells. In contrast, 60%-70% of the contacts arise from “horizontal” connections. The horizontal network provides contextual information that helps modulate the cell’s response. Many such modulations are nonlinear, and little is known regarding their detailed mechanisms. We chose to study the classical-contextual interaction between the center and flanker edge elements in simple cells in the primary visual cortex (V1). An earlier study showed that the presence of aligned edge elements in the flanking receptive field can “multiplicatively” enhance the V1 cell’s response to its direct input within the CRF. The literature only provided the cell’s response to a limited number of conditions. In order to gather more information, we conducted a human-labeling experiment with natural image patches to map out the 2-input integration between the center and flanker inputs. We gathered a two-dimensional surface of ground truth contour probability, which is an example of a classical-contextual interaction function (CC-IF). The CC-IF can be considered as the function a V1 simple cell is supposed to compute in order to reach its functional goal of contour detection. Using a detailed compartmental model with two focal inputs spatially separated on a single dendrite, we reproduced this CC-IF in terms of the somatic response to proximal and distal inputs. We demonstrated that the effect was NMDA-dependent, and that the spatial separation was critical for such an interaction to occur. Furthermore, we showed that by altering the synaptic dynamics in the model, we could extend the CC-IF to a rich set of 2-D functions, including AND/OR-like operations. The flexibility could be further extended by accommodating the recently discovered SOM/PV interneuron motif. We showed that with the inhibitory circuitry, a single dendrite was capable of computing pure multiplication and certain types of nonmonotonic interactions. Our findings support a model in which the nonlinear synaptic integration effects in PNs contribute directly to fine-grained contextual processing in V1. Moreover, the modeling study provided a valuable framework for studying other sensory modalities and other types of 2-input interactions. Next, we turned our attention back to the theoretical question of how exactly two or more spatially distinct input streams converge on a single dendrite.
The key question we asked is how many independent dimensions of spatial analog processing are necessary to accurately describe the branch’s input-output profile. We proposed two approaches to address this question. The first approach used a set of spatial activation patterns defined by some pre-set cosine-shaped pathways. We attempted to apply low-dimensional models to predict the somatic response to arbitrary activation patterns generated by high-dimensional models, and quantified the performance of the predictions. The results showed that a single dendrite may be represented by roughly three independent dimensions, and the marginal gain of adding more dimensions diminished significantly. Although the first approach adopted assumptions that were more closely constrained to biological conditions, it did not sufficiently represent the spatial variability of activation patterns. In order to seek the theoretical upper limit of a dendrite’s spatial processing power, we proposed a second approach. In the second approach, we used spatial activation patterns consisting of a fixed number of synapses within clusters to maximize the variability of possible inputs. In addition, we developed a set of non-overlapping, segmentation-based multi-dimensional models that predicted the somatic response based on simple synaptic counts. A linear-weight model was also developed in this approach to produce sets or “slices” of patterns that received the same linear predictions. These “slices” were used to further test the capability of the multi-dimensional models in capturing the specific variance due to nonlinear interactions. Furthermore, we expanded the study with a set of different biophysical parameters, including neck resistance, overall input intensity, and length of the dendrite. With the second approach, we observed that a four- or five-dimensional model would generally be sufficient to accurately predict the somatic response to an arbitrary input pattern. We also noted that certain biophysical specifications tended to increase the likelihood of nonlinear dendritic integration. Specifically, a lower neck resistance may in general increase the nonlinear spatial processing power of the dendrite, and it was common to see certain input intensity levels trigger more nonlinear integration than others. Finally, we further extended our model to encompass focal synaptic variations. We systematically changed three focal synaptic parameters, namely the number of synaptic contacts, the peak conductance and the presynaptic firing rate, and looked into the interactions among these variables. We then tested the possibility of using a unifying measure to describe the relationship among the three focal strength variables and found that the mono-input current response could be a good candidate for this purpose. We hope our theoretical study regarding the dimensionality of spatial processing in dendrites may facilitate the interpretation of experimental results involving active dendritic events, provide new hypotheses for future experimental research, and help guide new designs of future neuromorphic hardware systems.