Demonstrating the role of multiple memory mechanisms in learning patterns using neuromorphic circuits
by
Saeid Barzegarjalali
M.S., University of Southern California, 2012
A dissertation submitted to the
Faculty of the Graduate School of the
University of Southern California in partial fulfillment
of the requirements for the degree of
Doctor of Philosophy
Department of Electrical Engineering
December 2016
Abstract
Saeid Barzegarjalali, (Ph.D., Electrical Engineering)
Demonstrating the role of multiple memory mechanisms in learning patterns using neuromorphic
circuits
Proposal directed by Dr. Alice Parker
Acknowledgements
I would like to thank my advisor, Dr. Alice Parker, for her help and guidance, as well as my committee members and my teammates and colleagues in the BioRC group.
Contents

Chapter

1 Overview of BioRC Circuits
1.1 Problem Motivation
1.2 Hypothesis Statement
1.3 Related Work
1.3.1 FPGA
1.3.2 Large-scale general purpose models of neural systems
1.3.3 Small scale special purpose neuromorphic systems
1.3.4 Circuits with nano synapses
1.4 Introduction to this thesis
1.5 Components at transistor level
1.6 The Hybrid Electronic Cortical Neuron

2 Modeling the Cortico-Striatal-Thalamo-Cortical Circuit in the Brain and its Role in Obsessive Compulsive Disorder
2.1 Motivation and introduction to the CSTC circuit
2.2 The biological OCD Circuit and the Electronic Model
2.3 Typical and Atypical Responses
2.3.1 Scenario A: The indirect pathway and the direct pathway are both weak
2.3.2 Scenario B: The indirect pathway is strong
2.3.3 Scenario C: Both pathways are strong

3 Pattern Recognizing Circuit in the Cerebral Cortex and its role in Schizophrenia
3.1 Motivation and introduction to pattern recognizing circuit and Schizophrenia
3.2 Symptoms of Schizophrenia including disinhibition
3.3 Our neuromorphic circuit using feedforward inhibition to recognize a predicted pattern
3.4 Simulation results
3.4.1 Input Scenario 1
3.4.2 Input Scenario 2
3.4.3 Input Scenario 3
3.4.4 Input Scenario 4
3.5 False firing due to different noise levels
3.6 Detecting the expected pattern with different synaptic weights
3.7 Tolerance of the circuit for delay between two consecutive spikes

4 Neuromorphic Circuit Mimicking Biological Short-Term Memory
4.1 Abstract
4.2 Introduction
4.3 Short-term Memory Based on Persistent Firing of Prefrontal Cortical Neurons
4.4 Long-range Network to Generate Persistent Firing
4.5 Mutually Inhibitory Neurons to Generate Persistent Firing Representing Finite-state Memory
4.5.1 A: Input 1 and input 2 get activated simultaneously
4.5.2 B: Input 1 gets activated
4.5.3 C: Input 2 gets activated
4.6 Short-term Memory with Appearance and Location Information
4.6.1 Circuit is signalled image number 1 shown in Figure 4.1, a hollow circle on the left
4.6.2 Circuit is signalled image number 6 shown in Figure 4.1, a solid circle on right

5 A Bio-inspired Electronic Mechanism for Unsupervised Learning using Structural Plasticity
5.1 Abstract
5.2 Introduction
5.3 Builder Circuits
5.4 Temporal tolerance
5.5 Learning temporal patterns
5.6 Learning spatial patterns
5.7 Learning spatiotemporal patterns
5.7.1 Case 1 (upper left in Figures 5.9 and 5.10)
5.7.2 Case 2 (upper right in Figures 5.9 and 5.10)
5.7.3 Case 3 (lower left in Figures 5.9 and 5.10)
5.7.4 Case 4 (lower right in Figures 5.9 and 5.10)
5.8 Learning gets interrupted by a wrong input
5.9 Input is in transition between two patterns
5.9.1 The circuit is fed by pattern shown in case 3
5.9.2 The input is shifted to case 1
5.9.3 Circuit gets case 3 input pattern again

6 An Analog Neural Network that Learns Sudoku-Like Puzzle Rules
6.1 Abstract
6.2 Introduction
6.3 The circuit at the transistor level
6.3.1 Synapse
6.3.2 The neuron
6.3.3 Circuits for synaptic plasticity and reward-based learning
6.4 Eight types of synapses used in the regular architecture
6.5 Training type 3 synapses
6.6 The network can get trained and then can solve a 2-by-2 Sudoku
6.7 Pseudo sudoku type I
6.8 Pseudo sudoku type II

7 Conclusion and Future Research

Bibliography

Appendix

A Carbon NanoTube Transistors

B Neuromorphic Circuit Modeling Directional Selectivity in the Visual Cortex
B.1 Abstract
B.2 Introduction
B.3 Neuron and synapse
B.4 Orientation Selectivity in V1 (Primary Visual Cortex)
B.5 Dorsal and Ventral Pathways in the brain
B.6 Directional selectivity of movement
B.7 Detecting size of stimulus
B.8 Detecting Orientation, Direction and size
B.8.1 Big vertical object is moving from left to right
B.8.2 Small vertical object is moving from right to left
B.9 Conclusion
Tables

Table

2.1 Table for OCD symptoms
3.1 Table for lifetime risk of schizophrenia as a function of genetic relatedness to a person with schizophrenia
Figures

Figure

1.1 Our hybrid neuromorphic neuron with CNT synaptic layer and CMOS neurons
1.2 Synapse with CNT transistors with voltage knobs
1.3 Hybrid architecture
1.4 Two-input adder with CMOS technology
1.5 Three two-input adders are used as dendritic arbor
1.6 Axon Hillock with CMOS technology

2.1 a) Simplified Block Diagram of A Circuit in the Mouse Brain [1]. MSN, medium spiny neuron; GPe, globus pallidus pars externalis; GPi, globus pallidus pars internalis; SNc, substantia nigra pars compacta; SNr, substantia nigra pars reticulate; STN, subthalamic nucleus; D1R, D1-type dopamine receptor; D2R, D2-type dopamine receptor. b) Our Biomimetic Circuit Mimicking Part a
2.2 Firing pattern of all neurons in the OCD circuit for different scenarios
2.3 Typical and atypical responses of D1 and D2 neurons
2.4 Noise in indirect pathway can interrupt cortex
2.5 Response of a healthy circuit

3.1 Miswiring in Schizophrenia adopted from [2]
3.2 Block Diagram of our Hybrid Neuromorphic Circuit
3.3 Simulation Results for Normal and Schizophrenic Conditions
3.4 Neuron with noisy inputs
3.5 False Firing due to different noise levels
3.6 Detecting the expected pattern with different synaptic weights
3.7 Tolerance of the circuit for delay between two consecutive spikes
3.8 When delay is 2.25 ns, which is within the tolerance, G2 fires
3.9 When delay is just 1 ns, which is less than a typical pattern, P fires, indicating an erroneous pattern
3.10 Delay is 3.5 ns, which is out of tolerance, and P fires twice

4.1 Six different pictures with different location and appearance information
4.2 Our biomimetic circuit mimicking Long-range neural interactions
4.3 Simulation result for Figure 4.2. One input spike can trigger persistent firing in the prefrontal cortex due to a closed-loop network
4.4 Our biomimetic circuit mimicking a mutual inhibition network. Here, both group 1 and group 2 are single neurons
4.5 Simulation result for Figure 4.4. One input spike can trigger persistent firing in the prefrontal cortex, and the firing can cease when an input spike arrives at the other group due to a mutual-inhibition network
4.6 Short-term memory neural circuits with appearance and location information
4.7 Location and appearance information in short-term memory. The circuit first is signalled a hollow circle on the left and remembers it, then it is signalled a solid circle on the right and remembers it. The circuit remembers the location information for tens of nanoseconds. When the hollow neuron and the left neuron fire simultaneously, the circuit remembers that the object is a hollow circle and it is on the left. Firing together binds these two types of information

5.1 Many presynaptic neurons that are connected to a postsynaptic neuron via only two excitatory synapses
5.2 Four different spatiotemporal patterns that can be learned
5.3 The Builder circuit at the transistor level. There are four instances of this circuit in the final learning circuit for neurons S, C, O, and T, to be described later
5.4 The time that sn stays high as a function of the forget signal
5.5 Temporal tolerance between two input spikes
5.6 Temporal Learner. The part of the circuit that learns temporal patterns generated by IN1
5.7 Spatial Learner. The part of the circuit that learns spatial patterns generated by IN1 and IN2
5.8 Block diagram of our final circuit that learns spatiotemporal patterns
5.9 The circuit can learn four different spatiotemporal patterns
5.10 Neurons S, C, O, T and their corresponding builder compartments
5.11 Learning case 1 gets interrupted by a wrong input pattern
5.12 Neurons S and O and their corresponding builder compartments for the simulation shown in Figure 5.11
5.13 Input shifts from case 3 to case 1
5.14 Neurons C and S and their corresponding builder compartments

6.1 Fully connected neural network with normally-off (silent) synapses. Every neuron can be excited externally with normally-on synapses
6.2 A 2-by-2 Sudoku-like puzzle
6.3 An excitatory synapse adapted from [3]
6.4 A neuron in our circuit consists of the dendritic arbor and the axon hillock
6.5 8-input Dendritic Arbor
6.6 Block diagram of synaptic plasticity and learning mechanism
6.7 Monostable Generator Circuits (MGCs) that synchronize neural firing
6.8 Dopamine Receptor Circuits (DRCs) that detect correct firing
6.9 Latches for Neurotransmitter Concentrations (LNCs) to maintain learned synaptic strengths
6.10 Inputs to neuron A1
6.11 Setting NC 3
6.12 Any solution to a 2-by-2 Sudoku will be one of these two possibilities
6.13 A supervisor is training the network by giving a dopamine reward
6.14 Network is ready to solve 8 different possibilities
6.15 Pseudo Sudoku type I with two possible solutions
6.16 Network is learning Pseudo Sudoku type I
6.17 Network solves 8 different possibilities of Pseudo Sudoku type I
6.18 Pseudo Sudoku type II with two possible solutions
6.19 Network is learning Pseudo Sudoku type II
6.20 Network solves 8 different possibilities of Pseudo Sudoku type II

A.1 DC sweep for n-type CNT transistor
A.2 DC sweep for p-type CNT transistor
A.3 Voltage Transfer Curve for CNT inverter

B.1 Orientation selectivity in layer III of primary visual cortex: a) biological mechanism from [4], [5]; b) our biomimetic circuit where NV has a vertical receptive field and NH has a horizontal receptive field
B.2 NV neuron with a vertical receptive field. This neuron has been excited in two different scenarios (A and B). Simulation is shown in Figure B.3
B.3 Simulation result for biomimetic cortex with a vertical receptive field
B.4 Directional selectivity of movement: a) biology from [4]; b) our circuit
B.5 Target neuron fires only for preferred direction, from [4], [6]
B.6 Our circuit mimics biological mechanism of the directional selectivity
B.7 Our final neural circuit
B.8 Simulation result of our complete circuit
Chapter 1
Overview of BioRC Circuits
1.1 Problem Motivation
Memory in the human brain consists of different but integrated mechanisms that collectively form the capacity for memory. Providing memory in an electronic brain requires implementation of this diverse set of mechanisms. Memory and learning are related concepts. Modeling memory with bio-inspired circuits will help us to design circuits that learn similarly to the human brain and, as a consequence, to design artificially intelligent circuits.
1.2 Hypothesis Statement
Memory and learning in the brain are realized by a collection of mechanisms that interact,
resulting in learning and subsequent recognition of input patterns. While these mechanisms are
complex and span different areas of the brain, we hypothesize that we can construct electronic circuits that mimic these mechanisms, demonstrating some aspects of pattern recognition that mimic the brain's ability to learn and recognize patterns. We hypothesize that feed-forward inhibition is
useful for these electronic circuits because this mechanism has a proactive and predicting nature
similar to the human brain. From the Cerebral Cortex of the human brain (for recognizing a
rhythm) to the simple brain of a female cricket (for recognizing the chirp of a male cricket), feed-forward inhibition plays an important role. It sets up an adaptive, fault-tolerant pattern-recognition circuit with simple, faulty neurons, along with short-term memory, represented by continuous neural firing as in biological neural networks, that retains some information for a short period of time until higher-level decisions are made, recognition occurs, or action is taken. We also hypothesize that
prediction is a critical feature of recognition, and must be included in neuromorphic recognition
networks. In addition, we observe from biological research that erroneous prediction could underlie
hallucinations, so we believe modeling prediction with neuromorphic circuits could assist in mod-
eling hallucinations. We hypothesize that our neural circuits to predict patterns can be used to
model such erroneous behavior. (The prediction circuits require structural plasticity for pattern
recognition). We hypothesize that learning can be mediated by structural plasticity and synaptic
plasticity in our analog neuromorphic circuits. The structural plasticity in our circuits is a novel
approach in neuromorphic circuits, since structural plasticity in others' circuits is implemented by
software. Furthermore, in our circuit, learning the rules of a game is mediated by synaptic plasticity
and learning is expedited by the regular architecture of the neural network.
1.3 Related Work
All of the bio-inspired circuits that have been designed can be categorized into four groups:
1.3.1 FPGA
There are projects in which an FPGA is programmed with artificial neural network algorithms like the winner-take-all algorithm [7]. Since these projects use an off-the-shelf FPGA, they focus on software design, and the hardware is a conventional von Neumann machine. Transistors are in strong inversion and therefore these devices are power hungry; designing a device with the complexity of the human brain (assuming it is possible) with FPGAs would consume tens of megawatts.
1.3.2 Large-scale general purpose models of neural systems
The NeuroGrid project at Stanford University [8] is an example of a large-scale and general
purpose model of neural systems. It consists of a million neurons, but the transistors are in the sub-threshold region and therefore it is a relatively low-power device (3 watts). Because in NeuroGrid the FET transistors are in the sub-threshold (weak-inversion) region, their I-V characteristic is exponential and therefore their behavior is more similar to ion channels on the membrane of neurons. However, there is no synaptic plasticity and the circuit cannot learn on-line. In the NeuroGrid project, synapses are memory units and neurons are computation units. However, since synapses are close to each neuron and there is no single big memory unit and single big processor unit, it has a relatively non-von Neumann architecture.
1.3.3 Small scale special purpose neuromorphic systems
The ROLLS project by Giacomo Indiveri [9] is an example of a small network and special-
purpose neuromorphic circuit. Our project is also a small, special-purpose neuromorphic system, optimized for pattern recognition, in which learning is mediated by structural plasticity. Our circuit has a completely non-von Neumann architecture. There are no separate memory or computation parts in the circuit. The whole circuit shows an emergent behavior. The circuit learns, predicts, and compares as a whole, independent of single components, like the biological brain.
1.3.4 Circuits with nano synapses
In these circuits, the state-dependent resistance of memristive devices can be exploited as a synaptic function. Therefore, integration of conventional CMOS and nano-technology is a challenge in these projects. Memristors with higher conductance and memristors with lower conductance mimic strong and weak synapses, respectively.
1.4 Introduction to this thesis
There are about 100 billion neurons and 10^15 synapses in the human brain. A presynaptic neuron can excite a postsynaptic neuron via a synapse. Each time that a presynaptic neuron fires and produces an Action Potential (AP), the corresponding synapse produces a PostSynaptic Potential (PSP). These PSP signals from presynaptic neurons are summed up on the dendrites of the postsynaptic neuron and, if the summed PSP goes above the firing threshold of the postsynaptic neuron, it will fire. Typically, a neuron is connected to 10,000 other neurons.
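The summation-and-threshold behavior just described can be written as a few lines of Python. This is only a behavioral sketch with illustrative PSP and threshold values, not the transistor-level BioRC circuits presented later in this chapter.

# Behavioral sketch of PSP summation and threshold firing (illustrative values only,
# not the transistor-level BioRC neuron).

def neuron_fires(psp_amplitudes, threshold=0.3):
    """Return True if the summed postsynaptic potentials exceed the firing threshold.

    psp_amplitudes: PSP contributions in volts; excitatory PSPs are positive,
    inhibitory PSPs are negative.
    """
    return sum(psp_amplitudes) > threshold

print(neuron_fires([0.2, 0.2]))          # True: two coincident excitatory PSPs cross the threshold
print(neuron_fires([0.2, 0.2, -0.25]))   # False: an inhibitory PSP keeps the sum below threshold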
Even though neurons have relatively stereotyped behavior, their connections and the architecture of the brain can cause them to show different emergent behaviors. This means that when we use our intrinsic skills, such as when we see the world around us, there are neurons hard-wired in our brain in serial and parallel pathways that process information coming from the retina. Also, when we learn and memorize (an acquired skill), new connections are made in the brain region, or some of the current connections are strengthened or weakened, which is called plasticity. Dysfunctions
in these connections can cause psychological and neurological disorders.
Figure 1.1 shows an example of our hybrid neuromorphic neuron. This specific neuron can be connected to four synapses. These synapses can be excitatory or inhibitory. An excitatory synapse produces a positive voltage and tries to cause the neuron to fire. On the other hand, an inhibitory synapse produces a negative voltage and tries to inhibit the neuron from firing.
Figure 1.1: Our hybrid neuromorphic neuron with CNT synaptic layer and CMOS neurons
In this chapter, synapses and neurons are shown at the transistor level. These synapses and
neurons are building blocks of our high-level circuits.
In chapter 2, the Cortico-Striatal-Thalamo-Cortical (CSTC) circuit will be introduced. The CSTC circuit in the brain has an important role in controlling movements and thoughts. This circuit regulates the response of the human brain to the external input stimuli that are conveyed to the brain through ascending sensory signals. This circuit has "gas" (positive feedback) and "brake" (negative feedback) pathways. The strengths of these pathways determine the level of activity in the cortex as a response to the input stimulus. These pathways are altered by different experiences and memories. Therefore, a CSTC circuit causes our memories to affect our
behaviors and as a result plays an important role in reinforcement learning. As a consequence,
any dysfunction in this circuit may cause movement and psychological disorders. For example,
one hypothesis to explain Obsessive Compulsive Disorder (OCD) is that alteration in this circuit
causes a positive feedback loop (direct pathway) to become strengthened and a negative feedback
loop (indirect pathway) to become weakened, which causes hyperactivity in the cerebral cortex
resulting in OCD symptoms. We have designed and simulated a biomimetic electronic circuit that
mimics the CSTC loop in healthy and OCD conditions. This circuit simulation demonstrates the
utility of neuromorphic circuits in brain modeling, since varied behavior can be demonstrated via
modification of synaptic strengths.
In chapter 3, we will show how a hard-wired circuit can predict an input stimulus. This circuit mimics the cerebral cortex that recognizes familiar input patterns. It is assumed that this circuit has received 2 consecutive spikes for a long time and is now adapted to them (has memorized them). We will show
how alteration in this circuit can cause cognitive problems and hallucinations that are symptoms of
schizophrenia. The neural system in the human brain can identify regularities in received stimuli
and, based on that, predict future stimuli [10]. Neural prediction circuits reduce responses to
predictable and thus possibly redundant events. Failures in predictions that result in erroneous
responses may cause positive and negative symptoms in people who suer from schizophrenia [11].
We will show a bio-inspired neuromorphic circuit that mimics this prediction in the human brain
and its response to stimuli with a predictable pattern. Furthermore, it shows how alteration in the
neural circuitry can cause the circuit to recognize stimuli that do not occur (hallucination) or fail
to recognize the expected pattern (negative symptom).
Chapter 4 shows the mechanism of biological short-term memory. Research shows that the way we remember things for a few seconds is a different mechanism from the way we remember things for a longer time. Short-term memory is based on persistently firing neurons, whereas storing information for a longer time is based on strengthening the synapses or even forming new neural connections. Spatial and object information are segregated and processed by different neurons, and neurons can keep firing with different mechanisms. We will show a biomimetic neuromorphic circuit that mimics short-term memory by firing neurons using biological mechanisms and remembers spatial and object information. In our final macro circuit, firing neurons will show that the circuit is remembering the input pattern, and alterations in the synaptic weights will show that the circuit is learning (transferring information from short-term memory into long-term memory).
Chapter 5 shows a circuit that learns and later recognizes spatiotemporal patterns, which is a more developed version of the circuit introduced in chapter 3.
Finally, in chapter 6, we show a circuit that learns the rules of Sudoku or Sudoku-like games
by synaptic plasticity. It shows how the regular architecture of the network expedites the learning
procedure.
Appendix A shows simulation results for the CNT transistors that are used in this project.
Appendix B mimics vision and cognition in the brain, as a more complicated form of cognition in the Cerebral Cortex. Vision starts from the retina. Images on the retina are two-dimensional, but our visual perception of the outside world is three-dimensional. This is because many serial and parallel processing steps happen simultaneously in different parts of the brain, so that we can perceive and recognize objects around us [12]. I will show a neuromorphic circuit that mimics vision and object recognition in the brain. Visual signals are the most important inputs to the cerebral cortex. Heavily encoded signals coming from the retina are decoded into spatial and object information.
1.5 Components at transistor level
The synapses are modeled with Carbon NanoTube transistors by the BioRC research group
[13]. Carbon NanoTube transistors are lighter and have lower parasitics compared to conventional silicon transistors. This makes CNTs a good option for implementing the intense interconnectivity of synapses in the brain. Also, CNTs are more biocompatible and are good candidates for use as neuro-prosthetics in the brain in the future. Figure 1.2 shows the synapse designed by our research group with CNT transistors [3]. This synapse has voltage knobs that control the duration and the amplitude of the PSP signal. Therefore, it can be strengthened and weakened, similar to Long-Term Potentiation (LTP) and Long-Term Depression (LTD) in biology, which is believed to play an important role in learning.
Figure 1.2: Synapse with CNT transistors with voltage knobs from [3]
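As a rough software analogue of these voltage knobs, the sketch below parameterizes a PSP waveform by an amplitude scale and a duration scale. The alpha-function shape, the knob names, and the numeric values are assumptions made for illustration; they are not the measured behavior of the CNT synapse in [3].

import numpy as np

def psp_waveform(t, amplitude_knob=1.0, duration_knob=1.0,
                 base_amplitude=0.09, base_tau=1e-9):
    """Alpha-function PSP whose peak and decay are scaled by two 'knob' values,
    loosely analogous to strengthening (LTP-like) or weakening (LTD-like) a synapse."""
    tau = base_tau * duration_knob
    amp = base_amplitude * amplitude_knob
    return amp * (t / tau) * np.exp(1.0 - t / tau)   # peaks at amp when t == tau

t = np.linspace(0.0, 5e-9, 500)
potentiated_psp = psp_waveform(t, amplitude_knob=1.2)   # stronger synapse
depressed_psp = psp_waveform(t, amplitude_knob=0.7)     # weaker synapse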
These synapses are in a nanolayer on the surface of the CMOS die containing the neurons. As
Figure 1.3 shows, our circuit has a hybrid architecture. Neurons are modeled with CMOS TSMC
45nm technology.
Figure 1.3: Hybrid architecture
Figure 1.4: Two-input adder with CMOS technology adopted from [14]
Figure 1.4 shows the two-input adder designed by our group that is used in a two-stage
architecture shown in Figure 1.5 to mimic a Dendritic Arbor with four inputs.
Figure 1.5: Three two-input adders are used as Dendritic Arbor
This Dendritic Arbor receives PSP signals from four synapses (that can be excitatory or
inhibitory synapses) and generates a summed PSP, which goes to the Axon Hillock. If this summed PSP goes above the firing threshold of the neuron, the axon hillock will produce an Action Potential. Figure 1.6 shows the Axon Hillock designed in our group at the transistor level.
Figure 1.6: Axon Hillock with CMOS technology from [15]
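A minimal behavioral sketch of this dendritic-arbor/axon-hillock pipeline is shown below. It only mirrors the two-stage adder structure of Figure 1.5 and the thresholding role of the axon hillock; the threshold value is illustrative and nothing here corresponds to the CMOS implementation.

# Behavioral sketch of Figure 1.5: three two-input adders sum four PSPs,
# and the axon hillock thresholds the result (threshold value is illustrative).

def two_input_adder(a, b):
    """Stand-in for the CMOS two-input PSP adder."""
    return a + b

def dendritic_arbor(psp1, psp2, psp3, psp4):
    """Two-stage tree of two-input adders producing the summed PSP."""
    return two_input_adder(two_input_adder(psp1, psp2), two_input_adder(psp3, psp4))

def axon_hillock(summed_psp, threshold=0.3):
    """Emit an action potential (True) when the summed PSP exceeds the firing threshold."""
    return summed_psp > threshold

print(axon_hillock(dendritic_arbor(0.15, 0.15, 0.1, -0.05)))  # True for this input combination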
1.6 The Hybrid Electronic Cortical Neuron
The human brain has around 100 billion neurons. Each one of these neurons can have an
average of 10,000 synaptic connections to it from other neurons (our simplified neuron described here can only be connected to four synapses, but the BioRC library has been used for much larger neurons). Realizing synapses with a
nanolayer on the surface of the CMOS die is used here to enable complex neural networks to
be fabricated. Synapses are designed with CNT transistors [3]. CNT technology is a suitable
choice for the synaptic layer in neuromorphic circuits due to the lower parasitics, lighter weight, and greater biocompatibility of CNTs, while CMOS is used for dendritic potential adders
and axon hillocks that require complicated computations. There are emerging applications for
this hybrid architecture in nanotechnology. A hybrid architecture combines the inherent advantages of CMOS and CNTs. It uses CNT transistors as sensors and CMOS as complex electronic control
and processing units [16], while supply voltages are identical for both technologies. There are
techniques for independent VLSI fabrication and CNT processing so that the CMOS portion can
exploit advantages of VLSI scaling like cost, flexibility, and predictable performance [17]. While demonstration of hybrid structures is not required for demonstration via simulation, such combined
technologies might make prosthetic devices more powerful in the future.
Chapter 2
Modeling the Cortico-Striatal-Thalamo-Cortical Circuit in the Brain and its Role in Obsessive Compulsive Disorder
2.1 Motivation and introduction to the CSTC circuit
The Cortico-Striatal-Thalamo-Cortical (CSTC) circuit in the brain has an important role in controlling movement and thoughts. This circuit regulates the response of the human brain to the external input stimuli that are conveyed to the brain through ascending sensory signals. This circuit has "gas" (positive feedback) and "brake" (negative feedback) pathways. The strengths of these pathways determine the level of activity in the cortex as a response to the input stimulus. These pathways are altered by different experiences and memories. Therefore, a CSTC circuit causes our memories to affect our behaviors, and as a result plays an important role in reinforcement learning.
As a consequence, any dysfunction in this circuit may cause movement and psychological
disorders. For example, one hypothesis to explain Obsessive Compulsive Disorder (OCD) is that
alteration in this circuit causes a positive feedback loop (direct pathway) to become strengthened
and a negative feedback loop (indirect pathway) to become weakened, which causes hyperactivity
in the cerebral cortex, resulting in OCD symptoms. We have designed and simulated a biomimetic
electronic circuit that mimics the CSTC loop in healthy and OCD conditions. This circuit simu-
lation demonstrates the utility of neuromorphic circuits in brain modeling, since varied behavior can be demonstrated via modification of synaptic strengths as well as noisy inputs.
We have chosen to model the CSTC circuit in the human brain using BioRC neurons in
order to demonstrate the capability of modeling neural disorders spanning multiple brain regions.
In addition, the OCD neural behavior we model involves neurons with behavior sensitive to noisy
inputs, and we demonstrate the capability of the BioRC neural structure to process these variable
inputs.
Obsessive Compulsive Disorder (OCD) is prevalent throughout the world, and the rate of
occurrence is thought to be between 1 and 2 percent. While models of this disorder have been
proposed, demonstration of the models using electronic circuits has not appeared in the research
literature. The BioRC project [13] has a library of neuromorphic circuits that support modeling
both typical and atypical neural circuits. Demonstrating simulations that exhibit disorder states is
a current focus of the BioRC group, because it highlights the range of behaviors that can be modeled
with BioRC circuits, and OCD is an excellent candidate for inclusion in these experiments.
OCD is a debilitating neuropsychiatric condition characterized by intrusive distressing thoughts
(obsessions) and/or repetitive ritualized mental or behavioral acts (compulsions), normally acted
on to alleviate anxiety [1, 18]. There are numerous symptoms of OCD [19, 20], some related to
obsession and some to compulsion. Behaviors characterized by obsession include excessive doubt
about task completion, fear of contamination or uncleanliness, need for symmetry or exactness,
fear of causing harm to others, excessive concern over right and wrong, and intrusive inappropriate
sexual thoughts. Compulsive behavior includes excessive or repeated checking, washing/cleaning,
counting, ordering/arranging, repeating, hoarding or praying. These are shown in Table 2.1.
Table 2.1: Table for OCD symptoms [1]
The Cortico-Striatal-Thalamo-Cortical (CSTC) circuit controls thought and
movement as a response of the brain to external stimuli. Lesions in this area, dysfunctional behavior
of neurons, or imbalance in excitation and inhibition can cause movement disorders, neurological
symptoms, and even psychological disorders. For example, in non-human primates, blocking the
activity of the sub-thalamic nucleus (STN) produces a movement disorder similar to that seen in
human hypokinetic and hyperkinetic movement disorders like Parkinson's disease and Huntington's
disease [21, 22]. One explanation for Obsessive Compulsive Disorder (OCD) is that neurons in the
thalamus and cortex keep firing due to a repetitively excited loop, and the Substantia Nigra (SNr), a
major output structure of the basal ganglia, becomes unable to inhibit the thalamus and slow
down the hyperactivity. This hyperactivity is believed to cause OCD [1].
Here, we have designed a biomimetic circuit that mimics the mechanism of OCD. Our circuit
has a hybrid structure. Excitatory synapses and inhibitory synapses have been designed using
circuits containing Carbon Nanotube (CNT) transistors, and neuromorphic neuron circuits have been designed using CMOS 45 nm technology transistors.
2.2 The biological OCD Circuit and the Electronic Model
Figure 2.1 (a) shows a simplified block diagram of a mouse brain. It shows the loop between
cortex, striatum (which is a part of the Basal ganglia), and thalamus. In a healthy condition,
the striatum inhibits Globus Pallidus pars externalis (GPe), and therefore GPe stops inhibiting
SubThalamic Nucleus (STN). Then, STN excites Globus Pallidus pars internalis (GPi)/SNr and
GPi/SNr inhibits the thalamus.
The indirect pathway on the right in Figure 2.1 (a) acts like a brake and inhibits hyperactivity
in the thalamus and cortex. If the direct pathway on the left becomes strong, it will inhibit
GPi/SNr and therefore the thalamus becomes hyperactive. The level of thalamic and cortical activity is controlled by a "gas-brake" mechanism of direct and indirect pathways [1]. All sensory signals (except olfactory) first go to the thalamus and from the thalamus they get distributed to
corresponding regions in the Cerebral Cortex.
Figure 2.1: a) Simplified Block Diagram of A Circuit in the Mouse Brain [1]. MSN, medium spiny
neuron; GPe, globus pallidus pars externalis; GPi, globus pallidus pars internalis; SNc, substantia
nigra pars compacta; SNr, substantia nigra pars reticulate; STN, subthalamic nucleus; D1R, D1-
type dopamine receptor; D2R, D2-type dopamine receptor. b) Our Biomimetic Circuit Mimicking
Part a
The block diagram of our circuit design is shown in Figure 2.1 (b). D1 is the neuron that
inhibits GPi/SNr and accelerates the activity of the thalamus. D2 is the neuron that excites
GPi/SNr through an indirect pathway, inhibiting the thalamus and reducing its activity. GPi/SNr
is connected to the thalamus through inhibitory synapses, inhibiting the thalamus and consequently
the cortex.
It should be noted that many neurons are involved in the biological circuit modeled here, while
we have simplified the circuit to single neurons in each brain region. More extensive neuromorphic
circuit models can be constructed to model more detailed behavior. We ran simulations for this
OCD circuit to demonstrate the overall behavior of the circuit when certain neurons D1 and D2
have weak or strong synapses, and we also introduce noise as that is believed to break the OCD
loop. We show details of the simulations in the following scenarios.
2.3 Typical and Atypical Responses
Figure 2.2: Firing pattern of all neurons in the OCD circuit for different scenarios
Figure 2.2 shows three different firing scenarios (A, B, and C), showing the AP (Action Potential) output of all neurons. The thalamus receives external excitation that mimics sensory inputs. This external excitation, which is not shown in the simulation, causes the thalamus to fire unless it is inhibited. GPe and STN are also connected to external excitation and, similar to the thalamus, they will keep firing unless inhibited. In other words, two synaptic inputs of the neurons representing these three regions (thalamus, GPe and STN) are excited by a spiking voltage source that mimics
sensory signals.
Figure 2.3: Typical and atypical responses of D1 and D2 neurons
Typical and atypical responses of D1 and D2 neurons are shown in Figure 2.3.
2.3.1 Scenario A: The indirect pathway and the direct pathway are both weak
The thalamus keeps firing unless inhibited, and each time the thalamus fires it excites the cortex. The cortex tries to excite neurons D1 and D2 in the direct and indirect pathways, but the synapses connecting the cortex to D1 and D2 are weak and cannot cause them to fire. Therefore, during scenario A, D2 does not fire. Since it does not fire, it cannot inhibit GPe. GPe fires unless inhibited, therefore during scenario A, GPe keeps firing. STN fires unless inhibited. During scenario A, GPe keeps inhibiting STN and therefore STN does not fire. Since STN does not fire, it cannot excite GPi/SNr. GPi/SNr does not fire and consequently cannot inhibit the thalamus. The thalamus fires unless inhibited, and therefore during scenario A it fires and causes the cortex to fire in hyperactive mode. Figure 2.4 shows the simulation result for scenario A when the other two inputs to neuron D2R are connected to a white noise source with zero mean and 30 mV rms value. Occasional firings of D2R due to noise can interrupt the hyperactivity of the cortex.
Figure 2.4: Noise in indirect pathway can interrupt cortex
2.3.2 Scenario B: The indirect pathway is strong
Excitatory synapses connecting the cortex to D2 have been strengthened, and therefore each time the cortex fires, it causes D2 to fire. D2 starts inhibiting GPe. Since GPe does not fire, it stops inhibiting STN. STN fires and excites GPi/SNr. GPi/SNr inhibits the thalamus. The thalamus stops firing, therefore it does not excite the cortex. The thalamus and cortex do not fire for some time. Since the brake mechanism is indirect, it does not brake the cortex immediately. Since the cortex does not fire, it cannot cause D2 to fire. D2 does not fire, GPe starts firing, and STN stops firing. This means that GPi/SNr will not inhibit the thalamus; the thalamus will fire again and will excite the cortex. So, even if scenario B lasts for some time, the thalamus and cortex will show activity, but this negative feedback mechanism reduces the hyperactivity and firing bursts of the thalamus and cortex, as shown in Figure 2.5.
Figure 2.5: Response of a healthy circuit
2.3.3 Scenario C: Both pathways are strong
In this scenario, the indirect pathway is strong but the direct pathway is stronger, and D1 fires each time that the cortex fires. So, the cortex (via D1) inhibits GPi/SNr. GPi/SNr cannot fire even though it receives excitation from the brake pathway. Therefore, during scenario C, D2 and STN keep firing. However, GPi/SNr does not fire and the thalamus shows atypical hyperactivity.
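The three scenarios can be condensed into a coarse rule-based sketch of the gas/brake loop in Figure 2.1(b). One Boolean per region and one loop traversal per call is an oversimplification of the analog, noisy simulation, but it captures which pathway wins in each scenario.

# Rule-based sketch of the CSTC gas/brake logic (single neuron per region,
# one loop traversal per call; an illustration, not the transistor-level model).

def cstc_step(direct_strong, indirect_strong, cortex_active):
    """One traversal of the CSTC loop; returns whether GPi/SNr inhibits the thalamus."""
    d1_fires = cortex_active and direct_strong          # direct ("gas") pathway
    d2_fires = cortex_active and indirect_strong        # indirect ("brake") pathway
    gpe_fires = not d2_fires                             # GPe fires unless inhibited by D2
    stn_fires = not gpe_fires                            # STN fires unless inhibited by GPe
    gpi_snr_fires = stn_fires and not d1_fires           # excited by STN, inhibited by D1
    return gpi_snr_fires                                 # True => thalamus (and cortex) is braked

# Scenario A: both pathways weak -> no brake, thalamus/cortex hyperactive
print(cstc_step(direct_strong=False, indirect_strong=False, cortex_active=True))  # False
# Scenario B: indirect pathway strong -> brake engages after one loop delay
print(cstc_step(direct_strong=False, indirect_strong=True, cortex_active=True))   # True
# Scenario C: direct pathway dominates -> D1 silences GPi/SNr, hyperactivity returns
print(cstc_step(direct_strong=True, indirect_strong=True, cortex_active=True))    # False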
Chapter 3
Pattern Recognizing Circuit in the Cerebral Cortex and its role in Schizophrenia
3.1 Motivation and introduction to pattern recognizing circuit and Schizophrenia
The neural system in the human brain can identify regularities in received stimuli and, based
on that, predict future stimuli [10]. Neural prediction circuits reduce responses to predictable and
thus possibly redundant events.
Here, we will show how a hard-wired circuit can recognize a familiar input pattern. It is
assumed that this circuit has received 2 consecutive spikes repeated for a long time and is therefore adapted to this input pattern. So, the circuit has a fixed memory.
Failures in predictions that result in erroneous responses may cause positive and negative
symptoms in people who suffer from schizophrenia [11]. Genes play an important role in schizophre-
nia. Table 3.1 shows lifetime risk of schizophrenia as a function of genetic relatedness to a person
with schizophrenia.
Table 3.1: Lifetime risk of schizophrenia as a function of genetic relatedness to a person with schizophrenia. From [4].
Here, we have designed a bio-inspired, neuromorphic circuit that mimics this "prediction"
in the human brain and its response to stimuli with a predictable pattern. Furthermore, it shows
how alteration in the neural circuitry can cause the circuit to recognize stimuli that do not occur
(hallucination) or fail to recognize the expected pattern (negative symptom).
A growing body of research shows that some neural processes in the human brain are highly
predictive and proactive. These predictions help the brain to perform fast and targeted responses
to environmental stimuli. Erroneous prediction or failure in prediction may underlie hallucinations
and negative symptoms in schizophrenia. In the normal condition, the brain suppresses responses
to familiar and expected input and detects when unexpected inputs occur [23]. Alterations of three
neurotransmitters (GABA, Glutamate, and Dopamine) in the micro-circuitry and mis-wiring in the
macro neural circuitry of the neural system can trigger schizophrenic hallucinations and cognitive
abnormalities. This alteration can cause disinhibition or erroneous inhibition by changing the
balance between excitatory and inhibitory synapses [2] that is essential for cognition. Feedforward
inhibition is a mechanism in which one neuron excites a neighboring or "downstream" neuron, but
also recruits a third neuron to inhibit the downstream target after some delay [24]. Therefore,
appropriate inhibition plays an important role in prediction. We have designed a biomimetic
neural circuit that mimics normal and schizophrenic conditions and their responses to expected
and unexpected input stimuli. A predicted sequence of input spikes does not elicit a final response in the healthy brain, since it is expected, but misfiring due to noise in the schizophrenic brain causes the circuit to respond as if the expected input spike train were present, giving the illusion (hallucination).
3.2 Symptoms of Schizophrenia including disinhibition
Symptoms of schizophrenia are divided into positive and negative symptoms. Positive symp-
toms are those that are present in people with schizophrenia (like hallucinations) and negative
symptoms are those capabilities that are absent in schizophrenia, like failure in recognition compared to typical individuals.
Miswiring in neural circuitry can cause disinhibition in pyramidal neurons and therefore
can cause schizophrenic symptoms when neurons fire without proper input stimuli [2]. As Figure 3.1 shows, in the normal condition, a simplified pyramidal neuron receives one pair of excitatory
synapses and two inhibitory synapses coming from GABAergic interneurons. In a schizophrenic
condition, there might be two pairs of excitatory synapses and only one inhibitory GABAergic
interneuron. This means that the pyramidal neuron receives more excitation and less inhibition
compared to the normal condition, which may cause erroneous responses. In our simulation experiments we change the synaptic weights to mimic this structural change. We demonstrate how this change causes failure in recognizing expected patterns.
Figure 3.1: Miswiring in Schizophrenia adopted from [2].
3.3 Our neuromorphic circuit using feedforward inhibition to recognize a predicted pattern
We have designed a simple neuromorphic prediction network that expects two spikes arriving
at its input in sequence, separated by a time period that is dependent on the neural delays in the
network. If the two spikes arrive as predicted, the pyramidal neuron will not fire. Spikes separated by a shorter or longer time period that does not match the predicted delay, or a missing second spike, will cause the final pyramidal neuron in the network to fire. The circuit mimics the feedforward inhibition mechanism in the biological brain [24]. Figure 3.2 shows the block diagram of our hybrid neural circuit. Every neuron needs coincident PSPs from two excitatory synapses to fire.
Figure 3.2: Block Diagram of our Hybrid Neuromorphic Circuit
3.4 Simulation results
Figure 3.3 shows the simulation results for both normal and schizophrenic conditions for 4
different scenarios.
3.4.1 Input Scenario 1
The circuit is in the normal condition and waiting for input. The input is just one spike. It causes A to fire. A causes D1 to fire. D1 causes D2 to fire. When D2 fires, IN is silent and therefore the G neurons do not fire. G neurons only fire if both synapses excite simultaneously. They have not detected the expected 2 consecutive spikes, so they do not fire. D2 causes B to fire, and finally B causes P to fire, indicating an unexpected pattern. This is similar to the condition when a person hears something that he does not expect and understands that something is wrong.
Figure 3.3: Simulation Results for Normal and Schizophrenic Conditions
3.4.2 Input Scenario 2
The circuit is still in the normal condition. The input is 2 consecutive spikes. The first spike causes A to fire. A causes D1 to fire. D1 causes D2 to fire. When D2 is trying to cause B to fire, the second spike from the input arrives at the other synapse of the G neurons. The B neuron is trying to cause P to fire, but two interneurons G1 and G2 are inhibiting P from firing. As a result, P does not fire. Therefore, when the G neurons fire, it means the input is 2 consecutive spikes, and when P does not fire it means the input is expected. This is similar to the condition when a person hears something that is consistent with his prediction and therefore he stops responding to it.
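Scenarios 1 and 2 can be summarized by a small behavioral sketch of the feedforward-inhibition predictor of Figure 3.2 in its normal condition. The chain delay and the coincidence tolerance below are illustrative numbers, not values extracted from the transistor-level simulation.

# Behavioral sketch of the predictor: the first spike travels A -> D1 -> D2 and reaches
# the G interneurons after a fixed delay; G fires only on coincidence with a second spike,
# and a firing G inhibits P (delay and tolerance values are assumptions).

def predictor_response(input_spike_times, chain_delay=2.25e-9, tolerance=1.0e-9):
    """Return (G_fires, P_fires) for a train of input spike times in seconds."""
    if not input_spike_times:
        return False, False
    delayed_first = input_spike_times[0] + chain_delay
    g_fires = any(abs(t - delayed_first) < tolerance for t in input_spike_times[1:])
    p_fires = not g_fires          # B always tries to drive P; G vetoes it on a match
    return g_fires, p_fires

print(predictor_response([0.0]))              # (False, True): single spike -> unexpected, P fires
print(predictor_response([0.0, 2.25e-9]))     # (True, False): expected pair -> G fires, P is inhibited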
3.4.3 Input Scenario 3
The circuit now is altered to a schizophrenic condition by altering synaptic strengths as shown
in Figure 3.2. G2 has four inputs, and two inputs are connected to a Gaussian noise source with zero mean and 40 mV rms value, as shown in Figure 3.4. This noise can cause G2 to fire when the input
is only one spike or even when there is no input spike. This means that due to noise, the circuit
mistakenly thinks the input is 2 consecutive spikes. This response is analogous to hallucination
and mistaken cognition.
3.4.4 Input Scenario 4
The circuit is still altered to a schizophrenic condition but the noise has stopped, in order
to see the effect of the alteration of synaptic strengths. As Figure 3.2 shows, in a schizophrenic
condition, one of the excitatory synapses going from B to P is strengthened. Therefore, P receives
more excitation. Also, one of the excitatory synapses connecting the input to G2 is weakened.
Therefore, if the input is 2 consecutive spikes, G2 does not fire and P receives less inhibition. Receiving more excitation and less inhibition causes neuron P to fire even when the input is the expected 2 consecutive spikes. This disinhibition caused by the neural circuitry alteration causes the network to fail to recognize an expected pattern and to fail to display an appropriate response. Although the input is 2 consecutive spikes, G2 does not fire and P fires twice.
3.5 False firing due to different noise levels
Neurons G1 and G2 should fire if they receive input from two synapses simultaneously. As stated previously, false firing due to noise mimics hallucination. Figure 3.4 shows neuron G2 with two synapses connected to a noise source. This noise source has Gaussian output with zero mean and different rms values.
Figure 3.4: Neuron with noisy inputs
The simulation result is shown in Figure 3.5. As the blue curve illustrates, when there is no
noise or when the noise has only 10 mV rms value, there is no false firing, but when the magnitude of noise increases, the frequency of false firing increases. The red curve shows a similar simulation, but this time one of the inputs to G2 receives input spikes with a frequency of 200 MHz, which makes it more likely to fire mistakenly.
Figure 3.5: False Firing due to different noise levels
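The trend in Figure 3.5 can be mimicked with a small Monte Carlo sketch. The firing threshold, the PSP amplitude, and the way the two noisy inputs are combined are assumptions made for illustration; they are not parameters of the actual G2 neuron.

import random

def false_firing_rate(noise_rms_mV, with_one_spike=False, psp_mV=90.0,
                      threshold_mV=150.0, trials=100_000):
    """Estimate how often two noisy synaptic inputs (plus an optional real spike on one
    input) push the summed PSP over the firing threshold."""
    fires = 0
    for _ in range(trials):
        summed = random.gauss(0.0, noise_rms_mV) + random.gauss(0.0, noise_rms_mV)
        if with_one_spike:
            summed += psp_mV
        if summed > threshold_mV:
            fires += 1
    return fires / trials

for rms in (10, 20, 30, 40):   # false-firing rate grows with the noise level,
    print(rms, false_firing_rate(rms), false_firing_rate(rms, with_one_spike=True))
    # and it is higher when one input already carries a real spike (the red-curve case)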
3.6 Detecting the expected pattern with different synaptic weights
We continue exciting the neuromorphic circuit with two consecutive spikes with 3 ns delay
while G2 is connected to a noise source with 10 mV rms value. In a healthy condition, neuron P
should never fire and neuron G2 should always fire, for this expected stimulus. By strengthening the synapse that connects neuron B to neuron P and weakening the synapse that connects In to neuron G2, the response of the circuit gradually changes from typical to atypical. Figure 3.6 shows the percentage of times that neuron P and neuron G2 fire each time the circuit gets two consecutive spikes. When the amplitude of the PSP output of the B-to-P synapse is 95 mV and the amplitude of the In-to-G2 PSP is 90 mV, the circuit always gives an appropriate response, while when the former changes to 105 mV and the latter changes to 75 mV, the circuit behaves in an atypical fashion.
Figure 3.6: Detecting the expected pattern with different synaptic weights
3.7 Tolerance of the circuit for delay between two consecutive spikes
Figure 3.7 shows the response of the circuit, when it is in the "healthy condition" (Figure 3.6), to two consecutive spikes with different delays in between the spikes. When the delay is from 1.25 ns to 3 ns, P does not fire and G2 fires once, which means the circuit identifies the two spikes as the expected input. When the delay is more than 3 ns, the circuit considers them as two independent spikes, and P fires twice and G2 does not fire. Also, if the spikes are too close (the delay is less than 1.25 ns), G2 does not fire, cannot inhibit P, and P fires.
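The tolerance window described above (and plotted in Figure 3.7) can be summarized by a small classification sketch; the window limits are taken from the text, while the response strings simply paraphrase the observed behavior.

def classify_spike_pair(delay_ns):
    """Classify two consecutive input spikes by the delay between them (limits from Figure 3.7)."""
    if 1.25 <= delay_ns <= 3.0:
        return "expected pattern: G2 fires once, P stays silent"
    if delay_ns > 3.0:
        return "two independent spikes: G2 does not fire, P fires twice"
    return "spikes too close: G2 misses the coincidence and P fires"

for delay in (1.0, 2.25, 3.5):   # the three data points shown in Figures 3.8-3.10
    print(delay, classify_spike_pair(delay))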
Figure 3.7: Tolerance of the circuit for delay between two consecutive spikes
Figures 3.8, 3.9, and 3.10 show three simulations for the three data points in Figure 3.7.
Figure 3.8: When the delay is 2.25 ns, which is within the tolerance, G2 fires.
Figure 3.9: When the delay is just 1 ns, which is less than a typical pattern, P fires, indicating an erroneous pattern.
Figure 3.10: The delay is 3.5 ns, which is out of tolerance, and P fires twice.
Chapter 4
Neuromorphic Circuit Mimicking Biological Short-Term Memory
4.1 Abstract
Research shows that the way we remember things for a few seconds is a different mechanism from the way we remember things for a longer time. Short-term memory is based on persistently firing neurons, whereas storing information for a longer time is based on strengthening the synapses or even forming new neural connections. Information about the location and appearance of an object is segregated and processed by separate neurons. Furthermore, neurons can continue firing using different mechanisms. Here, we have designed a biomimetic neuromorphic circuit that mimics short-term memory by firing neurons using biological mechanisms to remember the location and shape
of an object.
4.2 Introduction
Memory plays an important role in biological neural circuits, relating past to current time.
There are two types of memory with different mechanisms, short-term memory and long-term memory. Short-term memory is based on persistent firing of neurons in the prefrontal lobe of the cortex [4]. The hippocampus selectively transfers information from short-term memory into long-term memory by different forms of Long-Term Potentiation (LTP) and Long-Term Depression (LTD) [12]. In other words, information in long-term memory is stored in the strengths of the synapses, along with connectivity patterns of the neural circuits in the biological brain. Any dysfunctional behavior in memory can be associated with disorders like hallucination or memory deficit of schizophrenia [25, 26].
Synaptic strength has been a feature of neuromorphic circuits for many years. Recently, there
are many research groups that have designed circuits with a neuromorphic learning architecture
[27] or neuromorphic memory with a short-term part [28]. They usually use volatile memristors to mimic short-term memory and forgetting [29]. Two-terminal synaptic memory has also been used to build neuromorphic computation devices [30]. They use analog changes in the conductance of the two-terminal device to mimic reconfigurability and learning. Even though previous neuromorphic
memory circuits have similar functions to biological memory, their principles of operation are not
completely biomimetic. In contrast to previous works, our circuit mimics short-term memory
retention and short-term memory loss with similar mechanisms to their biological counterparts.
4.3 Short-term Memory Based on Persistent Firing of Prefrontal Cortical
Neurons
Short term memory depends on neural activity in the prefrontal lobe of the cortex. In human
and non-human primates, neurons in the prefrontal cortex start ring and, with persistent ring,
they remember necessary information during a short period of time. This information can have
both location and shape feature components. Dierent neurons are sensitive to dierent types of
38
information (location or appearance). Some of them re for both types [31]. For example, monkeys
can be trained to compare frequency of input electrical excitation with a previous signal using
\Yes" neurons and \No" neurons that mutually inhibit each other and answer this question \Is f1
bigger than f2?" [32].
Figure 4.1 shows six dierent pictures of objects with dierent location and appearance
information. The object can be a hollow circle \H" or a solid circle \S". Location of this object
can be on left \L", up \U", or right \R". Therefore, in order to remember any of these pictures we
need to remember two types of information, appearance information (hollow or solid) and location
information ( left, up, or right). We assume individual neurons have already red to indicate
location and appearance of the object, and this information is input to our short-term memory.
Figure 4.1: Six dierent pictures with dierent location and appearance information.
Therefore, in an oversimplied neural circuit we need ve neurons in the prefrontal cortex to
remember each one of the pictures shown in Figure 4.1, two for appearance and three for location.
Two of neurons will re at any time, one for the location information and one for the appearance
information. There are dierent mechanisms for sustained ring in human and nonhuman primates
to remember items or events in short-term memory. This continuous ring can be generated by
a network connection or mutual inhibition [4]. We describe both mechanisms with our example
circuits below.
39
4.4 Long-range Network to Generate Persistent Firing
Figure 4.2 shows the block diagram of our hybrid electronic network that mimics the long-
range synaptic connection in the human brain that goes from the prefrontal cortex to the parietal
lobe of the cortex, from parietal to inferior temporal lobe of the cortex, and from inferior temporal
back to prefrontal cortex. For simplicity we have modeled a single neuron in each location as
forming the loop, although many more would be included in this neural circuit in the biological
brain. In our simplied model, every neuron is connected to the next neuron via one excitatory
synapse.
Figure 4.2: Our biomimetic circuit mimicking Long-range neural interactions
40
Figure 4.3 is the simulation result for our neuromorphic circuit network that mimics the
biological network. One input signal that has location or appearance information can trigger the
network, and selected neurons in the prefrontal cortex will begin ring continuously. Figure 4.3
shows the input spike, and action potential of neurons in the prefrontal, parietal, and inferior
temporal cortex as a result of the single input spike.
Figure 4.3: Simulation result for Figure 4.2. One input spike can trigger persistent ring in the
prefrontal cortex due to a closed-loop network
4.5 Mutually Inhibitory Neurons to Generate Persistent Firing Representing
Finite-state Memory
In our circuit, there are two groups of neurons involved in a mutually-inhibitory network.
Neurons inside one group form a loop and excite each other and inhibit the neurons in the other
group. If a neuron inside one of the groups receives enough excitation to re, it will start continuous
ring due to the closed loop ring while inhibiting the neurons in the other group. This means that
once a neuron starts ring in one of the groups, it will keep ring until a neuron in the other group
41
gets a strong excitation and res; then there will be a shift to a new state. Figure 4.4 shows our
bio-inspired circuit mimicking a mutual inhibition network with two neurons that mutually inhibit
each other, and Figure 4.5 shows the simulation result.
Figure 4.4: Our biomimetic circuit mimicking a mutual inhibition network. Here, both group 1
and group 2 are single neurons.
42
Figure 4.5: Simulation result for Figure 4.4. One input spike can trigger persistent ring in the
prefrontal cortex, and the ring can cease when an input spike arrives at the other group due to a
mutual-inhibition network.
43
4.5.1 A: Input 1 and input 2 get activated simultaneously
In the beginning neurons in both Group 1 and Group 2 receive external excitation (In) but
due to mutual inhibition they do not re and ignore the external excitation. When Input 1 and
Input 2 both get activated, both neurons re but they cannot continue ring for they inhibit each
other.
4.5.2 B: Input 1 get activated
Input 1 excites a neuron in Group 1 and the neuron in Group 1 starts ring and continues
ring due to signal In.
4.5.3 C: Input 2 gets activated
Input 2 excites a neuron in Group 2 and then the neuron in Group 2 starts ring and neuron
in Group 1 stops because it is now inhibited by the ring neuron in Group 2. Therefore, mutual
inhibition is the basis of winner take all behavior.
4.6 Short-term Memory with Appearance and Location Information
Figure 4.6 shows the block diagram of our neuromorphic circuit. It has two neurons that
mutually inhibit each other, H for hollow circle and S for solid circle. Both H and S detection
neurons have four synapses. 3 synapses are shown in the gure. The AP input to the fourth
synapse of neurons H and S is grounded for these experiments.
Location information is retained by the network of neurons. For any possibility of location
we have a network of 3 neurons in three lobes of the cortex. Two PSP inputs to neurons that
represent neurons in parietal and Inferior temporal lobes are connected to a Gaussian noise source
with zero mean and 8 mV standard deviation. This noise can break continuous ring which mimics
forgetting in short-term memory due to interference [33]. Figure 4.7 shows the simulation results.
44
Figure 4.6: Short-term memory neural circuits with appearance and location information.
45
Figure 4.7: Location and appearance information in short-term memory. The circuit rst is sig-
nalled a hollow circle on the left and remembers it, then it is signalled a solid circle on the right
and remembers it. The circuit remembers the location information for tens of nanoseconds. When
the hollow neuron and the left neuron re simultaneously, the circuit remembers that the object is
a hollow circle and it is on the left. Firing together binds these two types of information.
46
4.6.1 Circuit is signalled image number 1 shown in Figure 4.1, a hollow circle on
the left
We activate signals Hollow and Left in our circuit. The corresponding appearance (H) and
location (L) neurons start ring and continue. Neuron L stops ring after some time due to noise
in corresponding neurons in the Parietal and Inferior Temporal lobes, which mimics the forgetting
eect.
4.6.2 Circuit is signalled image number 6 shown in Figure 4.1, a solid circle on
right
We activate signals Solid and Right in our circuit. There is shift from H to S, and S neuron
continues ring by the mutual inhibition mechanism that was explained. Also Neuron R starts
ring due to network inputs signaling an object on the right.
Chapter 5
A Bio-inspired Electronic Mechanism for Unsupervised Learning using
Structural Plasticity
5.1 Abstract
Learning in the human brain is mediated by dierent forms of plasticity [34]. The circuit
processing spatiotemporal inputs we demonstrate here highlights the mechanisms and role of struc-
tural plasticity. In many neuromorphic systems, synaptic plasticity is used for learning, memory
and information processing. In these systems, neurons are fully or sparsely connected and synaptic
connections between neurons become stronger or weaker during the learning process. However, in
living organisms, structural plasticity plays a role in learning as well. To demonstrate the utility of
structural plasticity, our example circuit autonomously learns spatiotemporal patterns with struc-
tural plasticity rather than synaptic plasticity, learning by long-term changes in neural pathways
rather than by updating synaptic weights. Our neuromorphic system has a completely non Von-
Neumann architecture; it does not have separate units for memory and processing. Instead, its
memory and processing capability emerge from the way its neurons are connected. Finally, inputs
do not have preconceived meaning and therefore the circuit learns to recognize repeated input pat-
terns in an unsupervised mode. Incorporating both structural plasticity and synaptic plasticity into
individual neurons is the ultimate goal of the BioRC project.
48
5.2 Introduction
In order to mimic the human brain, as envisioned in future applications like autonomous
vehicles, electronic neural networks must learn without supervision. In many existing articial
neural networks, neurons can be fully or sparsely connected and synaptic weights get updated
by learning rules like the Hebbian Rule [35] or STDP [36]. However, long-term memory in the
biological brain is also mediated by structural plasticity and axonal growth [34, 4, 37, 38]. While
neuromorphic circuits that learn can be constructed by varying synapse weights down to zero, vir-
tually disconnecting them, biological structural plasticity is resource preserving in that unnecessary
connections are pruned heavily during development and selectively thereafter. New connections
are added during learning as required. Even so, the density of synapses per neuron in the cortex
is signicant, with estimates of 10,000-30,000 synapses per neuron. If neuromorphic circuits are
ever to model learning and intelligence, it is not clear how dense in synapses each neuron must
be. With structural plasticity mechanisms, fewer synapses can be present at a given execution
time since connectivity can change and evolve as learning occurs. Most of the biological brain
is white matter (60%), which is composed of long nerve bers. This means if we could create a
neuromorphic circuit with the scale of the actual brain, implementing the interconnections would
be challenging if all the presynaptic neurons were connected to corresponding postsynaptic neurons
while learning was mediated by tuning strengths of the synapses. Here, we suggest an approach
that may alleviate this issue. In our work, a presynaptic neuron gets connected to a postsynaptic
neuron if it exhibits enough activity. At a large scale, this approach can save interconnections and
increase the feasibility of articial neural circuits that mimic the brain.
1
In our approach, when
there are many presynaptic neurons that can be selectively connected to postsynaptic neurons, only
a few pairs of synapses on the postsynaptic side will be enough as shown in Figure 5.1.
1
Exactly how such connectivity will change will be technology dependent, with hybrid CMOS-nanotechnology
circuits likely to provide some capabilities in the near future.
49
Figure 5.1: Many presynaptic neurons that are connected to a postsynaptic neuron via only two
excitatory synapses
50
The structural learning model we present here is a step in implementation of structural
plasticity in learning circuits, building on the research that demonstrated electronic remapping
of neural circuits in the barrel cortex of the rat when a whisker is clipped [39], [40]. While that
circuit demonstrated barrel cortex remapping that occurs due to absence of inputs, our circuit
demonstrates neural circuit conguration to recognize a frequent pattern as a pattern of interest.
In order to simplify our demonstration to focus on structural plasticity, we do not include synaptic
plasticity, dendritic computations, dendritic plasticity or other mechanisms we have implemented
in our previous experiments.
Our demonstration circuit consists of biomimetic neural portions and bio-inspired builders
2
. Neurons in our circuit, similar to their biological counterparts, are excitable and, if they receive
enough excitation, they produce an action potential (AP), i.e. the neurons re. On the other
hand, builders in our circuit do not re but they recongure the connections between neurons and
therefore mediate learning. Our circuit has a hierarchical architecture with bottom-up processing.
Its learning capability emerges from the way neurons from dierent stages are connected to each
other. The circuit is provided dierent unlabeled spatiotemporal patterns and after some time if
a pattern is input frequently, it can learn to recognize the pattern as a familiar pattern. Inputs
to the circuit are unlabeled (without preconceived meaning), there is no reward or punishment
3
and therefore the circuit learns in an unsupervised mode. A neural network can be fed by dierent
spatiotemporal input patterns. The number of inputs and available time can be exploited to
generate dierent patterns.
2
There is evidence that astrocytes play a role in the builder circuit.
3
Other learning in response to dopamine reward will be demonstrated in future circuits.
51
Figure 5.2 shows four combinations of input spikes with two inputs. Case 1 is when IN1 is
a single spike in time and in space. Case 2 is when IN1 is a single spike but simultaneous with
another spike in IN2. Case 3 is when IN1 is two consecutive spikes while IN2 is silent. Finally, Case
4 is when IN1 is two consecutive spikes and IN2 contains a spike simultaneous with the rst spike
in IN1. Case 1 and case 2 are similar temporally and dierent spatially. Cases 1 and 3 are similar
spatially and dierent temporally. Case 1 is dierent from case 4 both spatially and temporally. The
two inputs ( IN1, IN2) could be used to generate more patterns, but to simplify the experimental
setup, our circuit can learn to recognize one of the four patterns described here. Our approach is
Figure 5.2: Four dierent spatiotemporal patterns that can be learned
to build a general recognition circuit that learns to recognize two spatio-temporal inputs that can
arrive in two time periods that are closely spaced. We have designed builder circuits that generate
control signals to congure the recognition circuit (learning) to respond to a familiar pattern (one
that has occurred frequently in the recent past), using our library of neural compartments and
our builder circuits. However, the builder circuits are not tuned to specic patterns; each builder
circuit just connects two neurons based on the activity of the presynaptic neuron. We have just
included builder circuits that are presynaptic to neurons temporal and spatial. Ideally, the neural
circuits supplying these neurons would have builder circuits attached to other neurons to congure
recognition of dierent patterns. Here we have supplied the choices of patterns as hardwired circuits
52
to simplify the demonstration.
5.3 Builder Circuits
Builder Compartments are used to connect presynaptic and postsynaptic neurons. Figure
5.3 shows a builder compartment circuit at the transistor level. The input to this circuit is an
Action Potential of a neuron. For example, if S is an action potential signal, each time S res,
transistor 1 turns on and the current mirror (transistors 2 and 4) will charge up the acts node. If S
res enough times, in our case three times, node acts will go above the switching threshold of the
rst inverter, sp will go low and as a consequence sn will go high. If S becomes silent, node acts
will be discharged over time through transistor 3 and sn will go low again. The NMOS threshold
voltage for our technology (CMOS TSMC 45nm) is 0.469. Therefore, by setting forget to a voltage
less than that, transistor 3 will be in the subthreshold region and the delay for sn to fall can be
adjustable.
Figure 5.3: The Builder circuit at the transistor level. There are four instances of this circuit in
the nal learning circuit for neurons S, C, O, and T, to be described later.
53
Figure 5.4 shows the time sn can stay high as a function of signal forget. For our experiments
here, we set the forget signal to 0.2 V in all builder circuits. This means after the circuit learns a
pattern, it forgets after 69 nanoseconds. Setting a smaller voltage for forget will result in a longer
time to forget. Forgetting can occur over a much longer time period if leakage current is used to
discharge the node acts instead of a subthreshold transistor.
Figure 5.4: The time that sn stays high as a function of the forget signal.
5.4 Temporal tolerance
As we had said, in our experimental circuit, a neuron res if it receives two Action Potentials
that are connected to its excitatory synapses at the same time or approximately at the same time.
Figure 5.5 shows that temporal tolerance for the delay between two inputs is 0.15 nanoseconds.
When delay between IN1 and IN2 is 0.2 nanoseconds, summed PSP of the postsynaptic neuron
cannot go above the ring threshold (260 mV) and N1 does not re. The PSPs shown here have
faster rise and fall times to demonstrate temporal tolerance/intolerance. In practice, the PSPs
could be slowed to create more temporal tolerance that mimics biological neurons.
54
Figure 5.5: Temporal tolerance between two input spikes
55
5.5 Learning temporal patterns
Figure 5.6 shows the block diagram for the temporal learner network that learns temporal
patterns of input IN1. A chain of three neurons, N1, N2 and N3, recognizes a sequence of spikes on
IN1. Neurons S and C determine if there is a single spike or two consecutive spikes. Transmission
gates connect the most frequent spiking pattern to the Temporal neuron that indicates a familiar
temporal pattern has occurred. If a neuron receives EPSP of two synapses approximately at the
same time it will re unless inhibited, so when IN1 arrives at two synapses, N1, N2 and N3 re
consecutively. Delay from one neuron to the next neuron is constant and is equal to 0.7 nanoseconds
(N1 to N2, and N2 to N3). This helps the circuit to measure time.
Consecutive spikes that arrive too quickly or too slowly will not be recognized as a pattern
of two spikes, but within a window, early and late spikes will be tolerated since they can raise the
summed PSP on the postsynaptic neuron above the ring threshold. If IN1 is a single spike, after
some time N3 will re while N1 is silent. Therefore, neuron S will re only if IN1 is a single spike
in a time window. In other words, neuron S predicts that the input is single spike and res if its
predication comes true. This behavior emerges from its connections to low-level neurons (N1, N2,
and N3). If IN1 has a consecutive temporal pattern and the delay between two spikes is equal to
1.25 nanoseconds (approximately and not exactly delay of two neurons) then there will be a time
that N1 and N3 will re approximately simultaneously and will cause neuron C to re. Duration of
inhibitory PSP is longer than excitatory PSP, and neuron S will not re for two consecutive spikes.
In spite of the state of neuron S, neuron C res when IN1 has a consecutive 2-spike temporal
pattern.
Our neurons cannot re more than once in a time span of 1.25 nanoseconds. This determines
the temporal resolution of the circuit. Because of the builder circuit, if any of these neurons ( S or
C) shows enough activity the corresponding act signal voltage will increase, and as a consequence
at some point (after 3 spikes) the corresponding transmission gate will turn on to connect the S or
C neuron to the temporal neuron. For example, if the circuit keeps receiving a single spike, with
56
Figure 5.6: Temporal Learner. The part of the circuit that learns temporal patterns generated by
IN1
57
no consecutive spike within a time period, at some point neuron S will be connected to neuron
Temporal. Then, each time the circuit receives a single spike, neuron Temporal will re to state
that the input has a familiar temporal pattern. Therefore, in terms of the hierarchy of the circuit,
N1, N2 and N3 are low-level neurons to measure temporal spiking, S and C are mid-level neurons
that detect temporal patterns and Temporal is a high-level neuron, ring when a familiar pattern
is encountered. This serial connection circuit to recognize a temporal pattern has a biological basis
[41, 24] and has been exploited in some research [42]. A similar circuit could detect temporal ring
in IN2.
5.6 Learning spatial patterns
Figure 5.7 shows the circuit that learns spatial patterns. IN1 and IN2 are two inputs to
the circuit. IN1 and IN2 are connected to N1 and N2 by two excitatory synapses respectively.
Therefore, when IN1 res, N1 res and when IN2 res, N2 res. Neuron O is connected to N1
by two excitatory synapses and to N2 by one inhibitory synapse. This means that if IN1 res
and IN2 does not re, neuron O will re. Neuron T is connected to N1 and N2 by one excitatory
synapse each. This means that neuron T res only when both IN1 and IN2 are active. Like the
temporal circuit, if any of the neuron O or neuron T shows enough activity it will be connected to
the Spatial neuron by a builder circuit via the transmission gates. Therefore, when neuron spatial
res, it means it has detected a familiar spatial pattern.
58
Figure 5.7: Spatial Learner. The part of the circuit that learns spatial patterns generated by IN1
and IN2
59
5.7 Learning spatiotemporal patterns
Figure 5.8 shows the nal circuit that learns four dierent spatiotemporal patterns. The
circuit learns in an unsupervised mode because inputs do not have labels and the circuit itself
classies inputs as familiar and unfamiliar spatiotemporal patterns without a priori information.
Like the biological brain it is event-driven and learning is mediated by a mechanism inspired by
biological structural plasticity. Inputs to the circuit are IN1 and IN2 and the nal output of the
circuit is the SpaTemp neuron that binds temporal and spatial parts of the circuit. As explained
in the introduction, IN1 and IN2 can generate four spatiotemporal patterns; the circuit can learn
these patterns and later SpaTemp will re only if the input has the same spatiotemporal pattern.
The temporal block of the circuit learns the temporal pattern of the input and later checks if the
input has the same pattern. The spatial block behaves similarly in the spatial dimension. The
temporal block needs time duration (which is equal to the delay of two neurons) to wait and verify
consistency of the input but the spatial block does not need time. Therefore, there is a block in
the circuit (neurons ND1 and ND2) to delay output of the spatial block and synchronize it with
the temporal block. A dierent synchronization block would be required if the spatial pattern to
be learned occurred during the second time window of the input. A completely general learning
circuit would have more complex spatial and temporal learning blocks.
SpaTemp neuron is connected to the temporal block and the delayed spatial block with only
one excitatory synapse each. Therefore, it will re only if both temporal and spatial blocks re
simultaneously.
60
Figure 5.8: Block diagram of our nal circuit that learns spatiotemporal patterns
61
Figures 5.9 and 5.10 show simulation results of the circuit shown in Figure 5.8 in four dier-
ent cases that are described in the introduction. In all cases we train the circuit with one of the
four possible spatiotemporal patterns; after three events the circuit identies that particular spa-
tiotemporal pattern as a familiar pattern and SpaTemp res only for that specic pattern. Figure
5.9 shows the spiking simulation results for Temporal, Spatial D (ND2), and SpaTemp. Figure 5.10
shows the learning mechanisms that track neurons S, C, O and T.
62
Figure 5.9: The circuit can learn four dierent spatiotemporal patterns
63
Figure 5.10: Neurons S, C, O, T and their corresponding builder compartments
64
5.7.1 Case 1, ( upper left in Figures 5.9 and 5.10)
We feed the circuit with four events that have single spikes in time and in space. At the fourth
event, SpaTemp neuron res. This means that the circuit has learned this pattern. The fth event
is a spike that is single in time but simultaneous with another spike in IN2. Only Temporal neuron
res and the recognition neuron, SpaTemp is silent. The sixth event is two consecutive spikes in
IN1 while IN2 is silent. SpatialD res twice to indicate that it has detected two spikes single in
space but Temporal neuron does not re. At the seventh event, the input is two consecutive spikes
in IN1 while IN2 res. Spatial and Temporal both are silent for this event. Every time that neurons
S and O re due to this spatiotemporal pattern, their corresponding act signals increase and after
3 spikes sn and on are strong enough to turn on the corresponding transmission gates to mimic
structural plasticity, as shown in Figure 5.10 on the upper left.
5.7.2 Case 2, ( upper right in Figures 5.9 and 5.10)
We feed the circuit with four events that have single spikes in IN1 simultaneous with a spike
in IN2. At the fourth event, SpaTemp neuron res. This means that the circuit has learned this
spatiotemporal pattern. The fth event is a spike that is single in time and in space. Only Temporal
neuron res and the recognition neuron, SpaTemp is silent because it is dierent spatially. The
sixth event is two consecutive spikes in IN1 while IN2 is silent. This input pattern is not consistent
temporally nor spatially with the learned pattern. Neither the Temporal neuron nor the Spatial
neuron re. At the seventh event, the input is two consecutive spikes in IN1 while IN2 res. It is
consistent spatially but not temporally and neuron SpaTemp does not re. Every time that neurons
S and T re due to this spatiotemporal pattern, their corresponding act signals increase and after
3 spikes sn and tn are high enough to turn on the corresponding transmission gates (Figure 5.10
upper right).
65
5.7.3 Case 3, ( lower left in Figures 5.9 and 5.10)
The circuit is fed with two consecutive spikes in IN1 while IN2 is silent. At the fourth event,
SpaTemp neuron res. The fth event is a spike that is single in time and in space. Only Spatial
neuron res and the recognition neuron, SpaTemp is silent because it is dierent temporally. The
sixth event is a single spike in IN1 and in IN2. This pattern spatially and temporally is dierent
from what had been learned. None of the Temporal neuron nor Spatial neuron re. At the seventh
event, the input is two consecutive spikes in IN1 while IN2 res. It is consistent temporally but
not spatially and neuron SpaTemp does not re. As shown in Figure 5.10, since neurons C and O
show activity their corresponding transmission gates turn on (Figure 5.10 lower left).
5.7.4 Case 4, ( lower right in Figures 5.9 and 5.10)
The circuit is fed with two consecutive spikes in IN1 while IN2 res. At the fourth event,
SpaTemp neuron res to indicate that the input pattern is familiar. The fth event is a spike that
is single in time and in space, which is temporally and spatially dierent from the learned pattern.
The sixth event is a single spike in IN1 and in IN2, which is temporally dierent and spatially
consistent. At the seventh event, the input is two consecutive spikes in IN1 while IN2 is silent. It is
consistent temporally but not spatially therefore neuron Temporal res, neuron SpatialD does not
re and, as a consequence, neuron SpaTemp does not re. Every time that neurons C and T re
due to this spatiotemporal pattern, their corresponding act signals increase; after 3 spikes cn and
tn go high enough to turn on the corresponding transmission gates to mimic structural plasticity
as shown in Figure 5.10 lower right.
66
5.8 Learning gets interrupted by a wrong input
We train the circuit with case 1 spatiotemporal pattern by inputting the pattern twice (a
single spike in IN1 while IN2 is silent). After two events with case 1 input, at 21 nanoseconds, the
circuit receives a wrong input pattern (case 4) and after that we continue feeding the circuit with
case 1. Figures 5.11 and 5.12 show that this wrong pattern does not aect the learning procedure
and neuron SpaTemp res for case 1. In other words, the learning procedure is fault tolerant.
67
Figure 5.11: Learning case 1 gets interrupted by a wrong input pattern.
68
Figure 5.12: Neurons S and O and their corresponding builder compartments for the simulation
shown in Figure 5.11
69
5.9 Input is in transition between two patterns
Figures 5.13 and 5.14 show three dierent conditions when frequent patterns are in transition.
5.9.1 The circuit is fed by pattern shown in case 3
The circuit is fed by case 3 and as a consequence neuron C res and actc goes high and cn
turns on and the circuit recognizes this pattern by ring SpaTemp neuron.
5.9.2 The input is shifted to case 1
Now, input is case 1 and neuron S res and acts goes high and sn turns on. Since neuron C
does not show activity actc gradually goes down.
5.9.3 Circuit gets case 3 input pattern again
The Circuit receives case 3 which had learned previously and was going to forget, but since
cn is still high, neuron SpaTemp res and the circuit still recalls the previously learned pattern.
70
Figure 5.13: Input shifts from case 3 to case 1.
71
Figure 5.14: Neurons C and S and their corresponding builder compartments
Chapter 6
An Analog Neural Network that Learns Sudoku-Like Puzzle Rules
6.1 Abstract
We have designed a fully-connected neural network implemented as an analog circuit consist-
ing of 8 neurons and 64 synapses that can learn rules of 2-by-2 Sudoku or Sudoku-like puzzles and
then can solve them. In this circuit, learning is mediated by giving a dopamine reward signal to
correct actions, which has a biological basis and is known as reinforcement learning [43]. Regular
architecture of the circuit helps it to generalize learned rules and as a consequence expedites the
learning procedure. The circuit receives dopamine externally from a trainer circuit and therefore
learning is supervised. Injected dopamine will strengthen some of the existing excitatory synapses
similar to biological synaptic plasticity [35]. Previously, we had designed a bio-inspired circuit that
learns spatiotemporal patterns in an unsupervised mode using structural plasticity. In the human
brain, learning is mediated by both types of plasticity [34].The long-term goal of our research group
is combining these two types of plasticity (synaptic and structural) to design a network with higher
level of learning capabilities and more complex cognitive skills to imitate the biological brain.
73
6.2 Introduction
We have designed a trainable and fully-connected neural network with 8 neurons and 64
excitatory synapses. Every neuron is connected to all other 7 neurons via 7 excitatory synapses
(one for each) that are normally silent but can be awakened. Also, every neuron is externally
connected to an excitation input via an excitatory synapse that is always on. Therefore, a trainer
circuit can force any of these neurons to re to set initial conditions for each game.
74
Figure 6.1 shows the architecture of the circuit. Blue circles represent neurons and black
lines connecting them represent two synaptic connections. Black lines that are connected to each
neuron individually and externally represent a synaptic connection to the neuron for the external
excitation. For example, the bidirectional line between neurons A1 and A2 means that there is
one synapse that connects the output of neuron A1 to the input of neuron A2 and there is another
synapse that connects the output of neuron A2 to the input of neuron A1. Both neurons can be
excited externally with one synapse each.
Figure 6.1: Fully connected neural network with normally-o (silent) synapses. Every neuron can
be excited externally with normally-on synapses.
75
Figure 6.2 shows our puzzle with four squares, each of which is labeled with one of the letters
A, B, C and D. There are two neurons in each square with index 1 or 2. These neurons re to
show the content of that square resembling one-hot encoding. For example, if A1 res, it means
that there is 1 in square A and if D2 res, it means there is 2 in square D. We will show how this
network can learn rules of 2-by-2 Sudoku or Sudoku like games with a bio-inspired mechanism and
then can solve them. Also, we will show how regularity in the architecture of a neural network can
expedite the learning procedure.
Figure 6.2: A 2-by-2 Sudoku-like puzzle
76
6.3 The circuit at the transistor level
6.3.1 Synapse
To illustrate the complexity of our circuits, Figure 6.3 shows our synapse at the transistor level
which previously had been designed with CNT transistors [3], but here it is designed with CMOS
TSMC45nm technology. Variations of this synapse have been published for some time [44]. Input
to this synapse is the Action Potential of the pre-synaptic neuron and the output of the synapse
is an Excitatory Post-synaptic Potential (EPSP). The Neurotransmitter Concentration knob on
the synapse modulates the amplitude of the EPSP output of the circuit; when it is grounded the
synapse will be o and when it is connected to VDD the synapse will be fully on. There are 64 of
these synapses in the circuit including 8 synapses to excite every neuron externally and 56 synapses
that connect every neuron to all other neurons.
Figure 6.3: An excitatory synapse adapted from [3]
77
6.3.2 The neuron
A neuron in our circuit, as shown in Figure 6.4, consists of two components, the Dendritic
Arbor and the Axon Hillock. Inputs to the neuron are 8 EPSPs coming from the other 7 neurons
and one external excitation input. Output of the neuron is an Action Potential.
Figure 6.4: A neuron in our circuit consists of the dendritic arbor and the axon hillock.
78
Figure 6.5 shows the 8-input Dendritic Arbor of the neuron. The Dendritic Arbor is made
of seven 2-input adders that previously had been designed and used by others [45], [42]. The Axon
Hillock that generates an Action Potential has also been designed previously and has been used in
some publications [44], [42].
Figure 6.5: 8-input Dendritic Arbor
79
6.3.3 Circuits for synaptic plasticity and reward-based learning
Figure 6.6 shows the block diagram of the synaptic plasticity and learning part of the circuit.
The circuit monitors ring patterns of the neurons, checks which patterns receive dopamine, and
based on ring patterns sets Neurotransmitter Concentrations for the appropriate synapses. The
circuit consists of three types of components which are as follows:
80
Figure 6.6: Block diagram of synaptic plasticity and learning mechanism
81
6.3.3.1 Monostable Generator Circuit (MGC)
Figure 6.7 shows our bioinspired Monostable Generator Circuits (MGCs) for all neurons.
As a case in point, input to the MGC circuit of neuron A1 is Action Potential of neuron A1 and
output is mA1. If neuron A1 is silent for awhile, node P will be discharged through transistor 4
and mA1 will be high because of the inverter. If A1 res, transistor 2 turns on and its current will
be mirrored via transistors 1 and 3 and the current will charge up node P and as a consequence
node mA1 will go low. After sometime (determined by bias2), node P will be discharged again and
node mA1 will go high again. Therefore, node mA1 is normally high and generates a negative logic
pulse when the neuron A1 res. Duration of this pulse determines temporal tolerance for detecting
synchrony of ring of the neurons in the presence of dopamine. There is one MGC for each neuron
and as a consequence 8 MGCs in total.
Figure 6.7: Monostable Generator Circuits (MGCs) that synchronize neural ring
82
6.3.3.2 Dopamine Receptor Circuit (DRC)
Figure 6.8 shows the Dopamine Receptor Circuits (DRCs). These circuits detect synchronous
ring of neurons leading to a correct puzzle solution. As a case in point, in NC 3, Node x3 is
normally discharged through transistor 10 and, because of the inverter, x3bar is normally high. If
A1 res simultaneously with B2, A2 res simultaneously with B1, C1 res simultaneously with
D2 or C2 res simultaneously with D1, and DOP signal is high (dopamine is given to the circuit)
at least for one of these events, then node x3 will be charged up to VDD via one of the branches
(transistors 1-2, 3-4, 5-6, or 7-8) along transistor 9 and x3bar will go low. After some time x3 and
x3bar will go low and high respectively; this time is determined by bias1. There are 7 DRCs in the
nal circuit that will be explained in the next sections. Dopamine Receptor Circuit for NC 1 is not
shown in the gure.
83
Figure 6.8: Dopamine Receptor Circuits (DRCs) that detect correct ring
84
6.3.3.3 Latch for Neurotransmitter Concentration (LNC)
There is one Latch for each Neurotransmitter Concentration value (Figure 6.9). These latches
maintain learned synaptic strengths. NC x nodes can be reset with clear signal and then stay low.
The x and xbar signals from corresponding DRCs are normally low and high respectively and as a
consequence the latches are normally in holding mode. If x and xbar signals from the corresponding
DRC go high and low respectively, each latch will be in writing mode and VDD will be written to
NC x. Then x and xbar signals will return to their normal points and latch will hold the written
VDD.
Figure 6.9: Latches for Neurotransmitter Concentrations (LNCs) to maintain learned synaptic
strengths
85
6.4 Eight types of synapses used in the regular architecture
Figure 6.10: Inputs to neuron A1
Figure 6.10 shows 8 synaptic connections to neuron A1 coming from other neurons and
external excitation. All the 64 synapses in the circuit can be classied in one of these 8 types as
follow:
Type 1: Synapses that connect two neurons inside the same square which are A1-A2, B1-
B2, C1-C2, D1-D2. These synapses are controlled by NC 1. Neurons in the same square are not
supposed to re simultaneously. In other words, content of a square cannot be 1 and 2 at the same
time. Therefore, we do not set NC 1 to VDD in any of the games.
Type 2: Synapses that connect a neuron to a neuron in its horizontal adjacent square with
the same number which are A1-B1, A2-B2, C1-D1, C2-D2.These synapses are controlled by NC 2.
Type 3: Synapses that connect a neuron to a neuron in its horizontal adjacent square with
the dierent number which are A1-B2, A2-B1, C1-D2, C2-D1.These synapses are controlled by
NC 3.
Type 4: Synapses that connect a neuron to a neuron in its vertical adjacent square with the
same number which are A1-C1, A2-C2, B1-D1, B2-D2.These synapses are controlled by NC 4.
Type 5: Synapses that connect a neuron to a neuron in its vertical adjacent square with the
86
dierent number which are A1-C2, A2-C1, B1-D2, B2-D1.These synapses are controlled by NC 5.
Type 6: Synapses that connect a neuron to a neuron in its diagonal adjacent square with the
same number which are A1-D1, A2-D2, B1-C1, B2-C2.These synapses are controlled by NC 6.
Type 7: Synapses that connect a neuron to a neuron in its diagonal adjacent square with the
dierent number which are A1-D2, A2-D1, B1-C2, B2-C1.These synapses are controlled by NC 7.
Type 8: Synapses that connect external excitation to corresponding neurons which are A1,
A2, B1, B2, C1, C2, D1, D2.These synapses are always on (i.e. Their NC knobs are hardwired to
VDD.), so the trainer can always cause any of the neurons to re to train the network or to initiate
the process to solve a puzzle after it learns the rules of the games.
6.5 Training type 3 synapses
Synapses become trained (strengthened) when correct answers are provided by the neural
network to the trainer circuit. A simple experiment shows how the NC signals that train the
synapses are generated. Figure 6.11 shows the simulation result for setting NC 3, and the results
of the training that sets NC 3.
87
Figure 6.11: Setting NC 3
88
At t=10 nanoseconds, both A1 and B2 neurons are forced to re externally by the trainer
and the circuit receives dopamine for this ring pattern. This dopamine shot sets NC 3 to VDD.
At t=20 nanoseconds, we assume the circuit has been trained. Now we start a game session.
Only neuron A1 is forced to re but the synapse connecting neuron A1 to neuron B2 has been
turned on (EPSP A1 B2) by NC 3 and neuron B2 res due to ring of neuron A1. This shows the
direct eect of the previous training.
At t=30 nanoseconds, we continue the game session. Neuron D2 is externally forced to re
and since all type 3 synapses are on, it causes its horizontal adjacent neuron with dierent number
(C1) to re. This means the circuit can generalize rules.
89
6.6 The network can get trained and then can solve a 2-by-2 Sudoku
There are eight dierent ways of beginning a 2-by-2 Sudoku by setting one of the squares to
either 1 or 2 and there are two possible solutions as shown in Figure 6.12. According to the rules of
Sudoku, in every row and in every column all numbers (here 1 and 2) should repeat exactly once.
Therefore, NC 3, NC 5 and NC 6 should set to VDD and all other NCs should stay low (o). This
is because numbers in the same row and column are dierent and on the same diagonal are equal.
Figure 6.12: Any solution to a 2-by-2 Sudoku will be one of these two possibilities.
90
Figure 6.13 shows how the network can learn to solve the Sudoku by two shots of dopamine.
We train the network with three dierent inputs at 10, 20 and 30 ns. At t=10 nanoseconds, both
A1 and B2 neurons are forced to re and the circuit gets dopamine shot for this ring pattern. As
a consequence, NC 3 goes high. The circuit learns synapses between neurons in the same row with
dierent numbers should turn on. At t=20 nanoseconds, neurons C1 and D1 are forced to re and
they cause D2 and C2 to re respectively because of what had been learned previously with NC 3
high. Since there is no reward for this ring pattern, NC 2 does not go high. At t=30 nanoseconds,
both A2 and C1 neurons are forced to re and they cause neurons B1 and D2 to re respectively.
The trainer gives dopamine shot for this ring pattern. As a consequence, NC 5 and NC 6 go
high.This means the circuit has learned that synapses between neurons in the same column with
dierent numbers and synapses between diagonal neurons with the same number should turn on.
At this point, the circuit has learned all the rules of the Sudoku game and can solve it. At t=40
nanoseconds, we begin a new game with the rules learned from training. Neuron D2 is forced to
re. Solution to this input is solution b shown in Figure 6.12 and therefore neurons A2, B1 and
C1 re in response. At t=50 nanoseconds, we begin a new game and neuron D1 is forced to re.
Solution to this input is solution a and therefore neurons A1, B2 and C2 re in response. At t=60
nanoseconds, the trainer activate clear signal and all NC signals go low. This means the circuit
forgets the rules in response to clear signal and as a consequence at t=70 nanoseconds, when neuron
A1 is forced to re, the circuit does not show response.
91
Figure 6.13: A supervisor is training the network by giving a dopamine reward.
92
In Figure 6.14 the network has learned rules of Sudoku. NC 3, NC 5, andNC 6 have been set
to VDD. NC 2, NC 4, and NC 7 are low. NC 1 (not shown in the gure) is also low, because we do
not want to have 1 and 2 in the same square at the same time. At t=40 nanoseconds, A1 is forced
to re. Response of the circuit is solution a. At t=50 nanoseconds, A2 is forced to re. Response
of the circuit is solution b. At t=60 nanoseconds, B1 is forced to re. Response of the circuit is
solution b. At t=70 nanoseconds, B2 is forced to re. Response of the circuit is solution a. At
t=80 nanoseconds, C1 is forced to re. Response of the circuit is solution b. At t=90 nanoseconds,
C2 is forced to re. Response of the circuit is solution a. At t=100 nanoseconds, D1 is forced to
re. Response of the circuit is solution a. At t=110 nanoseconds, D2 is forced to re. Response of
the circuit is solution b.
93
Figure 6.14: Network is ready to solve 8 dierent possibilities.
94
6.7 Pseudo sudoku type I
Now, we dene a new game in which numbers in the same row are equal but in the same
column are dierent. We use a dierent trainer circuit for this game. There are two possible
solutions to this game that are shown in Figure 6.15. Like Sudoku, the network can learn the rules
by two dopamine shots.
Figure 6.15: Pseudo Sudoku type I with two possible solutions
95
Figure 6.16 shows how the network can learn rules of Pseudo Sudoku type I by two shots of
dopamine. At t=10 nanoseconds, both of A1 and B1 neurons are forced to re and the circuit gets
dopamineshot for this ring pattern. As a consequence,NC 2 goes high. The circuit learns synapses
between neurons in the same row with the same numbers should turn on. At t=30 nanoseconds,
both of A2 and C1 neurons are forced to re and they cause neurons B2 and D1 to re respectively.
The trainer gives dopamine shot for this ring pattern. As a consequence, NC 5 and NC 7 go
high. This is because numbers on the same column and on the same diagonal are dierent in this
new game. At this point, the network has learned the rules of the new game. Therefore, at t=40
nanoseconds, when neuron A1 is forced to re, neurons B1, C2, and D2 re as response which is
Solution C.
96
Figure 6.16: Network is learning Pseudo Sudoku type I.
97
Figure 6.17 shows the network has learned rules of Pseudo Sudoku type I. NC 2, NC 5, and
NC 7 have been set to VDD. NC 3, NC 4, and NC 6 are low. NC 1 (not shown in the gure) is
also low, because we do not want to have 1 and 2 in the same square at the same time. At t=40
nanoseconds, A1 is forced to re. Response of the circuit is solution c. At t=50 nanoseconds, A2
is forced to re. Response of the circuit is solution d. At t=60 nanoseconds, B1 is forced to re.
Response of the circuit is solution c. At t=70 nanoseconds, B2 is forced to re. Response of the
circuit is solution d. At t=80 nanoseconds, C1 is forced to re. Response of the circuit is solution
d. At t=90 nanoseconds, C2 is forced to re. Response of the circuit is solution c. At t=100
nanoseconds, D1 is forced to re. Response of the circuit is solution d. At t=110 nanoseconds, D2
is forced to re. Response of the circuit is solution c.
98
Figure 6.17: Network solves 8 dierent possibilities of Pseudo Sudoku type I.
99
6.8 Pseudo sudoku type II
In Pseudo Sudoku type II, numbers on the same row are dierent and numbers on the same
column are equal as shown in Figure 6.18.
Figure 6.18: Pseudo Sudoku type II with two possible solutions
100
Like previous games, trainer can train the network with two dopamine shots. In Figure 6.19,
A1 and C1 are forced to re and network gets dopamine. This sets NC 4 to VDD. Then A2 and B1
are forced to re and they cause neurons C2 andD1 to re respectively. Then circuit gets dopamine
and NC 3 and NC 7 get set to VDD. At this point the network knows how to play the game and
when A1 is forced to re, response of the circuit is solution e.
101
Figure 6.19: Network is learning Pseudo Sudoku type II.
102
Figure 6.20 shows how the network can solve eight dierent possibilities of the pseudo-sudoku
type II game. At 40, 50, 60, 70, 80, 90, 100, and 110 nanoseconds neurons A1, A2, B1, B2, C1,
C2 ,D1, and D2 are forced to re respectively and response of the network are solutions e, f, f, e,
e, f, f, and e respectively.
103
Figure 6.20: Network solves 8 dierent possibilities of Pseudo Sudoku type II.
Chapter 7
Conclusion and Future Research
In chapter 2, we have designed a circuit with a CNT synaptic layer and CMOS neurons
that mimics hyperactivity in the cortex linked to OCD, and simulations show how a gas-brake
mechanism controls the level of activity in the brain. This circuit simulation can be used to
probe further by strengthening and weakening synaptic strengths to demonstrate more complex
behaviors. Varying many more mechanisms in the neuromorphic circuits is possible, including ring
threshold, receptor availability, and neurotransmitter availability and reuptake rate, to demonstrate
additional physiological behaviors. Even though this circuit is oversimplied and regions in brain
are represented by only one neuron, it shows that healthy and unhealthy conditions in the brain
can be modeled by BioRC neuromorphic circuits.These kinds of circuits can be helpful in analyzing
psychological and neurological disorders.
The human brain is a prediction machine. It detects regularities in environmental stimuli and
stops responding to them. If the input is not what is expected, humans can typically comprehend
this dierence between reality and expectation. For example, if someone listens to his favorite music,
he can predict the next note or beat. Failure in predications or wrong responses can cause negative
symptoms or hallucination in schizophrenia. Research shows that alteratiosn in neural circuitry can
cause disinhibition in pyramidal neurons and hence can cause schizophrenic symptoms. In chapter
3, we have designed a circuit that mimics a small circuit in a healthy brain and schizophrenic brain
in response to stimuli. This circuit uses CNT transistors for the synaptic layer and TSMC 45nm
CMOS technology for neurons. Due to light weights CNT are better candidate for synapses that are
105
highly interconnected, while neurons need mature CMOS technology to creat complex structures to
add the PSPs and produce APs. There are emerging applications for this hybrid architecture [16]
and hybrid fabrication techniques [17]. The research described here is a beginning, a tentative step
towards modeling human behavior, both typical and atypical. While the networks here are vastly
simplied over human neural networks, some basic behavior concerning prediction and mistaken
perception has been demonstrated here. A long-term goal of this research is to build complex
behaviors that allow variation of neural parameters and comparison to biological observations.
In chapter 4, we have designed a neuromorphic circuit that mimics biological short-term
memory. This circuit retains location and appearance information with two mechanisms found in
biological short-term memory. These two biological mechanisms in the brain that help persistent
ring are exploited in the neuromorphic circuit to retain appearance and location information.
Appearance information is stored by mutually inhibitory neurons. Location information is stored
by a network of neurons in three lobes of the cortex. The circuit also mimics forgetting in short-
term memory due to noise in the neurons. The circuits we have shown here demonstrate the utility
of the BioRC project [13] neurons for modeling brain mechanisms across a neural network and
within neurons.
In chapter 5, we have designed a hybrid circuit (with CNT and CMOS transistors) that has
ring neurons and non-ring builder parts. The circuit can learn spatiotemporal patterns in an
unsupervised mode and learning is mediated by a bio-inspired structural plasticity mechanism.
There are two parallel pathways that independently learn temporal and spatial dimensions of the
spatiotemporal patterns. Each of these pathways consists of bottom-up serial and hierarchical
structure. These parallel and serial connections mimic the connections in the cereberal cortex that
play an important role in cognition [4]. While this simple example demonstrates a possible mech-
anism for learning patterns, in the future we plan to demonstrate how the example would scale
to recognize a practical pattern such as a ring tone. We also plan to demonstrate builder com-
partments throughout the pattern recognition neural network to recongure the entire recognition
circuitry as the frequent input pattern changes.
106
In chapter 6, we have designed a neural network that can learn in a supervised mode using
synaptic plasticity with a reward-based mechanism. Regular architecture of the neural network
helps it to learn the rules of 2-by-2 Sudoku or Sudoku like games with only two shots ofdopamine.
Neurons and synapses are bio-inspired and imitate their biological counterparts. The learning
mechanism is also bio-inspired. In our previous work, we had designed a circuit that learns in an
unsupervised mode using structural plasticity. Combining these mechanisms and scaling up to
more complex examples is the ultimate goal of our group. The neural network itself will increase
in size linearly as the problem size increases, and the training circuits will scale as well.
Bibliography
[1] Jonathan T Ting and Guoping Feng. \Glutamatergic synaptic dysfunction and obsessive
compulsive disorder". Current Chemical Genomics, Dec 2008.
[2] Francine M Benes. \Emerging principles of altered neural circuitry in schizophrenia ". Brain
Research Reviews, 31(23):251 { 269, 2000.
[3] J. Joshi, A.C. Parker, and Chih-Chieh Hsu. \A carbon nanotube cortical neuron with spike-
timing-dependent plasticity". In Engineering in Medicine and Biology Society, 2009. EMBC
2009. Annual International Conference of the IEEE, pages 1651{1654, Sept 2009.
[4] Eric R. Kandel, James H. Schwartz, Thomas M. Jessell, Steven A. Siegelbaum, and A. J.
Hudspeth. Principles of Neural Science. The McGraw-Hill Companies, 5th edition, 2012.
[5] Hubel D. H. and Wiesel T. N. \Receptive elds and functional architecture of monkey striate
cortex". The Journal of Physiology, March 1968.
[6] Nicholas J. Priebe and David Ferster. \Inhibition, spike threshold, and stimulus selectivity in
primary visual cortex". Neuron, February 2008.
[7] D. Neil and Shih-Chii Liu. \Minitaur, an event-driven fpga-based spiking network accelerator".
Very Large Scale Integration (VLSI) Systems, IEEE Transactions on, 22(12):2621{2628, Dec
2014.
[8] Kwabena Boahen. Neurogrid: A mixed-analog-digital multichip system for large-
scale neural simulations. http://web.stanford.edu/group/brainsinsilicon/documents/
BenjaminEtAlNeurogrid2014.pdf.
[9] A recongurable on-line learning spiking neuromorphic processor comprising 256 neurons and
128k synapses. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4413675/.
[10] Karl Friston. \The free-energy principle: a unied brain theory?". Nature Reviews
Neuroscince, February 2010.
[11] Guillermo Horga, Kelly C. Schatz, Anissa Abi-Dargham, and Bradley S. Peterson. \Decits
in predictive coding underlie hallucinations in schizophrenia". The Journal of Neuroscience,
34(24):8072{8082, 2014.
[12] Larry R. Squire, Darwin Berg, Floyd E. Bloom, Sascha du Lac, Anirvan Ghosh, and Nicholas C.
Spitzer. Fundamental Neuroscience. Academic Press, 4th edition, 2012.
108
[13] Alice Parker. The biorc project website. http://ceng.usc.edu/
~
parker/BioRC_research.
html.
[14] A. C. Parker, J. Joshi, Chih-Chieh Hsu, and N. A. D. Singh. "A carbon nanotube implementation
of temporal and spatial dendritic computations". In Circuits and Systems, 2008. MWSCAS
2008. 51st Midwest Symposium on, pages 818–821, Aug 2008.
[15] J. Joshi, Chih-Chieh Hsu, A. C. Parker, and P. Deshmukh. "A carbon nanotube cortical
neuron with excitatory and inhibitory dendritic computations". In Life Science Systems and
Applications Workshop, 2009. LiSSA 2009. IEEE/NIH, pages 133–136, April 2009.
[16] D. Akinwande, S. Yasuda, B. Paul, Shinobu Fujita, G. Close, and H.-S. P. Wong. "Monolithic
integration of CMOS VLSI and CNT for hybrid nanotechnology applications". In Solid-State
Device Research Conference, 2008. ESSDERC 2008. 38th European, pages 91–94, Sept 2008.
[17] Changxin Chen, Dong Xu, Eric Siu-Wai Kong, and Yafei Zhang. "Multichannel carbon-
nanotube FETs and complementary logic gates with nanowelded contacts". Electron Device
Letters, IEEE, 27(10):852–855, Oct 2006.
[18] Susanne E. Ahmari, Timothy Spellman, Neria L. Douglass, Mazen A. Kheirbek, H. Blair Simpson,
Karl Deisseroth, Joshua A. Gordon, and René Hen. "Repeated cortico-striatal stimulation
generates persistent OCD-like behavior". Science, 340(6137):1234–1239, 2013.
[19] "Diagnostic and Statistical Manual of Mental Disorders". American Psychiatric Association,
4th edition, 1994.
[20] S. B. Math and Y. C. Janardhan Reddy. "Issues in the pharmacological treatment of obsessive
compulsive disorder". International Journal of Clinical Practice, 61(7):1188–1197, 2007.
[21] A. R. Crossman. "Primate models of dyskinesia: The experimental approach to the study of
basal ganglia-related involuntary movement disorders". Neuroscience, 21(1):1–40, 1987.
[22] R. G. Robertson, S. M. Farmery, M. A. Sambrook, and A. R. Crossman. "Dyskinesia in the
primate following injection of an excitatory amino acid antagonist into the medial segment of
the globus pallidus". Brain Research, 476(2):317–322, 1989.
[23] Moshe Bar. "The proactive brain: using analogies and associations to generate predictions".
Trends in Cognitive Sciences, May 2007.
[24] Kristen Delevich, Jason Tucciarone, Z. Josh Huang, and Bo Li. "The mediodorsal thalamus
drives feedforward inhibition in the anterior cingulate cortex via parvalbumin interneurons".
The Journal of Neuroscience, 35(14):5743–5753, 2015.
[25] Guillermo Horga, Kelly C. Schatz, Anissa Abi-Dargham, and Bradley S. Peterson. "Deficits
in predictive coding underlie hallucinations in schizophrenia". The Journal of Neuroscience,
June 2014.
[26] Alison R. Preston, Daphna Shohamy, Carol A. Tamminga, and Anthony D. Wagner. "Hippocampal
function, declarative memory, and schizophrenia: Anatomic and functional neuroimaging
considerations". Current Neurology and Neuroscience Reports, July 2005.
[27] J. Burger and C. Teuscher. "Volatile memristive devices as short-term memory in a neuromorphic
learning architecture". In Nanoscale Architectures (NANOARCH), 2014 IEEE/ACM
International Symposium on, pages 104–109, July 2014.
[28] Ling Chen, Chuandong Li, Tingwen Huang, Hafiz Gulfam Ahmad, and Yiran Chen. "A
phenomenological memristor model for short-term/long-term memory". Physics Letters A,
378(40):2924–2930, 2014.
[29] Ling Chen, Chuandong Li, Tingwen Huang, Yiran Chen, Shiping Wen, and Jiangtao Qi. "A
synapse memristor model with forgetting effect". Physics Letters A, 377(45–48):3260–3265,
2013.
[30] Saptarshi Mandal, Ammaarah El-Amin, Kaitlyn Alexander, Bipin Rajendran, and Rashmi
Jha. "Novel synaptic memory device for neuromorphic computing". Scientific Reports, 4, June
2014.
[31] Gregor Rainer, Wael F. Asaad, and Earl K. Miller. "Memory fields of neurons in the primate
prefrontal cortex". Neuron, December 1998.
[32] Christian K. Machens, Ranulfo Romo, and Carlos D. Brody. "Flexible control of mutual
inhibition: A neural model of two-interval discrimination". Science, February 2005.
[33] John Jonides. "The mind and brain of short-term memory". Annual Review of Psychology,
2008.
[34] G. Indiveri and Shih-Chii Liu. "Memory and information processing in neuromorphic systems".
Proceedings of the IEEE, 103(8):1379–1397, Aug 2015.
[35] Stefano Fusi. "Hebbian spike-driven synaptic plasticity for learning patterns of mean firing
rates". Biological Cybernetics.
[36] H. Markram, W. Gerstner, and P. J. Sjöström. "Spike-Timing-Dependent Plasticity: A Comprehensive
Overview". Frontiers in Synaptic Neuroscience, 4, 2012.
[37] Pico Caroni, Flavio Donato, and Dominique Muller. "Structural plasticity upon learning:
regulation and functions". Nat Rev Neurosci, 13, 2012.
[38] Hiroshi Nishiyama. "Chapter one - learning-induced structural plasticity in the cerebellum". In
Michael D. Mauk, editor, Cerebellar Conditioning and Learning, volume 117 of International
Review of Neurobiology, pages 1–19. Academic Press, 2014.
[39] Jonathan R. Joshi. Plasticity in CMOS neuromorphic circuits. PhD thesis, University of
Southern California, 2013.
[40] J. Joshi, A. C. Parker, and T. Celikel. "Neuromorphic network implementation of the somatosensory
cortex". In Neural Engineering (NER), 2013 6th International IEEE/EMBS
Conference on, pages 907–910, Nov 2013.
[41] Stefan Schöneich, Konstantinos Kostarakos, and Berthold Hedwig. "An auditory feature detection
circuit for sound pattern recognition". Science Advances, 1(8), 2015.
[42] S. Barzegarjalali and A. C. Parker. "A hybrid neuromorphic circuit demonstrating
schizophrenic symptoms". In Biomedical Circuits and Systems Conference (BioCAS), 2015
IEEE, pages 1–4, Oct 2015.
[43] Alexxai V. Kravitz, Lynne D. Tye, and Anatol C. Kreitzer. "Distinct roles for direct and indirect
pathway striatal neurons in reinforcement". Nat Neurosci, 15:816–818, 2012.
[44] M. Mahvash and A. C. Parker. "Modeling intrinsic ion-channel and synaptic variability in a
cortical neuromorphic circuit". In 2011 IEEE Biomedical Circuits and Systems Conference
(BioCAS), pages 69–72, Nov 2011.
[45] H. Chaoui. "CMOS analogue adder". Electronics Letters, 31(3):180–181, Feb 1995.
[46] Giacomo Indiveri, Luigi Raffo, Silvio P. Sabatini, and Giacomo M. Bisio. "A neuromorphic
architecture for cortical multilayer integration of early visual tasks". Machine Vision and
Applications, 8(5):305–314, 1995.
[47] G. Indiveri and G. M. Bisio. Analog subthreshold VLSI implementation of the I3 neuromorphic
model of the visual cortex for pre-attentive vision, 1994.
[48] R. Serrano-Gotarredona, T. Serrano-Gotarredona, A. Acosta-Jimenez, and B. Linares-
Barranco. "A neuromorphic cortical-layer microchip for spike-based event processing vision
systems". Circuits and Systems I: Regular Papers, IEEE Transactions on, 53(12):2548–2566,
Dec 2006.
Appendix A
Carbon NanoTube Transistors
Figures A.1 and A.2 show the IDS-VDS characteristics of the n-type and p-type CNT transistors,
respectively, that are used to model our synaptic circuit. Carbon nanotubes are light and
biocompatible, with low parasitics, which makes them an ideal choice for synapses in biomimetic
circuits and, eventually, in neural prosthetics. Figure A.3 shows the voltage transfer curve of an
inverter designed with CNT transistors.
Figure A.1: DC sweep for n-type CNT transistor
Figure A.2: DC sweep for p-type CNT transistor
Figure A.3: Voltage Transfer Curve for CNT inverter
Appendix B
Neuromorphic Circuit Modeling Directional Selectivity in the Visual Cortex
B.1 Abstract
Our visual perception is three-dimensional because serial and parallel processing happen
simultaneously in different parts of the brain, so that we can perceive and recognize objects around
us [12]. Here, we have designed a neuromorphic circuit that models directional selectivity in the
visual cortex. The neuromorphic circuit is biomimetic. It consists of neurons and synapses, and
models biological mechanisms. We show example simulations of neuromorphic circuits that indicate
orientation, direction of movement and size selectivity. In these circuits, the processing capability
of the neural network emerges from the way neurons are connected.
B.2 Introduction
When an object is moving, there are different types of information that the visual cortex must
process, such as the orientation of the object, its direction of movement, and its size, so that we
perceive the object's motion correctly. Studies show that spatial and object information are processed
separately and in parallel [4]. There are serial and parallel pathways in the brain that help us to
extract information from the two-dimensional image on the retina. In order to see an object and
recognize it, many low-level and high-level visual processing functions are involved in different parts
of the brain. We have designed a biomimetic circuit that mimics vision in the brain. This circuit
receives signals coming from the retina, and detects the orientation (vertical), the direction of
movement (from left to right or from right to left), and the size of the object (large or small).
B.3 Neuron and synapse
Like our previous work [42], the simplified dendritic arbor consists of three 2-input adders
that form a four-synapse neuron. The dendritic arbor adds up all the PSP signals, and if the sum
reaches the threshold of the neuron, the axon hillock produces an Action Potential (AP) and the
neuron fires. In the human cortex, individual neurons perform relatively simple functions; complicated
emergent behaviors like cognition come from the architecture in which these neurons are used.
Similarly, our simplified neurons are used in an architecture that can extract information about an
object from raw data coming from the retina.
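As an illustration of this behavior, the following Python sketch sums four PSP values through three 2-input adders and fires when the sum crosses a threshold. It is a minimal behavioral model: the function names and the threshold value are illustrative assumptions, not values taken from the transistor-level circuit.

```python
# Minimal behavioral sketch of the four-synapse neuron described above.
# THRESHOLD is an assumed firing threshold in arbitrary units.

THRESHOLD = 1.0

def two_input_adder(a, b):
    """Idealized 2-input adder; the transistor-level adder saturates, this one does not."""
    return a + b

def four_synapse_neuron(psp1, psp2, psp3, psp4):
    """Sum the four PSPs with three 2-input adders; fire if the sum reaches threshold."""
    s1 = two_input_adder(psp1, psp2)
    s2 = two_input_adder(psp3, psp4)
    membrane = two_input_adder(s1, s2)
    return membrane >= THRESHOLD  # True -> axon hillock emits an action potential

# Example: two EPSPs of 0.6 each reach threshold; one alone does not.
print(four_synapse_neuron(0.6, 0.6, 0.0, 0.0))  # True
print(four_synapse_neuron(0.6, 0.0, 0.0, 0.0))  # False
```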
B.4 Orientation Selectivity in V1 (Primary Visual Cortex)
The Primary Visual Cortex (V1) is located in the Occipital lobe of the brain and, like other
cortices, has a laminar structure. Neurons in layer IV of V1 receive signals from Retinal Ganglion
Cells (RGCs) via thalamic nuclei. These neurons have a circular receptive field (Figure B.1) and
fire maximally if there is contrast between the ON and OFF areas. In other words, if the ON area
detects light and the OFF area does not detect any light, the corresponding neuron in layer IV fires
at its maximum rate.
Figure B.1: Orientation selectivity in layer III of primary visual cortex. a) Biological mechanism,
from [4],[5]; b) our biomimetic circuit, where NV has a vertical receptive field and NH has a
horizontal receptive field
However, since these receptive fields are circular, they cannot detect the orientation
of the stimulus. Even though neurons in layer IV have unoriented receptive fields, they project
their outputs onto neurons in layer III, where oriented receptive fields are formed [5]. The hierarchical
and multilayer structure of the visual cortex has been modeled in neuromorphic architectures to
improve computational efficiency and feature extraction [46]. Different neuromorphic visual-cortex
circuits have been produced by different groups. An analog subthreshold VLSI implementation
of the visual cortex [47] is one example. In other work, a neuromorphic architecture is used for
visual processing, such as a neuromorphic cortical-layer microchip for vision systems [48], or a
circuit that performs early visual tasks mimicking the hierarchical visual system of mammals [46].
Furthermore, retinomorphic chips have been designed that emulate the parallel pathways of the
primate retina to improve spike-coding efficiency [8]. In our work, the circuits at all levels are
biomimetic and mimic low-level and high-level signal processing and the parallel signal streams in
the brain.
Figure B.2: NV neuron with a vertical receptive field. This neuron has been excited in two different
scenarios (A and B). Simulation results are shown in Figure B.3
Excitatory and inhibitory synapses are used to mimic the ON and OFF areas in layer
IV of the primary visual cortex, respectively. NV is therefore connected to two excitatory synapses
and two inhibitory synapses, and it fires only if it receives an EPSP (Excitatory Post-Synaptic
Potential) from both excitatory synapses and does not receive an IPSP (Inhibitory Post-Synaptic
Potential) from either inhibitory synapse. This gives NV a vertically oriented receptive field. Figure
B.2 shows two scenarios for NV, A and B. In scenario A, the stimulus is vertical, so E1 and E2
are excited while I1 and I2 receive no input, and NV keeps firing in response. In scenario B, the
stimulus is horizontal, so the inputs to E1 and I1 are spikes while E2 and I2 receive no input; this
condition does not cause NV to fire. Figure B.3 shows the simulation results for the two scenarios
shown in Figure B.2.
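The firing condition for NV can be summarized in a few lines of Python. This is a hedged behavioral sketch of the logic just described; the synapse names E1, E2, I1 and I2 follow Figure B.2, and the boolean abstraction ignores the analog PSP amplitudes of the actual circuit.

```python
# Behavioral sketch of the NV firing condition: both excitatory inputs active,
# neither inhibitory input active (names follow Figure B.2).

def nv_fires(e1, e2, i1, i2):
    """Vertical receptive field: fire only when a vertical bar covers both ON sub-fields."""
    return e1 and e2 and not (i1 or i2)

# Scenario A (vertical stimulus): E1 and E2 excited, I1 and I2 silent -> NV fires.
print(nv_fires(True, True, False, False))   # True
# Scenario B (horizontal stimulus): E1 and I1 excited, E2 and I2 silent -> NV silent.
print(nv_fires(True, False, True, False))   # False
```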
Figure B.3: Simulation result for the biomimetic cortex with a vertical receptive field
B.5 Dorsal and Ventral Pathways in the brain
There are dorsal and ventral pathways from V1 to the Parietal lobe and from V1 to the
Temporal lobe, respectively. The dorsal pathway processes visually guided movement, with cells
selective for the direction of movement. The ventral pathway mainly processes information about
object identification, such as form and color. The primary visual cortex thus detects the orientation
of the stimulus, and then two streams go from the visual cortex to the two lobes of the brain (Parietal
and Temporal) for a higher level of processing to extract spatial information and the form of the
object [4]. This means that different parts of the brain process different types of information.
Similarly, in our biomimetic circuit there are neurons that detect the direction of movement and the
size of the stimulus with mechanisms similar to their biological counterparts. These neurons receive
signals from V1 and process them to extract spatial and object information.
B.6 Directional selectivity of movement
Directional selectivity in these neurons is caused by different latencies of the presynaptic neurons
[4],[6]. In Figure B.4 (a), neuron a has the slowest response to the stimulus and neuron e has the
fastest response. Therefore, when the stimulus is moving from neuron a toward neuron e, for some
period of time the summed EPSP (membrane potential) of each neuron is high enough to cause
that neuron to spike. The target neuron receives the spikes, its summed EPSP passes the firing
threshold, and the target neuron fires. On the other hand, if the object moves from neuron e toward
neuron a, then by the time neuron a produces a summed EPSP above threshold due to the stimulus,
the EPSP of neuron e has already settled; this means that the summed EPSP in the target cannot
reach the firing threshold and the target neuron does not fire. Figure B.4 (b) shows our biomimetic
circuit for direction detection. For simplicity, we use only neurons a and e. Neurons d1 and d2 are
connected to neuron a to give it a slow response, so neuron a fires with more delay than neuron e.
Because neuron a has more latency than neuron e, and in keeping with biology, the corresponding
target neuron fires for movement from neuron a toward neuron e and not for movement from neuron
e toward neuron a.
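A minimal behavioral sketch of this latency mechanism is given below. The delays, EPSP duration and time steps are assumed values chosen only to illustrate the overlap argument; the transistor-level circuit operates on continuous analog waveforms rather than discrete windows.

```python
# Behavioral sketch of latency-based direction selectivity
# (assumed time constants, not values from the transistor-level simulation).

EPSP_DURATION = 5   # how long a presynaptic EPSP stays elevated (time steps)
DELAY_A = 4         # neuron a responds slowly (extra delay through d1 and d2)
DELAY_E = 1         # neuron e responds quickly

def target_fires(t_hit_a, t_hit_e):
    """Return True if the EPSPs from a and e overlap at the target neuron."""
    a_window = range(t_hit_a + DELAY_A, t_hit_a + DELAY_A + EPSP_DURATION)
    e_window = range(t_hit_e + DELAY_E, t_hit_e + DELAY_E + EPSP_DURATION)
    # The summed EPSP at the target crosses threshold only when both windows overlap.
    return any(t in e_window for t in a_window)

# Preferred direction (stimulus hits a first, then e): slow EPSP of a is still high when e responds.
print(target_fires(t_hit_a=0, t_hit_e=3))   # True
# Non-preferred direction (e first, then a): e's EPSP has settled before a responds.
print(target_fires(t_hit_a=6, t_hit_e=0))   # False
```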
Figure B.4: Directional selectivity of movement. a) Biology, from [4]; b) our circuit
Figure B.5 shows the corresponding signals for directional selectivity in biological neurons for
the preferred and nonpreferred directions, and Figure B.6 shows simulation results of our biomimetic
circuit for the preferred and nonpreferred directions. Comparing these two figures shows how we
exploit a biological mechanism to detect the direction of a stimulus. Neurons a and e are low-level
neurons, and the target neuron is a high-level neuron with direction selectivity.
Figure B.5: The target neuron fires only for the preferred direction, from [4],[6].
Figure B.6: Our circuit mimics the biological mechanism of directional selectivity
B.7 Detecting size of stimulus
The size of the stimulus can be extracted by checking the number of NV cells that fire simultaneously.
Figure B.7 shows our final biomimetic circuit. If NV1 and NV3 fire together, the stimulus
is vertical and large enough to traverse both vertically oriented receptive fields simultaneously.
Therefore, the Action Potential outputs of the NV neurons can be processed at a higher level to
extract information about the size of the stimulus as well as its orientation.
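This size-detection step reduces to a coincidence check, sketched below for the Big neuron of Figure B.7. The function name and boolean abstraction are illustrative assumptions; in the actual circuit the coincidence is detected through summed analog PSPs crossing a threshold.

```python
# Sketch of size detection: the "Big" neuron fires when the two vertically oriented
# cells NV1 and NV3 (Figure B.7) fire within the same time window.

def big_fires(nv1_spikes, nv3_spikes):
    """A wide vertical stimulus covers both receptive fields at once."""
    return nv1_spikes and nv3_spikes

print(big_fires(True, True))    # big vertical bar -> Big fires
print(big_fires(True, False))   # small bar covers only one field -> Big stays silent
```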
Figure B.7: Our final neural circuit
B.8 Detecting Orientation, Direction and Size
Figures B.3 and B.6 show simulation results for the subcircuits when they detect the orientation
and the direction of movement of the stimulus separately. Now, we assemble these components
into a macro circuit that receives inputs from the retina and performs the processing to detect the
orientation, direction and size of the stimulus. Figure B.8 shows the simulation result for the macro
circuit of Figure B.7 for two scenarios, A and B.
B.8.1 Big vertical object is moving from left to right
Since the object is big and vertical, NV1 and NV3 fire simultaneously, so neuron Big fires.
The stimulus then continues moving from the receptive fields of NV1 and NV3 toward the right,
traversing the receptive fields of NV2 and NV4. This direction causes neuron L to R to fire. Thus,
different parts of the circuit extract different information, similar to biological mechanisms.
B.8.2 Small vertical object is moving from right to left
Now, a small vertical stimulus moves from right to left. It traverses only the receptive field of
NV2, not NV4, and then moves to the left and traverses the receptive field of NV1. Therefore,
neuron R to L fires and detects the direction. However, because the stimulus is small, it cannot
cause NV1 and NV3 to fire simultaneously, so neuron Big does not fire.
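To summarize the two scenarios, the following hedged Python sketch decodes the macro circuit of Figure B.7 from per-time-step NV1–NV4 activity. The frame representation and decoding function are illustrative assumptions rather than the simulated netlist, but they reproduce the qualitative outputs described above.

```python
# End-to-end behavioral sketch of the macro circuit in Figure B.7.
# Each frame records which vertically oriented cells fired during one time step.

def run_macro_circuit(frames):
    """frames: list of dicts with boolean NV1..NV4 activity per time step."""
    big = any(f["NV1"] and f["NV3"] for f in frames)          # wide vertical stimulus
    first_left  = next((i for i, f in enumerate(frames) if f["NV1"] or f["NV3"]), None)
    first_right = next((i for i, f in enumerate(frames) if f["NV2"] or f["NV4"]), None)
    l_to_r = first_left is not None and first_right is not None and first_left < first_right
    r_to_l = first_left is not None and first_right is not None and first_right < first_left
    return {"Big": big, "L_to_R": l_to_r, "R_to_L": r_to_l}

# Scenario A: big vertical object moving from left to right.
scenario_a = [{"NV1": True, "NV2": False, "NV3": True, "NV4": False},
              {"NV1": False, "NV2": True, "NV3": False, "NV4": True}]
print(run_macro_circuit(scenario_a))  # {'Big': True, 'L_to_R': True, 'R_to_L': False}

# Scenario B: small vertical object moving from right to left (covers NV2, then NV1).
scenario_b = [{"NV1": False, "NV2": True, "NV3": False, "NV4": False},
              {"NV1": True, "NV2": False, "NV3": False, "NV4": False}]
print(run_macro_circuit(scenario_b))  # {'Big': False, 'L_to_R': False, 'R_to_L': True}
```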
B.9 Conclusion
We have designed a biomimetic circuit with a hybrid structure (silicon neurons and carbon
nanotube synapses) that receives signals from the retina and performs low-level and high-level
processing to extract information about the orientation, direction and size of the stimulus, with
mechanisms similar to those in biology.
Figure B.8: Simulation result of our complete circuit 