Experimental and Computational Models for Seizure
Prediction
by
Pen-Ning Yu
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
BIOMEDICAL ENGINEERING
December 2020
Copyright 2020 Pen-Ning Yu
Acknowledgements
I am grateful to the members of my dissertation committee, Drs. Dong Song, Theodore
Berger, Vasilis Marmarelis, Charles Liu and George Nune, the principal contributors to this work,
for their extensive discussion and insightful input. I particularly thank my advisors, Drs. Dong
Song and Theodore Berger, for providing constructive criticism and guidance on the entire
project.
This work is also indebted to the contributions of other research fellows. Dr. Min-Chi
Hsiao taught me most of the experimental methods used in this work. Dr. Christianne Heck and
Shokofeh Alavi provided valuable EEG data for my research. I am equally grateful to my other
labmates, Ude Lu, Rosa Chan, Shane Roach, Huijin Hsu, Gene Yu, Clayton Bingham, Victoria
Wolseley, Mike Huang, Xiwei She, Bryan Moore, Wenxuan Jiang and numerous others for their
companionship and encouragement.
I’d like to say a special thanks to future readers. I write; you read, therefore I am.
Finally, I would like to thank my supportive friends and family. Without their support,
this wouldn’t be possible.
Table of contents
ACKNOWLEDGEMENTS ............................................................................................................................ II
TABLE OF CONTENTS ............................................................................................................................... III
LIST OF TABLES ........................................................................................................................................ IV
LIST OF FIGURES ....................................................................................................................................... V
ABSTRACT .............................................................................................................................................. VII
CHAPTER 1 GENERAL INTRODUCTION.................................................................................................. 1
1.1 MOTIVATION AND RATIONALE .................................................................................................................. 1
1.2 LFP AND EEG ....................................................................................................................................... 1
1.3 EPILEPSY .............................................................................................................................................. 2
1.3.1 The Puzzling Preictal State ....................................................................................................... 3
1.4 HISTORY OF SEIZURE PREDICTION .............................................................................................................. 3
CHAPTER 2 AN EXPERIMENTAL SEIZURE MODEL FROM HUMAN HIPPOCAMPAL SLICES ...................... 6
2.1 METHOD .............................................................................................................................................. 7
2.1.1 Human hippocampal slice preparation.................................................................................... 7
2.1.2 Electrophysiology on human hippocampal slices .................................................................... 9
2.1.3 Nonlinear dynamics in the interictal-like activity .................................................................. 10
2.2 RESULTS ............................................................................................................................................. 15
2.2.1 Field potential recordings on human hippocampal slices ...................................................... 15
2.3 DISCUSSION ........................................................................................................................................ 19
CHAPTER 3 A SPARSE MULTISCALE NONLINEAR AUTOREGRESSIVE MODEL FOR SEIZURE PREDICTION ............ 21
3.1 INTRODUCTION .................................................................................................................................... 21
3.2 MATERIALS AND METHODS .................................................................................................................... 26
3.2.1 iEEG data preprocessing ........................................................................................................ 27
3.2.2 Feature extraction based on autoregressive models ............................................................. 29
3.3 CLASSIFICATION ................................................................................................................................... 35
3.3.1 Logistic Lasso ......................................................................................................................... 37
3.4 EVALUATION OF MODEL PREDICTION PERFORMANCE ................................................................................... 39
3.4.1 ROC curve .............................................................................................................................. 42
3.4.2 Assessing feature importance ............................................................................................... 45
3.5 RESULTS ............................................................................................................................................. 45
3.5.1 AR and LVAR models of iEEG ................................................................................................. 45
3.5.2 Seizure prediction using AR and LVAR features ..................................................................... 49
3.5.3 Relative importance of LVAR features ................................................................................... 51
3.6 DISCUSSION ........................................................................................................................................ 53
CHAPTER 4 FUTURE WORK ................................................................................................................ 56
4.1 MULTIINPUT LVAR MODEL .................................................................................................................... 56
4.2 STACKING TO INCREASE PREDICTION PERFORMANCE .................................................................................... 57
REFERENCE ............................................................................................................................................. 59
List of Tables
TABLE 3-1 CONFUSION MATRIX ........................................................................................................................................... 40
TABLE 3-2: TIME SCALES AND DECAY PARAMETERS ................................................................................................................ 46
TABLE 3-3: PREDICTION PERFORMANCE OF CLASSIFIERS .......................................................................................................... 50
List of Figures
FIGURE 2-1: HIPPOCAMPUS RESECTION SURGERY. .................................................................................................................... 8
FIGURE 2-2: HUMAN HIPPOCAMPUS. ..................................................................................................................................... 9
FIGURE 2-3: RECONSTRUCTION OF PHASE SPACE WITH INTERPULSE INTERVALS. ............................................................................ 12
FIGURE 2-4: TRAJECTORY OF SADDLE-TYPE FIXED POINT. .......................................................................................................... 13
FIGURE 2-5: EVOKED RESPONSES TO STIMULATIONS ................................................................................................................ 15
FIGURE 2-6: INTERICTAL-LIKE ACTIVITY INDUCED BY HIGH-POTASSIUM, LOW-MAGNESIUM ACSF AND 4AP. ...................................... 16
FIGURE 2-7: ICTAL-LIKE ACTIVITY AT DIFFERENT TIME SCALES ..................................................................................... 17
FIGURE 2-8: INTERPULSE INTERVALS OF INTERICTAL-LIKE ACTIVITY .............................................................................. 18
FIGURE 2-9: INTERICTAL-LIKE ACTIVITY ON FIRST-RETURN MAP. ................................................................................ 19
FIGURE 3-1: SEIZURE PREDICTION BASED ON EEG RECORDING. ................................................................................................. 22
FIGURE 3-2: SCHEMATIC DIAGRAM OF THE TWO-LEVEL SEIZURE PREDICTION MODEL ...................................................................... 26
FIGURE 3-3 : ELECTRODE MAPS OF CANINE AND HUMAN IEEG RECORDING. ................................................................................. 27
FIGURE 3-4 : SCHEMATIC DIAGRAM OF SPLIT DATA .................................................................................................................. 29
FIGURE 3-5: ILLUSTRATION OF THE AR MODEL, THE VAR MODEL AND THE LVAR MODEL. ............................................................. 31
FIGURE 3-6 CONSTRAINTS OF THE LASSO (GREEN LEFT) AND RIDGE (GREEN RIGHT) ON A LOG-LIKELIHOOD FUNCTION (RED). .................. 38
FIGURE 3-7 THE COEFFICIENT PATHS OF LASSO (LEFT) AND THEIR EFFECTS ON THE GENERALIZED ERRORS (RIGHT). .............................. 38
FIGURE 3-8. DATA POINTS (LEFT) AND DENSITY DISTRIBUTIONS (RIGHT) SHOW THE PREDICTED SCORE OF THE INTERICTAL AND PREICTAL
SAMPLES. ............................................................................................................................................. 41
FIGURE 3-9. DENSITY DISTRIBUTIONS (LEFT) OF 2 CLASSES AND CORRESPONDING ROC CURVES (RIGHT). .......................................... 43
FIGURE 3-10. VARIOUS DENSITY DISTRIBUTIONS AND THEIR ROC CURVES. .................................................................................. 44
FIGURE 3-11 AR AND LVAR MODELS OF INTERICTAL STATE AND PREICTAL STATE IEEG (DOG 1). .................................................... 47
FIGURE 3-12 AR AND LVAR MODELS OF INTERICTAL STATE AND PREICTAL STATE IEEG (PATIENT 1). ................................................ 48
FIGURE 3-13: DISTRIBUTION OF CLASSIFICATION SCORES OF INTERICTAL AND PREICTAL STATE SAMPLES USING THE COMBINED AR AND
LVAR MODEL FOR DOG 1 (LEFT) AND PATIENT 4 (RIGHT) ............................................................................................... 49
FIGURE 3-14: BOXPLOTS SHOW THE PAIRWISE COMPARISON OF AUC DISTRIBUTIONS AMONG THE AR MODEL, THE LVAR MODEL AND THE
COMBINED AR + LVAR MODEL.................................................................................................................................. 51
FIGURE 3-15: THE 95% CONFIDENCE INTERVALS AND THE AVERAGE PERMUTATION IMPORTANCE FOR EACH OF THE FEATURE GROUPS IN
ALL SUBJECTS. ........................................................................................................................................
FIGURE 3-16: GROUP-LEVEL COMPARISON OF AUCS WITH PERMUTATION OF DIFFERENT LVAR FEATURES. ....................................... 53
FIGURE 4-1: MULTIINPUT AUTOREGRESSIVE MODELS PREDICT THE FUTURE VALUE BASED ON THE PAST VALUES FROM THE SAME LOCATION
AND OTHER LOCATIONS. ........................................................................................................................................... 56
FIGURE 4-2: SINGLE CLASSIFIER AND ENSEMBLE CLASSIFIER. ...................................................................................................... 57
Abstract
An Experimental Seizure Model from Human Hippocampal Slices
In this study, we have developed an in vitro model of epilepsy using human hippocampal
slices resected from patients suffering from intractable mesial temporal lobe epilepsy. We show
that, using a planar multi-electrode array system, spatio-temporal interictal-like activity can be
consistently recorded in high-potassium (8 mM), low-magnesium (0.25 mM) artificial cerebral
spinal fluid with 4-aminopyridine (100 µM) added. The induced epileptiform discharges were
recorded in different subregions of the hippocampus, including the dentate gyrus, CA1 and subiculum.
This paradigm allows the study of seizure generation in different subregions of the hippocampus
simultaneously, as well as the dynamics of the interictal-like activity. These dynamics were
investigated by constructing the first-return map from inter-pulse intervals. Unstable periodic
orbits (UPOs) were detected in the DG area of the hippocampal slice using the topological
recurrence method, and surrogate analysis supports their presence. This finding suggests that
interictal-like activity constitutes a chaotic system and that chaos control techniques may be
used to manipulate interictal-like activity.
A Sparse Multiscale Nonlinear Autoregressive Model for Seizure Prediction
Accurate seizure prediction is highly desirable for medical interventions such as
responsive electrical stimulation. We aim to develop a classification model that can predict
seizures by identifying preictal states, i.e., the precursor of a seizure, based on multi-channel
intracranial EEG (iEEG) signals. A two-level sparse multiscale classification model is developed
to classify interictal and preictal states from iEEG data. In the first level, short time-scale linear
dynamical features are extracted as autoregressive (AR) model coefficients; arbitrary (usually
long) time-scale linear and nonlinear dynamical features are extracted as Laguerre-Volterra
autoregressive (LVAR) model coefficients; and the root-mean-square error (RMSE) of model prediction
is used as a feature representing model unpredictability. In the second level, all features are fed
into a sparse classifier to discriminate the iEEG data between interictal and preictal states. The
two-level model can accurately classify seizure states using iEEG data recorded from ten canine
and human subjects. Adding arbitrary (usually long) time-scale and nonlinear features
significantly improves model performance compared with the conventional AR modeling
approach. There is a high degree of variability in the types of features contributing to seizure
prediction across different subjects. This study suggests that seizure generation may involve
distinct linear/nonlinear dynamical processes caused by different underlying neurobiological
mechanisms. It is necessary to build patient-specific classification models with a wide range of
dynamical features.
Chapter 1 General Introduction
1.1 Motivation and Rationale
Epilepsy is one of the most common neurological disorders. Approximately 1% of all
people living to the age of 60 will be diagnosed with epilepsy at some point in their lives. Until
now, patients with epilepsy have been treated mostly with antiepileptic drugs and surgical
resection. Antiepileptic drugs reduce the excessive excitability of the neurons, but patients can
become pharmacoresistant. Surgical resection removes the seizure focus, but surgery is not
always suitable. As alternative treatments, intranasal antiepileptic drug
delivery and electrical brain stimulation have been used to control seizures. Building
an experimental seizure model from slices of the human hippocampus allows us to test the
effects of various protocols, such as drugs or electrical stimulation. Rather than applying different
protocols directly to patients with epilepsy, surgical specimens of human epileptic hippocampal tissue
provide a simple way to evaluate the effects of antiepileptic drugs and electrical stimulation
on seizure control. Furthermore, a high-quality computational seizure prediction model
warns the patients of an upcoming seizure. This warning could allow patients to plan ahead of
time. In addition, intranasal antiepileptic drug delivery and electrical stimulation can treat the
patients in the early stage of seizure generation.
1.2 LFP and EEG
The local field potential (LFP) and the electroencephalogram (EEG) represent the
synchronized activity of neurons. As an excitatory postsynaptic potential (EPSP) propagates to the
apical dendrite of a neuron, current flows into the dendrite, creating a current sink. To return
to the original membrane potential without accumulating electric charge, the current must flow
down the dendrite and leave the neuron at other sites, creating a current source. The current
flowing across the extracellular medium thus creates a field potential. The superposition of fields
generated by current sources and sinks from all nearby neurons can be measured by various
electrodes. If the field potential is measured by a small electrode in the brain, it is known as local
field potential (LFP). If the field potential is measured by subdural grid electrodes, it is known as
an electrocorticogram (ECoG). Both recordings are also known as intracranial EEG
(iEEG). If the field potential is measured by scalp electrodes, it is known as a scalp EEG or simply
EEG. The measurement of the field potential allows EEG to depict macroscopic
electrophysiologic activity in the brain. Therefore epilepsy, which produces some of the most
pronounced electrophysiologic activity in the brain, is usually assessed with EEG in clinical practice.
1.3 Epilepsy
Epilepsy is one of the most common neurological disorders. Around 1% of the world’s
population suffers from epilepsy. Despite its commonality, the definitions of an epileptic
seizure and of epilepsy have changed over time, reflecting the complexity of the condition. A
proposal from the International League Against Epilepsy defines epilepsy as “A chronic condition
of the brain characterized by an enduring propensity to generate epileptic seizures, and by the
neurobiological, cognitive, psychological and social consequences of this condition” [1]. In other
words, patients with epilepsy are more likely to suffer from future seizures. An epileptic seizure
is defined as “a transient occurrence of signs and/or symptoms due to abnormal excessive or
synchronous neuronal activity in the brain” [1]. This definition characterizes a seizure as an
increased excitation or synchronization in the neurons. The increased excitation or
synchronization in the neurons causes a large amount of current flow in the brain. Hence the field
potential driven by that current flow is usually larger during the seizure (ictal state) than that
occurring during the non-seizure period (interictal state).
1.3.1 The Puzzling Preictal State
Contrary to the clear definition of ictal state and interictal state, the existence of a preictal
state is not definite. One piece of strong supporting evidence for its existence is the fact that
some patients with epilepsy experience vague sensations before their seizures [2]. Other than
patients’ subjective feelings, physiologic findings also support the existence of a preictal state,
including an increase in cerebral blood flow and changes in heart rate [3], [4]. Despite the
supporting evidence, there is still no consensus on the definition of the preictal state. In the absence
of such a clear-cut definition, the time horizon of the preictal state cannot be accurately
determined. Nevertheless, this time horizon must be fixed for seizure
prediction in order to define which data are labeled preictal and which interictal. Because of
the lack of consensus on the preictal period, various time horizons have been proposed for
seizure prediction, including 15 min [5], 60 min [6], 90 min [7] and 240 min [8]. This variation
of the time horizons also reflects the uncertainty about the preictal state.
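Once a horizon is chosen, labeling iEEG segments for a prediction study reduces to a simple windowing rule. The sketch below is only an illustration of that rule; the function name, the 60-minute horizon, and the 5-minute onset setback are assumptions for the example, not the scheme of any cited study:

```python
import numpy as np

def label_segments(segment_starts, seizure_onsets, horizon_s=60 * 60,
                   setback_s=5 * 60):
    """Label each segment start time (in seconds) as preictal (1) or interictal (0).

    A segment is preictal if it falls within `horizon_s` seconds before a
    seizure onset, excluding the last `setback_s` seconds (often reserved so
    that an alarm still leaves time to intervene).
    """
    labels = np.zeros(len(segment_starts), dtype=int)
    for i, t in enumerate(segment_starts):
        for onset in seizure_onsets:
            if onset - horizon_s <= t < onset - setback_s:
                labels[i] = 1
    return labels

# Segments every 10 minutes; one seizure at t = 7200 s.
starts = np.arange(0, 7200, 600)
print(label_segments(starts, [7200]))  # → [0 0 0 0 0 0 1 1 1 1 1 1]
```

With a 15-minute versus a 240-minute horizon, the same segments would receive very different labels, which is precisely why the choice of horizon dominates reported prediction performance.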
1.4 History of Seizure Prediction
The ability to accurately predict the timing of the interictal-ictal transition has
fascinated researchers since the 1950s, when macroscopic brain signals, such as changes
in the amplitude and frequency of interictal spikes, were first investigated [9]–[11].
Linear techniques, such as spectral analysis and autoregressive modeling, were then used to track the
dynamical changes of seizure generation. Specifically, spectral power at patient-specific
frequency bands represented the preictal EEG signals [12]. Also, the coefficient change in an
autoregressive model indicated a move from stable dynamics towards instability during the
generation of a seizure. The change of the system dynamics was supported by the pole trajectory
moving towards the unit circle in the z-plane [13]. With the advent of the theory of nonlinear
dynamics, also known as chaos theory, many seizure-related electrophysiological phenomena could be
characterized [14]. For example, the interictal-ictal transition, consisting of a large number of
short-duration periodic bursts mixed with infrequent long-duration bursts, was characterized as type III
intermittency [15]. The inter-pulse intervals of interictal activity were investigated using
unstable periodic orbits [16]. Meanwhile, much interest arose in applying these nonlinear
dynamic analysis tools to seizure prediction. Investigators estimated the largest Lyapunov
exponents as an indicator for nonlinear behavior and reported a decrease in the preictal state of a
seizure [17]. The correlation dimension also dropped in the preictal state, possibly caused by an
increasing degree of neural synchronization [18]. Phase synchronization analysis, however,
suggested that a decrease in synchronization took place before a seizure [19].
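The autoregressive pole idea mentioned above can be made concrete with a small sketch. This is a minimal illustration using an ordinary least-squares AR fit (an assumption for the example, not the estimator of [13]): fit AR coefficients to a signal window and check how close the model's poles sit to the unit circle.

```python
import numpy as np

def ar_fit(x, p):
    """Least-squares AR(p) fit: x[n] ≈ a[0]*x[n-1] + ... + a[p-1]*x[n-p]."""
    # Design matrix: column k holds x[n-1-k] for each target sample x[n].
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def pole_moduli(a):
    """Moduli of the AR characteristic roots; values near 1 indicate
    dynamics approaching instability (poles near the unit circle)."""
    return np.abs(np.roots(np.concatenate(([1.0], -a))))

# Simulate an AR(1) process with a pole at 0.95 and recover it from data.
rng = np.random.default_rng(0)
x = np.zeros(5000)
for n in range(1, len(x)):
    x[n] = 0.95 * x[n - 1] + rng.standard_normal()
print(pole_moduli(ar_fit(x, 1)))  # ≈ [0.95]
```

Tracking these moduli window by window gives a scalar trajectory; a drift toward 1 is the kind of stability loss the early AR studies associated with seizure generation.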
Despite these preliminary successes, dissenting studies showed that the
predictive performance of the claimed approaches on more extensive databases was not as
good as it seemed [20]. Around the turn of the millennium, mass storage capacity and long-lasting
multi-channel intracranial EEG (iEEG) recordings became more widely available, making the time
ripe for examining the feasibility of seizure prediction. Four major guidelines have thus been
proposed to rigorously evaluate prediction performance: (i) datasets should contain representative
interictal and preictal data from long-term recordings; (ii) hyperparameter tuning for algorithms
should be performed on the training data only, with prediction performance reported only for
out-of-sample test data; (iii) both sensitivity and specificity must be assessed; and (iv) a proper
statistical validation is required [20]. To demonstrate feasibility and compare various techniques in
seizure prediction, the International Workshop on Seizure Prediction held a competition in 2007.
However, none of the few submitted algorithms performed better than chance level [21].
With the resurgence of artificial intelligence, machine learning techniques have gained
prominence over the past decade as important tools for seizure prediction with big datasets of
long-term iEEG recordings [6], [22], [23]. Rather than simply applying threshold to a given
measure extracted from the EEG, recognizing the earlier subtle change of seizure generation
based on many channels was accomplished by feature extraction and classification. The
American Epilepsy Society, Epilepsy Foundation of America, and National Institutes of Health
sponsored an open invitation competition on kaggle.com in 2014 using iEEG data from subjects
with epilepsy. All the top contests incorporated various machine learning techniques in their
seizure prediction algorithms, showing their prediction performance were well above the chance
level [6]. More than 30 years of effort and numerous studies have been dedicated to seizure
prediction. With continuing improvements in our understanding of seizure generation and in
prediction performance, seemingly unpredictable seizures are not so unpredictable
anymore.
Chapter 2 An Experimental Seizure Model from Human
Hippocampal Slices
In studying the electrophysiology of epilepsy, it is difficult or impossible to observe the effects
of various protocols, such as drugs or electrical stimulation, on patients with epilepsy in vivo.
Rather than applying different protocols directly to patients with epilepsy, surgical specimens
of human epileptic hippocampal tissue provide easy assessment of the effects of antiepileptic
drugs and electrical stimulation therapy on seizure control and the dynamics of seizure
drugs and electrical stimulation therapy on seizure control and the dynamics of seizure
generation. An in-vitro model of epileptiform activity in the dentate gyrus area has been induced
with high-potassium (12 mM) artificial cerebral spinal fluid (aCSF) in combination with
electrical stimulation [24]. To explore the electrophysiological dynamics of seizure generation, a
seizure model induced by drug only (without electrical stimulation contamination) would be
more suitable. Furthermore, nonlinear dynamics of interictal-like activity in rat hippocampal
slices was supported by the detection of unstable periodic orbits [25]. However, such nonlinear
dynamics had not previously been investigated in human hippocampal slices. In this study, we report our
in-vitro seizure model, in which epileptiform activity in the dentate gyrus area is induced with
high-potassium (8 mM), low-magnesium (0.25 mM) artificial cerebral spinal fluid with
4-aminopyridine (100 µM) added [1]. Also, the nonlinear dynamics of interictal-like activity is
explored using the unstable periodic orbit approach [26].
2.1 Method
2.1.1 Human hippocampal slice preparation
The in-vitro model of epilepsy was developed using hippocampal tissue, a surgical byproduct,
from patients undergoing treatment for drug-resistant epilepsy, i.e. epilepsy that persists
despite numerous trials of anti-seizure medications. Patients first underwent a standard
presurgical workup. Suitable candidates for temporal lobectomy gave their consent for surgery in
the standard manner. After consenting to surgery as the primary treatment for their epilepsy,
they consented to our study (University of Southern California Institutional Review Board
approval number: HS-10-00162). The surgery was performed in the standard manner with no
alterations in technique to accommodate the study, as shown in Figure 2-1. After the
hippocampal specimen was resected, neurosurgeons dissected the hippocampal body. The
hippocampal body was then immediately placed into cold (4 °C) oxygenated (95% O₂, 5% CO₂)
sucrose solution. The sucrose solution consisted of 248 sucrose, 1 KCl, 26 NaHCO₃, 10 glucose,
1 CaCl₂, and 10 MgCl₂ (in mM). Compared to cerebral spinal fluid, NaCl in the sucrose solution
was removed to decrease the neurotoxic effects of passive Na⁺ influx followed by cell swelling
and lysis. Therefore, the sucrose solution was used during slicing and transportation.
Figure 2-1: Hippocampus resection surgery. (A) Pre-Lobectomy (B) Neocortical resection (C)
Hippocampal head (D) Post temporal lobectomy
We cut the hippocampal specimen into slices and transported these in the sucrose
solution, preserving them in artificial cerebral spinal fluid (aCSF) for recovery and recording.
Following identification of the anatomical structure of the hippocampal specimen, as shown in
Figure 2-2(A), we coronally dissected it into slices of 500 µm thickness using a vibratome (Leica
VT1200, Germany). During the transport of the hippocampal slices from the hospital (Keck
Hospital of USC or Los Angeles County Hospital) to the University Park Campus of USC, we
preserved the hippocampal slices in the sucrose solution bubbled with 95% O₂, 5% CO₂ at 4 °C
[27]. After the transport, we preserved the slices in aCSF for a minimum
of one hour to allow them to recover from potential damage due to surgery, slicing and
transportation. The aCSF consists of 124 NaCl, 4 KCl, 26 NaHCO₃, 10 glucose, 2 CaCl₂, and
2 MgCl₂ (in mM). After the recovery, we moved the slice into a recording chamber, as shown in
Figure 2-2. Field potential recordings were done at 34 °C in a Haas-type interface recording
chamber (Harvard Apparatus BSC-HT and BSC-BU) and a submerged microelectrode array
system (MCS MEA 1060-Inv and MC_Card).
Figure 2-2: Human hippocampus. (A) Resected hippocampal specimen from the right hemisphere
of a patient with epilepsy. The cross-section dimensions of dissected hippocampus were
approximately 15-20 mm in length and 10-15 mm in width. (B) A human hippocampal slice of
500 µm thickness rested in a recording chamber. Sub: subiculum; CA: cornu ammonis; DG: dentate gyrus
2.1.2 Electrophysiology on human hippocampal slices
To validate the viability of the human hippocampal slices, we examined the reverse
polarity of evoked responses in the different areas of the slices. In the human hippocampus, the
granule cells are packed together in a highly laminated structure - cell bodies are packed in a
layer and dendritic trees are roughly oriented perpendicular to the cell body layer. Because of the
laminated structures, synaptic activation of a population of granule cells generates characteristic
signals. When stimulation intensity is low, pure EPSP (excitatory postsynaptic potential) is
generated due to the current flow of ions into dendrites. The current flowing into the dendrites
generates a negative-going waveform in the dendritic area. The current then flows from the
dendritic tree to the soma inside the neurons and exits in the region of the soma. The exiting
current at the somas causes a positive-going waveform. The loop of the current works like a
battery, where current flows in at the negative terminal and flows out at the positive terminal.
When stimulation intensity is high, population spikes (granule cells fire synchronously) are
generated and current flows in the opposite direction of the current generated from EPSP,
thereby causing a positive-going waveform at the dendritic area and a negative-going waveform
at the soma. The reverse polarity of waveforms allowed us to confirm that the signals were generated
by the granule cells rather than by equipment artifacts, thereby validating the
viability of the human hippocampal slices.
To induce the epileptiform activity in the hippocampal slices, we perfused the
hippocampal slices with high-potassium, low-magnesium aCSF in combination with
4-aminopyridine (hi-K⁺, lo-Mg²⁺, 4-AP) [27]. In each experiment, we conducted field potential
recordings at the granule cell layer of the dentate gyrus in the hippocampal slices. The slice was
first perfused with normal aCSF (baseline), then perfused with high-potassium (8 mM),
low-magnesium (0.25 mM) aCSF with 4-aminopyridine (100 µM) to induce the interictal-like
activity.
2.1.3 Nonlinear dynamics in the interictal-like activity
The field potential signals are generated by neural networks with nonlinear
dynamical properties. The interictal-like activity in these signals exhibits complex
oscillations with properties that can be characterized as the nonlinear dynamics of a chaotic system.
A chaotic system contains an infinite number of unstable periodic orbits (UPOs). Some of these
UPOs are visited more frequently than others and can thus be detected based on the characteristics
of UPOs. An important characteristic of a UPO is that the local dynamics around it is
approximately linear. To detect an unstable periodic orbit in the interictal-like activity, we
therefore represent the local linear dynamics on the first-return map and then search for the UPO
based on this linear property.
A dynamical system with a continuous time variable (denoted t) can be described by N
first-order ordinary differential equations,
dx^(1)/dt = F_1(x^(1), x^(2), ..., x^(N))
dx^(2)/dt = F_2(x^(1), x^(2), ..., x^(N))
⋮
dx^(N)/dt = F_N(x^(1), x^(2), ..., x^(N))
(2.1)
The coordinates (x^(1), x^(2), ..., x^(N)) in Eqn (2.1) define what is referred to as the state space or phase
space, and a path in phase space is referred to as an orbit. The phase space method provides a
geometric way of studying system dynamics, but it is often difficult or impossible to measure all
of the variables (x^(1), x^(2), ..., x^(N)). To circumvent this difficulty, the system dynamics
can be reconstructed by using the intervals between successive events to build a phase space [28].
The geometric and topological features of an orbit in the reconstructed phase space are equivalent to
those of the orbit in the original phase space, meaning both have the same dynamical properties.
For example, the original phase-space coordinates (x^(1), x^(2), x^(3)) can be replaced with
three successive interpulse intervals (I_{n-2}, I_{n-1}, I_n). Figure 2-3 shows how to reconstruct the
phase space based on the interpulse intervals. The phase space plot in Figure 2-3 (B) is referred
to as the first-return map. The first-return map is particularly useful in characterizing the
dynamics of neuron firing in a nervous system. For example, the first-return map has been used
to analyze the characteristics of heart and brain dynamics with successive intervals of heart beat
[29] and epileptiform activity [15], [16].
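As an illustrative sketch (the function name and toy interval values are ours, not from the experiments), the first-return map can be constructed from a sequence of interpulse intervals as follows:

```python
import numpy as np

def first_return_map(intervals):
    """Pair each interpulse interval with its successor.

    Returns an (n-1) x 2 array whose rows (I_n, I_{n+1}) are the
    points of the first-return map.
    """
    intervals = np.asarray(intervals, dtype=float)
    return np.column_stack([intervals[:-1], intervals[1:]])

# Toy sequence of interpulse intervals (seconds); each row of the
# result is (current interval, next interval).
pts = first_return_map([3.1, 3.4, 2.9, 3.2, 3.0])
```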
Figure 2-3: Reconstruction of phase space with interpulse intervals. Left: data of a measured
variable as a time series, characterized by its interpulse intervals. Right: the first-return map
reconstructed from the interpulse intervals on the left. Every point is formed by the current and
the next interval; for example, point (1,2) is plotted with I_1 on the x-axis and I_2 on the y-axis.
For linear dynamics on the first-return map,
x⃗_{n+1} = A x⃗_n,   where   x⃗_n = [x_n, x_{n-1}]^T,   A = [[a_11, a_12], [a_21, a_22]]
(2.2)
The general solution of Eqn (2.2) is x⃗_n = c_1 λ_1^n v⃗_1 + c_2 λ_2^n v⃗_2, where λ is an eigenvalue and
v⃗ is an eigenvector. If x⃗_0 = (0, 0)^T, then x⃗_n = (0, 0)^T for all n > 0 is a solution to Eqn (2.2). Such a
constant solution is called a "fixed point". Given that Eqn (2.2) has real and distinct eigenvalues
−1 < λ_1 < 0 and λ_2 < −1, trajectories will converge toward x⃗_0 along v⃗_1 and then diverge away from
x⃗_0 along v⃗_2, as shown in Figure 2-4.
Figure 2-4: Trajectory of a saddle-type fixed point. Points converge along v⃗_1 because the absolute
value of its eigenvalue is less than 1; points diverge along v⃗_2 because the absolute value of its
eigenvalue is greater than 1. Because the eigenvalues are negative, points jump from one side of
the fixed point to the other at each step.
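The saddle behavior described above can be reproduced numerically by iterating the linear map of Eqn (2.2). The matrix below is a hypothetical example, chosen diagonal so that its eigenvectors are the coordinate axes and its eigenvalues are −0.5 and −1.3 (one in (−1, 0), one below −1):

```python
import numpy as np

# Hypothetical saddle map x_{n+1} = A x_n. A is diagonal, so its
# eigenvectors are the axes; eigenvalues -0.5 (stable, |lambda| < 1)
# and -1.3 (unstable, |lambda| > 1).
A = np.array([[-0.5, 0.0],
              [0.0, -1.3]])
eigvals = np.linalg.eigvals(A)

x = np.array([1.0, 1.0])
traj = [x]
for _ in range(6):
    x = A @ x
    traj.append(x)
traj = np.array(traj)
# The first coordinate contracts toward the fixed point while the second
# grows; both flip sign each step because the eigenvalues are negative.
```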
To detect UPOs, we actually look for saddle-type fixed points on the first-return map
rather than search for UPOs in the phase space. Since UPOs are approximately linear in local
dynamics, points on the first-return map around the unstable fixed points characteristically
converge to the fixed point along the stable manifold and diverge away from the fixed point
along the unstable manifold, as shown in Figure 2-4. These convergence and divergence
characteristics provide a statistical method for detecting unstable periodic orbits called the
“topological recurrence method”.
The topological recurrence method was used to locate the encounters with UPO in the
experimental data. This method detects UPOs by searching for the occurrence of the pattern that
is indicative of an encounter with a UPO [30], [31]. The pattern of an encounter is defined by the
following criteria: three approaching points on the first-return map whose orthogonal distances to
the line of identity decrease sequentially, followed by three departing points with sequentially
increasing distances. The last approaching point and the first departing point are shared, so that
only five points on the return map are needed.
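A minimal sketch of this counting rule (our own implementation with toy data; the real analysis operates on the recorded interpulse intervals):

```python
import numpy as np

def count_encounters(intervals):
    """Count encounter patterns on the first-return map.

    A point (I_n, I_{n+1}) has orthogonal distance |I_{n+1} - I_n| / sqrt(2)
    to the line of identity. An encounter is three points with strictly
    decreasing distances followed by three with strictly increasing
    distances; the middle point is shared, so each window has 5 points.
    """
    I = np.asarray(intervals, dtype=float)
    d = np.abs(I[1:] - I[:-1]) / np.sqrt(2.0)
    count = 0
    for i in range(len(d) - 4):
        if d[i] > d[i + 1] > d[i + 2] < d[i + 3] < d[i + 4]:
            count += 1
    return count

# A toy interval sequence whose return-map distances go 3, 2, 1, 2, 3
# (up to the 1/sqrt(2) factor): exactly one encounter is detected.
n = count_encounters([0, 3, 1, 2, 0, 3])
```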
Simply observing encounters on the first-return map does not guarantee
that UPOs are really present, because such patterns can occur even in random noise with some
probability. It is therefore necessary to perform statistical tests to determine whether the number
of encounters is significantly larger than what we expect to see due to chance. In an effort to
exclude the occurrence of the UPO from pure noise, the UPO detection methods were
statistically tested using surrogate data [16]. The shuffling surrogate data were generated by
randomly reordering the time series of the original data. Because the temporal correlation is
destroyed by randomly reordering the time series of the original data, the surrogate data are
basically uncorrelated noise. Thus if the statistical value, i.e. the number of encounters, from the
original data is significantly larger than the surrogate data, the null hypothesis that the original
data is noise can be rejected, thereby suggesting the presence of UPOs. For the test, a statistical
significance (K) can be determined with the following formula:
K = (N − N_s) / σ_s
(2.3)
where N is the number of encounters found in the original data, N_s is the mean number of
encounters found in the surrogate data, and σ_s is the standard deviation of the surrogate counts.
Assuming Gaussian statistics, K > 2 indicates with a probability of 95% that the encounters in the
original data are not due to chance.
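The surrogate test of Eqn (2.3) can then be sketched as follows (the encounter counter is repeated so the snippet is self-contained; the interval values are illustrative only):

```python
import numpy as np

def count_encounters(intervals):
    """Count 5-point UPO-encounter patterns on the first-return map
    (three strictly decreasing distances to the identity line followed
    by three strictly increasing ones, sharing the middle point)."""
    I = np.asarray(intervals, dtype=float)
    d = np.abs(I[1:] - I[:-1]) / np.sqrt(2.0)
    return sum(1 for i in range(len(d) - 4)
               if d[i] > d[i + 1] > d[i + 2] < d[i + 3] < d[i + 4])

def surrogate_significance(intervals, n_surrogates=100, seed=0):
    """K = (N - mean(N_s)) / std(N_s), as in Eqn (2.3), against shuffled
    surrogates whose temporal correlations have been destroyed."""
    rng = np.random.default_rng(seed)
    n = count_encounters(intervals)
    n_s = [count_encounters(rng.permutation(intervals))
           for _ in range(n_surrogates)]
    return (n - np.mean(n_s)) / np.std(n_s)

# Illustrative interval sequence (seconds); K > 2 would suggest UPOs.
iv = [3.1, 2.8, 3.0, 3.3, 2.9, 3.2, 3.05, 2.95, 3.15, 2.85,
      3.25, 3.0, 2.9, 3.1, 3.2, 2.8, 3.0, 3.3, 2.95, 3.05]
k = surrogate_significance(iv)
```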
2.2 Results
2.2.1 Field potential recordings on human hippocampal slices
The reverse polarity of waveforms based on the anatomy of the granule cells validated
that the signals were generated by the granule cells. A pure field EPSP was evoked by low-
intensity stimulation on the perforant path fibers. The EPSP was measured as a negative-going
waveform at dendrites and a positive-going waveform around somas. When high-intensity
stimulation was given, population spikes, with polarity opposite to the EPSP waveforms, had a
positive-going waveform at dendrites and a negative-going waveform around somas. Figure 2-5
shows the reverse polarity of waveforms recorded at positions of somas and dendrites with
various stimulation intensities.
Figure 2-5: Evoked responses to stimulation. With low-intensity stimulation,
pure EPSPs recorded at the dendritic area and somatic area were negative-going and
positive-going, respectively. With high-intensity stimulation, the amplitudes of
the EPSPs increased and population spikes were superimposed on the
EPSPs.
In addition to the neural activity evoked through electrical stimulation, interictal-like
activity was induced through administration of Hi K⁺-lo Mg²⁺-4AP aCSF. Interictal-like activity
was characterized by low-frequency (0.2-2 Hz) coordinated discharges of a population of neurons
lasting 20-200 ms. Interictal-like activity was consistently (~80%) observed at the dentate
gyrus, as shown in Figure 2-6. Ictal-like activity was also observed at the granule cell layer in the
dentate gyrus, but the chance (~3%) of observing it was very low. Ictal-like
activity was characterized by a negative field potential followed by spikes progressing from high
frequency and low amplitude to low frequency and high amplitude, as shown in Figure 2-7.
Figure 2-6: Interictal-like activity induced by high-potassium, low-magnesium aCSF
and 4AP. The top panel shows that no interictal-like activity was present when the slice
was perfused with normal aCSF. In the middle panel, Hi K⁺-lo Mg²⁺-4AP aCSF was
perfused over the slice for 20 min. The interictal-like activity was induced within 3 min in the
presence of the Hi K⁺-lo Mg²⁺-4AP aCSF and ceased within 2 min of wash-out with normal
aCSF. The lower panel shows an expansion of the interictal-like activity.
After the interictal-like activity was induced, the interpulse intervals of the interictal-like
activity were used to detect UPOs. Figure 2-8 shows a sequence of 265 interpulse intervals of
interictal-like activity recorded at DG from a hippocampal slice. The interpulse interval was
defined by the interval between the peaks of the interictal-like activity. One hundred sets of
shuffling surrogate data were generated. In the case of Figure 2-8, the test gave N = 6, N_s = 2.65,
σ_s = 1.6, and K = 2.09. This result supports the presence of UPOs in the
interictal-like activity and rejects the null hypothesis that the converging and diverging pattern
on the first-return map is simply uncorrelated noise.
Figure 2-7: Ictal-like activity at different time scales. (A) Two consecutive sets of
ictal-like activity. (B) Faster time scale of the initial stage of the ictal-like activity,
characterized by a negative field potential (arrow) followed by high-frequency,
low-amplitude spikes. (C) Faster time scale of the late stage of ictal-like activity,
characterized by low-frequency, high-amplitude spikes.
Two sets of consecutive points around the UPO were identified by the topological
recurrence method. Figure 2-9 shows the first-return map based on the data shown in Figure 2-8.
The two sets of consecutive points from Figure 2-8 are plotted and numbered in Figure 2-9. The stable
and unstable manifolds were derived by least-squares fitting of each sequence of 5
points, with 3 points converging toward the unstable fixed point and 3 points diverging away from
the unstable fixed point, as shown in Figure 2-9.
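This least-squares fitting step can be sketched as follows; the example points are constructed for illustration so that they lie exactly on lines with slopes −0.5 and −1.3 intersecting at 3.2 s, the values reported in Figure 2-9:

```python
import numpy as np

def fit_manifolds(approach_pts, depart_pts):
    """Least-squares lines through the converging and diverging triples.

    approach_pts / depart_pts: three (x, y) return-map points each.
    Returns (stable_slope, unstable_slope, fixed_point); the slopes
    estimate the eigenvalues and the fixed point is the intersection
    of the two fitted lines.
    """
    a = np.asarray(approach_pts, dtype=float)
    d = np.asarray(depart_pts, dtype=float)
    ms, bs = np.polyfit(a[:, 0], a[:, 1], 1)   # stable manifold
    mu, bu = np.polyfit(d[:, 0], d[:, 1], 1)   # unstable manifold
    x = (bu - bs) / (ms - mu)                  # intersection of the lines
    return ms, mu, (x, ms * x + bs)

# Illustrative points on lines of slope -0.5 and -1.3 through (3.2, 3.2)
ms, mu, fp = fit_manifolds([(2.0, 3.8), (3.0, 3.3), (4.0, 2.8)],
                           [(2.2, 4.5), (3.2, 3.2), (4.2, 1.9)])
```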
Figure 2-8: Interpulse intervals of interictal-like activity. The two sets of UPO
encounters are marked with green circles and brown triangles.
2.3 Discussion
This study presents an experimental seizure model based on human hippocampal slices. In our
experiments, evidence that the slices were viable is given by the reversed polarity of electrically
evoked responses at different anatomical sites, as shown in Figure 2-5. The interictal-like activity
could be consistently induced by administering high-potassium, low-magnesium aCSF combined
with 4AP, as shown in Figure 2-6. The ictal-like activity could occasionally be induced by the same
solution, as shown in Figure 2-7. These activities have the same characteristics as those observed in the
human dysplastic neocortex [32], [33]. This model allows for easy access to the dynamics of
seizure generation on the human hippocampal slice that would be very difficult to explore in
living patients.
Figure 2-9: Interictal-like activity on the first-return map. The green hexagons and
brown triangles represent two sets of sequential points. Each set has 5 sequential
points, including 3 points converging along the stable manifold and 3 points
diverging along the unstable manifold. The unstable fixed point at the value of
3.2 sec is located at the intersection of the unstable and stable manifolds. The slopes
(eigenvalues) of the stable and unstable manifolds are -0.5 and -1.3, respectively.
For example, different electrical stimulation patterns could be tested to interfere with
the dynamics of the interictal activity. However, our human hippocampal slice model, like all
other seizure slice models, is limited by its isolation from the entire circuit of seizure generation.
In slice preparation, many intrinsic and most extrinsic connections of the hippocampus are cut.
This loss of connections may explain why observations of ictal-like activity were so rare in our
model. Because the hippocampus may not be the only structure responsible for seizure generation,
the model may be limited to the study of the local dynamics of interictal activity rather than the
global dynamics of ictal activity.
This study shows that unstable periodic orbits are present in interictal-like activity in
human hippocampal slices. Supporting evidence is that statistically significant UPOs were
present in hippocampal slices from a patient with epilepsy using the topological recurrence
method along with the shuffling surrogate validation. This result confirmed the separate
observations of animal models and human EEG recordings [25], [34]. Although our statistical
results rejected the null hypothesis that the interpulse intervals were just uncorrelated noise, it is
unclear whether the interpulse intervals can be distinguished from filtered noise. Many possible
null hypotheses need to be tested in order to reject all possible causes of the linear
patterns of the interpulse intervals. To resolve this argument, supporting evidence other than the
UPO detection, such as a 0-1 test, needs to be gathered to explore the dynamics of chaotic
behavior.
Chapter 3 A Sparse Multiscale Nonlinear Autoregressive Model for
Seizure Prediction
3.1 Introduction
Patients with epilepsy suffer from unexpected, uncontrolled seizures which can
frequently result in injuries that can potentially be fatal [35]. Until now, patients with epilepsy
have been treated mostly with irreversible surgical resection and antiepileptic drugs, which are
unsuitable or ineffective for 25-30% of the total patient population [36]. Alternatively,
responsive stimulation aims to treat seizures by detecting ongoing seizures and subsequently
delivering electrical stimulation to abolish them [37]. This approach has gained success but can
potentially be further improved by replacing seizure detection with seizure prediction. The
ability to intervene before seizures happen can not only decrease a patient’s risk of injury and
feeling of helplessness, but also may result in more effective abolishment or even prevention of
seizures. Seizure prediction could be implemented by continuously monitoring brain signals
through electroencephalography (EEG) and then updating the risk of having an upcoming seizure
(Figure 3-1). Specifically, an algorithm would identify the preictal state, i.e., the precursor of a
seizure, as opposed to detecting the ictal state as done in seizure detection. Detection of the
preictal state could lead to an intervention with means such as warning the patient of the risk,
triggering antiepileptic drug treatment, and electrical stimulation to prevent the seizure from
happening.
For decades, the idea of predicting seizure based on identification of preictal state from
electroencephalograms (EEG) signals has piqued considerable interest [38], [39]. Multiple linear
and/or nonlinear approaches based on various mathematical / computational theories have been
proposed to obtain features from EEG signals that can be used for seizure prediction. One
approach is based on linear dynamical modeling, in which spectral analysis and autoregressive
modeling are used to detect dynamical changes of seizure generation. Specifically, spectral
power at patient-specific frequency bands is associated with preictal EEG signals [12].
Alternatively, changes in coefficients of the autoregressive (AR) model are used to indicate
changes in system dynamics [13], which can be verified by identifying the trajectory of the most
mobile pole of the system as it moves toward the unit circle in the z-plane. The trajectory of
this pole is associated with seizure generation.
Figure 3-1: Seizure prediction based on EEG recording. The EEG signal has three stages
of seizure generation: interictal, preictal and seizure (ictal) stages. A predictive model
continuously monitors the EEG and estimates the probability of an upcoming seizure. When
the probability is higher than a threshold, intervention is triggered.
Another approach is inspired by the theory of
nonlinear dynamics, in which measures such as the largest Lyapunov exponent [17], correlation
dimension [18], and phase synchronization [19] are extracted from EEG signals for seizure
prediction. Since these measures represent the EEG state with single values, a threshold for a
given measure can be used to distinguish the interictal and preictal states. Despite much success,
several studies have shown that the aforementioned approaches do not generalize well to more
extensive datasets [20].
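As a sketch of the pole-tracking idea above (the AR(2) coefficients are hypothetical), the poles of an estimated AR model and the magnitude of its most mobile pole can be computed as:

```python
import numpy as np

def ar_poles(a):
    """Poles of an AR model x(t) = a[0]x(t-1) + ... + a[M-1]x(t-M) + e(t),
    i.e., the roots of z^M - a1*z^(M-1) - ... - aM."""
    a = np.asarray(a, dtype=float)
    return np.roots(np.concatenate(([1.0], -a)))

# Hypothetical AR(2) coefficients. The magnitude of the largest pole is
# the quantity tracked: a pole drifting toward the unit circle (r -> 1)
# would indicate a change in the modeled dynamics.
r = np.max(np.abs(ar_poles([0.5, 0.3])))
```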
With the advent of machine learning and recording techniques, more sophisticated
computational modeling techniques have been developed over the past decade to predict seizures
from large datasets of long-term multi-channel intracranial EEG (iEEG) recordings [6], [22],
[23]. Instead of using single-channel feature extraction followed by a threshold, earlier and more
subtle changes preceding seizure onset are extracted with a richer set of features from multiple
channels and are subsequently detected with more sophisticated classifiers. Among all proposed
methods, the AR model approach has regained success using a two-level model structure, which
consists of (1) a multi-channel AR model of EEG, and (2) a classifier for predicting the
probability of preictal state [5]. Specifically, the AR models use a linearly weighted summation
of past EEG values to predict present EEG values and thus represent the EEG linear dynamics
with AR model coefficients. The AR coefficients are then used as features for the second level
classifier.
However, nonlinearity is ubiquitous in neural systems and signals such as EEG [40]. It is
highly unlikely that only linear components of EEG can contribute to seizure prediction.
Therefore, we hypothesize that adding nonlinear terms to the AR model can potentially capture
nonlinear characteristics of EEG and provide additional features for better seizure prediction.
One natural approach to extending a linear AR model to a nonlinear model is to replace the
linear relation with a Volterra series to build a Volterra autoregressive (VAR) model. A VAR
model contains all the linear terms of an AR model and has additional nonlinear terms in the
polynomial form to represent nonlinear characteristics of the system. In addition to first-order
kernel coefficients, which are equivalent to the AR coefficients, a VAR model also provides
high-order kernel coefficients as nonlinear features for the following classification.
Another limitation of the conventional AR models is that they can capture only dynamics
at a single and short time scale. The reason is that AR models directly use past values as inputs to
predict the present value (output). Given a certain sampling frequency (e.g., 500 Hz in EEG data),
an AR model typically includes only a small number of past values (e.g., 10) to be reliably
estimated and thus results in a single short time scale (e.g., 20 ms). An AR model with a large
number of coefficients (i.e., high-order AR model) can potentially include longer time scales, but
its estimation quickly becomes unwieldy, especially in the case of an AR model of EEG, where
short data length has to be used to capture the changes in EEG dynamics. Furthermore, model
estimation is virtually impossible when an AR is extended to a VAR, where not only each past
value, but each unique combination of past values needs to be considered in the model. One
solution to this problem is to use the Laguerre basis to expand the VAR model, yielding a
Laguerre-Volterra autoregressive (LVAR) model based on the functional expansion theory [40],
[41]. Since Laguerre-Volterra kernels are represented by a set of parameterized Laguerre basis
functions with an adjustable time scale, they can capture both linear and nonlinear dynamics at
arbitrary time scales without increasing model complexity (i.e., number of model coefficients),
as is often the case in conventional AR models.
Therefore, we combine the conventional AR model with an LVAR model for seizure
prediction in this study. Specifically, the coefficients of the AR model represent the linear EEG
dynamics with a short time scale, and the coefficients of the LVAR model represent both linear
and nonlinear EEG dynamics with an arbitrary (usually long) time scale. In addition, residual
errors of model predictions are used as features that represent levels of unpredictability in EEG
signals by the models.
With both linear and nonlinear features, we use a sparse (L1-regularized, i.e., lasso)
logistic regression classifier to select significant features and estimate the corresponding model
coefficients at the same time. Sparse estimation can effectively facilitate model estimation and
avoid overfitting [42]. We compare the performance of the AR model, the LVAR model, and the
combined AR and LVAR model with the sparse logistic classifier. Finally, we use a permutation
feature importance technique to investigate the importance of each model’s components, i.e.,
short time scale linear coefficients, arbitrary (usually long) time scale linear coefficients,
arbitrary (usually long) time scale nonlinear coefficients, and unpredictability, to the prediction
accuracy of the preictal state.
3.2 Materials and Methods
A two-level model is developed to predict seizures based on multi-channel iEEG signals (Figure
3-2). First, iEEG data collected from epilepsy subjects are preprocessed and labelled with
interictal, preictal, and seizure states. Second, short time-scale linear dynamical features are
extracted as AR model coefficients; arbitrary (usually long) time-scale linear and nonlinear
dynamical features are extracted as LVAR model coefficients, and the root-mean-square error
(RMSE) of model prediction is calculated as a feature representing model unpredictability. Third,
all features are fed into a sparse classifier, i.e., lasso logistic regression classifier, to discriminate
the iEEG data between interictal and preictal states.
Figure 3-2: Schematic diagram of the two-level seizure prediction model
3.2.1 iEEG data preprocessing
We used both publicly available data and data collected at Keck Hospital of University of
Southern California (USC) for seizure prediction. Public data from 5 canines and 2 human
patients had a total of 111 lead seizures (M = 15.9, SD = 13.3) [43]. Lead seizures were defined
as seizures without a preceding seizure for a minimum of 4 hours. For the 5 canines, 8 to 40 lead
seizures for each canine were recorded with 16-contact subdural strip electrodes symmetrically
implanted in each hemisphere, as shown in Figure 3-3(A). One of the two human patients had 5
lead seizures recorded with 8-contact depth electrodes implanted along the axis of the
hippocampus in each hemisphere. The other patient had 6 lead seizures recorded with a 3 × 8
subdural electrode grid placed over the temporal lobe. In addition, three human patients with
epilepsy undergoing iEEG monitoring before epilepsy surgery at USC Keck Hospital were
included. These subjects had 7 to 8 lead seizures. Signals close to seizure foci were recorded
using both grid electrodes and strip electrodes with 96 or 98 channels in total. Figure 3-3(B)
shows a representative electrode map of a patient from Keck Hospital of USC. Channels with
frequent large noise were excluded from the data.
Figure 3-3: Electrode maps of canine and human iEEG recording. (A) Canine
iEEG is recorded with two electrode strips of 4 electrodes each, oriented in the
rostral-caudal direction. (B) Human iEEG is recorded with electrode grids and
strips covering the ventral and lateral sides of the right hemisphere.
Canine data were sampled at 400 Hz; human
data were sampled at 5000 Hz and then downsampled to 500 Hz to be comparable to the canine
data.
We further labeled iEEG signals into interictal, preictal and ictal (seizure) states. There is
no consensus about the time period of the preictal state [44]. Various time periods including 15
min [5], 60 min [6], 90 min [7] and 240 min [8] before seizure onset have been used. In this
study, preictal data were taken from a 60-minute period with a 5-minute margin prior to the
seizure onset. The 5-minute period right before seizure onset was excluded to prevent subtle
early ictal activity from contaminating the preictal data. Interictal data were taken from a 60-
minute period at least 4 hours before any seizure. We excluded the data from one hour to 4 hours
before seizure onset because that period could be a mixture of interictal and preictal states.
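These labeling rules can be sketched as a small helper function (our own construction; times are in minutes, and handling of ictal and post-ictal epochs is omitted):

```python
def label_epoch(t_min, seizure_onsets_min):
    """Label an epoch by its start time (minutes) relative to seizure onsets.

    Conventions from the text: preictal = the 60-min window ending 5 min
    before onset; the 5-min margin and the 1-4 h window before onset are
    excluded; everything at least 4 h away from any seizure is interictal.
    """
    for onset in seizure_onsets_min:
        lead = onset - t_min
        if 5 <= lead <= 65:
            return "preictal"
        if 0 <= lead < 5:
            return "excluded"   # margin to avoid subtle early ictal activity
        if 65 < lead < 240:
            return "excluded"   # possible interictal/preictal mixture
    return "interictal"
```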
Data were split into a block-wise training set and test set for model estimation and
validation, respectively (Figure 3-4 A). In the block-wise setting, the training set and test set are
separated at the block level (one block being a complete interictal, preictal and ictal period) and thus do not
share data points from the same seizure period, ensuring they are independent of each other. This
block-wise setting is necessary for avoiding temporal data leakage due to temporal correlations
of EEG data, i.e., data at a given time point can contain information about data at neighboring
time points. Specifically, we divided the one-hour interictal periods and the preictal periods into
non-overlapping one-minute epochs (Figure 3-4 B). All one-minute epochs from the same one-
hour period were used for either training or testing, but not both (Figure 3-4 C). Using one-
minute epochs from the same one-hour period for both training and testing may introduce biases
in model estimation and lead to overly-optimistic prediction. More importantly, seizure
prediction models are likely to be estimated with data collected from previous seizure episodes in
practice. We split the training and test data from the Keck Hospital of USC in chronological
order. Specifically, we used the first 80% of the iEEG recording for training and the last 20% of
the iEEG recording for testing. The training set and test set were then divided into the one-hour
periods to create the block-wise data as described above. This way we used a more realistic
training-testing sequence for model estimation and evaluation.
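The block-wise, chronological split can be sketched as follows (function and variable names are ours; each epoch is tagged with the ID of the one-hour period it came from):

```python
def blockwise_split(period_ids, test_fraction=0.2):
    """Assign whole one-hour periods to train or test, never splitting a
    period across the two sets. Periods are kept in chronological order,
    with the last `test_fraction` of periods held out, as done for the
    Keck Hospital data.
    """
    unique = sorted(set(period_ids))
    n_test = max(1, round(len(unique) * test_fraction))
    test_periods = set(unique[-n_test:])
    train_idx = [i for i, p in enumerate(period_ids) if p not in test_periods]
    test_idx = [i for i, p in enumerate(period_ids) if p in test_periods]
    return train_idx, test_idx

# Each one-minute epoch tagged with its one-hour period
periods = [1, 1, 1, 2, 2, 3, 3, 3, 4, 4]
tr, te = blockwise_split(periods, test_fraction=0.25)
```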
3.2.2 Feature extraction based on autoregressive models
AR and LVAR models are used to extract linear and nonlinear dynamical features at
different time scales from iEEG signals in 1-minute non-overlapping epochs. This epoch length
results from a compromise between reliable model estimation and sensitivity to EEG non-
stationarity. One minute is generally considered an appropriate epoch length for capturing time-
invariant properties within EEG dynamics and is commonly used for seizure prediction [6], [45].
Figure 3-4: Schematic diagram of the data split. (A) Illustration of the interictal, preictal
and ictal chunks in the time-series iEEG data. (B) One-hour periods were divided into 60
one-minute windows for feature extraction. (C) All one-minute windows from the same hour
go into either the training set or the test set. The number on top of each one-minute window
represents the period number.
AR model
The iEEG signals from each channel (x) are characterized by a conventional AR model
that predicts the current iEEG value as the weighted summation of past iEEG values:
x(t) = a_0 + Σ_{τ=1}^{M} a(τ) x(t − τ) + ε(t),
(3.1)
where x is the iEEG signal, a_0 is a constant, a(τ) are the AR coefficients at time lags τ =
1, 2, ..., M, ε(t) is the residual error, and M is the system memory length. Model coefficients are
estimated with the least-squares method. These coefficients reflect the linear dependence of the
present iEEG signal on the past iEEG signals, as shown in Figure 3-5(A). The RMSE reflects how
much the linear prediction deviates from x and can be interpreted as model unpredictability.
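A minimal single-channel sketch of this least-squares estimation of Eqn (3.1) (the function name is ours), assuming the signal is stored in a NumPy array:

```python
import numpy as np

def fit_ar(x, M):
    """Least-squares estimate of the AR model in Eqn (3.1).

    Builds a design matrix whose rows hold the M past values of x and
    solves for the constant a0 and the coefficients a(1..M). Returns
    (a0, a, rmse); rmse is the residual error used as the
    unpredictability feature.
    """
    x = np.asarray(x, dtype=float)
    X = np.column_stack(
        [np.ones(len(x) - M)]
        + [x[M - tau:len(x) - tau] for tau in range(1, M + 1)])
    y = x[M:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return coef[0], coef[1:], np.sqrt(np.mean(resid ** 2))
```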
Figure 3-5: Illustration of the AR model, the VAR model and the LVAR model. (A) The AR
model predicts the present value from the sum of 3 weighted past values. These 3
coefficients represent the linear, short time-scale (3 time lags) features of the iEEG
signals. (B) The LVAR model predicts the present value by convolving the signal with 3
Laguerre basis functions. The predicted value is based on the sum of not only the 40
weighted past values of the convolutions but also the weighted multiplication of all
combinations of the convolved signals. (C) The elements of the linear and nonlinear Volterra
series. The 3 coefficients from the first-order linear Volterra series and 6 coefficients from
the second-order nonlinear Volterra series represent long time-scale (40 lags) features of the
iEEG signals.
Laguerre-Volterra autoregressive model
The iEEG signals from each channel (x) are also characterized by a nonlinear Volterra
autoregressive (VAR) model, as in
x(t) = k_0 + Σ_{τ1=1}^{M} k_1(τ) x(t − τ) + Σ_{τ1=1}^{M} Σ_{τ2=1}^{M} k_2(τ_1, τ_2) x(t − τ_1) x(t − τ_2) + ε(t),
(3.2)
where k_0 is the constant zeroth-order kernel coefficient, k_1(τ) are first-order kernel coefficients,
k_2(τ_1, τ_2) are second-order kernel coefficients, and ε(t) is the residual error. First-order kernels
represent the linear dynamical relations between the present iEEG value and past iEEG values.
Second-order kernels describe the pair-wise nonlinear dynamical interactions between past iEEG
values as they influence the present iEEG value. RMSEs quantify the unpredictability of iEEG
by the VAR models. Different from the AR model, the VAR model includes nonlinear terms,
e.g., second-order kernel, in a polynomial form.
The VAR model in its original form suffers from the same estimation difficulty of having
too many model coefficients as does the conventional AR model. The reason is that these models
directly use past input values at different time lags as their inputs to predict their output. Given a
certain sampling rate, they often require a large number of model coefficients to cover long
system memories and thus can cause serious estimation problems, e.g., underdetermined model
or overfitting. In fact, this problem becomes much more serious in VAR than in AR since VAR
also includes coefficients for every unique combination of past values. In practice, these models
are often truncated at a small number of lags to be reliably estimated [46]. The drawback of this
approach, on the other hand, is the loss of the model's ability to capture dynamics at long time scales.
To solve this problem, we employ Laguerre basis functions in the Volterra series, resulting in a
Laguerre–Volterra autoregressive (LVAR) model [47]–[50]. In this approach, past input is first
convolved with a set of Laguerre basis functions to include temporal dynamics. The output is
then expressed as a polynomial power series of the convolutions to include nonlinearity in the
model [40], [41], [51]–[54]. An LVAR model can be mathematically expressed as follows:
x(t) = c_0 + Σ_{j=1}^{L} c_1(j) v_j(t) + Σ_{j1=1}^{L} Σ_{j2=1}^{j1} c_2(j_1, j_2) v_{j1}(t) v_{j2}(t) + ε(t),
(3.3)
v_j(t) = Σ_{τ=1}^{M} b_j(τ) x(t − τ),
(3.4)
b_j(τ) = Z^{-1} { (√(1 − α²) / (1 − αz^{-1})) ((z^{-1} − α) / (1 − αz^{-1}))^{j−1} },
(3.5)
where c_0, c_1 and c_2 are the zeroth-order, first-order and second-order coefficients of the Laguerre
basis functions, L is the number of Laguerre basis functions, ε is the residual error, b_j(τ) are the
Laguerre basis functions, and α is the decay parameter of the Laguerre basis functions (Figure 3-5
B). Since L can be made much smaller than M in practice, an LVAR model contains a much
smaller number of coefficients than does its equivalent VAR model. The tails of the Laguerre basis
functions decay exponentially. This allows the Laguerre basis functions to capture long
time-scale dynamics, with contributions that decrease as the time scale increases. We define
effective Laguerre memory (Me) as the number of samples required for the magnitude of the
highest order Laguerre basis function to fall to 1% of its largest magnitude.
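The basis functions of Eqn (3.5) can be generated by realizing the inverse z-transform as a filter cascade: one first-order lowpass stage followed by repeated all-pass stages. This is a sketch under our own naming; a production implementation might instead use a closed-form recurrence:

```python
import numpy as np

def laguerre_basis(L, alpha, M):
    """First L discrete Laguerre basis functions b_j(tau), tau = 0..M-1.

    Realizes Eqn (3.5) as a cascade: a lowpass stage
    sqrt(1-alpha^2)/(1-alpha*z^-1) followed by j-1 all-pass stages
    (z^-1-alpha)/(1-alpha*z^-1), for 0 < alpha < 1.
    """
    def lowpass(x):
        y = np.zeros_like(x)
        for n in range(len(x)):
            y[n] = alpha * (y[n - 1] if n else 0.0) \
                   + np.sqrt(1.0 - alpha ** 2) * x[n]
        return y

    def allpass(x):
        y = np.zeros_like(x)
        for n in range(len(x)):
            y[n] = alpha * (y[n - 1] if n else 0.0) \
                   + (x[n - 1] if n else 0.0) - alpha * x[n]
        return y

    impulse = np.zeros(M)
    impulse[0] = 1.0
    basis, b = [], lowpass(impulse)
    for _ in range(L):
        basis.append(b)
        b = allpass(b)
    return np.array(basis)          # shape (L, M)
```

Discrete Laguerre functions are orthonormal over an infinite window, which provides a convenient sanity check on any implementation.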
Since the LVAR model expresses the nonlinear dynamical relation between output and
input as a linear static relation between the output and a polynomial series of convolutions of the input
with the Laguerre basis functions, the model coefficients c_0, c_1, and c_2 can be estimated with standard
linear regression techniques such as the least-squares method. The Volterra kernels k_0, k_1, and k_2 can
then be reconstructed from the model coefficients and the Laguerre basis functions as in:
$$\hat{k}_0 = \hat{c}_0, \qquad \hat{k}_1(\tau) = \sum_{j=1}^{L} \hat{c}_1(j)\, b_j(\tau), \qquad \hat{k}_2(\tau_1, \tau_2) = \sum_{j_1=1}^{L} \sum_{j_2=1}^{j_1} \hat{c}_2(j_1, j_2)\, b_{j_1}(\tau_1)\, b_{j_2}(\tau_2), \qquad (3.6)$$

where ĉ_0, ĉ_1, and ĉ_2 are the estimated coefficients.
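The estimation procedure can be illustrated with a minimal least-squares sketch of Equations (3.3)–(3.4). This is a simplified, hypothetical implementation with a synthetic test signal and our own function names, not the code used in the dissertation:

```python
import numpy as np

def laguerre_basis(alpha, L, M):
    # recursion equivalent to Equation (3.5)
    b = np.zeros((L, M))
    b[0] = np.sqrt(1.0 - alpha**2) * alpha ** np.arange(M)
    for j in range(1, L):
        for m in range(M):
            b[j, m] = (alpha * (b[j, m - 1] if m else 0.0)
                       + (b[j - 1, m - 1] if m else 0.0) - alpha * b[j - 1, m])
    return b

def fit_lvar(x, alpha, L, M):
    """Least-squares estimate of the c0, c1, c2 coefficients in Equation (3.3)."""
    n = len(x)
    basis = laguerre_basis(alpha, L, M)
    # Equation (3.4): v_j(t) = sum_{tau=1..M} b_j(tau) x(t - tau), strictly causal
    V = np.stack([np.convolve(x, np.concatenate(([0.0], bj)))[:n] for bj in basis], axis=1)
    cols = [np.ones(n)] + [V[:, j] for j in range(L)]                          # c0, c1 terms
    cols += [V[:, j1] * V[:, j2] for j1 in range(L) for j2 in range(j1 + 1)]   # c2 terms
    A = np.column_stack(cols)
    c, *_ = np.linalg.lstsq(A[M:], x[M:], rcond=None)      # drop the start-up transient
    rmse = np.sqrt(np.mean((x[M:] - A[M:] @ c) ** 2))
    return c, rmse

rng = np.random.default_rng(0)
x = np.zeros(2000)
for t in range(1, 2000):               # synthetic AR(1) signal stands in for iEEG
    x[t] = 0.9 * x[t - 1] + rng.normal()
c, rmse = fit_lvar(x, alpha=0.5, L=3, M=100)
```

With L = 3 the model has 1 + 3 + 6 = 10 coefficients, versus the 1 + 100 + 5050 that a full second-order Volterra model with memory M = 100 would require.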
Combined AR and LVAR model
In the combined AR and LVAR model, we separately trained the AR and LVAR models
and used all of their model coefficients as well as RMSE as features representing (1) short time-
scale linear dynamics, (2) arbitrary (usually long) time-scale linear dynamics, (3) arbitrary
(usually long) time-scale nonlinear dynamics, and (4) unpredictability for seizure prediction in
the classification step (Figure 3-2).
Hyperparameter tuning
For better seizure prediction performance, the hyperparameters of each first-level model
(AR or LVAR) are independently tuned based on the seizure prediction performance of the
corresponding second-level model (sparse classifier). All hyperparameters are tuned with a block-
wise cross-validation method within the training set, which is further divided into an inner-loop
training set and a validation set. The test set was set aside at the beginning and used only for second-
level model (sparse classifier) performance evaluation, neither for first-level model estimation
(AR and LVAR models) nor for hyperparameter tuning. Hyperparameters for the AR model are
the time-lag interval and the order of the AR model 𝑀 (Equation (3.1)). Hyperparameters for the
LVAR model are the number of Laguerre basis functions 𝐿 and the decay parameter 𝛼 (Equation
(3.3), (3.5)). Point-wise cross-validation, where each data point is considered independent and
randomly selected for inner-loop training or validation, is often used for hyperparameter tuning
but can be problematic with time series data, where neighboring data points are temporally
correlated and share the same nonstationarity. To avoid this, the block-wise cross-validation
method assigns data points from the same time period in the same fold to either training or
validation, but not both [55]. Each fold has a preictal episode and an equal number of interictal
episodes. For example, for a subject with 7 preictal episodes and 21 interictal episodes, we use 7-
fold block-wise cross-validation, in which each fold contains 1 preictal episode and 3 interictal
episodes. If there are fewer than 5 preictal episodes in the data, each preictal episode is used in 2
folds with different interictal episodes. For example, for a subject with 3 preictal episodes and 9
interictal episodes, we employ a 6-fold cross-validation, where each preictal episode is assigned
to 2 different folds with 3 different interictal episodes selected from the 9 episodes. Assigning
the same preictal episodes to different folds augments the data length and reduces the estimation
variance caused by variability within interictal episodes.
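The episode-to-fold assignment described above can be sketched as follows (a hypothetical helper with our own names; integer episode IDs stand in for whole blocks of iEEG data):

```python
import numpy as np

def blockwise_folds(preictal_ids, interictal_ids, seed=0):
    """Assign whole episodes (blocks) to folds: each fold gets one preictal
    episode and an equal share of interictal episodes. With fewer than 5
    preictal episodes, each preictal episode is reused in 2 folds."""
    rng = np.random.default_rng(seed)
    p, inter = list(preictal_ids), list(interictal_ids)
    per_fold = len(inter) // len(p)
    if len(p) < 5:
        p = p + p                      # each preictal episode appears in 2 folds
    pool = []
    while len(pool) < len(p) * per_fold:
        pool.extend(rng.permutation(inter).tolist())
    return [(p[i], pool[i * per_fold:(i + 1) * per_fold]) for i in range(len(p))]

# 7 preictal + 21 interictal episodes -> 7 folds of 1 preictal + 3 interictal
# 3 preictal + 9 interictal episodes  -> 6 folds, each preictal episode used twice
```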
Hyperparameter tuning is performed independently for the AR and LVAR models, with the two
models allowed comparable levels of complexity: the order of the AR model, ranging from 1 to 15
(1 to 15 coefficients), and the number of Laguerre basis functions, ranging from 1 to 4
(up to 14 coefficients), are both tuned by grid search.
3.3 Classification
Given all the features in Section 3.2.2, a classifier has to judiciously decide whether
an unseen iEEG sample is a preictal sample or an interictal sample. Early studies often adopted a
threshold-based analysis for seizure prediction, in which a single measure represents the brain
state and is compared with a threshold: if the measure is above the threshold, the instance is
classified as preictal, otherwise as interictal [8], [44], [56]. That measure could be the value of a
single feature or a value generated by a feature-specific formula that integrates several relevant
features into one. More recently, many more features have been adopted to reflect the various
aspects of the brain state. Because of the increase in the number of features, researchers usually
adopt a machine learning approach to integrate all the features into a single probabilistic value.
Machine learning is a more general approach to pattern recognition than a feature-specific
formula; it can recognize subtle differences between patterns, i.e., features, and has thus become
one of the standard techniques for seizure prediction. The logistic lasso (also known as
ℓ1-penalized logistic regression) has achieved great success in seizure prediction and was used as
the foundation of our classification. One variant of the lasso, the logistic group lasso, was also
used as a classifier in this study. Both are based on logistic regression, which models the
conditional probability p_β(x_i) = ℙ_β(Y = 1 | x_i) by
$$\log\left\{ \frac{p_\beta(\mathbf{x}_i)}{1 - p_\beta(\mathbf{x}_i)} \right\} = \eta_\beta(\mathbf{x}_i), \qquad \text{with } \eta_\beta(\mathbf{x}_i) = \beta_0 + \sum_{j=1}^{p} x_{ij}\,\beta_j, \qquad (3.7)$$
where Y ∈ {0, 1} is a binary response, x_i = (x_{i1}, x_{i2}, …, x_{ip}) is a vector of p predictors,
β_0 is the intercept, and β = (β_1, …, β_p) ∈ ℝ^p is the vector of regression coefficients. The
parameters β̂ are estimated by maximizing the log-likelihood function:

$$l(\beta) = \sum_{i=1}^{n} \left\{ y_i\, \eta_\beta(\mathbf{x}_i) - \log\left[1 + e^{\eta_\beta(\mathbf{x}_i)}\right] \right\} \qquad (3.8)$$
3.3.1 Logistic Lasso
The logistic lasso often yields better prediction accuracy and model interpretability than
plain logistic regression by adding a penalty term to the log-likelihood function [57]
(Equations (3.7) and (3.8)). The estimator β̂_λ is derived by minimizing the convex function [58], [59]:

$$S_\lambda(\beta) = -l(\beta) + \lambda \lVert \beta \rVert_1, \qquad (3.9)$$
where ‖β‖_1 = Σ_{j=1}^{p} |β_j| is the ℓ1 norm of the vector β; the corresponding ridge constraint
instead uses the squared ℓ2 norm ‖β‖_2² = Σ_{j=1}^{p} β_j². The penalty term acts as a constraint on
the log-likelihood function, as shown in Figure 3-6. The point where the constraint region touches
the likelihood contour determines the estimated coefficients β_j. The lasso region is a
diamond with corners: if the contact point occurs at a corner, the corresponding coefficient β_j is
exactly zero. The ridge region, on the other hand, is a disk, so the contact point is unlikely to lie
on a coefficient axis and the coefficients β_j are usually nonzero. This ability to set coefficients
exactly to zero enables the lasso to perform feature selection, which is why more coefficients β_j
become zero along the coefficient path as λ increases, as shown in Figure 3-7.
Figure 3-6 Constraints of the lasso (green, left) and ridge (green, right) on a log-likelihood function
(red). The lasso constraint has the shape of a diamond; as the penalty scale λ increases, the point
where the constraint meets the likelihood contour is likely to fall on a corner, where a predictor
(here β_2) is exactly zero. The ridge constraint, in contrast, has the shape of a circle, so predictors
do not tend to become zero as the penalty scale λ increases.
Figure 3-7 The coefficient paths of the lasso (left) and their effect on the generalization error (right).
Increasing λ moves the vertical dashed line to the left in the coefficient paths, causing the 4 non-zero
coefficients to shrink towards zero and eventually all equal zero. The resulting decrease in model
complexity decreases variance and increases bias.
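The shrinkage-to-zero behavior is easy to reproduce on synthetic data; the sketch below (not from the dissertation) uses scikit-learn, which parameterizes the penalty as C = 1/λ, so a smaller C means a stronger penalty and a sparser coefficient vector:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 200, 20
X = rng.normal(size=(n, p))
eta = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.8 * X[:, 2]   # only 3 informative features
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-eta))).astype(int)

strong = LogisticRegression(penalty="l1", solver="liblinear", C=0.05).fit(X, y)
weak = LogisticRegression(penalty="l1", solver="liblinear", C=10.0).fit(X, y)
nnz_strong = int((strong.coef_ != 0).sum())   # few features survive a strong penalty
nnz_weak = int((weak.coef_ != 0).sum())       # most features keep nonzero weights
```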
The sparsity parameter λ controls the level of model sparsity. The optimal value of λ is
estimated with nested cross-validation and the one-standard-error rule [42]. Specifically, the
hyperparameter tuning of each first-level model (the hyperparameters of the AR or LVAR model)
constitutes the first-level cross-validation; the training set of the first-level cross-validation is
further split for the second-level cross-validation, which selects the second-level model's optimal λ.
While the logistic lasso provides a means of automated feature selection and
classification, it may sometimes fail to select the best features for classification, which in turn
results in suboptimal prediction performance. To lessen this problem, we report the best
prediction performance among the AR model, the LVAR model, and the combined AR and
LVAR model as the prediction performance of the combined AR and LVAR model.
3.4 Evaluation of model prediction performance
The most frequently used metric for evaluating prediction performance is accuracy, the
percentage of cases correctly classified. For imbalanced data, accuracy can be misleading
because the minority class has little effect on it. For example, most patients with epilepsy are in
the interictal state for 95–99% of their lives [35]; simply classifying all EEG signals as interictal,
without ever predicting a preictal state, would therefore achieve 95–99% accuracy. Accuracy is
thus a poor metric for data with a skewed class distribution. To capture all aspects of prediction
performance, the results can instead be represented by a confusion matrix, as shown in Table 3-1.
Table 3-1 Confusion matrix
A true positive (TP) is a test result where the model correctly predicts the positive class.
Similarly, a true negative (TN) is a test result where the model correctly predicts the negative
class. A false positive (FP), commonly called a “false alarm”, is a test result where the model
incorrectly predicts the positive class. And a false negative (FN), also called a 'miss', is a test
result where the model incorrectly predicts the negative class. In our case, a true positive occurs
when the model correctly predicts that a patient is in the preictal state, and a false positive occurs
when the model incorrectly predicts that a patient is in the preictal state, as shown in Figure 3-8.
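These four quantities can be computed directly from predicted scores and labels; a minimal sketch (toy numbers, our own function name):

```python
import numpy as np

def confusion_counts(scores, labels, threshold=0.5):
    """TP/FP/TN/FN for binary labels (1 = preictal, 0 = interictal)."""
    pred = scores >= threshold
    tp = int(np.sum(pred & (labels == 1)))   # preictal correctly flagged
    fp = int(np.sum(pred & (labels == 0)))   # false alarm
    tn = int(np.sum(~pred & (labels == 0)))  # interictal correctly passed
    fn = int(np.sum(~pred & (labels == 1)))  # missed warning
    return tp, fp, tn, fn

scores = np.array([0.9, 0.8, 0.4, 0.3, 0.6, 0.2])
labels = np.array([1, 1, 1, 0, 0, 0])
```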
Figure 3-8. Data points (left) and density distributions (right) show the predicted score of the
interictal and preictal samples. A threshold at predicted score of 0.5 determines the class of each
sample. If the predicted score is above the threshold, the instance is classified as a positive
instance, else a negative instance. For example, preictal samples having scores higher than the
threshold and correctly classified into the preictal class are true positives. Similarly, interictal
samples having scores higher than the threshold and falsely classified into the preictal class are
false positives.
Although the confusion matrix is a comprehensive representation of prediction
performance, its elements depend strongly on the decision threshold. The decision threshold is
often set to 0.5, implying that the prediction results for a preictal sample and an interictal sample
are equally important. However, patients may prefer a lower threshold, which reduces the number
of false negatives at the price of increasing the number of false positives; in other words, they
would rather be falsely alarmed about an upcoming seizure than miss a warning that a seizure is
imminent. Unfortunately, the choice of threshold is subjective, so it is valuable to have a measure
that evaluates overall performance across all possible thresholds. This is exactly what the area
under the receiver operating characteristic curve (AUC) provides, and it is therefore one of the
most frequently used metrics for seizure prediction [6], [22], [44].
3.4.1 ROC curve
A ROC curve is made by plotting the true positive rate (TPR) against the false positive
rate (FPR) with various thresholds being applied to a classifier. The ROC curve combines 4
metrics from the confusion matrix into the TPR and the FPR. AUC further combines them into a
single metric. The true positive rate (TPR) and the false positive rate (FPR) are defined as:
$$TPR = \frac{TP}{TP + FN} = \frac{TP}{\text{Total positives}}, \qquad (3.10)$$

$$FPR = \frac{FP}{FP + TN} = \frac{FP}{\text{Total negatives}}. \qquad (3.11)$$
The higher the TPR, the more positive data points are correctly classified as positive; the
higher the FPR, the more negative data points are mistakenly classified as positive. The
classifier produces a class membership probability for each instance, which is then compared to a
threshold. Each threshold setting produces one (FPR, TPR) point, and connecting the points
across all threshold settings completes the ROC curve, as illustrated in Figure 3-9.
Figure 3-9. Density distributions (left) of 2 classes and corresponding ROC curves (right). The
vertical lines on the density distribution plots indicate thresholds of 0.8 (top) and 0.2 (bottom),
with the corresponding points on the ROC curves shown as red dots. Moving the threshold from 1
to 0 on the density distribution plot moves the red dot on the ROC curve from the bottom left
corner, up toward the top left corner, and on to the top right corner.
The area under the ROC curve (AUC) can be further used as a single measure to quantify
the performance of a test. The AUC represents the probability of correctly ranking a pair of an
event and a non-event [60], [61]. In other words, given a randomly selected pair of a preictal
sample and an interictal sample, the AUC represents the probability of correctly identifying which
of the two samples is preictal and which is interictal. To illustrate the idea, a curve going
from the origin through the top left corner to the top right corner of the ROC graph represents
perfect classification, in which the true positive rate is 100% and the false positive rate is 0% at
any threshold setting. The perfect classifier covers the whole ROC graph and has an area of one.
The diagonal line represents random classification, where the true positive rate equals the
false positive rate at any threshold setting. The random classifier covers half of the ROC graph
and has an area of 0.5. A realistic classifier has an AUC somewhere between 0.5 and 1;
the larger the AUC, the better the classification, as shown in Figure 3-10.
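Both readings of the AUC — the area under a threshold-swept curve and the probability of correctly ranking a random preictal/interictal pair — can be checked against each other with a short sketch (toy scores, our own function names):

```python
import numpy as np

def roc_auc(scores, labels):
    """Trapezoidal area under the ROC curve from a full threshold sweep."""
    ts = np.concatenate(([np.inf], np.sort(scores)[::-1]))
    P, N = (labels == 1).sum(), (labels == 0).sum()
    tpr = np.array([((scores >= t) & (labels == 1)).sum() / P for t in ts])
    fpr = np.array([((scores >= t) & (labels == 0)).sum() / N for t in ts])
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2))

def rank_auc(scores, labels):
    """AUC as P(a random preictal sample outranks a random interictal one)."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    gt = (pos[:, None] > neg[None, :]).sum()
    eq = (pos[:, None] == neg[None, :]).sum()
    return (gt + 0.5 * eq) / (len(pos) * len(neg))

s = np.array([0.9, 0.8, 0.4, 0.3, 0.6, 0.2])
y = np.array([1, 1, 1, 0, 0, 0])
```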
Figure 3-10. Various density distributions and their ROC curves. Top: A perfect classifier has an
AUC of 1. Middle: A random classifier's ROC curve wiggles around the diagonal line.
Bottom: A typical classifier's curve is concave, with AUC ranging between 0.5 and 1.
3.4.2 Assessing feature importance
To facilitate interpretation, the relative importance of features needs to be assessed. First,
the sparse classifier selects the features that significantly contribute to the classification
performance. Features with zero-valued weights are considered unimportant. Second, the relative
importance of the selected features is further assessed with a permutation feature importance
technique, which measures the decrease in model prediction performance (AUC) after randomly
permuting the values of each feature: the more the AUC drops after permuting a feature, the
more important that feature is. Permutation was repeated 1000 times for each feature to obtain the
distribution of permuted AUC values. Percentiles such as the median (50th percentile) and the 95%
confidence interval (2.5th to 97.5th percentiles) were calculated to determine the statistical
significance of features [62].
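The permutation procedure can be sketched generically (hypothetical scoring function and names; AUC computed here by the rank formulation):

```python
import numpy as np

def auc(scores, labels):
    pos, neg = scores[labels == 1], scores[labels == 0]
    return float((pos[:, None] > neg[None, :]).mean()
                 + 0.5 * (pos[:, None] == neg[None, :]).mean())

def permutation_importance(predict_score, X, y, feature, n_repeats=1000, seed=0):
    """Distribution of AUCs after repeatedly shuffling one feature column."""
    rng = np.random.default_rng(seed)
    aucs = np.empty(n_repeats)
    for r in range(n_repeats):
        Xp = X.copy()
        rng.shuffle(Xp[:, feature])          # break this feature's link to the labels
        aucs[r] = auc(predict_score(Xp), y)
    return np.percentile(aucs, [2.5, 50.0, 97.5])

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = (X[:, 0] > 0).astype(int)                # feature 0 informative, feature 1 noise
score_fn = lambda Z: Z[:, 0]
```

Permuting the informative feature collapses the AUC toward chance level, while permuting the noise feature leaves it unchanged.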
3.5 Results
3.5.1 AR and LVAR models of iEEG
First, we build AR and LVAR models of iEEG to extract (1) short time-scale linear
dynamical features (AR coefficients a), (2) arbitrary (usually long) time-scale linear dynamical
features (LVAR first-order kernel coefficients c_1), (3) arbitrary (usually long) time-scale nonlinear
dynamical features (LVAR second-order kernel coefficients c_2), and (4) unpredictability features
(RMSE).
Figure 3-11 and Figure 3-12 illustrate two representative models of Dog 1 and Patient 1
respectively based on hyperparameters tuned through block-wise cross-validation (Table 3-2).
Results show that both the AR model and the LVAR model can predict iEEG signals from
past iEEG signals given an appropriate memory length (Figures 3-11A and 3-12A). However, these two
types of models capture the system dynamics of iEEG at different time scales, as is clearly shown in
the shapes of a, k_1, and k_2 (Figures 3-11B and 3-12B). Among all 10 subjects, AR models capture the system
dynamics with a shorter time scale while LVAR models capture the system dynamics with a
longer time scale (Table 3-2). Notably, both a of the AR model and k_1 of the LVAR model show
exponentially decaying bi-phasic waveforms, with facilitation at short intervals and depression at
long intervals, while k_2 of the LVAR model shows 2D surfaces with reversed polarities, i.e.,
depression at short intervals and facilitation at long intervals. Most importantly, the interictal
state and preictal state show visible differences in many features, e.g., a and c_2 in both Dog 1
and Patient 1, in terms of their means and distributions. These differences provide the basis for
classification of the two seizure states in the following step.
Table 3-2: Time Scales and Decay Parameters
Subject AR Time scale (ms) LVAR ( 𝛼 ) LVAR Time scale (ms)
Dog 1 30 0.85 115
Dog 2 20 0.05 25
Dog 3 30 0.15 22.5
Dog 4 30 0.55 62.5
Dog 5 10 0.65 65
Patient 1 12 0.85 124
Patient 2 36 0.6 44
Patient 3* 24 0.2 26
Patient 4* 8 0.9 188
Patient 5* 40 0.75 92
α: decay parameter. Time scale of the AR model = tuned time-lag interval × M; time scale of the
LVAR model = original time-lag interval × Me. * Data from Keck Hospital.
Figure 3-11 AR and LVAR models of interictal state and preictal state iEEG
(Dog 1). A: original iEEG signal, AR model prediction, and LVAR model
prediction in 60 s and 0.2 s time periods. B: AR and LVAR estimated from one
epoch of interictal state and preictal state iEEG. C: AR and LVAR models
estimated from all interictal state and preictal state iEEG epochs. Violin plots
indicate the distribution of AR and LVAR features (coefficients and RMSEs).
Without loss of generality, the same types of features are normalized for
comparison.
Figure 3-12 AR and LVAR models of interictal state and preictal state iEEG
(Patient 1). Note that the human iEEG has a much lower noise level (higher
SNR).
3.5.2 Seizure prediction using AR and LVAR features
AR and LVAR model coefficients and RMSEs are used as features to classify interictal
and preictal states with a sparse lasso logistic regression classifier. The classifier uses features as
inputs and predicts the probability of an epoch of iEEG data being preictal ( 𝑃𝑟 ( 𝑌 = 1 ), or
prediction score) as its output. Figure 3-13 illustrates two examples of LVAR model prediction
for all iEEG epochs in the datasets. The classifier in general gives preictal epochs higher scores
than it does to interictal epochs. Boxplots of scores further indicate that preictal epochs and
interictal epochs have significantly different distributions and thus can be separated with
appropriate classification thresholds.
We build 3 classification models for each dataset. One model (AR) uses only AR features
(a and RMSE_AR), one model (LVAR) uses only LVAR features (c_1, c_2, and RMSE_LVAR), and
one model (the combined AR and LVAR) uses both AR and LVAR features (a, c_1, c_2, RMSE_AR,
and RMSE_LVAR). The performance of each classification model is evaluated with the AUC of the
ROC over varying classification thresholds. Results show that the combined AR and LVAR
model classifier can achieve high prediction accuracy in most of the subjects (Table 3-3).

Figure 3-13: Distribution of classification scores of interictal and preictal state samples
using the combined AR and LVAR model for Dog 1 (left) and Patient 4 (right). Each
data point represents the classification score of one 10-minute period.
Table 3-3: Prediction Performance of Classifiers

Subject | AR | LVAR | Combined AR and LVAR
Dog 1 | 0.90 | 0.87 | 0.91
Dog 2 | 0.85 | 0.81 | 0.87
Dog 3 | 0.72 | 0.78 | 0.82
Dog 4 | 0.87 | 0.86 | 0.88
Dog 5 | 0.32 | 0.34 | 0.41
Patient 1 | 0.49 | 0.79 | 0.79
Patient 2 | 0.22 | 0.67 | 0.67
Patient 3* | 0.31 | 0.59 | 0.71
Patient 4* | 0.86 | 0.99 | 0.99
Patient 5* | 0.53 | 0.64 | 0.64
Mean ± std | 0.61 ± 0.26 | 0.73 ± 0.18 | 0.77 ± 0.17

* Data from Keck Hospital. Bold numbers represent the highest AUC among the 3 feature groups.

We further statistically compared the classification performance of the 3 classifiers in all
patients with a paired Student's t-test (Figure 3-14). Pairwise comparison of AUC shows that (1) the
LVAR classifier (M = 0.73, SD = 0.18) was significantly higher than the AR classifier (M = 0.61,
SD = 0.26), t(9) = 2.4, p = 0.037, d = 0.77, and (2) the combined AR and LVAR classifier (M = 0.77,
SD = 0.17) was significantly higher than both the AR classifier, t(9) = 3.2, p = 0.011, d = 1, and the
LVAR classifier, t(9) = 2.9, p = 0.019, d = 0.9. These results indicate that adding arbitrary (usually long)
time-scale linear and nonlinear features (c_1 and c_2) to the classifier can significantly improve
classification performance.
3.5.3 Relative importance of LVAR features
An analysis of permutation feature importance was then performed to assess relative
importance of the LVAR features in seizure classification. Results show that all LVAR features,
i.e., first-order LVAR coefficients c_1, second-order LVAR coefficients c_2, and RMSE, contribute
to seizure classification with a high degree of variability across different subjects (Figure 3-15). In
Figure 3-15, horizontal dashed lines represent the AUC values without permutation of features.
A larger decrease in AUC after permutation of a feature indicates greater importance of this
feature for classification. AUCs after c_1 permutation are significantly lower than the original
AUCs in 6 out of 10 subjects, i.e., the original AUC is above the 95% confidence interval of the
permuted AUCs. AUCs after c_2 permutation are significantly lower than the original AUCs in 5
out of 10 subjects, and AUCs after RMSE permutation are significantly lower than the original
AUCs in 2 out of 10 subjects. At the group level, only c_1 permutation causes a significant drop in
AUC (Figure 3-16).

Figure 3-14: Boxplots show the pairwise comparison of AUC distributions among the AR
model, the LVAR model, and the combined AR + LVAR model.
Figure 3-15: The 95% confidence intervals and the average permutation importance
for each of the feature groups in all subjects. Each horizontal line indicates the AUC
before feature permutation for that subject. c_1: the coefficients of the first-order
kernel of the LVAR model; c_2: the coefficients of the second-order kernel of the
LVAR model, Eqn. (3.3).
3.6 Discussion
In this study, we extend the commonly used AR model with a LVAR model to include
arbitrary (usually long) time-scale and nonlinear features for seizure prediction. The key ideas of
this approach are (1) to capture the temporal dynamics of iEEG with a binless basis function
expansion technique, where arbitrary time scales can be modeled without introducing a large
number of model coefficients, (2) to express nonlinearity with a Volterra functional power series
where linear and nonlinear terms are represented in the same polynomial form, and (3) to select
features relevant to seizure prediction with a sparse classifier. Results show that including these
additional features can significantly improve model performance. Different iEEG nonlinear
dynamics are involved in interictal and preictal states.
One unexpected result was that the model residual error (RMSE), which represented the
model unpredictability, contributed significantly to seizure prediction in only 2 of the 10 subjects.
An earlier study reported that the residual profile of the Wiener algorithm can successfully
predict upcoming seizures [63].

Figure 3-16: Group-level comparison of AUCs with permutation of different LVAR
features.

The discrepancy between our RMSE result and this study may
be due to the different noise levels of the brain signals used in the two studies. The iEEG
signals used in this study are more vulnerable to noise than the local field potentials recorded
with penetrating electrodes in the previous study. A higher degree of noise may make the
residual errors calculated with the AR and the LVAR models less informative and thus dilute
their contributions to seizure prediction. Another possible cause of the discrepancy is that we
define the preictal period to be one hour before seizure onset, while the earlier study defines it
to be a few seconds before seizure onset [63]. A more fine-grained comparison is needed in
future studies.
Indeed, one open question related to seizure prediction in general is the definition of
different seizure states. We define preictal state as one hour before seizure onset and interictal
state as more than four hours before seizure onset. This definition is necessary for excluding
potential transitional states between interictal and preictal states from analysis, and at the same
time leaving enough time for seizure intervention. However, the one-hour period is still
somewhat arbitrary and does not consider variability across different seizure episodes and
different patients. One potential solution is to track the transition of iEEG from interictal state to
preictal state with a non-stationary adaptive filtering technique, where different seizure states are
explicitly modeled in a seizure episode-specific and patient-specific manner. Unsupervised or
weakly-supervised learning may be further used to classify different seizure states without
relying on manual labelling of the iEEG data.
The most intriguing finding in this study is the high degree of variability in the types of
features contributing to seizure prediction. This result suggests that seizure generation may
involve distinct nonlinear dynamical processes caused by different underlying neurobiological
mechanisms. It also indicates that it is necessary to build patient-specific classification models
with a wide range of features. In order to include a large number of features in classification,
dimension reduction techniques such as basis function expansion and sparse estimation are
required.
Chapter 4 Future work
4.1 Multiinput LVAR model
The LVAR model has been developed as an effective approach to seizure prediction. The
LVAR model predicts the future iEEG signal of an iEEG channel based on the past signals of the
same channel.

Figure 4-1: Multiinput autoregressive models predict the future value based on the
past values from the same location and from other locations. (A) The multiinput
autoregressive model uses the individual signals for prediction. (B) The multiinput
Laguerre–Volterra autoregressive model uses both the individual signals and the
interaction of each pair of past signals for prediction; it reduces the number of
coefficients to be estimated by using convolutions of Laguerre basis functions with
the past values.

This LVAR model can be further extended from the single-input structure to the
multiinput structure. This multiinput LVAR model predicts the future iEEG signal from a
channel based not only on past signals of the same channel but also on past signals from other
channels, as shown in Figure 4-1. How well the past signals from other channels predict the
future signal of a given channel reflects the level of connectivity between the underlying neural
populations. This level of connectivity may, in turn, reveal the dynamics of seizure generation,
since seizures are caused by abnormal synchronization of neurons.
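As a preliminary sketch of what such an extension could look like, the first-order part of a multiinput design matrix simply stacks Laguerre convolutions from every channel (illustrative only; the function name, and the trivial lag-selector "basis" used in the example, are ours):

```python
import numpy as np

def multiinput_design(X, basis):
    """First-order multiinput Laguerre regressors.
    X: (n_channels, n_samples); basis: (L, M) Laguerre functions.
    Column (ch, j) holds v_{ch,j}(t) = sum_tau b_j(tau) x_ch(t - tau)."""
    n_ch, n = X.shape
    cols = [np.ones(n)]
    for ch in range(n_ch):
        for bj in basis:
            kern = np.concatenate(([0.0], bj))   # lags start at tau = 1 (strictly causal)
            cols.append(np.convolve(X[ch], kern)[:n])
    return np.column_stack(cols)

basis = np.eye(2, 5)                  # toy "basis": pure delays of 1 and 2 samples
X = np.zeros((3, 100)); X[0, 50] = 1.0
A = multiinput_design(X, basis)       # shape (100, 1 + 3 channels * 2 functions)
```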
4.2 Stacking to increase prediction performance
Combining multiple classification models may further improve seizure prediction. So far,
Figure 4-2: Single classifier and ensemble classifier.
58
we have been dedicated to extracting useful features that can characterize the dynamics of
seizure generation. The extracted features were then used as inputs of a single classifier (the
logistic lasso). However, an ensemble classifier (combining multiple classifiers) often leads to
better prediction. An ensemble classifier creates several base classifiers. Each base classifier
makes its own prediction. Then a combiner classifier is trained to make a final prediction using
the predictions from all base classifiers as inputs, as shown in Figure 4-2. Combining multiple
classifiers can often reduce the variance of the base classifiers, thereby improving prediction
performance.
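scikit-learn's StackingClassifier implements exactly this scheme; below is a minimal sketch on synthetic data, with base and combiner models chosen for illustration only:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Base classifiers each make their own prediction; the combiner (logistic
# regression) is trained on their out-of-fold predictions.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(),
)
acc = cross_val_score(stack, X, y, cv=5).mean()
```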
References
[1] R. S. Fisher et al., “Epileptic seizures and epilepsy: definitions proposed by the
International League Against Epilepsy (ILAE) and the International Bureau for Epilepsy
(IBE),” Epilepsia, vol. 46, no. 4, pp. 470–472, Apr. 2005, doi: 10.1111/j.0013-9580.2005.66104.x.
[2] P. Rajna et al., “Hungarian multicentre epidemiologic study of the warning and initial
symptoms (prodrome, aura) of epileptic seizures,” Seizure, vol. 6, no. 5, pp. 361–368, 1997.
[3] M. E. Weinand, L. P. Carter, W. F. El-Saadany, P. J. Sioutos, D. M. Labiner, and K. J.
Oommen, “Cerebral blood flow and temporal lobe epileptogenicity,” Journal of
neurosurgery, vol. 86, no. 2, pp. 226–232, 1997.
[4] V. Novak, A. L. Reeves, P. Novak, P. A. Low, and F. W. Sharbrough, “Time-frequency
mapping of R–R interval during complex partial seizures of temporal lobe origin,” Journal
of the autonomic nervous system, vol. 77, no. 2–3, pp. 195–202, 1999.
[5] L. Chisci et al., “Real-Time Epileptic Seizure Prediction Using AR Models and Support
Vector Machines,” IEEE Transactions on Biomedical Engineering, vol. 57, no. 5, pp.
1124–1132, May 2010, doi: 10.1109/TBME.2009.2038990.
[6] B. H. Brinkmann et al., “Crowdsourcing reproducible seizure forecasting in human and
canine epilepsy,” Brain, vol. 139, no. 6, pp. 1713–1722, 2016, doi: 10.1093/brain/aww045.
[7] J. J. Howbert et al., “Forecasting Seizures in Dogs with Naturally Occurring Epilepsy,”
PLoS One, vol. 9, no. 1, Jan. 2014, doi: 10.1371/journal.pone.0081920.
[8] L. D. Iasemidis and J. C. Sackellares, “The evolution with time of the spatial distribution of
the largest Lyapunov exponent on the human epileptic cortex,” Measuring chaos in the
human brain, pp. 49–82, 1991.
[9] R. B. King, J. L. Schricker JR, and J. L. O’Leary, “An experimental study of the transition
from normal to convulsoid cortical activity,” Journal of neurophysiology, vol. 16, no. 3, pp.
286–298, 1953.
[10] B. L. Ralston, “The mechanism of transition of interictal spiking foci into ictal seizure
discharges,” Electroencephalography and clinical neurophysiology, vol. 10, no. 2, pp. 217–
232, 1958.
[11] I. Sherwin, “Interictal-ictal transition in the feline penicillin epileptogenic focus,”
Electroencephalography and clinical neurophysiology, vol. 45, no. 4, pp. 525–534, 1978.
[12] A. Siegel, C. L. Grady, and A. F. Mirsky, “Prediction of Spike-Wave Bursts in Absence
Epilepsy by EEG Power-Spectrum Signals,” Epilepsia, vol. 23, no. 1, pp. 47–60, 1982.
[13] Z. Rogowski, I. Gath, and E. Bental, “On the prediction of epileptic seizures,” Biological
cybernetics, vol. 42, no. 1, pp. 9–15, 1981.
[14] H. Korn and P. Faure, “Is there chaos in the brain? II. Experimental evidence and related
models,” C. R. Biol., vol. 326, no. 9, pp. 787–840, Sep. 2003, doi:
10.1016/j.crvi.2003.09.011.
[15] J. P. Velazquez, H. Khosravani, A. Lozano, B. L. Bardakjian, P. L. Carlen, and R.
Wennberg, “Type III intermittency in human partial epilepsy,” European Journal of
Neuroscience, vol. 11, no. 7, pp. 2571–2576, 1999.
[16] M. L. Spano, W. L. Ditto, K. Dolan, and F. Moss, “Unstable Periodic Orbits (UPOs) and
Chaos Control in Neural Systems,” in Epilepsy as a Dynamic Disease, J. Milton and P.
Jung, Eds. Berlin, Heidelberg: Springer, 2003, pp. 297–322.
[17] L. D. Iasemidis, J. Chris Sackellares, H. P. Zaveri, and W. J. Williams, “Phase space
topography and the Lyapunov exponent of electrocorticograms in partial seizures,” Brain
Topogr, vol. 2, no. 3, pp. 187–201, Mar. 1990, doi: 10.1007/BF01140588.
[18] C. E. Elger and K. Lehnertz, “Seizure prediction by non-linear time series analysis of brain
electrical activity,” European Journal of Neuroscience, vol. 10, no. 2, pp. 786–789, 1998,
doi: 10.1046/j.1460-9568.1998.00090.x.
[19] F. Mormann, T. Kreuz, R. G. Andrzejak, P. David, K. Lehnertz, and C. E. Elger, “Epileptic
seizures are preceded by a decrease in synchronization,” Epilepsy Research, vol. 53, no. 3,
pp. 173–185, Mar. 2003, doi: 10.1016/S0920-1211(03)00002-0.
[20] F. Mormann, R. G. Andrzejak, C. E. Elger, and K. Lehnertz, “Seizure prediction: the long
and winding road,” Brain, vol. 130, no. 2, pp. 314–333, 2007.
[21] F. Mormann and R. G. Andrzejak, “Seizure prediction: making mileage on the long and
winding road,” Brain, vol. 139, no. 6, pp. 1625–1627, 2016.
[22] L. Kuhlmann et al., “Epilepsyecosystem.org: crowd-sourcing reproducible seizure
prediction with long-term human intracranial EEG,” Brain, vol. 141, no. 9, pp. 2619–2630,
2018.
[23] K. Gadhoumi, J.-M. Lina, F. Mormann, and J. Gotman, “Seizure prediction for therapeutic
devices: A review,” J. Neurosci. Methods, vol. 260, pp. 270–282, Feb. 2016, doi:
10.1016/j.jneumeth.2015.06.010.
[24] S. Gabriel et al., “Stimulus and Potassium-Induced Epileptiform Activity in the Human
Dentate Gyrus from Patients with and without Hippocampal Sclerosis,” J. Neurosci., vol.
24, no. 46, pp. 10416–10430, Nov. 2004, doi: 10.1523/JNEUROSCI.2074-04.2004.
[25] P. So, J. T. Francis, T. I. Netoff, B. J. Gluckman, and S. J. Schiff, “Periodic orbits: a new
language for neuronal dynamics.,” Biophys J, vol. 74, no. 6, pp. 2776–2785, Jun. 1998.
[26] P.-N. Yu et al., “Unstable periodic orbits in human epileptic hippocampal slices,” in 2014
36th Annual International Conference of the IEEE Engineering in Medicine and Biology
Society, Aug. 2014, pp. 5800–5803, doi: 10.1109/EMBC.2014.6944946.
[27] M.-C. Hsiao et al., “An in vitro seizure model from human hippocampal slices using multi-
electrode arrays,” J. Neurosci. Methods, vol. 244, pp. 154–163, Apr. 2015, doi:
10.1016/j.jneumeth.2014.09.010.
[28] T. Sauer, “Reconstruction of dynamical systems from interspike intervals,” Physical
Review Letters, vol. 72, no. 24, p. 3811, 1994.
[29] A. Babloyantz and A. Destexhe, “Is the normal heart a periodic oscillator?,” Biological
cybernetics, vol. 58, no. 3, pp. 203–211, 1988.
[30] X. Pei and F. Moss, “Characterization of low-dimensional dynamics in the crayfish caudal
photoreceptor,” Nature, vol. 379, no. 6566, p. 618, 1996.
[31] D. Pierson and F. Moss, “Detecting periodic unstable points in noisy chaotic and limit cycle
attractors with applications to biology,” Physical review letters, vol. 75, no. 11, p. 2124,
1995.
[32] M. Avoli, A. Bernasconi, D. Mattia, A. Olivier, and G. G. Hwa, “Epileptiform discharges in
the human dysplastic neocortex: in vitro physiology and pharmacology,” Ann. Neurol., vol.
46, no. 6, pp. 816–826, Dec. 1999, doi: 10.1002/1531-8249(199912)46:6<816::aid-
ana3>3.0.co;2-o.
[33] R. Köhling, P. A. Schwartzkroin, and M. Avoli, “Studying Epilepsy in the Human Brain In
Vitro,” in Models of Seizures and Epilepsy, A. Pitkänen and P. Buckmaster, Eds. 2006, pp.
89–101.
[34] M. Le Van Quyen, J. Martinerie, C. Adam, and F. J. Varela, “Unstable periodic orbits in
human epileptic activity,” Physical Review E, vol. 56, no. 3, p. 3401, 1997.
[35] A. Schulze-Bonhage and A. Kühn, “Unpredictability of Seizures and the Burden of
Epilepsy,” in Seizure Prediction in Epilepsy, Wiley-VCH Verlag GmbH & Co. KGaA,
2008, pp. 1–10.
[36] M. Jachan, H. F. genannt Drentrup, B. Schelter, and J. Timmer, “The History of Seizure
Prediction,” in Seizure Prediction in Epilepsy, John Wiley & Sons, Ltd, 2008, pp. 11–24.
[37] F. T. Sun, M. J. Morrell, and R. E. Wharen, “Responsive cortical stimulation for the
treatment of epilepsy,” Neurotherapeutics, vol. 5, no. 1, pp. 68–74, Jan. 2008, doi:
10.1016/j.nurt.2007.10.069.
[38] B. Schelter, J. Timmer, and A. Schulze-Bonhage, Seizure prediction in epilepsy. Wiley
Online Library, 2008.
[39] J. Milton and P. Jung, Epilepsy as a dynamic disease. Springer Science & Business Media,
2013.
[40] V. Z. Marmarelis, Nonlinear dynamic modeling of physiological systems, vol. 10. John
Wiley & Sons, 2004.
[41] G. Valenza, L. Citi, E. P. Scilingo, and R. Barbieri, “Point-Process Nonlinear Models With
Laguerre and Volterra Expansions: Instantaneous Assessment of Heartbeat Dynamics,”
IEEE Transactions on Signal Processing, vol. 61, no. 11, pp. 2914–2926, Jun. 2013, doi:
10.1109/TSP.2013.2253775.
[42] T. Hastie, R. Tibshirani, and M. Wainwright, Statistical learning with sparsity: the lasso
and generalizations. Chapman and Hall/CRC, 2015.
[43] B. H. Brinkmann et al., “Crowdsourcing reproducible seizure forecasting in human and
canine epilepsy,” Brain, vol. 139, no. 6, pp. 1713–1722, 2016.
[44] F. Mormann et al., “On the predictability of epileptic seizures,” Clinical Neurophysiology,
vol. 116, no. 3, pp. 569–587, 2005, doi: 10.1016/j.clinph.2004.08.025.
[45] F. H. Lopes da Silva, “EEG Analysis: Theory and Practice,” in Electroencephalography:
Basic Principles, Clinical Applications, and Related Fields, 5th ed., E. Niedermeyer and
F. H. Lopes da Silva, Eds. Philadelphia: Lippincott Williams & Wilkins, 2005.
[46] R. Shibata, “Selection of the order of an autoregressive model by Akaike’s information
criterion,” Biometrika, vol. 63, no. 1, pp. 117–126, Jan. 1976, doi: 10.1093/biomet/63.1.117.
[47] V. Z. Marmarelis, “Identification of nonlinear biological systems using Laguerre
expansions of kernels,” Annals of biomedical engineering, vol. 21, no. 6, pp. 573–589,
1993.
[48] D. Song and T. W. Berger, “Identification of Nonlinear Dynamics in Neural Population
Activity,” in Statistical Signal Processing for Neuroscience and Neurotechnology, K. G.
Oweiss, Ed. Oxford: Academic Press, 2010, pp. 103–128.
[49] D. Song et al., “Hippocampal Microcircuits, Functional Connectivity, and Prostheses,” in
Recent Advances on the Modular Organization of the Cortex, M. F. Casanova and I. Opris,
Eds. Dordrecht: Springer Netherlands, 2015, pp. 385–405.
[50] P.-N. Yu, S. A. Naiini, C. N. Heck, C. Y. Liu, D. Song, and T. W. Berger, “A sparse
Laguerre-Volterra autoregressive model for seizure prediction in temporal lobe epilepsy,”
in 2016 38th Annual International Conference of the IEEE Engineering in Medicine and
Biology Society (EMBC), 2016, pp. 1664–1667.
[51] D. T. Westwick and R. E. Kearney, Identification of nonlinear physiological systems, vol. 7.
John Wiley & Sons, 2003.
[52] D. Song, R. H. Chan, V. Z. Marmarelis, R. E. Hampson, S. A. Deadwyler, and T. W.
Berger, “Nonlinear dynamic modeling of spike train transformations for hippocampal-
cortical prostheses,” IEEE Transactions on Biomedical Engineering, vol. 54, no. 6, pp.
1053–1066, 2007.
[53] D. Song, R. H. Chan, V. Z. Marmarelis, R. E. Hampson, S. A. Deadwyler, and T. W.
Berger, “Nonlinear modeling of neural population dynamics for hippocampal prostheses,”
Neural Networks, vol. 22, no. 9, pp. 1340–1351, 2009.
[54] D. Song et al., “Identification of sparse neural functional connectivity using penalized
likelihood estimation and basis functions,” Journal of computational neuroscience, vol. 35,
no. 3, pp. 335–357, 2013.
[55] C. Bergmeir and J. M. Benítez, “On the use of cross-validation for time series predictor
evaluation,” Information Sciences, vol. 191, pp. 192–213, May 2012, doi:
10.1016/j.ins.2011.12.028.
[56] P. J. Franaszczuk and G. K. Bergey, “An autoregressive method for the measurement of
synchronization of interictal and ictal EEG signals.,” Biological cybernetics, vol. 81, no. 1,
pp. 3–9, 1999, doi: 10.1007/s004220050540.
[57] G. James, D. Witten, T. Hastie, and R. Tibshirani, An introduction to statistical learning,
vol. 112. Springer, 2013.
[58] J. Qian, T. Hastie, J. Friedman, R. Tibshirani, and N. Simon, “Glmnet for Matlab,”
2013. http://www.stanford.edu/~hastie/glmnet_matlab/.
[59] J. Friedman, T. Hastie, and R. Tibshirani, “Regularization paths for generalized linear
models via coordinate descent,” Journal of statistical software, vol. 33, no. 1, p. 1, 2010.
[60] J. A. Hanley and B. J. McNeil, “The meaning and use of the area under a receiver operating
characteristic (ROC) curve.,” Radiology, vol. 143, no. 1, pp. 29–36, 1982, doi:
10.1148/radiology.143.1.7063747.
[61] S. J. Mason and N. E. Graham, “Areas beneath the relative operating characteristics (ROC)
and relative operating levels (ROL) curves: Statistical significance and interpretation,”
Quarterly Journal of the Royal Meteorological Society, vol. 128, no. 584, pp. 2145–2166,
2002, doi: 10.1256/003590002320603584.
[62] C. Molnar, Interpretable Machine Learning. https://christophm.github.io/interpretable-ml-book/, 2018.
[63] P. Rajdev, M. P. Ward, J. Rickus, R. Worth, and P. P. Irazoqui, “Real-time seizure
prediction from local field potentials using an adaptive Wiener algorithm,” Comput. Biol.
Med., vol. 40, no. 1, pp. 97–108, Jan. 2010, doi: 10.1016/j.compbiomed.2009.11.006.
Abstract
An Experimental Seizure Model from Human Hippocampal Slices

In this study, we developed an in vitro model of epilepsy using human hippocampal slices resected from patients suffering from intractable mesial temporal lobe epilepsy. We show that, using a planar multi-electrode array system, spatio-temporal interictal-like activity can be consistently recorded in high-potassium (8 mM), low-magnesium (0.25 mM) artificial cerebrospinal fluid with 4-aminopyridine (100 µM) added. The induced epileptiform discharges were recorded in different subregions of the hippocampus, including the dentate gyrus, CA1, and subiculum. This paradigm allows the study of seizure generation in different hippocampal subregions simultaneously, as well as of the dynamics of the interictal-like activity. These dynamics were investigated by constructing the first return map of inter-pulse intervals. Unstable periodic orbits (UPOs) were detected in the slice at the DG area using the topological recurrence method, and surrogate analysis supports the presence of UPOs in hippocampal slices. This finding suggests that interictal-like activity is a chaotic system and that chaos control techniques may be used to manipulate it.

A Sparse Multiscale Nonlinear Autoregressive Model for Seizure Prediction

Accurate seizure prediction is highly desirable for medical interventions such as responsive electrical stimulation. We aim to develop a classification model that can predict seizures by identifying preictal states, i.e., the precursors of a seizure, based on multi-channel intracranial EEG (iEEG) signals. A two-level sparse multiscale classification model is developed to classify interictal and preictal states from iEEG data. In the first level, short time-scale linear dynamical features are extracted as autoregressive (AR) model coefficients
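The first-level feature extraction described above fits an AR model to a short iEEG window and uses its coefficients as a feature vector. As a minimal sketch of that idea (not the dissertation's actual pipeline; the window length, model order, and least-squares fitting shown here are illustrative assumptions), the coefficients can be estimated as follows:

```python
import numpy as np

def ar_coefficients(x, order):
    """Fit an AR(order) model x[t] ~ sum_k a[k] * x[t-k] by least squares.

    The returned coefficient vector a (length `order`) can serve as a
    short-time-scale feature vector for interictal/preictal classification.
    """
    x = np.asarray(x, dtype=float)
    # Design matrix: row for time t holds the lagged samples [x[t-1], ..., x[t-order]]
    X = np.column_stack(
        [x[order - k - 1 : len(x) - k - 1] for k in range(order)]
    )
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

# Sanity check: a noiseless AR(1) signal with coefficient 0.9 is recovered.
t = np.arange(100)
sig = 0.9 ** t            # satisfies x[t] = 0.9 * x[t-1] exactly
coeffs = ar_coefficients(sig, order=1)
print(np.round(coeffs, 6))  # → [0.9]
```

In practice the model order would be chosen by a criterion such as AIC (cf. ref. [46]), and the coefficient vectors from each channel and window would be passed to the sparse classifier in the second level.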