DECOMPOSITION OF NEURONAL FUNCTION USING NONLINEAR SYSTEMS ANALYSIS

by

Martin Tsupon Chian

A Thesis Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
in Partial Fulfillment of the Requirements for the Degree
Master of Science in Biomedical Engineering

University of Southern California
May 1996

Copyright 1996 Martin Tsupon Chian

This thesis, written by Martin Tsupon Chian under the guidance of his Faculty Committee and approved by all its members, has been presented to and accepted by the School of Engineering in partial fulfillment of the requirements for the degree of Master of Science.

ACKNOWLEDGEMENTS

There are many people I would like to acknowledge who helped contribute to this research project. First and foremost, I would like to thank my advisor, Dr. Theodore Berger, whose patience and guidance made this thesis possible. Secondly, I would like to thank Dr. Vasilis Marmarelis, whose experience and keen insight helped me tremendously. Thirdly, I would like to thank the members of my committee who also helped me a great deal in this research effort, namely Dr. Michael Arbib and Dr. Robert Sclabassi. Fourthly, I would like to thank the members of Dr. Berger's lab and Bogdan Kosanovic, who helped me in my efforts to understand nonlinear concepts and hippocampal physiology. Finally, I would like to thank my family for their patience and understanding while I was working on my thesis project.

TABLE OF CONTENTS

ACKNOWLEDGEMENTS
LIST OF TABLES/FIGURES
ABSTRACT
1.0 INTRODUCTION
2.0 LINEAR SYSTEMS ANALYSIS
    2.1 Linear Feedback Systems
    2.2 Filtering Effects
3.0 REPRESENTING NONLINEAR SYSTEMS THROUGH FUNCTIONALS
    3.1 Volterra's Representation of Nonlinear Systems
    3.2 Wiener's Representation of Nonlinear Systems
    3.3 Using Wiener's Functionals for a Point-Process Input
    3.4 Relationship between Volterra and Poisson Kernels
    3.5 The Meaning of Kernels
    3.6 Summary
4.0 VOLTERRA MODELS VS. FEEDFORWARD ARTIFICIAL NEURAL NETWORKS (ANN)
    4.1 Introduction
    4.2 Fitting a Volterra Model into an Artificial Neural Network (ANN)
    4.3 Kernel Estimation through ANN Training (KANN)
    4.4 Summary
5.0 NONLINEAR SYSTEM DECOMPOSITION
    5.1 Functional Representations of Discrete Systems
    5.2 System Algebra
    5.3 Discrete Transforms for Combining Subsystems
    5.4 Simplified Example
    5.5 Representing Feedback Systems in System Algebra
6.0 THEORETICAL DECOMPOSITION OF THE HIPPOCAMPUS
    6.1 Introduction
    6.2 Decomposition of Dentate Gyrus
7.0 ANALYSIS OF DATA SETS
    7.1 Experimental Results
    7.2 Filtering Effects
    7.3 Decomposition of Simulated Input Data
    7.4 Interpreting the Second Order Feedback Kernel Response
    7.5 Summary
8.0 COMPUTATIONAL IMPLEMENTATION OF A FEEDBACK MODEL
    8.1 Introduction of a Simulated Model
    8.2 Kernel Estimation Results and Comparisons
    8.3 Nonlinear System Decomposition of the Simulated Feedback Model
    8.4 Summary of Feedback Model Results
9.0 DISCUSSION
    9.1 Conclusion
    9.2 Future Work
BIBLIOGRAPHY
APPENDIX A
APPENDIX B

LIST OF TABLES/FIGURES

Figures
1) Box diagram of hippocampal formation
2) Linear feedback model
3) Single-output three-layer ANN with feedforward connections
4) Functional network of the modified Volterra model (MVM) as expressed by equations 4-6 and 4-7
5) Cascade system
6) Simplified example
7) Equivalent of simplified example
8) Nonlinear feedback system
9) Equivalent system
10) System G
11) Combination of A and G
12) Feedback model of the dentate gyrus
13) Simple feedback model of dentate gyrus
14) First order kernel of equivalent system for z26 data set
15) Second order kernel of equivalent system for z26 data set
16) Feedthrough element
17) First order kernel of feedthrough element for z26 data set
18) Second order kernel of feedthrough element for z26 data set
19) Decomposed feedback element
20) First order kernel of decomposed feedback element for z26 data set
21) Second order kernel of decomposed feedback element for z26 data set
22) Magnitude response of first order kernel of feedback element for z26 data set
23) Magnitude response of second order kernel of feedback element for z26 data set
24) Magnitude response for original first order kernel vs. noise reduction for latter bins in z26 data set
25) Magnitude response for original first order kernel vs. noise reduction for latter bins in z36 data set
26) Magnitude response for original first order kernel vs. noise reduction for latter bins in z68 data set
27) Magnitude response for original first order kernel vs. noise reduction for latter bins in z45 data set
28) Magnitude response for original first order kernel vs. noise reduction for latter bins in z42 data set
29) Magnitude response for original first order kernel vs. noise reduction for latter bins in z41 data set
30) Magnitude response for original first order kernel vs. noise reduction for latter bins in z75 data set
31) Nonlinear feedthrough-linear feedback model
32) Feedback model with dashed lines denoting overall feedthrough (A) and feedback (B) black box subsystems
33) $g(t) = C_1(e^{-t/\tau_1} + e^{-t/\tau_2})$
34) $f(t) = C_2(e^{-t/\tau_3} - e^{-t/\tau_4})$
35) Overall linear impulse response
36) Actual feedthrough 1st order kernel
37) Estimated feedthrough 1st order kernel using PANN
38) Estimated feedthrough 1st order kernel using DPOS
39) Actual feedthrough second order kernel
40) Estimated feedthrough 2nd order kernel from PANN scaled by p
41) Actual equivalent 1st order kernel
42) Estimated equivalent 1st order kernel using PANN
43) Estimated equivalent 1st order kernel using DPOS
44) Actual equivalent 2nd order kernel
45) Estimated equivalent 2nd order kernel using PANN
46) Actual feedback function used in simulated model
47) Decomposed 1st order feedback kernel from PANN
48) Decomposed feedback kernel from DPOS
49) Actual feedback 2nd order kernel
50) Decomposed actual feedback 2nd order kernel
51) Decomposed 2nd order kernel using PANN

Tables
1) Selected parameters for simulated feedback model

ABSTRACT

The focus of this thesis is to investigate the application of a theoretical and experimental approach to studying the hippocampal formation through a functional power series methodology. The kernels of these functional series can be expressed as the nth order impulse responses, which are found through the recording of electrophysiological responses to random impulse train stimulation. The input/output relations from these experiments are used to mathematically decompose the nonlinear dynamic properties of otherwise unknown subsystems through system algebra in the frequency domain. The implementation of system algebra allows the characterization of feedback and feedforward loops. A computational feedback model has been developed to study the effects of nonlinearity and noise on the accuracy of two kernel estimation techniques (Sclabassi et al. [20]; Marmarelis and Zhao [18]). The results from this model, as well as from other feedback models, reveal the advantages and limitations of this approach.

1.0 INTRODUCTION

While many neurophysiological problems may be solved experimentally, it is often beneficial to implement a theoretical approach to help define the functional properties of physiological systems that are difficult to examine experimentally. An acceptable theoretical approach to modeling neuronal systems must be able to utilize present experimental knowledge to develop mathematical models of the functional properties expressed by networks of neurons. Since many neuronal networks, such as those dealing with memory and learning, are known to involve large numbers of neurons dispersed throughout the brain, it is also beneficial to develop quantitative models that treat neurons as groups interacting within a system network. However, most of the experimental and theoretical information gathered thus far has focused on the nature of single neurons, while little progress has been achieved concerning the contribution of these single neurons at a systems level dealing with the interaction of groups of neurons in a particular network. We have been investigating a theoretical approach to characterize and model neuronal networks using engineering techniques based on nonlinear systems theory. This approach offers solutions for the dynamic properties of a neural network as well as for the contributions of neuronal elements in the network, both observable and unobservable. The method involves the Volterra/Wiener approach, which characterizes nonlinear systems through a functional power series relating their input to their output. The kernels (nonlinear impulse responses) of these functional series may be extracted to perform system algebra in the characterization of unknown nonlinear subsystems when the characteristics of other subsystems are known.
The advantage of such an approach is that it allows the characterization of individual neuronal elements at multiple levels, from the biophysical to the systems level, and enables a systematic study of the functional properties of neurons within a network. The focus of this thesis therefore addresses three aspects of applying this theoretical approach to the study of the network dynamics of the hippocampal formation:

1) the application of system algebra to perform theoretical decomposition of unobservable elements;
2) the analysis of various models to study nonlinear feedback systems;
3) the effects of nonlinearities and noise within feedback systems on kernel estimation procedures.

Viewing the Hippocampus as a System

[Figure 1: Box diagram of hippocampal formation]

The hippocampal formation consists of five subsystems: the entorhinal, dentate, hippocampal (CA3 and CA1), and subicular cortices, as shown in Figure 1. The five subsystems are interconnected in a closed feedback loop in which the activity of any one subsystem may modulate the activity of any other subsystem. Interneurons known as basket cells are located within each subsystem and may modulate the output of the projection neurons via local feedforward and feedback pathways. The network dynamics of each subsystem may be defined by the input/output function characterizing the black box of that subsystem. The functional properties of each subsystem may be obtained experimentally by electrically stimulating its major afferent with a random impulse train having a Poisson distribution of inter-impulse intervals and measuring the response of the neurons within the subsystem. For example, the dentate gyrus may be perceived as a black box with various connections that provide feedback and feedforward inhibitory pathways. These inhibitory pathways are provided by the vast tree-like connections of the basket cells. Random impulse train stimulation of the perforant path fibers arising from the entorhinal cortex provides the primary excitatory input to the dentate granule cells. The input/output relation of the dentate subsystem may then be found through cross-correlation of the temporal components of the random interval distribution with the probability of granule cell activity. This input/output relation may in turn be utilized to find the properties of unobservable components through system decomposition. The difficulty inherent in examining basket cells electrophysiologically presented the perfect opportunity to test this systems approach by finding the intrinsic properties of the basket cells functioning as an inhibitory subsystem. Consequently, a feedback model was developed which utilizes system algebra to decompose the functional makeup of the inhibitory component, i.e., the basket cells. A preliminary model by Kosanovic [8] will be discussed thoroughly; it utilizes a set of nonlinear system algebra rules derived by George [6] to decompose the feedback system of the basket cells. Kosanovic [8] was able to characterize feedback kernels through decomposition when given the feedthrough and equivalent subsystems. While the feedback model offered some solutions to the application of this nonlinear systems analysis approach, it created new questions concerning kernel estimation procedures that need to be addressed. Thus, a computational model was developed to study the effects of varying degrees of noise and nonlinearity on two different kernel estimation procedures. This user-interactive tool allows us to study the plausibility of nonlinear decomposition, given the present kernel estimation procedures, under a variety of initial conditions including different random train inputs. The following chapters analyze the data from Kosanovic's work in greater detail and discuss the advantages and limitations of the two kernel estimation techniques in nonlinear decomposition under many conditions.
2.0 LINEAR SYSTEMS ANALYSIS

2.1 Linear Feedback Systems

The study of linear systems has often been essential to understanding the larger class of nonlinear systems. To understand nonlinear systems conceptually, it is helpful to briefly review how subsystems are found in a linear systems approach. Let us first consider the commonly accepted feedback system represented below:

[Figure 2: Linear feedback model, with input x(t), output y(t), feedthrough element A driven by u(t) = x(t) - v(t), and feedback element B producing v(t) from y(t)]

To facilitate analysis of linear systems, it is often easier to work in the frequency domain, chiefly because convolution in the time domain becomes multiplication in the frequency domain. Consequently, this feedback model may be solved in the frequency domain via Laplace transforms. From Figure 2 it can be shown that:

$V(s) = Y(s)B(s)$   (2-1)
$Y(s) = [X(s) - V(s)]A(s)$   (2-2)

Substituting for V(s) gives:

$Y(s) = A(s)X(s) - A(s)B(s)Y(s)$   (2-3)
$Y(s)[1 + A(s)B(s)] = A(s)X(s)$   (2-4)

giving the overall transfer function:

$H(s) = \frac{Y(s)}{X(s)} = \frac{A(s)}{1 + A(s)B(s)}$   (2-5)

In this case the overall transfer function H(s) is solved, but the same method may be used to solve for other subsystems, such as the feedback component B(s). If the equivalent (overall) system H(s) and the feedthrough component A(s) are known, then the feedback component B(s) may be found through decomposition. In linear systems analysis this is easily shown to be:

$B(s) = \frac{A(s) - H(s)}{A(s)H(s)}$   (2-6)

This method has been widely utilized in many applications and presents a powerful way of applying frequency domain techniques to solve for subsystem characteristics when the other system characteristics are known. As a result, we have investigated the implementation of similar frequency domain techniques to solve nonlinear physiological systems problems, as discussed in later chapters.

2.2 Filtering Effects

Once the subsystems are isolated through systems analysis techniques, we can categorize their transfer functions by their frequency-selective characteristics, i.e., their filtering properties. High-pass filters pass high-frequency components while attenuating low-frequency components. Conversely, low-pass filters pass low-frequency components while attenuating high-frequency components. In the case of discrete-time systems, filtering characteristics can often be represented as difference equations. As Oppenheim [12] states, filters that pass frequencies near even multiples of $\pi$ ($\Omega = 0, \pm 2\pi$) are considered low-pass filters, while high-pass filters pass frequency components around odd multiples of $\pi$ ($\Omega = \pm\pi, \pm 3\pi$, etc.). Low-pass filters are often thought of as smoothing functions; one typical example is the 3-point moving average of a discrete input x[n]:

$y[n] = \tfrac{1}{3}(x[n-1] + x[n] + x[n+1])$   (2-7)

The transfer function of this difference equation can be shown to be

$H(\Omega) = \tfrac{1}{3}(1 + 2\cos\Omega)$   (2-8)

by taking the z-transform of the difference equation and evaluating it on the unit circle ($z = e^{j\Omega}$).
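As a quick check of eq. (2-8), the frequency response of the moving average can be computed numerically. The following sketch is an illustration added here (not part of the original analysis); the FFT length and frequency grid are arbitrary choices:

```python
import numpy as np

# Impulse response of the 3-point moving average, eq. (2-7).
# The filter is noncausal (taps at n = -1, 0, +1); a one-sample
# shift only changes the phase, not the magnitude response.
h = np.array([1/3, 1/3, 1/3])

# Magnitude response via a zero-padded FFT (512 points is arbitrary).
N = 512
H_fft = np.abs(np.fft.rfft(h, N))
omega = np.linspace(0, np.pi, len(H_fft))

# Analytic form from eq. (2-8): H(omega) = (1/3)(1 + 2 cos omega).
H_analytic = np.abs((1 + 2 * np.cos(omega)) / 3)

print(np.max(np.abs(H_fft - H_analytic)))  # ~1e-16: the two forms agree
# The response is 1 at omega = 0 and small near omega = pi: a low-pass filter.
```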
Since a high-pass filter attenuates the low-frequency components, we can represent a typical high-pass filter as:

$y[n] = \frac{x[n] - x[n-1]}{2}$   (2-9)

When the input signal is relatively constant, y[n] is approximately zero; in other words, slowly oscillating components (i.e., low frequencies) will not be passed through the filter. On the other hand, when the input signal varies extensively, y[n] takes on larger magnitude values. Therefore, this equation can be interpreted as a high-pass filter, since high-frequency components vary greatly between neighboring sequence values and correspond to larger y[n] output values. This brief discussion of linear systems should facilitate a better understanding of the following chapters, which discuss the application of nonlinear systems techniques.

3.0 REPRESENTING NONLINEAR SYSTEMS THROUGH FUNCTIONALS

3.1 Volterra's Representation of Nonlinear Systems

In systems analysis, it is often helpful to represent mathematical expressions in simplified forms to expedite mathematical manipulation. In nonlinear systems, the input and output can often be related using functionals. If we define a function f as one that operates on a set of x values to produce a set of f(x) values, then a functional is one that operates on a set of functions to produce a new set of functions. Therefore, with x(t) denoted as the input, we define the functional H as a modifier of the function x(t) that produces the output function y(t). Volterra [16] showed that the output of any nonlinear time-invariant system can be represented in the generalized form:

$y(t) = H[x(t)]$   (3-1)
$H[x(t)] = (H_1 + H_2 + \cdots + H_n)[x(t)]$   (3-2)

where $H_n[\cdot]$ is the nth order Volterra operator,

$H_n[x(t)] = \int \cdots \int h_n(\tau_1,\ldots,\tau_n)\, x(t-\tau_1)\cdots x(t-\tau_n)\, d\tau_1 \cdots d\tau_n$   (3-3)

and $h_n(\tau_1,\ldots,\tau_n)$ is the nth order Volterra kernel. The kernel may also be thought of as the nth order impulse response of the nonlinear system. Notice the similarity between this expression and the definition of convolution for linear time-invariant systems.
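To make the operator concrete, the sketch below evaluates the discrete-time analog of eq. (3-3) for a second order system. It is added for illustration only; the kernels h1 and h2 are arbitrary toy choices, not kernels from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

M = 8                                        # kernel memory, in samples (arbitrary)
h1 = np.exp(-np.arange(M) / 3.0)             # toy 1st order kernel
h2 = 0.1 * np.outer(h1, h1)                  # toy symmetric 2nd order kernel

def volterra2(x, h1, h2):
    """Discrete 2nd order Volterra model:
    y[n] = sum_k h1[k] x[n-k] + sum_{k1,k2} h2[k1,k2] x[n-k1] x[n-k2]."""
    M = len(h1)
    y = np.zeros_like(x)
    for n in range(len(x)):
        # Vector of lagged inputs x[n], x[n-1], ..., x[n-M+1] (zero before t = 0).
        xl = np.array([x[n - k] if n - k >= 0 else 0.0 for k in range(M)])
        y[n] = h1 @ xl + xl @ h2 @ xl
    return y

x = rng.standard_normal(64)
y = volterra2(x, h1, h2)

# The 2nd order part scales with c^2 (cf. eq. 3-4 below),
# so y(cx) != c*y(x) unless h2 = 0.
print(np.allclose(volterra2(2 * x, h1, h2), 2 * y))   # False: the model is nonlinear
```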
3.2 Wiener's Representation of Nonlinear Systems

While the Volterra approach may be a viable theoretical approach to nonlinear systems analysis, it presents problems in physiological applications. The two main reasons are the difficulty of measuring Volterra kernels and the possibility that the functional expansion diverges even when the output is known to be bounded. To alleviate these problems, Wiener [17] derived a new representation implementing modified Volterra functionals called G-functionals. These G-functionals may be either homogeneous or nonhomogeneous and possess a special orthogonality property when the input is Gaussian white noise. A functional is homogeneous if a change in the input amplitude produces a change in the output amplitude but does not modify the output waveform; a homogeneous functional satisfies

$H_n[cx(t)] = c^n H_n[x(t)], \quad c = \text{const.}$   (3-4)

On the other hand, a pth degree nonhomogeneous Volterra functional is a sum of homogeneous functionals of degree p and lower. It is these series of nonhomogeneous Volterra functionals, $g_n[k_n, k_{n-1}^{(n)}, \ldots, k_0^{(n)}; x(t)]$, that define the G-functionals according to Schetzen [13]. The Wiener G-functionals may be derived by the following orthogonalization procedure (the overbar denotes a time average):

$\overline{H_m[x(t)]\, g_n[k_n, k_{n-1}^{(n)}, \ldots, k_0^{(n)}; x(t)]} = 0, \quad m < n$

Thus the input/output relation may be represented through the orthogonal functional expansion

$y(t) = \sum_{n=0}^{\infty} G_n[k_n; x(t)]$   (3-7)

for a zero-mean Gaussian white noise input with $R_{xx}(\tau) = A\delta(\tau)$. The first few G-functionals are:

$G_0[k_0; x(t)] = k_0$   (3-8)
$G_1[k_1; x(t)] = \int k_1(\tau)\, x(t-\tau)\, d\tau$   (3-9)
$G_2[k_2; x(t)] = \iint k_2(\tau_1,\tau_2)\, x(t-\tau_1) x(t-\tau_2)\, d\tau_1 d\tau_2 - A \int k_2(\tau,\tau)\, d\tau$   (3-10)
$G_3[k_3; x(t)] = \iiint k_3(\tau_1,\tau_2,\tau_3)\, x(t-\tau_1) x(t-\tau_2) x(t-\tau_3)\, d\tau_1 d\tau_2 d\tau_3 + \int k_1^{(3)}(\tau_1)\, x(t-\tau_1)\, d\tau_1,$ where $k_1^{(3)}(\tau_1) = -3A \int k_3(\tau_1,\tau_2,\tau_2)\, d\tau_2$   (3-11)

3.3 Using Wiener's Functionals for a Point-Process Input

Since it is difficult to characterize systems in the nervous system using a continuous Gaussian white noise input, a point process must be implemented. In this case an orthogonal expansion may also be derived. For an input x(t) that is a zero-mean Poisson process, the functional expansion may be written:

$G_0[p_0; x(t)] = p_0$   (3-12)
$G_1[p_1; x(t)] = \int p_1(\tau)\, x(t-\tau)\, d\tau$   (3-13)
$G_2[p_2; x(t)] = \iint p_2(\tau_1,\tau_2)\, x(t-\tau_1) x(t-\tau_2)\, d\tau_1 d\tau_2$   (3-14)

where $p_0$, $p_1$, and $p_2$ are the Poisson kernels and $\lambda$ is the average interarrival rate. However, the diagonal terms, i.e., the kernel values where $\tau_1 = \tau_2$, cannot be determined for these nonhomogeneous functionals. To alleviate this problem, the kernels $p_n$ are found using the cross-correlation technique of Lee and Schetzen [10], as applied to a Poisson process input by Krausz [21]. The first three Poisson kernels are defined as:

$p_0 = \overline{y(t)}$   (3-15)
$p_1(\tau) = \frac{1}{\lambda}\, \overline{y(t)\, x(t-\tau)}$   (3-16)
$p_2(\tau_1,\tau_2) = \frac{1}{\lambda^2}\, \overline{y(t)\, x(t-\tau_1)\, x(t-\tau_2)}, \quad \tau_1 \neq \tau_2$   (3-17)

The higher order kernels may then be calculated from:

$p_n(\tau_1,\ldots,\tau_n) = \frac{1}{\lambda^n}\, \overline{y(t)\, x(t-\tau_1) \cdots x(t-\tau_n)}, \quad \tau_1 \neq \tau_2 \neq \cdots \neq \tau_n$   (3-18)

Note: the discretized version of the Poisson kernels will be referred to as the DPOS kernels.

3.4 Relationship between Volterra and Poisson Kernels

Once the Poisson kernels are found, we may find the Volterra kernels after reordering the terms of the functional expansion. For a second order system, the relation between the Volterra and Poisson kernels is:

$h_0 = p_0 - \lambda \int p_2(\tau,\tau)\, d\tau$   (3-19)
$h_1(\tau) = p_1(\tau) - p_2(\tau,\tau)$   (3-20)
$h_2(\tau_1,\tau_2) = p_2(\tau_1,\tau_2)$   (3-21)

These expressions cannot be used to find the Volterra kernels unless the Poisson kernels are defined over the whole region of support, including the diagonals.
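The cross-correlation estimates in eqs. (3-15) through (3-17) are straightforward to compute. The sketch below is an illustration, not code from the thesis; the simulated system, bin width, impulse rate, and record length are all arbitrary assumptions. It generates a Poisson-like impulse train, passes it through a toy system, and recovers the first order kernel:

```python
import numpy as np

rng = np.random.default_rng(1)

T = 200_000              # number of bins in the record (arbitrary)
lam = 0.05               # mean impulse probability per bin (arbitrary)

# Poisson impulse train (0/1 per bin), then remove the mean.
x = (rng.random(T) < lam).astype(float)
x0 = x - lam

# Toy "black box": linear filter plus a mild squared term.
g = np.exp(-np.arange(30) / 5.0)
lin = np.convolve(x, g)[:T]
y = lin + 0.2 * lin**2

M = 30
p0 = y.mean()                                    # eq. (3-15)
p1 = np.array([np.mean(y[tau:] * x0[:T - tau])   # eq. (3-16): (1/lam) <y(t) x(t-tau)>
               for tau in range(M)]) / lam

# p1 approximates the system's average (first order) response to an impulse;
# the quadratic term and the finite rate bias it slightly away from g.
print(round(p0, 3))
print(np.round(p1[:5], 3), np.round(g[:5], 3))
```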
3.5 The Meaning of Kernels

In nonlinear systems it is very important to understand the significance of the kernels, as they define the dynamic characteristics of the system. The kernels represent the nth order impulse responses of the system for a given input. The zero order kernel $p_0$ for a continuous output signal is a constant equal to the mean output value. It may be interpreted as the portion of the output that does not depend on the input; in other words, it provides information on the behavior of the system related to its initial conditions. The first order kernel $p_1(\tau)$ represents the average response of the system to the input stimulus, regardless of whether or not another stimulus impulse occurs during the observation period T. In a linear system, the first order kernel would be the impulse response, with all higher order kernels equal to zero. In the case of a nonlinear system, the first order kernel is the closest approximation to a linear model, although it is not the response of the nonlinear system to a single impulse; rather, it is the "expected" response of the nonlinear system over a given set of stimuli. The second order kernel $p_2(\tau, \Delta)$, where $\Delta = \tau_2 - \tau_1$, represents the modulatory effect on the system of a pulse occurring $\Delta$ ms before the response (with latency $\tau$) evoked by the current stimulus. The second order kernel produces a systematic facilitation or suppression of the granule cell output and may also be interpreted as a generalized recovery function.

3.6 Summary

This chapter discussed the basis for representing nonlinear systems through functionals. The kernels of these functionals allow nonlinear systems to be defined by their input/output relationship, a typical engineering method of describing the characteristics of a system. While the use of estimated Poisson kernels seems to provide a theoretically robust method for predicting neurophysiological system output, there are many inherent problems in the estimation process. First, a large data length of approximately 10^3 epochs is needed to recover the second order kernels, which may not be practical in real experimental systems. Secondly, nonlinear systems analysis requires a conversion from the Poisson kernels to the Volterra kernels; however, the diagonals of the Poisson kernels do not exist, while the Volterra kernels require the diagonals for mathematical manipulation. Thus the Volterra kernels must be estimated through extrapolation or binning procedures, resulting in prediction errors. Thirdly, there is a high incidence of error in estimating highly dynamic subsystems. These problems play an important role in determining whether nonlinear systems analysis is plausible. While this chapter discussed the more traditional ways of representing nonlinear systems, the next chapter discusses a novel and possibly more efficient way of estimating kernels for neurobiological systems. The importance of kernel estimation cannot be overstated, as kernels play a central role in system identification.

4.0 VOLTERRA MODELS VS. FEEDFORWARD ARTIFICIAL NEURAL NETWORKS

4.1 Introduction

While Volterra models are often limited by insufficient data length or an inability to estimate higher order kernels, Marmarelis and Zhao [18] have been exploring a method to overcome these limitations through the use of artificial neural networks (ANN). This method trains ANN models on the available data to allow indirect estimation of system kernels of arbitrary order. The disadvantage of these ANN models, however, is the lack of a clear methodology for selecting learning parameters and "hidden units" that yield acceptable results in nonlinear system decomposition. To obtain acceptable results, existing Volterra models may be used to select the correct parameters for the ANN models. This study analyzes the basic class of ANN portrayed in Figure 3. The nonlinear mapping of an input vector $x = [x_0, x_1, \ldots, x_M]$ onto the output scalar y is accomplished through the selection of learned "weight" parameters. The weights $\{w_{j,m}\}$ connect the (M+1)-unit input layer (IL) to the K-unit middle layer (ML), i.e., the hidden layer, while the weights $\{r_j\}$ connect the K middle-layer units to the single-unit output layer (OL).
For this particular case, the sample input vector $x = [x_0, x_1, \ldots, x_M]$ represents an epoch of a discrete time series which is operated on by the ANN at each time step to generate the corresponding output y(n).

[Figure 3: Single-output three-layer ANN with feedforward connections]

A nonlinear transformation (activation function) is performed by each unit of the ML and OL, with

$u_j(n) = \sum_{m=0}^{M} w_{j,m}\, x_m(n)$   (4-1)

where the output of the j-th unit of the ML is

$z_j(n) = S_j(u_j(n) - \theta_j)$   (4-2)

and $S_j$ denotes the sigmoid function with threshold $\theta_j$. Subsequently, the output unit may be expressed as:

$y(n) = S_0\!\left(\sum_{j=1}^{K} r_j\, z_j(n) - \theta_0\right)$   (4-3)

While many different sigmoid functions may be used to help predict nonlinearities, polynomial activation functions are quite effective for representing Volterra models of finite order. In addition, it will be shown that the use of polynomial activation functions in ANN (PANN) remains computationally efficient in terms of adaptive training via back-propagation algorithms.

4.2 Fitting a Volterra Model into an Artificial Neural Network (ANN)

As mentioned earlier, we can represent a discrete-time nonlinear time-invariant system using the general Volterra model:

$y(n) = h_0 + \sum_{m} h_1(m)\, x(n-m) + \sum_{m_1}\sum_{m_2} h_2(m_1,m_2)\, x(n-m_1)\, x(n-m_2) + \cdots$   (4-4)

with x(n) as the input and $\{h_i\}$ denoting the kernel functions. If the kernels are expanded on an orthonormal basis $\{b_j(m)\}$ over the interval [0, M], then the Volterra series of eq. (4-4) may be represented as:

$y(n) = c_0 + \sum_{j} c_1(j)\, v_j(n) + \sum_{j_1}\sum_{j_2} c_2(j_1,j_2)\, v_{j_1}(n)\, v_{j_2}(n) + \cdots$   (4-5)

where

$v_j(n) = \sum_{m=0}^{M} b_j(m)\, x(n-m)$   (4-6)

and $c_0, c_1, c_2, \ldots$ are the kernel expansion coefficients ($c_0 = h_0$). The variable $v_j(n)$ is a weighted sum of the input epoch values, equivalent to that of eq. (4-1). For discrete-time Volterra kernels with finite memory M, only a finite number L of basis functions $\{b_j\}$ is needed for exact representation, with $L \le M+1$. If a proper basis $\{b_j\}$ is chosen, then L can be made much smaller than M while retaining acceptable estimation accuracy (Marmarelis [18]). Therefore, the Volterra model may be reduced to a less computationally demanding model, reducing the dimensionality of the nonlinear system approximation to:

$y = f(v_1, v_2, \ldots, v_L)$   (4-7)

The modified Volterra model (MVM) then corresponds to an equivalent neural network, as shown in Figure 4.

[Figure 4: Functional network of the modified Volterra model (MVM) as expressed by equations 4-6 and 4-7]

The middle layer of the MVM acts as a filter bank performing discrete-time convolutions with the input data, as seen in eq. (4-6). The outputs of the filter bank $\{v_j\}$ provide the input to the nonlinearity f(.), which generates the system output y(n) at each time n. In order to compare the two networks of Figures 3 and 4, we must first assume that K = L and $w_{j,m} = b_j(m)$ for all j and m, so that $u_j = v_j$ for all n. Thus the only differences between the two networks are the nonlinear activation functions and the mapping from hidden layer to output. The two networks will be equivalent if and only if

$f(v_1, \ldots, v_L) = S_0\!\left(\sum_{j=1}^{K} r_j\, S_j(v_j)\right)$   (4-8)

The predictive ability of the ANN improves as the number of ML units is increased; however, an exact predictive model may only be obtained as the number of ML units approaches infinity. One aspect of using PANN for kernel estimation that cannot be ignored is the type of input data. The available input data must approach the case of broadband pseudo-random or quasi-white noise signals as closely as possible (Marmarelis [18]); kernel estimation will then be effective for system prediction because the ANN training is exposed to a rich variety of possible inputs. If the available input is not appropriate, the resulting input/output mapping will be biased toward a subspace of all possible inputs.
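To make the network of eqs. (4-1) through (4-3) concrete, here is a minimal forward pass with polynomial activations. This is an illustrative sketch only; the layer sizes, weight values, cubic activation, and zero thresholds are arbitrary assumptions, not the parameters used in the thesis:

```python
import numpy as np

rng = np.random.default_rng(2)

M, K = 9, 3                                  # memory and hidden units (arbitrary)
W = rng.standard_normal((K, M + 1)) * 0.3    # input -> middle layer weights w_{j,m}
r = rng.standard_normal(K) * 0.5             # middle -> output weights r_j
alpha = np.array([0.0, 1.0, 0.4, 0.1])       # polynomial activation coefficients

def poly_act(u):
    # Polynomial activation S(u) = a0 + a1 u + a2 u^2 + a3 u^3; a finite
    # polynomial keeps the network an exact finite-order Volterra model.
    return alpha[0] + alpha[1]*u + alpha[2]*u**2 + alpha[3]*u**3

def pann_output(x_epoch):
    """One forward pass: x_epoch = [x(n), x(n-1), ..., x(n-M)]."""
    u = W @ x_epoch          # eq. (4-1)
    z = poly_act(u)          # eq. (4-2), with thresholds set to zero here
    return r @ z             # eq. (4-3), simplified to a linear output unit

x = rng.standard_normal(M + 1)
print(pann_output(x))
```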
Then the kernel estimation may be effective in system prediction when the ANN training is exposed to rich variety of possible inputs. If the available input is not appropriate then the resulting input-output mapping will be biased to the subspace of all the possible inputs. 4.3 Kernel Estimation through ANN training(KANN) Once the PANN has been trained for Volterra model predictions with the input/output data, the proper parameters must be extracted to perform kernel 21 estimation via ANN training(KANN). If we can represent each analytic function in figure 3 through a Taylor series expansion: Sj (w) — cc0j + c c^ ju +...+(z I j U +... (4-9) then the appropriate Volterra expansion may be shown as: K K - y » ~ S O fp .o X ” ' X 0 i ,*, 0 p X ’" X a:Wi*,* o;^.y,u)r*-M ^ (4-10) p= o 1 /,= 0 /,» o The above expression may be derived from the substitution of equation (4-9) into equations (4-1) and (4-3), where: if tf «J = ■■■*.*, (4-10) m i» 0 n t= 0 Finally, the general expression for Volterra kernels of the equivalent ANN as derived by W ray(t9) is: K kf(m| (^"1 0 22 We may then estimate the kernels from the appropriate output parameters of PANN, namely {wJ > m } and {ij} as well as the coefficients{otjj} of the Taylor series expansions of the activation functions. 4.4 Summary This method seems to offer many real advantages in applications towards experimental situations. First, the PANN kernels do not depend on the type of input and provide far better estimates than the discrete Poisson(DPOS) kernels. Secondly, the second order kernel may be recovered with a random input containing as little as 102 epochs thus greatly alleviating experimental limitations. Thirdly, the calculation of ANN kernels give a direct relationship to the Discrete Volterra kernels (DVS) kernels eliminating the limitations of estimating kernels along the diagonal for the DPOS kernels. The comparison between the two techniques will be discussed extensively in the following sections with regards to nonlinear systems decomposition. 23 5.0 NONLINEAR SYSTEM DECOMPOSITION 5.1 Functional Representations of Discrete Systems In linear system analysis, it is common to implement Fourier and Laplace transforms to simplify the process of system identification. Conversely, in continuous nonlinear systems analysis, it has been shown that Volterra functional expansions may be implemented to simplify the analysis of nonlinear systems (Brilliant(S ) ; George t6 )). However, in physiological nonlinear systems analysis, a discrete representation is needed to conduct system transforms to solve system identification problems. As a result, for discrete systems, we define the functional relation for the n^ order homogeneous Volterra operator by y(n)[ni] = I H n[x(m),hn] ^ Given this n* order expression, we can define the multidimensional expression of the homogeneous Volterra operator discretely as yn[m ,,...,m n] = Hn{x[m,],...,x[mn]} (5-2) / = 2-*-Shn[k1 ,...,kJx[m 1 - k , ] —x[mn - k j (5-3) k . k , where Hn{.} is a discrete-time version of the generalized Volterra operator and h^k,,...,!^] represent the Volterra kernels. Likewise, we can discretize the system transforms for the functionals by implementing the z-transform definitions. 
The 24 following expression exhibits the multidimensional system transform pairs for nonlinear systems: H n (zi,..M Zn) = Z n{hr[m ,...,nn]} = V *" ]£ h ii[k l,...,k n ]z j'lc '- k.— k.— (5-4) hn[m ,...,nn]=:--------- <2«jr l 2 v, v„ (5_ 5) 5.2 SYSTEM ALGEBRA Through the use of discretized definitions for nonlinear systems, we can implement system algebra to combine and decompose any subsystems to obtain the desired black box characteristics. In system algebra, there are essentially three ways of combining nonlinear systems: addition, multiplication and cascading. The addition process involves the same input while adding the black box outputs into an adder. The multiplication process involves the same input while multiplying the black box outputs into a multiplier. Finally, the cascade process involves taking the output of a particular input and inputting that output into another system. Since combining subsystems using addition and multiplication is fairly straight forward, we will only describe in detail the process of cascading subsystems. The process of applying system algebra in a cascade system can be seen below: 25 r - - x[n] A yW B z[n] c Figure 5 Cascade system In this cascade system, the output y[n] of the input x[n] is the input of the subsystem B and the overall system C can be represented as: C = A*B or C[x] = A[B[x]] (5-6) (5-7) In the case of system algebra, it is convenient to introduce the circle operator defined as: Aa*B = Ano(B") (5-8) The use of this notation allows for an efficient representation of functional expansions in order to simplify mathematical manipulation and can be proven by Georgete ) as follows: if 26 L = A2*(Bn + C j L[x] = A2[Bn[x] + Cm [x]] (5-9) y - B n[x] z = C Jx ] then L[x] = A2[y + z] = A2((y + z)2) “ A 2(y2)+ 2 A 2(yz)+ A 2(2 2) substituting back in gives L[x] = A 2 <(B„ [x]])2) + 2 A2 (B„ [x]C„ [x]) + A2((C„ [x])2) If we define (A 2 "(B„Cn ))[x] = A 1(Bm[x]C„[x]) L - A 2(B2) + 2A 2 o(BnCm)+ A 2 »(C2 ) note that A2 o(B*) = A2 *B 0 (5-10) (5-11) (5-12) (5-13) 27 Given this proof we can deduce the order of any overall system by breaking it down into the three different ways of the combining subsystems. The order of each combination of systems can be observed below: 1) The addition process: C=An+Bm (5-14) contains both n^1 and mth order parts. 2) The multiplication process: D=An-Bm (5-15) D[ex] = A„[ex]. B Jex ] = Em + "C[x] (5_16) is a system of order m+n. 3) The cascade process: E=An*Bm (5_17) E[ex] - An[Bm [ex]] = (e“T A„[BJx]] = e^E tx] (5. 18) is a system of order mn. Upon careful implementation of the mentioned definitions and proofs, George,6 ) was able to derive a set of rules to use when combining nonlinear subsystems. The rules are stated below: 28 1) A+B=A+B 8) (A • B)* C = (A* C) • (B* C) 2) A+(B+C)=(A+B)+C 9) A*B * B* A 3) A-B=B-A 10) C* (A + B) (C* A) + (C* B) 4) A*(B*C)=(A*B)*C 11) C* (A • B) (C* A) • (C* B) 1 2 ) A j * B j — B j * A j 13) C,*CA + B) = (C1 *A) + (Ct*B) 5) A*(B*C) = (A*B)*C 6) A-(B+C)=CA-B)+(A-C) 7) (A + B)*C = (A*C) + (B*C) From these sets of rules, it is possible to combine subsystems into an overall system by the use of homogeneous operators. 5.3 Discrete Transforms for Combining Subsystems The operators can then be transformed to the frequency domain by executing a set of transforms. There are essentially four system transforms derived from Georgei6 ): 1. Addition: Ln=An+B„ (5 - 1 9 ) z-transform: L„(zi,...,zn )=An(zi.... zJ+BnCzj,...^) (5-20) 2. Multiplication: Ln+m =An .Bm (5-21) Z-tmnSform. 
i-'n+m (Zl»*"*Zn+m ) — ®m(Zn+l’"*»Z n+m ) (5-22) 3. Cascade: Ln = A,*Bn (5-23) 29 (5-24) 4. General Cascade Lp + q + ...+r = An o(Bp Cq—W r) (5-25) z-transform. Lp + .„+t(Z[,...,zp + „_ fJ) An(Zj***zp»...,zp + ..^,r+ [- • *zp + ..,+ 1) • Bp^zi 2p), c q(zp+i’" ” zp+q) •••W (z z ) ' v «\'6p+ ...r+ l > ^p+.i.+i / To see the proofs for these discrete transforms, please refer to Appendix A 5.4 Sim plified Exam ple Now that we have defined a set of system algebra rules and transform pairs, it will be helpful to depict a simple example of how system algebra may be used to solve various systems analysis problems. Let us suppose that that we are given the following example: x[n] B 1.2,4 yW ^ A z[n] 2 C Figure 6 Simplified example where A is composed of one componentCAJ and B is composed of three components(Bl,B2 ,B4). The above diagram can also be expressed as: 30 Hp Figure 7 Equivalent of simplified example In order to solve the overall transfer function for this cascade system we derive the following expression: C = A 2 * (B, + B2 + B4 ) = A2 o(B ,+ B 2 + B 4)2 (Bt + B 2 +B4XB, + B 2 + B 4) = Bj2 + B j2 + B42 + 2BtB2 + 2B,B4 + 2B2B4 C = A2 o (B,2) + A2 o (B22) + A 2 o (B42) +2A2 o{BjB2) + 2A2 o(B1 B4) + 2A2 o(B,B4) (5-27) (5-28) (5-29) Then we equate the terms of the same order to get the following: 31 C i“ Aa «(Bl a) C3= 2A 2o(BtB2) C4 = A 2o(B*) C5 = 2A2 c(BiB4) Cf i = 2 A 2 o(B2 B4) C8 = A2 o (B*) [(5-30)-(5-35)] By matching up the terms of the same order, we find that the overall equivalent system C is an expansion containing the following orders: C=C2+C3+Gj+C5+C6+C8 (5-36) The next step involves applying the system transforms mentioned previously to obtain the following: C2 (z,, z2) = A2 (Z|, z2 )Bt (z, )B, (z2) Cj(z,,z2,z3) = 2A2(z,,z2 z3) C4(zp z 2,z 3,z 4) = A2(z1 z2,z3 z4)B2(z3 ,z4) [(5-37)-(5-39)] Cj(zl,z2)z3,z4,zJ) = 2A2(z1 ,z2z3 z4zs)Bt(z,)B4(z2,z3,z4,zs) C6(z,,...,z6) = 2 A2 (z,z2, z3 • • • z6 )B2 (Zj, z2 )B4 (z3 , z4, zJ( z6) Cg(zp...,zg) = A2(z, •••z4,zJ --Zj)B 4(zI,...,z4)B4(z3l..M zfi) [(5.40).(5-42)] This example underlies the main points in applying system algebra to any black box system of n orders. Any system of subsystems may be represented as a cascade of subsystems in which specified rules can be applied to obtain the desired results. If this method of functional representation analysis was not implemented, 32 calculations would have to be implemented in the time domain creating rigorous computational tasks as well as tedious mathematical representations. To further understand this method and to examine this example expressed mathematically in the time domain, please refer to appendix B. 5.5 Representing Feedback Systems in System Algebra Often times in system identification, we are faced with various feedback and feedforward subsystems. Representing these types of systems in an equivalent form of one of the three system algebra combinations( addition, multiplication, cascade) is not difficult in system algebra. Let us take, for example, a simple feedback system as shown on the left side of figure 8. Given this feedback model we hope to find the characteristics for overall equivalent system using system algebra as shown on the right side of figure 8 . y(t), m Figure 8 Nonlinear feedback system Therefore we must break this system down into a combination of addition, multiplication and cascade subsystems. Consequently, the feedback system may also be represented in figure 9 as shown below: 33 y(t) x(t) x(t) n L| B*A z(t). 
► T T y(t) Figure 9 Equivalent system where B*A represents the cascade component in which the output of A becomes the input into B. We can then break down the cascade of B*A and represent it as a simple black box G giving: x(t) — ---------------- L f ~ B * A ~ x(t) z(t) Figure 10 System G Finally we can represent the overall equivalent system as a system of cascade subsystems. x(t) z(t) y(t), x(t), y(t). Figure 11 Com bination of A and G 34 Thus, we can model many different types of feedback and feedforward systems using this systems algebra approach. This method allows us to break down more complicated systems into easily manageable subsystems in which we can perform efficient mathematical manipulations. 35 6.0 THEORETICAL DECOMPOSITION OF THE HIPPOCAMPUS 6.1 Introduction Kosanovicw applied the rules of system algebra to perform system characterization in the brain structure known as the hippocampus. As mentioned earlier, continuous Gaussian white noise was not possible as an input in this physiological system, so a Poisson process was implemented. The dentate gyrus is a system in the hippocampus with many intemeurons that provide a variety of inhibitory feedforward and feedback connections. While the dentate gyrus is reasonably easy to record from, it is very difficult to record from and characterize the inhibitory influences of the intemeurons known as basket cells. Thus the need arises to implement system algebra to characterize these intemeurons through nonlinear decomposition. While we can stimulate and record from the overall equivalent system, it is also possible to isolate the feedthrough system as well by eliminating the inhibitory intemeuron influences pharmacologically. As a result, we use nonlinear decomposition to characterize the unobservable inhibitory neurons. The nonlinear system may be modeled as shown in figure 12: 36 B E Figure 12 Feedback model of the dentate gyrus where A represents the dentate gyrus and B represents the inhibitory intemeurons that are primarily composed of basket cells. Although there are also feedforward inhibitory intemeurons present, this was the first model of this kind and only a feedback model was implemented for simplicity and to test the potential plausibility of this type of system characterization. 6.2 Decomposition of Dentate Gyrus To perform decomposition of the feedback component, the kernels for the equivalent and feedforward subsystems are needed. In this case, theoretical decomposition was performed to calculate the kernels for the unobservable element B up to the third order. The theoretical process of decomposing the feedback kernels for B will be shown below. Given the described model we can represent y(t) and z(t) with respect to the input x(t) in functional form as shown below: 37 y(t) = E[x(t)] = A[z(t)] 2 (1) = G[x(t)] = x(t)-B[y(t)] (6-1) (6-2) (6-3) These expressions can then be simplified using cascade notation: E=A*G (6-4) G=I-B*E (6-5) Expansion of these two equations gives: E = (Aj + A2+*,,)*(Gj + G2+” -) (6* 6) — (A| *G t + A t * G 2 + ••*) +(A , "(Gf) + 2A 2 . ( 0 , 0 , ) + A , « (0 5 )+ ...) 
+(Aj o(G f)+3A , o(GfG2)+ 3 A , o(G,G^) + A , °(G ’ ) + --) • » * and; G = I - (Bt + B2+- • ■ •)* (Et + E2+* • •) (6-7) = I — (Bj * Ej + Bj * E2 + • • •) -(B j »(Ef) + 2Bj .(E ,E j)+ B 2 o( E |) + - ) - (B3 ° (Ef) + 3BS o (EjEj) + 3B3 o (E jE j) + B3 o (E j)+ • • •) Matching terms up to the 3rd order gives: Et = Aj*Gj E2 = A,*G2+ A 2 o(Gf) [(6-8H6-10)] E3 = Aj*G3*+2A2 o(G,G2) + A 3 o(G’) and 38 G , = I " B ^ E j G2 = -B , *E 2 - B 2 o(Ef) [(6-ll)-6-13)] G3= - B 1 *E3-2 B 2 o(EiE2) - B 3o(E1 3 ) After applying the multidimensional Laplace transforms, we can obtain the following kernel expressions: E t (s) - A^sJGjfs) (6-14) E2(Sj,s2) = A^(Si + s2)G2(st,s2) + A 2(s, , s2)G 1(s1)G 1(s2) (6-15) E 3(s1, s2,s3) = Aj(Sj + s2 + s3 )G3(sL ,s2,s3 ) + 2 A 2(s1 , s2 + s3)G 1(s1)G 2(s2, s3) +A3(s„s2,s3 )GI(st)G1 (s2)G1 (s3 ) and G, (s) = 1 - Bt (s)Ej (s) (6-17) ^ 2 (sps2) = B[ (S| + s2)E 2(sp s2) — B2 (Sj , s 2 )Ej (S j )E t (s 2 ) (6-18) ^j(®lt®2'®j) — ®l(®l " ^ " ® 2 " i" ^3 ) ^ 3 0 * 1 ) — 2B2(Sj,s2 + s 3)E 1(st)E2(s2,s3) (6-19) - B J(sltsa.s,)E 1(s1 )E1 (s2)Et(s3) Solving for B we get the Laplace transforms for the unknown feedback kernels: b iW = T 7 T + -?7T A,(s) Ej(s) (6-20) 39 g /s s \ = Aa(stlsa)______________Bafe,tSa)______ A[(st)At(s2)Aj(Sj + s2) E^s^EjCSjJEjfSj + s2) B (s s s ) = __________ 2 A 2(s ,,s 2 + s3)A 2(s2,s 3)_________ 3' lf 2’ 3' A ,(s,)A ,(s2) A t (s3)A ,(s2 + s3)A j(s, + s2 + s3) ___________ A 3(sM s2,s 3) A i(S i)A ,(sJt)A t (s3)A 1(s, + s 2 + s 3) ^ _________ 2 E 2(slts2 + s3)E 2(s2,s 3)_________ E i(Sj)E 1(s2)E 1(s3)E j(s2 + s3)E 1(s1 + s2 + s3) __________ E 3(S|,S2,S3)__________ Ei (s( )E , (s2 )Ej (s3)E i (s( + s 2 + s3) (6-21) (6-22) Once the feedback kernels have been calculated in the frequency domain, the inverse multidimensional Laplace transforms may be taken to calculate the time domain feedback kernels in the first, second and third orders. 7.0 ANALYSIS OF DATA SETS While the theoretical decomposition of the hippocampus may seem relatively straight forward, applying this theoretical approach to actual experimental applications was not For instance, variance in the kernel estimates could never be reduced properly because the amount of experimental data was often insufficient In addition, there are also limitations concerning the difficulty in estimating the diagonals for Poisson kernels. Despite the difficulties with the experimental decomposition, preliminary results were obtained that seem to verify an underlying basis for feedback inhibition. In decomposing the feedback component, actual data from in vitro experiments were used to perform system algebra for various experimental data sets. Decomposition was performed in an ANSI C language program and built around the sigp package by Kosanovic t9 ) which was designed to easily manipulate multidimensional structures. However, the results that were specified were somewhat inconclusive which resulted in the need for further analysis of the data. Two main topics will be discussed to better understand the theory behind nonlinear theoretical decomposition , as well as, results from other simulated data sets. These topics include filtering effects and the meaning of the second order kernel responses which may suggest only a linear feedback influence from the basket cells. 7.1 Experimental Results Each input data set consisted of first and second order kernels for the equivalent and feedforward systems calculated through an experimental procedure(Berger et al.< 2 > , Berger et a l.( 3 } ). 
Examples for the first and second order kernels of the equivalent and feedforward systems may be seen in figures 13-18. The kernels observed in these figures were binned to help overcome noisy 41 experimental preparations. Two types of measurements were used to obtain these input data sets. For thicker slices, alphaxalone was added to emphasize the influence of the basket cells to get the equivalent system. Alphaxalone pharmacologically acts to accentuate the inhibition of the basket cells. To obtain the feedforward system, bicucculine was added along with alphaxalone. Bicucculine acts as a receptor blocker for the basket cells which suppresses the inhibition. For thin slices, alphaxalone was added to emphasize the influence of the basket cells to get the equivalent system while no alphaxalone was added to isolate the feedforward system. This was because the presence of basket cells in the thinner slices was not sufficient enough to elicit a response without initially adding alphaxalone. These different type of measurements did not alter the way in which decomposition was performed, only the amount of inhibitory influence. Most of the data sets primarily exhibited a high response for the first order equivalent and feedforward systems in the first 10 ms bin and minimal responses for the latter bins. The concluding hypothesis showed that the unobservable element B provided inhibition to the feedforward element A whenever the output system was facilitated with average interval pulses of A < 100 ms. In addition, the first order input kernels provided data for t e [0,1000ms] and second order kernels for te[0,100m s],A s[0,1000m s] The resulting feedback kernels were computed with the same parameters. The following figures and graphs display a typical experimental data set in which the feedthrough and equivalent system kernels were found and used to decompose the feedback kernels. 42 B |E ; Figure 13 Simple feedback model of dentate gyrus 0.1 0.08 0.06 | 0.04 0.02 0 Figure 14 First order kernel of equivalent system for z26 data set z 2 6 » E q u iv a le n t S y s t e m ( l a t o r d e r ) 10 20 30 40 50 60 70 80 90 100 2 2 6 - E q u iv a le n t S y s t e m (2 n d o r d e r ) -0,04 400 600 800 1000 d«ll[nv] Figure 15 Second order kernel of equivalent system for z26 data set 43 x(t) + z(t) -► © — A y(t) ( 1 Figure 16 Feedthrough element z 2 6 • F e e d fo r w a r d E le m e n t ( 1 s t o r d e r ) 0.1 O .O Q a.oe ■ I 0.04 - 0.02 10 20 30 40 50 60 70 80 90 100 tn [ m i [ Figure 17 First order kernel of feedthrough element for z26 data set z 2 6 • F e e d t h r o u g h E le m e n t (2 n d o r d e r ) - - - ■ - - - 1 - - - ■ ---------- 1 - - - f 0.04- 0.02 f 0 -0 .0 2 - -0 ,04- 200 800 0 600 1000 d * k a ( m a | Figure 18 Second order kernel of feedthrough element for z26 data set 44 y(t) Figure 19 Decomposed feedback element z 2 S - F e e d b a c k E le m e n t ( 1 s t o r d e r ) E 0 10 ' 20 ' 30 ' 40 ' 50 ' 60 ' 70 ' 80 ' 90 100 ta u ( r m | Figure 20 First order kernel of decomposed feedback element for z26 data set z 2 6 - F e e d b a c k E le m e n t { 2 n d o r d e r ) -0 .0 2 - 200 400 600 800 1000 4 * k J ( m a ] Figure 21 Second order kernel of decomposed feedback element for z26 data set 45 z 2 6 - F e e d b a c k E le m e n t ( 1 s t o r d e r ) ( f r e q u e n c y d o m a in ) 5 * T > . e D - 10- -15 •20 Figure 22 Magnitude response of first order kernel of feedback element for z26 data set z 2 6 • F e e d b a c k E le m e n t (2 n d o r d e r ) ( f r e q u e n c y d o m a in ) - 1 0 - . 
7.2 Filtering Effects

The decomposed feedback kernels were plotted in the time domain as well as in the frequency domain (please refer to figures 19-23). This was done in order to better understand the results of the decomposition and to characterize the filtering properties of the respective subsystems. The frequency domain plots were computed by taking the Fast Fourier Transform (FFT) of the feedback kernels and plotting the results in decibels.

When Kosanovic(8) performed the decomposition on the experimental data, only 3 of the 7 data sets displayed an evident high pass filtering effect for the feedback first order kernels. In this context, a high pass filtering effect in the feedback element means that high-frequency stimulation of the feedforward element will induce an inhibitory effect, provided by the feedback system, on the equivalent system. Therefore, when the interstimulus intervals are small enough to produce facilitation in the feedforward system, an active inhibition will be provided by the feedback system. Since only 3 of the 7 data sets exhibited an evident high pass filtering effect, it was difficult to consistently characterize the inhibitory feedback system as a high pass filter.

In attempting to understand these results, it was observed that a fairly high incidence of noise occurred in the feedback kernels when decomposition was performed. This can be explained by noise in the input data as well as by the loss of accuracy when performing fast Fourier transforms. As a result, the first step in interpreting the results was to reduce the noise. With the type of random impulse trains used, population spike responses most often occur within the first 10 ms, so the bins at the longer intervals may often be considered negligible. Consequently, only the first few bins were used to calculate the FFTs, while the later bins were minimized. Although one may argue that this severely alters the signal, the results show that it essentially gives a smoother fit to the unaltered frequency domain plots. Minimizing the noise in the later bins produced a striking result: 6 of the 7 data sets then showed a high pass filtering effect (please refer to figures 24-30). Although the amount of high pass filtering is relatively small in some cases, it allowed a more interpretable explanation of the decomposition results.

To further understand the data sets and the effects of high pass filters when dealing with histograms, simulations were conducted to examine the factors that determine the degree of filtering, in particular the cutoff frequencies and passband amplitudes. Various manipulations of the initial bins were used to understand the conversion from the time domain to the frequency domain. It was found that when the first bin was equal in magnitude but opposite in sign to the second bin, the steepest high pass effect was observed. If the initial bins have the same sign, the corresponding frequency domain result exhibits a low pass filtering effect. These results correspond to the difference equations discussed in Chapter 2 concerning linear systems.
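The dB-scale FFT and the latter-bin noise suppression described above can be sketched as follows. This is a minimal illustration, assuming a binned first order feedback kernel of 100 bins; the kernel values and the choice of keeping the first four bins (as in figures 24-30) are hypothetical.

```python
import numpy as np

def magnitude_db(kernel, n_fft=1024):
    """Magnitude response of a binned kernel, in decibels."""
    return 20.0 * np.log10(np.abs(np.fft.rfft(kernel, n=n_fft)) + 1e-12)

# Hypothetical binned first order feedback kernel (100 bins of 10 ms)
rng = np.random.default_rng(1)
b1 = np.zeros(100)
b1[0], b1[1] = -0.08, 0.05              # opposite signs in the first two bins
b1 += 0.005 * rng.standard_normal(100)  # noise dominating the later bins

b1_clean = np.zeros_like(b1)
b1_clean[:4] = b1[:4]                   # keep the first four bins, zero the rest

H_orig = magnitude_db(b1)
H_clean = magnitude_db(b1_clean)        # a smoother fit to the original response
```

With the first two bins opposite in sign, H_clean rises with frequency (a high pass characteristic); making them the same sign flips the result toward a low pass characteristic, consistent with the simulation observations above.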
Consequently, one should be able to predict from the time domain what kind of filtering characteristics a first order kernel will have.

[Figure 24: Magnitude response of original first order kernel vs. noise reduction for the latter bins in z26 data set]
[Figure 25: Magnitude response of original first order kernel vs. noise reduction for the latter bins in z36 data set]
[Figure 26: Magnitude response of original first order kernel vs. noise reduction for the latter bins in z68 data set]
[Figure 27: Magnitude response of original first order kernel vs. noise reduction for the latter bins in z45 data set]
[Figure 28: Magnitude response of original first order kernel vs. noise reduction for the latter bins in z42 data set]
[Figure 29: Magnitude response of original first order kernel vs. noise reduction for the latter bins in z41 data set]
[Figure 30: Magnitude response of original first order kernel vs. noise reduction for the latter bins in z75 data set]

7.3 Decomposition of Simulated Input Data

In order to verify the decomposition method as a physically realizable model for simple feedback systems in the hippocampus, numerous simulations were conducted to examine the effects of inhibition on the granule cells of the dentate gyrus. This was done to check the legitimacy of the decomposition program and to understand how different nonlinear inputs influence the calculation of the feedback kernels. For these particular simulations, only the z26 data set was used. The first and second order kernels of the equivalent system were altered to test the prediction that manipulations of these kernels would produce corresponding changes in the decomposed feedback kernels; no changes were made to the feedforward kernels. The general hypothesis is that if the equivalent system kernels are made considerably smaller in magnitude than the feedforward system kernels, the decomposed feedback kernels should exhibit a strong inhibitory influence. Conversely, if the magnitudes of the equivalent system kernels and the feedforward kernels are approximately equal, the decomposed feedback kernels should be minimal. However, when decomposition was performed on these simulated input data, the output did not match the predicted behavior. Since the results did not confirm the hypothesis, many more questions must be answered in order to understand this feedback model.

7.4 Interpreting the Second Order Feedback Kernel Response

The first order kernel represents the linear response to a stimulus when the higher order kernels are zero, while the second order kernel indicates the degree of nonlinearity in a subsystem. From figure 21 and other data sets not shown, the second order responses were observed to be quite small, which suggests that the feedback element may be acting predominantly linearly.
These results led to the development of a nonlinear feedforward-linear feedback model:

[Figure 31: Nonlinear feedthrough-linear feedback model]

This system can be modeled in the same way as the nonlinear feedforward-nonlinear feedback model:

y(t) = E[x(t)] = A[z(t)]    (6-1)
z(t) = G[x(t)] = x(t) - B[y(t)]    (6-2)

Simplifying:

E = A*G    (6-3)
G = I - B1*E    (6-4)

Expansion of these two equations gives:

E = (A1 + A2 + ...)*(G1 + G2 + ...)    (6-5)
  = (A1*G1 + A1*G2 + ...) + (A2∘(G1²) + 2A2∘(G1G2) + A2∘(G2²) + ...) + (A3∘(G1³) + 3A3∘(G1²G2) + 3A3∘(G1G2²) + A3∘(G2³) + ...) + ...

and:

G = I - (B1)*(E1 + E2 + ...)    (6-6)
  = I - (B1*E1 + B1*E2 + ...)    (6-7)

Again, matching terms up to the 3rd order gives:

E1 = A1*G1
E2 = A1*G2 + A2∘(G1²)    [(6-8)-(6-10)]
E3 = A1*G3 + 2A2∘(G1G2) + A3∘(G1³)

and

G1 = I - B1*E1
G2 = -B1*E2    [(6-11)-(6-13)]
G3 = -B1*E3

After applying the multidimensional Laplace transforms, we obtain the following expressions:

E1(s) = A1(s)G1(s)    (6-14)
E2(s1,s2) = A1(s1+s2)G2(s1,s2) + A2(s1,s2)G1(s1)G1(s2)    (6-15)
E3(s1,s2,s3) = A1(s1+s2+s3)G3(s1,s2,s3) + 2A2(s1,s2+s3)G1(s1)G2(s2,s3) + A3(s1,s2,s3)G1(s1)G1(s2)G1(s3)    (6-16)

and

G1(s) = 1 - B1(s)E1(s)    (6-17)
G2(s1,s2) = -B1(s1+s2)E2(s1,s2)    (6-18)
G3(s1,s2,s3) = -B1(s1+s2+s3)E3(s1,s2,s3)    (6-19)

The first order feedback kernel takes the same form as in the nonlinear feedforward-nonlinear feedback model:

B1(s) = 1/E1(s) - 1/A1(s)    (6-20)

However, to test the linearity of our model we can test the following expression:

A2(s1,s2) / [A1(s1)A1(s2)A1(s1+s2)] = E2(s1,s2) / [E1(s1)E1(s2)E1(s1+s2)]    (6-21)

which is derived by setting the second order feedback kernel to zero. If the left hand side is approximately equal in magnitude to the right hand side, we can conclude that the feedback subsystem acts essentially as a linear subsystem. Applying this test may allow us to draw conclusions about the feedback component while verifying the experimental data.
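A numerical form of the linearity test (6-21) can be sketched as follows. This is a minimal illustration, assuming first and second order kernel estimates a1, a2 (feedforward) and e1, e2 (equivalent) are already available as sampled arrays; the names, shapes, and FFT grid are hypothetical.

```python
import numpy as np

def linearity_test(a1, a2, e1, e2, n_fft=256):
    """Compare A2/(A1 A1 A1) with E2/(E1 E1 E1) on a 2-D DFT grid.

    If the two ratios are close in magnitude everywhere, the second order
    feedback kernel is approximately zero, i.e., the feedback is linear.
    """
    A1 = np.fft.fft(a1, n_fft)
    E1 = np.fft.fft(e1, n_fft)
    A2 = np.fft.fft2(a2, (n_fft, n_fft))
    E2 = np.fft.fft2(e2, (n_fft, n_fft))

    idx = np.arange(n_fft)
    s1, s2 = np.meshgrid(idx, idx, indexing="ij")
    s12 = (s1 + s2) % n_fft            # index of s1+s2 on the periodic DFT grid

    lhs = A2 / (A1[s1] * A1[s2] * A1[s12])
    rhs = E2 / (E1[s1] * E1[s2] * E1[s12])
    return np.max(np.abs(lhs - rhs))   # ~0 implies an essentially linear feedback
```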
7.5 Summary

In this chapter, we presented the initial results of performing nonlinear system decomposition. While the initial results provided some moderate success in applying this method to the hippocampus, they also introduced new questions concerning data interpretation and the robustness of the method for practical experimental applications. Because of the numerous questions raised by the experimental decomposition process, much more analysis is needed to assess the practical applicability of this approach. Consequently, a new model was developed, discussed next, to study the effects of nonlinearity and noise on a feedback model.

8.0 Computational Implementation of a Feedback Model

8.1 Introduction of a Simulated Model

Due to the limited success in decomposing the feedback kernels from experimental data, it became necessary to develop a simulated feedback model with which to investigate the robustness of kernel estimation techniques in performing nonlinear decomposition. These kernel estimation techniques often depend upon the degree of nonlinearity, the noise, and the particular random input patterns. To study the influence of these factors on a feedback system, a computational model with known black box characteristics was developed to compare the ability of two kernel estimation techniques to recover the feedback kernels: Sclabassi and Krausz(20,21) (DPOS), and Marmarelis and Zhao(18) (PANN). After the kernels are found under zero-noise conditions, varying amounts of noise may be incorporated into the feedback model to test the effects of noise on the kernel calculation techniques. To test these factors, the following feedback model was implemented:

[Figure 32: Simulated feedback model, with dashed lines denoting the overall feedthrough (A) and feedback (B) black box subsystems]

The function g(t) represents the linear feedthrough component, whose output then enters a squaring function; this creates an overall nonlinear feedthrough component of order 2. The output y(t) of the feedthrough component then acts as the input to the linear feedback component f(t), whose output is also squared to create an overall nonlinear feedback component. Figure 32 therefore denotes a typical nonlinear feedback model with a feedforward component of order two and a feedback component of order two. However, the overall equivalent system diverges toward infinite order as the input length increases; this will be shown to present problems both in kernel estimation and in nonlinear system decomposition. For this initial study the following functions were used for g(t) and f(t), respectively:

[Figure 33: Actual feedthrough component 1st order kernel, g(t) = C1(e^(-t/τ1) + e^(-t/τ2))]
[Figure 34: Actual feedback component 1st order kernel, f(t) = C2(e^(-t/τ1) - e^(-t/τ3))]

The chosen functions simulate possible excitatory postsynaptic potentials (EPSPs) in the nervous system. The input data x(t) to the feedback model consist of a random impulse train of ones and zeros, a typical pulse stimulus in neurobiological experiments. The random occurrence of a pulse at a given time step of the input train is set according to a chosen probability λ, i.e., a Poisson input. The input x(t) enters the black box g(t) to produce the output v(t), which may be found through simple convolution. The output v(t) then enters a nonlinear black box according to the equation v + βv², producing y(t). The output y(t) subsequently provides the input for the linear feedback component f(t), whose output r(t) enters the nonlinear feedback component α1r + α2r². After the simulation has produced the overall y(t), the kernel estimation techniques may be applied to the input/output data. For the case of a linear system, we expect the overall equivalent system impulse response to look like:

[Figure 35: Overall linear impulse response of the equivalent system]

This computational model is user-interactive and allows a variety of factors to be manipulated. First, as mentioned earlier, the input data x(t) may be set according to the probability λ. Secondly, the degree of nonlinearity for both the feedthrough and feedback components may be changed according to the factors β, α1, and α2.
Thirdly, a variable time step delay may be set for the desired feedback influence. Lastly, various amounts of noise may be incorporated into the output y(t) according to a selected signal-to-noise ratio, to compare the effects of different noise levels on the kernel estimation techniques. The signal-to-noise ratio is defined as

SNR = 10 log10( m²_s / σ²_N )    (8-1)

where m²_s is the average power of the signal s(t) and σ²_N is the variance of the noise.

For the initial studies, λ was chosen to be 0.1 and the time step delay of the feedback component was set to one. After setting these initial conditions, various factors were varied to test their effects on the kernel estimation procedures for the feedback model, and different nonlinearities and inputs were manipulated to test their effects on nonlinear decomposition. Once the inputs and outputs have been computed from the computational feedback model, we may estimate the kernels of the feedthrough component as well as of the equivalent system. While kernel estimation may be computed in a number of different ways, the two relevant techniques studied here are those mentioned in previous chapters (Sclabassi and Krausz(20,21); Marmarelis and Zhao(18)).

8.2 Kernel Estimation Results and Comparisons

As mentioned earlier, kernel estimation of the feedthrough component and the equivalent system was computed from input/output data generated by the computer-simulated feedback model of figure 32. The parameters chosen for this simulation were:

# of pulses: 213
data length: 2000
α1, linear feedback influence: 1
α2, nonlinear feedback influence: .05
β, nonlinear feedthrough influence: .2
λ, probability of an input pulse: .11

Table 1 Selected parameters for the simulated feedback model

The chosen parameters specify a moderately nonlinear feedthrough and a slightly nonlinear feedback system, free of noise. A sketch of the simulation loop itself is given below.
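The sketch assumes the Table 1 parameters; the exponential shapes standing in for g(t) and f(t) of figures 33-34 and their time constants are hypothetical, and the loop is written for clarity rather than speed (it is not the thesis software).

```python
import numpy as np

rng = np.random.default_rng(0)

# Table 1 parameters
N, lam = 2000, 0.11                  # data length, probability of an input pulse
alpha1, alpha2, beta = 1.0, 0.05, 0.2
D = 1                                # feedback time step delay

t = np.arange(80.0)
g = 0.25 * (np.exp(-t / 5.0) - np.exp(-t / 2.0))   # hypothetical feedthrough shape
f = 0.12 * (np.exp(-t / 10.0) - np.exp(-t / 3.0))  # hypothetical feedback shape

x = (rng.random(N) < lam).astype(float)            # random impulse train (Poisson input)

y = np.zeros(N)                      # overall output
s = np.zeros(N)                      # delayed feedback signal subtracted from x
for n in range(N):
    # v(n): linear feedthrough g applied to the effective input x - s
    v = sum(g[k] * (x[n - k] - s[n - k]) for k in range(min(n + 1, len(g))))
    y[n] = v + beta * v * v          # nonlinear feedthrough: v + beta v^2
    # r(n): linear feedback f applied to y, then the order-2 feedback nonlinearity
    r = sum(f[k] * y[n - k] for k in range(min(n + 1, len(f))))
    if n + D < N:
        s[n + D] = alpha1 * r + alpha2 * r * r     # alpha1 r + alpha2 r^2

# Optional: add output noise at a chosen SNR, per equation (8-1)
snr_db = 20.0
sigma = np.sqrt(np.mean(y ** 2) / 10 ** (snr_db / 10))
y_noisy = y + sigma * rng.standard_normal(N)
```

The resulting input/output pair (x, y) can then be handed to whichever kernel estimation procedure is under test.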
Comparisons of the estimated kernels for the feedthrough component are shown in figures 36-38.

[Figure 36: Actual feedthrough 1st order kernel]
[Figure 37: Estimated feedthrough 1st order kernel using PANN]
[Figure 38: Estimated feedthrough 1st order kernel using DPOS]

The 1st order kernel of the feedthrough component generated by PANN provides a very accurate estimate of the actual feedthrough 1st order kernel. As figure 38 shows, the DPOS kernel provides a poor estimate of the actual first order kernel when the number of pulses in a given input is small; approximately 10³ pulses are required to accurately recover the first order kernel with the DPOS method. The second order kernel responses for the feedthrough component are shown in figures 39-40: figure 39 shows the actual second order kernel, while figure 40 shows the second order kernel calculated by PANN. The DPOS second order kernel could not be calculated from such a small input data length; approximately 4000 random pulses with a very low probability of an event (λ < .001) are required for the DPOS method, resulting in a very long and impractical input data length. For a system of finite order, PANN can produce a very accurate second order kernel estimate with only about 200 input pulses.

[Figure 39: Actual feedthrough second order kernel]
[Figure 40: Estimated feedthrough 2nd order kernel from PANN]

The other components required for system decomposition were the kernels of the equivalent system. Again, the kernels generated by PANN were much better estimates than the DPOS kernels. Although it is difficult to observe from the graphs, the PANN kernel estimates for the equivalent system were not as accurate as those for the feedthrough component, due to the infinite order of nonlinearity in the equivalent system. Figures 41-43 show the first order kernels, while figures 44 and 45 show the second order kernel comparisons. While these estimates are far better than the DPOS estimates, they will be shown to be still inadequate for nonlinear system decomposition. From figures 44-45, it can be seen that PANN had the greatest trouble along the diagonal (τ1 = τ2). In principle this estimate could be improved with a greater input data length or by averaging over different random input sets; such improvements have not yet been implemented due to memory limitations in the PANN software.

[Figure 41: Actual equivalent 1st order kernel]
[Figure 42: Estimated equivalent 1st order kernel using PANN]
[Figure 43: Estimated equivalent 1st order kernel using DPOS]
[Figure 44: Actual equivalent 2nd order kernel]
[Figure 45: Estimated equivalent 2nd order kernel using PANN]

8.3 Nonlinear System Decomposition of the Simulated Feedback Model

Since all the components of the simulated feedback model are known (feedthrough, feedback, equivalent), we can directly compare the results of nonlinear decomposition to the actual feedback kernel. As mentioned before, the decomposed feedback kernel should closely resemble the actual function f(t) used for the feedback component, shown in figure 34. Nonlinear decomposition of the computational feedback model of figure 32 was performed using system algebra, giving the following feedback kernels up to the second order:
B1(s)·e^(-sD) = 1/E1(s) - 1/A1(s)    (8-2)

B2(s1,s2)·e^(-(s1+s2)D) = A2(s1,s2) / [A1(s1)A1(s2)A1(s1+s2)] - E2(s1,s2) / [E1(s1)E1(s2)E1(s1+s2)]    (8-3)

Notice the similarity between these theoretical decomposition results for the simulated feedback model and those for the simple experimental feedback model as derived by Kosanovic(8) in Chapter 6. The only difference in the mathematical derivation is the presence of a delay D, represented by the factors e^(-sD) and e^(-(s1+s2)D) multiplying B1(s) and B2(s1,s2), respectively (George(6)). For this particular case, the delay D was chosen to be 1 ms.

Compared to the actual feedback function in figure 46, the decomposed 1st order feedback kernel obtained using the PANN kernel estimates (fig. 47) provides a far better result than that obtained from the Poisson kernel estimation procedure (fig. 48). However, even though PANN provides fairly good estimates compared to DPOS, the second order decomposed kernel (fig. 51) still could not be recovered from the PANN estimates. The unrecoverability of the second order feedback kernel may be due to several factors. Figure 49 shows the actual second order feedback kernel used in the feedback model, and figure 50 shows the second order kernel obtained by performing decomposition with the actual feedthrough and equivalent system components. Comparing figures 49 and 50, we find slight limitations in performing decomposition with discretized data even when the actual kernels, not the estimates, are used; this can probably be overcome by a higher sampling rate at the expense of computational intensity. A second possible reason for the failure to recover the second order feedback kernel is the small nonlinearity chosen for the feedback component. Further investigation is necessary before firm conclusions can be drawn about the limits of second order feedback kernel recovery.

[Figure 46: Actual feedback function used in simulated model]
[Figure 47: Decomposed 1st order feedback kernel from PANN]
[Figure 48: Decomposed feedback kernel from DPOS]
[Figure 49: Actual feedback 2nd order kernel]
[Figure 50: Decomposed feedback 2nd order kernel from actual equivalent and feedthrough components]
[Figure 51: Decomposed feedback 2nd order kernel using PANN]
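A frequency domain sketch of the first order decomposition (8-2): given sampled first order kernel estimates a1 (feedthrough) and e1 (equivalent system) and the known delay D, the feedback kernel can be recovered as below. The function and array names are hypothetical, and a regularized division would be needed for noisy estimates.

```python
import numpy as np

def decompose_b1(a1, e1, D=1, n_fft=1024):
    """First order feedback kernel from equation (8-2):
    B1(s) e^(-sD) = 1/E1(s) - 1/A1(s)."""
    A1 = np.fft.fft(a1, n_fft)
    E1 = np.fft.fft(e1, n_fft)
    B1_delayed = 1.0 / E1 - 1.0 / A1        # still carries the pure delay e^(-sD)
    b1_delayed = np.real(np.fft.ifft(B1_delayed))
    return np.roll(b1_delayed, -D)          # advance by D samples to remove the delay
```

The division by E1 is exactly where the sensitivity discussed in the summary below enters: at frequencies where |E1| is small, any estimation error in E1 is strongly amplified.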
8.4 Summary of Feedback Model Results

While only one data set has been presented, many other data sets were investigated, with different random inputs and different feedthrough and feedback functions. From these data sets, several observations were made concerning the ability of nonlinear system decomposition to recover the feedback kernel.

First and foremost, decomposition depends heavily on the ability of PANN to estimate the equivalent system kernels, which, as the simulated results show, tend to diverge toward infinite order. This creates problems for nonlinear system decomposition. For example, in the first order derivation of the feedback kernel (8-2), error can be introduced by both the feedthrough and the equivalent systems. However, since PANN can easily estimate the feedthrough component, which is of finite order, we need only be concerned with the kernel estimation error in the equivalent system. Taking the derivative of equation (8-2) with respect to the equivalent system (i.e., its error) gives:

∂B1/∂E1 = -1/E1²

Solving for the actual error in the feedback component B gives:

ΔB1 = -(1/E1²)·ΔE1

Thus, the error in the feedback kernel is the error ΔE1 multiplied by -1/E1², i.e., the sensitivity; a small numerical illustration follows below. This means that values much smaller than one in the frequency domain of the equivalent system may contribute large errors to the decomposition of the feedback kernel. To address this problem, one can compute the frequency response of the equivalent system and filter out the offending bandwidths. The alternative is to train the neural network (PANN) until the desired accuracy is obtained; however, selecting the correct learning parameters for reliable convergence is often difficult.
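A quick numerical illustration of this sensitivity, using hypothetical scalar values for A1 and E1 at a single frequency bin:

```python
# At a frequency where |E1| is small, a small estimation error dE in E1
# produces a large error in the decomposed B1 = 1/E1 - 1/A1.
A1, dE = 2.0, 0.005
for E1 in (1.0, 0.1, 0.01):
    B1_true = 1.0 / E1 - 1.0 / A1
    B1_est = 1.0 / (E1 + dE) - 1.0 / A1
    print(E1, B1_est - B1_true)   # error grows roughly like -dE / E1**2
```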
Secondly, the error in kernel estimation is more tolerable in feedback kernel recovery if the frequency bandwidth of the feedthrough component is greater than that of the feedback component; this helps limit the bandwidth of the equivalent system and gives better results. These conditions, however, are often not controllable in experimental preparations. Decomposition was possible for the presented simulation because the bandwidth of the feedthrough function g(t) was much wider than that of the feedback function f(t); when a simulation was conducted with the chosen f(t) and g(t) reversed, decomposition of the feedback component was not possible without more accurate kernel estimates.

These observations show that decomposition may not be possible in all cases with the available kernel estimation techniques. Estimating the kernels with the DPOS method proved impractical because of the long data length requirements and the highly noisy kernel estimates, which would make system decomposition very difficult without extensive filtering and binning. In addition, conversion from the Poisson kernels to the Volterra kernels presents problems of its own because of the diagonal estimation. Conversely, while kernels generated from a polynomial artificial neural network have some limitations, their strengths make PANN the preferred method for nonlinear system decomposition. The kernels generated via PANN are consistently more accurate and correspond directly to the Volterra kernels, and PANN kernel estimation can be accomplished with a small number of input pulses, a highly desirable property for experimental procedures. The remaining problem with artificial neural network approaches is the absence of a clear method for selecting the learning parameters, which makes it difficult to obtain tolerable kernel accuracy. Despite these difficulties, kernel estimation using PANN appears to offer many positive possibilities for nonlinear system decomposition.

9.0 DISCUSSION

9.1 Conclusion

The theoretical and experimental approach discussed here offers a way to characterize the nonlinear input/output properties of major nodes within the hippocampus. We have represented the input/output activity as the kernels of an orthogonalized functional power series, obtained by recording electrophysiological responses to random impulse trains (Sclabassi(14), Berger(1,2)). Previous models have been studied and new models developed to establish this approach as a necessary and realistic way of analyzing the hippocampal formation. This thesis has addressed several aspects of the approach: 1) establishment of the Volterra, Wiener, Poisson, and PANN kernels as the key components in analyzing nonlinear systems; 2) implementation of the kernels to solve for unknown components in various feedback models using systems algebra; 3) interpretation of the results from a simple feedback model; and 4) development and analysis of a new model to study nonlinearity and different inputs within a feedback model.

9.2 Future Work

There remain many aspects and applications of this approach to be addressed. In the immediate future, more analysis is needed to investigate noise conditions and the limits of second order feedback kernel recovery. As a longer term goal, we must first establish that the approach is theoretically viable for more complex models; we would then face the difficulty of developing more demanding experimental protocols to exploit it, which will inevitably entail isolating hippocampal subsystems through pharmacological or other experimental methods that may not yet be fully practicable. Secondly, in developing more complex and physically realizable models, we must establish simulation and decomposition strategies that also handle the more realistic multi-input/multi-output systems. Thirdly, we may need to investigate other nonlinear methods that handle decomposition in the time domain. While there are many possibilities for future work, the method appears very promising and, despite its limitations, can provide ways of further understanding the hippocampus. It is at least an initial step toward understanding the hippocampal network at a systems level, and it allows us to deduce unobservable elements of a system that may be difficult to examine experimentally.

BIBLIOGRAPHY

1. Berger, T.W., Eriksson, J.L., Ciarolla, D.A. and Sclabassi, R.J. (1988a) Nonlinear Systems Analysis of the Hippocampal Perforant Path-Dentate Projection. II. Effects of Random Impulse Train Stimulation. J. Neurophysiol., 60:1077-1094.
2. Berger, T.W., Eriksson, J.L., Ciarolla, D.A. and Sclabassi, R.J. (1988b) Nonlinear Systems Analysis of the Hippocampal Perforant Path-Dentate Projection. III. Comparison of Random Train and Paired Impulse Stimulation. J. Neurophysiol., 60:1095-1109.
3. Berger, T.W., Harty, P.T., Barrionuevo, G. and Sclabassi, R.J. (1989) "Modeling of Neuronal Networks Through Experimental Decomposition". In: Advanced Methods of Physiological System Modeling, Vol. 2, Marmarelis (ed.), Plenum, New York, pp. 113-128.
4. Berger, T.W., Harty, T.P., Choi, C., Xie, X., Barrionuevo, G. and Sclabassi, R.J. (1994) "Experimental Basis for an Input/Output Model of the Hippocampal Formation". In: Advanced Methods of Physiological System Modeling, Vol. 3, Marmarelis (ed.), Plenum Press, New York.
5. Brilliant, M.B. (1958) "Theory of the Analysis of Nonlinear Systems". Technical Report 345, Research Laboratory of Electronics, Massachusetts Institute of Technology.
6. George, D.A. (1959) "Continuous Nonlinear Systems". Technical Report 355, Research Laboratory of Electronics, Massachusetts Institute of Technology.
7. Kelly, A. and Pohl, I. (1992) C by Dissection. The Benjamin/Cummings Publishing Company, Inc., Redwood City, California.
8. Kosanovic, B.R. (1992) "Theoretical and Experimental Decomposition of Neuronal Structures". Master's Thesis, University of Pittsburgh, Pittsburgh, Pennsylvania.
9. Kosanovic, B.R. (1992) "A Short Introduction to sigp". Center for Neuroscience, University of Pittsburgh.
10. Lee, Y.W. and Schetzen, M. (1965) "Measurement of the Wiener Kernels of a Nonlinear System by Cross-Correlation". International Journal of Control, 2:237-254.
11. Oppenheim, A.V. and Schafer, R.W. (1989) Discrete-Time Signal Processing. Prentice-Hall, Inc., Englewood Cliffs, New Jersey.
12. Oppenheim, A.V. (1983) Signals and Systems. Prentice-Hall International, Inc., London.
13. Schetzen, M. (1989) The Volterra and Wiener Theories of Nonlinear Systems. Robert E. Krieger Publishing Company, Malabar, Florida, reprinted edition.
14. Sclabassi, R.J., Eriksson, J.L., Port, R.L., Robinson, G.B. and Berger, T.W. (1988) "Nonlinear Systems Analysis of the Hippocampal Perforant Path-Dentate Projection. I. Theoretical and Interpretational Considerations". Journal of Neurophysiology, 60:1066-1076.
15. Sclabassi, R.J., Kosanovic, B.R., Barrionuevo, G. and Berger, T.W. (1994) "Computational Methods of Neuronal Network Decomposition". In: Advanced Methods of Physiological System Modeling, Vol. 3, Marmarelis (ed.), Plenum Press, New York.
16. Volterra, V. (1930) Theory of Functionals and of Integral and Integro-Differential Equations. Dover Publications, Inc., New York.
17. Wiener, N. (1958) Nonlinear Problems in Random Theory. The M.I.T. Press, Cambridge, Massachusetts.
18. Marmarelis, V.Z. and Zhao, X. "On the Relation between Volterra Models and Feedforward Artificial Neural Networks". In: Advanced Methods of Physiological System Modeling, Vol. 3, Marmarelis (ed.), Plenum Press, New York.
19. Wray, J. (1992) Theory and Applications of Neural Networks. Ph.D. Thesis, University of Newcastle upon Tyne, United Kingdom.
20. Sclabassi, R.J., Solomon, J.M., Biedka, T., Levitan, S., Krieger, D.N. and Barrionuevo, G. (1992) A Progress Report on the Theoretical Studies in Support of AFOSR Grant 89-0197: A Systems Theoretic Investigation of Neuronal Network Properties of the Hippocampal Formation. University of Pittsburgh.
21. Krausz, H.I. (1975) Identification of nonlinear systems using random impulse train inputs. Biol. Cybernetics, 19:217-230.

APPENDIX A
Expansion of Discrete Transforms for Equivalent Systems

1. Ln = An + Bn

[Figure 51: Addition operator]

Ln[x[n]] = An[x[n]] + Bn[x[n]]
= Σ_{k1}...Σ_{kn} an[k1,...,kn] x[n-k1]...x[n-kn] + Σ_{k1}...Σ_{kn} bn[k1,...,kn] x[n-k1]...x[n-kn]
= Σ_{k1}...Σ_{kn} (an[k1,...,kn] + bn[k1,...,kn]) x[n-k1]...x[n-kn]
From the functionals, we can find the discrete transform of the addition expression for the overall kernels to be:

Ln(z1,...,zn) = An(z1,...,zn) + Bn(z1,...,zn)

2. Ln+m = An · Bm

[Figure 52: Multiplication operator]

Ln+m[x[n]] = An[x[n]] · Bm[x[n]]
= (Σ_{k1}...Σ_{kn} an[k1,...,kn] x[n-k1]...x[n-kn]) · (Σ_{k(n+1)}...Σ_{k(n+m)} bm[k(n+1),...,k(n+m)] x[n-k(n+1)]...x[n-k(n+m)])
= Σ_{k1}...Σ_{k(n+m)} (an[k1,...,kn] · bm[k(n+1),...,k(n+m)]) x[n-k1]...x[n-k(n+m)]

Therefore, l(n+m)[n1,...,n(n+m)] = an[n1,...,nn] · bm[n(n+1),...,n(n+m)].

Using the discrete transform definition, we get:

Ln+m(z1,...,z(n+m)) = Z-transform(n+m){ l(n+m)[n1,...,n(n+m)] }
= Σ_{k1}...Σ_{k(n+m)} (an[k1,...,kn] · bm[k(n+1),...,k(n+m)]) z1^(-k1)...z(n+m)^(-k(n+m))
= An(z1,...,zn) · Bm(z(n+1),...,z(n+m))

3. Ln = A1 * Bn, where * denotes the cascade operator

If u[n] = Bn[x[n]] = Σ_{k1}...Σ_{kn} bn[k1,...,kn] x[n-k1]...x[n-kn] and v[n] = A1[u[n]] = Ln[x[n]], then

v[n] = Σ_i a1[i] u[n-i]

Substituting the expression for u[n] into v[n] gives:

v[n] = Σ_i a1[i] Σ_{k1}...Σ_{kn} bn[k1,...,kn] x[n-i-k1]...x[n-i-kn]

Changing the indices gives:

v[n] = Σ_{k1}...Σ_{kn} ( Σ_i a1[i] bn[k1-i,...,kn-i] ) x[n-k1]...x[n-kn]
= Σ_{k1}...Σ_{kn} ln[k1,...,kn] x[n-k1]...x[n-kn]

Since ln[n1,...,nn] = Σ_k a1[k] bn[n1-k,...,nn-k], the z-transform becomes:

Ln(z1,...,zn) = Σ_{k1}...Σ_{kn} Σ_i a1[i] bn[k1-i,...,kn-i] z1^(-k1)...zn^(-kn)
= Σ_i a1[i] (z1...zn)^(-i) Σ_{m1}...Σ_{mn} bn[m1,...,mn] z1^(-m1)...zn^(-mn)
= A1(z1...zn) · Bn(z1,...,zn)
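A small numerical check of this cascade result for n = 1, where Ln(z1,...,zn) = A1(z1...zn)·Bn(z1,...,zn) reduces to L1(z) = A1(z)B1(z), i.e., ordinary convolution in the time domain; the kernel arrays here are hypothetical:

```python
import numpy as np

a1 = np.array([1.0, 0.5, 0.25])        # first order kernel of A
b1 = np.array([0.2, 0.1, 0.05, 0.0])   # first order kernel of B

# Time domain: the first order cascade kernel is the convolution a1 * b1
l1_time = np.convolve(a1, b1)

# Transform domain: L1(z) = A1(z) B1(z), evaluated on a DFT grid
n = len(l1_time)
L1 = np.fft.fft(a1, n) * np.fft.fft(b1, n)
l1_check = np.real(np.fft.ifft(L1))

assert np.allclose(l1_time, l1_check)  # both routes give the same kernel
```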
4. Lp+q+...+s = An ∘ (Bp, Cq, ..., Ws) represents the general cascade operation.

For v[m] = Lp+q+...+s[x[m]], the expansion may be expressed as:

v[m] = Σ_{i1}...Σ_{in} an[i1,...,in] · (Σ_{k1}...Σ_{kp} bp[k1,...,kp] x[m-i1-k1]...x[m-i1-kp]) · (Σ_{k(p+1)}...Σ_{k(p+q)} cq[k(p+1),...,k(p+q)] x[m-i2-k(p+1)]...x[m-i2-k(p+q)]) ... (Σ_{k(p+...+r+1)}...Σ_{k(p+...+s)} ws[k(p+...+r+1),...,k(p+...+s)] x[m-in-k(p+...+r+1)]...x[m-in-k(p+...+s)])

Changing the indices gives the Volterra kernel of the equivalent system:

l(p+q+...+s)[m1,...,m(p+q+...+s)] = Σ_{k1}...Σ_{kn} an[k1,...,kn] · bp[m1-k1,...,mp-k1] · cq[m(p+1)-k2,...,m(p+q)-k2] ... ws[m(p+...+r+1)-kn,...,m(p+...+s)-kn]

The z-transform of the above expression gives:

Lp+...+s(z1,...,z(p+...+s)) = An(z1...zp, z(p+1)...z(p+q), ..., z(p+...+r+1)...z(p+...+s)) · Bp(z1,...,zp) · Cq(z(p+1),...,z(p+q)) ... Ws(z(p+...+r+1),...,z(p+...+s))

APPENDIX B
Function Equivalent of the Functional Expansion

(1) y(t) as a functional power series with kernels b1, b2, b4:

y(t) = ∫ b1(τ) x(t-τ) dτ + ∫∫ b2(τ1,τ2) x(t-τ1) x(t-τ2) dτ1 dτ2 + ∫∫∫∫ b4(τ1,τ2,τ3,τ4) x(t-τ1) x(t-τ2) x(t-τ3) x(t-τ4) dτ1 dτ2 dτ3 dτ4

Functional representation of y(t):

y(t) = B1[x(t)] + B2[x(t)] + B4[x(t)]

where B1[x(t)] = ∫ b1(τ) x(t-τ) dτ, B2[x(t)] = ∫∫ b2(τ1,τ2) x(t-τ1) x(t-τ2) dτ1 dτ2, and B4[x(t)] = ∫∫∫∫ b4(τ1,τ2,τ3,τ4) x(t-τ1)...x(t-τ4) dτ1...dτ4.

(2) z(t) = ∫∫ a2(σ1,σ2) y(t-σ1) y(t-σ2) dσ1 dσ2

Substitution of the functional representation of y(t) into z(t):

z(t) = ∫∫ a2(σ1,σ2) (B1[x(t-σ1)] + B2[x(t-σ1)] + B4[x(t-σ1)]) · (B1[x(t-σ2)] + B2[x(t-σ2)] + B4[x(t-σ2)]) dσ1 dσ2

z(t) = ∫∫ a2(σ1,σ2) ( B1[x(t-σ1)]B1[x(t-σ2)] + B2[x(t-σ1)]B2[x(t-σ2)] + B4[x(t-σ1)]B4[x(t-σ2)] + 2B1[x(t-σ1)]B2[x(t-σ2)] + 2B2[x(t-σ1)]B4[x(t-σ2)] + 2B1[x(t-σ1)]B4[x(t-σ2)] ) dσ1 dσ2

Actual function representation of z(t): writing each functional in terms of its kernel turns each of the six products above into a multidimensional convolution integral. For example, the B1·B1 term becomes

∫∫∫∫ a2(σ1,σ2) b1(τ) b1(ρ) x(t-σ1-τ) x(t-σ2-ρ) dτ dρ dσ1 dσ2

and the remaining terms expand in the same way, with b2 contributing two lags and b4 four lags per factor.

Total equivalent system of C:

C2 = A2∘(B1²)    C3 = 2A2∘(B1B2)    C4 = A2∘(B2²)
C5 = 2A2∘(B1B4)    C6 = 2A2∘(B2B4)    C8 = A2∘(B4²)

(the subscript of each term is its order: B1B1 gives order 2, B1B2 order 3, B2B2 order 4, B1B4 order 5, B2B4 order 6, and B4B4 order 8)

Therefore, the C expansion may be represented as:

C = C2 + C3 + C4 + C5 + C6 + C8