Close
About
FAQ
Home
Collections
Login
USC Login
Register
0
Selected
Invert selection
Deselect all
Deselect all
Click here to refresh results
Click here to refresh results
USC
/
Digital Library
/
University of Southern California Dissertations and Theses
/
The effects of noise on bifurcations in circle maps with applications to integrate-and-fire models in neural biology
(USC Thesis Other)
The effects of noise on bifurcations in circle maps with applications to integrate-and-fire models in neural biology
PDF
Download
Share
Open document
Flip pages
Contact Us
Contact Us
Copy asset link
Request this asset
Transcript (if available)
Content
THE EFFECTS OF NOISE ON BIFURCATIONS IN CIRCLE MAPS WITH APPLICATIONS TO INTEGRATE-AND-FIRE MODELS IN NEURAL BIOLOGY by John Mayberry A Dissertation Presented to the FACULTY OF THE GRADUATE SCHOOL UNIVERSITY OF SOUTHERN CALIFORNIA In Partial Fulfillment of the Requirements for the Degree DOCTOR OF PHILOSOPHY (APPLIED MATHEMATICS) May 2008 Copyright 2008 John Mayberry Dedication This work is dedicated to my loving parents who have done more for me in my 27 years of life than I could ever hope to deserve. Without them, I would quite literally not be here today! Thanks for everything Mom and Dad. I love you both! ii Acknowledgements Many thanks to my advisor, Dr. Peter Baxendale, for our weekly meetings and numerous other times he has opened his office door for me. Without his wisdom and insight, this work would not have been possible. I cannot thank him enough for all the help he has offered. I would also like to thank Dr. Larry Goldstein, Dr. Sergey Lototsky, Dr. Paul Newton, and Dr. Mohammed Ziane for their time and input. iii Table of Contents Dedication ii Acknowledgements iii List of Tables vi List of Figures vii Abstract ix Chapter 1: Introduction 1 Chapter 2: Gaussian Perturbations of Circle Maps 11 2.1 General Heuristic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 2.2 Before the Period Doubling: Basic Set-up and Main Results . . . . . . . . . . 19 2.3 The Local Story Near a Stable Fixed Point . . . . . . . . . . . . . . . . . . . . 27 2.3.1 Main Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 2.3.2 Asymptotic Expansions and Proof of Theorem 3 . . . . . . . . . . . . 39 2.4 The Local Story Near an Unstable Fixed Point . . . . . . . . . . . . . . . . . . 51 2.4.1 Main Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52 2.4.2 Asymptotic Expansions and The Proof of Theorem 6 . . . . . . . . . . 57 2.5 After the Period Doubling . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62 2.5.1 Stable Period Two Orbit . . . . . . . . . . . . . . . . . . . . . . . . . 62 2.5.2 Notes on the General Case . . . . . . . . . . . . . . . . . . . . . . . . 74 2.6 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75 2.6.1 Stable Fixed Point . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78 2.6.2 Stable Period Two Orbit . . . . . . . . . . . . . . . . . . . . . . . . . 81 2.6.3 Stable Period 4 Orbit . . . . . . . . . . . . . . . . . . . . . . . . . . . 84 Chapter 3: Analysis of Stochastic I-F Models 87 3.1 Basic Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87 3.2 First-Passage Time Densities . . . . . . . . . . . . . . . . . . . . . . . . . . . 94 3.2.1 Non-Leaky Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95 3.2.2 Leaky Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97 3.2.3 Pinned O-U Processes . . . . . . . . . . . . . . . . . . . . . . . . . . 99 3.2.4 The Integral Equation of Durbin . . . . . . . . . . . . . . . . . . . . . 105 3.2.5 Conclusions for Leaky Case . . . . . . . . . . . . . . . . . . . . . . . 114 3.3 The Structure of the Transition Operator . . . . . . . . . . . . . . . . . . . . . 114 3.4 Analysis in the Neighborhood of a Fixed Point . . . . . . . . . . . . . . . . . . 118 3.4.1 Non-Leaky Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119 iv 3.4.2 Leaky Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121 3.5 Questions for Future Research . . . . . . . . . . . . . . 
. . . . . . . . . . . . 124 References 126 Appendix 128 A Hermite Polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128 B Perturbation Theory for Linear Operators . . . . . . . . . . . . . . . . . . . . 133 C Special Block Decompositions of Operators . . . . . . . . . . . . . . . . . . . 143 D Useful Bounds and Expansions . . . . . . . . . . . . . . . . . . . . . . . . . . 146 D.1 Expansions of Transition Densities . . . . . . . . . . . . . . . . . . . . 146 D.2 Some Useful Bounds . . . . . . . . . . . . . . . . . . . . . . . . . . . 150 v List of Tables 2.1 b =1.8: Stable Fixed pointx s and unstable fixed pointx u withc s = f ′ (x s ) andc u =f ′ (x u ). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79 2.2 b =2: Stable Fixed pointx s and unstable fixed pointx u withc s =f ′ (x s ) and c u =f ′ (x u ). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79 2.3 b =2.2: Stable Fixed pointx s and unstable fixed pointx u withc s = f ′ (x s ) andc u =f ′ (x u ). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79 2.4 b =2.3: Stable Period Two Orbit{x 1 ,x 2 }, Unstable fixed pointsy 1 ,y 2 . Note thatc s =f ′ (x 1 )f ′ (x 2 ) andc u,i =f ′ (y i ),i = 1,2. We have usedn = 1200 for noise strength.01 andn = 1600 for noise strength.005. . . . . . . . . . . . . . 82 2.5 b =2.5: Stable Period Two Orbit{x 1 ,x 2 }, Unstable fixed pointsy 1 ,y 2 . Note thatc s =f ′ (x 1 )f ′ (x 2 ) andc u,i =f ′ (y i ),i = 1,2. . . . . . . . . . . . . . . . . 82 2.6 b =2.75: Stable Period 4 Orbit{x 1 ,x 2 ,x 3 ,x 4 } and unstable period two orbit {y 1,1 ,y 1,2 } withc s =f ′ (x 1 )f ′ (x 2 )f ′ (x 3 )f ′ (x 4 ) andc u,1 =f ′ (y 1,1 )f ′ (y 1,2 ) . . . 85 2.7 b =2.81 : Stable Period 4 Orbit{x 1 ,x 2 ,x 3 ,x 4 } and unstable period two orbit {y 1,1 ,y 1,2 } withc s =f ′ (x 1 )f ′ (x 2 )f ′ (x 3 )f ′ (x 4 ) andc u,1 =f ′ (y 1,1 )f ′ (y 1,2 ) . . . 85 vi List of Figures 1.1 Bifurcation diagram for the mapf asb is varied. Iterates started at a random x ∈ [0,2π) and the first 1000 iterates are discarded. Other parameters: γ = .5,a = 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 1.2 Approximation of the first passage-time pdf for the solution todX t = (−X t + 2)dt+.02dW t ,x(0) = .5 across the constant boundaryI(t) = 1. Overlayed is the density for the Gaussian approximation of Proposition 5 in Section 3.4.2 which has mean equal to the passage-time,τ 1 , for the solution to dx dt =−x+2, x(0) =.5 (τ 1 ≈.4055) and variance equal to .02 2 2 (1−e −2τ 1 )≈.01. . . . . . . 8 2.1 The moduli of the top limiting eigenvalues ofT ε asε→ 0 as predicted by The- orem 1 plotted against the parameterb before the first period doubling bifurca- tion point atb = √ 5≈ 2.23. The solid lines are powers ofc s while the dashed lines are negative powers ofc u . . . . . . . . . . . . . . . . . . . . . . . . . . 16 2.2 The moduli of the top limiting eigenvalues ofT ε asε→ 0 as predicted by The- orem 1 plotted against the parameterb after the first period doubling bifurcation point atb = √ 5≈ 2.23 up to the next period doubling atb≈ 2.71. . . . . . . . 17 2.3 Illustration ofλ-bifurcation scenario forf(x) =x+1−bsinx,1.8<b< 2.71. See also Figures 2.1 and 2.2. . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 2.4 Illustration of Proposition 1. . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 2.5 Bifurcation diagram for the family of sine-circle mapsf(x) =x+1−bsin(x). 
The plot shows 100 iterates off started from a random initial pointx 0 ∈ [0,2π] with the first 1000 iterations discarded for 1000 different values of b equally spaced throughout the interval[0,2π]. . . . . . . . . . . . . . . . . . . . . . . 77 2.6 Plots of the left eigenvectors ofP ε (solid lines) corresponding to the top four eigenvalues ofP ε along with the approximate eigenvectors predicted from our theory (dotted lines). See Table 2.2 for the exact values of the corresponding eigenvalues. Parameters: b = 2,ε = .025, andN = 800 (x s ≈ .5236). The eigenvectors are basically flat on all parts of the circle not shown. . . . . . . . . 80 2.7 Plots of the right eigenvectors ofP ε corresponding to its second to fifth largest eigenvalues (solid lines). The approximate eigenvectors predicted from our theory are also plotted for the eigenvectors corresponding to λ ε 3 ≈ c −1 u and λ ε 5 ≈c −2 u (dotted lines). See Table 2.1 for the exact values of the corresponding eigenvalues. Parameters: b = 1.8,ε =.025, andN = 800 (x u ≈ 2.6180). The eigenvectors are basically flat on all parts of the circle not shown. . . . . . . . 81 vii 2.8 Plot of the top four left eigenvectors of P ε (solid lines) with b = 2.3, ǫ = .025, and N = 800 along with approximate eigenvectors expected from our theory (dotted lines). Here,f has a stable period 2 orbit{0.1360,0.8242}. The eigenvectors are flat on all parts of the circle not shown. The noise strength is large enough to ‘blur’ out the region betweenx 1 andx 2 making convergence to the approximated eigenvectors occur at a slower rate (see Figure 2.9). . . . . 83 2.9 Plot of the third and fourth left eigenvectors of P ε (solid lines) with along with the approximations expected from our theory (dotted lines). Parameters: b = 2.3,ǫ =.005, andN = 1600. . . . . . . . . . . . . . . . . . . . . . . . . 84 2.10 Top left eigenvectors of P ε with b = 2.75, ǫ = .01, N = 1600, and stable period 4 orbit{−0.3609,1.6102,−0.1376,1.2396} . . . . . . . . . . . . . . . 86 viii Abstract A stochastic bifurcation is generally defined as either a change in the number of stable invariant measures (dynamical or D-bifurcations) or a change in the qualitative shape of invari- ant measures (phenomenological or P-bifurcations) for a stochastic dynamical system. Some authors have observed that these definitions can fail to capture important information regarding the evolution of certain Markov Chains arising from first-passage-time distributions of stochas- tic differential equations since the definitions deal only with static information about the chain (i.e. information regarding invariant or stationary distributions). In this work we perform a more rigorous investigation of these observations by studying changes to the spectra of transi- tion operators for two different classes of examples of Markov Chains obtained by taking small perturbations of deterministic dynamical systems. The first class deals with small gaussian perturbations of discrete time dynamical systems on the circle while the second class arises naturally from the study of sequences of firing times in noisy integrate-and-fire models for chemical potential in neurons. We show that bifurcations in the deterministic system can often lead to changes in the number of eigenvalues of the transition operator for the corresponding perturbed process which approach the unit circle as the noise intensity goes to 0, a phenomenon we call a λ-bifurcation. 
Although in both classes of examples, the perturbed process always has a unique stationary distribution, these changes in the number of eigenvalues with modulus close to 1 can have significant effects on both the shape of and the rate of convergence to the stationary distribution of the process. ix Chapter 1 Introduction The phenomenon of period doubling bifurcations is now a familiar notion in dynamical sys- tems theory and its applications. One such application can be found in neural biology with the mathematical study of the generation of action potentials. Since the paramount contributions of Hodgkin and Huxley in the late 1940’s and early 1950’s to this field, numerous phenomenolog- ical models capturing the essential firing dynamics of a single neuron have been suggested and used in the analysis of neuronal activity. These models are all based on the simple heuristic that a neuron stores up chemical potential until reaching a certain threshold level at which point the neurons fires, generating an action potential, its chemical potential returns to a specified reset level, and the whole process begins anew. One of the simplest models capturing this behavior is the Integrate-and-Fire model (IFM) in which the chemical potential of a neuron at timet is given by the solution to the differential equation dx dt = −γx+I(t) t 0 ≤t≤τ 1 (1.1) x(t 0 ) = h(t 0 ) subject to the reset conditions 1 x(τ − 1 ) = g(τ 1 ) x(τ + 1 ) = h(τ 1 ) (1.2) whereγ > 0,I,g,h∈C 1 (R) are period functions of time satisfyingh(t)<g(t),∀t, and τ 1 := inf{t>t 0 :x(t) =g(t)} is the first-passage-time of the processx(t) acrossg(t). Here, γ is the membrane resistance, I(t) is an external forcing term, andh(t), g(t) respectively represent the reset and threshold levels for the chemical potential. After timeτ 1 , we continue to runx(t) according to (1.1) until the threshold is again reached at which time we jump according to the condition (1.2). In this way we obtain a sequence of times τ n := inf{t>τ n−1 :x(t) =g(t)} forn≥ 1 withτ ε 0 :=t 0 . We call{τ ε n } ∞ n=0 the sequence of firing times forx(t). It is the sequence of firing times that captures all the pertinent information regarding the dynamic evolution of x(t) and therefore, analysis of this sequence has been a topic of great interest. This was first carried out in detail by Glass and Mackey in 1971 (see [9]) in the limiting caseγ = 0 and various other authors in the case whereγ > 0 including 1. Kenner, Hoppensteadt, and Rinzel : with I(t) = I 0 +Acos(ωt +φ), g(t) = B, and h(t) = 0 ([13]) 2. Coombes, Tateno and Jimbo : withI(t) =I 0 ,g(t) =B +Acos(ωt+φ), andh(t) = 0 ([3], [18]) 2 3. Coombes, Tateno : withI(t) =I 0 ,g(t) =B, andh(t) =Acos(ωt+φ) ([3], [19]) although we should note that the first two models are essentially equivalent (see Remark 8 in Section 3.1). The focus of these investigations is on studying the effects of changes to the constant input I 0 and oscillatory amplitude A on the long term behavior of the sequence of firing times. Since the input, threshold, and forcing are taken to be periodic, this can be done by studying rotation numbers of the corresponding dynamical system θ n :=τ n mod p onS 1 = R\(pZ) where p = 2π/ω. It is shown that for various versions of (1.1) and differ- ent values of the parametersI 0 andA,θ n exhibits a wide range of behaviors including phase locking (periodic response to periodic forcing persisting under small perturbations of param- eters), aperiodic orbits, and even cessation of firing. 
Looking at bifurcation diagrams in this 2D parameter space, we see the occurrence of Arnold tongues, similar to those occurring in the family of sine-circle maps studied, for instance, in [11]. In particular, it is shown in [3] and [19] that for fixed forcing I 0 , as the amplitude A varies, the system undergoes a period doubling route to chaos. To illustrate this procedure, we briefly describe the details in the third case when the forcing term,I(t) =I 0 , and the threshold,g(t) =B, are constant and the reset level,h(t), is allowed to oscillate, sayh(t) = Asin(t +φ 0 ). By shifting time if necessary, we shall without loss of generality assume thatφ = 0. As above, we define the sequenceτ n of firing times recursively through the relationshipτ n+1 = inf{t>τ n :x(t) =B} withτ 0 =t 0 . Furthermore, we assume 3 thatB < I 0 γ so that the solutionx(t) crosses the barrierB in finite time regardless of its starting pointx(t 0 ). Fort∈ (τ n ,τ n+1 ), equation (1.1) has the solution x(t) = x(τ + n )e −γ(t−τn) + I 0 γ (1−e −γ(t−τn) ) = Asin(τ n +φ 0 )e −γ(t−τn) + I 0 γ (1−e −γ(t−τn) ). Taking the limit of both sides ast→τ − n+1 and using the fact thatx(τ − n+1 ) =B, we have (B−I 0 /γ)e γτ n+1 = (Asin(τ n +φ 0 )−I 0 /γ)e γτn so that we obtain the relationship τ n+1 =f(τ n ) (1.3) wheref(x) := 1 γ log(a−bsin(x+φ 0 ))+x witha := I 0 /γ I 0 /γ−B andb := A I 0 /γ−B both positive constants. Note also thata > b, which implies thatf is well defined and inC 1 (R). Further- more,f(x+2πn) = f(x)+2πn for allx∈R andn∈Z so thatf :S 1 → S 1 andθ n := τ n mod2π is a dynamical system onS 1 . Fixed points off :S 1 →S 1 are given by solutions tof(x) =x+2πn, for somen∈Z. Then we can readily verify that if 0<a−e 2πγ <b,f has two fixed pointsx s ,x u , 0<x s < π 2 and x u =π−x s withf(x s,u ) =x s,u +2π satisfyingsinx s = sinx u = a−e 2πγ b . Moreover, f ′ (x u ) = 1+ p b 2 −(a−e 2πγ ) 2 γe 2πγ > 1 for anya,b in the specified parameter range so thatx u is unstable. Now f ′ (x s ) = 1− p b 2 −(a−e 2πγ ) 2 γe 2πγ ∈ (−1,1) 4 Figure 1.1: Bifurcation diagram for the map f as b is varied. Iterates started at a random x∈ [0,2π) and the first 1000 iterates are discarded. Other parameters:γ =.5,a = 2. if and only ifb<b c (a) := p (a−e 2πγ ) 2 +(2γe 2πγ ) 2 and therefore,x s is stable forb<b c and unstable forb>b c . Forb =b c , we havef ′ (x s ) =−1 and we obtain a ‘flip’ or period doubling bifurcation with the appearance of a stable, period two orbit. If we continue to increaseb, the period two orbit becomes unstable with the appearance of a stable period four orbit, and so on, until the long term behavior of the system becomes chaotic (See Figure 1.1). Tateno ([17], [19]) and Tateno and Jimbo ([18]) numerically study the effects of adding white noise in (1.1) by looking at the system dX ε t = (−γX ε t +I)dt+εdW t t 0 ≤t≤τ ε 1 (1.4) X ε t 0 = h(t 0 ) 5 again subject to the reset conditions X ε (τ ε 1 ) − = g(τ ε 1 ) X ε (τ ε 1 ) + = h(τ ε 1 ) (1.5) where I,γ > 0, g and h are as above, W t is a standard one-dimensional Wiener Process, and τ ε 1 := inf{t > t 0 : X ε t = g(t)}. We refer to such systems as Stochastic Integrate-and- fire models (SIFM’s). Let P t 0 denote the probability law of X ε t under the assumption that X ε t 0 =h(t 0 ). After timeτ ε 1 , we again continue to runX ε t according to (1.4) until the threshold is reached at which time we jump according to the condition (1.5). In this way we obtain a Markov Chain of firing times τ ε n := inf{t>τ ε n−1 :X ε t =g(t)} for n ≥ 1 with τ ε 0 = t 0 . 
The transition density function for this chain is given by the first passage-time density function p ε (t|t 0 ) := d dt P(τ ε n ≤t|τ ε n−1 =t 0 ) = d dt P t 0 (τ ε 1 ≤t) for the processX ε t and hence, the dynamics of the system can be studied using the first-passage time approach proposed by [10]. In this approach, the sequence of firing times is converted to a Markov Chain Θ ε n onS 1 by looking at τ ε n modp and the dynamic behavior of the chain is studied via its transition operatorT ε , defined by T ε φ(θ 0 ) :=E θ 0 [φ(Θ 1 )] = Z S 1 φ(θ)˜ p ε (θ|θ 0 )dθ 6 for some appropriate class of functionsφ (for example bounded, measurable) where ˜ p ε (θ|θ 0 ) = X n∈Z p ε (θ+np|θ 0 ) is the transition density onS 1 induced by the transition densityp ε ofτ ε n onR. T ε captures the essential dynamics ofΘ n and hence, its spectrum is of primary interest in quantifying transient and asymptotic behavior of the system. Despite the cascade of period doubling bifurcations occurring in the noise-free system as the amplitudeA of the reset (or threshold) level is increased, the sequenceΘ ε n always has a unique stable, invariant density for allε > 0. Furthermore, it is also observed numerically that while deterministic bifurcation points can sometimes be classified as P-bifurcation points for the corresponding noisy system, this is by no means the usual occurrence (see [17], [18]). The destruction of bifurcation points with the addition of noise is a common phenomenon and occurs in the study of relaxation oscillators as well ([5], [16]). However, there do appear to be changes in the dynamic behavior of the noisy system at deterministic bifurcation points and therefore, the authors of these papers investigate alternate notions of stochastic bifurcations in terms of changes to the spectrum of T ε . Their results are numerical, using the procedure of Buonocore, Nobile, and Ricciardi (see [2]) to solve for p ε as the solution to an integral equation. The dynamics of the Markov Chain are approximated by discretizing the circle and calculating the spectrum of the transition matrix for the corresponding discrete state-space chain. Because of the existence of a unique invariant measure, the transition matrix always has a simple top eigenvalue at 1, however, the existence of lower eigenvalues close to 1 in modulus can greatly affect convergence to the stationary distribution. It is shown in [17], [18], and [19] that there are definite observable changes in the values of these lower eigenvalues as parameters are increased through deterministic bifurcation points, including the presence of an eigenvalue 7 near−1 corresponding to the appearance of a stable period 2 orbit in the deterministic system and the appearance of complex cube roots of 1 with the appearance of a stable period 3 orbit. The goal of our work is to mathematically explain these changes. Since no explicit form for the transition densityp ε is available, obtaining rigorous results in the above examples requires more clever analysis. However, we will show in Chapter 3 that for smallε > 0 andθ 0 ∈ S 1 , ˜ p ε (θ|θ 0 ) is approximately Gaussian, centered around the deterministic firing phasef(θ 0 ) (see Figure 1.2 where we have superimposed the approximation from Proposition 5 in Section 3.4.2 over the numerical approximation to ˜ p ε found by using the integral approximation of [2]). 0.35 0.3855 0.4055 0.4255 0.45 0 5 10 15 20 25 30 35 40 Passage−time pdf Gaussian app. 
θ 1 = deterministic hitting time Figure 1.2: Approximation of the first passage-time pdf for the solution to dX t = (−X t + 2)dt +.02dW t , x(0) = .5 across the constant boundary I(t) = 1. Overlayed is the density for the Gaussian approximation of Proposition 5 in Section 3.4.2 which has mean equal to the passage-time,τ 1 , for the solution to dx dt =−x+2,x(0) =.5 (τ 1 ≈.4055) and variance equal to .02 2 2 (1−e −2τ 1 )≈.01. 8 We therefore begin in Chapter 2 by looking at gaussian perturbations of dynamical systems on the circle exhibiting stable periodic behavior. In particular, iff ∈C ∞ (S 1 ) has a finite number of periodic points and the deterministic system x n+1 =f(x n ), has a finite limit set, we provide a method for calculating the spectrum of the transition operator for the perturbed system X ε n+1 =f(X ε n )+εσ(X ε n )χ n mod1 on S 1 where{χ n } ∞ n=0 is a family of iid standard normal random variables and σ ∈ C 1 (R) satisfies 0 < σ 1 < σ(x) < σ 2 < ∞ for all x ∈ R and some constants σ 1 ,σ 2 . In this case, we have explicit formulas for the transition operator of the chain and can obtain limiting approximations to eigenvalues and eigenvectors asε → 0. The general heuristic is described in Section 2.1 and basically says that the spectrum ofT ε is determined by the derivatives off along its periodic orbits. To illustrate the reasoning behind this heuristic, Sections 2.2 - 2.4 give a detailed analysis of the case whenf has one stable fixed pointx s and one unstable fixed point x u withf n (x) → x s ,∀x ∈ S 1 ,x 6= x u . In Section 2.2, we show that the essential behavior ofX ε n takes place in neighborhoods of the fixed points off and give more rigorous statements of our main results. In Sections 2.3 and 2.4, details of the local analysis, in neighborhoods of the stable and unstable fixed point respectively, are carried out, and this information is used to derive asymptotic expansions for the eigenvalues and eigenfunctions of the transition operator. Section 2.5 describes how to deal with general periodic orbits and Section 2.6 concludes our work on gaussian perturbations of circle maps by illustrating the usefulness of our results with some numerical examples. 9 In Chapter 3, we return to our original question of interest with the study of SIFM’s. Section 3.1 provides the general setting for our investigations. After a look into approximations for first-passage-time-densities in Section 3.2, our work in Sections 3.3 and 3.4 show that the basic results obtained in Chapter 2 hold in this more complicated setting as well. Finally, we conclude our work in Section 3.5 with some questions for future research. For completeness, we have included some classical results on Hermite Polynomials and Per- turbation Theory for Linear Operators in Appendices A - C which we make extensive use of in our work. Appendix D contains some of the more technical lemmas used in Sections 2.3 and 2.4 for deriving asymptotic expansions. These sections can be read as desired when referred to in the main portion of the document and may be unnecessary for readers familiar with these topics. For those interested in ascertaining the substance of our work without concern for some of the more technical details, we recommend skipping Sections 2.3.2 and 2.4.2 on first reading as well. 10 Chapter 2 Gaussian Perturbations of Circle Maps 2.1 General Heuristic We begin by studying small, gaussian perturbations of dynamical systems on the circle. First, we introduce some notation. 
IfX andY are Banach spaces (or metric spaces when appropri- ate), we define the following objects associated withX andY : • B X is the set of all Borel measurable subsets ofX • B(X,Y) is the set of all Borel measurable functions fromX toY andB(X) =B(X,X) • B(X,Y) =L ∞ (X,Y) is the set of all bounded, measurable functions fromX toY and B(X) =B(X,X) • L(X,Y) is the set of all bounded, linear operators fromX toY andL(X) =L(X,X) • M(X) is the set of all probability measures on(X,B X ) • X ∗ is the topological dual ofX. We shall use the notationkφk L ∞ (X,Y) to denote the sup-norm of a functionφ∈ B(X,Y) and writekφk ∞ when the domain and range ofφ are clear. In a slight abuse of notation, we shall use the same notation when referring to the operator norm onL(X,Y). IfX = S 1 , we shall use the notationB(S 1 ) =B(S 1 ,R). Define the deterministic system x n+1 =f(x n ) (2.1) 11 wheref is a smooth map onS 1 =R/Z orR/(2πZ) dependent on a one-dimensional parameter b∈R. We are interested in the dynamics of the perturbed system X ε n+1 =f(X ε n )+εσ(X ε n )χ n mod 1 (2.2) whereχ n is a family of iid standard normal random variables andσ ∈ C ∞ (S 1 ). We assume there exist positive constantsσ 1 ,σ 2 so thatσ 1 <σ(x)<σ 2 ,∀x∈S 1 and use the notationP x to denote the probability law of (2.2) given thatX 0 =x. It is easy to see thatX ε n forms a (time homogeneous) Markov Chain onS 1 with transition operatorT ε :B(S 1 )→B(S 1 ) given by T ε φ(x) =E x [φ( ˜ X ε 1 )] =E[φ(f(x)+εσ(x)χ)] = Z S 1 φ(y)˜ p ε (x,y)dy (2.3) for anyφ∈B(S 1 ) andx∈S 1 where ˜ p ε (x,y) := X n∈Z p ε (x,y +n) with p ε (x,y) := 1 √ 2πε e −(y−f(x)) 2 /(2σ(x)ε 2 ) . We say thatμ∈M(S 1 ) is a stationary (probability) measure for (2.2) if Z S 1 ˜ p ε (x,A)dμ(x) =μ(A) for anyA∈B S 1 where ˜ p ε (x,A) = Z A ˜ p ε (x,y)dy. We callλ∈C an eigenvalue ofT ε if there exists a non-zero functionφ∈B(S 1 ) such that T ε φ =λφ. 12 Any suchφ is called a (right) eigenfunction ofT ε corresponding to the eigenvalueλ. If there exists a measureμ with Z S 1 ˜ p ε (x,A)dμ(x) =λμ(A) (2.4) for allA∈B S 1 thenμ is called a (left) eigenmeasure ofT ε corresponding toλ (the use of the word ‘left’ here refers to the fact that the adjoint of T ε acts on measures in the way defined by the left hand side of (2.4) and hence, (2.4) says that μ is a ‘left’ eigenvector of T ε ). If dμ(x) =ρ(x)dx for someρ∈B(S 1 ), then we callρ a (left) eigendensity ofT ε corresponding toλ. It is clear thatμ is a stationary measure forT ε if and only ifμ is an eigenmeasure forT ε with corresponding eigenvalue1. Furthermore, from the inequality kT ε φk ∞ ≤kφk ∞ it is clear that|λ|≤ 1 for any eigenvalueλ ofT ε . In our particular example, we can verify that inf x,y∈S 1 ˜ p ε (x,y)> 0 for any ε > 0 and hence X ε n is an aperiodic recurrent Harris Chain so that it has a unique, stationary measure for allε> 0 satisfying kP x (X n ∈A)−μ(A)k TV → 0 as n → ∞ for all x ∈ S 1 and A ∈ B S 1 wherekνk TV denotes the total variation norm of the measureν (see for instance, [7], Theorem 5.6.8). This implies thatT ε always has a simple eigenvalue at1. We call the set of all eigenvalues ofT ε the spectrum ofT ε and denote this set by σ(T ε ). SinceS 1 is compact andT ε is given by integration against the bounded and continuous (in both variables) kernel ˜ p ε , the Arzela-Ascoli Theorem implies thatT ε is a compact operator onB(S 1 ) so that we are justified in this definition. 13 Our interest is in quantifying changes to the spectrum ofT ε as the parameterb is varied through deterministic bifurcation points. 
Since we have already noted that T ε always has a unique, stable stationary measure, there will be no changes in the number of modulus 1 eigenvalues. However, there may be changes in the number eigenvalues that asymptotically have modulus 1 asε→ 0 which will effect rates of convergence to and the shape of the stationary measure for X ε n . 1 Definition. We call any change to the number of eigenvalues of T ε approaching the unit circle asε→ 0 aλ-bifurcation. We would like to quantify the conditions under which such bifurcations occur. To do so, we first suppose that for a particular fixed value ofb,f has a finite number of periodic orbits whose basin of attraction isS 1 . The following result tells us how to calculate the spectrum ofT ε : 1 Theorem. Suppose that f has a finite number of stable periodic orbits P i of period p i , i = 1,2,...,n s and unstable periodic orbitsQ i of periodq i ,i = 1,2,...,n u . Assume in addition that lim n→∞ f n (x)∈ ns [ i=1 P i for allx ∈ S 1 \(∪ nu i=1 Q i ). Then for allr > 0, we can decomposeT ε = T ε lp +T ε up so that for smallε> 0, we havekT ε lp k ∞ <r and any eigenvalue ofT ε up with modulus greater thanr is of the formλ+O(ε) with (1) λ = (c n s,i ) 1/p i , where c s,i is the derivative of f p i along P i for some i = 1,2,...,n s and n≥ 0 or (2) λ = (|c u,i | −1 c −n u,i ) 1/q 1 , wherec u,i is the derivative off q i alongQ i for somei = 1,2,...,n u andn≥ 0. Note that we include all branches of the p th i or q th i root in (1) and (2) above so that every stable orbit of periodp contributes exactlyp eigenvalues of modulus 1. Also note that every eigenvalue contributed by an unstable periodic orbit is strictly less than 1 in modulus. 14 We will not produce a rigorous proof of Theorem 1 in its entirety, but instead focus on providing a detailed argument in the case when the only periodic points of f are one stable and one unstable fixed point. This task is the topic of Sections 2.2-2.4. We discuss the issues involved in dealing with orbits of arbitrary period in Section 2.5. But first we look at an example of what Theorem 1 says in the case when the deterministic system (2.1) undergoes a period doubling bifurcation. In general, period doubling bifurcations in the deterministic system (2.1) lead toλ-bifurcations in the perturbed system (2.2). To illustrate this with a concrete example, let us take f(x) =x+1−bsin(x) to be in the well studied family of sine-circle maps. Clearlyf(x + 2π) = f(x) + 2π so that f : S 1 → S 1 . Ifb > 1, thenf has two fixed pointsx s andx u , satisfying sinx s = sinx u = 1 b with0≤x s <π/2<x u ≤π. Furthermore we have, c u =f ′ (x u ) = 1−bcosx u = 1+ √ b 2 −1> 1 so thatx u is unstable and f ′ (x s ) = 1−bcosx s = 1− √ b 2 −1∈ (−1,1) if and only if b < b c := √ 5 ≈ 2.23 so thatx s is stable for b < b c and unstable for b > b c . Therefore, ifb<b c , Theorem 1 tells us that limiting eigenvalues ofT ε should be close toc n s or c −(n+1) u forn≥ 0 (see Figure 2.1). Whenb =b c ,f ′ (x s ) =−1 so thatf undergoes a period doubling bifurcation with the appear- ance of a stable period two orbit. Figure 2.2 shows contributions to the spectrum ofT ε coming from its two unstable fixed pointsx s and x u and stable period two orbitP = {x 1 ,x 2 }. 
The 15 1.8 1.85 1.9 1.95 2 2.05 2.1 2.15 2.2 0 0.2 0.4 0.6 0.8 1 Limiting Eigenvalues of T ε b ≈ |λ ε | c s 0 =1 −c s c s 2 −c s 3 c s 4 c u −1 c u −2 c u −3 Figure 2.1: The moduli of the top limiting eigenvalues ofT ε asε→ 0 as predicted by Theorem 1 plotted against the parameterb before the first period doubling bifurcation point atb = √ 5≈ 2.23. The solid lines are powers ofc s while the dashed lines are negative powers ofc u . contributions from the fixed point are of the form|c s | −1 c −n s or|c u | −1 c −n u wherec s,u =f ′ (x s,u ) and the contributions fromP are of the form √ c n wherec = f ′ (x 1 )f ′ (x 2 ) and we take both branches of the square root. This leads to the appearance of pairs of equal modulus eigenvalues in Figure 2.1. Whenb < b c,2 ≈ 2.47,c > 0 so that these eigenvalues are real valued whereas whenb > b c,2 , they are pure imaginary. Asbր 2.71,cց−1 and a second period doubling occurs in the deterministic system with the appearance of a stable period four orbit. Figure 2.3 shows the completeλ-bifurcation scenario. We can see a first λ-bifurcation occurs at b c with the appearance of a limiting eigenvalue at -1. Since a stable period four orbit yields four eigenvalues which approach the unit circle asε → 0, anotherλ-bifurcation will occur in the perturbed system nearb≈ 2.71 as well. A similar analysis will show that if (2.1) undergoes a pitchfork bifurcation, the perturbed sys- tem (2.2) undergoes aλ-bifurcation since one stable fixed points gives way to two new stable 16 2.25 2.3 2.35 2.4 2.45 2.5 2.55 2.6 2.65 2.7 0 0.2 0.4 0.6 0.8 1 Contributions from Period Two Orbit 2.25 2.3 2.35 2.4 2.45 2.5 2.55 2.6 2.65 2.7 0 0.2 0.4 0.6 0.8 1 Contributions from Fixed Points −c s −1 c s −2 −c s −3 c u −1 −c u −2 (c 0 ) 1/2 =|−(c 0 ) 1/2 | = 1 |c 1/2 |= | −c 1/2 | |c |= | −c| Figure 2.2: The moduli of the top limiting eigenvalues ofT ε asε→ 0 as predicted by Theorem 1 plotted against the parameterb after the first period doubling bifurcation point atb = √ 5≈ 2.23 up to the next period doubling atb≈ 2.71. fixed points, each yielding an eigenvalue approaching1 asε→ 0. Note, however, that a tangent bifurcation in the deterministic system will not always lead to aλ-bifurcation in the perturbed system since we have already noted that there will always be at least one eigenvalue at 1. Our proof of Theorem 1 also yields formulas for corresponding eigenvectors. To give some idea of their structure, letH n denote then th Hermite polynomial (see Appendix A), define α s = r 1−c 2 s 2 , β u = r c 2 u −1 2 , 17 1.8 1.9 2 2.1 2.2 2.3 2.4 2.5 2.6 2.7 0 0.2 0.4 0.6 0.8 1 Complete λ −Bifurcation Scenario Figure 2.3: Illustration ofλ-bifurcation scenario forf(x) = x+1−bsinx, 1.8 < b < 2.71. See also Figures 2.1 and 2.2. for |c s | < 1, |c u | > 1, and let h n (x) = e −x 2 H n (x). Then if P = {x 1 ,...,x p } is a stable periodic orbit withc s = the derivative off p along P, the eigendensities ofT ε corresponding to the eigenvalues(c n s ) 1/p are multiples of linear combination of the functions φ s,n,j (x) =h n (α s (x−x j )/ε), forj = 1,2,...,p. IfQ ={y 1 ,...,y q } is an unstable periodic orbit withc u = derivative off q alongQ, then the eigenfunctions corresponding to the eigenvalues (|c u | −1 c −n u ) 1/q are all multiples of different linear combination of the functions φ u,n,j (x) =h n (β u (x−y j )/ε), 18 forj = 1,2,...,q (see Section 2.5 for more details). 
The reason we only know eigendensi- ties for one set of eigenvalues and eigenfunctions for the other is a direct consequence of the structure ofT ε , which we examine in detail in Section 2.2. 2.2 Before the Period Doubling: Basic Set-up and Main Results In this Section, we continue using the notation of Section 2.1. We assume thatf has two fixed pointsx s ,x u satisfyingc s := f ′ (x s )∈ (−1,1),c u := f ′ (x u ) / ∈ [−1,1] with the property that f n (x) → x s as n → ∞,∀x ∈ S 1 , x 6= x u . This corresponds to studying the system (2.1) in a regime of stable period one behavior. The starting point for our analysis is the following proposition which gives us a way of splitting up the circle into regions determined by the different actions off. In what follows,d denotes the standard quotient metric onS 1 induced by the Euclidean metric onR and B r (x) ={y∈S 1 :d(x,y)<r} for anyr> 0. Figure 2.4 illustrates the results. 1 Proposition. There exist positive constants δ u ,δ s ,δ and N ∈ N so that if we let V 1 := B δu (x u ),V 3 :=B δs (x s ) andV 2 =S 1 /(V 1 ∪V 3 ), we have (1) d(f(x),V 1 )>δ 1 for everyx / ∈V 1 (2) d(f(x),V c 3 )>δ 2 for everyx∈V 3 (3) For everyx∈V 2 , we havef n (x)∈V 3 ,∀n≥N. Proof: To prove (1) and (2), we first chooseδ ′ > 0 so thatd(f(x),x u ) > γ u d(x,x u ),∀x ∈ B δ ′(x u ) withγ u > 1 and let K = S 1 \B δ ′(x u ). K is compact so that f(K) is compact and 19 ( ) | x u V 1 ( ) | x s V 3 V 2 S 1 Figure 2.4: Illustration of Proposition 1. therefore, we can find aδ u ∈ (0,δ ′ ) so thatf(K) ⊂ S 1 /B 2δu (x u ). Then ifx / ∈ B δu (x u ) =: V 1 either: x ∈ K, in which case d(f(x),x u ) ≥ 2δ u or x ∈ B δ ′/B δu (x u ), in which case d(f(x),x u ) > γ u d(x,x u ) > γ u δ u . This implies thatd(f(x),V 1 ) > δ 1 for everyx / ∈ V 1 with δ 1 := min(γ u −1,1)δ u . Now chooseδ s > 0 so thatd(f(x),x s ) < γ s d(x,x s ),∀x ∈ B δs (x s ) withγ s ∈ (0,1). Then ∀x∈V 3 :=B δs (x s ), we have d(f(x),V c 3 ) = min(d(f(x),x s +δ s ),d(f(x),x s −δ s )) ≥ δ s −d(x s ,f(x)) ≥ δ s −γ s d(x,x s )> (1−γ s )δ s =:δ 2 Therefore, we can takeδ = min(δ 1 ,δ 2 ), yielding (1) and (2). 20 To obtain (3), we define N x := inf{n ≥ 1 : f m (x) ∈ V 3 ,∀m ≥ n}. Note thatN x is finite ∀x ∈ V 2 sincef n (x) → x s ,∀x6= x u . Furthermore, sincef is continuous, for everyx∈ V 2 , ∃δ x > 0 so thatN y ≤N x ,∀y∈B δx (x). But thenV 2 ⊂∪ x∈V 2 B δx (x) and sinceV 2 is compact, there must exist a finite number of pointsx 1 ,x 2 ,...,x k ∈ V 2 so thatV 2 ⊂∪ k i=1 B δx i (x i ). This implies 3 withN = max i=1,...,k N x i . We can writeφ∈ B(S 1 ) asφ = φ 1 +φ 2 +φ 3 whereφ i = φ1 V i =: P i φ,i = 1,2,3 and then decomposeT ε into operatorsT ε ij :B(V j )→B(V i ) so that T ε φ(x) =ψ 1 +ψ 2 +ψ 3 ∈B(V 1 )⊕B(V 2 )⊕B(V 3 ) ∀φ∈B(S 1 ) with ψ i = 3 X j=1 T ε ij φ j fori = 1,2,3. This leads to the block decomposition T ε = T ε 11 T ε 12 T ε 13 T ε 21 T ε 22 T ε 23 T ε 31 T ε 32 T ε 33 . Note that forφ∈B(V j ), we have T ε ij φ(x) =1 V i (x)E[(φ1 V j )(f(x)+εσ(x)χ)] =1 V i (x) Z V j φ(y)˜ p ε (x,y)dy. (2.5) 21 If we takeε = 0, we obtain the deterministic system. Furthermore, Proposition 1 implies that the transition operatorT 0 φ(x) =φ(f(x)) has the ’upper triangular’ decomposition T 0 = T 0 11 T 0 12 T 0 13 0 T 0 22 T 0 23 0 0 T 0 33 . with the additional property that (T 0 22 ) n = 0,∀n ≥ N. With noise in the system, we cannot hope for such good fortune as there is always a small probability of movement between regions. We can, however, obtain bounds on the probabilities of such events, as the next three Lemmas illustrate. 1 Lemma. 
There exist constantsM,K > 0 so that kT ε ij k ∞ ≤Mεe −K/ε 2 , for everyi>j. Proof: For everyφ∈B(V j ) andx∈V i , we have |T ε ij φ(x)|≤kφk B(V j ) P(f(x)+εσ(x)χ∈V j ) with equality forφ =1 V j and therefore, kT ε ij k ∞ = sup x∈V i P(f(x)+εσ(x)χ∈V j ). Ifx∈V 3 andj = 1,2, then (2) in Proposition 1 implies that P(f(x)+εσ(x)χ∈V j ) ≤ P(d(f(x)+εσ(x)χ,f(x))>δ). 22 Similarly, ifx∈V 2 andj = 1, (1) in Proposition 1 implies that P(f(x)+εσ(x)χ∈V 1 )≤P(d(f(x)+εσ(x)χ,f(x))>δ) The result therefore follows from: 2 Lemma. For anyδ,ε> 0 andx∈S 1 , P(d(f(x)+εσ(x)χ,f(x))>δ)≤ r 2 π ε δ e −δ 2 /(2σ 2 2 ε 2 ) . Proof: From standard normal distribution tail estimates, we have P(d(f(x)+εσ(x)χ,f(x))>δ) ≤ P(εσ(x)|χ|>δ) ≤ P(ε|χ|>δ/σ 2 ) ≤ r 2 π ε δ e −δ 2 /(2σ 2 2 ε 2 ) . 3 Lemma. There exist positive constantsM N ,K N such that k(T ε 22 ) n k ∞ ≤ (M N εe −K N /ε 2 ) p(n) , ∀n≥N +1 whereN is the same constant as in (3) of Proposition 1 andp(n) =⌊ n N+1 ⌋. Proof: From (3) in Proposition 1 we know thatf N (x)∈V 3 ,∀x∈V 2 so thatd(f N+1 (x),V 2 )> δ by (2). This implies that k(T ε ij ) N+1 k ∞ = sup x∈V 2 P x (X N+1 ∈V 2 ) ≤ sup x∈V 2 P x (d(X N+1 ,f N+1 (x))>δ). 23 Now if we chooseL> 0 such thatd(f(x),f(y))≤Ld(x,y),∀x,y∈S 1 , then d(X N+1 ,f N+1 (x)) ≤ d(X N+1 ,f(X N ))+d(f(X N ),f N+1 (x)) ≤ εσ 2 |χ N |+Ld(X N ,f N (x)) ≤ ··· ≤ εσ 2 |χ N |+Lεσ 2 |χ N−1 |+···+L N εσ 2 |χ 0 | for everyx∈V 2 . Therefore, k(T ε ij ) N+1 k ∞ ≤P(ε(|χ N |+L|χ N−1 |+···+L N |χ 0 |)>δ/σ 2 ). LettingJ = max(L,L N ) so thatL j <J,∀j = 1,...,N, we have P(ε(|χ N |+L|χ N−1 |+···+L N |χ 0 |)>δ/σ 2 )≤P( N X i=0 |χ i |>δ/(Jσ 2 ε)) and hence, P( N X i=0 |χ i |>δ/(Jσ 2 ε)) ≤ P(|χ i |>δ/(Jσ 2 (N +1)ε), for somei∈{0,...,N}) ≤ (N +1)P(|χ 0 |>δ/(Jσ 2 (N +1)ε)) ≤ M N εe −K N /ε 2 withM N = 2Jσ 2 (N+1) 2 δ √ 2π , K N = δ 2 2(Jσ 2 (N+1)) 2 > 0 which proves the result forn = N + 1. If n>N +1, we writen =p(n)(N +1)+r(n) where 0≤r(n)≤N and then note that since kT ε 22 k≤ 1, k(T ε 22 ) n k≤k(T ε 22 ) N+1 k p(n) which yields the result. 24 We are now ready to state our main result concerning the spectrum ofT ε . The notationα = q 1−c 2 s 2σ 2 s , β = q c 2 u −1 2σ 2 u ,σ s = σ(x s ), σ u = σ(x u ),H n =n th Hermite polynomial, andh n (x) = e −x 2 H n (x) will come in handy. 2 Theorem. Suppose thatf is a smooth map onS 1 with stable fixed pointx s and unstable fixed pointx u . In addition, assume thatf n (x) → x s for allx ∈ S 1 \{x u }. Then there is ak s > 0 such that for anyr> 0,∃ε r ,L r ,K r > 0, so that∀ε<ε r , we can writeT ε =T ε up +T ε lp where kT ε lp k ∞ <r and anyλ ε j ∈σ(T ε up ) with|λ ε j |>r is a simple eigenvalue of one of the two forms λ ε s,j :=c j s +λ s,j,1 ε+λ s,j,2 ε 2 +λ ε s,j,3 ε 3 (2.6) or λ ε u,j :=|c u | −1 c −j u ++λ u,j,1 ε+λ u,j,2 ε 2 +λ ε u,j,3 ε 3 (2.7) for some j ≥ 0 with c s = f ′ (x s ), c u = f ′ (x u ), λ s,j,i ,λ u,j,i ∈ C, i = 1,2 given by explicit formulas and max(|λ ε s,j,3 |,|λ ε u,j,3 |) ≤ L r . All eigendensities corresponding to theλ ε s,j are of the form a 1 (h j ( α(x−x s ) ε )+εψ ε s,j ( x−x s ε ))1 V 3 (x) (2.8) for some constanta 1 whereψ ε s,j has the property that sup x∈R (|ψ ε s,j (x)|e −(ks+α 2 )x 2 )<K r while all eigenfunctions corresponding to theλ ε u,j are of the form a 2 (h j ( β(x−x u ) ε )+εψ ε u,j ( x−x u ε ))1 V 1 (x) (2.9) for some constanta 2 with sup x∈R (|ψ ε u,j (x)|e −(ku+β 2 )x 2 )<K r . 25 1 Remark. We assumef is smooth only for convenience. 
We could in fact show that iff ∈ C n+1 for anyn≥ 1,∃λ s,j,k ∈C,k = 1,2,...,n−1 such that λ ε s,j =c j s,j + n−1 X k=2 ε k λ s,j,k +λ ε s,j,n ε n with|λ ε s,j,n | ≤ L r ,∀ε < ε r and similarly forλ ε u,j . We choose the casen = 3 for illustrative purposes; it is complicated enough to show the technical issues involved in the calculation of the asymptotic expansions. 2 Remark. We will show in Section 2.3.2 that ifσ(x) = σ,∀x, thenλ s,j,1 = λ s,j,2 = 0 for allj so that in this case, the order of convergence ofλ ε s,j andλ ε u,j is of orderε 2 . This is not necessarily true ifσ is not constant. Theorem 2 basically tells us that the top eigenvalues of T ε are determined by powers of the derivative off near the stable fixed point and negative powers of the derivative at the unstable fixed point. We also note that ifc u is close to 1 in modulus andc s is close to 0, then the effect of the unstable fixed point on the dynamics of (2.2) becomes quite important and hence, we should approach linearization near a stable fixed point with caution (linearizing near a stable fixed point to approximate dynamics is a technique used, for example, in [20] for calculating the spectral density of certain Markov Chains on the circle). Proof of Theorem 2: For anyε> 0, we can writeT ε =T ε up +T ε lp where T ε up := T ε 11 T ε 12 T ε 13 0 T ε 22 T ε 23 0 0 T ε 33 . 26 and T ε lp := 0 0 0 T ε 21 0 0 T ε 31 T ε 32 0 . If ε is sufficiently small, then Lemma 1 implies that kT ε lp k ∞ ≤ Mεe −K/ε 2 < r and since T ε up is upper triangular, Corollary 3 in Appendix C implies that its spectrum is included in the union of the spectra of the diagonal operatorsT ε ii , i = 1,2,3. But by Lemma 3, the spectral radius of T ε 22 can be made less than r by shrinking ε if necessary so that any eigenvalue of T ε with modulus greater thanr must be inσ(T ε 11 ) orσ(T ε 33 ). Furthermore, from Remark 16 in Appendix C, we know that the eigendensities ofT ε 33 lead directly to eigendensities ofT ε up while the eigenfunctions ofT ε 11 lead directly to eigenfunctions ofT ε up . Therefore, the proof of Theorem 2 will be complete if we can show that all eigenvalues of T ε 33 with modulus larger than r are of the form (2.6) with corresponding eigendensities (2.8) and all eigenvalues of T ε 11 with modulus larger thanr are of the form (2.7) with corresponding eigenfunctions (2.9). Calculating the spectra of these operators turns out to be a difficult task and is of interest in its own right. We therefore, dedicate the next two sections to this analysis and note that Theorems 5 and 7 in Sections 2.3.1 and 2.4.1, respectively, give the results necessary for the completion of this proof. 2.3 The Local Story Near a Stable Fixed Point In this section, we identifyS 1 with[−1/2,1/2) by identifyingx s ↔ 0 so thatV 3 ↔ (−δ s ,δ s ), f(0) = 0, andc s =f ′ (0). Since we will be working onR for the remainder of this section, we shall denote (−δ s ,δ s ) byV 3 and use the notation B r (x) :={y∈R :|x−y|<r} 27 for anyr > 0. The essential conclusions from our work in this section are contained in Theo- rem 5 which provides us with the necessary information we need about the action ofT ε near a stable fixed point off. 2.3.1 Main Results WriteB =B(R) and extendT ε 33 to an operator onB via (2.5) by defining ˆ T ε 3 φ(x) :=1 V 3 (x) Z V 3 φ(y)˜ p ε (x,y)dy for everyφ∈B. 
Note that ˆ T ε 3 has a decomposition of the form ˆ T ε 3 = 0 0 0 T ε 33 with respect to the splittingR =V c 3 ∪V 3 so that the spectrum of ˆ T ε 3 differs from the spectrum ofT ε 33 only by the possible addition of 0. In order to obtain a non-degenerate limit for the operators ˆ T ε , we need to re-scale space in order to ’blow-up’ the neighborhoodV 3 of the stable fixed point in which the interesting dynamics occur. Towards this end, we define the family of scaling operatorsU ε :B→B by the formula U ε (φ)(x) = φ(x/ε). Clearly,U ε is a bounded, invertible operator onB withkU ε k ∞ = 1 and (U ε ) −1 given by(U ε ) −1 (φ)(x) =φ(εx). Let T ε s := (U ε ) −1 ◦ ˆ T ε 3 ◦U ε . 28 SinceU ε is a similarity transform, we haveσ( ˆ T ε 3 ) = σ(T ε s ) and therefore, in order to under- stand the spectrum of ˆ T ε 3 , it suffices to study the spectrum ofT ε s . To this end, letφ∈B. Then we can write T ε s φ(x) = (U ε ) −1 [( ˆ T ε U ε )φ](x) = ˆ T ε (U ε φ)(εx) = 1 V 3 (εx) Z V 3 φ(y/ε)˜ p ε (εx,y)dy = 1 V ε 3 (x) Z V ε 3 X n∈Z φ(y)p ε (εx,εy+np)dy = T ε sm φ(x)+T ε se φ(x) whereV ε 3 :=V 3 /ε = (−δ s /ε,δ s /ε), T ε sm φ(x) :=1 V ε 3 (x) Z V ε 3 φ(y)p ε (εx,εy)dy, and T ε se φ(x) :=1 V ε 3 (x) Z V ε 3 X n6=0 φ(y)p ε (εx,εy +np)dy. Intuitively, we would expect that the main contribution toT ε s should come fromT ε sm as the mass ofp ε (εx,·) is concentrated nearf(0) = 0 forεx∈V 3 . To make this observation rigorous, we use the fact that ifφ∈B, then kT ε se φk ∞ ≤kφk sup εx∈V 3 P εx (X ε 1 / ∈B δs (f(0))) But ifεx∈V 3 , (1) in Proposition 1 tells us thatf(εx) is at least distanceδ away fromB c δs (f(0)) and therefore by Lemma 2, we have P εx (X ε 1 / ∈B δs (f(0)))≤P εx (|X ε 1 −f(εx)|>δ)≤Mεe −K/ε 2 29 for anyεx∈V 3 so that kT ε se k ∞ ≤Mεe −K/ε 2 for anyε> 0. We can conclude that the main contributions to the spectrum ofT ε s (and hence, to the spectrum ofT ε 33 ) will come from the spectrum ofT ε sm . To identify the limit ofT ε sm asε→ 0, we define f ε s (x) := 1 ε f(εx) = 1 ε (f(εx)−f(0))→c s x asε→ 0 for anyx∈R, 4 Lemma. εp ε (εx,εy)→ 1 √ 2πσ 0 e −(y−csx) 2 /(2σ 2 0 ) :=p s (x,y) asε → 0 whereσ 0 = σ(0). Furthermore, the convergence is uniform on compact subsets of R 2 . Proof : By the definition ofp ε , εp ε (εx,εy) = 1 √ 2πσ(εx) e −(εy−f(εx)) 2 /(2σ 2 (εx)ε 2 ) = 1 √ 2πσ(εx) e −(y−f ε s (x)) 2 /(2σ 2 (εx)) → 1 √ 2πσ 0 e −(y−csx) 2 /(2σ 2 0 ) :=p s (x,y) as ε → 0 where σ 0 = σ(0). Uniformity follows from the fact that the partial derivatives of εp ε (εx,εy) andp s (x,y) with respect tox andy are uniformly bounded inε over any compact subset ofR 2 . 30 Lemma 4 identifies the appropriate limit for the transition density of our re-scaled process and leads us to the hypothesis that in some sense,T ε sm →T s asε→ 0 whereT s is defined by T s φ(x) = Z R φ(y)p s (x,y)dy and is the transition operator for the autoregressive model X n+1 =c s X n +σ 0 χ n . The heuristic for obtaining information about the spectrum ofT ε s is therefore as follows: Step 1: Find an appropriate spaceX in which we have convergence ofT ε sm toT s . Step 2: Calculate the spectrum ofT s inX Step 3: Apply classical results from Perturbation Theory for linear operators to obtain asymp- totics for the spectrum ofT ε sm (and hence,T ε s ). For Step 1, letB(R) be the space of all Borel measurable functionsφ : R → R and define a family of weighted sup- norms onB(R) by kφk k = sup x∈R |φ(x)| v k (x) (2.10) withv k (x) = e kx 2 , k > 0. Let X k = {φ ∈ B(R) : kφk k < ∞}. Then it is easily verified thatX k along withk·k k is a Banach Space. 
Again, we shall be abusive with our notation and usek·k k when referring to the induced norm onL(X) := L(X,X) as well. The choice of these weighted sup-norms as the setting for our work may seem arbitrary at first, but they are good enough for our purposes as the following Theorem shows. We shall later provide further justification for their use. 31 3 Theorem. There exists a constantk s > 0 and operatorsT s,1 ,T s,2 ∈L(X ks ) so that T ε sm =T s +εT s,1 +ε 2 T s,2 +0(ε 3 ) inL(X ks ) 3 Remark. The amount of terms in the expansion ofT ε is directly related to the smoothness assumptions onf andσ. In general, if we assumef ∈ C k+1 andσ ∈ C k fork ≥ 1, then we have the asymptotic expansion T ε sm =T s + k−1 X j=1 ε j T s,j +O(ε k ) with T s,j ∈ L(X k ) for j = 1,2,...,k− 1. We will later see that the number of terms in the expansion of T ε sm directly coincides with the number of terms in the expansions of the eigenvalues ofT ε . Therefore, even though we have assumedf ∈ C ∞ , we really only need to assumef ∈C k fork≥ 4 to prove Theorem 2. Alternatively, if we wish to keep the assumption thatf ∈C ∞ , then we could in fact obtain better results (see also Remark 1). To prevent us from getting so bogged down in technicalities that we miss the forest for the trees, we defer the proof of Theorem 3 until the next section. In the meantime, we move on to Step 2. We define ρ s (x) = α √ π e −(αy) 2 withα = q 1−c 2 s 2σ 2 0 . It is easily verified thatρ s (x) is the invariant density for the chainX n with transition operatorT s . We define a measureμ onR bydμ(x) =ρ s (x)dx and writeL 2 =L 2 (μ). Then 5 Lemma. T s acts as a bounded, self-adjoint operator onL 2 (μ) withkT s k L 2 = 1. 32 Proof: Letφ∈L 2 (μ) . By the Cauchy-Schwartz Inequality, we have Z |φ(y)p s (x,y)|dy = Z (|φ(y)|p s (x,y) 1 2 )p s (x,y) 1 2 dy ≤ ( Z p s (x,y)dy) 1 2 ( Z φ 2 (y)p s (x,y)dy) 1 2 = ( Z φ 2 (y)p s (x,y)dy) 1 2 . A quick calculation reveals that ρ s (x)p s (x,y) =ρ s (y)p s (y,x) (2.11) for allx,y∈R and hence, kT s φk 2 L 2 = Z (T s φ(x)) 2 dμ(x) ≤ Z Z φ 2 (y)p s (x,y)dydμ(x) = Z Z φ 2 (y)ρ s (x)p s (x,y)dydx = Z Z φ 2 (y)ρ s (y)p s (y,x)dydx = Z ( Z p s (y,x)dx)φ 2 (y)ρ s (y)dy =kφk 2 L 2. Therefore,T s is bounded withkT s k L 2 ≤ 1. Furthermore, sincekT1k L 2 = 1 =k1k L 2, we must havekTk L 2 = 1. The self-adjointness follows from (2.11). 4 Remark. It turns out thatk s in Theorem 3 satisfiesk s <α 2 so thatX ks ⊂L 2 (μ). Therefore, what we have actually shown above is thatT ε sm has an asymptotic expansion aboutT s on some proper subspace of L 2 (μ). Whether or not any such expansion is valid on the whole space L 2 (μ) remains an open question. 33 SinceT s is a self-adjoint operator onL 2 (μ), we know that we can find a complete, orthonormal set (CONS) of eigenfunctions forT s inL 2 (μ). To identify these functions, define φ s,n (x) = H n (αx) whereH n is then th Hermite polynomial (see Appendix A). It is well known that the H n form a CONS inL 2 (ν) whereν is defined bydν(x) = q 2 π e −x 2 dx and therefore, theφ s,n form a CONS inL 2 (μ). Furthermore, it is known that: 6 Lemma. For anyn≥ 0,c n s is an eigenvalue ofT s with corresponding eigenfunctions given by multiples ofφ s,n (x). Proof: Using the definition of Hermite polynomials and the fact that ifχ is standard normal, thenE[e tχ ] =e t 2 /2 ,∀t∈R, we have T s ( ∞ X n=0 H n (αx) n! 
z n ) = T s (e −z 2 +2αxz ) = E[e −z 2 +2α(cx+σ 0 χ)z ] = e −z 2 +2cαxz E[e 2αzσ 0 χ ] = e −z 2 +2csαxz e 2(ασ 0 ) 2 z 2 = e −z 2 +2csαxz e (1−c 2 s )z 2 = e −(csz) 2 +2csαxz = ∞ X n=0 H n (αx) n! (c s z) n . (2.12) Since Lemma 5 implies that T s acts continuously on L 2 (μ), if we can show that the partial sumsS N (x) := P N n=0 Hn(αx)z n n! converge to S(x) := e −z 2 +2αxz ,∀z, then we interchange T s and the summation in the first line above. SinceS(x)∈L 2 (μ) andS N (x) converges pointwise 34 toS(x),∀z, it suffices to show that{S N (x)} forms a Cauchy sequence inL 2 (μ),∀z. But for N ≥M, Z (S N (x)−S M (x)) 2 dμ(x) = Z N X n,m=M H n (αx)H m (αx)z n+m n!m! dμ(x) = N X n=M 2 n n!z 2n (n!) 2 (by Lemma 9 in Appendix A) = N X n=M 2 n z 2n n! → 0 as N,M → ∞, which proves the desired result. Therefore, we can interchange T s and the summation so that equating coefficients of z n in (2.12), we have T s φ s,n (x) = T s H n (αx) = c n s H n (αx) =c n φ s,n (x). 5 Remark. For any (Borel) measurem, define the measuremT s (A) = R T s 1(A)(x)dm(x) = RR A p s (x,y)dydm(x) for all (Borel) measurable setsA. If we letφ ∗ s,n denote the measure with densityφ s,n (x) with respect toμ, then by (2.11) and Lemma 6, we have∀ measurableA, φ ∗ s,n T s (A) = Z ( Z A p s (x,y)dy)φ s,n (x)ρ s (x)dx = Z Z A ρ s (y)p s (y,x)φ s,n (x)dydx = Z A T s φ s,n (y)ρ s (y)dy = c n Z A φ s,n (y)dμ(y) =c n φ ∗ s,n . Therefore,T s has eigenmeasuresφ ∗ s,n and eigendensitiesφ s,n ρ s . Finally, we have the following Theorem which completely characterizes the spectrum ofT s in X =X ks . 4 Theorem. σ X (T s ) = {c n s } n≥0 ∪{0}. Furthermore, each c n s is a simple eigenvalue with corresponding eigenfunctionsφ s,n and eigenmeasuresφ ∗ s,n . 35 Proof: From Lemma 6, we know thatT s φ s,n =c n s φ s,n ,∀n≥ 0 and in Lemma 8, we will show thatT s is compact inX, so it remains to show that any eigenfunction ofT s must be a multiple of φ s,n , for some n ≥ 0. Suppose towards a contradiction that ∃ψ ∈ X with T s ψ = λψ, for someλ 6= 0 and thatψ is not a multiple ofφ s,n , for anyn. Since X ⊂ L 2 (μ), as noted in Remark 4, and the{φ s,n } form a CONS inL 2 (μ), we must haveψ = P ∞ n=0 a n φ s,n , with a i ,a j 6= 0 for somei6=j. Therefore, the fact thatψ is an eigenfunction ofT s and the linearity ofT s imply that X n≥0 a n c n s φ s,n =λ X n≥0 a n φ s,n so that in particular,a i c i s =λa i anda j c j s =λa j . Sincea i ,a j 6= 0, this implies thatc i s =λ =c j s , a contradiction. It follows thatψ must be a multiple of someφ s,n , completing the proof. This completes Step 2 of our heuristic. Finally, for Step 3, we apply the classical Perturbation Theory for linear operators to obtain asymptotic expansions for the eigenvalues and eigenvec- tors ofT ε sm and hence, forT ε s . The conclusions are contained in the following Lemma. 7 Lemma. For any r > 0, ∃ε s,r ,L s,r ,K s,r > 0 so that∀ε < ε s,r , any eigenvalue of T ε s in B(R) with modulus greater thanr is a simple eigenvalue of the form λ ε s,j =c j s +ελ s,j,1 +ε 2 λ s,j,2 +ε 3 λ ε s,j,3 for some j ≥ 0 with λ s,j,i ∈ C, i = 1,2 and|λ ε s,j,3 | ≤ L s,r , ∀ε < ε s,r . Furthermore, the eigenfunctions ofT ε s corresponding toλ ε s,j are multiples of a function of the formφ ε s,j (x) := (H j (αx) +εψ ε s,j (x))1 V ε 3 (x), withkψ ε s,j k ks ≤ K s,r for allε < ε s,r and the eigendensities of T ε s corresponding toλ ε s,j are multiples of a function of the form(φ ε s,j ) ∗ (x) := (H j (αx)ρ s (x)+ ε ˜ ψ ε s,j (x))1 V ε 3 (x) withk ˜ ψ ε s,j k ks+α 2 ≤K s,r . Proof: Letr > 0. 
By Theorem 3, we know thatT ε sm has a second order asymptotic expansion aboutT s inX ks . Furthermore, Lemma 8 in Section 2.3.2 implies thatT s , T ε s are compact for 36 anyε> 0 so that by Theorem 16 in Appendix B and Theorem 4 above,∃ε s,r ,L s,r ,K s,r > 0 so that∀ε<ε s,r , any eigenvalue ofT ε sm inX ks with modulus greater thanr is a simple eigenvalue of the form λ ε s,j =λ j +ελ s,j,1 +ε 2 λ s,j,2 +ε 3 λ ε s,j,3 for some j ≥ 0 with λ j ∈ σ(T s ), λ s,j,i ∈ C, i = 1,2 and|λ ε s,j,3 | ≤ L s,r , ∀ε < ε s,r . We also know that the eigenfunctions ofT ε s corresponding toλ ε s,j are multiples of a function of the formH j (αx) +εψ ε s,j (x) wherekψ ε s,j k ks ≤ K s,r for allε < ε s,r . To show that the difference T ε s −T ε sm =T ε se is small, we writeφ∈X ks asφ =φ 1 +φ 2 withφ 1 =φ1 V ε 3 andφ 2 =φ−φ 1 . But then by the definition ofT ε se , we can see thatT ε se φ 1 (x) = 0 and by shrinkingk s if necessary, the same argument used in Lemma 35 in Appendix D.2 shows that T ε se φ 2 (x) v ks (x) ≤εKkφk ks e −ηδ 2 s /ε 2 for allx∈R and some constantsK,η> 0. Therefore, kT ε s −T ε sm k ks ≤kT ε se k ks ≤Kεe −ηδ 2 s /ε 2 , and hence, we can replaceT ε sm withT ε s . Moreover,T ε s φ has support in the compact setV ε 3 for anyφ ∈ X ks so that we can replace the given eigenfunctions of T ε s with the same functions multiplied by1 V ε 3 . This implies that the eigenfunctions ofT ε s are in fact inB(R) which leads to the result about the eigenvalues and eigenfunctions of T ε s . To obtain the result about the eigendensities, we simply replaceT ε sm with(T ε sm ) ∗ and use Remark 5. In the end our hard work, pays off and we obtain from Lemma 7 the required result describ- ing the spectrum of T ε 33 in B(S 1 ). In in the previous section, we use the notation h n (x) = H n (x)e x 2 . 37 5 Theorem. For anyr > 0,∃ε s,r ,L s,r ,K s,r > 0 so that∀ε < ε s,r , any eigenvalue ofT ε 33 in B(S 1 ) is a simple eigenvalue of the form λ ε s,j =c j s +ελ s,j,1 +ε 2 λ s,j,2 +ε 3 λ ε s,j,3 for somej ≥ 0 withλ s,j,i ∈ C for allj ≥ 0,i = 1,2 and|λ ε s,j,3 | ≤ L s,r ,∀j ≥ 0, ε < ε s,r . Furthermore, the (right) eigenfunctions ofT ε 33 corresponding toλ ε s,j are multiples of (H j ( α(x−x s ) ε )+εψ ε s,j ( x−x s ε ))1 V 3 (x), with sup x∈R (|ψ ε s,j (x)|e −ksx 2 )≤K s,r for allε<ε s,r and the (left) eigendensities ofT ε 33 corresponding toλ ε s,j are multiples of (h j ( α(x−x s ) ε )+ε ˜ ψ ε s,j ( x−x s ε ))1 V 3 (x) where sup x∈R (| ˜ ψ ε s,j (x)|e −(ks+α 2 x 2 ) )≤K s,r for allε<ε s,r . Proof: Since ˆ T ε 3 = U ε ◦T ε s ◦(U ε ) −1 , we know thatσ( ˆ T ε 3 ) =σ(T ε s ) so that any eigenvalue of ˆ T ε 3 inU ε (X ks ) with modulus larger thanr must be of the form λ ε s,j =c j s +λ s,j,1 ε+λ s,j,2 ε 2 +λ ε s,3 ε 3 by Lemma 7. Furthermore, corresponding eigenfunctions must be of the formU ε φ ε s,j where φ ε s,j is as in Lemma 7 and similarly for eigendensities. SinceT ε 33 is just the restriction of ˆ T ε 3 to V 3 after re-identifying[−1/2,1/2) withS 1 and 0 withx s , we obtain the result. 38 2.3.2 Asymptotic Expansions and Proof of Theorem 3 In this section we provide a proof of Theorem 3 and take a more detailed look at the asymptotic expansions of Lemma 7. In particular, we provide a method for calculating the coefficients λ s,j,1 andλ s,j,2 in the expansions for the eigenvalues ofT ε s . This section is more technical in nature and can be skipped without loss of continuity. We will rely heavily on the results and notation of Appendices A - D. 
For convenience, we assume also thatσ(x) = σ 0 is constant and takeσ 0 = 1. Asymptotic expansions can be obtained in a similar manner for the general case, but the calculations become even more tedious and the formulas for coefficients are more complicated (see Remark 18 in Appendix D.1 for more information on the role played byσ.) Without loss of generality, we suppose that δ s < δ 1 where δ 1 is as in Lemma 32 and write c =c s ,δ =δ s ,p(x,y) =p s (x,y), and p ε (x,y) = 1 √ 2π e −(y−f ε (x)) 2 /2 for the transition density associated withT ε s Our plan is to apply the results of Appendix D withF =f andσ = 1. To this end, fix a positive k < min( ˜ k c ,k δ ) where ˜ k c ,k δ are as in Lemmas 32 and Lemma 35 (note that this also implies k <k c wherek c is the constant in Lemma 31). LetX =X k ,v(x) =v k (x) andk·k =k·k k . Define the operators T s,1 φ(x) = E[g 1 (x,cx+χ)φ(cx+χ)] T s,2 φ(x) = E[g 2 (x,cx+χ)φ(cx+χ)] where χ is N(0,1), σ > 0, and g 1 ,g 2 are the polynomials from Lemma 28. At this point it will also be convenient to drop the subscripts on the operators and write T ε = T ε sm ,T = 39 T s ,T i =T s,i ,i = 1,2. SinceTφ(x) = R φ(y)p(x,y)dy, andT i φ(x) = R φ(y)g i (x,y)p(x,y)dy for i = 1,2 by Lemma 31, we know that T,T 1 ,T 2 ∈ L(X). We also have T ε φ(x) = 1 V ε 3 (x) R (1 V ε 3 φ)(y)p ε (x,y)dy with V ε 3 = (−δ/ε,δ/ε) so that from Lemma 32, T ε ∈ L(X) as well. With this in mind, we are ready for: Proof of Theorem 3: Letφ ∈ X and set ˆ T ε = T +εT 1 +ε 2 T 2 . We need to show that there exists a constantK > 0 so that |(T ε − ˆ T ε )φ(x)| v(x) ≤Kε 3 kφk (2.13) ∀x ∈ R. We proceed by splitting the proof into the two separate cases where|x| is ’large’ and where|x| is ’small’. In the first case, we use bounds on the decay ofT,T 1 ,T 2 derived in Appendix D.2 while in the second, we use the expansion ofp ε given in Lemma 28 of Appendix D.1. Case1: Suppose first that|x|≥ δ ε . ThenT ε φ(x) = 0 by definition and Lemma 31 in Appendix D.2 implies that we have the bound |(T ε − ˆ T ε )φ(x)| v(x) ≤ |Tφ(x)|+ε|T 1 φ(x)|+ε 2 |T 2 φ(x)| v(x) ≤ K 1 kφke −η 1 x 2 ≤ K 1 kφke −η 1 δ 2 /ε 2 ≤ ˜ K 1 kφkε 3 where ˜ K 1 is a constant depending only onK 1 ,η 1 andδ. 40 Case 2: Suppose |x| < δ ε and write φ = φ 1 + φ 2 where φ 1 = φ1 I ε δ and φ 2 = φ1 (I ε δ ) c. Then we can apply Lemma 28 in Appendix D.1 to write T ε φ 1 = ˆ T ε φ 1 + ε 3 T ε r φ 1 , where T ε r φ 1 (x) = R φ 1 (y)R ε (x,y)dy. Moreover, from (D-2), we know that |T ε r φ 1 (x)|≤ Z |φ 1 (y)|g ε r (x,y)p(x,y)dy+ Z |φ 1 (y)|g ε r (x,y)p ε (x,y)dy whereg ε r is polynomial in|x| and|y|. Therefore, by Lemmas 31 and 32 in Appendix D.2, we have |(T ε − ˆ T ε )φ 1 (x)| v(x) = ε 3 |T ε r φ 1 (x)| v(x) ≤ ε 3 (K 1 kφ 1 ke −η 1 x 2 +K 2 kφ 1 ke −η 2 x 2 ) ≤ (K 1 +K 2 )kφ 1 kε 3 . NowT ε φ 2 (x) = 0 and hence, |(T ε − ˆ T ε )φ 2 (x)| v(x) = | ˆ T ε φ 2 (x)| v(x) . Furthermore, supp(φ 2 )⊂ (I ε δ ) c and therefore, Lemma 35 in Appendix D.2 implies that | ˆ T ε φ 2 (x)| v(x) ≤ 3K 3 ε(1+ε+ε 2 )kφ 2 ke −η 5 δ 2 /ε 2 . We can conclude that |(T ε − ˆ T ε )φ(x)| v(x) ≤ |(T ε − ˆ T ε )φ 1 (x)| v(x) + |(T ε − ˆ T ε )φ 2 (x)| v(x) ≤ (K 1 +K 2 )kφ 1 kε 3 +3K 3 ε(1+ε+ε 2 )kφ 2 ke −η 5 δ 2 /ε 2 ≤ [(K 1 +K 2 )ε 3 +3K 3 ε(1+ε+ε 2 )e −η 5 δ 2 /ε 2 ]kφk ≤ ˜ K 2 ε 3 kφk. 41 where ˜ K 2 is a constant only depending onK 1 ,K 2 ,K 3 ,η 5 , andδ. Together, Cases 1 and 2 yield (2.13) withK = max( ˜ K 1 , ˜ K 2 ). 6 Remark. The expressions given for the operatorsT 1 and T 2 take on a simpler form if we consider their restrictions toC 1 ∩X andC 2 ∩X, respectively. 
In particular, forφ∈C 1 ∩X, integrating by parts we have T 1 φ(x) = Z φ(y)c 1 (y−cx)x 2 p(x,y)dy = −c 1 x 2 Z φ(y) d dy (p(x,y))dy = c 1 x 2 Z φ ′ (y)p(x,y)dy = c 1 x 2 Tφ ′ (x) (2.14) and forφ∈C 2 ∩X T 2 φ(x) = c 2 x 3 Z φ(y)(y−cx)p(x,y)dy+ c 2 1 2 x 4 Z φ(y)((y−cx) 2 −1)p(x,y)dy = −c 2 x 3 Z φ(y) d dy (p(x,y))dy+ c 2 1 2 x 4 Z φ(y) d 2 dy 2 (p(x,y))dy = c 2 x 3 Tφ ′ (x)+ c 2 1 2 x 4 Tφ ′′ (x). (2.15) We shall make use of these equations later in calculating coefficients in the asymptotic expan- sions of the eigenvalues ofT ε . We also provide the promised proof of compactness for T ε . For ease of notation, we set T 0 =T . 8 Lemma. T ǫ is compact for allǫ≥ 0. 42 Proof: Letρ> 0 and suppose first thatε> 0. Let{φ n } n≥0 ⊂X withkφ n k = 1,∀n, and let x∈R with|x|<δ/ε. Choosex 0 ∈R with|x 0 |<δ/ε as well. Then T ε φ n (x)−T ε φ n (x 0 ) = Z δ/ε −δ/ε φ n (y)(p ε (x,y)−p ε (x 0 ,y))dy = 1 √ 2π Z δ/ε −δ/ε φ n (y)(h ε (x,y)−h ε (x 0 ,y))M(x,x 0 ,y)dy whereh ε (u,v) =− (v−f ε (u)) 2 2 and|M(x,x 0 ,y)|≤p ε (x,y)+p ε (x 0 ,y). Letting Δ =f ε (x)− f ε (x 0 ), we have h ε (x,y)−h ε (x 0 ,y) = − 1 2 [((y−f ε (x 0 ))−Δ) 2 −(y−f ε (x 0 )) 2 ] = Δ[(y−f ε (x 0 ))− Δ 2 ] and therefore, |T ε φ n (x)−T ε φ n (x 0 )| ≤ |Δ| √ 2π [ Z δ/ε −δ/ε |y||φ n (y)|(p ε (x,y)+p ε (x 0 ,y))dy +(|f ε (x 0 )|+ Δ 2 ) Z δ/ε −δ/ε |φ n (y)|(p ε (x,y)+p ε (x 0 ,y))dy] ≤ |Δ| √ 2π [T ε 0,1 |φ n |(x)+T ε 0,1 |φ n |(x 0 ) +(|f ε (x 0 )|+ Δ 2 )(T ε 0,0 |φ n |(x)+T ε 0,0 |φ n |(x 0 ))] ≤ 4K 2 |Δ| √ 2π (1+|f ε (x 0 )|+ Δ 2 )(v(x)+v(x 0 )) ≤ 8K 2 |Δ| √ 2π (1+M δ + Δ 2 )v(δ/ε) 43 where M δ = sup |y|<δ/ε {|f(y)|} and we have used Lemma 32 in the second to last line with m,n = 0,1. Since f ε is continuous, we can choose|x−x 0 | small enough so that |Δ| < min(2, ρ √ 2π 8K 2 (2+M δ )v(δ/ε) ), in which case we have |T ε φ n (x)−T ε φ n (x 0 )|≤ρ ∀n. This implies that the sequence{T ε φ n (x)} is equicontinuous forx∈ (−δ/ε,δ/ε). Further- more, a similar proof shows that{T ε φ n (x)} is equibounded and therefore, by the Ascoli-Arzela Theorem, there exists a subsequencen k and a continuous functionφ defined on (−δ/ε,δ/ε) such thatT ε φ n k →φ uniformly on [−δ/ε,δ/ε]. Defineφ(x) = 0,∀|x|≥δ/ε. Then we certainly haveφ∈X. Moreover, sinceT ε φ n (x) = 0, ∀n and|x|≥δ/ε, we have kT ε φ n k −φk = sup |x|<δ/ε |T ε φ n k (x)−φ(x)| v(x) + sup |x|≥δ/ε |T ε φ n k (x)−φ(x)| v(x) = sup |x|<δ/ε |T ε φ n k (x)−φ(x)| v(x) → 0 asn k →∞. We conclude thatT ε n k →φ inX asn k →∞ so thatT e is compact, forε> 0. To prove the result forε = 0, we chooseM > 0 so thatK 1 e −η 1 M 2 <ρ/2 whereK 1 ,ζ 1 are as in Lemma 31 withm,n = 6,2 (the degrees ofx,y, respectively, ing 2 ). LetU = (−M,M). Then using Lemma 31 instead of Lemma 32 in the argument above,∀x,x 0 ∈U, we obtain the inequality |T ε φ n (x)−T ε φ n (x 0 )|≤ 8K 1 |Δ| √ 2π (1+M U + Δ 2 )v(M) 44 whereM U = sup y∈U {|f(x 0 )|}. This implies that we can again apply the Ascoli-Arzela The- orem to show that there exists a subsequence n k and a function φ so that Tφ n k → φ uni- formly on U. Thus, if we define φ(x) = 0, ∀x / ∈ U, and choose n k large enough so that sup |x|<M |Tφ n k (x)−φ(x)|<ρ/2, Lemma 31 then yields kTφ n k −φk ≤ sup |x|<M | |Tφ n k (x)−φ(x)| v(x) + sup |x|>M |Tφ n k (x)| v(x) ≤ ρ/2+K 1 e −η 1 M 2 =ρ and thus,Tφ n k →φ inX which proves thatT is compact. Next, we move onto the calculation of the coefficients in the asymptotic expansions for the eigenvalues ofT ε . The ideas draw heavily on standard facts about Hermite Polynomials which are proven in Appendix A for ease of reference. 
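Since the computations that follow lean entirely on the Hermite facts recorded below (the orthogonality of Lemma 9, the recurrences of Lemma 10, and the derivative rule (2.16)), a quick numerical check of these identities is a cheap safeguard against sign and factor errors. The sketch below is only such a check (stated with alpha = 1; the general case follows by rescaling) and is independent of the operators T, T_1, T_2.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite import hermval, hermgauss, hermder

def H(n, x):
    """Physicists' Hermite polynomial H_n(x)."""
    c = np.zeros(n + 1); c[n] = 1.0
    return hermval(x, c)

# Orthogonality (Lemma 9 with alpha = 1): int H_n H_m dnu = delta_{nm} 2^n n!,
# where dnu(x) = e^{-x^2}/sqrt(pi) dx; Gauss-Hermite quadrature supplies the weight e^{-x^2}.
nodes, weights = hermgauss(60)
for n in range(5):
    for m in range(5):
        val = np.sum(weights * H(n, nodes) * H(m, nodes)) / np.sqrt(np.pi)
        expected = (2.0**n) * factorial(n) if n == m else 0.0
        assert abs(val - expected) <= 1e-8 * max(1.0, expected)

# Three-term recurrence ((1) of Lemma 10 with alpha = 1): x H_n(x) = (1/2) H_{n+1}(x) + n H_{n-1}(x).
x = np.linspace(-3.0, 3.0, 7)
for n in range(1, 8):
    assert np.allclose(x * H(n, x), 0.5 * H(n + 1, x) + n * H(n - 1, x))

# Derivative rule (2.16) with alpha = 1: H_n'(x) = 2 n H_{n-1}(x).
for n in range(1, 8):
    cn = np.zeros(n + 1); cn[n] = 1.0
    assert np.allclose(hermval(x, hermder(cn)), 2.0 * n * H(n - 1, x))

print("Hermite identities (Lemmas 9-10 and (2.16)) verified numerically.")
```

Iterating the three-term recurrence in the same way produces parts (2)-(4) of Lemma 10, which is exactly how they are derived below.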
We again use the notationφ n (x) = H n (αx) to denote the eigenfunctions ofT The next few lemmas modify the results of Appendix A to our present needs. 9 Lemma. Z φ n (x)φ m (x)dμ(x) =δ nm 2 n n! Proof: This follows from Theorem 13 and the fact that Z φ n (x)φ m (x)dμ(x) = Z H n (x)H m (x)dν(x). (A-2) implies that φ ′ n (x) =αH ′ n (αx) = 2nαH n−1 (αx) = 2nαφ n−1 (x). (2.16) 45 which leads to the following analog of Lemma 23. 10 Lemma. For alln≥ 0 we have (1) xφ n (x) =α −1 ( 1 2 φ n+1 (x)+nφ n−1 (x)) (2) x 2 φ n (x) =α −2 ( 1 4 φ n+2 (x)+ 2n+1 2 φ n (x)+n(n−1)φ n−2 (x)) (3) x 3 φ n (x) = α −3 ( 1 8 φ n+3 (x)+ 3n+3 4 φ n+1 (x)+ 3n 2 2 φ n−1 (x) +n(n−1)(n−2)φ n−3 (x)) (4) x 4 φ n (x) = α −4 ( 1 16 φ n+4 (x)+ 4n+6 8 φ n+2 (x)+ 6n 2 +6n+3 4 φ n (x) + 4n 3 +2n 2 φ n−2 (x)+n(n−1)(n−2)(n−3)φ n−4 (x)) where we defineφ k (x)≡ 0 fork< 0. Proof: From Lemma 23 in Appendix A, we have xφ n (x) =α −1 (αxH n (αx)) =α −1 ( 1 2 H n+1 (αx)+nH n−1 (αx)) which proves 1. 2 - 4 follow as before from iterating 1. 11 Lemma. Assumen≤m. Then (1) Z xφ n (x)φ m (x)dμ(x) = α −1 2 n (n+1)! m =n+1 0 otherwise 46 (2) Z x 2 φ n (x)φ m (x)dν(x) = α −2 2 n−1 (2n+1)n! m =n α −2 2 n (n+2)! m =n+2 0 otherwise (3) Z x 3 φ n (x)Hφ m (x)dν(x) = 3α −3 (2 n−1 )(n+1)(n+1)! m =n+1 α −3 2 n (n+3)! m =n+3 0 otherwise (4) Z x 4 φ n (x)φ m (x)dν(x) = 3α −4 (2 n−2 )(2n 2 +2n+1)(n+1)! m =n α −4 2 n (n+2)!(2n+3) m =n+2 α −4 2 n (n+4)! m =n+4 0 otherwise. Proof: This follows directly from Lemma 24 in Appendix A and the fact that Z x p φ n (x)φ m (x)dμ(x) =α −p Z x p H n (x)H m (x)dν(x) for anyn,m,p∈N. Now from (B-14) in Appendix B, we know thatλ n,1 satisfies the equation (T −c n )P n,1 +(T 1 −λ n,1 )P n = 0. (2.17) If we evaluate both sides of the above equation atφ n and then apply the operatorφ ∗ n to both sides, then Remark 5 implies that φ ∗ n T 1 φ n =λ n,1 φ ∗ n φ n . 47 Using the definition ofφ ∗ n and the form ofT 1 derived in Remark 6, we have φ ∗ n T 1 φ n = −c 1 Z x 2 φ n (x)Tφ ′ n (x)dμ(x) = −2c 1 nα Z x 2 φ n (x)Tφ n−1 (x)dμ(x) (By (2.16)) = −2c 1 c n−1 nα Z x 2 φ n (x)φ n−1 (x)dμ(x) = 0 by (2) in Lemma 11. Sinceφ ∗ n φ n = R φ 2 n (x)dμ(x) = 2 n n!6= 0, we can conclude thatλ n,1 = 0. In particular, the convergence ofλ ε n toc n asε→ 0 is of orderε 2 . In order to calculateλ n,2 , we will use (B-16) from Section B which says that (T −c n )P n,2 +(T 1 −λ n,1 )P n,1 +(T 2 −λ n,2 )P n = 0. Again, evaluating both sides atφ n and applyingφ ∗ n , we have φ ∗ n T 1 P n,1 φ n +φ ∗ n T 2 φ n = 2 n n!λ n,2 . (2.18) Letφ n,1 =P n,1 φ n . Then evaluating both sides of (2.17) atφ n and using the fact thatλ n,1 = 0, we have (T−c n )φ n,1 (x) = −T 1 φ n (x) = −c 1 x 2 Tφ ′ n (x) = −2nc 1 c n−1 αx 2 φ n−1 (x) = −2nc 1 c n−1 α −1 ( 1 4 φ n+1 (x)+ 2n−1 2 φ n−1 (x) +(n−1)(n−2)φ n−3 (x)) 48 for everyx ∈ R, the last equality following from Lemma 10. It is easily checked thatφ m ∈ R((T −c n ) −1 ) for m 6= n with (T −c n ) −1 φ m = 1 c m −c n φ m and hence, the above equation implies that φ n,1 (x) = −2nc 1 c n−1 α −1 ( 1 4(c n+1 −c n ) φ n+1 (x)+ 2n−1 2(c n−1 −c n ) φ n−1 (x) + (n−1)(n−2) c n−3 −c n−1 φ n−3 (x)) = 2nc 1 α −1 ( 1 4c(1−c) φ n+1 (x)− 2n−1 2(1−c) φ n−1 (x) − c 2 (n−1)(n−2) 1−c 2 φ n−3 (x)). 
Therefore, T 1 φ n,1 (x) = 2c 1 x 2 Tφ ′ n,1 (x) = 4nc 2 1 α −1 ( 2(n+1)c n−1 α 4(1−c) φ n (x)− 2(n−1)(2n−1)c n−2 α 2(1−c) φ n−2 (x) − 2(n−1)(n−2)(n−3)c n−2 α 1−c 2 φ n−4 (x)) = 4nc n−2 c 2 1 ( 2(n+1)c 4(1−c) x 2 φ n (x)− 2(n−1)(2n−1) 2(1−c) x 2 φ n−2 (x) − 2(n−1)(n−2)(n−3) 1−c 2 x 2 φ n−4 (x)) 49 so that from (2) in Lemma 11, we have φ ∗ n T 1 φ n,1 = Z φ n (x)T 1 φ n (x)dμ(x) = 4nc n−2 c 2 1 ( 2(n+1)c 4(1−c) Z x 2 φ 2 n (x)dμ(x) + 2(n−1)(2n−1) 2(1−c) Z x 2 φ n (x)φ n−2 (x)dμ(x) − 2(n−1)(n−2)(n−3) 1−c 2 Z x 2 φ n (x)φ n−4 (x)dμ(x) = 4nc n−2 c 2 1 ( 2 n−2 (n+1)(2n+1)n!c α 2 (1−c) − 2 n−2 (n−1)(2n−1)n! α 2 (1−c) ) = 2 n nn!c n−2 c 2 1 α 2 (1−c) ((n+1)(2n+1)c−(n−1)(2n−1)). Now Remark 6 and (2.16) also imply that T 2 φ n (x) = c 2 x 3 Tφ ′ n (x)+ c 2 1 2 x 4 Tφ ′′ n (x) = 2nc 2 αx 3 Tφ n−1 (x)+2n(n−1)c 2 1 α 2 x 4 Tφ n−2 = 2nc n−1 c 2 αφ n−1 (x)+2n(n−1)c n−2 c 2 1 α 2 x 4 φ n−2 (x). Therefore, using (3) and (4) from Lemma 11, we have φ ∗ n T 2 φ n (x) = 2nc n−1 c 2 α Z x 3 φ n−1 (x)φ n (x)dμ(x) +2n(n−1)c n−2 c 2 1 α 2 Z x 4 φ n−2 (x)φ n (x)dμ(x) = 3(2 n−1 )n 2 n!c n−1 c 2 α −2 +2 n−1 n(n−1)(2n−1)n!c n−2 c 2 1 α −2 = 2 n−1 nn!c n−2 α −2 (3ncc 2 +(n−1)(2n−1)c 2 1 ). 50 Substituting all this into (2.18), we obtain λ n,2 = 1 2 n n! [ 2 n nn!c n−2 c 2 1 α 2 (1−c) ((n+1)(2n+1)c−(n−1)(2n−1)) +2 n−1 nn!c n−2 α −2 (3ncc 2 +(n−1)(2n−1)c 2 1 )] = nc n−2 1−c 2 [ c 2 1 1−c (2(n+1)(2n+1)c−(n−1)(2n−1)(3−c))+3ncc 2 ] = nc n−2 1−c 2 [ c 2 1 1−c (3c(2n 2 +n+1)−3(n−1)(2n−1)+3ncc 2 ] (2.19) where in the second to last line we have used the fact thatα 2 = 1−c 2 2 . This completes our analysis of the operator near the stable fixed point and explains the λ ε s,j eigenvalues appearing in Theorem 2. We now move to the study of the operator in a neighbor- hood of the unstable fixed point, which thankfully turns out to be essentially the same. 2.4 The Local Story Near an Unstable Fixed Point We begin in the same way as the previous section, identifyingS 1 with [−p/2,p/2), but now identifyx u ↔ 0 so thatV 1 ↔ (−δ u ,δ u ) and we replacef withf(·+x u )−x u so thatf(0) = 0 andc u =f ′ (0). Again, we will be working onR for the remainder of this section, so we shall denote (−δ u ,δ u ) byV 1 and use B r (x) :={y∈R :|x−y|<r} for anyr> 0. 51 2.4.1 Main Results We extendT ε 11 to an operator onB via (2.5) by defining ˆ T ε 1 φ(x) :=1 V 1 (x) Z V 1 φ(y)˜ p ε (x,y)dy for everyφ∈B. Note that ˆ T ε 1 has a decomposition of the form ˆ T ε = T ε 11 0 0 0 with respect toB so that the spectrum of ˆ T ε 1 will again differ from the spectrum ofT ε 11 only by the possible addition of 0. Once again ,we re-scale space by definingT ε u := (U ε ) −1 ◦ ˆ T 1 ε ◦U ε and note thatT ε u and ˆ T ε 1 have the same spectrum. Now as in Section 2.3.1, forφ∈B, we have T ε u φ(x) = 1 V ε 1 (x) Z V 1 X n∈Z φ(y/ε)p ε (εx,y +np)dy = T ε um φ(x)+T ε ue φ(x) whereV ε 1 :=V 1 /ε, T ε um φ(x) :=1 V ε 1 (x) Z V ε 1 φ(y)p ε (εx,εy)dy, and T ε ue φ(x) :=1 V ε 1 (x) Z V ε 1 X n6=0 φ(y)p ε (εx,εy +np)dy. Lemma 2 implieskT ε ue k ∞ ≤Mεe −K/ε 2 (see the argument in Section 2.3.1) for anyε> 0 and hence the main contributions to the spectrum of T ε u (and hence, to the spectrum of T ε 11 ) will come from the spectrum ofT ε um . 52 To identify the limit ofT ε um asε→ 0, we define f ε u (x) := 1 ε (f(εx)−n u p) = 1 ε (f(εx)−f(0))→c u x asε→ 0 for anyx∈R. We therefore obtain the following analog of Lemma 4 12 Lemma. lim ε→0 εp ε (εx,εy) = 1 √ 2πσ 0 e −(y−cux) 2 /(2σ 2 0 ) =:p u (x,y) whereσ 0 =σ(0). Furthermore, the convergence is uniform on compact subsets ofR 2 . 
Proof : See Proof of Lemma 4. p u is of course the transition density for the autoregressive model X n+1 =c u X n +σ 0 χ n , but since|c u | > 1, we cannot directly apply the same arguments we used in Section 2.3.1 for calculating the spectrum of its transition operator T u φ(x) = Z R φ(y)p u (x,y)dy. To simplify the following calculations, we assume for now thatσ(x) = 1. 53 Now we can directly verify thatT u has an eigenvalue|c u | −1 with corresponding eigenfunction v(x) = e −β 2 x 2 whereβ 2 = c 2 u −1 2 . We re-scaleT ε u a second time by pre-multiplying and post- dividing by the functionv. Then∀φ∈B, T u (vφ)(x) = 1 √ 2π Z φ(c u x+y)v(c u x+y)e −y 2 /2 dy = 1 √ 2π e −β 2 c 2 u x 2 Z φ(c u x+y)e − 1 2 (c 2 u y 2 +4β 2 cuxy) dy = e −(β 2 c 2 u −2β 2 )x 2 /2 1 √ 2π Z φ(c u x+y)e − 1 2 (cuy+2βx) 2 dy = e −β 2 x 2 1 |c u | √ 2π Z φ(c −1 u x+c −1 u z)e −z 2 /2 dz = 1 |c u | v(x)E[φ(c −1 u x+c −1 u χ)] where in the second to last line we have made the change of variablesz = c u y + 2βx. This implies that |c u | v(x) T u (vφ)(x) =E[φ(c −1 u x+c −1 u χ)] ∀φ∈B,x∈R so that if we define the bounded, invertible operatorV :B→B byVφ =vφ, the operatorQ :=|c u |V −1 ◦T u ◦V is the transition operator for the autoregressive chain X n+1 =c −1 u X n +c −1 u χ n and since|c −1 u |< 1, we can use the techniques of the previous section to calculate its spectrum. We therefore consider the operatorQ ε :=|c u |V −1 ◦T ε um ◦V . SinceV is a similarity transform, we can easily obtain the eigenvalues of T ε um from the eigenvalues of Q ε by multiplying by |c u | −1 . Performing a similar calculation to the one above, we have the representation Q ε φ(x) = 1 V ε 1 (x)h ε (x)E[(φ1 V ε 1 )(F ε u (x)+c −1 u χ)] = 1 V ε 1 (x) Z V ε 1 φ(y)h ε (x)p ε u (x,y)dy (2.20) 54 whereF ε u (x) = f ε u (x) c 2 u ,h ε (x) =e β 2 [x 2 −(f ε u (x)/cu) 2 ] , and p ε u (x,y) = c u √ 2π e −c 2 u (y−F ε u (x)) 2 /2 . Since (F ε u ) ′ (0) = c −2 u (f ε u ) ′ (0) = c −1 u ∈ (−1,1),p ε u has the same basic form as the transition densityp ε s for the operatorT ε s discussed in the previous section. The factorh ε (x) is tiresome, but with a little extra work it turns out that we can still prove: 6 Theorem. There exist operatorsQ 1 ,Q 2 ∈L(X ks ) so that Q ε =Q+εQ 1 +ε 2 Q 2 +0(ε 3 ) inL(X ks ) Again, we defer the proof to the next section. Now from our work in Section 2.3, we know that Q has eigenvaluesc −n u , forn≥ 0 and since β = r c 2 u −1 2 = s 1−(c −1 u ) 2 2σ 2 u whereσ u =c −1 u , we know that the corresponding (right) eigenfunctions are of the formH n (βx) while the (left) eigendensities are of the formH n (βx)ρ u (x) with ρ u (x) = β √ 2π e −(βx) 2 . Combining this information with Theorem 6 in mind, we can again apply Theorem 16 in Appendix B to conclude that: 55 13 Lemma. For anyr > 0,∃ε u,r ,L u,r ,K u,r > 0 so that∀ε < ε u,r , any eigenvalue ofQ ε in X ks with modulus bigger thanr is a simple eigenvalue of the form λ ε j =c −j u +ελ j,1 +ε 2 λ j,2 +ε 3 λ ε j,3 for somej ≥ 0 withλ j,i ∈C,i = 1,2 and|λ ε j,3 |≤ L u,r ,∀ε < ε u,r . Furthermore, the (right) eigenfunctions of Q ε corresponding to λ ε j are multiples of a function of the form φ ε u,j (x) := (H j (βx)+εψ ε j (x))1 V ε 1 (x), withkψ ε j k ks ≤K u,r for allε<ε u,r . We will not need information about the eigendensities, although their relationship to the eigen- functions is the same as in Lemma 7. Following the same argument used in the proof of Theorem 5, we obtain: 7 Theorem. 
For anyr > 0,∃ε u,r ,L u,r ,K u,r > 0 so that∀ε < ε u,r , any eigenvalue ofT ε 11 in B(S 1 ) with modulus greater thanr is a simple eigenvalue of the form λ ε u,j =|c u | −1 c −j u +λ u,j,1 ε+λ u,j,2 ε 2 +λ ε u,j,3 ε 3 for somej≥ 0 withλ u,j,i ∈C,i = 1,2, and|λ ε u,j,3 |≤|c u | −1 L u,r ,∀ε<ε u,r . Furthermore, the eigenfunctions ofT ε 11 corresponding toλ ε u,j are multiples of a function of the form (exp{ β 2 (x−x u ) 2 ε }H j ( α(x−x u ) ε )+εψ ε u,j ( x−x u ε ))1 V 1 (x) with sup x∈R (|ψ ε u,j (x)|e −(ks+β 2 )x 2 )≤K u,r for allε<ε u,r . 56 2.4.2 Asymptotic Expansions and The Proof of Theorem 6 In this section we provide a proof of Theorem 6 and take a more detailed look at the asymptotic expansions of Theorem 13. The work done here is analogous to the work done in Section 2.3.2 and can be also be skipped without loss of continuity. Again, we simplify calculations by assumingσ(x) = 1. Without loss of generality, we suppose that δ s < δ 1 where δ 1 is as in Lemma 32 and write c =c u ,δ =δ u ,p(x,y) =p u (x,y), and p ε (x,y) = 1 √ 2π e −(y−(f ε (x)/c 2 )) 2 /2 for the transition density associated withQ ε Our plan is to apply the results of Appendix D withF = f c 2 andσ = 1 |c| .(Note that in this case, a = F ′ (0) = 1 c satisfies|a| < 1.) To this end, fix a positivek < min( ˜ k c ,k δ ) where ˜ k c ,k δ are as in Lemmas 32 and Lemma 35) (note that this also impliesk < k c wherek c is the constant in Lemma 31). LetX =X k ,v(x) =v k (x) andk·k =k·k k . Define the operators Q 1 φ(x) = E[˜ g 1 (x, x c + 1 |c| χ)φ( x c + 1 |c| χ)] Q 2 φ(x) = E[˜ g 2 (x, x c + 1 |c| χ)φ( x c + 1 |c| χ)] where χ is N(0,1), σ > 0, and ˜ g 1 ,˜ g 2 are the polynomials in Lemma 29 of Appendix D.1. Since Qφ(x) = R φ(y)p(x,y)dy, and Q i φ(x) = R φ(y)˜ g i (x,y)p(x,y)dy for i = 1,2, from Lemma 31 in Appendix D.2 we know that Q,Q 1 ,Q 2 ∈ L(X). We also have Q ε φ(x) = 1 I ε δ (x)h ε (x) R 1 I ε δ φ(y)p ε (x,y)dy so that from Lemma 34,Q ε ∈L(X) as well. This leads to: 57 Proof of Theorem 6: Letφ∈X and write ˆ Q ε =Q+εQ 1 +ε 2 Q 2 . As in the proof of Theorem 3, we need to show that there exists aK > 0 such that |(Q ε − ˆ Q ε )φ(x)| v(x) ≤Kε 3 kφk ∀x∈R. Again, we deal with the two cases of|x| ’large’ and|x| ’small’ separately. Case 1: If|x| ≥ δ ε , thenQ ε φ(x) = 0 and sinceQ,Q 1 ,Q 2 have the same basic structure as T,T 1 ,T 2 , the proof of this case is the same as the proof of Case 1 in Theorem 3. Case 2: If|x|< δ ε , then we again writeφ =φ 1 +φ 2 withφ 1 =φ1 I ε δ andφ 2 =φ1 I ε δ . By Lemma 29 in Appendix D.1,Q ε φ 1 (x) = ˆ Q ε φ 1 (x)+ε 3 Q ε r φ 1 (x) whereQ ε r φ 1 (x) := R φ(y) ˜ R ǫ (x,y)dy. Using (D-8) and Lemmas 31 - 34 in Appendix D.2, we have |(Q ε − ˆ Q ε )φ 1 (x)| v(x) = ε 3 |Q ε r φ 1 (x)| v(x) ≤ ε 3 Z |˜ g ǫ r (x,y)φ 1 (y)|(p(x,y)+p ǫ (x,y))(1+h ε (x))dy ≤ ε 3 (K 1 kφ 1 ke −η 1 x 2 +K 2 kφ 1 ke −η 2 x 2 +K 1 kφ 1 ke −η 3 x 2 +K 2 kφ 1 ke −η 4 x 2 ) ≤ (2K 1 +2K 2 )kφ 1 kε 3 . Now we know thatQ ε φ 2 (x) = 0 and applying Lemma 36 in Appendix D.2 we have | ˆ Q ε φ 2 (x)| v(x) ≤K 3 kφ 2 ke −η 6 δ 2 /ε 2 58 and therefore, |(Q ε − ˆ Q ε )φ(x)| v(x) ≤ |(Q ε − ˆ Q ε )φ 1 (x)| v(x) + |(Q ε − ˆ Q ε )φ 2 (x)| v(x) ≤ (2K 1 +2K 2 )|c|ε 3 kφ 1 k+K 3 |c|kφ 2 ke −η 6 δ 2 /ε 2 ≤ ˜ Kε 3 kφk for some constant ˜ K depending only onc,K 1 ,K 2 ,K 3 ,η 6 , andδ, which completes the proof. 7 Remark. Again, the operatorsQ 1 , andQ 2 take on simpler forms when operating on func- tions inC 1 orC 2 , respectively. 
To see this, letc 1 =c u,1 ,c 2 =c u,2 be the coefficients ofx 2 and x 3 , respectively, in the Taylor expansion off aboutx u . Ifφ∈ C 1 , then using the calculation in Remark 6 and takinga = 1 c ,a 1 = c 1 c 2 in the expression for ˜ g 1 in Lemma 29, we have Q 1 φ(x) =− 2c 1 β c x 3 Qφ(x)+ c 1 c 2 x 2 Qφ ′ (x). Similarly, ifφ∈C 2 , we have Q 2 φ(x) = β[−( c 2 1 c 2 + 2c 2 c )x 4 + 4c 1 β c 2 x 6 ]Qφ(x)+( c 2 c 2 x 3 − 2c 2 1 β c 3 x 5 )Qφ ′ (x) + c 2 1 2c 4 x 4 Qφ ′′ (x). SinceQφ(x) =E(φ( x c + 1 c χ)), we know from our work in the previous section thatQ has eigen- values{c −n } n≥0 , corresponding right eigenfunctionsφ u,n (x) := H n ( q 1−1/c 2 2/c 2 x) = H n (βx), and left eigenmeasuresφ ∗ u,n defined byφ ∗ u,n (A) = R A φ u,n (x)dμ(x) wheredμ(x) := ρ(x)dx defines the invariant measure forQ. Therefore, for everyn≥ 0,Q ε must have an eigenvalue λ ε n of the formλ ε u,n =c −n +ελ u,n,1 +ε 2 λ u,n,2 +O(e 3 ) and a corresponding eigenfunction of the formφ ε u,n =φ u,n +εφ u,n,1 +O(ε 2 ). We now concentrate on calculating the coefficientsλ u,n,1 59 and λ u,n,2 . For the remainder of this section, we drop the subscript u and write λ ε n = λ ε u,n , λ n,i =λ u,n,i , andφ n =φ u,n . To findλ n,1 , we evaluate (B-14) from Theorem 15 in Appendix B atφ n to obtain the identity (Q−c −n )φ n,1 +(Q 1 −λ n,1 )φ n = 0 Applying the operatorφ ∗ n to both sides, we have λ n,1 φ ∗ n φ n =φ ∗ n (Q 1 φ n ). (2.21) NowQ 1 φ(x) =− 2c 1 β c x 3 Qφ(x)+ c 1 c 2 x 2 Qφ ′ (x) and hence, φ ∗ n (Q 1 φ n ) = Z − 2c 1 β c x 3 φ n (x)Qφ n (x)dμ(x)+ Z c 1 c 2 x 2 φ n (x)Qφ ′ (x)dμ(x). SinceQφ n =c −n φ n , the first term on the right is − 2c 1 β c n+1 Z x 3 φ 2 n (x)dμ(x) = 0 by (3) in Lemma 11 in Section 2.3.2. To deal with the second term, we first note that (2.16) in Section 2.3.2 (replacingα withβ) impliesQφ ′ n (x) = 2nβQφ n−1 (x) = 2nβ c n−1 φ n−1 . Further- more, from Lemma 23, we know thatx 2 φ n (x) is a linear combination ofφ n+2 (x),φ n (x), and φ n−2 (x). But R φ n (x)φ m (x)dμ(x) = 0,∀m6= n, and hence, the second term must also be 0. Therefore,φ ∗ n Qφ n = 0. Sinceφ ∗ n φ n = R φ 2 n dμ(x) = 2 n n! 6= 0, (2.21) tells us thatλ n,1 = 0, ∀n≥ 0. 60 As in the Section 2.3.2, we will need an expression forφ n,1 . To obtain this expression, we use (2.16) and Lemma 10 in Section 2.3.2 (again, replacingα withβ) to calculate Q 1 φ n (x) = − 2c 1 β 2 c x 3 Qφ n (x)+ 2nc 1 β c 2 x 2 Qφ n−1 (x) = − 2c 1 β 2 c n+1 x 3 φ n (x)+ 2nc 1 β 2 c n+1 x 2 φ n−1 (x) = − 2c 1 βc n−1 ( 1 8 φ n+3 (x)+ 3n+3 4 φ n+1 (x) + 3n 2 2 φ n−1 (x)+n(n−1)(n−2)φ n−3 (x)) + 2nc 1 βc n−1 ( 1 4 φ n+1 (x)+ 2n−1 2 φ n−1 (x)+(n−1)(n−2)φ n−3 (x)) = − 2c 1 βc n−1 ( 1 8 φ n+3 (x)+ 2n+3 4 φ n+1 (x)+ n(n+1) 2 φ n−1 (x)). Now ifm6=n, thenφ m ∈D((Q−c −n ) −1 ) and (Q−c −n ) −1 φ m = 1 (c −m −c −n ) . Using this fact along with (2.21), we can conclude that φ n,1 = − 2c 1 βc n−1 (Q−c −n ) −1 ( 1 8 φ n+3 (x)+ 2n+3 4 φ n+1 (x)+ n(n+1) 2 φ n−1 (x)) = − 2c 1 βc n−1 ( 1 8(c −(n+3) −c −n ) φ n+3 (x)+ 2n+3 4(c −(n+1) −c −n ) φ n+1 (x) + n(n+1) 2(c −(n−1) −c −n ) φ n−1 (x)) = − 2c 1 β ( c 4 8(1−c 3 ) φ n+3 (x)+ (2n+3)c 2 4(1−c) φ n+1 (x)+ n(n+1)c 2(c−1) φ n−1 (x)). (2.22) From (B-16) in Theorem 15, we know that (Q−c −n )φ n,2 +Q 1 φ n,1 +(Q 2 −λ n,2 )φ n = 0. 61 Following our previous heuristics and applyingφ ∗ n to the above equation yields the expression λ n,2 = 1 φ ∗ n φ n (φ ∗ n Q 1 φ n,1 +φ ∗ n Q 2 φ n ). (2.23) This calculation is more tedious than in the stable case and the exact expression is not important at the moment. 
We do, however, make the remark that from the expression forφ n,1 above, we can see thatλ n,2 blows-up as|c|→ 1. 2.5 After the Period Doubling 2.5.1 Stable Period Two Orbit We return to the study of (2.2) after the switch to period 2 stability in the corresponding deter- ministic systemx n+1 =f(x n ) has occurred. We takeσ(x) = 1 to simplify the computations. To this end, we assume thatf has two fixed pointsx s ,x u withf ′ (x s ), f ′ (x u ) / ∈ [−1,1] so that x s and x u are unstable. We suppose that in addition to these two fixed points, f also has a stable period 2 orbitP ={x 1 ,x 2 } with (f 2 ) ′ (x 1 ) = (f 2 ) ′ (x 2 ) = f ′ (x 1 )f ′ (x 2 )∈ (−1,1) and assume thatP has basin of attractionS 1 \{x s ,x u } so that all orbits ofx n+1 = f(x n ) end up inP ifx 0 / ∈{x s ,x u }. As in our previous investigations of Section 2.2 (see Proposition 1), we begin by splitting up the circle into regions determined by the different actions off. 2 Proposition. There exist positive constantsδ u ,δ s ,δ 1 ,δ 2 ,δ andN ∈N so that if we letV 1 := B δu (x u ),V 2 :=B δs (x s ),U 1 :=B δ 1 (x 1 ),U 2 :=B δ 2 (x 2 ),V 4 :=U 1 ∪ U 2 , andV 3 :=S 1 \(V 1 ∪ V 2 ∪ V 4 ), we have (1) d(f(x),V 1 ∪V 2 )>δ for everyx / ∈V 1 ∪V 2 . (2) d(f(x),V j )>δ forx∈V i ,i,j = 1,2,j6=i. 62 (3) f(U 1 ) ⊂ U 2 ,f(U 2 ) ⊂ U 1 withd(f 2 (x),U c i ) > δ, andd(f(x),U i ) > δ for everyx∈ U i , i = 1,2. (4) f n (x)∈V 4 for everyx∈V 3 andn≥N. Proof: (1) and (4) are essentially the same as (1) and (3) of Proposition 1. For (2) , we note that by continuity off and the fact thatf(x s,u ) = x s,u , we can choose ˜ δ s,u < d(xs,xu) 4 so that d(f(x),x s )< d(xs,xu) 4 for everyx∈B ˜ δs (x s ) andd(f(x),x u )< d(xs,xu) 4 for everyx∈B ˜ δu (x u ). This implies that by shrinkingδ s andδ u if necessary and makingδ < d(xs,xu) 2 , we obtain (2) . (Shrinking δ s,u does not change (1)). The proof of the first part of (3) is easy since f is continuous withf(x 1,2 ) =x 2,1 and(f 2 ) ′ (x 1,2 )< 1. To obtain the condition thatd(f(x),U i )> δ if x ∈ U i , we simply shrink δ,δ 1 , and δ 2 if necessary so that min(δ 1 ,δ 2 ) < d(x 1 ,x 2 ) 4 and δ < d(x 1 ,x 2 ) 2 . This splitting leads to the decompositionT ε = (T ε ij ) 4 i,j=1 whereT ε ij : B(V j ) → B(V i ),i,j = 1,2,3,4 is given by T ε ij φ(x) =1 V i (x)E((φ1 V j )(f(x)+εχ)). Proposition 2 then yields 14 Lemma. There exist constantsM,K > 0 so that kT ε ij k ∞ ≤Mεe −K/ε 2 , for everyi>j. Furthermore, there exist positive constantsM N ,K N such that k(T ε 33 ) n k ∞ ≤ (M N εe −K N /ε 2 ) p(n) , ∀n≥N +1 whereN is the same constant as in (4) of Proposition 2 andp(n) =⌊ n N+1 ⌋. 63 Proof: Since (1) , (2) , and (3) in Proposition 2 imply that P x (X ε 1 ∈V j )≤P x (d(f(x),X ε 1 )>δ) for allx∈V i andi>j, the first part follows directly from Lemma 2 (Recall that kT ε ij k ∞ = sup x∈V i P x (X ε 1 ∈V j ) for alli,j.) The proof of the second part is identical to the proof of Lemma 3 in Section 2.2. Lemma 14 implies thatT ε is again ‘asymptotically’ upper triangular withT ε 33 having exponen- tially small spectral radius asε → 0. Therefore, as in Theorem 2, if we are givenr > 0, we can decomposeT ε =T ε lp +T ε up is such a way that ifε is sufficiently small,kT ε lp k ∞ <r and any eigenvalues ofT ε up with modulus greater thanr are of the formλ+O(ε) whereλ is determined by the limiting eigenvalues of T ε ii , i = 1,2,4, asε → 0. 
Furthermore, since bothx s and x u are now unstable fixed points, we can apply Theorem 7 in Section 2.4 to conclude that when i = 1,2, these limiting eigenvalues are of the form|c| −1 c −n wherec = c s orc = c u . Hence, our analysis will be complete once we determine the appropriate limits for the eigenvalues of T ε 44 . Now we can further splitT ε 44 according to the decompositionV 4 =U 1 ∪U 2 to obtain T ε 44 = T ε 411 T ε 412 T ε 421 T ε 422 withT 4ij :B(U i )→B(U j ) defined by T ε 4ij φ(x) =1 U i (x)E[(φ1 U j )(f(x)+εχ)]. 64 But (3) in Proposition 2 implies that P x (X ε 1 ∈U i )≤P x (d(f(x),X ε 1 )>δ) forx∈U i ,i = 1,2 so that from Lemma 2, we obtain kT ε 4ii k ∞ ≤Mεe −K/ε 2 fori = 1,2. Therefore,T ε 44 ’asymptotically’ has a 0-diagonal form (see Appendix C). Lemma 27 in Appendix C implies that the spectrum of T ε 44 is determined by the spectrum of S ε 1 := T ε 412 T ε 421 (which is the same as the spectrum ofS ε 2 :=T ε 421 T ε 412 ). To determine the spectrum ofS ε 1 , we first mapS 1 to [−1/2,1/2) in such a way thatx 1 ↔ 0 (and hence, x 2 ↔ f(0) withf(x 2 ) = 0). Letc 1 = f ′ (0) andc 2 = f ′ (f(0)). We define the linearized chain ˜ X ε n by ˜ X ε 0 =X ε 0 and ˜ X ε 2n−1 = f(0)+c 1 ˜ X ε 2(n−1) +εχ 2(n−1) ˜ X ε 2n = c 2 ( ˜ X ε 2n−1 −f(0))+εχ 2n−1 for alln≥ 1. That is, we replacef in the definition ofX ε n by alternate linearization near 0 and f(0). Suppose thatX ε 0 =x∈U 1 . Then by our choice ofU 1 andU 2 , we have P x (X ε 1 / ∈U 2 )≤P x (d(f(x),X ε 1 )>δ)≤Mεe −K/ε 2 (2.24) and P x (X ε 2 / ∈U 1 )≤P(X ε 2 / ∈U 1 |X ε 1 ∈U 2 )+P x (X ε 1 / ∈U 2 )≤ 2Mεe −K/ε 2 . (2.25) If we recall thatf(x)≈f(0)+c 1 x onU 1 andf(x)≈c 2 (x−f(0)) onU 2 , then we obtain the same inequalities for the linearized process as well (in fact, we can choose our neighborhoods 65 U 1 andU 2 so that property (3) of Proposition 2 holds if we replacef byf(0)+c 1 x onU 1 and c 2 (x−f(0)) onU 2 ; see also Proposition 1). Recalling the definition of S ε 1 , using the Markov Property, and applying standard results for conditional expectations we have S ε 1 φ(x) = (T ε 412 )(T ε 421 φ)(x) = 1 U 1 (x)E x [(T ε 421 φ)(X ε 1 )1 U 2 (X ε 1 )] = 1 U 1 (x)E x [E[φ(X ε 2 )1 U 1 (X ε 2 )|X ε 1 ]1 U 2 (X ε 1 )] = 1 U 1 (x)E x [φ(X ε 2 )1 U 1 (X ε 2 )1 U 2 (X ε 1 )]. (2.26) We now look at the difference betweenS ε 1 and ˜ S ε := two step transition operator for ˜ X ε n . Since ˜ X ε 2 =c 2 ( ˜ X ε 1 −f(0))+χ ε 1 = d c 1 c 2 x+σ 1 εχ whereσ 1 = p 1+c 2 2 andχ is standard normal, ˜ S ε is given by ˜ S ε φ(x) =E x [φ( ˜ X ε 2 )] =E x [φ(c 1 c 2 x+εσ 1 χ)]. The following Lemma gives us bounds on the difference. As in Section 2.3, we define the weighted sup-norm spaces X k ={φ∈B(R) : sup x∈R |φ(x)|e −kx 2 <∞}. 15 Lemma. There exists a constantk> 0 such that| ˜ S ε φ(x)−S ε 1 φ(x)|e −kx 2 /ε 2 ≤O(ε)kφk k/ε 2, for allx∈R andφ∈X k/ε 2. 66 Proof: From (2.26) and the definition of ˜ S ε , we have | ˜ S ε φ(x)−S ε 1 φ(x)| = |E x [φ( ˜ X ε 2 )]−1 U 1 (x)E x [φ(X ε 2 )1 U 1 (X ε 2 )1 U 2 (X ε 1 )]| ≤ |1 U c 1 (x)E x [φ( ˜ X ε 2 )]| +|1 U 1 (x)(E x [φ( ˜ X ε 2 )−φ(X ε 2 )1 U 1 (X ε 2 )1 U 2 (X ε 1 )| ≤ I ε 1 φ(x)+I ε 2 φ(x)+I ε 3 φ(x) where I ε 1 φ(x) := |1 U c 1 (x)E x [φ( ˜ X ε 2 )]| I ε 2 φ(x) := |1 U 1 (x)(E x [φ( ˜ X ε 2 )1 U 1 ( ˜ X ε 2 )1 U 2 ( ˜ X ε 1 )−φ(X ε 2 )1 U 1 (X ε 2 )1 U 2 (X ε 1 )]| I ε 3 φ(x) := |1 U 1 (x)(E x [φ( ˜ X ε 2 )(1−1 U 1 ( ˜ X ε 2 )1 U 2 ( ˜ X ε 1 ))]|. 
We would like to show that for some suitable value ofk andφ∈X k/ε 2, we have |I ε j φ(x)|e −k/ε 2 ≤O(ε)kφk k/ε 2, ∀x∈R andj = 1,2,3.I ε 1 is the easiest so we deal with this term first. Since|c 1 c 2 | < 1, if we takek < (1−(c 1 c 2 ) 2 )/(2σ 2 1 ), we can apply the bounds in Lemma 31 from Appendix D.2 to conclude that∃ ˜ K,η> 0 so that I ε 1 φ(x)e −kx 2 /ε 2 = 1 U c 1 (x)e −ky 2 |E[φ(c 1 c 2 εy +εσ 1 χ)]| (y =x/ε) ≤ 1 |x|>δ 1 e −ky 2 1 √ 2πσ 1 Z |φ(εz)|e −(z−c 1 c 2 y) 2 /(2σ 2 1 ) dz ≤ 1 |x|>δ 1 kφk k/ε 2e −ky 2 1 √ 2πσ 1 Z e kz 2 e −(z−c 1 c 2 y) 2 /(2σ 2 1 ) dz ≤ 1 |x|>δ 1 kφk k/ε 2 ˜ Ke −ηy 2 ≤ ˜ Ke −ηδ 1 2 /ε 2 kφk k/ε 2 67 which yields |I ε 1 φ(x)|e −kx 2 /ε 2 ≤ 0(ε)kφk k/ε 2 for everyφ∈X k/ε 2 as desired. We next turn our attention to I ε 2 . To deal with this term, it is useful to define the re-scaled processesY ε n , ˜ Y n byY ε 0 = ˜ Y 0 =y :=x/ε and Y ε 1 = X ε 1 −f(0) ε = f(x)−f(0) ε +χ 0 =:f ε 1 (y)+χ 0 Y ε 2 = X ε 2 ε = f(εY ε 1 +f(0)) ε +χ 1 =:f ε 2 (Y ε 1 )+χ 0 ˜ Y 1 = ˜ X ε 1 −f(0) ε =c 1 y +χ 0 ˜ Y 2 = ˜ X ε 2 ε =c 2 ˜ Y 1 +χ 1 ... wheref ε 1 (y) := 1 ε (f(εy)−f(0)) andf ε 2 (y) := 1 ε f(εy +f(0)). We use the notation p ε i (y,z) = 1 √ 2π e −(z−f ε i (y)) 2 /2 , ˜ p i (y,z) = 1 √ 2π e −(z−c i y) 2 /2 , i = 1,2 to denote the one-step transition densities for the re-scaled processes. We also define U ε i := (−δ i /ε,δ i /ε) fori = 1,2. Then rewritingI ε 2 in terms ofY ε n , ˜ Y n , we have I ε 2 φ(εy) = 1 U ε 1 (y)|E εy [φ(ε ˜ Y 2 )1 U ε 1 ( ˜ Y 2 )1 U ε 2 ( ˜ Y 1 )−φ(εY ε 2 )1 U ε 1 (Y ε 2 )1 U ε 2 (Y ε 1 )]| = 1 U ε 1 (y)| Z U ε 1 Z U ε 2 φ(εz)(˜ p 1 (y,u)˜ p 2 (u,z)−p ε 1 (y,u)p ε 2 (u,z))dudz|. 68 Using ˜ p 1 (y,u)˜ p 2 (u,z)−p ε 1 (y,u)p ε 2 (u,z) = (˜ p 1 (y,u)−p ε 1 (y,u))˜ p 2 (u,z) +p ε 1 (y,u)(˜ p 2 (u,z)−p ε 2 (u,z)), we have I ε 2 φ(εy) ≤ 1 U ε 1 (y)[ Z U ε 1 Z U ε 2 |φ(εz)||˜ p 1 (y,u)−p ε 1 (y,u)|˜ p 2 (u,z)dudz + Z U ε 1 Z U ε 2 |φ(εz)||˜ p 2 (u,z)−p ε 2 (u,z)|p ε 1 (y,u)dudz]. (2.27) Sincef ε 1 (y) → c 1 y andf ε 2 (y) → c 2 y asε → 0, Lemma 28 and Remark 17 in Appendix D.1 implies that |˜ p 1 (y,u)−p ε 1 (y,u)|.O(ε)(˜ p 1 (y,u)+p ε 1 (y,u)) and |˜ p 2 (u,z)−p ε 2 (u,z)|.O(ε)(˜ p 2 (u,z)+p ε 2 (u,z)). 69 Applying these bounds to (2.27), assumingφ ∈ X k/ε 2, and using the notation ˜ p 12 andp ε 12 to denote the two step transition densities for ˜ Y n andY ε n , respectively, we have I ε 2 φ(εy) ≤ 1 U ε 1 (y)O(ε)kφk k/ε 2[ Z U ε 1 Z U ε 2 e kz 2 ˜ p 1 (y,u)˜ p 2 (u,z)dudz + Z U ε 1 Z U ε 2 e kz 2 p ε 1 (y,u)˜ p 2 (u,z)dudz + Z U ε 1 Z U ε 2 e kz 2 p ε 1 (y,u)˜ p 2 (u,z)dudz + Z U ε 1 Z U ε 2 e kz 2 p ε 1 (y,u)p ε 2 (u,z)dudz] ≤ 1 U ε 1 (y)O(ε)kφk k/ε 2[ Z U ε 2 e kz 2 ˜ p 12 (y,z)dz +2 Z U ε 2 ( Z U ε 1 e kz 2 ˜ p 2 (u,z)dz)p ε 1 (y,u)du + Z U ε 2 ( Z U ε 1 e kz 2 p ε 2 (u,z)dz)p ε 1 (y,z)dz], (2.28) the last line following from the fact that R U ε 1 ˜ p 1 (y,u)˜ p 2 (u,z)du≤ ˜ p 12 (y,z). It remains to bound the three integrals on the right side of (2.28). If we assumek < 1 2σ 2 1 , then bounding the first term is easy since in this case, Z U ε 2 e kz 2 ˜ p 12 (y,z)dz< 1 √ 2πσ 1 Z e kz 2 e −(z−c 1 c 2 y) 2 /(2σ 2 1 ) dz =:K 1 <∞. (2.29) To bound the second term, we complete the square (see Lemma 30 in Appendix D.2) to con- clude that for anyk,m< 1/2, Z U ε 1 e kz 2 ˜ p 2 (u,z)dz≤L 1 e kc 2 2 u 2 /(1−2k) and 1 U ε 1 (y) Z U ε 2 e mu 2 p ε 1 (y,u)du≤1 U ε 1 (y)L 2 e mM 2 1 y 2 /(1−2k) 70 whereM 1 := sup x∈U 1 |f ′ (x)| andL 1 ,L 2 are positive constants. 
Therefore if we again assume k< 1 2σ 2 1 (recall thatσ 2 1 = 1+c 2 2 ), we have 1 U ε 1 (y) Z U ε 2 ( Z U ε 1 e kz 2 ˜ p 2 (u,z)dz)p ε 1 (y,u)du ≤ 1 U ε 1 (y)L 1 Z U ε 2 e kc 2 2 u 2 /(1−2k) p ε 1 (y,u)du ≤ L 1 L 2 1 U ε 1 (y)e k(M 1 c 2 ) 2 y 2 /(1−2kσ 2 1 ) . (2.30) Similarly, providedk< 1 2σ 2 1 , we can bound the third term by 1 U ε 1 (y) Z U ε 2 ( Z U ε 1 e kz 2 p ε 2 (u,z)dz)p ε 1 (y,u)du ≤ 1 U ε 1 (y)L ′ 1 Z U ε 2 e kM 2 2 u 2 /(1−2k) p ε 1 (y,u)du ≤ 1 U ε 1 (y)L ′ 1 L ′ 2 e k(M 1 M 2 ) 2 y 2 /(1−2kσ 2 1 ) (2.31) where M 2 := sup x∈U 2 |f ′ (x)|. By continuity and the fact that|c 1 c 2 | < 1, we may assume without loss of generality that|M 1 M 2 |< 1. Then using Equations (2.29), (2.30), and (2.31) in Equation (2.28), and further restrictingk by assumingk< 1−(M 1 M 2 ) 2 2σ 2 1 , we obtain I 2 φ(x)e −kx 2 /ε 2 = I 1 φ(εy)e −ky 2 ≤ e −ky 2 1 U ε 1 (y)O(ε)kφk k/ε 2[K 1 +L 1 L 2 e k(M 1 c 2 ) 2 y 2 /(1−2kσ 2 1 ) +L ′ 1 L ′ 2 e k(M 1 M 2 ) 2 y 2 /(1−2kσ 2 1 ) ] ≤ O(ε)kφk k/ε 2, the last line following because our assumptions onk and that fact that|c 2 | =|f ′ (f(0))|≤M 2 imply that −k +k(M 1 c 2 ) 2 /(1−2kσ 2 1 )≤−k +k(M 1 M 2 ) 2 /(1−2kσ 2 1 )< 0. 71 It remains to bound I ε 3 . Since e kz 2 > 0, using the same notation as above for the transition operator of the re-scaled process, we have I ε 3 φ(εy) ≤ 1 U ε 1 (y)kφk k/ε 2 Z (U ε 1 ) c Z (U ε 2 ) c e kz 2 ˜ p 1 (y,u)˜ p 2 (u,z)dudz ≤ 1 U ε 1 (y) Z (U ε 1 ) c e kz 2 ˜ p 12 (y,z)dz. (2.32) But then, as in the proof of Lemma 35 in Appendix D.2, fork < (1−|c 1 c 2 |)/2, there exist constantsL 3 ,η> 0 such that 1 U ε 1 (y) Z (U ε 1 ) c e kz 2 ˜ p 12 (y,z)dz≤L 3 e ka 2 y 2 /(1−2k) e −ηδ 2 1 /ε 2 and hence (2.32) implies that I ε 3 φ(x)e −kx 2 /ε 2 ≤L 3 kφk k/ε 2e −ηδ 2 1 /ε 2 for allx∈R which completes the proof. We can conclude that the eigenvalues ofS ε 1 (which are the same as the eigenvalues ofS ε 2 ) are close to the eigenvalues of S ε for small ε > 0. Since S ε is the transition operator for the autoregressive model ˜ X ε n+2 =c 1 c 2 x+εσ 1 χ n and|c 1 c 2 |< 1, Theorem 4 implies that its eigenvalues are given by (c 1 c 2 ) n ,n≥ 0. Therefore, by Lemma 27 in Appendix C, T ε 44 has limiting eigenvalues p (c 1 c 2 ) n as ε → 0 (where √ · denotes the multi-valued complex root function). To address the issue of eigenvectors, we can again apply Lemma 27 to conclude thatT ε 44 has eigenfunctions (eigendensities) close to(λ ε n φ ε 1 ,φ ε 2 ) whereφ ε 1 is an eigenfunction (eigendensity) of S ε 1 corresponding to the eigenvalueλ ε n = p (c 1 c 2 ) n + 0(ε) of T ε 44 andφ ε 2 = T ε 421 φ ε 1 is an 72 eigenfunction (eigendensity) ofS ε 2 corresponding to the same eigenvalueλ ε . Now Theorem 4 implies that the eigenfunctions of the transition operator forY ε n corresponding to (c 1 c 2 ) n are multiples of H n ( α(x−x 1 ) εσ 1 ) and the corresponding eigendensities are multiples of h n ( α(x−x 1 ) εσ 1 ) where H n is the n th Hermite polynomial, h n (x) = e x 2 H n (x), and α = p (1−c 1 c 2 ) 2 /2. Therefore, the eigenfunctions ofT ε 44 corresponding to the eigenvalueλ ε n are asymptotically of the form a 1 p (c 1 c 2 ) n H n ( α(x−x 1 ) εσ 1 )+a 2 H n ( α(x−x 2 ) εσ 2 ) for some constantsa 1 ,a 2 while the eigendensities are of the form b 1 p (c 1 c 2 ) n h n ( α(x−x 1 ) εσ 1 )+b 2 h n ( α(x−x 2 ) εσ 2 ) for some constantsb 1 ,b 2 . Having formally completed our analysis of the local behavior ofT ε near the stable period two orbit, we conclude this section by stating our results concerning the spectrum ofT ε : 1 Result. 
For any $r > 0$, we can decompose $T^\varepsilon = T^\varepsilon_{lp} + T^\varepsilon_{up}$ so that for all $\varepsilon$ sufficiently small, $\|T^\varepsilon_{lp}\|_\infty < r$ and any eigenvalue of $T^\varepsilon_{up}$ with modulus greater than $r$ is of the form $\lambda^\varepsilon = \lambda + O(\varepsilon)$ where

(1) $\lambda = |c|^{-1}c^{-n}$ for some $n \geq 0$ with $c = c_s$ or $c = c_u$, or

(2) $\lambda = \sqrt{(c_1 c_2)^n}$ for some $n \geq 0$ and some branch of $\sqrt{\cdot}$.

Furthermore, the eigenfunctions of $T^\varepsilon$ corresponding to any eigenvalue with $\lambda$ as in (1) are perturbations of $a_1 h_n(\beta(x - x_*)/\varepsilon)$ for some constant $a_1$, where $x_*$ is the corresponding unstable fixed point ($x_s$ or $x_u$) and $\beta = \sqrt{(c_s^2 - 1)/2}$ or $\beta = \sqrt{(c_u^2 - 1)/2}$, and any eigendensities of $T^\varepsilon$ corresponding to an eigenvalue with $\lambda$ as in (2) are 'close' to linear combinations of $h_n(\alpha(x - x_1)/(\varepsilon\sigma_1))$ and $h_n(\alpha(x - x_2)/(\varepsilon\sigma_2))$, where $\alpha = \sqrt{(1 - (c_1 c_2)^2)/2}$ and, as above, $h_n(x) = e^{x^2}H_n(x)$ with $H_n(x)$ equal to the $n$th Hermite polynomial. For a more precise statement, see Theorem 2.

2.5.2 Notes on the General Case

The general case described in Section 2.1 can be handled in the same way as the specific cases we have dealt with in Sections 2.2 and 2.5.1, although the details are more tedious. In this section, we remark on some of the differences.

• The starting point is, as before, to split the circle into regions describing the different actions of $f$. The notation is more complicated, but the end result is the same: we can split the circle into neighborhoods of the different periodic orbits of $f$ and label the sets in such a way that $T^\varepsilon$ has an 'almost' upper triangular decomposition with respect to the splitting (see Propositions 1 and 2).

• We already know how to deal with the local behavior of the operator near fixed points and stable period 2 orbits. For a stable periodic orbit $P$ of period $p > 2$, we simply note that the block of $T^\varepsilon$ corresponding to this orbit asymptotically has the form
$$
\begin{pmatrix}
0 & T^\varepsilon_{12} & 0 & \cdots & 0 \\
\vdots & 0 & T^\varepsilon_{23} & \ddots & \vdots \\
\vdots & & \ddots & \ddots & 0 \\
0 & 0 & \cdots & 0 & T^\varepsilon_{(p-1)p} \\
T^\varepsilon_{p1} & 0 & \cdots & \cdots & 0
\end{pmatrix}.
$$
One can readily check that any such operator has eigenvalues $\lambda^{1/p}$, where $\lambda$ is an eigenvalue of $T^\varepsilon_{12}T^\varepsilon_{23}\cdots T^\varepsilon_{(p-1)p}T^\varepsilon_{p1}$. A similar argument to the one in Section 2.5.1 can be used to show that this $p$-step chain has eigenvalues close to $c^n$, where $c$ is the derivative of $f$ along $P$, which yields the eigenvalues $(c^n)^{1/p}$ as desired.

• For an unstable period 2 orbit $Q$, we note that the work of Section 2.5.1 shows that the eigenvalues of the block of $T^\varepsilon$ near the period two orbit are close to the eigenvalues of the transition operator for $X_{n+2} = cX_n + \sigma\chi$, where $c$ is the derivative along the period two orbit. Since $|c| < 1$ in that section, this chain had eigenvalues $c^n$. But if $|c| > 1$, then our work in Section 2.4 shows that this chain has eigenvalues $|c|^{-1}c^{-n}$, and therefore $T^\varepsilon$ has eigenvalues near $(|c|^{-1}c^{-n})^{1/2}$. The case when $Q$ has period greater than 2 is similar.

2.6 Numerical Examples

In this section, we verify the results of our work in Sections 2.1-2.5 by looking at the specific example in which $f(x) = x + 1 - b\sin x$
In fact, if we write x s = x s (b), f ′ (x s (b c )) = −1 and the nondegeneracy conditions 1 2 f xx (x s (b c )) + 1 3 f xxx (x s (b c )) 6= 0 and f xb (x s (b c )) 6= 0 are satisfied so thatf experiences a period doubling bifurcation atb c (See Theorem 4.3 in [14]). This is illustrated graphically in Figure 2.5 where we can see the whole cascade of period doubling bifurcations forf leading to chaotic behavior. Now consider the Markov Chain onS 1 defined by X ε n+1 =f(X ε n )+εχ n mod2π where, as before,χ n is a sequence of iid, standard normal random variables. We can discretize this chain by partitioning the circle into N intervals I i := [2π(i− 1)/N,2πi/N], for i = 1,2,...,N and considering the induced process ˜ X ε n with values in {2π(i− .5)/N} N i=1 (the midpoints of the intervalsI i ) and transition matrixP ε = (p ε ij ) N i,j=1 . 76 2 2.5 3 3.5 1 2 3 4 5 6 b f n (x 0 ) Figure 2.5: Bifurcation diagram for the family of sine-circle mapsf(x) = x + 1−bsin(x). The plot shows 100 iterates off started from a random initial pointx 0 ∈ [0,2π] with the first 1000 iterations discarded for 1000 different values ofb equally spaced throughout the interval [0,2π]. The entriesp ε ij are defined by p ε ij := P(X ε n+1 ∈I j |X ε n = 2π(i−.5)/N) = P(χ n ∈ [ k∈Z 1 ε (I j +2πk−f(2π(i−.5)/N))) = X k∈Z [Φ( 1 ε (2πj/N +2πk−f(2π(i−.5)/N))) −Φ( 1 ε (2π(j−1)/N +2πk−f(2π(i−.5)/N)))] (2.33) where Φ is the cdf for the standard Normal distribution. 77 If ε is small, we can approximate p ε ij numerically by using only the k = −1,0,1 terms in (2.33). Furthermore, it seems reasonable to believe that ifN is large, the transition matrixP ε should be, in some sense a ‘good’ approximation to the transition operatorT ε of the chainX ε n . For the remainder of this section, we numerically investigate the eigenvalues and eigenvectors ofP ε . 2.6.1 Stable Fixed Point We begin by looking at the eigenvalues of theP ε forb = 1.8,2.0,2.2 with two different noise intensities: ε = 0.05 and ε = 0.01 (we take N = 800 for ε = 0.05 and N = 1200 for ε = 0.01). Tables 2.1 - 2.3 show the results of our calculations. In each case, we list the 10 largest modulus eigenvalues of the transition matrixP ε and compare the values obtained with successive powers of c s := f ′ (x s ) and c −1 u := f ′ (x u ) −1 . Since we have already mentioned that f has one stable and one unstable fixed point for b < √ 5, Theorem 2 implies that the eigenvalues ofT ε are close to powers ofc s andc −1 u for small values ofε (note that in this case, c u > 0 so that|c u |c −n u =c −(n−1) u ,∀n≥ 1). We can see from these tables that the numerics are in agreement with this result. Notice that the influence of the unstable point on the spectrum ofP ε grows weaker asb approaches √ 5. There is also a noticeable difference in the rates of convergence for different eigenvalues ofP ε asε → 0. For instance, eigenvalues converging to powers of c −1 u seem to do so more quickly than eigenvalues converging to powers of c s . In addition, the rate of convergence seems to be much quicker for larger eigenvalues than for smaller ones. This is probably due to the fact that the coefficients of ε 2 in the asymptotic expansions of the eigenvalues ofT ε 33 increase withn (see (2.19) in Section 2.3). Theorem 2 also predicts that the left eigenvectors ofP ε corresponding to eigenvalues nearc n s should be near multiples ofh n (α(x−x s )/ε) whereα = p (1−c s ) 2 /2,h n (x) =e x 2 H n (x), and H n (x) is then th Hermite polynomial. 
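The matrix P^ε itself is straightforward to assemble from (2.33): each row only requires the normal cdf Φ at the interval endpoints, with the wrap-around sum truncated to the k = -1, 0, 1 terms. The sketch below (parameter values chosen to mimic Table 2.2; the discretization size and noise level are otherwise arbitrary) builds P^ε for the sine-circle map and prints its largest-modulus eigenvalues next to the powers of c_s and c_u^{-1} predicted by Theorem 2.

```python
import numpy as np
from scipy.stats import norm

b, eps, N = 2.0, 0.05, 800                      # illustrative values (cf. Table 2.2)
f = lambda x: x + 1.0 - b * np.sin(x)

# Fixed points solve sin(x) = 1/b with 0 <= x_s < pi/2 < x_u <= pi.
x_s = np.arcsin(1.0 / b)
x_u = np.pi - x_s
c_s = 1.0 - b * np.cos(x_s)                     # = 1 - sqrt(b^2 - 1), stable for b < sqrt(5)
c_u = 1.0 - b * np.cos(x_u)                     # = 1 + sqrt(b^2 - 1), unstable

# Interval endpoints and midpoints for the partition I_i = [2*pi*(i-1)/N, 2*pi*i/N).
edges = 2.0 * np.pi * np.arange(N + 1) / N
mid = 0.5 * (edges[:-1] + edges[1:])
fm = f(mid)

# Transition matrix entries (2.33), keeping only the k = -1, 0, 1 terms.
P = np.zeros((N, N))
for k in (-1, 0, 1):
    upper = norm.cdf((edges[1:][None, :] + 2.0 * np.pi * k - fm[:, None]) / eps)
    lower = norm.cdf((edges[:-1][None, :] + 2.0 * np.pi * k - fm[:, None]) / eps)
    P += upper - lower

evals = np.linalg.eigvals(P)
top = evals[np.argsort(-np.abs(evals))][:10]

predicted = sorted([c_s**n for n in range(6)] + [c_u**(-n) for n in range(1, 6)],
                   key=abs, reverse=True)[:10]
print("largest eigenvalues of P^eps:", np.round(top, 4))
print("predicted c_s^n and c_u^{-n}:", np.round(predicted, 4))
```

The printed lists reproduce, up to O(ε²) corrections and discretization error, the agreement reported in Tables 2.1-2.3; replacing np.linalg.eigvals by scipy.linalg.eig(P, left=True, right=True) also returns the left and right eigenvectors plotted in Figures 2.6 and 2.7, which can be compared with the h_n profiles predicted above.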
Similarly, the right eigenvectors ofP ε corresponding to c −n u should be close toh n (β(x−x u )/ε) withβ = p c 2 u −1)/2. 78 Powers ofc s Powers of 1 cu Evals of P ε (ε =.05) Evals of P ε (ε =.01) 1.0000 1.000 1.000 1.000 -0.4967 - -0.4931 -0.4979 0.4005 0.4003 0.4003 0.2467 - 0.2375 0.2477 - 0.1604 0.1604 0.1602 -0.1225 - -0.1118 -0.1231 - 0.0643 0.0643 0.0642 0.0608 - 0.0514 0.0611 - 0.0257 0.0258 0.0257 -0.0302 - -0.0231 -0.0302 Table 2.1: b =1.8: Stable Fixed pointx s and unstable fixed pointx u withc s = f ′ (x s ) and c u =f ′ (x u ). Powers ofc s Powers of 1 cu Evals of P ε (ε =.05) Evals of P ε (ε =.01) 1.0000 1.000 1.000 1.000 -0.7321 - -0.7249 -0.7329 0.5359 - 0.5131 0.5366 - 0.3660 0.3659 0.3659 -0.3923 - -0.3549 -0.3925 0.2872 - 0.2402 0.2868 -0.2102 - -0.1591 -0.2093 - 0.1340 0.1339 0.1338 0.1539 - 0.1033 0.1526 -0.1127 - -0.0656 -0.1112 Table 2.2: b =2: Stable Fixed point x s and unstable fixed point x u with c s = f ′ (x s ) and c u =f ′ (x u ). Powers ofc s Powers of 1 cu Evals of P ε (ε =.05) Evals of P ε (ε =.01) 1.0000 1.000 1.000 1.000 -0.9596 - -0.9339 -0.9591 0.9208 - 0.8348 0.9160 -0.8836 - -0.7256 -0.8717 0.8479 - 0.6153 0.8268 -0.8136 - -0.5100 -0.7818 0.7808 - 0.4140 0.7373 - 0.3379 0.3378 - -0.7492 - -0.3292 -0.6935 0.7189 - 0.2568 0.6507 Table 2.3: b =2.2: Stable Fixed pointx s and unstable fixed pointx u withc s = f ′ (x s ) and c u =f ′ (x u ). 79 0.35 0.4 0.45 0.5 0.55 0.6 −0.5 0 0.5 λ 1 ε = 1 0.35 0.4 0.45 0.5 0.55 0.6 −0.5 0 0.5 λ 2 ε ≈ c s 0.35 0.4 0.45 0.5 0.55 0.6 −0.5 0 0.5 λ 3 ε ≈ c s 2 0.35 0.4 0.45 0.5 0.55 0.6 −0.5 0 0.5 λ 4 ε ≈ c s 3 Figure 2.6: Plots of the left eigenvectors of P ε (solid lines) corresponding to the top four eigenvalues ofP ε along with the approximate eigenvectors predicted from our theory (dotted lines). See Table 2.2 for the exact values of the corresponding eigenvalues. Parameters:b = 2, ε = .025, andN = 800 (x s ≈ .5236). The eigenvectors are basically flat on all parts of the circle not shown. Figure 2.6 shows the left eigenvectors corresponding to the four largest modulus eigenvalues ofP ε withb = 2 and noise intensityε = 0.025. The top four eigenvalues are close to powers ofc s and we can see that the corresponding left eigenvectors are indeed well approximated by an appropriate multiple ofh n (α(x−x s )/ε). In Figure 2.7, we look at the right eigenvectors of P ε corresponding to its second to fifth largest eigenvalues (the eigenvector corresponding to λ ε 1 = 1 is just constant) for the case when b = 1.8 and ε = .025. Here, the effects of the unstable fixed pointx u are more observable. As expected, we can also see that the right eigenvectors corresponding toλ ε 3 ≈c −1 u andλ ε 5 ≈c −2 u are similar in appearance to multiples of h 1 (β(x−x u )/ε) andh 2 (β(x−x u )/ε), respectively. Note the somewhat unpredictable shape 80 2 2.2 2.4 2.6 2.8 3 −0.6 −0.4 −0.2 0 0.2 0.4 0.6 2.3 2.4 2.5 2.6 2.7 2.8 −0.6 −0.4 −0.2 0 0.2 0.4 0.6 2 2.2 2.4 2.6 2.8 3 −0.6 −0.4 −0.2 0 0.2 0.4 0.6 2.3 2.4 2.5 2.6 2.7 2.8 −0.6 −0.4 −0.2 0 0.2 0.4 0.6 λ 2 ε ≈ c s λ 3 ε ≈ c u −1 λ 4 ε ≈ c s 2 λ 5 ε ≈ c u −2 Figure 2.7: Plots of the right eigenvectors of P ε corresponding to its second to fifth largest eigenvalues (solid lines). The approximate eigenvectors predicted from our theory are also plotted for the eigenvectors corresponding toλ ε 3 ≈c −1 u andλ ε 5 ≈c −2 u (dotted lines). See Table 2.1 for the exact values of the corresponding eigenvalues. Parameters: b = 1.8,ε =.025, and N = 800 (x u ≈ 2.6180). 
The eigenvectors are basically flat on all parts of the circle not shown. of the right eigenvectors corresponding toλ ε 2 ≈ c s andλ ε 4 ≈ c 2 s . In general, it seems that left eigenvectors have mass aroundx s while right eigenvectors have mass centered aroundx u . 2.6.2 Stable Period Two Orbit Now ifb > b c , thenf has a period two orbit{x 1 ,x 2 } which is stable forb < b c,2 ≈ 2.71 and unstable thereafter. To conform with the notation of Section 2.1, we re-label our two unstable 81 − 1 (c u,1 ) n ± √ c n s Evals of P ε (ε =.01) Evals of P ε (ε =.005) - 1.000 1.000 1.000 - -1.000 -1.000 -1.000 0.9335 - 0.9337 0.9331 -0.8714 - -0.8739 -0.8712 - 0.8479 0.8524 0.8478 - -0.8479 -0.8524 -0.8478 0.8135 - 0.8201 0.8138 -0.7594 - -0.7719 -0.7607 - 0.7189 0.7402 0.7217 - -0.7189 -0.7402 -0.7217 Table 2.4: b =2.3: Stable Period Two Orbit{x 1 ,x 2 }, Unstable fixed pointsy 1 ,y 2 . Note that c s = f ′ (x 1 )f ′ (x 2 ) andc u,i = f ′ (y i ),i = 1,2. We have usedn = 1200 for noise strength.01 andn = 1600 for noise strength.005. − 1 c n u,1 1 c n u,2 ± √ c n s Evals of P ε (ε =.01, n = 1200) - - 1.000 1.000 - - -1.000 -1.000 0.7744 - - 0.7740 -0.5997 - - -0.5993 0.4644 - - 0.4643 -0.3597 - - -0.3598 - - 0.3487i 0.3458i - - -0.3487i -0.3458i - 0.3038 - 0.3034 0.2785 - - 0.2790 Table 2.5: b =2.5: Stable Period Two Orbit{x 1 ,x 2 }, Unstable fixed pointsy 1 ,y 2 . Note that c s =f ′ (x 1 )f ′ (x 2 ) andc u,i =f ′ (y i ),i = 1,2. fixed points asx s =y 1 andx u =y 2 and writec u,i =f ′ (y u,i fori = 1,2 andc s =f ′ (x 1 )f ′ (x 2 ). Our theory predicts that the eigenvalues ofP ε should be close to|c u,i | −1 c −n u,i or (c 1 c 2 ) n/2 for n≥ 0 wherec 1,2 =f ′ (x 1,2 ). Sincec s < 0, c u > 0,|c s | −1 c −n s =− 1 c n+1 s and|c u | −1 c −n u =− 1 c n+1 u . Figures 2.4 and 2.5 show the results of the numerical computations for b = 2.3,2.5. From these results, it appears that our predictions of the eigenvalues ofT ε are correct in the regime following the first bifurcation point as well. 82 0 0.2 0.4 0.6 0.8 1 −0.2 −0.1 0 0.1 0.2 0.3 λ 1 ε = 1 0 0.2 0.4 0.6 0.8 1 −0.2 −0.1 0 0.1 0.2 0.3 λ 2 ε ≈ −1 0 0.2 0.4 0.6 0.8 1 −0.2 −0.1 0 0.1 0.2 0.3 λ 3 ε ≈ c s 1/2 0 0.2 0.4 0.6 0.8 1 −0.2 −0.1 0 0.1 0.2 0.3 λ 4 ε ≈ −c s 1/2 Figure 2.8: Plot of the top four left eigenvectors ofP ε (solid lines) withb = 2.3,ǫ =.025, and N = 800 along with approximate eigenvectors expected from our theory (dotted lines). Here, f has a stable period 2 orbit{0.1360,0.8242}. The eigenvectors are flat on all parts of the circle not shown. The noise strength is large enough to ‘blur’ out the region betweenx 1 andx 2 making convergence to the approximated eigenvectors occur at a slower rate (see Figure 2.9). To confirm our predictions regarding eigenvectors, we look at the left eigenvectors ofP ε cor- responding to its top two eigenvaluesλ 1 = 1 andλ ε 2 ≈−1 in the case whenb = 2.3. Notice that we do indeed observe that the mass of these eigenvectors is concentrated near the period two points. In fact, our theory predicts that the first two eigenvectors should be close toρ ε 1 ±ρ ε 2 where ρ ε i (x) =k i e −α 2 (x−x i ) 2 /(2ε 2 σ i ) fori = 1,2 and somek i ∈R withα = p 1−c 2 s ,σ i = q 1+c 2 j ,c i =f ′ (x i ),i,j = 1,2,j6=i. Figure 2.8 confirms that this is indeed the case. When we look at lower left eigenvectors, however, the situation is slightly more complicated, as illustrated by the bottom half of Figure 2.8. 
Just past the bifurcation point (b = 2.3), convergence takes place at a much slower rate as the noise strength ‘blurs’ out the region between the points of the period two orbit. Compare this with Figure 2.9 where our predicted 83 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 −0.4 −0.3 −0.2 −0.1 0 0.1 0.2 0.3 0.4 Figure 2.9: Plot of the third and fourth left eigenvectors ofP ε (solid lines) with along with the approximations expected from our theory (dotted lines). Parameters: b = 2.3,ǫ = .005, and N = 1600. eigendensities are plotted against the third and fourth largest left eigenvectors of P ε for a smaller noise strength (ε =.005). 2.6.3 Stable Period 4 Orbit To provide some further confirmation for our heuristic, we also look at b > 2.71 so that the period 2 orbit is now unstable, but we have the appearance of a period 4 orbit which is stable for b<b c,4 ≈ 2.82. We look at two examples, the first whenb = 2.75 is close to the 2:4 bifurcation point and the second, when b = 2.81 is close to the 4:8 bifurcation point. To conform with the notation of Section 2.1, we label the unstable periodic orbit as {y 1,1 ,y 1,2 }, the unstable fixed points as y 2,1 ,y 3,1 , and the stable period four orbit as {x 1 ,x 2 ,x 2 ,x 4 }. In both cases, 84 ±i q 1 c n u,1 Fourth Roots ofc n s Evals of P ε (ε =.005, n = 1600) - 1.0000 1.0000 - -1.0000 -1.0000 - 1.0000 i 1.0000 i - -1.0000 i -1.0000 i 0.9259 - 0.9269 -0.9259 - -0.9269 0.8572i - 0.8623i -0.8572i - -0.8623i 0.7937 - 0.8058 - 0.7937 - - 0.8058 Table 2.6: b =2.75: Stable Period 4 Orbit {x 1 ,x 2 ,x 3 ,x 4 } and unstable period two orbit {y 1,1 ,y 1,2 } withc s =f ′ (x 1 )f ′ (x 2 )f ′ (x 3 )f ′ (x 4 ) andc u,1 =f ′ (y 1,1 )f ′ (y 1,2 ) ±i q 1 c n u,1 Fourth Roots ofc n s Evals of P ε (ε =.005, n = 1600) - 1.0000 1.0000 - -1.0000 -1.0000 - 1.0000 i 1.0000 i - -1.0000 i -1.0000 i - 0.6711 + 0.6711i 0.6462 + 0.6462i - 0.6711 - 0.6711i 0.6462 - 0.6462i - - 0.6711 + 0.6711i - 0.6462 + 0.6462i - - 0.6711 - 0.6711i - 0.6462 - 0.6462i 0.8374 - 0.8374 - 0.8374 - - 0.8374 Table 2.7: b =2.81 : Stable Period 4 Orbit {x 1 ,x 2 ,x 3 ,x 4 } and unstable period two orbit {y 1,1 ,y 1,2 } withc s =f ′ (x 1 )f ′ (x 2 )f ′ (x 3 )f ′ (x 4 ) andc u,1 =f ′ (y 1,1 )f ′ (y 1,2 ) c u,1 :=f ′ (y 1 )f ′ (y 2 )<−1 and therefore, we should expect contributions of the form±i q c −n u,1 from this orbit where herei = √ −1. We also havec s :=f ′ (x 1 )f ′ (x 2 )f ′ (x 3 )f ′ (x 4 )∈ (−1,1) so we should expect contributions of the form (c n s ) 1/4 from the period 4 orbit. Tables 2.6 and 2.7 confirm these facts. Note that the two unstable fixed pointsy i,1 , i = 2,3, still contribute to the spectrum as well, but the derivative at these points is so large for our chosen values of b that these contributions do not show up until lower down in the spectrum. We also note that the eigenvectors corresponding to±1 are again sums of gaussian distributions centered at the four points in the stable orbit as shown in Figure 2.10. 85 −0.36 −0.13 0 1 1.23 1.61 2 −0.2 −0.15 −0.1 −0.05 0 0.05 0.1 0.15 0.2 0.25 0.3 x 1 x 2 x 3 x 4 Figure 2.10: Top left eigenvectors ofP ε withb = 2.75,ǫ =.01,N = 1600, and stable period 4 orbit{−0.3609,1.6102,−0.1376,1.2396} 86 Chapter 3 Analysis of Stochastic I-F Models 3.1 Basic Setup Having now developed some techniques for calculating the spectrum of transition operators for Markov Chains on the circle, we return to our original problem by looking at Markov Chains induced by first-passage times of Stochastic Integrate-and-Fire Models (SIFM’s). 
We recall that a SIFM is given by the solution to the stochastic differential equation dX ε t = (v(X ε t )+I(t))dt+εdW t t 0 ≤t≤τ ε 1 (3.1) X ε t 0 = h(t 0 ) (usually ,v(x) =−γx+I 0 ,γ,I 0 ≥ 0) subject to the reset conditions X ε (τ ε 1 ) − = g(τ ε 1 ) X ε (τ ε 1 ) + = h(τ ε 1 ) (3.2) where v ∈ C ∞ (R), I,g,h ∈ C ∞ (R) are p-periodic functions of time with h(t) < g(t), ∀t∈ [−p/2,p/2), and τ ε 1 := inf{t>t 0 :X ε t =g(t)} is the first passage time of the processX ε t across the time-varying thresholdg(t). (The exis- tence of a continuousX ε t satisfying equation 3.1 follows from standard SDE theory, see for instance [15]). After timeτ ε 1 , we continue to runX ε t according to (3.1) until the threshold is 87 again reached at which time we jump according to the condition (3.2). In this way we obtain a sequence of firing times τ ε n := inf{t>τ ε n−1 :X ε t =g(t)} forn ≥ 1,τ ε 0 = t 0 , which defines a Markov Chain onR. The transition density function for this chain is given by the first passage-time density function p ε (t|t 0 ) := d dt P(τ ε n ≤t|τ ε n−1 =t 0 ) = d dt P t 0 (τ ε 1 ≤t) whereP t 0 denotes the probability law ofX ε t started ath(t 0 ). 8 Remark. IfX ε t satisfies dX ε t = (v(X ε t )+I(t))dt+εdW t withv(x) =−γx+I 0 andI(t) =I 1 sin(ωt+φ), then it is easy to see that ˜ X ε t =X ε t −k(t) is a diffusion process satisfyingd ˜ X ε t =v( ˜ X ε t )dt+εdW t ,t 0 ≤t<τ ε 1 for k(t) = γ ω 2 +γ 2 sin(ωt+φ)− ω ω 2 +γ 2 cos(ωt+φ) Therefore, we shall assume for the rest of the chapter that I(t) = 0 since at least our main case of interest can be reduced to the study of this case without loss of generality by altering the reset and threshold levels accordingly. We shall denote the solution to the noise free (ε = 0) equation (3.1) with initial condition x 0 = h(t 0 ) and reset condition (3.2), by x(t) = x(t|t 0 ) (we shall, at times, suppress the argument involving the initial time). Sinceh,v∈C ∞ , the basic existence theorem for solutions to ODE ′ s implies that x(t|t 0 ) is C ∞ in both variables up to the first hitting time τ 1 . We assume the following transversality and growth conditions assuring boundary crossings occur in a ‘nice’ way: 88 1 Assumption. Let α h = min{h(x) : x ∈ R} andα g = min{g(x) : x ∈ R} > α h . Then v,g,h satisfy (A) v(g(t))6=g ′ (t), ∀t∈ [−p/2,p/2] (B) v(x)> 0,∀x∈ (α h ,α g ) By periodicity, (A) implies thatv(g(t))6=g ′ (t),∀t∈R while (A) and (B) together imply v(g(f(t 0 ))>g ′ (f(t 0 )),∀t 0 ∈ [−p/2,p/2]. (3.3) Furthermore, if we define τ 1 =f(t 0 ) := inf{t>t 0 :x(t|t 0 ) =g(t)}, as the first passage-time of the deterministic processx(t|t 0 ) acrossg, then we have 3 Proposition. f ∈C ∞ (R) withf(t 0 +p) =f(t 0 )+p,∀t 0 ∈R. Proof: LetF(s,t) :=x(t|s)−g(t). Sinceg,x are bothC ∞ (x in both variables),F isC ∞ in both variables. Furthermore,∀t 0 ∈R, we haveτ 1 = f(t 0 ) <∞ by (B) andF(t 0 ,τ 1 ) = 0 by the definition ofτ 1 , and therefore, F t (t 0 ,τ 1 ) =v(x(τ 1 |t 0 ))−g ′ (τ 1 ) =v(g(τ 1 ))−g ′ (τ 1 )6= 0 by (A) so that the Implicit Function Theorem tells us we can find a functionf ∈ C ∞ so that F(s,f(s)) = 0,∀s in some neighborhood oft 0 and ifF(s,t) = 0, thent = f(s). The latter part of this statement implies that the solution toF(t 0 ,t) = 0 is given byt =f(t 0 ),∀t 0 ∈R. Having established the existence of f, the fact that f(t 0 +p) = f(t 0 ) +p follows from the p-periodicity ofg andh. 89 Proposition 3 implies that the hitting timesτ n ofx(t|t 0 ) are explicitly defined by the dynamical system τ n =f(τ n−1 ) (3.4) withτ 0 =t 0 . 
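Both the deterministic firing times (3.4) and their noisy counterparts τ^ε_n are easy to generate by direct simulation, which is often the quickest way to get a feel for the model. The sketch below is illustrative only: the specific choices v(x) = -γx + I_0, g(t) = 1 + 0.1 sin(2πt), h ≡ 0, and the step size are assumptions made for the example and are not taken from the text. It integrates (3.1) with the Euler-Maruyama scheme and applies the reset rule (3.2) at each threshold crossing.

```python
import numpy as np

# Illustrative model (assumptions for this sketch, not from the text):
#   v(x) = -gamma*x + I0,  threshold g(t) = 1 + 0.1*sin(2*pi*t) with period p = 1,
#   reset level h(t) = 0.  Assumption 1 holds: v > 0 on (0, min g) and v(g(t)) > g'(t).
gamma, I0, p = 1.0, 2.0, 1.0
v = lambda x: -gamma * x + I0
g = lambda t: 1.0 + 0.1 * np.sin(2.0 * np.pi * t)
h = lambda t: 0.0

def firing_times(eps, t0=0.0, n_spikes=8, dt=1e-4, seed=1):
    """Euler-Maruyama integration of (3.1) with the reset rule (3.2);
    returns the first n_spikes threshold-crossing times."""
    rng = np.random.default_rng(seed)
    t, x = t0, h(t0)
    taus = []
    while len(taus) < n_spikes:
        x += v(x) * dt + eps * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= g(t):                 # threshold reached: record the firing time and reset
            taus.append(t)
            x = h(t)
    return np.array(taus)

tau_det = firing_times(eps=0.0)       # noiseless case: tau_n = f(tau_{n-1}) as in (3.4)
tau_eps = firing_times(eps=0.05)      # small-noise firing times

for n, (td, te) in enumerate(zip(tau_det, tau_eps), start=1):
    print(f"n={n}:  tau_n = {td:.4f}   tau_n^eps = {te:.4f}   (tau_n^eps mod p = {te % p:.4f})")
```

Running it with ε = 0 recovers the deterministic firing times τ_n up to the integration error, while for ε > 0 the crossing times scatter around them; reducing the times mod p yields the sequence of firing phases studied next.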
(We shall refer tof as the deterministic ‘return map’). f(t 0 +p) =f(t 0 )+p implies that we can alternatively look at the sequence of firing phases Θ n :=τ n mod p (3.5) which forms a dynamical system on S 1 = R/pZ (we will associate S 1 with the interval [−p/2,p/2) when necessary). In the same way, we define Θ ε n =τ ε n mod p (3.6) for allε > 0 with Θ ε 0 = t 0 modp, to obtain the random sequence of firing phases associated with the processX ε t . The processΘ ε n now forms a MC onS 1 and its transition density function ˜ p ε can be calculated fromp ε by ˜ p ε (θ|θ 0 ) = X n∈Z p ε (θ+np|θ 0 ) (3.7) for allθ,θ 0 ∈ [−p/2,p/2). In what follows we also assume that time has been shifted so that the initial timet 0 ∈ [−p/2,p/2) as well. Our goal is to study the spectrum of the transition operatorT ε for Θ ε n . Here, T ε φ(θ 0 ) :=E θ 0 (φ(Θ ε 1 )) = Z S 1 φ(θ)˜ p ε (θ|θ 0 )dθ (3.8) 90 for φ ∈ B(S 1 ) = the set of all bounded, measurable functions on S 1 . In Chapter 2, our underlying Markov Chain X ε n was defined as a small noise perturbation of a deterministic system and we were able to use this fact to show that the behavior of the perturbed system could not deviate too greatly from the behavior of the deterministic system provided ε was small (see, for example, Lemma 2). Although the relationship betweenτ ε n and the deterministic systemτ n = f(τ n−1 ) is less obvious, our next result hints that the underlying structure of the two problems is similar. 4 Proposition. For anyδ> 0, we have P t 0 (d(τ ε 1 ,f(t 0 ))>δ)≤M δ εe −K δ /ε 2 for allt 0 ∈ [−p/2,p/2) whereM δ ,K δ > 0 are independent oft 0 . Proof: To obtain this bound, we define the processesY ε t ,y(t) = y(t|t 0 ) to be the solutions to (3.1) withε> 0,ε = 0, respectively, obtained by running (3.1) without the reset condition (3.2) and look at the difference betweenY ε t andy(t) for fixedt∈ [t 0 ,T] withT = sup{f(t 0 )+δ : t 0 ∈ [−p/2,p/2)}. From (3.1), we have |Y ε s −y(s)| ≤ Z s t 0 |v(Y ε u )−v(y(u))|du+ε|W s −W t 0 | ≤ L v Z s t 0 |Y ε u −y(u)|du+εM t for allt 0 ≤s≤t whereL v = sup{|v ′ (y(t))| :−p/2≤t≤T}<∞ andM t := sup{|W u − W t 0 | :u∈ [t 0 ,t]}. We can therefore apply Gronwall’s Inequality to obtain sup t 0 ≤s≤t |Y ε s −y(s)|≤ (1+L v (t−t 0 )e Lv(t−t 0 ) )εM t ≤εK(t)M t (3.9) ∀t 0 ∈ [−p/2,p/2] whereK(t) = 1+L v (t+ p 2 )e Lv(t+p/2) . 91 Now lett 0 ∈ [−p/2,p/2], and suppose thatd(τ ε 1 ,f(t 0 )) > δ. We would like to translate this into a statement about the deviations of Y ε t from y(t). To do so, we establish the notation I p = [−p/2,p/2] and A ρ ={(t 0 ,t)∈R 2 :t 0 ∈I p , f(t 0 )≤t≤f(t 0 )+ρ} for anyρ> 0. Note that by (3.3), the compactness ofA ρ , and the continuity ofb,g, andy (in fact, the joint continuity in (t 0 ,t) fory),∃r<δ so that δ 1 := min (t 0 ,t)∈Ar (v(y(t|t 0 ))−g ′ (t))> 0. We also define C δ ={(t 0 ,t)∈R 2 :t 0 ∈I p , t∈ [t 0 ,f(t 0 )−δ]} and note that sinceg(t)>y(t|t 0 ),∀(t 0 ,t)∈C δ , compactness and continuity imply that, δ 2 := min (t 0 ,t)∈C δ (g(t)−y(t|t 0 ))> 0. With this notation in hand, consider the caseτ ε 1 ≥f(t 0 )+δ. Then, ift∈ [f(t 0 ),f(t 0 )+r], we have |Y ε t −y(t|t 0 )| = y(t|t 0 )−X ε t ≥ y(t|t 0 )−g(t) = (y(f(t 0 )|t 0 )−g(f(t 0 )))+(v(y(t ∗ |t 0 ))−g ′ (t ∗ ))(t−f(t 0 )) = (v(y(t ∗ |t 0 ))−g ′ (t ∗ ))(t−f(t 0 )) ≥ δ 1 (t−f(t 0 )) 92 since t ∗ ∈ [f(t 0 ),t] ⊂ [f(t 0 ),f(t 0 ) + r] implies (t 0 ,t ∗ ) ∈ A r . In particular, if we take t =f(t 0 )+r, we obtain the lower bound sup s∈[t 0 ,f(t 0 )+δ) |Y ε s −y(s)|≥δ 1 r. 
Ifτ ε 1 ≤f(t 0 )−δ, then we have |Y ε τ ε 1 −y(τ ε 1 |t 0 )| =g(τ ε 1 )−y(τ ε 1 |t 0 )≥δ 2 sinceτ ε 1 ≤f(t 0 )−δ implies that(t 0 ,τ ε 1 )∈C δ so that sup s∈[t 0 ,f(t 0 )+δ) |Y ε s −y(s)|≥|Y ε τ ε 1 −y(τ ε 1 )|≥δ 2 . Putting these results together, we have sup s∈[t 0 ,f(t 0 )+δ) |Y ε s −y(s)|≥δ 0 (3.10) forδ 0 = min(δ 1 r,δ 2 ) > 0 wheneverd(τ ε 1 ,f(t 0 )) > δ. Combining (3.10) with (3.9), we can conclude that∀t 0 ∈−[p/2,p/2), we have P t 0 (d(τ ε 1 ,f(t 0 ))>δ) ≤ P t 0 ( sup s∈[t 0 ,f(t 0 )+δ) |Y ε s −y(s)|>δ 0 ) ≤ P t 0 (M f(t 0 )+δ > δ 0 εK(f(t 0 )+δ) ) ≤ P t 0 (M f(t 0 )+δ > δ 0 εK ∗ ) 93 whereK ∗ = sup{K(f(t 0 )+δ) :t 0 ∈ [−p/2,p/2)}<∞ by the assumed continuity off. The reflection principle implies that P( sup t 0 ≤s≤t (W s −W t 0 )≥x) =P( inf t 0 ≤s≤t (W s −W t 0 )≤−x) = 2P(W t −W t 0 ≥x), ∀t≥t 0 ,x> 0 and hence, we have the standard tail probability bound P(M t >x) ≤ P( sup t 0 ≤s≤t (W s −W t 0 )≥x)+P( inf t 0 ≤s≤t (W s −W t 0 )≤−x) ≤ 4 √ 2π (t−t 0 ) 1/2 x e −x 2 /[2(t−t 0 )] . Therefore, P t 0 (M f(t 0 )+δ > δ 0 εK ∗ )≤Mεe −K 2 /e 2 where M = 2 q 2k 0 π K ∗ δ 0 andK = δ 0 2K ∗ k 0 withk 0 = sup{f(t 0 ) +δ−t 0 : t 0 ∈ [−p/2,p/2)} which completes the proof. Proposition 4 is a start, but obtaining exact expressions for eigenvalues and eigenvectors ofT ε is complicated by the fact that we do not always have an explicit formula for the transition density ˜ p ε of Θ ε n . Therefore, before analyzing T ε in detail, we devote the next section to examining the first-passage-time density functionp ε in detail. 3.2 First-Passage Time Densities We restrict our attention to the case when v(x) = −γx +I for γ,I ≥ 0. This leads to two important examples commonly found in the literature. Example (I) (γ = 0) : Before the first reset, X ε t is just Brownian motion with drift I and varianceε 2 (t−t 0 ). In theε = 0 case, this corresponds to the non-leaky or ‘pure’ integrate- and-fire models studied in [9]. 94 Example (II)(γ > 0) :X ε t is an Ornstein-Uhlenbeck process. Theε = 0 case has been studied in [13] and [3], for instance. We begin with (I) for motivational purposes. 3.2.1 Non-Leaky Case Consider (I) in the case wheng(t) = B is constant. SinceX ε t is just Brownian motion with drift I, it can be shown (see for instance [2]) that in this special case, the first-passage time density is given explicitly by p ε (t|t 0 ) = B−h(t 0 ) t−t 0 q ε (t|t 0 ) fort≥t 0 whereq ε (t|t 0 ) is the transition density function forX ε t givenX ε t 0 =h(t 0 ) evaluated atB. This yields the expression p ε (t|t 0 ) = B−h(t 0 ) √ 2πε(t−t 0 ) 3/2 exp{− (B−h(t 0 )−I(t−t 0 )) 2 2ε 2 (t−t 0 ) }. (3.11) Solving theε = 0 equation trivially givesx(t) = h(t 0 ) +I(t−t 0 ) so thatf(t 0 ) satisfies the equation I(f(t 0 )−t 0 ) =B−h(t 0 ). (3.12) Using (3.12) in (3.11), we obtain the useful representation p ε (t|t 0 ) = I(f(t 0 )−t 0 )) √ 2πε(t−t 0 ) 3/2 exp{− I 2 (t−f(t 0 )) 2 2ε 2 (t−t 0 ) }. (3.13) 95 From (3.13), we can see that ift is not near f(t 0 ), p ε (t|t 0 ) is exponentially small asε → 0. However, ift =f(t 0 )+s for smalls, then p ε (t|t 0 ) = I(f(t 0 )−t 0 )) √ 2πε(f(t 0 )−t 0 +s) 3/2 exp{− I 2 (t−f(t 0 )) 2 2ε 2 (f(t 0 )−t 0 +s) } (3.14) ≈ 1 √ 2πεσ 0 (t 0 ) exp{− (t−f(t 0 )) 2 2ε 2 σ 2 0 (t 0 ) } =:p ε 0 (t|t 0 ) (3.15) whereσ 2 0 (t 0 ) = (f(t 0 )−t 0 )/I 2 . Motivated by the scaling used in Lemma 4 of Section 2.3.1, we obtain: 8 Theorem. For anyt,t 0 ∈R, we have lim ε→0 εp ε (εt+f(t 0 )|t 0 ) =p 0 (t|t 0 ) asε→ 0 where p 0 (t|t 0 ) = 1 √ 2πσ 0 (t 0 ) e −t 2 /(2σ 2 0 (t 0 )) with σ 2 0 = (f(t 0 )−t 0 )/I 2 . 
In other words, the density of ε −1 (τ ε 1 −f(t 0 )) converges to the density of a mean zero normal random variable with varianceσ 2 0 (t 0 ). Theorem 8 basically tells us that for smallε, τ ε n+1 ≈f(τ ε n )+εσ 0 (τ ε n )χ n . Therefore, we should suspect that our work in Chapter 2 will apply to the analysis of (I) as well. We will return to this argument in Section 3.4.1 and show that this is indeed the case. 9 Remark. The varianceε 2 σ 2 0 (t 0 ) of the approximate chain has a nice geometric interpretation sincef(t 0 )−t 0 = spatial variance ofX ε t att =f(t 0 ). Therefore,ε 2 σ 2 0 (t 0 ) can be interpreted as a re-scaling of the spatial variance at the first crossing time where the re-scaling constant is the square of the difference between the deterministic solutions’s slope and the boundary’s 96 slope (in this case, the latter is 0) at the time of the first crossing. In other words, we have the intuitive solution Variance ofτ ε 1 ≈ Spatial variance ofX ε t at passage-timet =f(t 0 ) (x ′ (f(t 0 )) -g ′ (f(t 0 ))) 2 3.2.2 Leaky Case In this Section, we examine (II) for general g satisfying Assumptions (A) and (B) . We no longer have explicit formulas for p ε at our disposal. However, there has been a great deal of work done on finding implicit integral equations and numerical approximations for first- passage-time densities. In particular, sinceX ε t is a Gaussian process, we can apply the result of Durbin in [6] to obtain the decomposition p ε (t|t 0 ) =b ε (t|t 0 )q ε (t|t 0 ) (3.16) whereq ε (t|t 0 ) is the density ofX ε t evaluated atg(t) given thatX ε t 0 =h(t 0 ) and b ε (t|t 0 ) := lim sրt (t−s) −1 E t 0 ,h(t 0 ) (1 τ ε >s (g(s)−X ε s )|X ε t =g(t)). Now q ε (t|t 0 ) = 1 √ 2πεσ(t,t 0 ) exp{− (g(t)−x(t|t 0 )) 2 2ε 2 σ 2 (t,t 0 ) } (3.17) whereσ 2 (t,t 0 ) = 1 2γ (1−e −2γ(t−t 0 ) ) andx(t|t 0 ) = (h(t 0 )− I γ )e −γ(t−t 0 ) + I γ is the solution to the ODEx ′ =−γx+I,x(t 0 ) =h(t 0 ). We callq ε the density term andb ε the slope term in the decomposition (3.16) for reasons that if not already obvious, will become so shortly. 97 Analysis of the Density Term Due to the explicit nature of the density term, analysis can be carried out in much the same way as the analysis carried out in Section 3.2.1. In particular,q ε is exponentially small when t =f(t 0 )+s for larges (since then,g(t) is far fromx(t|t 0 )) and whent =f(t 0 )+s for small s, we have g(t)−x(t|t 0 ) = g(t)−g(f(t 0 ))+x(f(t 0 )|t 0 )−x(t|t 0 ) ≈ (g ′ (f(t 0 ))−x ′ (f(t 0 )|t 0 ))s = −sm(t 0 ) wherem(t 0 ) := x ′ (f(t 0 )|t 0 )−g ′ (f(t 0 )) is the difference in the slopes ofx andg at the time of first crossing. Substituting this expression in 3.17 and noting thatσ 2 (t,t 0 ) ≈ σ 2 (f(t 0 ),t 0 ) whens is small, we obtain the approximation m(t 0 )q ε (t|t 0 )≈ 1 √ 2πεσ γ (t 0 ) exp{− (t−f(t 0 )) 2 2ε 2 σ 2 γ (t 0 ) } =:p ε γ (t|t 0 ) (3.18) whereσ γ (t 0 ) =σ(f(t 0 ),t 0 )/m(t 0 ). Therefore, we have: 16 Lemma. For anyt 0 ,t∈R, lim ε→0 εm(t 0 )q ε (εt+f(t 0 )|t 0 ) =p γ (t|t 0 ) asε→ 0 where p γ (t|t 0 ) = 1 √ 2πσ γ (t 0 ) e −t 2 /(2σ 2 γ (t 0 )) andσ 2 γ (t 0 ) = [ 1 2γ (1−e −2γ(f(t 0 )−t 0 ) )]/m 2 (t 0 ). 98 With Lemma 16 in mind, we would like to show thatb ε (εt +f(t 0 )|t 0 ) is close to m(t 0 ) for smallε as that would yield the nice approximation τ ε n+1 ≈f(τ ε n )+εσ γ (τ ε n )χ n . In order to establish this fact, we will need to explore some at first seemingly unrelated topics. We ask the reader to be patient with these side excursions; the payoff will be evident when we reach the main result in Theorem 9. 
Alternatively, if one wishes to maintain the continuity of our current line of reasoning, we suggest the reader skip ahead to this result and fill in notational gaps as needed. Conclusions are contained in Section 3.2.5 where we see the desired analog of Theorem 8. 3.2.3 Pinned O-U Processes For the purposes of this Section, we assume thatX ε t is the stationary solution to dX ε t = (−γX ε t +I)dt+εdW t with ε,γ,I > 0. In other words, X ε t is a gaussian process with mean I/γ and covariance function ρ ε (s,t) := ε 2 2γ e −γ|t−s| . For a good amount of our work, we will need information regarding the conditional law ofX ε s given thatX ε t 0 = x andX ε t = y fort 0 ≤ s≤ t. Therefore, we will need the following result which is standard regression analysis. 17 Lemma. Suppose that{Z t : t ∈ T} is a Gaussian process onR with mean functionμ(t) and covariance functionρ(s,t), s,t ∈ T ⊂ R. LetT 0 := {t i } i=1,..,N ⊂ T and suppose that the matrixA := (ρ(t i ,t j )) i,j=1,...,N is invertible withA −1 =B. Then the conditional law ofZ t 99 givenZ t 1 =z 1 ,...,Z t N =z N is Gaussian with meanμ(t)+ P N i,j=1 ρ(t,t i )B ij (z j −μ(t j )) and covariance function ˜ ρ(s,t) =ρ(s,t)− P N i,j=1 ρ(s,t i )B ij ρ(t,t j ). Furthermore, if bothZ t and ρ(s,·) are continuous, then so is the conditioned process. Proof: By replacingZ t withZ t −μ(t), we may assume without loss of generality thatZ t is mean zero. Define the process V t = N X i,j=1 ρ(t,t i )B ij Z t j for t ∈ T . By definition, V t is measurable with respect to σ{Z t 1 ,..,Z t N } and since V t is a linear combination of mean zero Gaussian random variables, it is itself a mean zero Gaussian process. Also note thatV t j =Z t j ,∀j = 1,2,...,N. LetU t :=Z t −V t .U t is certainly a mean zero Gaussian process as well and since Cov(U s ,V t ) = E(Z s V t )−E(V s V t ) = N X i,j=1 ρ(t,t i )B ij ρ(s,t j )− N X i,j,k,l=1 ρ(t,t i )B ij ρ(s,t k )B kl ρ(t j ,t l ) = N X i,j=1 ρ(t,t i )B ij ρ(s,t j )− N X i,j,k,l=1 ρ(t,t i )ρ(s,t k )B ij B kl A lj = N X i,j=1 ρ(t,t i )B ij ρ(s,t j )− N X i,j,k=1 ρ(t,t i )ρ(s,t k )B ij δ jk = N X i,j=1 ρ(t,t i )B ij ρ(s,t j )− N X i,j=1 ρ(t,t i )ρ(s,t j )B ij = 0, 100 U s is independent of V t , ∀s,t ∈ T . But then U t must be independent of σ{Z t 1 ,..,Z t N } so that the law of Z t = U t + V t given Z t i = z i , for i = 1,...,N is the same as the law of U t + P N i,j=1 ρ(t,t i )B ij z j . Since Cov(U s ,U t ) = ρ(s,t)−E(V s Z t )− Cov(U s ,V t ) = ρ(s,t)− N X i,j=1 ρ(s,t i )B ij ρ(t,t j ), we obtain the desired covariance structure. Finally, ifZ t andρ(s,·) are continuous, then so is U t + P N i,j=1 ρ(t,t i )B ij z j and since this process as the same law as the conditioned process, the conditioned process must be continuous as well . 10 Remark. From the proof, we know that the law ofZ t , givenZ t j =z j ,j = 1,2,...,N is the same as the law of μ(t)+ N X i,j=1 ρ(t,t i )B ij (z j −μ(t j ))+U t where U t = (Z t −μ(t))− N X i,j=1 ρ(t,t i )B ij (Z t j −μ(t j )). We refer to the processU ε · as the ‘residual process’ since it represents the error made in approx- imatingZ ε · by its conditional expectation givenZ t j =z j ,j = 1,2,...,N. We use the notationP t,y to denote the conditional probability law ofX ε · givenX ε t = y andP to denote the unconditional law ofX ε · . By Lemma 17, we have m(s|t,y) :=E t,y (X ε s ) =I/γ+(ρ ε (s,t)/ρ ε (t,t))(y−I/γ) =I/γ+(y−I/γ)e −γ|t−s| . (3.19) 101 We also obtain the following result about the conditional law ofX ε · givenX ε t 0 = x,X ε t = y, t 0 <t. 
Here, we set ψ 1 (s|t 0 ,t) := sinh[γ(t−s)]/sinh[γ(t−t 0 )] and ψ 2 (s|t 0 ,t) := sinh[γ(s−t 0 )]/sinh[γ(t−t 0 )]. 1 Corollary. The conditional law of{X ε s :t 0 ≤s≤t} givenX ε t 0 =x andX ε t =y is the same as the law of Z ε s :=m(s|t 0 ,x,t,y)+U ε s where m(s|t 0 ,x,t,y) :=E t 0 ,x (X ε s |X ε t =y) =I/γ +ψ 1 (s|t 0 ,t)(x−I/γ)+ψ 2 (s|t 0 ,t)(y−I/γ) and U ε s := (X ε s −I/γ)−ψ 1 (s|t 0 ,t)(X ε t 0 −I/γ)−ψ 2 (s|t 0 ,t)(X ε t −I/γ). Proof: Follows directly from Remark 10. 11 Remark. Suppose thatx(t|t 0 ,x) is the solution to the differential equationdx/dt =−γx+ I with the initial conditionx(t 0 |t 0 ,x) =x. Thenx(t|t 0 ,x) is given by x(t|t 0 ,x) = (x− I γ )e −γ(t−t 0 ) + I γ and with a little algebraic manipulation, we obtain the following alternative representation for the conditional mean of Corollary 1 in terms ofx(s|t 0 ,x) andx(t|t 0 ,x): m(s|t 0 ,x,t,y) =x(s|t 0 ,x)+ψ 2 (s|t 0 ,t)(y−x(t|t 0 ,x)). 102 This form will be useful in the proof of Theorem 9. Our next task is to obtain bounds on the residual process U ε · from Corollary 1. To this end, define ˜ X ε t = (ε/ √ 2γ)e −γt W(e 2γt ) whereW(·) = W · is a standard Brownian motion. Then ˜ X ε t is clearly a mean zero gaussian with Cov( ˜ X ε s , ˜ X ε t ) =E( ˜ X ε s ˜ X ε t ) = ε 2 2γ e −γ(t−s) fors≤ t so that ˜ X ε t is a stationary, mean zero O-U process and is equal in law toX ε t −I/γ. This fact allows us to express the residual processU ε s in terms of the standard Wiener process as the following corollary illustrates. 2 Corollary. For allt 0 ≤s≤t, the residual processU ε s is equal in law to the process ˜ U ε s = ε √ 2γ [e −γs (W(e 2γs )−W(e 2γt 0 ))−e −γt ψ 2 (s|t 0 ,t)(W(e 2γt )−W(e 2γt 0 ))]. Proof : If we replaceX ε · −I/γ with ˜ X ε · in the definition ofU ε s and call the new process ˜ U ε s , then ˜ U ε s is equal in law toU ε s and by the definition of ˜ X ε · , we have ˜ U ε s = ε √ 2γ e −γs (W(e 2γs −W(e 2γt 0 ))− ε √ 2γ (e −γt 0 ψ 1 (s|t 0 ,t)−e −γs )W(e 2γt 0 ) − ε √ 2γ e −γt ψ 2 (s|t 0 ,t)W(e 2γt ). An elementary calculation reveals thate −γt 0 ψ 1 (s|t 0 ,t)−e −γs =−e −γt ψ 2 (s|t 0 ,t), yielding the result. . 18 Lemma. For anyδ > 0, we have P(sup{|U ε s | :t 0 ≤s≤t}≥δ)≤ K T ε δ e −M T δ 2 /ε 2 103 whereT =t−t 0 ,K T = (4 √ e 2γT −1/ √ γπ), andM T =γ/(4(e 2γT −1)) Proof : Define the process B(v) =B v :=e −γt 0 (W((v +1)e 2γt 0 )−W(e 2γt 0 )) ThenB v is a mean zero gaussian process and ifu≤v, Cov(B u ,B v ) = E(B u B v ) = E[(W((u+1)e 2γt 0 )−W(e 2γt 0 ))(W((v +1)e 2γt 0 )−W(e 2γt 0 ))] e 2γt 0 = e −2γt 0 [(u+1)e 2γt 0 −e 2γt 0 ] =u = min(u,v) so thatB v is in fact a Wiener process. Suppose thatt 0 ≤s≤t and letT =t−t 0 ,u =s−t 0 , and ˜ ψ(u,T) =ψ 2 (s|t 0 ,t). Then we have 0≤u≤T ,| ˜ ψ(u,T)|≤ 1, and ˜ U ε s = ε √ 2γ e −γu e −γt 0 [W(e 2γu e 2γt 0 )−W(e 2γt 0 )] − ε √ 2γ ˜ ψ(u,T)e −γT e −γt 0 (W(e 2γT e 2γt 0 )−W(e 2γt 0 )) = ε √ 2γ [e −γu B(e 2γu −1)− ˜ ψ(u,T)e −γT B(e 2γT −1)] ≤ 2ε √ 2γ sup{|B u | : 0≤u≤e 2γT −1}. Therefore, sup{| ˜ U ε s | :t 0 ≤s≤t}≤ε r 2 γ sup{|B u | : 0≤u≤e 2γT −1}. 104 This implies that for anyδ > 0, P(sup{| ˜ U ε s | :t 0 ≤s≤t}≥δ) ≤ P(sup{|B u | : 0≤u≤e 2γT −1}≥ δ ε r γ 2 ) ≤ K T ε δ e −M T δ 2 /ε 2 by the Reflection Principle. Finally, since Corollary 2 implies thatU ε s has the same law as ˜ U ε s , we are done. 3.2.4 The Integral Equation of Durbin In this section, we again assume that X ε t is a stationary O-U process (see the beginning of Section 3.2.3). 
We extend the generality of our results by assuming thatX ε t 0 =x 0 <g(t 0 ) (not necessarilyx 0 =h(t 0 )) and define τ ε =τ ε (t 0 ,x 0 ) = inf{t≥t 0 :X ε t =g(t)|X ε t 0 =x 0 }. We also define τ = inf{t≥t 0 :x(t|t 0 ,x 0 ) =g(t)} and assume Assumption 1 holds so that as in Proposition 3, we can conclude that there exists a C ∞ functionf such thatτ =f(t 0 ,x 0 ). IfX ε s =x<g(t), p ε (t|s,x) := d dt P s,x (τ ε ≤t) gives the first-passage time density for the processX ε t across the (upper) boundaryg(t) given thatX ε s = x. Letq ε (t,y|s,x) be the transition density ofX ε t aty given thatX ε s = x and set 105 q ε (t|s) := q ε (t,g(t)|s,g(s)). As mentioned in Section 3.3, the result of Durbin in [6] implies that p ε (t|t 0 ,x 0 ) =b ε (t|t 0 ,x 0 )q ε (t,g(t)|t 0 ,x 0 ) (3.20) ∀t≥t 0 where b ε (t|t 0 ,x 0 ) := lim sրt (t−s) −1 E t 0 ,x 0 (1 τ ε >s (g(s)−X ε s )|X ε t =g(t)). (3.21) Writing1 =1 τ ε ≤s +1 τ ε >s , we haveb ε (t|t 0 ,x 0 ) =b ε 1 (t|t 0 ,x 0 )− ¯ b ε (t|t 0 ,x 0 ) where b ε 1 (t|t 0 ,x 0 ) := lim sրt (t−s) −1 E t 0 ,x 0 ((g(s)−X ε s )|X ε t =g(t)) and ¯ b ε (t|t 0 ,x 0 ) := lim sրt (t−s) −1 E t 0 ,x 0 (1 τ ε ≤s (g(s)−X ε s )|X ε t =g(t)). Then from Corollary 1, we have b ε 1 (t|t 0 ,x 0 ) = lim sրt g(s)−m(s|t 0 ,x 0 ,t,g(t)) t−s = lim sրt g(s)−g(t) t−s − x 0 −I/γ sinh[γ(t−t 0 )] lim sրt sinh[γ(t−s)] (t−s) + g(t)−I/γ sinh[γ(t−t 0 )] lim sրt sinh[γ(t−t 0 )]−sinh[γ(s−t 0 )] t−s = −g ′ (t) + 1 sinh[γ(t−t 0 )] [(−γx 0 +I)−(−γg(t)+I)cosh[γ(t−t 0 )]]. (3.22) Therefore,b ε 1 is easily computable and is independent ofε so that we can drop theε superscript. 106 12 Remark. As in Remark 11, we can rewrite b 1 in terms of the solution x(t|t 0 ,x 0 ) to the differential equationdx/dt =−γx+I with the initial conditionx(t 0 ) =x 0 as: b 1 (t|t 0 ,x 0 ) = [(−γx(t|t 0 ,x 0 )+I)−g ′ (t)]+[g(t)−x(t|t 0 ,x 0 )] γcosh[γ(t−t 0 )] sinh[γ(t−t 0 )] so that if we look at the value of b 1 (t|t 0 ,x 0 ) at a time t satisfying x(t|t 0 ,x 0 ) = g(t) (for instance,t =f(t 0 ,x 0 )), we have b 1 (t|t 0 ,x 0 ) = (−γx(t|t 0 ,x 0 )+I)−g ′ (t) =x ′ (t|t 0 ,x 0 )−g ′ (t). This shows us that at a hitting time for the deterministic process,b 1 gives the difference in the slope ofx andg, providing us with a nice intuition for the role played byb 1 . Also recall our work in Section 3.2.2. Sinceb 1 has such a simple structure, it is tempting to use it as an approximation ofb ε for small ε> 0. In order to prove that it is a good approximation, however, we will need to show that lim εց0 ¯ b ε (t|t 0 ,x 0 ) = 0. (3.23) The following Lemma, which is stated but not rigorously proved in [6], is useful for investigat- ing the validity of (3.23). 19 Lemma. For anyt≥t 0 , we have the integral expression ¯ b ε (t|t 0 ,x 0 ) = 1 q ε (t,g(t)|t 0 ,x 0 ) Z t t 0 b 1 (t|r,g(r))q ε (t|r)p ε (r|t 0 ,x 0 )dr. 107 Proof: For ease of notation, we set m(s|r,t) = m(s|r,g(r),t,g(t)) for r ≤ s ≤ t. Using repeated conditioning and the Markov property, we have (t−s) −1 E t 0 ,x 0 (1 τ ε ≤s (g(s)−X ε s )|X ε t =g(t)) = (t−s) −1 E t 0 ,x 0 (1 τ ε ≤s E((g(s)−X ε s )|X ε t =g(t),F τ ε)|X ε t =g(t)) =E t 0 ,x 0 (1 τ ε ≤s g(s)−m(s|τ ε ,t) t−s |X ε t =g(t)) = Z s t 0 g(s)−m(s|r,t) t−s ¯ p ε (r|t 0 ,x 0 ,t,g(t))dr (3.24) where ¯ p ε (r|t 0 ,x 0 ,t,y) is the conditional density function forτ ε given thatX ε t =y andX ε t 0 = x 0 . 
Now P t 0 ,x 0 (τ ε ∈ (r,r +dr),X t ∈ (y,y+dy)) =P t 0 ,x 0 (τ ε ∈ (r,r+dr))P t 0 ,x 0 (X ε t ∈ (y,y+dy)|τ ε ∈ (r,r +dr)) =p ε (r|t 0 ,x 0 )q ε (t,y|r,g(r))drdy and P t 0 ,x 0 (τ ε ∈ (r,r +dr),X ε t ∈ (y,y+dy)) =P t 0 ,x 0 (X ε t ∈ (y,y+dy))P t 0 ,x 0 (τ ε ∈ (r,r +dr)|X ε t ∈ (y,y+dy)) =q ε (t,y|t 0 ,x 0 )¯ p ε (r|t 0 ,x 0 ,t,y)drdy so that we have the Bayes’ like formula ¯ p ε (r|t 0 ,x 0 ,t,y) = q ε (t,y|r,g(r))p ε (r|t 0 ,x 0 ) q ε (t,y|t 0 ,x 0 ) . (3.25) 108 Substituting this expression into (3.24), we obtain the integral expression (t−s) −1 E t 0 ,x 0 (1 τ ε ≤s (g(s)−X ε s )|X ε t =g(t)) = 1 q ε (t,g(t)|t 0 ,x 0 ) Z s t 0 g(s)−m(s|r,t) t−s q ε (t|r)p ε (r|t 0 ,x 0 )dr. (3.26) To complete the proof, we letsրt in (3.26). Now from the definition ofb 1 we have lim sրt g(s)−m(s|r,t) t−s =b 1 (t|r,g(r)) and hence, the proof will be complete if we can justify passing the limit underneath the integral sign. We first show that (g(s)−m(s|r,t))/(t−s) is bounded in r as s → t. To this end, we define β(s|r,t) = g(s)−m(s|r,t). Then β(t|r,t) = 0 and thus the MVT implies that ∀r≤s<t, there existss ∗ ∈ [s,t] such that β(s|r,t) t−s =− ∂β(s|r,t) ∂s (s ∗ ). Therefore it is sufficient to show ∂β(s|r,t) ∂s is bounded uniformly inr≤s<t. But if we write β(s|r,t) = g(s)+ sinh[γ(t−s)] sinh[γ(t−r)] (g(t)−g(r)) − sinh[γ(t−s)]+sinh[γ(s−r)] sinh[γ(t−r)]) (g(t)+ I γ )+ I γ , then (suppressing the arguments for ∂β ∂s )) we have | ∂β ∂s | ≤ |g ′ (s)|+γcosh[γ(s−t)] |g(r)−g(t)| sinh[γ(t−r)] +γ |cosh[γ(t−s)]−cosh[γ(s−r)]| sinh[γ(t−r)] |g(t)+ I γ | ≤ M +K 1 M +K 2 |g(t)| 109 whereM := sup u∈[t 0 ,t] {|g ′ (u)|} andK 1 ,K 2 are constants depending only onγ andt. Using this fact along with Equation (3.25) and the fact that ¯ p(r|t,g(t)) is a density, we can use the BCT to conclude that ¯ b ε (t|t 0 ,x 0 ) = lim sրt 1 q ε (t,g(t)|t 0 ,x 0 ) Z s t 0 g(s)−m(s|r,t) t−s q ε (t|r)p ε (r|t 0 ,x 0 )dr = lim sրt Z s t 0 g(s)−m(s|r,t) t−s ¯ p(r|t,g(t))dr = Z t t 0 b 1 (t|r,g(r))¯ p(r|t,g(t))dr = 1 q ε (t,g(t)|t 0 ,x 0 ) Z t t 0 b 1 (t|r,g(r))q ε (t|r)p ε (r|t 0 ,x 0 )dr which complete the proof. Finally, we are ready to state and prove the main result of this section. 9 Theorem. Let C be a compact subset of R 2 with the property that x 0 < g(t 0 ) for any (t 0 ,x 0 )∈C. Then for anyρ> 0,∃δ 0 ,ε 0 > 0 so that |b ε (t|t 0 ,x 0 )−b 1 (t|t 0 ,x 0 )|<ρ whenever (t 0 ,x 0 )∈C,|t−f(t 0 ,x 0 )|<δ 0 , and0<ε<ε 0 where b 1 (t|t 0 ,x 0 ) = [(−γx(t|t 0 ,x 0 )+I)−g ′ (t)]+[g(t)−x(t|t 0 ,x 0 )] γcosh[γ(t−t 0 )] sinh[γ(t−t 0 )] . In other words,b ε (t|t 0 ,x 0 ) converges uniformly tob 1 (t|t 0 ,x 0 ) asε ց 0 for (t 0 ,x 0 ) ∈ C and t∈B δ 0 (f(t 0 ,x 0 )). Proof: Letρ> 0. For any(t 0 ,x 0 )∈C,δ > 0, andt∈R, we can use Lemma 19 to write b ε (t|t 0 ,x 0 )−b 1 (t|t 0 ,x 0 ) = ¯ b ε (t|t 0 ,x 0 ) =I 1 (t|t 0 ,x 0 )+I 2 (t|t 0 ,x 0 ) 110 where I 1 (t|t 0 ,x 0 ) = 1 q ε (t,g(t)|t 0 ,x 0 ) Z t t−δ b 1 (t|r,g(r))q ε (t|r)p ε (r|t 0 ,x 0 )dr and I 2 (t|t 0 ,x 0 ) = 1 q ε (t,g(t)|t 0 ,x 0 ) Z t−δ t 0 b 1 (t|r,g(r))q ε (t|r)p ε (r|t 0 ,x 0 )dr. Now b 1 (t|r,g(r)) = −g ′ (t)+ γ sinh[γ(t−r)] (g(t)−g(r)) +γ cosh[γ(t−r)]−1) sinh[γ(t−r)] (g(t)−I) = −g ′ (t)+ γ(t−r) sinh[γ(t−r)] ) g(t)−g(r) t−r + cosh[γ(t−r)]−1 t−r t−r sinh[γ(t−r)] (g(t)−I) → −g ′ (t)+ γ γ g ′ (t)+ 0 γ (g(t)−I) = 0 as|r−t|→ 0. Furthermore,g∈C 1 so thatg ′ is uniformly continuous on any compact subset ofR. This implies thatb 1 (t|r,g(r))→ 0 uniformly as|r−t|→ 0 fort in any compact subset ofR. 
Since f(C) is compact, we can thus fix δ > 0 so that|b 1 (t|r,g(r))| < ρ/2 whenever d(t,f(C))< 1 and|r−t|<δ. Then, for any (t 0 ,x 0 )∈C andt∈B 1 (f(C)) we have |I 1 (t|t 0 ,x 0 )| ≤ ρ 2 1 q ε (t,g(t)|t 0 ,x 0 ) Z t t−δ q ε (t|r)p ε (r|t 0 ,x 0 )dr = ρ 2 Z t t−δ ¯ p ε (r|t 0 ,x 0 ,t,g(t))dr ( by Equation (3.25)) = ρ 2 P(τ ε ∈ (t−δ,t)|X ε t 0 =x 0 ,X ε t =g(t)) ≤ ρ 2 . (3.27) 111 To obtain an estimate forI 2 (t|t 0 ,x 0 ), we define a(u,x,t) := min s∈[u,t−δ] {g(s)−m(s|u,x,t,g(t))} for all (u,x) ∈ C and t ≥ u. If we fix a point P = (t 0 ,x 0 ) ∈ C, the continuity of f implies that we can chooseδ 1 > 0 so thatg(s)−x(s|u,x) > 2η for someη > 0, whenever |(u,x)−P| < δ 1 , andu≤ s≤ f(t 0 ,x 0 )−δ/2 (recall thatf(t 0 ,x 0 ) is by definition the first timex(s|t 0 ,x 0 ) reachesg(s)). Furthermore, sincex(s|t 0 ,x 0 )−m(s|t 0 ,x 0 ,t,g(t)) = 0, for all t 0 ≤ s≤ t ift = f(t 0 ,x 0 ), we can chooseδ 2 > 0 so that|x(s|u,x)−m(s|u,x,t,g(t))|< η if max(|(u,x)−P|,|t−f(t 0 ,x 0 )|)<δ 2 andu≤s≤t. Now ift∈B δ P (f(t 0 ,x 0 )), (u,x)∈ B δ P (P), andu≤s≤t−δ withδ P := min(1,δ/2,δ 1 ,δ 2 ), thenu≤s≤f(t 0 ,x 0 )−δ/2 and hence, a(u,x,t) ≥ min s∈[u,f(t 0 )−δ/2] {|g(s)−x(s|u,x)|} − max s∈[u,t] {|x(s|u,x)−m(s|u,x,t,g(t))|} ≥ η (3.28) for any(u,x)∈B δ P (P) andt∈B δ P (f(P)). Therefore, sinceb 1 is bounded inr andt, say by L, we have |I 2 (t|u,x)| ≤ L 1 q ε (t,g(t)|u,x) Z t−δ t 0 q ε (t|r)p ε (r|u,x)dr = L Z t−δ t 0 ¯ p ε (r|u,x,t,g(t))dr ( by Equation (3.25) ) = LP(τ ε ≤t−δ|X ε u =x,X ε t =g(t)) = LP( sup u≤s≤t−δ {X ε s −g(s)}≥ 0|X ε u =x,X ε t =g(t)) 112 We can then apply Corollary 1 to conclude that |I 2 (t|u,x)| ≤ LP( sup u≤s≤t−δ {U ε s +m(s|u,x,t,g(t))−g(s)}≥ 0) ≤ LP( sup u≤s≤t−δ {U ε s }≥a(u,x,t)) ≤ LP( sup u≤s≤t−δ {U ε s }≥η) ( by (3.28) ) ≤ L K T ε η e −M T η 2 /ε 2 , the last line following from Corollary 18 and the fact thatT :=t−δ−u< 1+f(t 0 ,x 0 )−t 0 by the definition ofδ P . This shows that∀(u,x)∈B δ P (P) andt∈B δ P (f(P)), we have |I 2 (t|u,x)|≤Kεe −M/ε 2 (3.29) whereK =K 1+f(t 0 )−t 0 L/η andM =η 2 M 1+f(t 0 )−t 0 . Finally, if we chooseε P so thatKεe −M/ε 2 < ρ/2 wheneverε < ε P , then∀(u,x) ∈ B δ P (P) andt∈B δ P (f(P)), (3.27) and (3.29) imply |b ε (t|u,x)−b 1 (t|u,x)|<Kεe −M/ε 2 + ρ 2 <ρ. To prove the convergence is uniform on all ofC, we simply note that ifρ> 0 is given, then for each pointP ∈C, the above argument implies that we can findδ P ,ε P > 0 so that |b ε (t|u,x)−b 1 (t|u,x)|<ρ whenever max(|(u,x)−P|,|t−f(P)|) < δ P and 0 < ε < ε P . The ballsB δ P (P) form an open covering of C and hence by compactness, we can choose a finite number of pointsP i , i = 1,2,...,m so that the ballsB δ P i (P i ) also coverC. Moreover,f(C) is also compact so we 113 can choose a finite number of pointsQ j , j = 1,2,...,n so that the balls B δ Q j (f(Q j )) cover f(C). Takingδ 0 = min{δ Q j :j = 1,2,...,n} andε 0 = min{ε P i ,ε Q j :i = 1,2,...,m, andj = 1,2,...,n} then yields the result. The exact form ofb 1 follows from Remark 12. 3.2.5 Conclusions for Leaky Case Putting together our work in Sections 3.2.2 - 3.2.4, we obtain the following result giving us our desired approximation to the first-passage-time density in Example (II). 10 Theorem. For anyt 0 ,t∈R, lim ε→0 εp ε (εt+f(t 0 )|t 0 ) =p γ (t|t 0 ) asε→ 0 where p γ (t|t 0 ) = 1 √ 2πσ γ (t 0 ) e −t 2 /(2σ 2 γ (t 0 )) andσ 2 γ (t 0 ) = [ 1 2γ (1−e −2γ(f(t 0 )−t 0 ) )]/m 2 (t 0 ). In other words, the density ofε −1 (τ ε 1 −f(t 0 )) converges to the density of a mean zero normal random variable with varianceσ 2 γ (t 0 ). 
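Before turning to the proof, the limit can also be checked numerically. The sketch below is an illustrative Monte Carlo experiment only: every numerical choice (γ, I, ε, the threshold g, the reset level h, and t_0) is an assumption rather than a value taken from the text, and the Euler-Maruyama crossing detection carries a small discretization bias. It compares the empirical standard deviation of ε^{-1}(τ_1^ε − f(t_0)) with the predicted σ_γ(t_0).

import numpy as np

gamma, I, eps = 1.0, 2.0, 0.02                      # assumed constants
g  = lambda t: 1.5 + 0.05 * np.sin(2 * np.pi * t)   # threshold (assumed)
dg = lambda t: 0.1 * np.pi * np.cos(2 * np.pi * t)  # g'(t)
h  = lambda t: 0.3 * np.sin(2 * np.pi * t)          # reset level (assumed)
x_det = lambda t, t0: (h(t0) - I / gamma) * np.exp(-gamma * (t - t0)) + I / gamma

def f_det(t0, dt=1e-5):
    """First time the noise-free solution x(t|t0) reaches g(t)."""
    t = t0
    while x_det(t, t0) < g(t):
        t += dt
    return t

def tau_samples(t0, n, dt=1e-4, rng=np.random.default_rng(1)):
    """n Euler-Maruyama first-passage times of dX = (-gamma*X + I)dt + eps*dW,
    started at h(t0), across the moving threshold g(t)."""
    x, tau, t = np.full(n, h(t0)), np.full(n, np.nan), t0
    alive = np.ones(n, dtype=bool)
    while alive.any():
        x[alive] += (-gamma * x[alive] + I) * dt \
                    + eps * np.sqrt(dt) * rng.standard_normal(alive.sum())
        t += dt
        hit = alive & (x >= g(t))                   # paths crossing on this step
        tau[hit] = t
        alive &= ~hit
    return tau

t0 = 0.1
T1 = f_det(t0)
m  = (-gamma * g(T1) + I) - dg(T1)                  # m(t0) = x'(f(t0)|t0) - g'(f(t0))
sigma_pred = np.sqrt((1 - np.exp(-2 * gamma * (T1 - t0))) / (2 * gamma)) / m
taus = tau_samples(t0, 2000)
print("predicted sigma_gamma(t0)       :", round(sigma_pred, 4))
print("empirical std of (tau - f(t0))/eps:", round(np.std((taus - T1) / eps), 4))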
Proof : Lemma 16 and Theorem 9 imply that lim ε→0 εm(t 0 )q ε (εt+f(t 0 )|t 0 ) = p γ (t|t 0 ) lim ε→0 b ε (εt+f(t 0 )|t 0 ) = b 1 (f(t 0 )|t 0 ) =m(t 0 ) and therefore, the result follows from (3.16). 3.3 The Structure of the Transition Operator The work of the previous section suggests that if v(x) = −γx + I, then the study of the sequence of firing phases Θ ε n should be similar to the study of (2.2) in Chapter 2. The goal of 114 the next two sections is to further develop this idea. We shall prove most of our early results in the case of general driftv∈C ∞ and specialize later as needed. As in Section 2.2, we begin by looking at the case where ˜ f := f modp has two fixed points θ s ,θ u satisfying the conditions that c s := f ′ (θ s ) ∈ (−1,1) and c u := f ′ (θ u ) / ∈ [−1,1]. We also assume that the basin of attraction of θ s is S 1 \{θ u } so that ˜ f n (θ) → θ s as n → ∞, ∀θ ∈ S 1 ,θ 6= θ u . Note that since ˜ f(θ s ) = θ s and ˜ f(θ u ) = θ u , there exist integersn s ,n u such that f(θ s ) = θ s +n s p and f(θ u ) = θ u +n u p. Such a situation certainly occurs in our two previously discussed examples of interest: Example (I) : If we assumeg(t) = B andh(t) = Asin2πt, withA < B, then we can easily check that τ 1 =f(t 0 ) :=t 0 +a−bsin2πt 0 witha =B/I,b =A/I > 0. In this case,f is in the well-studied family of sine-circle maps. Clearly,f(t+n) =f(t)+n for all integersn. If0≤a−1≤b, then there exists0≤t s < 1/4, t u = 1/2−t s withf(t s,u ) =t s,u +1 (equivalently,sin2πt s,u = a−1 b ). Now f ′ (t u ) = 1+ p b 2 −(a−1) 2 > 1 so thatt u is unstable and f ′ (t s ) = 1− p b 2 −(a−1) 2 ∈ (−1,0) ifb < b c (a) := p (a−1) 2 +4 so thatt s is stable providedb < b c (a). One can also confirm that all orbits of (3.5) converge toθ s =t s providedθ 0 6=t u . 115 Example (II) : If g,h are as in (I) above and I/γ > B so that v(x) > 0 if x < B and v(g(t)) =I−γB > 0 =g ′ (t) (i.e. Assumption 1 is satisfied), then τ 1 =f(t 0 ) := 1 γ log(a−bsin2πt 0 )+t 0 witha := I 0 /γ I 0 /γ−B andb := A I 0 /γ−B positive constants. As long as0≤a−e γ ≤b, we can again verify the existence oft s ,t u satisfyingf(t s,u ) = t s,u +1 witht u unstable andt s stable if and only ifb<b c (a,γ) := p (a−e γ ) 2 +4e 2γ . Both of the above examples exhibit period doubling bifurcations along the curveb c (a,γ) as we varyb. Returning to the theory, the assumed conditions on ˜ f lead to the splitting of S 1 into regions V 1 ,V 2 ,V 3 as in Proposition 1. Therefore, if φ ∈ B(S 1 ), we write φ = φ 1 +φ 2 +φ 3 where φ i =φ1 V i . The action ofT ε can then be described by the block decomposition T ε = T ε 11 T ε 12 T ε 13 T ε 21 T ε 22 T ε 23 T ε 31 T ε 32 T ε 33 whereT ε ij :B(V j )→B(V i ). In particular, T ε ij φ(θ 0 ) =1 V i (θ 0 ) Z V j φ(θ)˜ p ε (θ|θ 0 )dθ for anyφ∈B(V j ). 20 Lemma. There exist positive constantsM,K > 0 so thatkT ε ij k ∞ ≤Mεe −K/ε 2 ,∀i>j. Proof : Clearly, kT ε ij k ∞ = sup θ 0 ∈V i P θ 0 (Θ ε 1 ∈V j ) (3.30) 116 for anyi,j. Suppose first thati = 3 andj = 1,2. Then for anyθ 0 ∈ V i ,d( ˜ f(θ 0 ),V j ) > δ by Proposition 1, so that by (3.30), kT ε ij k ∞ = sup θ 0 ∈V 3 P θ 0 (Θ ε 1 ∈V i ) ≤ sup θ 0 ∈V 3 P θ 0 (d(Θ ε 1 , ˜ f(θ 0 ))>δ) ≤ sup t 0 ∈[−p/2,p/2) P t 0 (d(τ ε 1 ,f(t 0 ))>δ). But ifi = 2,j = 1, we also have P θ 0 (Θ ε 1 ∈V 1 )≤P θ 0 (d(Θ ε 1 , ˜ f(θ 0 ))>δ)≤P t 0 (d(τ ε 1 ,f(t 0 ))>δ) for anyθ 0 ∈V 2 and hence, the result follows from Proposition 4. In a similar fashion, 21 Lemma. 
k(T ε 22 ) n k ∞ ≤ (M N εe −K N /ε 2 ) p(n) whenever n ≥ N + 1 where N is the same constant as in (3) of Proposition 1,M N ,K N > 0 andp(n) =⌊ n N+1 ⌋. Proof : By the uniformity of the bound obtained in Proposition 4, we have P θ ε j−1 (d(θ ε j , ˜ f(θ ε j−1 ))>r)≤M r εe −Kr/ε 2 for anyr > 0,j≥ 1. The rest of the argument is identical to the proof of Lemma 3 in Section 2.2. This yields the ‘almost’ upper-triangular structure we were able to exploit in Chapter 2. Fol- lowing our heuristic developed there, we now move on to the local analysis near fixed points. 117 3.4 Analysis in the Neighborhood of a Fixed Point The next question of interest is to determine the limiting behavior of the diagonal operatorsT ε 11 andT ε 33 asε→ 0. To this end, we identifyS 1 ↔ [−p/2,p/2) by mappingθ s (or respectively, θ u ) → 0 so thatV 3 → (−δ s ,δ s ) ( orV 1 → (−δ u ,δ u )) andf(0) = n s p (orf(0) = n u p). We then extendT ε 33 (orT ε 11 ) to an operator onB(R) by defining T ε 33 φ(t 0 ) = 1 V 3 (t 0 ) Z δs −δs φ(t)˜ p ε (t|t 0 )dt = 1 V 3 (t 0 ) Z δs −δs X n∈Z φ(t)p ε (t+np|t 0 )dt for allφ∈ B(R) and look at the re-scaled versionT ε s := (U ε ) −1 ◦T ε 33 ◦U ε whereU ε φ(x) = φ(x/ε). Now T ε s φ(t 0 ) = 1 V 3 (εt 0 ) Z δs −δs X n∈Z φ(t/ε)p ε (t+np|εt 0 )dt = T ε sm φ(t 0 )+T ε se φ(t 0 ) where T ε sm φ(t 0 ) =1 V ε 3 (t 0 ) Z V ε 3 φ(t)εp ε (εt+n s p|εt 0 )dt (3.31) and T ε se φ(t 0 ) =1 V ε 3 (t 0 ) Z V ε 3 X n6=ns φ(t)εp ε (εt+np|εt 0 )dt (3.32) 118 with V ε 3 = (−δ s /ε,δ s /ε). But if εt 0 ∈ V 3 , then d(f(εt 0 ),B c δs (n s p)) > δ, so that for any t 0 ∈V ε 3 , |T ε se φ(t 0 )| ≤ kφk ∞ P εt 0 (τ ε 1 ∈B c δs (n s p)) ≤ kφk ∞ P εt 0 (d(τ ε 1 ,f(εt 0 )>δ) ≤ Mεe −K/ε 2 kφk ∞ , the last line following from Proposition 4. Therefore, kT ε se k ∞ ≤Mεe −K/ε 2 . This implies that for smallε> 0, the main contributions toT ε s come fromT ε sm . SinceV ε 3 →R as ε → 0, (3.31) suggests that the appropriate limit for T ε sm can be found if ∃ a transition density functionp(t|t 0 ) such that εp ε (εt+n s p|εt 0 )→p(t|t 0 ) (3.33) asε→ 0 (recall our work in Section 2.3.1). To illustrate, we return to our earlier examples. 3.4.1 Non-Leaky Case Here, we again consider the simple case wheng(t) = B for motivational purposes. Equation (3.14) implies that for smallε andt nearf(t 0 ),p ε (t|t 0 )≈p ε 0 (t|t 0 ) where p ε 0 (t|t 0 ) = 1 √ 2πσ 0 (t 0 ) e −(t−f(t 0 )) 2 /(2ε 2 σ 2 0 (t 0 )) 119 andσ 0 (t 0 ) = (f(t 0 )−t 0 )/I. If we define f ε (t 0 ) := 1 ε (f(εt 0 )−n s p) = 1 ε (f(εt 0 )−f(0)) then we havef ε (t)→f ′ (0)t 0 =c s t 0 andσ 0 (εt 0 )→σ 0 (0) asε→ 0 so that εp ε 0 (εt+n s p|εt 0 ) → 1 √ 2πσ 0 (0) exp{− (t−c s t 0 ) 2 2σ 2 0 (0) } asε → 0. We therefore obtain the following Lemma which the reader may notice is almost identical to Lemma 4 in Section 2.3.1. 22 Lemma. lim ε→0 εp ε (εt+n s p|εt 0 ) =p 0 (t|t 0 ) wherep 0 (t|t 0 ) is the transition density for the autoregressive chain X n+1 =c s X n +σ 0 χ n with χ n a family of iid standard normals, c s = f ′ (0), and σ 2 0 = f(0) I 2 . Furthermore, the convergence is uniform for (t 0 ,t) in any compact subset ofR 2 . Recall from Section 2.3 that the spectrum of the transition operator forX n consists ofc n s ,n≥ 0 and hence, the spectrum of T ε 33 will also be close toc n s . The reader can check that the same argument holds forT ε 11 to show that its transition density is close to the transition density of the chainY n+1 =c u Y n +σ 0 χ n where nowf ′ (0) =c u / ∈ [−1,1] andf(0) =n u p. 
In Section 2.4.1, we showed that this chain has eigenvalues|c u | −1 c −n u ,n≥ 0. This concludes our work on the non-leaky case and with Lemmas 20, 21, and 22 in hand (to replace Lemmas 1, 3, and 4) we can easily extend our main result in Section 2.2 to 11 Theorem. LetΘ ε n be the sequence of firing phases for a SIFM with constant driftv(x) =I, constant thresholdg(t) =B, and periodic reseth(t)∈C ∞ ([−p/2,p/2)) and suppose that the phase return map ˜ f for the noise free system has two fixed pointsθ s andθ u with the property 120 thatf n (θ) → θ s for allθ ∈ S 1 ,θ 6= θ u . LetT ε denote the transition operator for Θ ε n . Then for anyr > 0 andε sufficiently small, we can writeT ε = T ε lp +T ε up wherekT ε lp k ∞ < r and any eigenvalue of T ε up with modulus greater than r is of one of the two forms c n s +O(ε) or |c u | −1 c −n u +O(ε) for somen≥ 0 withc s,u :=f ′ (θ s,u ). See the statement of Theorem 2 for more details on eigenfunctions and eigendensities. Higher order expansions for the eigenvalues and eigenvectors certainly seem reasonable, but would require a more detailed analysis (see Section 2.3.2). 3.4.2 Leaky Case Our hard work in Section 3.2 really pays off when we return to the analysis of Example (II) . Recall that in this example, we assumed linear driftv(x) =−γx+I,γ > 0. Supposeg,h∈C ∞ are periodic,h(t)<g(t), and that Assumption 1 is satisfied. (3.16) implies εp ε (εt+n s p|εt 0 ) =εb ε (εt+n s p|εt 0 )q ε (εt+n s p|εt 0 ) and equation 3.18 implies εm(εt 0 )q ε (εt+n s p|εt 0 )≈p ε γ (εt+n s p|εt 0 ) for smallε where p ε γ (t|t 0 ) = 1 √ 2πεσ γ (t 0 ) e −(t−f(t 0 )) 2 /(2ε 2 σ 2 γ (t 0 )) withσ 2 γ (t 0 ) = σ 2 (t 0 ,f(t 0 ))/m 2 (t 0 ) andm(t 0 ) =x ′ (f(t 0 )|t 0 )−g ′ (f(t 0 )). Furthermore, from Theorem 9, we have b ε (εt+n s p|εt 0 )→b 1 (f(0)|0) =x ′ (f(0)|0)−g ′ (f(0)) =m(0) 121 asε→ 0 (recall thatn s p =f(0)). Since f ε (t 0 ) := f(εt 0 )−f(0) ε →f ′ (0)t 0 m(εt 0 ) → m(0) σ γ (εt 0 ) → σ γ (0) asε→ 0, we have p ε γ (εt+n s p|εt 0 )→p γ (t|t 0 ) asε→ 0 where p γ (t|t 0 ) = 1 √ 2πσ γ (0) e −(t−cst 0 ) 2 /(2σ 2 γ (0)) withc s =f ′ (0). Putting all this information together, we obtain 5 Proposition. For anyt 0 ,t∈R, we have lim ε→0 εp ε (εt+n s p|εt 0 ) =p γ (t|t 0 ) wherep γ is the transition density for the autoregressive chain X n+1 =c s X n +σ γ χ n with σ γ = [ 1 2γ (1−e −2γf(0) )]/[(−γg(f(0))+I)−g ′ (f(0))]. Furthermore, the convergence is uniform for(t 0 ,t) in any compact subset ofR 2 Note that the limiting variance σ γ (0) is again of the form (Spatial Variance)/(Difference of Slopes) 2 . The reader may have noticed that the analogous results Lemmas 4 and 22 were just lemmas and we have labeled our result here a Proposition. This is because of the significantly 122 greater amount of work that went into proving Proposition 5; we just could not rest easy unless we knew that the importance of this result was highlighted. Of course as in the previous cases, we arrive at our final result about the spectrum ofT ε . 12 Theorem. Let Θ ε n be the sequence of firing phases for a SIFM with linear drift v(x) = −γx +I, γ > 0, and periodic threshold, reset functionsg,h ∈ C ∞ ([−p/2,p/2)) satisfying Assumption 1. Suppose that the phase return map ˜ f for the noise free system has two fixed points θ s and θ u with the property that ˜ f n (θ) → θ s for all θ ∈ S 1 , θ 6= θ u . Let T ε denote the transition operator for Θ ε n . 
Then for any r > 0 and ε sufficiently small, we can write T ε =T ε lp +T ε up wherekT ε lp k ∞ <r and any eigenvalue ofT ε up with modulus greater thanr is of one of the two formsc n s +O(ε) or|c u | −1 c −n u +O(ε) for somen≥ 0 withc s,u :=f ′ (θ s,u ). Of course, Theorems 11 and 12 have obvious extensions to the case when ˜ f has multiple periodic orbits as well (see Theorem 1 in Section 2.1). In conclusion, our results on gaussian perturbations of circle maps in Chapter 2 extend readily to results on firing phases in SIFM’s; that is to say, the spectrum of the corresponding transition operator is determined by the derivative of the return mapf along its various periodic orbits. In addition, we have derived an intuitive expression for the variance of the limiting autoregressive chains obtained during our analysis of the diagonal blocks ofT ε in terms of the spatial variance of X ε t and the difference in the slopes of the noise free solution x(t|t 0 ) to the IFM and the threshold g at the first firing time τ 1 = f(t 0 ). Although this does not affect the limiting eigenvalues, it does affect the scaling of corresponding eigenfunctions and eigedensities (see Theorem 2 in Section 2.2). In some sense, this gives a nice sense of completion to our work as we have been able to provide a method for calculating spectral properties ofT ε in the case when the asymptotic structure of the deterministic firing phasesθ n is simple (all orbits converge to a periodic orbit). A number of interesting questions remain, however, and we highlight a few in the final section of this report. 123 3.5 Questions for Future Research The following lists a few of proposals for future questions relating to the effects of noise on bifurcations in dynamical systems: (1) Our work thus far has concentrated on calculating the spectrum of T ε when the system x n+1 = f(x n ) has some nice asymptotically stable behavior. But what happens if the orbits of x n+1 = f(x n ) become chaotic or quasi-periodic? For our theory of λ- bifurcations to be more complete, this issue should be addressed. (2) We have assumedf is at leastC 1 in all our work thus far and it has been the derivative of f along periodic orbits which has played the crucial role in determining the eigenvalues of T ε . What if f is not differentiable, perhaps even discontinuous? This case is of interest in the study of IFM’s as the sequence of firing times may not always be given by repeated iterations of a differentiable function (see [13] or [3] for examples). If these discontinuities do not occur along periodic orbits of the deterministic system, our results should apply to these examples as well (provided f is sufficiently smooth near each periodic point). On a related note, dropping either of the Assumptions (A) or (B) in Section 3.1 may lead to further interesting results as well. (3) Our examples thus far have assumed the noise in our model is gaussian. How would our results differ if we allowed the noise term in (2.2) to have a more general form? For instance, we can show that the spectrum ofT ε does not change if we replaceχ n in (2.2) with some other sequence of finite moment iid random variables, however we can say little about the case of correlated noise. Understanding the role played by the noise in (2.2) may gives us hints as to what happens if we replace the ‘εdW t ’ term in (3.1) with some other type of noise such as multiplicative, Poissonian, or real noise. 124 (4) IFM’s are really the simplest model for studying chemical potential in neurons. 
It would be nice to extend our results to stochastic versions of some more realistic, higher dimen- sional models such as the Morris-Lecar or FitzHugh-Nagumo model. A good start- ing point for these investigations would be to look at the spectrum of T ε for multi- dimensional versions of (2.2). (5) The power spectrum ofX ε n is an important analytic tool in studying the dynamics ofX ε n from a time series perspective. Previous approaches to approximating power spectra have been based on linearization in the neighborhood of a stable fixed point (see, for example, [20]). However, our work suggests that unstable fixed points may have a non- trivial effect on the dynamics of perturbed systems. Therefore another possible area for future research is in exploring connections between the spectrum of T ε and the power spectrum ofX ε n . (6) Phenomenological orP -bifurcations are a more classical notion of stochastic bifurcation theory and are vaguely defined as changes in the shape of the invariant measure for a stochastic dynamical system (see [4]). In our examples, it appears that λ-bifurcations generally seem to lead to qualitative changes in the shape of stationary densities and therefore,P -bifurcations, but there are some case (for instance, whenx n+1 =f(x n ) has a tangent bifurcation) where this connection may be less obvious. Further investigation is needed to establish clearer connections between these concepts. 125 References [1] L. Ahlfors Complex Analysis, McGraw-Hill Book Company, New York (1979). [2] A. Buonocore, A.G Nobile, and L.M Ricciardi A New Integral Equation for the Evalua- tion of First-Passage-Time Probability Densities, Adv. Appl. Prob.,19:784-800 (1987). [3] S. Coombes Liapunov exponents and mode-locked solutions fro integrate-and-fire dynam- ical systems, Physics Letters A,255:49-57 (1999). [4] H. Crauel, P. Imkeller, and M. Steinkamp Bifurcations of One-Dimensional Stochas- tic Differential Equations, in Stochastic Dynamics (H. Crauel and M. Gundlach, eds), Springer, Berlin Heidelberg New York (1999) [5] S. Doi, J. Inoue, and S. Kumagai Spectral Analysis of Stochastic Phase Lockings and Stochastic Bifurcations in the Sinusoidally Forced van der Pol Oscillator with Additive Noise, J. Stat. Phys.,90:1107-1127 (1998). [6] J. Durbin The First-Passage Density of a Continuous Gaussian Process to a General Boundary, J. Appl. Prob.,22:99-122 (1985). [7] R. Durrett Probability: Theory and Examples, Duxbury Press, Belmont, California (1996). [8] G. Folland Real Analysis: Modern Techniques and Their Applications, John Wiley and Sons, Inc., New York (1999). [9] L. Glass, and M.C. Mackey A Simple Model for Phase Locking of Biological Oscillators, J. Math. Bio.,7:339-352 (1979). [10] J. Grassman, and J.B.T.M Roerdink Stochastic and Chaotic Relaxation Oscillators, J. Stat. Phys.,54:949-970 (1988). [11] T. Horita, T. Kanamura, and T. Akishita Stochastic Resonance-Like Behavior in the Sine- Circle Map, Prog. Theo. Phys.,102:1057-1064 (1999). [12] T. Kato Perturbation Theory for Linear Operators, Springer-Verlag, Berlin Heidelberg New York (1976). [13] J.P. Keener, F.C. Hoppendsteadt, and J. Rinzel Integrate-and-Fire Models of Nerve Mem- brane Response to Oscillatory Input, SIAM J. Appl. Math.,41:503-517 (1981). [14] Y . Kuznetsov Elements of Applied Bifurcation Theory, Springer-Verlag, Berlin Heidel- berg New York (2004). 126 [15] B. Oksendal Stochastic Differential Equations: An Introduction with Applications, Springer-Verlag, Berlin Heidelberg New York (2003). [16] T. Tateno, S. 
Doi, S. Sato, and L.M Ricciardi Stochastic Phase Lockings in a Relax- ation Oscillator Forced by a Periodic Input with Additive Noise: A First-Passage-Time Approach, J. Stat. Phys.,78:917-935 (1995). [17] T. Tateno Characterization of Stochastic Bifurcation in a Simple Biological Oscillator, J. Stat. Phys.,92:675-705 (1998). [18] T. Tateno and Y . Jimbo Stochastic Mode Locking for a Noisy Integrate-and-Fire Oscilla- tor, Phys. Lett. A,271:227-236 (2000). [19] T. Tateno Noise Induced Effects of Period-Doubling Bifurcation for Integrate-and-Fire Oscillators, Phys. Rev. E.,65: 1-10 (2002). [20] K. Weisenfeld and I. Satija Noise tolerance of frequency-locked dynamics, Phys. Rev. B., 36: 2483-2492 (1987). 127 Appendix A Hermite Polynomials We define the Hermite PolynomialsH n (x) by the generating function e −z 2 +2xz = ∞ X n,m=0 (−1) m 2 n x n z n+2m n!m! = ∞ X n=0 H n (x) n! z n . (A-1) From the second equality, we can see thatH n (x) is a polynomial inx of degreen with leading coefficient 2 n and involving only terms of the formx n−2r for 0≤r≤ n 2 . In particular,H n (x) is an even function ifn is even or an odd function ifn is odd. If we differentiate both sides of (A-1) with respect tox, we obtain 2z ∞ X n=0 H n (x) n! z n = ∞ X n=0 H ′ n (x) n! z n from which we obtain the useful recursion formula 2nH n−1 (x) =H ′ n (x). (A-2) Writinge −z 2 +2xz =e x 2 e −(z−x) 2 , taking a Taylor expansion ofe −(z−x) 2 about 0, differentiating n times and evaluating the result atz = 0, we obtain Rodriguez’ formula H n (x) = (−1) n e x 2 d n dx n e −x 2 . (A-3) We can use (A-2) and (A-3) to obtain useful formulas involving the products of powers ofx and Hermite polynomials as the following lemma illustrates. 23 Lemma. For alln≥ 0 we have 128 (1) xH n (x) = 1 2 H n+1 (x)+nH n−1 (x) (2) x 2 H n (x) = 1 4 H n+2 (x)+ 2n+1 2 H n (x)+n(n−1)H n−2 (x) (3) x 3 H n (x) = 1 8 H n+3 (x)+ 3n+3 4 H n+1 (x)+ 3n 2 2 H n−1 (x) +n(n−1)(n−2)H n−3 (x) (4) x 4 H n (x) = 1 16 H n+4 (x)+ 4n+6 8 H n+2 (x)+ 6n 2 +6n+3 4 H n (x) + 4n 3 +2n 2 H n−2 (x)+n(n−1)(n−2)(n−3)H n−4 (x) where we defineH k (x)≡ 0 fork< 0. Proof: From (A-2) and (A-3), we have 2nH n−1 (x) =H ′ n (x) = (−1) n 2xe x 2 d n dx n e −x 2 +(−1) n e x 2 d n+1 dx n+1 e −x 2 = 2xH n (x)−H n+1 (x) which gives (1). (2), (3) and (4) follow by iterating 1. Letν be the Gaussian measure onR with mean 0 and variance 1 2 so thatν has density 1 √ π e −x 2 w.r.t Lebesgue measure onR. One of the most important properties of the Hermite polynomials is that they form an orthogonal set inL 2 (ν). 13 Theorem. For anyn,m≥ 0, we have Z H n (x)H m (x)dν(x) =δ nm 2 n n! 129 Proof: Suppose without loss of generality, thatn≥m. Using (A-3), integrating by parts, and applying (A-2), we obtain Z H n (x)H m (x)dν(x) = 1 √ π Z H n (x)(−1) m d m dx m e −x 2 dx = 1 √ π Z H ′ n (x)(−1) m−1 d m−1 dx m−1 e −x 2 dx = 2n 1 √ π Z H n−1 (x)(−1) m−1 d m−1 dx m−1 e −x 2 dx. Repeating this procedurem times yields Z H n (x)H m (x)dν(x) = 2 m n! (n−m)! 1 √ π Z H n−m (x)e −x 2 dx. Ifn =m, 1 √ π Z H n−m (x)e −x 2 dx = 1 √ π Z e −x 2 dx = 1 and ifn>m, 1 √ π Z H n−m (x)e −x 2 dx = (−1) n−m √ π Z d n−m dx n−m e −x 2 dx = 0 and hence, Z H n (x)H m (x)dν(x) =δ nm 2 n n!. We will later need to evaluate integrals of the form R x p H n (x)H m (x)ν(dx) for various values of p. To do so, we first note that by (A-1), we have x p ∞ X n,m=0 H n (x)H m (x) n!m! z n 1 z m 2 =x p e −(z 2 1 +z 2 2 )+2(z 1 +z 2 )x . 
130 Therefore, recalling the definition ofν and making the change of variablesy = x−z 1 −z 2 , we have ∞ X n,m=0 z n 1 z m 2 n!m! Z x p H n (x)H m (x)dν(x) = Z x p e −(z 2 1 +z 2 2 )+2(z 1 +z 2 )x dν(x) = 1 √ π e 2z 1 z 2 Z x p e −(x−z 1 −z 2 ) 2 dx = 1 √ π e 2z 1 z 2 Z (y +z 1 +z 2 ) p e −y 2 dy = e 2z 1 z 2 Z (y +z 1 +z 2 ) p dν(y). (A-4) Sinceν is Gaussian with mean 0 and variance 1 2 , it is well known that Z x p dν(x) = 0 p = 1,3 1/2 p = 2 3/4 p = 4. (A-5) Without loss of generality, supposen≤m. Then substituting p=1 into (A-4), using the power series expansion ofe 2z 1 z 2 , and equating coefficients of like terms, we have Z xH n (x)H m (x)dν(x) = 2 n (n+1)! m =n+1 0 n≤m. Again, using (A-5), we have Z (y +z 1 +z 2 ) 2 dν(x) = Z y 2 dν(y)+2(z 1 +z 2 ) Z ydν(y)+(z 1 +z 2 ) 2 Z dν(y) = 1 2 +(z 1 +z 2 ) 2 131 so that substitutingp = 2 into (A-4), expandinge 2z 1 z 2 , and equating coefficients of like terms, we obtain Z x 2 H n (x)H m (x)dν(x) = 2 n−1 (2n+1)n! m =n 2 n (n+2)! m =n+2 0 otherwise. Continuing in this fashion, we arrive at the following Lemma, which also summarizes our work above. 24 Lemma. Assumen≤m. Then (1) Z xH n (x)H m (x)dν(x) = 2 n (n+1)! m =n+1 0 otherwise (2) Z x 2 H n (x)H m (x)dν(x) = 2 n−1 (2n+1)n! m =n 2 n (n+2)! m =n+2 0 otherwise (3) Z x 3 H n (x)H m (x)dν(x) = 3(2 n−1 )(n+1)(n+1)! m =n+1 2 n (n+3)! m =n+3 0 otherwise (4) Z x 4 H n (x)H m (x)dν(x) = 3(2 n−2 )(2n 2 +2n+1)(n+1)! m =n 2 n (n+2)!(2n+3) m =n+2 2 n (n+4)! m =n+4 0 otherwise. 132 B Perturbation Theory for Linear Operators LetX be a Banach Space andT :X →X a compact operator. In this section we study second order perturbations ofT of the form T ε =T +εT 1 +ε 2 T 2 +ε 3 T ε 3 with T 1 ,T 1 ,T ε 3 ∈ L(X), the space of all bounded linear operators on X and kT ε 3 k ≤ K for some constant K > 0, independent of ε. In particular, we will study the effects of this perturbation on the spectrum ofT . We will also assume that compactness is preserved under small perturbations (i.e. T ε is compact∀ε sufficiently small) and writeT 0 =T whenever it is convenient. All of the results in this section are adapted from similar results found in [12]. We can define the resolvent R ǫ (ζ) := (T ǫ −ζ) −1 of T ǫ for all ζ ∈ P(T ǫ ) := {z ∈ C : (T ǫ −z) −1 ∈L(X)} and writeR =R 0 .(Note: T −ζ is shorthand notation for the operator T −ζI where I is the identity on X.) Since T is compact, its spectrum σ(T) := P(T) c consists of a countable number of points{λ n } with no accumulation points except possibly 0. Furthermore, we know that∀n there exists a nonzeroφ n ∈X such thatTφ n =λ n φ n . For anyζ 1 ,ζ 2 ∈P(T), we have R(ζ 1 )−R(ζ 2 ) = (ζ 1 −ζ 2 )R(ζ 1 )R(ζ 2 ). (B-1) Equation (B-1) is often referred to as the resolvent equation and is a useful identity to keep in mind. In fact, it is the resolvent equation which gives us: 25 Lemma. R is analytic on the open setP(T). Here, we say a Banach Space Value function is analytic at a pointζ 0 ∈ C if lim z→0 z −1 (R(ζ 0 + z)−R(ζ 0 )) exists in norm and the existing limit is inL(X). 133 Proof of Lemma: Letζ 0 ∈P(T). Then ifζ ∈P(T), (B-1) implies thatR(ζ 0 ) = [1 +(ζ 0 − ζ)R(ζ 0 )]R(ζ) so that if|ζ 0 −ζ|<kR(ζ 0 )k, R(ζ) :=R(ζ 0 )[1−(ζ−ζ 0 )R(ζ 0 )] −1 = ∞ X n=0 (ζ−ζ 0 ) n R(ζ 0 ) n+1 (B-2) is well define at ζ. This says that in fact ζ ∈ P(T) so that P(T) is open. Furthermore, ∀f ∈X ∗ , (B-2) along with the continuity and linearity off gives us f(R(ζ)) = ∞ X n=0 a n (f)(ζ−ζ 0 ) n with a n (f) := f(R(ζ 0 ) n+1 ) so that f(R(ζ) is analytic. 
This implies that R(ζ) is weakly analytic onP(T) and hence, by Theorem III.1.37 in [12], it is strongly analytic. . 13 Remark. The technique of dealing with Banach Space valued, analytic functions via point- wise evaluations can be used to extend many of the nice integration results from elementary Complex Analysis. For instance, ifS(ζ) :X →X is a family of operators, analytic inζ on and inside a simple, closed curve Γ, then∀f ∈X ∗ , we have R Γ f(R(ζ))dζ = 0 and since elements ofX ∗ separate points inX (see Section 5.2 in [8]), we must have R Γ R(ζ)dζ = 0. We shall use the extensions of these integration results without further comment. The following splitting Theorem from [12] is fundamental in what follows. 14 Theorem. Suppose that there exists a simple, closed curve Γ ⊂ P(T) that ’splits’σ(T) into two disjoint partsσ ′ andσ ′′ such thatσ ′ is enclosed by Γ andσ ′′ is exterior to Γ. Then T can be decomposed according to the decompositionX = M ⊕N with σ(T M ) = σ ′ and σ(T N ) =σ ′′ . Proof: Define the operator P = −1 2πi R Γ R(ζ)dζ. Note that P is well defined as a Bochner integral sinceR(ζ) analytic on Γ implies that 1 2πi R Γ kR(ζ)kdζ < ∞ Then, sinceP(T) is an open set (see Theorem III.6.7 in [12]), we can choose another simple, compact curve Γ ′ ⊂ 134 P(T) which admits the same splitting of σ(T) as Γ. Then as in Remark 13, we have P = −1 2πi R Γ R(ζ)dζ = −1 2πi R Γ ′ R(ζ ′ )dζ ′ and hence, P 2 = ( −1 2πi ) 2 Z Γ ′ Z Γ R(ζ ′ )R(ζ)dζdζ ′ = ( −1 2πi ) 2 Z Γ ′ Z Γ (ζ ′ −ζ) −1 (R(ζ ′ )−R(ζ))dζdζ ′ (B-3) where the second line follows from (B-1). Since∀ζ ′ ∈ Γ ′ ,g(ζ) := (ζ ′ −ζ) −1 is analytic on and insideΓ, we have R Γ (z−ζ) −1 R(z)dζ = 0 and hence, (B-3) implies that P 2 = ( −1 2πi ) 2 Z Γ −( Z Γ ′ (ζ ′ −ζ) −1 dζ ′ )R(ζ)dζ = − 1 2πi Z Γ R(ζ)dζ = P. This shows thatP is a projection. Moreover,P commutes withT (since the resolvent does) and therefore lettingM :=PX andN = (1−P)X, we have the decompositionT =T M +T N withT M =TP ,T N =T(1−P). LetR M (ζ) =R(ζ)P andR N (ζ) = (1−P)R(ζ) be the parts ofR(ζ) inM andN respectively. Then∀ζ ∈P(ζ) andu∈M, we have R M (ζ)(T M −ζ)u =R(ζ)P(PT−ζ)Pu =R(ζ)(T−ζ)Pu =Pu =u and similarly,T M R M (ζ)u =u so thatR M is actually the resolvent ofT M withP(T)⊂P(T M ). Therefore, σ(T M ) ⊂ σ(T). Similarly, σ(T N ) ⊂ σ(T). We now show that in fact, we can replaceσ(T) withσ ′ andσ ′′ respectively. 135 Supposez / ∈ Γ. Then the resolvent equation implies that R(z)P = − 1 2πi Z Γ R(z)R(ζ)dζ = − 1 2πi Z Γ (z−ζ) −1 (R(z)−R(ζ))dζ = − R(z) 2πi Z Γ (z−ζ) −1 dζ + 1 2πi Z Γ (z−ζ) −1 R(ζ)dζ. (B-4) Ifz is exterior to Γ, the first term on the right of (B-4) is 0 and the second term is analytic as a function ofz (see [1], pg. 121 for complex valued case. General result follows from pointwise evaluations). Therefore,R M =RP can be continued analytically to the exterior ofΓ so that the exterior of Γ must be inP(T M ). In particular,σ ′′ ⊂P(T M ) so that by the previous paragraph, σ(T M )⊂σ ′ . Ifz is interior to Γ, then (B-4) implies that R(z)P =R(z)+ 1 2πi Z Γ (z−ζ) −1 R(ζ)dζ so thatR N (z) = R(z)(1−P) = 1 2πi R Γ (z−ζ) −1 R(ζ)dζ has an analytic continuation to the interior of Γ. Therefore we also haveσ(T N )⊂σ ′′ . To prove the reverse inequalities, suppose that there existζ ∈σ(T) withζ ∈P(T M )∩P(T N ). ThenR M (ζ)+R N (ζ) is a well defined, bounded operator onX with (R M (ζ)+R N (ζ))(T − ζ) = (T − ζ)(R M (ζ) + R N (ζ)) = 1 so that ζ ∈ P(T), a contradiction. It follows that σ(T)⊂σ(T M )∪σ(T N ) which implies thatσ ′ ⊂σ(T M ) andσ ′′ ⊂σ(T N ). 
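The construction in Theorem 14 is easy to examine concretely in finite dimensions, where the contour integral defining P can be evaluated by quadrature. The sketch below is purely illustrative: the 3x3 matrix and the contour are arbitrary choices, with Γ taken to enclose σ' = {0.9, 0.2} and to leave −0.4 outside.

import numpy as np

T = np.array([[0.9, 0.3, 0.0],
              [0.0, 0.2, 0.5],
              [0.0, 0.0, -0.4]])              # eigenvalues 0.9, 0.2, -0.4 (arbitrary)
eye = np.eye(3)

def riesz_projection(T, center, radius, n_nodes=400):
    """Approximate P = -(1/(2*pi*i)) * contour integral of (T - z)^{-1} dz over the
    circle |z - center| = radius, using the trapezoidal rule in the angle."""
    theta = 2 * np.pi * np.arange(n_nodes) / n_nodes
    z = center + radius * np.exp(1j * theta)
    dz = 1j * radius * np.exp(1j * theta) * (2 * np.pi / n_nodes)
    P = np.zeros_like(T, dtype=complex)
    for zk, dzk in zip(z, dz):
        P += np.linalg.inv(T - zk * eye) * dzk
    return -P / (2j * np.pi)

P = riesz_projection(T, center=0.55, radius=0.5)    # Gamma encloses sigma' = {0.9, 0.2}
print("||P^2 - P|| =", np.linalg.norm(P @ P - P))           # P is a projection
print("||TP - PT|| =", np.linalg.norm(T @ P - P @ T))       # P commutes with T
print("eig(T P)    =", np.round(np.linalg.eigvals(T @ P), 6))  # sigma' plus 0 from N

The trapezoidal rule is the natural quadrature here, since the integrand is smooth and periodic on Γ; a few hundred nodes already reproduce P essentially to machine precision, and the printed quantities confirm that P is a projection commuting with T whose range carries exactly the enclosed part of the spectrum.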
We now pass to the study of a nonzero eigenvalueλ ofT . SinceT is compact, we can choose ar > 0 so thatB r (λ)∩σ(T) ={λ}. Then taking Γ := ∂B r (λ) we can define the projection P = P λ = −1 2πi R Γ R(ζ)dζ to obtain the decompositionT = T M +T N withM = M λ = PX, N =N λ = (1−P)X,σ(T M ) ={λ} andσ(T N ) =σ(T)−{λ}. We callP the eigenprojection associated withλ andM the corresponding (algebraic) eigenspace. Ifm = dimM = 1, we callλ a simple eigenvalue ofT . 136 14 Remark. The geometric eigenspace associated withλ is usually defined asK = K λ := {φ∈X :Tφ =λφ}. Ifm = 1, this is, in fact the same as the (algebraic) eigenspace defined above. To see this, let φ ∈ M. Then since T M φ ∈ M, we must have T M φ = zφ for some z ∈ C. This implies thatφ is an eigenfunction forT M with corresponding eigenvaluez. But by Theorem 14, we know thatσ(T M ) ={λ} and therefore, we must havez =λ. This implies thatM ⊂K. Ifφ∈K, writeφ =φ M +φ N . ThenTφ =λφ M +λφ N ∈M +N. Therefore, T M φ M =λφ M andT N φ N =λφ N . Sinceλ / ∈T N , this can only be true ifφ N = 0, and hence, φ∈M. Therefore, ifλ is simple, we obtain the useful identity (T−λ)P = 0. (B-5) The following theorem tells us that any simple eigenvalue of T does not change drastically under perturbations of the operator. We defineP ε = −1 2πi R Γ R ε (ζ)dζ. 15 Theorem. Supposeλ is a simple eigenvalue ofT . Then∀ε sufficiently small, Γ ⊂P(T ε ) and there exists a family of simple eigenvaluesλ ε ofT ε and constantsλ 1 ,λ 2 ∈C with λ ε =λ+ελ 1 +ε 2 λ 2 +O(ε 3 ). (B-6) Furthermore, for all such ε, P ε is the eigenprojection associated with λ ε and there exist P 1 ,P 2 ,P ε 3 ∈L(X) andK P > 0 withkP ε 3 k≤K P and P ε =P +εP 1 +ε 2 P 2 +ε 3 P ε 3 . (B-7) 137 Proof: LetA ǫ =T ǫ −T =ǫT 1 +ǫ 2 T 2 +ε 3 T ε 3 andδ = max ζ∈Γ {kR(ζ)k}. Chooseǫ small enough so thatkA ǫ k< 1 2δ . Then the series ∞ X n=0 (A ǫ R(ζ)) n is uniformly convergent on Γ and hence, R ǫ (ζ) = (T −ζ +A ǫ ) −1 = ((1+A ǫ R(ζ))(T−ζ)) −1 = (T −ζ) −1 (1+A ǫ R(ζ)) −1 = R(ζ) ∞ X n=0 (−A ǫ R(ζ)) n (B-8) exists∀ζ ∈ Γ so thatΓ⊂P(T ǫ ). In addition, (B-8) implies that R ǫ (ζ) = R(ζ) ∞ X n=0 (−1) n [A ǫ R(ζ)] n = R(ζ) ∞ X n=0 (−1) n [(ǫT 1 +ǫ 2 T 2 +ǫ 3 T ε 3 )R(ζ)] n = R(ζ)−ǫR 1 (ζ)+ǫ 2 R 2 (ζ)+ε 3 R ε 3 (ζ) (B-9) where R 1 (ζ) = R(ζ)T 1 R(ζ) andR 2 (ζ) = −R(ζ)T 2 R(ζ) +R(ζ)T 1 R(ζ)T 1 R(ζ) are both in L(X) (sinceL(X) is an algebra). Further, the series in (B-8) converges uniformly on Γ with kR ε (ζ)k≤ 2δ for allζ ∈ Γ so that the series definingR ε 3 also converges uniformly on Γ with kR ε 3 (ζ)k≤ 3δ+K 1 +K 2 for allζ ∈ Γ whereK 1 =δ 2 kT 1 k andK 2 =δ 2 kT 2 k+δ 3 kT 1 k 2 . We can conclude that P ǫ = −1 2πi Z Γ R ǫ (ζ)dζ =P +ǫP 1 +ǫ 2 P 2 +ε 3 P ε 3 whereP 1 = −1 2πi R Γ R 1 (ζ)dζ,P 2 = −1 2πi R Γ R 2 (ζ)dζ, andP ε 3 = −1 2πi R Γ R ε 3 (ζ)dζ are inL(X) with kP ε 3 k≤K P :=r(3δ +K 1 +K 2 ). This provesP ε satisfies (B-7). It also shows thatP ε →P asε→ 0 and hence, the Projection Lemma ([12],pg. 34) implies thatdim(P ε X) = 1 for allε sufficiently small. 138 Now since 0 / ∈ B r (λ), for all ε sufficiently small, T ǫ has a finite number of eigenvalues λ ε 1 ,...,λ ε n in B r (λ). For each j = 1,...,n, choose a curve Γ ε j ⊂ B r (λ) so that Γ ε j encloses λ ε j but no other eigenvalues ofT ǫ and letP ǫ j =− 1 2πi R Γ j R ǫ (ζ)dζ be the eigenprojection associ- ated withλ ε j . Since the singularities ofR ǫ inB r (λ) are exactlyλ ε 1 ,...,λ ε n , the Residue Theorem implies that P ǫ = n X j=1 P ε j . 
But the same calculation that shows $P_j^\varepsilon$ is a projection can be extended to show that $P_j^\varepsilon P_k^\varepsilon = \delta_{jk}P_j^\varepsilon$, and therefore
\[
1 = \dim(P^\varepsilon X) = \sum_{j=1}^n \dim(P_j^\varepsilon X).
\]
This implies that $n = 1$ and $P_1^\varepsilon = P^\varepsilon$, which proves that $T^\varepsilon$ has a unique, simple eigenvalue in $B_r(\lambda)$ for all $\varepsilon$ sufficiently small. Note that Remark 14 then implies that
\[
(T^\varepsilon - \lambda^\varepsilon)P^\varepsilon = 0. \tag{B-11}
\]
To obtain (B-6), fix a nonzero $e \in M$ and define the operator $Q : X \to \mathbb{C}$ by the formula $Px = (Qx)e$ for all $x \in X$. Then for all $x \in X$, (B-5) implies
\[
Q[(T-\lambda)x]\,e = P(T-\lambda)x = (T-\lambda)Px = 0, \tag{B-12}
\]
so that $Q(T-\lambda) = 0$. We first derive formulas for the coefficients $\lambda_1, \lambda_2$ and then show that $|\lambda^\varepsilon - \lambda - \varepsilon\lambda_1 - \varepsilon^2\lambda_2|$ is $O(\varepsilon^3)$.

If (B-6) is to hold, then $\lambda_1$ is the derivative of $\lambda^\varepsilon$ at $\varepsilon = 0$, so we begin by studying the convergence of the difference quotients $\frac{\lambda^\varepsilon - \lambda}{\varepsilon}$. From (B-11) and (B-5), we have
\[
\frac{\lambda^\varepsilon - \lambda}{\varepsilon}\,P^\varepsilon
= \frac{1}{\varepsilon}\big[(\lambda^\varepsilon - T^\varepsilon)P^\varepsilon + (T^\varepsilon - T)P^\varepsilon + (T-\lambda)P^\varepsilon\big]
= \frac{T^\varepsilon - T}{\varepsilon}\,P^\varepsilon + (T-\lambda)\,\frac{P^\varepsilon - P}{\varepsilon}. \tag{B-13}
\]
Evaluating both sides of (B-13) at $e$, applying $Q$, and using (B-12) yields
\[
\frac{\lambda^\varepsilon - \lambda}{\varepsilon} = \frac{1}{Q(P^\varepsilon e)}\,Q\Big(\frac{T^\varepsilon - T}{\varepsilon}P^\varepsilon e\Big) \;\to\; Q(T_1 e) =: \lambda_1
\]
as $\varepsilon \to 0$. Therefore, letting $\varepsilon \to 0$ in (B-13) implies that
\[
(T-\lambda)P_1 + (T_1 - \lambda_1)P = 0. \tag{B-14}
\]
To obtain an expression for $\lambda_2$, we study the convergence of the second order difference quotients for $\lambda^\varepsilon$. Using the identities (B-11), (B-5), and (B-14), we have
\[
\frac{\lambda^\varepsilon - \lambda - \varepsilon\lambda_1}{\varepsilon^2}\,P^\varepsilon
= \frac{1}{\varepsilon^2}\big[(\lambda^\varepsilon - T^\varepsilon)P^\varepsilon + (T^\varepsilon - T - \varepsilon T_1)P^\varepsilon + (T-\lambda)P^\varepsilon + \varepsilon(T_1-\lambda_1)P^\varepsilon\big]
= \frac{T^\varepsilon - T - \varepsilon T_1}{\varepsilon^2}\,P^\varepsilon + (T-\lambda)\,\frac{P^\varepsilon - P - \varepsilon P_1}{\varepsilon^2} + (T_1-\lambda_1)\,\frac{P^\varepsilon - P}{\varepsilon}. \tag{B-15}
\]
As in the previous paragraph, we evaluate both sides of (B-15) at $e$ and apply $Q$ to obtain
\[
\frac{\lambda^\varepsilon - \lambda - \varepsilon\lambda_1}{\varepsilon^2} \;\to\; Q(T_2 e) + Q\big((T_1-\lambda_1)P_1 e\big) =: \lambda_2
\]
(the term $Q((T-\lambda)P_2 e)$ arising in the limit vanishes by (B-12)). We can then let $\varepsilon \to 0$ in (B-15) to yield
\[
(T-\lambda)P_2 + (T_1-\lambda_1)P_1 + (T_2-\lambda_2)P = 0. \tag{B-16}
\]
If we now look at the third order difference quotient and use (B-11), (B-5), (B-14), and (B-16), we have
\[
\frac{\lambda^\varepsilon - \lambda - \varepsilon\lambda_1 - \varepsilon^2\lambda_2}{\varepsilon^3}\,P^\varepsilon
= \frac{1}{\varepsilon^3}\big[(\lambda^\varepsilon - T^\varepsilon)P^\varepsilon + (T^\varepsilon - T - \varepsilon T_1 - \varepsilon^2 T_2)P^\varepsilon + (T-\lambda)P^\varepsilon + \varepsilon(T_1-\lambda_1)P^\varepsilon + \varepsilon^2(T_2-\lambda_2)P^\varepsilon\big]
\]
\[
= \frac{T^\varepsilon - T - \varepsilon T_1 - \varepsilon^2 T_2}{\varepsilon^3}\,P^\varepsilon + (T-\lambda)\,\frac{P^\varepsilon - P - \varepsilon P_1 - \varepsilon^2 P_2}{\varepsilon^3} + (T_1-\lambda_1)\,\frac{P^\varepsilon - P - \varepsilon P_1}{\varepsilon^2} + (T_2-\lambda_2)\,\frac{P^\varepsilon - P}{\varepsilon}
\]
\[
= T_3^\varepsilon P^\varepsilon + (T-\lambda)P_3^\varepsilon + (T_1-\lambda_1)(P_2 + \varepsilon P_3^\varepsilon) + (T_2-\lambda_2)(P_1 + \varepsilon P_2 + \varepsilon^2 P_3^\varepsilon).
\]
If we let $e^\varepsilon \in P^\varepsilon X$ with $\|e^\varepsilon\| = 1$, then applying both sides of the above expression to $e^\varepsilon$, we see that $\big|\frac{\lambda^\varepsilon - \lambda - \varepsilon\lambda_1 - \varepsilon^2\lambda_2}{\varepsilon^3}\big|$ is bounded by the operator norm of the right hand side, which, due to the facts that $\|T_3^\varepsilon\| \le K$ and $\|P_3^\varepsilon\| \le K_P$, is bounded by a constant for $\varepsilon$ sufficiently small. Therefore $\lambda^\varepsilon - \lambda - \varepsilon\lambda_1 - \varepsilon^2\lambda_2 = O(\varepsilon^3)$, which completes the proof.

15 Remark. Let $S^*$ denote the adjoint of an operator $S$ on $X$. Then the linearity of the adjoint implies that
\[
(T^\varepsilon)^* = T^* + \varepsilon T_1^* + \varepsilon^2 T_2^* + \varepsilon^3 (T_3^\varepsilon)^* \tag{B-17}
\]
in $L(X^*)$. Since $\|S\|_{L(X)} = \|S^*\|_{L(X^*)}$ for any $S \in L(X)$, $\|(T_3^\varepsilon)^*\|_{L(X^*)} \le K$ is bounded independently of $\varepsilon$, and hence (B-17) gives a perturbation expansion for $(T^\varepsilon)^*$ in $L(X^*)$. Therefore, we can apply Theorem 15 to $(T^\varepsilon)^*$ to obtain second order asymptotic expansions for the eigenprojections of $(T^\varepsilon)^*$ as well (note that the compactness of $T^\varepsilon$ implies the compactness of $(T^\varepsilon)^*$). We call these eigenprojections the left eigenprojections of $T^\varepsilon$, and any function $\varphi^\varepsilon$ satisfying $(T^\varepsilon)^*\varphi^\varepsilon = \lambda^\varepsilon\varphi^\varepsilon$ for some $\lambda^\varepsilon \in \mathbb{C}$ is called a left eigenfunction of $T^\varepsilon$.
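In finite dimensions, $Q(T_1e)$ is the classical first order perturbation coefficient $\langle\psi, T_1e\rangle/\langle\psi, e\rangle$, where $\psi$ is a left eigenvector for $\lambda$. The following Python sketch checks the first order statement in (B-6) on randomly generated matrices; the matrices are purely illustrative stand-ins and are not the transition operators studied in this thesis.

```python
# Numerical sanity check of the first-order term in (B-6) for a hypothetical
# finite-dimensional example: lambda_1 = Q(T_1 e) reduces to the classical
# formula <psi, T_1 e> / <psi, e> with psi a left eigenvector for lambda.
import numpy as np

rng = np.random.default_rng(0)
T = np.diag([1.0, 0.6, 0.3]) + 0.05 * rng.standard_normal((3, 3))
T1 = rng.standard_normal((3, 3))

# unperturbed simple eigenvalue of largest modulus, with right/left eigenvectors
w, V = np.linalg.eig(T)
i = np.argmax(np.abs(w))
lam, e = w[i], V[:, i]
wl, W = np.linalg.eig(T.T)
j = np.argmin(np.abs(wl - lam))
psi = W[:, j]                              # left eigenvector: psi @ T = lam * psi

lam1 = (psi @ T1 @ e) / (psi @ e)          # predicted first-order coefficient

for eps in [1e-2, 1e-3, 1e-4]:
    w_eps = np.linalg.eigvals(T + eps * T1)
    lam_eps = w_eps[np.argmin(np.abs(w_eps - lam))]
    print(eps, abs(lam_eps - lam - eps * lam1))   # error shrinks like eps**2
```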
Suppose we order the nonzero eigenvalues of $T$ in a sequence with $\lambda_1 > \lambda_2 > \cdots > \lambda_n > \cdots > 0$ and choose a curve $\Gamma \subset P(T)$ enclosing $\{\lambda_1,\ldots,\lambda_n\}$, but no other eigenvalues of $T$. Then, taking $P$ as in the proof of Theorem 14, we can write $T = T_M + T_N$ with $T_M = PT$ and $T_N = (1-P)T$ satisfying $\sigma(T_M) = \{\lambda_1,\ldots,\lambda_n\}$ and $\sigma(T_N) = \sigma(T) - \{\lambda_1,\ldots,\lambda_n\}$. In particular, by choosing $n$ sufficiently large, we can make the spectral radius of $T_N$ as small as we like. The following result tells us that we have essentially the same picture for $T^\varepsilon$ if $\varepsilon$ is sufficiently small. We use the notation $\rho(S)$ to denote the spectral radius of an operator $S$.

16 Theorem. For any $r > 0$, there exist $n_r \ge 1$ and $\varepsilon_r > 0$ so that if $\varepsilon < \varepsilon_r$, we can write $T^\varepsilon = T_M^\varepsilon + T_N^\varepsilon$, where $\rho(T_N^\varepsilon) < r$ and any eigenvalue $\lambda^\varepsilon$ of $T_M^\varepsilon$ has an expansion of the form (B-6) about some eigenvalue of $T$. Furthermore, the corresponding eigenfunctions (eigendensities) of $T^\varepsilon$ have expansions about the corresponding eigenfunctions (eigendensities) of $T$.

Proof: Choose a curve $\Gamma \subset P(T)$, contained in the open disc of radius $r$ about the origin, that encloses all eigenvalues of $T$ with modulus less than $r/2$. Then $T$ has a finite number of eigenvalues $\{\lambda_1,\ldots,\lambda_n\}$ outside of $\Gamma$ (each of which, as in Theorem 15, we assume to be simple). By Theorem 14, we can decompose $T = T_M + T_N$ with $\sigma(T_N) = \{\lambda_1,\ldots,\lambda_n\}$. Furthermore, we know that $P = \frac{-1}{2\pi i}\int_\Gamma R(\zeta)\,d\zeta$ is the projection onto $M$ along $N$ and that $\dim(N) = n$. If we define $P^\varepsilon = \frac{-1}{2\pi i}\int_\Gamma R^\varepsilon(\zeta)\,d\zeta$, then a calculation similar to the one used in the proof of Theorem 15 shows that $P^\varepsilon \to P$ as $\varepsilon \to 0$, so that, again applying the Projection Lemma, we have $\dim((1-P^\varepsilon)X) = \dim(N) = n$ for all $\varepsilon$ sufficiently small. In particular, $T^\varepsilon$ cannot have more than $n$ eigenvalues outside $\Gamma$. But for each $j = 1,\ldots,n$, Theorem 15 tells us that there must exist some simple eigenvalue $\lambda_j^\varepsilon$ of $T^\varepsilon$ with a second order expansion of the form (B-6) about $\lambda_j$, and clearly each such eigenvalue must be exterior to $\Gamma$ for $\varepsilon$ sufficiently small. Therefore, taking $T_M^\varepsilon = (1-P^\varepsilon)T^\varepsilon$ and $T_N^\varepsilon = P^\varepsilon T^\varepsilon$ yields the result.

C Special Block Decompositions of Operators

In this section, we develop the idea of a block decomposition for an operator $T$ acting on a Banach space $X$. In particular, we introduce the notions of an upper triangular operator and a 0-diagonal operator. We shall make use of some of the ideas and notation used in Appendix B.

Suppose we have the decomposition $X = X_1 \oplus X_2 \oplus X_3$. Let $P_i$ be the projection onto $X_i$ and write $\varphi \in X$ as $\varphi = \varphi_1 + \varphi_2 + \varphi_3$ with $\varphi_i = P_i\varphi$ for $i = 1,2,3$. Then we can decompose any $T \in L(X)$ into operators $T_{ij} : X_j \to X_i$ so that for all $\varphi \in X$ we have $T\varphi = \psi_1 + \psi_2 + \psi_3$ with
\[
\psi_i = \sum_{j=1}^3 T_{ij}\varphi_j .
\]
This yields the block decomposition
\[
T = \begin{bmatrix} T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23} \\ T_{31} & T_{32} & T_{33} \end{bmatrix} \tag{C-1}
\]
for the operator $T$. In particular, for $\varphi \in X_j$ we have $T_{ij}\varphi = P_i T_{ij}\varphi = P_i T P_j\varphi$. We say that $T \in L(X)$ is upper triangular if it has a decomposition of the form (C-1) with $T_{ij} = 0$ for $i > j$. If $\varphi = \varphi_1 + \varphi_2 + \varphi_3 \in X_1 \oplus X_2 \oplus X_3$, we will use the vector notation $\varphi = [\varphi_1\ \varphi_2\ \varphi_3]^T$. We also note that, since $X^*$ has the corresponding decomposition $X^* = X_1^* \oplus X_2^* \oplus X_3^*$, $T^*$ has the corresponding block decomposition
\[
T^* = \begin{bmatrix} T_{11}^* & T_{12}^* & T_{13}^* \\ T_{21}^* & T_{22}^* & T_{23}^* \\ T_{31}^* & T_{32}^* & T_{33}^* \end{bmatrix}
\]
in the space $L(X^*)$, provided we think of the operation $T^*\varphi^*$ as left multiplication:
\[
T^*\varphi^* = \begin{bmatrix} \varphi_1^* & \varphi_2^* & \varphi_3^* \end{bmatrix}
\begin{bmatrix} T_{11}^* & T_{12}^* & T_{13}^* \\ T_{21}^* & T_{22}^* & T_{23}^* \\ T_{31}^* & T_{32}^* & T_{33}^* \end{bmatrix}
= \begin{bmatrix} \sum_{j=1}^3 \varphi_j^* T_{j1}^* & \sum_{j=1}^3 \varphi_j^* T_{j2}^* & \sum_{j=1}^3 \varphi_j^* T_{j3}^* \end{bmatrix}
\]
for any $\varphi^* \in X^*$.
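The block decomposition (C-1) is conveniently summarized by the identity $T_{ij} = P_iTP_j$. A minimal finite-dimensional Python sketch (with coordinate subspaces of $\mathbb{R}^6$ standing in for $X_1, X_2, X_3$ and an arbitrary matrix playing the role of $T$; this is illustrative only) shows the identity and the block formula for $T\varphi$:

```python
# Sketch of the block decomposition (C-1) in a hypothetical finite-dimensional
# model: X_1, X_2, X_3 are coordinate subspaces of R^6, P_i are the coordinate
# projections, and T_ij = P_i T P_j recovers the blocks of T.
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((6, 6))
idx = [np.arange(0, 2), np.arange(2, 4), np.arange(4, 6)]   # X_1, X_2, X_3

def proj(k):
    """Projection P_k onto the k-th coordinate subspace."""
    P = np.zeros((6, 6))
    P[idx[k], idx[k]] = 1.0
    return P

# T_ij as an operator X_j -> X_i, realized here as the (i, j) sub-block of T
for i in range(3):
    for j in range(3):
        Tij = proj(i) @ T @ proj(j)
        assert np.allclose(Tij[np.ix_(idx[i], idx[j])], T[np.ix_(idx[i], idx[j])])

# Applying T block-wise: (T phi)_i = sum_j T_ij phi_j, as in (C-1)
phi = rng.standard_normal(6)
psi = sum(proj(i) @ T @ proj(j) @ phi for i in range(3) for j in range(3))
assert np.allclose(psi, T @ phi)
print("block decomposition reproduces T phi")
```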
26 Lemma. Suppose that $T \in L(X)$ is upper triangular. Then $P(T_{11}) \cap P(T_{22}) \cap P(T_{33}) \subset P(T)$. In particular, if $\zeta \in P(T_{11}) \cap P(T_{22}) \cap P(T_{33})$, then $R(\zeta)$ exists and we have the decomposition
\[
R(\zeta) = \begin{bmatrix} R_{11}(\zeta) & R_{12}(\zeta) & R_{13}(\zeta) \\ 0 & R_{22}(\zeta) & R_{23}(\zeta) \\ 0 & 0 & R_{33}(\zeta) \end{bmatrix} \tag{C-2}
\]
where $R_{ii}(\zeta) = (T_{ii}-\zeta)^{-1}$ for $i = 1,2,3$,
\[
R_{12}(\zeta) = -R_{11}(\zeta)T_{12}R_{22}(\zeta),
\]
\[
R_{13}(\zeta) = -R_{11}(\zeta)T_{13}R_{33}(\zeta) + R_{11}(\zeta)T_{12}R_{22}(\zeta)T_{23}R_{33}(\zeta),
\]
and $R_{23}(\zeta) = -R_{22}(\zeta)T_{23}R_{33}(\zeta)$.

Proof: This follows by directly verifying that if $\zeta \in P(T_{11}) \cap P(T_{22}) \cap P(T_{33})$, then $\tilde R(\zeta)(T-\zeta) = (T-\zeta)\tilde R(\zeta) = 1$, where $\tilde R(\zeta)$ is the operator defined by the right hand side of (C-2).

3 Corollary. For any upper triangular operator $T \in L(X)$, we have
\[
\sigma(T) \subset \big(\sigma(T_{11}) \cup \sigma(T_{22}) \cup \sigma(T_{33})\big).
\]

16 Remark. Suppose that $T$ is compact and let $\varphi_1$ be a right eigenfunction of $T_{11}$ corresponding to the eigenvalue $\lambda$. Then it is easy to see that $[\varphi_1\ 0\ 0]^T$ is a right eigenfunction of $T$ with corresponding eigenvalue $\lambda$. In other words, any eigenvector of $T_{11}$ can be transformed into an eigenvector for $T$ by adding zero components. Similarly, if $\varphi_3$ is a left eigenfunction of $T_{33}$, then $[0\ 0\ \varphi_3]$ is a left eigenfunction of $T$.

The second type of operator which we make use of in this document is an operator with a decomposition of the form
\[
T = \begin{bmatrix} 0 & T_1 \\ T_2 & 0 \end{bmatrix}
\]
with respect to $X = X_1 \oplus X_2$. The next result tells us about the spectra of these special kinds of operators, which we call 0-diagonal operators.

27 Lemma. Suppose that $T$ is a compact operator on a Banach space $X$ with a decomposition of the form
\[
T = \begin{bmatrix} 0 & T_1 \\ T_2 & 0 \end{bmatrix}
\]
with respect to the splitting $X = X_1 \oplus X_2$ (associated, in our applications, with disjoint sets $V_1, V_2$). Then the nonzero eigenvalues of $T$ are given by the branches of $\sqrt{\lambda}$, where $\lambda \in \sigma(T_1T_2) = \sigma(T_2T_1)$. Furthermore, the corresponding eigenvectors are of the form $\varphi = (\varphi_1, \varphi_2)^T$, where $\varphi_1$ is an eigenvector of $T_1T_2$ corresponding to $\lambda$, $\varphi_2$ is an eigenvector of $T_2T_1$ corresponding to $\lambda$, $T_1\varphi_2 = \sqrt{\lambda}\,\varphi_1$, and $T_2\varphi_1 = \sqrt{\lambda}\,\varphi_2$.

Proof: Suppose that $\mu \neq 0$ is an eigenvalue of $T$ with corresponding eigenvector $\varphi = (\varphi_1,\varphi_2)^T$. Then, since
\[
T^2 = \begin{bmatrix} T_1T_2 & 0 \\ 0 & T_2T_1 \end{bmatrix},
\]
the equation $T^2\varphi = \mu^2\varphi$ implies that $T_1T_2\varphi_1 = \mu^2\varphi_1$ and $T_2T_1\varphi_2 = \mu^2\varphi_2$, so that $\lambda := \mu^2 \in \sigma(T_1T_2) = \sigma(T_2T_1)$ with corresponding eigenvector $\varphi_1$ with respect to $T_1T_2$ and $\varphi_2$ with respect to $T_2T_1$. Also, $T\varphi = \mu\varphi$ implies that $T_2\varphi_1 = \mu\varphi_2$ and $T_1\varphi_2 = \mu\varphi_1$. Conversely, if $\lambda \neq 0$ is an eigenvalue of $T_1T_2$ with corresponding eigenvector $\varphi_1$, then $\lambda$ is also an eigenvalue of $T_2T_1$ with corresponding eigenvector $\varphi_2 := T_2\varphi_1$ (note that $T_1\varphi_2 = T_1T_2\varphi_1 = \lambda\varphi_1 \neq 0$, so $\varphi_2 \neq 0$), and it is easy to check that each branch of $\sqrt{\lambda}$ is an eigenvalue of $T$ with corresponding eigenvector of the form $\varphi = (\sqrt{\lambda}\,\varphi_1, \varphi_2)$, which satisfies the required conditions.
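Lemma 27 is easy to check numerically in finite dimensions. The following Python sketch (random $3\times 3$ blocks, purely for illustration) verifies that the eigenvalues of a 0-diagonal matrix are the two square-root branches of the eigenvalues of $T_1T_2$, and that $(\sqrt{\lambda}\,\varphi_1, T_2\varphi_1)$ is a corresponding eigenvector.

```python
# Numerical check of Lemma 27 for a hypothetical finite-dimensional 0-diagonal
# operator T = [[0, T1], [T2, 0]]: its nonzero eigenvalues are the branches
# +/- sqrt(lambda) with lambda ranging over the eigenvalues of T1 T2.
import numpy as np

rng = np.random.default_rng(2)
T1 = rng.standard_normal((3, 3))
T2 = rng.standard_normal((3, 3))
T = np.block([[np.zeros((3, 3)), T1],
              [T2, np.zeros((3, 3))]])

mu = np.linalg.eigvals(T)                              # eigenvalues of T
lam = np.linalg.eigvals(T1 @ T2).astype(complex)       # eigenvalues of T1 T2
roots = np.concatenate([np.sqrt(lam), -np.sqrt(lam)])  # both square-root branches
print(max(np.min(np.abs(mu - r)) for r in roots))      # ~0: each branch is an eigenvalue of T

# Eigenvector structure: if T1 T2 phi1 = lam * phi1 and phi2 = T2 phi1, then
# (sqrt(lam) * phi1, phi2) is an eigenvector of T with eigenvalue sqrt(lam).
lam0, vecs = np.linalg.eig(T1 @ T2)
phi1 = vecs[:, 0]
mu0 = np.sqrt(complex(lam0[0]))
phi = np.concatenate([mu0 * phi1, T2 @ phi1])
print(np.max(np.abs(T @ phi - mu0 * phi)))             # ~0
```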
D Useful Bounds and Expansions

D.1 Expansions of Transition Densities

Let $F \in C^k(\mathbb{R})$ for some $k \ge 4$. Suppose that $F(0) = 0$ and that $a = F'(0)$ satisfies $|a| < 1$, so that $F$ has a third order Taylor expansion about $0$ of the form
\[
F(x) = ax + a_1x^2 + a_2x^3 + r_3(x)x^4
\]
with $a_1 = \frac{F''(0)}{2}$, $a_2 = \frac{F'''(0)}{6}$, and $r_3(x)$ bounded on bounded subsets of $\mathbb{R}$. Define $F_\varepsilon(x) = \frac{1}{\varepsilon}F(\varepsilon x)$. Then
\[
F_\varepsilon(x) = \frac{1}{\varepsilon}F(\varepsilon x) = ax + a_1\varepsilon x^2 + a_2\varepsilon^2 x^3 + r_3^\varepsilon(x)\varepsilon^3 x^4 \tag{D-1}
\]
with $r_3^\varepsilon(x) = r_3(\varepsilon x)$. Let $\sigma > 0$ and let
\[
p(x,y) = \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(y-ax)^2}{2\sigma^2}}, \qquad
p_\varepsilon(x,y) = \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(y-F_\varepsilon(x))^2}{2\sigma^2}},
\]
and
\[
h_\varepsilon(x) = e^{\beta\left(x^2 - \left(\frac{F_\varepsilon(x)}{a}\right)^2\right)} \quad\text{with}\quad \beta := \frac{1-a^2}{2\sigma^2}.
\]
The following lemma gives us the relationship between $p$ and $p_\varepsilon$.

28 Lemma. Let $\varepsilon > 0$ and let $U$ be a bounded subset of $\mathbb{R}$. Then for all $x \in U/\varepsilon$ and $y \in \mathbb{R}$, we have
\[
p_\varepsilon(x,y) = p(x,y) + \varepsilon g_1(x,y)p(x,y) + \varepsilon^2 g_2(x,y)p(x,y) + \varepsilon^3 R_\varepsilon(x,y)
\]
with
\[
g_1(x,y) = \frac{a_1}{\sigma^2}(y-ax)x^2,
\]
\[
g_2(x,y) = \frac{a_2}{\sigma^2}(y-ax)x^3 + \frac{a_1^2}{2\sigma^4}\big((y-ax)^2 - \sigma^2\big)x^4,
\]
and
\[
|R_\varepsilon(x,y)| \le g_r^\varepsilon(x,y)\big(p_\varepsilon(x,y) + p(x,y)\big) \tag{D-2}
\]
where $g_r^\varepsilon$ is a polynomial in $\varepsilon$, $|y-ax|$, and $|x|$ of degrees 15, three, and 24, respectively.

Proof: If we take a second order Taylor expansion of $g(z) = e^{-z}$ about $z = z_0$, we obtain
\[
e^{-z} = e^{-z_0} - e^{-z_0}(z-z_0) + \frac{e^{-z_0}}{2}(z-z_0)^2 + \frac{M(z,z_0)}{6}(z-z_0)^3 \tag{D-3}
\]
with $|M(z,z_0)| \le \max\{e^{-z}, e^{-z_0}\} \le e^{-z} + e^{-z_0}$ for all $z \in \mathbb{R}$. Let $z = \frac{(y-F_\varepsilon(x))^2}{2\sigma^2}$, $z_0 = \frac{(y-ax)^2}{2\sigma^2}$, and
\[
g_\varepsilon(x) = F_\varepsilon(x) - ax = \varepsilon a_1 x^2 + \varepsilon^2 a_2 x^3 + r_3^\varepsilon(x)\varepsilon^3 x^4. \tag{D-4}
\]
Then
\[
z - z_0 = \frac{1}{2\sigma^2}\big[\big((y-ax)-g_\varepsilon(x)\big)^2 - (y-ax)^2\big]
= \frac{1}{\sigma^2}\Big(-g_\varepsilon(x)(y-ax) + \frac{g_\varepsilon(x)^2}{2}\Big)
= -\varepsilon\frac{a_1}{\sigma^2}(y-ax)x^2 + \varepsilon^2\Big(-\frac{a_2}{\sigma^2}(y-ax)x^3 + \frac{a_1^2}{2\sigma^2}x^4\Big) + \varepsilon^3 O_1^\varepsilon(x,y) \tag{D-5}
\]
where $O_1^\varepsilon$ is a polynomial in $y-ax$, $x$, $r_3^\varepsilon(x)$, and $\varepsilon$ of degrees one, eight, two, and three respectively. (D-5) implies that
\[
(z-z_0)^2 = \varepsilon^2\frac{a_1^2}{\sigma^4}(y-ax)^2x^4 + 2\varepsilon^3 O_2^\varepsilon(x,y) \tag{D-6}
\]
and
\[
(z-z_0)^3 = 6\varepsilon^3 O_3^\varepsilon(x,y) \tag{D-7}
\]
where $O_2^\varepsilon$ is a polynomial in $y-ax$, $x$, $r_3^\varepsilon(x)$, and $\varepsilon$ of degrees two, 16, four, and nine, and $O_3^\varepsilon$ is a polynomial in the same variables of degrees three, 24, six, and 15. Substituting (D-5), (D-6), and (D-7) into (D-3), multiplying by $\frac{1}{\sqrt{2\pi}\sigma}$, and using the fact that $F \in C^k$, $k \ge 4$, implies that $|r_3(x)| \le a_U$ for all $x \in U$ (so that $|r_3^\varepsilon(x)| \le a_U$ for all $x \in U/\varepsilon$) yields the result with
\[
R_\varepsilon(x,y) = O_1^\varepsilon(x,y)p(x,y) + O_2^\varepsilon(x,y)p(x,y) + M(z,z_0)O_3^\varepsilon(x,y).
\]

17 Remark. If we assume only that $F \in C^2$, then we can easily modify the above proof to obtain the result that
\[
|p_\varepsilon(x,y) - p(x,y)| \le O(\varepsilon)\,g_r(x,y)\big(p_\varepsilon(x,y) + p(x,y)\big)
\]
for all $x \in U/\varepsilon$. The more complicated result is only needed to get higher order terms in asymptotic expansions of the transition operator. We use this simpler version in Section 2.5.1.

18 Remark. The above proof can also be adapted to the case when $\sigma$ is a general $C^3$ function of $x$ that is bounded away from 0, since in this case we can use the second order Taylor expansion of $1/\sigma(x)$ about 0 to obtain
\[
\frac{1}{\sigma(\varepsilon x)} = \frac{1}{\sigma(0)} - \frac{\varepsilon\sigma'(0)x}{(\sigma(0))^2} + \frac{\varepsilon^2\big(2(\sigma'(0))^2 - \sigma(0)\sigma''(0)\big)x^2}{2(\sigma(0))^3} + O(\varepsilon^3x^3)
\]
(see Section 2.3.1 for the importance of this extension). Note that in this case $\sigma(0)$ takes the place of $\sigma$ in the definition of $p(x,y)$, and both $g_1$ and $g_2$ will depend on $\sigma(0)$, $\sigma'(0)$, and $\sigma''(0)$. Note also that the new $g_1$ will contain a first order polynomial in $x$, which will prevent us from getting 0 in our calculation of $\lambda_{s,j,1}$ in Section 2.3.2.

The following lemma is a needed extension of the previous lemma, used in Section 2.4.2.

29 Lemma. Let $\varepsilon > 0$ and let $U$ be a bounded subset of $\mathbb{R}$. Then for all $x \in U/\varepsilon$ and $y \in \mathbb{R}$, we have
\[
h_\varepsilon(x)p_\varepsilon(x,y) = p(x,y) + \varepsilon\tilde g_1(x,y)p(x,y) + \varepsilon^2\tilde g_2(x,y)p(x,y) + \varepsilon^3\tilde R_\varepsilon(x,y)
\]
with
\[
\tilde g_1(x,y) = g_1(x,y) - \frac{2a_1\beta}{a}x^3,
\]
\[
\tilde g_2(x,y) = g_2(x,y) - \frac{2a_1\beta}{a}x^3g_1(x,y) - \beta\Big[\frac{a_1^2}{a^2} + \frac{2a_2}{a}\Big]x^4 + \frac{2a_1^2\beta^2}{a^2}x^6,
\]
and
\[
|\tilde R_\varepsilon(x,y)| \le \tilde g_r^\varepsilon(x,y)\big(p_\varepsilon(x,y) + p(x,y)\big)\big(1 + h_\varepsilon(x)\big) \tag{D-8}
\]
where $\tilde g_r^\varepsilon$ is a polynomial in $\varepsilon$, $|y-ax|$, and $|x|$.

Proof: Using (D-3) with $z = -\beta\big(x^2 - (\frac{F_\varepsilon(x)}{a})^2\big)$ and $z_0 = 0$ implies that $h_\varepsilon(x) = 1 + \varepsilon h_1(x) + \varepsilon^2 h_2(x) + \varepsilon^3 M(x)$ with $h_1(x) = -\frac{2a_1\beta}{a}x^3$, $h_2(x) = -\beta\big[\frac{a_1^2}{a^2} + \frac{2a_2}{a}\big]x^4 + \frac{2a_1^2\beta^2}{a^2}x^6$, and $|M(x)| \le O^\varepsilon(x)(1 + h_\varepsilon(x))$ for some polynomial $O^\varepsilon(x)$ in $\varepsilon$ and $x$, for all $x \in U/\varepsilon$. Multiplying this expansion by the expression for $p_\varepsilon$ in Lemma 28 gives the result with $\tilde g_1 = g_1 + h_1$ and $\tilde g_2 = g_2 + g_1h_1 + h_2$.
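The expansions in Lemmas 28 and 29 can be checked numerically for any concrete $F$. The sketch below uses a hypothetical polynomial test map (it is not one of the circle maps or firing maps studied in this thesis) and verifies that, at fixed $(x,y)$, the two-term expansion of Lemma 28 differs from $p_\varepsilon(x,y)$ by $O(\varepsilon^3)$.

```python
# Numerical check of Lemma 28 for a hypothetical polynomial test map F:
# at fixed (x, y), the two-term expansion differs from p_eps(x, y) by O(eps^3).
import numpy as np

a, a1, a2, sigma = 0.5, 0.3, -0.2, 1.0
F = lambda u: a * u + a1 * u**2 + a2 * u**3 + 0.05 * u**4   # a_1 = F''(0)/2, a_2 = F'''(0)/6
F_eps = lambda x, eps: F(eps * x) / eps

def gauss(x, y, mean):
    return np.exp(-(y - mean)**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

def g1(x, y):
    return a1 / sigma**2 * (y - a * x) * x**2

def g2(x, y):
    return (a2 / sigma**2 * (y - a * x) * x**3
            + a1**2 / (2 * sigma**4) * ((y - a * x)**2 - sigma**2) * x**4)

x, y = 1.3, 1.0
for eps in [1e-1, 1e-2, 1e-3]:
    p_eps = gauss(x, y, F_eps(x, eps))                       # exact p_eps(x, y)
    p0 = gauss(x, y, a * x)                                  # p(x, y)
    approx = p0 * (1 + eps * g1(x, y) + eps**2 * g2(x, y))
    print(eps, abs(p_eps - approx) / eps**3)                 # ratio stays bounded as eps -> 0
```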
D.2 Some Useful Bounds

Let $B(\mathbb{R})$ be the space of all Borel measurable functions $\varphi : \mathbb{R} \to \mathbb{R}$. We define a weighted sup-norm on $B(\mathbb{R})$ by
\[
\|\varphi\|_k = \sup_{x\in\mathbb{R}}\frac{|\varphi(x)|}{v_k(x)}
\]
with $v_k(x) = e^{kx^2}$, $k > 0$, and define $X_k = \{\varphi \in B(\mathbb{R}) : \|\varphi\|_k < \infty\}$. Then it is easily verified that $X_k$, together with $\|\cdot\|_k$, is a Banach space. We also use $\|\cdot\|_k$ to denote the operator norm on $L(X_k)$, the space of all bounded linear operators on $X_k$, and define the linear operators
\[
T_{i,j}\varphi(x) = \int \varphi(y)|x|^i|y|^j p(x,y)\,dy \qquad\text{and}\qquad
T_{i,j}^\varepsilon\varphi(x) = \int \varphi(y)|x|^i|y|^j p_\varepsilon(x,y)\,dy.
\]
We shall also use the notation $I_\delta = (-\delta,\delta)$, $I_\delta^\varepsilon = (-\delta/\varepsilon, \delta/\varepsilon)$, and $M_\delta = \sup_{x\in I_\delta}\{|F'(x)|\}$ for $\delta > 0$. Note that, by the definition of $F_\varepsilon$,
\[
|F_\varepsilon(x)| \le M_\delta|x| \tag{D-9}
\]
for every $x \in I_\delta^\varepsilon$. The following lemma is useful in many of our calculations.

30 Lemma. For any $k < \frac{1}{2\sigma^2}$ and $j \in \mathbb{N}$, we have
\[
\int |y|^j v_k(y)p(x,y)\,dy \le q(x)\,e^{ka^2x^2/(1-2\sigma^2k)}
\]
for all $x \in \mathbb{R}$, and
\[
\int |y|^j v_k(y)p_\varepsilon(x,y)\,dy \le \tilde q(x)\,e^{kM_\delta^2|x|^2/(1-2\sigma^2k)}
\]
for all $x \in I_\delta^\varepsilon$, where $q$ and $\tilde q$ are polynomials of degree $j$.

Proof: We prove the second inequality; the proof of the first is similar, but easier. For any $x \in \mathbb{R}$, we have
\[
\int |y|^j v_k(y)p_\varepsilon(x,y)\,dy = \frac{1}{\sqrt{2\pi}\sigma}\int |y|^j e^{ky^2}e^{-(y-F_\varepsilon(x))^2/(2\sigma^2)}\,dy
= \frac{1}{\sqrt{2\pi}\sigma}\,e^{-(F_\varepsilon(x))^2/(2\sigma^2)}\,e^{(F_\varepsilon(x))^2/[2\sigma^2(1-2\sigma^2k)]}\int |y|^j e^{-\frac{1-2\sigma^2k}{2\sigma^2}\left(y-\frac{F_\varepsilon(x)}{1-2\sigma^2k}\right)^2}\,dy .
\]
Since $k < \frac{1}{2\sigma^2}$, the integrand is (up to normalization) the density of a normal random variable with mean $\frac{F_\varepsilon(x)}{1-2\sigma^2k}$ and variance $\frac{\sigma^2}{1-2\sigma^2k} > 0$. The above identity then implies that
\[
\int |y|^j v_k(y)p_\varepsilon(x,y)\,dy \le q_\varepsilon(x)\,e^{k(F_\varepsilon(x))^2/(1-2\sigma^2k)} \tag{D-10}
\]
where $q_\varepsilon(x)$ is a polynomial in $F_\varepsilon(x)$ of degree $j$. Now, from (D-9) we have $|F_\varepsilon(x)| \le M_\delta|x|$ for every $x \in I_\delta^\varepsilon$, and therefore (D-10) implies that
\[
\int |y|^j v_k(y)p_\varepsilon(x,y)\,dy \le \tilde q(x)\,e^{kM_\delta^2|x|^2/(1-2\sigma^2k)}
\]
for all $x \in I_\delta^\varepsilon$, which proves the result.

Lemma 30 yields the bounds we need on the growth of $T_{i,j}\varphi(x)$ and $T_{i,j}^\varepsilon\varphi(x)$.

31 Lemma. For any $k < \frac{1-a^2}{2\sigma^2} =: k_c$ and $n, m \in \mathbb{N}$, there exist positive constants $K_1, \eta_1 > 0$ depending only on $k, a, n, m$ so that
\[
\frac{|T_{i,j}\varphi(x)|}{v_k(x)} \le K_1\|\varphi\|_k e^{-\eta_1x^2}
\]
for all $i \le m$, $j \le n$, $x \in \mathbb{R}$, and $\varphi \in X_k$.

Proof: Let $k < k_c$, $m, n \in \mathbb{N}$, and $\varphi \in X_k$. For the remainder of the proof, we drop the subscript $k$. Then for all $i \le m$, $j \le n$, and $x \in \mathbb{R}$, Lemma 30 implies that
\[
|T_{i,j}\varphi(x)| \le |x|^i\|\varphi\|\int v(y)|y|^j p(x,y)\,dy \le \hat q(x)\|\varphi\|\,e^{ka^2x^2/(1-2\sigma^2k)}
\]
where $\hat q$ is a polynomial of degree at most $i+j$. Therefore,
\[
\frac{|T_{i,j}\varphi(x)|}{v(x)} \le \hat q(x)\|\varphi\|\,e^{-\eta x^2}
\]
where
\[
\eta := -\frac{ka^2}{1-2\sigma^2k} + k = \frac{k\big[(1-a^2)-2k\sigma^2\big]}{1-2k\sigma^2}
\]
is positive by the assumption that $k < \frac{1-a^2}{2\sigma^2}$. Now we can choose a constant $K_{m,n}$ so that $|x|^l \le K_{m,n}e^{\frac{\eta}{2}x^2}$ for all $l \le m+n$ and $x \in \mathbb{R}$, from which we obtain Lemma 31 with $\eta_1 = \frac{\eta}{2}$ and $K_1$ equal to $K_{m,n}$ multiplied by the maximum of the absolute values of the coefficients of $\hat q(x)$. In particular, we have $T_{i,j} \in L(X_k)$ for all $k < k_c$, with $\|T_{i,j}\|_k \le K_1$.
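The heart of Lemma 30 is the completing-the-square identity for Gaussian integrals against the weight $e^{ky^2}$, $k < \frac{1}{2\sigma^2}$. The following Python sketch verifies that identity by direct numerical integration; the parameter values are arbitrary test inputs, with $m$ standing in for $F_\varepsilon(x)$.

```python
# Numerical check of the completing-the-square step behind Lemma 30: for
# k < 1/(2*sigma^2), the integral of |y|^j * e^{k y^2} against an N(m, sigma^2)
# density equals e^{k m^2/(1 - 2 sigma^2 k)} / sqrt(1 - 2 sigma^2 k) times the
# j-th absolute moment of N(m/(1 - 2 sigma^2 k), sigma^2/(1 - 2 sigma^2 k)).
import numpy as np

sigma, k, j, m = 1.0, 0.3, 2, 0.8          # k < 1/(2 sigma^2) = 0.5
c = 1.0 - 2.0 * sigma**2 * k

y = np.linspace(-40.0, 40.0, 400_001)      # simple Riemann sum on a wide grid
dy = y[1] - y[0]
dens = np.exp(-(y - m)**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
lhs = np.sum(np.abs(y)**j * np.exp(k * y**2) * dens) * dy

shifted = np.exp(-(y - m / c)**2 / (2 * sigma**2 / c)) / np.sqrt(2 * np.pi * sigma**2 / c)
moment = np.sum(np.abs(y)**j * shifted) * dy
rhs = np.exp(k * m**2 / c) / np.sqrt(c) * moment

print(lhs, rhs)                            # the two values agree
```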
Since $F_\varepsilon(x)$ may be badly behaved for large $x$, we cannot hope for a universal bound on $|T_{i,j}^\varepsilon\varphi(x)|$ as in Lemma 31. Instead, we look for an inequality which holds in small neighborhoods of 0. The following suffices.

32 Lemma. There exists a $\delta_1 > 0$ so that if $k < k_{\delta_1} := \frac{1-M_{\delta_1}^2}{2\sigma^2}$ and $m, n \in \mathbb{N}$, we can find positive constants $K_2, \eta_2$ depending on $\delta_1, k, n, m$, and $F$ such that
\[
\frac{|T_{i,j}^\varepsilon\varphi(x)|}{v_k(x)} \le K_2\|\varphi\|_k e^{-\eta_2x^2}
\]
for all $i \le m$, $j \le n$, $\varepsilon > 0$, $x \in I_{\delta_1}^\varepsilon$, and $\varphi \in X_k$.

Proof: Choose $\delta_1 > 0$ such that $M_{\delta_1} < 1$ and fix $k \in (0, k_{\delta_1})$, $m, n \in \mathbb{N}$, and $\varphi \in X_k$. For the remainder of the proof, write $\delta = \delta_1$, $M = M_{\delta_1}$, $v(x) = v_k(x)$, and $\|\cdot\| = \|\cdot\|_k$. Then for all $i \le m$, $j \le n$, and $x \in I_\delta^\varepsilon$, Lemma 30 implies that
\[
|T_{i,j}^\varepsilon\varphi(x)| \le |x|^i\|\varphi\|\int v(y)|y|^j p_\varepsilon(x,y)\,dy \le \hat q(x)\|\varphi\|\,e^{kM^2|x|^2/(1-2\sigma^2k)}
\]
where $\hat q(x)$ is a polynomial in $x$ of degree $i+j$. Dividing by $v(x)$, we obtain
\[
\frac{|T_{i,j}^\varepsilon\varphi(x)|}{v(x)} \le \hat q(x)\|\varphi\|\,e^{-\eta x^2} \tag{D-11}
\]
with
\[
\eta := -\frac{kM^2}{1-2\sigma^2k} + k = \frac{k\big[(1-M^2)-2\sigma^2k\big]}{1-2\sigma^2k} > 0
\]
since $k < k_{\delta_1} = \frac{1-M^2}{2\sigma^2}$. The result now follows as in the proof of Lemma 31.

We also define the operators
\[
Q_{i,j}^\varepsilon\varphi(x) = h_\varepsilon(x)T_{i,j}\varphi(x) \qquad\text{and}\qquad
\tilde Q_{i,j}^\varepsilon\varphi(x) = h_\varepsilon(x)T_{i,j}^\varepsilon\varphi(x)
\]
for all $i, j \in \mathbb{N}$.

33 Lemma. There exists a $\delta_2 > 0$ so that for all $k < k_c$ and $m, n \in \mathbb{N}$, there exists a positive constant $\eta_3 > 0$ depending only on $k, a, n, m$ such that
\[
\frac{|Q_{i,j}^\varepsilon\varphi(x)|}{v_k(x)} \le K_1\|\varphi\|_k e^{-\eta_3x^2}
\]
for all $i \le m$, $j \le n$, $\varepsilon > 0$, $x \in I_{\delta_2}^\varepsilon$, and $\varphi \in X_k$, where $K_1$ is as in Lemma 31.

Proof: Here and in the next lemma, let $m_\delta := \inf_{x\in I_\delta}|F'(x)|$, so that, arguing as for (D-9), $|F_\varepsilon(x)| \ge m_\delta|x|$ for every $x \in I_\delta^\varepsilon$. Choose $\delta_2 > 0$ so that $1 - \big(\frac{m_{\delta_2}}{a}\big)^2 < \frac{\eta_1}{2\beta}$, where $\eta_1$ is the constant in Lemma 31 (this is possible since $(m_\delta/a)^2 \to 1$ as $\delta \to 0^+$). Then for all $x \in I_{\delta_2}^\varepsilon$ we have
\[
|h_\varepsilon(x)| \le e^{\beta\left(x^2 - \left(\frac{m_{\delta_2}}{a}\right)^2x^2\right)} < e^{\frac{\eta_1}{2}x^2}
\]
and therefore, by Lemma 31, we have the result with $\eta_3 = \frac{\eta_1}{2}$.

34 Lemma. There exists a $\delta_3 \in (0, \delta_1)$ so that if $k < k_{\delta_1}$ and $m, n \in \mathbb{N}$, we can find a positive constant $\eta_4$ depending on $\delta_1, k, n, m$, and $F$ such that
\[
\frac{|\tilde Q_{i,j}^\varepsilon\varphi(x)|}{v_k(x)} \le K_2\|\varphi\|_k e^{-\eta_4x^2}
\]
for all $i \le m$, $j \le n$, $\varepsilon > 0$, $x \in I_{\delta_3}^\varepsilon$, and $\varphi \in X_k$, where $K_2$ is as in Lemma 32.

Proof: Choose $\delta_3 \in (0, \delta_1)$ so that $1 - \big(\frac{m_{\delta_3}}{a}\big)^2 < \frac{\eta_2}{2\beta}$, where $\eta_2$ is the constant in Lemma 32. The result then follows as in the proof of Lemma 33 with $\eta_4 = \frac{\eta_2}{2}$.

The next few lemmas give some bounds of a slightly different nature.

35 Lemma. Let $\delta > 0$. Then for all $k < \tilde k_c := \frac{1-|a|}{2\sigma^2}$ and $m, n \in \mathbb{N}$, there exist $K_3, \eta_5 > 0$ depending on $n, m, a, \delta$, and $k$ such that
\[
\frac{|T_{i,j}\varphi(x)|}{v_k(x)} \le \varepsilon K_3\|\varphi\|_k e^{-\eta_5\delta^2/\varepsilon^2}
\]
for all $i \le m$, $j \le n$, $x \in I_\delta^\varepsilon$, and $\varphi \in X_k$ with $\mathrm{supp}(\varphi) \subset (I_\delta^\varepsilon)^c$.

Proof: Let $k < \tilde k_c$, $m, n \in \mathbb{N}$, and $\gamma \in \big(0, \frac{\tilde k_c - k}{2}\big)$. Choose $K_n$ so that $|y|^j \le K_n e^{\gamma y^2}$ for all $j \le n$ and $y \in \mathbb{R}$, and let $\varphi \in X_k$ with $\mathrm{supp}(\varphi) \subset (I_\delta^\varepsilon)^c$. Then, as in the proof of Lemma 31, we have, for all $i \le m$ and $j \le n$,
\[
|T_{i,j}\varphi(x)| \le |x|^i\|\varphi\|\int_{|y|\ge\delta/\varepsilon} v(y)|y|^j p(x,y)\,dy
\le \frac{K_n|x|^i\|\varphi\|}{\sqrt{2\pi}\sigma}\int_{|y|\ge\delta/\varepsilon} e^{(k+\gamma)y^2}e^{-(y-ax)^2/(2\sigma^2)}\,dy
= \frac{K_n|x|^i\|\varphi\|}{\sqrt{2\pi}\sigma}\,e^{(k+\gamma)a^2x^2/(1-2\sigma^2(k+\gamma))}\int_{|y|\ge\delta/\varepsilon} e^{-(y-\tilde a x)^2/(2\tilde\sigma^2)}\,dy
\]
with $\tilde a = \frac{a}{1-2\sigma^2(k+\gamma)}$ and $\tilde\sigma^2 = \frac{\sigma^2}{1-2\sigma^2(k+\gamma)}$. Since $k+\gamma < \tilde k_c < \frac{1}{2\sigma^2}$, we have $\tilde\sigma^2 > 0$, and hence the integrand is, up to normalization, the density of an $N(\tilde a x, \tilde\sigma^2)$ random variable. Note also that $k+\gamma < \tilde k_c$ implies
\[
|\tilde a| < \frac{|a|}{1-2\sigma^2\tilde k_c} = \frac{|a|}{1-(1-|a|)} = 1.
\]
Denoting by $\chi$ an $N(0,1)$ random variable, for all $x \in I_\delta^\varepsilon$ we have
\[
\frac{1}{\sqrt{2\pi}\sigma}\int_{y\ge\delta/\varepsilon} e^{-(y-\tilde a x)^2/(2\tilde\sigma^2)}\,dy
\le \frac{1}{\sqrt{1-2\sigma^2(k+\gamma)}}\cdot\frac{1}{\sqrt{2\pi}\tilde\sigma}\int_{y\ge\delta/\varepsilon} e^{-(y-|\tilde a|\delta/\varepsilon)^2/(2\tilde\sigma^2)}\,dy
= \frac{P\big(\tilde\sigma\chi + |\tilde a|\frac{\delta}{\varepsilon} \ge \frac{\delta}{\varepsilon}\big)}{\sqrt{1-2\sigma^2(k+\gamma)}}
= \frac{P\big(\chi \ge \big(\frac{1-|\tilde a|}{\tilde\sigma}\big)\frac{\delta}{\varepsilon}\big)}{\sqrt{1-2\sigma^2(k+\gamma)}}
\le \varepsilon K e^{-\eta(\delta/\varepsilon)^2}
\]
where $\eta = \frac{(1-|\tilde a|)^2}{2\tilde\sigma^2} > 0$ and $K = \frac{\sigma}{\sqrt{2\pi}\,(1-2\sigma^2(k+\gamma))(1-|\tilde a|)\delta} > 0$. Similarly,
\[
\frac{1}{\sqrt{2\pi}\sigma}\int_{y\le-\delta/\varepsilon} e^{-(y-\tilde a x)^2/(2\tilde\sigma^2)}\,dy
\le \frac{P\big(\tilde\sigma\chi - |\tilde a|\frac{\delta}{\varepsilon} \le -\frac{\delta}{\varepsilon}\big)}{\sqrt{1-2\sigma^2(k+\gamma)}}
\le \varepsilon K e^{-\eta(\delta/\varepsilon)^2}.
\]
Using these estimates, we have
\[
|T_{i,j}\varphi(x)| \le 2\varepsilon KK_n|x|^i\|\varphi\|\,e^{(k+\gamma)a^2x^2/(1-2\sigma^2(k+\gamma))}\,e^{-\eta(\delta/\varepsilon)^2}
\]
for all $x \in I_\delta^\varepsilon$, so that, dividing by $v(x)$, we obtain
\[
\frac{|T_{i,j}\varphi(x)|}{v(x)} \le 2\varepsilon KK_n\|\varphi\|\,|x|^i e^{-kx^2}e^{(k+\gamma)a^2x^2/(1-2\sigma^2(k+\gamma))}e^{-\eta(\delta/\varepsilon)^2}
\le \varepsilon K_3\|\varphi\|\,e^{-\eta_5(\delta/\varepsilon)^2},
\]
where the last inequality follows from the fact that, shrinking $\gamma$ if necessary, the coefficient of $x^2$ in the exponent is negative (it tends to $-k + \frac{ka^2}{1-2\sigma^2k} < 0$ as $\gamma \to 0^+$), so that we can choose $K_m > 0$ such that $|x|^i \le K_m e^{\tilde\gamma x^2}$ for all $x \in \mathbb{R}$ and $i \le m$, with $\tilde\gamma$ chosen so that the new coefficient of $x^2$ is still negative; we may then take $K_3 = 2KK_nK_m$ and $\eta_5 = \eta$.
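The only probabilistic input in the proof of Lemma 35 is the standard Gaussian tail bound $P(\chi \ge t) \le \frac{1}{t\sqrt{2\pi}}e^{-t^2/2}$ applied with $t = \frac{1-|\tilde a|}{\tilde\sigma}\frac{\delta}{\varepsilon}$, which is where both the prefactor $\varepsilon$ and the decay $e^{-\eta(\delta/\varepsilon)^2}$ come from. A quick Python check of the bound against the exact tail (the parameter values are illustrative stand-ins for $\tilde a$, $\tilde\sigma$, and $\delta$):

```python
# Gaussian tail bound used in Lemma 35: P(chi >= t) <= exp(-t^2/2)/(t*sqrt(2*pi)),
# evaluated at t = ((1 - a~)/s~) * delta/eps and compared with the exact tail.
# The values of a~, s~, delta below are arbitrary stand-ins for illustration.
import math

a_t, s_t, delta = 0.7, 1.2, 0.5
for eps in [0.5, 0.2, 0.1, 0.05]:
    t = (1.0 - a_t) / s_t * delta / eps
    exact = 0.5 * math.erfc(t / math.sqrt(2.0))                  # P(chi >= t)
    bound = math.exp(-t**2 / 2.0) / (t * math.sqrt(2.0 * math.pi))
    print(eps, exact, bound, exact <= bound)
```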
The next lemma follows directly from Lemma 35 in the same way that Lemma 33 followed from Lemma 31.

36 Lemma. There exists a $\delta_4 > 0$ so that if $k < \tilde k_c$ and $m, n \in \mathbb{N}$, there exists an $\eta_6 > 0$ depending on $n, m, a, \delta_4$, and $k$ such that
\[
\frac{|Q_{i,j}^\varepsilon\varphi(x)|}{v_k(x)} \le \varepsilon K_3\|\varphi\|_k e^{-\eta_6\delta_4^2/\varepsilon^2}
\]
for all $i \le m$, $j \le n$, $x \in I_{\delta_4}^\varepsilon$, and $\varphi \in X_k$ with $\mathrm{supp}(\varphi) \subset (I_{\delta_4}^\varepsilon)^c$, where $K_3$ is the constant in Lemma 35.
Abstract
A stochastic bifurcation is generally defined as either a change in the number of stable invariant measures (dynamical or D-bifurcations) or a change in the qualitative shape of invariant measures (phenomenological or P-bifurcations) for a stochastic dynamical system. Some authors have observed that these definitions can fail to capture important information regarding the evolution of certain Markov Chains arising from first-passage-time distributions of stochastic differential equations since the definitions deal only with static information about the chain (i.e. information regarding invariant or stationary distributions). In this work we perform a more rigorous investigation of these observations by studying changes to the spectra of transition operators for two different classes of examples of Markov Chains obtained by taking small perturbations of deterministic dynamical systems. The first class deals with small Gaussian perturbations of discrete time dynamical systems on the circle while the second class arises naturally from the study of sequences of firing times in noisy integrate-and-fire models for chemical potential in neurons. We show that bifurcations in the deterministic system can often lead to changes in the number of eigenvalues of the transition operator for the corresponding perturbed process which approach the unit circle as the noise intensity goes to 0, a phenomenon we call a lambda-bifurcation. Although in both classes of examples, the perturbed process always has a unique stationary distribution, these changes in the number of eigenvalues with modulus close to 1 can have significant effects on both the shape of and the rate of convergence to the stationary distribution of the process.