LARGE DEVIATIONS APPROACH TO THE BISTABLE SYSTEMS WITH FRACTAL BOUNDARIES

by

Yu Zeng

A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the Requirements for the Degree
DOCTOR OF PHILOSOPHY (APPLIED MATHEMATICS)

December 2008

Copyright 2008 Yu Zeng

Dedication

To my parents and sister.

Acknowledgements

My deepest gratitude goes to my advisor, Prof. Peter Baxendale, for his inspiration, kind support, and insightful advice. His in-depth knowledge of wide fields of mathematics and his constant guidance have made the work presented in this thesis possible. I owe him more than words can express.

I would like to thank Prof. Richard Arratia for agreeing to serve on both my Guidance Committee and my Dissertation Committee. He is always so kind and ready to help. I am also grateful to Prof. Fengzhu Sun for serving on my Dissertation Committee, and to Prof. Nicolai Haydn, Prof. Remigijus Mikulevicius, and Prof. Lei Li for serving on my Guidance Committee. I have benefited tremendously from their comments, and their input has vastly improved this dissertation.

Table of Contents

Dedication
Acknowledgements
List of Figures
Abstract
Chapter 1: Introduction
Chapter 2: The Model and Statements of Results
  2.1 The Model
  2.2 Assumptions
  2.3 Main Results
Chapter 3: Proof of Main Theorems
  3.1 Large Deviations
  3.2 Structure of the General Mapping f
  3.3 Uniform Convergence of $X^\varepsilon_n$
  3.4 Expected Exit Time from Basins of Stable Fixed Points
  3.5 Proof of Theorem 1
  3.6 Exit Position
  3.7 Invariant Measure and Proof of Theorem 3
    3.7.1 Preliminary Results for Recurrent Diffusion Processes
    3.7.2 Proof of Theorem 3
Chapter 4: Applications
  4.1 Structure of the Special Mapping f
  4.2 Expected Exit Time from Neighborhoods of Unstable Fixed Points for the Special Mapping f
    4.2.1 Expected Exit Time for a Simpler Map f(x) = cx
    4.2.2 Expected Exit Time for the Special Mapping (1.1)
  4.3 Expected Exit Time for More General Mappings f
    4.3.1 Generalized Ladder Map
      4.3.1.1 Definition of r(y)
      4.3.1.2 Definition of a(y)
    4.3.2 Exit Time from C_n
  4.4 Calculation of the Quasipotential
    4.4.1 The Simpler Ladder Map
    4.4.2 The General Case
    4.4.3 The Ladder Map of Our Interest
  4.5 Exit Position
Chapter 5: Future Work
Bibliography

List of Figures

1.1 Double Well Potential
1.2 Cyclical Process
1.3 f(x) and Its Mixed Basins
1.4 Transport Regions for the Dynamic System
3.1 Khasminskii's Construction
3.2 1 and 2
4.1 We Can Change Slopes of f
4.2 Invariant Measure $\mu^\varepsilon$
4.3 Invariant Measure from [1]

Abstract

In this thesis we study a discrete-time dynamical system on the unit interval perturbed by low-intensity additive Gaussian noise. We consider systems in which the underlying deterministic system has two stable fixed points, and we use large deviations methods to study transitions from one basin of attraction to the other. We investigate the transition time between the basins of the two attractors, and the places where the transitions most likely occur. It is the dynamical system with fractal boundaries between the two basins that is of greatest interest to us. We estimate the expected exit time from neighborhoods of such complicated boundaries. Finally, it is shown that, for the system with a specific underlying linear mapping, we are able to compute the quasipotentials exactly.

Chapter 1

Introduction

Over the last forty years, stochastic resonance has attracted considerable attention in many fields, such as physics, engineering, chemistry, and biomedical science. The term describes a phenomenon, manifest in nonlinear systems, whereby generally feeble input information (such as a weak signal) can be amplified and optimized with the assistance of noise [4]. This phenomenon occurs if the following three basic conditions hold:

1. There is some form of threshold; typically, an energetic activation barrier will do.

2. There is a weak coherent input to the system; a periodic signal is a good example.

3. There is a source of noise which is either inherent in the system or added to the coherent input.

Once these three basic ingredients are present, the nonlinear system exhibits a resonance-like response as a function of the noise level; hence the name stochastic resonance. The underlying mechanism is fairly simple and robust.
As a consequence, stochastic resonance has been observed in a large variety of systems, including bistable ring lasers, semiconductor devices, chemical reactions, and mechanoreceptor cells in the tail fan of a crayfish [4].

In [4], a connection is given between stochastic resonance and the large deviation behavior in the setting of bistability, that is, a two-attractor system. For such a system it has previously been observed, both in experiments and in numerical calculations, that a small amount of noise is enough to destroy the noise-free equilibrium barriers in the phase space (pseudo-barriers); hence a pre-heteroclinic-tangency, chaos-like behavior occurs. In this thesis we will instead treat this bistable system by means of large deviations.

In this chapter we briefly formulate the facts from the Freidlin-Wentzell theory of perturbed dynamical systems [3] in the one-dimensional continuous case. Let $\varepsilon > 0$. Following Freidlin and Wentzell [3], we study the diffusion process $X^\varepsilon$ in $\mathbb{R}$ which solves the SDE
\[ dX^\varepsilon_t = -U'(X^\varepsilon_t)\,dt + \sqrt{\varepsilon}\,dW_t, \qquad X^\varepsilon_0 = x, \quad t \ge 0, \]
where $W_t$ is a standard Brownian motion and $U$ is a smooth function, called the potential. Note that in the one-dimensional case the drift term of the SDE can always be written as the derivative of some potential, but this is not necessarily true in higher dimensions.

For $T > 0$, we introduce the action functional on the space $C[0,T]$ corresponding to the potential $U$:
\[ S_{0T}(h) = \begin{cases} \dfrac12 \displaystyle\int_0^T \bigl( \dot h_s + U'(h_s) \bigr)^2\,ds & \text{if } h \text{ is absolutely continuous}, \\ +\infty & \text{otherwise}. \end{cases} \]
It is easy to see that $S_{0T} \ge 0$, and if $S_{0T}(h) = 0$ then $h$ is a trajectory of the dynamical system $\dot x = -U'(x)$ on the interval $[0,T]$.

For $x, y \in \mathbb{R}$, we define the quasipotential in terms of the action functional:
\[ V(x,y) = \inf\{ S_{0T}(h) : h \in C[0,T],\ h_0 = x,\ h_T = y,\ T > 0 \}. \]
Note that the endpoint $T$ is not fixed. The quasipotential describes the work a physical particle moving in the potential landscape given by $U$ must do to get from $x$ to $y$.
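For a gradient system this infimum can be evaluated by completing the square; the short derivation below is a standard Freidlin-Wentzell computation, included here as a worked example (it is not spelled out in the text):

```latex
S_{0T}(h) = \frac12\int_0^T \bigl(\dot h_s + U'(h_s)\bigr)^2\,ds
          = \frac12\int_0^T \bigl(\dot h_s - U'(h_s)\bigr)^2\,ds
            + 2\int_0^T \dot h_s\,U'(h_s)\,ds \\
          = \frac12\int_0^T \bigl(\dot h_s - U'(h_s)\bigr)^2\,ds
            + 2\bigl(U(h_T) - U(h_0)\bigr)
          \;\ge\; 2\bigl(U(y) - U(x)\bigr).
```

Equality is approached (as $T \to \infty$) along the time-reversed flow $\dot h = +U'(h)$, which is admissible as long as the path climbs within a single well; this is the source of the same-well formula $V(x,y) = 2\max\{U(y) - U(x), 0\}$ used below.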
More precisely, let
\[ \tau^\varepsilon_y = \inf\{ t > 0 : X^\varepsilon_t = y \} \]
and denote by $P_x$ the law of the diffusion starting at $x$. In these notations,
\[ V(x,y) = -\lim_{T\to\infty} \lim_{\varepsilon\to 0} \varepsilon \log P_x\bigl( \tau^\varepsilon_y \le T \bigr). \]

In terms of the quasipotential one can describe the asymptotic behavior of the diffusion $X^\varepsilon$ as $\varepsilon \to 0$, and the asymptotics of the transition time, which contains a mathematical formulation of Kramers' law.

Theorem (Freidlin, Wentzell [3]). Let $[a,b]$ be a finite interval, let $0 \in [a,b]$ be the unique zero of $U'(x)$ on the interval, and let $0$ be the asymptotically stable point of the dynamical system $\dot x = -U'(x)$. Let
\[ \tau^\varepsilon = \inf\{ t > 0 : X^\varepsilon_t \notin [a,b] \}. \]
Then for any $x \in (a,b)$ the following holds:
\[ \lim_{\varepsilon\to 0} \varepsilon \log E_x \tau^\varepsilon = \min\{ V(0,a),\ V(0,b) \} = V_0, \]
\[ \lim_{\varepsilon\to 0} P_x\bigl( e^{(V_0 - \delta)/\varepsilon} < \tau^\varepsilon < e^{(V_0 + \delta)/\varepsilon} \bigr) = 1 \quad \text{for any } \delta > 0. \]

Let $U$ be a double-well potential with minima at the points $\pm 1$ and a saddle point at $0$. Assume $U(x) \to \infty$ as $|x| \to \infty$. Let
\[ U(-1) = -\tfrac{V}{2}, \qquad U(1) = -\tfrac{u}{2}, \qquad U(0) = 0, \qquad 0 < u < V. \]
From the structure of the potential, one obtains that $(-\infty, 0)$ is attracted to $-1$ and $(0, \infty)$ is attracted to $1$; these are called basin 1 and basin 2 respectively in the graph of $U$. The point $0$ is the barrier between the two basins.

Figure 1.1: Double Well Potential

If $x$ and $y$ are from the same well, it has been shown that
\[ V(x,y) = 2\max\{ U(y) - U(x),\ 0 \}. \]
The factor 2 explains why we chose the wells' depths to be $V/2$ and $u/2$. From this formula, if $U(y) < U(x)$ then the quasipotential $V(x,y) = 0$: the particle goes "down" in the potential landscape from $x$ to $y$ along the deterministic trajectory, on which the action functional equals zero. If $U(y) > U(x)$, the way "up" from $x$ to $y$ costs twice the difference between $U(y)$ and $U(x)$.

If $x$ and $y$ are from different wells, we have several situations: either $x, y$ both lie between the two stable attractors, or at least one of $x, y$ is smaller than $-1$ or bigger than $1$. Here we only look at the first situation, since the second can be handled by combining the first situation with the case in which both $x$ and $y$ are in the same well. Assume that $-1 \le x, y \le 1$.
In this essential case, one has to overcome a potential barrier of height $U(0) - U(x)$ on the way between $x$ and $0$, while the way "down" to $y$ is free. Hence the quasipotential $V(x,y) = 2(U(0) - U(x))$. In a similar way, $V(y,x) = 2(U(0) - U(y))$. In particular, if $x = -1$ we have $V(x,y) = V$, and if $y = 1$, then $V(y,x) = u$.

Based on the above theorem, one can derive that the expected time to jump from basin 1 to basin 2 is exponentially large in $\varepsilon$, of order $e^{V/\varepsilon}$, while the expected time to jump back is smaller, of order $e^{u/\varepsilon}$. These observations suggest that in order to observe the inter-well motion of the diffusion, we have to wait an exponentially long time.

The paper [4] furthermore deals with a case where a driving with period $T$ occurs. The double-well potential is tilted back and forth, thereby raising and lowering successively the potential barriers of the left and the right wells respectively. This cyclical process is shown in Figure 1.2.

Figure 1.2: Cyclical Process

The theorem deals with the continuous diffusion $X^\varepsilon_t$. Now let us consider the following SDE in $\mathbb{R}^d$, $d \ge 1$:
\[ dX_t = V_0(X_t, t)\,dt + \varepsilon \sum_{\alpha=1}^k V_\alpha(X_t, t)\,dW^\alpha_t, \]
where $X_t$ is $d$-dimensional and $V_0(x,t), \{V_\alpha(x,t)\}$ are periodic in $t$ with period $T$. We define $Y_n = X_{nT}$. Then $Y_n$ is a $d$-dimensional discrete-time Markov chain, and the dynamics may thus be represented as a map $F: \mathbb{R}^d \to \mathbb{R}^d$,
\[ Y_{n+1} = F(Y_n, \xi_n), \]
where $\{\xi_n : n \ge 0\}$ are i.i.d. discrete noise terms. As far as we know, no theoretical result for this discrete-time model has appeared in the literature.

In their paper [1], Bollt, Billings and Schwartz obtain an MSI model of the same form as $Y_n$. Their model is based on the well-known SEIR model for the stochastic dynamics of disease epidemics. Suppose that the population is so large that the various subcategories can be considered continuous. Here $S$, $E$, $I$, $R$ represent the four subcategories into which the population is divided, and the subcategories evolve with time $t$.
$S(t)$ are the individuals susceptible to the disease; $E(t)$ are those who have been exposed to an infectious individual but are not yet infectious themselves; $I(t)$ are the infective individuals, capable of transmitting the disease; and $R(t)$ are those who have recovered from the disease. A susceptible individual becomes exposed after contact with an infective one; after a latent period, an exposed individual becomes infective, and eventually recovers. These categories are disjoint, so the population can be normalized to $S + E + I + R = 1$. Clearly, all dependent variables represent fractions of the population.

Since the dynamics from the exposed class to the infective class are ruled by linear kinetics, it has been shown that although the SEIR model is three-dimensional, the dynamics collapse onto a two-dimensional surface and the infectives are roughly proportional to the exposed. This relation indicates a modified SI model (MSI), given by
\[ S'(t) = \mu - \mu S(t) - \beta(t)\, I(t)\, S(t), \]
\[ I'(t) = \mu\alpha + \beta(t)\, I(t)\, S(t) - (\mu + \gamma)\, I(t), \]
\[ \beta(t) = \beta_0 \bigl(1 + \delta \cos 2\pi t\bigr), \qquad 0 \le \delta < 1, \]
where $\mu$ is the birth rate, $\alpha^{-1}$ is the mean latent period, and $\gamma^{-1}$ is the infectious period. We can vary $\delta$, the amplitude of the fluctuating contact rate [10]. Since the MSI model is periodically driven with period 1 and both $S$ and $I$ are fractions of the population, it may be viewed as a two-dimensional map of the unit box into itself.

The stochastic model is considered to be discrete; that is, noise with mean $0$ and standard deviation $\sigma$ is added to the population rate equations periodically (period $= 1$), at the same phase. The dynamics may then be represented as a map $F: \mathbb{R}^2 \to \mathbb{R}^2$, where
\[ (S, I)(t+1) = F[(S, I)(t)] + \eta(t), \]
and $\eta(t)$ is a discrete noise term.

In order to understand the dynamics of this model, Bollt, Billings and Schwartz developed a collection of tools to find, numerically, the expected time of escape from pseudo-basins and the regions of high transport activity. But they did not present any theoretical results for such a discrete-time model.

There are still many problems unsolved in the setting of bistability.
For instance, if we turn to the SDE with the continuous double-well potential, one can easily tell that the origin $0$ is the threshold: points to the left of it are attracted to $-1$, and all others are attracted to $1$. Thus we know the basins of $-1$ and $1$. But in many cases one might not be able to identify the basins of the two attractors so easily, since the basins may be mixed together. Say, using $A$ to denote the basins belonging to one attractor and $B$ to denote the basins belonging to the other, one might have a complicated bistable system containing infinitely many mixed basins, even when the map $F$ itself has a simple structure. One such example was given in Section 6 of Bollt, Billings and Schwartz's paper [1] (see Figure 1.3):
\[ f(x) = \begin{cases} 0.9\,(0.1 - x) & \text{if } 0 \le x \le 0.1, \\ 2.5\,(x - 0.1) & \text{if } 0.1 \le x \le 0.5, \\ 2.5\,(0.9 - x) & \text{if } 0.5 \le x < 0.9, \\ 1 - 0.9\,(x - 0.9) & \text{if } 0.9 \le x \le 1. \end{cases} \tag{1.1} \]

Figure 1.3: f(x) and Its Mixed Basins

Bollt, Billings and Schwartz also applied their numerical toolbox to this example. There they are interested in the transport of stochastic flux from one basin to another. Their method is based on a finite-rank Galerkin matrix, which is a matrix approximation of the Frobenius-Perron operator and can be interpreted as a transition matrix. They find the transport regions of a stochastic dynamical system by re-indexing this Galerkin matrix. The picture from their paper (Figure 1.4) shows the escape regions of our dynamical system found for $\varepsilon = 0.04$, labeled by the dashed black lines, overlaid on the basin maps for each basin. But the graph can only show vaguely where all the leakage takes place.

Figure 1.4: Transport Regions for the Dynamic System

A natural question arises for both models: how can one theoretically obtain results describing how the motion works in such a discrete-time bistable setting, just as the two theorems do for the continuous case? Where are the least unlikely places for the transport to occur? We will try to answer these questions in this paper.
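A quick executable sanity check of this example (my addition, not part of the thesis): the branch signs of the piecewise-linear map are reconstructed below so that the fixed points $9/190$, $1/6$, $9/14$, $181/190$ quoted in Chapter 2 come out exactly, with a jump discontinuity at $0.9$.

```python
# Sanity check (not from the thesis): the ladder map (1.1) should have stable
# fixed points 9/190 and 181/190, unstable fixed points 1/6 and 9/14,
# and a jump at 0.9; the branch signs here are reconstructed to match.

def f(x):
    if x <= 0.1:
        return 0.9 * (0.1 - x)        # slope -0.9: contracting toward 9/190
    if x <= 0.5:
        return 2.5 * (x - 0.1)        # slope 2.5: expanding around 1/6
    if x < 0.9:
        return 2.5 * (0.9 - x)        # slope -2.5: expanding around 9/14
    return 1.0 - 0.9 * (x - 0.9)      # slope -0.9: contracting toward 181/190

residuals = [abs(f(p) - p) for p in (9/190, 1/6, 9/14, 181/190)]
jump = f(0.9) - f(0.9 - 1e-12)        # discontinuity at 0.9: left limit 0, value 1
```

All four residuals vanish up to floating-point error, and the jump at $0.9$ has size $1$, consistent with the picture in Figure 1.3.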
The paper is laid out as follows. In Chapter 2 we set up the model of interest and state the assumptions; the action function and the quasipotential for discrete-time models are defined, and the main results are stated. Chapters 3 and 4 form the crux of this paper. In Chapter 3 we prove the main theorems: some properties related to the large deviations principle are provided, and problems about the exit time from one basin to another, the exit positions, and the invariant measure are studied in a general setting. Chapter 4 describes how to estimate the expected escape time from one attractor to the other for general (both linear and nonlinear) models, and gives the detailed calculation of the quasipotential for the model with $f$ as in (1.1). Future work is presented in the last chapter.

Chapter 2

The Model and Statements of Results

2.1 The Model

Let $f(x)$ be a fixed mapping of the unit interval $I = [0,1]$ to itself. We study the dynamical behavior of the Markov chain $X^\varepsilon_n$ arising from small perturbations of the system
\[ x_n = f(x_{n-1}), \qquad x_0 = x, \tag{2.1} \]
on $[0,1]$, namely
\[ X^\varepsilon_n = \varphi\bigl( f(X^\varepsilon_{n-1}) + \varepsilon\,\sigma(X^\varepsilon_{n-1})\,\xi_{n-1} \bigr), \qquad X_0 = x, \tag{2.2} \]
where $\{\xi_n : n \ge 0\}$ are independent standard normal random variables, $0 < c_1 \le \sigma(x) \le c_2 < \infty$ for all $x \in [0,1]$ with constants $c_1, c_2$, and $\varphi: \mathbb{R} \to [0,1]$ is defined by
\[ \varphi(x) = \begin{cases} 0 & \text{if } x \le 0, \\ x & \text{if } 0 \le x \le 1, \\ 1 & \text{if } x \ge 1. \end{cases} \]
We use $\varphi$ to ensure that $X^\varepsilon_n$ always stays in $[0,1]$, and we use a superscript $\varepsilon$ to denote the dependence of the law of the Markov chain on $\varepsilon$.

Our goal is to resolve the following three issues for the Markov chain $\{X^\varepsilon_n : n \ge 0\}$, under all the assumptions of Section 2.2, as $\varepsilon \searrow 0$: first, the expected transition time from one basin to the other; second, the places where the transitions take place; third, the invariant measure of $X^\varepsilon_n$. Since $X^\varepsilon_n$ is restricted to the interval $I$, we have the following trick to work around the truncation.
Suppose we define an extension $\tilde f$ of $f$ to the real line by $\tilde f(x) = f(\varphi(x))$, an extension $\tilde\sigma$ of $\sigma$ to the real line by $\tilde\sigma(x) = \sigma(\varphi(x))$, and a Markov chain
\[ \tilde X^\varepsilon_n = \tilde f(\tilde X^\varepsilon_{n-1}) + \varepsilon\,\tilde\sigma(\tilde X^\varepsilon_{n-1})\,\xi_{n-1}. \tag{2.3} \]
Notice that if $X^\varepsilon_{n-1} = \varphi(\tilde X^\varepsilon_{n-1})$ then
\[ X^\varepsilon_n = \varphi\bigl( f(X^\varepsilon_{n-1}) + \varepsilon\,\sigma(X^\varepsilon_{n-1})\,\xi_{n-1} \bigr) = \varphi\bigl( f(\varphi(\tilde X^\varepsilon_{n-1})) + \varepsilon\,\sigma(\varphi(\tilde X^\varepsilon_{n-1}))\,\xi_{n-1} \bigr) = \varphi\bigl( \tilde f(\tilde X^\varepsilon_{n-1}) + \varepsilon\,\tilde\sigma(\tilde X^\varepsilon_{n-1})\,\xi_{n-1} \bigr) = \varphi(\tilde X^\varepsilon_n). \]
Therefore if $X_0 = \tilde X_0$, then $X^\varepsilon_n = \varphi(\tilde X^\varepsilon_n)$ for all $n \ge 0$. So we can always study the chain $\{\tilde X^\varepsilon_n : n \ge 0\}$ and, at the very last step, apply the mapping $\varphi$ in order to get results for the chain $\{X^\varepsilon_n : n \ge 0\}$. In particular, if $\tilde\mu^\varepsilon$ is a stationary probability measure for $\{\tilde X^\varepsilon_n : n \ge 0\}$, then
\[ \mu^\varepsilon(A) := \tilde\mu^\varepsilon\bigl(\varphi^{-1}(A)\bigr), \qquad A \subseteq [0,1], \]
is stationary for $\{X^\varepsilon_n : n \ge 0\}$. Notice that $\mu^\varepsilon(A) = \tilde\mu^\varepsilon(A)$ for $A \subseteq (0,1)$, while $\mu^\varepsilon(\{0\}) = \tilde\mu^\varepsilon((-\infty, 0])$ and $\mu^\varepsilon(\{1\}) = \tilde\mu^\varepsilon([1,\infty))$. There is a similar formula relating hitting times: if $A \subseteq (0,1)$ then
\[ \inf\{ n \ge 0 : X^\varepsilon_n \in A \} = \inf\{ n \ge 0 : \tilde X^\varepsilon_n \in A \}. \]

2.2 Assumptions

Suppose $f$ satisfies the following assumptions:

1. $f$ is bounded and piecewise continuous on $I$. Denote
\[ D = \{ d_i : f \text{ is discontinuous at } d_i,\ 0 < i < J+1 \} = \{ d_i : 0\,(= d_0) < d_1 < \cdots < d_J < (d_{J+1} =)\,1 \}. \]
$f|_{(d_i, d_{i+1})}$ has a continuous extension to $[d_i, d_{i+1}]$ for all $0 \le i \le J$, so $f|_{(d_i, d_{i+1})}$ is uniformly continuous on each of the finitely many intervals. Assume the set $\{ x : f^i(x) \in D \text{ for some } i \}$ is countable.

2. Let $a, b$ be the only two stable fixed points of $f$, i.e. $f(a) = a$, $f(b) = b$, and suppose there exist $\rho < 1$ and $\delta_0 > 0$ such that
\[ |f(x) - a| \le \rho\,|x - a| \ \text{ if } |x - a| < \delta_0, \qquad |f(x) - b| \le \rho\,|x - b| \ \text{ if } |x - b| < \delta_0, \]
and $f$ is continuous on $(a - \delta_0, a + \delta_0) \cup (b - \delta_0, b + \delta_0)$. $f$ may have several unstable fixed points.

3. For any $l > 0$, if there is no discontinuity point of $f$ between $x, y \in I$ with $|x - y| < l$, then the Lipschitz condition holds:
\[ |f(x) - f(y)| \le L\,|x - y|, \]
where $L \ge 1$ is the Lipschitz coefficient.

Let us use the notation
\[ U_0 = \{ x : |a - x| \le h \}, \qquad U_1 = \{ x : |b - x| \le h \} \]
for $0 < h < \delta_0/2$, where $\delta_0$ is from Assumption 2.
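To see the transition behavior that $U_0$ and $U_1$ are designed to capture, here is a minimal simulation sketch of the chain (2.2) — an illustration added here, not part of the thesis. It uses the piecewise-linear map (1.1) with the branch signs reconstructed so that $a = 9/190$ and $b = 181/190$ are exact fixed points, takes $\sigma \equiv 1$, and makes illustrative choices of $\varepsilon$, $h$, and the random seed; it records the first hitting time of $U_1$ for the chain started at $a$.

```python
import numpy as np

def f(x):
    # ladder map (1.1); branch signs reconstructed so that a = 9/190 and
    # b = 181/190 are exact fixed points
    if x <= 0.1:
        return 0.9 * (0.1 - x)
    if x <= 0.5:
        return 2.5 * (x - 0.1)
    if x < 0.9:
        return 2.5 * (0.9 - x)
    return 1.0 - 0.9 * (x - 0.9)

def phi(x):
    # the truncation map of Section 2.1, keeping the chain inside [0, 1]
    return min(max(x, 0.0), 1.0)

def first_hit_U1(eps, h=0.02, max_steps=10**6, seed=0):
    """First hitting time of U_1 = {x : |b - x| <= h} for the chain (2.2)
    started at a, with sigma ≡ 1 (eps, h, seed are illustrative choices)."""
    a, b = 9 / 190, 181 / 190
    rng = np.random.default_rng(seed)
    x = a
    for n in range(1, max_steps + 1):
        x = phi(f(x) + eps * rng.standard_normal())
        if abs(x - b) <= h:
            return n
    return None

tau = first_hit_U1(eps=0.05)
```

At this (large) noise level the transition happens quickly; Theorem 1 below says that as $\varepsilon \searrow 0$ the hitting time instead concentrates around $e^{V(a,b)/\varepsilon^2}$.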
Define
\[ \tilde A_N = \{ x : f^N(x) \in U_0 \}, \qquad \tilde B_N = \{ x : f^N(x) \in U_1 \}, \qquad \tilde C_N = I \setminus (\tilde A_N \cup \tilde B_N). \]
Let $C$ be the collection of all unstable fixed points.

4. Assume $\bigcup_N \tilde A_N \cup \bigcup_N \tilde B_N$ is dense and $C$ is finite.

5. Assume that, given $\gamma > 0$, there is $n > 0$ such that for all $x \in \tilde C_n$,
\[ E_{x,\varepsilon}(\tau_n) \le k_n \exp\Bigl( \frac{\gamma}{\varepsilon^2} \Bigr), \]
where $k_n$ is a constant depending on $n$, and
\[ \tau_n = \inf\{ k \ge 0 : X^\varepsilon_k \notin \tilde C_n \}. \]

6. Because $P_{\varepsilon,x}\{ X_n = d_i \text{ for some } n \ge 1 \} = 0$, we may assume without loss of generality that $f(d_i) = f(d_i^+)$. Denote
\[ E = \{ f(x^+) : x \in D \} \cup \{ f(x^-) : x \in D \}, \]
which is a finite set since $D$ is finite. If $y \in E$ then $f^n(y) \notin D$ for all $n \ge 0$, and $f^n(y) \to a$ or $b$ as $n \to \infty$.

2.3 Main Results

Definition. Let $S$ be any Borel subset of the bounded closed set $I$. Define
\[ \tau_S = \inf\{ n \ge 0 : X^\varepsilon_n \in S \}, \]
the time needed to hit the set $S$, with the usual convention $\inf(\emptyset) = \infty$.

Definition. For any sequence $\mathbf{x} = (x_0, x_1, x_2, \ldots, x_N)$, we call
\[ S_N(\mathbf{x}) = \frac12 \sum_{i=0}^{N-1} \left( \frac{x_{i+1} - f(x_i)}{\sigma(x_i)} \right)^2 \tag{2.4} \]
the action function.

Definition. For any $x, y \in I$ we define the quasipotentials
\[ V_N(x,y) = \inf\{ S_N(\mathbf{x}) : x_0 = x,\ x_N = y \} \quad \text{and} \quad V(x,y) = \inf\{ V_N(x,y) : N \ge 1 \}. \tag{2.5} \]

For the dynamical system (2.1) and (2.2) with the mapping $f$ satisfying all the assumptions, we have three theorems. Our first theorem estimates the expected waiting time before the transition from the neighborhood of one stable fixed point to that of the other.

Theorem 1. For any $\gamma > 0$ there exist $\varepsilon_0 > 0$, $K_1 = K_1(\gamma, h) < \infty$, and $K_2 = K_2(\gamma, h) < \infty$ such that
\[ K_1\, e^{\frac{V(a,b) - \gamma}{\varepsilon^2}} \le E_x(\tau_{U_1}) \le K_2\, e^{\frac{V(a,b) + \gamma}{\varepsilon^2}} \qquad \text{for } 0 < \varepsilon < \varepsilon_0, \]
and
\[ \lim_{\varepsilon\to 0} P_x\Bigl\{ e^{\frac{V(a,b) - \gamma}{\varepsilon^2}} \le \tau_{U_1} \le e^{\frac{V(a,b) + \gamma}{\varepsilon^2}} \Bigr\} = 1 \tag{2.6} \]
for any $x \in U_0$, where the quasipotential $V(a,b)$ is defined by (2.5), and the limit (2.6) is uniform in $x$. Symmetrically, for any $y \in U_1$ we have
\[ K_1\, e^{\frac{V(b,a) - \gamma}{\varepsilon^2}} \le E_y(\tau_{U_0}) \le K_2\, e^{\frac{V(b,a) + \gamma}{\varepsilon^2}} \qquad \text{for } 0 < \varepsilon < \varepsilon_0, \]
and
\[ \lim_{\varepsilon\to 0} P_y\Bigl\{ e^{\frac{V(b,a) - \gamma}{\varepsilon^2}} \le \tau_{U_0} \le e^{\frac{V(b,a) + \gamma}{\varepsilon^2}} \Bigr\} = 1, \]
and the limit is uniform in $y$.

The second theorem tells where the most likely exit positions from one basin to the other are.

Theorem 2. Suppose that $F$ is an open subset of $[0,1]$ such that the complement of $\overline F$ is dense in the complement $F^c$ of $F$. Suppose also $f(\overline F) \subseteq F$, $f$ is continuous on $\overline F$, and $f^n(x) \to a$ as $n \to \infty$ for all $x \in \overline F$. Define
\[ V_F = \min\{ V(a,y) : y \notin F \}, \qquad O_F = \{ z \notin F : V(a,z) = V_F \}. \]
Given $\delta > 0$,
\[ \lim_{\varepsilon\to 0} P_{x,\varepsilon}\bigl( d(X^\varepsilon_{\tau_{F^c}},\, O_F) < \delta \bigr) = 1 \qquad \text{for all } x \in F. \]

The third theorem describes properties of the invariant measure of $X^\varepsilon_n$.

Theorem 3. Assume $\mu^\varepsilon$ is a normalized invariant measure for $X^\varepsilon_n$. Then $\mu^\varepsilon$ is unique. If $V(a,b) > V(b,a)$, we have
\[ \mu^\varepsilon(U_1) \to 0, \qquad \mu^\varepsilon(U_0) \to 1 \]
as $\varepsilon \to 0$. Furthermore, let
\[ \Delta = \min\bigl( V(a, a-h),\ V(b, b-h),\ V(a,b) - V(b,a) \bigr); \]
then there exist $\varepsilon_0, \gamma_1 > 0$ such that for $0 < \gamma < \gamma_1$ and some $K = K(\gamma, h) < \infty$ we have
\[ |\mu^\varepsilon(U_0) - 1| \le K\, e^{-\frac{\Delta - \gamma}{\varepsilon^2}} \qquad \text{for } 0 < \varepsilon < \varepsilon_0. \]

In particular, one can apply these three theorems to the system with the map $f$ given by (1.1). We want to study the dynamical behavior of (2.2) with $\sigma(x) \equiv 1$ and this $f$, which has two stable fixed points $a = 9/190$, $b = 181/190$ and two unstable fixed points $c = 1/6$, $d = 9/14$. Notice that $f(x)$ is discontinuous at the point $0.9$. For this $f$ given by (1.1) we will show:

Proposition 4. $f$ satisfies all the assumptions in Section 2.2.

It follows from this proposition that we can directly apply Theorem 1 to the model (2.2) with this $f$. What is more, we can calculate the quasipotentials exactly in this special case:

Proposition 5.
\[ V(a,b) = \frac{4777}{1781250} = 2.6818 \times 10^{-3}, \qquad V(b,a) = \frac{1}{3800} = 2.6316 \times 10^{-4}. \]

Thus we can apply Theorem 3 with $V(a,b)$ and $V(b,a)$ given in Proposition 5. And with the help of Theorem 2, one has the exit positions for the special model:

Proposition 6. Given $\delta > 0$ and $x \in U_0$, we have
\[ \lim_{\varepsilon\to 0} P_x\Bigl( X^\varepsilon_k \in \bigl( \tfrac16 - \delta,\ \tfrac16 + \delta \bigr) \text{ for some } k < \tau_{U_1} \Bigr) = 1. \]
For $y \in U_1$, we have
\[ \lim_{\varepsilon\to 0} P_y\bigl( X^\varepsilon_k \in (0.9 - \delta,\ 0.9 + \delta) \text{ for some } k < \tau_{U_0} \bigr) = 1. \]

We also have some results on checking Assumption 5 for other maps, both linear and nonlinear; see Section 4.3.
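The exact values in Proposition 5 can be cross-checked numerically: by (2.4)-(2.5) with $\sigma \equiv 1$, $V(x,y)$ is a shortest-path cost with one-step cost $\frac12 (y' - f(x'))^2$, so discretizing $[0,1]$ and running a Bellman iteration gives an approximation from above. The sketch below is illustrative and is not the thesis's method; the grid size, iteration count, and the branch signs of the piecewise map are assumptions made here.

```python
import numpy as np

def f(x):
    # ladder map (1.1); branch signs reconstructed so that the fixed points
    # 9/190, 1/6, 9/14, 181/190 quoted in the text come out exactly
    return np.where(x <= 0.1, 0.9 * (0.1 - x),
           np.where(x <= 0.5, 2.5 * (x - 0.1),
           np.where(x < 0.9, 2.5 * (0.9 - x),
                    1.0 - 0.9 * (x - 0.9))))

N = 1601                               # grid resolution (illustrative choice)
grid = np.linspace(0.0, 1.0, N)
# one-step action (2.4) with sigma ≡ 1: cost[i, j] is the cost of x_i -> x_j
cost = 0.5 * (grid[None, :] - f(grid)[:, None]) ** 2

def quasipotential_from(start, n_iter=200):
    # Bellman iteration: V[j] -> inf over grid paths of any length start -> x_j
    V = np.full(N, np.inf)
    V[start] = 0.0
    for _ in range(n_iter):
        V = np.minimum(V, (V[:, None] + cost).min(axis=0))
    return V

ia = int(np.argmin(np.abs(grid - 9 / 190)))     # stable fixed point a
ib = int(np.argmin(np.abs(grid - 181 / 190)))   # stable fixed point b
V_ab = float(quasipotential_from(ia)[ib])       # ≈ V(a, b)
V_ba = float(quasipotential_from(ib)[ia])       # ≈ V(b, a)
```

Grid paths are a subset of all paths, so these estimates can only overshoot the true infima slightly; they should land close to Proposition 5's $4777/1781250 \approx 2.68 \times 10^{-3}$ and $1/3800 \approx 2.63 \times 10^{-4}$, and they reproduce the asymmetry $V(b,a) < V(a,b)$ that drives Theorem 3.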
Chapter 3

Proof of Main Theorems

Before we prove the three theorems, we first establish large deviations results that are true not only for the ladder map $f$ under our assumptions, but also for $f$ in a more general setting.

3.1 Large Deviations

Let $f(x)$ be a piecewise continuous mapping of $I$ to itself. We consider the large deviations behavior of $X^\varepsilon_n$ as $\varepsilon \to 0$. For fixed $N \ge 1$, we use $\mathbf{x} = (x_0, x_1, \ldots, x_N)$ to denote any sequence in $I$, and $\mathbf{X}^\varepsilon = (X_0, X^\varepsilon_1, X^\varepsilon_2, \ldots, X^\varepsilon_N)$ to denote the trajectory of the dynamical system (2.3).

Proposition 7. Assume $f(x)$ and $\sigma(x)$ are continuous on $G$, where $G$ is closed in $I$, and the sequence $\mathbf{x} \in G^N$. Then for $\varepsilon > 0$, $P_{x_0}$ satisfies the large deviation principle as $\varepsilon \to 0$; i.e.,
\[ \lim_{\delta\downarrow 0} \liminf_{\varepsilon\downarrow 0} \varepsilon^2 \ln P_{x_0}\bigl( |X^\varepsilon_i - x_i| < \delta \text{ for } i = 1, 2, \ldots, N \bigr) \ge -S_N(\mathbf{x}), \]
\[ \lim_{\delta\downarrow 0} \limsup_{\varepsilon\downarrow 0} \varepsilon^2 \ln P_{x_0}\bigl( |X^\varepsilon_i - x_i| < \delta \text{ for } i = 1, 2, \ldots, N \bigr) \le -S_N(\mathbf{x}). \]

Proof. The joint density function of $\mathbf{X}^\varepsilon = (X_0, X^\varepsilon_1, X^\varepsilon_2, \ldots, X^\varepsilon_N)$ is
\[ p_{x_0,\varepsilon}(y_1, \ldots, y_N) = \prod_{i=0}^{N-1} \frac{1}{\sqrt{2\pi\varepsilon^2\sigma^2(y_i)}} \exp\left( -\frac{(y_{i+1} - f(y_i))^2}{2\varepsilon^2\sigma^2(y_i)} \right), \qquad y_0 = x_0, \]
\[ = \left( \frac{1}{\sqrt{2\pi\varepsilon^2}} \right)^N \left( \prod_{i=0}^{N-1} \frac{1}{\sigma(y_i)} \right) \exp\left( -\frac{1}{2\varepsilon^2} \sum_{i=0}^{N-1} \left( \frac{y_{i+1} - f(y_i)}{\sigma(y_i)} \right)^2 \right). \]
For any $\delta > 0$, substituting $y_i \to y_i + x_i$,
\[ P_{x_0}\bigl( |X^\varepsilon_i - x_i| < \delta \text{ for } i = 1, \ldots, N \bigr) = \int\!\!\cdots\!\!\int_{|y_1|<\delta, \ldots, |y_N|<\delta} \left( \frac{1}{\sqrt{2\pi\varepsilon^2}} \right)^N \exp\left( -\frac{1}{2\varepsilon^2} \sum_{i=0}^{N-1} \left( \frac{x_{i+1} - f(x_i)}{\sigma(x_i)} \right)^2 \right) \]
\[ \times \left( \prod_{i=0}^{N-1} \frac{1}{\sigma(y_i + x_i)} \right) \exp\left( \frac{1}{2\varepsilon^2} \left[ \sum_{i=0}^{N-1} \left( \frac{x_{i+1} - f(x_i)}{\sigma(x_i)} \right)^2 - \sum_{i=0}^{N-1} \left( \frac{y_{i+1} + x_{i+1} - f(y_i + x_i)}{\sigma(y_i + x_i)} \right)^2 \right] \right) dy_1 \cdots dy_N. \tag{3.1} \]

Since $f, \sigma$ are uniformly continuous on $G$ and $\sigma > 0$, for any $\gamma > 0$ there exists $\delta_0 = \delta_0(G, N, \gamma) > 0$ such that for any $\delta < \delta_0$,
\[ \left| \sum_{i=0}^{N-1} \left( \frac{x_{i+1} - f(x_i)}{\sigma(x_i)} \right)^2 - \sum_{i=0}^{N-1} \left( \frac{y_{i+1} + x_{i+1} - f(y_i + x_i)}{\sigma(y_i + x_i)} \right)^2 \right| < \gamma \]
for all $\mathbf{x} \in G^N$ and $|y_i| < \delta$, all $i$. In other words, the last exponential factor in (3.1) lies between $\exp\{-\gamma/2\varepsilon^2\}$ and $\exp\{\gamma/2\varepsilon^2\}$. Since
\[ (3.1) \ge \left( \frac{1}{\sqrt{2\pi\varepsilon^2 c_2^2}} \right)^N \exp\left( -\frac{1}{2\varepsilon^2} \sum_{i=0}^{N-1} \left( \frac{x_{i+1} - f(x_i)}{\sigma(x_i)} \right)^2 \right) \exp\left\{ -\frac{\gamma}{2\varepsilon^2} \right\} C, \qquad C = \int\!\!\cdots\!\!\int_{|y_1|<\delta, \ldots, |y_N|<\delta} dy_1 \cdots dy_N, \]
we have
\[ \liminf_{\varepsilon\downarrow 0} \varepsilon^2 \ln P_{x_0}\bigl( |X^\varepsilon_i - x_i| < \delta,\ i = 1, \ldots, N \bigr) \ge \liminf_{\varepsilon\downarrow 0} \left[ -N\varepsilon^2 \ln\sqrt{2\pi\varepsilon^2 c_2^2} - \frac12 \sum_{i=0}^{N-1} \left( \frac{x_{i+1} - f(x_i)}{\sigma(x_i)} \right)^2 - \frac{\gamma}{2} + \varepsilon^2 \ln C \right] = -\frac12 \sum_{i=0}^{N-1} \left( \frac{x_{i+1} - f(x_i)}{\sigma(x_i)} \right)^2 - \frac{\gamma}{2}. \]
Since $\gamma \to 0$ as $\delta \to 0$, we conclude that
\[ \lim_{\delta\downarrow 0} \liminf_{\varepsilon\downarrow 0} \varepsilon^2 \ln P_{x_0}\bigl( |X^\varepsilon_i - x_i| < \delta \text{ for } i = 1, \ldots, N \bigr) \ge -\frac12 \sum_{i=0}^{N-1} \left( \frac{x_{i+1} - f(x_i)}{\sigma(x_i)} \right)^2. \tag{3.2} \]
On the other hand,
\[ (3.1) \le \left( \frac{1}{\sqrt{2\pi\varepsilon^2 c_1^2}} \right)^N \exp\left( -\frac{1}{2\varepsilon^2} \sum_{i=0}^{N-1} \left( \frac{x_{i+1} - f(x_i)}{\sigma(x_i)} \right)^2 \right) \exp\left\{ \frac{\gamma}{2\varepsilon^2} \right\} C, \]
so we have
\[ \limsup_{\varepsilon\downarrow 0} \varepsilon^2 \ln P_{x_0}\bigl( |X^\varepsilon_i - x_i| < \delta,\ i = 1, \ldots, N \bigr) \le -\frac12 \sum_{i=0}^{N-1} \left( \frac{x_{i+1} - f(x_i)}{\sigma(x_i)} \right)^2 + \frac{\gamma}{2}, \]
and thus the other direction of the inequality,
\[ \lim_{\delta\downarrow 0} \limsup_{\varepsilon\downarrow 0} \varepsilon^2 \ln P_{x_0}\bigl( |X^\varepsilon_i - x_i| < \delta \text{ for } i = 1, \ldots, N \bigr) \le -\frac12 \sum_{i=0}^{N-1} \left( \frac{x_{i+1} - f(x_i)}{\sigma(x_i)} \right)^2. \]

Let $\rho_N(\mathbf{x}, \mathbf{y}) = \max_{1\le i\le N} |x_i - y_i|$ be the metric on $\mathbb{R}^N$. The action function has the following properties:

Proposition 8. (a) Assume $f(x)$ and $\sigma(x)$ are continuous on $G$, where $G$ is closed in $I$, and the sequence $\mathbf{x} \in G^N$ starts from $x_0$. For any $\delta > 0$ and any $\gamma > 0$, there exists an $\varepsilon_0 = \varepsilon_0(G, N, \delta, \gamma) > 0$ such that
\[ P_{x_0}\{ \rho_N(\mathbf{X}^\varepsilon, \mathbf{x}) < \delta \} \ge \exp\bigl( -\varepsilon^{-2}(S_N(\mathbf{x}) + \gamma) \bigr) \]
for all $0 < \varepsilon \le \varepsilon_0$.

(b) Let $s$ be a positive number. For any $y \in \mathbb{R}$, define
\[ \Phi_y(s) = \{ \mathbf{x} = (x_0, x_1, x_2, \ldots, x_N) : x_0 = y,\ S_N(\mathbf{x}) \le s \}. \]
If the initial point is understood, we write $\Phi(s)$ for simplicity. For any $\delta > 0$, any $\gamma > 0$, and any $s_0 > 0$ there exists an $\varepsilon_0 = \varepsilon_0(N) > 0$ such that
\[ P_{x_0}\{ \rho_N(\mathbf{X}^\varepsilon, \Phi(s)) \ge \delta \} \le \exp\bigl( -\varepsilon^{-2}(s - \gamma) \bigr) \]
for $0 < \varepsilon \le \varepsilon_0$ and $s \le s_0$.

Proof. (a) This follows directly from (3.2).

(b) For this result, we need to estimate the probability that the trajectory $\mathbf{X}^\varepsilon$ of our process moves far away from the set of sequences whose action functions have small values. Since
\[ P_{x_0}\{ \rho_N(\mathbf{X}^\varepsilon, \Phi(s)) \ge \delta \} \le P\{ S_N(\mathbf{X}^\varepsilon) \ge s \} \]
(if $S_N(\mathbf{X}^\varepsilon) < s$ then $\mathbf{X}^\varepsilon \in \Phi(s)$ and the distance is $0$), we only need to show that
\[ P_{x_0}\{ S_N(\mathbf{X}^\varepsilon) \ge s \} \le \exp\bigl( -\varepsilon^{-2}(s - \gamma) \bigr). \]
By the definition (2.4), and since $X^\varepsilon_{i+1} - f(X^\varepsilon_i) = \varepsilon\,\sigma(X^\varepsilon_i)\,\xi_i$ by (2.3), we get
\[ P_{x_0}\{ S_N(\mathbf{X}^\varepsilon) \ge s \} = P_{x_0}\left\{ \frac12 \sum_{i=0}^{N-1} \left( \frac{X^\varepsilon_{i+1} - f(X^\varepsilon_i)}{\sigma(X^\varepsilon_i)} \right)^2 \ge s \right\} = P_{x_0}\left\{ \frac{\varepsilon^2}{2} \sum_{i=0}^{N-1} \xi_i^2 \ge s \right\} = P\left\{ \sum_{i=0}^{N-1} \xi_i^2 \ge 2\varepsilon^{-2} s \right\}. \tag{3.3} \]
Since
\[ E \exp\left( \frac{1-\gamma}{2}\, \xi_i^2 \right) = \frac{1}{\sqrt{\gamma}} =: C_\gamma < +\infty \quad \text{for every } \gamma \in (0,1), \]
Chebyshev's inequality bounds the right-hand side of (3.3) by
\[ P\left\{ \sum_{i=0}^{N-1} \xi_i^2 \ge 2\varepsilon^{-2} s \right\} \le E \exp\left( \frac{1-\gamma}{2} \sum_{i=0}^{N-1} \xi_i^2 \right) \exp\bigl\{ -(1-\gamma)\,\varepsilon^{-2} s \bigr\} = C_\gamma^N \exp\bigl\{ -\varepsilon^{-2} s (1-\gamma) \bigr\}. \]
Since $s(1-\gamma) \ge s - \gamma s_0$ for $s \le s_0$, and the constant $C_\gamma^N$ is absorbed into the exponent for small $\varepsilon$, this implies that there exists an $\varepsilon_0 = \varepsilon_0(N) > 0$ such that
\[ P_{x_0}\{ \rho_N(\mathbf{X}^\varepsilon, \Phi(s)) \ge \delta \} \le \exp\bigl( -\varepsilon^{-2}(s - \gamma) \bigr) \]
for $\varepsilon < \varepsilon_0$ and $s \le s_0$.

Remark. We do not require continuity of $f, \sigma$ along the sequence $\mathbf{x}$ for Proposition 8(b).

The following proposition gives the relation between the action function and the quasipotential, and a property of the quasipotential, with no continuity of $f, \sigma$ involved.

Proposition 9. (a) For any $x, y \in I$ and any $\gamma > 0$, there exist a finite $N$ and $\mathbf{x} = (x_0 = x, x_1, \ldots, x_N = y)$ such that $S_N(\mathbf{x}) \le V(x,y) + \gamma$.

(b) Assume $f(x)$ is bounded on the set $I$. Then there exists a constant $C$ such that
\[ |V(x,y) - V(x,\hat y)| \le C\,|y - \hat y| \]
for any $x, y, \hat y \in I$.

Proof. (a) This follows directly from the definition of the quasipotential $V(x,y)$.

(b) Consider two sequences $\mathbf{x} = (z_0 = x, z_1, \ldots, z_{n-1}, z_n = y)$ and $\widehat{\mathbf{x}} = (z_0 = x, z_1, \ldots, z_{n-1}, \hat z_n = \hat y)$ with the same first $n$ points $z_0 = x, z_1, \ldots, z_{n-1}$.
By the definition (2.4) of the action, we have
\[ S_N(\widehat{\mathbf{x}}) = \frac12 \sum_{i=0}^{n-2} \left( \frac{z_{i+1} - f(z_i)}{\sigma(z_i)} \right)^2 + \frac12 \left( \frac{\hat y - f(z_{n-1})}{\sigma(z_{n-1})} \right)^2 = \frac12 \sum_{i=0}^{n-1} \left( \frac{z_{i+1} - f(z_i)}{\sigma(z_i)} \right)^2 + \frac12 \left[ \left( \frac{\hat y - f(z_{n-1})}{\sigma(z_{n-1})} \right)^2 - \left( \frac{y - f(z_{n-1})}{\sigma(z_{n-1})} \right)^2 \right] \]
\[ = S_N(\mathbf{x}) + \frac12\, (\hat y - y)\,\bigl( \hat y + y - 2 f(z_{n-1}) \bigr)\, \frac{1}{\sigma^2(z_{n-1})}. \]
Since $f(x)$ is bounded on the closed set $I$ (and $\sigma \ge c_1 > 0$), there exists a constant $C$, independent of $\hat y, y$, such that
\[ \left| \frac{\hat y + y - 2 f(z_{n-1})}{2\,\sigma^2(z_{n-1})} \right| \le C. \]
So we get $S_N(\widehat{\mathbf{x}}) \le S_N(\mathbf{x}) + C\,|y - \hat y|$ for any such pair of sequences $\mathbf{x}, \widehat{\mathbf{x}}$. Using $V(x, \hat y) \le S_N(\widehat{\mathbf{x}})$, we have
\[ V(x, \hat y) \le S_N(\mathbf{x}) + C\,|y - \hat y|. \]
Taking the infimum over all possible sequences $\mathbf{x}$ and all $N$, we get
\[ V(x, \hat y) \le V(x, y) + C\,|y - \hat y|. \]
By the symmetry of $\hat y$ and $y$, (b) is proved.

3.2 Structure of the General Mapping f

Before we study the structure of the domain of $f$, which will be used in the proofs of the main theorems, we need the following lemmas.

Lemma 10. Assume Assumptions 1, 2 and 6. Given $0 < \delta' < \delta_0$ (the same $\delta_0$ as in Assumption 2), there is a uniform $\bar h > 0$ such that for all $z \in \bigcup_{x\in D} [x - \bar h, x + \bar h]$:

(a) $\operatorname{dist}(f^n(z), D) \ge \Delta > 0$, where $\Delta$ is a constant, for any $n \ge 1$;

(b) there exists $M_0 > 0$, independent of $z$, such that for all $n \ge M_0$ we have
\[ f^n(z) \in (a - \delta'/2,\ a + \delta'/2) \cup (b - \delta'/2,\ b + \delta'/2). \]

Proof. Let $\Delta_1 = \operatorname{dist}(\{a,b\}, D)$ and $\varepsilon = \Delta_1/8$. By Assumption 6, there exists $N_0$ such that
\[ |f^n(y) - a| < \varepsilon \quad \text{or} \quad |f^n(y) - b| < \varepsilon \qquad \text{for all } y \in E,\ n \ge N_0. \]
Let $\Delta_2 = \min_{y\in E} \min_{0\le n<N_0} \operatorname{dist}(f^n(y), D)$; clearly $\Delta_2 > 0$. Thus for $y \in E$, $n \ge N_0$, $\operatorname{dist}(f^n(y), D) \ge \Delta_1 - \varepsilon = \frac78 \Delta_1$. Take $\Delta = \min\bigl( \Delta_2, \frac78 \Delta_1 \bigr)$; then for all $y \in E$, $n \ge 0$,
\[ \operatorname{dist}(f^n(y), D) \ge \Delta. \]
WLOG, we only prove the lemma when, say, $z \in (x - \bar h, x)$ for some $x \in D$. Let $y = f(x^-)$ and assume $f^n(y) \to a$. For any $0 < \varepsilon < \min\{\delta'/2, \Delta\}$, there exists $N = N(y) \ge 0$ with
\[ |f^N(y) - a| < \varepsilon/2. \]
Since $f^n(y) \notin D$, for $n \ge 1$ we have
\[ \lim_{z\uparrow x} f^n(z) = f^{n-1}(y). \]
And given $\varepsilon$, there is $\delta'_N > 0$ such that
\[ |f^N(z) - f^{N-1}(y)| < \delta'_N \implies |f^{N+1}(z) - f^N(y)| < \varepsilon/2; \]
then there is $\delta'_{N-1} > 0$ such that
\[ |f^{N-1}(z) - f^{N-2}(y)| < \delta'_{N-1} \implies |f^N(z) - f^{N-1}(y)| < \min(\delta'_N, \varepsilon/2), \]
and so on.
Thus there exists $\bar h = \bar h(y) = \delta'_0 > 0$ such that for $i = 1, \ldots, N+1$ and any $z \in (x - \bar h, x)$,
\[ |f^i(z) - f^{i-1}(y)| < \varepsilon/2, \]
and
\[ \operatorname{dist}\bigl( f^i(z), D \bigr) \ge \operatorname{dist}\bigl( f^{i-1}(y), D \bigr) - \varepsilon/2 \ge \Delta - \varepsilon/2 \ge \Delta/2. \]
Thus $|f^{N+1}(z) - a| < \varepsilon < \delta'/2$ for all $z \in (x - \bar h, x)$. For any $n \ge N+1$, by Assumption 2, $f^n(z) \notin D$, $|f^n(z) - a| < \varepsilon$, and
\[ \operatorname{dist}\{ f^n(z), D \} \ge \Delta_1 - \varepsilon. \]
Since $D$ is finite, we may take the uniform $\bar h = \min_{y\in E} \bar h(y)$. $N$ is independent of $z$, and so is $M_0$, because $E$ is also finite. Assumption 6 ensures that the lemma is also true when $z = x \in D$.

Lemma 11. Assume Assumptions 1, 2 and 6. Given $n > 0$, $0 < \delta' < \delta_0$ (the same $\delta_0$ as in Assumption 2), and $\gamma > 0$, there exists $\varepsilon_0 > 0$ such that
\[ P_z\{ |X^\varepsilon_n - f^n(z)| < \delta'/2 \} \ge 1 - \gamma \]
for $0 < \varepsilon < \varepsilon_0$ and all $z \in \bigcup_{x\in D} [x - \bar h, x + \bar h]$.

Proof. Notice that $\{\xi_i\}_{i=0}^{n-1}$ are independent standard normal random variables. First we show that for any $\delta' < \delta_0$ there exist $h_0 = \delta'/2, h_1, \ldots, h_{n-1}$, all positive and depending on $\delta'$, and an $\varepsilon_1 > 0$, such that
\[ |\varepsilon\xi_0| < h_{n-1}/2,\ \ldots,\ |\varepsilon\xi_{n-1}| < h_0/2 \implies |X^\varepsilon_n - f^n(z)| < h_0 = \delta'/2 \]
If we take $0 < \varepsilon < \varepsilon_0 = \min(\varepsilon_1, \varepsilon_2)$, we have
\[
P_z\{|X^\varepsilon_n - f^n(z)| < \delta_0/2\} \ge P_z\{|\varepsilon\xi_0| < h_{n-1}/2, \dots, |\varepsilon\xi_{n-1}| < h_0/2\} \ge 1-\gamma. \qquad\qed
\]

Following from the two lemmas, we choose $0 < h < \delta_0$, $h' < \min(\bar{h}, \rho)$, and a sequence $h_n \to 0$ with $h' \ge h_1 > h_2 > \cdots$. Define
\[
A_n = \{x\in[0,1] : f^n(x)\in(a-h, a+h) \text{ and } f^i(x)\notin D_{h_n} \text{ for } 0\le i\le n-1\} = \tilde{A}_n \setminus D_n,
\]
where
\[
D_n = \{x : f^i(x)\in D_{h_n} \text{ for some } 0\le i\le n-1\}, \tag{3.4}
\]
\[
D_{h_n} = \{y : \exists\, x\in D,\ |x-y|\le h_n\} = \bigcup_{x\in D}[x-h_n, x+h_n].
\]
Similarly,
\[
D_{h'} = \{y : \exists\, x\in D,\ |x-y|\le h'\} = \bigcup_{x\in D}[x-h', x+h'],
\]
and $f^n(y)\notin D_{h'}$ for all $n\ge 0$, $y\in E$. Since $A_n\cap D_n = \emptyset$, we have $\bar{A}_n\cap D = \emptyset$, where $\bar{A}_n$ is the closure of $A_n$. Thus $f$ is continuous on $\bar{A}_n$. In the same way one defines $B_n$ for the other stable fixed point $b$.

Remark. Also by the two lemmas, when $n > M_0$ the set $\tilde{A}_n\cup\tilde{B}_n$ in Assumption 5 can be written as $\tilde{A}_n\cup\tilde{B}_n = A_n\cup B_n\cup D_n$. Indeed, it will be easier to study the properties of $f$ and to prove the theorems using the form $A_n\cup B_n\cup D_n$ in the later sections.

For the next lemma, let $F$ be any set with the properties:
(i) there exist $n_0 > 0$ and $0 < \delta_2 < \delta_0$ such that $f^{n_0-1}(x)\in(a-\delta_2, a+\delta_2)$ for all $x\in F$;
(ii) $f^i(x)\notin D_{n_0}$ for all $i\le n_0$ and $x\in F$, where $D_{n_0}$ is given by (3.4).

Lemma 12. Assume Assumptions 1 and 2, assume $\sigma(x)$ is continuous, and let $F$ be as above. Then for the dynamical system (2.1), any positive $\delta_1 < \delta_0$, and any such set $F$:
(a) there exist $a_0 > 0$ and $N_0 > 0$, both dependent on $F$ and $\delta_1$, such that for any sequence $\mathbf{x} = (x, x_1, \dots, x_N)$ taking its values in the set $(\bar{F}\setminus(a-\delta_1, a+\delta_1))^{N+1}$ with $N > N_0$, we have the inequality
\[
S_N(\mathbf{x}) > a_0(N - N_0).
\]
Here $\bar{F}$ denotes the closure of $F$.
(b) there exists $c_0 > 0$ such that for all sufficiently small $\varepsilon > 0$ and any $x\in \bar{F}\setminus(a-\delta_1, a+\delta_1)$, we have the inequality
\[
P_x\{\tau_1 > N\} \le \exp\{-\varepsilon^{-2} c_0 (N-N_0)\},
\]
where $\tau_1 = \inf\{k : X^\varepsilon_k \notin \bar{F}\setminus(a-\delta_1, a+\delta_1)\}$.

Proof. (a) We claim: if $x\in\bar{F}$, then $f^{n_0}(x)\in(a-\delta_2, a+\delta_2)$. Indeed, if $x$ is in the interior of $F$, then by the property of $F$, $f^{n_0-1}(x)\in(a-\delta_2, a+\delta_2)$. If $x\in\partial F$, there are $x_k\in F$ such that $x_k\to x$. Consider the sequence $\{f^i(x)\}$.
If $f^i(x)\notin D$ for $i = 0, 1, \dots, n-1$, then, using the continuity of $f$ at $x, f(x), \dots, f^{n-1}(x)$ and $x_k\to x$, we have
\[
f(x_k)\to f(x),\; f^2(x_k)\to f^2(x),\; \dots,\; f^n(x_k)\to f^n(x).
\]
Since $f^{n_0-1}(x_k)\in(a-\delta_2, a+\delta_2)$, we get $f^{n_0-1}(x)\in[a-\delta_2, a+\delta_2]$, and thus $f^{n_0}(x)\in(a-\delta_2, a+\delta_2)$. Now we show that $f^i(x)\notin D$ for all $i\le n_0-1$. If not, assume there is an $i$ such that $x, f(x), \dots, f^{i-1}(x)$ are all not in $D$ but $f^i(x)\in D$. Using the continuity again, we have
\[
f(x_k)\to f(x),\; \dots,\; f^{i-1}(x_k)\to f^{i-1}(x), \quad\text{and}\quad f^i(x_k)\to f^i(x).
\]
But $f^i(x_k)\notin D_{h_{n_0}}\supset D$, while $f^i(x)\in D$. This is a contradiction. Thus we have $f^{n_0}(x)\in(a-\delta_2, a+\delta_2)$ for all $x\in\partial F$.

By the selection of $\delta_1$, we know that any trajectory $\mathbf{x} = (x, x_1, \dots, x_n, \dots)$ of the dynamical system (2.1) never leaves $(a-\delta_1, a+\delta_1)$ if $x\in(a-\delta_1, a+\delta_1)$. The same holds if we replace $\delta_1$ by $\delta_2$ here. And there exists $n_0'\ge 0$ such that $f^{n_0'}(x)\in(a-\delta_1, a+\delta_1)$ for any $x\in(a-\delta_2, a+\delta_2)$. We thus have $f^{N_0}(x)\in(a-\delta_1, a+\delta_1)$ for $x\in\bar{F}$, where $N_0 = n_0 + n_0'$.

Notice that since $f$ is continuous on $\bar{F}$, the action function
\[
S_N(\mathbf{x}) = \frac{1}{2}\sum_{i=0}^{N-1}\left(\frac{x_{i+1}-f(x_i)}{\sigma(x_i)}\right)^2
\]
is continuous on the compact set $(\bar{F}\setminus(a-\delta_1, a+\delta_1))^{N+1}$. So $S_N$ attains its infimum on this set. For $N\ge N_0$ this infimum is different from $0$, since otherwise some trajectory of the dynamical system would belong to this set, which contradicts the fact that a trajectory of (2.1) takes values only in $(a-\delta_1, a+\delta_1)$ after $N\ge N_0$ steps. Thus for all such trajectories $\mathbf{x}$ there exists $a_1 > 0$ such that
\[
S_{N_0}(\mathbf{x}) \ge a_1 > 0.
\]
Using the additivity of the action functional, for sequences $\mathbf{x}\in(\bar{F}\setminus(a-\delta_1, a+\delta_1))^{N+1}$ with $N > N_0$ we have $S_N(\mathbf{x})\ge a_1$; for sequences with $N > 2N_0$ we have $S_N(\mathbf{x})\ge 2a_1$, etc. In general, we have
\[
S_N(\mathbf{x}) \ge a_1[N/N_0] > a_1(N/N_0 - 1) = a_0(N - N_0), \qquad a_0 = a_1/N_0.
\]

(b) Now for fixed $\delta_2 > 0$, take $\delta_2 < \delta_3 < \delta_0$. We can find a set $F'$ satisfying: (i) $f^{n_0}(x)\in(a-\delta_3, a+\delta_3)$ for all $x\in F'$; (ii) $f^i(x)\notin D_{n_0}$ for all $i\le n_0$, $x\in F'$. So $F'$ is also attracted to $a$, and $F'\supset F$.
Assume $0 < \delta < \min\{\delta_1/2,\ (\delta_3-\delta_2)/3^{n_0}\}$. Using assertion (a), there exist constants $N_0$ and $a_1$ such that $S_{N_0}(\mathbf{x}) > a_1$ for sequences that do not leave $F'$ and do not enter $(a-\delta_1/2, a+\delta_1/2)$. For $x\in\bar{F}$, the sequences in the set
\[
\Phi_x(a_1) = \{\mathbf{x} = (x, \dots, x_{N_0}) : S_{N_0}(\mathbf{x}) \le a_1\}
\]
reach $(a-\delta_1/2, a+\delta_1/2)$ or leave $F'$ during the time from $0$ to $N_0$; the trajectories of $X^\varepsilon_n$ for which $\tau_1 > N_0$ are at a distance not smaller than $\delta$ from this set. Using Proposition 8 for the action function, this implies that for small $\varepsilon$, any $\gamma > 0$, and all $x\in\bar{F}$ we have
\[
P_x\{\tau_1 > N_0\} \le P_x\{\rho_{N_0}(X^\varepsilon, \Phi_x(a_1)) \ge \delta\} \le \exp\{-\varepsilon^{-2}(a_1-\gamma)\}.
\]
Then, using the Markov property of $X^\varepsilon_n$, we get
\[
P_x\{\tau_1 > (n+1)N_0\} = E_x\left[\tau_1 > nN_0;\ P_{X^\varepsilon_{nN_0}}\{\tau_1 > N_0\}\right] \le P_x\{\tau_1 > nN_0\}\,\sup_{y\in\bar{F}} P_y\{\tau_1 > N_0\},
\]
and by induction we have
\[
P_x\{\tau_1 > N\} \le \left(\sup_{y\in\bar{F}} P_y\{\tau_1 > N_0\}\right)^{[N/N_0]} \le \exp\left\{-\varepsilon^{-2}\left(\frac{N}{N_0}-1\right)(a_1-\gamma)\right\}.
\]
Thus we take $c_0 = (a_1-\gamma)/N_0$, where $\gamma$ is arbitrarily small. $\qed$

The next result is needed in the proof of Theorem 19.

Lemma 13. Assume Assumptions 1, 2, 6. If $F$ is a compact subset of $A_n$, then there is $\delta > 0$ such that $d(f^i(x), A^c_n) \ge \delta$ for all $i\ge 1$ and $x\in F$.

Proof. Since $f^i(x)\in U_0$ for all $i > n$, we only need to consider the case $i\le n$. For any $x\in F$, since $x, f(x), f^2(x), \dots, f^{n-1}(x)\in A_n$, we have
\[
\delta_x = \min_{0\le i\le n-1}\operatorname{dist}(f^i(x), A^c_n) > 0.
\]
Since $x\notin D_n$, there exists $\eta_x > 0$ such that for any $y$ with $|y-x| < \eta_x$ we have $|f^i(y) - f^i(x)| < \delta_x/2$. Thus
\[
\operatorname{dist}(f^i(y), A^c_n) > \frac{\delta_x}{2} > 0 \quad\text{if } |y-x| < \eta_x.
\]
Since $F$ is compact and $\bigcup_{x\in F}(x-\eta_x, x+\eta_x)\supset F$, there exist finitely many $x_1, \dots, x_m\in F$ such that
\[
\bigcup_{i=1}^m (x_i-\eta_{x_i}, x_i+\eta_{x_i})\supset F.
\]
Thus
\[
d(f^i(x), A^c_n) \ge \min_i \frac{\delta_{x_i}}{2} > 0. \qquad\qed
\]

3.3 Uniform Convergence of $X^\varepsilon_n$

Because of the discontinuities of $f$, it is not necessarily true that $X^\varepsilon_n$ converges to $x_n$ uniformly for all $x_0\in I$; that is, it need not hold that for fixed $\gamma > 0$ and every $\delta > 0$ there exists $\varepsilon_0 > 0$ such that
\[
P_{x_0}\{|X^\varepsilon_n - x_n| < \delta \text{ for } n\ge 0\} \ge 1-\gamma
\]
for all $x_0\in I$ and $0 < \varepsilon < \varepsilon_0$. Here $x_n$ is the trajectory of (2.1). In this section we show that, under Assumption 3, $X^\varepsilon_n$ converges to $x_n$ uniformly when the initial point is not in $D$.
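This behavior is easy to see numerically. The sketch below simulates the perturbed recursion $X^\varepsilon_n = f(X^\varepsilon_{n-1}) + \varepsilon\xi_{n-1}$ against the deterministic orbit $x_n$. The piecewise-linear bistable map `f`, its stable fixed points at $0.25$ and $0.75$, and the single discontinuity at $0.5$ are hypothetical stand-ins for the general map of Chapter 2, chosen only to make the sketch self-contained.

```python
import random

# Hypothetical piecewise-linear bistable map (an assumption for this sketch):
# stable fixed points a = 0.25 and b = 0.75, one discontinuity at x = 0.5
# separating the two basins of attraction.
def f(x):
    return 0.25 + 0.5 * (x - 0.25) if x < 0.5 else 0.75 + 0.5 * (x - 0.75)

def orbit(x0, n_steps, eps=0.0, rng=None):
    """Iterate X_n = f(X_{n-1}) + eps * xi_{n-1}, with xi iid N(0, 1)."""
    xs = [x0]
    for _ in range(n_steps):
        noise = eps * rng.gauss(0.0, 1.0) if (rng is not None and eps > 0.0) else 0.0
        xs.append(f(xs[-1]) + noise)
    return xs

# Starting away from the discontinuity set D = {0.5}, the perturbed orbit
# shadows the deterministic one over a fixed horizon when eps is small.
x0, M = 0.9, 30
det = orbit(x0, M)                                   # deterministic orbit x_n
per = orbit(x0, M, eps=0.01, rng=random.Random(0))   # perturbed orbit X^eps_n
deviation = max(abs(p - d) for p, d in zip(per, det))
```

Starting near $0.5$, by contrast, an arbitrarily small kick can send the two orbits into different basins, which is exactly why the uniform convergence is claimed only for initial points off $D$.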
Let
\[
\theta = \inf\{n\ge 0 : \text{there is at least one discontinuity point of } f \text{ between } x_n \text{ and } X^\varepsilon_n\}.
\]
We have

Lemma 14. Assume Assumption 3. For fixed $M > 0$, $l > 0$, and any $\gamma > 0$, there is $\varepsilon_0 = \varepsilon_0(M, l, \gamma)$ such that
\[
P_x\{|X^\varepsilon_n - x_n| < l \text{ for } 0\le n\le M\wedge\theta\} \ge 1-\gamma
\]
for all $x\notin D$ and $0 < \varepsilon < \varepsilon_0$.

Proof. Take
\[
0 < \varepsilon \le \varepsilon_0 = \left(\frac{\gamma(L^2-1)\,l^2}{(L^{2M}-1)M}\right)^{1/2}
\]
and any starting point $x\notin D$, and consider the following. If $n-1\ge\theta$, then
\[
E\left[|X^\varepsilon_{n\wedge\theta} - x_{n\wedge\theta}|^2 \mid \mathcal{F}_{n-1}\right] = |X^\varepsilon_{(n-1)\wedge\theta} - x_{(n-1)\wedge\theta}|^2.
\]
If $n\le\theta$, then there is no discontinuity point between $X^\varepsilon_k$ and $x_k$ for $0\le k < n$, so $|f(X^\varepsilon_k) - f(x_k)| \le L|X^\varepsilon_k - x_k|$ for $0\le k < n$. Noticing that the event $\{\theta\ge n\}$ is $\mathcal{F}_{n-1}$-measurable, we have
\[
E\left[E\left[|X^\varepsilon_n - x_n|^2 \mid \mathcal{F}_{n-1}\right]\mathbf{1}_{\{\theta\ge n\}}\right]
= E\left[E\left[\left(f(X^\varepsilon_{n-1}) + \varepsilon\xi_{n-1} - f(x_{n-1})\right)^2 \mid \mathcal{F}_{n-1}\right]\mathbf{1}_{\{\theta\ge n\}}\right]
\]
\[
= E\left[\left(\left(f(X^\varepsilon_{n-1}) - f(x_{n-1})\right)^2 + 2\varepsilon\left(f(X^\varepsilon_{n-1}) - f(x_{n-1})\right)E[\xi_{n-1}] + \varepsilon^2 E[\xi^2_{n-1}]\right)\mathbf{1}_{\{\theta\ge n\}}\right]
\]
\[
\le E\left[\left(L^2|X^\varepsilon_{n-1} - x_{n-1}|^2 + \varepsilon^2\right)\mathbf{1}_{\{\theta\ge n\}}\right] = E\left[\left(L^2|X^\varepsilon_{(n-1)\wedge\theta} - x_{(n-1)\wedge\theta}|^2 + \varepsilon^2\right)\mathbf{1}_{\{\theta\ge n\}}\right].
\]
Combining these two cases, we get
\[
E|X^\varepsilon_{n\wedge\theta} - x_{n\wedge\theta}|^2 \le L^2\,E|X^\varepsilon_{(n-1)\wedge\theta} - x_{(n-1)\wedge\theta}|^2 + \varepsilon^2 \le \cdots \le \varepsilon^2\left(1 + L^2 + \cdots + L^{2n-2}\right) = \varepsilon^2\,\frac{L^{2n}-1}{L^2-1}.
\]
By Chebyshev's inequality, we have
\[
P\{|X^\varepsilon_{n\wedge\theta} - x_{n\wedge\theta}| \ge l\} \le \frac{\varepsilon^2}{l^2}\cdot\frac{L^{2n}-1}{L^2-1} \le \frac{c\,\varepsilon^2}{l^2}, \qquad c = \frac{L^{2M}-1}{L^2-1}.
\]
Since
\[
\left\{\max_{0\le n\le M\wedge\theta}|X^\varepsilon_n - x_n| < l\right\} = \left\{\max_{0\le n\le M}|X^\varepsilon_{n\wedge\theta} - x_{n\wedge\theta}| < l\right\},
\]
we get
\[
P\left\{\max_{0\le n\le M\wedge\theta}|X^\varepsilon_n - x_n| < l\right\} = 1 - P\left\{\bigcup_{n=1}^M \left\{|X^\varepsilon_{n\wedge\theta} - x_{n\wedge\theta}| \ge l\right\}\right\} \ge 1 - \sum_{n=1}^M \frac{c\,\varepsilon^2}{l^2} \ge 1 - \frac{cM\varepsilon_0^2}{l^2} \ge 1-\gamma. \qquad\qed
\]

Using Lemma 14 and Assumption 4, we can now estimate, starting from the sets $A_N$ and $B_N$, the probabilities that $X^\varepsilon$ hits $U_0 = [a-h/2, a+h/2]$ and $U_1$, respectively.

Lemma 15. Assume Assumptions 1, 2, 3. For fixed $N > 0$ and any $\gamma > 0$, there exist $K < \infty$ and $\varepsilon_0 > 0$ such that
\[
P_x\{\tau_{U_0} \le K\} \ge 1-\gamma, \qquad P_y\{\tau_{U_1} \le K\} \ge 1-\gamma,
\]
for all $x\in A_N$, $y\in B_N$, and $0 < \varepsilon < \varepsilon_0$.

Proof.
In Lemma 14 we fix $0 < l \le \min(h/4, h')$, where $h$ and $h'$ come from the constructions of $U_0$ and $D_{h'}$. We claim that if $x\notin D_N$, then from
\[
|X^\varepsilon_n - x_n| < l \quad\text{for } 0\le n\le N\wedge\theta
\]
we can obtain
\[
|X^\varepsilon_n - x_n| < l \quad\text{for } 0\le n\le N.
\]
Indeed, suppose $\theta < N$. Since $|X^\varepsilon_\theta - x_\theta| < l < h'$ and there exists $p\in D$ between $X^\varepsilon_\theta$ and $x_\theta$, we must have
\[
x_\theta\in(p-h', p+h')\subset D_{h'},
\]
which contradicts $x_\theta\notin D_{h'}$. In other words, $\theta\ge N$.

Now if we start at $x\in A_N$, then $x_N\in(a-h, a+h)$. Since $f$ on $(a-h, a+h)$ is attracted to the point $a$, there exists $m\ge 0$ such that
\[
N+m < \theta, \qquad |x_{N+m}-a| \le l.
\]
And since $|X^\varepsilon_{N+m} - x_{N+m}| < l$, we have
\[
X^\varepsilon_{N+m}\in\left[a-\frac{h}{2},\, a+\frac{h}{2}\right] = U_0.
\]
Take $K = N+m$ and apply Lemma 14 to obtain
\[
P_x\{\tau_{U_0}\le K\}\ge 1-\gamma.
\]
The other inequality can be proved in the same way. $\qed$

In order to estimate the probability that $X^\varepsilon$ hits $(a-\delta, a+\delta)\cup(b-\delta, b+\delta)$ starting from $D_N$, we need Lemmas 10 and 11.

Lemma 16. Assume Assumptions 1, 2, 3, 6. For fixed $N > 0$, $0 < \delta < \delta_0$, and any $\gamma > 0$, there exist $K < \infty$ and $\varepsilon_0 > 0$ such that
\[
P_z\{\tau_{(a-\delta, a+\delta)\cup(b-\delta, b+\delta)} \le K\} \ge 1-\gamma
\]
for all $z\in D_N$ and $0 < \varepsilon < \varepsilon_0$.

Proof. By the construction of $D_N$, for any $z\in D_N$ there exists $n_1 = n(z)\le N$ such that the trajectory $z_i$ of the unperturbed system (2.1) satisfies
\[
|z_i - q| \ge h' \text{ for } i < n_1 \text{ and all } q\in D, \qquad \exists\, p\in D \text{ with } z_{n_1}\in(p-h', p+h').
\]
In Lemma 14 we take $l < \min(h', \bar{h}-h')$. Then on the set
\[
\{\omega : |X^\varepsilon_i - z_i| < l \text{ for } 0\le i\le M\wedge\theta\}
\]
we have $\theta\ge n_1$. Indeed, suppose $\theta < n_1$. Since $|X^\varepsilon_\theta - z_\theta| < l < h'$ and there exists $q\in D$ between $X^\varepsilon_\theta$ and $z_\theta$, we must have $z_\theta\in(q-h', q+h')$, which contradicts how the $z_i$ were chosen. So, by Lemma 14 with $M = N$, for any $\gamma > 0$ and any small $l > 0$ there is $\varepsilon_1 > 0$ such that
\[
P_z\{|X^\varepsilon_i - z_i| < l \text{ for } 0\le i\le n_1\} \ge P_z\{|X^\varepsilon_i - z_i| < l \text{ for } 0\le i\le M\wedge\theta\} \ge 1-\gamma/2,
\]
uniformly in $z$ and all $0 < \varepsilon < \varepsilon_1$. So on the set
\[
\{\omega : |X^\varepsilon_i - z_i| < l \text{ for } 0\le i\le n_1\}
\]
we have $|X^\varepsilon_{n_1} - p| < l + h' < \bar{h}$. It follows that
\[
P_z\left\{X^\varepsilon_{n_1}\in\bigcup_{x\in D}[x-\bar{h}, x+\bar{h}]\right\} \ge 1-\gamma/2. \tag{3.5}
\]
Then by Lemma 10, taking $\delta_0 < \delta$, there exists $n_2\le M_0$, independent of $X^\varepsilon_{n_1}$, such that
\[
f^{n_2}(X^\varepsilon_{n_1})\in(a-\delta_0/2, a+\delta_0/2)\cup(b-\delta_0/2, b+\delta_0/2).
\]
Together with $|X^\varepsilon_{n_1+n_2} - f^{n_2}(X^\varepsilon_{n_1})| < \delta_0/2$, we have
\[
X^\varepsilon_{n_1+n_2}\in(a-\delta_0, a+\delta_0)\cup(b-\delta_0, b+\delta_0)\subset(a-\delta, a+\delta)\cup(b-\delta, b+\delta).
\]
And following from Lemma 11, there exists $\varepsilon_2 > 0$ such that
\[
P_{X^\varepsilon_{n_1}}\{X^\varepsilon_{n_1+n_2}\in(a-\delta, a+\delta)\cup(b-\delta, b+\delta)\} \ge 1-\gamma/2.
\]
It follows that for $z\in D_N$,
\[
P_z\left\{X^\varepsilon_{n_1+n_2}\notin(a-\delta, a+\delta)\cup(b-\delta, b+\delta)\ \Big|\ X^\varepsilon_{n_1}\in\bigcup_{x\in D}[x-\bar{h}, x+\bar{h}]\right\} \le \gamma/2,
\]
so that
\[
P_z\left\{X^\varepsilon_{n_1+n_2}\notin(a-\delta, a+\delta)\cup(b-\delta, b+\delta),\ X^\varepsilon_{n_1}\in\bigcup_{x\in D}[x-\bar{h}, x+\bar{h}]\right\} \le \frac{\gamma}{2}\,P_z\left\{X^\varepsilon_{n_1}\in\bigcup_{x\in D}[x-\bar{h}, x+\bar{h}]\right\} \le \frac{\gamma}{2}
\]
for $0 < \varepsilon < \varepsilon_2$. Together with (3.5), and taking $\varepsilon_0 = \min\{\varepsilon_1, \varepsilon_2\} > 0$, we have
\[
P_z\left\{X^\varepsilon_{n_1+n_2}\in(a-\delta, a+\delta)\cup(b-\delta, b+\delta),\ X^\varepsilon_{n_1}\in\bigcup_{x\in D}[x-\bar{h}, x+\bar{h}]\right\} \ge 1-\gamma/2-\gamma/2 = 1-\gamma
\]
if $0 < \varepsilon < \varepsilon_0$. Thus
\[
P_z\{X^\varepsilon_{n_1+n_2}\in(a-\delta, a+\delta)\cup(b-\delta, b+\delta)\} \ge 1-\gamma
\]
for $0 < \varepsilon < \varepsilon_0$. Since
\[
\{X^\varepsilon_{n_1+n_2}\in(a-\delta, a+\delta)\cup(b-\delta, b+\delta)\}\subset\{\tau_{(a-\delta, a+\delta)\cup(b-\delta, b+\delta)}\le N+M_0\},
\]
we take $K = N+M_0 \ge n_1+n_2$ and obtain
\[
P_z\{\tau_{(a-\delta, a+\delta)\cup(b-\delta, b+\delta)}\le K\} \ge P_z\{X^\varepsilon_{n_1+n_2}\in(a-\delta, a+\delta)\cup(b-\delta, b+\delta)\} \ge 1-\gamma
\]
for any $z\in D_N$ and $0 < \varepsilon < \varepsilon_0$. $\qed$

The next lemma says that, starting from a point in $A_N$, it is highly likely that the trajectory hits $U_0$ before it leaves $\bar{A}_N$.

Lemma 17. Assume Assumptions 1, 2, 3, 6. Fix $N > 0$ and $\gamma > 0$, and suppose $F$ is a compact subset of $A_N$. Then there exists $\varepsilon_0 > 0$ such that
\[
P_x\left\{\tau_{U_0\cup(\bar{A}_N)^c} < \tau_{(\bar{A}_N)^c}\right\} \ge 1-\gamma
\]
for all $x\in F$ and $0 < \varepsilon < \varepsilon_0$.

Proof. In Lemma 14 we fix $0 < l \le \min(h/4, h', \delta)$, where $\delta$ is from Lemma 13.
By the proof of Lemma 15, we know that for some $m > 0$,
\[
P_x\{|X^\varepsilon_n - x_n| < l \text{ for } 0\le n\le N+m\} \ge 1-\gamma.
\]
Together with Lemma 13 and $|X^\varepsilon_n - x_n| < l < \delta$, it follows that
\[
\{|X^\varepsilon_n - x_n| < l \text{ for } 0\le n\le N+m\}
\subset \{X^\varepsilon_{N+m}\in U_0 \text{ and } X^\varepsilon_n\in\bar{A}_N \text{ for all } n\le N+m\}
\subset \left\{\tau_{U_0\cup(\bar{A}_N)^c} < \tau_{(\bar{A}_N)^c}\right\}.
\]
Thus
\[
P_x\left\{\tau_{U_0\cup(\bar{A}_N)^c} < \tau_{(\bar{A}_N)^c}\right\} = P_x\{X^\varepsilon_n \text{ hits } U_0 \text{ before it leaves } \bar{A}_N\} \ge P_x\{|X^\varepsilon_n - x_n| < l \text{ for } 0\le n\le N+m\} \ge 1-\gamma. \qquad\qed
\]

3.4 Expected Exit Time from Basins of Stable Fixed Points

The next results study the behavior of systems (2.1) and (2.3) on the sets $A_n$. Since the structures of $A_n$ and $B_n$ are the same, the results are also applicable to $B_n$.

Lemma 18. Assume Assumptions 1, 2. For $\gamma > 0$ and $0 < h < \delta_0$, there exist $N\in\mathbb{N}$ and $\varepsilon_0 > 0$ such that
\[
P_x\left\{\tau_{(\bar{A}_n)^c}\le N\right\} \ge \exp\{-\varepsilon^{-2}(V_n+\gamma)\}
\]
for all $x\in\bar{A}_n$ and $0 < \varepsilon < \varepsilon_0$, where $V_n = \inf_{y\notin\bar{A}_n} V(a, y)$.

Proof. In this proof, to simplify the notation, we drop the subscript of the action function whenever the path $\mathbf{x}$ and the action function share the same subscript; from now on, when there is no confusion, we will always use this simplified notation. We claim: for $\gamma > 0$, there exists $N\in\mathbb{N}$ such that for any $x\in\bar{A}_n$ and $0 < \delta \le \gamma/4C$, there exist $y\in(\delta\text{-neighborhood of } \bar{A}_n)^c$ and a path $\mathbf{x}_{N_x}(x, y) = (x_0 = x, x_1, \dots, x_{N_x} = y)$ with $N_x\le N$ such that
\[
S(\mathbf{x}_{N_x}(x, y)) \le V_n+\gamma;
\]
here $C$ is the constant given in Proposition 9. Indeed, since $0 < \delta \le \gamma/4C$, by Proposition 9(b) there exists $y\in(\delta\text{-neighborhood of } \bar{A}_n)^c$ such that
\[
V(a, y) \le V_n + C\delta \le V_n + \frac{\gamma}{4}.
\]
For such $y$, by Proposition 9(a), there exist a finite $N_1$ and a path $\mathbf{x}_{N_1}(a, y) = (x_0 = a, x_1, \dots, x_{N_1} = y)$ such that
\[
S(\mathbf{x}_{N_1}(a, y)) \le V(a, y) + \frac{\gamma}{4} \le V_n + \frac{\gamma}{2}.
\]
And for any $x\in\bar{A}_n$, we have $f^n(x)\in(a-h, a+h)$.
So we can choose $N_2$ large enough, independent of $x$, such that
\[
\frac{1}{2}\left(\frac{a - f^{N_2}(x)}{\sigma(f^{N_2-1}(x))}\right)^2 \le \frac{\gamma}{2},
\]
and take the path $\mathbf{x}_{N_2}(x, a) = (x, f(x), f^{(2)}(x), \dots, f^{(N_2-1)}(x), a)$. Thus
\[
S(\mathbf{x}_{N_2}(x, a)) = \frac{1}{2}\left(\frac{a - f^{N_2}(x)}{\sigma(f^{N_2-1}(x))}\right)^2 \le \frac{\gamma}{2}.
\]
So we get a path $\mathbf{x}_{N_x}(x, y) = (x_0 = x, \dots, x_{N_2} = a, \dots, x_{N_1+N_2} = y)$ with $N_x = N_1+N_2$ independent of $x$ and
\[
S(\mathbf{x}_{N_x}(x, y)) \le V_n + \gamma.
\]
Then, by Proposition 8, for $x\in\bar{A}_n$ we have
\[
P_x\left\{\sup_{0\le n\le N_x}|X^\varepsilon_n - x_n| < \delta\right\} \ge \exp\left\{-\varepsilon^{-2}\left(S(\mathbf{x}_{N_x}(x, y)) + \gamma\right)\right\} \ge \exp\{-\varepsilon^{-2}(V_n + 2\gamma)\} \tag{3.6}
\]
for sufficiently small $\varepsilon > 0$. On the other hand, if $\sup_{0\le n\le N_x}|X^\varepsilon_n - x_n| < \delta$ and $x_{N_x}\in(\delta\text{-neighborhood of } \bar{A}_n)^c$, then $X^\varepsilon_{N_x}\notin\bar{A}_n$, so $\tau_{(\bar{A}_n)^c}\le N_x$. Thus we have
\[
P_x\left\{\tau_{(\bar{A}_n)^c}\le N_x\right\} \ge P_x\left\{\sup_{0\le n\le N_x}|X^\varepsilon_n - x_n| < \delta\right\}. \tag{3.7}
\]
From (3.6) and (3.7), and since $\gamma$ is arbitrary, we conclude that
\[
P_x\left\{\tau_{(\bar{A}_n)^c}\le N\right\} \ge P_x\left\{\tau_{(\bar{A}_n)^c}\le N_x\right\} \ge \exp\{-\varepsilon^{-2}(V_n + \gamma)\}. \qquad\qed
\]

Theorem 19. Assume Assumptions 1, 2, 3, 6. Given $n\ge 1$ and $\gamma > 0$:
(1) there exists $\varepsilon_0 > 0$ such that
\[
E_{x,\varepsilon}\left(\tau_{(\bar{A}_n)^c}\right) \le e^{(V_n+\gamma)/\varepsilon^2}
\]
for all $x\in\bar{A}_n$ and $0 < \varepsilon < \varepsilon_0$, where $V_n = \inf_{y\notin\bar{A}_n} V(a, y)$;
(2) if $F\subset A_n$ and $F$ is closed, then there exists $\varepsilon_0 > 0$ such that
\[
E_{x,\varepsilon}\left(\tau_{(\bar{A}_n)^c}\right) \ge e^{(V_n-\gamma)/\varepsilon^2}
\]
for all $x\in F$ and $0 < \varepsilon < \varepsilon_0$.

Proof of Theorem. To prove (1), we use Lemma 18: for any point $x\in\bar{A}_n$, we have
\[
P_x\left\{\tau_{(\bar{A}_n)^c}\le N\right\} \ge \exp\{-\varepsilon^{-2}(V_n+\gamma)\}. \tag{3.8}
\]
Then, using the Markov property of $X^\varepsilon_n$ and writing $\tau = \tau_{(\bar{A}_n)^c}$, we have
\[
P_x\{\tau > kN\} = E_x\left[P_x\{\tau > kN\}\mid\mathcal{F}_{(k-1)N}\right] = E_x\left[\tau > (k-1)N;\ P_{X^\varepsilon_{(k-1)N}}\{\tau > N\}\right]
\]
\[
\le P_x\{\tau > (k-1)N\}\,\max_{z\in\bar{A}_n} P_z\{\tau > N\} \le \left(\max_{z\in\bar{A}_n} P_z\{\tau > N\}\right)^k,
\]
where we used induction in the last step. Together with (3.8) we have
\[
E_x(\tau) \le \sum_{k=0}^{\infty}(k+1)N\,P_x\{kN < \tau \le (k+1)N\} = N\sum_{k=0}^{\infty}\sum_{i=1}^{k+1} P_x\{kN < \tau \le (k+1)N\}
\]
\[
= N\sum_{i=1}^{\infty}\sum_{k=i-1}^{\infty} P_x\{kN < \tau \le (k+1)N\} = N\sum_{i=1}^{\infty} P_x\{\tau > (i-1)N\} = N\sum_{k=0}^{\infty} P_x\{\tau > kN\}
\]
\[
\le N\sum_{k=0}^{\infty}\left(1 - \min_{z\in\bar{A}_n} P_z\{\tau\le N\}\right)^k = \frac{N}{\min_{z\in\bar{A}_n} P_z\{\tau\le N\}} \le 2N\exp\left\{\frac{V_n+\gamma}{\varepsilon^2}\right\} \le \exp\left\{\frac{V_n+2\gamma}{\varepsilon^2}\right\},
\]
whenever $\varepsilon$ is sufficiently small. Since $\gamma$ is arbitrary, (1) is proved.

Now we prove the other direction, inequality (2). Let $\Gamma_1 = U_0\cup\bar{A}^c_n$ and $\Gamma_2 = \bar{A}_n\setminus(a-h, a+h)$.
We define the Markov times $\sigma_0, \sigma_1, \dots$ as follows: $\sigma_0 = 0$; $\sigma_k = \inf\{l > \sigma_{k-1} : X^\varepsilon_l\in\Gamma_1\}$. If at some step the process $X^\varepsilon_n$ no longer reaches the set $\Gamma_1$, we set the corresponding Markov time and all subsequent ones equal to $+\infty$. And we define $Z_n = X^\varepsilon_{\sigma_n}$. It forms a Markov chain on $\Gamma_1$, since $\Gamma_1$ is closed. Then for the one-step transition probabilities of this chain (specifically, the probability that $X^\varepsilon_n$ leaves $(a-h, a+h)$ and hits $\bar{A}^c_n$ before hitting $U_0$) we have the estimate
\[
P\{x, \bar{A}^c_n\} \le \max_{y\in\Gamma_2}P_y\{\sigma_1 = \tau_{\bar{A}^c_n}\} = \max_{y\in\Gamma_2}\left[P_y\{\tau_{\bar{A}^c_n} = \sigma_1\ge K\} + P_y\{\tau_{\bar{A}^c_n} = \sigma_1 < K\}\right]. \tag{3.9}
\]
As follows from Lemma 12, $K$ can be chosen so large that the first probability satisfies
\[
P_y\{\tau_{\bar{A}^c_n} = \sigma_1\ge K\} \le P_y\{\sigma_1\ge K\} = P_y\{\tau_{\Gamma_1}\ge K\} \le \frac{1}{2}\exp\{-\varepsilon^{-2}(V_n-\gamma)\}. \tag{3.10}
\]
Notice that for any $\delta > 0$, the trajectories $X^\varepsilon_n$, $0\le n\le K$, for which $\tau_{\bar{A}^c_n} = \sigma_1 < K$ are at a positive distance from the set
\[
\{\mathbf{x} = (y, \dots, x_K) : y\in\Gamma_2,\ S_K(\mathbf{x}) < V_n - \gamma/2\}.
\]
By the properties of the action functional, we have the following estimate for the second probability in (3.9):
\[
P_y\{\tau_{\bar{A}^c_n} = \sigma_1 < K\} \le \exp\{-\varepsilon^{-2}(V_n-\gamma)\}
\]
for $y\in\Gamma_2$ and sufficiently small $\varepsilon, h > 0$. Then, by the last inequality and the estimates (3.9) and (3.10), we have
\[
P\{x, \bar{A}^c_n\} \le \exp\{-\varepsilon^{-2}(V_n-\gamma)\} \tag{3.11}
\]
whenever $\varepsilon$ and $h$ are sufficiently small. Denote by $\nu$ the smallest $n$ for which $Z_n = X^\varepsilon_{\sigma_n}\in\bar{A}^c_n$. It follows from (3.11) that
\[
P_x\{\nu\ge n\} = \sum_{k=n}^{\infty}P_x\{\nu = k\} = \sum_{k=n}^{\infty}\left(1-P\{x, \bar{A}^c_n\}\right)^{k-1}P\{x, \bar{A}^c_n\} = \left(1-P\{x, \bar{A}^c_n\}\right)^{n-1}
\]
\[
\ge \left(1-\exp\{-\varepsilon^{-2}(V_n-\gamma)\}\right)^{n-1} \tag{3.12}
\]
for $x\in U_0$. Since $\tau_{(\bar{A}_n)^c} \ge \sigma_\nu \ge \nu$, we have
\[
E_x\left(\tau_{(\bar{A}_n)^c}\right) \ge E(\nu) = \sum_{n}P_x\{\nu\ge n\} \ge \sum_{n}\min_{z\in U_0}P_z\{\nu\ge n\} \ge \sum_{n}\left(1-\exp\{-\varepsilon^{-2}(V_n-\gamma)\}\right)^{n-1} = \exp\left\{\frac{V_n-\gamma}{\varepsilon^2}\right\}
\]
for sufficiently small $\varepsilon$ and $h$ and $x\in U_0$. Thus (2) holds for $x\in U_0$. For any $x\in F\subset A_n$, we have
\[
E_x\left(\tau_{(\bar{A}_n)^c}\right) = E_x\left[\tau_{(\bar{A}_n)^c}\le\sigma_1;\ \tau_{(\bar{A}_n)^c}\right] + E_x\left[\tau_{(\bar{A}_n)^c}>\sigma_1;\ \tau_{(\bar{A}_n)^c}\right]
\ge E_x\left[\tau_{(\bar{A}_n)^c}>\sigma_1;\ E_{X^\varepsilon_{\sigma_1}}\left(\tau_{(\bar{A}_n)^c}\right)\right]
\]
\[
> \exp\{\varepsilon^{-2}(V_n-\gamma)\}\,P_x\left\{\tau_{(\bar{A}_n)^c}>\sigma_1\right\} > \frac{1}{2}\exp\left\{\frac{V_n-\gamma}{\varepsilon^2}\right\}.
\]
We used Lemma 17 to get $P_x\{\tau_{(\bar{A}_n)^c}>\sigma_1\} > 1/2$ in the last step. Since $\gamma$ is arbitrary, we have $E_x(\tau_{(\bar{A}_n)^c}) > \exp\{(V_n-\gamma)/\varepsilon^2\}$, and (2) holds. $\qed$
Remark. Lemma 12 through Theorem 19 hold true for a mapping with finitely many stable fixed points, provided Assumptions 1 and 2 are satisfied at each stable fixed point. The following lemma shows that, for a mapping with two stable fixed points $a$ and $b$, the expected escape time from the set $A_n$, which belongs to the basin of $a$, is approximately $e^{V(a,b)/\varepsilon^2}$ as $n\to\infty$.

Lemma 20. Assume Assumptions 1, 4. Define
\[
V_\infty \triangleq \min_{y\notin\bigcup_m A_m} V(a, y);
\]
then
\[
V_\infty = V(a, b). \tag{3.13}
\]

Proof. On one hand, since $b\in\left(\bigcup_m A_m\right)^c$, we have $V_\infty\le V(a, b)$. On the other hand, the quasipotential has the property
\[
V(a, y) + V(y, b) \ge V(a, b).
\]
If $y\in\bigcup_m B_m$: since points in $\bigcup_m B_m$ are attracted to $b$, we have $V(y, b) = 0$, and thus $V(a, y)\ge V(a, b)$. By the assumption,
\[
\left(\bigcup_N \tilde{A}_N\right)\cup\left(\bigcup_N \tilde{B}_N\right)
\]
is dense, and the set $\{x : f^i(x)\in D \text{ for some } i\}$ is countable, so $\left(\bigcup_m A_m\right)\cup\left(\bigcup_m B_m\right)$ is also dense. Then if $y\notin\bigcup_m B_m$, $y$ is on the boundary of $\bigcup_m A_m$. For any $\varepsilon > 0$ there exists $\bar{y}\in\bigcup_m B_m$ such that $V(y, \bar{y}) < \varepsilon$. We thus have
\[
V(a, y) + \varepsilon \ge V(a, y) + V(y, \bar{y}) + V(\bar{y}, b) \ge V(a, b).
\]
Combining these two situations, we get $V_\infty\ge V(a, b)$. $\qed$

3.5 Proof of Theorem 1

The first lemma describes the probability of escaping from a small neighborhood of one stable fixed point and ending in a small neighborhood of the other one within finitely many steps.

Lemma 21. Assume Assumptions 1, 2, 3. Given $\gamma > 0$, there exist $N\in\mathbb{N}$ and $\varepsilon_0 > 0$ such that
\[
P_x(\tau_{U_1}\le N) \ge \exp\left\{-\varepsilon^{-2}\left(V_\infty+\frac{\gamma}{2}\right)\right\}
\]
for any $x\in U_0$ and $0 < \varepsilon < \varepsilon_0$.

Proof. The proof of this lemma is similar to that of Lemma 18. $\qed$

Lemma 22. Assume Assumptions 1--6. For any $\gamma > 0$, there exists $\varepsilon_0 > 0$ such that
\[
E_x(\tau_{U_1}) \le N''\exp\left\{\frac{V_\infty+\gamma}{\varepsilon^2}\right\}
\]
for any $x\notin U_1$ and $0 < \varepsilon < \varepsilon_0$, where $N''$ is a constant.

Proof. By Assumption 5, for any $\gamma > 0$ there exists $m$ such that for all $x\in C_m$,
\[
E_{x,\varepsilon}(\tau_m) \le k_m\exp\left\{\frac{\gamma}{2\varepsilon^2}\right\},
\]
where $k_m$ is a constant depending on $m$. Fixing $m$, we have the disjoint union $I = A_m\cup B_m\cup C_m\cup D_m$. Since $x\notin U_1$ can lie in any one of $A_m, B_m, C_m, D_m$, we need to discuss four cases for $E_x(\tau_{U_1})$, according to the four different starting positions.
If $x\in A_m$, the starting point is in the basin of the stable fixed point $a$. By the definition of $A_m$, the trajectory of the ladder system comes back to $U_0$ within $K$ iterations, where $K$ is from Lemma 15. Then it escapes from $U_0$ to the neighborhood $U_1$ of the other stable fixed point $b$ within $N$ iterations, where $N$ is from Lemma 21. Using the strong Markov property of $X^\varepsilon_n$, there exists $\varepsilon_1 > 0$ such that
\[
P_x(\tau_{U_1}\le K+N) \ge E_x\left[\tau_{U_0}\le K;\ P_{X_{\tau_{U_0}}}(\tau_{U_1}\le N)\right] \ge P_x(\tau_{U_0}\le K)\exp\left\{-\frac{V_\infty+\gamma/2}{\varepsilon^2}\right\} \ge \frac{1}{2}\exp\left\{-\frac{V_\infty+\gamma/2}{\varepsilon^2}\right\}
\]
uniformly for all $x\in A_m$ and $\varepsilon < \varepsilon_1$; in the last step we used the estimate from Lemma 15.

If $x\in B_m$, the starting point is already in the basin of the stable fixed point $b$. Using Lemma 15 again and the definition of $B_m$, there exists $\varepsilon_2 > 0$ such that
\[
P_x(\tau_{U_1}\le K) \ge \frac{1}{2}
\]
uniformly for all $x\in B_m$ and $\varepsilon < \varepsilon_2$.

If $x\in D_m$: since $(a-h/2, a+h/2)\cup(b-h/2, b+h/2)\subset U_0\cup U_1$, by Lemma 16 there exist $\bar{m}$ and $\varepsilon_3 > 0$ such that
\[
P_x(\tau_{U_0\cup U_1}\le\bar{m}) \ge \frac{1}{2}
\]
uniformly for all $x\in D_m$ and $\varepsilon < \varepsilon_3$. So we have two further estimates, according to where the trajectory is after $\bar{m}$ steps.
If $X_{\tau_{U_0\cup U_1}}$ is in $U_0$, then, using the strong Markov property of $X^\varepsilon_n$, we have
\[
P_x\left(\tau_{U_0\cup U_1}\le\bar{m},\ X_{\tau_{U_0\cup U_1}}\in U_0,\ \tau_{U_1}\le\bar{m}+N\right)
= E_x\left[\tau_{U_0\cup U_1}\le\bar{m},\ X_{\tau_{U_0\cup U_1}}\in U_0;\ P_x\left(\tau_{U_1}\le\bar{m}+N\mid\mathcal{F}_{\tau_{U_0\cup U_1}}\right)\right]
\]
\[
\ge E_x\left[\tau_{U_0\cup U_1}\le\bar{m},\ X_{\tau_{U_0\cup U_1}}\in U_0;\ P_y\left(\tau_{U_1}\le N\right)\Big|_{y = X_{\tau_{U_0\cup U_1}}}\right]
\ge P_x\left(\tau_{U_0\cup U_1}\le\bar{m},\ X_{\tau_{U_0\cup U_1}}\in U_0\right)\exp\left\{-\frac{V_\infty+\gamma/2}{\varepsilon^2}\right\}.
\]
And if $X_{\tau_{U_0\cup U_1}}$ is in $U_1$, then $\tau_{U_1} = \tau_{U_0\cup U_1}\le\bar{m}+N$, and similarly, by Lemma 15, we have
\[
P_x\left(\tau_{U_0\cup U_1}\le\bar{m},\ X_{\tau_{U_0\cup U_1}}\in U_1,\ \tau_{U_1} = \tau_{U_0\cup U_1}\le\bar{m}+N\right) \ge P_x\left(\tau_{U_0\cup U_1}\le\bar{m},\ X_{\tau_{U_0\cup U_1}}\in U_1\right).
\]
Combining these two estimates, we obtain
\[
P_x(\tau_{U_1}\le\bar{m}+N) \ge P_x\left(\tau_{U_0\cup U_1}\le\bar{m},\ X_{\tau_{U_0\cup U_1}}\in U_0,\ \tau_{U_1}\le\bar{m}+N\right) + P_x\left(\tau_{U_0\cup U_1}\le\bar{m},\ X_{\tau_{U_0\cup U_1}}\in U_1,\ \tau_{U_1}\le\bar{m}+N\right)
\]
\[
\ge P_x\left(\tau_{U_0\cup U_1}\le\bar{m}\right)\exp\left\{-\frac{V_\infty+\gamma/2}{\varepsilon^2}\right\} \ge \frac{1}{2}\exp\left\{-\frac{V_\infty+\gamma/2}{\varepsilon^2}\right\}.
\]

If $x\in C_m$, the trajectory of the system can be in any one of the three sets $A_m, B_m, D_m$ when it jumps out of $C_m$ for the first time. Recall that by Assumption 5 there exists $\varepsilon_4 > 0$ such that
\[
E_{x,\varepsilon}(\tau_m) \le k_m\exp\left\{\frac{\gamma}{2\varepsilon^2}\right\}
\]
uniformly for $x\in C_m$ and $\varepsilon < \varepsilon_4$. With Chebyshev's inequality, we can choose $M = M_\varepsilon\in\mathbb{N}$ such that
\[
P_x(\tau_m\ge M) \le \frac{k_m}{M}\,e^{\gamma/(2\varepsilon^2)} \le \frac{1}{2}, \qquad P_x(\tau_m\le M) \ge 1-\frac{k_m}{M}\,e^{\gamma/(2\varepsilon^2)} \ge \frac{1}{2}.
\]
It follows that $M$ should satisfy the inequality
\[
\left[2k_m\,e^{\gamma/(2\varepsilon^2)}\right] \le M \le \left[2k_m\,e^{\gamma/(2\varepsilon^2)}\right]+1.
\]
If the trajectory goes into $A_m$ after it leaves $C_m$, then, using the strong Markov property of $X^\varepsilon_n$ again, we have
\[
P_x\left(\tau_m\le M,\ X_{\tau_m}\in A_m,\ \tau_{U_1}\le M+K+\bar{m}+N\right)
= E_x\left[\tau_m\le M,\ X_{\tau_m}\in A_m;\ P_x\left(\tau_{U_1}\le M+K+\bar{m}+N\mid\mathcal{F}_{\tau_m}\right)\right]
\]
\[
\ge E_x\left[\tau_m\le M,\ X_{\tau_m}\in A_m;\ P_y\left(\tau_{U_1}\le K+N\right)\Big|_{y=X_{\tau_m}}\right]
\ge P_x\left(\tau_m\le M,\ X_{\tau_m}\in A_m\right)\cdot\frac{1}{2}\exp\left\{-\frac{V_\infty+\gamma/2}{\varepsilon^2}\right\}.
\]
And if the trajectory $X_{\tau_m}$ goes into $B_m$ after it leaves $C_m$, then by Lemma 15 we have
\[
P_x\left(\tau_m\le M,\ X_{\tau_m}\in B_m,\ \tau_{U_1}\le M+K+\bar{m}+N\right) \ge \frac{1}{2}\,P_x\left(\tau_m\le M,\ X_{\tau_m}\in B_m\right).
\]
Also, if the trajectory goes into $D_m$ after it leaves $C_m$, then using the result above for $\tau_{U_0\cup U_1}$ we have
\[
P_x\left(\tau_m\le M,\ X_{\tau_m}\in D_m,\ \tau_{U_1}\le M+K+\bar{m}+N\right) \ge P_x\left(\tau_m\le M,\ X_{\tau_m}\in D_m\right)\cdot\frac{1}{2}\exp\left\{-\frac{V_\infty+\gamma/2}{\varepsilon^2}\right\}.
\]
Combining these three estimates, we obtain
\[
P_x\left(\tau_{U_1}\le M+K+\bar{m}+N\right) \ge P_x(\tau_m\le M)\cdot\frac{1}{2}\exp\left\{-\frac{V_\infty+\gamma/2}{\varepsilon^2}\right\} \ge \frac{1}{4}\exp\left\{-\frac{V_\infty+\gamma/2}{\varepsilon^2}\right\}.
\]
In summary, for any $\gamma > 0$ there exists $\varepsilon_0 = \min\{\varepsilon_1, \varepsilon_2, \varepsilon_3, \varepsilon_4\}$ such that
\[
P_x\left(\tau_{U_1}\le K+\bar{m}+N+M\right) \ge \frac{1}{4}\exp\left\{-\frac{V_\infty+\gamma/2}{\varepsilon^2}\right\}
\]
whenever we start from $x\notin U_1$ and $0 < \varepsilon < \varepsilon_0$. Thus, by the Markov property of $X^\varepsilon_n$, for $x\in U_0$ we obtain
\[
E_x(\tau_{U_1}) \le (K+\bar{m}+N+M)\sum_{n=0}^{\infty}P_x\{\tau_{U_1} > n(K+\bar{m}+N+M)\}
\le (K+\bar{m}+N+M)\sum_{n=0}^{\infty}\left(1-\min_{z\notin U_1}P_z\{\tau_{U_1}\le K+\bar{m}+N+M\}\right)^n
\]
\[
\le (K+\bar{m}+N+M)\sum_{n=0}^{\infty}\left(1-\frac{1}{4}\exp\left\{-\frac{V_\infty+\gamma/2}{\varepsilon^2}\right\}\right)^n
= 4(K+\bar{m}+N+M)\exp\left\{\frac{V_\infty+\gamma/2}{\varepsilon^2}\right\}
\]
\[
\triangleq 4(N'+M)\exp\left\{\frac{V_\infty+\gamma/2}{\varepsilon^2}\right\}
\le 4\left(N'+2k_m\,e^{\gamma/(2\varepsilon^2)}+1\right)\exp\left\{\frac{V_\infty+\gamma/2}{\varepsilon^2}\right\}
\le N''\exp\left\{\frac{V_\infty+\gamma}{\varepsilon^2}\right\}. \qquad\qed
\]

Based on the previous lemmas, we can now prove Theorem 1.

Proof of Theorem 1. For consistency with the proofs of the lemmas, we use $\gamma$ here to represent the constant $\delta$ in the statement of the theorem. For the first part, it is sufficient to show that for any $\gamma > 0$ there exists $\varepsilon_0$ such that
(a) $E_x(\tau_{U_1}) < \exp\{(V_\infty+\gamma)/\varepsilon^2\}$,
(b) $E_x(\tau_{U_1}) > \exp\{(V_\infty-\gamma)/\varepsilon^2\}$,
for any $x\in U_0$ and $\varepsilon < \varepsilon_0$. Inequality (a) now follows directly from Lemma 22.
As for inequality (b), notice that $A_n\subset\bar{A}_n\subset A_{n+1}$ and $V_n = \inf_{y\notin\bar{A}_n}V(a, y)$, so that $V_n\nearrow V_\infty = V(a, b)$ as $n\nearrow\infty$. Then, given $\gamma > 0$, there exists $n$ such that $V_n > V_\infty-\gamma/2$. Together with Theorem 19(2), (b) follows from
\[
E_x(\tau_{U_1}) \ge E_x\left(\tau_{\bar{A}^c_n}\right) \ge K_n\exp\left\{\frac{V_n-\gamma/2}{\varepsilon^2}\right\} \ge K_n\exp\left\{\frac{V_\infty-\gamma}{\varepsilon^2}\right\}.
\]
As for the second part: on one hand, using Lemma 21, one has
\[
P_x(\tau_{U_1} > N) \le 1-\exp\left\{-\frac{V_\infty+\gamma/2}{\varepsilon^2}\right\} \triangleq 1-p(\varepsilon).
\]
By the strong Markov property of $X^\varepsilon_n$, we have
\[
P_x(\tau_{U_1} > N_\varepsilon) \le \left(1-p(\varepsilon)\right)^{[N_\varepsilon/N]} \le \exp\left\{-p(\varepsilon)\left(\frac{N_\varepsilon}{N}-1\right)\right\}.
\]
The last inequality holds since $1-x\le\exp\{-x\}$. Choosing $N_\varepsilon = N\exp\{h/\varepsilon^2\}$ with $h = V_\infty+\gamma$, where $\mathrm{const}$ below is a positive number, we have
\[
P_x(\tau_{U_1} > N_\varepsilon) \le \exp\left\{-\mathrm{const}\cdot\exp\left\{-\frac{V_\infty+\gamma/2}{\varepsilon^2}\right\}\exp\left\{\frac{h}{\varepsilon^2}\right\}\right\} = \exp\left\{-\mathrm{const}\cdot\exp\left\{\frac{\gamma}{2\varepsilon^2}\right\}\right\} \to 0 \tag{3.14}
\]
as $\varepsilon\to 0$. On the other hand, similarly to the proof of (3.12), we have
\[
P_x(\tau_{U_1}\le N'') \le P_x\{\nu\le N''\} \le 1-\left(1-\bar{p}(\varepsilon)\right)^{N''},
\]
where $\bar{p}(\varepsilon) = \exp\{-(V_\infty-\gamma/2)/\varepsilon^2\}$. Taking $N'' = \exp\{h/\varepsilon^2\}$ with $h = V_\infty-\gamma$, we get
\[
P_x(\tau_{U_1} < N'') \le 1-\exp\left\{-\exp\left\{-\frac{V_\infty-\gamma/2}{\varepsilon^2}\right\}\exp\left\{\frac{h}{\varepsilon^2}\right\}\right\} = 1-\exp\left\{-\exp\left\{-\frac{\gamma}{2\varepsilon^2}\right\}\right\} \to 0 \tag{3.15}
\]
as $\varepsilon\to 0$. Combining (3.14) and (3.15), we obtain the second part of the theorem. $\qed$

3.6 Exit Position

In this section we study where the least unlikely transitions from one basin to another occur. Without loss of generality, we only study the transitions from $a$ to $b$. Suppose that $F$ is an open subset of $[0,1]$ such that the complement of $\bar{F}$ is dense in the complement $F^c$ of $F$.

Lemma 23. Suppose $G\subset[0,1]$ is closed, $f$ is continuous on $G$, $f(G)\subset G$, and $f^n(x)\to a$ as $n\to\infty$ for all $x\in G$. Then $G\subset A_N$ for some $N$.

Proof. Let $x\in G$. Then $f$ is continuous at $f^{i-1}(x)$ for all $i\ge 1$, and so $f^i$ is continuous at $x$ for all $i\ge 1$. In particular, $G\cap D = \emptyset$. For each $x\in G$ there exists $N(x)$ such that $f^{N(x)}(x)\in(a-h/2, a+h/2)$. Also
\[
\rho(x) \triangleq \min\{d(f^i(x), D) : 0\le i\le N(x)-1\} > 0.
\]
For each $x\in G$ there is a neighborhood $U(x)$ of $x$ such that $f^{N(x)}(y)\in(a-h, a+h)$ and $\min\{d(f^i(y), D) : 0\le i\le N(x)-1\}\ge\rho(x)/2$ for $y\in U(x)$. Since $G$ is compact, there exists a finite set $\{x_1, \dots, x_k\}$ such that $G\subset\bigcup_{i=1}^k U(x_i)$.
There exists a finite $N$ such that $N\ge\max\{N(x_i) : 1\le i\le k\}$ and $h_N < \min\{\rho(x_i)/2 : 1\le i\le k\}$. If $x\in G$, then $x\in U(x_i)$ for some $i$, and then $f^{N(x_i)}(x)\in(a-h, a+h)$ and $f^j(x)\notin D_{h_N}$ for $0\le j\le N(x_i)-1$. Since $h < \delta_0$, we have $f^n(x)\in(a-h, a+h)$ for all $n\ge N(x_i)$, so that $f^N(x)\in(a-h, a+h)$. Also, since $h < \delta_0$ and $h_N\le h'$, we have $(a-h, a+h)\cap D_{h_N} = \emptyset$, so that $f^j(x)\notin D_{h_N}$ for $0\le j\le N-1$. Therefore $x\in A_N$, and the proof is complete. $\qed$

Recall the definitions
\[
V_F = \min\{V(a, y) : y\notin F\}, \qquad \mathcal{O}_F = \{z\notin F : V(a, z) = V_F\}.
\]
In particular, if $F = A_n$, we write $\mathcal{O}_n$ for $\mathcal{O}_{A_n}$. For sufficiently small $\delta > 0$, let $\mathcal{B}_F$ be the union of the $\delta$-neighborhoods of all points in $\mathcal{O}_F$, i.e.
\[
\mathcal{B}_F = \bigcup_{z\in\mathcal{O}_F}B(z, \delta).
\]
Denote
\[
d = \min\left\{V(a, y) : y\in F^c\setminus\mathcal{B}_F\right\} - V_F.
\]
Since $\mathcal{B}_F$ is a union of open sets it is open, and so $F^c\setminus\mathcal{B}_F$ is closed. Therefore $\min\{V(a, y) : y\in F^c\setminus\mathcal{B}_F\}$ is achieved at some point $y_0$, say. Since $y_0\notin\mathcal{O}_F$, we have $V(a, y_0) > V_F$, and so
\[
d = V(a, y_0) - V_F > 0.
\]
For the following lemmas, we choose a positive number $\delta_1 < \min(d/5C, h)$ (the same $C$ as in Proposition 9(b)). Define $U_0' = [a-\delta_1, a+\delta_1]$, and the sequence of times $\sigma_0 = 0$ and
\[
\sigma_j = \inf\{k > \sigma_{j-1} : X^\varepsilon_k\in\Gamma_1' = U_0'\cup F^c\}.
\]
Consider the Markov chain $Z_j = X^\varepsilon_{\sigma_j}$.

Lemma 24. There is $\varepsilon_0 > 0$ such that
\[
P_x\{Z_1\in F^c\} \ge \exp\{-\varepsilon^{-2}(V_F+0.45d)\}
\]
for all $x\in U_0'$ and $0 < \varepsilon\le\varepsilon_0$.

Proof. Since the complement of $\bar{F}$ is dense in $F^c$, we can choose a point $y$ outside $\bar{F}$ at a distance not greater than $\delta_1/2$ from some $z\in\mathcal{O}_F$. There exists $N > 0$ such that for any point $x\in U_0'$ there exists a path $\mathbf{x} = (x_0 = x, \dots, x_N = y)$ with $S_N(\mathbf{x})\le V_F+0.4d$.
Indeed, first we can choose a sequence $\mathbf{x}^{(1)} = (x^{(1)}_0 = a, \dots, x^{(1)}_{N_1} = z)$ with
\[
S_{N_1}\left(\mathbf{x}^{(1)}\right) \le V_F + 0.1d.
\]
We cut off its first portion up to the point $\bar{x}_1 = x^{(1)}_{n_1}$, the last point of $\mathbf{x}^{(1)}$ in $(a-\delta_1, a+\delta_1)$; i.e., we introduce a new sequence $\mathbf{x}^{(2)} = (x^{(2)}_0 = \bar{x}_1, \dots, x^{(2)}_{N_2} = z)$ with
\[
S_{N_2}\left(\mathbf{x}^{(2)}\right) \le S_{N_1}\left(\mathbf{x}^{(1)}\right) \le V_F + 0.1d.
\]
Moreover, using Proposition 9(b),
\[
|V(x, y)| = |V(x, y) - V(x, x)| \le C|x-y|,
\]
we can choose a sequence $\mathbf{x}^{(3)} = (x^{(3)}_0 = a, \dots, x^{(3)}_{N_3} = \bar{x}_1)$ with $S_{N_3}(\mathbf{x}^{(3)})\le 0.2d$, since $|a-\bar{x}_1|\le\delta_1$, and a sequence $\mathbf{x}^{(4)} = (x^{(4)}_0 = z, \dots, x^{(4)}_{N_4} = y)$ with $S_{N_4}(\mathbf{x}^{(4)})\le 0.1d$, since $|z-y|\le\delta_1/2$. Finally, since $a$ is an attractor, for any $x\in U_0'$ we can choose a positive number $N_5$ and a path $\mathbf{x}^{(5)} = (x^{(5)}_0 = x, f(x), \dots, f^{N_5-1}(x), x^{(5)}_{N_5} = a)$ with
\[
S_{N_5}\left(\mathbf{x}^{(5)}\right) = \frac{1}{2}\left(\frac{a - f^{N_5}(x)}{\sigma(f^{N_5-1}(x))}\right)^2 \le 0.1d,
\]
where $N_5$ depends on $x$. We can thus construct the path $\mathbf{x}$ out of the pieces $\mathbf{x}^{(5)}, \mathbf{x}^{(3)}, \mathbf{x}^{(2)}$, and $\mathbf{x}^{(4)}$:
\[
\mathbf{x} = \left(x_0 = x, \dots, x_{N_5} = a, \dots, x_{N_5+N_3} = \bar{x}_1, \dots, x_{N_5+N_3+N_2} = z, \dots, x_{N_5+N_3+N_2+N_4} = y\right).
\]
We choose $0 < \delta_2 < \min(\delta_1/4,\ \operatorname{dist}(y, \partial F))$. By the property of the action function, for $\varepsilon < \varepsilon_0$ and all $x\in U_0'$, we have
\[
P_x\{\rho_N(X^\varepsilon, \mathbf{x}) < \delta_2\} \ge \exp\{-\varepsilon^{-2}(V_F+0.4d+0.05d)\}.
\]
On the other hand, if a trajectory of $X^\varepsilon_n$ passes at a distance smaller than $\delta_2$ from the curve $\mathbf{x}$, then it hits the $\delta_2$-neighborhood of $y$, i.e., it hits $F^c$, without hitting $U_0'$ after coming out of $(a-\delta_1, a+\delta_1)$. Consequently,
\[
P_x\{Z_1\in F^c\} \ge P_x\{\rho_N(X^\varepsilon, \mathbf{x}) < \delta_2\} \ge \exp\{-\varepsilon^{-2}(V_F+0.45d)\}. \qquad\qed
\]

Lemma 25. There is $\varepsilon_0 > 0$ such that
\[
P_x\{Z_1\in F^c\setminus\mathcal{B}_F\} \le \exp\{-\varepsilon^{-2}(V_F+0.55d)\}
\]
for all $x\in U_0'$ and $0 < \varepsilon\le\varepsilon_0$.

Proof. It suffices to estimate $\sup_{x\in U_0'}P_x\{X^\varepsilon_{\sigma_1}\in F^c\setminus\mathcal{B}_F\}$. Notice that $\sigma_1 = \inf\{n : X^\varepsilon_n\in U_0'\cup F^c\}$; by the strong Markov property and Lemma 12, for any $c > 0$ there exists $N$ such that
\[
P_x\{\sigma_1 > N\} = P_x\{\sigma_1 > N,\ X^\varepsilon_1\in F\setminus U_0'\} = P_{X^\varepsilon_1}\{\sigma_1 > N-1\} \le \sup_{y\in F\setminus U_0'}P_y\{\sigma_1 > N-1\} \le \exp\{-\varepsilon^{-2}c\}
\]
for all $x\in U_0'$ and $\varepsilon$ smaller than some $\varepsilon_0$. We take, say, $c = V_F+d$.
To obtain the estimate needed, it remains to estimate $P_x\{\sigma_1\le N,\ X^\varepsilon_{\sigma_1}\in F^c\setminus\mathcal{B}_F\}$. We obtain this estimate by means of the property of the action function. Denote by $\mathcal{K}$ the closure of the $\delta_1/2$-neighborhood of $F^c\setminus\mathcal{B}_F$. We claim that there is no path $\mathbf{x} = (x_0, \dots, x_N)$, starting at $x_0\in U_0'$, with $S_N(\mathbf{x})\le V_F+0.65d$ that hits $\mathcal{K}$ at some $i\le N$. Indeed, assume that $x_{n_1}\in\mathcal{K}$ for some $n_1\le N$. Then
\[
S_{n_1}(\mathbf{x}) \le S_N(\mathbf{x}) \le V_F+0.65d.
\]
By the property of the quasipotential, we can take a path $\mathbf{x}^{(1)} = (x^{(1)}_0 = a, \dots, x^{(1)}_{N_1} = x_0)$ with
\[
S_{N_1}\left(\mathbf{x}^{(1)}\right)\le 0.2d,
\]
and a path $\mathbf{x}^{(2)} = (x^{(2)}_0 = x_{n_1}, \dots, x^{(2)}_{N_2}\in F^c\setminus\mathcal{B}_F)$ with
\[
S_{N_2}\left(\mathbf{x}^{(2)}\right) < 0.1d,
\]
since we can find $x^{(2)}_{N_2}\in F^c\setminus\mathcal{B}_F$ such that $|x_{n_1}-x^{(2)}_{N_2}|\le\delta_1$. Then out of the paths $\mathbf{x}^{(1)}, \mathbf{x}, \mathbf{x}^{(2)}$ we form a new path
\[
\mathbf{x}' = \left(x'_0 = a, \dots, x'_{N_1} = x_0, \dots, x'_{N_1+n_1} = x_{n_1}, \dots, x'_{N_1+n_1+N_2} = x^{(2)}_{N_2}\in F^c\setminus\mathcal{B}_F\right)
\]
with
\[
S_{N_1+n_1+N_2}(\mathbf{x}') \le 0.2d + V_F + 0.65d + 0.1d = V_F + 0.95d.
\]
This is smaller than the infimum of $V(a, y)$ over $y\in F^c\setminus\mathcal{B}_F$, which is in conflict with the choice of $d$. This means that all paths from $\bigcup_{x\in U_0'}\Phi_x(V_F+0.65d)$ pass at a distance not smaller than $\delta_1/2$ from $F^c\setminus\mathcal{B}_F$. By the property of the action function, there is $\varepsilon_0 > 0$ such that
\[
P_x\{\sigma_1\le N,\ X^\varepsilon_{\sigma_1}\in F^c\setminus\mathcal{B}_F\} \le P_x\left\{\rho_N\left(X^\varepsilon, \Phi_x(V_F+0.65d)\right)\ge\delta_1/2\right\} \le \exp\{-\varepsilon^{-2}(V_F+0.65d-0.05d)\}
\]
for all $x\in U_0'$ and $0 < \varepsilon\le\varepsilon_0$. Thus we have the estimate
\[
P_x\{X^\varepsilon_{\sigma_1}\in F^c\setminus\mathcal{B}_F\} \le P_x\{\sigma_1 > N\} + P_x\{\sigma_1\le N,\ X^\varepsilon_{\sigma_1}\in F^c\setminus\mathcal{B}_F\}
\le \exp\{-\varepsilon^{-2}(V_F+d)\} + \exp\{-\varepsilon^{-2}(V_F+0.6d)\} \le \exp\{-\varepsilon^{-2}(V_F+0.55d)\}. \qquad\qed
\]

Proof of Theorem 2. It follows from the last three lemmas that
\[
P_x\{Z_1\in F^c\setminus\mathcal{B}_F\} \le P_x\{Z_1\in F^c\}\exp\{-\varepsilon^{-2}\cdot 0.1d\}
\]
for $0 < \varepsilon\le\varepsilon_0$ and all $x\in U_0'$. Denote by $\nu$ the smallest $m$ for which $Z_m\in F^c$.
Using the strong Markov property, for $x\in U_0'$ we get
\[
P_x\left\{\left|X^\varepsilon_{\tau_{F^c}} - z\right| \ge \delta \text{ for every } z\in\mathcal{O}_F\right\} = P_x\{Z_\nu\in F^c\setminus\mathcal{B}_F\} = \sum_{m=1}^{\infty}P_x\{\nu = m,\ Z_m\in F^c\setminus\mathcal{B}_F\}
\]
\[
= \sum_{m=1}^{\infty}E_x\left[Z_1\in U_0', \dots, Z_{m-1}\in U_0';\ P_{Z_{m-1}}\{Z_1\in F^c\setminus\mathcal{B}_F\}\right]
\]
\[
\le \sum_{m=1}^{\infty}E_x\left[Z_1\in U_0', \dots, Z_{m-1}\in U_0';\ P_{Z_{m-1}}\{Z_1\in F^c\}\exp\{-\varepsilon^{-2}\cdot 0.1d\}\right]
\]
\[
= \sum_{m=1}^{\infty}P_x\{\nu = m\}\exp\{-\varepsilon^{-2}\cdot 0.1d\} = \exp\{-\varepsilon^{-2}\cdot 0.1d\} \to 0 \tag{3.16}
\]
as $\varepsilon\to 0$. Consequently, the theorem is proved for $x\in U_0'$. If $x$ is an arbitrary interior point of $F$, then
\[
P_x\left\{\left|X^\varepsilon_{\tau_{F^c}} - z\right| \ge \delta \text{ for every } z\in\mathcal{O}_F\right\}
\le P_x\left\{X^\varepsilon_{\sigma_1}\in F^c\right\} + P_x\left\{X^\varepsilon_{\sigma_1}\in U_0';\ \left|X^\varepsilon_{\tau_{F^c}} - z\right| \ge \delta \text{ for every } z\in\mathcal{O}_F\right\}.
\]
The first probability converges to zero according to Lemma 14, choosing $l, \varepsilon_0$ dependent on $x$. (If $G$ is a compact subset of the interior of $F$, then the first probability converges to zero uniformly, by choosing the same $l, \varepsilon_0$ for all $x\in G$.) Using the strong Markov property, we can write the second probability in the form
\[
E_x\left[X^\varepsilon_{\sigma_1}\in U_0';\ P_{X^\varepsilon_{\sigma_1}}\left\{\left|X^\varepsilon_{\tau_{F^c}} - z\right| \ge \delta \text{ for every } z\in\mathcal{O}_F\right\}\right],
\]
which converges to zero by (3.16). $\qed$

3.7 Invariant Measure and Proof of Theorem 3

3.7.1 Preliminary Results for Recurrent Diffusion Processes

In this section we adopt the terminology and notation used in the paper of G. Maruyama and H. Tanaka [7]. The results were obtained, at around the same time, by both G. Maruyama and H. Tanaka [7] and R. Z. Khas'minskii [6], using different approaches.

Let $(E, \rho)$ be a $\sigma$-compact complete metric space, and $\mathcal{B}$ the $\sigma$-algebra of measurable sets generated by the open sets of this space. Assume that the Markov transition probability
\[
P(t, x, A) = P(X_t\in A\mid x), \qquad 0\le t < \infty,
\]
is $\mathcal{B}$-measurable in $x$ for fixed $t$ and $A$, and that it satisfies the Chapman--Kolmogorov identity
\[
P(t+s, x, A) = \int_E P(t, y, A)\,P(s, x, dy), \qquad 0\le t, s < \infty,\ A\in\mathcal{B}.
\]
Let $B$ be the set of $\mathcal{B}$-measurable bounded functions defined on $E$, and $C(I)$ the set of bounded continuous functions on the set $I$. Then $P(t, x, A)$ defines a semigroup of transformations $T_t$ from $B$ to itself,
\[
(T_t f)(x) = E\{f(X^x_t)\} = \int_E f(y)\,P(t, x, dy), \qquad f\in B.
\]
If $f\in C(E)$, then $T_t f(x)$ is $\mathcal{B}$-measurable in $(t, x)$, and therefore $P(t, x, A)$ is $\mathcal{B}$-measurable in $(t, x)$ for fixed $A$.

We say that the measure $\mu$ is invariant for the process $X$ if $\mu\neq 0$ and, for $A\in\mathcal{B}$,
\[
\mu(A) = \int_E \mu(dx)\,P(t, x, A).
\]
This relation is equivalent to the relation
\[
\int_E f(x)\,\mu(dx) = \int_E T_t f(x)\,\mu(dx)
\]
for any function $f(x)$ continuous on $E$ and equal to zero outside of a compactum.

Let $X$ be a recurrent diffusion process. We present R. Z. Khas'minskii's construction [6], which plays a fundamental role in the following sections. Let $G\subset G_1$ be open sets of $E$ with compact boundaries $\Gamma$ and $\tilde{\Gamma}$ such that $\rho(\Gamma, \tilde{\Gamma}) > 0$. Denote by $\tau_1$ the first time $\Gamma$ is hit, and by $\tau'_1$ the first time after $\tau_1$ that $\tilde{\Gamma}$ is hit; denote by $\tau_i$ ($\tau'_i$) the first time after $\tau'_{i-1}$ ($\tau_i$) that $\Gamma$ ($\tilde{\Gamma}$) is hit. The section of the trajectory of the process $X_t$ for $\tau_i < t < \tau_{i+1}$ will be called the $i$-th cycle, and that for $\tau_i < t < \tau'_i$ the first half-cycle of this cycle; the time $\tau_{i+1}-\tau_i$ of one cycle, in those cases when the trajectory goes precisely about one cycle, will be denoted by $\tau$ with no index. The sequence $Z_i \triangleq X_{\tau_i}$ represents a homogeneous Markov chain on $\Gamma$.

Figure 3.1: Khas'minskii's construction

The transition probabilities $\pi(x, \Delta)$ ($x\in\Gamma$, $\Delta\in\mathcal{B}_1$, with $\mathcal{B}_1$ the $\sigma$-algebra generated by the open sets of $\Gamma$) of this chain are defined by the formula
\[
\pi(x, \Delta) = \int_{\tilde{\Gamma}}\pi_{\tilde{\Gamma}}(x, dy)\,\pi_{\Gamma}(y, \Delta),
\]
where $\pi_U(x, \Delta) = P_x\{X_{\tau_U}\in\Delta\}$. The probability connected with a cycle will be denoted by $\pi$ with no index:
\[
\pi(x, \Delta) = P_x\{X_\tau\in\Delta\}, \qquad x\in\Gamma.
\]

Lemma 26. The Markov chain $Z_i$ has a finite invariant measure; i.e., there exists a finite measure $\bar{\mu}(\cdot) = \bar{\mu}$ such that
\[
\bar{\mu}(\Delta) = \int_\Gamma \pi(x, \Delta)\,\bar{\mu}(dx),
\]
where $\Delta\in\mathcal{B}_1$. It is useful to write this in the form
\[
\int_\Gamma f(x)\,\bar{\mu}(dx) = \int_\Gamma E_x f(Z_1)\,\bar{\mu}(dx).
\]
As usual, let $\chi_A(x)$ be the characteristic function of the set $A$. Then $\int_0^\tau \chi_A(X_t)\,dt$ is the time spent in the set $A$ by the trajectory of the process in one cycle. We have

Lemma 27. $\sup_{x\in\Gamma}E_x\int_0^\tau \chi_A(X_t)\,dt < \infty$ for compact sets $A$.

Theorem 28.
For a recurrent di¤usion process there exists a - nite invariant measure given by the formula (A) = Z (dx)E x Z 0 A (X t )dt : AndtheproofofthefollowingTheoremisgivenbyG.MaruyamaandH.Tanaka [7]: Theorem 29. The invariant measure of the di¤usion process is unique up to a multiplicative constant. 3.7.2 Proof of Theorem 3 Before we prove the theorem 3, let us look at the motivation behind it. Suppose over the domain [0;1], we only have one linear system, which has a xed point at a, i.e. f (x) is linear, f (x)a =0:9(xa); 84 and the perturbed system is X " n a =0:9 X " n1 a +" n1 : De ne Y n = X n b. We see the perturbed system is actually an Autoregression Process Y " n =0:9Y " n1 +" n1 : By the properties of the Autoregression Process, Y " n has a N 0; " 2 1(0:9) 2 distribu- tion. Thus the law of X " n : " has an approximate distribution N b; " 2 1(0:9) 2 . Taking account of the inequality P (>z) jj z p 2 exp z 2 2 2 true for a normal random variable with mean 0 and variance 2 , z > 0, we have j " (W 0 )1j = " (Rn(ah;a+h)) =P " a+ s " 2 1(0:9) 2 = 2 (ah;a+h) ! =P " 0 @ = 2 0 @ h s 1(0:9) 2 " 2 ;h s 1(0:9) 2 " 2 1 A 1 A r 2 " q h 2 1(0:9) 2 e h 2 ( 1(0:9) 2 ) 2" 2 = " p e " 2 ; 85 where is de ned as V(a;a+h) in the proof of Theorem 3. This indicates that if there is a measure weighted around the xed point a, approximately it would be a normal distribution. Assume V (a;b) > V(b;a). We rst use a similar idea to R. Z. Khas minskii s construction to see the invariant measure around the point b. For U 0 = [ah=2;a+h=2] and U 1 = [bh=2;b+h=2] with h < 0 , de ne the Markov times 0 , 0 , 1 , 1 , as follows: 0 = 0; k = inffk > k :X " k 2U 0 g; k = inffk > k1 :X " k 2U 1 g. De neZ n =X " n . ItformsaMarkovchainonU 1 andlet " denotetheinvariant measure forfZ n :n 0g on U 1 . De ne " by " (A) = Z U 1 E x 1 1 P n=0 A (X " n ) " (dx) R U 1 (E x ( 1 )) " (dx) (3.17) for the Borel set A on I. Thus by similar proof in [8] and [6], " is a normalized invariant measure for X " n . 
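The autoregression motivation above can be checked empirically: the chain Y_n = 0.9·Y_{n−1} + ε·ξ_{n−1} has stationary law N(0, ε²/(1 − 0.9²)). A minimal simulation sketch (the value of ε, the step count, and the seed are arbitrary choices, not from the text):

```python
import random

def stationary_variance_mc(rho=0.9, eps=0.1, n_steps=200_000, burn_in=1_000, seed=0):
    """Simulate Y_n = rho*Y_{n-1} + eps*xi_{n-1} and return the empirical
    variance of the trajectory after a burn-in period."""
    rng = random.Random(seed)
    y, samples = 0.0, []
    for n in range(n_steps):
        y = rho * y + eps * rng.gauss(0.0, 1.0)
        if n >= burn_in:
            samples.append(y)
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / len(samples)

emp = stationary_variance_mc()
theory = 0.1**2 / (1 - 0.9**2)   # eps^2/(1 - rho^2), about 0.0526
```

With these parameters the empirical variance should land within a few percent of ε²/(1 − 0.81), matching the normal approximation used in the motivation above.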
Proposition 30. lim "!0 " (U 1 ) = 0: 86 Proof. Since we have V(a;b) > V(b;a), we can choose small = h 2 > 0 such that V(b;a)+ <V(a;b): By theorem 1, for such > 0, there exists " 0 > 0 such that for any "<" 0 , K 2 e V(b;a)+ " 2 E x ( U 0 )K 1 e V(b;a) " 2 for x2U 1 and similarly K 0 2 e V(a;b)+ " 2 E y ( U 1 )K 0 1 e V(a;b) " 2 for y2U 0 . And we have Z U 1 (E x ( 1 )) " (dx) = Z U 1 E x U 0 + U 1 U 0 " (dx) = Z U 1 E x ( U 0 )+ E y ( U 1 )j y=X U 0 " (hx) K 1 e V(b;a) " 2 +K 0 1 e V(a;b) " 2 K 1 e V(a;b) " 2 : (3.18) Notice that for x2U 1 , E x 1 1 X n=0 U 1 (X " n ) ! E x ( U 0 )K 2 e V(b;a)+ " 2 ; 87 Figure 3.2: 1 and 2 together with (3.18) and (3.17), we have 0 " (U 1 ) R U 1 (E x ( U 0 )) " (dx) R U 1 (E x ( 1 )) " (dx) K 1 e V(b;a)+ " 2 K 0 2 e V(a;b) " 2 = K 1 K 0 2 e V(a;b)V(b;a)2 " 2 ! 0 as "! 0. This indicates the invariant measure around the point b is small. Then we study the invariant measure around the point a. For W 0 = [ah;a+h];W 1 = [bh;b+h] with h< 0 , again we de ne the Markov times ~ 0 , ~ 0 , ~ 1 , ~ 1 , as follows: ~ 0 = 0; ~ k = inffk > ~ k :X " k 2 2 =In(W 0 [W 1 )g; ~ k = inffk > ~ k1 :X " k 2 1 =U 0 [U 1 g. De ne ~ Z n =X " ~ n . It forms aMarkovchainin 1 andlet ~ " denotetheinvariant measure for n ~ Z n :n 0 o in 1 . 88 De ne ~ " by ~ " (A) = Z 1 E x ~ 1 1 P n=0 A (X " n ) ~ " (dx) R 1 (E x (~ 1 ))~ " (dx) (3.19) for the Borel set A on I. Then ~ " is a normalized invariant measure for X " n . Lemma 31. " and ~ " are equal. Proof. We notice that for all points x2 I, all sets C I, we have the transition probability from x to C P (x;C) = Z C 1 p 2" 2 exp ( (yf (x)) 2 2" 2 ) dy Z C 1 p 2" 2 exp ( max(yf (x)) 2 2" 2 ) dy 1 p 2" 2 exp 1 2" 2 m(C); wherem is the Lebesgue measure on I. Then by Theorem (6.7) from chapter 5 of [2], invariant measure is unique up to some multiplicative constant, " and ~ " are equal. Theorem 32. Let 1 =U 0 [U 1 and 2 =In(W 0 [W 1 ). 
(i) For any > 0, there exist " 0 > 0;K 2 <1; such that for 0<"<" 0 , E y ( 1 )K 2 e " 2 89 for any y2 2 . (ii) De ne = minfV (a;ah);V (b;bh)g. For any > 0, there exist " 0 > 0;K 1 <1; such that for 0<"<" 0 , E x ( 2 )K 1 e " 2 for any x2 1 . The function V is de ned by (2.5). Proof. (i) For any > 0, there exists m2N, such that for all x2C m , E x;" ( m )k m exp " 2 : We decompose I = A m [B m [C m [D m for such m by assumption 4. So y must be in one of A m ;B m ;C m ;D m . If y 2 A m , by lemma 15, the trajectory X " n converges to x n uniformly in A m , we have P y ( 1 K) max(P y ( U 0 K);P y ( U 1 K))P y ( U 0 K) 1 2 for "<" 0 . If y2B m ; by the same reason, we have P y ( 1 K)P y ( U 1 K) 1 2 90 for "<" 0 . If y2D m , by lemma 16, there exists K <1 such that P y 1 K 1 2 : If y2C m , since E x ( m )k m exp " 2 ; using the chebyshev s inequality, there exists M 2N, M >m such that P x ( m M) k m M exp " 2 ; P x ( m M) 1 k m M exp " 2 1 2 : From the last inequality we can actually let M = 2k m exp " 2 : So use the same argument as that in the the proof of Theorem 1 again, we have P y 1 K + K +M 1 2 P y ( m M;X m 2A m )+ 1 2 P y ( m M;X m 2B m ) + 1 2 P y ( m M;X m 2D m ) 1 2 P y ( m M) 1 4 : In summary, for any y2 2 , we have P y 1 K + K +M 1 4 : 91 Using the Markov property of X " n again, we get E y ( 1 ) K + K +M 1 X n=0 P y 1 >n K + K +M K + K +M 1 X n=0 1 min z2H c P z 1 K + K +M n K + K +M 1 X n=0 3 4 n = 4 K + K +M : Thus for xed > 0; there exists " 0 > 0;K 2 <1;, for any 0<"<" 0 , we have E y ( 1 )K 2 e " 2 for any y2 2 . (ii) Apply Theorem 19 with n = 0 and V n = , we get E x ( 2 )K 1 e " 2 for x2 1 and 0<"<" 0 : With the help of Theorem 32, we have Proof of Theorem 3. By theorem 32, we choose < 1 2 . Thus for x2 1 , E x ~ 1 1 X n=0 2 (X " n ) ! 
sup y2 2 E y ( 1 )K 0 e " 2 : (3.20) 92 and E x ( 2 )K 1 e " 2 : We then have Z 1 (E x (~ 1 ))~ " (dx) = Z 1 E x ( 2 )+ E y ( 1 )j y=X 2 ~ " (dx) K 1 e " 2 +0: (3.21) Combining (3.19), (3.20) and (3.21), we obtain 0 " ( 2 ) = ~ " ( 2 ) K 0 e " 2 K 1 e " 2 =K 0 e 2 " 2 ! 0 (3.22) as "! 0: SinceProposition30and(3.22)aretrueforanysmallh> 0,weapplytheproof of Proposition 30 to W 1 to get 0 " (W 1 )K 00 e V(a;b)V(b;a)2 " 2 ! 0 as "! 0, where =(h)> 0: On the other hand, since I is a disjoint union of W 0 ;W 1 ; 2 , with (3.22), we have " (W 0 ) =1 " ( 2 ) " (W 1 ) 93 1K 0 e 2 " 2 K 00 e V(a;b)V(b;a)2 " 2 : Since V (a;b)V(b;a), it follows from the last inequality that j " (W 0 )1jKe " 2 : Thus the statement of theorem 3 is true. 94 Chapter 4 Applications 4.1 Structure of the Special Mapping f Westudythedynamicalsystem(2.3)withthemap(1.1)and 1inthischapter. It s easy to check that f satis es assumptions 1, 2, 3, 4, 6. In order to check assumption 5, rst of all, we introduce in this section the basic structure of the special mapping. In the next section we check the assumption 5 in detail. Thebasinofattractionof9=190contains[0;0:1] ~ A 0 andthebasinofattraction of 181=190 contains [0:9;1] ~ B 0 . Thus we have there exists an m 0 > 0 such that f (m 0 ) (x)2 (ah;a+h) foranyx2 (0;0:1); Alsothereexistsanm 1 > 0 suchthat f (m 1 ) (y)2 (bh;b+h) for any y 2 (0:9;1). Based on these, we can simplify the description of the structure of f s domain by de ning new A n ;B n ;C n ;D n , which are a little di¤erent from those in the proof of the main results. De ne A = 95 [0;0:14][ [0:86;0:9) and B = [0:46;0:54][ [0:9;1]. Then f(A) = [0;0:1] A and f(B) = [0:9;1]B. Also In(A[B) = (0:14;0:46)[(0:54;0:86)J 1 [J 2 J; say. 
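The intervals J₁ = (0.14, 0.46) and J₂ = (0.54, 0.86) just introduced can be explored with exact rational arithmetic. The sketch below assumes the piecewise-linear branch formulas given in §4.2.2 (f(x) = c + k(x − c) on J₁ and f(x) = d − k(x − d) on J₂, with c = 1/6, d = 9/14, k = 2.5); it checks that both branches map onto (0.1, 0.9), and that C_{n+1} = f^{−n}(J) consists of 2^{n+1} intervals of length 0.32·(0.4)^n:

```python
from fractions import Fraction as F

c, d, k = F(1, 6), F(9, 14), F(5, 2)
J1 = (F(14, 100), F(46, 100))
J2 = (F(54, 100), F(86, 100))

def f1(x): return c + k * (x - c)      # increasing branch on J1
def f2(x): return d - k * (x - d)      # decreasing branch on J2
def g1(y): return c + (y - c) / k      # inverse of f1, maps (0.1, 0.9) into J1
def g2(y): return d - (y - d) / k      # inverse of f2, maps (0.1, 0.9) into J2

# Both branches map their interval onto (0.1, 0.9):
assert (f1(J1[0]), f1(J1[1])) == (F(1, 10), F(9, 10))
assert (f2(J2[1]), f2(J2[0])) == (F(1, 10), F(9, 10))

def next_level(intervals):
    """Pull each interval of C_n back through both branches to get C_{n+1}."""
    out = []
    for (a, b) in intervals:
        out.append((g1(a), g1(b)))     # preimage inside J1
        out.append((g2(b), g2(a)))     # preimage inside J2 (orientation flips)
    return out

level = [J1, J2]                       # C_1 = J
for n in range(5):
    assert len(level) == 2 ** (n + 1)
    assert all(b - a == F(32, 100) * F(2, 5) ** n for (a, b) in level)
    level = next_level(level)
```

Each pullback shrinks lengths by the factor 1/k = 0.4 and doubles the number of intervals, which is exactly the geometric structure used throughout this chapter.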
Also f(J 1 ) = (0:1;0:9) = (0:1;0:14][J 1 [[0:46;0:54][J 2 [[0:86;0:9) and f(J 2 ) = (0:1;0:9) = (0:1;0:14][J 1 [[0:46;0:54][J 2 [[0:86;0:9): De ne C n+1 =fx2I :f n (x)2Jg =f n (J); and C n+1 is made up of 2 n+1 intervals of length 0:32(0:4) n , giving a total length of (0:8) n 0:64. Since f(A)A and f(B)B we have J =C 1 C n C n+1 : Notice that if x62C n+1 then f n (x)2A[B. 96 De ne sets ~ A n =fx62 ~ A n1 : f n (x)2 ~ A 0 g and ~ B n =fx62 ~ B n1 : f n (x)2 ~ B 0 g for n 1. Thus ~ A 0 = [0;0:1] and ~ A 1 = An ~ A 0 = (0:1;0:14][ [0:86;0:9). Also ~ B 0 = [0:9;1] and ~ B 1 =Bn ~ B 0 = [0:46;0:54]. Lemma 33. For each N 0, the interval [0;1] can be written as a nite disjoint union [0;1] = N+1 [ k=0 ~ A k [ N+1 [ k=0 ~ B k [ C N+1 , A N+1 [B N+1 [C N+1 : Proof. Suppose x 62 C N+1 . Then f N (x) 2 A[ B and so f N+1 (x) 2 f(A[ B) ~ A 0 [ ~ B 0 . Therefore x 2 S N+1 k=0 ~ A k [ S N+1 k=0 ~ B k . It remains to check the disjointedness. Clearly all the ~ A j and ~ B k are disjoint. Finally, if x 2 C N+1 then f N (x)2J, so that f N+1 (x)2f(J) =f(J 1 )[f(J 2 ) =J[ ~ A 1 [ ~ B 1 . It follows that x62 S N+1 k=0 ~ A k [ S N+1 k=0 ~ B k . Remark. Here we de ne slightly di¤erent A n ;B n from ~ A n ; ~ B n in assumption 5 to make our following description easier. And its clear that A n ;B n can be written in the form of ~ A n ; ~ B n , since [0;0:1] is attracted to a. Therefore there exists m > 0 such that f m ([0;0:1]) (ah;a+h) for small h. The next section will check the rest of the assumption 5 for f. 97 4.2 Expected Exit Time from Neighborhoods of UnstableFixedPointsforthespecialmapping f Let us begin with a toy problem related to time spent near the unstable critical point. 4.2.1 Expected Exit Time for a simpler map f (x) =cx Let P denote the Markov operator associated with the Markov chain X n+1 =cX n +" n wheref n :n 0g are i.i.d. standard normal random variables. Lemma 34. De ne V(x) =e x 2 . 
Then
PV(x) = (1 + 2βε²)^{-1/2} e^{−γx²}, where γ = βc²/(1 + 2βε²).
In particular, if β = (c² − 1)/(2ε²), then PV(x) = |c|⁻¹ V(x).

Proof.
PV(x) = (1/√(2π)) ∫ exp{−β(cx + εz)²} exp{−z²/2} dz
= (1/√(2π)) exp{−βc²x²} ∫ exp{−2βcxεz − βε²z² − z²/2} dz
= (1/√(2π)) exp{−βc²x²} ∫ exp{−½(1 + 2βε²)(z + 2βεcx/(1 + 2βε²))²} exp{2β²ε²c²x²/(1 + 2βε²)} dz
= (1 + 2βε²)^{-1/2} exp{−βc²x²} exp{2β²ε²c²x²/(1 + 2βε²)}
= (1 + 2βε²)^{-1/2} exp{−γx²},
where
γ = βc² − 2β²ε²c²/(1 + 2βε²) = βc²/(1 + 2βε²).
The rest of the lemma is now a simple calculation for the special case when β is chosen so that γ = β.

Now fix a > 0 and define τ = inf{n ≥ 0 : |X_n| ≥ a}.

Proposition 35. Assume |c| > 1. If 1 < λ < |c|, then
E_{x,ε}(λ^τ) ≤ (λ − 1)/(1 − λ|c|⁻¹) · exp{(c² − 1)(a² − x²)/(2ε²)} + 1.

Proof. Write V(x) = exp{−(c² − 1)x²/(2ε²)}. For |x| ≤ a we have
P(V(x) + kV(a)) = |c|⁻¹ V(x) + kV(a) ≤ λ⁻¹ (V(x) + kV(a))
so long as
|c|⁻¹ V(x) + kV(a) ≤ λ⁻¹ V(x) + λ⁻¹ kV(a),
that is,
(1 − λ⁻¹) kV(a) ≤ (λ⁻¹ − |c|⁻¹) V(x).
Since V is a monotone decreasing function of |x|, we can take
k = (1 − λ|c|⁻¹)/(λ − 1),
and then
P(V(x) + kV(a)) ≤ λ⁻¹ (V(x) + kV(a))
whenever |x| ≤ a. It follows that M_n = λ^{n∧τ} (V(X_{n∧τ}) + kV(a)) is a supermartingale. Therefore
kV(a) E_{x,ε}(λ^{n∧τ}) ≤ E_{x,ε} M_n ≤ E_{x,ε} M_0 = V(x) + kV(a).
Letting n → ∞ we get
E_{x,ε}(λ^τ) ≤ (V(x) + kV(a))/(kV(a)) = (1/k)·(V(x)/V(a)) + 1,
and the result follows from the formulas for k and V.

Corollary 36. Assume |c| > 1. Then
E_{x,ε}(τ) ≤ (|c|/(|c| − 1)) · exp{(c² − 1)(a² − x²)/(2ε²)}.

Proof. Notice that (λ^τ − 1)/(λ − 1) ↓ τ as λ ↓ 1. The dominated convergence theorem together with Proposition 35 implies
E_{x,ε}(τ) = lim_{λ→1+} E_{x,ε}[(λ^τ − 1)/(λ − 1)] ≤ lim_{λ→1+} (1/(1 − λ|c|⁻¹)) exp{(c² − 1)(a² − x²)/(2ε²)} = (|c|/(|c| − 1)) exp{(c² − 1)(a² − x²)/(2ε²)},
as required.

4.2.2 Expected Exit Time for the Special Mapping (1.1)

Now we consider the Markov chain given by
X^ε_n = f(X^ε_{n−1}) + ε ξ_{n−1}.
Notice first that
f(x) = c + k(x − c) if x ∈ J₁,  f(x) = d − k(x − d) if x ∈ J₂,
where c = 1/6, d = 9/14 and k = 2.5. For the purposes of our calculations it is important only that k > 2. Define
C_{n+1} = {x ∈ I : f^n(x) ∈ J} = f^{−n}(J),   (4.1)
and
τ_n = inf{m ≥ 0 : X^ε_m ∉ C_n}.
We see that C_{n+1} is made up of 2^{n+1} intervals of length l(1/k)^n, giving a total length of (2/k)^n · 2l.
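Corollary 36's bound E_{x,ε}(τ) ≤ (|c|/(|c| − 1))·exp{(c² − 1)(a² − x²)/(2ε²)} can be probed by simulating the linear chain directly. A Monte Carlo sketch (the parameter values, trial count, and seed are arbitrary choices):

```python
import math
import random

def mean_exit_time(c=1.5, a=1.0, eps=0.5, x0=0.0, trials=2_000, seed=1):
    """Average of tau = inf{n >= 0 : |X_n| >= a} for X_{n+1} = c*X_n + eps*xi_n."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        x, n = x0, 0
        while abs(x) < a and n < 100_000:   # the step cap is only a safety guard
            x = c * x + eps * rng.gauss(0.0, 1.0)
            n += 1
        total += n
    return total / trials

c, a, eps, x0 = 1.5, 1.0, 0.5, 0.0
bound = abs(c) / (abs(c) - 1) * math.exp((c**2 - 1) * (a**2 - x0**2) / (2 * eps**2))
mean_tau = mean_exit_time()
```

The empirical mean exit time should sit well below the bound, which is not expected to be tight at moderate noise levels; it is the exponential dependence on 1/ε² that matters for the theory.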
Since k > 1, we have J =C 1 C n C n+1 : Proposition show that assumption 5 of f is also satis ed: 102 Proposition 37. Given > 0, there is n> 0, such that for all x2C n , E x;" ( n )k n exp " 2 ; where k n is a constant depending on n, and n = inffn 0 :X " k 62C n g: The proof of Proposition 37 uses a sequence of lemmas. Lemma 38. De ne V(x) = exp (k 2 1)x 2 2" 2 : Then E[V(kx+")] = 1 k V(x) where is a standard normal random variable. Proof. This is just rewrite of lemma 34. Lemma 39. For b2R de ne V b (x) =V(xb). For b2J 1 [J 2 de ne b 1 2J 1 and b 2 2J 2 by the condition f(b 1 ) =f(b 2 ) =b. Then P " V b (x) = 8 > > > < > > > : 1 k V b 1 (x) if x2J 1 ; 1 k V b 2 (x) if x2J 2 : 103 Proof. First,b 1 mustsatisfytheequationc+k(b 1 c) =b,sothatb 1 =ck 1 (cb). For x2J 1 P " V b (x) =E[V(f(x)+"b)] =E[V(cb+k(xc)+")] =E[V(k(xc+k 1 (cb))+")] =k 1 V(xc+k 1 (cb)) =k 1 V b 1 (x): Similarly, dk(b 2 d) =b, so that b 2 =d+k 1 (db). For x2J 2 we have P " V b (x) =E[V(f(x)+"b)] =E[V(dbk(xd)+")] =E[V(k(x+d+k 1 (db))+")] =k 1 V(x+d+k 1 (db)) =k 1 V(xdk 1 (db)) =k 1 V b 2 (x): 104 Remember the set J is made up of two intervals of length l each, a total length of 2l. De ne S n =fx2I :f n (x)2fc;dgg =f n (fc;dg): Lemma 40. The set S n consists of 2 n+1 distinct points, and S n S n+1 J for all n 0. The set C n+1 consists of 2 n+1 open intervals of length l=k n each, and C n C n+1 J for all n 0. Each subinterval of C n+1 contains a unique point of S n . Write S n;1 =S n \J 1 and S n;2 =S n \J 2 . 
Then S n;1 =fx2J 1 :f n (x)2fc;dgg =fx2J 1 :f(x)2S n1 g =fc+k 1 (yc) :y2S n1 g and similarly S n;2 =fx2J 2 :f n (x)2fc;dgg =fx2J 2 :f(x)2S n1 g =fdk 1 (yd) :y2S n1 g: De ne W n (x) = X b2Sn V b (x): 105 Then, using Lemma 39, we get for x2J 1 , PW n (x) = X b2Sn (PV b )(x) =k 1 X b2S n+1;1 V b (x) and similarly for x2J 2 we have PW n (x) = X b2Sn (PV b )(x) =k 1 X b2S n+1;2 V b (x) Together we have PW n (x) =k 1 0 @ 1 J 1 (x) X b2S n+1;1 V b (x)+1 J 2 (x) X b2S n+1;2 V b (x) 1 A k 1 W n+1 (x): De ne W(x) = P 1 n=0 r n W n (x) for some r < 1. In fact W n (x) 2 n+1 and so W(x) 2(12r) 1 <1 for r < 1=2. Also PW(x) = 1 X n=0 r n (PW n )(x) 1 X n=0 r n k 1 W n+1 (x) = 1 X m=1 r m (rk) 1 (W m )(x) (rk) 1 W(x): If k 1 < r < 1=2 we have (rk) 1 < 1 and PW(x) W(x). Thus we can get estimates on the time for X n to leave a set of the formfx :W(x)g. However it may be hard to analyze the setfx :W(x)g. 106 The set C n consists of 2 n+1 intervals of length l=k n each. Also,each interval of C n contains a point of S n . Thus if x2C n then for some s2S n , jxsj l k n ; and we have W(x)r n W n (x)r n V s (x)r n V(l=k n ): De ne = inffk 0 :X " k 62C n g: For x2C n we have PW(x) (rk) 1 W(x) =W(x)(1(rk) 1 )W(x) W(x)(1(rk) 1 )r n V(l=k n ) W(x): It follows that M T =W(X T^ )+(T ^) is a supermartingale, T 2N + , and so E x (T ^)W(x): 107 Letting T !1 gives E x () 2 (12r)(1(rk) 1 )r n V(l=k n ) = 2 (12r)(1(rk) 1 )r n exp l 2 k 2n : This completes the proof of proposition 37 with = (k 2 1)l 2 =2k 2n . Then from lemma 33 and proposition 37, it is easy to see that the proposition 4 is true. Thus Theorem 1 and 3 holds for the dynamical system (2.3) with the special map f and (x) 1. In the next section, we ll generalize the method we used here to more general mapping f to estimate the expected exit time from the neighborhoods of the un- stable xed points. 
4.3 ExpectedExitTimeformoregeneralmapping f We required the linear map f has a slope greater than 2 when we estimated the exit time from basins of unstable xed points, and (x) 1. A natural question to ask is whether one can study such exit time for (2.3) with general ladder maps? For example, we keep the basic topology structure of f; the map we ve considered, but we can push the undi¤erentiated points on f around so that the slopes of the 108 straightlineswouldchangealittlebit. Inthisway,theabsolutevaluesoftheslopes are likely to become smaller than 2. What s more, we can even modify the linear mapalittlebittogetanonlinearone. CanProposition37stillholdinthesecases? Would our method be robust? Figure 4.1: We Can Change Slopes of f 1. Inthissectionweprovesomeresultsofexittimefor(2.3)withgeneralladder maps, both linear maps with any slope k > 1, and some nonlinear maps with the assumptionsjf 0 (x)jk > 1 for all x and f 1 (x) exists. First we rewrite the lemma 34 to get Lemma 41. De ne V r (x) = exp rx 2 2" 2 2 (x) : Then for r > 0 we have EV r (x+"(x)) = 1 p 1+r exp rx 2 2(1+r)" 2 2 (x) : 109 In particular if r =c 2 1 withjcj> 1 then EV r (cx+"(x)) = 1 jcj V r (x): Proof. Similar to the proof of lemma 34. Lemma 42. De ne V y;r (x) = exp r(xy) 2 2" 2 2 (x) ; Then for r > 0 we have EV y;r (f (x)+"(x)) 1 p 1+r V f 1 (y); rk 2 1+r (x): In particular (1) if r =k 2 1 then EV y;r (f (x)+"(x)) 1 jkj V f 1 (y);r (x): (2) if f (x) =kx+a, then for r > 0 and k6= 0 we have EV y;r (kx+a+"(x)) = 1 p 1+r V ya k ; rk 2 1+r (x): 110 Furthermore if kz +a =y and r =k 2 1 then EV y;r (kx+a+"(x)) = 1 jkj V z;r (x): Proof. (1) In Lemma 41, replace x by f (x). Then EV y;r (f (x)+"(x)) = 1 p 1+r exp r(f (x)y) 2 2(1+r)" 2 2 (x) ! 1 p 1+r exp rk 2 (xf 1 (y)) 2 2(1+r)" 2 2 (x) ! : We applied the mean value theorem to getjf (x)yj kjxf 1 (y)j in the last step. (2) is a simple application of Lemma 41 with x replaced by kx+ay. 
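For a linear branch and constant σ, part (2) of Lemma 42 is an exact Gaussian identity rather than an inequality, which can be confirmed by numerical integration. A sketch (the parameter values are arbitrary, and holding σ constant is an assumption made for this check):

```python
import math

EPS, SIGMA = 0.3, 1.0                  # sigma held constant for this check

def V(u, center, r):
    """V_{center,r}(u) = exp(-r (u - center)^2 / (2 eps^2 sigma^2))."""
    return math.exp(-r * (u - center) ** 2 / (2 * EPS**2 * SIGMA**2))

def averaged(x, y, r, k, a, n=40_000):
    """Midpoint-rule value of E[ V_{y,r}(k*x + a + EPS*SIGMA*Z) ], Z ~ N(0,1)."""
    h, total = 20.0 / n, 0.0
    for i in range(n):
        z = -10.0 + (i + 0.5) * h
        phi = math.exp(-z**2 / 2) / math.sqrt(2 * math.pi)
        total += V(k * x + a + EPS * SIGMA * z, y, r) * phi * h
    return total

x, y, r, k, a = 0.2, 0.5, 3.0, 2.5, 0.15
lhs = averaged(x, y, r, k, a)
# Lemma 42(2): shift the center to (y - a)/k and sharpen r to r*k^2/(1 + r).
rhs = V(x, (y - a) / k, r * k**2 / (1 + r)) / math.sqrt(1 + r)
```

The two sides agree to the accuracy of the quadrature; for a genuinely nonlinear branch only the inequality of part (1) survives, via the mean value theorem step in the proof above.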
4.3.1 Generalized ladder map Suppose that the map f of the unit interval to itself with nitely many unstable xed points. Here we only study the case that f has two unstable xed points c;d. Assume there are disjoint open intervals J 1 = (a 1 ;b 1 ) and J 2 = (a 2 ;b 2 ) with 0<a 1 <c<b 1 <a 2 <d<b 2 < 1 so that f(x) = 8 > > < > > : f 1 (x) if x2J 1 ; f 2 (x) if x2J 2 ; 111 where jf 0 1 (x)jk 1 > 1 for x2J 1 ; jf 0 2 (x)jk 2 > 1 for x2J 2 : Assume f 1 1 ;f 1 2 exist and 1 k 1 + 1 k 2 < 1: Denote J 1 [J 2 = J. We also assume that f(J 1 ) J and f(J 2 ) J, and that the interval I can be written as the disjoint union I =A[B[J where f(A)A and f(B)B. Denote by g i the restriction to J J 1 [J 2 of (fj J i ) 1 , so that g 1 (x) =f 1 1 (x); g 2 (x) =f 1 2 (x): De ne S n =fx2I :f n (x)2fc;dgg =f n (fc;dg) for n 0. Since f(c) = c and f(d) = d we have S n S n+1 for all n 0. De ne T 0 = S 0 and T n = S n n S n1 for n 1. Then the T n , n 0 are disjoint and S n =T 0 [T 1 [[T n for all n 0. Lemma 43. (i) S 0 =T 0 =fc;dg: 112 (ii) T 1 = fc 1 ;d 1 g where c 1 2 J 1 and d 1 2 J 2 satisfy f(c 1 ) = d, f(d 1 ) = c. In fact c 1 =g 1 (d); d 1 =g 2 (c): (iii) For n 1, T n+1 =fg 1 (x) :x2T n g[fg 2 (x) :x2T n g: (iv)jS n j = 2 n+1 for n 0 andjT n j = 2 n for n 1. Proof. Omitted. De ne C n =fx2I :f n (x)2Jg: Lemma 44. (i) C n =fx2I :f i (x)2J for all i = 0;1;:::;ngJ: (ii) S n C n (iii) C n consists of 2 n+1 open intervals each containing a single point of S n . Each interval has length bounded above by min(k 1 ;k 2 ) n 113 4.3.1.1 De nition of r(y) For y 2 S = [ n0 S n = [ n0 T n we de ne r(y) inductively as follows. On the set T 0 =fc;dg we de ne r(c) =k 2 1 1; r(d) =k 2 2 1: On the set T 1 =fc 1 ;d 1 g we de ne r(c 1 ) = (k 2 2 1)k 2 1 k 2 2 ; r(d 1 ) = (k 2 1 1)k 2 2 k 2 1 : For n 1, assuming r(y) is de ned for y2T n de ne r on the set T n+1 r(g i (y)) = r(y)k 2 i 1+r(y) ; y2T n ; i = 1;2: (4.2) Lemma 45. For all y2S, r(y) = r(f(y))k 2 i 1+r(f(y)) if y2J i : Proof. 
This can be checked case by case for y 2 T 0 [ T 1 , and it is a simple consequence of the de nition (4.2) for y2T n with n 2. 114 Lemma 46. min(k 2 1 1;k 2 2 1)r(y) max(k 2 1 1;k 2 2 1): Proof. This is clearly true for y = c and y = d. Then we use an inductive argument. For ease of notation write min(k 2 1 ;k 2 2 ) = A and max(k 2 1 ;k 2 2 ) = B with 1<AB. Notice the mapping (r;`)7! r` 1+r =` ` 1+r is monotone in r and in `. If A1rB1 and `2A;B then r` 1+r (A1)A A1+A =A1 and also r` 1+r (B1)B B1+B =B1: Therefore if the result is true for f(y) it is also true for y This completes the inductive step, and the result is proved. 115 4.3.1.2 De nition of a(y) For y 2 S = [ n0 S n = [ n0 T n we de ne a(y) inductively as follows. On the set T 0 = fc;dg we de ne a(c) and a(d) arbitrarily (strictly positive). For n 0, assuming a(y) is de ned for y2T n de ne a on the set T n+1 a(y) = a(f(y)) p 1+r(f(y)) ; y2T n : (4.3) Lemma 47. For all y2SnS 0 , a(y) = a(f(y)) p 1+r(f(y)) : Since f(A)A and f(B)B we have J =C 0 C 1 C n1 C n : Notice that if x62C n then f n (x)2A[B. 4.3.2 Exit time from C n In this section we wish to use a supermartingale argument to get an estimate on the expected exit time E x;" ( n ) for x2C n . 116 Remark. For y2S n , write f 1 (y) =fy 1 ;y 2 g where y i 2J i for i = 1;2. Thus y 1 =g 1 (y); y 2 =g 2 (y): Also, if y2S n nS n1 for n 1 then y i 2S n+1 nS n . Lemma 48. For y2S n , EV y;r(y) (f(x)+"(x)) 8 > > > > > < > > > > > : 1 p 1+r(y) V y 1 ;r(y 1 ) (x) if x2J 1 ; 1 p 1+r(y) V y 2 ;r(y 2 ) (x) if x2J 2 : The equality holds when f is a linear map. Proof. Suppose x2J 1 . Then f(x) =f 1 (x). 
By Lemma 42 we get EV y;r(y) (f 1 (x)+"(x)) 1 p 1+r(y) V z;s (x) where z =g 1 (y) =y 1 and s = r(y)k 2 1 1+r(y) =r(y 1 ): 117 Similarly if x2J 2 then f(x) =f 2 (x), and then by Lemma 41 we get EV y;r(y) (f 1 (x)+"(x)) 1 p 1+r(y) V z;s (x) where z =g 2 (y) =y 2 and s = r(y)k 2 2 1+r(y) =r(y 2 ): De ne W n (x) = X b2Tn a(b)V b;r(b) (x) For n 1 we have PW n (x) = X y2Tn a(y)PV y;r(y) (x) X y2Tn a(y) p 1+r(y) 1 J 1 (x)V y 1 ;r(y 1 ) (x)+1 J 2 (x)V y 2 ;r(y 2 ) (x) X y2Tn a(y) p 1+r(y) V y 1 ;r(y 1 ) (x)+V y 2 ;r(y 2 ) (x) = X y2Tn 2 X i=1 a(f(y i )) p 1+r(f(y i )) V y i ;r(y i ) (x) = X y2Tn 2 X i=1 a(y i )V y i ;r(y i ) (x) 118 = X y2T n+1 a(y)V y;r(y) (x) =W n+1 (x): Also PW 0 (x) = X y2T 0 a(y)PV y;r(y) (x) X y2T 0 a(y) p 1+r(y) 1 J 1 (x)V y 1 ;r(y 1 ) (x)+1 J 2 (x)V y 2 ;r(y 2 ) (x) X y2T 0 a(y) p 1+r(y) V y 1 ;r(y 1 ) (x)+V y 2 ;r(y 2 ) (x) = X y2T 0 2 X i=1 a(f(y i )) p 1+r(f(y i )) V y i ;r(y i ) (x) = X y2T 1 a(y)V y;r(y) (x)+ X y2T 0 a(y) p 1+r(y) V y;r(y) (x) W 1 (x)+min 1 k 1 ; 1 k 2 W 0 (x): Now de ne W(x) = 1 X n=0 r n W n (x) for some r > 0, to be determined later. Then PW(x) = 1 X n=0 r n PW n (x) W 1 (x)+min 1 k 1 ; 1 k 2 W 0 (x)+ 1 X n=1 r n W n+1 (x) 1 r W(x) 119 so long as min 1 k 1 ; 1 k 2 1 r : We also require that W(x)<1 for all x. Lemma 49. For n 1, X y2Tn a(y) max(k 2 1 1;k 2 2 1) min(k 2 1 1;k 2 2 1) 1=2 1 k 1 + 1 k 2 n1 (a(c 1 )+a(d 1 )): Proof. Notice that if y62T 0 then 2 X i=1 a(y i ) p r(y i ) = 2 X i=1 a(y) p 1+r(y) p 1+r(y) p r(y)k i = 1 k 1 + 1 k 2 a(y) p r(y) : For n 1 we can sum over y2T n to get X y2T n+1 a(y) p r(y) = 1 k 1 + 1 k 2 X y2Tn a(y) p r(y) : By induction we have X y2Tn a(y) p r(y) = 1 k 1 + 1 k 2 n1 X y2T 1 a(y) p r(y) : and the result follows from the bounds min(k 2 1 1;k 2 2 1) r(y) max(k 2 1 1;k 2 2 1). 120 Lemma 50. Suppose r < 1 k 1 + 1 k 2 1 . Then W(x)a(c)+a(d)+ max(k 2 1 1;k 2 2 1) min(k 2 1 1;k 2 2 1) 1=2 r(a(c 1 )+a(d 1 )) 1r 1 k 1 + 1 k 2 W 0 for all x. Proposition 51. 
For all > 0 there exists n and K such that for all x2C n , E x;" ( n )Ke =" 2 : Proof. Choose and x r so that 1 k 1 + 1 k 2 < 1 r < 1: andde neW(x)asabove. ThenPW(x) (1=r)W(x)forallx2J andW(x)K for some K <1. For ease of notation in the proof we will assume that k 1 k 2 . The set C n consists of 2 n+1 intervals of length at most 1=k n 1 each. Also, each interval of C n 121 contains a point of S n . Thus if x 2 C n then there is j n and y 2 T j such that jxyj 1=k n 1 . W(x)r j W j (x)r j a(y)V y;r(y) (x)r j a(y)exp r(y) 2" 2 k n 1 c 2 1 ; since 0 < c 1 (x) c 2 <1. De ne = inffk 0 : X k 62 C n g. For x2 C n we have PW(x)r 1 W(x) =W(x)(1r 1 )W(x) W(x)(1r 1 )r j a(y)exp r(y)=2" 2 k n 1 c 2 1 W(x): It follows that M m =W(X m^ )+(m^) is a supermartingale, and so E x (m^)W(x): Letting m!1 gives E x () W 0 (1r 1 )r j a(y) exp r(y)=2k n 1 " 2 c 2 1 : 122 Remark. Suppose some map f has m unstable xed points. On neighborhoods J = J 1 [[J m of each unstable xed point, f(J i ) J for 1 i m, and f 1 exists. Also we assume jf 0 1 (x)jk 1 > 1 for x2J 1 ; . . . jf 0 m (x)jk m > 1 for x2J m ; and 1 k 1 ++ 1 k m < 1: We de ne C n accordingly. Then one can check that the same arguments in this section can be applied to such f, and the result about the exit time from the basins of unstable xed points, i.e. Proposition 51, still holds. In the next section we will compute explicitly the quasipotential V(a;b) and V(b;a) for the special linear map f we interested in, and it will then be clear that Theorem 3 is also true for this system. 123 4.4 Calculation of Quasipotential For the rest contents of this paper, if we don t specify the Markov chain X " n , we always refer to the Markov chain given by X " n =f X " n1 +" n1 : So the action function is S N (x) = 1 2 N1 X i=0 (x i+1 f(x i )) 2 Thefollowinglemmaistrueformoregeneralmappingthanourspecialmappingf. Lemma 52. Let f be a xed mapping ofR to itself. Suppose V N (x;y) is attained by a sequencex = (x 0 ;x 1 ;x 2 ;:::;x N ). 
For each i2 1;2;:::;N1, either f is not di¤erentiable at x i , or f 0 (x i ) = 0, or else x i+1 =f(x i )+ 1 f 0 (x i ) (x i f(x i1 )): (4.4) Proof. The mapping x i !V N (x) has a minimum at x i . Therefore the mapping x! 1 2 (x i+1 f(x)) 2 + 1 2 (xf(x i1 )) 2 124 has a minimum at x =x i . Assuming f 0 (x i ) exists, we get f 0 (x(i))(x i+1 f(x i ))+x i f(x i1 ) = 0 and the result follows immediately. Note: Iff 0 (x i ) = 0wededucex i =f(x i1 ), andtheninductivelyx i1 =f(x i2 ) so long as f 0 (x i1 ) exists, and so on. Lemma 53. Suppose that f(x) = x for some xed 6= 0;1. Suppose V N (x;y) is attained by a sequencex = (x 0 ;x 1 ;:::;x N ). Then x i = y N x N N i + N xy N N i (4.5) for i = 1;2;:::;N1, and V N (x;y) = 1 2 2 1 2N 1 N xy 2 : Proof. The equation (4.4) applies and gives the recursion formula x i+1 = (+ 1 )x i x i1 : This has general solution x i =A i +B i 125 Putting x 0 =x and x N =y gives A+B =x and A N +B N =y, so that A = y N x N N ; B = N xy N N ; and (4.5) follows immediately. Using this sequence gives V N (x;y) = 1 2 N1 X i=0 (x i+1 f(x i )) 2 = 1 2 N1 X i=0 A i+1 +B ii (A i +B i ) 2 = 1 2 B 2 ( 1 ) 2 N1 X i=0 2i = 1 2 B 2 ( 1 ) 2 1 2N 1 2 = 1 2 B 2 ( 2 1)(1 2N ) = 1 2 N xy N N 2 ( 2 1)(1 2N ) = 1 2 2 1 2N 1 N xy 2 as required. Case 1: Supposejj> 1 and x6= 0 and y = 0. Then V N (x;0) = 1 2 2 1 2N 1 2N x 2 = 1 2 2 1 1 2N x 2 & 1 2 ( 2 1)x 2 126 as N !1. And if x = 0 and y6= 0, then V N (0;y)& 0 as N !1. Case 2: Supposejj< 1 and x = 0 and y6= 0. Then V N (0;y) = 1 2 2 1 2N 1 y 2 = 1 2 1 2 1 2N y 2 & 1 2 (1 2 )y 2 as N !1. If x6= 0 and y = 0 V N (x;0)& 0 as N !1. Lemma 54. Supposejj> 1;f(x) =x;0<y <x, then V (x;y) = inf N1 V N (x;y) 1 2 2 1 x 2 y 2 ; where the equality holds if ~ N = x y holds and ~ N is an positive integer. If ~ N isnt an integer, then V (x;y) =V N 0 (x;y) where N 0 is the nearest positive integer to ~ N. Proof. Let s consider g(t) = (xyt) 2 1t 2 127 and then take t = N . 
We have
g′(t) = [−2y(x − yt)(1 − t²) + 2t(x − yt)²]/(1 − t²)²
= [2(x − yt)/(1 − t²)²]·(−y + yt² + xt − yt²)
= 2(x − yt)(xt − y)/(1 − t²)².
If 0 < y < x then the minimum of g on 0 < t < 1 is attained at t = y/x, and the results follow from this.

Remark. For the model (2.3) with σ(x) not always being 1, we also want to find the condition under which the mapping x_i → V_N(x) has a minimum at x_i, i.e., the mapping
x → ½((x_{i+1} − f(x))/σ(x))² + ½((x − f(x_{i−1}))/σ(x_{i−1}))²
has a minimum at x = x_i. Assuming f′(x_i) and σ′(x_i) exist, we get
−[(x_{i+1} − f(x))/σ³(x)]·[f′(x)σ(x) + (x_{i+1} − f(x))σ′(x)] + (x − f(x_{i−1}))/σ²(x_{i−1}) = 0
when x = x_i. Formally we can write
x_{i+1} = f(x_i) + [−f′(x_i)σ(x_i) ± √((f′(x_i)σ(x_i))² − 4σ′(x_i)σ³(x_i)(f(x_{i−1}) − x_i)/σ²(x_{i−1}))]/(2σ′(x_i)).
We should only have one solution for x_{i+1}, so we need to decide which of the two solutions is the one we want. Also, comparing this general result for x_{i+1} with that of the original model with σ(x) ≡ 1, we find it surprising that we cannot apply this general result to that special situation, since σ′(x) = 0. The reason is the following: consider the solutions of ax² + bx + c = 0, with a possibly being 0. If a = 0, we have x = −c/b. If a ≠ 0 and a is small, we have
x = (−b ± √(b² − 4ac))/(2a) = (−b ± |b|√(1 − 4ac/b²))/(2a) ≈ (−b ± |b|(1 − 2ac/b²))/(2a),
and there will always be one of the two solutions that is close to −c/b, regardless of the choice of b. This explains the situation here. But since the formula for x_{i+1} is far more complicated than the recursion formula for the model with σ(x) ≡ 1, it is hard to get an exact formula for V_N, and even harder to find the quasipotential in this situation. So we stop at this point for this model.
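Lemma 53 can be checked numerically: building the path x_i = Aλ^i + Bλ^{−i} of (4.5) from the boundary conditions, its action should equal ½(λ² − 1)(λ^N x − y)²/(λ^{2N} − 1), and perturbing an interior point should only increase the action. A sketch (the values of λ, x, y, N are arbitrary):

```python
def action(path, lam):
    """S_N(x) = (1/2) * sum_i (x_{i+1} - lam*x_i)^2 for the linear map f(x) = lam*x."""
    return 0.5 * sum((path[i + 1] - lam * path[i]) ** 2 for i in range(len(path) - 1))

def optimal_path(x, y, lam, N):
    """x_i = A*lam^i + B*lam^{-i} fitted to x_0 = x, x_N = y, as in (4.5)."""
    A = (y - x * lam ** -N) / (lam ** N - lam ** -N)
    B = (x * lam ** N - y) / (lam ** N - lam ** -N)
    return [A * lam ** i + B * lam ** -i for i in range(N + 1)]

lam, x, y, N = 1.25, 0.5, 0.1, 8
path = optimal_path(x, y, lam, N)
direct = action(path, lam)
closed_form = 0.5 * (lam**2 - 1) * (lam**N * x - y) ** 2 / (lam ** (2 * N) - 1)

# Perturbing an interior point of the path should not decrease the action:
perturbed = path[:]
perturbed[4] += 1e-3
```

Since the action is a positive-definite quadratic in the interior points with the endpoints fixed, the constructed path is the unique minimizer, which the perturbation check reflects.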
4.4.1 The Simpler Ladder Map

We compute the quasipotential of a simpler map f of the unit interval to itself, given by the following formula:
f(x) = 0.9(0.1 − x) if 0 ≤ x ≤ 0.1;
f(x) = 1.25(x − 0.1) if 0.1 ≤ x ≤ 0.9;
f(x) = 1 − 0.9(x − 0.9) if 0.9 ≤ x ≤ 1.
The map has stable fixed points at x = 9/190 and x = 181/190 and an unstable fixed point at x = 1/2. The basin of attraction of 9/190 is [0, 1/2) and the basin of attraction of 181/190 is (1/2, 1].

Here we wish to calculate the quasipotential V(9/190, 1/2) and to find paths x = (x₀ = 9/190, x₁, …, x_N, x_{N+1} = 1/2) which are close to optimal. Consider a sequence starting at x₀ = 9/190 and running with x_i ∈ [0, 0.1] until reaching x_M, say, with f(x_M) = y ∈ [0, 0.09]. Choose x̃_M ∈ [0.1, 0.5] so that f(x̃_M) = y = f(x_M). Then consider a sequence x̃_M, x_{M+1}, x_{M+2}, …, x_N = 1/2.

For y fixed, the optimal x₀, …, x_M has V(9/190, x_M) given by a version of Case 2 with λ₂ = −0.9. We get
V(9/190, x_M) = ½(1 − λ₂²)(x_M − 9/190)².
Similarly, for y fixed, the optimal x̃_M, …, x_N = 1/2 has V(x̃_M, 1/2) given by a version of Case 1 with λ₁ = 1.25. We get
V(x̃_M, 1/2) = ½(λ₁² − 1)(1/2 − x̃_M)².

Since y = f(x_M) we get x_M = 0.1 − y/0.9, and since y = f(x̃_M) we have x̃_M = 0.1 + y/1.25. Using the sequence x₀ = 9/190, …, x_{M−1}, x_M, x_{M+1}, …, x_N = 1/2 gives a value
V(9/190, x_M) + V(x_M, 1/2) = V(9/190, x_M) + V(x̃_M, 1/2)
= ½(1 − λ₂²)(x_M − 9/190)² + ½(λ₁² − 1)(1/2 − x̃_M)²
= ½(1 − λ₂²)(y/0.9 − 10/190)² + ½(λ₁² − 1)(0.4 − y/1.25)²
= ½·(0.19/0.81)·(y − 9/190)² + ½·(0.5625/1.5625)·(0.5 − y)²
≡ ½α(y − 9/190)² + ½β(0.5 − y)² ≡ g(y).

The function g is decreasing on (−∞, ȳ) and increasing on (ȳ, ∞), where
ȳ = (α·(9/190) + β·(0.5))/(α + β).
With our numbers for λ₁ and λ₂ we have β > α and so ȳ > 0.1. But y = f(x_M), so y ≤ 0.09. To minimize V(9/190, x_M) + V(x_M, 1/2) we choose y = 0.09, so that x_M = 0 and x̃_M = 0.1 + 0.09/1.25 = 0.172.

This suggests that to get from 9/190 to 1/2, we first choose the optimal path from 9/190 to 0 ∈ f^{−1}(0.09), using the part of the function from 0 to 0.1.
Then choose the optimal path from 0:182 2 f 1 (0:09) to 1/2, using the part of the function from 0.1 to 0.5. We need to check that at the point x M the minimality condition holds. 4.4.2 The General Case In this section, we nd the quasipotentials between two stable xed points of the following function f (x), which consists of two straight lines: f (x) = 8 > > < > > : 1 x if 0 x 2 1 2 1 c 2 x+(1 2 )c if x 2 1 2 1 c ; (4.6) where1< 1 < 0, 2 > 1, c> 0, and 0 < 0 is a xed number. For the rst line 1 x, 0 is the stable xed point. For the second one, c is the unstable xed point. As for V(c;0), we can directly apply lemma 53 case 1 to get V(c;0) =V c; 2 1 2 c = 0: As for V(0;c), similar to the last section, we rst nd out where the jump (x m ) from the line 1 x to the other line (e x m ) occurs. And this calculation is based on the lemma 53, which holds for an unbounded line. But since both lines of f here are cut at the point 2 1 2 1 c, weneedto ndthe conditionsuchthat all thepaths 132 x i in the lemma 53 up to the jump point x m are all indeed on the line 1 x. Notice since f is cut o¤at the point 0 , we need the restriction x m 0 . The following lemma describes all the situations. Lemma 55. For f (x) = (4.6), we have V (c;0) = 0; V (0;c) =V (0;x m )+V (e x m ;c); where f (x m ) =f (e x m )2 (0;c),e x m is on the line 2 x+(1 2 )c, x m is on the line 1 x and determined by the following conditions: (1) if 2 + 1 + 1 2 0; (i) either 2 1 1 ( 2 1 ) c> 0 ; then x m = 2 1 1 ( 2 1 ) c; (ii) or 2 1 1 ( 2 1 ) c 0 ; then x m = 0 : (2) if 2 + 1 + 1 2 > 0; 133 (i) either 1 ( 2 2 1) 2 2 2 1 c> 0 ; then x m = 1 ( 2 2 1) 2 2 2 1 c; (ii) or 1 ( 2 2 1) 2 2 2 1 c> 0 ; then x m = 0 : Proof. Usingthenotationx m ;e x m todenotethejumppointsbetweenthetwolines and f (x m ) = f (e x m ). 
By computing the quasipotentials V(0;x m ) and V(e x m ;0) using lemma 53 case 1 and 2, one has V(0;c) =V(0;x m )+V(e x m ;c) = 1 2 (1 2 1 )x 2 m + 1 2 ( 2 2 1)(e x m c) 2 = 1 2 (1 2 1 ) y 1 2 + 1 2 ( 2 2 1) y(1 2 )c 2 c 2 = 1 2 1 2 1 2 1 + 2 2 1 2 2 y 2 2 2 1 2 2 cy + 1 2 2 2 1 2 2 c 2 ; (4.7) since 1 2 1 2 1 2 1 + 2 2 1 2 2 > 0, 134 we can nd the function obtain its minimum at the point y = 2 2 1 2 2 c 1 2 1 2 1 + 2 2 1 2 2 = 2 1 ( 2 2 1) 2 2 2 1 c = 2 1 2 2 2 1 2 2 2 1 c; (4.8) and x 0 m = y 1 = 1 ( 2 2 1) 2 2 2 1 c: Rememberx 0 m isobtainedbylettingN !1inthelemma53,andweareinterested inwherex 0 m1 is. Supposefx i gisgivenby(4.5)withx = 0and xedN <1, then x i = y N 1 N 1 i 1 y N 1 N 1 i 1 =y i 1 i 1 N 1 N 1 : Now with N !1, we have x i ,x Ni =y Ni 1 (Ni) 1 N 1 N 1 =y i 1 2Ni 1 1 2N 1 ! i 1 y: Thus we get x 0 m1 = 1 x 0 m = 2 1 ( 2 2 1) 2 2 2 1 c: (1) if x 0 m1 is not on the line 1 x, x 0 m1 2 1 2 1 c, i.e. 2 + 1 + 1 2 0, then we take x m1 = 2 1 2 1 c, and thus x 00 m = 1 1 x m1 = 2 1 1 ( 2 1 ) c. Compare x 00 m with 0 , if x 00 m > 0 , we take x m =x 00 m ; otherwise x m = 0 . 135 (2) if x 0 m1 is on the line 1 x, x 0 m1 < 2 1 2 1 c, i.e. 2 + 1 + 1 2 > 0, then compare x 0 m with 0 , if x 0 m > 0 , we take x m =x 0 m ; otherwise x m = 0 . Notice it is always true that 0 < 2 1 2 2 2 1 2 2 2 1 < 1, from (4.8), we conclude that 0<f (x m )y <c. Remark Notice that if we project these two stable points onto the x-axis (also true for y-axis), jcj actually is the distance between them. So once the slope conditions of two straight lines are satis ed and if we know the distance of the two stable points in the x-axis (or the y-axis), we can apply the above calculation in this section to get the quasipotential. One can get the similar results for f (x) = 8 > > < > > : 1 x if 0 x 2 1 2 1 c 2 x+(1 2 )c if x 2 1 2 1 c ; where 1< 1 < 0, 2 > 1, c< 0 and 0 > 0 xed. 
4.4.3 The Ladder Map of Our Interest

In this section we prove Proposition 5, i.e., we calculate the quasipotentials $V(a,b)$ and $V(b,a)$ of the ladder map (1.1). For simplicity, denote $V_0 = V(a,b)$ and $V_1 = V(b,a)$.

To compute $V_0$, first notice that, starting from $a$, in order to reach any point $y$ larger than $c = 1/6$ with minimum cost, one has to choose a path that hits the point $1/6$ on the way. Indeed, suppose there exists a sequence $\mathbf{x} = (x_0 = a, x_1, \ldots, x_k = y)$ with $S_k(\mathbf{x}) < V(a, 1/6)$ and $y > 1/6$. Without loss of generality, we can assume that all of $x_0, x_1, \ldots, x_{k-1}$ are in the interval $[0, 1/6)$. Since $f([0,1/6]) \subseteq [0,1/6]$, we have

$$S_k(x_0=a, x_1, \ldots, x_{k-1}, 1/6) = \tfrac12(x_1 - f(a))^2 + \cdots + \tfrac12(x_{k-1} - f(x_{k-2}))^2 + \tfrac12\big(\tfrac16 - f(x_{k-1})\big)^2$$
$$< \tfrac12(x_1 - f(a))^2 + \cdots + \tfrac12(x_{k-1} - f(x_{k-2}))^2 + \tfrac12(y - f(x_{k-1}))^2 = S_k(\mathbf{x}) < V(a, 1/6),$$

which contradicts the definition of $V(a,1/6)$. Thus $S_k(\mathbf{x}) \ge V(a,1/6)$ for every path $\mathbf{x}$.

Thus $V_0$ is computed in two steps: the quasipotential from $a$ to $c$, and that from $c$ to $b$. Applying the results of Lemma 53 and the methods of Section 4.4.1, it is immediate that from $c$ to $b$ we have
$$V(c,b) = V(c,\tilde b) = V\Big(\tfrac16, \tfrac{181/190 + 0.25}{2.5}\Big) = 0,$$
where we use the notation $\tilde b$ for the point with $f(\tilde b) = f(b)$ and $0.1 < \tilde b < 1/2$.

For the quasipotential from $a$ to $c$, one can apply Lemma 55 by shifting the coordinate system $(x,y)$ of the lemma to the new coordinate system $(x', y') = (x + 9/190,\ y + 9/190)$. Since the topology is the same ($\alpha_1, \alpha_2$ unchanged), the results remain the same. By Lemma 55 case (1)(ii), $x_m = \theta_0 = 0$. This means that in order to obtain $V(a,c)$, we need to choose the path with infinitely many steps going from $a$ to 0 first, and then choose the path with infinitely many steps starting from $\tilde 0 = \frac{0.09 + 0.25}{2.5} = 0.136$ to $c$.
Thus we get
$$V(a,c) = V(a,0) + V\Big(\tfrac{0.09+0.25}{2.5}, \tfrac16\Big) = \tfrac12(1-0.9^2)\Big(\tfrac{9}{190}\Big)^2 + \tfrac12(2.5^2-1)\Big(\tfrac16 - 0.136\Big)^2.$$

Using the path $(a, \ldots, c, \ldots, b)$ together, we have
$$V_0 = V(a,c) + V(c,b) = \tfrac12(1-0.9^2)\Big(\tfrac{9}{190}\Big)^2 + \tfrac12(2.5^2-1)\Big(\tfrac16 - 0.136\Big)^2 = \tfrac{4777}{1781250} = 2.6818\times10^{-3}.$$

The calculation of the quasipotential $V_1$ is a little more complicated. In order to go from $b$ to $a$, the path has to contain some points in $(0.1, 0.9)$. Similarly to Lemma 55 case (1)(ii), $x_m = \theta_0 = 1$, and this means the optimal path into $(0.1, 0.9)$ goes to 1 first, and then from 1, through either $c < 1' = \frac{0.25+0.91}{2.5} < \frac12$ or $\frac12 < \tilde 1 = \frac{2.25-0.91}{2.5} < d$, back to $a$. So applying Lemma 53, we have
$$V(b,1) = \tfrac12(1-0.9^2)\Big(1 - \tfrac{181}{190}\Big)^2.$$

We also notice that, starting from $b$, in order to reach any point $y$ smaller than $0.9$ with minimum cost, one has to choose a path that hits the point $0.9$ on the way. Indeed, assume there exist $y < 0.9$ and a sequence $\mathbf{x} = (x_0 = b, x_1, \ldots, x_k = y)$ with $S_k(\mathbf{x}) < V(b, 0.9)$. Without loss of generality, we can assume all of $x_0, x_1, \ldots, x_{k-1}$ are in the interval $[0.9, 1]$. Since $f([0.9,1]) \subseteq [0.9,1]$, we have

$$S_k(x_0=b, x_1, \ldots, x_{k-1}, 0.9) = \tfrac12(x_1 - f(b))^2 + \cdots + \tfrac12(x_{k-1} - f(x_{k-2}))^2 + \tfrac12(0.9 - f(x_{k-1}))^2$$
$$< \tfrac12(x_1 - f(b))^2 + \cdots + \tfrac12(x_{k-1} - f(x_{k-2}))^2 + \tfrac12(y - f(x_{k-1}))^2 = S_k(\mathbf{x}) < V(b, 0.9),$$

which contradicts the definition of $V(b,0.9)$. Thus $S_k(\mathbf{x}) \ge V(b,0.9)$ for every path $\mathbf{x}$.

Since $0.9$ is the discontinuity point of $f(x)$, and the small interval to the left of $0.9$ (with $0.9$ not included) belongs to the basin of $a$, the cost thereafter is 0. Thus we only need to find a path with minimum cost from the point 1 to the left side of $0.9$, and then the quasipotential can be obtained. We have three possible paths from 1 to $0.9^-$:
1. using the path from $1'$ to $0.9'$;
2. using the path from $\tilde 1$ to $\widetilde{0.9}$;
3. jumping directly from 1 to the left side of $0.9$.
For the first two paths, we have immediately that
$$V(1', 0.9') > V(\tilde 1, \widetilde{0.9}) > 0,$$
since $f(x)$ has the same absolute slope on $(c, \tfrac12)$ and $(\tfrac12, d)$, and $(1'-c)^2 - (0.9'-c)^2 > (\tilde 1 - d)^2 - (\widetilde{0.9} - d)^2$. So we only need to compute $V(\tilde 1, \widetilde{0.9})$.

For fixed sufficiently small $\delta_0 > 0$, we can find a point $\tfrac12 < \tilde x_m < d$ with $f(\tilde x_m) = 0.9 - \delta_0$. In this way, the cost $V_1'$ from $b$ to $a$ can be achieved by choosing the path
$$\big(b, \ldots, 1, \ldots, \tilde x_m,\ 0.9-\delta_0,\ f(0.9-\delta_0),\ f^{(2)}(0.9-\delta_0), \ldots, a\big).$$
By Proposition 9(b), we can instead use the path
$$\big(b, \ldots, 1, \ldots, x_m'',\ 0.9-\delta_0,\ f(0.9-\delta_0),\ f^{(2)}(0.9-\delta_0), \ldots, a\big)$$
to compute $V_1'$, where $f(x_m'') = 0.9$. In order to get $V(\tilde 1, x_m'')$, we apply Lemma 54:
$$N = \frac{\ln\dfrac{d - \tilde 1}{d - x_m''}}{\ln 2.5} = \frac{\ln\dfrac{9/14 - (2.25-0.91)/2.5}{9/14 - (2.25-0.9)/2.5}}{\ln 2.5} = 4.1637\times10^{-2} < 1.$$
So we use $N' = 1$ to get
$$V(\tilde 1, x_m'') = \tfrac12\big(f(\tilde 1) - x_m''\big)^2 = \tfrac12\Big(0.91 - \tfrac{2.25-0.9}{2.5}\Big)^2.$$
Combining the above, we obtain
$$V_1' = V(b,1) + V(\tilde 1, x_m'') + V(0.9-\delta_0, a) = \tfrac12(1-0.9^2)\Big(1 - \tfrac{181}{190}\Big)^2 + \tfrac12\Big(0.91 - \tfrac{2.25-0.9}{2.5}\Big)^2 + 0 = 6.8663\times10^{-2}.$$

For the third path, by Proposition 9(b), we have immediately
$$V(1, 0.9-\delta_0) \to V(1, 0.9) = \tfrac12(f(1) - 0.9)^2 = \tfrac12(0.91 - 0.9)^2$$
as $\delta_0 \to 0$. So
$$V_1'' = V(b,1) + V(1, 0.9-\delta_0) + V(0.9-\delta_0, a) \to \tfrac12(1-0.9^2)\Big(1 - \tfrac{181}{190}\Big)^2 + \tfrac12(0.91-0.9)^2 + 0 = \tfrac{1}{3800} = 2.6316\times10^{-4}$$
as $\delta_0 \to 0$. Thus we have $V_1 = V_1'' = 2.6316\times10^{-4}$.

It is now easy to see that $V_0 > V_1$, and this completes the proof of Proposition 5. It follows that Theorem 3 can be applied in this special case. Theorem 3 shows that the invariant measure $\mu^\varepsilon$ puts almost all of its weight around the fixed point $a$, with an approximately normal shape around $a$. We can see this result in the following graph, obtained by computing the invariant measure numerically:

Figure 4.2: Invariant Measure $\mu^\varepsilon$

In our calculation of the invariant measure, we take $\varepsilon = 0.003$ and divide $[0,1]$ into 500 bins.
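As a sanity check on both the arithmetic and the picture, the sketch below first verifies $V_0 = 4777/1781250$ and $V_1 = 1/3800$ in exact rational arithmetic (reading $0.136 = 17/125$ and $0.91 = 91/100$), and then carries out the binned invariant-measure computation just described. Two caveats: the piecewise-linear map used is our reconstruction of the ladder map from the constants of this chapter ($a = 9/190$, $b = 181/190$, $c = 1/6$, $d = 9/14$, slopes $-0.9$ and $\pm 2.5$), since (1.1) itself is defined elsewhere; and we use a larger $\varepsilon = 0.015$ with 200 bins so that the power iteration mixes in a reasonable number of steps.

```python
from fractions import Fraction as Fr
import numpy as np

# Exact-arithmetic check of the two quasipotential values computed above.
V0 = Fr(1, 2) * (1 - Fr(9, 10) ** 2) * Fr(9, 190) ** 2 \
   + Fr(1, 2) * (Fr(5, 2) ** 2 - 1) * (Fr(1, 6) - Fr(17, 125)) ** 2
V1 = Fr(1, 2) * (1 - Fr(9, 10) ** 2) * (1 - Fr(181, 190)) ** 2 \
   + Fr(1, 2) * (Fr(91, 100) - Fr(9, 10)) ** 2
assert V0 == Fr(4777, 1781250) and V1 == Fr(1, 3800) and V0 > V1

# Assumed reconstruction of the ladder map (see the caveats above).
def f(x):
    if x <= 0.1:
        return -0.9 * x + 0.09
    if x <= 0.5:
        return 2.5 * x - 0.25
    if x < 0.9:
        return -2.5 * x + 2.25
    return -0.9 * x + 1.81

eps, nbins = 0.015, 200
mids = (np.arange(nbins) + 0.5) / nbins          # bin midpoints in [0, 1]
fm = np.array([f(m) for m in mids])
# Gaussian transition weights between midpoints, rows normalized: this
# conditions the chain on staying inside [0, 1].
P = np.exp(-(mids[None, :] - fm[:, None]) ** 2 / (2 * eps ** 2))
P /= P.sum(axis=1, keepdims=True)

pi = np.full(nbins, 1.0 / nbins)                 # power iteration for pi P = pi
for _ in range(5000):
    pi = pi @ P
print(mids[np.argmax(pi)])                       # peak lies near a = 9/190
```

Since $V_1 < V_0$, escapes from the basin of $b$ are far cheaper than escapes from the basin of $a$, so the stationary mass piles up near $a$, consistent with Theorem 3.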
We take the transition probabilities $\{P_{i,j}\}$, where $P_{i,j}$ is the transition probability from the midpoint of the $i$-th bin $B_i$ to the midpoint of the $j$-th bin $B_j$, as the probability of jumping from $B_i$ to $B_j$. (We also considered averaging $P_{i,j}$ over all the points in $B_i$ and $B_j$, but the result is almost identical.) The invariant measure formula for the discrete case, $\pi P = \pi$, then gives the invariant measure shown in the graph above.

Comparing with the numerical invariant measure from [1]: they get the largest peaks around the point $b$ when $\varepsilon = 0.01$, and increase $\varepsilon$ up to $0.12$ to obtain smaller peaks.

Figure 4.3: Invariant Measure from [1]

Such differences may come from errors during the calculation, and also from the different approaches used to find the transition probabilities. When they computed the matrix $A$, the approximation of the Frobenius-Perron operator, which can be interpreted as a transition matrix, they used the approximation
$$A_{i,j} \mapsto 0, \quad \text{if } \inf_{y\in B_i,\, x\in B_j} |x - f(y)| > \varepsilon\sqrt{2\ln(\gamma/\delta)},$$
where $A_{i,j}$ is the probability of jumping from the bin $B_i$ to the bin $B_j$, $\gamma = \frac{m(B_i)\,m(B_j)}{\sqrt{2\pi\varepsilon^2}}$, and $\delta$ is some negligible threshold, say $\delta = 10^{-15}$ in double-precision arithmetic. That is, if all the points $y \in B_i$ map far away under $f$ from all the points $x \in B_j$, then the matrix entry $A_{i,j}$ is ignored. Another reason may be that they did not take the noise small enough, since our theorem holds only as $\varepsilon \to 0$.

4.5 Exit Position

In the paper [1], the authors are interested in the transport of a stochastic flux from one basin to another. Their method is based on a finite-rank Galerkin matrix, a matrix approximation of the Frobenius-Perron operator which can be interpreted as a transition matrix. They find the transport regions of a stochastic dynamical system by re-indexing this Galerkin matrix. The picture from their paper (see Figure 1.4) shows the escape regions of our dynamical system found for $\varepsilon = 0.04$, labeled by the dashed black lines and overlaid on the basin maps for each basin.
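The exit position predicted in this section can also be seen by direct simulation: run the noisy chain from $a$ and record where it first leaves $[0, 1/6]$. The sketch below is ours; it uses an assumed reconstruction of the ladder map from this chapter's constants ($a = 9/190$, $c = 1/6$), clips the iterate to $[0,1]$ as a modeling convenience, and takes a moderately small $\varepsilon = 0.03$ so that exits occur quickly.

```python
import numpy as np

def f(x):
    # assumed reconstruction of the ladder map from this chapter's constants
    if x <= 0.1:
        return -0.9 * x + 0.09
    if x <= 0.5:
        return 2.5 * x - 0.25
    if x < 0.9:
        return -2.5 * x + 2.25
    return -0.9 * x + 1.81

def exit_position(eps, rng, cap=100000):
    """First value of the chain above c = 1/6, starting from a = 9/190."""
    x, c = 9 / 190, 1 / 6
    for _ in range(cap):
        x = min(max(f(x) + eps * rng.standard_normal(), 0.0), 1.0)
        if x > c:
            return x
    return None                     # no exit within the step budget

rng = np.random.default_rng(1)
exits = [exit_position(0.03, rng) for _ in range(300)]
print(np.median(exits))             # exits cluster just above c = 1/6
```

The overshoot beyond $1/6$ is of order $\varepsilon$ per exit, so as $\varepsilon \to 0$ the exit positions concentrate at the boundary point $1/6$, in line with the least-unlikely-exit picture.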
But the graph can only show us vaguely where the leakage takes place.

Proof of Proposition 6. In order to apply Theorem 2, we first take $F = [0, a_n)$ for $a_n \nearrow \tfrac16$ as $n \nearrow \infty$, and then we need to check the structure of $O_F$. For $y \in [a_n, \tfrac16)$, as in the calculation of the quasipotential from $a$ to $b$, we know that
$$V(a, a_n) \le V(a, y) < V\big(a, \tfrac16\big).$$
Thus by the definition of $O_F$, we obtain $O_F = \{a_n\}$. For given $\delta > 0$, one can choose $n$ such that $\tfrac16 - \delta < a_n < \tfrac16$. Then choosing $\delta_1$ such that $\tfrac16 - \delta < a_n - \delta_1$, and applying Theorem 2 with $\delta_1$ in place of $\delta$, we obtain
$$P_x\Big(X^\varepsilon_{\tau_{A_n^c}} \in \big(\tfrac16 - \delta,\ \tfrac16 + \delta\big)\Big) \to 1$$
as $\varepsilon \to 0$. Since $A_n^c \subseteq U_{\delta_1}$, we obtain the first result of the proposition.

Similarly, one can find where the least unlikely leakage from the basin of $b$ to the basin of $a$ takes place. Take $F' = (b_n, 1]$ for $b_n \searrow 0.9$ as $n \nearrow \infty$. From the calculation of the quasipotential $V(b,a)$, we know that starting from $b$, in order to reach any point $y$ smaller than $0.9$ with minimum cost, one has to choose a path that hits the point $0.9$ on the way. Notice that all the points of $[0.9, 1]$ are contracted to $b$ under iteration. So, similarly, we have $O_{F'} = \{0.9\}$. Applying Theorem 2 again, we get the second part of Proposition 6.

Chapter 5: Future Work

In Section 2.2 we listed six assumptions on the mapping $f$. We saw from the proofs in Chapters 3 and 4 that some of these assumptions can be weakened. For example, the requirement on the expected exit time from the complicated structure in Assumption 5 holds not only for linear mappings whose slopes around the unstable fixed points satisfy $1/k_1 + 1/k_2 < 1$, but also for nonlinear $f$ under similar conditions (see Section 4.3). In addition, we realize that even with finitely many unstable fixed points, once such easy-to-check conditions are satisfied, Assumption 5 still holds. But can one weaken the other assumptions, say Assumption 6? One can check that condition for many mappings, but can one also make it simpler, since it still looks complicated?
And because we study the bistable system, Assumption 2 requires exactly two stable fixed points. What if one adds another stable fixed point to this system? In that case, starting near one of these three points, $X^\varepsilon$ can end up in either of the other two basins. How do the quasipotentials play their role here? What can one say about the expected exit time from one of the basins? And can one still get an invariant measure result like Theorem 3?

In this thesis we only considered the one-dimensional case. What if we change the mapping $f$ to be a fixed mapping of the $d$-dimensional unit cube $[0,1]^d$ to itself and keep the other assumptions similar? Specifically, the Markov chain we would then work with is
$$X_{n+1} = f(X_n) + \varepsilon g(X_n)\xi_n, \tag{5.1}$$
with $g(x)$ bounded. Can one still get results for this Markov chain similar to those in this thesis?

Furthermore, we want to study the SDE
$$dX_t = v_0(X_t, t)\,dt + \varepsilon v_1(X_t, t)\,dW_t,$$
where $v_0(X_t,t)$ and $v_1(X_t,t)$ are periodic mappings with period $T = 1$. One can observe the discrete-time Markov chain $\{X_n : n \ge 0\}$ on $\mathbb{R}$, and the dynamics then becomes
$$X_{n+1} = F\big(X_n,\ \varepsilon\{w_s - w_n : n \le s \le n+1\}\big),$$
which can be rewritten in an asymptotic way as
$$X_{n+1} = f(X_n) + \varepsilon g(X_n)\cdot\text{noise} + \varepsilon^2 g(X_n)\cdot\text{noise} + \cdots.$$
Since $\varepsilon$ is so small, one can ignore the higher-order terms in $\varepsilon$. In the multidimensional case, one observes the discrete-time Markov chain $\{X_n : n \ge 0\}$ on $\mathbb{R}^d$; this asymptotically leads to a Markov chain of the form (5.1). Bollt, Billings and Schwartz's SEIR model [1] is the case $d = 2$. They studied this two-dimensional example using their numerical tools. We want to find out whether theoretical results similar to those in this thesis hold for this type of multidimensional bistability question in the future.

Bibliography

[1] Erik M. Bollt, Lora Billings, Ira B. Schwartz, A manifold independent approach to understanding transport in stochastic dynamical systems, Physica D 173 (2002) 153-177.

[2] R.
Durrett, Probability: Theory and Examples, Duxbury Advanced Series, 2004.

[3] M. I. Freidlin, A. D. Wentzell, Random Perturbations of Dynamical Systems, Springer-Verlag, New York, 1998.

[4] L. Gammaitoni, P. Hänggi, P. Jung, F. Marchesoni, Stochastic resonance, Rev. Mod. Phys. 70 (1998), no. 1, 223-287.

[5] T. E. Harris, The existence of stationary measures for certain Markov processes, Proceedings of the Third Berkeley Symposium, 2 (1956), 113-124.

[6] R. Z. Khas'minskii, Ergodic properties of recurrent diffusion processes and stabilization of the solution to the Cauchy problem for parabolic equations, Theory Probab. Appl. 5 (1960), no. 2, 179-196. (English translation by E. Nishiura.)

[7] G. Maruyama, H. Tanaka, Some properties of one-dimensional diffusion processes, Mem. Fac. Sci. Kyushu Univ. Ser. A 11 (1957), no. 2, 117-141.

[8] G. Maruyama, H. Tanaka, Ergodic property of N-dimensional recurrent Markov processes, Mem. Fac. Sci. Kyushu Univ. Ser. A 13 (1959), no. 2, 157-172.

[9] I. Pavlyukevich, Stochastic Resonance, Dissertation, Humboldt-Universität zu Berlin, 2002.

[10] I. Schwartz, H. Smith, Infinite subharmonic bifurcations in an SEIR epidemic model, J. Math. Biol. 18 (1983), 233-253.
Abstract
In this thesis, we study a discrete-time dynamical system on the unit interval perturbed by low-intensity additive Gaussian noise. We consider systems whose underlying deterministic map has two stable fixed points, and use large deviations methods to study transitions from one basin of attraction to the other. We investigate the transition time between the basins of the two attractors, and the places where transitions are most likely to occur. Dynamical systems with fractal boundaries between the two basins are of particular interest to us. We estimate the expected exit time from neighborhoods of such complicated boundaries. Finally, it is shown that for the system with a specific underlying piecewise-linear mapping, we are able to compute the quasipotentials exactly.
Asset Metadata

Creator: Zeng, Yu (author)
Core Title: Large deviations approach to the bistable systems with fractal boundaries
School: College of Letters, Arts and Sciences
Degree: Doctor of Philosophy
Degree Program: Applied Mathematics
Publication Date: 09/14/2008
Defense Date: 06/02/2008
Publisher: University of Southern California (original), University of Southern California. Libraries (digital)
Tags: action function, bistable systems, fractal boundaries, gaussian noise, large deviations, OAI-PMH Harvest, quasipotential
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Baxendale, Peter (committee chair), Arratia, Richard A. (committee member), Sun, Fengzhu Z. (committee member)
Creator Email: zengyu@usc.edu, zengyusc@gmail.com
Permanent Link (DOI): https://doi.org/10.25549/usctheses-m1598
Unique identifier: UC1191348
Identifier: etd-Zeng-2140 (filename), usctheses-m40 (legacy collection record id), usctheses-c127-94021 (legacy record id), usctheses-m1598 (legacy record id)
Legacy Identifier: etd-Zeng-2140.pdf
Dmrecord: 94021
Document Type: Dissertation
Rights: Zeng, Yu
Type: texts
Source: University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Repository Name: Libraries, University of Southern California
Repository Location: Los Angeles, California
Repository Email: cisadmin@lib.usc.edu