LIMITING ENTRY/RETURN TIMES DISTRIBUTION FOR ERGODIC MARKOV CHAINS IN A UNIT INTERVAL

by Renyan Sun

A Thesis Presented to the FACULTY OF THE USC DORNSIFE COLLEGE OF LETTERS, ARTS, AND SCIENCES, UNIVERSITY OF SOUTHERN CALIFORNIA, in Partial Fulfillment of the Requirements for the Degree MASTER OF ARTS (MATHEMATICS), May 2022

Copyright 2022 Renyan Sun

Acknowledgements

I would like to thank my advisor, Professor Nicolai Haydn, for his patient advising. I would like to thank my committee members, Professor Cymra Haskell and Professor Sergey Lototsky, for their careful review of my work. I would like to thank all of my friends for supporting my master's studies over the past two years. Finally, I would like to thank my family for supporting me financially in obtaining a master's degree in mathematics.

Table of Contents

Acknowledgements
Abstract
Chapter 1: Introduction
  1.1 Motivations
  1.2 Preliminaries
  1.3 Setup
Chapter 2: Main Results
Chapter 3: Counterexample
References

Abstract

This thesis aims to find the limiting entry/return times distribution for ergodic Markov chains in a unit interval. We first introduce Markov chains on a general state space and the concepts of ergodicity and invariance, which serve as the foundation of our discussion. We then use various proof techniques and important results on limiting distributions to derive the corresponding limiting distributions in our setting.

Chapter 1: Introduction

1.1 Motivations

In this thesis, we explore the limiting behaviour of a Markov chain in a unit interval under ergodicity.
Ergodicity is a property that ensures that every point in the state space can be reached in a finite number of steps; it rules out situations where some points can never be reached, or where there are fixed points from which the chain can never escape. We show what conditions are needed for a Markov chain to be ergodic, and how ergodicity of a Markov chain yields an invariant initial probability distribution for the chain. We study invariant measures, since invariance is an important assumption in our discussion. We review the definition of, and some important results concerning, limiting return/entry times distributions. We also use big-$O$ and little-$o$ notation to quantify the approximation error terms by their asymptotic behaviour. All the conditions introduced in this thesis apply to a general state space, which makes the results widely applicable.

1.2 Preliminaries

First, we review the fundamentals of measure theory. Let $(\Omega, \mathcal{F}, \mu)$ be a probability space.

Definition 1 ([3], Probability Measure). $\mu$ is a probability measure if it is a function $\mu \colon \mathcal{F} \to \mathbb{R}$ with

1. $\mu(A) \geq \mu(\emptyset) = 0$ for all $A \in \mathcal{F}$;
2. if $A_i \in \mathcal{F}$ is a countable sequence of disjoint sets, then $\mu(\bigcup_i A_i) = \sum_i \mu(A_i)$;
3. $\mu(\Omega) = 1$.

Definition 2 ([11]). Let $T \colon \Omega \to \Omega$ be a measurable function. A measure $\mu$ is $T$-invariant if
$$\mu(T^{-1}A) = \mu(A) \quad \forall A \in \mathcal{F}.$$

Example 2.1 ([11]). Let $\gamma$ be a point on the real line $\mathbb{R}$ with its usual Borel $\sigma$-algebra, let $T_\gamma$ be defined on $\mathbb{R}$ by $T_\gamma(x) = x + \gamma$, and let $\lambda$ be the Lebesgue measure. Then $\lambda$ is clearly a $T_\gamma$-invariant measure, since $T_\gamma$ is a one-dimensional shift.

Example 2.2 ([4], Doubling Map). Consider the unit interval $I = [0,1]$ and the map $T$ on $I$ defined by $T(x) = 2x \bmod 1$. Let $\lambda$ be the Lebesgue measure and let $a, b \in I$ with $a < b$. Observe that $T^{-1}[a,b] = [\frac{a}{2}, \frac{b}{2}] \cup [\frac{a+1}{2}, \frac{b+1}{2}]$, so
$$\lambda(T^{-1}[a,b]) = \lambda\big(\big[\tfrac{a}{2}, \tfrac{b}{2}\big]\big) + \lambda\big(\big[\tfrac{a+1}{2}, \tfrac{b+1}{2}\big]\big) = \tfrac{b-a}{2} + \tfrac{b+1-a-1}{2} = b - a = \lambda([a,b]).$$
This relation also holds on the algebra of finite or countable unions of intervals.
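The invariance computed in Example 2.2 can be checked numerically. The following sketch is not part of the thesis; the subinterval $[0.2, 0.7]$, the sample size, and the seed are arbitrary choices. It compares the exact preimage length with a Monte Carlo estimate of $\lambda(T^{-1}[a,b]) = \mathbb{P}(T(X) \in [a,b])$ for $X$ uniform on $[0,1]$:

```python
# Sketch (not from the thesis): check that T(x) = 2x mod 1 preserves
# Lebesgue measure, i.e. lambda(T^{-1}[a, b]) = b - a.
import random

random.seed(0)

def T(x):
    return (2.0 * x) % 1.0

a, b = 0.2, 0.7  # arbitrary subinterval of [0, 1]

# exact preimage length: T^{-1}[a, b] = [a/2, b/2] U [(a+1)/2, (b+1)/2]
exact = (b / 2 - a / 2) + ((b + 1) / 2 - (a + 1) / 2)
assert abs(exact - (b - a)) < 1e-12

# Monte Carlo: lambda(T^{-1}[a, b]) = P(T(X) in [a, b]) for X ~ Uniform[0, 1]
n = 200_000
estimate = sum(a <= T(random.random()) <= b for _ in range(n)) / n
assert abs(estimate - (b - a)) < 0.01
```

Both the exact preimage length and the empirical frequency agree with $b - a$, as the asserts confirm.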
By the Kolmogorov extension theorem, $\lambda(T^{-1}(B)) = \lambda(B)$ for all Borel sets $B$, i.e., $\lambda$ is a $T$-invariant measure.

Definition 3 ([3], Ergodic Measure). Let $\mu$ be invariant with respect to the map $T \colon \Omega \to \Omega$. We say $\mu$ is ergodic if for every $T$-invariant set $A \in \mathcal{F}$, $\mu(A)$ is either 0 or 1.

Remark. More specifically, if there is $A \in \mathcal{F}$ such that $T^{-1}A = A$ with $\mu(T^{-1}A) = \mu(A)$, then any composition of the transformation $T$ applied to the set $A$ remains trapped within the set. In other words, points outside $A$ cannot be reached by applying such a map.

Definition 4 ([6], Return/Entry Times Function). Let $T$ be a map on $\Omega$ and assume $\mu$ is a $T$-invariant probability measure on $\Omega$. For a measurable set $U \subset \Omega$ we define the return/entry times function
$$\tau_U(x) = \min\{j \geq 1 : T^j x \in U\}$$
for $x \in U$. We assume $\mu(U) > 0$.

Definition 5 ([6], Return/Entry Times Distributions). Let $B \subset \Omega$ with $\mu(B) > 0$ and consider the entry times function $\tau_B(x)$ for $x \in \Omega$. For $t > 0$ put
$$F_B(t) = \mathbb{P}\Big(\tau_B > \frac{t}{\mu(B)}\Big) = \mu\Big(\Big\{x \in \Omega : \tau_B(x) > \frac{t}{\mu(B)}\Big\}\Big)$$
for the entry times distribution to $B$. Similarly, the return times distribution is defined by
$$\tilde{F}_B(t) = \mathbb{P}\Big(\tau_B > \frac{t}{\mu(B)} \,\Big|\, B\Big) = \frac{\mu\big(\big\{x \in B : \tau_B(x) > \frac{t}{\mu(B)}\big\}\big)}{\mu(B)}.$$

Remark. The rationale behind the scaling of time by $\frac{1}{\mu(B)}$ will be elaborated in Theorem 2.

Definition 6 ([6], Limiting Entry Times Distribution). Let $(\Omega, \mathcal{F}, \mu)$ be a probability space and let $B_n \subset \Omega$ (with $\mu(B_n) > 0$) be a sequence of sets such that $\mu(B_n) \searrow 0^+$. We define the limiting entry times distribution as $F(t) = \lim_{n \to \infty} F_{B_n}(t)$ for $t > 0$, if the limit exists. Similarly, we define the limiting return times distribution as $\tilde{F}(t) = \lim_{n \to \infty} \tilde{F}_{B_n}(t)$ for $t > 0$, if the limit exists.

Theorem 1 ([6], Dual Relation of Entry and Return Times Limiting Distributions). Let $t > 0$ and let $F$ be the limiting entry times distribution. Then the limiting return times distribution satisfies
$$F(t) = 1 - \int_0^t \tilde{F}(s)\,ds.$$

Theorem 2 ([6], Kac 1947). If $\mu$ is an ergodic $T$-invariant probability measure on $\Omega$, then for any $U \subset \Omega$ of positive measure one has
$$\int_U \tau_U(x)\,d\mu(x) = 1.$$

Remark.
Kac's theorem not only guarantees the integrability of $\tau_U$ on $U$, but also that its average value on $U$ is $\frac{1}{\mu(U)}$.

Now we introduce Markov chains on general state spaces.

Definition 7 ([3], Transition Probability). A function $p \colon \Omega \times \mathcal{F} \to \mathbb{R}$ is said to be a transition probability if:

1. For each $x \in \Omega$, $A \mapsto p(x, A)$ is a probability measure on $(\Omega, \mathcal{F})$.
2. For each $A \in \mathcal{F}$, $x \mapsto p(x, A)$ is a measurable function.

Definition 8 ([3], Markov Chain). $X_n$ is a Markov chain with transition probability $p$ if
$$\mathbb{P}\big(X_{n+1} \in B \,\big|\, \sigma(X_1, \ldots, X_n)\big) = p(X_n, B).$$

Remark ([3]). Given a transition probability $p$ and an initial distribution $\rho$ on $(\Omega, \mathcal{F})$, we can define a consistent set of finite-dimensional distributions by
$$\mathbb{P}(X_j \in B_j,\ 0 \leq j \leq n) = \int_{B_0} \rho(dx_0) \int_{B_1} p(x_0, dx_1) \cdots \int_{B_n} p(x_{n-1}, dx_n).$$

Remark ([3]). If we consider a countable state space instead, the corresponding Markov property has a discrete representation. Given the states $i_0, \ldots, i_{n-1}, i$, and $j$, we have $\mathbb{P}(X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_0 = i_0) = \mathbb{P}(X_{n+1} = j \mid X_n = i)$, i.e., a Markov chain has one-step memory. For convenience of notation, instead of checking the Markov property, we write the transition probability as $p(i, j) = \mathbb{P}(X_{n+1} = j \mid X_n = i)$.

Example 8.1 ([3], Ehrenfest Chain). Suppose we have a total of $N$ balls in two urns; $m$ in the first and $N - m$ in the second. We choose one of the $N$ balls at random and move it to the other urn. The corresponding transition probability, on the state space $X = \{0, 1, \ldots, N\}$, is
$$p(m, m+1) = \frac{N-m}{N}, \qquad p(m, m-1) = \frac{m}{N}, \qquad p(i, j) = 0 \text{ otherwise}.$$

Example 8.2. Let $X$ be a Markov chain on $[0,1]$ with transition probability $p$, where $p(x, \cdot)$ is absolutely continuous with respect to the Lebesgue measure for all $x$. Then we can find a probability density $p(x, y)$ such that $p(x, A) = \int_A p(x, y)\,dy$. This is the type of Markov chain that we consider in this thesis.

Let $\eta_A = \sum_{n=1}^{\infty} \mathbf{1}\{X_n \in A\}$, which counts the number of visits to a set $A$, so that $\mathbb{E}_x[\eta_A] = \sum_{n=1}^{\infty} p^n(x, A)$.
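As a quick check of Example 8.1 (a sketch, not part of the thesis; the choice $N = 6$ is arbitrary), the binomial distribution $\mathrm{Binomial}(N, \tfrac{1}{2})$ is stationary for the Ehrenfest chain:

```python
# Sketch (not from the thesis): for the Ehrenfest chain on {0, ..., N},
# pi(m) = C(N, m) / 2^N is stationary; it even satisfies detailed balance,
# pi(m) p(m, m+1) = pi(m+1) p(m+1, m).  Here we verify global balance.
from math import comb

N = 6
pi = [comb(N, m) / 2 ** N for m in range(N + 1)]  # Binomial(N, 1/2)

def p(m, j):
    # Ehrenfest transition probability
    if j == m + 1:
        return (N - m) / N
    if j == m - 1:
        return m / N
    return 0.0

# stationarity: sum_m pi[m] p(m, j) == pi[j] for every state j
max_err = max(
    abs(sum(pi[m] * p(m, j) for m in range(N + 1)) - pi[j])
    for j in range(N + 1)
)
assert max_err < 1e-12
```

Note that the Ehrenfest chain itself is periodic (the parity of the state alternates), so stationarity holds even though the chain is not aperiodic.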
Definition 9 ([7], Uniform Transience and Recurrence). A set $A$ is called uniformly transient if there exists $M < \infty$ such that $\mathbb{E}_x[\eta_A] \leq M$ for all $x \in A$; a set $A$ is called recurrent if $\mathbb{E}_x[\eta_A] = \infty$ for all $x \in A$.

Put $L(x, A) = \mathbb{P}_x(\tau_A < \infty) = \mathbb{P}_x(X \text{ ever enters } A)$, where $\mathbb{P}_x$ denotes the probability of events conditional on the chain beginning with $X_0 = x$.

Definition 10 ([7], Irreducibility). Let $x \in \Omega$, $U \subset \Omega$, and let $X = (X_0, X_1, \ldots)$ be a Markov chain. A set $A$ is irreducible if $L(x, A) > 0$.

Definition 11 ([9], Aperiodicity). Let $(\Omega, \mathcal{F}, p)$ be given. A Markov chain with invariant distribution $\pi(\cdot)$ is aperiodic if there do not exist $d \geq 2$ and disjoint subsets $A_1, A_2, \ldots, A_d \in \mathcal{F}$ with $p(x, A_{i+1}) = 1$ for all $x \in A_i$ ($1 \leq i \leq d-1$) and $p(x, A_1) = 1$ for all $x \in A_d$, such that $\pi(A_1) > 0$. Otherwise, the chain is periodic with period $d$.

Remark. Irreducibility ensures that the transition probability between any two states is nonzero, and aperiodicity guarantees that there are no periodic points. These two properties are essential to ensure that every point in the space can be visited within a finite number of steps. The combination of irreducibility and aperiodicity of a Markov chain is equivalent to the ergodicity of the chain ([3], [9]).

Definition 12 ([1], O Notation). For a given function $g(x)$ on $\Omega$, we denote by $O(g(x)) = \{f(x) :$ there exist positive constants $c$ and $x_0(c, f)$ such that $|f(x)| \leq c\,g(x)$ for all $x \geq x_0(c, f)\}$.

Definition 13 ([1], Little-o Notation). For a given function $g(x)$ on $\Omega$, we denote by $o(g(x)) = \{f(x) :$ for any positive constant $c > 0$, there exists a constant $x_0(c, f) > 0$ such that $|f(x)| < c\,g(x)$ for all $x \geq x_0(c, f)\}$.

Theorem 3 ([2], Doeblin's Condition). Suppose that there is a measure $\phi$ on $\mathcal{B}(I)$ with $0 < \phi(I) < \infty$, an integer $m \geq 1$, an $\varepsilon > 0$, and a set $C \in \mathcal{B}(I)$ such that
$$\phi(C) > 0, \qquad p_0^m(x, \eta) \geq \varepsilon \quad \text{for } x \in I,\ \eta \in C,$$
where $p_0^m(x, \cdot)$ is the density of the absolutely continuous component of $p^m(x, \cdot)$ with respect to $\phi$.
Then there is an invariant probability distribution $\pi(\cdot)$, with $\pi(C_1) \geq \varepsilon\,\phi(C_1)$ whenever $C_1 \subset C$, for which
$$|p^n(x, A) - \pi(A)| \leq [1 - \varepsilon\,\phi(C)]^{(n/m) - 1} \quad \forall n,$$
for $A \in \mathcal{B}(I)$.

Remark. In our case, we can choose $\phi$ to be the Lebesgue measure, $m = 1$, and $C = I$; then $\phi(C)$ is clearly strictly positive. By the assumption of irreducibility and the absolute continuity of the transition probability with respect to the Lebesgue measure, we also have $p(x, \eta) \geq \varepsilon$ for $x, \eta \in C$. By Doeblin's condition, we can find an invariant probability distribution $\rho$ such that
$$|p^n(x, A) - \rho(A)| \leq [1 - \varepsilon\,\phi(C)]^{n-1} \leq 2\vartheta^n \quad \forall n,$$
for $A \in \mathcal{B}(I)$, some $\vartheta \in (0,1)$, and a sufficiently small $\varepsilon$. This implies $p^n(x, A) = \rho(A) + O(2\vartheta^n)$. We choose this $\rho$ as our initial distribution for the Markov chain.

1.3 Setup

The unit interval is denoted by $I$, and let $(\Sigma, \mathcal{F}, \mathbb{P})$ be a probability space where $\Sigma = I^{\mathbb{N}_0}$. $T$ is a measurable function on $I$, and $\mathbb{P}$ is an ergodic $T$-invariant probability measure. The corresponding $\sigma$-algebra $\mathcal{F}$ is generated by the $n$-cylinders $U(A_0, A_1, \ldots, A_n) = \{X \in \Sigma : X_j \in A_j\ \forall j \leq n\}$ with $A_i \subset I$ measurable. The Borel $\sigma$-algebra is denoted by $\mathcal{B}(I)$. $X = (X_0, X_1, \ldots) \in \Sigma$ is an irreducible and aperiodic Markov chain [6]. Let $a$ be some point in $I$, and let $B_\varepsilon(a)$ be a small neighborhood of radius $\varepsilon > 0$ around the point $a$ that lies in $I$. Let $p(x, \cdot)$ be absolutely continuous with respect to the Lebesgue measure for all $x$; this means there exists a probability density $p(x, y)$ such that $p(x, A) = \int_A p(x, y)\,dy$. Also, $p^n(x, A)$ is the transition probability from $x$ to $A$ in $n$ steps. By Theorem 3, we obtain an invariant initial distribution $\rho$. By $T$-invariance of $\mathbb{P}$, the invariance of $\rho$ is equivalent to showing that $\int_I p(x, y)\,\rho(x)\,dx = \rho(y)$ [3].

Chapter 2: Main Results

First, we consider a time interval with two time stamps $s$ and $s + t$. The next lemma shows that there is only weak dependence between the probability that the entry time exceeds $s$ and the probability that it exceeds $t$.
Lemma 4. $|\mathbb{P}(\tau_{B_\varepsilon(a)} > t+s) - \mathbb{P}(\tau_{B_\varepsilon(a)} > s)\,\mathbb{P}(\tau_{B_\varepsilon(a)} > t)| \leq 2\Delta\,\mathbb{P}(B_\varepsilon(a)) + O(2\vartheta^{\Delta})$ for some $\vartheta \in (0,1)$, with $\Delta < t$.

Proof. Step 1: Cutting a hole. Writing $U = B_\varepsilon(a)$,
$$\{\tau_U > t+s\} = \{x : T^j x \notin U \ \forall j \leq t+s\} = \{x : T^j x \notin U \ \forall j \leq s \text{ and } \forall\, s+\Delta \leq j \leq t+s\} \cap \{x : T^j x \notin U \ \forall\, s \leq j \leq s+\Delta\}.$$
So $|\mathbb{P}(\tau_U > t+s) - \mathbb{P}(\tau_U \notin [0,s] \cup [s+\Delta, s+t])| \leq \mathbb{P}(\tau_U \in (s, s+\Delta))$. By $T$-invariance of $\mathbb{P}$, we have that
$$|\mathbb{P}(\tau_U > t+s) - \mathbb{P}(\tau_U > s \ \wedge\ \tau_U \circ T^{s+\Delta} > t-\Delta)| \leq \mathbb{P}(\tau_U \circ T^s \leq \Delta) = \mathbb{P}(\tau_U \leq \Delta) \leq \sum_{j=1}^{\Delta} \mathbb{P}(T^{-j} U) = \Delta\,\mathbb{P}(U). \quad (2.1)$$

Step 2: Bounding the mixed term. Recall that the space is $\Sigma = I^{\mathbb{N}_0}$. Then $\{\tau_U > s\} = \bigcap_{j=1}^{s} T^{-j}(U^C)$, and similarly $\{\tau_U \circ T^{s+\Delta} > t-\Delta\} = \bigcap_{j=s+\Delta+1}^{s+t} T^{-j}(U^C)$. By the Markov property and the $T$-invariance of the measure $\mathbb{P}$, we have that
$$\mathbb{P}\big(\{\tau_U > s\} \cap T^{-(s+\Delta)}\{\tau_U > t-\Delta\}\big) = \int_I \rho(x_0)\,dx_0 \int_{U^C} p(x_0, x_1)\,dx_1 \cdots \int_{U^C} p(x_{s-1}, x_s)\,dx_s \int_I p^{\Delta}(x_s, x_{s+\Delta})\,dx_{s+\Delta} \int_{U^C} p(x_{s+\Delta}, x_{s+\Delta+1})\,dx_{s+\Delta+1} \cdots \int_{U^C} p(x_{s+t-1}, x_{s+t})\,dx_{s+t}.$$
By Doeblin's condition, $p^{\Delta}(x_s, x_{s+\Delta}) = \rho(x_{s+\Delta}) + O(2\vartheta^{\Delta})$ for some $\vartheta \in (0,1)$.
This implies
$$\mathbb{P}\big(\{\tau_{B_\varepsilon(a)} > s\} \cap T^{-(s+\Delta)}\{\tau_{B_\varepsilon(a)} > t-\Delta\}\big) = \big(1 + O(2\vartheta^{\Delta})\big)\,\mathbb{P}(\tau_{B_\varepsilon(a)} > s)\,\mathbb{P}(\tau_{B_\varepsilon(a)} > t-\Delta) \leq \mathbb{P}(\tau_{B_\varepsilon(a)} > s)\,\mathbb{P}(\tau_{B_\varepsilon(a)} > t-\Delta) + O(2\vartheta^{\Delta}),$$
so we obtain
$$\big|\mathbb{P}\big(\{\tau_{B_\varepsilon(a)} > s\} \cap T^{-(s+\Delta)}\{\tau_{B_\varepsilon(a)} > t-\Delta\}\big) - \mathbb{P}(\tau_{B_\varepsilon(a)} > s)\,\mathbb{P}(\tau_{B_\varepsilon(a)} > t-\Delta)\big| \leq O(2\vartheta^{\Delta}). \quad (2.2)$$

Step 3: Filling the gap. $\{\tau_{B_\varepsilon(a)} > t\} = \{\tau_{B_\varepsilon(a)} > t-\Delta\} \setminus \{\tau_{B_\varepsilon(a)} \in [t-\Delta, t]\}$. By $T$-invariance of $\mathbb{P}$,
$$|\mathbb{P}(\tau_{B_\varepsilon(a)} > t) - \mathbb{P}(\tau_{B_\varepsilon(a)} > t-\Delta)| \leq \mathbb{P}(\tau_{B_\varepsilon(a)} \in [t-\Delta, t]) = \mathbb{P}(\tau_{B_\varepsilon(a)} \circ T^{t-\Delta} \leq \Delta) = \mathbb{P}(\tau_{B_\varepsilon(a)} \leq \Delta). \quad (2.3)$$

Combining Steps 1, 2, and 3, and writing $\tau = \tau_{B_\varepsilon(a)}$,
$$|\mathbb{P}(\tau > t+s) - \mathbb{P}(\tau > s)\,\mathbb{P}(\tau > t)| \leq |\mathbb{P}(\tau > t+s) - \mathbb{P}(\tau > s \wedge \tau \circ T^{s+\Delta} > t-\Delta)| + |\mathbb{P}(\tau > s \wedge \tau \circ T^{s+\Delta} > t-\Delta) - \mathbb{P}(\tau > s)\,\mathbb{P}(\tau > t-\Delta)| + |\mathbb{P}(\tau > s)\,\mathbb{P}(\tau > t-\Delta) - \mathbb{P}(\tau > s)\,\mathbb{P}(\tau > t)| \leq \mathbb{P}(\tau \leq \Delta) + O(2\vartheta^{\Delta}) + \mathbb{P}(\tau > s)\,\mathbb{P}(\tau \leq \Delta) \leq \Delta\,\mathbb{P}(B_\varepsilon(a)) + O(2\vartheta^{\Delta}) + 1 \cdot \Delta\,\mathbb{P}(B_\varepsilon(a)) = 2\Delta\,\mathbb{P}(B_\varepsilon(a)) + O(2\vartheta^{\Delta}). \qquad \square$$

Remark. Lemma 4 gives us a tool to dissect a large time horizon $\tilde{T}$ into small pieces, at the cost of an error term, while studying the entry times in the interval $[0, \tilde{T}]$. We will show that this error term is negligible if the $\varepsilon$-ball is sufficiently small and both the initial distribution and the transition probabilities are bounded.

Theorem 5. The limiting distribution of entry times is $F(t) = e^{-t}$ if the initial distribution $\rho$ and the transition probabilities $p^n$ are bounded by some constant $K$.

Proof.
Let $F_\varepsilon(t) = \mathbb{P}(\tau_{B_\varepsilon(a)} > \tilde{T})$, where $\tilde{T} = \frac{t}{\mathbb{P}(B_\varepsilon(a))}$, and let $r$ be a sufficiently large positive number such that $s = \lfloor \tilde{T}/r \rfloor = \lfloor \tilde{T}^{\,b} \rfloor$ for some $0 < b < 1$. Writing $\tau = \tau_{B_\varepsilon(a)}$ and iterating Lemma 4,
$$F_\varepsilon(t) = \mathbb{P}(\tau > \tilde{T}) = \mathbb{P}(\tau > rs) = \mathbb{P}(\tau > s)\,\mathbb{P}(\tau > (r-1)s) + O\big(2\Delta\,\mathbb{P}(B_\varepsilon(a)) + 2\vartheta^{\Delta}\big) = \cdots = \mathbb{P}(\tau > s)^r + O\big(2\Delta\,\mathbb{P}(B_\varepsilon(a)) + 2\vartheta^{\Delta}\big) \sum_{j=0}^{r-1} \mathbb{P}(\tau > s)^j \leq \mathbb{P}(\tau > s)^r + O\big(2\Delta\,\mathbb{P}(B_\varepsilon(a)) + 2\vartheta^{\Delta}\big)\,\frac{1}{1 - \mathbb{P}(\tau > s)} = \mathbb{P}(\tau > s)^r + O\big(2\Delta\,\mathbb{P}(B_\varepsilon(a)) + 2\vartheta^{\Delta}\big)\,\frac{1}{\mathbb{P}(\tau \leq s)}. \quad (2.4)$$
Now we estimate $\mathbb{P}(\tau \leq s)$. First, we notice that
$$\mathbb{P}(\tau \leq s) = \mathbb{P}\Big(\bigcup_{j=1}^{s} T^{-j} B_\varepsilon(a)\Big) \leq \sum_{j=1}^{s} \mathbb{P}(T^{-j} B_\varepsilon(a)) = s\,\mathbb{P}(B_\varepsilon(a))$$
by $T$-invariance of $\mathbb{P}$. By the inclusion–exclusion principle,
$$\mathbb{P}(\tau \leq s) \geq s\,\mathbb{P}(B_\varepsilon(a)) - \sum_{1 \leq j < k \leq s} \mathbb{P}\big(T^{-j} B_\varepsilon(a) \cap T^{-k} B_\varepsilon(a)\big),$$
and by $T$-invariance, $\sum_{1 \leq j < k \leq s} \mathbb{P}(T^{-j} B_\varepsilon(a) \cap T^{-k} B_\varepsilon(a)) = \sum_{1 \leq j < k \leq s} \mathbb{P}(B_\varepsilon(a) \cap T^{-(k-j)} B_\varepsilon(a))$. Since $n = k - j \in [1, s-1]$, we have
$$\mathbb{P}(B_\varepsilon(a) \cap T^{-n} B_\varepsilon(a)) = \int_{B_\varepsilon(a)} \rho(x_0)\,dx_0 \int_{B_\varepsilon(a)} p^n(x_0, x_n)\,dx_n \leq K^2\,\mathbb{P}(B_\varepsilon(a))^2.$$
Therefore
$$s\,\mathbb{P}(B_\varepsilon(a)) - s^2 K^2\,\mathbb{P}(B_\varepsilon(a))^2 \leq \mathbb{P}(\tau \leq s) \leq s\,\mathbb{P}(B_\varepsilon(a)).$$
Observing that $s\,\mathbb{P}(B_\varepsilon(a)) = \big\lfloor \big(\frac{t}{\mathbb{P}(B_\varepsilon(a))}\big)^{b} \big\rfloor\,\mathbb{P}(B_\varepsilon(a)) = t^{b}\,\mathbb{P}(B_\varepsilon(a))^{1-b}\,(1 + o(1))$ and $0 < 1-b < 1$, we obtain
$$\lim_{\varepsilon \to 0} \frac{\mathbb{P}(\tau \leq s)}{s\,\mathbb{P}(B_\varepsilon(a))} = 1. \quad (2.5)$$
Notice that by (2.5),
$$r \log \mathbb{P}(\tau > s) = r \log\big(1 - \mathbb{P}(\tau \leq s)\big) = -r\,\mathbb{P}(\tau \leq s)\,(1 + o(1)) = -rs\,\mathbb{P}(B_\varepsilon(a))\,(1 + o(1)) = -\tilde{T}\,\mathbb{P}(B_\varepsilon(a))\,(1 + o(1)) = -t\,(1 + o(1)). \quad (2.6)$$
By (2.5), and choosing $\Delta = s^{\alpha}$ for some $0 < \alpha < 1$, we may further simplify (2.4) as
$$F_\varepsilon(t) = \mathbb{P}(\tau > s)^r + O\big(2\Delta\,\mathbb{P}(B_\varepsilon(a)) + 2\vartheta^{\Delta}\big)\,\frac{1}{s\,\mathbb{P}(B_\varepsilon(a))} = \mathbb{P}(\tau > s)^r + O\Big(\frac{2 s^{\alpha}\,\mathbb{P}(B_\varepsilon(a))}{s\,\mathbb{P}(B_\varepsilon(a))}\Big) + O\Big(\frac{2\vartheta^{s^{\alpha}}}{s\,\mathbb{P}(B_\varepsilon(a))}\Big).$$
Now we estimate the error terms. For the first,
$$O\Big(\frac{2 s^{\alpha}\,\mathbb{P}(B_\varepsilon(a))}{s\,\mathbb{P}(B_\varepsilon(a))}\Big) = O\Big(2\Big(\tfrac{t}{\mathbb{P}(B_\varepsilon(a))}\Big)^{b(\alpha - 1)}\Big) = O\big(2\,t^{b(\alpha-1)}\,\mathbb{P}(B_\varepsilon(a))^{b(1-\alpha)}\big) \to 0$$
as $\varepsilon \to 0$, since $0 < \alpha < 1$. The second error term also vanishes as $\varepsilon \to 0$, since $\vartheta^{s^{\alpha}}$ decays faster than any power of $s$ as $s \to \infty$, while $s\,\mathbb{P}(B_\varepsilon(a)) = t^{b}\,\mathbb{P}(B_\varepsilon(a))^{1-b}\,(1 + o(1))$ decays only polynomially. Therefore, by (2.6),
$$\lim_{\varepsilon \to 0} F_\varepsilon(t) = \lim_{\varepsilon \to 0} \mathbb{P}(\tau > s)^r = \lim_{\varepsilon \to 0} e^{\,r \log \mathbb{P}(\tau > s)} = e^{-t}. \qquad \square$$

Corollary 5.1. The limiting distribution of return times is $\tilde{F}(t) = e^{-t}$ if the initial distribution $\rho$ and the transition probabilities $p^n$ are bounded by some constant $K$.

Proof. By Theorem 1, $\tilde{F}(t) = -F'(t) = e^{-t}$. $\square$

Remark ([8]). Define $\alpha_1(a) = \lim_{K \to \infty} \lim_{\varepsilon \to 0} \mathbb{P}\big(K < \tau_{B_\varepsilon(a)} \,\big|\, B_\varepsilon(a)\big) \in [0,1]$. This quantity describes the limiting probability that points in a neighborhood of the point $a$ return only after a very long time on Kac's time scale, i.e., after a time of order $\frac{1}{\mathbb{P}(B_\varepsilon(a))}$. In the bounded setting above, clearly $\lim_{\varepsilon \to 0} \mathbb{P}(K < \tau_{B_\varepsilon(a)} \mid B_\varepsilon(a)) = \lim_{\varepsilon \to 0} \tilde{F}_\varepsilon\big(K\,\mathbb{P}(B_\varepsilon(a))\big) = \tilde{F}(0) = 1$ for all $K$, and therefore $\alpha_1(a) = 1$. We call this quantity the extremal index. It measures the clustering of extremes in the invariant Markov chain: if $\alpha_1(a) = 1$, extremes do not cluster around the point $a$; if $\alpha_1(a) < 1$, extremes cluster around $a$.

Chapter 3: Counterexample

In Chapter 2, we imposed boundedness conditions on the initial distribution and the transition probabilities, which yielded a clean exponential function for the limiting return/entry times distributions. It is natural to ask whether removing the boundedness conditions would still deliver similar results. The answer is no. We will derive the distribution for the following counterexample and also compute its extremal index, which turns out to be strictly less than 1.

Let $0 < b < 1$ and $a > 1$ be such that $b\,a^{1-b} = 1$. Then put
$$p(x, y) = \begin{cases} \dfrac{1}{ax}\,\chi_{[0, ax)}(y), & 0 < x \leq \dfrac{1}{a}, \\[4pt] 1, & \dfrac{1}{a} < x < 1, \end{cases}$$
as the probability kernel, as depicted in Figure 3.1. Clearly, this probability density $p$ has a singularity at the point $(0,0)$. Let $X$ be a Markov chain with transition probability defined by the probability kernel $p$, and let $\mu(x) = (1-b)\,x^{-b}$ be our probability measure; note that $\mu((0,1)) = 1$.

Lemma 6. $\mu$ is invariant.

Proof.
$$\int_0^1 p(x, y)\,\mu(x)\,dx = (1-b) \int_0^{1/a} \frac{1}{ax}\,\chi_{[0, ax)}(y)\,x^{-b}\,dx + (1-b) \int_{1/a}^1 x^{-b}\,dx = \frac{1-b}{a} \int_{y/a}^{1/a} x^{-b-1}\,dx + (1-b)\Big[\frac{x^{1-b}}{1-b}\Big]_{1/a}^{1} = \frac{1-b}{ab}\Big[\Big(\frac{y}{a}\Big)^{-b} - a^{b}\Big] + 1 - a^{-(1-b)} = \frac{1-b}{b\,a^{1-b}}\big(y^{-b} - 1\big) + 1 - b = (1-b)\,y^{-b} - (1-b) + 1 - b = (1-b)\,y^{-b} = \mu(y),$$
since $b\,a^{1-b} = 1$ (which also gives $a^{-(1-b)} = b$). $\square$

Let $B_\varepsilon(0) = [0, \varepsilon)$ for some $\varepsilon \in (0, \frac{1}{a})$. Then the probability of going from a point $x \in B_\varepsilon(0)$ to the exterior of $B_\varepsilon(0)$ is $\frac{ax - \varepsilon}{ax}$ for $x \in [\frac{\varepsilon}{a}, \varepsilon)$ (and $0$ for $x \in (0, \frac{\varepsilon}{a})$), with weight $\mu(x)\,dx$. Hence
$$\mathbb{P}\big(B_\varepsilon(0) \to B_\varepsilon^C(0)\big) = \int_{\varepsilon/a}^{\varepsilon} \frac{ax - \varepsilon}{ax}\,\mu(x)\,dx = (1-b) \int_{\varepsilon/a}^{\varepsilon} \Big(1 - \frac{\varepsilon}{ax}\Big)\,x^{-b}\,dx = \big(\varepsilon^{1-b} - (\varepsilon/a)^{1-b}\big) - \frac{1-b}{ab}\,\varepsilon^{1-b}\,(a^{b} - 1) = \varepsilon^{1-b}(1-b) - \frac{1-b}{ab}\,\varepsilon^{1-b}\,(a^{b} - 1) = \varepsilon^{1-b}\,\frac{(1-b)(ab - a^{b} + 1)}{ab} = \varepsilon^{1-b}\,\frac{1-b}{ab}, \quad (3.1)$$
using $a^{b-1} = b$ (equivalently $a^{b} = ab$). By (3.1) and the definition of $\mu$ (note $\mu(B_\varepsilon(0)) = \varepsilon^{1-b}$), we have
$$\mathbb{P}\big(\tau_{B_\varepsilon(0)} \leq 1 \,\big|\, B_\varepsilon(0)\big) = 1 - \mathbb{P}\big(B_\varepsilon(0) \to B_\varepsilon^C(0) \,\big|\, B_\varepsilon(0)\big) = 1 - \frac{1}{\mu(B_\varepsilon(0))}\,\varepsilon^{1-b}\,\frac{1-b}{ab} = 1 - \frac{1-b}{ab}. \quad (3.2)$$

In order to obtain $\alpha_1$, we need to find $\mathbb{P}(\tau_{B_\varepsilon(0)} > n \mid B_\varepsilon(0))$. Since we have already obtained $\mathbb{P}(\tau_{B_\varepsilon(0)} \leq 1 \mid B_\varepsilon(0))$, we only need to find $\mathbb{P}(\tau_{B_\varepsilon(0)} = n \mid B_\varepsilon(0))$ for $n \geq 2$. If $\tau_{B_\varepsilon(0)} = n$ and the point starts from $B_\varepsilon(0)$, we need to compute the probability that the point goes from $B_\varepsilon(0)$ to $B_\varepsilon^C(0)$ at step 1, remains inside $B_\varepsilon^C(0)$ for $n - 2$ steps, and returns to $B_\varepsilon(0)$ at the final step. We already computed the first probability in (3.2). Now we compute $\mathbb{P}(B_\varepsilon^C(0) \to B_\varepsilon^C(0))$:
$$\mathbb{P}\big(B_\varepsilon^C(0) \to B_\varepsilon^C(0)\big) = \int_{\varepsilon}^{1/a} \frac{ax - \varepsilon}{ax}\,d\mu(x) + \int_{1/a}^{1} (1 - \varepsilon)\,d\mu(x) = (1-b) \int_{\varepsilon}^{1/a} \Big(1 - \frac{\varepsilon}{ax}\Big)\,x^{-b}\,dx + (1-\varepsilon)(1-b) \int_{1/a}^{1} x^{-b}\,dx = \Big(b - \varepsilon^{1-b}\Big(1 + \frac{1-b}{ab}\Big) + \varepsilon(1-b)\Big) + \big((1-b) - \varepsilon(1-b)\big) = 1 - \varepsilon^{1-b}\Big(1 + \frac{1-b}{ab}\Big) = \Big(1 + \frac{1-b}{ab}\Big)\big(1 - \varepsilon^{1-b}\big) - \frac{1-b}{ab},$$
again using $a^{b-1} = b$. Since $\mu(B_\varepsilon^C(0)) = \int_{\varepsilon}^{1} (1-b)\,x^{-b}\,dx = 1 - \varepsilon^{1-b}$, we obtain
$$\mathbb{P}\big(B_\varepsilon^C(0) \to B_\varepsilon^C(0) \,\big|\, B_\varepsilon^C(0)\big) = \frac{1}{1 - \varepsilon^{1-b}}\Big[\Big(1 + \frac{1-b}{ab}\Big)\big(1 - \varepsilon^{1-b}\big) - \frac{1-b}{ab}\Big] = 1 - \frac{1-b}{ab}\,\Big(\frac{1}{1 - \varepsilon^{1-b}} - 1\Big) = 1 - \frac{1-b}{ab}\,\frac{\varepsilon^{1-b}}{1 - \varepsilon^{1-b}}. \quad (3.3)$$
This implies
$$\mathbb{P}\big(B_\varepsilon^C(0) \to B_\varepsilon(0) \,\big|\, B_\varepsilon^C(0)\big) = 1 - \mathbb{P}\big(B_\varepsilon^C(0) \to B_\varepsilon^C(0) \,\big|\, B_\varepsilon^C(0)\big) = \frac{1-b}{ab}\,\frac{\varepsilon^{1-b}}{1 - \varepsilon^{1-b}}. \quad (3.4)$$

For convenience, we denote $\mathbb{P}(B_\varepsilon^C(0) \to B_\varepsilon^C(0) \mid B_\varepsilon^C(0))$ by $h$, $\mathbb{P}(B_\varepsilon^C(0) \to B_\varepsilon(0) \mid B_\varepsilon^C(0))$ by $\omega$, $\mathbb{P}(B_\varepsilon(0) \to B_\varepsilon(0) \mid B_\varepsilon(0))$ by $\zeta$, and $\mathbb{P}(B_\varepsilon(0) \to B_\varepsilon^C(0) \mid B_\varepsilon(0))$ by $q$. Then $\mathbb{P}(\tau_{B_\varepsilon(0)} = n \mid B_\varepsilon(0)) = q\,h^{n-2}\,\omega$ for $n \geq 2$.
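At this point the closed forms for $q$ and $h$ can be sanity-checked against direct numerical integration of the kernel. This is a hypothetical sketch, not part of the thesis; $b = 0.5$, $a = 4$ (so that $b\,a^{1-b} = 1$) and $\varepsilon = 0.01$ are arbitrary choices:

```python
# Sketch (not from the thesis): verify q = P(B -> B^C | B) and
# h = P(B^C -> B^C | B^C) for the counterexample kernel numerically,
# with b = 0.5, a = 4 (so b * a**(1-b) == 1) and eps = 0.01.
b, a, eps = 0.5, 4.0, 0.01

def mu(x):
    # invariant density mu(x) = (1-b) x^{-b}
    return (1 - b) * x ** (-b)

def integrate(f, lo, hi, n=100_000):
    # simple midpoint rule
    step = (hi - lo) / n
    return step * sum(f(lo + (i + 0.5) * step) for i in range(n))

mu_B = eps ** (1 - b)  # mu([0, eps)) = eps^{1-b}

# q: escape probability from B = [0, eps); only x in [eps/a, eps) can escape
q = integrate(lambda x: (1 - eps / (a * x)) * mu(x), eps / a, eps) / mu_B
assert abs(q - (1 - b) / (a * b)) < 1e-4          # closed form from (3.1)/(3.2)

# h: probability of staying outside B, conditioned on starting outside
stay = (integrate(lambda x: (1 - eps / (a * x)) * mu(x), eps, 1 / a)
        + (1 - eps) * integrate(mu, 1 / a, 1))
h = stay / (1 - mu_B)
h_closed = 1 - (1 - b) / (a * b) * mu_B / (1 - mu_B)   # closed form (3.3)
assert abs(h - h_closed) < 1e-4
```

Both one-step probabilities agree with the closed forms to within the integration error.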
This implies
$$\mathbb{P}(\tau_{B_\varepsilon(0)} \leq n \mid B_\varepsilon(0)) = \mathbb{P}(\tau_{B_\varepsilon(0)} \leq 1 \mid B_\varepsilon(0)) + \sum_{i=2}^{n} \mathbb{P}(\tau_{B_\varepsilon(0)} = i \mid B_\varepsilon(0)) = 1 - \frac{1-b}{ab} + q\,\omega \sum_{i=2}^{n} h^{i-2} = 1 - \frac{1-b}{ab} + q\,\omega\,\frac{1 - h^{n-1}}{1 - h}.$$
Since $1 - h = \omega$ by (3.4) and $q = \frac{1-b}{ab}$ by (3.2), this simplifies to
$$\mathbb{P}(\tau_{B_\varepsilon(0)} \leq n \mid B_\varepsilon(0)) = 1 - \frac{1-b}{ab} + \frac{1-b}{ab}\,\big(1 - h^{n-1}\big) = 1 - \frac{1-b}{ab}\,h^{n-1}.$$
Therefore
$$\mathbb{P}(\tau_{B_\varepsilon(0)} > n \mid B_\varepsilon(0)) = \frac{1-b}{ab}\,\Big[1 - \frac{1-b}{ab}\,\frac{\varepsilon^{1-b}}{1 - \varepsilon^{1-b}}\Big]^{n-1}.$$
Hence
$$\alpha_1 = \lim_{n \to \infty} \lim_{\varepsilon \to 0} \mathbb{P}(\tau_{B_\varepsilon(0)} > n \mid B_\varepsilon(0)) = \frac{1-b}{ab},$$
which is not equal to 1. Hence there is a clustering of mass around $(0,0)$.

Remark. By assumption, $b\,a^{1-b} = 1$. Suppose $\alpha_1 = 1$; then $1-b = ab$, which implies $b = \frac{1}{1+a}$. Substituting into $b\,a^{1-b} = 1$ gives $\frac{1}{1+a}\,a^{1 - \frac{1}{1+a}} = 1$, i.e., $a^{1 - \frac{1}{1+a}} = 1 + a$. Since $a > 1$ and $0 < 1 - \frac{1}{1+a} < 1$, we have $a^{1 - \frac{1}{1+a}} < a < 1 + a$, a contradiction. Hence $\alpha_1 < 1$ for all admissible $a$ and $b$.

We can now compute the limiting return times distribution. For $t > 0$,
$$\lim_{\varepsilon \to 0} \mathbb{P}\Big(\tau_{B_\varepsilon(0)} > \frac{t}{\mu(B_\varepsilon(0))} \,\Big|\, B_\varepsilon(0)\Big) = \alpha_1 \lim_{\varepsilon \to 0} h^{\,t\,\varepsilon^{-(1-b)} - 1} = \alpha_1 \lim_{\varepsilon \to 0} \Big[1 - \frac{1-b}{ab}\,\frac{\varepsilon^{1-b}}{1 - \varepsilon^{1-b}}\Big]^{t\,\varepsilon^{-(1-b)} - 1} = \alpha_1 \lim_{\varepsilon \to 0} \exp\Big(-t\,\frac{1-b}{ab}\,\frac{1}{1 - \varepsilon^{1-b}}\Big) = \alpha_1\,\exp\Big(-t\,\frac{1-b}{ab}\Big) = \alpha_1\,e^{-\alpha_1 t}.$$

Define $N_{B_\varepsilon(0)}(t)$ as the number of entries into $B_\varepsilon(0)$ by time $t$. By the dual relation between interarrival and waiting time distributions in [10],
$$\lim_{\varepsilon \to 0} \mathbb{P}\Big(N_{B_\varepsilon(0)}\Big(\frac{t}{\mu(B_\varepsilon(0))}\Big) = k\Big) = e^{-\alpha_1 t}\,\frac{(\alpha_1 t)^k}{k!},$$
i.e., $P = \lim_{\varepsilon \to 0} N_{B_\varepsilon(0)}\big(\frac{t}{\mu(B_\varepsilon(0))}\big) \sim \mathrm{Poisson}(\alpha_1 t)$, which describes the distribution of clusters. Put $W = \sum_{i=1}^{P} X_i$. By [5], $W \sim$ Pólya–Aeppli$(\lambda = \alpha_1,\ p = 1 - \alpha_1)$ (here with $t = 1$), i.e., a compound Poisson distribution, given by
$$\mathbb{P}(W = k) = e^{-\alpha_1} \sum_{j=1}^{k} (1 - \alpha_1)^{k-j}\,\frac{\alpha_1^{2j}}{j!}\,\binom{k-1}{j-1} \quad \text{for } k \geq 1, \qquad \mathbb{P}(W = 0) = e^{-\alpha_1}.$$

Remark. We notice in this example that we did not actually need to compute the extremal index $\alpha_1$ by finding the return times distribution at each step. Instead, we could simply look at $\mathbb{P}(B_\varepsilon(0) \to B_\varepsilon^C(0) \mid B_\varepsilon(0))$; i.e., the extremal index measures the probability that points do not cluster in the long term. We can use the doubling map to illustrate this point.
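The doubling-map illustration that follows can also be checked by simulation. The sketch below is not part of the thesis (sample size and seed are arbitrary); it estimates the one-step escape probability $\mathbb{P}(B \to B^C \mid B)$ for the two neighborhoods used in the illustration, $B = [0, \frac{1}{4})$ and $B = (\frac{1}{30}, \frac{1}{10})$:

```python
# Sketch (not from the thesis): estimate the one-step escape probability
# P(B -> B^C | B) under the doubling map T(x) = 2x mod 1 by Monte Carlo.
# Lebesgue measure is T-invariant, so we sample uniformly from B.
import random

random.seed(1)

def T(x):
    return (2.0 * x) % 1.0

def escape_probability(lo, hi, n=200_000):
    # fraction of points of B = [lo, hi) whose image leaves B
    out = sum(not (lo <= T(random.uniform(lo, hi)) < hi) for _ in range(n))
    return out / n

alpha_origin = escape_probability(0.0, 0.25)   # expected about 1/2
alpha_other = escape_probability(1 / 30, 1 / 10)  # expected about 3/4
assert abs(alpha_origin - 0.5) < 0.01
assert abs(alpha_other - 0.75) < 0.01
```

The two estimates match the values $\frac{1}{2}$ and $0.75$ computed in the text below by direct interval arithmetic.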
Recall that $Tx = 2x \bmod 1$. Let $B_\varepsilon(0) = [0, \varepsilon)$ with $\varepsilon = \frac{1}{4}$. After applying the transformation $T$, the points $x \in [0, \frac{1}{8})$ expand to $[0, \frac{1}{4})$, which is exactly the neighborhood of 0 that we started with, while the points $x \in [\frac{1}{8}, \frac{1}{4})$ are mapped to $[\frac{1}{4}, \frac{1}{2})$, which lies outside the initial neighborhood. Therefore half of the mass dissipates, while the other half clusters. Hence the corresponding extremal index is $\alpha_1(0) = \mathbb{P}(B_\varepsilon(0) \to B_\varepsilon^C(0) \mid B_\varepsilon(0)) = \frac{1}{2}$. As a matter of fact, this result holds for all $\varepsilon < \frac{1}{2}$.

Now consider instead $B_\varepsilon(\frac{1}{15})$ with $\varepsilon = \frac{1}{30}$, i.e., the interval $(\frac{1}{30}, \frac{1}{10})$. Similarly, only the points $x \in (\frac{1}{30}, \frac{1}{20})$ land back in a set of the size of the original neighborhood after applying the transformation. Hence
$$\alpha_1\big(\tfrac{1}{15}\big) = \mathbb{P}\big(B_\varepsilon(\tfrac{1}{15}) \to B_\varepsilon^C(\tfrac{1}{15}) \,\big|\, B_\varepsilon(\tfrac{1}{15})\big) = 1 - \frac{\frac{1}{20} - \frac{1}{30}}{\frac{1}{10} - \frac{1}{30}} = 0.75.$$
This result is valid for all $\varepsilon < \frac{1}{30}$.

From the above we make two observations. First, $1 - \alpha_1$ keeps track of the clustering mass. Second, the extremal index can vary from location to location. In other words, the extremal index at a point $x_{EI}$ measures the extent to which points dissipate from a sufficiently small neighborhood of $x_{EI}$.

Figure 3.1: Probability density $p$. For $x_0 \in [0, \frac{1}{a}]$, the transition density is $\frac{1}{a x_0}$ for $0 \leq y \leq a x_0$ and zero otherwise; for $x \in (\frac{1}{a}, 1]$, the transition density is 1 regardless of the value of $y$.

References

[1] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms. The MIT Press, 3rd edition, 2009.
[2] J. L. Doob. Stochastic Processes. Probability and Statistics Series. Wiley, 1953.
[3] Rick Durrett. Probability: Theory and Examples. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 4th edition, 2010.
[4] Alexander Gorodnik. Dynamical Systems and Ergodic Theory. Lecture notes, University of Zurich, 2017/18.
[5] Nicolai Haydn and Sandro Vaienti.
Limiting entry and return times distribution for arbitrary null sets. Communications in Mathematical Physics, 378(1):149–184, 2020.
[6] N. T. A. Haydn. Entry and return times distribution. Dynamical Systems, 28, 2013.
[7] Sean Meyn and Richard L. Tweedie. Markov Chains and Stochastic Stability. Cambridge University Press, 2nd edition, 2009.
[8] Nicholas Moloney, Davide Faranda, and Yuzuru Sato. An overview of the extremal index. Chaos: An Interdisciplinary Journal of Nonlinear Science, 29:022101, 2019.
[9] Gareth O. Roberts and Jeffrey S. Rosenthal. General state space Markov chains and MCMC algorithms. Probability Surveys, 1, 2004.
[10] S. M. Ross. Stochastic Processes. Wiley Series in Probability and Mathematical Statistics. Wiley, 1996.
[11] J. von Neumann. Invariant Measures. Institute for Advanced Study, 1941.
Asset Metadata
Creator
Sun, Renyan (author)
Core Title
Limiting entry/return times distribution for Ergodic Markov chains in a unit interval
Contributor
Electronically uploaded by the author
(provenance)
School
College of Letters, Arts and Sciences
Degree
Master of Arts
Degree Program
Mathematics
Degree Conferral Date
2022-05
Publication Date
04/08/2022
Defense Date
03/10/2022
Publisher
University of Southern California
(original),
University of Southern California. Libraries
(digital)
Tag
entry times,Kac's Law,limiting distribution,Markov chain,OAI-PMH Harvest,return times
Format
application/pdf
(imt)
Language
English
Advisor
Haydn, Nicolai (committee chair), Haskell, Cymra (committee member), Lototsky, Sergey (committee member)
Creator Email
renyansu@usc.edu,sun_renyan1996@163.com
Permanent Link (DOI)
https://doi.org/10.25549/usctheses-oUC110886259
Unique identifier
UC110886259
Document Type
Thesis
Rights
Sun, Renyan
Type
texts
Source
20220408-usctheses-batch-920
(batch),
University of Southern California
(contributing entity),
University of Southern California Dissertations and Theses
(collection)
Access Conditions
The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the author, as the original true and official version of the work, but does not grant the reader permission to use the work if the desired use is covered by copyright. It is the author, as rights holder, who must provide use permission if such use is covered by copyright. The original signature page accompanying the original submission of the work to the USC Libraries is retained by the USC Libraries and a copy of it may be obtained by authorized requesters contacting the repository e-mail address given.
Repository Name
University of Southern California Digital Library
Repository Location
USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA
Repository Email
uscdl@usc.edu