LIMITING DISTRIBUTION AND ERROR TERMS FOR THE NUMBER OF VISITS TO BALLS IN MIXING DYNAMICAL SYSTEMS

by

Katarzyna Wasilewska

A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the Requirements for the Degree
DOCTOR OF PHILOSOPHY (MATHEMATICS)

August 2013

Copyright 2013 Katarzyna Wasilewska

Acknowledgments

First and foremost I would like to thank my adviser, Professor Nicolai Haydn. His guidance and dedication have proven invaluable throughout the writing of this thesis. It has been a pleasure to work under his supervision.

I would also like to express my gratitude to the members of my dissertation committee, Professors Peter Baxendale, Cymra Haskell, Edmond Jonckheere and Robert Sacker, for their time and commitment. Thanks to Amy Yung and Arnold Deal for always being there to help when red tape issues came up.

I would also like to thank my beloved parents, Elżbieta and Wojciech Wasilewski, who never let me rest on my laurels. Without their support and sacrifices I would not have been able to fully discover my potential.

Table of Contents

Acknowledgments
List of Figures
Abstract
Chapter 1: Introduction
  1.1 Summary of Results
Chapter 2: Background Material
  2.1 Dynamical Systems and Ergodic Theory
    2.1.1 Dimension of a Measure
    2.1.2 $\eta$-regularity of a Measure
    2.1.3 Geometric Regularity of a Measure
  2.2 Decay of Correlations
  2.3 Young Tower
    2.3.1 Jacobian of $T$ on the Tower
    2.3.2 Separation Time
Chapter 3: Recurrence to Generic Points
  3.1 Assumptions
  3.2 Immediate Consequences
    3.2.1 Distortion of the Jacobian
    3.2.2 Expansion Estimates
    3.2.3 Decay of Correlations
    3.2.4 Decay of $\gamma(s)$
  3.3 Preliminaries
  3.4 Estimating $\mathcal{R}_1$
  3.5 Setting up $\mathcal{R}_2$ outside of $M_{\rho,J}$
  3.6 Restricting the Return Time
  3.7 Cylinder Sets
  3.8 Approximation of Balls by Cylinder Sets
  3.9 Estimating $\mathcal{R}_2$
  3.10 Optimization of the Result
  3.11 Measure of Removed Sets
    3.11.1 Estimate for $\bigcup_{n=J}^{p-1}\mathcal{C}_{\omega_1(s)}$
    3.11.2 Estimate for $\bigcup_{n=J}^{p-1}\mathcal{C}_{\omega_2(n,s)}$
    3.11.3 The Measure of $\mathcal{E}_{\rho,v}$
    3.11.4 Total Measure of the Removed Sets
  3.12 Technical Estimates
Chapter 4: Short Returns
  4.1 Assumptions
  4.2 Immediate Consequences
    4.2.1 Distortion of the Jacobian
    4.2.2 Decay of $\gamma(s)$
  4.3 Estimate for $M_{\rho,J}$
    4.3.1 Estimate for $n$ Sufficiently Large
    4.3.2 Estimate for Small Values of $n$
  4.4 Technical Estimates
Chapter 5: Recurrence under a Regular Measure
  5.1 Assumptions
  5.2 Proof of the Corollary
Chapter 6: Recurrence under an Absolutely Continuous Measure
  6.1 Assumptions
  6.2 Proof of Result
  6.3 Technical Estimates
Chapter 7: Poisson Approximation Theorem
Chapter 8: Some Consequences of Besicovitch Covering Lemma
Bibliography

List of Figures

2.1 Young Tower
3.1 Cylinder expansion

Abstract

This dissertation explores return statistics to metric balls in measure preserving dynamical systems which admit a Young tower with polynomial decay of the tail. The thesis opens with some background material and known results in the area. Then we proceed to analyze the recurrence to generic points of the system and estimate the measure of the non-generic points. We finally apply these findings to a system equipped with an absolutely continuous measure.

To analyze the recurrence rate for the system $(M,\mathcal{B},T,\mu)$ we study the distribution of the function $S^{t,x}_\rho : M \to \mathbb{N}_0$ defined by
\[ S^{t,x}_\rho(x) = \sum_{n=0}^{N-1} \mathbf{1}_{B_\rho(x)} \circ T^n(x). \]
The function counts the number of visits the trajectory of a point $x \in M$ makes to the fixed ball $B_\rho := B_\rho(x)$ up to time $N = \lfloor t/\mu(B_\rho) \rfloor$.

We show that the return times are governed by an almost Poisson distribution for generic centers $x$. We derive an estimate for the error between the distribution of $S^{t,x}_\rho$ and a true Poissonian and prove that the error converges to zero at a logarithmic rate. Next, we estimate the measure of the set containing the centers of balls $B_\rho(x)$ with short return times by considering its level sets. The size of this set is inversely proportional to the logarithm of the radius of the ball. Finally we combine the two findings and apply the resulting corollary to a system which admits a measure that is absolutely continuous with respect to Lebesgue measure. We show that absolute continuity is sufficient to satisfy the assumptions of our initial results.

This thesis generalizes the paper "Poisson approximation for the number of visits to balls in non-uniformly hyperbolic systems" by Chazottes and Collet [CC13]. Their result holds for systems which can be modeled by a Young tower with exponential decay of the tail and which are equipped with a measure absolutely continuous with respect to Lebesgue.

Chapter 1
Introduction

One of the most fundamental theorems of dynamical systems and ergodic theory is the Poincaré Recurrence Theorem from the year 1899 [P99].
The theorem states that for a measure preserving system the orbit of almost every point will return arbitrarily close to its initial position in finite time. If, in addition, the system is ergodic, then the orbit of almost every point comes arbitrarily close to any other point of the space, again in finite time. Unfortunately, the theorem gives no indication of the amount of time necessary to achieve either of those goals.

To make matters more precise, let $A$ be a subset of an ergodic measure preserving dynamical system $M$ and assume that $A$ has positive measure. We may ask how long it takes for a point $a \in A$ to come back to the set after having left; this is referred to as the return time. We may also ask how long it takes for a point $x \in M$ to visit the set $A$, in which case we are dealing with the hitting time. In addition, we may wonder how often the visits to $A$ occur and what the distribution of the return times is. Note that for $x \in M$ the second visit to $A$ is also the first return of the image of $x$ inside $A$.

In 1931 Birkhoff [B31] proved his ergodic theorem, as a consequence of which we know that typical orbits visit the set $A$ with asymptotic visiting frequency equal to the measure of $A$. In other words, as we let time go to infinity, the ratio of the number of visits to time approaches the measure of the visited set. In 1947 Kac [K47] refined these results by proving that once an orbit hits the set $A$, the average time between two consecutive visits is equal to the inverse of the measure of the set. Since then, numerous papers have been written on the topic, including among others [GS97, H99, H00, L02] on hitting times and [BSTV03, HLV05, S06] on return times.

Most of the existing results explore the visit and return statistics to special types of neighborhoods called cylinder sets. Traditionally, cylinder sets are intersections of various pullbacks of elements from a measurable partition of the space; points that belong to the same cylinder travel together and visit specific elements of the partition up to some predetermined finite time. As such, cylinders are very restrictive and lack practicality. Metric balls, on the other hand, are much more natural. Further, the distribution of return times to balls seems to be related to other indicators, such as local dimension or Lyapunov exponents [STV03]. There are some publications exploring the statistics associated to metric balls, however the systems in question possess extra structure, like exponential mixing or hyperbolicity. We set out to prove a result concerning recurrence to a metric ball in a system with minimal assumptions. We do not assume hyperbolicity or absolute continuity of the measure and instead require of our system what is equivalent to polynomial decay of correlations.

1.1 Summary of Results

Let $(M,T)$ be a dynamical system on the compact metric space $M$. For a positive number $\rho$ define the variable
\[ J = \lfloor a\,|\ln\rho| \rfloor = \lfloor -a \ln\rho \rfloor \tag{1.1} \]
and the set
\[ M_{\rho,J} = \{ x \in M : B_\rho(x) \cap T^{-n}B_\rho(x) \neq \varnothing \text{ for some } 1 \le n < J \}. \tag{1.2} \]
Notice that $M_{\rho,J}$ represents the points within $M$ with short return times.

Fix a ball $B_\rho(x)$ inside $M$ and define the function $S : M \to \mathbb{N}_0$ which tracks the number of visits that the trajectory of the point $x \in M$ makes to the ball $B_\rho(x)$. That is,
\[ S^{t,x}_\rho(x) = \sum_{n=0}^{\lfloor t/\mu(B_\rho(x))\rfloor - 1} \mathbf{1}_{B_\rho(x)} \circ T^n(x). \]
When clear from the context, we will omit the sub- and superscripts and simply denote the operator by $S(x)$. Relevant definitions and background material can be found in Chapter 2.
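To make the two objects above concrete, the following sketch (added here for illustration, not part of the original text) computes the visit counter $S^{t,x}_\rho$ and crudely tests the short-return condition defining $M_{\rho,J}$ for the tripling map $T(x) = 3x \bmod 1$ on the circle with Lebesgue measure, so that $\mu(B_\rho(x)) = 2\rho$. The map itself, the value $a = 0.25$, and the Monte Carlo test of $B_\rho(x)\cap T^{-n}B_\rho(x) \neq \varnothing$ are assumptions made for the demonstration; Theorem 2 below ties $a$ to $\|DT\|$.

```python
import math

def T(x):
    """Tripling map on the circle [0, 1); a stand-in for the abstract system (M, T)."""
    return (3.0 * x) % 1.0

def circle_dist(x, y):
    """Distance on the circle of circumference 1."""
    d = abs(x - y) % 1.0
    return min(d, 1.0 - d)

def visit_count(x, rho, t):
    """S_rho^{t,x}(x): visits of the orbit of x to B_rho(x) up to N = floor(t / mu(B_rho))."""
    mu_ball = 2.0 * rho                    # Lebesgue measure of a radius-rho ball on the circle
    N = int(t / mu_ball)
    y, count = x, 0
    for _ in range(N):
        if circle_dist(y, x) < rho:
            count += 1
        y = T(y)
    return count

def in_M_rho_J(x, rho, a=0.25, samples=2000):
    """Crude test of x in M_{rho,J}: does some point of B_rho(x) return to B_rho(x)
    within n < J = floor(a |ln rho|) steps?  (Sampling, so only an approximation.)"""
    J = int(a * abs(math.log(rho)))
    for k in range(samples):
        y = (x - rho + 2.0 * rho * k / samples) % 1.0   # sample points of B_rho(x)
        for n in range(1, J):
            y = T(y)
            if circle_dist(y, x) < rho:
                return True
    return False

rho, a = 1e-6, 0.25
print("J =", int(a * abs(math.log(rho))))
print("center near the fixed point 1/2:", in_M_rho_J(0.5 + 1e-7, rho, a))  # expect True
print("generic center x = 0.237:      ", in_M_rho_J(0.237, rho, a))        # expect False
print("S =", visit_count(0.237, rho=1e-3, t=5.0), "(compare with t = 5, the Poisson mean)")
```

For this map, the centers flagged as short-return points are essentially those very close to periodic orbits of period less than $J$; Theorem 1 below excludes such centers, and Theorem 2 bounds the measure of the excluded set.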
Theorem 1. Let $(M,T)$ be a dynamical system which can be modeled by a Young tower. Suppose that the tail of the tower's return time function decays polynomially with degree $\beta > 4$. Let $\mu$ be the SRB measure admitted by the system and let $d_\mu$ be its dimension. Assume that $\mu$ is geometrically regular and $\eta$-regular. The exact assumptions are stated in Section 3.1.

Then there exist constants $\alpha, \alpha' \in \big(0, \tfrac{\beta-4}{2}\big)$ and $C, C' > 0$ such that for sufficiently small $\rho$ there exists a set $M' \subset M$ with $\mu(M') \le C' |\ln\rho|^{-\alpha'}$ such that for all $\rho$-balls with centers $x \notin M' \cup M_{\rho,J}$ we have
\[ \Big| \mathbb{P}(S = k) - e^{-t}\frac{t^k}{k!} \Big| \le C\, |\ln\rho|^{-\alpha} \quad \text{for all } k \in \mathbb{N}_0. \]

Theorem 2. Let $(M,T)$ be a dynamical system on the compact manifold $M$ with the map $T : M \to M$ a $C^1$ diffeomorphism. Suppose the system can be modeled by a Young tower whose return time function decays polynomially with degree $\beta > 9$. Let $\mu$ be the SRB measure admitted by the system and let $d_\mu$ be its dimension. Assume that $\mu$ is geometrically regular and $\eta$-regular. The exact assumptions are stated in Section 4.1.

Let $M_{\rho,J}$ be defined as in (1.2):
\[ M_{\rho,J} = \{ x \in M : B_\rho(x) \cap T^{-n}B_\rho(x) \neq \varnothing \text{ for some } 1 \le n < J \}, \]
where $J = \lfloor a|\ln\rho| \rfloor$ with $a = \big[4\big(\|DT\|_{L^\infty} + \|DT^{-1}\|_{L^\infty}\big)\big]^{-1}$.

There exist constants $\tilde\alpha = \tfrac{\beta-9}{4}$ and $\tilde C > 0$ such that for sufficiently small $\rho$
\[ \mu(M_{\rho,J}) \le \tilde C\, |\ln\rho|^{-\tilde\alpha}. \]

Combining Theorems 1 and 2 we obtain:

Corollary. Let $(M,T)$ be a dynamical system satisfying the assumptions of Theorems 1 and 2. Let $\mu$ be the invariant measure. For a fixed ball $B_\rho(x)$ with center $x \in M \setminus (M' \cup M_{\rho,J})$, the function $S^{t,x}_\rho$ counting the number of visits to the ball $B_\rho(x)$ of sufficiently small diameter can be approximated by a Poissonian as follows:
\[ \Big| \mathbb{P}(S = k) - e^{-t}\frac{t^k}{k!} \Big| \le C\, |\ln\rho|^{-\alpha} \quad \text{for all } k \in \mathbb{N}_0, \]
where the constants $\alpha \in \big(0, \tfrac{\beta-4}{2}\big)$ and $C > 0$ are independent of $\rho$ and $x$. Further, we can bound the measure of the set containing the excluded ball centers by
\[ \mu(M' \cup M_{\rho,J}) \le C''\, |\ln\rho|^{-\alpha''}, \]
where $\alpha'' \in \big(0, \tfrac{\beta-9}{4}\big]$ and $C'' > 0$ are independent of both $\rho$ and $x$.

Note that as $\rho$ tends to zero, so do the error of approximation and the measure of the excluded set. Therefore in the limit the function $S$ is Poisson distributed on a full measure set.

The corollary also holds if the assumption "$\mu$ geometrically regular and $\eta$-regular" is replaced with "$\mu$ absolutely continuous with respect to Lebesgue measure", as the latter condition implies the former. This gives rise to:

Theorem 3. Let $(M,T)$ be a dynamical system on the compact manifold $M$ of dimension $D$. Let the map $T : M \to M$ be a $C^1$ diffeomorphism with attractor $A$. Suppose the system can be modeled by a Young tower whose return time function decays polynomially with degree $\beta > 9$. Let $\mu$ be the SRB measure admitted by the system and let $d_\mu$ be its dimension. The exact assumptions are stated in Section 6.1.

There exist constants $\hat\alpha \in \big(0, \tfrac{\beta-9}{4}\big]$ and $\hat C > 0$, and a small set $M_\rho \subset M$ with $\mu(M_\rho) \le \hat C |\ln\rho|^{-\hat\alpha}$, so that the following is true for $\rho$ sufficiently small. For a fixed ball $B_\rho(x)$ with center $x \notin M_\rho$, the function $S^{t,x}_\rho$ counting the number of visits to the ball $B_\rho(x)$ can be approximated by a Poissonian as follows:
\[ \Big| \mathbb{P}(S = k) - e^{-t}\frac{t^k}{k!} \Big| \le \hat C\, |\ln\rho|^{-\hat\alpha} \quad \text{for all } k \in \mathbb{N}_0. \]

Chapter 2
Background Material

To pave the way for the results in this thesis, this chapter includes some previously known material alongside newly defined concepts. We begin with the basic facts of dynamical systems and measure theory. We introduce various forms of regularity of a measure. Then we proceed to discuss the fundamental elements of decay of correlations and the Young tower construction.

2.1 Dynamical Systems and Ergodic Theory

A dynamical system is comprised of a metric space $M$ and a piecewise continuous map $T : M \to M$. We can introduce a probability measure $\mu$ on the system and write down the Borel sigma algebra $\mathcal{B}$ of subsets of $M$ that are measurable under $\mu$. Recall that $\mu$ is a probability measure if $\mu(M) = 1$.

The dynamical system $(M,\mathcal{B},T,\mu)$ is said to be measure preserving if $T_*\mu = \mu$, meaning that $\mu$ is $T$-invariant. By definition, then,
\[ \mu(T^{-1}E) = \mu(E) \quad \text{for every set } E \in \mathcal{B}. \]

A measure preserving dynamical system is called ergodic provided that any set $E \in \mathcal{B}$ with the property $\mu(T^{-1}E \,\triangle\, E) = 0$ has measure $\mu(E) = 0$ or $1$. Intuitively, if the set $E$ is almost surely fixed by the map $T$, meaning $T^{-1}E = E$ modulo sets of measure zero, then $E$ must be null or of full measure. It follows that all sets of positive, non-full measure are moved throughout the space by the map $T$.

$(M,\mathcal{B},T,\mu)$ is called mixing if for any two Borel sets $A, B \in \mathcal{B}$ with positive measure we have $|\mu(A \cap T^{-n}B) - \mu(A)\mu(B)| \to 0$ as $n \to \infty$. In a mixing dynamical system any two Borel sets become independent over time. We can rewrite the condition as
\[ \lim_{n\to\infty} \frac{\mu(A \cap T^{-n}B)}{\mu(A)} = \mu(B). \]
Since the sets $A$ and $B$ are chosen arbitrarily, mixing ensures that in the limit: (a) the preimages of $B$ spread throughout the whole space $M$, and (b) the preimages spread evenly, meaning that the relative size of the preimage of $B$ inside $A$ is the same as the size of $B$ inside the entire space $M$.

2.1.1 Dimension of a Measure

By definition, the dimension of the measure $\mu$ is given by
\[ d_\mu = \lim_{\rho\to 0} \frac{\log \mu(B_\rho(x))}{\log \rho} \quad \text{for a.e. } x \in M. \]
Thus for any $\varepsilon > 0$ and almost every $x \in M$ there exists $\rho_x$ so that for $\rho < \rho_x$ we have
\[ d_\mu - \varepsilon < \frac{\log \mu(B_\rho(x))}{\log \rho} < d_\mu + \varepsilon, \]
and so, after some algebraic manipulation,
\[ \rho^{\,d_\mu+\varepsilon} < \mu(B_\rho(x)) < \rho^{\,d_\mu-\varepsilon}. \]
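The defining limit suggests a direct numerical estimate of the local dimension: approximate $\mu$ by the empirical measure of a long orbit and regress $\log \mu(B_\rho(x))$ against $\log \rho$. The sketch below is an illustration added here, not part of the thesis; it uses the tripling map, whose invariant measure is Lebesgue, so the true value is $d_\mu = 1$. The orbit length and the radii are arbitrary choices.

```python
import math
import random

def T(x):
    """Tripling map on [0,1); its invariant (SRB) measure is Lebesgue, so d_mu = 1."""
    return (3.0 * x) % 1.0

def circle_dist(x, y):
    d = abs(x - y) % 1.0
    return min(d, 1.0 - d)

def local_dimension(x, orbit_len=200_000, radii=(1e-2, 5e-3, 2e-3, 1e-3)):
    """Estimate d_mu at x as the least-squares slope of log mu(B_rho(x)) versus log rho,
    with mu replaced by the empirical measure of one long orbit."""
    random.seed(1)
    y = random.random()                       # a "typical" starting point
    orbit = []
    for _ in range(orbit_len):
        orbit.append(y)
        y = T(y)
    logs_rho, logs_mu = [], []
    for rho in radii:
        hits = sum(1 for z in orbit if circle_dist(z, x) < rho)
        if hits:
            logs_rho.append(math.log(rho))
            logs_mu.append(math.log(hits / orbit_len))
    n = len(logs_rho)
    mean_r, mean_m = sum(logs_rho) / n, sum(logs_mu) / n
    return sum((r - mean_r) * (m - mean_m) for r, m in zip(logs_rho, logs_mu)) \
        / sum((r - mean_r) ** 2 for r in logs_rho)

print("estimated d_mu at x = 0.3:", round(local_dimension(0.3), 3))  # should be close to 1
```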
2.1.2 $\eta$-regularity of a Measure

Let $\eta > 1$. We say that a measure $\mu$ is $\eta$-regular if there exist $w \in (1,\eta)$ and $a > 0$ such that for almost every $x \in M$ there exists a $\rho_x$ so that
\[ \frac{\mu\big(B_{\rho+\rho^w}(x) \setminus B_{\rho-\rho^w}(x)\big)}{\mu(B_\rho(x))} \lesssim \frac{1}{|\ln\rho|^{a}} \quad \text{for } \rho < \rho_x. \]
We say $\mu$ is uniformly $\eta$-regular if there exists a $\hat\rho$ such that for almost every $x \in M$ we have $\rho_x = \hat\rho$.

2.1.3 Geometric Regularity of a Measure

Let $d_\mu$ be the dimension of the measure. We say that the measure $\mu$ is geometrically regular if there exists a positive constant $d_0 < d_\mu$ so that for any metric ball $B_\rho \subset M$ with sufficiently small radius
\[ \mu(B_\rho) \lesssim \rho^{\,d_0}. \]

2.2 Decay of Correlations

Consider two observables $\phi, \psi : M \to \mathbb{R}$. Assume that $\phi$ is Lipschitz on $M$ and that $\psi$ is in $L^\infty(M)$. Mixing properties of the system are reflected in the decay of correlations of the observables. We say that the system has polynomial decay of correlations if
\[ \Big| \int_M \phi\,(\psi \circ T^n)\,d\mu - \int_M \phi\,d\mu \int_M \psi\,d\mu \Big| \le \varphi_n\, \|\phi\|_{\mathrm{Lip}}\, \|\psi\|_{L^\infty}, \]
with $\varphi_n$ of the order of $n^{-\hat\kappa}$ for a positive $\hat\kappa$ bounded away from zero. The correlation is said to decay exponentially if $\varphi_n$ is of the order $\vartheta^n$ for some $0 < \vartheta < 1$. With either decay speed, the absolute value on the left converges to zero.

Note that in the special case of $\phi$ and $\psi$ being indicator functions, we recover the definition of mixing. Therefore all systems exhibiting decay of correlations are mixing, and in turn all mixing systems are ergodic.

Note that some authors require the two observables to satisfy different types of regularity; for example [CC13] takes both $\phi$ and $\psi$ to be Hölder, while [S06] expects the functions to be $L^2$. For our purposes the condition "$\phi$ Lipschitz and $\psi \in L^\infty(M)$" is sufficient, as the theorems we quote are true with this more relaxed condition.

Polynomial decay of correlations is present in such dynamical systems as piecewise expanding one-dimensional maps with neutral fixed points or certain billiards with convex boundaries. Systems with exponential decay of correlations include dispersing billiards and certain logistic and Hénon-type maps [LSY99].
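As a concrete check of this definition, the sketch below (added for illustration; the map, the observables, and the orbit length are assumptions, not taken from the thesis) estimates the correlation $\big|\int \phi\,(\psi\circ T^n)\,d\mu - \int\phi\,d\mu\int\psi\,d\mu\big|$ for the tripling map by Birkhoff averaging along a long orbit, using a Lipschitz $\phi$ and a bounded $\psi$. For this uniformly expanding map the decay is in fact exponential; an intermittent map with a neutral fixed point would instead exhibit the polynomial rate assumed later in the thesis.

```python
import math
import random

def T(x):
    """Tripling map on [0,1); mixing with fast correlation decay."""
    return (3.0 * x) % 1.0

phi = lambda x: math.sin(2.0 * math.pi * x)      # Lipschitz observable
psi = lambda x: 1.0 if x < 0.5 else 0.0          # bounded (L^inf) observable

def correlation(n, orbit_len=200_000, seed=0):
    """Estimate | int phi (psi o T^n) dmu - int phi dmu int psi dmu |
    by ergodic averages along one long orbit."""
    random.seed(seed)
    x = random.random()
    orbit = []
    for _ in range(orbit_len + n):
        orbit.append(x)
        x = T(x)
    m = orbit_len
    mean_phi = sum(phi(orbit[k]) for k in range(m)) / m
    mean_psi = sum(psi(orbit[k]) for k in range(m)) / m
    cross = sum(phi(orbit[k]) * psi(orbit[k + n]) for k in range(m)) / m
    return abs(cross - mean_phi * mean_psi)

# For larger n the true correlation falls below the sampling error, roughly 1/sqrt(orbit_len).
for n in (1, 2, 4, 8, 16):
    print(n, correlation(n))
```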
2.3 Young Tower

Figure 2.1: Young Tower

A Young tower is an abstract dynamical object which models the behavior of a dynamical system $(M,T)$. The tower simplifies the statistical analysis by focusing only on recurrence times. The idea behind the construction is to pick an appropriate reference set and to think of the system as having renewed itself whenever a full return to that set is made.

Let $\Lambda \subset M$ be an open set with non-empty interior and let $m$ be a suitable reference measure on $\Lambda$; $\Lambda$ will become the base of our tower. Note that since the system is mixing, due to the decay of correlations it does not matter where $\Lambda$ is located within $M$ ([LSY98], p. 614).

There exist pairwise disjoint subsets $\{\Lambda_i\}_{i\in\mathbb{N}}$ of the base so that, modulo sets of measure zero, $\Lambda = \bigsqcup_i \Lambda_i$, and the return time function $R : \Lambda \to \mathbb{N}$ is constant on each of the subsets, meaning that for each $i$ there is an $R_i \in \mathbb{Z}^+$ such that $R|_{\Lambda_i} = R_i$ and $T^{R_i}\Lambda_i = \Lambda$. Without loss of generality we will assume that the greatest common divisor of all of the $R_i$ is equal to one; the alternative and its reduction to our setting can be found in [LSY98], p. 607. Further suppose that the function $R$ is integrable on the tower, i.e.
\[ \sum_i R_i\, m(\Lambda_i) = \int_\Lambda R\,dm < \infty. \tag{2.1} \]
It follows that $(M,T)$ admits an SRB measure $\mu$; details can be found in [LSY98] (Theorem 1). In addition, the measure $m$ is SRB for the system $(\Lambda, T^R)$. The relationship between $\mu$ and $m$ can be expressed as
\[ \mu(S) = \sum_{i=1}^{\infty} \sum_{j=0}^{R_i-1} m(T^{-j}S \cap \Lambda_i) \quad \text{for any set } S \subset M. \tag{2.2} \]
The idea behind the above formula is that parts of the set $S$ lie on different levels of various beams, so we need to first pull them back to the base before measuring their contribution to the whole.

We will refer to the portions of the tower above each $\Lambda_i$ as beams. We will assume that for every $n \in \mathbb{Z}^+$ there are only finitely many $i$'s for which $R_i = n$, so that for every $n$ there are only finitely many beams with that height.

We can think of $T$ as propagating the sets $\Lambda_i$ up the beams, with each iteration image represented by one level of the appropriate beam, the last level being $R_i - 1$ levels above the original $\Lambda_i$. Each level is bijectively mapped onto the next, and the last level is bijectively mapped onto all of $\Lambda$. Note that $R_i$ may not be the first time $\Lambda_i$ returns to $\Lambda$.

Due to the bijective nature of $T$ on any particular beam, we will assume that as the map propagates a subset $E \subset \Lambda_i$ upwards the set remains unchanged until the last iteration of the map, so that all the dynamics (e.g. expansion) that would have affected $E$ happen all at once when the set returns to the base $\Lambda$. In particular, we will suppose that the diameter and the measure of any $E \subset \Lambda_i$ remain constant as long as the set does not fall off the beam. Let $|\cdot|$ represent the diameter of a set with respect to the metric $d$ on the space $M$. The metric does not exist on the tower, thus we shall artificially preserve the diameter by setting
\[ |T^j E| = |E| \quad \text{for } 0 \le j \le R_i - 1. \tag{2.3} \]
As for the measure, we have for any $E \subset \Lambda_i$
\[ m(T^j E) = m(E) \quad \text{for } 0 \le j \le R_i - 1. \tag{2.4} \]
Note that this particular assumption puts us in the setting of [LSY99], where $T$ has been reduced to an "expanding map" by collapsing along stable manifolds. In effect, throughout our analysis we will only look at the behavior of our map in the expanding directions.

Since any point $x$ from $\Lambda_i$ will return to $\Lambda$ after $R_i$ iterations, we can define the map $\hat T : \Lambda \to \Lambda$ piecewise as follows:
\[ \hat T x := T^{R_i} x \quad \text{for } x \in \Lambda_i. \tag{2.5} \]
From our assumptions about the behavior of $T$ on the tower we have that for every element $\Lambda_i$ of the partition, $\hat T \Lambda_i = \Lambda$ with $\hat T|_{\Lambda_i}$ one-to-one and onto. We can also write down the function representing the $l$-th return to $\Lambda$ using the ergodic sum notation
\[ R^l(x) = \sum_{k=0}^{l-1} R(\hat T^k x). \]
When iterated, the two functions satisfy $\hat T^l = T^{R^l}$.
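A standard example that fits the polynomial-tail framework of this thesis is the Liverani-Saussol-Vaienti intermittent map. The sketch below is an illustration added here, not part of the thesis: it takes the base $\Lambda = [1/2, 1]$ with Lebesgue as the reference measure and tabulates the empirical tail $m(R > k)$ of the first-return time, which for this map is known to decay roughly like $k^{-1/\gamma}$; the parameter value $\gamma = 0.2$ (giving tail degree about $5 > 4$) and the sample size are arbitrary choices. The towers used in the thesis do not require $R$ to be a first-return time, but the first-return construction is the standard way to build one for this map.

```python
import random

GAMMA = 0.2   # intermittency parameter; return-time tail is roughly k^(-1/GAMMA) = k^(-5)

def T(x):
    """Liverani-Saussol-Vaienti map on [0,1]: neutral fixed point at 0,
    uniformly expanding branch on [1/2,1]."""
    if x < 0.5:
        return x * (1.0 + (2.0 * x) ** GAMMA)
    return 2.0 * x - 1.0

def return_time(x, cap=10_000_000):
    """First return time R(x) to the base Lambda = [1/2,1] for x in Lambda."""
    y, r = T(x), 1
    while y < 0.5 and r < cap:
        y = T(y)
        r += 1
    return r

def empirical_tail(samples=20_000, ks=(1, 2, 5, 10, 20, 50)):
    """Estimate m(R > k) under Lebesgue measure m on the base."""
    random.seed(0)
    times = [return_time(random.uniform(0.5, 1.0)) for _ in range(samples)]
    return {k: sum(1 for r in times if r > k) / samples for k in ks}

for k, tail in empirical_tail().items():
    print(f"m(R > {k:3d}) ~ {tail:.5f}")
```

Choosing $\gamma$ smaller than $1/4$ is what makes this example compatible with assumption (A1) below, which requires a tail degree $\beta > 4$.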
Throughout the text we will use the following function of $s \in \mathbb{R}$:
\[ \gamma(s) := \Big( \sum_{i:\,R_i \ge s} R_i\, m(\Lambda_i) \Big)^{1/2}. \]
Note that since the return time $R$ is integrable, $\gamma(s) \to 0$ as $s \to \infty$.

2.3.1 Jacobian of T on the Tower

Even though our original system $(M,\mathcal{B},T,\mu)$ is not differentiable, we can introduce a differential structure on the tower using the Radon-Nikodym derivative of $T$ with respect to $m$ (following [LSY98], p. 596). The derivative exists and is well defined because every $T^{R_i}|_{\Lambda_i}$ and its inverse are non-singular with respect to the measure $m$. Let
\[ JT = \frac{d(T^{-1}m)}{dm}. \]
This means that $d(T^{-1}m) = J(T)\,dm$, which further gives us
\[ m(TE) = \int_{TE} 1\,dm = \int_E 1\,d(T^{-1}m) = \int_E JT\,dm. \tag{2.6} \]
By our assumption on the measure $m$, we know that $J(T) \equiv 1$ on all of the tower except for the top levels, that is except on $T^{-1}\Lambda$. Notice that the Jacobian of $\hat T|_{\Lambda_i}$ is equal to the Jacobian of $T$ on the last level of the beam over $\Lambda_i$; this is because, from the measure theoretic perspective, nothing happens between the lower levels. It is then true that
\[ \| J\hat T \|_{L^\infty} = \| JT \|_{L^\infty}. \tag{2.7} \]

2.3.2 Separation Time

Based on the flight time $R$ and the partition of the base, we define the separation time
\[ s(x,y) = \min\{ k \ge 0 : \hat T^k x \text{ and } \hat T^k y \text{ lie in distinct } \Lambda_i \}, \]
so that for $x, y \in \Lambda_i$ we have $s(x,y) = 1 + s(\hat T x, \hat T y)$ and in particular $s(x,y) \ge 1$.

We can extend the concept of a separation time from points to sets inside $\Lambda$ by defining
\[ s(E) = \min\{ k \ge 0 : \exists\, x, y \in E \text{ s.t. } \hat T^k x \text{ and } \hat T^k y \text{ lie in distinct } \Lambda_i \}. \tag{2.8} \]
We will refer to a set as having separated, or being separated, if there are two distinct points $x$ and $y$ in that set that belong to different $\Lambda_i$.

Chapter 3
Recurrence to Generic Points

In this chapter we prove Theorem 1. We begin by stating the precise assumptions necessary for the result and derive some of the consequences that follow with minimal work. In Section 3.3 we utilize the Poisson approximation theorem from Chapter 7 to establish the proof strategy. We then proceed to estimate the error components $\mathcal{R}_1$ and $\mathcal{R}_2$ arising from the approximation theorem. To bound $\mathcal{R}_1$ we employ the decay of correlations, while $\mathcal{R}_2$ is estimated using the Young tower method. In Section 3.10 we optimize the variables introduced throughout the proof. We close with Section 3.11, which contains estimates concerning the portions of the space $M$ that had to be omitted from our analysis.

3.1 Assumptions

Note that $C_0, C_1, C_3, \ldots$ and $\alpha, \beta, \gamma, \ldots$ will represent global constants that retain their meaning throughout the chapter, while $c_0, c_1, c_3, \ldots$ shall only be used locally.

Let $(M,T)$ be a dynamical system equipped with a metric $d(\cdot,\cdot)$. Suppose that the system can be modeled by a Young tower possessing a reference measure $m$, and assume also that the greatest common divisor of the return times $R_i$ is equal to 1. Let $\mu$ be the SRB measure associated to the system. We will further require the following:

(A1) Polynomial decay of the tail. There exist constants $C_0$ and $\beta > 4$ such that
\[ m(R > k) \le C_0\, k^{-\beta}. \tag{3.1} \]

(A2) Additional assumption on $\beta$. Let $d_\mu$ be the dimension of the measure $\mu$. We will require that
\[ d_\mu(\beta - 2) > 1. \tag{3.2} \]

(A3) Regularity of the Jacobian and the metric on the tower. There exist a constant $C_1 > 0$ and $\theta \in (0,1)$ such that for any $x, y$ in $\Lambda$ with $s(x,y) \ge 1$:

(a)
\[ \Big| \ln \frac{J\hat T x}{J\hat T y} \Big| \le C_1\, \theta^{\,s(\hat T x,\, \hat T y)}; \tag{3.3} \]

(b)
\[ d(\hat T^k x, \hat T^k y) \le C_1\, \theta^{\,s(x,y) - k}, \quad \text{where } 0 \le k < s(x,y). \tag{3.4} \]

(A4) Uniform regularity of the measure. Let $\eta = d_\mu(\beta - 2) > 1$ by (A2), and suppose that the positive constant $d_0 < d_\mu$ is fixed.
There exist C 2 > 0 and 0 so that: (a) (-regularity) There exists w2 (1;) and a> 0 so that for < 0 we have (B + w(x)nB w(x)) (B (x)) C 2 j lnj a (3.5) for all points x2M. (b) (geometric regularity) (B )C 2 @ 0 (3.6) for all B M provided that < 0 . 19 3.2 Immediate Consequences 3.2.1 Distortion of the Jacobian Lemma 3.2.1 (Distortion of the Jacobian). There exists a constant C 3 > 1 such that J ^ T q x2J ^ T q y 1 C 3 ;C 3 (3.7) for any x and y in with separation time s(x;y)q. Proof. Letx;y2 and letq 1 be a natural number less than or equal tos(x;y). Then s(x;y) =s( ^ T q x; ^ T q y) +qq ) s( ^ T q x; ^ T q y) 0 and so for any jq 1 s( ^ T j+1 x; ^ T j+1 y) = s( ^ T q x; ^ T q y)+q(j+1) = q(j+1) s( ^ T q x; ^ T q y) q(j+1) : Then by chain rule and (A3)(a), we have ln J ^ T q x J ^ T q y = ln q1 Y j=0 J ^ T ( ^ T j x) J ^ T ( ^ T j y) q1 X j=0 ln J ^ T ( ^ T j x) J ^ T ( ^ T j y) q1 X j=0 C 1 s( ^ T ( ^ T j x); ^ T ( ^ T j x)) =C 1 q1 X j=0 q(j+1) =C 1 q1 X j=0 j C 1 1 : 20 In particular J ^ T q (x) J ^ T q (y) e C 1 1 :=C 3 ; with C 3 > 1 since C 1 1 > 0: By symmetry we also get J ^ T q (y) J ^ T q (x) C 3 : This completes the proof. Lemma 3.2.2. Let EE 0 i for some i. For any qs(E 0 ) we have 1 C 3 m( ^ T q E) m( ^ T q E 0 ) m(E) m(E 0 ) C 3 m( ^ T q E) m( ^ T q E 0 ) : Proof. By the Mean Value Theorem there exist x and x 0 in E 0 such that m( ^ T q E) =J( ^ T q x)m(E) m( ^ T q E 0 ) =J( ^ T q x 0 )m(E 0 ): Then we can write m(E) m(E 0 ) = J( ^ T q x) 1 m( ^ T q E) J( ^ T q x 0 ) 1 m( ^ T q E 0 ) = J( ^ T q x 0 ) J( ^ T q x) m( ^ T q E) m( ^ T q E 0 ) and now the result follows directly from Lemma 3.2.1. 21 3.2.2 Expansion Estimates Lemma 3.2.3. For every " > 0 there exists N large enough so that for x2 i with R i >N implies J ^ T (x) 1 <". Proof. Since we have assumed that R is integrable we obtain 1 X l=1 m(R =l) 1 X l=1 lm(R =l) = Z Rdm<1; whence lim l!1 m(R =l) = 0: Using above we know that for " > 0 there exists N so that whenever l > N we have m(R = l) < " 0 , where " 0 = m() C 3 ", with C 3 being the constant from Lemma 3.2.1. Thus whenever R i = l > N we have m( i ) m(R = l) < " 0 . Let x2 i , then by the Lemma 3.2.1 on the previous page for every y2 i J ^ T (y)C 3 J ^ T (x); and so Z i J ^ T (y)dm(y)C 3 J ^ T (x)m( i ): 22 Equation (2.6) and the fact that ^ T : i ! is one to one and onto let us conclude that Z i J ^ Tdm =m( ^ T i ) =m(); and therefore J ^ T (x) m() C 3 m( i ) > m() C 3 " 0 = 1 " : Lemma 3.2.4 (Minimum Expansion). Let E be a subset of one of the beam bases i . There exists a 2 (0; 1) such that m( ^ TE) 1 m(E); (3.8) which indicates that 1 is the minimum expansion factor for ^ T . Proof. We dene = max i max x2 i J ^ T (x) 1 : By the lemma above we know that there exists an N large enough so that for x2 i with R i N we have J ^ T (x) 1 < 1 2 : Then it follows that max i:R i N max x2 i J ^ T (x) 1 1 2 : 23 To bound the inverse of the Jacobian on the beams with returns shorter than N, note that every one of the bases gets mapped to the full measure set. Thus min x2 i J ^ T (x)> 1 ) max x2 i J ^ T (x) 1 < 1: Recall that for every n2Z there are only nitely many R i 's with R i = n, thus there are only nitely many beams with R i <N. It follows then that max i:R i <N max x2 i J ^ T (x) 1 < 1: By putting those two estimates together we obtain that 2 (0; 1). Further, for E i m( ^ TE) = Z E J ^ Tdm Z E 1 dm = 1 m(E): 3.2.3 Decay of Correlations Lemma 3.2.5. 
For functions Lipschitz and in L 1 Z M T n d Z M d Z M d ' n kk Lip k k L 1 (3.9) where the decay function ' n C 4 n +1 , where C 4 > 0 is a constant and > 0 is the tail decay exponent. 24 Proof. In [LSY99](Thrm 3) Young proved that ' n =O(1) P k>n m(R > k). This means that there exists c2R such that ' n c P k>n m(R>k). Then X k>n m(R>k) X k>n C 0 k C 0 Z 1 n k = C 0 1 n +1 To complete the proof, we dene C 4 =c C 0 1 . 3.2.4 Decay of (s) Lemma 3.2.6. For any s 1 we have X i;R i >s R i m( i ) 1 X k=s m(R>k) +sm(R>s): Proof. In the proof we follow [C01]. Notice that for any k;i> 0 m(fR>kg\ i ) = 8 > > > < > > > : m( i ) if R i >k; and 0 otherwise. It follows then that for any i with height R i >s 1 X k=s m(fR>kg\ i ) = (R i s)m( i ); since the only levels that contribute are above and includings. IfR i s then the sum would simply be equal to zero. 25 Therefore, for any s> 0 we can write 1 X k=s m(R>k) = 1 X k=s 1 X i=1 m(fR>kg\ i ) = 1 X i=1 1 X k=s m(fR>kg\ i ) = X i;R i >s (R i s)m( i ): Note that we were able to change the order of the summation due to the fact that the double sum is nite. Now, rewriting the above equation we get 1 X k=s m(R>k) = X i;R i >s (R i s)m( i ) = X i;R i >s R i m( i )s X i;R i >s m( i ) = X i;R i >s R i m( i )sm(R>s): The result follows by rearranging the terms. Lemma 3.2.7 ( Decay). Let = ( 1)=2. There exists a constantC 5 such that for s 4 (s)C 5 s : Proof. By denition of (s) and the lemma above (s) 2 = X i:R i s R i m( i ) = X i:R i >s1 R i m( i ) 1 X k=s1 m(R>k) + (s 1)m(R>s 1): 26 Incorporating the the assumption about tail decay we get the bound (s) 2 1 X k=s1 C 0 k + (s 1)C 0 (s 1) C 0 Z 1 s2 y dy +C 0 (s 1) 1 = C 0 1 (s 2) 1 +C 0 (s 1) 1 C 0 (s 2) 1 +C 0 (s 1) 1 : Last step is due to the fact that > 4, which implies that ( 1) 1 1 3 < 1. As 1 is negative we in fact have (s) 2 2C 0 (s 2) 1 ; which can be simplied using the assumption that s 4 (s) 2 2C 0 (s 2) 1 2C 0 s 2 1 = 2 C 0 s 1 = 2 C 0 s 2 : We complete the proof by taking the square root of the above equation and setting C 5 = p 2 C 0 . 27 3.3 Preliminaries To prove the main result we will employ the Poisson approximation theorem from Chapter 7. We will denote byP the probability associated to the measure and byE the corresponding expectation. Thus in our setting P() =E(1 ) = Z M 1 d: Assume a reference point x in the phase space has been chosen and let the ball B :=B (x) be xed. Assign X n = 1 B T n1 , then we see that =P(X 1 = 1) =P(1 B = 1) =(B ) and N =bt=c =bt=(B )c: Thus S(x) = N X n=1 X n = N X n=1 1 B T n1 (x) = bt=(B)c1 X n=0 1 B T n (x): Note that now S counts the number of times the orbitfx;Tx;:::;T N1 xg enters the ball B . By Theorem 7.0.1, we then obtain that for any 2 p N and any subset E of the natural numbers jP(S2E)(E)j 6t #fE\ [0;N]g (N(R 1 +R 2 ) +p) + 4t 28 where, R 1 = sup 0<j<Np 0<q<Npj fjE(1 B 1 S Nj p+1 =q )E(1 S Nj p+1 =q )jg R 2 = p1 X n=1 E(1 B 1 B T n ): In particular, if we choose E =fkg2N 0 , then whenever kN we will have (recall p 2) P(S =k) t k k! e t 6t (N(R 1 +R 2 ) +p) + 4t 6tN(R 1 +R 2 ) + 6tp + 2pt 8t (N(R 1 +R 2 ) +p) (3.10) To complete the approximation for kN, we need to estimate the errorsR 1 andR 2 , and then optimize for the gap p. For k >N note that it is impossible to have S =k as the function S is a sum of Nf0; 1g-valued random variables. Thus P(S =k) t k k! e t = P N1 X n=0 1 B T n (x) =k t k k! e t = 0 t k k! e t = t k k! e t : 29 By Lemma 3.12.1 we know that for suciently small t k k! 
e t j lnj 4 2 8k>N: It follows that for small enough, for k>N P(S =k) t k k! e t j lnj 4 2 : (3.11) We now proceed to approximating the error between the distribution of S and a Poissonian for kN based on Theorem 7.0.1. 30 3.4 EstimatingR 1 The denition ofR 1 can be rewritten in terms of the measure as R 1 = sup 0<j<Np 0<q<Npj fj(B \fS Nj p+1 =qg)(B )(fS Nj p+1 =qg)jg We are using the probabilistic set notation, meaning that fS Nj p+1 =qgfx2 c M :S Nj p+1 (x) =qg: Thanks to Lemma 3.12.2, we can rewrite (B \fS Nj p+1 =qg)(B )(fS Nj p+1 =qg) as (B \T p fS Njp 1 =qg)(B )(fS Njp 1 =qg): Then, incorporating indicator functions we have (B \T p fS Njp 1 =qg)(B )(fS Njp 1 =qg) = Z M 1 B (1 S Njp 1 =q T p )d Z M 1 B d Z M 1 S Njp 1 =q d: We shall use the decay of correlations (3.9) in Lemma 3.2.5 to obtain an estimate forR 1 . 31 Let = 1 S Njp 1 =q 2L 1 , then since is an indicator function, we know that k k L 1 1: In fact, the norm is either equal to one or equal to zero, with the latter happening if and only if the setfS Njp 1 =qg is empty. We will also use the fact that Z M d =(S Njp 1 =q)(M) 1: Note that we can't assign = 1 B , because indicators of balls are not Lipschitz functions. Instead, we will approximate 1 B by Lipschitz functions from above and below and use those to get the bounds. For a> 0 to be determined later, dene (x) = 8 > > > < > > > : 1 on B 0 outside B + and ~ (x) = 8 > > > < > > > : 1 on B 0 outside B with both functions linear within the annuli. Note that the Lipschitz norms of both and ~ are equal to 1= and that Z M d Z M 1 B + d =(B + ) 32 and Z M ~ d Z M 1 B d =(B ): We will require so that the approximations of the indicator are as close as possible to the original. We have ~ 1 B : We shall use this bound and decay of correlations for and ~ to bound the terms present in the denition ofR 1 . On one hand (B \T p fS Njp 1 =qg)(B )(fS Njp 1 =qg) = Z M 1 B (1 S Njp 1 =q T p )d Z M 1 B d Z M 1 S Njp 1 =q d Z M (1 S Njp 1 =q T p )d Z M 1 B d Z M 1 S Njp 1 =q d = Z M (1 S Njp 1 =q T p )d Z M d Z M 1 S Njp 1 =q d + Z M d Z M 1 S Njp 1 =q d Z M 1 B d Z M 1 S Njp 1 =q d ' p kk Lip k 1 S Njp 1 =q k L 1 + Z M 1 S Njp 1 =q d Z M ( 1 B )d ' p kk Lip +(B + nB ) = ' p = +(B + nB ): 33 The last inequality is true since the L 1 measure of any indicator function is at most equal to 1 and the integral of an indicator over the space M can at most be (M) = 1 as well. On the other, via decay of correlations for ~ (B \T p fS Njp 1 =qg)(B )(fS Njp 1 =qg) Z M ~ (1 S Njp 1 =q T p )d Z M 1 B d Z M 1 S Njp 1 =q d = Z M ~ (1 S Njp 1 =q T p )d Z M ~ d Z M 1 S Njp 1 =q d Z M 1 S Njp 1 =q d Z M (1 B ~ )d ' p k ~ k Lip (B nB ) = (' p = +(B nB )) Thus, by putting the two together j(B \fS Nj p+1 =qg)(B )(fS Nj p+1 =qg)j ' p = + maxf(B + nB );(B nB )g ' p = +(B + nB ) and so R 1 ' p = +(B + nB ) (3.12) 34 3.5 Setting upR 2 outside of M ;J The errorR 2 can be rewritten as R 2 = p1 X n=1 E(1 B 1 B T n ) = p1 X n=1 (B \T n B ); where B =B (x) with x2M. Now, we would like our main result to hold for generic points on the manifold M. To that end, we have to exclude from our analysis points that are periodic or come back to the ball too quickly for some other reason. Dene M ;J =fx2M :B (x)\T n B (x)6=? for some 1n<Jg where the cuto J =baj lnjc. Note that J!1 as goes to zero. Outside of the setM ;J we haveB (x)\T n B (x) =? for alln smaller thanJ. By taking then-th preimage of the intersection we obtain thatT n B (x)\B (x) = ? for all n smaller than J. 
Therefore R 2 = p1 X n=J (B \T n B ): 35 3.6 Restricting the Return Time We will estimate the measure of each of the summands comprisingR 2 individually with the help of the Young tower. For the sake of simplicity we will denote B \ T n B byg n . Then (B \T n B ) =(g n ) = 1 X i=1 R i 1 X j=0 m(T j g n \ i ) (3.13) Fix a number s = s(n)2N 0 to be determined later. We shall separate the beams with return times greater than s and also portions of the base that visit those beams during the \ ight". The contribution to the measure ofg n from the removed portion of will be coarsely estimated. The beams with heights less than s will be referred to as \short" and will be the focus of our analysis. (g n ) = X i;R i s R i 1 X j=0 m(T j g n \ i ) + X i;R i >s R i 1 X j=0 m(T j g n \ i ): For every i let's dene the subset consisting exclusively of points that only visit short beams: ~ i =fx2 i :8ln; R( ^ T l x)sg: If the original beam i happens to be tall then the corresponding ~ i will be empty. 36 With above specications estimate (3.13) becomes (g n ) = X i;R i s R i 1 X j=0 m(T j g n \ i ) + X i;R i >s R i 1 X j=0 m(T j g n \ i ) = X i;R i s R i 1 X j=0 m(T j g n \ ~ i ) + X i;R i s R i 1 X j=0 m(T j g n \ ( i n ~ i )) + X i;R i >s R i 1 X j=0 m(T j g n \ i ) Dene ! 1 (s) = s X i;R i >s R i m( i ) = (s + 1); ! 2 (n;s) = p s(n + 1)m(Rs) By Lemmata 3.12.3 and 3.12.4, on the complement of the setC ! 1 (s) [C ! 2 (n;s) we have (g n ) X i;R i s R i 1 X j=0 m(T j g n \ ~ i ) + (! 1 (s) +! 2 (n;s))(B ): (3.14) Before we proceed any further with the approximation we need to extract more structure from within our system. 37 3.7 Cylinder Sets We will partition the i 's into sets that behave like cylinder sets; more specically we shall separate points that travel together for specied number of iterations of ^ T . Because of this predetermined future property we will actually refer to the partitions as cylinder sets. Figure 3.1: Cylinder expansion. For ln dene i 0 ;:::;i l =fx2 i 0 : ^ T k x2 i k for all 1klg = i 0 \ ^ T 1 i 1 \ ^ T 2 i 2 \:::\ ^ T l i l : 38 Note that after one iteration of the map ^ T the cylinder expands to ^ T i 0 ;:::;i l = ^ T ( i 0 \ ^ T 1 i 1 \ ^ T 2 i 2 \:::\ ^ T l i l ) = \ i 1 \ ^ T 1 i 2 \:::\ ^ T l+1 i l = i 1 \ ^ T 1 i 2 \:::\ ^ T l+1 i l = i 1 ;:::;i l ; after kl iterations we obtain (modulo sets of measure zero) ^ T k i 0 ;:::;i l = i k \ ^ T 1 i k+1 \:::\ ^ T l+k i l = i k ;:::;i l : Consequently after l + 1 iterations of ^ T the cylinder becomes all of : ^ T l+1 i 0 ;:::;i l = ^ T ( ^ T l i 0 ;:::;i l ) = ^ T ( i l ) = : By (3.8) in Lemma 3.2.4 the cylinder set expands at least 1 with each iteration f the map ^ T : m( i 1 ;:::;i l ) = m( ^ T i 0 ;:::;i l ) 1 m( i 0 ;:::;i l ) and hence m() =m( ^ T l+1 i 0 ;:::;i l ) (l+1) m( i 0 ;:::;i l ); 39 giving m( i 0 ;:::;i l ) l+1 m(): (3.15) Consider the intersection of i 0 ;:::;i l with the restriction ~ i 0 . If any of the beams i k on the cylinder's path were tall then i 0 ;:::;i l must have originated in i 0 n ~ i 0 , since points from ~ i 0 don't travel to tall towers. 
In that case ~ i 0 ;:::;i l = i 0 ;:::;i l \ ~ i 0 =?: On the other hand, if all the beams on the cylinder's path were short, then it must have originated inside ~ i 0 and so ~ i 0 ;:::;i l = i 0 ;:::;i l \ ~ i 0 = i 0 ;:::;i l : Therefore m( ~ i 0 ;:::;i l ) = 8 > > > < > > > : m( i 0 ;:::;i l ) if i 0 ;:::;i l ~ i 0 0 otherwise: (3.16) 40 3.8 Approximation of Balls by Cylinder Sets For all i 0 2Z, j <R i 0 s and any n2Z dene the collection of indices I i 0 ;j;n = (i 0 ;:::;i l ) minimal s.t. l X k=0 R i k >n +j : (3.17) By `(i 0 ;:::;i l ) minimal' we mean that for the index (i 0 ;:::;i l1 ) the above condi- tion on sum of beam heights is not satised. Then, for a minimal (i 0 ;:::;i l ) we have l1 X k=0 R i k n +j < l X k=0 R i k ; or equivalently R l n +j <R l+1 : Recall that T R l i 0 ;:::;i l = ^ T l i 0 ;:::;i l = i l and T R l+1 i 0 ;:::;i l = ^ T l+1 i 0 ;:::;i l = : Therefore, for an index = (i 0 ;:::;i l )2 I i 0 ;j;n we know that T n+j is directly above the last beam i l and has not separated yet. In Lemma 3.12.5 we show that any beam base can be decomposed using the cylinders indexed by I i 0 ;j;n : i 0 = G 2I i 0 ;j;n : 41 Then, as a direct consequence ~ i = ~ i \ G 2I i;j;n = G 2I i;j;n ~ i \ = G 2I i;j;n ~ : Using the above decompositions we can express each of the summands from (3.14) as m(T j g n \ ~ i ) = X 2I i;j;n m(T j g n \ ~ ) From now on, whenever we are referring to cylinders we shall mean the cylinders that originated in ~ i . Since those ~ are in fact the same as in the sequel we will drop the ''. Also, when clear from the context we shall simply write 2I, instead of 2I i;j;n . If = (i 0 ;:::;i l ) then let 0 = (i 0 ;:::;i l1 ) and recall that by construction we have = 0\ ^ T l i l 0. We shall denote this dependence as: 0 to re ect the relationship between the cylinders. We have X 2I m(T j g n \ ) X 0 j9 0 2I m(T j g n \ 0): At this point we would like the reader to notice two things. First, it suces to take only one 0, regardless of how many cylinders indexed by it contains. 42 To show this, suppose that there were multiple indices 1 ;:::; r in I i;j;n with the same 0 . Since the cylinders indexed by those are all disjoint subsets of 0 we have m( 1 ) +::: +m( r )m( 0); and also (T j g n \ 1 ) +::: +m(T j g n \ r )m(T j g n \ 0): Second, ifT j B \ 0 =?, thenm(T j g n \ 0)m(T j B \ 0) = 0. Since the cylinders that do not intersect the pullback of the ball contribute nothing, we can omit them from the estimate. Consequently, the right side of the above inequality is equal to X 0 j9 0 2I T j B\ 06=? m(T j g n \ 0) = X 0 j9 0 2I T j B\ 06=? m(T j g n \ 0) m( 0) m( 0): In Lemma 3.12.6 we bounded the ratio in the sum m(T j g n \ 0) m( 0) C 3 (B ) m() for all 0 with 0 in I i;j;n . Thus X 2I m(T j g n \ )C 3 (B ) m() X 0 j9 0 2I T j B\ 06=? m( 0): 43 Finally, from Lemma 3.12.8 we obtain a bound for the sum of measures of the cylinders, whence X 2I m(T j g n \ )C 3 (B ) m() m(T j (B +C 9 n=s)\ i ): We thus the entire sum in estimate (3.14) can be bounded as follows X i;R i s R i 1 X j=0 m(T j g n \ ~ i ) = X i;R i s R i 1 X j=0 X 2I m(T j g n \ ~ ) (3.18) X i;R i s R i 1 X j=0 C 3 (B ) m() m(T j (B +C 9 n=s)\ i )C 3 (B ) m() (B +C 9 n=s): 44 3.9 EstimatingR 2 Combining together inequalities (3.14) and (3.18) we obtain (B \T n B ) =(g n )C 3 (B ) m() (B +C 9 n=s) + (! 1 (s) +! 2 (n;s))(B ); provided that n J and the center x of the ball B lies outside the setC ! 1 (s) [ C ! 2 (n;s) . Recall that s depends on n. 
Consequently, summing up theg n terms, we see that outside the set of \bad" ball centers M ;J [ S p1 n=J C ! 1 (s) [ S p1 n=J C ! 2 (n;s) R 2 = p1 X n=J (B \T n B ) (3.19) p1 X n=J C 3 (B ) m() (B +C 9 n=s) + (! 1 (s) +! 2 (n;s))(B ) =C 3 (B ) m() p1 X n=J (B +C 9 n=s) +(B ) p1 X n=J s X i;R i >s R i m( i ) + p s (n + 1)m(Rs) : Before estimating the rst sum, note that B +C 9 n=s(x) 8 > > > < > > > : B 2 (x) if >C 9 n=s B 2C 9 n=s(x) if C 9 n=s : 45 Regardless of which quantity is larger we always know that B +C 9 n=s(x)B 2 (x)[B 2C 9 n=s(x): Then, p1 X n=J (B +C 9 n=s) p1 X n=J (B 2 ) +(B 2C 9 n=s)p(B 2 ) + p1 X n=J (B 2C 9 n=s): By Lemma 8.0.5 and geometric regularity (A4)(b) we obtain (B 2 )j lnj v (B ) and (B 2C 9 n=s) (2C 9 n=s ) @ 0 where the rst inequality holds outside ofE ;v while the second is true provided that the radius 2C 9 n=s < 0 . We will later show that this condition is satised whenever < 1 for 1 suciently small. The second sum in (3.19) can be bounded by p1 X n=J s X i;R i >s R i m( i ) + p s (n + 1)m(Rs) p1 X n=J p n + 1 s X i;R i s R i m( i ) + s X i;R i s sm( i ) p1 X n=J p 2n 2 s X i;R i s R i m( i ) 4 p1 X n=J p n (s): 46 Thus p1 X n=J (B +C 9 n=s)pj lnj v (B ) + p1 X n=J (2C 9 n=s ) @ 0 Overall, we see that for < 1 the quantity (3.19) can be bounded by R 2 C 3 (B ) m() pj lnj v (B ) + (2C 9 ) @ 0 p1 X n=J ( @ 0 ) n=s + 4(B ) p1 X n=J p n (s): Recall that = 1 2 > 3 2 . Dene e = @ 0 , note that e < 1. Finally, let s =n with 2 ( 3 1 ; 1). Lemma 3.12.9 shows that < 1 indeed allows us to employ geometric regularity. Dene 2 to be the value of below which J 4. Such 2 exists since lim !0 J =1 and > 0. Then s = n J 4 allows is to use Lemma 3.2.7 giving (s)C 5 s : A little algebraic manipulation yields for < minf 1 ; 2 g R 2 C 3 (B ) m() pj lnj v (B ) + (2C 9 ) @ 0 p1 X n=J e n 1 + 4C 5 (B ) p1 X n=J p n (n ) C 3 (B ) 2 m() pj lnj v +C 3 (2C 9 ) @ 0 (B ) m() p1 X n=J e n 1 + 4C 5 (B ) 1 X n=J n 1 2 : 47 Dene 3 to be the value of below whichJ 2, givingJ1 J 2 . Lemma 3.12.10 establishes that for < 3 1 X n=J e n 1 K 1 e 1 2 (J1) 1 and 1 X n=J n 1 2 K 2 J 3 2 ; thus for < minf 1 ; 2 ; 3 g R 2 C 3 (B ) 2 m() pj lnj v +(B ) (2C 9 ) @ 0 C 3 K 1 m() e 1 2 (J1) 1 + 4C 5 K 2 J 3 2 : Let = 3 2 , then 2 3 1 ; 1 ) 2 0; 4 2 : By Lemma 3.12.11 we know that for all < 4 (for exact denition of 4 see the body of the lemma) e 1 2 (J1) 1 J ; so whenever < minf 1 ;:::; 4 g R 2 C 3 (B ) 2 m() pj lnj v +(B ) (2C 9 ) @ 0 C 3 K 1 m() + 4C 5 K 2 J : 48 To simplify the above expression we deneC 10 = C 3 m() + (2C 9 ) @ 0 C 3 K 1 m() + 4C 5 K 2 , then R 2 C 10 (B ) p(B )j lnj v +J : Note that above it true provided < minf 1 ;:::; 4 g and the center of the ball x = 2 S p1 n=J C ! 1 (s) [ S p1 n=J C ! 2 (n;s) [E ;v . 49 3.10 Optimization of the Result Recall that by the result of Theorem 7.0.1 for p2 [2;N) and kN we have P(S =k) t k k! e t 8t (N(R 1 +R 2 ) +p(B )): (3.20) Utilizing the derivation from the previous section, for < minf 1 ;:::; 4 g NR 2 t (B ) C 10 (B ) p(B )j lnj v +J tC 10 p(B )j lnj v +J : On the other hand, NR 1 t (B ) ' p = +(B + nB ) : Recall that =@( 2)> 1 by assumption (A2). Dene = w for w2 (1;). Note that lim !0 = = 0, so that for suciently small, say< 5 , we will have , as required. By Lemma 3.2.5 and -regularity (A4)(a) we know that for < minf 0 ; 5 g NR 1 t (B ) C 4 p +1 w + C 2 j lnj a (B ) (3.21) = tC 4 p 1 (B ) w + C 2 t j lnj a : 50 We shall want to bound all of the error components from inequality (3.20) with terms of the order of J orj lnj raised to a negative power. 
To that end we choose p = J (B ) 1 ; (3.22) which immediately takes care of the last summand as p(B )J : (3.23) From Lemma 3.12.12 we know thatpN as required (for< 6 ). Going back to the estimate of NR 1 , Lemma 3.12.13 establishes the following lower bound when < 7 (for denition see body of the lemma) p(B ) 1 2 J or equivalently p 1 2 J (B ) 1 ; both versions of which we use below, as well as the upper bound for the measure of a ball. For < minf 1 ; 7 g p 1 (B ) w = p 2 p(B ) w 2 p 2 J w ( 1 2 J (B ) 1 ) 2 J w = 2 2 (B ) 2 w (J ) 2 J 2 2 @ 0 ( 2)w J ( 1) = (2 2 @ 0 ( 2)w J )J : 51 In Lemma 3.12.14 we show that for < 8 2 2 @ 0 ( 2)w J 1 and so for < minf 1 ; 7 ; 8 g NR 1 tC 4 J +tC 2 j lnj a : (3.24) As for NR 2 , our assumption on p gives rise to p(B )j lnj v J (B ) 1 (B )j lnj v =J j lnj v ; thus NR 2 tC 10 J j lnj v +J : (3.25) Combining the results of estimates (3.23), (3.24) and (3.25) above we obtain that for < minf 0 ;:::; 8 g N(R 1 +R 2 ) +p(B ) tC 4 J +tC 2 j lnj a +tC 10 J j lnj v +J +J : 52 Dene 9 to be the value of below whichj lnj> 1, then remembering that for any real number u 1 we have 1 2 u<buc we derive J =baj lnjc> 1 2 aj lnj; (3.26) then for < 9 J < 2 a j lnj and J j lnj v < 2 a j lnj (v) : We will assume that v < , so that all of the exponents onj lnj are negative. Now, let = minf 4 2 ;v;ag, then whenever < minf 0 ;:::; 9 g N(R 1 +R 2 ) +p(B ) tC 4 2 a +C 2 t + 2 +1 ta C 10 + 2 a j lnj : Finally, set C = maxf1; 8t (tC 4 2 a +C 2 t + 2 +1 ta C 10 + 2 a )g and dene M 0 = p1 [ n=J C ! 1 (s) [ p1 [ n=J C ! 2 (n;s) [E ;v : 53 Then whenever < minf 0 ;:::; 9 g for any x = 2M 0 we have for kN P(S =k) t k k! e t Cj lnj : (3.27) In Section 3.3 we bounded the error for k > N. Combining inequality (3.11) with the denitions of C and we obtain P(S =k) t k k! e t j lnj 4 2 Cj lnj : It follows that estimate (3.27) holds for all k 2 N 0 . Note that the exponent = minf 4 2 ;v;ag<< 4 2 . 54 3.11 Measure of Removed Sets To complete the proof of Theorem 1 it remains to estimate the measure of the set containing centers of balls we excluded from our analysis: M 0 = p1 [ n=J C ! 1 (s) [ p1 [ n=J C ! 2 (n;s) [E ;v : We will bound each of the components separately. 3.11.1 Estimate for S p1 n=J C ! 1 (s) Lemma 3.11.1. There exists a constant C 11 > 0 so that p1 [ n=J C ! 1 (s) C 11 j lnj (+1=2) : Proof. Recall that we have set s = n with 2 ( 3 1 ; 1), = 1 2 > 0 and = 3 2 2 (0; 4 2 ). From Lemma 3.12.3 we know how to estimate the components of the union individually, therefore p1 [ n=J C ! 1 (s) p1 X n=J (C ! 1 (s) ) p1 X n=J C 7 ! 1 (s) =C 7 p1 X n=J (s + 1) C 7 p1 X n=J C 5 (s + 1) C 5 C 7 p1 X n=J s =C 5 C 7 p1 X n=J n C 5 C 7 Z 1 J1 y dy =C 5 C 7 1 1 (J 1) 1 : 55 Our assumptions on ensure that J 1 a 4 j lnj. Thus (J 1) 1 4 1 a 1 j lnj 1 : Then expressing the main estimate in terms of and simplifying we obtain p1 [ n=J C ! 1 (s) C 5 C 7 1 1 4 1 a 1 j lnj 1 =C 5 C 7 2 2 + 1 4 +1=2 a (+1=2) j lnj (+1=2) :=C 11 j lnj (+1=2) : 3.11.2 Estimate for S p1 n=J C ! 2 (n;s) Lemma 3.11.2. There exists a constant C 12 > 0 so that p1 [ n=J C ! 2 (n;s) C 12 j lnj : Proof. As before s = n with 2 ( 3 1 ; 1), = 1 2 > 0 and = 3 2 2 (0; 4 2 ). From Lemma 3.12.4 we know how to estimate the components of the union individually. (C ! 2 (n;s) )C 8 ! 2 (n;s): 56 Therefore p1 [ n=J C ! 2 (n;s) p1 X n=J (C ! 2 (n;s) ) p1 X n=J C 8 ! 
2 (n;s) =C 8 p1 X n=J p s (n + 1)m(Rs) =C 8 p1 X n=J p n (n + 1)m(R>n 1) C 8 1 X n=J p n 2nC 0 (n 1) 2 p C 0 C 8 1 X n=J n +1 2 (n 1) 2 : Given our assumptions on we know that n =s 4 ) n 1 1 2 n ; whence 1 X n=J n +1 2 (n 1) 2 1 X n=J n +1 2 2 2 n 2 = 2 2 1 X n=J n ( 1)1 2 : The exponent on n, ignoring the negative sign, can be rewritten as ( 1)1 2 = + 1> 1, thus the sum above is nite. In fact, 1 X n=J n (+1) Z 1 J1 y (+1) dy = 1 (J 1) 1 2 2 a j lnj ; 57 with the last inequality being true since J 1 a 4 j lnj. Combining all the estimates together yields p1 [ n=J C ! 2 (n;s) p C 0 a C 8 2 2 +2+1 j lnj :=C 12 j lnj : 3.11.3 The Measure ofE ;v The estimate for the measure of the setE ;v is done in Chapter 8. From Lemma 8.0.5 we know that (E ;v )C(D)j lnj v where C(D) is a constant that only depends on the dimension of the space M. Recall that we assumed that v<. 3.11.4 Total Measure of the Removed Sets We now combine all of the results from this section to nd the bound on the measure of the set M 0 . (M 0 ) p1 [ n=J C ! 1 (s) + p1 [ n=J C ! 2 (n;s) +(E ;v ) C 11 j lnj (+1=2) +C 12 j lnj +C(D)j lnj v : 58 Let 0 = minf + 1=2;;vg = v by our assumption on the variables. Then since j lnj 1 < 1 we obtain (M 0 ) (C 11 +C 12 +C(D))j lnj 0 : The proof is completed by settingC 0 =C 11 +C 12 +C(D). Note that the exponent 0 =v<< 4 2 . 59 3.12 Technical Estimates Lemma 3.12.1. Let k>N = j t (B) k . Then for suciently small we have t k k! e t j lnj 4 2 : Proof. Note that it is enough to show that lim !0 t k k! e t j lnj 4 2 = 0: Once we have established the above limit, we immediately see that for small enough t k k! e t j lnj 1. Dividing both sides byj lnj 4 2 gives us the desired result. Before we turn to the limit, note that since k is an integer k>N = t (B ) ) k> t (B ) : Then by geometric regularity we have(B )C 2 () @ 0 provided that< 0 . Thus we see that in fact k> t C 2 () @ 0 : Since @ 0 tends to zero in the limit there is a c 0 > 0 so that for < c 0 we have C 2 @ 0 < 1 giving k>t. 60 Now, since the probabilities in a Poisson distribution attain their maximum at k =t, we know that the sequence t k k! e t is decreasing for all the values ofk we are considering. Let K = t=(C 2 @ 0 ) , then t k k! e t t K K! e t for all k>N: Note that lim !0 t K K! e t = lim K!1 t K K! e t = 0 as the probabilities associated with the Poisson distribution converge to zero in the limit. By Stirling's approximation we know that K! p 2K K e K r 2 t C 2 @ 0 t eC 2 @ 0 t=C 2 @ 0 ; to ease notation dene c 1 =t=C 2 , then K! p 2c 1 @ 0 =2 c 1 e @ 0 c 1 @ 0 and thus t k k! e t t K K! e t t c 1 @ 0 +1 (2c 1 ) 1=2 @ 0 =2 e c 1 @ 0 c 1 @ 0 e t : 61 Let c 2 = (2c 1 ) 1=2 te t , then t k k! e t c 2 t c 1 @ 0 @ 0 =2 e c 1 @ 0 c 1 @ 0 c 2 t c 1 @ 0 e c 1 @ 0 c 1 @ 0 = c 2 et c 1 @ 0 c 1 @ 0 : Finally we compute the desired limit utilizing L'H^ opital's Rule. Again for ease of notation set c 3 = 4 2 lim !0 t k k! e t j lnj c 3 lim !0 c 2 et c 1 @ 0 c 1 @ 0 j lnj c 3 =c 2 lim !0 ( ln) c 3 c 1 et @ 0 c 1 @ 0 L'H = c 2 lim !0 c 3 ( ln) c 3 1 ( 1 ) c 1 et @ 0 c 1 @ 0 @ 0 1 (c 1 @ 0 ) ( ln( c 1 et @ 0 ) + 1 ) = c 2 c 3 c 1 @ 0 lim !0 ( ln) c 3 1 @ 0 et c 1 @ 0 c 1 @ 0 ln( c 1 et @ 0 ) + 1 : (3.28) Since @ 0 tends to positive innity we know that the the denominator converges to positive innity and that et c 1 @ 0 c 1 @ 0 converges to zero. As for the remaining part of the numerator, if c 3 < 1 we have lim !0 ( ln) c 3 1 @ 0 = 0: 62 Ifc 3 = 4 2 > 1, deneu = 4 2 , so thatu is the smallest integer larger than c 3 . 
Applying L'H^ opital's Rule u 1 times we obtain lim !0 ( ln) c 3 1 @ 0 = lim !0 ( ln) c 3 1 @ 0 L'H = lim !0 (c 3 1)( ln) c 3 2 ( 1 ) (@ 0 ) @ 0 1 = lim !0 (c 3 1)( ln) c 3 2 @ 0 @ 0 ::: L'H = lim !0 (c 3 1)(c 3 2)::: (c 3 u + 1)( ln) c 3 u (@ 0 ) u1 @ 0 = (c 3 1)(c 3 2)::: (c 3 u + 1) (@ 0 ) u1 lim !0 ( ln) c 3 u @ 0 = 0: Therefore regardless of the value of c 3 the whole numerator converges to zero and so the limit (3.28) is zero. It follows that lim !0 t k k! e t j lnj 4 2 = 0: Let c 4 be so that for < minf 0 ;c 0 ;c 4 g t k k! e t j lnj 4 2 1; then t k k! e t j lnj 4 2 : 63 Lemma 3.12.2. Suppose we are in the setting of Section 3.4. We have i: fS Nj p+1 =qg =T p fS Njp 1 =qg ii: (S Nj p+1 =q) =(S Njp 1 =q) Proof. The rst claim is true because S Nj p+1 = Nj X n=p+1 X n = Nj X n=p+1 1 B T n1 = Njp X n=1 1 B T n1 T p ; and thus x2fS Nj p+1 =qg () Njp X n=1 1 B T n1 T p (x) =q () () T p (x)2fS Njp 1 =qg () x2T p fS Njp 1 =qg The second claim follows directly from the rst one by the T-invariance of the measure . Lemma 3.12.3. Dene! 1 (s) = [ P i;R i >s R i m( i ) ] 1 2 = (s+1). The contribution from the tall beams to the measure ofg n can be bounded as follows: X i;R i >s R i 1 X j=0 m(T j g n \ i )<! 1 (s)(B ) outside of a setC ! 1 (s) such that (C ! 1 (s) )C 7 ! 1 (s); 64 with C 7 depending only on the dimension of the manifold M. Proof. We will employ Lemma 8.0.6 from Chapter 8 with 0 = and 1 = P i;R i >s P R i 1 j=0 m(T j (:)\ i ). From the lemma we obtain that C ! 1 (s) =fx2M : 1 (B (x))! 1 (s)(B (x))g has its measure bounded by C 7 ! 1 (s) 1 1 (M). Now 1 (M) = X i;R i >s R i 1 X j=0 m(T j (M)\ i ) = X i;R i >s R i 1 X j=0 m( i ) = X i;R i >s R i m( i ) and therefore (C ! 1 (s) )C 7 ! 1 (s) 1 1 (M) =C 7 ! 1 (s) 1 ! 1 (s) 2 =C 7 ! 1 (s): Outside the setC ! 1 (s) we have X i;R i >s R i 1 X j=0 m(T j g n \ i ) X i;R i >s R i 1 X j=0 m(T j (B )\ i ) = 1 (B )! 1 (s)(B ): 65 Lemma 3.12.4. Dene ! 2 (n;s) = p s (n + 1)m(Rs). The contribution from the portions of the short beams that travel to tall beams throughout the process of iterating ^ T l times can be bounded as follows X i;R i s R i 1 X j=0 m(T j g n \ ( i n ~ i ))<! 2 (n;s)(B ) outside of a setC ! 2 (n;s) such that (C ! 2 (n;s) )C 8 ! 2 (n;s); with C 8 depending only on the dimension of the manifold. Proof. We again employ Lemma 8.0.6 from Chapter 8, this time with 0 = and 1 = P i;R i s P R i 1 j=0 m(T j (:)\ ( i n ~ i )). Dene C ! 2 (n;s) =fx2M : 1 (B (x))! 2 (n;s)(B (x))g; from the lemma we know that (C ! 2 (n;s) )C 8 ! 1 (n;s) 1 1 (M). Now, 1 (M) = X i;R i s R i 1 X j=0 m(T j (M)\ ( i n ~ i )) = X i;R i s R i m( i n ~ i ) X i;R i s sm( i n ~ i ) 66 At this point note that R( ^ T l x)s () ^ T l x2fRsg () x2 ^ T l fRsg and so i n ~ i =fx2 i :9ln; s.t. R( ^ T l x)sg =fx2 i :9ln; s.t. x2 ^ T l fRsgg = n [ l=0 ^ T l fRsg\ i : Thus the measure 1 (M) can be estimated as 1 (M)s X i;R i s m( i n ~ i ) =s X i;R i s m n [ l=0 ^ T l fRsg\ i sm n [ l=0 ^ T l fRsg s n X l=0 m( ^ T l fRsg) =s n X l=0 m(fRsg) =s (n + 1)m(fRsg) 67 with the second to last step being true by the invariance of the SRB measure m with respect to the map ^ T . It follows that (C ! 2 (n;s) )C 8 ! 2 (n;s) 1 1 (M) =C 8 ! 2 (n;s) 1 ! 2 (n;s) 2 =C 8 ! 2 (n;s): Outside the setC ! 2 (n;s) we have X i;R i s R i 1 X j=0 m(T j g n \ ( i n ~ i )) X i;R i s R i 1 X j=0 m(T j (B )\ ( i n ~ i )) = 1 (B )! 2 (n;s)(B ): Lemma 3.12.5 (Beam Decomposition). For the collection of indicesI i 0 ;j;n dened in Section 3.8, up to sets of measure zero we have i 0 = G 2I i 0 ;j;n : Proof. 
First we will prove that for 1 6= 2 the cylinders are indeed disjoint. Let 1 = (i 0 ;i 1 ;:::;i m ) and 2 = (i 0 ; 1 ;:::; l ) be arbitrary indices in I i 0 ;j;n . Without loss of generality suppose that lm. For the sake of contradiction assume that 1 \ 2 6=?: 68 Let x be a point in the intersection. By denition we have x2 i 0 \ ^ T 1 i 1 \:::\ ^ T l i l \:::\ ^ T m im x2 i 0 \ ^ T 1 1 \:::\ ^ T l l ; hence x2 i 0 ; ^ Tx2 i 1 \ 1 ; ::: ; ^ T l x2 i l \ l : Since we have assumed that all of the beam bases i are disjoint it follows that in fact i 1 = 1 ; ::: ; i l = l and therefore 2 = (i 0 ;i 1 ;:::;i l ). If l = m, then in fact 1 = 2 , which cannot happen, so l < m. Then since 2 2 I i 0 ;j;n it follows that l is the smallest integer such that l X k=0 R i k >n +j: But 1 2I i 0 ;j;n requires thatm be the minimal integer satisfying the above inequal- ity. Contradiction ensues. Now we shall prove that the cylinders indexed by I i 0 ;j;n do in fact ll i 0 . Let 2I i 0 ;j;n . Since i 0 it is immediate that G 2I i 0 ;j;n i 0 : 69 Thus it only remains to prove the reverse inclusion. Consider an x2 i 0 . We need to show that there exists a q n dening a = (i 0 ;:::;i q )2 I i 0 ;j;n such that x2 . Letfi 0 ;i 1 ;:::;i n g be the indices of the beam bases that x visits as it propagates through the tower during n iterations of ^ T , meaning that ^ T k x2 i k for 0kn; and so x2 i 0 ;:::;i k for 0kn: We know that each of the beam heights R i k 1 and also that j <R i 0 . Then, n +j = (n 1) + (j + 1) (n 1) +R i 0 n1 X k=1 R i k +R i 0 = n1 X k=0 R i k < n X k=0 R i k : Now, let q be the smallest integer satisfying q X k=0 R i k >n +j: By above derivation we know that necessarily q n, therefore x2 i 0 ;:::;iq . In addition, since we choseq to be the smallest number satisfying the above inequality (i 0 ;:::;i q )2I i 0 ;j;n . This completes the proof. 70 Lemma 3.12.6. Let = (i 0 ;:::;i l ) be an index inside I i 0 ;j;n . Also let 0 = (i 0 ;:::;i l1 ). Suppose further that the cylinders indexed by these indices only travel to short towers. Then m(T j g n \ 0) m( 0) C 3 (B ) m() : (3.29) Proof. Recall that 2I means that l1 X k=0 R i k n +j < l X k=0 R i k : We begin the proof by simplifying the ratio. First, by simple inclusion we have m(T j g n \ 0) m( 0) m(T (n+j) B \ 0) m( 0) : Let the numberb be such thatn+jb =R l . Sincen+j lies betweenR l andR l+1 , we have 0 b < R i k . Further T n+jb = ^ T l . Recall that s( 0) = l, thus we can push both the numerator and the denominator forward by ^ T l and use distortion, Corollary 3.2.2, to obtain m(T (n+j) B \ 0) m( 0) C 3 m(T b B \ ^ T l 0) m( ^ T l 0) : 71 In addition, because s( 0) = l we know that ^ T l 0 = , which takes care of the denominator. As for the numerator, we write m(T b B \ ) = 1 X i=0 m(T b B \ i ) 1 X i=0 R i 1 X =0 m(T (T b B )\ i ) =(T b B ) =(B ): The last step being true by invariance of the measure with respect to the map T . Result follows. Lemma 3.12.7. The diameter of the cylinder set i 0 ;:::;i l conforms to the following estimate j i 0 ;:::;i l jC 1 l+1 : (3.30) Proof. Let x and y be two points in i 0 ;:::;i l , then by denition we have ^ T k x; ^ T k y2 i k for 0kl: It follows thats(x;y)l + 1. Therefore by Assumption (A3)(b), inequality (3.4), we know that d(x;y)C 1 s(x;y) C 1 l+1 : 72 Finally, since the points x;y were arbitrary we can conclude that j i 0 ;:::;i l jC 1 l+1 : Lemma 3.12.8. 
Consider a collection of cylinders 0 = i 0 ;:::;i l1 2 i 0 such that i)9 0 with 2I i 0 ;j;n and ii) 0\T j B 6=?: Then there exists a constant C 9 for which we have X 0 j9 0 2I T j B\ 06=? m( 0)m(T j (B +C 9 n=s)\ i 0 ): Proof. All the unions, sums and maxima in this proof are subscripted with " 0 j9 0 ; 2 I i 0 ;j;n; ;T j B \ 0 6= ?," unless otherwise specied. Now, since all cylinders involved are disjoint X m( 0) =m [ 0 : From Lemma 3.12.7 above we know that j 0jC 1 s( 0) =C 1 l : 73 Since 0 contains a cylinder satisfying minimality conditions of I i 0 ;j;n it follows that n +j < l X k=0 R i k (l + 1)s; and hence l n +j s 1 n s 1: Putting these facts together and setting C 9 =C 1 1 , we see that j 0jC 1 n=s1 =C 9 n=s : Based on our assumptions about the behavior of the diameter of a set as we travel up the tower, equation (2.3), we can further say that jT j 0jC 9 n=s : We have assumed that 0\T j B 6= ?, which implies that T j 0\B 6= ?. Keeping the estimate on the diameter of the cylinder in mind, we can write T j 0B +C 9 n=s: In fact, since the above estimate works for any cylinder whose subscript 0 satises " 0 j9 0 ;2I i 0 ;j;n ;T j B \ 06=?," we can conclude 74 T j [ 0 = [ T j 0B +C 9 n=s; and consequently [ 0T j (B +C 9 n=s): Since S 0 i 0 as well we have [ 0T j (B +C 9 n=s)\ i 0 ; and we can conclude that m [ 0 m(T j (B +C 9 n=s)\ i 0 ): Lemma 3.12.9. Let s =n with 2 ( 3 1 ; 1). There exists a value 1 so that for all < 1 we have 2C 9 n=s < 0 . This ensures that the radius of the ball B 2C 9 n=s is small enough to allow us to use geometric regularity on the ball. Proof. Note that incorporating the denition of the variable s we can write the radius as 2C 9 n=s = 2C 9 n 1 : 75 Since 1> 0 and nJ we know that n 1 J 1 and thus remembering that < 1 we obtain 2C 9 n 1 2C 9 J 1 : Recall, as ! 0 we have J!1 and also J 1 !1. Therefore lim !0 2C 9 J 1 = 0 and so there exists a 1 such that for < 1 we indeed have 2C 9 J 1 < 0 : Lemma 3.12.10. Let e < 1 and 2 ( 3 1 ; 1). Let 3 be as dened in Section 3.9, meaning < 3 ) J 2: There exist constants K 1 ;K 2 such that for < 3 1 X n=J e n 1 K 1 e 1 2 (J1) 1 and 1 X n=J n 1 2 K 2 J 3 2 (3.31) 76 Proof. We begin with the rst sum; through u-substitution with u = y 1 we obtain 1 X n=J e n 1 Z 1 J1 e y 1 dy = 1 1 Z 1 (J1) 1 e u u 1 du = 1 1 Z 1 (J1) 1 e u=2 e u=2 u 1 du: Before we proceed any further notice that for any 0<z < 1 lim u!1 z u u = lim u!1 u z u L'H = lim u!1 1 z u ( lnz) = 1 lnz lim u!1 z u = 0 and therefore setting z =e 1 2 we see that lim u!1 e u=2 u 1 = ( lim u!1 z u u ) 1 = 0: It follows that e u=2 u 1 is bounded on the interval [0;1), since the expression is continuous in u and at the origin it is actually equal to zero. Denote the bound by U. Then, continuing the integration 1 1 Z 1 (J1) 1 e u=2 e u=2 u 1 du U 1 Z 1 (J1) 1 e u=2 du = U 1 1 1 2 lne e u=2 1 (J1) 1 = U 1 2 j lne j e 1 2 (J1) 1 : To complete this estimate we set K 1 = U 1 2 j lne j . 77 As for the second sum, keeping in mind that our assumption on ensures 3 2 < 0, 1 X n=J n 1 2 Z 1 J1 y 1 2 dy = 1 3=2 y 3 2 1 J1 = 1 3=2 (J 1) 3 2 : Since J 2 we immediately have J 1 J 2 making 1 X n=J n 1 2 1 3=2 2 3 2 J 3 2 : The proof is completed by setting K 2 = 1 3=2 2 3 2 . Lemma 3.12.11. Let e 2 (0; 1); > 4;2 ( 3 1 ; 1) and 2 (0; 4 2 ). Then there exists 4 such that for < 4 e 1 2 (J1) 1 J : Proof. First notice, from the previous lemma, that for< 3 J 1> J 2 , therefore for all such we have e 1 2 (J1) 1 <e 1 2 ( J 2 ) 1 =e 2 2 J 1 : Let z =e 2 2 . Note that because of our assumptions z > 1. 
Then we can write e 2 2 J 1 J = z J 1 J : 78 Now, lim !0 z J 1 J = lim J!1 z J 1 J L'H = lim J!1 z J 1 (lnz)(1)J 1 = (lnz)(1) lim J!1 z J 1 J L'H = (lnz)(1) lim J!1 z J 1 (lnz)(1)J J 1 = (lnz) 2 (1) 2 lim J!1 z J 1 J 21 L'H = (lnz) 3 (1) 3 (2 1) lim J!1 z J 1 J 32 ::: Since we assumed that 1 > 0 there exists a natural number u large enough such that 1 u < 1< 1 u 1 ; we can rearrange the two inequalities to obtain u (u 1)< 0 and (u 1) (u 2)> 0: Continuing the process of repeatedly applying L'H^ ospital's Rule until the exponent on J in the denominator becomes negative we obtain lim !0 z J 1 J = (lnz) u (1) u (2 1)::: ((u 1) (u 2)) lim J!1 z J 1 J u(u1) = (lnz) u (1) u (2 1)::: ((u 1) (u 2)) lim J!1 z J 1 J ju(u1)j =1 79 Then, it follows that lim !0 e 2 2 J 1 J = lim !0 z J 1 J = 0; so that there exists c such that for <c e 2 2 J 1 J < 1 giving e 2 2 J 1 <J : Consequently for all < 4 = minf 3 ;cg e 1 2 (J1) 1 <e 2 2 J 1 <J : Lemma 3.12.12. Suppose we are in the set up of Section 3.10, i.e. 2 (0; 4 2 ). Let p =bJ (B ) 1 c, then there exists 6 such that for all < 6 we have pN. Proof. First notice that lim !0 t=(B ) =1, so that for smaller than somec we have t=(B ) > 1. Recall that for any real u 1 we have 1 2 u <buc and so for <c t=(B )< 2bt=(B )c = 2N: 80 We can now proceed to the main estimate p = J (B ) 1 J (B ) 1 =J 1 t t (B ) <J 2 t N: Finally, since 2J =t! 0 as goes to zero, the result follows for suciently small values of , say <c 1 . Finally dene 6 = minfc 0 ;c 1 g. Lemma 3.12.13. Suppose we are in the set up of Section 3.10, i.e. 2 (0; 4 2 ). Let p =bJ (B ) 1 c, then there exists 7 such that for all < 7 p(B )> 1 2 J Proof. We begin by showing that lim !0 J (B ) 1 =1. By L'H^ opital's Rule we know that for any positive exponent z lim !0 z ln = lim !0 ln z L'H = lim !0 1= (z) z1 = 1 z lim !0 1 z = 1 z lim !0 z = 0 and thus lim !0 z j lnj = lim <1;!0 z j lnj = lim <1;!0 z ln = 0: Then 0 lim !0 (B )J lim !0 @ 0 (aj lnj) = a [lim !0 @ 0 = j lnj] = 0: 81 As a consequence lim !0 (B )J = 0 and so lim !0 J (B ) 1 =1. It follows that there exists a 7 such that for all smaller J (B ) 1 > 1. Now, since u 1 gives 1 2 u <buc, for < 7 we have p > 1=2J (B ) 1 . Result follows. Lemma 3.12.14. Suppose we are in the set up of Section 3.10, meaning that > 4 and 2 (0; 4 2 ). Then there exists 8 such that for < 8 2 2 @ 0 ( 2)w J 1: Proof. First note that@ 0 ( 2)w>@( 2)w =w> 0 by our assumptions about-regularity. We will show that as goes to zero the expression in the lemma converges to zero also. Let z = @ 0 ( 2)w > 0 lim !0 2 2 @ 0 ( 2)w J = 2 2 lim !0 z J 2 2 lim !0 z (aj lnj) = 2 2 a lim !0 [ z j lnj] = 2 2 a [lim !0 z j lnj ] In the lemma above, Lemma 3.12.13 we showed that for any z > 0 lim !0 z j lnj = 0: Since > 0 it follows that lim !0 2 2 @ 0 ( 2)w J = 0. Then the claim follows by denition of the limit. 82 Chapter 4 Short Returns In this chapter we prove Theorem 2, which estimates the measure of the set with short return times. We begin by stating the precise assumptions necessary for the result and derive some of the consequences that follow with minimal work. Section 4.3 contains the body of the proof which takes advantage of the Young tower construction. 4.1 Assumptions C 0 ;C 1 ;C 3 ;::: and;; ;::: will again represent global constants that retain their meaning throughout the chapter, while c 0 ;c 1 ;c 3 ;::: shall only be used locally. Let (M;T ) be a dynamical system equipped with a metric d(;). Assume that the map T :M!M is a C 1 -dieomorphism. 
As in the previous chapter we dene the set M ;J M as M ;J =fx2M :B (x)\T n B (x)6=? for some 1n<Jg; 83 where J =baj lnjc. Let a = (4 lnA) 1 , where A =kDTk L 1 +kDT 1 k L 1 2: (4.1) Suppose also the system can be modeled by a Young tower possessing a refer- ence measure m and that the greatest common divisor of the return times R i is equal to 1. Let be the SRB measure associated to the system. We will further require the following: (B1) Polynomial Decay of the Tail There exist constants C 0 and > 9 such that m(R>k)C 0 k : (4.2) (B2) Regularity of the Jacobian and the metric on the Tower There exists a constant C 1 > 0 and 2 (0; 1) such that for any x;y in with s(x;y) 1 (a) ln J ^ Tx J ^ Ty C 1 s( ^ Tx; ^ Ty) ; (4.3) (b) d( ^ T k x; ^ T k y)C 1 s(x;y)k ; where 0k<s(x;y): (4.4) 84 (B3) Uniform geometric regularity of the measure Let @ be the dimension of the measure . Suppose that the positive constant @ 0 <@ is xed. There exist C 2 > 0 and 0 so that: (B )C 2 @ 0 (4.5) for all B M provided that < 0 . 85 4.2 Immediate Consequences First note that by denition of the constant A, equation (4.1), for any two points x;y2M d(Tx;Ty) = Z y x DT (t)dt Z y x Adt =Ad(x;y); (4.6) similarly we have, d(T 1 x;T 1 y)Ad(x;y): We shall also require two of the results from the previous chapter. The proofs of these results are exactly the same as before. 4.2.1 Distortion of the Jacobian Lemma 4.2.1. Let EE 0 i for some i. For any qs(E 0 ) we have 1 C 3 m( ^ T q E) m( ^ T q E 0 ) m(E) m(E 0 ) C 3 m( ^ T q E) m( ^ T q E 0 ) : 4.2.2 Decay of (s) Lemma 4.2.2. Let = ( 1)=2. There exists a constant C 5 such that for s 4 (s)C 5 s : 86 4.3 Estimate for M ;J We shall now proceed to prove Theorem 2. We will partition M ;J into level sets N (n) and bound each of the sets individually. Before we begin, note that since T is a dieomorphism B (x)\T n B (x)6=? () B (x)\T n B (x)6=?; see Lemma 4.4.1 for details. It follows that we can rewrite the denition of M ;J as M ;J =fx2M :B (x)\T n B (x)6=? for some 1n<Jg: Now, we subdivide the set M ;J as follows M ;J = J [ n=1 N (n); where N (n) =fx2M :B (x)\T n B (x)6=?g: We will split the above union into two collections and for b2 (0; 1) dene M 1 ;J = bbJc [ n=1 N (n) and M 2 ;J = J [ n=dbJe N (n): In order to nd the measure of the total set we will estimate the measures of the two parts separately. 87 4.3.1 Estimate for n Suciently Large We begin with M 2 ;J . We will derive a uniform estimate for the measure of the level setsN (n) when n> bJ. 
(N (n)) =(T n N (n)) = 1 X i=1 R i 1 X j=0 m(T (n+j) N (n)\ i ) (4.7) = X i:R i <J 1 4 R i 1 X j=0 m(T (n+j) N (n)\ ~ i ) + X i:R i <J 1 4 R i 1 X j=0 m(T (n+j) N (n)\ i n ~ i ) + X i:R i J 1 4 R i 1 X j=0 m(T (n+j) N (n)\ i ); where ~ i =fx2 i :8ln; R( ^ T l x)J 1 4 g: Note that by the proof of Lemma 3.12.4 and assumption (B1) the middle sum can be bounded by X i:R i <J 1 4 R i 1 X j=0 m(T (n+j) N (n)\ i n ~ i ) X i:R i <J 1 4 R i 1 X j=0 m( i n ~ i ) X i:R i <J 1 4 R i m( i n ~ i ) J 1 4 (n + 1)m(RJ 1 4 ) J 1 4 (n + 1)m(R> 2 1 J 1 4 ) C 0 2 (n + 1)J 1 4 : 88 While by Lemma 4.2.2 for the last sum we have X i:R i J 1 4 R i 1 X j=0 m(T (n+j) N (n)\ i ) X i:R i J 1 4 R i m( i ) = 2 (J 1 4 ) (C 5 J 1 8 ) 2 = C 5 2 J 1 4 : As for the rst sum in (4.7), we can decompose the beam base ~ i into cylinder sets, as in Section 3.7, with the index set I dened just as (3.17) ~ i = G 2I i;j;n ~ : Then m(T (n+j) N (n)\ ~ i ) = X 2I m(T (n+j) N (n)\ ) X 0 j92I 0 m(T (n+j) N (n)\ 0): Incorporating the above estimates and decomposition into (4.7), we obtain (N (n)) X i:R i <J 1 4 R i 1 X j=0 X 0 j92I m(T (n+j) N (n)\ 0) + (C 0 2 +C 5 2 )(n + 1)J 1 4 : 89 We will consider each of the measures m(T (n+j) N (n)\ 0) separately. Let = fi 0 ;:::;i l g and 0 =fi 0 ;:::;i l1 g as in Section 3.8. Recall that 2I means that R l = l1 X k=0 R i k n +j < l X k=0 R i k =R l+1 : (4.8) By distortion of the Jacobian, Lemma 4.2.1, m(T (n+j) N (n)\ 0) = m(T (n+j) N (n)\ 0) m( 0) m( 0) (4.9) C 3 m( ^ T l (T (n+j) N (n)\ 0)) m( ^ T l 0) m( 0) =C 3 m( ^ T l (T (n+j) N (n)\ 0)) m() m( 0): We estimate the numerator by nding a bound for the diameter of the set. Let the points x and z in T n N (n) be such that T j x;T j z2 T (n+j) N (n)\ 0. From (4.8) we know that ^ T l =T n+jb ; where b =n +jR l <R i l ; therefore d( ^ T l T j x; ^ T l T j z) =d(T nb x;T nb z): Incorporating the relationship between points and their pre-images (4.6) d(T nb x;T nb z)A b d(T n x;T n z): 90 Note thatT n x;T n z2N (n). Utilizing Lemma 4.4.2 and triangle inequality we see that d(T n x;T n z)d(T n x;x) +d(x;z) +d(z;T n z) 4A n +d(x;z): Again by (4.6), d(x;z)A j d(T j x;T j z): Now, T j x;T j z are both elements of 0 and s( 0) =l. 
Thus s(T j x;T j z)s( 0) =l: We derive the lower bound on l from (4.8), nn +j < l X k=0 R i k (l + 1)J 1=4 ) l> n J 1=4 1: Then employing (B2)(b) and keeping in mind that n> bJ d(T j x;T j z)C 1 s(T j x;T j z) (C 1 =) n J 1=4 (C 1 =) bJ J 1=4 = (C 1 =) bJ 3=4 : Since J =baj lnjc> a 2 j lnj we can rewrite above as d(T j x;T j z) (C 1 =) ^ j lnj 3=4 ; 91 where ^ = 2 3=4 a 3=4 b < 1: Therefore we see that d( ^ T l T j x; ^ T l T j z) =d(T nb x;T nb z)A b d(T n x;T n z) A b (4A n +d(x;z)) = 4A n+b +A b d(x;z) 4A n+b +A b+j d(T j x;T j z)) 4A n+b +A b+j (C 1 =) ^ j lnj 3=4 : Lemma 4.4.3 gives for < 10 d( ^ T l T j x; ^ T l T j z)C 13 e (1=4)j lnj 1=4 and so by taking the supremum over all points x and z j ^ T l (T (n+j) N (n)\ 0)jC 13 e (1=4)j lnj 1=4 : By assumption (B3) on the relationship between the measure and the metric, (4.5) ( ^ T l (T (n+j) N (n)\ 0))C @ 0 13 e (1=4)@ 0 j lnj 1=4 : 92 Now note that if a set S is inside the base then (S) =(S\ ) = 1 X i=1 R i 1 X j=0 m(T j (S\ )\ i ) = 1 X i=1 m(S\ \ i ) = 1 X i=1 m(S\ i ) =m(S): Thus m( ^ T l (T (n+j) N (n)\ 0))C @ 0 13 e (1=4)@ 0 j lnj 1=4 : (4.10) Incorporating the estimate into (4.9) we see that m(T (n+j) N (n)\ 0) C 3 C @ 0 13 m() e (1=4)@ 0 j lnj 1=4 m( 0) :=C 14 e (1=4)@ 0 j lnj 1=4 m( 0): Going back to the estimate (4.7) and setting C 15 =C 0 2 +C 5 2 we see that (N (n)) X i:R i <J 1 4 R i 1 X j=0 X 0 j92I T (n+j) N(n)\ 06=? m(T (n+j) N (n)\ 0) +C 15 (n + 1)J 1 4 X i:R i <J 1 4 R i 1 X j=0 X 0 j92I T (n+j) N(n)\ 06=? C 14 e (1=4)@ 0 j lnj 1=4 m( 0) +C 15 (n + 1)J 1 4 =C 14 e (1=4)@ 0 j lnj 1=4 X i:R i <J 1 4 R i 1 X j=0 X 0 j92I T (n+j) N(n)\ 06=? m( 0) +C 15 (n + 1)J 1 4 : 93 Now we need to bound X i:R i <J 1 4 R i 1 X j=0 X 0 j92I T (n+j) N(n)\ 06=? m( 0): As we showed before all of the 0 with 0 j92I are disjoint and are all subsets of ~ i , therefore the above equation is bounded by X i:R i <J 1 4 R i 1 X j=0 m( ~ i ) J 1 4 X i:R i <J 1 4 m( ~ i )J 1 4 m(): We now have (N (n))C 14 e (1=4)@ 0 j lnj 1=4 J 1 4 m() +C 15 (n + 1)J 1 4 : We are now ready to estimate the measure of M 2 ;J . (M 2 ;J ) = J [ n=dbJe N (n) J X n=dbJe (N (n)) (4.11) J X n=dbJe C 14 e (1=4)@ 0 j lnj 1=4 J 1 4 m() +C 15 (n + 1)J 1 4 (1b)JC 14 e (1=4)@ 0 j lnj 1=4 J 1 4 m() +C 15 J 1 4 J X n=dbJe 2n = (1b)C 14 m()J 5 4 e (1=4)@ 0 j lnj 1=4 + C 15 J 1 4 (J +dbJe) (JdbJe + 1) (1b)C 14 m()J 5 4 e (1=4)@ 0 j lnj 1=4 + 9C 15 J 1 4 J 2 : 94 Lemma 4.4.4 gives for < minf 10 ; 11 g (1b)C 14 m()J 5 4 e (1=4)@ 0 j lnj 1=4 + 9C 15 J 1 4 J 2 C 16 J 9 4 ; and thus (M 2 ;J )C 16 J 9 4 : 4.3.2 Estimate for Small Values of n We now consider the case 1nbbJc to estimate the measure of M 1 ;J . Dene s p = 2 p A n 2 p 1 A n 1 : By Lemma 4.4.5 we know that N (n)N sp (2 p n) for any p 1, in particular for p(n) =b lgbJ lgnc + 1. So that bbJc [ n=1 N (n) bbJc [ n=1 N s p(n) (2 p(n) n): (4.12) Now, dene n 0 =n2 p(n) and 0 =s p(n) : 95 Incorporating the computation regarding bounds for n 0 from Lemma 4.4.6 into (4.12) we conclude that M 1 ;J = bbJc [ n=1 N (n) bbJc [ n=1 N s p(n) (2 p(n) n) 2bJ [ n 0 =dbJe N 0(n 0 ): (4.13) Therefore to estimate the measure ofM 1 ;J it suces to nd a bound forN 0(n 0 ) when n 0 bJ. Note, this can be accomplished using an argument analogous to the one in Section 4.3.1. We replace all the n with n 0 and with 0 . The cuto J 1=4 = (baj lnjc) 1=4 would remain unchanged. 
Similarly to the argument in Section 4.3.1 we have (N 0(n 0 )) X i:R i <J 1 4 R i 1 X j=0 X 0 j92I m(T (n 0 +j) N 0(n 0 )\ 0) + (C 0 2 +C 5 2 )(n 0 + 1)J 1 4 ; and m(T (n 0 +j) N 0(n 0 )\ 0)C 3 m( ^ T l (T (n 0 +j) N 0(n 0 )\ 0)) m() m( 0): To compute m( ^ T l (T (n 0 +j) N 0(n 0 )\ 0)) we estimate the diameter of the set as before. This time however for x2N 0(n 0 ) we have d(T n 0 x;x) 2A n 0 0 ; 96 giving for x and z in T n 0 N 0(n 0 ) d( ^ T l T j x; ^ T l T j z) 4A n 0 +b 0 +A b+j (C 1 =) ^ j lnj 3=4 : Lemma 4.4.7 shows that if we require b< 1=3 then for < minf 1 ;:::; 12 g d( ^ T l T j x; ^ T l T j z)C 13 e (1=4)j lnj 1=4 and so m( ^ T l (T (n 0 +j) N 0(n 0 )\ 0))C @ 0 13 e (1=4)@ 0 j lnj 1=4 : This is the same bound we established in (4.10) when computingN (n), albeit this time the estimate holds for smaller values of . Therefore we can pull all the estimates together exactly as before to obtain: (N 0(n 0 ))C 14 e (1=4)@ 0 j lnj 1=4 J 1 4 m() +C 15 (n 0 + 1)J 1 4 : And thus, using the fact that b< 1 3 we see that (M 1 ;J ) 2bJ X n 0 =dbJe (N 0(n 0 )) J X n 0 =dbJe C 14 e (1=4)@ 0 j lnj 1=4 J 1 4 m() +C 15 (n 0 + 1)J 1 4 = J X n=dbJe C 14 e (1=4)@ 0 j lnj 1=4 J 1 4 m() +C 15 (n + 1)J 1 4 : 97 This is precisely line two of derivation (4.11), in which we were computing a bound for the measure of M 2 ;J . Thus we can conclude (M 1 ;J )C 16 J 9 4 : Overall we obtain for < minf 10 ; 11 ; 12 g (M ;J )(M 1 ;J ) +(M 2 ;J ) 2C 16 J 9 4 : Since J =baj lnjc 1 2 aj lnj we see that J 9 4 2 9 4 a 9 4 j lnj 9 4 : The result follows by setting ~ C =C 16 2 5 4 a 9 4 and ~ = 9 4 . We obtain that for < minf 10 ; 11 ; 12 g (M ;J ) ~ Cj lnj ~ : (4.14) 98 4.4 Technical Estimates Lemma 4.4.1. Let T be a bijection. Then B (x)\T n B (x)6=? () B (x)\T n B (x)6=?: Proof. First suppose that B (x)\T n B (x)6= ? and let y be the point in the intersection. Then we know that y2B (x) and also y2T n B (x) ) T n y2B (x): Letz =T n y, which immediately givesz2B (x). Mappingz forwardn times we see that T n z =y and thus T n z =y2B (x) ) z2T n B (x): It follows that z2B (x)\T n B (x) meaning that B (x)\T n B (x)6=?: 99 On the other hand, if there exists a point y2B (x)\T n B (x), then y2B (x) and y2T n B (x): Propagating both claims forward n times under T gives T n y2T n B (x) and T n y2B (x) and so T n y2B (x)\T n B (x) meaning that B (x)\T n B (x)6=?: Lemma 4.4.2. Suppose we are in the setting of Section 4.3.1. For any x2N (n) we have d(T n x;x)d(T n x;T n y) +d(T n y;x) (A n + 1)< 2A n : (4.15) Proof. Consider a point x2N (n). Then B (x)\T n B (x)6=?. Let y be the point in B such that T n y is in the intersection. 100 Based on the geometry, we have d(T n x;x)d(T n x;T n y) +d(T n y;x): By the denition of the constant A (4.1) d(T n x;T n y)A n d(x;y)A n : Thus for any x2N (n) d(T n x;x)d(T n x;T n y) +d(T n y;x) (A n + 1)< 2A n : Lemma 4.4.3. Suppose we are in the setting of Section 4.3.1. There exists a constant C 13 and a 10 such that for < 10 4A n+b +A b+j (C 1 =) ^ j lnj 3=4 C 13 (e 1=4 ) j lnj 1=4 : 101 Proof. Recall we have chosen J =baj lnjc with a = (4 lnA) 1 . Then since bJ <nJ and all R i j <J 1=4 we have b<R i l <J 1 4 and A n+b A J+J 1 4 A 2J A 2aj lnj = 12a lnA = 1=2 : Therefore A n+b (e 1=4 ) j lnj 1=4 1=2 (e 1=4 ) j lnj = 1=2 1=4 = 1=4 ! 
0: We also have j <R i 0 <J 1 4 and so A b+j A 2J 1 4 A 2a 1=4 j lnj 1=4 := ^ A j lnj 1=4 ; making A b+j ^ j lnj 3=4 ^ A j lnj 1=4 ^ j lnj 3=4 = ^ A ^ j lnj 1=2 j lnj 1=4 : Note that, lim !0 ^ A ^ j lnj 1=2 = 0: 102 Dene 10 to be such that whenever < 10 we have A n+b (e 1=4 ) j lnj 1=4 < 1 and ^ A ^ j lnj 1=2 <e 1=4 : Then 4A n+b +A b+j (C 1 =) ^ j lnj 3=4 4 (e 1=4 ) j lnj 1=4 + (C 1 =) (e 1=4 ) j lnj 1=4 : To complete the proof, set C 13 = 4 +C 1 =. Lemma 4.4.4. Suppose we are in the setting of Section 4.3.1. There exists a constant C 16 and a 11 such that for < 11 (1b)C 14 m()J 5 4 e (1=4)@ 0 j lnj 1=4 + 9C 15 J 1 4 J 2 C 16 J 9 4 ; Proof. Note that J 5 4 e (1=4)@ 0 j lnj 1=4 J 9 4 =J 4 4 e (1=4)@ 0 j lnj 1=4 a 4 4 j lnj 4 4 e (1=4)@ 0 j lnj 1=4 As ! 0 we havej lnj 1=4 !1 and thus lim !0 j lnj 4 4 e (1=4)@ 0 j lnj 1=4 = lim x!1 x 4 e (1=4)@ 0 x : 103 For ease of computation, denote c = 1 and ^ c =e (1=4)@ 0 c < 1, then lim x!1 x 4 e (1=4)@ 0 x = lim x!1 x^ c x c : Now, lim x!1 x^ c x = lim x!1 x ^ c x L'H = lim x!1 1 ( ln ^ c) ^ c x = 1 ln ^ c lim x!1 ^ c x = 0: It follows that lim !0 J 5 4 e (1=4)@ 0 j lnj 1=4 J 9 4 = 0 and so there exists 11 so that J 5 4 e (1=4)@ 0 j lnj 1=4 J 9 4 : Therefore (1b)C 14 m()J 5 4 e (1=4)@ 0 j lnj 1=4 + 9C 15 J 1 4 J 2 (1b)C 14 m()J 9 4 + 9C 15 J 9 4 = (1b)C 14 m() + 9C 15 J 9 4 : We complete the proof by setting C 16 = (1b)C 14 m() + 9C 15 . 104 Lemma 4.4.5. For any integer n, for any p2N and > 0 we have fxjB (x)\T n B (x)6=?gfxjB sp (x)\T 2 p n B sp (x)6=?g where s p = 2 p A 2 p n 1 A n 1 : Proof. We prove the claim by induction. We rst consider the case p = 1. Since s 1 = 2(A n + 1) we are looking to show that fxjB (x)\T n B (x)6=?gfxjB 2(A n +1) (x)\T 2n B 2(A n +1) (x)6=?g: Let x be a point withB (x)\T n B (x)6=?. Suppose thaty2B (x) is such that z =T n y lies in the intersection. Now, z2B (x)\T n B (x) ) T n z2T n B (x)\T 2n B (x) ) T n B (x)\T 2n B (x)6=?: 105 Let u2 T n B (x) be an arbitrary point. Then there exists a point v2 B (x) so that u =T n v. Then d(u;z) =d(T n v;T n y)A n d(v;y) 2A n and d(u;x)d(u;z) +d(z;x) 2A n + 2(A n + 1): Since the point u was arbitrary it follows that T n B (x)B 2(A n +1) (x): On the other hand, we also have B (x)B 2(A n +1) (x) ) T 2n B (x)T 2n B 2(A n +1) (x) and so T n B (x)\T 2n B (x) B 2(A n +1) (x)\T 2n B 2(A n +1) (x): Since we already showed that the smaller set is non-empty we obtain B 2(A n +1) (x)\T 2n B 2(A n +1) (x)6=?; which completes the base case. 106 Now assume that fxjB (x)\T n B (x)6=?gfxjB sp (x)\T 2 p n B sp (x)6=?g and consider a point x such that B (x)\T n B (x)6=?. We want to show that B s p+1 (x)\T 2 p+1 n B s p+1 (x)6=?; where s p+1 = 2 p+1 A 2 p+1 n 1 A n 1 = 2 2 p (A 2 p n 1)(A 2 p n + 1) A n 1 = 2(A 2 p n + 1)s p : By the inductive assumption we know that B sp (x)\ T 2 p n B sp (x) 6= ?. Let y2B sp (x) be such that z =T 2 p n y lies in the intersection. Then z2B sp (x)\T 2 p n B sp (x) ) T 2 p n z2T 2 p n B sp (x)\T 2 p+1 n B sp (x) ) T 2 p n B sp (x)\T 2 p+1 n B sp (x)6=?: Let u2 T 2 p n B sp (x) be an arbitrary point. Then there exists a v2 B sp (x) so that u =T 2 p n v. 
Then d(u;z) =d(T 2 p n v;T 2 p n y)A 2 p n d(v;y) 2A 2 p n s p 107 and d(u;x)d(u;z) +d(z;x) 2A n s p +s p 2(A 2 p n + 1)s p =s p+1 : Since the point u was arbitrary it follows that T 2 p n B sp (x)B s p+1 (x): On the other hand, we also have B sp (x)B s p+1 (x) ) T 2 p+1 n B sp (x)T 2 p+1 n B s p+1 (x) and so T 2 p n B sp (x)\T 2 p+1 n B sp (x) B s p+1 (x)\T 2 p+1 n B s p+1 (x): As before the result follows since the smaller set was proven to be non-empty. The general claim now follows for all p 2 N by the principle of mathematical induction. 108 Lemma 4.4.6. Suppose we are in the setting of Section 4.3.2, in particular assume that 1nbbJc. Dene n 0 =n2 p(n) and 0 =s p(n) , where p(n) =b lgbJ lgnc + 1 and s p = 2 p A n2 p 1 A n 1 : Then it follows that 1nbbJc ) dbJen 0 2bJ: and 0 1b : Proof. We begin with the computation concerning n 0 . We know that lgbJ lgn<b lgbJ lgnc + 1 lg bJ n + 1 bJ n <2 b lgbJlgnc+1 2 bJ n bJ <n2 b lgbJlgnc+1 2bJ bJ <n 0 2bJ: And since n 0 2Z we can conclude thatdbJen 0 2bJ. We now estimate 0 in terms of . Since A 2 s p(n) = 2 p(n) A 2 p(n) n 1 A n 1 A p(n) A 2 p(n) n A 2 p(n) +2 p(n) n <A 2n2 p(n) =A 2n 0 : 109 Then, n 0 2bJ and a = (4 lnA) 1 give A 2n 0 A 4bJ <A 4baj lnj =A 4ba ln = 4ba lnA = b : And so we obtain 0 =s p(n) 1b : (4.16) Lemma 4.4.7. Suppose we are in the setting of Section 4.3.2. There a 12 such that for < 12 4A n 0 +b 0 +A b+j (C 1 =) ^ j lnj 3=4 C 13 (e 1=4 ) j lnj 1=4 ; where C 13 = 4 +C 1 = as in Lemma 4.4.3. Proof. Recall we have chosen J =baj lnjc with a = (4 lnA) 1 . Then since bJ <n 0 2bJ, 0 1b and all R i j <J 1=4 we have b<R i l <J 1 4 and A n 0 +b 0 A 2bJ+J 1 4 1b A (2b+1)aj lnj 1b = 1b(2b+1)a lnA = 3 4 3 2 b : 110 Therefore A n 0 +b 0 (e 1=4 ) j lnj 1=4 3 4 3 2 b (e 1=4 ) j lnj = 1 2 3 2 b ! 0; provided that b< 1 3 . As before, in Lemma 4.4.3 A b+j ^ j lnj 3=4 (e 1=4 ) j lnj 1=4 whenever < 10 . Dene 12 < 10 to be such that whenever < 12 we have A n 0 +b 0 (e 1=4 ) j lnj 1=4 < 1 Then 4A n 0 +b 0 +A b+j (C 1 =) ^ j lnj 3=4 4 (e 1=4 ) j lnj 1=4 + (C 1 =) (e 1=4 ) j lnj 1=4 :=C 13 (e 1=4 ) j lnj 1=4 : 111 Chapter 5 Recurrence under a Regular Measure In this chapter we combine results from Theorems 1 and 2. We apply the theorems directly and then simplify the resulting expressions. 5.1 Assumptions Let (M;T ) be a dynamical system equipped with a metric d(;). Assume that the mapT :M!M is aC 1 -dieomorphism. As in the previous chapter we dene the set M ;J M as M ;J =fx2M :B (x)\T n B (x)6=? for some 1n<Jg; where J =baj lnjc with a = [ 4 (kDTk L 1 +kDT 1 k L 1)] 1 . Suppose also the system can be modeled by a Young tower possessing a refer- ence measure m and that the greatest common divisor of the return times R i is 112 equal to 1. Let be the SRB measure associated to the system. We will further require the following: (C1) Polynomial Decay of the Tail There exist constants C 0 and > 9 such that m(R>k)C 0 k : (C2) Additional assumption on Let @ be the dimension of the measure . We will require that @( 2)> 1: (C3) Regularity of the Jacobian and the metric on the Tower There exists a constant C 1 > 0 and 2 (0; 1) such that for any x;y in with s(x;y) 1 (a) ln J ^ Tx J ^ Ty C 1 s( ^ Tx; ^ Ty) ; (b) d( ^ T k x; ^ T k y)C 1 s(x;y)k ; where 0k<s(x;y): 113 (C4) Uniform regularity of the measure Let = @ ( 2) > 1 by (A2) and suppose that the positive constant @ 0 < @ is xed. There exist C 2 > 0 and 0 so that: (a) (-regularity) There exists w2 (1;) and a> 0 so that for < 0 we have (B + w(x)nB w(x)) (B (x)) C 2 j lnj a for all points x2M. 
(b) (geometric regularity) (B )C 2 @ 0 for all B M provided that < 0 . 114 5.2 Proof of the Corollary Note that since > 9 implies > 4 we have satised all the assumptions of Theorem 1. We can thus conclude that there exist positive constants C;C 0 ; and 0 so that for > 0 suciently small the following is true. There exists a set M 0 M with measure (M 0 )C 0 j lnj 0 such that for all balls B (x) with centers x = 2M 0 [M ;J we have P(S =k)e t t k k! Cj lnj ; for k2N 0 : Theorem 2 allows us to nd constants ~ C; ~ > 0 so that (M ;J ) ~ Cj lnj ~ : Therefore we can bound the total measure of the \bad" ball centers with (M 0 [M ;J )C 0 j lnj 0 + ~ Cj lnj ~ : Dene C 00 = C 0 + ~ C and 00 = minf 0 ; ~ g ~ = 9 4 , then above inequality simplies to (M 0 [M ;J )C 00 j lnj 00 as desired. Result follows. 115 Chapter 6 Recurrence under an Absolutely Continuous Measure In this chapter we apply the Corollary resulting from combining Theorems 1 and 2 to a system in which the measure is absolutely continuous with respect to Lebesgue measure. 6.1 Assumptions Let (M;T ) be a dynamical system equipped with a metric d(;). Assume that the map T :M!M is a C 1 -dieomorphism with the attractorA . Suppose that the system can be modeled by a Young tower possessing a reference measure m, and that the greatest common divisor of the return times R i is equal to 1. Let be the SRB measure on the attractor. Since is supported on the attractor we know that it is absolutely continuous with respect to Lebesgue measure and that its density function is regular and bounded. We will further require the following: 116 (D1) Polynomial Decay of the Tail There exist constants C 0 and > 9 such that m(R>k)C 0 k : (D2) Additional assumption on Let @ be the dimension of the measure . We will require that @( 2)> 1: (D3) Regularity of the Jacobian and the metric on the Tower There exists a constant C 1 > 0 and 2 (0; 1) such that for any x;y in with s(x;y) 1 (a) ln J ^ Tx J ^ Ty C 1 s( ^ Tx; ^ Ty) ; (b) d( ^ T k x; ^ T k y)C 1 s(x;y)k ; where 0k<s(x;y): 117 6.2 Proof of Result Recall that an attractor can be dened as a closed invariant set within the space such that the distance between the attractor and any other point goes to zero as the point is evolved under the system's map. No proper subset of the attractor is characterized by this property. Dene the set A = 1 [ i=1 R i 1 [ j=0 T j i : Notice thatA M represents the attractor of the system (M;B;T;) and it also supports the absolutely continuous measure . As such, all of the dynamics we observe with the aid of the measure can considered as happening on the attractor. Since is absolutely continuous with respect to Lebesgue measure by the Radon - Nikodym theorem there exists aB-measurable function f :A ! [0;1) so that for any E2B (E) = Z E fd; (6.1) where is the Lebesgue measure. We also have 1 c f(x) c for a.e. x2A (6.2) for some positive constant c> 1. 118 In Lemmata 6.3.1 and 6.3.3 we show that the measure is in fact uniformly regular, in the sense of both - and geometric regularity. Because of this fact we can utilize the results we have proven in previous chapters. Dene M ;J =fx2A :B (x)\T n B (x)6=? for some 1n<Jg; where J =baj lnjc with a = [ 4 (kDTk L 1 +kDT 1 k L 1)] 1 . By Theorem 1 we know that there exists a setM 0 A such that for any point x2An(M 0 [M ;J ), the functionS counting the number of visits to the ballB (x) can be approximated by a Poissonian. Dene the set M =M 0 [M ;J , then by the Corollary (M )C 00 j lnj 00 : Further, for any point x = 2M we have P(S =k)e t t k k! 
Cj lnj for all k2N 0 : The proof is completed by setting ^ C = maxfC;C 00 g and ^ = minf; 00 g. Note that the exponent ^ 00 9 4 . 119 6.3 Technical Estimates Lemma 6.3.1. Let be a measure that is absolutely continuous with respect to Lebesgue with a Radon-Nikodym derivative f bounded as in (6.2). Then is uni- formly -regular. Proof. First note that since 1 c f(x) c for any set EA we have (E) = Z E fd Z E cd = c(E) and analogously (E) 1 c (E): Therefore for any EA 1 c (E)(E) c(E): (6.3) Recall that on a Euclidean space Lebesgue measure coincides with volume, giving that for any ball B A (B ) =c(D) D ; (6.4) 120 where D is the dimension of the space M and c(D) is a constant depending only on D. We can now proceed to verifying -regularity. Let be less than 1 and let w2 (1;), where =@( 2)> 1. By (6.3) we get (B + wnB w) (B ) c(B + wnB w) 1 c (B ) = c 2 (B + wnB w) (B ) (6.5) Incorporating equation (6.4) we see that the last quotient can be rewritten as (B + wnB w) (B ) = (B + w)(B w) (B ) = c(D) ( + w ) D c(D) ( w ) D c(D) D = ( + w ) D ( w ) D D : Now, using the binomial theorem the numerator becomes ( + w ) D ( w ) D = D X k=0 D k Dk wk D X k=0 D k Dk wk (1) k = D X k=0 D k Dk+wk (1 (1) k ) = D X k=1 D k Dk+wk (1 (1) k ): The last step is due to the fact that when k = 0 we have 1 (1) k = 1 1 = 0, which means that the k = 0 term does not contribute to the sum. 121 Let c 0 be the largest of the binomial coecients, then ( + w ) D ( w ) D D = D X k=1 D k k+wk (1 (1) k ) 2c 0 D X k=1 k(w1) 2c 0 D w1 : Since < 1, the smaller the exponent, the larger the expression k(w1) thus all the summands can be bounded by the term k = 1. Overall, the quotient in estimate (6.5) can be bounded by (B + wnB w) (B ) c 2 (B + wnB w) (B ) c 2 ( + w ) D ( w ) D D 2c 2 c 0 D w1 : It now remains to show that for some positive constant a the expression w1 is eventually dominated byj lnj a . Note that lim !0 w1 a j lnj = lim !0 ln w1 a L'H = lim !0 1 w1 a w1 a 1 = a w 1 lim !0 w1 a = 0: It follows that lim !0 w1 j lnj a = 0 122 and thus there exists a 0 suciently small so that for < 0 w1 j lnj a (2c 2 c 0 D) 1 : Consequently (B + wnB w) (B ) j lnj a : Lemma 6.3.2. Let be a measure that is absolutely continuous with respect to Lebesgue with a Radon-Nikodym derivative f bounded as in (6.2). The two mea- sures have the same dimension. Proof. Denote the dimension of Lebesgue measure by @ . Then using inequality (6.3) we get that for almost every x2A @ = lim !0 log(B (x)) log lim !0 logc(B (x)) log = lim !0 logc + log(B (x)) log = lim !0 logc log + lim !0 log(B (x)) log = 0 +@ : The last step follows from the fact that log tends to1 and logc is a constant in the limit. Using the lower bound for f we can also establish that @ lim !0 logc 1 (B (x)) log = lim !0 logc 1 log + lim !0 log(B (x)) log = 0 +@ : 123 Combining the two estimates we see that @ @ @ giving us the desired result. Lemma 6.3.3. Let be a measure that is absolutely continuous with respect to Lebesgue with a Radon-Nikodym derivative f bounded as in (6.2). Then is uni- formly geometrically regular. Proof. Let B A be an arbitrary metric ball. By (6.3) and (6.4) we have (B ) cc(D) D : Now, by Lemma 6.3.2 the dimensions of and coincide. In addition @ = lim !0 log(B (x)) log = lim !0 log(c(D) D ) log = lim !0 logc(D) log + lim !0 log D log = 0 +D; giving us D =@: Therefore for all values of (B ) cc(D) D = cc(D) @ cc(D) @ 0 for any constant @ 0 <@. 
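The regularity properties established in Lemmata 6.3.1 and 6.3.3 are easy to probe numerically for a concrete density. The sketch below is only an illustration and is not part of the argument: the density f, the center x, and the exponents w and a are hypothetical choices, and the one-dimensional unit interval stands in for the manifold M. It approximates the measure of a ball for dμ = f dλ with 1/c ≤ f ≤ c, and prints the annulus-to-ball ratio of Lemma 6.3.1 next to the target bound |ln ρ|^(−a), together with μ(B_ρ)/ρ, which stays comparable to the Lebesgue scaling used in Lemma 6.3.3.

```python
# Minimal numerical sketch (not from the thesis); f, x, w, a are illustrative choices.
import numpy as np

c = 2.0
f = lambda y: 1.0 + 0.5 * np.sin(2.0 * np.pi * y)   # density with 1/c <= f <= c on [0,1]

def mu_ball(x, r, n=200001):
    """Approximate mu(B_r(x)) = integral of f over (x-r, x+r) intersected with [0,1]."""
    lo, hi = max(0.0, x - r), min(1.0, x + r)
    ys = np.linspace(lo, hi, n)
    return float(np.mean(f(ys)) * (hi - lo))   # simple Riemann approximation

x, w, a = 0.3, 1.5, 1.0   # ball center and regularity exponents (hypothetical values)
for rho in (1e-2, 1e-3, 1e-4):
    ball = mu_ball(x, rho)
    annulus = mu_ball(x, rho + rho**w) - mu_ball(x, rho - rho**w)
    print(f"rho={rho:.0e}  mu(B_rho)/rho={ball / rho:.3f}  "
          f"annulus/ball={annulus / ball:.2e}  |ln rho|^-a={abs(np.log(rho))**-a:.2e}")
```

Nothing here replaces the proofs above; it only makes the two scalings visible for one explicit bounded density, where the annulus ratio decays like ρ^(w−1) and the ball measure like ρ, exactly as the lemmata predict.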
124 Chapter 7 Poisson Approximation Theorem This chapter contains the statement and the proof of the abstract Poisson approx- imation theorem which establishes the distance between sums of f0; 1g-valued dependent random variables (X n ) and a random variable that is Poisson dis- tributed. We utilize this result in Section 3.3 to begin the proof of Theorem 1. In the proof of the approximation theorem we follow [CC13]. We compare the number of occurrences in a nite time interval with the number of occurrences in the same interval for a Bernoulli process ( ~ X n ). Then we estimate the distance between the number of occurrences in the Bernoulli process and the Poisson law. Theorem 7.0.1. Let (X n ) n2N be a stationaryf0; 1g-valued process and set = P(X 1 = 1). Let S b a = P b n=a X n , N =bt=c and dene S :=S N 1 for convenience's sake. Additionally, let be the Poisson distribution measure with mean t, so that (fkg) = e tt k k! for k2N 0 . Finally, assume that < t 2 . Then for any EN 0 , and 2p<N we have jP(S2E)(E)j 6t #fE\ [0;N]g (N(R 1 +R 2 ) +p) + 2t 125 where, R 1 = sup 0<j<Np 0<q<Npj fjP(X 1 = 1^S Nj p+1 =q)P(S Nj p+1 =q)jg R 2 = p X n=2 P(X 1 = 1^X n = 1): Recall that #fg denotes the cardinality of the set in parenthesis. Also note that we are approximatingP(S2 E) with the mean t Poisson measure of the set E. For the approximation to be accurate we need the expected value of S(x) to also be equal to t. Thus we require t =E(S) =E N X n=1 X n = N X n=1 E(X n ) =NE(X 1 ) =N; which explains why we have chosen N =bt=c in the theorem. Proof. Let ( ~ X n ) n2N be a sequence of independent, identically distributed random variables taking values inf0; 1g, constructed so that P ( ~ X 1 ) = . Further assume that the ~ X n 's are independent of the X n 's. Let ~ S = P N n=1 ~ X n . Dene ^ (k) =P(S =k) and ~ (k) =P( ~ S =k) We would like to approximate the distribution of ^ using the mean t Poisson distribution. 126 jP(S2E)(E)j =j ^ (E)(E)j j ^ (E) ~ (E)j +j ~ (E)(E)j = X k2E P(S =k)P( ~ S =k) + X k2E P( ~ S =k) t k k! e t X k2E\[0;N] jP(S =k)P( ~ S =k)j + 1 X k=0 P( ~ S =k) t k k! e t Note that the rst sum is only considered for k N due to the fact that for k > N P(S = k) =P( ~ S = k) = 0 as both S and ~ S are sums of Nf0; 1g-valued random variables. Thanks to [AGG89] we can bound the second sum using the estimate 1 X k=0 P( ~ S =k) t k k! e t 2t 2 N : (7.1) For summands of the remaining term we obtain a uniform estimate as follows. We start by writing a telescoping identity P(S =k)P( ~ S =k) = N1 X j=0 k (j) (7.2) 127 where k (j) =P( ~ S j 1 +S N j+1 =k)P( ~ S j+1 1 +S N j+2 =k) = j^k X m=0 P( ~ S j 1 =m)[P(S N j+1 =km)P( ~ X j+1 +S N j+2 =km)] = j^k X m=0 P( ~ S j 1 =m)[P(S Nj 1 =km)P( ~ X 1 +S Nj 2 =km)] = j^k X m=0 j m m (1) jm k;j (m) and k;j (m) =P(S Nj 1 =km)P( ~ X 1 +S Nj 2 =km): Note the sum only goes up to m =j^k := minfj;kg. This is because m can take on values at most equal to j and at the same time we require km to be a non-negative number. 
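Before continuing with the telescoping estimate, it may help to see the size of the second error term in isolation. The sketch below is a numerical illustration only, not part of the proof: μ and t are hypothetical values, S̃ denotes the Binomial(N, μ) sum of the independent comparison process, and the printed total deviation from the Poisson(t) law is placed next to 2t²/N, which is how we read the bound quoted from [AGG89] in (7.1).

```python
# Minimal numerical sketch (not part of the proof); mu and t are illustrative values.
import math

mu, t = 1e-3, 2.0
N = int(t / mu)                     # N = floor(t/mu), as in Theorem 7.0.1

# Build the Binomial(N, mu) law of S~ and the Poisson(t) law iteratively in floats,
# avoiding the huge binomial coefficients of a direct formula.
binom = [(1.0 - mu) ** N]
poiss = [math.exp(-t)]
for k in range(1, N + 1):
    binom.append(binom[-1] * (N - k + 1) / k * mu / (1.0 - mu))
    poiss.append(poiss[-1] * t / k)

total = sum(abs(b - p) for b, p in zip(binom, poiss))
print(f"sum_k |P(S~=k) - e^-t t^k/k!| = {total:.3e}")
print(f"2 t^2 / N                     = {2.0 * t * t / N:.3e}")
```

For such values the measured deviation sits comfortably below the bound; the remaining work in the proof concerns the first error term, where the dependence structure of the original process enters through R_1 and R_2.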
By our assumptions of independence and expected value of ~ X 1 P( ~ X 1 +S Nj 2 =km) =P( ~ X 1 = 1)P(S Nj 2 =km 1) +P( ~ X 1 = 0)P(S Nj 2 =km) =P(S Nj 2 =km 1) + (1)P(S Nj 2 =km) =E(1 S Nj 2 =km1 ) + (1)E(1 S Nj 2 =km ) 128 On the other hand P(S Nj 1 =km) =P(X 1 = 1^S Nj 2 =km 1) +P(X 1 = 0^S Nj 2 =km) =E(1 X 1 =1 1 S Nj 2 =km1 ) +E(1 X 1 =0 1 S Nj 2 =km ) Putting the two together, we obtain: k;j (m) =E(1 X 1 =1 1 S Nj 2 =km1 ) +E(1 X 1 =0 1 S Nj 2 =km ) E(1 S Nj 2 =km1 ) (1)E(1 S Nj 2 =km ) =E(1 X 1 =1 1 S Nj 2 =km1 )E(1 S Nj 2 =km1 ) [(1)E(1 S Nj 2 =km )E(1 X 1 =0 1 S Nj 2 =km )] =E(1 X 1 =1 1 S Nj 2 =km1 )E(1 S Nj 2 =km1 ) [E(1 S Nj 2 =km )E(1 X 1 =0 1 S Nj 2 =km )E(1 S Nj 2 =km )] =E(1 X 1 =1 1 S Nj 2 =km1 )E(1 S Nj 2 =km1 ) [E(1 X 1 =1 1 S Nj 2 =km )E(1 S Nj 2 =km )] Both terms on the right hand side are of the form E(1 X 1 =1 1 S V 2 =v )E(S V 2 =v) for 0vV 129 and so it suces to nd estimates for the general term and combine them accord- ingly. We will use the fact that for any n, 1 Xn=1 1 S V n =v 1 S V n+1 =v 1 Xn=1 so that using a telescoping sum from n = 2 to n =p we obtain j 1 S V 2 =v 1 S V p+1 =v j p X n=2 1 Xn=1 Now, jE(1 X 1 =1 1 S V 2 =v )E(S V 2 =v)j jE(1 X 1 =1 1 S V 2 =v )E(1 X 1 =1 1 S V p+1 =v )j +jE(1 X 1 =1 1 S V p+1 =v )E(1 S V p+1 =v )j +jE(1 S V p+1 =v )E(1 S V 2 =v )j E 1 X 1 =1 p X n=2 1 Xn=1 +jE(1 X 1 =1 1 S V p+1 =v )E(1 S V p+1 =v )j + E p X n=2 1 Xn=1 R 2 +R 1 +p 2 ; by denition of the error components and since each of the summandsE(X n ) in the last term is equal to . Collecting all the estimates we obtain that for every k Np1 X j=0 j k (j)j 2(Np)(R 1 +R 2 +p 2 ) 130 The last p terms (for Np j N 1) of the sum (7.2) we will need to estimate separately since the above method requires j to be smaller than Np. A direct coarse estimate gives us that for those indices j k;j (m)j 4 and so N1 X j=Np j k (j)j 4p Thus for every kN, jP(S =k)P( ~ S =k)j N1 X j=0 j k (j)j 2N(R 1 +R 2 +p 2 ) + 4p Recall that N =t=. Then after simplication above becomes jP(S =k)P( ~ S =k)j 6t (N(R 1 +R 2 ) +p): (7.3) Finally, putting (7.1) and (7.3) together we see that jP(S2E)(E)j X k2E\[0;N] jP(S =k)P( ~ S =k)j + 2t 2 N X k2E\[0;N] 6t (N(R 1 +R 2 ) +p) + 2t 2 t= 1 6t #fE\ [0;N]g (N(R 1 +R 2 ) +p) + 4t with the last inequality being true since 2t 2 t= 1 = 2t 2 t 2t 2 t=2 = 4t: 131 Chapter 8 Some Consequences of Besicovitch Covering Lemma In this chapter we state the Besicovitch Covering Lemma and prove two results which we nd useful throughout this thesis. Our proofs closely follow the ones provided by Chazottes and Collet, [CC13]. Lemma 8.0.4. Besicovitch Covering Lemma Let E be a set inR D and letH be a family of balls with centers at x2 E. This denesH =fB r(x) (x) : x2 Eg as a covering of the set E. Then there is some integer p(D)> 0, depending only on the dimension D of the space, and qp(D) subfamilies ofH:H 1 ;:::;H q such that E q [ i=1 [ B2H i B where the balls in eachH i are pairwise disjoint. For the proof of the lemma, we refer the reader to Section 2.8 of Federer's \Geometric Measure Theory" [F69]. 132 Lemma 8.0.5. Let be a Borel probability measure on an D-dimensional manifold M. For > 0 and v> 0 dene the set E ;v =fx : (B 2 (x))>j lnj v (B (x))g: There exists a constant C(D), depending only on the dimension of the manifold, such that (E ;v )C(D)j lnj v . Proof. The family of ballsC =fB (x) : x2E ;v g is a covering of the setE ;v . 
Thus, by Lemma 8.0.4 above, there is an integer p(D) > 0 and q p(D) collections of disjoint balls belonging toC:H 1 ;H 2 ;:::;H q such that E ;v q [ i=1 G B2H i B: Denote byK i the set of centers of the balls in eachH i , then we can write E ;v q [ i=1 G B2H i B = q [ i=1 G x2K i B (x) which implies that (E ;v ) q X i=1 X x2K i (B (x)): Now, for everyK i considerC i =fB 2 (x) : x2K i g. Note that the setC i is a covering ofK i , with each point covered by exactly one ball. Then, again by Lemma 8.0.4, we can ndq i p(D) collections of disjoint balls, this time fromC i : 133 H i;1 ;H i;2 ;:::;H i;q i such thatK i S q i j=1 F B2H i;j B. Because originally each point ofK i was only covered by exactly one of the balls, we are not discarding any of the elements ofC i , but instead rearranging them in groups consisting of disjoint balls. For any 1iq and 1jq i we now have X B2H i;j (B) =( G B2H i;j B) 1 and thus q X i=1 q i X j=1 X B2H i;j (B)p(D) 2 : Finally, putting all of the above information together, along with the denition of the setE ;v , we obtain (E ;v ) q X i=1 X x2K i (B (x)) q X i=1 X x2K i j lnj v (B 2 (x)) =j lnj v q X i=1 q i X j=1 X B2H i;j (B) j lnj v p(D) 2 :=C(D)j lnj v : Lemma 8.0.6. Let 0 and 1 be two nite positive measures on a D-dimensional Riemannian manifold M. For !2 (0; 1) and 2 (0; 1) dene C ! =fx2X : 1 (B (x))! 0 (B (x))g: 134 There exists an integer p(D) such that 0 (C ! )p(D)! 1 1 (M): Proof. The family of ballsD =fB (x) : x2C ! g is a covering of the setC ! . There is a nite number p(D) and qp(D) collections of ballsH 1 ;:::;H q such that C ! q [ i=1 G x2K i B (x); whereK i denotes the set of centers of the balls inH i . We obtain 0 (C ! ) q X i=1 X x2K i 0 (B (x)) q X i=1 X x2K i ! 1 1 (B (x)) =! 1 q X i=1 1 ( G x2K i B (x)) ! 1 q X i=1 1 (M) ! 1 p(D) 1 (M): 135 Bibliography [AP06] J.F. Alves and V. Pinheiro. Recurrence times and rates of mixing for invert- ible dynamical systems, 2006, arXiv:math/0611404.v1. [AGG89] R. Arratia, L. Goldstein and L. Gordon. Two moments suce for Poisson approximations: the Chen-Stein method, 1989, Ann. Probab. 17, no. 1, 9-25. [B31] G.D. Birkho. Proof of the ergodic theorem, 1931, Proc. Natl. Acad. Sci. USA 17 (12): 656-660. [BSTV03] H. Bruin, B. Saussol, S. Troubetzkoy and S. Vaienti. Return time statis- tics via inducing, 2003, Ergodic Theory Dynam. Systems 23: 991-1013. [C01] P. Collet. Statistics of closest return for some non-uniformly hyperbolic sys- tems, 2001, Ergod. Th. & Dynam. Sys. 21: 410-420. [CC13] J.-R. Chazottes and P. Collet. Poisson approximation for the number of visits to balls in nonuniformly hyperbolic dynamical systems, 2013, Ergodic Theory Dynam. Systems 33: 49-80. [F03] K. Falconer. Fractal Geometry, Mathematical Foundations and Applications, Wiley, England 2003. [F69] H. Federer. Geometric measure theory, Die Grundlehren der mathematischen Wissenschaften, Band 153, Springer-Verlag New York Inc., New York 1969. [GS97] A. Galves and B. Schmitt. Inequalities for hitting time in mixing dynamical systems, 1997, Random Comput. Dynam. 5: 319-331. [H99] N.T.A. Haydn. The distribution of the rst return map for rational maps, 1999, J. Statist. Phys. 94: 1027-1036. 136 [H00] N.T.A. Haydn. Statistical properties of equilibrium states for rational maps, 2000, Ergodic Theory Dynam. Systems 20: 1371-1390. [HLV05] N.T.A. Haydn, Y. Lacroix and S. Vaienti. Hitting and return times in ergodic dynamical systems, 2005, Ann. Prob. 33: 2043-2050. [HP10] N.T.A. Haydn and Y. Psiloyenis. 
Return times distribution for Markov towers with decay of correlations, 2010, preprint.
[K47] M. Kac. On the notion of recurrence in discrete stochastic processes, 1947, Bull. Amer. Math. Soc. 53: 1002-1010.
[L02] Y. Lacroix. Possible limit laws for entrance times of an ergodic aperiodic dynamical system, 2002, Israel J. Math. 132: 253-263.
[P99] H. Poincaré. Les méthodes nouvelles de la mécanique céleste, vol. 3, Gauthier-Villars, Paris 1899.
[RS08] J. Rousseau and B. Saussol. Poincaré recurrence for observations, 2010, Transactions A.M.S. 362: 5845-5859.
[STV03] B. Saussol, S. Troubetzkoy and S. Vaienti. Recurrence and Lyapunov exponents for diffeomorphisms, 2003, Moscow Mathematical Journal 3: 189-203.
[S06] B. Saussol. Recurrence rate in rapidly mixing dynamical systems, 2006, Discrete Contin. Dyn. Syst. 15: 259-267.
[LSY98] L.-S. Young. Statistical properties of dynamical systems with some hyperbolicity, 1998, Ann. of Math. (2) 147, no. 3: 585-650.
[LSY99] L.-S. Young. Recurrence times and rates of mixing, 1999, Israel J. Math. 110: 153-188.
Abstract
This dissertation explores return statistics to metric balls in measure-preserving dynamical systems which admit a Young tower with polynomial decay of the tail. The thesis opens with some background material and known results in the area. Then we proceed to analyze the recurrence to generic points of the system and estimate the measure of non-generic points. We finally apply these findings to a system equipped with an absolutely continuous measure.

We show that return times are governed by an almost Poisson distribution for the generic centers of metric balls. We derive an estimate for the error between the distribution describing visits to the ball and a true Poissonian, and prove that the error converges to zero at a logarithmic rate.

Next, we estimate the measure of the set containing the centers of balls with short return times by considering its level sets. The size of the set is inversely proportional to the logarithm of the radius of the ball.

Finally, we combine the two findings and apply the corollary to a system which admits a measure that is absolutely continuous with respect to Lebesgue measure. We show that absolute continuity is sufficient to satisfy the assumptions of our initial results.

This thesis generalizes the paper "Poisson approximation for the number of visits to balls in non-uniformly hyperbolic systems" by Chazottes and Collet. Their result holds for systems which can be modeled by a Young tower with exponential decay of the tail and which are equipped with a measure absolutely continuous with respect to Lebesgue measure.