DISTRIBUTION OF DESCENTS IN MATCHINGS

by

Gene B. Kim
M.A., University of California, Los Angeles, 2011

A Dissertation Submitted in Partial Fulfillment of the Requirements for the Doctor of Philosophy Degree, Department of Mathematics, in the Graduate School, University of Southern California, May 2018.

Copyright by GENE B. KIM, 2018. All Rights Reserved.

AN ABSTRACT OF THE DISSERTATION OF GENE B. KIM, for the Doctor of Philosophy degree in MATHEMATICS, presented on December 8, 2017, at the University of Southern California.

TITLE: DISTRIBUTION OF DESCENTS IN MATCHINGS
THESIS ADVISOR: Dr. Jason Fulman

The distribution of descents in certain conjugacy classes of $S_n$ has been studied previously, and its moments are known to have interesting properties. This dissertation provides a bijective proof of the symmetry of the descents and major indices of matchings and uses a generating function approach to prove a central limit theorem for the number of descents in matchings. We also extend this result to fixed point free permutations.

DEDICATION

This dissertation is dedicated to Nurungji and Jeremi, the best hamsters one could ever wish for.

ACKNOWLEDGMENTS

First, I would like to thank Professor Jason Fulman, my advisor, for guiding me through graduate school and giving me valuable advice on countless occasions. Professor Brendon Rhoades introduced me to the wonderful world of algebraic combinatorics and helped me with the bijection in Chapter 3. Professor Simon Thomas deserves my gratitude for mentoring me in college and encouraging me to pursue a Ph.D. in math. Simon has always been a role model and he is, to this day, the best presenter I have seen. I would also like to thank Professors Larry Goldstein and David Kempe for making the time to be on my committee. David went the extra mile as an outside member to do most of the calculations in this dissertation and caught a few crucial typographical errors.

Huge thanks go out to Sangchul Lee, who helped me brush up for the analysis and probability qualifying examinations. The material in Chapter 6 is work done jointly with Sangchul. I would also like to thank Sangjin Lee for teaching me differential geometry. Sangchul, Sangjin, and I would meet regularly and have discussions about research which were definitely helpful. Adam Massey also deserves gratitude for teaching me algebra and helping me with the algebra qualifying examination. I would also like to thank Jeremy Engel for teaching me LaTeX and making me an expert in it.

I would also like to thank my friends Jae Young Bang, Woochan Jun, Taehyung Kim, and Anton Shkel. While we did not have mathematical discussions, our regular conversations were meaningful and provided inspiration. Special thanks go to my former hamsters Jeremi, who got me through qualifying examinations, and Nurungji, who got me through research.

Exercise has been a huge part of my graduate school life, and swimming was the best outlet to relieve frustrations from research. For that, I have to thank the staff of the Lyon Center and the wonderful Uytengsu Aquatic Center at USC. I have definitely spent a lot of time in the pool and gotten various ideas about research on numerous occasions.

Last but definitely not least, I would like to thank my wonderful wife April, who has always encouraged me to do what I want. April has always been there when I needed someone to lean on, and I could not have finished graduate school without her support. I look forward to the post-graduate-school arc of our journey.

TABLE OF CONTENTS

Abstract
Dedication
Acknowledgments
1 Introduction
2 Mean and variance of $d(\pi)$ and $maj(\pi)$ of matchings
3 Symmetry of $d(\pi)$ and $maj(\pi)$
4 Calculating the higher moments of $d(\pi)$
  4.1 Third moment
  4.2 Counting the intrapair arrangements
  4.3 Fourth moment
  4.4 Fifth moment
5 Central limit theorem for descents in matchings
6 Extending the result to fixed point free permutations
  6.1 Crossing the singularity
  6.2 Convergence of MGF of $C_\lambda$
7 Conclusion and future plans
References

CHAPTER 1
INTRODUCTION

The Euler-Riemann zeta function $\zeta(s)$ is a function of a complex variable $s$ that analytically continues the sum of the Dirichlet series
\[ \zeta(s) = \sum_{n \ge 1} \frac{1}{n^s} \]
to the region where the real part of $s$ is less than 1. The Euler-Riemann zeta function plays a pivotal role in analytic number theory and has applications in physics, probability theory, and applied statistics. In order to calculate $\zeta(s)$ for negative integers $s$, Euler differentiated the equation
\[ \sum_{j \ge 0} x^j = \frac{1}{1-x} \]
several times to obtain
\[ \sum_{j \ge 1} j x^{j-1} = \frac{1}{(1-x)^2}, \qquad \sum_{j \ge 1} j^2 x^{j-1} = \frac{1+x}{(1-x)^3}, \]
\[ \sum_{j \ge 1} j^3 x^{j-1} = \frac{1+4x+x^2}{(1-x)^4}, \qquad \sum_{j \ge 1} j^4 x^{j-1} = \frac{1+11x+11x^2+x^3}{(1-x)^5}, \]
and so on. The Eulerian function $A_n(x)$ was defined by the relation
\[ \sum_{j \ge 1} j^n x^{j-1} = \frac{A_n(x)}{(1-x)^{n+1}}. \]
Euler also showed that if we write
\[ A_n(x) = \sum_{k=0}^{n-1} A_{n,k} x^k, \]
the coefficients $A_{n,k}$, called Eulerian numbers, satisfy the following recurrence relation:
\[ A_{n,0} = 1, \qquad A_{n,k} = (k+1) A_{n-1,k} + (n-k) A_{n-1,k-1} \quad (1 \le k \le n-1), \qquad A_{n,k} = 0 \quad (k \ge n). \tag{1.1} \]
It turns out that the Eulerian numbers can be interpreted combinatorially. First, we define two permutation statistics.

Definition. A permutation $\pi \in S_n$ has a descent at position $i$ if $\pi(i) > \pi(i+1)$, where $i = 1, \ldots, n-1$, and the descent set of $\pi$, denoted $\mathrm{Des}(\pi)$, is the set of all descents of $\pi$. The descent number of $\pi$ is defined as $d(\pi) := |\mathrm{Des}(\pi)| + 1$.

A permutation $\pi \in S_n$ has an excedance at position $i$ if $\pi(i) > i$, where $i = 1, \ldots, n-1$, and the excedance set of $\pi$, denoted $\mathrm{Exc}(\pi)$, is the set of all excedances of $\pi$. The excedance number of $\pi$ is defined as $\mathrm{exc}(\pi) := |\mathrm{Exc}(\pi)|$.

MacMahon showed, in [16], that
\[ \sum_{\pi \in S_n} x^{d(\pi)-1} = \sum_{\pi \in S_n} x^{\mathrm{exc}(\pi)}, \]
and Riordan showed, in [19], that if $a_{n,k}$ counts the number of permutations in $S_n$ that have $k$ excedances, then the $a_{n,k}$ satisfy the recurrence relations (1.1). These two results combine to show that $A_{n,k}$ counts the number of permutations in $S_n$ that have $k$ descents. The theory of descents in permutations has been studied thoroughly and is related to many questions.
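Before surveying that literature, the statistics just defined are easy to experiment with directly. The short Python sketch below (my own illustration, not part of the thesis; all function names are mine) computes descent sets and excedances by brute force and, for small n, checks both MacMahon's equidistribution result and the combinatorial interpretation of the Eulerian numbers via the recurrence (1.1).

```python
from itertools import permutations
from collections import Counter

def descent_set(p):
    # p is a permutation in one-line notation, e.g. (3, 1, 2)
    return {i for i in range(1, len(p)) if p[i - 1] > p[i]}

def exc(p):
    # number of excedances: positions i (1-indexed) with p(i) > i
    return sum(1 for i, v in enumerate(p, start=1) if v > i)

def eulerian(n, k):
    # Eulerian number A_{n,k} computed from the recurrence (1.1)
    if k == 0:
        return 1
    if k >= n:
        return 0
    return (k + 1) * eulerian(n - 1, k) + (n - k) * eulerian(n - 1, k - 1)

for n in range(1, 7):
    perms = list(permutations(range(1, n + 1)))
    by_des = Counter(len(descent_set(p)) for p in perms)  # number of descents, i.e. d(p) - 1
    by_exc = Counter(exc(p) for p in perms)
    assert by_des == by_exc                                    # MacMahon's equidistribution
    assert all(by_des[k] == eulerian(n, k) for k in range(n))  # A_{n,k} counts k-descent permutations
print("MacMahon and Eulerian checks pass for n <= 6")
```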
In [15], Knuth connected descents with the theory of sorting and the 2 theory of runs in permutations, and in [5], Diaconis, McGrath, and Pitman studied a model of card shuing in which descents play a central role. Bayer and Diaconis also used descents and rising sequences to give a simple expression for the chance of any arrangement after any number of shues and used this to give sharp bounds on the approach to randomness in [1]. Garsia and Gessel found a generating function for the joint distribution of descents, major index, and inversions in [11], and Gessel and Reutenauer, in [12], showed that the number of permutations with given cycle structure and descent set is equal to the scalar product of two special characters of the symmetric group. Petersen also has an excellent and very thorough book on the subject of Eulerian numbers [17]. It is well known [6] that the distribution of d() in S n is asymptotically normal with mean n+1 2 and variance n1 12 . Fulman also used Stein's method to show that the number of descents of a random permutation satises a central limit theorem with error rate n 1=2 in [9]. In [2], Chatterjee and Diaconis proved a central limit theorem ford() +d( 1 ), where is a random permutation { this result was originally proven by Vatutin in [23]. Using generating functions, Fulman proved the following analogous result in [8] about conjugacy classes with large cycles only: Theorem 1.1. For every n 1, pick a conjugacy class C n in S n , and let n i (C n ) be the number of i-cycles in C n . Suppose that for all i, n i (C n ) ! 0 as n ! 1. Then, the distribution of d() in C n is asymptotically normal with mean n1 2 and variance n1 12 . One question that arises from this result is: how are descent numbers distributed in conjugacy classes with small cycles? Denition. A matching, also known as a xed point free involution, is a permutation 2S 2n such that 2 = 1 and (i)6=i for all i = 1;:::; 2n. The main goal of this dissertation is to prove a central limit theorem for descent numbers of matchings, which come up frequently. Goldstein and Rinott considered a per- mutation method for testing whether observations given in their natural pairing exhibit 3 an unusual level of similarity in situations where all pairings can be compared in [13]. In [14], Harper showed that if the generating function of a nite sequence has real roots, then a central limit theorem follows. However, it turns out that the Eulerian functions over matchings have only complex roots. It is worth noting that the number of matchings inS 2n , denoted byF 2n in this thesis, is F 2n = (2n 1)!! = Q n i=1 (2i 1). After proving the central limit theorem for matchings, we will extend the result to the more general case of xed point free permutations. 4 CHAPTER 2 MEAN AND VARIANCE OF D() AND MAJ() OF MATCHINGS In this chapter, we will compute the mean and the variance of the descent numbers and major indices of matchings by the method of indicators. The major index is a permutation statistic that has been studied widely and will be dened later in the chapter. Given 2S 2n , let X i be the event that has a descent at position i and 1 i denote the indicator function of X i . We also denote the event that has descents at positions i 1 ;:::;i k by X i 1 ;:::;i k and the corresponding indicator function by 1 i 1 ;:::;i k . Lemma 2.1. If 2S 2n is a matching chosen uniformly at random, the probability that has a descent at position i is P (X i ) = n 2n 1 : Proof. Let 1i< 2n be arbitrary. 
We proceed by counting the number of matchings that have a descent at position i. There are two cases to consider. The rst case is where (i) = i + 1. This automatically guarantees a descent at positioni, and there areF 2n2 ways to match the remaining 2n 2 elements. Hence, there are F 2n2 matchings with a descent at position i and (i) =i + 1. The second case is where (i)6= i + 1. There are 2n2 2 ways of choosing pairs of candidates for ((i);(i + 1)). In order for there to be a descent at positioni, the larger of the two candidates has to be matched with i and the smaller with i + 1. There are F 2n4 ways to match the remaining 2n4 elements. Hence, there are 2n2 2 F 2n4 matchings with a descent at position i and (i)6=i + 1. Summing the two cases, we see that the probability that has a descent at positioni is P (X i ) = F 2n2 + 2n2 2 F 2n4 F 2n = n 2n 1 : 5 Note thatP (X i ) is independent ofi. We can use Lemma 2.1 to calculate the mean of d(). Theorem 2.2. Let 2 S 2n be a matching chosen uniformly at random. Then, Ed() = n + 1. Proof. The result follows by linearity of expectation and Lemma 2.1: Ed() =E 2n1 X i=1 1 i + 1 ! = (2n 1) n 2n 1 + 1 =n + 1: Finding the variance is a bit more complicated. In order to nd the second moment of d(), we proceed along a similar fashion as when we found the mean. Lemma 2.3. If n 3 and 2 S 2n is a matching chosen uniformly at random, for jijj = 1, the probability that has descents at positions i and j is P (X i \X j ) = n + 1 3(2n 1) : Proof. Let 1 i < i + 1 = j < 2n be arbitrary. We proceed by counting the number of matchings that have descents at positions i and j. There are two cases to consider. The rst case is where there is an intrapair within i, i + 1, and i + 2, i.e. jfi;i + 1;i + 2g\f(i);(i + 1);(i + 2)gj = 2. If two of the three get matched, there are 2n 3 choices to be matched with the remaining of the three. If the choice, say m, is greater thani, theni must be matched withm andi + 1 withi + 2 for there to be descents at positions i and i + 1. If m is less than i, i + 2 must be matched with m and i with i + 1 for there to be descents at positions i and i + 1. In both cases, there are F 2n4 ways to match the remaining 2n 4 elements, and so, there are (2n 3)F 2n4 matchings in the rst case. 6 The second case is where there is no intrapair within i, i + 1, and i + 2, i.e. jfi;i + 1;i + 2g\f(i);(i + 1);(i + 2)g = 0. There are 2n3 3 ways to choose three elements to get matched with i, i + 1, and i + 2. The descent conditions necessitate the largest of the three choices be matched withi, the second largest withi+1, and the smallest with i + 2. Since there are F 2n6 ways to match the remaining 2n 6 elements, there are 2n3 3 F 2n6 matchings in the second case. Hence, ifjijj = 1, P (X i \X j ) = (2n 3)F 2n4 + 2n3 3 F 2n6 F 2n = n + 1 3(2n 1) : Lemma 2.4. If n 4 and 2 S 2n is a matching chosen uniformly at random, for jijj> 1, the probability that has descents at positions i and j is P (X i \X j ) = n(n 1) (2n 1)(2n 3) : Proof. Let 1 i < j < 2n be arbitrary. In order to count the number of matchings that have descents at positions i and j, we consider the three following cases. The rst case is where there are two intrapairs within i, i + 1, j, and j + 1. There are two ways to have two intrapairs so that there are descents at positions i and j: either (i) =i + 1 and (j) =j + 1 or (i) =j + 1 and (j) =i + 1. Since there are F 2n4 ways to match up the 2n 4 remaining elements, there are 2F 2n4 matchings in this case. 
The second case is where there is one intrapair within i,i + 1,j, andj + 1. We break this down into subcases. 1. The rst subcase is when the intrapair is within i and i + 1, i.e. (i) =i + 1. Then, there are 2n4 2 ways to choose two elements to get matched with j and j + 1. The larger element has to be matched with j for there to be a descent at position j, 7 and since there are F 2n6 ways to match the remaining 2n 6 elements, there are 2n4 2 F 2n6 matchings in this subcase. 2. The second subcase is when the intrapair is within j and j + 1. This is the same as the previous subcase, and so, there are 2n4 2 F 2n6 matchings in this subcase. 3. The third subcase is when the intrapair is between one of i and i + 1 and one of j and j + 1. For this subcase, we rst choose the candidates to be matched with the two elements that are not in the intrapair, and there are 2n4 1 1 ways to do this. The size of the candidates relative to i and j determine which of i and i + 1 get matched with which one of j and j + 1. For example, if a and b are the candidates chosen to be matched with one of i and i + 1 and one of j and j + 1, respectively, with a < i and b > j, then a would be matched with i + 1, b would be matched with j, and i would be matched with j + 1. For each pair (a;b) of candidates chosen, one can see that there is a unique matching that guarantees descents at position i and j. There areF 2n6 ways to match the remaining 2n6 elements, and so, there are 2n4 1 1 F 2n6 matchings in this subcase. Summing the subcases, we see that there are 4 2n4 2 F 2n6 matchings in this case. The third case is where there is no intrapair within i, i + 1, j, and j + 1. There are 2n4 2 2 to choose two possible candidates to be matched with i and i + 1 and two possible candidates to be matched with j andj + 1. There are F 2n8 ways to match the remaining 2n 8 elements, and so, there are 6 2n4 4 F 2n8 matchings in the third case. Hence, ifjijj> 1, P (X i \X j ) = 2F 2n4 + 4 2n4 2 F 2n6 + 6 2n4 4 F 2n8 F 2n = n(n 1) (2n 1)(2n 3) : 8 Theorem 2.5. Let 2 S 2n be a matching chosen uniformly at random, where n 4. Then, Var (d()) = (n + 4)(n 1) 3(2n 1) : Proof. By Lemmas 2.1, 2.3, and 2.4, the second moment of d() is E (d()) 2 = 0 @ 2n1 X i=1 E1 2 i + X jijj=1 E1 i;j + X jijj>1 E1 i;j 1 A + 2 2n1 X i=1 E1 i + 1 = 6n 3 + 10n 2 + 3n 7 3(2n 1) ; and so, the variance of d() is Var (d()) =E (d()) 2 (Ed()) 2 = (n + 4)(n 1) 3(2n 1) : Note that the asymptotic mean and variance of d() of matchings are n and n 6 , re- spectively, which will be proven in Chapter 5. We now calculate the mean and variance of major indices of matchings, where major index of a permutation is dened by: Denition. Given a permutation 2S n , the major index of is the sum of the descent positions of , i.e. maj() = X i2Des() i: The mean and variance of maj() can be computed in the same way as above. Theorem 2.6. Let 2S 2n be a matching chosen uniformly and random. Then, Emaj() =n 2 : 9 Proof. By Lemma 2.1, we see that Emaj() =E 2n1 X i=1 i1 i ! = n 2n 1 2n1 X i=1 i =n 2 : Theorem 2.7. Let 2 S 2n be a matching chosen uniformly at random, where n 4. Then, Var (maj()) = 2n(n + 4)(n 1) 9 : Proof. 
By Lemmas 2.1, 2.3, and 2.4, the second moment of maj() is E (maj()) 2 = 2n1 X i=1 i 2 E1 2 i + X jijj=1 ijE i;j + X jijj>1 ijE i;j = n 2n 1 2n1 X i=1 i 2 + n + 1 3(2n 1) X jijj=1 ij + n(n 1) (2n 1)(2n 3) X jijj>1 ij = 9n 4 + 2n 3 + 6n 2 8n 9 ; and so, the variance of maj() is Var (maj()) =E (maj()) 2 (Emaj()) 2 = 2n(n + 4)(n 1) 9 : 10 CHAPTER 3 SYMMETRY OF D() AND MAJ() While looking at the distribution of the descent numbers and major indices forS 2n , we noticed that there was a palindromic property for both descent numbers and major indices. In other words, if we dene f(x;y) := P x d() y maj() , where the sum is over matchings, then x 2(n+1) y 2n 2 f (x 1 ;y 1 ) =f(x;y). This led us to search for a bijection that explained this symmetry. Denition. A Young diagram is a collection of boxes arranged in left-justied rows, with a weakly decreasing number of boxes in each row. A standard Young tableau (pl. tableaux) is a lling of a Young diagram with positive integers that is 1. strictly increasing across each row, and 2. strictly increasing down each column. A partition of a positive integer n is a sequence of nondecreasing positive integers = ( 1 ;:::; k ) such that P i i =n. If is a partition of n, we write `n. Every Young diagram corresponds to a partition , where i counts the number of boxes in theith column of the Young diagram, and we dene the shape of a Young tableau T , denoted by sh(T ) to be the Young diagram obtained by removing the entries of T . Due to the bijection between Young diagrams and partitions, Young diagrams are sometimes denoted by the corresponding partitions. Flipping a Young diagram over its main diagonal (from upper left to lower right) gives the conjugate diagram, denoted 0 . For more information on Young tableaux and the material in this chapter, Fulton has an excellent and very thorough book [10] on the material. 11 Example 3.1. The tableaux 1 3 4 8 2 5 7 6 and 1 2 6 3 5 4 7 8 are both standard Young tableaux and conjugates of each other. Given a tableau T and a positive integer x, an algorithm called row-insertion, due to Schensted [20], constructs a new tableau, denoted T x, as follows. If x is at least as large as all of the entries in the rst row of T , x is added in a new box to the end of the rst row. If not, we nd the left-most entry in the rst row that is strictly larger than x, and we put x in the box of this entry while removing (or \bumping") the entry. We now take the bumped entry from the rst row and repeat the process on the second row. The process is repeated until the bumped entry can be put at the end of the row it is bumped into, or until it is bumped out the bottom, in which case it forms a new row with one entry. A row-insertion T x determines a collection of boxes R, consisting of the boxes from which entries were bumped, together with the box where the last bumped element lands. This collection is called the bumping route of the row-insertion, and the last box added to the diagram is called the new box of the row-insertion. Given the new diagram and the new box, the row-insertion algorithm is reversible. Example 3.2. If we insert 4 into the tableau 1 2 5 7 3 6 9 8 ; the resulting tableau is 1 2 4 7 3 5 9 6 8 ; where the boxes with bold entries constitute the bumping route. 12 Another operation one can do with Young tableaux is sliding. Given a Young tableau, one can remove an entry, which creates a hole. The sliding operation, due to Sch utzenberger [21], slides the smaller of the box to the right and the box below into the hole. 
If both boxes are the same, the one below is chosen. This process creates a new hole and is repeated until the hole has been slid out of the tableau, i.e. when there is no more hole. Example 3.3. If we remove 1 from the tableau 1 2 5 9 3 4 7 6 8 and slide the hole out, the resulting tableau is 2 4 5 9 3 7 6 8 Denition. Given a partition , an oscillating tableau T of length n and shape is a sequence of n + 1 partitions T = [; = 0 ; 1 ;:::; n =] such that i diers from i1 by a single box. Equivalently, we can think of T as a walk on Young's lattice which starts at the empty partition, ends at , and has n steps. In [22], Sundaram provides a bijection between matchings and oscillating tableaux of empty shape and length 2n. Given a matching 2 S 2n , we dene an oscillating tableau T () of empty shape and length 2n as follows. We rst dene a sequence of Young tableaux P 0 ;P 1 ;:::;P 2n of length 2n. Starting with P 0 =;, given P i1 , we inspect the pairing (ij) and dene P i as follows. 1. If i is the smaller element in the pairing (ij), let P i be obtained from P i1 by row- inserting j. 13 2. If i is the larger element in the pairing (ij), then P i1 contains i. P i is obtained by removing i and sliding the hole out. If we dene T () to be the sequence of shapes T () = (sh(P 0 ); sh(P 1 );:::; sh(P 2n )); thenT is a bijective map from the set of matchings ofS 2n to the set of oscillating tableaux of empty shape and length 2n. For a given oscillating tableauT = ( 0 ;:::; 2n ), we can dene the conjugate oscillat- ing tableau to be T 0 = ( 0 0 ;:::; 0 2n ). Conjugate oscillating tableaux were rst considered by Chen et al. in [3]. We use the following result from [10] to prove the main result of this section. Lemma 3.1 (Row Bumping Lemma). Consider two successive row-insertions, rst row- inserting x in a tableau T and then row-inserting x 0 in the resulting tableau T x, giving rise to two routes R and R 0 , and two new boxes B and B 0 . 1. If xx 0 , then R is strictly left of R 0 , and B is strictly left of and weakly below B 0 . 2. If x>x 0 , then R 0 is weakly left of R and B 0 is weakly left of and strictly below B. Theorem 3.2. Let 2S 2n be a matching and dene 0 by 0 =T 1 (T () 0 ). Then, d() +d( 0 ) = 2(n + 1); and maj() +maj( 0 ) = 2n 2 : Before we prove Theorem 3.2, we provide the following example in order to read the proof more easily. 14 Example 3.4. Consider the matching = (1 4)(2 3)(5 6)2 S 6 . Then, has descents at positions 1, 2, 3, and 5, and so, d() = 5 and maj() = 11. The sequence of tableaux associated with is ;; 4 ; 3 4 ; 4 ;;; 6 ;; ; and so, the associated oscillating tableau is T () = ;; ; ; ;;; ;; : Then, the conjugate of T () is T () 0 = ;; ; ; ;;; ;; ; and the matching associated with the conjugate oscillating tableau is T 1 (T () 0 ) = (1 3)(2 4)(5 6), which has descents at positions 2 and 5. Hence, d (T 1 (T () 0 )) = 3 and maj (T 1 (T () 0 )) = 7, and we see that d() + d (T 1 (T () 0 )) = 2(3 + 1) and maj() +maj (T 1 (T () 0 )) = 2 3 2 . Proof of Theorem 3.2. Given a matching 2S 2n , let T () = ( 0 ; 1 ;:::; 2n ). For any 1i 2n 1, consider the sizes of i1 , i , and i+1 . Then, by Lemma 3.1, we see that 1. ifj i1 j<j i j andj i j>j i+1 j, then has a descent at position i. 2. ifj i1 j>j i j andj i j<j i+1 j, then does not have a descent at position i. 3. ifj i1 j<j i j<j i+1 j and the box added the second time is in a strictly lower row than the box added the rst time, then has a descent at position i. 4. 
ifj i1 j<j i j<j i+1 j and the box added the second time is in a weakly higher row than the box added the rst time, then does not have a descent at position i. 15 5. ifj i1 j >j i j >j i+1 j and the box removed the rst time is in a strictly lower row than the box removed the second time, then has a descent at position i. 6. ifj i1 j>j i j>j i+1 j and the box removed the rst time is in a weakly higher row than the box removed the second time, then does not have a descent at position i. For i xed, replacing T () by T () 0 has the eect of interchanging cases 3 and 4, as well as cases 5 and 6, while preserving cases 1 and 2. Since T () has empty shape, case 1 has to happen for precisely one more value of i than case 2. Hence, letting 1 i;j denote the indicator function that position i of falls into case (j), d() +d( 0 ) = 2n1 X i=1 1 i;1 + 1 i;3 + 1 i;5 + 2n1 X i=1 1 0 i;1 + 1 0 i;3 + 1 0 i;5 + 2 = 2n1 X i=1 1 i;1 + 1 i;1 + 1 i;3 + 1 i;4 + 1 i;5 + 1 i;6 + 2 = 2n1 X i=1 1 i;1 + 1 i;2 + 1 i;3 + 1 i;4 + 1 i;5 + 1 i;6 + 3 = 2n + 2: Similarly, by noting that P 2n1 i=1 i(1 i;1 1 i;2 ) =n, we see that maj() +maj( 0 ) = 2n1 X i=1 i1 i;1 +i1 i;3 +i1 i;5 + 2n1 X i=1 i1 0 i;1 +i1 0 i;3 +i1 0 i;5 = 2n1 X i=1 2i1 i;1 +i1 i;3 +i1 i;4 +i1 i;5 +i1 i;6 = 2n1 X i=1 i +n = 2n 2 : 16 CHAPTER 4 CALCULATING THE HIGHER MOMENTS OF D() One of our initial approaches to prove the asymptotic normality of d() was by the method of moments. We wanted to nd a general expression for the higher moments and verify that they converge to the moments of a normal distribution with the mean and variance found in Chapter 2. Even though the main result in this thesis is proved by a dierent method in Chapter 5, the method of moments, if successful, will give us the error terms for the convergence. If we want to use the indicator method to calculate the moments, we need to compute P (X i 1 \\X i k ), the probability that a matching chosen uniformly at random has descents at positions i 1 ;:::;i k . Once the positions i 1 ;:::;i k are given, we can associate a partition, say , of k by recording the number of consecutive indices. In this chapter, we will use the notation P () for P (X i 1 \\X i k ) for simplicity, and to better understand this new notation, we consider the following example. Example 4.1. The positions 1; 2; 5; 6; 7; 9; 11 would correspond to the partition (3; 2; 1; 1) since positions 1; 2 are 2 consecutive positions, 5; 6; 7 are 3 consecutive positions, and 9 and 11 are both singleton positions. In the three examples below, P = P (X i ), P =P (X i \X j ;jijj> 1), and P =P (X i \X j ;jijj = 1). The prob- abilities we calculated in Chapter 2 can be denoted as 1. P = n 2n 1 , 2. P = n(n 1) (2n 1)(2n 3) , and 3. P = n + 1 3(2n 1) . For the following calculations, we will be assuming that is a matching from S n instead of the S 2n setting we were in for Chapters 2 and 3. 17 Section 4.1 goes through the calculation of the third moment in detail by considering all the cases and the subcases. We also introduce the box diagram and intrapair notations to help us calculate the necessary probabilities in section 4.1. In section 4.2, we introduce an intermediate step which counts the number of possible intrapair arrangements given the number of boxes in each block involved in intrapairs. This step drastically reduces the number of subcases to consider, and the detailed use of this technique is given in section 4.3 while calculating the fourth moment. 
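All of the probabilities of prescribed descent-position patterns appearing below can also be checked exhaustively for small n, since there are only (2n-1)!! matchings to enumerate. The sketch below (my own verification code, not part of the thesis) does this for the three values carried over from Chapter 2, in the S_2n convention of that chapter; changing the list of required descent positions checks the formulas derived in the rest of this chapter in the same way.

```python
from fractions import Fraction

def matchings(elements):
    # generate every fixed point free involution of the given list, as a dict i -> sigma(i)
    if not elements:
        yield {}
        return
    first, rest = elements[0], elements[1:]
    for partner in rest:
        remaining = [x for x in rest if x != partner]
        for sub in matchings(remaining):
            full = dict(sub)
            full[first], full[partner] = partner, first
            yield full

def descent_positions(sigma, size):
    return {i for i in range(1, size) if sigma[i] > sigma[i + 1]}

def prob_descents_at(n, positions):
    # probability that a uniform matching in S_{2n} has descents at all listed positions
    total = hits = 0
    for m in matchings(list(range(1, 2 * n + 1))):
        total += 1
        if set(positions) <= descent_positions(m, 2 * n):
            hits += 1
    return Fraction(hits, total)

n = 4
assert prob_descents_at(n, [1]) == Fraction(n, 2 * n - 1)                               # Lemma 2.1
assert prob_descents_at(n, [2, 3]) == Fraction(n + 1, 3 * (2 * n - 1))                  # Lemma 2.3
assert prob_descents_at(n, [1, 4]) == Fraction(n * (n - 1), (2 * n - 1) * (2 * n - 3))  # Lemma 2.4
print("Lemmas 2.1, 2.3 and 2.4 verified exhaustively for n =", n)
```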
Even though the calculations in this thesis stop at the fth moment, we hope to be able to nd a more ecient way to calculate the higher moments in the future. 4.1 THIRD MOMENT The rst probability we need to compute is P . As we saw in Chapter 2, we are only concerned with the boxes where the descents need to occur and the boxes to the right of them. So, we are concerned with d d d , where d denotes the position at which there needs to be a descent. 1. The rst case we consider is when there is no intrapair. There is one subcase, which is illustrated below. Once we choose the six candidates for the boxes from n 6 elements, there are 6 222 ways to assign six candidates to the three blocks. Since there are F n12 ways to match the rest, there are n6 6 6 222 F n12 such matchings. 18 2. The second case we consider is when there is one intrapair. There are two subcases. The rst subcase is illustrated below, where the curve indicates the intrapair is between an element in the rst block and an element in the second block. Once we choose the four candidates for the boxes not in the intrapair, there are 4 112 ways to assign the four candidates to the three blocks. An important thing to note here is that once the candidates are assigned to the blocks, candidates are ordered from largest to smallest within each block due to the descent con- straints. Since there are three possible arrangements for the intrapair, there are 3 n6 4 4 112 F n10 such matchings. The second subcase is where the intrapair is within a block as illustrated below. Once we choose the four candidates for the boxes not in the intrapair, there are 4 22 ways to assign the four candidates to the two blocks. Since there are three possible arrangements for the intrapair, there are 3 n6 4 4 22 F n10 such matchings. 3. The third case we consider is when there are two intrapairs. There are four subcases. The rst subcase is illustrated below. Once we choose the two candidates for the boxes not in the intrapairs, there are 2 2 ways to assign the two candidates. Since there are 3 2 possible arrangements for the intrapairs, there are 3 n6 2 F n8 such matchings. 19 The second subcase is illustrated below. Once we choose the two candidates for the boxes not in the intrapairs, there are 2 2 ways to assign the two candidates. Since there are 3 2 possible arrangements for the intrapairs, there are 3 n6 2 F n8 such matchings. The third subcase is illustrated below. Once we choose the two candidates for the boxes not in the intrapairs, there are 2 11 ways to assign the two candidates. Since there are 3 possible arrangements for the intrapairs, there are 6 n6 2 F n8 such matchings. The fourth subcase is illustrated below. Once we choose the two candidates for the boxes not in the intrapairs, there are 2 11 ways to assign the two candidates. Since there are 3 possible arrangements for the intrapairs, there are 6 n6 2 F n8 such matchings. 4. The fourth case we consider is when there are three intrapairs. There are three subcases. The rst subcase is illustrated below. There is 1 possible arrangement for the intrapairs, and so, there are F n6 such matchings. 20 The second subcase is illustrated below. There are 3 possibles arrangements for the intrapairs, and so, there are 3F n6 such matchings. The third subcase is illustrated below. There is 1 possible arrangement for the intrapairs, and so, there are F n6 such matchings. 
Using the values we obtained, we can calculate P = 90 n6 6 F n12 + 54 n6 4 F n10 + 18 n6 2 F n8 + 5F n6 F n = n 3 6n 2 + 8n 8 8(n 1)(n 3)(n 5) : Now, we need to compute P , and so, we are concerned with d d d , where d denotes the position at which there needs to be a descent. 1. The rst case we consider is when there is no intrapair. The only case is illustrated below. Once we choose the ve candidates for the boxes, there are 5 32 ways to assign the ve candidates. Hence, there are 5 32 n5 5 F n10 such matchings. 21 2. The second case we consider is when there is one intrapair. There are three subcases to consider. The rst subcase is illustrated below. Once we choose the three candidates for the boxes not in the intrapair, there are 3 12 ways to assign the candidates to the two blocks. Since there is one possible arrangement for the intrapair, there are 3 21 n5 3 F n8 such matchings. The second subcase is illustrated below. Once we choose the three candidates for the boxes not in the intrapair, there is 3 3 way to assign the candidates to the one block. Since there is one possible arrangement for the intrapair, there are n5 3 F n8 such matchings. The third subcase is illustrated below. Once we choose the three candidates for the boxes not in the intrapair, there are 3 21 ways to assign the candidates to the two blocks. Since there is one possible arrangement for the intrapair, there are 3 21 n5 3 F n8 such matchings. 3. The third case we consider is when there are two intrapairs. There are three subcases. The rst subcase is illustrated below. Once we choose the one candidate for the box not in the intrapairs, we assign it to the block. Since there is one possible arrangement for the intrapairs, there are (n 5)F n6 such matchings. 22 The second subcase is illustrated below. Once we choose the one candidate for the box not in the intrapairs, we assign it to the block. Since there is one possible arrangement for the intrapairs, there are (n 5)F n6 such matchings. The third subcase is illustrated below. Once we choose the one candidate for the box not in the intrapairs, we assign it to the block. Since there is one possible arrangement for the intrapairs, there are (n 5)F n6 such matchings. Hence, we see that P = 10 n5 5 F n10 + 7 n5 3 F n8 + 3(n 5)F n6 F n = n 2 12(n 1)(n 3) : Finally, we need to calculate P , and so, we are concerned with d d d ; where d denotes the position at which there needs to be a descent. 23 1. The rst case we consider is when there is no intrapair. The only case is illustrated below. Once we choose the four candidates for the boxes, we see that there are n4 4 F n8 such matchings. 2. The second case we consider is when there is one intrapair. The only case is illustrated below. Once we choose the two candidates for the boxes not in the intrapair, we see that there are n4 2 F n6 such matchings. 3. The third case we consider is when there are two intrapairs. The only case is illustrated below. We see that there are F n6 such matchings. Hence, we see that P = n4 4 F n8 + n4 2 F n6 +F n4 F n = n(n + 2) 24(n 1)(n 3) : 24 Using the three probabilities we calculated, we see that the third moment is Ed 3 =E n1 X i=1 1 i ! 3 = (n 1)P + 6 n 2 2 P + 6(n 2)P + 6 n 3 3 P + 12 n 3 2 P + 6(n 3)P = n 4 + 6n 2 16n 8(n 1) : 4.2 COUNTING THE INTRAPAIR ARRANGEMENTS As we saw in the previous section, the computation of the probabilies for the third moment was a complicated procedure. These calculations had to be done several times as some subcases were either missed or double-counted. 
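One practical safeguard against exactly these mistakes is to compare each closed form with an exhaustive computation over all matchings of small even size. A minimal sketch of such a check (mine, reusing the hypothetical matchings and descent_positions helpers from the earlier sketch; in this chapter n denotes the size of the permutation, so a matching in S_n is enumerated directly) confirms the third-moment formula just derived:

```python
from fractions import Fraction

def third_moment(n):
    # E[(number of descents)^3] over uniform matchings of S_n (n even),
    # matching the expansion E(sum of indicators)^3 used in this section;
    # assumes matchings() and descent_positions() from the previous sketch
    total = acc = 0
    for m in matchings(list(range(1, n + 1))):
        total += 1
        acc += len(descent_positions(m, n)) ** 3
    return Fraction(acc, total)

for n in (4, 6, 8):
    closed_form = Fraction(n**4 + 6 * n**2 - 16 * n, 8 * (n - 1))
    print(n, third_moment(n), closed_form)   # the two values should coincide
```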
After making several mistakes calculating the fourth moment and the fth moment, we realized that there is a more ecient method to count the number of matchings. Some of the subcases that we have seen can actually be grouped by the number of boxes from each block that are to be in the intrapairs. For example, in the table on the next page, we see that there is only one arrangement of intrapairs connecting one box and one box (IP (1;1) = 1), and there are two arrangements of intrapairs connecting one box, one box, and two boxes or two boxes and two boxes (IP (2;1;1) =IP (2;2) = 2). These quantities can be calculated by using a recursive relation, i.e. building on the arrangements for smaller partitions. The gures below illustrate the subcases for four intrapairs in calculating P and show how counting the intrapair arrangements can drastically re- duce the number of subcases to consider. 25 The gures on the left column represent the 14 subcases similar to the subcases we have seen in the previous section. The subcases 1 7 can be grouped into the subcaseA in the right column, and the subcases 8 14 can be grouped into the subcase B in the right column. 1: 3 2: 6 3: 6 4: 3 5: 6 6: 3 7: 6 8: 1 9: 3 10: 3 11: 1 12: 3 13: 3 14: 3 A: 3IP (3;2;2;1) = 33 XXX XX XX X B: IP (2;2;2;2) = 17 XX XX XX XX This example should give the reader a sense of how ecient the process of counting intrapair arrangements is. The IP quantities for smaller partitions have been calculated and are shown in the tables on the next page. 26 n `n IP 2 (1,1) 1 (2) 1 4 (1,1,1,1) 3 (2,1,1) 2 (2,2) 2 (3,1) 1 (4) 1 6 (1,1,1,1,1,1) 15 (2,1,1,1,1) 9 (2,2,1,1) 6 (2,2,2) 5 (3,1,1,1) 4 (3,2,1) 3 (3,3) 2 (4,1,1) 2 (4,2) 2 (5,1) 1 (6) 1 n `n IP 8 (1,1,1,1,1,1,1,1) 105 (2,1,1,1,1,1,1) 60 (2,2,1,1,1,1) 36 (2,2,2,1,1) 23 (2,2,2,2) 17 (3,1,1,1,1,1) 25 (3,2,1,1,1) 16 (3,2,2,1) 11 (3,3,1,1) 8 (3,3,2) 6 (4,1,1,1,1) 10 (4,2,1,1) 7 (4,2,2) 6 (4,3,1) 4 (4,4) 3 (5,1,1,1) 4 (5,2,1) 3 (5,3) 2 (6,1,1) 2 (6,2) 2 (7,1) 1 (8) 1 27 4.3 FOURTH MOMENT The rst probability we will calculate is P . 1. The rst case we look at is when there are no intrapairs. The only subcase is There are 8 2222 n8 8 F n16 = 2520 n8 8 F n16 such matchings. 2. The second case is when there is one intrapair. The rst subcase is XX ; where X denotes the boxes in the blocks that are in the intrapair. Since there are 4 possible arrangements for the X's, there are 4IP (2) 6 222 n8 6 F n14 = 360 n8 6 F n14 such matchings. The second subcase is X X There are 6 possible arrangements for the X's, and so, there are 6IP (1;1) 6 1122 n8 6 F n14 = 1080 n8 6 F n14 such matchings. 3. The third case is when there are two intrapairs. The rst subcase is XX XX There are 6 possible arrangements for the X's, and so, there are 6IP (2;2) 4 22 n8 4 F n12 = 72 n8 4 F n12 such matchings.= 28 The second subcase is XX X X There are 12 possible arrangements for the X's, and so, there are 12IP (2;1;1) 4 112 n8 4 F n12 = 288 n8 4 F n12 such matchings. The third subcase is X X X X There is 1 possible arrangement for the X's, and so, there are IP (1;1;1;1) 4 1111 n8 4 F n12 = 72 n8 4 F n12 such matchings. 4. The fourth case is when there are three intrapairs. The rst subcase is XX XX XX There are 4 possible arrangements for the X's, and so, there are 4IP (2;2;2) 2 2 n8 2 F n10 = 20 n8 2 F n10 such matchings. The second subcase is XX XX X X There are 6 possible arrangements for the X's, and so, there are 6IP (2;2;1;1) 2 11 n8 2 F n10 = 72 n8 2 F n10 such matchings. 5. The last case is when there are four intrapairs. 
The only subcase is XX XX XX XX ; and so, there are IP (2;2;2;2) F n8 = 17F n8 such matchings. 29 Using the above quantities, we see that P = n 4 12n 3 + 44n 2 80n + 144 16(n 1)(n 3)(n 5)(n 7) : Next, we calculate P . 1. The rst case is when there are no intrapairs. The only subcase is There are 7 322 n7 7 F n14 = 210 n7 7 F n14 such matchings. 2. The second case is when there is one intrapair. The rst subcase is XX There are IP (2) 5 122 n7 5 F n12 = 30 n7 5 F n12 such matchings. The second subcase is XX There are two arrangements for theX's, and so, there are 2IP (1;1) 5 3;2 n7 5 F n12 = 20 n7 5 F n12 such matchings. The third subcase is X X Since there are two arrangements for the X's, there are 2IP (1;1) 5 212 n7 5 F n12 = 60 n7 5 F n12 such matchings. 30 The fourth subcase is X X There are IP 1;1 5 311 n7 5 F n12 = 20 n7 5 F n12 such matchings. 3. The third case is when there are two intrapairs. The rst subcase is XXX X There are two arrangements for theX's, and so, there are 2IP (3;1) 3 12 n7 3 F n10 = 6 n7 3 F n10 such matchings. The second subcase is XX XX There are two arrangements for the X's, and so, there are 2IP (2;2) 3 12 n7 3 F n10 = 12 n7 3 F n10 such matchings. The third subcase is XX X X There are IP (2;1;1) 3 111 n7 3 F n10 = 12 n7 3 F n10 such matchings. The fourth subcase is X XX X Since there are two arrangements for the X's, there are 2IP (2;1;1) 3 21 n7 3 F n10 = 12 n7 3 F n10 such matchings. The fth subcase is XX XX There are IP (2;2) n7 3 F n10 = 2 n7 3 F n10 such matchings. 31 4. The fourth case is when there are three intrapairs. The rst subcase is XXX XX X There are two arrangements for theX's, and so, there are 2IP (3;2;1) (n7)F n8 = 6(n 7)F n8 such matchings. The second subcase is XX XX XX There are IP (2;2;2) (n 7)F n8 = 5(n 7)F n8 such matchings. Hence, P = n 3 4n 2 + 4n 24 24(n 1)(n 3)(n 5) : Next, we calculate P . 1. The rst case is when there are no intrapairs. The only subcase is There are 6 33 n6 6 F n12 = 20 n6 6 F n12 such matchings. 2. The second case is when there is one intrapair. The rst subcase is XX There are two arrangements for the X's, and so, there are 2IP (2) 4 13 n6 4 F n10 = 8 n6 4 F n10 such matchings. 32 The second subcase is X X There are IP (1;1) 4 22 n6 4 F n10 = 6 n6 4 F n10 such matchings. 3. The third case is when there are two intrapairs. The rst subcase is XXX X There are two arrangements for the X's, and so, there are 2IP (3;1) n6 2 F n8 = 2 n6 2 F n8 such matchings. The second subcase is XX XX There are IP (2;2) 2 11 n6 2 F n8 = 4 n6 2 F n8 such matchings. 4. The fourth case is when there are three intrapairs. The only subcase is XXX XXX There are IP (3;3) F n6 = 2F n6 such matchings. Using these quantities, we see that P = n 3 3n 2 + 2n 48 36(n 1)(n 3)(n 5) : 33 Next, we calculate P . 1. The rst case is when there are no intrapairs. The only subcase is There are 6 42 n6 6 F n12 = 15 n6 6 F n12 such matchings. 2. The second case is when there is one intrapair. The rst subcase is XX There are IP (2) 4 22 n6 4 F n10 = 6 n6 4 F n10 such matchings. The second subcase is X X There are IP (1;1) 4 31 n6 4 F n10 = 4 n6 4 F n10 such matchings. The third subcase is XX There are IP (2) n6 4 F n10 = n6 4 F n10 such matchings. 3. The third case is when there are two intrapairs. The rst subcase is XXXX There are IP (4) n6 2 F n8 = n6 2 F n8 such matchings. The second subcase is XXX X There are IP (3;1) 2 11 n6 2 F n8 = 2 n6 2 F n8 such matchings. 34 The third subcase is XX XX There are IP (2;2) n6 2 F n8 = 2 n6 2 F n8 such matchings. 4. 
The fourth case is when there are three intrapairs. The only subcase is XXXX XX There are IP (4;2) F n6 = 2F n6 such matchings. Using these quantities, we see that P = n 3 2n 2 48 48(n 1)(n 3)(n 5) : As for P , it is easy to see that P = n 2 + 6n + 48 120(n 1)(n 3) : Using these probabilities, we see that the fourth moment is Ed 4 = 15n 6 30n 5 + 140n 4 972n 3 + 1336n 2 888n + 1344 240(n 1)(n 3) : The third and fourth moments we found in this chapter are consistent with that of a normal distribution with the mean and variance we found in Chapter 2. 35 4.4 FIFTH MOMENT Using similar methods as in the previous section, we nd P = n 5 20n 4 + 140n 3 480n 2 + 1264n 2304 32(n 1)(n 3)(n 5)(n 7)(n 9) P = n 4 10n 3 + 32n 2 112n + 320 48(n 1)(n 3)(n 5)(n 7) P = n 4 9n 3 + 26n 2 144n + 432 72(n 1)(n 3)(n 5)(n 7) P = n 4 8n 3 + 20n 2 160n + 576 96(n 1)(n 3)(n 5)(n 7) P = n 3 + 20n 96 144(n 1)(n 3)(n 5) P = n 3 + 2n 2 + 40n 240 240(n 1)(n 3)(n 5) P = n 3 + 6n 2 + 128n 480 720(n 1)(n 3)(n 5) : From the probabilities above, we see that the fth moment is Ed 5 = 3n 7 2n 6 + 44n 5 292n 4 + 376n 3 888n 2 + 1344n 96(n 1)(n 3) : We note that, due to the symmetry proven in Chapter 3, the odd central moments are 0, so given the lower moments, we can calculate the odd moments Ed k . However, in order to calculate the even moments, we need all the probabilities from the lower moments. The method of moments was the initial approach in an attempt to prove the central limit theorem for descents in matchings as this was the method used in [8]. For descents in conjugacy classes with large cycles, Fulman showed that the moments were equal to the moments of descents in S n , and this gave the desired central limit theorem. However, the moments of descents in matchings did not match up with the moments of descents in S n , and the increasing complexity in computing higher moments convinced the author to stop after the sixth moment and look for a dierent approach. 36 CHAPTER 5 CENTRAL LIMIT THEOREM FOR DESCENTS IN MATCHINGS LetD n be the number of descents of a permutation n which is uniformly chosen from matchings in S 2n . In [8], Fulman showed that Et Dn = (1t) 2n+1 (2n 1)!! 1 X k=0 k(k+1) 2 +n 1 n t k ; (5.1) which is a formal identity, and an actual identity forjtj < 1, where n!! denotes the odd factorial. Now, we dene the normalized random variable W n = D n n p n : Recall that the moment generation function (MGF) of a random variableX is dened asM X (s) =Ee sX . In [4], Curtiss proved the following MGF analogue of the L evy continuity theorem, which we will use to prove the normal convergence. Theorem 5.1. Suppose we have a sequencefX n g 1 n=1 of random variables and there exists s 0 > 0 such that each MGFM n (s) =Ee sXn converges fors2 (s 0 ;s 0 ). IfM n (s) converges pointwise to some function M(s) for each s 2 (s 0 ;s 0 ), then M is the MGF of some random variable X, and X n converges to X in distribution. In view of this result by Curtiss, it suces to show that M Wn (s) converges pointwise to M W (s) =e s 2 =12 , which is the MGF of W :=N 0; 1 6 . In Chapter 3, we showed that D n is symmetric about n. Hence,nD n has the same distribution as D n n, and so, M Wn (s) =M Wn (s). Thus, we can assume s> 0 without 37 loss of generality. Since e s= p n < 1, we can plug t =e s into (5.1) to get M Wn (s) = e s p n 1e s p n 2n+1 (2n 1)!! 1 X k=0 k(k+1) 2 +n 1 n e ks p n =e s p n 1e s p n s p n ! 2n+1 s p n 2n+1 (2n)! 1 X k=0 n1 Y j=0 k 2 +k + 2j ! 
e ks p n : By considering the Taylor expansion 1e x x = 1 x 2 + x 2 6 +O(x 3 ), we see that log 1e s p n s p n ! = log 1 s 2 p n + s 2 6n +O 1 n 3 2 = s 2 p n s 2 6n +O 1 n 3 2 1 2 s 2 p n +O 1 n 2 +O 1 n 3 2 = s 2 p n + s 2 24n +O 1 n 3 2 ; and so, e s p n 1e s p n s p n ! 2n+1 = exp ( s p n + (2n + 1) log 1e s p n s p n !) = exp s p n + (2n + 1) s 2 p n + s 2 24n +O 1 n 3 2 = exp s 2 12 +O 1 p n =e s 2 12 +O n 1=2 ; as n!1, where the implicit bound depends only on s. Thus, if we prove the following lemma, we are done. Lemma 5.2. For each xed s> 0, lim n!1 s p n 2n+1 (2n)! 1 X k=0 n1 Y j=0 (k 2 +k + 2j) ! e ks p n = 1: (5.2) 38 Proof. First, we start with the lower bound. For k 1xk, we have e ks p n e s p n x+1 ; and n1 Y j=0 k 2 +k + 2j x 2n : Thus, s p n 2n+1 (2n)! 1 X k=0 n1 Y j=0 (k 2 +k + 2j) ! e ks p n s p n 2n+1 (2n)! e s p n Z 1 0 x 2n e s p n x dx = s p n 2n+1 (2n)! e s p n (2n)! loge s p n 2n+1 =e s p n This provides a uniform lower bound for the left-hand side of (5.2). As for the upper bound, more work is required due to the competition between the size of n and k. In order to control the sum, we divide the range of k into 3 ranges and investigate the behavior of the corresponding truncated sums respectively. Small range (k< p n). Note that n1 Y j=0 k 2 +k + 2j =k 2n n1 Y j=0 1 + 1 k + 2j k 2 k 2n (2 n n!): 39 Then, we have X k p n n1 Y j=0 k 2 +k + 2j ! e ks p n 2 n n! X k p n k 2n e ks p n 2 n n! p n p n 2n Cn 2n+1 2 n e n ; where C is independent of n. Thus, s p n 2n+1 (2n)! X k< p n n1 Y j=0 (k 2 +k + 2j) ! e ks p n C s 2n+1 e n n n 2 n ! 0 as n!1. Medium range ( p n<k <"n) and large range (k >"n). Next, we focus on the range k> p n. We begin by noting that n1 Y j=0 k 2 +k + 2j =k 2n n1 Y j=0 1 + 1 k + 2j k 2 k 2n exp ( n1 X j=0 1 k + 2j k 2 ) k 2n e n k + n 2 k 2 : Then, we get s p n 2n+1 (2n)! X k> p n n1 Y j=0 (k 2 +k + 2j) ! e ks p n s p n 2n+1 (2n)! Z 1 p n u 2n e n u + n 2 u 2 e su p n du = 1 (2n)! Z 1 s v 2n e v e s p n v + s 2 n v 2 dv: The last integral is close to what we want, but we need control over the undesirable expo- nential term. For the medium range p n < k < "n, note that v7! v 2n e v is increasing on (0; 2n). 40 Thus, there exists an absolute C > 0 such that 1 (2n)! Z "n s v 2n e v e s p n v + s 2 n v 2 dv 1 (2n)! ("n) 2n+1 e "n e p n+n C p n" 2n+1 e p n+(3log4")n ; which decays exponentially for "> 0 suciently small, i.e. " 2 e 3 4 < 1. For the large range k>"n, 1 (2n)! Z 1 "n v 2n e v e s p n v + s 2 n v 2 dv 1 2n! e s " p n + s 2 " 2 n Z 1 "n v 2n e v dv 1 +O n 1 2 : Combining the three calculations, we obtain the bound s p n 2n+1 (2n)! 1 X k=0 n1 Y j=0 k 2 +k + 2j ! e ks p n 1 +o(1); and so, the lemma is proven. Combining Theorem 5.1 and Lemma 5.2, we obtain the desired result: Theorem 5.3. Since M Wn (s) converges to M W (s) pointwise, W n converges to W in dis- tribution, i.e. W n )N 0; 1 6 : Note that this is consistent with the result from Chapter 2 where we found that the variance of D n is n+3 6 . 41 CHAPTER 6 EXTENDING THE RESULT TO FIXED POINT FREE PERMUTATATIONS After the central limit theorem for descents in matchings was proved, Diaconis con- jectured that a similar result holds for xed point free permutations. The proof for the asymptotic normality of descents in matchings used the symmetry proven in Chapter 3, which allowed us to restrict to the case s > 0. The s > 0 case corresponds tojtj < 1 for which (5.1) is an actual identity. 
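That identity is easy to sanity-check numerically for |t| < 1. The sketch below (my own illustration, not from the thesis) compares a truncation of the infinite sum in (5.1) with the left-hand side computed directly for n = 1 and n = 2, where the matchings and their descent counts can be listed by hand.

```python
from math import comb

def odd_factorial(n):
    # (2n - 1)!! = 1 * 3 * ... * (2n - 1)
    out = 1
    for i in range(1, n + 1):
        out *= 2 * i - 1
    return out

def rhs(n, t, terms=400):
    # truncation of the right-hand side of (5.1); the tail is negligible for |t| < 1
    s = sum(comb(k * (k + 1) // 2 + n - 1, n) * t**k for k in range(terms))
    return (1 - t) ** (2 * n + 1) / odd_factorial(n) * s

t = 0.4
# n = 1: the unique matching of S_2 is 21, with one descent, so E t^{D_1} = t
print(rhs(1, t), t)
# n = 2: the matchings of S_4 are 2143, 3412, 4321 with 2, 1, 3 descents respectively
print(rhs(2, t), (t**2 + t + t**3) / 3)
```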
Since there is no palindromic property for descents in xed point free permutations, we need to deal with the case s< 0, or equivalentlyjtj> 1, and come up with an actual identity similar to (5.1). 6.1 CROSSING THE SINGULARITY For each subset S S n , we dene the generating function A S (t) = P 2S t d() . Any conjugacy class of S n has a partition associated with it, where consists of the cycle lengths, and the conjugacy class is denotedC . For example, the conjugacy class of match- ings in S n is associated with the partition (2; 2;:::; 2)`n. In [8], Fulman shows that, for any conjugacy classC , if has m i i's, the corresponding generating function is A C (t) = (1t) n+1 1 X a=1 t a n Y i=1 f i;a +m i 1 m i ; (6.1) wheref i;a = 1 i P dji (d)a i=d and(d) is the M obius function. This identity holds as a formal power series, and as an actual convergent series forjtj< 1. In [18], Reutenauer showed that f i;a counts the number of primitive circular words of lengthi from the alphabetf1;:::;ag. Proposition 6.1. LetC be a conjugacy class of S n . Then, forjtj> 1, A C (t) = (t 1) n+1 1 X a=1 t a " (1) n n Y i=1 f i;a +m i 1 m i # : (6.2) 42 In order to prove the proposition, we rst prove an analogous statement for A n (t) = A Sn (t). Lemma 6.2. Forjtj> 1, A n (t) = (t 1) n+1 1 X a=1 a n t a : (6.3) Proof. Recall thata n = P n k=0 n k (a) k , where n k is the (unsigned) Stirling number of the second kind, and (a) k =a(a1) (ak +1) is the falling factorial. Plugging this identity into A n (t) = (1t) n+1 P 1 a=1 a n t a , we see that A n (t) = (1t) n+1 n X k=0 n k 1 X a=0 t a (a) k = n X k=0 n k k!t k (1t) nk ; where the second equality follows from 1 X a=0 (a) k t ak = d dt k (1t) 1 =k!(1t) (k+1) : Now, we can view A n (t) as a polynomial in t, and assumingjtj > 1, we substitute t = 1=s to get A n 1 s = (1) n s 1 s 1 n+1 n X k=0 n k (1) k k! (1s) k+1 = (1) n s 1 s 1 n+1 n X k=0 n k (1) k 1 X a=0 (a +k) k s a = (1) n s 1 s 1 n+1 1 X a=0 n X k=0 n k (1) k (a +k) k ! s a : We can simplify the expression further by noting (1) k (a +k) k = (a 1) k , and so, A n 1 s = (1) n s 1 s 1 n+1 1 X a=0 (a 1) n s a = 1 s 1 n+1 1 X a=1 a n s a : Plugging back s = 1=t proves (6.3). 43 Proof of Proposition 6.2. By viewing n Y i=1 f i;a +m i 1 m i = n X k=1 c k a k ; we can write A C (t) = n X k=1 c k (1t) nk A k (t): Hence, by (6.3), forjtj> 1, we have A C (t) = n X k=1 c k (1t) nk (t 1) k+1 1 X a=1 a k t a = (t 1) n+1 1 X a=1 t a " (1) n 1 X k=1 c k (a) k # = (t 1) n+1 1 X a=1 t a " (1) n n Y i=1 f i;a +m i 1 m i # 6.2 CONVERGENCE OF MGF OFC LetC be a xed point free conjugacy class ofS n . If hasm i i's, thenm 1 = 0. Denote F i;a := if i;a . Then, by (6.1) and (6.2), the MGF of d(), where is chosen uniformly at random fromC is Ee sd() = A C (e s ) jC j = 8 > > > > < > > > > : (1e s ) n+1 n! 1 X a=1 e sa n Y i=2 m i 1 Y k=0 (F i;a +ik); s< 0 (e s 1) n+1 n! 1 X a=1 e sa n Y i=2 m i 1 Y k=0 (1) i (F i;a +ik); s> 0 (6.4) We claim thatd() is asymptotically normal with mean n+1 2 and variance n 12 . In accordance with this claim, we introduce a normalized random variable W dened by the relation d() = n+1 2 + p nW n . We want to show thatW n is asymptotically normal with mean 0 and variance 1 12 . As we did in Chapter 5, we will be using Theorem 5.1 to prove this result. 44 Let us denote the MGF of W n by M . Then, M is related to the MGF of d() by M (s) =Ee sWn =e (n+1)s 2 p n Ee sd() p n : We now claim the following theorem. Theorem 6.3. 
For each s, there exist constants C 1 ;C 2 > 0, which depend only on s, such that 1 C 1 p n e s 2 24 M (s) 1 + C 2 p n e s 2 24 holds for all n and for all xed point free conjugacy classesC of S n . Before proving the theorem, we prove the following lemma about F i;a and F i;a . Lemma 6.4. Let a and i be positive integers. Then, 1. (1) i F i;a =F i;a + 2Fi 2 ;a 1 ford 2 (i)=1g , 2. (upper bound) 0F i;a a i and 0 (1) i F i;a a i + 2a i 2 , and 3. (lower bound) (1) i F i;a F i;a a i 2 a i 2 i 2 . Proof. Part (1) is proven by looking at the denition of f i;a . Let us writei = 2 k q, wherek is a positive integer and q is an odd integer. Then, by the multiplicity of , we have F i;a = k X j=0 X djq (d) 2 j a 2 kjq d ; and F i;a = k X j=0 X djq (d) 2 j (a) 2 kjq d : 45 We divide the computation into three cases. 1. If k = 0, i =q is odd, and so, (1) i F i;a = X djq (d)(a) q d = X djq (d)a q d =F i;a : 2. If k = 1, both the terms for j = 0; 1 may survive, and (1) i F i;a = X djq (d)(a) 2q d + X djq (2)(d)(a) q d = X djq (d)a 2q d + X djq (d)a q d = X dj2q (d)a 2q d + 2 X djq (d)a q d =F i;a + 2Fi 2 ;a : 3. If k 2, we have (1) i = 1 and (a) 2 kjq d =a 2 kjq d for j = 0; 1 and djq. Hence, by comparing the formula for F i;a and (1) i F i;a , we see that they coincide. For part (2), we note that f i;a counts certain types of words, and so, F i;a =if i;a 0. By using M obius inversion formula, we see thatF i;a P dji F i;a =a i . The second inequality follows from part (1) and the rst inequality. For part (3), note that, by parts (1) and (2), we have (1) i F i;a F i;a . The other half of the inequality follows by noting that F i;a a i X dji;d6=i a d a i i 2 a i 2 ; and so, the lemma is proven. 46 We now prove the main theorem. Proof of Theorem 6.3. LetC be a xed point free conjugacy class. Since M (0) = 1, without loss of generality, we may assume that s6= 0. Let us dene f M (s) by M (s) = e s p n 1 s p n e s 2 p n ! n+1 f M (s): By using the Taylor expansion log e x 1 x x 2 = x 2 24 +O (x 4 ) as x! 0, it follows that e s p n 1 s p n e s 2 p n ! n+1 =e s 2 24 +O(n 1 ) ; where the implicit constant depends only on s. In view of this computation, it suces to show that f M (s) = 1 +O n 1 2 ; where the implicit constant is independent of n and . In what follows, it is convenient to consider =jsj instead. Then, f M (s) = p n n n! 1 X a=1 e a p n A(;a); where A(;a) is dened by A(;a) = 8 > > > > < > > > > : n Y i=2 m i 1 Y k=0 (F i;a +ik); s< 0 n Y i=2 m i 1 Y k=0 (1) i (F i;a +ik); s> 0 (6.5) AlthoughA(;a) might depend on the sign ofs as well, we suppress this dependence from our notation. As we will work on xed values of s, this suppression will not aect our argument. We proceed by splitting f M (s) into three dierent ranges for a. 47 Small range (a p n). Using Lemma 6.4, we have jA(;a)j n Y i=2 m i 1 Y k=0 a i + 2a i 2 +ik n Y i=2 m i 1 Y k=0 3a i i(k + 1) (3a) n n! jC j : (6.6) Now, we take the maximum bound to estimate p n n n! X a p n e a p n A(;a) p n n+1 jC j p n max a p n jA(;a)j (3s) n+1 jC j : (6.7) This bound decays at least as fast as exponentially as n!1, uniformly in , thanks to the following lemma. Lemma 6.5. IfC is a conjugacy class of S n that is xed point free, then jC j n! n 2 !e n : Proof of Lemma 6.5. We rst note that n Y i=1 i m i = exp ( n X i=1 m i logi ) exp ( n X i=1 im i ) =e n : Similarly, since m 1 = 0, we note that n Y i=1 m i ! n X i=2 m i ! ! n X i=2 i 2 m i ! ! = n 2 !: Thus, it follows that jC j = n! Q n i=1 i m i m i ! n! 
n 2 !e n : 48 Medium range ( p n < a "n p n), where " > 0 is a xed constant satisfying "e 3=2 < 1. Since 1 +xe x , we have jA(;a)j n Y i=2 m i 1 Y k=0 a i + 2a i 2 +ik a n exp ( n X i=2 m i 1 X k=0 2 a 1 2 + ik a i ) a n e 2n a + n 2 2a 2 : (6.8) Sincea7!a n e a= p n increases ona2 [0;n p n=], we can provide a simple maximum bound p n n n! X p n<a"n p n e a p n A(;a) p n n n! X p n<a"n p n a n e a p n e 2n a + n 2 2a 2 p n n n! "n p n n+1 e "n e 2 p n+ n 2 C p n(") n+1 e ( 3 2 ")n+2 p n (6.9) for some absolute constantC > 0, where the last inequality follows from Stirling's approx- imation. Again, this bound decays exponentially fast due to our choice of ". Large range (a>"n p n), where"> 0 is the same constant used in the middle range calculations. Using (6.8), we see that jA(;a)ja n e 2 " p n + 1 2" 2 n : Next, let us assume n is large enough so that "n p n > 2n. Then, by Lemma 6.4, for 2 i n, we have F i;a a a n 2 3n. Then, we have, for 0 k < m i , (1) i F i;a F i;a 3n>ik, as well as i 2 a i 2 +ikn 1 2 a i 2 + 1 a i 2 2 1 2 a i 2 +a i 2 = 3 4 a i : 49 Using the estimate log(1x) x 1x 4x forjxj 3 4 , we obtain A(;a) n Y i=2 m i 1 Y k=0 a i i 2 a i 2 ik a n exp ( 4 n X i=2 m i 1 X k=0 i 2a i 2 + ik a i ) a n e 2n a 2n 2 a 2 a n e 2 " p n 2 " 2 n : From both estimates, it follows that p n n n! X "n p n<a e a p n A(;a) = 1 +O n 1 2 p n n n! X "n p n a n e a p n = 1 +O n 1 2 p n n n! Z 1 "n p n x n e x p n dx = 1 +O n 1 2 1 n! Z 1 "n y n e y dy; where the implicit bounds are independent of n and in each step. Finally, using the fact that <e 3=2 < 1, we obtain a simple maximum bound 1 n! Z "n 0 y n e y dy 1 n! ("n) n+1 e "n C p n(") n+1 e (1")n for some absolute constant C > 0. Again, this decays exponentially, and so, we obtain p n n n! X "n p n<a e a p n A(;a) = 1 +O n 1 2 : (6.10) The conclusion follows from combining all the estimates (6.7), (6.9), and (6.10) together. 50 By Theorem 6.3, we have the desired result. Theorem 6.6. SinceM (s) converges pointwise toe s 2 24 , which is the MGF ofN 0; 1 12 , we have W n )N 0; 1 12 : 51 CHAPTER 7 CONCLUSION AND FUTURE PLANS In this thesis, we proved a central limit theorem for descents in matchings. The palindromic property, proved in Chapter 3, was essential as the generating function for descents in permutations of a xed conjugacy class, A C (t) is an actual identity forjtj< 1. In light of Curtiss' theorem and the symmetry of descents in matchings, we showed that the moment generating function of descents in matchings converged pointwise to that of a normal random variable for positive s, which yielded the desired result. The rst attempt to prove the central limit theorem was by the method of moments approach in Chapter 4. The advantage of the method of moments is that, if successful, we can obtain error terms for the convergence in distrbution, which we did not obtain in Chapter 5. While calculating the exact moments, as in Chapter 4, seems too complex for higher moments, we would like to isolate the main contributing terms in order to simplify the process. Another attempt that was not mentioned in this thesis was via Stein's method, which would give the rate of convergence. However, this method was hampered by the failure to nd a viable exchangeable pair, but we would like to try to prove the asymptotic normality by Stein's method. After the central limit theorem for descents in matchings was proved, Diaconis con- jectured that descents in xed point free conjugacy classes are also asymptotically normal. 
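As a quick empirical illustration (a rough Monte Carlo sketch of my own, not part of the thesis), one can sample uniformly from a single fixed point free class, say the class of cycle type (3, 3, ..., 3), by shuffling 1, ..., n and reading off consecutive triples as 3-cycles, and compare the sample mean and variance of d with (n+1)/2 and n/12.

```python
import random
from statistics import mean, pvariance

def random_all_three_cycles(n):
    # n divisible by 3; every permutation of cycle type (3, ..., 3) arises from the same
    # number of shuffled words, so this sampling is uniform over the conjugacy class
    word = list(range(1, n + 1))
    random.shuffle(word)
    sigma = [0] * (n + 1)                        # 1-indexed one-line notation
    for j in range(0, n, 3):
        a, b, c = word[j], word[j + 1], word[j + 2]
        sigma[a], sigma[b], sigma[c] = b, c, a   # the 3-cycle (a b c)
    return sigma

def descent_number(sigma, n):
    # d = |Des| + 1, as in Chapter 1
    return 1 + sum(1 for i in range(1, n) if sigma[i] > sigma[i + 1])

n, trials = 300, 10000
sample = [descent_number(random_all_three_cycles(n), n) for _ in range(trials)]
print(mean(sample), (n + 1) / 2)   # close up to an O(1) shift, which the sqrt(n) scaling absorbs
print(pvariance(sample), n / 12)   # sample variance vs. the asymptotic variance
```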
CHAPTER 7

CONCLUSION AND FUTURE PLANS

In this thesis, we proved a central limit theorem for descents in matchings. The palindromic property, proved in Chapter 3, was essential, as the generating function identity for descents in permutations of a fixed conjugacy class, involving $A_{\mathcal{C}}(t)$, holds only for $|t| < 1$. In light of Curtiss' theorem and the symmetry of descents in matchings, we showed that the moment generating function of descents in matchings converged pointwise to that of a normal random variable for positive $s$, which yielded the desired result.

The first attempt to prove the central limit theorem was by the method of moments approach in Chapter 4. The advantage of the method of moments is that, if successful, we can obtain error terms for the convergence in distribution, which we did not obtain in Chapter 5. While calculating the exact moments, as in Chapter 4, seems too complex for higher moments, we would like to isolate the main contributing terms in order to simplify the process.

Another attempt that was not mentioned in this thesis was via Stein's method, which would give the rate of convergence. This approach was hampered by the failure to find a viable exchangeable pair, but we would still like to try to prove the asymptotic normality by Stein's method.

After the central limit theorem for descents in matchings was proved, Diaconis conjectured that descents in fixed point free conjugacy classes are also asymptotically normal. However, none of the fixed point free conjugacy classes we examined appeared to have the palindromic property for descents. After constructing a version of the generating function identity for $A_{\mathcal{C}}(t)$ that converges for $|t| > 1$, we were able to show pointwise convergence of the MGF of descents in an arbitrary fixed point free conjugacy class to that of a normal distribution.

While this thesis was being written, one of the future plans was to prove similar results for all conjugacy classes of $S_n$, where we conjectured that the asymptotic mean and variance would depend on the number of fixed points. This turned out to be true, as Lee and I proved, in early March 2018, that if a conjugacy class has $\alpha n$ fixed points, then, as $n$ goes to infinity, the distribution of descents is asymptotically normal with mean $(1 - \alpha^2)\frac{n}{2}$ and variance $(1 - 4\alpha^3 + 3\alpha^4)\frac{n}{12}$. Note that this result is consistent with the main result of Chapter 6, which corresponds to the $\alpha = 0$ case.

These results were very encouraging, and, without doing the exact calculations, the previously proven central limit theorems for other permutation statistics, such as inversions, excedances, and major indices, seemed provable by taking the corresponding generating functions, constructing the moment generating functions, and showing pointwise convergence. To the author's knowledge, this is the first time such an approach was used to prove a central limit theorem, and we would also like to attempt to get analogous results for other finite Coxeter groups, such as the hyperoctahedral group of signed permutations.

Rhoades has also suggested, via personal communication, looking at the asymptotics of ordered multiset partitions; recent developments have increased interest in them, as ordered multiset partitions are closely associated with the shuffle conjecture (proven in 2015) and the Delta conjecture, which generalizes the shuffle conjecture. Very little is known about statistics of ordered multiset partitions, and Rhoades hopes that analyzing their asymptotics might help gain a better understanding of them.

There have been suggestions to use this approach to look at the asymptotics of various other objects, such as peri-Catalan numbers, and the author is truly excited to see how applicable the method used in this thesis will be.
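The asymptotic mean and variance stated above for a conjugacy class with $\alpha n$ fixed points can also be illustrated numerically. Restricting to involutions with a prescribed number of fixed points for simplicity, the following Python sketch (an illustrative aside, with ad hoc helper names random_involution and descents) compares the empirical mean and variance of the descent count with $(1 - \alpha^2)\frac{n}{2}$ and $(1 - 4\alpha^3 + 3\alpha^4)\frac{n}{12}$.

import random

def random_involution(n, f):
    """Uniform involution of {0, ..., n-1} with exactly f fixed points (n - f even):
    the first f entries of a uniform shuffle stay fixed, the rest are paired up."""
    pts = list(range(n))
    random.shuffle(pts)
    w = list(range(n))          # start from the identity; pts[:f] remain fixed points
    rest = pts[f:]
    for j in range(0, len(rest), 2):
        a, b = rest[j], rest[j + 1]
        w[a], w[b] = b, a
    return w

def descents(w):
    """Number of positions i with w(i) > w(i+1)."""
    return sum(1 for i in range(len(w) - 1) if w[i] > w[i + 1])

n, alpha, trials = 240, 0.5, 10000
f = int(alpha * n)
samples = [descents(random_involution(n, f)) for _ in range(trials)]
mean = sum(samples) / trials
var = sum((x - mean) ** 2 for x in samples) / trials
print(f"empirical mean     {mean:8.3f}  vs  (1 - a^2) n/2         = {(1 - alpha**2) * n / 2:.3f}")
print(f"empirical variance {var:8.3f}  vs  (1 - 4a^3 + 3a^4) n/12 = {(1 - 4 * alpha**3 + 3 * alpha**4) * n / 12:.3f}")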
REFERENCES

[1] D. Bayer and P. Diaconis, Trailing the dovetail shuffle to its lair, The Annals of Applied Probability, 2 (1992), no. 2, 294–313.
[2] S. Chatterjee and P. Diaconis, A central limit theorem for a new statistic on permutations, to appear in a special issue of Indian J. Pure Appl. Math.
[3] W. Chen, E. Deng, R. Du, R. Stanley, and C. Yan, Crossings and nestings of matchings and partitions, Transactions of the American Mathematical Society, 359 (2007), no. 4, 1555–1575.
[4] J. H. Curtiss, A note on the theory of moment generating functions, The Annals of Mathematical Statistics, 13 (1942), no. 4, 430–433.
[5] P. Diaconis, M. McGrath, and J. Pitman, Riffle shuffles, cycles, and descents, Combinatorica, 15 (1995), no. 1, 11–29.
[6] P. Diaconis and J. Pitman, Unpublished notes on descents.
[7] L. Euler, Institutiones calculi differentialis cum eius usu in analysi finitorum ac doctrina serierum (1787).
[8] J. Fulman, The distribution of descents in fixed conjugacy classes of the symmetric group, Journal of Combinatorial Theory Series A, 84 (1998), no. 2, 171–180.
[9] J. Fulman, Stein's method and non-reversible Markov chains, Stein's Method: Expository Lectures and Applications, IMS Lecture Notes Monogr. Ser., 46 (2004), 69–77.
[10] W. Fulton, Young tableaux, Cambridge University Press, Cambridge, U.K. (1997).
[11] A. M. Garsia and I. Gessel, Permutation statistics and partitions, Adv. Math., 31 (1979), no. 3, 288–305.
[12] I. Gessel and C. Reutenauer, Counting permutations with given cycle structure and descent set, Journal of Combinatorial Theory Series A, 64 (1993), no. 2, 189–215.
[13] L. Goldstein and Y. Rinott, A permutation test for matchings and its asymptotic distribution, Metron: International Journal of Statistics, 61 (2004), no. 3, 375–388.
[14] L. H. Harper, Stirling behavior is asymptotically normal, Annals of Mathematical Statistics, 38 (1966), 410–414.
[15] D. Knuth, The art of computer programming, Vol. 3: Sorting and searching, Addison-Wesley, Reading, MA (1973).
[16] P. MacMahon, Combinatory analysis, Cambridge University Press, Cambridge, U.K. (1915).
[17] T. K. Petersen, Eulerian numbers, Birkhäuser-Springer, New York, NY (2015).
[18] C. Reutenauer, Free Lie algebras, Oxford University Press, London, U.K. (1993).
[19] J. Riordan, An introduction to combinatorial analysis, J. Wiley, New York, NY (1958).
[20] C. Schensted, Longest increasing and decreasing subsequences, Canadian Journal of Mathematics, 13 (1961), 179–191.
[21] M. Schützenberger, La correspondance de Robinson, Lecture Notes in Math., 579 (1979), 59–113.
[22] S. Sundaram, The Cauchy identity for Sp(2n), Journal of Combinatorial Theory Series A, 53 (1990), no. 2, 209–238.
[23] V. A. Vatutin, The numbers of ascending segments in a random permutation and in the inverse to it are asymptotically independent, Discrete Math. Appl., 6 (1996), no. 1, 41–52.