Bulk and edge asymptotics in the GUE

by Aniruddh Kannan

A Thesis Presented to the FACULTY OF THE USC DEPARTMENT OF MATHEMATICS, UNIVERSITY OF SOUTHERN CALIFORNIA, In Partial Fulfillment of the Requirements for the Degree MASTER OF ARTS (APPLIED MATHEMATICS), May 2024. Copyright 2024 Aniruddh Kannan.

Table of Contents

Abstract
Preface
Section 1. Introduction
Section 2. Basic definitions
Section 3. Convergence of determinantal point processes
Section 4. The GUE as a determinantal point process
Section 5. Bulk asymptotics
Section 6. Edge asymptotics
References
Appendix A. Semi-rings
Appendix B. Moment properties
Abstract

We consider the asymptotics of the random measure formed by the eigenvalues of a matrix from the Gaussian Unitary Ensemble (GUE). We show that this measure is a determinantal point process and derive a double-contour integral formula for its correlation kernel. We show that the bulk of the spectrum converges weakly to the sine process, and that the edge of the spectrum converges weakly to the Airy point process. Our approach to proving weak convergence goes through establishing uniform convergence of the correlation kernels over compact sets, and is based on ideas developed by Borodin.

Preface

Random Matrix Theory (RMT) is a branch of mathematics that explores the properties of large matrices with random entries. Initially developed to understand complex systems in nuclear physics, RMT has since been studied extensively and has connections to many areas of mathematics. At its core, RMT investigates the behavior of the eigenvalues of random matrices and seeks to describe the patterns that emerge. One of the key achievements of RMT is the discovery of universality classes, which categorize random matrices into ensembles based on some relatively fundamental properties (for example, the types of symmetries observed). These classes enable one to make predictions about the distribution and behavior of the eigenvalues without knowing the precise details of the system under study. For further reference on the study of random matrices, we refer to (3), (13), (42), (32), (21), and mention that earlier texts tend to focus more on applications of random matrices in physics (see (33), (25)).

Gaussian ensembles represent a fundamental class of random matrix ensembles within RMT, characterized by matrices whose elements are independently and identically distributed according to a Gaussian distribution.
These ensembles are of interest due to the ubiquity of Gaussian distributions in applications, and are fundamental to the study of random matrices. The Gaussian Orthogonal Ensemble (GOE), Gaussian Unitary Ensemble (GUE), and Gaussian Symplectic Ensemble (GSE) are the three main categories of Gaussian ensembles, each exhibiting distinct symmetry properties that govern their spectral behavior. The GOE consists of real symmetric matrices whose elements are independently and identically distributed according to a Gaussian distribution. This ensemble arises in physical systems that exhibit time-reversal symmetry. The GUE comprises Hermitian matrices with complex elements drawn from a Gaussian distribution. Matrices in the GUE exhibit unitary symmetry and are often used to model complex quantum systems without additional symmetries. The GSE consists of self-dual Hermitian matrices with quaternionic elements drawn from a Gaussian distribution. Matrices in the GSE exhibit symplectic symmetry and are relevant for systems with time-reversal symmetry and half-integer spin but no rotational symmetry.

Of particular interest to us is the GUE, in which the unitary symmetry property leads to some well-studied spectral properties, the most notable being that the eigenvalues of GUE matrices are distributed according to the Wigner semicircle law in the large matrix limit. The GUE arises naturally in physics, where it describes the statistical behavior of quantum systems that do not exhibit time-reversal symmetry (for example, systems in a magnetic field).

The study of random matrices is often traced back to the work of Eugene Wigner, in which many foundational results were established. The Wigner surmise and semicircle law are among the most fundamental concepts in RMT, providing insight into the statistical distribution of eigenvalues for certain types of random matrices.
The semicircle law states that in the limit of large matrix size, the distribution of eigenvalues of a Hermitian matrix drawn from a Gaussian ensemble approaches a semicircle shape. This means that the density of eigenvalues near the center of the spectrum is higher than near the edges, forming a semicircular arc. The semicircle law captures the universal behavior of eigenvalue distributions in systems exhibiting no additional symmetries. We refer here to Wigner's original works (49) and (50).

The Wigner surmise is a probability distribution function (explicitly given in (15, (1.1))) that describes the spacing between adjacent eigenvalues of random matrices in the bulk of their spectrum. It complements the semicircle law: while the semicircle law describes the global density of the eigenvalues, the surmise captures the local statistics of the gaps between neighboring eigenvalues. The surmise involves a parameter, typically denoted by β, which depends on the symmetry class of the random matrix ensemble. For example, β = 1 corresponds to the Gaussian Orthogonal Ensemble (GOE), β = 2 to the Gaussian Unitary Ensemble (GUE), and β = 4 to the Gaussian Symplectic Ensemble (GSE). This β is also referred to as the Dyson index, and is equal to the number of real variables needed to specify an entry in the random matrix (this classification is also referred to as Dyson's threefold way, see (51)). The Wigner surmise thus provides a finer description of the eigenvalue statistics than the semicircle law alone.

The Tracy-Widom distribution plays a pivotal role in RMT, particularly in the context of the GUE. Originally introduced in (44), Tracy and Widom's work demonstrated that in the limit of large matrix size, the distribution of the largest eigenvalue of GUE matrices converges to the Tracy-Widom distribution.
We mention that (44) contains many interesting results about the eigenvalues of GUE matrices, including the original proofs of bulk convergence to the sine kernel and edge convergence to the Airy kernel. We will elaborate on this in Section 1.2.

Section 1. Introduction

1.1. The Gaussian unitary ensemble. We start by formally introducing the Gaussian unitary ensemble. There are a few equivalent ways one can define the model. We fix N ∈ N and let {ξ_{i,j}, η_{i,j}}_{i,j=1}^N be a collection of 2N² i.i.d. standard normal variables (i.e. of mean 0 and variance 1). We define an N × N random Hermitian matrix X = [X_{i,j}]_{i,j=1}^N via

(1.1) X_{i,j} = ξ_{i,i} if i = j; X_{i,j} = (ξ_{i,j} + √−1 η_{i,j})/√2 if i < j; X_{i,j} = (ξ_{j,i} − √−1 η_{j,i})/√2 if i > j.

Alternatively, we can define Y = [Y_{i,j}]_{i,j=1}^N via

(1.2) Y_{i,j} = ξ_{i,j} + √−1 η_{i,j} for i, j = 1, …, N,

and then set X̃ = (1/2)(Y + Y*), where A* denotes the conjugate transpose of a matrix. The matrices X and X̃ are not the same, but one can readily verify that they have the same distribution. We say that X (or equivalently X̃, as it has the same law) belongs to the Gaussian unitary ensemble (GUE).

By the spectral theorem for Hermitian matrices, the random matrix X has N real eigenvalues (counted with multiplicities). If we denote these eigenvalues by λ_1 ≤ λ_2 ≤ ⋯ ≤ λ_N, then we have the following statement from (3).

Theorem 1.1. (3, Theorem 2.5.2) Let X be an N × N GUE matrix. Then, the random N-tuple of ordered eigenvalues (λ_1, …, λ_N) of X has a density with respect to Lebesgue measure on R^N, which equals

(1.3) f_N(x_1, …, x_N) = N! · Z_N · 1{x_1 ≤ x_2 ≤ ⋯ ≤ x_N} · Δ(x)² · ∏_{i=1}^N e^{−x_i²/2},

where Z_N = (2π)^{−N/2} · ∏_{k=1}^N 1/k! and

(1.4) Δ(x) = ∏_{1≤i<j≤N} (x_j − x_i)

is the Vandermonde polynomial.

We mention here that obtaining the above form for the normalization constant Z_N is not a straightforward task, and may involve computing Selberg integrals, for which we refer to (20).
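As a quick sanity check on the construction in (1.2), one can sample a matrix X̃ = (1/2)(Y + Y*) and verify that it is Hermitian with N real eigenvalues. The following is a numerical sketch using NumPy, not part of the thesis:

```python
# Numerical sketch (not from the thesis): sample a GUE matrix via the
# construction in (1.2), i.e. X = (Y + Y*)/2 where Y has i.i.d. standard
# normal real and imaginary parts, and check the claims of Section 1.1.
import numpy as np

def sample_gue(n, rng):
    """Return an n x n GUE matrix built as in (1.2)."""
    y = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (y + y.conj().T) / 2

rng = np.random.default_rng(0)
x = sample_gue(50, rng)

# X is Hermitian, so by the spectral theorem its eigenvalues are real.
assert np.allclose(x, x.conj().T)
eigenvalues = np.linalg.eigvalsh(x)  # the 50 real eigenvalues, sorted ascending
assert eigenvalues.dtype.kind == "f" and eigenvalues.shape == (50,)
```

The construction in (1.1) produces a different matrix but, as noted above, one with the same law, so either recipe can be used for simulation.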
The computations of these normalization constants for the Gaussian ensembles were first published in (34).

1.2. Main results. In this section we present the main results in the thesis. For a fuller understanding of the results, we refer the reader to Section 2, which contains many key definitions.

1.2.1. Bulk asymptotics. Let X be an N × N GUE matrix and let (λ_1, …, λ_N) be its ordered eigenvalues counted with multiplicities. For α ∈ (−√2, √2) we let

(1.5) β_i = √N · (λ_i − α√N),

and let M^α_N be the random measure on R such that

(1.6) M^α_N(A) = ∑_{i=1}^N 1{β_i ∈ A}, for all Borel sets A ⊂ R.

In other words, M^α_N is the point process formed by (β_1, …, β_N), see Definition 2.10 in the main text. Our first main result, see Theorem 1.3 below, shows that the random measures M^α_N converge to the sine process, which we define next.

Definition 1.2. For ϕ > 0 we define the sine kernel by

(1.7) K_{sine,ϕ}(x, y) = sin(ϕ(x − y))/(π(x − y)) if x ≠ y, and K_{sine,ϕ}(x, y) = ϕ/π if x = y.

By a straightforward adaptation of the argument in (3, page 232) one can show that there is a determinantal point process N_{sine,ϕ} with correlation kernel K_{sine,ϕ} as in (1.7). It is called the sine process with parameter ϕ. Our main result about bulk asymptotics is as follows.

Theorem 1.3. Fix α ∈ (−√2, √2). Then, as N → ∞ the measures M^α_N in (1.6) converge weakly (as random elements in the space of locally bounded random measures on R) to the sine process with parameter ϕ = √(2 − α²) from Definition 1.2.

Bulk asymptotic convergence of the eigenvalue spectrum in the GUE to the sine process has been extensively studied, with the original results appearing in (44). Subsequent results of this type for random matrices appear in (16), (17), and (43). Furthermore, some properties of the determinantal point process with the sine correlation kernel have been established in (36), and it has been studied as a scaling limit in (18, Section 3) and (9).

1.2.2. Edge asymptotics. Let X be an N × N GUE matrix and let (λ_1, …, λ_N) be its ordered eigenvalues counted with multiplicities. We let

(1.8) α_i = 2^{1/2} N^{1/6} · (λ_i − √(2N)),

and let M^{edge}_N be the random measure on R such that

(1.9) M^{edge}_N(A) = ∑_{i=1}^N 1{α_i ∈ A}, for all Borel sets A ⊂ R.

In other words, M^{edge}_N is the point process formed by (α_1, …, α_N), see Definition 2.10 in the main text. Our second main result, see Theorem 1.6 below, shows that the random measures M^{edge}_N converge to the Airy point process, which we define next. We start by introducing a bit of notation. The Airy function is the entire function given by

(1.10) Ai(x) = (1/2πi) ∫_γ e^{z³/3 − zx} dz,

where γ is the contour consisting of the two rays {y e^{iπ/3} : y ∈ [0, ∞)} and {y e^{−iπ/3} : y ∈ [0, ∞)}, oriented in the direction of increasing imaginary part. The Airy function Ai(x) is a special function with many remarkable properties. For example, from (3, (3.1.4)) we have that Ai(x) is a solution to the Airy differential equation

(1.11) d²y/dx² − xy = 0.

Definition 1.4. We define the Airy kernel by

(1.12) K_{Airy}(x, y) = (Ai(x) Ai′(y) − Ai′(x) Ai(y))/(x − y) if x ≠ y, and K_{Airy}(x, y) = Ai′(x)² − Ai″(x) Ai(x) if x = y.

As was shown in (3, Section 4.2) there is a determinantal point process N_{Airy} with correlation kernel K_{Airy} as in (1.12). It is called the Airy point process.

Remark 1.5. We mention that in (3) the authors call N_{Airy} the "Airy process" rather than the "Airy point process". The term "Airy process" is commonly used to refer to a certain stationary continuous process on R, see e.g. (45) and (37), and so to avoid confusion, we refer to the point process N_{Airy} in Definition 1.4 as the "Airy point process". For more about the Airy point process, see (1).

Our main result about edge asymptotics is as follows.

Theorem 1.6. As N → ∞ the measures M^{edge}_N in (1.9) converge weakly (as random elements in the space of locally bounded random measures on R) to the Airy point process from Definition 1.4.
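The diagonal values in Definitions 1.2 and 1.4 are exactly the y → x limits of the corresponding off-diagonal formulas. For the sine kernel this is elementary to check numerically; the following sketch (not part of the thesis) does so:

```python
# Numerical sketch (not from the thesis): the diagonal value phi/pi in
# Definition 1.2 is the y -> x limit of sin(phi(x - y)) / (pi(x - y)).
import math

def k_sine(x, y, phi):
    """Sine kernel of Definition 1.2, with the diagonal handled separately."""
    if x == y:
        return phi / math.pi
    return math.sin(phi * (x - y)) / (math.pi * (x - y))

phi = math.sqrt(2.0)  # the bulk parameter of Theorem 1.3 at alpha = 0
diag = k_sine(1.0, 1.0, phi)
assert abs(diag - phi / math.pi) < 1e-15
# The off-diagonal formula approaches the diagonal value as y -> x...
assert abs(k_sine(1.0, 1.0 + 1e-6, phi) - diag) < 1e-9
# ...and the kernel vanishes at separations that are multiples of pi/phi.
assert abs(k_sine(0.0, math.pi / phi, phi)) < 1e-12
```

An analogous check for the Airy kernel requires a numerical Airy function (e.g. from a scientific library) and the identity Ai″(x) = x Ai(x) from (1.11) to evaluate the diagonal.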
Our result here differs from Tracy and Widom's original result in (44) in that we are proving a result about weak convergence of random measures. However, we mention that this type of convergence has been extensively studied in many contexts:

• Consider a random permutation of N numbers. In (4), it was shown that under appropriate scaling, the distribution of the length of the longest increasing subsequence converges to the distribution of the largest eigenvalue of a GUE matrix established in (44).

• The Asymmetric Simple Exclusion Process (ASEP) and the Totally Asymmetric Simple Exclusion Process (TASEP) are models used to study the dynamics of interacting particles on one-dimensional lattices. It has been shown that the probability distribution of the position of an individual particle in these settings converges asymptotically to the largest eigenvalue distribution of a GUE matrix. The first result of this type for TASEP was shown in (26) and extended to ASEP in (46), (47), (48).

• The KPZ universality class, based on the nonlinear stochastic differential equation introduced in (30), has emerged to describe a variety of random models such as surface growth processes, interacting particle systems, and polymers in random environments. It is widely held that the type of convergence we show in Theorem 1.6 falls in this universality class. For example, it was shown in (8) that the exponential moments of the KPZ equation align with those of the Airy point process. Furthermore, it was proved in (2) that the long-time asymptotics of the distribution of the KPZ equation converges to the distribution of the largest eigenvalue of a GUE matrix given in (44). For more about the KPZ universality class, we refer to (12), (7), and (39).

• Random tilings are arrangements of geometric shapes (tiles) on a plane or other surfaces, where the placement of tiles is determined by a random process.
The tiles can be simple shapes like squares or triangles, or more complex shapes, and they are often subject to certain rules or constraints. Random tilings are of interest due to their connections to diverse areas such as statistical mechanics and combinatorics, and have applications in modeling physical systems. The Airy point process has emerged as an edge scaling limit for some such models, as shown in (18, Section 5). For more about tilings and related models, we refer to (23) and (31).

1.3. Ideas behind the proof and outline. Here we provide a brief overview of the thesis. We begin by providing the basic definitions of weak convergence, random measures, correlation functions, and determinantal point processes in Section 2, following the presentations in (5), (28), and (29). The main tool we use to prove Theorems 1.3 and 1.6 is introduced in Section 3. In short, we claim in Proposition 3.1 that we can establish weak convergence of a sequence of determinantal point processes to a unique limit (which is also a determinantal point process) in the space of locally bounded measures on R by showing that the correlation kernels converge uniformly over compact sets. We emphasize that this technique of establishing weak convergence is different from prior results such as (40, Theorem 5), which requires proving convergence of the correlation kernels with respect to the trace class norm.

In Section 4, we show that the eigenvalues of GUE matrices, with density as in Theorem 1.1, can be used to form a determinantal point process, and we obtain an integral representation for the correlation kernel following the approach in (6). We begin this section by proving some basic properties of the well-known Hermite polynomials, and then we use these results to show that the random measure formed by the GUE eigenvalues is a determinantal point process (in particular, it is a biorthogonal ensemble, which is defined in Section 2.4).
The integral representation we obtain for the correlation kernel is different from previous results in a similar context, such as those in (27, (2.27)) and (35, (8)). Sections 5 and 6 are dedicated to proving Theorems 1.3 and 1.6, respectively. In both sections, we begin by rescaling the correlation kernel obtained in Section 4 to address either the bulk or edge case, and show convergence of the modified kernels to the sine or Airy kernels, respectively (in Section 6, this is performed using the steepest descent method). The final portions of these sections are dedicated to applying Proposition 3.1 to arrive at our results about weak convergence in the space of locally bounded measures.

Section 2. Basic definitions

In this section we recall the definition and basic results about determinantal point processes, which will be used throughout the text.

2.1. Weak convergence. In this section we introduce the relevant notions about weak convergence in general metric spaces. Our exposition and notation follow (5, Chapter 1), and we refer the reader to the latter for more details. Throughout this section we assume that (S, d) is a metric space and let S denote its Borel σ-algebra, i.e. the smallest σ-algebra that contains all the open sets.

Definition 2.1. Let {P_n}_{n≥1} be a sequence of probability measures on (S, S). We say that {P_n}_{n≥1} converges weakly to a probability measure P if for any bounded continuous f : S → R we have P_n f → P f. We say that {P_n}_{n≥1} is tight if for every ε > 0, there exists a compact set K ⊆ S such that P_n(K) > 1 − ε for all n ≥ 1. We say that {P_n}_{n≥1} is relatively compact if every subsequence {P_{n_k}}_{k≥1} has a further subsequence {P_{n_{k_l}}}_{l≥1} that converges weakly to some probability measure P on (S, S).

Let {X_n}_{n≥1} be a sequence of random elements in S defined on probability spaces (Ω_n, F_n, P_n), i.e. X_n : Ω_n → S are F_n/S-measurable maps.
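As a toy illustration of Definition 2.1 (a numerical sketch, not from the thesis): the uniform measures P_n on [0, 1/n] converge weakly to the point mass at 0, since P_n f = n ∫_0^{1/n} f(x) dx → f(0) for every bounded continuous f.

```python
# Toy illustration of Definition 2.1 (not from the thesis): P_n uniform on
# [0, 1/n] satisfies P_n f -> f(0) for bounded continuous f, i.e. P_n
# converges weakly to the Dirac mass at 0.
import math

def pn_f(f, n, terms=10_000):
    """Midpoint-rule approximation of P_n f = n * integral of f over [0, 1/n]."""
    h = 1.0 / (n * terms)
    return n * h * sum(f((j + 0.5) * h) for j in range(terms))

f = math.cos  # bounded and continuous, with f(0) = 1
errors = [abs(pn_f(f, n) - f(0.0)) for n in (1, 10, 100, 1000)]
assert all(e2 < e1 for e1, e2 in zip(errors, errors[1:]))  # errors decrease in n
assert errors[-1] < 1e-3                                   # P_1000 f is close to f(0)
```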
We say that {X_n}_{n≥1} converge weakly to a random element X, defined on (Ω, F, P), if the corresponding push-forward measures P_n := P_n X_n^{−1} converge to P = P X^{−1} in the sense of Definition 2.1. One analogously extends the notions of tightness and relative compactness to a sequence of random elements.

We end this section with a couple of results that we require later in the text. These are Prokhorov's theorem and Skorohod's representation theorem from (5).

Theorem 2.2. (5, Theorems 5.1 and 5.2) Let S be a Polish space, i.e. a separable, completely metrizable topological space. Then, a sequence of probability measures {P_n}_{n≥1} is tight if and only if it is relatively compact.

Theorem 2.3. (5, Theorem 6.7) If P_n converge weakly to P and P has separable support, then there exist random elements X_n and X, defined on a common probability space (Ω, F, P), such that P_n is the law of X_n, P is the law of X, and X_n(ω) → X(ω) for all ω ∈ Ω.

2.2. Random measures and point processes. In this section we introduce point processes and establish a few results about them. We let k ∈ N and denote E = R^k with the usual Euclidean distance. We also let E denote the Borel σ-algebra on E, and Ê the collection of bounded Borel sets. Usually, we will take E = R, in which case we write R and R̂ for the Borel σ-algebra and the collection of bounded Borel sets, respectively.

Definition 2.4. A locally bounded measure µ on (E, E) is a nonnegative measure such that µ([−n, n]^k) < ∞ for all n ∈ N. The set of locally bounded measures on (E, E) is denoted by M_E. The vague topology on M_E is the coarsest topology such that for every continuous function f with compact support, the map π_f : M_E → R defined as π_f(µ) = µf = ∫_E f(x) µ(dx) is continuous.

Lemma 2.5. (29, Lemma 4.2) M_E with the vague topology is a Polish space.

Definition 2.6. Given a probability space (Ω, F, P), we say M is a random measure if M : Ω → M_E is F/S measurable, where S is the Borel σ-algebra corresponding to the vague topology.
A random measure M is called a point process if M(ω)A ∈ Z≥0 for all A ∈ Ê a.s. If M is a random measure, we denote its mean measure by E[M], which is the measure on E defined by E[M](A) = E[M(A)] for all A ∈ E.

Remark 2.7. We mention that even though M(ω) is locally bounded by definition for each ω ∈ Ω, it is possible that the mean measure E[M] assigns infinite mass to bounded sets.

We next explain what it means for a sequence of random elements to form a point process, and show that any point process on R is formed by some sequence of random elements that take values in R ∪ {∂}. In general, we write Ē := E ∪ {∂} for the space obtained from E upon adding an extra isolated point ∂. We explain below in Remark 2.9 why this extra point is useful.

Definition 2.8. A sequence of random elements {X_n}_{n≥1} in Ē = E ∪ {∂} is said to form a random measure N ∈ M_E if for any Borel set A ∈ E,

(2.1) N(ω)A = ∑_{n=1}^∞ 1{X_n(ω) ∈ A}.

Remark 2.9. We note that since A ⊆ E, only the terms with X_n(ω) ≠ ∂ contribute to the sum in (2.1). In particular, one could restrict the sum in (2.1) to only those X_n that are not equal to ∂. Having the extra point ∂ allows us to keep the sum fixed (always from one to infinity), while having the flexibility of making the right side of (2.1) finite even when A = E, by making all but finitely many elements equal to ∂.

Lemma 2.10. Let N be a point process on R, defined on a probability space (Ω, F, P). There exist random elements {X_n}_{n≥1} in Ē = R ∪ {∂}, all defined on (Ω, F, P), which form N.

Proof. Let R_+ = [0, ∞) and R̄_+ = [0, ∞]. Define

ϕ(x) = 0 if x = 0; ϕ(x) = x − a/2 if x ∈ (a, a + 1] with a ∈ Z≥0 even; ϕ(x) = −x + (a + 1)/2 if x ∈ (a, a + 1] with a ∈ Z≥0 odd; ϕ(x) = ∂ if x = ∞.

One readily verifies that ϕ : R̄_+ → R̄ is a measurable bijection, and also that its restriction to R_+ defines a measurable bijection between R_+ and R with a measurable inverse. For a Borel set A ⊆ R_+ and ω ∈ Ω we define

(2.2) M(ω)A = N(ω)ϕ(A),

which is nothing but the pushforward measure of N under the map ϕ.
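The folding map ϕ just defined can be sketched numerically on (0, ∞) (an illustration, not part of the proof): it folds consecutive unit intervals alternately onto positive and negative intervals, so distinct points receive distinct images.

```python
# Numerical sketch (not from the thesis) of the folding map phi from the
# proof of Lemma 2.10, restricted to (0, infinity): the interval (a, a+1]
# is sent to (a/2, a/2 + 1] for even a, and to [-(a+1)/2, -(a-1)/2) for odd a.
def phi(x):
    assert x > 0
    a = int(x) - 1 if x == int(x) else int(x)  # the integer with x in (a, a+1]
    if a % 2 == 0:
        return x - a / 2
    return -x + (a + 1) / 2

# (0,1] -> (0,1], (1,2] -> [-1,0), (2,3] -> (1,2], (3,4] -> [-2,-1), ...
assert phi(0.5) == 0.5 and phi(1.5) == -0.5 and phi(2.5) == 1.5 and phi(3.5) == -1.5
samples = [k / 7 for k in range(1, 70)]   # a grid inside (0, 10)
images = [phi(x) for x in samples]
assert len(set(images)) == len(images)    # phi is injective on the grid
```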
Since N is a point process on R, we conclude that M is an integer-valued random measure on R_+ in the sense of (10, Section 1, Chapter VI). From (10, Proposition VI.1.11) and its proof we have that T_n(ω) := inf{t ≥ 0 : M(ω)[0, t] ≥ n} are increasing (extended) random variables on R̄_+ that form M, in the sense that for any Borel A ⊆ R_+ we have

(2.3) M(ω)A = ∑_{n≥1} 1{T_n(ω) ∈ A}.

Setting X_n(ω) := ϕ(T_n(ω)), we observe that the X_n are random elements in R̄ (note that the X_n are measurable as they are compositions of measurable maps). By combining (2.2) and (2.3) with the fact that ϕ is a measurable bijection with a measurable inverse, we conclude that {X_n}_{n≥1} form N in the sense of Definition 2.8. □

In the remainder of this section, we introduce a few important structures used in several proofs throughout the text.

Definition 2.11. A collection of sets C is called a semi-ring if
(i) for any A, B ∈ C we have A ∩ B ∈ C,
(ii) for any A, B ∈ C such that A ⊆ B we have that B \ A can be expressed as a finite disjoint union of sets in C.

A family I ⊆ Ê is called dissecting if any open set G ⊆ E can be written as an at most countable union of sets in I, and any bounded set is contained in some finite union of sets in I. A family G is called generating if σ(G) = E. We mention that any dissecting semi-ring is easily seen to be generating. Below we list a few results about dissecting semi-rings, which will be proved in Appendix A.

The following result is proved in Appendix A as Lemma A.1.

Lemma 2.12. Suppose that I ⊆ E is a semi-ring. If I_1, …, I_k ∈ I, then there exist finitely many J_1, …, J_m ∈ I that are pairwise disjoint and such that

I_i = ⋃_{j : J_j ⊆ I_i} J_j for i = 1, …, k, and ⋃_{i=1}^k I_i = ⋃_{j=1}^m J_j.

Of particular interest to us is the family of half-open rational intervals I = {(a, b] : a ≤ b; a, b ∈ Q}, for which we have the following result, whose proof is given in Appendix A as Lemma A.2.

Lemma 2.13. Let I = {(a, b] : a ≤ b; a, b ∈ Q}.
Then I is a countable dissecting semi-ring on R.

The following result helps to construct dissecting semi-rings on R^k from ones on R. Its proof can also be found in Appendix A as Lemma A.3.

Lemma 2.14. If I is a dissecting semi-ring in (R, R), then for all k ∈ N, I_k = {A_1 × ⋯ × A_k : A_i ∈ I} is a dissecting semi-ring in (R^k, R_k). Moreover, any set A ∈ I_k can be written as a finite disjoint union of sets of the form B_1 × ⋯ × B_k ∈ I_k such that B_i = B_j or B_i ∩ B_j = ∅ for each 1 ≤ i ≤ j ≤ k.

2.3. Correlation functions. The goal of this section is to introduce correlation functions for point processes. Our exposition follows (28, Section 2). By Lemma 2.10, for any point process N on R there exist random variables {X_i}_{i≥1} on the extended space R̄ = R ∪ {∂} that form N in the sense of (2.1). Using the variables {X_i}_{i≥1} we define new point processes on (R^n, R_n) as follows:

(2.4) N_n(A) = ∑_{i_1, …, i_n ≥ 1 : |{i_1, …, i_n}| = n} 1{(X_{i_1}, …, X_{i_n}) ∈ A},

where the sum is over n-tuples of pairwise distinct indices. It is worth pointing out that the right side of (2.4) only depends on N and not on the variables {X_i}_{i≥1}, in the sense that if {X′_i}_{i≥1} is another set of variables that form N, then the measures N′_n, defined as in (2.4) with X_i replaced with X′_i, agree with N_n. For future use, we let M_n denote the mean measure of N_n as in (2.4).

The following definition is made after (28, Definition 2.1), where it is formulated for a general measure λ, called a reference measure. We only deal with the case when λ is the usual Lebesgue measure on (R, R), and all our definitions and results below make use of this choice of λ.

Definition 2.15. Let N be a point process on R and let N_n be as in (2.4) with mean measures M_n. The n-th correlation function for the point process N is said to exist if the mean measure M_n is locally bounded and absolutely continuous with respect to λ^n (the Lebesgue measure on R^n).
The latter and the Radon-Nikodym theorem (14, Theorem A.4.8) imply the existence of a non-negative, measurable ρ_n : R^n → [0, ∞) such that for any bounded Borel set A ⊆ R^n

(2.5) M_n(A) = ∫_A ρ_n(x_1, …, x_n) λ^n(dx) < ∞.

Any function ρ_n that satisfies (2.5) is called the n-th correlation function of N.

Remark 2.16. We mention that (14, Theorem A.4.8) implies that if another function ρ̃_n also satisfies (2.5), then ρ_n = ρ̃_n almost everywhere on R^n. In this sense, if the n-th correlation function of N exists, it is essentially unique.

The following lemma relates the n-th correlation function to the joint n-th factorial moments of the random measure N. It follows from (28, (2.15)) and (2.5).

Lemma 2.17. Suppose that N is a point process. For any bounded, disjoint Borel sets A_1, …, A_m ∈ R̂ and n_1, …, n_m ∈ N with n_1 + ⋯ + n_m = n we have

M_n(A_1^{n_1} × ⋯ × A_m^{n_m}) = E[ ∏_{i=1}^m N(A_i)! / (N(A_i) − n_i)! ].

If the n-th correlation function ρ_n exists as in Definition 2.15, then we also have

M_n(A_1^{n_1} × ⋯ × A_m^{n_m}) = E[ ∏_{i=1}^m N(A_i)! / (N(A_i) − n_i)! ] = ∫_{A_1^{n_1} × ⋯ × A_m^{n_m}} ρ_n(x_1, …, x_n) λ^n(dx),

where the conventions k! = k · (k − 1) ⋯ 1 for k ≥ 0, 0! = 1, and 1/k! = 0 for k < 0 are used.

2.4. Determinantal point processes. In this section we introduce determinantal point processes and provide a large class of such models. Our exposition follows (28, Section 2).

Definition 2.18. A point process N is called a determinantal point process if the n-th correlation function ρ_n exists as in Definition 2.15 for each n ≥ 1 and there exists a locally bounded measurable function K : R² → C such that

(2.6) ρ_n(x_1, …, x_n) = det[K(x_i, x_j)]_{i,j=1}^n.

A function K that satisfies (2.6) is called a correlation kernel for the point process N. We mention that while the n-th correlation function ρ_n is essentially unique (see Remark 2.16), the same is not true for K.
For example, if K(x, y) satisfies (2.6), then so does K̃(x, y) = e^{x−y} K(x, y), by the multilinearity of the determinant. In fact, one has the following general result.

Lemma 2.19. If N is a determinantal point process with correlation kernel K(x, y), then it is also a determinantal point process with correlation kernel K̃(x, y), given by K̃(x, y) = K(x, y) · f(x)/f(y), where f(x) is any continuous non-vanishing function on R.

Proof. If K(x, y) satisfies (2.6), then so does K̃(x, y), since by multilinearity of the determinant we have

det[K̃(x_i, x_j)]_{i,j=1}^n = det[K(x_i, x_j)]_{i,j=1}^n · ∏_{i=1}^n f(x_i) · ∏_{j=1}^n 1/f(x_j) = det[K(x_i, x_j)]_{i,j=1}^n. □

Later in the text we require certain estimates on the correlation functions, for which it is useful to bound determinants of matrices using what is known as Hadamard's inequality. We state this result next and note that its proof can be found in (38, Corollary 33.2.1.2).

Lemma 2.20. Let A be an n × n matrix with entries a_ij ∈ C for 1 ≤ i, j ≤ n, and let v_1, …, v_n denote the column vectors of A. Then,

|det A| ≤ ∏_{i=1}^n ∥v_i∥,

where ∥x∥ = (|x_1|² + ⋯ + |x_n|²)^{1/2} for x = (x_1, …, x_n). In particular, if |a_ij| ≤ C, then |det A| ≤ n^{n/2} · C^n.

One class of determinantal point processes is given by biorthogonal ensembles, which we introduce next. We start with a useful definition.

Definition 2.21. Two finite sequences of (complex-valued) functions {ϕ_i}_{i=1}^n and {ψ_i}_{i=1}^n in L²(R) are biorthogonal if for all 1 ≤ i ≠ j ≤ n,

(2.7) ∫_R ϕ_i(x) ψ_j(x) λ(dx) = 0.

We can now introduce biorthogonal ensembles.

Definition 2.22. Let {ϕ_i}_{i=1}^n and {ψ_i}_{i=1}^n be two finite sequences of (complex-valued) functions in L²(R) and suppose that (X_1, …, X_n) is a random vector in R^n with probability density function

(2.8) u_n(x_1, …, x_n) = (1/(n! Z_n)) · det[ϕ_i(x_j)]_{i,j=1}^n · det[ψ_i(x_j)]_{i,j=1}^n,

where Z_n ∈ (0, ∞) is a normalization constant, given by

(2.9) Z_n = (1/n!) ∫_{R^n} det[ϕ_i(x_j)]_{i,j=1}^n · det[ψ_i(x_j)]_{i,j=1}^n λ^n(dx).

If {ϕ_i}_{i=1}^n and {ψ_i}_{i=1}^n are biorthogonal as in Definition 2.21, then the density u_n of the form (2.8) is called a biorthogonal ensemble.

Suppose that {ϕ_i}_{i=1}^n and {ψ_i}_{i=1}^n are two finite sequences of (complex-valued) functions in L²(R). We introduce the matrix A = [a_ij]_{i,j=1}^n with entries

(2.10) a_ij = ∫_R ϕ_i(x) ψ_j(x) λ(dx) ∈ C,

which we note are finite by the Cauchy-Schwarz inequality. From (28, Proposition 2.10) we have that Z_n as in (2.9) is finite and equals det A.

Suppose now that Z_n = det A ∈ (0, ∞) and u_n as in (2.8) is a probability density function. If (X_1, …, X_n) is a random vector in R^n with probability density function u_n, we have from (28, Proposition 2.11) that the point process N on R defined by

(2.11) N(B) = ∑_{i=1}^n 1{X_i ∈ B}

is a determinantal point process on R with correlation kernel

(2.12) K(x, y) = ∑_{i,j=1}^n ψ_i(x) · (A^{−1})_{ij} · ϕ_j(y).

Remark 2.23. We mention that since det A = Z_n ∈ (0, ∞), we have that A is invertible and the entries (A^{−1})_{ij} in (2.12) are well-defined.

Remark 2.24. We mention that the definition of a determinantal point process in (28) is more general than ours, and in order for N as in (2.11) to be a determinantal point process as in Definition 2.18, one needs to further assume that {ϕ_i}_{i=1}^n and {ψ_i}_{i=1}^n are locally bounded; this ensures that K(x, y) is locally bounded. The latter condition will be automatically satisfied in all of our applications later in the text.

We end this section by noting that if we further suppose that {ϕ_i}_{i=1}^n and {ψ_i}_{i=1}^n are biorthogonal as in Definition 2.21, then A is a diagonal matrix and (2.12) simplifies to

(2.13) K(x, y) = ∑_{i=1}^n ψ_i(x) · (1/a_ii) · ϕ_i(y).

Section 3. Convergence of determinantal point processes

The goal of this section is to prove the following statement.

Proposition 3.1. Let N_n be a sequence of determinantal point processes with correlation kernels K_n as in Definition 2.18.
As usual, we assume that Kn are locally bounded functions on R 2 . Suppose that there exists a measurable function K on R 2 such that for each compact set B limn→∞ sup x,y∈B |Kn(x, y) − K(x, y)| = 0. Then, there exists a determinantal point process N with correlation kernel K. In addition, Nn converge weakly to N (as random elements in MR – the space of locally bounded measures on R). Remark 3.2. In plain words, Proposition 3.1 states that to prove that a sequence of determinantal point processes converges weakly, it suffices to show that the corresponding sequence of correlation kernels converges uniformly over compact sets. The proof of Proposition 3.1 is split into two parts, each isolated in a subsection below. 3.1. Weak convergence of Nn. In this section we show that the sequence Nn is tight and that any two subsequential limits N′ and N′′ have the same distribution. In particular, this shows that the sequence Nn has a unique weak subsequential limit N, which together with tightness shows that Nn ⇒ N. For clarity we split the proof into five steps. Step 1. In this step we show that Nn is tight. In view of (29, Theorem 4.10) it suffices to show that for B ∈ Rˆ we have (3.1) limr→∞ sup n≥1 P(Nn(B) > r) = 0. First, note that (3.2) E[Nn(B)] = Z B Kn(x, x)λ(dx) ≤ C · λ(B), where C = sup n≥1 sup x∈B Kn(x, x) < ∞. The first equality in (3.2) uses that Nn is determinantal with kernel Kn and Lemma 2.17. The fact that C < ∞ follows from the fact that Kn are locally bounded and converge uniformly on compact 15 (and hence bounded) sets. From the Markov inequality we get P(Nn(B) > r) ≤ E[Nn(B)] r ≤ C · λ(B) r , which proves (3.1). Step 2. To show the uniqueness of the subsequential limits, we need a few intermediate results. In this step we show that the mean measure of a subsequential limit of Nn has no atoms in R. We know that {Nn}n≥1 is a sequence of determinantal point processes, and hence random elements in MR, and the latter is a Polish space, see Lemma 2.5. 
Suppose that N′ is a weak subsequential limit of Nn, i.e. Nnk ⇒ N′ as random elements in MR. By Skorohod’s representation theorem, Theorem 2.3, we may assume that Nnk and N′ are all defined on the same probability space (Ω, F, P) and for all ω ∈ Ω we have that Nnk (ω) vaguely converge to N′ (ω), denoted as Nnk (ω) v→ N′ (ω). We write M′ for the mean measure of N′ and proceed to show for each x ∈ R (3.3) M′ {x} = 0. Fix a, b ∈ R with a < b. Since Nnk (ω) v→ N′ (ω), we conclude by (29, Lemma 4.1) that N ′ (ω)(a, b) ≤ lim inf n Nnk (ω)(a, b). This holds for any fixed ω ∈ Ω, so we have E[N ′ (a, b)] ≤ E[lim inf k Nnk (a, b)] ≤ lim inf k E[Nnk (a, b)] where the second inequality is due to Fatou’s lemma (19, Lemma 2.18). Since Nnk are determinantal, we have from Lemma 2.17 lim inf k E[Nnk (a, b)] = lim inf k Z b a Knk (x, x)dx = Z b a K(x, x)dx, where the last equality used the bounded convergence theorem. Using the monotonicity of measures, and the above inequalities, we conclude for any x ∈ R and ε > 0 that M′ {x} = E[N ′ {x}] ≤ E[N ′ (x − ε, x + ε)] ≤ Z x+ε x−ε K(x, x)dx. 16 This is true for arbitrary ε > 0, and taking the limit as ε → 0+ we conclude (3.3). Step 3. We continue with the same notation as in Step 2. Suppose that I = {(a, b] : a ≤ b; a, b ∈ Q} is the family of half-open intervals with rational endpoints. Our goal in this step is to show that there is an event E ∈ F such that P(E) = 1 and such that for each ω ∈ E and I ∈ I we have (3.4) lim k→∞ Nnk (ω)I = N ′ (ω)I. From (3.3) and Tonelli’s theorem we have (3.5) E[N ′Q] = E X x∈Q N ′ {x} = X x∈Q E[N ′ {x}] = X x∈Q M′ ({x}) = 0. We conclude that there exists an event E ∈ F such that P(E) = 1 and such that for each ω ∈ E (3.6) N ′ (ω)Q = 0. In particular, if (a, b] = I ∈ I we conclude that for ω ∈ E we have N′ (ω)∂I = N′ (ω){a, b} = 0 and so I is a continuity set for N′ . The latter and (29, Lemma 4.1) together imply (3.4). Step 4. We continue with the same notation as in Steps 2 and 3. 
Take a set of pairwise disjoint intervals I1, . . . , Im ∈ I. In this step we prove that if b1, . . . , bm ∈ Z≥0 and b1 + · · · + bm = r, then (3.7) E "Ym i=1 N′ (Ii)! (N′(Ii) − bi)!# = Z I b1 1 ×···×I bmm det[K(xi , xj )]r i,j=1λ r (dx). In our arguments we use the following result, whose proof is given as Lemma B.5 in Appendix B. Lemma 3.3. Let Xk = (Xk 1 , . . . , Xk m) be a sequence of random vectors in Z m ≥0 . Suppose that for each b1, . . . , bm ∈ Z≥0, there exists a constant C(b1, . . . , bm) ∈ [0, ∞) such that (3.8) lim k→∞ E "Ym i=1 Xk i ! (Xk i − bi)!# = C(b1, . . . , bm) Suppose further that if cr := max{C(b1, . . . , bm) : b1 + · · · + bm = r}, then P∞ r=0 cra r r! < ∞ for some a > 0. Then, Xk converges weakly to a random vector X∞, where X∞ ∈ Z m ≥0 almost surely and (3.9) E "Ym i=1 X∞ i ! (X∞ i − bi)!# = C(b1, . . . , bm) 17 for all b1, . . . , bm ∈ Z≥0. We denote Xk = (Xk 1 , . . . , Xk m), where Xk i = Nnk Ii for i = 1, . . . , m. We proceed to show that Xk satisfies the conditions of Lemma 3.3. Since Nnk is a point process, we know that Xk ∈ Z m ≥0 . From the determinantal structure of Xk and Lemma 2.17 we also get for any b1, . . . , bm ∈ Z≥0 with r = b1 + · · · + bm that E "Ym i=1 Xk i ! (Xk i − bi)!# = E "Ym i=1 Nnk (Ii)! (Nnk (Ii) − bi)!# = Z I b1 1 ×···×I bmm det[Knk (xi , xj )]r i,j=1λ r (dx). Taking the limit on both sides and applying the bounded convergence theorem gives (3.10) lim k→∞ E "Ym i=1 Nnk (Ii)! (Nnk (Ii) − bi)!# = Z I b1 1 ×···×I bmm det[K(xi , xj )]r i,j=1λ r (dx) =: C(b1, . . . , bm). From Hadamard’s inequality, Lemma 2.20, we conclude that (3.11) C(b1, . . . , bm) ≤ r r/2 · C r , where C = max 1≤i≤m λ(Ii) · sup x,y∈∪m i=1Ii |K(x, y)|. The latter shows that cr = max{C(b1, . . . , bm) : b1 + · · · + bm = r} is bounded by r r/2 · C r . By domination we observe that for any a > 0 (3.12) X∞ r=0 cr · a r r! ≤ X∞ r=0 r r/2 · C r · a r r! < ∞, where the second sum is convergent since by the root test limr→∞ r r/2 · C r · a r r! 
1/r = limr→∞ r 1/2 · Ca (r!)1/r = 0. The last few observations allow us to conclude that Xk converge weakly to a vector X∞ which satisfies (3.9) with C(b1, . . . , bm) as in (3.10). On the other hand, by Step 3 we know that Xk converges almost surely (and hence weakly) to (N′ I1, . . . , N′ Im). Since weak limits are unique, we conclude that X∞ has the same law as (N′ I1, . . . , N′ Im), which together with (3.9) implies (3.7). Step 5. In this final step we show that if N′′ is another subsequential limit of Nn then N′ and N′′ have the same distribution as random elements in MR. 18 As in Step 3 we denote by I = {(a, b] : a ≤ b; a, b ∈ Q} the collection of half-open intervals with rational endpoints. We fix m pairwise disjoint set of intervals I1, . . . , Im ∈ I, and integers b1, . . . , bm ∈ Z≥0 such that b1 + · · · + bm = r. From (3.7) applied to N′ and N′′ we conclude that E "Ym i=1 N′ (Ii)! (N′(Ii) − bi)!# = E "Ym i=1 N′′(Ii)! (N′′(Ii) − bi)!# = Z I b1 1 ×···×I bmm det[K(xi , xj )]r i,j=1λ r (dx) =: C(b1, . . . , bm) ≤ r r/2 · C r , (3.13) where the last inequality used (3.11). We now observe that (3.13), the convergence in (3.12), and Lemma 3.3 together imply the following distributional equality of random vectors in R m: (3.14) (N ′ I1, . . . , N′ Im) d= (N ′′I1, . . . , N′′Im). Let us elaborate on the latter statement briefly. Suppose we set Xk = (N′ I1, . . . , N′ Im) if k is odd and Xk = (N′′I1, . . . , N′′Im) if k is even. In view of (3.12) and (3.13) this sequence satisfies the conditions of Lemma 3.3. In particular, we see that Xk converges to some X∞ but then so does any subsequence. Passing to the subsequence of odd indices we conclude that (N′ I1, . . . , N′ Im) has the same law as X∞, while passing to the subsequence of even indices shows (N′′I1, . . . , N′′Im) has the same law as X∞, which gives (3.14). From Lemma 2.13 we know that I is a dissecting semi-ring. In addition, if I1, . . . , Im ∈ I are arbitrary (i.e. 
not necessarily pairwise disjoint) we know from Lemma 2.12 that we can find J1, . . . , Jk that are pairwise disjoint and such that Ii = [ j:Jj⊆Ii Jj . From (3.14) we know that (N ′J1, . . . , N′Jk) d= (N ′′J1, . . . , N′′Jk), while by additivity of the random measures we have N ′ Ii = X j:Jj⊆Ii N ′Jj and N ′′Ii = X j:Jj⊆Ii N ′′Jj . 19 The conclusion is that (N′ I1, . . . , N′ Im) and (N′′I1, . . . , N′′Im) are given by the same linear (and hence continuous) function of (N′J1, . . . , N′Jk) and (N′′J1, . . . , N′′Jk), respectively. By the mapping theorem, see (5, Theorem 2.7), we conclude that (N′ I1, . . . , N′ Im) and (N′′I1, . . . , N′′Im) also have the same distribution. I.e. (3.14) holds without assuming that Ii ’s are pairwise disjoint. In the remainder of this step we use a standard monotone-class argument to extend the equality of distributions in (3.14) to equality of distribution of N′ and N′′. Consider the collection A := {A ∈ S : P(N ′ ∈ A) = P(N ′′ ∈ A)} where S is as in Definition 2.6. We first show that for any I1, . . . , Im ∈ I and B1, . . . , Bm ∈ R (3.15) π −1 I1 (B1) ∩ · · · ∩ π −1 Im (Bm) ∈ A where πI : MR 7→ R are defined by πI (µ) = µI. Using the definition of the πI ’s and the definition of a pre-image, we see that P N ′ ∈ π −1 I1 (B1) ∩ · · · ∩ π −1 Im (Bm) = P πI1 (N ′ ) ∈ B1, . . . , πIm(N ′ ) ∈ Bm =P N ′ I1 ∈ B1, . . . , N′ Im ∈ Bm = P N ′′I1 ∈ B1, . . . , N′′Im ∈ Bm =P πI1 (N ′′) ∈ B1, . . . , πIm(N ′′) ∈ Bm = P N ′′ ∈ π −1 I1 (B1) ∩ · · · ∩ π −1 Im (Bm) , where the equality on the second line used (3.14). We next show that A is a λ-system. This is relatively straightforward: • P(N′ ∈ MR) = 1 = P(N′′ ∈ MR), so MR ∈ A. • If A ∈ A, then P N ′ ∈ A c = 1 − P N ′ ∈ A = 1 − P N ′′ ∈ A = P N ′′ ∈ A c , so Ac = MR \ A ∈ A. 
• If $A_1, A_2, \cdots \in A$ are pairwise disjoint, then by $\sigma$-additivity of probability measures,
$$P(N' \in \cup_i A_i) = \sum_i P(N' \in A_i) = \sum_i P(N'' \in A_i) = P(N'' \in \cup_i A_i),$$
so $\cup_i A_i \in A$.

By the $\pi$-$\lambda$ theorem we conclude that $A$ contains the $\sigma$-algebra generated by the sets (3.15). Since $I$ is a dissecting semi-ring it follows from (29, Lemma 4.7) that the latter $\sigma$-algebra agrees with $S$. We conclude that $P(N' \in A) = P(N'' \in A)$ for all $A \in S$, proving that $N'$ and $N''$ have the same distribution.

3.2. $N$ is a determinantal point process. In Section 3.1 we showed that $N_n$ converge weakly to a random measure $N$ in $M_{\mathbb{R}}$. In this section we show that $N$ is a determinantal point process. Similarly to Step 2 in Section 3.1, we can apply Skorohod's representation theorem, Theorem 2.3, by which we may assume that $N_n$ and $N$ are defined on the same probability space $(\Omega, F, P)$ and that $N_n(\omega)$ converge vaguely to $N(\omega)$ for each $\omega \in \Omega$. With this setup in place we split the proof into three steps.

Step 1. In this step we show that $N$ is a point process. From Step 4 in Section 3.1 we know that if $I = \{(a, b] : a \le b;\ a, b \in \mathbb{Q}\}$ is the collection of half-open intervals with rational endpoints, then $N(\omega)I \in \mathbb{Z}_{\ge 0}$ with full probability for each $I \in I$. Since $I$ is countable, we can find an event $F \in \mathcal{F}$ such that $P(F) = 1$ and for $\omega \in F$ we have
(3.16) $N(\omega)A \in \mathbb{Z}_{\ge 0}$, provided that $A \in I$.
Below we proceed to prove (3.16) first when $A$ is a finite union of sets in $I$, then when $A$ is a bounded open set, and finally when $A$ is any bounded Borel set. The latter would imply that $N$ is a point process.

Suppose first that $I_1, \ldots, I_k \in I$. From Lemma 2.12 we know that we can find pairwise disjoint $J_1, \ldots, J_m \in I$ such that $\cup_{i=1}^{k} I_i = \cup_{j=1}^{m} J_j$. By additivity, we conclude
$$N(\omega)\left(\cup_{i=1}^{k} I_i\right) = N(\omega)\left(\cup_{j=1}^{m} J_j\right) = \sum_{j=1}^{m} N(\omega)J_j.$$
For $\omega \in F$ we have that each summand on the right is in $\mathbb{Z}_{\ge 0}$ and then so is the sum. In particular, we conclude that (3.16) holds, provided that $A$ is a finite union of sets in $I$. Suppose next that $G$ is a bounded open set.
From Lemma 2.13 we know that I is a dissecting semi-ring and so we can find countably many {Ir}r≥1 such that G = ∪r≥1Ir. By continuity of the 21 measure from below we have for ω ∈ F N(ω)G = limn→∞ N(ω) (∪ n r=1Ir) ∈ Z≥0. To see why the latter holds, note that the measure of each finite union is in Z≥0 and the monotone limit of non-negative integers is either infinity or in Z≥0. But N is locally bounded, and G is bounded so the limit needs to be in Z≥0. This proves (3.16) when A is a bounded open set. Finally, suppose that A ∈ Rˆ. From (19, Theorem 1.18) we know that N(ω)A = inf{N(ω)U : A ⊆ U for open and bounded U}. From our earlier work we know N(ω)U ∈ Z≥0 for bounded open U and ω ∈ F and so N(ω)A ∈ Z≥0. Step 2. In this step we prove that N is a determinantal point process with correlation kernel K. We make use of the following lemma, whose proof is given in the next and final step. Lemma 3.4. Suppose that N is a point process on (R, R) and I is a dissecting semi-ring in (R, R). Suppose that there is a non-negative symmetric measurable function un : R n 7→ [0, ∞) such that for each pairwise disjoint I1, . . . , Im ∈ I and n1, . . . , nm ∈ N with n1 + · · · + nm = n we have E "Ym i=1 (N(Ii))! (N(Ii) − ni)!# = Z I n1 1 ×···×I nmm un(x1, . . . , xn)λ n (dx) < ∞, where we make the same convention for factorials as in Lemma 2.17. Then, the n-th correlation function for N exists and is equal to un. From Step 4 in Section 3.1, see (3.7), we know that N satisfies the conditions of Lemma 3.4 with I = {(a, b] : a ≤ b; a, b ∈ Q} and un(x1, . . . , xn) = det [K(xi , xj )]n i,j=1 . We mention that I is a dissecting semi-ring from Lemma 2.13 and un is symmetric since the determinant function is alternating in both rows and columns. Also, N is a point process from our work in the previous step. From Lemma 3.4 we conclude that all correlation functions ρn(x1, . . . , xn) for n ≥ 1 of N exist and are equal to det [K(xi , xj )]n i,j=1. 
From Definition 2.18 we conclude that N is a determinantal 22 point process with correlation kernel K. Step 3. In this step we prove Lemma 3.4. In the remainder we assume the same notation as in the statement of the lemma. We also let Nn be as in (2.4) and In be as in Lemma 2.14. We first note by the definition of Nn in (2.4) that Ym i=1 N(Ii)! (N(Ii) − ni)! = Nn(I n1 1 × · · · × I nm m ) = Nn(B1 × · · · × Bn), where B1, . . . , Bn is any permutation of n1 copies of I1, n2 copies of I2 etc. Taking expectations on both sides and using the symmetry of un we conclude that for any B1, . . . , Bn ∈ I such that Bi = Bj or Bi ∩ Bj = ∅ for each 1 ≤ i ≤ j ≤ n we have (3.17) Mn(B1 × · · · × Bn) = Z B1×···×Bn un(x1, . . . , xn)λ n (dx) < ∞. We now note that any A ∈ In can be written as a finite disjoint union of sets of the form B1×· · ·×Bn as above (using Lemma 2.14). By linearity and (3.17) we conclude (3.18) Mn(A) = Z A un(x1, . . . , xn)λ n (dx) for all A ∈ In. Below we show progressively that (3.18) holds when A is a finite union of elements in In, when A is an open set, when A is a bounded Borel set and finally if A is any Borel set. The latter by Definition 2.15 means that the n-th correlation function ρn exists and equals un. Let A1, . . . , Ar ∈ In. From Lemma 2.12 (using that that In is a dissecting semi-ring from Lemma 2.14) we can find finitely many B1, . . . , Bm ∈ In that are pairwise disjoint and [r i=1 Ai = [m j=1 Bj . By additivity and (3.18) applied to the Bj ’s we conclude for B = ∪ r i=1Ai = ∪ m j=1Bj Mn(B) = Xm j=1 Mn(Bj ) = Xm j=1 Z Bj un(x1, . . . , xn)λ n (dx) = Z B un(x1, . . . , xn)λ n (dx). 23 This shows that (3.18) holds for finite unions of elements in In. Fix now any open set G ⊆ R n . Since In is dissecting we can find Im ∈ In such that G = ∪m≥1Im. This shows that Mn(G) = Mn (∪m≥1Im) = lim N→∞ Mn ∪ N m=1Im = lim N→∞ Z ∪N m=1Im un(x1, . . . , xn)λ n (dx) = Z ∪∞m=1Im un(x1, . . . , xn)λ n (dx) = Z G un(x1, . . . 
, xn)λ n (dx), (3.19) where in the second equality on the first line we used the monotone convergence theorem, the third equality on the first line used that (3.18) holds for finite unions of elements in In, and in the first equality on the second line we used that monotone convergence theorem again. Our work above shows that (3.18) holds provided A is an open set. We now fix a bounded open set G in R n and consider the collection AG of subsets of G that satisfy (3.18). Similarly to Step 5 in Section 3.1, one readily verifies that AG is a λ-system that contains the open subsets of G (which is a π-system). By the π − λ theorem we conclude that (3.18) holds for any Borel subset of G and since G was arbitrary we see that (3.18) holds for all bounded Borel sets. Finally, if A is any Borel set, then A = ∪m≥1Am where Am are bounded and Borel. One can repeat (3.19) with G replaced with A and Im replaced with Am to conclude that (3.18) holds for A as well. This proves that (3.18) holds for any Borel set. 24 Section 4. The GUE as a determinantal point process In this section we prove the following statement. Proposition 4.1. Let N ∈ N and (X1, . . . , XN ) be a random vector in R N with density as in (1.3). We form the random measure (4.1) MN A = X N i=1 1{Xi ∈ A} for all Borel sets A ⊆ R. Then, MN is a determinantal point process on R as in Definition 2.18 with correlation kernel (4.2) KN (x, y) = e y 2/2−x 2/2 2(πi) 2 I C0 dz Z a+i∞ a−i∞ dw w z N · 1 w − z · e w2−2wy−z 2+2zx . In (4.2), we have that C0 is a positively oriented circle around the origin with radius R > 0 and a > R. Part of the statement is that the right side of (4.2) does not depend on a, R (provided that a > R > 0) and also that the integral is absolutely convergent. The rest of the section is organized as follows. In Section 4.1 we recall the definition of the Hermite polynomials and establish some of their properties. 
In Section 4.2 we show that the measure MN in Proposition 4.1 is determinantal and find a formula for its correlation kernel KN in terms of Hermite polynomials. In Section 4.3 we show that the formula for KN in Section 4.2 agrees with (4.2). 4.1. Hermite polynomials. In this section, we introduce a few different representations and properties of Hermite polynomials, which are defined as follows. Definition 4.2. For all n ∈ Z≥0, the n-th Hermite polynomial is defined as (4.3) Hn(x) = (−1)n e x 2 d n (dx) n e −x 2 The first few Hermite polynomials are H0(x) = 1, H1(x) = 2x, H2(x) = 4x 2 − 2. The next lemma contains a combinatorial formula and a few simple properties of Hermite polynomials. Lemma 4.3. The Hermite polynomials have the following explicit formula (4.4) Hn(x) = n! · ⌊ X n/2⌋ m=0 (−1)m m! · (2x) n−2m (n − 2m)!. 25 In particular, we have that Hn(x) is a polynomial of degree n, with leading coefficient 2 n , and Hn(x) is an even function when n is even, and is an odd function when n is odd. Proof. Let Un(x) denote the right side of (4.4). We prove that Hn(x) = Un(x) for all n ∈ Z≥0 by induction on n. One directly checks that H0(x) = 1 = U0(x), which proves the base case. Suppose that Hn(x) = Un(x). Using (4.3) we have Hn+1(x) = (−1)n+1e x 2 d n+1 (dx) n+1 e −x 2 = −e x 2 d dx Hn(x) e x2 = −e x 2 d dx Hn(x) e x2 = −e x 2 " −2xex 2Hn(x) (e x2 ) 2 + d dxHn(x) e x2 # = 2xHn(x) − d dxHn(x). (4.5) On the other hand, using the definition of Un(x), we have 2xUn(x) − d dxUn(x) = n! · ⌊ X n/2⌋ m=0 (−1)m m! · (2x) n−2m+1 (n − 2m)! − 2 · n! · ⌊ X n/2⌋ m=0 (−1)m m! · (2x) n−2m−1 (n − 2m − 1)! = (n + 1)! · ⌊(nX +1)/2⌋ m=0 (−1)m m! · (2x) n+1−2m (n + 1 − 2m)! · n + 1 − 2m n + 1 + 2m n + 1 = Un+1(x), (4.6) where we used the convention 1/k! = 0 if k < 0. Combining the last two equations with the induction hypothesis we see Hn+1(x) = 2xHn(x) − d dxHn(x) = 2xUn(x) − d dxUn(x) = Un+1(x). This completes the induction step and the general result now follows by induction on n. 
□ We next show that the Hermite polynomials are orthogonal with respect to the Gaussian weight. Proposition 4.4. Given Hermite polynomials Hn(x) and Hm(x), we have that (4.7) Z R Hm(x)Hn(x)e −x 2 dx = 0 if m ̸= n, √ π · 2 n · n! if m = n. Proof. We first consider the m ̸= n case, and without loss of generality we assume m < n. From (4.3), we have Hn+1(x) e x2 = (−1)n+1 d n+1 (dx) n+1 e x 2 = (−1) d dx Hn(x) e x2 , 26 which implies d Hn−1(x) e x2 = −Hn(x) e x2 dx. Using the latter and repeated uses of integration by parts we get Z R Hn(x)Hm(x)e −x 2 dx = Z R Hn−1(x)H ′ m(x)e −x 2 dx = · · · = Z R Hn−m−1(x)H(m+1) m (x)e −x 2 dx, where we mention that we have no boundary terms at ±∞ due to the decay of e −x 2 . Since Hm(x) is a degree m polynomial, we have H (m+1) m (x) = 0, and so the last integral above is zero. Suppose now that m = n. We perform the same integration by parts as above m times to get Z R Hm(x)Hm(x)e −x 2 dx = Z R H0(x)H(m) m (x)e −x 2 dx = (m!)(2m) Z R e −x 2 dx = m! · 2 m · √ π. In deriving the above we used the known value of the Gaussian integral, the fact that the m-th derivative of a degree m polynomial with leading coefficient am is (m!)(am), and Lemma 4.3. □ We are also interested in the exponential generating function of the Hermite polynomials. Lemma 4.5. For all x ∈ R, z ∈ C, the exponential generating function of the Hermite polynomials, given by P∞ n=0 Hn(x) · z n n! , is absolutely convergent and is equal to e 2xz−z 2 . Proof. We first show that the series is absolutely convergent. Using (4.4) we get X∞ n=0 Hn(x) · z n n! ≤ X∞ n=0 ⌊ n 2X ⌋ m=0 (−1)m m! · (2x) n−2mn! (n − 2m)! · z n n! = X∞ m=0 1 m! X∞ n=2m |2x| n−2m|z| n (n − 2m)! , where the exchange of the two sums is justified by Tonelli’s theorem. Further simplifications give X∞ n=0 Hn(x) · z n n! ≤ X∞ m=0 |z| 2m m! X∞ n=2m (|2x||z|) n−2m (n − 2m)! = X∞ m=0 |z 2 | m m! e |2xz| = e |z| 2+|2xz| < ∞. Therefore, Fubini’s theorem can be applied to the series X∞ n=0 Hn(x) · z n n! 
= X∞ n=0 ⌊ n 2X ⌋ m=0 (−1)m m! · (2x) n−2mn! (n − 2m)! · z n n! = X∞ m=0 (−1)mz 2m m! X∞ n=2m (2xz) n−2m (n − 2m)! = X∞ m=0 (−z 2 ) m m! e 2xz = e −z 2+2xz . □ The next result provides an integral formula for Hn(x). 27 Lemma 4.6. For all a, x ∈ R and n ∈ Z≥0 we have (4.8) Hn(x) = 2 n i √ π Z a+i∞ a−i∞ s n e (s−x) 2 ds. Proof. For clarity we split the proof into two steps. Step 1. In this step we show that (4.9) d n (dx) n e −x 2 = 1 √ π Z R (2it) n e −t 2+2itxdt. Denote f(x, t) := e −t 2+2itx, and observe that (4.10) ∂ n x f(x, t) = (2it) n · e −t 2+2itx , and e −x 2 = 1 √ π Z R f(x, t)dt, where the second equality follows from (14, Example 3.3.5). Using (4.10), we see that (4.9) is equivalent to (4.11) d n (dx) n Z R f(x, t)dt = Z R ∂ n x f(x, t)dt. We proceed to show for k = 0, . . . , n − 1 that (4.12) d dx Z R ∂ k x f(x, t)dt = Z R ∂ k+1 x f(x, t)dt , which clearly implies (4.11). We are thus left with establishing (4.12). First, by (24, 3.326.2), Z R ∂ k x f(x, t) dt = Z R (2it) k e −t 2+2itx dt = 2k Z R |t| k e −t 2 dt < ∞. Next, ∂ k+1 x f(x, t) = (2it) k+1e −t 2+2itx is continuous in x for all fixed t. In addition, for all y ∈ R and arbitrary δ > 0 we have Z R sup θ∈[−δ,δ] (2it) k+1e −t 2+2it(y+θ) dt = 2k+1 Z R |t| k+1e −t 2 dt < ∞. The last few observations verify the conditions of (14, Theorem A.5.3), which gives (4.12). 28 Step 2. In this step we prove (4.8). From (4.3) and (4.9) we obtain (4.13) Hn(x) = (−1)n e x 2 Z ∞ −∞ 1 √ π (2it) n e −t 2+2itxdt = (−1)n Z ∞ −∞ 1 √ π (2it) n e −(t−ix) 2 dt. Making the substitutions u = it and s = −u gives Hn(x) = (−1)n i √ π Z i∞ −i∞ 2 nu n e (u+x) 2 du = (−1)n2 n i √ π Z i∞ −i∞ (−1)(−1)n s n e (−s+x) 2 (−ds) = 2 n i √ π Z i∞ −i∞ s n e (s−x) 2 ds = lim N→∞ 2 n i √ π Z iN −iN s n e (s−x) 2 ds. (4.14) What remains is to identify the right side of (4.14) with the right side of (4.8). 
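Before carrying out the contour deformation, we note that the target identity (4.8) lends itself to a quick numerical sanity check (an illustration, not part of the proof): parametrizing the vertical contour by $s = a + it$, so that $ds = i\,dt$, turns (4.8) into an ordinary real-line integral with Gaussian decay, which we evaluate here for $n = 3$, where $H_3(x) = 8x^3 - 12x$ by Lemma 4.3. The specific values $n = 3$, $x = 0.5$, $a = 1$ are arbitrary choices for the illustration.

```python
import numpy as np

# Numerical check of (4.8): H_n(x) = 2^n/(i*sqrt(pi)) * int_{a-i inf}^{a+i inf} s^n e^{(s-x)^2} ds.
# With s = a + i*t (so ds = i*dt) this becomes a real-line integral whose
# integrand decays like e^{-t^2}, so a fine Riemann sum over [-10, 10] suffices.
n, x, a = 3, 0.5, 1.0
t = np.linspace(-10.0, 10.0, 200001)
dt = t[1] - t[0]
s = a + 1j * t
integrand = (2.0 ** n / np.sqrt(np.pi)) * s ** n * np.exp((s - x) ** 2)
val = np.sum(integrand) * dt

h3 = 8 * x ** 3 - 12 * x  # H_3(x) = 8x^3 - 12x from Lemma 4.3
print(val.real, h3)
```

The imaginary part of the computed integral vanishes up to roundoff, consistent with $H_n(x)$ being real, and the real part matches $H_3(0.5) = -5$.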
Consider the rectangle with sides given by the following segments:
$$\Gamma_1^N = \{z \in \mathbb{C} : \mathrm{Re}(z) \in [0, a],\ \mathrm{Im}(z) = N\}, \qquad \Gamma_2^N = \{z \in \mathbb{C} : \mathrm{Re}(z) = a,\ \mathrm{Im}(z) \in [-N, N]\},$$
$$\Gamma_3^N = \{z \in \mathbb{C} : \mathrm{Re}(z) \in [0, a],\ \mathrm{Im}(z) = -N\}, \qquad \Gamma_4^N = \{z \in \mathbb{C} : \mathrm{Re}(z) = 0,\ \mathrm{Im}(z) \in [-N, N]\}.$$
Assume that the above paths are all oriented in the increasing real or imaginary direction. Then the positively oriented rectangle formed by these paths (positively along $\Gamma_2^N$ and $\Gamma_3^N$, negatively along $\Gamma_1^N$ and $\Gamma_4^N$) is a simple closed curve and we can conclude from Cauchy's integral theorem (41, Appendix B; Theorem 2.3) that the integral along the entire rectangle is zero for any analytic function. In particular, we conclude that
(4.15) $\int_{-iN}^{iN} s^n e^{(s-x)^2} ds = \int_{\Gamma_4^N} s^n e^{(s-x)^2} ds = -\int_{\Gamma_1^N} s^n e^{(s-x)^2} ds + \int_{\Gamma_2^N} s^n e^{(s-x)^2} ds + \int_{\Gamma_3^N} s^n e^{(s-x)^2} ds.$

Using estimates of the form (22, (IV.1.7)), we see that the integrals along $\Gamma_1^N$ and $\Gamma_3^N$ vanish in the large $N$ limit:
$$\left|\int_{\Gamma_3^N} s^n e^{(s-x)^2} ds\right| \le |a| \cdot \sup_{u \in [0,a]} \left|(u - iN)^n e^{(x - u + iN)^2}\right| \le |a| \cdot (|a| + N)^n e^{(|a|+|x|)^2} e^{-N^2} \to 0,$$
$$\left|\int_{\Gamma_1^N} s^n e^{(s-x)^2} ds\right| \le |a| \cdot \sup_{u \in [0,a]} \left|(u + iN)^n e^{(x - u - iN)^2}\right| \le |a| \cdot (|a| + N)^n e^{(|a|+|x|)^2} e^{-N^2} \to 0.$$
Combining (4.14) with (4.15) and the last few observations gives
$$H_n(x) = \lim_{N \to \infty} \frac{2^n}{i\sqrt{\pi}} \int_{-iN}^{iN} s^n e^{(s-x)^2} ds = \lim_{N \to \infty} \frac{2^n}{i\sqrt{\pi}} \int_{\Gamma_2^N} s^n e^{(s-x)^2} ds = \frac{2^n}{i\sqrt{\pi}} \int_{a-i\infty}^{a+i\infty} s^n e^{(s-x)^2} ds,$$
which proves (4.8). $\square$

4.2. $M_N$ is determinantal. We begin this section by introducing the Vandermonde matrix and proving that the determinant of this matrix is as in (1.4). We then use some of the results from Section 4.1 to rewrite the density in (1.3) in terms of Hermite polynomials. The purpose of this is to show that the symmetrized density in (1.3) is a biorthogonal ensemble as in Definition 2.22, which from Section 2.4 lets us conclude that $M_N$ as in (4.1) is a determinantal point process.

We denote the $N \times N$ Vandermonde matrix $V_N$ by
(4.16) $V_N := \begin{pmatrix} 1 & x_1 & x_1^2 & \cdots & x_1^{N-1} \\ 1 & x_2 & x_2^2 & \cdots & x_2^{N-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_N & x_N^2 & \cdots & x_N^{N-1} \end{pmatrix}.$

Lemma 4.7. Let $V_N$ be as in (4.16). Then,
(4.17) $\det V_N = \prod_{1 \le i < j \le N} (x_j - x_i),$
where we recall that the right side is precisely $\Delta(x)$ as in (1.4).

Proof. We prove (4.17) by induction on $N$, with the base case $N = 1$ being trivially true as $\det V_1 = 1$. In the sequel we assume that (4.17) holds for $N - 1$ and proceed to prove it for $N$. Subtracting the first row from all other rows gives
$$\det V_N = \det \begin{pmatrix} 1 & x_1 & x_1^2 & \cdots & x_1^{N-1} \\ 0 & (x_2 - x_1) & (x_2^2 - x_1^2) & \cdots & (x_2^{N-1} - x_1^{N-1}) \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & (x_N - x_1) & (x_N^2 - x_1^2) & \cdots & (x_N^{N-1} - x_1^{N-1}) \end{pmatrix}.$$
We next subtract from the $j$-th column the $(j-1)$-th column multiplied by $x_1$, for $j = N, \ldots, 2$. Using the identities
$$(x_i^{j-1} - x_1^{j-1}) - x_1(x_i^{j-2} - x_1^{j-2}) = x_i^{j-1} - x_1^{j-1} - x_1 x_i^{j-2} + x_1^{j-1} = (x_i - x_1)x_i^{j-2},$$
we conclude
$$\det V_N = \det \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & (x_2 - x_1) & (x_2 - x_1)x_2 & \cdots & (x_2 - x_1)x_2^{N-2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & (x_N - x_1) & (x_N - x_1)x_N & \cdots & (x_N - x_1)x_N^{N-2} \end{pmatrix}.$$
Now, using the linearity of the determinant to pull out the $(x_j - x_1)$ terms gives
$$\det V_N = \prod_{j=2}^{N} (x_j - x_1) \cdot \det \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & x_2 & \cdots & x_2^{N-2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 1 & x_N & \cdots & x_N^{N-2} \end{pmatrix}.$$
However, we observe that the first row in the last matrix is $(1, 0, \ldots, 0)$, so the first row and first column can be removed from the determinant to get
$$\det V_N = \prod_{j=2}^{N} (x_j - x_1) \cdot \det \begin{pmatrix} 1 & x_2 & \cdots & x_2^{N-2} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_N & \cdots & x_N^{N-2} \end{pmatrix} = \prod_{1 \le i < j \le N} (x_j - x_i),$$
where in the last equality we applied the induction hypothesis. This completes the induction step and the general result now follows by induction. $\square$

Lemma 4.8. For fixed $N \in \mathbb{N}$, the matrix $A = [H_{i-1}(x_j)]_{i,j=1}^{N}$ satisfies
(4.18) $\det A = 2^{N(N-1)/2} \cdot \prod_{1 \le i < j \le N} (x_j - x_i),$
where $H_n(x)$ is the $n$-th Hermite polynomial from Definition 4.2.

Proof.
We first note that, A = B · V ⊤ N , where VN is as (4.16), and the matrix B = [bi,j ] N i,j=1 has entries such that Hi−1(x) = X N j=1 bi,jx j−1 . By Lemma 4.3, Hn(x) is a polynomial of degree n with leading coefficient 2 n . Consequently, bi,j = 0 for all j > i, i.e. B is lower triangular and its diagonal entries are bi,i = 2i−1 . 31 From the above arguments we conclude det A = det B · det V ⊤ N = Y N i=1 2 i−1 · det V ⊤ N = 2N(N−1)/2 · Y 1≤i<j≤N (xj − xi), where in the last equality we used (4.17). □ We note here that the density (1.3) requires that the eigenvalues be ordered. From (3, Remark 2.5.3), we obtain a similar density for the unordered eigenvalues: (4.19) f sym N (x1, . . . , xN ) = ZN · ∆(x) 2 · Y N i=1 e −x 2 i /2 . Combining (4.19) with Lemma 4.8, we get that (4.20) f sym N (x1, . . . , xN ) = ZN · 2 N−N2 · det h Hi−1(xj )e −x 2 j /2 iN i,j=1 · det h Hi−1(xj )e −x 2 j /2 iN i,j=1 . The density in (4.20) is of the form (2.8) with ϕi(x) = ψi(x) = Hi−1(x)e −x 2/2 . Moreover, we know from (4.7) that Z R ϕi(x) · ψj (x)dx = 1{i = j} · √ π · 2 i−1 · (i − 1)!. From Section 2.4, see (2.13), we conclude that if (Y1, . . . , YN ) has density (4.19), and (4.21) M˜N A := X N i=1 1{Yi ∈ A}, then M˜N is a determinantal point process with correlation kernel (4.22) KN (x, y) = X N i=1 Hi−1(x)Hi−1(y)e −x 2/2−y 2/2 √ π2 i−1(i − 1)! . We finally remark that MN as in (4.1) has the same distribution as M˜N as in (4.21), since the distribution of (Y1, . . . , YN ) is the same as that of (X1, . . . , XN ) composed with a uniform permutation of the indices {1, . . . , N}, and permuting the atoms of MN leaves the measure invariant. In particular, we see that MN as in (4.1) is a determinantal point process with correlation kernel KN as in (4.22). 4.3. Proof of Proposition 4.1. In this section we prove (4.2). In the sequel we fix a, R ∈ R such that a > R > 0 and denote by C0 the positively oriented circle around the origin of radius R. 
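As a quick numerical illustration (not part of the argument), the identity of Lemma 4.8 can be checked directly with numpy, whose `numpy.polynomial.hermite` module implements the physicists' Hermite polynomials of Definition 4.2. The sample points below are arbitrary choices for the illustration.

```python
import numpy as np
from numpy.polynomial import hermite as herm

# Numerical check of Lemma 4.8: det[H_{i-1}(x_j)] = 2^{N(N-1)/2} * prod_{i<j} (x_j - x_i).
N = 4
x = np.array([-1.3, -0.2, 0.7, 2.1])

# hermval with an identity coefficient matrix returns A[k, j] = H_k(x_j), k = 0, ..., N-1.
A = herm.hermval(x, np.eye(N))
lhs = np.linalg.det(A)
rhs = 2.0 ** (N * (N - 1) // 2) * np.prod(
    [x[j] - x[i] for i in range(N) for j in range(i + 1, N)]
)
print(lhs, rhs)  # the two agree up to floating-point error
```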
From Lemmas 4.5 and 4.6 we have
$$H_n(x) = \frac{n!}{2\pi i} \oint_{C_0} \frac{e^{2xz - z^2}}{z^{n+1}}\,dz, \qquad \text{and} \qquad H_n(y) = \frac{2^n}{i\sqrt{\pi}} \int_{a-i\infty}^{a+i\infty} w^n e^{(w-y)^2}\,dw.$$
Substituting the latter into (4.22) gives
(4.23) $K_N(x, y) = \frac{e^{y^2/2 - x^2/2}}{2(\pi i)^2} \oint_{C_0} dz \int_{a-i\infty}^{a+i\infty} dw \sum_{n=0}^{N-1} \frac{w^n}{z^{n+1}}\, e^{w^2 - 2wy - z^2 + 2zx}.$

We next note that for any $M \ge 0$ we have
(4.24)
$$K_N(x, y) = \frac{e^{y^2/2 - x^2/2}}{2(\pi i)^2} \oint_{C_0} dz \int_{a-i\infty}^{a+i\infty} dw \sum_{n=-M}^{N-1} \frac{w^n}{z^{n+1}}\, e^{w^2 - 2wy - z^2 + 2zx} = \frac{e^{y^2/2 - x^2/2}}{2(\pi i)^2} \oint_{C_0} dz \int_{a-i\infty}^{a+i\infty} dw\, f_M(z, w),$$
where
$$f_M(z, w) = \left(\frac{w}{z}\right)^N \cdot \frac{1}{w - z} \cdot \left[1 - (z/w)^{M+N}\right] \cdot e^{w^2 - 2wy - z^2 + 2zx}.$$
The first equality in (4.24) follows from (4.23) and the fact that the extra terms, corresponding to $n = -M, \ldots, -1$, integrate to zero by Cauchy's theorem. In deriving the second equality in (4.24) we used the identity
$$\sum_{n=-M}^{N-1} \frac{w^n}{z^{n+1}} = \frac{1}{z} \cdot (w/z)^{N-1} \cdot \sum_{n=0}^{N+M-1} (z/w)^n = (w/z)^N \cdot \frac{1}{w} \cdot \frac{(z/w)^{M+N} - 1}{(z/w) - 1}.$$
Since $|z| = R$ for $z \in C_0$ and $|w| \ge \mathrm{Re}(w) = a > R$, we have $|z/w| < 1$, and so
(4.25) $\lim_{M \to \infty} f_M(z, w) = \left(\frac{w}{z}\right)^N \cdot \frac{1}{w - z} \cdot e^{w^2 - 2wy - z^2 + 2zx} =: f(z, w).$
On the other hand, we have for $z \in C_0$ and $\mathrm{Re}(w) = a$ that
(4.26)
$$|f_M(z, w)| = |w/z|^N \cdot \frac{1}{|w - z|} \cdot \left|1 - (z/w)^{M+N}\right| \cdot \left|e^{w^2 - 2wy - z^2 + 2zx}\right| \le R^{-N} |w|^N \cdot \frac{1}{a - R} \cdot 2 \cdot e^{R^2 + 2R|x|} \cdot e^{2a|y| + a^2 - \mathrm{Im}(w)^2} =: g(z, w).$$
Taking the limit as $M \to \infty$ in (4.24) and applying the dominated convergence theorem with dominating function $g(z, w)$ gives
$$K_N(x, y) = \lim_{M \to \infty} \frac{e^{y^2/2 - x^2/2}}{2(\pi i)^2} \oint_{C_0} dz \int_{a-i\infty}^{a+i\infty} dw\, f_M(z, w) = \frac{e^{y^2/2 - x^2/2}}{2(\pi i)^2} \oint_{C_0} dz \int_{a-i\infty}^{a+i\infty} dw\, f(z, w),$$
which proves (4.2).

Section 5. Bulk asymptotics

The goal of this section is to prove Theorem 1.3. In Section 5.1 we introduce a certain one-parameter family of functions $F_\alpha(z)$ that naturally arises in our analysis and establish some of its properties. In Section 5.2 we show that under appropriate scaling the kernels $K_N$ from (4.2) converge to the sine kernel from Definition 1.2 – this is Proposition 5.4.
In Section 5.3 we prove Theorem 1.3 by utilizing Proposition 5.4 and various results from Sections 2, 3 and 4.

5.1. Preliminary estimates. Throughout this section we assume that $\alpha \in (-\sqrt{2}, \sqrt{2})$, and define
(5.1) $F_\alpha(z) = z^2 - 2\alpha z + \log z,$
where we take the principal branch of the logarithm. By a direct computation we have
(5.2) $F_\alpha'(z) = 2z - 2\alpha + \frac{1}{z},$
from which we conclude that $F_\alpha'(z)$ has two complex conjugate roots $z_\pm$, given by
(5.3) $z_+ = \frac{\alpha}{2} + i\sqrt{\frac{1}{2} - \frac{\alpha^2}{4}} \qquad \text{and} \qquad z_- = \frac{\alpha}{2} - i\sqrt{\frac{1}{2} - \frac{\alpha^2}{4}}.$
In the remainder of this section we investigate the behavior of the real part of $F_\alpha(z)$ along certain contours that we introduce next.

Definition 5.1. For $r > 0$ we denote by $C_r$ the positively oriented circle of radius $r$, centered at the origin. For $a \in \mathbb{R}$ we denote by $\Gamma_a$ the vertical line with real part $a$, oriented in the direction of increasing imaginary part.

We summarize the results we require about $F_\alpha(z)$ in the following lemma.

Lemma 5.2. Let $C_{\sqrt{2}/2}$ and $\Gamma_{\alpha/2}$ be as in Definition 5.1. Then, the points $z_\pm$ in (5.3) are minimizers of $\mathrm{Re}\,F_\alpha(z)$ along $C_{\sqrt{2}/2}$ and maximizers of $\mathrm{Re}\,F_\alpha(z)$ along $\Gamma_{\alpha/2}$. That is,
(5.4) $\mathrm{Re}\,F_\alpha(z) \ge \mathrm{Re}\,F_\alpha(z_\pm)$ for $z \in C_{\sqrt{2}/2}$,
(5.5) $\mathrm{Re}\,F_\alpha(z) \le \mathrm{Re}\,F_\alpha(z_\pm)$ for $z \in \Gamma_{\alpha/2}$.
Moreover, the inequalities in (5.4) and (5.5) are strict if $z \ne z_-$ and $z \ne z_+$.

Remark 5.3. Note that for $z_\pm$ as in (5.3) we have $\{z_-, z_+\} = C_{\sqrt{2}/2} \cap \Gamma_{\alpha/2}$.

Proof. We parametrize $C_{\sqrt{2}/2}$ via $z(\theta) = (\sqrt{2}/2)e^{i\theta}$ for $\theta \in (-\pi, \pi]$. This gives
(5.6) $\mathrm{Re}\,F_\alpha(z(\theta)) = \mathrm{Re}\left[\frac{e^{2i\theta}}{2} - \sqrt{2}\,\alpha e^{i\theta} + \log\frac{\sqrt{2}}{2} + i\theta\right] = \frac{\cos(2\theta)}{2} - \sqrt{2}\,\alpha\cos\theta + \log\frac{\sqrt{2}}{2},$
where we used (22, (I.6.1)) to evaluate the logarithm. Differentiating (5.6) with respect to $\theta$ gives
$$\frac{d}{d\theta}\mathrm{Re}\,F_\alpha(z(\theta)) = -\sin(2\theta) + \sqrt{2}\,\alpha\sin\theta = \sqrt{2}\sin\theta \cdot [\alpha - \sqrt{2}\cos\theta],$$
which is zero precisely when $\theta = 0$, $\theta = \pi$, $\theta = \phi_+$ and $\theta = \phi_-$, where $\phi_- = -\arccos(\alpha/\sqrt{2}) = \arg z_- \in (-\pi, 0)$ and $\phi_+ = \arccos(\alpha/\sqrt{2}) = \arg z_+ \in (0, \pi)$. We also note that $\sin\theta > 0$ for $\theta \in (0, \pi)$ and $\sin\theta < 0$ for $\theta \in (-\pi, 0)$, while $\alpha - \sqrt{2}\cos\theta < 0$ if $\theta \in (\phi_-, \phi_+)$ and $\alpha - \sqrt{2}\cos\theta > 0$ if $\theta \notin [\phi_-, \phi_+]$.
The above show d dθReFα(z(θ)) < 0 if θ ∈ (0, ϕ+) ∪ (−π, ϕ−), d dθReFα(z(θ)) > 0 if θ ∈ (ϕ−, 0) ∪ (ϕ+, π) which implies (5.4). We parametrize Γα/2 via z(v) = α/2 + iv for v ∈ R. This gives ReFα(z(v)) = Re " α 2 + iv 2 − (2α) α 2 + iv + log α 2 + iv # = − 3α 2 4 − v 2 + log s α2 4 + v 2. (5.7) Differentiating (5.7) with respect to v gives (5.8) d dvReFα(z(v)) = −2v + 1 2 1 (α2/4) + v 2 (2v) = 2v α2 + 4v 2 (2 − α 2 − 4v 2 ), 35 which is zero precisely when v = 0, v = Im(z+) and v = Im(z−). From (5.8) we note that d dvReFα(z(v)) > 0 if v ∈ (0, Im(z+)) ∪ (−∞, Im(z−)), d dvReFα(z(v)) < 0 if v ∈ (Im(z−), 0) ∪ (Im(z+), ∞) which implies (5.5). □ 5.2. Kernel convergence. The goal of this section is to establish the following result. Proposition 5.4. Fix α ∈ (− √ 2, √ 2), x, ˜ y˜ ∈ R, and sequences x˜N , y˜N ∈ R such that limN→∞ x˜N = x˜ and limN→∞ y˜N = ˜y. We then have (5.9) lim N→∞ e x 2 N /2−y 2 N /2 · N −1/2 · KN (xN , yN ) = e αx˜−αy˜ · Ksine,ϕ(˜x, y˜), where xN = α · N1/2 + ˜xN · N −1/2 , yN = α · N1/2 + ˜yN · N −1/2 , and ϕ = √ 2 − α2. We recall that KN is as in (4.2) and Ksine,ϕ is as in Definition 1.2. Proof. Starting from the formula for KN from (4.2), in which we take R = (α/4) · N1/2 and a = (α/2) · N1/2 , and performing the variable changes w˜ = N −1/2w and z˜ = N −1/2 z, we obtain (5.10) e x 2 N /2−y 2 N /2 · N −1/2 · KN (xN , yN ) = 1 2(πi) 2 Z Γα/2 dw˜ I Cα/4 dz˜ e −2 ˜wy˜N +2˜zx˜N w˜ − z˜ e N[Fα( ˜w)−Fα(˜z)] , where Fα(z) is as in (5.1), and Γα/2 , Cα/4 are as in Definition 5.1. We next deform the contour Cα/4 to C√ 2/2 for each w˜ ∈ Γα/2 . Due to the factor 1/( ˜w − z˜) in (5.10), we see that we may cross the isolated singularity (in particular, a simple pole) at w˜ = ˜z in the process of deformation. If |Imw˜| > |Imz±|, then we do not cross the pole, and so by Cauchy’s theorem we have (5.11) I Cα/4 dz˜ e −2 ˜wy˜N +2˜zx˜N w˜ − z˜ · e N[Fα( ˜w)−Fα(˜z)] = I C√ 2/2 dz˜ e −2 ˜wy˜N +2˜zx˜N w˜ − z˜ · e N[Fα( ˜w)−Fα(˜z)] . 
Alternatively, for |Imw˜| < |Imz±| we do cross the pole at w˜ = ˜z, and so using the Residue theorem (22, (VII.1.2)) we obtain I Cα/4 dz˜ e −2 ˜wy˜N +2˜zx˜N w˜ − z˜ · e N[Fα( ˜w)−Fα(˜z)] =2πi · e 2 ˜w(−y˜N +˜xN ) + I C√ 2/2 dz˜ e −2 ˜wy˜N +2˜zx˜N w˜ − z˜ · e N[Fα( ˜w)−Fα(˜z)] (5.12) 36 We combine (5.10), (5.11), and (5.12) to get e x 2 N /2−y 2 N /2 · N −1/2 · KN (xN , yN ) = AN + BN where AN = 1 πi Z z+ z− e 2 ˜w(˜xN −y˜N ) dw, ˜ and BN = 1 2(πi) 2 Z Γα/2 dw˜ I C√ 2/2 dz˜ e −2 ˜wy˜N +2˜zx˜N w˜ − z˜ · e N[Fα( ˜w)−Fα(˜z)] . (5.13) Recall from Remark 5.3 that z± ∈ Γα/2 ∩ C√ 2/2 , which means that the integrand in BN has singularities at z±. However, these singularities are integrable (compare to (22, (IV.8.6))). Applying the bounded convergence theorem to AN and evaluating the resulting integral gives (5.14) lim N→∞ AN = 1 πi Z z+ z− e 2 ˜w(˜x−y˜) dw˜ = z+ − z− πi if x˜ = ˜y, e 2z+(˜x−y˜) − e 2z−(˜x−y˜) 2πi(˜x − y˜) if x˜ ̸= ˜y. We now observe that z+ − z− πi = 2i · Im(z+) πi = √ 2 − α2 π = ϕ π . In addition, we have e 2z±(˜x−y˜) = e (α±iϕ)·(˜x−y˜) = e α(˜x−y˜) · [cos(ϕ(˜x − y˜)) ± isin(ϕ(˜x − y˜))] , from which we deduce for x˜ ̸= ˜y that e 2z+(˜x−y˜) − e 2z−(˜x−y˜) 2πi(˜x − y˜) = e α(˜x−y˜) · sin(ϕ(˜x − y˜)) π(˜x − y˜) . From the last few statements and (5.14) we conclude that (5.15) lim N→∞ AN = Ksine,ϕ(˜x, y˜). In view of (5.13) and (5.15) we see that to conclude (5.9) it suffices to show that (5.16) lim N→∞ 1 2(πi) 2 Z Γα/2 dw˜ I C√ 2/2 dz˜ e −2 ˜wy˜N +2˜zx˜N w˜ − z˜ · e N[Fα( ˜w)−Fα(˜z)] = 0. 37 In the remainder of the proof we establish (5.16) by appropriately applying the dominated convergence theorem. From Lemma 5.2 we know that if w˜ ∈ Γα/2 \ {z−, z+} and z˜ ∈ C√ 2/2 \ {z−, z+}, there exists D( ˜w, z˜) > 0 such that (5.17) e N[Fα( ˜w)−Fα(˜z)] = e N[ReFα( ˜w)−ReFα(z+)]−N[ReFα(˜z)−ReFα(z+)] ≤ e −D( ˜w,z˜)N . 
In addition, since x˜N and y˜N are bounded sequences (as they converge), we can find constants D1, D2 > 0 such that for all N ≥ 1 (5.18) e −2 ˜wy˜N = e −2˜yN Rew˜ = e −y˜N ·α ≤ D1, provided that w˜ ∈ Γα/2 , and (5.19) e 2˜zx˜N = e 2˜xN Rez˜ ≤ e √ 2·|x˜N | ≤ D2 provided that z˜ ∈ C√ 2/2 . Equations (5.17), (5.18) and (5.19) show that the integrand in (5.16) converges to zero for a.e. ( ˜w, z˜) ∈ Γα/2 × C√ 2/2 as N → ∞. We claim that we can find a constant C1 > 0 such that for all N ≥ 1 (5.20) |exp (N[Fα( ˜w) − Fα(˜z)])| ≤ C1e −|Im( ˜w)| . Assuming (5.20) for the moment, we can combine it with (5.18) and (5.19) to conclude (5.21) e −2 ˜wy˜N +2˜zx˜N w˜ − z˜ · e N[Fα( ˜w)−Fα(˜z)] ≤ C1D1D2 · e −|Im( ˜w)| |w˜ − z˜| =: f(˜z, w˜). Notice that (5.22) Z Γα/2 |dw˜| I C√ 2/2 |dz˜|f(˜z, w˜) < ∞, where |dz˜| and |dw˜| denote integration with respect to arc-length. Indeed, the integral in (5.22) over the region {w˜ ∈ Γα/2 : |Im( ˜w)| ≤ 1} × C√ 2/2 is finite due to the integrability of |w˜ − z˜| −1 , see (22, (IV.8.6)). On the other hand, the integral in (5.22) over the region {w˜ ∈ Γα/2 : |Im( ˜w)| > 1}×C√ 2/2 is finite since |w˜ − z˜| −1 is bounded, while e −|Im( ˜w)| is integrable. 38 Summarizing the above few observations, we see that if (5.20) holds, then (5.16) follows from the dominated convergence theorem with dominating function f(˜z, w˜) as in (5.21). We have thus reduced the proof to establishing (5.20), which we do below. Let us parametrize Γα/2 via w˜ = α/2 + iy. From (5.8) we have for y ≥ 3 ReFα(α/2 + iy) − ReFα(z+) = ReFα(α/2 + i) − ReFα(z+) + Z y 1 2v(2 − α 2 − 4v 2 ) α2 + 4v 2 dv ≤ Z y 1 2v(2 − α 2 − 4v 2 ) α2 + 4v 2 dv ≤ Z y 1 2v(−α 2 − 2v 2 ) α2 + 4v 2 dv ≤ Z y 1 (−v)dv = −y 2 /2 + 1/2 ≤ −y, where in going from the first to the second line we used (5.5), and in the last inequality we used y ≥ 3. By a similar argument applied to y ≤ −3 we conclude that for |y| ≥ 3 (5.23) ReFα(α/2 + iy) − ReFα(z+) ≤ −|y|. 
Combining (5.23) with (5.4), we conclude for w˜ = α/2 + iy with |y| ≥ 3 and z˜ ∈ C√ 2/2 |exp (N[Fα( ˜w) − Fα(˜z)])| = exp (N[ReFα( ˜w) − ReFα(z+)])·exp (N[ReFα(z+) − ReFα(˜z)]) ≤ e −N|y| . In addition, from (5.4) and (5.5) we have for all w˜ ∈ Γα/2 and z˜ ∈ C√ 2/2 |exp (N[Fα( ˜w) − Fα(˜z)])| = exp (N[ReFα( ˜w) − ReFα(z+)]) · exp (N[ReFα(z+) − ReFα(˜z)]) ≤ 1. The last two inequalities imply (5.20) with C1 = e 3 . This completes the proof of (5.20) and hence the proposition. □ 5.3. Proof of Theorem 1.3. In this section we present the proof of Theorem 1.3. For clarity we split the proof into three steps. Step 1. In this step we show that Mα N is a determinantal point process with correlation kernel (5.24) Kα N (x, y) = e x 2 N /2−y 2 N /2 · N −1/2 · KN (xN , yN ), where xN = α √ N +x/√ N and yN = α √ N +y/√ N. The idea is to verify the conditions of Lemma 3.4, by utilizing Proposition 4.1. 39 We introduce the affine transformation S(x) = √ N(x − α √ N) and note that S −1 (y) = α √ N + y/√ N. From Theorem 1.1 and (1.6) we know that (5.25) Mα N = X N i=1 δS(Xi) , where (X1, . . . , XN ) has density (1.3). We also know from Proposition 4.1 that MN = X N i=1 δXi is a determinantal point process with correlation kernel KN . Fix a collection of bounded, pairwise disjoint Borel sets A1, . . . , Am ∈ Rˆ and n1, . . . , nm ∈ N with n1 + · · · + nm = n. We compute the factorial moments of Mα N : E "Ym i=1 (Mα N (Ai))! (Mα N (Ai) − ni)!# = E "Ym i=1 (MN (S −1 (Ai)))! (MN (S−1(Ai)) − ni)!# = Z S−1(A1) n1×···×S−1(Am)nm det [KN (xi , xj )]n i,j=1 dx1 · · · dxn = Z A n1 1 ×···×A nmm det h KN S −1 (yi), S−1 (yj ))in i,j=1 dy1 √ N · · · dyn √ N = Z A n1 1 ×···×A nmm det h N −1/2 · KN S −1 (yi), S−1 (yj ))in i,j=1 dy1 · · · dyn, (5.26) where in going from the first to the second line we used that MN is determinantal with correlation kernel KN and Lemma 2.17. The other identities follow from simple changes of variables and linearity of the determinant. 
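As a brief aside, the Jacobian factor visible in (5.26) (the 1/√N accompanying each dy) can be illustrated numerically: when a determinantal process is pushed forward through an affine map S, the kernel composes with S^{-1} and picks up the Jacobian of S^{-1}, and in particular expected particle counts are preserved. A toy sketch with a hypothetical rank-one kernel (the kernel, the map S, and all constants below are illustrative, not taken from the text):

```python
import math

def K(x, y):
    # Toy correlation kernel: rank-one projection onto a Gaussian profile phi.
    phi = lambda t: math.pi ** -0.25 * math.exp(-t * t / 2.0)
    return phi(x) * phi(y)

c, b = 3.0, 1.0                        # hypothetical affine map S(x) = c*(x - b)

def K_pushed(u, v):
    # Kernel of the pushed-forward process: compose with S^{-1}(u) = b + u/c
    # and multiply by the Jacobian 1/c (the analogue of the 1/sqrt(N) in (5.24)).
    return K(b + u / c, b + v / c) / c

def integrate(f, lo, hi, m=20000):
    # Midpoint rule.
    h = (hi - lo) / m
    return sum(f(lo + (k + 0.5) * h) for k in range(m)) * h

# Expected number of particles in A = [0, 2] equals that in S(A) = [-3, 3].
n_A = integrate(lambda x: K(x, x), 0.0, 2.0)
n_SA = integrate(lambda u: K_pushed(u, u), -3.0, 3.0)
```

The two integrals agree up to floating-point rounding, since the substitution x = S^{-1}(u) maps one exactly onto the other.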
From (5.26) and Lemma 3.4 we conclude that Mα N is determinantal with correlation kernel N −1/2 · KN (α √ N + x/√ N, α√ N + y/√ N), which proves it is determinantal with correlation kernel Kα N as in (5.24) in view of Lemma 2.19. Step 2. In this step we conclude the proof of the theorem. We require the following technical lemma, whose proof is given in the next step. 40 Lemma 5.5. Let fn be a sequence of locally bounded functions on R d , and let f be a continuous function on R d . Suppose that for each sequence xn ∈ R d such that limn xn = x we have limn fn(xn) = f(x). Then, for each compact set K ⊂ R d we have (5.27) limn→∞ sup x∈K |fn(x) − f(x)| = 0. From (5.24) and Proposition 5.4 we know that for each sequence xN , yN ∈ R such that limN→∞ xN = x and limN→∞ yN = y we have (5.28) lim N→∞ Kα N (xN , yN ) = e αx−αy · Ksine,ϕ(x, y), where ϕ = √ 2 − α2. From (1.7) we know that Ksine,ϕ(x, y) is continuous on R 2 . From Lemma 5.5 we conclude that Kα N (x, y) converges uniformly over compact sets to e αx−αy · Ksine,ϕ(x, y). From Proposition 3.1 we conclude that Mα N converges weakly to a determinantal point process with correlation kernel e αx−αy · Ksine,ϕ(x, y), which by Lemma 2.19 and Definition 1.2 is precisely the sine process with parameter ϕ. Step 3. In this final step we prove Lemma 5.5. Fix a compact set K ⊂ R d , and let xn ∈ K be a sequence such that (5.29) limn→∞ |fn(xn) − f(xn)| = lim sup n→∞ sup x∈K |fn(x) − f(x)|. By possibly passing to a subsequence, which we continue to call xn, we may assume that xn → x ∗ ∈ K. Here, we used that K is compact. Since f is continuous, we have limn f(xn) = f(x ∗ ). On the other hand, by the conditions of the lemma we have limn→∞ |fn(xn) − f(x ∗ )| = 0. Combining the last few statements and the triangle inequality we conclude limn→∞ |fn(xn) − f(xn)| ≤ limn→∞ |fn(xn) − f(x ∗ )| + limn→∞ |f(xn) − f(x ∗ )| = 0. The latter and (5.29) concludes the proof of Lemma 5.5 and hence the theorem. 41 Section 6. 
Edge asymptotics

The goal of this section is to prove Theorem 1.6. In Section 6.1 we introduce a certain function F(z) that naturally arises in our analysis and establish some properties for the latter. In Section 6.2 we show that under appropriate scaling the kernels KN from (4.2) converge to the Airy kernel from Definition 1.4 – this is Proposition 6.6. In Section 6.3 we prove Theorem 1.6 by utilizing Proposition 6.6 and results from Sections 2, 3 and 4.

6.1. Preliminary estimates. We define the function

(6.1) F(z) = z^2 − 2√2 z + log z + 3/2 + (log 2)/2,

where as usual we take the principal branch of the logarithm. In this section we establish various properties of the function F(z), which will be used in Section 6.2.

Lemma 6.1. There exist constants δ0 ∈ (0, 1/4) and C0 > 0 such that for z ∈ C with |z − √2/2| ≤ δ0

(6.2) |F(z) − (1/3) · 2^{3/2} · (z − √2/2)^3| ≤ C0 · |z − √2/2|^4.

Proof. By a direct computation we have

F′(z) = 2z − 2√2 + 1/z, F′′(z) = 2 − 1/z^2, F′′′(z) = 2/z^3,

which shows by Taylor expansion that for |z − √2/2| ≤ 1/4 we have

F(z) = (2^{5/2}/3!) (z − √2/2)^3 + O(|z − √2/2|^4) = (1/3) · 2^{3/2} · (z − √2/2)^3 + O(|z − √2/2|^4).

The latter proves (6.2). □

Lemma 6.2. Let δ0 be as in Lemma 6.1. We can find constants δ1 ∈ (0, δ0) and C1 > 0 such that

(6.3) Re F(z) ≤ −C1 · |z − √2/2|^3 if z − √2/2 = re^{±iπ/3} and r ∈ [0, δ1], and Re F(z) ≥ C1 · |z − √2/2|^3 if z − √2/2 = re^{±2iπ/3} and r ∈ [0, δ1].

Proof. Let z(r) = √2/2 + re^{±iπ/3}, where r ∈ [0, δ0], and note that

Re[ (1/3) · 2^{3/2} · (z(r) − √2/2)^3 ] = (1/3) · 2^{3/2} · r^3 · cos(π) = −(1/3) · 2^{3/2} · r^3.

Combining the latter with (6.2) and the triangle inequality we conclude

Re F(z(r)) ≤ −(1/3) · 2^{3/2} · r^3 + C0 · r^4.

Let δ1 ∈ (0, δ0) be sufficiently small so that C0δ1 ≤ 1/3. The last equation gives for r ∈ [0, δ1]

Re F(z(r)) ≤ −(1/3) · 2^{3/2} · r^3 + C0 · r^4 ≤ −(1/3) · 2^{3/2} · r^3 + C0δ1 · r^3 ≤ −(1/3) · [2^{3/2} − 1] · r^3.
The last equation implies the first line in (6.3) with C1 = (1/3) · [23/2 − 1]. The second line is established analogously. □ The next set of results describe the behavior of the real part of F(z) along certain contours. Lemma 6.3. Let a ∈ (0, √ 2/2). Then, we have (6.4) d dθReF(aeiθ ) > 0 if θ ∈ (0, π) and d dθReF(aeiθ ) < 0 if θ ∈ (π, 2π). Proof. By a direct computation using (6.1) we have ReF(aeiθ ) = Re a 2 e 2iθ − 2 √ 2aeiθ + log(aeiθ ) + 3 2 + log 2 2 = 2a 2 cos2 θ − a 2 − 2 √ 2a cos θ + log a + 3 2 + log 2 2 . Differentiating the last expression with respect to θ gives d dθReF(aeiθ ) = −2a 2 cos θ sin θ + 2√ 2a sin θ = 2a sin θ h√ 2 − 2a cos θ i Since a ∈ (0, √ 2/2), we have √ 2 − 2a cos θ > 0, and so the last equation implies (6.4) once we use that sin θ is positive for θ ∈ (0, π) and negative for θ ∈ (π, 2π). □ Lemma 6.4. Let a > √ 2/2 and set ca = 2 − 1/a2 ∈ (0, 2). Then, we have for all y > 0 (6.5) d dyReF(a ± iy) ≤ −ca · y. Proof. By a direct computation using (6.1) we have ReF(a ± iy) = Re (a ± iy) 2 − 2 √ 2(a ± iy) + log(a ± iy) + 3 2 + log 2 2 = a 2 − y 2 − 2 √ 2a log q a 2 + y 2 + 3 2 + log 2 2 . 43 Differentiating the last expression with respect to y gives d dyReF(a ± iy) = −2y + 1 2 · 2y a 2 + y 2 = y −2 + 1 a 2 + y 2 ≤ y −2 + 1 a 2 = −ca · y, where in the next to last inequality we used that y > 0. The latter gives (6.5). □ 6.2. Kernel convergence. In this section we show that under appropriate scaling the kernels KN from (4.2) converge to the Airy kernel from Definition 1.4 – the precise statement can be found in Proposition 6.6 below. We require the following technical lemma for the proof of Proposition 6.6, which establishes a useful integral representation for the Airy kernel. Lemma 6.5. Let KAiry be as in Definition 1.4. 
Then, for any x, y ∈ R (6.6) KAiry(x, y) = 1 (2πi) 2 Z γ˜π/3 dv Z γ2π/3 dwe v 3/3−w3/3−vy+wx v − w , where γ˜π/3 is the contour consisting of the rays {yeiπ/3 : y ∈ [1, ∞)}, {ye−iπ/3 : y ∈ [1, ∞)} and the vertical segment connecting e −iπ/3 with e iπ/3 , and γ2π/3 is the contour consisting of the rays {ye2πi/3 : y ∈ [0, ∞)} and {ye−2πi/3 : y ∈ [0, ∞)}. The contours γ˜π/3 and γ2π/3 are oriented in the direction of increasing imaginary part. Proof. For clarity we split the proof into two steps. Step 1. In this step we show that (6.7) KAiry(x, y) = Z ∞ 0 Ai(x + z) Ai(y + z)dz, where we recall that Ai(x) is the Airy function from (1.10). We mention that in from (3, (3.7.20)) we have as x → ∞ (6.8) Ai(x) ∼ π −1/2x −1/4 e −2x 3/2/3 /2 and Ai′ (x) ∼ −π −1/2x 1/4 e −2x 3/2/3 /2. In particular, (6.8) shows that the improper integral in (6.7) converges. We mention that (6.7) appears in (11). Below we establish it using an elegant argument from (44), which we supplement with more details. 44 Applying ∂ ∂x + ∂ ∂y to (1.12) we get ∂ ∂x + ∂ ∂y KAiry(x, y) = Ai′ (x) Ai′ (y) x − y + Ai(x) Ai′ (y) (x − y) 2 − Ai′′(x) Ai(y) x − y − Ai′ (x) Ai(y) (x − y) 2 + Ai(x) Ai′′(y) x − y − Ai(x) Ai′ (y) (x − y) 2 − Ai′ (x) Ai′ (y) x − y + Ai′ (x) Ai(y) (x − y) 2 = Ai(x) Ai′′(y) − Ai′′(x) Ai(y) x − y = y Ai(x) Ai(y) − x Ai(x) Ai(y) x − y = (−1) Ai(x) Ai(y). We mention that the first equality in the last line follows from the fact that Ai solves the Airy differential equation (1.11), see (3, (3.1.4)). We now apply ∂ ∂x + ∂ ∂y to the right side of (6.7). The result is ∂ ∂x + ∂ ∂y Z ∞ 0 Ai(x + z) Ai(y + z)dz = Z ∞ 0 ∂ ∂x + ∂ ∂y Ai(x + z) Ai(y + z)dz = Z ∞ 0 ∂ ∂z (Ai(x + z) Ai(y + z)) dz = limz→∞ Ai(x + z) Ai(y + z) − limz→0 Ai(x + z) Ai(y + z) = − Ai(x) Ai(y). We mention that the exchange of the order of the derivatives and the integral is justified by (6.8), which is also used in the last equality. 
The last two displayed equations show that (6.9) KAiry(x, y) − Z ∞ 0 Ai(x + z) Ai(y + z)dz = ϕ(x − y), for some smooth function ϕ. Fix r ∈ R, set x = y + r and let y → ∞ to conclude lim y→∞ KAiry(y + r, y) = 0 and lim y→∞ Z ∞ 0 Ai(y + r + z) Ai(y + z)dz = 0. Indeed, the first equality follows from the definition of KAiry in (1.12) and (6.8), while the second one follows from (6.8) and the dominated convergence theorem. The above shows that ϕ(r) = 0, which together with (6.9) shows (6.7). Step 2. In this step we use (6.7) to prove (6.6). Substituting the formula for Ai from (1.10) into (6.7) gives KAiry(x, y) = 1 (2πi) 2 Z ∞ 0 Z γ Z γ e v 3/3−v(y+z) · e v˜ 3/3−v˜(x+z) dvdvdz. ˜ 45 We may now deform γ to γ˜π/3 without crossing any poles and hence without affecting the value of the integral by Cauchy’s theorem. Performing the deformation, and setting v˜ = −w gives KAiry(x, y) = 1 (2πi) 2 Z ∞ 0 Z γ˜π/3 Z γ2π/3 e v 3/3−v(y+z) · e −w3/3+w(x+z) dvdwdz = 1 (2πi) 2 Z γ˜π/3 Z γ2π/3 e v 3/3−w3/3−vy+wx Z ∞ 0 e z(w−v) dzdvdw = 1 (2πi) 2 Z γ˜π/3 Z γ2π/3 e v 3/3−w3/3−vy+wx · 1 v − w dvdw. (6.10) We mention that in going from the first to the second line in (6.10) we used Fubini’s theorem, which is justified, since by the definition of the contours we have for all v ∈ γ˜π/3 , and w ∈ γ2π/3 that Re[w − v] ≤ −1/2 and e v 3/3−w3/3−vy+wx · e z(w−v) ≤ e · exp −|v| 3 /3 − |w| 3 /3 + |v||y| + |w||x| − z/2 , and the latter is integrable due to the cubes in the exponential. We also mention that the last equality in (6.10) used the fact that Z ∞ 0 e −λzdz = 1 λ , for all λ ∈ C with Re(λ) > 0, applied to λ = w − v. Equation (6.10) implies (6.6). □ We now turn to the main result of this section. Proposition 6.6. Fix x, ˜ y˜ ∈ R and sequences x˜N , y˜N ∈ R such that limN→∞ x˜N = ˜x and limN→∞ y˜N = ˜y. 
Then, we have (6.11) lim N→∞ e x 2 N /2−y 2 N /2 · N −1/6 · e N1/3 (˜yN −x˜N ) · KN (xN , yN ) = 21/2 · KAiry(˜x, y˜), where xN = 21/2 · N1/2 + 2−1/2 · x˜N · N −1/6 , yN = 21/2 · N1/2 + 2−1/2 · y˜N · N −1/6 . We recall that KN is as in (4.2) and KAiry is as in Definition 1.4. Proof. For clarity we split the proof into four steps. Step 1. The goal of this step is to find a suitable expression for the terms on the left side of (6.11). The key statement we prove is equation (6.13). Let δ1 ∈ (0, 1/4) be as in Lemma 6.2 and α = √ 2/2. Starting from the formula for KN from (4.2), in which we take R = (1/2) · N1/2 and a = (α + δ1/2) · N1/2 , and performing the variable 46 changes w˜ = N −1/2w and z˜ = N −1/2 z, we obtain e x 2 N /2−y 2 N /2 · N −1/6 · e N1/3 (˜yN −x˜N ) · KN (xN , yN ) = N1/3 2(πi) 2 Z Γα+δ1/2 dw˜ I C1/2 dz˜ 1 w˜ − z˜ · exp −N 1/3 ( √ 2 ˜w − 1)˜yN + N 1/3 ( √ 2˜z − 1)˜xN · exp (N[F( ˜w) − F(˜z)]), (6.12) where we recall that Cr and Γa are as in Definition 5.1 and F(z) is as in (6.1). We next introduce certain contours, to which we will deform the ones in (6.12). Let γ− be the contour consisting of the two segments connecting α to α + δ1e ±πi/3 , and the circular arc, which is centered at the origin, starts from α + δ1e πi/3 and goes counter clockwise to α + δ1e −πi/3 . We note that γ− is a closed contour and we orient it positively. We denote by γ 1 − the circular part of γ− and by γ 0 − the one consisting of the two segments, with the orientation inherited by γ−. For N sufficiently large so that N −1/3 ∈ (0, δ1), we define the contours γ+,N , γ0 +,N and γ 1 +,N as follows. γ 1 +,N contains two vertical rays: one starting from α+δ1e πi/3 and going straight up, and the second starting from α+δ1e −πi/3 and going straight down. γ 0 +,N consists of three segments: the first connecting α+δ1e πi/3 to α+ N −1/3 e πi/3 , the second connecting α+ N −1/3 e πi/3 to α+ N −1/3 e −πi/3 , and the third connecting α+N −1/3 e −πi/3 to α+δ1e −πi/3 . 
Finally, γ+,N = γ 0 +,N ∪γ 1 +,N and all three contours are oriented in the direction of increasing imaginary part. Figure 1 provides a visualization of these contours. We now proceed to deform C1/2 in (6.12) to γ− and Γα+δ1/2 to γ+,N . Note that in the process of deformation we do not cross any poles and so by Cauchy’s theorem the value of the integral γ 0 − γ 1 − γ 0 +,N γ 1 +,N Re Im γ− = γ 0 − ∪ γ 1 − γ+,N = γ 0 +,N ∪ γ 1 +,N Figure 1. Contours in (6.13) 47 remains the same. We conclude that e x 2 N /2−y 2 N /2 · N −1/6 · e N1/3 (˜yN −x˜N ) · KN (xN , yN ) = X i,j∈{0,1} A i,j N , where A i,j N = N1/3 2(πi) 2 Z γ i +,N dw˜ I γ j − dz˜ 1 w˜ − z˜ · exp −N 1/3 ( √ 2 ˜w − 1)˜yN + N 1/3 ( √ 2˜z − 1)˜xN × exp (N[F( ˜w) − F(˜z)]) for i, j = 0, 1. (6.13) Step 2. In view of (6.13) we see that to prove (6.11) it suffices to show (6.14) lim N→∞ A i,j N = 0 if i + j ≥ 1 and lim N→∞ A 0,0 N = √ 2KAiry(˜x, y˜). In the remainder of this step we establish various estimates for the functions that appear in the integrals in (6.13), which will be used in the proof of (6.14) in the steps below. We first note here that Rew˜ and Rez˜ are bounded for w˜ ∈ γ+,N and z˜ ∈ γ−, and thus there exists some constant d1 > 0 such that (6.15) exp −N 1/3 ( √ 2 ˜w − 1)˜yN + N 1/3 ( √ 2˜z − 1)˜xN ≤ e d1N1/3 . In addition, for i + j ≥ 1 we see that γ j − and γ i +,N are at least distance δ1/2 apart, implying that for w˜ ∈ γ i +,N and z˜ ∈ γ j − (6.16) 1 w˜ − z˜ ≤ 2 δ1 . Using Lemmas 6.2 and 6.3, we also have | exp (−NF(˜z))| = exp (−NReF(˜z)) ≤ exp −C1N|z˜ − α| 3 if z˜ ∈ γ 0 −, and | exp (−NF(˜z))| ≤ exp −NReF(α + δ1e πi/3 ) ≤ exp −C1Nδ3 1 if z˜ ∈ γ 1 −. (6.17) We mention that in deriving the second line in (6.17) we implicitly used that δ1 ∈ (0, 1/4), which in particular implies that the radius of the circular arc γ 1 − is in (0, √ 2/2). By combining Lemmas 6.1 and 6.2, we see that there exists C2 > 0 such that (6.18) | exp (NF( ˜w))| = exp (NReF( ˜w)) ≤ exp C2 − C1N|w˜ − α| 3 if w˜ ∈ γ 0 +,N . 
48 The last estimate we derive in this step is (6.19) |exp (NF( ˜w))| ≤ exp −C1Nδ3 1 − N(ca/2)(Im( ˜w) 2 − 3δ 2 1/4) for all w˜ ∈ γ 1 +,N , where a = α + δ1/2 and ca is as in Lemma 6.4. Let us explain how we show (6.19) briefly. We focus on the case when w˜ ∈ γ 1 +,N and Imw˜ ≥ √ 3δ1/2. The case when Imw˜ ≤ −√ 3δ1/2 is handled similarly. Writing w˜ = α/2 +δ1/2 +ir we have by the fundamental theorem of calculus that |exp (NF(α/2 + δ1/2 + ir))| = exp NReF(α + δ1e πi/3 ) + N Z r √ 3δ1/2 d dyReF(α + δ1/2 + iy)dy! ≤ exp −C1Nδ3 1 − caN Z r √ 3δ1/2 ydy! = exp −C1Nδ3 1 − N(ca/2)(r 2 − 3δ 2 1/4) , where in going from the first to the second line we used Lemmas 6.2 and6.4. The last equation implies (6.19). Step 3. In this step we prove the left part of (6.14). We mention that the integrals below involving |dz˜| or |dw˜| are with respect to arc-length, see (22, IV.1.3) for more details. We also use without mention that γ 0 +,N and γ− have uniformly bounded (in N) length. First, for A 0,1 N , by combining (6.15), (6.16), the second line in (6.17), and (6.18) we get (6.20) A 0,1 N ≤ N1/3 2π 2δ2 Z γ 0 +,N |dw˜| Z γ 1 − |dz˜| exp d1N 1/3 − C1Nδ3 1 + C2 ≤ e −C1Nδ3 1 /3 , where the last bound holds for large N. Next, for A 1,1 N , we combine (6.15), (6.16), the second line in (6.17), and (6.19) to get A 1,1 N ≤ N1/3 2π 2δ2 Z γ 1 +,N |dw˜| Z γ 1 − |dz˜| exp d1N 1/3 − 2C1Nδ3 1 − N(ca/2)(Im( ˜w) 2 − 3δ 2 1/4) ≤ e −C1Nδ3 1 · Z ∞ √ 3δ1/2 exp −N(ca/2)(r 2 − 3δ 2 1/4) dr ≤ e −C1Nδ3 1 /2 , (6.21) where both inequalities on the second line hold for large N. 49 Finally, for A 1,0 N , we combine (6.15), (6.16), the first line in (6.17), and (6.19) to get A 1,0 N ≤ N1/3 2π 2δ2 Z γ 1 +,N |dw˜| Z γ 0 − |dz˜| exp d1N 1/3 − C1Nδ3 1 − N(ca/2)(Im( ˜w) 2 − 3δ 2 1/4) ≤ e −C1Nδ3 1 /2 · Z ∞ √ 3δ1/2 exp −N(ca/2)(r 2 − 3δ 2 1/4) dr ≤ e −C1Nδ3 1 /4 , (6.22) where just as before the inequalities hold for large N. 
Therefore, we can combine (6.20), (6.21), and (6.22) to conclude the proof of the left part of (6.14).

Step 4. In this step we prove the right part of (6.14). We apply the change of variables z̃ = √2/2 + wN^{−1/3} and w̃ = √2/2 + vN^{−1/3} in the definition of A^{0,0}_N in (6.13) to get

(6.23) A^{0,0}_N = 1/(2(πi)^2) ∫_{γ̃π/3} dv ∫_{γ2π/3} dw · exp( N[F(√2/2 + vN^{−1/3}) − F(√2/2 + wN^{−1/3})] ) × (1/(v − w)) · exp( −√2 v ỹN + √2 w x̃N ) · 1{|Re(v)| ≤ δ1N^{1/3}/2, |Re(w)| ≤ δ1N^{1/3}/2}.

Our task is to take the N → ∞ limit of (6.23), by applying the dominated convergence theorem, and verify the right part of (6.14). From Lemma 6.1 we see that the integrand in (6.23) converges pointwise to

(6.24) exp( (√2 v)^3/3 − (√2 w)^3/3 ) · (1/(v − w)) · exp( −√2 v ỹ + √2 w x̃ ).

We next find a dominating function. First, from the first line in (6.17) and (6.18) we have

(6.25) | exp( N[F(√2/2 + vN^{−1/3}) − F(√2/2 + wN^{−1/3})] ) | ≤ exp( C2 − C1|v|^3 − C1|w|^3 ).

Next, we know the contours γ̃π/3 and γ2π/3 are at least a distance of 1/2 from each other, due to the way these contours were defined in Step 1. Therefore, we have that

(6.26) |1/(v − w)| ≤ 2.

Finally, using the basic inequality |e^λ| ≤ e^{|λ|}, we know there exists some constant C3 > 0 such that

(6.27) | exp( −√2 v ỹN + √2 w x̃N ) | ≤ exp( C3(|v| + |w|) ).

Merging (6.25), (6.26), and (6.27), we now have that the integrand in (6.23) is absolutely bounded by the integrable function

(6.28) exp( C2 − C1|v|^3 − C1|w|^3 + C3|v| + C3|w| ).

Combining (6.23), (6.24) and the dominated convergence theorem with dominating function (6.28), we conclude that

(6.29) lim_{N→∞} A^{0,0}_N = 1/(2(πi)^2) ∫_{γ̃π/3} dv ∫_{γ2π/3} dw (1/(v − w)) · exp( (√2 v)^3/3 − (√2 w)^3/3 − √2 v ỹ + √2 w x̃ ).

Performing a final change of variables v̂ = √2 v and ŵ = √2 w, we rewrite (6.29):

(6.30) lim_{N→∞} A^{0,0}_N = 1/(2^{3/2}(πi)^2) ∫_{γ̃π/3} dv̂ ∫_{γ2π/3} dŵ (1/(v̂ − ŵ)) · exp( v̂^3/3 − ŵ^3/3 − v̂ ỹ + ŵ x̃ ).
We mention that in deriving (6.30) we used that √ 2γ2π/3 = γ2π/3 and that we can deform √ 2˜γπ/3 to γ˜π/3 without affecting the value of the integral by Cauchy’s theorem. The right side of (6.30) is precisely √ 2KAiry(x, y). This concludes the proof of (6.14) and hence the proposition. □ 6.3. Proof of Theorem 1.6. The proof we present is similar to that of Theorem 1.3, and for clarity is split into two steps. Step 1. In this step we show M edge N is a determinantal point process with correlation kernel (6.31) K edge N (x, y) = 2−1/2 · N −1/6 · e x 2 N /2−y 2 N /2 · e N1/3 (y−x) · KN (xN , yN ), where xN = 21/2 · N1/2 + 2−1/2 · N −1/6 · x and yN = 21/2 · N1/2 + 2−1/2 · N −1/6 · y. The idea is to verify the conditions of Lemma 3.4 by utilizing Proposition 4.1. We introduce the affine transformation T(x) = 21/2N 1/6 (x − √ 2N) and note that T −1 (y) = √ 2N + 2−1/2N −1/6 · y. From Theorem 1.1 and (1.9) we know that (6.32) M edge N = X N i=1 δT(Xi) , 51 where (X1, . . . , XN ) has density (1.3). We also know from Proposition 4.1 that MN = X N i=1 δXi is a determinantal point process with correlation kernel KN . Fix a collection of bounded, pairwise disjoint Borel sets A1, . . . , Am ∈ Rˆ and n1, . . . , nm ∈ N with n1 + · · · + nm = n. We compute the factorial moments of M edge N : E "Ym i=1 (M edge N (Ai))! (M edge N (Ai) − ni)!# = E "Ym i=1 (MN (T −1 (Ai)))! (MN (T −1(Ai)) − ni)!# = Z T −1(A1) n1×···×T −1(Am)nm det [KN (xi , xj )]n i,j=1 dx1 · · · dxn = Z A n1 1 ×···×A nmm det h KN T −1 (yi), T −1 (yj ))in i,j=1 dy1 2 1/2N1/6 · · · dyn 2 1/2N1/6 = Z A n1 1 ×···×A nmm det h 2 −1/2 · N −1/6 · KN T −1 (yi), T −1 (yj ))in i,j=1 dy1 · · · dyn, (6.33) where in going from the first to the second line we used that MN is determinantal with correlation kernel KN and Lemma 2.17. The other identities follow from simple changes of variables and linearity of the determinant. 
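As a numerical aside, the two expressions for the Airy kernel used in this section — the form (Ai(x)Ai′(y) − Ai′(x)Ai(y))/(x − y) from (1.12) and the representation (6.7) as ∫₀^∞ Ai(x+z)Ai(y+z) dz — can be checked against each other. A rough pure-Python sketch, computing Ai and Ai′ by folding the contour of (1.10) onto the ray arg v = π/3 (the truncation lengths and grid sizes below are ad hoc choices, not from the text):

```python
import cmath
import math

def airy_ai_and_deriv(x, t_max=8.0, steps=1200):
    # Ai(x) = (1/pi) Im \int_0^inf w * exp(-t^3/3 - x t w) dt with w = e^{i pi/3},
    # since (t w)^3/3 = -t^3/3; Ai'(x) carries an extra factor -(t w). Midpoint rule.
    w = cmath.exp(1j * math.pi / 3)
    h = t_max / steps
    ai = aip = 0j
    for k in range(steps):
        t = (k + 0.5) * h
        e = cmath.exp(-t ** 3 / 3 - x * t * w)
        ai += w * e * h
        aip += -(t * w) * w * e * h
    return ai.imag / math.pi, aip.imag / math.pi

def k_airy(x, y):
    # Airy kernel as in (1.12) (assumes x != y).
    ax, apx = airy_ai_and_deriv(x)
    ay, apy = airy_ai_and_deriv(y)
    return (ax * apy - apx * ay) / (x - y)

def k_airy_integral(x, y, z_max=9.0, steps=240):
    # Right side of (6.7), truncated at z_max: Ai decays super-exponentially,
    # cf. (6.8). (Recomputing Ai' here is wasteful but keeps the sketch short.)
    h = z_max / steps
    s = 0.0
    for k in range(steps):
        z = (k + 0.5) * h
        s += airy_ai_and_deriv(x + z)[0] * airy_ai_and_deriv(y + z)[0] * h
    return s
```

Where SciPy is available, scipy.special.airy gives reference values for Ai and Ai′; the pure-Python routine above agrees with the known values Ai(0) ≈ 0.355028 and Ai′(0) ≈ −0.258819.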
From (6.32) and Lemma 3.4 we see that M edge N is determinantal with correlation kernel 2 −1/2 · N −1/6 · KN (21/2 · N 1/2 + 2−1/2 · N −1/6 · x, 2 1/2 · N 1/2 + 2−1/2 · N −1/6 · y), which along with Lemma 2.19 proves it is determinantal with correlation kernel K edge N as in (6.31). Step 2. In this step we complete the proof of the theorem. From (6.31) and Proposition 6.6 we know that for each sequence xN , yN ∈ R such that limN→∞ xN = x and limN→∞ yN = y we have (6.34) lim N→∞ K edge N (xN , yN ) = KAiry(x, y). From (1.12) we know that KAiry(x, y) is continuous on R 2 , and so from Lemma 5.5 we conclude that K edge N (x, y) converges uniformly over compact sets to KAiry(x, y). Therefore, we have satisfied the criteria for Proposition 3.1 and we conclude that M edge N converges weakly to a determinantal point process with correlation kernel KAiry(x, y), which from Definition 1.4 is the Airy point process. 52 References 1. Andrew Ahn, Airy point process via supersymmetric lifts, Probability and Mathematical Physics 3 (2022), no. 4, 869–938. 2. Gideon Amir, Ivan Corwin, and Jeremy Quastel, Probability distribution of the free energy of the continuum directed random polymer in 1 + 1 dimensions, Communications on Pure and Applied Mathematics 64 (2010), no. 4, 466–537. 3. G. Anderson, A. Guionnet, and O. Zeitouni, Introduction to random matrices, Cambridge Studies in Advanced Mathematics, 2009. 4. Jinho Baik, Percy Deift, and Kurt Johansson, On the distribution of the length of the longest increasing subsequence of random permutations, 1999. 5. Patrick Billingsley, Convergence of probability measures, Wiley, 1999. 6. Alexei Borodin, Notes on random matrices, Unpublished. 7. Alexei Borodin and Vadim Gorin, Lectures on integrable probability, 2015. 8. , Moments match between the KPZ equation and the Airy point process, Symmetry, Integrability and Geometry: Methods and Applications (2016). 9. 
Alexei Borodin, Andrei Okounkov, and Grigori Olshanski, Asymptotics of Plancherel measures for symmetric groups, 1999. 10. Erhan Çinlar, Probability and stochastics, Springer, 2011. 11. P.A. Clarkson and J.B. McLeod, A connection formula for the second Painlevé transcendent, Arch. Rat. Mech. Anal. 103 (1988), 97–138. 12. Ivan Corwin, The Kardar-Parisi-Zhang equation and universality class, 2011. 13. P. Deift and D. Gioev, Random matrix theory: Invariant ensembles and universality, Courant Lecture Notes, American Mathematical Soc., 2009. 14. Rick Durrett, Probability: Theory and examples, Cambridge University Press, 2019. 15. László Erdős, Universality for random matrices and log-gases, 2012. 16. László Erdős, Sandrine Peche, Jose A. Ramirez, Benjamin Schlein, and Horng-Tzer Yau, Bulk universality for Wigner matrices, 2009. 17. László Erdős, Jose A. Ramirez, Benjamin Schlein, and Horng-Tzer Yau, Universality of sinekernel for wigner matrices with a small Gaussian perturbation, 2010. 18. Patrik L. Ferrari and Herbert Spohn, Step fluctuations for a faceted crystal, 2003. 19. Gerald B. Folland, Real analysis: modern techniques and their applications, Wiley, 1999. 53 20. Peter J. Forrester and S. Ole Warnaar, The importance of the Selberg integral, 2007. 21. P.J. Forrester, Log-gases and random matrices, Princeton University Press, Princeton, 2010. 22. Theodore W. Gamelin, Complex analysis, Springer New York, NY, 2001. 23. Vadim Gorin, Lecture 18: Edge limits of tilings of hexagons, Cambridge Studies in Advanced Mathematics, pp. 142–150, Cambridge University Press, 2021. 24. I.S. Gradshteyn and I.M. Ryzhik, Table of integrals, series, and products, Elsevier, 2007. 25. Thomas Guhr, Axel Müller-Groeling, and Hans A. Weidenmüller, Random-matrix theories in quantum physics: common concepts, Physics Reports 299 (1998), no. 4-6, 189–425. 26. Kurt Johansson, Shape fluctuations and random matrices, Communications in Mathematical Physics 209 (2000), no. 2, 437–476. 27. 
, The arctic circle boundary and the Airy process, 2005. 28. , Random matrices and determinantal processes, 2005. 29. Olav Kallenberg, Random measures, theory and applications, Springer, 2017. 30. Mehran Kardar, Giorgio Parisi, and Yi-Cheng Zhang, Dynamic scaling of growing interfaces, Phys. Rev. Lett. 56 (1986), 889–892. 31. Richard Kenyon, Lectures on dimers, 2009. 32. Giacomo Livan, Marcel Novaes, and Pierpaolo Vivo, Introduction to random matrices, Springer International Publishing, 2018. 33. M.L. Mehta, Random matrices, 3rd edition, Elsevier/Academic Press, Amsterdam, 2004. 34. M.L. Mehta and F J Dyson, Statistical theory of the energy levels of complex systems. part V, Journal of Mathematical Physics (New York) (U.S.) (1963). 35. Andrei Okounkov and Nicolai Reshetikhin, The birth of a random matrix, Moscow Mathematical Journal 6 (2006), no. 3, 553–566. 36. Grigori Olshanski, Determinantal point processes and fermion quasifree states, Communications in Mathematical Physics 378 (2020), no. 1, 507–555. 37. Michael Praehofer and Herbert Spohn, Scale invariance of the PNG droplet and the Airy process, 2002. 38. V. Prasolov, Problems and Theorems in Linear Algebra, Amer. Math. Soc., Providence, RI, 1994. 39. Jeremy Quastel and Herbert Spohn, The one-dimensional KPZ equation and its universality class, Journal of Statistical Physics 160 (2015), no. 4, 965–984. 54 40. A Soshnikov, Determinantal random point fields, Russian Mathematical Surveys 55 (2000), no. 5, 923–975. 41. Elias M. Stein and Rami Shakarchi, Complex analysis, Princeton University Press, 2003. 42. T. Tao, Topics in random matrix theory, Graduate studies in mathematics, American Mathematical Society, 2012. 43. Terence Tao and Van Vu, Random matrices: The universality phenomenon for wigner ensembles, 2012. 44. Craig A. Tracy and Harold Widom, Level-spacing distributions and the Airy kernel, Commun. Math. Phys. 159 (1994), 151–174. 45. , A system of differential equations for the Airy process, 2004. 46. 
———, A Fredholm determinant representation in ASEP, Journal of Statistical Physics 132 (2008), no. 2, 291–300.
47. ———, Integral formulas for the asymmetric simple exclusion process, Communications in Mathematical Physics 279 (2008), no. 3, 815–844.
48. ———, Asymptotics in ASEP with step initial condition, Communications in Mathematical Physics 290 (2009), no. 1, 129–154.
49. Eugene P. Wigner, Characteristic vectors of bordered matrices with infinite dimensions, Annals of Mathematics 62 (1955), no. 3, 548–564.
50. ———, On the distribution of the roots of certain symmetric matrices, Annals of Mathematics 67 (1958), no. 2, 325–327.
51. Martin R. Zirnbauer, Symmetry classes, 2010.

Appendix A. Semi-rings

In this section we prove Lemmas 2.12, 2.13 and 2.14 from Section 2.2. The statements of these three lemmas are recalled below as Lemmas A.1, A.2 and A.3, respectively, for the reader's convenience.

Lemma A.1. Suppose that $\mathcal{I} \subseteq \mathcal{E}$ is a semi-ring. If $I_1, \dots, I_k \in \mathcal{I}$, then there exist finitely many pairwise disjoint $J_1, \dots, J_m \in \mathcal{I}$ such that
$$I_i = \bigcup_{j: J_j \subseteq I_i} J_j \quad \text{for } i = 1, \dots, k, \qquad \text{and} \qquad \bigcup_{i=1}^{k} I_i = \bigcup_{j=1}^{m} J_j.$$

Proof. We proceed by induction on $k$. When $k = 1$, we can simply let $J_1 = I_1$. Suppose we have proved the result for $k = n$ and that $k = n + 1$. By the induction hypothesis we can find $J_1, \dots, J_a \in \mathcal{I}$ such that $J_i \cap J_j = \emptyset$ for $1 \le i \ne j \le a$, each $I_i$ for $i = 1, \dots, n$ is the union of finitely many $J_j$'s, and $\bigcup_{i=1}^{n} I_i = \bigcup_{j=1}^{a} J_j$.

Note that since $\mathcal{I}$ is a semi-ring, $I_{n+1} \setminus J_1 = I_{n+1} \setminus (J_1 \cap I_{n+1})$ is a finite disjoint union of elements of $\mathcal{I}$. Iterating this, we see that
$$I_{n+1} \setminus (J_1 \cup \cdots \cup J_a) = \bigsqcup_{c_{a+1}=1}^{C_{a+1}} I_{a+1, c_{a+1}},$$
where $I_{a+1, c_{a+1}} \in \mathcal{I}$. Similarly, we also have
$$J_i \setminus I_{n+1} = \bigsqcup_{c_i = 1}^{C_i} I_{i, c_i} \quad \text{for } i = 1, \dots, a,$$
where $I_{i, c_i} \in \mathcal{I}$. We now consider the sets $\{I_{i, c_i}\}$ for $i = 1, \dots, a+1$ and $c_i = 0, \dots, C_i$, where $I_{i,0} = J_i \cap I_{n+1}$ for $i = 1, \dots, a$ and $I_{a+1, 0} = \emptyset$.
One readily verifies that these sets are pairwise disjoint elements of $\mathcal{I}$, and moreover
$$J_i = \bigsqcup_{c_i=0}^{C_i} I_{i,c_i} \quad \text{for } i = 1, \dots, a, \qquad \text{and} \qquad I_{n+1} = \bigsqcup_{c_{a+1}=1}^{C_{a+1}} I_{a+1, c_{a+1}} \,\sqcup\, \Big( \bigsqcup_{i=1}^{a} I_{i,0} \Big).$$
We conclude that the $I_{i,c_i}$'s satisfy the conditions in the statement of the lemma for the sets $I_1, \dots, I_{n+1}$. This completes the induction step, and the general result now follows by induction. □

Lemma A.2. Let $\mathcal{I} = \{(a, b] : a \le b;\ a, b \in \mathbb{Q}\}$. Then $\mathcal{I}$ is a countable dissecting semi-ring on $\mathbb{R}$.

Proof. Since $\mathbb{Q}$ is countable, we conclude the same for $\mathcal{I}$. We next show that $\mathcal{I}$ is a semi-ring. Take $A_1, A_2 \in \mathcal{I}$. Then $A_1 = (a_1, b_1]$ and $A_2 = (a_2, b_2]$ for some $a_1, b_1, a_2, b_2 \in \mathbb{Q}$ with $a_1 \le b_1$ and $a_2 \le b_2$. If $a_1 \ge b_2$ or $b_1 \le a_2$, then $A_1 \cap A_2 = \emptyset \in \mathcal{I}$. In all other cases, $A_1 \cap A_2 = (a_1 \vee a_2, b_1 \wedge b_2] \in \mathcal{I}$. If we now suppose that $A_1 \subseteq A_2$, then $A_2 \setminus A_1 = (a_2, a_1] \cup (b_1, b_2]$, which is a finite disjoint union of sets in $\mathcal{I}$.

We finally show that $\mathcal{I}$ is dissecting. Let $F$ be a bounded set in $\mathbb{R}$. Then there exists $M \in \mathbb{N} \subset \mathbb{Q}$ such that $F \subseteq (-M, M]$. In particular, $F$ is contained in a finite union of elements of $\mathcal{I}$. Now let $G \subseteq \mathbb{R}$ be an open set. Then for any $x \in G$ there exists $\varepsilon_x > 0$ such that $(x - \varepsilon_x, x + \varepsilon_x) \subseteq G$. Since $\mathbb{Q}$ is dense in $\mathbb{R}$, there exist $q_1^x, q_2^x \in \mathbb{Q}$ such that $x - \varepsilon_x < q_1^x < x < q_2^x < x + \varepsilon_x$, so that $(q_1^x, q_2^x] \subseteq (x - \varepsilon_x, x + \varepsilon_x)$. The latter inclusions show
$$\bigcup_{x \in G} (q_1^x, q_2^x] \subseteq \bigcup_{x \in G} (x - \varepsilon_x, x + \varepsilon_x) \subseteq G \qquad \text{and} \qquad \bigcup_{x \in G} (q_1^x, q_2^x] \supseteq \bigcup_{x \in G} \{x\} = G.$$
The above shows that $G = \bigcup_{x \in G} (q_1^x, q_2^x]$, and the latter is an at most countable union of distinct elements of $\mathcal{I}$ (since $\mathcal{I}$ is itself countable). □

Lemma A.3. If $\mathcal{I}$ is a dissecting semi-ring in $(\mathbb{R}, \mathcal{R})$, then for all $k \in \mathbb{N}$,
$$\mathcal{I}^k = \{A_1 \times \cdots \times A_k : A_i \in \mathcal{I}\}$$
is a dissecting semi-ring in $(\mathbb{R}^k, \mathcal{R}^k)$. Moreover, any set $A \in \mathcal{I}^k$ can be written as a finite disjoint union of sets of the form $B_1 \times \cdots \times B_k \in \mathcal{I}^k$ such that $B_i = B_j$ or $B_i \cap B_j = \emptyset$ for each $1 \le i \le j \le k$.

Proof. We first show that $\mathcal{I}^k$ is dissecting. Suppose that $B_k \subset \mathbb{R}^k$ is bounded.
Then we can find $B \in \hat{\mathcal{R}}$ such that $B_k \subseteq B^k$. Since $\mathcal{I}$ is dissecting, we can find $I_1, \dots, I_n \in \mathcal{I}$ such that $B \subseteq \bigcup_{i=1}^{n} I_i$. We then have that
$$B_k \subseteq B^k \subseteq \bigcup_{i_1=1}^{n} \cdots \bigcup_{i_k=1}^{n} I_{i_1} \times \cdots \times I_{i_k}.$$
Next, suppose that $G \subseteq \mathbb{R}^k$ is open. Since the sets $U_1 \times \cdots \times U_k$ (for open $U_i \subseteq \mathbb{R}$) form a base for the product topology on $\mathbb{R}^k$, we see that
$$G = \bigcup_{(x_1, \dots, x_k) \in G} U_{x_1} \times \cdots \times U_{x_k},$$
where $U_{x_1}, \dots, U_{x_k}$ are open sets such that $x_i \in U_{x_i}$ for $i = 1, \dots, k$ and $U_{x_1} \times \cdots \times U_{x_k} \subseteq G$. From (5, M6) we know that $\mathbb{R}^k$ is separable, and so we can find an at most countable set $F \subset G$ such that
$$G = \bigcup_{(x_1, \dots, x_k) \in F} U_{x_1} \times \cdots \times U_{x_k}.$$
Since $\mathcal{I}$ is dissecting, we have that $U_{x_i} = \bigcup_{z \in F_{x_i}} I_z^{x_i}$ for some at most countable set $F_{x_i}$ that depends on $x_i$, with $I_z^{x_i} \in \mathcal{I}$ for each $z \in F_{x_i}$. We then have
$$G = \bigcup_{(x_1, \dots, x_k) \in F} \ \bigcup_{z_1 \in F_{x_1}} \cdots \bigcup_{z_k \in F_{x_k}} I_{z_1}^{x_1} \times \cdots \times I_{z_k}^{x_k},$$
which is an at most countable union. This proves that $\mathcal{I}^k$ is dissecting.

We next show that $\mathcal{I}^k$ is a semi-ring. Suppose that $A = I_1 \times \cdots \times I_k$ and $B = J_1 \times \cdots \times J_k$ are two sets in $\mathcal{I}^k$. We then have
$$A \cap B = (I_1 \cap J_1) \times \cdots \times (I_k \cap J_k) \in \mathcal{I}^k,$$
since $I_n \cap J_n \in \mathcal{I}$ by the semi-ring property for $n = 1, \dots, k$. Suppose that $A \subseteq B$ and note that this implies $I_i \subseteq J_i$ for $i = 1, \dots, k$. Since $\mathcal{I}$ is a semi-ring, we know that $J_i \setminus I_i = \bigsqcup_{j=1}^{n_i} I_{i,j}$ for some $I_{i,j} \in \mathcal{I}$. We then have that
$$B \setminus A = \bigsqcup_{\substack{j_1 = 0, \dots, n_1,\ \dots,\ j_k = 0, \dots, n_k \\ j_1 + \cdots + j_k \ge 1}} I_{1, j_1} \times \cdots \times I_{k, j_k},$$
where $I_{i,0} = I_i$. This proves that $\mathcal{I}^k$ is a semi-ring.

We now address the last part of the lemma. Let $A = A_1 \times \cdots \times A_k \in \mathcal{I}^k$. From Lemma A.1 we can find finitely many pairwise disjoint $I_1, \dots, I_m \in \mathcal{I}$ such that $A_i = \bigsqcup_{j \in D_i} I_j$ for $i = 1, \dots, k$, where $D_i = \{j : I_j \subseteq A_i\}$. We now consider the sets
$$\{I_{a_1} \times \cdots \times I_{a_k} : a_i \in D_i \text{ for } i = 1, \dots, k\}.$$
The latter sets are clearly pairwise disjoint elements of $\mathcal{I}^k$, and their union is precisely $A$. This suffices for the proof. □

Appendix B. Moment properties

The goal of this section is to provide the proof of Lemma 3.3 from Section 3.1, which is recalled below as Lemma B.5 for the reader's convenience. We mention that the notation in Lemma B.5 is slightly different from that in Lemma 3.3; this is to make it more consistent with the other statements in this section and should cause no confusion. We start with a few auxiliary statements. For $a > 0$ we define
$$\text{(B.1)} \qquad D_a := \{z \in \mathbb{C} : |\operatorname{Re}(z)| < a\}.$$

Lemma B.1. Let $X$ be a real random variable on $(\Omega, \mathcal{F}, \mathbb{P})$ such that $\mathbb{E}[e^{a|X|}] < \infty$ for some $a > 0$. Then $f(z) = \mathbb{E}[e^{zX}]$ is well-defined and holomorphic in $D_a$ as in (B.1).

Proof. Note that for each $z \in \mathbb{C}$ we have $|e^z| = e^{\operatorname{Re}(z)}$. Using the latter, we get for $z \in D_a$
$$|e^{zX}| = e^{\operatorname{Re}(zX)} = e^{X \operatorname{Re}(z)} \le e^{a|X|}.$$
Therefore $e^{zX}$ is integrable, and so $f$ is well-defined. Holomorphicity would follow from proving
$$\text{(B.2)} \qquad \lim_{z \to z_0} \frac{\mathbb{E}[e^{zX}] - \mathbb{E}[e^{z_0 X}]}{z - z_0} = \lim_{z \to z_0} \mathbb{E}\left[ \frac{e^{zX} - e^{z_0 X}}{z - z_0} \right] = \mathbb{E}[X e^{z_0 X}].$$
In the remainder of the proof we establish (B.2), and in particular that the right side of (B.2) is well-defined and finite. Fix $z_0 \in D_a$ and a sequence of numbers $\{z_n\} \subseteq D_a$ such that $z_n \to z_0$ as $n \to \infty$. Denote
$$Y_n = \frac{e^{z_n X} - e^{z_0 X}}{z_n - z_0} = S_n \cdot e^{z_0 X}, \quad \text{where} \quad S_n = \frac{e^{(z_n - z_0)X} - 1}{z_n - z_0}.$$
Note that for any fixed $\omega \in \Omega$ we have $Y_n(\omega) \to X(\omega) e^{z_0 X(\omega)}$. Using the series expansion of $e^{(z_n - z_0)X}$ we obtain
$$S_n = \frac{1}{z_n - z_0} \sum_{k=1}^{\infty} \frac{(z_n - z_0)^k X^k}{k!} = X \sum_{k=1}^{\infty} \frac{(z_n - z_0)^{k-1} X^{k-1}}{k!}.$$
Therefore,
$$|S_n| \le |X| \cdot \sum_{k=1}^{\infty} \frac{|z_n - z_0|^{k-1} |X|^{k-1}}{k!} \le |X| \sum_{k=1}^{\infty} \frac{|z_n - z_0|^{k-1} |X|^{k-1}}{(k-1)!} = |X| e^{|z_n - z_0| |X|}.$$
Since $z_0 \in D_a$, there exists $\varepsilon \in (0, a)$ such that $B_\varepsilon(z_0) \subset D_a$. As $z_n \to z_0$, we can find $N \in \mathbb{N}$ such that $|z_n - z_0| < \varepsilon/2$ for all $n \ge N$. Furthermore, we have the trivial bound $|X| \le (2/\varepsilon) \cdot e^{(\varepsilon/2)|X|}$. Combining the last few estimates, we conclude for $n \ge N$
$$|Y_n| = |e^{z_0 X}| \cdot |S_n| \le |e^{z_0 X}| \cdot |X| \cdot e^{|z_n - z_0||X|} \le e^{(a - \varepsilon)|X|} \cdot (2/\varepsilon) e^{(\varepsilon/2)|X|} \cdot e^{(\varepsilon/2)|X|} = (2/\varepsilon) \cdot e^{a|X|}.$$
We note that the rightmost expression is integrable by assumption, and so by the dominated convergence theorem we conclude $\lim_{n \to \infty} \mathbb{E}[Y_n] = \mathbb{E}[X e^{z_0 X}]$. Since $z_0$ and $\{z_n\}$ were arbitrary, we conclude (B.2). □

We next record a statement from complex analysis, which is a direct consequence of (41, Chapter 2, Corollary 4.9).

Lemma B.2. Fix $a > 0$ and let $f, g$ be holomorphic functions in the region $D_a$ as in (B.1). If $f(z) = g(z)$ for $z \in [0, a)$, then $f(z) = g(z)$ for all $z \in D_a$.

The following statement shows that the distribution of a random vector with non-negative entries is uniquely determined by its joint moment generating function.

Lemma B.3. Let $(X_1, \dots, X_k)$ and $(Y_1, \dots, Y_k)$ be random vectors in $\mathbb{R}^k_{\ge 0}$. Suppose there exists $a > 0$ such that for all $a_1, \dots, a_k \in [0, a]$ we have
$$\mathbb{E}\left[ e^{a_1 X_1 + \cdots + a_k X_k} \right] = \mathbb{E}\left[ e^{a_1 Y_1 + \cdots + a_k Y_k} \right] < \infty.$$
Then $(X_1, \dots, X_k) \stackrel{d}{=} (Y_1, \dots, Y_k)$.

Proof. We prove by induction on $n$ that
$$\text{(B.3)} \qquad \mathbb{E}\left[ \prod_{i=1}^{n} e^{z_i X_i} \cdot \prod_{i=n+1}^{k} e^{a_i X_i} \right] = \mathbb{E}\left[ \prod_{i=1}^{n} e^{z_i Y_i} \cdot \prod_{i=n+1}^{k} e^{a_i Y_i} \right]$$
whenever $z_1, \dots, z_n \in D_a$ as in (B.1) and $a_{n+1}, \dots, a_k \in [0, a]$. Part of the statement is that both sides are well-defined and finite.

The base case $n = 0$ holds by the hypothesis of the lemma, and we assume that we have proved the result for $n = v \le k - 1$. Assume now that $n = v + 1$. We know that
$$\left| \prod_{i=1}^{n} e^{z_i X_i} \cdot \prod_{i=n+1}^{k} e^{a_i X_i} \right| = \prod_{i=1}^{n} e^{\operatorname{Re}(z_i) X_i} \cdot \prod_{i=n+1}^{k} e^{a_i X_i} \le \prod_{i=1}^{n} e^{a X_i} \cdot \prod_{i=n+1}^{k} e^{a_i X_i},$$
and so both sides of (B.3) are well-defined and finite. We define for $z \in D_a$
$$f_X(z) = \mathbb{E}\left[ \prod_{i=1}^{n-1} e^{z_i X_i} \cdot e^{z X_n} \cdot \prod_{i=n+1}^{k} e^{a_i X_i} \right] \quad \text{and} \quad f_Y(z) = \mathbb{E}\left[ \prod_{i=1}^{n-1} e^{z_i Y_i} \cdot e^{z Y_n} \cdot \prod_{i=n+1}^{k} e^{a_i Y_i} \right].$$
We now repeat the argument in the proof of Lemma B.1 above. Fix $z_0 \in D_a$, a sequence $h_m \to 0$, and let $\varepsilon \in (0, a)$ be such that $B_\varepsilon(z_0) \subset D_a$. Then we have
$$\frac{\prod_{i=1}^{n-1} e^{z_i X_i} \cdot e^{(z_0 + h_m) X_n} \cdot \prod_{i=n+1}^{k} e^{a_i X_i} - \prod_{i=1}^{n-1} e^{z_i X_i} \cdot e^{z_0 X_n} \cdot \prod_{i=n+1}^{k} e^{a_i X_i}}{h_m} \longrightarrow X_n \prod_{i=1}^{n-1} e^{z_i X_i} \cdot e^{z_0 X_n} \cdot \prod_{i=n+1}^{k} e^{a_i X_i}.$$
On the other hand, we have when $|h_m| \le \varepsilon/2$ that
$$\left| \frac{\prod_{i=1}^{n-1} e^{z_i X_i} \cdot e^{(z_0 + h_m) X_n} \cdot \prod_{i=n+1}^{k} e^{a_i X_i} - \prod_{i=1}^{n-1} e^{z_i X_i} \cdot e^{z_0 X_n} \cdot \prod_{i=n+1}^{k} e^{a_i X_i}}{h_m} \right| \le \frac{2}{\varepsilon} \prod_{i=1}^{k} e^{a X_i}.$$
By the dominated convergence theorem with dominating function $\frac{2}{\varepsilon} \prod_{i=1}^{k} e^{a X_i}$, we conclude that $f_X(z)$ is holomorphic at $z_0$ with derivative
$$\mathbb{E}\left[ X_n \prod_{i=1}^{n-1} e^{z_i X_i} \cdot e^{z_0 X_n} \cdot \prod_{i=n+1}^{k} e^{a_i X_i} \right].$$
Analogously, $f_Y(z)$ is holomorphic in $D_a$. Since $f_X(z)$ and $f_Y(z)$ are two holomorphic functions on $D_a$ that agree when $z \in [0, a)$, we conclude from Lemma B.2 that they agree on all of $D_a$. This proves (B.3) when $n = v + 1$, and the general result now follows by induction on $n$.

Since the characteristic function uniquely determines the distribution of a random vector, see (14, Theorem 3.10.5), it suffices to show that for each $t_1, \dots, t_k \in \mathbb{R}$ we have
$$\mathbb{E}\left[ e^{\sum_{m=1}^{k} i t_m X_m} \right] = \mathbb{E}\left[ e^{\sum_{m=1}^{k} i t_m Y_m} \right].$$
The latter is a special case of (B.3) for $n = k$ and $z_r = i t_r$ for $r = 1, \dots, k$. □

The next result shows that, under certain growth conditions, the joint factorial moments of integer random variables uniquely determine their distribution.

Lemma B.4. Let $(X_1, \dots, X_k)$ and $(Y_1, \dots, Y_k)$ be random vectors in $\mathbb{Z}^k_{\ge 0}$. Suppose there is a sequence $c_n \in [0, \infty)$ such that for all $n_1, \dots, n_k \in \mathbb{Z}_{\ge 0}$ with $n_1 + \cdots + n_k = n$,
$$\mathbb{E}\left[ \prod_{i=1}^{k} \frac{X_i!}{(X_i - n_i)!} \right] = \mathbb{E}\left[ \prod_{i=1}^{k} \frac{Y_i!}{(Y_i - n_i)!} \right] \le c_n,$$
where $1/m! = 0$ for $m < 0$, and $\sum_{n=0}^{\infty} \frac{c_n}{n!} \cdot a^n < \infty$ for some $a > 0$. Then $(X_1, \dots, X_k) \stackrel{d}{=} (Y_1, \dots, Y_k)$.

Proof. Let $a_1, \dots, a_k \in [0, a/k]$. By (28, (2.13)),
$$\mathbb{E}\left[ \prod_{i=1}^{k} (1 + a_i)^{X_i} \right] = \sum_{n=0}^{\infty} \frac{1}{n!} \sum_{n_1 + \cdots + n_k = n} \binom{n}{n_1, \dots, n_k} \prod_{j=1}^{k} a_j^{n_j} \, \mathbb{E}\left[ \prod_{i=1}^{k} \frac{X_i!}{(X_i - n_i)!} \right] \le \sum_{n=0}^{\infty} \frac{1}{n!} \sum_{n_1 + \cdots + n_k = n} \binom{n}{n_1, \dots, n_k} \prod_{j=1}^{k} a_j^{n_j} \cdot c_n = \sum_{n=0}^{\infty} \frac{c_n}{n!} \Big( \sum_{j=1}^{k} a_j \Big)^{n} < \infty.$$
We conclude that if $a_1, \dots, a_k \in [0, a/k]$, we have
$$\mathbb{E}\left[ \prod_{i=1}^{k} (1 + a_i)^{X_i} \right] = \mathbb{E}\left[ \prod_{i=1}^{k} (1 + a_i)^{Y_i} \right] < \infty.$$
Writing $c_i = \log(1 + a_i)$ gives for $c_1, \dots, c_k \in [0, \log(1 + a/k)]$ that
$$\mathbb{E}\left[ \prod_{i=1}^{k} e^{c_i X_i} \right] = \mathbb{E}\left[ \prod_{i=1}^{k} e^{c_i Y_i} \right] < \infty,$$
which by Lemma B.3 implies $(X_1, \dots, X_k) \stackrel{d}{=} (Y_1, \dots, Y_k)$. □

We end this section by proving the following result, which is Lemma 3.3 (with some minor notational changes).

Lemma B.5. Let $X^n = (X_1^n, \dots, X_k^n)$ be a sequence of random vectors taking values in $\mathbb{Z}^k_{\ge 0}$. Suppose that for each $n_1, \dots, n_k \in \mathbb{Z}_{\ge 0}$ there exists a constant $C(n_1, \dots, n_k) \in [0, \infty)$ such that
$$\text{(B.4)} \qquad \lim_{n \to \infty} \mathbb{E}\left[ \prod_{i=1}^{k} \frac{X_i^n!}{(X_i^n - n_i)!} \right] = C(n_1, \dots, n_k).$$
Put $c_m = \max\{C(n_1, \dots, n_k) : n_1 + \cdots + n_k = m\}$ and suppose that $\sum_{m=0}^{\infty} \frac{c_m}{m!} \cdot a^m < \infty$ for some $a > 0$. Then $X^n$ converges weakly to a random vector $X^\infty$, which almost surely lies in $\mathbb{Z}^k_{\ge 0}$ and satisfies, for all $n_1, \dots, n_k \in \mathbb{Z}_{\ge 0}$,
$$\text{(B.5)} \qquad \mathbb{E}\left[ \prod_{i=1}^{k} \frac{X_i^\infty!}{(X_i^\infty - n_i)!} \right] = C(n_1, \dots, n_k).$$

Proof. From (B.4) applied to $n_i = 1$ and all other $n_j = 0$, we see that there exist $C > 0$ and $N \in \mathbb{N}$ such that $\mathbb{E}[X_i^n] \le C$ for $i = 1, \dots, k$ and $n \ge N$. By Markov's inequality and the non-negativity of $X_i^n$, we conclude that $X_i^n$ is tight for each $i = 1, \dots, k$. The latter implies that $(X_1^n, \dots, X_k^n)$ is a tight sequence of random vectors.

Suppose that $X$ and $Y$ are two subsequential limits of $(X_1^n, \dots, X_k^n)$. We seek to show that they have the same distribution. Let $X^{a_n}$ be a subsequence converging to $X$ weakly. By Skorohod's representation theorem, see Theorem 2.3, we may assume that the $X^{a_n}$ and $X$ are all defined on the same probability space and that $X^{a_n}$ converges to $X$ almost surely. In particular, since $X^{a_n} \in \mathbb{Z}^k_{\ge 0}$ and the latter space is discrete, we see that $X \in \mathbb{Z}^k_{\ge 0}$ a.s. Since $X^{a_n}$ converges to $X$ almost surely, we have for each $n_1, \dots, n_k \in \mathbb{Z}_{\ge 0}$ that
$$\text{(B.6)} \qquad \lim_{n \to \infty} \prod_{i=1}^{k} \frac{X_i^{a_n}!}{(X_i^{a_n} - n_i)!} = \prod_{i=1}^{k} \frac{X_i!}{(X_i - n_i)!}.$$
We now seek to prove that
$$\text{(B.7)} \qquad \lim_{n \to \infty} \mathbb{E}\left[ \prod_{i=1}^{k} \frac{X_i^{a_n}!}{(X_i^{a_n} - n_i)!} \right] = \mathbb{E}\left[ \prod_{i=1}^{k} \frac{X_i!}{(X_i - n_i)!} \right].$$
Equation (B.7) would follow from (B.6) and (5, Theorem 3.5), provided we can show (by possibly passing to a subsequence) that the family $\prod_{i=1}^{k} \frac{X_i^{a_n}!}{(X_i^{a_n} - n_i)!}$ is uniformly integrable. Since
$$0 \le \prod_{i=1}^{k} \frac{X_i^{a_n}!}{(X_i^{a_n} - n_i)!} \le \prod_{i=1}^{k} (X_i^{a_n})^{n_i} \le \sum_{i=1}^{k} (X_i^{a_n})^{A},$$
where $A = n_1 + \cdots + n_k$, we see that it suffices to prove for each $i = 1, \dots, k$ that
$$\text{(B.8)} \qquad \limsup_{n \to \infty} \mathbb{E}\left[ (X_i^{a_n})^{2A} \right] < \infty,$$
see (5, (3.18)). Consider now the polynomials $P_0(x) = 1$ and, for $m \ge 1$,
$$P_m(x) = x(x-1)\cdots(x - m + 1).$$
By expanding the products, we have for each $m$ that
$$P_m(x) = x^m + \sum_{i=0}^{m-1} \alpha_{i,m} x^i.$$
The latter shows that if $\vec{u}_m = [P_0(x), \dots, P_m(x)]^T$ and $\vec{v}_m = [1, x, \dots, x^m]^T$, then $\vec{u}_m = A_m \vec{v}_m$, where $A_m$ is a lower-triangular matrix with all diagonal entries equal to 1. This means that $A_m^{-1}$ is also lower-triangular and has all 1's on the diagonal. In particular, $x^m$ is a finite linear combination of $P_0, \dots, P_m$ for all $m \in \mathbb{Z}_{\ge 0}$. From (B.4) we know that $\lim_{n \to \infty} \mathbb{E}[P_m(X_i^n)]$ exists and is finite for all $m$ and $i = 1, \dots, k$, and so we conclude by linearity that the same is true for $\lim_{n \to \infty} \mathbb{E}[(X_i^n)^m]$, which clearly implies (B.8).

Now that we know (B.7), we can combine it with (B.4) to conclude that
$$C(n_1, \dots, n_k) = \mathbb{E}\left[ \prod_{i=1}^{k} \frac{X_i!}{(X_i - n_i)!} \right].$$
Swapping the roles of $X$ and $Y$ in the above arguments, we see that $Y$ also lies almost surely in $\mathbb{Z}^k_{\ge 0}$ and satisfies the latter, so that we get
$$\text{(B.9)} \qquad C(n_1, \dots, n_k) = \mathbb{E}\left[ \prod_{i=1}^{k} \frac{X_i!}{(X_i - n_i)!} \right] = \mathbb{E}\left[ \prod_{i=1}^{k} \frac{Y_i!}{(Y_i - n_i)!} \right].$$
We see that the conditions of Lemma B.4 are satisfied, and so $X$ and $Y$ have the same law, which together with the earlier established tightness implies that $X^n$ converges weakly to $X$. Equation (B.5) follows from (B.9). □
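The factorial-moment expansion invoked at the start of the proof of Lemma B.4 can be sanity-checked numerically. The sketch below (an illustration, not part of the thesis) verifies the one-dimensional case for a Poisson($\lambda$) variable: its $n$-th factorial moment $\mathbb{E}[X!/(X-n)!]$ equals $\lambda^n$, so the expansion $\mathbb{E}[(1+a)^X] = \sum_{n \ge 0} \frac{a^n}{n!}\,\mathbb{E}[X!/(X-n)!]$ should sum to $e^{\lambda a}$. The cutoffs are assumptions chosen so that the truncation error is negligible.

```python
import math

def poisson_pmf(lam, x):
    # P(X = x) for X ~ Poisson(lam)
    return math.exp(-lam) * lam**x / math.factorial(x)

def factorial_moment(lam, n, cutoff=120):
    # E[X!/(X - n)!] computed directly from the pmf; terms with x < n vanish
    # because 1/m! is taken to be 0 for m < 0, as in Lemma B.4.
    return sum(poisson_pmf(lam, x) * math.perm(x, n) for x in range(n, cutoff))

lam, a = 1.5, 0.3

# Left side of the expansion: E[(1 + a)^X] computed from the pmf.
lhs = sum(poisson_pmf(lam, x) * (1 + a) ** x for x in range(120))

# Right side: sum over n of a^n / n! times the n-th factorial moment.
rhs = sum(a**n / math.factorial(n) * factorial_moment(lam, n) for n in range(40))

# For a Poisson variable both sides equal exp(lam * a).
assert abs(lhs - math.exp(lam * a)) < 1e-9
assert abs(rhs - lhs) < 1e-9
```

The growth condition $\sum_n c_n a^n / n! < \infty$ of Lemma B.4 is what makes the right-hand series converge; here $c_n = \lambda^n$ and the series is the exponential series, so the truncations converge rapidly.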
Abstract

We consider the asymptotics of the random measure formed by the eigenvalues of a matrix from the Gaussian Unitary Ensemble (GUE). We show that this measure is a determinantal point process and derive a double-contour integral formula for its correlation kernel. We show that the bulk of the spectrum weakly converges to the sine process, and the edge of the spectrum weakly converges to the Airy point process. Our approach for proving weak convergence goes through establishing uniform convergence of the correlation kernels over compact sets, and is based on ideas developed by Borodin.