Gaussian Free Fields and Stochastic Parabolic Equations

by

Apoorva Shah

A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(MATHEMATICS)

August 2022

Copyright 2022 Apoorva Shah

Acknowledgements

This journey would not have been possible without all of the professors, classmates, and students I encountered along the way. Each has taught me something valuable, and for that I am deeply grateful.

In particular, I would like to thank my advisor Sergey Lototsky for his endless guidance and patience these past three years. My weekly meetings and seminar discussions with him were immensely helpful, and I came away from them having a clearer understanding of how to proceed. He taught me the process for how to approach a difficult research problem, how to create smaller problems that build up to it, and how to creatively bridge the gaps. I know these lessons will be applicable to tackling other monumental tasks I encounter, and I will find them extremely useful in my future endeavors.

I would also like to thank Daniel Shapiro and Jim Fowler for many wonderful summers at the Ross mathematics program. Being able to explore the beautiful proofs and rigorous axiomatic construction was what motivated me to pursue a Ph.D. in the first place.

Finally, I'd like to thank my parents for their constant support and encouragement. They always allowed me to explore my interests, from signing me up for math classes to driving me to contests. You two have been an integral part of my journey and were the very first math teachers I had.

Table of Contents

Acknowledgements
List of Tables
List of Figures
Abstract

Chapter 1: Introduction
  1.1 The Heat Equation
  1.2 Basics of SPDEs
  1.3 Limiting Distribution of O-U Processes
  1.4 Markov Processes and Ergodicity
    1.4.1 Dynamical Systems
    1.4.2 Stationary Markov Process

Chapter 2: Gaussian Measures and Gaussian Free Fields
  2.1 Introduction
  2.2 Gaussian Free Field on the Continuum
  2.3 Gaussian Processes and Measures on Hilbert Spaces
  2.4 The Cameron-Martin Space
  2.5 Wasserstein Distance and Total Variation
  2.6 A Comparison Between the W_2 Distance and the Ḣ^{-1} Norm
  2.7 Time Regularization of Brownian Motion and the One-Dimensional O-U Process

Chapter 3: Stochastic Reaction-Diffusion Equations over a Bounded Domain
  3.1 Existence of Solution
  3.2 Existence and Uniqueness of Invariant Measure
  3.3 Characterizing the Invariant Measure as the Gaussian Free Field
    3.3.1 Main Result
  3.4 Nonlinear Equation over [0,1]
  3.5 Approximation of the Stochastic Convolution with Time Regularization

Chapter 4: Stochastic Reaction-Diffusion Equations over R^d
  4.1 Existence of Weak Solution of the Linear Equation
  4.2 GFF as the Invariant Measure of the Linear Equation
    4.2.1 Deterministic Equations and Fundamental Solutions
    4.2.2 Main Results
  4.3 Non-Linear Case: Existence of a Mild Solution
  4.4 Existence of an Invariant Measure
  4.5 Time Regularization

Chapter 5: Conclusion and Further Directions

References

List of Tables

1.1 Solutions to the heat equation with various initial conditions.
1.2 Heat semigroup norm as an operator from L^p(R^d) to L^q(R^d).
2.1 Gaussian measures and their corresponding Cameron-Martin spaces.

List of Figures

2.1 A sample path for N = 8, with Rademacher(1/2) random variables.
2.2 Scaling in one dimension for N = 20, 200, and 20000 steps.
2.3 Gaussian free field on the unit square.

Abstract

We study the long-term behavior of stochastic parabolic equations in a domain O ⊆ R^d. First we look at the stochastic heat equation over a bounded domain, driven by Gaussian white noise. We prove that for the linear equation, the solution converges to the Gaussian free field. When we include a cubic nonlinear term, we find that in one dimension there is a unique invariant measure, which is concentrated on C_0[0,1]. Next, we look at the linear stochastic heat equation over the whole space and once again find that the Gaussian free field is the invariant measure. We also observe, however, that the stochastic heat equation with additive white noise is not amenable to being studied over L^2(R^d), so we suggest some alternatives, such as the Schwartz spaces and weighted Banach spaces. Finally we present a method of approximating Brownian motion by regularizing it in time.
The idea is to study systems with this time-regularized noise W_κ(t) and to study how the solution changes as W_κ(t) approaches the Brownian motion. Since the trajectories of W_κ(t) are continuously differentiable, this approximation cannot be in total variation, and instead we must use the Wasserstein metric.

Chapter 1
Introduction

In this chapter we discuss some well-known results about the heat equation and review some basics about stochastic PDEs.

1.1 The Heat Equation

We recall some basic results about the heat equation. All of these can be found in [Eva10]. The heat equation over R^d is given by

  u_t(x,t) = aΔu(x,t),  u : R^d × R_+ → R,

and its solution is well known to be

  u(x,t) = (4πat)^{-d/2} ∫_{R^d} e^{-|x-y|^2/(4at)} u_0(y) dy,

where u_0 : R^d → R is the initial condition. We will denote the heat kernel with a = 1 by G(x,t). To get a sense for how the heat semigroup acts on different initial conditions, the table below summarizes some common computations:

  Initial condition u_0(x)  |  S(t)u_0(x)
  --------------------------+----------------------------------
  exp(-x^2)                 |  (4t+1)^{-1/2} exp(-x^2/(4t+1))
  x                         |  x
  sin(x)                    |  exp(-t) sin(x)

  Table 1.1: Solutions to the heat equation with various initial conditions.

Now let us consider the nonhomogeneous problem with zero initial condition. Say we are interested in

  u_t(x,t) = Δu(x,t) + f(x,t),  x ∈ R^d, t ∈ (0,∞),
  u(x,0) = 0.

We will apply Duhamel's principle, or variation of parameters. The idea behind Duhamel's principle is to first consider the PDE

  u_t(x,t;s) = Δu(x,t;s),  (x,t) ∈ R^d × (s,∞),
  u(x,s;s) = f(x,s),  x ∈ R^d.

We can build a solution to our nonhomogeneous equation by integrating u(x,t;s) over s. So we get

  u(x,t) = ∫_0^t u(x,t;s) ds = ∫_0^t ∫_{R^d} G(x-y, t-s) f(y,s) dy ds.

Now we can also include an initial condition:

  u_t(x,t) = Δu(x,t) + f(x,t),  x ∈ R^d, t ∈ (0,∞),
  u(x,0) = g(x).

So we would have

  u(x,t) = ∫_{R^d} G(x-y, t) g(y) dy + ∫_0^t ∫_{R^d} G(x-y, t-s) f(y,s) dy ds.

Furthermore, this solution must be unique.
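The rows of Table 1.1 can be checked numerically. The following sketch (mine, not from the thesis; the grid size and truncation are arbitrary choices) approximates S(t)u_0 by a Riemann sum against the heat kernel and compares it with the closed form for u_0(x) = exp(-x^2).

```python
# Numerical check of the first row of Table 1.1: applying S(t) to
# u0(x) = exp(-x^2) should give exp(-x^2/(4t+1)) / sqrt(4t+1).
import math

def heat_semigroup(u0, x, t, L=20.0, n=4001):
    # S(t)u0(x) = (4*pi*t)^(-1/2) * integral of exp(-(x-y)^2/(4t)) u0(y) dy
    h = 2 * L / (n - 1)
    total = 0.0
    for i in range(n):
        y = -L + i * h
        total += math.exp(-(x - y) ** 2 / (4 * t)) * u0(y) * h
    return total / math.sqrt(4 * math.pi * t)

t, x = 0.7, 0.5
numeric = heat_semigroup(lambda y: math.exp(-y * y), x, t)
exact = math.exp(-x * x / (4 * t + 1)) / math.sqrt(4 * t + 1)
print(abs(numeric - exact))  # small discretization error
```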
Let S(t) be the heat semigroup and let G_t be the heat kernel over the whole space at time t. For an initial condition f, Young's inequality gives us

  ‖S(t)f‖_p = ‖G_t * f‖_p ≤ ‖G_t‖_r ‖f‖_q,  with 1 + 1/p = 1/r + 1/q.

Note that

  ‖G_t‖_r = ( ∫_{R^d} (4πt)^{-rd/2} e^{-r|x|^2/(4t)} dx )^{1/r}
          = (4πt)^{-d/2} (4πt/r)^{d/(2r)}
          = (4π)^{-(d/2 - d/(2r))} r^{-d/(2r)} t^{-(d/2)(1 - 1/r)}
          = (4π)^{-(d/2 - d/(2r))} r^{-d/(2r)} t^{-(d/2)(1/q - 1/p)}.

Putting this all together gives

  ‖S(t)f‖_p ≤ (4π)^{-(d/2 - d/(2r))} r^{-d/(2r)} t^{-(d/2)(1/q - 1/p)} ‖f‖_q.

So we can look at the behavior of ‖S(t)f‖_p under various conditions through the following chart, which gives us the operator norm of S(t):

  Region  |  t → 0    |  t → ∞
  --------+-----------+----------
  p > q   |  ∞        |  0
  p = q   |  C_{d,p}  |  C_{d,p}

  Table 1.2: Heat semigroup norm as an operator between L^q(R^d) and L^p(R^d).

Young's inequality requires r ≥ 1, and we have equality when both functions in Young's inequality are Gaussian. Since G_t is already a Gaussian, we will get equality when f is some other Gaussian. This ensures that we actually do have blow-up of ‖S(t)‖ in the cases where the chart indicates an infinite limit.

Now suppose we added some noise to this equation, and what if we also included a nonlinear term? More specifically, consider the additive stochastic heat equation with a cubic term:

  u_t(x,t) = Δu(x,t) - u^3(x,t) + Ẇ(x,t).

What can we say about the solution when x ∈ R^2 or x ∈ R^3? The challenge is that, in the computation of the covariance, we will find that the solution to the linear heat equation with additive noise only exists as a distribution. So considering a cubic nonlinearity is not so straightforward, because there is not a well-defined way to multiply distributions. The reason this problem is of interest in the first place, despite this drawback, is that it belongs to a class of equations known as reaction-diffusion equations, which can be used to model phenomena in biology, chemistry, and physics.

1.2 Basics of SPDEs

This section summarizes some important introductory material from [LR17].

Theorem 1.2.1 (Kolmogorov Continuity Criterion) Let X(t) be a stochastic process on [0,T].
If there exist C > 0, β > 0, and α > 1 such that

  E|X(t) - X(s)|^β ≤ C|t - s|^α,  0 ≤ s ≤ t ≤ T,

then there is a modification of X(t) such that the sample trajectories are almost Hölder-((α - 1)/β) continuous, i.e. Hölder-γ continuous for every γ < (α - 1)/β.

Corollary 1.2.2 Assume that X is zero-mean and Gaussian and that E|X(t) - X(s)|^2 ≤ C|t - s|^α. Then, since all moments of a Gaussian are controlled by the second moment, for every γ ∈ (0, α) the process X has a modification that is Hölder-((α - γ)/2) continuous; that is, X is almost Hölder-(α/2) continuous.

Now suppose we have two processes

  X(t) = X_0 + ∫_0^t A(s) ds + ∫_0^t N(s) dW(s)

and

  Y(t) = Y_0 + ∫_0^t B(s) ds + ∫_0^t M(s) dW(s),

where W(t) is a standard Brownian motion. The Itô integral is defined as

  ∫_0^t X(s) dY(s) = lim_{Δ_N → 0} Σ_k X(t_k) (Y(t_{k+1} ∧ t) - Y(t_k ∧ t)),

while the Stratonovich integral is defined as

  ∫_0^t X(s) ∘ dY(s) = lim_{Δ_N → 0} Σ_k [(X(t_k) + X(t_{k+1}))/2] (Y(t_{k+1} ∧ t) - Y(t_k ∧ t)),

where Δ_N is the largest gap in our partition {t_k} of the interval [0,T]. We can convert between the Itô and Stratonovich integrals as follows:

  ∫_0^t X(s) ∘ dY(s) = ∫_0^t X(s) dY(s) + (1/2)⟨X,Y⟩(t),

and recall that

  ⟨X,Y⟩(t) = ∫_0^t N(s)M(s) ds.

Let W : [0,T] → R be a real-valued Wiener process, and let X, Y : [0,T] → R be processes adapted to the filtration F_t^W. Then (the Itô isometry)

  E( ∫_0^t X(s) dW(s) )^2 = E ∫_0^t X^2(s) ds.

Furthermore,

  E( ∫_0^t X(s) dW(s) · ∫_0^t Y(s) dW(s) ) = E ∫_0^t X(s)Y(s) ds.

Definition 1.2.3 An SPDE is well-posed if the solution exists, is unique, and is continuous with respect to the initial conditions.

Definition 1.2.4 A strong solution is constructed on a given probability space.

Definition 1.2.5 A weak solution is one where the probability space is constructed as part of the solution.

Definition 1.2.6 A solution of a martingale problem means a probabilistically weak solution.

Definition 1.2.7 A classical solution of an SPDE is a continuous function satisfying the equation and the initial and boundary conditions pointwise on the same set of probability 1.

Definition 1.2.8 A mild solution extends the closed-form solution and is used in equations that are considered perturbations of a linear equation with a closed-form solution.
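The Itô-Stratonovich conversion above can be seen directly at the level of the approximating sums. In the sketch below (mine, not from the thesis; the step count and seed are arbitrary), we take Y = W and check the exact discrete identity: the Stratonovich sum minus the Itô sum equals (1/2) Σ (ΔW)^2, whose right-hand side approximates (1/2)⟨W,W⟩(t) = t/2.

```python
# Discrete illustration of the Ito/Stratonovich conversion for Y = W.
import random, math

random.seed(0)
n, T = 20000, 1.0
dt = T / n
w = [0.0]
for _ in range(n):
    w.append(w[-1] + random.gauss(0.0, math.sqrt(dt)))

ito = sum(w[k] * (w[k + 1] - w[k]) for k in range(n))
strat = sum(0.5 * (w[k] + w[k + 1]) * (w[k + 1] - w[k]) for k in range(n))
bracket = sum((w[k + 1] - w[k]) ** 2 for k in range(n))

# Exact discrete identities: strat = W(T)^2 / 2 (telescoping), and
# strat - ito = bracket / 2; bracket itself is close to T for small dt.
print(strat - ito - bracket / 2)   # ~ 0 up to floating point
print(bracket)                     # ~ T = 1
```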
Example: Suppose we have

  du(x,t) = (u_xx - u^3) dt + sin(u) dw(t).

The mild solution would be

  u(x,t) = (4πt)^{-1/2} ∫_R e^{-(x-y)^2/(4t)} u(y,0) dy
         - ∫_0^t (4π(t-s))^{-1/2} ∫_R e^{-(x-y)^2/(4(t-s))} u^3(y,s) dy ds
         + ∫_0^t (4π(t-s))^{-1/2} ∫_R e^{-(x-y)^2/(4(t-s))} sin(u(y,s)) dy dw(s).

To save space, we introduce the operator

  (Sh)(x,t) = (4πt)^{-1/2} ∫_R e^{-(x-y)^2/(4t)} h(y) dy.

Now we can clean up our equation:

  u(x,t) = (Su_0)(x,t) - ∫_0^t (S u^3(·,s))(x, t-s) ds + ∫_0^t (S sin(u(·,s)))(x, t-s) dw(s).

As an example, for the heat equation on the real line, u_t = u_xx, t ∈ [0,T], x ∈ R, with u(x,0) = f(x), we can define various notions of variational and viscosity solutions:

W1 A strong variational solution is a function u ∈ L^1([0,T]; H^2(R)) such that u(x,t) = f(x) + ∫_0^t u_xx(x,s) ds for almost all t ∈ [0,T].

W2 A weak variational solution I is a function u ∈ L^1([0,T]; H^1(R)) such that for every smooth function v(x) with compact support in R, the equality

  (u(·,t), v)_{L^2(R)} = (f, v)_{L^2(R)} - ∫_0^t (u_x(·,s), v_x)_{L^2(R)} ds

holds for almost all t ∈ [0,T], where (f,g)_{L^2(R)} = ∫_R f(x)g(x) dx and u(x,0) = f(x).

W3 A weak variational solution II is a function u ∈ L^1([0,T]; L^2(R)) such that for every smooth function v = v(x) with compact support in R, the equality

  (u(·,t), v)_{L^2(R)} = (f, v)_{L^2(R)} + ∫_0^t (u(·,s), v_xx)_{L^2(R)} ds

holds for almost all t ∈ [0,T].

W4 A weak variational solution III is a function u that is locally integrable on every compact subset of [0,T] × R and such that for every smooth function v with compact support in [0,T] × R,

  ∫_R f(x)v(x,0) dx + ∫_0^T ∫_R u(x,t)(v_t(x,t) + v_xx(x,t)) dx dt = 0.

W5 A measure-valued solution is a collection µ_t = µ_t(dx) of σ-finite measures on (R, B(R)) such that for every smooth, compactly supported function ϕ on R,

  µ_t[ϕ] = ∫_R ϕ(x)f(x) dx + ∫_0^t µ_s[ϕ_xx] ds,

where µ_t[g] = ∫_R g(x) µ_t(dx).
W6 A viscosity solution is a continuous function on [0,T] × R with u(x,0) = f(x) and with the following two properties: if v = v(x,t) is a smooth function and u - v has a local max at a point (x_0, t_0), then v_t(x_0, t_0) ≤ v_xx(x_0, t_0); if v = v(x,t) is a smooth function and u - v has a local min at a point (x_0, t_0), then v_t(x_0, t_0) ≥ v_xx(x_0, t_0).

The heat equation in one dimension is u_t = au_xx. The solution, which is

  u_heat(x,t) = (4aπt)^{-1/2} ∫_R e^{-(x-y)^2/(4at)} u(y,0) dy,

can be written probabilistically as

  u_heat(t,x) = E u(0, x + √(2a) B(t)).

Cole-Hopf Transform. We make the substitution u = log z to eliminate the nonlinear parts and simplify our equation into something that might already be known.

1.3 Limiting Distribution of O-U Processes

We start by considering the limiting distribution of the Ornstein-Uhlenbeck process, which is given by the SDE

  dX = aX dt + b dW_t.

The solution converges under certain conditions on a. First let Y_t = e^{-at} X_t. Then by Itô's formula, we have

  dY = d(e^{-at}) X_t + e^{-at} d(X_t) = -a e^{-at} X_t dt + a e^{-at} X_t dt + e^{-at} b dW_t = b e^{-at} dW_t.

So

  Y_t = Y_0 + ∫_0^t b e^{-as} dW_s,

which means

  X_t = e^{at} X_0 + ∫_0^t b e^{a(t-s)} dW_s.

Now we will compute the mean and variance:

  E[X_t] = e^{at} X_0  and  Var[X_t] = ∫_0^t b^2 e^{2a(t-s)} ds = (b^2/(2a)) (e^{2at} - 1),

where the first equality for the variance is by the Itô isometry. When a < 0, this converges to N(0, b^2/(2|a|)).

Now we will look at the multidimensional case. Say we have

  dX = AX dt + B dW_t,

where A, B ∈ R^{n×n} are matrices. Using a similar transformation gives

  dY = d(e^{-At}) X_t + e^{-At} d(X_t) = -A e^{-At} X_t dt + A e^{-At} X_t dt + e^{-At} B dW_t = e^{-At} B dW_t.

The cancellation works because e^{-At} is the matrix exponential, so it commutes with A. So

  X_t = e^{At} X_0 + ∫_0^t e^{A(t-s)} B dW_s.

Now we want to compute the covariance matrix, because this is a multi-dimensional process:

  Cov(X_t, X_t) = E[X_t X_t^T] - E[X_t] E[X_t]^T = E[ ( ∫_0^t e^{A(t-s)} B dW_s ) ( ∫_0^t e^{A(t-s)} B dW_s )^T ].
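The scalar limiting law N(0, b^2/(2|a|)) for a < 0 can be illustrated by simulation. The sketch below (mine, not from the thesis; the parameter values are arbitrary) uses the exact one-step transition of the O-U process, so the only error in the long-run second moment is statistical.

```python
# Monte Carlo check of the O-U limiting variance with a = -1, b = 1:
# the invariant law is N(0, b^2/(2|a|)) = N(0, 1/2).  The exact one-step
# transition is X_{t+h} = e^{ah} X_t + N(0, b^2 (e^{2ah} - 1)/(2a)).
import random, math

random.seed(1)
a, b, h, n = -1.0, 1.0, 0.01, 200000
decay = math.exp(a * h)
step_sd = math.sqrt(b * b * (math.exp(2 * a * h) - 1) / (2 * a))

x, second_moment = 0.0, 0.0
for _ in range(n):
    x = decay * x + step_sd * random.gauss(0.0, 1.0)
    second_moment += x * x / n

print(second_moment)          # close to b^2/(2|a|) = 0.5
print(b * b / (2 * abs(a)))   # exact limiting variance
```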
Now by the Itô isometry this gives

  Cov(X_t, X_t) = ∫_0^t e^{A(t-s)} B B^T e^{A^T(t-s)} ds.

1.4 Markov Processes and Ergodicity

In this section we summarize some notes from [Hai08] on ergodicity of Markov processes. The idea is that in order for a Markov process to have an invariant measure, it should satisfy some compactness property and some regularity. In order for this invariant measure to be unique, the Markov process should satisfy some irreducibility property along with regularity.

Definition 1.4.1 A stochastic process {X_t}_{t∈T}, taking values in a state space χ, is called Markov if for any N > 0 and times t_{-N} < ... < t_0 < ... < t_N,

  E( f(X_{t_1},...,X_{t_N}) g(X_{t_{-N}},...,X_{t_{-1}}) | X_{t_0} ) = E( f(X_{t_1},...,X_{t_N}) | X_{t_0} ) E( g(X_{t_{-N}},...,X_{t_{-1}}) | X_{t_0} ).

Definition 1.4.2 A Markov transition kernel over a Polish space X is a map P : X × B(X) → R_+ such that for each A ∈ B(X) the map x ↦ P(x,A) is measurable, and for each fixed x ∈ X the map A ↦ P(x,A) is a probability measure.

Definition 1.4.3 A Markov operator over a Polish space X is a bounded linear operator P : B_b(X) → B_b(X) on the space B_b(X) of bounded Borel measurable functions on X, with the following properties:

• P1 = 1;
• Pϕ is positive whenever ϕ is positive;
• if a sequence {ϕ_n} ⊂ B_b(X) converges pointwise to an element ϕ ∈ B_b(X), then Pϕ_n converges pointwise to Pϕ.

Actually, Markov operators and Markov transition kernels are interchangeable, and we can switch between them with the following lemma:

Lemma 1.4.4 Given a Polish space X, there is a one-to-one correspondence between Markov transition kernels and Markov operators, given by P(x,A) = (Pχ_A)(x).

Proof: To see that every transition kernel gives a Markov operator, just check each of the properties. For the first property, let A be the entire state space X, so that χ_A is always 1; on the left-hand side we have P(x,X) = 1 since P(x,·) is a probability measure. The second property follows from using a standard simple-function approximation of ϕ.
The proof of the third property follows in the same way as the proof of the dominated convergence theorem. For the other direction, it is easy to see that Pχ_A(x) is a measurable function for fixed A. To check that A ↦ P(x,A) is a measure for fixed x, we note that Pχ_X(x) = P1 = 1 and Pχ_A(x) ≥ 0 by the second property of Markov operators; for countable additivity, we simply apply the third property.

Definition 1.4.5 (P*µ)(A) := ∫_X P(x,A) µ(dx).

Definition 1.4.6 Pϕ(x) := ∫_X ϕ(y) P(x,dy).

Definition 1.4.7 Suppose we have a family of Markov operators indexed by time that satisfies P_{t+s} = P_t ∘ P_s. This is called a Markov semigroup.

Definition 1.4.8 We say a Markov process is time-homogeneous with semigroup P_t if for any two times s < t we have P(X_t ∈ A | X_s) = P_{t-s}(X_s, A) a.s.

Definition 1.4.9 A probability measure µ on X is invariant for the Markov operator P if

  ∫_X (Pϕ)(x) µ(dx) = ∫_X ϕ(x) µ(dx)

holds for every function ϕ ∈ B_b(X). In other words, one has P*_t µ = µ for every time t.

1.4.1 Dynamical Systems

Definition 1.4.10 Let E be a Polish space and let T be one of N, Z, R_+, R. A dynamical system on E is a collection {Θ_t}_{t∈T} of maps Θ_t : E → E such that Θ_t ∘ Θ_s = Θ_{t+s} for all s,t ∈ T, and such that the map (t,x) ↦ Θ_t(x) is jointly measurable. It is called continuous if each Θ_t is continuous.

Definition 1.4.11 We denote by Θ*_t µ the pushforward of µ under the map Θ_t. (This is just the measure of a set under the preimage of Θ_t.)

Definition 1.4.12 We define the set of invariant measures for {Θ_t}_{t∈T} by J(Θ) = {µ ∈ M_1(E) : Θ*_t µ = µ for all t ∈ T}.

Definition 1.4.13 The σ-algebra of invariant subsets of E is I = {A ⊆ E : A Borel and Θ_t^{-1}(A) = A for all t ∈ T}.

Theorem 1.4.14 (Birkhoff's Ergodic Theorem) Let {Θ_t}_{t∈T} be a measurable dynamical system over a Polish space E. Fix an invariant measure µ ∈ J(Θ) and let f ∈ L^1(E,µ). Then

  lim_{N→∞} (1/N) Σ_{n=0}^{N-1} f(Θ_n(x)) = E_µ(f | I)(x)

for µ-almost every x.
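A classical illustration of Birkhoff's theorem (my example, not from the thesis) is the circle rotation Θ(x) = x + α mod 1 with α irrational: Lebesgue measure is ergodic for this map, so time averages converge to the space average.

```python
# Birkhoff averages for an irrational circle rotation.  For
# f(x) = cos(2*pi*x) the space average under Lebesgue measure is 0.
import math

alpha = math.sqrt(2) - 1          # irrational rotation number
x, N, total = 0.1, 200000, 0.0
for n in range(N):
    total += math.cos(2 * math.pi * x)
    x = (x + alpha) % 1.0
print(total / N)  # close to the space average 0
```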
Definition 1.4.15 An invariant measure µ for a dynamical system {Θ_t} is ergodic if µ(A) ∈ {0,1} for every A ∈ I.

Corollary 1.4.16 If µ is ergodic, then

  lim_{N→∞} (1/N) Σ_{n=0}^{N-1} f(Θ_n(x)) = E_µ f.

Proof: Let f̄ := E(f | I); it is I-measurable, just by definition. Now define A_+ = {x ∈ E : f̄ > Ef}, A_- = {x ∈ E : f̄ < Ef}, and A_0 = {x ∈ E : f̄ = Ef}. These three sets form a partition of E, they are I-measurable, and since µ is ergodic, one of them must have measure 1 and the rest must have measure 0. If either A_+ or A_- had measure 1, then integrating f̄ over it would give a contradiction. So we must have µ(A_0) = 1. This implies that µ(f̄ = Ef) = 1, and so by the tower law the corollary follows.

Theorem 1.4.17 (Maximal Ergodic Theorem) Let

  S_N(x) = Σ_{n=0}^{N-1} f(Θ_n x)  and  M_N(x) = max{S_0(x), ..., S_N(x)},  where S_0 = 0.

Then

  ∫_{{M_N > 0}} f(x) µ(dx) ≥ 0  for every N ≥ 1.

Proof: By definition, M_N(Θ(x)) ≥ S_k(Θ(x)) for all 0 ≤ k ≤ N. Now

  f(x) + M_N(Θ(x)) ≥ f(x) + S_k(Θ(x)) = S_{k+1}(x),

so

  f(x) ≥ max(S_1(x), ..., S_N(x)) - M_N(Θ(x)).

But since S_0(x) = 0, on the set {M_N > 0} we have max(S_1(x), ..., S_N(x)) = M_N(x). So

  ∫_{{M_N > 0}} f(x) µ(dx) ≥ ∫_{{M_N > 0}} (M_N(x) - M_N(Θ(x))) µ(dx) ≥ E M_N - ∫_{A_N} M_N(x) µ(dx) ≥ 0,

where A_N = {Θ(x) : M_N(x) > 0}; the last inequality holds because M_N(x) ≥ 0 (as S_0 = 0), so ∫_A M_N(x) µ(dx) ≤ E M_N for every A.

Now we proceed to proving Birkhoff's Ergodic Theorem. Without loss of generality, assume E(f | I) = 0 (if not, just replace f by f - E(f | I)). Let η̄ = limsup_{n→∞} S_n/n and η = liminf_{n→∞} S_n/n. If we just show that η̄ ≤ 0 a.s., then it also implies that η ≥ 0, because we can consider the function -f; together these imply η̄ = η = 0 = E(f | I). Also note that η̄(Θ(x)) = η̄(x) for all x, so for all ϵ > 0 we have A_ϵ = {η̄(x) > ϵ} ∈ I. Define

  f_ϵ(x) = (f(x) - ϵ) χ_{A_ϵ}(x),

and define S_N^ϵ, M_N^ϵ similarly. From the Maximal Ergodic Theorem,

  ∫_{{M_N^ϵ > 0}} f_ϵ(x) µ(dx) ≥ 0  for all N ≥ 1.

Note that S_N^ϵ/N = 0 if η̄(x) ≤ ϵ, and S_N^ϵ/N = S_N(x)/N - ϵ otherwise, because η̄(Θ(x)) = η̄(x). The sequence of sets {M_N^ϵ > 0} increases to the set B_ϵ := {sup_N S_N^ϵ > 0} = {sup_N S_N^ϵ/N > 0}, because M_N^ϵ is increasing in N.
So from the fact that S_N^ϵ/N equals 0 when η̄(x) ≤ ϵ and equals S_N(x)/N - ϵ otherwise,

  B_ϵ = {sup_N S_N^ϵ/N > 0} ∩ {η̄ > ϵ} = {sup_N S_N/N > ϵ} ∩ {η̄ > ϵ} = {η̄ > ϵ} = A_ϵ.

Now note that E|f_ϵ| ≤ E|f| + ϵ < ∞, so the dominated convergence theorem implies that

  lim_{N→∞} ∫_{{M_N^ϵ > 0}} f_ϵ(x) µ(dx) = ∫_{A_ϵ} f_ϵ(x) µ(dx) ≥ 0.

This means

  0 ≤ ∫_{A_ϵ} f_ϵ(x) µ(dx) = ∫_{A_ϵ} (f(x) - ϵ) µ(dx) = ∫_{A_ϵ} f(x) µ(dx) - ϵµ(A_ϵ) = ∫_{A_ϵ} E(f(x) | I) µ(dx) - ϵµ(A_ϵ) = -ϵµ(A_ϵ).

So µ(A_ϵ) has to be 0 for all ϵ > 0. Since A_ϵ = {η̄(x) > ϵ}, this means η̄ ≤ 0 a.s.

1.4.2 Stationary Markov Process

Given a Markov semigroup P_t over a Polish space X and an invariant probability measure µ for P_t, we associate to it a probability measure P_µ on X^R in the following way. For any bounded measurable function ϕ : X^R → R such that there exist a function ϕ̃ : X^n → R and an n-tuple of times t_1 < ... < t_n with ϕ depending only on the coordinates at those times, we write

  (P_µ)ϕ = ∫_X ··· ∫_X ϕ̃(x_1,...,x_n) P_{t_n - t_{n-1}}(x_{n-1}, dx_n) ··· P_{t_2 - t_1}(x_1, dx_2) µ(dx_1).

By Kolmogorov's extension theorem, there exists a unique measure P_µ on X^R such that the above holds. Since µ is invariant, P_µ is stationary, i.e. θ*_t P_µ = P_µ for all t ∈ R, where θ_t : X^R → X^R is the shift map defined by (θ_t x)(s) = x(t+s). So the measure P_µ is an invariant measure for the dynamical system θ_t over X^R. This gives us a bridge between the theory of dynamical systems and Markov processes.

Definition 1.4.18 An invariant measure µ for a Markov semigroup P_t is ergodic if P_µ is ergodic for the shift map θ_t.

The standard way to show existence of an invariant measure is using the Krylov-Bogoliubov theorem. In some cases, however, there is a more straightforward way.

Definition 1.4.19 A Markov semigroup P_t is called Feller if for every ϕ ∈ C_b(X) we have P_t ϕ ∈ C_b(X).

Lemma 1.4.20 Assume P_t is a Feller Markov semigroup. Let X_t be a Markov process with X_0 ∼ ν. If P*_t ν → µ as t → ∞ for some µ and some ν, then µ is an invariant measure for the process.

Proof: We want to show that for any time s > 0, we have P*_s µ = µ.
Notice that we have

  lim_{t→∞} P*_s P*_t ν = lim_{t→∞} P*_{t+s} ν = µ,

but we also have

  lim_{t→∞} P*_s P*_t ν = P*_s ( lim_{t→∞} P*_t ν ) = P*_s µ,

because P*_s is continuous with respect to the convergence of measures (this is where the Feller property is used). Finally, since these two limits must be equal, we get that P*_s µ = µ, as desired.

Next, we define a pairing between ϕ ∈ B_b(X) and µ ∈ M_1(X) by

  ⟨ϕ, µ⟩ = ∫_X ϕ(x) µ(dx).

Now consider the time average

  R_T(x,A) = (1/T) ∫_0^T P_t(x,A) dt.

This is a probability measure, and so we can define the push-forward

  R*_T ν(A) = ∫_X R_T(x,A) ν(dx).

And so for any ϕ ∈ B_b(X), we get

  ⟨ϕ, R*_T ν⟩ = (1/T) ∫_0^T ⟨ϕ, P*_t ν⟩ dt.

So we can think of R*_T ν as equal to (1/T) ∫_0^T P*_t ν dt. Now we introduce a theorem for proving existence of an invariant measure in a more general setting. For more details, see [DPZ96] and [DPZ14].

Theorem 1.4.21 (Krylov-Bogoliubov) Assume that P_t is a Feller semigroup. If for some ν ∈ M_1(X) and some sequence T_n ↑ +∞ we have R*_{T_n} ν → µ weakly as n → ∞, then µ is an invariant measure for P_t, t ≥ 0.

Proof: Fix r > 0; since the semigroup is Feller, for ϕ ∈ C_b(X) we have P_r ϕ ∈ C_b(X). We wish to show that ⟨ϕ, P*_r µ⟩ = ⟨ϕ, µ⟩. First note that

  ⟨ϕ, P*_r µ⟩ = ⟨P_r ϕ, µ⟩ = ⟨P_r ϕ, lim_{n→∞} R*_{T_n} ν⟩.

So then

  ⟨P_r ϕ, lim_{n→∞} R*_{T_n} ν⟩ = lim_{n→∞} (1/T_n) ⟨P_r ϕ, ∫_0^{T_n} P*_s ν ds⟩
    = lim_{n→∞} (1/T_n) ⟨ϕ, ∫_r^{T_n + r} P*_s ν ds⟩
    = lim_{n→∞} [ (1/T_n) ⟨ϕ, ∫_0^{T_n} P*_s ν ds⟩ + (1/T_n) ⟨ϕ, ∫_{T_n}^{T_n + r} P*_s ν ds⟩ - (1/T_n) ⟨ϕ, ∫_0^r P*_s ν ds⟩ ]
    = ⟨ϕ, µ⟩.

Chapter 2
Gaussian Measures and Gaussian Free Fields

Since these stochastic reaction-diffusion equations are driven by white noise, it is important to discuss Gaussian measures over an abstract Banach space.

2.1 Introduction

First we will look at Gaussian measures over a finite-dimensional Banach space. Recall that a Gaussian probability measure over R is any measure with density function

  (1/(σ√(2π))) exp( -(x - µ)^2 / (2σ^2) ).

To extend this to multiple dimensions and, more generally, infinite dimensions, we will consider the projections onto one-dimensional spaces.
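In the finite-state setting the Krylov-Bogoliubov construction can be carried out explicitly. The sketch below (mine, not from the thesis; the transition matrix is an arbitrary example) computes the time-averaged law R*_T ν for a two-state chain and compares it with the invariant measure π = (0.8, 0.2) obtained by solving πP = π.

```python
# Time-averaged laws of a 2-state Markov chain converge to the
# invariant measure, as in the Krylov-Bogoliubov theorem.
P = [[0.9, 0.1], [0.4, 0.6]]      # transition matrix, rows sum to 1

def step(mu, P):
    return [sum(mu[i] * P[i][j] for i in range(2)) for j in range(2)]

T = 5000
mu = [1.0, 0.0]                   # start from the point mass at state 0
avg = [0.0, 0.0]                  # Cesaro average (1/T) sum of mu P^t
for _ in range(T):
    avg = [avg[j] + mu[j] / T for j in range(2)]
    mu = step(mu, P)

# the invariant measure of this chain is pi = (0.8, 0.2), since pi P = pi
print(avg)
```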
Definition 2.1.1 A Gaussian probability measure µ on a separable Banach space B is a probability measure such that for any linear functional l : B → R, the pushforward measure l_* µ is Gaussian on R. (Recall that l_* µ(A) := µ(l^{-1}(A)).)

One important Gaussian measure that will be relevant for our discussion is the Gaussian free field. This comes in many flavors, depending on the domain and the boundary conditions. We will see later on how the Gaussian free field over the continuum can also be realized as the scaling limit of a process.

Let (B(t))_{t∈[0,1]} be a one-dimensional Brownian motion. Then β_t := (B_t - tB_1)_{t∈[0,1]} is a Brownian bridge. And since β is independent of B_1, we can think of the distribution of β as the distribution of B_t conditioned to hit zero at time 1. We also have the covariance

  E β_t β_s = t(1 - s),  0 ≤ t ≤ s ≤ 1.

[Figure 2.1: A sample path for N = 8, with Rademacher(1/2) random variables.]

The Brownian bridge is well known to be the scaling limit of a simple random walk of Rademacher random variables: if we let X_1, ..., X_N be Rademacher(1/2) conditioned so that S_N = Σ X_i = 0, then

  ( S_{⌊Nt⌋}/√N )_{t∈[0,1]}

converges weakly in distribution to the Brownian bridge.

Now we can consider another process that also scales to the Brownian bridge. First let h(u) be the density of the standard normal, and consider the random vector (S_1, ..., S_{N-1}) with density proportional to

  Π_{j=1}^N h(s_j - s_{j-1})  at (s_1, ..., s_{N-1}),

with the endpoints pinned at s_0 = s_N = 0. Then again, (S_{⌊Nt⌋}/√N)_{t∈[0,1]} converges weakly in distribution to the Brownian bridge. These two examples give us an idea for extending the process to higher dimensions.

Consider N ≥ 2 and let Q̄_N := {0,...,N}^2 be the discrete square lattice, Q_N := {1,...,N-1}^2 be the inside, and ∂_N = Q̄_N - Q_N be the boundary. Pick a family of functions f : Q̄_N → R such that f|_{∂_N} = 0. For example:

1.
Choose f uniformly among the finite set of all integer-valued functions f such that f = 0 on the boundary and, for any x in Q_N and any y neighboring x, f(x) - f(y) ∈ {-1, 0, 1}.

2. Let h be the density function of a symmetric zero-mean random variable in L^2. Let (f(x))_{x∈Q_N} be the random vector with density proportional to Π_{e∈Edges} h(|Δf(e)|), where |Δf(e)| is the absolute value of the difference between the values on the two vertices of the edge e.

[Figure 2.2: Scaling in one dimension for N = 20, 200, and 20000 steps.]

We would then rescale as follows, by defining

  f̂_N(x_1, x_2) := f_N(⌊Nx_1⌋, ⌊Nx_2⌋),  (x_1, x_2) ∈ [0,1]^2.

The idea then is that for some good sequence ϵ_N, the functions ϵ_N f̂_N will converge to some 'universal random function' f : [0,1]^2 → R. Unfortunately, it is not known whether we can find a good sequence ϵ_N for many such examples of f_N, but it is known when h(u) is the density of a Gaussian random variable, i.e. proportional to exp(-u^2/(2σ^2)) for some σ. In this case, for fixed N, we call the resulting distribution of functions on the lattice the discrete Gaussian free field. See [WW20] for a more detailed discussion of the discrete Gaussian free field.

2.2 Gaussian Free Field on the Continuum

Let O ⊆ R^d be a domain and let Φ_O = Φ_O(x,y), x,y ∈ O, be the Green's function of the Laplacian Δ in O with suitable homogeneous boundary conditions. A Gaussian free field on O is usually defined as a (generalized) Gaussian process W̄ = W̄(x), x ∈ O, such that

  E W̄(x) = 0,  E( W̄(x) W̄(y) ) = Φ_O(x,y),  x,y ∈ O.    (2.1)

If d > 1, the function Φ_O(x,y) has a singularity at x = y. So in that case E|W̄(x)|^2 = +∞ for all x ∈ O, meaning that W̄ must indeed be a generalized process, or a random generalized function (distribution), indexed by test functions on O rather than points in O.

[Figure 2.3: Gaussian free field on the unit square.]

Let us assume that the equation

  Δv = -f    (2.2)
is well-posed in a sufficiently rich class G of functions f on O, and that the solution of (2.2) can be written as

  v(x) = ∫_O Φ_O(x,y) f(y) dy.

Then the Gaussian free field W̄ on O is defined as a collection of zero-mean Gaussian random variables W̄[f], f ∈ G, such that

  E( W̄[f] W̄[g] ) = ∫∫_{O×O} Φ_O(x,y) f(x) g(y) dx dy,  f, g ∈ G.    (2.3)

If W̄ = W̄(x), x ∈ O, is a collection of Gaussian random variables satisfying (2.1), then W̄ defines a random distribution on G by

  W̄[f] = ∫_O W̄(x) f(x) dx,  f ∈ G,

which is a collection of zero-mean Gaussian random variables satisfying (2.3).

Let F = (Ω, F, {F_t}_{t≥0}, P) be a stochastic basis with the usual assumptions, on which countably many independent standard Brownian motions w_k = w_k(t), t ≥ 0, k = 1, 2, ..., are defined. The space-time Gaussian white noise Ẇ = Ẇ(t,x) on O is a collection of zero-mean Gaussian random variables Ẇ[f], f ∈ L^2((0,+∞) × O), such that

  E( Ẇ[f] Ẇ[g] ) = ∫_0^{+∞} ∫_O f(t,x) g(t,x) dx dt.

Given an orthonormal basis {h_k = h_k(x), k ≥ 1} in L^2(O), the process Ẇ can be written as a (formal) sum

  Ẇ(t,x) = Σ_{k=1}^∞ h_k(x) ẇ_k(t).    (2.4)

Similarly,

  W(t,x) = Σ_{k=1}^∞ h_k(x) w_k(t)    (2.5)

is called a cylindrical Brownian motion on L^2(O). For a square-integrable function f = f(t,x),

  ∫_0^t ∫_O f(s,y) W(ds,dy) = Σ_{k=1}^∞ ∫_0^t ( ∫_O f(s,y) h_k(y) dy ) dw_k(s).

2.3 Gaussian Processes and Measures on Hilbert Spaces

Let H be a real separable Hilbert space with inner product (·,·)_0 and norm ‖·‖_0, and let Λ be a linear operator on H with the following properties:

[O1] (Λf, g)_0 = (f, Λg)_0 for all f, g in the domain of Λ;

[O2] (Λf, f)_0 > 0 for f ≠ 0, f in the domain of Λ;

[O3] there is an orthonormal basis {h_k, k ≥ 1} in H such that Λh_k = λ_k h_k, k ≥ 1, with

  0 < λ_1 ≤ λ_2 ≤ λ_3 ≤ ···,  lim_{k→∞} k^{-α} λ_k = c_Λ    (2.6)

for some α > 0, c_Λ > 0.

For f ∈ H, write f_k = (f, h_k)_0.
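A concrete instance of such an operator Λ (a standard example; the specific choice is mine, not spelled out in the text at this point) is Λ = -d^2/dx^2 on [0,1] with zero boundary conditions, for which λ_k = (kπ)^2 and h_k(x) = √2 sin(kπx), so α = 2 in (2.6). In this case the Green's function of Section 2.2 is Φ(x,y) = min(x,y)(1 - max(x,y)), and it should match the eigenfunction expansion Σ_k h_k(x)h_k(y)/λ_k. The sketch below checks this numerically.

```python
# Check that the Dirichlet Green's function on [0,1] agrees with its
# eigenfunction expansion sum_k h_k(x) h_k(y) / lambda_k, where
# lambda_k = (k*pi)^2 and h_k(x) = sqrt(2) sin(k*pi*x).
import math

def phi_series(x, y, K=5000):
    return sum(2.0 * math.sin(k * math.pi * x) * math.sin(k * math.pi * y)
               / (k * math.pi) ** 2 for k in range(1, K + 1))

x, y = 0.3, 0.7
exact = min(x, y) * (1 - max(x, y))   # = 0.3 * 0.3 = 0.09
print(abs(phi_series(x, y) - exact))  # small truncation error
```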
Definition 2.3.1 The Hilbert scale H_Λ generated by the operator Λ is the collection of Hilbert spaces {H^γ, γ ∈ R}, where

• H^0 = H;
• H^γ = {f ∈ H : Σ_{k≥1} λ_k^{2γ} f_k^2 < ∞} if γ > 0;
• H^γ is the closure of H with respect to the norm ‖f‖_γ if γ < 0, where

  ‖f‖_γ^2 = Σ_{k≥1} λ_k^{2γ} f_k^2.    (2.7)

Equality (2.7) defines the norm in every H^γ, γ ∈ R; we have H^γ = Λ^{-γ} H,

  (f,g)_γ = (Λ^γ f, Λ^γ g)_0 = Σ_{k=1}^∞ λ_k^{2γ} f_k g_k,

and

  f = Σ_{k=1}^∞ f_k h_k ∈ H^γ  ⟺  Σ_{k=1}^∞ k^{2αγ} f_k^2 < ∞.

Proposition 2.3.2 If H_Λ = {H^γ, γ ∈ R} is the Hilbert scale from Definition 2.3.1, then for every γ_1 > γ_2 the space H^{γ_1} is densely and compactly embedded into H^{γ_2}; the embedding is Hilbert-Schmidt if γ_1 - γ_2 > 1/(2α).

Proof: The construction of H_Λ implies density of the embedding. The collection {h_k/λ_k^{γ_1}} is an orthonormal basis for H^{γ_1}, so the assumption (2.6) about the eigenvalues of Λ implies that the embedding is compact and, as long as Σ_k λ_k^{2(γ_2 - γ_1)} < ∞, it is Hilbert-Schmidt.

Definition 2.3.3 Let U be a separable Hilbert space with inner product (·,·)_U.

1. A Q-Brownian motion W = W(t) on U is a collection of zero-mean Gaussian processes {W[t,h], h ∈ U, t ≥ 0} such that E( W[t,h] W[s,g] ) = min(t,s)(Qh,g)_U for some linear operator Q on U. In the case where Q is the identity operator, W is called a cylindrical Brownian motion on U.

2. A Q-Brownian motion W = W(t) on U is called U-valued if

  W[t,h] = (W(t), h)_U    (2.8)

and the process W on the right-hand side of (2.8) satisfies W ∈ L^2(Ω; C((0,T); U)) for all T > 0.

A Q-Brownian motion on U is U-valued if and only if the operator Q is trace class on U; cf. [DPZ14, Propositions 4.3 and 4.4]. It is convenient to re-state [DPZ14, Proposition 4.7] in the setting of the Hilbert scale H_Λ.

Proposition 2.3.4 A cylindrical Brownian motion on H has a representation

  W(t) = Σ_{k≥1} h_k w_k(t),    (2.9)

where w_k(t) = W[t, h_k], k ≥ 1, are independent standard Brownian motions, and W ∈ L^2(Ω; C((0,T); H^{-γ})) for all T > 0, γ > 1/(2α).
Equivalently, a cylindrical Brownian motion H is an H − γ -valued Q- Gaussian process, γ > 1/(2α ), with Q = jj ′ , where j is the embedding operator H → H − γ and j ′ :H − γ →H is the adjoint of j. We will also need a stationary version of Definition 2.3.3. Definition 2.3.5 Let U be a separable Hilbert space with inner product (·,·) U . 1. A Q-Gaussian process W on U is a collection of zero-mean Gaussian random variables {W[h], h ∈ H} such that E W[h]W[g] = (Qh,g) U for some linear operator Q on U. In the case Q is the identity operator, W is called an isonormal Gaussian process; cf. [Nua06, Definition 1.1.1] . 2. A Q-Gaussian process W on U is called U-valued if W[h]= W,h U (2.10) and the random variable W on the right-hand side of (2.10) satisfies W ∈L 2 Ω; U) . A Q-Gaussian process on U is U-valued if and only if the operator Q is trace class on U; cf. [LR17, Theorem 3.2.39]. In the Hilbert scaleH Λ , we have a version of Proposition 2.3.4. Proposition 2.3.6 Given an r∈R, an isonormal Gaussian process on H r has a representation W = X k≥ 1 λ − r k h k ζ k , where ζ k = W[h k ], k ≥ 1, are iid Gaussian random variables, and W ∈ L 2 (Ω; H r− γ ) for all γ > 1/(2α ). Equivalently, an isonormal Gaussian process on H r is an H r− γ -valued Q-Gaussian process for every γ > 1/(2α ), with Q=jj ′ , wherej is the embedding operator H r →H r− γ andj ′ :H r− γ →H r is its adjoint. 22 Proof: This follows by direct computation after observing that the collection {λ − r k h k , k≥ 1} is an orthonormal basis in H r . Remark 2.3.7 While every Hilbert space is self-dual, there is an alternative notion of duality in a Hilbert scaleH Λ : for every γ 0 ∈R and every γ > 0, the spaces H γ 0+γ and H γ 0− γ are dual relative to the inner product in H γ 0 ; the duality⟨·,·⟩ γ 0,γ is given by f ∈H γ 0+γ ,g∈H γ 0− γ 7→⟨f,g⟩ γ 0,γ = lim n→∞ (f,g n ) γ 0 , (2.11) where g n ∈ H γ 0 and lim n→∞ ∥g− g n ∥ γ 0− γ = 0. 
With respect to ⟨·,·⟩ 0,|r| duality, an isonormal Gaussian process on H r from Proposition 2.3.6 becomes an isonormal Gaussian process on H − r . Indeed, if r >0, then, for f ∈H − r , we define ⟨W,f⟩ 0,r = ∞ X k=1 f k λ r k ζ k so that E ⟨W,f⟩ 0,r ⟨W,g⟩ 0,r = ∞ X k=1 f k g k λ 2r k =(f,g) − r . The case r <0 is similar. Remark 2.3.8 Let V be an isonormal Gaussian process on H. By direct computation, an isonormal Gaussian process W on H r is the unique solution of the stochastic elliptic equation Λ r/2 W =V; (2.12) cf. [LR17, Theorem 4.2.2]. By the Bochner-Minlos theorem [DPZ96, Theorem 2.27], a U-valued Q-Gaussian process W defines a centered Gaussian measure µ W on U by µ W (A)=P(W ∈A), 23 where A is a Borel sub-set of U, and, for every f ∈U, Z U e i(f,g) U dµ W (g)=Ee i(W,f) U =exp − 1 2 (Qf,f) U . The Cameron-Martin space of the measure µ W is the collection of all h∈ U such that the measure µ h W defined by µ h W (A)=µ W (A+h) is equivalent to µ W [Bog98, Section 2.4]. Proposition 2.3.9 LetH Λ be the Hilbert scale from Definition 2.3.1. If W is an isonormal Gaussian process on H r , then W generates a Gaussian measure µ W on every H r− γ with γ > 1/(2α ), and the Cameron-Martin space of this measure is H r . Proof: This is a combination of two results, [Bog98, Lemma 2.1.4 and Theorem 3.5.1], in the Hilbert space setting. 2.4 The Cameron-Martin Space GivenaseparableBanachspaceBandaGaussianmeasureγ ,weintroducethenotionoftheCameron- Martin Space. The Cameron-Martin space is a Hilbert space which is a subset of B. Its elements are those along which the translation of a null-set remains a null-set for γ . Definition 2.4.1 The covariance operator of a measure µ is defined as R γ (f)(g)= Z X [f(x)− a γ (f)][g(x)− a γ (g)]γ (dx), where a γ (f)= R X fγ (dx) is the expectation. Definition 2.4.2 First define |h| H(γ ) = sup l∈X ∗ {l(h) : R γ (l)(l)≤ 1}. The Cameron-Martin space is then defined as H(γ )={h∈X :|h| H(γ ) <∞}. 
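Definition 2.4.2 can be made concrete in finite dimensions: for a centered Gaussian measure N(0,Q) on ℝⁿ one has R_γ(l)(l) = lᵀQl, and the supremum defining |h|_{H(γ)} evaluates to (hᵀQ⁻¹h)^{1/2}, so the Cameron-Martin norm is ∥Q^{−1/2}h∥. A small sketch (toy diagonal covariance, with the maximization over random directions; all numbers here are illustrative choices, not from the text):

```python
import math, random

random.seed(2)
# Toy covariance Q = diag(4, 1) on R^2; for N(0, Q) the Cameron-Martin norm is
# |h|_{H} = sup { l . h : l^T Q l <= 1 } = (h^T Q^{-1} h)^{1/2}.
q = (4.0, 1.0)
h = (1.0, 2.0)
best = 0.0
for _ in range(200_000):
    l = (random.gauss(0, 1), random.gauss(0, 1))
    scale = math.sqrt(q[0] * l[0] ** 2 + q[1] * l[1] ** 2)  # enforce l^T Q l = 1
    l = (l[0] / scale, l[1] / scale)
    best = max(best, l[0] * h[0] + l[1] * h[1])
closed_form = math.sqrt(h[0] ** 2 / q[0] + h[1] ** 2 / q[1])
print(best, closed_form)   # the random search approaches (h^T Q^{-1} h)^{1/2}
```

The optimizing direction is proportional to Q⁻¹h, which is the finite-dimensional shadow of the statement in Lemma 2.4.3 that Cameron-Martin elements are exactly those of the form h = R_γ(g).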
Let us first compute the Cameron-Martin space for white noise. We can think of white noise as being indexed by functions in L²(ℝᵈ), and the Gaussian measure it induces has zero mean and covariance

E⟨Ẇ,f⟩⟨Ẇ,g⟩ = ⟨f,g⟩_{L²}.

To compute the Cameron-Martin space of the measure induced by Ẇ, we first determine R(l)(l):

R(l)(l) = ∫_{S′(ℝᵈ)} l(x)l(x) µ(dx) = E⟨Ẇ,l⟩² = ∥l∥²_{L²}.

Suppose |h|_{H(γ)} < ∞. Then

|h|_{H(γ)} = sup_{l∈S(ℝᵈ)} {l(h) : ∥l∥²_{L²} ≤ 1} = sup_{l∈S(ℝᵈ), ∥l∥_{L²}=1} h(l) = ∥h∥_{S(ℝᵈ)→ℝ} < ∞.

Since S(ℝᵈ) is a dense subset of L²(ℝᵈ), h extends to a bounded linear functional on L²(ℝᵈ) by the Hahn-Banach theorem. Conversely, if h is a bounded linear functional on L²(ℝᵈ) (equivalently, a function in L²(ℝᵈ)), then |h|_{H(γ)} < ∞. So the Cameron-Martin space of Ẇ is precisely L²(ℝᵈ).

Next, we compute the Cameron-Martin space of the Gaussian free field. We think of the Gaussian free field as being indexed by functions in S(ℝᵈ), with zero mean and covariance

E⟨W̄,f⟩⟨W̄,g⟩ = ∫_{ℝᵈ} f̂(ξ) ĝ(ξ) / (1+|ξ|²) dξ = ⟨f,g⟩_{H^{−1}}.

Then R_γ(l)(l) = ∫_{ℝᵈ} |l̂(ξ)|²/(1+|ξ|²) dξ, where l̂ = Fl is the Fourier transform of l. We want to find sup_{∥Fl∥_{2,−1} ≤ 1} ⟨h,l⟩. Note that

⟨h,l⟩ = ⟨h, F⁻¹Fl⟩ = ⟨F⁻¹h, Fl⟩.

Define the weighted norm ∥·∥_{p,q} by

∥u∥_{p,q} = ( ∫_{ℝᵈ} |u(x)|^p (1+|x|²)^q dx )^{1/p},

and denote the corresponding space by L(p,q); we are mainly interested in p = 2. For fixed h,

|h|_{H(γ)} = sup_{∥Fl∥_{2,−1}=1} ⟨h,l⟩ = sup_{∥Fl∥_{2,−1}=1} ⟨F⁻¹h, Fl⟩ = ∥F⁻¹h∥_{L(2,−1)→ℝ}.

To compute the dual space of L(2,−1): for f ∈ L(2,−1) and g ∈ L(2,−1)*, in order to have ∫ gf < ∞ whenever ∥f∥_{2,−1} = 1, we need

∫ gf = ∫ g·(1+|x|²)^{1/2} · f·(1+|x|²)^{−1/2} dx ≤ ∥g∥_{2,1} ∥f∥_{2,−1} < ∞,

by the Cauchy-Schwarz inequality. So ∥g∥_{2,1} < ∞, and therefore L(2,−1)* = L(2,1). Hence h belongs to the Cameron-Martin space precisely when F⁻¹h ∈ L(2,1), which happens if and only if h ∈ H¹(ℝᵈ), since ∥h∥²_{H¹(ℝᵈ)} coincides, up to a constant, with ∫_{ℝᵈ} (1+|ξ|²)|F⁻¹h(ξ)|² dξ, because F⁻²h(x) = c·h(−x).

Lemma 2.4.3 An element h ∈ X is in the Cameron-Martin space H(γ) precisely when there exists g ∈ X*_γ with h = R_γ(g).
In this case, |h|_{H(γ)} = ∥g∥_{L²(γ)}. We can summarize the computations in the table below.

Gaussian measure — Cameron-Martin space
White noise — L²(ℝᵈ)
Gaussian free field — H¹(ℝᵈ)

Table 2.1: Gaussian measures and their corresponding Cameron-Martin spaces

2.5 Wasserstein Distance and Total Variation

There are various ways to measure the difference between two probability measures; we will be interested in the total variation distance d_TV and the Wasserstein metric d_W, also known as the Kantorovich-Rubinstein metric:

• For two probability measures µ, ν on a measurable space,

d_TV(µ,ν) = (1/2) ∥µ − ν∥_TV = sup_A |µ(A) − ν(A)|,

where the supremum is over all measurable sets A; the factor 1/2 in front of the total variation norm of the signed measure µ − ν ensures that the total variation distance is at most 1, and d_TV(µ,ν) = 1 exactly when the measures are mutually singular.

• For two probability measures µ, ν on a complete separable metric space E with distance function ρ,

d_W(µ,ν) = sup | ∫_E f dµ − ∫_E f dν |, (2.13)

with the supremum over all functionals f : E → ℝ such that |f(x) − f(y)| ≤ ρ(x,y), x,y ∈ E. By the Portmanteau theorem [Kle20, Theorem 13.16], convergence in the Wasserstein metric implies weak convergence.

We start with the following basic result, sometimes called Scheffé's lemma; cf. [Gut13, Remark 5.6.2].

Lemma 2.5.1 If µ and ν are absolutely continuous measures on the real line with corresponding probability density functions f_µ and f_ν, then

d_TV(µ,ν) = (1/2) ∫_{−∞}^{+∞} |f_µ(x) − f_ν(x)| dx. (2.14)

Proof: The inequality in one direction is obvious; for the opposite direction, consider the set A = {x : f_µ(x) ≥ f_ν(x)}.

The integral on the right-hand side of (2.14) can be evaluated for two Gaussian distributions.

Proposition 2.5.2 For two Gaussian measures on the line,

d_TV( N(µ₁,σ₁²), N(µ₂,σ₂²) ) ≤ |µ₂ − µ₁| / (√(2π) max(σ₁,σ₂)) + |σ₂² − σ₁²| / min(σ₁²,σ₂²).

Proof: With no loss of generality, assume that σ₂ > σ₁. By the triangle inequality,

d_TV( N(µ₁,σ₁²), N(µ₂,σ₂²) ) ≤ d_TV( N(µ₁,σ₂²), N(µ₂,σ₂²) ) + d_TV( N(µ₁,σ₁²), N(µ₁,σ₂²) ).
Using (2.14), with µ =(µ 2 − µ 1 )/σ 2 and Φ denoting the standard normal cdf, d TV N(µ 1 ,σ 2 2 ),N(µ 2 ,σ 2 2 ) = 1 2 1 √ 2πσ 2 Z +∞ −∞ e − (x− µ 1) 2 /(2σ 2 2 ) − e − (x− µ 2) 2 /(2σ 2 2 ) dx = 1 2 √ 2π Z +∞ −∞ e − y 2 /2 − e − (y− µ ) 2 /2 dy = Φ( µ/ 2)− Φ( − µ/ 2) =2Φ |µ |/2 − 1. (2.15) The function Φ satisfies 1 2 ≤ Φ( x)≤ 1 2 + x √ 2π , x≥ 0; the second inequality is a consequence of Φ ′ (0)=1/ √ 2π and Φ ′′ (x)<0, x>0. Then d TV N(µ 1 ,σ 2 ),N(µ 2 ,σ 2 ) ≤ |µ 1 − µ 2 | √ 2πσ . 27 Also, (2.15) implies lim |µ 1− µ 2|→0 d TV N(µ 1 ,σ 2 ),N(µ 2 ,σ 2 ) |µ 1 − µ 2 | = 1 √ 2πσ . Similarly, d TV N(µ 2 ,σ 2 1 ),N(µ 2 ,σ 2 2 ) = 1 2 √ 2π Z +∞ −∞ e − x 2 /(2σ 2 1 ) σ 1 − e − x 2 /(2σ 2 2 ) σ 2 dx ≤ 1 2 √ 2πσ 2 Z +∞ −∞ e − x 2 /(2σ 2 2 ) 1− e − (x 2 /2)(σ − 2 1 − σ − 2 2 ) dx + σ − 1 1 − σ − 1 2 2 √ 2π Z +∞ −∞ e − x 2 /(2σ 2 1 ) dx. Using 0≤ 1− e − t ≤ t, t≥ 0, 1 2 √ 2πσ 2 Z +∞ −∞ e − x 2 /(2σ 2 2 ) 1− e − (x 2 /2)(σ − 2 1 − σ − 2 2 ) dx ≤ σ − 2 1 − σ − 2 2 4 √ 2πσ 2 Z +∞ −∞ x 2 e − x 2 /(2σ 2 2 ) dx= σ 2 2 − σ 2 1 4σ 2 1 . Also, σ − 1 1 − σ − 1 2 2 √ 2π Z +∞ −∞ e − x 2 /(2σ 2 1 ) dx= σ 1 (σ − 1 1 − σ − 1 2 ) 2 = σ 2 − σ 1 2σ 1 ≤ σ 2 2 − σ 2 1 4σ 2 1 , because (σ 2 +σ 1 )/σ 1 ≥ 2 As a result, d TV N(µ 2 ,σ 2 1 ),N(µ 2 ,σ 2 2 ) ≤ |σ 2 1 − σ 2 2 | 2min(σ 2 1 ,σ 2 2 ) , (2.16) and also lim σ 1→σ,σ 2→σ d TV N(µ 2 ,σ 2 1 ),N(µ 2 ,σ 2 2 ) = 1 σ . Corollary 2.5.3 For non-degenerate Gaussian measures on the line, convergence in distribution (weak convergence) is equivalent to convergence in total variation. As an example, consider the Ornstein-Uhlenbeck process dU(t)=− aU(t)dt+σdW (t), t>0, (2.17) 28 where a,σ > 0, W is a standard Brownian motion, and U(0) is a Gaussian random variable indepen- dent of W. By direct computation, U(t)=U(0)e − at +σ Z t 0 e − a(t− s) dW(s), that is, for each t>0, the random variable U(t) is Gaussian with mean EU(t)=m 0 e − at , m 0 =EU(0), and variance Var U(t) =v 2 0 e − 2at + σ 2 2a (1− e − 2at ), v 2 0 =VarU(0). 
(2.18) Then lim t→+∞ U(t) d =N 0, σ 2 2a , sothattheGaussianmeasureµ 0 withmeanzeroandvarianceσ 2 /(2a)istheuniqueinvariantmeasure for U. Moreover, writing µ (t) for the distribution of the random variable U(t), Proposition 2.5.2 implies that, for all sufficiently large t, d TV µ (t),µ 0 ≤ r a 2πσ 2 |m 0 |e − at +2 av 2 0 σ 2 +1 e − 2at . (2.19) Paper [BU86] extends both (2.15) and (2.16) to infinite-dimensional Gaussian measures as follows. If P µ 1 and P µ 2 are Gaussian measures on a separable Hilbert space H, with means µ 1 , µ 2 , and a common covariance operator K, and K − 1/2 (µ 1 − µ 2 ) ∈ H, then, by [BU86, Theorem 1], the total variation distance between P µ 1 and P µ 2 is given by (2.15), with µ =∥K − 1/2 (µ 1 − µ 2 )∥ H . An infinite-dimensional version of (2.16) is more complicated, see [BU86, Theorem 2]. For many purposes, thefollowingsimplifiedversionisenough. Let P A andP B bezero-meanGaussianmeasures on a separable Hilbert space H, with covariance operators A and B such that B =A 1/2 CA 1/2 (2.20) 29 and C− I is a Hilbert-Schmidt operator with eigenvalues r k , k≥ 1. Define D A,B = ∞ X k=1 r 2 k ! 1/2 . If D A,B is sufficiently small (for example, D A,B ≤ 10 − 2 ), then 2· 10 − 2 D A,B ≤ d TV (P A ,P B )≤ D A,B ; (2.21) cf. [BU86, Corollary 2]. In particular, given a sequence P An of Gaussian measures, we have lim n→∞ d TV (P An ,P A )=0 if and only if lim n→∞ D An,A =0. Proposition 2.5.4 Let H be a separable Hilbert space with an orthonormal basis H ={h k , k ≥ 1}. Given two sequences{a k ,k≥ 1},{b k ,k≥ 1} of positive real numbers such that P k a 2 k <∞, P k b 2 k < ∞, and a collection of iid standard Gaussian random variables ξ k , k≥ 1, define H-valued Gaussian random elements u= ∞ X k=1 a k ξ k h k , v = ∞ X k=1 b k ξ k h k , and let P u and P v be the corresponding Gaussian measures on H (that is, P u (A) = P(u ∈ A) for every Borel subset A of H). 
Then the measures P u and P v are equivalent if and only if D 2 u,v := ∞ X k=1 b 2 k a 2 k − 1 2 <∞; if D u,v ≤ 10 − 2 , then d TV (P u ,P v )≤ D u,v . Proof: By construction, P u and P v have zero mean; the corresponding covariance operators are diagonal in the basis H and have eigenvalues a 2 k and b 2 k , k≥ 1. Then (2.20) holds, where the operator 30 C isalsodiagonal,witheigenvaluesb 2 k /a 2 k . AtheoremofKakutani[Bog98,Example2.7.6]nowimplies the equivalence of measures; note that ∞ X k=1 b k a k − 1 2 <∞ if and only if D u,v < ∞: in either case, we have lim k→∞ (b k /a k ) = 1. To complete the proof, it remains to apply (2.21). 2.6 A Comparison Between W 2 Distance and ˙ H − 1 Norm TheWassersteindistancebetweentwoprobabilitymeasuresµ andν ,withfinite p-thmomentsisgiven by W p (µ,ν ):= inf γ ∈Γ( µ,ν ) Z M× M d(x,y) p dγ (x,y) 1/p , where Γ( µ,ν ) is the collection of all measures on M× M with marginals µ,ν and M is a connected Riemannian Manifold. Alternatively, we can define it as W p (µ,ν ) = inf X,Y E[d(X,Y) p ] 1/p , where X ∼ µ , and Y ∼ ν . On the other hand the Homogenous Sobolev norm is initially defined on functions, but we extend it to measures as follows: first define ∥f∥ 2 ˙ H 1 (µ ) := Z M |∇f| 2 µ (dx). Then, for a signed measure ν , we denote ∥ν ∥ 2 ˙ H − 1 (µ ) :=sup n ⟨f,ν ⟩ ∥f∥ 2 ˙ H 1 (µ ) =1 o . The duality product ⟨f,ν ⟩ represents the integral of f against the measure ν . In [Pey18] there is a comparison between the W 2 distance and the ˙ H − 1 norm, which we will describe in this section. Furthermore, we will see an application to localization of Wasserstein distance. The first question is, do there exist constants C a and C b such that C a ∥µ − ν ∥ ˙ H − 1 (µ ) ≤ W 2 (µ,ν )≤ C b ∥µ − ν ∥ ˙ H − 1 (µ ) 31 for 0 < C a < C b <∞ and only mild assumptions on µ and ν ? Indeed, there are and the constants are as follows: Theorem 2.6.1 If µ and ν are positive measures on M, we have W 2 (µ,ν )≤ 2∥µ − ν ∥ ˙ H − 1 (µ ) . 
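In one dimension, W₂ can be computed from the quantile (monotone) coupling: W₂(µ,ν)² = ∫₀¹ (F_µ⁻¹(u) − F_ν⁻¹(u))² du. The following standard-library sketch (grid size and tolerance are ad hoc choices) recovers, for centered Gaussians, the closed form |σ₁ − σ₂| established in Lemma 2.6.4 below:

```python
from statistics import NormalDist

def w2_centered_gaussians(sigma1, sigma2, n=50_000):
    """W2 distance via the quantile (monotone) coupling:
    W2^2 = integral_0^1 (F_mu^{-1}(u) - F_nu^{-1}(u))^2 du,
    evaluated by a midpoint rule on (0, 1)."""
    f, g = NormalDist(0, sigma1), NormalDist(0, sigma2)
    total = 0.0
    for i in range(n):
        u = (i + 0.5) / n
        total += (f.inv_cdf(u) - g.inv_cdf(u)) ** 2
    return (total / n) ** 0.5

w2 = w2_centered_gaussians(1.0, 3.0)
print(w2)   # approximately |1.0 - 3.0| = 2.0
```

For centered Gaussians the quantile difference is (σ₁ − σ₂)Φ⁻¹(u), and ∫₀¹ Φ⁻¹(u)² du = 1, which is exactly the statement of Lemma 2.6.4.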
Theorem 2.6.2 Assume M has nonnegative Ricci curvature (so in the case of ℝⁿ this is not an issue). Then for any positive measures µ and ν such that µ ≤ p₀λ and ν ≤ p₁λ, where λ is Lebesgue measure (or, more generally, the volume form), we have

∥µ − ν∥_{Ḣ⁻¹(µ)} ≤ [ 2(p₀^{1/2} − p₁^{1/2}) / ln(p₀/p₁) ] W₂(µ,ν).

In the case p₀ = p₁, the constant is p₀^{1/2}.

The next question, resolved in [Pey18, Theorem 3.1], is: if two measures are close in the Wasserstein metric over ℝⁿ, are they also close over a bounded subset of ℝⁿ?

Theorem 2.6.3 Let µ, ν be two measures on ℝⁿ having the same total mass (so this works nicely with probability measures); let B be a ball of ℝⁿ with radius R. Assume that on B the density of µ is bounded above and below with respect to Lebesgue measure, i.e. there exist m₁, m₂ with 0 < m₁ < m₂ < ∞ such that, for all x ∈ B,

m₁ λ(dx) ≤ µ(dx) ≤ m₂ λ(dx).

Let ϕ be a function such that:

1. ϕ is 0 outside B;
2. there exist c₁, c₂ with 0 < c₁ ≤ c₂ < ∞ such that, for all x ∈ B, c₁ dist(x,Bᶜ)² ≤ ϕ(x) ≤ c₂ dist(x,Bᶜ)²;
3. ϕ is k-Lipschitz for some k < ∞.

Then, denoting a := ∥ϕ·ν∥₁ / ∥ϕ·µ∥₁ (so that aϕ·µ and ϕ·ν have the same mass),

W₂(aϕ·µ, ϕ·ν) ≤ C(n) · [ c₂^{3/2} m₂^{3/2} / (c₁^{3/2} m₁^{3/2}) ] · k c₁^{−1/2} · W₂(µ,ν)

for some absolute constant C(n) < ∞ depending only on n. Moreover, one can bound C(n) explicitly in such a way that C(n) = O(n^{1/2}) as n → ∞.

Lemma 2.6.4 Given two centered Gaussian measures µ, ν on the real line, the W₂ distance is equal to the difference between the standard deviations.

Proof: Let X ∼ µ be N(0,σ₁²) and Y ∼ ν be N(0,σ₂²). Then

W₂(µ,ν) = inf_{X,Y} [E d(X,Y)²]^{1/2} = inf_{X,Y} [E(X − Y)²]^{1/2} = inf_{X,Y} [E(X² − 2XY + Y²)]^{1/2} = inf_{X,Y} (σ₁² − 2Cov(X,Y) + σ₂²)^{1/2} = (σ₁² − 2σ₁σ₂ + σ₂²)^{1/2} = |σ₁ − σ₂|,

because, by the Cauchy-Schwarz inequality, Cov(X,Y) ≤ σ₁σ₂, with equality for the coupling Y = (σ₂/σ₁)X.

2.7 Time Regularization of Brownian Motion and the One-Dimensional O-U Process

Let X = X(t), t ≥ 0, be a (continuous version of a) stationary Gaussian process with mean zero and covariance E X(t)X(s) = e^{−2|t−s|}.
In particular, X(t) is a standard Gaussian random variable for every t. The covariance function R(t)=e − 2|t| of X satisfies lim t→0 1− R(t) t α =0, 0<α< 1; Z ∞ −∞ R p (t)dt<∞, p>0; Z ∞ −∞ R(t)dt=1. (2.22) Two equivalent characterizations of X are as follows: X(t)=2 Z t −∞ e − 2(t− s) dW(s); (2.23) dX(t)=− 2X(t)dt+2dW(t), t≥ 0. (2.24) In both (2.23) and (2.24), W = W(t), t≥ 0, is a standard Brownian motion; in (2.23), when t < 0, W(t) = V(− t) for an independent copy V of W. The initial condition X(0) in (2.24) is a standard Gaussian random variable independent of W. 33 First we will see how to approximate standard Brownian motion. For κ> 0 define W κ (t)= 1 √ κ Z κt 0 X(s)ds. (2.25) In particular, the time derivative ˙ W κ (t)= √ κX (κt ) of W κ is a continuous process. We will show that, for large κ , the process W κ is a (weak) ap- proximation of the standard Brownian motion W. Because the trajectories of W κ are continuously differentiable (and almost all trajectories of W are not), the approximation cannot be in total varia- tion. Instead, we derive an explicit bound on the difference in the Wasserstein metric. To this end, we need a suitable complete separable metric space for W κ and W. Definition 2.7.1 The space C (0) is the collection of continuous functions f = f(t) on [0,+∞) such that f(0)=0, lim t→+∞ |f(t)| 1+t =0. If, for f ∈C (0) , we define the norm ∥f∥ (0) =sup t>0 |f(t)| 1+t , thenC (0) becomes a separable Banach space; cf. [DS89, Section 1.3]. Proposition 2.7.2 There exists a constant C X such that, for every functional φ:C (0) →R satisfying |φ(f)− φ(g)|≤∥ f− g∥ (0) and for all sufficiently large κ , we have Eφ W κ − Eφ(W) ≤ C X (lnκ ) 1/2 κ − 1/2 . (2.26) In particular, as κ → +∞, the process W κ converges, weakly in the space C (0) , to the standard Brownian motion W. Proof: Using (2.24), X(t)=X(0)e − 2t +2 Z t 0 e − 2(t− s) dW(s). 
(2.27) Changing the order of integration (stochastic Fubini theorem [Pro05, Theorem IV.46]), W κ (t)=κ − 1/2 W(κt )+ X(0)− X(κt ) 2 √ κ . (2.28) 34 Define X κ (t)=X(κt ). Because the process t 7→ κ − 1/2 W(κt ) is a standard Brownian motion for every κ > 0, it remain to show that limsup κ →+∞ (lnκ ) − 1/2 E∥X κ ∥ (0) <∞. (2.29) By (2.22) and [Pic67, Theorem 5.5], lim T→+∞ sup 0<t<T |X(t)| √ 2lnT =1, (2.30) and then continuity of X implies that the random variable ζ =sup t>0 |X(t)| p ln(1+t) is finite with probability one. Moreover, because t 7→ X(t)− X(0)e − 2t is a Gaussian process with mean zero, we also conclude thatEζ < ∞; cf. [AT07, Theorem 2.1.1]. As a result, using (1+κt )≤ (1+κ )(1+t) and ln(1+t)≤ t, t,κ> 0, we get ∥X κ ∥ (0) =sup t>0 |X(κt )| 1+t ≤ sup t,κ> 0 |X(κt )| p ln(1+κt ) ! sup t>0 p ln(1+κt ) 1+t ! ≤ ζ p ln(1+κ ), and (2.29) follows. Whenκ =n=1,2,...,andthetimeintervalisfinite,weakconvergenceof W κ toW isaparticular case of [LS89, Theorem 9.2.1] or [JS03, Theorem VIII.3.79]. Let us now approximate Ornstein-Uhlenbeck Process with this noise. Let U =U(t) be the process defined in (2.17), and, for κ> 0, consider ˙ U κ (t)=− aU κ (t)+σ ˙ W κ (t), t>0, U κ (0)=U(0). (2.31) For the sake of concreteness, we assume that the same Brownian motion W defines the processes U and W κ . In particular, U(0) is independent of W κ . The objective of this section is to establish analogs of (2.19) and (2.26) for the process U κ . 35 Proposition 2.7.3 Denote byµ κ (t) the distribution of the random variable U κ (t), and letµ 0 be the invariant measure for (2.17). Then there exists a number C = C(a,σ ) depending only on a and σ such that, for all sufficiently large κ and t, d TV µ κ (t),µ 0 ≤ C(a,σ ) 1 a+κ +|EU(0)|e − at +e − 2at . (2.32) Proof: By direct computation, U κ (t)=U(0)e − at +σ √ κ Z t 0 e − a(t− s) X(κs )ds. 
(2.33) Therefore, for each t>0, U κ (t) is a Gaussian random variable with mean e − at EU(0) and variance Var U κ (t) =2σ 2 κ Z t 0 Z s 0 e − a(2t− s− r) e − 2κ (s− r) drds= σ 2 κ a(a+2κ ) +R κ (t), (2.34) where, for 2κ>a , R κ (t)= σ 2 κ 4κ 2 − a 2 e − (a+2κ )t − e − 2at − σ 2 κ a(a+2κ ) e − 2at . (2.35) The result now follows from Proposition 2.5.2. To proceed, recall the spaceC (0) from Definition 2.7.1. Proposition 2.7.4 There exists a constant C U such that, for every functional φ:C (0) →R satisfying |φ(f)− φ(g)|≤∥ f− g∥ (0) and for all sufficiently large κ , we have Eφ U κ − Eφ(U) ≤ C U (lnκ ) 1/2 κ − 1/2 . (2.36) In particular, as κ → +∞, the process U κ converges, weakly in the space C (0) , to the Ornstein- Uhlenbeck process U. Proof: Integrating by parts on the right hand side of (2.33), U κ (t)=U(0)e − at +σW κ (t)− aσ Z t 0 e − a(t− s) W κ (s)ds. 36 Similarly, U(t)=U(0)e − at +σW (t)− aσ Z t 0 e − a(t− s) W(s)ds. Then, by Proposition 2.7.2, all we need is to show that the operator F :f(t)7→ Z t 0 e − a(t− s) f(s)ds mapsC (0) toC (0) and satisfies ∥F(f)− F(g)∥ (0) ≤ ∥f− g∥ (0) a . The definition of F implies that F(f)(0) = 0 and, for every f ∈ C (0) , the function t 7→ F(f)(t) is continuous. To show that lim t→∞ |F(f)(t)| 1+t =0, (2.37) fix ε>0 and take t ε so that|f(t)|/(1+t)<ε for all t>t ε . Using Z t 0 e − a(t− s) ds≤ 1 a , t>0, we compute, for t>t ε , 1 1+t Z t 0 e − a(t− s) f(s)ds ≤ 1 1+t Z tε 0 e − a(t− s) |f(s)|ds+ Z t tε e − a(t− s) |f(s)| 1+s ds ≤ max 0≤ s≤ tε |f(s)| a(1+t) + ε a . Passing to the limit, as t→∞, lim t→∞ |F(f)(t)| 1+t ≤ ε a , and (2.37) now follows because ε is arbitrary. Similarly, ∥F(f)− F(g)∥ (0) ≤ Z t 0 e − a(t− s) ∥f− g∥ (0) ds≤ ∥f− g∥ (0) a , completing the proof of Proposition 2.7.4. For infinite-dimensional Gaussian measures, weak convergence does not necessarily imply conver- gence in total variation. 37 Proposition 2.7.5 Let H be a separable Hilbert space with an orthonormal basis H ={h k , k ≥ 1}. 
Given sequences {a n,κ ,n≥ 1,κ> 0}, {a n ,n≥ 1} of positive real numbers such that P n a 2 n <∞ and P n (a n,κ ) 2 <∞ for all κ> 0, and a collection of iid standard Gaussian random variables ξ k , k≥ 1, define H-valued Gaussian random elements v κ = ∞ X n=1 a n,κ ξ n h n , v = ∞ X n=1 a n ξ n h n , and let P κ v and P v be the corresponding Gaussian measures on H (e.g., P v (A)=P(v∈A) for every Borel subset A of H). Then • lim κ →∞ ∞ X n=1 a n,κ − a n ) 2 =0 implies lim κ →∞ v κ L =v; • lim κ →∞ ∞ X n=1 a n,κ a n − 1 2 =0 or, equivalently, lim κ →∞ ∞ X n=1 a n,κ a n 2 − 1 ! 2 =0 implies lim κ →∞ d TV P κ v ,P v =0. Proof: The weak convergence part is [Bog98, Example 3.8.13(iii)]; the total variation part is Propo- sition 2.5.4. Given a separable Hilbert space H with inner product (·,·) H , norm ∥·∥ H , and an orthonormal basis H = {h k , k ≥ 1}, and independent standard Brownian motions W n = W n (t), t≥ 0, n≥ 1, a cylindrical Brownian motion W =W(t), t≥ 0, on H is W(t)= ∞ X n=1 W n (t)h n . (2.38) The time derivative of W, ˙ W(t)= ∞ X n=1 ˙ W n (t)h n , (2.39) is a generalized process and is known as the Gaussian space-time white noise on H. Consider independent copies W κ n , n≥ 1, of W κ from (2.25). Then W κ (t)= ∞ X n=1 W κ n (t)h n (2.40) 38 becomes a natural approximation of W. Note that, similar to W, the process W κ is not H-valued, but, for every Hilbert space H 1 such that the embedding H ⊂ H 1 is Hilbert-Schmidt, bothW(t) and its time derivative ˙ W κ (t)= ∞ X n=1 √ κX n (κt )h n (2.41) belong to H 1 . If h∈H, with h n =(f,h n ) H and∥h∥ H =1, then we define W[h](t)= ∞ X n=1 h n W n (t), t≥ 0, which is a standard Brownian motion. Similarly, W κ [h](t)= ∞ X n=1 h n W κ n (t). The following is an immediate consequence of Proposition 2.7.2, showing that, as κ → ∞, the convergence of W κ to W is weak, in both probabilistic and functional analytic sense. 
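The representation (2.28) rests on the pathwise identity ∫₀ᵀ X(s) ds = W(T) + (X(0) − X(T))/2, obtained by integrating (2.24). For the Euler scheme of (2.24) with left-endpoint Riemann sums, this identity telescopes exactly; the following sketch (step size and random seed chosen arbitrarily) confirms it to round-off:

```python
import random

random.seed(0)
h, n = 1e-3, 10_000            # step size and number of steps
x = random.gauss(0, 1)         # X(0): standard Gaussian, independent of W
x0, w, riemann = x, 0.0, 0.0
for _ in range(n):
    dw = random.gauss(0, h ** 0.5)
    riemann += x * h           # left-endpoint sum approximating int_0^T X ds
    x += -2 * x * h + 2 * dw   # Euler step for dX = -2 X dt + 2 dW
    w += dw
# Discrete version of int_0^T X ds = W(T) + (X(0) - X(T))/2:
# summing X_{k+1} - X_k = -2 h X_k + 2 dW_k over k telescopes exactly.
err = abs(riemann - (w + (x0 - x) / 2))
print(err)   # zero up to floating-point round-off
```

The telescoping argument is deterministic: it holds for every realization of the increments, which is why no averaging is needed.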
Proposition 2.7.6 There exists a constant C X such that, for every functional φ:C (0) →R satisfying |φ(f)− φ(g)|≤∥ f− g∥ (0) , for every h∈H with ∥h∥ H =1, and for all sufficiently large κ , we have Eφ W κ [h] − Eφ(W[h]) ≤ C X (lnκ ) 1/2 κ − 1/2 . (2.42) Let us now replace a constant κ with a sequence κ ={κa n , n≥ 1}, a n >0, (2.43) and set W κ = ∞ X n=1 W κa n n (t)h n . On the one hand, Proposition 2.7.6 still applies: we have weak convergence, as κ →∞, ofW κ toW. On the other hand, the time derivative of W κ satisfies ˙ W κ (t)= √ κ ∞ X n=1 √ a n X n (κa n t)h n . 39 In particular, time regularization of W can affect the spacial regularization of its time derivative. More precisely, if P n a n < ∞, then ˙ W κ (t) is more regular than W κ (t): ˙ W κ (t) ∈ H for all t > 0 (recall that EX n (t) = 0 and EX 2 n (t) = 1). On the other hand, if the numbers a n grow sufficiently quickly as n→∞, then ˙ W κ (t) will be even less regular than ˙ W. We will use this observation in the next section while investigating an infinite-dimensional version of (2.31). 40 Chapter 3 Stochastic Reaction-Diffusion Equations over a Bounded Domain We will consider the laplacian over the rectangle D = Q d i=1 [0,c i ] in R d and then discuss its eigen- functions and show that they are uniformly bounded for an appropriately chosen rectangle. First we start with d = 1, so we consider the laplacian over the interval [0,c 1 ]. The eigenfunctions of the laplacian are { √ 2sin( nπx c1 )},n ∈ Z and they form a complete orthonormal basis of L 2 ([0,c 1 ]). In higher dimensions, the eigenfunctions would be of the form b d d Y i=1 sin n i πx i c i , where n i ∈ Z. Note that this is already an orthonormal system so we do not need to worry about potential issues arising from multiplicity of eigenvalues and therefore the basis for L 2 (D) is uniformly bounded. 3.1 Existence of Solution We are interested in first showing existence of a solution and then describing its properties. 
u_t(x,t) = ∆u(x,t) + QẆ(x,t), x ∈ D = ∏_{i=1}^d [0,c_i], t ∈ (0,∞),
u(x,0) = 0,
u(x,t) = 0, x ∈ ∂D.

We define the solution to be the stochastic convolution

W_∆(t) = ∫₀ᵗ S(t−s)Q dW(s),

where S(t)g = Σ_{n=1}^∞ e^{−α_n t} ⟨g,e_n⟩ e_n and the α_n are the eigenvalues of −∆ on D. The operator Q satisfies Qe_k = √(q_k) e_k.

Properties: Since

W_∆(t) = ∫₀ᵗ S(t−s)Q dW(s) = Σ_{k=1}^∞ ∫₀ᵗ S(t−s)Qe_k dB_k(s),

W_∆(t) is Gaussian with mean 0. Now we compute the second moment:

E|W_∆(t)|² = E | ∫₀ᵗ S(t−s)Q dW(s) |² = Σ_{n=1}^∞ ∫₀ᵗ ∥S(t−s)Qe_n∥² ds = Σ_{n=1}^∞ ∫₀ᵗ q_n e^{−2α_n(t−s)} ds = Σ_{n=1}^∞ (q_n / (2α_n)) (1 − e^{−2α_n t}).

So we see that E|W_∆(t)|² < +∞ if and only if Σ_{n=1}^∞ q_n/α_n < +∞. Furthermore, we have an asymptotic estimate on the eigenvalues: according to Weyl's law, α_k ∼ c·k^{2/d}, where d is the dimension. So for d = 1 we can take Q = Id, and for d > 1 we can choose Q such that Σ q_n < +∞.

Suppose additionally that e_n ∈ C(D), sup_n |e_n| ≤ C, |∇e_n| ≤ Cα_n^{1/2}, and for some γ ∈ (0,1) we have

Σ_{n=1}^∞ q_n / α_n^{1−γ} < +∞.

Then

E|W_∆(t,x) − W_∆(t,y)|² ≤ C|x−y|^{2γ} and E|W_∆(t,x) − W_∆(s,x)|² ≤ C|t−s|^γ.

So in dimension 1, taking γ < 1/2 gives that the solution is almost Hölder-1/2 continuous in space and almost Hölder-1/4 continuous in time.

We can use these computations to find bounds on the p-th moment of the stochastic convolution. According to the Burkholder-Davis-Gundy inequality, if M_t is a local martingale and M*_t denotes sup_{s≤t} |M_s|, then

E|M*_t|^p ≤ C_p E⟨M⟩_t^{p/2},

where ⟨M⟩_t is the quadratic variation and C_p is a universal constant independent of M_t. In the case of W_∆(t) = ∫₀ᵗ S(t−s)Q dW(s) = Σ_{k=1}^∞ ∫₀ᵗ S(t−s)Qe_k dB_k(s), we have ⟨W_∆⟩_t = Σ_{k=1}^∞ ∫₀ᵗ q_k e^{−2α_k(t−s)} ds. Applying the BDG inequality gives

sup_{t≥0} E|W_∆(t)|^p ≤ C_p ( Σ_{k=1}^∞ ∫₀ᵗ q_k e^{−2α_k(t−s)} ds )^{p/2} = C_p ( Σ_{k=1}^∞ (q_k / (2α_k)) (1 − e^{−2α_k t}) )^{p/2} ≤ C_p ( Σ_{k=1}^∞ q_k / (2α_k) )^{p/2}.

So, for example, in dimension 1 with Q = Id, the moments are all finite.
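For d = 1 on [0,1] with Q = Id and α_n = (πn)², the limiting variance can be summed in closed form: Σ_n 1/(2π²n²) = 1/12. A quick numerical check of the variance series (truncation level chosen ad hoc, purely as an illustration):

```python
import math

def var_W_delta(t, N=20_000):
    """Truncation of E|W_Delta(t)|^2 = sum_n q_n/(2 alpha_n)(1 - exp(-2 alpha_n t))
    with q_n = 1 and alpha_n = (pi n)^2 (Dirichlet Laplacian on [0, 1])."""
    return sum((1 - math.exp(-2 * (math.pi * n) ** 2 * t)) / (2 * (math.pi * n) ** 2)
               for n in range(1, N + 1))

v_inf = var_W_delta(10.0)
print(v_inf)                                  # close to 1/12 = 0.08333...
print(var_W_delta(0.01) < var_W_delta(0.1) < v_inf)   # variance increases in t
```

The monotone increase of the variance to its limit 1/12 is the one-dimensional face of the long-time convergence used in the next sections to produce the invariant measure.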
Next, we will consider the following theorem which establishes compactness of the heat semigroup over the interval [0,1]. This result will be used later on when using compact operators to prove existence of the invariant measure. Theorem 3.1.1 The heat semigroup S(t) over [0,1] is compact as an operator from 1. S(t):L 2 ([0,1])→L 2 ([0,1]) 2. S(t):L 2 ([0,1])→C[0,1] Proof: To show that it is compact from L 2 ([0,1])→L 2 ([0,1]) it is enough to show that it is Hilbert- Schmidt, and so compactness is implied. ∥S(t)∥ HS = ∞ X k=1 ∥S(t)e k ∥ 2 = ∞ X k=1 ∥e − α k t e k ∥ 2 = ∞ X k=1 e − 2α k t <∞. Since α k =π 2 k 2 for k∈N, and e k = √ 2sin(kπx ). Now to show that it is compact from L 2 ([0,1])→ C[0,1], we will use Arzela-Ascoli to show that the image of a bounded sequence has a convergent 43 subsequence. Let{f n }beaboundedsequenceoffunctionsinL 2 ([0,1]). Thatis,wehave∥f n ∥ L 2 ≤ M, uniformly for some constant M. We first see that the sequence {S(t)f n } is uniformly bounded: ∥S(t)f n ∥ C[0,1] = ∞ X k=1 e − α k t ⟨f n ,e k ⟩e k C[0,1] ≤ ∞ X k=1 e − k 2 π 2 t |⟨f n ,e k ⟩|∥e k ∥ C[0,1] ≤ ∞ X k=1 e − k 2 π 2 t ∥f n ∥ 2 2 ≤ ∞ X k=1 e − k 2 π 2 t M 2 . Next we show that it is uniformly equicontinuous. So consider |S(t)f n (x)− S(t)f n (y)|= ∞ X k=1 e − α k t ⟨f n ,e k ⟩(sin(kπx )− sin(kπy )) ≤ ∞ X k=1 e − α k t ∥f n ∥ 2 2 |(sin(kπx )− sin(kπy ))| ≤ ∞ X k=1 e − α k t ∥f n ∥ 2 2 (kπ )|x− y| ≤ ∞ X k=1 ke − k 2 π 2 t πM 2 ! |x− y| ≤ C|x− y|. And the summation in the second to last line converges to a constant C not relying on n. So if δ =ϵ/C , we have that|x− y|<δ =⇒ |S(t)f n (x)− S(t)f n (y)|<ϵ ∀n∈N. 3.2 Existence and Uniqueness of Invariant Measure Asshownearlier, longtermconvergenceofthesolutionalsoimpliesexistenceofaninvariantmeasure, so this will only be the case if ∞ X n=1 q n α n <+∞. 44 Now to prove that it is unique, suppose for the sake of contradiction that we have two invariant measures µ , and ν . 
Now consider two initial conditions u 0 ∼ µ and v 0 ∼ ν for the equations: u t (x,t)=∆ u(x,t)+Q ˙ W(x,t) with x∈D,t∈(0,∞) u(x,0)=u 0 u(x,t)=0 with x∈∂D and v t (x,t)=∆ v(x,t)+Q ˙ W(x,t) with x∈D,t∈(0,∞) v(x,0)=v 0 v(x,t)=0 with x∈∂D. Then u(t)− v(t) satisfies (u− v) t =∆( u− v) with x∈D,t∈(0,∞) (u− v)(x,0)=u 0 − v 0 (u− v)(x,t)=0 with x∈∂D. So this means ∥u(t)− v(t)∥≤ e − α 1t ∥u(0)− v(0)∥. Now take any bounded and continuous test function ϕ and note that: Eϕ (u(t))= Z ϕ (x)P ∗ t µ (dx)= Z ϕ (x)µ (dx) and Eϕ (v(t))= Z ϕ (x)P ∗ t ν (dx)= Z ϕ (x)ν (dx). And from above, we have that E|ϕ (u(t))− ϕ (v(t))|→0 as t→∞. 45 So it follows that µ =ν. 3.3 Characterizing the Invariant Measure as the Gaussian Free Field The objective of [LS20] is to characterize the Gaussian free field as the stationary solution of a heat equation driven by space-time Gaussian white noise. Theorem 3.3.1 Let u=u(x,t) be a solution of u t (x,t)=ν ∆ u(x,t)+σ ˙ W(x,t), t>0, x∈O⊆ R d , (3.1) with initial condition u(x,0) = φ(x) independent of W and with constant ν > 0, σ > 0; suitable boundary conditions are imposed if O ⊂ R d . Then, as t → +∞, u converges weakly to a scalar multiple of the Gaussian free field on O. In other words, as t → +∞, the solution of the stochastic parabolic equation (3.3.1) converges in distribution to the solution of the stochastic elliptic equation (− ν ∆ ) 1/2 v(x)=σV (x), whereV isGaussianwhitenoise(oranisonormalGaussianprocess)on L 2 (O). Bycomparison, direct computationsshowthat, as t→+∞, thesolutionofthedeterministicheatequation u t =ν ∆ u+f(x) inaboundeddomainorinR d ,d≥ 3,withasmoothcompactlysupportedf,convergestothesolution of the elliptic equation ν ∆ v =− f, but this convergence does not in general hold in R andR 2 . The starting point in the proof of Theorem 3.3.1 is well-posedness of equation (3.1): a suitably defined solution exits, is unique, and depends continuously on the initial condition. 
The argument is relatively straightforward in a bounded domain: the Fourier method shows that (3.1) is well posed as long as the deterministic heat equation is well posed; see Theorem 3.3.6 below. The analysis is more complicated in the whole space, where the stochastic term does not allow a direct application of deterministic theory and makes it necessary to consider the solution in special spaces of generalized functions. 46 Oncethesolutionof (3.1)isconstructed,therestoftheproofofTheorem3.3.1canbesummarized as follows. Denote by G O = G O (t,x,y) the heat kernel for equation (3.1) so that, for f ∈ G, the solution u H,f =u H,f (x,t) of the deterministic heat equation with initial condition f is u H,f (x,t)= Z O G O (t,x,y)f(y)dy. (3.2) The main consequence of well-posedness of (3.1) is that the solution can be written as u(x,t)=u H,φ (x,t)+σ Z t 0 Z O G O (t− s,x,y)W(dy,ds) (3.3) and, because G O (t,x,y)=G O (t,y,x), u[t,f]:= Z O u(x,t)f(x)dx=u H,φ [t,f]+σ Z t 0 Z O Z O G O (t− s,x,y)f(x)dx W(ds,dy) =u H,φ [t,f]+σ Z t 0 Z O u H,f (t− s,y)W(ds,dy); (3.4) cf. [Wal86, Chapter 9] in the caseO =R d , d≥ 3. As a result, E u[t,f]u[t,g] =E u H,φ [t,f]u H,φ [t,g] +σ 2 Z t 0 Z O u H,f (t− s,y)u H,g (t− s,y)dyds, and if lim t→+∞ u H,f (x,t)=0 (3.5) in an appropriate way, then lim t→+∞ E u[t,f]u[t,g] =σ 2 Z +∞ 0 Z O u H,f (y,s)u H,g (y,s)dyds. Moreover, by (3.2) and the semigroup property of G O , Z O u H,f (y,s)u H,g (y,s)dy = ZZ O×O G O (s,x,y)f(x)g(y)dxdy. If we also have Z +∞ 0 G O (s,x,y)ds= 2 ν Φ O (x,y), (3.6) 47 then, combining the above computations with (2.3), we get the convergence lim t→+∞ E u[t,f]u[t,g] = 2σ 2 ν E ¯ W[f] ¯ W[g] . (3.7) A major part of the paper consists in providing the details in the above arguments, in particular, 1. Constructing the solution of (3.1) and interpreting (3.3); 2. Identifying a suitable function classG and verifying (3.4), (3.5); 3. 
Working around (3.6): this step turns out to be a major technical difference between a bounded domain and the whole space;

4. Interpreting both $u$ and $\bar W$ as Gaussian measures on a suitable Hilbert space so that (3.7) will indeed imply the required convergence.

We will also see that, similar to the deterministic problem, the cases $\mathcal O=\mathbb R$ and $\mathcal O=\mathbb R^2$ require special considerations, partly because of the failure of (3.6) and partly because of unexpected difficulties interpreting (2.3).

Let $\mathcal O$ be a bounded domain in $\mathbb R^d$ and let $\Delta$ be the Laplacian on $\mathcal O$ with some homogeneous boundary conditions so that

[A1] The eigenfunctions $h_k$, $k\ge 1$, of $\Delta$ form an orthonormal basis in $L^2(\mathcal O)$;

[A2] The eigenvalues $-\lambda_k^2$, $k\ge 1$, of $\Delta$ satisfy $0<\lambda_1<\lambda_2\le\lambda_3\le\cdots$, and there exists a number $c_{\mathcal O}>0$ such that
\[
\lambda_k\sim c_{\mathcal O}\,k^{1/d}. \tag{3.8}
\]
Taking $H=L^2(\mathcal O)$ and $\Lambda=(-\Delta)^{1/2}$, we see that conditions [O1]–[O3] hold, with $\alpha=1/d$, and we construct the Hilbert scale $\mathcal H_\Lambda$ as in Definition 2.3.1. In particular,
\[
f=\sum_{k=1}^\infty f_kh_k\in H^\gamma \iff \sum_{k=1}^\infty k^{2\gamma/d}f_k^2<\infty.
\]
For $\nu>0$, consider the heat equation
\[
u(x,t)=f(x)+\nu\int_0^t\Delta u(x,s)\,ds,\quad t\ge 0,\ x\in\mathcal O, \tag{3.9}
\]
and the Poisson equation
\[
\nu\Delta v(x)=-g(x),\quad x\in\mathcal O. \tag{3.10}
\]
Writing
\[
f(x)=\sum_{k=1}^\infty f_kh_k(x),\quad g=\sum_{k=1}^\infty g_kh_k(x),\quad
u(t,x)=\sum_{k=1}^\infty u_k(t)h_k(x),\quad v(x)=\sum_{k=1}^\infty v_kh_k(x),
\]
we can solve equations (3.9) and (3.10).

Proposition 3.3.2

1. For every $f\in H^\gamma$, the unique solution of (3.9) is
\[
u(x,t)=\sum_{k=1}^\infty e^{-\lambda_k^2\nu t}f_kh_k(x)=\int_{\mathcal O}G_{\mathcal O}(t,x,y)f(y)\,dy,
\quad\text{where}\quad
G_{\mathcal O}(t,x,y)=\sum_{k=1}^\infty e^{-\lambda_k^2\nu t}h_k(x)h_k(y).
\]
The operator norm of the heat semigroup
\[
S_t:f\mapsto\int_{\mathcal O}G_{\mathcal O}(t,x,y)f(y)\,dy \tag{3.11}
\]
decays exponentially in time on every $H^\gamma$:
\[
\|S_tf\|_\gamma\le e^{-\nu\lambda_1^2t}\|f\|_\gamma. \tag{3.12}
\]
2. For every $g\in H^\gamma$, the unique solution of (3.10) is
\[
v(x)=\sum_{k=1}^\infty\frac{g_k}{\lambda_k^2\nu}h_k(x)=\int_{\mathcal O}\Phi_{\mathcal O}(x,y)g(y)\,dy,
\quad\text{where}\quad
\Phi_{\mathcal O}(x,y)=\sum_{k=1}^\infty\frac{h_k(x)h_k(y)}{\lambda_k^2\nu}=\int_0^{+\infty}G_{\mathcal O}(t,x,y)\,dt.
\]
In particular, equality (3.6) holds.
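The eigenfunction expansion of $\Phi_{\mathcal O}$ in Proposition 3.3.2 can be sanity-checked numerically in the model case $\mathcal O=(0,1)$ with zero Dirichlet boundary conditions, where $h_k(x)=\sqrt2\sin(k\pi x)$, $\lambda_k=k\pi$, and the classical Green's function of $-\Delta$ is $\min(x,y)-xy$ (these closed forms are assumptions of this sketch, not stated in the text):

```python
import numpy as np

nu = 2.0
x, y = 0.3, 0.7
k = np.arange(1, 200001)
hk_x = np.sqrt(2) * np.sin(k * np.pi * x)   # Dirichlet eigenfunctions h_k on (0,1)
hk_y = np.sqrt(2) * np.sin(k * np.pi * y)
lam2 = (k * np.pi) ** 2                     # eigenvalues lambda_k^2 = (k pi)^2

# Phi_O(x,y) = sum_k h_k(x) h_k(y) / (lambda_k^2 nu)   (Proposition 3.3.2, part 2)
phi_series = np.sum(hk_x * hk_y / (lam2 * nu))
phi_exact = (min(x, y) - x * y) / nu        # Green's function of -Delta on (0,1), scaled by 1/nu
print(phi_series, phi_exact)
```

The truncated series agrees with the closed form to roughly the size of the tail, which is $O(1/K)$ for $K$ retained modes.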
Definition 3.3.3 The $(\Delta,\mathcal O)$-Gaussian free field $\bar W$ is an isonormal Gaussian process on $H^1$.

The point is that, in a bounded domain $\mathcal O$, there are many different Gaussian free fields, depending on the boundary conditions of the operator $\Delta$. For example, with zero boundary conditions, we take $f,g\in H^1$ and integrate by parts to find
\[
E\big(\bar W[f]\bar W[g]\big)=(f,g)_1=(\Lambda f,\Lambda g)_0=-(f,\Delta g)_0
=-\int_{\mathcal O}f(x)\Delta g(x)\,dx=(\nabla f,\nabla g)_0,
\]
which, for $d=2$, is the same as [She07, Definition 2.12]. More generally, by (2.12),
\[
(-\Delta)^{1/2}\bar W=V,
\]
where $V$ is an isonormal Gaussian process on $L^2(\mathcal O)$.

Proposition 3.3.4 Under the assumptions [A1], [A2], the $(\Delta,\mathcal O)$-Gaussian free field $\bar W$ has a representation
\[
\bar W(x)=\sum_{k=1}^\infty\frac{\zeta_k}{\lambda_k}h_k(x), \tag{3.13}
\]
with iid standard Gaussian random variables $\zeta_k$, and defines a centered Gaussian measure on $H^{1-(d/2)-\varepsilon}$ for every $\varepsilon>0$; the Cameron–Martin space of this measure is $H^1$.

Proof: This follows from Proposition 2.3.6 with $r=1$ and $\alpha=1/d$.

3.3.1 Main Result

Given $\nu>0$, $\sigma>0$, and a cylindrical Brownian motion $W$ on $L^2(\mathcal O)$, consider the evolution equation
\[
u(t)=\varphi+\nu\int_0^t\Delta u(s)\,ds+\sigma W(t),\quad t>0, \tag{3.14}
\]
with initial condition $\varphi$ independent of $W$.

Definition 3.3.5 Given $\varphi\in L^2(\Omega;H^r)$, a solution of (3.14) is an adapted process with values in
\[
L^2\big(\Omega\times[0,T];H^{r+1}\big)\bigcap L^2\big(\Omega;C((0,T);H^r)\big),
\]
such that equality (3.14) holds in $H^{r-1}$ for all $t\ge 0$ with probability one.

Theorem 3.3.6 If $\varphi\in L^2(\Omega;H^{-\gamma})$ and $\gamma>d/2$, then, under assumptions [A1], [A2], equation (3.14) has a unique solution and, for every $T>0$,
\[
E\sup_{0<t<T}\|u(t)\|_{-\gamma}^2+E\int_0^T\|u(t)\|_{1-\gamma}^2\,dt\le C(\gamma,T)\big(1+E\|\varphi\|_{-\gamma}^2\big); \tag{3.15}
\]
$C=C(\gamma,T)$ is a number depending only on $T$ and $\gamma$. Moreover,

1. For every $t>0$, $u(t)\in L^2\big(\Omega;H^{1-\gamma}\big)$ and
\[
u(t)=S_t\varphi+\sum_{k=1}^\infty\bar u_k(t)h_k, \tag{3.16}
\]
where $S_t$ is the heat semigroup (3.11) and $\bar u_k(t)$, $k\ge 1$, are independent Gaussian random variables with mean zero and variance
\[
E\bar u_k^2(t)=\frac{\sigma^2}{2\nu\lambda_k^2}\Big(1-e^{-2\nu\lambda_k^2t}\Big). \tag{3.17}
\]
2.
As $t\to+\infty$, the $H^{1-\gamma}$-valued random variables $u(t)$ converge weakly to $\sigma(2\nu)^{-1/2}\bar W$, where $\bar W$ is the $(\Delta,\mathcal O)$-Gaussian free field.

Proof: The first part of the theorem follows directly from [RL18, Theorem 3.1] after the identifications
\[
A=\nu\Delta,\quad X=H^{1-\gamma},\quad H=H^{-\gamma},\quad X'=H^{-\gamma-1},\quad M(t)=W(t),
\]
because, by Proposition 2.3.4,
\[
W\in L^2\big(\Omega;C((0,T);H^{-\gamma})\big),\quad \gamma>\frac d2.
\]
To establish (3.16), we write $u(t)=\sum_{k=1}^\infty u_k(t)h_k$ and combine (3.14) with (2.9) to get
\[
u_k(t)=\varphi_k-\nu\lambda_k^2\int_0^tu_k(s)\,ds+\sigma w_k(t);
\]
recall that $\Delta h_k=-\lambda_k^2h_k$. Then
\[
u_k(t)=\varphi_ke^{-\nu\lambda_k^2t}+\bar u_k(t),\quad\text{where}\quad
\bar u_k(t)=\sigma\int_0^te^{-\nu\lambda_k^2(t-s)}\,dw_k(s).
\]
Next,
\[
E\bar u_k^2(t)=\sigma^2\int_0^te^{-2\nu\lambda_k^2(t-s)}\,ds,
\]
and (3.16) follows. In particular,
\[
E\|u(t)\|_{1-\gamma}^2\le\frac{E\|\varphi\|_{-\gamma}^2}{\nu t}+\frac{\sigma^2}{2\nu}\sum_{k=1}^\infty\lambda_k^{-2\gamma}, \tag{3.18}
\]
so that
\[
u(t)\in L^2(\Omega;H^{1-\gamma}),\quad t>0. \tag{3.19}
\]
Note that (3.18) cannot be used to establish (3.15), whereas (3.15) does not necessarily imply (3.19).

Finally, (3.17) implies that, as $t\to+\infty$, each $\bar u_k(t)$ converges in distribution to $\sigma(2\nu)^{-1/2}(\zeta_k/\lambda_k)$, where $\zeta_k$, $k\ge 1$, are iid standard Gaussian random variables. By (3.12) and independence of $\bar u_k(t)$ for different $k$, the process $u(t)$ converges in distribution to the $H^{1-\gamma}$-valued Gaussian random variable
\[
\frac{\sigma}{\sqrt{2\nu}}\sum_{k=1}^\infty\frac{\zeta_k}{\lambda_k}h_k=\sigma(2\nu)^{-1/2}\bar W,
\]
which, by Proposition 3.3.4, concludes the proof of the theorem.

Corollary 3.3.7

1. Equation (3.14) is ergodic and the unique invariant measure is the distribution of $\sigma(2\nu)^{-1/2}\bar W$ on $H^{1-\gamma}$.

2. If $\varphi\stackrel{d}{=}\sigma(2\nu)^{-1/2}\bar W$, then $u(t)\stackrel{d}{=}\sigma(2\nu)^{-1/2}\bar W$ for all $t>0$.

3. If $E\varphi_k=0$ for all $k$, then, for each $t>0$, the measure generated by $u(t)$ on $H^{1-\gamma}$ is absolutely continuous with respect to the measure generated by $\sigma(2\nu)^{-1/2}\bar W$.

Proof: The first two statements are an immediate consequence of (3.16).
The third statement follows from a theorem of Kakutani [Bog98, Example 2.7.6]: two zero-mean Gaussian product measures are equivalent if and only if the corresponding standard deviations $m_k$, $n_k$ satisfy
\[
\sum_{k=1}^\infty\Big(\frac{m_k}{n_k}-1\Big)^2<\infty;
\]
in our case, $m_k=n_k\big(1-e^{-2\nu\lambda_k^2t}\big)^{1/2}$.

3.4 Nonlinear Equation over [0,1]

In this section we study the additive stochastic heat equation with cubic nonlinearity. We will see that the solution exists and that there is also an invariant measure over $L^2$. Furthermore, this measure is actually concentrated on $C_0[0,1]$. First we introduce a useful inequality:

Theorem 3.4.1 ($\epsilon$-Young's Inequality) Let $a,b\ge 0$ be real numbers and $p,q>1$ such that $\frac1p+\frac1q=1$. Then
\[
ab\le\frac{\epsilon^pa^p}{p}+\frac{\epsilon^{-q}b^q}{q}.
\]
We begin by verifying a lemma that we will later use to bound expectations of the solution.

Lemma 3.4.2 Let $f(x)=-x^3$. There exists a polynomial function of the form $a(y)=c_1y^4+c_2$ that satisfies the inequality $f(x+y)x\le a(y)$ for all $x,y\in\mathbb R$.

Proof: Expanding,
\[
f(x+y)x=-x^4-3x^3y-3x^2y^2-xy^3\le-x^4+|3x^3y|+|xy^3|,
\]
since $-3x^2y^2\le 0$. Applying $\epsilon$-Young's inequality with $(p,q)=(4/3,4)$ to $|3x^3y|$ and with $(p,q)=(4,4/3)$ to $|xy^3|$ bounds each cross term by a multiple of $\epsilon^{4/3}x^4$ (respectively $\epsilon^4x^4$) plus a constant, depending on $\epsilon$, times $y^4$. Choosing $\epsilon$ small enough that the total coefficient of $x^4$ in these bounds is less than $1$, the term $-x^4$ absorbs all the $x^4$ contributions, leaving
\[
f(x+y)x\le c_1y^4+c_2.
\]
Next we will develop the theory of dissipative mappings over a general Banach space. The subdifferential is a subset of the dual space of a Banach space.

Definition 3.4.3 (Subdifferential) The subdifferential $\partial\|x\|$ of the norm $\|\cdot\|$ at $x$ is given by
\[
\partial\|x\|=\big\{x^*\in E^*:\|x+y\|-\|x\|\ge\langle y,x^*\rangle\ \ \forall y\in E\big\}.
\]
Equivalently, this can be represented as
\[
\partial\|x\|=
\begin{cases}
\{x^*\in E^*:\langle x,x^*\rangle=\|x\|,\ \|x^*\|=1\}, & x\ne 0,\\[2pt]
\{x^*\in E^*:\|x^*\|\le 1\}, & x=0.
\end{cases}
\]
So essentially, the subdifferential is the set of linear functionals that recover the norm. For a Hilbert space such as $L^2(\mathbb R)$, the subdifferential $\partial\|f\|$ at $f\ne 0$ contains the functional $T_f(g):=\frac{1}{\|f\|}\int_{\mathbb R}fg\,dx$. In the case of $C_0([0,1])$, $\partial\|f\|$ contains the functional $\pm T_{x_0}$, evaluation at a point $x_0$ where $\sup_{x\in[0,1]}|f(x)|$ is attained, with the sign chosen to match the sign of $f(x_0)$.
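The description of $\partial\|f\|$ in $C_0([0,1])$ can be made concrete on a grid: the norming functional is sign-adjusted evaluation at a maximizer of $|f|$. A small sketch (the particular functions $f$ and $g$ below are arbitrary illustrations):

```python
import numpy as np

x = np.linspace(0, 1, 10001)
f = np.sin(2 * np.pi * x) * (1 - x)     # an element of C_0[0,1] (f(0) = f(1) = 0)

i0 = np.argmax(np.abs(f))               # grid point x_0 where sup |f| is attained
sgn = np.sign(f[i0])

def T(g):
    # candidate element of the subdifferential: g -> sgn(f(x_0)) * g(x_0)
    return sgn * g[i0]

print(T(f), np.abs(f).max())            # <f, f*> = ||f||_sup
g = np.cos(5 * x) * x * (1 - x)
print(abs(T(g)) <= np.abs(g).max())     # |<g, f*>| <= ||g||_sup, i.e. ||f*|| <= 1
```

This exhibits exactly the two defining properties $\langle f,f^*\rangle=\|f\|$ and $\|f^*\|=1$ from the equivalent form of the definition.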
It is also not hard to show that the subdifferential is closed, nonempty, and convex. To start seeing how this relates to the problem at hand, we next consider the following:

Definition 3.4.4 (Dissipative Mapping) A mapping $f:D(f)\subset E\to E$ is said to be dissipative iff for any $x,y\in D(f)$ there exists $z^*\in\partial\|x-y\|$ such that
\[
\langle f(x)-f(y),z^*\rangle\le 0.
\]
Equivalently, a mapping is dissipative iff
\[
\|x-y\|\le\|x-y-\alpha(f(x)-f(y))\|,\quad\forall x,y\in D(f),\ \forall\alpha>0.
\]
Definition 3.4.5 (Strongly Dissipative Mapping) A mapping $f:D(f)\subset E\to E$ is said to be strongly dissipative if, for some positive $\omega$, $f+\omega I$ is dissipative.

Example 3.4.6 $f(u)=-u^3$ is dissipative over the Hilbert space $L^2$ since
\[
\langle f(u)-f(v),u-v\rangle=\int_0^1-(u^3-v^3)(u-v)\,dx=\int_0^1-(u-v)^2(u^2+uv+v^2)\,dx\le 0.
\]
Lemma 3.4.7 The mapping $g(u)=\frac{d^2}{dx^2}u$ is dissipative over the Banach space $C_0[0,1]$.

Proof: We use the equivalent definition of dissipative mappings, so we need to check that
\[
\Big\|(u-v)-\alpha\frac{d^2}{dx^2}(u-v)\Big\|\ge\|u-v\|,\quad\forall\alpha>0.
\]
Indeed, $(u-v)(0)=(u-v)(1)=0$, and at any local maximum of $|u-v|$ the second derivative of $u-v$ has sign opposite to that of $u-v$ (or vanishes). So $(u-v)-\alpha\frac{d^2}{dx^2}(u-v)$ is at least as large as $|u-v|$ in absolute value at any point where the supremum of $|u-v|$ is attained, which proves the claim.

Before we can discuss invariant measures, we need to first establish existence and uniqueness of the mild solution:

Theorem 3.4.8 The equation
\[
\frac{d}{dt}u(x,t)=\Delta u(x,t)-u(x,t)^3+dW(t),
\]
where $u(x,0)\in L^2[0,1]$, has a unique mild solution of the form
\[
u(x,t)=S(t)u(x,0)+\int_0^tS(t-s)f(u(x,s))\,ds+\int_0^tS(t-s)\,dW(s).
\]
Proof: For existence, first define the function $f_n:\mathbb R\to\mathbb R$:
\[
f_n(x):=n^3\,\mathbf 1_{(-\infty,-n]}(x)-x^3\,\mathbf 1_{(-n,n)}(x)-n^3\,\mathbf 1_{[n,\infty)}(x).
\]
So essentially, we are truncating $-x^3$ after it passes $-n^3$ or $n^3$. It is not hard to see that $f_n\to f(x)=-x^3$ pointwise, and that while $f(x)$ is only locally Lipschitz, $f_n$ is globally Lipschitz. Next we check that $f_n$ is dissipative. We want to show that $\langle f_n(u)-f_n(v),u-v\rangle\le 0$.
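Before running through the cases, both claimed properties of the truncation — pointwise dissipativity $(f_n(u)-f_n(v))(u-v)\le 0$ and the Lipschitz bound $3n^2$ — can be spot-checked numerically. A quick sketch (random sampling, $n=3$ purely for illustration):

```python
import numpy as np

def f_n(x, n):
    # truncation of f(x) = -x^3 at level n; globally Lipschitz with constant 3 n^2
    return -np.clip(x, -n, n) ** 3

rng = np.random.default_rng(2)
u = rng.uniform(-10, 10, 100000)
v = rng.uniform(-10, 10, 100000)
n = 3.0

monotone = (f_n(u, n) - f_n(v, n)) * (u - v)       # dissipativity: always <= 0
mask = np.abs(u - v) > 1e-8                        # avoid near-zero denominators
lip = np.abs(f_n(u, n) - f_n(v, n))[mask] / np.abs(u - v)[mask]
print(monotone.max(), lip.max())
```

The maximal difference quotient stays below $3n^2=27$, the Lipschitz constant of $-x^3$ on $[-n,n]$.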
Case 1: $u(x,t),v(x,t)\in[-n,n]$. This is the same as checking whether the function $f(u)=-u^3$ is dissipative, which it is.

Case 2: $u(x,t)>n$, $v(x,t)\in[-n,n]$. Then we consider $[-n^3-(-v^3)](u-v)$. Regardless of whether $v(x,t)$ is positive or negative, $-n^3+v^3$ is nonpositive and $u-v$ is positive, so overall $[f_n(u)-f_n(v)](u-v)$ is nonpositive.

There are a total of 9 cases, since $f_n(u(x,t))$ and $f_n(v(x,t))$ each take one of three forms: the value $n^3$, the cubic $-x^3$ on $[-n,n]$, or the value $-n^3$. It is straightforward to check that in each of these cases $[f_n(u)-f_n(v)](u-v)\le 0$. So $f_n$ is dissipative.

The next step is to show local existence for the equation
\[
\frac{d}{dt}u(x,t)=\Delta u(x,t)+f_n(u(x,t))+dW(t),
\]
where $u(x,0)\in L^2[0,1]$. We start by defining the mild solution
\[
u_n(x,t)=S(t)u(x,0)+\int_0^tS(t-s)f_n(u_n)\,ds+\int_0^tS(t-s)\,dW(s).
\]
Let
\[
\mathcal F(v):=S(t)v(x,0)+\int_0^tS(t-s)f_n(v)\,ds+\int_0^tS(t-s)\,dW(s).
\]
Then for $v,w\in L^2$ with $v(x,0)=w(x,0)$ we consider
\[
\sup_{t\in[0,T_n]}\|\mathcal Fv-\mathcal Fw\|_2
=\sup_{t\in[0,T_n]}\Big\|\int_0^{T_n}S(t-s)\big[f_n(v)-f_n(w)\big]\,ds\Big\|_2
\le\sup_{t\in[0,T_n]}\int_0^{T_n}\big\|S(t-s)[f_n(v)-f_n(w)]\big\|_2\,ds
\le\sup_{t\in[0,T_n]}\int_0^{T_n}e^{-\omega_1(t-s)}C_n\|v-w\|_2\,ds
\le T_n\,C_n\sup_{t\in[0,T_n]}\|v-w\|_2.
\]
Choosing an appropriate $T_n$ gives us a contraction, and a standard fixed-point argument yields local existence.

Next we prove global existence. Let $z(x,t)$ be the solution to the linear equation
\[
\frac{d}{dt}z(x,t)=\Delta z(x,t)+dW(t)
\]
with zero initial condition. Set $v_n(x,t)=u_n(x,t)-z(x,t)$, so that $v_n(x,t)$ satisfies the equation
\[
\frac{d}{dt}v_n=\Delta v_n+f_n(v_n+z).
\]
From here we are in a position to exploit dissipativity:
\[
\Big\langle\frac{d}{dt}v_n,v_n\Big\rangle
=\langle\Delta v_n+f_n(v_n+z),v_n\rangle
=\langle\Delta v_n,v_n\rangle+\langle f_n(z),v_n\rangle+\langle f_n(v_n+z)-f_n(z),(v_n+z)-z\rangle
\le-\omega_1\|v_n\|_2^2+\langle f_n(z),v_n\rangle.
\]
In the last step we used dissipativity and the Poincaré inequality.
Putting this together gives
\[
\frac12\frac{d}{dt}\|v_n\|_2^2
\le-\omega_1\|v_n\|_2^2+\langle f_n(z),v_n\rangle
\le-\omega_1\|v_n\|_2^2+\|f_n(z)\|_2\|v_n\|_2
\le-\omega_1\|v_n\|_2^2+\epsilon\|v_n\|_2^2+\frac1\epsilon\|f_n(z)\|_2^2.
\]
Choosing $\epsilon$ sufficiently small and using the fact that $|f_n|$ is at most $n^3$, a constant, gives a Gronwall-type bound for $\|v_n\|_2^2$. This gives a growth bound for $\|u_n\|_2^2$, and so we have global existence. The last step is to check that $u_n\to u$; this follows by the dominated convergence theorem.

For uniqueness, suppose there are two solutions $u(x,t)$ and $v(x,t)$ of the equation. Then the difference $w=u-v$ is a solution of
\[
\frac{d}{dt}w(x,t)=\Delta w(x,t)-\big(u(x,t)^3-v(x,t)^3\big).
\]
Since $f(u)=-u^3$ is dissipative, multiplying both sides by $w=u-v$ gives the bound
\[
\frac12\frac{d}{dt}\|w(x,t)\|_2^2\le-\pi^2\|w(x,t)\|_2^2.
\]
But since the initial condition is already $0$, this means $w(x,t)$ must be $0$, and so uniqueness is established as well.

Now we show existence of the unique invariant measure for the nonlinear equation on a bounded domain in dimension one.

Theorem 3.4.9 The equation
\[
\frac{d}{dt}u(x,t)=\Delta u(x,t)-u(x,t)^3+dW(t)
\]
has a unique invariant measure, where $u(x,0)\in L^2[0,1]$.

Proof: Let $z_t$ be the solution to the linear equation $z_t=\Delta z+dW(t)$ with zero initial condition. First we define a two-sided cylindrical Brownian motion as
\[
\bar W(t)=\begin{cases}W(t), & t\ge 0,\\ V(-t), & t\le 0,\end{cases}
\]
where $V(t)$ is another cylindrical Brownian motion independent of $W(t)$. We consider the equation
\[
\frac{d}{dt}u(x,t)=\Delta u(x,t)-u(x,t)^3+d\bar W(t),\quad t\ge s,\qquad u(x,s)=g(x),
\]
for some function $g(x)\in L^2[0,1]$. We know there is a unique generalized solution to this, and we denote that solution by $u(t,s,g)$. Next, we show that $\forall\lambda>0$, $\forall t>-\lambda$, and $\forall g\in L^2[0,1]$, there exists $C>0$ such that
\[
E\big(\|u(t,-\lambda,g)\|_2\big)\le C+\|g\|_2.
\]
For any $\lambda>0$, define $z_\lambda(t)$ to be the solution to the linear equation
\[
\frac{d}{dt}z_\lambda(x,t)=\Delta z_\lambda(x,t)+d\bar W(t),\quad t\ge-\lambda,\qquad z_\lambda(x,-\lambda)=0.
\]
Alternatively, we can express $z_\lambda(t)$ as $\int_{-\lambda}^tS(t-s)\,d\bar W(s)$.
Consider $v_\lambda(t,g)=u(t,-\lambda,g)-z_\lambda(t)$, which is the solution to the equation
\[
\frac{d}{dt}v_\lambda(x,t)=\Delta v_\lambda(x,t)-\big(v_\lambda(x,t)+z_\lambda(x,t)\big)^3,\quad t\ge-\lambda,\qquad v_\lambda(x,-\lambda)=g(x).
\]
Now we can proceed as follows:
\[
v_\lambda\frac{d}{dt}v_\lambda=v_\lambda\Delta v_\lambda-v_\lambda(v_\lambda+z_\lambda)^3,
\]
so, integrating over $[0,1]$,
\[
\frac12\frac{d}{dt}\|v_\lambda\|_2^2
=\int_0^1v_\lambda\Delta v_\lambda\,dx-\int_0^1v_\lambda(v_\lambda+z_\lambda)^3\,dx
=-\int_0^1\Big(\frac{d}{dx}v_\lambda\Big)^2dx+\int_0^1v_\lambda(-z_\lambda^3)\,dx
+\int_0^1\Big[v_\lambda\big(-(v_\lambda+z_\lambda)^3\big)-v_\lambda(-z_\lambda^3)\Big]dx.
\]
The last integral is nonpositive by dissipativity of $-u^3$, and the first term is bounded via the Poincaré inequality, so
\[
\frac12\frac{d}{dt}\|v_\lambda\|_2^2
\le-\pi^2\|v_\lambda\|_2^2+\int_0^1v_\lambda(-z_\lambda^3)\,dx
\le-\pi^2\|v_\lambda\|_2^2+\|v_\lambda\|_2\|z_\lambda^3\|_2
\le-\pi^2\|v_\lambda\|_2^2+\epsilon\|v_\lambda\|_2^2+\frac1\epsilon\|z_\lambda^3\|_2^2.
\]
In this case, it suffices to let $\epsilon=1$ and $2\pi^2-2\epsilon=\alpha>0$. Then we have
\[
\|v_\lambda(t)\|_2^2\le e^{-\alpha(t+\lambda)}\|g\|_2^2+\int_{-\lambda}^te^{-\alpha(t-s)}\,2\|z_\lambda^3(s)\|_2^2\,ds,
\]
and taking expectations gives
\[
E\|v_\lambda(t)\|_2^2\le\|g\|_2^2+\frac1\alpha\sup_{s\in[-\lambda,t]}E\|z_\lambda^3(s)\|_2^2,\quad t\ge-\lambda.
\]
Therefore, it follows that $E\|u(t,-\lambda,g)\|_2\le\|g\|_2+C$, $\forall\lambda>0$, $\forall t>-\lambda$, and $\forall g\in L^2[0,1]$.

Now consider $u(t,-\lambda,g)$, the solution to the equation
\[
\frac{d}{dt}u(t,-\lambda)=\Delta u(t,-\lambda)-\big(u(t,-\lambda)\big)^3+d\bar W(t),\quad t\ge-\lambda,\qquad u(-\lambda,-\lambda)=g(x),
\]
and $u(t,-\gamma,g)$, the solution to the equation
\[
\frac{d}{dt}u(t,-\gamma)=\Delta u(t,-\gamma)-\big(u(t,-\gamma)\big)^3+d\bar W(t),\quad t\ge-\gamma,\qquad u(-\gamma,-\gamma)=g(x).
\]
Then, for $\gamma>\lambda$, the difference $u(t,-\lambda,g)-u(t,-\gamma,g)$, $t\ge-\lambda$, is the solution to the equation
\[
\frac{d}{dt}\big[u(t,-\lambda,g)-u(t,-\gamma,g)\big]
=\Delta\big[u(t,-\lambda,g)-u(t,-\gamma,g)\big]
-\Big[\big(u(t,-\lambda,g)\big)^3-\big(u(t,-\gamma,g)\big)^3\Big],\quad t\ge-\lambda,
\]
with initial condition $u(-\lambda,-\lambda,g)-u(-\lambda,-\gamma,g)=g(x)-u(-\lambda,-\gamma,g)$ at $t=-\lambda$. Multiplying both sides by $u(t,-\lambda)-u(t,-\gamma)$ and proceeding as above gives the inequality
\[
E\|u(t,-\lambda,g)-u(t,-\gamma,g)\|_2^2\le e^{-\alpha(t+\lambda)}E\|u(-\lambda,-\gamma,g)-g\|_2^2,
\]
or
\[
E\|u(t,-\lambda,g)-u(t,-\gamma,g)\|_2\le e^{-\frac\alpha2(t+\lambda)}E\|u(-\lambda,-\gamma,g)-g\|_2
\le e^{-\frac\alpha2(t+\lambda)}\big(2\|g\|_2+C\big).
\]
So there exists a random variable $\Psi$, the same for all $g$, such that
\[
\lim_{\lambda\to\infty}E\|u(0,-\lambda,g)-\Psi\|_2=0.
\]
And since $u(0,-t,g)$ has the same distribution as $u(t,0,g)$, we see that the original equation
\[
u_t=\Delta u-u^3+\dot W(t),\qquad u(x,0)\in L^2[0,1],
\]
has a unique invariant measure.

Next we give an alternate proof over $C_0[0,1]$ using compactness of operators; together with the ideas of dissipative mappings, we show existence and uniqueness of the invariant measure over the Banach space of continuous functions on $[0,1]$ vanishing at the endpoints.

Theorem 3.4.10 The equation $\frac{d}{dt}u(x,t)=\Delta u(x,t)-u(x,t)^3+dW(t)$ has a unique invariant measure, where $u(x,0)\in C_0[0,1]$.

Proof: The proof of a more general case can be found in [DPZ96, 6.3.5]. We first rewrite the equation in the form
\[
\frac{d}{dt}u(x,t)=\Delta u(x,t)-u(x,t)+u(x,t)-u(x,t)^3+\dot W(t).
\]
Define $A(u)=\Delta u-u$ and $F(u)=-u^3+u$. Let $S(t)$ be the semigroup generated by the operator $A$ over the space $C_0[0,1]$. Then we know that $\|S(t)\|_{C_0[0,1]\to C_0[0,1]}\le e^{-t}$, since by the maximum principle the heat semigroup over the closed interval has operator norm at most $1$, and modifying the heat equation by the linear term $-u$ bounds the semigroup operator norm by $e^{-t}$. We also know that $S(t)$ is compact for $t>0$, since the corresponding heat semigroup operator is compact.

So we start with the equation
\[
\frac{d}{dt}u(x,t)=A(u)+F(u)+\dot W(t)
\]
and we consider the solution $W_A(x,t)$ of the linear equation
\[
\frac{d}{dt}z(x,t)=A(z)+\dot W(t).
\]
As before, subtracting the two equations and setting $v=u-W_A$ gives
\[
\frac{d}{dt}v=A(v)+F(v+W_A).
\]
Now let $v(x,t)^*$ be an element of the subdifferential $\partial\|v(x,t)\|$. Then we can consider
\[
\Big\langle\frac{d}{dt}v,v^*\Big\rangle=\langle A(v),v^*\rangle+\langle F(v+W_A),v^*\rangle,
\]
so that
\[
\frac{d^-}{dt}\|v\|\le-\|v\|+\langle F(v+W_A),v^*\rangle;
\]
the last step is due to the fact that $A$ is strongly dissipative by design, with $\omega=1$. To bound the $\langle F(v+W_A),v^*\rangle$ term, we will show that there is a function $a:\mathbb R_+\to\mathbb R_+$ such that
\[
\langle F(v+W_A),v^*\rangle\le a(\|W_A\|).
\]
Recall that we can choose $v(x,t)^*$ to be the evaluation map at some $x_0$ where the sup is attained.
To account for whether $v(x_0,t)$ is positive or negative, we multiply by $\operatorname{sgn}(v(x_0,t))$. So we want
\[
\big(-(v+W_A)^3+(v+W_A)\big)\cdot\operatorname{sgn}(v)\big|_{x=x_0}\le a(\|W_A\|).
\]
It is easier to show that
\[
\big(-(r+s)^3+(r+s)\big)\operatorname{sgn}(r)\le a(|s|)
\]
for $r,s\in\mathbb R$, and this would imply what we want. We first notice that we can let $a(|s|)=c_1|s|^3+c_2$ for appropriate constants $c_1,c_2$. Define
\[
g(r,s)=\big(-(r+s)^3+r+s\big)\operatorname{sgn}(r).
\]
Then notice that
\[
g(-r,-s)=\big(-(-r-s)^3-r-s\big)\operatorname{sgn}(-r)
=\big(-(-r-s)^3-r-s\big)\big(-\operatorname{sgn}(r)\big)
=\big(-(r+s)^3+r+s\big)\operatorname{sgn}(r)=g(r,s).
\]
So without loss of generality we can assume $r\ge 0$. The function $g(0,s)=-s^3+s$ is decreasing on the interval $\big(-\infty,-\tfrac1{\sqrt3}\big]$, and $-w^3+w$ is bounded by some value $c<1$ on the interval $\big[-\tfrac1{\sqrt3},\infty\big)$. So, for $r\ge 0$, $g(r,s)\le 1+|s|^3$: if $r+s\in\big(-\infty,-\tfrac1{\sqrt3}\big]$, then $g(r,s)\le g(0,s)=-s^3+s\le|s|^3$ for $s\in\big(-\infty,-\tfrac1{\sqrt3}\big]$; if $r+s\in\big[-\tfrac1{\sqrt3},\infty\big)$, then $g(r,s)\le c\le 1\le 1+|s|^3$. So overall $g(r,s)\le 1+|s|^3$.

Going back to what we originally wanted to bound, this means
\[
\big(-(v+W_A)^3+(v+W_A)\big)\cdot\operatorname{sgn}(v)\big|_{x=x_0}
\le a\big(|W_A(x_0,t)|\big)\le a(\|W_A\|).
\]
Returning to the equation, this gives us the bound
\[
\frac{d^-}{dt}\|v\|\le-\|v\|+a(\|W_A\|),
\]
so applying Gronwall's inequality gives
\[
\|u(x,t)-W_A(x,t)\|\le e^{-t}\|u(x,0)\|+\int_0^te^{-(t-s)}a\big(\|W_A(\cdot,s)\|\big)\,ds.
\]
This gives us the bound
\[
\sup_{t\ge 0}E\|u(x,t)\|\le\|u(x,0)\|+\sup_{t\ge 0}\Big(E\|W_A(x,t)\|+Ea(\|W_A(x,t)\|)\Big)<+\infty.
\]
Similar to Theorem 3.2.1, we will show that the family of distributions of $\{u(x,t)\}$ for $t\ge 1$ is tight, therefore showing that there exists an invariant measure. For $\epsilon\in[0,1]$, define an operator $G_\epsilon$ by
\[
G_\epsilon\phi=\int_\epsilon^1S(u)\phi(u)\,du,\quad \phi\in C([0,1];C_0[0,1]).
\]
Using the fact that S(t) is compact for t > 0, we can express u(x,t+1) as the image of a compact operator via u(x,t+1)=S(1)u(x,t)+ Z 1 0 S(u)(F(u(x,t+(1− r)))dr+W A,t (1), where W A,t (1)= R t+1 t S(t+(1− r))dW(r). However shifting up by suitable t, we have the bound ∥u(x,t+r)− W A,t (r)∥≤ e − r ∥u(x,t)∥+ Z r 0 e − (r− s) a(∥W A,s (r)∥)ds, r≥ 0 So we have sup r∈[0,1] ∥u(x,t+r)∥≤∥ u(x,t)∥+ sup r∈[0,1] ∥W A,t (r)∥+ sup r∈[0,1] a(∥W A,t (r)∥). 63 Since we showed earlier that sup t≥ 0 E∥u(x,t)∥ < +∞, we can find a constant C > 0 such that P(∥u(x,t)∥>C) < ϵ, P sup r∈[0,1] ∥W A,t (r)∥>C < ϵ, P sup r∈[0,1] a(∥W A,t (r)∥)>C < ϵ ∀t ≥ 0 and this gives us the bound P( sup r∈[0,1] ∥u(x,t+u)∥≥ 3C)≤ P(∥u(x,t)∥>C)+P sup r∈[0,1] ∥W A,t (r)∥>C ! +P sup r∈[0,1] a(∥W A,t (r)∥)>C ! ≤ 3ϵ Now we know the set of solutions n u(x,t)|sup r∈[0,1] ∥u(x,t+u)∥≤ 3C,t≥ 0 o is bounded and the operators S(1) and G 0 are compact. And finally, given the above probability bound, we can construct a compact subset K 3ϵ for the family of distributions of {u(x,t+1), t≥ 0}, such thatP(u(x,t+1)∈ K 3ϵ )≥ 1− 3ϵ . So the family is tight and therefore we have existence of invariant measure. Combining this result with Theorem 3.2.1 gives us uniqueness. 3.5 Approximation of the Stochastic Convolution with Time Regularization LetO be a bounded domain inR d , with boundary ∂O regular enough so that the Laplacian ∆ onO with suitable homogeneous boundary conditions has the following properties: [A1] The eigenfunction h n , n≥ 1, of ∆ form an orthonormal basis in L 2 (O); [A2] The eigenvalues− λ 2 n , n≥ 1, of ∆ satisfy 0 <λ 1 <λ 2 ≤ λ 3 ≤··· , and there exists a number c O >0 such that λ n ∼ c O n 1/d . There are various sufficient conditions ensuring [A1] and [A2]. The precise nature of the boundary conditions makes no difference for our analysis. 64 Denote by H γ , γ ∈ R, the Hilbert scale generated by the operator √ − ∆ ; cf. [LS20, Definition 2.1]. 
In particular,
\[
\{\lambda_n^{-\gamma}h_n,\ n\ge 1\} \tag{3.20}
\]
is an orthonormal basis in $H^\gamma$. Let $W_n=W_n(t)$, $t\ge 0$, $n\ge 1$, be independent standard Brownian motions and $\nu,\sigma>0$. A stochastic convolution is the random process
\[
u(t)=\sigma\sum_{n\ge 1}\Big(\int_0^te^{-\nu\lambda_n^2(t-s)}\,dW_n(s)\Big)h_n. \tag{3.21}
\]
We call $u$ a stochastic convolution because
\[
u(t)=\sigma\int_0^t\!\!\int_{\mathcal O}G_{\mathcal O}(t-s,x,y)\,\dot W(ds,dy),
\]
where $\dot W$ is Gaussian space-time white noise on $L^2(\mathcal O)$ and
\[
G_{\mathcal O}(t,x,y)=\sum_{k=1}^\infty e^{-\lambda_k^2\nu t}h_k(x)h_k(y)
\]
is the fundamental solution of the heat equation $v_t=\nu\Delta v$ in $\mathcal O$; cf. [DPZ14, Section 5.1.2]. In other words, $u$ is the unique solution, both mild and generalized, of the equation
\[
u_t=\nu\Delta u+\sigma\dot W,\quad t>0,\qquad u|_{t=0}=0,
\]
and is often considered an infinite-dimensional version of (2.17). For each $t>0$, the Gaussian random element $u(t)$ generates the measure $P_t$ on $H^{1-\gamma}$ for $\gamma>d/2$; cf. [LS20, Theorem 3.5]. Let $P_0$ be the measure generated on $H^{1-\gamma}$ by the Gaussian random element
\[
\bar u=\frac{\sigma}{\sqrt{2\nu}}\sum_{n=1}^\infty\frac{\zeta_n}{\lambda_n}h_n, \tag{3.22}
\]
where $\zeta_n$, $n\ge 1$, are iid standard Gaussian random variables; $\bar u$ is a scalar multiple of the $(\Delta,\mathcal O)$-Gaussian free field [LS20, Section 3.1]. Because, by (3.21) and (2.18), for each $t>0$,
\[
u(t)\stackrel{\mathcal L}{=}\frac{\sigma}{\sqrt{2\nu}}\sum_{n=1}^\infty\frac{\zeta_n}{\lambda_n}\big(1-e^{-2\lambda_n^2\nu t}\big)^{1/2}h_n,
\]
Proposition 2.5.4, together with (3.20), implies an infinite-dimensional version of (2.19): for all sufficiently large $t$,
\[
d_{TV}(P_t,P_0)\le C_u\,e^{-2\nu\lambda_1^2t},\qquad
C_u^2=e^{4\nu\lambda_1^2}\sum_{n=1}^\infty e^{-4\nu\lambda_n^2}. \tag{3.23}
\]
The next step is to derive an infinite-dimensional version of (2.7.3). With (2.43) in mind, we will consider a broader class of approximations of $W$ and $u$. For $r\in\mathbb R$ and $\kappa>0$, denote by $\kappa_r$ the sequence
\[
\kappa_r=\{\kappa\lambda_n^r,\ n\ge 1\}
\]
and define
\[
u^{\kappa_r}(t)=\sigma\sum_{n\ge 1}\sqrt{\kappa\lambda_n^r}\Big(\int_0^te^{-\lambda_n^2\nu(t-s)}X_n(\kappa\lambda_n^rs)\,ds\Big)h_n; \tag{3.24}
\]
recall that $X_n=X_n(t)$, $t\in\mathbb R$, $n\ge 1$, are independent and identically distributed processes such that each $X_n$ is Gaussian with mean zero and covariance function $EX_n(t)X_n(s)=e^{-|t-s|}$.
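The driving processes $X_n$ above are stationary Ornstein–Uhlenbeck processes, and the covariance $e^{-|t-s|}$ is easy to reproduce with an exact AR(1) recursion. A simulation sketch (step size and lag are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
h, n_steps, n_paths = 0.01, 200, 100000
a = np.exp(-h)

X = rng.standard_normal(n_paths)       # X(0) ~ N(0,1): the stationary law
path = [X.copy()]
for _ in range(n_steps):
    # exact one-step update of the stationary OU process with covariance e^{-|t-s|}
    X = a * X + np.sqrt(1 - a * a) * rng.standard_normal(n_paths)
    path.append(X.copy())
path = np.array(path)

tau = 1.0                              # lag t - s
cov = np.mean(path[0] * path[int(tau / h)])
print(cov, np.exp(-tau))               # empirical vs. exact covariance e^{-tau}
```

The update is exact in distribution, so the only discrepancy is Monte Carlo error of order $n_{\text{paths}}^{-1/2}$.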
Writing
\[
u_n^{\kappa_r}(t)=\sqrt{\kappa\lambda_n^r}\int_0^te^{-\lambda_n^2\nu(t-s)}X_n(\kappa\lambda_n^rs)\,ds,
\]
we use (2.34) to compute
\[
E\big(u_n^{\kappa_r}(t)\big)^2=\frac{\kappa\lambda_n^r}{\nu\lambda_n^2\big(\nu\lambda_n^2+2\kappa\lambda_n^r\big)}+R_{n,1}^{\kappa_r}(t)+R_{n,2}^{\kappa_r}(t), \tag{3.25}
\]
\[
R_{n,1}^{\kappa_r}(t)=\frac{\kappa\lambda_n^r}{4\kappa^2\lambda_n^{2r}-\nu^2\lambda_n^4}\Big(e^{-(\nu\lambda_n^2+2\kappa\lambda_n^r)t}-e^{-2\nu\lambda_n^2t}\Big), \tag{3.26}
\]
\[
R_{n,2}^{\kappa_r}(t)=-\frac{\kappa\lambda_n^r}{\nu\lambda_n^2\big(\nu\lambda_n^2+2\kappa\lambda_n^r\big)}\,e^{-2\nu\lambda_n^2t}. \tag{3.27}
\]
With no loss of generality, we assume that $\kappa\lambda_n^r\ne\nu\lambda_n^2$ for all $n$ and $\kappa$. Then, for fixed $\kappa$, $u^{\kappa_r}(t)\in H^{1-\gamma}$, $\gamma>d/2$, for every $r\in\mathbb R$. In fact, using (3.8), we see that
\[
\frac{\kappa\lambda_n^r}{\nu\lambda_n^2\big(\nu\lambda_n^2+2\kappa\lambda_n^r\big)}\asymp\frac{1}{n^{2/d}\big(1+n^{(2-r)/d}\big)},\quad n\to\infty.
\]
As a result, if $r\ge 2$, then $u^{\kappa_r}(t)$ has the same regularity as $u(t)$; if $r<2$, then $u^{\kappa_r}(t)$ has better regularity than $u(t)$:
\[
u^{\kappa_r}(t)\in H^{1-\gamma},\quad\gamma>(d+r-2)/2. \tag{3.28}
\]
For example, if $r=0$ and $d=1,2,3$, then $u^{\kappa_r}(t)\in H^{(1/2)-\epsilon}\subset L^2(\mathcal O)$ for every $0<\epsilon<1/2$.

Let $P_t^{\kappa_r}$ be the measure generated by $u^{\kappa_r}(t)$ on $H^{1-\gamma}$, $\gamma>d/2$, and let $\bar u$ be defined by (3.22).

Theorem 3.5.1 For every $r\in\mathbb R$,
\[
\lim_{\kappa,t\to\infty}u^{\kappa_r}(t)\stackrel{\mathcal L}{=}\bar u. \tag{3.29}
\]
Moreover, if $r>\frac d2+2$, then the convergence is in total variation: for all sufficiently large $\kappa$ and $t$,
\[
d_{TV}\big(P_t^{\kappa_r},P_0\big)\le\frac{C(d,r)}{\nu+\kappa}+C(\nu,r)\,e^{-2\lambda_1^2t}. \tag{3.30}
\]
Proof: We will apply Proposition 2.7.5 with $H=H^{1-\gamma}$ and
\[
a_n=\frac{\lambda_n^{1-\gamma}}{\sqrt{2\nu}\,\lambda_n}=\frac{1}{\sqrt{2\nu}\,\lambda_n^\gamma};
\]
the reason for the extra factor $\lambda_n^{1-\gamma}$ in the definition of $a_n$ is (3.20). Equality (3.25) shows that it is enough to investigate the term
\[
a_{n,\kappa}=\frac{\sqrt{\kappa}\,\lambda_n^{(r/2)-\gamma}}{\sqrt{\nu}\,\big(\nu\lambda_n^2+2\kappa\lambda_n^r\big)^{1/2}},
\]
because $R_{n,i}^{\kappa_r}(t)$, $i=1,2$, decay exponentially, both in $t$ and $n$, and the decay is uniform in $\kappa$. To show that $\lim_{\kappa,t\to\infty}u^{\kappa_r}(t)\stackrel{\mathcal L}{=}\bar u$, we need
\[
\lim_{\kappa\to+\infty}\sum_{n=1}^\infty\big(a_n-a_{n,\kappa}\big)^2=0. \tag{3.31}
\]
By direct computation,
\[
a_n-a_{n,\kappa}=\frac{\lambda_n^{-\gamma}}{\sqrt{2\nu}}\Bigg(1-\frac{\sqrt{2\kappa}}{\sqrt{\nu\lambda_n^{2-r}+2\kappa}}\Bigg),
\]
and, for $\gamma>d/2$, $\sum_{n=1}^\infty\lambda_n^{-2\gamma}<\infty$. If $r\ge 2$, then, for all sufficiently large $\kappa$,
\[
\big(a_n-a_{n,\kappa}\big)^2\le\frac{\nu\lambda_n^{-2\gamma}}{\kappa^2},
\]
and then (3.31) follows.
Similarly, if $r<2$, then, as $n\to\infty$,
\[
\big(a_n-a_{n,\kappa}\big)^2\le\frac{\nu^2\lambda_n^{4-2r-2\gamma}}{\big(\nu\lambda_n^{2-r}+2\kappa\big)^2}\le\lambda_n^{-2\gamma},
\]
and therefore (3.31) holds by the dominated convergence theorem; unlike the case $r\ge 2$, the rate of convergence now depends on $r$.

To establish (3.30), note that, as $n\to\infty$,
\[
1-\Big(\frac{a_{n,\kappa}}{a_n}\Big)^2=\frac{\nu\lambda_n^2}{\nu\lambda_n^2+2\kappa\lambda_n^r}\asymp\frac{\nu}{\nu+\kappa\,n^{(r-2)/d}},
\]
and $\sum_{n\ge 1}n^{-2(r-2)/d}<\infty$ if and only if $2(r-2)/d>1$, or, equivalently, $r>\frac d2+2$.

Chapter 4 Stochastic Reaction-Diffusion Equations over $\mathbb R^d$

4.1 Existence of Weak Solution of the Linear Equation

Earlier we studied the heat equation over the whole space,
\[
u_t(x,t)=\Delta u(x,t)+f(x,t),\quad x\in\mathbb R^d,\ t\in(0,\infty),\qquad u(x,0)=g(x),
\]
and found that the weak solution is
\[
u(x,t)=\int_{\mathbb R^d}\Phi(x-y,t)g(y)\,dy+\int_0^t\!\!\int_{\mathbb R^d}\Phi(x-y,t-s)f(y,s)\,dy\,ds.
\]
Now suppose our nonhomogeneous term $f$ is random; in particular, suppose it is space-time white noise, which we define as a process $\dot W(x,t)$ with covariance $E\dot W(x,t)\dot W(y,s)=\delta(x-y)\delta(s-t)$. We can alternatively define the associated Wiener process as $W(x,t)=\sum_{k=1}^\infty B_k(t)e_k(x)$, where the $B_k(t)$ are independent standard Brownian motions and $\{e_k\}$ is a basis of $L^2(G)$, for $x\in G$. We are interested in the solution to
\[
u_t(x,t)=\Delta u(x,t)+\dot W(x,t),\quad x\in\mathbb R^d,\ t\in(0,\infty),\qquad u(x,0)=0.
\]
By Duhamel's principle, the solution is
\[
u(x,t)=\int_0^t\!\!\int_{\mathbb R^d}\Phi(x-y,t-s)\dot W(y,s)\,dy\,ds.
\]
To get an idea of the behavior, we do a covariance computation. Define the covariance of the solution by $C(s,t,x,y)=Eu(x,s)u(y,t)$. In general $C(s,t,x,y)=C(s,t,0,x-y)$, so we are interested in $C(s,t,0,x)$. These computations can be found in greater detail in [Hai09]:
\[
C(s,t,0,x)=\frac{1}{(4\pi)^d}\,E\int_0^s\!\!\int_0^t\!\!\int_{\mathbb R^d}\!\!\int_{\mathbb R^d}
\frac{1}{|s-r'|^{d/2}\,|t-r|^{d/2}}\,
e^{-\frac{|x-y|^2}{4(t-r)}-\frac{|y'|^2}{4(s-r')}}\,\dot W(r,y)\dot W(r',y')\,dy\,dy'\,dr'\,dr
\]
\[
=\frac{1}{(4\pi)^d}\int_0^{s\wedge t}\!\!\int_{\mathbb R^d}\frac{1}{|s-r|^{d/2}\,|t-r|^{d/2}}\,
e^{-\frac{|x-y|^2}{4(t-r)}-\frac{|y|^2}{4(s-r)}}\,dy\,dr
=2^{-d}\int_0^{s\wedge t}(s+t-2r)^{-d/2}\exp\Big(-\frac{|x|^2}{4(s+t-2r)}\Big)dr
\]
\[
=2^{-(d+1)}\int_{|s-t|}^{s+t}p^{-d/2}\exp\Big(-\frac{|x|^2}{4p}\Big)dp.
\]
When $x=0$ and $s=t$, this is integrable only for $d<2$. So in dimension 1 this solution exists as a function, but in higher dimensions it exists only as a distribution. To compute H\"older continuity in the 1-dimensional case, we use the previous computation. Recall that by Kolmogorov's continuity criterion, if $E|u(x,t)-u(x,s)|^2\le|t-s|^{2\gamma}$, then $u(x,t)$ is almost H\"older-$\gamma$ continuous in time. Expanding the left-hand side gives
\[
E|u(x,t)-u(x,s)|^2=Eu(x,t)^2+Eu(x,s)^2-2Eu(x,t)u(x,s)=C(t,t,0,0)+C(s,s,0,0)-2C(t,s,0,0).
\]
We also have
\[
C(s,t,0,0)=\frac14\int_{|s-t|}^{s+t}p^{-1/2}\,dp=\frac12\Big(|s+t|^{\frac12}-|s-t|^{\frac12}\Big).
\]
Along the diagonal with $s\approx t$, this implies that
\[
E|u(x,t)-u(x,s)|^2\approx|t-s|^{\frac12}.
\]
In other words, the stochastic convolution is almost H\"older-$\frac14$ continuous in time. To understand continuity in space, we fix $s=t$. Letting $z=|x|^2/4p$ and doing a change of variables gives
\[
C(t,t,0,x)=\frac{|x|}{8}\int_{|x|^2/8t}^\infty z^{-\frac32}e^{-z}\,dz.
\]
Next, we can use integration by parts to get
\[
C(t,t,0,x)=\frac{\sqrt{2t}}{2}\,e^{-\frac{|x|^2}{8t}}-\frac{|x|}{4}\int_{|x|^2/8t}^\infty z^{-\frac12}e^{-z}\,dz.
\]
So, for small values of $x$,
\[
C(t,t,0,x)\approx\frac{\sqrt{2t}}{2}\,e^{-\frac{|x|^2}{8t}}-\frac{|x|}{4}\int_0^\infty z^{-\frac12}e^{-z}\,dz
=\frac{\sqrt{2t}}{2}-\frac{\sqrt\pi\,|x|}{4}+O\big(|x|^2/\sqrt t\,\big).
\]
We can see that, for fixed $t$, the process is almost H\"older-$\frac12$ continuous in space. The computation of $C(t,s,0,0)$ also shows that $E|u(x,t)|^2=C(t,t,0,0)=\sqrt t/\sqrt2$. Unlike in the case of the bounded domain, where $\sup_{t\ge 0}E|W_\Delta(t)|^2<\infty$, here the second moment grows. Since this is Gaussian, we should expect the $p$-th moments of the stochastic convolution over the whole space to be bounded over a finite time interval, but to tend to $\infty$ as $t\to\infty$.

Unlike in the bounded domain case, the heat semigroup is not compact over $L^2(\mathbb R)$. According to the spectral theorem, a compact self-adjoint operator on a Hilbert space has a countable collection of eigenfunctions that form an orthonormal basis of the Hilbert space.
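Returning to the covariance computed above, the time-H\"older exponent can be checked numerically: the log-log slope of $E|u(x,t+h)-u(x,t)|^2$ in $h$ should approach $1/2$, i.e. $2\gamma=1/2$. A quick check using the closed form for $C(s,t,0,0)$:

```python
import math

def C(s, t):
    # covariance of the 1-d stochastic convolution at x = 0
    return (math.sqrt(s + t) - math.sqrt(abs(s - t))) / 2

def incr2(t, h):
    # E|u(x,t+h) - u(x,t)|^2 = C(t,t) + C(t+h,t+h) - 2 C(t,t+h)
    return C(t, t) + C(t + h, t + h) - 2 * C(t, t + h)

t = 1.0
# estimated exponent: log of the ratio incr2(h)/incr2(h/4) over log 4
ratios = [math.log(incr2(t, h) / incr2(t, h / 4)) / math.log(4)
          for h in (1e-2, 1e-3, 1e-4)]
print(ratios)   # should approach 1/2
```

The estimated exponents are very close to $1/2$, consistent with almost H\"older-$1/4$ paths in time.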
In the case of the heat semigroup over $L^2(\mathbb R)$, if $\lambda$ is an eigenvalue of the Laplacian, then $e^{\lambda t}$ is an eigenvalue of the heat semigroup $S(t)$. To see why, note that $\Delta f=\lambda f$ implies $\Delta\big(e^{\lambda t}f(x)\big)=\lambda e^{\lambda t}f(x)=\frac{d}{dt}\big(e^{\lambda t}f\big)$, and so $e^{\lambda t}f$ is a solution to the heat equation with initial condition $f(x)$. However, it is well known that the Laplacian does not have any eigenfunctions in $L^2(\mathbb R)$, and so the heat semigroup cannot have any eigenvalues either. Since the heat semigroup is not compact over $L^2(\mathbb R)$, it also is not Hilbert–Schmidt, which is a necessary condition for many of the existence results on the stochastic heat equation. Some alternatives are to consider weighted Banach spaces and Schwartz spaces. We will use the results found in [DPZ14] to show existence of the weak solution to the linear equation over $\mathcal S_p$ spaces. A more detailed discussion of Schwartz spaces can be found in [BS15] and [RT03].

Let $T=-\frac{d^2}{dx^2}+x^2$. There is an orthonormal basis of $L^2(\mathbb R,dx)$ consisting of eigenfunctions $\phi_n$ of $T$, with eigenvalues $2n+1$:
\[
T\phi_n=(2n+1)\phi_n.
\]
These $\phi_n$ are in fact the Hermite functions, and they lie in the Schwartz space $\mathcal S(\mathbb R)$. Now, for any function $f\in L^2(\mathbb R)$, we can apply a linear operator $B$ as follows:
\[
Bf:=\sum_n\frac{1}{2n+1}\langle f,\phi_n\rangle\phi_n.
\]
We can see that $\|Bf\|_2^2\le\|f\|_2^2$ by comparing basis coefficients, and that $B$ is the inverse of $T$, since $BTf=f$ and $TBf=f$. Now, for any $p\ge 0$, the image of $B^p$ is the set of all $f\in L^2(\mathbb R)$ for which
\[
\sum_{n\in\mathbb W}(2n+1)^{2p}\,|\langle f,\phi_n\rangle|^2<\infty.
\]
So let $\mathcal S_p(\mathbb R)=B^p(L^2(\mathbb R))$. Essentially, applying $B^p$, $p>0$, shrinks the space $L^2$ and makes the functions more and more `well-behaved'. There is an inner product on $\mathcal S_p$ given by
\[
\langle f,g\rangle_p=\sum_{n\in\mathbb W}(2n+1)^{2p}\langle f,\phi_n\rangle\langle\phi_n,g\rangle=\langle B^{-p}f,B^{-p}g\rangle_{L^2}.
\]
If we take the intersection of all of these $\mathcal S_p$ spaces, we get $\mathcal S(\mathbb R)$, the space of Schwartz functions. (This is nontrivial and needs a proof, but it is nevertheless true.) The elements $(2n+1)^{-p}\phi_n$ form an orthonormal basis of $\mathcal S_p(\mathbb R)$.
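The eigenvalue relation $T\phi_n=(2n+1)\phi_n$ can be verified numerically by building the Hermite functions on a grid with the standard three-term recurrence and computing the Rayleigh quotient of $T=-\frac{d^2}{dx^2}+x^2$ with a finite-difference Laplacian (the recurrence coefficients below are the classical ones for normalized Hermite functions, an assumption of this sketch):

```python
import numpy as np

N, L, M = 5, 8.0, 4001
x = np.linspace(-L, L, M)
dx = x[1] - x[0]

h = [np.pi ** -0.25 * np.exp(-x**2 / 2)]            # Hermite function phi_0
h.append(np.sqrt(2) * x * h[0])                     # phi_1
for n in range(1, N):
    # normalized recurrence: phi_{n+1} = sqrt(2/(n+1)) x phi_n - sqrt(n/(n+1)) phi_{n-1}
    h.append(np.sqrt(2 / (n + 1)) * x * h[n] - np.sqrt(n / (n + 1)) * h[n - 1])

eigs = []
for n in range(N):
    hn = h[n]
    lap = (hn[2:] - 2 * hn[1:-1] + hn[:-2]) / dx**2  # finite-difference second derivative
    Th = -lap + x[1:-1] ** 2 * hn[1:-1]              # (-d^2/dx^2 + x^2) phi_n
    eigs.append(np.sum(Th * hn[1:-1]) * dx)          # Rayleigh quotient <T phi_n, phi_n>
print(eigs)                                          # should be close to 1, 3, 5, 7, 9
```

The computed quotients match $2n+1$ up to the $O(dx^2)$ error of the finite-difference scheme.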
Also, the inclusion map $\mathcal S_{p+1}\to\mathcal S_p$ is Hilbert–Schmidt; to see why, just follow the definitions. The basis in $\mathcal S_{p+1}$ is $(2n+1)^{-(p+1)}\phi_n$, and so the Hilbert–Schmidt norm of the inclusion satisfies
\[
\sum_{n\in\mathbb W}\big\|(2n+1)^{-(p+1)}\phi_n\big\|_p^2
=\sum_{n\in\mathbb W}\frac{(2n+1)^{2p}}{(2n+1)^{2(p+1)}}
=\sum_{n\in\mathbb W}\frac{1}{(2n+1)^2}<\infty.
\]
We can see that the inclusion map of $\mathcal S_p$ into $L^2$ will also be Hilbert–Schmidt. All of this extends very simply to the multi-dimensional operator $T_d$ over $L^2(\mathbb R^d)$. We define
\[
T_d=\Big(-\frac{\partial^2}{\partial x_d^2}+x_d^2\Big)\cdots\Big(-\frac{\partial^2}{\partial x_1^2}+x_1^2\Big).
\]
Then
\[
T_d\phi_j=(2j_1+1)\cdots(2j_d+1)\phi_j,
\]
where $j=(j_1,\dots,j_d)\in\mathbb W^d$ is a multi-index, and
\[
B_df=\sum_{j\in\mathbb W^d}\big[(2j_1+1)\cdots(2j_d+1)\big]^{-1}\langle f,\phi_j\rangle\phi_j.
\]
The family of Hilbert spaces $\mathcal S_p$, $p\in\mathbb R$, can also be thought of as the closure of $\mathcal S(\mathbb R^d)$ under the norm
\[
\|f\|_p^2=\sum_{k\in\mathbb N^d}(2|k|+d)^{2p}\langle f,\phi_k\rangle^2,
\]
where $\phi_k(x_1,\dots,x_d)$ is the orthonormal basis in $L^2(\mathbb R^d)$ constructed as the product of the corresponding one-dimensional Hermite functions $\phi_{k_1}(x_1)\cdots\phi_{k_d}(x_d)$. For $x<y$ we have the inclusion $\mathcal S_y\subset\mathcal S_x$. If the heat semigroup is Hilbert–Schmidt over $\mathcal S_p$, then we can also establish existence and uniqueness of the stochastic convolution.

Theorem 4.1.1 Consider the SPDE
\[
dX(t)=AX(t)\,dt+dW(t),\qquad X(0)=0.
\]
Suppose the following hypotheses hold:

1. $A$ generates a $C_0$-semigroup $S(\cdot)$ on $H$.

2. $\int_0^T\|S(r)\|_{HS}^2\,dr<\infty$.

Then the equation has exactly one weak solution, called the stochastic convolution, which is given by the formula
\[
X(t)=\int_0^tS(t-s)\,dW(s),\quad t\in[0,T].
\]
Moreover, the stochastic convolution is Gaussian, continuous in mean square, and has a predictable version. The law $\mathcal L(X(\cdot))$ is a symmetric Gaussian measure on $L^2([0,T];H)$.

This theorem is essentially a combination of results from [DPZ14, Theorem 5.2, Theorem 5.4]. We will let $H$ be $\mathcal S_{-q}$ for sufficiently large $q>0$, and $A$ will be the Laplace operator. Then, according to Theorem 2.4 in [RT03], the semigroup $S(\cdot)$ is $C_0$.
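The Hilbert–Schmidt computations in this section all reduce to convergent series of the form $\sum_n(2n+1)^{-2s}$; the simplest instance, from the inclusion $\mathcal S_{p+1}\to\mathcal S_p$ above, is $\sum_{n\ge 0}(2n+1)^{-2}=\pi^2/8$ (the closed-form value is a classical fact, noted here only to check the partial sums):

```python
import math

# HS norm squared of the inclusion S_{p+1} -> S_p: sum over n >= 0 of (2n+1)^{-2}
s = sum(1.0 / (2 * n + 1) ** 2 for n in range(200000))
print(s, math.pi**2 / 8)   # partial sum vs. the limit pi^2/8
```

The tail after $N$ terms is of order $1/(4N)$, so 200,000 terms already match the limit to about six decimal places.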
To show the second hypothesis holds, we first note that $S(r):\mathcal{S}_{-p}\to\mathcal{S}_{-q}$ for $q>p$ because of the inclusion $\mathcal{S}_{-p}\subset\mathcal{S}_{-q}$. Then
$$\|S(r)\|_{HS}^2 = \sum_n \|S(r)g_n\|_{-q}^2 \le \sum_n c\,\|g_n\|_{-q}^2 = \sum_n c\,\frac{(2n+1)^{2p}}{(2n+1)^{2q}} < \infty,$$
where $g_n$ is a complete orthonormal basis in $\mathcal{S}_{-p}$. In the first inequality we also used the result in [RT03, Theorem 2.4] that the heat semigroup is uniformly bounded on $\mathcal{S}_{-q}$. As the heat semigroup is Hilbert-Schmidt, it follows that $\int_0^T\|S(r)\|_{HS}^2\,dr<\infty$. Therefore, we have verified both conditions and shown existence and uniqueness of the stochastic convolution in $\mathcal{S}_{-q}$.

4.2 GFF as the Invariant Measure of the Linear Equation

We review the results of [LS20] showing that the Gaussian free field is the invariant measure of the additive stochastic heat equation over the whole space. There are two special features of the bounded domain that are absent in the whole space:
• The operator $\Lambda$ generating the scale $\mathbb{H}_\Lambda$ commutes with the operator $\Delta$ in the equations (3.9) and (3.10) we want to solve, and has the property that $\Lambda^{-\gamma}$ is Hilbert-Schmidt on $H$ for sufficiently large $\gamma>0$;
• The assumption $\lambda_1>0$ ensures (3.12), that is, the operator norm of the heat semigroup decays exponentially in time.
As a result, despite its simple form, equation (3.1) in $\mathbb{R}^d$ is not covered by such standard references as [Kry99] (because of the structure of the noise) and [DPZ88] (because of the particular form of the evolution operator). Accordingly, we study (3.1) in $\mathbb{R}^d$ by combining very general results from [DPZ14] and [RL18] with very specific computations using (3.3). There are three families of spaces that appear in the analysis of partial differential equations on $\mathbb{R}^d$:
1. Homogeneous Sobolev spaces $\dot H^\gamma$, $\gamma\in\mathbb{R}$: the collection of generalized functions $f\in\mathcal{S}'(\mathbb{R}^d)$ such that the Fourier transform $\hat f = \hat f(\xi)$ of $f$ is locally integrable and
$$\|f\|_{\dot H^\gamma}^2 := \int_{\mathbb{R}^d} |\xi|^{2\gamma}\,|\hat f(\xi)|^2\,d\xi < \infty; \qquad (4.1)$$
when $\gamma<d/2$, $\dot H^\gamma$ is also known as the Riesz potential space [Sam76];
2.
Nonhomogeneous Sobolev, or Bessel potential, spaces $H^\gamma$, $\gamma\in\mathbb{R}$: the collection of generalized functions $f\in\mathcal{S}'(\mathbb{R}^d)$ such that the Fourier transform $\hat f = \hat f(\xi)$ of $f$ is locally square integrable and
$$\|f\|_{H^\gamma}^2 := \int_{\mathbb{R}^d} (\varepsilon+|\xi|^2)^{\gamma}\,|\hat f(\xi)|^2\,d\xi < \infty; \qquad (4.2)$$
3. The Hilbert scale $\mathbb{H}_{\tilde\Lambda} = \{\tilde H^\gamma,\ \gamma\in\mathbb{R}\}$, constructed according to Definition 2.3.1 with $H = L^2(\mathbb{R}^d)$ and $\tilde\Lambda$ defined by
$$\tilde\Lambda^2: f(x)\mapsto -\Delta f(x)+|x|^2 f(x), \quad f\in\mathcal{S}(\mathbb{R}^d). \qquad (4.3)$$
The operator $\tilde\Lambda^2$ has pure point spectrum, so that (2.6) holds with $\alpha = 1/(2d)$, and the eigenfunctions, known as the Hermite functions, form an orthonormal basis in $L^2(\mathbb{R}^d)$; cf. [GJ81, Section 1.5] or [Wal86, Example 4.2].

Recall that the normalized Hermite polynomials are
$$H_n(x) = \frac{(-1)^n}{\pi^{1/4}\,2^{n/2}\,(n!)^{1/2}}\, e^{x^2}\,\frac{d^n}{dx^n}e^{-x^2}, \quad n = 0,1,2,\dots;$$
the Hermite functions $h_n(x) = e^{-x^2/2}H_n(x)$ form an orthonormal basis in $L^2(\mathbb{R})$ and satisfy
$$-h_n''(x)+x^2h_n(x) = (2n+1)h_n(x).$$
The orthonormal basis in $L^2(\mathbb{R}^d)$,
$$h_n(x_1,\dots,x_d) = \prod_{j=1}^d h_{n_j}(x_j),$$
is indexed by $n = (n_1,\dots,n_d)$, $n_j = 0,1,2,\dots$, so that
$$\tilde\Lambda^2 h_n = \lambda_n^2 h_n = \big(2(n_1+\cdots+n_d)+d\big)\,h_n.$$
A non-decreasing ordering of the $\lambda_n^2$ brings us to the setting of Definition 2.3.1. In particular, $\lambda_n^2\sim(2d!)^{1/d}\,n^{1/d}$, cf. [Shu01, Theorem 30.1], and
$$f = \sum_{k=1}^\infty f_k h_k \in \tilde H^\gamma \iff \sum_{k=1}^\infty k^{\gamma/d} f_k^2 < \infty.$$
The norms (4.2) are equivalent for different $\varepsilon>0$, and (4.1) is a formal limit of (4.2) as $\varepsilon\to 0$. We could interpret (4.1) and (4.2) as
$$\dot H^\gamma = \dot\Lambda^{-\gamma}L^2(\mathbb{R}^d),\qquad H^\gamma = \Lambda^{-\gamma}L^2(\mathbb{R}^d),$$
with
$$\dot\Lambda = (-\Delta)^{1/2}:\hat f(\xi)\mapsto|\xi|\,\hat f(\xi),\qquad \Lambda = (\varepsilon-\Delta)^{1/2}:\hat f(\xi)\mapsto(\varepsilon+|\xi|^2)^{1/2}\,\hat f(\xi),$$
but it is still not possible to construct the scales as in Definition 2.3.1: the operators $\dot\Lambda$ and $\Lambda$ do not have a pure point spectrum, and, in addition, the spaces $\dot H^\gamma$ are complete with respect to the norm $\|\cdot\|_{\dot H^\gamma}$ if and only if $\gamma<d/2$ [BCD11, Proposition 1.3.4]. In particular, $\dot H^1$ is not a Hilbert space when $d=1,2$.
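The Hermite functions and their eigenvalue relation can be checked numerically via the standard three-term recurrence for the normalized Hermite functions, $h_{n+1}(x) = \sqrt{2/(n+1)}\,x\,h_n(x) - \sqrt{n/(n+1)}\,h_{n-1}(x)$. A sketch on a finite grid; the grid and tolerances are illustrative choices:

```python
import numpy as np

# Build h_0, ..., h_6 on a grid by the stable three-term recurrence.
x = np.linspace(-15, 15, 4001)
dx = x[1] - x[0]
h = [np.pi ** -0.25 * np.exp(-x ** 2 / 2)]       # h_0
h.append(np.sqrt(2.0) * x * h[0])                # h_1
for n in range(1, 6):
    h.append(np.sqrt(2.0 / (n + 1)) * x * h[n]
             - np.sqrt(n / (n + 1)) * h[n - 1])

# Orthonormality: <h_m, h_n> = delta_{mn}.  The Gaussian decay makes
# the rectangle rule on this grid essentially exact.
gram = dx * np.array([[np.sum(hm * hn) for hn in h] for hm in h])
assert np.allclose(gram, np.eye(len(h)), atol=1e-6)

# Eigenvalue relation -h_n'' + x^2 h_n = (2n+1) h_n, checked with a
# second-order finite difference on interior grid points.
n = 4
lap = (h[n][2:] - 2 * h[n][1:-1] + h[n][:-2]) / dx ** 2
lhs = -lap + x[1:-1] ** 2 * h[n][1:-1]
assert np.allclose(lhs, (2 * n + 1) * h[n][1:-1], atol=1e-3)
```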
It follows from the definitions that $H^\gamma\subset\dot H^\gamma$ and $\tilde H^\gamma\subset H^\gamma$ for $\gamma>0$, and $\dot H^\gamma\subset H^\gamma$ for $\gamma<0$. Also, by duality, $H^\gamma\subset\tilde H^\gamma$ for $\gamma<0$. To summarize,
$$\tilde H^\gamma\subset H^\gamma\subset\dot H^\gamma,\ \gamma>0;\qquad \tilde H^0 = H^0 = \dot H^0 = L^2(\mathbb{R}^d);\qquad \dot H^\gamma\subset H^\gamma\subset\tilde H^\gamma,\ \gamma<0. \qquad (4.4)$$
One of the technical difficulties in studying equation (3.1) on $\mathbb{R}^d$ is that, while the spaces $\dot H^\gamma$ and $H^\gamma$ are "custom-made" for the operator $\Delta$, the cylindrical Brownian motion $W = W(t)$ on $L^2(\mathbb{R}^d)$ does not belong to any of those spaces, even though we do have $\psi W(t)\in H^{-\gamma}$, $\gamma>d/2$, for every $t>0$ and every smooth function $\psi$ with compact support [Wal86, Proposition 9.5]. On the other hand, by Proposition 2.3.4, we have
$$W\in L^2\big(\Omega;\,C((0,T);\tilde H^{-\gamma})\big),\quad T>0, \qquad (4.5)$$
for every $\gamma>d$, meaning that the basic existence/uniqueness result for (3.1) must be established in $\tilde H^\gamma$. Another useful feature of the spaces $\tilde H^\gamma$ is the equalities
$$\mathcal{S}(\mathbb{R}^d) = \bigcap_\gamma\tilde H^\gamma,\qquad \mathcal{S}'(\mathbb{R}^d) = \bigcup_\gamma\tilde H^\gamma;$$
cf. [BS15].

Definition 4.2.1. The Gaussian free field $\bar W$ on $\mathbb{R}^d$, $d\ge 3$, is an isonormal Gaussian process on $\dot H^1$. The Euclidean free field of mass $\sqrt{\varepsilon}$ is an isonormal Gaussian process $\bar W^\varepsilon$ on $H^1$. We also denote by $\tilde W$ an isonormal Gaussian process on $\tilde H^1$.

To state a definition of $\bar W$ that works for all $d$, denote by $\mathcal{S}_0(\mathbb{R}^d)$ the collection of functions from $\mathcal{S}(\mathbb{R}^d)$ for which the Fourier transform is equal to zero near the origin.

Definition 4.2.2. The Gaussian free field $\bar W$ on $\mathbb{R}^d$, $d\ge 1$, is a collection of zero-mean Gaussian random variables $\bar W[f]$, $f\in\mathcal{S}_0(\mathbb{R}^d)$, such that
$$\mathbb{E}\big(\bar W[f]\,\bar W[g]\big) = \int_{\mathbb{R}^d}\frac{\hat f(\xi)\,\hat g(\xi)}{|\xi|^2}\,d\xi. \qquad (4.6)$$
In the language of quantum field theory [GJ81, p. 103], construction of a zero-mass free field ($\varepsilon=0$) in dimensions one and two requires different sets of test functions. For $d\ge 3$, Definitions 4.2.1 and 4.2.2 are equivalent.
Indeed, the space $\mathcal{S}_0(\mathbb{R}^d)$ is dense in $\dot H^\gamma$ for $\gamma<d/2$ [BCD11, Proposition 1.35], and, for $|\gamma|<d/2$, the spaces $\dot H^\gamma$ and $\dot H^{-\gamma}$ are dual relative to the inner product of $L^2(\mathbb{R}^d)$ [BCD11, Proposition 1.36]. Thus, if $d\ge 3$, then the isonormal Gaussian process on $\dot H^1$ satisfies (4.6) with an interpretation of $\bar W[f]$ as duality relative to $L^2(\mathbb{R}^d)$ (as opposed to the inner product in $\dot H^1$; cf. Remark 2.3.7). Definition 4.2.2 is also consistent with (2.1). Indeed, the function $\xi\mapsto|\xi|^{-2}$ is a homogeneous distribution in $\mathcal{S}'(\mathbb{R}^d)$ and, for $d\ne 2$, the Fourier transform of this distribution is the fundamental solution of the Poisson equation on $\mathbb{R}^d$; cf. [Don69, Chapter 32]. When $d=2$, there are some issues with uniqueness, which can be resolved, for example, by restricting the set of test functions to $\mathcal{S}_0(\mathbb{R}^d)$. Finally, by (2.12), if $V$ is an isonormal Gaussian process on $L^2(\mathbb{R}^d)$, then
$$(-\Delta)^{1/2}\bar W = V,\qquad (\varepsilon-\Delta)^{1/2}\bar W^\varepsilon = V,\qquad \tilde\Lambda\tilde W = V.$$

4.2.1 Deterministic Equations and Fundamental Solutions

For $\nu>0$, $\varepsilon\ge 0$, and $f\in\mathcal{S}(\mathbb{R}^d)$, consider the heat equation
$$u_t(x,t) = \nu\Delta u(x,t)-\varepsilon u(x,t),\quad t>0,\ x\in\mathbb{R}^d, \qquad (4.7)$$
with initial condition $u(x,0) = f(x)$, and the Poisson equation
$$\nu\Delta v(x)-\varepsilon v(x) = -g(x),\quad x\in\mathbb{R}^d. \qquad (4.8)$$
The number $\varepsilon>0$ in $\mathbb{R}^d$ is the analog of $\lambda_1>0$ in the bounded domain. Below is a summary of the well-known results.
• The unique solution of (4.7) in $\mathcal{S}(\mathbb{R}^d)$ is
$$u(x,t) = \int_{\mathbb{R}^d}G_{\varepsilon,d}(t,x-y)f(y)\,dy,\quad\text{where}\quad G_{\varepsilon,d}(t,x) = \frac{1}{(4\pi\nu t)^{d/2}}\exp\Big(-\varepsilon t-\frac{|x|^2}{4\nu t}\Big); \qquad (4.9)$$
cf. [Kry96, Theorem 8.4.2].
• The unique solution of (4.8) in $\mathcal{S}(\mathbb{R}^d)$ is
$$v(x) = \int_{\mathbb{R}^d}\Phi_{\varepsilon,d}(x-y)g(y)\,dy, \qquad (4.10)$$
where
$$\Phi_{\varepsilon,d}(x) = \int_0^{+\infty}G_{\varepsilon,d}(t,x)\,dt; \qquad (4.11)$$
cf. [Kry96, Theorems 1.2.1 and 1.6.2, and Exercise 1.6.5].
• If $\varepsilon=0$ and $d\ge 2$, then the unique solution of (4.8) in $\mathcal{S}(\mathbb{R}^d)$ is
$$v(x) = \int_{\mathbb{R}^d}\Phi_{0,d}(x-y)g(y)\,dy,$$
where
$$\Phi_{0,d}(x) = \begin{cases} -\dfrac{1}{2\pi\nu}\,\ln\dfrac{|x|}{\sqrt{\nu}}, & d=2,\\[2mm] \dfrac{\Gamma(d/2-1)}{4\pi^{d/2}\nu\,|x|^{d-2}}, & d\ge 3,\end{cases} \qquad\text{and}\qquad \Gamma(x) = \int_0^{+\infty}t^{x-1}e^{-t}\,dt;$$
cf. [Eva10, Section 2.2.1].
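The representation (4.10) can be tested numerically in $d=1$, where $\Phi_{\varepsilon,1}$ has an explicit exponential form: convolving it with a smooth $g$ should produce a function $v$ with $\nu v''-\varepsilon v = -g$. A finite-difference sketch; the choice of $\nu$, $\varepsilon$, $g$, the grid, and the tolerance is illustrative:

```python
import numpy as np

# d = 1 check of (4.10): v = Phi_{eps,1} * g should solve
# nu v'' - eps v = -g, i.e. equation (4.8).
nu, eps = 0.8, 1.5                       # illustrative parameters
x = np.linspace(-20, 20, 8001)
dx = x[1] - x[0]
g = np.exp(-x ** 2)                      # illustrative right-hand side

# Explicit fundamental solution for d = 1.
phi = np.exp(-np.sqrt(eps / nu) * np.abs(x)) / (2 * np.sqrt(eps * nu))

# Discrete convolution: v(x_k) = sum_j phi(x_k - x_j) g(x_j) dx.
v = np.convolve(g, phi, mode="same") * dx

# Residual of nu v'' - eps v + g, checked on interior points only
# (near the boundary the truncated convolution is inaccurate).
vpp = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx ** 2
resid = nu * vpp - eps * v[1:-1] + g[1:-1]
assert np.max(np.abs(resid[2000:6000])) < 5e-3
```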
Denote by $K_p = K_p(x)$, $p,x\in\mathbb{R}$, the modified Bessel function of the second kind [OLBC10, Section 10.25].

Proposition 4.2.3 (cf. [GJ81, Proposition 7.2.1]). The following equalities hold:
$$\Phi_{\varepsilon,d}(x) = (2\pi\nu)^{-d/2}\Big(\frac{\varepsilon\nu}{x^2}\Big)^{(d-2)/4}K_{(d-2)/2}\big(\sqrt{\varepsilon/\nu}\,|x|\big), \qquad (4.12)$$
$$\lim_{\varepsilon\to 0}\Phi_{\varepsilon,d}(x) = \Phi_{0,d}(x),\quad x\in\mathbb{R}^d\setminus\{0\},\ d\ge 3. \qquad (4.13)$$
In particular,
$$\Phi_{\varepsilon,d}(x) = \begin{cases} \dfrac{1}{2\sqrt{\varepsilon\nu}}\,e^{-\sqrt{\varepsilon/\nu}\,|x|}, & d=1,\\[2mm] \dfrac{1}{2\pi\nu}\,K_0\big(\sqrt{\varepsilon/\nu}\,|x|\big), & d=2,\\[2mm] \dfrac{1}{4\pi\nu\,|x|}\,e^{-\sqrt{\varepsilon/\nu}\,|x|}, & d=3.\end{cases} \qquad (4.14)$$

Proof: Equality (4.12): combine (4.11) with [GR15, Formula 3.471.9]. Equality (4.13): use the properties of the function $K_p$, in particular $K_p(x) = K_{-p}(x)$ [OLBC10, Formula 10.27.3] and $K_p(x)\sim 2^{p-1}\Gamma(p)\,x^{-p}$, $x\to 0$, $p>0$ [OLBC10, Formula 10.30.2]. Of course, one can get (4.13) directly by passing to the limit in (4.11). Equality (4.14) is a particular case of (4.12) because
$$K_{\pm 1/2}(z) = \sqrt{\frac{\pi}{2z}}\,e^{-z};$$
cf. [OLBC10, Formula 10.39.2].

Combining (4.14) with $K_0(x)\sim-\ln x$, $x\to 0$ [OLBC10, Formula 10.30.3], we see that, in the case $d=2$, equality (4.13) is missed by a logarithmic term: for fixed $x\ne 0$,
$$\Phi_{\varepsilon,2}(x)\sim\Phi_{0,2}(x)-\frac{1}{4\pi\nu}\,\ln\varepsilon,\quad\varepsilon\to 0.$$
Representation (4.10) has a version in the Fourier domain, with no explicit dependence on $d$:
$$\hat v(\xi) = \frac{\hat g(\xi)}{\varepsilon+\nu|\xi|^2}.$$
Passing to the limit $\varepsilon\to 0$, we get
$$\hat v(\xi) = \frac{\hat g(\xi)}{\nu|\xi|^2}.$$
Even though the function $\xi\mapsto|\xi|^{-2}$ is not integrable at zero for $d=1,2$, it defines a homogeneous distribution on $\mathcal{S}_0(\mathbb{R}^d)$, and its inverse Fourier transform is equal to $\Phi_{0,d}$ [Don69, Chapter 32].

To conclude, we summarize how the main operators act in the spaces $\tilde H^\gamma$.

Proposition 4.2.4. For every $\gamma\in\mathbb{R}$:
(C1) the operator $\Delta$ extends to a bounded linear operator from $\tilde H^{\gamma+2}$ to $\tilde H^\gamma$;
(C2) the heat semigroup
$$S_t: f(x)\mapsto\int_{\mathbb{R}^d}G_{\nu\varepsilon,d}(t,x-y)f(y)\,dy,\quad\varepsilon\ge 0,\ t>0,\ f\in\mathcal{S}(\mathbb{R}^d), \qquad (4.15)$$
extends to a bounded linear operator on $\tilde H^\gamma$, and, if $\varepsilon>0$, then
$$\|S_tf\|_{\tilde H^\gamma}\le C\,e^{-\delta t}\,\|f\|_{\tilde H^\gamma} \qquad (4.16)$$
for some $C>0$, $\delta>0$, and all $f\in\tilde H^\gamma$.
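Equality (4.14) for $d=1$ can also be confirmed directly from the definition (4.11) by numerical quadrature of the damped heat kernel. A sketch using `scipy.integrate.quad`; the parameter values are illustrative:

```python
import numpy as np
from scipy.integrate import quad

nu, eps, x = 0.7, 1.3, 0.9               # illustrative parameters

# The damped heat kernel G_{eps,1}(t, x) from (4.9).
def G(t):
    if t <= 0:
        return 0.0
    return np.exp(-eps * t - x ** 2 / (4 * nu * t)) / np.sqrt(4 * np.pi * nu * t)

# Phi_{eps,1}(x) = int_0^infty G_{eps,1}(t, x) dt, as in (4.11) ...
phi_num, _ = quad(G, 0, np.inf)

# ... should match the d = 1 closed form in (4.14).
phi_exact = np.exp(-np.sqrt(eps / nu) * abs(x)) / (2 * np.sqrt(eps * nu))
assert abs(phi_num - phi_exact) < 1e-6
```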
Proof: (C1) Direct computations show that $\Delta$ is bounded from $\tilde H^{2k+2}$ to $\tilde H^{2k}$ for every $k=0,1,2,\dots$. The case $\gamma>0$ then follows by interpolation, and $\gamma<0$ by duality. (C2) This follows from [RT03, Theorem 2.4].

4.2.2 Main Results

Given $\nu>0$, $\sigma>0$, $\varepsilon>0$, a cylindrical Brownian motion $W$ on $L^2(\mathbb{R}^d)$, and $\varphi\in L^2(\Omega;\tilde H^r)$ independent of $W$, consider the stochastic evolution equations
$$u(t) = \varphi+\nu\int_0^t(\Delta-\varepsilon)u(s)\,ds+\sigma W(t), \qquad (4.17)$$
$$u(t) = \varphi+\nu\int_0^t\Delta u(s)\,ds+\sigma W(t), \qquad (4.18)$$
$$u(t) = \varphi-\nu\int_0^t\tilde\Lambda^2u(s)\,ds+\sigma W(t), \qquad (4.19)$$
with $\tilde\Lambda^2$ from (4.3). In the physics literature, the deterministic version of (4.19) is known as the Hermite heat equation [DC15].

Definition 4.2.5. For each of the three equations, given the initial condition $\varphi\in L^2(\Omega;\tilde H^r)$, a solution $u=u(t)$ on $[0,T]$ is an adapted process with values in $L^2\big(\Omega\times(0,T);\tilde H^{r+1}\big)\cap L^2\big(\Omega;C((0,T);\tilde H^{r})\big)$ such that the corresponding equality holds in $\tilde H^{r-1}$ for all $t\in[0,T]$ with probability one.

Theorem 4.2.6. Assume that $\varphi\in\tilde H^{-\gamma}$ and $\gamma>d$. Then:
1. Equation (4.17) has a unique solution for every $T>0$;
2. The solution has the representation
$$u(t) = S_t\varphi+\int_0^tS_{t-s}\,dW(s), \qquad (4.20)$$
with $S_t$ from (4.15);
3. As $t\to+\infty$, the solution converges in distribution to $(2\nu)^{-1/2}\sigma\bar W^\varepsilon$, that is, the Gaussian measure generated on $\tilde H^{-\gamma}$ by the solution converges weakly to the Gaussian measure generated by $(2\nu)^{-1/2}\sigma\bar W^\varepsilon$.

Proof: The general theory of SPDEs in the Sobolev spaces $H^\gamma$, such as [Kry99], is not applicable because the process $W=W(t)$ does not take values in any of the $H^\gamma$. Similarly, the results from [DPZ88] do not apply because the operator $(-\Delta)^{-1}$ is not Hilbert-Schmidt on $L^2(\mathbb{R}^d)$. Fortunately, for existence and uniqueness of the solution, relation (4.5) and the first part of Proposition 4.2.4 make it possible to apply [RL18, Theorem 3.1] with
$$A = \nu(\Delta-\varepsilon),\quad \mathbb{X} = \tilde H^{1-\gamma},\quad \mathbb{H} = \tilde H^{-\gamma},\quad \mathbb{X}' = \tilde H^{-\gamma-1},\quad M(t) = W(t).$$
Similarly, the second part of Proposition 4.2.4 makes it possible to apply [DPZ14, Theorem 5.4], from which (4.20) follows. To prove convergence, note that, by (4.11), the general argument outlined in the Introduction works, with $\mathcal{O}=\mathbb{R}^d$ and $\mathcal{G}=\mathcal{S}(\mathbb{R}^d)$. Keeping in mind that the fundamental solution for (4.17) is $G_{\nu\varepsilon,d}$, which, by (4.9), acts as the multiplier $\hat f(\xi)\mapsto e^{-\nu(|\xi|^2+\varepsilon)t}\hat f(\xi)$ in the Fourier domain, we easily complete the proof.

Theorem 4.2.7. If $\varphi\in\tilde H^{-\gamma}$ and $\gamma>d$, then equation (4.18) has a unique solution for every $T>0$, and the solution has the representation
$$u(t) = S_t\varphi+\int_0^tS_{t-s}\,dW(s), \qquad (4.21)$$
where $S_t$ is from (4.15) with $\varepsilon=0$. If $\varphi\in H^{-\gamma}$ and $\gamma>d$, then, as $t\to+\infty$, the solution converges in distribution to $(2\nu)^{-1/2}\sigma\bar W$, that is, the Gaussian measure generated on $\tilde H^{-\gamma}$ by the solution converges weakly to the Gaussian measure generated by $(2\nu)^{-1/2}\sigma\bar W$.

Proof: Existence, uniqueness, and representation (4.21) of the solution follow in the same way as in the proof of Theorem 4.2.6. To prove the convergence as $t\to+\infty$, we streamline the notation by setting $G=G(t,x)$ to be the heat kernel for equation (4.18):
$$G(t,x) = \frac{1}{(4\pi\nu t)^{d/2}}\,e^{-|x|^2/(4\nu t)}.$$
Given a function $f=f(x)$ from $\mathcal{S}_0(\mathbb{R}^d)$, denote by $u^{H,f}=u^{H,f}(x,t)$ the solution of the deterministic heat equation with initial condition $f$:
$$u^{H,f}(x,t) = \int_{\mathbb{R}^d}G(t,x-y)f(y)\,dy.$$
Then
$$u(x,t) = u^{H,\varphi}(x,t)+\sigma\int_0^t\int_{\mathbb{R}^d}G(t-s,x-y)\,W(ds,dy)$$
and, using (2.11) and [DPZ14, Theorem 5.4],
$$u[t,f] := \langle f,u(t)\rangle_{0,\gamma} = u^{H,\varphi}[t,f]+\sigma\int_0^t\int_{\mathbb{R}^d}\Big(\int_{\mathbb{R}^d}G(t-s,x-y)f(x)\,dx\Big)\,W(ds,dy) = u^{H,\varphi}[t,f]+\sigma\int_0^t\int_{\mathbb{R}^d}u^{H,f}(t-s,y)\,W(ds,dy).$$
By independence of $\varphi$ and $W$,
$$\mathbb{E}\big(u[t,f]\,u[t,g]\big) = \mathbb{E}\big(u^{H,\varphi}[t,f]\,u^{H,\varphi}[t,g]\big)+\sigma^2\int_0^t\int_{\mathbb{R}^d}u^{H,f}(t-s,y)\,u^{H,g}(t-s,y)\,dy\,ds = \mathbb{E}\big(u^{H,\varphi}[t,f]\,u^{H,\varphi}[t,g]\big)+\sigma^2\int_0^t\int_{\mathbb{R}^d}u^{H,f}(s,y)\,u^{H,g}(s,y)\,dy\,ds.$$
Next, $\hat u^{H,f}(s,\xi) = \hat f(\xi)\,e^{-s\nu|\xi|^2}$, and then the Fourier isometry implies
$$\mathbb{E}\big(u[t,f]\,u[t,g]\big) = \iint_{\mathbb{R}^d\times\mathbb{R}^d}e^{-t\nu(|\xi|^2+|\eta|^2)}\,\mathbb{E}\big(\hat\varphi(\xi)\hat\varphi(\eta)\big)\,\hat f(\xi)\,\hat g(\eta)\,d\xi\,d\eta+\sigma^2\int_0^t\int_{\mathbb{R}^d}\hat f(\xi)\,\hat g(\xi)\,e^{-2s\nu|\xi|^2}\,d\xi\,ds. \qquad (4.22)$$
The first term on the right-hand side of (4.22) goes to zero as $t\to\infty$ by the dominated convergence theorem, because, by assumption,
$$\int_{\mathbb{R}^d}(1+|\xi|^2)^{-\gamma}\,\mathbb{E}|\hat\varphi(\xi)|^2\,d\xi<\infty$$
for some $\gamma>d$, and, for $f,g\in\mathcal{S}_0(\mathbb{R}^d)$,
$$\sup_\xi\,(1+|\xi|^2)^{\gamma}\,|\hat f(\xi)|<\infty,\qquad \sup_\eta\,(1+|\eta|^2)^{\gamma}\,|\hat g(\eta)|<\infty.$$
With $\varepsilon=0$, we no longer have (4.16) and therefore have to make additional assumptions about the initial condition to achieve the desired convergence. As a result,
$$\lim_{t\to+\infty}\mathbb{E}\big(u[t,f]\,u[t,g]\big) = \sigma^2\int_{\mathbb{R}^d}\frac{\hat f(\xi)\,\hat g(\xi)}{2\nu|\xi|^2}\,d\xi.$$
Together with (4.6), the last equality completes the proof.

Analysis of equation (4.19) in the scale $\mathbb{H}_{\tilde\Lambda}$ is equivalent to analysis of equation (3.14) in the scale $\mathbb{H}_\Lambda$: similar to Theorem 3.3.6 and Corollary 3.3.7, the distribution of $\sigma(2\nu)^{-1/2}\tilde W$ is the unique invariant measure for equation (4.19). The only difference is that now $\lambda_k$ is of order $k^{1/(2d)}$ rather than $k^{1/d}$. Let $h_k$, $k\ge 1$, be the Hermite functions and let $\lambda_k^2$, $k\ge 1$, be the corresponding eigenvalues of the operator $\tilde\Lambda^2$.

Theorem 4.2.8. Assume that $\varphi\in\tilde H^{-\gamma}$ and $\gamma>d$. Then:
1. Equation (4.19) has a unique solution for every $T>0$;
2. For every $t>0$, $u(t)\in L^2\big(\Omega;\tilde H^{1-\gamma}\big)$ and
$$u(t) = \sum_{k=1}^\infty e^{-\nu\lambda_k^2t}\varphi_kh_k+\sum_{k=1}^\infty\tilde u_k(t)h_k,$$
where $\varphi_k$ are the coefficients of $\varphi$ in the basis $\{h_k\}$ and $\tilde u_k(t)$, $k\ge 1$, are independent Gaussian random variables with mean zero and variance
$$\mathbb{E}\tilde u_k^2(t) = \frac{\sigma^2}{2\nu\lambda_k^2}\big(1-e^{-2\nu\lambda_k^2t}\big);$$
3. As $t\to+\infty$, the $\tilde H^{1-\gamma}$-valued random variables $u(t)$ converge weakly to $\sigma(2\nu)^{-1/2}\tilde W$;
4. Equation (4.19) is ergodic and the unique invariant measure is the distribution of $\sigma(2\nu)^{-1/2}\tilde W$ on $\tilde H^{1-\gamma}$;
5. If $\varphi\stackrel{d}{=}\sigma(2\nu)^{-1/2}\tilde W$, then $u(t)\stackrel{d}{=}\sigma(2\nu)^{-1/2}\tilde W$ for all $t>0$;
6.
If $\mathbb{E}\varphi=0$, then, for each $t>0$, the measure generated by $u(t)$ on $\tilde H^{1-\gamma}$ is absolutely continuous with respect to the measure generated by $\sigma(2\nu)^{-1/2}\tilde W$.

4.3 Non-Linear Case: Existence of a Mild Solution

In this section, we review the results of [AM03] and see how they pertain to the research problem. Assing and Manthey consider the Cauchy problem in $\mathbb{R}^d$ for a stochastic heat equation and show the existence of an invariant measure under certain conditions on the problem:
$$u_t(x,t) = \lambda\Delta u(x,t)+f(u(x,t))+\sigma(u(x,t))\,\dot W(x,t),\quad x\in\mathbb{R}^d,\ t\in(0,\infty),$$
$$u(x,0) = \theta(x).$$
We will show that their results also pertain to our problem to some extent by verifying that the conditions apply. In our case, we want $\lambda=1$, $f(u)=-u^3$, and $\sigma(u)=1$. The authors treat the case $d=1$ separately from $d>1$ by introducing a coloring on the noise term. So, for $d=1$, we will have
$$W(t) = \sum_{k=1}^\infty B_k(t)e_k,$$
where $(B_k(t))_{k=1}^\infty$ is a sequence of independent standard Brownian motions and $(e_k)_{k=1}^\infty$ is a uniformly bounded orthonormal basis in $L^2(\mathbb{R}^d)$. This is just our standard white noise. For $d>1$, let
$$W(t) = \sum_{k=1}^\infty\sqrt{a_k}\,B_k(t)e_k,$$
where $(a_k)_{k=1}^\infty$ is a sequence of nonnegative real numbers satisfying $\sum_{k=1}^\infty a_k<\infty$. We also define
$$a = \sum_{k=1}^\infty a_k\,\|e_k\|_\infty^2.$$
Next, we introduce the notion of a weighted Banach space:

Definition 4.3.1. The weighted Banach space $L^p_\gamma(\mathbb{R}^n)$, with $p\ge 2$, $\gamma>n$, is the set of all Borel measurable functions $u:\mathbb{R}^n\to\mathbb{R}$ such that
$$\|u\|_{p,\gamma}^p = \int_{\mathbb{R}^n}|u(x)|^p\,(1+|x|^2)^{-\gamma/2}\,dx<\infty.$$
Next, if the heat operator $A$ on $\mathbb{R}^n$ is defined by
$$[A(t)u](x) := \int_{\mathbb{R}^n}\Phi(t,x-y)\,u(y)\,dy,$$
we have that $A:L^p_\gamma(\mathbb{R}^n)\to L^p_\gamma(\mathbb{R}^n)$. We will define a mild solution as a solution of the form
$$u(\cdot,t) = A(t)\theta+\int_0^tA(t-s)f(u(\cdot,s))\,ds+\int_0^tA(t-s)\,dW(s).$$

Theorem 4.3.2 (Manthey, Zaussinger 1999). Assume that:
(f1) $f:\mathbb{R}\to\mathbb{R}$ satisfies
$$|f(x)-f(y)|\le c_v\,|x-y|\,(1+|x|^{v-1}+|y|^{v-1})$$
for some constants $c_v>0$, $v\ge 1$;
(f2) there exists a constant $k>0$ such that $uf(u)\le k(1+|u|^2)$ for $u\in\mathbb{R}$.
If the initial condition $\theta$ is in $L^p_\gamma(\mathbb{R}^d)$, where $p=2\vee v$, then there exists a pathwise unique $\mathcal{F}$-adapted continuous solution $u(\cdot,t)\in L^p_\gamma(\mathbb{R}^d)$, $t\ge 0$, of the mild form, and such that, for all $q\ge p$, $T>0$,
$$\sup_{t\in[0,T]}\mathbb{E}\|u(\cdot,t)\|_{p,\gamma}^q\le C_T\big(1+\|\theta\|_{p,\gamma}^q\big).$$
We will verify the assumptions of this theorem, which will thereby give us existence and uniqueness of a mild solution.

Verifying condition (f1). We want to show that
$$|f(x)-f(y)|\le c_v\,|x-y|\,(1+|x|^{v-1}+|y|^{v-1})$$
for some constants $c_v>0$, $v\ge 1$. Since $f(u)=-u^3$,
$$|f(x)-f(y)| = |x-y|\,|x^2+xy+y^2| \le |x-y|\,\big(x^2+|x||y|+y^2\big) \le |x-y|\,\Big(x^2+\frac{x^2}{2}+\frac{y^2}{2}+y^2\Big) \le \frac{3}{2}\,|x-y|\,\big(1+|x|^{3-1}+|y|^{3-1}\big),$$
so (f1) holds with $v=3$ and $c_v=3/2$.

Verifying condition (f2). We want to show that there exists a constant $k>0$ such that $uf(u)\le k(1+|u|^2)$ for $u\in\mathbb{R}$. This is not hard to see, because $uf(u)=-u^4\le k(1+u^2)$ for every $u\in\mathbb{R}$. So the two conditions hold, and we can apply this theorem to our problem to get existence of a mild solution.

4.4 Existence of an Invariant Measure

The main result of this paper involves proving the existence of an invariant measure.

Theorem 4.4.1. Under the conditions of Theorem 4.3.2, let $\{P_t\}$ be a stochastically continuous Feller semigroup in $L^p_\gamma(\mathbb{R})$ associated with the solutions $(u(x,t))_{t\ge 0}$, $u(x,0)\in L^p_\gamma(\mathbb{R})$. If $\gamma>\gamma'+d$ for some $\gamma'>d$, and if $(u(x,t))_{t\ge 0}$ is bounded in probability in $L^{pv}_{\gamma'}(\mathbb{R}^d)$ for a suitable initial condition $u_0:\mathbb{R}^d\to\mathbb{R}$, i.e.
$$\forall\epsilon\ \exists R>0\ \forall t>0:\ P\big(\{\|u(x,t)\|_{pv,\gamma'}\ge R\}\big)<\epsilon,$$
then there exists an invariant measure for $(P_t)_{t\ge 0}$ on $L^p_\gamma(\mathbb{R}^d)$.

The standard method to show boundedness in probability is to first show boundedness in moments and then apply Markov's inequality. The conditions for boundedness in moments are given by the following theorem:

Theorem 4.4.2. Assume (f1) and choose $p=2\vee v$ as well as $\gamma>d$. For $p>2$, let $c(p)$ denote the universal constant in the Burkholder-Davis-Gundy inequality such that
$$\|M_t^*\|_{L^p(\Omega)}\le c(p)\,\big\|\langle M\rangle_T^{1/2}\big\|_{L^p(\Omega)}$$
for all continuous local martingales $(M_t)_{t\ge 0}$. In the case $p=2$, set $c(p)=1$.
If the following condition holds:
(f3) there exists a constant $c_{f,\kappa}>0$ such that $uf(u)\le c_{f,\kappa}-\kappa|u|^2$ for $u\in\mathbb{R}$, where $\kappa>\frac{c(p)^4}{16\lambda}$ in the white noise case and $\kappa>\frac{a\cdot c(p)^2}{2}$ in the colored noise case,
then
$$\sup_{t\ge 0}\mathbb{E}\|u(x,t)\|_{p,\gamma}^p<\infty$$
for every bounded continuous function $u_0:\mathbb{R}^d\to\mathbb{R}$.

Verifying condition (f3). We want to show that there exists $c_{f,\kappa}>0$ such that $-u^4\le c_{f,\kappa}-\kappa u^2$ for all $u\in\mathbb{R}$. First let $w=u^2$. Then, for all $w\ge 0$, we want to show that there exists $c_{f,\kappa}>0$ such that $-w^2\le c_{f,\kappa}-\kappa w$. Note that for $w>\kappa$, $-w^2\le-\kappa w$, so for $w>\kappa$ we also have $-w^2\le c_{f,\kappa}-\kappa w$. For $0\le w\le\kappa$, let $c_{f,\kappa}$ be a positive constant that bounds the function $-w^2+\kappa w$; such a bound exists because this is a continuous function on a closed interval. So we have $-w^2\le c_{f,\kappa}-\kappa w$ on both $[0,\kappa]$ and $(\kappa,\infty)$, and the inequality holds for all $w\ge 0$. Note that, for our case, the value of $\kappa$ did not really matter: the condition holds regardless. So we have just shown that
$$\sup_{t\ge 0}\mathbb{E}\|u(x,t)\|_{pv,\gamma'}^{pv}<\infty$$
for every bounded continuous function $u_0:\mathbb{R}^d\to\mathbb{R}$. By Markov's inequality we also get
$$P\big(\{\|u(x,t)\|_{pv,\gamma'}\ge R\}\big)\le\frac{\mathbb{E}\|u(x,t)\|_{pv,\gamma'}^{pv}}{R^{pv}}\le\epsilon$$
for all $\epsilon>0$, $t>0$, and $R$ sufficiently large. Thus, we proved boundedness in probability and, by extension, existence of an invariant measure.

4.5 Time Regularization

Let
$$G(t,x) = \frac{1}{\sqrt{4\pi\nu t}}\,e^{-x^2/(4\nu t)}\qquad\text{and}\qquad\Phi_t:f(x)\mapsto\int_{\mathbb{R}}f(y)\,G(t,x-y)\,dy.$$
Note that, in the Fourier domain,
$$\mathcal{F}[\Phi_tf](z) = \hat f(z)\,e^{-\nu z^2t}.$$
In particular, by the Fourier isometry and the Lebesgue dominated convergence theorem, for every $f\in L^2(\mathbb{R})$,
$$\lim_{t\to\infty}\|\Phi_tf\|_2^2 = \lim_{t\to\infty}\int_{\mathbb{R}}|\hat f(z)|^2\,e^{-\nu z^2t}\,dz = 0.$$
The following proposition indicates several properties that the family of operators $\Phi_t$, $t\ge 0$, does not have.

Proposition 4.5.1. (a) The operator norm $\|\Phi_t\|$ on $L^2(\mathbb{R})$ does not go to zero as $t\to\infty$. (b) The operator $\Phi_t$ is not a Hilbert-Schmidt operator on $L^2(\mathbb{R})$, or, more generally, when considered as acting from $H^r$ to $H^s$, $r>s$.
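Part (a) can be seen numerically before the proof: the integral $\int_{\mathbb{R}}|\hat f(z)|^2e^{-\nu z^2t}\,dz$ appearing in the limit computation above tends to zero for each fixed $f$, but not uniformly over normalized inputs, because rescaled indicator transforms keep it constant. A sketch; the value of $\nu$, the grid, and the tolerance are illustrative:

```python
import numpy as np

# For fhat = b * 1_(0, a) with b = (nu t)^{1/4}, a = (nu t)^{-1/2}
# (so that a b^2 = 1), the Fourier mass surviving the heat multiplier
# exp(-nu z^2 t) is a t-independent constant.
nu = 0.4                                  # illustrative parameter
for t in [0.1, 1.0, 100.0]:
    a = (nu * t) ** -0.5
    b = (nu * t) ** 0.25
    z = np.linspace(0, a, 100_001)
    vals = b ** 2 * np.exp(-nu * z ** 2 * t)
    # trapezoid rule for int_0^a b^2 exp(-nu z^2 t) dz
    mass = (np.sum(vals) - 0.5 * (vals[0] + vals[-1])) * (z[1] - z[0])
    # substituting x = z sqrt(nu t) gives int_0^1 exp(-x^2) dx for every t
    assert abs(mass - 0.7468241) < 1e-4
```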
Proof: (a) It is enough to show that there exists an $\varepsilon>0$ such that, for every $t>0$, there exists an $f\in L^2(\mathbb{R})$ such that $\|f\|_2=1$ and
$$\|\Phi_tf\|_2>\varepsilon. \qquad (4.23)$$
To this end, for $a,b>0$ with $ab^2=1$, consider the function $f$ such that $\hat f(z)=b\,I(0<z<a)$, so that $\|f\|_2^2=ab^2=1$. Given $t>0$, we take $b=(\nu t)^{1/4}$ and achieve (4.23) with
$$\varepsilon = \int_0^1e^{-x^2}\,dx.$$
(b) Recall that a linear operator $T$ acting from a separable Hilbert space $\mathbb{X}$ to a separable Hilbert space $\mathbb{Y}$ is Hilbert-Schmidt if and only if
$$\sum_k\|Th_k\|_{\mathbb{Y}}^2<\infty$$
for one [hence every] orthonormal basis $h_k$, $k\ge 1$, in $\mathbb{X}$. Let us first consider the case $\mathbb{X}=\mathbb{Y}=L^2(\mathbb{R})$. If
$$Tf(x) = \int_{\mathbb{R}}F(x,y)f(y)\,dy$$
for some function $F$, then, by Parseval's identity,
$$\sum_k\|Th_k\|_2^2 = \sum_k\int_{\mathbb{R}}\Big|\int_{\mathbb{R}}F(x,y)h_k(y)\,dy\Big|^2dx = \int_{\mathbb{R}}\int_{\mathbb{R}}|F(x,y)|^2\,dy\,dx.$$
Because, in the case of the operator $\Phi_t$, the kernel is homogeneous, we have
$$\sum_k\|\Phi_th_k\|_2^2 = \int_{\mathbb{R}}\int_{\mathbb{R}}|G(t,x-y)|^2\,dy\,dx = \int_{\mathbb{R}}\int_{\mathbb{R}}|G(t,y)|^2\,dy\,dx = +\infty.$$
More generally, because the norm in the space $H^r$ is
$$\|f\|_{2,r}^2 = \int_{\mathbb{R}}(1+|z|^2)^r\,|\hat f(z)|^2\,dz,$$
it follows by the Fourier isometry that, given an orthonormal basis $\tilde h_k$, $k\ge 1$, in $H^r$, the functions $h_k$ with $\hat h_k(z) = (1+|z|^2)^{r/2}\,\mathcal{F}[\tilde h_k](z)$ form an orthonormal basis in $L^2(\mathbb{R})$, and
$$\|\Phi_t\tilde h_k\|_{2,s}^2 = \int_{\mathbb{R}}(1+|z|^2)^{s-r}\,e^{-\nu t|z|^2}\,|\hat h_k(z)|^2\,dz.$$
Therefore, there exists a function $G^{(r,s)}=G^{(r,s)}(t,x)$ such that
$$\sum_k\|\Phi_t\tilde h_k\|_{2,s}^2 = \int_{\mathbb{R}}\int_{\mathbb{R}}|G^{(r,s)}(t,x-y)|^2\,dy\,dx = +\infty.$$

Chapter 5: Conclusion and Further Directions

Let $\mathcal{L}$ be a self-adjoint elliptic operator on a separable Hilbert space $H$. Under suitable conditions, we expect that, as $t\to+\infty$, the solution of the parabolic equation
$$\frac{\partial u}{\partial t} = \mathcal{L}u+f$$
converges to the solution of the elliptic equation $\mathcal{L}v=-f$. The results of [LS20] show that, under some conditions, the solution of the stochastic evolution equation
$$\frac{\partial u}{\partial t} = \mathcal{L}u+\dot W, \qquad (5.1)$$
driven by a cylindrical Brownian motion on $H$, converges in distribution to the solution of
$$(-\mathcal{L})^{1/2}v = V, \qquad (5.2)$$
where $V$ is an isonormal Gaussian process on $H$.
In particular, we establish this convergence when $\mathcal{L}$ is the Laplace operator and the solution of (5.2) is the Gaussian free field. One could study equation (5.1) with other operators $\mathcal{L}$ and driving processes $\dot W$, resulting in different limits coming out of equation (5.2). Besides purely mathematical interest, another motivation for this study is scaling limits of (mostly yet to be discovered) discrete models. In the nonlinear setting, we discussed how there exists an invariant measure over $L^2$ in a bounded domain in one dimension, and over the whole space there exists an invariant measure after a suitable choice of Banach spaces. The question of higher dimensions still remains. One technical challenge is that the solution to the linear problem does not exist as a function, which suggests that one must look at methods taking limits. The method of time regularization was presented in earlier chapters, and it might prove to be useful in the study of nonlinear stochastic heat equations over higher dimensions.

References

[AM03] S. Assing and R. Manthey. Invariant measures for stochastic heat equations with unbounded coefficients. Stochastic Process. Appl., 103(2):237-256, 2003.
[AT07] R. J. Adler and J. E. Taylor. Random fields and geometry. Springer Monographs in Mathematics. Springer, New York, 2007.
[BCD11] H. Bahouri, J. Y. Chemin, and R. Danchin. Fourier analysis and nonlinear partial differential equations, volume 343 of Grundlehren der mathematischen Wissenschaften. Springer, Heidelberg, 2011.
[Bog98] V. I. Bogachev. Gaussian measures, volume 62 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 1998.
[BS15] J. Becnel and A. Sengupta. The Schwartz space: tools for quantum mechanics and infinite dimensional analysis. Mathematics, 3(2):527-562, 2015.
[BU86] S. S. Barsov and V. V. Ul'yanov. Estimates for the closeness of Gaussian measures. Dokl. Akad. Nauk SSSR, 291(2):273-277, 1986.
[DC15] B. P. Dhungana and H. Chengshao. Uniqueness in the Cauchy problem for the Hermite heat equation. Internat. J. Theoret. Phys., 54(1):36-41, 2015.
[Don69] W. F. Donoghue, Jr. Distributions and Fourier transforms, volume 32 of Pure and Applied Mathematics. Academic Press, New York, 1969.
[DPZ88] G. Da Prato and J. Zabczyk. A note on semilinear stochastic equations. Differential Integral Equations, 1(2):143-155, 1988.
[DPZ96] G. Da Prato and J. Zabczyk. Ergodicity for infinite-dimensional systems, volume 229 of London Mathematical Society Lecture Note Series. Cambridge University Press, Cambridge, 1996.
[DPZ14] G. Da Prato and J. Zabczyk. Stochastic equations in infinite dimensions, volume 152 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge, second edition, 2014.
[DS89] J. D. Deuschel and D. W. Stroock. Large deviations, volume 137 of Pure and Applied Mathematics. Academic Press, Boston, MA, 1989.
[Eva10] L. C. Evans. Partial differential equations, volume 19 of Graduate Studies in Mathematics. American Mathematical Society, Providence, RI, second edition, 2010.
[GJ81] J. Glimm and A. Jaffe. Quantum physics: A functional integral point of view. Springer-Verlag, New York-Berlin, 1981.
[GR15] I. S. Gradshteyn and I. M. Ryzhik. Table of integrals, series, and products. Elsevier/Academic Press, Amsterdam, eighth edition, 2015.
[Gut13] A. Gut. Probability: a graduate course. Springer Texts in Statistics. Springer, New York, second edition, 2013.
[Hai08] M. Hairer. Ergodic theory for stochastic PDEs. University lecture notes, 2008.
[Hai09] M. Hairer. An introduction to stochastic PDEs, 2009. arXiv:0907.4178.
[JS03] J. Jacod and A. N. Shiryaev. Limit theorems for stochastic processes, volume 288 of Grundlehren der mathematischen Wissenschaften. Springer-Verlag, second edition, 2003.
[Kle20] A. Klenke. Probability theory: A comprehensive course. Universitext. Springer, Cham, third edition, 2020.
[Kry96] N. V. Krylov. Lectures on elliptic and parabolic equations in Hölder spaces, volume 12 of Graduate Studies in Mathematics. American Mathematical Society, Providence, RI, 1996.
[Kry99] N. V. Krylov. An analytic approach to SPDEs. In Stochastic partial differential equations: six perspectives, volume 64 of Math. Surveys Monogr., pages 185-242. Amer. Math. Soc., Providence, RI, 1999.
[LR17] S. V. Lototsky and B. L. Rozovsky. Stochastic partial differential equations. Universitext. Springer, Cham, 2017.
[LS89] R. Sh. Liptser and A. N. Shiryayev. Theory of martingales, volume 49 of Mathematics and its Applications (Soviet Series). Kluwer Academic Publishers Group, Dordrecht, 1989. Translated from the Russian by K. Dzhaparidze.
[LS20] S. V. Lototsky and A. Shah. Gaussian fields and stochastic heat equations. Differential Integral Equations, 33(9-10):527-554, 2020.
[Nua06] D. Nualart. The Malliavin calculus and related topics. Probability and its Applications (New York). Springer-Verlag, Berlin, second edition, 2006.
[OLBC10] F. W. J. Olver, D. W. Lozier, R. F. Boisvert, and C. W. Clark, editors. NIST handbook of mathematical functions. U.S. Department of Commerce, National Institute of Standards and Technology, Washington, DC; Cambridge University Press, Cambridge, 2010.
[Pey18] R. Peyre. Comparison between $W_2$ distance and $\dot H^{-1}$ norm, and localization of Wasserstein distance. ESAIM Control Optim. Calc. Var., 24(4):1489-1501, 2018.
[Pic67] J. Pickands, III. Maxima of stationary Gaussian processes. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 7:190-223, 1967.
[Pro05] P. E. Protter. Stochastic integration and differential equations, volume 21 of Stochastic Modelling and Applied Probability. Springer-Verlag, Berlin, second edition, 2005. Version 2.1, corrected third printing.
[RL18] B. L. Rozovsky and S. V. Lototsky. Stochastic evolution systems: Linear theory and applications to non-linear filtering, volume 89 of Probability Theory and Stochastic Modelling. Springer, Cham, second edition, 2018.
[RT03] B. Rajeev and S. Thangavelu. Probabilistic representations of solutions to the heat equation. Proc. Indian Acad. Sci. Math. Sci., 113(3):321-332, 2003.
[Sam76] S. G. Samko. Spaces of Riesz potentials. Izv. Akad. Nauk SSSR Ser. Mat., 40(5):1143-1172, 1976.
[She07] S. Sheffield. Gaussian free fields for mathematicians. Probab. Theory Related Fields, 139(3-4):521-541, 2007.
[Shu01] M. A. Shubin. Pseudodifferential operators and spectral theory. Springer-Verlag, Berlin, second edition, 2001.
[Wal86] J. B. Walsh. An introduction to stochastic partial differential equations. Springer, Berlin, 1986.
[WW20] W. Werner and E. Powell. Lecture notes on the Gaussian free field, 2020. arXiv:2004.04720.
Abstract
We study the long-term behavior of stochastic parabolic equations in a domain $\mathcal{O}\subseteq\mathbb{R}^d$. First we look at the stochastic heat equation over a bounded domain, driven by Gaussian white noise. We prove that, for the linear equation, the solution converges to the Gaussian free field. When we include a cubic nonlinear term, we find that in one dimension there is a unique invariant measure, which is concentrated on $C_0[0,1]$.
Next, we look at the linear stochastic heat equation over the whole space and once again find that the Gaussian free field is the invariant measure. We also observe, however, that the stochastic heat equation with additive white noise is not amenable to being studied over $L^2(\mathbb{R}^d)$, so we suggest some alternatives, such as the Schwartz spaces and weighted Banach spaces.
Finally, we present a method of approximating Brownian motion by regularizing it in time. The idea is to study systems with this time-regularized noise $W_\kappa(t)$ and to study how the solution changes as $W_\kappa(t)$ approaches the Brownian motion. Since the trajectories of $W_\kappa(t)$ are continuously differentiable, this approximation cannot be in total variation, and instead we must use the Wasserstein metric.
Asset Metadata
Creator: Shah, Apoorva (author)
Core Title: Gaussian free fields and stochastic parabolic equations
School: College of Letters, Arts and Sciences
Degree: Doctor of Philosophy
Degree Program: Mathematics
Degree Conferral Date: 2022-08
Publication Date: 06/22/2022
Defense Date: 06/22/2022
Publisher: University of Southern California (original); University of Southern California. Libraries (digital)
Tag: Gaussian free field, invariant measures, OAI-PMH Harvest, stochastic heat equation, stochastic partial differential equations
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Lototsky, Sergey (committee chair); Ghanem, Roger (committee member); Mikulevicius, Remigijus (committee member)
Creator Email: apoorvashah101@gmail.com, apoorvps@usc.edu
Permanent Link (DOI): https://doi.org/10.25549/usctheses-oUC111345527
Unique Identifier: UC111345527
Legacy Identifier: etd-ShahApoorv-10781
Document Type: Dissertation
Rights: Shah, Apoorva
Type: texts
Source: 20220623-usctheses-batch949 (batch); University of Southern California (contributing entity); University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the author, as the original true and official version of the work, but does not grant the reader permission to use the work if the desired use is covered by copyright. It is the author, as rights holder, who must provide use permission if such use is covered by copyright. The original signature page accompanying the original submission of the work to the USC Libraries is retained by the USC Libraries and a copy of it may be obtained by authorized requesters contacting the repository e-mail address given.
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA
Repository Email: cisadmin@lib.usc.edu