Parameter Estimation Problems for Stochastic Partial Differential Equations from Fluid Dynamics

by

Zachary Wickham

A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(Mathematics)

August 2021

Copyright 2021 Zachary Wickham

Dedication

I dedicate this thesis to my late father Marvin Wickham. I would not be the person I am today without his love, influence, and support.

Acknowledgements

First and foremost, I would like to thank my advisor Nabil Ziane for being very encouraging and understanding throughout the process. His patience and insight were crucial to completing this thesis. I would also like to thank Igor Kukavica for his assistance early on during the student analysis seminars. Additionally, I'd like to thank Paul Newton, Juhi Jang, and Remi Mikulevicius for serving on my exam committees. Moreover, I'd like to thank Susan Friedlander for providing summer support going into my fifth year in the department.

There is no way I would have made it without my friends in the math department. I'd like to specifically thank Harrison, Austin, John, Eilidh, and Gene for their encouragement and for all the non-math fun we had. Likewise, I'd like to thank my good friends from Downey who served as a much needed break from school. Thanks to Mario, Armando, Billy, Brandon, Eddie, Mikhael, Waldo, Andrew, and Sam for being there for me when I needed them.

One of the best decisions I made in grad school was joining the USC Trojan Marching Band. To all of the trumpet players I had the privilege of meeting from 2018 until now, thank you for giving me a home away from home. I can't wait to play music with all of you again some day.

I would also like to thank my mom Carol, dad Marvin, sister Erin, brother-in-law Paul, and the newest addition, niece Naomi, for all of their love during the past six years. I greatly appreciate their support in all aspects of my life. And of course, I need to thank my little chihuahua Pikachu for (usually) bringing my stress levels down, especially this past year.

Table of Contents

Dedication . . . ii
Acknowledgements . . . iii
Abstract . . . vi
Chapter 1: Introduction . . . 1
Chapter 2: Boussinesq Equations . . . 3
    2.1 Boussinesq Equations with $\nu, \kappa > 0$ . . . 3
    2.2 Estimators and the Main Result . . . 21
    2.3 Proof of the Main Result . . . 26
    2.4 Boussinesq Equations with $\nu > 0$, $\kappa = 0$ . . . 49
    2.5 3D Equations . . . 59
Chapter 3: Primitive Equations . . . 80
    3.1 Introduction . . . 80
    3.2 Regularity . . . 84
    3.3 Estimators . . . 85
    3.4 3D Primitive Equations . . . 88
Chapter 4: Linearized Equations . . . 95
    4.1 Introduction . . . 95
    4.2 Proof of the Main Result . . . 102
Chapter 5: Sample Numerical Examples . . . 107
    5.1 Stochastic Stokes and Heat Equation . . . 107
    5.2 Extension to other equations . . . 113
Appendix
    A.1 Heat and Stokes Equation . . . 120
    A.2 Probability Results . . . 121
Reference List . . . 123

Abstract

We investigate the parameter estimation problem for several stochastic partial differential equations arising in fluid dynamics. These include the fully and partially viscous stochastic Boussinesq system in both two and three dimensions, the stochastic Navier-Stokes equations in three dimensions, the stochastic primitive equation in two dimensions, and a linearized Navier-Stokes equation in both two and three dimensions. Depending on the equation, either Dirichlet or periodic boundary conditions are imposed. Using an approach based on maximum likelihood estimation, we construct three classes of estimators assuming our observations are the first $n$ Fourier modes of the solutions to the above equations on a finite time interval. We prove consistency in all cases and asymptotic normality for one of the classes. We provide some numerical simulations for the stochastic Stokes equation and explain how one could extend them to more complicated systems.

Chapter 1
Introduction

1.1 Overview

In this thesis, we explore the parameter estimation problem for several systems of partial differential equations originating from fluid dynamics perturbed by an additive white noise. Consider a family of probability measures $P^\theta$ indexed by a parameter $\theta$. The parameter estimation problem involves using known data to estimate the unknown parameter. For our purposes, the data we observe will be the first $n$ Fourier modes of the solution to the stochastic PDE in question on a finite time interval $[0,T]$. The method of obtaining formulas for different estimators is motivated by maximum likelihood estimation. This approach to estimating parameters for stochastic PDEs has been well studied in the literature and is often referred to as the "spectral method." Huebner and Rozovskii began exploring this method in [19] and, along with Lototsky, in [20] for some stochastic PDEs. Lototsky and Rozovskii also studied the parameter estimation problem in [25], [24], [23], and [26]. In 2011, Cialenco and Glatt-Holtz adapted the methods established in the previous works to the two-dimensional stochastic Navier-Stokes equations in [4], and Prakasa Rao considered the stochastic Navier-Stokes equations with Levy noise in [31]. In 2019, Pasemann and Stannat studied the problem for reaction-diffusion equations in [27].

This thesis is motivated by the work of Cialenco and Glatt-Holtz. Their outline provides a foundation for studying other equations from fluid dynamics. We pay particular attention to the two- and three-dimensional Boussinesq equations, the three-dimensional Navier-Stokes equations, the two-dimensional primitive equations, and a linearized Navier-Stokes-type system. The function spaces, operators, and boundary conditions for each set of equations are explored in their corresponding chapters. We also review the current regularity results and then prove consistency of the estimators along with asymptotic normality in a few cases.

Chapter 2
Boussinesq Equations

2.1 Boussinesq Equations with $\nu, \kappa > 0$

In this section, we are interested in the Boussinesq equations perturbed by an additive noise in a two-dimensional bounded domain $\mathcal{O}$:
\[
\begin{aligned}
du + \left(-\nu\Delta u + (u\cdot\nabla)u + \nabla p - \theta e_2\right)dt &= \sigma_1\, dW^1_t\\
d\theta + \left(-\kappa\Delta\theta + (u\cdot\nabla)\theta\right)dt &= \sigma_2\, dW^2_t\\
\nabla\cdot u &= 0\\
u(0) = u_0, \quad \theta(0) &= \theta_0.
\end{aligned}
\tag{2.1}
\]
Here, $u$ is the velocity field of a fluid, $\theta$ is the scalar temperature, and $p$ is the scalar pressure.
$\nu$ is the viscosity of the fluid, $\kappa$ is the thermal diffusivity, and $e_2 = (0,1)$. We impose the Dirichlet boundary conditions $u|_{\partial\mathcal{O}} = 0$ and $\theta|_{\partial\mathcal{O}} = 0$.

These equations are used in modeling the interactions of heat transfer within a fluid, especially flows in the ocean and atmosphere. As mentioned in [10], the derivation of this system comes about by considering flows where the temperature does not vary by much, and hence neither will the density of the fluid. This variation of density is thus ignored except in regards to buoyancy. Writing the temperature as a difference between the top and bottom layer is what gives rise to the $\theta$ equation as well as the presence of $\theta e_2$ in the $u$ equation. Since the early 2000s, mathematicians have begun adding noise terms in order to account for small perturbations unable to be accounted for in the velocity, the temperature, and the various parameters that show up here and in related forms of the equations.

These equations have been heavily studied in both the deterministic and stochastic settings, as well as in the presence and/or absence of $\nu$ and $\kappa$. See [16], [3], and [17] for the deterministic setting and [1], [35], [21], [34], and [9] for the stochastic one.

We start with the deterministic setting of these equations. This material can be found in, for example, [5] and [33]. Define the following spaces:
\[
H_1 = \{u\in L^2(\mathcal{O})^2 : \nabla\cdot u = 0,\ u\cdot n|_{\partial\mathcal{O}} = 0\}, \qquad H_2 = L^2(\mathcal{O}),
\]
\[
V_1 = \{u\in H^1_0(\mathcal{O})^2 : \nabla\cdot u = 0\}, \qquad V_2 = H^1_0(\mathcal{O}).
\tag{2.2}
\]
We write $L^p(\mathcal{O}) = L^p$ and $H^k(\mathcal{O}) = H^k$, and moreover use the same notation for both scalar functions and vector fields. We equip $H_1$ and $H_2$ with the $L^2$ inner product and norm
\[
(u,v) = \int_{\mathcal{O}} u(x)\cdot v(x)\,dx, \qquad |u|^2 = (u,u),
\]
using the same notation for both scalars and vector fields. We also equip $V_1$ and $V_2$ with the inner product and norm
\[
((u,v)) = (\nabla u, \nabla v), \qquad \|u\|^2 = ((u,u)).
\]
Due to the Dirichlet boundary conditions, we have the very useful Poincare inequality
\[
|u| \le C\|u\|. \tag{2.3}
\]
Let $P : L^2 \to H_1$ be the orthogonal projection of $L^2$ onto $H_1$, known as the Leray projector. Define the Stokes operator
\[
A_1 = -P\Delta. \tag{2.4}
\]
$A_1$ is an unbounded operator on $H_1$ with domain $D(A_1) = H^2\cap V_1$. One can show that $A_1$ is self-adjoint with compact inverse. By the spectral theorem for compact self-adjoint operators, we infer the existence of an orthonormal basis $\{h^1_k\}_{k\ge 1}$ of $H_1$ consisting of eigenfunctions of $A_1$ with eigenvalues $\{\lambda_k\}_{k\ge 1}$ satisfying $0 < \lambda_1 \le \lambda_2 \le \cdots$ and $\lim_{k\to\infty}\lambda_k = \infty$. With this, we note that
\[
((u,v)) = (A_1^{1/2}u, A_1^{1/2}v) = (A_1 u, v), \qquad u,v\in V_1.
\]
We will need to work with powers of the Stokes operator. For $\alpha > 0$, define the domain of $A_1^\alpha$ via
\[
D(A_1^\alpha) = \Big\{ u\in H_1 : \sum_k \lambda_k^{2\alpha} |u_k|^2 < \infty \Big\},
\]
where $u_k = (u, h^1_k)$. For $u\in D(A_1^\alpha)$, define $A_1^\alpha u = \sum_k \lambda_k^\alpha u_k h^1_k$.

Similarly, let $A_2 := -\Delta$ with domain $D(A_2) = V_2\cap H^2$. $A_2$ has a compact, self-adjoint inverse, so there exists a basis $\{h^2_k\}_{k\ge 1}$ of $H_2$ consisting of eigenfunctions of $A_2$ with corresponding eigenvalues $\mu_k$ satisfying $0 < \mu_1 \le \mu_2 \le \cdots$ and $\lim_{k\to\infty}\mu_k = \infty$. We see that
\[
((u,v)) = (A_2^{1/2}u, A_2^{1/2}v) = (A_2 u, v), \qquad u,v\in V_2.
\]
The domains of $A_2^\alpha$ are defined analogously to the above:
\[
D(A_2^\alpha) = \Big\{ u\in H_2 : \sum_k \mu_k^{2\alpha} |u_k|^2 < \infty \Big\},
\]
where here, $u_k = (u, h^2_k)$. For $u\in D(A_2^\alpha)$, we similarly define $A_2^\alpha u = \sum_k \mu_k^\alpha u_k h^2_k$. We have the following useful lemma.

Lemma 2.1.1. Let $0 \le \alpha_1 < \alpha_2$. For $u\in D(A_j^{\alpha_2})$,
\[
|A_j^{\alpha_1} u| \le (\lambda^j_1)^{\alpha_1-\alpha_2} |A_j^{\alpha_2} u|, \tag{2.5}
\]
where $\lambda^1_k = \lambda_k$ and $\lambda^2_k = \mu_k$.

Define $P^j_n : H_j \to \operatorname{span}\{h^j_1,\dots,h^j_n\} =: H_{j,n}$, $j = 1,2$, to be the projection of $H_j$ onto the span of the first $n$ eigenfunctions of $A_j$. Define $Q^j_n = I - P^j_n$. We record a useful lemma, similar to the above.

Lemma 2.1.2. Let $\alpha_1 < \alpha_2$.
For u2D(A 2 j ), we have jA 1 j Q j n uj ( j n ) 1 2 jA 2 j Q j n uj (2.6) jA 2 j P j n uj ( j n ) 2 1 jA 1 j P j n uj; (2.7) where 1 n = n and 2 n = n . Next, dene the bilinear form B 1 :V 1 V 1 !V 0 1 via B 1 (u;v) =P (ur)v. That is, for u, v, and w2V 1 , we have hB 1 (u;v);wi = Z O u j @ j v k w k dx: We also dene the bilinear form B 2 :V 1 V 2 !V 0 2 via B 2 (u;) = (ur). That is, for u2V and , '2V 2 , we have hB 2 (u;);') = Z O u j @ j 'dx (2.8) 7 The following lemma consists of some classical facts about the operator B. Lemma 2.1.3. (i) For u, v, w2V 1 , we have hB 1 (u;v);vi = 0; hB 1 (u;v);wi =hB 1 (u;w);vi: (2.9) (ii) For u, v, w2V 1 , jhB 1 (u;v);wijCjuj 1=2 jjujj 1=2 jjvjjjwj 1=2 jjwjj 1=2 : (2.10) (iii) For u2V 1 , v2D(A 1 ), and w2H 1 , then j(B 1 (u;v);w)jCjuj 1=2 jjujj 1=2 jjvjj 1=2 jA 1 vj 1=2 jwj: (2.11) (iv) B 1 (u;v)2D(A 1 ) for every 0< < 1=4 and u;v2D(A 1 ), and moreover, jA 1 B 1 (u)j 2 Cjjujj 2 jA 1 uj 2 : (2.12) The proofs of (iv) can be found in [15]. The proofs of (i) (iii) are classical and can be found in, for example, [5] and [33]. We have some analogous results for B 2 . 8 Lemma 2.1.4. (i) For u2V and , '2V 2 , hB 2 (u;);i = 0; hB 2 (u;);'i =hB 2 (u;');i: (2.13) (ii) For u2V , ;'2V 2 jhB 2 (u;);'ijCjuj 1=2 jjujj 1=2 jjjjj'j 1=2 jj'jj 1=2 : (2.14) (iii) For u2V , 2D(A 2 ), and '2H 1 , then j(B 2 (u;);')jCjuj 1=2 jjujj 1=2 jjjj 1=2 jA 2 j 1=2 j'j: (2.15) (iv) For u2D(A 1 ) and 2D(A 2 ), we have jjB 2 (u;)jjCjuj 1=4 jA 1 uj 3=4 jj 1=4 jA 2 j 3=4 +Cjuj 1=2 jA 1 uj 1=2 jA 2 j: (2.16) (v) B 2 (u;)2D(A 2 ) for every 0< < 1=4, u2D(A 1 ) and 2D(A 2 ), and moreover, jA 2 B 2 (u;)j 2 CjA 2 j 2 +Cjjujj 2+2 1 jA 1 uj 2 +Cjuj 4 jjujj 2 jjjj 2 jA 1 uj 2 : (2.17) 9 Proof. The rst three items are analogous to Lemma 2.1.3 and are classical. Item (iv) was proved in [17] in particular for the case of a bounded domain; it also holds in the case of periodic boundary conditions. We now prove item (v). We make use the interpolation result from [12], which is also used to prove item (iv) in the previous lemma: For 2 (0; 1=4), jA UjjUj 12 jjUjj 2 (2.18) for U2D(A ); here, A can be A 1 or A 2 . Using (2.15), we obtain jB 2 (u;)j 2 CjujjjujjjjjjjA 2 j: (2.19) Combining this with (2.16), we obtain for 2 [0; 1=4): jA 2 B 2 (u;)j 2 C jB 2 (u;)j 12 jjB 2 (u;)jj 2 2 C juj 12 jjujj 12 jjjj 12 jA 2 j 12 juj jA 1 uj 3 jj jA 2 j 3 +juj 12 jjujj 12 jjjj 12 jA 2 j 12 juj 2 jA 1 uj 2 jA 2 j 4 C juj 1 jj jjjj 12 jA 1 uj 3 jA 2 j 1+ +jujjjujj 12 jjjj 12 jA 1 uj 2 jA 2 j 1+2 =I 1 +I 2 : (2.20) 10 We bound bothI 1 andI 2 via the Poincare and Young inequalities. ForI 1 , we apply Young's with p = 2=(1 +)> 1 and q = 2=(1). We obtain I 1 CjA 2 j 2 +Cjuj 2 jjujj 24 1 jA 1 uj 6 1 CjA 2 j 2 +Cjjujj 2+2 1 jA 1 uj 2 (2.21) where we used the fact that 2> 8 1 since < 1=4. ForI 2 , we apply Young's inequality withp = 4=3 andq = 4 after applying Poincare. We have I 2 Cjujjjujj 1=2 jA 1 uj 1=2 jjjj 1=2 jA 2 j 3=2 CjA 2 j 2 +Cjuj 4 jjujj 2 jjjj 2 jA 1 uj 2 : (2.22) Combining the bounds for I 1 and I 2 give the desired result. We remark that the termjjujj 2+2 1 can be replaced with maxfjjujj 2 ;jjujj 10=3 g by consid- eringjjujj 1 orjjujj > 1. This eliminates the dependency in the upper bound. dependency is ne for our purposes, so we will keep it. We now describe the noise terms. For more details, see for example [6], [15], and [4]. LetS = ;F;fF t g t0 ;P;f j k g k1 be a stochastic basis, wheref j k g k1 , j = 1; 2 are two 11 sequences of independent scalar Brownian motions which are also independent of one another. 
We may now formally dene 1 dW 1 t = X k1 1 k h 1 k d 1 k and 2 dW 2 t = X k1 2 k h 2 k d 2 k where we require ; 2 > 1. With the notation established, we now rewrite stochastic Boussinesq equations as du + (A 1 u +B 1 (u)Pe 2 )dt =A 1 1 dW 1 t (2.23) d + (A 2 +B 2 (u;))dt =A 2 2 dW 2 t (2.24) u(0) =u 0 ; (0) = 0 : (2.25) For the purposes of the parameter estimation problem, we must establish existence of a unique solution to the above. As mentioned in the previous section, the regularity of the above has been well established in the literature. Most of these results are also interested in the regularity of the solutions in the random ! variable, and as such, they list their results in terms of stopping times. This is discussed in [21] with the exact same noise terms we consider. For the parameter estimation problem, we are not concerned with this, and thus 12 we are only interested in the regularity of u and in the space and time variables almost surely. We state what is needed below as a proposition. Proposition 2.1.5. Let (u 0 ; 0 )2V 1 V 2 . Then there exists uniqueF t -adapted processes u and which are H 1 and H 2 valued respectively such that u2L 2 loc ([0;1);D(A 1 ))\C([0;1);V 1 ) a:s: 2L 2 loc ([0;1);D(A 2 ))\C([0;1);V 2 ) a:s: (2.26) and t 0, u(t) + Z t 0 (A 1 u +B 1 (u)Pe 2 )ds =u 0 +A 1 1 dW 1 t (t) + Z t 0 (A 2 +B 2 (u;))ds = 0 +A 2 2 dW 2 t : (2.27) Sketch of Proof. We will provide the a priori estimates required to prove existence. They may be justied rigorously with a Galerkin approximation. Note that all equations and estimates below are almost sure in !. We will omit writing this for the remainder of the proof. First, write u = u + ~ u and = + ~ where u and satisfy the stochastic Stokes and heat equations d u +A 1 udt =A 1 1 dW 1 t ; u(0) =u 0 d +A 2 dt =A 2 2 dW 2 t ; (0) = 0 (2.28) 13 and the residuals ~ u and ~ satisfy d dt ~ u +A 1 ~ u +B 1 ( u + ~ u)Pe 2 = 0; ~ u(0) = 0 (2.29) d dt ~ +A 2 ~ +B 2 ( u + ~ u; + ~ ) = 0; ~ (0) = 0: (2.30) First, we multiply (2.29) by ~ u, integrate, and use the cancellation property of B 1 to obtain 1 2 d dt j~ uj 2 +jj~ ujj 2 = (Pe 2 ; ~ u) (B 1 (u; u); ~ u): (2.31) Similarly, multiply (2.30) by ~ , integrate and use the cancellation property of B 2 to obtain 1 2 d dt j ~ j 2 +jj ~ jj 2 =(B 2 (u; ); ~ ): (2.32) 14 We now bound each nonlinear term separately using Lemma 2.1.3 and Lemma 2.1.4 along with the Poincare and Young inequalities. We see j(B 1 (u; u); ~ u)jC j~ uj 1=2 +j uj 1=2 jj~ ujj 1=2 +jj ujj 1=2 jj ujjj~ uj 1=2 jj~ ujj 1=2 Cj~ ujjj~ ujjjj ujj +Cj~ ujjj~ ujj 1=2 jj ujj 1=2 jj ujj +Cj~ uj 1=2 j uj 1=2 jj~ ujjjj ujj +Cj~ uj 1=2 j uj 1=2 jj~ ujj 1=2 jj ujj 1=2 jj ujj 3 jj~ ujj 2 +C jj ujj 6 +jj ujj 4 +Cjj ujj 2 j~ uj 2 : (2.33) Similarly, we bound the B 2 term. j(B 2 (u; ); ~ )jC j~ uj 1=2 +j uj 1=2 jj~ ujj 1=2 +jj ujj 1=2 jj jjj ~ j 1=2 jj ~ jj 1=2 =Cj~ uj 1=2 jj~ ujj 1=2 jj jjj ~ j 1=2 jj ~ jj 1=2 +Cj~ uj 1=2 jj ujj 1=2 jj jjj ~ j 1=2 jj ~ jj 1=2 +Cj uj 1=2 jj~ ujj 1=2 jj jjj ~ j 1=2 jj ~ jj 1=2 +Cj uj 1=2 jj ujj 1=2 jj jjj ~ j 1=2 jj ~ jj 1=2 6 jj~ ujj 2 + 2 jj ~ jj 2 +C(1 +jj jj 4 )j~ uj 2 +C(1 +jj jj 2 )j ~ j 2 +Cjj ujj 2 jj jj 4 +Cj uj 2 +Cj ujjj ujjjj jj 2 : (2.34) 15 Finally, we have j(Pe 2 ; ~ u)jCj ~ jj~ uj +Cj jj~ ujC(j~ uj 2 +j ~ j 2 ) +Cj j 2 +j~ uj 2 : (2.35) Let (t) = R t 0 Cjj ujj 2 jj jj 4 +Cj uj 2 +Cj ujjj ujjjj jj 2 +Cj j 2 +Cjj ujj 6 +Cjj ujj 4 ds, and let (t) =C(1 +jj ujj 2 +jj jj 4 +jj jj 2 ), where allC values are chosen to be the maximum of the C values appearing in (2.33) - (2.35). 
By adding (2.31) and (2.32), then combining (2.33) - (2.35), we obtain j~ uj 2 +j ~ j 2 + Z t 0 2 jj~ ujj 2 + 2 jj ~ jj 2 ds(t) + Z t 0 (s)(j~ uj 2 +j ~ j 2 )ds: (2.36) where (t) and (t) absorbed the 2. Fix T > 0. We may then apply Gronwall's inequality to obtain for t<T j~ uj 2 +j ~ j 2 (t) exp Z t 0 (s)ds : (2.37) Due to the regularity of u and established in (A.1.1), we have that (t); R T 0 (t)dt <1 for all t 0. Thus, we have sup t2[0;T ] (j~ u(t)j 2 +j ~ (t)j 2 )<1; (2.38) 16 which gives ~ u2L 1 loc ([0;1);H 1 ) and ~ 2L 1 loc ([0;1);H 2 ) almost surely. With this result, we return to (2.36) to conclude ~ u2L 2 loc ([0;1);V 1 ) and ~ 2L 2 loc ([0;1);V 2 ). Next, we multiply (2.29) by A 1 ~ u and (2.30) by A 2 ~ , integrate, and obtain 1 2 d dt jj~ ujj 2 +jA 1 ~ uj 2 =(B 1 ( u + ~ u);A 1 ~ u) + (Pe 2 ;A 1 ~ u); (2.39) 1 2 d dt jj ~ jj 2 +jA 2 ~ j 2 =(B 2 ( u + ~ u; + ~ );A 2 ~ ): (2.40) We proceed much in the same way as above, utilizing Lemma 2.1.3 and Lemma 2.1.4 along with Poincare and Young inequalities to obtain 1 2 d dt jj~ ujj 2 +jA 1 ~ uj 2 2 jA 1 ~ uj 2 +C(jj~ ujj 2 +jj ~ jj 2 )(1 +jujjA 1 uj +juj 2 jj~ ujj 2 ) +C(jj jj 2 +juj 2 jj ujj 4 +jujjj ujj 2 jA 1 uj) (2.41) and 1 2 d dt jj ~ jj 2 +jA 2 ~ j 2 2 jA 2 ~ j 2 +C(jujjj( u; )jj 2 jA 2 j 2 +juj 2 jj jj 2 jj ujj 2 ) +C(jj~ ujj 2 +jj ~ jj 2 )(jujjA 2 j 2 +juj 2 jj jj 2 +juj 2 jj ujj 2 +juj 2 jj~ ujj 2 ): (2.42) 17 Redene (t) = Z t 0 C(jujjj( u; )jj 2 jA 2 j 2 +juj 2 jj jj 2 jj ujj 2 ) +C(jj jj 2 +juj 2 jj ujj 4 +jujjj ujj 2 jA 1 uj)ds (2.43) and (t) =C(jujjA 2 j 2 +juj 2 jj jj 2 +juj 2 jj ujj 2 +juj 2 jj~ ujj 2 ) +C(jujjA 2 j 2 +juj 2 jj jj 2 +juj 2 jj ujj 2 +juj 2 jj~ ujj 2 ); (2.44) where all Cs are the maximum of those that appear in (2.41) and (2.42). Adding together (2.39) and (2.40) and combining (2.41) and (2.42) gives 1 2 d dt (jj~ ujj 2 +jj ~ jj 2 ) + 2 jA 1 ~ uj 2 + 2 jA 2 ~ j 2 0 (t) +(t)(jj~ ujj 2 +jj ~ jj 2 ); (2.45) so d dt (jj~ ujj 2 +jj ~ jj 2 ) 0 (t) +(t)(jj~ ujj 2 +jj ~ jj 2 ): (2.46) 18 FixT > 0. Then from Lemma A.1.1 along with our results on ~ u and ~ established previously in this proof, we have (t) <1 and R t 0 (s)ds <1 for all t T . With this, we apply Gronwall's lemma to (2.46) and obtain sup t2[0;T ] (jj~ ujj 2 +jj ~ jj 2 )(T ) exp Z T 0 (t)dt <1; (2.47) which gives ~ u2 L 1 loc ([0;1);V 1 ) and ~ 2 L 1 loc ([0;1);V 2 ). Returning to (2.45), we may then conclude ~ u2L 2 loc ([0;1);D(A 1 )) and ~ 2L 2 loc ([0;1);D(A 2 )), which completes the a priori estimates. One can use the methods outlined in [15] to show existence. For uniqueness, let (u (1) ; (1) ) and (u (2) ; (2) ) be two solutions to the stochastic Boussinesq equations with initial condition (u 0 ; 0 ). Let u =u (1) u (2) and = (1) (2) . Then u and solve d dt u +A 1 u =Pe 2 B 1 (u (1) ;u)B 1 (u;u (2) ) d dt +A 2 = B 2 (u (1) ;)B 2 (u; (2) ) (2.48) 19 with 0 initial conditions. Note that these are random PDEs as the additive noise cancels when taking the dierence between the solutions. Take the inner product of the u equation with A 1 u and use estimates on B 1 to obtain 1 2 d dt jjujj 2 +jA 1 uj 2 jjjA 1 uj +Cju (1) j 1=2 jju (1) jj 1=2 jjujj 1=2 jAuj 3=2 +juj 1=2 jjujj 1=2 jju (2) jj 1=2 jAu (2) j 1=2 jAuj 2 jA 1 uj 2 +C(M 1 +jju (2) jj 2 +jA 1 u (2) j 2 )jjujj 2 +Cjjjj 2 ; (2.49) where M 1 = sup t2[0;T ] ju (1) j 2 jju (1) jj 2 . 
Next, take the inner product of the equation with A 2 and use the estimates on B 2 to see 1 2 d dt jjjj 2 +jA 2 j 2 2 jA 2 j 2 +CM1jjjj 2 +C(jj (2) jj 2 +jA 2 (2) j 2 )jjujj 2 : (2.50) Adding these two together and rearranging gives d dt (jjujj 2 +jjjj 2 ) +jA 1 uj 2 +jA 2 j 2 C(M 1 +jA 1 u (2) j 2 +jA 2 (2) j 2 )(jjujj 2 +jjjj 2 ): (2.51) By Gronwall's inequality, sincejA 1 u (2) j andjA 2 (2) j are in L 2 ([0;T ]), we concludejjujj 2 + jjjj 2 = 0, and we are done. 20 2.2 Estimators and the Main Result The goal is to estimate the parameters and in the operator form of the stochastic Boussines equations. Our approach is motivated by [4] where the authors estimated in the Navier Stokes equation. Suppose that we observe the rst n Fourier modes of the solution (u;) over a time interval [0;T ]. LetP j n denote the orthogonal projection ofH j onto the span of the rst n eigenfunctions of A j as before. Then the nth Fourier modes of the solutions u and satisfy du n + (A 1 u n +P 1 n B 1 (u)P 1 n Pe 2 )dt =P 1 n A 1 1 dW 1 t d n + (A 2 n +P 2 n B 2 (u;))dt =P 2 n A 2 2 dW 2 t u n (0) =P 1 n u 0 n (0) =P 2 n 0 : (2.52) LetP n;T ; (u) denote the probability measure induced byu n onC([0;T ];H 1;n ) andP n;T ; () the probability measure induced by n on C([0;T ];H 2;n ). Indeed, due to the coupling, these measures implicitly depend on both parameters and . However, as is the case when performing maximum likelihood estimation, we hold constant our observations (u n and n ). In addition, in order to obtain our ansatz, we also assume that P 1 n (B 1 (u) Pe 2 ) and P 2 n B 1 (u;) are observed and independent of and . Thus, since and only appear explicitly in the equations for u and respectively, the two measures above only depend on 21 and respectively. As such, we will simply writeP n;T (u) andP n;T (). Fix the parameters ( 0 ; 0 ). We then formally apply the Girsanov Theorem ([22]) to obtain formal log likelihood estimators. log dP n;T (u) dP n;T 0 (u) = ( 0 ) Z T 0 hP 1 n A 2 1 1 du n ;A 1 u n i 1 2 ( 2 2 0 ) Z T 0 hP 1 n A 2 1 1 A 1 u n ;A 1 u n ids ( 0 ) Z T 0 hP 1 n A 2 1 1 P 1 n B 1 (u);A 1 u n ids ( 0 ) Z T 0 hP 1 n A 2 1 1 P 1 n (Pe 2 );A 1 u n ids (2.53) and log dP n;T () dP n;T 0 () = ( 0 ) Z T 0 hP 2 n A 2 2 2 d n ;A 2 n i 1 2 ( 2 2 0 ) Z T 0 hP 2 n A 2 2 2 A 2 n ;A 2 n ids ( 0 ) Z T 0 hP 2 n A 2 2 2 P 2 n B 2 (u;);A 2 n ids: (2.54) We remark that another way to obtain these formulas is to consider the equation for the joint process = (u;). Then, the log-likelihood ratio for P n;T ; () is simply the sum of the two ratios above. 22 To obtain the estimators for and , we dierentiate the above with respect to and respectively, set them equal to 0, and solve. The maximizer for , denoted n , is n = R T 0 (A 1 u n ;P 1 n A 2 1 1 du n ) + R T 0 hP 1 n A 2 1 1 P 1 n B 1 (u) +P 1 n A 2 1 1 P 1 n (Pe 2 );A 1 u n ids R T 0 (A 1 u n ;P 1 n A 2 1 1 A 1 u n )dt ; (2.55) and the maximizer for , denoted n , is n = R T 0 (A 2 n ;P 2 n A 2 2 1 d n ) + R T 0 (A 2 n ;P 2 n A 2 2 2 P 2 n B 2 (u;))dt R T 0 (A 2 n ;P 2 n A 2 2 2 A 2 n )dt : (2.56) As in [4], we construct three classes of estimators based on the above. We borrow the notation from [27]. The rst comes from replacing j above with a general j whose range will be discussed later. 
We obtain, after simplifying the above, full n = R T 0 (A 1+2 1 1 u n ;du n ) + R T 0 (A 1+2 1 1 u n ;P 1 n (B 1 (u)Pe 2 ))dt R T 0 jA 1+ 1 1 u n j 2 dt ; (2.57) and full n = R T 0 (A 1+2 2 2 n ;d n ) + R T 0 (A 1+ 2 2 n ;P 2 n B 2 (u;))dt R T 0 jA 1+ 2 2 n j 2 dt : (2.58) Note that computing theP j n B j terms above requires knowledge of the full solution as opposed to just the rst n modes. As such, the above estimator is not very realistic. In an attempt 23 to rectify this, we replace the terms with an approximation. Namely, P 1 n B 1 (u) is replaced with P 1 n B 1 (u n ), and P 2 n B 2 (u;) similarly with P 2 n B 2 (u n ; n ): part n = R T 0 (A 1+2 1 1 u n ;du n ) + R T 0 (A 1+2 1 1 u n ;P 1 n (B 1 (u n )P n e 2 ))dt R T 0 jA 1+ 1 1 u n j 2 dt ; (2.59) and part n = R T 0 (A 1+2 2 2 n ;d n ) + R T 0 (A 1+ 2 2 n ;P 2 n B 2 (u n ; n ))dt R T 0 jA 1+ 2 2 n j 2 dt : (2.60) While these two estimators are more reasonable when considering applications than the former, their computations can still be dicult. As we will show, we can drop the B j terms entirely from them to obtain a third class of estimators. lin n = R T 0 (A 1+2 1 1 u n ;du n ) R T 0 jA 1+ 1 1 u n j 2 dt ; (2.61) and lin n = R T 0 (A 1+2 2 2 n ;d n ) R T 0 jA 1+ 2 2 n j 2 dt : (2.62) We adjust the outline given in Cialenco and Glatt-Holtz's paper [4] for our estimators above. We note that the equation for is linear in . As mentioned in the introduction, linear SPDE have been extensively studied in parameter estimation problems. However, the two linear operators that show up (A 2 andB 2 (u;)) do not commute. While there are a few 24 papers that explore parameter estimation with non-commuting linear operators, e.g. [25], they assume that the observations come from the Galerkin approximation of the SPDE as opposed to the projection of the SPDE. As such, their estimators take on a slightly dierent form where the second linear operator does not play much of a role. Hence, we cannot directly apply their results. However, we can utilize the same approach for consistency of the estimators for the consistency of the estimators. We are now ready to state the main results. Theorem 2.2.1. Suppose that u and solve (2.23)-(2.25) with 1< j < 5=4 and (u 0 ; 0 )2 D(A 1 1 )D(A 2 2 ) for j > j 1. Then (i) if j > j 1, then full n , part n , lin n , full n , part n , and lin n are consistent estimators of and respectively, i.e. lim n!1 full n = lim n!1 part n = lim n!1 lin n = (2.63) and lim n!1 full n = lim n!1 part n = lim n!1 lin n =; (2.64) where all limits are in probability. 25 (ii) if j > j 1=2, then both full n and full n are asymptotically normal with rate n, i.e. n( full n ) d ! 1 (2.65) and n( full n ) d ! 2 ; (2.66) where the j are Gaussian random variables with mean zero and variance 2( 1 1 +1) 2 1 T ( 1 1 +1=2) and 2( 2 2 +1) 2 1 T ( 2 2 +1=2) respectively. 2.3 Proof of the Main result 2.3.1 Main Result for Estimators We will treat the and estimators separately for readability. The proofs are similar though, so we include more detail in this section. As we are just dealing with theu equation, we will drop all 1 subscripts and superscripts. 
That is, A =A 1 ; B =B 1 P n =P 1 n ; W t =W 1 t ; k = 1 k = 1 ; = 1 : (2.67) 26 As in [4], we will split our solution u = u + ~ u, where u satises the linear stochastic Stokes equation d u +A u =A dW t ; u 0 = 0; (2.68) and ~ u satises the residual random partial dierential equation @ t ~ u +A~ u +B(u) = 0; ~ u 0 =u 0 : (2.69) We now prove some regularity properties for our residual term ~ u: Lemma 2.3.1. Suppose ~ u is a solution to (2.69) with u being the solution to (2.68) with initial condition u 0 2D(A +1=2 ), < 1=4. Then for every T > 0, we have sup t2[0;T ] jA +1=2 ~ uj 2 + Z T 0 jA +1 ~ uj 2 dt<1 a:s:; (2.70) and for an increasing sequence of stopping times n "1, E sup t2[0;n] jA +1=2 ~ uj 2 + Z n 0 jA +1 ~ uj 2 dt ! <1: (2.71) Proof. Multiply equation (2.69) with A 2+1 ~ u to obtain 1 2 d dt jA +1=2 ~ uj 2 +jA +1 ~ uj 2 =(A B(u);A +1 ~ u) + (A Pe 2 ;A +1 ~ u): (2.72) 27 We estimate the nonlinear term on the right hand side rst via Lemma 2.1.3 since < 1=4: j(A B(u);A +1 ~ u)jCjA B(u)j 2 + 4 jA +1 ~ uj 2 Cjjujj 2 jAuj 2 + 4 jA +1 ~ uj 2 ; (2.73) where C depends on . For the the term, we estimate similarly. (A Pe 2 ;A +1 ~ u)Cjjjj 2 + 4 jA +1 ~ uj 2 ; (2.74) since jA e 2 j 2 CjA 1=2 e 2 j 2 Cjjjj 2 : (2.75) By the regularity of u and proved in the previous chapter, we have Z T 0 Cjjujj 2 jAuj 2 dt<1; Z T 0 Cjjjj 2 dt<1: (2.76) Thus, we integrate (2.72) from 0 to t, rearrange, and take the supremum over t2 [0;T ] to obtain 1 2 sup t2[0;T ] jA 1=2+ ~ uj 2 + 2 Z T 0 jA 1+ ~ uj 2 dt<1 (2.77) as desired. 28 For the second conclusion, dene n = inf t0 sup rt jjujj 2 + Z t 0 jAuj 2 dr + Z t 0 jjjj 2 dr>n ; (2.78) which are increasing. Note that P( n <T ) =P sup rT jjujj 2 + Z T 0 jAuj 2 dr + Z T 0 jjjj 2 drn ; (2.79) so by the regularity ofu and, as we increasen, the right hand side tends to 0. So since the probability that n remains nite in the long run is 0, we have n !1. Replacing T with n in (2.70), we have by the above bound that sup t2[0;n] jA 1=2+ ~ uj 2 + Z n 0 jA 1+ ~ uj 2 dtC(n 2 +n)<1; (2.80) and thus we have shown (2.71) upon taking expected value. Next, we will apply the Law of Large Numbers (Lemma A.2.1) to obtain a useful con- vergence result. Note that from here on out, we will use the notation a n b n to mean lim n!1 a n =b n =c6= 0 and a n b n if lim n!1 a n =b n = 1. 29 Lemma 2.3.2. Suppose u and solve (2.23)-(2.25) with u 0 2 D(A ) and u solves (2.68) with u 0 = 0. Assume 1< < 5=4. Then for any > 1, we have lim n!1 R T 0 jA 1+ u n j 2 dt E R T 0 jA 1+ u n j 2 dt = 1 a:s: (2.81) Proof. We start by noting that jA 1+ u n j 2 jA 1+ u n j +jA 1+ ~ u n j 2 =jA 1+ u n j 2 +jA 1+ ~ u n j 2 + 2jA 1+ u n jjA 1+ ~ u n j; (2.82) and similarly jA 1+ u n j 2 jA 1+ u n jjA 1+ ~ u n j 2 =jA 1+ u n j 2 +jA 1+ ~ u n j 2 2jA 1+ u n jjA 1+ ~ u n j: (2.83) Thus, after an application of the H older and Young inequalities, we will be done if we prove that R T 0 jA 1+ u n j 2 dt E R T 0 jA 1+ u n j 2 dt ! 1 a:s: (2.84) and R T 0 jA 1+ ~ u n j 2 dt E R T 0 jA 1+ u n j 2 dt ! 0 a:s: (2.85) 30 For (2.84), let k = 2+2 k R T 0 u 2 k dt and b n = P n k=1 E k . Utilizing Lemma A.1.1, we have b n n X k=1 2+2 k 12 k n X k=1 k 1+22 : (2.86) Since> 1, 1 + 2 2 >1, thus, b n diverges, i.e. b n !1. Again by Lemma A.1.1, we have 1 X n=1 Var n b 2 n 1 X n=1 44 +1 n P n k=1 1+22 k 2 1 X n=1 44 +1 n 44 +4 n 1 X n=1 1 n 3 <1: (2.87) Thus, by the Law of Large Numbers (Lemma A.2.1), we have that R T 0 jA 1+ u n j 2 dt E R T 0 jA 1+ u n j 2 dt ! 1 a.s. Now we deal with the other term. 
By Lemma A.1.1, we have E Z T 0 jA 1+ u n j 2 dtn 22 +2 (2.88) Pick 0 2 ( 1; minf; 1=4g). By lemma 2.3.1, we have that Z T 0 jA 1+ 0 ~ uj 2 dt<1 a:s: (2.89) 31 Thus, we obtain R T 0 jA 1+ ~ u n j 2 dt E R T 0 jA 1+ u n j 2 dt C 2( 0 ) n R T 0 jA 1+ 0 ~ uj 2 dt 22 +2 n = R T 0 jA 1+ ~ uj 2 dt 2 0 2 +2 n ! 0 a:s: (2.90) since 2 0 2 + 2> 2 2 2 + 2 = 0. This completes the proof. Continuing, we prove a lemma used for dealing with the noise terms. Lemma 2.3.3. Assume the same conditions as Lemma 2.3.2. For any 1 < minf2 + 2 2 ; 1g, lim n!1 n 1 R T 0 (A 1+22 u n ; P n k=1 h k d k ) R T 0 jA 1+ u n j 2 dt = 0 a:s: (2.91) and for any 2 < minf2 + 2 2 ; 5=4 + 1 g, we have lim n!1 n 2 R T 0 (A 1+22 ~ u n ; P n k=1 h k d k ) R T 0 jA 1+ u n j 2 dt = 0 in probability: (2.92) Proof. If we multiply and divide equations (2.91) and (2.92) by E R T 0 jA 1+ u n j 2 dt 22 +2 n , then by Lemma 2.3.2, we are nished once we prove that I n 1 = R T 0 A 1+22 u n ; n X k=1 h k d k ! 22 +2 1 n = n X k=1 1+22 k Z T 0 u k d k 22 +2 1 n ! 0 a:s: (2.93) 32 and I n 2 = R T 0 A 1+22 ~ u n ; n X k=1 h k d k ! 22 +2 1 n ! 0 in probability (2.94) as n!1. For I n 1 , we will again utilize the Law of Large Numbers. Let k = 1+2 k R T 0 u k d k and b n = 2+22 1 n . Then b n !1, and by Ito isometry, we have Var k =E 2 k = 2+42 k E Z T 0 u 2 k dt 1+44 k : (2.95) Together, 1 X n=1 Var n b 2 n 1 X n=1 1+44 n 4+44 2 1 n = 1 X n=1 1 32 1 n 1 X n=1 1 n 32 1 <1 (2.96) since 1 < 1. Thus, the Law of Large Number gives lim n!1 I n 1 = 0. Now, dene for any stopping time ~ k = 1+2 k Z 0 ~ u k d k ; (2.97) 33 where ~ u k = (~ u;h k ). Given our assumption on the noise, the ~ k are uncorrelated. Reusing earlier notation, letb n = 2+22 2 k . By assumptions on the parameters,b n %1. Now for any stopping time such that Var ~ k <1, we obtain X k1 Var ~ k b 2 k = X k1 2+42 k 4+44 2 2 k E Z 0 ~ u 2 k dt = X k1 2 2+2 2 k E Z 0 ~ u 2 k dt =E Z 0 jA 1+ 2 ~ uj 2 dt: (2.98) Note that 1 + 2 < 5=4. Let N be the sequence of stopping times from Lemma 2.3.1. Then by the above and Lemma 2.3.1, we have X k1 Var ~ T^ N k b 2 k <1: (2.99) Hence, by Lemma A.2.1, for each xed N, lim n!1 R T^ N 0 hA 1+2 ~ u n ; P n k=1 h k d k i 2+22 2 n = 0 (2.100) in probability. Since N is increasing, the set[ N f N >Tg is a set of full measure. Thus, we conclude lim n!1 R T 0 hA 1+2 ~ u n ; P n k=1 h k d k i 2+22 2 n = 0 (2.101) 34 in probability once we take N!1. We now state and prove the nal lemma needed to prove consistency of the estimators. Lemma 2.3.4. Assume the same conditions as Lemma 2.3.2. Suppose 2 [0; minf5=4 ; + 1g). Then lim n!1 n R T 0 hA 1+2 u n ;P n B(u)idt R T 0 hA 1+2 u n ;P n Pe 2 idt R T 0 jA 1+ u n j 2 dt = 0 a:s: (2.102) Proof. First note that by H older's inequality, we have R T 0 hA 1+2 u n ;P n B(u)idt R T 0 hA 1+2 u n ;P n Pe 2 idt R T 0 jA 1+ u n j 2 dt R T 0 jA 1+ u n j 2 dt R T 0 jA P n B(u)j 2 dt 1=2 R T 0 jA 1+ u n j 2 dt + R T 0 jA 1+ u n j 2 dt R T 0 jA P n Pe 2 j 2 dt 1=2 R T 0 jA 1+ u n j 2 dt R T 0 jA P n B(u)j 2 dt R T 0 jA 1+ u n j 2 dt ! 1=2 + R T 0 jA P n Pe 2 j 2 dt R T 0 jA 1+ u n j 2 dt ! 1=2 (2.103) Now, if we multiply and divide the right side by E R T 0 jA 1+ u n j 2 dt 1=2 and then by the Law of Large Numbers and Lemma A.1.1, we only need to show that both lim n!1 2(( +1)) n Z T 0 jA P n B(u)j 2 dt = 0 a:s: (2.104) 35 and lim n!1 2(( +1)) n Z T 0 jA P n Pe 2 j 2 dt = 0 a:s: (2.105) We rst address (2.104). 
If< 1=4, then by Lemma 2.1.3, we know that the integral is nite and bounded above independently of n, so by choice of , 2(( +1)) n ! 0. If 1=4, then choose 0 2 ( + 1; 1=4), which is nontrivial by the assumption on . Then 0 <, so Z T 0 jA P n B(u)j 2 dt 2( 0 ) n Z T 0 jA 0 P n B(u)j 2 dt: (2.106) The above integral is now nite and bounded independently of n as before, and now 2( + 1) + 2 2 0 = 2( + 1 0 )< 0 by choice of 0 , so the term still goes to 0. The proof for (2.105) is essentially the same. If < 1=4, then jA P n Pe 2 j 2 cjA 1=2 P n Pe 2 j 2 cjjjj 2 ; (2.107) which has nite time integral, so the term goes to 0, and if 1=4, then proceed the same way as for the nonlinear term. Thus, indeed, both terms go to 0, which concludes the proof. Note that replacingB(u) withB(u n ) above does not change anything. Indeed, the same bounds still apply, and in the nal inequalities, simply bound any norms of u n by u. The same can be said of Pe 2 and P n e 2 . Thus, we also have the following corollary: 36 Corollary 2.3.5. Assume the same conditions as Lemma 2.3.2. Suppose 2 [0; minf5=4 ; + 1g]. Then lim n!1 n R T 0 hA 1+2 u n ;P n B(u n )idt R T 0 hA 1+2 u n ;P n P n e 2 idt R T 0 jA 1+ u n j 2 dt = 0 a:s: (2.108) We are now ready to prove the main result for the estimator. Proof of Theorem 2.2.1 - . Consistency We use the projected equation for u n to rewrite the estimators: full n = R T 0 hA 1+2 u n ; P n k=1 h k d k i R T 0 jA 1+ u n j 2 dt ; (2.109) part n = R T 0 hA 1+2 u n ; P n k=1 h k d k i R T 0 jA 1+ u n j 2 dt + R T 0 hA 1+2 u n ;P n B(u)P n B(u n )idt R T 0 jA 1+ u n j 2 dt R T 0 hA 1+2 u n ;P n Pe 2 P n P n e 2 idt R T 0 jA 1+ u n j 2 dt ; (2.110) and lin n = R T 0 hA 1+2 u n ; P n k=1 h k d k i R T 0 jA 1+ u n j 2 dt + R T 0 hA 1+2 u n ;P n B(u)idt R T 0 jA 1+ u n j 2 dt R T 0 hA 1+2 u n ;P n Pe 2 idt R T 0 jA 1+ u n j 2 dt : (2.111) 37 Note that all of the dierences above contain the term R T 0 hA 1+2 u n ; P n k=1 h k d k i R T 0 jA 1+ u n j 2 dt : (2.112) This term goes to 0 almost surely by letting 1 = 2 = 0 in Lemma 2.3.3, which shows consistency of full n . The remaining terms in (2.110) tend to 0 almost surely by linearity in the second component of the inner products and applying Lemma 2.3.4 and Corollary 2.3.5 with = 0, which shows consistency of part n . Finally, the remaining terms in (2.111) tend to 0 by Lemma 2.3.4 with = 0. Thus, all estimators are consistent. Asymptotic Normality The proof for normality is essentially the exact same as [4]. We provide it for the sake of completion. After using (2.109) and keeping in mind Lemma 2.3.2, we just need to show lim n!1 n R T 0 hA 1+2 u n ; P n k=1 h k d k i E R T 0 jA 1+ u n j 2 dt d =; (2.113) where is a normal random variable with mean zero and variance 2( +1) 2 1 T ( +1=2) , and lim n!1 n R T 0 hA 1+2 ~ u n ; P n k=1 h k d k i E R T 0 jA 1+ ~ u n j 2 dt = 0 in probability: (2.114) 38 We already have (2.114) by applying Lemma 2.3.3 with 2 = 1. This just leaves (2.113). Dene k = 1+2 k u k and k = R T 0 2 k dt. By Lemma A.1.1, we have that E k 1+44 k Var k 1+88 k : (2.115) Let b n = P n k=1 E k . Since 1 + 4 4 <1, we have by Lemma A.1.1 that b n 2+44 n . As such, b n %1. In addition, X k1 Var k b 2 k c X k1 k 3 ; (2.116) so by the Law of Large numbers, lim n!1 P n k=1 k P n k=1 E k = 1 a:s: (2.117) Thus, by the Central Limit Theorem for stochastic integrals (Lemma A.2.2), we conclude lim n!1 R T 0 hA 1+2 u n ; P n k=1 h k d k i E R T 0 jA 1+2 u n j 2 dt 1=2 d =N (0; 1): (2.118) 39 By assumption, 1 +> and 1 + 2 > . 
So by Lemma A.1.1, we obtain E R T 0 jA 1+2 u n j 2 dt 1=2 E R T 0 jA 1+ u n j 2 dt r 2 1 T + 1 p + 1=2 1 n : (2.119) Once the ns in (2.113) and (2.119) cancel, we apply (2.118) to precisely obtain the conver- gence in distribution to a normal random variable with mean zero and variance 2( +1) 2 1 T ( +1=2) . This completes the proof. 2.3.2 Main Result for Estimators We now prove the main result for the estimators. We will write A =A 2 ; B =B 2 ; h k =h 2 k P n =P 2 n ; W t =W 2 t ; k = 2 k = 2 ; = 2 : (2.120) We recall that the eigenvalues of A are denoted n . The outline of the proofs of this section are exactly the same as the previous, so only point out key dierences. As we did in the previous section, we write for the stochastic heat equation part d +A =A dW t ; 0 = 0 (2.121) 40 and ~ for the residual d ~ +A ~ +B(u;) = 0; ~ 0 = 0 : (2.122) We start by proving similar regularity properties for (2.122). Lemma 2.3.6. Suppose ~ is a solution to (2.122) with u and being the solution to the stochastic Boussinesq equations with initial condition u 0 2 D(A 1=2+ ), < 1=4. Then for every T > 0, we have sup t2[0;T ] jA 1=2+ ~ j 2 + Z T 0 jA 1+ ~ j 2 dt<1; (2.123) and for an increasing sequence of stopping times n "1, E sup t2[0;n] jA 1=2+ ~ j 2 + Z n 0 jA 1+ ~ j 2 dt ! <1: (2.124) Proof. Symbolically, the proof is the same as Lemma 2.3.1. The main dierence comes with the nonlinear term B(u;). By Lemma 2.1.4, we have (A B(u;);A +1 ~ ) 2 jA 1+ ~ j 2 +CjAj 2 +C(juj 4 jjujj 2 jjjj 2 +jjujj 2+2 1 )jA 1 uj 2 : (2.125) 41 Then, by the already established regularity of u and and after subtracting 2 jA 1+ ~ j 2 , the remaining upper bound is integrable. Thus, we may conclude the desired regularity of ~ , proving the rst assertion. For the second, simply dene the stopping times n = inf t0 sup rt (jjujj 2 +jjjj 2 ) + Z t 0 jA 1 uj 2 +jAj 2 dr>n ; (2.126) and repeat as in the case. The next lemma is analogous to Lemma 2.3.2. Lemma 2.3.7. Suppose u and solve (2.23)-(2.25) with 0 2 D(A ) and solves (2.121) with 0 = 0. Assume 1< < 5=4. Then for any > 1, we have lim n!1 R T 0 jA 1+ n j 2 dt E R T 0 jA 1+ n j 2 dt = 1 a:s: (2.127) Proof. The proof is exactly the same as Lemma 2.3.2 with u replaced with and B(u) replaced with B(u;). We now have the Lemma analogous to Lemma 2.3.3. 42 Lemma 2.3.8. Assume the same conditions as Lemma 2.3.7. For any 1 < minf2 + 2 2 ; 1g, lim n!1 n 1 R T 0 A 1+22 n ; P n k=1 h k d k R T 0 jA 1+ n j 2 dt = 0 a:s: (2.128) and for any 2 < minf2 + 2 2 ; 5=4 + 1 g, we have lim n!1 n 2 R T 0 A 1+22 ~ n ; P n k=1 h k d k R T 0 jA 1+ n j 2 dt = 0 in probability: (2.129) Proof. Once again, the proof is the same as Lemma 2.3.3 since the and u satisfy similar equations and ~ and ~ u have the same regularity needed for this result. We now move on to the nal Lemma which is analogous to Lemma 2.3.4. Lemma 2.3.9. Assume the same conditions as Lemma 2.3.7. Suppose 2 [0; minf5=4 ; + 1g]. Then lim n!1 n R T 0 hA 1+2 n ;P n B(u;)idt R T 0 jA 1+ n j 2 dt = 0 a:s: (2.130) Proof. Although the proof is still in the same vein as Lemma 2.3.4, we will provide the details. By H older's inequality, we have R T 0 hA 1+2 n ;P n B(u;)idt R T 0 jA 1+ n j 2 dt R T 0 jA P n B(u;)j 2 dt R T 0 jA 1+ n j 2 dt ! 1=2 : (2.131) 43 If we multiply and divide the right side by E R T 0 jA 1+ n j 2 dt 1=2 and then consider the Law of Large Number and Lemma A.1.1, then we only need to show lim n!1 2(( +1)) n Z T 0 jA P n B(u;)j 2 dt = 0 a:s: (2.132) Consider two cases for . 
If < 1=4, then by Lemma 2.1.4, we know that the integral in (2.132) is nite and bounded above independently ofn. Then, by choice of, 2(( +1)) n ! 0 as n ! 1, and (2.132) is shown. On the other hand, if 1=4, then pick 0 2 ( + 1; 1=4), which is a nontrivial interval by the assumption on . Then, as 0 <, we have jA P n B(u;)j 2 2( 0 ) n jA 0 P n B(u;)j 2 C 2( 0 ) n Z T 0 CjAj 2 +C(juj 4 jjujj 2 jjjj 2 +jjujj 2+2 1 )jA 1 uj 2 dt; (2.133) where the above integral is nite by the regularity of u and . Now, we have 2( ( + 1)) + 2( 0 ) = 2( + 1 0 )< 0 (2.134) by choice of 0 . Thus, we also have (2.132) in this case, and the proof is complete. 44 As before, we can replace B(u;) with B(u n ; n ) and obtain the same conclusion since any norm of u n or n is bounded above by the same norm of u or respectively. We thus have the following corollary: Corollary 2.3.10. Assume the same conditions as Lemma 2.3.7. Suppose 2 [0; minf5=4 ; + 1g]. Then lim n!1 n R T 0 hA 1+2 n ;P n B(u n ; n )idt R T 0 jA 1+ n j 2 dt = 0 a:s: (2.135) We are now ready to prove Theorem 2.2.1. Proof of Theorem 2.2.1 - . Consistency We use the projected equation for n to rewrite the estimators: full n = R T 0 hA 1+2 n ; P n k=1 h k d k i R T 0 jA 1+ n j 2 dt ; (2.136) part n = R T 0 hA 1+2 n ; P n k=1 h k d k i R T 0 jA 1+ n j 2 dt + R T 0 hA 1+2 n ;P n B(u;)P n B(u n ; n )idt R T 0 jA 1+ n j 2 dt ; (2.137) 45 and lin n = R T 0 hA 1+2 n ; P n k=1 h k d k i R T 0 jA 1+ n j 2 dt + R T 0 hA 1+2 n ;P n B(u;)idt R T 0 jA 1+ n j 2 dt : (2.138) Note that all of the dierences above contain the term R T 0 hA 1+2 n ; P n k=1 h k d k i R T 0 jA 1+ n j 2 dt : (2.139) This term goes to 0 almost surely by letting 1 = 2 = 0 in Lemma 2.3.8, which shows consistency of full n . The remaining terms in (2.137) tend to 0 almost surely by expanding the sums in the inner products and apply Lemma 2.3.9 and Corollary 2.3.10 with = 0, which shows consistency of part n . Finally, the remaining terms in (2.138) tend to 0 by Lemma 2.3.9 with = 0. Thus, all estimators are consistent. Asymptotic Normality The proof for normality is essentially the exact same as [4]. We provide it for the sake of completion. After using (2.136) and keeping in mind Lemma 2.3.7, we just need to show lim n!1 n R T 0 A 1+2 n ; P n k=1 h k d k E R T 0 jA 1+ n j 2 dt d =; (2.140) 46 where is a normal random variable with mean zero and variance 2( +1) 2 1 T ( +1=2) , and lim n!1 n R T 0 D A 1+2 ~ n ; P n k=1 h k d k E E R T 0 jA 1+~ n j 2 dt = 0 in probability: (2.141) We already have (2.141) by applying Lemma 2.3.8 with 2 = 1. This just leaves (2.140). Dene k = 1+2 k k and k = R T 0 2 k dt. By Lemma A.1.1, we have that E k 1+44 k Var k 1+88 k : (2.142) Let b n = P n k=1 E k . Since 1 + 4 4 <1, we have by Lemma A.1.1 that b n 2+44 n . As such, b n %1. In addition, X k1 Var k b 2 k c X k1 k 3 ; (2.143) so by the Law of Large numbers, lim n!1 P n k=1 k P n k=1 E k = 1 a:s: (2.144) 47 Thus, by the Central Limit Theorem for stochastic integrals, we conclude lim n!1 n R T 0 A 1+2 n ; P n k=1 h k d k E R T 0 jA 1+ n j 2 dt 1=2 d =N (0; 1): (2.145) By assumption, > 1=2 and 1 + 2 > . So by Lemma A.1.1, we obtain E R T 0 jA 1+2 n j 2 dt 1=2 E R T 0 jA 1+ n j 2 dt r 2 1 T + 1 p + 1=2 1 n : (2.146) Once the ns in (2.140) and (2.146) cancel, we apply (2.145) to precisely obtain the conver- gence in distribution to a normal random variable with mean zero and variance 2( +1) 2 1 T ( +1=2) . This completes the proof. 
48 2.4 Boussinesq Equations with > 0, = 0 2.4.1 Overview In the context of the previous sections, we can also consider the parameter estimation prob- lem for the 2D stochastic Boussinesq equations with zero diusivity ( = 0) and noise only in the velocity equation du + (Au +B(u)Pe 2 )dt =A dW t (2.147) d dt + (ur) = 0 (2.148) u(0) =u 0 ; (0) = 0 : (2.149) We will reuse notation from the subsection on the proof of the estimators. Namely, A is the Stokes operator, B is the bilinear form P (ur)v, and dW t is the noise term. 49 Due to some technical details in the proof of consistency of the estimators, we now consider these equations in a periodic domain, and redene the function spaces H, V , and D(A) accordingly. LetO = [L=2;L=2] for L> 0 and consider the boundary conditions u(x +Le j ;t) =u(x;t); for all x2R 2 ;t 0; Z O u(x)dx = 0 (2.150) (x +Le j ;t) =(x;t); for all x2R 2 ;t 0; Z O (x)dx = 0 (2.151) We then dene the corresponding function spaces H = u2L 2 per (O) 2 :ru = 0; Z O u(x)dx = 0 and V = u2H 1 per (O) 2 :ru = 0; Z O u(x)dx = 0 : We use the same notation for the normsjj andjjjj respectively. The spaces for will be the usual scalar L p and H k spaces of periodic functions. P n is the projection of H onto H n , the span of the rst n eigenfunctions of A. The equations in this form have been studied in, for example [30] and [34]. In either one, we require > 3=2. We take their results as the following proposition: 50 Proposition 2.4.1. Suppose u and solve (2.147) - (2.148) with u 0 2 D(A) and 0 2 H 2 and > 3=2. Then u2C([0;1);D(A))\L 2 loc ([0;1);D(A 3=2 )) and 2L 1 loc ([0;1);H 2 ). Since 3=2 > 5=4, we may not apply the same exact proof as we did in the Dirichlet boundary case, as it was critical that < 5=4. The same general outline will apply, however, the exact details will dier. We will need the following lemma for the nonlinear term that applies in the case of periodic boundary conditions. Lemma2.4.2. Whenever > 1=2, ifu2D(A ) andv2D(A +1=2 ), thenB(u;v)2D(A ), and jA B(u;v)j 2 CjA uj 2 jA +1=2 vj 2 : (2.152) The proof can be found in Lemma 10.4 in the book [5]. We now move on to the parameter estimation problem. With the regularity just estab- lished, the classes of estimators for will be practically identical to those in the previous case of positive diusivity. Our observations are assumed to the nth Fourier modes of the velocity u n and the temperature n over the interval [0;T ] for some T > 0. Since we are no longer concerned with the temperature equation, we project (2.147) onto H n =R n via P n du n + (Au n +P n B(u)P n Pe 2 )dt =P n A dW t ; u n 0 =P n u 0 : (2.153) 51 Let P n;T be the probability measure induced by u n on C([0;T ];R n ). By xing a reference parameter 0 , we then formally apply the Girsanov formula to obtain the formal log-likelihood ratio log dP n;T dP n;T 0 = ( 0 ) Z T 0 hP n A 2 du n ;Au n i 1 2 ( 2 2 0 ) Z T 0 hP n A 2 Au n ;Au n ids ( 0 ) Z T 0 hP n A 2 P n B(u);Au n ids ( 0 ) Z T 0 hP n A 2 P n (Pe 2 );Au n ids: (2.154) We maximize this expression with respect to to obtain n = R T 0 (Au n ;P n A 2 du n ) + R T 0 (P n A 2 (B(u)Pe 2 );Au n )dt R T 0 jA +1 u n j 2 dt : (2.155) As in the previous sections, following [4], we replace all instances of with a hyperparameter whose range will be mentioned later. As this estimator requires full knowledge of u and , we will denote it full n . 
full n = R T 0 (Au n ;P n A 2 du n ) + R T 0 (P n A 2 (B(u)Pe 2 );Au n )dt R T 0 jA +1 u n j 2 dt : (2.156) 52 In practice, we will not have full knowledge of the solution, so we consider another class of estimators by replacing u with u n and with n . part n = R T 0 (Au n ;P n A 2 du n ) + R T 0 (P n A 2 (B(u n )P n e 2 );Au n )dt R T 0 jA +1 u n j 2 dt : (2.157) Finally, to avoid issues with computing the nonlinear term in general, we can also consider the estimators corresponding to the Stokes equation. lin n = R T 0 (Au n ;P n A 2 du n ) R T 0 jA +1 u n j 2 dt : (2.158) We then have the analogous result to the case with nonzero diusivity. Theorem 2.4.3. Suppose u and solve (2.147) - (2.148) with 3=2 < < 2 and (u 0 ; 0 )2 D(A )H 2 for > 1. Then for > 1=2, the estimators full n , part n , and lin n of are consistent, and moreover, we have n( full n ) d !; (2.159) where is a normal random variable with mean 0 and variance 2( +1) 2 1 T ( +1=2) . 53 2.4.2 Proof of the Main Result As mentioned before, the proof outline closely follows the case with full viscosity. Due to the periodic boundary conditions, the specic details are slightly dierent. As such, we simply emphasize the dierences in the proofs of the following Lemmas and Propositions and refer back to the analogous ones in the previous sections for the rest of the details. We will introduce the same notation used before. Write u = u + ~ u where u solves the stochastic Stokes equation with initial condition 0, and ~ u satises the residual d dt ~ u +A~ u +B(u)Pe 2 = 0; ~ u(0) =u 0 : (2.160) We have the following regularity for ~ u. Proposition 2.4.4. Suppose 1. Then for every T > 0, we have sup t2[0;T ] jA +1=2 ~ uj 2 + Z T 0 jA +1 ~ uj 2 dt<1: (2.161) Moreover, there exists a sequences of stopping times n %1 so that E sup t2[0;n] jA +1=2 ~ uj 2 + Z n 0 jA +1 ~ uj 2 dt ! <1 (2.162) 54 Proof. We give the a priori estimates. Multiply (2.160) by A 2+1 ~ u and integrate to obtain 1 2 d dt jA +1=2 ~ uj 2 +jA +1 ~ uj 2 =(A B(u);A +1 ~ u) + (A Pe 2 ;A +1 ~ u): (2.163) Since 1, we have (A Pe 2 ;A +1 ~ u)Cjj 2 H 2 + 4 jA +1 ~ uj 2 : (2.164) Now, let 0 = maxf; =3g so that 1=2< 0 1. We can then apply Lemma 2.4.2 to obtain j(A B(u);A +1 ~ u)jCjA 0 B(u)jjA +1 ~ ujCjA 0 uj 2 jA 0 +1=2 uj 2 + 4 jA +1 ~ uj 2 : (2.165) Combining the above estimates and integrating from 0 to T , we may use the regularity of u and to conclude (2.161), since 0 + 1=2 3=2. For (2.162), we let n = inf t0 sup t 0 t jA 0 +1=2 ~ uj 2 + Z t 0 jA 0 +1 ~ uj 2 dt + sup t 0 t jj 2 H 2 >n ; (2.166) and argue as in the proof of Lemma 2.3.1. The following lemma carries over with the almost the same proof. 55 Lemma 2.4.5. Suppose u and solve (2.147)-(2.148) with u 0 2D(A ) and u solves (2.68) with u 0 = 0. Assume 3=2< < 2. Then for any > 1, we have lim n!1 R T 0 jA 1+ u n j 2 dt E R T 0 jA 1+ u n j 2 dt = 1 a:s: (2.167) Sketch of Proof. The proof follows Lemma 2.3.2 until we prove R T 0 jA 1+ ~ uj 2 dt E R T 0 jA 1+ u n j 2 dt ! 0 a:s: (2.168) Let 0 2 ( 1; minf; 1g). Then we can apply Lemma 2.4.4 to conclude Z T 0 jA 1+ 0 ~ uj 2 dt<1: (2.169) Then, since E Z T 0 jA 1+ u n j 2 dtn 22 +2 ; and 0 + 1 = 0 + 1< 0, we conclude R T 0 jA 1+ ~ uj 2 dt E R T 0 jA 1+ u n j 2 dt c n 22 0 R T 0 jA 1+ 0 ~ uj 2 dt n 22 +2 =c R T 0 jA 1+ 0 ~ uj 2 dt n 2 0 2 +2 ! 0 a:s: (2.170) 56 We also have: Lemma 2.4.6. Assume > 1. 
For any 1 < minf2 + 2 2 ; 1g, lim n!1 n 1 R T 0 (A 1+22 u n ; P n k=1 h k d k ) R T 0 jA 1+ u n j 2 dt = 0 a:s: (2.171) and for any 2 1, we have lim n!1 n 2 R T 0 (A 1+22 ~ u n ; P n k=1 h k d k ) R T 0 jA 1+ u n j 2 dt = 0 in probability. (2.172) The proof is essentially the same as Lemma 2.3.3. However, note that the range of 2 values is dierent. As we saw earlier, we will need to conclude R T 0 jA 1+ 2 ~ uj 2 dt<1, which means we must have 1+ 2 2. We need 2 1 for the proof of asymptotic normality, so this inequality can only be satised when < 2; this is the reason we imposed this restriction on . Moreover, We can not allow = 2 due to the proof of the next lemma. The proof also requires 2 + 2 2 2 > 0, which is achieved since > 1=2 and 2 1. As before, we get the same corollary by letting 1 = 2 = 0 and also by replacing u with u n . Our nal lemma concerns the nonlinear terms. We just consider the case = 0 for simplicity compared to a range of values as in Lemma 2.3.4. 57 Lemma 2.4.7. Assume > 1=2. Then lim n!1 R T 0 hA 1+2 u n ;P n B(u)idt R T 0 hA 1+2 u n ;P n Pe 2 idt R T 0 jA 1+ u n j 2 dt = 0 a:s: (2.173) Sketch of Proof. As in Lemma 2.3.4, we need to show 2( +1) n Z T 0 jA P n B(u)j 2 dt! 0 a:s: (2.174) and 2( +1) n Z T 0 jA P n Pe 2 j 2 dt! 0 a:s: (2.175) We start with (2.174). If < 1, pick 0 2 (maxf1=2;g; 1), then Z T 0 jA P n B(u)j 2 dt Z T 0 jA 0 P n B(u)j 2 dt<1 (2.176) since 0 + 1=2 < 3=2. Moreover,2( + 1) < 0, hence we have (2.174). If > 1, let 0 2 ( 1; 1] which is nontrivial since 1< 1. Then we have Z T 0 jA P n B(u)j 2 dtCn 2( 0 ) Z T 0 jA 0 P n B(u)j 2 dt<1; (2.177) 58 and since 0 > 1, we have( + 1) + 0 < 0, and we may conclude (2.174) for all . We now deal with the next term. If 1, then the integral in (2.175) is nite and we already have2( + 1)< 0, hence (2.175) is shown. If > 1, let 0 2 ( 1; 1] which is nontrivial since 1< 1. Then we have Z T 0 jA P n Pj 2 dtCn 2( 0 ) Z T 0 jAP n Pj 2 dt<1; (2.178) and since 0 > 1, we have( + 1) + 0 < 0, and we may conclude (2.175) for all . We can replace u and with u n and n respectively to achieve the same result. Proof of Main Result. Now that all the analogous Lemmas and Propositions are estab- lished, the proof of the main result is exactly the same as in the other estimators. 2.5 3D Equations In this section, we explore both the three-dimensional Navier Stokes and Boussinesq equa- tions. We will focus on the Navier Stokes system and explain how to adapt this to the Boussinesq equations. 59 2.5.1 3D Navier Stokes Equations We will utilize previously established notation (e.g. Stokes operator, function spaces, etc.). The 3D Navier Stokes equations are du + (Au +B(u))dt =A dW t ; u(0) =u 0 : (2.179) From [15], for example, we have the following regularity. Proposition 2.5.1. There exists a unique process u and a strictly positive stopping time such that u1 t 2L 2 ( ;L 2 loc ([0;1);D(A))); u(^)2L 2 ( ;C([0;1);V )); (2.180) and u satises (2.179). We will assume periodic boundary conditions. It is not known if =1 almost surely; indeed, this problem is of great interest to the community, and even in the deterministic setting, it is unknown if solutions exist for all time and are unique. In order to carry out the parameter estimation analysis, we must then make 60 the following assumption: Assumption 1: There exists a deterministic time T > 0 such that u2L 2 loc ([0;T );D(A))\C([0;T );V ): With this, we must then slightly alter our assumption on the observed Fourier modes: Assumption 2: u n is observed on [0;T ] for some time T <T . 
With these assumptions, we can carry out the mathematics as before. However, practi- cally speaking, this would require one to actually know the deterministic time T to ensure that they "stop observing" before then. This is avoided in two dimensions since we know =1 almost surely, and thus, one can stop observing at any timeT > 0 and be reasonably condent that the data will satisfy the SPDE. Assumptions like these are likely the reason why the parameter estimation problem has been avoided for equations in which it is unknown if the existence is global. With Assumption 1, we can prove additional regularity of u. Proposition 2.5.2. Let > 5=4. For 3=4< < 1=2 and u 0 2D(A ), we have u2L 2 loc ([0;T ];D(A +1=2 ))\C([0;T ];D(A )) a:s: (2.181) 61 As before, < 1=2 so that A 2L 2 (H;D(A )) and so u2L 2 loc ([0;1);D(A +1=2 )). But now, has a lower bound of 3=4. Because of this, we now have > 5=4 so that (3=4; 1=2) is nontrivial. This is due to the following Lemma which is exactly Lemma 2.4.2 in three dimensions (see [5]). Lemma 2.5.3. For > 3=4, we have jA B(u;v)j 2 CjA uj 2 jA +1=2 vj 2 : (2.182) Proof of Proposition 2.5.2. Write u = u + ~ u as before, where u solves the stochastic Stokes equation with u(0) =u 0 and ~ u solves the residual d dt ~ u +A~ u +B( u + ~ u) = 0; ~ u(0) = 0: (2.183) Take the inner product of this equation with A 2 ~ u for > 3=4. Then by Lemma 2.5.3, we have 1 2 d dt jA ~ uj 2 +jA +1=2 ~ uj 2 = (A B(u);A ~ u) C(jA uj 2 +jA ~ uj 2 )jA ~ uj 2 + 2 jA +1=2 ~ uj 2 +CjA +1=2 uj 2 : (2.184) 62 Since < 1=2, u 2 L 2 loc ([0;1);D(A +1=2 )). Therefore, if ~ u 2 L 2 loc ([0;T ];D(A )), then using Gronwall's inequality with the above bound gives the desired regularity. By our assumption 1 in this section, we know this is true for any2 [3=4; 1). Thus, we may continue inductively. With this regularity, we can repeat the same heuristic derivation of the classes of estima- tors for the 3D stochastic Navier Stokes equations. We obtain full n = R T 0 (Au n ;P n A 2 du n ) + R T 0 (P n A 2 B(u);Au n )dt R T 0 jA +1 u n j 2 dt ; (2.185) part n = R T 0 (Au n ;P n A 2 du n ) + R T 0 (P n A 2 B(u n );Au n )dt R T 0 jA +1 u n j 2 dt ; (2.186) and lin n = R T 0 (Au n ;P n A 2 du n ) R T 0 jA +1 u n j 2 dt : (2.187) Apart from the above regularity issues, thee asymptotics of the eigenvalues of the Stokes operator are slightly dierent as well. In three dimensions, we have k 1 k 2=3 : (2.188) See [5] for the proof. Ultimately, we still get the desired result, i.e. consistency of the estimators and asymptotic normality. However, we must account for this change in growth 63 rate in our proofs. In particular, while the asymptotics in Lemma A.1.1 hold at the level of the eigenvalues n , when writing in terms of n, we must raise each to the power of 2=3. We record them here for reference. Lemma 2.5.4. Suppose u solves the stochastic Stokes equation d u +A u =A dW t ; u 0 = 0: (2.189) Then we have the following. (i) E Z T 0 u 2 k dt T (1+2 ) k 2 T (1+2 ) 1 2 k 2 3 (1+2 ) (2.190) and Var Z T 0 u 2 k dt (3+4 ) k k 2 3 (3+4 ) : (2.191) (ii) For > , E Z T 0 jA u n j 2 dt T 22 1 1 4 3 (2 2 + 1 2 ) n 2 3 (22 + 1 2 ) : (2.192) We the above in mind, we state our main result. Theorem 2.5.5. Let > 5=4 and assume that u 0 2 D(A ) for some > 1=2. If > 1, then full n , part n , and lin n are consistent estimators of . If > 5=8, 64 then n 5=6 ( full n ) d ! where is a Gaussian random variable with mean 0 and variance 4 3 1 T ( +5=4) 2 +5=8 . 
2.5.2 Proof of the Main Result The proof of the main result follows the same outline as before. We must now pay mind, however, to the fact that > 5=4, the requirements for Lemma 2.5.3, and the asymptotics from Lemma 2.5.4. We start by writing u = u + ~ u where u solves d u +A u =A dW t ; u(0) = 0 (2.193) and ~ u solves d dt ~ u +A~ u +B(u) = 0; ~ u(0) =u 0 : (2.194) We have the following Proposition 2.5.6. Suppose < 1=2, > 5=4 and that ~ u solves (2.194). Then for any T2 [0;T ), we have sup t2[0;T ] jA +1=2 ~ uj 2 + Z T 0 jA +1 ~ uj 2 dt<1: (2.195) 65 Moreover, there exists an increasing sequence of stopping times n such that E sup t2[0;n] jA +1=2 ~ uj 2 + Z n 0 jA +1 ~ uj 2 dt ! <1: (2.196) Proof. Multiplying equation (2.194) by A 2+1 ~ u and integrating gives 1 2 d dt jA +1=2 ~ uj 2 +jA +1 ~ uj 2 =(A B(u);A +1 ~ u): (2.197) Dene 0 = maxf: 3 5 g. Then 3=4< 0 < 1=2. This gives, by Lemma 2.5.3 j(A B(u);A +1 ~ u)jjA B(u)jjA +1 ~ uj CjA 0 B(u)jjA +1 ~ uj CjA 0 ujjA 0 +1=2 ujjA +1 ~ uj cjA 0 uj 2 jA 0 +1=2 uj 2 + 2 jA +1 ~ uj 2 : (2.198) By Proposition 2.5.2, for T <T , Z T 0 jA 0 ujjA 0 +1=2 ujdt<1; (2.199) hence we have equation (2.195) after applying Gronwall's inequality. 66 Dene the stopping time n as follows n = inf t0 sup t 0 t jA 0 uj 2 + Z t 0 jA 0 +1=2 uj 2 dt 0 >n : (2.200) We see n is increasing (once we've passedn+1, we must have already passedn either earlier or at the same time). Moreover, due to the regularity of u, we know that lim n!1 n T almost surely. Using the bounds ofj(A B(u);A +1 ~ u)j above, we then see sup t2[0;n] jA +1=2 ~ uj 2 + Z n 0 jA +1 ~ uj 2 dtC Z n 0 jA 0 uj 2 jA 0 +1=2 uj 2 dtCn 2 : a:s: (2.201) Since the upper bound is deterministic, it of course has nite expected value. This completes the proof. Continuing, we also have the next result as a consequence of the Law of Large Numbers. Lemma 2.5.7. Supposeu solves the 3D stochastic Navier Stokes equations withu 0 2D(A ) and u solves the stochastic Stokes equation with u 0 = 0. Assume > 5=4. Then for any > 1, we have lim n!1 R T 0 jA 1+ u n j 2 dt E R T 0 jA 1+ u n j 2 dt = 1 a:s: (2.202) 67 Proof. As before, we just need to prove R T 0 jA 1+ u n j 2 dt E R T 0 jA 1+ u n j 2 dt ! 1 a:s: (2.203) and R T 0 jA 1+ ~ u n j 2 dt E R T 0 jA 1+ u n j 2 dt ! 0 a:s: (2.204) For the rst term, set k = 2+2 k R T 0 u 2 k (t)dt and b k = P k j=1 E j . By Lemma 2.5.4, we have b k k X j=1 2+2 j (1+2 ) j k X j=1 j 2 3 (22 +1) : (2.205) This sum diverges since, by choice of , 2 3 (2 2 + 1) > 2 3 , i.e. b k !1. From this, in conjunction with Lemma 2.5.4 and the partial sum formula for the series P n k=1 k a , we obtain X k1 Var k b 2 k X k1 44 k (3+4 ) k k 4 3 (22 + 5 2 ) X k1 k 2 3 (44 +1) k 4 3 (22 + 5 2 ) = X k1 k 8 3 <1: (2.206) Thus, we may apply Lemma A.2.1 to conclude (2.203). 68 For the next term, let 0 2 ( 1; minf; 1=2g), which is nontrivial since > 1. Then by Proposition 2.5.6, we have Z T 0 jA 1+ 0 ~ uj 2 dt<1: (2.207) Combining this with Lemma 2.5.4, we obtain R T 0 jA 1+ ~ u n j 2 dt E R T 0 jA 1+ u n j 2 dt C n 4 3 ( 0 ) n 2 3 (22 + 5 2 ) Z T 0 jA 1+ 0 ~ u n j 2 dt C n 4 3 ( 0 ( 1)+ 3 2 ) Z T 0 jA 1+ 0 ~ uj 2 dt! 0 a:s: (2.208) as 0 ( 1)> 0. Thus, we conclude (2.204). We continue our sequence of Lemmas. Lemma 2.5.8. Assume the same conditions as Lemma 2.5.7. 
For any 1 < minf 4 3 4 3 + 5 3 ; 5 6 g, lim n!1 n 1 R T 0 (A 1+22 u n ; P n k=1 h k d k ) R T 0 jA 1+ u n j 2 dt = 0 a:s: (2.209) and for any 2 < minf 4 3 4 3 + 5 3 ; 4=3g, we have lim n!1 n 2 R T 0 (A 1+22 ~ u n ; P n k=1 h k d k ) R T 0 jA 1+ u n j 2 dt = 0 (2.210) 69 in probability. Proof. By utilizing Lemma 2.5.7, we are done once we prove both n 1 R T 0 (A 1+22 u n ; P n k=1 h k d k ) R T 0 jA 1+ u n j 2 dt P n k=1 1+2 k u k d k n 2 3 (22 + 5 2 ) 1 (2.211) and n 2 R T 0 (A 1+22 u n ; P n k=1 h k d k ) R T 0 jA 1+ u n j 2 dt R T 0 (A 1+2 ~ u n ; P n k=1 h k d k ) n 2 3 (22 + 5 2 ) 2 (2.212) converge to 0 (almost surely and in probability respectively). For the rst time, dene k = 1+2 k R T 0 u k d k and b k = (22 + 5 2 3 2 1 ) k . By choice of 1 , b k !1. In addition, by the Ito isometry and Lemma 2.5.4, Var k =E 2 k 2+44 k (1+2 ) k = 1+44 k : (2.213) Together, we see X k1 Var k b 2 k X k1 1+44 k (44 +53 1 ) k = X k1 1 43 1 k X k1 1 k 2 3 (43 1 ) <1 (2.214) as 1 < 5=6. Thus, we may apply the Law of Large Numbers to conclude (2.211). 70 We now move onto (2.210). For ~ u k = (~ u;h k ) and a stopping time , dene ~ k = 1+2 k Z 0 ~ u k d k : (2.215) Let b k = 22 + 5 2 3 2 2 k , which increases to innity since 2 < 4 3 4 3 + 5 3 . For such that Var ~ k <1, we see by Ito isometry that X k1 Var ~ k b 2 k = X k1 42 +2 k 44 +53 2 k E Z 0 ~ u 2 k dt = X k1 3+2 +3 2 k E Z 0 ~ u 2 k dt =E Z 0 jA 3 2 + 3 2 2 ~ uj 2 dt: (2.216) Note that since 2 < 4 3 , 3 2 + 3 2 2 < + 1 2 , so we may apply Lemma 2.5.6. By taking = N ^T as in (2.196), we have X k1 Var[ ~ N ^T k ] b 2 k <1: (2.217) Thus, for each N, we have by the Law of Large Numbers (Lemma A.2.1) that R N ^T 0 A 1+22 ~ u n ; n X k=1 h k d k ! 22 +2 2 n ! 0 in probability: (2.218) 71 Since N increases past T (as T < T ), the set ~ =[ N f N > Tg is a set of full measure. Thus, we can take N!1 in the above convergence and recover the desired result. We now move onto the nonlinear term. Lemma 2.5.9. Assume the same conditions as Lemma 2.5.7. Suppose 2 [0; minf1=2; 2 3 ( + 5 4 )g]. Then lim n!1 n R T 0 hA 1+2 u n ;P n B(u)idt R T 0 jA 1+ u n j 2 dt = 0 a:s: (2.219) Proof. By H older's inequality, we have R T 0 hA 1+2 u n ;P n B(u)idt R T 0 jA 1+ u n j 2 dt R T 0 jA P n B(u)j 2 dt R T 0 jA 1+ u n j 2 dt ! 1=2 (2.220) Using Lemma 2.5.7 and Lemma 2.5.4, we are done if we show lim n!1 n 2( 1 3 (22 + 5 2 )) Z T 0 jA P n B(u)j 2 dt = 0 a:s: (2.221) 72 We have two dierent cases. Suppose < 1=2. Let 2 (maxf; 3 4 g; 1=2). Note that 1=2> 3=4 since > 5=4. Thus, we can apply Lemma 2.5.3 and Proposition 2.5.2 to conclude Z T 0 jA P n B(u)j 2 dtC Z T 0 jA P n B(u)j 2 dtC Z T 0 jA uj 2 jA +1=2 uj 2 dt<1 a:s: (2.222) Since 2 < 2 3 ( + 5 4 ), we have 1 3 (2 2 + 5 2 )< 0. Thus, we conclude (2.219). Now, suppose 1=2. Let 0 2 (maxf 3 2 + 5 4 ; 3=4g; 1=2). Since< 1=2 and 3=4< 1=2, this interval is nontrivial. We then similarly have Z T 0 jA P n B(u)j 2 dt 2( 0 ) n Z T 0 jA 0 P n B(u)j 2 dt C 2( 0 ) n Z T 0 jA 0 uj 2 jA 0 +1=2 uj 2 dt<1 a:s: (2.223) Since 2 2 3 (2 2 + 5 2 ) + 4 3 4 3 0 < 0, we also get convergence to 0, and hence we are done. We are ready to prove the result. Proof of Theorem 2.5.5. 
Using the projected equation for u n , we have full n = R T 0 hA 1+2 u n ; P n k=1 h k d k i R T 0 jA 1+ u n j 2 dt ; (2.224) 73 part n = R T 0 hA 1+2 u n ; P n k=1 h k d k i R T 0 jA 1+ u n j 2 dt + R T 0 hA 1+2 u n ;P n B(u)P n B(u n )idt R T 0 jA 1+ u n j 2 dt ; (2.225) and lin n = R T 0 hA 1+2 u n ; P n k=1 h k d k i R T 0 jA 1+ u n j 2 dt + R T 0 hA 1+2 u n ;P n B(u)idt R T 0 jA 1+ u n j 2 dt : (2.226) The right hand side of (2.224) goes to 0 by Lemma 2.5.8 by letting 1 = 2 = 0. The right hand side of (2.225) by the previous argument and Lemma 2.5.9 by letting = 0 and noting that the same lemma holds upon replacing u by u n . The right hand side of (2.226) also follows from this same argument. This proves the consistency. The proof of asymptotic normality is similar to before. Let k = 1+2 k u k , k = R T 0 2 k dt. Then we haveE k 1+44 k , and Var k 1+88 k . Letb k = P k j=1 E j . Since> 5 8 , we have 2 3 (1 + 4 4 ) > 2 3 (1 + 4 4 5 2 ) =1. Hence, we have b k k 2 3 (1+44 )+1 = k 2 3 (44 + 5 2 ) and b k !1. In addition, X k1 Var k b 2 k X k1 1 k 8=3 <1: (2.227) So by the Law of Large Numbers, we have lim n!1 P n k=1 k P n k=1 E k = 1 a:s:; (2.228) 74 and hence by the Central Limit Theorem, Lemma A.2.2, we conclude lim n!1 R T 0 A 1+2 u n ; P k1 h k d k E R T 0 jA 1+2 u n j 2 dt 1=2 d !N (0; 1): (2.229) In addition, we may apply Lemma 2.5.9 with = 5=6 to conclude lim n!1 n 5 6 R T 0 A 1+2 ~ u n ; P k1 h k d k E R T 0 jA 1+ u n j 2 dt ! 0 (2.230) in probability. Finally, since 1 +> and 1 + 2 > , we use Lemma 2.5.4 to see that E R T 0 jA 1+2 u n j 2 dt 1=2 E R T 0 jA 1+ u n j 2 dt 2 r 3T 1 ( + 5=4) p + 5=8 1 n 5=6 : (2.231) Combining all of the above convergence results, we may conclude n 5=6 lim n!1 n 5 6 R T 0 A 1+2 u n ; P k1 h k d k E R T 0 jA 1+ u n j 2 dt d !; (2.232) where is a Gaussian random variable with mean 0 and variance 4 3 1 T ( +5=4) 2 +5=8 . 75 2.5.3 3D Boussinesq Equations We will now discuss the 3D stochastic Boussinesq equations with positive viscosity and diusivity. Namely, using the notation from Chapter 2, du + (A 1 u +B 1 (u)Pe 3 )dt =A 1 dW 1 t (2.233) d + (A 2 +B 2 (u;))dt =A 2 dW 2 t (2.234) u(0) =u 0 ; (0) = 0 : (2.235) Note that for simplicity, we let 1 = 2 and denote it . In Chapter 2, we considered the case of a bounded domain in two dimensions. For the Boussinesq system, we only have the local existence of strong solutions in all ofR 3 . Thus, we are essentially in the same situation as the previous section with the 3D Navier Stokes. That is, in a domain without boundary. We can still make use of Lemma 2.5.3, and thus the proof is not so dierent. We consider, in particular, the result of [8]. Proposition2.5.10. For (u 0 ; 0 )2D(A 1 )D(A 2 ). Then there exists a unique progressively measurable solution (u;) to (2.233)-(2.234) such that for all 0<T <1, (u;)2L 1 ([0;T ];V 1 V 2 )\L 2 ([0;T ];D(A 1 )D(A 2 )) a:s:; (2.236) 76 and there exists a unique stopping time such that (u;)2C([0;(!)];V 1 V 2 ) (2.237) for almost all !. Since it is not known that this stopping time is innite almost surely, we must make the same assumption as we did with the 3D Navier Stokes equations. Namely, there exists a deterministicT such that (u;)2C([0;T );V 1 V 1 2\L 2 loc ([0;T );D(A 1 )D(A 2 )) almost surely. With this assumption, we also obtain slightly improved regularity as we did for the Navier Stokes equations. Proposition 2.5.11. 
For 3=4< < 1=2, (u 0 ; 0 )2D(A 1 )D(A 2 ), we have u2L 2 loc ([0;T ];D(A +1=2 1 ))\C([0;T ];D(A 1 )); 2L 2 loc ([0;T ];D(A +1=2 2 ))\C([0;T ];D(A 2 )) (2.238) Sketch of Proof. This proof is again similar to its counterparts in previous sections. Writ- ingu = u+ ~ u and = + ~ where the rst summands solve the stochastic heat-type equation with initial condition u 0 and 0 respectively and the latter summands solve the respective 77 residual equations with 0 initial condition. If > 3=4, then multiplying the equation for ~ u by A 2 1 ~ u and utilizing Lemma 2.5.3 gives 1 2 d dt jA 1 ~ uj 2 +jA +1=2 1 ~ uj 2 = (A 1 B(u);A 1 ~ u) + (A 1 Pe 3 ;A 1 ~ u) CjA 1 ujjA +1=2 1 ujjA 1 ~ uj +CjA 2 jjA 1 ~ uj C(1 +jA 1 ~ uj 2 +jA 1 uj 2 )jA 1 ~ uj 2 + 2 jA +1=2 1 ~ uj 2 +CjA +1=2 1 uj 2 +CjA 2 j 2 +CjA 2 ~ j 2 : (2.239) Simultaneously, taking the inner product of the equation for ~ with A 2 2 ~ and using Lemma 2.5.3, we similarly obtain 1 2 d dt jA 2 ~ j 2 +jA +1=2 2 ~ j 2 2 jA +1=2 2 ~ j 2 +CjA +1=2 1 j 2 +CjA 1 uj 2 jA 1 j 2 +CjA 1 ~ uj 2 jA 1 j 2 +C(jA 1 uj 2 +jA 1 ~ uj 2 )jA 1 ~ j 2 (2.240) 78 Adding these together and rearranging gives 1 2 d dt jA 1 ~ uj 2 +jA 2 ~ j 2 + 2 jA +1=2 1 ~ uj 2 + 2 jA +1=2 2 ~ j 2 C(1 +jA 1 ~ uj 2 +jA 1 uj 2 +jA 1 j 2 )jA 1 ~ uj 2 +C(1 +jA 1 uj 2 +jA 1 ~ uj 2 )jA 1 ~ j 2 +CjA +1=2 1 uj 2 +CjA +1=2 1 j 2 +C(1 +jA 1 uj 2 )jA 1 j 2 C(1 +jA 1 ~ uj 2 +jA 1 uj 2 +jA 1 ~ uj 2 +jA 1 j 2 )(jA 1 ~ uj 2 +jA 1 ~ j 2 ) +CjA +1=2 1 uj 2 +CjA +1=2 1 j 2 +C(1 +jA 1 uj 2 )jA 1 j 2 (2.241) Since + 1=2 < , we have u 2 L 2 loc ([0;1);D(A +1=2 1 ))\ C([0;1);D(A 1 )) and 2 L 2 loc ([0;1);D(A +1=2 2 ))\C([0;1);D(A 2 )). Thus, once we know ~ u2 L 2 loc ([0;T );D(A 1 )), we may apply Gronwall's inequality to obtain the desired regularity. We know this to be true for 2 [3=4; 1) by our assumption 1 in this section, and so we may repeat inductively for all 2 (3=4; 1=2). Once we have this regularity, then we may proceed symbolically in the same way as in the case of the 3D Navier-Stokes. There is a slight dierence when dealing with, for example, (A 1 Pe 2 ;A +1 1 ~ u) in proving the regularity of ~ u. But this is ne since2L 2 loc ([0;T );D(A 2 )) from the above proposition. As such, we will not repeat the details. 79 Chapter 3 Primitive Equations 3.1 Introduction We consider the velocity part of the two-dimensional stochastic Primitive Equations (SPE) with an additive noise. du + (u +u@ x u +w@ z u +@ x p)dt =dW t (3.1) @ x u +@ z w = 0: (3.2) As mentioned in [12], [11], the Primitive Equations are used to describe geophysical scale uid ows. That is, ows occurring in the ocean or atmosphere. For example, they are used in general circulation models in climatology. They can model each independently or as a combined system. That is, the full system is coupled with temperature density, as in [13]. See [7] and especially [29] for more details. Here, u is the horizontal component of velocity and w is the vertical component. We assume the pressurep is independent of the vertical variablez, and represents the viscosity of the uid as it has in previous chapters. 80 The equations evolve in a cylinderM = [0;L] [h; 0] with (x;z)2M. The boundary is partitioned into the top i = fz = 0g, the bottom b = fz = hg and the sides s = fx = 0g[fx = Lg. On the sides, we impose the Dirichlet boundary condition u = 0 and on the top and bottom, we assume the free boundary condition @ z u = 0;w = 0. 
Additionally, we assume Z 0 h udz = 0: (3.3) We see from (3.2) that w(x;z) = Z z h @ x u(x; z)d z: (3.4) We now dene the function spaces of interest: H := v2L 2 (M) : Z 0 h vdz = 0 ; (3.5) equipped with the L 2 inner product (;) and corresponding normjj and V := v2H 1 (M) : Z 0 h udz = 0; (v = 0 on s (3.6) 81 equipped the inner product ((u;v)) = (@ x u;@ x v) + (@ z u;@ z v) with corresponding normjjjj. We dene the Leray projector P H as the orthogonal projection of L 2 on to H, i.e. P H v =v 1 h Z 0 h vdz: (3.7) With this, we can dene a Stokes-type operator A from V to V 0 via hAu;vi = ((u;v)): (3.8) We extend this to an unbounded operator on H via Au =P H u with domain D(A) = v2H 2 (M) : Z 0 h vdz = 0;v = 0 on s ;@ z v = 0 on b [ i : (3.9) Note that A 1 is a compact symmetric operator, and hence we have the existence of an orthonormal basisfh k g ofH consisting of eigenfunctions ofA with eigenvaluesf k g. Dene H n = spanfh 1 ;:::;h n g and let P n be the projection of H onto H n . One can show (see [14]) thatP H = on D(A), a fact that we will use repeatedly. Next, we tackle the nonlinear term. In reference to (3.4), dene W(v)(x;z) = Z z h @ x v(x; z)d z; (3.10) 82 and B be the bilinear form dened as B(u;v) =P H (u@ x v +W(u)@ z v): (3.11) We also denote B(u) = B(u;u). We record a few useful results whose proofs can be found in [14] and [12]. Lemma 3.1.1. We have the following properties of B. (i) (B(u,v),v) = 0 (ii) For u2D(A), we havehB(u);@ zz ui = 0 (iii) For u;v;w2V , j(B(u;v);w)jC juj 1=2 jjujj 1=2 jjvjjjwj 1=2 jjwjj 1=2 +j@ x ujj@ z vjjwj 1=2 jjwjj 1=2 (iv) For u2D(A), jB(u)j 2 Cjjujj 3 jAuj and jjB(u)jj 2 CjjujjjAuj 3 83 (v) For < 1=4, jA B(u;u)j 2 Cjjujj 2 jAuj 2 : The last item in particular is proved via interpolation. Next, we dene the noise term in (3.1). Fix a stochastic basis ( ;F;fF t g t0 ;f k g k1 wheref k g k1 is a sequence of independent scalar Brownian motions adapted to the ltration fF t g t0 . Dene P H dW t =A dW t = 1 X k=1 k h k d k (3.12) for some > 1. We close this section be rewriting (3.1) in the above notation by applying P H to it: du + (Au +B(u))dt =A dW t ; u(0) =u 0 : (3.13) 3.2 Regularity Before moving forward with the parameter estimation problem for (3.13), we must establish certain regularity results. As with the Boussinesq equations, most papers are concerned with regularity in the ! variable, and are hence up to a stopping time. The authors of [13] prove 84 global existence and uniqueness for a more general coupling of the equations. We simply take the temperature and extra forces equal to 0 to obtain the regularity necessary for our situation. Theorem 3.2.1 (Glatt-Holtz, Temam 2010). Let u 0 2 V . Then there exists a unique H- valued,F t -adapted process u of equation (3.13) with u2L 2 loc ([0;1);D(A))\C([0;1);V ) a:s: (3.14) and for each t 0 u(t) + Z t 0 (Au +B(u))ds =u 0 + X k k h k d k ; (3.15) with equality in H. 3.3 Estimators The goal now is to derive estimators for the parameter and prove their consistency. What follows is a heuristic derivation of them. We follow the outline set by the authors of [4]. We project (3.13) onto H n =R n via P n and obtain du n + (Au n +P n B(u))dt =P n A dW t u n (0) =P n u 0 : (3.16) 85 Assume we observe these n Fourier modes on the interval [0;T ] for some T > 0. Note that u n induces a probability measure P T;n on C([0;T ];R n ). 
For a xed reference parameter 0 , we assume that the term P n B(u) is observed and formally apply the Girsanov formula (see [22]) to obtain the formal likelihood ratio log dP T;n dP T;n 0 = Z T 0 ( 0 ) Au n ;P n A 2 du n (t) 1 2 Z T 0 ( 2 2 0 )(Au n ;A 2 Au n )dt Z T 0 ( 0 )(Au n ;P n A 2 P n B(u))dt: (3.17) We now maximize this via dierentiating with respect to, setting it equal to 0 and solving for . Call this value n and see that n = R T 0 (A 1+2 u n ;du n (t)) + R T 0 (A 1+2 u n ;P n B(u))dt R T 0 jA 1+ u n j 2 dt : (3.18) We now write down three classes of estimators as in [4] and [28]. First, we replace above with another parameter with range to be specied later. Note that sinceP n andB do not commute, these estimators require full knowledge of the solutionu. Hence, we refer to them as full n . full n = R T 0 (A 1+2 u n ;du n (t)) + R T 0 (A 1+2 u n ;P n B(u))dt R T 0 jA 1+ u n j 2 dt : (3.19) 86 As mentioned above, (3.19) requires knowledge of u and not just u n . Since we can expect B(u n )!B(u) in some sense, we replaceB(u) withB(u n ) to obtain another set of estimators part n part n = R T 0 (A 1+2 u n ;du n (t)) + R T 0 (A 1+2 u n ;P n B(u n ))dt R T 0 jA 1+ u n j 2 dt : (3.20) Finally, we consider the class of estimators that does not involve the nonlinear term. lin n = R T 0 (A 1+2 u n ;du n (t)) R T 0 jA 1+ u n j 2 dt : (3.21) Due to the estimates in the previous section, we are able to follow the outline of the proofs found in [4] and [28] and we get analogous results as these two papers as well as previous sections. Theorem 3.3.1. Let u be a solution to (3.13) with 1< < 5=4. 1. If > 1, then full n , part n , and lin n are consistent estimators of the parameter , i.e. lim n!1 full n = lim n!1 part n = lim n!1 lin n = (3.22) in probability. 87 2. If > 1=2, then full n is asymptotically normal with rate n, i.e. n( full n ) d !; (3.23) where is a Gaussian random variable with mean zero and variance 2( +1) 2 1 T ( +1=2) . Formally, the proof of this results is the same as in [4] due to the fact that item (V ) in Lemma (3.1.1) exactly matches with the nonlinear term B in the Navier-Stokes equation, and this is the main inequality used in showing consistency of the estimators. For this reason, we will exclude the proof. 3.4 3D Primitive Equations As in the case of two dimensions, We consider the velocity part of the three dimensional stochastic Primitive equations. du + (u + (ur h )u +w@ z u +r h p)dt =dW t r h u +@ z w = 0 @ z p = 0: (3.24) Here, are velocity eld (u;w) consists of the horizontal velocity u = (u 1 ;u 2 ) and vertical velocity w. Moreover,r h = (@ x 1 ;@ x 2 ) is the horizontal gradient. represents the usual 88 Laplacian in three spatial variables. As is in [11], we consider these equations on the three dimensional torus, i.e. with periodic boundary conditions. We then dene our function spaces accordingly: H = v2L 2 (T 3 ) :r h Mv = 0 inT 2 ;v isT 3 -periodic; Z T 3 vdxdz = 0 ; (3.25) where r h Mv =r h Z 1 1 v(x;z)dz; (3.26) and V = H\H 1 (T 3 ), with each having the L 2 and H 1 inner-products respectively. Let P H be the projection of L 2 (T 3 ) onto H, and let A =P H () be the Stokes operator with domain D(A) = H\H 2 (T 3 ). For the noise, = A for some > 1. Dene the operator w via w(u)(x;z) = Z z 0 r h v(x;z 0 )dz 0 : (3.27) Our nonlinear term B is dened via B(u;v) =P H ((ur h )v +w(u)@ z v): (3.28) Unlike the two dimensional case, the bounds available for the nonlinear term are less than ideal. For example, this is what we can obtain at the moment forjjB(u)jj. 
89 Lemma 3.4.1. For u2D(A), we have jjB(u)jj 2 C jjujjjAuj 3 +juj 1=2 jAuj 7=2 +jjujjjAuj 2 jA 3=2 uj +jjujj 1=2 jAuj 2 jA 3=2 uj 3=2 : (3.29) or alternatively, jjB(u)jj 2 C jjujjjAuj 3 +juj 1=2 jAuj 7=2 +jjujjjAuj 2 jA 3=2 uj +jAuj(jjujj 2 +jAuj 2 )jA 3=2 uj : (3.30) Proof. We writeB(u) =B NS (u)+B w (u). B NS (u) is the same nonlinear term in the Navier Stokes equations which satises jjB NS (u)jj 2 CjjujjjAuj 3 +juj 1=2 jAuj 7=2 : (3.31) The proof is a combination of Agmon's inequality and the embedding of H 1 in L 6 in three dimensions. See [15] for more details. The remaining terms come from bounding B w (u) = w(u)@ z u. The authors of [12] prove a similar result in two dimensions; the three dimensional 90 case is similar. Let k = 1; 2; 3, l = 1; 2, and @ i = @ x i , where x 1 = x, x 2 = y, and x 3 = z. Then we may write jjB w (u)jj 2 = Z T 3 (@ k (w@ z u l )) 2 C Z T 3 (@ k w) 2 (@ z u l ) 2 +w 2 (@ k @ z u l ) 2 =J 1 +J 2 : (3.32) We deal with each term separately. Let dS =dxdy. Then we have Z T 3 (@ k w) 2 (@ z u l ) 2 Z T 2 j@ k wj 2 L 1 z j@ z u l j 2 L 2 z dS Z T 2 j@ k wj 2 L 1 z dS j@ z u l j 2 L 2 z L 1 (T 2 ) : (3.33) Using Agmon's inequality in one dimension along with Holder's inequality gives Z T 2 j@ k wj 2 L 1 z dSC Z T 2 j@ k wj L 2 z j@ z @ k wj L 2 z dS Z T 2 j@ k wj 2 L 2 z dS 1=2 Z T 2 j@ k (r h u)j 2 L 2 z dS 1=2 Cjuj 2 H 2; (3.34) by notingjwj R 1 0 jr h ujdz 0 . LetR =j@ z u l j L 2 z . Using Agmon's inequality in two dimensions gives jRj L 1 (T 2 ) CjRj 1=2 L 2 (T 2 ) j@ Rj 1=2 L 2 (T 2 ) ; (3.35) 91 where is a multi-index with order at most 2. We havejRj L 2 (T 2 ) Cjjujj. Let @ j and @ i represent any rst order partial derivative. We have @ j R = 1 R Z 1 0 @ j @ z u l @ z u l dz 0 ; (3.36) and so by integration by parts and Holder's inequality, we have @ i @ j R = 2 R 3 Z 1 0 @ j @ z u l @ z u l dz 0 Z 1 0 @ i @ z u l @ z u l dz 0 + 1 R Z 1 0 @ i @ j @ z u l @ z u l dz 0 + 1 R Z 1 0 @ j @ z u l @ i @ z u l dz 0 C R 3 R 2 j@ j @ z u l j L 2 z j@ i @ z u l j L 2 z +j@ i @ j @ z u l j L 2 z + C R Rj@ i @ j @ z u l j L 2 z = C R j@ j @ z u l j L 2 z j@ i @ z u l j L 2 z +Cj@ i @ j @ z u l j L 2 z : (3.37) To deal with the remaining R term, we again integrate by parts: C R j@ j @ z u l j L 2 z j@ i @ z u l j L 2 z = C R Z 1 0 @ j @ j @ z u l @ z u l dz 0 1=2 Z 1 0 @ i @ i @ z u l @ z u l dz 0 1=2 C R R Z 1 0 j@ j @ j @ z u l j 2 dz 0 1=4 Z 1 0 j@ i @ i @ z u l j 2 dz 0 1=4 Cjuj H 3: (3.38) In total, we obtain J 1 CjjujjjAuj 2 jA 3=2 uj. 92 For J 2 , we have Z T 3 w 2 (@ k @ z u l ) 2 Z T 3 (@ k @ z u l ) 2 jwj 2 L 1 (T 3 ) Cjuj 2 H 2jwj 1=2 L 2 (T 3 ) jwj 3=2 H 2 (T 3 ) : (3.39) We immediately havejwj L 2 (T 3 ) Cjjujj. Consider @ m and @ n . If neither @ n nor @ m is @ z , then we havej@ m @ n wj L 2 (T 3 ) Cjuj H 3, and if at least one is@ z , thenj@ m @ n wj L 2 (T 3 ) Cjuj H 2. So either way,jwj H 3Cjuj H 3, and so J 2 Cjjujj 1=2 jAuj 2 jA 3=2 uj 3=2 : (3.40) Thus,jjB w (u)jj 2 C jjujjjAuj 2 jA 3=2 uj +jjujj 1=2 jAuj 2 jA 3=2 uj 3=2 . Combining this with the estimate of B NS (u) gives the rst result. The second result comes from a dierent bound of J 2 . Using the embedding ofH 1 inL 6 in three dimensions, we have J 2 jwj 2 L 6j@ k @ z u l j L 6j@ k @ z u l jC(jjujj 2 +jAuj 2 )jAujjA 3=2 uj; (3.41) sincej@ z wjCjjujj. 
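Because the two displayed bounds of Lemma 3.4.1 are difficult to read in the extracted text, the following LaTeX fragment records one consistent reading of them, reconstructed from the estimates of $J_1$ and $J_2$ in the proof above; it is a reconstruction, not a verbatim copy of the original displays.

% One consistent reading of (3.29) and (3.30), reconstructed from the proof of Lemma 3.4.1.
\|B(u)\|^{2} \le C\Big( \|u\|\,|Au|^{3} + |u|^{1/2}|Au|^{7/2}
      + \|u\|\,|Au|^{2}|A^{3/2}u| + \|u\|^{1/2}|Au|^{2}|A^{3/2}u|^{3/2} \Big),
\qquad\text{or alternatively}\qquad
\|B(u)\|^{2} \le C\Big( \|u\|\,|Au|^{3} + |u|^{1/2}|Au|^{7/2}
      + \|u\|\,|Au|^{2}|A^{3/2}u| + |Au|\big(\|u\|^{2} + |Au|^{2}\big)|A^{3/2}u| \Big).

The $|A^{3/2}u|$ factors in the last terms are the source of the difficulty discussed in the remark that follows.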
Note that we now have A 3=2 u showing up, so we would require regularity such as u2 L 2 loc ([0;1);D(A 3=2 ))\C([0;1);D(A)) to make use of this in proving, for example, the 93 regularity of the residual ~ u as we have been for the parameter estimation problem. But since we only have access to this estimate, the most we could obtain is ~ u2L 2 loc ([0;1);D(A 3=2 ))\ C([0;1);D(A)), i.e. the same regularity as u. Thus, u = u ~ u would also have the same regularity. We know this to be true only if > 3=2 (see Lemma A.1.1 in the appendix). With this set up, we are unable to proceed with the parameter estimation problem and will investigate it in future work. To give an example of where things go wrong, consider the proof of Lemma 2.5.7. In the proof, we had to show R T 0 jA 1+ ~ uj 2 dt E R T 0 jA 1+ ~ uj 2 dt ! 0: (3.42) Since we only have ~ u2L 2 loc ([0;1);D(A 3=2 )), we are forced to estimate this as follows: R T 0 jA 1+ ~ uj 2 dt E R T 0 jA 1+ ~ uj 2 dt C n 4 3 ( 1 2 ) n 2 3 (22 + 5 2 ) Z T 0 jA 1+ ~ uj 2 dt = C n 4 3 +1 Z T 0 jA 1+ ~ uj 2 dt: (3.43) However, since > 3=2, we see 4 3 + 1<2 + 1 =1, so in fact, we have a positive power ofn in the numerator. Thus, the upper bound goes o to innity, and we are unable to show the term of interest converges to 0. 94 Chapter 4 Linearized Equations 4.1 Introduction The nal set of equations we will study are related to the Navier-Stokes system which are linearized about a smooth vector eld U. We maintain the same notation of the function spacesH andV in the case of a bounded or periodic domain as well as the operators A and B. If U is a smooth vector eld in eitherR 2 orR 3 , consider the linear operator B U (u) =P [(Ur)u + (ur)U]: (4.1) Note thatB U =B(U;u) +B(u;U), and hence the results in Lemmas 2.1.3, 2.1.4, 2.4.2, and 2.5.3 apply to each summand. We condense these into one lemma presented here in terms of B U . Lemma 4.1.1. All of the following constants depend on the norms of U. (i) In both 2D and 3D, jB U (u)j 2 C(juj 2 +jjujj 2 ): (4.2) 95 (ii) In both 2D and 3D, jjB U (u)jj 2 C(jjujj 2 +jAuj 2 ): (4.3) (iii) For all < 1=4 in both 2D and 3D, jA B U (u)j 2 C(juj 2 +jjujj 2 +jAuj 2 ): (4.4) (iv) In the case of periodic boundary conditions, for > 1=2 in 2D and > 3=4 in 3D, we have jA B U (u)j 2 C(jA uj 2 +jA +1=2 uj 2 ): (4.5) The proof involves simply bounding all integrals by the L 1 norm of the various derivatives of U that appear. Item (iii) is obtained via interpolation of items (i) and (ii). With this notation, we will work with the following SPDE du + (Au +B U (u) +f)dt =A dW t ; u(0) =u 0 : (4.6) If f = 0, we will refer to this as the linearized Navier-Stokes equation. If f 2 L 2 loc ([0;1);D(A))\C([0;1);V ), then this can be seen as the linearized Boussinesq system with f =Pe j , j = 2; 3. Recall that in the case of the Boussinesq equations with = 0 in Chapter 2, the heat transfer equation was only utilized to establish regularity for 96 the temperature. After that, the analysis was solely focused on the Navier-Stokes equation with the forcing termPe j . For the purposes of this chapter, we will only treat the case when f = 0. Namely, du + (Au +B U (u))dt =A dW t ; u(0) =u 0 : (4.7) Next, we will sketch a proof of the desired regularity result. A more rigorous proof makes use of appropriate Galerkin schemes. Moreover, this is a linear equation in u and can be viewed in some sense as the stochastic heat equation with forcing. Proposition 4.1.2. Let u 0 2 V . 
For > 1 and either periodic or Dirichlet boundary conditions, there exists a uniqueF t -adapted process u such that u2L 2 loc ([0;1);D(A))\C([0;1);V ): (4.8) In two dimensions with periodic boundary conditions, if 1=2 < < 1=2, then for u 0 2 D(A ), we have u2L 2 loc ([0;1);D(A +1=2 ))\C([0;1);D(A )): (4.9) In three dimensions, the same result holds for 3=4< < 1=2. 97 Sketch of Proof. As before, writeu = u + ~ u with u solving the stochastic Stokes equation with u(0) =u 0 , and ~ u solving the residual equation d dt ~ u +A~ u +B U (u) = 0; ~ u(0) = 0: (4.10) Multiply this equation by ~ u and integrate to obtain 1 2 d dt j~ uj 2 +jj~ ujj 2 =(B U (u); ~ u): (4.11) By Lemma 4.1.1 and Young's inequality, we see j(B U (u); ~ u)jjB U (u)jj~ ujCjjujjj~ ujCjj ujj 2 +Cj~ uj 2 + 2 jj~ ujj 2 (4.12) Rearranging gives 1 2 d dt j~ uj 2 + 2 jj~ ujj 2 Cjj ujj 2 +Cj~ uj 2 : (4.13) With the regularity of u, Gronwall's inequality gives ~ u inL 2 loc ([0;1);V )\L 1 loc ([0;1);H). Next, multiply (4.10) by A~ u. To obtain 1 2 d dt jj~ ujj 2 +jA~ uj 2 =(B U (u);A~ u) (4.14) 98 By Lemma 4.1.1, we obtain j(B U (u);A~ u)jjjB U (u)jjjj~ ujjC(jA uj +jA~ uj)jj~ ujjCjA uj 2 +Cjj~ ujj 2 + 2 jA~ uj 2 : (4.15) As before, we rearrange and use Gronwall's inequality, to conclude ~ u2L 2 loc ([0;1);D(A))\ L 1 loc ([0;1);V ) which gives us the desired regularity of u. Suppose we are in two dimensions with periodic boundary conditions. Multiply (4.10) by A 2 ~ u for > 1=2 and integrate to obtain 1 2 d dt jA ~ uj 2 +jA +1=2 ~ uj 2 =(A B U (u);A ~ u): (4.16) To estimate the B U term, we simply apply Lemma 4.1.1 and obtain j(A B U (u);A ~ u)jCjA +1=2 uj 2 +CjA ~ uj 2 + 2 jA +1=2 ~ uj 2 : (4.17) Rearranging gives 1 2 d dt jA ~ uj 2 + 2 jA +1=2 ~ uj 2 CjA +1=2 uj 2 +CjA ~ uj 2 : (4.18) 99 As + 1=2 < , u2 L 2 loc ([0;1);D(A +1=2 )). If ~ u2 L 2 loc ([0;1);D(A )), then Gronwall's inequality gives the desired result. Indeed, from above, this is true for 1=2 1, so we may proceed inductively to obtain the regularity for > 1=2. In three dimensions, we use the same computations, noting that Lemma 4.1.1 only holds for > 3=4. Since we do know ~ u2L 2 loc ([0;1);D(A )) for 3=4< 1, we may apply the same inductive reasoning as in the two dimensional case. With this regularity in hand, we can establish the analogous estimator classes for the linearized equations. Project the linearized equation onto R n and obtain du n + (Au n +P n B U (u))dt =P n A dW t ; u n 0 =P n u 0 ; (4.19) where u n =P n u. Let P n;T be the probability measure induced by u n onto C([0;T ];R n ). Fixing a refer- ence parameter 0 , we can formally apply the Girsanov theorem and obtain the formal log likelihood function log P n;T P n;T 0 = ( 0 ) Z T 0 (Au n ;A 2 du n ) 1 2 ( 2 2 0 ) Z T 0 (Au n ;A 2 +1 u n )dt ( 0 ) Z T 0 (Au n ;A 2 P n B U (u))dt: (4.20) 100 We replace with a hyperparameter and maximize the above with respect to. We obtain three classes of estimators as before: full n = R T 0 (Au n ;A 2 du n ) + R T 0 (A 2 P n B U (u);Au n )dt R T 0 jA +1 u n j 2 dt ; (4.21) part n = R T 0 (Au n ;A 2 du n ) + R T 0 (A 2 P n B U (u n );Au n )dt R T 0 jA +1 u n j 2 dt ; (4.22) and lin n = R T 0 (Au n ;A 2 du n ) R T 0 jA +1 u n j 2 dt : (4.23) As expected, these estimators are consistent and full n is asymptotically normal. We record this here: Theorem 4.1.3. Suppose we are in 2D. Let u solve (4.7) with > 1 and u 0 2 D(A ) for some > 1=2. In the Dirichlet case, further assume < 5=4. Then for > 1, full n , part n , and lin n are consistent estimators of . 
For > 1=2, full n is asymptotically normal. In 3D, the above holds in the case of periodic boundary conditions and > 5=4. For the sake of completion, we will outline the proof as in previous chapters and highlight any dierences in their proofs. 101 4.2 Proof of the Main Result We start by writing u = u + ~ u, where now u solves the stochastic Stokes equation d u +A u =A dW t ; u(0) = 0; (4.24) and ~ u solves the residual d dt ~ u +A~ u +B U ( u + ~ u) = 0; ~ u(0) =u 0 : (4.25) From this equation, we can prove some regularity results for ~ u. Lemma 4.2.1. Suppose ~ u is a solution to (4.25) with u being the solution to (4.7) with Dirichlet boundary conditions, > 1, and u 0 2D(A 1=2+ ), < 1=4. Then for every T > 0, we have sup t2[0;T ] jA 1=2+ ~ uj 2 + Z T 0 jA 1+ ~ uj 2 dt<1; (4.26) and for an increasing sequence of stopping times n "1, E sup t2[0;n] jA 1=2+ ~ uj 2 + Z n 0 jA 1+ ~ uj 2 dt ! <1: (4.27) 102 In the case of periodic boundary conditions, the above holds for < 1=2 in 2D and 3D, with the additional assumption that > 5=4 in 3D. Proof. Multiply equation (4.25) with A 1+2 ~ u to obtain 1 2 d dt jA 1=2+ ~ uj 2 +jA 1+ ~ uj 2 =(A B U (u);A +1 ~ u): (4.28) For Dirichlet boundary conditions, suppose < 1=4. We estimate the B U term via Lemma 4.1.1: j(A B U (u);A +1 ~ u)jC(juj 2 +jjujj 2 +jAuj 2 ) + 2 jA +1 ~ uj 2 : (4.29) Rearranging gives 1 2 d dt jA +1=2 ~ uj 2 + 2 jA +1 ~ uj 2 C(juj 2 +jjujj 2 +jAuj 2 ): (4.30) Using Gronwall's inequality and the regularity of u gives us (4.26). Let n = inf t0 n R t 0 juj 2 +jjujj 2 +jAuj 2 ds>n o . Then n % 1 as n ! 1 by the regularity of u and (4.27) follows. For periodic boundary conditions, we instead estimate the B U (u) term as follows. Let 0 = maxf; 2 g in 2D so that 1=2 < 0 < 1=2 and 0 = maxf; 3 5 g in 3D so that 103 3=4 < 0 < 1=2 (keeping in mind that > 5=4 in 3D). Then we have by Lemma 4.1.1 that j(A B U (u);A +1 ~ u)jjA +1 ~ ujjA 0 B U (u)j 2 2 jA +1 ~ uj 2 +C(jA 0 uj 2 +jA 0 +1=2 uj 2 ): (4.31) Rearranging and applying Gronwall's inequality gives the desired regularity. For the stopping time, let n = inf t0 Z t 0 jA 0 uj 2 +jA 0 +1=2 uj 2 ds>n : (4.32) The result then follows by the same reasoning as above. Moving on, we have the same useful convergence result as a consequence of the Law of Large Numbers. Lemma 4.2.2. Suppose u and solves (4.7) with u 0 2 D(A ) and u solves the stochastic Stokes equation with u 0 = 0. In 2D, assume 1< < 5=4 with Dirichlet boundary conditions, or > 1 with periodic boundary conditions. In 3D, assume > 5=4 with periodic boundary conditions. Then for any > 1, we have lim n!1 R T 0 jA 1+ u n j 2 dt E R T 0 jA 1+ u n j 2 dt = 1 a:s: (4.33) 104 The proof is simply a combination of the proofs of Lemma 2.3.2 and Lemma 2.5.7. Next, we deal with the noise terms. Lemma 4.2.3. Assume the same conditions as Lemma 2.5.7. For any 1 < minf2 + 2 2 ; 1g, we have lim n!1 n 1 R T 0 (A 1+22 u n ; P n k=1 h k d k ) R T 0 jA 1+ u n j 2 dt = 0 a:s: (4.34) For any 2 < minf2 + 2 2 ; 3=2g in the periodic case (both 2D and 3D) or 2 < minf2 + 2 2 ; 5=4 + 1 g in the Dirichlet case, we have lim n!1 n 2 R T 0 (A 1+22 ~ u n ; P n k=1 h k d k ) R T 0 jA 1+ u n j 2 dt = 0 (4.35) in probability. The proof is a combination of the proofs of Lemma 2.3.3 and Lemma 2.5.8. Letting 1 = 2 = 0 and using the fact that u n = u n + ~ u n gives the following corollary. Corollary 4.2.4. For 1 = 2 = 0 in Lemma 4.2.3, we have lim n!1 R T 0 hA 1+2 u n ; P n k=1 h k d k i R T 0 jA 1+ u n j 2 dt = 0: (4.36) in probability. 
105 We now move on to the nonlinear term. The rst lemma will again be analogous to ones that previously appeared. Lemma 4.2.5. Assume the same conditions as Lemma 4.2.2. Suppose 2 [0; minf1=2; + 1g] in the periodic case (both 2D and 3D) and 2 [0; minf5=4 ; + 1g] in the Dirichlet case. Then lim n!1 n R T 0 hA 1+2 u n ;P n B U (u)idt R T 0 jA 1+ u n j 2 dt = 0 a:s: (4.37) Again, the proof is a combination of the proofs of Lemma 2.3.4 and Lemma 2.5.9. With all of the above, we are nally ready to prove the main result. Proof of Theorem 4.1.3. The proof follows by using all of the above Lemmas in the same fashion as before. 106 Chapter 5 Sample Numerical Examples 5.1 Stochastic Stokes and Heat Equation We recall the stochastic Stokes equation that has been used as a main tool in the analysis of estimators in the previous chapters duAudt =A dW (t); u(0) =u 0 : (5.1) In this chapter, we will focus on numerically computing the estimators for the parameter of this equation. Since no forcing, nonlinear or otherwise, is a part of the diusion, all three classes of estimators full n , part n , and lin n will align. We will be simulating the Fourier coecients, but we note that in an actual experiment, one would need to obtain data by observing the physical phenomena in question (for example, using measuring instruments to detect the velocity of the uid assumed to be modeled by the equation). 107 As mentioned in the appendix, we can obtain an Ornstein-Uhlenbeck process for the Fourier coecients u k of the solution u. Letting h k be the kth eigenfunction of A with associated eigenvalue k , take the inner product of equation (5.1) with h k to obtain du k k u k dt = k d k (t); u k (0) = (u 0 ) k ; (5.2) where u k = (u;h k ). By applying Ito's formula to the function e k t u k (t), one can show this SDE has solution u k (t) =e k t u k (0) + k Z t 0 e k (ts) d k (s): (5.3) This closed form solution can then be numerically implemented, as in [18]. Suppose we partition [0;T ] into 0 =t 0 <t 1 <<t m =T , with t =T=m and t j =jt. We note by the Ito isometry that E Z t 0 e k (ts) d k (s) 2 =E Z t 0 e 2 k (ts) ds = 1e 2 k t 2 k : (5.4) Thus, the stochastic integral is normally distributed with mean 0 and variance 1e 2 k t 2 k . This means the stochastic integral from time t j to t j+1 can be written as q 1e 2 k t 2 k j , where 108 the j are i.i.d. standard normal random variables. Together, we can iteratively compute u k (t j ) via u k (t j )e k t u k (t j1 ) + k s 1e 2 k t 2 k j : (5.5) Figure 5.1: Four dierent coecientsu k (t). Here,L = 1,T = 1, = 1, = 1:01, = and m = 99 (so that t 99 = 1) Recall that A =P . In the case of Dirichlet boundary conditions, P and do not commute and hence they do not share eigenfunctions. In the absence of boundaries (e.g. periodic boundary conditions), they do in fact commute, and hence we can use the same eigenfunctions and eigenvalues for A and, paying mind to use a basis that is divergence free. 109 Suppose we are on [0;L] 2 , L> 0. We will use the following eigenfunctions h k of: h k = 2 L 0 B B @ sin k L x cos k L y cos k L x sin k L y 1 C C A (5.6) with corresponding eigenvalues k = k 2 2 L 2 ; the factor 2=L is a normalizing factor. As noted earlier, all classes of estimators for this equation are the same. We will simply denote it n . 
We can then write n as n = R T 0 (A 1+2 u n ;du n ) R T 0 jA 1+ u n j 2 dt = n X k=1 1+2 k Z T 0 u k (t)du k 2 n X k=1 2+2 k Z T 0 u 2 k (t)dt = n X k=1 1+2 k (u 2 k (T )u 2 k (0)T 2 k ) 2 n X k=1 2+2 k Z T 0 u 2 k (t)dt : (5.7) The last we need to do before implementing is to approximate the integral in the denomi- nator. If we have m time steps as above, then we simply use the Riemann sum: Z T 0 u 2 k (t)dt m X j=1 u 2 k (t j )t: (5.8) 110 Figure 5.2: Four images showing the convergence of n to using the same hyperparameters as the previous images. 111 Figure 5.3: Distribution ofn( n ) with the same hyperparameters as before andn = 200. We note the data we obtained does not have mean 0 nor variance 2 2 . This suggests the rate of convergence to the normal distribution is not n, but n to some power. 112 5.2 Extension to other equations In this section, we will explain how to extend the work in the previous section to the equations analyzed in previous chapters. We will not provide code or gures due to the amount of processing power needed to run the simulations in a reasonable amount of time. Here, we note that in most numerical simulations involving the Navier-Stokes and related equations do not project onto our spaceH. Instead, one would incorporate the pressure term rp and solve the equation p =r ((ur)u): (5.9) This allows one to bypass the implementation of the Leray projector P . Another way is to instead compute the Fourier coecients directly, i.e. (u;h 1 k ) where h 1 k are as in the previous section. Integrating by parts, we would obtain (rp;h 1 k ) =(p;rh 1 k ) = 0. Knowingu k from this method would then allow us to approximate u via u = P n k=1 u k h 1 k . For this section, we choose the former mostly due to the dicultly in computing (B 1 (u);h 1 k ) We note that since we cannot have knowledge of all the Fourier modes of our solution u (and in the case of the Boussinesq equations), our estimator full n will not be considered, and instead, we give our attention to part n . For all equations considered, the estimator lin n is exactly the same as in the previous section for the stochastic Stokes equation. The only dierence is how to obtain u k . Due to the nonlinear term, u k no longer satises a nice 113 Ornstein-Uhlenbeck process, and thus we do not have a closed form solution. What we can do is use the same idea as in the proof of consistency of our estimators and split u = u + ~ u, where u solves (5.1) and ~ u solves d dt ~ u~ u + (( u + ~ u)r)( u + ~ u)e 2 +rp = 0: (5.10) As (5.10) is a random PDE, one can, for example, implement the Euler method to numerically calculate. First, we approximate u via u n = P n k=1 u k h k . From there, consider the notation ~ u l i;j and p l i;j , where l represents our time increment, i represents the x increment and j represents the y increment. To compare with earlier notation, we will use the letter k to explicitly refer to Fourier coecients. Since these only depend on time, we do not need to concern ourselves with the double indices (i;j). We then approximate the derivatives appearing in (5.10) by appropriate dierence schemes. We will use a forward dierence in time, backwards dierence in the rst spatial 114 derivatives, and central mean for the second spatial dierence. 
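To make the procedure of this section concrete, here is a minimal Python sketch of the experiment described above: it samples the Fourier modes with the exact one-step update (5.5) and evaluates the estimator (5.7) using the Riemann-sum approximation (5.8). The parameter values, the single-index eigenvalue family $\lambda_k = k^2\pi^2/L^2$, and the overall sign convention (chosen so that the estimator recovers $\nu$) are illustrative assumptions rather than the exact settings behind the figures.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not the exact settings used for the figures above).
L, T = 1.0, 1.0
nu, sigma, gamma, alpha = 1.0, 1.0, 1.01, 0.0
n, m = 100, 1000                                # observed Fourier modes / time steps
dt = T / m
lam = (np.arange(1, n + 1) * np.pi / L) ** 2    # eigenvalues lambda_k = k^2 pi^2 / L^2

# Exact one-step update (5.5): u_k(t_j) = e^{-nu*lam_k*dt} u_k(t_{j-1}) + sd_k * xi_j.
decay = np.exp(-nu * lam * dt)
sd = sigma * lam ** (-gamma) * np.sqrt((1.0 - np.exp(-2.0 * nu * lam * dt)) / (2.0 * nu * lam))
u = np.zeros((m + 1, n))                        # zero initial condition
for j in range(1, m + 1):
    u[j] = decay * u[j - 1] + sd * rng.standard_normal(n)

# Estimator (5.7): the integral of u_k against du_k is evaluated through the Ito
# correction, and the time integral of u_k^2 through the Riemann sum (5.8).
ito = 0.5 * (u[-1] ** 2 - u[0] ** 2 - sigma ** 2 * lam ** (-2.0 * gamma) * T)
riemann = np.sum(u[1:] ** 2, axis=0) * dt
nu_hat = -np.sum(lam ** (1.0 + 2.0 * alpha) * ito) / np.sum(lam ** (2.0 + 2.0 * alpha) * riemann)
print("nu_hat =", nu_hat)                       # approaches nu as more modes are observed

Repeating this experiment over many independent sample paths and histogramming $n(\hat{\nu}_n - \nu)$ gives an empirical check of the kind reported in Figure 5.3.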
For a generic function f, these dierences look like d dt f f l+1 i;j f l i;j t @ @x f f l i;j f l i1;j x @ @y f f l i;j f l i;j1 y @ 2 @x 2 f f l i+1;j 2f l i;j +f l i1;j x 2 @ 2 @y 2 f f l i;j+1 2f l i;j +f l i;j1 y 2 : (5.11) Note that as p solves the Poisson equation, there is no way to get a forward time, i.e. p l+1 i;j . Instead, one solves for p l i;j from the discretized Laplacian (i.e. sum of the nal two approximations in (5.11)). Since p l i;j is represents the unknown for our pressure, we cannot use the rst partial approximations as written above as the appear in the equation for u. Instead, we look at the forward spatial dierence and average it with the backwards spatial dierence above. That is, @ @x p p l i+1;j p l i1;j 2x @ @y p p l i;j+1 p l i;j1 2y : (5.12) 115 Moreover, to ensure continuity of p when discretizing, we must add the term ru t to correct for the absence of p l+1 i;j . Mathematically, this term is 0 since u is divergence free, but it is necessary in our numerical scheme. See [2] and [32] for more details. Finally, as u is a two dimensional vector eld, we will need to get equations for each of its components separately. Due to the numerous indices showing up, we will denote them via u = (v;w) = ( v + ~ v; w + ~ w). In total, this will gives 4 equations to solve. For ~ u and ~ , our unknowns are ~ v l+1 i;j , ~ w l+1 i;j , ~ l+1 i;j . Below, we discretize the stochastic Boussinesq system and then solve each equation for each unknown mentioned previously. ~ v l+1 i;j = ~ v l i;j +t ~ v l i+1;j 2~ v l i;j + ~ v l i1;j x 2 + ~ v l i;j+1 2~ v l i;j + ~ v l i;j1 y 2 ! t( v l i;j + ~ v l i;j ) ( v + ~ v) l i;j ( v + ~ v) l i1;j x ! t( w l i;j + ~ w l i;j ) ( v + ~ v) l i;j ( v + ~ v) l i;j1 y ! t p l i+1;j p l i1;j 2x ! ; (5.13) 116 ~ w l+1 i;j = ~ w l i;j +t ~ w l i+1;j 2 ~ w l i;j + ~ w l i1;j x 2 + ~ v l i;j+1 2~ v l i;j + ~ v l i;j1 y 2 ! t( v l i;j + ~ v l i;j ) ( w + ~ w) l i;j ( w + ~ w) l i1;j x ! t( w l i;j + ~ w l i;j ) ( w + ~ w) l i;j ( w + ~ w) l i;j1 y ! t p l i+1;j p l i1;j 2y + l i;j + ~ l i;j ! ; (5.14) ~ l+1 i;j = ~ l i;j +t ~ l i+1;j 2 ~ l i;j + ~ l i1;j x 2 + ~ l i;j+1 2 ~ l i;j + ~ l i;j1 y 2 ! t( v l i;j + ~ v l i;j ) ( + ~ ) l i;j ( + ~ ) l i1;j x ! t( w l i;j + ~ w l i;j ) ( + ~ ) l i;j ( + ~ ) l i;j1 y ! ; (5.15) p l i;j = (p l i+1;j +p l i1;j )y 2 + (p l i;j+1 +p l i;j1 )x 2 2(x 2 + y 2 ) x 2 y 2 2(x 2 + y 2 ) " 1 t v n i+1;j v n i1;j 2x + w n i+1;j w n i1;j 2y v n i+1;j v n i1;j 2x 2 2 v n i;j+1 v n i;j1 2y w n i+1;j w n i1;j 2x w n i;j+1 w n i;j1 2y 2 # : (5.16) We note that in the last equation, one would write v = v + ~ v and w = w + ~ w. Now that we have the approximations of u n , ~ u n , n , and ~ n , we have our approximations of u n and n by adding them together. With these, the nal step would be to compute the 117 extra term in part n and part n . Due to the fact that u and solve the same heat-type equation, we actually nd that lin n = lin n . We then have part n = lin n R T 0 (A 1+2 1 1 u n ;P 1 n (B 1 (u n ) +P n e 2 ))dt R T 0 jA 1+ 1 1 u n j 2 dt part n = lin n R T 0 (A 1+2 2 2 n ;P 2 n B 2 (u n ; n ))dt R T 0 jA 1+ 2 2 n j 2 dt : (5.17) We investigate part n since it is more involved. Consider (A 1+2 1 1 u n ;P 1 n (B 1 (u n ) +P n e 2 )). 
We rst see (A 1+2 1 u n ;P 1 n B 1 (u n )) = n X k=1 (B 1 (u n );h 1 k )(h 1 k ; n X j=1 1+2 1 j u j h 1 j ) = n X k=1 (B 1 (u n );h 1 k ) 1+2 1 k u k ; (5.18) and additionally (A 1+2 1 1 u n ;P 1 n P n e 2 ) = n X k=1 1+2 1 k u k (P n e 2 ;h 1 k ) = n X k=1 1+2 1 k u k ( n ;h 1 k;2 ) = n X k=1 n X j=1 1+2 1 k 1+2 2 j u k j (h 2 j ;h 1 k;2 ); (5.19) where h 1 k;i is the ith component of h 1 k . Now, we write out the inner product (B 1 (u n );h 1 k ). First, B 1 (u n ) = n X m=1 2 X i=1 u m h 1 m;i @ i u n = n X m=1 2 X i=1 n X j=1 u m u j h 1 m;i @ i h 1 j ; (5.20) 118 so (B 1 (u n );h 1 k ) = n X m=1 2 X i=1 n X j=1 2 X l=1 u m u j (h 1 m;i @ i h 1 j;l ;h 1 k;l ): (5.21) Thus, once we compute the various integrals (h 2 j ;h 1 k;2 ) and (h 1 m;i @ i h 1 j;l ;h 1 k;l ), we compute the sums above to obtain the inner products. These various integrals can be computed explicitly since we know the eigenfunctions. From there, one can discretize the time integral to compute the integrals of u k j and u m u j . 119 Appendix A A.1 Heat and Stokes Equation We make extensive use of asymptotics related to the heat and Stokes equation. Note that formally the two equations are the same, and the following results hold for either one. Because of this, we will use the notation for the equation d u +A udt = X k1 k h k d k ; u(0) = u 0 ; (A.1) again, we may sub for u, for , and k for k . From here, these results come from [4]. Let u k = ( u;h k ). u k solves the Ornstein-Uhlenbeck process d u k + k u k dt = k d k ; u k (0) = u 0k ; (A.2) and one can show it has solution u k (t) = u k (0)e k t + k Z t 0 e k (ts) d k : (A.3) We now state the important asymptotic and regularity results. 120 Lemma A.1.1. Suppose u is a solution of (A.1), and let u n =P n u. (i) Assume 0 < and EjA 0 1=2 u 0 j 2 <1. Then u2L 2 ( ;L 2 loc ([0;1);D(A 0 )))\L 2 ( ;C([0;1);D(A 0 1=2 ))): (A.4) (ii) Suppose u 0 = 0. Then E Z T 0 u 2 k dt T (1+2 ) k 2 T (1+2 ) 1 2 k (1+2 ) ; (A.5) and Var Z T 0 jA u n j 2 dt (3+4 ) k k (3+4 ) : (A.6) (iii) If > , E Z T 0 jA u n j 2 dt T 22 1 1 2(2 2 ) n 22 : (A.7) The proof of (i) can be found using a Galerkin approximation. One computes (ii) and (iii) by the formula (A.3) letting u(0) = 0. A.2 Probability Results We conclude with the statements of versions of the Law of Large Numbers and Central Limit Theorem for stochastic integrals. We state them in the form used in [4]. 121 Lemma A.2.1. (The Law of Large Numbers) Let n , n 1, be a sequence of random variables andb n ,n 1, be an increasing sequence of positive numbers such that lim n!1 b n = +1, and 1 X n=1 Var n b 2 n <1: (A.8) (i) If the random variables n are independent, then lim n!1 P n k=1 ( k E k ) b n = 0 a:s: (A.9) (ii) If instead we assume the n are just uncorrelated, then lim n!1 P n k=1 ( k E k ) b n = 0 in probability. (A.10) Lemma A.2.2. (CLT for Stochastic Integrals) LetS = ( ;F;P;fF t g t0 ;f k g k1 ) be a stochastic basis. Suppose k 2 L 2 ( ;L 2 ([0;T ])) is a sequence of real valued predictable process such that lim n!1 P n k=1 R T 0 2 k dt P n k=1 E R T 0 2 k dt = 1 in probability. (A.11) Then P n k=1 R T 0 k d k P n k=1 E R T 0 2 k dt 1=2 (A.12) converges in distribution to a standard normal random variable as n!1. 122 Reference List [1] Diego Alonso-Oran and Aythami Bethencourt de Leon. On the well-posedness of stochastic boussinesq equations with cylindrical multiplicative noise. arXiv preprint arXiv:1807.09493, 2018. [2] Lorena Barba and Gilbert Forsyth. Cfd python: the 12 steps to navier-stokes equations. 
Journal of Open Source Education, 2(16):21, 2019.
[3] Dongho Chae. Global regularity for the 2D Boussinesq equations with partial viscosity terms. Advances in Mathematics, 203(2):497-513, 2006.
[4] Igor Cialenco and Nathan Glatt-Holtz. Parameter estimation for the stochastically perturbed Navier-Stokes equations. Stochastic Processes and their Applications, 121(4):701-724, 2011.
[5] Peter Constantin and Ciprian Foias. Navier-Stokes Equations. University of Chicago Press, 1988.
[6] Giuseppe Da Prato and Jerzy Zabczyk. Stochastic Equations in Infinite Dimensions. Cambridge University Press, 2014.
[7] Arnaud Debussche, Nathan Glatt-Holtz, and Roger Temam. Local martingale and pathwise solutions for an abstract fluids model. Physica D: Nonlinear Phenomena, 240(14-15):1123-1144, 2011.
[8] Lihuai Du. The local existence of strong solution for the stochastic 3D Boussinesq equations. Boundary Value Problems, 2019(1):42, 2019.
[9] Jinqiao Duan and Annie Millet. Large deviations for the Boussinesq equations under random influences. Stochastic Processes and their Applications, 119(6):2052-2081, 2009.
[10] S. Ghorai. Rayleigh-Bénard convection, Jan 2003.
[11] Nathan Glatt-Holtz, Igor Kukavica, Vlad Vicol, and Mohammed Ziane. Existence and regularity of invariant measures for the three dimensional stochastic primitive equations. Journal of Mathematical Physics, 55(5):051504, 2014.
[12] Nathan Glatt-Holtz and Roger Temam. Cauchy convergence schemes for some nonlinear partial differential equations. Appl. Anal., 90(1):85-102, 2011.
[13] Nathan Glatt-Holtz and Roger Temam. Pathwise solutions of the 2-D stochastic primitive equations. Appl. Math. Optim., 63(3):401-433, 2011.
[14] Nathan Glatt-Holtz and Mohammed Ziane. The stochastic primitive equations in two space dimensions with multiplicative noise. Discrete Contin. Dyn. Syst. Ser. B, 10(4):801-822, 2008.
[15] Nathan Glatt-Holtz and Mohammed Ziane. Strong pathwise solutions of the stochastic Navier-Stokes system. Adv. Differential Equations, 14(5/6):567-600, 2009.
[16] Thomas Y. Hou and Congming Li. Global well-posedness of the viscous Boussinesq equations. Discrete & Continuous Dynamical Systems-A, 12(1):1, 2005.
[17] Weiwei Hu, Igor Kukavica, and Mohammed Ziane. On the regularity for the Boussinesq equations in a bounded domain. Journal of Mathematical Physics, 54(8):081507, 2013.
[18] Yicong Huang. Wiener-Hopf Factorization for Time-inhomogeneous Markov Chains and Statistical Inference for Stochastic Partial Differential Equations. PhD thesis, Illinois Institute of Technology, 2018.
[19] M. Huebner, R. Khasminskii, and B. L. Rozovskii. Two examples of parameter estimation for stochastic partial differential equations. In Stochastic Processes, pages 149-160. Springer, 1993.
[20] M. Huebner, S. V. Lototsky, and B. L. Rozovskii. Asymptotic properties of an approximate maximum likelihood estimator for stochastic PDEs. Statistics and Control of Stochastic Processes (Moscow, 1995/1996), pages 139-155, 1997.
[21] Yong Li and Catalin Trenchea. Existence and ergodicity for the two-dimensional stochastic Boussinesq equation. International Journal of Numerical Analysis and Modeling, 15(4-5):715-728, 2018.
[22] Robert S. Liptser and Albert N. Shiryaev. Statistics of Random Processes: I. General Theory, volume 5. Springer Science & Business Media, 2013.
[23] S. Lototsky. Parameter estimation for stochastic parabolic equations: asymptotic properties of a two-dimensional projection-based estimator. Statistical Inference for Stochastic Processes, 6(1):65-87, 2003.
[24] Sergey V. Lototsky and Boris L. Rosovskii. Spectral asymptotics of some functionals arising in statistical inference for SPDEs. Stochastic Processes and their Applications, 79(1):69-94, 1999.
[25] Sergey V. Lototsky and Boris L. Rosovskii. Parameter estimation for stochastic evolution equations with non-commuting operators. National Academy of Sciences of Ukraine, Kyiv, zbMATH, 2000.
[26] S. V. Lototsky et al. Statistical inference for stochastic parabolic equations: a spectral approach. Publicacions Matematiques, 53(1):3-45, 2009.
[27] Gregor Pasemann and Wilhelm Stannat. Drift estimation for stochastic reaction-diffusion systems. arXiv preprint arXiv:1904.04774, 2019.
[28] Gregor Pasemann, Wilhelm Stannat, et al. Drift estimation for stochastic reaction-diffusion systems. Electronic Journal of Statistics, 14(1):547-579, 2020.
[29] Joseph Pedlosky et al. Geophysical Fluid Dynamics, volume 710. Springer, 1987.
[30] Zhaoyang Qiu and Yanbin Tang. Strong pathwise solution and large deviation principle for the stochastic Boussinesq equations with partial diffusion term. arXiv e-prints, pages arXiv-1906, 2019.
[31] B. L. S. Prakasa Rao. Parameter estimation for a two-dimensional stochastic Navier-Stokes equation driven by an infinite dimensional fractional Brownian motion. Random Operators and Stochastic Equations, 21(1):37-52, 2013.
[32] Roger Temam. Navier-Stokes Equations and Nonlinear Functional Analysis, volume 66. SIAM, 1995.
[33] Roger Temam. Navier-Stokes Equations: Theory and Numerical Analysis, volume 343. American Mathematical Soc., 2001.
[34] Pu Xueke and Guo Boling. Global well-posedness of the stochastic 2D Boussinesq equations with partial viscosity. Acta Mathematica Scientia, 31(5):1968-1984, 2011.
[35] Chunde Yang and Xin Zhao. Perturbation of stochastic Boussinesq equations with multiplicative white noise. Discrete Dynamics in Nature and Society, 2013, 2013.
Abstract
We investigate the parameter estimation problem for several stochastic partial differential equations arising in fluid dynamics. These include the fully and partially viscous stochastic Boussinesq system in both two and three dimensions, the stochastic Navier-Stokes equations in three dimensions, the stochastic primitive equation in two dimensions, and a linearized Navier-Stokes equation in both two and three dimensions. Depending on the equation, either Dirichlet or periodic boundary conditions are imposed. Using an approach based on maximum likelihood estimation, we construct three classes of estimators assuming our observations are the first n Fourier modes of the solutions to the above equations on a finite time interval. We prove consistency in all cases and asymptotic normality for one of the classes. We provide some numerical simulations for the stochastic Stokes equation and explain how one could extend them to more complicated systems.