SOME MATHEMATICAL PROBLEMS FOR THE STOCHASTIC NAVIER STOKES EQUATIONS

by

Kerem Uğurlu

A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the Requirements for the Degree
DOCTOR OF PHILOSOPHY
(APPLIED MATHEMATICS)

May 2016

Copyright 2016 Kerem Uğurlu

Acknowledgements

I would first and foremost like to thank my thesis advisor, Mohammed Ziane, for his extraordinary brilliance and warm heart throughout this thesis process. At every level of my student life, I have not met anyone who can explain mathematics more clearly and more intuitively than he does. If I started to write a detailed acknowledgement to Remigijus Mikulevicius, I would have to write another thesis on that subject, so I simply state here that I am very grateful to him for all the mathematics he has taught me. I thank Igor Kukavica for insightful discussions about my thesis. I also thank Jianfeng Zhang for serving on my committee and for his excellent classes. Likewise, I thank Sergey Lototsky for serving on my committee and for many important hints and much advice about academic life. I also thank Yılmaz Koçer for serving on my committee. I also thank Peter Baxendale for his classes and for his excellent jokes. Last, but definitely not least, I thank Susan Montgomery, Director of Graduate Studies, for helping me whenever I needed her help with bureaucratic procedures.

Table of Contents

Acknowledgements
Abstract
Chapter 1: Notation and Functional Setting
    1 Introduction
    2 Deterministic Framework
    3 Notation
    4 Stochastic Framework
Chapter 2: Background Work and Summary
Chapter 3: Timewise Galerkin Approximations and Norm Estimates
    1 Introduction
    2 Main Results in this Chapter
    3 Timewise Galerkin Convergence in V
    4 Timewise Galerkin Convergence in H
    5 The Zygmund norm estimate in H
Chapter 4: A Linearized Scheme for Timewise Approximation of the SNSE
    1 Introduction
    2 Main Results
    3 Proof of the Main Results
Chapter 5: Timewise Optimal Feedback Control for the SNSE
    1 Introduction
    2 Main Result
    3 Proof of the Main Result
Chapter 6: Appendix
Bibliography

Abstract

This thesis collects three interrelated projects on the time-wise approximation, moment bounds, and control of the stochastic Navier-Stokes equations in a smooth non-periodic bounded domain $\mathcal{O}\subset\mathbb{R}^2$ with multiplicative noise. First, we show that in an open bounded domain $\mathcal{O}$ we have
\[
\lim_{n\to\infty} E\Big[\sup_{t\in[0,T]} \varphi_1\big(\|u(t)-u_n(t)\|_V^2\big)\Big]=0
\]
for any deterministic time $T>0$ and for the moment function $\varphi_1(x)=(\log(1+x))^{1-\epsilon}$ with $0<\epsilon<1$, where $u_n(t,x)$ is the Galerkin approximation of the solution $u(t,x)$.
Similarly, we prove that
\[
\lim_{n\to\infty} E\Big[\sup_{t\in[0,T]} \varphi_2\big(\|u(t)-u_n(t)\|_H^p\big)\Big]=0
\]
for any $p>0$, for the function $\varphi_2(x)=x^{1-\epsilon}$ with $0<\epsilon<1$ and for any deterministic time $T>0$. Finally, we show that
\[
E\Big[\sup_{0\le t\le T}\exp\big(\|u\|_H/K\big)\Big]<\infty
\]
for a constant $K$, under specified regularity assumptions on the initial data.

Second, we show that a special linearized scheme $\{u_n\}_{n\ge 1}$ for approximating the SNSE gives the same convergence results that we obtain through the Galerkin approximation. Namely, we show that
\[
\lim_{n\to\infty} E\Big[\sup_{t\in[0,T]} \varphi_1\big(\|u(t)-u_n(t)\|_V^2\big)\Big]=0
\]
for any deterministic time $T>0$ and the moment function $\varphi_1(x)$. Moreover, we prove that
\[
\lim_{n\to\infty} E\Big[\sup_{t\in[0,T]} \varphi_2\big(\|u(t)-u_n(t)\|_H^p\big)\Big]=0
\]
for any $p>0$, for the function $\varphi_2(x)$ and for any deterministic time $T>0$, where
\[
\varphi_1(x)=(\log(1+x))^{1-\epsilon}, \qquad \varphi_2(x)=x^{1-\epsilon},
\]
with $0<\epsilon<1$.

Third, we solve an optimal control problem verifying the existence of feedback controls that are optimal in the $\sup_{t\in[0,T]}$ sense. Namely, we show that there exists an optimal feedback control for a specific type of cost functional
\[
J(\pi)=E\sup_{t\in[0,T]}\varphi\big(L[t,u^{\pi}(t),\pi(t)]\big)
\]
for the SNSE in 2D on an open bounded non-periodic domain $\mathcal{O}$, given that the control set $\mathcal{U}$ is compact, where $\varphi(x)=(\log(1+x))^{1-\epsilon}$ with $0<\epsilon<1$.

Chapter 1
Notation and Functional Setting

1 Introduction

We consider the stochastic Navier-Stokes equations (SNSE) in 2D in a smooth non-periodic bounded domain $\mathcal{O}\subset\mathbb{R}^2$ with a multiplicative white noise,
\[
\partial_t u + (u\cdot\nabla)u - \nu\Delta u + \nabla p = f + g(u)\,dW, \qquad (1.1)
\]
\[
\nabla\cdot u = 0, \qquad (1.2)
\]
\[
u(0)=u_0, \qquad (1.3)
\]
(cf. [BKL, CG, C, CP, DD, FG, FR, GV, M, MR2, MS, O, S]), with $u_0\in L^4(\Omega;H)\cap L^2(\Omega;V)$ and the Dirichlet boundary condition $u=0$ on $[0,\infty)\times\partial\mathcal{O}$. Here $u=(u_1,u_2)$ represents the velocity field, $p$ represents the pressure, $\nu$ stands for the viscosity, and $f$ stands for the deterministic force. Moreover, $g(u)W=\sum_k g_k(u)e_kW_k$ stands for the noise term driven by an infinite-dimensional Brownian motion, where each $W_k$ is a standard one-dimensional Brownian motion and the $g_k(u)$ are the corresponding Lipschitz coefficients. First, we recall the deterministic and probabilistic framework used throughout the thesis.

2 Deterministic Framework

Let $\mathcal{O}$ be a smooth bounded open connected subset of $\mathbb{R}^2$, and let $\mathcal{V}=\{u\in C_0^\infty(\mathcal{O}) : \nabla\cdot u=0\}$. Denote by $H$ and $V$ the closures of $\mathcal{V}$ in $L^2(\mathcal{O})$ and $H^1(\mathcal{O})$, respectively. The spaces $H$ and $V$ are identified as
\[
H=\{u\in L^2(\mathcal{O}) : \nabla\cdot u=0,\ u\cdot N|_{\partial\mathcal{O}}=0\}, \qquad (2.1)
\]
\[
V=\{u\in H^1_0(\mathcal{O}) : \nabla\cdot u=0\} \qquad (2.2)
\]
(cf. [CF2, T]). Here $N$ is the outward-pointing normal to $\partial\mathcal{O}$. On $H$ we take the $L^2(\mathcal{O})$ inner product and norm
\[
\langle u,v\rangle=\int_{\mathcal{O}} u\cdot v\,dx, \qquad \|u\|_H=\sqrt{\langle u,u\rangle}. \qquad (2.3)
\]
Let $P_H$ be the Leray-Hopf projector of $L^2(\mathcal{O})$ onto $H$. Recall that for $u\in L^2(\mathcal{O})$ we have $P_Hu=(1-Q_H)u$, where $Q_Hu=\nabla\phi_1+\nabla\phi_2$ and $\phi_1,\phi_2\in H^1(\mathcal{O})$ are solutions of the problems
\[
\Delta\phi_1=\nabla\cdot u \ \text{in } \mathcal{O}, \qquad \phi_1=0 \ \text{on } \partial\mathcal{O}, \qquad (2.4)
\]
and
\[
\Delta\phi_2=0 \ \text{in } \mathcal{O}, \qquad \nabla\phi_2\cdot N=(u-\nabla\phi_1)\cdot N \ \text{on } \partial\mathcal{O} \qquad (2.5)
\]
([CF2, T]). Let
\[
A=-P_H\Delta \qquad (2.6)
\]
be the Stokes operator with domain $\mathcal{D}(A)=V\cap H^2(\mathcal{O})$. The dual of $V=\mathcal{D}(A^{1/2})$ with respect to $H$ is denoted by $V'=\mathcal{D}(A^{-1/2})$. Here $A$ is defined as a bounded linear map from $V$ to $V'$ via
\[
\langle Au,v\rangle=\int_{\mathcal{O}} \nabla u\cdot\nabla v\,dx, \qquad u,v\in V,
\]
with the corresponding norm defined as
\[
\|u\|_V^2=\langle Au,u\rangle=\langle A^{1/2}u,A^{1/2}u\rangle, \qquad u\in V.
\]
By the theory of symmetric compact operators applied to $A^{-1}$, there exists an orthonormal basis $\{e_k\}$ of $H$ consisting of eigenfunctions of $A$. The corresponding eigenvalues $\{\lambda_k\}$ form an increasing, unbounded sequence
\[
0<\lambda_1\le\lambda_2\le\cdots\le\lambda_n\le\cdots.
\]
We also define the nonlinear term as a bilinear mapping from $V\times V$ to $V'$ via
\[
B(u,v)=P_H\big((u\cdot\nabla)v\big).
\]
The deterministic force $f$ is assumed to be bounded with values in $H$. Note that the cancellation property $\langle B(u,v),v\rangle=0$ holds for $u,v\in V$.
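Since this cancellation property is used repeatedly in the energy estimates below, it may be helpful to record the short standard computation behind it (a sketch under the setting above; the projector $P_H$ drops out because $v\in H$ and $P_H$ is the orthogonal projection onto $H$):
\[
\langle B(u,v),v\rangle
=\int_{\mathcal{O}}\big((u\cdot\nabla)v\big)\cdot v\,dx
=\frac12\int_{\mathcal{O}} u\cdot\nabla|v|^2\,dx
=-\frac12\int_{\mathcal{O}}(\nabla\cdot u)\,|v|^2\,dx
=0,
\]
where the boundary term in the integration by parts vanishes because $v\in V$ vanishes on $\partial\mathcal{O}$ (and $u\cdot N=0$ there), and the last integral vanishes because $\nabla\cdot u=0$.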
3 Notation

Let $(S,\|\cdot\|_S)$ be a Banach space. We denote by $L^2_S(\Omega)$ the linear space of all functions $u:\Omega\to S$ that are $\mathcal{F}$-measurable and satisfy $E\|u\|_S^2<\infty$. We also denote by $L^2_S(\Omega\times[0,T])$ the linear space of all processes $u:\Omega\times[0,T]\to S$ that are $\mathcal{F}\otimes\mathcal{B}_{[0,T]}$-measurable, adapted to the filtration $(\mathcal{F}_t)_{t\in[0,T]}$, and satisfy $E\int_0^T\|u\|_S^2\,dt<\infty$. Weak convergence is denoted by $\rightharpoonup$.

4 Stochastic Framework

In this section, we recall the background material on stochastic analysis in infinite dimensions needed in this thesis (cf. [DZ, DGT, F, PR]). Fix a stochastic basis $\mathcal{S}=(\Omega,\mathcal{F},P,\{\mathcal{F}_t\},W)$, which consists of a complete probability space $(\Omega,P)$ equipped with a complete right-continuous filtration $\mathcal{F}_t$ and a cylindrical Brownian motion $W$, defined on a separable Hilbert space $U$ and adapted to this filtration. Given a separable Hilbert space $X$, we denote by $L_2(U,X)$ the space of Hilbert-Schmidt operators from $U$ to $X$, equipped with the norm $\|G\|_{L_2(U,X)}=\big(\sum_k\|Ge_k\|_X^2\big)^{1/2}$, the sum being taken over an orthonormal basis $\{e_k\}$ of $U$ [DZ]. For an $X$-valued predictable process $G\in L^2(\Omega;L^2_{\mathrm{loc}}([0,\infty);L_2(U,X)))$, we define the Itô stochastic integral
\[
\int_0^t G\,dW=\sum_k\int_0^t G_k\,dW_k, \qquad (4.1)
\]
which lies in the space of $X$-valued square-integrable martingales. We also recall the Burkholder-Davis-Gundy inequality: for all $p\ge 1$ we have
\[
E\Big[\sup_{t\in[0,T]}\Big\|\int_0^t G\,dW\Big\|_X^p\Big]\le C\,E\Big[\Big(\int_0^T\|G\|_{L_2(U,X)}^2\,dt\Big)^{p/2}\Big] \qquad (4.2)
\]
for some $C=C(p)>0$. Given a pair of Banach spaces $X$ and $Y$, we denote by $\mathrm{Lip}_u(X,Y)$ the collection of continuous functions $h:[0,\infty)\times X\to Y$ which are sublinear,
\[
\|h(t,x)\|_Y\le K_Y(1+\|x\|_X), \qquad t\ge 0,\ x\in X, \qquad (4.3)
\]
and Lipschitz,
\[
\|h(t,x)-h(t,y)\|_Y\le K_Y\|x-y\|_X, \qquad t\ge 0,\ x,y\in X, \qquad (4.4)
\]
for some constant $K_Y>0$ independent of $t$. The noise term $g(u)\,dW$, where
\[
g=\{g_k\}_{k\ge 1}:[0,\infty)\times H\to L_2(U,H), \qquad (4.5)
\]
satisfies
\[
\|g(t,x)\|_{L_2(U,\mathcal{D}(A^{j/2}))}\le K_j\big(1+\|x\|_{\mathcal{D}(A^{j/2})}\big) \quad\text{for } j\in\{0,1,2\} \qquad (4.6)
\]
and
\[
\|g(t,x)-g(t,y)\|_{L_2(U,\mathcal{D}(A^{j/2}))}\le K_j\|x-y\|_{\mathcal{D}(A^{j/2})} \quad\text{for } j\in\{0,1,2\}. \qquad (4.7)
\]
In particular, we have
\[
g\in \mathrm{Lip}_u(H,L_2(U,H))\cap \mathrm{Lip}_u(V,L_2(U,V))\cap \mathrm{Lip}_u(\mathcal{D}(A),L_2(U,\mathcal{D}(A))). \qquad (4.8)
\]
Given $u\in L^2(\Omega;L^2([0,T];H))$ and $g$ as above, the stochastic integral $\int_0^t g(u)\,dW$ is a well-defined $H$-valued Itô stochastic integral that is predictable and such that
\[
\Big\langle\int_0^t g(u)\,dW,v\Big\rangle=\sum_k\int_0^t\langle g_k(u),v\rangle\,dW_k
\]
holds for all $v\in H$.

We consider solutions that are strong in the PDE sense, i.e., bounded in time with values in $V$ and square-integrable in time with values in $\mathcal{D}(A)$, and strong in the probabilistic sense, i.e., the driving noise and the filtration are given in advance.

Definition 1.4.1. Let $g$ be as in (4.8) and predictable, and let $f\in L^\infty(\Omega;L^4([0,T);V'))$ be predictable. Assume that the initial data $u_0\in L^4(\Omega;H)\cap L^2(\Omega;V)$ is $\mathcal{F}_0$-measurable. We call a process $(u(t))_{t\ge 0}$ in $L^2_V(\Omega\times[0,T])$ with $E\|u(t)\|_H^2<\infty$ for all $t\in[0,T]$ a solution of the SNSE if it satisfies
\[
\langle u(t),v\rangle+\int_0^t\langle Au(s),v\rangle\,ds=\langle u_0,v\rangle-\int_0^t\langle B(u,u),v\rangle\,ds+\int_0^t\langle f,v\rangle\,ds+\int_0^t\langle g(u),v\rangle\,dW_s \qquad (4.9)
\]
for all $v\in V$, all $t\in[0,T]$, and almost every $\omega\in\Omega$, where the stochastic integral is understood in the Itô sense.

Definition 1.4.2. Let $g$ be as in (4.8) and predictable, and let $f\in L^\infty(\Omega;L^4([0,T);V'))$ be predictable. Assume that the initial data $u_0\in L^4(\Omega;H)\cap L^2(\Omega;V)$ is $\mathcal{F}_0$-measurable. The pair $(u,\tau)$ is called a pathwise strong solution of the system if $\tau$ is a strictly positive stopping time and $u(\cdot\wedge\tau)$ is a predictable process in $H$ such that
\[
u(\cdot\wedge\tau)\in L^2(\Omega;C([0,T];V)) \qquad (4.10)
\]
with
\[
u\,1_{t\le\tau}\in L^2(\Omega;L^2([0,T];\mathcal{D}(A))) \qquad (4.11)
\]
and if
\[
\langle u(t\wedge\tau),v\rangle+\int_0^{t\wedge\tau}\langle Au+B(u,u)-f,v\rangle\,dt=\langle u_0,v\rangle+\sum_k\int_0^{t\wedge\tau}\langle g_k(u),v\rangle\,dW_k \qquad (4.12)
\]
holds for every $v\in H$. Moreover, $(u,\tau)$ is called a maximal pathwise strong solution if $\tau$ is a strictly positive stopping time and there exists a non-decreasing sequence of stopping times $\tau_n$ with $\tau_n\to\tau$ such that each $(u,\tau_n)$ is a local strong solution and
\[
\sup_{t\in[0,\tau_n]}\|u\|_V^2+\int_0^{\tau_n}\|Au\|_H^2\,dt\ge n \qquad (4.13)
\]
on the set $\{\tau<\infty\}$. Such a solution is called global if $P(\tau<\infty)=0$.

We proceed with the definition of the Galerkin system.

Definition 1.4.3. An adapted process $u_n$ in $C([0,T];H_n)$, where $H_n=\mathrm{span}\{e_1,\dots,e_n\}$, is a solution to the Galerkin system of order $n$ if for any $v$ in $H_n$,
\[
d\langle u_n,v\rangle+\langle Au_n+B(u_n),v\rangle\,dt=\langle f,v\rangle\,dt+\sum_{k=1}^\infty\langle g_k(u_n),v\rangle\,dW_k,
\qquad
\langle u_n(0),v\rangle=\langle u_0,v\rangle. \qquad (4.14)
\]
We may also rewrite (4.14) as an equation in $H_n$, i.e.,
\[
du_n+\big(Au_n+P_nB(u_n)\big)\,dt=P_nf\,dt+\sum_{k=1}^\infty P_ng_k(u_n)\,dW_k,
\qquad
u_n(0)=P_nu_0=u_0^n.
\]
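The Galerkin system (4.14) is a finite-dimensional Itô SDE, so it can in principle be integrated numerically once a basis is fixed. The following minimal sketch (not part of the thesis) shows one Euler-Maruyama step for a generic spectral Galerkin truncation written in the Stokes eigenbasis; the eigenvalues `lam`, the projected nonlinearity `B_n`, the projected force `f_n`, and the projected noise coefficients `g_n` are hypothetical placeholders that a concrete discretization would have to supply, and the infinite sum over $k$ is truncated to finitely many noise modes.

```python
import numpy as np

def euler_maruyama_galerkin_step(u, dt, lam, f_n, B_n, g_n, rng):
    """One Euler-Maruyama step for the Galerkin system
        du_n + (A u_n + P_n B(u_n)) dt = P_n f dt + sum_k P_n g_k(u_n) dW_k,
    written in the Stokes eigenbasis, so that A acts diagonally through `lam`.

    u    : (n,) current Galerkin coefficients of u_n
    lam  : (n,) Stokes eigenvalues
    f_n  : (n,) projected deterministic force
    B_n  : callable, B_n(u) -> (n,) projected bilinear term (placeholder)
    g_n  : callable, g_n(u) -> (n, m) projected noise coefficients for the
           m retained Brownian modes (placeholder)
    """
    G = g_n(u)                                     # (n, m) diffusion matrix
    dW = rng.normal(0.0, np.sqrt(dt), G.shape[1])  # m Brownian increments
    drift = -lam * u - B_n(u) + f_n                # -(A u_n + P_n B(u_n)) + P_n f
    return u + drift * dt + G @ dW

# Toy run with made-up data: n = 4 modes, m = 2 noise modes, B switched off.
rng = np.random.default_rng(0)
lam = np.array([1.0, 2.0, 3.0, 4.0])
f_n = np.zeros(4)
B_n = lambda u: np.zeros_like(u)
g_n = lambda u: 0.1 * (1.0 + np.abs(u))[:, None] * np.ones((4, 2))  # sublinear, Lipschitz
u = np.ones(4)
for _ in range(1000):
    u = euler_maruyama_galerkin_step(u, 1e-3, lam, f_n, B_n, g_n, rng)
```

The point of the sketch is only the structure of (4.14): a diagonal linear drift, a projected nonlinearity, and a truncated stochastic sum; it makes no claim about the convergence results studied in this thesis.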
Chapter 2
Background Work and Summary

In this chapter, we briefly outline the background work leading to our results and summarize the results proved in the subsequent chapters. The parallel works that deal with convergence and control criteria in the $L^2_V(\Omega\times[0,T])$ sense, rather than with the supremum over the whole time interval $[0,T]$, are [B, B3, B2, B4]. We briefly explain these results.

In [B3], the main results are as follows.

Theorem 2.0.1. The equation in Definition 1.4.1 has a solution in the space $L^2_V(\Omega\times[0,T])$. The solution is unique almost surely and has almost surely continuous trajectories in $H$.

Theorem 2.0.2. Let $u$ and $u_n$ be the solution of the SNSE as in Definition 1.4.1 and the Galerkin approximation as in Definition 1.4.3, respectively. If $E\|u_0\|_H^4<\infty$ and the initial data are regular enough, then for each fixed time $T$ the following convergence holds:
\[
E\int_0^T\|u-u_n\|_V^2\,dt\to 0 \qquad (0.1)
\]
as $n\to\infty$.

Secondly, in [B2] a linearized scheme is proposed. Namely, for each $n=0,1,2,3,\dots$, we consider the linear evolution equation with multiplicative noise
\[
\langle u_n(t),v\rangle+\int_0^t\langle Au_n(s),v\rangle\,ds=\langle u_0,v\rangle-\int_0^t\langle B(u_{n-1}(s),u_n(s)),v\rangle\,ds+\int_0^t\langle f,v\rangle\,ds+\int_0^t\langle g(u_{n-1}),v\rangle\,dW_s \qquad (0.2)
\]
for all $v\in H$ and $t\in[0,T]$, for almost every $\omega\in\Omega$, where we let $u_0(t)\equiv u_0$ be an $H$-valued $\mathcal{F}_0$-measurable random variable with $u_0\in L^4(\Omega;H)\cap L^2(\Omega;V)$. The reason to work with this evolution scheme is that the Galerkin method is useful for proving the existence of the solution $u$ of the SNSE, but from a numerical perspective it is complicated to implement because of the nonlinear terms. Since $B(u_{n-1},u_n)$ is linear in the unknown $u_n$, as opposed to the Galerkin approximation, this linearized scheme appears to be more efficient than the Galerkin approximation; see the sketch after the next theorem statements. The main results in [B2] are as follows.

Theorem 2.0.3. For each $n\in\mathbb{N}_0$, equation (0.2) has an almost surely unique solution $u_n\in L^2_V(\Omega\times[0,T])$ with almost surely continuous trajectories in $H$.

Theorem 2.0.4. The following convergences hold:
\[
\lim_{n\to\infty}E\int_0^T\|u-u_n\|_V^2\,ds=0 \qquad (0.3)
\]
and
\[
\lim_{n\to\infty}E\|u(t)-u_n(t)\|_H^2=0 \qquad (0.4)
\]
for all $t\in[0,T]$.
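To make the structural difference concrete, the following toy sketch (not from the thesis; all operators are hypothetical finite-dimensional stand-ins) shows why (0.2) is attractive numerically: with $u_{n-1}$ frozen, each iterate $u_n$ solves a linear problem, so one step reduces to a linear solve, whereas a Galerkin step has to handle $B(u_n,u_n)$ nonlinearly.

```python
import numpy as np

def convection_matrix(w):
    # Hypothetical skew-symmetric stand-in for the linear map v -> B(w, v);
    # skew-symmetry mimics the cancellation property <B(w, v), v> = 0.
    n = len(w)
    S = np.triu(np.outer(w, np.ones(n)), 1)
    return S - S.T

def linearized_step(u_prev_iter, u_old, dt, A, f, noise):
    """Implicit Euler step of the linearized scheme: with the previous iterate
    u_{n-1} frozen, solve the linear system
        (I + dt * (A + B(u_{n-1}, .))) u_n = u_old + dt * f + noise."""
    M = np.eye(len(u_old)) + dt * (A + convection_matrix(u_prev_iter))
    return np.linalg.solve(M, u_old + dt * f + noise)

# Iterate over n at a single time step with toy data; u_0(t) = u_0.
rng = np.random.default_rng(1)
dim, dt = 6, 1e-2
A = np.diag(np.arange(1.0, dim + 1))        # stand-in for the Stokes operator
f = np.zeros(dim)
u_old = rng.normal(size=dim)
noise = 0.05 * np.sqrt(dt) * rng.normal(size=dim)
u = u_old.copy()
for n in range(20):
    u = linearized_step(u, u_old, dt, A, f, noise)
```

Each pass through the loop corresponds to increasing $n$ in (0.2); a Galerkin discretization, by contrast, would require a nonlinear solve (or an explicit treatment of $B(u_n,u_n)$) at every time step.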
Thirdly, [B] considers the SNSE controlled by linear and continuous feedback controls as well as by bounded controls. Denote by $\mathcal{U}$ the set of all admissible linear and continuous controls, which is assumed to be the set of all functions $\pi:[0,T]\times H\to H$ satisfying the following conditions: for all $t\in[0,T]$ we have $\pi(t,\cdot)\in L(H)$, where $L(H)$ is the space of continuous linear feedback controls, and
\[
\|\pi(t_1,x_1)-\pi(t_2,x_2)\|_H^2\le \alpha|t_1-t_2|^2+\beta\|x_1-x_2\|_H^2
\]
for all $t_1,t_2\in[0,T]$ and $x_1,x_2\in H$, where $\alpha,\beta>0$ are given constants. In [B], the solution $u^\pi$ of the SNSE with the corresponding control force satisfies
\[
\langle u(t),v\rangle+\int_0^t\langle Au(s),v\rangle\,ds=\langle u_0,v\rangle-\int_0^t\langle B(u(s),u(s)),v\rangle\,ds+\int_0^t\langle \pi(s,u^\pi),v\rangle\,ds+\int_0^t\langle g(u),v\rangle\,dW_s \qquad (0.5)
\]
for all $v\in V$ and $t\in[0,T]$, almost every $\omega\in\Omega$, for feedback controls $\pi\in\mathcal{U}$. The following cost functional is considered in [B]:
\[
J(\pi)=E\int_0^T L[s,u^\pi,\pi(s,u^\pi)]\,ds+E\,K[u^\pi(T)], \qquad (0.6)
\]
with $\pi\in\mathcal{U}$, where $L:[0,T]\times V\times H\to\mathbb{R}_+$ and $K:H\to\mathbb{R}_+$ satisfy
\[
|L(t,x_1,y_1)-L(t,x_2,y_2)|\le C\big(\|x_1-x_2\|_H^2+\|y_1-y_2\|_H^2\big), \qquad (0.7)
\]
\[
|K(x_1)-K(x_2)|\le C\|x_1-x_2\|_H^2, \qquad (0.8)
\]
for all $t\in[0,T]$ and $x_1,x_2,y_1,y_2\in H$, where $C$ is a constant. We state the following result from [B4].

Lemma 2.0.5. Let $\{\pi_n\}_{n\ge 1}$ be a sequence in $\mathcal{U}$ and let $\pi\in\mathcal{U}$ be such that
\[
\lim_{n\to\infty}\int_0^T\|\pi_n-\pi\|_H^2\,dt=0; \qquad (0.9)
\]
then we have $\lim_{n\to\infty}J(\pi_n)=J(\pi)$.

To extend these results to the time-wise, i.e. $\sup_{t\in[0,T]}$, setting, we first need the expression $\sup_{t\in[0,T]}\|u\|_V^2$ to be well defined. For that, the global existence of the SNSE in 2D up to a sequence of stopping times $\{\tau_n\}_{n\ge 0}$, in the sense of Definition 1.4.2, is proven in [GZ]. There, it is shown that for any such stopping time $\tau$,
\[
E\Big[\sup_{t\in[0,\tau]}\|u\|_V^2\Big]<\infty \qquad\text{and}\qquad E\Big[\sup_{t\in[0,\tau]}\|u-u_n\|_V^2\Big]\to 0
\]
as $n\to\infty$. They left open the question of whether the stopping time above can be replaced by a deterministic time $T$, namely whether
\[
E\Big[\sup_{t\in[0,T]}\|u\|_V^2\Big]<\infty, \qquad E\Big[\sup_{t\in[0,T]}\|u-u_n\|_V^2\Big]\to 0
\]
for a deterministic time $T$. The importance of this problem is that, for a deterministic time $T$, such moment bounds can be used to study statistical properties of the long-time behavior of the SNSE. A partial answer on bounding $E[\sup_{t\in[0,T]}\|u\|_V^2]$ was given in [KV], where the authors showed that
\[
E\Big[\sup_{t\in[0,T]}\log\big(1+\log(1+\|u\|_V^2)\big)\Big]<\infty.
\]
The main difficulty in bounding $E[\sup_{t\in[0,T]}\|u\|_V^2]$ lies in estimating the nonlinear term $B(u,u)$: as opposed to the periodic setting, on a bounded domain we do not have $\langle B(u,u),Au\rangle=0$ for $t>0$. In our work [KUZ], as well as in Chapter 3, we replace the estimating function $\varphi(x)=\log(1+\log(1+x))$ with the stronger function $\varphi(x)=\log(1+x)$. Moreover, we show via a uniform integrability argument that the Galerkin approximations $\{u_n\}_{n\ge 1}$ converge to the solution $u$ of the SNSE in 2D with $\varphi(x)=(\log(1+x))^{1-\epsilon}$ as well. Namely, we have proved that
\[
E\Big[\sup_{t\in[0,T]}\log(1+\|u\|_V^2)\Big]<\infty, \qquad
E\Big[\sup_{t\in[0,T]}\big(\log(1+\|u-u_n\|_V^2)\big)^{1-\epsilon}\Big]\to 0
\]
as $n\to\infty$ for any deterministic time $T$ and for $0<\epsilon<1$. Furthermore, when we work with the $H$-norm $\|u\|_H$, by using $\langle B(u,u),u\rangle=0$ and increasing the regularity of the initial data accordingly, we have also shown that
\[
E\Big[\sup_{t\in[0,T]}\|u-u_n\|_H^p\Big]\to 0
\]
for any $p>0$, where $u_n$ is the Galerkin approximation of $u$. Finally, we have proved that
\[
E\Big[\sup_{0\le t\le T}\exp\big(\|u\|_H/K\big)\Big]<\infty
\]
for a constant $K$, provided the initial data is essentially bounded as well.

Secondly, we adapt the alternative linearized scheme of [B2] to our framework and prove the analogous results. Namely, using this linearized scheme instead of the Galerkin approximation, we show the global existence of the SNSE in 2D up to a sequence of stopping times $\{\tau_n\}_{n\ge 0}$ which converges to infinity almost surely. Moreover, we prove, analogously to our previous work, that
\[
E\Big[\sup_{t\in[0,T]}\big(\log(1+\|u-u_n\|_V^2)\big)^{1-\epsilon}\Big]\to 0
\]
as $n\to\infty$, and by increasing the regularity of the initial data we show that
\[
E\Big[\sup_{t\in[0,T]}\|u-u_n\|_H^p\Big]\to 0
\]
for any $p>0$.
Our third work in this direction is the investigation of the existence of optimal feedback controls for the minimization of the vorticity in the SNSE, controlled by dierent external forces. Namely, similar to [B], we consider the SNSE with multi- plicative noise hu(t);vi + Z t 0 hAu(s);vids =hu 0 ;vids + Z t 0 hB(u(s);u(s));vids + Z t 0 h(t;u );vids + Z t 0 hg(u;v)idW s ; (0.10) 15 for allv2V ,t2 [0;T ], for almost every!2 , by the feedback controls2U.,where we assume that the initial datau 0 2L 4 ( ;H)\L 2 ( ;V ) isF 0 measurable. Moreover, we also suppose that the set of feedback controls U satisfy: k(!;t)k V K; for all !t2 [0;T ]: k(t;x 1 )(t 2 ;x 2 )k 2 V C 1 jt 1 t 2 j 2 +C 2 kx 1 x 2 k 2 V ; for all t2 [0;T ] and x 1 ;x 2 2V . We consider the following cost functional J() =E sup t2[0;T ] ('(L[t;u (t);(t)])) whereL : [0;T ]VH!R + satises the followings: jL(t;x 1 ;y 1 )L(t;x 2 ;y 2 )jC(kx 1 x 2 k 2 V +ky 1 y 2 k 2 H ) denoting '(x) = log(1 +x) 1 ; with 0 < < 1. Based on these assumptions and framework, we have proved the existence of an optimal control, namely we have shown that there exists an optimal feedback control satisfying J( ) = min 2U J() 16 Chapter 3 Timewise Galerkin Approximations and Norm Estimates 1 Introduction This chapter is based on [KUZ]. We address the convergence properties of the Galerkin approximation to the stochastic Navier-Stokes equations and obtain new estimates on the convergence in the strong norm. Namely, the goal of this chapter is to address the convergence of the Galerkin pointwise in time for the V norm in the case of a non-periodic domain with Dirichlet boundary conditions. In this case, it is easy to obtain results in this direction up to a suitable stopping time. However, the expected value of the second moment of the normku(t)k 2 V for any xed non-random timet is an open problem. By the same token, it is not known whether the expected value ofku(t)u n (t)k 2 V converges to 0 as n!1. A step toward establishing the niteness of the expected value ofku(t)k 2 V for t > 0 was obtained in [KV], where it was proven that E sup 0tT ~ (kuk 2 V ) <1 (1.1) 17 where ~ () = log(2 + log(2 +));2 (0;1) (1.2) The aim of this chapter is twofold. First, we strengthen the main result in [KV] by showing that (1.1) holds with () = log(2 +) (1.3) instead of ~ (cf. Theorem 3.3.4 below). The second goal is to obtain convergence of the Galerkin approximation in the V norm. Namely, we prove that E " sup [0;T ] (kuu n k 2 V ) 1 # ! 0 (1.4) as n!1 for all > 0. In addition, we obtain two results on the 2D SNSE of independent interest. The rst result states that E[ sup t2[0;T ] k(u(t)u n (t))k q H ]! 0 for all q 2 while the second concerns niteness of the Zygmund type norm E sup 0tT exp kuk H K (1.5) for K suciently large. 18 2 Main Results in this Chapter The rst main result is related to the convergence of Galerkin approximations in the V norm. Theorem 3.2.1. Let 2 (0; 1) and let T > 0 be arbitrary. Suppose that u is a solu- tion to the equation as in Denition 1.4.1, and let u n be the corresponding Galerkin approximation as in Denition 1.4.3. Then we have E " sup [0;T ] 1 (kuu n k 2 V ) # ! 0; (2.1) with 1 (x) = (log(1 +x)) 1 , as n!1. Our second statement gives the convergence in theH norm of the Galerkin approx- imations u n on the whole bounded C 1 domainO. Theorem 3.2.2. Let u be the solution as in Denition 1.4.1 and let u n be the corresponding Galerkin approximation as in 1.4.3. 
If, additionally, we have f 2 L 2k ( ;L 2k ([0;1);V 0 )) and u 0 2L 2k+2 ( ;H)\L 2 ( ;V ), then E " sup [0;T ] kuu n k 2k(1) H # ! 0; (2.2) for any deterministic time T > 0 and any 2 (0; 1) as n!1. Our third result shows that a Zygmund type norm ofkuk H is bounded up to any deterministic time T > 0. 19 Theorem 3.2.3. Let u be the solution as in Denition 1.4.1. If f 2 L 1 ( ;L 1 ([0;T ];V 0 )) and u 0 2L 1 ( ;V ), then E sup 0tT exp kuk H K <1 (2.3) for any deterministic time T > 0, where K is a suciently large constant. The proofs of the above theorems are given in the remaining sections in this chapter. 3 Timewise Galerkin Convergence in V In this section, we give the proof of the rst main result, Theorem 3.2.1. First, we recall a statement from [GZ]. Theorem 3.3.1. Letfu n g be the sequence of solutions of (4.14) with u being the solution as in Denition 1.4.1 and with g, f, and u 0 as in Denition 1.4.2. T M;T n =fT : ( sup t2[0;] ku n k 2 V + Z 0 kAu n k 2 H dt) 1=2 Mg (3.1) Take T M;T m;n;m1;n1 :=T M;T m \T M;T n \T M;T m1 \T M;T n1 ; (3.2) then we have for any T > 0 and M > 1 20 1. For any T > 0 and M > 1 lim n!1 sup mn sup 2T M;T m;n;m1;n1 E[ sup t2[0;] ku m u n k 2 V + Z 0 kA(u m u n )k 2 H dt] = 0 (3.3) 2. lim s!0 sup n sup 2T M;T n P sup t2[0;^S] ku n k 2 V + Z ^S 0 kA(u n )k 2 H dt> (M 1) 2 = 0: (3.4) Theorem 3.3.2. [GZ] Letfu n g be the sequence of solutions of (4.14) with u being the solution to the equation as in Denition 1.4.1 and with g, f, and u 0 and, there exists a global, maximal pathwise strong solution (u;) in the sense of Denition 1.4.2. Namely, there exists an increasing sequence of strictly positive stopping timesf m g m0 converging to , for which P( <1) = 0. We start with the following lemma. Lemma 3.3.3. Let u and u n be dened as in Denitions 1.4.2 and 1.4.3. Then for any deterministic timeT > 0, the Galerkin approximationsu n converge in probability with respect to the V norm, i.e., for any > 0 we have P( sup t2[0;T ] kuu n k 2 V )! 0 (3.5) as n!1. 21 In particular, under the assumptions, we have P( sup t2[0;T ] (log(1 +kuu n k 2 V )) 1 )! 0 (3.6) for any 2 (0; 1), as well as, by the Poincar e inequality, P( sup t2[0;T ] kuu n k 2k H )! 0 (3.7) for all > 0. Both statements shall be used below. Proof. By assumption, we have u 0 2 L 2 ( ;V ). Hence, by Chebyshev theorem we have, P(ku 0 k 2 V >s)! 0 (3.8) as s!1. Denoting s =fku 0 k 2 V sg, we have s ! . Hence, we choose s such that P( s ) > 1 2 . Moreover, We know by Theorem 3.3.2 and Lemma 6.0.2, that there exists a sequence of stopping timesf~ M n l g n l 1 with the corresponding subsequencefu n l g converging monotone decreasing to M . We also know that M ! 1 a.s. as M!1, where M is the constant dened as in Theorem 3.3.1, since the solution is global in the sense of Denition 1.4.2 by Theorem 3.3.2. Hence, denotingf M n l = ~ M n l ^Tg n l 1 , there exists M 0 such thatP( M 0 <T ) 4 and by Lebesgue dominated convergence theorem, we have lim n l !1 E " 1 s sup t2[0; M 0 ] ku u n l k 2 V # = 0: (3.9) 22 This implies convergence in probability. Thus, lim n l !1 P(1 s sup t2[0; M 0 ] ku u n l k 2 V >) = 0; (3.10) for any > 0. Hence, we have P 1 s sup t2[0;T ] ku u n l k 2 V =P f sup t2[0;T ] ku u n l k 2 V g\f M 0 <Tg\f!2 s g +P(f sup t2[0;T ] ku u n l k 2 V g\f M 0 =Tg\f!2 s g) P( M 0 <T ) +P(1 s sup t2[0; M 0 ] ku u n l k 2 V >) (3.11) Then, we get P sup t2[0;T ] ku u n l k 2 V ! P M 0 <T +P 1 s sup t2[0; M 0 ] ku u n l k 2 V > ! +P ( c s ) 4 + 4 + 2 =: (3.12) forn l large enough. 
Then by taking any subsequenceu m l and by Theorem 3.3.1 and Lemma 6.0.2 repeating the same arguments above, we get that every subsequence fu m l g has a further subsequence that converges in probability to u , which implies that the whole sequencefu n g converges in probability to u , which concludes the proof. The following theorem improves the main result from [KV]. 23 Theorem 3.3.4. Let u 0 , f, and g be as in Denition 1.4.2 and suppose that u is as dened in Denition 1.4.1 . Then we have E[sup [0;T ] (kuk 2 V )]C(f;g;u 0 ;T ); (3.13) where (x) = log(1 +x). Proof. From the innite dimensional version of It^ o's lemma we get d((kuk 2 V )) + 2 0 (kuk 2 V )kAuk 2 H dt = 0 (kuk 2 V ) 2hf;Aui 2hB(u;u);Aui + 0 (kuk 2 V )kg(u)k 2 V dt + 2 00 (kuk 2 V ) X k g k (u);Au 2 dt + 2 0 (kuk 2 V ) g(u);Au dW: (3.14) We take the supremum up to the stopping time ~ m = m ^T , where m is introduced in Theorem 3.3.2. Denoting m =f!2 : ~ m =Tg, we see that m " asm!1 by Theorem 3.3.2. By taking the expectation on m and, suppressing 1 m for simplicity of notation, we get E[ sup [0;~ m] (kuk 2 V )] + 2E Z ~ m 0 0 (kuk 2 V )kAuk 2 H ds 0 (ku 0 k 2 V ) +E Z ~ m 0 (T 1 +T 2 +T 3 +T 4 )ds + 2E " sup s2[0;~ m] Z s 0 T 0 dW s # (3.15) 24 where we denoted T 0 = 2 0 (kuk 2 V )jhg(u);Auij (3.16) T 1 = 2 0 (kuk 2 V )jhB(u;u);Auij (3.17) T 2 = 2 0 (kuk 2 V )jhf;Auij 2 0 (kuk 2 V )kfk H kAuk H C 0 (kuk 2 V )kfk 2 H + 8 0 (kuk 2 V )kAuk 2 H (3.18) T 3 = 0 (kuk 2 V )kg(u)k 2 V C 0 (kuk 2 V )(1 +kuk 2 V ) (3.19) T 4 = 2j 00 (kuk 2 V )hg(u);Auij 2 Cj 00 (kuk 2 V )jkuk 2 V (1 +kuk 2 V ) (3.20) where C is allowed to depend on K j , for j = 0; 1; 2, and K Y . Appealing to the BDG inequality, we have E " sup s2[0;m] Z s 0 T 0 dW # CE " Z m 0 0 (kuk 2 V ) 2 kg(u)k 2 V kuk 2 V ds 1=2 # (3.21) and thus, using the Lipschitz condition on g(u), E " sup s2[0;m] Z s 0 T 0 dW # CE " Z m 0 1 (1 +kuk 2 V ) 2 (1 +kuk 2 V )kuk 2 V ds 1=2 # C(T ): (3.22) 25 Next, we estimate the term T 1 as T 1 = 2 0 (kuk 2 V )jhB(u;u);Auij (3.23) 2 0 (kuk 2 V )kuk 1=2 H kuk 1=2 V kuk 1=2 V kAuk 3=2 H C 0 (kuk 2 V )kuk 2 H kuk 4 V + 1 4 0 (kuk 2 V )kAuk 2 H Ckuk 2 H kuk 2 V + 1 4 0 (kuk 2 V )kAuk 2 H ; where we note that by Lemma 4.3.9 E Z T 0 kuk 2 V kuk 2 H dt M(ku 0 k 4 H ;kfk 4 V 0;T ): (3.24) By combining all the estimates and writing out 1 m explicitly, we obtain E[1 m sup [0;~ m] (kuk 2 V )]C(f;g;u 0 ;T ): (3.25) By letting m!1 and appealing to the monotone convergence theorem, we get E[sup [0;T ] (kuk 2 V )]C(f;g;u 0 ;T ) (3.26) and the proof is concluded. Lemma 3.3.5. Let u n be as in Denition 1.4.3. Then we have E[sup [0;T ] log(1 +ku n k 2 V )]C(f;g;u 0 ;T ) (3.27) 26 and E[sup [0;T ] log(1 +kuu n k 2 V )]C(f;g;u 0 ;T ); (3.28) for all n2N. Proof of Lemma 3.3.5. The proof of (3.27) follows the same steps as the proof of Lemma 3.3.4 and it is thus omitted. The inequality (3.28) is a consequence of (3.13) and (3.27). Now, we are ready to prove our rst main result, Theorem 3.2.1. Proof. Let 2 (0; 1). By (3.58), we have sup [0;T ] log(1 +kuu n k 2 V ) 1 ! 0 (3.29) in probability as n!1. Moreover, using Lemma 3.3.5, E " sup [0;T ] (log(1 +kuu n k 2 V ) # M(u 0 ;f;g;T ): (3.30) Denoting U n = sup [0;T ] log(1 +kuu n k 2 V ) 1 (3.31) we have by (3.30) E U 1=(1) n M(u 0 ;f;g;T ) (3.32) while (3.58) gives U 1=(1) n ! 0 (3.33) 27 in probability. Using the de la Vall ee-Poussin criterion for uniform integrability (see e.g. [D]), we get that U n ! 0 in L 1 as n!1 and Theorem 3.2.1 is proven. 
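For completeness, the uniform integrability step at the end of this proof can be spelled out as follows (a standard argument, recorded here only as a sketch in the notation of (3.31)-(3.33)):
\[
\sup_n E\big[\Phi(U_n)\big]\le M(u_0,f,g,T)\quad\text{for }\Phi(x)=x^{1/(1-\epsilon)},\qquad \frac{\Phi(x)}{x}=x^{\epsilon/(1-\epsilon)}\to\infty\ \text{as }x\to\infty,
\]
so the de la Vallée-Poussin criterion gives the uniform integrability of $\{U_n\}_{n\ge 1}$; since $U_n\to 0$ in probability, the Vitali convergence theorem then yields $E[U_n]\to 0$, which is the claimed $L^1$ convergence.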
4 Timewise Galerkin Convergence in H In this section, we prove our second result, Theorem 3.2.2, on convergence of the Galerkin approximations in the H space. Lemma 3.4.1. Let u be the solution to the equation as in Denition 1.4.1. Then we have E[sup [0;T ] kuk 2n H ] +E Z T 0 kuk 2 V kuk 2n2 H ds C(n;ku 0 k 2n H ;kfk 2n V 0;T ) (4.1) for any positive integer n. Proof. We use an induction argument. Let z(t) = exp(Kt) for a positive constant K to be specied below. For any positive integer n, we have d(z(t)kuk 2n H ) =Kz(t)kuk 2n H dt 2nkuk 2 V kuk 2n2 H z(t)dt + 2nz(t)kuk 2n2 H hf;uidt +nz(t)kuk 2n2 H kg(u)k 2 H dt + 2nz(t)kuk 2n2 H hg(u);uidW t (4.2) 28 and thus d(z(t)kuk 2n H ) + 2nkuk 2 V kuk 2n2 H z(t)dt =Kz(t)kuk 2n H dt + 2nz(t)kuk 2n2 H hf;uidt +nz(t)kuk 2n2 H kg(u)k 2 H dt + 2nz(t)kuk 2n2 H hg(u);uidW t Kz(t)kuk 2n H dt + 2nz(t)kuk 2n2 H jhf;uijdt +nz(t)kuk 2n2 H kg(u)k 2 H dt + 2nz(t)kuk 2n2 H hg(u);uidW t Kz(t)kuk 2n H dt +C(n)z(t)kuk 2n H dt +C(n)z(t)kfk 2n H dt + 2nz(t)kuk 2n2 H hg(u);uidW t : (4.3) Choosing a suciently largeK so it cancels the second term on the far right side, we get d(z(t)kuk 2n H ) + 2nkuk 2 V kuk 2n2 H z(t)dt C(n)z(t)kfk 2n H dt + 2nz(t)kuk 2n2 H hg(u);uidW t : (4.4) 29 We take the supremum up to the stopping timesf m g m1 introduced in Theorem 3.3.2 and integrate up to m and take the expectation on m as in Theorem 3.3.4. By suppressing 1 m below, we get E[ sup [0;m] z(t)kuk 2n H ]M(ku 0 k 2n H ;kfk 2n V 0;T ) +C(n)E " sup t2[0;m] Z m 0 z(s)kuk 2n2 H hg(u);uidW s # M(ku 0 k 2n H ;kfk 2n V 0;T ) +C(n)E " Z m 0 z 2 (s)kuk 4n4 H jhg(u);uij 2 ds 1=2 # M(ku 0 k 2n H ;kfk 2n V 0;T ) +C(n)E " Z m 0 z 2 (s)kuk 4n4 H (1 +kuk 2 H )kuk 2 H ds 1=2 # M(ku 0 k 2n H ;kfk 2n V 0;T ) +C(n)E " Z m 0 z 2 (s)kuk 4n2 H (1 +kuk 2 H )ds 1=2 # M(ku 0 k 2n H ;kfk 2n V 0;T ) +C(n)E " sup t2[0;m] z(t)kuk 2n1 H Z m 0 (1 +kuk 2 H )ds 1=2 # M(ku 0 k 2n H ;kfk 2n V 0;T ) + 1 4 E " sup [0;m] z(t)kuk 2n H # +C(n 2n )E " Z m 0 (1 + sup [0;m] kuk 2 H )ds ! n # where in the last line, we used the -Young inequality with p = 2n=(2n 1) and q = 2n. By observing e KT e Kt 1 for 0t m T and using the Gronwall lemma, we obtain E[1 m sup [0;m] kuk 2n H ]M(n 2n ;ku 0 k 2n H ;kfk 2n V 0;T ): (4.5) Moreover, based on the inequality (3.40), we get E 1 m Z m 0 kuk 2 V kuk 2n2 H ds M(n 2n ;ku 0 k 2n H ;kfk 2n V 0;T ): (4.6) 30 We conclude the result using the monotone convergence theorem as in the proof of Theorem 3.3.4. Now, we are ready to prove Theorem 3.2.2. Proof of Theorem 3.2.2. We use that E[sup [0;T ] kuu n k 2k H ] 2 2k (E[sup [0;T ] kuk 2k H ] +E[sup [0;T ] ku n k 2k H ]): (4.7) Then, we have using (3.59) and Lemma 3.4.1 E[sup [0;T ] kuu n k 2k(1) H ]! 0; (4.8) as n!1 using the uniform integrability principle. 5 The Zygmund norm estimate in H In the nal section, we prove Theorem 3.2.3. Proof of Theorem 3.2.3. The proof is in spirit of [GT, Lemma 7.13]. First, as in Lemma 3.4.1, we apply the innite dimensional It^ o's lemma up to the stopping time m and bootstrap. 
Hence, we have 31 d(kuk 2n H ) =2nkuk 2 V kuk 2n2 H dt + 2nkuk 2n2 H hf;uidt +nkuk 2n2 H kg(u)k 2 H dt + 2nkuk 2n2 H hg(u);uidW t 2nkuk 2 V kuk 2n2 H dt + 2nkuk 2n2 H Ckfk 2 H 0 + 1 8 kuk 2 H dt +nkuk 2n H + 2nkuk 2n2 H hg(u);uidW t 2nkuk 2 V kuk 2n2 H dt + 2n 8 kuk 2n H dt + 2nCkuk 2n2 H kfk 2 H 0dt +nkuk 2n H dt + 2nkuk 2n2 H hg(u);uidW t 2nkuk 2 V kuk 2n2 H dt + 2n 8 kuk 2n H + 2n 8 kuk 2n H +Cnkfk 2n H 0 +nkuk 2n H dt + 2nkuk 2n2 H hg(u);uidW t : (5.1) Taking the supremum, the expectation, and integrating, we get E[sup [0;T ] kuk 2n H ]E[ku 0 k 2n H ] +CnE Z T 0 kfk 2n V 0dt + 2nE Z T 0 kuk 2n2 H hg(u);uidW t : (5.2) By appealing to the BDG inequality and using the -Young inequality, we have E[sup [0;T ] kuk 2n H ]CE[ku 0 k 2n H ] +CnE Z T 0 kfk 2n H 0dt +Cn 2n E " Z T 0 sup [0;T ] kuk 2n H dt # : (5.3) 32 Then by the assumptions onkfk H 0 andku 0 k H and appealing to the Gronwall lemma we conclude that E[sup [0;T ] kuk 2n H ]Cn 2n ; (5.4) for n suciently large. Hence, E[sup [0;T ] kuk n H ]Cn n ;n2N (5.5) using the Cauchy-Schwartz inequality used for odd n, whence E " sup [0;T ] kuk n H n n+2 # C n 2 : (5.6) Using Sterling's formula, we have n n+2 C n n!; for a suciently large constant C. Therefore, for N 0 suciently large, we have E X n=N 0 sup 0tT 1 n! kuk H K n 1 X n=1 C n 2 ; (5.7) which implies E sup 0tT exp kuk H K C: (5.8) The proof is thus concluded. 33 Chapter 4 A Linearized Scheme for Timewise Approximation of the SNSE 1 Introduction The aim of this chapter is to approximate the solution of the SNSE in 1.4.1 by the solutions of a sequence of linear equations of the form hu n (t);vi + Z t 0 hAu n (s);vids =hu 0 ;vids + Z t 0 hB(u n1 (s);u n (s));vids + Z t 0 hf;vids + Z t 0 hg(u n1 ;v)idW s ; (1.1) which is a candidate to be more ecient scheme to study in terms of numerical approximations than the Galerkin approximation. The reason for that is in Equation (1.1), given that u n1 is known, the operators depend linearly on u n and the noise is additive with respect to u n , whereas in the Galerkin approximation 1.4.3, the bilinear term B depends on u n in a nonlinear way in addition to that in Galerkin approximation, noise is a multiplicative term in the model. 34 2 Main Results We list the main results in this section, that are analogous to Chapter 3 on Galerkin approximations. The rst main result gives the linearized approximation in the V - norm. Theorem 4.2.1. Let2 [0; 1] and letT > 0 be arbitrary. Suppose thatu is a solution to 1.4.1 and let u n be the corresponding linearized approximation in Equation (1.1). Then, we have E " sup [0;T ] 1 (kuu n k 2 V ) # ! 0; (2.1) with 1 (x) = (log(1 +x)) 1 , as n!1. Our second result gives the convergence in the H-norm of the linearized scheme u n on the whole bounded domainO. Theorem 4.2.2. Let u be the solution to the equation 1.4.1 and let u n be the cor- responding linearized approximation equation 1.1. If, additionally, we have f 2 L 2k ( ;L 2k ([0;1);V 0 )) and u 0 2L 2k+2 ( ;H)\L 2 ( ;V ), then E " sup [0;T ] kuu n k 2k(1) H # ! 0; (2.2) for any deterministic time T > 0 and any 2 (0; 1) as n!1. 3 Proof of the Main Results To prove these theorems, rst we borrow the following two results from [B2]. 35 Theorem 4.3.1. For each n2 N 0 , the Equation 1.1 has an almost surely unique solution inL 2 V ( [0;T ]) with almost surely continuous trajectories in H Theorem 4.3.2. Let u 0 ;f;g be as dened in Denition 1.4.2, and u as in Denition 1:4:1, then we have E Z T 0 kuu n k 2 V dt! 0; (3.1) as n!1. 
Next we continue with the following theorem. Theorem 4.3.3. Let u n be the linearized scheme in Equation (1.1). T M;T n =fT : ( sup t2[0;] ku n k 2 V + Z 0 kAu n k 2 H dt) 1=2 Mg (3.2) Take T M;T m;n;m1;n1 :=T M;T m \T M;T n \T M;T m1 \T M;T n1 ; (3.3) then we have for any T > 0 and M > 1 1. For any T > 0 and M > 1 lim n!1 sup mn sup 2T M;T m;n;m1;n1 E[ sup t2[0;] ku m u n k 2 V + Z 0 kA(u m u n )k 2 H dt] = 0 (3.4) 36 2. lim s!0 sup n sup 2T M;T n P sup t2[0;^S] ku n k 2 V + Z ^S 0 kA(u n )k 2 H dt> (M 1) 2 = 0: (3.5) Proof. 1. Given m>n, we get: du m +A(u m )dt =B(u m1 ;u m )dt +fdt + 1 X k=1 g k (u m1 )dW t ; (3.6) then by Ito: dku m u n k 2 V + 2kA(u m u n )k 2 H dt = 2hB(u m1 ;u m )B(u n1 ;u n );A(u m u n )idt + 2hg(u m1 )g(u n1 );A(u m u n )idW t +kg(u m1 )g(u n1 )k 2 V (3.7) We treat each term seperately. First, by the Lipschitz assumption on g we have Z 0 kg(u m1 )g(u n1 )k 2 V dtK Z 0 ku m1 u n1 k 2 V dt ! 0; (3.8) 37 as n;m!1. Next we estimate sup t2[0;T ] 2Ej[ Z T 0 hg(u m1 )g(u n1 );A(u m u n )idW t ]j E Z T 0 kg(u m1 )g(u n1 )k 2 V ku n u m k 2 V dt 1=2 C(M)E Z T 0 kg(u m1 )g(u n1 )k 2 V 1=2 C(M)E Z T 0 ku m1 u n1 k 2 V 1=2 (3.9) as m;n!1. For the nonlinear terms, we treat the terms as follows B(u m1 ;u m )B(u n1 ;u n ) =B(u m1 ;u m )B(u n1 ;u m ) +B(u n1 ;u m )B(u n1 ;u n ) jB(u m1 u n1 ;u m ) +B(u n1 ;u m u n )j ku m1 u n1 k 1=2 H ku m1 u n1 k 1=2 V ku m k 1=2 V kAu m k 1=2 H kA(u n u m )k H 1 4 kA(u m u n )k 2 H +Kku m u n k H ku m1 u n1 k V ku m k V kAu m k H 1 4 kA(u m u n )k 2 H +ku m1 u n1 k 2 V +Mku m k 2 V ku m u n k 2 V kAu m k 2 H ; (3.10) The term 1 4 kA(u m u n )k 2 H is absorbed to LHS and for the termku m1 u n1 k 2 V + Mku m k 2 V ku m u n k 2 V kAu m k 2 H we apply the stochastic Gronwall Lemma 6.0.1 by notic- ing as well that Z T 0 ku n1 u m1 kdt! 0; (3.11) 38 as n;m!1 by [B2] above. For the term jhB(u n1 ;u m u n );A(u m u n )ij Cku n1 k 1=2 H ku n1 k 1=2 V ku m u n k 1=2 V kA(u m u n )k 1=2 H kA(u m u n )k H Cku n1 k V ku m u n k 1=2 V kA(u m u n )k 3=2 H 1 6 kA(u m u n )k 2 H +C ku m u n k 2 V ku n1 k 4 V ; (3.12) here the term 1 6 kA(u m u n )k 2 H is absorbed to LHS, whereas for the term C ku m u n k 2 V ku n1 k 4 V we use the stochastic Gronwall Lemma 6.0.1. 2. Applying Ito, we get dku n k 2 V + 2kAu n k 2 H dt = 2hf;Au n idt 2hB(u n1 ;u n );Au n idt + 1 X k=1 kg k (u n1 );A(u n )k 2 H dt + 2 1 X k=1 hg k (u n1 ;Au n )idW k : (3.13) Fix 2T M;T n;n1 and s> 0. Taking supremum and integrating, we get sup [0;s^] ku n k 2 V + 2 Z s^ 0 kAu n k 2 H dr Z s^ 0 2jhf;Au n ijdr + Z s^ 0 jhB(u n1 ;u n );Au n ijdr + Z s^ 0 kg k (u n1 )k 2 V dt + sup r2[0;S^] 1 X k=1 Z r 0 2hg k (u n1 );Au n idW k : (3.14) 39 We treat the term hB(u n1 ;u n );Au n i as follows. hB(u n1 ;u n );Au n i Cku n1 k 1=2 H ku n1 k 1=2 V ku n k 1=2 V kAu n k 1=2 H kAu n k H Cku n1 k 1=2 H ku n1 k 1=2 V ku n k 1=2 V kAu n k 3=2 H Cku n1 k 2 H ku n1 k 2 V ku n k 2 V + 1 4 kAu n k 2 H ; (3.15) Using this and the Lipschitz condition on g, we have P sup r2[0;S^] ku n k 2 V + Z S^ 0 kAu n k 2 H dr> (M 1) 2 P C(M;K v ) Z S^ 0 kfk 2 V + 1ds> (M 1) 2 2 +P sup s2[0;S^] Z S^ 0 hg(u n1 );Au n idW s > (M 1) 2 2 (3.16) The rst term after inequality of 3.16 converges to 0 as S ! 0 via Chebyshev's inequality, whereas the second term after inequality of 3.16 is treated as follows: P sup s2[0;S^] Z S^ 0 hg(u n1 );Au n idW s > (M 1) 2 2 P Z t 0 jhg(u n1 );Au n ij 2 ds> (M 1) 4 1 (M 1) 4 E Z S^ 0 kg(u n1 )k 2 V ku n k 2 V ds ; (3.17) by Lipschitz condition on g and by letting S! 0, we conclude the proof. 
40 Proposition 4.3.4. (Uniqueness) [GZ] Let > 0 be a stopping time. Suppose that (u (1) ;) and (u (2) ;) are solutions to the SNSE in the sense of Denition 1.4.2. Let u (1) 0 ;u (2) 0 be the associated initial conditions and assume that P(I 0 u (1) 0 =I 0 u (2) 0 ) = 1; (3.18) for some 0 2F 0 . Then P I 0 u (1) (t^) =I 0 u (2) (t^);t2 [0;1) = 1 (3.19) Proposition 4.3.5. (Existence) Suppose thatf2L 2 ( ;L 2 loc ([0;1));H). Then there exists a global strong solution (u;) in the sense of Denition 1.4.2. Proof. Let w 2 V be given. Due to Theorem 4.3.3, we apply Lemma 6.0.2 with B 1 = V and B 2 = D(A) and the sequencefX n g =fu n g. We infer the existence of a subsequencefu n 0g, and if necessary by appealing to a subsubsequencefu n 0g, a strictly positive stopping time T and a process u(:) =u(:^), continuous in V such that sup t2[0;] ku n 0uk 2 V + Z 0 kA(u n 0u)k 2 H ds! 0; sup t2[0;] ku n 0 1 uk 2 V + Z 0 kA(u n 0 1 u)k 2 H ds! 0; (3.20) 41 a.s. We also have that the conditions of Lemma 6.0.2(ii) is satised for anyp2 (1;1). Thus, we have for any p> 1 u(:^)2L P ( ;C([0;T ];V )); (3.21) with u1 t 2L p ( ;L 2 ([0;T ];D(A))): (3.22) By Lemma 6.0.2(ii), we infer a collection of measurable sets n 02F with n 0" (3.23) such that sup n 0 E sup t2[0;T ] ku n 0(t^)1 n 0 k 2 V + Z 0 kAu n 0 1 n 0 k 2 H ds p=2 <1: (3.24) By Lemma 6.0.3, we have 1 n 0;t u n 0 * 1 t u in L p ( ;L 2 ([0;T ];D(A))); (3.25) and 1 n 0 u n 0 ^ * u in L p ( ;L 1 ([0;T ];V )): (3.26) 42 For the nonlinear term, we estimate for all w2V as follows: hB(u n 0 1 ;u n 0)B(u;u);wi (3.27) hB(u n 0 1 ;u n 0u)B(u;u);wi + hB(u n 0 1 u;u);wi hB(u n 0 1 ;u n 0u)B(u;u);wi ku n1 k 1=2 H ku n 0 1 k 1=2 V ku n 0uk V kwk 1=2 H kwk 1=2 V ku n 0 1 k V kwk 1=2 V kwk V +kwk V ku n 0uk V hB(u n 0 1 u;u);wi ku n 0 1 uk 1=2 H ku n 0 1 uk 1=2 V kuk V kwk 1=2 H kwk 1=2 V ku n 0 1 uk V kuk 1=2 V kwk V Hence the nonlinear terms converge to 0 as well, we conclude that given any v2V 1 t hB(u n 0 1 ;u n 0);vi! 1 t hB(u;u);vi; (3.28) as n 0 !1, for almost every (!;t)2 [0;T ]. 43 Moreover, by using the uniform bound with p = 4, one nds that sup n 0 E 1 n 0 Z 0 jB(u n 0 1 ;u n 0)j 2 ds (3.29) C sup n 0 E 1 n 0 Z 0 ku n 0 1 k 2 H ku n 0k 2 V ds C sup n 0 E 1 n 0 Z 0 ku n 0 1 k 2 H ku n 0k H kAu n 0k H ds C sup n 0 E 1 n 0 sup t2[0;] ku n 0 1 k 2 H ku n 0k H Z 0 kAu n 0k H ds C sup n 0 E 1 n 0 sup t2[0;] ku n 0 1 k 4 H ku n 0k 2 H + Z 0 kAu n 0k 2 H ds 2 <1 Then by Lemma 6.0.3, we have 1 n 0;t B(u n 0 1 ;u n 0)* 1 t B(u;u)in (3.30) By Lipschitz condition on g we get X kg k (u n 0)g k (u)k 2 V (3.31) kuu n 0k 2 V ! 0; 44 We have moreover that sup n 0 E 1 n 0 Z 0 kg k (u n 0)k 2 V ds (3.32) C sup n 0 E 1 n 0 Z 0 1 +ku n 0k 2 V ds <1; which means that 1 n 0;t g(u n 0)! 1 t g(u); (3.33) in L 2 ( ;L 2 ([0;T ];l 2 (H))). Hence, we deduce that for any xed v2H 1 n 0 Z t^ 0 hAu n 0;vids* Z t^ 0 hAu;vids; (3.34) 1 n 0 Z t^ 0 hBu n 0;vids* Z t^ 0 hB(u);vids; 1 n 0 X k Z t^ 0 g k (u n 0;v)dW k * Z t^ 0 hg k (u);vidW k ; 45 weakly in L 2 ( [0;T ]). If K [0;T ] is any measurable set, we have E Z T 0 K (!;t)hu(t^);vidt (3.35) = lim n 0 !1 E Z T 0 h1 n 0 (!); K (!;t)vidt = lim n 0 !1 E Z T 0 K (!;t)1 n 0 (!)hu 0 ;vidt E Z T 0 K (!;t)1 n 0 (!) Z t^ 0 hAu n 0 +B(u n 0 f;vids dt +E Z T 0 K (!;t)1 n 0 (!) X k Z t^ 0 hg k (u n 0 );vidW s dt =E Z T 0 K (!;t) hu 0 ;vi Z t^ 0 hAu +B(u)f;vids dt +E Z T 0 K (!;t) X k Z t^ 0 hg k (u);vidW k dt Since;K are arbitrary, we conclude thatu satises the regularity conditions. Remov- ing the restriction thatku 0 k V ~ M a.s. 
and the global uniqueness are due to Theorem 4.2 [GZ]. Next, we continue with the following theorem. Theorem 4.3.6. Let u be the solution to the Equation 1.4.1 and let u n be the corre- sponding linearized approximation Equation 1.1. Then, we have E[sup [0;T ] ku n k 2n H ]C(ku 0 k 2n H ;kfk 2n V 0) (3.36) E Z T 0 ku n k 2 V ku n k 2n2 H M(ku 0 k 2n H ;kfk 2n V 0;T ): for any positive integer n. 46 Proof. Let z(t) = exp(Kt) for a positive constant K that is to be specied below. Via an induction argument by bootstrapping Ito lemma, for any positive integer n, we have that d(z(t)ku n k 2n H ) =Kz(t)ku n k 2n H dt 2nku n k 2 V ku n k 2n2 H z(t)dt + 2nz(t)ku n k 2n2 H hf;u n idt +nz(t)ku n k 2n2 H kg(u n1 )k 2 H dt + 2nz(t)ku n k 2n2 H hg(u n1 );u n idW t (3.37) Hence, we have d(z(t)ku n k 2n H ) + 2nku n k 2 V ku n k 2n2 H z(t)dt Kz(t)ku n k 2n H dt +Cz(t)ku n k 2n H +Ckfk 2n H +Cz(t)ku n k 2n2 H +Cz(t)ku n k 2n2 H ku n1 k 2 H dt + 2nz(t)ku n k 2n2 H hg(u n1 );u n idW t Kz(t)ku n k 2n H dt +Cz(t)ku n k 2n H +Ckfk 2n H +Cz(t)ku n k 2n2 H +Cz(t)ku n k 2n H +Cku n1 k 2 H dt + 2nz(t)ku n k 2n2 H hg(u n1 );u n idW t (3.38) Choosing positive K accordingly cancels the deterministic terms, and so we have d(z(t)ku n k 2n H )Ckfk 2n H +C +Cku n1 k 2n H dt + 2nz(t)ku n k 2n2 H hg(u n1 );u n idW t (3.39) 47 Integrating, taking supremum, expectation, appealing to the BDG-inequality and by induction step, we get E[sup [0;T ] z(t)ku n k 2n H ]M(ku 0 k 2n H ;kfk 2n V 0;T ) +CE[sup [0;T ] Z T 0 z 2 (s)ku n k 4n4 H (1 +ku n1 k 2 H )ku n k 2 H 1=2 M(ku 0 k 2n H ;kfk 2n V 0;T ) + 1 4 E[sup [0;T ] z(t)ku n k 2n H ] +CE Z T 0 1 +ku n1 k 2 H n By observing e KT e Kt 1 for 0 t T and using the induction step, we conclude the result E[sup [0;T ] ku n k 2n H ]M(ku 0 k 2n H ;kfk 2n V 0;T ); (3.40) Moreover, based on inequality 3.40, we immediately also have E Z T 0 ku n k 2 V ku n k 2n2 H M(ku 0 k 2n H ;kfk 2n V 0;T ): (3.41) Theorem 4.3.7. Let u be the solution to the equation 1.4.1 and let u n be the corre- sponding linearized approximation equation 1.1. Then, we have for any n2N 0 E[sup [0;T ] log(1 +ku n k 2 V )]C(f;g;u 0 ;T ); (3.42) 48 Proof. 
From the innite dimensional version of Ito lemma we get d'(ku n k 2 V ) + 2' 0 (ku n k 2 V )kA(u n )k 2 H dt ='0(ku n k 2 V ) 2hf;Au n i 2hB(u n1 ;u n );A(u n )i +' 0 (ku n k 2 V )kg(u n1 )k 2 V dt + 2' 00 (ku n k 2 V ) X k hg k (u n1 );A(u n )i 2 dt + 2' 0 (ku n k 2 V )hg(u n1 );A(u n )idW t (3.43) Recall here that '(x) = log(1 +x) ' 0 (x) = 1 1 +x j' 00 (x)j = 1 (1 +x) 2 Integrating, taking the supremum and the expectation, we obtain E sup [0;T ] ' 0 (ku n k 2 V ) + 2E Z T 0 ' 0 (ku n k 2 V )kA(u n )k 2 H ds ' 0 (ku 0 k 2 V ) +E Z T 0 (T 1 +T 2 +T 3 +T 4 )ds + 2E sup s2[0;T ] Z s 0 T 0 dW s (3.44) 49 For convenience, we denote T 0 = 2' 0 (ku n k 2 V )jhg(u n1 );Au n ij (3.45) T 1 = 2' 0 (ku n k 2 V )jhB(u n1 ;u n );Au n ij (3.46) T 2 = 2' 0 (ku n k 2 V )jhf;Au n ij 2' 0 (ku n k 2 V )kfk H kAu n k H K' 0 (ku n k 2 V )kfk 2 H + 8 ' 0 (ku n k 2 V )kAu n k 2 H (3.47) T 3 =' 0 (ku n k 2 V )kg(u n1 )k 2 V K' 0 (ku n k 2 V )(1 +ku n1 k 2 V ) (3.48) T 4 = 2j' 00 (ku n k 2 V )hg(u n1 );Au n ij Kj' 00 (ku n k 2 V )jku n k 2 V (1 +ku n1 k 2 V ) (3.49) For T 0 term by appealing to the BDG inequality, we have E sup s2[0;T ] Z s 0 T 0 dW CE Z T 0 j'0(ku n k 2 V )j 2 kg(u n1 )k 2 V ku n k 2 V ds 1=2 (3.50) We thus observe using Lipschitz condition on g(u) E sup s2[0;T ] j Z s 0 T 0 dWj CE Z T 0 1 (1 +ku n k 2 V ) 2 (1 +ku n1 k 2 V )ku n k 2 V ds 1=2 C(ku 0 k 2 H ;T ) (3.51) 50 Next, we estimate the term T 1 by appealing to the classical estimate T 1 = 2j' 0 (ku n k 2 V )hB(u n1 ;u n );A(u n )ij (3.52) C' 0 (ku n k 2 V )ku n1 k 1=2 H ku n1 k 1=2 V ku n k 1=2 V kAuk 3=2 H C' 0 (ku n k 2 V )ku n1 k 2 H ku n1 k 2 V ku n k 2 V + 4 ' 0 (ku n k 2 V )kAu n k 2 H Cku n1 k 2 H ku n1 k 2 V + 4 ' 0 (ku n k 2 V )kAu n k 2 H ; where we note by 3.41 that E Z T 0 ku n1 k 2 H ku n1 k 2 V dtC(ku 0 k 4 H ;kfk 4 V 0;T ) (3.53) By combining all the estimates we conclude that E sup [0;T ] '(ku n k 2 V )C(f;g;u 0 ;T ): (3.54) Hence we conclude the proof. Next we borrow the following two results from [KUZ]. Theorem 4.3.8. [KUZ] Let u be the solution in Denition 1.4.1 and let u n be the corresponding linearized approximation equation in (1.1). Then, we have E[sup [0;T ] ' 1 (kuk 2 V )]C(f;g;u 0 ;T ); (3.55) where ' 1 (x) = log(1 +x). 51 Theorem 4.3.9. [KUZ] Let u be the solution to the equation 1.4.1. Then, we have E[sup [0;T ] kuk 2n H ]C(ku 0 k 2n H ;kfk 2n V 0) (3.56) E Z T 0 kuk 2 V kuk 2n2 H M(ku 0 k 2n H ;kfk 2n V 0;T ): for any positive integer n. Next, we show the convergence in probability of this linearized scheme in Equation (1.1). Lemma 4.3.10. Letu be the solution to the equation 1.4.1 and letu n be dened as in Equation (1.1). Then for any deterministic timeT > 0, the linear approximationsu n converge in probability with respect to theV norm to the solution of the equation 1.4.1, i.e., for any > 0 we have P( sup t2[0;T ] kuu n k 2 V )! 0 (3.57) as n!1. In particular, under the assumptions, we have P( sup t2[0;T ] (log(1 +kuu n k 2 V )) 1 )! 0 (3.58) for any 2 (0; 1), as well as, by the Poincar e inequality, P( sup t2[0;T ] kuu n k 2k H )! 0 (3.59) 52 for all > 0. Both statements shall be used below. Proof. By assumption, we haveu 0 2L 2 V ( ). Hence, by Chebyshev theorem we have, P(ku 0 k 2 V >s)! 0 (3.60) as s!1. Denoting s =fku 0 k 2 V sg, we have s ! . Hence, we choose s such that P( s ) > 1 2 . Moreover, We know by Theorem 4.3.3 and Lemma 6.0.2 there exists a sequence of stopping timesf~ M n l g n l 1 with the corresponding subsequence fu n l g converging monotone decreasing to M . 
We also know by Theorem 4.3.5 that M !1 a.s. as M!1, where M is the constant dened as in Theorem 4.3.3. Hence, denotingf M n l = ~ M n l ^Tg n l 1 , there exists M 0 such thatP( M 0 <T ) 4 and by Lebesgue dominated convergence theorem, we have lim n l !1 E " 1 s sup t2[0; M 0 ] kuu n l k 2 V # = 0: (3.61) This implies convergence in probability. Thus, lim n l !1 P(1 s sup t2[0; M 0 ] kuu n l k 2 V >) = 0; (3.62) 53 for any > 0. Hence, we have P 1 s sup t2[0;T ] kuu n l k 2 V =P f sup t2[0;T ] kuu n l k 2 V g\f M 0 <Tg\f!2 s g +P(f sup t2[0;T ] kuu n l k 2 V g\f M 0 =Tg\f!2 s g) P( M 0 <T ) +P(1 s sup t2[0; M 0 ] kuu n l k 2 V >) (3.63) Then, we get P sup t2[0;T ] kuu n l k 2 V ! P M 0 <T +P 1 s sup t2[0; M 0 ] kuu n l k 2 V > ! +P ( c s ) 4 + 4 + 2 =: (3.64) for n l large enough. Then by taking any subsequence u m l and by Theorem 4.3.3 and Lemma 6.0.2 repeating the same arguments above, we get that every subsequence fu m l g has a further subsequence that converges in probability to u, which implies that the whole sequencefu n g converges in probability to u, which concludes the proof. Lemma 4.3.11. Let u be the solution to the equation 1.4.1 and let u n be the corre- sponding linearized approximation equation 1.1. Then we have E[sup [0;T ] log(1 +kuu n k 2 V )]M(f;u 0 ;g;T ); (3.65) where M is independent of n and depends on initial data and time T only. 54 Proof. We note that '(x) = log(1 +x) is an increasing concave function. Hence, being concave with '(x) = 0, it is also subadditive. By noting that kuu n k 2 V 2kuk 2 V + 2ku n k 2 V ; we get E[sup [0;T ] log(1 +kuu n k 2 V )]E[sup [0;T ] log(1 + 2kuk 2 V )] +E[sup [0;T ] log(1 + 2ku n k 2 V )] (3.66) By Theorem 4.3.8 above, we conclude the proof. Now, we are ready to prove our rst result, Theorem 3.2.1. Proof. We know by Lemma 4.3.10 that P sup [0;T ] log(1 +kuu n k 2 V ) 1 ! ! 0 (3.67) for 0<< 1 in probability as n!1. Moreover, using 4.3.11, we have that E 2 4 sup [0;T ] (log(1 +kuu n k 2 V ) ! (1) 1 1 3 5 M(u 0 ;f;g;T ): (3.68) 55 We note here that g(x) = x 1 1 is a convex function with lim x!1 x 1 1 x =1. Using de La-Vallee-Poussin criteria for uniform integrability (see e.g. [D]) we get that ( sup [0;T ] (log(1 +kuu n k 2 V 1 ) n1 (3.69) is uniformly integrable. Uniform integrability and convergence in probability and imply L 1 -convergence. Thus, using that x 1 for 0 < < 1 being increasing and continuous, we get that E " sup [0;T ] log(1 +kuu n k 2 V 1 # ! 0 (3.70) as n!1. Hence, Theorem 3.2.1 is proven. Next, we prove our second result Theorem 4.2.2. Proof. We use that E[sup [0;T ] kuu n k 2k H ] 2 2k (E[sup [0;T ] kuk 2k H ] +E[sup [0;T ] ku n k 2k H ]): (3.71) Then, we have using Lemma 4.3.10, Theorem 4.3.6 and Theorem 4.3.9 E[sup [0;T ] kuu n k 2k(1) H ]! 0; (3.72) as n!1. This concludes the proof. 56 Chapter 5 Timewise Optimal Feedback Control for the SNSE 1 Introduction The existence of optimal control of stochastic evolution equations has been studied by [PI, G, GR, GS, T, T2] among others by adding linearity or semilinearity assumptions as well as putting boundedness restriction for nonlinearities. On the other hand, more specically, the literature about the optimal control of SNSE is not very rich, since these assumptions do not apply to the SNSE. The nonlinearity of the SNSE causes the problem to be of non-convex type. We refer the reader the book of Ekeland and Temam about non-convex optimization for further investigations [ET]. Related works about the control of SNSE can be mentioned as follows. Choi et al. 
investigated the optimal control problem in [CTMK] for the stochastic Burgers equation (one dimen- sional Navier-Stokes equation) with additive noise. The paper [PD] studies the control of turbulence for the stochastic Burgers equation. Another work in this direction is by Sritharan [S], where the existence of optimal controls is established using tech- niques for the martingale problem formulation of Stroock and Varadhan [SV] in the 57 context stochastic Navier-Stokes equation. In [B], it is shown that there exist feed- back controls for the SNSE of (2.1), which are controlled by dierent external forces for a specic cost functional satisfying some regularity conditions. In this chapter, we follow the framework that is studied in [B]. Namely, the SNSE is controlled by deterministic force . We show using the recent bounds and approximations for the SNSE in 2D in [KUZ], that there exists a feedback control that is optimal for the supremum of SNSE up to a terminal deterministic time T , which is only natural to introduce when we want to control the extreme events on the whole path rather than integrating the path. 2 Main Result Our purpose is to control the solution u of the SNSE hu (t);vi + Z t 0 hAu (s);vids =hu 0 ;vi + Z t 0 hB(u (s);u (s));vids + Z t 0 h(s;u );vids + Z t 0 hg(s;u (s));vidW s ; (2.1) for all v2 H, t2 [0;T ], a.e. !2 , where 2U b and the assumptions on initial data are as in Denition 2.1. We consider the following cost functional J() =E sup t2[0;T ] ('(L[t;u (t);(t)])); (2.2) 58 whereL : [0;T ]VH!R + is uniformly locally Lipschitz i.e. jL(t;x;y)L(t;x 2 ;y 2 )j 2 C(kx 1 x 2 k 2 V +ky 1 y 2 k 2 H ); (2.3) with '(x) = log(1 +x) 1 ; (2.4) and 0<< 1. Remark 5.2.1. Since we requireL to be only uniformly locally Lipschitz, using the concave function '(x) = log(1 +x) 1 does not imply that, we should check only the end points for the functionalL. We state now our main result. Theorem 5.2.2. Suppose we have a set U b of bounded feedback controls that are locally Lipschitz in V -norm, i.e. they satisfy sup t2[0;T ] k(t;!)k V K;a:s:; k(t 1 ;x 1 )(t 2 ;x 2 )k 2 H C 1 jt 1 t 2 j 2 +C 2 kx 1 x 2 k 2 H ; (2.5) where C 1 and C 2 are uniform for the family of controls 2U b . Then, there exists an optimal feedback control satisfying J( ) = min 2U J() (2.6) 59 Remark 5.2.3. In their paper F. Abergel and R. Temam [AT] investigate the deter- ministic Navier-Stokes equation by controlling the turbulence inside the ow. They give a cost functional regarding the vorticity in the uid. For our problem, the func- tionalL would be L(t;u (t);(t)) :=kru (t)k H +k(t)k 2 H (2.7) 3 Proof of the Main Result The proof of Theorem 5.2.2 requires several lemmas and theorems. First, we need the following theorem. Theorem 5.3.1. Letf n g n0 be a sequence inU b as dened in (3.23) and J is as dened in (3.20). Suppose sup t2[0;T ] k n k V ! 0 a.s. (3.1) Then, we have J( n )!J() (3.2) as n!1. To prove Theorem 5.3.1, we use the following results from [B] 60 Theorem 5.3.2. [B] Letf n g n0 and be a sequence of linear bounded feedback controls withL(H) being the space of all linear and continuous operators from H to itself and suppose E[ Z T 0 k n k 2 L(H) ]! 0; (3.3) then it holds that E[ Z T 0 ku u n k 2 V ]! 0; (3.4) as n!1. Proof. See appendix. Next, we need the two technical lemmas. Lemma 5.3.3. Let '(x) be a concave increasing function with '(0) = 0. Then, we have j'(x 1 )'(x 2 )j'(jx 1 x 2 j) (3.5) Proof. Since'(x) is a concave increasing function with'(0) = 0,'(x) is subadditive. 
Hence, we have '(x 1 )'(jx 1 x 2 j +x 2 ) '(jx 1 x 2 j +x 2 )'(jx 1 x 2 j) +'(x 2 ) '(x 1 )'(x 2 )'(jx 1 x 2 j) (3.6) By interchanging x 1 and x 2 we conclude the proof. 61 Lemma 5.3.4. [GZ] Fix T > 0. Assume that X;Y;Z;R : [0;T ) !R (3.7) are real-valued, non-negative stochastic processes. Let < T be a stopping time so that E Z 0 (RX +Z)ds<1: (3.8) Assume, moreover that for some xed constant we have Z 0 Rds<; a.s. (3.9) Suppose that for all stopping times 0 a b E sup t2[a; b ] X + Z 0 Yds C 0 E X( a ) + Z b a (RX +Z)ds ; (3.10) where C 0 is a constant independent of the choice of a ; b . Then we have E sup t2[0;] X + Z 0 Yds CE X(0) + Z 0 Zds ; (3.11) where C depends on C 0 ;T and . Proof. Choose a nite sequence of stopping times 0 = 0 < 1 <:::< N < N+1 = (3.12) 62 so that Z k k1 Rds< 1 2C 0 a.s. (3.13) For each pair k1 ; k take a = k1 and b = k in (0.4). Using (0.7), we have E sup t2[ k1 ;] X + Z k k1 Yds CEX( k1 ) +CE Z k k1 Zds: (3.14) By induction we have E sup t2[0; j ] X + Z j 0 Yds CEX(0) +CE Z j 0 Zds (3.15) then we have E sup t2[0; j+1 ] X + Z j 0 Yds CEX(0) +CE Z j 0 Zds +CE sup t2[ j ; j+1 ] X + Z j+1 j Yds CEX(0) +CE Z j+1 0 Zds +CEX( j ) CEX(0) +CE Z j+1 0 Zds (3.16) Hence, we conclude the proof. We continue with the following theorem. Theorem 5.3.5. Let ~ M > 0 and M > 1. Moreover, letfu n g n1 be the sequence of solutions of Equation 2.1. Suppose, we have ku 0 k V ~ M; a.s. (3.17) 63 Moreover, assume sup t2[0;T ] k m (t;) n (t;)k V ! 0; (3.18) as m;n!1 almost surely. Denote T M;T n =fT : ( sup t2[0;] ku n k 2 V + Z 0 jAu n j 2 dt) 1=2 M + ~ Mg: (3.19) Let T M;T m;n :=T M;T m \T M;T n : (3.20) Then 1. For any T > 0, we have lim n!1 sup mn sup 2T M;T m;n E[ sup t2[0;] ku m u n k 2 V + Z 0 kA(u m u n )k 2 H dt] = 0: (3.21) 2. lim S!0 sup n sup 2T M;T n P sup t2[0;^S] ku n k 2 V + Z ^S 0 kA(u n k 2 H dt> ~ M 2 + (M 1) 2 = 0: (3.22) Proof. 1. We have d(u n u m ) +A(u n u m )dt = (B(u n )B(u m ))dt + 1 X k=1 [g k (u n )g k (u m )]dW k + n (t;u n (t)) m (t;u n (t))dt (3.23) 64 Hence by Ito-lemma, we have dku n u m k 2 V + 2kA(u n u m )k 2 H dt = 2hB(u n )B(u m );A(u n u m )idt + 1 X k=1 kg k (u n )g k (u m )k 2 V dt + 2 1 X k=1 hg k (u n )g k (u m )idW k + 2h n (t;u n ) m (t;u m );A(u n u m )idt (3.24) By taking supremum up to , integrating and taking expectation, we get E[sup [0;] ku n u m k 2 V + 2 Z 0 kA(u n u m )k 2 H dt] 2E Z 0 jhB(u m )B(u n );A(u n u m )ijdt +E Z 0 1 X k=1 kg k (u u m )g k (u n )k 2 V dt +E[ sup r2[0;] j2 1 X k=1 Z r 0 hg k (u m )g k (u n );A(u n u m )idW k j] +E[2 Z 0 jh n (t;u n ) m (t;u m );A(u n u m )ijdt] (3.25) We treat each term above seperately. First E Z 0 1 X k=0 kg k (u n )g k (u m )k 2 V dt E Z 0 ku n u m k 2 V (3.26) 65 By Poincare lemma, assumption 3.18 implies that Z T 0 ku n u m k 2 H ! 0; (3.27) as m;n!1. Then this implies by Theorem 5.3.2 E Z T 0 ku n u m k 2 V ! 0; (3.28) as n;m!1. Next we have E[2 Z 0 jh n (t;u n ) m (t;u m );A(u n u m )ijdt] Z 0 k n (t;u n ) m (t;u m )k V ku n u m k V ds +CE Z T 0 ku n u m k 2 V CE Z 0 k n (t;u n ) n (t;u m )k 2 V +CE Z 0 k n (t;u m ) m (t;u m )k 2 V +CE Z T 0 ku n u m k 2 V (3.29) which goes to 0 as n;m!1 by Theorem 5.3.2 as well as by our assumption on . Next, we treat the nonlinear term by seperating into two parts as follows. 
We continue with the following theorem.

Theorem 5.3.5. Let $\tilde M>0$ and $M>1$, and let $\{u_n\}_{n\ge1}$ be the sequence of solutions of Equation (2.1) corresponding to the controls $\{\phi_n\}$. Suppose
\[
\|u_0\|_V\le \tilde M \quad\text{a.s.} \qquad (3.17)
\]
Moreover, assume
\[
\sup_{t\in[0,T]}\|\phi_m(t,\cdot)-\phi_n(t,\cdot)\|_V\to0 \qquad (3.18)
\]
as $m,n\to\infty$, almost surely. Denote
\[
\mathcal{T}^{M,T}_n=\Big\{\tau\le T: \Big(\sup_{t\in[0,\tau]}\|u_n\|_V^2+\int_0^{\tau}|Au_n|^2\,dt\Big)^{1/2}\le M+\tilde M\Big\} \qquad (3.19)
\]
and let
\[
\mathcal{T}^{M,T}_{m,n}:=\mathcal{T}^{M,T}_m\cap\mathcal{T}^{M,T}_n. \qquad (3.20)
\]
Then:

1. For any $T>0$, we have
\[
\lim_{n\to\infty}\sup_{m\ge n}\ \sup_{\tau\in\mathcal{T}^{M,T}_{m,n}}
\mathbb{E}\Big[\sup_{t\in[0,\tau]}\|u_m-u_n\|_V^2+\int_0^{\tau}\|A(u_m-u_n)\|_H^2\,dt\Big]=0. \qquad (3.21)
\]

2.
\[
\lim_{S\to0}\ \sup_n\ \sup_{\tau\in\mathcal{T}^{M,T}_n}
\mathbb{P}\Big(\sup_{t\in[0,\tau\wedge S]}\|u_n\|_V^2+\int_0^{\tau\wedge S}\|Au_n\|_H^2\,dt>\tilde M^2+(M-1)^2\Big)=0. \qquad (3.22)
\]

Proof. 1. We have
\[
d(u_n-u_m)+\nu A(u_n-u_m)\,dt=(B(u_n)-B(u_m))\,dt+\sum_{k=1}^{\infty}[g_k(u_n)-g_k(u_m)]\,dW^k
+(\phi_n(t,u_n)-\phi_m(t,u_m))\,dt. \qquad (3.23)
\]
Hence, by the Ito lemma,
\[
\begin{aligned}
d\|u_n-u_m\|_V^2+2\nu\|A(u_n-u_m)\|_H^2\,dt
&=2\langle B(u_n)-B(u_m),A(u_n-u_m)\rangle\,dt
+\sum_{k=1}^{\infty}\|g_k(u_n)-g_k(u_m)\|_V^2\,dt\\
&\quad+2\sum_{k=1}^{\infty}\langle g_k(u_n)-g_k(u_m),A(u_n-u_m)\rangle\,dW^k
+2\langle \phi_n(t,u_n)-\phi_m(t,u_m),A(u_n-u_m)\rangle\,dt. \qquad (3.24)
\end{aligned}
\]
Taking the supremum up to $\tau$, integrating, and taking expectations, we get
\[
\begin{aligned}
\mathbb{E}\Big[\sup_{[0,\tau]}\|u_n-u_m\|_V^2+2\nu\int_0^{\tau}\|A(u_n-u_m)\|_H^2\,dt\Big]
&\le 2\,\mathbb{E}\int_0^{\tau}|\langle B(u_m)-B(u_n),A(u_n-u_m)\rangle|\,dt
+\mathbb{E}\int_0^{\tau}\sum_{k}\|g_k(u_m)-g_k(u_n)\|_V^2\,dt\\
&\quad+\mathbb{E}\Big[\sup_{r\in[0,\tau]}\Big|2\sum_{k}\int_0^r\langle g_k(u_m)-g_k(u_n),A(u_n-u_m)\rangle\,dW^k\Big|\Big]\\
&\quad+2\,\mathbb{E}\int_0^{\tau}|\langle \phi_n(t,u_n)-\phi_m(t,u_m),A(u_n-u_m)\rangle|\,dt. \qquad (3.25)
\end{aligned}
\]
We treat each term separately. First,
\[
\mathbb{E}\int_0^{\tau}\sum_{k}\|g_k(u_n)-g_k(u_m)\|_V^2\,dt\le C\,\mathbb{E}\int_0^{\tau}\|u_n-u_m\|_V^2\,dt. \qquad (3.26)
\]
By the Poincare lemma, assumption (3.18) implies that
\[
\int_0^T\|\phi_n-\phi_m\|_H^2\,dt\to0 \qquad (3.27)
\]
as $m,n\to\infty$, which in turn implies, by Theorem 5.3.2,
\[
\mathbb{E}\int_0^T\|u_n-u_m\|_V^2\,dt\to0 \qquad (3.28)
\]
as $n,m\to\infty$. Next,
\[
\begin{aligned}
2\,\mathbb{E}\int_0^{\tau}|\langle \phi_n(t,u_n)-\phi_m(t,u_m),A(u_n-u_m)\rangle|\,dt
&\le \mathbb{E}\int_0^{\tau}\|\phi_n(t,u_n)-\phi_m(t,u_m)\|_V\,\|u_n-u_m\|_V\,ds
+C\,\mathbb{E}\int_0^T\|u_n-u_m\|_V^2\,dt\\
&\le C\,\mathbb{E}\int_0^{\tau}\|\phi_n(t,u_n)-\phi_n(t,u_m)\|_V^2\,dt
+C\,\mathbb{E}\int_0^{\tau}\|\phi_n(t,u_m)-\phi_m(t,u_m)\|_V^2\,dt\\
&\quad+C\,\mathbb{E}\int_0^T\|u_n-u_m\|_V^2\,dt, \qquad (3.29)
\end{aligned}
\]
which goes to $0$ as $n,m\to\infty$ by Theorem 5.3.2 as well as by our assumptions on $\phi$. Next, we treat the nonlinear term by separating it into two parts:
\[
|\langle B(u_m)-B(u_n),A(u_n-u_m)\rangle|
\le|\langle B(u_m-u_n,u_m),A(u_n-u_m)\rangle|+|\langle B(u_n,u_n-u_m),A(u_n-u_m)\rangle|.
\]
For the first term we have
\[
|\langle B(u_m-u_n,u_m),A(u_n-u_m)\rangle|
\le C\|u_n-u_m\|_V\|u_m\|_V^{1/2}\|Au_m\|_H^{1/2}\|A(u_n-u_m)\|_H
\le \frac{\nu}{6}\|A(u_n-u_m)\|_H^2+C\|u_n-u_m\|_V^2\,\|u_m\|_V\|Au_m\|_H. \qquad (3.30)
\]
To estimate the term $C\|u_n-u_m\|_V^2\|u_m\|_V\|Au_m\|_H$ (note that $\|u_m\|_V\le M+\tilde M$ for $\tau\in\mathcal{T}^{M,T}_{m,n}$), we apply Lemma 6.0.1 with $R=\|Au_m\|_H$, $X=C\|u_n-u_m\|_V^2$, $Y=\|A(u_n-u_m)\|_H^2$, and with $Z$ standing for the remaining terms on the right-hand side of (3.25), which we have shown converge to $0$; the term $\frac{\nu}{6}\|A(u_n-u_m)\|_H^2$ is absorbed into the left-hand side of the main inequality. Next, we treat the second nonlinear term as
\[
|\langle B(u_n,u_n-u_m),A(u_n-u_m)\rangle|
\le C\|u_n\|_V\|u_m-u_n\|_V^{1/2}\|A(u_n-u_m)\|_H^{3/2}
\le \frac{\nu}{6}\|A(u_n-u_m)\|_H^2+C(M,\tilde M)\|u_n-u_m\|_V^2, \qquad (3.31)
\]
where the first term is absorbed into the left-hand side of (3.25), while for the second term
\[
C(M,\tilde M)\,\mathbb{E}\int_0^{\tau}\|u_n-u_m\|_V^2\,dt\to0 \qquad (3.32)
\]
as $m,n\to\infty$ by Theorem 5.3.2. Hence the first part of the proof is concluded.

2. The proof is identical to [GZ]. First, by Ito,
\[
d\|u^{\phi}\|_V^2+2\nu\|Au^{\phi}\|_H^2\,dt
=2\langle B(u^{\phi}),Au^{\phi}\rangle\,dt+2\langle\phi,Au^{\phi}\rangle\,dt
+\sum_{k}\|g_k(u^{\phi})\|_V^2\,dt+2\sum_{k}\langle g_k(u^{\phi}),Au^{\phi}\rangle\,dW^k. \qquad (3.33)
\]
We fix $\tau\in\mathcal{T}^{M,T}_n$ and $S>0$. Integrating from $0$ to $\tau\wedge S$, we get
\[
\begin{aligned}
\sup_{r\in[0,S\wedge\tau]}\|u^{\phi}\|_V^2+\int_0^{S\wedge\tau}2\nu\|Au^{\phi}\|_H^2\,ds
&\le\|u_0\|_V^2+\int_0^{S\wedge\tau}2|\langle B(u^{\phi}),Au^{\phi}\rangle|\,dr
+\int_0^{S\wedge\tau}2|\langle\phi,Au^{\phi}\rangle|\,dr\\
&\quad+\int_0^{S\wedge\tau}\|g(u^{\phi})\|_V^2\,dr
+\sup_{r\in[0,S\wedge\tau]}\Big|2\sum_{k}\int_0^r\langle g_k(u^{\phi}),Au^{\phi}\rangle\,dW^k\Big|. \qquad (3.34)
\end{aligned}
\]
Applying the classical estimate on the nonlinear term (see [CF2]), we have
\[
|\langle B(u^{\phi}),Au^{\phi}\rangle|\le C\|u^{\phi}\|_V^{3/2}\|Au^{\phi}\|_H^{3/2}
\le C\|u^{\phi}\|_V^6+\frac{\nu}{4}\|Au^{\phi}\|_H^2. \qquad (3.35)
\]
Using (3.35) and the Lipschitz assumption on $g$, we get
\[
\sup_{r\in[0,S\wedge\tau]}\|u^{\phi}\|_V^2+\nu\int_0^{\tau\wedge S}\|Au^{\phi}\|_H^2\,dr
\le\|u_0\|_V^2+C\int_0^{S\wedge\tau}\big(\|\phi\|_H^2+\|u^{\phi}\|_V^6+\|u^{\phi}\|_V^2+1\big)\,dr
+\sup_{r\in[0,S\wedge\tau]}\Big|2\sum_k\int_0^r\langle g_k(u^{\phi}),Au^{\phi}\rangle\,dW^k\Big|. \qquad (3.36)
\]
This then implies
\[
\begin{aligned}
\mathbb{P}\Big(\sup_{s\in[0,\tau\wedge S]}\|u^{\phi}\|_V^2+\nu\int_0^{\tau\wedge S}\|Au^{\phi}\|_H^2\,ds>\|u_0\|_V^2+(M-1)^2\Big)
&\le\mathbb{P}\Big(C\int_0^{\tau\wedge S}\big(\|\phi\|_H^2+\|u^{\phi}\|_V^6+\|u^{\phi}\|_V^2+1\big)\,dr>\tfrac{(M-1)^2}{2}\Big)\\
&\quad+\mathbb{P}\Big(\sup_{r\in[0,\tau\wedge S]}\Big|2\sum_k\int_0^r\langle g_k(u^{\phi}),Au^{\phi}\rangle\,dW^k\Big|>\tfrac{(M-1)^2}{2}\Big),
\end{aligned}
\]
and, for the first term,
\[
\mathbb{P}\Big(C\int_0^{\tau\wedge S}(\cdots)\,dr>\tfrac{(M-1)^2}{2}\Big)
\le\frac{2C}{(M-1)^2}\,\mathbb{E}\int_0^{\tau\wedge S}\big(\|\phi\|_H^2+\|u^{\phi}\|_V^6+\|u^{\phi}\|_V^2+1\big)\,dr
\le C(M,\tilde M)\,\mathbb{E}\int_0^{S}\big(\|\phi\|_H^2+1\big)\,dr, \qquad (3.37)
\]
since $\|u^{\phi}\|_V\le M+\tilde M$ up to $\tau$. Next, using Doob's inequality for the martingale term, we get
\[
\mathbb{P}\Big(\sup_{r\in[0,\tau\wedge S]}\Big|\sum_k\int_0^r\langle g_k(u^{\phi}),Au^{\phi}\rangle\,dW^k\Big|>\tfrac{(M-1)^2}{2}\Big)
\le\frac{4}{(M-1)^4}\,\mathbb{E}\int_0^{S\wedge\tau}\|u^{\phi}\|_V^2\sum_k\|g_k(u^{\phi})\|_V^2\,dr\le C(M,\tilde M)\,S. \qquad (3.38)
\]
Letting $S\to0$, and using the integrability condition imposed on $\phi$, we conclude the proof.
Theorem 5.3.6. Given the assumptions on the initial data $u_0$ as in Definition 2.1 and $\phi\in\mathcal{U}_b$, there exists a global strong solution $(u^{\phi},\phi)$ in the sense of Definition 2.1 introduced above.

Proof. Let $w\in H$ be given. We use Theorem 5.3.5, with $\{u_n\}$ the sequence of solutions of Equation (2.1). Due to (3.21) and (3.22), we may apply Lemma 6.0.2 with $B_1=V$, $B_2=D(A)$ and the sequence $\{X_n\}=\{u_n\}$. We infer the existence of a subsequence $\{u_{n'}\}$, a strictly positive stopping time $\tau\le T$, and a process $u^{\phi}(\cdot)=u^{\phi}(\cdot\wedge\tau)$, continuous in $V$, such that
\[
\sup_{t\in[0,\tau]}\|u_{n'}-u^{\phi}\|_V^2+\int_0^{\tau}\|A(u_{n'}-u^{\phi})\|_H^2\,ds\to0 \qquad (3.39)
\]
a.s. We also have that the conditions of Lemma 6.0.2 (ii) are satisfied for any $p\in(1,\infty)$. Thus, for any $p>1$,
\[
u^{\phi}(\cdot\wedge\tau)\in L^p(\Omega;C([0,T];V)), \qquad (3.40)
\]
with
\[
u^{\phi}\,1_{t\le\tau}\in L^p(\Omega;L^2([0,T];D(A))). \qquad (3.41)
\]
By Lemma 6.0.2 (ii) we infer a collection of measurable sets $\Omega_{n'}\in\mathcal{F}$ with
\[
\Omega_{n'}\uparrow\Omega \qquad (3.42)
\]
such that
\[
\sup_{n'}\mathbb{E}\Big[\Big(\sup_{t\in[0,\tau]}\|u_{n'}(t)1_{\Omega_{n'}}\|_V^2+\int_0^{\tau}\|Au_{n'}1_{\Omega_{n'}}\|_H^2\,ds\Big)^{p/2}\Big]<\infty. \qquad (3.43)
\]
Using (3.39), (3.42), (3.43) and Lemma 6.0.3, we have
\[
1_{\Omega_{n'}}1_{t\le\tau}\,u_{n'}\rightharpoonup 1_{t\le\tau}\,u^{\phi}\quad\text{in }L^p(\Omega;L^2([0,T];D(A))) \qquad (3.44)
\]
and
\[
1_{\Omega_{n'}}\,u_{n'}(\cdot\wedge\tau)\rightharpoonup u^{\phi}\quad\text{in }L^p(\Omega;L^{\infty}([0,T];V)). \qquad (3.45)
\]
For the nonlinear term, we estimate for all $w\in H$:
\[
\langle B(u_{n'},u_{n'})-B(u^{\phi},u^{\phi}),w\rangle
=\langle B(u_{n'}-u^{\phi},u_{n'})+B(u^{\phi},u_{n'}-u^{\phi}),w\rangle
\le|\langle B(u_{n'}-u^{\phi},u_{n'}),w\rangle|+|\langle B(u^{\phi},u_{n'}-u^{\phi}),w\rangle|. \qquad (3.46)
\]
Then, using the classical estimates [CF2],
\[
\begin{aligned}
|\langle B(u_{n'}-u^{\phi},u_{n'}),w\rangle|&\le C\|u_{n'}-u^{\phi}\|_H^{1/2}\|u_{n'}-u^{\phi}\|_V^{1/2}\|u_{n'}\|_V^{1/2}\|Au_{n'}\|_H^{1/2}\|w\|_H,\\
|\langle B(u^{\phi},u_{n'}-u^{\phi}),w\rangle|&\le C\|u^{\phi}\|_H^{1/2}\|u^{\phi}\|_V^{1/2}\|u_{n'}-u^{\phi}\|_V^{1/2}\|A(u^{\phi}-u_{n'})\|_H^{1/2}\|w\|_H. \qquad (3.47)
\end{aligned}
\]
Hence the nonlinear terms converge to $0$ by (3.39), and we conclude that for any $v\in H$
\[
1_{t\le\tau}\langle B(u_{n'},u_{n'}),v\rangle\to 1_{t\le\tau}\langle B(u^{\phi},u^{\phi}),v\rangle \qquad (3.48)
\]
as $n'\to\infty$, for almost every $(\omega,t)\in\Omega\times[0,T]$. Moreover, using the uniform bound (3.43) with $p=4$, one finds that
\[
\sup_{n'}\mathbb{E}\,1_{\Omega_{n'}}\int_0^{\tau}|B(u_{n'},u_{n'})|^2\,ds
\le C\sup_{n'}\mathbb{E}\,1_{\Omega_{n'}}\int_0^{\tau}\|u_{n'}\|_H^2\|u_{n'}\|_V^2\,ds
\le C\sup_{n'}\mathbb{E}\,1_{\Omega_{n'}}\Big[\sup_{t\in[0,\tau]}\|u_{n'}\|_H^4+\Big(\int_0^{\tau}\|Au_{n'}\|_H^2\,ds\Big)^2\Big]<\infty. \qquad (3.49)
\]
Using (3.48), (3.49) and Lemma 6.0.3, we have
\[
1_{\Omega_{n'}}1_{t\le\tau}\,B(u_{n'},u_{n'})\rightharpoonup 1_{t\le\tau}\,B(u^{\phi},u^{\phi}) \qquad (3.50)
\]
in $L^2(\Omega;L^2([0,T];H))$. By the Lipschitz condition on $g$, we get
\[
\sum_k\|g_k(u_{n'})-g_k(u^{\phi})\|_V^2\le C\|u^{\phi}-u_{n'}\|_V^2\to0 \qquad (3.51)
\]
by (3.39). We have, moreover,
\[
\sup_{n'}\mathbb{E}\,1_{\Omega_{n'}}\int_0^{\tau}\sum_k\|g_k(u_{n'})\|_V^2\,ds
\le C\sup_{n'}\mathbb{E}\,1_{\Omega_{n'}}\int_0^{\tau}\big(1+\|u_{n'}\|_V^2\big)\,ds<\infty, \qquad (3.52)
\]
which means that
\[
1_{\Omega_{n'}}1_{t\le\tau}\,g(u_{n'})\to 1_{t\le\tau}\,g(u^{\phi}) \qquad (3.53)
\]
in $L^2(\Omega;L^2([0,T];\ell^2(H)))$. Moreover, using that the $\phi_{n'}$ are bounded as well, we deduce that for any fixed $v\in H$
\[
\begin{aligned}
1_{\Omega_{n'}}\int_0^{t\wedge\tau}\langle Au_{n'},v\rangle\,ds&\rightharpoonup\int_0^{t\wedge\tau}\langle Au^{\phi},v\rangle\,ds,\qquad
1_{\Omega_{n'}}\int_0^{t\wedge\tau}\langle B(u_{n'}),v\rangle\,ds\rightharpoonup\int_0^{t\wedge\tau}\langle B(u^{\phi}),v\rangle\,ds,\\
1_{\Omega_{n'}}\int_0^{t\wedge\tau}\langle\phi_{n'},v\rangle\,ds&\rightharpoonup\int_0^{t\wedge\tau}\langle\phi,v\rangle\,ds,\qquad
1_{\Omega_{n'}}\sum_k\int_0^{t\wedge\tau}\langle g_k(u_{n'}),v\rangle\,dW^k\rightharpoonup\sum_k\int_0^{t\wedge\tau}\langle g_k(u^{\phi}),v\rangle\,dW^k, \qquad (3.54)
\end{aligned}
\]
weakly in $L^2(\Omega\times[0,T])$. If $K\subset\Omega\times[0,T]$ is any measurable set, then by (3.54) we have
\[
\begin{aligned}
\mathbb{E}\int_0^T 1_K(\omega,t)\langle u^{\phi}(t\wedge\tau),v\rangle\,dt
&=\lim_{n'\to\infty}\mathbb{E}\int_0^T 1_K(\omega,t)1_{\Omega_{n'}}(\omega)\langle u_{n'}(t\wedge\tau),v\rangle\,dt\\
&=\lim_{n'\to\infty}\mathbb{E}\int_0^T 1_K 1_{\Omega_{n'}}\Big[\langle u_0,v\rangle-\int_0^{t\wedge\tau}\langle\nu Au_{n'}-B(u_{n'})-\phi_{n'},v\rangle\,ds+\sum_k\int_0^{t\wedge\tau}\langle g_k(u_{n'}),v\rangle\,dW_s\Big]\,dt\\
&=\mathbb{E}\int_0^T 1_K\Big[\langle u_0,v\rangle-\int_0^{t\wedge\tau}\langle\nu Au^{\phi}-B(u^{\phi})-\phi(s,u^{\phi}),v\rangle\,ds+\sum_k\int_0^{t\wedge\tau}\langle g_k(u^{\phi}),v\rangle\,dW^k\Big]\,dt. \qquad (3.55)
\end{aligned}
\]
Since $v\in H$ and $K$ are arbitrary, we conclude that $u^{\phi}$ satisfies the equation and the required regularity conditions. Hence we have shown the local existence of the solution $u^{\phi}$. Relaxing the restriction $\|u_0\|_V\le\tilde M$, namely extending to the case $u_0\in L^2(\Omega;V)$, and proving global uniqueness follow the same steps as [GZ], Theorem 4.2. This concludes the proof.

Next, we borrow the following theorem from [KUZ].

Theorem 5.3.7. [KUZ] Let $u^{\phi},u_0,\phi,g$ be as defined in Definition 1.4.2. Then we have
\[
\mathbb{E}\Big[\sup_{[0,T]}\log(1+\|u^{\phi}\|_V^2)\Big]\le C(\phi,g,u_0,T). \qquad (3.56)
\]
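A bound of the type (3.56) can be probed numerically on a finite-dimensional surrogate. The sketch below estimates $\mathbb{E}[\sup_{t\le T}\log(1+X_t^2)]$ for a toy one-dimensional Ornstein-Uhlenbeck process by Monte Carlo; the SDE, its parameters, and the time grid are arbitrary stand-ins and are not objects from the thesis.

```python
import numpy as np

# Monte Carlo estimate of E[ sup_{t<=T} log(1 + X_t^2) ] for the toy SDE
# dX = -X dt + sigma dW, used here only as a stand-in for the V-norm of the
# SNSE solution appearing in Theorem 5.3.7.
rng = np.random.default_rng(1)
T, N, paths, sigma, x0 = 1.0, 1000, 5000, 1.0, 2.0
dt = T / N

x = np.full(paths, x0)
running_max = np.log1p(x ** 2)
for _ in range(N):
    x = x - x * dt + sigma * np.sqrt(dt) * rng.standard_normal(paths)
    running_max = np.maximum(running_max, np.log1p(x ** 2))

print("estimate of E[sup log(1+X^2)]:", running_max.mean())
```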
We continue with the following lemma.

Lemma 5.3.8. Given the assumptions on the initial data and on $\{\phi_n\}_{n\ge1}$ as in Theorem 5.3.5, we have for any deterministic time $T$ and any $\epsilon,\delta>0$ that
\[
\mathbb{P}\Big(\sup_{t\in[0,T]}\|u^{\phi}-u^{\phi_n}\|_V^2>\epsilon\Big)<\delta \qquad (3.57)
\]
for all $n\ge N_0$, for some $N_0$; that is, the solutions $\{u^{\phi_n}\}_{n\ge1}$ corresponding to the different deterministic forces $\phi_n$ converge in probability to $u^{\phi}$ as $n\to\infty$.

Proof. By assumption, $u_0\in L^2(\Omega;V)$. Hence, by the Chebyshev inequality,
\[
\mathbb{P}(\|u_0\|_V^2>s)\to0 \qquad (3.58)
\]
as $s\to\infty$. Denoting $\Omega_s=\{\|u_0\|_V^2\le s\}$, we have $\Omega_s\uparrow\Omega$; hence we choose $s$ such that $\mathbb{P}(\Omega_s)>1-\delta/2$. Moreover, we know by Theorem 5.3.5 and Lemma 6.0.2 that there exists a sequence of stopping times $\{\tilde\tau^M_{n_l}\}_{n_l\ge1}$, with a corresponding subsequence $\{u^{\phi_{n_l}}\}$, converging monotonically (decreasing) to $\tau^M$. We also know by Lemma 6.0.2 that $\tau^M\to\infty$ a.s. as $M\to\infty$, where $M$ is the constant in Theorem 5.3.5, since the solution is global in the sense of Definition 1.4.2. Hence, denoting $\{\tau^M_{n_l}=\tilde\tau^M_{n_l}\wedge T\}_{n_l\ge1}$, there exists $M_0$ such that $\mathbb{P}(\tau^{M_0}<T)\le\delta/4$, and by the Lebesgue dominated convergence theorem we have
\[
\lim_{n_l\to\infty}\mathbb{E}\Big[1_{\Omega_s}\sup_{t\in[0,\tau^{M_0}]}\|u^{\phi}-u^{\phi_{n_l}}\|_V^2\Big]=0. \qquad (3.59)
\]
This implies convergence in probability; thus
\[
\lim_{n_l\to\infty}\mathbb{P}\Big(1_{\Omega_s}\sup_{t\in[0,\tau^{M_0}]}\|u^{\phi}-u^{\phi_{n_l}}\|_V^2>\epsilon\Big)=0 \qquad (3.60)
\]
for any $\epsilon>0$. Hence we have
\[
\begin{aligned}
\mathbb{P}\Big(1_{\Omega_s}\sup_{t\in[0,T]}\|u^{\phi}-u^{\phi_{n_l}}\|_V^2\ge\epsilon\Big)
&=\mathbb{P}\Big(\Big\{\sup_{t\in[0,T]}\|u^{\phi}-u^{\phi_{n_l}}\|_V^2\ge\epsilon\Big\}\cap\{\tau^{M_0}<T\}\cap\Omega_s\Big)
+\mathbb{P}\Big(\Big\{\sup_{t\in[0,T]}\|u^{\phi}-u^{\phi_{n_l}}\|_V^2\ge\epsilon\Big\}\cap\{\tau^{M_0}=T\}\cap\Omega_s\Big)\\
&\le\mathbb{P}(\tau^{M_0}<T)+\mathbb{P}\Big(1_{\Omega_s}\sup_{t\in[0,\tau^{M_0}]}\|u^{\phi}-u^{\phi_{n_l}}\|_V^2>\epsilon\Big). \qquad (3.61)
\end{aligned}
\]
Then we get
\[
\mathbb{P}\Big(\sup_{t\in[0,T]}\|u^{\phi}-u^{\phi_{n_l}}\|_V^2\ge\epsilon\Big)
\le\mathbb{P}(\tau^{M_0}<T)+\mathbb{P}\Big(1_{\Omega_s}\sup_{t\in[0,\tau^{M_0}]}\|u^{\phi}-u^{\phi_{n_l}}\|_V^2>\epsilon\Big)+\mathbb{P}(\Omega_s^c)
\le\frac{\delta}{4}+\frac{\delta}{4}+\frac{\delta}{2}=\delta \qquad (3.62)
\]
for $n_l$ large enough. Then, taking any subsequence $\{u^{\phi_{m_l}}\}$ and repeating the same arguments with Theorem 5.3.5 and Lemma 6.0.2, we get that every subsequence $\{u^{\phi_{m_l}}\}$ has a further subsequence that converges in probability to $u^{\phi}$, which implies that the whole sequence $\{u^{\phi_n}\}$ converges in probability to $u^{\phi}$. This concludes the proof.

Now we are ready to prove Theorem 5.3.1.

Proof. We have
\[
\begin{aligned}
\mathbb{E}\Big|\sup_{[0,T]}\varphi(\mathcal{L}(t,u^{\phi_n},\phi_n))-\sup_{[0,T]}\varphi(\mathcal{L}(t,u^{\phi},\phi))\Big|
&\le\mathbb{E}\sup_{[0,T]}\big|\varphi(\mathcal{L}(t,u^{\phi_n},\phi_n))-\varphi(\mathcal{L}(t,u^{\phi},\phi))\big|\\
&\le\mathbb{E}\sup_{[0,T]}\big|\varphi(\mathcal{L}(t,u^{\phi_n},\phi_n))-\varphi(\mathcal{L}(t,u^{\phi_n},\phi))\big|
+\mathbb{E}\sup_{[0,T]}\big|\varphi(\mathcal{L}(t,u^{\phi_n},\phi))-\varphi(\mathcal{L}(t,u^{\phi},\phi))\big|\\
&\le\mathbb{E}\sup_{[0,T]}\varphi\big(|\mathcal{L}(t,u^{\phi_n},\phi_n)-\mathcal{L}(t,u^{\phi_n},\phi)|\big)
+\mathbb{E}\sup_{[0,T]}\varphi\big(|\mathcal{L}(t,u^{\phi_n},\phi)-\mathcal{L}(t,u^{\phi},\phi)|\big)\\
&\le\mathbb{E}\sup_{[0,T]}\varphi\big(C\|\phi_n-\phi\|_H^2+C\|u^{\phi}-u^{\phi_n}\|_V^2\big)\\
&\le\mathbb{E}\sup_{[0,T]}\varphi\big(C\|u^{\phi}-u^{\phi_n}\|_V^2\big)+\mathbb{E}\sup_{[0,T]}\varphi\big(C\|\phi_n-\phi\|_H^2\big), \qquad (3.63)
\end{aligned}
\]
where we appeal to Lemma 5.3.3 in the third inequality, to the Lipschitz assumption on $\mathcal{L}$ in the fourth, and to the subadditivity of $\varphi$ in the last. By the boundedness assumption on $\{\phi_n\}$ and assumption (3.1), we have
\[
\sup_{[0,T]}\varphi\big(C\|\phi_n-\phi\|_H^2\big)\to0\ \text{a.s. as }n\to\infty, \qquad
\mathbb{E}\Big[\sup_{[0,T]}\|\phi_n-\phi\|_H^2\Big]\le M. \qquad (3.64)
\]
Hence, by uniform integrability and convergence in probability,
\[
\mathbb{E}\Big[\sup_{[0,T]}\varphi\big(C\|\phi_n-\phi\|_H^2\big)\Big]\to0\quad\text{as }n\to\infty. \qquad (3.65)
\]
For the first term in the last line of (3.63), we have
\[
\mathbb{E}\sup_{[0,T]}\varphi\big(C\|u^{\phi}-u^{\phi_n}\|_V^2\big)
\le\mathbb{E}\sup_{[0,T]}\varphi\big(C\|u^{\phi}\|_V^2\big)+\mathbb{E}\sup_{[0,T]}\varphi\big(C\|u^{\phi_n}\|_V^2\big). \qquad (3.66)
\]
Noting that $\log(1+x)\le x$ for $x>0$, we have by Lemma 5.3.8 that
\[
\sup_{[0,T]}\big(\log(1+\|u^{\phi}-u^{\phi_n}\|_V^2)\big)^{1-\varepsilon}\to0 \quad\text{in probability as }n\to\infty, \qquad (3.67)
\]
for $0<\varepsilon<1$. Moreover, using Theorem 5.3.7 together with (3.66), we have
\[
\mathbb{E}\Big[\Big(\sup_{[0,T]}\big(\log(1+\|u^{\phi}-u^{\phi_n}\|_V^2)\big)^{1-\varepsilon}\Big)^{\frac{1}{1-\varepsilon}}\Big]\le M(u_0,f,g,T). \qquad (3.68)
\]
We note here that $h(x)=x^{\frac{1}{1-\varepsilon}}$ is a convex function with $\lim_{x\to\infty}x^{\frac{1}{1-\varepsilon}}/x=\infty$. Using the de la Vallee-Poussin criterion for uniform integrability (see e.g. [D]), we get that
\[
\Big\{\sup_{[0,T]}\big(\log(1+\|u^{\phi}-u^{\phi_n}\|_V^2)\big)^{1-\varepsilon}\Big\}_{n\ge1} \qquad (3.69)
\]
is uniformly integrable. Uniform integrability together with convergence in probability implies $L^1$-convergence [D]. Thus, using that $x^{1-\varepsilon}$ with $0<\varepsilon<1$ is increasing and continuous, we get
\[
\mathbb{E}\Big[\sup_{[0,T]}\big(\log(1+\|u^{\phi}-u^{\phi_n}\|_V^2)\big)^{1-\varepsilon}\Big]\to0 \qquad (3.70)
\]
as $n\to\infty$. Hence Theorem 5.3.1 is proven.

Our main result, Theorem 5.2.2, now follows from Theorem 5.3.1.

Proof. By the assumptions on $\mathcal{U}_b$ and the Arzela-Ascoli theorem, the set $\mathcal{U}_b$ is compact. Since $J$ is sequentially continuous on $\mathcal{U}_b$ by Theorem 5.3.1, the generalized Weierstrass theorem (see [Z]) yields the existence of an optimal feedback control $\phi^*\in\mathcal{U}_b$ for $J(\phi)$. Hence, we conclude the proof.
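As a crude finite illustration of the selection argument behind Theorem 5.2.2, the sketch below estimates the pathwise-supremum cost $J(\phi)=\mathbb{E}\sup_{t\le T}\log(1+\mathcal{L}(t,u,\phi(t)))^{1-\varepsilon}$ by Monte Carlo for a small family of feedback gains on a toy one-dimensional controlled SDE and picks the cheapest one. The dynamics, the cost $\mathcal{L}(t,u,p)=u^2+p^2$, and the control family are illustrative assumptions, not objects defined in the thesis.

```python
import numpy as np

# Finite surrogate of the optimal-control selection: estimate the cost
# J(phi) = E sup_{t<=T} log(1 + L(t,u,phi(u)))^(1-eps) on the toy SDE
# du = (-u + phi(u)) dt + sigma dW, then take the minimizer over a small
# family of linear feedback controls phi(u) = -k*u.
rng = np.random.default_rng(2)
T, N, paths, sigma, eps, u0 = 1.0, 400, 2000, 0.5, 0.3, 1.0
dt = T / N
controls = {f"k={k:.1f}": (lambda u, k=k: -k * u) for k in (0.0, 0.5, 1.0, 2.0)}

def estimate_cost(phi):
    u = np.full(paths, u0)
    sup_cost = np.log1p(u**2 + phi(u)**2) ** (1 - eps)
    for _ in range(N):
        u = u + (-u + phi(u)) * dt + sigma * np.sqrt(dt) * rng.standard_normal(paths)
        sup_cost = np.maximum(sup_cost, np.log1p(u**2 + phi(u)**2) ** (1 - eps))
    return sup_cost.mean()

costs = {name: estimate_cost(phi) for name, phi in controls.items()}
best = min(costs, key=costs.get)
print(costs, "-> best control:", best)
```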
Chapter 6

Appendix

In the appendix we state and prove the key results that are used but not proven in the previous chapters. First, we give two results: the first is a stochastic version of the Gronwall lemma, and the second concerns the existence of a limit process defined up to a suitable stopping time.

Lemma 6.0.1. [GZ] Fix $T>0$. Assume that
\[
X,Y,Z,R\colon[0,T)\times\Omega\to\mathbb{R} \qquad (0.1)
\]
are real-valued, non-negative stochastic processes. Let $\tau\le T$ be a stopping time such that
\[
\mathbb{E}\int_0^{\tau}(RX+Z)\,ds<\infty. \qquad (0.2)
\]
Assume, moreover, that for some fixed constant $\kappa$ we have
\[
\int_0^{\tau}R\,ds<\kappa \quad\text{a.s.} \qquad (0.3)
\]
Suppose that for all stopping times $0\le\tau_a\le\tau_b\le\tau$,
\[
\mathbb{E}\Big[\sup_{t\in[\tau_a,\tau_b]}X+\int_{\tau_a}^{\tau_b}Y\,ds\Big]
\le C_0\,\mathbb{E}\Big[X(\tau_a)+\int_{\tau_a}^{\tau_b}(RX+Z)\,ds\Big], \qquad (0.4)
\]
where $C_0$ is a constant independent of the choice of $\tau_a,\tau_b$. Then we have
\[
\mathbb{E}\Big[\sup_{t\in[0,\tau]}X+\int_0^{\tau}Y\,ds\Big]\le C\,\mathbb{E}\Big[X(0)+\int_0^{\tau}Z\,ds\Big], \qquad (0.5)
\]
where $C$ depends on $C_0$, $T$ and $\kappa$.

Proof. Choose a finite sequence of stopping times
\[
0=\tau_0<\tau_1<\dots<\tau_N<\tau_{N+1}=\tau \qquad (0.6)
\]
so that
\[
\int_{\tau_{k-1}}^{\tau_k}R\,ds<\frac{1}{2C_0}\quad\text{a.s.} \qquad (0.7)
\]
For each pair $\tau_{k-1},\tau_k$, take $\tau_a=\tau_{k-1}$ and $\tau_b=\tau_k$ in (0.4). Using (0.7), we have
\[
\mathbb{E}\Big[\sup_{t\in[\tau_{k-1},\tau_k]}X+\int_{\tau_{k-1}}^{\tau_k}Y\,ds\Big]
\le C\,\mathbb{E}X(\tau_{k-1})+C\,\mathbb{E}\int_{\tau_{k-1}}^{\tau_k}Z\,ds. \qquad (0.8)
\]
By induction, suppose
\[
\mathbb{E}\Big[\sup_{t\in[0,\tau_j]}X+\int_0^{\tau_j}Y\,ds\Big]\le C\,\mathbb{E}X(0)+C\,\mathbb{E}\int_0^{\tau_j}Z\,ds; \qquad (0.9)
\]
then
\[
\begin{aligned}
\mathbb{E}\Big[\sup_{t\in[0,\tau_{j+1}]}X+\int_0^{\tau_{j+1}}Y\,ds\Big]
&\le C\,\mathbb{E}X(0)+C\,\mathbb{E}\int_0^{\tau_j}Z\,ds
+C\,\mathbb{E}\Big[\sup_{t\in[\tau_j,\tau_{j+1}]}X+\int_{\tau_j}^{\tau_{j+1}}Y\,ds\Big]\\
&\le C\,\mathbb{E}X(0)+C\,\mathbb{E}\int_0^{\tau_{j+1}}Z\,ds+C\,\mathbb{E}X(\tau_j)
\le C\,\mathbb{E}X(0)+C\,\mathbb{E}\int_0^{\tau_{j+1}}Z\,ds. \qquad (0.10)
\end{aligned}
\]
Hence, we conclude the proof.

Lemma 6.0.2. [GZ] Let $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\ge0},\mathbb{P})$ be a fixed filtered probability space. Suppose that $B_1$ and $B_2$ are Banach spaces with $B_2\subset B_1$ with continuous embedding. We denote the associated norms by $|\cdot|_i$. Define
\[
E(T):=C([0,T];B_1)\cap L^2([0,T];B_2) \qquad (0.11)
\]
with the norm
\[
|Y|_{E(T)}=\Big(\sup_{t\in[0,T]}|Y(t)|_1^2+\int_0^T|Y(t)|_2^2\,dt\Big)^{1/2}. \qquad (0.12)
\]
Let $X_n$ be a sequence of $B_2$-valued stochastic processes such that for every $T>0$ we have $X_n\in E(T)$ a.s. For $M>1$, $T>0$ define the collection of stopping times
\[
\mathcal{T}^{M,T}_n:=\{\tau\le T:|X_n|_{E(\tau)}\le M+|X_n(0)|_1\}, \qquad (0.13)
\]
and let $\mathcal{T}^{M,T}_{n,m}:=\mathcal{T}^{M,T}_n\cap\mathcal{T}^{M,T}_m$.

1. Suppose that for $M>1$ and $T$ we have
\[
\lim_{n\to\infty}\sup_{m\ge n}\ \sup_{\tau\in\mathcal{T}^{M,T}_{n,m}}\mathbb{E}|X_n-X_m|_{E(\tau)}=0 \qquad (0.14)
\]
and
\[
\lim_{S\to0}\ \sup_n\ \sup_{\tau\in\mathcal{T}^{M,T}_n}\mathbb{P}\big[|X_n|_{E(\tau\wedge S)}>|X_n(0)|_1+M-1\big]=0. \qquad (0.15)
\]
Then there exists a stopping time $\tau$ with
\[
\mathbb{P}(0<\tau\le T)=1 \qquad (0.16)
\]
and a process $X(\cdot)=X(\cdot\wedge\tau)\in E(\tau)$ such that
\[
|X_{n_l}-X|_{E(\tau)}\to0 \quad\text{a.s.} \qquad (0.17)
\]
for some subsequence $n_l\uparrow\infty$. Moreover,
\[
|X|_{E(\tau)}\le M+\sup_n|X_n(0)|_1 \quad\text{a.s.} \qquad (0.18)
\]

2. If, in addition to the conditions imposed above, we also have
\[
\sup_n\mathbb{E}|X_n(0)|_1^p<\infty \qquad (0.19)
\]
for some $1\le p<\infty$, then there exists a sequence of sets $\Omega_l\uparrow\Omega$ such that
\[
\sup_l\mathbb{E}\big[1_{\Omega_l}|X_{n_l}|_{E(\tau)}^p\big]<\infty \qquad (0.20)
\]
and
\[
\mathbb{E}|X|_{E(\tau)}^p\le C_p\big(M^p+\sup_n\mathbb{E}|X_n(0)|_1^p\big). \qquad (0.21)
\]
Proof. To find the convergent subsequence, we proceed by induction on $l$, starting with $l=0$ and $n_0=1$. By (0.14), we may choose $n_{l+1}>n_l$ such that
\[
\sup_{\tau\in\mathcal{T}^{M,T}_{n_{l+1},n_l}}\mathbb{E}|X_{n_l}-X_{n_{l+1}}|_{E(\tau)}\le 2^{-2l}. \qquad (0.22)
\]
Next, to find $\tau$ in (0.16) and (0.17), we define
\[
\tau_l:=\inf_{t>0}\{|X_{n_l}|_{E(t)}>|X_{n_l}(0)|_1+(M-1+2^{-l})\}\wedge T, \qquad (0.23)
\]
and let
\[
\Omega_N=\bigcap_{j=N}^{\infty}\big\{|X_{n_j}-X_{n_{j+1}}|_{E(\tau_j\wedge\tau_{j+1})}<2^{-(j+2)}\big\}. \qquad (0.24)
\]
Using $\tau_l\wedge\tau_{l+1}\in\mathcal{T}^{M,T}_{n_{l+1},n_l}$, we have
\[
\mathbb{P}\big(|X_{n_l}-X_{n_{l+1}}|_{E(\tau_l\wedge\tau_{l+1})}\ge2^{-(l+2)}\big)
\le 2^{l+2}\,\mathbb{E}|X_{n_l}-X_{n_{l+1}}|_{E(\tau_l\wedge\tau_{l+1})}\le 2^{-(l-2)}. \qquad (0.25)
\]
Hence, by the Borel-Cantelli lemma, we conclude that
\[
\mathbb{P}\Big(\bigcap_{N=1}^{\infty}\bigcup_{j=N}^{\infty}\big\{|X_{n_j}-X_{n_{j+1}}|_{E(\tau_j\wedge\tau_{j+1})}\ge2^{-(j+2)}\big\}\Big)=0, \qquad (0.26)
\]
and hence $\tilde\Omega:=\bigcup_N\Omega_N$ is a set of full measure. Next, we note that
\[
\tau_{l+1}(\omega)\le\tau_l(\omega) \qquad (0.27)
\]
for every $l\ge N$, $\omega\in\Omega_N$. Indeed, given $N$ and $l\ge N$, consider the set $\{\tau_{l+1}>\tau_l\}\cap\Omega_N$; on this set we have $\tau_l<T$. By the continuity of $|X_{n_l}|_{E(t)}$ in $t$, this implies
\[
|X_{n_l}|_{E(\tau_l)}=|X_{n_l}(0)|_1+(M-1+2^{-l}). \qquad (0.28)
\]
Moreover, we have on $\Omega_N$
\[
\big||X_{n_l}|_{E(\tau_l\wedge\tau_{l+1})}-|X_{n_{l+1}}|_{E(\tau_l\wedge\tau_{l+1})}\big|<2^{-(l+2)}, \qquad
\big||X_{n_{l+1}}(0)|_1-|X_{n_l}(0)|_1\big|<2^{-(l+2)}. \qquad (0.29)
\]
Hence we get, on $\{\tau_{l+1}>\tau_l\}\cap\Omega_N$,
\[
\begin{aligned}
|X_{n_{l+1}}|_{E(\tau_l\wedge\tau_{l+1})}&>|X_{n_l}|_{E(\tau_l\wedge\tau_{l+1})}-2^{-(l+2)}
=|X_{n_l}|_{E(\tau_l)}-2^{-(l+2)}\\
&=|X_{n_l}(0)|_1+(M-1+2^{-l})-2^{-(l+2)}
>|X_{n_{l+1}}(0)|_1+(M-1+2^{-l})-2\cdot2^{-(l+2)}
=|X_{n_{l+1}}(0)|_1+(M-1+2^{-(l+1)}). \qquad (0.30)
\end{aligned}
\]
Moreover, on $\Omega_N$,
\[
|X_{n_{l+1}}|_{E(\tau_l\wedge\tau_{l+1})}\le|X_{n_{l+1}}|_{E(\tau_{l+1})}\le|X_{n_{l+1}}(0)|_1+(M-1+2^{-(l+1)}). \qquad (0.31)
\]
Hence, (0.30) and (0.31) show that $\{\tau_{l+1}>\tau_l\}\cap\Omega_N$ is empty. Hence, by (0.26) and (0.27),
\[
\tau=\lim_l\tau_l \quad\text{exists a.s.} \qquad (0.32)
\]
Next, fix $\epsilon>0$ with $T>\epsilon>0$. We have
\[
\{\tau_l<\epsilon\}\subset\{|X_{n_l}|_{E(\tau_l\wedge\epsilon)}=|X_{n_l}(0)|_1+(M-1+2^{-l})\}
\subset\{|X_{n_l}|_{E(\tau_l\wedge\epsilon)}>|X_{n_l}(0)|_1+(M-1)\}. \qquad (0.33)
\]
Since
\[
\mathbb{P}(\tau<\epsilon)=\mathbb{P}\Big(\bigcap_{l=1}^{\infty}\bigcup_{k=l}^{\infty}\{\tau_k<\epsilon\}\Big)
\le\limsup_l\mathbb{P}(\tau_l<\epsilon)
\le\sup_l\mathbb{P}\big(|X_{n_l}|_{E(\tau_l\wedge\epsilon)}>|X_{n_l}(0)|_1+(M-1)\big), \qquad (0.34)
\]
by (0.15) we have
\[
\mathbb{P}(\tau=0)=\mathbb{P}\Big(\bigcap_{\epsilon>0}\{\tau<\epsilon\}\Big)=\lim_{\epsilon\downarrow0}\mathbb{P}(\tau<\epsilon)=0. \qquad (0.35)
\]
Hence $0<\tau\le T$ a.s. Next, we show that $X_{n_l}$ is Cauchy in $E(\tau)$ a.s. By (0.24), for every $\omega\in\tilde\Omega$ we can choose $N=N(\omega)$ so that $\omega\in\Omega_N$ and $\tau(\omega)\le\tau_{l+1}(\omega)\le\tau_l(\omega)$ whenever $l\ge N$. Hence the sequence is Cauchy in $E(\tau)$, its limit $X$ satisfies (0.18), and
\[
\lim_{l\to\infty}|X_{n_l}(\omega)-X|_{E(\tau)}=0 \quad\text{a.s.} \qquad (0.36)
\]
Hence the first part is proven. Next, take the sets $\Omega_l$ defined in (0.24). By
\[
1_{\Omega_l}|X_{n_l}|_{E(\tau)}\le2^{-(l+2)}+1_{\Omega_l}|X_{n_{l+1}}|_{E(\tau)}
\le|X_{n_{l+1}}(0)|_1+M\le\sup_n|X_n(0)|_1+M, \qquad (0.37)
\]
which implies (0.20). Moreover, by (0.37) we have
\[
\mathbb{E}\big[1_{\Omega_l}|X_{n_l}|_{E(\tau)}^p\big]\le C_p\big(M^p+\mathbb{E}|X_{n_l}(0)|_1^p\big). \qquad (0.38)
\]
Combining (0.19) and (0.38), we conclude the bound in (0.20); by Fatou's lemma, we conclude (0.21). Hence the proof is concluded.

Lemma 6.0.3. [GZ] Suppose that $X$ is a separable Banach space and let $D\subset X$ be a dense subset. Let $X^*$ be the dual of $X$ and denote the dual pairing between $X$ and $X^*$ by $\langle\cdot,\cdot\rangle$. Assume that $(E,\mathcal{E},\mu)$ is a finite measure space and that $p\in(1,\infty)$. Assume that $u,u_n\in L^p(E;X^*)$ with $\{u_n\}$ uniformly bounded in $L^p(E;X^*)$ and
\[
\langle u_n,y\rangle\to\langle u,y\rangle \quad\text{a.e.} \qquad (0.39)
\]
for all $y\in D$. Then
\[
u_n\rightharpoonup u \qquad (0.40)
\]
in $L^p(E;X^*)$.

Proof. We fix $y\in D$ and let
\[
E_N:=\{\omega\in E:|\langle u_m(\omega)-u(\omega),y\rangle|\le1\ \text{for every }m\ge N\}. \qquad (0.41)
\]
Denoting by $1_N$ the indicator function associated with $E_N$, by (0.39) we have
\[
1-1_N\to0 \qquad (0.42)
\]
as $N\to\infty$, almost surely. Let $F\in\mathcal{E}$ and $N\ge1$ be given. By the dominated convergence theorem, we have
\[
\lim_{n\to\infty}\int_F 1_N\langle u_n-u,y\rangle\,d\mu=0. \qquad (0.43)
\]
Moreover,
\[
\begin{aligned}
\limsup_{n\to\infty}\Big|\int_F\langle u_n-u,y\rangle\,d\mu\Big|
&\le\limsup_{n\to\infty}\Big|\int_F(1-1_N)\langle u_n-u,y\rangle\,d\mu\Big|+\limsup_{n\to\infty}\Big|\int_F 1_N\langle u_n-u,y\rangle\,d\mu\Big|\\
&\le\limsup_{n\to\infty}\|y\|_X\Big(\int\|u_n-u\|_{X^*}^p\,d\mu\Big)^{1/p}\Big(\int_F|1-1_N|^{p'}\,d\mu\Big)^{1/p'}
\le C\Big(\int_F|1-1_N|^{p'}\,d\mu\Big)^{1/p'}, \qquad (0.44)
\end{aligned}
\]
and, by the uniform bound on $u_n$ in $L^p(E;X^*)$, $C$ can be chosen independently of $N$. Letting $N\to\infty$ and using the dominated convergence theorem together with (0.42), we have
\[
\lim_{n\to\infty}\int_F\langle u_n-u,y\rangle\,d\mu=0. \qquad (0.45)
\]
Using that
\[
S:=\Big\{s=\sum_{k=1}^d 1_{F_k}\,y_k:\ y_k\in D,\ F_k\in\mathcal{E},\ d<\infty\Big\} \qquad (0.46)
\]
is dense in $L^{p'}(E;X)$, the claim follows.

Similarly, we also have the preservation of weak convergence under continuous linear mappings.

Proposition 6.0.4. [B3] Let $S_1$ and $S_2$ be Banach spaces and let $L\colon S_1\to S_2$ be a continuous linear operator. If $\{x_n\}$ is a sequence in $S_1$ such that $x_n\rightharpoonup x$, where $x\in S_1$, then $L(x_n)\rightharpoonup L(x)$.
Next, we continue with the critical convergence results in $L^2_V(\Omega\times[0,T])$ that we extend to the timewise case. We note here that the stopping times $T^Q_M$ are the corresponding first hitting times.

Proposition 6.0.5. [B4] Let $\{Q(t)\}_{t\in[0,T]}$ be a $V$-valued process with
\[
\int_0^T\|Q(s)\|_V^2\,ds<\infty \quad\text{for a.e. }\omega\in\Omega.
\]
For each $M\in\mathbb{N}$, define the stopping time
\[
T^Q_M=\begin{cases}
T, & \text{if }\int_0^T\|Q(s)\|_V^2\,ds<M,\\[2pt]
\inf\{t\in[0,T]:\int_0^t\|Q(s)\|_V^2\,ds\ge M\}, & \text{otherwise.}
\end{cases} \qquad (0.47)
\]
Hence we have
\[
\int_0^{t\wedge T^Q_M}\|Q(s)\|_V^2\,ds\le M.
\]
Then we have
\[
\lim_{M\to\infty}\mathbb{P}(T^Q_M<T)=0, \qquad \lim_{M\to\infty}T^Q_M=T \quad\text{a.s.}
\]

Proof. We have
\[
\lim_{M\to\infty}\mathbb{P}(T^Q_M<T)\le\lim_{M\to\infty}\mathbb{P}\Big(\int_0^T\|Q(s)\|_V^2\,ds\ge M\Big)
\le\mathbb{P}\Big(\bigcap_{M=1}^{\infty}\Big\{\int_0^T\|Q(s)\|_V^2\,ds\ge M\Big\}\Big)=0.
\]
The sequence $(T-T^Q_M)$ is monotone decreasing a.s. and, by the above, converges in probability to $0$; hence $T^Q_M$ converges to $T$ a.s.

Proposition 6.0.6. [B4] Suppose we have the following assumptions: $k_1,k_2>0$ are real numbers; $u_0$ is an $H$-valued $\mathcal{F}_0$-measurable random variable with $\mathbb{E}\|u_0\|_H^4<\infty$; $F_1\in L^1_{\mathbb{R}}(\Omega\times[0,T])$ and $F_2\in L^2_H(\Omega\times[0,T])$; $F_3\colon\Omega\times[0,T]\times H\to H$ is a mapping such that for all $t\in[0,T]$, $x\in H$, we have $\|F_3(t,x)\|_H\le C\|x\|_H$ and $F_3(\cdot,x)\in L^2_H(\Omega\times[0,T])$ for all $x\in H$; and $(Q(t))_{t\in[0,T]}$ is a $V$-valued process with $\int_0^T\|Q(s)\|_V^2\,ds<\infty$ which satisfies
\[
\|Q(t)\|_H^2+C\int_0^t\|Q(s)\|_V^2\,ds\le\|u_0\|_H^2+C\int_0^t\|Q(s)\|_H^2\,ds
+\int_0^t|F_1(s)|\,ds+\int_0^t\langle F_2(s)+F_3(s,Q(s)),Q(s)\rangle\,dW_s \qquad (0.48)
\]
for all $t\in[0,T]$ and a.e. $\omega\in\Omega$. Then we have
\[
\mathbb{E}\sup_{t\in[0,T]}\|Q(t)\|_H^2+\mathbb{E}\int_0^T\|Q(s)\|_V^2\,ds
\le C\Big[\mathbb{E}\|u_0\|_H^2+\mathbb{E}\int_0^T|F_1(s)|\,ds+\mathbb{E}\int_0^T\|F_2(s)\|_H^2\,ds\Big].
\]
Moreover, if $\mathbb{E}\int_0^T|F_1(s)|^2\,ds<\infty$ and $\mathbb{E}\int_0^T\|F_2(s)\|_H^4\,ds<\infty$, then
\[
\mathbb{E}\sup_{t\in[0,T]}\|Q(t)\|_H^4+\mathbb{E}\Big(\int_0^T\|Q(s)\|_V^2\,ds\Big)^2
\le C\Big[\mathbb{E}\|u_0\|_H^4+\mathbb{E}\int_0^T|F_1(s)|^2\,ds+\mathbb{E}\int_0^T\|F_2(s)\|_H^4\,ds\Big].
\]

Proof. Consider the stopping times $T_M:=T^Q_M$, $M\in\mathbb{N}$. We have, for all $t\in[0,T]$,
\[
\sup_{s\in[0,t\wedge T_M]}\|Q(s)\|_H^2+C\int_0^{t\wedge T_M}\|Q(s)\|_V^2\,ds
\le2\|u_0\|_H^2+2C\int_0^{t\wedge T_M}\|Q(s)\|_H^2\,ds+2\int_0^{t\wedge T_M}|F_1(s)|\,ds
+2\sup_{s\in[0,t\wedge T_M]}\Big|\int_0^s\langle F_2(r)+F_3(r,Q(r)),Q(r)\rangle\,dW_r\Big| \qquad (0.49)
\]
and
\[
\sup_{s\in[0,t\wedge T_M]}\|Q(s)\|_H^4+C^2\Big(\int_0^{t\wedge T_M}\|Q(s)\|_V^2\,ds\Big)^2
\le C\|u_0\|_H^4+C\Big(\int_0^{t\wedge T_M}\|Q(s)\|_H^2\,ds\Big)^2
+16\Big(\int_0^{t\wedge T_M}|F_1(s)|\,ds\Big)^2
+16\sup_{s\in[0,t\wedge T_M]}\Big|\int_0^s\langle F_2(r)+F_3(r,Q(r)),Q(r)\rangle\,dW_r\Big|^2. \qquad (0.50)
\]
By the BDG and Cauchy-Schwarz inequalities, we get
\[
\mathbb{E}\sup_{s\in[0,t\wedge T_M]}\|Q(s)\|_H^2+C\,\mathbb{E}\int_0^{t\wedge T_M}\|Q(s)\|_V^2\,ds
\le2\mathbb{E}\|u_0\|_H^2+2C\,\mathbb{E}\int_0^{t\wedge T_M}\|Q(s)\|_H^2\,ds+2\mathbb{E}\int_0^{t\wedge T_M}|F_1(s)|\,ds
+\tfrac12\mathbb{E}\sup_{s\in[0,t\wedge T_M]}\|Q(s)\|_H^2+C\,\mathbb{E}\int_0^{t\wedge T_M}\|F_2(s)+F_3(s,Q(s))\|_H^2\,ds \qquad (0.51)
\]
and
\[
\mathbb{E}\sup_{s\in[0,t\wedge T_M]}\|Q(s)\|_H^4+C\,\mathbb{E}\Big(\int_0^{t\wedge T_M}\|Q(s)\|_V^2\,ds\Big)^2
\le16\mathbb{E}\|u_0\|_H^4+C\,\mathbb{E}\int_0^{t\wedge T_M}\|Q(s)\|_H^4\,ds+C\,\mathbb{E}\int_0^{t\wedge T_M}|F_1(s)|^2\,ds
+\tfrac12\mathbb{E}\sup_{s\in[0,t\wedge T_M]}\|Q(s)\|_H^4+C\,\mathbb{E}\int_0^{t\wedge T_M}\|F_2(s)+F_3(s,Q(s))\|_H^4\,ds \qquad (0.52)
\]
for all $t\in[0,T]$, for some constant $C$. Hence, for every $t\in[0,T]$,
\[
\mathbb{E}\sup_{s\in[0,t]}1_{[0,T_M]}(s)\|Q(s)\|_H^2
\le C\,\mathbb{E}\|u_0\|_H^2+C\,\mathbb{E}\int_0^t\sup_{r\in[0,s]}1_{[0,T_M]}(r)\|Q(r)\|_H^2\,dr
+C\,\mathbb{E}\int_0^T|F_1(s)|\,ds+C\,\mathbb{E}\int_0^T\|F_2(s)\|_H^2\,ds \qquad (0.53)
\]
and
\[
\mathbb{E}\sup_{s\in[0,t]}1_{[0,T_M]}(s)\|Q(s)\|_H^4+C\,\mathbb{E}\Big(\int_0^t 1_{[0,T_M]}(s)\|Q(s)\|_V^2\,ds\Big)^2
\le C\,\mathbb{E}\|u_0\|_H^4+C\,\mathbb{E}\int_0^t\sup_{r\in[0,s]}1_{[0,T_M]}(r)\|Q(r)\|_H^4\,dr
+C\,\mathbb{E}\int_0^T|F_1(s)|^2\,ds+C\,\mathbb{E}\int_0^T\|F_2(s)\|_H^4\,ds. \qquad (0.54)
\]
By the Gronwall lemma we conclude that
\[
\mathbb{E}\sup_{s\in[0,T\wedge T_M]}\|Q(s)\|_H^2+C\,\mathbb{E}\int_0^{T\wedge T_M}\|Q(s)\|_V^2\,ds
\le C\Big[\mathbb{E}\|u_0\|_H^2+\mathbb{E}\int_0^T|F_1(s)|\,ds+\mathbb{E}\int_0^T\|F_2(s)\|_H^2\,ds\Big] \qquad (0.55)
\]
and
\[
\mathbb{E}\sup_{s\in[0,T\wedge T_M]}\|Q(s)\|_H^4+C\,\mathbb{E}\Big(\int_0^{T\wedge T_M}\|Q(s)\|_V^2\,ds\Big)^2
\le C\Big[\mathbb{E}\|u_0\|_H^4+\mathbb{E}\int_0^T|F_1(s)|^2\,ds+\mathbb{E}\int_0^T\|F_2(s)\|_H^4\,ds\Big]. \qquad (0.56)
\]
Letting $M\to\infty$ and using Proposition 6.0.5 together with Fatou's lemma gives the claim.
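The first hitting time (0.47) has an obvious discrete-time counterpart, sketched below on a synthetic path. The random-walk stand-in for $\|Q(t)\|_V$, the left-endpoint Riemann sum, and the threshold $M$ are all assumptions made only for this illustration.

```python
import numpy as np

# Discrete-time illustration of the first hitting time T_M^Q from Proposition
# 6.0.5: stop as soon as the left-endpoint Riemann approximation of
# int_0^t ||Q(s)||_V^2 ds reaches M, otherwise return T.
def hitting_time(sq_norms, dt, M, T):
    """sq_norms[k] ~ ||Q(t_k)||_V^2 on the grid t_k = k*dt."""
    running = np.cumsum(sq_norms) * dt
    hit = np.nonzero(running >= M)[0]
    return T if hit.size == 0 else hit[0] * dt

rng = np.random.default_rng(3)
T, dt = 1.0, 1e-3
# synthetic stand-in for ||Q(t)||_V: the magnitude of a scaled random walk
path = np.abs(np.cumsum(rng.standard_normal(int(T / dt)))) * np.sqrt(dt)
print("T_M^Q for M=0.2:", hitting_time(path ** 2, dt, M=0.2, T=T))
```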
Theorem 6.0.7. [B3] Let $u$ and $u^n$ be the corresponding solutions of the SNSE and of the Galerkin approximations in Definitions 1.4.1 and 1.4.3, respectively. Then, for each fixed time $T$, the following convergence holds:
\[
\mathbb{E}\int_0^T\|u-u^n\|_V^2\,dt\to0 \qquad (0.57)
\]
as $n\to\infty$.

To prove the result, we first give the following lemma.

Lemma 6.0.8. [B3] There exists a positive constant $C$ such that for all $n\in\mathbb{N}$
\[
\mathbb{E}\|u^n(T)\|_H^2+2\nu\,\mathbb{E}\int_0^T\|u^n(t)\|_V^2\,dt
\le C\Big(\mathbb{E}\|u_0\|_H^2+\mathbb{E}\int_0^T\|f(t)\|_H^2\,dt\Big),
\]
and each of the expressions
\[
\sup_{t\in[0,T]}\mathbb{E}\|u^n(t)\|_H^4, \qquad
\mathbb{E}\int_0^T\|u^n(t)\|_V^2\|u^n(t)\|_H^2\,dt, \qquad
\mathbb{E}\Big(\int_0^T\|u^n(t)\|_V^2\,dt\Big)^2
\]
is less than or equal to $C\big[\mathbb{E}\|u_0\|_H^4+\mathbb{E}\int_0^T\|f(t)\|_H^4\,dt\big]$.

Proof. Let $n$ be an arbitrary fixed natural number. We rewrite the Galerkin equation as
\[
\langle u^n(t),h_i\rangle+\nu\int_0^t\langle Au^n(s),h_i\rangle\,ds
=\langle u_0,h_i\rangle+\int_0^t\langle B(u^n(s),u^n(s)),h_i\rangle\,ds
+\int_0^t\langle f(s),h_i\rangle\,ds+\int_0^t\langle g(u^n),h_i\rangle\,dW_s,
\]
for $i=1,\dots,n$, $t\in[0,T]$ and a.e. $\omega\in\Omega$. Let $z(t)=e^{-ct}$, where $c>0$ is a suitable constant depending on the Lipschitz constant of $g$. From the equation above and the Ito lemma, we get
\[
z(t)\|u^n(t)\|_H^2+2\nu\int_0^t z(s)\langle Au^n(s),u^n(s)\rangle\,ds
=\|u_0\|_H^2+2\int_0^t z(s)\langle f(s),u^n(s)\rangle\,ds
+\int_0^t z(s)\|g(u^n)\|_H^2\,ds-c\int_0^t z(s)\|u^n(s)\|_H^2\,ds
+2\int_0^t z(s)\langle g(u^n),u^n\rangle\,dW_s, \qquad (0.58)
\]
and, applying Ito once more,
\[
z(t)\|u^n(t)\|_H^4+4\nu\int_0^t z(s)\langle Au^n(s),u^n(s)\rangle\|u^n(s)\|_H^2\,ds
=\|u_0\|_H^4+4\int_0^t z(s)\langle g(u^n),u^n\rangle^2\,ds
+2\int_0^t z(s)\|g(u^n)\|_H^2\|u^n\|_H^2\,ds
-c\int_0^t z(s)\|u^n(s)\|_H^4\,ds
+4\int_0^t z(s)\langle f(s),u^n(s)\rangle\|u^n(s)\|_H^2\,ds
+4\int_0^t z(s)\langle g(u^n),u^n\rangle\|u^n\|_H^2\,dW_s. \qquad (0.59)
\]
Hence, using the Lipschitz bound on $g$, the Cauchy-Schwarz and Young inequalities, and the choice of $c$, we get
\[
z(t)\|u^n(t)\|_H^2+2\nu\int_0^t z(s)\|u^n(s)\|_V^2\,ds
\le\|u_0\|_H^2+\int_0^t z(s)\|f(s)\|_H^2\,ds+2\int_0^t z(s)\langle g(u^n),u^n\rangle\,dW_s \qquad (0.60)
\]
and
\[
z(t)\|u^n(t)\|_H^4+4\nu\int_0^t z(s)\|u^n\|_V^2\|u^n\|_H^2\,ds
\le\|u_0\|_H^4+\int_0^t z(s)\|f(s)\|_H^4\,ds+4\int_0^t z(s)\langle g(u^n),u^n\rangle\|u^n(s)\|_H^2\,dW_s. \qquad (0.61)
\]
Squaring both sides of the inequality in (0.60), we obtain
\[
z^2(t)\|u^n(t)\|_H^4+4\nu^2\Big(\int_0^t z(s)\|u^n(s)\|_V^2\,ds\Big)^2
\le3\|u_0\|_H^4+3\Big(\int_0^t z(s)\|f(s)\|_H^2\,ds\Big)^2+12\Big(\int_0^t z(s)\langle g(u^n),u^n\rangle\,dW_s\Big)^2. \qquad (0.62)
\]
Taking expectations in (0.60) and (0.61), the stochastic integrals being martingales, we get
\[
\mathbb{E}\,z(t)\|u^n(t)\|_H^2+2\nu\,\mathbb{E}\int_0^t z(s)\|u^n(s)\|_V^2\,ds\le\mathbb{E}\|u_0\|_H^2+\mathbb{E}\int_0^t z(s)\|f(s)\|_H^2\,ds, \qquad
\mathbb{E}\,z(t)\|u^n(t)\|_H^4+4\nu\,\mathbb{E}\int_0^t z(s)\|u^n\|_V^2\|u^n\|_H^2\,ds\le\mathbb{E}\|u_0\|_H^4+\mathbb{E}\int_0^t z(s)\|f(s)\|_H^4\,ds, \qquad (0.63)
\]
and, from (0.62),
\[
\mathbb{E}\,z^2(t)\|u^n(t)\|_H^4+4\nu^2\,\mathbb{E}\Big(\int_0^t z(s)\|u^n\|_V^2\,ds\Big)^2
\le3\mathbb{E}\|u_0\|_H^4+3\mathbb{E}\Big(\int_0^t z(s)\|f(s)\|_H^2\,ds\Big)^2+12\,\mathbb{E}\Big(\int_0^t z(s)\langle g(u^n),u^n\rangle\,dW_s\Big)^2. \qquad (0.64)
\]
Hence we have
\[
\mathbb{E}\|u^n(T)\|_H^2+2\nu\,\mathbb{E}\int_0^T\|u^n(s)\|_V^2\,ds\le C\Big(\mathbb{E}\|u_0\|_H^2+\mathbb{E}\int_0^T\|f(s)\|_H^2\,ds\Big), \qquad
\sup_{t\in[0,T]}\mathbb{E}\|u^n(t)\|_H^4+4\nu\,\mathbb{E}\int_0^T\|u^n\|_V^2\|u^n\|_H^2\,ds\le C\Big[\mathbb{E}\|u_0\|_H^4+\mathbb{E}\int_0^T\|f(s)\|_H^4\,ds\Big]. \qquad (0.65)
\]
By (0.64) and the Ito isometry, we get
\[
\mathbb{E}\,z^2(t)\|u^n(t)\|_H^4+4\nu^2\,\mathbb{E}\Big(\int_0^t z(s)\|u^n(s)\|_V^2\,ds\Big)^2
\le3\mathbb{E}\|u_0\|_H^4+3t\,\mathbb{E}\int_0^t z^2(s)\|f(s)\|_H^4\,ds+C\,\mathbb{E}\int_0^t\|u^n(s)\|_H^4\,ds. \qquad (0.66)
\]
From (0.65) and (0.66), we conclude that
\[
\mathbb{E}\Big(\int_0^T\|u^n(s)\|_V^2\,ds\Big)^2\le C\Big[\mathbb{E}\|u_0\|_H^4+\mathbb{E}\int_0^T\|f(s)\|_H^4\,ds\Big]. \qquad (0.67)
\]

Lemma 6.0.9. [B4] We have the following. There exist $u\in L^2_V(\Omega\times[0,T])$, $B^*\in L^2_V(\Omega\times[0,T])$, $g^*\in L^2_H(\Omega\times[0,T])$ and a subsequence $\{n'\}$ of $\{n\}$ such that, as $n'\to\infty$,
\[
u^{n'}\rightharpoonup u \ \text{in }L^2_V(\Omega\times[0,T]), \qquad
B(u^{n'},u^{n'})\rightharpoonup B^* \ \text{in }L^2_V(\Omega\times[0,T]), \qquad
g(u^{n'})\rightharpoonup g^* \ \text{in }L^2_H(\Omega\times[0,T]).
\]
For all $v\in V$, $t\in[0,T]$ and a.e. $\omega\in\Omega$, the process $(u(t))_{t\in[0,T]}$ satisfies the equation
\[
\langle u,v\rangle+\nu\int_0^t\langle Au,v\rangle\,ds
=\langle u_0,v\rangle+\int_0^t\langle B^*(s),v\rangle\,ds+\int_0^t\langle f,v\rangle\,ds+\int_0^t\langle g^*(s),v\rangle\,dW_s.
\]
The process $(u(t))_{t\in[0,T]}$ has almost surely continuous trajectories in $H$. Moreover, $(u(t))_{t\in[0,T]}$ is a strong solution of the SNSE as in Definition 1.4.1, and with probability one it is the unique such solution.
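The flavour of the Galerkin convergence in Theorem 6.0.7 can be seen numerically on a much simpler linear model. The sketch below simulates a spectral Galerkin truncation of the one-dimensional stochastic heat equation on $(0,\pi)$ with Dirichlet boundary conditions, for which the Fourier-sine modes decouple, so the $n$-mode Galerkin solution is exactly the projection of the reference solution; this linear 1-D setting, the noise coefficients, and the exponential-Euler time stepping are assumptions of the demo and are not the 2-D SNSE of the thesis.

```python
import numpy as np

# Single-path sample of int_0^T ||u - u^n||_V^2 dt for a toy 1-D stochastic
# heat equation du = u_xx dt + sigma dW on (0, pi), written in sine modes.
rng = np.random.default_rng(4)
T, N, sigma, n, n_ref = 1.0, 2000, 0.3, 8, 64
dt = T / N
k = np.arange(1, n_ref + 1)       # wavenumbers; eigenvalues of -d^2/dx^2 are k^2
a = np.zeros(n_ref)               # Fourier-sine coefficients of the reference solution
a[0] = 1.0                        # initial data: first eigenmode
err_sq_V = 0.0

for _ in range(N):
    dW = np.sqrt(dt) * rng.standard_normal(n_ref)
    # exponential-Euler step; noise amplitude sigma/k is one choice among many
    a = np.exp(-(k ** 2) * dt) * a + sigma * dW / k
    # modes decouple, so the n-mode Galerkin solution is the first n coordinates;
    # the V-norm (H^1_0) error is carried entirely by the modes above n
    err_sq_V += np.sum(k[n:] ** 2 * a[n:] ** 2) * dt

print("sample of int_0^T ||u - u^n||_V^2 dt:", err_sq_V)
```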
Lemma 6.0.10. [B3] Define the stopping times
\[
\tau_M=\begin{cases}
T, & \text{if }\int_0^T\|u(s)\|_V^2\,ds<M,\\[2pt]
\inf\{t\in[0,T]:\int_0^t\|u(s)\|_V^2\,ds\ge M\}, & \text{otherwise.}
\end{cases} \qquad (0.68)
\]
Then the following convergences hold:
\[
\lim_{M\to\infty}\mathbb{P}(\tau_M<T)=0, \qquad\text{and, for a.e. }\omega\in\Omega,\ \lim_{M\to\infty}\tau_M=T.
\]

Proof. From the previous lemma we have
\[
\int_0^T\|u(s)\|_V^2\,ds<\infty \quad\text{for a.e. }\omega\in\Omega.
\]
Hence we have
\[
\lim_{M\to\infty}\mathbb{P}(\tau_M<T)=\lim_{M\to\infty}\mathbb{P}\Big(\int_0^T\|u(s)\|_V^2\,ds\ge M\Big)
\le\mathbb{P}\Big(\bigcap_{M=1}^{\infty}\Big\{\int_0^T\|u(s)\|_V^2\,ds\ge M\Big\}\Big)=0.
\]

Next, we state the following two lemmas from [B3].

Lemma 6.0.11. [B3] For each fixed natural number $M$, there exists a subsequence $\{n_k\}_{k\ge1}$ such that
\[
\mathbb{E}\int_0^{\tau_M}\|u-u^{n_k}\|_V^2\,ds\to0 \qquad (0.69)
\]
as $n_k\to\infty$.

Lemma 6.0.12. [B3] There exists a positive constant $C$ such that
\[
\mathbb{E}\Big(\int_0^T\|u(s)\|_V^2\,ds\Big)^2\le C\Big[\mathbb{E}\|u_0\|_H^4+\mathbb{E}\int_0^T\|f(s)\|_H^4\,ds\Big].
\]

We can now prove Theorem 6.0.7.

Proof. By Lemma 6.0.11, for any constant $M$,
\[
\mathbb{E}\int_0^{\tau_M}\|u(s)-u^{n'}(s)\|_V^2\,ds\to0 \qquad (0.70)
\]
as $n'\to\infty$. By Lemma 6.0.10, there exists some $M_0$ such that
\[
\mathbb{P}(\tau_{M_0}<T)\le\frac{\delta}{2}, \qquad (0.71)
\]
and, by the convergence above,
\[
\mathbb{E}\int_0^{\tau_{M_0}}\|u(s)-u^{n'}(s)\|_V^2\,ds\to0 \qquad (0.72)
\]
as $n'\to\infty$. Hence, for given $\epsilon$ and $\delta$, we have
\[
\mathbb{P}\Big(\int_0^T\|u-u^{n'}\|_V^2\,ds\ge\epsilon\Big)
\le\mathbb{P}(\tau_{M_0}<T)+\mathbb{P}\Big(\{\tau_{M_0}=T\}\cap\Big\{\int_0^T\|u-u^{n'}\|_V^2\,ds\ge\epsilon\Big\}\Big)
\le\frac{\delta}{2}+\mathbb{P}\Big(\int_0^{\tau_{M_0}}\|u-u^{n'}\|_V^2\,ds\ge\epsilon\Big), \qquad (0.73)
\]
so $\int_0^T\|u-u^{n'}\|_V^2\,ds\to0$ in probability. But we also have
\[
\mathbb{E}\Big(\int_0^T\|u^n(t)\|_V^2\,dt\Big)^2\le C\Big[\mathbb{E}\|u_0\|_H^4+\mathbb{E}\int_0^T\|f(t)\|_H^4\,dt\Big],
\]
so the family $\{\int_0^T\|u-u^{n'}\|_V^2\,ds\}$ is uniformly integrable, which together implies that
\[
\mathbb{E}\int_0^T\|u-u^{n'}\|_V^2\,ds\to0 \qquad (0.74)
\]
for that subsequence $n'\to\infty$. Since every subsequence of $(u^n)$ has a further subsequence which converges in the $L^2_V(\Omega\times[0,T])$ sense to the same limit $u$, the whole sequence $u^n$ converges to the same limit $u$ in the $L^2_V(\Omega\times[0,T])$ sense, i.e.
\[
\mathbb{E}\int_0^T\|u-u^n\|_V^2\,ds\to0 \qquad (0.75)
\]
as $n\to\infty$.

Next, we proceed to prove Theorem 4.3.2.
Lemma 6.0.13. [B3] There exists a positive constant $C$, depending on the data and on $T$, such that each of the expressions
\[
\sup_{t\in[0,T]}\mathbb{E}\|u_n(t)\|_H^4, \qquad \mathbb{E}\Big(\int_0^T\|u_n(s)\|_V^2\,ds\Big)^2 \qquad (0.76)
\]
for $n\in\mathbb{N}_0$ is less than or equal to $C\big(\mathbb{E}\|u_0\|_H^4+\mathbb{E}\int_0^T\|f(s)\|_H^4\,ds\big)$. Here $u_n$ denotes the $n$-th iterate of the linearized scheme, driven by $B(u_{n-1},u_n)$ and $g(u_{n-1})$.

Proof. Using $z(t)=e^{-\gamma t}$, with $\gamma>0$ chosen sufficiently large (depending on the Lipschitz constant of $g$), and applying Ito, we have
\[
z(t)\|u_n(t)\|_H^2+2\nu\int_0^t z(s)\langle Au_n,u_n\rangle\,ds
=\|u_0\|_H^2+2\int_0^t z(s)\langle f(s),u_n(s)\rangle\,ds+\int_0^t z(s)\|g(u_{n-1})\|_H^2\,ds
-\gamma\int_0^t z(s)\|u_n(s)\|_H^2\,ds+2\int_0^t z(s)\langle g(u_{n-1}(s)),u_n(s)\rangle\,dW_s \qquad (0.77)
\]
and
\[
z(t)\|u_n(t)\|_H^4+4\nu\int_0^t z(s)\langle Au_n,u_n\rangle\|u_n\|_H^2\,ds
=\|u_0\|_H^4+2\int_0^t z(s)\|g(u_{n-1})\|_H^2\|u_n\|_H^2\,ds
-\gamma\int_0^t z(s)\|u_n(s)\|_H^4\,ds
+4\int_0^t z(s)\langle f(s),u_n(s)\rangle\|u_n(s)\|_H^2\,ds
+4\int_0^t z(s)\langle g(u_{n-1}),u_n\rangle^2\,ds
+4\int_0^t z(s)\langle g(u_{n-1}),u_n\rangle\|u_n(s)\|_H^2\,dW_s. \qquad (0.78)
\]
Hence, with $\gamma$ chosen appropriately,
\[
z(t)\|u_n(t)\|_H^2+2\nu\int_0^t z(s)\|u_n(s)\|_V^2\,ds+6\int_0^t z(s)\|u_n\|_H^2\,ds
\le\|u_0\|_H^2+\int_0^t z(s)\|f(s)\|_H^2\,ds+\int_0^t z(s)\|u_{n-1}\|_H^2\,ds
+2\int_0^t z(s)\langle g(u_{n-1}),u_n(s)\rangle\,dW_s, \qquad (0.79)
\]
and, similarly,
\[
z(t)\|u_n(t)\|_H^4+4\nu\int_0^t z(s)\|u_n(s)\|_V^2\|u_n(s)\|_H^2\,ds+6\int_0^t z(s)\|u_n(s)\|_H^4\,ds
\le\|u_0\|_H^4+\int_0^t z(s)\|f(s)\|_H^4\,ds+3\int_0^t z(s)\|u_{n-1}(s)\|_H^4\,ds
+4\int_0^t z(s)\langle g(u_{n-1}),u_n(s)\rangle\|u_n(s)\|_H^2\,dW_s. \qquad (0.80)
\]
Squaring both sides of the inequality in (0.79), we have
\[
z^2(t)\|u_n(t)\|_H^4+4\nu^2\Big(\int_0^t z(s)\|u_n(s)\|_V^2\,ds\Big)^2+36\Big(\int_0^t z(s)\|u_n(s)\|_H^2\,ds\Big)^2
\le4\|u_0\|_H^4+4\Big(\int_0^t z(s)\|f(s)\|_H^2\,ds\Big)^2+4\Big(\int_0^t z(s)\|u_{n-1}(s)\|_H^2\,ds\Big)^2
+16\Big(\int_0^t z(s)\langle g(u_{n-1}),u_n(s)\rangle\,dW_s\Big)^2. \qquad (0.81)
\]
Moreover, by (0.80), we have
\[
\mathbb{E}\,z(t)\|u_n(t)\|_H^4+6\,\mathbb{E}\int_0^t z(s)\|u_n(s)\|_H^4\,ds
\le\mathbb{E}\|u_0\|_H^4+\mathbb{E}\int_0^t z(s)\|f(s)\|_H^4\,ds+3\,\mathbb{E}\int_0^t z(s)\|u_{n-1}(s)\|_H^4\,ds. \qquad (0.82)
\]
By successive application of (0.82), we obtain
\[
\mathbb{E}\,z(t)\|u_n(t)\|_H^4+6\,\mathbb{E}\int_0^t z(s)\|u_n(s)\|_H^4\,ds
\le\Big(1+\frac12+\dots+\frac{1}{2^{n-1}}\Big)\Big(\mathbb{E}\|u_0\|_H^4+\mathbb{E}\int_0^t z(s)\|f(s)\|_H^4\,ds\Big). \qquad (0.83)
\]
Hence we have
\[
\sup_{t\in[0,T]}\mathbb{E}\|u_n(t)\|_H^4+6\,\mathbb{E}\int_0^T\|u_n(s)\|_H^4\,ds
\le C\Big(\mathbb{E}\|u_0\|_H^4+\mathbb{E}\int_0^T\|f(s)\|_H^4\,ds\Big). \qquad (0.84)
\]
Using the Ito isometry in (0.81), we have
\[
4\nu^2\,\mathbb{E}\Big(\int_0^t z(s)\|u_n(s)\|_V^2\,ds\Big)^2
\le4\mathbb{E}\|u_0\|_H^4+4t\,\mathbb{E}\int_0^t z^2(s)\|f(s)\|_H^4\,ds
+C\,\mathbb{E}\int_0^t z^2(s)\big(\|u_{n-1}(s)\|_H^4+\|u_n(s)\|_H^4\big)\,ds. \qquad (0.85)
\]
Using (0.84), we conclude that
\[
\mathbb{E}\Big(\int_0^T\|u_n(s)\|_V^2\,ds\Big)^2\le C\Big(\mathbb{E}\|u_0\|_H^4+\mathbb{E}\int_0^T\|f(s)\|_H^4\,ds\Big). \qquad (0.86)
\]
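To illustrate the mechanism of such a linearized scheme, where the quadratic term $B(u,u)$ is replaced by $B(u_{n-1},u_n)$ so that each iterate solves a problem that is linear in the unknown, here is a toy scalar sketch; the ODE, its coefficients, and the forcing are arbitrary stand-ins and not the SNSE of the thesis.

```python
import numpy as np

# Toy Picard-type linearized iteration: each u_n solves the *linear* problem
# u_n' = -u_n + 0.3 * u_{n-1} * u_n + f, mimicking the replacement of b(u,u)
# by b(u_{n-1}, u_n) in the scheme analyzed in Lemmas 6.0.13-6.0.15.
T, N = 2.0, 2000
dt = T / N
f, u0, iterations = 0.5, 1.0, 6

def solve_linearized(prev):
    """Explicit Euler solve of u_n' = -u_n + 0.3*prev*u_n + f, linear in u_n."""
    u = np.empty(N + 1)
    u[0] = u0
    for k in range(N):
        u[k + 1] = u[k] + dt * (-u[k] + 0.3 * prev[k] * u[k] + f)
    return u

u_prev = np.full(N + 1, u0)   # start the iteration from the constant path u_0
for n in range(1, iterations + 1):
    u_next = solve_linearized(u_prev)
    print(f"iterate {n}: sup |u_n - u_(n-1)| = {np.max(np.abs(u_next - u_prev)):.2e}")
    u_prev = u_next
```

Running the loop shows the successive differences shrinking rapidly, which is the qualitative behaviour the geometric-series bound (0.83) encodes for the genuine scheme.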
Lemma 6.0.14. [B3] Let $y(t)=\exp\big(-\lambda t-2\int_0^t\|u(s)\|_V^2\,ds\big)$ for all $t\in[0,T]$ and $\omega\in\Omega$, for some constant $\lambda>0$. Then we have
\[
\mathbb{E}\int_0^T y(s)\|u(s)-u_n(s)\|_V^2\,ds\to0 \qquad (0.87)
\]
and
\[
\mathbb{E}\big[y(t)\|u_n(t)-u(t)\|_H^2\big]\to0 \qquad (0.88)
\]
as $n\to\infty$, for all $t\in[0,T]$.

Proof. Denoting
\[
s_N(t)=\sum_{n=1}^N y(t)\|u-u_n\|_H^2, \qquad S_N(t)=\sum_{n=1}^N y(t)\|u-u_n\|_V^2, \qquad (0.90)
\]
with $N$ a natural number, $t\in[0,T]$, $\omega\in\Omega$, and applying Ito, we have
\[
\begin{aligned}
y(t)\|u(t)-u_n(t)\|_H^2+2\nu\int_0^t y(s)\langle A(u-u_n),u-u_n\rangle\,ds
&=2\int_0^t y(s)\langle B(u_{n-1},u_n)-B(u(s),u(s)),u_n(s)-u(s)\rangle\,ds
-2\int_0^t y(s)\|u(s)\|_V^2\|u-u_n\|_H^2\,ds\\
&\quad-\lambda\int_0^t y(s)\|u(s)-u_n(s)\|_H^2\,ds
+\int_0^t y(s)\|g(u_{n-1})-g(u)\|_H^2\,ds
+2\int_0^t y(s)\langle g(u_{n-1})-g(u),u-u_n\rangle\,dW_s \qquad (0.91)
\end{aligned}
\]
for all $t\in[0,T]$ and a.e. $\omega$. Next, we estimate the nonlinear term as follows:
\[
\begin{aligned}
2\langle B(u_{n-1},u_n)-B(u,u),u_n-u\rangle
&=-2\langle B(u_{n-1}-u,u_n-u),u\rangle\\
&\le C\|u\|_V\,\|u_{n-1}-u\|_V^{1/2}\|u_{n-1}-u\|_H^{1/2}\,\|u_n-u\|_V^{1/2}\|u_n-u\|_H^{1/2}\\
&\le\frac{\nu}{2}\|u_{n-1}-u\|_V^2+\frac{\nu}{2}\|u_n-u\|_V^2+C\|u\|_V^2\|u_{n-1}-u\|_H^2+C\|u\|_V^2\|u-u_n\|_H^2 \qquad (0.92)
\end{aligned}
\]
for all $s\in[0,T]$ and a.e. $\omega\in\Omega$. Hence we have
\[
\begin{aligned}
y(t)\|u_n-u\|_H^2+\frac{3\nu}{2}\int_0^t y(s)\|u_n-u\|_V^2\,ds+\lambda\int_0^t y(s)\|u-u_n\|_H^2\,ds
&\le\frac{\nu}{2}\int_0^t y(s)\|u-u_{n-1}\|_V^2\,ds
+2\int_0^t y(s)\langle g(u_{n-1})-g(u),u_n-u\rangle\,dW_s\\
&\quad+\int_0^t y(s)\|u\|_V^2\big(\|u_{n-1}-u\|_H^2-\|u_n-u\|_H^2\big)\,ds
+\lambda\int_0^t y(s)\|u_{n-1}-u\|_H^2\,ds \qquad (0.93)
\end{aligned}
\]
for all $t\in[0,T]$ and a.e. $\omega$. Hence, summing these estimates over $n$ up to an arbitrary $N$ and telescoping, we get
\[
\begin{aligned}
s_N(t)+\nu\int_0^t S_N(s)\,ds+\int_0^t y(s)\|u(s)\|_V^2\|u_N-u\|_H^2\,ds+\lambda\int_0^t y(s)\|u_N-u\|_H^2\,ds
&\le\frac{\nu}{2}\int_0^t y(s)\|u_0-u\|_V^2\,ds+\lambda\int_0^t y(s)\|u_0-u\|_H^2\,ds\\
&\quad+2\sum_{n=1}^N\int_0^t y(s)\langle g(u_{n-1})-g(u),u_n-u\rangle\,dW_s
+\int_0^t y(s)\|u(s)\|_V^2\|u_0-u\|_H^2\,ds \qquad (0.94)
\end{aligned}
\]
for all $t\in[0,T]$ and a.e. $\omega$. Taking expectations, we have
\[
\mathbb{E}\,s_N(t)+\mathbb{E}\int_0^t S_N(s)\,ds
\le\frac{\nu}{2}\,\mathbb{E}\int_0^t y(s)\|u_0-u\|_V^2\,ds
+\lambda\,\mathbb{E}\int_0^t y(s)\|u_0-u\|_H^2\,ds
+\mathbb{E}\int_0^t y(s)\|u(s)\|_V^2\|u_0-u\|_H^2\,ds \qquad (0.95)
\]
for all $t\in[0,T]$. Hence, by Lemmas 6.0.8 and 6.0.12, there exists a constant $C$ independent of $N$ such that
\[
\mathbb{E}\,s_N(t)+\mathbb{E}\int_0^t S_N(s)\,ds
\le\mathbb{E}\int_0^T\big(C\|u(s)\|_V^2+C\|u(s)\|_H^2+C\|u(s)\|_V^2\|u(s)\|_H^2\big)\,ds\le C. \qquad (0.96)
\]
But then, letting $N\to\infty$ and using that the summands are non-negative, we conclude that
\[
\mathbb{E}\int_0^T y(s)\|u_n(s)-u(s)\|_V^2\,ds\to0 \qquad (0.97)
\]
and
\[
\mathbb{E}\big[y(t)\|u_n(t)-u(t)\|_H^2\big]\to0 \qquad (0.98)
\]
as $n\to\infty$, for all $t\in[0,T]$.

We are now prepared to prove the main result of [B2].

Theorem 6.0.15. [B2] The following convergences hold:
\[
\mathbb{E}\int_0^T\|u_n(s)-u(s)\|_V^2\,ds\to0, \qquad (0.99)
\]
and, for all $t\in[0,T]$,
\[
\mathbb{E}\|u_n(t)-u(t)\|_H^2\to0 \qquad (0.100)
\]
as $n\to\infty$.

Proof. The proof is analogous to Proposition 6.0.6. First, for each $M\in\mathbb{N}_0$, we define the stopping time
\[
T_M=\begin{cases}
T, & \text{if }\int_0^T\|u(s)\|_V^2\,ds<M,\\[2pt]
\inf\{t\in[0,T]:\int_0^t\|u(s)\|_V^2\,ds\ge M\}, & \text{otherwise.}
\end{cases} \qquad (0.101)
\]
Hence we immediately have
\[
\int_0^{t\wedge T_M}\|u(s)\|_V^2\,ds\le M \qquad (0.102)
\]
for each $t\in[0,T]$. But since $u\in L^2_V(\Omega\times[0,T])$, we have
\[
\lim_{M\to\infty}\mathbb{P}(T_M<T)=0, \qquad (0.103)
\]
so there exists an $M_0$ such that $\mathbb{P}(T_{M_0}<T)\le\delta/2$. Then, by Lemma 6.0.14, there exists some $n_0$ such that for all $n\ge n_0$ the inequalities
\[
e^{\lambda T+2M_0}\,\mathbb{E}\int_0^T y(s)\|u_n-u\|_V^2\,ds\le\frac{\epsilon\delta}{2}, \qquad
e^{\lambda T+2M_0}\,\mathbb{E}\big[y(t)\|u-u_n\|_H^2\big]\le\frac{\epsilon\delta}{2} \qquad (0.104)
\]
hold. Hence, for all $n\ge n_0$, we can write
\[
\mathbb{P}\Big(\int_0^T\|u(s)-u_n(s)\|_V^2\,ds\ge\epsilon\Big)
\le\mathbb{P}(T_{M_0}<T)+\mathbb{P}\Big(\{T=T_{M_0}\}\cap\Big\{\int_0^T\|u-u_n\|_V^2\,ds\ge\epsilon\Big\}\Big)
\le\frac{\delta}{2}+\mathbb{P}\Big(\int_0^T y(s)\|u-u_n\|_V^2\,ds\ge\epsilon\,e^{-\lambda T-2M_0}\Big)\le\delta. \qquad (0.105)
\]
Similarly, we have
\[
\mathbb{P}\big(\|u-u_n\|_H^2\ge\epsilon\big)\le\delta. \qquad (0.106)
\]
By Lemma 6.0.13 we also have uniform integrability; hence we have $L^1$-convergence with respect to $\omega$, and we conclude that
\[
\mathbb{E}\int_0^T\|u-u_n\|_V^2\,dt\to0, \qquad \mathbb{E}\|u-u_n\|_H^2\to0 \ \text{for all }t\in[0,T], \qquad (0.107)
\]
as $n\to\infty$.
We proceed to prove Theorem 5.3.2 of Chapter 5.

Theorem 6.0.16. [B4] Let $\mathcal{U}$ be a set of bounded continuous linear feedback controls. Let $\{\phi_n\}_{n\ge1}$ be a sequence in $\mathcal{U}$ and $\phi\in\mathcal{U}$ be such that
\[
\lim_{n\to\infty}\int_0^T\|\phi_n(t,\cdot)-\phi(t,\cdot)\|_{L(H)}^2\,dt=0, \qquad (0.108)
\]
where, for $t\in[0,T]$ and $x_1,x_2,y_1,y_2\in H$, we have
\[
\mathcal{L}\colon[0,T]\times H\times H\to\mathbb{R}_+, \qquad K\colon H\to\mathbb{R}_+,
\]
\[
|\mathcal{L}(t,x_1,y_1)-\mathcal{L}(t,x_2,y_2)|\le C\big(\|x_1-x_2\|_H^2+\|y_1-y_2\|_H^2\big), \qquad
|K(x_1)-K(x_2)|\le C\|x_1-x_2\|_H^2, \qquad (0.109)
\]
and $J(\phi)=\mathbb{E}\int_0^T\mathcal{L}(s,u^{\phi},\phi)\,ds+\mathbb{E}K(u^{\phi}(T))$. Then we have
\[
\lim_{n\to\infty}J(\phi_n)=J(\phi). \qquad (0.110)
\]

Proof. Let $u:=u^{\phi}$, $u_n:=u^{\phi_n}$, and $e(t)=\exp\big(-b\int_0^t\|u(s)\|_V^2\,ds-(\lambda+2\sqrt{\lambda}+1)t\big)$, where $b$ and $\lambda$ are suitable positive constants. Applying Ito, we have
\[
\begin{aligned}
e(t)\|u-u_n\|_H^2+2\nu\int_0^t e(s)\langle A(u-u_n),u-u_n\rangle\,ds
&\le2\int_0^t e(s)\langle B(u,u)-B(u_n,u_n),u-u_n(s)\rangle\,ds
-b\int_0^t e(s)\|u(s)\|_V^2\|u-u_n\|_H^2\,ds\\
&\quad-(\lambda+2\sqrt{\lambda}+1)\int_0^t e(s)\|u-u_n\|_H^2\,ds
+2\int_0^t e(s)\langle\phi(s,u)-\phi_n(s,u_n(s)),u-u_n(s)\rangle\,ds\\
&\quad+\int_0^t e(s)\|g(u)-g(u_n)\|_H^2\,ds
+2\int_0^t e(s)\langle g(u)-g(u_n),u-u_n\rangle\,dW_s. \qquad (0.111)
\end{aligned}
\]
Next, we estimate the nonlinear term as
\[
2\langle B(u,u)-B(u_n,u_n),u-u_n\rangle=2\langle B(u-u_n,u),u-u_n\rangle
\le b\|u\|_V^2\|u-u_n\|_H^2+\nu\|u-u_n\|_V^2. \qquad (0.112)
\]
Hence, by the Lipschitz assumptions on $\phi$ and $g$,
\[
\begin{aligned}
\mathbb{E}\sup_{s\in[0,t]}e(s)\|u-u_n\|_H^2+\nu\,\mathbb{E}\int_0^t e(s)\|u-u_n\|_V^2\,ds
&\le2\,\mathbb{E}\int_0^t e(s)\|\phi(s,u)-\phi_n(s,u)\|_H^2\,ds
+4\,\mathbb{E}\sup_{s\in[0,t]}\Big|\int_0^s e(r)\langle g(u)-g(u_n),u-u_n\rangle\,dW_r\Big|\\
&\le2\,\mathbb{E}\int_0^t e(s)\|\phi(s,u)-\phi_n(s,u)\|_H^2\,ds
+C\,\mathbb{E}\int_0^t\sup_{r\in[0,s]}\|u-u_n\|_H^2\,ds
+\frac12\mathbb{E}\sup_{s\in[0,t]}e(s)\|u-u_n\|_H^2, \qquad (0.113)
\end{aligned}
\]
where $C$ is a constant and $t\in[0,T]$. By Gronwall's lemma, we conclude that
\[
\mathbb{E}\sup_{s\in[0,t]}e(s)\|u-u_n\|_H^2+2\nu\,\mathbb{E}\int_0^t e(s)\|u-u_n\|_V^2\,ds
\le C\,\mathbb{E}\int_0^T\|\phi(s,u)-\phi_n(s,u)\|_H^2\,ds \qquad (0.114)
\]
for all $t\in[0,T]$. Finally, applying Proposition 6.0.6 with the stopping times $T_M=T^u_M$ and with $Q_n=u-u_n$, together with (0.114), (0.108) and the Lipschitz conditions (0.109), we conclude the proof.

Bibliography

[AT] F. Abergel and R. Temam, On some control problems in fluid mechanics, Theoret. Comput. Fluid Dynamics 1 (1990), 303-325.
[BT] A. Bensoussan and R. Temam, Equations stochastiques du type Navier-Stokes, J. Functional Analysis 13 (1973), 195-222.
[B2] H. Breckner, Approximation of the solution of the stochastic Navier-Stokes equation, Optimization 49 (2001), no. 1-2, 15-38.
[B] H. Breckner, Existence of optimal and epsilon-optimal controls for the stochastic Navier-Stokes equation, J. Appl. Math. Stochastic Anal. 13 (2000), no. 3, 239-259.
[B3] H. Breckner, Galerkin approximation and the strong solution of the Navier-Stokes equation, J. Appl. Math. Stochastic Anal. 13 (2000), no. 3, 239-259.
[B4] H. Breckner, Approximation and Optimal Control of the Stochastic Navier-Stokes Equation, Ph.D. Dissertation, Martin-Luther-Universität Halle-Wittenberg, 2000.
[BKL] J. Bricmont, A. Kupiainen, and R. Lefevere, Probabilistic estimates for the two-dimensional stochastic Navier-Stokes equations, J. Statist. Phys. 100 (2000), no. 3-4, 743-756.
[BP] Z. Brzeźniak and S. Peszat, Strong local and global solutions for stochastic Navier-Stokes equations, Infinite Dimensional Stochastic Analysis (Amsterdam, 1999), Verh. Afd. Natuurkd. 1. Reeks. K. Ned. Akad. Wet., vol. 52, R. Neth. Acad. Arts Sci., Amsterdam, 2000, pp. 85-98.
[CC] M. Capiński and N.J. Cutland, Nonstandard Methods for Stochastic Fluid Mechanics, World Scientific, Singapore, 1995.
[CF2] P. Constantin and C. Foias, Navier-Stokes Equations, Chicago Lectures in Mathematics, University of Chicago Press, Chicago, IL, 1988.
[CG] M. Capiński and D. Gatarek, Stochastic equations in Hilbert space with application to Navier-Stokes equations in any dimension, J. Funct. Anal. 126 (1994), no. 1, 26-35.
[CP] M. Capiński and S. Peszat, Local existence and uniqueness of strong solutions to 3-D stochastic Navier-Stokes equations, NoDEA Nonlinear Differential Equations Appl. 4 (1997), no. 2, 185-200.
[CTMK] H. Choi, R. Temam, P. Moin, and J. Kim, Feedback control for unsteady flow and its application to the stochastic Burgers equation, J. Fluid Mech. 253 (1993), 509-543.
[C] A.B. Cruzeiro, Solutions et mesures invariantes pour des équations d'évolution stochastiques du type Navier-Stokes, Exposition. Math. 7 (1989), no. 1, 73-82.
[DD] G. Da Prato and A. Debussche, Ergodicity for the 3D stochastic Navier-Stokes equations, J. Math. Pures Appl. (9) 82 (2003), no. 8, 877-947.
[DZ] G. Da Prato and J. Zabczyk, Stochastic Equations in Infinite Dimensions, Encyclopedia of Mathematics and its Applications, vol. 44, Cambridge University Press, Cambridge, 1992.
[DGT] A. Debussche, N. Glatt-Holtz, and R. Temam, Local martingale and pathwise solutions for an abstract fluids model, Physica D (2011), to appear.
[D] R. Durrett, Probability: Theory and Examples, Cambridge University Press, Cambridge, 2013.
[ET] I. Ekeland and R. Temam, Convex Analysis and Variational Problems, SIAM Classics in Applied Mathematics, SIAM, Philadelphia, 1999.
[F] F. Flandoli, An introduction to 3D stochastic fluid dynamics, SPDE in Hydrodynamic: Recent Progress and Prospects, Lecture Notes in Mathematics, vol. 1942, Springer, Berlin/Heidelberg, 2008, pp. 51-150.
[FG] F. Flandoli and D. Gatarek, Martingale and stationary solutions for stochastic Navier-Stokes equations, Probab. Theory Related Fields 102 (1995), no. 3, 367-391.
[FR] F. Flandoli and M. Romito, Partial regularity for the stochastic Navier-Stokes equations, Trans. Amer. Math. Soc. 354 (2002), no. 6, 2207-2241 (electronic).
[FP] C. Foias and G. Prodi, Sur le comportement global des solutions non-stationnaires des équations de Navier-Stokes en dimension 2, Rend. Sem. Mat. Univ. Padova 39 (1967), 1-34.
[FV] A.V. Fursikov and M.J. Vishik, Mathematical Problems in Statistical Hydromechanics, Kluwer, Dordrecht, 1988.
[G] D. Gatarek, Existence of optimal controls for stochastic evolution systems, in: G. Da Prato et al. (Eds.), Control of Partial Differential Equations (IFIP WG 7.2 Conference, Villa Madruzzo, Trento, Italy, January 4-9, 1993), Lect. Notes Pure Appl. Math. 165, Marcel Dekker, New York, 1994, pp. 81-86.
[GS] D. Gatarek and J. Sobczyk, On the existence of optimal controls of Hilbert space-valued diffusions, SIAM J. Control Optim. 32 (1994), 170-175.
[GT] D. Gilbarg and N.S. Trudinger, Elliptic Partial Differential Equations of Second Order, Springer, Berlin, 2001, reprint of the 1998 edition.
[GV] N. Glatt-Holtz and V. Vicol, Local and global existence of smooth solutions for the stochastic Euler equations with multiplicative noise, Ann. Probab. 42 (2014), no. 1, 80-145.
[GZ] N. Glatt-Holtz and M. Ziane, Strong pathwise solutions of the stochastic Navier-Stokes system, Adv. Differential Equations 14 (2009), no. 5-6, 567-600.
[GR] W. Grecksch, Stochastische Evolutionsgleichungen und deren Steuerung, BSB B.G. Teubner Verlagsgesellschaft, Leipzig, 1987.
[K] S.B. Kuksin, Randomly Forced Nonlinear PDEs and Statistical Hydrodynamics in 2 Space Dimensions, Zurich Lectures in Advanced Mathematics, European Mathematical Society (EMS), Zürich, 2006.
[KUZ] I. Kukavica, K. Uğurlu, and M. Ziane, On the Galerkin approximation and norm estimates of the stochastic Navier-Stokes equations with multiplicative noise, submitted.
[KV] I. Kukavica and V. Vicol, On moments for strong solutions of the 2D stochastic Navier-Stokes equations in a bounded domain, Asymptotic Analysis 90 (2014), no. 3-4, 189-206.
[M] J.C. Mattingly, The dissipative scale of the stochastic Navier-Stokes equation: regularization and analyticity, J. Statist. Phys. 108 (2002), no. 5-6, 1157-1179. Dedicated to David Ruelle and Yasha Sinai on the occasion of their 65th birthdays.
[MR] R. Mikulevicius and B.L. Rozovskii, Stochastic Navier-Stokes equations for turbulent flows, SIAM J. Math. Anal. 35 (2004), no. 5, 1250-1310.
[MR2] R. Mikulevicius and B.L. Rozovskii, Global L2-solutions of stochastic Navier-Stokes equations, Ann. Probab. 33 (2005), no. 1, 137-176.
[MS] J.-L. Menaldi and S.S. Sritharan, Stochastic 2-D Navier-Stokes equation, Appl. Math. Optim. 46 (2002), no. 1, 31-53.
[O] C. Odasso, Spatial smoothness of the stationary solutions of the 3D Navier-Stokes equations, Electron. J. Probab. 11 (2006), no. 27, 686-699.
[PD] G. Da Prato and A. Debussche, Control of the stochastic Burgers model of turbulence, SIAM J. Control Optim. 37 (1999), 1123-1149.
[PI] G. Da Prato and A. Ichikawa, Stability and quadratic control for linear stochastic equations with unbounded coefficients, Boll. Unione Mat. Ital., VI. Ser. B 4 (1985), 987-1001.
[PR] C. Prévôt and M. Röckner, A Concise Course on Stochastic Partial Differential Equations, Lecture Notes in Mathematics, vol. 1905, Springer, Berlin, 2007.
[S] A. Shirikyan, Analyticity of solutions and Kolmogorov's dissipation scale for 2D Navier-Stokes equations, Evolution Equations (Warsaw, 2001), Banach Center Publ., vol. 60, Polish Acad. Sci., Warsaw, 2003, pp. 49-53.
[S] S.S. Sritharan, Deterministic and stochastic control of Navier-Stokes equation with linear, monotone, and hyper viscosities, Appl. Math. Optim. 41 (2000), 255-308.
[SV] D.W. Stroock and S.R.S. Varadhan, Multidimensional Diffusion Processes, Springer, Berlin, 1979.
[T] R. Temam, Navier-Stokes Equations: Theory and Numerical Analysis, AMS Chelsea Publishing, Providence, RI, 2001, reprint of the 1984 edition.
[T] C. Tudor, Optimal control for semi-linear evolution equations, Appl. Math. Optim. 20 (1989), 319-331.
[T2] C. Tudor, Optimal and ε-optimal control for the stochastic linear-quadratic problem, Math. Nachr. 145 (1990), 135-149.
[V] M. Viot, Solutions faibles d'équations aux dérivées partielles non linéaires, Thèse, Université Pierre et Marie Curie, Paris, 1976.
[Z] E. Zeidler, Nonlinear Functional Analysis and its Applications, Vol. III: Variational Methods and Optimization, Springer, New York, 1985.