PATH DEPENDENT PARTIAL DIFFERENTIAL EQUATIONS AND RELATED TOPICS

by Ibrahim Ekren

A Dissertation Presented to the FACULTY OF THE USC GRADUATE SCHOOL, UNIVERSITY OF SOUTHERN CALIFORNIA, In Partial Fulfillment of the Requirements for the Degree DOCTOR OF PHILOSOPHY (MATHEMATICS), August 2014.

Copyright 2014 Ibrahim Ekren

To my family

Acknowledgments

First and foremost, I would like to express my deep gratitude to my advisor Jianfeng Zhang for his patience, availability and guidance. Since our first meeting in December 2009, his support and wisdom have been a determining part of my Ph.D. and this thesis. I owe many thanks to Professors Yilmaz Kocer, Sergey Lototsky, Jin Ma, and Remigijus Mikulevicius for agreeing to serve on my committee and for their support throughout my Ph.D. I would like to use this opportunity to also thank Professors Igor Kukavica, Mohammed Ziane and Susan Friedlander for their help. I am also grateful to Professors Nicole El Karoui, Michel London and Nizar Touzi, whose advice and influence are among the main reasons why I started a Ph.D. in mathematics at USC. I enjoyed, especially during the frustrating moments, the emotional support of Angie, Hamdi, Chiara, Elena, Muye, Karol, Guillaume, Ozlem, Samet and other graduate students in the mathematics department. Last, but certainly not least, I would like to thank my family for their love and support.

Table of Contents

Dedication
Acknowledgments
Abstract
Chapter 1: Introduction
  1 Viscosity Solution of PDEs
    1.1 Definitions
      1.1.1 Comparison for smooth solutions
      1.1.2 Viscosity Solutions
    1.2 Comparison for viscosity solutions
    1.3 Main difficulties in the path-dependent case
  2 Notations for PPDEs
    2.1 Shifted Spaces
  3 Dupire's derivatives in the path space
  4 Derivatives in the sense of [17]
    4.1 Nonlinear Expectation
      4.1.1 Example of smooth functionals
Chapter 2: Semilinear Path-dependent PDEs
  1 Introduction
  2 Semilinear PPDE
  3 Backward stochastic differential equations
    3.1 Definition
    3.2 Nonlinear Markovian Feynman-Kac formula
  4 Path-dependent PDEs
    4.1 Properties of classical solutions
    4.2 Counterexample for the regularity
    4.3 Definition of viscosity solutions for the semilinear equation
    4.4 The value functional
    4.5 Viscosity solution property of the value functional
  5 Wellposedness results
    5.1 Stability of viscosity solutions
  6 Comparison
    6.1 Partial comparison principle
    6.2 A variation of Perron's approach
Chapter 3: Optimal Stopping under nonlinear expectation
  1 Snell envelope in the fully nonlinear case
  2 Optimal stopping under nonlinear expectations
  3 Deterministic maturity optimal stopping
    3.1 Dynamic programming principle
    3.2 Preparation for the E-martingale property
    3.3 Continuous approximation
    3.4 Proof of Theorem 60
  4 Random maturity optimal stopping
    4.1 Dynamic programming principle
    4.2 Continuous approximation of the hitting times
    4.3 Proof of Theorem 62
    4.4 E-continuity of Y at the random maturity
Chapter 4: Fully nonlinear Path-dependent PDEs
  1 Second order backward stochastic differential equations
    1.1 Preliminaries
      1.1.1 The generator for 2BSDEs
      1.1.2 The spaces and norms
    1.2 Second order BSDEs
    1.3 2BSDEs and fully nonlinear PDEs
  2 Path dependent partial differential equations
    2.1 Preliminary results
  3 Classical solutions
  4 Definition of viscosity solutions
    4.1 Relation with viscosity solutions of PDEs
    4.2 Consistency with classical solutions
  5 Some examples with representation formula
    5.1 First order PPDEs
    5.2 Semi-linear PPDEs and BSDEs
    5.3 Path dependent HJB equations and 2BSDEs
  6 Stability and partial comparison
    6.1 Stability
  7 Partial comparison of viscosity solutions
  8 First order PPDEs
Chapter 5: Wellposedness for viscosity solutions
  1 Introduction
    1.1 The extended class of smooth functionals and assumptions
    1.2 Fully nonlinear path dependent PDEs
  2 Main results
    2.1 Path-frozen PDE
    2.2 Wellposedness
    2.3 A change of variable formula
  3 Comparison, uniqueness, and change of variable
  4 Construction of a viscosity solution and proof of Theorem 148
    4.1 Regularity
    4.2 Existence of viscosity solution
    4.3 Proof of Theorem 148
  5 On Assumption 141
Chapter 6: Viscosity solutions for the obstacle problem of PPDEs
  1 Introduction to the obstacle problem
    1.0.1 The generator
  2 Introduction of the value functional for the obstacle problem
    2.1 Regularity of the value functional
  3 The PPDE
    3.1 Viscosity solutions of PPDEs
  4 Consistency
  5 A change of variable formula
  6 Viscosity solution property of the value functional
    6.1 Subsolution property of the value functional
    6.2 Supersolution property of the value functional
  7 Partial comparison
  8 Stability
  9 Comparison
Chapter 7: Appendix
  1 Proof of Proposition 152
  2 Construction of the approximating sequences
    2.1 Construction of subsolutions by penalization
    2.2 Construction of supersolutions by approximation
      2.2.1 Study of K^c
  3 Regularity of the approximating sequences
    3.0.2 Regularity of the hitting times
    3.1 Proof of Propositions 183 and 185
Bibliography

Abstract

The aim of this thesis is to extend the viscosity solutions theory of partial differential equations to the space of continuous paths.
It is well known that, in the Markovian case, the BSDE theories of various kinds provide, through the Feynman-Kac formula, a stochastic representation for viscosity solutions of parabolic PDEs. The wellposedness of BSDEs also holds in a non-Markovian framework. It is then natural to ask whether one can write a non-Markovian Feynman-Kac formula and establish a relation between non-Markovian BSDEs and the so-called path-dependent partial differential equations (PPDEs). In this thesis we give a definition of viscosity solutions of PPDEs for several kinds of equations. When the equations are fully nonlinear, this definition requires results on optimal stopping under nonlinear expectation, which we establish. These results allow us, under suitable assumptions, to prove wellposedness for viscosity solutions of PPDEs and to show that several non-Markovian control problems can be studied by means of PPDEs.

Chapter 1: Introduction

It is well known that a Markovian-type backward SDE (BSDE, for short) is associated with a semi-linear parabolic PDE via the so-called nonlinear Feynman-Kac formula; see Pardoux and Peng [42]. This relation was extended to forward-backward SDEs (FBSDEs, for short) and quasi-linear PDEs, see e.g. Ma, Protter and Yong [37], Pardoux and Tang [43], and Ma, Zhang and Zheng [39], and to second order BSDEs (2BSDEs, for short) and fully nonlinear PDEs, see e.g. Cheridito, Soner, Touzi and Victoir [5] and Soner, Touzi and Zhang [53]. The notable notion of G-expectation, proposed by Peng [46], was also motivated by the connection with fully nonlinear PDEs. In the non-Markovian case, the BSDEs (and FBSDEs, 2BSDEs) become path dependent. Owing to the connection with PDEs in the Markovian case, it has long been conjectured that a general BSDE can also be viewed as a PDE. In particular, in his ICM 2010 lecture, Peng [47] posed the question of whether a non-Markovian BSDE can be viewed as a path dependent PDE (PPDE, for short).
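In the Markovian setting the correspondence can be displayed explicitly. The following is a standard statement of the nonlinear Feynman-Kac formula of [42]; the notation here is ours and is not taken verbatim from this thesis.

```latex
\[
X_s^{t,x} = x + \int_t^s b(r,X_r^{t,x})\,dr + \int_t^s \sigma(r,X_r^{t,x})\,dW_r,
\qquad
Y_s = g(X_T^{t,x}) + \int_s^T f(r,X_r^{t,x},Y_r,Z_r)\,dr - \int_s^T Z_r\,dW_r .
\]
Then $Y_s = u(s,X_s^{t,x})$ and $Z_s = \sigma^{\top}\partial_x u(s,X_s^{t,x})$, where $u$ is
the (viscosity) solution of the semilinear parabolic PDE
\[
\partial_t u + \tfrac12\,\sigma\sigma^{\top}\! : \partial_{xx}u + b\cdot\partial_x u
  + f\big(t,x,u,\sigma^{\top}\partial_x u\big) = 0, \qquad u(T,\cdot) = g .
\]
```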
The recent work of Dupire [13], further extended by Cont and Fournie [6], provides a convenient framework for this problem. Dupire introduces the notions of horizontal derivative (which we will refer to as the time derivative) and vertical derivative (which we will refer to as the space derivative) for non-anticipative stochastic processes. One remarkable result is the functional Ito formula that holds under his definitions. As a direct consequence, if u(t, B.) is a martingale under the Wiener measure with enough regularity (in their sense), then the drift part coming from the functional Ito formula vanishes, and thus u is a classical solution of the following path dependent heat equation:
\[
\partial_t u(t,\omega) + \tfrac12\,\partial^2_{\omega\omega} u(t,\omega) = 0. \tag{0.1}
\]
It is then very natural to view BSDEs as semi-linear PPDEs, and 2BSDEs and G-martingales as fully nonlinear PPDEs. However, we emphasize that PPDEs can rarely have classical solutions, even in the case of heat equations. We refer to Peng and Wang [49] for some sufficient conditions under which a semi-linear PPDE admits a classical solution.

The main objective of this manuscript is to propose a notion of viscosity solutions of PPDEs on the space of continuous paths. The theory of viscosity solutions for standard PDEs has been well developed; we refer to the classical references Crandall, Ishii and Lions [9] and Fleming and Soner [21]. As is well understood, in the path dependent case the main challenge comes from the fact that the space variable is infinite dimensional and thus lacks local compactness. Our setting does not fall into the framework of Lions [32, 34, 33] either, where the notion of viscosity solutions is extended to Hilbert spaces by a limiting argument based on the existence of a countable basis. Consequently, the standard techniques for the comparison principle, which rely heavily on compactness arguments, fail in our context.
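As an illustration (ours, not taken from the thesis), in dimension d = 1 the functional u(t, omega) = omega_t^2 + (T - t) is a classical solution of the path dependent heat equation (0.1): its Dupire time derivative is -1 and its second vertical derivative is 2. A minimal numerical sketch, using finite-difference versions of Dupire's bump definitions; since this u depends on the path only through its current value, we pass that value directly.

```python
# Finite-difference check that u(t, omega) = omega_t**2 + (T - t)
# solves the path dependent heat equation (0.1) for d = 1.
T = 1.0

def u(t, omega_t):
    return omega_t ** 2 + (T - t)

t, x = 0.3, 0.7          # an arbitrary point (t, omega) with omega_t = x
h = 1e-5

# Dupire time derivative: extend the path by freezing its current value,
# so omega_t is unchanged while t moves forward.
dt_u = (u(t + h, x) - u(t, x)) / h

# Dupire second vertical derivative: bump the path at time t.
dww_u = (u(t, x + h) - 2.0 * u(t, x) + u(t, x - h)) / h ** 2

residual = dt_u + 0.5 * dww_u      # should be ~0 for a solution of (0.1)
print(dt_u, dww_u, residual)
```

The time difference quotient sees only the (T - t) term because the frozen path leaves omega_t unchanged, which is exactly the point of the horizontal derivative.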
We remark, though, that for first order PPDEs Lukoyanov [35] studied viscosity solutions by exploiting their special structure and adapting the compactness arguments elegantly.

To overcome this difficulty, we provide a new approach that decomposes the proof of the comparison principle into two steps. We first prove a partial comparison principle: a classical subsolution (resp. viscosity subsolution) is always less than or equal to a viscosity supersolution (resp. classical supersolution). The main idea is to use the classical one to construct a test function for the viscosity one and then obtain a contradiction. This idea was used in [15, 17, 18, 14] to prove wellposedness results for viscosity solutions of PPDEs. We will survey the results of these papers in chronological order.

In the rest of this first chapter, we give a brief presentation of viscosity solutions of PDEs, with an emphasis on the issues one faces when extending these ideas to a path-dependent framework. We then give the PPDE-related and stochastic-analysis-related definitions and notations that will be used throughout this manuscript. In the second chapter we present the viscosity solutions theory for semilinear PPDEs; this chapter contains the results of [15]. In order to extend the results of the second chapter to nonlinear PPDEs, we need some results on the optimal stopping problem; Chapter 3 contains these results, which have been published in [16]. In Chapter 4 we give some results on the viscosity solutions theory for fully nonlinear PPDEs, and in the following chapter we complete the study of fully nonlinear PPDEs by proving existence and uniqueness results; these two chapters are based on [17, 18]. Finally, in Chapter 6 we present the content of [14], which adapts the theory to the obstacle problem.

1 Viscosity Solution of PDEs

In this section we give a rapid presentation of viscosity solutions of partial differential equations. Our main references are [9], [21] and [50].
The theory covers a wide range of equations and applications. We will focus on the comparison results and give the proofs of comparison for classical solutions and for viscosity solutions of Bellman-Isaacs type equations. We will emphasize the difficulties one faces when considering the path dependent case. For interested readers, the name "viscosity" comes from the fact that the existence of this kind of solutions was originally proven by the vanishing viscosity method, [10]. However, in the current state of the art, for example in Chapter 4 of [9], the existence of solutions is proven by Perron's method. We will not dwell on these points.

1.1 Definitions

Let O be an open, connected and bounded subset of R^d, and let A, B be two measurable spaces. We assume that the following functions are given:
\[
\sigma : O \times A \times B \to S^d, \tag{1.2}
\]
\[
f : \mathbb{R}_+ \times O \times \mathbb{R} \times \mathbb{R}^d \times A \times B \to \mathbb{R}, \tag{1.3}
\]
and we define
\[
F(t,x,y,z,\gamma) := \sup_{\alpha\in A}\,\inf_{\beta\in B}\Big\{ \tfrac12\,\sigma^2(x,\alpha,\beta) : \gamma + f\big(t,x,y,z\sigma(x,\alpha,\beta),\alpha,\beta\big) \Big\},
\]
where M : N = tr(MN). We assume that \sigma and F are Lipschitz continuous in x and continuous in t, and that F is Lipschitz continuous in (y,z). We also assume that for some \lambda > 0,
\[
F(t,x,y',z,\alpha,\beta) - F(t,x,y,z,\alpha,\beta) \ge \lambda(y-y') \quad \text{for all } y \ge y'.
\]

Remark 1. This last condition is not very restrictive. A change of variable, consisting in multiplying u by an appropriate function, yields this property for any F Lipschitz continuous in y.

Under these assumptions F verifies, for all (t,x,y,y',z,\gamma,\gamma') in [0,T] x O x R^2 x R^d x (S^d)^2:
\[
F(t,x,y,z,\gamma) \le F(t,x,y,z,\gamma') \quad \text{if } \gamma \le \gamma', \tag{1.4}
\]
\[
F(t,x,y',z,\gamma) \ge \lambda(y-y') + F(t,x,y,z,\gamma) \quad \text{if } y \ge y'. \tag{1.5}
\]
We consider the following parabolic partial differential equation:
\[
-\partial_t u(t,x) - F\big(t,x,u(t,x),\partial_x u(t,x),\partial_{xx} u(t,x)\big) = 0, \quad \text{for } (t,x) \in (0,T)\times O. \tag{1.6}
\]

1.1.1 Comparison for smooth solutions

Definition 2. A function u in C^{1,2}((0,T) x O) \cap C([0,T] x \bar{O}) is a classical subsolution (resp. supersolution) of (1.6) if
\[
-\partial_t u(t,x) - F\big(t,x,u(t,x),\partial_x u(t,x),\partial_{xx} u(t,x)\big) \le 0 \ (\text{resp. } \ge 0), \quad \text{for } (t,x) \in (0,T)\times O.
\]

Under our assumptions on the generator it is very easy to prove the comparison principle for the PDE (1.6).

Proposition 3. Let u be a classical subsolution and v a classical supersolution of (1.6), and assume that u \le v on ((0,T] x \partial O) \cup ({T} x O). Then u \le v on [0,T] x \bar{O}.

Proof. We argue by contradiction. By compactness there exists (\bar t,\bar x) in (0,T) x O verifying
\[
\sup_{(t,x)\in[0,T]\times\bar O}(u-v)(t,x) = (u-v)(\bar t,\bar x) > 0.
\]
Notice that the values of u and v at the boundary allow us to claim that the previous maximum is indeed achieved inside the domain. The functions being smooth, the maximality conditions give
\[
\partial_t(u-v)(\bar t,\bar x) = 0, \quad \partial_x(u-v)(\bar t,\bar x) = 0, \quad \partial_{xx}(u-v)(\bar t,\bar x) \le 0.
\]
Now we use the subsolution and supersolution properties of u and v and the properties (1.4)-(1.5) of F to obtain
\[
\begin{aligned}
0 &\ge -\partial_t u(\bar t,\bar x) - F\big(\bar t,\bar x,u(\bar t,\bar x),\partial_x u(\bar t,\bar x),\partial_{xx} u(\bar t,\bar x)\big) \\
  &\ge -\partial_t v(\bar t,\bar x) - F\big(\bar t,\bar x,u(\bar t,\bar x),\partial_x v(\bar t,\bar x),\partial_{xx} v(\bar t,\bar x)\big) \\
  &\ge -\partial_t v(\bar t,\bar x) - F\big(\bar t,\bar x,v(\bar t,\bar x),\partial_x v(\bar t,\bar x),\partial_{xx} v(\bar t,\bar x)\big) + \lambda(u-v)(\bar t,\bar x) > 0,
\end{aligned}
\]
a contradiction.

1.1.2 Viscosity Solutions

We first define the (limiting) parabolic subjets and superjets of a function u.

Definition 4. For an upper semicontinuous function u : (0,T) x O \to R and (\bar t,\bar x) in (0,T) x O, we define the parabolic second order superjet of u at (\bar t,\bar x) by
\[
\mathcal{P}^{2,+}u(\bar t,\bar x) := \Big\{(q,p,\gamma)\in\mathbb{R}\times\mathbb{R}^d\times S^d : u(t,x) \le u(\bar t,\bar x) + q(t-\bar t) + p\cdot(x-\bar x) + \tfrac12\,\gamma(x-\bar x)\cdot(x-\bar x) + o\big(|t-\bar t| + |x-\bar x|^2\big)\Big\}.
\]
We say that (q,p,\gamma) belongs to \bar{\mathcal{P}}^{2,+}u(\bar t,\bar x) if there exist (t_n,x_n) and (q_n,p_n,\gamma_n) in \mathcal{P}^{2,+}u(t_n,x_n) such that (t_n,x_n,q_n,p_n,\gamma_n) \to (\bar t,\bar x,q,p,\gamma) as n goes to infinity. For a lower semicontinuous function v : (0,T) x O \to R we define the parabolic subjets by
\[
\mathcal{P}^{2,-}v(\bar t,\bar x) := -\mathcal{P}^{2,+}(-v)(\bar t,\bar x), \qquad \bar{\mathcal{P}}^{2,-}v(\bar t,\bar x) := -\bar{\mathcal{P}}^{2,+}(-v)(\bar t,\bar x).
\]

Remark 5. It is well known that (q,p,\gamma) is in \mathcal{P}^{2,+}u(\bar t,\bar x) if and only if there exists a function \varphi in C^{1,2}((0,T) x O) verifying (q,p,\gamma) = (\partial_t\varphi,\partial_x\varphi,\partial_{xx}\varphi)(\bar t,\bar x) such that u - \varphi attains a local maximum at (\bar t,\bar x).

We now define viscosity solutions.

Definition 6. We say that u in C([0,T] x \bar{O}) is a viscosity subsolution of the PDE (1.6) if for all (t,x) in (0,T) x O and all (q,p,\gamma) in \mathcal{P}^{2,+}u(t,x), it holds that
\[
-q - F(t,x,u(t,x),p,\gamma) \le 0.
\]
We say that u in C([0,T] x \bar{O}) is a viscosity supersolution of the PDE (1.6) if for all (t,x) in (0,T) x O and all (q,p,\gamma) in \mathcal{P}^{2,-}u(t,x), it holds that
\[
-q - F(t,x,u(t,x),p,\gamma) \ge 0.
\]
We say that u in C([0,T] x \bar{O}) is a viscosity solution of the PDE (1.6) if it is both a viscosity subsolution and a viscosity supersolution.

Notice that by the continuity of F and the definition of \bar{\mathcal{P}}, the previous inequalities also hold for the limiting subjets and superjets.

1.2 Comparison for viscosity solutions

In the modern presentation of viscosity solutions theory, the comparison result is based on Ishii's lemma, as will be seen in the following pages. This lemma is one of the main results of the theory: indeed, if one admits Ishii's lemma, the other main results of the theory follow without great difficulty. The following statement of the lemma will be enough for us.

Lemma 7 (Ishii's lemma). Let u, v in C((0,T) x O) and \varepsilon > 0. Then for every local maximum (\bar t,\bar s,\bar x,\bar y) in (0,T)^2 x O^2 of
\[
(t,s,x,y) \mapsto u(t,x) - v(s,y) - \frac{1}{2\varepsilon}\big(|t-s|^2 + |x-y|^2\big),
\]
there exist M, N in S^d such that
\[
\Big(\frac{\bar t-\bar s}{\varepsilon}, \frac{\bar x-\bar y}{\varepsilon}, M\Big) \in \bar{\mathcal{P}}^{2,+}u(\bar t,\bar x), \tag{1.7}
\]
\[
\Big(\frac{\bar t-\bar s}{\varepsilon}, \frac{\bar x-\bar y}{\varepsilon}, N\Big) \in \bar{\mathcal{P}}^{2,-}v(\bar s,\bar y), \tag{1.8}
\]
and
\[
\begin{pmatrix} M & 0 \\ 0 & -N \end{pmatrix} \le \frac{3}{\varepsilon}\begin{pmatrix} I_d & -I_d \\ -I_d & I_d \end{pmatrix}.
\]

Assuming Ishii's lemma, we now give a brief proof of the comparison result. It is important to notice that during the proof we will need to pick a point where a certain maximum is achieved; we will obtain the existence of this maximum by a compactness argument, and this argument does not work for path dependent PDEs. At a maximum point we have a lot of information on a function: for example, its derivative is zero and its second derivative is nonpositive, if they exist. We do not have these properties if we instead choose, say, an \varepsilon-optimal point.

Theorem 8.
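As a consistency check (a standard remark spelled out here, not verbatim from the thesis), when u is smooth its own derivatives form an element of both jets, so the viscosity inequalities reduce to the classical ones:

```latex
If $u \in C^{1,2}((0,T)\times O)$, Taylor's theorem gives, at any $(t,x)$,
\[
\big(\partial_t u(t,x),\, \partial_x u(t,x),\, \partial_{xx} u(t,x)\big)
  \in \mathcal{P}^{2,+}u(t,x) \cap \mathcal{P}^{2,-}u(t,x),
\]
so if $u$ is a viscosity subsolution of (1.6), Definition 6 applied to this
particular element of $\mathcal{P}^{2,+}u(t,x)$ yields
\[
-\partial_t u(t,x) - F\big(t,x,u(t,x),\partial_x u(t,x),\partial_{xx} u(t,x)\big) \le 0,
\]
i.e. $u$ is a classical subsolution; the supersolution case is symmetric.
```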
(Comparison.) Let u, v in C([0,T] x \bar{O}) be respectively a viscosity subsolution and a viscosity supersolution of the PDE (1.6). Assume that the generator F verifies assumptions (1.4)-(1.5) and that u \le v on ((0,T] x \partial O) \cup ({T} x O). Then u \le v on [0,T] x \bar{O}.

Proof. We assume to the contrary that
\[
c_0 := \sup_{(t,x)\in[0,T]\times\bar O}(u-v)(t,x) > 0.
\]
For \varepsilon > 0, by compactness and the uniform continuity of the functions there exists (t_\varepsilon,s_\varepsilon,x_\varepsilon,y_\varepsilon) in [0,T]^2 x \bar{O}^2 verifying
\[
c_\varepsilon := \sup_{(t,s,x,y)\in[0,T]^2\times\bar O^2}\Big\{u(t,x)-v(s,y)-\frac{1}{2\varepsilon}\big(|t-s|^2+|x-y|^2\big)\Big\}
= u(t_\varepsilon,x_\varepsilon)-v(s_\varepsilon,y_\varepsilon)-\frac{1}{2\varepsilon}\big(|t_\varepsilon-s_\varepsilon|^2+|x_\varepsilon-y_\varepsilon|^2\big).
\]
By compactness, without loss of generality we may assume that (t_\varepsilon,s_\varepsilon,x_\varepsilon,y_\varepsilon) \to (\bar t,\bar s,\bar x,\bar y) as \varepsilon goes to 0. The functions u, v are bounded, hence |t_\varepsilon-s_\varepsilon|^2 + |x_\varepsilon-y_\varepsilon|^2 = o(\varepsilon), c_\varepsilon \to c_0 = (u-v)(\bar t,\bar x) and (\bar t,\bar x) = (\bar s,\bar y). The boundary values of u and v allow us to claim that (\bar t,\bar x) is in [0,T) x O, so for \varepsilon small enough (t_\varepsilon,s_\varepsilon,x_\varepsilon,y_\varepsilon) is also inside the domain. Then for \varepsilon small enough, Ishii's lemma gives the existence of (M_\varepsilon,N_\varepsilon) in (S^d)^2 such that
\[
\Big(\frac{t_\varepsilon-s_\varepsilon}{\varepsilon}, \frac{x_\varepsilon-y_\varepsilon}{\varepsilon}, M_\varepsilon\Big) \in \bar{\mathcal{P}}^{2,+}u(t_\varepsilon,x_\varepsilon), \tag{1.9}
\]
\[
\Big(\frac{t_\varepsilon-s_\varepsilon}{\varepsilon}, \frac{x_\varepsilon-y_\varepsilon}{\varepsilon}, N_\varepsilon\Big) \in \bar{\mathcal{P}}^{2,-}v(s_\varepsilon,y_\varepsilon), \tag{1.10}
\]
verifying
\[
\begin{pmatrix} M_\varepsilon & 0 \\ 0 & -N_\varepsilon \end{pmatrix} \le \frac{3}{\varepsilon}\begin{pmatrix} I_d & -I_d \\ -I_d & I_d \end{pmatrix}.
\]
Define the matrix
\[
\Sigma := \begin{pmatrix} \sigma^2(x_\varepsilon,\alpha,\beta) & \sigma(x_\varepsilon,\alpha,\beta)\sigma(y_\varepsilon,\alpha,\beta) \\ \sigma(y_\varepsilon,\alpha,\beta)\sigma(x_\varepsilon,\alpha,\beta) & \sigma^2(y_\varepsilon,\alpha,\beta) \end{pmatrix} \ge 0;
\]
then by simple algebra
\[
\sigma^2(x_\varepsilon,\alpha,\beta):M_\varepsilon - \sigma^2(y_\varepsilon,\alpha,\beta):N_\varepsilon
= \Sigma : \begin{pmatrix} M_\varepsilon & 0 \\ 0 & -N_\varepsilon \end{pmatrix}
\le \frac{3}{\varepsilon}\big(\sigma(x_\varepsilon,\alpha,\beta)-\sigma(y_\varepsilon,\alpha,\beta)\big):\big(\sigma(x_\varepsilon,\alpha,\beta)-\sigma(y_\varepsilon,\alpha,\beta)\big)
\le \frac{3L|x_\varepsilon-y_\varepsilon|^2}{\varepsilon}.
\]
The viscosity subsolution and supersolution properties of u and v give the inequalities
\[
-\frac{t_\varepsilon-s_\varepsilon}{\varepsilon} - F\Big(t_\varepsilon,x_\varepsilon,u(t_\varepsilon,x_\varepsilon),\frac{x_\varepsilon-y_\varepsilon}{\varepsilon},M_\varepsilon\Big) \le 0,
\qquad
-\frac{t_\varepsilon-s_\varepsilon}{\varepsilon} - F\Big(s_\varepsilon,y_\varepsilon,v(s_\varepsilon,y_\varepsilon),\frac{x_\varepsilon-y_\varepsilon}{\varepsilon},N_\varepsilon\Big) \ge 0.
\]
Combining these inequalities with the properties of F and of (M_\varepsilon,N_\varepsilon):
\[
\begin{aligned}
0 &\le F\Big(t_\varepsilon,x_\varepsilon,u(t_\varepsilon,x_\varepsilon),\frac{x_\varepsilon-y_\varepsilon}{\varepsilon},M_\varepsilon\Big) - F\Big(s_\varepsilon,y_\varepsilon,v(s_\varepsilon,y_\varepsilon),\frac{x_\varepsilon-y_\varepsilon}{\varepsilon},N_\varepsilon\Big) \\
&\le \sup_{\alpha\in A,\,\beta\in B}\Big\{\tfrac12\sigma^2(x_\varepsilon,\alpha,\beta):M_\varepsilon - \tfrac12\sigma^2(y_\varepsilon,\alpha,\beta):N_\varepsilon \\
&\qquad\qquad + f\Big(t_\varepsilon,x_\varepsilon,u(t_\varepsilon,x_\varepsilon),\frac{x_\varepsilon-y_\varepsilon}{\varepsilon}\sigma(x_\varepsilon,\alpha,\beta),\alpha,\beta\Big)
 - f\Big(s_\varepsilon,y_\varepsilon,u(t_\varepsilon,x_\varepsilon),\frac{x_\varepsilon-y_\varepsilon}{\varepsilon}\sigma(y_\varepsilon,\alpha,\beta),\alpha,\beta\Big)\Big\}
 - \lambda\big(u(t_\varepsilon,x_\varepsilon)-v(s_\varepsilon,y_\varepsilon)\big) \\
&\le \frac{5L|x_\varepsilon-y_\varepsilon|^2}{2\varepsilon} + \rho\big(|t_\varepsilon-s_\varepsilon|+|x_\varepsilon-y_\varepsilon|\big) - \lambda\big(u(t_\varepsilon,x_\varepsilon)-v(s_\varepsilon,y_\varepsilon)\big),
\end{aligned}
\]
where \rho is a modulus of continuity. The right-hand side tends to -\lambda c_0 < 0, which provides the contradiction as \varepsilon goes to 0.
1.3 Main difficulties in the path-dependent case

The proof of Ishii's lemma is based on special regularization procedures for functions, on convex analysis methods (especially Alexandrov's lemma) and, more importantly, on the local compactness of the underlying space (0,T) x O. Unfortunately, this last point is the main limiting element when one wants to extend this theory to path dependent equations, where the underlying space is the space of continuous paths on [0,T].

If one wants to prove that a value functional is a viscosity solution of a PDE, the pointwise maximality condition is not completely necessary: it suffices that the difference between the test function and the value functional u be a supermartingale under an appropriate class of probability measures, rather than pointwise nonnegative. This remark is the core of the definition of viscosity solutions of PPDEs.

2 Notations for PPDEs

We fix T > 0, the time maturity, and an integer d > 0. We denote by S^d the set of symmetric d x d matrices. For x in R^d, |x| is the norm of x, and for A, B in S^d, A : B := trace(AB). For any matrix M, M^T denotes its transpose. We work on the canonical space \Omega := \{\omega \in C([0,T];\mathbb{R}^d) : \omega_0 = 0\} of d-dimensional continuous paths. B denotes the canonical process on this space, F = \{F_s\}_{s\in[0,T]} is the filtration generated by B, and P_0 is the Wiener measure. For \omega in \Omega and t in [0,T], the stopped path \omega_{\cdot\wedge t} in \Omega is defined as follows:
\[
\omega_{\cdot\wedge t}(s) = \omega_s \ \text{ for } 0 \le s \le t; \qquad \omega_{\cdot\wedge t}(s) = \omega_t \ \text{ for } t \le s \le T.
\]
We denote \Lambda := \{(t,\omega_{\cdot\wedge t}) : 0 \le t \le T,\ \omega\in\Omega\}. We also define \hat\Omega := D([0,T]), the set of cadlag paths on [0,T], and \hat\Lambda := \{(t,\hat\omega_{\cdot\wedge t}) : 0 \le t \le T,\ \hat\omega\in\hat\Omega\}. The definitions we are giving can be extended to \hat\Omega and \hat\Lambda. In the sequel, we denote a generic element of \Lambda as (t,\omega) in [0,T] x \Omega; this notation means that we ignore the values of \omega after t and identify (t,\omega') in [0,T] x \Omega with (t,\omega) in [0,T] x \Omega if \omega_{\cdot\wedge t} = \omega'_{\cdot\wedge t}. We define the norm ||.||_T and the metric d_\infty on, respectively, \Omega and \Lambda:
\[
||\omega||_T := \sup_{s\in[0,T]}|\omega_s| \ \text{ for } \omega\in\Omega; \qquad
d_\infty\big((t,\omega),(t',\omega')\big) := |t-t'| + ||\omega_{\cdot\wedge t}-\omega'_{\cdot\wedge t'}||_T \ \text{ for } (t,\omega),(t',\omega')\in\Lambda;
\]
then (\Omega, ||.||_T) and (\Lambda, d_\infty) are complete metric spaces.

L^0(F_T; K) and L^0(\Lambda; K) (where K = R, R^d or S^d) denote respectively the space of F_T-measurable K-valued random variables and the space of F-progressively measurable K-valued processes. When K = R, we omit the symbol R. For a given set P of probability measures on \Omega and p > 0 we define
\[
L^p(F_T,\mathcal{P}) := \Big\{\xi\in L^0(F_T;K) : \sup_{\mathbb{P}\in\mathcal{P}} \mathbb{E}^{\mathbb{P}}\big[\,||\xi||_K^p\,\big] < \infty\Big\},
\]
where ||.||_K is the norm in K. For the BSDE theory we also need the following notations:
\[
\mathbb{H}^2(\Lambda,\mathcal{P}) := \Big\{\phi\in L^0(\Lambda;K) : \sup_{\mathbb{P}\in\mathcal{P}} \mathbb{E}^{\mathbb{P}}\Big[\int_0^T ||\phi_s||_K^2\,ds\Big] < \infty\Big\},
\qquad
\mathbb{S}^2(\Lambda,\mathcal{P}) := \Big\{\phi\in L^0(\Lambda;K) : \sup_{\mathbb{P}\in\mathcal{P}} \mathbb{E}^{\mathbb{P}}\Big[\sup_{s\in[0,T]} ||\phi_s||_K^2\Big] < \infty\Big\}.
\]

2.1 Shifted Spaces

For fixed 0 \le s \le t \le T, we define the following shifted objects: \Omega^t := \{\omega\in C([t,T];\mathbb{R}^d) : \omega_t = 0\}; B^t is the canonical process on \Omega^t; F^t = \{F^t_s\}_{s\in[t,T]} is the filtration generated by B^t; P^t_0 is the Wiener measure on \Omega^t, and E^t_0 is the expectation under P^t_0. We define similarly \Lambda^t, ||.||^t_T, d^t_\infty, L^0(F^t_T), etc. In these definitions, the superscripts generally stand for the shifted space (i.e. the initial time) and the subscripts for the final time related to the notation.

For \omega in \Omega^s and \omega' in \Omega^t, we define the concatenation \omega \otimes_t \omega' in \Omega^s of \omega and \omega' at t by:
\[
(\omega\otimes_t\omega')(r) := \omega_r\,1_{[s,t)}(r) + (\omega_t+\omega'_r)\,1_{[t,T]}(r), \quad \text{for all } r\in[s,T].
\]
For \xi in L^0(F^s_T), X in L^0(\Lambda^s), and a fixed path \omega in \Omega^s, we define the shifted random variable \xi^{t,\omega} in L^0(F^t_T) and process X^{t,\omega} in L^0(\Lambda^t) by:
\[
\xi^{t,\omega}(\omega') := \xi(\omega\otimes_t\omega'), \qquad X^{t,\omega}(\omega') := X(\omega\otimes_t\omega'), \quad \text{for all } \omega'\in\Omega^t.
\]

Finally, we denote by T the set of F-stopping times, by T_+ the set of positive F-stopping times, and by H \subset T the subset of those hitting times h of the form
\[
\mathrm{h} := \inf\{t : B_t\in O^c\}\wedge t_0 = \inf\{t : d(\omega_t,O^c) = 0\}\wedge t_0, \tag{2.11}
\]
for some 0 < t_0 \le T and some open and convex set O \subset R^d containing 0, with O^c := R^d \setminus O. Then h > 0, h is lower semicontinuous, and h_1 \wedge h_2 \in H for any h_1, h_2 \in H. T^t and H^t are defined in the same spirit. For any \tau in T (resp. h in H) and any (t,\omega) in \Lambda such that t < \tau(\omega) (resp. t < h(\omega)), it is clear that \tau^{t,\omega} \in T^t (resp. h^{t,\omega} \in H^t). For t in [0,T] and \delta > 0 we define the hitting times h^t_\delta in H^t, which we will use several times, as follows:
\[
\mathrm{h}^t_\delta := \inf\{s\ge t : ||B^t||_s = \delta\}\wedge(t+\delta)\wedge T. \tag{2.12}
\]
Notice that O is open and contains 0, and t_0 > t. Therefore, for all h in H^t there exists \delta > 0 such that
\[
t < \mathrm{h}^t_\delta \le \mathrm{h}. \tag{2.13}
\]
The class H, and especially the stopping times h^t_\delta, will be our main tools for studying processes locally to the right in time in the space \Lambda.
The inequality (2.14) is needed to apply the results in [16]. More powerful results then the one in [16] are available in the literature if one wants only to study the obstacle problem for semilinear PPDEs(see. [25]). In this particular case, it is possible to prove the comparison theorem if we deneU as the class of cadlag process that are left upper semi continuous. Notice that this last denition would not require regularity in !, hence the comparison result would be more powerful than the one proven in Section 6. We deneU t ,U t , C 0 ( t ;K), C 0 b ( t ;K) and UC b ( t ;K) in the obvious way. It is clear that, for any (t;!)2 and any u2 C 0 (), we have u t;! 2 C 0 ( t ). The other spaces introduced before enjoy the same property. Notice also that the sample paths of u 2 C 0 (;K) are continuous. 11 3 Dupire's derivatives in the Path-space In his paper [13], Bruno Dupire gives the denition of the derivatives of a functional on the space , and more important, he proves the associated functional Ito's Lemma, a main tool in the development of dierential calculus in the space . We give here his denitions. Denition 13. Let u be a non-anticipative process, the time derivative of u in the sense of Dupire at (t;!) is dened as @ t u(t;!) := lim #0 u(t +;! :^t )u(t;!) (3.15) whenever this limit exists. Similarly, if the limit exists the space derivative of u in the sense of Dupire in the direction i is dened as @ i u(t;!) = lim x!0 u(t;! +xe i 1 [t;T ] )u(t;!) x (3.16) @ ! u(t;!) =f@ 1 u(t;!);:::;@ d u(t;!)g (3.17) where e i is the unit vector associated with the ith component. Notice that we can compose these partial derivatives to dene derivatives of higher order. A function will be said to be C 1;2 in the sense of Dupire if u is bounded continuous, @ t u, @ x u and @ xx u exist and are continuous and bounded. In the chapter 2, regularity of functionals are understood in this previous sense. We illustrate this by the following gures that are also taken from Dupire's paper. 
[Figure 1.1: Time derivative. Figure 1.2: Space derivative. Illustrations reproduced from Dupire's paper; images not included here.]

If we assume that the functional u is Markovian, meaning u(t,\omega) = u(t,\omega_t), then the space derivative of u equals the derivative in x and the time derivative is the right derivative of u in t. Thus, the previous definitions are a nontrivial generalization of derivatives to the space \Lambda.

Dupire's initial intuition in defining these derivatives is the following. If \omega is stepwise constant, then in order to reconstruct the value of u along \omega we only need to know its sensitivity to jumps (the space derivative) and its sensitivity to constant extrapolation after t (the time derivative). One then argues that stepwise constant paths are dense in the space \hat\Omega.

When one wants to study control problems on the space \Lambda, these definitions have a drawback. In this framework, the data of the problem is naturally defined on \Lambda, and it is not obvious whether one can extend those functionals to the space \hat\Lambda. This extension is needed to define the space derivative: indeed, the path \omega + x e_i 1_{[t,T]} has a jump at time t, and in (3.16) one needs to be able to compute the value of u at this path. Cont and Fournie treat this problem in a series of articles, [8], [7] and [6]. In particular, they prove that if the extension to \hat\Lambda exists and is smooth in the sense of Dupire, then the derivatives of u do not depend on the choice of the extension. In order to avoid these difficulties, a weaker definition of derivatives is given in [17]. We now present some sets of probability measures needed to define these derivatives.

4 Derivatives in the sense of [17]

4.1 Nonlinear Expectation

For every constant L > 0, we denote by \mathcal{P}_L the collection of all continuous semimartingale measures P on \Omega whose drift and diffusion characteristics are bounded by L and \sqrt{2L}, respectively. To be precise, let \tilde\Omega := \Omega^3 be an enlarged canonical space, \tilde B := (B,A,M) be the canonical processes, and \tilde\omega = (\omega,a,m) in \tilde\Omega be the paths.
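To make the bounds concrete (an illustration of ours, not from the thesis), take d = 1 and the random variable B_T. For any P in P_L, B_T = A_T + M_T with M a martingale, so E^P[B_T] = E^P of the integral of the drift over [0,T], which lies in [-LT, LT]; both bounds are attained by constant characteristics, which define admissible measures by the analogue of Remark 14(iii) below. A brute-force sketch over constant drifts:

```python
import numpy as np

L, T = 1.0, 1.0

# Under P in P_L with constant drift mu (|mu| <= L) and any admissible
# volatility, B_T = mu*T + M_T with M a martingale, so E^P[B_T] = mu*T.
mus = np.linspace(-L, L, 201)
values = mus * T                      # E^P[B_T] for each constant drift

upper = values.max()                  # sup_P E^P[B_T] = +L*T
lower = values.min()                  # inf_P E^P[B_T] = -L*T
print(upper, lower)
```

For this particular random variable, restricting to constant characteristics is without loss, since the drift bound alone pins down E^P[B_T]; for nonlinear functionals of the path, random characteristics can do strictly better, which is exactly why the sup in the nonlinear expectation is taken over the whole class.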
For any P2P L , there exists an extension Q on ~ such that: B =A +M; A is absolutely continuous, M is a martingale; j P jL; 1 2 tr (( P ) 2 )L; where P t := dAt dt ; P t := p dhMit dt ; Q-a.s. (4.18) Similarly, for any t2 [0;T ), we may deneP L t on t . Remark 14. LetS d + denote the set of dd nonnegative denite matrices. 13 (i) In Q-a.s. sense, clearly P 2L 0 (F B ) and then P 2L 0 (F B;M ). (ii) We may also have the following equivalent characterization ofP L . Consider the canonical space 0 := 2 with canonical processes (B;B 0 ). For any P2P L , there exist a probability measure Q 0 and P 2L 0 (F B;B 0 ;R d ), P 2L 0 (F B ;S d + ) such that j P jL; 1 2 tr (( P ) 2 )L; Q 0 j F B T =P; Q 0 j F B 0 T = Wiener measure; dB t = P t (B;B 0 )dt + P t (B)dB 0 t ; Q 0 -a.s. (4.19) (iii) For any deterministic measurable functions : [0;T ]! R d and : [0;T ]! S d + satisfyingjj L, 1 2 tr ( 2 ) L, there exists unique P2P L such that P = , P = , P-a.s., where P , P can be understood in the sense of either (4.18) or (4.19). When t = 0 we simply writeP L :=P 0 L . We also deneP t 1 :=[ L>0 P t L . We can then dene the nonlinear expectation and the Snell envelope induced by the classP L . For all 2L 1 (F t T ;P t L ) we dene E t L [X] := sup P2P t L E P and E t L [X] := inf P2P t L E P ; (t;!)2 : (4.20) and given 0stT , (t;!)2 s and a bounded process X2L 0 ( s ), S s;t L [X](!) := sup 2T t E t L X t;! and S s;t L [X](!) := inf 2T t E t L X t;! : (4.21) Notice that the elements ofP 1 are semimartingale measures. Thanks to [28], we can contruct a process that we denotehBi equal to the quadratic variation under every semi- martingale measure. HencehBi is dened under each element ofP 1 . We now give the denition of the class C 1;2 (). This will be our main denition of regularity of functionals. Unless explicitly mentioned, regularity of a functional is to be understood in this sense. Denition 15. Let u2C(), we say that u is C 1;2 (), it there exist @ t u2C(), @ ! u2 C(;R d ) and @ !! 
u2C(;S d ), such that for all P2P 1 it holds that du s =@ t u s ds + 1 2 @ !! u s :dhBi s +@ ! u s dB s ; for all s2 [0;T ]; P a.s. (4.22) The previous notation u t :=u(t;B) will be our convention. If we compose a functional u with the canonical process B, we will use the notation u t . If we need to compose u with other processes (for example X), we will explicitly write u(t;X). 14 Remark 16. By a direct localization argument, together with the required regularity, we see that the above @ ! u and @ 2 !! u, if they exist, are unique. Consequently, we call them the rst order and second order space derivatives of u, respectively. We dene C 1;2 ( t ) similarly. It is clear that, for any (t;!) and u2 C 1;2 (), we have u t;! 2 C 1;2 ( t ), and @ ! (u t;! ) = (@ ! u) t;! , @ 2 !! (u t;! ) = (@ 2 !! u) t;! . Remark 17. Ifu2C 1;2 () then the denition of the time derivative is consistent with the semilinear case. Indeed, xu2C 1;2 () and (t;!)2 such thatt<T then chooseP2P t 1 such thatP-a.s. B s =! s for all st and B s =! t for all st. Then u t;! (s;B t )u(t;!) = Z s t @ t u t;! r dr; P-a.s. The left hand side is u(s;! :^t )u(t;!). Hence by the continuity of the time derivative @ t u(t;!) := lim #0 u(t +;! :^t )u(t;!) ; for (t;!)2 [0;T ) ; Remark 18. (i) The typical case that @ ! u;@ 2 !! u exist is the case that u is smooth in Dupire's sense [13], and in that case our space derivatives agree with the vertical derivatives introduced therein, due to the functional It^ o formula. Therefore, any smooth function in the sense of Dupire's calculus is also smooth in the sense of Denition 15. So our space C 1;2 () is a priori larger than the corresponding space in [13]. In particular, Denition 15 is dierent from the corresponding denition in our previous paper [15]. 
(ii) The main reason why we adopt this definition is that, unlike Dupire's definition, ours does not require all the derivatives to be bounded, and it does not involve any extension of $u$ to the larger domain $[0,T]\times D([0,T])$, where $D$ is the set of càdlàg paths. Moreover, we do not need (2.14) to hold true for all semimartingale measures $P$. In order to write a PPDE, one only needs the existence of the time and space derivatives; since our objective is to develop a non-smooth differential calculus, this is the only smoothness we require from our class of smooth functionals.

(iii) In general, our space derivatives are not defined pathwise, and we do not require that $\partial_\omega(\partial_\omega u) = \partial^2_{\omega\omega}u$, although this holds true in typical cases. Notice, however, that the equality $\partial_\omega(\partial_\omega u) = \partial^2_{\omega\omega}u$ would require $\partial_\omega u$ itself to be in $C^{1,2}(\Lambda)$.

4.1.1 Examples of smooth functionals

We now give some examples of $C^{1,2}$ functionals.

Example 1. Let $d=1$. We define
$$u(t,\omega) := E^{P^t_0}\Big[\int_0^T (\omega\otimes_t B^t)_s\,ds\Big], \quad\text{for } (t,\omega)\in\Lambda,$$
where $\omega\otimes_t B^t$ denotes the path obtained by concatenating $\omega_{\cdot\wedge t}$ with the shifted canonical process $B^t$. Then $u(t,\omega) = \int_0^t \omega_s\,ds + \omega_t(T-t)$. We define
$$\partial_t u(t,\omega) := 0, \qquad \partial_\omega u(t,\omega) := T-t, \qquad \partial_{\omega\omega} u(t,\omega) := 0.$$
For all $P\in\mathcal P_\infty$, by Itô's formula $du_s = (T-s)\,dB_s$, which shows that $u$ is in $C^{1,2}(\Lambda)$. More generally, for all smooth functions $f$, $g$ and $h$, the functional $u$ defined as
$$u(t,\omega) := f(t,\omega_t) + g\Big(t,\int_0^t h(s,\omega_s)\,ds\Big)$$
is in $C^{1,2}(\Lambda)$.

We now give the example of a non-smooth process, see [6].

Example 2. Let $d=1$ and $u(t,\omega) := \overline\omega_t := \max_{0\le s\le t}\omega_s$, $(t,\omega)\in\Lambda$. It is obvious that $\partial_t u(t,\omega) = 0$ for all $(t,\omega)\in\Lambda$. However, $u$ is not differentiable. Indeed, if it were differentiable, then one must have $\partial_\omega u = 0$, and then $\tfrac12\partial^2_{\omega\omega}u\,dt = d\overline B_t$, which is impossible under $P_0$ (the running maximum $\overline B$ is not absolutely continuous in $t$). In terms of Dupire's vertical derivatives, we have $\partial_\omega u(t,\omega) = 0$ whenever $\omega_t < \overline\omega_t$, while
$$\partial^+_\omega u(t,\omega) = 1 \quad\text{and}\quad \partial^-_\omega u(t,\omega) = 0 \quad\text{whenever } \omega_t = \overline\omega_t,$$
where $\partial^+_\omega$ and $\partial^-_\omega$ denote the right and left space derivatives in the sense of Dupire. Hence the process $u$ is not differentiable on $\{\omega_t = \overline\omega_t\}$.
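The two examples above can be checked numerically with the finite-difference quotients (3.15)-(3.16). The sketch below is illustrative (discretization, paths and bump sizes are arbitrary choices, not from the thesis): for Example 1 the quotients recover $\partial_t u = 0$ and $\partial_\omega u = T-t$, and for the running maximum of Example 2, bumping up and down at a point where $\omega_t = \overline\omega_t$ produces the one-sided vertical derivatives $\partial^+_\omega u = 1$ and $\partial^-_\omega u = 0$ (on a grid the bump must dominate the grid step, since the supremum over $s<t$ is only attained in the limit).

```python
import numpy as np

T, n = 1.0, 1000
dt = T / n
t_idx = n // 2                      # t = 0.5
s = np.linspace(0.0, T, n + 1)

# --- Example 1: u(t, omega) = int_0^t omega_r dr + omega_t * (T - t) ---
def u1(k, path):
    # left Riemann sum for the integral, plus the terminal linear term
    return path[:k].sum() * dt + path[k] * (T - k * dt)

rng = np.random.default_rng(0)
omega = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

frozen = omega.copy(); frozen[t_idx + 1] = frozen[t_idx]   # constant extrapolation
dt_u1 = (u1(t_idx + 1, frozen) - u1(t_idx, omega)) / dt    # time derivative ~ 0
bump = omega.copy(); bump[t_idx:] += 1e-6                  # bump on [t, T]
dx_u1 = (u1(t_idx, bump) - u1(t_idx, omega)) / 1e-6        # space derivative ~ T - t = 0.5

# --- Example 2: u(t, omega) = max_{s <= t} omega_s, at a path with omega_t = max ---
increasing = s.copy()               # omega_s = s attains its running max at t
def u2(k, path):
    return path[: k + 1].max()

x = 0.1                             # bump size chosen >> dt
up = increasing.copy();   up[t_idx] += x
down = increasing.copy(); down[t_idx] -= x
d_plus  = (u2(t_idx, up)   - u2(t_idx, increasing)) / x    # right derivative ~ 1
d_minus = (u2(t_idx, down) - u2(t_idx, increasing)) / (-x) # left derivative  ~ 0

print(dt_u1, dx_u1)
print(d_plus, d_minus)
```

The asymmetry of the last two quotients is exactly the failure of differentiability of the running maximum on $\{\omega_t = \overline\omega_t\}$ discussed in Example 2.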
Chapter 2 Semilinear Path-dependent PDEs

1 Introduction

In this chapter we follow [15] to define a framework extending the semilinear Feynman-Kac formula to non-Markovian BSDEs. In this case the solution is path-dependent, hence we need to work on the space of continuous paths. Chronologically, [15] was written before [17, 18]; the latter contain notations and definitions that have proven to be better suited to the study of PPDEs. In this presentation we will, when possible, integrate these newer points into the semilinear case of [15].

In the Markovian framework, the function defined via the solution of the BSDE is proven to be a viscosity solution of a parabolic partial differential equation. As we have already mentioned, when defining the viscosity solution property on the path space and proving wellposedness, the main difficulty comes from the fact that the space is not locally compact, so that studying pointwise maxima of functionals is difficult. The novelty in [15] is to define the viscosity subsolution and supersolution properties for functionals without requiring a pointwise maximality condition, as is the case for PDEs. Instead, we require maximality in the sense of optimal stopping. Known results in optimal stopping theory will allow us to prove what we call the partial comparison principle.

Our second step is a variation of Perron's method. Let $\underline u$ and $\overline u$ denote the supremum of classical subsolutions and the infimum of classical supersolutions, respectively, with the same terminal condition. In the standard Perron approach, see e.g. Ishii [27] and an interesting recent development by Bayraktar and Sirbu [3], one shows that
$$\underline u = \overline u \qquad (1.1)$$
by assuming the comparison principle for viscosity solutions, which further implies the existence of viscosity solutions. We shall instead prove (1.1) directly, which, together with our partial comparison principle, implies the comparison principle for viscosity solutions immediately.
Our arguments for (1.1) mainly rely on a remarkable result of Bank and Baum [1].

The rest of the chapter is organized as follows. In Section 2 we present the PPDE we will be studying in this chapter and give the standing assumptions on the data. We define classical and viscosity solutions of the PPDE in Section 4. In Section 5 we introduce the main wellposedness results and prove some basic properties of the solutions, including existence, stability, and the partial comparison principle for viscosity solutions. We then prove (1.1) and the comparison principle for viscosity solutions.

2 Semilinear PPDE

In this chapter we will study PPDEs of the following form:
$$-\partial_t u(t,\omega) - \tfrac12\,\partial^2_{\omega\omega}u(t,\omega) - F\big(t,\omega,u(t,\omega),\partial_\omega u(t,\omega)\big) = 0 \quad\text{for all } (t,\omega)\in\Lambda,$$
$$u(T,\omega) = \xi(\omega) \quad\text{for all } \omega\in\Omega, \qquad (2.2)$$
where $F:\Lambda\times\mathbb R\times\mathbb R^d\to\mathbb R$ and $\xi:\Omega\to\mathbb R$ are the given data. We make the following assumptions on $F(t,\omega,y,z)$ and $\xi(\omega)$.

Assumption 19. There exist $L_0\ge 0$ and a modulus of continuity $\rho_0$ with at most polynomial growth such that:
(i) $F$ is $L_0$-Lipschitz continuous in $(y,z)$, uniformly continuous in $(t,\omega)$ with modulus of continuity $\rho_0$, and $|F(\cdot,0,0)|\le L_0$;
(ii) $\xi$ is bounded by $L_0$ and uniformly continuous in $\omega$ with modulus of continuity $\rho_0$.

In order to simplify notations, we define the differential operator $\mathcal L$: for a process $\varphi\in C^{1,2}(\Lambda)$ and all $(t,\omega)\in\Lambda$,
$$\mathcal L\varphi(t,\omega) := -\partial_t\varphi(t,\omega) - \tfrac12\,\partial^2_{\omega\omega}\varphi(t,\omega) - F\big(t,\omega,\varphi(t,\omega),\partial_\omega\varphi(t,\omega)\big).$$
Additionally, for $(t,\omega)\in\Lambda$, $\varphi\in C^{1,2}(\Lambda^t)$, and $(s,\omega')\in\Lambda^t$, we define
$$\mathcal L^{t,\omega}\varphi(s,\omega') := -\partial_t\varphi(s,\omega') - \tfrac12\,\partial^2_{\omega\omega}\varphi(s,\omega') - F^{t,\omega}\big(s,\omega',\varphi(s,\omega'),\partial_\omega\varphi(s,\omega')\big).$$

Remark 20. The PPDE (2.2) is the PPDE studied in [15]. In this section we give a simple presentation of the link between semilinear PDEs and BSDEs. If we also assume that the functional $u$ is Markovian, namely, with abuse of notation, $u(t,\omega) = u(t,\omega_t)$, then our equation becomes a semilinear parabolic PDE:
$$-\partial_t u(t,x) - \tfrac12\,\partial^2_{xx}u(t,x) - F\big(t,x,u(t,x),\partial_x u(t,x)\big) = 0 \quad\text{for all } (t,x)\in[0,T]\times\mathbb R^d,$$
$$u(T,x) = \xi(x) \quad\text{for all } x\in\mathbb R^d. \qquad (2.3)$$
As we will see in the next sections, this PDE is the Feynman-Kac equation related to backward stochastic differential equations, and it has applications in various stochastic control problems.

3 Backward stochastic differential equations

In order to avoid technical difficulties, in this chapter (and in this chapter only) we augment the filtration $\mathbb F$ by the null sets of the Wiener measure $P_0$ and still denote the new filtration by $\mathbb F$. Then $\mathbb F$ satisfies the usual conditions. We introduce backward stochastic differential equations (BSDEs) along the lines of [20].

3.1 Definition

We consider $\xi$ and $F$ satisfying Assumption 19.

Remark 21. We adopt the following conventions. $(t,\omega)$ stands for the whole path $\omega$ up to $t$, while $(t,\omega_t)$ is an element of $\mathbb R_+\times\mathbb R^d$. For a functional $u:\Lambda\to\mathbb R$ we denote by $u_s$ the value of the composition of $u$ with the canonical process at $s$, meaning $u_s := u(s,B)$. If we want to compose $u$ with another process we will write it explicitly ($u(s,X)$, for example). Using this convention we can write $F(s,B,0,0) = F_s(0,0)$.

Under the previous assumption, [20] allows us to claim that there exists a pair $(Y,Z)\in\mathbb S^2(P_0)\times\mathbb H^2(P_0)$ such that, for all $t\in[0,T]$,
$$Y_t = \xi + \int_t^T F_s(Y_s,Z_s)\,ds - \int_t^T Z_s\,dB_s, \quad P_0\text{-a.s.} \qquad (3.4)$$
Additionally, there exists a constant $C$ depending only on $d$, $L_0$ and $T$ such that, for two sets of data $(\xi^i,F^i)$, $i=1,2$, satisfying the assumptions, the solutions of the respective equations satisfy the following a priori estimate:
$$E^{P_0}\Big[\sup_{s\in[0,T]}|Y^1_s-Y^2_s|^2 + \int_0^T |Z^1_s-Z^2_s|^2\,ds\Big] \le C\,E^{P_0}\Big[|\xi^1-\xi^2|^2 + \int_0^T |F^1_s(0,0)-F^2_s(0,0)|^2\,ds\Big]. \qquad (3.5)$$

3.2 Nonlinear Markovian Feynman-Kac formula

In this section we assume that the data of the BSDE (3.4) is Markovian, which means that $F(s,\omega,y,z) = F(s,\omega_s,y,z)$ and $\xi(\omega) = \xi(\omega_T)$. We then construct a function $v:[0,T]\times\mathbb R^d\to\mathbb R$ in the following way.
For (t;x)2 [0;T ]R d , we denote by (Y t;x s ;Z t;x s ) s2[t;T ] the solution of the following BSDE under P t 0 Y t;x s =(x +B t T ) + Z T s F (s;x +B t r ;Y t;x r ;Z t;x r )dr Z T t Z t;x r dB t r ; for all s2 [t;T ]: (3.6) By Blumenthal's 0 1 law, Y t:x t is a constant and we dene v(t;x) :=Y t;x t : (3.7) Under this assumption, and using the uniqueness of the solutions of the BSDE (3.4), we can prove many results. In particular one can show that v veries a dynamic programming principal (DPP) and solves a parabolic PDE in a viscosity sense. We give these results in the following theorem. Theorem 22. (i) v is Lipschitz continuous in x and Holder-1=2 continuous in t in the following sense jv(t;x)v(s;x)jC(1 +jxj 2 )jtsj 1 2 : (3.8) (ii) v veries the following dynamic programming principle for all 0tsT v(t;x) =v(s;x +B t s ) + Z s t F (r;x +B t r ;Y t;x r ;Z t;x r )dr Z s t Z t;x r dB t r ; P t 0 -a.s. (3.9) (iii) v is a viscosity solution of the following semilinear PDE. @ t v(t;x) 1 2 @ xx v(t;x)F (t;x;u(t;x);@ x v(t;x)) = 0; for (t;x)2 (0;T )R d ; v(T;x) =(x) for x2R d : (3.10) 20 Proof. We will only provide with the proof of (iii), which will allow us to motivate our denition of viscosity solutions of PPDEs. Let (t;x)2 (0;T )R d and2C 1;2 ((0;T )R d ) be such thatv has a local minimum at (t;x). Without loss of generality we can assume that (t;x) =v(t;x). This implies that 0 =(t;x)v(t;x)(s;x +B t s )v(s;x +B t s ). For ( s ) s2[t;T ] to be chosen, we dene the following process, d r = r r dB t r ; r2 [t;T ]; t = 1: We apply Ito's Lemma to have underP t 0 , and combine it with the dynamic programming principal (3.9) to obtain P t 0 -a.s. 
s (s;x +B t s )v(s;x +B t s ) = = Z s t r @ x (r;x +B t r )Z t;x r + ((r;x +B t s )v(s;x +B t r )) r dB t r + Z s t r @ t (r;x +B t r ) + 1 2 @ xx (r;x +B t r ) +F (r;x +B t r ;Y t;x r ;@ x (x +B t r )) dr + Z s t r F (r;x +B t r ;Y t;x r ;Z t;x r )F (r;x +B t r ;Y t;x r ;@ x (x +B t r )) + r (@ x (x +B t r )Z t;x r ) dr We can choosejjL 0 progressively measurable and cancel the last line: r = 0 if @ x (x +B t r ) =Z t;x r r = h F (r;x +B t r ;Y t;x r ;Z t;x r )F (r;x +B t r ;Y t;x r ;@ x (x +B t r )) i @ x (x +B t r )Z t;x r j@ x (x +B t r )Z t;x r j 2 if @ x (x +B t r ) =Z t;x r Then by taking the expectation we obtain : 0E 0 t h s (s;x +B t s )v(s;x +B t s ) i = (3.11) Z s t E 0 t r @ t (r;x +B t r ) + 1 2 @ xx (r;x +B t r ) +F (r;x +B t r ;Y t;x r ;@ x (x +B t r )) dr The regularity of the integrand allows us to divide by (st) and take the limit as s goes to t to claim that 0@ t (t;x) + 1 2 @ xx (t;x) +F (t;x;v(t;x);@ x (t;x)): (3.12) 21 Remark 23. Notice that the inequality (3.12) comes from the inequality (3.11). This last one is obtained thanks to the pointwise maximality condition of denition of viscosity subsolution. We could have obtained an inequality similar to (3.11) if have assumed that s v s is a submartingale under the probability that corresponds to the Girsanov change of measure induced by. This point is the starting point for the denition of viscosity solutions of PPDEs. We will not assume that we can control the sign of v on a neighborhood of (t;x) but we will assume that s v s is a submartingale under some probability measures, which will allow us to have similar proofs to the previous one by having estimates similar to (3.11). 4 Path-dependent PDEs In this section, we return to the non-Markovian case. We will study the properties of smooth solution of the PPDE (2.2) and dene our functional of interest u which comes from the BSDE (3.4). We will study u under the assumptions u is C 1;2 (). 4.1 Properties of Classical solutions Denition 24. 
Let 2 C 1;2 (), we say that is a classical solution (resp. subsolution, supersolution) of (2.2), if for all (t;!)2 , L(t;!) = (resp.;)0: (4.13) We rst recall from Peng [44] that an F-progressively measurable process Y is called a F -martingale (resp. F -submartingale, F -supermartingale) if, for any F-stopping times 1 2 , we have Y 1 =(resp.,)Y 1 ( 2 ;Y 2 ),P 0 -a.s. where (Y;Z) := (Y( 2 ;Y 2 );Z( 2 ;Y 2 )) is the solution to the following BSDE on [0; 2 ]: Y t =Y 2 + Z 2 t F s (Y s ;Z s )ds Z 2 t Z s dB s ; 0t 2 ; P 0 -a.s. 22 Clearly, Y is a F -martingale with terminal condition (B) if and only if it satises the BSDE (3.4). Using the denition of the class C 1;2 (), we obviously have Proposition 25. Let Assumption (19) hold and2C 1;2 b (). Then is a classical solution (resp. subsolution, supersolution) of PPDE (2.2) if and only if the process s is a F - martingale (resp. F -submartingale, F -supermartingale) under P 0 . In particular, if is a classical solution of PPDE (2.2) with terminal condition , then Y s := s ; Z s :=@ ! s ; P 0 -a.s. (4.14) provides the unique solution of BSDE (3.4). Proof. We shall only prove the subsolution case. Let (Y;Z) be dened by (4.14). (i) Assume is a classical subsolution. By It^ o's formula, d t = @ t t + 1 2 @ !! t dt +@ ! t dB t = F t ( t ;@ ! t ) + (L) t dt +@ ! t dB t ; P 0 -a.s. (4.15) Then for any 1 2 , (Y;Z) satises BSDE: Y t = 2 + Z 2 t F s (Y s ;Z s ) + (L) s ds Z 2 t Z s dB s ; 0t 2 ;P 0 -a.s. SinceL 0, by the comparison principle of BSDEs, see [20], we obtain 1 = Y 1 Y 1 ( 2 ; 2 ). That is, is a F -submartingale. (ii) Assume is a F -submartingale under P 0 . For any 0 t < t +h T , denote Y s :=Y s (t +h; t+h )Y s , Z s :=Z s (t +h; t+h )Z s . By (4.15) we have Y s = Z t+h s [ r Y r +h;Zi r (Lu) r ]dr Z t+h s Z r dB r ; tst +h;P 0 -a.s. wherejj;jjL 0 . 
Dene s := exp Z s t r dB r + Z s t r 1 2 j r j 2 dr ; (4.16) we have Y t =E P 0 t h Z t+h t s (Lu) s ds i : 23 Since Y = is a F -submartingale, we get 0 1 h Y t =E P 0 t h 1 h Z t+h t s (Lu) s ds i : Send h! 0 we obtain L t 0; P 0 -a.s. Note thatL is continuous in ! and obviously any support of P 0 is dense, we have Lu(t;!) 0 for all !2 : That is, is a classical subsolution of PPDE (2.2). (iii) When is a classical solution, similar to (i) we know Y is aF -martingale underP 0 and thus (4.14) provides a solution to the BSDE. Finally, the uniqueness follows from the uniqueness of BSDEs. Remark 26. This proposition extends the well known nonlinear Feynman-Kac formula of Pardoux and Peng [42] to non-Markovian case. We next prove a simple comparison principle for classical solutions. Lemma 27. Let Assumption 19 hold true. Let u 1 be a classical subsolution and u 2 a classical supersolution of PPDE (2.2). If u 1 (T;)u 2 (T;), then u 1 u 2 on . Proof. Denote Y i s :=u i s ;Z i s :=@ ! u i s , i = 1; 2. By denition we have dY i t = h F t (Y i ;Z i ) + (Lu i ) t i dt + (Z i t ) dB t ; 0tT; P 0 -a.s. Since Y 1 T Y 2 T andLu 1 0Lu 2 , by the comparison principle for BSDEs we obtain Y 1 Y 2 . That is, u 1 u 2 , P 0 -a.s. Since u 1 and u 2 are continuous, and the support of P 0 is dense in , we obtain u 1 u 2 on . 4.2 Counter Example for the regularity In the PDE literature, thanks to the work of Evans-Krylov-Sofanov, under some assump- tions, it is possible to have smooth solutions to the PDE (2.3). As we will see it in the next example, one cannot expect such a regularizing eect for the PPDEs. This fact makes the theory of viscosity solutions both necessary and very challenging. 24 Example 3. Unlike the standard heat equation which always has classical solution in [0;T ), a path dependent heat equation may not have a classical solution in [0;T ). One simple example can be the heat equation with terminal condition u(T;!) = B t 0 (!) for some 0 < t 0 <T . Then clearly u(t;!) 
=B t^t 0 (!), and thus @ ! u(t;!) = 1 [0;t 0 ] (t) is discontinuous. u is the unique viscosity solution of the heat equation with the terminal condition u(T;!) = B t 0 (!). For more general semilinear PPDEs, we refer to Peng and Wang [49] for sucient conditions of existence of classical solutions. 4.3 Denition of Viscosity solutions for the Semilinear equation Let T t denote the space of F t -stopping times such that, for any s 2 [t;T ), the set f!2 t :(!)>sg is an open subset of t underkk t T . We denote byT t + theF t -stopping times 2T t such that >t. Example 28. Let u2C 0 (). Then, for any constant c, := inf n t :u(t;!)c o ^T is inT: We will now dene viscosity solutions for the PPDE (2.2). We need to dene the fol- lowing nonlinear expectations and Snell envelopes. For anyL 0 andt<T , letU L t denote the space of F t -progressively measurable R d -valued processes such that each component of is bounded by L. By viewing as row vectors, we dene M t; s := exp Z s t r dB t r 1 2 Z s t j r j 2 dr ; P t 0 -a.s., dP t; :=M t; T dP t 0 ; (4.17) and we introduce for all t2 [0;T ] two nonlinear expectations: for any 2L 2 (F t T ;P t 0 ), E t L [] := inf n E P t; [] :2U L t o ; E t L [] := sup n E P t; [] :2U L t o : (4.18) Additionally for a bounded processX2L 0 ( s ), consider the nonlinear optimal stopping problem S s;t L [X](!) := sup 2T t E t L X t;! and S s;t L [X](!) := inf 2T t E t L X t;! ; (t;!)2 s : (4.19) Remark 29. (ConnectingE L andE L to backward SDEs) For readers who are familiar with BSDE literature, by the comparison principle of BSDEs, see e.g. El Karoui, Peng 25 and Quenez [20], one can easily show thatE t L [] =Y t t andE t L [] =Y t t , where (Y t ;Z t ) and (Y t ;Z t ) are the solution to the following BSDEs, respectively: Y t s = Z T s LjZ t r jdr Z T s (Z t r ) dB t r ; Y t s = + Z T s LjZ t r jdr Z s (Z t r ) dB t r ; tsT; P t 0 -a.s. (4.20) Moreover, this is a special case of the so called g-expectation, see Peng [44]. Remark 30. 
(Optimal stopping under nonlinear expectation and re ected backward SDEs) The denition of the set of smooth functionals involves the optimal stopping problem under nonlinear expectation Y t (!) =S 0;t L [X](!) := sup ~ 2T t E t L X t;! ~ ^ for some stopping time 2T t + and some adapted bounded left upper semi continuous process X. For later use, we provide some key-results which can be proved by following the standard corresponding arguments in the standard optimal stopping theory, and we observe that the process Y is path-wise continuous, see (iv) below. Following the classical arguments in optimal stopping theory, we have: (i)E t L Y ~ ^ Y t (!) for all ~ 2T t , i.e. Y is anE L supermartingale. (ii) If 2T t is an optimal stopping rule, then Y t (!) =E t L X t;! ^ = sup ~ 2T t E t L X t;! ~ ^ sup ~ 2T t E t L Y t;! ~ ^ Y t (!) where the last inequality is a consequence of (i), and the third inequality follows from the fact that XY . Additionally Y t (!) =E t L X t;! ^ E t L Y t;! ^ Y t (!) This implies that Y =X and, by (i), that Y :^ is anE L martingale. (iii) We then dene t;! 1 := inffs > t : Y t;! s = X t;! s g. Since Y t;! T = X t;! T , we have t;! 1 T , P 0 a.s. Moreover, following the classical arguments in optimal stopping theory, we see that Y t;! s^ t;! 1 st is anE t L martingale. With this in hand, we conclude that t;! 1 is an optimal stopping time, i.e. Y t;! t =E t L X t;! t;! 1 : 26 (iv) For those readers who are familiar with backward stochastic dierential equations, we mention that Y = Y , where (Y ;Z ;K ) is the solution to the following re ected BSDEs: Y s =X t;! Z s LjZ r jdr Z s Z r dB t r Z s dK s ; (4.21) Y s X t;! s ; and (Y s X t;! s )dK s = 0; s2 [t;T ]; P t 0 a.s. (4.22) See e.g. [19]. In particular, it is a well-known result that the process Y is path-wise continuous whenever X is left upper semi continuous. (v) Similar results hold for inf ~ 2T tE t L [X t;! ~ ^ ]. We then give our class of smooth test functionals. 
For any L 0 and t < T , for any u2U, dene A L u(t;!) := n '2C 1;2 b ( t ) : there exists 2T t + such that 0 ='(t; 0)u(t;!) =S t;t L ('u t;! ) ^ (0) o ; (4.23) and for u2U dene A L u(t;!) := n '2C 1;2 b ( t ) : there exists 2T t + such that 0 ='(t; 0)u(t;!) =S t;t L ('u t;! ) ^ (0) o : (4.24) We may omit 0. Denition 31. Let u2U (resp. u2U). (i) For any L 0, we say u is a viscosity L-subsolution (resp. L-supersolution) of PPDE (2.12) if, for any (t;!)2 [0;T ) and any '2A L u(t;!) (resp. '2A L u(t;!)), it holds that (L t;! ')(t; 0) (resp.)0: (ii) We say u is a viscosity subsolution (resp. supersolution) of PPDE (2.12) if u is viscosity L-subsolution (resp. L-supersolution) of PPDE (2.12) for some L 0. (iii) We say u is a viscosity solution of PPDE (2.12) if it is both a viscosity subsolution and a viscosity supersolution. 27 In the rest of this section we provide several remarks concerning our denition of viscosity solutions. In most places we will comment on the viscosity subsolution only, but obviously similar properties hold for the viscosity supersolution as well. Remark 32. As standard in the literature on viscosity solutions of PDEs: (i) The viscosity property is a local property in the following sense. For any (t;!)2 [0;T ) and any "> 0, to check the viscosity property of u at (t;!), it suces to know the value of u t;! on [t; h t " ] for an arbitrarily small "> 0. (ii) TypicallyA L u(t;!) andA L u(t;!) are dierent, sou is a viscosity solution does not mean (L t;! ')(t; 0) = 0 for' in some appropriate set. One has to check viscosity subsolution property and viscosity supersolution property separately. (iii) In generalA L u(t;!) could be empty. In this case automatically u satises the viscosity subsolution property at (t;!). Remark 33. (i) For 0 L 1 < L 2 , obviouslyE t L 2 E t L 1 , andA L 2 u(t;!)A L 1 u(t;!). Then one can easily check that a viscosityL 1 -subsolution must be a viscosityL 2 -subsolution. 
Consequently, u is a viscosity subsolution i there exists a L 0 such that, for all ~ LL, u is a viscosity ~ L-subsolution. (ii) However, we require the sameL for all (t;!). We should point out that our denition of viscosity subsolution is not equivalent to the following alternative denition, under which we are not able to prove the comparison principle: for any (t;!) and any '2 T L0 A L u(t;!), it holds that (L t;! ')(t; 0) 0. Remark 34. We may replaceA L with the following (A 0 ) L which requires strict inequality: A 0 L u(t;!) := n '2C 1;2 b ( t ) : there exists 2H t such that 0 ='(t; 0)u(t;!)<S t;t L ('u t;! ) ^ for all ~ 2T t + o : (4.25) Then u is a viscosity L-subsolution of PPDE (2.12) if and only if (L t;! ')(t; 0) 0 for all (t;!)2 [0;T ) and '2A 0 L u(t;!). 28 A similar statement holds for the viscosity supersolution. Indeed, sinceA 0 L u(t;!)A L u(t;!), then the only if part is clear. To prove the if part, let (t;!)2 [0;T ) and'2A L u(t;!). For any"> 0, denote' " (s; ~ !) :='(s; ~ !)+"(st). Then clearly ' " 2A 0 L u(t;!), and thus (L t;! ' " )(t; 0) =@ t '(t; 0)" 1 2 tr @ 2 xx '(t; 0) F t;! t;!;'(t; 0);@ x '(t; 0) 0: Send "! 0, we obtain (L t;! ')(t; 0) 0, and thus u is a viscosity L-subsolution. Remark 35. We have some exibility to chooseAu(t;!) andAu(t;!) in Denition 31. In principle, the smaller these sets are, the easier we can prove viscosity properties and thus easier for existence of viscosity solutions, but the more dicult for the comparison principle and the uniqueness of viscosity solutions. (i) The followingA 00 L u(t;!) is larger thanA L u(t;!), but all the results in this paper still hold true if we useA 00 L u(t;!) (and the correspondingA 00 L u(t;!)): A 00 L u(t;!) := n '2C 1;2 b ( t ) : for any 2T t + , 0 ='(t; 0)u(t;!)E t L ('u t;! 
) ~ ^ for some ~ 2T t + o : (4.26) (ii) However, if we use the following smaller alternatives ofA L (t;!;u), which do not involve the nonlinear expectation, we are not able to prove the comparison principle and the uniqueness of viscosity solutions: A u(t;!) := n '2C 1;2 b ( t ) : there exists 2H t such that 0 ='(t; 0)u(t;!) ('u t;! ) ~ ^ for any ~ 2T t + o ; or A u(t;!) := n '2C 1;2 b ( t ) : for all (s; ~ !)2 (t;T ] t 0 ='(t; 0)u(t;!) ('u t;! )(s; ~ !) o : See also Remark 33 (ii). Remark 36. (i) Let u be a viscosity subsolution of PPDE (2.12). Then for any 2 R, ~ u t :=e t u t is a viscosity subsolution of the following PPDE: ~ L~ u := @ t ~ u 1 2 (@ !! ~ u) ~ F (t;!; ~ u;@ ! ~ u) 0; (4.27) 29 where ~ F (t;!;y;z) :=y +e t F (t;!;e t y;e t z): Indeed, assume u is a viscosity L-subsolution of PPDE (2.12). Let (t;!)2 [0;T ) and ~ '2A L ~ u(t;!). For any "> 0, denote ' " s :=e s ~ ' s +"(st): Then, noting that ~ ' t =e t u(t;!), ' " s u t;! s e t ~ ' s ~ u t;! s = e s e t ~ ' s + e (st) 1 u s +"(st) = e s e t ~ ' s ~ ' t + e (st) 1 (u s u t ) + e (st) +e (st) 2 u t +"(st) "(st)C(st) j ~ ' s ~ ' t j +ju s u t j + (st) : Let ~ 2T t + be a stopping time corresponding to ~ '2A L ~ u(t;!), and set " := ~ ^ inf n s>t :j ~ ' s ~ ' t j +ju s u t j + (st) " C o ^T: Then " 2T t + , by Example 28, and for any 2T t such that " , it follows from the previous inequality that ' " u t;! e t [ ~ ' ~ u t;! ]: By the increase and the homogeneity of the operator E t;! L , together with the fact that ~ '2A L ~ u(t;!), this implies that: E L t ' " u t;! E L t e t ( ~ ' ~ u t;! ) = e t E L t ~ ' ~ u t;! 0 = ' " t u t : This implies that ' " 2A L t u(t;!), thenL t;! ' " (t; 0) 0. Send "! 0, similar to Remark 34 we getL t;! ' 0 (t; 0) 0, where' 0 s :=e s ~ ' s . Now by straightforward calculation we obtain @ t ~ '(t; 0) 1 2 @ 2 !! ~ '(t; 0) ~ F t;!; ~ '(t; 0);@ x ~ '(t; 0) 0: That is, ~ u is a viscosity subsolution of PPDE (4.27). 30 (ii) If we consider more general variable change: u(t;!) 
:= (t;u(t;!)), where 2 C 1;2 ([0;T ]R) such that @ y > 0. Denote by := 1 the inverse function of with respect to the space variable y. Then one can easily check that u is a classical subsolution of PPDE (2.12) if and only if u is a classical subsolution of the following PPDE: L u :=@ t u 1 2 @ 2 !! u F (t;!; u;@ x u) 0; where F (t;!;y;z) := 1 @ y (t;y) h @ t (t;y) + 1 2 @ 2 yy (t;y)jzj 2 +F (t;!; (t;y);@ y (t;y)z) i : (4.28) However, if u is only a viscosity subsolution of PPDE (2.12), we are not able to prove that u is a viscosity subsolution of (4.28). The main diculty is that the nonlinear expectation E L t and the nonlinear function do not commute. Consequently, given '2A L u(t;!), we are not able to construct as in (i) the corresponding '2A L u(t;!). 4.4 The Value functional For our next results, we shall assume that F and veries (19). A solution of a BSDE is dened almost surely. We will use the following trick to dene a functional with the solution of the BSDE (3.4). We x (t;!)2 and consider the following BSDE under P t 0 . Y s = t;! (B t ) + Z T s F t;! (r;B t ;Y r ;Z r )dr Z T s Z r dB t r ; P t 0 a.s. (4.29) Under Assumption 19, clearly FBSDE (4.29) has a unique solution (Y t;! ;Z t;! ). Then, for any (t;!),Y t;! t is a constant due to the Blumenthal zero-one law. Thus we can dene u 0 (t;!) :=Y t;! t : (4.30) We rst establish the regularity of u 0 . Proposition 37. Under Assumption (19), u 0 is uniformly continuous in under d 1 . Remark 38. The letter C will designate a constant that might change from line to line but it will only depend on L 0 ;T; 0 , and d. Proof. Since F and are bounded, clearly u is bounded. To prove the uniform continuity, we need to give some estimates on the backward SDEs. We pick (t;!); (t;! 0 )2 2 . 31 By standard arguments it is clear that, for any p 1, E P t 0 h kY t;! k p T + Z T t jZ t;! s j 2 ds p=2 i C p ; For (t;!) and (t;! 0 ) the BSDEs at (4.29) are dened with data (F t;! (s;B t ;y;z); t;! (B t )) and (F t;! 
0 (s;B t ;y;z); t;! 0 (B t )) Therefore jY t;! t Y t;! 0 t jE P t 0 " sup s2[t;T ] jY t;! s Y t;! 0 s j 2 # E P t 0 j t;! (B t ) t;! 0 (B t )j 2 + Z T s jF t;! (r;B t ; 0; 0)F t;! 0 (r;B t ; 0; 0)j 2 dr E P t 0 0 (jj!! 0 jj t +jjB t B t jj t T ) 2 + Z T s 0 (jj!! 0 jj t +jjB t B t jj t r ) 2 dr 2 0 (jj!! 0 jj t ) 2 We obtain that there exists a modulus of continuity 1 with polynomial growth such that ju 0 (t;!)jC and ju 0 (t;!)u 0 (t;! 0 )j 1 (k!! 0 k t ): (4.31) Given the above regularity, by standard arguments in BSDE theory, we have the following dynamic programming principle: for any t<t 0 T , Y t;! s = (u 0 ) t;! (t 0 ;B t ) + Z t 0 s F t;! (r;B t ;Y t;! r ;Z t;! r )dr Z t 0 s Z t;! r dB t r ;P t 0 -a.s. (4.32) In particular,Y t;! s = (u 0 ) t;! (s;B t ). Denote :=d 1 (t;!); (t 0 ;!) . Then ju 0 t u 0 t 0j(!) = E P t 0 h Y t;! t Y t;! t 0 +u t;! (t 0 ;B t )u(t 0 ;!) i = E P t 0 h Z t 0 t F t;! (r;B t ;Y t;! r ;Z t;! r )dr +u t;! (t 0 ;B t )u(t 0 ;!)] i E P t 0 h Z t 0 t jF t;! (r;B t ;Y t;! r ;Z t;! r )jdr + 1 +kB t k t 0 i ; (4.33) 32 Notice that E P t 0 h Z t 0 t F t;! (r;B t ;Y t;! r ;Z t;! r ) dr i CE P t 0 h Z t 0 t 1 +jY t;! r j +jZ t;! r j dr i C p E P t 0 h Z t 0 t 1 +jY t;! r j 2 +jZ t;! r )j 2 dr i 1=2 C p : As for the second term, since 0 has polynomial growth, one can easily see that we may assume without loss of generality that 1 also has polynomial growth. Note that t 0 t. Then it is clear that there exists a modulus of continuity function 0 such that E P t 0 h 1 +kB t k t 0 i 0 (): Without loss of generality we assume 0 () p . Then, plugging the last estimates into (4.33) and combining with (4.31), we obtain (5.31). Moreover, given the regularity in t, we may extend the dynamic programming principle (4.32). For all st and 2T s , Y t;! s = (u 0 ) t;! (;B t ) + Z s F t;! (r;B t ;Y t;! r ;Z t;! r )dr Z s (Z t;! r ) dB t r ;P t 0 -a.s. (4.34) 4.5 Viscosity solution property of the value functional Theorem 39. 
Under the assumptions (19), the value functional u 0 is a viscosity solution to the PPDE, (2.2). Proof. We just show thatu 0 is a viscosity subsolution. We prove by contradiction. Assume u 0 is not a viscosity subsolution. Then, for all L> 0,u 0 is not anLviscosity subsolution. For the purpose of this proof, it is sucient to consider an arbitrary LL 0 , the Lipschitz constant of F introduced in Assumption 19 (i). Then, there exist (t;!)2 [0;T ) and '2A L u 0 (t;!) such that c := (L t;! ')(t; 0)> 0: Denote, for s2 [t;T ], ~ Y s :='(s;B t ); ~ Z s :=@ ! '(s;B t ); Y s := ~ Y s Y t;! s ; Z s := ~ Z s Z t;! s : 33 where (Y t;! ;Z t;! ) are dened in (4.29). Applying It^ o's formula under P t 0 , we have d(Y s ) = h (L t;! ')(s;B t ) +F t;! (s;B t ; ~ Y s ; ~ Z s )F t;! (s;B t ;Y t;! s ;Z t;! s ) i ds +Z s dB t s = h (L t;! ')(s;B t ) + s Y s +h;Zi s i ds +Z s dB t s ; P t 0 -a.s. wherejj;jjL 0 . Observing that Y t = 0, we dene 0 :=T^ inf n s>t : (L t;! ')(s;B t )L 0 jY s j c 2 o : Then, by Proposition 37 and Example 28, 0 2T t + and (L t;! ')(s;B t ) + s Y s c 2 ; for all s2 [t; 0 ]: (4.35) Now for any 2T t such that 0 , we have 0 = Y t = Y + Z t h (L t;! ')(s;B t ) + s Y s +h;Zi s i ds Z t Z s dB t s '(;B t )u 0;t;! (;B t ) + c 2 (t) Z t Z s dB t s s ds : ThenE t;! L ('u 0;t;! )(;B t ) E P t ('u 0;t;! )(;B t ) 0. This contradicts with '2 A L u 0 (t;!). Following similar arguments, one can easily prove the following consistency result : Proposition 40. Under Assumption (19), a bounded classical subsolution (resp. superso- lution) of the PPDE (2.12) must be a viscosity subsolution (resp. supersolution). We now state and prove the viscosity solution property of u 0 . Proposition 41. Under Assumption (19), u 0 (t;!) :=Y t;! t is a viscosity solution of PPDE (2.2). Proof. We proceed in two steps. Step 1. In Step 2 below, we will show that u2UC b () and satises the dynamic program- ming principle: for any (t;!)2 and 2T t , underP t 0 Y t;! s = (u 0 ) t;! (;B t ) + Z s F t;! 
r (Y t;! r ;Z t;! r )dr Z s Z t;! r dB t r : (4.36) 34 Let L be a Lipschitz constant of F in z. We now show that u 0 is an L-viscosity solution. Without loss of generality, we prove only the viscosity subsolution property at (t;!) = (0; 0). For notational simplicity we omit the superscript 0;0 in the rest of this proof. Assume to the contrary that, c := @ t ' + 1 2 @ 2 !! ' +F (;u;@ ! ') (0; 0)> 0 for some '2A L u(0; 0): Let h2H be the hitting time corresponding to ' in (4.23), and by Remark 32 (i), without loss of generality we may assume h = h " for some small " > 0. Since '2 C 1;2 () and u2UC b (), by Assumption (19) (iv) and the uniform Lipschitz property of F in (y;z), we may assume " is small enough such that @ t ' s + 1 2 @ 2 !! ' s +F s (u 0 s ;@ ! ' s ) c 2 > 0; t2 [0; h]: Using the dynamic programming principle (4.36), and applying It^ o's formula on ' under P 0 , we have: ('u) h = ('u) h ('u) 0 = Z h 0 @ t ' + 1 2 @ 2 !! ' +F (;u;Z) (s;B)ds + Z h 0 @ ! 'Z (s;B)dB s Z h 0 c 2 +F (;u;@ ! ')F (;u;Z) (s;B)ds + Z h 0 @ ! 'Z (s;B)dB s = Z h 0 h c 2 + (@ ! 'Z) i (s;B)ds + Z h 0 @ ! 'Z (s;B)dB s = c 2 h + Z h 0 @ ! 'Z (s;B) dB s s ds ; wherejj L. Applying Girsanov Theorem one sees immediately that there exists ~ P equivalent to P 0 such that dB t t dt is a ~ P-Brownian motion. Then the above inequality holds ~ P-a.s., and by the denition ofA L u: 0 E ~ P ('u) h c 2 E ~ P [h] < 0; which is the required contradiction. 35 5 Wellposedness results 5.1 Stability of viscosity solutions We start with a stability result. Theorem 42. Let (F " ;" > 0) be a family of coecients converging uniformly towards a coecient F 2 C 0 () as "! 0. For some L > 0, let u " be a viscosity Lsubsolution (resp. Lsupersolution) of PPDE (2.12) with coecientsF " , for all"> 0. Assume further that u " converges to some u, uniformly in . Then u is a viscosity Lsubsolution (resp. supersolution) of PPDE (2.12) with coecient F . Proof. 
We shall prove only the viscosity subsolution property by contradiction. By Remark 34, without loss of generality we assume there exists '2A 0 L u(0; 0) such that c :=L'(0; 0)> 0, whereA 0 L u(0; 0) is dened in (4.25). Denote X 0 :='u; X " :='u " ; and 0 := inf n t> 0 :L'(t;B) c 2 o ^T: (5.37) Since f2C 0 (), it follows from Example 28 that 0 2T 0 + . By (4.25), there exists 1 2T 0 + such that 1 0 and E L 0 ( 1 ;X 0 1 )> 0 =X 0 0 : Since u " converges towards u uniformly, we have E L 0 ( 1 ;X " 1 ) > X " 0 for suciently small "> 0: (5.38) Consider the optimal stopping problem, under nonlinear expectation, together with the corresponding optimal stopping rule: Y t := inf 2T t E L t X " ^ 1 and 0 := inf n t 0 :Y " t =X " t o ; (5.39) see Remark 30. We claim that P 0 0 < 1 > 0; (5.40) because otherwise X " 0 Y 0 =E L 0 X " 1 , contradicting (5.38). 36 Since X and Y are continuous, P 0 -a.s. there exists Ef 0 < 1 g such that P 0 (E) = P 0 ( 0 < 1 )> 0, and for any!2E, denotingt := 0 (!) we haveX t (!) =Y t (!). Notice that t;! 1 2T t + . By standard arguments using the regular conditional probability distributions, see e.g. [56] or [53], it follows from the denition of 0 together with theE L submartingale property of Y that X " t (!) =Y t (!) =Y t;! t (!)E L t Y t;! E L t X ";t;! for all 2T t ; t;! 1 : This implies that 0E L t X ";t;! X " t (!) =E L t ' t;! '(t;!) +u " (t;!)u ";t;! : for all 2T; t;! 1 : Dene ' " s :=' t;! s '(t;!) +u " (t;!): Then we have ' " 2A L (t;!;u " ). Since u " is a viscosity L-subsolution of PPDE (2.12) with coecients F " , we have 0 @ t ' " (t; 0) 1 2 tr @ 2 xx ' " (t; 0)F " t;!;' " (t; 0);@ x ' " (t; 0) = @ t '(t;!) 1 2 tr @ 2 xx ' (t;!)F " t;!;u " (t;!);@ x '(t;!) = (L')(t;!) +F t;!;u(t;!);@ x '(t;!) F " t;!;u " (t;!);@ x '(t;!) c 2 +F t;!;u(t;!);@ x '(t;!) F " t;!;u " (t;!);@ x '(t;!) ; thanks to (5.37). Send "! 0, we obtain 0 c 2 , contradiction. Remark 43. (i) We need the same L in the proof of Theorem 42. 
If u " is only a viscosity subsolution of PPDE (2.12) with coecient F " , but with possibly dierent L " , we are not able to show that u is a viscosity subsolution of PPDE (2.12) with coecients (2.12). (ii) However, ifu " is a viscosity solution of PPDE (2.12) with coecientF " , by Theorem 44, it follows immediately from the stability of BSDEs thatu is the unique viscosity solution of PPDE (2.12) with coecient F . 37 6 Comparison Theorem 44. Let Assumptions 19 hold. Letu 1 be a viscosity subsolution andu 2 a viscosity supersolution of PPDE (2.12). If u 1 (T;)u 2 (T;), then u 1 u 2 on . Consequently, given the terminal condition,u 0 is the unique viscosity solution of PPDE (2.12). The proof is reported in Section 6.2 building on a partial comparison result derived in Subsection 6.1. Remark 45. For technical reasons, we require a uniformly continuous function xi between u 1 T and u 2 T , see Section 6.2. However, when one of u 1 and u 2 is in C 1;2 b (), then we need neither the presence of such nor the existence of ^ F , see Lemma 47 below. 6.1 Partial comparison principle Denition 46. Let t2 [0;T ], u : t !R, and P be a semimartingale measure on t . We say u2 C 1;2 P ( t ) if there exist an increasing sequence of F t -stopping times t = 0 1 T such that, (i) For each i 0 and !2 t , i (!);! i+1 2T i (!) + and u i (!);! 2C 1;2 b ( i (!) ( i (!);! i+1 )); (ii) For each i 0 and !2 , u (!) is continuous on [0; i (!)]; (iii) For P-a.s. !2 t , the setfi : i (!)<Tg is nite. We shall emphasize that, for u2 C 1;2 P ( t ), the derivatives of u are bounded on each interval [ i (!); i (!);! i+1 ], however, in general they may be unbounded on the whole interval [t;T ]. Also, the previous denition and, more specically the dependence on P introduced in item (iii), is motivated by the results established in Section 6.2 below. The following partial comparison principle, which improves Lemma 27, is crucial for this paper. The main argument is very much similar to that of Theorem 42. 
Lemma 47. Let Assumption 19 hold true. Let u 1 be a viscosity subsolution and u 2 a viscosity supersolution of PPDE (2.12). If u 1 (T;) u 2 (T;) and one of u 1 and u 2 is in C 1;2 b (), then u 1 u 2 on . Proof. First, by Remark 36 (i), by otherwise changing the variable we may assume without loss of generality that F is strictly decreasing in y. (6.41) 38 We assumeu 2 2C 1;2 b () andu 1 is a viscosityL-subsolution for someL 0. We shall prove by contradiction. Without loss of generality, we assume c :=u 2 0 u 1 0 < 0: For future purpose, we shall obtain the contradiction under the following slightly weaker assumptions: u 2 2 C 1;2 P 0 () bounded; and (Lu 2 ) 0; u 2 (T;)u 1 (T;) P 0 -a.s. (6.42) Denote X :=u 2 u 1 and 0 := inf n t> 0 :X t 0 o ^T: Note that X 0 =c< 0, X T 0, and X is continuous,P 0 -a.s. Then 0 > 0; X t < 0; t2 [0; 0 ); and X 0 = 0; P 0 -a.s. (6.43) Similar to Remark 30, dene the processY by the optimal stopping problem under nonlinear expectation: Y t := inf 2T t E L t X ^ 0 ; t2 [0; 0 ]; together with the corresponding optimal stopping rule: 0 := infft 0 : Y t =X t g: Then 0 0 , and we claim that P 0 [ 0 < 0 ] > 0; (6.44) because otherwise X 0 Y 0 =E L 0 X 0 , contradicting (6.43). As in the proof of Theorem 42, there exists Ef 0 < 0 g such that P 0 (E) =P 0 0 < 0 > 0, and for any !2E, by denoting t := 0 (!) we have t;! 0 2T t + and X t (!) = Y t (!) = inf 2T t E L t X ^ t;! 0 ]; P t 0 a.s. 39 Letf i ;i 0g be the sequence of stopping times in Denition 46 corresponding to u 2 . Then P 0 f 0 < i g\E > 0 for i large enough, and thus there exists !2 E such that t := 0 (!) < i (!). Without loss of generality, we assume i1 (!) t. It is clear that ( 0 ^ i ) t;! 2T t + and (u 2 ) t;! 2C 1;2 b ( t (( 0 ^ i ) t;! )). In particular, there exists ~ u 2 2C 1;2 b ( t ) such that (u 2 ) t;! = ~ u 2 on ( 0 ^ i ) t;! ). Now for any 2T t + such that ( 0 ^ i ) t;! , it follows from Remark 30 that: X t (!) =Y t (!) =Y t;! t E L t Y t;! E L t X t;! 
: Thus 0E L t (~ u 2 ) t;! (u 1 ) t;! X t (!) : Denote ' s := (~ u 2 ) t;! s X t (!), s2 [t;T ]. Then '2A L (t;!;u 1 ). Since u 1 is a viscosity L-subsolution and u 2 is a classical supersolution, we have 0 (L')(t; 0) =@ t ~ u 2 (t; 0) 1 2 tr @ 2 xx ~ u 2 (t; 0) F t;!;u 1 (t;!);@ x ~ u 2 (t; 0) = @ t u 2 (t;!) 1 2 tr @ 2 xx u 2 (t;!) F t;!;u 1 (t;!);@ x u 2 (t;!) = (Lu 2 )(t;!) +F t;!;u 2 (t;!);@ x u 2 (t;!) F t;!;u 1 (t;!);@ x u 2 (t;!) F t;!;u 2 (t;!);@ x u 2 (t;!) F t;!;u 1 (t;!);@ x u 2 (t;!) : By (6.43), u 2 (t;!)<u 1 (t;!). Then the above inequality contradicts with (6.41). 6.2 A variation of the Perron's approach To prove Theorem 44, we dene u(t;!) := inf '(t; 0) :'2D(t;!) ; u(t;!) := sup '(t; 0) :'2D(t;!) ; (6.45) where, in light of (6.42), D(t;!) := n '2 C 1;2 P t 0 ( t ) bounded : (L') t;! s 0; s2 [t;T ] and ' T g t;! ; P t 0 -a.s. o ; D(t;!) := n '2 C 1;2 P t 0 ( t ) bounded : (L') t;! s 0; s2 [t;T ] and ' T g t;! ; P t 0 -a.s. o : (6.46) By Lemma 47, in particular by its proof under the weaker condition (6.42), it is clear that uu 0 u: (6.47) 40 The following result is important for our proof of Theorem 44. Theorem 48. Let Assumptions 19 holds true. Then u =u: (6.48) Proof of Theorem 44. By Lemma 47, in particular by its proof under the weaker condition (6.42), we have u 1 u and uu 2 . Then Theorem 48 implies that u 1 u 2 . This clearly leads to the uniqueness of viscosity solution, and therefore, by Theorem 39 u 0 is the unique viscosity solution of PPDE (2.12) with terminal condition . Remark 49. In standard Peron's method, one shows that u (resp. u) is a viscosity super- solution (resp. viscosity sub-solution) of the PDE. Assuming that the comparison principle for viscosity solutions holds true, then (6.48) holds. In our situation, we shall instead prove (6.48) directly rst, which in turn is used to prove the comparison principle for viscosity solutions. 
Roughly speaking, the comparison principle for viscosity solutions is more or less equivalent to the partial comparison principle Lemma 47 and the equality (6.48). To our best knowledge, such an approach is novel in the literature. We decompose the proof of Theorem 48 into several lemmas. First, let t < T and 2 C 0 b ( t ) d satisfy there exists ^ 2 C 0 b ( ^ t ) d such that = ^ in and ^ is uniformly continuous in ^ ! under the uniform normkk t T . (6.49) Dene Z s =z + Z s t r dr; v s := Z s t Z s dB t s ; tsT; P t 0 -a.s. (6.50) By It^ o's formula, we have v s =Z s B t s Z s t r B t r dr: Denote ^ Z s (^ !) :=z + Z s t ^ r (^ !)dr; ^ v(s; ^ !) := ^ Z s (^ !)^ ! s Z s t ^ r (^ !)^ ! r dr; ^ !2 ^ t : (6.51) 41 Now for any !2 and x2R, , let ^ u t;! denote the unique solution to the following ODE (with random coecients) on [t;T ]: ^ u t;! (s; ^ !) :=x Z s t ^ F t;! (r; ^ !; ^ u t;! (r; ^ !); ^ Z r (^ !))dr + ^ v(s; ^ !); tsT; ^ !2 ^ t ; (6.52) and dene u t;! (s; ~ !) := ^ u t;! (s; ~ !) for (s; ~ !)2 t : (6.53) We then have Lemma 50. Let Assumptions 19 and (6.49) hold true. Then for each (t;!)2 , the above u t;! 2C 1;2 b ( t ) andL t;! u t;! = 0. Proof. We rst show that ^ u t;! 2C 1;2 b ( ^ t ), which implies that u t;! 2C 1;2 b ( t ). For ts 1 < s 2 T and ^ ! 1 ; ^ ! 2 2 ^ t , we have j ^ Z s 1 (^ ! 1 ) ^ Z s 2 (^ ! 2 )j Z s 2 s 1 j ^ r (^ ! 1 )jdr + Z s 1 t j ^ r (^ ! 1 ) ^ r (^ ! 2 )jdr C[s 2 s 1 ] + Z s 1 t j ^ r (^ ! 1 ) ^ r (^ ! 2 )jdr: Note that d t 1 (r; ^ ! 1 ); (r; ^ ! 2 ) d t 1 ((s 1 ; ^ ! 1 ); (s 2 ; ^ ! 2 ) for t r s 1 . Then one can easily see that ^ Z2C 0 b ( ^ t ). Similarly one can show that ^ v; ^ u t;! 2C 0 ( ^ t ). Next, one can easily check that, for all ^ !2 ^ t , @ t ^ Z s (^ !) = ^ s (^ !); @ x ^ Z s (^ !) = 0; @ t ^ v(s; ^ !) = ^ s (^ !)^ ! s ^ s (^ !)^ ! s = 0; @ x ^ v(s; ^ !) = ^ Z s (^ !); @ 2 xx ^ v(s; ^ !) = 0; @ t ^ u t;! (s; ^ !) = ^ F t;! (s; ^ !; ^ u t;! (s; ^ !); ^ Z s (^ !)); @ x ^ u t;! (s; ^ !) = ^ Z s (^ !); @ 2 xx ^ u t;! 
(s; ^ !) = 0: Since ^ and ^ F are bounded, it is straightforward to see that ^ u t;! 2C 1;2 b ( ^ t ). Finally, from the above derivatives we see immediately thatL t;! u t;! = 0. Our next two lemmas rely heavily on the remarkable result Bank and Baum [1], which is extended to BSDE case in [53]. Lemma 51. Let Assumption 19 hold true. Let 2T , Z be F-progressively measurable such that E P 0 [ R T jZ s j 2 ds] <1, and X ; ~ X 2 L 2 (F ;P 0 ). For any " > 0, there exists F-progressively measurable process Z " such that 42 (i) For the Lipschitz constant L 0 in Assumption 19 (ii), it holds that P 0 h sup tT e L 0 t jX " t X t j" +e L 0 j ~ X X j i "; (6.54) where X;X " are the solutions to the following ODEs with random coecients: X t =X Z t F (s;B;X s ;Z s )ds + Z t Z s dB s ; X " t = ~ X Z t F (s;B;X " s ;Z " s )ds + Z t Z " s dB s ; tT; P 0 -a.s. (6.55) (ii) " t := d dt Z " t exists for t2 [;T ), where " is understood as the right derivative, and for each !2 , ( " ) (!);! satises (6.49) with t :=(!). Proof. First, let h :=h " > 0 be a small number which will be specied later. By standard arguments there exists a time partition 0 = t 0 < < t n = T and a smooth function : [0;T ]R nd !R d such that and its derivatives are bounded and E P 0 h Z T j ~ Z t Z t j 2 dt i <h (6.56) where ~ Z t (!) := (t;! t 1 ^t ; ;! tn^t ) for all (t;!)2 : Next, for some ~ h := ~ h " > 0 which will be specied later, denote Z " t := 1 ~ h Z t t ~ h ~ Z _s ds for t2 [;T ]: (6.57) By choosing ~ h> 0 small enough (which may depend on h " ), we have E P 0 h Z T jZ " t Z t j 2 dt i < 2h: (6.58) Now denote Z " :=Z " Z; X " :=X " X: Then X " t =X " Z t [ s X " s +h;Z " i s ]ds + Z t Z " s dB s ; 43 wherejjL 0 and 2U L 0 t . 
Denote " t := exp R t s ds : We get " t X " t =X " Z t " s h;Z " i s ds + Z t " s Z " s dB s : Then 0 sup tT e L 0 t jX " t je L 0 jX " je L 0 h sup tT " t jX " t jjX " j i sup tT " t X " t X " = sup tT Z t " s h;Z " i s ds + Z t " s Z " s dB s C Z T jZ " s jds + sup tT Z t " s Z " s dB s : Thus P 0 h sup tT e L 0 t jX " t X t j" +e L 0 j ~ X X j i = P 0 h sup tT e L 0 t jX " t X t je L 0 j ~ X X j" i P 0 h C Z T jZ " s jds + sup tT Z t " s Z " s dB s " i C " 2 E P 0 h Z T jZ " s jds 2 + sup tT Z t " s Z " s dB s 2 i C " 2 E P 0 h Z T jZ " s j 2 ds i Ch " 2 ; thanks to (6.58). Now set h := " 3 C , we prove (6.54). Finally, by (6.57) and (6.56) we have, " s = 1 ~ h [ ~ Z s ~ Z (s ~ h)_ ]; s2 [;T ]: Fix !2 and set t :=(!). For each ^ !2 ^ t , set ! :=! t ^ !2 ^ , and dene: ^ Z t;! s (^ !) := (s; ! t 1 ^s ; ; ! tn^s ); ( " ) t;! s (^ !) := 1 ~ h [ ^ Z t;! s (^ !) ^ Z t;! (s ~ h)_t (^ !)]; s2 [;T ]: Then we can easily check that ( " ) t;! satises (6.49). 44 Lemma 52. Assume Assumption 19 holds. Let x2R and Z beF-progressively measurable such that E P 0 [ R T 0 jZ s j 2 ds] <1. For any " > 0, there exists F-progressively measurable process Z " and an increasing sequence of F-stopping times 0 = 0 1 T such that (i) It holds that sup 0tT jX " t X t j"; P 0 -a.s.; (6.59) where X;X " are the solutions to the following ODEs with random coecients: X t =x Z t 0 F (s;B;X s ;Z s )ds + Z t 0 Z s dB s ; X " t =x Z t 0 F (s;B;X " s ;Z " s )ds + Z t 0 Z " s dB s ; 0tT; P 0 -a.s. (6.60) (ii) For each i, " t := d dt Z " t exists for t2 [ i ; i+1 ), where " is understood as the right derivative. Moreover, there exists ~ i;" on [ i ;T ] such that ~ i;" t = " t for t2 [ i ; i+1 ), and for each !2 , ( i;" ) i (!);! satises (6.49) with t := i (!); (iii) ForP 0 -a.s. !2 , for eachi, i < i+1 whenever i <T , and the setfi : i (!)<Tg is nite. Proof. Let"> 0 be xed, and set" i := 2 i2 e L 0 T ",i 0. We construct i and (Z i;" ;X i;" ) by induction as follows. First, for i = 0, set 0 := 0. 
Apply Lemma 51 with initial time 0 , initial value x, and error level" 0 , we can constructZ 0;" andX 0;" on [ 0 ;T ] satisfying the properties in Lemma 51. In particular, P 0 h sup 0 tT e L 0 t jX 0;" t X t j" 0 i " 0 : Denote 1 := inf n t 0 :e L 0 t jX 0;" t X t j" 0 o ^T: (6.61) Since X and X 0;" are continuous, we have 1 > 0 ,P 0 -a.s. We now dene Z " t :=Z 0;" t ; X " t :=X 0;" t ; t2 [ 0 ; 1 ): 45 Assume we have dened i , Z " ;X " on [ 0 ; i ) andX i1;" on [ i1 ;T ]. Apply Lemma 51 with initial time i , initial value X i1;" i , and error level " i , we can construct Z i;" and X i;" on [ i ;T ] satisfying the properties in Lemma 51. In particular, P 0 h sup i tT e L 0 t jX i;" t X t j" i +e L 0 i jX i1;" i X i j i " i : Denote i+1 := inf n t i :e L 0 t jX i;" t X t j" i +e L 0 i jX i1;" i X i j o ^T: Since X and X i;" are continuous, we have i+1 > i whenever i <T . We then dene Z " t :=Z i;" t ; X " t :=X i;" t ; t2 [ i ; i+1 ): From our construction we have P 0 ( i+1 <T )" i . Then 1 X i=0 P 0 ( i+1 <T ) 1 X i=0 " i <1: It follows from the Borel-Cantelli Lemma that the setfi : i (!) < Tg is nite, for P 0 -a.s. !2 , which proves (iii). We thus have dened Z " ;X " on [0;T ], and the statements in (ii) follows directly from Lemma 51. So it remains to prove (i). For each i, by the denition of i we see that, e L 0 i+1 jX " i+1 X i+1 j" i +e L 0 i jX " i X i j; P 0 -a.s. Since X " 0 =X 0 =x, by induction we get sup i e L 0 i jX " i X i j 1 X i=0 " i 1 X i=0 2 i2 e L 0 T " = 1 2 e L 0 T "; P 0 -a.s. Then for each i, sup i t i+1 jX " t X t je L 0 T h " i +jX " i X i j i e L 0 T h 2 i2 e L 0 T " + 1 2 e L 0 T " i "; P 0 -a.s. which implies (6.59). 46 Proof of Theorem 48 Without loss of generality, we shall only prove u 0 = u 0 0 . Recall that (Y 0 ;Z 0 ) is the solution to BSDE (4.29). Set Z :=Z 0 and x :=Y 0 0 in Lemma 52, we see that X =Y 0 =u 0 and thus X satises the regularity in Proposition 37. 
From the construction in Lemma 52, and then by Lemma 51, we see that $\tilde\lambda^{0,\varepsilon}_t := \frac{d}{dt}Z^{0,\varepsilon}_t$ exists for all $t\in[0,T)$ and satisfies (6.49). Then by Lemma 50 we see that $X^{0,\varepsilon}\in C^{1,2}_b(\Lambda)$ and $\mathcal{L}X^{0,\varepsilon}=0$. This implies that, for the $\tau_1$ defined in (6.61), $\tau_1(\omega)>0$ for all $\omega\in\Omega$ and, by Example 28, $\tau_1\in\mathcal{T}_+$. For $i=1,2,\dots$, repeat the above arguments and by induction we can show that, for each $i$ and each $\omega\in\Omega$, $\tau^{\tau_i(\omega),\omega}_{i+1}\in\mathcal{T}^{\tau_i(\omega)}_+$. Moreover, by Lemma 52, $\{i:\tau_i<T\}$ is finite, $\mathbb{P}_0$-a.s. We now let $u^\varepsilon$ be the solution to the following ODE:
$$u^\varepsilon_t = X^\varepsilon_0 + e^{L_0T}\varepsilon - \int_0^t F(s,B_\cdot,u^\varepsilon_s,Z^\varepsilon_s)\,ds + \int_0^t Z^\varepsilon_s\,dB_s.$$
For $i=0,1,\dots$, by the construction of $Z^\varepsilon$ in Lemma 52 and following the arguments in Lemma 50, one can easily show that
$$u^\varepsilon\in C^{1,2}_{\mathbb{P}_0}(\Lambda)\quad\text{and}\quad \mathcal{L}u^\varepsilon=0.\qquad(6.62)$$
Moreover, note that
$$u^\varepsilon_t - X^\varepsilon_t = e^{L_0T}\varepsilon - \int_0^t \alpha_s\,[u^\varepsilon_s - X^\varepsilon_s]\,ds,\quad\text{where } |\alpha|\le L_0.$$
By standard arguments one has
$$\sup_{0\le t\le T}|u^\varepsilon_t - X^\varepsilon_t|\le e^{2L_0T}\varepsilon,\quad\text{and}\quad u^\varepsilon_T - X^\varepsilon_T \ge e^{-L_0T}\,[u^\varepsilon_0 - X^\varepsilon_0] = \varepsilon.$$
Therefore, by (6.59) and noting that $u^0$ is bounded, $u^\varepsilon$ is bounded and
$$u^\varepsilon_T(\omega)\ge X^\varepsilon_T(\omega)+\varepsilon\ge X_T(\omega)=Y^0_T(\omega)=g(\omega),\quad\text{for }\mathbb{P}_0\text{-a.s. }\omega.$$
This, together with (6.62), implies that $u^\varepsilon\in\overline{\mathcal{D}}(0,0)$. Then, by the definition of $\overline{u}$,
$$\overline{u}_0\le u^\varepsilon_0 = X^\varepsilon_0 + e^{L_0T}\varepsilon\le u^0_0+\varepsilon+e^{L_0T}\varepsilon.$$
Since $\varepsilon$ is arbitrary, we obtain $\overline{u}_0\le u^0_0$. $\square$

Chapter 3 Optimal Stopping under nonlinear expectation

This chapter focuses on the problem
$$\sup_{\tau\in\mathcal{T}}\mathcal{E}[X_{\tau\wedge h}],\quad\text{where}\quad \mathcal{E}[\,\cdot\,]:=\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{E}^{\mathbb{P}}[\,\cdot\,],$$
$\mathcal{T}$ is the collection of all stopping times relative to the natural filtration of the canonical process, and $\mathcal{P}$ is a weakly compact, non-dominated family of probability measures. The main result is the following. Similar to the standard theory of optimal stopping, we introduce the corresponding nonlinear Snell envelope $Y$, and we show that the classical Snell envelope characterization holds true in the present context. More precisely, we prove that the Snell envelope $Y$ is an $\mathcal{E}$-supermartingale, and an $\mathcal{E}$-martingale up to its first hitting time of the obstacle.
Consequently, $\tau^*$ is an optimal stopping time for our problem of optimal stopping under nonlinear expectation. This result is proved by adapting the classical arguments available in the context of the standard optimal stopping problem under linear expectation. However, such an extension turns out to be highly technical. The first step is to derive the dynamic programming principle in the present context, implying the $\mathcal{E}$-supermartingale property of the Snell envelope $Y$. To establish the $\mathcal{E}$-martingale property on $[0,\tau^*]$, we need to use a limiting argument for a sequence $Y_{\tau_n}$, where the $\tau_n$'s are stopping times increasing to $\tau^*$. However, we face one major difficulty related to the fact that in a nonlinear expectation framework the dominated convergence theorem fails in general. It was observed in Denis, Hu and Peng [11] that the monotone convergence theorem holds in this framework if the decreasing sequence of random variables is quasi-continuous. Therefore, one main contribution of this chapter is to construct convenient quasi-continuous approximations of the sequence $Y_{\tau_n}$. This allows us to apply the arguments of [11] to $Y_{\tau_n}$, which is decreasing under expectation (but not pointwise!) due to the supermartingale property. The weak compactness of the class $\mathcal{P}$ is crucial for these limiting arguments.

We note that in a one-dimensional Markov model with uniformly non-degenerate diffusion, Krylov [30] studied a similar optimal stopping problem in the language of stochastic control (instead of nonlinear expectation). However, his approach relies heavily on the smoothness of the (deterministic) value function, which we do not have here. Indeed, one of the main technical difficulties in our situation is to obtain the locally uniform regularity of the value process.

The study of this problem is a necessary step towards a generalization of the partial comparison result of Section 6.1.
When defining viscosity solutions for semilinear PPDEs in the previous chapters, we replaced the pointwise maximality condition of the standard theory of viscosity solutions by a problem of optimal stopping under nonlinear expectation. In the previous chapter the problem was restricted to the context of semilinear path-dependent partial differential equations. In this special case, our definition of viscosity solutions can be restricted to the context where $\mathcal{P}$ consists of equivalent measures on the canonical space (and hence $\mathcal{P}$ has dominating measures). Consequently, the Snell envelope characterization of the optimal stopping problem under nonlinear expectation is available in the existing literature on reflected backward stochastic differential equations, see e.g. El Karoui et al. [19], Bayraktar, Karatzas and Yao [2]. However, the extension of our definition to the fully nonlinear case requires considering a non-dominated family of measures.

The chapter is organized as follows. Section 2 formulates the problem of optimal stopping under nonlinear expectation, and contains the statement of our main results. The proof of the Snell envelope characterization in the deterministic maturity case is reported in Section 3. The more involved case of a random maturity is addressed in Section 4.

1 Snell envelope in the fully nonlinear case

We refer to the seminal work of Stroock and Varadhan [56] for the introduction of the regular conditional probability distribution (r.c.p.d.), which is a convenient tool for proving dynamic programming principles, see e.g. Peng [45] and Soner, Touzi, and Zhang [54]. Let $(\Omega,\mathcal{F},\mathbb{P})$ be a probability space and $\mathcal{G}\subset\mathcal{F}$ a sub-$\sigma$-field. An r.c.p.d. $\{\mathbb{P}^\omega_{\mathcal{G}},\omega\in\Omega\}$ satisfies the following requirements:
- For each $\omega\in\Omega$, $\mathbb{P}^\omega_{\mathcal{G}}$ is a probability measure on $\mathcal{F}$;
- For each $E\in\mathcal{F}$, the mapping $\omega\mapsto\mathbb{P}^\omega_{\mathcal{G}}(E)$ is $\mathcal{G}$-measurable;
- For any $\xi\in L^1(\mathcal{F})$, the conditional expectation $\mathbb{E}[\xi|\mathcal{G}](\omega)=\mathbb{E}^{\mathbb{P}^\omega_{\mathcal{G}}}[\xi]$, for $\mathbb{P}$-a.e. $\omega$;
- For any $\omega\in\Omega$, $\mathbb{P}^\omega_{\mathcal{G}}(\Omega^\omega_{\mathcal{G}})=1$, where $\Omega^\omega_{\mathcal{G}}:=\bigcap\{E\in\mathcal{G}:\omega\in E\}$.

We note that an r.c.p.d. exists whenever $\mathcal{G}$ is countably generated.
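The defining properties of an r.c.p.d. can be checked by hand in a finite setting. The following sketch is our own discrete toy example (not the thesis's construction): $\Omega$ is the set of $\pm1$ paths of length 3, $\mathbb P$ is uniform, and $\mathcal G$ is generated by the first step, so each atom of $\mathcal G$ is a set of paths sharing that first step. The code verifies the third and fourth requirements above.

```python
import itertools

# A minimal discrete illustration of an r.c.p.d. (our own toy example).
# Omega: all +/-1 paths of length 3; P: uniform, i.e. a simple symmetric
# random walk; G: the sigma-field generated by the first step.
omega_space = list(itertools.product([-1, 1], repeat=3))
P = {w: 1.0 / len(omega_space) for w in omega_space}

def atom(w):
    """The atom Omega^w_G of G containing w: all paths sharing w's first step."""
    return [w2 for w2 in omega_space if w2[0] == w[0]]

def rcpd(w):
    """P^w_G: P conditioned on the atom of w, viewed as a measure on all of F."""
    mass = sum(P[w2] for w2 in atom(w))
    return {w2: (P[w2] / mass if w2[0] == w[0] else 0.0) for w2 in omega_space}

def xi(w):                       # a bounded F-measurable random variable
    return sum(w)

# Third requirement: E[xi | G](w) = E^{P^w_G}[xi] for every w.
for w in omega_space:
    cond_exp = sum(P[w2] * xi(w2) for w2 in atom(w)) / sum(P[w2] for w2 in atom(w))
    rcpd_exp = sum(rcpd(w)[w2] * xi(w2) for w2 in omega_space)
    assert abs(cond_exp - rcpd_exp) < 1e-12

# Fourth requirement: P^w_G puts full mass on the atom Omega^w_G.
for w in omega_space:
    assert abs(sum(rcpd(w)[w2] for w2 in atom(w)) - 1.0) < 1e-12
```

On the atoms of a finite partition the r.c.p.d. is just elementary conditioning; the content of the definition is that such a family can still be chosen consistently when $\mathcal G$ is, say, $\mathcal F_\tau$ on the canonical space.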
49 In the special case that :=f!2 C([0;T ];R d ) : ! 0 = 0g is our canonical space and G =F for some 2T , it holds that ! F :=f! 0 2 :(! 0 ) =(!) and ! 0 t =! t ; 0t(!)g =f! (!) ! 0 :! 0 2 (!) g:(1.1) Then, as in [54], we deneP ;! on (!) and still call it an r.c.p.d. of P conditional onF : P ;! (E) :=P ! F f! (!) ! 0 :! 0 2Eg ; 8E2F (!) T : (1.2) One may easily check thatP ;! satises the following properties: For each !2 ,P ;! is a probability measure onF (!) T ; For each 2L 1 (F T ), the mapping !!E P ;! [ ;! ] isF -measurable; For any 2L 1 (F),E[jF ](!) =E P ;! [ ;! ], forP-a.e. !. A probability measure P on t is called a semimartingale measure if the canonical process B t is a semimartingale under P. Throughout this chapter, we shall also consider a familyfP t ;t2 [0;T ]g such that for all t2 [0;T ],P t is a set of semimartingale measures on t satisfying : (P1) there exists some L 0 such that, for all t,P t is a weakly compact subset ofP L 0 t . (P2) For any 0 t T , 2T t , and P2P t , the regular conditional probability distribu- tion(r.c.p.d.) P (!);! 2P (!) for P-a.e. !2 t . (P3) For any 0stT , P2P s ,fE i ;i 1gF s t disjoint, and P i 2P t , the following ^ P is also inP s : ^ P := P t h 1 X i=1 P i 1 E i +P1 \ 1 i=1 E c i i : (1.3) Here (1.3) means, for any event E2F s T and denoting E t;! :=f! 0 2 t :! t ! 0 2Eg: ^ P[E] :=E P h 1 X i=1 P i [E t;B ]1 E i (B) i +P E\ (\ 1 i=1 E c i ) : We remark that (P3) is the concatenation property. Proposition 53. For allL> 0, the familyfP t L ;t2 [0;T ]g satises conditions (P1-P2-P3). 50 Proof. Recall the notations in the beginning of Subsection 4.1 of chapter1. Let F := F B and ~ F := ~ F ~ B be the natural ltrations on and ~ , respectively. Moreover, we may identify F with the ltration ~ F B on ~ generated by B: ~ F B t =fE 2 :E2F B t g. (i) First, it follows from standard arguments, see e.g. Zheng [57] Theorem 3, thatP L t is weakly compact. Then Property (P1) holds. 
(ii) We next check without loss of generality Property (P2) only at t = 0. Let 2T and P2P L with corresponding Q as in (4.18) at Chapter 1. Dene ~ (~ !) := (!) for ~ ! := (!;a;m)2 ~ , then clearly ~ is an ~ F B -stopping time, hence also an ~ F-stopping time. By Stroock-Varadhan [56], the r.c.p.d. Q ~ ! ~ F B ~ exists. Note that ~ ! 7! Q ~ ! ~ F B ~ (E) is ~ F B ~ - measurable for any E 2 ~ F T , it follows that Q ~ ! ~ F B ~ depends only on ! and thus we may denote it asQ ! ~ F B ~ . Recall the shifted spaces t , ~ t , F t , and ~ F t . We now dene the following probability measure on the shifted space (!) : Q ;! [ ~ E] := Q ! ~ F B ~ ~ ! 1 (!) ~ ! 2 : ~ ! 1 2 ~ ; ~ ! 2 2 ~ E ; 8 ~ E2 ~ F (!) T ; P ;! [E] := Q ;! E ( (!) ) 2 ; 8E2F (!) T : (1.4) It is straightforward to check that P ;! is an r.c.p.d. of P conditional onF , and Q ;! is the required extension on ~ (!) satisfying (4.18) forP-a.e. !. This veries (P2). (iii) It remains to check Property (P3). AssumeQ andQ i are the corresponding exten- sions ofP andP i . Dene ^ Q := Q t h 1 X i=1 Q i 1 E i ( s ) 2 +Q1 \ 1 i=1 (E c i ( s ) 2 ) i : Following similar arguments as in (ii) one can show that ^ Q satises (4.18). It is clear that ^ P(E) = ^ Q(E ( s ) 2 ) for all E2F s T . Then ^ P2P L s and thus (P3) holds. The following are some other typical examples of such a familyfP t ;t2 [0;T ]g. Example 54. Let L;L 1 ;L 2 > 0 be some constants. Wiener measureP t 0 :=fP t 0 g =fP : P = 0; P =I d g. Finite variationP t fv (L) :=fP :j P jL; P = 0g. Drifted Wiener measureP t 0;ac (L) :=fP :j P jL; P =I d g. Relaxed boundsP t (L 1 ;L 2 ) :=fP :j P jL 1 ; 0 P L 2 I d g. Relaxed bounds, Uniformly ellipticP t ue (L 1 ;L 2 ;L) :=fP :j P jL 1 ;LI d P L 2 I d g. Equivalent martingale measuresP t e (L 1 ;L 2 ;L) :=fP2P t (L 1 ;L 2 ):9j P jL; P = P P g: 51 We remark that only the last three families of measures are not dominated. However all the families verify the properties P1-P2-P3. 
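To make the nonlinear expectation induced by such a family concrete, the sketch below (our own toy computation) estimates $\mathcal E[\xi]=\sup_{\mathbb P\in\mathcal P}\mathbb E^{\mathbb P}[\xi]$ for $\xi=|B_T|$ by Monte Carlo over a finite grid of constant-volatility, zero-drift measures standing in for the relaxed-bounds class $\mathcal P(0,1)$; the grid, horizon, and sample sizes are all our choices. Under constant volatility $\sigma$ one has $\mathbb E^{\mathbb P}|B_T|=\sigma\sqrt{2T/\pi}$, so the supremum over the grid is attained at the largest $\sigma$.

```python
import math
import random

# Toy computation of E[xi] = sup_P E^P[xi] over a finite grid of
# constant-volatility measures (zero drift, sigma in [0, 1]), a stand-in
# for the relaxed-bounds family P(0, 1).  All names are ours.
random.seed(0)
T, n_steps, n_paths = 1.0, 50, 5000
dt = T / n_steps

def expected_abs_BT(sigma):
    """Monte Carlo estimate of E^P[|B_T|] when <B>_t = sigma^2 t under P."""
    total = 0.0
    for _ in range(n_paths):
        b = 0.0
        for _ in range(n_steps):
            b += sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        total += abs(b)
    return total / n_paths

sigmas = [0.0, 0.25, 0.5, 0.75, 1.0]              # finite subfamily
estimates = {s: expected_abs_BT(s) for s in sigmas}
E_xi = max(estimates.values())                     # sup over the subfamily

# Under constant volatility, E^P|B_T| = sigma * sqrt(2T/pi), so the
# supremum over the grid is attained at sigma = 1.
print(max(estimates, key=estimates.get), round(E_xi, 2))
```

Measures with different constant volatilities are mutually singular (they charge disjoint quadratic-variation events), which is the source of the non-domination emphasized in the text.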
We denote by $L^1(\mathcal{F}^t_T,\mathcal{P}_t)$ the set of all $\mathcal{F}^t_T$-measurable random variables $\xi$ with $\sup_{\mathbb{P}\in\mathcal{P}_t}\mathbb{E}^{\mathbb{P}}[|\xi|]<\infty$. The set $\mathcal{P}_t$ induces the following capacity and nonlinear expectation:
$$\mathcal{C}_t[A]:=\sup_{\mathbb{P}\in\mathcal{P}_t}\mathbb{P}[A]\ \text{for } A\in\mathcal{F}^t_T,\quad\text{and}\quad \mathcal{E}_t[\xi]:=\sup_{\mathbb{P}\in\mathcal{P}_t}\mathbb{E}^{\mathbb{P}}[\xi]\ \text{for }\xi\in L^1(\mathcal{F}^t_T,\mathcal{P}_t).\qquad(1.5)$$
When $t=0$, we shall omit $t$ and abbreviate them as $\mathcal{P},\mathcal{C},\mathcal{E}$. Clearly $\mathcal{E}$ is a $G$-expectation in the sense of Peng [46]. We remark that, when $\xi$ satisfies a certain regularity condition, $\mathcal{E}_t[\xi^{t,\omega}]$ can be viewed as the conditional $G$-expectation of $\xi$, and as a process it is the solution of a second order BSDE, as introduced by Soner, Touzi and Zhang [53]. Abusing the terminology of Denis and Martini [12], we say that a property holds $\mathcal{P}$-q.s. (quasi-surely) if it holds $\mathbb{P}$-a.s. for all $\mathbb{P}\in\mathcal{P}$.

A main difficulty we face is that the dominated convergence theorem fails under the nonlinear expectation framework, as illustrated by the following example.

Example 55. Take $\mathcal{P}=\mathcal{P}(0,1)$, defined in Example 54, and consider the following sequence of random variables:
$$X_n = \mathbf{1}_{\{B\neq 0\}}\,\mathbf{1}_{\{\langle B\rangle_T\le\frac1n\}},$$
where $0$ is the null path. Then $X_n\to 0$ on $\Omega$ as $n$ goes to infinity, but $\sup_{\mathbb{P}\in\mathcal{P}}\mathbb{E}^{\mathbb{P}}[X_n]=1$.

A random variable $\xi:\Omega\to\mathbb{R}$ is
- $\mathcal{P}$-quasicontinuous if for any $\varepsilon>0$ there exists a closed set $\Omega_\varepsilon\subset\Omega$ such that $\mathcal{C}(\Omega_\varepsilon^c)<\varepsilon$ and $\xi$ is continuous on $\Omega_\varepsilon$;
- $\mathcal{P}$-uniformly integrable if $\mathcal{E}[|\xi|\mathbf{1}_{\{|\xi|\ge n\}}]\to 0$ as $n\to\infty$.

Since $\mathcal{P}$ is weakly compact, by Denis, Hu and Peng [11] Lemma 4 and Theorems 22, 28, we have:

Proposition 56. (i) Let $(\Omega_n)_{n\ge1}$ be a sequence of open sets with $\Omega_n\uparrow\Omega$. Then $\mathcal{C}(\Omega_n^c)\downarrow 0$.
(ii) Let $(\xi_n)_{n\ge1}$ be a sequence of $\mathcal{P}$-quasicontinuous and $\mathcal{P}$-uniformly integrable maps from $\Omega$ to $\mathbb{R}$. If $\xi_n\downarrow\xi$, $\mathcal{P}$-q.s., then $\mathcal{E}[\xi_n]\downarrow\mathcal{E}[\xi]$.

We finally recall the notion of martingales under nonlinear expectation.

Definition 57. Let $X$ be an $\mathbb{F}$-progressively measurable process with $X_\tau\in L^1(\mathcal{F}_\tau,\mathcal{P})$ for all $\tau\in\mathcal{T}$. We say that $X$ is an $\mathcal{E}$-supermartingale (resp. submartingale, martingale) if, for any $(t,\omega)\in\Lambda$ and any $\tau\in\mathcal{T}^t$, $\mathcal{E}_t[X^{t,\omega}_\tau]\le$ (resp. $\ge$, $=$) $X_t(\omega)$, for $\mathcal{P}$-q.s. $\omega\in\Omega$.
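Example 55 can be checked numerically. In the sketch below (our own discretization, with hypothetical parameter choices), $\mathbb P_n$ is the constant-volatility element of $\mathcal P(0,1)$ with $\sigma_n=1/\sqrt{nT}$, so $\langle B\rangle_T=\sigma_n^2T=1/n$ and $\mathbb E^{\mathbb P_n}[X_n]=1$ for every $n$, while $X_n$ vanishes along any fixed path with $\langle B\rangle_T>0$.

```python
import math
import random

# Numerical check of Example 55: X_n = 1_{B != 0} * 1_{<B>_T <= 1/n}.
# Under the constant-volatility measure P_n with sigma_n = 1/sqrt(nT),
# <B>_T = 1/n exactly, so E^{P_n}[X_n] = 1 for every n, and the sup over
# the family does not vanish even though X_n -> 0 pointwise.
random.seed(1)
T, n_steps = 1.0, 100
dt = T / n_steps

def sample_path(sigma):
    b, path = 0.0, [0.0]
    for _ in range(n_steps):
        b += sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        path.append(b)
    return path

def X_n(n, path, sigma):
    nonzero = any(abs(x) > 0.0 for x in path)
    qv = sigma * sigma * T            # <B>_T under constant volatility sigma
    return 1.0 if nonzero and qv <= 1.0 / n + 1e-12 else 0.0

for n in [1, 10, 100]:
    sigma_n = 1.0 / math.sqrt(n * T)
    mean = sum(X_n(n, sample_path(sigma_n), sigma_n) for _ in range(500)) / 500
    print(n, mean)                    # E^{P_n}[X_n] stays at 1 for each n

# Yet X_n -> 0 on any fixed path with <B>_T > 0:
path = sample_path(1.0)
print([X_n(n, path, 1.0) for n in [1, 10, 100]])
```

The discretized supremum stays at 1 for every $n$ because the maximizing measure moves with $n$; this is exactly the failure of dominated convergence that Proposition 56 circumvents.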
We remark that we require the $\mathcal{E}$-supermartingale property to hold for stopping times. Under a linear expectation $\mathbb{P}$, this is equivalent to the $\mathbb{P}$-supermartingale property for deterministic times, by Doob's optional sampling theorem. However, under nonlinear expectation, the two are in general not equivalent.

2 Optimal stopping under nonlinear expectations

We now fix an $\mathbb{F}$-progressively measurable process $X$.

Assumption 58. $X$ is a bounded càdlàg process with positive jumps, and there exists a modulus of continuity function $\rho_0$ such that for any $(t,\omega),(t',\omega')\in\Lambda$:
$$X(t,\omega)-X(t',\omega')\le \rho_0\big(d_\infty((t,\omega),(t',\omega'))\big)\quad\text{whenever } t\le t'.\qquad(2.6)$$

Remark 59. There is some redundancy in the above assumption. Indeed, (2.6) implies that $X$ has left limits and $X_{t-}\le X_t$ for all $t\in(0,T]$. Moreover, the fact that $X$ has only positive jumps is important to ensure that the random times $\tau^*$ in (2.7), $\hat\tau$ in (2.10), and $\tau_n$ in (3.17) and (4.36) are $\mathbb{F}$-stopping times.

Proof. Fix $\omega\in\Omega$, and let $\{t_n\}$ and $\{s_n\}$ be two sequences such that $t_n\uparrow t$, $s_n\uparrow t$, and $X_{t_n}(\omega)\to\limsup_{s\uparrow t}X_s(\omega)$, $X_{s_n}(\omega)\to\liminf_{s\uparrow t}X_s(\omega)$. Here and in the sequel, in $\lim_{s\uparrow t}$ we adopt the notational convention that $s<t$. Without loss of generality, we may assume $t_n<s_n<t_{n+1}$ for $n=1,2,\dots$. Then for the $\rho_0$ defined in (2.6) we have
$$0\le \limsup_{s\uparrow t}X_s(\omega)-\liminf_{s\uparrow t}X_s(\omega)=\lim_{n\to\infty}X_{t_n}(\omega)-\lim_{n\to\infty}X_{s_n}(\omega)\le\lim_{n\to\infty}\rho_0\big(d_\infty((t_n,\omega),(s_n,\omega))\big)=0.$$
This implies the existence of $X_{t-}(\omega)$. Moreover,
$$X_{t-}(\omega)-X_t(\omega)=\lim_{s\uparrow t}X_s(\omega)-X_t(\omega)\le\lim_{s\uparrow t}\rho_0\big(d_\infty((s,\omega),(t,\omega))\big)=0,$$
completing the proof. $\square$

We define the nonlinear Snell envelope and the corresponding obstacle first hitting time:
$$Y_t(\omega):=\sup_{\tau\in\mathcal{T}^t}\mathcal{E}_t\big[X^{t,\omega}_\tau\big],\quad\text{and}\quad\tau^*:=\inf\{t\ge0: Y_t=X_t\}.\qquad(2.7)$$
The following result gives the nonlinear Snell envelope characterization of the deterministic maturity optimal stopping problem $Y_0$.

Theorem 60 (Deterministic maturity). Let $X$ satisfy Assumption 58. Then $Y$ is an $\mathcal{E}$-supermartingale on $[0,T]$, $Y_{\tau^*}=X_{\tau^*}$, and $Y_{\cdot\wedge\tau^*}$ is an $\mathcal{E}$-martingale.
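The backward-induction mechanics behind Theorem 60 can be seen in a discrete-time toy model, which is our own construction rather than the thesis's continuous-time setting: a binomial walk, a finite set of one-step up-probabilities standing in for $\mathcal P$ (so, unlike the text, a dominated family), and a hypothetical call-type obstacle. The sketch computes $Y_t=\max\big(X_t,\sup_p\mathbb E_p[Y_{t+1}]\big)$, checks the discrete $\mathcal E$-supermartingale property, and reads off the first time $Y$ touches the obstacle, the discrete analogue of $\tau^*$ in (2.7).

```python
# Discrete-time sketch of the nonlinear Snell envelope (our own toy model).
# One-step nonlinear expectation: E[.] = max over a finite set of up-probabilities.
probs = [0.3, 0.5, 0.7]          # stand-in for the family P (here dominated)
N = 4                            # number of periods

def obstacle(t, x):              # hypothetical reward X_t at state x: max(x, 0)
    return max(x, 0.0)

# Backward induction: Y_N = X_N and Y_t = max(X_t, sup_p E_p[Y_{t+1}]).
Y = {(N, x): obstacle(N, x) for x in range(-N, N + 1)}
for t in range(N - 1, -1, -1):
    for x in range(-t, t + 1):
        cont = max(p * Y[(t + 1, x + 1)] + (1 - p) * Y[(t + 1, x - 1)]
                   for p in probs)
        Y[(t, x)] = max(obstacle(t, x), cont)

def tau_star(path):
    """First time the envelope touches the obstacle along a given path."""
    for t, x in enumerate(path):
        if abs(Y[(t, x)] - obstacle(t, x)) < 1e-12:
            return t
    return N

# Snell envelope properties: Y >= X, and Y_t >= sup_p E_p[Y_{t+1}]
# (the discrete analogue of the E-supermartingale property in Theorem 60).
for t in range(N):
    for x in range(-t, t + 1):
        cont = max(p * Y[(t + 1, x + 1)] + (1 - p) * Y[(t + 1, x - 1)]
                   for p in probs)
        assert Y[(t, x)] >= obstacle(t, x) - 1e-12 and Y[(t, x)] >= cont - 1e-12

print(Y[(0, 0)], tau_star([0, 1, 2, 3, 4]))
```

Along the all-up path the envelope stays strictly above this obstacle until maturity, so `tau_star([0, 1, 2, 3, 4])` is $N$, while on paths where continuation is worthless (for instance `[0, -1, -2]`) it touches earlier.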
Consequently, $\tau^*$ is an optimal stopping time for the problem $Y_0$.

To prove the partial comparison principle for viscosity solutions of path-dependent partial differential equations, we need to consider optimal stopping problems with random maturity time $h\in\mathcal{T}$ of the form
$$h := \inf\{t\ge0: B_t\in O^c\}\wedge t_0, \quad (2.8)$$
for some $t_0\in(0,T]$ and some open convex set $O\subset\mathbb{R}^d$ containing the origin. We shall extend the previous result to the following stopped process:
$$\widehat X^h_s := X_s\mathbf{1}_{\{s<h\}} + X_h\mathbf{1}_{\{s\ge h\}} \quad\text{for } s\in[0,T]. \quad (2.9)$$
The corresponding Snell envelope and obstacle first hitting time are denoted:
$$\widehat Y^h_t(\omega) := \sup_{\tau\in\mathcal{T}^t}\mathcal{E}_t\big[(\widehat X^h)^{t,\omega}_\tau\big], \qquad \hat\tau := \inf\{t\ge0: \widehat Y^h_t=\widehat X^h_t\}. \quad (2.10)$$
Our second main result requires the following additional assumption.

Assumption 61. (i) For some $L>0$, $\mathcal{P}^{fv}_t(L)\subset\mathcal{P}_t$ for all $t\in[0,T]$, where $\mathcal{P}^{fv}_t(L)$ is defined in Example 54.
(ii) For any $0\le t<t+\delta\le T$, $\mathcal{P}_t\subset\mathcal{P}_{t+\delta}$ in the following sense: for any $P\in\mathcal{P}_t$ we have $\tilde P\in\mathcal{P}_{t+\delta}$, where $\tilde P$ is the probability measure on $\Omega^{t+\delta}$ such that the $\tilde P$-distribution of $B^{t+\delta}$ is equal to the $P$-distribution of $\{B^t_s,\ t\le s\le T-\delta\}$.

Theorem 62. (Random maturity) Let $X$ be a process satisfying Assumption 58, and suppose that the family of measures $\{\mathcal{P}_t\}$ satisfies Assumption 61. Then $\widehat Y^h$ is an $\mathcal{E}$-supermartingale on $[0,h]$, $\widehat Y^h_{\hat\tau}=\widehat X^h_{\hat\tau}$, and $\widehat Y^h_{\cdot\wedge\hat\tau}$ is an $\mathcal{E}$-martingale. In particular, $\hat\tau$ is an optimal stopping time for the problem $\widehat Y^h_0$.
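To make the exit-time structure of (2.8) concrete, here is a small Python sketch (our own illustration; the grid step, sample path, and radius are hypothetical choices). It computes the discrete analogue of $h=\inf\{t\ge0: B_t\in O^c\}\wedge t_0$ when $O$ is an open ball.

```python
import math

def hitting_time(path, dt, radius, t0):
    """Discrete stand-in for h in (2.8): first grid time at which the path
    leaves the open ball O = {|x| < radius}, capped at t0.  `path` is a list
    of d-dimensional points sampled on the grid 0, dt, 2*dt, ..."""
    for i, x in enumerate(path):
        if math.sqrt(sum(c * c for c in x)) >= radius:   # x in O^c (closed set)
            return min(i * dt, t0)
    return t0   # path never exits before the end of the grid

# a deterministic test path in d = 1, step 0.1, leaving (-1, 1) at t = 0.3
path = [(0.0,), (0.5,), (0.9,), (1.2,), (0.4,)]
assert abs(hitting_time(path, 0.1, 1.0, 1.0) - 0.3) < 1e-12
assert hitting_time(path, 0.1, 2.0, 1.0) == 1.0   # never exits: h = t0
```

Note that the exit condition tests membership in the closed complement $O^c$, mirroring the infimum over $\{t: B_t\in O^c\}$ in (2.8).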
For this purpose, we need to construct certain continuous approximations of the stopping times $\tau_n$, and the requirement that the random maturity $h$ is of the form (2.8) is crucial. We remark that, in his Markovian model, Krylov [30] also considers this type of hitting times. We also remark that, in a special case, Song [55] proved that $h$ is quasicontinuous.

(ii) Assumption 61 is a technical condition used to prove the dynamic programming principle in Subsection 5.1 below. By slightly more involved arguments, we may prove the results after replacing Assumption 61 (i) with: for some constants $L,L_1,L_2$, $\mathcal{P}^{ue}_t(L_1,L_2,L)\subset\mathcal{P}_t$ for all $t\in[0,T]$, where $\mathcal{P}^{ue}_t$ is defined in Example 54 (iv).

3 Deterministic maturity optimal stopping

We now prove Theorem 60. Throughout this section, Assumption 58 is always in force, and we consider the nonlinear Snell envelope $Y$ together with the first obstacle hitting time $\tau^*$, as defined in (2.7). Assume $|X|\le C_0$, and without loss of generality $\rho_0\le 2C_0$. It is obvious that
$$|Y|\le C_0, \quad Y\ge X, \quad\text{and}\quad Y_T=X_T. \quad (3.11)$$
Throughout this section, we shall use the following modulus of continuity function:
$$\bar\rho_0(\delta) := \rho_0(\delta)\vee\big[\rho_0(\delta^{\frac13})+\delta^{\frac13}\big], \quad (3.12)$$
and we shall use a generic constant $C$ which depends only on $C_0$, $T$, $d$, and the $L_0$ in Property (P1); it may vary from line to line.

3.1 Dynamic Programming Principle

Similar to the standard Snell envelope characterization under linear expectation, our first step is to establish the dynamic programming principle. We start with the case of deterministic times.

Lemma 64. For each $t$, the random variable $Y_t$ is uniformly continuous in $\omega$, with modulus of continuity $\rho_0$, and satisfies
$$Y_{t_1}(\omega) = \sup_{\tau\in\mathcal{T}^{t_1}}\mathcal{E}_{t_1}\Big[X^{t_1,\omega}_\tau\mathbf{1}_{\{\tau<t_2\}} + Y^{t_1,\omega}_{t_2}\mathbf{1}_{\{\tau\ge t_2\}}\Big] \quad\text{for all } 0\le t_1\le t_2\le T,\ \omega\in\Omega. \quad (3.13)$$

Proof. (i) First, for any $t$, any $\omega,\omega'\in\Omega$, and any $\tau\in\mathcal{T}^t$, by (2.6) we have
$$\big|X^{t,\omega}_\tau - X^{t,\omega'}_\tau\big| = \big|X\big(\tau(B^t),\omega\otimes_t B^t\big) - X\big(\tau(B^t),\omega'\otimes_t B^t\big)\big| \le \rho_0\Big(d_\infty\big((\tau(B^t),\omega\otimes_t B^t),(\tau(B^t),\omega'\otimes_t B^t)\big)\Big) = \rho_0\big(\|\omega-\omega'\|_t\big).$$
Since $\tau$ is arbitrary, this proves the uniform continuity of $Y_t$ in $\omega$.

(ii) When $t_2=T$, since $Y_T=X_T$, (3.13) coincides with the definition of $Y$. Without loss of generality we assume $(t_1,\omega)=(0,0)$ and $t:=t_2<T$.

Step 1. We first prove "$\le$". For any $\tau\in\mathcal{T}$ and $P\in\mathcal{P}$:
$$E^P[X_\tau] = E^P\Big[X_\tau\mathbf{1}_{\{\tau<t\}} + E^P_t[X_\tau]\mathbf{1}_{\{\tau\ge t\}}\Big].$$
By the definition of the r.c.p.d., we have $E^P_t[X_\tau](\omega) = E^{P^{t,\omega}}\big[X^{t,\omega}_{\tau^{t,\omega}}\big] \le Y_t(\omega)$ for $P$-a.e. $\omega\in\{\tau\ge t\}$, where the inequality follows from Property (P2) of the family $\{\mathcal{P}_t\}$, which states $P^{t,\omega}\in\mathcal{P}_t$, and from the fact that $\tau^{t,\omega}\in\mathcal{T}^t$. Then:
$$E^P[X_\tau] \le E^P\big[X_\tau\mathbf{1}_{\{\tau<t\}} + Y_t\mathbf{1}_{\{\tau\ge t\}}\big].$$
By taking the supremum over $\tau$ and $P$, it follows that:
$$Y_0 = \sup_{\tau\in\mathcal{T}}\mathcal{E}[X_\tau] \le \sup_{\tau\in\mathcal{T}}\mathcal{E}\big[X_\tau\mathbf{1}_{\{\tau<t\}} + Y_t\mathbf{1}_{\{\tau\ge t\}}\big].$$

Step 2. We next prove "$\ge$". Fix arbitrary $\tau\in\mathcal{T}$ and $P\in\mathcal{P}$; we shall prove
$$E^P\big[X_\tau\mathbf{1}_{\{\tau<t\}} + Y_t\mathbf{1}_{\{\tau\ge t\}}\big] \le Y_0. \quad (3.14)$$
Let $\varepsilon>0$, and let $\{E_i\}_{i\ge1}$ be an $\mathcal{F}_t$-measurable partition of the event $\{\tau\ge t\}\in\mathcal{F}_t$ such that $\|\omega-\tilde\omega\|_t\le\varepsilon$ for all $\omega,\tilde\omega\in E_i$. For each $i$, fix an $\omega_i\in E_i$; by the definition of $Y$ we have
$$Y_t(\omega_i) \le E^{P_i}\big[X^{t,\omega_i}_{\tau_i}\big] + \varepsilon \quad\text{for some } (\tau_i,P_i)\in\mathcal{T}^t\times\mathcal{P}_t.$$
By (2.6) and the uniform continuity of $Y$ proved in (i), we have
$$|Y_t(\omega)-Y_t(\omega_i)| \le \rho_0(\varepsilon), \qquad \big|X^{t,\omega}_{\tau_i} - X^{t,\omega_i}_{\tau_i}\big| \le \rho_0(\varepsilon), \quad\text{for all } \omega\in E_i.$$
Thus, for $\omega\in E_i$,
$$Y_t(\omega) \le Y_t(\omega_i)+\rho_0(\varepsilon) \le E^{P_i}\big[X^{t,\omega_i}_{\tau_i}\big]+\varepsilon+\rho_0(\varepsilon) \le E^{P_i}\big[X^{t,\omega}_{\tau_i}\big]+\varepsilon+2\rho_0(\varepsilon). \quad (3.15)$$
Thanks to Property (P3) of the family $\{\mathcal{P}_t\}$, we may define the following pair $(\tilde\tau,\tilde P)\in\mathcal{T}\times\mathcal{P}$:
$$\tilde\tau := \tau\mathbf{1}_{\{\tau<t\}} + \mathbf{1}_{\{\tau\ge t\}}\sum_{i\ge1}\mathbf{1}_{E_i}\tau_i(B^t), \qquad \tilde P := P\otimes_t\Big[\sum_{i\ge1}\mathbf{1}_{E_i}P_i + \mathbf{1}_{\{\tau<t\}}P\Big].$$
It is obvious that $\{\tau<t\}=\{\tilde\tau<t\}$ and that $\tilde\tau$ is an $\mathbb{F}$-stopping time. Then, by (3.15),
$$E^P\big[X_\tau\mathbf{1}_{\{\tau<t\}}+Y_t\mathbf{1}_{\{\tau\ge t\}}\big] = E^P\Big[X_\tau\mathbf{1}_{\{\tau<t\}}+\sum_{i\ge1}Y_t\mathbf{1}_{E_i}\Big] \le E^P\Big[X_\tau\mathbf{1}_{\{\tau<t\}}+\sum_{i\ge1}E^{P_i}\big[X^{t,\cdot}_{\tau_i}\big]\mathbf{1}_{E_i}\Big]+\varepsilon+2\rho_0(\varepsilon)$$
$$= E^{\tilde P}\Big[X_{\tilde\tau}\mathbf{1}_{\{\tilde\tau<t\}}+\sum_{i\ge1}X_{\tilde\tau}\mathbf{1}_{E_i}\Big]+\varepsilon+2\rho_0(\varepsilon) = E^{\tilde P}\big[X_{\tilde\tau}\big]+\varepsilon+2\rho_0(\varepsilon) \le Y_0+\varepsilon+2\rho_0(\varepsilon),$$
which provides (3.14) by sending $\varepsilon\to0$.

We now derive the regularity of $Y$ in $t$.

Lemma 65. For each $\omega\in\Omega$ and $0\le t_1<t_2\le T$,
$$|Y_{t_1}(\omega)-Y_{t_2}(\omega)| \le C\,\bar\rho_0\big(d_\infty((t_1,\omega),(t_2,\omega))\big).$$

Proof. Denote $\delta := d_\infty\big((t_1,\omega),(t_2,\omega)\big)$. If $\delta\ge\frac18$, then clearly $|Y_{t_1}(\omega)-Y_{t_2}(\omega)|\le 2C_0\le C\bar\rho_0(\delta)$.
So we continue the proof assuming $\delta\le\frac18$. First, by setting $\tau=t_2$ in Lemma 64,
$$\Delta Y := Y_{t_2}(\omega)-Y_{t_1}(\omega) \le Y_{t_2}(\omega)-\mathcal{E}_{t_1}\big[Y^{t_1,\omega}_{t_2}\big] \le \mathcal{E}_{t_1}\big[Y_{t_2}(\omega)-Y_{t_2}(\omega\otimes_{t_1}B^{t_1})\big] \le \mathcal{E}_{t_1}\Big[\rho_0\big(d_\infty((t_2,\omega),(t_2,\omega\otimes_{t_1}B^{t_1}))\big)\Big] \le \mathcal{E}_{t_1}\Big[\rho_0\big(\delta+\|B^{t_1}\|_{t_1+\delta}\big)\Big].$$
On the other hand, by the inequality $X\le Y$, Lemma 64, and (2.6), we have
$$-\Delta Y \le \sup_{\tau\in\mathcal{T}^{t_1}}\mathcal{E}_{t_1}\Big[\Big(X^{t_1,\omega}_{t_2}+\rho_0\big(d_\infty((\tau,\omega\otimes_{t_1}B^{t_1}),(t_2,\omega\otimes_{t_1}B^{t_1}))\big)\Big)\mathbf{1}_{\{\tau<t_2\}}+Y^{t_1,\omega}_{t_2}\mathbf{1}_{\{\tau\ge t_2\}}\Big]-Y_{t_2}(\omega)$$
$$\le \mathcal{E}_{t_1}\Big[Y^{t_1,\omega}_{t_2}-Y_{t_2}(\omega)+\rho_0\big(d_\infty((t_1,\omega),(t_2,\omega\otimes_{t_1}B^{t_1}))\big)\Big] \le \mathcal{E}_{t_1}\Big[\rho_0\big(d_\infty((t_2,\omega),(t_2,\omega\otimes_{t_1}B^{t_1}))\big)+\rho_0\big(d_\infty((t_1,\omega),(t_2,\omega\otimes_{t_1}B^{t_1}))\big)\Big] \le 2\,\mathcal{E}_{t_1}\Big[\rho_0\big(\delta+\|B^{t_1}\|_{t_1+\delta}\big)\Big].$$
Hence
$$|\Delta Y| \le 2\,\mathcal{E}_{t_1}\Big[\rho_0\big(\delta+\|B^{t_1}\|_{t_1+\delta}\big)\Big] \le \mathcal{E}_{t_1}\Big[2\rho_0\big(\delta+\tfrac34\delta^{\frac13}\big)+2C_0\,\mathbf{1}_{\{\|B^{t_1}\|_{t_1+\delta}\ge\frac34\delta^{\frac13}\}}\Big].$$
Since $\delta+\frac34\delta^{\frac13}\le\delta^{\frac13}$ for $\delta\le\frac18$, this provides:
$$|\Delta Y| \le C\rho_0(\delta^{\frac13}) + C\delta^{-\frac23}\,\mathcal{E}_{t_1}\Big[\|B^{t_1}\|^2_{t_1+\delta}\Big] \le C\rho_0(\delta^{\frac13}) + C\delta^{\frac13} \le C\bar\rho_0(\delta). \quad (3.16)$$

We are now ready to prove the dynamic programming principle for stopping times.

Theorem 66. For any $(t,\omega)\in\Lambda$ and $\tau\in\mathcal{T}^t$, we have
$$Y_t(\omega) = \sup_{\tilde\tau\in\mathcal{T}^t}\mathcal{E}_t\Big[X^{t,\omega}_{\tilde\tau}\mathbf{1}_{\{\tilde\tau<\tau\}}+Y^{t,\omega}_\tau\mathbf{1}_{\{\tilde\tau\ge\tau\}}\Big].$$
Consequently, $Y$ is an $\mathcal{E}$-supermartingale on $[0,T]$.

Proof. First, following the arguments in Lemma 64 (ii) Step 1 and noting that Property (P2) of the family $\{\mathcal{P}_t\}$ holds for stopping times, one can prove straightforwardly that
$$Y_t(\omega) \ge \sup_{\tilde\tau\in\mathcal{T}^t}\mathcal{E}_t\Big[X^{t,\omega}_{\tilde\tau}\mathbf{1}_{\{\tilde\tau<\tau\}}+Y^{t,\omega}_\tau\mathbf{1}_{\{\tilde\tau\ge\tau\}}\Big].$$
On the other hand, let $\tau_k\downarrow\tau$ be such that each $\tau_k$ takes only finitely many values. By Lemma 64 one can easily show that Theorem 66 holds for $\tau_k$. Then for any $P\in\mathcal{P}_t$ and $\tilde\tau\in\mathcal{T}^t$, denoting $\tilde\tau_m := [\tilde\tau+\frac1m]\wedge T$, we have
$$E^P\Big[X^{t,\omega}_{\tilde\tau_m}\mathbf{1}_{\{\tilde\tau_m<\tau_k\}}+Y^{t,\omega}_{\tau_k}\mathbf{1}_{\{\tilde\tau_m\ge\tau_k\}}\Big] \le Y_t(\omega).$$
Sending $k\to\infty$, by Lemma 65 and the dominated convergence theorem (under $P$):
$$E^P\Big[X^{t,\omega}_{\tilde\tau_m}\mathbf{1}_{\{\tilde\tau_m\le\tau\}}+Y^{t,\omega}_\tau\mathbf{1}_{\{\tilde\tau_m>\tau\}}\Big] \le Y_t(\omega).$$
Since the process $X$ is right continuous in $t$, we obtain by sending $m\to\infty$:
$$Y_t(\omega) \ge E^P\Big[X^{t,\omega}_{\tilde\tau}\mathbf{1}_{\{\tilde\tau<\tau\}}+Y^{t,\omega}_\tau\mathbf{1}_{\{\tilde\tau\ge\tau\}}\Big],$$
which provides the required result by the arbitrariness of $P$ and $\tilde\tau$.
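In discrete time, the dynamic programming principle of Theorem 66 collapses to the finite backward recursion $Y_t=\max\big(X_t,\ \sup_{P}E^P[Y_{t+1}]\big)$. The following Python sketch is our own illustration on a binomial tree, not the thesis' continuous-time construction: the obstacle `payoff` and the two-element family `P_FAMILY` of one-step up-probabilities (standing in for $\mathcal{P}_t$) are hypothetical choices.

```python
# Discrete Snell envelope under a sublinear expectation E_t[.] = sup over a
# family of one-step transition probabilities, on a tree with steps +-1.

P_FAMILY = [0.4, 0.6]          # admissible up-probabilities (our toy "P_t")
N = 5                          # horizon

def payoff(t, s):
    # hypothetical obstacle process X_t(s), chosen only for illustration
    return max(s - 1, 0) + 0.1 * (N - t)

def snell():
    Y = {(N, s): payoff(N, s) for s in range(-N, N + 1)}   # Y_N = X_N
    for t in range(N - 1, -1, -1):
        for s in range(-t, t + 1):
            cont = max(p * Y[(t + 1, s + 1)] + (1 - p) * Y[(t + 1, s - 1)]
                       for p in P_FAMILY)                  # E_t[Y_{t+1}]
            Y[(t, s)] = max(payoff(t, s), cont)            # DPP step
    return Y

Y = snell()
# Y dominates the obstacle, and Y_t >= E_t[Y_{t+1}] holds by construction,
# the discrete counterpart of the E-supermartingale property:
assert all(Y[(t, s)] >= payoff(t, s)
           for t in range(N + 1) for s in range(-t, t + 1))
```

The first hitting time of $\{Y=X\}$ in this discrete model plays the role of $\tau^*$ in (2.7): stopping is optimal at the first node where the obstacle value is attained.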
3.2 Preparation for the $\mathcal{E}$-martingale property

If $Y_0=X_0$, then $\tau^*=0$ and obviously all the statements of Theorem 60 hold true. Therefore, we focus on the non-trivial case $Y_0>X_0$. We continue following the proof of the Snell envelope characterization in the standard linear expectation context. Let
$$\tau_n := \inf\big\{t\ge0: Y_t\le X_t+\tfrac1n\big\}\wedge T, \quad\text{for } n>(Y_0-X_0)^{-1}. \quad (3.17)$$

Lemma 67. The process $Y$ is an $\mathcal{E}$-martingale on $[0,\tau_n]$.

Proof. By the dynamic programming principle of Theorem 66,
$$Y_0 = \sup_{\tau\in\mathcal{T}}\mathcal{E}\big[X_\tau\mathbf{1}_{\{\tau<\tau_n\}}+Y_{\tau_n}\mathbf{1}_{\{\tau\ge\tau_n\}}\big].$$
For any $\varepsilon>0$, there exist $\tau_\varepsilon\in\mathcal{T}$ and $P_\varepsilon\in\mathcal{P}$ such that
$$Y_0 \le E^{P_\varepsilon}\big[X_{\tau_\varepsilon}\mathbf{1}_{\{\tau_\varepsilon<\tau_n\}}+Y_{\tau_n}\mathbf{1}_{\{\tau_\varepsilon\ge\tau_n\}}\big]+\varepsilon \le E^{P_\varepsilon}\Big[Y_{\tau_\varepsilon\wedge\tau_n}-\tfrac1n\mathbf{1}_{\{\tau_\varepsilon<\tau_n\}}\Big]+\varepsilon, \quad (3.18)$$
where we used the fact that $Y_t-X_t>\frac1n$ for $t<\tau_n$, by the definition of $\tau_n$. On the other hand, it follows from the $\mathcal{E}$-supermartingale property of $Y$ in Theorem 66 that $E^{P_\varepsilon}\big[Y_{\tau_\varepsilon\wedge\tau_n}\big] \le \mathcal{E}\big[Y_{\tau_\varepsilon\wedge\tau_n}\big] \le Y_0$, which implies by (3.18) that $P_\varepsilon[\tau_\varepsilon<\tau_n]\le n\varepsilon$. We then get from (3.18) that:
$$Y_0 \le E^{P_\varepsilon}\big[(X_{\tau_\varepsilon}-Y_{\tau_n})\mathbf{1}_{\{\tau_\varepsilon<\tau_n\}}+Y_{\tau_n}\big]+\varepsilon \le C\,P_\varepsilon[\tau_\varepsilon<\tau_n]+E^{P_\varepsilon}[Y_{\tau_n}]+\varepsilon \le \mathcal{E}[Y_{\tau_n}]+(Cn+1)\varepsilon.$$
Since $\varepsilon$ is arbitrary, we obtain $Y_0\le\mathcal{E}[Y_{\tau_n}]$. Similarly one can prove that $Y$ is an $\mathcal{E}$-submartingale on $[0,\tau_n]$. Together with the $\mathcal{E}$-supermartingale property of $Y$ established in Theorem 66, this implies that $Y$ is an $\mathcal{E}$-martingale on $[0,\tau_n]$.

By Lemma 65 we have
$$Y_0-\mathcal{E}[Y_{\tau^*}] = \mathcal{E}[Y_{\tau_n}]-\mathcal{E}[Y_{\tau^*}] \le C\,\mathcal{E}\Big[\bar\rho_0\big(d_\infty((\tau_n,\omega),(\tau^*,\omega))\big)\Big]. \quad (3.19)$$
Clearly $\tau_n\nearrow\tau^*$, and $\bar\rho_0\big(d_\infty((\tau_n,\omega),(\tau^*,\omega))\big)\searrow0$. However, in general the stopping times $\tau_n,\tau^*$ are not $\mathcal{P}$-quasicontinuous, so we cannot apply Proposition 56 (ii) to conclude $Y_0\le\mathcal{E}[Y_{\tau^*}]$. To overcome this difficulty, we need to approximate $\tau_n$ by continuous random variables.

3.3 Continuous approximation

The following lemma is crucial for us.

Lemma 68. Let $\underline\theta\le\theta\le\bar\theta$ be random variables on $\Omega$, with values in a compact interval $I\subset\mathbb{R}$, such that for some $\Omega_0\subset\Omega$ and $\delta>0$:
$$\underline\theta(\omega) \le \theta(\omega') \le \bar\theta(\omega) \quad\text{for all } \omega\in\Omega_0 \text{ and } \|\omega-\omega'\|\le\delta.$$
Then for any $\varepsilon>0$, there exist a uniformly continuous function $\hat\theta:\Omega\to I$ and an open subset $\Omega_\varepsilon\subset\Omega$ such that $\mathcal{C}[\Omega_\varepsilon^c]\le\varepsilon$ and $\underline\theta-\varepsilon\le\hat\theta\le\bar\theta+\varepsilon$ on $\Omega_\varepsilon\cap\Omega_0$.

Proof. If $I$ is a single point set, then $\theta$ is a constant and the result is obviously true.
Thus below we assume that the length $|I|>0$. Let $\{\omega_j\}_{j\ge1}$ be a dense sequence in $\Omega$. Denote $O_j := \{\omega\in\Omega: \|\omega-\omega_j\|<\frac\delta2\}$ and $\Omega_n := \cup_{j=1}^n O_j$. It is clear that $\Omega_n$ is open and $\Omega_n\uparrow\Omega$ as $n\to\infty$. Let $f_n:[0,\infty)\to[0,1]$ be defined as follows: $f_n(x)=1$ for $x\in[0,\frac\delta2]$, $f_n(x)=\frac{1}{n^2|I|}$ for $x\ge\delta$, and $f_n$ is linear on $[\frac\delta2,\delta]$. Define
$$\psi_n(\omega) := \lambda_n(\omega)\sum_{j=1}^n\theta(\omega_j)\varphi_{n,j}(\omega), \quad\text{where}\quad \varphi_{n,j}(\omega) := f_n(\|\omega-\omega_j\|) \quad\text{and}\quad \lambda_n := \Big[\sum_{j=1}^n\varphi_{n,j}\Big]^{-1}.$$
Then clearly $\psi_n$ is uniformly continuous and takes values in $I$. For each $\omega\in\Omega_n\cap\Omega_0$, the set $J_n(\omega) := \{1\le j\le n: \|\omega-\omega_j\|\le\delta\}$ is nonempty and $\lambda_n(\omega)\le1$. Then, by our assumption,
$$\psi_n(\omega)-\bar\theta(\omega) = \lambda_n(\omega)\Big[\sum_{j\in J_n(\omega)}[\theta(\omega_j)-\bar\theta(\omega)]\varphi_{n,j}(\omega)+\sum_{j\notin J_n(\omega)}[\theta(\omega_j)-\bar\theta(\omega)]\varphi_{n,j}(\omega)\Big] \le \lambda_n(\omega)\sum_{j\notin J_n(\omega)}|I|\,\varphi_{n,j}(\omega) \le \lambda_n(\omega)\sum_{j\notin J_n(\omega)}\frac{1}{n^2} \le \frac1n.$$
Similarly one can show that $\underline\theta-\frac1n\le\psi_n$ on $\Omega_n\cap\Omega_0$. Finally, since $\Omega_n\uparrow\Omega$ as $n\to\infty$, it follows from Proposition 56 (i) that $\lim_{n\to\infty}\mathcal{C}[\Omega_n^c]=0$.

3.4 Proof of Theorem 60

We proceed in two steps.

Step 1. For each $n$, let $\delta_n>0$ be such that $3C\bar\rho_0(\delta_n)\le\frac{1}{n(n+1)}$ for the constant $C$ in Lemma 65. Now for any $\omega$ and $\omega'$ such that $\|\omega-\omega'\|_T\le\delta_n$, by (2.6), the uniform continuity of $Y$ in Lemma 64, and the fact that $\rho_0\le\bar\rho_0$, we have
$$(Y-X)_{\tau_{n+1}(\omega)}(\omega') \le (Y-X)_{\tau_{n+1}(\omega)}(\omega)+3C\bar\rho_0(\delta_n) \le \frac{1}{n+1}+\frac{1}{n(n+1)} = \frac1n.$$
Then $\tau_n(\omega')\le\tau_{n+1}(\omega)$. Since $3C\bar\rho_0(\delta_n)\le\frac{1}{n(n+1)}\le\frac{1}{n(n-1)}$, similarly we have $\tau_{n-1}(\omega)\le\tau_n(\omega')$. We may then apply Lemma 68 with $\underline\theta=\tau_{n-1}$, $\theta=\tau_n$, $\bar\theta=\tau_{n+1}$, $\delta=\delta_n$, and $\Omega_0=\Omega$. Thus, there exist an open set $\Omega_n$ and a continuous random variable $\tilde\tau_n$ valued in $[0,T]$ such that
$$\mathcal{C}[\Omega_n^c]\le2^{-n} \quad\text{and}\quad \tau_{n-1}-2^{-n}\le\tilde\tau_n\le\tau_{n+1}+2^{-n} \ \text{ on } \Omega_n.$$

Step 2. By Lemma 67, for each $n$ large, there exists $P_n\in\mathcal{P}$ such that
$$Y_0 = \mathcal{E}[Y_{\tau_n}] \le E^{P_n}[Y_{\tau_n}]+2^{-n}.$$
By Property (P1), $\mathcal{P}$ is weakly compact. Then there exist a subsequence $\{n_j\}$ and $P^*\in\mathcal{P}$ such that $P_{n_j}$ converges weakly to $P^*$. Now for any $n$ large and any $n_j\ge n$, note that $\tau_{n_j}\ge\tau_n$.
Since Y is anE-supermartingale and thus aP n j -supermartingale, we have Y 0 2 n j E Pn j Y n j E Pn j Y n E Pn j Y ~ n +E Pn j jY ~ n Y n j : (3.20) By the boundedness of Y in (3.11) and the uniform continuity of Y in Lemma 65, we have jY ~ n Y n j C 0 d 1 (~ n ;!); ( n ;!) C 0 d 1 (~ n ;!); ( n ;!) 1 n1 \ n+1 +C1 c n1 [ c n+1 : Notice that ~ n1 2 1n n ~ n+1 + 2 1n on n1 \ n+1 . Then jY ~ n Y n j C 0 d 1 (~ n ;!); (~ n1 2 1n ;!) 1 n1 \ n+1 +C 0 d 1 (~ n ;!); (~ n+1 + 2 1n ;!) 1 n1 \ n+1 +C1 c n1 [ c n+1 C 0 d 1 (~ n ;!); (~ n1 2 1n ;!) +C 0 d 1 (~ n ;!); (~ n+1 + 2 1n ;!) +C1 c n1 [ c n+1 : Then (3.20) together with the estimateC[ c n ] 2 n lead to Y 0 2 n j E Pn j Y ~ n +CE Pn j h 0 d 1 (~ n ;!); (~ n1 2 1n ;!) i +CE Pn j h 0 d 1 (~ n ;!); (~ n+1 + 2 1n ;!) i +C2 n : Notice that Y and ~ n1 ; ~ n ; ~ n+1 are continuous. Send j!1, we obtain Y 0 E P Y ~ n +CE P h 0 d 1 (~ n ;!); (~ n1 2 1n ;!) i +CE P h 0 d 1 (~ n ;!); (~ n+1 2 1n ;!) i +C2 n : (3.21) Since P n P j~ n n j 2 n P n C j~ n n j 2 n P n 2 n <1 and n " , by the Borel-Cantelli lemma underP we see that ~ n ! ,P -a.s. Sendn!1 in (3.21) and apply the dominated convergence theorem under P , we obtain Y 0 E P Y E[Y ]: 62 Similarly Y t (!)E t [Y t;! ] for t < (!). By theE-supermartingale property of Y estab- lished in Theorem 66, this implies that Y is anE-martingale on [0; ]. 4 Random maturity optimal stopping In this section, we prove Theorem 62. The main idea follows that of Theorem 60. However, since b X h is not continuous in !, the estimates become much more involved. Throughout this section, let X, h, O, t 0 , b X := b X h , b Y := b Y h , andb be as in Theorem 62. Assumptions 58 and 61 will always be in force. We shall emphasize when the additional Assumption 61 is needed, and we x the constant L as in Assumption 61 (i). Assume jXjC 0 , and without loss of generality that 0 2C 0 and L 1. It is clear that j b YjC 0 , b X b Y , and b Y h = b X h =X h . 
(4.22) By (2.6) and the fact that X has positive jumps, one can check straightforwardly that, b X(t;!) b X(t 0 ;! 0 ) 0 d 1 ((t;!); (t 0 ;! 0 )) for tt 0 ; t h(!); t 0 h(! 0 ) (4.23) except the case t =t 0 = h(! 0 )< h(!)t 0 : In particular, b X(t;!) b X(t 0 ;!) 0 d 1 ((t;!); (t 0 ;!)) whenever tt 0 h(!): (4.24) Moreover, we dene 1 () := 0 ()_ h 0 (L 1 ) 1 3 + 1 3 i ; 2 () := [ 1 () +]_ [ 1 ( 1 3 ) + 1 3 ]; (4.25) and in this section, the generic constant C may depend on L as well. 4.1 Dynamic programming principle We start with the regularity in !. Lemma 69. For any t< h(!)^ h(! 0 ) we have: j b Y t (!) b Y t (! 0 )jC 1 k!! 0 k t : 63 To motivate our proof, we rst follow the arguments in Lemma 64 (i) and see why it does not work here. Indeed, note that b Y t (!) b Y t (! 0 ) sup 2T t sup P2Pt E P h b X t;! ^h t;! b X t;! 0 ^h t;! 0 i : Since we do not have h t;! h t;! 0 , we cannot apply (4.23) to obtain the required estimate. Proof. Let2T t andP2P t . Denote := 1 L k!! 0 k t ,t := [t+]^t 0 and ~ B t s :=B t s+ B t t for s t. Set 0 (B t ) := [( ~ B t ) +]^t 0 , then 0 2T t . Moreover, by Assumption 61 and Property (P3), we may choose P 0 2P t dened as follows: P 0 := 1 (! t ! 0 t ), P 0 := 0 on [t;t ], and theP 0 -distribution of ~ B t is equal to theP-distribution of B t . We claim that I :=E P [ b X t;! ^h t;! ]E P 0 [ b X t;! 0 0 ^h t;! 0 ] C 1 (L); (4.26) Then E P [ b X t;! ^h t;! ] b Y t (! 0 )E P [ b X t;! ^h t;! ]E P 0 [ b X t;! 0 0 ^h t;! 0 ] C 1 (L), and it follows from the arbitrariness of P2P t and 2T t that b Y t (!) b Y t (! 0 )C 1 (L). By exchanging the roles of ! and ! 0 , we obtain the required estimate. It remains to prove (4.26). Denote ~ ! 0 s := ! 0 s 1 [0;t) (s) + [! 0 t + P 0 (st)]1 [t;T ] (s): Since t < h(!)^ h(! 0 ), we have ! t ;! 0 t 2 O. By the convexity of O, this implies that ~ ! 0 s 2O for s2 [t;t ], and thus h t;! 0 (B t ) = (h t;! ( ~ B t ) +)^t 0 ;P 0 a.s. Therefore, E P 0 [ b X t;! 0 0 ^h t;! 0 ] = E P 0 h b X 0 (B t )^ h t;! 
0 (B t );! 0 t B t i (4.27) = E P 0 h b X [( ~ B t ) +]^ [h t;! ( ~ B t ) +]^t 0 ; ~ ! 0 t ~ B t i = E P h b X [(B t ) +]^ [h t;! (B t ) +]^t 0 ; ~ ! 0 t B t i ; while E P [ b X t;! ^h t;! ] = E P h b X (B t )^ h t;! (B t );! t B t i : 64 Notice that, whenever (B t )^ h t;! (B t ) = [(B t ) +]^ [h t;! (B t ) +]^t 0 , we have (B t )^ h t;! (B t ) =t 0 . This excludes the exceptional case in (4.23). Then it follows from (4.27) and (4.23) that I E P h 0 +k(! t B t ) ^(B t )^h t;! (B t ) (~ ! 0 t B t ) ^[(B t )+]^[h t;! (B t )+]^t 0 k t 0 i : Note that, denoting :=(B t )^ h t;! (B t ), k(! t B t ) ^(B t )^h t;! (B t ) (~ ! 0 t B t ) ^[(B t )+]^[h t;! (B t )+]^t 0 k t 0 k! t B t ~ ! 0 t B t k t 0 + sup 0r j(! t B t ) +r (! t B t ) j h k!! 0 k t i _ h sup tst j! t +B t s ~ ! 0 s j i _ h sup t st 0 j! t +B t s ~ ! 0 t B t s j i + sup 0r j(! t B t ) +r (! t B t ) j 2L +kB t k t + sup t st 0 jB t s B t s j + sup 0r jB t +r B t j: Since L 1, we have I E P h 0 3 +kB t k t + sup t st 0 jB t s B t s j + sup 0r jB t +r B t j i : If 1 8 , thenI 2C 0 C 1 (L). We then continue assuming 1 8 , and thus 3 + 1 4 1 3 1 3 . Therefore, I 0 ( 1 3 ) +CP kB t k t + sup t st 0 jB t s B t s j + sup 0r jB t +r B t j 1 4 1 3 0 ( 1 3 ) +C 8 3 E P h kB t k 8 t + sup t st 0 jB t s B t s j 8 + sup 0r jB t +r B t j 8 i 0 ( 1 3 ) +C 4 3 +C 8 3 E P h sup t st 0 jB t s B t s j 8 i : Set t =s 0 <<s n =t 0 such that s i+1 s i 2, i = 0; ;n 1. Then E P h sup t st 0 jB t s B t s j 8 i =E P h max 0in1 sup s i ss i+1 jB t s B t s j 8 i n1 X i=0 E P h sup s i ss i+1 [jB t s B t s i j +jB t s B t s i j] 8 i C n1 X i=0 (s i+1 s i +) 4 C 1 4 =C 3 : 65 Thus I 0 ( 1 3 ) +C 4 3 +C 8 3 3 0 ( 1 3 ) +C 1 3 C 1 (L), proving (4.26) and hence the lemma. We next show that the dynamic programming principle holds along deterministic times. Lemma 70. Let t 1 < h(!) and t 2 2 [t 1 ;t 0 ]. We have: b Y t 1 (!) = sup 2T t 1 E t 1 h b X t 1 ;! ^h t 1 ;! 1 f^h t 1 ;! <t 2 g + b Y t 1 ;! t 2 1 f^h t 1 ;! t 2 g i : Proof. 
When t 2 =t 0 , the lemma coincides with the denition of b Y . Without loss of gener- ality we assume (t 1 ;!) = (0; 0) and t :=t 2 <t 0 . First, follow the arguments in Lemma 64 (ii) Step 1, one can easily prove b Y 0 sup 2T E h b X ^h 1 f^h<tg + b Y t 1 f^htg i : (4.28) To show that equality holds in the above inequality, x arbitraryP2P and2T satisfying h (otherwise reset as ^ h), we shall prove E P h b X 1 f<tg + b Y t 1 ftg i b Y 0 : Since b Y h = b X h , this amounts to show that: E P h b X 1 f<tg[fhtg + b Y t 1 ft;h>tg i b Y 0 : (4.29) We adapt the arguments in Lemma 64 (ii) Step 2 to the present situation. Fix 0<t 0 t. LetfE i g i1 be anF t measurable partition of the eventf t; h > tg2F t such that k! ~ !kL for all !; ~ !2E i . Fix an ! i 2E i for each i. By the denition of b Y we have b Y t (! i ) E P i h b X t;! i i ^h t;! i i + for some ( i ;P i )2T t P t : (4.30) As in Lemma 69, we set t := t + < t 0 , ~ B t s := B t s+ B t t for s t, and ~ i (B t ) := [ i ( ~ B t ) +]^t 0 . Then ~ i 2T t . Moreover by Assumption 61 and Property (P3), for each !2E i , we may deneP i;! 2P t as follows: P i;! := 1 (! i t ! t ), P i;! := 0 on [t;t ], and the P i;! -distribution of ~ B t is equal to theP i -distribution of B t . By (4.26), we have E P i [ b X t;! i i ^h t;! i ]E P i;! [ b X t;! ~ i ^h t;! ] C 1 (L): (4.31) 66 Then by Lemma 69 and (4.30), (4.31) we have b Y t (!) b Y t (! i ) +C 1 (L) E P i;! [ b X t;! ~ i ^h t;! ] + +C 1 (L); for all !2E i : (4.32) We next dene: ~ := 1 f<tg[fhtg + X i1 1 E i ~ i (B t ); and then f <tg[fhtg =f~ <tg[fhtg: Since h, we see thatf < tg[fh tg =f < tg[f = h = tg, and thus it is clear that ~ 2T . Moreover, we claim that there exists ~ P2P such that ~ P =P onF t and the r.c.p.d. (4.33) ( ~ P) t;! =P i;! forP-a.e. !2E i ;i 1; ( ~ P) t;! =P t;! forP-a.e. !2f <tg[fhtg: Then, by (4.32) we have b Y t (!) E ( ~ P) t;! b X t;! (~ ^h) t;! + +C 1 (L); P-a.e. 
!2ft; h>tg; (4.34) and therefore: E P h b X 1 f<tg[fhtg + b Y t 1 ft;h>tg i E ~ P h b X ~ ^h 1 f<tg[fhtg + b X ~ ^h 1 ft;h>tg i + +C 1 (L) = E ~ P h b X ~ ^h i + +C 1 (L) b Y 0 + +C 1 (L); which implies (4.29) by sending ! 0. Then the reverse inequality of (4.28) follows from the arbitrariness ofP and . It remains to prove (4.33). For any " > 0 and each i 1, there exists a partition fE i j ;j 1g ofE i such thatk!! 0 k t " for any!;! 0 2E i j . Fix an! ij 2E i j for each (i;j). By Property (P3) we may dene ~ P " 2P by: ~ P " :=P t h X i1 X j1 P i;! ij 1 E i j +P1 f<tg[fhtg i : By Property (P1),P is weakly compact. Then ~ P " has a weak limit ~ P2P as "! 0. We now show that ~ P satises all the requirements in (4.33). Indeed, for any partition 0 =s 0 <<s m =t<s m+1 <<s M =t <s M+1 <<s N =T and any bounded 67 and uniformly continuous function' :R Nd !R, let :=' B s 1 B s 0 ; ;B s N B s N1 . Then, denoting s k :=s k+1 s k , ! k :=! s k ! s k1 , we see that E P i;! [ t;! ] = i t (!); E P i;! ij [ t;! ] = i;j t (!); where: i t (!) := E P i h ' (! k ) 1km ; ! i t ! t (s k ) m+1kM ; (B s k B s k1 ) M+1kN i ; i;j t (!) := E P i h ' (! k ) 1km ; ! i t ! ij t (s k ) m+1kM ; (B s k B s k1 ) M+1kN i : Let denote the modulus of continuity function of '. Then E P i;! ij [ t;! ]E P i;! [ t;! ] (") for all !2E i j ; and thus E ~ P " []E P 1 f<tg[fhtg + X i1 i t 1 E i = E P h X i;j1 E P i;! ij [ t; ]1 E i j i E P h X i;j1 i t 1 E i j i E P h X i;j1 E P i;! ij [ t; ]E P i; [ t; ] 1 E i j i E P h X i;j1 (")1 E i j i ("): By sending "! 0, we obtain E ~ P [] =E P 1 f<tg[fhtg + P i1 i t 1 E i , which proves (4.33) by the arbitrariness of . We now prove the regularity in the t-variable. Recall the 2 dened in (4.25). Lemma 71. Let 0t 1 < h(! 1 ), 0t 2 < h(! 2 ), and t 1 t 2 . Then we have: j b Y t 1 (! 1 ) b Y t 2 (! 2 )j C h 1 + 1 d(! 1 t 1 ;O c ) i 2 d 1 (t 1 ;! 1 ); (t 2 ;! 2 ) : Proof. Without loss of generality we assumet 1 <t 2 . Also, in view of the uniform continuity in ! 
of Lemma 69, it suces to prove the lemma in the case ! 1 =! 2 =!. Denote := d 1 (t 1 ;!); (t 2 ;!) and " := d(! t 1 ;O c ). For 1 8 , we havej b Y t 1 (!) b Y t 2 (!)j 2C 0 C" 1 2 (). So we assume in the rest of this proof that < 1 8 . 68 First, by Assumption 61, we may consider the measureP2P t 1 such that P t := 0; P t := 0, t2 [t 1 ;t 2 ]. Then, by setting := t 0 in Lemma 70, we see that b Y t 1 (!)E t 1 [ b Y t 1 ;! t 2 ] E P [ b Y t 1 ;! t 2 ] = b Y t 2 (! ^t 1 ): Note that h(! ^t 1 ) =t 0 >t 2 . Thus, by Lemma 69, b Y t 2 (!) b Y t 1 (!) C 1 d 1 (t 2 ;! ^t 1 ); (t 2 ;!) C 1 ()C 2 (): (4.35) Next, for arbitrary 2T t 1 , noting that b X b Y we have I() := E t 1 h b X t 1 ;! ^h t 1 ;! 1 f^h t 1 ;! <t 2 g + b Y t 1 ;! t 2 1 f^h t 1 ;! t 2 g i b Y t 2 (!) = E t 1 h b X t 1 ;! 1 f<h t 1 ;! ^t 2 g + b X t 1 ;! h t 1 ;! 1 fh t 1 ;! <t 2 ;h t 1 ;! g + b Y t 1 ;! t 2 1 f^h t 1 ;! t 2 g i b Y t 2 (!) E t 1 h b X t 1 ;! b X t 1 ;! h t 1 ;! ^t 2 1 f<h t 1 ;! ^t 2 g + b Y t 1 ;! h t 1 ;! ^t 2 i b Y t 2 (!) E t 1 h b X t 1 ;! b X t 1 ;! h t 1 ;! ^t 2 1 f<h t 1 ;! ^t 2 g i +E t 1 h j b Y t 1 ;! t 2 b Y t 2 (!)j1 fh t 1 ;! >t 2 g i +CC t 1 h t 1 ;! t 2 : By (4.24) and Lemma 69 we have I() E t 1 h 0 d 1 ((t 1 ;!); (t 2 ;! t 1 B t 1 )) i +CE t 1 h 1 k!! t 1 B t 1 k t 2 i +CC t 1 kB t 1 k t 2 " E t 1 h 0 +kB t 1 k t 2 i +CE t 1 h 1 +kB t 1 k t 2 i +C" 1 E t 1 h kB t 1 k t 2 i C[1 +" 1 ]E t 1 h 1 +kB t 1 k t 2 i : Since 1 8 , following the proof of (3.16) we have I() C[1 +" 1 ] h 1 ( 1 3 ) + 1 3 i C[1 +" 1 ] 2 (): By the arbitrariness of and the dynamic programming principle of Theorem 72, we obtain b Y t 1 (!) b Y t 2 (!)C" 1 2 (), and the proof is complete by (4.35). Applying Lemmas 69, 70, and 71, and following the same arguments as those of Theorem 66, we establish the dynamic programming principle in the present context. Theorem 72. Let t< h(!) and 2T t . Then b Y t (!) = sup ~ 2T t E t h b X t;! ~ ^h t;! 1 f~ ^h t;! <g + b Y t;! 1 f~ ^h t;! 
g i : 69 Consequently, b Y is aEsupermartingale on [0; h]. By Lemma 71, b Y is continuous fort2 [0; h). Moreover, since ^ Y is anE-supermartingale, we see that ^ Y h exists. However, the following example shows that in general b Y may be discontinuous at h. Example 73. Set X t (!) := t and let h correspond to O and t 0 . Clearly b X = X, b Y h = h and b Y t (!)t 0 . However, for anyt< h(!), set :=t 0 andP2P t such that P = 0; P = 0, we see that b Y t (!)E P h X(h(! t B t );! t B t ) i = X(h(! ^t );! ^t ) = h(! ^t ) = t 0 . That is, b Y t (!) =t 0 . Thus b Y is discontinuous at h whenever h(!)<t 0 . This issue is crucial for our purpose, and we will discuss more in Subsection 4.4 below. 4.2 Continuous approximation of the hitting times Similar to the proof of Theorem 60, we need to apply some limiting arguments. We therefore assume without loss of generality that b Y 0 > b X 0 and introduce the stopping times: for any m 1 and n> ( b Y 0 b X 0 ) 1 , h m := inf t 0 : d(! t ;O c ) 1 m ^ (t 0 1 m ); n := infft 0 : b Y t b X t 1 n g: (4.36) Here we abuse the notation slightly by using the same notation n as in (3.17). Our main task in this subsection is to build an approximation of h m and n by continuous random variables. This will be obtained by a repeated use of Lemma 68. We start by a continuous approximation of the sequence (h m ) m1 dened in (4.36). Lemma 74. For all m 2: (i) h m1 (!) h m (! 0 ) h m+1 (!), wheneverk!! 0 k t 0 1 m(m+1) , (ii) there exists an open subset m 0 , and a uniformly continuous ^ h m such that C ( m 0 ) c < 2 m and h m1 2 m ^ h m h m+1 + 2 m on m 0 ; (iii) there exist m > 0 such thatj^ h m (!) ^ h m (! 0 )j 2 m wheneverk!! 0 k t 0 m , and: C ( ^ m 0 ) c 2 m where ^ m 0 :=f!2 m 0 : d(!; [ m 0 ] c )> m g: 70 Proof. Notice that (ii) is a direct consequence of (i) obtained by applying Lemma 68 with " = 2 m . To prove (i), we observe that fork!! 0 k t 0 1 m(m+1) and t< h m (! 0 ), we have d(! t ;O c ) d(! 
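Before turning to the proof, the behaviour of the approximating hitting times $h_m$ in (4.36) can be checked numerically. The Python sketch below is our own illustration with $O=(-1,1)\subset\mathbb{R}^1$, so that $d(x,O^c)=\max(1-|x|,0)$; the grid step and sample path are hypothetical choices.

```python
# Discrete sketch of h_m = inf{t : d(omega_t, O^c) <= 1/m} ^ (t0 - 1/m)
# from (4.36), with O = (-1, 1) in R^1.

def h_m(path, dt, m, t0):
    for i, x in enumerate(path):
        if max(1.0 - abs(x), 0.0) <= 1.0 / m:     # d(omega_t, O^c) <= 1/m
            return min(i * dt, t0 - 1.0 / m)
    return t0 - 1.0 / m

dt, t0 = 0.1, 1.0
omega = [0.0, 0.3, 0.7, 0.85, 0.95, 0.99]         # a path drifting toward O^c

# h_m is nondecreasing in m (smaller threshold 1/m, larger cap t0 - 1/m):
times = [h_m(omega, dt, m, t0) for m in range(2, 12)]
assert all(a <= b for a, b in zip(times, times[1:]))

# sandwich of Lemma 74 (i): perturb omega by less than 1/(m(m+1)) in sup norm
m = 4
eps = 1.0 / (m * (m + 1)) / 2                     # well within the allowed gap
omega2 = [x + eps for x in omega]
assert h_m(omega, dt, m - 1, t0) <= h_m(omega2, dt, m, t0) <= h_m(omega, dt, m + 1, t0)
```

The monotonicity in $m$ and the sandwich inequality are the two discrete analogues of what Lemma 74 asserts in the continuous setting.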
0 t ;O c ) 1 m(m + 1) > 1 m 1 m(m + 1) = 1 m + 1 : This shows that h m (! 0 ) h m+1 (!) wheneverk!! 0 k t 0 1 m(m+1) . Similarly, h m1 (!) h m (! 0 ) wheneverk!! 0 k t 0 1 m(m1) , and the inequality (i) follows. It remains to prove (iii). The rst claim follows from the uniform continuity of ^ h m . For each > 0, dene h : [0;1)! [0; 1] as follows: h (x) := 1 for x, h (x) = 0 for x 2, and h is linear on [; 2]. (4.37) Then the map !7! (!) := h (d(!; [ m 0 ] c )) is continuous, and # 1 [ m 0 ] c as # 0. Applying Proposition 56 (ii) we have lim !0 E[ ] =E 1 ( m 0 ) c =C ( m 0 ) c < 2 m : By denition of ^ m 0 , notice that 1 ( ^ m 0 ) c m . ThenC ( ^ m 0 ) c E[ m ], and (iii) holds true for suciently small m . We next derive a continuous approximation of the sequences m n := n ^ ^ h m ; (4.38) where n and ^ h m are dened in (4.36) and Lemma 74 (ii), respectively. Lemma 75. For all m 2, n > ( b Y 0 b X 0 ) 1 , there exists an open subset m n and a uniformly continuous map ^ m n such that m n1 2 1m 2 n ^ m n m n+1 + 2 1m + 2 n on ^ m 0 \ m n ; andC ( m n ) c 2 n : Proof. Fixm, and recall the modulus of continuity 1 introduced in (4.25). For each n, let 0 < m n < m such that ( 0 +C 1 )( m n ) 1 n(n+1) , where C is the constant in Lemma 69 . We shall prove ( n1 ^ ^ h m )(!) 2 1m ( n ^ ^ h m )(! 0 ) ( n+1 ^ ^ h m )(!) + 2 1m (4.39) whenever !2 ^ m 0 ; k!! 0 k t 0 m n : 71 Then the required statement follows from Lemma 68 with " = 2 n . We shall prove only the right inequality of (4.39). The left one can be proved similarly. Let !;! 0 be as in (4.39). First, by Lemma 74 (iii) we have ! 0 2 m 0 and ^ h m (! 0 ) ^ h m (!) + 2 m (4.40) We now prove the right inequality of (4.39) in three cases. Case 1. if n+1 (!) ^ h m (! 0 ) 2 m , then ^ h m (! 0 ) ( n+1 ^ ^ h m )(!) + 2 m and thus the result is true. Case 2. If n+1 (!) = h(!), then by Lemma 74 (ii) we have ^ h m (!) h m+1 (!) + 2 m n+1 (!) + 2 m , and thus ^ h m (! 0 ) ^ h m (!) + 2 m n+1 (!) + 2 1m . 
This, together with (4.40), proves the desired inequality. Case 3. We now assume n+1 (!)< ^ h m (! 0 ) 2 m and n+1 (!)< h(!). By Lemma 74 (ii) we have n+1 (!) < h m+1 (! 0 ), and thus n+1 (!) < h(! 0 ). Then it follows from Lemma 69 that (YX) n+1 (!) (! 0 ) (YX) n+1 (!) (!) + ( 0 +C 1 )( m n ) 1 n + 1 + 1 n(n + 1) = 1 n : That is, n (! 0 ) n+1 (!). This, together with (4.40), proves the desired inequality. For our nal approximation result, we introduce the notations: n := n ^ h n ; n := ^ n1 n1 2 3n ; n := ^ n+1 n+1 + 2 1n ; (4.41) and n := ^ n1 0 \ n1 n1 \ ^ n+1 0 \ n+1 n+1 : (4.42) Lemma 76. For alln ( b Y 0 b X 0 ) 1 _2, n ; n are uniformly continuous, and n n n on n . Proof. This is a direct combination of Lemmas 74 and 75. 4.3 Proof of Theorem 62 We rst prove theE-martingale property under an additional condition. 72 Lemma 77. Let 2T such that andE[Y ] =E[Y ] (in particular if < h). Then b Y is anE-martingale on [0;]. Proof. If b Y 0 = b X 0 , then b = 0 and obviously the statement is true. We then assume b Y 0 > b X 0 , and prove the lemma in several steps. Step 1 Letn be suciently large so that 1 n < b Y 0 b X 0 . Follow the same arguments as that of Lemma 67 , one can easily prove: b Y is an E martingale on [0; n ]: (4.43) Step 2 Recall the sequence of stopping times ( n ) n1 introduced in (4.41). By Step 1 we have b Y 0 =E[ b Y n ]. Then for any"> 0, there existsP n 2P such that b Y 0 "<E Pn [ b Y n ]. Since P is weakly compact, there exists subsequencefn j g and P 2P such that P n j converges weakly toP . Now for any n and n j n, since Y is a supermartingale under eachP n j and ( n ) n1 is increasing, we have b Y 0 " < E Pn j b Y n j E Pn j b Y n : (4.44) Our next objective is to send j%1, for xed n, and use the weak convergence of P n j towards P . To do this, we need to approximate b Y n with continuous random variables. Denote n (!) :=h n inf 0t n (!) d(! 
t ;O c ) with h n (x) := 1^ [(n + 3)(n + 4)x (n + 3)] + : (4.45) Then n is continuous in !, and f n > 0g inf 0t n (!) d(! t ;O c )> 1 n+4 f n < h n+4 g: (4.46) In particular, this implies that b Y n n and b Y n n are continuous in !. We now decompose the right hand-side term of (4.44) into: b Y 0 " E Pn j h b Y n + ( b Y n b Y n )1 n n + (1 n ) + ( b Y n b Y n )1 ( n ) c : Note that n n n on n . Then b Y 0 " E Pn j h b Y n + sup n t n ( b Y t b Y n ) n i +CC[ n < 1] +CC ( n ) c : 73 Send j!1, we obtain b Y 0 "E P h n b Y n i +E P h n sup n t n ( b Y t b Y n ) i +CC[ n < 1] +CC[( n ) c ]: (4.47) Step 3. In this step we show that lim n!1 E P h n sup n t n ( b Y t b Y n ) i = lim n!1 C[ n < 1] = lim n!1 C[( n ) c ] = 0: (4.48) (i) First, by the denition of n in (4.42) together with Lemmas 74 (iii) and 75, it follows thatC ( n ) c C2 n ! 0 as n!1. (ii) Next, notice that f n < 1g = inf 0t n (!) d(! t ;O c )< 1 n+3 f n > h n+3 g: Moreover, by (4.41) and Lemma 76, n = ^ n+1 n+1 + 2 1n = n+2 + 2 2n n+2 + 2 2n h n+2 + 2 2n ; on n+2 : Then f n < 1g ( n+2 ) c [fh n+3 < h n+2 + 2 2n g ( n+2 ) c [ n sup h n+2 th n+2 +2 2njB t B h n+2 j 1 (n+2)(n+3) o : Then one can easily see thatC[ n < 1]! 0, as n!1. (iii) Finally, it is clear that n !b , n !b . Recall that b Y b exists. By (4.46), we see that n sup n t n ( b Y t b Y n )! 0, P -a.s. as n!1. Then by applying the dominated convergence theorem underP we obtain the rst convergence in (4.48). Step 4. By the dominated convergence theorem under P we obtain lim n!1 E P [ n b Y n ] = E P [ b Y b ]: This, together with (4.47) and (4.48), implies that b Y 0 E P [ b Y b ] +": Note that b Y is anP -supermartingale and b , then b Y 0 E P [ b Y ] +": 74 Since " is arbitrary, we obtain b Y 0 E[ b Y ], and thus by the assumptionE[ b Y ] =E[ b Y ] we have b Y 0 E[ b Y ]. This, together with the fact that b Y is aE-supermartingale, implies that b Y 0 =E[ b Y ]: (4.49) Similarly, one can prove b Y t (!) =E t [ b Y t;! t;! 
] for t<(!), and thus b Y :^ is aE-martingale. In light of Lemma 77, the following result is obviously important for us. Proposition 78. It holds thatE[ b Y b ] =E[ b Y b ]. We recall again that b Y b = b Y b whenever b < h. So the only possible discontinuity is at h. The proof of Proposition 78 is reported in Subsection 4.4 below. Let us rst show how it allows to complete the Proof of Theorem 62 By Lemma 77 and Proposition 78, b Y is anE-martingale on [0;b ]. Moreover, since b X b = b Y b , then b Y 0 =E[ b X b ] and thusb is an optimal stopping time. 4.4 EContinuity of b Y at the random maturity This subsection is dedicated to the proof of Proposition 78. We rst reformulate some pathwise properties established in previous subsections. For that purpose, we introduce the following additional notation: for any P2P, 2T , and E2F P(P;;E) := n P 0 2P :P 0 =P P 0 1 E +P1 E c o ; P(P;) :=P(P;; ): (4.50) That is,P 0 2P(P;;E) meansP 0 =P onF and (P 0 ) ;! =P ;! forP-a.e. !2E c . The rst result corresponds to Theorem 72. Lemma 79. Let P2P, 1 ; 2 2T , and E2F 1 . Assume 1 2 h, and 1 < h on E. Then for any "> 0, there exist P " 2P(P; 1 ;E) and " 2T with values in [ 1 ; 2 ], s.t. E P h b Y 1 1 E i E P" h b X " 1 f"< 2 g + b Y 2 1 f"= 2 g 1 E i +": Proof. Let n 1 be a sequence of stopping times such that n 1 # and each n 1 takes only nitely many values. Applying Lemma 71 together with the dominated convergence Theorem under P, we see that lim n!1 E P h j b Y n 1 ^ 2 b Y 1 j i = 0: Fix n such that E P h j b Y n 1 ^ 2 b Y 1 j i " 2 : (4.51) 75 Assume n 1 takes valuesft i ;i = 1; ;mg, and for each i, denote E i := E\f n 1 = t i < 2 g2F t i . By (4.34), there exists ~ i 2T and ~ P i 2P(P;t i ) such that ~ i t i on E i and b Y t i E ~ P i t i h b X ~ i ^h i + " 2 ; P-a.s. on E i : (4.52) HereE ~ P i t i [] :=E ~ P i [jF t i ] denotes the conditional expectation. 
Dene ~ := 2 1 E c [f 2 n 1 g + m X i=1 ~ i 1 E i ; ~ P :=P1 E c [f 2 n 1 g m X i=1 ~ P i 1 E i : (4.53) Then one can check straightforwardly that ~ 2T and ~ 2 ^ n 1 ; (4.54) and ~ P2P(P; 2 ^ n 1 ;E)P(P; 1 ;E). Moreover, by (4.52) and (4.53), E ~ P b Y 2 ^ n 1 1 E = E ~ P h b Y 2 1 f 2 n 1 g + m X i=1 b Y t i 1 E i 1 E i E ~ P h b Y 2 1 f 2 n 1 g + ( b X ~ ^h + " 2 )1 f n 1 < 2 g 1 E i : This, together with (4.51) and (4.54) , leads to E ~ P h b Y 1 b X ~ 1 f~ < 2 g b Y 2 1 f~ 2 g 1 E i " +E ~ P h b Y 2 1 f 2 n 1 g + b X ~ ^h 1 f n 1 < 2 g b X ~ 1 f~ < 2 g b Y 2 1 f~ 2 g 1 E i = " +E ~ P h b X ~ ^h b Y 2 1 f n 1 < 2 ~ g 1 E i = " +E ~ P h E ~ P 2 [ b X ~ ^h ] b Y 2 1 f n 1 < 2 ~ g 1 E i "; where the last inequality follows from the denition of b Y . Then, by setting " := ~ ^ 2 we prove the result. Next result corresponds to Lemma 77. Lemma 80. Let P2P, 2T , and E2F such that b on E. Then for all "> 0: E P 1 E b Y E P" 1 E b Y b +" for some P " 2P(P;;E): Proof. We proceed in three steps. Step 1. We rst assume =t<b onE. We shall prove the result following the arguments 76 in Lemma 77. Recall the notations in Subsection 4.2 and the n dened in (4.45), and let n denote the modulus of continuity functions of n , n , and n . Denote n := 0 for n ( b Y 0 b X 0 ) 1 . For any n and > 0, letfE n; i ;i 1gF t be a partition of E\f n1 t< n g such thatk!! 0 k t for any !;! 0 2E n; i . For each (n;i), x ! n;i := ! n;;i 2 E n; i . By Lemma 77, b Y 1 E n; i is anE-martingale on [t; n ]. Then b Y t (! n;i ) =E t [ b Y t;! n;i t;! n;i n ], and thus there exists P n; i 2P t such that b Y t (! n;i )E P n; i h b Y t;! n;i t;! n;i n i +": (4.55) Note that[ n m=1 [ i1 E m; i =E\ft< n g. Set P n; := P t h n X m=1 X i1 P m; i 1 E m; i +P1 E c [ft ng i 2P(P;t;E): (4.56) Recall the h dened by (4.37). 
We claim that, for any Nn, E P [ b Y t 1 E ]E P N; [ b Y t_ n n 1 E ] CnE h 2 + n () + 2 n () i +C n () +" +C2 n +CC( n < 1) +2E P N; h sup n s n j b Y s b Y n j n 1 E i +CE h h d !; ( n ) c i ; (4.57) where n () := sup ts 1 <s 2 t 0 ;s 2 s 1 n() jB t s 1 B t s 2 j: Moreover, one can easily ndF t -measurable continuous random variables ' k such that j' k j 1 and lim k!1 E P [j1 E ' k j] = 0. Then E P [ b Y t 1 E ]E P N; [ b Y t_ n n ' k ] CnE h 2 + n () + 2 n () i +C n () +" +C2 n +CC( n < 1) +CE P N; h sup n s n j b Y s b Y n j n ' k i +CE h h d !; ( n ) c i +CE P [j1 E ' k j]: Send ! 0. First note that [ + n () + 2 n ()]# 0 and h # 1 f0g , then by Proposition 56 (ii) we have lim !0 E h 2 + n () + 2 n () i = 0; lim !0 E h h d !; ( n ) c i =C h d !; ( n ) c = 0 i =C[( n ) c ]C2 n : 77 Moreover, for each N, by the weak compactness assumption (P1) we see that P N; has a weak limit P N 2P. It is straightforward to check that P N 2P(P;t;E). Note that the random variables b Y t_ n n ' k and sup n s n j b Y s b Y n j n ' k are continuous. Then E P [ b Y t 1 E ]E P N [ b Y t_ n n ' k ] " +C2 n +CC( n < 1) +CE P N h sup n s n j b Y s b Y n j n ' k i +CE P [j1 E ' k j]: Again by the weak compactness assumption (P1), P N has a weak limit P 2P(P;t;E) as N!1. Now send N!1, by the continuity of the random variables we obtain E P [ b Y t 1 E ]E P [ b Y t_ n n ' k ] " +C2 n +CC( n < 1) +CE P h sup n s n j b Y s b Y n j n ' k i +CE P [j1 E ' k j]: Send k!1 and recall thatP =P onF t , we have E P [ b Y t 1 E ]E P [ b Y t_ n n 1 E ] " +C2 n +CC( n < 1) + 2E P h sup n s n j b Y s b Y n j n 1 E i : Finally send n!1, by (4.48) and applying the dominated convergence theorem under P andP we have E P [ b Y t 1 E ]E P [ b Y b 1 E ] ": That is,P " :=P satises the requirement in the case =t<b on E. Step 2. We now prove Claim (4.57). Indeed, for any mn and any !2E m; i , by Lemma 69 we have b Y t (!)E P m; i h b Y t;! t;! n i = b Y t (!) b Y t (! m;i ) + b Y t (! m;i )E P m; i h b Y t;! 
m;i t;! m;i n i +E P m; i h b Y t;! m;i t;! m;i n b Y t;! t;! n i C 1 () +" +E P m; i h b Y t;! m;i t;! m;i n b Y t;! t;! n 1 ( n ) t;! m;i \( n ) t;! t;! m;i n t;! n i (4.58) +CP m; i h [( n ) t;! m;i ] c [ [( n ) t;! ] c i +CE P m; i h 1 t;! m;i n + 1 t;! n i : 78 Note that E P m; i h 1 t;! m;i n + 1 t;! n i 2E P m; i h 1 t;! n i + n (); P m; i h [( n ) t;! m;i ] c [ [( n ) t;! ] c i 2P m; i h [( n ) t;! ] c i +P m; i h [( n ) t;! m;i ] c \ ( n ) t;! i (4.59) 2P m; i h [( n ) t;! ] c i +P m; i h 0<d ! t B t ; ( n ) c < i 2P m; i h [( n ) t;! ] c i +E P m; i h h d ! t B t ; ( n ) c i : Moreover, on ( n ) t;! m;i \ ( n ) t;! \f t;! m;i n > 0g\f t;! n > 0g, by Lemma 76 and (4.46) we have ( n ) t;! m;i t;! m;i n ( n ) t;! m;i < h t;! m;i n+4 ; ( n ) t;! t;! n ( n ) t;! < h t;! n+4 : Then b Y t;! m;i t;! m;i n b Y t;! t;! n b Y t;! m;i ( n ) t;! m;i b Y t;! ( n ) t;! + sup ( n ) t;! m;i s( n ) t;! m;i j b Y t;! m;i s b Y t;! m;i ( n ) t;! m;i j + sup ( n ) t;! s( n ) t;! j b Y t;! s b Y t;! ( n ) t;! j = b Y t;! m;i ( n ) t;! m;i b Y t;! ( n ) t;! + 2 sup ( n ) t;! s( n ) t;! j b Y t;! s b Y t;! ( n ) t;! j + sup ( n ) t;! m;i s( n ) t;! m;i j b Y t;! m;i s b Y t;! m;i ( n ) t;! m;i j sup ( n ) t;! s( n ) t;! j b Y t;! s b Y t;! ( n ) t;! j: Applying Lemma 71 we get b Y t;! m;i ( n ) t;! m;i b Y t;! ( n ) t;! Cn 2 d 1 (( n ) t;! m;i ;! m;i t B t ); (( n ) t;! ;! t B t ) Cn 2 + n () + 2 sup ( n ) t;! n()s( n ) t;! +n() jB t s B t ( n ) t;!j Cn 2 + n () + 2 n () ; 79 and, similarly, sup ( n ) t;! m;i s( n ) t;! m;i j b Y t;! m;i s b Y t;! m;i ( n ) t;! m;i j sup ( n ) t;! s( n ) t;! j b Y t;! s b Y t;! ( n ) t;! j sup ( n ) t;! m;i s( n ) t;! m;i _( n ) t;! j b Y t;! m;i s b Y t;! m;i ( n ) t;! m;i j + sup ( n ) t;! m;i _( n ) t;! s( n ) t;! m;i ^( n ) t;! j b Y t;! m;i s b Y t;! s j +j b Y t;! m;i ( n ) t;! m;i b Y t;! ( n ) t;! j + sup ( n ) t;! m;i ^( n ) t;! s( n ) t;! m;i j b Y t;! m;i s b Y t;! m;i ( n ) t;! m;i j +j b Y t;! m;i ( n ) t;! m;i b Y t;! ( n ) t;! 
j Cn 2 + n () + 2 n () +C 1 ()Cn 2 + n () + 2 n () : Then b Y t;! m;i t;! m;i n b Y t;! t;! n Cn 2 + n () + 2 n () + 2 sup ( n ) t;! s( n ) t;! j b Y t;! s b Y t;! ( n ) t;! j: Plug this and (4.59) into (4.58), for !2E m; i we obtain b Y t (!)E P m; i h b Y t;! t;! n i CnE P m; i h 2 + n () + 2 n () i +C n () +" +2E P m; i h sup ( n ) t;! s( n ) t;! j b Y t;! s b Y t;! ( n ) t;! j t;! n i +CP m; i h [( n ) t;! ] c i +CE P m; i h 1 t;! n i +CE P m; i h h d ! t B t ; ( n ) c i : Then by (4.56) we have, for any Nn, E P [ b Y t 1 E ]E P N; [ b Y t_ n 1 E ] =E P N; h [ b Y t b Y n ]1 E\ft< ng i CnE P N; h 2 + n () + 2 n () i +C n () +" +CP N; h [ n ] c i +CE P N; h 1 n i +2E P N; h sup n s n j b Y s b Y n j n 1 E i +CE P N; h h d !; ( n ) c i CnE h 2 + n () + 2 n () i +C n () +" +C2 n +CC( n < 1) +2E P N; h sup n s n j b Y s b Y n j n 1 E i +CE h h d !; ( n ) c i : (4.60) 80 Similarly we have E P N; h [ b Y t_ n b Y t_ n n ]1 E i C2 n +CC( n < 1) +E P N; h [ b Y t_ n b Y t_ n ]1 E\ n n i C2 n +CC( n < 1) + 2E P N; h sup n s n j b Y s b Y n j n 1 E i This, together with (4.60), implies (4.57). Step 3. Finally we prove the lemma for general stopping time . We follow the arguments in Lemma 79. Let n be a sequence of stopping times such that n # and each n takes only nitely many values. By applying the dominated convergence Theorem under P, we may x n such that E P h j b Y n ^b b Y j1 E i " 2 : Assume n takes valuesft i ;i = 1; ;mg, and for each i, denote E i := E\f n = t i < b g2F t i . ThenfE i ; 1 i mg form a partition of ~ E := E\f n <b g. For each i, by Step 1 there existsP i 2P(P;t i ;E i ) such that E P [ b Y t i 1 E i i E P i [ b Y b 1 E i i + " 2m : Now deneP " := P m i=1 P i 1 E i +P1 ~ E c 2P(P; n ; ~ E)P(P;;E). Recall that ~ E2F n and note that b Y b b Y b , thanks to the supermartingale property of b Y . 
Then E P h b Y 1 E i E P" h b Y b 1 E i " 2 +E P h b Y n ^b 1 E i E P" h b Y b 1 E i " 2 +E P h b Y n1 ~ E i E P" h b Y b 1 ~ E i = " 2 + m X i=1 E P h b Y t i 1 E i i E P" h b Y b 1 E i i " 2 + m X i=1 " 2m =": The proof is complete now. We need one more lemma. 81 Lemma 81. Let P2P, 2T , and E2F such that h on E. For any "> 0, there exists P " 2P(P;;E) such that h + 1 L d(! ;O c ) + 3" + sup t+" j! t ! j; P " -a.s. on E Proof. First, there exists ~ 2T such that ~ +" and ~ takes only nitely many values 0t 1 <<t n =t 0 . Denote E i :=E\f~ =t i < hg2F t i . ThenfE i ; 1ing is a partition of E\f~ < hg and h ~ +" on E\f~ hg: (4.61) For any i, there exists a partition (E i j ) j1 of E i such thatj! t i ! 0 t i j L" for any !;! 0 2 E i j . For each (i;j), x an ! ij 2 E i j and a unit vector ij pointing to the direction from ! ij t i to O c . Now for any !2E i j , deneP i;j;! 2P t i as follows: = 0; t = 1 " [! ij t i ! t i ]1 [t i ;t i +") (t) +L ij 1 [t i +";T ] (t): We see that h t i ;! = h t i +" + 1 L d(! ij t i ;O c ) i ^t 0 ; P i;j;! -a.s. on E i j : Similar to the proof of (4.33), there existsP " 2P(P; ~ ;E)P(P;;E) such that the r.c.p.d. P t i ;! " =P i;j;! forP-a.e. !2E i j . Then h + 2" + 1 L [d(! t i ;O c ) +L"] + 3" + 1 L h d(! ;O c ) +j! ! t i j i + 3" + 1 L h d(! ;O c ) + sup t+" j! t ! j i ; P " -a.s. on E i j : This, together with (4.61), proves the lemma. We are now ready to complete the Proof of Proposition 78. The inequalityE[ b Y b ]E[ b Y b ] is a direct consequence of the Esupermartingale property of b Y established in Theorem 72. As for the reverse inequality, since b Y is continuous on [0; h) and h n " h with h n < h, it suces to show that, for any P2P and any "> 0 I n :=E P [ b Y b ^hn ]E[ b Y b ] 5" for suciently large n: (4.62) 82 Let > 0, n> 1 L . Set t n :=t 0 1 n , 0 :=b ^ h n , andP 0 :=P. We proceed in two steps. Step 1. 
Apply Lemma 79 with P 0 ; 0 ;b , and , there exist P 1;1 2P(P 0 ; 0 ; ) and a stopping time ~ 1 taking values in [ 0 ;b ], such that E P 0 [ b Y 0]E P 1;1 h b X ~ 11 f~ 1 <b g + b Y b 1 f~ 1 =b g i +": Denote E 1 :=f~ 1 < t n g2F ~ 1. By (4.24) and following the same argument as for the estimate in (3.16), we have: P 1;1 -a.s. on E c 1 \f~ 1 <b g, b X ~ 1 b X ~ 1E P 1;1 ~ 1 [ b X b ] +E P 1;1 ~ 1 [ b Y b ] E P 1;1 ~ 1 h 0 1 n +kB ~ 1 k ~ 1 + 1 n i +E P 1;1 ~ 1 [ b Y b ]C 0 (n 1 ) +E P 1;1 ~ 1 [ b Y b ]: Then, denoting E 2 :=E 1 \f~ 1 <b g2F ~ 1, we get: E P 0 h b Y 0 i E P 1;1 h b X ~ 11 E 2 + b X ~ 11 E c 1 \f~ 1 <b g + b Y b 1 f~ 1 =b g i +" E P 1;1 h b X ~ 11 E 2 + b Y b 1 E c 2 i +C 0 (n 1 )P 0 [E c 1 ] +": (4.63) Next, set ~ := [ 2 0 (3)]^ 3 . Apply Lemma 81 on P 1;1 , ~ 1 , E 2 , and ~ , there exists P 1;2 2P(P 1;1 ; ~ 1 ;E 2 ) such that h ~ 1 + 1 L d(! ~ 1;O c ) + +k! ~ 1 t k ~ 1 + ~ ; P 1;2 -a.s. on E 2 : Since ~ 1 b h, we have b ~ 1 3; P 1;2 -a.s. on E 2 \fd(! ~ 1;O c )Lg\fk! ~ 1 k ~ 1 + ~ g: Then, by (4.24) and (3.16) again we have: P 1;2 -a.s. on E 2 \fd(! ~ 1;O c )Lg2F ~ 1, b X ~ 1 E P 1;2 ~ 1 [ b X b ] +E P 1;2 ~ 1 h 0 d 1 (~ 1 ;B); (b ;B) i = E P 1;2 ~ 1 [ b X b ] +E P 1;2 ~ 1 h 0 d 1 (~ 1 ;B); (b ;B) 1 fkB ~ 1 k ~ 1 + ~ g + 1 fkB ~ 1 k ~ 1 + ~ >g i E P 1;2 ~ 1 [ b X b ] +E P 1;2 ~ 1 h 0 3 +kB ~ 1 k ~ 1 +3 i +C 2 E P 1;2 ~ 1 [kB ~ 1 k 2 ~ 1 + ~ ] E P 1;2 ~ 1 [ b X b ] +C 0 (3) + C ~ 2 E P 1;2 ~ 1 [ b X b ] +C 0 (3): 83 Note thatn 1 L 3. Thus, denotingE 3 :=E 2 \fd(! ~ 1;O c )>Lg2F ~ 1, (4.63) leads to: E P 0 h b Y 0 i E P 1;2 h b X ~ 11 E 3 + b Y b 1 E c 3 i +C 0 (3)P 1;2 (E c 3 ) +": (4.64) Moreover, apply Lemma 80 with P 1;2 ; ~ 1 , E 3 , and ", there exists P 1;3 2P(P 1;2 ; ~ 1 ;E 3 ) such that E P 1;2 h b X ~ 11 E 3 i E P 1;2 h b Y ~ 11 E 3 i E P 1;3 h b Y b 1 E 3 i +": Dene 1 := infft ~ 1 : d(! t ;O c ) 1 n g^b . Note that 1 < h on E 3 and b Y is a P 1;3 -supermartingale. 
Then E P 1;3 h b Y b 1 E 3 i E P 1;3 h b Y 11 E 3 i : Thus E P 1;2 h b X ~ 11 E 3 i E P 1;3 h b Y 11 E 3 i +": Plug this into (4.64), we obtain E P 0 h b Y 0 i E P 1;3 h b Y 11 E 3 + b Y b 1 E c 3 i +C 0 (3)P 1;3 (E c 3 ) + 2": We now denoteP 1 :=P 1;3 2P(P 0 ; 0 ; ), and D 1 :=E 3 \f 1 <b g =f~ 1 <t n ^ ~ g\fd(! ~ 1;O c )>Lg\f 1 <b g2F 1 (4.65) Then E P 0 h b Y 0 i E P 1 h b Y 11 D 1 + b Y b 1 D c 1 i +C 0 (3)P 1 (D c 1 ) + 2": (4.66) Step 3: Iterating the arguments of Step 1, we may dene (~ m ; m ;P m ;D m ) m1 such that: P m+1 2P(P m ; m ;D m ); m ~ m+1 b ; m+1 := inf n t ~ m+1 :d(! t ;O c ) 1 n o ^b D m+1 :=D m \f~ m+1 <t n ^b g\fd(! ~ m+1;O c )>Lg\f m+1 <b g; 84 and E P m h b Y m1 Dm i E P m+1 h b Y m+11 D m+1 + b Y b 1 Dm\D c m+1 i +C 0 (3)P m+1 (D m \D c m+1 ) + 2 1m ": By induction, for any m 1 we have E P 0 h b Y 0 i E P m h b Y m1 Dm + b Y b 1 D c m i +C 0 (3)P m (D c m ) + 4" E P m [ b Y b ] + 2C 0 P m [D m ] +C 0 (3) + 4": (4.67) Note that P m [D m ] P m h \ m i=1 fjB ~ iB i1jL 1 n g\fjB iB ~ ijL 1 n g i P m h m X i=1 [jB ~ iB i1j 2 +jB iB ~ ij 2 ] 2m(L 1 n ) 2 i 1 2m(L 1 n ) 2 E P m h m X i=1 [jB ~ iB i1j 2 +jB iB ~ ij 2 i C 2m(L 1 n ) 2 : Then, (4.67) leads to I n C 2m(L 1 n ) 2 +C 0 (3) + 4": which implies, by sending m!1 that I n C 0 (3) + 4": Hence, by choosing small enough such that 0 (3)", we see that (4.62) holds true for n> 1 L . 85 Chapter 4 Fully nonlinear Path-dependent PDEs Consider the fully nonlinear parabolic path-dependent partial dierential equation dened on the space of continuous paths =f!2C 0 ([0;T ];R d ) :! 0 = 0g: @ t uG(:;u;@ ! u;@ 2 !! u) (t;!) = 0; t<T; !2 ; (0.1) for some progressively measurable nonlinearityG : [0;T ] RR d S d , whereS d is the set of symmetric matrices of sized with real entries. This form covers the non-Markovian HJB equation and the non-Markovian Isaac-Bellman equation. Such equations were considered rst by Lukoyanov [35] in the rst order case, and discussed by Peng [47, 48]. 
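For instance, in a standard formulation (sketched here with a generic control set A and generic coefficients b and sigma, none of which belong to the thesis notation), the non-Markovian HJB equation is the PPDE (0.1) with the convex nonlinearity

```latex
G(t,\omega,y,z,\gamma)
  \;=\; \sup_{a \in A}\Big\{ \tfrac{1}{2}\,\sigma\sigma^{\mathrm T}(t,\omega,a) : \gamma
  \;+\; b(t,\omega,a)\cdot z \Big\},
```

where the path dependence of b and sigma is what makes the associated stochastic control problem non-Markovian.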
When the nonlinearity G is semilinear, i.e. linear with respect to the \partial^2_{\omega\omega}u component, the theory of backward stochastic differential equations, started by Pardoux and Peng [41], provides a wellposedness result for (0.1) in a strong sense, imposing the existence of the space gradient \partial_\omega u in a convenient functional space. For a special class of fully nonlinear maps G, a similar theory was developed by means of the second order backward stochastic differential equations introduced by Cheridito, Soner, Touzi, and Victoir [5] and Soner, Touzi and Zhang [53], and the closely related G-BSDEs of Hu, Ji, Peng and Song [26]. Our objective in this chapter and the following one is to relax the strong requirement of existence of the space gradient in the theory of backward stochastic differential equations and its various extensions. To do this, we follow the classical theory of viscosity solutions of partial differential equations, introduced by Crandall and Lions [10]; see [9] for an overview and [21] for a pedagogical presentation. The main difficulty is that the classical finite-dimensional theory of viscosity solutions relies strongly on the local compactness of the underlying space. Since the space of continuous paths does not satisfy this condition, our main contribution is to introduce a definition of viscosity solutions which circumvents this major difficulty. This is achieved in [15] and [17] by replacing the pointwise extremality in the standard definition of viscosity solutions by the corresponding extremality in the context of an optimal stopping problem under nonlinear expectation. In the fully nonlinear case, the delicate analysis of such optimal stopping problems is reported in the previous chapter. When the generator G is convex in \partial^2_{\omega\omega}u, the equation is related to second order BSDEs in the sense of [53]. We now give a quick presentation of 2BSDEs and introduce the notation we will use to study (0.1).
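In the semilinear case mentioned above, say G(t,\omega,y,z,\gamma) = \tfrac12\mathrm{tr}(\gamma) + f(t,\omega,y,z) with a generic generator f and terminal data \xi (both hypothetical placeholders, not thesis notation), the Pardoux-Peng representation reads, under the Wiener measure:

```latex
u(t,\omega) \;=\; Y_t, \qquad
Y_t \;=\; \xi \;+\; \int_t^T f(s, B_\cdot, Y_s, Z_s)\,ds \;-\; \int_t^T Z_s\,dB_s,
\quad \mathbb{P}_0\text{-a.s.},
```

with the process Z playing the role of the space gradient \partial_\omega u when u is smooth; the requirement that such a Z exist in a suitable functional space is precisely the "strong sense" wellposedness referred to above.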
1 Second order Backward stochastic differential equations

In this section, we introduce the theory of second order backward stochastic differential equations (2BSDEs). Markovian 2BSDEs provide a stochastic representation for the viscosity solutions of fully nonlinear parabolic differential equations, as shown in [5]. Our aim is to get the reader accustomed to the heavy language of 2BSDEs, especially in terms of the singularity of measures. A systematic study of 2BSDEs started with [53]. One expects that, in the non-Markovian framework, the 2BSDEs studied in [53] will provide a stochastic representation for the viscosity solutions of the fully nonlinear PPDE (0.1) when the generator G is convex in \partial^2_{\omega\omega}u. In this section we present the notation and main results of [53].

1.1 Preliminaries

We denote by F^+ := \{F^+_t\}_{t\in[0,T]} the right limit of F. We say that a measure P on \Omega is a local martingale measure if B is a local martingale under P. By Karandikar [28], one can define an F-progressively measurable process, denoted \langle B\rangle, which coincides with the quadratic variation of B under every local martingale measure P. We also define the process

\hat{a}_t(\omega) := \lim_{\varepsilon\downarrow 0} \frac{\langle B\rangle_t(\omega)-\langle B\rangle_{t-\varepsilon}(\omega)}{\varepsilon}.

We denote by \mathcal{P}_W the set of local martingale measures P such that

\langle B\rangle_t \text{ is absolutely continuous in } t \text{ and } \hat{a} \text{ takes values in } S^{>0}_d, \quad P\text{-a.s.}

Notice that for all P\in\mathcal{P}_W the quadratic variation of B satisfies \langle B\rangle^P_t = \langle B\rangle_t = \int_0^t \hat{a}(s,B)\,ds, P-a.s., and the process W^P_t := \int_0^t \hat{a}_s^{-1/2}\,dB_s, P-a.s., is a P-Brownian motion. For an F-progressively measurable process \alpha satisfying

\alpha_t \in S^{>0}_d \quad\text{and}\quad \int_0^T |\alpha_s|\,ds < \infty, \quad P_0\text{-a.s.,} \tag{1.2}

we define P^\alpha := P_0 \circ (X^\alpha)^{-1}, where

X^\alpha_t := \int_0^t \alpha_s^{1/2}\,dB_s, \quad t\in[0,T], \quad P_0\text{-a.s.,}

and \overline{\mathcal{P}}_S := \{P^\alpha : \alpha \text{ satisfying } (1.2)\}. Notice that, for every \alpha satisfying (1.2), under P_0 both \hat{a}_t(X^\alpha) and \alpha_t are densities of the quadratic variation of X^\alpha. Therefore \hat{a}_t(X^\alpha) = \alpha_t, dt\times P_0-a.s.
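The identity \hat{a}_t(X^\alpha) = \alpha_t can be checked numerically in the simplest case of a constant \alpha (an illustrative sketch, not part of the thesis; the realized quadratic variation below is the discrete counterpart of the density \hat{a}):

```python
import numpy as np

# Sketch (assumes d = 1 and a constant diffusion density alpha):
# X_t = int_0^t alpha^{1/2} dB_s, so the realized quadratic variation
# sum (X_{t_{i+1}} - X_{t_i})^2 over [0, T] should be close to alpha * T,
# i.e. the quadratic variation density a-hat of X equals alpha.
rng = np.random.default_rng(0)
T, n, alpha = 1.0, 200_000, 2.0
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), size=n)  # Brownian increments under P_0
dX = np.sqrt(alpha) * dB                   # increments of X^alpha
a_hat = np.sum(dX**2) / T                  # realized QV density estimate
print(a_hat)
```

With 2 x 10^5 steps the estimate lies within a fraction of a percent of alpha; the convergence is in probability, the discrete counterpart of the dt x P_0-a.s. identity above.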
The following result, whose proof can be found in [54], is useful for manipulating the elements of \overline{\mathcal{P}}_S.

Lemma 82. It holds that

\overline{\mathcal{P}}_S = \{P \in \mathcal{P}_W : \overline{F}^{W^P,P} = \overline{F}^P\}, \tag{1.3}

and every probability in \overline{\mathcal{P}}_S satisfies the Blumenthal zero-one law and the martingale representation property. Moreover, for every such \alpha there exists an F-progressively measurable mapping \beta^\alpha such that B_t = \beta^\alpha(t, X^\alpha) for all t\in[0,T], P_0-a.s., and \hat{a}(B) = \alpha(\beta^\alpha(B)), dt\times P^\alpha-a.s.

1.1.1 The generator for 2BSDEs

We assume that the generator G : [0,T]\times\Omega\times R\times R^d\times S_d \to R is convex in \gamma, and we define its conjugate

F(t,\omega,y,z,a) := \sup_{\gamma\in D_{F_t}} \Big\{\tfrac12\, a : \gamma - G(t,\omega,y,z,\gamma)\Big\}, \quad a\in S^{>0}_d, \tag{1.4}

\hat{F}_t(y,z) := F(t,B,y,z,\hat{a}(t,B)) \quad\text{and}\quad \hat{F}^0_t := \hat{F}_t(0,0). \tag{1.5}

We fix a constant \kappa\in(1,2] and give the following definition.

Definition 83. Let \mathcal{P}^\kappa_G denote the collection of all P\in\overline{\mathcal{P}}_S such that

\underline{a}_P \le \hat{a} \le \overline{a}_P, \ dt\times P\text{-a.s. for some } \underline{a}_P, \overline{a}_P\in S^{>0}_d, \quad\text{and}\quad E^P\Big[\Big(\int_0^T |\hat{F}^0_t|^\kappa\,dt\Big)^{2/\kappa}\Big] < \infty.

Assumption 84. \mathcal{P}^\kappa_G is not empty, and the domain D_{F_t(y,z)} := \{a\in S^{>0}_d : F(t,\omega,y,z,a)<\infty\} of F in a depends only on t. F is F-progressively measurable, uniformly continuous in \omega under the uniform metric, and for all t\in[0,T], y,y'\in R, z,z'\in R^d,

|\hat{F}_t(y,z) - \hat{F}_t(y',z')| \le C\big(|y-y'| + |\hat{a}^{1/2}(z-z')|\big), \quad \mathcal{P}^\kappa_G\text{-q.s.} \tag{1.6}

1.1.2 The spaces and norms

To state the main existence and uniqueness results we define the following norms. For p\ge 1:

L^{p,\kappa}_G is the space of F_T-measurable random variables \xi with

\|\xi\|^p_{L^{p,\kappa}_G} := \sup_{P\in\mathcal{P}^\kappa_G} E^P[|\xi|^p] < \infty.

H^{p,\kappa}_G is the space of all F^+-progressively measurable R^d-valued processes Z with

\|Z\|^p_{H^{p,\kappa}_G} := \sup_{P\in\mathcal{P}^\kappa_G} E^P\Big[\Big(\int_0^T |\hat{a}^{1/2}_t Z_t|^2\,dt\Big)^{p/2}\Big] < \infty.

D^{p,\kappa}_G is the space of F^+-progressively measurable R-valued processes Y such that Y is cadlag \mathcal{P}^\kappa_G-q.s.
and

\|Y\|^p_{D^{p,\kappa}_G} := \sup_{P\in\mathcal{P}^\kappa_G} E^P\Big[\sup_{0\le t\le T}|Y_t|^p\Big] < \infty.

For \xi\in L^{1,\kappa}_G, P\in\mathcal{P}^\kappa_G and t\in[0,T], we denote

E^{G,P}_t[\xi] := P\text{-}\operatorname{ess\,sup}_{P'\in\mathcal{P}^\kappa_G(t^+,P)} E^{P'}_t[\xi], \quad\text{where}\quad \mathcal{P}^\kappa_G(t^+,P) := \{P'\in\mathcal{P}^\kappa_G : P'=P \text{ on } F^+_t\}.

\mathbb{L}^{p,\kappa}_G is the space of \xi\in L^{p,\kappa}_G such that

\|\xi\|^p_{\mathbb{L}^{p,\kappa}_G} := \sup_{P\in\mathcal{P}^\kappa_G} E^P\Big[P\text{-}\operatorname{ess\,sup}_{0\le t\le T}\big(E^{G,P}_t[|\xi|^\kappa]\big)^{p/\kappa}\Big] < \infty.

\mathcal{L}^{p,\kappa}_G := the closure of UC_b(\Omega) under the norm \|\cdot\|_{\mathbb{L}^{p,\kappa}_G}.

1.2 Second order BSDEs

Definition 85. For \xi\in\mathcal{L}^{2,\kappa}_G, we say that (Y,Z)\in D^{2,\kappa}_G\times H^{2,\kappa}_G is a solution to the 2BSDE

Y_t = \xi + \int_t^T \hat{F}_s(Y_s,Z_s)\,ds - \int_t^T Z_s\,dB_s + K_T - K_t, \quad 0\le t\le T, \quad \mathcal{P}^\kappa_G\text{-q.s.}, \tag{1.7}

if:

(i) Y_T = \xi, \mathcal{P}^\kappa_G-q.s.;

(ii) for each P\in\mathcal{P}^\kappa_G, the process

K^P_t := Y_0 - Y_t - \int_0^t \hat{F}_s(Y_s,Z_s)\,ds + \int_0^t Z_s\,dB_s, \quad 0\le t\le T, \quad P\text{-a.s.},

is nondecreasing, P-a.s.;

(iii) the family \{K^P, P\in\mathcal{P}^\kappa_G\} satisfies the minimality condition

K^P_t = P\text{-}\operatorname{ess\,inf}_{P'\in\mathcal{P}^\kappa_G(t^+,P)} E^{P'}_t\big[K^{P'}_T\big], \quad P\text{-a.s., for all } P\in\mathcal{P}^\kappa_G, \ t\in[0,T].

We need the following assumption for the wellposedness of the 2BSDE (1.7).

Assumption 86.

\sup_{P\in\mathcal{P}^\kappa_G} E^P\Big[P\text{-}\operatorname{ess\,sup}_{0\le t\le T}\Big(E^{G,P}_t\Big[\int_0^T |\hat{F}^0_s|^\kappa\,ds\Big]\Big)^{2/\kappa}\Big] < \infty.

We now state the main result on the wellposedness of 2BSDEs, together with some properties of the solutions.

Theorem 87. Under Assumptions 84 and 86, for every \xi\in\mathcal{L}^{2,\kappa}_G the 2BSDE (1.7) has a unique solution (Y,Z)\in D^{2,\kappa}_G\times H^{2,\kappa}_G.

For any P\in\mathcal{P}^\kappa_G, F^+-measurable stopping time \tau, and F^+_\tau-measurable random variable \eta\in L^2(P), let (Y^P,Z^P) := (Y^P(\tau,\eta), Z^P(\tau,\eta)) denote the solution under P of the BSDE

Y^P_t = \eta + \int_t^\tau \hat{F}_s(Y^P_s,Z^P_s)\,ds - \int_t^\tau Z^P_s\,dB_s, \quad 0\le t\le\tau, \quad P\text{-a.s.}

We also record the following property of the solutions, which will be useful for deriving estimates on PPDEs.

Proposition 88. Under Assumptions 84 and 86, the following dynamic programming principle holds: for any P\in\mathcal{P}^\kappa_G and 0\le t_1\le t_2\le T,

Y_{t_1} = P\text{-}\operatorname{ess\,sup}_{P'\in\mathcal{P}^\kappa_G(t_1^+,P)} Y^{P'}_{t_1}(t_2, Y_{t_2}), \quad P\text{-a.s.}

Moreover, under the same assumptions, the solutions of two 2BSDEs with two different terminal conditions satisfy a priori estimates and a comparison principle. These results can be found in [53].
1.3 2BSDEs and fully nonlinear PDEs

To motivate the study of PPDEs, we now give the relation between 2BSDEs and parabolic fully nonlinear PDEs. We assume that the nonlinear generator G has the Markovian form G(t,\omega,y,z,\gamma) = g(t,\omega_t,y,z,\gamma) for a function g : [0,T]\times R^d\times R\times R^d\times S_d \to R. Then the function F satisfies F(t,\omega,y,z,a) = f(t,\omega_t,y,z,a). We also recall that if g is convex and lower semicontinuous in \gamma, then the biconjugate of g is equal to g. We make the following assumption on g.

Assumption 89. \mathcal{P}^\kappa_g is not empty, and the domain D_{f_t} of f in a depends only on t. On D_{f_t}, f is uniformly continuous in t, uniformly in a, and for some constant C and a modulus of continuity \rho with polynomial growth,

|f(t,x,y,z,a) - f(t,x',y',z',a)| \le \rho(|x-x'|) + C\big(|y-y'| + |a^{1/2}(z-z')|\big).

We now use 2BSDEs to construct a function v which is a solution to a fully nonlinear second order parabolic PDE. Similarly to Definition 83, we give the following definition on the shifted spaces.

Definition 90. For t\in[0,T], let \mathcal{P}^{\kappa,t}_g be the class of probability measures P\in\mathcal{P}^t_S such that

\underline{a}_P \le \hat{a}^t \le \overline{a}_P, \ ds\times P\text{-a.s. on } [t,T]\times\Omega^t, \text{ for some } \underline{a}_P, \overline{a}_P\in S^{>0}_d,

and

E^P\Big[\Big(\int_t^T |\hat{f}^{t,0}_s|^\kappa\,ds\Big)^{2/\kappa}\Big] < \infty, \quad\text{where}\quad \hat{f}^{t,0}_s := f(s,0,0,0,\hat{a}^t_s).

Let \tau be an F^t-stopping time, P\in\mathcal{P}^{\kappa,t}_g, and \eta a P-square-integrable F^t_\tau-measurable random variable. We define (Y^P,Z^P) := (Y^{t,x,P}(\tau,\eta), Z^{t,x,P}(\tau,\eta)) as the solution of the following BSDE:

Y^P_s = \eta + \int_s^\tau f(r, x+B^t_r, Y^P_r, Z^P_r, \hat{a}_r)\,dr - \int_s^\tau Z^P_r\,dB_r, \quad t\le s\le\tau, \quad P\text{-a.s.}

We then define the function v as

v(t,x) := \sup_{P\in\mathcal{P}^{\kappa,t}_g} Y^{t,x,P}_t\big(T, \Phi(x+B^t_T)\big), \quad\text{for } (t,x)\in[0,T]\times R^d. \tag{1.8}

Assumption 91. The terminal function \Phi has polynomial growth, and there exists a continuous function \zeta(t,x) such that, for all (t,x),

\sup_{P\in\mathcal{P}^{\kappa,t}_g} E^P\Big[|\Phi(x+B^t_T)| + \int_t^T |f(s, x+B^t_s, 0,0,\hat{a}^t_s)|\,ds\Big] \le |\zeta(t,x)|, \qquad \sup_{P\in\mathcal{P}^{\kappa,t}_g} E^P\Big[\sup_{t\le s\le T} \zeta^2(s, x+B^t_s)\Big] < \infty.

Theorem 92.
Let Assumptions 89 and 91 hold, and assume \Phi is uniformly continuous. Then the solution (Y,Z) of the 2BSDE satisfies Y_t = v(t,B_t), and the function v is uniformly continuous in x, uniformly in t, and right continuous in t. For any family of F^t-stopping times \{\tau^P, P\in\mathcal{P}^{\kappa,t}_g\}, the following dynamic programming principle holds:

v(t,x) = \sup_{P\in\mathcal{P}^{\kappa,t}_g} Y^{t,x,P}\big(\tau^P, v(\tau^P, x+B^t_{\tau^P})\big). \tag{1.9}

The function v is a viscosity solution of the PDE

-\partial_t v(t,x) - g\big(t,x,v(t,x),\partial_x v(t,x),\partial_{xx}v(t,x)\big) = 0, \quad (t,x)\in[0,T)\times R^d, \tag{1.10}

v(T,x) = \Phi(x), \quad x\in R^d. \tag{1.11}

The representation (1.8) can be called a Feynman-Kac formula for 2BSDEs: it links the theory of 2BSDEs to the theory of second order fully nonlinear PDEs. The existence and uniqueness of the solution of the 2BSDE are proven in a non-Markovian framework; hence one expects that the Feynman-Kac formula can be extended to a non-Markovian setting.

2 Path dependent partial differential equations

In the following sections we study the fully nonlinear parabolic path-dependent partial differential equation (PPDE, for short)

\mathcal{L}u(t,\omega) := \big\{-\partial_t u - G(\cdot, u, \partial_\omega u, \partial^2_{\omega\omega}u)\big\}(t,\omega) = 0, \quad 0\le t<T, \ \omega\in\Omega, \tag{2.12}

where the generator G : [0,T]\times\Omega\times R\times R^d\times S_d \to R satisfies the following standing assumptions.

Assumption 93. The nonlinearity G satisfies:

(i) For fixed (y,z,\gamma), G(\cdot,y,z,\gamma)\in L^0(\Lambda) and |G(\cdot,0,0,0)|\le C_0.

(ii) G is elliptic, i.e. nondecreasing in \gamma.

(iii) G is uniformly Lipschitz continuous in (y,z,\gamma), with a Lipschitz constant L_0.

(iv) For any (y,z,\gamma), G(\cdot,y,z,\gamma) is right continuous in (t,\omega) under d_\infty, in the sense of Definition 9.

2.1 Preliminary results

We will need the following results.

Lemma 94. For any h\in\mathcal{H} and any L>0, we have \underline{\mathcal{E}}^L_0[h] > 0.

Proof. By (2.13), we may assume h\ge h_\varepsilon for some \varepsilon>0. For any P\in\mathcal{P}_L and 0<\delta\le\varepsilon, we have

P(h\le\delta) \le P(h_\varepsilon\le\delta) = P(\|B\|_\delta\ge\varepsilon) \le \varepsilon^{-4}\,E^P[\|B\|_\delta^4] \le C L^4 \varepsilon^{-4}\delta^2.

This implies that, for \delta := \frac{\varepsilon^2}{\sqrt{2C}\,L^2}\wedge\varepsilon,

E^P[h] \ge \delta\,P(h>\delta) = \delta\big(1 - P(h\le\delta)\big) \ge \delta\big(1 - CL^4\varepsilon^{-4}\delta^2\big) \ge \frac{\delta}{2}.

Thus \underline{\mathcal{E}}^L_0[h] \ge \frac{\delta}{2} > 0.
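The strictly positive lower bound in Lemma 94 can be illustrated by a Monte Carlo sketch under the Wiener measure alone (the lemma itself is uniform over the whole family of measures, and the parameters below are arbitrary illustrative choices):

```python
import numpy as np

# Sketch: estimate E[h_eps] under the Wiener measure, where
#   h_eps = inf{s : |B_s| >= eps} ^ eps
# is a hitting time of the type used in the lemma (started at t = 0, d = 1).
# The lemma asserts that even the infimum of such expectations over the
# family P_L is strictly positive.
rng = np.random.default_rng(1)
eps, n_steps, n_paths = 0.5, 1_000, 4_000
dt = eps / n_steps
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
hit = np.abs(B) >= eps
# first hitting index, or n_steps if |B| stays below eps up to time eps
first = np.where(hit.any(axis=1), hit.argmax(axis=1), n_steps)
h = np.minimum((first + 1) * dt, eps)
print(h.mean())  # strictly positive, as the lemma predicts
```

By construction 0 < h <= eps path by path, so the sample mean stays bounded away from zero exactly as the Chebyshev-type estimate in the proof predicts.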
We recall the space derivatives defined via the functional Ito formula in the first chapter, which play an important role. Denote

\mathcal{P}^t_\infty := \bigcup_{L>0}\mathcal{P}^t_L, \quad t\in[0,T].

Definition 95. We say u\in C^{1,2}(\Lambda) if u\in C^0(\Lambda) and there exist \partial_t u\in C^0(\Lambda), \partial_\omega u\in C^0(\Lambda,R^d), \partial^2_{\omega\omega}u\in C^0(\Lambda,S_d) such that, for any P\in\mathcal{P}^0_\infty, u is a local P-semimartingale and it holds:

du_t = \partial_t u_t\,dt + \partial_\omega u_t\,dB_t + \tfrac12\,\partial^2_{\omega\omega}u_t : d\langle B\rangle_t, \quad 0\le t\le T, \quad P\text{-a.s.} \tag{2.14}

The notation u_t := u(t,B) used above will be our convention: if we compose a functional u with the canonical process B, we write u_t; if we need to compose u with another process (for example X), we write u(t,X) explicitly.

Remark 96. If u\in C^{1,2}(\Lambda), then the definition of the time derivative is consistent with the semilinear case. Indeed, fix u\in C^{1,2}(\Lambda) and (t,\omega)\in\Lambda with t<T, and choose P\in\mathcal{P}^t_\infty such that, P-a.s., B_s = \omega_s for all s\le t and B_s = \omega_t for all s\ge t. Then

u^{t,\omega}(s,B^t) - u(t,\omega) = \int_t^s \partial_t u^{t,\omega}_r\,dr, \quad P\text{-a.s.}

The left hand side is u(s,\omega_{\cdot\wedge t}) - u(t,\omega). Hence, by the continuity of the time derivative,

\partial_t u(t,\omega) := \lim_{\delta\downarrow 0}\frac{u(t+\delta,\omega_{\cdot\wedge t}) - u(t,\omega)}{\delta}, \quad\text{for } (t,\omega)\in[0,T)\times\Omega.

3 Classical solutions

Definition 97. Let u\in C^{1,2}(\Lambda). We say u is a classical solution (resp. subsolution, supersolution) of PPDE (2.12) if \mathcal{L}u(t,\omega) = (\text{resp. } \le, \ \ge)\ 0 for all (t,\omega)\in[0,T)\times\Omega.

Example 98. Let d=1 and u(t,\omega) := E^{P_0}_t\big[\int_0^T B_s\,ds\big](\omega) = \int_0^t \omega_s\,ds + (T-t)\,\omega_t, (t,\omega)\in\Lambda. Then u\in C^{1,2}(\Lambda), and u is a classical solution of the path-dependent heat equation -\partial_t u - \tfrac12\partial^2_{\omega\omega}u = 0 with terminal condition u(T,\omega) = \int_0^T \omega_t\,dt.

We now give the comparison result for smooth functionals, which implies the uniqueness of smooth solutions.

Proposition 99. Let u (resp. v) be a C^{1,2}(\Lambda) subsolution (resp. supersolution) of the PPDE (2.12) satisfying u(T,\omega)\le v(T,\omega) for all \omega\in\Omega. Then u(t,\omega)\le v(t,\omega) for all (t,\omega)\in\Lambda.

Proof. We prove a consistency result and a comparison result; combining the two yields the claim.

Remark 100.
We recall that, unlike the standard heat equation, which always has a classical solution on [0,T), a path-dependent heat equation may fail to have a classical solution on [0,T). A simple example is the heat equation with terminal condition u(T,\omega) = B_{t_0}(\omega) for some 0<t_0<T. Then clearly u(t,\omega) = B_{t\wedge t_0}(\omega), and thus \partial_\omega u(t,\omega) = \mathbf{1}_{[0,t_0]}(t) is discontinuous. Nevertheless, u is the unique viscosity solution of the heat equation with the terminal condition u(T,\omega) = B_{t_0}(\omega). For more general semilinear PPDEs, we refer to Peng and Wang [49] for sufficient conditions for the existence of classical solutions.

4 Definition of viscosity solutions

For any u\in L^0(\Lambda), (t,\omega)\in[0,T)\times\Omega, and L>0, define

\underline{\mathcal{A}}^L u(t,\omega) := \big\{\varphi\in C^{1,2}(\Lambda^t) : (\varphi-u^{t,\omega})_t = 0 = \underline{\mathcal{S}}^{t,t}_L\big[(\varphi-u^{t,\omega})_{\cdot\wedge h}\big](0) \text{ for some } h\in\mathcal{H}^t\big\},

\overline{\mathcal{A}}^L u(t,\omega) := \big\{\varphi\in C^{1,2}(\Lambda^t) : (\varphi-u^{t,\omega})_t = 0 = \overline{\mathcal{S}}^{t,t}_L\big[(\varphi-u^{t,\omega})_{\cdot\wedge h}\big](0) \text{ for some } h\in\mathcal{H}^t\big\}. \tag{4.15}

Definition 101. (i) Let L>0. We say u\in\underline{\mathcal{U}} (resp. \overline{\mathcal{U}}) is a viscosity L-subsolution (resp. L-supersolution) of PPDE (2.12) if, for any (t,\omega)\in[0,T)\times\Omega and any \varphi\in\underline{\mathcal{A}}^L u(t,\omega) (resp. \varphi\in\overline{\mathcal{A}}^L u(t,\omega)):

\mathcal{L}^{t,\omega}\varphi(t,0) := \big\{-\partial_t\varphi - G^{t,\omega}(\cdot,\varphi,\partial_\omega\varphi,\partial^2_{\omega\omega}\varphi)\big\}(t,0) \le (\text{resp. } \ge)\ 0.

(ii) u\in\underline{\mathcal{U}} (resp. \overline{\mathcal{U}}) is a viscosity subsolution (resp. supersolution) of PPDE (2.12) if u is a viscosity L-subsolution (resp. L-supersolution) of PPDE (2.12) for some L>0.

(iii) u\in UC_b(\Lambda) is a viscosity solution of PPDE (2.12) if it is both a viscosity subsolution and a viscosity supersolution.

Remark 102. For technical simplicity, in [17] and [18] we considered only bounded viscosity solutions. By some more involved estimates one can easily extend our theory to viscosity solutions satisfying certain growth conditions. We shall leave this for future research; however, in some examples below we may consider unbounded viscosity solutions as well.

Remark 103. Since our PPDE is backward, in (4.15) the test functions \varphi are defined only after t. By this nature, both the viscosity solution u and the generator G are required only to be right continuous in (t,\omega) under d_\infty.
To prove the comparison principle, however, we will assume some stronger regularity of G; see the next chapter.

We next provide an intuitive justification of Definition 101, which shows how the nonlinear optimal stopping problems \underline{\mathcal{S}} and \overline{\mathcal{S}} appear naturally. Let u\in C^{1,2}(\Lambda) be a classical supersolution of PPDE (2.12), (t^*,\omega^*)\in[0,T)\times\Omega, and \varphi\in C^{1,2}(\Lambda^{t^*}) such that \varphi(t^*,0) = u(t^*,\omega^*). Then:

0 \le \mathcal{L}u(t^*,\omega^*) = \mathcal{L}^{t^*,\omega^*}\varphi(t^*,0) + R(t^*,0), \tag{4.16}

where

R(t,\omega) = \big\{\partial_t(\varphi-u^{t^*,\omega^*}) + \hat{\mu}\cdot\partial_\omega(\varphi-u^{t^*,\omega^*}) + \tfrac12\,\hat{\sigma}^2 : \partial^2_{\omega\omega}(\varphi-u^{t^*,\omega^*})\big\}(t,\omega) \quad\text{for } (t,\omega)\in\Lambda^{t^*},

\hat{\mu} := G_z(t^*,\omega^*,u(t^*,\omega^*),\hat{z},\hat{\gamma}) and \hat{\sigma} := \big(2G_\gamma(t^*,\omega^*,u(t^*,\omega^*),\hat{z},\hat{\gamma})\big)^{1/2} are constant drift and diffusion coefficients, and (\hat{z},\hat{\gamma}) is some convex combination of (\partial_\omega u,\partial^2_{\omega\omega}u)(t^*,\omega^*) and (\partial_\omega\varphi,\partial^2_{\omega\omega}\varphi)(t^*,0). The question is how to choose the test process \varphi so as to deduce from (4.16) that \mathcal{L}^{t^*,\omega^*}\varphi(t^*,0)\ge 0. Our crucial observation is that

d(\varphi-u^{t^*,\omega^*})(t,B^{t^*}) = R(t,B^{t^*})\,dt + \partial_\omega(\varphi-u^{t^*,\omega^*})(t,B^{t^*})\,\hat{\sigma}\,d\hat{W}_t, \quad \hat{P}\text{-a.s.},

where \hat{W} is a Brownian motion under the probability measure \hat{P}\in\mathcal{P}^{t^*}_{L_0} defined by the pair (\hat{\mu},\hat{\sigma}), and L_0 is the Lipschitz constant of the nonlinearity G. Therefore, in order to conclude from (4.16) that R(t^*,0)\le 0, we have to choose the test process \varphi so that the difference (\varphi-u^{t^*,\omega^*}) has a nonpositive \hat{P}-drift locally to the right of t^*. This essentially means that (\varphi-u^{t^*,\omega^*}) is a \hat{P}-supermartingale on some right-neighborhood [t^*,h] of t^*, and therefore

0 = (\varphi-u^{t^*,\omega^*})_{t^*} \ge E^{\hat{P}}\big[(\varphi-u^{t^*,\omega^*})_{\tau\wedge h}\big] \quad\text{for any stopping time } \tau.

Since the probability measure \hat{P} is imposed by the above calculation, we must choose the test process \varphi so that

0 = (\varphi-u^{t^*,\omega^*})_{t^*} \ge \overline{\mathcal{E}}^L_{t^*}\big[(\varphi-u^{t^*,\omega^*})_{\tau\wedge h}\big] \quad\text{for all stopping times } \tau.

Finally, since \tau = t^* is a legitimate stopping rule, we arrive at

0 = (\varphi-u^{t^*,\omega^*})_{t^*} = \overline{\mathcal{S}}^{t^*,t^*}_L\big[(\varphi-u^{t^*,\omega^*})_{\cdot\wedge h}\big],

which corresponds exactly to our definition of \overline{\mathcal{A}}^L u(t^*,\omega^*). Conversely, if the pair ((t^*,\omega^*),\varphi) satisfies the last equality, then it follows from the Snell envelope characterization of Theorem 62 that

0 = (\varphi-u^{t^*,\omega^*})_{t^*} \ge \overline{\mathcal{E}}^L_{t^*}\big[\overline{\mathcal{S}}^L_{\tau\wedge h}(\varphi-u^{t^*,\omega^*})\big] \ge \overline{\mathcal{E}}^L_{t^*}\big[(\varphi-u^{t^*,\omega^*})_{\tau\wedge h}\big], \quad\text{for all stopping times } \tau.

By the right-continuity, this implies that R(t^*,0)\le 0. Hence our definition of the set of test processes \overline{\mathcal{A}}^L u(t^*,\omega^*) is essentially necessary and sufficient for the inequality R(t^*,0)\le 0.

Remark 104. From the last intuitive justification of our definition, we see that for a semilinear path-dependent PDE, \hat{\sigma} is a constant matrix. Then, in agreement with the chapter on semilinear PPDEs, it is not necessary to vary the diffusion coefficient in the definition of the operator \mathcal{E}^L. Similarly, in the context of a linear PPDE, both coefficients \hat{\mu} and \hat{\sigma} are constant, and we may define the sets \underline{\mathcal{A}}^L u and \overline{\mathcal{A}}^L u by means of a linear expectation operator. Finally, for a first order PPDE, we may take the diffusion coefficient \hat{\sigma} = 0; see Section 8.

In the rest of this section we provide several remarks concerning our definition of viscosity solutions. In most places we comment on the viscosity subsolution only, but similar properties obviously hold for the viscosity supersolution as well.

Remark 105. As standard in the literature on viscosity solutions of PDEs:

(i) The viscosity property is a local property in the following sense. For any (t,\omega)\in[0,T)\times\Omega and any \varepsilon>0, define, as in (2.13),

h^t_\varepsilon := \inf\big\{s>t : |B^t_s|\ge\varepsilon\big\}\wedge(t+\varepsilon), \quad\text{and thus}\quad h_\varepsilon = h^0_\varepsilon. \tag{4.17}

It is clear that h^t_\varepsilon\in\mathcal{H}^t. To check the viscosity property of u at (t,\omega), it suffices to know the value of u^{t,\omega} on [t,h_\varepsilon] for an arbitrarily small \varepsilon>0. In particular, since u and \varphi are locally bounded, there is no integrability issue in (4.15). Moreover, for any \varphi\in\underline{\mathcal{A}}^L u(t,\omega) with corresponding h\in\mathcal{H}^t, by (2.13) we have h^t_\varepsilon\le h when \varepsilon is small enough.

(ii) The fact that u is a viscosity solution does not mean that the PPDE must hold with equality at some (t,\omega) and \varphi in some appropriate set. One has to check the viscosity subsolution property and the viscosity supersolution property separately.

(iii) In general, \underline{\mathcal{A}}^L u(t,\omega) could be empty.
In this case $u$ automatically satisfies the viscosity subsolution property at $(t,\omega)$. Notice, however, that this cannot happen too often: intuitively, the existence and uniqueness results mean that for the solution $u$ of the PPDE, $\mathcal{A}^{L}u(t,\omega)$ cannot be empty "too often".

Remark 106. The above Definition 101 does not reduce to the definition introduced in the semilinear context of [15] either, because we are using a different nonlinear expectation $\mathcal{E}^{L}$ here. It is obvious that any viscosity subsolution in the sense of [15] is also a viscosity subsolution in the sense of this paper, but the opposite direction is in general not true. However, the definitions of viscosity solutions are actually equivalent for semilinear PPDEs, in view of the uniqueness result of our accompanying paper [18]. See also Remark 104.

Remark 107. For $0<L_1<L_2$, obviously $\mathcal{P}^t_{L_1}\subseteq\mathcal{P}^t_{L_2}$, $\mathcal{E}^{L_2}_t\le\mathcal{E}^{L_1}_t$, and $\mathcal{A}^{L_2}u(t,\omega)\subseteq\mathcal{A}^{L_1}u(t,\omega)$. Then one can easily check that a viscosity $L_1$-subsolution must be a viscosity $L_2$-subsolution. Consequently, $u$ is a viscosity subsolution if and only if there exists an $L\ge 1$ such that, for all $L'\ge L$, $u$ is a viscosity $L'$-subsolution.

Remark 108. We have some flexibility in the choice of the set of test functions. All the results in the following chapters remain true if we replace $\mathcal{A}^{L}u$ with the $\mathcal{A}'^{L}u$ or $\mathcal{A}''^{L}u$ defined below.

(i) The minimum value in the definition of $\mathcal{A}^{L}u$ may be taken to be equal to zero, by replacing $\varphi$ with $\varphi-\varphi(t,0)+u(t,\omega)$:
$$\mathcal{A}'^{L}u(t,\omega) := \big\{\varphi\in\mathcal{A}^{L}u(t,\omega) : \varphi(t,0)=u(t,\omega)\big\}. \qquad(4.18)$$

(ii) By the same arguments as in [15], Remark 3.6, we may also replace $\mathcal{A}^{L}$ with the following $\mathcal{A}''^{L}$, with strict extremum in the nonlinear optimal stopping problem:
$$\mathcal{A}''^{L}u(t,\omega) := \Big\{\varphi\in C^{1,2}(\Lambda^t) : \exists\,\mathrm{h}\in\mathcal{H} \text{ such that, for all }\tau\in\mathcal{T}^t\text{ with }\tau>t,\ (\varphi-u^{t,\omega})_t = 0 < \mathcal{E}^{L}_t\big[(\varphi-u^{t,\omega})_{\tau\wedge\mathrm{h}}\big]\Big\}. \qquad(4.19)$$

Remark 109. We remark that the larger we choose the set of test functions, the easier it is to prove uniqueness of viscosity solutions, but the more difficult the existence becomes.
(i) If we replaced the minimum value in $\mathcal{A}^{L}u$ with the following pointwise minimum:
$$\big\{\varphi\in C^{1,2}_b(\Lambda^t) : (\varphi-u^{t,\omega})_t(0) = \inf_{(s,\omega')\in\Lambda^t}(\varphi-u^{t,\omega})_{s\wedge\mathrm{h}}(\omega') \text{ for some }\mathrm{h}\in\mathcal{H}^t\big\},$$
then typically one cannot obtain uniqueness of viscosity solutions.

(ii) On the other hand, if we replaced the nonlinear expectation in $\mathcal{A}^{L}u$ with the pointwise infimum along the flat path from the right:
$$\big\{\varphi\in C^{1,2}_b(\Lambda^t) : (\varphi-u^{t,\omega})_t(0) = \inf_{t\le s\le t+\delta}(\varphi-u^{t,\omega})_s(0) \text{ for some }\delta>0\big\},$$
then typically one cannot obtain existence of viscosity solutions.

We next report the following result, whose proof follows exactly the lines of the semilinear case.

Proposition 110. Let Assumption 93 hold true, and let $u$ be a viscosity subsolution of PPDE (2.12). For $\lambda\in\mathbb{R}$, the process $\tilde u_t := e^{\lambda t}u_t$ is a viscosity subsolution of:
$$\tilde{\mathcal{L}}\tilde u := -\partial_t\tilde u - \tilde G(t,\omega,\tilde u,\partial_\omega\tilde u,\partial^2_{\omega\omega}\tilde u) \le 0, \qquad(4.20)$$
where $\tilde G(t,\omega,y,z,\gamma) := -\lambda y + e^{\lambda t}G(t,\omega,e^{-\lambda t}y,e^{-\lambda t}z,e^{-\lambda t}\gamma)$.

Remark 111. Under Assumption 93, we are not able to prove a more general change of variable formula.

4.1 Relation with viscosity solutions of PDEs

In the Markovian case, namely $G(t,\omega,\cdot)=g(t,\omega_t,\cdot)$ and $u(t,\omega)=v(t,\omega_t)$, the PPDE (2.12) reduces to the following PDE, which we have studied in the previous chapters:
$$\mathcal{L}v(t,x) := \big\{-\partial_t v - g(\cdot,v,\partial_x v,\partial^2_{xx}v)\big\}(t,x) = 0, \quad t\in[0,T),\ x\in\mathbb{R}^d. \qquad(4.21)$$
It is clear that, in the Markovian setting with smooth $v$, $u$ is a classical solution (resp. subsolution, supersolution) of PPDE (2.12) if and only if $v$ is a classical solution (resp. subsolution, supersolution) of PDE (4.21). We give the following proposition for viscosity solutions.

Proposition 112. Assume that $u(t,\omega)=v(t,\omega_t)$ for a function $v:[0,T]\times\mathbb{R}^d\to\mathbb{R}$ with $u\in\mathcal{U}$ and $v$ upper semicontinuous. If $u$ is a viscosity subsolution of the PPDE (2.12), then $v$ is a viscosity subsolution of the PDE (4.21).

Proof. Pick $L>0$ such that $u$ is a viscosity $L$-subsolution, and fix $(t,x)\in(0,T)\times\mathbb{R}^d$ and a $C^{1,2}$ function $\varphi$ such that $\varphi-v$ has a local minimum equal to $0$ at $(t,x)$.
We pick $d>0$ such that $(\varphi-v)(s,y)\ge(\varphi-v)(t,x)=0$ for all $(s,y)\in[t,t+d]\times\bar B_d(x)$, where $\bar B_d(x)$ is the closed ball of radius $d$ and center $x$. We define the functional $\psi\in C^{1,2}(\Lambda^t)$ by
$$\psi(s,\tilde\omega) := \varphi\big(s\wedge\mathrm{h}^t_d(\tilde\omega),\,x+\tilde\omega_s\big).$$
Let $\omega\in\Omega$ be the path given by the linear interpolation of $(0,0)$, $(t,x)$, and $(T,x)$. Then for $t\le s\le \mathrm{h}^t_d(\tilde\omega)$ the inequality
$$(\psi-u^{t,\omega})(s,\tilde\omega) = (\varphi-v)(s,x+\tilde\omega_s) \ge (\varphi-v)(t,x) = 0 = (\psi-u^{t,\omega})(t,0)$$
holds. Therefore $\psi\in\mathcal{A}^{L}u(t,\omega)$. By Itô's formula and the viscosity subsolution property of $u$,
$$-\partial_t\varphi(t,x) - g\big(t,x,v(t,x),\partial_x\varphi(t,x),\partial^2_{xx}\varphi(t,x)\big) \le 0.$$
Assume now that $t=0$ and that $(0,x)$ is a strict local minimum of $\varphi-v$. We fix $t_n\downarrow 0$ and define $\delta_n := \inf_{t_n\le s\le d,\,y\in\bar B_d(x)}(\varphi-v)(s,y)$. By compactness we can choose $(s_n,y_n)$ at which this minimum is achieved, and by the strict minimality condition, $(s_n,y_n)\to(0,x)$ as $n$ goes to infinity. Similarly to the previous case, we can construct $\psi_n\in\mathcal{A}^{L}u(s_n,\omega^n)$, where $\omega^n$ is the linear interpolation of $(0,0)$, $(s_n,y_n)$, and $(T,y_n)$, and show that
$$-\partial_t\varphi(s_n,y_n) - g\big(s_n,y_n,v(s_n,y_n),\partial_x\varphi(s_n,y_n),\partial^2_{xx}\varphi(s_n,y_n)\big) \le 0.$$
The convergence of $(s_n,y_n)$ and the continuity of the terms give the result.

Remark 113. The opposite implication is in general not true. We point out, though, that when the PDE is well posed, by uniqueness our definition of viscosity solution of PPDE (2.12) is consistent with the viscosity solution of PDE (4.21) in the standard sense.

4.2 Consistency with classical solutions

Theorem 114. Let Assumption 93 hold and $u\in C^{1,2}(\Lambda)\cap UC_b(\Lambda)$. Then $u$ is a classical solution (resp. subsolution, supersolution) of PPDE (2.12) if and only if it is a viscosity solution (resp. subsolution, supersolution).

Proof. We prove the subsolution property only. Assume $u$ is a viscosity $L$-subsolution. For any $(t,\omega)$, since $u\in C^{1,2}(\Lambda)$, we have $u^{t,\omega}\in C^{1,2}(\Lambda^t)$ and thus $u^{t,\omega}\in\mathcal{A}^{L}u(t,\omega)$ with $\mathrm{h}:=T$. By the definition of viscosity $L$-subsolution we see that $\mathcal{L}u(t,\omega)\le 0$. On the other hand, assume $u$ is a classical subsolution.
If $u$ is not a viscosity subsolution, then it is not a viscosity $L_0$-subsolution. Thus there exist $(t,\omega)\in\Lambda$ and $\varphi\in\mathcal{A}^{L_0}u(t,\omega)$ such that $2c := \mathcal{L}^{t,\omega}\varphi(t,0) > 0$. Without loss of generality, we set $t:=0$ and, by Remark 105 (i), let $\mathrm{h}=\mathrm{h}_\varepsilon\in\mathcal{H}$, defined in (2.13) for some small constant $\varepsilon>0$, be the hitting time used in the definition of $\mathcal{A}^{L_0}u(0,0)$. Now recall the definition of $\mathcal{P}_{L_0}$, and let $\mathbb{P}\in\mathcal{P}_{L_0}$ correspond to some constants $\mu\in\mathbb{R}^d$ and $\sigma\in\mathbb{S}^d$ which will be determined later. Then
$$0 \le \mathcal{E}^{L_0}_0\big[(\varphi-u)_{\mathrm{h}}\big] \le \mathbb{E}^{\mathbb{P}}\big[(\varphi-u)_{\mathrm{h}}\big].$$
Applying the functional Itô formula (2.14) and noticing that $(\varphi-u)_0=0$, we have
$$(\varphi-u)_{\mathrm{h}} = \int_0^{\mathrm{h}}\Big[\partial_t(\varphi-u)_s + \tfrac12\,\partial^2_{\omega\omega}(\varphi-u)_s:\sigma^2 + \mu\cdot\partial_\omega(\varphi-u)_s\Big]ds + \int_0^{\mathrm{h}}\partial_\omega(\varphi-u)_s\,\sigma\,dW^{\mathbb{P}}_s.$$
Taking expected values, this leads to
$$0 \le \mathbb{E}^{\mathbb{P}}\Big[\int_0^{\mathrm{h}}\Big(\partial_t(\varphi-u)_s + \tfrac12\,\partial^2_{\omega\omega}(\varphi-u)_s:\sigma^2 + \mu\cdot\partial_\omega(\varphi-u)_s\Big)ds\Big] = \mathbb{E}^{\mathbb{P}}\Big[\int_0^{\mathrm{h}}(\tilde{\mathcal{L}}\varphi-\tilde{\mathcal{L}}u)_s\,ds\Big],$$
where $\tilde{\mathcal{L}}\varphi_s := -\mathcal{L}\varphi_s - G(\cdot,\varphi,\partial_\omega\varphi,\partial^2_{\omega\omega}\varphi)_s + \tfrac12(\partial^2_{\omega\omega}\varphi)_s:\sigma^2 + \mu\cdot(\partial_\omega\varphi)_s$. Since $\tilde{\mathcal{L}}\varphi$ and $\tilde{\mathcal{L}}u$ are continuous, for $\varepsilon$ small enough we have $|\tilde{\mathcal{L}}\varphi_s-\tilde{\mathcal{L}}\varphi_0| + |\tilde{\mathcal{L}}u_s-\tilde{\mathcal{L}}u_0| \le \frac{c}{2}$ on $[0,\mathrm{h}]$. Then
$$0 \le \mathbb{E}^{\mathbb{P}}\Big[\big(\tilde{\mathcal{L}}\varphi_0-\tilde{\mathcal{L}}u_0+\tfrac{c}{2}\big)\,\mathrm{h}\Big]. \qquad(4.22)$$
Note that $\mathcal{L}u_0\le 0$, $\mathcal{L}\varphi_0 = 2c$, and $\varphi_0=u_0$. Thus
$$\tilde{\mathcal{L}}\varphi_0-\tilde{\mathcal{L}}u_0 \le -2c + \tfrac12\,\partial^2_{\omega\omega}(\varphi-u)_0:\sigma^2 + \mu\cdot\partial_\omega(\varphi-u)_0 - \big[G(\cdot,u,\partial_\omega\varphi,\partial^2_{\omega\omega}\varphi)_0 - G(\cdot,u,\partial_\omega u,\partial^2_{\omega\omega}u)_0\big].$$
By Assumption 93 (iii), there exist $\mu$ and $\sigma$ such that $\mathbb{P}\in\mathcal{P}_{L_0}$ and
$$G(\cdot,u,\partial_\omega\varphi,\partial^2_{\omega\omega}\varphi)_0 - G(\cdot,u,\partial_\omega u,\partial^2_{\omega\omega}u)_0 = \tfrac12\,\partial^2_{\omega\omega}(\varphi-u)_0:\sigma^2 + \mu\cdot\partial_\omega(\varphi-u)_0.$$
Then $\tilde{\mathcal{L}}\varphi_0-\tilde{\mathcal{L}}u_0 \le -2c$, and (4.22) leads to $0 \le \mathbb{E}^{\mathbb{P}}\big[-\tfrac{3c}{2}\,\mathrm{h}\big] < 0$, a contradiction.

5 Some Examples with Representation Formula

In this section we provide several examples of viscosity solutions.

5.1 First order PPDEs

Example 115. Suppose that $u(t,\omega)=v(\omega_t)$ for all $(t,\omega)\in\Lambda$, where $v:\mathbb{R}^d\to\mathbb{R}$ is bounded and continuous. Then $\partial_t u=0$, by the definition of the time-derivative. We now verify that $u$ is a viscosity solution of the equation $\partial_t u=0$. Indeed, for $\varphi\in\mathcal{A}^{L}u(t,\omega)$, it follows from our definition that, for some $\mathrm{h}\in\mathcal{H}^t$:
$$(\varphi-u^{t,\omega})_t = 0 \le \mathbb{E}^{\mathbb{P}_{0,0}}\big[(\varphi-u^{t,\omega})_{(t+\delta)\wedge\mathrm{h}}\big] \quad\text{for all }\delta>0.$$
Here $\mathbb{P}_{0,0}$ is the probability measure corresponding to $\mu=0$, $\sigma=0$ in (4.18).
Notice that under P 0;0 , the canonical process ! is frozen to its value at time t. Then h = T , P 0;0 -a.s. and thus, for <Tt, '(t; 0)v(! t ) = ('u t;! ) t E P 0;0 ('u t;! ) (t+)^h ='(t +; 0)v(! t ): This implies that @ t '(t; 0) 0. A similar argument shows that @ t '(t; 0) 0 for all '2A L u(t;!). Example 116. Let d = 1, we check that u(t;!) := 2B t B t is a viscosity solution of the rst order equation: @ t uj@ ! uj + 1 = 0: (5.23) u is not smooth, so it is a viscosity solution but not a classical solution. When ! t <! t , it is clear that u is smooth with @ t u(t;!) = 0;@ ! u(t;!) =1 and thus satises (5.23). So it suces to check the viscosity property when ! t =! t . Without loss of generality, we check it at (t;!) = (0; 0). (i) We rst check thatA L u(0; 0) is empty for L 1, and thus u is a viscosity subsolution. Indeed, assume '2A L u(0; 0) with corresponding h2H. By Remark 105 (i), without loss of generality we may assume h = h " for some small"> 0, and thus@ t ';@ 2 !! ' are bounded on [0; h]. Note thatP 0 2P L . By denition ofA L we have, for any 0<<", 0 E P 0 h ('u) ^h i =E P 0 h Z ^h 0 (@ t ' +@ 2 !! ')(t;!)ds 2B ^h i CE P 0 [^ h] 2E P 0 [B ^h ]C 2E P 0 [B ] + 2E P 0 [B 1 fhg ] Cc p +C p P 0 (h)C[ +" 2 ]c p ; where c := 2E P 0 [B 1 ]> 0. This leads to a contradiction when is small enough. Therefore, A L u(0; 0) is empty. (ii) We next check the viscosity supersolution property. Assume to the contrary thatc := @ t '(0; 0)j@ ! '(0; 0)j + 1< 0 for some'2A L u(0; 0) andL 1. Let := sgn (@ ! '(0; 0)) (with the convention sgn (0) := 1), := 0, andP2P L be determined by these;.. When = 1, we have B t = t;B t = t, P-a.s. When =1, we have B t =t, B t = 0, P-a.s. In both cases, it holds that u(t;!) = t, h " = ", P-a.s. By choosing h = h " and " small 102 enough, we may assumej@ t '(t;B)@ t '(0; 0)j +j@ ! '(t;B)@ ! '(0; 0)j c 2 fort h " . By the denition ofA L u(0; 0) we get 0 E P h ('u) h" i =E P h Z " 0 (@ t ' +@ ! ') t dt" i E P h Z " 0 @ t ' 0 +@ ! 
' 0 c 2 dt i " = E P h Z " 0 @ t ' 0 +j@ ! ' 0 j c 2 dt i " = Z " 0 1 +c c 2 dt" = 1 2 c" > 0: This is the required contradiction, and thus u is a viscosity supersolution of (5.23). 5.2 Semi-linear PPDEs and BSDEs We now consider the following semi-linear PPDE: @ t u 1 2 2 (t;!) :@ 2 !! uF t;!;u;(t;!)@ ! u = 0; u(T;!) =(!); (5.24) where2S d andF areF-progressively measurable in all variables, and isF T -measurable. We note that [15] studied the case =I d for simplicity. We shall assume Assumption 117. (i) C 0 I d > 0, and R T 0 2 (s; 0)ds + R T 0 F 2 (s; 0; 0; 0)ds<1. (ii) is L 0 -Lipschitz continuous in !, and F is L 0 -Lipschitz continuous in (y;z). (iii)F and are uniformly continuous in!, and the common modulus of continuity function 0 has polynomial growth. (iv) and F (;y;z) are right continuous in (t;!) under d 1 for any (y;z), in the sense of Denition 9. The assumption > 0 and that F depends on the gradient term through the special form (t;!)@ ! u are mainly needed for the subsequent BSDE representation. For any (t;!)2 , consider the following decoupled FBSDE on [t;T ]: 8 > > < > > : X s = Z s t t;! (r;X )dB t r ; Y s = t;! (X ) + Z T s F t;! (r;X ;Y r ;Z r )dr Z T s Z r dB t r ; P t 0 a.s. (5.25) 103 Under Assumption 117, clearly FBSDE (5.25) has a unique solution (X t;! ;Y t;! ;Z t;! ). Alter- natively, we may consider the BSDE in weak formulation: Y t;! s = t;! (B t )+ Z T s F t;! (r;B t ;Y t;! r ;Z t;! r )dr Z T s Z t;! r ( t;! (r;B t )) 1 dB t r ; P t;! -a.s. (5.26) whereP t;! :=P t 0 (X t;! ) 1 denotes the distribution ofX t;! . Then, for any xed (t;!), Y t;! t =Y t;! t and is a constant due to the Blumenthal zero-one law: Proposition 118. Under Assumption 117, then u2 UC b () and satises the dynamic programming principle: for any (t;!)2 and 2T t , Y t;! s =u t;! (;B t ) + Z s F t;! (r;B t ;Y t;! r ;Z t;! r )dr Z s Z t;! r ( t;! (r;B t )) 1 dB t r ; P t;! -a.s. (5.27) Proof. 
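Under a Markovian specialization of the decoupled FBSDE above, with coefficients $\sigma(r,x)$, generator $F(r,x,y,z)$, and terminal condition $g(X_T)$, the system can be approximated numerically. The following Python sketch is our own illustration (all function names, the discretization, and the regression basis are assumptions, not constructions from this thesis): the forward equation is simulated by Euler-Maruyama, and $(Y,Z)$ by the standard backward least-squares regression scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def solve_fbsde(x0, T, sigma, F, g, n_steps=50, n_paths=20000, deg=2):
    """Backward least-squares scheme for the (Markovian) decoupled FBSDE
    X_s = x0 + int_0^s sigma(r, X_r) dB_r,
    Y_s = g(X_T) + int_s^T F(r, X_r, Y_r, Z_r) dr - int_s^T Z_r dB_r."""
    dt = T / n_steps
    dB = rng.normal(0.0, np.sqrt(dt), size=(n_steps, n_paths))
    # forward Euler-Maruyama for X
    X = np.empty((n_steps + 1, n_paths))
    X[0] = x0
    for i in range(n_steps):
        X[i + 1] = X[i] + sigma(i * dt, X[i]) * dB[i]
    # backward induction: Y_i = E[Y_{i+1} | X_i] + F(...) dt, with the
    # conditional expectations estimated by polynomial regression on X_i
    Y = g(X[-1])
    for i in range(n_steps - 1, 0, -1):
        Z = np.polyval(np.polyfit(X[i], Y * dB[i], deg), X[i]) / dt
        Ey = np.polyval(np.polyfit(X[i], Y, deg), X[i])
        Y = Ey + F(i * dt, X[i], Ey, Z) * dt
    # at time 0 the state X_0 is deterministic: condition by plain averaging
    Z0 = np.mean(Y * dB[0]) / dt
    Ey0 = np.mean(Y)
    return Ey0 + F(0.0, x0, Ey0, Z0) * dt

# sanity check: F(t,x,y,z) = -r*y, sigma = 1, g(x) = x, so the BSDE is pure
# discounting of a martingale and the exact value is Y_0 = exp(-r*T) * x0
r, T, x0 = 0.1, 1.0, 2.0
y0 = solve_fbsde(x0, T, lambda t, x: 1.0, lambda t, x, y, z: -r * y, lambda x: x)
print(y0)  # close to exp(-0.1) * 2, i.e. about 1.81
```

With a generator depending only on $y$, the scheme can be checked against the closed-form discounted value, which is how the sketch above is calibrated; for $z$-dependent generators the regression estimate of $Z$ actually enters the recursion.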
Indeed, by standard arguments it is clear that, for any p 1 the following classical niteness results hold, E P t 0 h kX t;! k p T +kY t;! k p T + Z T t jZ t;! s j 2 ds p=2 i C p ; E P t;! h kB t k p T +kY t;! k p T + Z T t j[ t;! (s;B t )] 1 Z t;! s j 2 ds p=2 i C p ; Additionally, notice that for !;! 0 2 underP t 0 dX t;! s = t;! (s;X t;! )dB t s ; dX t;! 0 s = t;! 0 (s;X t;! 0 )dB t s andj t;! (s;X t;! ) t;! 0 (s;X t;! 0 )j L 0 (jj!! 0 jj t +jjX t;! X t;! 0 jj s ). Therefore using Gronwall type estimates : E P t 0 h kX t;! X t;! 0 k 2 T i C 0 (k!! 0 k t ) 2 ; and, since 0 has polynomial growth, E P t 0 h kY t;! Y t;! 0 k 2 T + Z T t jZ t;! s Z t;! 0 s j 2 ds i C 0 (k!! 0 k t ) 2 +CE P t 0 h C 0 (kX t;! X t;! 0 k T ) 2 i C 1 (k!! 0 k t ) 2 ; 104 for some modulus of continuity function 1 . In particular, this implies that ju(t;!)jC and ju(t;!)u(t;! 0 )jC 1 (k!! 0 k t ): (5.28) Given the above regularity, by standard arguments in BSDE theory, we have the following dynamic programming principle: for any t<t 0 T , Y t;! s =u t;! (t 0 ;B t ) + Z t 0 s F t;! (r;B t ;Y t;! r ;Z t;! r )dr Z t 0 s Z t;! r ( t;! (r;B t )) 1 dB t r ;P t;! -a.s.(5.29) In particular,u that we have dened! by! and the processY t;! satisfyY t;! s =u t;! (s;B t ) for all tsT ,P t;! -a.s. Denote := d 1 (t;!); (t 0 ;!) . Then ju t u t 0j(!) = E P t;! h Y t;! t Y t;! t 0 +u t;! (t 0 ;B t )u(t 0 ;!) i = E P t;! h Z t 0 t F t;! (r;B t ;Y t;! r ;Z t;! r )dr +u t;! (t 0 ;B t )u(t 0 ;!)] i E P t;! h Z t 0 t jF t;! (r;B t ;Y t;! r ;Z t;! r )jdr +C 1 +kB t k t 0 i ; (5.30) Notice that E P t;! h Z t 0 t F t;! (r;B t ;Y t;! r ;Z t;! r ) dr i CE P t;! h Z t 0 t jF (s;! :^t ; 0; 0)j + 0 (jjB t jj s ) +jY t;! r j +jZ t;! r j dr i C(jj!jj t ) p E P t;! h Z t 0 t 1 +jY t;! r j 2 +jZ t;! r )j 2 dr i 1=2 C(jj!jj t ) p ; where the constant C(jj!jj t ) might eventually depend onjj!jj t . 
As for the second term, since 0 has polynomial growth, one can easily see that we may assume without loss of generality that 1 also has polynomial growth. Note that t 0 t . Then it is clear that there exists a modulus of continuity function 0 such that E P t;! h 1 +kB t k t 0 i 0 (): 105 Without loss of generality we assume 0 () p . Then, plugging the last estimates into (5.30) and combining with (5.28), for all tt 0 and !;! 0 2 ju(t;!)u(t 0 ;! 0 )j jj!jjt (d 1 ((t;!); (t 0 :! 0 ))) (5.31) for a modulus of continuity that might depend onjj!jj t . Moreover, given the regularity in t, we may extend the dynamic programming principle (5.29) to stopping times, proving (5.27). Proposition 119. Under Assumption 117, u(t;!) :=Y t;! t =Y t;! t is a viscosity solution of PPDE (5.24). Proof. LetL be a Lipschitz constant ofF inz satisfyingjj p 2L. We now show thatu is an L-viscosity solution. Without loss of generality, we prove only the viscosity subsolution property at (t;!) = (0; 0). For notational simplicity we omit the superscript 0;0 in the rest of this proof. Assume to the contrary that, c := @ t ' + 1 2 2 :@ 2 !! ' +F (;u;@ ! ') (0; 0)> 0 for some '2A L u(0; 0): Let h2H be the hitting time corresponding to' in (4.15), and by Remark 105 (i), without loss of generality we may assume h = h " for some small " > 0. Since '2 C 1;2 () and u2UC b (), by Assumption 117 (iv) and the uniform Lipschitz property of F in (y;z), we may assume " is small enough such that @ t ' + 1 2 2 :@ 2 !! ' +F (;u;@ ! ') (t;!) c 2 > 0; t2 [0; h]: 106 Notice that dhBi t = 2 (t;B )dt, P-a.s. Using the dynamic programming principle (5.27), and applying It^ o's formula on ', we have: ('u) h = ('u) h ('u) 0 = Z h 0 @ t ' + 1 2 2 :@ 2 !! ' +F (;u;Z) (s;B )ds + Z h 0 @ ! ' 1 Z (s;B )dB s Z h 0 c 2 +F (;u;@ ! ')F (;u;Z) (s;B )ds + Z h 0 @ ! 'Z (s;B ) 1 (s;B )dB s = Z h 0 h c 2 + (@ ! 'Z) i (s;B )ds + Z h 0 @ ! 'Z (s;B ) 1 (s;B )dB s = c 2 h + Z h 0 @ ! 'Z (s;B ) 1 (s;B )dB s s ds ; P-a.s. wherejjL. 
Notice that 1 dB t is a P-Brownian motion. Applying Girsanov Theorem one sees immediately that there exists ~ P2P L equivalent toP such that 1 dB t t dt is a ~ P-Brownian motion. Then the above inequality holds ~ P-a.s., and by the denition ofA L u: 0 E ~ P ('u) h c 2 E ~ P [h] < 0; which is the required contradiction. Remark 120. For FBSDE (5.25) with (t;!) = (0; 0), we haveY s :=u(s;X ) P 0 -a.s. This extends the well known nonlinear Feynman-Kac formula in [42] to the path-dependent case. 5.3 Path dependent HJB equations and 2BSDEs Let K be an arbitrary set in some measurable space. We now consider the following path dependent HJB equation: @ t uG(t;!;u;@ ! u;@ 2 !! u) = 0; u(T;!) =(!); (5.32) where G(t;!;y;z; ) := sup k2K h 1 2 2 (t;!;k) : +F (t;!;y;(t;!;k)z;k) i ; where2S d andF areF-progressively measurable in all variables, and isF T -measurable. We shall assume Assumption 121. (i) , F (t;!; 0; 0;k), and are bounded by C 0 , and > 0. (ii) is uniformly Lipschitz continuous in !, and F is L 0 -Lipschitz continuous in (y;z). 107 (iii)F and are uniformly continuous in!, and the common modulus of continuity function 0 has polynomial growth. (iv) (;k), F (;y;z;k), and G(;y;z) are right continuous in (t;!) under d 1 for any (y;z;k), in the sense of Denition 9. For each t, letK t denote the set of F t -progressively measurable K-valued processes on t . For any (t;!)2 and k2K t , let (X t;!;k ;Y t;!;k ;Z t;!;k ) denote the solution to the following decoupled SDE: 8 > > < > > : X s = Z s t t;! (r;X ;k r )dB t r ; Y s = t;! (X ) + Z T s F t;! (r;X ;Y r ;Z r ;k r )dr Z T s Z r dB t r ; P t 0 a.s. (5.33) DenoteP t;!;k :=P t 0 (X t;!;k ) 1 . Since > 0, as discussed in [54]X t;!;k and B t induce the sameP t 0 -augmented ltration, and thus there exists ~ k2K t such that ~ k(X t;!;k ) =k(B t ), P t 0 -a.s. Let (Y t;!;k ;Z t;!;k ) denote the solution to the following BSDE on [t;T ]: Y s = t;! (B t ) + Z T s F t;! (r;B t ;Y r ;Z r ; ~ k r )dr Z T s Z r ( t;! 
(r;B t ; ~ k r )) 1 dB t r ; P t;!;k -a.s. Notice that the law of ( t;! (X );F t;! (r;X;y;z;k r );dB t r ) underP t 0 is the same as the law of ( t;! (B t );F t;! (r;B t ;y;z; ~ k r ); t;! r ( ~ k r )dB t r ) underP t;!;k . HenceY t;!;k t =Y t;!;k t We now consider the stochastic control problem: u(t;!) := sup k2K t Y t;!;k t ; (t;!)2 : The next results shows that our notion of viscosity solution is also suitable for this stochastic control problem. Proposition 122. Under Assumption 121, u is uniformly continuous and bounded. Proof. Under assumptions (121), we have the following a priori estimate : E t;!;k sup tsT jY t;!;k s j 2 + Z T t jZ t;!;k r j 2 dr ! C: 108 Additionally, for (t;!); (t;! 0 )2 , notice thatjj! t X t;!;k ! 0 t X t;! 0 :k jj T jj!! 0 jj t + jjX t;!;k X t;! 0 ;k jj t T , therefore under our boundedness and regularity assumptions : jY t;!;k t Y t;! 0 ;k t j 2 CE t 0 h j t;! (X t;!;k ) t;! 0 (X t;! 0 ;k )j 2 i +CE t 0 Z T t jF t;! (s;X t;!;k ;Y t;!;k s ;Z t;!;k s ;k s )F t;! 0 (s;X t;! 0 ;k ;Y t;!;k s ;Z t;!;k s ;k s )j 2 ds CE t 0 h 2 0 (jj! 0 !jj t +jjX t;!;k X t;! 0 ;k jj t T ) i : (5.34) 0 has at most polynomial growth, denote p 0 > 0 this growth power. For xed > 0, we can estimates the dierencejY t;!;k t Y t;! 0 ;k t j as follows : jY t;!;k t Y t;! 0 ;k t j 2 (5.35) CE t 0 2 0 (jj! 0 !jj t +jjX t;!;k X t;! 0 ;k jj t T ) CE t 0 2 0 (jj! 0 !jj t +jjX t;!;k X t;! 0 ;k jj t T )1 fjjX t;!;k X t;! 0 ;k jj t T >g +CE t 0 2 0 (jj! 0 !jj t +)1 fjjX t;!;k X t;! 0 ;k jj t T g C s E t 0 jjX t;!;k X t;! 0 ;k jj t T h 1 +jj!! 0 jj 2p 0 t i +C 2 0 (jj! 0 !jj t +) C r jj! 0 !jj t h 1 +jj!! 0 jj 2p 0 t i + 2 0 (jj! 0 !jj t +) ! : If we choose := p jj!! 0 jj t , then the last line becomes a modulus of continuity 1 with at most polynomial growth. First of all, the previous estimates gives that Y t;!;k t is bounded by a constant that only depends onC 0 ;T;L 0 , and 0 . With a passage to supremum ink, we see thatu is bounded. Additionally ju(t;!)u(t;! 
0 )j sup k2K t jY t;!;k t Y t;! 0 ;k t j 1 (jj!! 0 jj t ); (5.36) which show that for xed t, u is uniformly continuous in ! uniformly in t. We now prove the dynamic programming for the u. We rst notice that for P t;!;k -a.s. ~ !2 t it holds that the control k t 1 ;~ ! 2K t 1 and Y t;!;k t 1 (~ !)Y t 1 ;! t ~ !;k t 1 ;~ ! t 1 u(t 1 ;! t ~ !): 109 ThereforeP t;!;k -a.s. Y t;!;k t 1 u(t 1 ;! t B t ). We can seeY t;!;k as the solution of the BSDE with nal condition Y t;!;k t 1 at time t 1 and claim that by comparison of BSDEs, u(t;!) sup k2K t ~ Y t;!;k t where ( ~ Y t;!;k ; ~ Z t;!;k ) solves underP t;!;k the BSDE ~ Y s =u t;! (t 1 ;B t ) + Z t 1 s F t;! (r;B t ; ~ Y r ; ~ Z r ; ~ k r )dr Z t 1 s ~ Z r ( t;! (r;B t ; ~ k r )) 1 dB t r ; P t;!;k -a.s. We then x " > 0 and choose ! i 2 t andfE i g i2N an F t t 1 -measurable partition of t such that for all i and !2E i ,jj!! i jj". We choose k i 2K t 1 such that " +Y t 1 ;! t! i ;k i t 1 u(t 1 ;! ! i ); for all i: We have already proven that Y t 1 ;! t! i ;k i t 1 and u(t 1 ;! ! i ) uniformly continuous with respect to ! i . We dene the following control k 0 r = k r on [t;t 1 ] and k 0 r = k i r on (t 1 ;T ]. Then the previous uniform continuities allow us to claim that ~ Y t;!;k 0 +(") Y t;!;k t for some modulus of continuity . Hence the following dynamic programming principal on deterministic times hold : u(t;!) = sup k2K t ~ Y t;!;k t : We estimate the variation in time for k2K t u(t 1 ;!)u(t;!) =u(t 1 ;!) ~ Y t;!;k t + ~ Y t;!;k t u(t;!) =u(t 1 ;!)u(t 1 ;! t B t ) Z t 1 t F t;! (r;B t ; ~ Y t;!;k r ; ~ Z t;!;k r ; ~ k r )dr + Z t 1 t ~ Z t;!;k r ( t;! r ( ~ k r )) 1 dB t r + ~ Y t;!;k t u(t;!) We take rst the expectation under P t;!;k then absolute values to have : ju(t 1 ;!)u(t;!)j j ~ Y t;!;k t u(t;!)j +(jj!! t B t jj t 1 ) + Z t 1 t E P t;!;k h jF t;! 
(r;B t ; ~ Y t;!;k r ; ~ Z t;!;k r ; ~ k r )j i dr 110 Finally using the boundedness assumptions, Lipschitz continuity ofF , and estimates on the BSDEs we have the following estimates ju(t 1 ;!)u(t;!)j j ~ Y t;!;k t u(t;!)j +C p t 1 t +(jj! :^t !jj t 1 ): Finally taking the supremum in k to obtain : ju(t 1 ;!)u(t;!)j(d 1 (t 1 ;!); (t;!)): Finally regrouping the regularity in time and space, we obtain that there is a modulus of continuity ~ 0 , which only depends on, C 0 ;L 0 ; 0 ;T , such that ju(t;!)u(t 0 ;! 0 )j ~ 0 (d 1 ((t;!); (t 0 ;! 0 ))): Proposition 123. Under Assumption 121, u is a viscosity solution of the PPDE (5.32). Proof. Notice that without loss of generality we can assume that F and G are increasing in y: (5.37) We can extend the dynamic programming principle to random times : u(t;!) = sup k2K t Y t;!;k t (;u t;! (;)); for any (t;!)2 ;2T t ; (5.38) where, for anyF t -measurable random variable, (Y;Z) := (Y t;!;k (;);Z t;!;k (;)) solves the following BSDE on [t;]: Y s =(B t ) + Z s F t;! (r;B t ;Y r ;Z r ; ~ k r )dr Z s Z r ( t;! (r;B t ; ~ k r )) 1 dB t r ; P t;!;k a.s. We now prove the viscosity property, for the same L as in Proposition 119. Again we shall only prove it at (t;!) = (0; 0) and we will omit the superscript 0;0 . However, since in this caseu is dened through a supremum, we need to prove the viscosity subsolution property and supersolution property dierently. Viscosity Lsubsolution property. Assume to the contrary that, c := @ t ' +G(;u;@ ! ';@ 2 !! ') (0; 0) > 0 for some '2A L u(0; 0): 111 As in Proposition 119, let h = h " 2H be the hitting time corresponding to ' in (4.15). Since'2C 1;2 (), u2UC b (), and by Assumption 121 (iv) G is right continuous in (t;!) under d 1 , we may assume " is small enough such that @ t ' +G(;u;@ ! ';@ 2 !! ') (t;!) c 2 > 0; t2 [0; h]: By the denition of G, this implies that, for any t2 [0; h] and k2K, @ t ' + 1 2 2 (t;!;k) :@ 2 !! ' +F (t;!;u;(;k)@ ! ';k) (t;!) 
c 2 > 0: Now for any k 2 K, notice that dhBi t = 2 (t;B ; ~ k t )dt, P k -a.s. Denote (Y k ;Z k ) := (Y k (h;u(h;));Z k (h;u(h;))). One can easily see that u(s;B)Y k s , 0 s h, P k -a.s. Applying It^ o's formula on ', we see that for any > 0: P k -a.s. ('Y k ) 0 ('u) h^ ('Y k ) 0 ('Y k ) h^ = Z h^ 0 h @ t ' + 1 2 2 :@ 2 !! ' +F (;Y k ;Z k ) i (s;B ; ~ k s )ds Z h^ 0 @ ! ' 1 Z k (s;B ; ~ k s )dB s Z h^ 0 h c 2 +F (;u;@ ! ')F (;Y k ;Z k ) i (s;B ; ~ k s )ds Z h^ 0 @ ! ' 1 Z k (s;B ; ~ k s )dB s : Note again thatY k s u(s;B ). Then by (5.37) we have uY k 0 'u h^ = 'Y k 0 'u h^ Z h^ 0 h c 2 +F (;u;@ ! ')F (;u;Z k ) i (s;B ; ~ k s )ds Z h^ 0 @ ! ' 1 Z k (s;B ; ~ k s )dB s = Z h^ 0 h c 2 + (@ ! 'Z k ) i (s;B ; ~ k s )ds Z h^ 0 @ ! ' 1 Z k (s;B ; ~ k s )dB s = c 2 (h^) Z h^ 0 @ ! 'Z k (s;B ; ~ k s ) ( 1 (s;B ; ~ k s )dB s s ds); P k -a.s. 112 wherejjL and is bounded. As in Proposition 119, we may dene ~ P k 2P L equivalent toP such that 1 (t;B ; ~ k t )dB t t dt is a ~ P k -Brownian motion. Then the above inequality holds ~ P k -a.s., and by the denition ofA L u, we have u 0 Y k 0 u 0 Y k 0 E ~ P k ('u) h^ c 2 E ~ P k [h^] c 2 h 1 ~ P k [h] i : By (2.13), for small enough we have u 0 Y k 0 c 2 h 1C" 4 2 i c 4 > 0: This implies that u 0 sup k2K Y k 0 c 4 > 0, which is in contradiction with (5.38). Viscosity Lsupersolution property. Assume to the contrary that, c := n @ t ' +G(;u;@ ! ';@ 2 !! ') i (0; 0) > 0 for some '2A L u(0; 0): By the denition of F , there exists k 0 2K such that n @ t ' + 1 2 2 (;k 0 ) :@ 2 !! ' +F (;u;(;k 0 )@ ! ';k 0 ) o (0; 0) c 2 > 0 Again, let h = h " 2H be the hitting time corresponding to ' in (4.15), and by the right continuity of and F in Assumption 121 (iv) we may assume " is small enough so that n @ t ' + 1 2 2 (;k 0 ) :@ 2 !! ' +F (;u;(;k 0 )@ ! ';k 0 ) o (t;!) c 3 > 0; t2 [0; h]: Consider the constant processk :=k 0 2K. It is clear that the corresponding ~ k =k 0 . 
Follow similar arguments as in the subsolution property, we arrive at the following contradiction: u 0 Y k 0 c 3 E ~ P k [h]< 0: Example 124. Assume K :=fk 2 S d : k g, where 0 < < are constant matrices. Set (t;!;k) := k. Then Y t (!) = u(t;!) is the solution to the following second order BSDE, as introduced by [53]: Y t =(B ) + Z T t F (s;B ;Y s ;Z s ; ^ a 1 2 s )ds Z T t Z s (^ a s ) 1 2 dB s dK t ;P-q.s. (5.39) whereP :=fP2P 0 1 : P = 0; P 2Kg, ^ a is the universal process such that dhBi t = ^ a t dt, P-q.s. and K is an increasing process satisfying certain minimum condition. 113 Remark 125. By using the zero-sum game, we may also obtain a representation formula for the viscosity solution of the following path dependent Bellman-Isaacs equation: @ t uG(t;!;u;@ ! u;@ 2 !! u) = 0; u(T;!) =(!) (5.40) where G(t;!;y;z; ) := sup k 1 2K 1 inf k 2 2K 2 h 1 2 2 (t;k 1 ;k 2 ) : +F (t;!;y;(t;k 1 ;k 2 )z;k 1 ;k 2 ) i = inf k 2 2K 2 sup k 1 2K 1 h 1 2 2 (t;k 1 ;k 2 ) : +F (t;!;y;(t;k 1 ;k 2 )z;k 1 ;k 2 ) i : See Pham and Zhang [51]. 6 Stability and Partial Comparison 6.1 Stability The main result of this section is the following extension of Theorem 4.1 in [15], with a proof following the same line of argument. However the present fully nonlinear context makes a crucial use of Theorem 62. Theorem 126. Let L> 0, G satisfy Assumption 93, and u2U (resp. u2U). Assume (i) for each "> 0, there exist G " and u " such that G " satises Assumption 93 and u " is a viscosity L-subsolution (resp. L-supersolution) of PPDE (2.12) with generator G " ; (ii) as "! 0, (G " ;u " ) converge to (G;u) locally uniformly in the following sense: for any (t;!;y;z; ), there exist > 0 such that, lim "!0 sup (s;~ !;~ y;~ z;~ )2O (t;!;y;z; ) h j(G " G) t;! j(s; ~ !; ~ y; ~ z; ~ ) +j(u " u) t;! j(s; ~ !) i = 0;(6.41) where O (t;!;y;z; ) := n (s; ~ !; ~ y; ~ z; ~ )2 t RR d S d : d 1 ((s; ~ !); (t; 0)) +j~ yyj +j~ zzj +j~ j o : Then u is a viscosity L-subsolution (resp. 
L-supersolution) of PPDE (2.12) with generator G. 114 Proof. Without loss of generality we shall only prove the viscosity subsolution property at (0; 0). Let'2A L u(0; 0) with corresponding h2H, 0 > 0 be a constant such that h 0 h and lim "!0 ("; 0 ) = 0, where (";) := sup (t;!;~ y;~ z;~ )2O (0;0;y 0 ;z 0 ; 0 ) h jG " Gj(t;!; ~ y; ~ z; ~ ) +ju " uj(t;!) i ; and (y 0 ;z 0 ; 0 ) := (' 0 ;@ ! ' 0 ;@ 2 !! ' 0 ): Now for 0< 0 , denote ' (t;!) :='(t;!) +t. By (4.15) and Lemma 94 we have (' u) 0 = ('u) 0 = 0E L 0 h ('u) h i <E L 0 h (' u) h i : By (6.41), there exists " > 0 small enough such that, for any "" , (' u " ) 0 <E L 0 h (' u " ) h i : (6.42) Denote X := X "; := u " ' 2U. Dene ^ X := X1 [0;h ) +X h 1 [ ;T ] , Y :=E L [ ^ X], and := infft 0 : Y t = ^ X t g, as in Theorem 62. Then all the results in Theorem 62 hold. Noticing that X h X h , by (6.42) we have E L 0 [ ^ X h ]E L 0 [X h ] =E L 0 h (' u " ) h i <(' u " ) 0 =X 0 Y 0 =E L 0 [Y ] =E L 0 [ ^ X ]: Then there exists ! such that t := (! ) < h (! ), and thus h t ;! 2H t . We shall remark though that here Y; ;! ;t all depend on ";. Now dene ' " (t;!) :=' t ;! (t;!)' (t ;! ) +u " (t ;! ); (t;!)2 t : It is straightforward to check that' " 2A L u " (t ;! ) with corresponding hitting time h t ;! . Since u " is a viscosity L-subsolution of PPDE (2.12) with generator G " , we have 0 h @ t ' " (G " ) t ;! (;' " ;@ ! ' " ;@ 2 !! ' " ) i (t ; 0) = h @ t 'G " (;u " ;@ ! ';@ 2 !! ') i (t ;! ): (6.43) 115 Note thatt < h (! ), thenju " uj(t ;! )(";)("; 0 ). By (6.41) and Denition 9, we may set small enough and then " small enough so that (;u " ;@ ! ';@ 2 !! ')(t ;! )2 O 0 (0; 0;y 0 ;z 0 ; 0 ). Thus, (6.43) leads to 0 h @ t 'G " (;u " ;@ ! ';@ 2 !! ') i (t ;! ) h @ t 'G(;u " ;@ ! ';@ 2 !! ') i (t ;! )("; 0 ) h @ t 'G(;u;@ ! ';@ 2 !! ') i (t ;! )("; 0 )C(";) L' 0 C sup (t;!):t<h (!) h ju(t;!)u 0 j +j@ ! '(t;!)@ ! ' 0 j +j@ 2 !! '(t;!)@ 2 !! ' 0 j i sup (t;!):t<h (!) 
G(t;!;y 0 ;z 0 ; 0 )G(0; 0;y 0 ;z 0 ; 0 ) ("; 0 )C(";); where we used the fact that G satises Assumption 93. Recall Denition 9 again. Now by rst sending "! 0 and then ! 0 we obtainL' 0 0. Since '2A L u(0; 0) is arbitrary, we see thatu is a viscosity subsolution of PPDE (2.12) with generator G at (0; 0) and thus complete the proof. Remark 127. Similar to Theorem 4.1 in [15], we need the sameL in the proof of Theorem 126. If u " is only a viscosity subsolution of PPDE (2.12) with coecient G " , but with possibly dierent L " , we are not able to show that u is a viscosity subsolution of PPDE (2.12) with coecient G. 7 Partial comparison of viscosity solutions In this section, we prove a partial comparison principle, i.e. a comparison result of a viscosity super- (resp. sub-) solution and a classical sub- (resp. super-) solution. This result extends Proposition 5.3 of [17] and is a rst key step for our comparison principle. The proof is crucially based on the optimal stopping problem reported in Theorem 62. Proposition 128. Let Assumption 93 hold true. Let u 2 2U be a viscosity supersolution of PPDE (2.12) and u 1 2 C 1;2 () bounded from above satisfyingLu 1 (t;!) 0 for all (t;!)2 with t<T . If u 1 (T;)u 2 (T;), then u 1 u 2 on . Similar result holds if we switch the role of u 1 and u 2 . 116 Proof. We shall only prove u 1 0 u 2 0 . The inequality for general t can be proved similarly. Assumeu 2 is a viscosityL-supersolution andu 1 2 C 1;2 () with corresponding hitting times h i , i 0. By Proposition 3.14 of [17], we may assume without loss of generality that G(t;!;y 1 ;z; )G(t;!;y 2 ;z; )y 2 y 1 for all y 1 y 2 : (7.44) We now prove the proposition in three steps. Throughout the proof, denote ^ u :=u 1 u 2 : Since u 1 is bounded from above and u 2 bounded from below, we see that ^ u + is bounded. Step 1. We rst show that, for all i 0 and !2 , ^ u + h i (!) E L h i (!) h ^ u + h i+1 h i ;! i : (7.45) Since (u 1 ) t;! 2C 1;2 ( t ), clearly it suces to consider i = 0. 
Assume to the contrary that 2Tc := ^ u + 0 (0)E L 0 ^ u + h 1 > 0: (7.46) Recall (1.2). Notice that E 0 1 = and that' 0 1k (0; 0) are constants, we may assume without loss of generality that n 0 = 1 and u 1 t = (t;B); 0t h 1 ; where 2C 1;2 ()\UC b () with bounded derivatives. Denote X t := t u 2 t + +ct; 0tT: Since u 2 is bounded from below, by the denition ofU, one may easily check that X is a bounded process inU, and X t := ^ u + t +ct; 0t h 1 : Dene b X :=X1 [0;h 1 ) +X h 1 1 [h 1 ;T ] ; Y :=S L [ b X]; := infft 0 :Y t = b X t g: 117 Applying Theorem 62 and by (7.46) we have E L 0 [ b X ] =Y 0 X 0 = ^ u + 0 (0) = 2Tc +E L 0 [^ u + h 1 ]Tc +E L 0 [ b X h 1 ]: Then there exists ! 2 such that t := (! ) < h 1 (! ). Next, by the E L supermartingale property of Y of Theorem 62, we have: ^ u + (t ;! ) +ct =X t (! ) =Y t (! )E L t h X t ;! h t ;! 1 i E L t [ch t ;! 1 ]>ct ; implying that 0< ^ u + (t ;! ) = ^ u(t ;! ). Since u 2 2U, by (2.13) there exists h2H t such that h< h t ;! 1 and ^ u t ;! t > 0 for all t2 [t ; h]: (7.47) Then X t ;! t = ' t (u 2 ) t ;! t for all t2 [t ; h], where '(t;!) := t ;! (t;!) +ct. Observe that'2C 1;2 ( t ). Using again theE L supermartingale property of Y of Theorem 62, we see that for all 2T t : ' (u 2 ) t ;! t =X t (! ) =Y t (! )E L t Y t ;! ^h E L t X t ;! ^h =E L t ' (u 2 ) t ;! ^h : That is, '2A L u 2 (t ;! ), and by the viscosity L-supersolution property of u 2 : 0 @ t 'G(:;u 2 ;@ ! ';@ 2 !! ') (t ;! ) =c @ t u 1 +G(:;u 2 ;@ ! u 1 ;@ 2 !! u 1 ) (t ;! ) c @ t u 1 +G(:;u 1 ;@ ! u 1 ;@ 2 !! u 1 ) (t ;! ); where the last inequality follows from (7.47) and (7.44). Sincec> 0, this is in contradiction with the subsolution property of u 1 , and thus completes the proof of (7.45). Remark 129. The rest of the proof is only needed in the case whereu 1 2C 1;2 ()nC 1;2 (). Indeed, if u 1 2 C 1;2 (), then H 1 = T and it follows from Step 1 that ^ u + 0 E L 0 ^ u + T E L 0 ^ u + T = 0, and then u 1 0 u 2 0 . 
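Step 1 above leans on the optimal stopping construction of Theorem 62: the Snell-type envelope $Y := \mathcal{S}^{L}[\hat X]$ together with the first time $\tau^* := \inf\{t : Y_t = \hat X_t\}$ at which the envelope touches the reward. In discrete time, and with a single linear expectation standing in for the nonlinear operator $\mathcal{E}^{L}$, the envelope is obtained by the classical backward induction $Y_N = X_N$, $Y_k = \max\big(X_k,\,\mathbb{E}[Y_{k+1}\mid\mathcal{F}_k]\big)$. The following Python sketch, a hypothetical binomial-tree illustration of ours (not the setting of this chapter), makes the recursion concrete.

```python
import numpy as np

def snell_envelope(f, s0=100.0, up=1.1, n=3, p=0.5):
    """Snell envelope of the reward X_k = f(S_k) on a symmetric binomial
    tree: Y_n = X_n, then Y_k = max(X_k, E[Y_{k+1} | F_k]) backward in k."""
    # node (k, j): price s0 * up**(2*j - k) after j up-moves in k steps
    X = [np.array([f(s0 * up ** (2 * j - k)) for j in range(k + 1)])
         for k in range(n + 1)]
    Y = [None] * (n + 1)
    Y[n] = X[n]
    for k in range(n - 1, -1, -1):
        cont = p * Y[k + 1][1:] + (1 - p) * Y[k + 1][:-1]  # E[Y_{k+1} | F_k]
        Y[k] = np.maximum(X[k], cont)
    return X, Y

# put-style reward: stopping strictly before maturity is sometimes optimal
X, Y = snell_envelope(lambda s: max(100.0 - s, 0.0))
print(round(Y[0][0], 4))  # value of the optimal stopping problem -> 6.6116
```

At the nodes where $Y_k = X_k$ the envelope touches the reward, which is exactly where $\tau^*$ stops; in this example that happens at the lowest node of step 2, mirroring the role played in Step 1 by the event $\{\tau^*(\omega^*) < \mathrm{h}_1(\omega^*)\}$.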
In fact, this is the partial comparison principle proved in [17] Proposition 5.3.

Step 2. We continue by using the following result, which will be proved later.

Lemma 130. For i ≥ 1, P ∈ P_L, and P_L(P, ĥᵢ) := { P′ ∈ P_L : P′ = P on F_{ĥᵢ} }, we have

δᵢ := û⁺_{ĥᵢ} − ess sup^P_{P′ ∈ P_L(P,ĥᵢ)} E^{P′}[ û⁺_{ĥᵢ₊₁} | F_{ĥᵢ} ] ≤ 0, P-a.s.

Then by standard arguments we have

E^P[ û⁺_{ĥᵢ} ] ≤ sup_{P′ ∈ P_L(P,ĥᵢ)} E^{P′}[ û⁺_{ĥᵢ₊₁} ] ≤ E̅^L₀[ û⁺_{ĥᵢ₊₁} ].

Since P ∈ P_L is arbitrary, this leads to E̅^L₀[ û⁺_{ĥᵢ} ] ≤ E̅^L₀[ û⁺_{ĥᵢ₊₁} ], and by induction û⁺₀ ≤ E̅^L₀[ û⁺_{ĥᵢ} ] for all i. Notice that û⁺ is bounded, that C^L₀[ ĥᵢ < T ] → 0 as i ↗ ∞ by Definition 137 (i), and that û_{T⁻} ≤ û_T by the definition of U̅. Then, sending i → ∞, we obtain û⁺₀ ≤ E̅^L₀[ û⁺_{T⁻} ] ≤ E̅^L₀[ û⁺_T ] = 0, which completes the proof of u₁(0,0) ≤ u₂(0,0).

The rest of this section is devoted to the proof of Lemma 130. Recall the partition { Eⁱⱼ, j ≥ 1 } ⊂ F_{ĥᵢ}, the constant nᵢ, and the uniformly continuous mappings φⁱⱼₖ and ψⁱⱼₖ in (1.2) corresponding to u₁ ∈ C̄^{1,2}(Λ). For δ > 0, let 0 = t₀ < t₁ < ⋯ < t_N = T be such that t_{k+1} − t_k ≤ δ for k = 0, …, N−1, and define t_{N+1} := T + δ.

Lemma 131. For all i, j ≥ 1, there is a partition (Ẽ^i_{j,k})_{k≥1} ⊂ F_{ĥᵢ} of Eⁱⱼ and a sequence (p_k)_{k≥1} taking values in 0, …, N, such that

ĥᵢ ∈ [t_{p_k}, t_{p_k+1}) on Ẽ^i_{j,k}; sup_{ω,ω′ ∈ Ẽ^i_{j,k}} ‖ ω_{·∧ĥᵢ(ω)} − ω′_{·∧ĥᵢ(ω′)} ‖ ≤ δ; and min_{ω ∈ Ẽ^i_{j,k}} ĥᵢ(ω) = ĥᵢ(ω^i_{j,k}) =: t̃^i_{j,k} for some ω^i_{j,k} ∈ Ẽ^i_{j,k}.

Proof. Since i, j are fixed, we simply denote E := Eⁱⱼ and ĥ := ĥᵢ. Denote E_k := E ∩ { t_k ≤ ĥ < t_{k+1} }, k ≤ N. Then { E_k }_k ⊂ F_ĥ forms a partition of E. Since Ω is separable, there exists a finer partition { E_{k,l} }_{k,l} ⊂ F_ĥ such that, for any ω, ω′ ∈ E_{k,l}, ‖ ω_{·∧ĥ(ω)} − ω′_{·∧ĥ(ω′)} ‖ ≤ δ. Next, for each E_{k,l}, there is a sequence ω_{k,l,m} ∈ E_{k,l} such that t_{k,l,m} := ĥ(ω_{k,l,m}) ↓ inf_{ω ∈ E_{k,l}} ĥ(ω). Denote t_{k,l,0} := t_{k+1}. Define E_{k,l,m} := E_{k,l} ∩ { t_{k,l,m+1} ≤ ĥ < t_{k,l,m} } ∈ F_{ĥᵢ} and renumber them as (Ẽ_k)_{k≥1}.
We then verify directly that (Ẽ_k)_{k≥1} defines a partition of E satisfying the required conditions.

Proof of Lemma 130. Clearly it suffices to prove the lemma on each Eⁱⱼ. As in the previous proof, we omit the dependence on the fixed pair (i,j), thus writing E := Eⁱⱼ, n := nᵢ, ĥ := ĥᵢ, ĥ₁ := ĥᵢ₊₁, φ_k := φⁱⱼ,ₖ, ψ_k := ψⁱⱼ,ₖ, and let C denote the common bound of φ_k, ψ_k and ρ the common modulus of continuity function of φ_k, ψ_k, 1 ≤ k ≤ n. We also denote Ẽ_k := Ẽ^i_{j,k}, ω_k := ω^i_{j,k} and t̃_k := t̃^i_{j,k}, as defined in Lemma 131.

Fix an arbitrary P ∈ P_L and ε > 0. Since u₂ ∈ U̅, we have (u₂)_{ĥ⁻} ≥ (u₂)_ĥ. Then, for each k, it follows from (7.45) that

û⁺_ĥ(ω_k) ≤ E^{P_k}[ (û⁺_{ĥ₁})^{t̃_k,ω_k} ] + ε for some P_k ∈ P^{t̃_k}_L.

Define P̂ ∈ P_L(P, ĥ) such that, for P-a.e. ω ∈ Ẽ_k, the P̂^{ĥ(ω),ω}-distribution of B^{ĥ(ω)} is equal to the P_k-distribution of B^{t̃_k}, where P̂^{ĥ(ω),ω} denotes the r.c.p.d. Then, P-a.s. on Ẽ_k,

E^{P̂}[ û⁺_{ĥ₁} | F_ĥ ](ω) = E^{P̂^{ĥ(ω),ω}}[ û⁺_{ĥ₁}( ω ⊗_{ĥ(ω)} B^{ĥ(ω)} ) ] = E^{P_k}[ û⁺_{ĥ₁}( ω ⊗_{ĥ(ω)} B̃^{t̃_k} ) ],

where B̃^{t̃_k}_s := B^{t̃_k}_{s−ĥ(ω)+t̃_k}, s ≥ ĥ(ω). Recalling that û⁺ is bounded, P-a.s. this provides

δᵢ(ω) ≤ û⁺_ĥ(ω) − E^{P̂}[ û⁺_{ĥ₁} | F_ĥ ](ω)
≤ ε + Σ_{k≥1} 1_{Ẽ_k}(ω) [ û⁺_ĥ(ω) − û⁺_ĥ(ω_k) ] + Σ_{k≥1} 1_{Ẽ_k}(ω) E^{P_k}[ (û⁺_{ĥ₁})^{t̃_k,ω_k} − û⁺_{ĥ₁}( ω ⊗_{ĥ(ω)} B̃^{t̃_k} ) ]
≤ ε + Σ_{k≥1} 1_{Ẽ_k}(ω) [ û_ĥ(ω) − û_ĥ(ω_k) ]⁺ + Σ_{k≥1} 1_{Ẽ_k}(ω) E^{P_k}[ ( (û_{ĥ₁})^{t̃_k,ω_k} − û_{ĥ₁}( ω ⊗_{ĥ(ω)} B̃^{t̃_k} ) )⁺ ∧ C ]. (7.48)

We now estimate the above error for fixed ω ∈ Ẽ_k.

1. To estimate the terms of the first sum, recall that d∞( (ĥ(ω),ω), (t̃_k,ω_k) ) ≤ 2δ on Ẽ_k, by Lemma 131. Then, since u₁ is continuous, it follows from (1.2) that on Ẽ_k:

| u₁_ĥ(ω) − u₁_ĥ(ω_k) | = | Σ_{l=1}^n [ φ_l(ĥ(ω),ω) − φ_l(t̃_k,ω_k) ] ψ_l(0,0) | ≤ Cn ρ(2δ).

Moreover, denoting by ρ₂ the modulus of continuity of u₂ ∈ U̅ in (2.14), we see that

u₂_ĥ(ω_k) − u₂_ĥ(ω) = [ u₂(t̃_k,ω_k) − u₂(t̃_k,ω) ] + [ u₂(t̃_k,ω) − u₂(ĥ(ω),ω) ] ≤ ρ₂(δ) + sup_{t̃_k ≤ t ≤ ĥ(ω)} [ u₂(t,ω) − u₂(ĥ(ω),ω) ].

By the last two estimates, we see that the first sum in (7.48) satisfies

Σ_{k≥1} 1_{Ẽ_k}(ω) [ û_ĥ(ω) − û_ĥ(ω_k) ]⁺ → 0 as δ ↘ 0. (7.49)

2. Recall from Lemma 131 that 0 ≤ ĥ(ω) − t̃_k ≤ δ. Then (1.1) leads to

0 ≤ [ ĥ₁(ω_k ⊗_{t̃_k} B̃^{t̃_k}) − t̃_k ] − [ ĥ₁(ω ⊗_{ĥ(ω)} B̃^{t̃_k}) − ĥ(ω) ] ≤ ĥ(ω) − t̃_k ≤ δ,

and therefore, denoting ρ̄_δ(ω) := δ + sup{ |ω_s − ω_t| : 0 ≤ t ≤ T, t ≤ s ≤ (t+δ)∧T },

d∞( ( ĥ₁(ω_k ⊗_{t̃_k} B̃^{t̃_k}) − t̃_k, B̃^{t̃_k} ), ( ĥ₁(ω ⊗_{ĥ(ω)} B̃^{t̃_k}) − ĥ(ω), B̃^{t̃_k} ) ) ≤ ρ̄_δ(B̃^{t̃_k}) = ρ̄_δ(B^{t̃_k}). (7.50)

Then, by using (1.2) again, we see that

(u₁)^{t̃_k,ω_k}_{ĥ₁^{t̃_k,ω_k}} − u₁( ĥ₁(ω ⊗_{ĥ(ω)} B̃^{t̃_k}), ω ⊗_{ĥ(ω)} B̃^{t̃_k} )
= u₁( ĥ₁(ω_k ⊗_{t̃_k} B̃^{t̃_k}), ω_k ⊗_{t̃_k} B̃^{t̃_k} ) − u₁( ĥ₁(ω ⊗_{ĥ(ω)} B̃^{t̃_k}), ω ⊗_{ĥ(ω)} B̃^{t̃_k} )
= Σ_{l=1}^n [ φ_l(t̃_k,ω_k) ψ_l( ĥ₁(ω_k ⊗_{t̃_k} B̃^{t̃_k}) − t̃_k, B̃^{t̃_k} ) − φ_l(ĥ(ω),ω) ψ_l( ĥ₁(ω ⊗_{ĥ(ω)} B̃^{t̃_k}) − ĥ(ω), B̃^{t̃_k} ) ]
≤ Cn [ ρ(2δ) + ρ( ρ̄_δ(B^{t̃_k}) ) ]. (7.51)

We now similarly estimate the corresponding term with u₂. Since t̃_k ≤ ĥ(ω), by (2.14) and (7.50) we have

u₂( ĥ₁(ω ⊗_{ĥ(ω)} B̃^{t̃_k}), ω ⊗_{ĥ(ω)} B̃^{t̃_k} ) − (u₂)^{t̃_k,ω_k}_{ĥ₁^{t̃_k,ω_k}}
≤ ρ₂( d∞( (ĥ(ω),ω), (t̃_k,ω_k) ) + d∞( ( ĥ₁(ω_k ⊗_{t̃_k} B̃^{t̃_k}) − t̃_k, B̃^{t̃_k} ), ( ĥ₁(ω ⊗_{ĥ(ω)} B̃^{t̃_k}) − ĥ(ω), B̃^{t̃_k} ) ) )
≤ ρ₂( 2δ + ρ̄_δ(B^{t̃_k}) ).

Combining with (7.51), this implies that the second sum in (7.48) satisfies

Σ_{k≥1} 1_{Ẽ_k}(ω) E^{P_k}[ ( (û_{ĥ₁})^{t̃_k,ω_k} − û_{ĥ₁}( ω ⊗_{ĥ(ω)} B̃^{t̃_k} ) )⁺ ∧ C ] ≤ Σ_{k≥1} 1_{Ẽ_k}(ω) E^{P_k}[ Cn (ρ + ρ₂)( 2δ + ρ̄_δ(B^{t̃_k}) ) ∧ C ] ≤ Cn E̅^L₀[ (ρ + ρ₂)( 2δ + ρ̄_δ(B) ) ∧ C ].

One can easily check that lim_{δ→0} E̅^L₀[ (ρ + ρ₂)( 2δ + ρ̄_δ(B) ) ∧ C ] = 0. Then by sending δ → 0 and ε → 0 in (7.48), we conclude the proof of Lemma 130.

As a direct consequence of the above partial comparison, we have:

Proposition 132.
Let Assumption 93 hold true. If PPDE (2.12) has a classical solution u ∈ C^{1,2}(Λ) ∩ UC_b(Λ), then u is the unique viscosity solution of PPDE (2.12) with terminal condition u(T,·).

8 First order PPDEs

In this section we study the following first order PPDE:

{ −∂_t u − G(·, u, ∂_ω u) }(t,ω) = 0, (t,ω) ∈ Λ, (8.52)

where G : Λ × R × R^d → R verifies the following counterpart of Assumption 93:

Assumption 133. (i) G(·,y,z) ∈ L⁰(Λ) for any fixed (y,z), and |G(·,0,0)| ≤ C₀.
(ii) G is uniformly Lipschitz continuous in (y,z) with a Lipschitz constant L₀, and continuous in (t,ω) under d∞.

We note that here we require G to be continuous in (t,ω), which is stronger than Assumption 93 (iv), but weaker than the uniform continuity required in Theorem 148 or in [18]. The uniform regularity is used mainly for the proof of the comparison principle. However, in this case we will employ some compactness arguments, and then continuity implies locally uniform continuity. This PPDE, in the case that G is convex in z and thus the PPDE is of HJB type, has been studied by Lukoyanov [36] by using compactness arguments. In this section we briefly explain the connection between our notion of viscosity solutions and that of Lukoyanov. However, we emphasize again that the compactness argument meets fundamental difficulties in the second order case, and we shall establish the corresponding wellposedness in our accompanying paper [18] by using the optimal stopping result of Theorem 62.

As is well known, a first order HJB equation corresponds to deterministic control. In this case we may restrict our probability measures to degenerate ones with σ = 0, see Remark 104. We then define for any L > 0:

P^t_L := { P^λ : λ : [t,T] → R^d, |λ| ≤ L }, where dB^t_s = λ_s ds, P^λ-a.s., (8.53)

and the corresponding nonlinear expectations E̅^L_t, E̲^L_t, and nonlinear optimal stopping problems S̅^L_t, S̲^L_t, etc., in an obvious way. Denote

Ω^t_L := { ω ∈ Ω^t : ω is Lipschitz continuous with Lipschitz constant L }, (8.54)

and P^t_∞ := ∪_{L>0} P^t_L, Ω^t_∞ := ∪_{L>0} Ω^t_L.
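Under a measure in P^t_L as in (8.53), the canonical process has a drift bounded by L and no diffusion, so its sample paths are L-Lipschitz; this is the content of the first property in (8.55) below. A minimal numerical sketch (the particular drift λ is a hypothetical choice for illustration):

```python
# Under P^lambda in P^t_L the canonical process satisfies dB_s = lambda_s ds
# with |lambda_s| <= L, so every sample path is L-Lipschitz. We discretize
# [0, T] with an arbitrary bounded drift and check the Lipschitz bound.
import math

def drift_path(lam, T=1.0, n=1000):
    """Euler discretization of dB_s = lam(s) ds, B_0 = 0."""
    dt = T / n
    path = [0.0]
    for i in range(n):
        path.append(path[-1] + lam(i * dt) * dt)
    return path

def lipschitz_constant(path, T=1.0):
    """Largest increment quotient |B_t - B_s| / |t - s| over grid steps."""
    n = len(path) - 1
    dt = T / n
    return max(abs(path[i + 1] - path[i]) / dt for i in range(n))

L = 2.0
lam = lambda s: L * math.sin(7 * s)      # hypothetical drift with |lam| <= L
path = drift_path(lam)
assert lipschitz_constant(path) <= L + 1e-12
```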
As in [36], one can easily check that

P(Ω^t_L) = 1 for all P ∈ P^t_L; Ω^t_L is compact and Ω^t_∞ is dense under ‖·‖_T; for s < t, ω ∈ Ω^s_L, ω̃ ∈ Ω^t_L, we have ω ⊗_t ω̃ ∈ Ω^s_L. (8.55)

Remark 134. All the above properties are important in Lukoyanov's approach to first order PPDEs, especially for proving the comparison principle. In the second order case, for example for semilinear PPDEs, since P^t₀(Ω^t_∞) = 0, the set Ω^t_∞ is not appropriate. One may consider enlarging the space: for 0 < α < 1 and L > 0, let

Ω^t_{α,L} := { ω ∈ Ω^t : ω is Hölder-α continuous with Hölder constant L }, Ω^t_{α,∞} := ∪_{L>0} Ω^t_{α,L}.

Then for α < 1/2 we have:

P^t₀(Ω^t_{α,∞}) = 1; Ω^t_{α,L} is compact and Ω^t_{α,∞} is dense under ‖·‖_T.

However, the last property in (8.55) fails in this case: for s < t, ω ∈ Ω^s_{α,L}, ω̃ ∈ Ω^t_{α,L}, in general ω ⊗_t ω̃ ∉ Ω^s_{α,L}. This is the main reason why we were unable to use this approach in the second order case.

Noticing that PPDE (8.52) involves only the derivatives ∂_t u and ∂_ω u, we introduce the following definitions.

Definition 135. We say a process u ∈ C⁰(Λᵗ) is in C^{1,1}(Λᵗ) if there exist ∂_ω u ∈ C⁰(Λᵗ, R^d) and ∂_t u ∈ C⁰(Λᵗ) such that

du_s = ∂_t u_s ds + ∂_ω u_s dB^t_s, t ≤ s ≤ T, P-a.s. for all P ∈ P^t_∞. (8.56)

It is obvious that ∂_ω u, if it exists, is unique on [t,T] × Ω^t_∞. Then, since Ω^t_∞ is dense and ∂_ω u is continuous, we see that ∂_ω u is unique on Λᵗ. Additionally, as in the second order case, the time derivative can be shown to equal lim_{δ↓0} [ u(t+δ, ω_{·∧t}) − u(t,ω) ] / δ.

For all u ∈ L⁰(Λ), (t,ω) ∈ Λ with t < T, and L > 0, define

A^L u(t,ω) := { φ ∈ C^{1,1}(Λᵗ) : (φ − u^{t,ω})_t = 0 = S̲^L_t[ (φ − u^{t,ω})_{·∧ĥ} ] for some ĥ ∈ Hᵗ },
A̅^L u(t,ω) := { φ ∈ C^{1,1}(Λᵗ) : (φ − u^{t,ω})_t = 0 = S̅^L_t[ (φ − u^{t,ω})_{·∧ĥ} ] for some ĥ ∈ Hᵗ }. (8.57)

We then define viscosity solutions exactly as in Definition 101. One may easily check that all the results in this paper, when reduced to first order PPDEs, still hold under this new definition.
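The failure of concatenation stability for Hölder paths in Remark 134 can be seen numerically. The two paths below are hypothetical examples chosen for illustration: each is 1/2-Hölder with constant L on its own interval, yet across the junction the Hölder ratio of the concatenation is √2·L > L.

```python
# Remark 134: concatenation may leave the Hölder-alpha ball. Take alpha = 1/2,
# junction at t = 1, and the illustrative pieces
#   omega(s)  = -L*sqrt(1-s)  on [0,1]   (1/2-Hölder with constant L),
#   omega~(r) =  L*sqrt(r-1)  on [1,2]   (1/2-Hölder with constant L).
# Across the junction, |x(1+h) - x(1-h)| = 2L*sqrt(h), so the Hölder ratio is
# 2L*sqrt(h)/sqrt(2h) = sqrt(2)*L > L: the concatenation leaves the
# constant-L ball.
import math

L = 1.0

def concat(r):
    return -L * math.sqrt(1 - r) if r <= 1 else L * math.sqrt(r - 1)

def holder_ratio(f, s, r, alpha=0.5):
    return abs(f(r) - f(s)) / (r - s) ** alpha

h = 1e-4
ratio = holder_ratio(concat, 1 - h, 1 + h)
assert ratio > L                               # the constant-L ball is left...
assert abs(ratio - math.sqrt(2) * L) < 1e-9    # ...by exactly the factor sqrt(2)
```

Note that no such loss occurs for Lipschitz paths (α = 1), which is why the third property in (8.55) does hold in the first order setting.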
In particular, the examples in Section 5.1 are still valid, and the value function of the deterministic control problem is a viscosity solution of the corresponding first order path dependent HJB equation. We remark that our Definition 135 of derivatives is equivalent to Lukoyanov's notion of derivatives, which is defined via Taylor expansion. Moreover, instead of using nonlinear expectations as in (8.57), Lukoyanov uses test functions φ such that φ − u attains a pathwise local maximum (or minimum) at (t,ω). Indeed, our comparison principle and uniqueness result for first order PPDEs follow from almost the same arguments as those of [36]. We conclude the paper with the following result and leave the details to interested readers.

Theorem 136. Let Assumption 133 hold true. Let u₁ (resp. u₂) be a viscosity subsolution (resp. supersolution) of PPDE (8.52), in the sense of Definition 101 modified to the context of this section. If u₁(T,·) ≤ u₂(T,·) on Ω, then u₁ ≤ u₂ on Λ.

Chapter 5

Wellposedness for viscosity solutions

1 Introduction

This chapter contains the main wellposedness theory for the above path dependent PDE, under convenient assumptions. Our starting point is the partial comparison result established in Section 7 of the previous chapter, which states, under fairly general Lipschitz-type conditions on the nonlinearity G, that for any bounded viscosity subsolution u₁ and supersolution u₂ with u₁_T ≤ u₂_T, we have u₁ ≤ u₂ on [0,T] × Ω, provided that one of them is smooth. Then, similar to the approach of [15] in the semilinear case, we use Perron's approach to construct a viscosity solution on the one hand, and on the other hand to prove that the comparison result for bounded viscosity subsolutions and supersolutions holds true without the requirement that one of them is smooth.
While the corresponding wellposedness results in the semilinear context of [15] relied on the representation of the solution of (0.1) by means of a backward stochastic differential equation, we introduce a new argument in this paper which builds upon the local wellposedness of the path-frozen partial differential equation. The path-frozen PDE is defined locally on the finite dimensional space R^d, so that the last wellposedness assumption is related to the standard theory of viscosity solutions as in [9]. Our wellposedness results cover path dependent PDEs which are not accessible in the existing literature on backward stochastic differential equations. For instance, the nonlinearity G does not need to fulfill the strong requirements of second order backward SDEs (e.g. convexity in the ∂²_{ωω}u component). Another example is the general class of backward stochastic partial differential equations, which appear naturally in many applications; see e.g. Ma, Yin and Zhang [38] on non-Markovian FBSDEs, and Oksendal, Sulem and Zhang [40] on stochastic control of SPDEs.

The rest of the chapter is organized as follows. In the remainder of this section we define the class C̄^{1,2}, which is a main tool for the proof of comparison. The main results and the corresponding assumptions are reported in Section 2. To prepare for the proof, we start by providing in Section 7 a stronger partial comparison result extending that of [17]. Section 3 is devoted to the proof of the comparison result, which implies uniqueness. The existence result is proved in Section 4. Finally, Section 5 provides some sufficient conditions for the main assumption under which our wellposedness result is established.

1.1 The extended class of smooth functionals and assumptions

For technical reasons, we shall extend the space C^{1,2}(Λ) slightly as follows.

Definition 137. Let t ∈ [0,T], u : Λᵗ → R.
We say u ∈ C̄^{1,2}(Λᵗ) if there exist an increasing sequence of F^t-stopping times { ĥᵢ, i ≥ 1 }, a partition { Eⁱⱼ, j ≥ 1 } ⊂ F^t_{ĥᵢ} of Ωᵗ for each i, a constant nᵢ ≥ 1 for each i, and φⁱⱼₖ ∈ UC_b(Λ), ψⁱⱼₖ ∈ C^{1,2}(Λ) ∩ UC_b(Λ) for each (i,j) and 1 ≤ k ≤ nᵢ, such that, denoting ĥ₀ := t, E⁰₁ := Ωᵗ:

(i) for each i and ω, ĥ^{ĥᵢ,ω}_{i+1} ∈ H^{ĥᵢ(ω)} whenever ĥᵢ(ω) < T; the set { i : ĥᵢ(ω) < T } is finite for each ω; and lim_{i→∞} C^L_s[ ĥ^{s,ω}_i < T ] = 0 for any (s,ω) ∈ Λᵗ and L > 0,

(ii) for each (i,j) and ω, ω′ ∈ Eⁱⱼ such that ĥᵢ(ω) ≤ ĥᵢ(ω′), it holds for all ω̃ ∈ Ω:

0 ≤ ĥᵢ₊₁( ω′ ⊗_{ĥᵢ(ω′)} ω̃ ) − ĥᵢ₊₁( ω ⊗_{ĥᵢ(ω)} ω̃ ) ≤ ĥᵢ(ω′) − ĥᵢ(ω); (1.1)

here we abuse the notation: (ω ⊗_s ω̃)_r := ω_r 1_{[t,s)}(r) + (ω_s + ω̃_{r−s}) 1_{[s,T]}(r),

(iii) for each i, φⁱⱼₖ, ψⁱⱼₖ, ∂_t ψⁱⱼₖ, ∂_ω ψⁱⱼₖ, ∂²_{ωω} ψⁱⱼₖ are uniformly bounded, and φⁱⱼₖ, ψⁱⱼₖ are uniformly continuous, uniformly in (j,k) (but possibly depending on i),

(iv) u is continuous in t on [0,T], and for each i,

u(s,B) = Σ_{j≥1} Σ_{k=1}^{nᵢ} [ φⁱⱼₖ(ĥᵢ, B) ψⁱⱼₖ( s − ĥᵢ, B_{ĥᵢ+·} − B_{ĥᵢ} ) ] 1_{Eⁱⱼ}, ĥᵢ ≤ s ≤ ĥᵢ₊₁. (1.2)

Let u ∈ C̄^{1,2}(Λᵗ). One may easily check that u^{s,ω} ∈ C̄^{1,2}(Λˢ) for any (s,ω) ∈ Λᵗ. For any P ∈ P^t_∞, it is clear that the process u is a local P-semimartingale on [t,T] and a P-semimartingale on [t, ĥᵢ] for all i, and

du_s = ∂_t u_s ds + ½ ∂²_{ωω} u_s : d⟨Bᵗ⟩_s + ∂_ω u_s dBᵗ_s, t ≤ s < T, P-a.s. (1.3)

By setting ĥ₁ := T, n₀ := 1, φ⁰₁₁ := 1, and ψ⁰₁₁ := u, we see that C^{1,2}(Λᵗ) ⊂ C̄^{1,2}(Λᵗ). In spirit, the processes in C̄^{1,2}(Λᵗ) are piecewise smooth. We remark that the requirements of the space C̄^{1,2}(Λᵗ) are due to technical reasons; for example, (1.1) and (1.2) are mainly needed for the partial comparison principle, Proposition 128, below. These technical requirements may vary from time to time; in particular, the space C̄^{1,2}(Λᵗ) here is slightly different from that in [15] and that in [17] Section 7.

1.2 Fully nonlinear path dependent PDEs

We recall the assumptions on the generator G of the PPDE.

Assumption 138.
The nonlinearity G satisfies:

(i) For fixed (y,z,γ), G(·,y,z,γ) ∈ L⁰(Λ) and |G(·,0,0,0)| ≤ C₀.
(ii) G is elliptic, i.e. nondecreasing in γ.
(iii) G is uniformly Lipschitz continuous in (y,z,γ), with a Lipschitz constant L₀.
(iv) For any (y,z,γ), G(·,y,z,γ) is right continuous in (t,ω) under d∞.

However, for our main wellposedness result, we shall also use the following stronger regularity requirement:

Assumption 139. G is uniformly continuous in (t,ω) under d∞, with a modulus of continuity function ρ₀.

This condition is needed for our uniform approximation of G below; in particular, it is used (only) in the proof of Lemma 180. We should point out, though, that for the semilinear PPDE and the path dependent HJB equation considered in [17] Section 4, this condition is violated when σ depends on (t,ω). However, this is a technical condition due to our current approach to uniqueness. While we shall address this issue in some future research, in Remark 181 below we will take the following slightly weaker condition, which will be important for the work of Pham and Zhang [51].

Assumption 140. There exist modulus of continuity functions ρ₀, ρ̃₀ such that, for any (t,ω), (t̃,ω̃) ∈ Λ and any (y,z,γ),

| G(t,ω,y,z,γ) − G(t̃,ω̃,y,z,γ) | ≤ ρ̃₀(|t − t̃|) [ |z| + |γ| ] + ρ₀( d∞( (t,ω), (t̃,ω̃) ) ).

For any u ∈ L⁰(Λ), (t,ω) ∈ [0,T) × Ω, and L > 0, recall that

A^L u(t,ω) := { φ ∈ C^{1,2}(Λᵗ) : (φ − u^{t,ω})_t = 0 = S̲^L_t[ (φ − u^{t,ω})_{·∧ĥ} ] for some ĥ ∈ Hᵗ },
A̅^L u(t,ω) := { φ ∈ C^{1,2}(Λᵗ) : (φ − u^{t,ω})_t = 0 = S̅^L_t[ (φ − u^{t,ω})_{·∧ĥ} ] for some ĥ ∈ Hᵗ }, (1.4)

where S̲^L and S̅^L are the nonlinear Snell envelopes defined in (4.21).

2 Main results

2.1 Path-frozen PDE

For any (t,ω) ∈ Λ, define the following deterministic function on [t,∞) × R × R^d × S_d:

g^{t,ω}(s,y,z,γ) := G( s∧T, ω_{·∧t}, y, z, γ ).

For any ε > 0 and δ ≥ 0, we denote T_δ := (1+δ)T, ε_δ := (1+δ)ε, and

O_ε := { x ∈ R^d : |x| < ε }, O̅_ε := { x ∈ R^d : |x| ≤ ε }, ∂O_ε := { x ∈ R^d : |x| = ε },
O^{ε,δ}_t := [t, T_δ) × O_{ε_δ}, O̅^{ε,δ}_t := [t, T_δ] × O̅_{ε_δ}, ∂O^{ε,δ}_t := ( [t, T_δ] × ∂O_{ε_δ} ) ∪ ( {T_δ} × O̅_{ε_δ} ), (2.5)

and we further simplify the notation for δ = 0: O^ε_t := O^{ε,0}_t, O̅^ε_t := O̅^{ε,0}_t, ∂O^ε_t := ∂O^{ε,0}_t.

Our additional assumption is formulated on the following localized and path-frozen PDE, defined for every (t,ω) ∈ Λ:

(E)^{t,ω}_{ε,δ}: L^{t,ω} v := −∂_t v − g^{t,ω}(s, v, Dv, D²v) = 0 on O^{ε,δ}_t. (2.6)

Notice that for fixed (t,ω), this is a standard deterministic partial differential equation, for which we now assume some wellposedness conditions.

Assumption 141. For ε > 0, δ ≥ 0, (t,ω) ∈ Λ, and h ∈ C⁰(∂O^{ε,δ}_t), we have v̅ = v̲, where

v̅(s,x) := inf{ w(s,x) : w classical supersolution of (E)^{t,ω}_{ε,δ} and w ≥ h on ∂O^{ε,δ}_t },
v̲(s,x) := sup{ w(s,x) : w classical subsolution of (E)^{t,ω}_{ε,δ} and w ≤ h on ∂O^{ε,δ}_t }. (2.7)

We first note that the above sets of w are not empty. Indeed, one can check straightforwardly that, for any ρ > 0 and denoting λ := ρ⁻¹( C₀ + L₀‖h‖∞ ) + L₀,

w̅(t,x) := ‖h‖∞ + ρ e^{λ(T_δ − t)}, w̲(t,x) := −‖h‖∞ − ρ e^{λ(T_δ − t)}, (2.8)

satisfy the requirements for v̅(s,x) and v̲(s,x), respectively. We also observe that our definition (2.7) of v̅ and v̲ is different from the corresponding definition in the standard Perron approach [27], in which the w is a viscosity supersolution or subsolution. It is also different from the recent developments of Bayraktar and Sirbu [3], [4] and [52], in which the w is a so-called stochastic supersolution or subsolution. Loosely speaking, assuming that the comparison principle holds for the equation (E)^{t,ω}_{ε,δ}, our Assumption 141 requires that the viscosity solution of (E)^{t,ω}_{ε,δ} can be approximated by a sequence of classical supersolutions and a sequence of classical subsolutions. We shall discuss this issue further in Section 5 below. In particular, we will provide some sufficient conditions for Assumption 141 to hold.
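For completeness, here is a verification sketch that w̅ in (2.8) is a classical supersolution, assuming the constant λ = ρ⁻¹(C₀ + L₀‖h‖∞) + L₀; it uses only the bound |g^{t,ω}(s,y,0,0)| ≤ C₀ + L₀|y| from Assumption 138 (i) and (iii):

```latex
\text{With } \bar w(t,x) = \|h\|_\infty + \rho\, e^{\lambda (T_\delta - t)}
\text{ and } \lambda = \rho^{-1}\big(C_0 + L_0\|h\|_\infty\big) + L_0:
\begin{align*}
\mathcal{L}^{t,\omega}\bar w
  &= -\partial_t \bar w - g^{t,\omega}(s,\bar w,0,0)
   \;\ge\; \lambda \rho\, e^{\lambda (T_\delta - t)}
          - C_0 - L_0\|h\|_\infty - L_0 \rho\, e^{\lambda (T_\delta - t)} \\
  &= (\lambda - L_0)\,\rho\, e^{\lambda (T_\delta - t)}
          - C_0 - L_0\|h\|_\infty
   \;=\; \big(C_0 + L_0\|h\|_\infty\big)\big(e^{\lambda (T_\delta - t)} - 1\big)
   \;\ge\; 0,
\end{align*}
\text{while } \bar w \ge \|h\|_\infty \ge h \text{ on } \partial O^{\varepsilon,\delta}_t .
```

The computation for w̲ is symmetric.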
For later use, we show that Assumption 141 implies the wellposedness of (E)^{t,ω}_{ε,δ}.

Proposition 142. Let Assumptions 138 and 141 hold. Then, for all ε > 0, δ ≥ 0, (t,ω) ∈ Λ:

(i) Let v₁, v₂ ∈ C⁰(O̅^{ε,δ}_t) be a viscosity subsolution and supersolution, respectively, of the PDE (E)^{t,ω}_{ε,δ} with v₁ ≤ v₂ on ∂O^{ε,δ}_t. Then v₁ ≤ v₂ on O̅^{ε,δ}_t.

(ii) The PDE (E)^{t,ω}_{ε,δ}, with boundary condition defined by an arbitrary function h ∈ C⁰(∂O^{ε,δ}_t), has a unique viscosity solution v = v̅ = v̲.

Proof. (i) First, by the same argument as in Step 1 of the proof of Proposition 5.1 in [17], we see that the partial comparison principle holds for (E)^{t,ω}_{ε,δ} under Assumption 138. That is, if v₁ or v₂ is in C^{1,2}(O̅^{ε,δ}_t), then v₁ ≤ v₂ on O̅^{ε,δ}_t. Now for a general viscosity subsolution v₁ and viscosity supersolution v₂, by the partial comparison principle we have v₁ ≤ v̅ and v̲ ≤ v₂. Then it follows from Assumption 141 that v₁ ≤ v̅ = v̲ ≤ v₂.

(ii) By standard arguments (see e.g. the existence part of the proof of Theorem 148 in Section 4 below for the path dependent case), one can show that v̅ and v̲ are a viscosity supersolution and subsolution, respectively, of PDE (E)^{t,ω}_{ε,δ} with boundary condition h. Then v := v̅ = v̲ is a viscosity solution, by Assumption 141. The uniqueness follows from the comparison principle.

Finally, we introduce the assumptions on the terminal condition ξ. First:

Assumption 143. ξ ∈ L⁰(F_T) is bounded and uniformly continuous in ω under ‖·‖_T.

Next, for each ε > 0 and n ≥ 0, denote

O^ε_n := { π_n = (tᵢ, xᵢ)_{1≤i≤n} : 0 < t₁ < ⋯ < t_n < T, |xᵢ| ≤ ε for all 1 ≤ i ≤ n }. (2.9)

For each π_n ∈ O^ε_n, let ω^{π_n} denote the linear interpolation of (0,0), (tᵢ, Σ_{j=1}^i xⱼ)_{1≤i≤n}, and (T, Σ_{j=1}^n xⱼ).

Assumption 144. For any ε small and any n, the function π_n ∈ O^ε_n ↦ ξ(ω^{π_n}) is uniformly continuous.

Remark 145. Assumption 144 is a purely technical condition for our proof of uniqueness. To be precise, it will be used only in the proof of Lemma 179 below.
When we have a representation for the viscosity solution, we may construct the required approximations there explicitly, and then Assumption 144 is not required. See also Remark 182 below. However, we note that ξ(ω) = ω_t for some 0 < t < T does not satisfy Assumption 144. To include this case, we weaken the assumption slightly. Let 0 ≤ T₀ < T₁ ≤ T. Denote

O^ε_n(T₀,T₁) := { π_n = (tᵢ,xᵢ)_{1≤i≤n} : T₀ < t₁ < ⋯ < t_n < T₁, |xᵢ| ≤ ε for all 1 ≤ i ≤ n }. (2.10)

For each π_n, let ω^{π_n} ∈ Ω^{T₀} denote the linear interpolation of (T₀,0), (tᵢ, Σ_{j=1}^i xⱼ)_{1≤i≤n}, and (T, Σ_{j=1}^n xⱼ). We next assume:

Assumption 146. There exist 0 = T₀ < ⋯ < T_N = T such that, for each i = 0,…,N−1, for any ε small, any n, and any ω ∈ Ω, the functions π_n ↦ ξ( ω ⊗_{T_i} ω^{π_n} ⊗_{T_{i+1}} ω̃ ) and π_n ↦ G( t, ω ⊗_{T_i} ω^{π_n} ⊗_{T_{i+1}} ω̃, y, z, γ ) are uniformly continuous on O^ε_n(T_i, T_{i+1}), uniformly in t ≥ T_{i+1}, (y,z,γ) ∈ R × R^d × S_d, and ω̃ ∈ Ω^{T_{i+1}}.

We conclude this subsection with a sufficient condition for Assumption 146.

Lemma 147. Let ξ(ω) = g( ω_{T₁},…,ω_{T_N}, ω̄_{T₁},…,ω̄_{T_N}, ω̲_{T₁},…,ω̲_{T_N}, ω ) for some 0 = T₀ < T₁ < ⋯ < T_N = T, where ω̄_t := sup_{0≤s≤t} ω_s and ω̲_t := inf_{0≤s≤t} ω_s, defined componentwise. Denote x := (x₁,…,x_N, x̄₁,…,x̄_N, x̲₁,…,x̲_N), and assume g is bounded and uniformly continuous in (x,ω). Moreover, for each x, i, and ω ∈ Ω, there exists a modulus of continuity function ρ (which may depend on the above parameters) such that

| g( x, ω ⊗_{T_i} ω¹ ⊗_{T_{i+1}} ω̃ ) − g( x, ω ⊗_{T_i} ω² ⊗_{T_{i+1}} ω̃ ) | ≤ ρ( ∫_{T_i}^{T_{i+1}} |ω¹_t − ω²_t| dt ),

for all ω¹, ω² ∈ Ω^{T_i}, ω̃ ∈ Ω^{T_{i+1}}. Then ξ satisfies Assumption 146.

Proof. Under our conditions, ξ clearly satisfies Assumption 146 (i). Moreover, for each ω ∈ Ω, π_n = (t_k, x_k)_{1≤k≤n} ∈ O^ε_n(T_i, T_{i+1}), and ω̃ ∈ Ω^{T_{i+1}}, denote ω̂^{π_n} := ω ⊗_{T_i} ω^{π_n} ⊗_{T_{i+1}} ω̃. We see that, for j ≤ i,

ω̂^{π_n}_{T_j} = ω_{T_j}, (ω̂^{π_n})̄_{T_j} = ω̄_{T_j}, (ω̂^{π_n})̲_{T_j} = ω̲_{T_j},

and for j > i, denoting y := ω_{T_i} + Σ_{k=1}^n x_k,

ω̂^{π_n}_{T_j} = y + ω̃_{T_j}, (ω̂^{π_n})̄_{T_j} = ω̄_{T_i} ∨ ( y + (ω̃)̄_{T_j} ), (ω̂^{π_n})̲_{T_j} = ω̲_{T_i} ∧ ( y + (ω̃)̲_{T_j} ).

Then, for another π̃_n = (t̃_k, x̃_k)_{1≤k≤n} ∈ O^ε_n(T_i, T_{i+1}), we have

| ξ( ω ⊗_{T_i} ω^{π_n} ⊗_{T_{i+1}} ω̃ ) − ξ( ω ⊗_{T_i} ω^{π̃_n} ⊗_{T_{i+1}} ω̃ ) | ≤ ρ( | Σ_{k=1}^n x_k − Σ_{k=1}^n x̃_k | + ∫_{T_i}^{T_{i+1}} | ω^{π_n}_t − ω^{π̃_n}_t | dt ).

Then one can easily check that ξ( ω ⊗_{T_i} ω^{π_n} ⊗_{T_{i+1}} ω̃ ) is uniformly continuous in π_n.

2.2 Wellposedness

The main result of this paper is:

Theorem 148. Let Assumptions 138, 139, 141, and 146 hold true.

(i) Let u₁ ∈ U̲ be a viscosity subsolution and u₂ ∈ U̅ a viscosity supersolution of PPDE (2.12) with u₁(T,·) ≤ u₂(T,·). Then u₁ ≤ u₂ on Λ.

(ii) The PPDE (2.12) with terminal condition ξ has a unique viscosity solution u ∈ UC_b(Λ).

The proof is reported in Sections 3 and 4. In particular, we shall first prove the theorem under the stronger condition of Assumption 144. A key ingredient is the partial comparison result, proved in Section 7, which extends the corresponding result in Proposition 5.3 of [17] to the set C̄^{1,2}(Λ).

Remark 149. In [17] Section 6, we transformed backward stochastic PDEs into a PPDE. It is straightforward to translate the assumptions in Theorem 148 to that setting and thus obtain the wellposedness of viscosity solutions of BSPDEs.

2.3 A change of variable formula

We have previously observed, in [17] Remark 3.15, that the classical change of variable formula is not known to hold true for our notion of viscosity solutions under Assumption 138. We now show that the additional Assumption 141 allows us to prove that the change of variable holds true. Let u ∈ C^{1,2}_b(Λ) and ψ ∈ C^{1,2}([0,T] × R). Assume ψ is strictly increasing in x and let ψ̄ denote its inverse function. Define

w(t,ω) := ψ̄(t, u(t,ω)) and thus u(t,ω) = ψ(t, w(t,ω)). (2.11)

Then

∂_t u = ψ_t + ψ_x ∂_t w, ∂_ω u = ψ_x ∂_ω w, ∂²_{ωω} u = ψ_{xx} (∂_ω w)² + ψ_x ∂²_{ωω} w.

One can check straightforwardly that Lu(t,ω) = ψ_x(t, w(t,ω)) L̃w(t,ω), where

L̃w := −∂_t w − G̃( t, ω, w, ∂_ω w, ∂²_{ωω} w ), (2.12)

and

G̃(t,ω,y,z,γ) := [ ψ_t(t,y) + G( t, ω, ψ(t,y), ψ_x(t,y) z, ψ_{xx}(t,y) z² + ψ_x(t,y) γ ) ] / ψ_x(t,y).

Note that ψ is increasing in x and ψ_x > 0.
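The chain rule above can be checked numerically in a Markovian proxy, where the path variable is replaced by a scalar a. The particular functions ψ and w below are hypothetical choices for illustration; the check compares central finite differences of u = ψ(t, w) against the stated formulas.

```python
# Numerical check of the change-of-variable derivatives. Hypothetical choices:
#   psi(t, x) = t + exp(x)   (strictly increasing in x, psi_x > 0),
#   w(t, a)   = sin(a) + t**2.
# We verify that derivatives of u(t,a) = psi(t, w(t,a)) computed by central
# finite differences match psi_t + psi_x*w_t, psi_x*w_a, and
# psi_xx*(w_a)**2 + psi_x*w_aa.
import math

psi = lambda t, x: t + math.exp(x)
w = lambda t, a: math.sin(a) + t * t
u = lambda t, a: psi(t, w(t, a))

t0, a0, h = 0.3, 0.7, 1e-4
wx = w(t0, a0)
psi_t, psi_x, psi_xx = 1.0, math.exp(wx), math.exp(wx)   # derivatives of psi
w_t, w_a, w_aa = 2 * t0, math.cos(a0), -math.sin(a0)     # derivatives of w

du_dt = (u(t0 + h, a0) - u(t0 - h, a0)) / (2 * h)
du_da = (u(t0, a0 + h) - u(t0, a0 - h)) / (2 * h)
d2u_daa = (u(t0, a0 + h) - 2 * u(t0, a0) + u(t0, a0 - h)) / h ** 2

assert abs(du_dt - (psi_t + psi_x * w_t)) < 1e-6
assert abs(du_da - psi_x * w_a) < 1e-6
assert abs(d2u_daa - (psi_xx * w_a ** 2 + psi_x * w_aa)) < 1e-6
```

The quadratic term ψ_{xx} z² appearing in G̃ is what makes the transformed generator quadratic in z, as noted after Theorem 151 below.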
Then the following result is obvious.

Proposition 150. Under the above assumptions on ψ, u is a classical solution (resp. supersolution, subsolution) of Lu = 0 if and only if w := ψ̄(t,u) is a classical solution (resp. supersolution, subsolution) of L̃w = 0.

Moreover, we have:

Theorem 151. Assume both (G, ξ) and (G̃, ψ̄(T,ξ)) satisfy the conditions of Theorem 148. Then u is the viscosity solution of PPDE (2.12) with terminal condition ξ if and only if w := ψ̄(t,u) is the viscosity solution of PPDE (2.12) with terminal condition ξ̃ := ψ̄(T,ξ).

The proof is postponed to Section 3 below. We remark, though, that the above operator G̃ is quadratic in the z-variable, so we need somewhat stronger conditions to ensure wellposedness.

3 Comparison, uniqueness, and change of variable

We follow the spirit of Perron's approach. Define:

u̅(t,ω) := inf{ θ_t : θ ∈ D̅(t,ω) }, u̲(t,ω) := sup{ θ_t : θ ∈ D̲(t,ω) }, (3.13)

where

D̅(t,ω) := { θ ∈ C̄^{1,2}(Λᵗ) : θ⁻ bounded, (Lθ)^{t,ω} ≥ 0 on [t,T) × Ωᵗ, θ_T ≥ ξ^{t,ω} },
D̲(t,ω) := { θ ∈ C̄^{1,2}(Λᵗ) : θ⁺ bounded, (Lθ)^{t,ω} ≤ 0 on [t,T) × Ωᵗ, θ_T ≤ ξ^{t,ω} }. (3.14)

By using the functional Itô formula (1.3) and following the arguments of [17] Theorem 3.16, we obtain a result similar to the partial comparison of Proposition 128, implying that:

u̲ ≤ u̅. (3.15)

A crucial step in our proof is to show that equality holds in the last inequality under our additional assumptions.

Proposition 152. Under Assumptions 138, 139, 141, 143, and 144, we have u̲ = u̅.

The lengthy proof of this proposition is postponed to Appendix 1. We now show how it implies the comparison result for the PPDE under Assumption 144.

Proof of Theorem 148 (i) and uniqueness under Assumption 144. By Proposition 128, we have u₁ ≤ u̅ and u̲ ≤ u₂. Then Proposition 152 implies u₁ ≤ u₂ immediately, which in turn implies the uniqueness of the viscosity solution.

Proof of Theorem 151. One may easily check that w̅ = ψ̄(t, u̅) and w̲ = ψ̄(t, u̲), where

w̅(t,ω) := inf{ θ_t : θ ∈ C̄^{1,2}(Λᵗ), θ⁻ bounded, L̃θ ≥ 0, θ_T ≥ ψ̄(T, ξ^{t,ω}) },
w̲(t,ω) := sup{ θ_t : θ ∈ C̄^{1,2}(Λᵗ), θ⁺ bounded, L̃θ ≤ 0, θ_T ≤ ψ̄(T, ξ^{t,ω}) }.

Then the result follows immediately from Proposition 152 and the arguments in the proof of Theorem 148.

4 Construction of a viscosity solution and proof of Theorem 148

By Proposition 152 above, we have u̲ = u̅ =: u. In this section we show that u is a viscosity solution of our PPDE.

4.1 Regularity

Proposition 153. Under Assumptions 138 and 143, we have u̲, u̅ ∈ UC_b(Λ).

Proof. We shall only prove the result for u̅. The proof for u̲ is similar.

(i) We first show that u̅ is bounded. Fix (t,ω), and set

θ(s,ω̃) := C₀(L₀ + 1) e^{(L₀+1)(T−s)}.

Then θ ∈ C^{1,2}(Λᵗ) ⊂ C̄^{1,2}(Λᵗ), θ ≥ 0, θ_T ≥ C₀(L₀+1) ≥ C₀ ≥ ξ^{t,ω}, and we compute that

(Lθ)^{t,ω}_s = (L₀+1) θ_s − G^{t,ω}(·, θ_s, 0, 0) ≥ θ_s − G^{t,ω}(·, 0, 0, 0) ≥ C₀(L₀+1) − C₀ ≥ 0.

This implies that θ ∈ D̅(t,ω), and thus u̅(t,ω) ≤ θ(t,0). On the other hand, by similar arguments one can show that −θ is a classical subsolution of PPDE (2.12) satisfying −θ_T ≤ ξ^{t,ω}. Then by the partial comparison of Proposition 128 we get u̅(t,ω) ≥ −θ(t,0). Therefore,

| u̅(t,ω) | ≤ θ(t,0) = C₀(L₀+1) e^{(L₀+1)(T−t)} ≤ C₀(L₀+1) e^{(L₀+1)T}, and u̅ is bounded.

(ii) We next show that u̅ is uniformly continuous in ω, uniformly in t. For t ∈ [0,T] and ω¹, ω² ∈ Ω, denote δ := ‖ω¹ − ω²‖_t. For θ¹ ∈ D̅(t,ω¹), define

θ²(s,ω̃) := θ¹(s,ω̃) + ψ(s,ω̃), where ψ(s,ω̃) := e^{(L₀+1)(T−s)} [ ρ₀(δ) + δ ].

Noticing that e^{−(L₀+1)s} = e^{−(L₀+1)ĥᵢ} e^{−(L₀+1)(s−ĥᵢ)}, one can easily check that θ² ∈ C̄^{1,2}(Λᵗ) with the same ĥᵢ as those of θ¹. Moreover, θ² is bounded from below, and

θ²_T = θ¹_T + ψ_T ≥ ξ^{t,ω¹} + ρ₀(δ) ≥ ξ^{t,ω²},
(Lθ²)^{t,ω²}_s ≥ (Lθ²)^{t,ω²}_s − (Lθ¹)^{t,ω¹}_s = (L₀+1) ψ_s − G^{t,ω²}(s, ·, θ², ∂_ω θ¹, ∂²_{ωω} θ¹) + G^{t,ω¹}(s, ·, θ¹, ∂_ω θ¹, ∂²_{ωω} θ¹) ≥ (L₀+1) ψ_s − ρ₀(δ) − L₀ ψ_s = ψ_s − ρ₀(δ) > 0.

Then θ² ∈ D̅(t,ω²), and therefore u̅(t,ω²) ≤ θ²(t,0), implying that

u̅(t,ω²) − θ¹(t,0) ≤ θ²(t,0) − θ¹(t,0) = e^{(L₀+1)(T−t)} [ ρ₀(δ) + δ ] ≤ C [ ρ₀(δ) + δ ].

Since θ¹ ∈ D̅(t,ω¹) is arbitrary, we obtain u̅(t,ω²) − u̅(t,ω¹) ≤ C[ ρ₀(δ) + δ ]. By symmetry, this shows the required uniform continuity of u̅ in ω, uniformly in t.
(iii) We now prove that u̅ satisfies (2.14). For this purpose, we first prove the following partial dynamic programming principle: for 0 ≤ t₁ < t₂ ≤ T,

u̅(t₁,ω) ≤ inf{ θ_{t₁} : θ ∈ D̅_{t₂}(t₁,ω) }, where (4.16)

D̅_{t₂}(t₁,ω) := { θ ∈ C̄^{1,2}(Λ^{t₁}) : θ⁻ bounded, (Lθ)^{t₁,ω} ≥ 0 on [t₁,t₂) × Ω^{t₁}, θ_{t₂} ≥ u̅^{t₁,ω}_{t₂} }.

Indeed, noticing that θ^{t₂,ω̃} ∈ C̄^{1,2}(Λ^{t₂}) for any θ ∈ C̄^{1,2}(Λ^{t₁}), it is clear that θ_{t₂} ≥ u̅^{t₁,ω}_{t₂} for any (t₁,ω) ∈ Λ and any θ ∈ D̅(t₁,ω). Then D̅(t₁,ω) ⊂ D̅_{t₂}(t₁,ω), and thus (4.16) holds.

Next, we fix t₁ < t₂ ≤ T and consider the following PPDE on [0,t₂]:

−∂_t w − g( w, ∂_ω w, ∂²_{ωω} w ) = 0, t ∈ [0,t₂), ω ∈ Ω; w(t₂,ω) = u̅(t₂,ω), (4.17)

where g is introduced in (1.1). This PPDE clearly satisfies Assumptions 138 and 143. Moreover, similar to (1.10) and (1.11), we have the representation for PPDE (4.17):

w(t,ω) := inf_{b ∈ B^t_{L₀}} E^{L₀}_t[ e^{−∫_t^{t₂} b_r dr} u̅( t₂, ω ⊗_t Bᵗ ) − C₀ ∫_t^{t₂} e^{−∫_t^s b_r dr} ds ], (t,ω) ∈ [0,t₂] × Ω. (4.18)

Recalling (1.2) and applying the partial comparison principle, Proposition 128, to PPDE (4.17), we see that θ_{t₁} ≥ w(t₁,ω) for any θ ∈ D̅_{t₂}(t₁,ω). Then u̅(t₁,ω) ≥ w(t₁,ω), and thus

u̅(t₂,ω) − u̅(t₁,ω) ≤ u̅(t₂,ω) − w(t₁,ω) = sup_{b ∈ B^{t₁}_{L₀}} E^{L₀}_{t₁}[ u̅(t₂,ω) − e^{−∫_{t₁}^{t₂} b_r dr} u̅( t₂, ω ⊗_{t₁} B^{t₁} ) + C₀ ∫_{t₁}^{t₂} e^{−∫_{t₁}^s b_r dr} ds ].

Then it follows from (i) and (ii) that

u̅(t₂,ω) − u̅(t₁,ω) ≤ C(t₂ − t₁) + C E^{L₀}_{t₁}[ u̅(t₂,ω) − u̅( t₂, ω ⊗_{t₁} B^{t₁} ) ] ≤ C(t₂ − t₁) + C E^{L₀}_{t₁}[ ρ( d∞( (t₁,ω), (t₂,ω) ) + ‖B^{t₁}‖_{t₂} ) ],

where ρ is the modulus of continuity of u̅(t₂,·). Now it is straightforward to check that u̅ satisfies this part of (2.14).

(iv) We finally prove that u̅ satisfies the remaining inequality in (2.14). This, together with (i) and (iii), implies that u̅ ∈ UC_b(Λ). For t₁ < t₂, ω ∈ Ω, and θ² ∈ D̅(t₂,ω), define

ξ_{t₂}(ω̃) := θ²(t₂,0) + e^{L₀(T−t₂)} ρ₀( d∞( (t₁,ω), (t₂,ω) ) + ‖ω̃‖_{t₂} ), ω̃ ∈ Ω^{t₂},

and consider the PPDE on [t₁,t₂]:

−∂_t w − g( w, ∂_ω w, ∂²_{ωω} w ) = 0, (t,ω̃) ∈ [t₁,t₂) × Ω^{t₁}; w(t₂,·) = ξ_{t₂}, (4.19)

where g is introduced in (1.1). This PPDE clearly satisfies Assumptions 138, 139, 143, and 144.
We shall prove in Proposition 156 (i) below that it also satisfies Assumption 141. Then it follows from the comparison result of Theorem 148 (i), which we proved in the previous section, that the PPDE (4.19) satisfies the comparison principle. Next, similar to (4.17)–(4.18), PPDE (4.19) has a representation for its unique solution w:

w(t,ω̃) := sup_{b ∈ B^t_{L₀}} E^{L₀}_t[ e^{−∫_t^{t₂} b_r dr} ξ_{t₂}( ω̃ ⊗_t Bᵗ ) + C₀ ∫_t^{t₂} e^{−∫_t^s b_r dr} ds ], (t,ω̃) ∈ [t₁,t₂] × Ω^{t₁}. (4.20)

Then

w(t₁,0) − θ²(t₂,0) = sup_{b ∈ B^{t₁}_{L₀}} E^{L₀}_{t₁}[ ( e^{−∫_{t₁}^{t₂} b_r dr} − 1 ) θ²(t₂,0) + C₀ ∫_{t₁}^{t₂} e^{−∫_{t₁}^s b_r dr} ds + e^{−∫_{t₁}^{t₂} b_r dr + L₀(T−t₂)} ρ₀( d∞( (t₁,ω), (t₂,ω) ) + ‖B^{t₁}‖_{t₂} ) ].

By (i), we may assume without loss of generality that |θ²(t₂,0)| ≤ C. Then

| w(t₁,0) − θ²(t₂,0) | ≤ C(t₂−t₁) + C E^{L₀}_{t₁}[ ρ₀( d∞( (t₁,ω), (t₂,ω) ) + ‖B^{t₁}‖_{t₂} ) ] ≤ C ρ̄( d∞( (t₁,ω), (t₂,ω) ) ), (4.21)

for some modulus of continuity ρ̄. Moreover, applying Propositions 128 and 152, which we proved in previous sections, to PPDE (4.19), the unique viscosity solution of PPDE (4.19) satisfies w = w̅, where w̅ is defined for PPDE (4.19) in the spirit of (3.13). Then for any ε > 0, there exists θ⁰ ∈ C̄^{1,2}(Λ^{t₁}), bounded from below, such that

θ⁰(t₁,0) ≤ w(t₁,0) + ε, θ⁰(t₂,ω̃) ≥ w(t₂,ω̃), and −∂_t θ⁰ − g( θ⁰, ∂_ω θ⁰, ∂²_{ωω} θ⁰ ) ≥ 0. (4.22)

Therefore, for t ∈ [t₁,t₂), by (1.1) and (1.2) we have

Lθ⁰ = −∂_t θ⁰ − G(·, θ⁰, ∂_ω θ⁰, ∂²_{ωω} θ⁰) ≥ g( θ⁰, ∂_ω θ⁰, ∂²_{ωω} θ⁰ ) − G(·, θ⁰, ∂_ω θ⁰, ∂²_{ωω} θ⁰) ≥ 0. (4.23)

Now define θ¹ on Λ^{t₁} by: denoting ω̃^{t₂}_s := ω̃_s − ω̃_{t₂} for ω̃ ∈ Ω^{t₁} and s ∈ [t₂,T],

θ¹(t,ω̃) := θ⁰(t,ω̃) 1_{[t₁,t₂)}(t) + [ θ²(t, ω̃^{t₂}) + ( θ⁰(t₂,ω̃) − θ²(t₂,0) ) e^{L₀(t₂−t)} ] 1_{[t₂,T]}(t). (4.24)

Since θ⁰, θ², and θ²(t₂,0) are bounded from below, so is θ¹. We shall prove in (v) below that θ¹ ∈ C̄^{1,2}(Λ^{t₁}). Then it follows from (4.19) and (4.22) that θ⁰(t₂,ω̃) ≥ w(t₂,ω̃) ≥ θ²(t₂,0), and thus θ¹(t,ω̃) ≥ θ²(t, ω̃^{t₂}) for t ≥ t₂. Then, for t ∈ [t₂,T],

Lθ¹ = −∂_t θ² + L₀( θ¹ − θ²(t, ω̃^{t₂}) ) − G(·, θ¹, ∂_ω θ², ∂²_{ωω} θ²) ≥ L₀( θ¹ − θ²(t, ω̃^{t₂}) ) + G(·, θ², ∂_ω θ², ∂²_{ωω} θ²) − G(·, θ¹, ∂_ω θ², ∂²_{ωω} θ²) ≥ 0. (4.25)

Moreover, by (4.24), (4.22), and (4.19),

θ¹(T,ω̃) ≥ θ²(T, ω̃^{t₂}) + ( w(t₂,ω̃) − θ²(t₂,0) ) e^{L₀(t₂−T)} ≥ ξ^{t₂,ω}( ω̃^{t₂} ) + ρ₀( d∞( (t₁,ω), (t₂,ω) ) + ‖ω̃‖_{t₂} ) ≥ ξ^{t₁,ω}(ω̃).

This, together with (4.23) and (4.25), implies that θ¹ ∈ D̅(t₁,ω). Then it follows from (4.22) and (4.21) that

u̅(t₁,ω) ≤ θ¹(t₁,0) = θ⁰(t₁,0) ≤ w(t₁,0) + ε ≤ θ²(t₂,0) + C ρ̄( d∞( (t₁,ω), (t₂,ω) ) ) + ε.

Since θ² ∈ D̅(t₂,ω) and ε > 0 are arbitrary, this provides (2.14).

(v) It remains to verify that θ¹ ∈ C̄^{1,2}(Λ^{t₁}). Let ĥ⁰ᵢ, E^{0,i}_j correspond to θ⁰ and ĥ²ᵢ, E^{2,i}_j correspond to θ² in Definition 137. Define a random index

I := inf{ i : ĥ⁰ᵢ ≥ t₂ }.

Set ĥ¹ᵢ := ĥ⁰ᵢ for i < I and ĥ¹ᵢ(ω) := ĥ²_{i−I}(ω^{t₂}) for i ≥ I. Moreover, set E^{1,i}_{2j−1} := E^{0,i}_j ∩ { I > i } and E^{1,i}_{2j} := E^{2,i−I}_j ∩ { I ≤ i }, j ≥ 1. Noting that ĥ¹_{i+1} = ĥ⁰_{i+1} ∧ t₂ whenever ĥ⁰ᵢ < t₂, it is clear that the ĥ¹ᵢ are F-stopping times and (ĥ¹)^{ĥ¹ᵢ(ω),ω}_{i+1} ∈ H^{ĥ¹ᵢ(ω)} whenever ĥ¹ᵢ(ω) < T. From the construction of the E^{1,i}_j one easily sees that { E^{1,i}_j, j ≥ 1 } ⊂ F_{ĥ¹ᵢ} and that they form a partition of Ω^{t₁}. Moreover, since on each E^{1,i}_j either ĥ¹ᵢ = ĥ⁰ᵢ or ĥ¹ᵢ = ĥ²_{i−I}, Definition 137 (ii)–(iv) are obvious. It remains to prove that

{ i : ĥ¹ᵢ < T } is finite and lim_{i→∞} C^L_t[ (ĥ¹ᵢ)^{t,ω} < T ] = 0 for any (t,ω) ∈ Λ^{t₁}. (4.26)

Notice that, denoting by [i/2] the largest integer below i/2,

{ ĥ¹ᵢ < T } = { ĥ¹ᵢ < T, I > [i/2] } ∪ { ĥ¹ᵢ < T, I ≤ [i/2] } ⊂ { ĥ⁰_{[i/2]} < t₂ } ∪ { ω ∈ Ω^{t₁} : ĥ²_{[i/2]}(ω^{t₂}) < T }.

Then, for any ω, ĥ¹ᵢ(ω) = T when i is large enough. Furthermore, for any L > 0 and P ∈ P^{t₁}_L,

P[ ĥ¹ᵢ < T ] ≤ P[ ĥ⁰_{[i/2]} < t₂ ] + P[ ω ∈ Ω^{t₁} : ĥ²_{[i/2]}(ω^{t₂}) < T ] ≤ C^L_{t₁}[ ĥ⁰_{[i/2]} < T ] + E^P[ P^{t₂,ω}[ ĥ²_{[i/2]} < T ] ] ≤ C^L_{t₁}[ ĥ⁰_{[i/2]} < T ] + C^L_{t₂}[ ĥ²_{[i/2]} < T ],

and thus

lim_{i→∞} C^L_{t₁}[ ĥ¹ᵢ < T ] ≤ lim_{i→∞} ( C^L_{t₁}[ ĥ⁰_{[i/2]} < T ] + C^L_{t₂}[ ĥ²_{[i/2]} < T ] ) = 0.

Similarly one can show (4.26) for any (t,ω) ∈ Λ^{t₁}.
4.2 Existence of viscosity solution Proposition 154. Under Assumptions 138 and 144, u (resp. u) is a viscosity L 0 - supersolution (resp. L 0 -subsolution) of PPDE (2.12) with terminal condition . Proof. Without loss of generality, we may assume that the generatorG satises (7.44), and we prove only that u is a viscosity L 0 -supersolution at (0; 0). Assume to the contrary that there exists '2A L 0 u(0; 0) such thatc :=L'(0; 0)< 0. For any 2D(0; 0) and any (t;!)2 , it is clear that t;! 2D(t;!) and then (t;!) u(t;!). Now by (3.13) there exist n 2 C 1;2 () such that n := n (0; 0)u(0; 0)# 0 as n!1; (L n ) s 0 and n s u s ; s2 [0;T ]: (4.27) Let h be the hitting time required inA L 0 u(0; 0), and since '2C 1;2 () and u2UC b () U, without loss of generality we may assume L'(t;!) c 2 and j' t ' 0 j + u t u 0 c 6L 0 ; for all t h: (4.28) 138 We emphasize that the above h is independent of n. Now letfh n i ;i 1g correspond to n 2C 1;2 (). Since '2A L 0 u(0; 0), this implies for all P2P L 0 and n;i that: 0 E P h ('u) h^h n i i E P h (' n ) h^h n i i : (4.29) Recall (4.18) and denoteG P := P @ ! + 1 2 ( P ) 2 : @ 2 !! . Then, applying functional It^ o formula in (4.29) and recalling that n is a semi-martingale on [0; h n i ], it follows from (4.27) that: n E P h n 0 n h^h n i +' h^h n i ' 0 i =E P h Z h^h n i 0 (@ t +G P )(' n )ds i E P h Z h^h n i 0 c 2 G(:;';@ ! ';@ 2 !! ') +G(:; n ;@ ! n ;@ 2 !! n ) +G P (' n ) ds i E P h Z h^h n i 0 c 2 G(:;';@ ! ';@ 2 !! ') +G(:;u;@ ! n ;@ 2 !! n ) +G P (' n ) ds i ; where the last inequality follows from (7.44) and the fact that u n by (4.27). Since ' 0 =u 0 , by (4.28) and (7.44) we get n E P h Z h^h n i 0 c 3 G(:;u 0 ;@ ! ';@ 2 !! ') +G(:;u 0 ;@ ! n ;@ 2 !! n ) +G P (' n ) ds i : Now let > 0 be a small number. For each n, dene n 0 := 0, and n j+1 := h^ inf n t n j : 0 d 1 ((t;!); ( n j ;!)) +j@ ! '(t;!)@ ! '( n j ;!)j +j@ 2 !! '(t;!)@ 2 !! '( n j ;!)j +j@ ! n (t;!)@ ! n ( n j ;!)j +j@ 2 !! n (t;!)@ 2 !! 
n ( n j ;!)j o : Recall Denition 137 (iii)-(iv) we see the uniform regularity of n on [0; h n i ] for eachi. Then, together with the smoothness ofG and', one can easily check that n j " h asj!1. Thus n [ c 3 C]E P [h^ h n i ] + X j0 E P h n j+1 ^ h n i n j ^ h n i G(:;u 0 ;@ ! n ;@ 2 !! n )G(:;u 0 ;@ ! ';@ 2 !! ') +G P (' n ) n j i = [ c 3 C]E P [h^ h n i ] + X j0 E P h n j+1 ^ h n i n j ^ h n i n j @ ! ( n ') + 1 2 2 n i :@ 2 !! ( n ') +G P (' n ) n j i : 139 for some appropriate n j ; n j . Now choose P n 2P L 0 such that Pn t = n j , Pn t = n j for all n j t < n j+1 . Then n [ c 3 C]E Pn [h^ h n i ]. Set := c 6C , send i!1, and recall Denition 137 that lim i!1 C L 0 0 (h n i <T ) = 0. This leads to n c 6 E Pn [h]E L 0 0 [h], and, by sending n!1, we obtainE L 0 0 [h] = 0. However, since h2H, by [17] Lemma 2.4 we have E L 0 [h]> 0. This is a contradiction. Proof of Theorem 148 under Assumption 144: Existence. Recall from Proposition 152 that u := u = u. Then, Propositions 153 and 154 imply immediately that u is a viscosity solution of PPDE (2.12). 4.3 Proof of Theorem 148 We now prove Theorem 148 under the weaker condition Assumption 146. First, by the proof of Theorem 148 under Assumption 144, we have the comparison, existence, and uniqueness of viscosity solution on [T N1 ;T N ]. Let u denote the unique viscosity solution [T N1 ;T N ] with terminal condition , constructed by the Perron's approach. Now consider the PPDE 2.12 on [T N2 ;T N1 ] with terminal conditionu(T N1 ;). We claim thatu(T N1 ;) satises Assumptions 143 and 144 on [T N2 ;T N1 ], then we may extend the comparison, existence, and uniqueness of viscosity solution to the interval [T N2 ;T N ]. Repeat the arguments backwardly we prove Theorem 148. So it remains to check Assumptions 143 and 144 for u(T N1 ;) on [T N2 ;T N1 ]. First, by Proppsition 153 it is clear that u(T N1 ;) is bounded. Given !2 , note that PPDE (2.12) on [T N1 ;T N ] can be viewed as a PPDE with generator G T N1 ;! 
and terminal condition $\xi^{T_{N-1},\omega}$. Then, following the arguments in Proposition 153 (ii), one can easily show that $u(T_{N-1},\omega)$ is uniformly continuous in $\omega$, and it follows from Assumption 146 that $u(T_{N-1}, \omega\otimes_{T_{N-2}}\omega^n)$ is uniformly continuous in $\omega^n\in\mathcal{O}^{\varepsilon_n}(T_{N-2},T_{N-1})$.

5 On Assumption 141

In this section we discuss the validity of our Assumption 141, which is clearly related to the classical Perron approach, the key argument for existence in the theory of viscosity solutions as shown by Ishii [27]. However, our definition of $\overline{v}$ and $\underline{v}$ involves classical supersolutions and subsolutions, while the classical definition in [27] involves viscosity solutions. We remark that Fleming and Vermes [22, 23] have some studies in this respect; see Remark 157 (ii) below. The main issue here is to approximate viscosity solutions by classical supersolutions or subsolutions. This is a difficult question which requires some restrictions on the nonlinearity. In this section we provide some sufficient conditions, and we hope to address this issue in a more systematic way in future research.

For ease of presentation, we first simplify the notation of Assumption 141. Let

$O := \{x\in\mathbb{R}^d : |x|<1\}$, $\overline{O} := \{x\in\mathbb{R}^d : |x|\le 1\}$, $\partial O := \{x\in\mathbb{R}^d : |x|=1\}$,
$\mathcal{O} := [0,T)\times O$, $\overline{\mathcal{O}} := [0,T]\times\overline{O}$, $\partial\mathcal{O} := \big([0,T]\times\partial O\big)\cup\big(\{T\}\times O\big)$.   (5.30)

We shall consider the following (deterministic) PDE on $\mathcal{O}$:

$\mathcal{L}v := -\partial_t v - g(s,x,v,Dv,D^2v) = 0$ in $\mathcal{O}$, and $v = h$ on $\partial\mathcal{O}$.   (5.31)

Assumption 155. (i) $g$ and $h$ are continuous in $(t,x)$;
(ii) $g$ is uniformly Lipschitz continuous in $(y,z,\gamma)$, and nondecreasing in $\gamma$;
(iii) the PDE (5.31) satisfies existence and comparison in the sense of viscosity solutions within the class of bounded functions.

More precisely, the last item (iii) states that for any bounded functions $v_1, v_2$ satisfying $\mathcal{L}v_1 \le 0 \le \mathcal{L}v_2$ on $\mathcal{O}$, in the sense of viscosity solutions, and $v_1 \le h \le v_2$ on $\partial\mathcal{O}$, we have $v_1 \le v_2$.
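For orientation, here is a standard example (not from the text, included only as an illustration) in which Assumption 155 can be checked directly:

```latex
% Example (assumed, for illustration): take
%   g(t,x,y,z,\gamma) := \tfrac12 \mathrm{tr}(\gamma) + f(t,x) - \lambda y,
% with f bounded and continuous and \lambda \ge 0. Then g is continuous in
% (t,x), affine (hence uniformly Lipschitz) in (y,z,\gamma), and nondecreasing
% in \gamma, so (i)-(ii) hold. Item (iii) holds as well: (5.31) becomes the
% semilinear backward heat equation
%   \partial_t v + \tfrac12 \Delta v + f - \lambda v = 0 \ \text{in } \mathcal{O},
%   \qquad v = h \ \text{on } \partial\mathcal{O},
% for which existence and comparison of bounded viscosity solutions are
% classical.
```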
Define

$\overline{v}(t,x) := \inf\big\{ w(t,x) : w \text{ classical supersolution of PDE (5.31)} \big\}$,
$\underline{v}(t,x) := \sup\big\{ w(t,x) : w \text{ classical subsolution of PDE (5.31)} \big\}$.

By the comparison principle we have $\underline{v} \le v \le \overline{v}$, where $v$ denotes the unique viscosity solution of PDE (5.31). Denote $\mathbb{S}^d_+ := \{\gamma\in\mathbb{S}^d : \gamma \ge 0\}$. Our main result of this section is

Proposition 156. Under Assumption 155, we have $\overline{v} = \underline{v}$ in the following three cases:
(i) $g$ is convex in $(y,z,\gamma)$, $g_\varepsilon(\cdot,\gamma) := \inf_{A\in\mathbb{S}^d_+}\big\{ g(\cdot,\gamma+A) - \varepsilon I_d : A \big\} > -\infty$ for $0\le\varepsilon\le c_0$, for some $c_0>0$, and $g_\varepsilon \to g$ as $\varepsilon\searrow 0$;
(ii) $g$ is convex in $\gamma$ and uniformly elliptic: for some constant $c_0>0$, $g(\cdot,\gamma_1) - g(\cdot,\gamma_2) \ge c_0\, I_d : (\gamma_1-\gamma_2)$ for any $\gamma_1 \ge \gamma_2$;
(iii) $g$ is uniformly elliptic and $d \le 2$.

Proof. Without loss of generality we assume

$g(\cdot,y_1,\cdot) - g(\cdot,y_2,\cdot) \ge y_2 - y_1$ for all $y_1 \le y_2$.   (5.32)

(i) We proceed in two steps.

Step 1. Let $\psi_\delta$ be a smooth mollifier on $\overline{\mathcal{O}}$, and define $w_\delta := v * \psi_\delta + c_\delta$ for any $\delta>0$, where $c_\delta := \sup_{\partial\mathcal{O}} |v*\psi_\delta - h|$. Then $w_\delta \in C^{1,2}(\mathcal{O})$ and $w_\delta \ge h$ on $\partial\mathcal{O}$, and by the convexity argument of Krylov [29], it follows that $w_\delta$ satisfies in the viscosity sense:

$-\partial_t w_\delta - g(\cdot, w_\delta, Dw_\delta, D^2 w_\delta) \ge \big[-\partial_t v - g(\cdot, v+c_\delta, Dv, D^2 v)\big]*\psi_\delta \ge \big[-\partial_t v - g(\cdot,v,Dv,D^2v)\big]*\psi_\delta = 0$,

where we used (5.32). This implies $\overline{v} \le w_\delta$. Note that $w_\delta - v \le 2c_\delta$ on $\partial\mathcal{O}$. Following similar arguments as in Lemma 177, one can easily show that $w_\delta(0,0) - v(0,0) \le C c_\delta$. Clearly $c_\delta \to 0$ as $\delta\searrow 0$; then $\overline{v}(0,0) = v(0,0)$.

Step 2. Let $0<\varepsilon\le c_0$. Then $g_\varepsilon \le g$, and for any $\gamma'\in\mathbb{S}^d_+$ we directly verify that

$g_\varepsilon(\cdot,\gamma) = \inf_{A\ge-\gamma'}\big\{ g(\cdot,\gamma+\gamma'+A) - \varepsilon I_d : (A+\gamma')\big\} \le \inf_{A\ge 0}\big\{ g(\cdot,\gamma+\gamma'+A) - \varepsilon I_d:(A+\gamma')\big\} = -\varepsilon I_d:\gamma' + g_\varepsilon(\cdot,\gamma+\gamma')$,

that is, $g_\varepsilon$ is uniformly elliptic. Finally, consider some $\lambda\in[0,1]$, $\gamma_0,\gamma_1\in\mathbb{S}^d$, and set $\gamma_\lambda := (1-\lambda)\gamma_0 + \lambda\gamma_1$. For all $A_0,A_1\in\mathbb{S}^d_+$, we have

$g_\varepsilon(\cdot,\gamma_\lambda) \le g\big(\cdot, \gamma_\lambda + (1-\lambda)A_0 + \lambda A_1\big) - \varepsilon I_d : \big[(1-\lambda)A_0 + \lambda A_1\big] \le (1-\lambda)\big[g(\gamma_0+A_0) - \varepsilon I_d:A_0\big] + \lambda\big[g(\gamma_1+A_1) - \varepsilon I_d:A_1\big]$.

By the arbitrariness of $A_0, A_1$ in $\mathbb{S}^d_+$, this shows that $g_\varepsilon$ is convex in $\gamma$. Next, let $g^n_\varepsilon$ and $h^n$ be smooth mollifications of $g_\varepsilon$ and $h$ such that $g^n_\varepsilon \uparrow g_\varepsilon$ and $h^n \uparrow h$ uniformly. Note that $g^n_\varepsilon$ inherits the uniform ellipticity.
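Both steps above approximate nonsmooth objects by convolution with a mollifier. A minimal numerical sketch (my own toy illustration: one space dimension, the Lipschitz function $|x|$, and a discrete quadrature) shows the sup-norm mollification error vanishing linearly in the mollification radius, which is the mechanism driving $c_\delta \to 0$:

```python
import numpy as np

def mollifier(x):
    # Standard C^infty bump supported on [-1, 1] (unnormalized).
    out = np.zeros_like(x, dtype=float)
    inside = np.abs(x) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

def smooth(f, delta, grid):
    # Discrete convolution f * psi_delta with normalized weights.
    ys = np.linspace(-delta, delta, 401)
    w = mollifier(ys / delta)
    w /= w.sum()
    return np.array([np.sum(f(x - ys) * w) for x in grid])

f = np.abs                            # Lipschitz, but not C^1 at the origin
grid = np.linspace(-2.0, 2.0, 101)
err_big = np.max(np.abs(smooth(f, 0.5, grid) - f(grid)))
err_small = np.max(np.abs(smooth(f, 0.05, grid) - f(grid)))
```

Since $f$ is 1-Lipschitz, the error is bounded by the mollification radius $\delta$; the smoothed function is $C^\infty$ even though $f$ is not differentiable at 0.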
Then we may apply Theorem 14.15 of Lieberman [31] to deduce the existence of a unique bounded classical solution w n; of the equation @ t w n; g n :;w n; ;Dw n; ;D 2 w n; = 0 on O; and w n; =h n on @O: 142 This implies that Lw n; 0 inO and w n; h on @O, and thus w n; (0; 0) v(0; 0). By the comparison and the stability of viscosity solutions, it follows that w n; "w , where w is the unique viscosity solution of @ t w g :;w ;Dw ;D 2 w = 0 on O; and w =h on @O: Note that g "g as # 0. Then by the comparison and then stability of viscosity solutions again we see that w "v. This implies that v(0; 0) =v(0; 0). (ii) For any > 0, we dene O :=fx2R d :jxj< 1 +g,O := [0; (1 +)T )O , and similar to (5.30), dene their closures and boundaries. Let , be smooth molliers onO andO 1 RR d S d and dene: for any 0 > 0, h (t;x) := (h ) t 1+ ; x (1+) ; (t;x)2O ; g 0 (t;x;y;z; ) := min (t 0 ;x 0 )2O fg(t 0 ;x 0 ;y;z; ) + 2 0 (jtt 0 j +jxx 0 j)g g 0 := (g 0 0); (t;x;y;z; )2O 1 RR d S d : By the uniform continuity of g, we have c( 0 ) :=kgg 0k 1 ! 0 as 0 & 0. Set g 0 :=g 0c( 0 ); and g 0 :=g 0 +c( 0 ): By our assumptions on g and h, it follows from Theorem 14.15 of Liebermann [31] that there exist v ; 0;v ; 02C 1;2 (O )\C(O ) solutions of the equations: E ; 0 :@ t vg 0 (:;v;Dv;D 2 v) = 0 in O ; and v =h on @O E ; 0 :@ t vg 0(;:;v;Dv;D 2 v) = 0 in O ; and v =h on @O ; respectively. In particular, their restriction toO are in C 1;2 (O). By comparison principle, v ; 0v ; 0. Moreover, it follows from (5.32) that: g 0(:;y + 2c( 0 );:) g 0(:;y;:) 2c( 0 ) = g 0 (:;y;:): This shows that v ; 0 + 2c( 0 ) is a classical supersolution of (E ; 0), and therefore v ; 0 + 2c( 0 ) v ; 0 v ; 0: 143 Additionally, notice that the solutions v ; 0;v ; 0 are bounded uniformly in ; 0 for ; 0 small enough. The generatorsg 0 ;g 0 have the same uniform ellipticity constants as g, and they verify the hypothesis of Theorem 14.13 of Liebermann [31] uniformly in 0 . 
Therefore $\underline{v}^{\eta,\eta_0}$ and $\overline{v}^{\eta,\eta_0}$ are Lipschitz continuous with the same Lipschitz constant for all $\eta, \eta_0$. Then, denoting $\underline{h}^{\eta,\eta_0} := \underline{v}^{\eta,\eta_0}\big|_{\partial\mathcal{O}}$ and $\overline{h}^{\eta,\eta_0} := \overline{v}^{\eta,\eta_0}\big|_{\partial\mathcal{O}}$, this implies that

$c(\eta,\eta_0) := \max\big\{ \|\underline{h}^{\eta,\eta_0}-h\|_\infty,\, \|\overline{h}^{\eta,\eta_0}-h\|_\infty \big\} \to 0$ as $\eta\to 0$, uniformly in $\eta_0$.

Now, for fixed $\varepsilon>0$, choose $\eta^0, \eta_0^0 > 0$ so that $c(\eta^0,\eta_0) < \varepsilon/4$ for all $\eta_0>0$, and $c(\eta_0^0) \le \varepsilon/4$. Then $\overline{w}^{\eta^0,\eta_0^0} := \overline{v}^{\eta^0,\eta_0^0} + c(\eta^0,\eta_0^0)$ and $\underline{w}^{\eta^0,\eta_0^0} := \underline{v}^{\eta^0,\eta_0^0} - c(\eta^0,\eta_0^0)$ are respectively a classical supersolution and a classical subsolution of (5.31) on $\mathcal{O}$. Thus $\overline{w}^{\eta^0,\eta_0^0} \ge \overline{v}$ and $\underline{w}^{\eta^0,\eta_0^0} \le \underline{v}$. Therefore,

$\overline{v} - \underline{v} \le \overline{w}^{\eta^0,\eta_0^0} - \underline{w}^{\eta^0,\eta_0^0} = \overline{v}^{\eta^0,\eta_0^0} - \underline{v}^{\eta^0,\eta_0^0} + 2c(\eta^0,\eta_0^0) \le 2c(\eta_0^0) + 2c(\eta^0,\eta_0^0) \le \varepsilon$.

Then it follows from the arbitrariness of $\varepsilon$ that $\overline{v} = \underline{v}$.

(iii) We refer to Pham and Zhang [51] for the proof in this case.

Remark 157. (i) Proposition 156 (i) allows us to deal with degenerate PPDEs. For example, in the case of the degenerate $G$-expectation with $G = G(\gamma) = \frac12\sup_{0\le\beta\le\bar\sigma}[\beta^2\gamma]$, one can compute straightforwardly that $g(\gamma) = \frac12\bar\sigma^2\gamma^+$ and $g_\varepsilon(\gamma) = \frac12\big[\bar\sigma^2\gamma^+ - 2\varepsilon\gamma^-\big]$, and thus $g$ satisfies the conditions in Proposition 156 (i). However, this proposition does not cover first-order PPDEs, which have been investigated in our accompanying paper [17], Section 8, by using a compactness argument. It would be interesting to unify the two approaches, and we shall leave this for future research.

(ii) Proposition 156 (i) deals with Hamilton-Jacobi-Bellman equations, and a similar problem has been studied by Fleming and Vermes [22, 23] in the context of stochastic control. In fact, they proved the result in Step 1 and mentioned the result in Step 2 as an open problem.

(iii) Proposition 156 (iii) does not require convexity of $G$ in $\gamma$, and thus allows us to deal with path-dependent Bellman-Isaacs equations. See Pham and Zhang [51] for its application to stochastic differential games.

Chapter 6
Viscosity solutions for the obstacle problem of PPDEs

1 Introduction to the obstacle problem

In [19], El Karoui et al.
introduced a new kind of BSDE, called reflected BSDEs, in which the solution is forced to stay above a barrier. In the Markovian framework, they proved that the value function defined through the RBSDE is the unique viscosity solution of an obstacle problem for a semilinear parabolic partial differential equation, thus extending the well-known Feynman-Kac formula to the associated variational inequalities. Our objective in this chapter is to present the paper [14], in which we adapted the definition of viscosity solutions of PPDEs given in [17] to an obstacle problem for a fully nonlinear PPDE and established the link between non-Markovian RBSDEs and the obstacle problem of PPDEs. In this regard, the chapter can be seen as a non-Markovian and nonlinear version of [19]. To achieve our objective, especially to tackle the lack of local compactness of the space of paths, we use similar ideas as in [18]. In our case, the main difficulty is to produce a sequence of "smooth" sub- and supersolutions of the obstacle problem that converge to the value functional; however, in general the solutions of obstacle problems of PDEs do not have $C^{1,2}$ regularity. To overcome this difficulty, we use a penalization approach and a change of variable, which together allow us to obtain "smooth" solutions to the obstacle problem.

1.0.1 The Generator

Let $K$ be a measurable set with its sigma-algebra $\mathcal{M}_K$, and consider two mappings

$F : \Lambda\times\mathbb{R}\times\mathbb{R}^d\times K \to \mathbb{R}$, $\sigma : \Lambda\times K \to \mathbb{S}^d$.

We consider the following generator $G : \Lambda\times\mathbb{R}\times\mathbb{R}^d\times\mathbb{S}^d \to \mathbb{R}$:   (1.1)

$G(t,\omega,y,z,\gamma) = \sup_{k\in K}\Big[ \tfrac12 \sigma(t,\omega,k)^2 : \gamma + F\big(t,\omega,y,\sigma(t,\omega,k)z,k\big) \Big]$.   (1.2)

We make the following assumptions on the data of the 2RBSDE:

Assumption 158. There exist $L_0, M_0 \ge 0$ and a modulus of continuity $\rho_0$ with at most polynomial growth verifying the following points.
(i) Boundedness: $\xi$, $h$, and $F(\cdot,0,0,\cdot)$ are bounded by $M_0$.
(ii) Assumptions on $F$ and $G$: $F(\cdot,y,z,k)$ and $G(\cdot,y,z,\gamma)$ are right-continuous under the $d_1$ metric in the sense of Definition (9).
$F(t,\omega,\cdot,\cdot,k)$ is Lipschitz continuous in $(y,z)$ with Lipschitz constant $L_0$.
(iii) Assumption on $h$: $h$ is uniformly continuous under $d_1$ with modulus of continuity $\rho_0$.
(iv) Assumption on $\xi$: $\xi$ is uniformly continuous under the $\|\cdot\|_T$ norm with modulus of continuity $\rho_0$. For all $\omega\in\Omega$, $\xi(\omega) \ge h(T,\omega)$.
(v) Assumption on $\sigma$: for all $(t,\omega)\in\Lambda$, $\inf_{k\in K}\sigma(t,\omega,k) > 0$, $|\sigma(t,\omega,k)| \le \sqrt{2L_0}$, $\sigma(t,\cdot,k)$ is Lipschitz continuous with Lipschitz constant $L_0$, and $\sigma(\cdot,k)$ is right-continuous under $d_1$.

We will also need the following additional assumption for our wellposedness results.

Assumption 159. $\sigma$ does not depend on $(t,\omega)$, and $F(\cdot,y,z,k)$ is uniformly continuous with modulus of continuity $\rho_0$.

Remark 160. Assumption 159 will only be used to prove Lemma 175; under this additional assumption the operator $G$ is uniformly non-degenerate in $\gamma$.

2 Introduction of the value functional for the obstacle problem

For $t\in[0,T]$, we denote by $\mathcal{K}^t$ the set of $\mathbb{F}^t$-progressively measurable, $K$-valued processes. Under Assumption 158, for fixed $(t,\omega)\in\Lambda$ and $k\in\mathcal{K}^t$, by the Lipschitz continuity of $\sigma$ in $\omega$, there exists a unique strong solution $X^{t,\omega,k}$ of the following equation under $\mathbb{P}^t_0$:

$X^{t,\omega,k}_s = \int_t^s \sigma^{t,\omega}\big(r, X^{t,\omega,k}, k_r\big)\, dB^t_r$, for $s\in[t,T]$.   (2.3)

Additionally, by the classical estimates on SDEs, for $(t,\omega), (t,\omega')\in\Lambda$:

$\mathbb{E}^t_0\big[ (\|X^{t,\omega,k}\|^t_T)^p \big] \le C_p$ for all $p>0$, and $\mathbb{E}^t_0\big[ (\|X^{t,\omega,k} - X^{t,\omega',k}\|^t_T)^2 \big] \le C \|\omega-\omega'\|^2_t$.   (2.4)

In the previous inequality, as will be the case in the sequel, $C$ is a constant that may change from line to line but depends only on $d, M_0, T, L_0$, and $\rho_0$. We define $\mathbb{P}^{t,\omega,k} := \mathbb{P}^t_0 \circ (X^{t,\omega,k})^{-1} \in \mathcal{P}^t_{L_0}$. Notice that Lemma 2.2 of [54] shows that there exists a mapping $\tilde{k}\in L^0(\Lambda; K)$ such that $\tilde{k}(s, X^{t,\omega,k}) = k(s, B^t)$, $ds\times\mathbb{P}^t_0$-a.s. Rewriting (2.3) under $\mathbb{P}^t_0$:

$X^{t,\omega,k}_s = \int_t^s \sigma^{t,\omega}\big(r, X^{t,\omega,k}, \tilde{k}(r, X^{t,\omega,k})\big)\, dB^t_r$, for $s\in[t,T]$.   (2.5)

Therefore, $\{\sigma^{t,\omega}_r(\tilde{k}_r)^{-1} dB^t_r\}_{r\in[t,T]}$ (recall that $\sigma^{t,\omega}_r(\tilde{k}_r) = \sigma^{t,\omega}$
(r;B t ; ~ k(r;B t )) ) is the incre- ment of a Brownian motion under P t;!;k . Hence, for xed 2T t and F t measurable and bounded random variable , one can dene (Y t;!;k s (;);Z t;!;k s (;);K t;!;k s (;)) s2[t;] solu- tion to the re ected BSDE on t , with data (F t;! s (:;:; ~ k s );h t;! ;) underP t;!;k : Y t;!;k s =(B t ) + Z s F t;! r (Y t;!;k r ;Z t;!;k r ; ~ k r )dr (2.6) Z s (Z t;!;k r ) t;! r ( ~ k r ) 1 dB t r +K t;!;k K t;!;k s ; Y t;!;k s h t;! s ; for all s2 [t;]; (K t;!;k s ) s2[t;] is increasing in s,K t;!;k t = 0 and h Y t;!;k s h t;! s i dK t;!;k s = 0: When (;) = (T;), we denote (Y t;!;k s ;Z t;!;k s ;K t;!;k s ) = (Y t;!;k s (T;);Z t;!;k s (T;);K t;!;k s (T;)): (2.7) 147 To make easier our notations, we also dene the following re ected BSDE, under P t 0 : ~ Y t;!;k s = t;! (X t;!;k ) + Z T s F t;! (r;X t;!;k ; ~ Y t;!;k r ; ~ Z t;!;k r ; ~ k(r;X t;!;k ))dr (2.8) Z T s ( ~ Z t;!;k r ) dB t r + ~ K t;!;k T ~ K t;!;k s ; ~ Y t;!;k s h t;! (s;X t;!;k ); for all s2 [t;T ]; ( ~ K t;!;k s ) s2[t;T ] is increasing in s, ~ K t;!;k t = 0 and h ~ Y t;!;k s h t;! (s;X t;!;k ) i d ~ K t;!;k s = 0: For alls2 [t;T ],Y t;!;k s and ~ Y t;!;k s areF t P t;k s measurable, soY t;!;k t and ~ Y t;!;k t are constant. Additionally, the family: ( t;! (B t );F t;! s (y;z; ~ k s );h t;! s ; t;! s ( ~ k s ) 1 dB t s ) s2[t;T ] underP t;!;k has the same distribution as the family ( t;! (X t;!;k );F t;! (s;X t;!;k ;y;z; ~ k(s;X t;!;k ));h t;! (s;X t;!;k );dB t s ) s2[t;T ] underP t 0 . So Y t;!;k t = ~ Y t;!;k t . We dene the following process which is our value functional of interest : u 0 (t;!) := sup k2K t Y t;!;k t = sup k2K t ~ Y t;!;k t ; for (t;!)2 : (2.9) 2.1 Regularity of the value functional Proposition 161. u 0 is bounded and uniformly continuous under the d 1 metric in . Proof. 
Under assumptions (158), the data of the problem veries the assumptions of [19] which gives the following a priori estimate : E t;!;k sup tsT jY t;!;k s j 2 + Z T t jZ t;!;k r j 2 dr + (K t;!;k T ) 2 ! C: 148 Additionally, for (t;!); (t;! 0 ) 2 , notice thatjj! t X t;!;k ! 0 t X t;! 0 :k jj T jj! ! 0 jj t +jjX t;!;k X t;! 0 ;k jj t T , therefore under our boundedness and regularity assumptions the estimates in [19] gives : j ~ Y t;!;k t ~ Y t;! 0 ;k t j 2 CE t 0 h j t;! (X t;!;k ) t;! 0 (X t;! 0 ;k )j 2 i +CE t 0 Z T t jF t;! (s;X t;!;k ;Y t;!;k s ;Z t;!;k s ;k s )F t;! 0 (s;X t;! 0 ;k ;Y t;!;k s ;Z t;!;k s ;k s )j 2 ds +CE t 0 " sup tsT jh t;! (s;X t;!;k )h t;! 0 (s;X t;! 0 ;k )j 2 # 1=2 CE t 0 h 2 0 (jj! 0 !jj t +jjX t;!;k X t;! 0 ;k jj t T ) i : (2.10) 0 has at most polynomial growth, denote p 0 > 0 this growth power. For xed > 0, we can estimates the dierencej ~ Y t;!;k t ~ Y t;! 0 ;k t j as follows : j ~ Y t;!;k t ~ Y t;! 0 ;k t j 2 (2.11) CE t 0 2 0 (jj! 0 !jj t +jjX t;!;k X t;! 0 ;k jj t T )1 fjjX t;!;k X t;! 0 ;k jj t T >g +CE t 0 2 0 (jj! 0 !jj t +)1 fjjX t;!;k X t;! 0 ;k jj t T g C s E t 0 jjX t;!;k X t;! 0 ;k jj t T h 1 +jj!! 0 jj 2p 0 t i +C 2 0 (jj! 0 !jj t +) C r jj! 0 !jj t h 1 +jj!! 0 jj 2p 0 t i + 2 0 (jj! 0 !jj t +) ! : If we choose := p jj!! 0 jj t , then the last line becomes a modulus of continuity 1 with at most polynomial growth. First of all, the previous estimates gives that Y t;!;k t is bounded by a constant that only depends onM 0 ;T;L 0 , and 0 . With a passage to supremum ink, we see thatu 0 is bounded. Additionally ju 0 (t;!)u 0 (t;! 0 )j sup k2K t j ~ Y t;!;k t ~ Y t;! 0 ;k t j 1 (jj!! 0 jj t ); (2.12) which show that for xed t, u 0 is uniformly continuous in ! uniformly in t. 149 Fix 0 t t 1 T . Given the uniform continuity of u 0 in ! for xed times, one can proceed as in Lemma 4.1 of [16] to obtain the following dynamic programming principle at deterministic times : u 0 (t;!) = sup k2K t Y t;!;k t (t 1 ;u 0 (t 1 ;! 
t B t )); (2.13) whereY t;!;k (t 1 ;u 0 (t 1 ;! t B t )) (denoted only byY t;!;k for simplicity at this section) is dened at (2.6). We estimate the variation in time for k2K t and underP t;!;k u 0 (t 1 ;!)u 0 (t;!) = u 0 (t 1 ;!)u 0 (t 1 ;! t B t ) Z t 1 t F t;! (r;B t ;Y t;!;k r ;Z t;!;k r ; ~ k r )dr + Z t 1 t Z t;!;k r ( t;! r ( ~ k r )) 1 dB t r K t;!;k t 1 +Y t;!;k t u 0 (t;!) We take the expectation under P t;!;k to have : u 0 (t 1 ;!)u 0 (t;!) E Pt;!;k 1 (jj!! t B t jj t 1 ) +jY t;!;k t u 0 (t;!)j + C(t 1 t) +L 0 Z t 1 t E P t;!;k h jZ t;!;k r j i dr: Finally by taking a sequencek n such thatY t;!;kn t !u 0 (t;!) and using estimates on the RBSDEs we have: u 0 (t 1 ;!)u 0 (t;!) C p t 1 t +(jj! :^t !jj t 1 ); for some modulus of continuity . We dene the following optimal stopping time forY t;!;k , D t;!;k := inffs2 [t;t 1 ] :Y t;!;k s =h t;!;k s g^t 1 : Then u 0 (t 1 ;!)Y t;!;k t = E P t;!;k h u 0 (t 1 ;!) (u 0 ) t;! t 1 + 1 fD t;!;k <t 1 g ((u 0 ) t;! t 1 h t;! D t;!;k ) Z t 1 ^D t;!;k s F t;! (r;B t ;Y r ;Z r ; ~ k r )dr # E P t;!;k h 1 (jj!! t B t jj t 1 ) + 1 fD t;!;k <t 1 g (h t;! t 1 h t;! D t;!;k ) Z t 1 s jF t;! (r;B t ;Y r ;Z r ; ~ k r )jdr 150 Recall that h is uniformly continuous hence we can bound the term (h t;! t 1 h t;! D t;!;k ) on the eventfD t;!;k < t 1 g. Therefore, we can control the right hand side uniformly in k. Finally combining all the previous results, we obtain that there is a modulus of continuity ~ 0 , which only depends on, M 0 ;L 0 ; 0 ;T , such that ju 0 (t;!)u 0 (t 0 ;! 0 )j ~ 0 (d 1 ((t;!); (t 0 ;! 0 ))): (2.14) 3 The PPDE The functional u 0 dened with the (2.7) is related, as it is the case in the Markovian case, to the following PPDE : minfLu(t;!); (uh)(t;!)g = 0; for all (t;!)2 [0;T ) ; (3.15) u(T;!) =(!) for all !2 : (3.16) 3.1 Viscosity solutions of PPDEs For any L 0 and (t;!)2 [0;T ) , and u2U, dene: A L u(t;!) := n '2C 1;2 b ( t ) : there exists h2H t such that 0 ='(t; 0)u(t;!) =S L t ('u t;! 
$- u^{t,\omega})_{\cdot\wedge\mathrm{h}}(0) \}$.   (3.17)

and, for all $u\in\overline{\mathbb{U}}$:

$\overline{\mathcal{A}}^L u(t,\omega) := \Big\{ \varphi\in C^{1,2}_b(\Lambda^t) : \text{there exists } \mathrm{h}\in\mathcal{H}^t \text{ such that } 0 = \varphi(t,0) - u(t,\omega) = \overline{\mathcal{S}}^L_t\big[(\varphi - u^{t,\omega})_{\cdot\wedge\mathrm{h}}\big](0) \Big\}$.   (3.18)

These sets are the analogues of sub/superjets in our theory. We give the following definition of viscosity solution.

Definition 162. (i) For any $L \ge 0$, we say $u\in\underline{\mathbb{U}}$ is a viscosity $L$-supersolution of PPDE (2.12) if, for any $(t,\omega)\in[0,T)\times\Omega$ and any $\varphi\in\underline{\mathcal{A}}^L u(t,\omega)$, it holds that

$u(t,\omega) - h(t,\omega) \ge 0$ and $(\mathcal{L}^{t,\omega}\varphi)(t,0) \ge 0$, or equivalently $\min\big\{(\mathcal{L}^{t,\omega}\varphi)(t,0),\, u(t,\omega) - h(t,\omega)\big\} \ge 0$.

(ii) We say $u\in\overline{\mathbb{U}}$ is a viscosity $L$-subsolution of PPDE (2.12) if, for any $(t,\omega)\in[0,T)\times\Omega$ such that $u(t,\omega) - h(t,\omega) > 0$ and any $\varphi\in\overline{\mathcal{A}}^L u(t,\omega)$, it holds that

$(\mathcal{L}^{t,\omega}\varphi)(t,0) \le 0$.

(iii) We say $u$ is a viscosity subsolution (resp. supersolution) of PPDE (2.12) if $u$ is a viscosity $L$-subsolution (resp. $L$-supersolution) of PPDE (2.12) for some $L \ge 0$.

(iv) We say $u$ is a viscosity solution of PPDE (2.12) if it is both a viscosity subsolution and a viscosity supersolution.

Remark 163. For $0 \le L_1 \le L_2$ and $(t,\omega)\in[0,T)\times\Omega$, we have $\underline{\mathcal{E}}^{L_2}_t[\cdot] \le \underline{\mathcal{E}}^{L_1}_t[\cdot]$ and $\overline{\mathcal{A}}^{L_2}u(t,\omega) \subseteq \overline{\mathcal{A}}^{L_1}u(t,\omega)$. If $u$ is a viscosity $L_1$-subsolution, then $u$ is a viscosity $L_2$-subsolution. The same statement also holds for supersolutions.

Remark 164. The viscosity solution property is local in the following sense: for any $(t,\omega)\in[0,T)\times\Omega$, to check the viscosity property of $u$ at $(t,\omega)$, it suffices to know the value of $u^{t,\omega}$ on $[t, \mathrm{h}^t_\delta]$ for an arbitrarily small $\delta > 0$. The hitting times $\mathrm{h}^t_\delta$ are our tool of localization.

Remark 165. We have some flexibility in the choice of the set of test functionals. All the results in this paper remain true if we replace $\overline{\mathcal{A}}^L u$ with $\overline{\mathcal{A}}'^L u$, where

$\overline{\mathcal{A}}'^L u(t,\omega) := \Big\{ \varphi\in C^{1,2}(\Lambda^t) : \exists\, \mathrm{h}\in\mathcal{H}^t \text{ such that, for all } \tau'\in\mathcal{T}^t_+,\ (\varphi - u^{t,\omega})_t(0) = 0 < \underline{\mathcal{E}}^L_t\big[(\varphi - u^{t,\omega})_{\tau'\wedge\mathrm{h}}\big] \Big\}$.   (3.19)

4 Consistency

Proposition 166.
Assume u2 C 1;2 (;R) then u is a viscosity subsolution(respectively, supersolution) of the PPDE (2.12) if and only if u is a classical subsolution(respectively, supersolution) of the same equation. Proof. We only prove the subsolution case. A similar proof also holds for supersolutions. Assume thatu is a viscosityL-subsolution and take (t;!)2 [0;T ) andu(t;!)h(t;!)> 0. Choosing =:u and h :=T2H t , clearly 2A L u(t;!) soL t;! (t;!) =Lu(t;!) 0. Therefore: minfLu(t;!); (uh)(t;!)g 0; for all (t;!)2 For the reverse implication, assume thatu is a classical subsolution and it is not a viscosity subsolution, a fortiori it is not a viscosity L 0 -subsolution. Therefore, there exist (t;!)2 [0;T ) and 2A L 0 u(t;!) with the associated h2H t such that (uh)(t;!) > 0 and c :=L t;! (t; 0) > 0. The processes G(:; t ;@ ! t ;@ !! ' t ); : ;u t;! : are right continuous under the d 1 metric, so there exist > 0 such that for (s; ~ !)2 [t; h t ] t , the following inequalities holds: jG(t;!; t ;@ ! t ;@ !! t )G(s;! t ~ !; t ;@ ! t ;@ !! t )jc=4; (4.20) j@ t t @ t s j +L 0 j t s j +M 0 L 0 j@ ! t @ ! s j +L 0 j s u t;! s j +L 0 j@ !! t @ !! s jc=4 ThenL t;! s L 0 j s u t;! s j c=2 for s2 [t; h t ]. u is a subsolution of the PPDE (2.12) and the data is right continuous underd 1 , so we can choose a constant processk2K t and > 0 small enough such that for all s2 [t; h t ] : @ t u t;! s + 1 2 @ !! u t;! s : ( s (k s )) 2 +F t;! s (u t;! s ; s (k s )@ ! u t;! s ;k s )c=4: (4.21) 153 Notice that for k constant the equation (2.3) has strong solutions. Applying It^ o's formula underP t;!;k : 0 = (u t;! ) t = (u t;! ) h t Z h t t @ t (u t;! ) s + 1 2 @ !! (u t;! ) s : s (k s ) 2 ds Z h t t @ ! (u t;! ) s dB t s (u t;! ) h t + Z h t t L t;! s ds + Z h t t c=4ds + Z h t t r s (u t;! ) s + s s (k s )@ ! (u t;! ) s ds Z h t t @ ! (u t;! ) s dB t s For somer s and s progressively measurabler2R,2R d , withjr s jL 0 andja s jL 0 . Therefore : 0 (u t;! ) h t d + Z h t d t L t;! s +r s (u t;! 
) s ds (h t d t)c 4 Z h t d t @ ! (u t;! ) s (dB t s s (k s ) s ds) (u) h t d + (h t d t)c 4 Z h t d t @ ! (u t;! ) s (dB t s s (k s ) s ds) Notice that by Girsanov's theorem, there exists P2P t L 0 equivalent to P t;!;k such that the last integral is a martingale under P. Therefore, we have the following inequalities that contradicts the assumption that 2A L 0 u(t;!) : 0> c 4 E P [h t t]E P [(u) h t ]E L 0 t [(u) h t ]: 5 A change of variable formula We will need the following change of variable formula in our subsequent analysis. 154 Proposition 167. Let C;; 2 R be constants and u 2 U then u is a viscosity L- subsolution of the PPDE (2.12) with data (G;;h) if and only if u 0 t := e t u t +Ce t t is a viscosity L-subsolution of the PPDE (2.12) with data (G 0 ; 0 ;h 0 ) where : G 0 (t;!;y;z; ) :=Ce t (1 + ()t)y +e t G(t;!;e t yCe ()t t;e t z;e t ); 0 :=e T +Ce T T; h 0 t :=e t h t +Ce t t: The same statement holds also for L-supersolutions. Proof. We will only prove the subsolution case. Assume that u is a viscosityL-subsolution with data (G;;h). We want to show that u 0 is a viscosity L-subsolution with data (G 0 ; 0 ;h 0 ). Take (t;!)2 [0;T ) such that u 0 (t;!)h 0 (t;!) > 0 (notice that this is equivalent tou(t;!)h(t;!)> 0) and 0 2A L u 0 (t;!), with the corresponding hitting time h2H t . For xed "> 0, we dene " s :=e s 0 s Ce ()s s +"(st): Notice that 0 t =u 0 t =e t u t +Ce t t and for st: " s u t;! s e t ( 0 s (u 0 s ) t;! ) = (e s e t )(( 0 s Ce s s) ( 0 t Ce t t)) + (e (st) 1)(u t;! s u t;! t ) +(e (ts) +e (st) 2)u t;! t +"(st) There exists a constantK > 1 which may depend on;t;T but not ins2 (t;T ], and"> 0 such that : 0j(e s e t )jK(st) 0e (st) 1K(st) 0je (ts) +e (st) 2jK(st) 2 155 Additionallyu2U and is continuous in under the d 1 metric, so there exist (depending on ") such that on [t; h t ] : R 0 := 1_ sup s u s <1; u t;! s u t;! t " 3KR 0 j( 0 s Ce s s) ( 0 t Ce t t)j " 3KR 0 0st " 3KR 0 : Combining the previous inequalities : " s u t;! 
s e t ( 0 s (u 0 s ) t;! )"(st) +"(st) 0: Then for all 2T t , such that h t it holds that : " u t;! e t ( 0 (u 0 ) t;! ); therefore : E L t [ " u t;! ]e t E L t [ 0 (u 0 ) t;! ] 0 = " t u t;! t : Which shows that " 2Au(t;!), and u(t;!)h(t;!) > 0, by the denition of viscosity subsolutions, 0L t;! " . Taking the limit as " goes to 0, we obtain : 0L t;! =@ t 0 t G 0 (t;!; 0 t ;@ ! 0 t ;@ !! 0 t ): Remark 168. Notice that after the change of variable the function G 0 can be written as : G 0 (t;!;y;z; ) = sup k2K (t;!;k) 2 : 2 +F 0 (t;!;y;(t;!;k)z;k) where F 0 (t;!;y;z;k) :=e t F (t;!;e t yCe ()t t;e t z;k)Ce t (1 + ()t)y 156 We will make the following choices for the constants: = L 0 + 1, = 0 and C = 2e (L 0 +1)T ( + 1)(M 0 + 1). With this change of variable, the data of the problem ver- ify the following properties: G 0 (t;!;y;z; )G 0 (t;!;y +;z; ) +; for all > 0;and any (t;!;y;z; ); (5.22) F 0 (t;!;y;z;k)F 0 (t;!;y +;z;k) +; for all > 0;and any (t;!;y;z;k); (5.23) F 0 (t;!;h 0 (t;!); 0;k) =Ce t h t +e t F (t;!;h(t;!); 0;k) 0 for all (t;!)2 : When needed, we will assume, thatF;G andh verify (5.22). This change of variable formula will be useful at subsection (2.2.1). 6 Viscosity solution property of the value functional Before starting to study the viscosity solution property ofu 0 , we give the following dynamic programming principal on random times. Its proof is similar to the proof of Theorem 4.3 of [16]. With the notation introduced at (2.6), for all 2 T t , the following dynamic programming at stopping times holds u 0 (t;!) = sup k2K t Y t;!;k t (;u 0 (;! t B t )) (6.24) Theorem 169. Under the assumptions (158) on the data of the problem, the value func- tional u 0 dened at (2.9) is viscosity solution of the PPDE (2.12). 
6.1 Subsolution property of the value functional We assume without loss of generality that G and F are increasing in y: We reason by contradiction by assuming that u 0 is not a viscosity L 0 -subsolution, so there exist (t;!)2 [0;T ) such thatu 0 (t;!)>h(t;!), and2A L 0 u 0 (t;!), with the associated h2H t and verifying: c = minfL t;! (t; 0);u 0 (t;!)h(t;!)g> 0: 157 Without loss of generality we will assume that (t;!) = (0; 0). Recall that u 0 and h are uniformly continuous. Therefore there exist > 0 such that for all s2 [0; h ], ju 0 0 u 0 s jc=4; jh 0 h s jc=4: Denote, for s2 [0;T ] and k2K 0 : Y k s := s Y 0;0;k s ; Z k s :=@ ! s ( s (k s )) 1 Z 0;0;k s G s (u 0 ;@ ! ) :=G s (u 0 s ;@ ! s ;@ !! (s;B)); F k s (y;z) :=F s (y;z; ~ k s ); By the continuity of and the right continuity of G, we can take > 0 small enough, to have for s2 [0; h ]: @ t s G s (u 0 s ;@ ! s ;@ !! s )c=2: G is dened as the supremum in (1.1), then for all k2K 0 , the following inequality holds : @ t s 1 2 @ !! s : s (k s ) 2 F s (u 0 s ; s (k s )@ ! s ; ~ k s )c=2: For k2K 0 , we apply functional It^ o's formula to and use the denition of Y 0;0;k in (2.7) to obtain underP 0;0;k : d(Y k ) s = [@ t s + 1 2 @ !! s : s (k s ) 2 +F k s (u 0 ;@ ! s )]ds +[F k s (Y 0;0;k s ;Z 0;0;k s )F k s (u 0 s ;@ ! s )]ds + (Z k s ) dB s +dK 0;0;k s : Therefore for all k,P 0;0;k -a.s. : Y k h Y k 0 = Z h d t [@ t s + 1 2 @ !! (s;B t ) : s (k s ) 2 +F k s (u 0 ;)]ds + Z h d 0 (F k s (Y 0;0;k s ;Z 0;0;k s )F k s (u 0 s ;))ds + Z h d 0 (Z k s ) dB s +K 0;0;k h d ch 2 + Z h t (F k s (Y 0;0;k s ;Z 0;0;k s )F k s (u 0 s ;))ds + Z h t (Z k s ) dB s +K 0;0;k h 158 We have assumed that F is increasing in y and u 0 s Y 0;0;k s therefore for all k,P 0;0;k -a.s.: (u 0 ) h (u 0 Y 0;0;k ) 0 = (u 0 ) h (Y 0;0;k ) 0 (Y 0;0;k ) h (Y 0;0;k ) 0 =Y k h Y k 0 ch 2 + Z h 0 (F s (u 0 s ;Z 0;0;k s )F k s (u 0 s ;))ds + Z h 0 (Z k s ) dB s +K 0;0;k h = ch 2 + Z h 0 (Z k s ) (dB s + s (k s ) s ds) +K t;!;k h t wherej s jL 0 . 
By the denition ofu 0 there exists a sequence k n 2K 0 such thatY 0;0;k n 0 "u 0 (0; 0) asn goes to innity, and Y 0;0;k n 0 u 0 0 c=4 for all n. Dene the optimal stopping time D k for Y 0;0;k by D k = inffs 0 :Y 0;0;k s =h s g^T . We can write Y 0;0;k 0 =E P 0;0;k " Z D k ^h 0 F k r (Y 0;0;k r ;Z 0;0;k r )dr +h D k1 fD k <h g +u 0 h 1 fD k h g # : Using the uniform bounds on (Y 0;0;k ;Z 0;0;k ), we have that E P 0;0;k " Z D k ^h 0 jF k r (Y 0;0;k r ;Z 0;0;k r )jdr # C p : uniformly in k. We choose small such that the previous term is dominated by c 4 . Recall also that h D kn u 0 0 c 4 onfD kn < h g and u 0 h u 0 0 c 4 . Then, for all n, P 0;0;kn (D kn < h ) = 0. Therefore K 0;0;k n h = 0; P 0;0;k n a.s. for all n. Injecting this into the previous inequalities, the following holds under P 0;0;k n : (u 0 ) h + ch 2 (u 0 Y 0;0;kn ) 0 + Z h d 0 (Z kn ) s (dB t s + s (k n s ) s ds): There exists a probability P n 2P L 0 equivalent to P 0;0;k n such that the previous integral is aP n martingale. Taking the expectation E n : E n [(u 0 ) h ] +E n [ ch 2 ] (u 0 Y 0;0;k n ) 0 2A L 0 (0; 0) implies that 0E L 0 (u 0 ) h E n [(u 0 ) h ]. Therefore : 0< c 2 E L 0 [h ]E n [ ch 2 ] (u 0 Y 0;0;k n ) 0 : 159 Taking the limit as n goes to innity we arrive to the contradiction 0< c 2 E L 0 [h ] 0. 6.2 Supersolution property of the value functional Without loss of generality, we can assume that F and G are decreasing in y: We will again reason by contradiction. Assume that u 0 is not a viscosity supersolution. A fortiori, it is not a viscosity L 0 -supersolutions. So there exist (t;!)2 [0;T ) , and2 A L 0 u 0 (t;!) with the associated h2H t such thatc := min(L t;! (t; 0);(t; 0)h(t;!))< 0. Notice that 0 =(t; 0)u 0 (t;!) so (t; 0)h(t;!). Thereforec =L t;! (t; 0). Without loss of generality we assume (t;!) = (0; 0). Similarly to the previous case there exist > 0, such that for s2 [0; h ] it holds that : @ t s +G s (u 0 ;@ ! 
)c=2: By the denition of G (in (1.1)) and the right continuity of the processes involved, there exists a constant processk 0 2K 0 such that by taking> 0 small enough, for alls2 [0; h ] the following inequality holds : @ t s + 1 2 @ !! s : s (k 0 s ) 2 +F k 0 s (u 0 s ; s (k 0 s )@ ! s )c=3: We use (6.24) with = h and denote (Y;Z;K) := (Y 0;0;k 0 (h ;u 0 h );Z 0;0;k 0 (h ;u 0 h );K 0;0;k 0 (h ;u 0 h )) and with the obvious modications of the notations of the subsolution case, under P 0;0;k 0 we have: d(Y) s = h (@ t s + 1 2 @ !! s : s (k 0 s ) 2 +F k 0 s (u 0 s ; s (k 0 s )@ ! s ) i ds + h F k 0 s (Y s ;Z s )F k 0 s (u 0 s ; s (k 0 s )@ ! s ) i ds + (Z k 0 s ) dB s +dK 0;0;k 0 s c 6 ds + h F k 0 s (u 0 s ;Z s )F k 0 s (u 0 s ; s (k 0 s )@ ! s ) i ds + (Z k 0 s ) dB s c 6 ds + (Z k 0 s ) (dB s + s (k 0 s ) s ds) for somej s jL 0 . 160 Therefore underP 0;0;k 0 : (u 0 ) h (u 0 Y) 0 = (u 0 ) h + (u 0 Y) h (u 0 Y) 0 = (Y) h (u 0 Y) 0 = (Y) h (Y) 0 ch 6 + Z h 0 (Z k 0 s ) (dB s s (k 0 s ) s ds) Recall that the DPP (6.24) gives (u 0 Y) 0 0 therefore : (u 0 ) h ch 6 + Z h 0 (Z k 0 s ) (dB s s (k 0 s ) s ds) Similarly to the subsolution case, there exist P2P L 0 , equivalent to P 0;0;k 0 such that the last integral is aP martingale and by assumption 2A L 0 u 0 (0; 0). Therefore 0E L 0 [(u 0 ) h ]E P [(u 0 ) h ] c 6 E P [h] c 6 E L 0 [h]> 0 which is impossible. 7 Partial comparison Following the same method as [18], we will rst prove a weaker version of the comparison principle, when, one of the functionals is "smoother". Then we will extend it to general sub/supersolution. In our case the 2RBSDE provides us with a representation formula, therefore our set of "smoother" functionals is simpler than the one in [18]. Another dierence comes from our denition of subsolutions that is only required when the functional does not touch the barrier. Except these points the proofs are the same as the ones in [18]. We dene the following classes of "smoother" processes. Denition 170. 
Let u2L 0 (), we say that u is in C 1;2; ()(respectively, C 1;2;+ ()) if: (i) u2U(respectively u2U), (ii) There exists a sequence of hitting timesfh i g i2N , such that 0 = h 0 h 1 ::: and for all ! the setfi2N : h i (!)<Tg is nite. (ii)For all (t;!)2 , t < T , and i such that h i (!) t < h i+1 (!), u t;! 2 C 1;2 ( t (h t;! i+1 )), where t (h t;! i+1 ) :=f(s; ~ !)2 t : h t;! i+1 (~ !)>sg. 161 Theorem 171. Let u 1 2U and u 2 2U be respectively a viscosity subsolution and a super- solution of (2.12) such that for all ! 2 , u 1 (T;!) u 2 (T;!). Assume further that u 1 2C 1;2; or u 2 2C 1;2;+ , then for all (t;!)2 u 1 (t;!)u 2 (t;!): Proof. To avoid repeating same arguments as in [18], we will use the same notations is in the proof of partial comparison in [18]. We will only point out the dierences. Remark 5.22 allows us, without loss of generality, to assume that F is non-increasing in y. We will prove the statement at (t;!) = (0; 0), it is also valid for all intermediate (t;!)2 . Dene ^ u :=u 1 u 2 and denote byfh i g i2N the stopping times given by (170). We will rst prove that ^ u + h i (!)E L h i (!) h (^ u + h i+1 ) h i (!);! i : We only prove the inequality for i = 0, the proof is valid for all i. Assume on the contrary that 2Tc := ^ u + 0 E L 0 ^ u + h 1 > 0: and dene X2U by : X : !R X(t;!) := (^ u) + (t;!) +ct: and ^ X :=X1 [0;h 1 ) +X h 1 1 [h 1 ;T ] , Y :=S L [ ^ X], := inffs 0 :Y t = ^ X t g: Similarly as in [18], there exists ! 2 such that t = (! )< h 1 (! ) and 0< (u 1 u 2 ) + t (! ) = (u 1 u 2 ) t (! ): (7.25) X t ;! 2U t therefore there exists > 0 such that for all s2 [t ; h t ] it holds that (u 1 u 2 ) t ;! 0. So we can write X t ;! t = (u 1 u 2 ) t ;! t +ct on [t ; h t ]: There are 2 cases to treat. Assume that u 2 2C 1;2;+ . Then, by denition of C 1;2;+ , there exist i such that h i (! ) t < h i+1 (! ) and t := (u 2 ) t ;! t ct2C 1;2 ( t (h i+1 )). By taking > 0 smaller to have h t h t ;! i+1 then for all 2T t we have : (u 1 ) t ;! t t =Y t (! 
)E L t [X t ;! ^h t ]; 162 which shows that2Au 1 (t ;! ). Additionallyu 2 (t ;! )h(t ;! ) 0, so the inequality (7.25) gives that (u 1 ) t ;! t > h(t ;! )(this point is the only dierence between our proof and the proof in [18]) and by viscosity subsolution property of u 1 : 0@ t (t ;! )G(t ;! ;u 1 (t ;! );@ ! (t ;! );@ !! (t ;! )) =@ t u 2 (t ;! ) +cG(t ;! ;u 1 (t ;! );@ ! u 2 (t ;! );@ !! u 2 (t ;! )) @ t u 2 (t ;! ) +cG(t ;! ;u 2 (t ;! );@ ! u 2 (t ;! );@ !! u 2 (t ;! )) > 0 which is impossible. Assume that u 1 2C 1;2; . This case is the same as the one in [18]. In conclusion, ^ u + h i (!)E L h i (!) h (^ u + h i+1 ) h i ;! i : Then by the Lemma 5.2 of [18](which only depends on regularity of u 1 ;u 2 and not on their viscosity solution properties), we have that for all P2P L , it holds that E P ^ u + h i E L 0 h ^ u + h i+1 i ; By taking the supremum in P and taking into account the positive sign of the possible jumps of ^ u we have that ^ u + 0 E L 0 ^ u + T = 0 which completes the proof. 8 Stability In this section, we will prove an extension of Theorem 5.1 of [17] to the PPDE (2.12). 163 Theorem 172. Fix L > 0 and for " > 0, let (G " ;h " ; " ) be a family of data verifying assumptions (158) with the same constantsM 0 ;L 0 and 0 andu " anL-subsolution of (2.12). Assume that as " goes to 0 the following locally uniform convergences hold : for all (t;!;y;z; )2 RR d S d ; there exists such that : (8.26) (G " ) t;! !G t;! ; (h " ) t;! !h t;! ; ( " ) t;! ! t;! ; (u " ) t;! !u t;! ; uniformly on O t;!;y;z; := n (s;! 0 ;y 0 ;z 0 ; 0 )2 t RR d S d : d t 1 (s;! 0 ); (t; 0) +jyy 0 j +jzz 0 j +j 0 j o ; Then u is a viscosity L-subsolution of the PPDE (2.12) with data (G;h;). Remark 173. We are not able to prove the stability result when L depends on ". Remark 174. Except the condition u(t;!) h(t;!), our denition of viscosity superso- lution is the same as the one given in [18]. 
Therefore their stability result for viscosity supersolutions can directly be applied to the PPDE (2.12). Proof. We use the same notation as in [17] and only point out the differences. We prove the viscosity subsolution property at $(0,0)$. We assume that $u(0,0) > h(0,0)$, $\varphi\in\mathcal{A}^L u(0,0)$, with the associated $h\in\mathcal{H}$. The main difference with the proof of Theorem 5.1 in [17] is that we need to take $\varepsilon^* > 0$ small enough to have $u^\varepsilon_s > h^\varepsilon_s$ for $s\in[0,h]$, for all $0<\varepsilon<\varepsilon^*$. Then we have $\varphi^\varepsilon\in\mathcal{A}^L u^\varepsilon(t_\varepsilon,\omega_\varepsilon)$, and with our choice of $\varepsilon$ the process $u^\varepsilon$ does not touch the barrier $h^\varepsilon$. Therefore we can use the viscosity subsolution property of $u^\varepsilon$ for the PPDE with data $(G^\varepsilon, h^\varepsilon, \xi^\varepsilon)$ to obtain equation (5.3) in [17] and conclude. 9 Comparison Our objective in this section is to extend the partial comparison result. We carry out the proof in a similar way as in [18]; for $0\leq t_1 < t_2\leq T$, $\xi\in L^0(\mathcal{F}_{t_2})$ and $\omega\in\Omega$, define the following sets: $\overline{D}(t,\omega) := \{\varphi\in C^{1,2,+}(\Lambda^t) : \min\{\mathcal{L}^{t,\omega}\varphi_s,\ \varphi_s - h^{t,\omega}_s\}\geq 0,\ s\in[t,T],\ \varphi_T\geq\xi^{t,\omega}\}$, $\underline{D}(t,\omega) := \{\psi\in C^{1,2,-}(\Lambda^t) : \min\{\mathcal{L}^{t,\omega}\psi_s,\ \psi_s - h^{t,\omega}_s\}\leq 0,\ s\in[t,T],\ \psi_T\leq\xi^{t,\omega}\}$, and the processes: $\overline{u}(t,\omega) := \inf\{\varphi(t,0) : \varphi\in\overline{D}(t,\omega)\}$, (9.27) $\underline{u}(t,\omega) := \sup\{\psi(t,0) : \psi\in\underline{D}(t,\omega)\}$. (9.28) Lemma 175. Under Assumptions (158), (159), the equality $\underline{u} = \overline{u}$ holds. Proof. The proof of this lemma is very technical and requires the introduction of various notations. The construction of the smooth approximating subsolutions and supersolutions is the subject of Section 2 in the Appendix. In Section 3 of the Appendix, we prove the required regularity of these approximating sequences. Theorem 176. Assume (158) and (159), and let $u_1\in\underline{U}$ (respectively, $u_2\in\overline{U}$) be a viscosity subsolution (respectively, supersolution) of (2.12) such that $u_1(T,\omega)\leq\xi(\omega)\leq u_2(T,\omega)$ for all $\omega\in\Omega$. Then $u_1(t,\omega)\leq u_2(t,\omega)$ for all $(t,\omega)\in\Lambda$. Proof. For all $(t,\omega)\in\Lambda$, and $\varphi$, $\psi$ belonging respectively to $\overline{D}(t,\omega)$ and $\underline{D}(t,\omega)$, the partial comparison result gives $u_1(t,\omega)\leq\varphi(t,0)$ and $\psi(t,0)\leq u_2(t,\omega)$.
We take the supremum in and the inmum in to haveu 1 (t;!)u(t;!) andu(t;!)u 2 (t;!). The lemma 175 gives the the equality u(t;!) =u(t;!), therefore : u 1 (t;!)u 2 (t;!) 165 Chapter 7 Appendix 1 Proof of Proposition 152 In this section, we prove Proposition 152. We follow the idea in [17] Proposition 7.5. However, as pointed out in [17] Remark 7.7, due to the fully nonlinearity the arguments here are much more involved. We shall divide the proof into several lemmas. By Proposition 110, we shall always assume without loss of generality thatG is strictly decreasing iny, i.e. (7.44). We also introduce: g 0 (z; ) := sup jjL 0 ;jj p 2L 0 [z + 1 2 2 : ]; g(y;z; ) :=g 0 (z; ) +L 0 jyj +C 0 ; g 0 (z; ) := inf jjL 0 ;jj p 2L 0 [z + 1 2 2 : ]; g(y;z; ) :=g 0 (z; )L 0 jyjC 0 : (1.1) By Assumption 138, it is clear that gGg: (1.2) We start with some estimates for viscosity solutions of PDE (2.6). Lemma 177. Let Assumptions 138 and 141 hold true. Let h i : @O " t ! R be continuous, and v i be the viscosity solution of the PDE (E) t;! ";0 with boundary condition h i , i = 1; 2. Then, denoting v :=v 1 v 2 , h :=h 1 h 2 , v(s;x)E L 0 s (h) + (h;x +B s h ) ; where h :=T^ inffrs :jx +B s r j ="g: (1.3) Proof. By standard results, or see [17] Proposition 4.8 for path dependent case, the function w(s;x) :=E L 0 s h (h) + (h;x +B s h ) i is a viscosity solution of the nonlinear PDE: @ t wg 0 (Dw;D 2 w) = 0 on O " t ; and w = (h) + on @O " t : 166 Let K be a smooth nonnegative kernel with unit total mass. For all > 0, we dene the molication w := wK of w. Then w is smooth, and it follows from a convexity argument in Krylov [29] that w is a classical supersolution of @ t w g 0 (Dw ;D 2 w ) 0 onO " t ; w = (h)K on @O " t : (1.4) We claim that ~ w +v 2 is a supersolution of the PDE (E) t;! 
";0 , where ~ w :=w +kw (h) + k L 1 (@O " t ) : (1.5) Then, noting that ~ w +v 2 =w +h 2 +kw (h) + k L 1 (@O " t ) h 1 =v 1 on@O " t , we deduce from the comparison result of Proposition 142 (i) that ~ w +v 2 v 1 onO " t . Sending& 0, this implies that vw, which is the required result. It remain to prove that ~ w +v 2 is a supersolution of the PDE (E) t;! ";0 . Let (t 0 ;x 0 )2O " t , 2C 1;2 (O " t ) be such that 0 = ( ~ w v 2 )(t 0 ;x 0 ) = max( ~ w v 2 ). Then, it follows from the viscosity supersolution property of v 2 that L t;! ( ~ w )(t 0 ;x 0 ) 0. Hence, at the point (t 0 ;x 0 ), by (7.44) and (1.4) we have L t;! L t;! L t;! ( ~ w ) = @ t w g t;! (:;;D;D 2 ) +g t;! :; ~ w ;D(w );D 2 (w ) @ t w g t;! (:;;D;D 2 ) +g t;! :;;D(w );D 2 (w ) g 0 (Dw ;D 2 w )Dw :D 2 w 0; wherejjL 0 andj jL 0 , thanks to Assumption 138. This proves (1.5). Recall (2.9) and the ! n dened right after it. For n 2 O " n and (t;x)2O " tn , dene h i := h ";n;t;x i as follows: h 0 :=t, and h 1 := inffst :jx +B t s j ="g^T; h i+1 :=fs> h i :jB t s B t h i j ="g^T; i 1; " i ( n ;t;x) := n ; (h 1 ;x +B t h 1 ); (h 2 ;B t h 2 B t h 1 ); ; (h i ;B t h i B t h i1 ) : (1.6) It is clear that " i ( n ;t;x)2O " n+i whenever h i <T . Lemma 178.fh ";n;t;x m ;m 0g satises the requirements of Denition 137 (i) - (ii), with E m j = t in (ii). 167 Proof. For notational simplicity, we omit the superscripts ";n;t;x . It is clear that h hm;! m+1 2 H hm(!) whenever h m (!) < T . Next, if h m (!) < T for all m, thenjB t h m+1 B t hm j(!) = " for all m. This contradicts with the fact that ! is (left) continuous at lim m!1 h m (!), and thus h m (!) =T when m is large enough. Moreover, for each m, fh m <TgfjB t h i+1 B t h i j =";i = 1; ;m 1gf m1 X i=1 jB t h i+1 B t h i j 2 (m 1)" 2 g Then, for any L> 0, C L t [h m <T ] 1 (m 1)" 2 E L t h m1 X i=1 jB t h i+1 B t h i j 2 i CL 2 (m 1)" 2 ! 0 as m!1: (1.7) Similarly one can show that lim m!1 C L s [h s;! m <T ] = 0 for any (s;!)2 t . 
Finally, for !; ~ !2 and using the notation in Denition 137 (ii), we have h m+1 (! hm(!) ~ !) =T^ infft h i (!) :j~ ! thm(!) j ="g =T^ [h m (!) + ~ h(~ !)] where ~ h(~ !) := infft :j~ ! t j = "g is independent of !. Then, given h n (!) h n (! 0 ), (1.1) follows immediately. Recall (2.9) and denote: O " n := n n = (t i ;x i ) 0in : 0 =t 0 t n T;x 0 = 0;jx i j" for all 1in o : (1.8) Under Assumption 144, clearly one may extend the mapping n 2 O " n ! (! n ) continu- ously to the compact set O " n , and we shall still denote it as (! n ) for all n 2O " n . Lemma 179. Let Assumptions 138, 141, 143, and 144 hold true. Then, there exists a sequence of continuous functions " n : ( n ; (t;x))2O " n+1 !R, bounded uniformly in (";n), such that: " n ( n ;:) is a viscosity solution of (E) tn;! n ";0 ; " n ( n ;t;x) = ! n;(t;x) if t =T; " n ( n ;t;x) = " n+1 ( n ; (t;x);t; 0); ifjxj =": (1.9) Proof. We proceed in two steps. Step 1. We rst prove the lemma in the cases G = g and G = g as introduced in (1.1). 168 Indeed, as in [17] Section 7 for semilinear PPDEs, in these cases we may have explicit representation for the required functions. For any N, denote " N;N ( N ;t N ; 0) :=(! N ); which is continuous for N 2 O " N , thanks to Assumption 144. For n = N 1; ; 0, := " N;n ( n ;) is the unique viscosity solution of the PDE: @ t g(;D;D 2 ) = 0 inO " tn ; (t;x) = " N;n+1 ( n ; (t;x);t; 0) on @O " tn : (1.10) Then clearly " N;n ( n ;t;x) are uniformly bounded and continuous in all variables ( n ;t;x). Following the arguments in our accompanying paper [17] Proposition 4.8, we can easily see that the above PDE has following representation of its viscosity solution: " N;n ( n ;t;x) := sup b2B t L 0 E L 0 t h e R h Nn t brdr ! " Nn (n;t;x) +C 0 Z h Nn t e R s t brdr ds i ; (1.11) where B t L 0 := n b2L 0 ( t ) :jbjL 0 o : Now for any n 2O " n and (t;x)2O " tn , dene " n ( n ;t;x) := sup b2B t L 0 E L 0 t h e R T t brdr lim i!1 ! 
" i (n;t;x) +C 0 Z T t e R s t brdr ds i : Then, by (1.7), j " n ( n ;t;x) " N;n ( n ;t;x)j CC L 0 tn h Nn <T C (Nn 1)" 2 ! 0 as N!1: This implies that " N;n ( n ;t;x) are uniformly bounded, uniformly in (";N;n), and are con- tinuous in all variables ( n ;t;x). Moreover, by stability of viscosity solutions we see that " n ( n ;) is the viscosity solution of PDE (1.10) inO " tn with boundary condition: " n ( n ;T;x) = ! n;(T;x) ; jxj"; " n ( n ;t;x) = " n+1 ( n ; (t;x);t; 0); jxj =": Similarly we may dene from g the following " n satisfying the corresponding properties: " n ( n ;t;x) := inf b2B t L 0 E L 0 t h e R T t brdr lim i!1 ! " i (n;t;x) +C 0 Z T t e R s t brdr ds i : 169 Step 2. We now prove the lemma for G. Given the construction of Step 1, dene: ";m m ( m ;t;x) := " m ( m ;t;x); ";m m ( m ;t;x) := " m ( m ;t;x); m 1: By Proposition 142 (ii), for n = m 1; ; 0, we may dene ";m n and ";m n as the unique viscosity solution of the PDE (E) tn;! n ";0 with boundary conditions ";m n = ";m n+1 and ";m n = ";m n+1 on @O " tn . Note that, for (t;x)2@O " tm , ";m m ( m ;t;x) = ";m+1 m+1 ( t;x m ;t; 0); ";m m ( m ;t;x) = ";m+1 m+1 ( t;x m ;t; 0): By the comparison result of Proposition 142 (i), we also have that ";m m ( m ;:) ";m+1 m ( m ;:) ";m+1 m ( m ;:) ";m m ( m ;:) in O " tm ; and therefore, by the same comparison argument: ";m n ( n ;:) ";m+1 n ( n ;:) ";m+1 n ( n ;:) ";m n ( n ;:) in O " tn ; for all nm:(1.12) Denote ";m n := ";m n ";m n . For any n and any (t;x)2O " tn , recall the notations in (1.6). Applying Lemma 177 repeatedly, and following similar but much easier arguments as those in Steps 2 and 3 of Proposition 128, we see that: j ";m n ( n ;t;x)j E L 0 t h ";m m mn ( n ;t;x); h mn ; 0 i : Note that ";m n ( n ;t;x) = 0 when t =T . Then, by (1.7) again, j ";m n ( n ;t;x)j CC L 0 t h mn <T C (mn 1)" 2 ! 0 as m!1: Together with (1.12), this implies the existence of " n such that ";m n & " n ; ";m n % " n , as m!1. 
Clearly " n are uniformly bounded and continuous. Finally, it follows from the stability of viscosity solutions that " n satises (1.9). We now dene h " i := h ";(0;0);(0;0) i , namely h " 0 := 0; and h " n+1 :=T^ inf t h " n :jB t B h " n j =" for all n 0: 170 Let ^ n denote the sequence h " i ;B h " i 1in , and ! " := lim n!1 ! ^ n . It is clear that k!! " k T 2" and k! ^ n ^hn !k h n+1 2"; for all n;!: (1.13) Lemma 180. Let Assumptions 138, 139, 141, 143, and 144 hold true. Then there exists " 2C 1;2 () bounded from below with corresponding stopping times h " n such that " (0; 0) = " 0 (0; 0) +" +T 0 (2"); " (T;!)(! " ); L " 0 on [0;T ): (1.14) Proof. For notational simplicity, in this proof we omit the superscript" and denote n := " n , = " etc. Moreover, we extend the domain of n ( n ;) to [t n ;1)R d : n ( n ;t;x) := n ;t^T; proj O" (x) ; where proj O" is the orthogonal projection on O " , the closed centered ball with radius". We shall construct on each [h n ; h n+1 ) by induction on n. Step 1. First, let > 0, > 0 be small numbers which will be decided later. Thanks to Assumption 141 and Proposition 142, let v ; 0 denote the unique viscosity solution of the PDE (E) 0;0 "; with boundary condition v ; 0 = 0 + on@O "; 0 . Then by standard arguments there exist 0 () andC 0 (), which may depend onL 0 , and the regularity of 0 , such that, for all 0 (), 0v ; 0 0 C 0 () onO "; 0 nO " 0 : In particular, the above inequalities hold on@O " 0 . Then, by the comparison principle Propo- sition 142 (i) and Lemma 177, we have 0v ; 0 0 C 0 () inO " 0 : It is clear that lim !0 C 0 () = 0. Fix 0 such that C 0 ( 0 )< " 4 and set 0 := 0 ( 0 ). 
Then v 0 ; 0 0 (0; 0)< 0 (0; 0) + " 4 : Now by Assumption 141 there exists v 0 2C 1;2 (O "; 0 ) satisfying v 0 (0; 0)v 0 ; 0 0 (0; 0) + " 4 ; L 0;0 v 0 0 inO "; 0 0 ; v 0 v 0 ; 0 0 on @O "; 0 0 : 171 We note that, by the comparison principle Proposition 142 (i) again we have v 0 v 0 ; 0 0 0 on O "; 0 0 : By modifyingv 0 outside ofO "; 0 2 0 and by the monotonicity (7.44) , without loss of generality we may assume v 0 2C 1;2 ([0;T ]R d ) with bounded derivatives such that v 0 (0; 0) = 0 (0; 0) + " 2 ; L 0;0 v 0 0 inO " 0 ; v 0 0 on @O " 0 : We now dene (t;!) :=v 0 (t;! t ) + " 2 + 0 (2")(Tt); t2 [0; h 1 ]: (1.15) Note that (t;! t )2O " 0 for t< h 1 , (h 1 ;! h 1 )2@O " 0 , and 0 is bounded. Then (0; 0) = 0 (0; 0) +" +T 0 (2"); v 0 (h 1 ;!) 0 (h 1 ;! h 1 ) = 1 (^ 1 ; h 1 ; 0); C on [0; h 1 ]: (1.16) Moreover, by the monotonicity (7.44) again, and by Assumption 139 and (1.13), L (t;!) = 0 (2")@ t v 0 (t;! t )G(t;!; ;Dv 0 (t;! t );D 2 v 0 (t;! t )) 0 (2")@ t v 0 (t;! t )G(t;!;v 0 (t;! t );Dv 0 (t;! t );D 2 v 0 (t;! t )) @ t v 0 (t;! t )g 0;0 (t;v 0 (t;! t );Dv 0 (t;! t );D 2 v 0 (t;! t )) = L 0;0 v 0 (t;! t ) 0; for 0t< h 1 (!): (1.17) Step 2. Let,, be small positive numbers which will be decided later. Sets i := (1) i T , i 0. Since O " is compact, there exist a partition D 1 ; ;D n such thatjy ~ yj T for any y; ~ y 2 D j , j = 1; ;n. For each j, x a point y j 2 D j . Now for each (i;j), let v ; ij denote the unique viscosity solution of the PDE (E) s i ;! (s i ;y j ) "; with boundary condition v ; ij (t;x) = 1 (s i ;y j ;t^T;x) + on @O "; s i . Here ! (s i ;y j ) denotes the linear interpolation of (0; 0); (s i ;y j ); (T;y j ). 
Then by standard arguments there exist 0 () and C 0 (), which may depend on L 0 , and the regularity of 1 , but independent of and (i;j), such that lim !0 C 0 () = 0 and, for all 0 (), 0v ; ij (t;x) 1 (s i ;y j ;t^T;x)C 0 () onO "; s i nO " s i : 172 Follow the arguments in Step 1, we may x 0 , 0 (independent of and (i;j)) and there exists v ij 2C 1;2 ([s i ;T ]R d ) with bounded derivatives such that v ij (s i ; 0) = 1 (s i ;y j ;s i ; 0) + " 4 ; L s i ;! (s i ;y j ) v ij 0 inO " s i ; v ij 1 (s i ;y j ;) on @O " s i : Denote E 1 ij :=fs i+1 < h 1 s i g\fB h 1 2D j g2F h 1 : Here we are using (i;j) instead ofj as index and clearlyE 1 ij form a partition of . We then dene on [h 1 ; h 2 ] in the form of (1.2) with n 1 = 2: t := X i;j h v 0 (h 1 ;B h 1 ) +v ij (s i +t h 1 ;B t B h 1 )v ij (s i ; 0) + " 2 i 1 E 1 ij + 0 (2")(Tt); t2 [h 1 ; h 2 ]: (1.18) We show that satises all the requirements on [h 1 ; h 2 ] when is small enough. First, by (1.18), we have h 1 = X i;j h v 0 (h 1 ;B h 1 ) + " 2 i 1 E 1 ij + 0 (2")(T h 1 ) =v 0 (h 1 ;B h 1 ) + " 2 + 0 (2")(T h 1 ); which is consistent with (1.15), and thus is continuous at t = h 1 . We next check, similarly to (1.17), that L (t;!) 0; h 1 t< h 2 : (1.19) Note that (h 1 ;B h 1 )2@O " 0 and 0s i h 1 s i s i+1 =s i T on E 1 ij , then v 0 (h 1 ;B h 1 )v ij (s i ; 0) + " 2 1 (h 1 ;B h 1 ; h 1 ; 0) 1 (s i ;y j ;s i ; 0) + " 4 " 4 1 (3T); on E 1 ij ; 173 where 1 is the modulus of continuity function of 1 . In particular, 1 (3T) < " 4 when is small enough. Now on E 1 ij , denoting t 1 := h 1 , x := ! h 1 , ~ t := s i h 1 +t, by (7.44), Assumption 139, and (1.13) again we have L (t;!) L (t;!) L s i ;! (s i ;y j ) v ij ( ~ t;x) = 0 (2")G t;!; (t;!);Dv ij ( ~ t;x);D 2 v ij ( ~ t;x) +G ~ t^T;! (s i ;y j ) ^s i ;v ij ( ~ t;x);Dv ij ( ~ t;x);D 2 v ij ( ~ t;x) " 4 1 (3T)G t;! ^ 1 ^t 1 ;v ij ( ~ t;x);Dv ij ( ~ t;x);D 2 v ij ( ~ t;x) +G ~ t^T;! (s i ;y j ) ^s i ;v ij ( ~ t;x);Dv ij ( ~ t;x);D 2 v ij ( ~ t;x) " 4 1 (3T) 0 d 1 (t;! 
^ 1 ^t 1 ); ( ~ t^T;! (s i ;y j ) ^s i ) : (1.20) Without loss of generality, assume "T . Then d 1 (t;! ^ 1 ^t 1 ); ( ~ t^T;! (s i ;y j ) ^s i ) jt ~ tj + sup 0sT j s^t 1 t 1 x s^s i s i y j j T + sup 0sT j s^t 1 t 1 x s^t 1 t 1 y j j + sup 0sT j s^t 1 t 1 y j s^s i s i y j j 2T +" sup 0sT j s^t 1 t 1 s^s i s i j = 2T +"[1 t 1 s i ] 2T + " [1 s i+1 s i ] = 3T: ThenL (t;!) " 4 [ 0 + 1 ](3T): By choosing small enough, we obtain (1.19). Finally, we emphasize that the bound of v ij and its derivatives depend only on the properties of 1 (and the 0 which again depends on 1 ), but not on (i;j). Then satises Denition 137 (iii) on [h 1 ; h 2 ]. Moreover, since 1 is bounded, by comparison we see that C on [h 1 ; h 2 ]. Step 3. Repeat the arguments, we may dene on [h n ; h n+1 ] for all n. From the construction and recalling Lemma 178 we see that 2 C 1;2 () bounded from below, (0; 0) = 0 (0; 0) +" +T 0 (2"), andL 0 on [0;T ). Finally, since h n = T when n is large enough, we see that (T;!) = (h n (!);!) n (! " ) = (! " ). The proof is complete. Remark 181. This remark weakens the uniform regularity in Assumption 139 slightly to Assumption 140, and the result will be important for the work Pham and Zhang [51]. 174 We note that Assumption 139 is used only in the proof of Lemma 180, more precisely in (1.17) and (1.20). We also note that the smooth functions v ij in Step 2 above are typically constructed as the classical solution to some PDE, as we will see in Section 5 and in [51], and thus satisfy certain estimates. Assume There exists a constant C 0 > 0, which may depend on 0 (and "), but independent of , such that the v ij in Step 2 of the above proof can be constructed so that jDv ij (t;x)jC ;jD 2 v ij (t;x)jC for all (t;x)2O " : (1.21) We claim that Lemma 180, hence our main result Theorem 148, still holds true if we replace Assumption 139 by (1.21) and Assumption 140. Indeed, in (1.17) of Step 1 above, note that G(t;!;v 0 (t;! t );Dv 0 (t;! t );D 2 v 0 (t;! 
t ))g 0;0 (t;v 0 (t;! t );Dv 0 (t;! t );D 2 v 0 (t;! t )) = G(t;!;v 0 (t;! t );Dv 0 (t;! t );D 2 v 0 (t;! t ))G(t; 0;v 0 (t;! t );Dv 0 (t;! t );D 2 v 0 (t;! t )) 0 ("); thanks to Assumption 140. So we still have (1.17). To see (1.20) under our new assumption, we rst note that, as in (1.20) and by (7.44), L (t;!) 0 (2") + " 4 1 (3T)G t;!;v ij ( ~ t;x);Dv ij ( ~ t;x);D 2 v ij ( ~ t;x) +G ~ t^T;! (s i ;y j ) ^s i ;v ij ( ~ t;x);Dv ij ( ~ t;x);D 2 v ij ( ~ t;x) : Now by Assumption 140 and (1.21) we have, at ( ~ t;x)2O " , G t;!;v ij ;Dv ij ;D 2 v ij G ~ t^T;! (s i ;y j ) ^s i ;v ij ;Dv ij ;D 2 v ij = G t;!;v ij ;Dv ij ;D 2 v ij G t;! ^ 1 ^t 1 ;v ij ;Dv ij ;D 2 v ij +G t;! ^ 1 ^t 1 ;v ij ;Dv ij ;D 2 v ij G ~ t^T;! (s i ;y j ) ^s i ;v ij ;Dv ij ;D 2 v ij 0 k!! ^ 1 ^t 1 k t + ~ 0 (jt ~ t^Tj) h jDv ij j +jD 2 v ij j i + 0 d 1 (t;! ^ 1 ^t 1 ); ( ~ t^T;! (s i ;y j ) ^s i ) 0 (2") +C 0 ~ 0 (T) + 0 d 1 (t;! ^ 1 ^t 1 ); ( ~ t^T;! (s i ;y j ) ^s i ) : 175 Thus L (t;!) " 4 1 (3T)C 0 ~ 0 (T) 0 d 1 (t;! ^ 1 ^t 1 ); ( ~ t^T;! (s i ;y j ) ^s i ) : Now follow the same arguments as in Lemma 180 we can prove it under Assumption 140 and (1.21). Proof of Proposition 152. For any " > 0, let h " n , n 0, and " be as in Lemma 180, and dene " := " + 0 (2"). Then clearly " 2C 1;2 (), " is bounded from below, and " (T;!)(!) = " (T;!) + 0 (2")(!)(! " )(!) + 0 (2") 0: where the last inequality thanks to (1.13). Moreover, for h n t < h n+1 , by (7.44) again we have L " (t;!) = @ t " (t;!)G(t;!; " + 0 (2");@ ! " ;@ 2 !! " ) @ t " (t;!)G(t;!; " ;@ ! " ;@ 2 !! " ) =L " (t;!) 0: Then by the denition of u we see that u(0; 0) " (0; 0) = " (0; 0) + 0 (2") " 0 (0; 0) +" + (T + 1) 0 (2"): Similarly, u(0; 0) " 0 (0; 0)" (T + 1) 0 (2"). This implies that u(0; 0)u(0; 0) 2 " + (T + 1) 0 (2") : Since " > 0 is arbitrary, we prove that u(0; 0) = u(0; 0). Similarly we can show that u(t;!) =u(t;!) for all (t;!)2 . Remark 182. 
In some special cases where a candidate solution of the PPDE is available through some representation, Proposition 152 can be proved by a much easier argument. See [17] and [15] in the context of a semilinear PPDE. In particular, in these cases we do not need the technical condition Assumption 144. 2 Construction of the approximating sequences In the following two subsections we construct two families of processes $\{\psi^{m,\delta}\}_{\delta>0,\,m\in\mathbb{N}}\subset\underline{D}(0,0)$ and $\{\psi^{\delta}\}_{\delta>0}\subset\overline{D}(0,0)$ that will allow us to prove Lemma 175. We adopt the following strategy to prove the equality $\underline{u}=\overline{u}=u^0$. We freeze the data of the problem $(F,h,\xi)$ in regions of $\Lambda$ related to the stopping times $h_t$. Then we show that the functionals defined as the solutions of the problem with frozen data are stepwise Markovian. This brings us to a PDE problem. Proposition 8.2 of [18] then allows us to construct smooth approximations of the solutions of the frozen PDE. We recall that, for the comparison result, $\sigma$ does not depend on $(t,\omega)$, and the assumptions on the data allow us to claim that $c_0 := \inf_{k\in K}\inf_{|\zeta|=1}\zeta^{\top}\sigma(k)\zeta > 0$ (2.22) and that $F$ is uniformly continuous in $(t,\omega)$ with modulus $\rho_0$. Additionally, recall that Remark 167 allows us to assume without loss of generality that $F$, $G$ and $h$ verify (5.22). We will need the following definitions to carry out this construction. For $\delta>0$ (which will go to $0$) and $t\in[0,T)$, we define: $O_\delta := \{x\in\mathbb{R}^d : |x|<\delta\}$, $\overline{O}_\delta := \{x\in\mathbb{R}^d : |x|\leq\delta\}$, $\partial O_\delta := \{x\in\mathbb{R}^d : |x|=\delta\}$, $O^\delta_t := [t,(t+\delta)\wedge T)\times O_\delta$, $\overline{O}^\delta_t := [t,(t+\delta)\wedge T]\times\overline{O}_\delta$, $\partial O^\delta_t := ([t,(t+\delta)\wedge T]\times\partial O_\delta)\cup(\{(t+\delta)\wedge T\}\times\overline{O}_\delta)$. For $\{t_i\}_{i\geq 0}$ a nondecreasing sequence in $[0,T]$ with $t_0=0$ and $\sup_i\{t_{i+1}-t_i\}\leq\delta$, and $\{x_i\}_{i\geq 0}$ a sequence in $\overline{O}_\delta$ with $x_0=0$, and $n\geq 0$, we denote $\pi_n := \{(t_i,x_i)\}_{0\leq i\leq n}$. In the sequel $\pi_n$ will always verify the previous properties. The sequence $\{t_i\}$ represents the successive hitting times of a given level by the canonical process, and $\{x_i\}$ the direction of variation of the canonical process between the hitting times.
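As a side illustration, outside the thesis's framework: the successive hitting times just described can be simulated on a discretized Brownian path. The threshold `delta`, horizon `T` and step `dt` below are arbitrary illustrative choices, and the function is only a discrete sketch of the level-crossing times, not the exact construction:

```python
import numpy as np

def oscillation_times(path, dt, delta, cap=None):
    """Successive times h_i at which the discretized path has moved by
    delta (in absolute value) since h_{i-1}; if `cap` is given, h_i is
    also forced to occur within h_{i-1} + cap. A discrete sketch only."""
    times = []
    last_t, last_x = 0.0, path[0]
    for j in range(1, len(path)):
        t = j * dt
        if abs(path[j] - last_x) >= delta or (cap is not None and t - last_t >= cap):
            times.append(t)
            last_t, last_x = t, path[j]
    return times

rng = np.random.default_rng(0)
dt, T, delta = 1e-4, 1.0, 0.1
increments = rng.normal(0.0, np.sqrt(dt), size=int(T / dt))
path = np.concatenate([[0.0], np.cumsum(increments)])
hs = oscillation_times(path, dt, delta)
```

On a typical path one sees on the order of $T/\delta^2$ such times before the horizon, consistent with the type of estimate used later (Lemma 178) to show that only finitely many of these times occur before $T$.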
For such $\pi_n$ and $(t,x)\in\overline{O}^\delta_{t_n}$, we define: $h^{t,x,\delta}_{-1} := t_n$, (2.23) $h^{t,x,\delta}_{0} := \inf\{s\geq t : |x+B^t_s|=\delta\}\wedge(t_n+\delta)\wedge T$, and for $i\geq 0$, (2.24) $h^{t,x,\delta}_{i+1} := \inf\{s\geq h^{t,x,\delta}_i : |B^t_s - B^t_{h^{t,x,\delta}_i}|=\delta\}\wedge(h^{t,x,\delta}_i+\delta)\wedge T$. (2.25) Notice that we can associate to $\pi_n$ a path $\hat\omega^{\pi_n}\in\Omega$, the linear interpolation of $\{(t_i,\sum_{j=0}^{i}x_j)\}_{0\leq i\leq n}$ and $(T,\sum_{j=0}^{n}x_j)$; and we can associate to $\pi_n$, $(t,x)\in\overline{O}^\delta_{t_n}$ and a path $\omega\in\Omega^t$ a path $\hat\omega^{\pi_n,t,x,\delta}\in\Omega$, the linear interpolation of $\{(t_i,\sum_{j=0}^{i}x_j)\}_{0\leq i\leq n}$ and of $\{(h^{t,x,\delta}_i(\omega),\ \sum_{j=0}^{n}x_j + x + \omega_{h^{t,x,\delta}_i(\omega)})\}_{i\geq 0}$. We remark that $\hat\omega^{\pi_n,t,x,\delta}$ is not adapted to the filtration $(\mathcal{F}^t_s)_{t\leq s\leq T}$: indeed, to know the value of $\hat\omega^{\pi_n,t,x,\delta}$ after the date $h^{t,x,\delta}_i(\omega)$, one needs the value of $\omega$ at the date $h^{t,x,\delta}_i(\omega)$. However, the discretization in time allows us to define the approximated data in (2.26) as constant between these hitting times; thus the data in (2.26) are adapted. For $(t,x)\in\overline{O}^\delta_{t_n}$, the notation $(t,x)\otimes\pi_n$ means that we append $(t,x)$ to the sequence $\pi_n$ as its $(n+1)$-th element, namely $(t,x)\otimes\pi_n=\{\pi_n,(t,x)\}$. We will construct $\{\psi^{m,\delta}\}$ and $\{\psi^{\delta}\}$ by approximating the data of the problem by data with constant coefficients between some hitting times of $B$. For $\pi_n$ and $(t,x)\in\overline{O}^\delta_{t_n}$, we define the following generator, final condition and barrier for the approximated equations: $\hat F^{\pi_n,t,x,\delta}:\Lambda^t\times\mathbb{R}\times\mathbb{R}^d\times K\to\mathbb{R}$, $\hat h^{\pi_n,t,x,\delta}:\Lambda^t\to\mathbb{R}$, $\hat\xi^{\pi_n,t,x,\delta}:\Omega^t\to\mathbb{R}$. If $(s,\omega)\in\Lambda^t$ with $h^{t,x,\delta}_i(\omega)\leq s < h^{t,x,\delta}_{i+1}(\omega)$: $\hat F^{\pi_n,t,x,\delta}(s,\omega,y,z,k)=F(h^{t,x,\delta}_i(\omega),\hat\omega^{\pi_n,t,x,\delta},y,z,k)$, (2.26) $\hat h^{\pi_n,t,x,\delta}(s,\omega)=h(h^{t,x,\delta}_i(\omega),\hat\omega^{\pi_n,t,x,\delta})$, (2.27) $\hat\xi^{\pi_n,t,x,\delta}(\omega)=\xi(\hat\omega^{\pi_n,t,x,\delta})$. (2.28) There are three important features of this approximation: (i) the approximated generator and barrier are still adapted to $\mathbb{F}$; they verify the assumptions of [24], so we can use the results on RBSDEs; (ii) their difference from the original data is at most $\rho_0(2\delta)$; (iii) the idea of approximating the data and studying the RBSDE with the approximated data cannot, by itself, give us a sequence of subsolutions in $\underline{D}(t,\omega)$.
Indeed, the barrier for the approximated problem might have negative jumps; therefore the construction produces subsolutions which might not be inU, and we would not be able to use this sequence in partial comparison. However, this idea does allow us to produce supersolutions. Indeed, the solutions of the RBSDEs can only have negative jumps, which does not generate any problem for supersolutions in partial comparison. 2.1 Construction of subsolutions by penalization In this subsection, we will construct f m; g m>0;>0 2D(0; 0). The construction will be done by penalization. It is worth noticing that this approach might even work for more general control problems such as stochastic target problems. However, in this more general case, in a non-Markovian framework, the value functional might not be continuous; therefore we do not expect a characterization of the value functional as the unique solution of a PPDE but only as the minimal supersolution (i.e. u =u 0 ). Fix > 0, n;m2 Nf0g, n as previously, (t;x)2O tn , k2K t , and define P t;k as follows: dX t;k s =(k s )dB t s ; underP t 0 X t;k t = 0; andP t;k :=P t 0 (X t;k ) 1 : Consider (Y n;t;x;;k;m s ;Z n;t;x;;k;m s ) s2[t;T ] (denoted (Y s ;Z s ) for simplicity), the solution of the following BSDE under P t;k : Y s = n;t;x; (B t ) Z T s Z r 1 ( ~ k r )dB t r (2.29) + Z T s ^ F n;t;x; r (Y r ;Z r ; ~ k r ) +m(Y r ^ h n;t;x; r ) dr and define ;m n ( n ;t;x) := sup k2K t Y n;t;x;;k;m t Notice that, a priori, we do not know anything about the regularity of this function m; n . We will prove that it is uniformly continuous, with modulus of continuity depending only on ;m;d and bounds on the data of the initial problem, and that it is a viscosity solution of @ t ;m n ( n ;:)G(t n ; ^ n ; ;m n ( n ;:);@ x ;m n ( n ;:);@ xx ;m n ( n ;:)) m( ;m n ( n ;:)h(t n ; ^ n )) = 0; onO tn ; ;m n ( n ;t;x) = ;m n+1 ( (t;x) n ;t; 0); for all (t;x)2@O tn : We first notice that if (t;x)2 @O tn then h t;x; 0 = t. Therefore ^ ! n;t;x; = ^ ! (t;x) n ;t;0; for all !2 t , which implies the equality of the data defining (Y n;t;x;;k;m ;Z n;t;x;;k;m ) and (Y (t;x) n ;t;0;;k;m ;Z (t;x) n ;t;0;;k;m ). Therefore: ;m n ( n ;t;x) = ;m n+1 ( (t;x) n ;t; 0); for all (t;x)2@O tn : (2.30) Our main difficulty in the rest of the paper is the fact that the stopping times fh t;x; i g do not depend continuously on (t;x). However, the regularizing effect of the PDE allows us to prove the following result. Proposition 183. For all > 0, n;m2Nf0g, n as previously, the mapping (t;x)2@O tn ! m; n+1 ( (t;x) n ;t; 0) = m; n ( n ;t;x) is uniformly continuous with modulus of continuity depending only on d, m, , L 0 , c 0 ; M 0 ; 0 ;T . Proof. The proofs of this proposition and of Proposition 185 are in Appendix B. Given this result and Proposition 5.14 of [53], we can represent ;m n ( n ;:) as the supremum of solutions of BSDEs with final condition m; n ( n ; h t;x; 0 ;B t h t;x; 0 ). Define ( ~ Y n;t;x;;k;m s ; ~ Z n;t;x;;k;m s ) as the solution of the following BSDE under P t;k : ~ Y n;t;x;;k;m s = ~ Y s = m; n ( n ; h t;x; 0 ;B t h t;x; 0 ) Z h t;x; 0 s ~ Z r 1 ( ~ k r )dB t r + Z h t;x; 0 s F (t n ; ^ n ; ~ Y r ; ~ Z r ; ~ k r ) +m( ~ Y r h(t n ; ^ n )) dr; and by Proposition 5.14 of [53], m; n ( n ;t;x) = sup k2K t ~ Y n;t;x;;k;m t , which comes from a classical Markovian 2BSDE. Therefore, by the uniqueness of viscosity solutions to PDEs in the class of locally bounded functions, ;m n ( n ;:) verifies (see Chapter 5 of [53] on Markovian 2BSDEs): @ t ;m n ( n ;:)G(t n ; ^ n ; ;m n ( n ;:);@ x ;m n ( n ;:);@ xx ;m n ( n ;:)) (2.31) m( ;m n ( n ;:)h(t n ; ^ n )) = 0; for all (t;x)2O tn ; (2.32) ;m n ( n ;t;x) = ;m n+1 ( (t;x) n ;t; 0); for all (t;x)2@O tn : (2.33) We want to construct a process in C 1;2; . The functions ;m n might not be C 1;2 . However, the PDE (2.31) verifies the assumptions of Proposition 8.2 of [18]. Thus, we can approximate the viscosity solution of this PDE with smooth subsolutions.
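The penalization mechanism used in the BSDE (2.29) can be illustrated on a toy deterministic obstacle problem (everything below, including the one-dimensional equation, the specific obstacle, and the solver, is an illustrative sketch and is not the construction of the thesis). We solve the penalized equation -u'' = m (h - u)^+ on (0, 1) with zero boundary data by an active-set iteration and check that, for large m, the obstacle constraint u >= h is violated by at most O(1/m):

```python
import numpy as np

def penalized_obstacle(h, N, m, iters=50):
    # Solve -u'' = m (h - u)^+ on (0, 1), u(0) = u(1) = 0, on N interior
    # nodes, iterating on the active set {u < h} (penalty linearized there).
    dx = 1.0 / (N + 1)
    A = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
         - np.diag(np.ones(N - 1), -1)) / dx**2
    u = np.zeros(N)
    for _ in range(iters):
        active = (u < h).astype(float)
        u_new = np.linalg.solve(A + m * np.diag(active), m * active * h)
        if np.allclose(u_new, u, atol=1e-12):
            break
        u = u_new
    return u

N = 199
x = np.linspace(0.0, 1.0, N + 2)[1:-1]
h = 0.5 - 4.0 * (x - 0.5) ** 2       # concave obstacle, negative at the boundary
u = penalized_obstacle(h, N, 1e6)    # heavily penalized solution
```

As in the text, the operator is decreasing in u, so the penalized solutions are monotone in the penalization parameter and approach the solution of the obstacle problem from below.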
In this approximation we will lose the continuity at the dates fh i g. However, we can manage to have positive jumps, which is consistent with the definition of C 1;2; . We define two other sequences of functions, ;m n and ~ ;m n . Define ;m n ( n ;:) as the unique viscosity solution of @ t ;m n ( n ;:)G(t n ; ^ n ; ;m n ( n ;:);@ x ;m n ( n ;:);@ xx ;m n ( n ;:)) (2.34) m( ;m n ( n ;:)h(t n ; ^ n )) = 0; for all (t;x)2O tn ; (2.35) ;m n ( n ;t;x) = ;m n+1 ( (t;x) n ;t; 0) 2 n ; for all (t;x)2@O tn : (2.36) Notice that the operator G(t n ; ^ n ;u;@ x u;@ xx u) +m(uh(t n ; ^ n )) is decreasing in u. Thus ;m n ( n ;t;x) 2 n ;m n ( n ;t;x) ;m n ( n ;t;x) for all (t;x)2O tn : Additionally, under the condition c 0 > 0 the mapping (y;z; )!G(t n ; n ;y;z; ) is convex and uniformly non-degenerate in ; therefore Proposition 8.2 of [18] gives the existence of ~ ;m n ( n ;:)2C 1;2 (O tn )\C(O tn ); @ t ~ ;m n ( n ;:)G(t n ; ^ n ; ~ ;m n ( n ;:);@ x ~ ;m n ( n ;:);@ xx ~ ;m n ( n ;:)) m( ~ ;m n ( n ;:)h(t n ; ^ n )) 0; for all (t;x)2O tn ; ~ ;m n ( n ;t;x) ;m n+1 ( (t;x) n ;t; 0); for all (t;x)2@O tn ; ;m n ( n ;:) 2 n ~ ;m n ( n ;:) ;m n ( n ;:); for all (t;x)2O tn For all (t;x)2@O tn , ~ ;m n ( n ;t;x) ;m n ( n ;t;x) = ;m n+1 ( (t;x) n ;t; 0) 2 n ;m n+1 ( (t;x) n ;t; 0) + 2 n+1 2 n ~ ;m n+1 ( (t;x) n ;t; 0) + 2 n+1 + 2 n+1 2 n ~ ;m n+1 ( (t;x) n ;t; 0) Therefore ;m n ( n ;:) is a subsolution of minf@ t ~ ;m n ( n ;:)G(t n ; ^ n ; ~ ;m n ( n ;:);@ x ~ ;m n ( n ;:);@ xx ~ ;m n ( n ;:)); ~ ;m n ( n ;:)h(t n ; ^ n )g 0 for all (t;x)2O tn ; (2.37) ~ ;m n ( n ;t;x) ~ ;m n+1 ( (t;x) n ;t; 0) for all (t;x)2@O tn : (2.38) We can now define the process f m; g. h 0;0; i will be denoted by h i . For (s;!)2 with h n (!)s< h n+1 (!), n (!) will stand for f(h i (!);! h i (!) ! h i1 (!) )g 0in , and we define: m; (s;!) = ~ ;m n ( n (!);s;! s ! hn(!) ) 0 (): We now show that, with the associated sequence of stopping times fh i g, m;" 2C 1;2; . The definition of f m; g gives that if (t;!)2 with ! hn(!) s<! h n+1 (!) , then denoting P :=@ t ~ ;m n ( n (!);t;! t ! hn(!) ) Q := ~ ;m n ( n (!);t;! t ! hn(!) ) R :=@ x ~ ;m n ( n (!);t;! t ! hn(!) ) S :=@ xx ~ ;m n ( n (!);t;! t ! hn(!) ) minf@ t m; (t;!)G(t;!; m; (t;!);@ ! m; (t;!);@ !! m; (t;!)); m; (t;!)h(t;!)g = minfPG(t;!;Q 0 ();R;S);Q 0 ()h(t;!)g minfPG(t;!;Q;R;S) 0 ();Qh(h n (!); ^ n (!))g minfPG(h n (!); n (!);Q;R;S);Ph(h n (!); ^ n (!))g 0: which shows that m; 2D(0; 0). 2.2 Construction of supersolutions by approximation Fix > 0, n ; for (t;x) 2 O tn and k 2 K t , our approximated data defined at (2.23) verifies the assumptions of [24]; therefore we have the existence of ( ^ Y n;t;x;;k ; ^ Z n;t;x;;k ; ^ K n;t;x;;k ) s2[t;T ] (we drop the superscript n ;t;x;;k for simplicity of notation), the solution of the following RBSDE under P t;k . ^ Y s = ^ n;t;x; (B t ) + Z T s ^ F n;t;x; r ( ^ Y r ; ^ Z r ; ~ k r )dr Z T s ^ Z r 1 ( ~ k r )dB t r + ^ K T ^ K s ; ^ Y s ^ h n;t;x; s ; [ ^ Y s ^ h n;t;x; s ]d ^ K c s = 0; (2.39) s ^ Y := ^ Y s ^ Y s =( ^ h n;t;x; s ^ Y s ) + ; ^ K non-decreasing, ^ K t = 0: And similarly we define the following mapping: n ( n ;t;x) := sup k2K t ^ Y n;t;x;;k t : Notice that if s ^ Y > 0 then s ^ h n;t;x; > 0. Therefore the jumps of ^ Y , which are the jumps of the discontinuous part ^ K d of ^ K, can only happen when there is a jump of ^ h n;t;x; ; these possible jump dates are fh t;x; i g. In the literature, there are some estimates on d ^ K c , the continuous part of ^ K, when the barrier is a continuous semimartingale. In that case it can be shown that 0 d ^ K c s ( ^ F n;t;x; s ( ^ Y s ; ^ Z s ; ~ k s )ds +dA s ) where A is the drift part of the barrier (notice that in our case dA s = 0, except at fh t;x; i g). We will extend this result to our case. 2.2.1 Study of K c In this subsection, we will study the RBSDE defined at (2.39) under P t;k . We again drop the superscript n ;t;x;;k for notational simplicity. We denote by ( ^ F; ^ ; ^ h) the data and by ( ^ Y; ^ Z; ^ K) the solution.
There- fore on the setf ^ Y s = ^ h s g\J , we have : ^ Z s = 0; and 0d ^ K c s ^ F s ds = ^ F s ( ^ h s ; ^ Z s ; ~ k s )ds = ^ F s ( ^ h s ; 0; ~ k s )ds: Notice that for !2 t using the Remark 168: ^ F s ( ^ h s ; 0; ~ k s ) = ^ F n;t;x; s ( ^ h n;t;x; (s;!); 0; ~ k s ) =F (s; ^ ! n;t;x; ;h(s; ^ ! n;t;x; ); 0; ~ k s ) 0: So d ^ K c s = 0 onf ^ Y s = ^ h s g\J . Onf ^ Y s 6= ^ h s g\J , the equality (2.46) directly gives d ^ K c s = 0. 184 Therefore onJ ,d ^ K c s = 0,dtP t;k -a.s. which shows that ^ K c is constant between the h i so it is always 0. Then ^ K = ^ K d can only jump at the stopping timesfh t;x; i g. If we rewrite (2.39) up to h t;x; 0 for s < h t;x; 0 it becomes (without the superscript ( n ;t;x;;k)) underP t;k : ^ Y s = ^ Y h t;x; 0 + ^ K h t;x; 0 + Z h t;x; 0 s ^ F r ( ^ Y r ; ^ Z r ; ~ k r )dr Z h t;x; 0 s ^ Z r 1 ( ~ k r )dB t r ; ^ Y s ^ h s ; ^ K h t;x; 0 = h t;x; 0 ^ Y :=( ^ Y h t;x; 0 ^ Y h t;x; 0 ) = ( ^ h h t;x; 0 ^ Y h t;x; 0 ) + : Therefore for s< h t;x; 0 : ^ Y s = maxf ^ Y h t;x; 0 ;h(t n ; ^ n )g (2.47) + Z h t;x; 0 s ^ F r ( ^ Y r ; ^ Z r ; ~ k r )dr Z h t;x; 0 s ^ Z r 1 ( ~ k r )dB t r : This equation is actually a BSDE up to h t;x; 0 . We need the following results to continue our analysis. Proposition 185. The mappingM n (t;x)2@O tn ! maxf n+1 ( (t;x) n ;t; 0); ^ h(t n ; ^ n )g =M n ( n ;t;x); (2.48) is uniformly continuous with modulus of continuity depending only at d;;T;c 0 ;M 0 ;L 0 ; 0 . Proof. The proof of this result is the subject of the next Appendix. 
Given the previous regularity result it is easy to prove that for (t;x)2O tn , n ( n ;t;x) can also be represented as the following supremum : n ( n ;t;x) = sup k2K t ~ Y n;t;x;;k (2.49) where ( ~ Y n;t;x;;k ; ~ Z n;t;x;;k ) solves for s< h t;x; 0 : (2.50) ~ Y s =M n ( n ; h t;x; 0 ;B t h t;x; 0 ) + Z h t;x; 0 s ^ F r ( ~ Y r ; ~ Z r ; ~ k r )dr Z h t;x; 0 s ~ Z r 1 ( ~ k r )dB t r : Similarly to the subsolution case, given the regularity of the boundary condition, we have the following proposition: 185 Proposition 186. Under the assumptions (158), for xed n and > 0, the function n ( n ;:) is uniformly continuous and viscosity solution of the following PDE : @ t n ( n ;:)G(t n ; ^ n ; n ( n ;:);@ x n ( n ;:);@ xx n ( n ;:)) = 0; for all (t;x)2O tn ; n ( n ;t;x)h(t n ; ^ n ) for all (t;x)2O tn ; and n ( n ; (t;x)) = n+1 ( (t;x) n ;t; 0); for all (t;x)2@O tn : (2.51) Similarly to the subsolution case we can nd ~ n ( n ;:)2 C 1;2 (O tn ), supersolution of (2.51), and verifying n ( n ;:) + ~ n ( n ;:) n ( n ;:): We dene 2 C 1;2;+ as in the subsolution case. Similarly, h i stands for h 0;0; i , for (s;!)2 with h n (!) s < h n+1 (!), n (!) stands forf(h i (!);! h i (!) ! h i1 (!) )g 0in , and we dene : (s;!) := ~ n n (!);s;! s ! hn(!) + 0 (): As it is proven for m; , using the Remark 167, 2D(0; 0) . Finally, we can now prove the lemma 175. Proof of lemma 175 We show the two inequalities separately. Notice that by partial comparison, for any 2D(0; 0) and 2D(0; 0), (0; 0) (0; 0), which by taking the inmum in and supremum in shows thatu(0; 0)u(0; 0). 
m; 2 D(0; 0) and (0; 0)2D(0; 0), so m; (0; 0)u(0; 0)u(0; 0) (0; 0): Fix > 0 and > 0, then 0 ( 0 ; 0; 0) is the value at 0 of the 2RBSDE with data ( ^ G 0;0; ; ^ h 0;0; ; 0;0 ) and m; 0 ( 0 ; 0; 0) is the value at 0 of the 2BSDE with generator ^ G 0;0; (s;!;x;y)m(y ^ h 0;0; ) and nal condition 0;0; , the convergence of the solutions of the penalized BSDE to the solu- tion of the RBSDE gives that there existsm 2N such that m; 0 ( 0 ; 0; 0) 0 ( 0 ; 0; 0). We rewrite these inequalities in terms of m and to have m; (0; 0) +() + (0; 0)(): By the denition of u, and u, this gives u(0; 0) + 0 () +u(0; 0) 0 (): 186 We take take the limit as ; goes to 0 to have u(0; 0)u(0; 0): 3 Regularity of the approximating sequences We rst prove a lemma on the dependence of the solutions of the approximated problem on n . Lemma 187. There exist C > 0 depending only on d;L 0 ;M 0 , and T such that for all n =f(t i ;x i )g 0in , and 0 n =f(t 0 i ;x 0 i )g 0in with t n =t 0 n , j n ( 0 n ;t n ; 0) n ( n ;t n ; 0)jC 0 (jj^ n ^ 0 n jj tn ) holds. For the function m; n the constant C also depend on m. Remark 188. We have dened m; n with the penalized 2BSDE(see (2.29)). The generator of this 2BSDE isf ^ F n;t;x; (s;!;y;z; ~ k(s;!)) +m(y ^ h n;t;x; (s;!)) g k2K t whose Lipschitz constant in y might depend on m. Therefore in the next lemmas, as it is the case in the previous lemma, the constants C might have an extra dependence in m for the functions m; n . Proof. n ( 0 n ;t n ; 0) and n ( n ;t n ; 0) are dened with the solutions at t n of RBSDEs with respectively data ( ^ F n;tn;0; (s;!;y;z;k); ^ h n;tn;0; (s;!); ^ n;tn;0; (!)) and ( ^ F 0 n ;tn;0; (s;!;y;z;k); ^ h 0 n ;tn;0; (s;!); ^ 0 n ;tn;0; (!)) and the stopping timesfh tn;0; i g in (2.23) are the same for both of the data. Therefore for all (s;!)2 t , denoting =jj^ n ^ 0 n jj tn , we have : 187 j ^ F n;tn;0; (s;!;y;z;k) ^ F 0 n ;tn;0; (s;!;y;z;k)j 0 (); (3.52) j ^ h n;tn;0; (s;!) ^ h 0 n ;tn;0; (s;!)j 0 (); (3.53) j ^ n;tn;0; (!) 
^ 0 n ;tn;0; (!)j 0 (); (3.54) Given the a priori estimates of RBSDEs and taking the sup in k2K tn , we have that j n ( n ;t n ; 0) n ( 0 n ;t n ; 0)jC 0 (): (3.55) 3.0.2 Regularity of the hitting times To show the regularity of n and m; n , we cannot rely on the same method as we have used to show the regularity of the value functional u 0 . Indeed, in order to bring the problem to the framework of the Markovian RBSDEs, we had to freeze the data (2.26). Because of the lack of continuity of the stopping timesfh i g, the approximated data is not uniformly continuous any more. However the stopping times are continuous on a set of full capacity, and we will be able to claim the uniform continuity of the boundary condition (2.48). The regularity of m: n is a sub-case of the regularity of n , we only prove the last one. Our rst objective is to prove the uniform continuity of n ( n ;:) in x uniformly in t. For t2 [0;T ), we also dene following set of probability and capacity(for which the superscript t will be omitted), P t G :=fP t;k ;k2K t g; C t G := sup P2P t G P; E t G [] := sup P2P t G E P []: Notice that under the additional assumption (159), does not depend on (t;!), hence the nondegeneracy assumption in (158) implies that c 0 > 0, where c 0 is dened in (2.22). We introduce the following hitting times that will simplify our task in giving uniform bounds in h t;x; i . h t;x;;t 0 = inffst :jx +B t s j =g^t 0 ^T; (3.56) ~ h t;x; i := h t;x; i _ h t;0; i : (3.57) 188 In giving the estimates on the familyfh t;x; i g, the hitting times h t;x;;t 0 will allow us to write (h t;x; i+1 ; h t;0; i+1 ) conditionally on (h t;x; i ; h t;0; i ). However we need to make sure that h t;x; i+1 ^ h t;0; i+1 h t;x; i _ h t;0; i . We will use Holder continuity to claim this last point. We give the following estimates on the family h whose proof is based on the proof of Lemma 4.7 in [18]. Proposition 189. 
There exists a constant C that only depends on c 0 ;T such that for all 2 (0;jt 1 t 0 j) it holds that C t G jh t;0;;t 0 h t;x;;t 1 j> C jxj p and (3.58) E t G jh t;0;;t 0 h t;x;;t 1 j)jt 1 t 0 j +Cjxj (3.59) Proof. As in Lemma 4.7 of [18], we x P2P t G and dene A s := R s t x jxj 2 (k r ) x jxj dr, s := inffr 0 : A r sg and M s := R s t x jxj dB r , which is a P Brownian motion and A veries A s 2c 0 (st). Notice that under the constraintjt 1 t 0 j>, we have fh t;x;;t 1 > h t;0;;t 0 +gf sup fh t;0;;t 0 sh t;0;;t 0 +g x jxj B h t;0;;t 0 s jxjg f sup fh t;0;;t 0 sh t;0;;t 0 +2c 0 g (M s M h t;0;;t 0 )jxjg Taking the the probabilities of the events, P(h t;x;;t 1 > h t;0;;t 0 +)P( sup fh t;0;;t 0 sh t;0;;t 0 +2c 0 g (M s M h t;0;;t 0 )jxj) P 0 (jjBjj 2c 0 jxj)C jxj p Thus,P(jh t;0;;t 0 h t;x;;t 1 j>) 2C jxj p , which gives the 2 bounds. We recall that h t;x; 1 =t. Proposition 190. For n> 0 dene t;x; n := sup 0in jh t;x; i h t;0; i j1 f~ h t;0; i1 +<Tg and (3.60) C t K;1=3 :=f!2 t ; sup ts<rT j! r ! s j jrsj 1=3 <Kg (3.61) 189 then for all "> 0 there exists K " <1, whose choice is independent in t, such that for all > 0 small enough and n2 N there exists q > 0, whose choice is independent of t, verifyingC t G (f t;x; n >g[ (C t K";1=3 ) c )" ifjxjq. Proof. Notice that the denitions of the hitting times for i = 0 and i> 0 are dierent. We will rst give estimates for i = 0. By classical results on stochastic analysis, there exist a constant p > 0 depend- ing only on d such that E G h sup ts<rT jB t r B t s j p jrsj p=3 i < 1. Then C G ((C t K;1=3 ) c ) 1 K p E G h sup ts<rT jB t r B t s j p jrsj p=3 i ! 0 as K goes to innity. Remark that the upper bound depends only on d;L 0 and T . Fix "> 0, and t2 [0;=2] then, by the previous inequality, there exists K " > 0, whose choice is independent of t, such thatC G ((C t K";1=3 ) c )"=2. 
We x2 (0; 3 8K 3 " ) andn2N, by denition h t;x; = h t;x;;t+ , and using the Proposition (189), there exists q 0 "; 2 (0;=2) such thatC t G (jh t;x; 0 h t;0; 0 j > ) " 4 ifjxj q 0 "; . We dene ~ 0;t;x; "; :=fjh t;x; 0 h t;0; 0 j g\C K";1=3 verifyingC t G (( ~ 0;t;x; "; ) c ) 3"=4, for all t2 [0;=2], andjxjq 0 "; . Notice the Holder continuity, the upper bound on and the boundjh t;x; 0 h t;0; 0 j implies that on ~ 0;t;x; "; , it holds thatjB t h t;x; 0 B t h t;0; 0 j 2 for alljxjq 0 " . On the eventfh t;x; 0 h t;0; 0 +<Tg\ ~ 0;t;x; "; , we have the following inequality and equalities for alljxjq 0 "; : h t;x; 1 ^ h t;0; 1 h t;x; 0 _ h t;0; 0 h t;0; 1 = h h t;0; 0 ;0;;h t;0; 0 + h t;x; 1 = h h t;0; 0 ;(B t h t;0; 0 B t h t;x; 0 );;h t;0; 0 + Hence, switching the roles of x and 0 and using the estimates (3.58), we obtain for all t2 [0;=2],P2P t G , andjxjq 0 "; , E h jh t;x; 1 h t;0; 1 j1 f~ h t;0; 0 +<Tg\ ~ 0;t;x; "; jF ~ h t;x; 0 i jh t;x; 0 h t;0; 0 j +CjB t h t;0; 0 B t h t;x; 0 j (1 +CK " )jh t;x; 0 h t;0; 0 j: 190 Thus, by induction, for all i = 1;:::;n, t2 [0;=2],P2P t G , andjxjq 0 "; , we have : E h jh t;x; i h t;0; i j1 f~ h t;0; i1 +<Tg\ ~ 0;t;x; "; jF ~ h t;x; 0 i (3.62) jh t;x; 0 h t;0; 0 j +CjB t h t;0; 0 B t h t;x; 0 j (3.63) (1 +CK " ) n jh t;x; 0 h t;0; 0 j: (3.64) Therefore for all t2 [0;=2],P2P t G , andjxjq 0 "; P f t;x; n >g \ ~ 0;t;x; "; (1 +CK " ) n E jh t;x; 0 h t;0; 0 j (3.65) We now chooseq2 (0;q 0 "; ) to have the right hand side smaller than"=4(notice that this only depends on d;;L 0 ;c 0 ;T and ";;n) then, for all t2 [0;=2],P2P t G , andjxjq P f t;x; n >g [ (C t K";1=3 ) c P f t;x; n >g [ ( ~ 0;t;x; "; ) c P ( ~ 0;t;x; "; ) c +P f t;x; n >g \ ~ 0;t;x; "; ": 3.1 Proof of Proposition 183 and 185 Proof. Notice that the proposition 183 is a subcase of 185. We only prove the second one. Thanks to Lemma 187, without loss of generality we assume that n = 0, and t n = 0. 
Step 1: Regularity in space : Fix "> 0, and > 0 small enough, then there exist K " <1 such thatC t G ((C K";1=3 ) c )"=2. There existsn " <1 such that onC K";1=3 , for alljxj=2, we have h t;x; n" = T . Then using again the Proposition (190), for all d > 0 small enough, there exists q2 (0;) such thatC t G (f t;x; n" > g[ (C t K";1=3 ) c ) " for alljxj q. Notice that the choice of q does not depend on t. We dene 1 :=f t;x; n" g\C t K";1=3 : 191 On the event 1 \f~ h t;x; i1 +<Tg, the followng inequalities hold forjxjq : jh t;x; i h t;0; i j jh t;x; i1 h t;x; i j 3 8K 3 " : The second inequality is the consequence of the Holder continuity of the paths, and the estimates is 3 8K 3 " for i = 0 but 3 K 3 " for i> 0. We take the smallest one. Given the previous inequalities we can estimate the slope of ^ ! n;t;x; on [h t;x; i1 (!); h t;x; i (!)], for all !2 1 \f~ h t;x; i1 +<Tg. Chosing 3 32K 3 " , we have that the slope of ^ ! n;t;x; on on [h t;x; i1 (!); h t;x; i (!)] is less than 3 8K 3 " 2 3 16K 3 " 16K 3 " 2 . Therefore, for all !2 1 , jj^ ! n;t;x; ^ ! n;t;0; jj 16K 3 " 2 +jxj 16K 3 " 2 +q =:l ;";q ; wheneverjxjq. Fix also k2K t , we introduce the following notations: ^ Y s := ^ Y n;t;0;;k s ; ^ Y 0 s := ^ Y n;t;x;;k s ; ^ Z s := ^ Z n;t;0;;k s ; ^ Z 0 s := ^ Z n;t;x;;k s ; ^ K s := ^ K n;t;0;;k s ; ^ K 0 s := ^ K n;t;x;;k s ; h i := h t;0; i ; h 0 i := h t;x; i ^ F s := ^ F n;t;0; (s;!; ^ Y s ; ^ Z s ; ~ k r ); ^ F 0 s := ^ F n;t;x; (s;!; ^ Y 0 s ; ^ Z 0 s ; ~ k r ); ^ F 0;s := ^ F n;t;0; (s;!; 0; 0; ~ k r ); ^ F 0 0;s := ^ F n;t;x; (s;!; 0; 0; ~ k r ); ^ h s = ^ h n;t;0; (s;!); ^ h 0 s = ^ h n;t;x; (s;!) 
and ^ = ^ n;t;0; (!); ^ 0 = ^ n;t;x; (!): With this notation on 1 , j ^ ^ 0 j 0 (l ;";q ): Additionally for s2 (h 0 i _ h i ; h 0 i+1 ^ h i+1 ) j ^ F 0;r ^ F 0 0;r j 0 ( +l ;";q ); (3.66) j ^ h s ^ h 0 s j 0 ( +l ;";q ): (3.67) The classical a priori estimates on RBSDEs control the dierence of the solutions with E[sup s2[t;T ] j ^ h s ^ h 0 s j], for some expectationE. In our case this estimate is not sharp enough. 192 Indeed, the jumps dates h i and h 0 i will be dierent in the generic case, and the value of sup s2[t;T ] j ^ h s ^ h 0 s j will be at the order of 0 (), which can not be controlled withjxj, thus rendering the classical estimates useless. However we can improve those estimates. We only need to improve the upper bound of the term R T t (Y s Y 0 s )d(K s K 0 s ) under P t;k , which is easier in our case because of the fact that the K c = 0: Z T t (Y s Y 0 s )d(K s K 0 s ) Z T t ( ^ h s ^ h 0 s )d(K s K 0 s ) = n" X i=0 1 h 0 i <h i [h(h i1 )h 0 (h 0 i )]K h i [h(h i1 )h 0 (h 0 i1 )]K 0 h i + n" X i=0 1 h i <h 0 i n [h(h i1 )h 0 (h 0 i1 )]K h i [h(h i )h 0 (h 0 i1 )]K 0 h 0 i o = n" X i=0 1 h 0 i <h i n [h(h i )K 0 h 0 i h 0 (h 0 i )K h i ] + [h 0 (h 0 i1 )K h i h(h i1 )K 0 h i ] o C n" X i=0 1 h 0 i <h i [jh(h i )h 0 (h 0 i )j +jK 0 h 0 i K h i j]: Our RBSDEs veries the general assumptions in [24], therefore the size of the jumps verify: jK 0 h 0 i K h i jj(h 0 (h 0 i1 )Y 0 h 0 i ) + (h(h i1 )Y h i ) + j jh 0 (h 0 i1 )h(h i1 )j +jY 0 h 0 i Y h i j 0 ( +l ;";q ) +jY 0 h 0 i Y h i j: Combining the previous inequalities, and restrict our analysis to 1 , we obtain on 1 : Z T t (Y s Y 0 s )d(K s K 0 s )C(2n " 0 ( +l ;";q ) + n" X i=0 1 h 0 i <h i jY 0 h 0 i Y h i j): (3.68) Recall that underP t;k : Y h i = maxfh(h i );Y h i+1 g + Z h i+1 h i ^ F r dr Z h i+1 h i ^ Z r 1 ( ~ k r )dB t r ; Y 0 h 0 i = maxfh 0 (h 0 i );Y 0 h 0 i+1 g + Z h 0 i+1 h 0 i ^ F 0 r dr Z h 0 i+1 h 0 i ^ (Z 0 r ) 1 ( ~ k r )dB t r : 193 Using the estimates (3.66) between (h 0 i _h i ; h 
0 i+1 ^h i+1 ) and the boundedness of the ^ Y and ^ Y 0 , the classical estimates on BSDEs give : E P t;k h jY 0 h 0 i Y h i j 2 1 1 i CE P t;k " 1 1 jh 0 (h 0 i )h(h i )j 2 +jY 0 h 0 i+1 Y h i+1 j 2 + ( Z h 0 i+1 _h i+1 h 0 i ^h i j ^ F 0 0;r ^ F 0;r jdr) 2 !# CE P t;k n E P t;k h i h + (1 +T ) 2 0 ( +l ;";q ) + 1 1 jY 0 h i+1 Y h i+1 j 2 io CE P t;k +n " (1 +T ) 2 0 ( +l ;";q ) + 1 1 jY 0 T Y T j 2 C(n " + 1)(1 +T )( + 2 0 ( +l ;";q )): In the previous inequalities the term + comes from the fact that the dierence between the stopping times h 0 and h is less than and the integrands can be bounded. Injecting this to (3.68), we obtain the following inequality on 1 : E P t;k 1 1 Z T t (Y s Y 0 s )d(K s K 0 s ) Cn 2 " 0 ( +l ;";q ); (3.69) We can improve the classical estimates as follows : jY 0 t Y t j 2 E P t;k j 0 j 2 + Z T t j( ^ F 0 ^ F ) r (Y r ;Z r )j 2 dr + Z T t (Y s Y 0 s )d(K s K 0 s ) CE P t;k 1 1 j 0 j 2 + Z T t j( ^ F 0 ^ F ) r (Y r ;Z r )j 2 dr + Z T t (Y s Y 0 s )d(K s K 0 s ) +CE P t;k 1 c 1 j 0 j 2 + Z T t j( ^ F 0 ^ F ) r (Y r ;Z r )j 2 dr + Z T t (Y s Y 0 s )d(K s K 0 s ) C n 2 " 0 ( +l ;";q ) +E P t;k Z T t j( ^ F 0 ^ F ) r (Y r ;Z r )j 2 dr +C"=2 C n 2 " 0 ( +l ;";q ) +n " +C"=2: Taking > 0 small enough, there exists q > 0 whose choice depend only on d, , 0 , L 0 , M 0 , c 0 , T , ", K " , and some universal constant(and additionally on k for k; n ), but not on, t or n , such that ifjxjq, the inequality j ^ Y n;t;x;;k t ^ Y n;t;0;;k t j 2 C" 194 holds for all k2K t . In conclusion, by taking the sup in k, there exist :R + :!R + , with (0+) = 0 and j n ( n ;t;x) n ( n ;t; 0)j(jxj); for t2 [t n ;t n +=2] andjxj=2. Step 2: Regularity in time: We x t n t < t 0 t +=2, and dene the hitting time h =2 := inffs t :jB s j = =2g^ (t +=2). Notice that on [t n ; h =2 ^t 0 ], for all k2K t , the RBSDE (2.7) under P t;k is actually a BSDE thanks to the fact that ^ K c = 0. 
Additionally, the previous regularity in x allows us to use proposition 5.14 of [53] to have the representation n ( n ;t; 0) = sup k2K t ~ Y n;t;0;;k t ; where ( ~ Y n;t;0;;k ; ~ Z n;t;0;;k ) solves (without the superscripts) under P t:k , ~ Y s = maxf n ( n ; h =2 ^t 0 ;B t h =2 ^t 0); ^ h(t n ; ^ n )g + Z h =2 ^t 0 s ^ F r ( ~ Y r ; ~ Z r ; ~ k r )dr Z h =2 ^t 0 s ~ Z r 1 (k r )dB t r : Notice that (h =2 ^t 0 ;B t h =2 ^t 0 )2O tn therefore maxf n ( n ; h =2 ^t 0 ;B t h =2 ^t 0); ^ h(t n ; ^ n )g = n ( n ; h =2 ^t 0 ;B t h =2 ^t 0); so we can rewrite the the previous BSDE under P t;k : ~ Y s = n ( n ; h =2 ^t 0 ;B t h =2 ^t 0) + Z h =2 ^t 0 s ^ F r ( ~ Y r ; ~ Z r ; ~ k r )dr Z h =2 ^t 0 s ~ Z r 1 (k r )dB t r : Fix "> 0, then there exist k2K t such that n ( n ;t; 0) ~ Y n;t;0;;k +". Then n ( n ;t; 0) n ( n ;t 0 ; 0) n ( n ; h =2 ^t 0 ;B t h =2 ^t 0) n ( n ;t 0 ; 0) + Z h =2 ^t 0 s ^ F r ( ~ Y r ; ~ Z r ; ~ k r )dr Z h =2 ^t 0 s ~ Z r 1 (k r )dB t r 1 fh =2 t 0 g ( n ( n ;t 0 ;B t t 0) n ( n ;t 0 ; 0)) +C1 fh =2 <t 0 g + Z h =2 ^t 0 s ^ F r ( ~ Y r ; ~ Z r ; ~ k r )dr Z h =2 ^t 0 s ~ Z r 1 (k r )dB t r 195 Taking the expectation under P t;k and using the previous x regularity result. : n ( n ;t; 0) n ( n ;t 0 ; 0) E P t;k j n ( n ;t 0 ;B t t 0) n ( n ;t 0 ; 0)j +CP t;k h =2 <t 0 +E P t;k " Z t 0 t ^ jF r (0;; ~ k(s;B t ))j +C +L 0 j ~ Z r jdr # E P t;k (B t t 0) +CP t;k h =2 <t 0 +Cjtt 0 j +CE G " Z t 0 t j ~ Z r jdr # E G (B t t 0) +CP G h =2 <t 0 +Cjtt 0 j +C p jtt 0 j This inequality controls the variation at one direction and the last term is a modulus of continuity for the variation in t . For the other direction we x a k2K t : n ( n ;t 0 ; 0) n ( n ;t; 0) n ( n ;t 0 ; 0) ~ Y n;t;0;;k By similar bounds : j n ( n ;t; 0) n ( n ;t 0 ; 0)j(jtt 0 j) In conclusion, the mapping t2 [t n ;t n +=2]! n ( n ;t; 0) is uniformly continuous, with modulus of continuity depending only on d; 0 ;c 0 ;L 0 ;M 0 ;T and. which proves 185. 
Combining this result with the Lemma 187, we obtain the uniform continuity of the mapping (t;x)2@O " tn ! " n+1 ( (t;x) n ;t; 0), with the modulus of continuity depending only on the previously cited parameters. For the function m; n the modulus of continuity may also depend on m. 196 Bibliography [1] Bank, P., and Baum, D. Hedging and portfolio optimization in nancial markets with a large trader. Mathematical Finance 14, 1 (2004), 1{18. [2] Bayraktar, E., Karatzas, I., Yao, S., et al. Optimal stopping for dynamic convex risk measures. Illinois Journal of Mathematics 54, 3 (2010), 1025{1067. [3] Bayraktar, E., and Sirbu, M. Stochastic perron's method and verication without smoothness using viscosity comparison: obstacle problems and dynkin games. In Proc. Amer. Math. Soc., to appear (2012). [4] Bayraktar, E., and Sirbu, M. Stochastic perron's method for hamilton{jacobi{ bellman equations. SIAM Journal on Control and Optimization 51, 6 (2013), 4274{ 4294. [5] Cheridito, P., Soner, H. M., Touzi, N., and Victoir, N. Second-order backward stochastic dierential equations and fully nonlinear parabolic pdes. Communications on Pure and Applied Mathematics 60, 7 (2007), 1081{1110. [6] Cont, R., and Fournie, D. A functional extension of the ito formula. Comptes Rendus Mathematique 348, 1 (2010), 57{61. [7] Cont, R., and Fourni e, D.-A. Change of variable formulas for non-anticipative functionals on path space. Journal of Functional Analysis 259, 4 (2010), 1043{1072. [8] Cont, R., and Fourni e, D.-A. Functional it^ o calculus and stochastic integral rep- resentation of martingales. The Annals of Probability 41, 1 (2013), 109{133. [9] Crandall, M. G., Ishii, H., and Lions, P.-L. User's guide to viscosity solutions of second order partial dierential equations. Bulletin of the American Mathematical Society 27, 1 (1992), 1{67. [10] Crandall, M. G., and Lions, P.-L. Viscosity solutions of hamilton-jacobi equa- tions. 
Transactions of the American Mathematical Society 277, 1 (1983), 1{42. [11] Denis, L., Hu, M., and Peng, S. Function spaces and capacity related to a sublinear expectation: application to g-brownian motion paths. Potential Analysis 34, 2 (2011), 139{161. 197 [12] Denis, L., and Martini, C. A theoretical framework for the pricing of contingent claims in the presence of model uncertainty. The Annals of Applied Probability (2006), 827{852. [13] Dupire, B. Functional Ito calculus, 2010. [14] Ekren, I. Viscosity solutions of obstacle problems for fully nonlinear path-dependent pdes. arXiv preprint arXiv:1306.3631 (2013). [15] Ekren, I., Keller, C., Touzi, N., and Zhang, J. On viscosity solutions of path dependent pdes. accepted to Annals of Probability, arXiv:1109.5971 (2011). [16] Ekren, I., Touzi, N., and Zhang, J. Optimal stopping under nonlinear expectation. submitted, arXiv:1209.6601 (2012). [17] Ekren, I., Touzi, N., and Zhang, J. Viscosity solutions of fully nonlinear parabolic path dependent pdes: Part i. submitted, arXiv:1210.0006 (2012). [18] Ekren, I., Touzi, N., and Zhang, J. Viscosity solutions of fully nonlinear parabolic path dependent pdes: Part ii. submitted, arXiv:1210.0007 (2012). [19] El Karoui, N., Kapoudjian, C., Pardoux, E., Peng, S., and Quenez, M.-C. Re ected solutions of backward sde's, and related obstacle problems for pde's. the Annals of Probability 25, 2 (1997), 702{737. [20] El Karoui, N., Peng, S., and Quenez, M. C. Backward stochastic dierential equations in nance. Mathematical nance 7, 1 (1997), 1{71. [21] Fleming, W. H., and Soner, H. M. Controlled Markov processes and viscosity solutions, vol. 25. springer New York, 2006. [22] Fleming, W. H., and Vermes, D. Generalized solutions in the optimal control of diusions. Stochastic dierential systems, stochastic control theory and applica- tions(Minneapolis, Minn.,1986) IMA Vol. Math. Appl.,10 (1988), 119{127. [23] Fleming, W. H., and Vermes, D. 
Convex duality approach to the optimal control of diusions. SIAM journal on control and optimization 27, 5 (1989), 1136{1155. [24] Hamad ene, S. Re ected bsde's with discontinuous barrier and application. Stochas- tics: An International Journal of Probability and Stochastic Processes 74, 3-4 (2002), 571{596. [25] Hamad ene, S., and Lepeltier, J.-P. Re ectedfBSDEsg and mixed game problem. Stochastic Processes and their Applications 85, 2 (2000), 177 { 188. [26] Hu, M., Ji, S., Peng, S., and Song, Y. Backward stochastic dierential equations driven by g-brownian motion. Stochastic Processes and their Applications 124, 1 (2014), 759{784. [27] Ishii, H. Perron's method for hamilton-jacobi equations. Duke Mathematical Journal 55, 2 (1987), 369{384. 198 [28] Karandikar, R. L. On pathwise stochastic integration. Stochastic Processes and their applications 57, 1 (1995), 11{18. [29] Krylov, N. On the rate of convergence of nite-dierence approximations for bell- mans equations with variable coecients. Probability Theory and Related Fields 117, 1 (2000), 1{16. [30] Krylov, N. V. Controlled diusion processes, vol. 14. Springer, 2008. [31] Lieberman, G. M. Second order parabolic dierential equations. World Scientic Publishing Co. Inc., River Edge, NJ, 1996. [32] Lions, P. Viscosity solutions of fully nonlinear second-order equations and optimal stochastic control in innite dimensions. part i: The case of bounded stochastic evolu- tions. Acta mathematica 161, 1 (1988), 243{278. [33] Lions, P. Viscosity solutions of fully nonlinear second-order equations and optimal stochastic control in innite dimensions. iii. uniqueness of viscosity solutions for general second-order equations. Journal of Functional Analysis 86, 1 (1989), 1{18. [34] Lions, P.-L. Viscosity solutions of fully nonlinear second order equations and optimal stochastic control in innite dimensions. part ii: Optimal control of zakai's equation. In Stochastic Partial Dierential Equations and Applications II. 
Springer, 1989, pp. 147{ 170. [35] Lukoyanov, N. Y. On viscosity solution of functional hamilton-jacobi type equations for hereditary systems. Proceedings of the Steklov Institute of Mathematics 259, 2 (2007), S190{S200. [36] Lukoyanov, N. Y. Viscosity solution of nonanticipating equations of Hamilton-Jacobi type. Dier. Uravn. 43, 12 (2007), 1674{1682, 1727. [37] Ma, J., Protter, P., and Yong, J. Solving forward-backward stochastic dierential equations explicitly|a four step scheme. Probability Theory and Related Fields 98, 3 (1994), 339{359. [38] Ma, J., Yin, H., and Zhang, J. On non-markovian forward{backward sdes and backward stochastic pdes. Stochastic Processes and Their Applications 122, 12 (2012), 3980{4004. [39] Ma, J., Zhang, J., Zheng, Z., et al. Weak solutions for forward{backward sdes|a martingale problem approach. The Annals of Probability 36, 6 (2008), 2092{2125. [40] Oksendal, B., Sulem, A., Zhang, T., et al. Singular control of spdes and back- ward spdes with re ection. [41] Pardoux, E., and Peng, S. Adapted solution of a backward stochastic dierential equation. Systems & Control Letters 14, 1 (1990), 55{61. 199 [42] Pardoux, E., and Peng, S. Backward stochastic dierential equations and quasilin- ear parabolic partial dierential equations. In Stochastic partial dierential equations and their applications. Springer, 1992, pp. 200{217. [43] Pardoux, E., and Tang, S. Forward-backward stochastic dierential equations and quasilinear parabolic pdes. Probability Theory and Related Fields 114, 2 (1999), 123{ 150. [44] Peng, S. Backward sde and related g-expectations. Backward Stochastic Dierential Equations, 364 (1997), 141{159. [45] Peng, S. Filtration consistent nonlinear expectations and evaluations of contingent claims. Acta Mathematicae Applicatae Sinica 20, 2 (2004), 191{214. [46] Peng, S. G-brownian motion and dynamic risk measure under volatility uncertainty. arXiv preprint arXiv:0711.2834 (2007). [47] Peng, S. 
Backward stochastic differential equation, nonlinear expectation and their applications. In International Congress of Mathematicians (2011), p. 393.
[48] Peng, S. Note on viscosity solution of path-dependent PDE and G-martingales. arXiv preprint arXiv:1106.1144 (2011).
[49] Peng, S., and Wang, F. BSDE, path-dependent PDE and nonlinear Feynman-Kac formula. arXiv preprint arXiv:1108.4317 (2011).
[50] Pham, H. Continuous-time stochastic control and optimization with financial applications, vol. 61. Springer, 2009.
[51] Pham, T., and Zhang, J. Two person zero-sum game in weak formulation and path dependent Bellman-Isaacs equation. arXiv preprint arXiv:1209.6605 (2012).
[52] Sîrbu, M. Stochastic Perron's method and elementary strategies for differential games. arXiv preprint arXiv:1305.5083 (2013).
[53] Soner, H. M., Touzi, N., and Zhang, J. Wellposedness of second order backward SDEs. Probability Theory and Related Fields 153, 1-2 (2012), 149–190.
[54] Soner, H. M., Touzi, N., and Zhang, J. Dual formulation of second order target problems. The Annals of Applied Probability 23, 1 (2013), 308–347.
[55] Song, Y. Properties of hitting times for G-martingales. arXiv preprint arXiv:1001.4907 (2010).
[56] Stroock, D. W., and Varadhan, S. R. S. Multidimensional diffusion processes, vol. 233. Springer, 1979.
[57] Zheng, W. Tightness results for laws of diffusion processes: application to stochastic mechanics. In Annales de l'Institut Henri Poincaré (B) Probabilités et Statistiques (1985), vol. 21, Gauthier-Villars, pp. 103–124.
Abstract
The aim of this thesis is to extend the viscosity solutions theory of partial differential equations to the space of continuous paths. It is well-known that, in the Markovian case, the BSDE theories of various kinds provide, through the Feynman-Kac formula, a stochastic representation for viscosity solutions of parabolic PDEs. The wellposedness of BSDEs also holds in a non-Markovian framework. It is then natural to ask whether one can write a non-Markovian Feynman-Kac formula relating non-Markovian BSDEs to the so-called path-dependent partial differential equations (PPDEs). In this thesis we give a definition of viscosity solutions of PPDEs for several kinds of equations. When the equations are fully nonlinear, this definition requires results on optimal stopping under nonlinear expectation, which we establish. These results allow us, under suitable assumptions, to prove the wellposedness of viscosity solutions of PPDEs and to show that several non-Markovian control problems can be studied via PPDEs.
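For context, the classical Markovian Feynman-Kac correspondence referred to in the abstract can be sketched in standard notation (this display is illustrative and not taken from the thesis itself): given coefficients $b$, $\sigma$, a generator $f$, and terminal data $g$, consider the decoupled forward-backward system

```latex
\begin{aligned}
X_s &= x + \int_t^s b(r, X_r)\,dr + \int_t^s \sigma(r, X_r)\,dW_r, \qquad t \le s \le T,\\
Y_s &= g(X_T) + \int_s^T f(r, X_r, Y_r, Z_r)\,dr - \int_s^T Z_r\,dW_r.
\end{aligned}
```

Then $u(t,x) := Y_t$ defines, under appropriate regularity conditions, a viscosity solution of the parabolic PDE

```latex
\partial_t u + b\,\partial_x u + \tfrac{1}{2}\sigma^2\,\partial_{xx} u
  + f\bigl(t, x, u, \sigma\,\partial_x u\bigr) = 0, \qquad u(T, \cdot) = g,
```

with the identification $Z_s = \sigma(s, X_s)\,\partial_x u(s, X_s)$. The thesis asks what replaces the right-hand PDE when the coefficients depend on the whole path of $X$ rather than on its current value.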
Asset Metadata
Creator: Ekren, Ibrahim (author)
Core Title: Path dependent partial differential equations and related topics
School: College of Letters, Arts and Sciences
Degree: Doctor of Philosophy
Degree Program: Mathematics
Publication Date: 07/03/2014
Defense Date: 05/09/2014
Publisher: University of Southern California (original), University of Southern California. Libraries (digital)
Tag: backward SDE, nonlinear expectation, OAI-PMH Harvest, viscosity solutions
Format: application/pdf (imt)
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Zhang, Jianfeng (committee chair), Kocer, Yilmaz (committee member), Ma, Jin (committee member)
Creator Email: ekren@usc.edu, ibrahim.ekren@gmail.com
Permanent Link (DOI): https://doi.org/10.25549/usctheses-c3-431239
Unique identifier: UC11286843
Identifier: etd-EkrenIbrah-2617.pdf (filename), usctheses-c3-431239 (legacy record id)
Legacy Identifier: etd-EkrenIbrah-2617.pdf
Dmrecord: 431239
Document Type: Dissertation
Rights: Ekren, Ibrahim
Type: texts
Source: University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA