ZERO-SUM STOCHASTIC DIFFERENTIAL GAMES IN WEAK FORMULATION AND RELATED NORMS FOR SEMI-MARTINGALES

by

Triet M Pham

A Dissertation Presented to the FACULTY OF THE USC GRADUATE SCHOOL, UNIVERSITY OF SOUTHERN CALIFORNIA, In Partial Fulfillment of the Requirements for the Degree DOCTOR OF PHILOSOPHY (MATHEMATICS)

May 2013

Copyright 2013 Triet M Pham

DEDICATION

To my family

ACKNOWLEDGMENTS

First and foremost, I would like to thank my advisor, Professor Jianfeng Zhang, for his tireless support and guidance during my graduate studies at USC. It was his patience, care and encouragement that motivated me to tackle challenging topics in stochastic analysis and obtain interesting results that finally culminate in this dissertation.

I also would like to express my gratitude to Professors Peter Baxendale, Larry Goldstein, Nicolai Haydn, Igor Kukavica, Sergey Lototsky, Jin Ma, Remy Mikulevicius and Wlodek Proskurowski for instructing me in various areas of mathematics. I sincerely thank Professors Jin Ma, Remy Mikulevicius and Fernando Zapatero for serving on my Dissertation Committee and Professors Peter Baxendale and Sergey Lototsky for serving on my Guidance Committee. Many thanks are also due to Professors Robert Mena, Kent Merryfield and Tangan Gao of the California State University at Long Beach for initiating me into the subject of mathematics.

Our secretaries Amy Yung, Arnold Deal, Alma Hernandez, Chaunte Williams and Adriana Cisneros deserve grateful appreciation for their help in many ways. I particularly appreciate my fellow graduate students with whom I enjoyed lively and beneficial discussions, among them: Diogo Bessam, Ibrahim Ekren, Christian Keller, Radoslav Marinov, Xin Wang and Jie Zhong.

Finally, I am greatly indebted to my family, for their love and support, especially my parents, without whom what I have achieved so far would have been impossible.
TABLE OF CONTENTS

DEDICATION
ACKNOWLEDGMENTS
ABSTRACT

Chapter 1 INTRODUCTION

Chapter 2 THE ZERO-SUM GAME UNDER FEEDBACK CONTROLS
2.1 Introduction
2.2 Preliminaries
2.2.1 The canonical space
2.2.2 Probability measures
2.2.3 Viscosity solutions of path dependent PDEs
2.3 The zero-sum game
2.3.1 The admissible controls
2.3.2 The Backward SDEs
2.3.3 The value processes
2.4 Dynamic Programming Principle
2.5 Existence of the game value
2.5.1 Viscosity solution properties
2.5.2 Comparison principle for viscosity solutions of PPDEs
2.6 Approximate saddle point
2.7 A counterexample in the strong formulation

Chapter 3 NORM ESTIMATES FOR SEMIMARTINGALES
3.1 Introduction
3.2 A Priori Estimates for Semimartingales
3.2.1 Some preliminary results
3.2.2 Square integrable semi-martingales
3.3 Semimartingales under G-expectation
3.3.1 Definitions
3.3.2 Characterization of P-semi-martingales
3.3.3 Doob-Meyer Decomposition for G-submartingales
3.4 Absolute continuity of the finite variational processes
3.4.1 The absolute continuity of P-semi-martingales
3.4.2 Absolute continuity of P-semi-martingales
3.5 G-martingale representation with component
3.5.1 Existence of
3.5.2 Uniqueness of
3.6 Some counterexamples
3.7 Good integrators

Chapter 4 DOUBLY REFLECTED BSDES
4.1 Introduction
4.2 A priori estimate of the local solution in the linear case
4.3 A priori estimates and wellposedness conditions of DRBSDEs
4.3.1 A priori estimate under the separation condition
4.3.2 Preliminary a priori estimate of the difference of solutions
4.3.3 Main result

LIST OF REFERENCES

ABSTRACT

In this dissertation, we study three topics under a common theme: nonlinear expectation related to zero-sum stochastic differential games. To develop this nonlinear expectation, we first study the stochastic game problem where both players use feedback controls. This is in contrast with the standard literature, where the setting of strategies versus controls is usually used. Such an approach has the drawback of creating an asymmetry between the two players. Using feedback controls, we prove the existence of the game value in a setting where both players use controls, which preserves the symmetry.
Moreover, we allow for a non-Markovian structure and characterize the value process as the unique viscosity solution of the path-dependent Bellman-Isaacs equation.

By the dynamic programming principle, the game value process can be viewed as a filtration-consistent nonlinear expectation $\mathcal{E}$. Moreover, $\mathcal{E}$ is dominated by the G-expectation, which is defined naturally from the game setting. It follows that the game value process is a G-submartingale. It is natural to conjecture that a G-submartingale is a semi-martingale under each probability $P$ that composes the G-expectation. Therefore, we study norm estimates for semi-martingales as our second topic. We introduce two new types of norms. The first characterizes square integrable semi-martingales. The second characterizes the absolute continuity of the finite variation part with respect to the Lebesgue measure. As an application of the first norm, we obtain the Doob-Meyer decomposition for G-submartingales.

Finally, we study the wellposedness problem of doubly reflected Backward Stochastic Differential Equations and establish some a priori estimates for DRBSDEs without imposing Mokobodski's condition.

CHAPTER 1
INTRODUCTION

Nonlinear expectation is both a challenging and exciting topic. It has a long history, but one of the modern systematic developments starts with Peng's g-expectation [48]. Let $\xi$ be an $\mathcal{F}_T$-measurable random variable. For a function $g : [0,T]\times\mathbb{R}\times\mathbb{R}^d\to\mathbb{R}$ satisfying certain conditions, we can define $\mathcal{E}^g_t(\xi) := Y_t$, where $Y$ is the solution to the following Backward Stochastic Differential Equation (BSDE for short):

$$Y_t = \xi + \int_t^T g(s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dB_s, \quad P_0\text{-a.s.},$$

where $B$ is a Brownian motion under $P_0$.

Peng also introduced another nonlinear expectation, related to volatility uncertainty, the so-called G-expectation [51].
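As a numerical aside (not part of the dissertation), the BSDE defining the g-expectation admits a very simple discretization in the degenerate case where the terminal value $\xi$ is deterministic and the driver depends only on $(s,y)$: then $Z\equiv 0$ and the BSDE reduces to a backward ODE, which a backward Euler scheme solves directly. The function names and the test driver below are illustrative assumptions, not notation from the text.

```python
import math

def g_expectation_deterministic(g, xi, T=1.0, n=100_000):
    """Backward Euler sketch for Y_t = xi + int_t^T g(s, Y_s) ds (so Z = 0);
    returns Y_0 = E^g_0(xi) in this deterministic special case."""
    dt = T / n
    y = xi  # terminal condition Y_T = xi
    for k in range(n, 0, -1):
        s = k * dt
        y = y + g(s, y) * dt  # step from t_k back to t_{k-1}
    return y

# Hypothetical driver g(s, y) = -rho * y, for which Y_t = xi * exp(-rho * (T - t)).
rho, xi, T = 0.5, 2.0, 1.0
y0 = g_expectation_deterministic(lambda s, y: -rho * y, xi, T)
assert abs(y0 - xi * math.exp(-rho * T)) < 1e-3
```

In the genuinely stochastic case one would instead need a regression-based scheme to approximate the conditional expectations defining $(Y, Z)$; the point here is only the backward-in-time structure.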
Roughly speaking, a G-expectation is a nonlinear expectation of the following form: $\mathbb{E}^G[\xi] := \sup_{P\in\mathcal{P}} E^P[\xi]$, where $\mathcal{P}$ is a family of mutually singular probability measures $P$ which in general does not admit a dominating probability measure. The conditional G-expectation can also be defined in such a way that it satisfies the filtration consistency property: for $0\le s<t\le T$,

$$\mathbb{E}^G_s\big(\mathbb{E}^G_t(\xi)\big) = \mathbb{E}^G_s(\xi).$$

It is clear that in the case of the g-expectation we also have

$$\mathcal{E}^g_s\big(\mathcal{E}^g_t(\xi)\big) = \mathcal{E}^g_s(\xi).$$

From these notions of conditional expectations, one can straightforwardly define g- (resp. G-) martingales, submartingales and supermartingales. In this context, the immediate question is how to characterize these processes. In particular, we would like to know whether the classical Doob-Meyer decomposition still applies to g- (resp. G-) submartingales and supermartingales.

In the g-expectation context, the answer to the Doob-Meyer decomposition question is positive; see e.g. [49]. In the case of the G-expectation, the problem is more involved. In particular, Soner, Touzi and Zhang [59] showed that a G-martingale is a $P$-supermartingale for all $P\in\mathcal{P}$. A similar structure holds for G-supermartingales, hence the Doob-Meyer decomposition is true for a G-supermartingale. However, the question remained open in the G-submartingale case. Intuitively, part of the difficulty is due to the G-expectation being sublinear; so it is harder to analyze G-submartingales than G-martingales and G-supermartingales.

Motivated by the G-expectation, we investigate a nonlinear expectation related to a zero-sum stochastic differential game, as follows. Let $\sigma : U\times V\to\mathbb{R}$ be given, where $U, V$ are subsets of some metric space. Let $P_0$ be the Wiener measure on the canonical space and $B$ the canonical process. We consider the forward process

$$X^{u,v}_t = \int_0^t \sigma(u_s, v_s)\,dB_s, \quad P_0\text{-a.s.},$$

where $u$ (resp. $v$) is a progressively measurable process taking values in $U$ (resp. $V$).
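The sublinear structure $\mathbb{E}^G[\xi] = \sup_{P\in\mathcal{P}} E^P[\xi]$ can be illustrated numerically (this sketch is not from the dissertation): restrict to the hypothetical sub-family of measures under which $B$ has constant volatility $\sigma$, estimate each $E^P[\varphi(B_T)]$ by Monte Carlo with common random numbers, and take the supremum over the grid.

```python
import math, random

def g_exp_monte_carlo(phi, sigmas, T=1.0, n_paths=200_000, seed=0):
    """sup over constant volatilities sigma of E[phi(sigma * B_T)]:
    a crude finite sub-family of the G-expectation's measure family P."""
    rng = random.Random(seed)
    zs = [rng.gauss(0.0, 1.0) for _ in range(n_paths)]  # B_T / sqrt(T) samples
    best = -math.inf
    for s in sigmas:
        est = sum(phi(s * math.sqrt(T) * z) for z in zs) / n_paths
        best = max(best, est)
    return best

# For phi(x) = x^2 the inner expectation is sigma^2 * T, so the supremum is
# attained at the largest volatility on the grid.
val = g_exp_monte_carlo(lambda x: x * x, [0.5, 1.0, 1.5], T=1.0)
assert abs(val - 1.5 ** 2 * 1.0) < 0.05
```

Note that no single measure dominates the family: paths with volatility $0.5$ and $1.5$ have mutually singular laws, which is exactly why the classical dominated-measure machinery fails for the G-expectation.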
Denote $P^{u,v} := P_0\circ (X^{u,v})^{-1}$ and define

$$\mathcal{E}^{P^{u,v}}_t(\xi) := \operatorname*{ess\,sup}_{u'|_{[0,t]} = u|_{[0,t]}}\; \operatorname*{ess\,inf}_{v'|_{[0,t]} = v|_{[0,t]}} E^{P^{u',v'}}_t[\xi], \quad P^{u,v}\text{-a.s.}$$

Under some conditions, for example when $\xi$ is uniformly continuous, there exists a random variable $\mathcal{E}_t(\xi)$ such that $\mathcal{E}_t(\xi) = \mathcal{E}^{P^{u,v}}_t(\xi)$, $P^{u,v}$-a.s., for all $P^{u,v}$. We would like to address the following questions about $\mathcal{E}$:

(i) Is $\mathcal{E}$ filtration consistent, i.e. for $0\le s<t\le T$, $\mathcal{E}_s(\mathcal{E}_t(\xi)) = \mathcal{E}_s(\xi)$?

(ii) Is $\mathcal{E}$ "well-defined", i.e.

$$\operatorname*{ess\,sup}_{u'|_{[0,t]} = u|_{[0,t]}}\; \operatorname*{ess\,inf}_{v'|_{[0,t]} = v|_{[0,t]}} E^{P^{u',v'}}_t[\xi] = \operatorname*{ess\,inf}_{v'|_{[0,t]} = v|_{[0,t]}}\; \operatorname*{ess\,sup}_{u'|_{[0,t]} = u|_{[0,t]}} E^{P^{u',v'}}_t[\xi], \quad P^{u,v}\text{-a.s.}?$$

(iii) How can one characterize an $\mathcal{E}$-martingale?

Indeed, the last question is connected to the problem of the G-submartingale Doob-Meyer decomposition. If we denote

$$\mathcal{E}^{G,P^{u,v}}_t(\xi) := \operatorname*{ess\,sup}_{u'|_{[0,t]} = u|_{[0,t]},\; v'|_{[0,t]} = v|_{[0,t]}} E^{P^{u',v'}}_t[\xi], \quad P^{u,v}\text{-a.s.},$$

then we can show (see Chapter 3) that an $\mathcal{E}$-martingale is a G-submartingale.

The rest of the dissertation gives answers to the above questions and is organized as follows. In Chapter 2 we study the problem of two-person zero-sum SDGs, establish the existence of the game value, and characterize the game value process as the unique viscosity solution to the path-dependent Bellman-Isaacs equation. From the results of Chapter 2 we will see that $\mathcal{E}$ defined above is indeed filtration consistent and "well-defined". In Chapter 3, we introduce the norms for semi-martingales under linear and nonlinear expectation and obtain the estimates. We prove the G-submartingale Doob-Meyer decomposition and complete the characterization of $\mathcal{E}$-martingales. In Chapter 4, we give the wellposedness condition for DRBSDEs and prove the a priori estimates.

CHAPTER 2
THE ZERO-SUM GAME UNDER FEEDBACK CONTROLS

2.1 Introduction

Game theory is the study of how people make decisions in conflicting situations. Modern game theory begins with the study of deterministic games.
The seminal paper of Fleming and Souganidis [26] gave two-person zero-sum SDGs a rigorous structure and opened the door to an extensive study of the subject.

We now describe the setting of two-person zero-sum SDGs. Let $u$ and $v$ denote the controls of the two players, $B$ a Brownian motion, and $X^{S,u,v}$ the controlled state process in the strong formulation:

$$X^{S,u,v}_t = x + \int_0^t b(s, u_s, v_s)\,ds + \int_0^t \sigma(s, u_s, v_s)\,dB_s, \quad (2.1)$$

and let $J(u,v)$ denote the corresponding value (utility or cost), which is determined by $X^{S,u,v}$, $B$, and $(u,v)$. The lower and upper values of the game are defined as:

$$\underline{V}_0 := \sup_{u\in\mathcal{U}}\inf_{v\in\mathcal{V}} J(u,v), \qquad \overline{V}_0 := \inf_{v\in\mathcal{V}}\sup_{u\in\mathcal{U}} J(u,v),$$

where $\mathcal{U}$ and $\mathcal{V}$ are appropriate sets of admissible controls. It is clear that $\underline{V}_0\le\overline{V}_0$. Two central problems in the game literature are:

(i) When does the game value exist, namely $V_0 := \underline{V}_0 = \overline{V}_0$?

(ii) Given the existence of the game value, is there a saddle point? That is, we want to find $(u^*, v^*)\in\mathcal{U}\times\mathcal{V}$ such that $V_0 = J(u^*, v^*) = \inf_{v\in\mathcal{V}} J(u^*, v) = \sup_{u\in\mathcal{U}} J(u, v^*)$.

However, when both players use controls, even under reasonable assumptions, the game value may not exist. We shall provide a counterexample, due to Buckdahn; see Example 2.7.1 below. To address this issue, Fleming and Souganidis [26] introduced the concept of strategy: a mapping from one player's space of controls to the other's. More specifically, for our setting above, we can define mappings $\alpha : \mathcal{V}\to\mathcal{U}$ and $\beta : \mathcal{U}\to\mathcal{V}$, where $\mathcal{A}$, $\mathcal{B}$ are appropriate sets of admissible strategies. The lower and upper values of the game become:

$$\underline{V}'_0 := \sup_{\alpha\in\mathcal{A}}\inf_{v\in\mathcal{V}} J(\alpha(v), v), \qquad \overline{V}'_0 := \inf_{\beta\in\mathcal{B}}\sup_{u\in\mathcal{U}} J(u, \beta(u)).$$

Under the Isaacs condition, and assuming the comparison principle for viscosity solutions of the corresponding Bellman-Isaacs equation holds, [26] showed that $\underline{V}'_0 = \overline{V}'_0$. This work has been extended by many authors in various aspects. In particular, Buckdahn and Li [7] defined $J(u,v)$ via Backward SDEs, and very recently Bayraktar and Yao [2] used doubly reflected BSDEs.
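A one-shot, finite analogue (illustrative only, and unrelated to Buckdahn's counterexample in Example 2.7.1) already shows both phenomena: $\sup\inf \le \inf\sup$ always holds, and the inequality can be strict, i.e. a game played with plain controls may have no value.

```python
def lower_upper(J):
    """Lower value sup_u inf_v J and upper value inf_v sup_u J
    for a finite payoff matrix J[i][j] (row player maximizes)."""
    lower = max(min(row) for row in J)                                 # sup_u inf_v
    upper = min(max(row[j] for row in J) for j in range(len(J[0])))    # inf_v sup_u
    return lower, upper

# Matching pennies: no pure saddle point, so the values differ.
lo, up = lower_upper([[1, -1], [-1, 1]])
assert (lo, up) == (-1, 1) and lo < up

# A matrix with a saddle point at entry (0, 0): the two values coincide.
lo2, up2 = lower_upper([[3, 5], [2, 1]])
assert lo2 == up2 == 3
```

The continuous-time analogue of "enlarging the class of moves until the values meet" is precisely what strategies (and, in this dissertation, feedback controls) accomplish.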
The main drawback of this approach, however, is that the two players have non-symmetric information, and for $\underline{V}'_0$ and $\overline{V}'_0$ the roles of the two players are switched. Consequently, it is less convenient to study the saddle point in this setting.

We propose to attack the SDG problem using feedback type controls, or the weak formulation. Note that in (2.1) the controls $(u,v)$ actually mean $u(B_\cdot), v(B_\cdot)$. Our state process with feedback controls is as follows:

$$X^{W,u,v}_t = x + \int_0^t b\big(s, u_s(X^{W,u,v}_\cdot), v_s(X^{W,u,v}_\cdot)\big)\,ds + \int_0^t \sigma\big(s, u_s(X^{W,u,v}_\cdot), v_s(X^{W,u,v}_\cdot)\big)\,dB_s. \quad (2.2)$$

Here $X^{W,u,v}_\cdot$ denotes the path of $X^{W,u,v}$, and the superscript $W$ stands for weak formulation. Moreover, we study the game in a non-Markovian framework, or say in a path-dependent manner, and thus the game value is a random process.

We remark that, when there is only drift control, the weak formulation has already been used in the literature; see Bensoussan and Lions [4] for the Markovian case and Hamadene and Lepeltier [31] for the non-Markovian case. In this dissertation we study the general case with diffusion controls in a non-Markovian setting, using a different approach from the above works. We also note that, in the controller-stopper problem with Markovian setting, Karatzas and Sudderth [36] studied the game with diffusion control in weak formulation, under certain strong conditions. Most recently, Nutz and Zhang [45] studied the controller-stopper problem in a non-Markovian setting and established the existence of optimal controls under natural conditions.

We begin by establishing the Dynamic Programming Principles for the upper and lower value processes. To handle the path-dependent nature of our setting, we utilize the notion of regular conditional probability distribution, due to Stroock and Varadhan [63]. The feedback type of controls plays a crucial role in our proof. Next, we show that the game has a value under the standard Isaacs condition.
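The defining feature of (2.2) is that each player's control observes the path of the state $X$, not the Brownian path. A crude Euler-Maruyama sketch (not from the dissertation; the bang-bang controls below are hypothetical) makes this concrete: at each step, the simulated history of $X$ is handed to the controls.

```python
import math, random

def simulate_feedback(b, sigma, u, v, x0=0.0, T=1.0, n=200, rng=None):
    """Euler-Maruyama sketch of (2.2), scalar case: the controls at time s
    see the simulated path of X up to s (feedback), not the Brownian path."""
    rng = rng or random.Random(0)
    dt = T / n
    path = [x0]
    for k in range(n):
        s = k * dt
        x_hist = path[:]                      # path of X on [0, s]
        us, vs = u(s, x_hist), v(s, x_hist)
        dW = rng.gauss(0.0, math.sqrt(dt))
        path.append(path[-1] + b(s, us, vs) * dt + sigma(s, us, vs) * dW)
    return path

# Hypothetical path-dependent (hence non-Markovian) controls reacting to the
# running extrema of the state.
u = lambda s, xs: 1.0 if max(xs) > 0.5 else 0.0
v = lambda s, xs: -1.0 if min(xs) < -0.5 else 0.0
path = simulate_feedback(lambda s, a, c: a + c, lambda s, a, c: 1.0, u, v)
assert len(path) == 201 and path[0] == 0.0
```

Of course the mathematical content of the weak formulation is the law $P^{W,u,v}$ of this process, not any single simulated path; the sketch only illustrates the information structure.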
The typical approach in the literature, which uses the characterization of the value function as a viscosity solution of the Bellman-Isaacs PDE (e.g. [26] and [7]), cannot be applied in our non-Markovian setting. Instead, we rely on the notion of viscosity solutions to path-dependent PDEs, introduced by Ekren, Keller, Touzi and Zhang [21] and Ekren, Touzi and Zhang [22, 23, 24]. Based on the Dynamic Programming Principle, we show that the lower value and the upper value of the game are viscosity solutions of the corresponding path-dependent Bellman-Isaacs equations. Then, under the Isaacs condition and assuming the uniqueness of viscosity solutions, we establish the existence of the game value.

2.2 Preliminaries

2.2.1 The canonical space

Let $\Omega := \{\omega\in C([0,T];\mathbb{R}^d) : \omega_0 = 0\}$ be the set of continuous paths starting from the origin, $B$ the canonical process, $\mathbb{F}$ the filtration generated by $B$, $P_0$ the Wiener measure, and $\Lambda := [0,T]\times\Omega$. Here and in the sequel, for notational simplicity, we use $0$ to denote vectors or matrices with appropriate dimensions whose components are all equal to $0$.

Let $\mathbb{S}^d$ denote the set of $d\times d$ matrices, $\mathbb{S}^d_{\ge 0} := \{\sigma\in\mathbb{S}^d : \sigma\ge 0\}$, and

$$x\cdot x' := \sum_{i=1}^d x_i x'_i \ \text{ for any } x, x'\in\mathbb{R}^d; \qquad \sigma : \sigma' := \mathrm{Trace}[\sigma\sigma'] \ \text{ for any } \sigma, \sigma'\in\mathbb{S}^d.$$

We define a norm on $\Omega$ and a metric on $\Lambda$ as follows: for any $(t,\omega), (t',\omega')\in\Lambda$,

$$\|\omega\|_t := \sup_{0\le s\le t}|\omega_s|, \qquad d_\infty\big((t,\omega),(t',\omega')\big) := |t - t'| + \big\|\omega_{\cdot\wedge t} - \omega'_{\cdot\wedge t'}\big\|_T. \quad (2.3)$$

Then $(\Omega, \|\cdot\|_T)$ and $(\Lambda, d_\infty)$ are complete metric spaces.

Definition 2.2.1. Let $Y : \Lambda\to\mathbb{R}$ be an $\mathbb{F}$-progressively measurable process.
(i) We say $Y\in L^\infty(\Lambda)$ if $Y$ is bounded.
(ii) We say $Y\in C^0(\Lambda)$ (resp. $UC(\Lambda)$) if $Y$ is continuous (resp. uniformly continuous) in $(t,\omega)$. Moreover, we denote $C^0_b(\Lambda) := C^0(\Lambda)\cap L^\infty(\Lambda)$ and $UC_b(\Lambda) := UC(\Lambda)\cap L^\infty(\Lambda)$.
(iii) We say $Y\in\overline{\mathcal{U}}$ if $Y$ is bounded from above, upper semi-continuous (u.s.c. for short) from the right in $t$, and there exists a modulus of continuity function $\rho$ such that for $(t,\omega), (t',\omega')\in\Lambda$:

$$Y(t,\omega) - Y(t',\omega')\le \rho\big(d_\infty((t,\omega),(t',\omega'))\big) \quad \text{whenever } t\le t'; \quad (2.4)$$

and we say $Y\in\underline{\mathcal{U}}$ if $-Y\in\overline{\mathcal{U}}$.
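The pseudometric in (2.3) compares two stopped paths, each frozen at its own time, and then adds the time gap. A discrete sketch (illustrative only, on a fixed sampling grid; not from the dissertation) shows how the freezing works:

```python
def path_norm(omega, t, dt):
    """||omega||_t = sup_{0 <= s <= t} |omega_s|, path sampled with step dt."""
    k = int(round(t / dt))
    return max(abs(x) for x in omega[:k + 1])

def d_infty(t1, omega1, t2, omega2, dt):
    """d_infty((t, omega), (t', omega')) = |t - t'| + ||omega_{.^t} - omega'_{.^t'}||_T:
    freeze each path at its own time, then take the sup distance over [0, T]."""
    n = len(omega1)  # both paths sampled on the same grid over [0, T]
    k1, k2 = int(round(t1 / dt)), int(round(t2 / dt))
    stopped1 = [omega1[min(i, k1)] for i in range(n)]
    stopped2 = [omega2[min(i, k2)] for i in range(n)]
    return abs(t1 - t2) + max(abs(a - b) for a, b in zip(stopped1, stopped2))

# Grid of step 0.5 on [0, 1]: omega = (0, 1, 3), omega' = (0, 2, 2).
dt = 0.5
assert path_norm([0.0, 1.0, 3.0], 1.0, dt) == 3.0
# Freeze omega at t = 0.5 (constant 1 afterwards) and omega' at t' = 1.0:
# sup difference is 1, and the time gap contributes |0.5 - 1.0| = 0.5.
assert d_infty(0.5, [0.0, 1.0, 3.0], 1.0, [0.0, 2.0, 2.0], dt) == 1.5
```

The freezing is what makes $d_\infty$ compare the *information available* at the two times, which is the right topology for progressively measurable functionals.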
It is clear that $\overline{\mathcal{U}}\cap\underline{\mathcal{U}} = UC_b(\Lambda)$. Moreover, we denote by $L^\infty(\Lambda;\mathbb{R}^d)$ the space of $\mathbb{R}^d$-valued processes whose components are in $L^\infty(\Lambda)$, and define other similar notations in the same spirit.

We next introduce the shifted spaces. Let $0\le s\le t\le T$.

- Let $\Omega^t := \{\omega\in C([t,T];\mathbb{R}^d) : \omega_t = 0\}$ be the shifted canonical space, $B^t$ the shifted canonical process on $\Omega^t$, $\mathbb{F}^t$ the shifted filtration generated by $B^t$, $P^t_0$ the Wiener measure on $\Omega^t$, and $\Lambda^t := [t,T]\times\Omega^t$.

- Define $\|\cdot\|_s$ on $\Omega^t$, $d_\infty$ on $\Lambda^t\times\Lambda^t$, and $C^0(\Lambda^t)$ etc. in the spirit of (2.3) and Definition 2.2.1.

- For $\omega\in\Omega^s$ and $\omega'\in\Omega^t$, define the concatenation path $\omega\otimes_t\omega'\in\Omega^s$ by:

$$(\omega\otimes_t\omega')(r) := \omega_r\,\mathbf{1}_{[s,t)}(r) + (\omega_t + \omega'_r)\,\mathbf{1}_{[t,T]}(r), \quad \text{for all } r\in[s,T].$$

- Let $s\in[0,T)$ and $\omega\in\Omega^s$. For an $\mathcal{F}^s_T$-measurable random variable $\xi$, an $\mathbb{F}^s$-progressively measurable process $X$ on $\Omega^s$, and $t\in(s,T]$, define the shifted $\mathcal{F}^t_T$-measurable random variable $\xi^{t,\omega}$ and the shifted $\mathbb{F}^t$-progressively measurable process $X^{t,\omega}$ on $\Omega^t$ by:

$$\xi^{t,\omega}(\omega') := \xi(\omega\otimes_t\omega'), \qquad X^{t,\omega}(\omega') := X(\omega\otimes_t\omega'), \quad \text{for all } \omega'\in\Omega^t.$$

It is clear that, for any $(t,\omega)\in\Lambda$ and any $Y\in L^\infty(\Lambda)$, we have $Y^{t,\omega}\in L^\infty(\Lambda^t)$. A similar property holds for the other spaces defined in Definition 2.2.1.

2.2.2 Probability measures

In this subsection we introduce the probability measures on $\Omega^t$ in different formulations. First, let $\sigma\in L^\infty(\Lambda;\mathbb{S}^d_{\ge 0})$ and $b\in L^\infty(\Lambda;\mathbb{R}^d)$. Define

$$P^{S,\sigma,b} := P_0\circ(X^{S,\sigma,b})^{-1}, \quad \text{where} \quad X^{S,\sigma,b}_t := \int_0^t b_s\,ds + \int_0^t \sigma_s\,dB_s, \quad P_0\text{-a.s.} \quad (2.5)$$

Here the superscript $S$ stands for strong formulation. We next introduce the corresponding weak formulation. We denote a probability measure $P$ on $\Omega$ by $P^{W,\sigma,b}$ if

$$M^b_t := B_t - \int_0^t b_s\,ds \ \text{ is a } P\text{-martingale and } \ \langle M^b\rangle_t = \int_0^t \sigma^2_s\,ds, \quad P\text{-a.s.} \quad (2.6)$$

Here the quadratic variation $\langle M^b\rangle$ is under $P$. We remark that $P^{W,\sigma,b} := P_0\circ(X^{W,\sigma,b})^{-1}$, where $X^{W,\sigma,b}$ is a weak solution of the following SDE (with random measurable coefficients):

$$X^{W,\sigma,b}_t := \int_0^t b_s(X^{W,\sigma,b}_\cdot)\,ds + \int_0^t \sigma_s(X^{W,\sigma,b}_\cdot)\,dB_s, \quad P_0\text{-a.s.} \quad (2.7)$$

In other words, we are considering feedback type controls.
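The concatenation $\omega\otimes_t\omega'$ glues a past path to a shifted future path while keeping continuity at the junction. A discrete sketch (illustrative, not from the dissertation):

```python
def concat(omega, omega_prime, t_idx):
    """Discrete analogue of the concatenation omega (x)_t omega':
    follow omega up to index t_idx, then continue with the values of
    omega' shifted by omega[t_idx], so the glued path is continuous at t."""
    head = omega[:t_idx + 1]
    anchor = omega[t_idx]
    tail = [anchor + w for w in omega_prime[1:]]  # omega'[t] = 0 by convention
    return head + tail

omega = [0.0, 1.0, 2.0]          # path on [0, t]
omega_prime = [0.0, -1.0, 3.0]   # shifted path on [t, T], starting at 0
glued = concat(omega, omega_prime, 2)
assert glued == [0.0, 1.0, 2.0, 1.0, 5.0]  # junction value 2.0 carried forward
```

This is exactly the operation behind the shifted objects $\xi^{t,\omega}$ and $X^{t,\omega}$: fix the history $\omega$, and view the functional as a function of the future increments alone.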
In this chapter we shall use the weak formulation, which is more convenient for proving the Dynamic Programming Principle. We note that, for arbitrarily given $(\sigma, b)$, the SDE (2.7) may not have a weak solution, namely there may be no $P$ such that $P = P^{W,\sigma,b}$. Let

$$\mathcal{A}^W := \Big\{(\sigma,b)\in L^\infty(\Lambda;\mathbb{S}^d_{\ge0})\times L^\infty(\Lambda;\mathbb{R}^d) : \text{SDE (2.7) has a unique weak solution}\Big\},$$
$$\mathcal{A}^S := \Big\{(\sigma,b)\in L^\infty(\Lambda;\mathbb{S}^d_{\ge0})\times L^\infty(\Lambda;\mathbb{R}^d) : \text{SDE (2.7) has a unique strong solution}\Big\}. \quad (2.8)$$

For probability measures on the shifted space $\Omega^t$, we define $P^{S,t,\sigma,b}$, $P^{W,t,\sigma,b}$, $\mathcal{A}^{W,t}$, $\mathcal{A}^{S,t}$, etc. similarly.

We next introduce the regular conditional probability distribution (r.c.p.d. for short) due to Stroock and Varadhan [63]. We shall follow the presentation in Soner, Touzi and Zhang [61]. Let $P$ be an arbitrary probability measure on $\Omega$ and $\tau$ an $\mathbb{F}$-stopping time. The r.c.p.d. $\{P^{\tau,\omega}, \omega\in\Omega\}$ satisfies:

- For each $\omega$, $P^{\tau,\omega}$ is a probability measure on $\mathcal{F}^{\tau(\omega)}_T$;
- For every bounded $\mathcal{F}_T$-measurable random variable $\xi$:

$$E^P[\xi\,|\,\mathcal{F}_\tau](\omega) = E^{P^{\tau,\omega}}\big[\xi^{\tau(\omega),\omega}\big], \quad P\text{-a.s.} \quad (2.9)$$

The following simple lemma will be important for the proof of the Dynamic Programming Principle in Section 2.4 below.

Lemma 2.2.2. Let $(\sigma, b)\in\mathcal{A}^S$ (resp. $\mathcal{A}^W$), $t\in[0,T]$, $\{E_i, 1\le i\le n\}\subset\mathcal{F}_t$ a partition of $\Omega$, and $(\sigma_i, b_i)\in\mathcal{A}^{S,t}$ (resp. $\mathcal{A}^{W,t}$). Define

$$\bar\sigma(\omega) := \sigma(\omega)\mathbf{1}_{[0,t)} + \sum_{i=1}^n \sigma_i(\omega^t)\mathbf{1}_{E_i}\mathbf{1}_{[t,T]}, \qquad \bar b(\omega) := b(\omega)\mathbf{1}_{[0,t)} + \sum_{i=1}^n b_i(\omega^t)\mathbf{1}_{E_i}\mathbf{1}_{[t,T]}.$$

Then $(\bar\sigma, \bar b)\in\mathcal{A}^S$ (resp. $\mathcal{A}^W$), and, for $i = 1,\dots,n$,

$$P^{\bar\sigma,\bar b} = P^{\sigma,b} \ \text{on } \mathcal{F}_t \quad \text{and} \quad \big(P^{\bar\sigma,\bar b}\big)^{t,\omega} = P^{t,\sigma_i,b_i} \ \text{for } P^{\sigma,b}\text{-a.e. } \omega\in E_i.$$

Proof. We prove the case $\mathcal{A}^S$ only; the case $\mathcal{A}^W$ can be proved similarly. Let $X$ be the unique strong solution to SDE (2.7) with coefficients $(\sigma, b)$, and $X^i$ the unique strong solution to SDE (2.7) on $[t,T]$ with coefficients $(\sigma_i, b_i)$. First, denote

$$\bar X_s = X_s\mathbf{1}_{[0,t)}(s) + \Big[X_t + \sum_{i=1}^n \mathbf{1}_{E_i}(X)X^i_s(B^t)\Big]\mathbf{1}_{[t,T]}(s), \quad 0\le s\le T.$$

One can check straightforwardly that $\bar X$ is a strong solution to SDE (2.7) with coefficients $(\bar\sigma, \bar b)$. On the other hand, let $\tilde X$ be an arbitrary strong solution to SDE (2.7) with coefficients $(\bar\sigma, \bar b)$.
Then both $\bar X$ and $\tilde X$ satisfy SDE (2.7) on $[0,t]$ with coefficients $(\sigma, b)$. By the uniqueness assumption on $(\sigma, b)$, we see that $\bar X = \tilde X$ on $[0,t]$, $P_0$-a.s. In particular, this implies $\mathbf{1}_{E_i}(\bar X) = \mathbf{1}_{E_i}(\tilde X)$. Then for $P_0$-a.e. $\omega\in\Omega$, there exists a unique $i$ such that $\mathbf{1}_{E_i}(\bar X) = \mathbf{1}_{E_i}(\tilde X) = 1$. Thus both $\bar X^{t,\omega}$ and $\tilde X^{t,\omega}$ satisfy SDE (2.7) on $[t,T]$ with coefficients $(\sigma_i, b_i)$. By the uniqueness assumption on $(\sigma_i, b_i)$, we see that $\bar X^{t,\omega} = \tilde X^{t,\omega}$, $P^t_0$-a.s. This implies that $\bar X = \tilde X$, $P_0$-a.s., and therefore $(\bar\sigma, \bar b)\in\mathcal{A}^S$.

Finally, since $\bar X = X$ on $[0,t]$, we have $P^{\bar\sigma,\bar b} = P^{\sigma,b}$ on $\mathcal{F}_t$. Moreover, since $\bar X^{t,\omega}(B^t) = \bar X_t(\omega) + X^i(B^t)$ whenever $\mathbf{1}_{E_i}(X) = 1$, by the definition of the r.c.p.d. we see that $(P^{\bar\sigma,\bar b})^{t,\omega} = P^{t,\sigma_i,b_i}$ for $P^{\sigma,b}$-a.e. $\omega\in E_i$.

2.2.3 Viscosity solutions of path dependent PDEs

Our notion of viscosity solutions of Path Dependent PDEs (PPDEs for short) was introduced by Ekren, Keller, Touzi and Zhang [21] for semilinear PPDEs and by Ekren, Touzi and Zhang [22, 23] for fully nonlinear PPDEs. We follow the presentation in [22, 23] here. For any constant $L>0$, denote

$$\mathcal{P}^t_L := \Big\{P^{W,t,\sigma,b} : |b|\le L,\; 0\le\sigma\le\sqrt{2L}\,I_d\Big\}, \quad \text{and} \quad \mathcal{P}^t_\infty := \bigcup_{L>0}\mathcal{P}^t_L. \quad (2.10)$$

We remark that in $\mathcal{P}^t_L$ we do not require the uniqueness of the weak solution.

Let $Y\in C^0(\Lambda)$. For $t\in[0,T)$, we define the right time-derivative, if it exists, as in Dupire [18] and Cont and Fournie [12]:

$$\partial_t Y(t,\omega) := \lim_{h\downarrow 0}\frac{1}{h}\Big[Y\big(t+h, \omega_{\cdot\wedge t}\big) - Y(t,\omega)\Big]. \quad (2.11)$$

For the final time $T$, we define, whenever the following limit exists:

$$\partial_t Y(T,\omega) := \lim_{t\uparrow T}\partial_t Y(t,\omega). \quad (2.12)$$

Definition 2.2.3. (i) We say $Y\in C^{1,2}(\Lambda)$ if $Y\in C^0(\Lambda)$, $\partial_t Y\in C^0(\Lambda)$, and there exist $\partial_\omega Y\in C^0(\Lambda;\mathbb{R}^d)$, $\partial^2_{\omega\omega}Y\in C^0(\Lambda;\mathbb{S}^d)$ such that, for any $(s,\omega)\in[0,T)\times\Omega$ and any $P\in\mathcal{P}^s_\infty$, $Y^{s,\omega}$ is a local $P$-semi-martingale and it holds that:

$$dY^{s,\omega}_t = (\partial_t Y_t)^{s,\omega}\,dt + (\partial_\omega Y_t)^{s,\omega}\cdot dB^s_t + \frac12(\partial^2_{\omega\omega}Y_t)^{s,\omega} : d\langle B^s\rangle_t, \quad P\text{-a.s.} \quad (2.13)$$

(ii) We say $Y\in C^{1,2}_b(\Lambda)$ if $Y\in UC_b(\Lambda)$, $\partial_t Y\in C^0_b(\Lambda)$, and the above $\partial_\omega Y$ and $\partial^2_{\omega\omega}Y$ exist and are in $C^0_b(\Lambda;\mathbb{R}^d)$ and $C^0_b(\Lambda;\mathbb{S}^d)$, respectively.
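The Dupire derivative (2.11) bumps time while freezing the path at $t$, so only the explicit time-dependence of the functional survives. A finite-difference sketch (illustrative only; the example functional is hypothetical, not from the dissertation):

```python
def dupire_dt(Y, t, omega, h=1e-6):
    """Finite-difference approximation of the Dupire time derivative (2.11):
    extend the path by freezing it at time t, then bump t by h."""
    frozen = lambda s: omega(min(s, t))   # the stopped path omega_{. ^ t}
    return (Y(t + h, frozen) - Y(t, omega)) / h

# Hypothetical functional Y(t, omega) = t + omega(t)^2 on the path omega_s = 2s.
Y = lambda t, omega: t + omega(t) ** 2
omega = lambda s: 2.0 * s
# Along the frozen path, omega(t) does not move, so only the explicit
# t-dependence contributes: the Dupire derivative is 1, whatever the path.
val = dupire_dt(Y, t=0.3, omega=omega)
assert abs(val - 1.0) < 1e-6
```

Note the contrast with the ordinary time derivative along the path, which here would pick up the extra term $2\omega_t\cdot 2 = 8t$ from the motion of $\omega$ itself; (2.11) deliberately excludes it.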
Next, let $\mathcal{T}$ denote the set of $\mathbb{F}$-stopping times, and $\mathcal{H}\subset\mathcal{T}$ the subset of those hitting times $\mathrm{H}$ of the following form: for some open and convex set $O\subset\mathbb{R}^d$ containing $0$ and some $0<t_0\le T$,

$$\mathrm{H} := \inf\{t : B_t\in O^c\}\wedge t_0 = \inf\{t : d(\omega_t, O^c) = 0\}\wedge t_0. \quad (2.14)$$

We may define $C^{1,2}(\Lambda^t)$, $C^{1,2}_b(\Lambda^t)$, $\mathcal{T}^t$, and $\mathcal{H}^t$ similarly. It is clear that, for any $(t,\omega)$ and $Y\in C^{1,2}(\Lambda)$ (resp. $Y\in C^{1,2}_b(\Lambda)$), we have $Y^{t,\omega}\in C^{1,2}(\Lambda^t)$ (resp. $Y^{t,\omega}\in C^{1,2}_b(\Lambda^t)$), and for any $\mathrm{H}\in\mathcal{H}$ such that $\mathrm{H}(\omega)>t$, we have $\mathrm{H}^{t,\omega}\in\mathcal{H}^t$.

For any $L>0$, $(t,\omega)\in\Lambda$ with $t<T$, and any $\mathbb{F}$-adapted process $Y$, define

$$\underline{\mathcal{A}}^L Y(t,\omega) := \Big\{\varphi\in C^{1,2}_b(\Lambda^t) : \text{for some } \mathrm{H}\in\mathcal{H}^t,\; (\varphi - Y^{t,\omega})(t,0) = \inf_{\tau\in\mathcal{T}^t}\inf_{P\in\mathcal{P}^t_L} E^P\big[(\varphi - Y^{t,\omega})_{\tau\wedge\mathrm{H}}\big]\Big\},$$
$$\overline{\mathcal{A}}^L Y(t,\omega) := \Big\{\varphi\in C^{1,2}_b(\Lambda^t) : \text{for some } \mathrm{H}\in\mathcal{H}^t,\; (\varphi - Y^{t,\omega})(t,0) = \sup_{\tau\in\mathcal{T}^t}\sup_{P\in\mathcal{P}^t_L} E^P\big[(\varphi - Y^{t,\omega})_{\tau\wedge\mathrm{H}}\big]\Big\}. \quad (2.15)$$

We are now ready to introduce viscosity solutions of PPDEs. Consider the following PPDE with generator $G$:

$$-\partial_t Y_t - G(t,\omega, Y_t, \partial_\omega Y_t, \partial^2_{\omega\omega}Y_t) = 0. \quad (2.16)$$

Definition 2.2.4. (i) Let $L>0$. We say $Y\in\overline{\mathcal{U}}$ (resp. $\underline{\mathcal{U}}$) is a viscosity $L$-subsolution (resp. $L$-supersolution) of PPDE (2.16) if, for any $(t,\omega)\in[0,T)\times\Omega$ and any $\varphi\in\underline{\mathcal{A}}^L Y(t,\omega)$ (resp. $\varphi\in\overline{\mathcal{A}}^L Y(t,\omega)$):

$$\Big[-\partial_t\varphi - G^{t,\omega}\big(\cdot, Y^{t,\omega}, \partial_\omega\varphi, \partial^2_{\omega\omega}\varphi\big)\Big](t,0)\le (\text{resp.}\ge)\; 0.$$

(ii) We say $Y\in\overline{\mathcal{U}}$ (resp. $\underline{\mathcal{U}}$) is a viscosity subsolution (resp. supersolution) of PPDE (2.16) if $Y$ is a viscosity $L$-subsolution (resp. $L$-supersolution) of PPDE (2.16) for some $L>0$.

(iii) We say $Y\in UC_b(\Lambda)$ is a viscosity solution of PPDE (2.16) if it is both a viscosity subsolution and a viscosity supersolution.

Remark 2.2.5. For $0<L_1<L_2$, obviously $\mathcal{P}^t_{L_1}\subset\mathcal{P}^t_{L_2}$ and $\underline{\mathcal{A}}^{L_2}Y(t,\omega)\subset\underline{\mathcal{A}}^{L_1}Y(t,\omega)$. Then one can easily check that a viscosity $L_1$-subsolution must be a viscosity $L_2$-subsolution. Consequently, $Y$ is a viscosity subsolution of PPDE (2.16) iff there exists an $L$ such that, for all $\tilde L\ge L$, $Y$ is a viscosity $\tilde L$-subsolution. However, we require the same $L$ for all $(t,\omega)$. A similar statement holds for the viscosity supersolution.

Remark 2.2.6. (i) In the Markovian case, namely $Y(t,\omega) = Y(t,\omega_t)$ and $G = g(t,\omega_t, y, z, \gamma)$, our definition of viscosity solution is stronger than the standard viscosity solution in the PDE literature. That is, if $Y$ is a viscosity solution in our sense, then it is a viscosity solution in the standard sense as in Crandall, Ishii, and Lions [13].

(ii) The state space of PPDEs is not locally compact, and thus the standard arguments using Ishii's lemma do not work in the path-dependent case. The main idea of [21, 22, 23] is to transform the definition into an optimal stopping problem, as in (2.15), which helps to obtain the comparison and hence the uniqueness of viscosity solutions.

2.3 The zero-sum game

2.3.1 The admissible controls

Let $U$ and $V$ be two Borel measurable spaces equipped with some topology. From now on we shall fix two $\mathbb{F}$-progressively measurable mappings:

$$\sigma : [0,T]\times U\times V\to\mathbb{S}^d_{\ge 0}, \qquad b : [0,T]\times U\times V\to\mathbb{R}^d.$$

We shall always assume:

Assumption 2.3.1. $\sigma$ and $b$ are bounded by a constant $C_0$.

For $t\in[0,T]$, let $\mathcal{U}_t$ (resp. $\mathcal{V}_t$) denote the set of $U$-valued (resp. $V$-valued), $\mathbb{F}^t$-progressively measurable processes $u$ (resp. $v$) on $\Lambda^t$. Throughout the chapter, when $(u,v)\in\mathcal{U}_t\times\mathcal{V}_t$ is given, for any function $\varphi$ on $[0,T]\times U\times V$ with appropriate dimension, we denote

$$\hat\varphi_s := \hat\varphi^{t,u,v}_s := \varphi(s, u_s, v_s). \quad (2.17)$$

Define

$$\mathscr{A}^t := \Big\{(u,v)\in\mathcal{U}_t\times\mathcal{V}_t : \big(\hat\sigma(\cdot,u,v), \widehat{\sigma b}(\cdot,u,v)\big)\in\mathcal{A}^{S,t}\Big\},$$
$$X^{t,u,v} := X^{W,t,\hat\sigma(u,v),\widehat{\sigma b}(u,v)}, \qquad P^{t,u,v} := P^{W,t,\hat\sigma(u,v),\widehat{\sigma b}(u,v)}, \quad \text{for } (u,v)\in\mathscr{A}^t. \quad (2.18)$$

We note that, from now on, the $\sigma, b$ of the previous section will actually be $\sigma(t, u_t, v_t)$ and $\hat\sigma\hat b(t, u_t, v_t)$ for some $(u,v)\in\mathscr{A}^0$. In particular, for the convenience of studying the BSDE later, we consider the SDE in the form

$$X^{t,u,v}_s = \int_t^s \sigma\big(r, u_r(X^{t,u,v}_\cdot), v_r(X^{t,u,v}_\cdot)\big)\Big[dB^t_r + b\big(r, u_r(X^{t,u,v}_\cdot), v_r(X^{t,u,v}_\cdot)\big)\,dr\Big], \quad P^t_0\text{-a.s.} \quad (2.19)$$

Moreover, one can easily check that there exists a $P^{t,u,v}$-Brownian motion $W^{t,u,v}$ such that

$$dB^t_s = \sigma(s, u_s, v_s)\Big[dW^{t,u,v}_s + b(s, u_s, v_s)\,ds\Big], \quad P^{t,u,v}\text{-a.s.} \quad (2.20)$$

To formulate the game problem, we shall restrict the controls to subsets $\mathscr{U}_t\subset\mathcal{U}_t$ and $\mathscr{V}_t\subset\mathcal{V}_t$ whose elements $u$ and $v$ take the following form:

$$u = \sum_{i=0}^{m-1}\sum_{j=1}^{n_i} u_{ij}\,\mathbf{1}_{E^i_j}\mathbf{1}_{[t_i, t_{i+1})}, \qquad v = \sum_{i=0}^{m-1}\sum_{j=1}^{n_i} v_{ij}\,\mathbf{1}_{E^i_j}\mathbf{1}_{[t_i, t_{i+1})}, \quad (2.21)$$

where $t = t_0<\cdots<t_m = T$, $\{E^i_j\}_{1\le j\le n_i}\subset\mathcal{F}^t_{t_i}$ is a partition, and $u_{ij}$, $v_{ij}$ are constants. It is clear that, for $u\in\mathcal{U}_t$,

$$u\in\mathscr{U}_t \ \text{ if and only if } u \text{ takes finitely many values.} \quad (2.22)$$

We have the following simple lemma.

Lemma 2.3.2. (i) $\mathscr{U}_0$ is closed under pasting. That is, for $u\in\mathscr{U}_0$, $t\in[0,T]$, $u_i\in\mathscr{U}_t$, $i = 1,\dots,n$, and disjoint $\{E_i, i = 1,\dots,n\}\subset\mathcal{F}_t$, the following $\bar u$ is also in $\mathscr{U}_0$:

$$\bar u := u\,\mathbf{1}_{[0,t)} + \Big[\sum_{i=1}^n u_i(\omega^t)\,\mathbf{1}_{E_i} + u\,\mathbf{1}_{\cap_{i=1}^n E^c_i}\Big]\mathbf{1}_{[t,T]}.$$

(ii) Under Assumption 2.3.1, it holds that $\mathscr{U}_t\times\mathscr{V}_t\subset\mathscr{A}^t$.

Proof. In light of (2.22), (i) is obvious. To see (ii), we notice that any pair of constant processes $(u,v)$ is obviously in $\mathscr{A}^t$. Then (ii) follows from repeated use of Lemma 2.2.2.

2.3.2 The Backward SDEs

Let $f(t,\omega,y,z,u,v) : \Lambda\times\mathbb{R}\times\mathbb{R}^d\times U\times V\to\mathbb{R}$ be an $\mathbb{F}$-progressively measurable nonlinear generator. Throughout the chapter, we shall assume:

Assumption 2.3.3. (i) $f(t,\omega,0,0,u,v)$ is bounded by a constant $C_0$, and uniformly continuous in $(t,\omega)$ with a modulus of continuity function $\rho_0$.
(ii) $f$ is uniformly Lipschitz in $(y,z)$ with a Lipschitz constant $L_0$.

Now for any $(t,\omega)\in\Lambda$, $(u,v)\in\mathscr{U}_t\times\mathscr{V}_t$, $\tau\in\mathcal{T}^t$, and $\mathcal{F}^t_\tau$-measurable terminal condition $\eta$, recall the notation (2.17) and consider the following BSDE on $[t,\tau]$:

$$Y_s = \eta + \int_s^\tau f^{t,\omega}(r, B^t, Y_r, \hat Z_r, u_r, v_r)\,dr - \int_s^\tau Z_r\,dB^t_r, \quad P^{t,u,v}\text{-a.s.} \quad (2.23)$$

We have the following simple lemma.

Lemma 2.3.4. Let Assumptions 2.3.1 and 2.3.3(ii) hold, and

$$I^2_0(t,\omega,u,v) := E^{P^{t,u,v}}\Big[|\eta|^2 + \int_t^\tau\big|f^{t,\omega}(s, B^t, 0, 0, u_s, v_s)\big|^2\,ds\Big]<\infty.$$

Then BSDE (2.23) has a unique solution, denoted as $\big(Y^{t,\omega,u,v}[\tau,\eta], Z^{t,\omega,u,v}[\tau,\eta]\big)$, and there exists a constant $C$, depending only on $C_0$, $L_0$, $T$, and the dimension $d$, such that

$$E^{P^{t,u,v}}\Big[\sup_{t\le s\le\tau}\big|Y^{t,\omega,u,v}_s[\tau,\eta]\big|^2 + \int_t^\tau\big|\hat Z^{t,\omega,u,v}_s[\tau,\eta]\big|^2\,ds\Big]\le C\,I^2_0(t,\omega,u,v). \quad (2.24)$$

Moreover, if $\tau\le t+\delta$, then

$$E^{P^{t,u,v}}\Big[\sup_{t\le s\le\tau}\big|Y^{t,\omega,u,v}_s[\tau,\eta]\big|\Big]\le C\big(E^{P^{t,u,v}}[|\eta|^2]\big)^{\frac12} + C\,\delta^{\frac12}\,I_0(t,\omega,u,v). \quad (2.25)$$

Proof. Recall the $P^{t,u,v}$-Brownian motion $W^{t,u,v}$ defined in (2.20). One may rewrite BSDE (2.23) as

$$Y_s = \eta + \int_s^\tau\Big[f^{t,\omega}(r, B^t, Y_r, \hat Z_r, u_r, v_r) + \hat Z_r\cdot b(r, u_r, v_r)\Big]\,dr - \int_s^\tau\hat Z_r\,dW^{t,u,v}_r, \quad P^{t,u,v}\text{-a.s.}$$

Then (2.24) follows from standard BSDE arguments. Moreover, note that

$$Y_s = \eta + \int_s^\tau\Big[f^{t,\omega}(r, B^t, 0, 0, u_r, v_r) + \alpha_r Y_r + \hat Z_r\cdot\beta_r\Big]\,dr - \int_s^\tau\hat Z_r\,dW^{t,u,v}_r, \quad P^{t,u,v}\text{-a.s.},$$

where $\alpha, \beta$ are bounded. Denote

$$\Gamma_r := \exp\Big(\int_t^r\beta_s\cdot dW^{t,u,v}_s + \int_t^r\big[\alpha_s - \tfrac12|\beta_s|^2\big]\,ds\Big).$$

Then

$$Y_t = \Gamma_\tau\,\eta + \int_t^\tau\Gamma_r\,f^{t,\omega}(r, B^t, 0, 0, u_r, v_r)\,dr - \int_t^\tau[\cdots]\,dW^{t,u,v}_r, \quad P^{t,u,v}\text{-a.s.}$$

Thus

$$|Y_t| = \Big|E^{P^{t,u,v}}\Big[\Gamma_\tau\,\eta + \int_t^\tau\Gamma_r\,f^{t,\omega}(r, B^t, 0, 0, u_r, v_r)\,dr\Big]\Big|$$
$$\le \big(E^{P^{t,u,v}}[\Gamma^2_\tau]\big)^{\frac12}\big(E^{P^{t,u,v}}[|\eta|^2]\big)^{\frac12} + \big(E^{P^{t,u,v}}\big[\|\Gamma\|^2_\tau\big]\big)^{\frac12}\Big(E^{P^{t,u,v}}\Big[\delta\int_t^\tau\big|f^{t,\omega}(r, B^t, 0, 0, u_r, v_r)\big|^2\,dr\Big]\Big)^{\frac12}.$$

It is clear that $E^{P^{t,u,v}}\big[\|\Gamma\|^2_\tau\big]\le C$. Then (2.25) follows immediately.

Throughout the chapter, we shall use a generic constant $C$ which depends only on $C_0$, $L_0$, $T$, and the dimension $d$, and may vary from line to line.

2.3.3 The value processes

We now fix an $\mathcal{F}_T$-measurable terminal condition $\xi$ and assume throughout the chapter:

Assumption 2.3.5. $\xi$ is bounded by a constant $C_0$, and is uniformly continuous in $\omega$ with a modulus of continuity function $\rho_0$.

We now define the lower value and upper value of the game as follows:

$$\underline{Y}(t,\omega) := \sup_{u\in\mathscr{U}_t}\inf_{v\in\mathscr{V}_t} Y^{t,\omega,u,v}_t[T, \xi^{t,\omega}], \qquad \overline{Y}(t,\omega) := \inf_{v\in\mathscr{V}_t}\sup_{u\in\mathscr{U}_t} Y^{t,\omega,u,v}_t[T, \xi^{t,\omega}]. \quad (2.26)$$

As a direct consequence of Lemma 2.3.4, we have

$$-C\le\underline{Y}\le\overline{Y}\le C. \quad (2.27)$$

When there is no confusion, we will simplify the notations:

$$\big(Y^{t,\omega,u,v}, Z^{t,\omega,u,v}\big) := \big(Y^{t,\omega,u,v}[T, \xi^{t,\omega}], Z^{t,\omega,u,v}[T, \xi^{t,\omega}]\big). \quad (2.28)$$

Our goal in this chapter is to show, under certain additional assumptions, that $\underline{Y} = \overline{Y}$ and that it is the unique viscosity solution of a certain PPDE; see Theorem 2.5.1 below.

Remark 2.3.6. (i) In this chapter we restrict our controls to $\mathscr{U}_t\times\mathscr{V}_t\subset\mathscr{A}^t$. We note that in general $\mathcal{U}_t\times\mathcal{V}_t$ is not contained in $\mathscr{A}^t$. We may study the following problem though:

$$\underline{Y}'(t,\omega) := \sup_{u\in\mathcal{U}_t}\inf_{v\in\mathcal{V}_t(u)} Y^{t,\omega,u,v}_t, \qquad \overline{Y}'(t,\omega) := \inf_{v\in\mathcal{V}_t}\sup_{u\in\mathcal{U}_t(v)} Y^{t,\omega,u,v}_t,$$

where

$$\mathcal{V}_t(u) := \big\{v\in\mathcal{V}_t : (u,v)\in\mathscr{A}^t\big\}, \qquad \mathcal{U}_t(v) := \big\{u\in\mathcal{U}_t : (u,v)\in\mathscr{A}^t\big\},$$

and we take the convention that, over the empty set $\emptyset$, $\sup_\emptyset[\cdots] = -\infty$ and $\inf_\emptyset[\cdots] = +\infty$. However, we are able to prove only a partial Dynamic Programming Principle in this formulation.

(ii) Another important constraint we impose is that $\sigma$ and $b$ are independent of $\omega$. When $\sigma$ and $b$ are random, given $(t,\omega)\in\Lambda$, the solution $X^{t,\omega,u,v}$ of SDE (2.19) and its distribution $P^{t,\omega,u,v}$ will depend on $\omega$ as well. This has some subtle consequences, e.g. in Lemma 2.4.1 concerning the regularity of the value processes. The main difficulty is that we do not have a good stability result for feedback type SDEs (2.19). We hope to address this issue in future research.

(iii) Note that we may get rid of the drift $b$ by using the Girsanov transformation, so all our results hold true when $b$ is random, provided $\sigma$ is independent of $\omega$. However, to simplify the presentation, we assume $b$ is independent of $\omega$ as well.

Remark 2.3.7. For each $(u,v)\in\mathscr{U}_0\times\mathscr{V}_0$, denote $\tilde u_t := u_t(X^{u,v}_\cdot)$ and $\tilde v_t := v_t(X^{u,v}_\cdot)$. Then $(\tilde u, \tilde v)\in\mathcal{U}_0\times\mathcal{V}_0$ and $P^{u,v} = P^{S,\tilde u,\tilde v} := P^{S,\hat\sigma(\tilde u,\tilde v),\widehat{\sigma b}(\tilde u,\tilde v)}$. Thus we have $Y^{0,0,u,v}_0 = Y^{S,\tilde u,\tilde v}_0$,
Consequently, the game values defined below in strong formulation are different from the Y̲_0 and Ȳ_0 in (2.26):

Y̲^S_0 := sup_{ũ∈U_0} inf_{ṽ∈V_0} Y^{S,ũ,ṽ}_0,   Ȳ^S_0 := inf_{ṽ∈V_0} sup_{ũ∈U_0} Y^{S,ũ,ṽ}_0.

Indeed, in strong formulation the above game with control against control may not have a game value, namely Y̲^S_0 < Ȳ^S_0, even if the Isaacs condition and the comparison principle for viscosity solutions of the corresponding Bellman-Isaacs equation hold. See the counterexample in Example 2.7.1 below.

Remark 2.3.8. (i) In the standard literature, see e.g. [26] and [7], one transforms the problem into a game with strategy-type controls. That is, let α : V_t → U_t and β : U_t → V_t be appropriate strategies, and one considers

Y̲^{st}(t,ω) := sup_α inf_v Y^{t,ω,α(v),v}_t,   Ȳ^{st}(t,ω) := inf_β sup_u Y^{t,ω,u,β(u)}_t.

This type of control problem is in fact a principal-agent problem; see e.g. Cvitanic and Zhang [15]. In the Markovian framework and under appropriate conditions, one can show that Y̲^{st} = Ȳ^{st} and that it is the unique solution of the corresponding Bellman-Isaacs equation. However, in this formulation the two players have asymmetric information, and the lower and upper values are defined using different information settings. In particular, it is less convenient to define a saddle point in this formulation.

(ii) Our weak formulation actually has the feature of strategy-type controls. Indeed, consider the (ũ, ṽ) in Remark 2.3.7 again. Roughly speaking, given u, ũ is uniquely determined by v, which is in turn uniquely determined by ṽ. Thus u can be viewed as a strategy which maps ṽ (and B) to ũ. Similarly, v can be viewed as a strategy which maps ũ (and B) to ṽ. Compared with strategy against control, the advantage of the weak formulation is that it is control against control and the two players have symmetric information.

Remark 2.3.9. When there is only drift control, namely when σ is independent of (u,v), our formulation reduces to the work of Hamadène and Lepeltier [31]. 
Under Isaacs condition, by using Girsanov transformation and comparison for BSDEs, they proved Y = Y and the existence of saddle point. We allow for both diffusion control and drift control, and we shall proveY = Y . However, when there is diffusion control, the comparison used in [31] fails. Consequently, we are not able to follow the arguments in [31] to establish the existence of saddle point. Indeed, with the presence of diffusion control, even for stochastic optimization problem the optimal control does not seem to exist in general. We shall instead obtain some approximate saddle point in Section 2.6 below. 2.4 Dynamic Programming Principle We start with the regularity of Y and Y in !. This property is straightforward in strong formulation. Our proof here relies heavily on our assumption that andb are independent of!. As pointed out in Remark 2.3.6 (ii), the problem is very subtle in general case and we hope to address it in some future research. Lemma 2.4.1. Let Assumptions 2.3.1, 2.3.3, and 2.3.5 hold. ThenY andY are uniformly continuous in! with modulus of continuity functionC 0 for some constantC > 0. Conse- quently,Y andY areF-progressively measurable. 21 Proof. Let t2 [0;T ];!;! 0 2 . For any (u;v)2U t V t , denote Y :=Y t;!;u;v Y t;! 0 ;u;v ; Z :=Z t;!;u;v Z t;! 0 ;u;v . Then,P t;u;v -a.s. Y s = t;! (B t ) t;! 0 (B t ) Z T s Z r dB r + Z T s h r Y r + b Z r (r;u r ;v r ) r +[f t;! (r;B t ;Y t;!;u;v r ; b Z t;!;u;v r ;u r ;v r )f t;! 0 (r;B t ;Y t;!;u;v r ; b Z t;!;u;v r ;u r ;v r )] i dr; where and are bounded. Apply (2.24) on the above BSDE, one obtains jY t;!;u;v t Y t;! 0 ;u;v t jC 0 (k!! 0 k t ): (2.1) Thus jY t (!)Y t (! 0 )j sup (u;v)2UtVt jY t;!;u;v t Y t;! 0 ;u;v t jC 0 (k!! 0 k t ): Similarly one can prove the estimate forY . We next present a technical lemma that will be needed to prove the Dynamic Program- ming Principle below. Lemma 2.4.2. For any"> 0 andt2 (0;T ), there exist disjoint setsfE i ;i = 1; ;ng F t such that k!! 
0 k t " for all!;! 0 2E i ;i = 1; ;n; and sup (u;v)2U 0 V 0 E P 0;u;v \ n i=1 E c i ": Proof. We introduce the following capacityC: C(A) := sup (u;v)2U 0 V 0 P 0;u;v (A); for all A2F T : (2.2) In this proof we abuse a notation a little bit by denotingB s r :=B r B s for 0srt. 22 Step 1. We first show that, for anyc;> 0, andR> 0, C kBk t >R C R 4 and C sup 0st kB s k (s+)^t c C c 4 : (2.3) Indeed, for any (u;v)2U 0 V 0 and any 0t 1 <t 2 , since andb are bounded, then by (2.20) and applying the Burkholder-Davis-Gundy Inequality we get E P 0;u;v h kB t 1 k 4 t 2 i = E P 0;u;v h sup t 1 st 2 Z t 2 t 1 (r;u r ;v r )b(r;u r ;v r )dr + Z t 2 t 1 (r;u r ;v r )dW u;v ) r 4 i CE P 0;u;v h Z t 2 t 1 j(r;u r ;v r )b(r;u r ;v r )jdr 4 + Z t 2 t 1 j(r;u r ;v r )j 2 d r 2 i C(t 2 t 1 ) 2 : (2.4) Then P 0;u;v kBk t >R 1 R 4 E P 0;u;v h kBk 4 t i C R 4 : By the definition ofC, this implies the first estimate in (2.3). Next, let 0 =t 1 <<t m =t such that t i < 2 for alli. Then sup 0st kB s k (s+)^t = max 0im1 sup t i st i+1 sup sr(s+)^t jB r B s j max 0im1 sup t i st i+1 sup sr(s+)^t h jB r B t i j +jB s B t i j i 2 max 0im1 kB t i k t i +3 : 23 Then, noting thatm T , by (2.4) we have P 0;u;v sup 0st kB s k (s+)^t c 1 c 4 E P 0;u;v h sup 0st kB s k 4 (s+)^t i C c 4 m1 X i=0 E P 0;u;v h kB t i j 4 t i +3 i C c 4 m 2 C c 4 : By the definition ofC we obtain the second estimate in (2.3). Step 2. We now fix"> 0. For the constantC in (2.3), set c := " 3 ; := c 4 " 2C ^t = " 5 162C ^t; R := ( 2C " ) 1 4 : Let 0 = t 0 < < t m = t such that t i 2,i = 1; ;m. Clearly there exists a partitionf ~ E j ; 1jngF t such that [ n j=1 ~ E j = n max 0im jB t i jR +c o and max 0im j! t i ! 0 t i j " 3 for all!;! 0 2 ~ E i : Now set E j := ~ E j \A; where A := n sup 0st kB s k (s+)^t c o 2F t : Then for any!;! 0 2E j , k!! 0 k t = max 0im1 sup t i st i+1 j! s ! 0 s j max 0im1 sup t i st i+1 h j! s ! t i j +j! 0 s ! 0 t i j +j! t i ! 
0 t i j i max 0im1 sup t i st i+1 h " 3 + " 3 + " 3 i =": 24 On the other hand, \ n j=1 E c j = [ n j=1 ~ E j c [A c = n max 0im jB t i j>R +c o [A c n max 0im jB t i j>R +c o \A [A c : For each!2 n max 0im jB t i j>R +c o \A, we have k!k t = max 0im1 sup t i st i+1 j! s j max 0im1 sup t i st i+1 h j! t i jj! s ! t i j i > (R +c)c =R: That is, n max 0im jB t i j>R +c o \A n kBk t >R o ; and therefore, \ n j=1 E c j n kBk t >R o [A c : Now it follows from (2.3) thatC(\ n j=1 E c j )". The following Dynamical Programming Principle is important for us. Lemma 2.4.3. Let Assumptions 2.3.1, 2.3.3, and 2.3.5 hold true. For any 0 s t T and!2 we have Y s (!) = sup u2Us inf v2Vs Y s;!;u;v s t;Y s;! t ; Y s (!) = inf v2Vs sup u2Us Y s;!;u;v s t;Y s;! t : Proof. We shall prove only the Dynamic Programming Principle forY . The proof forY 25 is similar. Without loss of generality, we assumes = 0. That is, we shall prove: Y 0 = sup u2U 0 inf v2V 0 Y 0;0;u;v 0 [t;Y t ]: (2.5) Step 1. We first prove "". Fix arbitrary"> 0 andu2U 0 . LetfE i ;i = 1; ;ng F t be given by Lemma 2.4.2, and fix an! i 2 E i for eachi. For any!2 E i , By Lemma 2.4.1 and (2.1) we have jY t (!)Y t (! i )jC 0 (") and sup (u;v)2UtVt Y t;!;u;v t Y t;! i ;u;v t C 0 ("): (2.6) Letu i 2U t be an"-optimizer ofY t (! i ), that is, inf v2Vt Y t;! i ;u i ;v t +" Y t (! i ): (2.7) Denote ^ E n :=\ n i=1 (E i ) c . By Lemma 2.3.2 (i) we defineu " 2U 0 by: u " s (!) :=u s (!)1 [0;t) (s) + n X i=1 u i s (! t )1 E i (!) +u s (!)1 ^ En (!) 1 [t;T ] (s) (2.8) Now for anyv2V 0 , we have Y 0;0;u " ;v 0 = Y 0;0;u " ;v 0 t;Y 0;0;u " ;v t =Y 0;0;u " ;v 0 h t; n X i=1 Y 0;0;u " ;v t 1 E i +Y 0;0;u " ;v t 1 ^ E n i : Since solutions of BSDEs can be constructed via Picard iteration, one can easily check that, for any (u;v)2U 0 V 0 , Y 0;0;u;v t (!) =Y t;!;u t;! ;v t;! t ; P 0;u;v -a.e.!2 : 26 Then it follows from (2.7) and Lemma 2.2.2 that, forP 0;u " ;v -a.e.!2E i , Y 0;0;u " ;v t (!) = Y t;!;(u " ) t;! ;v t;! t =Y t;!;u i ;v t;! 
t inf v2Vt Y t;!;u i ;v t inf v2Vt Y t;! i ;u i ;v t C 0 (")Y t (! i )"C 0 (") Y t (!)"C 0 ("): Therefore, by comparison principle of BSDEs and (2.27) we have Y 0;0;u " ;v 0 Y 0;0;u " ;v 0 h t; n X i=1 Y t 1 E i (" +C 0 (")) +Y 0;0;u " ;v t 1 ^ E n i Y 0;0;u " ;v 0 h t;Y t (" +C 0 ("))C1 ^ E n i : Recall that sup (u;v)2U 0 V 0 P 0;u;v ^ E n ". Applying (2.25) we get Y 0;0;u " ;v 0 Y 0;0;u " ;v 0 t;Y t C(" + 0 (")) 1 2 =Y 0;0;u;v 0 t;Y t C(" + 0 (")) 1 2 : Sincev is arbitrary, this implies that inf v2V 0 Y 0;0;u " ;v 0 inf v2V 0 Y 0;0;u;v 0 t;Y t C(" + 0 (")) 1 2 : Then Y 0 inf v2V 0 Y 0;0;u;v 0 t;Y t C(" + 0 (")) 1 2 : Sending"! 0 and by the arbitrariness ofu2U 0 , we obtain Y 0 sup u2U 0 inf v2V 0 Y 0;0;u;v 0 t;Y t : 27 Step 2. We now prove "". Fixu2U 0 in the form of (2.21), withu ij being replaced byu ij . It suffices to prove that inf v2V 0 Y t;0;u;v 0 inf v2V 0 Y 0;0;u;v 0 [t;Y t ]: Without loss of generality, assumet =t i 0 for somei 0 . Notice thatY tm =, then it suffices to prove inf v2V 0 Y 0;0;u;v 0 [t i+1 ;Y t i+1 ] inf v2V 0 Y 0;0;u;v 0 [t i ;Y t i ]; for alli: (2.9) We now fix i and recall that u t = P n i j=1 u ij 1 E i j for t2 [t i ;t i+1 ). For any " > 0, let fE k ;k = 1; ;KgF t i be given by Lemma 2.4.2. DenoteE i jk := E i j \E k and fix an ! jk 2E i jk for each (j;k). For anyv2V 0 , as in Step1 we have Y 0;0;u;v 0 [t i ;Y t i ] = Y 0;0;u;v 0 h t i ; n i ;K X j;k=1 Y t i 1 E i jk +Y t i 1 \ K k=1 E c k i Y 0;0;u;v 0 h t i ; n i ;K X j;k=1 Y t i (! jk )1 E i jk (!) i C( 0 (") +") 1 2 : By Step 1, we see that Y t i (! jk ) sup u2Ut i inf v2Vt i Y t i ;! jk ;u;v t i t i+1 ;Y t i ;! jk t i+1 inf v2Vt i Y t i ;! jk ;u ij ;v t i t i+1 ;Y t i ;! jk t i+1 : Here the constantu ij denotes the constant process. Then there existsv jk 2V t i such that Y t i (! jk )Y t i ;! jk ;u ij ;v jk t i t i+1 ;Y t i ;! jk t i+1 ": 28 Now define ^ v := v1 [0;t i ) + h n i ;K X j;k=1 v jk (B t i )1 E i jk + v1 \ K k=1 E c k i 1 [t i ;T ] : By Lemma 2.3.2 we have ^ v2V 0 . 
Then, noting that u t i ;! t = u ij for !2 E i jk and t2 [t i ;t i+1 ), Y 0;0;u;v 0 [t i ;Y t i ] Y 0;0;u;v 0 h n i ;K X j;k=1 Y t i ;! jk ;u ij ;v jk t i t i+1 ;Y t i ;! jk t i+1 1 E i jk (!) i C( 0 (") +") 1 2 = Y 0;0;u;v 0 h n i ;K X j;k=1 Y t i ;! jk ;u t i ;! ;^ v t i ;! t i t i+1 ;Y t i ;! jk t i+1 1 E i jk (!) i C( 0 (") +") 1 2 Y 0;0;u;v 0 h n i ;K X j;k=1 Y t i ;!;u t i ;! ;^ v t i ;! t i t i+1 ;Y t i ;! t i+1 1 E i jk (!) i C( 0 (") +") 1 2 = Y 0;0;u;^ v 0 h t i+1 ; n i ;K X j;k=1 Y t i ;! t i+1 1 E i jk (!) i C( 0 (") +") 1 2 Y 0;0;u;^ v 0 h t i+1 ;Y t i+1 i C( 0 (") +") 1 2 inf v2V 0 Y 0;0;u;v 0 h t i+1 ;Y t i+1 i C( 0 (") +") 1 2 : Send"! 0, by the arbitrariness ofv2V 0 we prove (2.9). Remark 2.4.4. If we use strong formulation with control against control, as in Remark 2.3.7, we can only prove the following partial Dynamic Programming Principle: Y S s (!) sup u2Us inf v2Vs Y s;!;P S;s;u;v Y S t ; Y S s (!) inf v2Vs sup u2Us Y s;!;P S;s;u;v Y S t ; which does not lead to the desired viscosity property. That is why we use weak formulation instead of strong formulation. We now turn to the regularity of Y and Y in t, which is required for studying their 29 viscosity property. Lemma 2.4.5. Let Assumptions 2.3.1, 2.3.3, and 2.3.5 hold. Then, for any 0t 1 <t 2 T and!2 , jY t 1 (!)Y t 2 (!)j +jY t 1 (!)Y t 2 (!)j C 1 d 1 ((t 1 ;!); (t 2 ;!)) ; (2.10) where 1 is a modulus of continuity function defined by 1 () := 0 ( + 1 4 ) + + 1 4 : (2.11) Proof. We shall only prove the regularity of Y in t. The estimate for Y can be proved similarly. Denote :=d 1 ((t 1 ;!); (t 2 ;!)). By Theorem 2.4.6 and Lemma 2.4.1 we have Y t 1 (!)Y t 2 (!) = sup u2Ut 1 inf v2Vt 1 Y t 1 ;!;u;v t 1 t 2 ;Y t 1 ;! t 2 Y t 2 (!) sup u2Ut 1 ;v2Vt 1 Y t 1 ;!;u;v t 1 t 2 ;Y t 1 ;! t 2 Y t 2 (!) : (2.12) Denote Y t :=Y t 1 ;!;u;v t t 2 ;Y t 1 ;! t 2 Y t 2 (!); Z t :=Z t 1 ;!;u;v t t 2 ;Y t 1 ;! t 2 : Then,P t 1 ;u;v -a.s. Y t = Y t 1 ;! t 2 Y t 2 (!) + Z t 2 t f t 1 ;! 
(s;B t 1 ;Y s +Y t 2 (!); b Z s ;u s ;v s )ds Z t 2 t Z s dB t 1 s : 30 Recall from (2.27) thatY is bounded. Apply (2.25) and Lemma 2.4.1, we get jY t 1 j C E P t 1 ;u;v jY t 1 ;! t 2 Y t 2 (!)j 2 1 2 +C C E P t 1 ;u;v 2 0 (d 1 ((t 2 ;!); (t 2 ;! t 1 B t 1 ))) 1 2 +C: Note that E P t 1 ;u;v 2 0 (d 1 ((t 2 ;!); (t 2 ;! t 1 B t 1 ))) E P t 1 ;u;v h 2 0 ( +kB t 1 k t 2 ) i 2 0 ( + 1 4 ) +CP t 1 ;u;v h kB t 1 k t 2 1 4 i 2 0 ( + 1 4 ) +C 1 2 E P t 1 ;u;v [kB t 1 k 2 t 2 ] 2 0 ( + 1 4 ) +C 1 2 : Then jY t 1 j C h 0 ( + 1 4 ) + 1 4 + i =C 1 (): Plug this into (2.12) we complete the proof. Combining Lemmas 2.4.3 and 2.4.5, we have the following Dynamic Programming Principle for stopping times: Theorem 2.4.6. Let Assumptions 2.3.1, 2.3.3, and 2.3.5 hold true. For any (t;!)2 and 2T t , we have Y t (!) = sup u2Ut inf v2Vt Y t;!;u;v t ;Y t;! ; Y t (!) = inf v2Vt sup u2Ut Y t;!;u;v t ;Y t;! : Proof. First, suppose is a stopping time that takes only two valuest < t 1 < t 2 T . 31 Then we have Y t;!;u;v t ;Y t;! = Y t;!;u;v t t 1 ;Y t;! t 1 1 f=t 1 g +Y t;!;u;v t t 2 ;Y t;! t 2 1 f>t 1 g We note that Y t;!;u;v t h t 2 ;Y t;! t 2 1 f>t 1 g i = Y t;!;u;v t t 1 ;Y t;!;u;v t 1 t 2 ;Y t;! t 2 1 f>t 1 g = Y t;!;u;v t h t 1 ;Y t 1 ;!;u t 1 ;! ;v t 1 ;! t 1 t 2 ;Y t;! t 2 1 f>t 1 g i : Following a similar argument to the proof of Lemma (2.4.3) we have sup u2Ut inf v2Vt Y t;!;u;v t ;Y t;! = sup u2Ut inf v2Vt Y t;!;u;v t h Y t;! t 1 1 f=t 1 g + sup u2Ut 1 inf v2Vt 1 Y t 1 ;!;u;v t 1 t 2 ;Y t;! t 2 1 f>t 1 g i = sup u2Ut inf v2Vt Y t;!;u;v t h Y t;! t 1 1 f=t 1 g +Y t;! t 1 1 f>t 1 g ] = Y t : The result is true for taking finitely many values by induction. In general, there exists a sequence of stopping times k # such that k takes only finitely many values. By 2:4:5 and standard BSDE estimates, we see that the Dynamic Programming Principle for stopping time holds. 
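The backward sup-inf/inf-sup recursion behind the Dynamic Programming Principle just proved can be illustrated on a crude discrete-time analogue. Everything below (the control sets, the coefficients b and sig, and the binomial noise) is a hypothetical toy choice, not the weak formulation of the text; the sketch only exhibits the recursion and the elementary inequality lower value ≤ upper value.

```python
import math

# Discrete-time caricature of the dynamic programming principle: the lower value
# satisfies V(t,x) = sup_u inf_v E[ V(t+dt, x + b(u,v) dt + sig(u,v) sqrt(dt) xi) ]
# with xi = +/-1 equally likely; the upper value swaps sup and inf at every step.
U = [-1.0, 0.0, 1.0]                      # toy control set of player 1
V = [-1.0, 0.0, 1.0]                      # toy control set of player 2
b = lambda u, v: u - v                    # toy drift
sig = lambda u, v: 1.0 + 0.5 * abs(v)     # toy volatility

def value(steps, x, dt, g, lower=True):
    if steps == 0:
        return g(x)
    def cont(u, v):  # one-step conditional expectation of the next-stage value
        up = value(steps - 1, x + b(u, v) * dt + sig(u, v) * math.sqrt(dt), dt, g, lower)
        dn = value(steps - 1, x + b(u, v) * dt - sig(u, v) * math.sqrt(dt), dt, g, lower)
        return 0.5 * (up + dn)
    if lower:   # sup_u inf_v
        return max(min(cont(u, v) for v in V) for u in U)
    else:       # inf_v sup_u
        return min(max(cont(u, v) for u in U) for v in V)

g = abs                                   # terminal payoff
lo = value(3, 0.0, 0.1, g, lower=True)
hi = value(3, 0.0, 0.1, g, lower=False)
assert lo <= hi + 1e-12                   # lower value never exceeds upper value
```

By induction on the number of steps, the pointwise inequality sup inf ≤ inf sup at each stage propagates backward, which is exactly the comparison the weak-formulation game must close into an equality under the Isaacs condition.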
32 2.5 Existence of the game value 2.5.1 Viscosity solution properties Define G(t;!;y;z; ) := sup u2U inf v2V h 1 2 2 (t;u;v) : +b(t;u;v)z +f(t;!;y;z(t;u;v);u;v) i G(t;!;y;z; ) := inf v2V sup u2U h 1 2 2 (t;u;v) : +b(t;u;v)z +f(t;!;y;z(t;u;v);u;v) i ; (2.1) and consider the following path dependent PDEs: @ t Y t G(t;!;Y t ;@ ! Y t ;@ 2 !! Y t ) = 0; (2.2) @ t Y t G(t;!;Y t ;@ ! Y t ;@ 2 !! Y t ) = 0: (2.3) Theorem 2.5.1. Let Assumptions 2.3.1, 2.3.3, and 2.3.5 hold. Then Y (resp. Y ) is a viscosity solution of PPDE (2.2) (resp. (2.3)). Proof. We shall only prove that Y is a viscosity solution of the PPDE (2:2). The other statement can be proved similarly. Step 1. We first prove the viscosity supersolution property. Assume by contradiction that there exists (t;!) and'2A L Y (t;!) such that c := @ t '(t;0) + sup u2U inf v2V 1 2 2 (t;u;v) :@ 2 !! '(t;0) +b(t;u;v)@ ! '(t;0) +f(t;!;Y t (!);@ ! '(t;0)(t;u;v);u;v) > 0: 33 By Remark (2.2.5), we can assumeL is large enough as we will see later. Then there exists ~ u2U such that, for allv2V @ t '(t;0) + 1 2 2 (t; ~ u;v) :@ 2 !! '(t;0) +b(t; ~ u;v)@ ! '(t;0) +f(t;!;Y t (!);@ ! '(t;0)(t; ~ u;v); ~ u;v) c 2 (2.4) Let H2H t be the hitting time corresponding to' in (2.18). For any"> 0, set H " := inf st :st +jB t s j =" : By choosing" > 0 small enough, we have H " H. Since'2 C 1;2 ( t ), there exist some constantC ' C 0 and modulus of continuity function ' 1 , which may depend on', such that j (s;B t )jC ' ;j (s;B t ) (t;0)j ' ("); forts H " ; =';@ t ';@ ! ';@ 2 !! ': (2.5) Now setu := ~ u2U t be a constant process and letv2V t be arbitrary. Fix > 0 and denote H " := H " ^ (t +), Y :=Y t;!;u;v H " ;Y t;! H " ; Z :=Z t;!;u;v H " ;Y t;! H " ; Y s :='(s;B t )Y s ; Z s :=@ ! '(s;B t )Z s : 34 Then, applying the functional Itô’s formula we obtain: dY s = h @ t ' + 1 2 @ 2 !! ' : 2 (s;u s ;v s ) +f t;! (;Y s ; b Z s ;u s ;v s ) i (s;B t )ds + Z s dB t s = h @ t ' + 1 2 @ 2 !! ' : 2 (s;u s ;v s ) +f t;! 
(;Y s ; b Z s ;u s ;v s ) i (s;B t )ds + Z s dB t s = h @ t ' + 1 2 @ 2 !! ' : 2 (s;u s ;v s ) +f t;! (;Y t (!);@ ! '()(s;u s ;v s );u s ;v s ) i (s;B t )ds + h s (Y s Y t (!)) + b Z s s i ds + Z s dB t s ; wherejj;jjL 0 . By (2.4) and (2.5) we have dY s h c 2 C ' ' (")CjY s Y t (!)j + b Z s s i ds + Z s dB t s ; ts H " : Recall (2.20) and definedP :=M H " dP t;u;v , where M s := exp Z s t [b(r;u r ;v r ) + r ]dW t;u;v r 1 2 Z s t jb(r;u r ;v r ) + r j 2 dr : Then Z s dB t s + b Z s s ds is aP-martingale, and thus Y t E P h Y H " Z H " t h c 2 C ' ' (")CjY s Y t (!)j i ds i By choosingL large enough, we see thatP2P t L . Then it follows from the definition of A L Y (t;!) that E P [Y H " ] =E P h '(H " ;B t )Y t;! H " i '(t;0)Y t (!): 35 Therefore, sinceb and are bounded, Y t (!)Y t E P h Z H " t h c 2 +C ' ' (") +CjY s Y t (!)j i ds i [ c 2 +C ' ' (")] +C ' P(H " t +) +CE P h kY Y t (!)k H " i [ c 2 +C ' (")] +C ' P t;u;v (H " t +) 1 2 +C E P t;u;v h kY Y t (!)k 2 H " i1 2 : (2.6) Note that, for " 2 , P t;u;v H " t + P t;u;v +kB t k t+ " =P t;u;v kB t k t+ " 2 C " 2 E P t;u;v kB t k 2 t+ C " 2 : (2.7) Moreover, denote ~ Y :=YY t (!). Then ~ Y s =Y t;! H " Y t (!) + Z H " s f t;! (r;B r ; ~ Y r +Y t (!); b Z r ;u r ;v r )dr Z H " s Z r dB t r : By (2.25) and applying Lemma 2.4.5 we obtain E P t;u;v h kY Y t (!)k 2 H " i CE P t;u;v h jY t;! H " Y t (!)j 2 i +C C +CE P t;u;v h 2 1 d 1 ((t;!); (t +;! t B t )) i C +CE P t;u;v h 2 1 d 1 ((t;!); (t +;!)) +kB t k t+ ) i C 2 (); (2.8) where 2 () := + sup (u;v)2UtVt E P t;u;v h 2 1 d 1 ((t;!); (t +;!)) +kB t k t+ ) i : (2.9) 36 Plug (2.7) and (2.8) into (2.6), we have Y t (!)Y t h c 2 +C ' ' (") + C ' 1 2 " +C 1 2 2 () i : It is clear that lim !0 2 () = 0. Then by first choosing" small and then choosing small enough, we have Y t (!)Y t;!;u;v t [H " ;Y t;! H " ] c 4 : Sincev is arbitrary, we get Y t (!) inf v2Vt Y t;!;u;v t [H " ;Y t;! H " ] c 4 ; which implies further that Y t (!) sup u2Ut inf v2Vt Y t;!;u;v t [H " ;Y t;! 
H " ] c 4 < 0: This contradicts with the dynamic programming principle Theorem 2.4.6. Therefore,Y is a viscosity supersolution of PPDE (2.2). Step 2. We now prove the viscosity subsolution property. Assume by contradiction that, for someL large enough, there exists (t;!) and'2A L Y (t;!) such that c := @ t '(t;0) + sup u2U inf v2V 1 2 2 (t;u;v) :@ 2 !! '(t;0) +b(t;u;v)@ ! '(t;0) +f(t;!;Y t (!);@ ! '(t;0);u;v) < 0: Then there exists a mapping (no measurability is involved!) :U!V such that, for any 37 u2U, @ t '(t;0) + 1 2 2 (t;u; (u)) :@ 2 !! '(t;0) +b(t;u; (u))@ ! '(t;0) +f(t;!;Y t (!);@ ! '(t;0);u; (u)) c 2 : (2.10) For any u 2 U t , by the structure (2.21) one can easily see that v := (u) 2 V t . Introduce the same notations as in Step 1, and follow almost the same arguments, we obtain Y t (!)Y t h c 2 C ' ' (") C ' 1 2 " C 1 2 2 () i : Again, by first choosing" small and then choosing small enough, we have Y t (!)Y t;!;u;v t [H " ;Y t;! H " ] c 4 : This implies Y t (!) inf v2Vt Y t;!;u;v t [H " ;Y t;! H " ] c 4 : Sinceu is arbitrary, then Y t (!) sup u2Ut inf v2Vt Y t;!;u;v t [H " ;Y t;! H " ] c 4 > 0: This contradicts with the dynamic programming principle Theorem 2.4.6. Therefore,Y is a viscosity subsolution of PPDE (2.2). We now assume the Isaacs condition: G(t;!;y;z; ) =G(t;!;y;z; ) =:G(t;!;y;z; ); (2.11) 38 and consider the following path dependent Isaacs equation: @ t Y t G(t;!;Y t ;@ ! Y t ;@ 2 !! Y t ) = 0: (2.12) Our main result of the chapter is: Theorem 2.5.2. Let Assumptions 2.3.1, 2.3.3, and 2.3.5 hold. Assume further that the Isaacs condition (2.11) and the uniqueness for viscosity solutions of the PPDE (2.12) hold. ThenY =Y =:Y and is the unique viscosity solution of PPDE (2.12). Proof. Applying Theorem 2.5.1 and by the uniqueness of viscosity solutions, we see im- mediately thatY =Y and it is the unique viscosity solution of PPDE (2.12). Remark 2.5.3. 
(i) For the comparison principle of viscosity solutions of PPDE (2.12), we refer to Ekren, Touzi and Zhang [22]. We shall also provide a sufficient condition in Subsection 2.5.2 below.

(ii) In the Markovian framework, the PPDE (2.12) becomes a standard PDE. Note that a viscosity solution (resp. supersolution, subsolution) in the sense of Definition 2.2.4 is a viscosity solution (resp. supersolution, subsolution) in the standard literature. Then, assuming the comparison principle for standard viscosity solutions of PDEs holds true, Y := Y̲ = Ȳ is the unique viscosity solution of the Bellman-Isaacs PDE with terminal condition Y(T,x) = ξ(x).

2.5.2 Comparison principle for viscosity solutions of PPDEs

In this subsection we study the comparison principle for PPDE (2.12), which clearly implies the uniqueness required in Theorem 2.5.2.

We first cite a general result from [24] concerning well-posedness of PPDEs, adapted to our setting. For any (t,ω) ∈ [0,T] × Ω, define the following deterministic function with parameter (t,ω):

g^{t,ω}(s,y,z,γ) := G(s∧T, ω_{·∧t}, y, z, γ). (2.1)

For any ε > 0 and δ ≥ 0, we denote T_δ := (1+δ)T, and

O_ε := {x ∈ R^d : |x| < ε},  Ō_ε := {x ∈ R^d : |x| ≤ ε},  ∂O_ε := {x ∈ R^d : |x| = ε},
O^{ε,δ}_t := [t, T_δ) × O_ε,  Ō^{ε,δ}_t := [t, T_δ] × Ō_ε,  ∂O^{ε,δ}_t := ([t, T_δ] × ∂O_ε) ∪ ({T_δ} × O_ε). (2.2)

Consider the following localized and path-frozen PDE, defined for every (t,ω) ∈ [0,T] × Ω:

(E)^{t,ω}_{ε,δ}:  L^{t,ω}θ := −∂_t θ − g^{t,ω}(s, θ, Dθ, D²θ) = 0 on O^{ε,δ}_t. (2.3)

Here ∂_t, D, D² are the standard differential operators.

Assumption 2.5.4. For any ε > 0, δ ≥ 0, (t,ω) ∈ [0,T] × Ω, and any h ∈ C^0(∂O^{ε,δ}_t), we have w̄ = w̲, where

w̄(s,x) := inf{w(s,x) : w a classical supersolution of (E)^{t,ω}_{ε,δ} with w ≥ h on ∂O^{ε,δ}_t},
w̲(s,x) := sup{w(s,x) : w a classical subsolution of (E)^{t,ω}_{ε,δ} with w ≤ h on ∂O^{ε,δ}_t}. (2.4)

By [24] Theorem 3.4, we have

Theorem 2.5.5. Let Assumptions 2.3.1, 2.3.3, 2.3.5, and the Isaacs condition (2.11) hold. Then, under the additional Assumption 2.5.4, the PPDE (2.12) has a unique viscosity solution and the comparison principle for viscosity solutions holds. 
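For orientation, once the path is frozen the localized equation (E) is an ordinary fully nonlinear parabolic PDE, and the comparison of sub- and supersolutions underlying Assumption 2.5.4 can be explored numerically. Below is a minimal monotone explicit finite-difference sketch for the model Bellman nonlinearity G(g) = sup_{s∈[s_lo, s_hi]} ½s²g, a hypothetical stand-in for g^{t,ω} with toy coefficients; monotonicity of the scheme (guaranteed by the CFL restriction) yields a discrete comparison principle, which we check on two ordered boundary data.

```python
# Monotone explicit finite-difference sketch for a localized model equation
#   -dw/dt - G(w_xx) = 0 on [0,T) x (-1,1),  w = h on the parabolic boundary,
# with the toy Bellman nonlinearity G(g) = sup_{s in [s_lo, s_hi]} (1/2) s^2 g.
# All coefficients are hypothetical; this is not the g^{t,omega} of the text.
s_lo, s_hi = 0.5, 1.0

def G(g):
    return 0.5 * (s_hi**2 * max(g, 0.0) + s_lo**2 * min(g, 0.0))

def solve(h, M=40, N=4000, T=1.0):
    dx, dt = 2.0 / M, T / N
    assert dt <= dx * dx / s_hi**2        # CFL condition => monotone scheme
    xs = [-1.0 + i * dx for i in range(M + 1)]
    w = [h(x) for x in xs]                # terminal condition at t = T
    for _ in range(N):                    # march backward in time
        new = w[:]                        # lateral boundary stays fixed at h(+-1)
        for i in range(1, M):
            gxx = (w[i - 1] - 2 * w[i] + w[i + 1]) / dx**2
            new[i] = w[i] + dt * G(gxx)
        w = new
    return w

# Discrete comparison principle: ordered boundary data give ordered solutions.
w1 = solve(lambda x: min(abs(x), 0.5))
w2 = solve(lambda x: abs(x))
assert all(a <= b + 1e-12 for a, b in zip(w1, w2))
```

The same monotonicity argument is what makes the sandwich between classical sub- and supersolutions in Assumption 2.5.4 effective at the PDE level.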
We remark that Assumption 2.5.4 is in the spirit of Perron’s approach. However, in 40 standard literature thew in (2.4) is required only to be viscosity supersolution or subsolu- tion, while we require it to be a classical one. To check that, we present a result concerning classical solutions of parabolic PDEs. We first simplify the notations. Let O R d be open, connected, bounded, and with smooth boundary. Set O := [0;T )O; O := [0;T ]O; @O := [0;T ]@O [ fTgO : Consider the following (standard) PDE inO with boundary conditionh: @ t g(t;x;;D;D 2 ) = 0 inO and =h on@O: (2.5) Then we have the following result, whose argument is standard in the literature and is communicated to us by Lihe Wang. Lemma 2.5.6. Assume (i)h2C 1;2 (O) andg(;y;z; )2C 1;2 (O) for any (y;z; ); (ii)g is continuously differentiable in (y;z; ) with bounded derivatives; (iii)@ gc 0 I d for somec 0 > 0, andd 2. Then the PDE (2.5) has a classical solution2C 1;2 (O). Proof. As standard in PDE literature, it suffices to provide a priori estimates. That is, we assume 2 C 1;2 (O) satisfies PDE (2.5), and we shall provide estimates which depends only on the parameters in our assumptions. (i) We first establish the estimates in the caseg =g( ). We proceed in several steps. Step 1. We first cite a result from Ladyzenskaya et al [37]. Assume satisfying the 41 following linear PDE: @ t 1 2 A(t;x) :D 2 = 0; whereA = [a ij ] 1i;jd is required only to be measurable and0<c 0 I d AC 0 I d . Then 2C 2 ; loc , where depends only onc 0 andC 0 . Step 2. We next cite a result by Caffarelli [8]. The elliptic PDEg(D 2 ) =f(x) withf2C hasC 2; -solution if the simplified PDEg(D 2 ) = constant hasC 2;~ -solution for some ~ >. Step 3. We also need the DeGiorgi-Nash estimate: If P d i;j=1 D x i (a ij D x j ) = 0, then 2C . See, e.g., Gilbarg and Trudinger [28] Theorem 8.22. Step 4. We now come back to the PDE (2.5) with g = g( ). First, set ~ := @ t . 
Differentiate both sides of (2.5) with respect tot we obtain: @ t ~ + [@ g(D 2 )] :D 2 ~ = 0: By Step 1, we have@ t = ~ 2C 2 ; . Now fixt. Then (2.5) becomes g(D 2 ) =@ t 2C : By Step 2, it suffices to show that g(D 2 ) = constant hasC 2; 0 -solution for some 0 > (2.6) For this, we can only prove in the casesd = 1 ord = 2. 42 In the cased = 1, notice thatg is strictly increasing, thenD 2 = constant and thus is a parabola. In the cased = 2, fixk = 1; 2 and denote k :=D x k . Differentiate both sides of (2.6) with respect tox k : A :D 2 k = 0; where A := [a i;j ] 1i;j2 :=@ g(D 2 ) Note thata 11 c 0 > 0. Then D 2 x 1 x 1 k + a 12 a 11 D 2 x 1 x 2 k + a 22 a 11 D 2 x 2 x 2 k = 0: Forl = 1; 2, differentiate both sides of the above PDE with respect tox l and denote k;l := D x l k =D 2 x k x l : D 2 x 1 x 1 k;l +D x l a 12 a 11 D 2 x 1 x 2 k +D x l a 22 a 11 D 2 x 2 x 2 k = 0: In the casel = 2, this is: D x 1 (D x 1 k;2 ) +D x 2 a 12 a 11 D x 1 k;2 +D x 2 a 22 a 11 D x 2 k;2 = 0: By Step 3, k;2 2C . Similarly, k;1 2C . That is, for anyt,(t;)2C 2+ . Moreover, it follows from PDE (2.5) that is differentiable int and thus2C 1;2 . (ii). We now consider the general case where g = g(t;x;y;z; ). We define a map J :C 1;2 (O)!C 1;2 (O) byJ := ~ , where, thanks to (i), ~ is the classical solution of the following PDE: @ t ~ g(t;x;;D;D 2 ~ ) = 0 inO and ~ = on@O: 43 Now, following the arguments in [38] Theorem 8.2, one can show that the mappingJ is a contraction mapping ifT is small enough. Moreover, the fixed point of the mappingJ is also inC 1;2 (O). Therefore, we can conclude the so called small time existence: the PDE (2.5) has a classical solution whenT is small enough. Next, [38] Theorem 14.4 gives an a priori uniform estimate for the Hölder-(1 + ) norm of the classical solution to (2.5), for some 2 (0; 1), where the definition of the Hölder-(1 +) norm is given in [38] Chapter IV , Section 1. 
Using this a priori estimate and following the arguments in [38] Theorem 8.3, we can infer the existence of a classical solution over an arbitrary time duration [0,T] from the small-time existence, and thus complete the proof.

We now have

Proposition 2.5.7. Let Assumptions 2.3.1, 2.3.3, 2.3.5, and the Isaacs condition (2.11) hold. Assume further that

σσ^T ≥ c_0 I_d for some c_0 > 0, and the dimension d ≤ 2. (2.7)

Then Assumption 2.5.4 holds true. Consequently, Y̲ = Ȳ =: Y and it is the unique viscosity solution of PPDE (2.12).

Proof. We use the notation of Assumption 2.5.4. By [23] Proposition 3.14, we may assume without loss of generality that

G(·, y_1, ·) − G(·, y_2, ·) ≥ y_2 − y_1 for any y_1 ≤ y_2. (2.8)

First, one can easily extend h to a uniformly continuous function on [t,∞) × R^d, still denoted as h. For any η > 0, let g^{t,ω}_η and h_η be smooth mollifiers of g^{t,ω} and h such that ‖g^{t,ω}_η − g^{t,ω}‖_∞ ≤ η and ‖h_η − h‖_∞ ≤ η. By our assumptions, it is clear that c_0 I_d ≤ ∂_γ g^{t,ω}_η ≤ L_0 I_d. Applying Lemma 2.5.6, the following PDE has a classical solution θ_η ∈ C^{1,2}(Ō^{ε,δ}_t):

−∂_t θ_η − g^{t,ω}_η(s, θ_η, Dθ_η, D²θ_η) = 0 in O^{ε,δ}_t,  θ_η = h_η on ∂O^{ε,δ}_t.

Denote θ̄_η := θ_η + η and θ̲_η := θ_η − η. Then clearly θ̄_η ∈ C^{1,2}(Ō^{ε,δ}_t) and θ̄_η ≥ h on ∂O^{ε,δ}_t. Moreover, by (2.8),

L^{t,ω}θ̄_η = −∂_t θ_η − g^{t,ω}(s, θ_η + η, Dθ_η, D²θ_η) ≥ −∂_t θ_η − g^{t,ω}(s, θ_η, Dθ_η, D²θ_η) + η = g^{t,ω}_η(s, θ_η, Dθ_η, D²θ_η) − g^{t,ω}(s, θ_η, Dθ_η, D²θ_η) + η ≥ 0.

Then θ̄_η is a classical supersolution of (E)^{t,ω}_{ε,δ}, and thus w̄ ≤ θ̄_η. Similarly, w̲ ≥ θ̲_η. Then 0 ≤ w̄ − w̲ ≤ θ̄_η − θ̲_η = 2η. Since η > 0 is arbitrary, we conclude that w̄ = w̲.

2.6 Approximate saddle point

In this section we briefly discuss saddle points of the game, assuming the game value exists. In our setting, it is natural to define:

Definition 2.6.1. We call (u*, v*) ∈ U_0 × V_0 a saddle point of the game if

Y^{0,0,u,v*}_0 ≤ Y^{0,0,u*,v*}_0 ≤ Y^{0,0,u*,v}_0 for all u ∈ U_0, v ∈ V_0.

We remark that, if a saddle point (u*, v*) exists, then it is straightforward to check that the game has a value Y_0 := Y^{0,0,u*,v*}_0. However, even in stochastic optimization problems with diffusion control, in general an optimal control may not exist. We thus study approximate saddle points only. 
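The way an ε-saddle point (in the sense of Definition 2.6.2 below) arises from near-optimizers of the lower and upper values can be checked numerically on a toy static zero-sum payoff. The function J below is a hypothetical concave-convex example, not the game of the text; by the minimax theorem it has a game value, and the displayed inequalities are exactly the two sides of the ε-saddle property.

```python
# Numerical check of the eps-saddle inequalities on a toy static zero-sum game.
# J is a hypothetical payoff, concave in u and convex in v, so minimax = maximin.
U = [i / 100.0 - 1.0 for i in range(201)]           # grid on [-1, 1]
V = list(U)
J = lambda u, v: u * v - 0.1 * u * u + 0.1 * v * v

lower = max(min(J(u, v) for v in V) for u in U)     # sup_u inf_v J
upper = min(max(J(u, v) for u in U) for v in V)     # inf_v sup_u J
assert abs(lower - upper) < 1e-9                    # the game value exists

eps = 1e-3
u_eps = max(U, key=lambda u: min(J(u, v) for v in V))  # eps-optimizer for sup inf
v_eps = min(V, key=lambda v: max(J(u, v) for u in U))  # eps-optimizer for inf sup
# eps-saddle: J(u, v_eps) - eps <= J(u_eps, v_eps) <= J(u_eps, v) + eps, all u, v
assert max(J(u, v_eps) for u in U) - eps <= J(u_eps, v_eps) + 1e-9
assert min(J(u_eps, v) for v in V) + eps >= J(u_eps, v_eps) - 1e-9
```

The same two-line argument, with the static payoff replaced by the BSDE value Y^{0,0,u,v}_0, is the content of the existence proof for approximate saddle points.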
Definition 2.6.2. For any ε > 0, we call (u^ε, v^ε) ∈ U_0 × V_0 an ε-saddle point of the game if

Y^{0,0,u,v^ε}_0 − ε ≤ Y^{0,0,u^ε,v^ε}_0 ≤ Y^{0,0,u^ε,v}_0 + ε for all u ∈ U_0, v ∈ V_0.

We have the following simple observation:

Proposition 2.6.3. If the game has a value, then it has an ε-saddle point (u^ε, v^ε) for any ε > 0.

Proof. Let Y_0 = Y̲_0 = Ȳ_0 be the game value. Then for any ε > 0, there exist u^ε ∈ U_0 and v^ε ∈ V_0 such that

Y_0 − ε/2 < inf_{v∈V_0} Y^{0,0,u^ε,v}_0 ≤ Y_0 ≤ sup_{u∈U_0} Y^{0,0,u,v^ε}_0 < Y_0 + ε/2.

In particular, this implies that

Y^{0,0,u^ε,v^ε}_0 ≤ sup_{u∈U_0} Y^{0,0,u,v^ε}_0 ≤ inf_{v∈V_0} Y^{0,0,u^ε,v}_0 + ε ≤ Y^{0,0,u^ε,v^ε}_0 + ε.

That is, (u^ε, v^ε) is an ε-saddle point. Moreover, we observe that |Y^{0,0,u^ε,v^ε}_0 − Y_0| ≤ ε.

2.7 A counterexample in the strong formulation

As pointed out in Remark 2.3.7, a game with control against control in strong formulation may not have a game value, even if the Isaacs condition and the comparison principle for the associated Bellman-Isaacs equation hold. The following counterexample was communicated to us by Rainer Buckdahn.

Example 2.7.1. Let d = 2, U := {x ∈ R : |x| ≤ 1}, V := {x ∈ R : |x| ≤ 2}, and let U (resp. V) be the set of F-progressively measurable U-valued (resp. V-valued) processes. Write B = (B^1, B^2). Given (u,v) ∈ U × V, the controlled state process X^{u,v} = (X^{1,u}, X^{2,v}) is determined by:

X^{1,u}_t := λB^1_t + ∫_0^t u_s ds,  X^{2,v}_t := λB^2_t + ∫_0^t v_s ds,

where λ ≥ 0 is a constant. Define, for some a ∈ R,

J(u,v) := E^{P_0}[|a + X^{1,u}_T − X^{2,v}_T|],  Y̲_0 := sup_{u∈U} inf_{v∈V} J(u,v),  Ȳ_0 := inf_{v∈V} sup_{u∈U} J(u,v).

Then, for 0 < λ ≤ √(T/2) and |a| ≤ T, we have Y̲_0 < Ȳ_0.

Proof. For any u ∈ U, set v_t := u_t + a/T. 
Then v ∈ V (since |a| ≤ T and |u_t| ≤ 1), and

a + X^{1,u}_T − X^{2,v}_T = a + λB^1_T + ∫_0^T u_t dt − λB^2_T − ∫_0^T [u_t + a/T] dt = λ[B^1_T − B^2_T].

Thus

J(u,v) = λ E^{P_0}[|B^1_T − B^2_T|] = 2λ√(T/π).

This implies that inf_{v∈V} J(u,v) ≤ 2λ√(T/π). Since u is arbitrary, we get

Y̲_0 ≤ 2λ√(T/π). (2.1)

On the other hand, for any v ∈ V, set

u_t := u_0 := [(a − E^{P_0}[X^{2,v}_T]) / |a − E^{P_0}[X^{2,v}_T]|] 1_{{a − E^{P_0}[X^{2,v}_T] ≠ 0}} + 1_{{a − E^{P_0}[X^{2,v}_T] = 0}}. (2.2)

That is, u is a constant process. One can easily check that

u ∈ U,  |u_0| = 1,  a − E^{P_0}[X^{2,v}_T] = u_0 |a − E^{P_0}[X^{2,v}_T]|.

Then

E^{P_0}[a + X^{1,u}_T − X^{2,v}_T] = a + u_0 T − E^{P_0}[X^{2,v}_T] = u_0 [T + |a − E^{P_0}[X^{2,v}_T]|].

Thus,

J(u,v) ≥ |E^{P_0}[a + X^{1,u}_T − X^{2,v}_T]| = |u_0| [T + |a − E^{P_0}[X^{2,v}_T]|] = T + |a − E^{P_0}[X^{2,v}_T]| ≥ T.

This implies sup_{u∈U} J(u,v) ≥ T. Since v is arbitrary, we have Ȳ_0 ≥ T. This, together with (2.1), implies that Y̲_0 < Ȳ_0 when 0 < λ ≤ √(T/2), since then 2λ√(T/π) ≤ T√(2/π) < T.

Moreover, note that in this case the system is Markovian and ∂_ω Y = DY. The Hamiltonians in (2.1) become: for (t,x,y,z,γ) ∈ [0,T] × R^2 × R × R^2 × S_2,

G̲(t,x,y,z,γ) := sup_{u∈U} inf_{v∈V} [½λ² tr(γ) + u z_1 + v z_2] = ½λ² tr(γ) + |z_1| − 2|z_2|,
Ḡ(t,x,y,z,γ) := inf_{v∈V} sup_{u∈U} [½λ² tr(γ) + u z_1 + v z_2] = ½λ² tr(γ) + |z_1| − 2|z_2|.

Then the Isaacs condition holds, and the corresponding Bellman-Isaacs equation becomes:

−∂_t Y_t − ½λ² [D²_{x_1x_1} Y_t + D²_{x_2x_2} Y_t] − |D_{x_1} Y_t| + 2|D_{x_2} Y_t| = 0.

It is clear that the comparison principle for viscosity solutions of the above PDE holds.

Remark 2.7.2. (i) The above counterexample remains valid when λ = 0, in which case the game is deterministic. We note that, even in the deterministic case, our weak formulation is different from the strong formulation. Indeed, the corresponding state process X^{W,u,v} in weak formulation is:

X^{W,1,u,v}_t = ∫_0^t u(s, X^{W,1,u,v}, X^{W,2,u,v}) ds,  X^{W,2,u,v}_t = ∫_0^t v(s, X^{W,1,u,v}, X^{W,2,u,v}) ds.

In particular, X^{W,2,u,v} depends on u as well. Consequently, given v, one cannot define u through (2.2). 
(ii) In this chapter the drift coefficient is σb, see (2.19), so the above deterministic example is not covered by our current framework. However, this assumption is mainly to ensure the well-posedness of the BSDE (2.23). When f = 0, one may define the value processes directly via conditional expectations instead of through BSDEs. Then we may consider X in the form of (2.2), and all our results, after appropriate modifications, will still hold true. In particular, the above deterministic game in weak formulation has a value.

CHAPTER 3

NORM ESTIMATES FOR SEMIMARTINGALES

3.1 Introduction

As mentioned in the introduction of this dissertation, the nonlinear expectation E defined as

E_t(ξ)(ω) := sup_{u∈U_t} inf_{v∈V_t} E^{P_{t,u,v}}(ξ^{t,ω})

is filtration-consistent and well-defined by the results of Chapter 2. We also characterized an E-martingale as the unique viscosity solution of the path-dependent Bellman-Isaacs equation. However, this characterization provides no dynamics of an E-martingale. To accomplish this purpose, we need to connect the E-expectation with the G-expectation introduced by Peng [51]. Roughly speaking, a G-expectation is a nonlinear expectation of the following form: E^G := sup_{P∈𝒫} E^P, where 𝒫 is a family of mutually singular probability measures P; in general the family 𝒫 does not admit a dominating probability measure. For a random variable ξ, the conditional G-expectation E^G_t[ξ] can be defined as

E^G_t(ξ)(ω) := sup_{P∈𝒫} E^{P^{t,ω}}(ξ^{t,ω}),

and the process t ↦ E^G_t(ξ) is a G-martingale. In particular, if we take 𝒫 := {P_{0,u,v} : u ∈ U_0, v ∈ V_0} from Chapter 2, then for Y_t := E_t(ξ) we observe that, for 0 ≤ s < t ≤ T,

E^G_s(Y_t) = sup_{u∈U_s, v∈V_s} E^{P_{s,u,v}}(Y_t) ≥ sup_{u∈U_s} inf_{v∈V_s} E^{P_{s,u,v}}(Y_t) = Y_s.

In other words, an E-martingale is a G-submartingale. Therefore, the natural question we want to address is: what is the structure of a G-submartingale? Soner, Touzi and Zhang [61] established the following G-martingale representation theorem: denoting Y_t := E^G_t[ξ],

Y_t = Y_0 + ∫_0^t Z_s dB_s − K_t, P-a.s. 
for all P2P; (3.1) where B is the canonical process which is a martingale under all P 2 P, and K is a nondecreasing process with K 0 = 0. In particular, a G-martingale is a supermartingale under eachP2P. It is clear that aG-supermartingale is also a supermartingale under each P2P. Given a G-submartingale Y , one may expect that Y = M + L, where M is a G- martingale andL is a nondecreasing process. Then by (3.1) one expects that Y t =Y 0 + Z t 0 Z s dB s +A t ; P-a.s. for all P2P; (3.2) whereA :=LK is a a semi-martingale under eachP2P. While the above analysis is intuitively clear, its rigorous proof is by no means easy, because it involves a priori estimates for total variations ofA under eachP2P. We thus 51 first turn our attention to norm estimates for semi-martingales under a fixed probability measure P. In the standard literature, the norm of a semi-martingale is defined through its decomposition, see e.g. [56]. However, for our purpose it is important to have a norm defined through the semi-martingale itself, without involving its decomposition. We shall introduce a normkk P , see (3.11) below, such that a processY is a square integrable semi- martingale underP if and only ifkYk P <1. We remark that the normkk P is motivated from the definition of quasimartingales, and these estimates are interesting in their own rights. Now in the G-framework, definekk P := sup P2P kk P , we show that a process Y is a square integrable G-semi-martingale if and only ifkYk P <1, and we obtain the desired estimates. As a special case, we prove the Doob-Meyer decomposition (3.2) for G-submartingales and complete the characterization of aE-martingale. The second main object of this chapter is to obtain the component in the following G-martingale representation, an improved version of (3.1): Y t =Y 0 + Z t 0 Z s dB s Z t 0 [2G( t )dt t dhBi t ]; P-a.s. for all P2P: (3.3) Here G is a function used by Peng [50] to define G-expectation, andhi is the quadratic variation. 
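To see why absolute continuity of \(K\) is equivalent to the existence of the component \(\Gamma\) in (3.3), it helps to record the explicit one-dimensional form of Peng's function \(G\) on a volatility band, the setting used later in Section 3.5. A sketch, assuming \(d\langle B\rangle_t = \hat a_t\,dt\) with \(\hat a_t \in [\underline\sigma^2, \overline\sigma^2]\):

```latex
G(\gamma)
= \tfrac12 \sup_{\underline\sigma \le \sigma \le \overline\sigma} \sigma^2 \gamma
= \tfrac12\bigl(\overline\sigma^2 \gamma^+ - \underline\sigma^2 \gamma^-\bigr),
\qquad
dK_t = \bigl[\,2G(\Gamma_t) - \Gamma_t\,\hat a_t\,\bigr]\,dt \;\ge\; 0,
```

the inequality holding because \(\Gamma \hat a \le \overline\sigma^2\Gamma^+ - \underline\sigma^2\Gamma^- = 2G(\Gamma)\) for every \(\hat a \in [\underline\sigma^2, \overline\sigma^2]\). Thus (3.3) refines (3.1) precisely when \(K\) is absolutely continuous in \(t\).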
This is an open problem proposed by Peng, and remained open in [61] as well as in [62] for solutions to 2BSDEs. In the Markovian case, the component corresponds to the second order derivative of the solution to the associated PDE. In fact, is part of the solution to the earlier formulation of 2BSDEs in [11], and plays a very important role in numerical methods for fully nonlinear PDEs in [27]. Clearly, the problem is more or less equivalent to when the increasing processK in (3.1) is absolutely continuous with respect to the Lebesgue measuredt. Again, we first study the problem under a fixed probability measureP. For any 1 < p1, we shall define a new 52 normkk P;p , see (3.2) below. For any semi-martingaleY underP, ifkYk P;p <1, then the finite variation part ofY can be written asdA t = a t dt andE P [ R T 0 ja t j p dt] <1. We then definekk P;p := sup P2P kk P;p . For anyG-semi-martingaleY , ifkYk P;p <1, then similarly we havedA t =a t dt such thatE G [ R T 0 ja t j p dt]<1. Finally, for a random variable , ifkk P;p :=kE G []k P;p <1, we obtain the following decomposition in backward form: E G t [] = Z T t Z s dB s + Z T t [G( s )ds s dhBi s ]: (3.4) However, the above analysis does not yield the uniqueness of , not to mention the norm estimates for . We thus introduce a much stronger metric for , which will lead to the existence, uniqueness, as well as a priori norm estimates of . We shall point out though, this metric is very strong and is in general not convenient to use. Recently, Peng, Song and Zhang [52] provide a different norm and complete the characterization of the component theG-martingale representation. 3.2 A Priori Estimates for Semimartingales Let ( ;F;F;P) be a filtered probability space on a fixed finite time horizon [0;T ] such thatF is right continuous. We note that the filtrationF is not necessarily complete underP. The removal of the completeness requirement will be important in next sections. However, the following simple lemma, see e.g. 
[61], shows that we may assume all the processes involved in this section areF-progressively measurable. Lemma 3.2.1. Let F P denote the augmented filtration ofF underP. For any F P -progressively measurable processX, there exists a unique (dtdP-a.s.) process ~ X such that ~ X = X, dtdP-a.s. Moreover, ifX is càdlàg,P-a.s., then so is ~ X. We recall that anF-progressively measurable càdlàg semi-martingaleY has the follow- 53 ing decomposition: Y t =Y 0 +M t +A t ; (3.1) where M is a martingale, A has finite variation, and M 0 = A 0 = 0. Now given anF- progressively measurable and càdlàg processY , We are interested in the following ques- tions: (i) IsY a semi-martingale? (ii) Do we have appropriate norm estimates forY ,M, andA? (iii) When isdA t absolutely continuous with respect to the Lebesgue measuredt? The first question was answered by Bichteler-Dellacherie, see e.g. [56] and Section 3.7 of this chapter for some further discussion. The main goal of this section is to answer the second question, and the third question will be answered in Section 3.4 below. As explained in Introduction, the latter questions are natural and important for our study of semi-martingales under nonlinear expectations. In this section we will always assume: The augmented filtration F P is a Brownian filtration: (3.2) Consequently, anyF-martingaleM is continuous,P-a.s. (3.3) 54 3.2.1 Some preliminary results We first note that, whenY is a supermartingale or submartingale, it is well known thatY is a semi-martingale and the following norm estimates hold. Since the arguments will be important for our general case, we provide the proof for completeness. Lemma 3.2.2. Let (3.2) hold. There exist universal constants 0<c<C such that, for any Y in the form of (3.1) with monotoneA, it holds ckYk 2 P;0 E P h jY 0 j 2 +hMi T +jA T j 2 i CkYk 2 P;0 : (3.4) where, for any càdlàg processY , kYk 2 P;0 :=E P h sup 0tT jY t j 2 i : (3.5) Proof. The first inequality is obvious. 
We shall only prove the second inequality. By oth- erwise using the standard stopping techniques, we may assume without loss of generality that E P h sup 0tT jY t j 2 +hMi T +jA T j 2 i <1: Apply Itô’s formula, we have Y 2 T =Y 2 0 +hMi T + 2 Z T 0 Y t dM t + 2 Z T 0 Y t dA t + X 0tT jY t j 2 : (3.6) 55 Note that E P h Z T 0 jY t j 2 dhMi t 1 2 i E P h sup 0tT jY t j(hMi T ) 1 2 i 1 2 E P h sup 0tT jY t j 2 +hMi T i <1: Then E P h Z T 0 Y t dM t i = 0: Thus, for any"> 0, by (3.6) and the monotonicity ofA we have E P [hMi T ]E P h hMi T + X 0tT jY t j 2 i =E P h Y 2 T Y 2 0 2 Z T 0 Y t dA t i E P h jY T j 2 +jY 0 j 2 + 2 sup 0tT jY t jjA T j i C" 1 kYk 2 P;0 +"E P [jA T j 2 ]: (3.7) Moreover, note that A T =Y T Y 0 M T : Clearly we have E P [jA T j 2 ]CkYk 2 P;0 +CE P [hMi T ]C" 1 kYk 2 P;0 +C"E P [jA T j 2 ]: Set" := 1 2C for the aboveC, we obtain E P [jA T j 2 ]CkYk 2 P;0 : This, together with (3.7), proves the second inequality. 56 The next lemma is a discrete version of Lemma 3.2.2. Lemma 3.2.3. Let 0 = 0 n =T be a sequence of stopping times. In the setting of Lemma 3.2.2, ifA i 2F i1 , then cE P h max 0in jY i j 2 i E P h jY 0 j 2 +hMi T +jA T j 2 i CE P h max 0in jY i j 2 i : (3.8) Proof. Again we prove only the second inequality. Similar to the proof of Lemma 3.2.2, by otherwise using the standard stopping techniques, we may assume without loss of generality that E P h max 0in jY i j 2 +hMi T +jA T j 2 i <1: Note that Y i+1 =Y i +A j+1 A j +M j+1 M j : Then Y 2 i+1 Y 2 i = 2Y i [A j+1 A j ] + [A j+1 A j ] 2 + 2[Y i +A j+1 A j ][M j+1 M j ] + [M j+1 M j ] 2 : Notice thatY i +A j+1 A j 2F i . One can easily obtain E P h Y 2 i+1 Y 2 i i =E P h 2Y i [A j+1 A j ] + [A j+1 A j ] 2 + [M j+1 M j ] 2 i : 57 Then, sinceA is monotone, E P [hMi T ] = E P [M 2 T ] = n X i=0 E P h [M j+1 M j ] 2 i n X i=0 E P h Y 2 i+1 Y 2 i 2Y i [A j+1 A j ] i E P h Y 2 T + 2 sup 0in jY i jjA T j i E P h C" 1 max 0in jY i j 2 +"jA T j 2 i ; (3.9) for any"> 0. 
Moreover, sinceA T =Y T Y 0 M T , we have E P [jA T j 2 ] CE P h max 0in jY i j 2 +jM T j 2 i E h C" 1 max 0in jY i j 2 +C"jA T j 2 i Choose" = 1 2C for the aboveC, we have E P [jA T j 2 ]CE P h max 0in jY i j 2 i : This, together with (3.9), implies the second inequality. 3.2.2 Square integrable semi-martingales In this subsection we characterize the norm for square integrable semi-martingales. For 0t 1 <t 2 T , let t 2 _ t 1 A denote the total variation ofA over the interval (t 1 ;t 2 ]. Definition 3.2.4. We say a semi-martingaleY in the form of (3.1) is a square integrable semi-martingale if E P h jY 0 j 2 +hMi T + T _ 0 A 2 i <1: (3.10) We remark that (3.10) is the norm used in standard literature for semi-martingales, see 58 e.g. [56]. Clearly, for a square integrable semi-martingale Y , we havekYk P;0 < 1. However, whenA is not monotone, in general the left side of (3.10) cannot be dominated byCkYk 2 P;0 . See Example 3.6.1 below. Our goal is to characterize square integrable semi-martingales via the processY itself, without involving M and A directly. In many situations, we may have a representation formula for the processY , but in general it is difficult to obtain representation formulas for M andA. So it is much easier to verify conditions imposed onY than those onM andA. We introduce the following norm: kYk 2 P :=kYk 2 P;0 + sup E P h n1 X i=0 E P i (Y i+1 )Y i 2 i ; (3.11) where the supremum is over all partitions : 0 = 0 n = T for some stopping times 0 ; ; n . Remark 3.2.5. The normkk P is motivated from the definition of quasimartingale, see e.g. [41]: A càdlàg processY is a called a quasimartingale if sup E P h n1 X i=0 E P i (Y i+1 )Y i i <1: (3.12) Remark 3.2.6. The main reason that we assumeF is the Brownian filtration in (3.2) is to ensure the martingale partM is continuous, see (3.3). WhenF is a general right continuous filtration, our results still hold true if M is continuous. 
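The partition functional in the norm (3.11) can be evaluated in closed form for a toy semimartingale. Take \(Y_t = B_t^2 = M_t + t\), so \(E^P_{t_i}[Y_{t_{i+1}}] - Y_{t_i} = t_{i+1} - t_i\) is deterministic and no simulation is needed. A minimal Python sketch (the function name is ours, not from the text):

```python
# Partition sum from (3.11) for the toy semimartingale Y_t = B_t^2,
# whose decomposition is Y = M + A with A_t = t.  Each summand
# (E_{t_i}[Y_{t_{i+1}}] - Y_{t_i})^2 equals (t_{i+1} - t_i)^2 here.

def partition_sum(times):
    """Sum_i (E_{t_i}[Y_{t_{i+1}}] - Y_{t_i})^2 for Y_t = B_t^2."""
    return sum((t2 - t1) ** 2 for t1, t2 in zip(times[:-1], times[1:]))

T = 1.0
for n in (1, 2, 4, 8):
    grid = [T * k / n for k in range(n + 1)]
    print(n, partition_sum(grid))
```

Refining the partition only decreases the sum, so the supremum over partitions is attained at the trivial partition \(\{0, T\}\) and equals \(T^2\), the square of the total variation of \(A_t = t\), consistent with the equivalence asserted in Theorem 3.2.7.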
If M is discontinuous, we shall modify the normkk as: kYk 2 P :=kYk 2 P;0 + sup E P h n1 X i=0 E P i (Y i+1 )Y i 2 i +E P h sup 0tT Y t Y t j 2 i : (3.13) 59 The following a priori estimate is the main technical result of the chapter. Theorem 3.2.7. There exist universal constants 0 < c < C such that, for any square integrable semi-martingaleY t =Y 0 +M t +A t , ckYk 2 P E P h jY 0 j 2 +hMi T + T _ 0 A 2 i CkYk 2 P : (3.14) Proof. We first prove the left inequality. Let : 0 = 0 n =T be an arbitrary partition, and denote A i+1 :=A i+1 A i . Then E P h n1 X i=0 E P i (Y i+1 )Y i 2 i =E P h n1 X i=0 E P i (A i+1 )A i 2 i E P h n1 X i=0 E P i (jA i+1 j) 2 i =E P h n1 X i=0 [E P i (jA i+1 j)jA i+1 j] + n1 X i=0 jA i+1 j 2 i CE P h n1 X i=0 [E P i (jA i+1 j)jA i+1 j] 2 i +CE P h T _ 0 A 2 i : (3.15) Note that j X i=0 [E P i (jA i+1 j)jA i+1 j];j = 0; ;n 1; is a martingale: Then E P h n1 X i=0 [E P i (jA i+1 j)jA i+1 j] 2 i =E P h n1 X i=0 E P i (jA i+1 j)jA i+1 j 2 i CE P h n1 X i=0 E P i (jA i+1 j) 2 +jA i+1 j 2 i CE P h n1 X i=0 E P i (jA i+1 j 2 ) +jA i+1 j 2 i CE P h n1 X i=0 jA i+1 j 2 i CE P h n1 X i=0 jA i+1 j 2 i CE P h T _ 0 A 2 i : 60 This, together with (3.15) and the left inequality of (3.4), proves the left inequality of (3.14). We now prove the right inequality. First, for any " > 0, following the arguments in Lemma 3.2.2 one can easily show that E P [hMi T ]C" 1 kYk 2 P;0 +"E P h T _ 0 A 2 i : (3.16) We claim that E P h T _ 0 A 2 i CkYk 2 P +CE P [hMi T ]: (3.17) This, together with (3.16) and by choosing" small enough, implies the right inequality of (3.14) immediately. We prove (3.17) in several steps. Step1. Let : 0 = 0 1 ::: n =T be an arbitrary partition. 
Note that E P i [Y i+1 ]Y i =E P i [A i+1 ]A i : Then n1 X i=0 A i+1 E i [A i+1 ] = A T n1 X i=0 E P i [A i+1 ]A i = Y T Y 0 M T n1 X i=0 E P i [Y i+1 ]Y i : By the definition ofkYk P , we see that E P h n1 X i=0 A i+1 E i [A i+1 ] 2 i CkYk 2 P +CE P [hMi T ]: 61 Note that j1 X i=0 A i+1 E i [A i+1 ] ; j = 1; ;n; is a martingale: Then E P h n1 X i=0 A i+1 E i [A i+1 ] 2 i CkYk 2 P +CE P [hMi T ]: (3.18) Step 2. In this step we assumeA t = R t 0 a s dK s , whereK is a continuous nondecreasing process anda is a simple process. That is, a = n1 X i=0 a t i 1 [t i ;t i+1 ) for some 0 =t 0 <<t n =T: Then, denoting i :=sign(a t i ), V (A) = Z T 0 ja t jdK t = n1 X i=0 Z t i+1 t i i a t dK t = n1 X i=0 i [A t i+1 A t i ] = n1 X i=0 i A t i+1 E P t i [A t i+1 ] + n1 X i=0 i E P t i [A t i+1 ]A t i : Note that j X i=0 i A t i+1 E P t i [A t i+1 ] ; j = 0; ;n 1; is a martingale: Then E P h V (A) 2 i CE P h n1 X i=0 A t i+1 E P t i [A t i+1 ] 2 + n1 X i=0 E P t i [A t i+1 ]A t i 2 i : By (3.18) and the definition ofkYk P we obtain (3.17). 62 Step 3. We now prove (3.17) for general continuousA . DenoteK t := t _ 0 A. SinceA is continuous,K is also continuous. MoreoverdA t is absolutely continuous with respect todK t and thusdA t =a t dK t for somea. By [35] Chapter 3 Lemma 2.7, for every"> 0 there exists a simple processfa " g such that E P h Z T 0 ja " t a t jdK t 2 i ": (3.19) Denote A " t := Z t 0 a " s dK s ; Y " t :=Y 0 +M t +A " t : Then by Step 2 we see that E P h T _ 0 A " 2 i CkY " k 2 P +CE P [hMi T ]: (3.20) Note that T _ 0 A T _ 0 A " + T _ 0 [A " A] T _ 0 A " + Z T 0 ja " t a t jdK t : Then E P h T _ 0 A 2 i CE P h T _ 0 A " 2 i +C": (3.21) On the other hand, apply the left inequality of (3.14) onY " Y =A " A, we get kY " Yk 2 P CE P h T _ 0 (A " A) 2 i CE P h Z T 0 ja " t a t jdK t 2 i C": 63 Then kY " k 2 P CkYk 2 P +C": Plug this and (3.21) into (3.20), we get E P h T _ 0 A 2 i CkYk 2 P +CE P [hMi T ] +C": Since" is arbitrary, we obtain (3.17). Step 4. 
We now prove (3.17) for the general case. Since A is of bounded variation, we can decomposeA = A c +A d , whereA c is the continuous part andA d is the part with pure jumps. Since Y is càdlàg and M is continuous, A and A d are càdlàg. We denote Y c t =Y 0 +M t +A c t . From step 3 we have E P h jY 0 j 2 +hMi T + T _ 0 A c 2 i CkY c k 2 P : Note that kY c k P kYk P +kA d k P and apply the left inequality of (3.14) onA d we see that kA d k 2 P CE P h T _ 0 A d 2 i : Then E P h jY 0 j 2 +hMi T + T _ 0 A c 2 i CkYk 2 P +CE P h T _ 0 A d 2 i ; 64 and thus it suffices to show that E P h T _ 0 A d 2 i CkYk 2 P : (3.22) To this end, we first note that T _ 0 A d = X 0tT jA t j = X 0tT jY t j: (3.23) Define, for eachn, D n := X 0tT jY t j1 fjYtj 1 n g ; and, n 0 := 0, and form 0, by denotingY t :=Y T fortT , n m+1 := inf n t> n m :jY t j 1 n o ^ (T + 1): We remark that we useT + 1 instead ofT here so that Y T will not be counted repeatedly at below. Then it is clear that D n " X 0tT jY t j as n!1; and m X i=1 jY n i j"D n as m!1: Therefore, to obtain (3.22) it suffices to show that E P h m X i=1 jY n i j 2 i kYk 2 P for all n;m: (3.24) We now fixn;m. Notice that theF is quasi-left continuous. Then for each n i , there existf n i;j ;j 1g such that n i;j < n i and n i;j " n i asj!1. By definition ofkYk P , we 65 have E P h m X i=1 jE P n i1 _ n i;j [Y n i ]Y n i1 _ n i;j j 2 i kYk 2 P : (3.25) Sendj!1, sinceF is continuous, we see that lim j!1 [E P n i1 _ n i;j [Y n i ]Y n i1 _ n i;j ] =Y n i Y n i = Y n i : Then by applying the Dominated Convergence Theorem we obtain (3.24) from (3.25). Theorem 3.2.8. Let Y be an F-progressively measurable càdlàg process. Then Y is a square integrable semi-martingale if and only ifkYk P <1. Proof. By Theorem 3.2.7, it suffices to prove the if part. AssumekYk P <1. For each n, lett n i := i n T ,i = 0; ;n. 
Denote, fori = 0; ;n, M n t n i := i X j=1 Y t n j E P t n j1 [Y t n j ] ; K +;n t n i := i X j=1 E P t n j1 [Y t n j ]Y t n j1 + ; K ;n t n i := i X j=1 E P t n j1 [Y t n j ]Y t n j1 : ThenM n is a martingale,K +;n ;K ;n are nondecreasing, and Y t n i =Y 0 +M n t n i +A n t n i ; where A n t n i :=K +;n t n i K ;n t n i : Moreover, E P h (K +;n T ) 2 + (K ;n T ) 2 i kYk 2 P <1: 66 Then following the arguments for the standard Doob-Meyer decomposition, see e.g. [35], one can easily prove the result. 3.3 Semimartingales underG-expectation In this section we introduce a nonlinear expectation, which is a variant of theG-expectation proposed by Peng [50]. Let ( ;F;F) be a filtered space such thatF is right continuous,P a family of probability measures. For eachP2P andF-stopping time, denote P(;P) := P 0 2P :P 0 =P on F : (3.1) We shall assume Assumption 3.3.1. (i)N P F 0 , whereN P is the set of allP-polar sets, that is, allE2F such thatP(E) = 0 for allP2P. (ii) For anyP2P,F-stopping time,P 1 ;P 2 2P(;P), and any partitionE 1 ;E 2 2F of , the probability measure P defined below also belongs toP(;P): P(E) :=P 1 (E\E 1 ) +P 2 (E\E 2 ); 8E2F: (3.2) 3.3.1 Definitions We first define Definition 3.3.2. We say anF-progressively measurable processY is aP-martingale (resp. P-supermartingale,P-submartingale,P-semi-martingale) if it is aP-martingale (resp.P- supermartingale,P-submartingale,P-semi-martingale) for allP2P. 67 We next define the conditionalG-expectation. For anyF-measurable random variable such thatE P [jj]<1 for allP2P, and anyF-stopping time, denote E G;P [] := P ess sup P 0 2P(;P) E P 0 []; P a.s. (3.3) We note that, by Lemma 3.2.1,E G;P [] isF -measurable. When the familyfE G;P [];P2 Pg can be aggregated, that is, there exists anF -measurable random variable, denoted as E G [], such that E G [] =E G;P []; P a.s. for all P2P; (3.4) we callE G [] the conditionalG-expectation of. 
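The sublinear expectation in (3.3) can be probed numerically by maximizing ordinary expectations over a subfamily of measures. A hedged Monte Carlo sketch, restricting purely for illustration to constant-volatility laws \(\sigma \in [\underline\sigma, \overline\sigma]\) (for a convex payoff the maximum over this subfamily sits at \(\overline\sigma\); function names are ours):

```python
import math
import random

# Monte Carlo sketch of E^G[phi(B_T)] = sup_P E^P[phi(B_T)], restricted
# (illustration only) to constant-volatility measures sigma in [lo, hi].

def mc_expectation(phi, sigma, T=1.0, n=20000, seed=0):
    """Plain Monte Carlo estimate of E[phi(sigma * B_T)]."""
    rng = random.Random(seed)
    return sum(phi(sigma * math.sqrt(T) * rng.gauss(0.0, 1.0))
               for _ in range(n)) / n

def g_expectation(phi, lo, hi, steps=8):
    """Maximize the linear expectation over a sigma-grid in [lo, hi]."""
    sigmas = [lo + (hi - lo) * k / steps for k in range(steps + 1)]
    return max(mc_expectation(phi, s) for s in sigmas)

phi = lambda x: x * x          # convex payoff: sup attained at sigma = hi
val = g_expectation(phi, 0.5, 1.0)
# For phi(x) = x^2 the exact value over this subfamily is hi^2 * T = 1.0.
```

This only bounds the true \(G\)-expectation from below in general, since the full family \(\mathcal{P}\) is much larger than the constant-volatility subfamily.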
Following standard arguments, we have the following Dynamic Programming Principle Lemma 3.3.3. Under Assumption 3.3.1, for any 1 2 and anyP2P, we have E G;P 1 [] = P ess sup P 0 2P( 1 ;P) E P 0 1 E G;P 0 2 [] ; P a.s. Proof. We have E G;P 1 [] = P ess sup P 0 2P( 1 ;P) E P 0 1 [] = P ess sup P 0 2P( 1 ;P) E P 0 1 [E P 0 2 []] P ess sup P 0 2P( 1 ;P) E P 0 1 [E G;P 0 2 []]: To prove the other inequality, we recall that by [42], there exists a sequence P n 2 P( 2 ;P 0 ) such that E P n 2 []! P 0 ess sup ~ P2P( 2 ;P 0 ) E ~ P 2 []; as n!1: If we can construct a sequence ~ P n 2P( 2 ;P 0 ) such thatE ~ P 1 2 []E ~ P 2 2 [] ::: P 0 a.s. 68 andE ~ P n 2 []" ess sup P 0 ~ P2P( 2 ;P 0 ) E ~ P 2 [] then for allP 0 2P( 1 ;P) E P 0 1 h P 0 ess sup ~ P2P( 2 ;P 0 ) E ~ P 2 [] i = lim n!1 E P 0 1 h E ~ P n 2 [] i = lim n!1 E ~ P n 1 h E ~ P n 2 [] i = lim n!1 E ~ P n 1 []E G;P 1 [] P a.s. Taking the sup overP 0 2P( 1 ;P) we obtain the second inequality. Lastly, we show generally forP 1 ;P 2 2P( 2 ;P 0 ), there is ~ P 2 2P( 2 ;P 0 ) such thatE ~ P 2 2 [] = E P 1 2 []_E P 2 2 []. Indeed, forE2F, define ~ P 2 (E) :=P 1 (E\E + ) +P 2 (E\E ) where E + := n E P 1 2 [] E P 2 2 [] o and E := n E P 1 2 [] < E P 2 2 [] o . By Assumption (3:2); ~ P 2 2P( 2 ;P 0 ). Moreover, forE2F 2 : E ~ P 2 2 [1 E ] = E ~ P 2 [1 E ] =E ~ P 2 [1 E\E +] +E ~ P 2 [1 E\E ] = E P 1 [1 E\E +] +E P 2 [1 E\E ]: By the definition ofE + andE we concludeE ~ P 2 2 [] = E P 1 2 []_E P 2 2 []. The existence of ~ P n follows by induction. Definition 3.3.4. We say anF-progressively measurable processY is aG-martingale (resp. G-supermartingale,G-submartingale) if, for anyP2P and anyF-stopping times 1 2 , Y 1 = (resp.;)E G;P 1 [Y 2 ]; P a.s. We remark that aP-martingale is also called a symmetricG-martingale in the literature, see e.g. [64]. 69 3.3.2 Characterization ofP-semi-martingales The following result is immediate: Proposition 3.3.5. Let Assumption 3.3.1 hold. 
(i) AP-martingale (resp.P-supermartingale,P-submartingale) must be aG-martingale (resp.G-supermartingale,G-submartingale). (ii) IfY is aG-martingale (resp.G-supermartingale,G-submartingale) andM is aP- martingale, thenY +M is aG-martingale (resp.G-supermartingale,G-submartingale). (iii) AG-supermartingale is aP-supermartingale. In particular, aG-martingale is a P-supermartingale. Proof. (i) and (ii) are obvious. To prove (iii), letY be aG-supermartingale. Then for any 1 2 and anyP2P, Y 1 E G;P 1 [Y 2 ]E P 1 [Y 2 ]; P-a.s. That is,Y is aP-supermartingale for allP2P, and thus is aP-supermartingale. We next studyP-semi-martingales. In light of Theorem 3.2.8, we define a new norm: kYk P := sup P2P kYk P : (3.5) The following result is a direct consequence of Theorems 3.2.7 and 3.2.8. Theorem 3.3.6. Assume Assumption 3.3.1 holds and (3.2) holds for allP2P. IfkYk P < 1, thenY is aP-semi-martingale. Moreover, for anyP2P and for the decomposition Y t =Y 0 +M P t +A P t ; P-a.s. (3.6) 70 we have E P h hM P i T + T _ 0 A P 2 i CkYk 2 P : The normkk P is defined through eachP2P. The following definition relies on the G-expectation directly: kYk 2 G :=E G h sup 0tT jY t j 2 i + sup sup P2P E P h n1 X i=0 E G;P i (Y i+1 )Y i 2 i : (3.7) Remark 3.3.7. (i) If the involved conditionalG-expectations exist, then we may simplify the definition ofkYk G : kYk 2 G :=E G h sup 0tT jY t j 2 i + sup E G h n1 X i=0 E G i (Y i+1 )Y i 2 i : (ii) In generalkk G does not satisfy the triangle inequality and thus is not a norm. (iii) ForG-submartingalesY 1 ;Y 2 , the triangle inequality holds: kY 1 +Y 2 k G kY 1 k G +kY 2 k G : However, in generalY 1 +Y 2 may not be aG-submartingale anymore. Nevertheless,kYk G involves the processY only, and we have the following estimate. Theorem 3.3.8. Assume Assumption 3.3.1 holds and (3.2) holds for allP2P. Then there exists a universal constantC such thatkYk P CkYk G . Proof. Without loss of generality, we assumekYk G <1. 
For any P2P and any 71 partition : 0 = 0 n =T , denote N i := i1 X j=0 h E G;P j (Y j+1 )Y j i : Then Y i N i = Y 0 + i1 X j=0 h Y j+1 E G;P j (Y j+1 ) i = Y 0 + i1 X j=0 h Y j+1 E P j (Y j+1 ) i i1 X j=0 h E G;P j (Y j+1 )E P j (Y j+1 ) i : Note that i1 X j=0 h Y j+1 E P j (Y j+1 ) i is aP-martingale; i1 X j=0 h E G;P j (Y j+1 )E P j (Y j+1 ) i is nondecreasing and isF i1 -measurable: Applying Lemma 3.2.3 we obtain E P h n1 X j=0 E G;P j (Y j+1 )E P j (Y j+1 ) 2 i CE P h sup 0in [jY i j 2 +jN i j 2 ] i CkYk 2 G : This, together with the definition ofkk G , implies that E P h n1 X j=0 E P j (Y j+1 )Y j 2 i CkYk 2 G : Since is arbitrary, we getkYk P CkYk G . Finally, sinceP2P is arbitrary, we prove the result. 72 3.3.3 Doob-Meyer Decomposition forG-submartingales As a special case of Theorem 3.3.6, we have the following decomposition forG submartin- gales. Assumption 3.3.9. We suppose thatY is uniformly continuous in!. That is, there exists a modulus of continuity such that jY t (!)Y t (~ !)j(k! ~ !k t );8(!; ~ !)2 2 : We also assume thatkYk G <1. In particular,Y is a square integrableP-semimartingale for allP2P and Y t =Y 0 + Z t 0 P s dB s A P t ; P-a.s. whereA is a process of finite variation. The following theorem is the main result of this subsection. Theorem 3.3.10. LetY be a continuous process that satisfies assumption (3.3.9). Suppose alsoY is aG-submartingale. Then there exists a family ofP-consistent processesfM P ;k P g such that Y t =Y 0 +M P t +k P t ; P-a.s.; wherek P is increasing andM P satisfies ess sup P 0 2P(P;s) E P 0 s M P 0 t =M P s ; P-a.s.: That isfM P g is aG-martingale. Moreover, letA P = [A P ] + [A P ] be the Jordan decom- 73 position ofA P . Then we havek P = [A P ] . Proof. We let (y P ;z P ;k P ) denote the unique solution to the RBSDE with upper obstacle Y and terminal conditionY T underP: y P t =Y T Z T t z P s dB s k P T +k P t ; 0tT; P-a.s. y P t Y t ; P-a.s. Z T 0 (Y t y P t )dk P t = 0; P-a.s. 
By Matoussi, Piozin and Possamai [39], the 2RBSDE with upper obstacleY and terminal conditionY T has a unique solution. That is, there exist unique processes (Y;Z) such that Y T =Y T ;P-q.s. V P t :=Y 0 Y t Z t 0 Z s dB s ; 0tT is of bounded variationP-a.s. V P s +k P s = ess inf P 0 2P(P;s) E P 0 s V P t +k P t ; 0s<tT; P-a.s. Y t Y t ;P-q.s. (3.8) We claim thatY t =Y t ;P-q.s. Indeed, by optimal stopping y P s = ess inf 2T s;T E P s Y : By Nutz and Zhang’s Theorem 3.4 [45] and the fact thatY is aG-submartingale, ess sup P2P y P s = ess sup P 0 2P(P;s) ess inf 2T s;T E P 0 s Y = ess inf 2T s;T ess sup P 0 2P(P;s) E P 0 s Y =Y s : 74 On the other hand, by the 2RBSDE representation ess sup P 0 2P(P;s) y P s =Y s ; P-a.s. HenceY s =Y s ; P-a.s. Since both have path regularity,Y t =Y t ;8tP-q.s. By unique- ness of semi-martingale decomposition,Z = P; P-q.s. and A P = V P ; P-a.s. for all P2P. By [39] Proposition 2.1, k P = [A P ] and hencefk P g is a P-consistent family. Moreover, from the property of 2RBSDE solution,(A P +k P ) is aG-martingale. Thus Y t = Y 0 + Z t 0 P s dB s A P t = Y 0 + Z t 0 P s dB s (A P t +k P t ) +k P t ; which is the desired decomposition. 3.4 Absolute continuity of the finite variational processes Let ( ;F;F;P) be as in Section 3.3, whereF is right continuous and Assumption 3.3.1 holds. But we do not require (3.2) in this section. LetY be aP-semi-martingale. In this section we investigate when its finite variation part is absolutely continuous with respect to the Lebesgue measuredt. For this purpose, we letL denote the space ofF-progressively measurable processes such that is bounded and piecewise constant. For anF-progressively measurable càdlàg processY , define the Daniel integral as a linear operator onL : I Y () := n1 X i=0 i (Y i+1 Y i ); for all := n1 X i=0 i 1 [ i ; i+1 ) 2L (3.1) 75 3.4.1 The absolute continuity ofP-semi-martingales We first fixP2P. 
For 1 p1, letL p P denote the space ofF-progressively measur- able processes such thatkk p L p P := E P h R T 0 j t j p dt i <1. Now for anF-progressively measurable càdlàg processY which is uniformly integrable underP, define kYk P;p := sup n jE P [I Y ()]j kk L q P : 06=2L o ; (3.2) where 1q<1 is the conjugate ofp. Theorem 3.4.1. IfkYk P;p <1, then dY t = dM t +a t dt where M is a martingale and a2L p P withkak L p P kYk P;p . Proof. Note that jE P [I Y ()]jkYk P;p kk L q P for all 2L : SinceL is dense inL q P under normkk L q P , we can extendI Y toL q P such that jE P [I Y ()]jkYk P;p kk L q P for all 2L q P : By the Riesz’s representation theorem, there isa2L p P such that E P [I Y ()] =E P Z T 0 t a t dt for all 2L q P ; and kak L p P kYk P;p : Define M t :=Y t Y 0 Z t 0 a s ds: 76 We see that, for any stopping times 1 2 and any 1 2 F 1 , by denoting := 1 1 [ 1 ; 2 ) 2L , E P h 1 [M 2 M 1 ] i =E P h 1 [Y 2 Y 1 ] Z 2 1 t a t dt i =E P h I Y () Z T 0 t a t dt i = 0: This implies thatM is a martingale. Corollary 3.4.2. Assume (3.2) holds. There exists a constant C such that, for any F- progressively measurable uniformly integrable processY , kYk P C h kYk P;0 +kYk P;2 i : Proof. Without loss of generality we assumekYk P;0 +kYk P;2 <1. By Theorem 3.4.1 we havedY t = dM t +a t dt whereM is a martingale anda2 L 2 P . DenoteA t := R t 0 a s ds. Then E P h T _ 0 A 2 i =E P h Z T 0 ja t jdt 2 i TE P h Z T 0 ja t j 2 dt i =TkYk 2 P;2 : (3.3) Note that dY 2 t = 2Y t dM t + 2Y t a t dt +dhMi t : Then E P h hMi T i = E P h jY T j 2 jY 0 j 2 2 Z T 0 Y t a t dt i (3.4) CE P h sup 0tT jY t j 2 + Z T 0 ja t j 2 dt i C h kYk 2 P;0 +kYk 2 P;2 i : 77 Combining (3.3) and (3.4) we obtain E P h jY 0 j 2 +hMi T + T _ 0 A 2 i C h kYk 2 P;0 +kYk 2 P;2 i : Then by applying Theorem 3.2.7 we prove the result. 3.4.2 Absolute continuity ofP-semi-martingales We now letY be anF-progressively measurable càdlàg process such thatY is uniformly integrable underP for allP2P. 
For 1<p1, define kYk P;p := sup P2P kYk P;p : (3.5) The following result is a direct consequence of Theorem 3.4.1 Proposition 3.4.3. Assume Assumption 3.3.1 holds. IfkYk P;p <1 for some 1<p1, then Y is aP-semi-martingale with decomposition dY t = dM P t +a P t dt, where M P is a P-martingale and sup P2P E P h Z T 0 ja P t j p dt i <1: Moreover, ifP is separable in the sense of [60], thendY t =dM t +a t dt, whereM is a P-martingale andE G h R T 0 ja t j p dt i <1. 3.5 G-martingale representation with component We now consider the framework in [61]. Let := !2 C([0;T ]) : ! 0 = 0 , B the canonical process, and 0 < be two constants. LetP be the set of all probability 78 measuresP such thatB is aP-martingale, and there exists a constant 0 < " P 2 such that [" P _ 2 ]dtdhBi t 2 dt. By [61], there exists a process ^ a such that dhBi t = ^ a t dt; P-a.s. for allP2P: (3.1) We use the filtrationF =fF t g: F t :=F B t+ _N P whereN P is as in Assumption 3.3.1 (i). (3.2) Then one can easily see that Assumption 3.3.1 holds. Peng [50] introduced the following function: G( ) := 1 2 sup 2 = 1 2 h 2 + 2 i : (3.3) It is known that, see e.g. [61], for = g(B T ) whereg is a Lipschitz continuous function, we haveE G t [] =u(t;B t ) whereu is the unique viscosity solution to the following PDE: u t +G(u xx ) = 0; u(T;x) =g(x): (3.4) LetL ip denote the space of all random variables'(B t 1 ; ;B tn ) where' is a Lipschitz continuous function. For2L ip , define kk 2 G :=E G h sup 0tT E G t [jj] 2 i ; (3.5) and letL G be the closure ofL ip under the normkk G . By [61], for any 2L G , the conditional G-expectationE G t [] exists and is a continuous G-martingale. Moreover, we have the decomposition (3.1). Our goal of this section is to study the further decomposition 79 (3.3), conjectured by Peng. 3.5.1 Existence of Theorem 3.5.1. Let2L G . 
IfkE G []k P;p <1 for some 1 < p1, then we have the following decomposition: E G t [] = E G [] + Z t 0 Z s dB s Z t 0 2G( s )ds + Z t 0 s dhBi s (3.6) = E G [] + Z t 0 Z s dB s Z t 0 [2G( s ) s ^ a s ]ds; whereZ; areF-progressively measurable such that E G h Z T 0 Z 2 t dt + Z T 0 k p t dt i <1 where k := 2G() ^ a 0: (3.7) Proof. For2L G , by [61] there existZ and nondecreasing processK such that E G t [] =E G [] + Z t 0 Z s dB s K t and E G h Z T 0 Z 2 t dt +jK T j 2 i <1 SincekE G []k P;p <1, by Proposition 3.4.3 we see that dK t =k t dt and E G h Z T 0 k p t dt i <1: Note that 2 ^ a 2 , and thek in (3.7) is equivalent to k = [ 2 ^ a] + [^ a 2 ] : (3.8) 80 Set := 8 > > > > < > > > > : k ^ a 2 ; on f^ a = 2 g; k 2 ^ a ; onf^ a = 2 g; k 2 ^ a or k ^ a 2 ; on f 2 < ^ a< 2 g: (3.9) One can check straightforwardly that satisfies all the requirements. Remark 3.5.2. The above martingale representation theorem holds true without assuming B has martingale representation property under eachP2P. The main reason is that in this framework we may start from the PDE (3.4) and apply the Itô’s formula. Remark 3.5.3. Denote kk G;p :=kk G +kE G (jj)k P;p : (3.10) We shall note thatkk G;p does not satisfy the triangle inequality and thus is not a norm. We can define instead: for any 1<p1, p ( 1 ; 2 ) :=k 1 2 k G +kE G ( 1 )E G ( 2 )k P;p : (3.11) Then p defines a metric. LetL G;p denote the closure ofL ip under p . Then clearly we have the decomposition (3.6) for all2L G;p . Remark 3.5.4. Hu-Peng [34] considers the following metric: for some2 (1; 2), 0 ( 1 ; 2 ) :=k 1 2 k G + E G h sup [K 1 t i+1 K 1 t i ] [K 2 t i+1 K 2 t i ] i1 ; (3.12) where G is a modification theG, andK i is the increasing process in the (unique) decom- position of the G-martingaleE G t [ i ]. They also proved (3.6) when is in the closure of 81 L ip under . We note that the above metric depends on the process K, while our metric p involves onlyE G t []. 
Moreover, in (3.12) the supremum over the partitions is inside the G-expectation, while in (3.11) which depends on (3.5) and (3.2), essentially the supremum over the partitions are outside of the expectations and thus is weaker. 3.5.2 Uniqueness of From (3.9), clearly is not unique unlessk = 0, that is,E G t [] is aP-martingale. Song [58] proved that there is at most one in the spaceM 2 G as defined below. LetM 0 G denote the space ofF-progressively measurable and piecewise constant pro- cesses such that t 2L ip for allt, andM 2 G be the closure ofM 0 G under the norm: kk 2 H 2 G :=E G h Z T 0 j t j 2 dt i : (3.13) We next introduce another space of for which we shall have existence of inM 2 G . For this purpose, we assume > 0: (3.14) For = '(B t 1 ; ;B tn )2L ip , by Peng [50] we know there existZ; 2M 2 G such that (3.6) holds. Now for i 2L ip and for the correspondingZ i ; i 2M 2 G ,i = 1; 2, we define: ( 1 ; 2 ) :=k 1 2 k G +kZ 1 Z 2 k H 2 G +k 1 2 k H 2 G : (3.15) Let L := the closure ofL ip under the above metric. (3.16) 82 We then have Theorem 3.5.5. Assume Assumption 3.3.1 and (3.14) hold. Then for any2L, there exist uniqueZ; 2M 2 G such that (3.6) holds. Proof. Let2L and n 2L ip such that lim n!1 (; n ) = 0. Let (Z n ; n )2M 2 G M 2 G be corresponding to n . Then by definition of we see thatf(Z n ; n );n 1g are Cauchy sequence under the normkk H 2 G . Thus there exist (Z; )2M 2 G M 2 G such that lim n!1 h kZ n Zk H 2 G +k n k H 2 G i = 0: Then it is straightforward to check that (Z; ) satisfy (3.6) for. The uniqueness ofZ and follow from [61] and [58], respectively. Remark 3.5.6. While the conclusion of Theorem 3.5.5 looks nice, the metric is rather strong and consequently the spaceL could be small. It is not clear how largeL is. More- over, in (3.15), we use the same norm for Z and . This is not reasonable, because in the Markovian case Z and correspond to the first and second derivatives of the PDE, respectively. 
Intuitively, the norm for $\eta$ should be weaker than that for $Z$. As pointed out in the introduction, Peng, Song and Zhang [52] provided a different norm and addressed this problem completely in that paper.

3.6 Some counterexamples

We first provide an example such that $\|Y\|_{P,0} < \infty$ but $\|Y\|_P = \infty$.

Example 3.6.1. Fix $P$. Let $K$ be an $\mathbb F$-progressively measurable continuous increasing process such that $K_0 = 0$ and $E^P[K_T^2] = \infty$. Define the sequence of stopping times: $\tau_0 := 0$ and, for $n\ge 1$, $\tau_n := \inf\{t\ge 0: K_t = n\}\wedge T$. Since $K_T < \infty$, $\tau_n = T$ for $n$ large enough, a.s. We now define the process $Y$ as follows: $Y_0 := 0$, and for $n\ge 0$,
$$Y_t := \begin{cases} Y_{\tau_{2n}} - K_t + K_{\tau_{2n}}, & t\in(\tau_{2n},\tau_{2n+1}];\\ Y_{\tau_{2n+1}} + K_t - K_{\tau_{2n+1}}, & t\in(\tau_{2n+1},\tau_{2n+2}]. \end{cases} \quad (3.1)$$
Then $\|Y\|_{P,0} < \infty$ but $\|Y\|_P = \infty$.

Proof. It is easy to check that $-1 \le Y_t \le 0$ and $\bigvee_0^T Y = K_T$. Then $\|Y\|_{P,0} \le 1$ and $E^P\big[|\bigvee_0^T Y|^2\big] = \infty$. By Theorem 3.2.8, we get $\|Y\|_P = \infty$.

We next provide a $G$-submartingale such that $\sup_{P\in\mathcal P}\|Y\|_{P,0} < \infty$ but $\|Y\|_P = \infty$.

Example 3.6.2. Fix $P\in\mathcal P$. Let $K$ be as in Example 3.6.1, such that $-K$ is a $G$-martingale and $E^G[K_T^2] = \infty$ instead of $E^P[K_T^2] = \infty$. Then the process $Y$ defined in Example 3.6.1 satisfies all the requirements.

Proof. By the proof of Example 3.6.1, clearly $\sup_{P\in\mathcal P}\|Y\|_{P,0} < \infty$ but $\|Y\|_P = \infty$. Moreover, on $(\tau_{2n},\tau_{2n+1}]$ we have $dY_t = -dK_t$ and thus $Y$ is a $G$-martingale; and on $(\tau_{2n+1},\tau_{2n+2}]$ we have $dY_t = dK_t$, so $Y$ is increasing and thus a $G$-submartingale. Hence $Y$ is a $G$-submartingale on $[0,T]$.

3.7 Good integrators

We conclude the chapter by providing the connection with the Bichteler-Dellacherie theorem. Fix $P$ and recall (3.1). We call $Y$ a good integrator if:
$$\text{for any } \{\theta^k\}\subset L,\quad \lim_{k\to\infty}\|\theta^k\|_{L^\infty(P)} = 0 \text{ implies that } I_Y(\theta^k) \text{ converges to } 0 \text{ in probability } P. \quad (3.2)$$
The Bichteler-Dellacherie theorem states (see e.g. [56]) that
$$Y \text{ is a semi-martingale if and only if } Y \text{ is a good integrator.} \quad (3.3)$$
Clearly, if $\|Y\|_P < \infty$, by Theorem 3.2.8 and (3.3) we know that $Y$ must be a good integrator. Below we provide a direct proof of this.

Proposition 3.7.1.
If $\|Y\|_P < \infty$, then $Y$ is a good integrator.

Proof. Let $\theta^k = \sum_{i=0}^{n_k-1}\theta^k_i 1_{[t^k_i,t^k_{i+1})} \in L$ be such that $\lim_{k\to\infty}\|\theta^k\|_{L^\infty(P)} = 0$. Denote $\Delta Y^k_{i+1} := Y_{t^k_{i+1}} - Y_{t^k_i}$. Then
$$E^P\big[|I_Y(\theta^k)|^2\big] = E^P\Big[\Big|\sum_{i=0}^{n_k-1}\theta^k_i\,\Delta Y^k_{i+1}\Big|^2\Big] \le CE^P\Big[\Big|\sum_{i=0}^{n_k-1}\theta^k_i\big(\Delta Y^k_{i+1} - E^P_{t^k_i}[\Delta Y^k_{i+1}]\big)\Big|^2\Big] + CE^P\Big[\Big|\sum_{i=0}^{n_k-1}\theta^k_i\,E^P_{t^k_i}[\Delta Y^k_{i+1}]\Big|^2\Big].$$
Since
$$\sum_{i=0}^{j-1}\theta^k_i\big(\Delta Y^k_{i+1} - E^P_{t^k_i}[\Delta Y^k_{i+1}]\big),\quad j = 1,\dots,n_k,$$
is a martingale, we have
$$E^P\big[|I_Y(\theta^k)|^2\big] \le CE^P\Big[\sum_{i=0}^{n_k-1}|\theta^k_i|^2\big|\Delta Y^k_{i+1} - E^P_{t^k_i}[\Delta Y^k_{i+1}]\big|^2\Big] + CE^P\Big[\Big(\sum_{i=0}^{n_k-1}|\theta^k_i|\,\big|E^P_{t^k_i}[Y_{t^k_{i+1}}] - Y_{t^k_i}\big|\Big)^2\Big] \le C\|\theta^k\|^2_{L^\infty(P)}E^P\Big[\sum_{i=0}^{n_k-1}\big|\Delta Y^k_{i+1} - E^P_{t^k_i}[\Delta Y^k_{i+1}]\big|^2\Big] + C\|\theta^k\|^2_{L^\infty(P)}\|Y\|_P^2 \le C\|\theta^k\|^2_{L^\infty(P)}E^P\Big[\sum_{i=0}^{n_k-1}|\Delta Y^k_{i+1}|^2\Big] + C\|\theta^k\|^2_{L^\infty(P)}\|Y\|_P^2. \quad (3.4)$$
Note that $|\Delta Y^k_{i+1}|^2 = |Y_{t^k_{i+1}}|^2 - |Y_{t^k_i}|^2 - 2Y_{t^k_i}\Delta Y^k_{i+1}$. Then
$$E^P\Big[\sum_{i=0}^{n_k-1}|\Delta Y^k_{i+1}|^2\Big] = E^P\Big[\sum_{i=0}^{n_k-1}\big(|Y_{t^k_{i+1}}|^2 - |Y_{t^k_i}|^2\big) - 2\sum_{i=0}^{n_k-1}Y_{t^k_i}\big(E^P_{t^k_i}[Y_{t^k_{i+1}}] - Y_{t^k_i}\big)\Big] = E^P\Big[|Y_T|^2 - |Y_0|^2 - 2\sum_{i=0}^{n_k-1}Y_{t^k_i}\big(E^P_{t^k_i}[Y_{t^k_{i+1}}] - Y_{t^k_i}\big)\Big] \le E^P\Big[|Y_T|^2 + 2\sup_{0\le t\le T}|Y_t|\sum_{i=0}^{n_k-1}\big|E^P_{t^k_i}[Y_{t^k_{i+1}}] - Y_{t^k_i}\big|\Big] \le CE^P\Big[\sup_{0\le t\le T}|Y_t|^2 + \Big(\sum_{i=0}^{n_k-1}\big|E^P_{t^k_i}[Y_{t^k_{i+1}}] - Y_{t^k_i}\big|\Big)^2\Big] \le C\|Y\|_P^2.$$
Plugging this into (3.4) we obtain
$$E^P\big[|I_Y(\theta^k)|^2\big] \le C\|Y\|_P^2\,\|\theta^k\|^2_{L^\infty(P)} \to 0, \quad\text{as } k\to\infty.$$
This implies that $I_Y(\theta^k)$ converges to $0$ in probability $P$, and thus $Y$ is a good integrator.

CHAPTER 4
DOUBLY REFLECTED BSDES

4.1 Introduction

Since the seminal paper of Pardoux and Peng [46], there have been numerous developments in the theory and applications of BSDEs; see e.g. the recent survey paper of Peng [47] and the references therein. A BSDE is an equation of Itô type of the following form:
$$Y_t = \xi + \int_t^T f(s,Y_s,Z_s)\,ds - \int_t^T Z_s\,dB_s, \quad (4.1)$$
where $B$ is a standard Brownian motion, $f$ and $\xi$ are given coefficients, and $(Y,Z)$ is the solution pair, which by definition is adapted to the Brownian filtration.
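The wellposedness of (4.1) also underlies simple numerical schemes. As a minimal sketch (ours, not the dissertation's): replace $B$ by a recombining binomial walk and run a backward Euler induction; for the linear driver $f(t,y,z) = \alpha y$ with $\xi = B_T^2$, the explicit solution $Y_0 = e^{\alpha T}T$ serves as a sanity check. The function name and the explicit-in-$y$ discretization below are our own illustrative choices.

```python
import numpy as np

def bsde_binomial(terminal, driver, T=1.0, n_steps=100):
    """Backward Euler sketch for the BSDE (4.1) on a recombining binomial
    approximation of Brownian motion.

    terminal: xi as a function of B_T; driver: f(t, y, z).
    Returns the time-0 value Y_0."""
    dt = T / n_steps
    sq = dt ** 0.5
    # Brownian values at the terminal layer: B_T = (2*i - N) * sqrt(dt)
    b = (2.0 * np.arange(n_steps + 1) - n_steps) * sq
    y = terminal(b)
    for n in range(n_steps - 1, -1, -1):
        t = n * dt
        y_mid = 0.5 * (y[1:] + y[:-1])        # conditional expectation E_n[Y_{n+1}]
        z = (y[1:] - y[:-1]) / (2.0 * sq)     # discrete analogue of the integrand Z
        y = y_mid + driver(t, y_mid, z) * dt  # explicit Euler step in y
    return y[0]
```

For $f \equiv 0$ this reduces to $Y_0 = E[\xi]$; convergence of such schemes to the continuous-time solution is a separate matter and is not claimed here.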
Under the assumption that $f$ is uniformly Lipschitz continuous in $(y,z)$, Pardoux and Peng [46] show that
$$\|(Y,Z)\|^2 := E\Big[\sup_{0\le t\le T}|Y_t|^2 + \int_0^T |Z_t|^2\,dt\Big] \le CE\Big[|\xi|^2 + \int_0^T |f(t,0,0)|^2\,dt\Big]. \quad (4.2)$$
One important application of the above BSDE is the pricing and hedging of European-type contingent claims. To extend the applications to American-type contingent claims, El Karoui et al [20] introduced the following Reflected BSDE, which can also be viewed as a nonlinear version of the Skorohod problem:
$$Y_t = \xi + \int_t^T f(s,Y_s,Z_s)\,ds - \int_t^T Z_s\,dB_s + K_T - K_t; \quad Y_t \ge L_t; \quad [Y_t - L_t]\,dK_t = 0. \quad (4.3)$$
Here the continuous barrier $L$ is also given, and the solution becomes a triplet $(Y,Z,K)$, where $K$ is by definition an increasing process with $K_0 = 0$. Among others, El Karoui et al [20] show that
$$\|(Y,Z,K)\|^2 := E\Big[\sup_{0\le t\le T}|Y_t|^2 + \int_0^T|Z_t|^2\,dt + |K_T|^2\Big] \le CE\Big[|\xi|^2 + \int_0^T|f(t,0,0)|^2\,dt + \sup_{0\le t\le T}|L^+_t|^2\Big]. \quad (4.4)$$
In this chapter we are interested in Reflected BSDEs with double barriers:
$$Y_t = \xi + \int_t^T f(s,Y_s,Z_s)\,ds - \int_t^T Z_s\,dB_s + A_T - A_t; \quad L_t \le Y_t \le U_t; \quad [Y_t - L_t]\,dK^+_t = [U_t - Y_t]\,dK^-_t = 0. \quad (4.5)$$
Here $L,U$ are given càdlàg barriers and $(Y,Z,A)$ is the solution triplet, where $A$ is càdlàg and has finite variation with $A_0 = 0$, and $A = K^+ - K^-$ is the orthogonal decomposition of $A$. Naturally we are interested in the following norm of the solution:
$$\|(Y,Z,A)\|^2 := E\Big[\sup_{0\le t\le T}|Y_t|^2 + \int_0^T|Z_t|^2\,dt + \Big|\bigvee_0^T A\Big|^2\Big], \quad (4.6)$$
and clearly
$$\bigvee_0^t A = K^+_t + K^-_t. \quad (4.7)$$
Such RBSDEs were first studied by Cvitanic and Karatzas [14] under the so-called Mokobodski condition; their motivation was to study the zero-sum Dynkin game. Peng and Xu [53] extended their result and obtained some norm estimates, again under Mokobodski-type conditions.
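To make the role of the two barriers in (4.5) concrete, here is a hedged numerical sketch (ours, not the dissertation's): on a binomial tree for $B$, the unconstrained backward step is projected onto the interval $[L_t, U_t]$, the projection increments playing the role of $dK^+$ and $dK^-$. A rigorous discretization of DRBSDEs is the subject of [10]; this sketch only illustrates the mechanism.

```python
import numpy as np

def drbsde_binomial(terminal, driver, lower, upper, T=1.0, n_steps=100):
    """Backward sketch for a doubly reflected BSDE on a binomial tree.

    At each step the unconstrained value E_n[Y_{n+1}] + f*dt is clipped to
    [L, U]; pushing up to L corresponds to dK+, pushing down to U to dK-.
    lower/upper: barrier values as functions (t, b) of time and state.
    Returns the time-0 value Y_0."""
    dt = T / n_steps
    sq = dt ** 0.5
    b = (2.0 * np.arange(n_steps + 1) - n_steps) * sq
    y = terminal(b)
    for n in range(n_steps - 1, -1, -1):
        t = n * dt
        bn = (2.0 * np.arange(n + 1) - n) * sq   # states at time t_n
        y_mid = 0.5 * (y[1:] + y[:-1])           # E_n[Y_{n+1}]
        z = (y[1:] - y[:-1]) / (2.0 * sq)
        y_free = y_mid + driver(t, y_mid, z) * dt
        y = np.clip(y_free, lower(t, bn), upper(t, bn))
    return y[0]
```

When the barriers are far away the scheme reduces to the unreflected one, and when they form a tight tube around a martingale the value is pinned inside the tube; both behaviors are easy to check.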
To be precise, assume there exists a semi-martingale $Y^0_t = Y^0_0 + \int_0^t Z^0_s\,dB_s - A^0_t$, where $A^0$ has finite variation, such that $L_t \le Y^0_t \le U_t$; then
$$\|(Y,Z,A)\|^2 \le C\|(Y^0,Z^0,A^0)\|^2 + CE\Big[|\xi|^2 + \int_0^T|f(t,0,0)|^2\,dt + \sup_{0\le t\le T}\big[|L^+_t|^2 + |U^-_t|^2\big]\Big]. \quad (4.8)$$
While the above estimate is nice, in general it is difficult to verify the existence of such a $Y^0$. Hamadene et al [29, 30] proposed another approach that does not rely on Mokobodski-type conditions. Based on the notion of local solutions, they proved the wellposedness of RBSDE (4.5) under the separation condition:
$$L_t < U_t, \quad L_{t-} < U_{t-}. \quad (4.9)$$
This condition is obviously very easy to verify. However, the approach does not yield norm estimates. We note that norm estimates are important for applications, for example in the study of time discretizations of such RBSDEs; see e.g. [10]. Our goal in this chapter is to provide a norm $\|\cdot\|_1$ for the barriers $(L,U)$, see (4.9) below, so that the following a priori estimate holds:
$$\|(Y,Z,A)\|^2 \le C\|(L,U)\|_1^2 + CE\Big[|\xi|^2 + \Big(\int_0^T|f(t,0,0)|\,dt\Big)^2\Big]. \quad (4.10)$$
The norm $\|(L,U)\|_1$ involves only $(L,U)$ and its finiteness is easy to verify, so it is convenient in applications. We remark that this new norm is strongly motivated by our study of general semi-martingales in Chapter 3.

Another contribution of the chapter is an estimate for the difference of the solutions to two RBSDEs. Such an estimate was obtained by Peng and Xu [53] when the two RBSDEs have the same barriers $(L,U)$. We extend it by allowing different barriers, which enables us to approximate $(L,U)$ when necessary. In fact, the proof of our main estimate (4.10) relies on this estimate.

In this chapter we assume $\mathbb F$ is generated by a standard Brownian motion $B$ and augmented with all the $P$-null sets.
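The barrier norm announced above is built, partition by partition, from conditional overshoots of $L$ above $U$: terms of the form $[E_{\tau_i}(L_{\tau_{i+1}}) - U_{\tau_i}]^+ + [L_{\tau_i} - E_{\tau_i}(U_{\tau_{i+1}})]^+$, summed over the partition, squared, and averaged, with a supremum over partitions on top. As an illustration (our own sketch, with one fixed uniform partition only, so it gives just a lower bound for the supremum), this quantity can be evaluated exactly on a small binomial tree:

```python
from math import sqrt
from itertools import product

def barrier_functional(lower, upper, T=1.0, n_steps=8):
    """Evaluate, on a binomial tree for B, the partition functional
        E[( sum_i [E_{t_i}(L_{t_{i+1}}) - U_{t_i}]^+
                + [L_{t_i} - E_{t_i}(U_{t_{i+1}})]^+ )^2]
    for the uniform partition t_i = i*T/n (one fixed partition; the barrier
    norm of this chapter takes a supremum over all partitions).

    lower/upper: barrier values as functions (t, b) of time and state."""
    dt = T / n_steps
    sq = sqrt(dt)
    total = 0.0
    for signs in product((-1.0, 1.0), repeat=n_steps):  # all 2^n paths
        b, s = 0.0, 0.0
        for i in range(n_steps):
            t, t1 = i * dt, (i + 1) * dt
            # one-step conditional expectations given B_{t_i} = b
            el = 0.5 * (lower(t1, b + sq) + lower(t1, b - sq))
            eu = 0.5 * (upper(t1, b + sq) + upper(t1, b - sq))
            s += max(el - upper(t, b), 0.0) + max(lower(t, b) - eu, 0.0)
            b += signs[i] * sq
        total += s * s
    return total / 2.0 ** n_steps
```

A tube $L = B - 1 \le U = B + 1$ around a martingale gives zero, while touching barriers $L = U = |B|$ give a positive value: the submartingale $|B|$ keeps pushing the conditional expectation of $L$ above $U$.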
We shall consider the following doubly reflected backward SDE with $\mathbb F$-progressively measurable solution triplet $(Y,Z,A)$:
$$Y_t = \xi + \int_t^T f_s(Y_s,Z_s)\,ds - \int_t^T Z_s\,dB_s + A_T - A_t; \quad L \le Y \le U; \quad [Y_t - L_t]\,dK^+_t = [U_t - Y_t]\,dK^-_t = 0. \quad (4.1)$$
Here $Y \in \mathbb D(\mathbb F)$ and $A$ has finite variation with orthogonal decomposition $A = K^+ - K^-$. We say $(Y,Z,A)$ satisfying (4.5) is a local solution if
$$\sup_{0\le t\le T}|Y_t| + \int_0^T|Z_t|^2\,dt + \bigvee_0^T A < \infty, \quad P\text{-a.s.}, \quad (4.2)$$
and a solution if
$$\|(Y,Z,A)\|^2 := E^P\Big[\sup_{0\le t\le T}|Y_t|^2 + \int_0^T|Z_t|^2\,dt + \Big|\bigvee_0^T A\Big|^2\Big] < \infty. \quad (4.3)$$
Throughout this chapter, we make the following standing assumptions:

Assumption 4.1.1. (i) $\xi$ is $\mathcal F_T$-measurable, $f(\cdot,0,0)$ is $\mathbb F$-progressively measurable, and
$$I_0^2 := I_0^2(\xi,f) := E^P\Big[|\xi|^2 + \Big(\int_0^T|f_t(0,0)|\,dt\Big)^2\Big] < \infty. \quad (4.4)$$
(ii) $f$ is uniformly Lipschitz continuous in $(y,z)$;
(iii) $L,U \in \mathbb D(\mathbb F)$; $L \le U$, $L_T \le \xi \le U_T$; and
$$\|(L,U)\|^2_{P,0} := \|L^+\|^2_{P,0} + \|U^-\|^2_{P,0} < \infty. \quad (4.5)$$

Remark 4.1.2. In the standard literature, one requires $E^P\big[\int_0^T|f(t,0,0)|^2\,dt\big] < \infty$. Our condition (4.4) is slightly weaker. In fact, most estimates in the BSDE literature can be improved by replacing $E^P\big[\int_0^T|f(t,0,0)|^2\,dt\big]$ with $E^P\big[(\int_0^T|f(t,0,0)|\,dt)^2\big]$, and the arguments are rather standard. We refer interested readers to the Appendix of the monograph [15].

It is well known that Assumption 4.1.1 does not yield the wellposedness of DRBSDE (4.5). Below is a simple counterexample.

Example 4.1.3. Let $L = U$ be deterministic, càdlàg, and $\bigvee_0^T L = \infty$. Then DRBSDE (4.5) with $\xi = L_T$ and $f = 0$ has no solution.

Proof. Assume there is a solution $(Y,Z,A)$. Since $L \le Y \le U$, one must have $Y = L$, which leads to $Z = 0$ and $dA = -dL$. But then $A$ would have infinite variation, since $L$ does, contradicting the requirement that $A$ have finite variation.

In the literature, there are two approaches to the wellposedness of DRBSDEs. We first recall a result from Hamadene, Hassani and Ouknine [30]:

Lemma 4.1.4. Let Assumption 4.1.1 hold.
Assume further the following separation condition:
$$L_t < U_t \quad\text{and}\quad L_{t-} < U_{t-} \quad\text{for all } t. \quad (4.6)$$
Then (4.5) admits a local solution.

The condition (4.6) is mild and easy to verify, but it does not yield any a priori estimates. We remark that [30] treats a slightly different form of DRBSDEs, mainly for the sake of uniqueness. One can easily check that a local solution in the sense of [30] is a local solution in our sense, so the existence asserted in Lemma 4.1.4 is valid.

We next recall a result from Peng and Xu [53], following the original work of Cvitanic and Karatzas [14]:

Lemma 4.1.5. Let Assumption 4.1.1 hold. Assume further the following Mokobodski-type condition:
$$\text{there exists a square integrable semi-martingale } Y^0 \text{ such that } L_t \le Y^0_t \le U_t. \quad (4.7)$$
Then DRBSDE (4.5) admits a unique solution and the following estimate holds:
$$\|(Y,Z,A)\|^2 \le C\big[I_0^2 + \|Y^0\|_P^2\big]. \quad (4.8)$$

However, in those works there is no discussion of sufficient conditions for the existence of such a $Y^0$. Our goal in this chapter is to provide a tractable equivalent condition. In light of the norm $\|\cdot\|_P$ in (3.11), we introduce the following norm for the barriers $(L,U)$:
$$\|(L,U)\|_P^2 := \|(L,U)\|_{P,0}^2 + \sup_\pi E^P\Big[\Big(\sum_{i=0}^{n-1}\big[E^P_{\tau_i}(L_{\tau_{i+1}}) - U_{\tau_i}\big]^+ + \big[L_{\tau_i} - E^P_{\tau_i}(U_{\tau_{i+1}})\big]^+\Big)^2\Big], \quad (4.9)$$
where the supremum is again taken over all partitions $\pi: 0 = \tau_0 \le \cdots \le \tau_n = T$.

4.2 A priori estimate of local solutions in the linear case

Lemma 4.2.1. Let Assumption 4.1.1 and (4.6) hold, and $f = 0$. If DRBSDE (4.5) has a local solution $(Y,Z,A)$, then we have the following estimate:
$$\|(Y,Z,A)\|^2 \le C\big[I_0^2 + \|(L,U)\|_P^2\big]. \quad (4.10)$$

Proof. Without loss of generality, we assume $\|(L,U)\|_P < \infty$. We proceed in three steps.

Step 1. We first assume $(Y,Z,A)$ is a solution of (4.5) and $Y$ is continuous. Then $K^+$ and $K^-$ are also continuous.
Apply Itô’s formula onjY t j 2 , by the minimum condition in (4.5) we have, djY t j 2 = 2Y t Z t dB t +jZ t j 2 dt 2Y t dK + t + 2Y t dK t = 2Y t Z t dB t +jZ t j 2 dt 2L t dK + t + 2U t dK t : (4.11) Then, for any"> 0, E P h jY t j 2 + Z T t jZ s j 2 ds i =E P h jj 2 + 2 Z T t L s dK + s 2 Z T t U s dK s i E P h jj 2 + 2 sup 0sT L + s K + T + 2 sup 0sT U s K T i E P h jj 2 +C" 1 sup 0sT [jL + s j 2 +jU s j 2 ] +"[jK + T j 2 +jK T j 2 ] i E P [j 2 ] +C" 1 k(L;U)k 2 P;0 +"E P h T _ 0 A 2 i : Following standard arguments, in particular by applying the Burkholder-Davis-Gundy In- equality on (4.11), we have E P h sup 0tT jY t j 2 + Z T 0 jZ t j 2 dt i CE P [j 2 ] +C" 1 k(L;U)k 2 P;0 +C"E P h T _ 0 A 2 i : (4.12) We claim that E P h T _ 0 A 2 i CE P h sup 0tT jY t j 2 + Z T 0 jZ t j 2 dt i +Ck(L;U)k 2 P : (4.13) Combine (4.12) and (4.13) and set" small, we prove (4.10) immediately. 93 To prove (4.13), we define a sequence of stopping times: 0 := 0 and, fori 0, 2i+1 := infft 2i :K + t >K + 2i g^T; 2i+2 := infft 2i+1 :K t >K 2i+1 g^T: (4.14) ThendK + t = 0 on [ 2i ; 2i+1 ] anddK t = 0 on [ 2i+1 ; 2i+2 ], and thus Y t =Y 2i+1 Z 2i+1 t Z s dB s (K 2i+1 K t ); t2 [ 2i ; 2i+1 ]; Y t =Y 2i+2 Z 2i+2 t Z s dB s + (K + 2i+2 K + t ); t2 [ 2i+1 ; 2i+2 ]; (4.15) Since L and U are right continuous and K is continuous, by the minimum condition in (4.5) we have Y 2i =U 2i 1 f 2i <Tg +1 f 2i =Tg and Y 2i+1 =L 2i+1 1 f 2i+1 <Tg +1 f 2i+1 =Tg : (4.16) In particular, onf 2i < Tg, we have Y 2i = U 2i > L 2i , then Y t > L t for t in a right neighborhood of 2i and thus dK + t = 0. This implies that 2i+1 > 2i onf 2i < Tg. Similarly, 2i+2 > 2i+1 onf 2i+1 <Tg. Moreover, as in [30], we see that for a.s.!, n (!) =T forn large enough. (4.17) Indeed, denote := lim n!1 n . If <T , then n < for alln and we get L = lim n!1 L 2i+1 = lim n!1 Y 2i+1 =Y = lim n!1 Y 2i = lim n!1 U 2i =U : This contradicts with (4.6). 
94 For eachi, by (4.15) and (4.16), 0 E P 2i [K 2i+1 ]K 2i =E P 2i [Y 2i+1 ]Y 2i = E P 2i h L 2i+1 1 f 2i+1 <Tg +1 f 2i+1 =Tg i U 2i 1 f 2i <Tg 1 f 2i =Tg = h E P 2i [L 2i+1 ]U 2i i 1 f 2i <Tg +E P 2i h [L 2i+1 ]1 f 2i <T = 2i+1 g i h E P 2i [L 2i+1 ]U 2i i + Then for anyn, E P h n X i=0 E P 2i [K 2i+1 ]K 2i i 2 k(L;U)k 2 P : Sendn!1, we get E P h X i0 E P 2i [K 2i+1 ]K 2i i 2 k(L;U)k 2 P : (4.18) Similarly, E P h X i0 E P 2i+1 [K + 2i+2 ]K + 2i+1 i 2 k(L;U)k 2 P : (4.19) Denote ^ Y n :=Y n X i n 2 E P 2i [K 2i+1 ]K 2i + X i n1 2 E P 2i+1 [K + 2i+2 ]K + 2i+1 : (4.20) By (4.18) and (4.19), we have E P h max n0 j ^ Y n j 2 i CE P h sup 0tT jY t j 2 i +Ck(L;U)k 2 P : (4.21) 95 Note that ^ Y n =Y 0 + Z n 0 Z s dB s + X i n 2 K 2i+1 E P 2i [K 2i+1 ] X i n1 2 K + 2i+2 E P 2i+1 [K + 2i+2 ] is a martingale. By (4.21), we have E P h X i n 2 K 2i+1 E P 2i [K 2i+1 ] 2 + X i n1 2 K + 2i+2 E P 2i+1 [K + 2i+2 ] 2 i = E P h X i n 2 K 2i+1 E P 2i [K 2i+1 ] X i n1 2 K + 2i+2 E P 2i+1 [K + 2i+2 ] 2 i = E P h ^ Y n Y 0 Z n 0 Z s dB s 2 i CE P h sup i0 j ^ Y i j 2 + Z n 0 jZ t j 2 dt i CE P h sup 0tT jY t j 2 + Z T 0 jZ t j 2 dt i +Ck(L;U)k 2 P : Sendn!1 and E P h X i0 K 2i+1 E P 2i [K 2i+1 ] 2 + X i0 K + 2i+2 E P 2i+1 [K + 2i+2 ] 2 i = E P h X i0 K 2i+1 E P 2i [K 2i+1 ] X i0 K + 2i+2 E P 2i+1 [K + 2i+2 ] 2 i CE P h sup 0tT jY t j 2 + Z T 0 jZ t j 2 dt i +Ck(L;U)k 2 P : (4.22) 96 This, together with (4.17), (4.18) and (4.19), implies further that E P h jK + T j 2 +jK T j 2 i =E P h X i0 K 2i+1 K 2i 2 + X i0 K + 2i+2 K + 2i+1 2 i CE P h X i0 K 2i+1 E P 2i [K 2i+1 ] 2 + X i0 E P 2i [K 2i+1 K 2i ] 2 i +CE P h X i0 K + 2i+2 E P 2i+1 [K + 2i+2 ] 2 + X i0 E P 2i+1 [K + 2i+2 ]K + 2i+1 2 i = CE P h X i0 K 2i+1 E P 2i [K 2i+1 ] 2 + X i0 K + 2i+2 E P 2i+1 [K + 2i+2 ] 2 i +CE P h X i0 E P 2i [K 2i+1 K 2i ] 2 + X i0 E P 2i+1 [K + 2i+2 ]K + 2i+1 2 i CE P h sup 0tT jY t j 2 + Z T 0 jZ t j 2 dt i +Ck(L;U)k 2 P : This proves (4.13) and hence (4.10). Step 2. 
We next assume (Y;Z;A) is a local solution butY is still continuous. Let i be defined by (4.14). Then (4.15)-(4.17) still hold. This implies Y 2i U 2i 1 f 2i <Tg jj1 f 2i =Tg h sup 0tT U t +jj i ; Y 2i E P 2i [Y 2i+1 ]E P 2i h L + 2i+1 1 f 2i+1 <Tg +jj1 f 2i+1 =Tg i E P 2i h sup 0tT L + t +jj i : Then max i0 jY 2i j h sup 0tT U t +jj i _ sup 0sT E P s h sup 0tT L + t +jj i sup 0sT E P s h sup 0tT [L + t +U t ] +jj i : 97 Thus E P h max i0 jY 2i j 2 i E P h sup 0sT E P s sup 0tT [L + t +U t ] +jj 2 i CE P h sup 0tT [L + t +U t ] +jj 2 i CE P [jj 2 ] +Ck(L;U)k 2 P;0 : (4.23) Now for anyn, define ^ n := inf n t : sup 0st jY s j + Z t 0 jZ s j 2 ds + t _ 0 An o ^T: (4.24) Then E P h sup 0t<^ n jY t j 2 + Z ^ n 0 jZ t j 2 dt + ^ n _ 0 A 2 o <1: (4.25) Define ~ n := inff 2i ^ n g: Then by (4.5) and (4.23) we have Y t =Y ~ n + Z ~ n t Z s dB s (K ~ n K t ); Y t U t ; [U t Y t ]dK t = 0; t2 [^ n ; ~ n ]; E P [jY ~ n j 2 ]CE P [jj 2 ] +Ck(L;U)k 2 P;0 : By standard arguments for Reflected BSDEs with one barrier, see e.g. [20], E P [jY ^ n j 2 ]CE P [jj 2 ] +Ck(L;U)k 2 P;0 +CE P h sup 0tT jU t j 2 i CE P [jj 2 ] +Ck(L;U)k 2 P;0 : 98 This, together with (4.25), implies that E P h sup 0t^ n jY t j 2 + Z ^ n 0 jZ t j 2 dt + ^ n _ 0 A 2 o <1: Then by Step 1, we obtain E P h sup 0t^ n jY t j 2 + Z ^ n 0 jZ t j 2 dt + ^ n _ 0 A 2 o CE P [jY ^ n j 2 ] +Ck(L;U)k 2 P CE P [jj 2 ] +Ck(L;U)k 2 P : Note that ^ n =T whenn is large enough. Sendn!1 and apply the Monotone Conver- gence Theorem, we prove (4.10). Step 3. Finally we allowY to be discontinuous.. Let Y t :=Y t X 0st Y s ; K + t :=K + t X 0st K + s ; K t :=K t X 0st K s ; A t := K + t K t ; L t :=L t X 0st K + s ; U t :=U t + X 0st K s ; := X 0sT Y s : Then it is clear that Y is continuous, ( L; U) satisfies (4.6), and ( Y;Z; A) is a local solution to DRBSDE (4.5) with coefficients ( ; 0; L; U). 
By Step 2, we have k( Y;Z; A)k 2 CE P [j j 2 ] +Ck( L; U)k 2 P : 99 One can check straightforwardly that k(Y;Z;A)k 2 Ck( Y;Z; A)k 2 +CE P h X 0tT [K + t + K t ] 2 i ; E P [j j 2 ]CE P [jj 2 ] +CE P h X 0tT [K + t + K t ] 2 i ; k( L; U)k 2 P k(L;U)k 2 P Then k(Y;Z;A)k 2 CE P [jj 2 ] +Ck(L;U)k 2 P +CE P h X 0tT [K + t + K t ] 2 i : (4.26) Note that, when K + t > 0, by the minimum condition of (4.5) we see thatY t =L t . SinceK + andK are orthogonal, we have Y t =K + t . ThusL t Y t =Y t K + t = L t K + t . This implies that P 0tT K + t P 0tT [L t ] . Similarly we have P 0tT K t P 0tT [U t ] + . Following the arguments for (3.22), one can easily prove that E P h X 0tT [[L t ] + [U t ] + ] 2 i Ck(L;U)k 2 P : Then (4.10) follows from (4.26) immediately. 4.3 Apriori estimates and wellposedness conditions of DRBSDEs 4.3.1 Apriori estimate under the separation condition Lemma 4.3.1. Let Assumption 4.1.1 and (4.6) hold. Suppose the DRBSDE (4:5) has a solution (Y;Z;A). Then the estimate (4.10) holds. Proof. By Lemma (4.1.4), under the condition (4.6), there exists a local solution (Y 0 ;Z 0 ;A 0 ) 100 to the DRBSDE (4:5) withf = 0. By Lemma (4:2:1), kY 0 k 2 P C h I 2 0 +k(L;U)k 2 P i : By Lemma (4.1.5), we have k(Y;Z;A)k 2 C h I 2 0 +kY 0 k 2 P i : The result follows immediately. 4.3.2 Preliminary apriori estimate of the difference of solutions Lemma 4.3.2. Assume ( i ;f i ;L i ;U i ), i = 1; 2, satisfy Assumption 4.1.1. If the corre- sponding RBSDE (4.5) has a solution (Y i ;Z i ;A i ), then E P h sup 0tT [jY t j 2 +jA t j 2 ] + Z T 0 jZ t j 2 dt i CI 2 ; (4.27) where, recalling the normk(Y;Z;A)k defined by (4.3), I 2 := E P h jj 2 + Z T 0 jf(t;Y 1 t ;Z 1 t )jdt 2 i (4.28) + 2 X i=1 k(Y i ;Z i ;A i )k E P h sup 0tT [jL t j 2 +jU t j 2 ] i1 2 : Remark 4.3.3. We note that the above estimate involves the data from solutions, namely k(Y i ;Z i ;A i )k;i = 1; 2. In theorem (4.3.5), we will improve the estimate to only involve the data from the given parameters. Proof. 
Let> 0 be a constant which will be specified later. Applying Itô’s formula on 101 e t jY t j 2 we have e t jY t j 2 + Z T t e s jY s j 2 ds + Z T t e s jZ s j 2 ds (4.29) = e T j 2 j + 2 Z T t e s Y s f 1 (s;Y 1 s ;Z 1 s )f 2 (s;Y 2 s ;Z 2 s ) ds + 2 Z T t e s Y s dA s 2 Z T t e s Y s Z s dB s : For any"> 0, note that 2 Z T t e s jY s j f 1 (s;Y 1 s ;Z 1 s )f 2 (s;Y 2 s ;Z 2 s ) ds C Z T t e s jY s j[jf(s;Y 1 s ;Z 1 s )j +jY s j +jZ s j]ds C h sup tsT jY s j Z T t e s jf(s;Y 1 s ;Z 1 s )jds + Z T t e s [jY s j 2 +jY s jjZ s j]ds i " sup tsT jY s j 2 + 1 2 Z T t e s jZ s j 2 ds (4.30) +C Z T t e s jY s j 2 ds +C" 1 Z T t e s jf(s;Y 1 s ;Z 1 s )jds 2 ; 102 and, with the orthogonal decompositionsA i =K i;+ K i , 2 Z T t e s Y s dA s = 2 Z T t e s Y 1 s dK 1;+ s Y 1 s dK 1; s Y 2 s dK 1;+ s +Y 2 s dK 1; s Y 1 s dK 2;+ s +Y 1 s dK 2; s +Y 2 s dK 2;+ s Y 2 s dK 2; s 2 Z T t e s L 1 s dK 1;+ s U 1 s dK 1; s L 2 s dK 1;+ s +U 2 s dK 1; s L 1 s dK 2;+ s +U 1 s dK 2; s +L 2 s dK 2;+ s U 2 s dK 2; s = 2 Z T t e s L s dK 1;+ s U s dK 1; s L s dK 2;+ s + U s dK 2; s 2e (Tt) sup 0sT [jL s j +jU s j] T _ t A 1 + T _ t A 2 : (4.31) Plug (4.30) and (4.31) into (4.29), we obtain e t jY t j 2 + Z T t e s jY s j 2 ds + Z T t e s jZ s j 2 ds e T j 2 j +" sup tsT jY s j 2 + 1 2 Z T t e s jZ s j 2 ds +C Z T t e s jY s j 2 ds +C" 1 Z T t e s jf(s;Y 1 s ;Z 1 s )jds 2 +2e (Tt) sup 0sT [jL s j +jU s j] T _ t A 1 + T _ t A 2 2 Z T t e s Y s Z s dB s : Set =C for the aboveC, we get e t jY t j 2 + 1 2 Z T t e s jZ s j 2 ds e T j 2 j +" sup tsT jY s j 2 +C" 1 Z T t e s jf(s;Y 1 s ;Z 1 s )jds 2 (4.32) +2e (Tt) sup 0sT [jL s j +jU s j] T _ t A 1 + T _ t A 2 2 Z T t e s Y s Z s dB s : 103 Take expectation on both sides, we have sup 0tT E P [jY t j 2 ] +E P h Z T 0 jZ t j 2 dt i C[1 +" 1 ]I 2 +"E P h sup 0tT jY t j 2 i : (4.33) Moreover, by (4.32) we have sup 0tT e t jY t j 2 e T j 2 j +" sup 0tT jY t j 2 +C" 1 Z T 0 e t jf(t;Y 1 t ;Z 1 t )jdt 2 (4.34) +2e T sup 0tT [jL t j +jU t j] T _ 0 A 1 + T _ 
0 A 2 + 2 sup 0tT Z T t e s Y s Z s dB s : Apply the Burkholder-Davis-Gundy Inequality and note that =C, for any> 0 we get E P h sup 0tT Z T t e s Y s Z s dB s i CE P h Z T 0 jY t Z t j 2 dt 1 2 i (4.35) CE P h sup 0tT jY t j Z T 0 jZ t j 2 dt 1 2 i E P h sup 0tT jY t j 2 i +C 1 E P h Z T 0 jZ t j 2 dt i : Take expectation on both sides of (4.34), and apply (4.35) and then (4.33), we obtain E P h sup 0tT jY t j 2 i C[1 +" 1 ]I 2 +C"E P h sup 0tT jY t j 2 i +CE P h sup 0tT jY t j 2 i +C 1 E P h Z T 0 jZ t j 2 dt i C +"(1 + 1 ) E P h sup 0tT jY t j 2 i +C[1 + 1 ][1 +" 1 ]I 2 : Set := 1 4C ," := 1 4C(1+4C) for the aboveC. ThenC h +"(1 + 1 ) i = 1 2 and thus E P h sup 0tT jY t j 2 i CI 2 : (4.36) 104 Plug (4.36) into (4.33), we get E P h Z T 0 jZ t j 2 dt i CI 2 : (4.37) Finally, notice that A t = Y 0 Y t Z t 0 [f 1 (s;Y 1 s ;Z 1 s )f 2 (s;Y 2 s ;Z 2 s )]ds + Z t 0 Z s dB s : One can easily get the estimate for A. 4.3.3 Main result Our main result of this chapter is: Theorem 4.3.4. Let Assumption 4.1.1 hold. Then the following are equivalent: (i) The DRBSDE (4.5) admits a unique solution (Y;Z;A); (ii) the Mokobodski condition (4.7) holds; (iii)k(L;U)k P <1. Moreover, in this case we have the estimate: k(Y;Z;A)k 2 C h I 2 0 +k(L;U)k 2 P i : In addition, we have the following estimates for the difference of two DRBSDEs: Theorem 4.3.5. Assume ( i ;f i ;L i ;U i ), i = 1; 2, satisfy all the conditions in Theorem 4.3.4, and let (Y i ;Z i ;A i ) denote the solution to the corresponding DRBSDE (4.5). Denote 105 Y :=Y 1 Y 2 , and similarly for the other notations. Then E P h sup 0tT [jY t j 2 +jA t j 2 ] + Z T 0 jZ t j 2 dt i CE P h jj 2 + Z T 0 jf(t;Y 1 t ;Z 1 t )jdt 2 i (4.38) +C 2 X i=1 h I 0 ( i ;f i ) +k(L i ;U i )k P i E P h sup 0tT [jL t j 2 +jU t j 2 ] i1 2 : These two theorems will be proved in the rest of this chapter. We first note that Remark 4.3.6. (i) In the case that there is only one barrierL, we may view it asU =1. 
One can check straightforwardly that $\|(L,U)\|_P = \|L^+\|_{P,0}$. Then Theorems 4.3.4 and 4.3.5 reduce to the standard results for reflected BSDEs with one barrier; see El Karoui et al [20].

(ii) In the case $(L^1,U^1) = (L^2,U^2)$, the last term in (4.38) vanishes, and Peng and Xu [53] have already obtained the estimate.

Proof of Theorem 4.3.4. First, by Lemma 4.1.5 we know (ii) implies (i). On the other hand, if (i) holds true, then $Y^0 := Y$ is clearly a square integrable semi-martingale between $L$ and $U$. That is, (i) and (ii) are equivalent.

Next, assume (ii) holds true. Since $L \le Y^0 \le U$, for any partition $\pi: 0 = \tau_0 < \cdots < \tau_n = T$,
$$L^+ + U^- \le (Y^0)^+ + (Y^0)^- = |Y^0|,$$
$$\big[E^P_{\tau_i}[L_{\tau_{i+1}}] - U_{\tau_i}\big]^+ + \big[L_{\tau_i} - E^P_{\tau_i}[U_{\tau_{i+1}}]\big]^+ \le \big[E^P_{\tau_i}[Y^0_{\tau_{i+1}}] - Y^0_{\tau_i}\big]^+ + \big[Y^0_{\tau_i} - E^P_{\tau_i}[Y^0_{\tau_{i+1}}]\big]^+ = \big|E^P_{\tau_i}[Y^0_{\tau_{i+1}}] - Y^0_{\tau_i}\big|.$$
This implies immediately that $\|(L,U)\|_P \le \|Y^0\|_P$, and thus (iii) holds.

It remains to prove that (iii) implies (ii). We first assume (4.6) holds. Then it follows from Lemma 4.1.4 that DRBSDE (4.5) with $f = 0$ admits a local solution $(Y^0,Z^0,A^0)$. Applying Lemma 4.2.1 we see that $\|(Y^0,Z^0,A^0)\| \le C[I_0 + \|(L,U)\|_P]$. This implies (4.7).

In the general case, denote $U^n := U + \frac1n$. Then $(L,U^n)$ satisfies (4.6). By the above arguments, DRBSDE (4.5) with coefficients $(\xi,0,L,U^n)$ has a unique solution $(Y^n,Z^n,A^n)$ satisfying
$$\|(Y^n,Z^n,A^n)\|^2 \le CE^P[|\xi|^2] + C\|(L,U^n)\|^2_P.$$
It is obvious that $\|(L,U^n)\|_P \le \|(L,U)\|_P$. Then
$$\|(Y^n,Z^n,A^n)\|^2 \le CE^P[|\xi|^2] + C\|(L,U)\|^2_P.$$
Now for $m > n$, applying Lemma 4.3.2 we have
$$E^P\Big[\sup_{0\le t\le T}\big[|Y^n_t - Y^m_t|^2 + |A^n_t - A^m_t|^2\big] + \int_0^T|Z^n_t - Z^m_t|^2\,dt\Big] \le C\Big[\|(Y^n,Z^n,A^n)\| + \|(Y^m,Z^m,A^m)\|\Big]\Big[\frac1n - \frac1m\Big] \le \frac{C}{n}\Big[E^P[|\xi|^2]^{\frac12} + \|(L,U)\|_P\Big].$$
Letting $n \to \infty$, we obtain limit processes $(Y^0,Z^0,A^0)$. By Theorem 1 in [1] we see that $Y^0$ satisfies the requirement in (ii).

References

[1] Barlow, M. and Protter, P. (1990) On convergence of semimartingales. Séminaire de Probabilités XXIV, 1988/89, 188-193.

[2] Bayraktar, E. and Yao, S.
(2011) On zero-sum stochastic differential games. Preprint, arXiv:1112.5744v3.

[3] Bayraktar, E. and Huang, Y. (2010) On the multi-dimensional controller and stopper games. Preprint, arXiv:1009.0932.

[4] Bensoussan, A. and Lions, J.L. (1982) Applications of Variational Inequalities in Stochastic Control, North-Holland Publishing Company.

[5] Buckdahn, R., Cardaliaguet, P. and Quincampoix, M. (2011) Some recent aspects of differential game theory. Dyn. Games Appl. 1, no. 1, 74-114.

[6] Buckdahn, R., Hu, Y. and Li, J. (2011) Stochastic representation for solutions of Isaacs' type integral-partial differential equations. Stochastic Process. Appl. 121, no. 12, 2715-2750.

[7] Buckdahn, R. and Li, J. (2008) Stochastic Differential Games and Viscosity Solutions of Hamilton-Jacobi-Bellman-Isaacs Equations, SIAM J. Control Optim., Vol. 47, No. 1, 444-475.

[8] Caffarelli, L.A. (1989) Interior a priori estimates for solutions of fully non-linear equations, Annals of Mathematics, Vol. 130, 189-213.

[9] Cardaliaguet, P. and Rainer, C. (2009) Stochastic differential games with asymmetric information. Appl. Math. Optim. 59, no. 1, 1-36.

[10] Chassagneux, J. (2009) A discrete-time approximation for doubly reflected BSDEs, Advances in Applied Probability, 41, 101-130.

[11] Cheridito, P., Soner, H.M., Touzi, N. and Victoir, N. (2007) Second order BSDE's and fully nonlinear PDE's, Communications in Pure and Applied Mathematics, 60 (7), 1081-1110.

[12] Cont, R. and Fournie, D. (2012) Functional Itô calculus and stochastic integral representation of martingales, Annals of Probability, to appear, arXiv:1002.2446.

[13] Crandall, M.G., Ishii, H. and Lions, P.L. (1992) User's guide to viscosity solutions of second order partial differential equations, Bulletin of the American Mathematical Society, Vol. 27, No. 1, 1-67.

[14] Cvitanic, J. and Karatzas, I. (1996) Backward SDEs with reflection and Dynkin games, Annals of Probability, 24, 2024-2056.

[15] Cvitanic, J. and Zhang, J.
(2012) Contract Theory in Continuous Time Models, Springer Finance.

[16] Denis, L., Hu, M. and Peng, S. (2011) Function Spaces and Capacity Related to a Sublinear Expectation: Application to G-Brownian Motion Paths, Potential Anal., 34, 139-161.

[17] Denis, L. and Martini, C. (2006) A Theoretical Framework for the Pricing of Contingent Claims in the Presence of Model Uncertainty, Annals of Applied Probability, 16, 2, 827-852.

[18] Dupire, B. (2009) Functional Itô calculus, papers.ssrn.com.

[19] El-Karoui, N. and Hamadene, S. (2003) BSDEs and risk-sensitive control, zero-sum and nonzero-sum game problems of stochastic functional differential equations. Stochastic Process. Appl. 107, no. 1, 145-169.

[20] El Karoui, N., Kapoudjian, C., Pardoux, E., Peng, S. and Quenez, M. (1997) Reflected Solutions of Backward SDE's, and Related Obstacle Problems for PDE's, The Annals of Probability, 25, 702-737.

[21] Ekren, I., Keller, C., Touzi, N. and Zhang, J. On Viscosity Solutions of Path Dependent PDEs, Annals of Probability, to appear, arXiv:1109.5971.

[22] Ekren, I., Touzi, N. and Zhang, J. Optimal Stopping under Nonlinear Expectation, preprint.

[23] Ekren, I., Touzi, N. and Zhang, J. Viscosity Solutions of Fully Nonlinear Parabolic Path Dependent PDEs: Part I, preprint.

[24] Ekren, I., Touzi, N. and Zhang, J. Viscosity Solutions of Fully Nonlinear Parabolic Path Dependent PDEs: Part II, preprint.

[25] Evans, L.C. and Souganidis, P.E. (1984) Differential games and representation formulas for solutions of Hamilton-Jacobi-Isaacs equations. Indiana Univ. Math. J. 33, 773-797.

[26] Fleming, W.H. and Souganidis, P.E. (1989) On the Existence of Value Functions of Two-Player, Zero-Sum Stochastic Differential Games, Indiana University Mathematics Journal, Vol. 38, No. 2, 293-314.

[27] Fahim, A., Touzi, N. and Warin, X. (2011) A Probabilistic Numerical Scheme for Fully Nonlinear PDEs, Annals of Applied Probability, 21(4), 1322-1364.

[28] Gilbarg, D.
and Trudinger, N.S., Elliptic Partial Differential Equations of Second Order, 2nd ed., Springer-Verlag, New York, 1983.

[29] Hamadene, S. and Hassani, M. (2005) BSDEs with two reflecting barriers: the general result, Probability Theory and Related Fields, 132, 237-264.

[30] Hamadene, S., Hassani, M. and Ouknine, Y. (2010) BSDEs with general discontinuous reflecting barriers without Mokobodski's condition, Bull. Sci. Math., 134, 874-899; DOI: 10.1016/j.bulsci.2010.03.001.

[31] Hamadene, S. and Lepeltier, J.P. (1995) Zero-sum stochastic differential games and backward equations, Systems Control Lett., 24, 259-263.

[32] Hamadene, S. and Wang, H. (2011) The mixed zero-sum stochastic differential game in the model with jumps. Advances in Dynamic Games, 83-110, Ann. Internat. Soc. Dynam. Games, 11, Birkhäuser/Springer, New York, 2011.

[33] Hu, M., Ji, S., Peng, S. and Song, Y. (2012) Backward Stochastic Differential Equations Driven by G-Brownian Motion, arXiv:1206.5889.

[34] Hu, Y. and Peng, S. (2010) Some Estimates for Martingale Representation under G-Expectation, arXiv:1004.1098.

[35] Karatzas, I. and Shreve, S. Brownian Motion and Stochastic Calculus, 2nd Edition, Springer.

[36] Karatzas, I. and Sudderth, W. (2006) Stochastic games of control and stopping for a linear diffusion. Random Walk, Sequential Analysis and Related Topics, 100-117, World Sci. Publ., Hackensack, NJ, 2006.

[37] Ladyzenskaya, O.A., Solonnikov, V.A. and Uralseva, N.N. (1967) Linear and Quasilinear Equations of Parabolic Type, AMS, Providence.

[38] Lieberman, G.M. Second Order Parabolic Differential Equations, World Scientific, 1998.

[39] Matoussi, A., Piozin, L. and Possamai, D. (2012) Second-order BSDEs with general reflection and Dynkin games under uncertainty. Preprint, arXiv:1212.0476.

[40] Mou, L. and Yong, J. (2006) Two-person zero-sum linear quadratic stochastic differential games by a Hilbert space method. J. Ind. Manag. Optim. 2, no. 1, 95-117.

[41] Meyer, P. and Zheng, W.
(1984) Tightness criteria for laws of semi-martingales, Ann. Inst. Henri Poincaré, 20, 353-372.

[42] Neveu, J. (1975) Discrete Parameter Martingales. North Holland Publishing Company.

[43] Nutz, M. (2010) Random G-expectations, Quantitative Finance Papers.

[44] Nutz, M. and Soner, M. (2010) Superhedging and Dynamic Risk Measures under Volatility Uncertainty, arXiv:1011.2958.

[45] Nutz, M. and Zhang, J. (2012) Optimal Stopping under Adverse Nonlinear Expectation and Related Games, SIAM Journal on Control and Optimization, 50(4), 2065-2089.

[46] Pardoux, E. and Peng, S. (1990) Adapted solution of a backward stochastic differential equation, Syst. Control Lett., 14, 55-61.

[47] Peng, S. (2010) Backward Stochastic Differential Equation, Nonlinear Expectations and Their Applications, Proceedings of the International Congress of Mathematicians, Hyderabad, India, 2010.

[48] Peng, S. (1997) Backward SDE and related g-expectation, Backward Stochastic Differential Equations (N. El Karoui and L. Mazliak, eds.), Pitman Res. Notes Math. Ser., vol. 364, Longman, Harlow, 141-159.

[49] Peng, S. (1999) Monotonic limit theorem of BSDE and nonlinear decomposition theorem of Doob-Meyer's type, Probability Theory and Related Fields, 113(4), 473-499.

[50] Peng, S. (2007) G-Brownian motion and dynamic risk measure under volatility uncertainty, arXiv:0711.2834v1.

[51] Peng, S. (2010) Nonlinear Expectations and Stochastic Calculus under Uncertainty, Preprint, arXiv:1002.4546v1.

[52] Peng, S., Song, Y. and Zhang, J. (2012) A Complete Representation Theorem for G-martingales, Preprint, arXiv:1201.2629.

[53] Peng, S. and Xu, M. (2005) The smallest g-supermartingale and reflected BSDE with single and double L² obstacles, Annales de l'I.H.P., 141, 605-630.

[54] Pham, T. and Zhang, J. (2012) Some Norm Estimates for Semimartingales under Linear and Nonlinear Expectations, Preprint, arXiv:1107.4020.

[55] Pham, T. and Zhang, J.
(2012) Two Person Zero-sum Game in Weak Formulation and Path Dependent Bellman-Isaacs Equation, arXiv preprint arXiv:1209.6605. [56] Protter, P. (2004) Stochastic Integration and Differential Equations, 2nd Edition, Springer. 111 [57] Revuz, D. and Yor, M. (1999) Continuous martingales and Brownian motion, Springer, third edition. [58] Song, Y . (2010) Uniqueness of the representation forGmartingales with finite varia- tion, arXiv:1012.1913. [59] Soner, M., Touzi, N. and Zhang, J. (2011) Martingale representation theorem for the Gexpectation,StochasticProcessesandTheirApplications, 121, 265-287. [60] Soner, M., Touzi, N. and Zhang, J. (2011) Quasi-sure stochastic analysis through aggregation,ElectronicJournalofProbability, 16, 1844-1879. [61] Soner, M., Touzi, N. and Zhang, J. (2011), Dual Formulation of Second Order Target Problems,AnnalsofAppliedProbability, to appear. [62] Soner, M., Touzi, N. and Zhang, J. (2012), Wellposedness of Second Order BSDEs, ProbabilityTheoryandRelatedFields, 153, 149-190. [63] Stroock, D. and Varadhan, S. R. S. (1979), Multidimensional diffusion processes Springer-Verlag, New York. [64] Xu, J. and Zhang, B. (2009) Martingale characterization of G-Brownian motion, StochasticProcessesandtheirApplications, 119, 232-248. 112
Abstract
In this dissertation, we study three topics under a common theme: nonlinear expectations related to zero-sum stochastic differential games. To develop this nonlinear expectation, we first study the stochastic game problem in which both players use feedback controls. This contrasts with the standard literature, where a setting of strategies versus controls is typically used; such an approach has the drawback of creating an asymmetry between the two players. Using feedback controls, we prove the existence of the game value with both players using controls, thereby preserving the symmetry. Moreover, we allow for a non-Markovian structure and characterize the value process as the unique viscosity solution of the path-dependent Bellman-Isaacs equation.

By the dynamic programming principle, the game value process can be viewed as a filtration-consistent nonlinear expectation. Moreover, this nonlinear expectation is dominated by the G-expectation, which arises naturally from the game setting. It follows that the game value process is a G-submartingale. It is natural to conjecture that a G-submartingale is a semi-martingale under each probability measure composing the G-expectation. This motivates our second topic: norm estimates for semi-martingales. We introduce two new types of norms. The first characterizes square-integrable semi-martingales; the second characterizes the absolute continuity of the finite variation part with respect to the Lebesgue measure. As an application of the first norm, we obtain the Doob-Meyer decomposition for G-submartingales.

Finally, we study the well-posedness of doubly reflected Backward Stochastic Differential Equations and establish a priori estimates for them without imposing Mokobodski's condition.
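The abstract refers to the path-dependent Bellman-Isaacs equation without stating it. For orientation, in the classical Markovian (state-dependent) case this equation takes the following standard form; the coefficients μ, σ, f and the terminal condition g are generic notation and are not taken from this dissertation:

```latex
% Bellman-Isaacs equation (Markovian case), generic notation:
\partial_t v(t,x)
  + \inf_{a \in A}\,\sup_{b \in B}
    \Big[ \mu(t,x,a,b) \cdot \partial_x v(t,x)
      + \tfrac{1}{2}\,\mathrm{tr}\!\big( \sigma\sigma^{\top}(t,x,a,b)\,
        \partial^2_{xx} v(t,x) \big)
      + f(t,x,a,b) \Big] = 0,
\qquad v(T,x) = g(x).
```

Existence of the game value is typically tied to the Isaacs condition, under which the inf-sup above equals the corresponding sup-inf; in the path-dependent setting described in the abstract, the spatial derivatives are replaced by path derivatives and the equation is understood in the viscosity sense.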
Linked assets
University of Southern California Dissertations and Theses
Conceptually similar
On non-zero-sum stochastic game problems with stopping times
Forward-backward stochastic differential equations with discontinuous coefficient and regime switching term structure model
Topics on set-valued backward stochastic differential equations
Tamed and truncated numerical methods for stochastic differential equations
Gaussian free fields and stochastic parabolic equations
Stochastic differential equations driven by fractional Brownian motion and Poisson jumps
Path dependent partial differential equations and related topics
Pathwise stochastic analysis and related topics
Optimal investment and reinsurance problems and related non-Markovian FBSDES with constraints
Dynamic approaches for some time inconsistent problems
Defaultable asset management with incomplete information
Set values for mean field games and set valued PDEs
Conditional mean-fields stochastic differential equation and their application
Equilibrium model of limit order book and optimal execution problem
Some mathematical problems for the stochastic Navier Stokes equations
Probabilistic numerical methods for fully nonlinear PDEs and related topics
Topics on dynamic limit order book and its related computation
Some topics on continuous time principal-agent problem
Asset Metadata
Creator: Pham, Triet M. (author)
Core Title: Zero-sum stochastic differential games in weak formulation and related norms for semi-martingales
School: College of Letters, Arts and Sciences
Degree: Doctor of Philosophy
Degree Program: Mathematics
Publication Date: 04/03/2013
Defense Date: 02/27/2013
Publisher: University of Southern California (original), University of Southern California. Libraries (digital)
Tag: Bellman-Isaacs equations, G expectation, nonlinear expectations, OAI-PMH Harvest, path dependent partial differential equations, stochastic differential games, zero-sum games
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Zhang, Jianfeng (committee chair), Ma, Jin (committee member), Mikulevičius, Remigijus (committee member), Zapatero, Fernando (committee member)
Creator Email: pmtriet00@yahoo.com
Permanent Link (DOI): https://doi.org/10.25549/usctheses-c3-230895
Unique identifier: UC11293387
Identifier: usctheses-c3-230895 (legacy record id)
Legacy Identifier: etd-PhamTrietM-1508.pdf
Dmrecord: 230895
Document Type: Dissertation
Rights: Pham, Triet M.
Type: texts
Source: University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA