PATHWISE STOCHASTIC ANALYSIS AND RELATED TOPICS

by

Christian Keller

A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(APPLIED MATHEMATICS)

August 2015

Copyright 2015 Christian Keller

Acknowledgments

I want to express my most sincere gratitude to my advisor, Professor Jianfeng Zhang, for his continuous care, encouragement, and guidance, which were invaluable during my time as a Ph.D. student at USC. I also want to thank Professors Yilmaz Kocer, Jin Ma, Remigijus Mikulevicius, and Sergey Lototsky for agreeing to be members of my committees, for their help whenever I needed it, and for their support on numerous occasions in the last years.

Table of Contents

Acknowledgments
Abstract
Chapter 1: Introduction
  1.1 Motivation
  1.2 Organization of the dissertation
Chapter 2: Path-dependent partial differential equations
  2.1 Introduction
  2.2 Pathwise stochastic analysis
    2.2.1 Derivatives on càdlàg paths
    2.2.2 Derivatives on continuous paths
    2.2.3 Localization of spaces
    2.2.4 Space shifts
  2.3 PPDEs and main definitions
  2.4 Main results
  2.5 Proofs of some of the main results
    2.5.1 Properties of classical solutions
    2.5.2 Existence of viscosity solutions
    2.5.3 Stability of viscosity solutions
    2.5.4 Partial comparison principle
  2.6 A variation of Perron's approach
Chapter 3: Path-dependent integro-differential equations
  3.1 Introduction
  3.2 Setup
    3.2.1 Notation and preliminaries
    3.2.2 Standing assumptions
    3.2.3 Canonical setup
    3.2.4 Path-dependent stochastic analysis
  3.3 Viscosity solutions: Notion and main results
  3.4 Consistency and Existence
  3.5 Partial Comparison and Stability
    3.5.1 BSDEs with jumps and nonlinear expectations
    3.5.2 RBSDEs with jumps
    3.5.3 Partial Comparison
    3.5.4 Stability
  3.6 Comparison
    3.6.1 Hitting times
    3.6.2 Regularity of path-frozen approximations
    3.6.3 Path-frozen integro-differential equations
    3.6.4 Proof of Comparison
  3.7 Appendix
    3.7.1 Appendix A. Martingale problems and regular conditioning
    3.7.2 Appendix B. Skorohod's topologies
    3.7.3 Appendix C. Auxiliary results
Chapter 4: Pathwise Itô calculus for rough paths and RPDEs with path-dependent coefficients
  4.1 Introduction
  4.2 Rough path integration and path derivatives
    4.2.1 Rough paths and quadratic variation
    4.2.2 Rough path integration
    4.2.3 Path derivatives
    4.2.4 Backward rough integration
  4.3 Functions of controlled paths
    4.3.1 Commutativity of spatial and path derivatives
    4.3.2 Chain rule of path derivatives
    4.3.3 Some estimates
  4.4 Rough differential equations
    4.4.1 Linear RDEs
  4.5 Pathwise solutions of stochastic differential equations
    4.5.1 The rough path setting for Brownian motion
    4.5.2 Stochastic differential equations with regular solutions
  4.6 Rough PDEs and stochastic PDEs
    4.6.1 RDEs with spatial parameters
    4.6.2 Pathwise characteristics
    4.6.3 Rough PDEs
    4.6.4 Pathwise solution of stochastic PDEs
Bibliography

Abstract

In this dissertation, problems from stochastic analysis on path space are investigated. The dissertation consists of three major parts. The first two parts deal with path-dependent PDEs, which are motivated by non-Markovian problems in mathematical finance and stochastic control and are inherently backward in nature, whereas the last part deals with rough path theory and its applications to forward SPDEs.
In the first part, a notion of viscosity solutions for path-dependent PDEs is introduced. The underlying state space for those PDEs is a path space consisting of continuous functions, and the involved derivatives are based on Dupire's functional Itô calculus. As the main result, we have well-posedness for semilinear equations.

The second part deals with semilinear path-dependent integro-differential equations, which are closely related to backward SDEs with jumps. Here, the state space is a path space consisting of càdlàg functions. The results from the first part are appropriately extended.

In the third part, rough path theory is extended to deal with rough PDEs with minimal requirements in time. This makes it possible to study SPDEs with random coefficients in a pathwise manner. Moreover, a connection between Dupire's functional Itô calculus and rough path integration is established.

Chapter 1
Introduction

1.1 Motivation

Markovian problems in stochastic analysis, i.e., problems where the object under study depends only on its current state, are closely linked to parabolic PDEs. Besides offering an infinitesimal characterization, these PDEs also provide a great deal of insight into the corresponding probabilistic situation, and vice versa. However, there is also strong interest in studying non-Markovian problems, i.e., problems where the object under study depends on the entire history of some state process, e.g., path-dependent options in mathematical finance or stochastic differential equations with delay. A corresponding counterpart to the PDEs of the Markovian case are so-called path-dependent PDEs (PPDEs from now on) in the non-Markovian case. A basic example of a PPDE is the path-dependent heat equation
\[ \partial_t u(t,\omega) + \tfrac{1}{2}\,\partial^2_{\omega\omega} u(t,\omega) = 0, \]
where $\omega$ is a continuous path on $[0,T]$, $u(t,\omega)$ depends only on $t$ and $(\omega_s)_{0\le s\le t}$, and the derivatives $\partial_t u$ as well as $\partial^2_{\omega\omega} u$ are understood in the sense of Dupire's functional Itô calculus [29].
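Dupire's bump-and-freeze definitions can be approximated by finite differences on a discretized path. The following sketch is my illustration, not from the text: the toy functional $u(t,\omega)=\omega_t^2 - t$ and the grid are assumptions; for this functional the path-dependent heat equation holds exactly, since $\partial_t u = -1$ and $\partial^2_{\omega\omega} u = 2$.

```python
import numpy as np

# Toy functional u(t, omega) = omega_t^2 - t, a classical solution of the
# path-dependent heat equation  d_t u + (1/2) d^2_ww u = 0.
def u(i, path, dt):
    return path[i] ** 2 - i * dt

def vertical_derivative(u, i, path, dt, h=1e-6):
    # Dupire's space derivative: bump the path by h * 1_{[t, T]}.
    bumped = path.copy()
    bumped[i:] += h
    return (u(i, bumped, dt) - u(i, path, dt)) / h

def horizontal_derivative(u, i, path, dt):
    # Dupire's time derivative: freeze the path at time t, move time forward.
    frozen = path.copy()
    frozen[i + 1:] = path[i]
    return (u(i + 1, frozen, dt) - u(i, frozen, dt)) / dt

rng = np.random.default_rng(0)
n, dt = 100, 0.01
path = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
i = 50
d_w = vertical_derivative(u, i, path, dt)    # approximately 2 * omega_t
d_t = horizontal_derivative(u, i, path, dt)  # -1 for this functional
```

Note that the horizontal derivative is taken along the stopped path, so it is not a Fréchet-type derivative in time either.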
Those derivatives take into account that $u = u(t,\omega)$ depends on the path $\omega$ in a nonanticipative manner, i.e., only up to time $t$. In particular, the derivatives are not Fréchet derivatives. In the Markovian case, namely when $u(t,\omega) = v(t,\omega_t)$, the equation becomes the standard heat equation $\partial_t v + \frac{1}{2}\partial^2_{xx} v = 0$.

The path-dependent heat equation provides a characterization of (non-Markovian) martingales. However, one should point out that PPDEs rarely have classical solutions. Thus, we turn to viscosity solutions, a notion of weak solutions introduced by Crandall and Lions [23] that has turned out to be extremely successful.

The PPDEs mentioned above are backward, namely, the terminal datum $u(T,\omega)$ is given. We are also interested in viscosity solutions for (forward) stochastic partial differential equations (SPDEs for short), which can easily be transformed into forward PPDEs. It turns out that rough path theory is a convenient language for studying forward problems. Our goal is to study fully nonlinear SPDEs with random coefficients in order to provide a counterpart to the, by now, well-established theory of viscosity solutions for fully nonlinear deterministic PDEs. To this end, rough path theory will be appropriately extended to treat rough PDEs with minimal regularity in time. Moreover, as a first step toward a unified treatment of forward and backward problems, natural links between Dupire's derivatives and Gubinelli's derivatives [42] occurring in rough path integration will be pointed out.

1.2 Organization of the dissertation

The rest of this dissertation is organized as follows.

In Chapter 2 (previously published as [31]), we introduce a notion of viscosity solution for semi-linear parabolic PPDEs. In particular, we address the terminal-value problem
\[ \partial_t u(t,\omega) + \tfrac{1}{2}\,\partial^2_{\omega\omega} u(t,\omega) + f\big(t,\omega,u(t,\omega),\partial_\omega u(t,\omega)\big) = 0, \quad (t,\omega)\in[0,T)\times\Omega; \]
\[ u(T,\omega) = \xi(\omega), \quad \omega\in\Omega, \]
where $\Omega = \{\omega\in C([0,T];\mathbb{R}^d) : \omega_0 = 0\}$.
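In the special case $f\equiv 0$, the problem above reduces to the path-dependent heat equation, and its solution is the expectation of the terminal functional over Brownian continuations of the frozen history. A minimal Monte Carlo sketch (illustrative only; the terminal functional $\xi(\omega)=\omega_T^2$, for which $u(t,\omega)=\omega_t^2 + (T-t)$ in closed form, and all numerical parameters are my choices):

```python
import numpy as np

# Monte Carlo sketch of u(t, omega) = E[ xi(omega concatenated at t with B) ]:
# the history omega is frozen on [0, t] and continued by Brownian tails.
# Assumed terminal functional: xi(omega) = omega_T^2.
rng = np.random.default_rng(1)
T, n, n_mc = 1.0, 100, 100_000
dt = T / n
t_idx = 60                               # evaluation time t = 0.6
omega = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

m = n - t_idx                            # remaining steps on (t, T]
incr = rng.normal(0.0, np.sqrt(dt), (n_mc, m))
tails = omega[t_idx] + np.cumsum(incr, axis=1)
# Concatenated paths: frozen history, then independent Brownian tails.
paths = np.hstack([np.tile(omega[:t_idx + 1], (n_mc, 1)), tails])

est = np.mean(paths[:, -1] ** 2)         # Monte Carlo value of u(t, omega)
exact = omega[t_idx] ** 2 + (T - t_idx * dt)
```

For nonzero $f$, this plain expectation is replaced by a backward SDE representation, as discussed below.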
Due to the lack of local compactness of our state space $\Omega$, we cannot employ standard methods to prove a comparison principle for viscosity solutions (see, for example, Crandall, Ishii, and Lions [22]). To overcome this difficulty, we define an appropriate space of test functions for our notion of viscosity solutions, which incorporates an optimal stopping problem. The existence of optimal stopping times enables us to prove well-posedness for our terminal-value problem under uniformly continuous data. We prove existence by using a stochastic representation in terms of backward stochastic differential equations (see, for example, El Karoui, Peng, and Quenez [36]). In particular, we extend the nonlinear Feynman-Kac formula of Pardoux and Peng [77] to the non-Markovian case. In order to show uniqueness, we first establish a partial comparison principle, where a classical subsolution (resp. supersolution) and a viscosity supersolution (resp. subsolution) are compared. A key ingredient of the proof is the use of reflected backward stochastic differential equations (see El Karoui et al. [35]). Then, we prove the comparison principle by adapting Perron's method, where we rely on a result of Bank and Baum [2].

In Chapter 3 (available online as [51]), I investigate the terminal-value problem
\[ \mathcal{L}u(t,\omega) + f\big(t,\omega,u(t,\omega),\partial_\omega u(t,\omega),\mathcal{I}u(t,\omega)\big) = 0, \quad (t,\omega)\in[0,T)\times\Omega; \]
\[ u(T,\omega) = \xi(\omega), \quad \omega\in\Omega, \]
where $\mathcal{L}$ is a linear integro-differential operator of the form
\[ \mathcal{L}u(t,\omega) = \partial_t u(t,\omega) + \sum_{i\le d} b^i(t,\omega)\,\partial_{\omega^i} u(t,\omega) + \frac{1}{2}\sum_{i,j\le d} c^{ij}(t,\omega)\,\partial^2_{\omega^i\omega^j} u(t,\omega) \]
\[ \qquad\qquad + \int_{\mathbb{R}^d} \Big[ u\big(t,\omega + z\,1_{[t,T]}\big) - u(t,\omega) - \sum_{i\le d} z^i\,\partial_{\omega^i} u(t,\omega) \Big]\, K(t,\omega,dz), \]
and $\mathcal{I}$ is an integral operator of the form
\[ \mathcal{I}u(t,\omega) = \int_{\mathbb{R}^d} \Big[ u\big(t,\omega + z\,1_{[t,T]}\big) - u(t,\omega) \Big]\, K(t,\omega,dz). \]
Due to the nature of the problem, the path space $\Omega$ is here the set of càdlàg functions from $[0,T]$ into $\mathbb{R}^d$.
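To make the nonlocal term of $\mathcal{L}$ concrete, here is a small quadrature sketch. All choices are mine, not the text's: $d=1$, a toy finite kernel $K(dz) = \lambda\,N(0,s^2)\,dz$, and the cylinder functional $u(t,\omega)=\omega_t^2$, for which the compensated integrand reduces to $z^2$ and the exact value of the integral is $\lambda s^2$.

```python
import numpy as np

# Nonlocal part of L for u(t, omega) = omega_t^2:
#   int [ u(t, omega + z 1_{[t,T]}) - u(t, omega) - z d_w u(t, omega) ] K(dz)
# evaluated by simple quadrature against a toy Gaussian kernel.
lam, s = 2.0, 0.5
z = np.linspace(-6 * s, 6 * s, 20_001)
dz = z[1] - z[0]
kernel = lam * np.exp(-z**2 / (2 * s**2)) / np.sqrt(2 * np.pi * s**2)

omega_t = 0.7                                # current value of the path
bumped  = (omega_t + z) ** 2                 # u(t, omega + z 1_{[t,T]})
base    = omega_t ** 2                       # u(t, omega)
compens = 2 * omega_t * z                    # z * d_w u, the compensator term
nonlocal_term = np.sum((bumped - base - compens) * kernel) * dz
```

The "vertical" bump $\omega + z\,1_{[t,T]}$ is exactly the jump of the path by $z$ at time $t$, held until $T$, which is why the càdlàg path space is the natural setting here.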
After defining the appropriate notion of viscosity solutions in the spirit of [31], existence of a viscosity solution to the terminal-value problem is established by means of a stochastic representation in terms of non-Markovian backward stochastic differential equations with jumps (cf. Barles, Buckdahn, and Pardoux [3] for the Markovian case). In order to establish uniqueness, more restrictive assumptions have to be imposed. In particular, we need the coefficients of $\mathcal{L}$ to be constant and the data $f$ and $\xi$ to be uniformly continuous with respect to the so-called $M_1$-Skorohod topology. This topology was introduced by Skorohod in [91]. It should be noted that the $M_1$-topology is not the most frequently used of Skorohod's topologies; that is the $J_1$-topology.

A major difference compared to [31] is a new framework. Instead of using so-called shifted path spaces, where paths have value 0 at the initial time, we have one global path space but use (shifted) probability measures. In particular, at the initial time our paths are allowed to have arbitrary values in $\mathbb{R}^d$. The proof of the comparison principle necessitated this change of framework. Additional difficulties compared to [31] are of a subtle, more technical nature (for example, measurability issues, the choice of filtrations, and the total inaccessibility of certain hitting times). Moreover, regular conditional probability distributions (r.c.p.d.) play a technical yet essential role in [31]. However, since in our framework we need to work with right-continuous filtrations, r.c.p.d. (in the sense of Stroock and Varadhan [94]) do not exist (see Blackwell and Dubins [8]). Nonetheless, they can be adequately substituted by conditional probability distributions, provided appropriate care is taken. My results can be applied to non-Markovian problems that involve stochastic processes with jumps.
In Chapter 4 (available online as [52]), we adapt Dupire's functional Itô calculus, which operates within the (probabilistic) semimartingale framework, to a pathwise setting, in particular, to the rough paths framework. The theory of rough paths was introduced by Lyons in [68] and has received much attention in the last decade. First, we introduce path derivatives in the context of rough path integration and establish an Itô calculus for rough paths. Next, we study rough differential equations and rough PDEs with time- and path-dependent coefficients under minimal regularity assumptions with respect to time. Our main motivation is to achieve progress in the theory of viscosity solutions for SPDEs. We extend results for rough ODEs and PDEs with coefficients that are constant in time (see Gubinelli [42] and Friz and Hairer [40]) to our more general setting; namely, we allow the coefficients to depend on time and on the underlying rough path. In particular, by adapting the method of stochastic characteristics (due to Kunita [54]), we prove well-posedness for a class of semi-linear rough PDEs. Moreover, for diffusions and solutions to a large class of stochastic differential equations, we establish existence of a pathwise solution and continuity in a certain rough path topology with respect to the underlying sample path. It should be noted that these last results are valid for every sample path and not just almost surely. The results will be crucial for studying pathwise viscosity solutions of SPDEs.

Chapter 2
Path-dependent partial differential equations¹

2.1 Introduction

It is well known that a Markovian-type backward SDE (BSDE, for short) is associated with a semi-linear parabolic PDE via the so-called nonlinear Feynman-Kac formula; see Pardoux and Peng [77].
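In the simplest Markovian case, this link can be illustrated numerically: the BSDE value obtained by backward induction on a binomial tree matches the solution of the associated PDE. The following sketch is illustrative only; the linear driver $f(y)=cy$ and terminal condition $g(x)=x^2$ are my choices, for which the PDE $\partial_t u + \frac12 u_{xx} + cu = 0$, $u(T,x)=x^2$, has the closed-form solution $u(t,x) = e^{c(T-t)}(x^2 + T - t)$.

```python
import numpy as np

# Backward Euler for the BSDE with driver f(y) = c*y and terminal
# condition g(B_T) = B_T^2, on a binomial tree for Brownian motion.
c, T, n = 0.5, 1.0, 1000
dt = T / n
sq = np.sqrt(dt)

# Terminal layer: B_T takes values (2j - n) * sqrt(dt), j = 0..n.
y = ((2 * np.arange(n + 1) - n) * sq) ** 2
for _ in range(n):
    cond_exp = 0.5 * (y[:-1] + y[1:])   # one-step conditional expectation
    y = cond_exp + dt * c * cond_exp    # explicit Euler step for f(y) = c*y
y0 = y[0]                               # BSDE value Y_0 at (t, x) = (0, 0)
u0 = np.exp(c * T) * (0.0 ** 2 + T)     # PDE solution u(0, 0)
```

The agreement of `y0` and `u0` up to discretization error is the Feynman-Kac relation in miniature; the non-Markovian extension replaces $(t,x)$ by $(t,\omega)$.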
Such a relation was extended to forward-backward SDEs (FBSDEs, for short) and quasi-linear PDEs, see, e.g., Ma, Protter, and Yong [70], Pardoux and Tang [79], and Ma, Zhang, and Zheng [71], and to second order BSDEs (2BSDEs, for short) and fully nonlinear PDEs, see, e.g., Cheridito, Soner, Touzi, and Victoir [16] and Soner, Touzi, and Zhang [92]. The notion of G-expectation, proposed by Peng [82], was also motivated by the connection with fully nonlinear PDEs.

¹ This is joint work with Ibrahim Ekren, Nizar Touzi, and Jianfeng Zhang. A version with minor modifications of this chapter has been previously published as I. Ekren, C. Keller, N. Touzi, and J. Zhang, On viscosity solutions of path dependent PDEs, Ann. Probab. 42 (2014), no. 1, 204-236. Copyright of this work has been transferred to the Institute of Mathematical Statistics. However, the authors retain their right to use all or parts of this article in future works of their own.

In the non-Markovian case, the BSDEs (and FBSDEs, 2BSDEs) become path-dependent. Because of the connection with PDEs in the Markovian case, it has long been discussed whether general BSDEs can also be viewed as a kind of PDE. In particular, in his ICM 2010 lecture, Peng [84] posed the question of whether or not a non-Markovian BSDE can be viewed as a path-dependent PDE (PPDE, for short). The recent work of Dupire [29], which was further extended by Cont and Fournié [20], provides a convenient framework for this problem. Dupire introduces the notions of horizontal derivative (which we will refer to as time derivative) and vertical derivative (which we will refer to as space derivative) for non-anticipative stochastic processes. One remarkable result is the functional Itô formula under his definition. As a direct consequence, if $u(t,\omega)$ is a martingale under the Wiener measure with enough regularity (in their sense), then its drift part from Itô's formula vanishes and thus it is a classical solution of the following path-dependent heat equation:
\[ \partial_t u(t,\omega) + \tfrac{1}{2}\,\partial^2_{xx} u(t,\omega) = 0. \tag{2.1.1} \]
It is then very natural to view BSDEs as semi-linear PPDEs, and 2BSDEs and G-martingales as fully nonlinear PPDEs. However, we shall emphasize that PPDEs can rarely have classical solutions, even for heat equations. We refer to Peng and Wang [86] for some sufficient conditions under which a semi-linear PPDE admits a classical solution.

The present work was largely stimulated by the recent paper of Peng [85], which appeared while our investigation of the problem was at an early stage. Peng proposes a notion of viscosity solutions for PPDEs on càdlàg paths using compactness arguments. However, the horizontal derivative (or time derivative) in [85] is defined differently from that in Dupire [29], which leads to a different context than ours. Moreover, [85] derives a uniqueness result for PPDEs on càdlàg paths. Given the non-uniqueness of the extension of a function to the càdlàg paths, this does not imply any uniqueness statement on the space of continuous paths. For this reason, our approach uses an alternative definition to that of Peng [85].

Our main objective is to propose a notion of viscosity solutions of PPDEs on the space of continuous paths. To focus on the main idea, we restrict ourselves to the semi-linear case and leave the fully nonlinear case for future study. We shall prove existence, uniqueness, stability, and a comparison principle for viscosity solutions.

The theory of viscosity solutions for standard PDEs has been well developed. We refer to the classical references Crandall, Ishii, and Lions [22] and Fleming and Soner [37]. As is well understood, in the path-dependent case the main challenge comes from the fact that the space variable is infinite-dimensional and thus lacks compactness. Our context does not fall into the framework of Lions [56-58], where the notion of viscosity solutions is extended to Hilbert spaces by using a limiting argument based on the existence of a countable basis.
Consequently, the standard techniques for the comparison principle, which rely heavily on compactness arguments, fail in our context. We remark, though, that for first order PPDEs, Lukoyanov [66], exploiting their special structure, studied viscosity solutions by adapting the compactness arguments elegantly.

To overcome this difficulty, we provide a new approach by decomposing the proof of the comparison principle into two steps. We first prove a partial comparison principle, that is, a classical sub-solution (resp. viscosity sub-solution) is always less than or equal to a viscosity super-solution (resp. classical super-solution). The main idea is to use the classical one to construct a test function for the viscosity one and then obtain a contradiction. Our second step is a variation of Perron's method. Let $\underline{u}$ and $\overline{u}$ denote the supremum of classical sub-solutions and the infimum of classical super-solutions, respectively, with the same terminal condition. In the standard Perron approach, see, e.g., Ishii [47] and an interesting recent development by Bayraktar and Sirbu [5], one shows that
\[ \underline{u} = \overline{u} \tag{2.1.2} \]
by assuming the comparison principle for viscosity solutions, which further implies the existence of viscosity solutions. We shall instead prove (2.1.2) directly, which, together with our partial comparison principle, immediately implies the comparison principle for viscosity solutions. Our arguments for (2.1.2) mainly rely on the
However, our approach is suitable for a large class of PPDEs as Hamilton-Jacobi-Bellman equations, which are related to stochastic control prob- lems, and their extension to Isaac-Hamilton-Jacobi-Bellman equations correspond- ing to dierential games. The rest of this chapter is organized as follows. In Section 2.2 we introduce the framework of [29] and [20] and adapt it to our problem. We dene classical and viscosity solutions of PPDE in Section 2.3. In Section 2.4 we introduce the main results, and in Section 2.5 we prove some basic properties of the solutions, including existence, stability, and the partial comparison principle of viscosity solutions. Finally in Section 2.6 we prove (2.1.2) and the comparison principle for viscosity solutions. 2.2 Pathwise stochastic analysis In this section we introduce the spaces on which we will dene the solutions of path-dependent PDEs. The key notions of derivatives were proposed by Dupire 11 [29] who introduced the functional It^ o calculus, and further developed by Cont and Fournie [20]. We shall also introduce their localization version for our purpose. 2.2.1 Derivatives on c adl ag paths Let ^ :=D([0;T ];R d ), the set of c adl ag paths, ^ ! denote the elements of ^ , ^ B the canonical process, ^ F the ltration generated by ^ B, and ^ := [0;T ] ^ . We dene a norm on ^ and a pseudometric on ^ as follows: for any (t; ^ !); (t 0 ; ^ ! 0 )2 ^ , k^ !k t := sup 0st j^ ! s j; d 1 (t; ^ !); (t 0 ; ^ ! 0 ) :=jtt 0 j + sup 0sT j^ ! t^s ^ ! 0 t 0 ^s j: (2.2.1) Then ( ^ ;kk T ) is a Banach space and ( ^ ;d 1 ) is a complete pseudometric space. Let ^ u : ^ ! R be an ^ F-progressively measurable process. Note that the pro- gressive measurability implies that ^ u(t; ^ !) = ^ u(t; ^ ! ^t ) for all (t; ^ !)2 ^ . Following Dupire [29], we dene spatial derivatives of ^ u, provided the following limits exist, in the standard sense: for the basis e i ofR d , i = 1; ;d, @ x i ^ u(t; ^ !) := lim h!0 1 h h ^ u(t; ^ ! 
+h1 f[t;T ]g e i ) ^ u(t; ^ !) i ; @ x i x j ^ u :=@ x i (^ u x j ); i;j = 1; ;d; (2.2.2) and the right time-derivative of ^ u, if the following limit exists, as: @ t ^ u(t; ^ !) := lim h!0;h>0 1 h h ^ u t +h; ^ ! ^t ^ u t; ^ ! i ; t<T: (2.2.3) 12 For the nal time T , we dene: @ t ^ u(T;!) := lim t<T;t"T @ t ^ u(t;!): (2.2.4) We take the convention that ^ ! are column vectors, but @ x ^ u denotes row vectors and @ 2 xx ^ u denote dd-matrices. Denition 2.2.1. Let ^ u : ^ !R be ^ F-progressively measurable. (i) We say ^ u2C 0 ( ^ ) if ^ u is continuous in (t; ^ !) under d 1 . (ii) We say ^ u2C 0 b ( ^ )C 0 ( ^ ) if ^ u is bounded. (iii) We say ^ u2C 1;2 b ( ^ )C 0 ( ^ ) if @ t ^ u, @ x ^ u, and @ 2 xx ^ u exist and are in C 0 b ( ^ ). Remark 2.2.2. To simplify the presentation, we will consider here only bounded viscosity solutions. By slightly more involved estimates, we can extend our results to the cases with polynomial growth. However, the boundedness of the derivatives @ t ^ u, @ x ^ u, and @ 2 xx ^ u is crucial for the functional It^ o's formula (4.2.18) below. 2.2.2 Derivatives on continuous paths We now let := !2 C([0;T ];R d ) : ! 0 = 0 , the set of continuous paths with initial value 0, B the canonical process, F the ltration generated by B, P 0 the Wiener measure, and := [0;T ] . Here and in the sequel, for notational simplicity, we use 0 to denote vectors or matrices with appropriate dimensions whose components are all equal to 0. 13 Clearly ^ , ^ , and each !2 can also be viewed as an element of ^ . Thenkk t and d 1 in (2.2.1) are well dened on and , and ( ;kk T ) and (;d 1 ) are also complete metric spaces. Given u : !R and ^ u : ^ !R, we say ^ u is consistent with u on if ^ u(t;!) =u(t;!); for all (t;!)2 : (2.2.5) Denition 2.2.3. Let u : !R beF-progressively measurable. (i) We say u2C 0 () if u is continuous in (t;!) under d 1 . (ii) We say u2C 0 b ()C 0 () if u is bounded. 
(iii) We say $u\in C^{1,2}_b(\Lambda)$ if there exists $\hat u\in C^{1,2}_b(\hat\Lambda)$ such that (2.2.5) holds.

By [29] and [20], we have the following important results.

Theorem 2.2.4. Let $u\in C^{1,2}_b(\Lambda)$ and $\hat u\in C^{1,2}_b(\hat\Lambda)$ be such that (2.2.5) holds.
(i) The definition
\[ \partial_t u := \partial_t\hat u, \quad \partial_x u := \partial_x\hat u, \quad \partial^2_{xx} u := \partial^2_{xx}\hat u \quad \text{on } \Lambda \]
is independent of the choice of $\hat u$. Namely, if there is another $\hat u'\in C^{1,2}_b(\hat\Lambda)$ such that (2.2.5) holds, then the derivatives of $\hat u'$ coincide with those of $\hat u$ on $\Lambda$.
(ii) If $\mathbb{P}$ is a semimartingale measure, then $u$ is a semimartingale under $\mathbb{P}$ and
\[ du_t = \partial_t u_t\,dt + \frac{1}{2}\,\mathrm{tr}\big(\partial^2_{xx} u_t\, d\langle B\rangle_t\big) + \partial_x u_t\,dB_t, \quad \mathbb{P}\text{-a.s.} \tag{2.2.6} \]

Here and in the sequel, when we emphasize that $u$ is a process, we use the notation $u_t(\omega) := u(t,\omega)$ and often omit $\omega$ by simply writing $u_t$. Moreover, when a probability is involved, quite often we use $B$, which by definition satisfies $B_t(\omega) = \omega_t$.

2.2.3 Localization of spaces

For our purpose, we need to introduce localized versions of the above notions. Let
\[ \mathcal{T} := \Big\{ \mathbb{F}\text{-stopping times } \tau : \{\omega : \tau(\omega)>t\} \text{ is an open subset of } (\Omega,\|\cdot\|_T) \text{ for every } t \Big\}. \tag{2.2.7} \]
The following is a typical example of such a $\tau$.

Example 2.2.5. Let $u\in C^0(\Lambda)$. Then, for any constant $c$,
\[ \tau := \inf\big\{ t : u(t,\omega)\ge c \big\}\wedge T \]
is in $\mathcal{T}$.

Proof. For any $t<T$, $\{\tau>t\} = \{\sup_{0\le s\le t} u_s < c\}$. Fix $\omega\in\{\tau>t\}$ and set $\varepsilon := \frac{1}{2}\big[c - \sup_{0\le s\le t} u(s,\omega)\big] > 0$. For any $s\in[0,t]$, since $u$ is continuous at $(s,\omega)$, there exists a constant $h_s>0$ such that $|u(r,\tilde\omega)-u(s,\omega)|\le\varepsilon$ whenever $d_\infty\big((r,\tilde\omega),(s,\omega)\big)<h_s$. Since the open intervals $(s-\frac{1}{2}h_s, s+\frac{1}{2}h_s)$, $s\in[0,t]$, cover the compact set $[0,t]$, there exist $0=s_0<s_1<\dots<s_n=t$ such that $[0,t]\subset\bigcup_{0\le i\le n}(s_i-\frac{1}{2}h_{s_i}, s_i+\frac{1}{2}h_{s_i})$. Now set $h := \frac{1}{2}\min_{0\le i\le n} h_{s_i} > 0$. For any $\tilde\omega\in\Omega$
Denote () := n (t;!)2 :t<(!) o and () := n (t;!)2 :t(!) o : (2.2.8) Then clearly () is an open subset of (;d 1 ). Denition 2.2.6. Let 2T and u : ()!R. We say u2 C 1;2 b ( ()) if there exists ~ u2C 1;2 b () such that u = ~ u on (): (2.2.9) The following result is the localization version of Theorem 2.2.4. 16 Proposition 2.2.7. Let 2T , u2 C 1;2 b ( ()), ~ u2 C 1;2 b () such that (2.2.9) holds. (i) One may dene @ t u :=@ t ~ u; @ x u :=@ x ~ u; @ 2 xx u :=@ 2 xx ~ u; on (); (2.2.10) and the denition is independent of the choice of ~ u. (ii) LetP be a semimartingale measure. Thenu is aP-semimartingale on [0;] and (4.2.18) holds on [0;]. Proof. First, for the derivatives dened in (2.2.10), (4.2.18) follows directly from Theorem 2.2.4. Next, assume ~ u 0 2C 1;2 b () also satises (2.2.9). Denote u := ~ u~ u 0 . Then u = 0 on (). Now x (t;!)2 (). Since () is open, there exists h := h(t;!)> 0 such that (s; ~ !)2 () whenever d 1 ((s; ~ !); (t;!))<h. Now following the denition of the time derivative we obtain immediately that @ t u(t;!) = 0. Moreover, letP =P 0 and applying (4.2.18) to u we have 0 = 1 2 tr @ 2 xx u t dt +@ x u t dB t ; 0t<; P 0 -a.s. Thus, since @ x u and @ 2 xx u are bounded, @ x u = 0; @ 2 xx u = 0; dtdP 0 -a.s. on (): 17 Since () is open, and@ x u and@ 2 xx u are continuous in (t;!) underd 1 , it is clear that @ x u = 0; @ 2 xx u = 0; on (): This implies that the denition in (2.2.10) is independent of the choice of ~ u. 2.2.4 Space shifts We rst x t2 [0;T ] and introduce the shifted spaces on c adl ag paths. - Let ^ t :=D([t;T ];R d ) be the shifted canonical space; ^ B t the shifted canonical process on ^ t ; ^ F t the shifted ltration generated by B t ; and ^ t := [t;T ] ^ t . - Denekk t s and d t 1 in the spirit of (2.2.1); - For ^ F t -progressively measurable ^ u : ^ t ! R, dene the derivatives in the spirit of (2.2.2) and (2.2.3), and dene the spaces C 0 ( ^ t ), C 0 b ( ^ t ) and C 1;2 b ( ^ t ) in the spirit of Denition 2.2.1. 
Similarly, we may dene the shifted spaces on continuous paths. - Let t := !2C([t;T ];R d ) :! t = 0 be the shifted canonical space; B t the shifted canonical process on t ; F t the shifted ltration generated by B t , P t 0 the Wiener measure on t , and t := [t;T ] t . - Dene C 0 ( t ), C 0 b ( t ) and C 1;2 b ( t ) in an obvious way. 18 - LetT t denote the space of F t -stopping times such that, for any s2 [t;T ), the setf!2 t :(!)>sg is an open subset of t underkk t T . - For each 2T t , dene t (), t (), and C 1;2 b ( t ()) in an obvious way. We next introduce the shift and concatenation operators. Let 0stT . - For any ^ !2 ^ s , ^ ! 0 2 ^ t , and!2 s ,! 0 2 t , dene the concatenation paths ^ ! t ^ ! 0 2 ^ s and ! t ! 0 2 s by: (^ ! t ^ ! 0 )(r) := ^ ! r 1 [s;t) (r) + (^ ! t ^ ! 0 t + ^ ! 0 r )1 [t;T ] (r); (! t ! 0 )(r) :=! r 1 [s;t) (r) + (! t +! 0 r )1 [t;T ] (r); for all r2 [s;T ]: - Let ^ ! 2 ^ s . For any ^ F s T -measurable random variable ^ and any ^ F s - progressively measurable process ^ X on ^ s , dene the shifted ^ F t T -measurable ran- dom variable ^ t;^ ! and ^ F t -progressively measurable process ^ X t;^ ! on ^ t by: ^ t;^ ! (^ ! 0 ) := ^ (^ ! t ^ ! 0 ); ^ X t;^ ! (^ ! 0 ) := ^ X(^ ! t ^ ! 0 ); for all ^ ! 0 2 ^ t : - Let ! 2 s . For any F s T -measurable random variable and any F s - progressively measurable process X on s , dene the shiftedF t T -measurable ran- dom variable t;! and theF t -progressively measurable process X t;! on t by: t;! (! 0 ) :=(! t ! 0 ); X t;! (! 0 ) :=X(! t ! 0 ); for all ! 0 2 t : 19 It is clear that all the results in previous subsections can be extended to the shifted spaces, after obvious modications. Moreover, for any2T , (t;!)2 (), and u2C 1;2 b ( ()), it is clear that t;! 2T t and u t;! 2C 1;2 b ( t ( t;! )). For some technical proofs later, we shall also use the following space. Denote T t + :=f2T t : >tg for t<T and T T + :=fTg: (2.2.11) Denition 2.2.8. 
Let t2 [0;T ], u : t !R, andP be a semimartingale measure on t . We say u2 C 1;2 P ( t ) if there exists an increasing sequence of F t -stopping times t = 0 1 T such that, (i) For each i 0 and ! 2 t , i (!);! i+1 2 T i (!) + and u i (!);! 2 C 1;2 b ( i (!) ( i (!);! i+1 )); (ii) For each i 0 and !2 , u (!) is continuous on [0; i (!)]; (iii) ForP-a.s. !2 t , the setfi : i (!)<Tg is nite. We shall emphasize that, for u2 C 1;2 P ( t ), the derivatives of u are bounded on each interval [ i (!); i (!);! i+1 ], however, in general they may be unbounded on the whole interval [t;T ]. Also, the previous denition and, more specically the dependence on P introduced in item (iii), is motivated by the results established in Section 2.6 below. The following result is a direct consequence of Proposition 2.2.7. 20 Proposition 2.2.9. Let P be a semimartingale measure on t and u2 C 1;2 P ( t ). Then u is a local P-semimartingale on [t;T ] and du s =@ t u s ds + 1 2 tr @ 2 xx u s dhB t i s +@ x u s dB t s ; tsT; P-a.s. 2.3 PPDEs and main denitions We study the following semi-linear parabolic Path-dependent PDE (PPDE, for short): (Lu)(t;!) = 0; 0t<T; !2 ; (2.3.1) where (Lu)(t;!) :=@ t u(t;!) 1 2 tr @ 2 xx u(t;!) f(t;!;u(t;!);@ x u(t;!)): We remark that there is potential to extend our results to a much more general setting. In particular, all our results can be easily extended to semi-linear PPDEs with a more general generator of the form (Lu)(t;!) :=@ t u(t;!) 1 2 tr T @ 2 xx u (t;!)f t;!;u(t;!);@ x u(t;!)(t;!) : However, in order to focus on the main ideas, we content ourselves here with the simpler PPDE (2.3.1) under somewhat strong technical conditions, and leave more general cases, e.g., fully nonlinear PPDEs, for future studies. 21 Remark 2.3.1. In the Markovian case, namely when f = f(t;! t ;y;z) and u(t;!) =v(t;! 
$_t)$, the PPDE (2.3.1) reduces to the following PDE:
$$(\mathcal{L}v)(t,x) = 0, \qquad 0 \le t < T, \ x \in \mathbb{R}^d, \tag{2.3.2}$$
where
$$(\mathcal{L}v)(t,x) := -\partial_t v(t,x) - \frac{1}{2}\operatorname{tr}\big(D^2_{xx} v(t,x)\big) - f\big(t,x,v(t,x),D_x v(t,x)\big).$$
Here $D_x$ and $D^2_{xx}$ denote the standard first and second order derivatives with respect to $x$. However, slightly different from the PDE literature but consistent with (2.2.3), $\partial_t$ denotes the right time-derivative.

As usual, we start with classical solutions.

Definition 2.3.2. Let $u \in C^{1,2}_b(\Lambda)$. We say $u$ is a classical solution (resp. sub-solution, super-solution) of PPDE (2.3.1) if
$$(\mathcal{L}u)(t,\omega) = (\text{resp. } \le, \ \ge)\ 0 \qquad \text{for all } (t,\omega) \in [0,T) \times \Omega. \tag{2.3.3}$$
It is clear that, in the Markovian setting of Remark 2.3.1, $u$ is a classical solution (resp. sub-solution, super-solution) of PPDE (2.3.1) if and only if $v$ is a classical solution (resp. sub-solution, super-solution) of PDE (2.3.2).

Existence and uniqueness of classical solutions are related to the analogous results for the corresponding backward SDE. In order to avoid diverting attention from our main purpose in this paper, we report these properties later in Subsection 2.5.1, and we move on to our notion of viscosity solutions.

For any $L \ge 0$ and $t < T$, let $\mathcal{U}^L_t$ denote the space of $\mathbb{F}^t$-progressively measurable $\mathbb{R}^d$-valued processes $\lambda$ such that each component of $\lambda$ is bounded by $L$. Viewing $\lambda$ as row vectors, we define
$$M^{t,\lambda}_s := \exp\Big(\int_t^s \lambda_r\,dB^t_r - \frac{1}{2}\int_t^s |\lambda_r|^2\,dr\Big), \ \mathbb{P}^t_0\text{-a.s.}, \qquad d\mathbb{P}^{t,\lambda} := M^{t,\lambda}_T\,d\mathbb{P}^t_0, \tag{2.3.4}$$
and we introduce, for all $t \in [0,T]$, two nonlinear expectations: for any $\xi \in \mathbb{L}^2(\mathcal{F}^t_T,\mathbb{P}^t_0)$,
$$\underline{\mathcal{E}}^L_t[\xi] := \inf\big\{\mathbb{E}^{\mathbb{P}^{t,\lambda}}[\xi] : \lambda \in \mathcal{U}^L_t\big\}, \qquad \overline{\mathcal{E}}^L_t[\xi] := \sup\big\{\mathbb{E}^{\mathbb{P}^{t,\lambda}}[\xi] : \lambda \in \mathcal{U}^L_t\big\}. \tag{2.3.5}$$
Moreover, for any $u \in C^0_b(\Lambda)$, define
$$\underline{\mathcal{A}}^L(t,\omega;u) := \Big\{\varphi \in C^{1,2}_b(\Lambda^t) : \text{there exists } \tau \in \mathcal{T}^t_+ \text{ such that } 0 = \varphi(t,0) - u(t,\omega) = \min_{\tilde{\tau} \in \mathcal{T}^t}\underline{\mathcal{E}}^L_t\big[(\varphi - u^{t,\omega})_{\tilde{\tau}\wedge\tau}\big]\Big\},$$
$$\overline{\mathcal{A}}^L(t,\omega;u) := \Big\{\varphi \in C^{1,2}_b(\Lambda^t) : \text{there exists } \tau \in \mathcal{T}^t_+ \text{ such that } 0 = \varphi(t,0) - u(t,\omega) = \max_{\tilde{\tau} \in \mathcal{T}^t}\overline{\mathcal{E}}^L_t\big[(\varphi - u^{t,\omega})_{\tilde{\tau}\wedge\tau}\big]\Big\}. \tag{2.3.6}$$

Definition 2.3.3. Let $u \in C^0_b(\Lambda)$.

(i) For any $L \ge 0$, we say $u$ is a viscosity $L$-subsolution (resp.
$L$-supersolution) of PPDE (2.3.1) if, for any $(t,\omega) \in [0,T) \times \Omega$ and any $\varphi \in \underline{\mathcal{A}}^L(t,\omega;u)$ (resp. $\varphi \in \overline{\mathcal{A}}^L(t,\omega;u)$), it holds that
$$(\mathcal{L}^{t,\omega}\varphi)(t,0) \le (\text{resp.} \ge)\ 0,$$
where, for each $(s,\tilde{\omega}) \in [t,T] \times \Omega^t$,
$$(\mathcal{L}^{t,\omega}\varphi)(s,\tilde{\omega}) := -\partial_t\varphi(s,\tilde{\omega}) - \frac{1}{2}\operatorname{tr}\big(\partial^2_{xx}\varphi(s,\tilde{\omega})\big) - f^{t,\omega}\big(s,\tilde{\omega},\varphi(s,\tilde{\omega}),\partial_x\varphi(s,\tilde{\omega})\big).$$

(ii) We say $u$ is a viscosity subsolution (resp. supersolution) of PPDE (2.3.1) if $u$ is a viscosity $L$-subsolution (resp. $L$-supersolution) of PPDE (2.3.1) for some $L \ge 0$.

(iii) We say $u$ is a viscosity solution of PPDE (2.3.1) if it is both a viscosity subsolution and a viscosity supersolution.

In the rest of this section we provide several remarks concerning our definition of viscosity solutions. In most places we comment on the viscosity subsolution only, but obviously similar properties hold for the viscosity supersolution as well.

Remark 2.3.4. As is standard in the literature on viscosity solutions of PDEs:

(i) The viscosity property is a local property in the following sense. For any $(t,\omega) \in [0,T) \times \Omega$ and any $\varepsilon > 0$, define
$$\tau_\varepsilon := \inf\big\{s > t : |B^t_s| \ge \varepsilon\big\} \wedge (t + \varepsilon).$$
To check the viscosity property of $u$ at $(t,\omega)$, it suffices to know the value of $u^{t,\omega}$ on $[t,\tau_\varepsilon]$ for an arbitrarily small $\varepsilon > 0$.

(ii) Typically $\underline{\mathcal{A}}^L(t,\omega;u)$ and $\overline{\mathcal{A}}^L(t,\omega;u)$ are disjoint, so the fact that $u$ is a viscosity solution does not mean that $(\mathcal{L}^{t,\omega}\varphi)(t,0) = 0$ for $\varphi$ in some appropriate set. One has to check the viscosity subsolution property and the viscosity supersolution property separately.

(iii) In general $\underline{\mathcal{A}}^L(t,\omega;u)$ could be empty. In this case $u$ automatically satisfies the viscosity subsolution property at $(t,\omega)$.

Remark 2.3.5. (i) For $0 \le L_1 < L_2$, obviously $\mathcal{U}^{L_1}_t \subset \mathcal{U}^{L_2}_t$, $\underline{\mathcal{E}}^{L_2}_t \le \underline{\mathcal{E}}^{L_1}_t$, and $\underline{\mathcal{A}}^{L_2}(t,\omega;u) \subset \underline{\mathcal{A}}^{L_1}(t,\omega;u)$. Then one can easily check that a viscosity $L_1$-subsolution must be a viscosity $L_2$-subsolution. Consequently, $u$ is a viscosity subsolution if and only if there exists an $L \ge 0$ such that, for all $\tilde{L} \ge L$, $u$ is a viscosity $\tilde{L}$-subsolution.

(ii) However, we require the same $L$ for all $(t,\omega)$.
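In the Markovian setting of Remark 2.3.1 with $d = 1$, the nonlinear expectations (2.3.5) applied to $\xi = g(B_T)$ admit a PDE characterization: $\overline{\mathcal{E}}^L_t[g(B_T)] = v(t,B_t)$, where $\partial_t v + \frac{1}{2}\partial^2_{xx}v + L|\partial_x v| = 0$ with $v(T,\cdot) = g$, the drift term reflecting the supremum over $|\lambda| \le L$. The following finite-difference sketch checks this in the simplest case; all grid parameters and the linear terminal condition $g(x) = x$ (for which the exact value is $\overline{\mathcal{E}}^L_0[B_T] = LT$) are my own illustrative choices, not from the text.

```python
import numpy as np

# Explicit backward scheme for  v_t + (1/2) v_xx + L |v_x| = 0,  v(T, x) = x.
# With g(x) = x the exact solution is v(t, x) = x + L (T - t), so v(0, 0) = L*T.
L, T = 0.5, 1.0
dx = 0.05
x = np.arange(-4.0, 4.0 + dx / 2, dx)
dt = 0.4 * dx**2                  # explicit scheme: dt <= dx^2 for stability
n = int(round(T / dt))
dt = T / n
v = x.copy()                      # terminal condition g(x) = x
for k in range(n):
    t = T - (k + 1) * dt
    vxx = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
    vx = (v[2:] - v[:-2]) / (2.0 * dx)
    v[1:-1] = v[1:-1] + dt * (0.5 * vxx + L * np.abs(vx))
    # impose the exact solution on the (far) boundary
    v[0], v[-1] = x[0] + L * (T - t), x[-1] + L * (T - t)
v0 = float(v[np.argmin(np.abs(x))])   # approximates Ebar^L_0[B_T]
print(v0)                              # close to L * T = 0.5
```

Replacing $+L|\partial_x v|$ by $-L|\partial_x v|$ gives the lower expectation $\underline{\mathcal{E}}^L$; the two values move apart as $L$ grows, consistent with the monotonicity in $L$ noted in Remark 2.3.5 (i).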
We should point out that our definition of viscosity subsolution is not equivalent to the following alternative definition, under which we are not able to prove the comparison principle: for any $(t,\omega)$ and any $\varphi \in \bigcap_{L \ge 0}\underline{\mathcal{A}}^L(t,\omega;u)$, it holds that $(\mathcal{L}^{t,\omega}\varphi)(t,0) \le 0$.

Remark 2.3.6. We may replace $\underline{\mathcal{A}}^L$ with the following $\underline{\mathcal{A}}'_L$, which requires a strict inequality:
$$\underline{\mathcal{A}}'_L(t,\omega;u) := \Big\{\varphi \in C^{1,2}_b(\Lambda^t) : \text{there exists } \tau \in \mathcal{T}^t_+ \text{ such that } 0 = \varphi(t,0) - u(t,\omega) < \underline{\mathcal{E}}^L_t\big[(\varphi - u^{t,\omega})_{\tilde{\tau}\wedge\tau}\big] \text{ for all } \tilde{\tau} \in \mathcal{T}^t_+\Big\}. \tag{2.3.7}$$
Then $u$ is a viscosity $L$-subsolution of PPDE (2.3.1) if and only if $(\mathcal{L}^{t,\omega}\varphi)(t,0) \le 0$ for all $(t,\omega) \in [0,T) \times \Omega$ and $\varphi \in \underline{\mathcal{A}}'_L(t,\omega;u)$. A similar statement holds for the viscosity supersolution.

Indeed, since $\underline{\mathcal{A}}'_L(t,\omega;u) \subset \underline{\mathcal{A}}^L(t,\omega;u)$, the "only if" part is clear. To prove the "if" part, let $(t,\omega) \in [0,T) \times \Omega$ and $\varphi \in \underline{\mathcal{A}}^L(t,\omega;u)$. For any $\varepsilon > 0$, denote $\varphi^\varepsilon(s,\tilde{\omega}) := \varphi(s,\tilde{\omega}) + \varepsilon(s - t)$. Then clearly $\varphi^\varepsilon \in \underline{\mathcal{A}}'_L(t,\omega;u)$, and thus
$$(\mathcal{L}^{t,\omega}\varphi^\varepsilon)(t,0) = -\partial_t\varphi(t,0) - \varepsilon - \frac{1}{2}\operatorname{tr}\big(\partial^2_{xx}\varphi(t,0)\big) - f^{t,\omega}\big(t,0,\varphi(t,0),\partial_x\varphi(t,0)\big) \le 0.$$
Sending $\varepsilon \to 0$, we obtain $(\mathcal{L}^{t,\omega}\varphi)(t,0) \le 0$, and thus $u$ is a viscosity $L$-subsolution.

Remark 2.3.7. Consider the Markovian setting of Remark 2.3.1. One can easily check that if $u$ is a viscosity subsolution of PPDE (2.3.1) in the sense of Definition 2.3.3, then $v$ is a viscosity subsolution of PDE (2.3.2) in the standard sense.

Remark 2.3.8. We have some flexibility in choosing $\underline{\mathcal{A}}(t,\omega;u)$ and $\overline{\mathcal{A}}(t,\omega;u)$ in Definition 2.3.3. In principle, the smaller these sets are, the easier it is to prove the viscosity properties, and thus the easier existence of viscosity solutions becomes; but the comparison principle and the uniqueness of viscosity solutions become more difficult.

(i) The following $\underline{\mathcal{A}}''_L(t,\omega;u)$ is larger than $\underline{\mathcal{A}}^L(t,\omega;u)$, but all the results in this chapter still hold true if we use $\underline{\mathcal{A}}''_L(t,\omega;u)$ (and the corresponding $\overline{\mathcal{A}}''_L(t,\omega;u)$):
$$\underline{\mathcal{A}}''_L(t,\omega;u) := \Big\{\varphi \in C^{1,2}_b(\Lambda^t) : \text{for any } \tau \in \mathcal{T}^t_+, \ 0 = \varphi(t,0) - u(t,\omega) \le \underline{\mathcal{E}}^L_t\big[(\varphi - u^{t,\omega}$$
$)_{\tilde{\tau}\wedge\tau}\big]$ for some $\tilde{\tau} \in \mathcal{T}^t_+\Big\}. \tag{2.3.8}$$

(ii) However, if we use the following smaller alternatives of $\underline{\mathcal{A}}^L(t,\omega;u)$, which do not involve the nonlinear expectation, we are not able to prove the comparison principle and the uniqueness of viscosity solutions:
$$\underline{\mathcal{A}}(t,\omega;u) := \Big\{\varphi \in C^{1,2}_b(\Lambda^t) : \text{there exists } \tau \in \mathcal{T}^t_+ \text{ such that } 0 = \varphi(t,0) - u(t,\omega) \le (\varphi - u^{t,\omega})_{\tilde{\tau}\wedge\tau} \text{ for any } \tilde{\tau} \in \mathcal{T}^t_+\Big\},$$
or
$$\underline{\mathcal{A}}(t,\omega;u) := \Big\{\varphi \in C^{1,2}_b(\Lambda^t) : 0 = \varphi(t,0) - u(t,\omega) \le (\varphi - u^{t,\omega})(s,\tilde{\omega}) \text{ for all } (s,\tilde{\omega}) \in (t,T] \times \Omega^t\Big\}.$$
See also Remark 2.3.5 (ii).

Remark 2.3.9. (i) Let $u$ be a viscosity subsolution of PPDE (2.3.1). Then for any $\lambda \in \mathbb{R}$, $\tilde{u}_t := e^{\lambda t}u_t$ is a viscosity subsolution of the following PPDE:
$$\tilde{\mathcal{L}}\tilde{u} := -\partial_t\tilde{u} - \frac{1}{2}\operatorname{tr}(\partial^2_{xx}\tilde{u}) - \tilde{f}(t,\omega,\tilde{u},\partial_x\tilde{u}) \le 0, \tag{2.3.9}$$
where
$$\tilde{f}(t,\omega,y,z) := -\lambda y + e^{\lambda t}f\big(t,\omega,e^{-\lambda t}y,e^{-\lambda t}z\big).$$
Indeed, assume $u$ is a viscosity $L$-subsolution of PPDE (2.3.1). Let $(t,\omega) \in [0,T) \times \Omega$ and $\tilde{\varphi} \in \underline{\mathcal{A}}^L(t,\omega;\tilde{u})$. For any $\varepsilon > 0$, denote
$$\varphi^\varepsilon_s := e^{-\lambda s}\tilde{\varphi}_s + \varepsilon(s - t).$$
Then, noting that $\tilde{\varphi}_t = e^{\lambda t}u(t,\omega)$,
$$\varphi^\varepsilon_s - u^{t,\omega}_s - e^{-\lambda t}\big(\tilde{\varphi}_s - \tilde{u}^{t,\omega}_s\big) = \big(e^{-\lambda s} - e^{-\lambda t}\big)\tilde{\varphi}_s + \big(e^{\lambda(s-t)} - 1\big)u^{t,\omega}_s + \varepsilon(s - t)$$
$$= \big(e^{-\lambda s} - e^{-\lambda t}\big)\big(\tilde{\varphi}_s - \tilde{\varphi}_t\big) + \big(e^{\lambda(s-t)} - 1\big)\big(u^{t,\omega}_s - u(t,\omega)\big) + \big(e^{-\lambda(s-t)} + e^{\lambda(s-t)} - 2\big)u(t,\omega) + \varepsilon(s - t)$$
$$\ge \varepsilon(s - t) - C(s - t)\Big[|\tilde{\varphi}_s - \tilde{\varphi}_t| + |u^{t,\omega}_s - u(t,\omega)| + (s - t)\Big].$$
Let $\tilde{\tau} \in \mathcal{T}^t_+$ be a stopping time corresponding to $\tilde{\varphi} \in \underline{\mathcal{A}}^L(t,\omega;\tilde{u})$, and set
$$\tau_\varepsilon := \tilde{\tau} \wedge \inf\Big\{s > t : |\tilde{\varphi}_s - \tilde{\varphi}_t| + |u^{t,\omega}_s - u(t,\omega)| + (s - t) \ge \frac{\varepsilon}{C}\Big\} \wedge T.$$
Then $\tau_\varepsilon \in \mathcal{T}^t_+$, by Example 2.2.5, and for any $\tau \in \mathcal{T}^t$ such that $\tau \le \tau_\varepsilon$, it follows from the previous inequality that
$$\varphi^\varepsilon_\tau - u^{t,\omega}_\tau \ge e^{-\lambda t}\big[\tilde{\varphi}_\tau - \tilde{u}^{t,\omega}_\tau\big].$$
By the monotonicity and the positive homogeneity of the operator $\underline{\mathcal{E}}^L_t$, together with the fact that $\tilde{\varphi} \in \underline{\mathcal{A}}^L(t,\omega;\tilde{u})$, this implies that
$$\underline{\mathcal{E}}^L_t\big[\varphi^\varepsilon_\tau - u^{t,\omega}_\tau\big] \ge \underline{\mathcal{E}}^L_t\big[e^{-\lambda t}(\tilde{\varphi}_\tau - \tilde{u}^{t,\omega}_\tau)\big] = e^{-\lambda t}\,\underline{\mathcal{E}}^L_t\big[\tilde{\varphi}_\tau - \tilde{u}^{t,\omega}_\tau\big] \ge 0 = \varphi^\varepsilon_t - u(t,\omega).$$
This implies that $\varphi^\varepsilon \in \underline{\mathcal{A}}^L(t,\omega;u)$, and then $(\mathcal{L}^{t,\omega}\varphi^\varepsilon)(t,0) \le 0$. Sending $\varepsilon \to 0$, similarly to Remark 2.3.6 we get $(\mathcal{L}^{t,\omega}\varphi^0)(t,0) \le 0$, where $\varphi^0_s := e^{-\lambda s}\tilde{\varphi}_s$. Now by a straightforward calculation we obtain
$$-\partial_t\tilde{\varphi}(t,0) - \frac{1}{2}\operatorname{tr}\big(\partial^2_{xx}\tilde{\varphi}(t,0)\big) - \tilde{f}\big(t,\omega,\tilde{\varphi}(t,0),\partial_x\tilde{\varphi}(t,0)\big) \le 0.$$
That is, $\tilde{u}$ is a viscosity subsolution of PPDE (2.3.9).
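For classical subsolutions, the form of $\tilde{f}$ in (2.3.9) follows from a direct chain-rule computation; the following display is my own spelling-out of that "straightforward calculation":

```latex
\text{With } \tilde u_t := e^{\lambda t}u_t, \text{ i.e. } u = e^{-\lambda t}\tilde u, \text{ we have}
\qquad
\partial_t u = e^{-\lambda t}\big(\partial_t\tilde u - \lambda\tilde u\big),\qquad
\partial_x u = e^{-\lambda t}\partial_x\tilde u,\qquad
\partial^2_{xx}u = e^{-\lambda t}\partial^2_{xx}\tilde u.
\\[4pt]
\text{Multiplying } \mathcal{L}u \le 0 \text{ by } e^{\lambda t} > 0:
\qquad
e^{\lambda t}(\mathcal{L}u)
= -\partial_t\tilde u - \tfrac{1}{2}\operatorname{tr}\big(\partial^2_{xx}\tilde u\big)
  + \lambda\tilde u
  - e^{\lambda t}f\big(t,\omega,e^{-\lambda t}\tilde u,\,e^{-\lambda t}\partial_x\tilde u\big)
= \tilde{\mathcal{L}}\tilde u \le 0,
```

which is exactly (2.3.9) with $\tilde{f}(t,\omega,y,z) = -\lambda y + e^{\lambda t}f(t,\omega,e^{-\lambda t}y,e^{-\lambda t}z)$. Note that $\partial_y\tilde{f} \le -\lambda + L_0$, so $\tilde{f}$ is strictly decreasing in $y$ once $\lambda > L_0$; this is the choice behind (2.5.10) below.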
(ii) Consider a more general change of variable: $\bar{u}(t,\omega) := \psi(t,u(t,\omega))$, where $\psi \in C^{1,2}([0,T] \times \mathbb{R})$ is such that $\partial_y\psi > 0$. Denote by $\phi := \psi^{-1}$ the inverse function of $\psi$ with respect to the space variable $y$. Then one can easily check that $u$ is a classical subsolution of PPDE (2.3.1) if and only if $\bar{u}$ is a classical subsolution of the following PPDE:
$$\bar{\mathcal{L}}\bar{u} := -\partial_t\bar{u} - \frac{1}{2}\operatorname{tr}(\partial^2_{xx}\bar{u}) - \bar{f}(t,\omega,\bar{u},\partial_x\bar{u}) \le 0,$$
where
$$\bar{f}(t,\omega,y,z) := \frac{1}{\partial_y\phi(t,y)}\Big[\partial_t\phi(t,y) + \frac{1}{2}\partial^2_{yy}\phi(t,y)|z|^2 + f\big(t,\omega,\phi(t,y),\partial_y\phi(t,y)z\big)\Big]. \tag{2.3.10}$$
However, if $u$ is only a viscosity subsolution of PPDE (2.3.1), we are not able to prove that $\bar{u}$ is a viscosity subsolution of (2.3.10). The main difficulty is that the nonlinear expectation $\underline{\mathcal{E}}^L_t$ and the nonlinear function $\psi$ do not commute. Consequently, given $\bar{\varphi} \in \underline{\mathcal{A}}^L(t,\omega;\bar{u})$, we are not able to construct, as in (i), a corresponding $\varphi \in \underline{\mathcal{A}}^L(t,\omega;u)$.

We conclude this section by connecting the nonlinear expectation operators to backward SDEs, and by providing some tools from optimal stopping theory which will be used later.

Remark 2.3.10. (Connecting $\underline{\mathcal{E}}^L$ and $\overline{\mathcal{E}}^L$ to backward SDEs.) For readers who are familiar with the BSDE literature: by the comparison principle of BSDEs (see, e.g., El Karoui, Peng and Quenez [36]) one can easily show that $\underline{\mathcal{E}}^L_t[\xi] = \underline{Y}_t$ and $\overline{\mathcal{E}}^L_t[\xi] = \overline{Y}_t$, where $(\underline{Y},\underline{Z})$ and $(\overline{Y},\overline{Z})$ are the solutions of the following BSDEs, respectively:
$$\underline{Y}_s = \xi - \int_s^T L|\underline{Z}_r|\,dr - \int_s^T \underline{Z}_r\,dB^t_r, \qquad \overline{Y}_s = \xi + \int_s^T L|\overline{Z}_r|\,dr - \int_s^T \overline{Z}_r\,dB^t_r, \qquad t \le s \le T, \ \mathbb{P}^t_0\text{-a.s.} \tag{2.3.11}$$
Moreover, this is a special case of the so-called g-expectation; see Peng [80].

Remark 2.3.11. (Optimal stopping under nonlinear expectation and reflected backward SDEs.) The definition of the set $\underline{\mathcal{A}}^L$ involves the optimal stopping problem under nonlinear expectation
$$Y_t := \inf_{\tilde{\tau} \in \mathcal{T}^t}\underline{\mathcal{E}}^L_t\big[X_{\tilde{\tau}\wedge\tau}\big]$$
for some stopping time $\tau \in \mathcal{T}^t_+$ and some adapted, bounded, pathwise continuous process $X$. A rigorous definition should be formulated using the regular conditional probability distribution as in [94] and [93], but we refrain from doing so for ease of presentation.
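In discrete time, this value process can be approximated by backward induction, where the one-step infimum over drifts $|\lambda| \le L$ is attained at the extreme tilts $\lambda = \pm L$. The sketch below is my own binomial-tree discretization (with $d = 1$ and $\tau = T$); the function name `snell_nonlinear` and all grid choices are illustrative assumptions, not from the text.

```python
import math

# Backward recursion for  Y_0 = inf over stopping times of Elow^L[ X_tau ]
# on a binomial tree: B moves by +-sqrt(dt), and a Girsanov drift lambda
# tilts the one-step up-probability to p = (1 + lambda * sqrt(dt)) / 2.
def snell_nonlinear(X, L=0.0, T=1.0, N=500):
    """X maps (t, b) to the reward; returns the time-0 value Y_0."""
    dt = T / N
    s = math.sqrt(dt)
    p_up = 0.5 * (1 + L * s)   # extreme tilts: the one-step infimum over
    p_dn = 0.5 * (1 - L * s)   # lambda in [-L, L] is attained at +-L
    # terminal values at time T over the nodes b = (-N, -N+2, ..., N) * s
    Y = [X(T, (2 * j - N) * s) for j in range(N + 1)]
    for k in range(N - 1, -1, -1):
        t = k * dt
        Ynew = []
        for j in range(k + 1):
            b = (2 * j - k) * s
            e1 = p_up * Y[j + 1] + (1 - p_up) * Y[j]   # lambda = +L
            e2 = p_dn * Y[j + 1] + (1 - p_dn) * Y[j]   # lambda = -L
            Ynew.append(min(X(t, b), min(e1, e2)))      # stop or continue
        Y = Ynew
    return Y[0]

# With L = 0 and X_t = -|B_t|, the process -|B| is a supermartingale, so
# waiting until T is optimal: Y_0 ~ -E|B_1| = -sqrt(2/pi) ~ -0.798.
y0 = snell_nonlinear(lambda t, b: -abs(b), L=0.0)
print(y0)
```

Increasing $L$ enlarges the family of measures over which the infimum is taken, so the computed value decreases, mirroring the monotonicity $\underline{\mathcal{E}}^{L_2} \le \underline{\mathcal{E}}^{L_1}$ of Remark 2.3.5 (i).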
For later use, we record some key results, which can be proved by following the corresponding arguments in standard optimal stopping theory; we also observe that the process $Y$ is pathwise continuous, see (iv) below. Following the classical arguments in optimal stopping theory, we have:

(i) $\underline{\mathcal{E}}^L_t\big[Y_{\tilde{\tau}\wedge\tau}\big] \ge Y_t$ for all $\tilde{\tau} \in \mathcal{T}^t$, i.e., $Y$ is an $\underline{\mathcal{E}}^L$-submartingale.

(ii) If $\tau^* \in \mathcal{T}^t$ is an optimal stopping rule, then
$$Y_t = \underline{\mathcal{E}}^L_t\big[X_{\tau^*\wedge\tau}\big] = \inf_{\tilde{\tau}\in\mathcal{T}^t}\underline{\mathcal{E}}^L_t\big[X_{\tilde{\tau}\wedge\tau}\big] = \inf_{\tilde{\tau}\in\mathcal{T}^t}\underline{\mathcal{E}}^L_t\big[Y_{\tilde{\tau}\wedge\tau}\big] = \underline{\mathcal{E}}^L_t\big[Y_{\tau^*\wedge\tau}\big],$$
where the last equality is a consequence of (i), and the third follows from $X \ge Y$ on the one hand, and $\inf\underline{\mathcal{E}}^L_t[\,\cdot\,] \ge \underline{\mathcal{E}}^L_t[\inf\cdot\,]$ on the other hand. This implies that $Y_{\tau^*} = X_{\tau^*}$ and, by (i), that $Y_{\cdot\wedge\tau^*}$ is an $\underline{\mathcal{E}}^L$-martingale.

(iii) We then define $\tau^1_t := \inf\{s > t : Y_s = X_s\}$. Since $Y_T = X_T$, we have $\tau^1_t \le T$, a.s. Moreover, following the classical arguments in optimal stopping theory, we see that $\big(Y_{s\wedge\tau^1_t}\big)_{s\ge t}$ is an $\underline{\mathcal{E}}^L$-martingale. With this in hand, we conclude that $\tau^1_t$ is an optimal stopping time, i.e., $Y_t = \underline{\mathcal{E}}^L_t\big[X_{\tau^1_t}\big]$.

(iv) For those readers who are familiar with backward stochastic differential equations, we mention that $Y = Y^*$, where $(Y^*, Z^*, K^*)$ is the solution of the following reflected BSDE:
$$Y^*_s = X_\tau - \int_s^\tau L|Z^*_r|\,dr - \int_s^\tau Z^*_r\,dB^t_r - \int_s^\tau dK^*_r, \tag{2.3.12}$$
$$Y^*_s \le X_s, \quad \text{and} \quad (Y^*_s - X_s)\,dK^*_s = 0, \qquad s \in [t,\tau], \ \mathbb{P}^t_0\text{-a.s.} \tag{2.3.13}$$
See, e.g., [35]. In particular, it is a well-known result that the process $Y^*$ is pathwise continuous.

(v) Similar results hold for $\sup_{\tilde{\tau}\in\mathcal{T}^t}\overline{\mathcal{E}}^L_t\big[X_{\tilde{\tau}\wedge\tau}\big]$.

2.4 Main results

We start with a stability result.

Theorem 2.4.1. Let $(f^\varepsilon, \varepsilon > 0)$ be a family of coefficients converging uniformly towards a coefficient $f \in C^0(\Lambda)$ as $\varepsilon \to 0$. For some $L > 0$, let $u^\varepsilon$ be a viscosity $L$-subsolution (resp. $L$-supersolution) of PPDE (2.3.1) with coefficient $f^\varepsilon$, for all $\varepsilon > 0$. Assume further that $u^\varepsilon$ converges to some $u$, uniformly in $\Lambda$. Then $u$ is a viscosity $L$-subsolution (resp. $L$-supersolution) of PPDE (2.3.1) with coefficient $f$.

The proof of this result is reported in Subsection 2.5.3.
For our next results, we shall always use the following standing assumptions, where $g$ is a terminal condition associated to the PPDE (2.3.1).

Assumption 2.4.2. (i) $f$ is bounded, $\mathbb{F}$-progressively measurable, continuous in $t$, uniformly continuous in $\omega$, and uniformly Lipschitz continuous in $(y,z)$ with a Lipschitz constant $L_0 > 0$.
(ii) $g$ is bounded and uniformly continuous in $\omega$.

To establish an existence result for viscosity solutions under the above assumptions, we note that the PPDE (2.3.1) with terminal condition $u(T,\omega) = g(\omega)$ is closely related to (and actually motivated by) the following BSDE:
$$Y^0_t = g(B_\cdot) + \int_t^T f(s,B_\cdot,Y^0_s,Z^0_s)\,ds - \int_t^T Z^0_s\,dB_s, \qquad 0 \le t \le T, \ \mathbb{P}_0\text{-a.s.} \tag{2.4.1}$$
We refer to the seminal paper by Pardoux and Peng [78] for the wellposedness of such BSDEs. On the other hand, for any $(t,\omega) \in \Lambda$, by [78] the following BSDE on $[t,T]$ has a unique solution:
$$Y^{0,t,\omega}_s = g^{t,\omega}(B^t_\cdot) + \int_s^T f^{t,\omega}(r,B^t_\cdot,Y^{0,t,\omega}_r,Z^{0,t,\omega}_r)\,dr - \int_s^T Z^{0,t,\omega}_r\,dB^t_r, \quad \mathbb{P}^t_0\text{-a.s.} \tag{2.4.2}$$
By the Blumenthal 0-1 law, $Y^{0,t,\omega}_t$ is a constant, and we thus define
$$u^0(t,\omega) := Y^{0,t,\omega}_t. \tag{2.4.3}$$

Theorem 2.4.3. Under Assumption 2.4.2, $u^0$ is a viscosity solution of PPDE (2.3.1) with terminal condition $g$.

The proof is reported in Subsection 2.5.2. Similarly to the classical theory of viscosity solutions in the Markovian case, we now establish a comparison result which, in particular, implies the uniqueness of viscosity solutions. For this purpose, we need an additional condition.

Assumption 2.4.4. There exists $\hat{f} : \hat{\Lambda} \times \mathbb{R} \times \mathbb{R}^d \to \mathbb{R}$ satisfying:
(i) $\hat{f}(t,\omega,y,z) = f(t,\omega,y,z)$ for all $(t,\omega,y,z) \in \Lambda \times \mathbb{R} \times \mathbb{R}^d$;
(ii) $\hat{f}$ is bounded, $\hat{f}(\cdot,y,z) \in C^0(\hat{\Lambda})$ for any fixed $(y,z)$, and $\hat{f}$ is uniformly Lipschitz continuous in $(y,z)$.

Remark 2.4.5. In the Markovian case of Remark 2.3.1, we have the natural extension $\hat{f} = f(t,\hat{\omega}_t,y,z)$ for all $\hat{\omega} \in \hat{\Omega}$. In this case Assumption 2.4.2 implies Assumption 2.4.4.

Theorem 2.4.6. Let Assumptions 2.4.2 and 2.4.4 hold.
Let $u^1$ be a viscosity subsolution and $u^2$ a viscosity supersolution of PPDE (2.3.1). If $u^1(T,\cdot) \le g \le u^2(T,\cdot)$, then $u^1 \le u^2$ on $\Lambda$. Consequently, given the terminal condition $g$, $u^0$ is the unique viscosity solution of PPDE (2.3.1).

The proof is reported in Section 2.6, building on a partial comparison result derived in Subsection 2.5.4.

Remark 2.4.7. For technical reasons, we require a uniformly continuous function $g$ between $u^1_T$ and $u^2_T$; see Section 2.6. However, when one of $u^1$ and $u^2$ is in $C^{1,2}_b(\Lambda)$, then we need neither the presence of such a $g$ nor the existence of $\hat{f}$; see Lemma 2.5.7 below.

2.5 Proofs of some of the main results

In this section we prove some of the main results and provide some further results. We leave the most technical part of the proof of the comparison principle to the next section.

2.5.1 Properties of classical solutions

We first recall from Peng [81] that an $\mathbb{F}$-progressively measurable process $Y$ is called an $f$-martingale (resp. $f$-submartingale, $f$-supermartingale) if, for any $\mathbb{F}$-stopping times $\tau_1 \le \tau_2$, we have $Y_{\tau_1} = (\text{resp. } \le, \ \ge)\ \mathcal{Y}_{\tau_1}(\tau_2,Y_{\tau_2})$, $\mathbb{P}_0$-a.s., where $(\mathcal{Y},\mathcal{Z}) := \big(\mathcal{Y}(\tau_2,Y_{\tau_2}),\mathcal{Z}(\tau_2,Y_{\tau_2})\big)$ is the solution of the following BSDE on $[0,\tau_2]$:
$$\mathcal{Y}_t = Y_{\tau_2} + \int_t^{\tau_2} f(s,B_\cdot,\mathcal{Y}_s,\mathcal{Z}_s)\,ds - \int_t^{\tau_2}\mathcal{Z}_s\,dB_s, \qquad 0 \le t \le \tau_2, \ \mathbb{P}_0\text{-a.s.}$$
Clearly, $Y$ is an $f$-martingale with terminal condition $g(B_\cdot)$ if and only if it satisfies the BSDE (2.4.1). Applying the Itô formula of Proposition 2.2.9, we obviously have:

Proposition 2.5.1. Let Assumption 2.4.2 hold and $u \in C^{1,2}_b(\Lambda)$. Then $u$ is a classical solution (resp. subsolution, supersolution) of PPDE (2.3.1) if and only if the process $u$ is an $f$-martingale (resp. $f$-submartingale, $f$-supermartingale). In particular, if $u$ is a classical solution of PPDE (2.3.1) with terminal condition $g$, then
$$Y := u, \qquad Z := \partial_x u \tag{2.5.1}$$
provides the unique solution of BSDE (2.4.1).

Proof. We shall only prove the subsolution case. Let $(Y,Z)$ be defined by (2.5.1).

(i) Assume $u$ is a classical subsolution.
By Itô's formula,
$$du_t = \Big[\partial_t u_t + \frac{1}{2}\operatorname{tr}\big(\partial^2_{xx}u_t\big)\Big]dt + \partial_x u_t\,dB_t = -\Big[f(t,B_\cdot,u_t,\partial_x u_t) + (\mathcal{L}u)_t\Big]dt + \partial_x u_t\,dB_t, \quad \mathbb{P}_0\text{-a.s.} \tag{2.5.2}$$
Then for any $\tau_1 \le \tau_2$, $(Y,Z)$ satisfies the BSDE
$$Y_t = u_{\tau_2} + \int_t^{\tau_2}\Big[f(s,B_\cdot,Y_s,Z_s) + (\mathcal{L}u)(s,B_\cdot)\Big]ds - \int_t^{\tau_2} Z_s\,dB_s, \qquad 0 \le t \le \tau_2, \ \mathbb{P}_0\text{-a.s.}$$
Since $\mathcal{L}u \le 0$, by the comparison principle of BSDEs (see [36]) we obtain $u_{\tau_1} = Y_{\tau_1} \le \mathcal{Y}_{\tau_1}(\tau_2,u_{\tau_2})$. That is, $u$ is an $f$-submartingale.

(ii) Assume $u$ is an $f$-submartingale. For any $0 \le t < t+h \le T$, denote $\delta Y_s := \mathcal{Y}_s(t+h,u_{t+h}) - Y_s$ and $\delta Z_s := \mathcal{Z}_s(t+h,u_{t+h}) - Z_s$. By (2.5.2) we have
$$\delta Y_s = \int_s^{t+h}\big[\alpha_r\,\delta Y_r + \langle\beta,\delta Z\rangle_r - (\mathcal{L}u)_r\big]dr - \int_s^{t+h}\delta Z_r\,dB_r, \qquad t \le s \le t+h, \ \mathbb{P}_0\text{-a.s.},$$
where $|\alpha|, |\beta| \le L_0$. Defining
$$\Gamma_s := \exp\Big(\int_t^s \beta_r\,dB_r + \int_t^s\Big(\alpha_r - \frac{1}{2}|\beta_r|^2\Big)dr\Big), \tag{2.5.3}$$
we have
$$\delta Y_t = -\mathbb{E}^{\mathbb{P}_0}_t\Big[\int_t^{t+h}\Gamma_s\,(\mathcal{L}u)_s\,ds\Big].$$
Since $Y = u$ is an $f$-submartingale, we get
$$0 \le \frac{1}{h}\,\delta Y_t = -\mathbb{E}^{\mathbb{P}_0}_t\Big[\frac{1}{h}\int_t^{t+h}\Gamma_s\,(\mathcal{L}u)_s\,ds\Big].$$
Sending $h \to 0$, we obtain $(\mathcal{L}u)(t,B_\cdot) \le 0$, $\mathbb{P}_0$-a.s. Since $\mathcal{L}u$ is continuous in $\omega$ and the support of $\mathbb{P}_0$ is obviously dense, we have $(\mathcal{L}u)(t,\omega) \le 0$ for all $\omega \in \Omega$. That is, $u$ is a classical subsolution of PPDE (2.3.1).

(iii) When $u$ is a classical solution, similarly to (i) we know that $Y$ is an $f$-martingale, and thus (2.5.1) provides a solution of the BSDE. Finally, the uniqueness follows from the uniqueness of BSDEs. $\square$

Remark 2.5.2. This proposition extends the well-known nonlinear Feynman-Kac formula of Pardoux and Peng [77] to the non-Markovian case.

We next prove a simple comparison principle for classical solutions.

Lemma 2.5.3. Let Assumption 2.4.2 hold true. Let $u^1$ be a classical subsolution and $u^2$ a classical supersolution of PPDE (2.3.1). If $u^1(T,\cdot) \le u^2(T,\cdot)$, then $u^1 \le u^2$ on $\Lambda$.

Proof. Denote $Y^i := u^i$, $Z^i := \partial_x u^i$, $i = 1,2$. By (2.5.2) we have
$$dY^i_t = -\Big[f(t,B_\cdot,Y^i_t,Z^i_t) + (\mathcal{L}u^i)_t\Big]dt + Z^i_t\,dB_t, \qquad 0 \le t \le T, \ \mathbb{P}_0\text{-a.s.}$$
Since $Y^1_T \le Y^2_T$ and $\mathcal{L}u^1 \le 0 \le \mathcal{L}u^2$, by the comparison principle for BSDEs we obtain $Y^1 \le Y^2$; that is, $u^1 \le u^2$, $\mathbb{P}_0$-a.s. Since $u^1$ and $u^2$ are continuous, and the support of $\mathbb{P}_0$ is dense in $\Omega$, we obtain $u^1 \le u^2$ on $\Lambda$. $\square$
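Both proofs above rest on the same linearization device, which recurs throughout this chapter; spelled out (my own summary, in the notation of (2.5.2)-(2.5.3)), it reads as follows. By the Lipschitz property of $f$ one may write, for $\Delta Y := Y^1 - Y^2$ and $\Delta Z := Z^1 - Z^2$,

```latex
f(s,B_\cdot,Y^1_s,Z^1_s) - f(s,B_\cdot,Y^2_s,Z^2_s)
 = \alpha_s\,\Delta Y_s + \langle\beta_s,\Delta Z_s\rangle,
\qquad |\alpha| \le L_0,\ |\beta| \le L_0,
\\[4pt]
\text{and, with } \Gamma \text{ as in (2.5.3) (so that } d\Gamma_s = \Gamma_s(\alpha_s\,ds + \beta_s\,dB_s)\text{), It\^o's formula gives}
\\[4pt]
d\big(\Gamma_s\,\Delta Y_s\big)
 = -\,\Gamma_s\big[(\mathcal{L}u^1)_s - (\mathcal{L}u^2)_s\big]ds
   + \Gamma_s\big(\Delta Z_s + \Delta Y_s\,\beta_s\big)\,dB_s,
\\[4pt]
\text{hence}\quad
\Delta Y_0 = \mathbb{E}^{\mathbb{P}_0}\Big[\Gamma_T\,\Delta Y_T
   + \int_0^T \Gamma_s\big[(\mathcal{L}u^1)_s - (\mathcal{L}u^2)_s\big]ds\Big] \le 0,
```

since $\Gamma > 0$, $\Delta Y_T \le 0$, and $\mathcal{L}u^1 \le 0 \le \mathcal{L}u^2$. This is exactly the mechanism behind Lemma 2.5.3, and the same representation is used again in Subsections 2.5.2 and 2.5.3.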
2.5.2 Existence of viscosity solutions

We first establish the regularity of $u^0$ as defined in (2.4.3).

Proposition 2.5.4. Under Assumption 2.4.2, $u^0$ is uniformly continuous in $\Lambda$ under $d_\infty$.

Proof. Since $f$ and $g$ are bounded, clearly $u^0$ is bounded. To show the uniform continuity, let $(t_i,\omega^i) \in \Lambda$, $i = 1,2$, and assume without loss of generality that $0 \le t_1 \le t_2 \le T$. By taking conditional expectations $\mathbb{E}^{\mathbb{P}^{t_1}_0}_{t_2}$, one can easily see that $Y^{0,t_1,\omega^1}$ can be viewed as the solution of the following BSDE on $[t_2,T]$: for $\mathbb{P}^{t_1}_0$-a.s. $B^{t_1}$,
$$Y^{0,t_1,\omega^1}_s = g^{t_2,\,\omega^1\otimes_{t_1}B^{t_1}}(B^{t_2}) + \int_s^T f^{t_2,\,\omega^1\otimes_{t_1}B^{t_1}}\big(r,B^{t_2},Y^{0,t_1,\omega^1}_r,Z^{0,t_1,\omega^1}_r\big)dr - \int_s^T Z^{0,t_1,\omega^1}_r\,dB^{t_2}_r, \qquad t_2 \le s \le T, \ \mathbb{P}^{t_2}_0\text{-a.s.}$$
Denote
$$\delta\omega := \omega^1 - \omega^2, \qquad \delta Y := Y^{0,t_1,\omega^1} - Y^{0,t_2,\omega^2}, \qquad \delta Z := Z^{0,t_1,\omega^1} - Z^{0,t_2,\omega^2}.$$
Then
$$\delta Y_s = \delta Y_T + \int_s^T\big[\delta_r + \alpha_r\,\delta Y_r + \langle\beta,\delta Z\rangle_r\big]dr - \int_s^T\delta Z_r\,dB^{t_2}_r, \qquad t_2 \le s \le T, \ \mathbb{P}^{t_2}_0\text{-a.s.},$$
where $|\alpha| \le L_0$, $\beta \in \mathcal{U}^{L_0}_{t_2}$, and $\delta_r := \big(f^{t_2,\,\omega^1\otimes_{t_1}B^{t_1}} - f^{t_2,\omega^2}\big)\big(r,B^{t_2},Y^{0,t_1,\omega^1}_r,Z^{0,t_1,\omega^1}_r\big)$. Define $\Gamma$ as in (2.5.3), with initial time $t_2$; then
$$\delta Y_{t_2} = \Gamma_T\,\delta Y_T + \int_{t_2}^T\Gamma_r\,\delta_r\,dr - \int_{t_2}^T\bar{Z}_r\,dB^{t_2}_r, \quad \mathbb{P}^{t_2}_0\text{-a.s.},$$
for a suitable process $\bar{Z}$. Let $\rho$ denote the modulus of continuity of $f$ and $g$ with respect to $\omega$. Note that
$$|\delta Y_T| = \big|g\big(\omega^1\otimes_{t_1}B^{t_1}\otimes_{t_2}B^{t_2}\big) - g\big(\omega^2\otimes_{t_2}B^{t_2}\big)\big| \le \rho\big(\|\omega^1\otimes_{t_1}B^{t_1} - \omega^2\|_{t_2}\big) \le \rho\Big(d_\infty\big((t_1,\omega^1),(t_2,\omega^2)\big) + \|B^{t_1}\|_{t_1}^{t_2}\Big).$$
Similarly,
$$|\delta_s| \le \rho\Big(d_\infty\big((t_1,\omega^1),(t_2,\omega^2)\big) + \|B^{t_1}\|_{t_1}^{t_2}\Big).$$
Then
$$|\delta Y_{t_2}| = \Big|\mathbb{E}^{\mathbb{P}^{t_2}_0}_{t_2}\Big[\Gamma_T\,\delta Y_T + \int_{t_2}^T\Gamma_s\,\delta_s\,ds\Big]\Big| \le C\,\rho\Big(d_\infty\big((t_1,\omega^1),(t_2,\omega^2)\big) + \|B^{t_1}\|_{t_1}^{t_2}\Big).$$
Thus, noting that $f$ is bounded,
$$|u^0_{t_1}(\omega^1) - u^0_{t_2}(\omega^2)| = |Y^{0,t_1,\omega^1}_{t_1} - Y^{0,t_2,\omega^2}_{t_2}| = \Big|\mathbb{E}^{\mathbb{P}^{t_1}_0}\Big[Y^{0,t_1,\omega^1}_{t_2} + \int_{t_1}^{t_2} f^{t_1,\omega^1}\big(s,B^{t_1},Y^{0,t_1,\omega^1}_s,Z^{0,t_1,\omega^1}_s\big)ds - Y^{0,t_2,\omega^2}_{t_2}\Big]\Big|$$
$$\le C[t_2 - t_1] + \mathbb{E}^{\mathbb{P}^{t_1}_0}\big[|\delta Y_{t_2}|\big] \le C[t_2 - t_1] + C\,\mathbb{E}^{\mathbb{P}^{t_1}_0}\Big[\rho\Big(d_\infty\big((t_1,\omega^1),(t_2,\omega^2)\big) + \|B^{t_1}\|_{t_1}^{t_2}\Big)\Big]. \tag{2.5.4}$$
For any $\varepsilon > 0$, there exists $h > 0$ such that $\rho(h) \le \frac{\varepsilon}{2C}$ for the above $C$. Since $f, g$ are bounded, we may assume $\rho$ is also bounded, and we denote by $\|\rho\|_\infty$ its bound. Now for
$d_\infty\big((t_1,\omega^1),(t_2,\omega^2)\big) \le \frac{h}{2}$, we obtain
$$|u^0_{t_1}(\omega^1) - u^0_{t_2}(\omega^2)| \le C\,d_\infty\big((t_1,\omega^1),(t_2,\omega^2)\big) + C\rho(h) + C\|\rho\|_\infty\,\mathbb{P}^{t_1}_0\Big[\|B^{t_1}\|_{t_1}^{t_2} > \frac{h}{2}\Big]$$
$$\le \frac{\varepsilon}{2} + C\,d_\infty\big((t_1,\omega^1),(t_2,\omega^2)\big) + \frac{4C\|\rho\|_\infty}{h^2}\,\mathbb{E}^{\mathbb{P}^{t_1}_0}\Big[\big(\|B^{t_1}\|_{t_1}^{t_2}\big)^2\Big] \le \frac{\varepsilon}{2} + C\,d_\infty\big((t_1,\omega^1),(t_2,\omega^2)\big) + \frac{4C\|\rho\|_\infty}{h^2}\,C(t_2 - t_1)$$
$$\le \frac{\varepsilon}{2} + C\Big(1 + \frac{4C\|\rho\|_\infty}{h^2}\Big)\,d_\infty\big((t_1,\omega^1),(t_2,\omega^2)\big).$$
By choosing $d_\infty\big((t_1,\omega^1),(t_2,\omega^2)\big)$ small enough, we see that $|u^0_{t_1}(\omega^1) - u^0_{t_2}(\omega^2)| \le \varepsilon$. This completes the proof. $\square$

However, in general one cannot expect $u^0$ to be a classical solution of PPDE (2.3.1). We refer to Peng and Wang [86] for some sufficient conditions, in a slightly different setting.

Proof of Theorem 2.4.3. We just show that $u^0$ is a viscosity subsolution, and we prove this by contradiction. Assume $u^0$ is not a viscosity subsolution. Then, for all $L > 0$, $u^0$ is not an $L$-viscosity subsolution. For the purpose of this proof, it is sufficient to consider an arbitrary $L \ge L_0$, the Lipschitz constant of $f$ introduced in Assumption 2.4.2 (i). Then there exist $(t,\omega) \in [0,T) \times \Omega$ and $\varphi \in \underline{\mathcal{A}}^L(t,\omega;u^0)$ such that $c := (\mathcal{L}^{t,\omega}\varphi)(t,0) > 0$. Denote, for $s \in [t,T]$,
$$\tilde{Y}_s := \varphi(s,B^t), \quad \tilde{Z}_s := \partial_x\varphi(s,B^t), \quad \delta Y_s := \tilde{Y}_s - Y^{0,t,\omega}_s, \quad \delta Z_s := \tilde{Z}_s - Z^{0,t,\omega}_s.$$
Applying Itô's formula, we have
$$d(\delta Y_s) = -\Big[(\mathcal{L}^{t,\omega}\varphi)(s,B^t) + f^{t,\omega}(s,B^t,\tilde{Y}_s,\tilde{Z}_s) - f^{t,\omega}(s,B^t,Y^{0,t,\omega}_s,Z^{0,t,\omega}_s)\Big]ds + \delta Z_s\,dB^t_s$$
$$= -\Big[(\mathcal{L}^{t,\omega}\varphi)(s,B^t) + \alpha_s\,\delta Y_s + \langle\beta,\delta Z\rangle_s\Big]ds + \delta Z_s\,dB^t_s, \quad \mathbb{P}^t_0\text{-a.s.},$$
where $|\alpha| \le L_0$ and $\beta \in \mathcal{U}^{L_0}_t \subset \mathcal{U}^L_t$. Observing that $\delta Y_t = 0$, we define
$$\tau_0 := T \wedge \inf\Big\{s > t : (\mathcal{L}^{t,\omega}\varphi)(s,B^t) - L_0|\delta Y_s| \le \frac{c}{2}\Big\}.$$
Then, by Proposition 2.5.4 and Example 2.2.5, $\tau_0 \in \mathcal{T}^t_+$ and
$$(\mathcal{L}^{t,\omega}\varphi)(s,B^t) + \alpha_s\,\delta Y_s \ge \frac{c}{2} \qquad \text{for all } s \in [t,\tau_0]. \tag{2.5.5}$$
Now for any $\tau \in \mathcal{T}^t$ such that $\tau \le \tau_0$, we have
$$0 = \delta Y_t = \delta Y_\tau + \int_t^\tau\Big[(\mathcal{L}^{t,\omega}\varphi)(s,B^t) + \alpha_s\,\delta Y_s + \langle\beta,\delta Z\rangle_s\Big]ds - \int_t^\tau\delta Z_s\,dB^t_s$$
$$\ge \big(\varphi - u^{0,t,\omega}\big)(\tau,B^t) + \frac{c}{2}(\tau - t) - \int_t^\tau\delta Z_s\,\big(dB^t_s - \beta_s\,ds\big).$$
Taking $\tau := \tau_0$ and expectations under $\mathbb{P}^{t,\beta}$, under which $dB^t_s - \beta_s\,ds$ defines a Brownian motion, we get
$$\underline{\mathcal{E}}^L_t\big[(\varphi - u^{0,t,\omega})(\tau_0,B^t)\big] \le \mathbb{E}^{\mathbb{P}^{t,\beta}}\big[(\varphi - u^{0,t,\omega})(\tau_0,B^t)\big] \le -\frac{c}{2}\,\mathbb{E}^{\mathbb{P}^{t,\beta}}\big[\tau_0 - t\big] < 0,$$
since $\tau_0 > t$. This contradicts $\varphi \in \underline{\mathcal{A}}^L(t,\omega;u^0)$. $\square$

Following similar arguments, one can easily prove:

Proposition 2.5.5.
Under Assumption 2.4.2, a bounded classical subsolution (resp. supersolution) of the PPDE (2.3.1) must be a viscosity subsolution (resp. supersolution).

2.5.3 Stability of viscosity solutions

Proof of Theorem 2.4.1. We shall prove only the viscosity subsolution property, by contradiction. By Remark 2.3.6, without loss of generality we assume there exists $\varphi \in \underline{\mathcal{A}}'_L(0,0;u)$ such that $c := (\mathcal{L}\varphi)(0,0) > 0$, where $\underline{\mathcal{A}}'_L(0,0;u)$ is defined in (2.3.7). Denote
$$X^0 := \varphi - u, \qquad X^\varepsilon := \varphi - u^\varepsilon, \qquad \tau_0 := \inf\Big\{t > 0 : (\mathcal{L}\varphi)(t,B) \le \frac{c}{2}\Big\} \wedge T. \tag{2.5.6}$$
Since $f \in C^0(\Lambda)$, it follows from Example 2.2.5 that $\tau_0 \in \mathcal{T}^0_+$. By (2.3.7), there exists $\tau_1 \in \mathcal{T}^0_+$ such that $\tau_1 \le \tau_0$ and $\underline{\mathcal{E}}^L_0\big[X^0_{\tau_1}\big] > 0 = X^0_0$. Since $u^\varepsilon$ converges towards $u$ uniformly, we have
$$\underline{\mathcal{E}}^L_0\big[X^\varepsilon_{\tau_1}\big] > X^\varepsilon_0 \qquad \text{for sufficiently small } \varepsilon > 0. \tag{2.5.7}$$
Consider the optimal stopping problem under nonlinear expectation, together with the corresponding optimal stopping rule:
$$Y^\varepsilon_t := \inf_{\tau \in \mathcal{T}^t}\underline{\mathcal{E}}^L_t\big[X^\varepsilon_{\tau\wedge\tau_1}\big] \qquad \text{and} \qquad \tau^\varepsilon_0 := \inf\big\{t \ge 0 : Y^\varepsilon_t = X^\varepsilon_t\big\}; \tag{2.5.8}$$
see Remark 2.3.11. We claim that
$$\mathbb{P}_0\big[\tau^\varepsilon_0 < \tau_1\big] > 0, \tag{2.5.9}$$
because otherwise $X^\varepsilon_0 \ge Y^\varepsilon_0 = \underline{\mathcal{E}}^L_0\big[X^\varepsilon_{\tau_1}\big]$, contradicting (2.5.7).

Since $X^\varepsilon$ and $Y^\varepsilon$ are continuous, $\mathbb{P}_0$-a.s., there exists $E \subset \{\tau^\varepsilon_0 < \tau_1\}$ such that $\mathbb{P}_0(E) = \mathbb{P}_0(\tau^\varepsilon_0 < \tau_1) > 0$, and for any $\omega \in E$, denoting $t := \tau^\varepsilon_0(\omega)$, we have $X^\varepsilon_t(\omega) = Y^\varepsilon_t(\omega)$. Notice that $\tau_1^{t,\omega} \in \mathcal{T}^t_+$. By standard arguments using the regular conditional probability distributions (see, e.g., [94] or [93]), it follows from the definition of $\tau^\varepsilon_0$, together with the $\underline{\mathcal{E}}^L$-submartingale property of $Y^\varepsilon$, that
$$X^\varepsilon_t(\omega) = Y^\varepsilon_t(\omega) = Y^{\varepsilon,t,\omega}_t(\omega) \le \underline{\mathcal{E}}^L_t\big[Y^{\varepsilon,t,\omega}_\tau\big] \le \underline{\mathcal{E}}^L_t\big[X^{\varepsilon,t,\omega}_\tau\big] \qquad \text{for all } \tau \in \mathcal{T}^t, \ \tau \le \tau_1^{t,\omega}.$$
This implies that
$$0 \le \underline{\mathcal{E}}^L_t\big[X^{\varepsilon,t,\omega}_\tau\big] - X^\varepsilon_t(\omega) = \underline{\mathcal{E}}^L_t\big[\varphi^{t,\omega}_\tau - \varphi(t,\omega) + u^\varepsilon(t,\omega) - u^{\varepsilon,t,\omega}_\tau\big] \qquad \text{for all } \tau \in \mathcal{T}^t, \ \tau \le \tau_1^{t,\omega}.$$
Define
$$\varphi^\varepsilon_s := \varphi^{t,\omega}_s - \varphi(t,\omega) + u^\varepsilon(t,\omega).$$
Then we have $\varphi^\varepsilon \in \underline{\mathcal{A}}^L(t,\omega;u^\varepsilon)$. Since $u^\varepsilon$ is a viscosity $L$-subsolution of PPDE (2.3.1) with coefficient $f^\varepsilon$, we have
$$0 \ge -\partial_t\varphi^\varepsilon(t,0) - \frac{1}{2}\operatorname{tr}\big(\partial^2_{xx}\varphi^\varepsilon(t,0)\big) - f^\varepsilon\big(t,\omega,\varphi^\varepsilon(t,0),\partial_x\varphi^\varepsilon(t,0)\big)$$
$$= -\partial_t\varphi(t,\omega) - \frac{1}{2}\operatorname{tr}\big(\partial^2_{xx}\varphi\big)(t,\omega) - f^\varepsilon\big(t,\omega,u^\varepsilon(t,\omega),\partial_x\varphi(t,\omega)\big)$$
$$= (\mathcal{L}\varphi)(t,\omega) + f\big(t,\omega,u(t,\omega),\partial_x\varphi(t,\omega)\big)$$
f " t;!;u " (t;!);@ x '(t;!) c 2 +f t;!;u(t;!);@ x '(t;!) f " t;!;u " (t;!);@ x '(t;!) ; 46 thanks to (2.5.6). Send "! 0, we obtain 0 c 2 , contradiction. Remark 2.5.6. (i) We need the same L in the proof of Theorem 2.4.1. If u " is only a viscosity subsolution of PPDE (2.3.1) with coecient f " , but with possibly dierent L " , we are not able to show that u is a viscosity subsolution of PPDE (2.3.1) with coecients (2.3.1). (ii) However, if u " is a viscosity solution of PPDE (2.3.1) with coecient f " , by Theorems 2.4.3 and 2.4.6, it follows immediately from the stability of BSDEs that u is the unique viscosity solution of PPDE (2.3.1) with coecient f. 2.5.4 Partial comparison principle The following partial comparison principle, which improves Lemma 2.5.3, is crucial. The main argument is very much similar to that of Theorem 2.4.1. Lemma 2.5.7. Let Assumption 2.4.2 hold true. Let u 1 be a viscosity subsolution and u 2 a viscosity supersolution of PPDE (2.3.1). If u 1 (T;)u 2 (T;) and one of u 1 and u 2 is in C 1;2 b (), then u 1 u 2 on . Proof. First, by Remark 2.3.9 (i), by otherwise changing the variable we may assume without loss of generality that f is strictly decreasing in y. (2.5.10) 47 We assume u 2 2 C 1;2 b () and u 1 is a viscosity L-subsolution for some L 0. We shall prove by contradiction. Without loss of generality, we assume c :=u 2 0 u 1 0 < 0: For future purpose, we shall obtain the contradiction under the following slightly weaker assumptions: u 2 2 C 1;2 P 0 () bounded; and (Lu 2 ) 0; u 2 (T;)u 1 (T;) P 0 -a.s. (2.5.11) Denote X :=u 2 u 1 and 0 := inf n t> 0 :X t 0 o ^T: Note that X 0 =c< 0, X T 0, and X is continuous,P 0 -a.s. Then 0 > 0; X t < 0; t2 [0; 0 ); and X 0 = 0; P 0 -a.s. 
(2.5.12)

Similarly to Remark 2.3.11, define the process $Y$ by the optimal stopping problem under nonlinear expectation
$$Y_t := \inf_{\tau \in \mathcal{T}^t}\underline{\mathcal{E}}^L_t\big[X_{\tau\wedge\tau_0}\big], \qquad t \in [0,\tau_0],$$
together with the corresponding optimal stopping rule
$$\tau^*_0 := \inf\{t \ge 0 : Y_t = X_t\}.$$
Then $\tau^*_0 \le \tau_0$, and we claim that
$$\mathbb{P}_0\big[\tau^*_0 < \tau_0\big] > 0, \tag{2.5.13}$$
because otherwise $X_0 \ge Y_0 = \underline{\mathcal{E}}^L_0\big[X_{\tau_0}\big] = 0$, contradicting (2.5.12).

As in the proof of Theorem 2.4.1, there exists $E \subset \{\tau^*_0 < \tau_0\}$ such that $\mathbb{P}_0(E) = \mathbb{P}_0(\tau^*_0 < \tau_0) > 0$, and for any $\omega \in E$, denoting $t := \tau^*_0(\omega)$, we have $\tau_0^{t,\omega} \in \mathcal{T}^t_+$ and
$$X_t(\omega) = Y_t(\omega) = \inf_{\tau \in \mathcal{T}^t}\underline{\mathcal{E}}^L_t\big[X^{t,\omega}_{\tau\wedge\tau_0^{t,\omega}}\big], \quad \mathbb{P}^t_0\text{-a.s.}$$
Let $\{\tau_i, i \ge 0\}$ be the sequence of stopping times in Definition 2.2.8 corresponding to $u^2$. Then $\mathbb{P}_0\big(\{\tau^*_0 < \tau_i\} \cap E\big) > 0$ for $i$ large enough, and thus there exists $\omega \in E$ such that $t := \tau^*_0(\omega) < \tau_i(\omega)$. Without loss of generality, we assume $\tau_{i-1}(\omega) \le t$. It is clear that $(\tau_0 \wedge \tau_i)^{t,\omega} \in \mathcal{T}^t_+$ and $(u^2)^{t,\omega} \in C^{1,2}_b\big(\Lambda^t((\tau_0\wedge\tau_i)^{t,\omega})\big)$. In particular, there exists $\tilde{u}^2 \in C^{1,2}_b(\Lambda^t)$ such that $(u^2)^{t,\omega} = \tilde{u}^2$ on $\Lambda^t\big((\tau_0\wedge\tau_i)^{t,\omega}\big)$.

Now for any $\tau \in \mathcal{T}^t_+$ such that $\tau \le (\tau_0\wedge\tau_i)^{t,\omega}$, it follows from Remark 2.3.11 that
$$X_t(\omega) = Y_t(\omega) = Y^{t,\omega}_t \le \underline{\mathcal{E}}^L_t\big[Y^{t,\omega}_\tau\big] \le \underline{\mathcal{E}}^L_t\big[X^{t,\omega}_\tau\big].$$
Thus
$$0 \le \underline{\mathcal{E}}^L_t\Big[\big(\tilde{u}^2 - (u^1)^{t,\omega}\big)_\tau - X_t(\omega)\Big].$$
Denote $\varphi_s := \tilde{u}^2_s - X_t(\omega)$, $s \in [t,T]$. Then $\varphi \in \underline{\mathcal{A}}^L(t,\omega;u^1)$. Since $u^1$ is a viscosity $L$-subsolution and $u^2$ is a classical supersolution, we have
$$0 \ge (\mathcal{L}^{t,\omega}\varphi)(t,0) = -\partial_t\tilde{u}^2(t,0) - \frac{1}{2}\operatorname{tr}\big(\partial^2_{xx}\tilde{u}^2(t,0)\big) - f^{t,\omega}\big(t,0,u^1(t,\omega),\partial_x\tilde{u}^2(t,0)\big)$$
$$= -\partial_t u^2(t,\omega) - \frac{1}{2}\operatorname{tr}\big(\partial^2_{xx}u^2(t,\omega)\big) - f\big(t,\omega,u^1(t,\omega),\partial_x u^2(t,\omega)\big)$$
$$= (\mathcal{L}u^2)(t,\omega) + f\big(t,\omega,u^2(t,\omega),\partial_x u^2(t,\omega)\big) - f\big(t,\omega,u^1(t,\omega),\partial_x u^2(t,\omega)\big)$$
$$\ge f\big(t,\omega,u^2(t,\omega),\partial_x u^2(t,\omega)\big) - f\big(t,\omega,u^1(t,\omega),\partial_x u^2(t,\omega)\big).$$
By (2.5.12), we have $u^2(t,\omega) < u^1(t,\omega)$. Then the above inequality contradicts (2.5.10). $\square$

2.6 A variation of Perron's approach

To prove Theorem 2.4.6, we define
$$\overline{u}(t,\omega) := \inf\big\{\varphi(t,0) : \varphi \in \overline{\mathcal{D}}(t,\omega)\big\}, \qquad \underline{u}(t,\omega) := \sup\big\{\varphi(t,0) : \varphi \in \underline{\mathcal{D}}(t,\omega)\big\}, \tag{2.6.1}$$
where, in light of (2.5.11),
$$\overline{\mathcal{D}}(t,\omega) := \Big\{\varphi \in C^{1,2}_{\mathbb{P}^t_0}(\Lambda^t) \text{ bounded} : (\mathcal{L}\varphi)^{t,\omega}_s \ge 0, \ s \in [t,T], \text{ and } \varphi_T \ge g^{t,\omega}, \ \mathbb{P}^t_0\text{-a.s.}$$
$\Big\}$,
$$\underline{\mathcal{D}}(t,\omega) := \Big\{\varphi \in C^{1,2}_{\mathbb{P}^t_0}(\Lambda^t) \text{ bounded} : (\mathcal{L}\varphi)^{t,\omega}_s \le 0, \ s \in [t,T], \text{ and } \varphi_T \le g^{t,\omega}, \ \mathbb{P}^t_0\text{-a.s.}\Big\}. \tag{2.6.2}$$
By Lemma 2.5.7, in particular by its proof under the weaker condition (2.5.11), it is clear that
$$\underline{u} \le u^0 \le \overline{u}. \tag{2.6.3}$$
The following result is important for our proof of Theorem 2.4.6.

Theorem 2.6.1. Let Assumptions 2.4.2 and 2.4.4 hold true. Then
$$\underline{u} = \overline{u}. \tag{2.6.4}$$

Proof of Theorem 2.4.6. By Lemma 2.5.7, in particular by its proof under the weaker condition (2.5.11), we have $u^1 \le \overline{u}$ and $\underline{u} \le u^2$. Then Theorem 2.6.1 implies that $u^1 \le u^2$. This clearly leads to the uniqueness of viscosity solutions, and therefore, by Theorem 2.4.3, the function $u^0$ is the unique viscosity solution of PPDE (2.3.1) with terminal condition $g$. $\square$

Remark 2.6.2. In the standard Perron method, one shows that $\overline{u}$ (resp. $\underline{u}$) is a viscosity supersolution (resp. viscosity subsolution) of the PDE. Assuming that the comparison principle for viscosity solutions holds true, (2.6.4) then follows. In our situation, we shall instead prove (2.6.4) directly first, which in turn is used to prove the comparison principle for viscosity solutions. Roughly speaking, the comparison principle for viscosity solutions is more or less equivalent to the partial comparison principle of Lemma 2.5.7 together with the equality (2.6.4). To the best of our knowledge, such an approach is novel in the literature.

We decompose the proof of Theorem 2.6.1 into several lemmas. First, let $t < T$ and let $\lambda \in C^0_b(\Lambda^t)^d$ satisfy:
$$\text{there exists } \hat{\lambda} \in C^0_b(\hat{\Lambda}^t)^d \text{ such that } \hat{\lambda} = \lambda \text{ on } \Lambda^t \text{ and } \hat{\lambda} \text{ is uniformly continuous in } \hat{\omega} \text{ under the uniform norm } \|\cdot\|^t_T. \tag{2.6.5}$$
Define
$$Z_s := z + \int_t^s \lambda_r\,dr, \qquad v_s := \int_t^s Z_r\,dB^t_r, \qquad t \le s \le T, \ \mathbb{P}^t_0\text{-a.s.} \tag{2.6.6}$$
By Itô's formula, we have
$$v_s = Z_s B^t_s - \int_t^s \lambda_r B^t_r\,dr.$$
Denote
$$\hat{Z}_s(\hat{\omega}) := z + \int_t^s \hat{\lambda}_r(\hat{\omega})\,dr, \qquad \hat{v}(s,\hat{\omega}) := \hat{Z}_s(\hat{\omega})\hat{\omega}_s - \int_t^s \hat{\lambda}_r(\hat{\omega})\hat{\omega}_r\,dr, \qquad \hat{\omega} \in \hat{\Omega}^t. \tag{2.6.7}$$
Now, for any $\omega \in \Omega$ and $x \in \mathbb{R}$, let $\hat{u}^{t,\omega}$ denote the unique solution of the following ODE (with random coefficients) on $[t,T]$:
$$\hat{u}^{t,\omega}(s,\hat{\omega}) := x - \int_t^s \hat{f}^{t,\omega}$$
$$\big(r,\hat{\omega},\hat{u}^{t,\omega}(r,\hat{\omega}),\hat{Z}_r(\hat{\omega})\big)\,dr + \hat{v}(s,\hat{\omega}), \qquad t \le s \le T, \ \hat{\omega} \in \hat{\Omega}^t, \tag{2.6.8}$$
and define
$$u^{t,\omega}(s,\tilde{\omega}) := \hat{u}^{t,\omega}(s,\tilde{\omega}) \qquad \text{for } (s,\tilde{\omega}) \in \Lambda^t. \tag{2.6.9}$$
We then have the following result.

Lemma 2.6.3. Let Assumption 2.4.2 and (2.6.5) hold true. Then, for each $(t,\omega) \in \Lambda$, the above $u^{t,\omega} \in C^{1,2}_b(\Lambda^t)$ and $\mathcal{L}^{t,\omega}u^{t,\omega} = 0$.

Proof. We first show that $\hat{u}^{t,\omega} \in C^{1,2}_b(\hat{\Lambda}^t)$, which implies that $u^{t,\omega} \in C^{1,2}_b(\Lambda^t)$. For $t \le s_1 < s_2 \le T$ and $\hat{\omega}^1, \hat{\omega}^2 \in \hat{\Omega}^t$, we have
$$|\hat{Z}_{s_1}(\hat{\omega}^1) - \hat{Z}_{s_2}(\hat{\omega}^2)| \le \int_{s_1}^{s_2}|\hat{\lambda}_r(\hat{\omega}^1)|\,dr + \int_t^{s_1}|\hat{\lambda}_r(\hat{\omega}^1) - \hat{\lambda}_r(\hat{\omega}^2)|\,dr \le C[s_2 - s_1] + \int_t^{s_1}|\hat{\lambda}_r(\hat{\omega}^1) - \hat{\lambda}_r(\hat{\omega}^2)|\,dr.$$
Note that $d^t_\infty\big((r,\hat{\omega}^1),(r,\hat{\omega}^2)\big) \le d^t_\infty\big((s_1,\hat{\omega}^1),(s_2,\hat{\omega}^2)\big)$ for $t \le r \le s_1$. Then one can easily see that $\hat{Z} \in C^0_b(\hat{\Lambda}^t)$. Similarly one can show that $\hat{v}, \hat{u}^{t,\omega} \in C^0(\hat{\Lambda}^t)$. Next, one can easily check that, for all $\hat{\omega} \in \hat{\Omega}^t$,
$$\partial_t\hat{Z}_s(\hat{\omega}) = \hat{\lambda}_s(\hat{\omega}), \qquad \partial_x\hat{Z}_s(\hat{\omega}) = 0; \qquad \partial_t\hat{v}(s,\hat{\omega}) = \hat{\lambda}_s(\hat{\omega})\hat{\omega}_s - \hat{\lambda}_s(\hat{\omega})\hat{\omega}_s = 0, \qquad \partial_x\hat{v}(s,\hat{\omega}) = \hat{Z}_s(\hat{\omega}), \qquad \partial^2_{xx}\hat{v}(s,\hat{\omega}) = 0;$$
$$\partial_t\hat{u}^{t,\omega}(s,\hat{\omega}) = -\hat{f}^{t,\omega}\big(s,\hat{\omega},\hat{u}^{t,\omega}(s,\hat{\omega}),\hat{Z}_s(\hat{\omega})\big), \qquad \partial_x\hat{u}^{t,\omega}(s,\hat{\omega}) = \hat{Z}_s(\hat{\omega}), \qquad \partial^2_{xx}\hat{u}^{t,\omega}(s,\hat{\omega}) = 0.$$
Since $\hat{\lambda}$ and $\hat{f}$ are bounded, it is straightforward to see that $\hat{u}^{t,\omega} \in C^{1,2}_b(\hat{\Lambda}^t)$. Finally, from the above derivatives we see immediately that $\mathcal{L}^{t,\omega}u^{t,\omega} = 0$. $\square$

Our next two lemmas rely heavily on the remarkable result of Bank and Baum [2], which is extended to the BSDE case in [93].

Lemma 2.6.4. Let Assumption 2.4.2 hold true. Let $\tau \in \mathcal{T}$, let $Z$ be $\mathbb{F}$-progressively measurable such that $\mathbb{E}^{\mathbb{P}_0}\big[\int_\tau^T|Z_s|^2\,ds\big] < \infty$, and let $X_\tau, \tilde{X}_\tau \in \mathbb{L}^2(\mathcal{F}_\tau,\mathbb{P}_0)$.
 0, there exists">
For any $\varepsilon > 0$, there exists an $\mathbb{F}$-progressively measurable process $Z^\varepsilon$ such that:

(i) For the Lipschitz constant $L_0$ in Assumption 2.4.2 (i), it holds that
$$\mathbb{P}_0\Big[\sup_{\tau\le t\le T} e^{-L_0t}|X^\varepsilon_t - X_t| \ge \varepsilon + e^{-L_0\tau}|\tilde{X}_\tau - X_\tau|\Big] \le \varepsilon, \tag{2.6.10}$$
where $X, X^\varepsilon$ are the solutions of the following ODEs with random coefficients:
$$X_t = X_\tau - \int_\tau^t f(s,B,X_s,Z_s)\,ds + \int_\tau^t Z_s\,dB_s, \qquad X^\varepsilon_t = \tilde{X}_\tau - \int_\tau^t f(s,B,X^\varepsilon_s,Z^\varepsilon_s)\,ds + \int_\tau^t Z^\varepsilon_s\,dB_s, \qquad \tau \le t \le T, \ \mathbb{P}_0\text{-a.s.} \tag{2.6.11}$$

(ii) $\lambda^\varepsilon_t := \frac{d}{dt}Z^\varepsilon_t$ exists for $t \in [\tau,T)$, where $\lambda^\varepsilon$ is understood as the right derivative, and for each $\omega \in \Omega$, $(\lambda^\varepsilon)^{\tau(\omega),\omega}$ satisfies (2.6.5) with $t := \tau(\omega)$.

Proof. First, let $h := h_\varepsilon > 0$ be a small number which will be specified later. By standard arguments there exist a time partition $0 = t_0 < \cdots < t_n = T$ and a smooth function $\pi : [0,T] \times \mathbb{R}^{nd} \to \mathbb{R}^d$ such that $\pi$ and its derivatives are bounded and
$$\mathbb{E}^{\mathbb{P}_0}\Big[\int_\tau^T|\tilde{Z}_t - Z_t|^2\,dt\Big] < h, \qquad \text{where } \tilde{Z}_t(\omega) := \pi\big(t,\omega_{t_1\wedge t},\ldots,\omega_{t_n\wedge t}\big) \text{ for all } (t,\omega) \in \Lambda. \tag{2.6.12}$$
Next, for some $\tilde{h} := \tilde{h}_\varepsilon > 0$ which will be specified later, denote
$$Z^\varepsilon_t := \frac{1}{\tilde{h}}\int_{t-\tilde{h}}^t \tilde{Z}_{\tau\vee s}\,ds \qquad \text{for } t \in [\tau,T]. \tag{2.6.13}$$
By choosing $\tilde{h} > 0$ small enough (which may depend on $h_\varepsilon$), we have
$$\mathbb{E}^{\mathbb{P}_0}\Big[\int_\tau^T|Z^\varepsilon_t - Z_t|^2\,dt\Big] < 2h. \tag{2.6.14}$$
Now denote $\delta Z^\varepsilon := Z^\varepsilon - Z$ and $\delta X^\varepsilon := X^\varepsilon - X$. Then
$$\delta X^\varepsilon_t = \delta X^\varepsilon_\tau - \int_\tau^t\big[\alpha_s\,\delta X^\varepsilon_s + \langle\beta,\delta Z^\varepsilon\rangle_s\big]ds + \int_\tau^t\delta Z^\varepsilon_s\,dB_s,$$
where $|\alpha| \le L_0$ and $\beta \in \mathcal{U}^{L_0}_\tau$. Denote $\Gamma^\varepsilon_t := \exp\big(\int_\tau^t \alpha_s\,ds\big)$. We get
$$\Gamma^\varepsilon_t\,\delta X^\varepsilon_t = \delta X^\varepsilon_\tau - \int_\tau^t\Gamma^\varepsilon_s\,\langle\beta,\delta Z^\varepsilon\rangle_s\,ds + \int_\tau^t\Gamma^\varepsilon_s\,\delta Z^\varepsilon_s\,dB_s.$$
Then
$$0 \le \sup_{\tau\le t\le T} e^{-L_0t}|\delta X^\varepsilon_t| - e^{-L_0\tau}|\delta X^\varepsilon_\tau| \le e^{-L_0\tau}\Big[\sup_{\tau\le t\le T}\Gamma^\varepsilon_t|\delta X^\varepsilon_t| - |\delta X^\varepsilon_\tau|\Big] \le \sup_{\tau\le t\le T}\big|\Gamma^\varepsilon_t\,\delta X^\varepsilon_t - \delta X^\varepsilon_\tau\big| \le C\int_\tau^T|\delta Z^\varepsilon_s|\,ds + \sup_{\tau\le t\le T}\Big|\int_\tau^t\Gamma^\varepsilon_s\,\delta Z^\varepsilon_s\,dB_s\Big|.$$
Thus
$$\mathbb{P}_0\Big[\sup_{\tau\le t\le T}e^{-L_0t}|X^\varepsilon_t - X_t| \ge \varepsilon + e^{-L_0\tau}|\tilde{X}_\tau - X_\tau|\Big] = \mathbb{P}_0\Big[\sup_{\tau\le t\le T}e^{-L_0t}|X^\varepsilon_t - X_t| - e^{-L_0\tau}|\tilde{X}_\tau - X_\tau| \ge \varepsilon\Big]$$
$$\le \mathbb{P}_0\Big[C\int_\tau^T|\delta Z^\varepsilon_s|\,ds + \sup_{\tau\le t\le T}\Big|\int_\tau^t\Gamma^\varepsilon_s\,\delta Z^\varepsilon_s\,dB_s\Big| \ge \varepsilon\Big] \le \frac{C}{\varepsilon^2}\,\mathbb{E}^{\mathbb{P}_0}\Big[\Big(\int_\tau^T|\delta Z^\varepsilon_s|\,ds\Big)^2 + \sup_{\tau\le t\le T}\Big|\int_\tau^t\Gamma^\varepsilon_s\,\delta Z^\varepsilon_s\,dB_s\Big|^2\Big] \le \frac{C}{\varepsilon^2}\,\mathbb{E}^{\mathbb{P}_0}\Big[\int_\tau^T|\delta Z^\varepsilon_s|^2\,ds\Big] \le \frac{Ch}{\varepsilon^2},$$
thanks to (2.6.14). Now, setting $h := \frac{\varepsilon^3}{C}$, we prove (2.6.10).
Finally, by (2.6.13) and (2.6.12) we have
$\theta^\varepsilon_s = \frac{1}{\tilde h}\big[\tilde Z_s - \tilde Z_{(s-\tilde h)\vee\tau}\big], \qquad s\in[\tau,T].$
Fix $\omega\in\Omega$ and set $t := \tau(\omega)$. For each $\hat\omega\in\hat\Omega^t$, set $\bar\omega := \omega\otimes_t\hat\omega\in\hat\Omega$, and define
$\hat Z^{t,\omega}_s(\hat\omega) := \varphi(s,\bar\omega_{t_1\wedge s},\dots,\bar\omega_{t_n\wedge s}), \qquad (\theta^\varepsilon)^{t,\omega}_s(\hat\omega) := \frac{1}{\tilde h}\big[\hat Z^{t,\omega}_s(\hat\omega) - \hat Z^{t,\omega}_{(s-\tilde h)\vee t}(\hat\omega)\big], \qquad s\in[t,T].$
Then we can easily check that $(\theta^\varepsilon)^{t,\omega}$ satisfies (2.6.5). $\square$

Lemma 2.6.5. Assume Assumption 2.4.2 holds. Let $x\in\mathbb{R}$ and let $Z$ be $\mathbb{F}$-progressively measurable such that $\mathbb{E}^{\mathbb{P}_0}\big[\int_0^T|Z_s|^2\,ds\big]<\infty$. For any $\varepsilon>0$, there exist an $\mathbb{F}$-progressively measurable process $Z^\varepsilon$ and an increasing sequence of $\mathbb{F}$-stopping times $0=\tau_0\le\tau_1\le\cdots\le T$ such that:

(i) It holds that
$\sup_{0\le t\le T}|X^\varepsilon_t - X_t| \le \varepsilon, \qquad \mathbb{P}_0\text{-a.s.}, \qquad (2.6.15)$
where $X,X^\varepsilon$ are the solutions to the following ODEs with random coefficients:
$X_t = x - \int_0^t f(s,B,X_s,Z_s)\,ds + \int_0^t Z_s\,dB_s,$
$X^\varepsilon_t = x - \int_0^t f(s,B,X^\varepsilon_s,Z^\varepsilon_s)\,ds + \int_0^t Z^\varepsilon_s\,dB_s, \qquad 0\le t\le T,\ \mathbb{P}_0\text{-a.s.} \qquad (2.6.16)$

(ii) For each $i$, $\theta^\varepsilon_t := \frac{d}{dt}Z^\varepsilon_t$ exists for $t\in[\tau_i,\tau_{i+1})$, where $\theta^\varepsilon$ is understood as the right derivative. Moreover, there exists $\tilde\theta^{i,\varepsilon}$ on $[\tau_i,T]$ such that $\tilde\theta^{i,\varepsilon}_t = \theta^\varepsilon_t$ for $t\in[\tau_i,\tau_{i+1})$, and for each $\omega\in\Omega$, $(\tilde\theta^{i,\varepsilon})^{\tau_i(\omega),\omega}$ satisfies (2.6.5) with $t:=\tau_i(\omega)$;

(iii) For $\mathbb{P}_0$-a.e. $\omega\in\Omega$: for each $i$, $\tau_i<\tau_{i+1}$ whenever $\tau_i<T$, and the set $\{i:\tau_i(\omega)<T\}$ is finite.

Proof. Let $\varepsilon>0$ be fixed, and set $\varepsilon_i := 2^{-i-2}e^{-L_0T}\varepsilon$, $i\ge0$. We construct $\tau_i$ and $(Z^{i,\varepsilon},X^{i,\varepsilon})$ by induction as follows. First, for $i=0$, set $\tau_0 := 0$. Applying Lemma 2.6.4 with initial time $\tau_0$, initial value $x$, and error level $\varepsilon_0$, we can construct $Z^{0,\varepsilon}$ and $X^{0,\varepsilon}$ on $[\tau_0,T]$ satisfying the properties in Lemma 2.6.4. In particular,
$\mathbb{P}_0\Big[\sup_{\tau_0\le t\le T} e^{-L_0t}|X^{0,\varepsilon}_t - X_t| \ge \varepsilon_0\Big] \le \varepsilon_0.$
Denote
$\tau_1 := \inf\big\{t\ge\tau_0 : e^{-L_0t}|X^{0,\varepsilon}_t - X_t| \ge \varepsilon_0\big\}\wedge T. \qquad (2.6.17)$
Since $X$ and $X^{0,\varepsilon}$ are continuous, we have $\tau_1>\tau_0$, $\mathbb{P}_0$-a.s. We now define
$Z^\varepsilon_t := Z^{0,\varepsilon}_t, \quad X^\varepsilon_t := X^{0,\varepsilon}_t, \qquad t\in[\tau_0,\tau_1).$
Assume we have defined $\tau_i$, $Z^\varepsilon,X^\varepsilon$ on $[\tau_0,\tau_i)$, and $X^{i-1,\varepsilon}$ on $[\tau_{i-1},T]$.
Applying Lemma 2.6.4 with initial time $\tau_i$, initial value $X^{i-1,\varepsilon}_{\tau_i}$, and error level $\varepsilon_i$, we can construct $Z^{i,\varepsilon}$ and $X^{i,\varepsilon}$ on $[\tau_i,T]$ satisfying the properties in Lemma 2.6.4. In particular,
$\mathbb{P}_0\Big[\sup_{\tau_i\le t\le T} e^{-L_0t}|X^{i,\varepsilon}_t - X_t| \ge \varepsilon_i + e^{-L_0\tau_i}|X^{i-1,\varepsilon}_{\tau_i} - X_{\tau_i}|\Big] \le \varepsilon_i.$
Denote
$\tau_{i+1} := \inf\big\{t\ge\tau_i : e^{-L_0t}|X^{i,\varepsilon}_t - X_t| \ge \varepsilon_i + e^{-L_0\tau_i}|X^{i-1,\varepsilon}_{\tau_i} - X_{\tau_i}|\big\}\wedge T.$
Since $X$ and $X^{i,\varepsilon}$ are continuous, we have $\tau_{i+1}>\tau_i$ whenever $\tau_i<T$. We then define
$Z^\varepsilon_t := Z^{i,\varepsilon}_t, \quad X^\varepsilon_t := X^{i,\varepsilon}_t, \qquad t\in[\tau_i,\tau_{i+1}).$
From our construction we have $\mathbb{P}_0(\tau_{i+1}<T)\le\varepsilon_i$. Then
$\sum_{i=0}^\infty\mathbb{P}_0(\tau_{i+1}<T) \le \sum_{i=0}^\infty\varepsilon_i < \infty.$
It follows from the Borel–Cantelli lemma that the set $\{i:\tau_i(\omega)<T\}$ is finite for $\mathbb{P}_0$-a.e. $\omega\in\Omega$, which proves (iii).

We have thus defined $Z^\varepsilon,X^\varepsilon$ on $[0,T]$, and the statements in (ii) follow directly from Lemma 2.6.4. So it remains to prove (i). For each $i$, by the definition of $\tau_{i+1}$ we see that
$e^{-L_0\tau_{i+1}}|X^\varepsilon_{\tau_{i+1}} - X_{\tau_{i+1}}| \le \varepsilon_i + e^{-L_0\tau_i}|X^\varepsilon_{\tau_i} - X_{\tau_i}|, \qquad \mathbb{P}_0\text{-a.s.}$
Since $X^\varepsilon_0 = X_0 = x$, by induction we get
$\sup_i e^{-L_0\tau_i}|X^\varepsilon_{\tau_i} - X_{\tau_i}| \le \sum_{i=0}^\infty\varepsilon_i \le \sum_{i=0}^\infty 2^{-i-2}e^{-L_0T}\varepsilon = \frac12 e^{-L_0T}\varepsilon, \qquad \mathbb{P}_0\text{-a.s.}$
Then for each $i$,
$\sup_{\tau_i\le t\le\tau_{i+1}}|X^\varepsilon_t - X_t| \le e^{L_0T}\big[\varepsilon_i + |X^\varepsilon_{\tau_i} - X_{\tau_i}|\big] \le e^{L_0T}\Big[2^{-i-2}e^{-L_0T}\varepsilon + \frac12 e^{-L_0T}\varepsilon\Big] \le \varepsilon, \qquad \mathbb{P}_0\text{-a.s.},$
which implies (2.6.15). $\square$

Proof of Theorem 2.6.1. Without loss of generality, we shall only prove $\overline u_0 \le u^0_0$. Recall that $(Y^0,Z^0)$ is the solution to BSDE (2.4.1) and $Y^0 = u^0$ satisfies the regularity in Proposition 2.5.4. Setting $Z := Z^0$ and $x := Y^0_0$ in Lemma 2.6.5, we see that $X = u^0$ and thus $X$ satisfies the regularity in Proposition 2.5.4. From the construction in Lemma 2.6.5 and then by Lemma 2.6.4 we see that $\tilde\theta^{0,\varepsilon}_t := \frac{d}{dt}Z^{0,\varepsilon}_t$ exists for all $t\in[0,T)$ and satisfies (2.6.5). Then by Lemma 2.6.3 we see that $X^{0,\varepsilon}\in C^{1,2}_b(\Lambda)$ and $\mathcal{L}X^{0,\varepsilon}=0$. This implies that, for the $\tau_1$ defined in (2.6.17), $\tau_1(\omega)>0$ for all $\omega\in\Omega$ and, by Example 2.2.5, $\tau_1\in\mathcal{T}^+$. For $i=1,2,\dots$, repeating the above arguments, by induction we can show that, for each $i$ and each $\omega\in\Omega$, $\tau^{\tau_i(\omega),\omega}_{i+1}\in\mathcal{T}^{\tau_i(\omega)+}$.
Moreover, by Lemma 2.6.5, $\{i:\tau_i<T\}$ is finite, $\mathbb{P}_0$-a.s. We now let $u^\varepsilon$ be the solution to the following ODE:
$u^\varepsilon_t = X^\varepsilon_0 + e^{L_0T}\varepsilon - \int_0^t f(s,B,u^\varepsilon_s,Z^\varepsilon_s)\,ds + \int_0^t Z^\varepsilon_s\,dB_s.$
For $i=0,1,\dots$, by the construction of $Z^\varepsilon$ in Lemma 2.6.5 and following the arguments in Lemma 2.6.3, one can easily show that
$u^\varepsilon\in C^{1,2}_{\mathbb{P}_0}([0,T]) \quad\text{and}\quad \mathcal{L}u^\varepsilon = 0. \qquad (2.6.18)$
Moreover, note that
$u^\varepsilon_t - X^\varepsilon_t = e^{L_0T}\varepsilon - \int_0^t\alpha_s\,[u^\varepsilon_s - X^\varepsilon_s]\,ds,$
where $|\alpha|\le L_0$. By standard arguments one has
$\sup_{0\le t\le T}|u^\varepsilon_t - X^\varepsilon_t| \le e^{2L_0T}\varepsilon \quad\text{and}\quad u^\varepsilon_T - X^\varepsilon_T \ge e^{-L_0T}\,[u^\varepsilon_0 - X^\varepsilon_0] = \varepsilon.$
Therefore, by (2.6.15) and noting that $u^0$ is bounded, $u^\varepsilon$ is bounded and
$u^\varepsilon_T(\omega) \ge X^\varepsilon_T(\omega) + \varepsilon \ge X_T(\omega) = Y^0_T(\omega) = g(\omega), \qquad\text{for }\mathbb{P}_0\text{-a.e. }\omega.$
This, together with (2.6.18), implies that $u^\varepsilon\in\overline{\mathcal{D}}(0,0)$. Then, by the definition of $\overline u$,
$\overline u_0 \le u^\varepsilon_0 = X^\varepsilon_0 + e^{L_0T}\varepsilon \le u^0_0 + \varepsilon + e^{L_0T}\varepsilon.$
Since $\varepsilon$ is arbitrary, we obtain $\overline u_0 \le u^0_0$. $\square$

Chapter 3

Path-dependent integro-differential equations¹

3.1 Introduction

The purpose of this chapter is to extend the theory of viscosity solutions (in the sense of [31] and [32]) for path-dependent partial differential equations (PPDEs) to path-dependent integro-differential equations. In particular, we investigate semilinear path-dependent integro-differential equations of the form
$-\mathcal{L}u(t,\omega) - f_t(\omega,u(t,\omega),\partial_\omega u(t,\omega),\mathcal{I}u(t,\omega)) = 0, \qquad (t,\omega)\in[0,T)\times D([0,T],\mathbb{R}^d), \qquad (3.1.1)$

¹A version with minor modifications of this chapter is available online as C. Keller, Viscosity Solutions of Path-dependent Integro-differential Equations, arXiv preprint arXiv:1412.8495 (2014). The author would like to thank Remigijus Mikulevicius and Jianfeng Zhang for very valuable discussions.

where $D([0,T],\mathbb{R}^d)$ is the space of right-continuous functions with left limits from $[0,T]$ to $\mathbb{R}^d$, $\mathcal{L}$ is a linear integro-differential operator of the form
$\mathcal{L}u(t,\omega) = -\partial_t u(t,\omega) - \sum_{i=1}^d b^i_t(\omega)\,\partial_{\omega^i}u(t,\omega) - \frac12\sum_{i,j=1}^d c^{ij}_t(\omega)\,\partial^2_{\omega^i\omega^j}u(t,\omega) - \int_{\mathbb{R}^d}\Big[u(t,\omega+z\cdot1_{[t,T]}) - u(t,\omega) - \sum_{i=1}^d z^i\,\partial_{\omega^i}u(t,\omega)\Big]\,K_t(\omega,dz),$
and $\mathcal{I}$ is an integral operator of the form $\mathcal{I}u(t,\omega)$
$= \int_{\mathbb{R}^d}\big[u(t,\omega+z\cdot1_{[t,T]}) - u(t,\omega)\big]\,\rho_t(\omega,z)\,K_t(\omega,dz).$

Well-posedness for semilinear PPDEs was first established by Ekren, Keller, Touzi, and Zhang [31], where the notion of viscosity solutions used here was also introduced. Subsequent work by Ekren, Touzi, and Zhang deals with fully nonlinear PPDEs ([32] and [33]), by Pham and Zhang with path-dependent Isaacs equations ([88]), by Ekren with obstacle PPDEs ([30]), and by Ren with fully nonlinear elliptic PPDEs ([89]). Initial motivation for this line of research came from Peng [84], who considered non-Markovian backward stochastic differential equations (BSDEs) as PPDEs, in analogy to the relationship between Markovian BSDEs and (standard) PDEs; from Dupire [29], who introduced new derivatives on $D([0,T],\mathbb{R}^d)$ so that for smooth functionals on $[0,T)\times D([0,T],\mathbb{R}^d)$ a functional counterpart to Itô's formula holds; and from Cont and Fournié (see [18], [19], and [20]), who extended Dupire's seminal work. However, fully nonlinear PPDEs of first order had been studied earlier by Lukoyanov (see, for example, [63], [65], [64]). He used derivatives introduced by Kim [53] and adapted first the notion of so-called minimax solutions and then of viscosity solutions from PDEs to PPDEs. Minimax solutions for PDEs were introduced by Subbotin (see, e.g., [95] and [96]), motivated by the study of differential games. In the case of PDEs of first order, minimax and viscosity solutions are equivalent (see [97]). Another approach to generalized solutions for first-order PPDEs can be found in work by Aubin and Haddad [1], where so-called Clio derivatives for path-dependent functionals are introduced in order to study certain path-dependent Hamilton–Jacobi–Bellman equations that occur in portfolio theory.

Possible applications of path-dependent integro-differential equations are non-Markovian problems in control, differential games, and financial mathematics that involve jump processes.
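To make the operators above concrete, here is a small Python sketch (entirely my own illustration, not from the text) that evaluates $\mathcal{I}u(t,\omega)$ on a time grid for the cylindrical functional $u(t,\omega)=(\omega_t)^2$, a constant weight $\rho\equiv1$, and a jump measure $K$ consisting of two point masses. The essential ingredient is the bumped path $\omega + z\cdot1_{[t,T]}$.

```python
import numpy as np

# Illustrative sketch (function names are mine): both L and I act on path
# functionals through the bumped path omega + z * 1_{[t,T]}.
def bump(omega, t_idx, z):
    """The path omega + z * 1_{[t,T]} on the grid."""
    out = omega.copy()
    out[t_idx:] += z
    return out

def I_u(u, omega, t_idx, jumps, weights, rho=1.0):
    # I u = integral of [u(t, omega + z 1_{[t,T]}) - u(t, omega)] * rho K(dz)
    # with K = sum_k weights[k] * dirac_{jumps[k]}.
    return sum(w * (u(bump(omega, t_idx, z), t_idx) - u(omega, t_idx)) * rho
               for z, w in zip(jumps, weights))

u = lambda omega, t_idx: omega[t_idx] ** 2
omega = np.linspace(0.0, 1.0, 11)          # a sample path on a grid over [0, 1]
val = I_u(u, omega, t_idx=5, jumps=[0.5, -0.5], weights=[1.0, 1.0])
# for u = x^2 each term is (x + z)^2 - x^2 = 2 x z + z^2, with x = omega_5 = 0.5
```

Here the two terms are $0.75$ and $-0.25$, so $\mathcal{I}u = 0.5$; the same bump-and-integrate pattern, with a compensating gradient term, produces the nonlocal part of $\mathcal{L}$.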
Some comments about differences between PDEs and PPDEs seem to be in order. Contrary to standard PDEs, even linear PPDEs rarely have classical solutions in most relevant situations. Hence, one needs to consider weaker forms of solutions. In the case of PDEs, the notion of viscosity solutions introduced by Crandall and Lions [23] turned out to be extremely successful. The main difficulty in the path-dependent case compared to the standard PDE case is the lack of local compactness of the state space, e.g., $[0,T]\times D$ vs. $[0,T]\times\mathbb{R}^d$. Local compactness is essential for proofs of uniqueness of viscosity solutions to PDEs, i.e., standard PDE methods cannot, in general, be easily adapted to the path-dependent case. The main contribution of [31] was to replace the pointwise supremum/infimum occurring in the definition of viscosity solutions to PDEs via test functions by an optimal stopping problem. The lack of local compactness could be circumvented by the existence of an optimal stopping time. This is crucial in establishing the comparison principle. Moreover, additional intricacies caused by the jumps have to be faced. For example, it turns out that, in contrast to the PPDE case, the uniform topology is not always appropriate. In order to prove the comparison principle, it seems necessary to equip $D$ with one of Skorohod's nonuniform topologies.

The general methodology to establish well-posedness of viscosity solutions for (3.1.1) follows [31] and [32]. Existence will be proven by using a stochastic representation. An intermediate result is a so-called partial comparison principle, i.e., a comparison principle where one of the involved solutions is a viscosity subsolution (resp. a viscosity supersolution) and the other one a classical supersolution (resp. a classical subsolution). The partial comparison principle is essential to prove the comparison principle.

The rest of this chapter is organized as follows. In Section 2, we introduce most of the notation and preliminaries.
In Section 3, viscosity solutions for semilinear path-dependent integro-differential equations are defined and the main results are stated. In Section 4, we prove consistency of classical solutions with viscosity solutions as well as existence of viscosity solutions. In Section 5, the partial comparison principle and a stability result are proved. This section also contains some auxiliary results about backward SDEs and optimal stopping. In Section 6, the comparison principle is proved. Appendix A deals with conditional probability distributions and their applications to martingale problems. In Appendix B, Skorohod's topologies are defined. Appendix C contains additional auxiliary results; in particular, we state the integration-by-parts formula for semimartingales.

3.2 Setup

3.2.1 Notation and preliminaries

For unexplained notation, we refer to [49] and [90]. Let $\mathbb{N}=\{1,2,\dots\}$ be the set of all strictly positive integers, $\mathbb{N}_0:=\mathbb{N}\cup\{0\}$, $\mathbb{Q}$ be the set of rational numbers, and $\mathbb{R}$ be the set of real numbers. Given $d_0\in\mathbb{N}$, we denote by $\mathbb{S}^{d_0}$ the set of all symmetric real-valued $d_0\times d_0$-matrices. For any matrix $A$, we denote by $A^\top$ its transpose. Given a topological space $E$, let $\mathcal{B}(E)$ be its Borel $\sigma$-field. We write $0$ for zero vectors, zero matrices, constant functions attaining only the value $0$, etc. The meaning should be clear from the context. We write $1$ for indicator functions. The expected value with respect to some probability measure $\mathbb{P}$ is denoted by $\mathbb{E}^{\mathbb{P}}$. On $\mathbb{R}^{d_0}$, $d_0\in\mathbb{N}$, denote the $\ell^p$-norms by $|\cdot|_p$, $p\in\mathbb{N}\cup\{\infty\}$. Also set $|\cdot|:=|\cdot|_2$.

Fix $T>0$ and $d\in\mathbb{N}$. Let $\Omega:=D([0,T],\mathbb{R}^d)$ be the canonical space, $X$ the canonical process on $\Omega$, i.e., $X_t(\omega)=\omega_t$, and $\mathbb{F}^0=\{\mathcal{F}^0_t\}_{t\in[0,T]}$ the (raw) filtration generated by $X$. Denote the right limit of $\mathbb{F}^0$ by $\mathbb{F}^0_+=\{\mathcal{F}^0_{t+}\}_{t\in[0,T]}$. Given $t\in[0,T]$, let $\Lambda^t:=[t,T)\times\Omega$ and $\bar\Lambda^t:=[t,T]\times\Omega$. Also, put $\Lambda:=\Lambda^0$ and $\bar\Lambda:=\bar\Lambda^0$. Given random times $\tau_1,\tau_2:\Omega\to[0,\infty]$, put
$[\![\tau_1,\tau_2]\!] := \{(t,\omega)\in\bar\Lambda : \tau_1(\omega)\le t\le\tau_2(\omega)\}.$
The other stochastic intervals $[\![\tau_1,\tau_2[\![$, etc., are defined similarly.
We equip $\Omega$ with the uniform norm $\|\cdot\|_\infty$ and $\bar\Lambda$ with the pseudometric $d_\infty$ defined by
$d_\infty((t,\omega),(t',\omega')) := |t-t'| + \|\omega_{\cdot\wedge t} - \omega'_{\cdot\wedge t'}\|_\infty.$
Often, we consider a functional $u:\bar\Lambda\to\mathbb{R}$ as a stochastic process, in which case we write $u_t$ instead of $u(t,X)$.

Definition 3.2.1. Let $E_1$ and $E_2$ be nonempty sets. Let $A$ be a nonempty subset of $\bar\Lambda\times E_1$. Consider a mapping $u=u(t,\omega,x):A\to E_2$. We call $u$ non-anticipating if, for every $(t,\omega,x)\in A$,
$u(t,\omega,x) = u(t,\omega_{\cdot\wedge t},x).$
We call $u$ deterministic if it does not depend on $\omega$.

Given a nonempty subset $A$ of $\bar\Lambda$ and a topological space $E$, we denote by $C(A,E)$ the set of all functionals from $A$ to $E$ that are continuous under $d_\infty$. If $E=\mathbb{R}$, we just write $C(A)$ instead.

Remark 3.2.2. Note that any $u\in C(\bar\Lambda)$ satisfies the following:
(i) $u$ is non-anticipating. This follows immediately from the definition of $d_\infty$.
(ii) The trajectories $t\mapsto u(t,\omega)$ are càdlàg and the trajectories $t\mapsto u(t,\omega_{\cdot\wedge t-})$ are left-continuous (Proposition 1 of [19]). Also, for fixed $t\in(0,T]$, the path $\tilde\omega:=\omega_{\cdot\wedge t-}$ is continuous at $t$, which again, by Proposition 1 of [19], implies that
$u_{t-}(\omega) = \lim_{s\uparrow t}u(s,\tilde\omega) = u(t,\tilde\omega) = u(t,\omega_{\cdot\wedge t-}).$
That is, $u_- = (u(t,\omega_{\cdot\wedge t-}))_t$. Moreover, considered as processes, $u$ and $u_-$ are $\mathbb{F}^0$-adapted (Theorem 2 of [19]).
(iii) $X$ jumps whenever $u$ jumps (Lemma 3.7.11).

Often, we write $H\bullet S$ for stochastic integrals with respect to semimartingales, i.e., $H\bullet S_t = \int_s^t H_r\,dS_r$. The initial time $s$ is usually clear from the context. We also sometimes write $H\bullet t$ instead of $\int H_t\,dt$. Similarly, we write $W\ast\mu$ for stochastic integrals with respect to random measures (see Chapter II of [49]). Given a probability measure $\mathbb{P}$, denote by $\mathbb{F}^{\mathbb{P}}$ its induced filtration satisfying the usual conditions. If $S$ is an $(\mathbb{F}^{\mathbb{P}},\mathbb{P})$-semimartingale, write $L^2_{\mathrm{loc}}(S,\mathbb{P})$ for the set of all $\mathbb{F}^{\mathbb{P}}$-predictable processes $H$ such that $H^2\bullet\langle S,S\rangle$ is locally integrable (cf. I.4.39 in [49]). Similarly, given a random measure $\mu$, the set $G_{\mathrm{loc}}(\mu,\mathbb{P})$ is defined (see Definition II.1.27 in [49]).
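A small grid-based Python sketch (my own notation, not from the text) of the pseudometric $d_\infty$ and of the non-anticipation property in Remark 3.2.2 (i): since $d_\infty((t,\omega),(t,\omega_{\cdot\wedge t}))=0$, a $d_\infty$-continuous functional cannot distinguish a path from its stopped version, i.e., it cannot see the path after $t$.

```python
import numpy as np

# Grid-based sketch of d_inf((t, w), (t', w')) = |t - t'| + ||w_{.^t} - w'_{.^t'}||_inf.
T, N = 1.0, 10
dt = T / N

def stopped(w, i):
    """The stopped path w_{.^t_i} on the grid."""
    return np.concatenate([w[:i + 1], np.full(len(w) - i - 1, w[i])])

def d_inf(i, w, j, wp):
    return abs(i - j) * dt + np.max(np.abs(stopped(w, i) - stopped(wp, j)))

w = np.sin(np.linspace(0.0, 6.0, N + 1))
w_future = w.copy()
w_future[8:] += 5.0        # perturb the path strictly after t_5 only
```

The perturbed path `w_future` is at $d_\infty$-distance zero from `w` at time $t_5$, so any functional continuous under $d_\infty$ takes the same value on both, which is exactly non-anticipation.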
Given a process $S$ with left limits, define $\Delta S$ by $\Delta S_t := S_t - S_{t-}$. If $S$ is a semimartingale under a probability measure $\mathbb{P}$, then we denote by $S^c = S^{c,\mathbb{P}}$ the continuous local martingale part of $S$ (p. 45 in [49]) and by $\mu^S$ the random measure associated to the jumps of $S$ (p. 69 in [49]). It is defined by
$\mu^S(\cdot;dt,dx) := \sum_s 1_{\{\Delta S_s\ne0\}}\,\delta_{(s,\Delta S_s)}(dt,dx),$
where $\delta_{(\cdot,\cdot)}$ denotes the Dirac measure.

Given a nonempty domain $D$ in $\mathbb{R}^{d_0}$, $d_0\in\mathbb{N}$, denote by $\|\cdot\|_{n+\alpha;D}$, $n\in\mathbb{N}_0$, $\alpha\in(0,1)$, the standard Hölder norms. Similarly, denote by $\|\cdot\|_{n+\alpha;Q}$, $Q=(t_1,t_2)\times D$, $t_1<t_2$, the corresponding parabolic Hölder norms. We refer to [73] for the definition. The corresponding Hölder spaces are denoted by $C^{n+\alpha}(\bar D)$ and $C^{n+\alpha}(\bar Q)$, resp., and the corresponding local Hölder spaces by $C^{n+\alpha}_{\mathrm{loc}}(D)$ and $C^{n+\alpha}_{\mathrm{loc}}(Q)$, resp. Also, put $\|\cdot\|_D:=\|\cdot\|_{0;D}$ and $\|\cdot\|_Q:=\|\cdot\|_{0;Q}$.

3.2.2 Standing assumptions

The assumptions in this section are always in force unless explicitly stated otherwise.

Let $b=(b^i)_{i\le d}$ be a $d$-dimensional, non-anticipating, and $\mathbb{F}^0_+$-predictable process, $c=(c^{ij})_{i,j\le d}$ a non-anticipating and $\mathbb{F}^0_+$-predictable process with values in the set of nonnegative definite real $d\times d$-matrices, and $K=K_t(dz)$ a non-anticipating and $\mathbb{F}^0_+$-predictable process with values in the set of $\sigma$-finite measures on $\mathcal{B}(\mathbb{R}^d)$.

Assumption 3.2.3. Let $(b,c,K)$ satisfy $c^{ij}=\sum_{k\le d}\sigma^{ik}\sigma^{jk}$, $i,j\le d$, and $K_t(A)=\int 1_{A\setminus\{0\}}(\delta_t(z))\,F(dz)$, $A\in\mathcal{B}(\mathbb{R}^d)$, where $\sigma=(\sigma^{ij})_{i,j\le d}$ is a non-anticipating and $\mathbb{F}^0_+$-predictable process with values in the set of real $d\times d$-matrices, $\delta=(\delta^i)_{i\le d}$ is a $d$-dimensional, non-anticipating, and $\mathbb{F}^0_+$-predictable random field on $\mathbb{R}^d$, and $F$ is a nonnegative $\sigma$-finite measure on $\mathcal{B}(\mathbb{R}^d)$. Let $b$, $\sigma$, and $\delta$ be right-continuous in $t$. Let $b$ and $\sigma$ be bounded by a common constant $C^0_0\ge1$ and Lipschitz continuous in $\omega$ with common Lipschitz constant $L_0\ge1$. Let $\delta_t(\cdot,z)$ be bounded by $|z|\wedge C^0_0$ and Lipschitz continuous with Lipschitz constant $L_0(|z|\wedge C^0_0)$. Also, assume that $\int_{\mathbb{R}^d}|z|^2\wedge C^0_0\,F(dz)\le C^0_1$ for some constant $C^0_1\ge1$.
Moreover, let $d=1$ or let $\sigma_s(\omega)$ be invertible for every $(s,\omega)\in\bar\Lambda$.

Let $\rho=\rho_t(\omega,z):\bar\Lambda\times\mathbb{R}^d\to\mathbb{R}$ and $f=f_t(\omega,y,z,p):\bar\Lambda\times\mathbb{R}\times\mathbb{R}^d\times\mathbb{R}\to\mathbb{R}$ be functions that are non-anticipating in $(t,\omega)$.

Assumption 3.2.4. Let $\rho$ and $f$ be bounded from above by $C^0_0$. Let $\xi$ be uniformly continuous under $\|\cdot\|_\infty$ with modulus of continuity $\rho_0$. Let $\rho$ and $f$ be uniformly continuous in $(t,\omega)$ under $d_\infty$, uniformly in $(y,z,p)$, with modulus of continuity $\rho_0$. Also let, uniformly in $(t,\omega)$,
$|f_t(\omega,y,z,p) - f_t(\omega,y',z',p')| \le L_0\big[|y-y'| + |\sigma_t(\omega)^\top(z-z')|_1 + |p-p'|\big].$
Let $0\le\rho(t,\omega,z)\le C^0_0(1\wedge|z|)$.

Remark 3.2.5. Our Lipschitz condition on $f$ in $z$ is the same as in [13]. To be able to use the comparison principle for backward SDEs with jumps, we also need the following assumption.

Assumption 3.2.6. Let $f$ be nondecreasing in $p$.

3.2.3 Canonical setup

We introduce probability measures $\mathbb{P}_{s,\omega}$, which will be employed in the rest of this chapter. To this end, let $(B,C,\nu)$ be a candidate for a characteristic triplet of $X$ (see §III.2.a in [49]) such that
$dB_t = b_t\,dt, \qquad dC_t = c_t\,dt, \qquad \nu(dt,dz) = K_t(dz)\,dt.$
For every $s\in[0,T]$, define (cf. §III.2.d in [49])
$p_sB:[s,T]\times\Omega\to\mathbb{R}^d, \quad (p_sB)_t := B_t - B_s;$
$p_sC:[s,T]\times\Omega\to\mathbb{S}^d, \quad (p_sC)_t := C_t - C_s;$
$p_s\nu:\mathcal{B}([s,T]\times\mathbb{R}^d)\times\Omega\to\mathbb{R}, \quad (p_s\nu)((s,t]\times A) := \nu((s,t]\times A).$
Consequently, the jumps of X on [s;T ] are bounded from above by C 0 0 ,P s;! -a.s., i.e., we have on [s;T ], X =X s +p s B +X c;s;! +z ( X ); P s;! -a.s. 74 and X is a special (P s;! ;F 0 + )-semimartingale on [s;T ]. We augment the raw ltrationF 0 similarly as in the theory of Markov processes (see, e.g., [90]). To this end, letN s;! be the collection of all P s;! -null sets inF 0 T and put, for every t2 [0;T ], F s;! t :=(F 0 t+ ;N s;! ); F t := \ (s;!)2 F s;! t : Now we can dene the following ltrations: F :=fF t g t2[0;T ] ; F s;! :=fF s;! t g t2[0;T ] ; (s;!)2 : Note thatF is right-continuous. Next, we introduce several classes of stopping times. Denition 3.2.8. Let s2 [t;T ]. Given a ltration G =fG t g t2[0;T ] on , denote byT s (G) the set of all G-stopping times such that s. SetT s :=T s (F). Let !2 . Denote byH s (resp.H s;! ) the set of all 2T s for which there exist some d 0 2 N, a right-continuous, non-anticipating, F-adapted, d 0 -dimensional process Y = (Y i ) id 0, and a closed subset E of R d 0 such that, for every ~ !2 (resp. for P s;! -a.e. ~ !2 ), (~ !) = inffts :Y t (~ !)2Eg^T: 75 Given a stopping time and a path !2 , we often write (;!) instead of ((!);!) if there is no danger of confusion. Lemma 3.2.9. Fix (s;!)2 and t2 [s;T ). Let 2H s;! . If (!) > t and X coincides with ! on [0;t], P s;! -a.s., then >t, P s;! -a.s. Proof. LetY be the corresponding process,E be the corresponding closed set, and 0 the corresponding subset of with P s;! ( 0 ) = 1 in the denition ofH s;! such that, for every ~ !2 0 , (~ !) = infft s : Y t (~ !)2 Eg^T and ! coincides with ~ ! on [0;s]. Since (!) > t, we have Y r (~ !) = Y r (!)2 E c for every r2 [s;t]. This yields (~ !)>t because Y is right-continuous and E is closed. 3.2.4 Path-dependent stochastic analysis First, we introduce a new space of continuous functionals. The reason is that we want the trajectories t7! u(t;! 
+x:1 [t;T ] ) to be right-continuous, which, in general, is not the case if the functionalu is only inC( ) as Example 3.2.11 below demonstrates. Denition 3.2.10. Lets2 [0;T ]. Denote byC 0 ( s ) the set of allu2C( s ) such that, for every x2R d , the map (t;!)7!u(t;! +x1 [t;T ] ) is continuous under d 1 . Denote by C 0 b ( s ) the set of all bounded functionals in C 0 ( s ) and by UC 0 b ( s ) the set of all uniformly continuous functionals in C 0 b ( s ). 76 Example 3.2.11. Consider u = u(t;!) := sup 0st j! s j. Fix t > 0. Let ! = 2:1 [t;T ] . Then u(t;! + 1 [t;T ] ) = 1 but u(t +n 1 ;! + 1 [t+n 1 ;T ] ) = 2 for every n2N. Next, we give an implicit denition of our path-dependent derivatives. Denition 3.2.12. Let (s;!)2 and let h2H s;! with h>s, P s;! -a.s. Denote by C 1;2 b (Js;hK) the set of all bounded functionals u2C( s ) for which there exist bounded, right-continuous, non-anticipating, F s;! -adapted functionals @ t u : s ! R, @ ! u = (@ ! iu) id : s ! R d , and @ 2 !! u = (@ ! i ! ju) i;jd : s ! S d such that @ t u2 C(Js;hJ), @ ! u2 C(Js;hJ;R d ), @ 2 !! u2 C(Js;hJ;S d ), and that, for every 2T s , u ^h =u s + R ^h s @ t u t dt + P d i=1 R ^h s @ ! iu t dX i t + 1 2 P d i;j=1 R ^h s @ 2 ! i ! j u t dhX i;s;!;c ;X j;s;!;c i t + R ^h s R R d h u t (X ^t +z:1 [t;T ] )u t P d i=1 z i @ ! iu t i X (dt;dz);P s;! -a.s. Given z2R d , we sometimes use the operatorr 2 z dened by r 2 z u(t;!) :=u(t;! +z:1 [t;T ] )u(t;!) d X i=1 z i @ ! iu(t;!): 77 Remark 3.2.13. If u2C 1;2 b (Js;hK), then u ^h =u s Z ^h s Lu t dt + local martingale part,P s;! -a.s., for every 2T s . 3.3 Viscosity solutions: Notion and main results In this section, we introduce the notion of viscosity solutions for equations of the form (3.1.1). A minimal requirement for those solutions is consistency with classical solutions. They are dened as follows: Denition 3.3.1. If u2C 0 b ( )\C 1;2 b () and Luf(;u;@ ! u;Iu) (resp., =) 0 in ; thenu is a classical subsolution (resp. 
classical supersolution, classical solution) of (3.1.1).

To state the actual definition of viscosity solutions, we first need to introduce two nonlinear expectations and spaces of test functionals.

Fix $(s,\omega)\in\bar\Lambda$ and $L\ge0$. Given a process $H\in L^2_{\mathrm{loc}}(X^{c,s,\omega},\mathbb{P}_{s,\omega})$ and a random field $W\in G_{\mathrm{loc}}(p_s\mu^X,\mathbb{P}_{s,\omega})$, denote by $\Gamma^{H,W}$ the solution to
$\Gamma = 1 + (\Gamma_-H)\bullet X^{c,s,\omega} + (\Gamma_-W)\ast(\mu^X-\nu)$
on $[s,T]$ with $\Gamma=1$ on $[0,s)$, $\mathbb{P}_{s,\omega}$-a.s.

Definition 3.3.2. Let $L\ge0$ and let $(s,\omega)\in\bar\Lambda$. Denote by $\mathcal{P}_L(s,\omega)$ the set of all probability measures $\mathbb{P}$ on $(\Omega,\mathcal{F}^0_T)$ for which there exist a process $H\in L^2_{\mathrm{loc}}(X^{c,s,\omega},\mathbb{P}_{s,\omega})$ with $|\sigma^\top H|_\infty\le L$ and a random field $W\in G_{\mathrm{loc}}(p_s\mu^X,\mathbb{P}_{s,\omega})$ with $0\le W\le L$ such that, for every $A\in\mathcal{F}^0_T$,
$\mathbb{P}(A) = \int_A\Gamma^{H,W}_T(\tilde\omega)\,d\mathbb{P}_{s,\omega}(\tilde\omega).$
Now we can define the following nonlinear expectations:
$\underline{\mathcal{E}}^L_{s,\omega} := \inf_{\mathbb{P}\in\mathcal{P}_L(s,\omega)}\mathbb{E}^{\mathbb{P}}, \qquad \overline{\mathcal{E}}^L_{s,\omega} := \sup_{\mathbb{P}\in\mathcal{P}_L(s,\omega)}\mathbb{E}^{\mathbb{P}}.$

Definition 3.3.3. Let $u:\bar\Lambda\to\mathbb{R}$ be an $\mathbb{F}$-adapted process, let $L\ge0$, and let $(s,\omega)\in\bar\Lambda$. Denote by $\underline{\mathcal{A}}^Lu(s,\omega)$ (resp. $\overline{\mathcal{A}}^Lu(s,\omega)$) the set of all functionals $\varphi\in C^0_b(\bar\Lambda^s)$ for which there exists a hitting time $h\in\mathcal{H}_{s,\omega}$ with $h>s$, $\mathbb{P}_{s,\omega}$-a.s., such that $\varphi\in C^{1,2}_b([\![s,h]\!])$ and that
$0 = (\varphi-u)(s,\omega) = \inf_{\tau\in\mathcal{T}_s}\underline{\mathcal{E}}^L_{s,\omega}\big[(\varphi-u)_{\tau\wedge h}\big] \quad \Big(\text{resp. } = \sup_{\tau\in\mathcal{T}_s}\overline{\mathcal{E}}^L_{s,\omega}\big[(\varphi-u)_{\tau\wedge h}\big]\Big).$

Definition 3.3.4. Let $u$ be a bounded, right-continuous, non-anticipating, $\mathbb{F}$-adapted process that is $\mathbb{P}_{s,\omega}$-quasi-left-continuous on $[s,T]$ for every $(s,\omega)\in\bar\Lambda$.
(i) Given $L\ge0$, we say $u$ is a viscosity $L$-subsolution (resp. viscosity $L$-supersolution) of (3.1.1) if, for every $(t,\omega)\in\Lambda$ and every $\varphi\in\underline{\mathcal{A}}^Lu(t,\omega)$ (resp. $\overline{\mathcal{A}}^Lu(t,\omega)$),
$\mathcal{L}\varphi(t,\omega) - f_t(\omega,\varphi(t,\omega),\partial_\omega\varphi(t,\omega),\mathcal{I}\varphi(t,\omega)) \le \text{(resp. }\ge)\ 0.$
(ii) We say $u$ is a viscosity subsolution (resp. viscosity supersolution) of (3.1.1) if it is a viscosity $L$-subsolution (resp. viscosity $L$-supersolution) of (3.1.1) for some $L\ge0$.
(iii) We say $u$ is a viscosity solution of (3.1.1) if it is both a viscosity subsolution and a viscosity supersolution of (3.1.1).

Remark 3.3.5. If $u\in C(\bar\Lambda)$, then $u$ is $\mathbb{P}_{s,\omega}$-quasi-left-continuous on $[s,T]$ for every $(s,\omega)\in\bar\Lambda$. Indeed, since $X$ is $\mathbb{P}_{s,\omega}$
-quasi-left-continuous on $[s,T]$, there exists, by Proposition I.2.26 in [49], a sequence of totally inaccessible stopping times exhausting the jumps of $X$. By Remark 3.2.2 (iii), this sequence also exhausts the jumps of $u$. Hence, again by Proposition I.2.26 in [49], $u$ is $\mathbb{P}_{s,\omega}$-quasi-left-continuous.

Theorem 3.3.6 (Consistency with classical solutions). Let $u\in C^0(\bar\Lambda)\cap C^{1,2}(\Lambda)$. Then $u$ is a classical subsolution (classical supersolution, classical solution) of (3.1.1) if and only if $u$ is a viscosity subsolution (viscosity supersolution, viscosity solution) of (3.1.1).

Our semilinear path-dependent integro-differential equation is closely connected to a family of non-Markovian BSDEs with jumps. To introduce this family, fix first $(s,\omega)\in\bar\Lambda$. Denote by $(Y^{s,\omega},Z^{s,\omega},U^{s,\omega})$ the unique solution to the BSDE
$Y^{s,\omega}_t = \xi + \int_t^T f_r\Big(X,Y^{s,\omega}_r,Z^{s,\omega}_r,\int_{\mathbb{R}^d}U^{s,\omega}_r(z)\,\rho_r(z)\,K_r(dz)\Big)\,dr - \int_t^T Z^{s,\omega}_r\,dX^{c,s,\omega}_r - \int_t^T\int_{\mathbb{R}^d}U^{s,\omega}_r(z)\,(\mu^X-\nu)(dr,dz), \qquad t\in[s,T],\ \mathbb{P}_{s,\omega}\text{-a.s.}$
Without loss of generality, we assume that $Y^{s,\omega}$ is right-continuous and $\mathbb{F}^0_+$-adapted.

Remark 3.3.7. By Theorem III.4.29 in [49], every $(\mathbb{P}_{s,\omega},\mathbb{F}^0_+)$-local martingale has the representation property relative to $X$ (see Definition III.4.22 in [49]). Therefore, one can prove well-posedness of the BSDE above by standard methods. For related results for BSDEs driven by càdlàg martingales, see [34] and [13]. In [99], which deals with BSDEs driven by càdlàg martingales and random measures, a special case of our BSDE is covered. Moreover, BSDEs driven by random measures in a general setting are treated in [17].

Next, define a functional $u^0:\bar\Lambda\to\mathbb{R}$ by
$u^0(t,\omega) := \mathbb{E}_{t,\omega}\big[Y^{t,\omega}_t\big].$
It will turn out that, under additional assumptions, $u^0$ is the unique solution to (3.1.1) satisfying $u^0_T=\xi$.

Theorem 3.3.8 (Existence). If $(B,C,\nu)$ and $\rho$ are deterministic, then $u^0$ is a viscosity solution of (3.1.1) and $u^0\in UC_b(\bar\Lambda)$.

Theorem 3.3.9 (Partial comparison I). Fix $(s,\omega)\in\bar\Lambda$.
Let $u^1$ be a viscosity subsolution of (3.1.1) on $\Lambda^s$ and let $u^2$ be a classical supersolution of (3.1.1) on $\Lambda^s$. Suppose that $u^1_T\le u^2_T$, $\mathbb{P}_{s,\omega}$-a.s. Then $u^1(s,\omega)\le u^2(s,\omega)$.

Theorem 3.3.10 (Stability). For every $\varepsilon>0$, let $(b^\varepsilon,c^\varepsilon,K^\varepsilon)$ together with some process $\sigma^\varepsilon$, some random field $\delta^\varepsilon$, and some $\sigma$-finite measure $F^\varepsilon$ satisfy Assumption 3.2.3 in place of $(b,c,K)$ together with $\sigma$, $\delta$, and $F$, and denote the corresponding linear integro-differential operator by $\mathcal{L}^\varepsilon$ (see Section 3). Also, for every $\varepsilon>0$, let $\rho^\varepsilon=\rho^\varepsilon_t(\omega,z)$ and $f^\varepsilon=f^\varepsilon_t(\omega,y,z,p)$ satisfy Assumption 3.2.4 and Assumption 3.2.6 in place of $\rho$ and $f$, respectively, and denote the corresponding integral operator by $\mathcal{I}^\varepsilon$, where $(\rho,K)$ is replaced with $(\rho^\varepsilon,K^\varepsilon)$. Suppose that $b^\varepsilon\to b$, $c^\varepsilon\to c$, $K^\varepsilon\to K$, $\rho^\varepsilon\to\rho$, and $f^\varepsilon\to f$ uniformly as $\varepsilon\downarrow0$. Fix $L>0$. For every $\varepsilon>0$, let $u^\varepsilon=u^\varepsilon(t,\omega)$ be a viscosity $L$-supersolution of (3.1.1) with $\mathcal{L}$ replaced by $\mathcal{L}^\varepsilon$ and $f$ replaced by $f^\varepsilon$. Suppose that $u^\varepsilon$ converges to some functional $u=u(t,\omega)$ on $\bar\Lambda$ uniformly as $\varepsilon\downarrow0$. Then $u$ is a viscosity $L$-supersolution of (3.1.1).

For the comparison principle, we have to employ the subsequent set of assumptions.

Assumption 3.3.11. Let $\xi$ be uniformly continuous with respect to the weak $M_1$-topology, i.e., with respect to the metric $d_p$ defined by $d_p(\omega,\tilde\omega):=\max_{i\le d}d_{M_1}(\omega^i,\tilde\omega^i)$, $\omega=(\omega^i)_{i\le d}$, $\tilde\omega=(\tilde\omega^i)_{i\le d}\in\Omega$ (see Theorem 12.5.2 in [98]).

Remark 3.3.12. If $d=1$, then the weak $M_1$-topology coincides with the $M_1$-topology. For its definition, see Section 3.7.2. For more details, we refer the reader to [98].

Assumption 3.3.13.
The triple $(b,c,K)$ is constant, the random field $\delta$ is deterministic and does not depend on $z$, and there exist positive constants $(L_\varepsilon)_{\varepsilon\in(0,1)}$ and $\lambda\in(0,1)$ such that the following holds:

(i) For every $\zeta=(\zeta^i)_{i\le d}\in\mathbb{R}^d$,
$\lambda|\zeta|^2 \le \frac12\sum_{i,j=1}^d c^{ij}\zeta^i\zeta^j \le \lambda^{-1}|\zeta|^2.$

(ii) For every $\alpha\in(0,1)\cap\mathbb{Q}$, there exists a positive constant $L_2(\alpha)$ such that
$\|\delta\|_{\alpha/2;[0,T]} \le L_2(\alpha).$

(iii) For every $\varepsilon\in(0,1)$, there exist nonnegative $\sigma$-finite measures $K^{1,\varepsilon}$ and $K^{2,\varepsilon}$ on $\mathcal{B}(\mathbb{R}^d)$ such that
$K(dz) \le K^{1,\varepsilon}(dz) + K^{2,\varepsilon}(dz), \qquad \int_{\mathbb{R}^d}\big(|z|^2+|z|\big)\,K^{1,\varepsilon}(dz) \le \varepsilon, \qquad K^{2,\varepsilon}(\mathbb{R}^d\setminus\{0\}) \le L_\varepsilon.$

Assumption 3.3.14. Suppose that there exists a constant $c^0_0\in(0,C^0_0)$ such that $K(\{z\in\mathbb{R}^d:|z|<c^0_0\})=0$ and that $K(\mathbb{R}^d)<\infty$.

Assumption 3.3.15. For every $\omega\in\Omega$ and every $\alpha\in(0,1)\cap\mathbb{Q}$, there exists a positive constant $L_1(\omega,\alpha)$ such that the following holds:

(i) For every $(y,z,p)\in\mathbb{R}\times\mathbb{R}^d\times\mathbb{R}$,
$\|f_\cdot(\omega,y,z,p)\|_{\alpha/2;[0,T]} \le L_1(\omega,\alpha)\,\big[|(y,z,p)|+1\big].$

(ii) For every $(t,p)\in[0,T]\times\mathbb{R}$, we have $f_t(\omega,\cdot,p)\in C^1(\mathbb{R}\times\mathbb{R}^d)$.

The following sets are also used in the proofs of the partial comparison principle and the comparison principle.

Definition 3.3.16 (The sets $\Pi^t_\infty$ and $\Pi^t_i$). Let $t\in[0,T]$ and $i\in\mathbb{N}$. Denote by $\Pi^t_\infty$ the set of all $\pi_\infty=(t_0,x_0,t_1,x_1,\dots,x_\infty)$ such that
(i) $t=t_0\le t_1\le\cdots\le T$,
(ii) $t_i=T$ for all but finitely many $i\in\mathbb{N}_0$,
(iii) $x_i\in\mathbb{R}^d$ for all $i\in\mathbb{N}_0\cup\{\infty\}$.
Denote by $\Pi^t_i$ the set of all $\pi_i=(s_0,y_0,\dots,s_{i-1},y_{i-1})$ such that
(i) $t=s_0\le\cdots\le s_{i-1}\le T$,
(ii) $y_j\in\mathbb{R}^d$ for all $j\in\{0,\dots,i-1\}$.

The following assumption is only needed for measurability issues in the proof of the comparison principle.

Assumption 3.3.17. For every $(s,\omega)\in\bar\Lambda$, $i\in\mathbb{N}$, and $\tilde\omega\in\Omega$, let the functions
$\Pi^t_i\to\mathbb{R}, \quad \pi_i=(s_0,y_0,\dots,s_{i-1},y_{i-1}) \mapsto \xi\Big(\omega\cdot1_{[0,s_0)} + \sum_{j=0}^{i-1}y_j\cdot1_{[s_j,s_{j+1})} + \tilde\omega\cdot1_{[s_i,T]}\Big);$
$\Pi^t_i\to\mathbb{R}, \quad \pi_i \mapsto f_t\Big(\omega\cdot1_{[0,s_0)} + \sum_{j=0}^{i-1}y_j\cdot1_{[s_j,s_{j+1})} + \tilde\omega\cdot1_{[s_i,T]},\,y,z,p\Big)$
be continuous uniformly in $(t,y,z,p)$.

Theorem 3.3.18 (Comparison). Suppose that Assumptions 3.3.11, 3.3.13, 3.3.14, 3.3.15, and 3.3.17 are satisfied.
If $u^1$ is a viscosity subsolution of (3.1.1), if $u^2$ is a viscosity supersolution of (3.1.1), and if $u^1_T\le u^2_T$, then $u^1\le u^2$.

Theorems 3.3.8 and 3.3.18 immediately yield our final main result.

Theorem 3.3.19 (Well-posedness). Suppose that Assumptions 3.3.11, 3.3.13, 3.3.14, 3.3.15, and 3.3.17 are satisfied. Then $u^0$ is the unique viscosity solution of (3.1.1) with $u^0_T=\xi$.

3.4 Consistency and Existence

Proof of Theorem 3.3.6. Clearly, if $u$ is a viscosity subsolution of (3.1.1), then it is a classical subsolution of (3.1.1). Let us now assume that $u$ is not a viscosity $L_0$-subsolution of (3.1.1) but a classical subsolution of (3.1.1). Then there exist $(s_0,\omega)\in\Lambda$ and $\varphi\in\underline{\mathcal{A}}^{L_0}u(s_0,\omega)$ with corresponding hitting time $h\in\mathcal{H}_{s_0}$ (see the definition of $\underline{\mathcal{A}}^{L_0}$) such that
$c_0 := \mathcal{L}\varphi(s_0,\omega) - f_{s_0}(\omega,\varphi(s_0,\omega),\partial_\omega\varphi(s_0,\omega),\mathcal{I}\varphi(s_0,\omega)) > 0.$
Without loss of generality, $s_0=0$. Put
$\tau := \inf\Big\{t\ge0 : \mathcal{L}\varphi_t - f_t(X,\varphi_t,\partial_\omega\varphi_t,\mathcal{I}\varphi_t) \le \frac{c_0}{2}\Big\}\wedge T.$
By right-continuity of the involved processes, $\tau>0$, $\mathbb{P}_{0,\omega}$-a.s. Let $H=(H^i)_{i\le d}$ be a stochastic process with $|\sigma^\top H|_\infty\le L_0$, let $W$ be a random field with $0\le W\le L_0$, and let $\Gamma=\Gamma^{H,W}$ (see Section 3.3). Then integration by parts (Lemma 3.7.15) yields
$\Gamma(\varphi-u) = \Big(\Gamma_-\Big[-\mathcal{L}\varphi + \mathcal{L}u + \sum_{k,l}H^k c^{kl}\,\partial_{\omega^l}(\varphi-u) + \int_{\mathbb{R}^d}W\,\nabla^2_z(\varphi-u)\,K(dz)\Big]\Big)\bullet t + \text{martingale}. \qquad (3.4.1)$
Given a strictly positive stopping time $\tilde\tau\le\tau\wedge h$ such that
$|f_t(X,\varphi_t,\partial_\omega\varphi_t,\mathcal{I}\varphi_t) - f_t(X,u_t,\partial_\omega\varphi_t,\mathcal{I}\varphi_t)| \le \frac{c_0}{4}$
on $[\![0,\tilde\tau[\![$, taking expectations yields
$\mathbb{E}_{0,\omega}\big[\Gamma_{\tilde\tau}(\varphi-u)_{\tilde\tau}\big] \le \mathbb{E}_{0,\omega}\Big[\int_0^{\tilde\tau}\Gamma_t\Big(-\frac{c_0}{2} - f_t(X,\varphi_t,\partial_\omega\varphi_t,\mathcal{I}\varphi_t) + f_t(X,u_t,\partial_\omega u_t,\mathcal{I}u_t) + \sum_{k,l}H^k_t c^{kl}\,\partial_{\omega^l}(\varphi-u)_t + \int_{\mathbb{R}^d}W_t\,\nabla^2_z(\varphi-u)_t\,K_t(dz)\Big)\,dt\Big]$
$\le \mathbb{E}_{0,\omega}\Big[\int_0^{\tilde\tau}\Gamma_t\Big(-\frac{c_0}{4} - f_t(X,u_t,\partial_\omega\varphi_t,\mathcal{I}\varphi_t) + f_t(X,u_t,\partial_\omega u_t,\mathcal{I}u_t) + \sum_{k,l}H^k_t c^{kl}\,\partial_{\omega^l}(\varphi-u)_t + \int_{\mathbb{R}^d}W_t\,\nabla^2_z(\varphi-u)_t\,K_t(dz)\Big)\,dt\Big]. \qquad (3.4.2)$

Define a function $\tilde f:\bar\Lambda\times\mathbb{R}\times\mathbb{R}^d\times\mathbb{R}\to\mathbb{R}$ by
$\tilde f_t(\tilde\omega,y,z,p) := f_t\big(\tilde\omega,y,(\sigma^\top_t(\tilde\omega))^{-1}z,p\big).$
Then $f_t(\tilde\omega,y,z,p) = \tilde f_t(\tilde\omega,y,\sigma^\top_t(\tilde\omega)z,p)$ and
$\big|\tilde f_t(\tilde\omega,y,z,p) - \tilde f_t(\tilde\omega,y,z',p)\big| \le L_0\,|z-z'|_1.$
Consequently,
$-f_t(X,u_t,\partial_\omega\varphi_t,\mathcal{I}\varphi_t) + f_t(X,u_t,\partial_\omega u_t,\mathcal{I}u_t) = \tilde f_t(X,u_t,\sigma^\top_t\partial_\omega u_t,\mathcal{I}u_t) - \tilde f_t(X,u_t,\sigma^\top_t\partial_\omega\varphi_t,\mathcal{I}\varphi_t)$
$= \big[\tilde f_t(X,u_t,\sigma^\top_t\partial_\omega u_t,\mathcal{I}u_t) - \tilde f_t\big(X,u_t,((\sigma^\top_t\partial_\omega u_t)^1,\dots,(\sigma^\top_t\partial_\omega u_t)^{d-1},(\sigma^\top_t\partial_\omega\varphi_t)^d),\mathcal{I}u_t\big)\big]$
$\quad + \big[\tilde f_t\big(X,u_t,((\sigma^\top_t\partial_\omega u_t)^1,\dots,(\sigma^\top_t\partial_\omega u_t)^{d-1},(\sigma^\top_t\partial_\omega\varphi_t)^d),\mathcal{I}u_t\big) - \tilde f_t\big(X,u_t,((\sigma^\top_t\partial_\omega u_t)^1,\dots,(\sigma^\top_t\partial_\omega u_t)^{d-2},(\sigma^\top_t\partial_\omega\varphi_t)^{d-1},(\sigma^\top_t\partial_\omega\varphi_t)^d),\mathcal{I}u_t\big)\big]$
$\quad + \cdots + \big[\tilde f_t\big(X,u_t,((\sigma^\top_t\partial_\omega u_t)^1,(\sigma^\top_t\partial_\omega\varphi_t)^2,\dots,(\sigma^\top_t\partial_\omega\varphi_t)^d),\mathcal{I}u_t\big) - \tilde f_t(X,u_t,\sigma^\top_t\partial_\omega\varphi_t,\mathcal{I}u_t)\big]$
$\quad + \big[\tilde f_t(X,u_t,\sigma^\top_t\partial_\omega\varphi_t,\mathcal{I}u_t) - \tilde f_t(X,u_t,\sigma^\top_t\partial_\omega\varphi_t,\mathcal{I}\varphi_t)\big]$
$= \lambda^d_t\,\big(\sigma^\top_t\partial_\omega(u-\varphi)_t\big)^d + \cdots + \lambda^1_t\,\big(\sigma^\top_t\partial_\omega(u-\varphi)_t\big)^1 + \int_{\mathbb{R}^d}\eta_t\,\rho_t(z)\,\nabla^2_z(u-\varphi)_t\,K_t(dz),$
where, for every $i\in\{1,\dots,d\}$,
$\lambda^i_t := \big[(\sigma^\top_t\partial_\omega(u-\varphi)_t)^i\big]^{-1}\,1_{\{(\sigma^\top_t\partial_\omega(u-\varphi)_t)^i\ne0\}}\,\big[\tilde f_t\big(X,u_t,((\sigma^\top_t\partial_\omega u_t)^1,\dots,(\sigma^\top_t\partial_\omega u_t)^i,(\sigma^\top_t\partial_\omega\varphi_t)^{i+1},\dots,(\sigma^\top_t\partial_\omega\varphi_t)^d),\mathcal{I}u_t\big) - \tilde f_t\big(X,u_t,((\sigma^\top_t\partial_\omega u_t)^1,\dots,(\sigma^\top_t\partial_\omega u_t)^{i-1},(\sigma^\top_t\partial_\omega\varphi_t)^i,\dots,(\sigma^\top_t\partial_\omega\varphi_t)^d),\mathcal{I}u_t\big)\big]$
and
$\eta_t := \big[\mathcal{I}(u_t-\varphi_t)\big]^{-1}\,1_{\{\mathcal{I}(u_t-\varphi_t)\ne0\}}\,\big[\tilde f_t(X,u_t,\sigma^\top_t\partial_\omega\varphi_t,\mathcal{I}u_t) - \tilde f_t(X,u_t,\sigma^\top_t\partial_\omega\varphi_t,\mathcal{I}\varphi_t)\big].$
Note that, by Assumption 3.2.4, we have $|\lambda^i|\le L_0$ and that, by Assumption 3.2.4 and Assumption 3.2.6, we have $0\le\eta\le L_0$. Our goal now is to establish $H^\top c\,\partial_\omega(\varphi-u) = \lambda^\top\big[\sigma^\top\partial_\omega(\varphi-u)\big]$, which, by putting $\tilde H=\sigma^\top H$ and $\tilde Z=\sigma^\top\partial_\omega(\varphi-u)$, is equivalent to $\tilde H^\top\tilde Z = \lambda^\top\tilde Z$. Hence, if $H$ is given by $H=(\sigma^\top)^{-1}\tilde H$ in the case $d>1$ and by $H=\sigma^{-1}\tilde H\,1_{\{\sigma\ne0\}}$ in the case $d=1$, where $\tilde H^i=\lambda^i$, and if $W$ is given by $W_t(z)=\eta_t\,\rho_t(z)$, then $|\sigma^\top H|_\infty = |\lambda|_\infty \le L_0$, $0\le W\le L_0$, and, provided $\tilde\tau$ is sufficiently small, which is possible because the processes involved are right-continuous, we have, by (3.4.2),
$\mathbb{E}_{0,\omega}\big[\Gamma_T(\varphi-u)_{\tilde\tau}\big] = \mathbb{E}_{0,\omega}\big[\Gamma_{\tilde\tau}(\varphi-u)_{\tilde\tau}\big] \le \mathbb{E}_{0,\omega}\Big[\int_0^{\tilde\tau}\Gamma_t\Big(-\frac{c_0}{4}\Big)\,dt\Big] \le -\mathbb{E}_{0,\omega}\big[\tilde\tau\big]\,\frac{c_0}{8} < 0,$
i.e., $\underline{\mathcal{E}}^{L_0}_{0,\omega}\big[(\varphi-u)_{\tilde\tau}\big] < 0$, which contradicts $\varphi\in\underline{\mathcal{A}}^{L_0}u(0,\omega)$. Thus $u$ is a viscosity $L_0$-subsolution. Similarly, one can show the corresponding statement for supersolutions. $\square$
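The coordinate-wise telescoping used in the proof above can be illustrated numerically. The following Python sketch (my own toy driver, not from the text) splits $f(z)-f(z')$ into $\sum_i\lambda_i(z_i-z'_i)$ exactly as the processes $\lambda^i$ are built, with each $|\lambda_i|$ bounded by the coordinate-wise Lipschitz constant.

```python
import numpy as np

# Toy version of the telescoping trick: replace the coordinates of z by those
# of z' one at a time; each increment defines a bounded coefficient lam_i.
def telescope_coeffs(f, z, zp):
    lam = np.zeros(len(z))
    for i in range(len(z)):
        hi = np.concatenate([z[:i + 1], zp[i + 1:]])   # z up to i, z' after
        lo = np.concatenate([z[:i], zp[i:]])           # z up to i-1, z' after
        d = z[i] - zp[i]
        lam[i] = (f(hi) - f(lo)) / d if d != 0 else 0.0
    return lam

f = lambda v: float(np.sum(np.sin(v)))   # Lipschitz constant 1 per coordinate
rng = np.random.default_rng(2)
z, zp = rng.normal(size=4), rng.normal(size=4)
lam = telescope_coeffs(f, z, zp)
```

Since consecutive telescoping terms cancel, `lam @ (z - zp)` reproduces `f(z) - f(zp)` exactly, and the mean value theorem bounds each coefficient by the Lipschitz constant; this is the mechanism behind the bound $|\lambda|_\infty\le L_0$.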
Next, we prove regularity of u 0 . To this end, we need the following result. 90 Lemma 3.4.1. Suppose that (B;C;) and are deterministic. Fix (s;!)2 . Dene a process ~ Y on [s;T ] by ~ Y t :=Y s;! t (!:1 [0;s) + (X +! s ):1 [s;T ] ): Then there exist processes ~ Z and ~ U(z) in the appropriate spaces such that ( ~ Y; ~ Z; ~ U) is the solution to the BSDE ~ Y t =(!:1 [0;s) + (X +! s ):1 [s;T ] ) + Z T t f r (!:1 [0;s) + (X +! s ):1 [s;T ] ; ~ Y r ; ~ Z r ; Z R d ~ U r (z) r (z)K r (dz))dr Z T t ~ Z r dX c;s;0 r Z T t Z R d ~ U r (z) ( X )(dr;dz); t2 [s;T ];P s;0 -a.s. (3.4.3) Proof. Put Z 0 :=Z s;! (!:1 [0;s) + (X +! s ):1 [s;T ] ); U 0 :=U s;! (!:1 [0;s) + (X +! s ):1 [s;T ] ): Since (B;C;) and are deterministic, the process M on [s;T ] dened by M t := ~ Y t ~ Y s R t s f r (!:1 [0;s) + (X +! s ):1 [s;T ] ; ~ Y r ;Z 0 r ; R R d U 0 r (z) r (z)K r (dz)) 91 is an (F s;0 ;P s;0 )-martingale. Thus, we can, for every n 2 N 0 , dene a pair (Z n+1 ;U n+1 ) inductively by ~ Y t =(!:1 [0;s) + (X +! s ):1 [s;T ] ) + Z T t f r (!:1 [0;s) + (X +! s ):1 [s;T ] ; ~ Y r ;Z n r ; Z R d U n r (z) r (z)K r (dz))dr Z T t Z n+1 r dX c;s;0 r Z T t Z R d U n+1 r (z) ( X )(dr;dz); t2 [s;T ];P s;0 -a.s. Since (Z n ;U n ) converges to some limit ( ~ Z; ~ U), the triple ( ~ Y; ~ Z; ~ U) is a solution to (3.4.3). Proposition 3.4.2. If (B;C;) and are deterministic, then u 0 2UC b ( ). Proof. Let 1 be an increasing modulus of continuity of and of (t; ~ !)7! f t (~ !;y;z;p), uniformly in t, y, z, and p, with upper boundk 1 k 1 > 0. Let (s;!), (s 0 ;! 0 )2 with ss 0 . Then u 0 (s;!)u 0 (s 0 ;! 0 ) u 0 (s;!)u 0 (s 0 ;! ^s ) + u 0 (s 0 ;! ^s )u 0 (s 0 ;! 0 ) =:A 1 +A 2 : 92 Let us start with estimating A 2 . To this end, put ~ Y 1 t := Y s 0 ;!^s t (! ^s :1 [0;s 0 ) + (X +! s ):1 [s 0 ;T ] ); ~ 1 := (! ^s :1 [0;s 0 ) + (X +! s ):1 [s 0 ;T ] ); ~ f 1 t (X;y;z;p) := f t (! ^s :1 [0;s 0 ) + (X +! s ):1 [s 0 ;T ] ;y;z;p); and ~ Y 2 t := Y s 0 ;! 0 t (! 0 ^s 0:1 [0;s 0 ) + (X +! 
0 s 0):1 [s 0 ;T ] ); ~ 2 := (! 0 ^s 0:1 [0;s 0 ) + (X +! 0 s 0):1 [s 0 ;T ] ); ~ f 2 t (X;y;z;p) := f t (! 0 ^s 0:1 [0;s 0 ) + (X +! 0 s 0):1 [s 0 ;T ] ;y;z;p): Since (B;C;) and are deterministic, there exists, by Lemma 3.4.1, for every i2f1; 2g, a pair ( ~ Z i ; ~ U i ) such that the triple ( ~ Y i ; ~ Z i ; ~ U i ) is the solution to the BSDE ~ Y i t = ~ i + Z T t ~ f i r (X; ~ Y i r ; ~ Z i r ; Z R d ~ U i r (z) r (z)K r (dz))dr Z T t ~ Z i r dX c;s 0 ;0 r Z T t Z R d ~ U i r (z) ( X )(dr;dz); t2 [s 0 ;T ];P s 0 ;0 -a.s. 93 Therefore and using again the fact that (B;C;) is deterministic, we have A 2 = E s 0 ;!^s [Y s 0 ;!^s s 0 ]E s 0 ;! 0[Y s 0 ;! 0 s 0 ] = E s 0 ;0 [ ~ Y 1 s 0 ~ Y 2 s 0] : (3.4.4) Now, note that, for every t2 [s 0 ;T ], ~ f 1 t (X; ~ Y 1 t ; ~ Z 1 t ; R R d ~ U 1 t (z) t (z)K t (dz)) ~ f 2 t (X; ~ Y 2 t ; ~ Z 2 t ; R R d ~ U 2 t (z) t (z)K t (dz)) = t [ ~ Y 1 t ~ Y 2 t ] + P d j=1 j t [ > t ( ~ Z 1 t ~ Z 2 t )] j + R R d t t (z) ( ~ U 1 t (z) ~ U 2 t (z))K t (dz) + ~ f 1 t (X; ~ Y 2 t ; ~ Z 2 t ; R R d ~ U 2 t (z) t (z)K t (dz)) ~ f 2 t (X; ~ Y 2 t ; ~ Z 2 t ; R R d ~ U 2 t (z) t (z)K t (dz)); where t := [ ~ Y 1 t ~ Y 2 t ] 1 :1 f ~ Y 1 t ~ Y 2 t 6=0g [ ~ f 1 t (X; ~ Y 1 t ; ~ Z 1 t ; Z R d ~ U 1 t (z) t (z)K t (dz)) ~ f 1 t (X; ~ Y 2 t ; ~ Z 1 t ; Z R d ~ U 1 t (z) t (z)K t (dz))]; and the processes j , j = 1, :::, d, and are dened similarly (with the obvious changes) as in the proof of Theorem 3.3.6. Also, as in said proof, dene a d- dimensional process H by H j := [( > ) 1 ] j in the case d > 1 and by H := 1 94 in the case d = 1, dene a random eld W by W t (z) := t t (z), and consider the solution ~ of ~ = 1 + ( ~ ) t + ( ~ H) X c;s 0 ;0 + ( ~ W ) ( X ) on [s 0 ;T ] with ~ = 1 on [0;s 0 ),P s 0 ;0 -a.s. Integration-by-parts (Lemma 3.7.15) yields ~ ( ~ Y 1 ~ Y 2 ) = ( ~ Y 1 ~ Y 2 ) s 0 ~ [ ~ f 1 (X; ~ Y 2 ; ~ Z 2 ; Z R d ~ U 2 (z)(z)K(dz)) ~ f 2 (X; ~ Y 2 ; ~ Z 2 ; Z R d ~ U 2 (z)(z)K(dz))] t +martingale,P s 0 ;0 -a.s. 
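The pairs (Z^n, U^n) in the proof of Lemma 3.4.1 above are produced by a Picard-type iteration, which converges because the underlying map is a contraction. A toy scalar analogue (purely illustrative, not the BSDE itself): iterate y -> xi + h*f(y), where h times the Lipschitz constant of f is below one.

```python
import math

def picard(xi, f, h, n_iter=60):
    """Picard iteration for the scalar backward equation y = xi + h * f(y);
    converges geometrically when h * Lip(f) < 1."""
    y = xi
    for _ in range(n_iter):
        y = xi + h * f(y)
    return y

xi, h = 1.0, 0.4
f = math.sin                 # Lipschitz constant 1, so the contraction factor is 0.4
y = picard(xi, f, h)
assert abs(y - (xi + h * f(y))) < 1e-10   # fixed point reached
```

In the BSDE setting the contraction acts on a weighted L^2 space of processes rather than on scalars, but the convergence mechanism is the same.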
Since , f, and are bounded, we get, together with (3.4.4), A 2 e (Ts 0 )L 0 E s 0 ;0 h ~ 1 ~ 2 i + Z T s 0 e (ts 0 )L 0 1 ((t;! ^s ); (t;! 0 ^s 0))dt C 0 1 (k! ^s ! 0 ^s 0k 1 ); where C 0 does not depend on (s;!) and (s 0 ;! 0 ). To deal with A 1 , let "> 0. Our goal is to nd a 0 > 0 such that d 1 ((s;!); (s 0 ;! 0 ))< 0 95 implies A 1 <". In order to estimate A 1 , put for every ~ !2 , ~ Y 3;~ ! t := Y s;! t (~ !:1 [0;s 0 ) + (X + ~ ! s 0):1 [s 0 ;T ] ); ~ 3;~ ! := (~ !:1 [0;s 0 ) + (X + ~ ! s 0):1 [s 0 ;T ] ); ~ f 3;~ ! t (X;y;z;p) := f t (~ !:1 [0;s 0 ) + (X + ~ ! s 0):1 [s 0 ;T ] ;y;z;p): Since (B;C;) is deterministic and since, forP s;! -a.e. ~ !2 , Y s;! s 0 (~ !) =E s;! [Y s;! s 0 jF 0 s 0 + ](~ !) =E s 0 ;~ ! [Y s;! s 0 ]; we have A 1 = E s;! [Y s;! s ]E s 0 ;!^s [Y s 0 ;!^s s 0 ] E s;! " Z s 0 s f t X;Y s;! t ;Z s;! t ; Z R d U s;! t (z) t (z)K t (dz) dt # + E s;! [Y s;! s 0 ]E s 0 ;!^s [Y s 0 ;!^s s 0 ] (s 0 s)C 0 0 + Z E s 0 ;~ ! [Y s;! s 0 ]E s 0 ;!^s [Y s 0 ;!^s s 0 ]P s;! (d~ !) = (s 0 s)C 0 0 + Z E s 0 ;0 [ ~ Y 3;~ ! s 0 ~ Y 1 s 0]P s;! (d~ !) : (3.4.5) Similarly, as we estimated A 2 in (3.4.5), we get, for every ~ !2 , E s 0 ;0 [ ~ Y 3;~ ! s 0 ~ Y 1 s 0] C 0 1 (k~ ! ^s 0! ^s k 1 ); (3.4.6) 96 whereC 0 > 0 does not depend ons,s 0 ,!, and ~ !. Note that, since 1 is continuous at 0, there exists a 00 such that C 0 1 ( 00 ) < "=2. Thus, by (3.4.5) together with (3.4.6), A 1 (s 0 s)C 0 0 +C 0 E s;! [ 1 (kX ^s 0! ^s k 1 ] = (s 0 s)C 0 0 +C 0 E s;! [ 1 ( sup t2[s;s 0 ] jX t X s j) 1 fsup t2[s;s 0 ] jXtXsj< 00 g + 1 fsup t2[s;s 0 ] jXtXsj 00 g ] (s 0 s)C 0 0 + " 2 +C 0 k 1 k 1 P s;! sup t2[s;s 0 ] jX t X s j 00 ! : (3.4.7) Recall that on [s;T ], X =X s +p s B +M; P s;! -a.s., whereM :=X c;s;! +z ( X ) is a (P s;! ;F 0 + )-martingale on [s;T ]. 
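The martingale part M of the semimartingale decomposition just recalled is controlled by Doob's maximal inequality. As a sanity check, Doob's L^2 inequality E[sup_t |M_t|^2] <= 4 E[|M_T|^2] can be verified exhaustively for a simple symmetric random walk, which is a discrete-time martingale:

```python
from itertools import product

# Exhaustive check of Doob's L2 maximal inequality for a symmetric random walk:
n = 10
lhs = rhs = 0.0
for steps in product((-1, 1), repeat=n):   # all 2^n equally likely paths
    path, m = [0], 0
    for s in steps:
        m += s
        path.append(m)
    lhs += max(abs(x) for x in path) ** 2  # running-supremum squared
    rhs += path[-1] ** 2                   # terminal value squared
lhs /= 2 ** n
rhs /= 2 ** n
assert lhs <= 4 * rhs
```

Here rhs equals n = 10 exactly (the variance of the walk), so the bound E[sup|M|^2] <= 40 holds with room to spare.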
Without loss of generality, let 0 C 0 0 < 00 2 and s 0 s< 0 : (3.4.8) Thus, since sup t2[s;s 0 ] jX t X s j 00 97 implies sup t2[s;s 0 ] jp s B t j + sup t2[s;s 0 ] jM t j 00 but, by (3.4.8), sup t2[s;s 0 ] jp s B t j Z s0 s jb t j dt (s 0 s)C 0 0 00 2 ; we have, by Doob's inequality and It^ o's lemma, P s;! sup t2[s;s 0 ] jX t X s j 00 ! P s;! sup t2[s;s 0 ] jM t j 00 2 ! P s;! sup t2[s;s 0 ] jM t j 2 j 00 j 2 4 ! 4 j 00 j 2 E s;! jM s 0j 2 4 j 00 j 2 E s;! " X id Z s 0 s c ii t dt + Z s 0 s Z R d jzj 2 ^C 0 0 K t (dz)dt # 4 j 00 j 2 (s 0 s)(dC 0 0 +C 00 0 ): Together with (3.4.7), we get A 1 (s 0 s)C 00 + " 2 98 for some constant C 00 > 0 that does not depend on s, s 0 , !, and ! 0 provided that (3.4.8) holds. I.e., if d 1 ((s;!); (s 0 ;! 0 ))< " 2C 00 ^ 00 2C 0 0 ; then A 1 <". Lemma 3.4.3. Fix (s;!)2 and t2 [s;T ]. For P s;! -a.e. ~ !2 , the processes Y s;! and Y t;~ ! are P t;~ ! -indistinguishable on [t;T ]. Proof. Let 0 be the set of all! 0 2 such that the processM on [s;T ] dened by M r :=Y s;! r Y s;! t Z r t f (X;Y s;! ;Z s;! ; Z R d U s;! (z) (z)K (dz))d is an (F t;! 0 ;P t;! 0)-martingale. By Proposition 3.7.8,P s;! ( 0 ) = 1. Now, let ~ !2 0 . Put (Z 0 ;U 0 ) := (Z s;! ;U s;! ). Since M is an (F t;~ ! ;P t;~ ! )-martingale, we can, for every n2N 0 , dene (Z n+1 ;U n+1 ) inductively by Y s;! r = + R T r f (X;Y s;! ;Z n ; R R d U n (z) (z)K (dz))d R T r Z n+1 dX c;t;~ ! R T r R R d U n+1 (z) ( X )(d;dz); r2 [t;T ]; P t;~ ! -a.s. 99 Note that (Z n ;U n ) converges to some limit (Z;U) and that (Y s;! ;Z;U) solves Y s;! r = + R T r f (X;Y s;! ;Z ; R R d U (z) (z)K (dz))d R T r Z dX c;t;~ ! R T r R R d U (z) ( X )(d;dz); r2 [t;T ]; P t;~ ! -a.s. Uniqueness for BSDEs concludes the proof. Remark 3.4.4. Fix (s;!)2 . By Lemma 3.4.3 and by Proposition 3.7.7, for every t2 [s;T ], and forP s;! -a.e. ~ !2 , u 0 (t; ~ !) = E t;~ ! [Y s;! t ] =E s;! [Y s;! t jF 0 t+ ](~ ! =Y s;! t (~ !): If u 0 2 C( ), then u 0 and Y s;! are P s;! 
-indistinguishable on [s;T ] because both processes are right-continuous. Proof of Theorem 3.3.8. By Proposition 3.4.2, u 0 2C b ( ). Thus u 0 is bounded, right-continuous, non-anticipating, and P s;! -quasi-left-continuous for every (s;!)2 . Keeping Remark 3.4.4 in mind, one can easily show that u 0 is a viscosity subsolution. To do so, follow the lines of the corresponding part of the proof of Theorem 3.3.6 and replace in (3.4.1) Lu by f X;u;Z 0;! ; R U 0;! (z)(z)K(dz) and everywhere @ ! u by Z 0;! ; Iu by R U 0;! (z)(z)K(dz); and r 2 z u by U 0;! (z): Similarly, one can show that u 0 is a viscosity supersolution. 3.5 Partial Comparison and Stability Before we begin to prove the partial comparison principle itself, we need to establish some auxiliary results about BSDEs, reflected BSDEs (RBSDEs), and optimal stopping. 3.5.1 BSDEs with jumps and nonlinear expectations Given (s;!)2 , L > 0, 2T s (F s;! ), and an F s;! -measurable random variable ~ : !R, denote by (Y s;! (L;; ~ );Z s;! (L;; ~ );U s;! (L;; ~ )) the solution to the BSDE Y t = ~ + Z T t 1 fr<g L > r Z r 1 + Z R d U r (z) + r (z)K r (dz) dr Z T t Z r dX c;s;! r Z T t Z R d U r (z) ( X )(dr;dz); t2 [s;T ]; P s;! -a.s. Remark 3.5.1. Note that in the driver of the BSDE above we use the positive part U(z) + instead of the absolute value jU(z)j in order for the comparison principle for BSDEs to hold (see [3]). Lemma 3.5.2. We have E L s;! [ ~ ] =E s;! [Y s;! s (L;; ~ )]: (3.5.1) Proof. For the sake of readability, we omit writing s;! (L;; ~ ) in this proof. Given a process H and a random field W , let = H;W be the solution to = 1 + ( H) X c;s;! + ( W ) ( X ) on [s;T ] with = 1 on [0;s), P s;! -a.s. Since Z = 1 Js;J Z, U(z) = 1 Js;J U(z), and Y = Y s 1 Js;J L > Z 1 + Z R d LU(z) + K(dz) t + X i Z i X i;c;s;!
+U ( X ); integration-by-parts (Lemma 3.7.15) yields Y =Y s + 1 Js;J " L > Z 1 + X i;j H i Z j c ij # t + 1 Js;J Z R d LU(z) + (z)K(dz) + Z R d W (z)U(z)K(dz) t + martingale =:Y s +A 1 +A 2 + martingale: (3.5.2) Our goal is to choose H and W so that the drift term A 1 +A 2 vanishes. Let us rst deal with A 2 . If W is dened by W (z) =L(z):1 fU(z)>0g ; (3.5.3) then A 2 = 0 and 0W (z)L(z). Next, we deal with A 1 . We need H > cZ = (H > )( > Z) =L > Z 1 (3.5.4) 103 to hold. To this end, put ~ Z = > Z and ~ H = > H. Then (3.5.4) is equivalent to X i ~ H i ~ Z i =L X i ~ Z i : Thus, if ~ H i =L ~ Z i ~ Z i :1 f ~ Z i 6=0g and H = 8 > > > < > > > : ( > ) 1 ~ H if d> 1, 1 ~ H:1 f6=0g if d = 1, (3.5.5) then we get (3.5.4), i.e., A 1 = 0, and, moreover, we have > H 1 = ~ H 1 L. Consequently, for H dened by (3.5.5) and W dened by (3.5.3), we have E L s;! [ ~ ]E s;! H;W Y =E s;! [Y s ]: (3.5.6) On the other hand, for every process H with > H 1 L and every random eld W with 0W (z)L(z), we have H > cZ > H 1 > Z 1 L > Z 1 and W (z)U(z)L(z)U(z) + ; which, by (3.5.2), yields E s;! H;W Y E s;! [Y s ]: 104 This, together with (3.5.6), establishes (3.5.1). 3.5.2 RBSDEs with jumps Our proof of the partial comparison principle relies heavily on the theory of RBS- DEs. See [35], [45], [24], and Chapter 14 of [26] for more details. Fix a bounded, right-continuous, F-adapted process R : ! R that is P s;! - quasi-left-continous on [s;T ] for every (s;!)2 . Fix also L 0. For every (s;!)2 and h2T s (F s;! ), there exists, because of the martingale representation property (Theorem III.4.29 in [49]), a unique solution ( Y; Z; U; K) = ( Y s;! (L;h;R); Z s;! (L;h;R); U s;! (L;h;R); K s;! (L;h;R)) with Y = Y ^h , Z = 1 fhg Z, U = 1 fhg U, and K = K ^h to the following RBSDE with lower barrier R and random terminal time h (cf. Remark 2.4 in [24]): Y t =R h + Z T t 1 frhg L > r Z r 1 + Z R d U r (z) + r (z)K r (dz) dr + K h K t Z T t Z r dX c;s;! r Z T t Z R d U r (z) ( X )(dr;dz);t2 [s;T ]; P s;! 
-a.s., Y t R t^h ;t2 [s;T ]; P s;! -a.s., Z T s ( Y t R t )d K t = 0; K s;! s = 0; K is continuous and nondecreasing. 105 Lemma 3.5.3. Fix2T s (F s;! ) andh2H s;! withh,P s;! -a.s. Then, forP s;! - a.e. ~ !2 , the processes Y s;! (L;h;R) and Y ;~ ! (L;h;R) areP ;~ ! -indistinguishable on [(~ !);T ] and K ;~ ! (L;h;R) = K s;! (L;h;R) K s;! (~ !) (L;h;R) on [(~ !);T ]. Proof. Proceed as in the proof of Lemma 3.4.3 and note that Proposition 3.7.8 also applies to stopping times. Utilizing uniqueness for RBSDEs will conclude the proof. Given h2T s (F s;! ), consider the optimal stopping times s;!;h := inffts : Y s;! t (L;h;R) =R t^h g; s;!;h := inffts : K s;! t (L;h;R)> 0g: Note that, since K s;! (L;h;R) is continuous, we have s;!;h s;!;h ^h. Lemma 3.5.4. If h2T s (F s;! ), then E s;! [ Y s;! s (L;h;R)] = sup 2Ts(F s;! ) E L s;! [R ^h ]: (3.5.7) A corresponding result for RBSDEs without jumps has been proven in [74]. We follow the approach in [6], where quadratic RBSDEs without jumps are studied. Proof of Lemma 3.5.4. Provided there is no danger of confusion, we omit to 106 write s;! (L;h;R) in this proof. Let us rst x a stopping time 2T s (F s;! ). Note that Y t = Y ^h + Z T t 1 fr^hg L > r Z r 1 + Z R d U r (z) + r (z)K r (dz) dr + K ^h K t Z T t 1 fr^hg Z r dX c;s;! r Z T t Z R d 1 fr^hg U r (z) ( X )(dr;dz); t2 [s;T ];P s;! -a.s. Since Y ^h R ^h and K ^h K t 0, the comparison principle for BSDEs with jumps (combine, e.g., the proofs of Theorem 4.2 of [24] and Theorem 5.1 of [6]) yields Y s;! t (L;h;R)Y s;! t (L;^h;R ); stT; P s;! -a.s. Consequently, by Lemma 3.5.2, E s;! [ Y s ] sup 2Ts(F s;! ) E L s;! [R ^h ]: (3.5.8) Next, consider the (optimal) stopping time := s;!;h . Since K = 0 on Js; K, h, and Y =R , we have Y s;! ^ (L;h;R) =Y s;! (L; ^h;R ^h ): 107 Thus, by Lemma 3.5.2,E s;! [ Y s ] =E L s;! [R ^h ]. Together with (3.5.8), we get (3.5.7), which completes the proof. Lemma 3.5.5. If h2H s;! , then for P s;! -a.e. ~ !2 , Y s;! 
s;!;h (~ !;L;h;R) = sup 2T s;!;h (~ !) (F s;!;h ;~ ! ) E L s;!;h (~ !) [R ^h ]: Proof. Write instead of s;!;h (~ !). Let ~ be a stopping time belonging to T s (F 0 + ) such that ~ = , P s;! -a.s. Note that ~ h, P s;! -a.s. Then, for P s;! -a.e. ~ !2 , Y s;! (~ !;L;h;R) = Y s;! ~ (~ !; L;h;R) = E s;! [ Y s;! ~ (L;h;R)jF 0 ~ + ](~ !) = E ~ ;~ ! [ Y s;! ~ (~ !) (L;h;R)] by (3.7.2) and Lemma 3.7.5 = E ~ ;~ ! [ Y ~ ;~ ! ~ (~ !) (L;h;R)] by Lemma 3.5.3 = sup 2T ~ (~ !) (F ~ ;~ ! ) E L ~ ;~ ! [R ^H ] by Lemma 3.5.4 = sup 2T (~ !) (F ;~ ! ) E L ;~ ! [R ^H ]: This concludes the proof. 3.5.3 Partial Comparison We will need the following modification of the partial comparison principle. Theorem 3.3.9 can be proven similarly. In order to formulate our result, we need the following definition. It might be helpful to recall Definition 3.3.16. Definition 3.5.6. Fix t2 [0;T ). The space C 1;2 b ( t ) is the set of all universally measurable functionals u : t ! R for which there exist a sequence ( n ) n2N 0 of stopping times in H t and a collection of functions (# n ( n ;)) n2N;n2 t n on [t;T ]R d such that the following holds: (i) The sequence ( n ) is nondecreasing, 0 = t, n < n+1 if n < T , and, for every !2 , there exists an m2N such that m (!) =T . (ii) For every n2 N, # n = # n ( n ;t;x) is universally measurable and, for every n = (s i ;y i ) 0in1 , the function # n ( n ;) is continuous on [s n1 ;T ]R d and belongs to C 1;2 b ([s n1 ;T )O " (y n1 )) for some "> 0. (iii) For every n2N and !2 , # n (( i (!);X i (!)) 0in1 ; n (!);X n (!)) = # n+1 (( i (!);X i (!)) 0in ; n (!);X n (!)):1 fn<Tg (!) +u(T;!):1 fn=Tg (!) (iv) We have the representation u(s;!) = X n1 # n (( i (!);X i (!)) 0in1 ;s;! s ):1 J n1 ;nJ (s;!) +u(T;!):1 fTg (s): Theorem 3.5.7 (Partial Comparison II). Fix (s;!) 2 . Let u 1 be a viscosity subsolution of (3.1.1) on s .
Let u 2 2 C 1;2 b ( s ) with a corresponding sequence ( n ) of stopping times and a corresponding collection (# n ) of function- 110 als such that, for every n 2 N and every (r; ~ !) 2 J n1 ; n J, we have, with n = (h t;" i (~ !);X h t;" i (~ !)) 0in1 , @ t # n ( n ;r; ~ ! r ) d X i=1 b i r (~ !)@ x i# n ( n ;r; ~ ! r ) 1 2 d X i;j=1 c ij r (~ !)@ 2 x i x j# n ( n ;r; ~ ! r ) Z R d " # n ( n ;r; ~ ! r +z)# n ( n ;r; ~ ! r ) d X i=1 z i @ x i# n ( n ;r; ~ ! r ) # K r (~ !;dz) f r ~ !;# n ( n ;r; ~ ! r );@ x # n ( n ;r; ~ ! r ); Z R d [# n ( n ;r; ~ ! r +z)# n ( n ;r; ~ ! r )] r (~ !)K r (~ !;dz) ! 0: Suppose that u 1 T u 2 T , P s;! -a.s. Then u 1 (s;!)u 2 (s;!). Proof. Our proof follows the approach of Lemma 5.7 in [31]. However, due to our more general setting and dierent denition of C 1;2 b , details are somewhat more involved and therefore we shall give a complete proof. Without loss of generality, let s = 0 and let f be nonincreasing in y (cf. Remark 3.9 in [31]). For the sake of a contradiction, assume that c := (u 1 u 2 )(0;!)> 0: Dene a process R : !R by R t := (u 1 u 2 ) t + ct 2T : 111 Note that R is P t;~ ! -quasi-left-continuous for every (t; ~ !) 2 , bounded, right- continuous, andF-adapted. Thus the results about RBSDEs and optimal stopping in this section are applicable. Put h := infft 0 : (u 1 u 2 ) t 0g: Clearly, hT ,P 0;! -a.s. Put Y := Y 0;! (L;h;R) and := infft 0 : Y t =R t^h g: Since Y h =R h , we have h. Note thatE L 0;! [R H ]c=2, but, by Lemma 3.5.4, E L 0;! [R ] =E 0;! [ Y 0 ]E 0;! [R 0 ] =c: Thus, P 0;! ( < h) > 0. Consequently, there exists an ! 2 such that t := (! ) < h(! ), that (t ;! )2 J n1 ; n J for some n2N, and that, according to Lemma 3.5.5 and Lemma 3.2.9, from which we get the existence of a hitting time ~ h2H t with ~ h>t , ~ h(~ !) =h(~ !), and ~ h =h,P 0;! -a.s., we have R t (! ) = sup 2T t (F t ;! ) E L t ;! [R ^~ h ]: 112 Next, dene a process ~ R : t !R by ~ R t :=R t R t (! ). Then 0 = ~ R t (! ) = sup 2T t (F t ;! 
) E L t ;! [ ~ R ^~ h ]: Our goal is to establish a decomposition ~ R = u 1 ' such that '2 C 0 b ( t )\ C 1;2 b (Jt ;K) for some stopping time t . Since ~ R t =u 1 t u 2 t + (u 1 u 2 )(t ;! ) c(tt ) 2T ; we get such a decomposition with = n by setting ' := u 1 ~ R. Then '2 A L u 1 (t ;! ), and since (u 1 u 2 )(t ;! ) 0 (, which follows from t < ~ h(! )), we have 0 L'(t ;! )f t (! ;'(t ;! );@ ! '(t ;! );I'(t ;! )) = Lu 2 (t ;! ) + c 2T f t (! ;'(t ;! );@ ! u 2 (t ;! );Iu 2 (t ;! )) > Lu 2 (t ;! )f t (! ;u 2 (t ;! );@ ! u 2 (t ;! );Iu 2 (t ;! )): But this is a contradiction to u 2 being a classical supersolution. 113 3.5.4 Stability Proof of Theorem 3.3.10. Let" 0 > 0 be undetermined for the moment. Assume that u is not a viscosity L-supersolution of (3.1.1). Then there exist (s 0 ;!)2 and '2A L u(s 0 ;!) such that c 0 :=L'(s 0 ;!)f s 0 (!;'(s 0 ;!);@ ! '(s 0 ;!);I'(s 0 ;!))< 0: Without loss of generality, s 0 = 0. Next, dene processes R, R " : !R, " > 0, by R t :=' t u t " 0 t; R " t :=' t u " t " 0 t: Also, put 1 := inf t 0 :L' t f t (X;' t ;@ ! ' t ;I' t ) c 0 2 : Note that 1 2H 0 with 1 > 0, P 0;! -a.s. Thus, since '2A L u(0;!), there exists anh2H 0;! withh> 0,P 0;! -a.s., such that 0 =R 0 >E L 0;! [R 1 ^h ] because we have E L 0;! [(" 0 )( 1 ^h)]< 0 for otherwiseE L 0;! [(" 0 )( 1 ^h)] = 0 =E L 0;! [0] would, by the comparison principle for BSDEs with jumps (cf. Theorem 3.2.1 in [26]) together 114 with Lemma 3.5.2, imply that (" 0 )( 1 ^h) = 0,P 0;! -a.s., which is a contradiction. Now, let " suciently small so that R " 0 >E L 0;! [R " 1 ^h ]. Put 2 := 1 ^h; Y " := Y 0;! (L; 2 ;R " ); " := infft 0 : Y " t =R " t^ 2 g; where we used the notation of Subsection 3.5 for RBSDEs. Then,P 0;! ( " < 2 )> 0 because otherwise, by Lemma 3.5.4, R " 0 Y " 0 =E L 0;! [R " 2 ] < R " 0 . That is, there exists an ! " 2 such that t " := " (! " ) < 2 (! " ), that 2 2H t " with 2 > t " , P t " ;! "-a.s. (, which is possible by Lemma 3.2.9), and that R " t "(! 
" ) = sup 2T t " (F t " ;! " ) E L 0;! R " ^ 2 : Dene ~ R " : t " !R by ~ R " t :=R " t R t "(! " ). Then, ~ R " t =' t " 0 (tt " )u " t ['(t " ;! " )u(t " ;! " )] and, with ' " t :=' t " 0 (tt " ) ('u)(t " ;! " ), t2 [t " ;T ], we have 0 = (' " u " )(t " ;! " ) = sup 2T t " (F t " ;! " ) E L t " ;! " [(' " u " ) ^ 2 ]: 115 That is, ' " 2A L u " (t " ;! " ), and thus 0 L " ' " (t " ;! " )f " t "(! " ;' " (t " ;! " );@ ! ' " (t " ;! " );I " ' " (t " ;! " )) = L'(t " ;! " ) +" 0 h f " t "(! " ;u " (t " ;! " );@ ! '(t " ;! " );I " '(t " ;! " )) f t "(! " ;u(t " ;! " );@ ! '(t " ;! " );I'(t " ;! " )) i f t "(! " ;u(t " ;! " );@ ! '(t " ;! " );I'(t " ;! " )) c 0 2 +" 0 h f " t "(! " ;u " (t " ;! " );@ ! '(t " ;! " );I " '(t " ;! " )) f t "(! " ;u(t " ;! " );@ ! '(t " ;! " );I'(t " ;! " )) i : Hence, for " 0 =c 0 =8 and for suciently small ", uniform convergence of f " , u " , " , and K " yields 0c 0 =4< 0, which is a contradiction. 3.6 Comparison In this subsection, Assumption 3.3.11, Assumption 3.3.13, Assumption 3.3.15, Assumption 3.3.14, and Assumption 3.3.17 are in force. Furthermore, without loss of generality, assume that f is nonincreasing in y (cf. Remark 3.9 in [31]). 116 Given 2 (0;1], t2 (1;T ], and y2R d , put O (y) := fx2R d :jxyj<g; O (y) := fx2R d :jxyjg; @O (y) := fx2R d :jxyj =g; Q t;y := (t;T )O (y); Q t;y := [t;T ] O (y); @Q t;y := ((t;T ]@O (y))[ (fTgO (y)); @ Q t;y := ([t;T ]@O (y))[ (fTgO (y)); Q 1 t := (t;T )R d ; Q 1 t := [t;T ]R d : 3.6.1 Hitting times Given "> 0, t2 [0;T ], and y2R d , dene hitting times h t;y;" 0 := t; h t;y;" 1 := inffst :X s 62O " (y)g^T; h t;y;" i+1 := inffsh t;y;" i : X s X h t;y;" i "g^T: 117 Also put h t;" j :=h t;X t t ;" j , that is, h t;" 0 = t; h t;" 1 = inffst :jX s X t j"g^T; h t;" j+1 = inffsh t;" j : X s X h t;" j "g^T: Lemma 3.6.1. Let (G n ) n be an increasing sequence of non-empty open subsets of R d with G =[ n G n . 
Let Y be a d-dimensional, càdlàg, and F 0 + -adapted process that is P-quasi-left-continuous for some probability measure P on ( ;F 0 T ). Suppose that Y 0 2G 1 , P-a.s. Consider the first-exit times G := infft 0 :Y t 2G c g^T; (3.6.1) Gn := infft 0 :Y t 2G c n g^T; n2N: (3.6.2) Then lim n Gn = G , P-a.s. Moreover, lim n Y Gn =Y G , P-a.s. Proof. Without loss of generality, we can assume that G and Gn , n2N, are F 0 + -stopping times. Clearly, ( Gn ) n is increasing and := sup n Gn G : Consider the sets A :=f < G g and A n :=f Gn < G g, n2N. We have to show that P(A) = 0.
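The monotone convergence of the first-exit times asserted in Lemma 3.6.1 can be illustrated in discrete time with nested symmetric intervals G_n = (-r_n, r_n), r_n increasing to r. This is a sketch with made-up data, not part of the proof:

```python
def exit_time(path, radius):
    """First index at which the discrete path leaves (-radius, radius);
    capped at the terminal index, mirroring the ^T in (3.6.1)-(3.6.2)."""
    for t, x in enumerate(path):
        if abs(x) >= radius:
            return t
    return len(path) - 1

path = [0.0, 0.3, 0.55, 0.7, 0.85, 0.95, 1.2, 0.4]
# G_n = (-(1 - 1/n), 1 - 1/n) increases to G = (-1, 1):
taus = [exit_time(path, 1.0 - 1.0 / n) for n in range(2, 50)]
assert taus == sorted(taus)               # tau_{G_n} is nondecreasing in n
assert taus[-1] == exit_time(path, 1.0)   # and reaches tau_G for large n
```

In continuous time the lemma additionally needs quasi-left-continuity of Y to rule out mass escaping at a predictable limit time; in the discrete sketch this issue does not arise.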
Suppose that there exists an"2 (0; 1) such that, for all n2N, dist(G c n+1 ;G n ) " n(n + 1) : Letv2C( R)\C 1;2 (Q) andx2G 1 . Then there exists an (F 0 + ;P 0;x )-martingaleM such that v( G ;X G )v(0;x) = Z G 0 ( @ t v(t;X t ) + d X i=1 b i t @ x iv(t;X t ) + 1 2 d X i;j=1 c ij t @ 2 x i x jv(t;X t ) + Z jzjC 0 0 " v(t;X t +z)v(t;X t ) d X i=1 z i @ x iv(t;X t ) # K t (dz) ) dt +M G ; P 0;x -a.s. Proof. Let (a n ) n be a sequence of positive real numbers converging to 0. For every n 2 N, let v n 2 C 1;2 ( R) such that v = v n on Q n andjvv n j R a n . 120 By Lemma 3.6.1, v( G ;X G ) = lim n v n+1 ( Gn ;X Gn ), P 0;x -a.s. Moreover, for every n2N, there exists, by It^ o's formula an (F 0 + ;P 0;x )-martingale M n such that v n+1 ( Gn ;X Gn )v(0;x) = Z Gn 0 ( @ t v(t;X t ) + d X i=1 b i t @ x iv(t;X t ) + 1 2 d X i;j=1 c ij t @ 2 x i x jv(t;X t ) + Z jzjC 0 0 " v(t;X t +z):1 fjzj " n(n+1) g +v n+1 (t;X t +z):1 fjzj> " n(n+1) g v(t;X t ) d X i=1 z i @ x iv(t;X t ) # K t (dz) ) dt +M n Gn ; P 0;x -a.s. Now, if a n+1 = 1 (L 1=n _n)(n + 1) ; n2N; 121 then, by Part (iii) of Assumption 3.3.13, Z Gn 0 Z " n(n+1) <jzjC 0 0 [v n+1 (t;X t +z)v(t;X t +z)]K t (dz)dt Z Gn 0 Z " n(n+1) <jzjC 0 0 a n+1 K t (dz)dt T " Z " n(n+1) <jzjC 0 0 a n+1 K t;1;1=n (dz) + Z " n(n+1) <jzjC 0 0 a n+1 K t;2;1=n (dz) # T " n(n + 1)a n+1 " ) Z " n(n+1) <jzjC 0 0 jzjK t;1;1=n (dz) +a n+1 (L 1=n _n) # T 1 (L 1=n _n)" + 1 n + 1 ! 0 as n!1. This concludes the proof. Remark 3.6.3. If (b;c;K) is constant, t2 [0;T ], and x2R d , then E t;x [h t;" 1 ] =E 0;x [t + [h " 1 ^ (Tt)]]: Indeed, E t;x [(h t;" 1 t] = E t;x [(inffst :jX s X t j"g^T )t] = E 0;x [(inffst :jX st X 0 j"g^T )t] = E 0;x [((t + inffr 0 :jX r X 0 j"g)^T )t] = E 0;x [inffr 0 :jX r X 0 j"g^ (Tt)] = E 0;x [h " 1 ^ (Tt)]; 122 where we have used the identity (a 1 ^a 2 )t = (a 1 t)^ (a 2 t). Proposition 3.6.4. Fix (s;!)2 and " > 0. Then the map x7! h s;x;" 1 (!), R d ! [s;T ], is universally measurable. Proof. 
Note that we can express x7!h s;x;" 1 (!) as the début of the set A :=f(t;x)2 [s;T ]R d :j! t xj"g[f(T;x) :x2R d g; i.e., h s;x;" 1 (!) =D A (x), where D A :R d 7! [0;T ] is defined by D A (x) := infft2 [s;T ] : (t;x)2Ag: Since A2B([s;T ]) B(R d ) as inverse image of a Borel set under a Borel measurable map, we can deduce from Theorem III.44 in [25] that D A is universally measurable. This concludes the proof. Remark 3.6.5. If d = 1, then h t;x;" 1 can be written as the infimum of first-passage times that are monotone in x, and thus x7!h s;x;" 1 (!) is even Borel measurable. Lemma 3.6.6 (Shifting of hitting times). Let "> 0, t2 [0;T ], y2R, !2 , and s =h t;y;" 1 (!). Then h t;y;" i+1 =h s;!s;" i , i2N 0 , P s;! -a.s. Proof. The case i = 0 follows from Remark 3.7.6. Next, let i = 1. Then, by Remark 3.7.6, h t;y;" 2 = inffrs :jX r X s j"g^T =h s;!s;" 1 ; P s;! -a.s. Finally, assume that the statement is true for some i2N. Then h t;y;" i+2 = inffrh s;!s;" i+1 : X r X h t;!s;" i "g^T =h s;!s;" i+1 ; P s;! -a.s. Mathematical induction concludes the proof. 3.6.2 Regularity of path-frozen approximations We are going to define candidate solutions for so-called path-frozen integro-differential equations by means of stochastic representation. To this end, fix " > 0 and (s ;! )2 . Next, define a map g s ;! =g : s 1 !R by g( 1 ) :=(! :1 [0;s ) + X i2N 0 x i :1 [t i ;t i+1 ) +x 1 :1 fTg ) and a map ~ f s ;! = ~ f :R s 1 RR d R!R by ~ f t ( 1 ;y;z;p) :=f (t_s )^T (! :1 [0;s ) + X i2N 0 x i :1 [t i ;t i+1 ) +x 1 :1 fTg ;y;z;p): Given i2 N, i = (s 0 ;y 0 ;::: ;s i1 ;y i1 )2 s i with s = s 0 , s = s i1 , and y =y i1 , denote, for every (t;x)2 [s;T ]R d , by ~ Y s ;! ;"; i ;t;x ; ~ Z s ;! ;"; i ;t;x ; ~ U s ;!
;"; i ;t;x = ~ Y i ;t;x ; ~ Z i ;t;x ; ~ U i ;t;x the solution of the BSDE ~ Y i ;t;x r =g( i ; (h t;yx;" j ;x +X h t;yx;" j ) j2N );x +X T ) + Z T r ~ f ~ r ( i ; (h t;yx;" j ;x +X h t;yx;" j ) j2N );x +X T ); ~ Y i ;t;x ~ r ; ~ Z i ;t;x ~ r ; Z R d ~ U i ;t;x ~ r (z) ~ r (z)K ~ r (dz) d~ r Z T r ~ Z i ;t;x ~ r dX c;t;0 ~ r Z T r Z R d ~ U i ;t;x ~ r (z) ( X )(d~ r;dz); r2 [t;T ];P t;0 -a.s., and dene s ;! ;" i ( i ;) = i ( i ;) : [s;T ]R d !R by i ( i ;t;x) :=E t;0 [ ~ Y i ;t;x t ]: Lemma 3.6.7 (Dynamic programming). We have i ( i ;t;x) = E t;0 h i+1 ( i ;h t;yx;" 1 ;x +X h t;yx;" 1 ;h t;yx;" 1 ;x +X h t;yx;" 1 ) + Z h t;y;" 1 t ~ f r ( i ; (h t;yx;" j ;x +X h t;yx;" j ) j2N );x +X T ); ~ Y i ;t;x r ; ~ Z i ;t;x r ; Z R d ~ U i ;t;x r (z) r (z)K r (dz) dr i : 125 If, additionally, x62O " (y), then i ( i ;t;x) = i+1 ( i ;t;x;t;x): Proof. Skip superscript ". Put :=h t;yx 1 . Since i ( i ;t;x) = E t;0 h ~ Y i ;t;x + Z t ~ f r ( i ; (h t;yx;" j ;x +X h t;yx;" j ) j2N );x +X T ); ~ Y i ;t;x r ; ~ Z i ;t;x r ; Z R d ~ U i ;t;x r (z) r (z)K r (dz) dr i it suces to show that E t;0 [ ~ Y i ;t;x ] =E t;0 [ i+1 ( i ;h t;yx;" 1 ;x +X h t;yx;" 1 ;h t;y;" 1 ;x +X h t;y;" 1 )]: (3.6.4) By Corollary 3.7.7, E t;0 [ ~ Y i ;t;x ] = Z Z ~ Y i ;t;x (!) (~ !)P ;! (d~ !)P t;0 (d!): (3.6.5) For every !2 , dene a process Y ! on [(!);T ] by Y ! r := ~ Y i ;t;x (!:1 [0;(!)) + (X +! (!) ):1 [(!);T ] ): (3.6.6) 126 Note that, by Lemma 3.6.6, forP ;! -a.e. ~ !2 , ( i ; (h t;yx;" j (~ !);x +X h t;yx;" j (~ !)) j2N ;x + ~ ! T ) = ( i ; (h (!);! (!) j (~ !);x +X h (!);! (!) j (~ !)) j2N 0 ;x + ~ ! T ): Thus (cf. Lemma 3.4.1 and Lemma 3.4.3), for P t;0 -a.e. !2 , there exists a pair (Z ! ;U ! ) such that (Y ! ;Z ! ;U ! ) is the solution to the BSDE Y ! r =g i ; h (!);! (!) j ;x +! (!) +X h (!);! (!) j j2N 0 ;x +! (!) +X T ! + Z T r ~ f ~ r i ; h (!);! (!) j ;x +! (!) +X h (!);! (!) j j2N 0 ;x +! (!) +X T ! ; Y ! ~ r ;Z ! ~ r ; Z R d U ! ~ r (z) ~ r (z)K ~ r (dz) ! d~ r + Z T r Z ! 
~ r dX c;(!);0 + Z T r Z R d U ! ~ r (z) ( X )(d~ r;dz); r2 [(!);T ];P (!);0 -a.s. Together with (3.6.5) and (3.6.6), we obtain E t;0 h ~ Y i ;t;x i = Z Z Y ! (!) (~ !)P (!);0 (d~ !)P t;0 (d!) = Z Z Y ( i ;(!);x+! (!) );(!);x+! (!) (!) (~ !)P (!);0 (d~ !)P t;0 (d!) = Z i+1 ( i ;(!);x +! (!) ;(!);x +! (!) )P t;0 (d!): Thus (3.6.4) has been established. 127 Fix (s ;! )2 and "2 (0; 1). We will write g instead of g s ;! and ~ f instead of ~ f s ;! . The generic notation for an element of s i is i = (s 0 ;y 0 ;::: ;s i1 ;y i1 ); s =s i1 ; y =y i1 : Dene a function h s ;! ;" i =h " i : s i RR!R by h " i ( i ;t;x) := s ;! ;" i+1 ( i ; (s_t)^T;x; (s_t)^T;x): The following result is needed for the approximation of the functions i+1 ( i ;) by smooth functions. Lemma 3.6.8. Let i 2 s i . Then (t;x)7!h " i (;t;x),RR d !R, is continuous. Remark 3.6.9. If were only d J 1 -continuous, then, in contrast to corresponding mappings in [32] and [33], the mapping (y;t;x)7! (y:1 [0;t) +x:1 [t;T ] ) cannot be expected to be continuous. For example, assume that d = 1 and let (y 0 ;t 0 ;x 0 ) := (1; 0; 2) and (y n ;t n ;x n ) := (1; 1=n; 2). Then (y n ;t n ;x n )! (y 0 ;t 0 ;x 0 ) as n!1, but d J 1 (y n :1 [0;t n ) +x n :1 [t n ;T ] ;y 0 :1 [0;t 0 ) +x 0 :1 [t 0 ;T ] ) 1. Remark 3.6.10. If is just d J 1 -continuous, then we cannot expect (t;x) 7! " i ( i ;t;x) on Q 2C 0 0 s;y n Q " s;y to be left-continuous in t. To see this, assume that d = 1 and let t 0 = T and t n " t 0 with t n < T . Then, given !2 , we have, for 128 suciently largen and withy j = 0,j = 0,:::,i1, (which we can assume without loss of generality,) jg( i ; (t n + [h " j (!)^ (Tt n )];x +X h " j ^(Tt n ) (!)) j2N 0 ;x +X Tt n(!)) g( i ; (T;x +X 0 (!)) j2N 0 ;x +X 0 (!))j = ((x +! 0 ):1 [t n ;T ] )((x +! 0 ):1 fTg ) but d J 1 (z:1 [t n ;T ] ;z:1 fTg ) 1 for z 6= 0. However d M 2 (z:1 [t n ;T ] ;z:1 fTg ) ! 0 as n!1. Proof of Lemma 3.6.8. Let (t n ;x n )! (t 0 ;x 0 ) inRR d as n!1. 
For every n2N, h " i ( i ;t 0 ;x 0 )h " i ( i ;t n ;x n ) h " i ( i ;t 0 ;x 0 )h " i ( i ;t 0 ;x n ) + h " i ( i ;t 0 ;x n )h " i ( i ;t n ;x n ) =: A x n +A t n : 129 Note that, for every (t;x)2RR d , with t 0 := (t_s)^T , h " i ( i ;t;x) =E t 0 ;0 " g i ;t 0 ;x; h t 0 ;" j ;x +X h t 0 ;" j j2N ;x +X T + Z T t 0 ~ f r i ;t 0 ;x; h t 0 ;" j ;x +X h t 0 ;" j j2N ;x +X T ; ~ Y ( i ;t 0 ;x);t 0 ;x r ; ~ Z ( i ;t 0 ;x);t 0 ;x r ; Z R d ~ U ( i ;t 0 ;x);t 0 ;x r (z) r (z)K(dz) ! dr # (3.6.7) and also (cf. Remark 3.6.3) E t 0 ;0 g i ;t 0 ;x; h t 0 ;" j ;x +X h t 0 ;" j j2N ;x +X T =E 0;0 g i ; t 0 + [h " j ^ (Tt 0 )];x +X h " j ^(Tt 0 ) j2N 0 ;x +X Tt 0 : (3.6.8) as well as E t 0 ;0 " Z T t 0 ~ f r i ;t 0 ;x; h t 0 ;" j ;x +X h t 0 ;" j j2N ;x +X T ; ~ Y ( i ;t 0 ;x);t 0 ;x r ; ~ Z ( i ;t 0 ;x);t 0 ;x r ; Z R d ~ U ( i ;t 0 ;x);t 0 ;x r (z) r (z)K(dz) ! dr # =E 0;0 " Z Tt 0 0 ~ f r+t 0 i ; t 0 + [h " j ^ (Tt 0 )];x +X h " j ^(Tt 0 ) j2N 0 ; x +X Tt 0 ; ^ Y t 0 ;x r ; ^ Z t 0 ;x r ; Z R d ~ U t 0 ;x r (z) r (z)K(dz) ! dr # ; (3.6.9) 130 where ( ^ Y t 0 ;x ; ^ Z t 0 ;x ; ^ U t 0 ;x ) is the solution of the BSDE ^ Y t 0 ;x r =g i ; t 0 + [h " j ^ (Tt 0 )];x +X h " j ^(Tt 0 ) j2N 0 ;x +X Tt 0 + Z Tt 0 r ~ f ~ r+t 0 i ; t 0 + [h " j ^ (Tt 0 )];x +X h " j ^(Tt 0 ) j2N 0 ; x +X Tt 0 ; ^ Y t 0 ;x ~ r ; ^ Z t 0 ;x ~ r ; Z R d ^ U t 0 ;x ~ r (z) ~ r (z)K(dz) ! d~ r Z Tt 0 r ^ Z t 0 ;x ~ r dX c;0;0 ~ r Z Tt 0 r Z R d ^ U t 0 ;x ~ r (z) ( X )(d~ r;dz); r2 [0;Tt 0 ];P 0;0 -a.s. (3.6.10) Since is uniformly continuous underd U , sincef is uniformly continuous under d 1 in (t;!) uniformly in (y;z;p), and since d M 1 d U , one can show using BSDE standard techniques (keeping (3.6.7) in mind) that there exists a constant C 0 = C 0 (t 0 )> 0 such that A x n C 0 0 (jx 0 x n j): Put s n := (t n _s)^T . To show convergence of A t n , let us initially x !2 . Set r j = r j (!) :=h " j (!); j2N 0 ; = (!) := maxfj2N 0 :s 0 +r j Tg: 131 We treat rst the case that t n t 0 , whence s n s 0 . Since ! 
is xed, we can and will assume that, without loss of generality, s n +r T: Since s 0 Tr , we have s n 2 [s 0 ;Tr ]. Since, for every n2N 0 and x2R, g( i ; (s n + [r j ^ (Ts n )];! r j ^(Ts n ) ) j2N 0 ;x +! Ts n) = i2 X j=0 y j :1 [s j ;s j+1 ) +y:1 [s;s n ) + 1 X j=0 (x +! r j ):1 [s n +r j ;s n +r j+1 ) +(x +! r ):1 [s n +r;T ) + (x +! Ts n):1 fTg ! =:(~ !(x;s n )): we have, by Lemma 3.7.13, sup x (~ !(x;s n ))(~ !(x;s 0 ) sup x 0 (d J 1 (~ !(x;s n ); ~ !(x;s 0 )) sup x 0 (2(s n s 0 ) +j! Ts n! Ts 0j! 0 as n ! 1 provided ! is left-continuous at T , which, however is the case for P 0;0 -a.e. ! because X isP 0;0 -quasi-left-continuous. A corresponding result can be shown for the driver ~ f( ) of the BSDE (3.6.10). Thus, keeping (3.6.8) and (3.6.9) in mind, we can employ standard a-priori estimates for BSDEs (cf. Lemma 3.1.1 132 in [26]) to deduce that, for every x2 R d , h " i ( i ;t n ;x)! h " i ( i ;t 0 ;x) as n!1 with t n t. Hence, by Lemma 3.7.12, (t n ;x n )! (t 0 ;x 0 ) as n!1 with t n t 0 implies A t n sup x h " i ( i ;t n ;x)h " i ( i ;t 0 ;x) ! 0 as n!1. Now we treat the case t n t 0 . Again, x ! = (! k ) kd 2 and, in addition to the notation introduced in the previous paragraph, set n = n (!) := maxfj2N 0 :s n +r j Tg: Note that n . Recall that ~ !(x;s 0 ) = (~ !(x;s 0 ) k ) kd = i2 X j=0 y j :1 [s j ;s j+1 ) +y:1 [s;s 0 ) + 1 X j=0 (x +! r j ):1 [s 0 +r j ;s 0 +r j+1 ) +(x +! r ):1 [s 0 +r;T ) + (x +! Ts 0):1 fTg 133 For any n2N, let !(x;s n ) = ( !(x;s n ) k ) kd := i2 X j=0 y j :1 [s j ;s j+1 ) +y:1 [s;s n ) + 1 X j=0 (x +! r j ):1 [s n +r j ;s n +r j+1 ) +(x +! r ):1 [s n +r;(s n +r +1 )^T ) + n 1 X j=+1 (x +! r j ):1 [s n +r j ;s n +r j+1 ) +1 fg c( n ):(x +! r n ):1 [s n +r n;T ) + (x +! Ts n):1 fTg : Right-continuity of! yields! r j =! r j ^(Ts n ) !! Ts 0 asn!1 forj n . Hence, since d M 1 is a metric, by the triangle inequality together with Lemma 3.7.9, d p ( !(x;s n ); ~ !(x;s 0 )) max kd d M 1 ( !(x;s n ) k ; ~ !(x;s 0 ) k )! 
0 uniformly in x as n!1. Thus, corresponding considerations as in the previous paragraph yield sup x ( !(x;s n ))(~ !(x;s 0 )) sup x 0 (d p ( !(x;s n ); ~ !(x;s 0 ))! 0 and sup x h " i ( i ;t n ;x)h " i ( i ;t 0 ;x) ! 0: 134 as n!1. This concludes the proof. 3.6.3 Path-frozen integro-dierential equations Let"2 (0;c 0 0 ). Given (s;y)2 [s ;T ]R d , letK 4C 0 0 s;y := [s;T ] Q d i=1 [y i 4C 0 0 ;y i + 4C 0 0 ]. Then, by the Weierstrass approximation theorem, for any> 0, there exists a polynomial h "; i ( i ;) onRR d such that h "; i ( i ;)h " i ( i ;) K 4C 0 0 s;y <: Since we can take multivariate Bernstein polynomials as approximating functions (see, e.g., Appendix B of [46]), we can and will, by Assumption 3.3.17, assume that the mapping ( i ;t;x)7!h "; i ( i ;), s i RR d !R, is continuous. Put h "; i :=h "; i +: Now, we proceed similarly as in the approach of the proof of Lemma 5.4 in the rst arxiv version of [33] to dene inductively (in three steps) a functional s ;! ;" 2 C 1;2 b ( s ), which will have properties that are needed in the proof of Theorem 3.3.18 below. To this end, let us, rst of all, introduce some notation. 135 Denition 3.6.11. Let 0< <c 0 0 <C 0 0 and 2C 0 0 < 0 <1. (Recall that c 0 0 is a lower bound andC 0 0 is an upper bound of the jump size ofX. See Remark 3.2.7 and Assumption 3.3.14.) Let (t ;y )2 (1;T )R d . PutD :=O (y ),D 0 :=O 0(y ), Q := (t ;T )O (y ), Q 0 := (t ;T )O 0(y ), etc. Fix 2 (0; 1) and h2C 1 ( Q). Set C (h) :=fw : Q 0 !R such that w2C 2; loc (Q) withj@ t wj Q ,j@ x wj Q , @ 2 xx w Q being bounded and that w =h in (t ;T ) (D 0 nD) and onfTgD 0 g: Let f = f(t;y;z;p) : QRR d R!R be a function. Dene : (1;T ]!R by (t) := t_0 . 
Given t2 (1;T ], dene an operator I h t = I t onC (h) by I t w(t;x) := Z c 0 0 jzjC 0 0 [h(t;x +z) (t)] K(dz)w(t;x) (t)K(R d ): Dene a mapping ~ F = ~ F (t;x;y;z;w) : QRR d C (h)!R by ~ F (t;x;y;z;w) := f(t;y;z; I t w(t;x)): Given v2C (h), put ~ F [v](t;x) := ~ F (t;x;v(t;x);@ x v(t;x);v(;)): 136 Dene an operator L h = L onC (h) by Lw(t;x) := @ t w(t;x) d X i=1 b i )@ x iw(t;x) 1 2 d X i;j=1 c i;j @ x i x jw(t;x) Z R d " h(t;x +z) d X i=1 z i @ x iw(t;x) # K(dz) +w(t;x)K(R d ): Remark 3.6.12. Given the context of the preceding denition, we have I h t w(t;x) := Z R d [w(t;x +z)w(t;x)] (t) K(t;dz); L h w(t;x) := @ t w(t;x) d X i=1 b i @ x iw(t;x) 1 2 d X i;j=1 c i;j @ x i x jw(t;x) Z R d " w(t;x +z)w(t;x) d X i=1 z i @ x iw(t;x) # K(dz) for all (t;x)2Q. Given y2 R d and h2 C 1 (RR d ), put D y := O " (y) D 0 y := O 2C 0 0 (y), and let the function spaceC (h;y) be dened as the spaceC (h) with Q 0 = Q 2C 0 0 1;y and Q = Q " 1;y . Step 1. Let 1 = (s ;y). Set 1 := "=4. Write h = h "; 1 1 ( 1 ;). Write ^ f(t;) = ~ f t (( 1 ; (T;y) j2N ;y);) and let ~ F as well as ~ F [] be dened as in Denition 3.6.11. 137 By standard PDE theory, there exists a function w s ;! ;" 1 ( 1 ;) =w 1 ( 1 ;) =w 1 2 C (h;y) such that Lw 1 ~ F [w 1 ] = 0 in Q " 1 , w 1 =h in (1;T )D 0 y nD y , w 1 =h onfTgD 0 y . Dene a function v s ;! ;" 1 ( 1 ;) =v 1 ( 1 ;) =v 1 on Q 2C 0 0 1;y by v 1 ( 1 ;t;x) :=w 1 ( 1 ;t;x)w 1 ( 1 ; 1 ) + 1 ( 1 ; 1 ) + " 2 : Then v 1 ( 1 ;)2C (h 0 ;y) for some h 0 2C 1 (RR d ) and Lv 1 ~ F [v 1 ] 0 in Q " 1;y , (3.6.11) v 1 ( 1 ; 1 ) = 1 ( 1 ; 1 ) + " 2 ; (3.6.12) v 1 h " 1 ( 1 ;) in [s ;T )D 0 y nD y and onfTgD 0 y . (3.6.13) To see that (3.6.13) is true, it suces, by denition of v 1 , to show that "=2 + 1 ( 1 ; 1 )w 1 ( 1 ; 1 ) 0. 
Indeed, noting thatf is non-anticipating and using the dynamic programming principle (Lemma 3.6.7), we can employ the comparison principle for BSDEs with jumps together with constant-translatibility of sublinear expectations (see, e.g., [83]) to deduce thathh "; i ( 1 ;)+2 1 impliesw 1 ( 1 ; 1 ) 1 ( 1 ; 1 ) +"=2. Note that It^ o's formula together with quasi-left-continuity makes it possible to represent w 1 as BSDE (see Lemmas 3.6.1 and 3.6.2). 138 Dene s ;! ;" (t;!) :=v 1 (s ;! s ;t;! t ) + 1 X j=1 j ; s th s ;" 1 (!): Then, by (3.6.12), s ;! ;" 1 (s ;! s ;s ;! s )< s ;! ;" (s ;!)< s ;! ;" 1 (s ;! s ;s ;! s ) +": Universal measurability of (t;!) 7! v 1 (s ;! s ;t;! t ) follows from Assump- tion 3.3.17. Step 2. Let 2 = (s 0 ;y 0 ;s;y)2 s 2 . Set 2 := "=8. Write h = h "; 2 2 ( 2 ;) and ^ f(t;) = ~ f t (( 2 ; (T;y) j2N ;y);) and let ~ F as well as ~ F [] be dened as in Denition 3.6.11. By standard PDE theory, there exists a functionw s ;! ;" 2 ( 2 ;) = w 2 ( 2 ;) =w 2 2C (h;y) such that Lw 2 ~ F [w 2 ] = 0 in Q " 1 , w 2 =h in (1;T )D 0 y nD y , w 2 =h onfTgD 0 y . Dene a function v s ;! ;" 2 ( 2 ;) =v 2 ( 2 ;) =v 2 on Q 2C 0 0 1;y by v 2 ( 2 ;t;x) :=w 2 ( 2 ;t;x)w 2 ( 2 ;s;y) +v 1 ( 1 ;s;y) + 1 : 139 Then v 2 ( 2 ;)2C (h 0 ;y) for some h 0 2C 1 (RR d ) and furthermore Lv 2 ~ F [v 2 ] 0 in Q " 1;y , (3.6.14) v 2 ( 2 ;s;y) = v 1 ( 1 ;s;y) + 1 ; (3.6.15) and, if "jyy 0 j 2C 0 0 , then v 2 h " 2 ( 2 ;) in [s;T )D 0 y nD y and onfTgD 0 y . (3.6.16) To see that (3.6.16) is true, let (t;x)2 [s;T )D 0 y nD y or (t;x)2fTgD 0 y . 
Then, by (3.6.13) in Step 1, v 2 ( 2 ;t;x) = h(t;x) +v 1 ( 1 ;s;y)w 2 ( 2 ;s;y) + 1 h " 2 ( 2 ;t;x) +v 1 ( 1 ;s;y)w 2 ( 2 ;s;y) + 1 = h " 2 ( 2 ;t;x) + h "; 1 1 (s 0 ;y 0 ;s;y) + 1 (s ;y 0 ;s ;y 0 )w 1 (s ;y 0 ;s ;y 0 ) + 2 1 w 2 ( 2 ;s;y) + 1 : That is, we have to show that w 2 ( 2 ;s;y) h "; 1 1 (s ;y 0 ;s;y) + 1 (s ;y 0 ;s ;y 0 ) w 1 (s ;y 0 ;s ;y 0 ) + 3 1 : (3.6.17) 140 Note that, similarly as in Step 1, one can show that w 2 ( 2 ;s;y) 2 ( 2 ;s;y) + 2 2 : We also have 2 ( 2 ;s;y) =h " 1 (s ;y 0 ;s;y) because "jyy 0 j. Thus w 2 ( 2 ;s;y) h "; 1 1 (s ;y 0 ;s;y) + 2 2 ; and together with 2 1 1 (s ;y 0 ;s ;y 0 )w 1 (s ;y 0 ;s ;y 0 ) from Step 1 we get (3.6.17) and consequently (3.6.16). Dene s ;! ;" (t;!) := v s ;! ;" 2 (s ;! s ;h s ;" 1 (!);X h s ;" 1 (!);t;! t ) + 1 X j=2 j ; h s ;" 1 (!)<th s ;" 2 (!): Note that, by Step 1 and by denition of v 2 , s ;! ;" (h s ;" 1 ;!) =v s ;! ;" 2 (s ;! s ;h s ;" 1 (!);X h s ;" 1 (!);h s ;" 1 (!);X h s ;" 1 (!)) + 1 X j=2 j : Universal measurability of (t;!)7!v 2 (s ;! s ;h s ;" 1 (!);X h s ;" 1 (!);t;! t ); 141 follows from Assumption 3.3.17 and standard BSDE error estimates. Step 3 (i ! i + 1). Let i 2 N. Set j := "=2 j+1 , j 2 N. For every j = (s 0 ;y 0 ;::: ;s j1 ;y j1 )2 s j , j2 N, there exists, by standard PDE theory, w s ;! ;" j ( j ;) =w j ( j ;) =w j 2C ( h "; j j ( j ;);y j1 ) such that Lw j ~ f t (( j ; (T;y j1 ) k2N ;y j1 );w j ;@ x w j ; I t w j ) = 0 in Q " 1;y j1 , (3.6.18) and w j = h "; j j ( j ;) in (1;T )D 0 y nD y j1 , w j = h "; j j ( j ;) onfTgD 0 y j1 . (3.6.19) Dene v s ;! ;" j ( j ;) =v j ( j ;) on Q 2C 0 0 1;y j1 recursively by v j ( j ;t;x) :=w j ( j ;t;x) +v j1 ( j )w j ( j ;s j1 ;y j1 ) + j1 : (3.6.20) Suppose that the following induction hypothesis holds: For every j2f1;:::;ig, Lv j ~ f t (( j ; (T;y j1 ) k2N ;y j1 );v j ;@ x v j ; I t v j ) 0 in Q " 1;y j1 , 142 and, if "jy k+1 y k j 2, for every k2f0;:::;j 1g, then v j ( j ;)h " j ( j ;) in (s j1 ;T )D 0 y j1 nD y j1 and onfTgD 0 y j1 . 
Let i 2. Fix i+1 = (s 0 ;y 0 ;::: ;s i ;y i )2 s i+1 with s = s i and y = y i . Let j := (s 0 ;y 0 ;::: ;s j1 ;y j1 ), j = 1, :::, i 2. Then v i+1 ( i+1 ;)2C (h 0 ;y) for some h 0 2C 1 (RR d ) and Lv i+1 ~ f t (( i+1 ; (T;y) k2N ;y);v i+1 ;@ ! v i+1 ; I t v i+1 ) 0 in Q " 1;y , (3.6.21) v i+1 ( i+1 ;s;y) =v i ( i ) + i ; (3.6.22) and, if "jy j+1 y j j 2; j = 0;:::;i 1; (3.6.23) then v i+1 h " i+1 ( i+1 ;) in (s;T )D 0 y nD y and onfTgD 0 y . (3.6.24) 143 To see that (3.6.24) is true, let (t;x)2 Q 2C 0 0 s;y nQ " s;y . By (3.6.23), v i+1 ( i+1 ;t;x) h " i+1 ( i+1 ;t;x) +v i ( i ;s;y)w i+1 ( i+1 ;s;y) + i : By (3.6.19), (3.6.20), (3.6.23), :::h " i+1 ( i+1 ;t;x) + h "; i i ( i ;s;y) + v i1 ( i ) w i ( i ;s i1 ;y i1 ) + i1 w i+1 ( i+1 ;s;y) + i : By (3.6.19), (3.6.20), (3.6.23), :::h " i+1 ( i+1 ;t;x) + h "; i i ( i ;s;y) + h "; i1 i1 ( i ) +v i2 ( i2 )w i1 ( i1 ;s i2 ;y i2 ) + i2 w i ( i ;s i1 ;y i1 ) + i1 w i+1 ( i+1 ;s;y) + i : 144 Thus ::: =h " i+1 ( i+1 ;t;x) + h "; i i ( i ;s;y) + h "; i1 i1 ( i ) +( i + i1 + i2 ) (w i1 ( i1 ;s i2 ;y i2 ) +w i ( i ;s i1 ;y i1 ) +w i+1 ( i+1 ;s;y)) +v i2 ( i1 ) ::: h " i+1 ( i+1 ;t;x) + h h "; i i ( i+1 ) +::: + h "; 1 1 ( 2 ) + 1 ( 1 ; 1 ) i + [( i +::: + 1 ) + 2 1 ] [w i+1 ( i+1 ;s;y) +w i ( i ;s i1 ;y i1 ) +::: +w 1 ( 1 ;s 0 ;y 0 )]: I.e., we have to show that [w i+1 ( i+1 ;s;y) +w i ( i ;s i1 ;y i1 ) +::: +w 1 ( 1 ;s 0 ;y 0 )] h h "; i i ( i+1 ) +::: + h "; 1 1 ( 2 ) + 1 ( 1 ; 1 ) i + [( i +::: + 1 ) + 2 1 ]: (3.6.25) 145 Again, similarly, as in Step 1, one can show that, for every j2f2;:::;i + 1g, we have w j ( j ;s j1 ;y j1 ) j ( j ;s j1 ;y j1 ) + 2 j . Also, "jy j1 y j j, j = 0, :::, i 1, implies that, for every j2f2;:::;i + 1g, we have j ( j ;s j1 ;y j1 ) =h " j1 ( j1 ;s j1 ;y j1 ); which yields w j ( j ;s j1 ;y j1 )h " j1 ( j1 ;s j1 ;y j1 ) + 2 j : Together with w 1 ( 1 ; 1 ) 1 ( 1 ; 1 ) + 2 1 , from Step 1 and with 2 i+1 +::: + 2 2 = ( i +::: + 1 ); we get (3.6.25) and thus (3.6.24). Dene s ;! ;" (t;!) 
:= v i+1 ((h s ;" j (!);X h s ;" j (!)) 0ji ;t;! t ) + 1 X j=i+1 j ; h s ;" i (!)<th s ;" i+1 (!): 146 Note that, by the induction hypothesis by denition of v i+1 , s ;! ;" (h s ;" i ;!) =v i+1 ((h s ;" j (!);X h s ;" j (!)) 0ji ;h s ;" i (!);X h s ;" i (!)) + 1 X j=i+1 j : As in Step 2, universal measurability of (t;!)7!v i+1 ((h s ;" j (!);X h s ;" j (!)) 0ji ;t;! t ); follows from Assumption 3.3.17 and standard BSDE error estimates. By mathematical induction, we obtain the following result. Lemma 3.6.13. The mapping s ;! ;" : ! R dened in Step 1, Step 2, and Step 3 belongs to C 1;2 b ( ). 3.6.4 Proof of Comparison Denition 3.6.14. Let (t;!)2 . Denote byD(t;!) (resp.D(t;!)) the set of all '2 C 1;2 b ( t ) with corresponding sequences ( n ) of stopping times and correspond- ing collections (# n ) of functionals such that the following holds: 147 (i) For every n 2 N and every (r; ~ !) 2 J n1 ; n J, we have, with n = (h t;" i (~ !);X h t;" i (~ !)) 0in1 , L# n ( n ;r; ~ ! r ) f r (~ !;# n ( n ;r; ~ ! r );@ ! # n ( n ;r; ~ ! r ); I r # n ( n ;r; ~ ! r )) (resp.) 0. (ii) ForP t;! -a.e. ~ !2 , '(T; ~ !) (resp.) (~ !). Proof of Theorem 3.3.18. Put u(t;!) := inff'(t;!) :'2D(t;!)g; u(t;!) := supf'(t;!) :'2D(t;!)g: We assert that u(t;!) u(t;!). To show this, we proceed nearly exactly as in the corresponding part of the proof of Proposition 7.5 in [32]. Dene functionals t;!;" , t;!;" on t by t;!;" r := t;!;" r +(2")[1 +Tr]; t;!;" r := t;!;" r (2")[1 +Tr]: 148 Note that t;!;" , t;!;" 2 C 1;2 ( t ) and the corresponding sequences of stopping times are in both cases (h t;" n ) and the corresponding collections of functionals are (v n ) and (v n ), resp., dened by v n (;r;) :=v n (;r;) +(2")[1 +Tr]; v n (;r;) :=v n (;r;)(2")[1 +Tr]: Moreover, t;!;" 2D(t;!) because, whenever (r; ~ !)2 Jh t;" n1 ;h t;" n J for some n2N, we have, with n = (h t;" i (~ !);X h t;" i (~ !)) 0in1 , Lv n ( n ;r; ~ ! r )f r (~ !;v n ( n ;r; ~ ! r );@ x v n ( n ;r; ~ ! r ); I r v n ( n ;r; ~ ! 
r )) Lv n ( n ;r; ~ ! r ) + 0 (2")f r (~ !;v n ( n ;r; ~ ! r );@ x v n ( n ;r; ~ ! r ); I r v n ( n ;r; ~ ! r )) Lv n ( n ;r; ~ ! r ) ~ f r (( n ; (T; ~ ! r ) k2N ; ~ ! r ); v n ( n ;r; ~ ! r );@ x v n ( n ;r; ~ ! r ); I r v n ( n ;r; ~ ! r )) 0 and, similarly, t;!;" T ,P t;! -a.s. Thusu(t;!) t;!;" (t;!) and, similarly, one can show that t;!;" (t;!)u(t;!). Consequently,u(t;!)u(t;!) 2 0 (2")[1+Tt]. Letting "# 0 yields u(t;!)u(t;!). Finally, by the partial comparison principle (Theorem 3.5.7), u 1 (t;!)u(t;!) and u(t;!)u 2 (t;!). Our previous assertion yields then u 1 (t;!)u 2 (t;!). 149 3.7 Appendix 3.7.1 Appendix A. Martingale problems and regular con- ditioning The results in this section are actually valid in a more general context than in our canonical setup and might be of independent interest. In particular, (B;C;) can be as general as in III.2a. of [49], in which case standard conventions of [49] are in force. First, we recall the denitions of [94] for conditional probability distributions (c.p.d.) and regular conditional probability distributions (r.c.p.d). A c.p.d. of a probability measure P on ( ;F 0 T ) given a sub -eldF F 0 T is a collection fP ! g !2 of probability measures satisfying the following: (i) For every A2F 0 T , the map !7!P ! (A) isF-measurable. (ii) For every A2F 0 T and every B2F, P(A\B) = Z A P ! (B)P(d!): If a c.p.d.fP ! g !2 givenF satisesP ! (A(!)) = 1 forP-a.e!2 , whereA(!) := \fA2F :x2Ag, then we callfP ! g !2 an r.c.p.d. givenF. The following two results are straight-forward generalizations of Theorem 6.1.3 and Theorem 6.2.2 in [94]. For unexplained notation we refer to [49]. 150 Proposition 3.7.1. Let X be an (F 0 + ;P)-semimartingale with characteristics (B;C;) after time s2 [0;T ], 2T s (F 0 ) andfP ! g !2 be a c.p.d. of P given F 0 . Then, for P-a.e. !2 , the process X is an (F 0 + ;P ! )-semimartingale with characteristics (p (!) B;p (!) C;p (!) ) after time (!). Proof. 
By Theorem II.2.2.1 in [49], the processesX(h)BX s ,M(h) i M(h) j ~ C ij , i, j d, and g (p s X ) g , g 2 C + (R d ) (see [49] for the deni- tion of C + (R d )), are (F 0 + ;P)-local martingales after time s. Hence, by Theo- rem 1.2.10 in [94] (,which, after localization, is applicable by the same argu- ment as Lemma III.2.48 in [49] in the proof of Theorem III.2.40, p. 165, in [49]), there exists a P-null set N such that, for every !2 nN, the processes X(h)p (!) BX (!) , M(h) i M(h) j p (!) ~ C ij M(h) i (!) M(h) j (!) , i, j d, andg (p (!) X )g (p (!) ),g2C + (R d ) are local martingales. Hence, since the canonical decomposition of X(h) after time (!), !2 nN, is X(h) =X (!) +M(h)M(h) (!) +B(h)B(h) (!) ; Theorem II.2.21 in [49] concludes the proof. Corollary 3.7.2. Suppose that, for every (s;!)2 [0;T ] , there exists a unique solutionP s;! of the martingale problem for (p s B;p s C;p s ) starting at (s;!). Then, for every 2T s (F 0 ), the familyfP (~ !);~ ! g ~ !2 is an r.c.p.d. of P s;! givenF 0 . 151 Proof. LetfP ~ ! g ~ !2 be an r.c.p.d. of P s;! givenF 0 . By Proposition 3.7.1, for P s;! -a.e. ~ !2 ,P ~ ! is a solution of the martingale problem for (p (~ !) B;p (~ !) C;p (~ !) ) starting at ((~ !); ~ !). By uniqueness,P ~ ! =P (~ !);~ ! . The next result is crucial. Theorem 3.7.3 (Proof communicated by R. Mikulevicius). LetP be a probability measure on ( ;F 0 T ). Let 2T (F 0 + ). LetfP ! g !2 be a c.p.d. of P givenF 0 + . Then, for every !2 , P ! (X t =! t ; 0t(!)) = 1: (3.7.1) Proof. Step 1. Fix a boundedF 0 T F 0 + -measurable function H : !R and put H(~ !) :=H(~ !; ~ !). We claim that, forP-a.e. !2 , E P [ HjF 0 + ](!) = Z H(~ !;!)P ! (d~ !): (3.7.2) 152 IfH is of the formH(~ !;!) =G 1 (~ !)G 2 (!),G 1 F 0 T -measurable,G 2 F 0 + -measurable, then, forP-a.e. !2 , E P [ HjF 0 + ](!) = G 2 (!)E P [G 1 jF 0 + ](!) = G 2 (!) Z G 1 (~ !)P ! (d~ !) = Z H(!; ~ !)P ! (d~ !): A monotone-class argument yields the claim. Step 2. Fixt2 [0;T ] and deneH : !R byH(~ !;!) 
:= 1 A (~ !;!), where A :=f(~ !;!)2 : ~ ! t^(!) =! t^(!) g: SinceH isF 0 T F 0 + -measurable and H(~ !) = 1, Step 1 yields that, forP-a.e.!2 , 1 =E P [ HjF 0 + ] = Z 1 A (~ !;!)P ! (d~ !) =P ! (X t^(!) =! t^(!) ): Thus (3.7.1) holds up to aP-null set and on this null set we can redeneP ! such that (3.7.1) holds there, too (cf. p. 34 in [94]). This concludes the proof. Remark 3.7.4. Note that the-eldF 0 + is not countably generated, which yields non-existence of r.c.p.d. (see [8], where r.c.p.d. in our sense are called proper r.c.p.d.). Hence we cannot rely on the corresponding proof in [94], whenF 0 + is replaced byF 0 and 2T (F 0 ). 153 The following result is an adaption of Lemma 2 in [72] to our setting. Again, for unexplained notation, see [49] and also [90]. Lemma 3.7.5. Let (s;!) 2 , P be a solution of the martingale problem for (p s B;p s C;p s ) starting at (s;!), 2T s (F 0 + ), andfP ~ ! g ~ !2 be a c.p.d. of P given F 0 + . Then, for P-a.e. ~ ! 2 , the probability measure P ~ ! is a solution of the martingale problem for (p (~ !) B;p (~ !) C;p (~ !) ) starting at ((~ !); ~ !). Remark 3.7.6. (i) EachF 0 -stopping time satisesP ;! ( =(!)) = 1 for every !. This follows easily from Galmarino's test (see [25]). (ii) Given a right-continuous F-adapted process Y such that Y t (!) = Y t (! ^t ) (this is sometimes nearly impossible to verify) and a closed subset E of R, the F-stopping time := infft 0 :Y t 2Eg^T satisesP ;! ( =(!)) = 1. To see this, let ! and ~ ! two paths that coincide on [0;(!)]. First note that (!) = T or, by right-continuity, Y (!) = Y (!) (~ !)2 E. Moreover, if 0 t < (!), then Y t (!) =Y t (~ !)62E. Hence (~ !) =(!). (iii) If we assume that the set E in the preceding paragraph is open instead of closed, then a corresponding result does not necessarily hold. For example, let T = 2, = infft 0 :jX t j > 1g^T , ! 2 be dened by ! t = t, and ~ ! t :=t:1 [0;1] + (2t):1 (1;T ] . Then (!) = 1 but (~ !) =T . Proof of Lemma 3.7.5. 
First, note that, by Theorem 3.7.3, forP-a.e. ~ !2 , we have X t = ~ ! t , 0t(~ !). 154 In the next two steps, let M = (M t ) ts be one of following processes: X(h)p s BX s ; M(h) i M(h) j p s ~ C ij ; i;jd; g (p s X )g (p s ); g2C + (R d ): Here, we can and will assume thatC + (R d ) is countable. Step 1. By Theorem II.2.2.1 in [49],M is an (F 0 + ;P)-local martingale. Moreover, M is F 0 -adapted. Let ( l ) l be a corresponding localizing sequence of F 0 -stopping times (cf. the proof of Lemma III.2.48 in [49]). Without loss of generality, let us assume that M l is bounded. Then, for every A2F 0 + , 2N, r, r 0 2 [0;T ] with rr 0 , and2 bF 0 r , we can apply the claim in Step 1 of the proof of Theorem 3.7.3 to get Z A E P ~ ! [(M l r 0 _(~ !) M l r_(~ !) )]P(d~ !) = Z A E P [E P [M l r 0 _ M l r_ k jF 0 (r_)+ ]jF 0 + ]P(d~ !) = 0: Step 2. For every n2 N, x a countable dense subset J n of C 1 0 (R dn ) with respect to the locally uniform topology. For every r2 [0;T ]\ (Q[fTg), denote by r the set of all : !R of the form =f(X sn ;:::;X s 1 ) for somen2N,s 1 , :::,s n 2 [0;r]\ (Q[frg) withs 1 :::s n , andf2J n . Put :=\ r r . Since 155 is countable, there exists, by Step 1, a set M withP( M ) = 1 such that, for every l2N, every r, r 0 2 [0;T ]\ (Q[fTg) with rr 0 , every 2 r , and every ~ !2 M , E P ~ ! [(M l r 0 _(~ !) M l r_(~ !) )] = 0: (3.7.3) Since ( r ) =F 0 r , (3.7.3) holds also for every 2 bF 0 r . Right-continuity of M implies that M l (~ !)_ is an (F 0 + ;P ~ ! )-martingale after time (~ !). Hence M is an (F 0 + ;P ~ ! )-local martingale after time (~ !). Step 3. Since P(\ M M ) = 1, Step 2 and a second application of Theo- rem II.2.2.1 in [49] conclude the proof. Proposition 3.7.7. For every (s;!)2 , 2 bF s;! T , and 2T s (F s;! ), E ;X [] =E s;! [jF s;! ]; P s;! -a.s. Proof. Note that there exists a ~ 2T s (F 0 + ) and set 0 such thatP s;! ( 0 ) = 1 and (~ !) = ~ (~ !) for every ~ !2 0 . Also note that there exists a c.p.d.fP ~ ! g ~ !2 ofP s;! 
givenF 0 ~ + , which, by Theorem 3.7.3, satises P ~ ! (X t = ~ ! t ; 0t ~ (~ !)) = 1: Step 1. We will show that, for every 2 bF 0 T ,E [jF s;! ] =E ~ ;X [],P-a.s. 156 First, we show thatF s;! =F s;! ~ . To this end, letA2F s;! andt2 [s;T ]. Then A\f~ tg = [A\ftg\f = ~ g][ [A\f~ tg\f6= ~ g]2F s;! t : That is,F s;! F s;! ~ . The other inclusion can be shown in the same way. Conse- quently, using Lemma 3.7.5, we have E [jF s;! ] (~ !) =E [jF s;! ~ ] (~ !) = () E jF 0 ~ + (~ !) =E P ~ ! [] =E ~ ;~ ! [] (3.7.4) for P s;! -a.e. ~ !2 . Note that the equality () follows from the fact that rst for everyA2F s;! ~ there exist, by Theorem II.75.3 in [90], setsA 0 2F 0 ~ + andN2N s;! such that A =A 0 [N and then E s;! [1 A E s;! [jF s;! ~ ]] = E s;! [1 A ] =E s;! [1 A 0] =E s;! 1 A 0E s;! jF 0 ~ + = E s;! 1 A E s;! jF 0 ~ + and that secondE s;! jF 0 ~ + isF s;! ~ -measurable. Finally, E ;X [] = 1 f=~ g :E ~ ;X [] + 1 f6=~ g :E ;X []; P s;! -a.s. Thus, the map ~ !7!E (~ !);~ ! [] isF s;! -measurable andE ;X [] =E s;! [jF s;! ],P s;! - a.s. 157 Step 2. To nish the proof of the corollary, we can, without loss of generality because of Step 1, assume that = 1 A with AB2F 0 T andP s;! (B) = 0. Using Step 1, we have 0E (~ !);~ ! 1 A E (~ !);~ ! 1 B =E s;! [1 B jF s;! ](~ !) =E s;! [1 A jF s;! ](~ !) = 0 forP s;! -a.e. ~ !2 . Proposition 3.7.8. Fix (s;!)2 . Let M be a bounded right-continuous F 0 + - adapted (F s;! ;P s;! )-martingale. Let 2T s (F s;! ). Then, for P s;! -a.e. ~ !2 , M is an (F ;~ ! ;P ;~ ! )-martingale after time (~ !). A corresponding result holds for sub- and supermartingales. Proof. Let ~ 2T s (F 0 + ) satisfy ~ =,P s;! -a.s. For every n2N, x a countable dense subset J n of C b (R n ) with respect to the locally uniform topology. For every r2 [0;T ]\ (Q[fTg), denote by r the set of all : ! R of the form = f(X sn ;:::;X s 1 ) for some n2N, s 1 , :::, s n 2 [0;r]\ (Q[frg) with s 1 :::s n , and f2 J n . Put :=\ r r . 
Since $\Lambda$ is countable, there exists, by Step 1 of the proof of Proposition 3.7.7, a set $\Omega_0$ with $\mathbb{P}(\Omega_0) = 1$ such that, for every $r, r' \in [0,T] \cap (\mathbb{Q} \cup \{T\})$ with $r \le r'$, every $\eta \in \Lambda_r$, and every $\tilde\omega \in \Omega_0$,
$E^{\tau,\tilde\omega}[\eta\, M_{r' \vee \tau(\tilde\omega)}] = E^{\tilde\tau,\tilde\omega}[\eta\, M_{r' \vee \tilde\tau(\tilde\omega)}]$
$= E^{s,\omega}[\eta\, M_{r' \vee \tilde\tau} \mid \mathcal{F}^0_{\tilde\tau+}](\tilde\omega)$ by (3.7.2)
$= E^{s,\omega}[\eta\, M_{r' \vee \tilde\tau} \mid \mathcal{F}^{s,\omega}_{\tilde\tau}](\tilde\omega)$ by (3.7.4)
$= E^{s,\omega}[\eta\, E^{s,\omega}[M_{r' \vee \tilde\tau} \mid \mathcal{F}^{s,\omega}_{r \vee \tilde\tau}] \mid \mathcal{F}^{s,\omega}_{\tilde\tau}](\tilde\omega)$ since $\eta \in b\mathcal{F}^0_r \subset b\mathcal{F}_{r \vee \tilde\tau}$
$= E^{s,\omega}[\eta\, M_{r \vee \tilde\tau} \mid \mathcal{F}^{s,\omega}_{\tilde\tau}](\tilde\omega)$
$= E^{s,\omega}[\eta\, M_{r \vee \tilde\tau} \mid \mathcal{F}^0_{\tilde\tau+}](\tilde\omega)$ by (3.7.4)
$= E^{\tilde\tau,\tilde\omega}[\eta\, M_{r \vee \tilde\tau(\tilde\omega)}]$ by (3.7.2)
$= E^{\tau,\tilde\omega}[\eta\, M_{r \vee \tau(\tilde\omega)}]$.
Since $\mathcal{F}^0_r = \sigma(\Lambda_r)$, the identity $E^{\tau,\tilde\omega}[\eta\, M_{r' \vee \tau(\tilde\omega)}] = E^{\tau,\tilde\omega}[\eta\, M_{r \vee \tau(\tilde\omega)}]$ holds for every $\eta \in b\mathcal{F}^0_r$, and hence also for every $\eta \in b\mathcal{F}^{\tau,\tilde\omega}_r$ because, by Proposition III.4.32 in [49], $\mathcal{F}^0_r \vee \mathcal{N}^{\tau,\tilde\omega} = \mathcal{F}^{\tau,\tilde\omega}_r$. Next, let $0 \le \hat s \le s' < T$, and let $(r_n)$ and $(r'_n)$ be sequences in $[0,T] \cap \mathbb{Q}$ with $\hat s = \inf_n r_n$, $s' = \inf_n r'_n$, and $r_n \le r'_n$. Then, for every $\tilde\omega \in \Omega_0$ and every $\eta \in b\mathcal{F}^{\tau,\tilde\omega}_{\hat s}$, we have, by right-continuity of $M$ and the dominated convergence theorem,
$E^{\tau,\tilde\omega}[\eta\, M_{s' \vee \tau(\tilde\omega)}] = \lim_n E^{\tau,\tilde\omega}[\eta\, M_{r'_n \vee \tau(\tilde\omega)}] = \lim_n E^{\tau,\tilde\omega}[\eta\, M_{r_n \vee \tau(\tilde\omega)}] = E^{\tau,\tilde\omega}[\eta\, M_{\hat s \vee \tau(\tilde\omega)}].$
Since $M$ is $\mathcal{F}^0_+$-adapted, it is also $\mathcal{F}^{\tau,\tilde\omega}$-adapted, and thus $M_{\cdot \vee \tau(\tilde\omega)}$ is an $(\mathcal{F}^{\tau,\tilde\omega}, \mathbb{P}^{\tau,\tilde\omega})$-martingale after time $\tau(\tilde\omega)$.

3.7.2 Appendix B. Skorohod's topologies

In this section, we recall some definitions and basic results from [98]. Put $D := D([0,T]; \mathbb{R})$. The completed graph of a path $\omega \in D$ is defined as the set
$\Gamma_\omega := \{(t,x) \in [0,T] \times \mathbb{R} : \exists\, \lambda \in [0,1] : x = \lambda \omega_t + (1 - \lambda) \omega_{t-}\},$
where $\omega_{0-} := \omega_0$. We equip $\Gamma_\omega$ with a linear order defined as follows: given $(t,x)$, $(t',x') \in \Gamma_\omega$, we write $(t,x) \le (t',x')$ if either $t < t'$ or both $t = t'$ and $|x - \omega_{t-}| \le |x' - \omega_{t-}|$ hold. A parametric representation of $\Gamma_\omega$ is a mapping $(r,z) : [0,1] \to \Gamma_\omega$ that is continuous, nondecreasing, and surjective. The set of all parametric representations of $\omega$ is denoted by $\Pi(\omega)$. Define $d_{M_1} : D \times D \to \mathbb{R}_+$ by
$d_{M_1}(\omega, \omega') := \inf_{(r,z) \in \Pi(\omega),\, (r',z') \in \Pi(\omega')} \|r - r'\|_\infty \vee \|z - z'\|_\infty.$
Note that $d_{M_1}$ is a metric (Theorem 12.3.1 in [98]).

Lemma 3.7.9.
Let $0 \le t_n \le t_0 \le T$. Put $\omega^n := 1_{[t_n, T]}$, $\omega := 1_{[t_0, T]} \in D$. Then $d_{M_1}(\omega, \omega^n) \le t_0 - t_n$.

Proof. Fix $0 < a < b < 1$. We distinguish between the cases $t_0 < T$ and $t_0 = T$.

(i) Without loss of generality, let $T = 2$, $t_0 = 1$, and $t_n = 1 - n^{-1}$. Then $\omega = 1_{[1,2]}$ and $\omega^n = 1_{[1 - n^{-1}, 2]}$. Define $(r,z) \in \Pi(\omega)$ and $(r^n, z^n) \in \Pi(\omega^n)$ by
$r(t) := \frac{t}{a}\, 1_{[0,a]}(t) + 1_{(a,b]}(t) + \Big[\frac{1-t}{1-b} \cdot 1 + \frac{t-b}{1-b} \cdot 2\Big] 1_{(b,1]}(t),$
$r^n(t) := \frac{t}{a}(1 - n^{-1})\, 1_{[0,a]}(t) + (1 - n^{-1})\, 1_{(a,b]}(t) + \Big[\frac{1-t}{1-b}(1 - n^{-1}) + \frac{t-b}{1-b} \cdot 2\Big] 1_{(b,1]}(t),$
$z(t) = z^n(t) := \frac{t-a}{b-a}\, 1_{[a,b]}(t) + 1_{(b,1]}(t).$
Then $\|r - r^n\|_\infty = n^{-1}$ and $\|z - z^n\|_\infty = 0$. Thus $d_{M_1}(\omega, \omega^n) \le n^{-1}$.

(ii) Without loss of generality, let $T = t_0 = 1$ and $t_n = 1 - n^{-1}$. Then $\omega = 1_{\{T\}}$ and $\omega^n = 1_{[1 - n^{-1}, 1]}$. Define $(r,z) \in \Pi(\omega)$ and $(r^n, z^n) \in \Pi(\omega^n)$ by
$r(t) := \frac{t}{a}\, 1_{[0,a]}(t) + 1_{(a,1]}(t),$
$r^n(t) := \frac{t}{a}(1 - n^{-1})\, 1_{[0,a]}(t) + (1 - n^{-1})\, 1_{(a,b]}(t) + \Big[\frac{1-t}{1-b}(1 - n^{-1}) + \frac{t-b}{1-b} \cdot 1\Big] 1_{(b,1]}(t),$
$z(t) = z^n(t) := \frac{t-a}{b-a}\, 1_{[a,b]}(t) + 1_{(b,1]}(t).$
Then $\|r - r^n\|_\infty = n^{-1}$, $\|z - z^n\|_\infty = 0$, and consequently $d_{M_1}(\omega, \omega^n) \le n^{-1}$.

The $d_{M_2}$-metric on $D$ is defined by
$d_{M_2}(\omega, \tilde\omega) := m_H(\Gamma_\omega, \Gamma_{\tilde\omega}),$
where $m_H$ is the Hausdorff distance, that is, given closed sets $A, B \subset [0,T] \times \mathbb{R}$,
$m_H(A,B) := \Big[\sup_{a \in A} \inf_{b \in B} |a - b|\Big] \vee \Big[\sup_{b \in B} \inf_{a \in A} |b - a|\Big].$
The uniform topology on $D$ and the (usual) Skorohod $J_1$-topology are induced by the following metrics:
$d_U(\omega, \tilde\omega) := \|\omega - \tilde\omega\|_\infty,$
$d_{J_1}(\omega, \tilde\omega) := \inf_\lambda \sup_{s \in [0,T]} |\lambda(s) - s| \vee |\omega_{\lambda(s)} - \tilde\omega_s|,$
where $\lambda$ runs through the set of all strictly increasing continuous functions from $[0,T]$ onto $[0,T]$ satisfying $\lambda(0) = 0$ and $\lambda(T) = T$.

Remark 3.7.10. We have $d_{M_1} \le d_{J_1} \le d_U$ (Theorem 12.3.2 in [98]).

3.7.3 Appendix C. Auxiliary results

Lemma 3.7.11. Let $u \in C(\bar\Lambda)$. Then $|\Delta u_t| > 0$ implies $|\Delta X_t| > 0$.

Proof. Fix $(t,\omega) \in \bar\Lambda$ and let $c := |\Delta u(t,\omega)| > 0$. Then there exists an $m_0$ such that, for every $m \ge m_0$,
$|u(t,\omega) - u(t - m^{-1}, \omega)| \ge c/2.$
Since $u$ is continuous, there exists a $\delta = \delta(t,\omega) > 0$ (independent of $m$) such that, for every $(t',\omega') \in \bar\Lambda$,
$d_\infty((t,\omega), (t',\omega')) < \delta \implies |u(t,\omega) - u(t',\omega')| < c/2.$
Let $t' = t - m^{-1}$, $\omega' = \omega$.
Then
$d_\infty((t,\omega), (t',\omega')) = m^{-1} + \sup_{s \in [0,T]} |\omega_{s \wedge t} - \omega_{s \wedge (t - m^{-1})}| = m^{-1} + \sup_{s \in [t - m^{-1}, t]} |\omega_s - \omega_{t - m^{-1}}| \ge \delta$
if $m \ge m_0$. Let $m_1 \ge m_0$ be large enough so that $m^{-1} < \delta/2$ whenever $m \ge m_1$. Now, let $m \ge m_1$. Then
$\sup_{s \in [t - m^{-1}, t]} |\omega_s - \omega_{t - m^{-1}}| > \delta/2.$
Letting $m \to \infty$ yields $|\Delta X_t(\omega)| \ge \delta/2 > 0$.

Lemma 3.7.12. Let $(S, \mathcal{S}, \mu)$ be a finite measure space, let $(P, \mathcal{P})$ be a measurable space, and let $f$, $\{f_n\}_{n \in \mathbb{N}}$ be a uniformly bounded family of measurable functions from $S \times P$ to $[0,\infty)$ such that, for $\mu$-a.e. $s \in S$,
$\sup_{p \in P} |f_n(s,p) - f(s,p)| \to 0$ as $n \to \infty$.
Then
$\sup_{p \in P} \int |f_n(s,p) - f(s,p)|\, \mu(ds) \to 0$ as $n \to \infty$.

Proof. The proof follows the lines of a standard proof of the dominated convergence theorem. By Fatou's lemma (in its reverse form, applicable since the family is uniformly bounded and $\mu$ is finite),
$\limsup_{n \to \infty} \sup_{p \in P} \int |f_n(s,p) - f(s,p)|\, \mu(ds) \le \limsup_{n \to \infty} \int \operatorname*{ess\,sup}_{p \in P} |f_n(s,p) - f(s,p)|\, \mu(ds)$
$\le \int \limsup_{n \to \infty} \operatorname*{ess\,sup}_{p \in P} |f_n(s,p) - f(s,p)|\, \mu(ds) \le \int \limsup_{n \to \infty} \sup_{p \in P} |f_n(s,p) - f(s,p)|\, \mu(ds) = 0.$
This concludes the proof.

Lemma 3.7.13. Let $t, t_n \in [0,T]$ with $t \le t_n$. Let $\kappa \in \mathbb{N}$, $0 = r_0 < r_1 < \dots < r_\kappa < T - t$, and $z_j \in \mathbb{R}^d$, $j = 0, \dots, \kappa$. Let $t_n + r_\kappa < T$. Consider two paths $\omega, \omega^n \in \Omega$ defined by
$\omega := \sum_{j=0}^{\kappa - 1} z_j\, 1_{[t + r_j,\, t + r_{j+1})} + z_\kappa\, 1_{[t + r_\kappa,\, T)}, \qquad \omega^n := \sum_{j=0}^{\kappa - 1} z_j\, 1_{[t_n + r_j,\, t_n + r_{j+1})} + z_\kappa\, 1_{[t_n + r_\kappa,\, T)}.$
Then $d_{J_1}(\omega^n, \omega) \le 2(t_n - t)$.

Proof. Let $\lambda^n : [0,T] \to [0,T]$ be a strictly increasing and continuous function with $\lambda^n(0) = 0$, $\lambda^n(t + r_j) = t_n + r_j$, $j = 0, \dots, \kappa$, $\lambda^n(T) = T$, and $\|\lambda^n - \mathrm{id}\|_\infty \le 2(t_n - t)$. Then, given $s \in [0, t)$, we have $\omega_s - \omega^n_{\lambda^n(s)} = 0$; given $s \in [t + r_j, t + r_{j+1})$, $j = 0, \dots, \kappa - 1$, we have
$\omega_s - \omega^n_{\lambda^n(s)} = z_j - \omega^n_{\lambda^n(t + r_j)} = z_j - \omega^n_{t_n + r_j} = 0;$
and, given $s \in [t + r_\kappa, T)$, we have
$\omega_s - \omega^n_{\lambda^n(s)} = z_\kappa - \omega^n_{\lambda^n(t + r_\kappa)} = z_\kappa - \omega^n_{t_n + r_\kappa} = 0.$
This concludes the proof.

Lemma 3.7.14. Fix $(t,\omega) \in \bar\Lambda$ and $s \in [t,T]$. For $\mathbb{P}^{t,\omega}$-a.e. $\tilde\omega \in \Omega$,
$dX^{c,s,\tilde\omega}_r = dX^{c,t,\omega}_r, \quad s \le r \le T, \quad \mathbb{P}^{s,\tilde\omega}\text{-a.s.}$

Proof. Note that
$X = X_t + (B - B_t) + X^{c,t,\omega} + z * (\mu^X - \nu)$ on $[t,T]$, $\mathbb{P}^{t,\omega}$-a.s.,
$X = X_s + (B - B_s) + X^{c,s,\tilde\omega} + z * (\mu^X - \nu)$ on $[s,T]$, $\mathbb{P}^{s,\tilde\omega}$-a.s.
Define a process $V$ on $[t,T]$ by $V := X - B - z * (\mu^X - \nu)$ and put
$\Omega' := \{\omega' \in \Omega : X^{c,t,\omega}_r(\omega') = (V_r - X_t + B_t)(\omega'),\ t \le r \le T\}.$
Then $\mathbb{P}^{t,\omega}(\Omega') = 1$. Let $\tilde\omega \in \Omega'$. Put
$\Omega'' := \{\omega'' \in \Omega' : X^{c,s,\tilde\omega}_r(\omega'') = (V_r - X_s + B_s)(\omega''),\ s \le r \le T\}.$
Then $\mathbb{P}^{s,\tilde\omega}(\Omega'') = 1$ and, for every $(r, \omega'') \in [s,T] \times \Omega''$,
$X^{c,s,\tilde\omega}_r(\omega'') = (V_r - X_t + B_t)(\omega'') + (X_t - X_s + B_s - B_t)(\omega'') = X^{c,t,\omega}_r(\omega'') + (X_t - X_s + B_s - B_t)(\omega''),$
which concludes the proof.

Lemma 3.7.15. Fix $(s,\omega) \in \bar\Lambda$. For $i = 1, 2$, let $S^i_s \in \mathbb{R}$, let $\varphi^i$ be a predictable process on $[s,T]$, $H^i \in L^2_{\mathrm{loc}}(X^{c,s,\omega}, \mathbb{P}^{s,\omega})$, and $W^i \in G_{\mathrm{loc}}(p_s \mu^X, \mathbb{P}^{s,\omega})$. Then we have the following:

(a) The process $S^i$ defined by
$S^i := S^i_s + \varphi^i \cdot t + H^i \cdot X^{c,s,\omega} + W^i * (\mu^X - \nu)$
is a special $(\mathcal{F}^0_+, \mathbb{P}^{s,\omega})$-semimartingale on $[s,T]$ with characteristics $(\tilde B^i, \tilde C^i, \tilde\nu^i)$, where $\tilde B^i = \varphi^i \cdot t$, $\tilde C^i = \sum_{k,l=1}^d (H^{ik} H^{il} c^{kl}) \cdot t$, and $\tilde\nu^i$ is a random measure on $[s,T] \times \mathbb{R}$ defined by
$\tilde\nu^i([s,t] \times A, \tilde\omega) := \nu(\{(r,z) \in [s,T] \times \mathbb{R}^d : (r, W^i(r,z,\tilde\omega)) \in [s,t] \times (A \setminus \{0\})\}, \tilde\omega)$
for every $(t, A, \tilde\omega) \in [s,T] \times \mathcal{B}(\mathbb{R}) \times \Omega$.

(b) The 2-dimensional process $S := (S^1, S^2)$ is a special $(\mathcal{F}^0_+, \mathbb{P}^{s,\omega})$-semimartingale on $[s,T]$ with characteristics $(\tilde B, \tilde C, \tilde\nu)$, where $\tilde B = (\tilde B^1, \tilde B^2)$,
$\tilde C^{ij} = \sum_{k,l=1}^d (H^{ik} H^{jl} c^{kl}) \cdot t$
for $i, j = 1, 2$, and $\tilde\nu$ is a random measure on $[s,T] \times \mathbb{R}^2$ defined by
$\tilde\nu([s,t] \times A_1 \times A_2, \tilde\omega) := \nu(\{(r,z) \in [s,T] \times \mathbb{R}^d : (r, W^1(r,z,\tilde\omega), W^2(r,z,\tilde\omega)) \in [s,t] \times (A_1 \setminus \{0\}) \times (A_2 \setminus \{0\})\}, \tilde\omega)$
for every $(t, A_1, A_2, \tilde\omega) \in [s,T] \times \mathcal{B}(\mathbb{R}) \times \mathcal{B}(\mathbb{R}) \times \Omega$.

(c) We have
$S^1 S^2 = S^1_s S^2_s + \Big[S^2_- \varphi^1 + S^1_- \varphi^2 + \sum_{k,l=1}^d H^{1,k} H^{2,l} c^{kl} + \int_{\mathbb{R}^d} W^1 W^2\, K(dz)\Big] \cdot t + \sum_{k=1}^d (S^2_- H^{1,k} + S^1_- H^{2,k}) \cdot X^{k,c,s,\omega} + (S^1_- x_2 + S^2_- x_1 + x_1 x_2) * (\mu^S - \tilde\nu).$

Proof. (a) Using Theorem I.4.40 of [49], we get
$\Big\langle \sum_{k=1}^d H^{ik} \cdot X^{k,c,s,\omega} \Big\rangle = \sum_{k=1}^d \Big\langle H^{ik} \cdot X^{k,c,s,\omega},\ \sum_{l=1}^d H^{il} \cdot X^{l,c,s,\omega} \Big\rangle = \sum_{k,l=1}^d (H^{ik} H^{il}) \cdot \langle X^{k,c,s,\omega}, X^{l,c,s,\omega} \rangle = \sum_{k,l=1}^d (H^{ik} H^{il} c^{kl}) \cdot t.$
Now it suffices to show that $\bar x * \tilde\nu^i = W^i * \nu$. Define a function $\bar x : [s,T] \times \mathbb{R} \to \mathbb{R}$ by $\bar x(t,x) := x$. Then, by Satz 19.1 in [4],
$(\bar x * \tilde\nu^i)_t = \int_s^t \int_{\mathbb{R}} \bar x(r,x)\, \tilde\nu^i(dr, dx, \tilde\omega) = \int_s^t \int_{\mathbb{R}^d} \bar x(r, W^i(r,z,\tilde\omega))\, \nu(dr, dz, \tilde\omega) = (W^i * \nu)_t.$

(b) By Theorem I.4.40 in [49],
$\langle S^{i,c}, S^{j,c} \rangle = \Big\langle \sum_{k=1}^d H^{ik} \cdot X^{k,c,s,\omega},$
$\sum_{l=1}^d H^{jl} \cdot X^{l,c,s,\omega} \Big\rangle = \sum_{k,l=1}^d (H^{ik} H^{jl} c^{kl}) \cdot t.$
Next we show that $\bar x_i * \tilde\nu = W^i * \nu$, $i = 1, 2$. Define functions $\bar x_i : [s,T] \times \mathbb{R}^2 \to \mathbb{R}$ by $\bar x_i(t, x_1, x_2) := x_i$, $i = 1, 2$. Then, again by Satz 19.1 in [4],
$(\bar x_i * \tilde\nu)_t = \int_s^t \int_{\mathbb{R}^2} \bar x_i(r, x_1, x_2)\, \tilde\nu(dr, dx, \tilde\omega) = \int_s^t \int_{\mathbb{R}^d} \bar x_i(r, W^1(r,z,\tilde\omega), W^2(r,z,\tilde\omega))\, \nu(dr, dz, \tilde\omega) = (W^i * \nu)_t.$

(c) By (b) and by Itô's formula based on local characteristics (see, e.g., Section 2.1.2 of [48]),
$S^1 S^2 = S^1_s S^2_s + S^2_- \cdot \tilde B^1 + S^1_- \cdot \tilde B^2 + \tilde C^{1,2} + \big[(S^1_- + x_1)(S^2_- + x_2) - S^1_- S^2_- - S^2_- x_1 - S^1_- x_2\big] * \tilde\nu + S^2_- \cdot S^{1,c} + S^1_- \cdot S^{2,c} + \big[(S^1_- + x_1)(S^2_- + x_2) - S^1_- S^2_-\big] * (\mu^S - \tilde\nu).$
To conclude, note that $S^j_- \cdot \tilde B^i = (S^j_- \varphi^i) \cdot t$, $\tilde C^{1,2} = \sum_{k,l=1}^d (H^{1,k} H^{2,l} c^{kl}) \cdot t$,
$\big[(S^1_- + x_1)(S^2_- + x_2) - S^1_- S^2_- - S^2_- x_1 - S^1_- x_2\big] * \tilde\nu = (x_1 x_2) * \tilde\nu = (W^1 W^2) * \nu$ (by Satz 19.1 of [4]) $= (W^1 W^2) * (dr\, K(dz)),$
and that $S^j_- \cdot S^{i,c} = \sum_{k=1}^d (S^j_- H^{i,k}) \cdot X^{k,c,s,\omega}$.

Chapter 4

Pathwise Itô calculus for rough paths and RPDEs with path-dependent coefficients¹

¹ This is joint work with Jianfeng Zhang. A version with minor modifications of this chapter is available online as C. Keller and J. Zhang, Pathwise Itô calculus for rough paths and rough PDEs with path dependent coefficients, arXiv preprint arXiv:1412.7464 (2014). The authors would like to thank Joscha Diehl, Peter Friz, and Harald Oberhauser for very helpful discussions on rough path theory and suggestions on the present work.

4.1 Introduction

First initiated by Lyons [68], rough path theory has been studied extensively, and its applications have been found in many areas, including the recent application to KPZ equations by Hairer [44]. We refer to Lyons [67], Friz and Hairer [20], Friz and Victoir [41], and the references therein for the general theory and its applications. On the other hand, the functional Itô calculus, initiated by Dupire [29] and further developed by Cont and Fournié [20], has received very strong attention in recent years. In particular, it has proven to be a very convenient language for
viscosity theory of path-dependent PDEs; see Ekren, Keller, Touzi and Zhang [31] and Ekren, Touzi and Zhang [32, 33]. We also refer to Buckdahn, Ma and Zhang [11], Cosso and Russo [21], Leao, Ohashi and Simas [76], and Oberhauser [75] for some recent related works on functional Itô calculus.

Our first goal is to develop a pathwise Itô calculus, in the spirit of Dupire's functional Itô calculus, in the rough path framework with possibly non-geometric rough paths. Based on the bracket process of rough paths, which plays the role of the quadratic variation in semimartingale theory, we introduce path derivatives for the controlled rough paths of Gubinelli [42]. Our first-order spatial path derivative is the same as Gubinelli's derivative, and the time derivative is closely related to the second-order Taylor expansion of controlled rough paths. This allows us to study the structure of a fairly general class of controlled rough paths and, more importantly, to treat rough integration and rough ODEs/PDEs in the same manner as standard Itô calculus. In particular, as observed by Buckdahn, Ma and Zhang [11] in a Brownian motion setting, we show that the pathwise Itô–Ventzell formula is equivalent to the chain rule for our path derivatives, which is crucial for studying rough PDEs and stochastic PDEs. We remark, though, that while we believe this presentation of path derivatives in the rough path framework is new, many related ideas have already been discussed in the literature. Besides [40] and the references therein, we also refer to the recent work of Perkowski and Prömel [87] for some related studies.

We next study rough differential equations of the form
\[
d\theta_t = g(t,\theta_t)\,d\omega_t + f(t,\theta_t)\,d\langle\boldsymbol\omega\rangle_t, \qquad (4.1.1)
\]
where $\boldsymbol\omega$ is a Hölder-$\alpha$ continuous rough path and $\langle\boldsymbol\omega\rangle$ is its bracket process. We remark that we use the Young integration $f(t,\theta_t)\,d\langle\boldsymbol\omega\rangle_t$ rather than the Lebesgue integration $f(t,\theta_t)\,dt$ in the drift term above.
Our study of the above RDE is mainly motivated by the following stochastic differential equations with random coefficients:
\[
dX_t = g(t,\omega,X_t)\,dB_t + f(t,\omega,X_t)\,dt, \qquad (4.1.2)
\]
where $B$ is a Brownian motion on the canonical probability space $(\Omega,\mathcal{F},\mathbb{P})$, $dB_t$ denotes Itô integration, and $g$, $f$ are adapted, namely they depend on the history of the path $\{\omega_s\}_{0\le s\le t}$. In the literature, the coefficients $g$ and $f$ in (4.1.1) typically do not depend on $t$, or are at least Hölder-$(1-\alpha)$ continuous in $t$; see Lejay and Victoir [55]. However, since a Brownian motion sample path $\omega$ is only Hölder-$(\frac12-\varepsilon)$ continuous, by setting $\alpha = \frac12-\varepsilon$ it is not reasonable for (4.1.2) to assume that the mapping $t\mapsto g(t,\omega,x)$ is Hölder-$(1-\alpha)$ continuous as required by [55]. Consequently, we are not able to apply the existing results in the rough path literature to study SDE (4.1.2) with random coefficients. We shall provide various estimates for rough path integrations, which follow more or less standard arguments, and
then establish the well-posedness of RDE (4.1.1) under minimum regularity conditions on the coefficients. To be precise, we require only that $g(\cdot,x)$, $f(\cdot,x)$, and $\partial_\omega g(\cdot,x)$ are Hölder-$\beta$ continuous for some $\beta\in(1-2\alpha,\alpha]$, where $\partial_\omega g$ is the spatial path derivative corresponding to Gubinelli's derivative. This can easily be satisfied by the coefficients of (4.1.2) when $\frac13 < \alpha < \frac12$. We note that the recent works of Gubinelli, Tindel and Torrecilla [43] and of Lyons and Yang [69] have also studied rough integration for more general integrands.

As a direct consequence of the above well-posedness result for RDE (4.1.1), we obtain the pathwise solution of SDE (4.1.2) with random coefficients. Moreover, by restricting the canonical space slightly and by using pathwise stochastic integration, we construct the second-order process $\mathbb{\omega}$ from $\omega$ itself. Then the pathwise solution exists for all $\omega\in\Omega$, without an exceptional $\mathbb{P}$-null set, and the solution $X(\omega)$ is continuous in $\omega$ under the rough path topology.

We would also like to mention that, for linear RDEs, we introduce a decoupling strategy and provide a semi-explicit solution by using the local solution of a certain Riccati-type RDE. This result seems new even for standard linear SDEs in the multidimensional setting. Finally, we extend the theory to the following rough PDEs with less regular coefficients:
\[
du(t,x) = \big[\sigma(t,x)\,\partial_x u + g(t,x,u)\big]\,d\omega_t + f(t,x,u,\partial_x u,\partial^2_{xx}u)\,d\langle\boldsymbol\omega\rangle_t, \qquad (4.1.3)
\]
again motivated by pathwise analysis for stochastic PDEs with random coefficients:
\[
du(t,\omega,x) = \big[\sigma(t,\omega,x)\,\partial_x u + g(t,\omega,x,u)\big]\,dB_t + f(t,\omega,x,u,\partial_x u,\partial^2_{xx}u)\,dt. \qquad (4.1.4)
\]
As is standard in the literature (see, e.g., Kunita [54] for stochastic PDEs and [40] for rough PDEs), the main tool is the (pathwise) method of characteristics. We construct the pathwise characteristics via RDEs against a backward rough path. We remark that the backward rough path we construct is again a rough path. Our result here is crucial for the study of viscosity solutions of SPDEs in Buckdahn, Ma and Zhang [12].

The rest of this chapter is organized as follows. In Section 2 we introduce the basics of our pathwise Itô calculus, in particular the path derivatives of controlled rough paths. In Section 3 we study functions of controlled rough paths and their path derivatives; we provide related estimates and prove the chain rule of path derivatives, which is equivalent to the pathwise Itô–Ventzell formula. In Section 4 we establish well-posedness results for rough differential equations. In particular, for linear RDEs we introduce a decoupling strategy which enables us to construct a semi-explicit global solution. In Section 5 we apply the RDE results to SDEs with random coefficients. Finally, in Section 6 we extend the results to rough PDEs and stochastic PDEs.

Below we collect some notations used throughout this chapter. $T>0$ is a fixed time horizon, and $\mathbb{T} := [0,T]$, $\mathbb{T}_2 := \{(s,t) : 0\le s<t\le T\}$. $d$ is the fixed dimension of the rough paths, and $\mathbb{S}^d$ is the space of $d\times d$ symmetric matrices.
$E$ (and $\tilde E$) denotes a generic Euclidean space, and $|E|$ is the dimension of $E$, namely $E = \mathbb{R}^{|E|}$. By default, elements of $E^n$ are viewed as column vectors. However, for a function $g : y\in E\to\tilde E$, we take the convention that the first-order derivative $\partial_y g\in\tilde E^{1\times|E|}$ is viewed as a row vector, and the second-order derivative $\partial^2_{yy}g := \partial_y[(\partial_y g)^\top]\in\tilde E^{|E|\times|E|}$ is symmetric. Moreover, for $g : (x,y)\in E_1\times E_2\to\tilde E$, $\partial_{xy}g := \partial_x[(\partial_y g)^\top]\in\tilde E^{|E_2|\times|E_1|}$ and $\partial_{yx}g := \partial_y[(\partial_x g)^\top]\in\tilde E^{|E_1|\times|E_2|}$.

$\varphi_{s,t} := \varphi_t - \varphi_s$ for any function $\varphi:\mathbb{T}\to E$ and any $(s,t)\in\mathbb{T}_2$. For $A\in E^{m\times n}$, $A^\top\in E^{n\times m}$ is its transpose. For $x\in E^d$ and $y\in\mathbb{R}^d$, $x\cdot y\in E$ is their inner product. For $A\in E^{m\times n}$ and $\tilde A\in\mathbb{R}^{m\times n}$, $A:\tilde A := \mathrm{Trace}(A\tilde A^\top)\in E$.

For $A = [a_{i,j} : 1\le i\le m,\ 1\le j\le|E|]\in\tilde E^{m\times|E|}$ and $x = [x_{i,j} : 1\le i\le n,\ 1\le j\le|E|]\in E^n = \mathbb{R}^{n\times|E|}$, $A\otimes x\in\tilde E^{m\times n}$ is their convolution, whose $(i,j)$-th component is $\sum_{k=1}^{|E|} a_{i,k}x_{j,k}$. For $A = [a_{i,j} : 1\le i\le|E_1|,\ 1\le j\le|E_2|]\in\tilde E^{|E_1|\times|E_2|}$ and $x = [x_{i,j} : 1\le i\le m,\ 1\le j\le|E_1|]\in E_1^m = \mathbb{R}^{m\times|E_1|}$, $y = [y_{i,j} : 1\le i\le n,\ 1\le j\le|E_2|]\in E_2^n = \mathbb{R}^{n\times|E_2|}$, $A\otimes_2[x,y]\in\tilde E^{m\times n}$ is their double convolution, whose $(i,j)$-th component is $\sum_{k=1}^{|E_1|}\sum_{l=1}^{|E_2|} a_{k,l}x_{i,k}y_{j,l}$.

4.2 Rough path integration and path derivatives

In this section we present the basics of rough path theory as well as our pathwise Itô calculus.

4.2.1 Rough paths and quadratic variation

Denote, for a constant $\lambda>0$,
\[
\Omega^\lambda(E) := \Big\{\omega\in C(\mathbb{T};E) : \|\omega\|_\lambda<\infty\Big\},\quad\text{where } \|\omega\|_\lambda := \sup_{(s,t)\in\mathbb{T}_2}\frac{|\omega_{s,t}|}{|t-s|^\lambda};\qquad
\overline\Omega^\lambda(E) := \Big\{\mathbb{\omega}\in C(\mathbb{T}_2;E) : \|\mathbb{\omega}\|_\lambda<\infty\Big\},\quad\text{where } \|\mathbb{\omega}\|_\lambda := \sup_{(s,t)\in\mathbb{T}_2}\frac{|\mathbb{\omega}_{s,t}|}{|t-s|^\lambda}. \qquad (4.2.1)
\]
It is clear that
\[
\|\omega\|_\infty := \sup_{0\le t\le T}|\omega_t| \le |\omega_0| + T^\lambda\|\omega\|_\lambda,\qquad \forall\,\omega\in\Omega^\lambda(E). \qquad (4.2.2)
\]
From now on, we fix two parameters:
\[
\alpha\in\Big(\tfrac13,\tfrac12\Big),\qquad \beta\in(1-2\alpha,\alpha]. \qquad (4.2.3)
\]
Our space of rough paths is
\[
\Omega^\alpha_0 := \Big\{\boldsymbol\omega = (\omega,\mathbb{\omega})\in\Omega^\alpha(\mathbb{R}^d)\times\overline\Omega^{2\alpha}(\mathbb{R}^{d\times d}) : \mathbb{\omega}_{s,t} - \mathbb{\omega}_{s,r} - \mathbb{\omega}_{r,t} = \omega_{s,r}\otimes\omega_{r,t}\ \ \forall\,0\le s<r<t\le T\Big\}, \qquad (4.2.4)
\]
equipped with
\[
\|\boldsymbol\omega\|_\alpha := \|\omega\|_\alpha + \|\mathbb{\omega}\|_{2\alpha}. \qquad (4.2.5)
\]
The requirement in the second line of (4.2.4) is called Chen's relation. We remark that in general $\|\lambda\boldsymbol\omega\|_\alpha \ne |\lambda|\,\|\boldsymbol\omega\|_\alpha$ for a constant $\lambda$.
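Chen's relation in (4.2.4) is not only an asymptotic constraint: for the left-point (Itô-type) level-2 lift of a discretized path it holds as an exact algebraic identity. The following numerical sketch (our own illustration, not part of the dissertation; the function name `levy_increments` is ours) builds the second-level increments $\mathbb{\omega}_{s,t} = \sum_{s\le t_k<t}\omega_{s,t_k}\otimes\omega_{t_k,t_{k+1}}$ over a grid and checks Chen's relation $\mathbb{\omega}_{s,t} - \mathbb{\omega}_{s,r} - \mathbb{\omega}_{r,t} = \omega_{s,r}\otimes\omega_{r,t}$ at grid points:

```python
import numpy as np

def levy_increments(path):
    """Left-point (Ito-type) level-2 lift of a discrete path.

    path: array of shape (n+1, d) with samples omega_{t_0}, ..., omega_{t_n}.
    Returns W with W[i, j] = sum_{i <= k < j} omega_{t_i,t_k} (x) omega_{t_k,t_{k+1}},
    the d x d second-level increment over [t_i, t_j].
    """
    n, d = path.shape[0] - 1, path.shape[1]
    W = np.zeros((n + 1, n + 1, d, d))
    for i in range(n + 1):
        for j in range(i + 1, n + 1):
            acc = np.zeros((d, d))
            for k in range(i, j):
                acc += np.outer(path[k] - path[i], path[k + 1] - path[k])
            W[i, j] = acc
    return W

rng = np.random.default_rng(0)
increments = rng.normal(scale=0.1, size=(50, 2))
path = np.vstack([np.zeros(2), np.cumsum(increments, axis=0)])
W = levy_increments(path)

# Chen's relation at grid points s < r < t (exact, up to floating-point error):
s, r, t = 3, 20, 45
lhs = W[s, t] - W[s, r] - W[r, t]
rhs = np.outer(path[r] - path[s], path[t] - path[r])
assert np.allclose(lhs, rhs)
```

The identity follows by splitting the Riemann sum over $[t_s,t_t]$ at $t_r$: the cross terms that remain are exactly $\omega_{s,r}\otimes\omega_{r,t}$, mirroring how Chen's relation is derived for iterated integrals.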
We next introduce the bracket process of $\boldsymbol\omega$:
\[
\langle\boldsymbol\omega\rangle_t := \omega_{0,t}(\omega_{0,t})^\top - \mathbb{\omega}_{0,t} - \mathbb{\omega}_{0,t}^\top \in\mathbb{S}^d. \qquad (4.2.6)
\]
By (4.2.4), one can easily check that
\[
\langle\boldsymbol\omega\rangle_{s,t} = \omega_{s,t}(\omega_{s,t})^\top - \mathbb{\omega}_{s,t} - \mathbb{\omega}_{s,t}^\top \quad\text{and thus}\quad \langle\boldsymbol\omega\rangle\in\Omega^{2\alpha}(\mathbb{S}^d). \qquad (4.2.7)
\]

Remark 4.2.1. (i) Clearly $\langle\boldsymbol\omega\rangle = 0$ if and only if $\boldsymbol\omega$ is a geometric rough path. This process is intrinsic for non-geometric rough paths and makes our study much more convenient.

(ii) The process $\langle\boldsymbol\omega\rangle$ is called the bracket process, denoted $[\boldsymbol\omega]$, of the so-called reduced rough path in [40]. As we will see later, this process plays essentially the same role as the quadratic variation process in semimartingale theory. However, a typical rough path may not have finite quadratic variation.

The following result is straightforward and its proof is omitted.

Lemma 4.2.2. For any $\boldsymbol\omega$, $\tilde{\boldsymbol\omega}\in\Omega^\alpha_0$, we have
\[
\|\langle\boldsymbol\omega\rangle\|_{2\alpha} \le \|\boldsymbol\omega\|_\alpha\big[2+\|\boldsymbol\omega\|_\alpha\big],\qquad \|\langle\boldsymbol\omega\rangle - \langle\tilde{\boldsymbol\omega}\rangle\|_{2\alpha} \le \big[\|\boldsymbol\omega\|_\alpha + \|\tilde{\boldsymbol\omega}\|_\alpha + 2\big]\,\|\boldsymbol\omega - \tilde{\boldsymbol\omega}\|_\alpha. \qquad (4.2.8)
\]

4.2.2 Rough path integration

To study rough path integration against $\boldsymbol\omega$, we first introduce the controlled rough paths of Gubinelli [42], which can be viewed as paths with $C^1$-regularity relative to the rough path.

Definition 4.2.3. For each $\omega\in\Omega^\alpha(\mathbb{R}^d)$, the space $\mathcal{C}^1_{\omega,\beta}(E)$ consists of $E$-valued controlled rough paths $\theta\in\Omega^\alpha(E)$ such that there exists $\partial_\omega\theta\in\Omega^\beta(E^{1\times d})$ satisfying
\[
R^{\omega,\theta}\in\overline\Omega^{\alpha+\beta}(E),\quad\text{where } R^{\omega,\theta}_{s,t} := \theta_{s,t} - \partial_\omega\theta_s\,\omega_{s,t},\ \forall\,(s,t)\in\mathbb{T}_2.
\]
We note that, for notational simplicity, we take the convention that $\partial_\omega\theta$ is a row vector.

Remark 4.2.4. (i) The derivative $\partial_\omega\theta$ depends on $\omega$, but not on $\mathbb{\omega}$.

(ii) In general $\partial_\omega\theta$ is not unique. However, when $\omega$ is truly rough, namely $\omega\in\Omega^\alpha_*$ as defined in (4.2.9) below, $\partial_\omega\theta$ is unique; see [40], Proposition 6.4. For ease of presentation, we shall assume $\omega\in\Omega^\alpha_*$. However, most of our results still hold true when $\boldsymbol\omega\in\Omega^\alpha_0$, provided that we specify a version of $\partial_\omega\theta$.

(iii) $\partial_\omega\theta$ is called the Gubinelli derivative in the rough path literature. As we will see in Section 4.5, when $\omega$ is a sample path of Brownian motion it coincides with the path derivative introduced in [11], so in this chapter we also call it the path derivative.
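For the left-point (Itô-type) level-2 lift of a discretized path, the bracket (4.2.6) reduces, exactly and pathwise, to the realized quadratic variation $\sum_k \Delta\omega_k\Delta\omega_k^\top$: expanding $\omega_{s,t}\omega_{s,t}^\top = \sum_{k,l}\Delta\omega_k\Delta\omega_l^\top$ and subtracting $\mathbb{\omega}+\mathbb{\omega}^\top = \sum_{k\ne l}\Delta\omega_k\Delta\omega_l^\top$ leaves precisely the diagonal terms. The following sketch (our own illustration; the helper name `bracket` is ours) checks this identity and the symmetry claimed in (4.2.6):

```python
import numpy as np

def bracket(path):
    """Bracket <omega>_{0,T} of the left-point level-2 lift of a discrete path.

    By (4.2.6): <omega> = omega_{0,T} omega_{0,T}^T - W - W^T, where
    W = sum_{l < k} dw_l (x) dw_k is the Ito-type level-2 increment over [0, T].
    """
    dw = np.diff(path, axis=0)
    total = path[-1] - path[0]
    W = sum(np.outer(path[k] - path[0], dw[k]) for k in range(len(dw)))
    return np.outer(total, total) - W - W.T

rng = np.random.default_rng(1)
path = np.vstack([np.zeros(3), np.cumsum(rng.normal(size=(200, 3)), axis=0)])
dw = np.diff(path, axis=0)

# <omega> equals the realized quadratic variation, exactly (not just in the limit):
qv = sum(np.outer(inc, inc) for inc in dw)
assert np.allclose(bracket(path), qv)
# In particular the bracket is S^d-valued (symmetric), as stated in (4.2.6).
assert np.allclose(bracket(path), bracket(path).T)
```

For a Brownian sample path on a fine grid, this realized quadratic variation is close to $(t-s)I_d$, consistent with the identification $\langle\boldsymbol\omega\rangle_t = tI_d$ for the Itô lift discussed later in the chapter.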
For ease of presentation, from now on we restrict to $\omega\in\Omega^\alpha_*$, so that $\partial_\omega\theta$ is unique:
\[
\Omega^\alpha_* := \Big\{\omega\in\Omega^\alpha_0 : \text{there exists a dense subset } A\subseteq[0,T) \text{ such that } \lim_{t\downarrow s}\frac{|v\cdot\omega_{s,t}|}{(t-s)^{2\alpha}} = \infty \text{ for all } s\in A \text{ and } v\in\mathbb{R}^d\setminus\{0\}\Big\}. \qquad (4.2.9)
\]
For $\omega\in\Omega^\alpha_*$, we equip the space $\mathcal{C}^1_{\omega,\beta}(E)$ with the semi-norms
\[
\|\theta\|_{\omega,\beta} := \|\partial_\omega\theta\|_\beta + \|R^{\omega,\theta}\|_{\alpha+\beta},\qquad |||\theta|||_{\omega,\beta} := \|\theta\|_{\omega,\beta} + |\partial_\omega\theta_0|,
\]
\[
\bar d_{\omega,\tilde\omega}(\theta,\tilde\theta) := d_{\omega,\tilde\omega}(\theta,\tilde\theta) + |\partial_\omega\theta_0 - \partial_{\tilde\omega}\tilde\theta_0|,\qquad d_{\omega,\tilde\omega}(\theta,\tilde\theta) := \|\partial_\omega\theta - \partial_{\tilde\omega}\tilde\theta\|_\beta + \|R^{\omega,\theta} - R^{\tilde\omega,\tilde\theta}\|_{\alpha+\beta}. \qquad (4.2.10)
\]
In particular, we note that
\[
d_\omega(\theta,\tilde\theta) := d_{\omega,\omega}(\theta,\tilde\theta) = \|\theta-\tilde\theta\|_{\omega,\beta},\qquad \bar d_\omega(\theta,\tilde\theta) := \bar d_{\omega,\omega}(\theta,\tilde\theta) = |||\theta-\tilde\theta|||_{\omega,\beta}. \qquad (4.2.11)
\]
By (4.2.2) one can easily check that
\[
\Omega^{\alpha+\beta}(E)\subseteq\mathcal{C}^1_{\omega,\beta}(E),\ \text{with } \partial_\omega\theta = 0 \text{ and } \|\theta\|_{\omega,\beta} = \|\theta\|_{\alpha+\beta}\ \forall\,\theta\in\Omega^{\alpha+\beta};\qquad
\mathcal{C}^1_{\omega,\beta}(E)\subseteq\Omega^\alpha(E),\ \text{with } \|\theta\|_\alpha \le |\partial_\omega\theta_0|\,\|\omega\|_\alpha + T^\beta[1+\|\omega\|_\alpha]\,\|\theta\|_{\omega,\beta}\ \forall\,\theta\in\mathcal{C}^1_{\omega,\beta}(E). \qquad (4.2.12)
\]
We are now ready to define the rough path integration. For each $\boldsymbol\omega\in\Omega^\alpha_*$, $\theta\in\mathcal{C}^1_{\omega,\beta}(E^d)$, and each partition $\pi : 0 = t_0 < \cdots < t_n = T$, denote
\[
\Theta^\pi_t := \sum_{i=0}^{n-1}\Big[\theta_{t_i\wedge t}\cdot\omega_{t_i\wedge t,\,t_{i+1}\wedge t} + \partial_\omega\theta_{t_i\wedge t} : \mathbb{\omega}_{t_i\wedge t,\,t_{i+1}\wedge t}\Big]. \qquad (4.2.13)
\]
Here, for $\theta = [\theta^1,\ldots,\theta^d]^\top$, we take the convention that $\partial_\omega\theta\in E^{d\times d}$ with $i$-th row $\partial_\omega\theta^i$. Following Gubinelli [42], we may define the rough integral as the unique limit of $\Theta^\pi$:

Lemma 4.2.5. For each $\boldsymbol\omega\in\Omega^\alpha_*$ and $\theta\in\mathcal{C}^1_{\omega,\beta}(E^d)$, the rough integral
\[
\int_0^t \theta_s\cdot d\boldsymbol\omega_s := \Theta_t := \lim_{|\pi|\to0}\Theta^\pi_t\in E \qquad (4.2.14)
\]
exists and is independent of the choice of $\pi$. Moreover, $\Theta\in\mathcal{C}^1_{\omega,\beta}(E)$ with $\partial_\omega\Theta = \theta^\top$ and
\[
\Big|\Theta_{s,t} - \theta_s\cdot\omega_{s,t} - \partial_\omega\theta_s : \mathbb{\omega}_{s,t}\Big| \le C_\alpha\,\|\boldsymbol\omega\|_\alpha\,\|\theta\|_{\omega,\beta}\,|t-s|^{2\alpha+\beta},\qquad
\|\Theta\|_{\omega,\beta} \le T^{\alpha-\beta}\|\boldsymbol\omega\|_\alpha\,|\partial_\omega\theta_0| + C_\alpha T^{\alpha-\beta}[1+\|\boldsymbol\omega\|_\alpha]\,\|\theta\|_{\omega,\beta}, \qquad (4.2.15)
\]
where the constant $C_\alpha$ depends only on $\alpha$ and the dimensions $|E|$ and $d$.

Proof. This result follows by the same arguments as in [40], Theorem 4.10, except that the second line of (4.2.15) appears slightly differently. To see the latter, by the first estimate we have
\[
\|R^{\omega,\Theta}\|_{\alpha+\beta} \le \|\partial_\omega\theta\|_\infty\,\|\mathbb{\omega}\|_{2\alpha}\,T^{\alpha-\beta} + C_\alpha T^{\alpha-\beta}\,\|\boldsymbol\omega\|_\alpha\,\|\theta\|_{\omega,\beta}.
\]
Plugging the first inequality of (4.2.12) into the above and then using the second inequality of (4.2.12), we obtain the second estimate of (4.2.15) immediately. $\square$

Moreover, we have the following stability result for the rough integral, which improves [40], Theorem 4.16, slightly.

Lemma 4.2.6.
Let (!;; ) be as in Lemma 4.2.5 and consider (~ !; ~ ; ~ ) similarly. Denote M :=kk !; +k ~ k ~ !; +k!k +k~ !k ; and ' := ~ ''; for ' =!;; : 182 Then, there exists a constant C ;M , depending on;M, andjEj, d, such that d !;~ ! (; ~ )T h j@ ! ~ 0 jk!k +k!k j@ ! 0 j i +C ;M T h k!k +d !;~ ! (; ~ ) i : Proof. First, similar to the rst estimate in (4.2.15), or following the same argu- ments as in [40] Theorem 4.16, we have [R ~ !; ~ s;t @ ! ~ s : ~ ! s;t ] [R !; s;t @ ! s :! s;t ] CT h k!k +d !;~ ! (; ~ ) i (ts) + : Note that, by (4.2.2), j@ ! ~ s : ~ ! s;t @ ! s :! s;t j h k@ ! k 1 k!k 2 +k@ ! ~ k 1 k!k 2 i (ts) 2 h [j@ ! 0 jk!k 2 +j@ ! ~ 0 jk!k 2 ] +CT [k@ ! k +k!k 2 ] i (ts) 2 : Then we obtain the desired estimate forkR ~ !; ~ R !; k + immediately. Moreover, j@ ! s;t j = j s;t j = [@ ! ~ s ~ ! s;t +R ~ !; ~ s;t ] [@ ! s ! s;t +R !; s;t ] h k@ ! k 1 k!k +k@ ! ~ k 1 k!k +T kR ~ !; ~ R !; k + i (ts) By (4.2.2) again we obtain the desired estimate fork@ ! k . We conclude this subsection with the Young's integration againsth!i. Since h!i2 2 (S d ), by (4.2.3) the Young's integral t : dh!i t is well dened for all 183 2 (E dd ). We collect below some results concerning this integration. Since the proofs are standard and are much easier than Lemmas 4.2.5 and 4.2.6, we thus omit them. Lemma 4.2.7. (i) Let !2 , 2 (E dd ), t := R t 0 s : dh!i s . Then 2 + (E) and j s;t s :h!i s;t jCkk kh!ik 2 (ts) 2+ ; kk + h T j 0 j +CT kk i kh!ik 2 : (4.2.16) (ii) Let (~ !; ~ ; ~ ) satisfy the same properties. Then, denoting ' := ' ~ ' for ' =!;; , kk + T kh!ik 2 j 0 j +CT h kh!ik 2 kk +k ~ k kh!ih ~ !ik 2 i : (4.2.17) 4.2.3 Path derivatives We next introduce further path derivatives of . Our following denition is moti- vated from the path derivatives introduced in Ekren, Touzi and Zhang [32] and Buckdahn, Ma and Zhang [11], which in turn were motivated by the functional It^ o calculus of Dupire [29]. 184 Denition 4.2.8. 
For each!2 , the spaceC 2 !; (E) consists of E-valued con- trolled rough paths 2 C 1 !; (E) such that @ ! 2 C 1 !; (E 1d ) and there exists symmetric D ! t 2 (E dd ) satisfying the following pathwise It^ o formula: d t =@ ! t d! t + [D ! t t + 1 2 @ 2 !! t ] :dh!i t : (4.2.18) Here, @ 2 !! t :=@ ! [(@ ! t ) ]2E dd . Remark 4.2.9. (i) In generalD ! t may not be unique. Similar to (4.2.9), one can easily check that D ! t is unique if! is restricted to the following b : b := n !2 : there exists a dense subset A [0;T ) such that (4.2.19) lim t#s jv :h!i s;t j (ts) 2+ =1 for all s2A and v2S d nf0g o : (ii) However, h!i is more regular than !, and thus (4.2.19) is much more dicult to satisfy than (4.2.9). For example, if ! is a sample path of Brownian motion with It^ o integration, thenh!i t = tI d as we will see in Section 4.5 below. Consequently, by considering v2S d nf0g with Trace(v) = 0, we see that b =;. (iii) In many cases in this chapter, already takes the formd t =a t d! t +b t : dh!i t , then clearly @ ! = a and we shall always set, thanks to the symmetry of h!i, D ! t := 1 2 h (b 1 2 @ ! a) + (b 1 2 @ ! a) i : (4.2.20) 185 (iv) In the case thath!i t =t, we will actually dene@ ! t :=Trace(D ! t ). Then we see that @ ! t is unique. Remark 4.2.10. (i) In general @ ! i and @ ! j do not commute, and D ! t and @ ! are also not commutative. In particular,@ 2 !! is not symmetric. However, sinceh!i is symmetric, we see that (4.2.18) is equivalent to d t =@ ! t d! t + h D ! t t + 1 4 [@ 2 !! t + (@ 2 !! t ) ] i :dh!i t : (4.2.21) (ii) One can easily check that the pathwise It^ o formulae (4.2.18) and (4.2.21) are equivalent to the following pathwise Taylor expansion: s;t =@ ! s ! s;t + 1 2 @ 2 !! s : [! s;t ! s;t +! s;t ! s;t ] +D ! t s :h!i s;t +O((ts) 2+ ): (4.2.22) In the case that @ 2 !! is symmetric, which is always the case when d = 1, (4.2.22) becomes s;t =@ ! s ! s;t + 1 2 @ 2 !! s : [! s;t ! s;t ] +D ! 
t s :h!i s;t +O((ts) 2+ ): (4.2.23) We refer to [11] for related works in Brownian motion setting. 186 4.2.4 Backward rough integration In this subsection we introduce the backward rough path, which is also a rough path and will play an important role in constructing the pathwise characteristics in Section 4.6 below. Let !2 and 2C 1 !; (E d ). For any t 0 2 [0;T ] and 0stt 0 , dene ! t 0 t :=! t 0 ! t 0 t ; ! t 0 s;t :=! t 0 t;t 0 s ! t 0 t;t 0 s ! t 0 t;t 0 s ; ! t 0 := ( ! t 0 ; ! t 0 ); t 0 t := t 0 t ; ( @ ! ) t 0 t :=@ ! t 0 t : (4.2.24) By restricting the processes on [0;t 0 ] in obvious sense, we have Lemma 4.2.11. Let!2 and 2C 1 !; (E d ). Then ! t 0 2 0 , t 0 2C 1 ! t 0 ; (E d ) with @ ! t 0 t 0 = ( @ ! ) t 0 and Z t 0 s t 0 t t 0 r d ! t 0 r = Z t s r d! r ; 0s<tt 0 : (4.2.25) Proof. In this proof we omit the superscript t 0 and denotet 0 :=t 0 t,s 0 :=t 0 s, r 0 :=t 0 r, :=ts. First, one can easily check that ! s;t =! t 0 ;s 0; ! s;t ! s;r ! r;t =! r 0 ;s 0! t 0 ;s 0 = ! s;r ! r;t : 187 This implies that !2 0 . Next, s;t = t 0 ;s 0 =@ ! t 0! t 0 ;s 0R !; t 0 ;s 0 = @ ! s ! s;t +@ ! t 0 ;s 0! t 0 ;s 0R !; t 0 ;s 0 : Then clearly @ ! is a Gubinelli derivative of with respect to !. Finally, the second equality of (4.2.25) is exactly the same as [40] Proposition 5.10. We remark that ! t 0 may not be in , and then @ ! t 0 t 0 is not unique. See Remark 4.2.4 (ii). In this case we shall always choose ( @ ! ) t 0 as its path derivative. 4.3 Functions of controlled paths In this section we study functions' :T ~ E!E and its related path derivatives. Similar to (4.2.18), we shall take the notational convention that @ yy ' :=@ y [(@ y ') ]; @ y! ' :=@ y [(@ ! ') ]; @ !y ' :=@ ! [(@ y ') ]: (4.3.1) Denition 4.3.1. (i) Fork 0, letC k loc ( ~ E;E) be the set of mappingsg :T ~ E! E such that g is k-th dierentiable in y. 
Moreover, letC k ( ~ E;E)C k loc ( ~ E;E) be such that kgk k := k X i=0 sup y2 ~ E k@ (i) y g(;y)k 1 <1: (4.3.2) 188 (ii) For k 0, letC k ;loc ( ~ E;E)C k loc ( ~ E;E) be such that, for i = 0; ;k, @ (i) y g is H older- continuous in t, and the mapping y7! @ (i) y g(;y) is continuous under kk . Moreover, letC k ( ~ E;E)C k ;loc ( ~ E;E) be such that kgk k; := k X i=0 sup y2 ~ E k@ (i) y g(;y)k <1: (4.3.3) (iii) LetC 1;2 !;;loc ( ~ E;E)C 2 loc ( ~ E;E) be such that g(;y)2C 1 !; (E), @ y g(;y)2 C 1 !; (E 1j ~ Ej ), for each y 2 ~ E, the mappings y 7! g(;y) and y 7! @ y g(;y) are continuous underjjjjjj !; , and @ ! g2C 1 ;loc ( ~ E;E 1d ). Moreover, letC 1;2 !; ( ~ E;E) C 1;2 !;;loc ( ~ E;E) be such that kgk 2;!; :=kgk 2 +k@ ! gk 1 + sup y2 ~ E [kg(;y)k !; +k@ y g(;y)k !; ]<1: (4.3.4) (iv) LetC 2;3 !;;loc ( ~ E;E)C 1;2 !;;loc ( ~ E;E) be such that @ ! g2C 1;2 !;;loc ( ~ E;E 1d ), @ y g2C 1;2 !;;loc ( ~ E;E 1j ~ Ej ), g(;y)2C 2 !; (E) for every y2 ~ E and there exists D ! t g2 C 1 ;loc ( ~ E;E dd ). Moreover, letC 2;3 !; ( ~ E;E)C 2;3 !;;loc ( ~ E;E) be such that kgk 3;!; :=kgk 2;!; +k@ ! gk 2;!; +k@ y gk 2;!; <1: (4.3.5) (v) LetC 3;3 !;;loc ( ~ E;E)C 2;3 !;;loc ( ~ E;E) be such that @ ! g2C 2;3 !;;loc ( ~ E;E 1d ). 189 (vi) For!; ~ !2 , and g2C 1;2 !; ( ~ E;E), ~ g2C 1;2 ~ !; ( ~ E;E) dene d !;~ ! 2; (g; ~ g) := kg ~ gk 2 +k@ ! g@ ~ ! ~ gk 1 + sup y2 ~ E h d !;~ ! (g(;y); ~ g(;y)) +d !;~ ! (@ y g(;y);@ y ~ g(;y)) i :(4.3.6) Remark 4.3.2. (i) For g2C 2;3 !; ( ~ E;E), by (4.2.18) we have dg(t;y) =h(t;y)d! t +f(t;y) :dh!i t ; where h := (@ ! g) 2C 1;2 !;;loc ( ~ E;E d );f :=D ! t g + 1 2 @ ! h2C 1 ;loc ( ~ E;E dd ): (4.3.7) (ii) In (4.3.4), we need onlyk@ ! gk 1 instead ofk@ ! gk 1; , and in (4.3.5), we do not needkD ! t gk 1; . The latter is particularly convenient because D ! t g may not be unique. (iii) It is clear that d ! 2; (g; ~ g) :=d !;! 2; (g; ~ g) =kg ~ gk 2;!; . 4.3.1 Commutativity of spatial and path derivatives Lemma 4.3.3. (i) Let g2C 2;3 !; ( ~ E;E). 
Then @ !y g = [@ y! g] 2E j ~ Ejd , namely @ ! @ y i g =@ y i @ ! g; i = 1; ;j ~ Ej: (4.3.8) 190 (ii) Let g2C 3;3 !; ( ~ E;E). Then, for appropriate D ! t and for each i = 1; ;j ~ Ej, @ 2 !! @ y i g =@ y i @ 2 !! g and D ! t @ y i g =@ y i D ! t g: (4.3.9) Proof. Without loss of generality, we assumej ~ Ej = 1, namely ~ E = R. Recall (4.3.7). (i) Fix y2R and denote, for 06= y2R, r' t (y) := '(t;y + y)'(t;y) y ; ' =g;h;f: It is straightforward to check that rg t (y) = Z t 0 rh s (y)d! s + Z t 0 rf s (y) :dh!i s rh t (y) = Z 1 0 @ y h(t;y +y)d; rf t (y) = Z 1 0 @ y f(t;y +y)d; and thus, asjyj! 0, jjjrh(y)@ y h(y)jjj !; Z 1 0 jjj@ y h(y +y)@ y h(y)jjj !; d! 0; krf(y)@ y f(y)k Z 1 0 k@ y f(y +y)@ y f(y)k d! 0: 191 Then it follows from Lemma 4.2.6 and Lemma 4.2.7 (ii) that @ y g(t;y) = Z t 0 @ y h(s;y)d! s + Z t 0 @ y f(s;y) :dh!i s : (4.3.10) This implies (4.3.8) immediately. (ii) Sinceh2C 2;3 !; ( ~ E;E 1d ), by (i) we have@ y @ ! h =@ ! @ y h and thus@ y @ 2 !! g = @ 2 !! @ y g. Now applying the convention (4.2.20) for D ! t on (4.3.10) and by (4.3.7), we have 2D ! t (@ y g) = (@ y f 1 2 @ !y h) + (@ y f 1 2 @ !y h) =@ y h (f 1 2 @ ! h) + (f 1 2 @ ! h) i = (@ y f 1 2 @ y! h) + (@ y f 1 2 @ y! h) = 2@ y D ! t g: This completes the proof. 4.3.2 Chain rule of path derivatives Theorem 4.3.4. (i) Let ! 2 , 2C 1 !; ( ~ E), g 2C 1;2 !;;loc ( ~ E;E), and t := g(t; t ). Then 2C 1 !; (E) with @ ! t = (@ ! g)(t; t ) +@ y g(t; t ) @ ! t : (4.3.11) 192 (ii) Assume further that 2C 2 !; ( ~ E) and g2C 2;3 !;;loc ( ~ E;E). Then, for appro- priate D ! t , 2C 2 !; (E) with D ! t t = (D ! t g)(t; t ) +@ y g(t; t ) D ! t t : (4.3.12) Remark 4.3.5. Similar to [11] Proposition 2.7, the chain rule of pathwise deriva- tives is equivalent to the It^ o-Ventzell formula, which extends the It^ o formula in [40] Proposition 5.6. Indeed, note that 2C 2 !; ( ~ E) takes the form: d t =a t d! t +b t :dh!i t where a := (@ ! ) ; b :=D ! t + 1 2 @ ! a: (4.3.13) Recall (4.3.7) again. 
It follows from Lemma 4.3.3 (i) that @ ! @ y g = (@ y h) . Then, noticing that h2C 1;2 !;;loc ( ~ E;E d ), @ y g2C 1;2 !;;loc ( ~ E;E 1j ~ Ej ), by applying (4.3.11) several times and by (4.3.12), we have @ ! t = h (t; t ) +@ y g(t; t ) a t ; @ 2 !! t = @ ! [h(t; t ) +@ y g(t; t ) a t ] = h @ ! h +@ y h a + (@ y h a ) +@ 2 yy g 2 [a;a] +@ y g @ ! a i (t; t ); D ! t t = 1 2 h [(f 1 2 @ ! h) + (f 1 2 @ ! h) ] +@ y g [(b 1 2 @ ! a) +(b 1 2 @ ! a) ] i (t; t ): 193 This, together with (4.2.18) and the symmetry ofh!i, implies: d[g(t; t )] = h h(t; t ) +@ y g(t; t ) a t i d! t (4.3.14) + h f +@ y g b t + 1 2 @ 2 yy g 2 [a t ;a t ] +@ y h a t i (t; t ) :dh!i t ; which we call the pathwise It^ o-Ventzell formula. Proof of Theorem 4.3.4. (i) For (s;t)2T 2 , we have s;t = g(t; t )g(s; s ) =g(t; t )g(s; t ) +g(s; t )g(s; s ) (4.3.15) = [@ ! g](s; t )! s;t +R !;g(;t) s;t + Z 1 0 @ y g(s; s + s;t )d s;t = h (@ ! g)(s; s ) +@ y g(s; s ) @ ! s i ! s;t +R !; s;t ; where R !; s;t := h [@ ! g](s; t ) [@ ! g](s; s ) i ! s;t +R !;g(;t) s;t + Z 1 0 [@ y g(s; s + s;t )@ y g(s; s )]d @ ! s ! s;t + Z 1 0 @ y g(s; s + s;t )d R !; s;t : Then clearly kR !; k + kgk 2;!; h kk k!k + 1 +kk k@ ! k 1 k!k +kk !; i <1: (4.3.16) 194 Moreover, under our conditions it is clear that (@ ! g)(t; t ) +@ y g(t; t ) @ ! t is H older--continuous. This proves (4.3.11). (ii) Recall (4.3.7) and (4.3.13). By reversing the arguments in Remark 4.3.5, it suces to prove (4.3.14). Denote := ts. Recall the rst line of (4.3.15) and note that s;t =a s ! s;t +@ ! a s :! s;t +b s :h!i s;t +O( 2+ ); g(t;y)g(s;y) =h(s;y)! s;t +@ ! h(s;y) :! s;t +f(s;y) :h!i s;t +O( 2+ ) Then, by the standard Taylor expansion and applying Lemma 4.3.3 (i) on g, we have g(t; t )g(t; s ) = @ y g(t; s ) s;t + 1 2 @ 2 yy g(t; s ) 2 [ s;t ; s;t ] +O( 3 ) = h @ y g(s; s ) +@ y h(s; s )! s;t i s;t + 1 2 @ 2 yy g(s; s ) 2 [ s;t ; s;t ] +O( 2+ ); g(t; s )g(s; s ) = h(s; s )! s;t + [@ ! h](s; s ) :! 
s;t +f(s; s ) :h!i s;t +O( 2+ ): 195 On the other hand, Z t s [h(r; r ) +@ y g(r; r ) a r ]d! r = [h(s; s ) +@ y g(s; s ) a s ]! s;t +@ ! [h(s; s ) +@ y g(s; s ) a s ] :! s;t +O( 2+ ); Z t s [f(r; r ) +@ y g(r; r ) b r ] :dh!i r = [f(s; s ) +@ y g(s; s ) b s ] :h!i s;t +O( 2+ ): By Lemma 4.3.3 (i) we have @ !y g = [@ y! g] =@ y h . Then it follows from (4.3.11) that @ ! [h(s; s ) +@ y g(s; s ) a s ] (4.3.17) = h @ ! h +@ y h a s +@ y h a s +@ 2 yy g 2 [a s ;a s ] +@ y g @ ! a s ](s; s ): 196 Noting that ! s;t =O( ), ! s;t =O( 2 ), andh!i s;t =O( 2 ), then we have s;t Z t s [h(r; r ) +@ y g(r; r ) a r ]d! r Z t s [f(r; r ) +@ y g(r; r ) b r ] :dh!i r = h [@ y h(s; s )! s;t ] [a s ! s;t ] + 1 2 @ 2 yy g(t; s ) 2 [(a s ! s;t ) ; (a s ! s;t ) ] h @ y h(s; s ) a s + [@ y h(s; s ) a s ] +@ 2 yy g(s; s ) 2 [a s ;a s ]] i :! s;t +O( 2+ ) = h 1 2 @ 2 yy g(t; s ) 2 [@ ! s ;@ ! s ] +@ y h(s; s ) @ ! s i :h!i s;t +O( 2+ ) This proves (4.3.14), and hence (4.3.12). 4.3.3 Some estimates In this subsection we provide some estimates for =g(t; t ), which will be crucial for studying rough dierential equations in next section. These results correspond to [40] Lemma 7.3 and Theorem 7.5, where g does not depend on t. Lemma 4.3.6. (i) Let !2 , 2C 1 !; (E), g2C 1;2 !; ( ~ E;E), t := g(t; t ), and denote M 1 :=k!k +jjjjjj !; : 197 Then for any T 0 > 0 and any TT 0 , there exists a constant C ;M 1 ;T 0 , depending only on, M 1 , T 0 , andjEj,j ~ Ej, such that kk !; C ;M 1 ;T 0 kgk 2;!; : (4.3.18) (ii) Assume further that g2C 2;3 !; ( ~ E;E), and (~ !; ~ ; ~ g; ~ ) satisfy the same con- ditions. Denote ' := ~ '' for appropriate ', and M 2 :=jjjjjj !; +jjj ~ jjj ~ !; +k!k +k~ !k +kgk 3;!; +k~ gk 3;~ !; : Then, for any TT 0 as in (i), there exists a constant C ;M 2 ;T 0 such that d !;~ ! (; ~ )C ;M 2 ;T 0 h d !;~ ! 2; (g; ~ g) +d !;~ ! (; ~ ) +j 0 j +k!k i : (4.3.19) Proof. (i) First, by (4.2.2) and (4.2.12) we havek@ ! k 1 +kk C. 
By the rst line of (4.3.15) it is clear that kk C h kgk 0; +kgk 1 i : (4.3.20) 198 Next, recall (4.3.11) and note that j@ ! s;t j j@ ! g(t; t )@ ! g(s; s )j +j@ y g(t; t )@ y g(s; s )jj@ ! t j +j@ y g(s; s )jj@ ! s;t j: Applying (4.3.20) on @ ! g and @ y g we obtaink@ ! k Ckgk 2;!; . Moreover, by (4.3.16) we havekR !; k + Ckgk 2;!; . Putting together we prove (4.3.18). (ii) First, note that s;t = ~ g(t; ~ t )g(t; t ) ~ g(s; ~ s ) +g(s; s ) = [g(t; ~ t ) g(s; ~ s )] + Z 1 0 @ y g(s; s + s )d s;t + Z 1 0 [@ y g(t; t + t )@ y g(s; s + s )]d t : Apply (4.3.20) on g and @ y g, we obtain kk C h kgk 0; +kgk 1 +kk +j 0 j i Note that s;t =@ ! s ! s;t +R !; s;t , and similarly for ~ . Then, by (4.2.2), kk k@ ~ ! ~ @ ! k 1 k~ !k +k@ ! k 1 k!k +kR ~ !; ~ R !; k C h d !;~ ! (; ~ )] +k!k i : (4.3.21) 199 Thus kk C h kgk 0; +kgk 1 +j 0 j +d !;~ ! (; ~ ) +k!k i : (4.3.22) We shall emphasize that the aboveC depends onkgk 2;!; +k~ gk 2;~ !; , notkgk 3;!; + k~ gk 3;~ !; . Next, note that @ ~ ! ~ t @ ! t = [@ ~ ! ~ g(t; ~ t )@ ! g(t; t )] + [@ y ~ g(t; ~ t )@ y g(t; t )] @ ~ ! ~ t +@ y g(t; t ) [@ ~ ! ~ t @ ! t ]: [@ ~ ! ~ @ ! ] s;t = [@ ~ ! ~ g(; ~ )@ ! g(; )] s;t + [@ y ~ g(; ~ )@ y g(; )] s;t @ ! ~ t +[@ y g(s; ~ s ) +@ y g(s; ~ s )@ y g(s; s )] @ ! ~ s;t +[@ y g(; )] s;t @ ! t +@ y g(s; s ) @ ! s;t : Apply (4.3.22) on @ ! g and @ y g, and (4.3.20) on @ y g, we obtain from (4.3.21) that k@ ! k C h d !;~ ! 2; (g; ~ g) +j 0 j +d !;~ ! (; ~ ) +k!k i (4.3.23) 200 Finally, recall (4.3.16) and note that R ~ !;~ g(;~ y) s;t R !;g(;y) s;t =R ~ !;~ g(;~ y) s;t R !;g(;~ y) s;t + h [g(; ~ y) s;t@ ! g(s; ~ y)! s;t i h [g(;y)] s;t @ ! g(s;y)! s;t i =R ~ !;~ g(;~ y) s;t R !;g(;~ y) s;t + Z 1 0 R !;@yg(;y+y) s;t d y; one can obtain the desired estimate forkR ~ !;~ R !; k + straightforwardly. This, together with (4.3.23), completes the proof. Moreover, we have the following simpler results whose proof is omitted. Lemma 4.3.7. (i) Let 2 (E), f 2C 1 ( ~ E;E), and t := f(t; t ). 
Then 2 (E) and kk kfk 0; +kfk 1 kk kfk 1; [1 +kk ]: (4.3.24) (ii) Let ; ~ 2 (E), f; ~ f2C 2 ( ~ E;E), and t :=f(t; t ), ~ := ~ f(t; ~ t ). Then k~ k [1 +kk +k ~ k ] h k ~ ffk 1; +kfk 2 [j ~ 0 0 j +k ~ k ] i :(4.3.25) 201 4.4 Rough dierential equations In this section we study rough path dierential equations with coecients less regular in the time variable t, motivated from our study of stochastic dierential equations with random coecients in next section. Let!2 , g2C 2;3 !; (E;E d ), f2C 2 (E;E dd ), and y 0 2E. Consider the following RDE: t =y 0 + Z t 0 g(s; s )d! s + Z t 0 f(s; s ) :dh!i s ; t2T: (4.4.1) Our goal is to nd solution 2C 1 !; (E). By Theorem 4.3.4 and Lemma 4.3.7, in this case g(;)2C 1 !; (E d ), f(;)2 (E dd ), and thus the right side of (4.4.1) is well dened. Remark 4.4.1. When 2C 1 !; (E) is a solution, clearly @ ! t = g(t; t ), then by Theorem 4.3.4 (i) it is clear that 2C 2 !; (E). So a solution to RDE (4.4.1) is automatically inC 2 !; (E). We shall use this fact without mentioning it. In standard rough path theory the generatorg of RDE (4.4.1) is independent of t. In Lejay and Victoir [55],g may depend ont, but is required to be H older-(1) continuous, which is violated forg2C 2;3 !; (E;E d ) (since< 1 2 ). This relaxation of regularity in t is crucial for studying SDEs and SPDEs with random coecients, see Remark 4.5.7 below. We also refer to Gubinelli, Tindel and Torrecilla [43] for some discussion along this direction. 202 Theorem 4.4.2. Let !2 , g2C 2;3 !; (E;E d ), f 2C 2 (E;E dd ), and y 0 2 E. Then RDE (4.4.1) has a unique solution 2C 2 !; (E). Moreover, there exists a constant C , depending only on , d,jEj, T ,kfk 2; ,kgk 3;!; , andk!k , such that kk +kk !; C : (4.4.2) Proof. We proceed in three steps. Step 1. Denote M := [k@ ! gk 0 +kgk 2 1 ]k!k +kfk 0 k!k [2 +k!k ] and A := n 2C 1 !; (E) : 0 =y 0 ;@ ! 0 =g (0;y 0 );kk !; M + 1 o ; (4.4.3) equipped with the normkk !; . Note thatA contains t := y 0 +g(0;y 0 )! 0;t and thus is not empty. 
Dene a mapping onA : () := where t := y 0 + 1 t + 2 t := y 0 + Z t 0 g(s; s )d! s + Z t 0 f(s; s ) :dh!i s : We show that, there exists 0 < 1, which depends on , d,jEj, T ,kfk 2; , kgk 3;!; , andk!k , but not on y 0 , such that whenever T , is a contraction mapping onA . One can easily check thatA is complete underd !;! , then has a unique xed point 2A which is clearly the unique solution of RDE (4.4.1). 203 To prove that is a contraction mapping, let C denote a generic constant which depends only on the above parameters, but not on y 0 . We rst show that ()2A for all 2A . Indeed, clearly 0 =y 0 and @ ! 0 =g (0;y 0 ). For any 2A , denote t := g(t; t ). Applying Lemma 4.3.6 and then Lemma 4.2.5, we have, kk !; C; k@ ! 0 jk@ ! gk 0 +k@ ! gk 2 1 ; and thus k 1 k !; k!k j@ ! 0 j +C [1 +k!k ]kk !; [k@ ! gk 0 +kgk 2 1 ]k!k +C : Similarly, It follows from Lemmas 4.2.7 and 4.3.7 (i) that k 2 k !; =k 2 k + kfk 0 k!k [2 +k!k ] +C ; and thuskk !; k 1 k !; +k 2 k !; M +C : Set small enough we havekk !; M + 1. That is, 2A . Next, let ~ 2A and denote ~ ; ~ 1 ; ~ 2 ; ~ in the obvious sense. For appropriate ', put ' := ~ ''. Recalling (4.3.21), we see that kk 1 C kk C kk : (4.4.4) 204 Then, applying Lemmas 4.2.6, 4.3.6 (ii), 4.2.7 (ii), and 4.3.7 (ii), we have k 1 k !; C kk !; C kk !; ; k 2 k + C kk ; and thus kk !; C kk !; : Set be small enough such that C 1 2 , then is a contraction mapping. Step 2. We now prove the result for generalT . Let be the constant in Step 1. Let 0 =t 0 <<t n =T such that t i+1 t i , i = 0; ;n 1. We may solve the RDE over each interval [t i ;t i+1 ] with initial condition ( t i ;g(t i ; t i )), which is obtained from the previous step by considering the RDE on [t i1 ;t i ], and thus we obtain the unique solution over the whole interval [0;T ]. Step 3. We now estimatekk !; . First, whenT for the constant = in Step 1, we have 2A and thuskk M + 1. In particular, this implies that j@ ! 
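The contraction argument of Step 1 is constructive: iterating $\theta\mapsto\Phi(\theta)$ from any starting guess converges to the fixed point on a small interval. As a minimal numerical illustration (our own sketch, not from the text), we run the Picard iteration in the degenerate case of a smooth driver $\omega_t = t$ with vanishing bracket, where (4.4.1) reduces to the ODE $d\theta = g(t,\theta)\,dt$ and the fixed point of $\Phi(\theta)_t = y_0 + \int_0^t g(s,\theta_s)\,ds$ can be compared with the exact solution:

```python
import numpy as np

def picard(g, y0, ts, n_iter=30):
    """Picard iteration theta^{m+1}_t = y0 + int_0^t g(s, theta^m_s) ds,
    with the integral computed by the trapezoidal rule on the grid ts."""
    theta = np.full_like(ts, y0)          # initial guess: the constant path
    for _ in range(n_iter):
        integrand = g(ts, theta)
        increments = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(ts)
        theta = y0 + np.concatenate([[0.0], np.cumsum(increments)])
    return theta

ts = np.linspace(0.0, 1.0, 2001)
theta = picard(lambda t, y: y, 1.0, ts)   # d theta = theta dt, theta_0 = 1
assert np.max(np.abs(theta - np.exp(ts))) < 1e-4
```

In the genuinely rough case the map $\Phi$ additionally carries the Gubinelli derivative and the second-level correction from (4.2.13), but the fixed-point structure is the same.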
s;t j (M + 1)(ts) ; jR !; s;t j (M + 1)(ts) + ; whenever ts: Now for arbitrary s;t, let k := [ ts ] + 1 be the smallest integer greater than ts , and t i :=s + i k (ts), i = 0; ;k. Then j@ ! s;t j k1 X i=0 j@ ! t i ;t i+1 jk( ts k ) =k 1 (ts) ( 1 + 1) 1 (ts) : Thusk@ ! k ( 1 + 1) 1 . Similarly we may provekR !; k + ( 1 + 1) 1 . 205 Finally, note thatk@ ! k 1 C, it is clear that kk k@ ! k 1 k!k +kR !; k C: This concludes the proof. We next study the stability of RDEs. Theorem 4.4.3. Let (y 0 ;!;f;g) and (~ y 0 ; ~ !; ~ f; ~ g) be as in Theorem 4.4.2, and , ~ be the corresponding solution of the RDE. Then there exists a constant C , depending only on,d,jEj,T ,kfk 2; ,k ~ fk 2; ,kgk 3;!; ,k~ gk 3;~ !; , andk!k ,k~ !k , such that, denoting ' :=' ~ ' for appropriate ', d !;~ ! (; ~ )C [I +jy 0 j] where I :=d !;~ ! 2; (g; ~ g) +kfk 1; +k!k : (4.4.5) Proof. First assume T for some constant > 0 small enough. Use the notations in Step 1 of Theorem 4.4.2. Applying Lemma 4.3.6 (i) and (4.4.2) we see thatj@ ~ ! ~ 0 j +k~ k !; C. Then, it follows from Lemmas 4.2.6 and 4.3.6 (ii) that d !;~ ! ( 1 ; ~ 1 ) C h d !;~ ! (; ~ ) +d (!; ~ !) +j 0 0 ~ 0 0 j i C h d !;~ ! (; ~ ) + I +jy 0 j i : 206 Similarly, by Lemmas 4.2.7 and 4.3.7, we have k 2 k + C h kk + I +jy 0 j i : Putting together we get d !;~ ! (; ~ ) =d !;~ ! (; ~ )C h d !;~ ! (; ~ ) + I +jy 0 j i : Set be small enough such that C 1 2 , we obtain d !;~ ! (; ~ )C[I +jy 0 j]. Now for general T , let k := [ T ] + 1 be the smallest integer greater than T and t i := i k T , i = 0; ;k. Denote J i := sup t i s<tt i+1 h j@ ! s;t j (ts) + jR ~ !; ~ s;t R !; s;t j (ts) + i ; i = 0; ;k 1: By the above arguments we have J i C[I +j t i j]. Then, applying (4.3.21) on [t i ;t i+1 ] and noting that @ ! t i =g(t i ; t i ) and @ ! ~ t i = ~ g(t i ; ~ t i ) are bounded, we have j t i+1 j j t i j +j t i ;t i+1 jj t i j + J i +C[j@ ! 
t i j +k!k ] C[I +j t i j]: 207 By induction we get max 0ik j t i jC[I +jy 0 j]; and thus max 0ik J i C[I +jy 0 j]: Now following the arguments in Theorem 4.4.2 Step 3 we can prove the desired estimate. Remark 4.4.4. (i) The uniqueness of RDE solutions do not depend on bound- edness of g, @ ! g, and f. Indeed, let and ~ be two solutions. Notice that any element ofC 1 !; (E) is bounded, and thus we may denoteM 0 :=kk 1 +k ~ k 1 <1. One can see that all the arguments in Theorem 4.4.2 remain valid if we replace the sup y2E in (4.3.2) with sup y2E;jyjM 0 , while the latter is always bounded for g, @ ! g, and f. (ii) If we do not assume boundedness of g, @ ! g, and f, in general we can only obtain the local existence, namely the solution exists when T is small. However, if we can construct a solution for large T , as we will see for linear RDEs, then by (ii) above this solution is the unique solution. 208 4.4.1 Linear RDEs Now consider RDE (4.4.1) with g(t;y) =a t y +b t ; f(t;y) = t y +l t ; ; where y2E;a2C 2 !; (E djEj );b2C 1 !; (E d );2 (E ddjEj );l2 (E dd ): (4.4.6) We remark that the abovef andg are not bounded and thus we cannot apply The- orem 4.4.2 directly. In Friz and Victoir [41], some a priori estimate is provided for linear RDEs and then the global existence follows from the arguments of Theorem 4.4.2, by replacing the sup y2E in (4.3.2) with the supremum over the a priori bound of the solution, as illustrated in Remark 4.4.4 (ii). At below, we shall construct a solution semi-explicitly. WhenjEj = 1, we have an explicit representation in the spirit of Feyman-Kac formula in stochastic analysis literature, see (4.4.7) below. However, the formula fails in multidimensional case due to the noncommutativity of matrices. Our main idea is to introduce a decoupling strategy, by using the local solution of certain Riccati type of RDEs, so as to reduce the dimension of E. To our best knowledge, such a construction is new even for multidimensional linear SDEs. 
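Before turning to the linear case, the fixed-point construction behind Theorem 4.4.2 can be previewed numerically in the smooth-driver setting, where the rough integral reduces to a classical Riemann-Stieltjes integral and the quadratic-variation term vanishes. The sketch below is illustrative only (the test equation, grid sizes and function names are ours, not taken from the text): it iterates the Picard map on a grid until it stabilizes at the fixed point.

```python
import math

# ODE preview of the fixed-point construction in Theorem 4.4.2:
#     theta_t = y0 + int_0^t g(theta_s) d omega_s,
# with a smooth driver omega, so the d<omega> term plays no role.
# We iterate the Picard map Lambda on a uniform grid with left-point
# quadrature; the k-th iterate contracts the error like T^k / k!.

def picard_solve(g, omega_dot, y0, T, n_steps=2000, n_iter=25):
    dt = T / n_steps
    theta = [y0] * (n_steps + 1)      # initial guess: the constant path
    for _ in range(n_iter):
        new = [y0]
        acc = y0
        for i in range(n_steps):
            acc += g(theta[i]) * omega_dot(i * dt) * dt
            new.append(acc)
        theta = new
    return theta

# Linear test case g(y) = y with omega_t = sin t, whose exact solution
# is theta_t = y0 * exp(sin t)  (an illustrative choice, not from the text).
T, y0 = 1.0, 2.0
theta = picard_solve(lambda y: y, math.cos, y0, T)
exact = y0 * math.exp(math.sin(T))
print(abs(theta[-1] - exact))   # small: only the discretization error remains
```

After a handful of iterations the Picard sequence is indistinguishable from the grid solution, mirroring the small-interval contraction argument; concatenating such intervals, as in Step 2, extends the solution to an arbitrary horizon.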
Theorem 4.4.5. The linear RDE (4.4.1) with (4.4.6) has a unique solution.

Proof. If b2C 2 !; (E d ), under (4.4.6) it is straightforward to check that g 2 C 2;3 !;;loc (E;E d ) and f2C 2 ;loc (E;E dd ), and thus the uniqueness follows from Theorem 4.4.2 and Remark 4.4.4 (ii). However, in the linear case, by going through the arguments of Theorem 4.4.2 we can easily see that it is enough to assume the weaker condition b2C 1 !; (E d ). We shall construct the solution, and thus obtain the existence, via induction on jEj.

Step 1. We first assume jEj = 1, namely E =R. Applying Theorem 4.3.4 and Remark 4.3.5 we may verify directly that the following provides a representation of the solution: t = 1 t h 0 + Z t 0 s b s d! s + Z t 0 s l s a s b s :dh!i s i ; (4.4.7) where t := exp Z t 0 a s d! s + Z t 0 1 2 a s a s l s :dh!i s :

Step 2. In order to show the induction idea clearly, we present the case jEj = 2 in detail. With the notations in the obvious sense, the linear RDE becomes d 1 t = [a 11 t 1 t +a 12 t 2 t +b 1 t ]d! t + [ 11 t 1 t + 12 t 2 t +l 1 t ] :dh!i t ; d 2 t = [a 21 t 1 t +a 22 t 2 t +b 2 t ]d! t + [ 21 t 1 t + 22 t 2 t +l 2 t ] :dh!i t : (4.4.8) Clearly, if the system is decoupled, for example if a 12 = 0 and 12 = 0, one can easily solve the system by first solving for 1 and then solving for 2 . In the general case, we introduce a decoupling strategy as follows. Consider an auxiliary RDE: d t =a t d! t + t :dh!i t ; (4.4.9) where a; will be specified later. Denote t := 2 t + t 1 t . Then, applying the Itô-Ventzell formula (4.3.14) we have d t = h [a 22 t 2 t +a 21 t 1 t +b 2 t ] + t [a 12 t 2 t +a 11 t 1 t +b 1 t ] +a t 1 t i d! t + h [ 22 t 2 t + 21 t 1 t +l 2 t ]+ t [ 12 t 2 t + 11 t 1 t +l 1 t ] + t 1 t +a t [a 22 t 2 t +a 21 t 1 t +b 2 t ] i :dh!i t : (4.4.10) We want to choose a; so that the right side above involves only .
That is, a 21 + t a 11 +a = [a 22 + a 12 ]; 21 + 11 + +a(a 21 ) = t [ 22 + 12 +a(a 22 ) ]: This implies a = a 12 () 2 + [a 22 a 11 ]a 21 ; (4.4.11) = 12 () 2 + [ 22 11 ] 21 +a[a 22 a 21 ] = c 3 () 3 +c 2 () 2 +c 1 +c 0 ; where c 3 := a 12 (a 22 ) ; c 2 := 12 a 12 (a 21 ) + (a 22 a 11 )(a 22 ) c 1 := 22 11 (a 22 a 11 )(a 21 ) a 21 (a 22 ) ; c 0 := a 21 (a 21 ) 21 : Plugging this into (4.4.9) we obtain the following Riccati-type RDE: d t = h a 12 t () 2 t + [a 22 t a 11 t ] t a 21 t i d! t + h c 3 t () 3 t +c 2 t () 2 t +c 1 t t +c 0 t i :dh!i t ; (4.4.12) and the RDE (4.4.10) becomes: d t = h [a 22 + a 12 ] t + [b 2 t + t b 1 t ] i d! t (4.4.13) + h [ 22 + 12 +a(a 22 ) ] t + [l 2 t + t l 1 t +a t (b 2 t ) ] i :dh!i t : Moreover, plugging 2 = 1 into the second equation of (4.4.8), we have d 1 t = h [a 11 t a 12 t ] 1 t + [a 12 t t +b 1 t ] i d! t + h [ 11 t 12 t t ] 1 t + [ 12 t t +l 1 t ] i :dh!i t : (4.4.14)

Now the RDEs (4.4.12), (4.4.13), and (4.4.14) are decoupled. We emphasize, though, that the Riccati RDE (4.4.12) typically does not have a global solution on [0;T ]. However, following the arguments in Theorem 4.4.2, there exists a constant > 0, which depends only on the coefficients a, and the rough path !, such that the Riccati RDE (4.4.12) with initial value 0 has a solution whenever the time interval is smaller than . We now set 0 = t 0 < < t n = T such that t i t i1 for i = 1; ;n, and we solve the system (4.4.8) as follows. First, we solve RDE (4.4.12) on [t 0 ;t 1 ] with initial value t 0 = 0. Plugging this into (4.4.13), where a is determined by (4.4.11), we solve (4.4.13) on [t 0 ;t 1 ] with initial value 0 = 2 0 . Plugging and into (4.4.14), we may solve (4.4.14) on [t 0 ;t 1 ] with initial value 1 0 . Moreover, 2 := 1 satisfies the second equation of (4.4.8) on [t 0 ;t 1 ] with initial value 2 0 . Next, we solve the Riccati RDE (4.4.12) on [t 1 ;t 2 ], again with initial value t 1 = 0. Then we solve (4.4.13) on [t 1 ;t 2 ] with initial value t 1 = 2 t 1 .
Plugging and into (4.4.14), we may solve (4.4.14) on [t 1 ;t 2 ] with initial value 1 t 1 . Moreover, 2 := 1 satisfies the second equation of (4.4.8) on [t 1 ;t 2 ] with initial value 2 t 1 . Repeating the arguments, we solve the system (4.4.8) over the whole interval [0;T ].

Step 3. We now assume the result is true for jEj = n 1 and we shall prove the case jEj =n. With obvious notations, we consider d i t = h n X j=1 a ij t j t +b i t i d! t + h n X j=1 ij t j t +l i t i :dh!i t ; i = 1; ;n: (4.4.15) Denote := n + P n1 i=1 i i , where, for i = 1; ;n 1, d i t = h n1 X j=1 [a jn i t a ji t ] j t + [a nn t i t a ni t ] i d! t + h [ i t nn t ni t ] + n1 X j=1 j t [ i t jn t ji t ] (4.4.16) + n1 X j=1 n1 X k=1 [a kn j t a kj t ] k t + [a nn t j t a nj t ] [ i t (a jn t ) (a ji t ) ] i :dh!i t ; Then d t = h [a nn t + n1 X i=1 i t a in t ] t + [b n t + n1 X i=1 i t b i t ] i d! t (4.4.17) + h nn t + n1 X i=1 [ i t in t +a i t (a in ) ] t + l n + n1 X i=1 [ i t l i t +a i t (b i t ) ] i :dh!i t : where a i t := n1 X j=1 [a jn i t a ji t ] j t + [a nn t i t a ni t ]: Plugging this into (4.4.15), we obtain d i t = h n1 X j=1 [a ij t a in t j t ] j t + [b i t +a in t t ] i d! t (4.4.18) + h n1 X j=1 [ ij t in t j t ] j t + [l i t + in t t ] i :dh!i t ; i = 1; ;n 1:

Now similarly, there exists > 0, depending only on a, , and the rough path !, such that the system of Riccati-type RDEs (4.4.16) with initial condition 0 has a solution whenever the time interval is smaller than . Now set 0 =t 0 <<t n = T such that t i t i1 . As in Step 2, we may first solve (4.4.16) on [t 0 ;t 1 ] with initial condition i 0 = 0. We then solve (4.4.17) on [t 0 ;t 1 ] with initial condition 0 = n 0 . Now notice that the linear system (4.4.18) has dimension n1 only; then, by the induction assumption, we may solve (4.4.18) on [t 0 ;t 1 ] with initial condition i 0 , i = 1; ;n 1, which further provides n := P n1 i=1 i i . Now, repeating the arguments as in Step 2, we obtain the solution over the whole interval [0;T ].
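The decoupling strategy of Step 2 can be tried out numerically in the smooth-driver (ODE) case, where the quadratic-variation term vanishes and, taking the driver to be time itself, the Riccati equation becomes a scalar ODE. All coefficients and initial data below are illustrative constants chosen by us, and the Riccati equation is rederived directly for this ODE model in the comments rather than read off (4.4.11)-(4.4.12):

```python
# ODE analogue of the Step 2 decoupling (driver omega_t = t, d<omega> = 0):
#     x1' = a11*x1 + a12*x2 + b1,   x2' = a21*x1 + a22*x2 + b2.
# With Gamma := x2 + theta*x1 and theta solving the scalar Riccati ODE
#     theta' = a12*theta**2 + (a22 - a11)*theta - a21,
# one checks Gamma' = (a22 + a12*theta)*Gamma + (b2 + theta*b1) and
#     x1'    = (a11 - a12*theta)*x1 + a12*Gamma + b1,
# so the system is solved triangularly: theta, then Gamma, then x1,
# and finally x2 = Gamma - theta*x1.  All constants are illustrative.

a11, a12, a21, a22 = 0.2, 0.5, 0.4, -0.3
b1, b2 = 0.1, -0.2
x1_0, x2_0, T, n = 1.0, 0.5, 1.0, 100_000
dt = T / n

theta, gamma, x1 = 0.0, x2_0, x1_0     # Gamma_0 = x2_0 since theta_0 = 0
for _ in range(n):
    dth = a12 * theta**2 + (a22 - a11) * theta - a21
    dga = (a22 + a12 * theta) * gamma + (b2 + theta * b1)
    dx1 = (a11 - a12 * theta) * x1 + a12 * gamma + b1
    theta, gamma, x1 = theta + dth * dt, gamma + dga * dt, x1 + dx1 * dt
x2 = gamma - theta * x1

# Direct Euler on the coupled system, on the same grid, for comparison.
y1, y2 = x1_0, x2_0
for _ in range(n):
    y1, y2 = (y1 + (a11 * y1 + a12 * y2 + b1) * dt,
              y2 + (a21 * y1 + a22 * y2 + b2) * dt)
print(abs(x1 - y1), abs(x2 - y2))   # both small
```

As in the proof, the Riccati equation may blow up in finite time for other coefficient choices, which is why the construction proceeds on short subintervals, restarting the Riccati solution from zero on each one.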
Remark 4.4.6. (i) When E =R, the representation formula (4.4.7) actually holds under the weaker conditions a;b2C 1 !; (R d ). Moreover, uniqueness also holds under this weaker condition. Indeed, for any arbitrary solution 2C 2 !; (E) and for the defined in (4.4.7), by applying the Itô-Ventzell formula (4.3.14) we see that t t = 0 + Z t 0 s b s d! s + Z t 0 s l s a s b s :dh!i s : Then has to be the one in (4.4.7).

(ii) In the multidimensional case, we note that the Riccati RDE (4.4.12) does not involve b. Then we may also obtain the uniqueness, under our weaker condition b2C 1 !; (E d ), from the strategy in this proof.

Applying Theorem 4.4.3 and following the arguments at the beginning of the proof of Theorem 4.4.5 (or Remark 4.4.6 (ii)) concerning the weaker condition on b, the following result is immediate.

Corollary 4.4.7. Let !;a;b;;l; be as in Theorem 4.4.5, and similarly ~ !; ~ a; ~ b; ~ ; ~ l; ~ . Denote ' :=' ~ ' for appropriate '. Then d !;~ ! (; ~ ) C h d !;~ ! (a; ~ a) +d !;~ ! (b; ~ b) +kk +klk +k!k +ja 0 j +j@ ! a 0 @ ~ ! ~ a 0 j +jb 0 j +j@ ! b 0 @ ~ ! ~ b 0 j i :

4.5 Pathwise solutions of stochastic differential equations

4.5.1 The rough path setting for Brownian motion

Let 0 :=f!2 C([0;T ];R d ) : ! 0 = 0g be the canonical space, B the canonical process, F = F B the natural filtration, and P 0 the Wiener measure. Following Föllmer [38] (or see Bichteler [7] and Karandikar [50] for more general results on pathwise stochastic integration), we may construct pathwise Itô integration as follows: t (!) := lim n!1 2 n 1 X i=0 ! t n i (! t n i ^t;t n i+1 ^t ) where t n i := iT 2 n ;i = 0; ; 2 n : (4.5.1) Then is F-adapted and t = R t 0 B s d Ito B s , 0 t T , P 0 -a.s. Here d Ito stands for Itô integration. Define s;t (!) := t (!) s (!)! s ! s;t ; Str s;t (!) := s;t (!) + 1 2 (ts)I d ; h!i t :=! t ! t t (!) [ t (!)] : (4.5.2) It is straightforward to check that s;t (!) s;r (!) r;t (!) =! s;r ! r;t = Str s;t (!) Str s;r (!) Str r;t (!): (4.5.3)

Moreover, we have the following well-known result:

Lemma 4.5.1. For any 1 3 << 1 2 , we have P 0 (A ) = 1, where A := n sup (s;t)2T 2 j s;t j jtsj 2 <1 o \ n h!i t =tI d ; 0tT o (4.5.4) \ n lim t#s jv! s;t j jtsj 2 =1;8s2Q\ [0;T ); v2R d nf0g o :

Now set, for the A defined in (4.5.4), := n !2 0 : (!; (!))2 and !2A ; for all 1 3 << 1 2 o ; d (!; ~ !) :=d (!; (!)); (~ !; (~ !)) ; for all !; ~ !2 and 1 3 << 1 2 : (4.5.5) By (4.5.3) and Lemma 4.5.1, we see that P 0 ( ) = 1. From now on, we shall always restrict the sample space to , and we still denote by B the canonical process and F :=F B . Define C( ;E) := S n C ( ;E) : satisfies (4:2:3) o ; where (4.5.6) C ( ;E) := n 2L 0 (F) :(!)2C 1 !; (E);8!2 ; and E P 0 h k(!)k 2 !; i <1 o : We now define the pathwise stochastic integral by using the rough path integral: for 2 C( ;E d ), Z t 0 s dB s (!) := Z t 0 s (!)d(!; (!)) s ; 8!2 ; Z t 0 s dB s (!) := Z t 0 s (!)d(!; Str (!)) s ; 8!2 : (4.5.7) The following result can be found in [40], Proposition 5.1 and Corollary 5.2.

Theorem 4.5.2. For any 2C( ;E d ), the above pathwise stochastic integrals R t 0 s dB s and R t 0 s dB s coincide with the Itô integral and the Stratonovich integral, respectively.

Remark 4.5.3. Let X be a semimartingale with dX t = t dB t + t dt, where 2C( ;E d ) and is continuous. Then X2C( ;E) with @ ! X t (!) = t (!) for each !2 . In the spirit of Dupire's functional Itô calculus (see [29]), [11] defines the above as the path derivative of the process X. Therefore, Gubinelli's derivative @ ! X(!) in Definition 4.2.3 is consistent with the path derivatives introduced in [11].

Remark 4.5.4. Let !2 and 2C 2 (!;(!)); (E) for certain satisfying (4.2.3). Define @ ! t := Trace(D ! t ): (4.5.8) Then @ ! t is unique and is consistent with the time derivative in [11]. Moreover, the pathwise Itô formula (4.2.18) and the pathwise Taylor expansions (4.2.22), (4.2.23) become: d t = @ ! t d! t + h @ ! t t + 1 2 Trace(@ 2 !! t ) i dt; s;t = @ ! s ! s;t + 1 2 @ 2 !! s : [! s;t ! s;t +! s;t ! s;t ] +@ ! t s (ts) (4.5.9) +O((ts) 2+ ); (4.5.10) s;t = @ ! s ! s;t + 1 2 @ 2 !! s : [! s;t ! s;t ] +@ ! t s (ts) +O((ts) 2+ ); respectively. These are also consistent with [11].

4.5.2 Stochastic differential equations with regular solutions

We now consider the following SDE with random coefficients: X t =x + Z t 0 (s;X s ;!)dB s + Z t 0 b(s;X s ;!)ds; !2 ; (4.5.11) where b; are F-progressively measurable. Clearly, the above SDE can be rewritten as the following RDE: X t (!) =x + Z t 0 (s;X s (!);!)d(!; (!)) s + Z t 0 b(s;X s (!);!) I d d :dh!i s ; !2 : (4.5.12) The following result is a direct consequence of Theorems 4.4.2 and 4.4.3.

Theorem 4.5.5. (i) Assume, for each !2 , there exists (!) satisfying (4.2.3) such that b(;!)2C 2 (!) (E;E) and (;!)2C 2;3 !;(!) (E;E d ). Then the SDE has a unique solution X such that X(!)2C 2 !;(!) (E) for all !2 . (ii) Assume further that b and are continuous in ! in the following sense: lim n!1 h kb(;! n )b(;!)k 1;(!) +d !;! n 2;(!) ((;! n );(;!)) i = 0; (4.5.13) for any !;! n 2 such that lim n!1 d (!) (! n ;!) = 0: Then X is also continuous in ! in the sense that: lim n!1 d !;! n (!) (X(!);X(! n )) = 0; and lim n!1 kX(!)X(! n )k 1 = 0: (4.5.14)

Remark 4.5.6. The construction of pathwise solutions of SDEs via rough paths is standard. However, we remark that our canonical sample space is universal; in particular, it does not depend on the controlled rough path or the coefficient (t;!;x). Hence, our solution is indeed constructed for every !2 , without an exceptional null set, which is, to the best of our knowledge, a new message.

Remark 4.5.7. (i) Assume is Hölder- 1 2 continuous in t and Lipschitz continuous in ! in the following sense: j(t;x;!)( ~ t;x; ~ !)jC hp ~ tt + sup 0sT j! s^t ~ ! s^ ~ t j i ; (4.5.15) Then (;x;!) is Hölder- continuous in t for all < 1 2 .
We remark that the distance on the right side of (4.5.15) is used in Zhang and Zhuo [100] and is equivalent to the metric introduced by Dupire [29]. (ii) As mentioned in the Introduction, since ! is only Hölder- continuous for < 1 2 , it is not reasonable to assume (;x;!) is Hölder-(1) continuous as required in Lejay and Victoir [55].

Remark 4.5.8. Under Stratonovich integration, the quadratic variation of the Brownian motion sample path vanishes: h(!; str (!))i t = 0. If we want to consider an SDE of the form dX t =(t;X t ;!)dB t +b(t;!;X t )dt; (4.5.16) we cannot simply rewrite it into dX t (!) =(t;!;X t (!))d(!; str (!)) t +b(t;!;X t (!)) I d d :dh(!; str (!))i t : We can obtain a pathwise solution of (4.5.16) in the following two ways: (i) We may rewrite (4.5.16) in Itô form: dX t =(t;!;X t )dB t + h b + 1 2 Trace @ ! +@ y i (t;!;X t )dt; (4.5.17) which corresponds further to the following RDE: dX t (!) =(t;!;X t (!))d(!; (!)) t + h bI d d + @ ! +@ y 2 i (t;!;X t (!)) :dh!i t : (4.5.18) (ii) In Section 4.4, we may easily extend our results to more general RDEs: d t =g(t; t )d! t +f(t; t ) :dh!i t +h(t; t )dt: (4.5.19) Then we may deal with (4.5.16) directly.

4.6 Rough PDEs and stochastic PDEs

In this section, we extend the results of the previous sections to rough PDEs (4.1.3) and stochastic PDEs (4.1.4) with random coefficients. The wellposedness of such RPDEs and SPDEs, especially in the fully nonlinear case, is very challenging and has received strong attention. We refer to Lions and Souganidis [59-62], Buckdahn and Ma [9, 10], Caruana and Friz [14], Caruana, Friz and Oberhauser [15], Friz and Oberhauser [39], Diehl and Friz [27], Diehl, Oberhauser and Riedel [28], and Gubinelli, Tindel and Torrecilla [43] for the wellposedness of some classes of RPDEs/SPDEs, where various notions of solutions are proposed.
While this section is mainly motivated by the study of pathwise viscosity solutions of SPDEs in Buckdahn, Ma and Zhang [12], here we shall focus on classical solutions only. In particular, we do not intend to establish strong wellposedness for general f; instead, we shall investigate the diffusion coefficients and g and see when the RPDE/SPDE can be transformed into a deterministic PDE. Again, unlike most results in the standard literature on rough PDEs, we allow the coefficients to depend on (t;!). The results will require quite high regularity of the coefficients, in the sense of our path regularity. In order to simplify the presentation, for some results we shall not specify the precise regularity conditions.

4.6.1 RDEs with spatial parameters

Let u 0 : ~ E! E, g :T ~ EE! E d , f :T ~ EE! E dd , and consider the following RDE with parameter x2 ~ E: u t (x) =u 0 (x) + Z t 0 g(s;x;u s (x))d! s + Z t 0 f(s;x;u s (x)) :dh!i s : (4.6.1) Assume u 0 , g and f are differentiable in x. Differentiating (4.6.1) formally in x i , i = 1; ;j ~ Ej, we obtain, denoting v i t (x) :=@ x i u t (x), v i t (x) =@ x i u 0 (x) + Z t 0 [@ x i g(s;x;u s (x)) +@ y g(s;x;u s (x)) v i s (x)]d! s + Z t 0 [@ x i f(s;x;u s (x)) +@ y f(s;x;u s (x)) v i s (x)] :dh!i s : (4.6.2)

Theorem 4.6.1. Assume (i) u 0 , g, f are continuously differentiable in x; (ii) for each x2 ~ E, i = 1; ;j ~ Ej, j = 1; ;jEj, g(x;)2C 2;3 !; (E;E d ); f(x;)2C 2 (E;E dd ); @ x i g(x;)2C 1;2 !; (E;E d ); @ y j g(x;)2C 2;3 !; (E;E d ); @ x i f(x;)2C 0 (E;E dd ): (4.6.3) (iii) for any x2 ~ E, denoting ' :='(x + x;)'(x;) for appropriate ', lim jxj!0 h kgk 2;!; +kfk 1; i = 0; lim jxj!0 h k[@ x g]k 2;!; +k[@ y g]k 2;!; +k[@ x f]k 0; +k[@ y f]k 0; i = 0: (4.6.4) Moreover, @ !x g and @ !y g are continuous. Then, for each x 2 ~ E, the RDEs (4.6.1) and (4.6.2) have unique solutions u(x;);v i (x;) 2 C 2 !; (E), respectively. Moreover, u is differentiable in x with @ x i u =v i . Proof.
First, without loss of generality we may assume j ~ Ej = 1, namely ~ E =R. For each x2 ~ E, by the first line of (4.6.3) and applying Theorem 4.4.2, we see that RDE (4.6.1) has a unique solution u(x)2C 2 !; (E). By the second line of (4.6.3) and applying Theorem 4.3.4 and Lemma 4.3.7, we see that, for j = 1; ;jEj, we have @ x g(x;u(x)) 2 C 1 !; (E d ), @ y j g(x;u(x)) 2 C 2 !; (E d ), @ x f(x;u(x)), and @ y j f(x;u(x))2 (E dd ). Then by Theorem 4.4.5 the linear RDE (4.6.2) has a unique solution v(x)2C 2 !; (E). It remains to prove @ x u = v. Given x2 R, x2 Rnf0g and 2 [0; 1], put u t :=u t (x + x)u t (x), ru t := ut x , ' t () :='(t;x +x;u t (x) +u t (x)), and ' t () := ' t ()' t (0), for appropriate '. By the first line of (4.6.4), it follows from Theorem 4.4.3 that: lim jxj!0 kuk !; = 0: (4.6.5) Moreover, one can easily check that dru t = Z 1 0 [@ x g t () +@ y g t () ru t ]dd! t + Z 1 0 [@ x f t () +@ y f t () ru t ]d :dh!i t ; dv t (x) = [@ x g t (0) +@ y g t (0) v t (x)]d! t + [@ x f t (0) +@ y f t (0) v t (x)] :dh!i t : By the second line of (4.6.4) and (4.6.5), it follows from Lemmas 4.3.6 (ii) and 4.3.7 (ii) that lim jxj!0 h k@ x g t ()@ x g(0)k !; +k@ y g t ()@ y g(0)k !; i = 0; lim jxj!0 h k@ x f t ()@ x f(0)k +k@ y f t ()@ y f(0)k i = 0; for any 2 [0; 1]. Furthermore, by Theorem 4.3.4 (i) we have @ ! [@ x g 0 ()] =@ !x g() +@ yx g 0 () g 0 (); @ ! [@ y g 0 ()] =@ !y g() +@ yy g 0 () g 0 () Recalling the continuity of @ !x g, @ !y g in (iii), we see that, for any 2 [0; 1], lim jxj!0 h j@ ! [@ x g 0 ()]@ ! [@ x g 0 ()j +j@ ! [@ y g 0 ()]@ ! [@ y g 0 ()j i = 0: Now by Corollary 4.4.7 we have lim jxj!0 kruv(x)k !; = 0. That is, @ x u t (x) = v t (x).

4.6.2 Pathwise characteristics

As is standard in the literature, see e.g. Kunita [54] for stochastic PDEs and [40], Chapter 12, for rough PDEs, the main tool for dealing with semilinear RPDEs/SPDEs is the characteristics, which we shall introduce below by using RDEs driven by rough paths and backward rough paths.
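Before the formal construction, the interplay of the forward and backward characteristic flows can be previewed in the smooth-path case, where the forward characteristic is a plain ODE and running it with the time-reversed driver undoes it (the inverse-flow property of the characteristics). The coefficient sigma and all numbers below are illustrative choices of ours:

```python
import math

# Smooth-path preview of the characteristic flows: with a smooth driver
# the forward characteristic is the ODE  dx_t = -sigma(x_t) dt, and the
# backward flow, driven by the time-reversed path, recovers the starting
# point.  sigma and all numbers are illustrative, not from the text.

def euler(f, x0, T, n):
    x, dt = x0, T / n
    for _ in range(n):
        x += f(x) * dt
    return x

sigma = lambda x: 2.0 + math.sin(x)
x0, T, n = 1.3, 1.0, 200_000

forward = euler(lambda x: -sigma(x), x0, T, n)   # forward characteristic
back = euler(sigma, forward, T, n)               # time-reversed flow
print(abs(back - x0))   # numerically recovers the initial point
```

This is exactly the role the backward rough path plays below: it inverts the forward flow, which is what makes the change of variables along characteristics well defined.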
Let :T ~ E! ~ E d and g :T ~ EE!E dd . Fix t 0 2T and denote t 0 (t;y) :=(t 0 t;y); g t 0 (t;x;y) :=g(t 0 t;x;y); (4.6.6) Consider the following characteristic RDEs: 8 > > > < > > > : x t =x R t 0 (s; x s )d! s ; t 0 ;x t =x + R t 0 t 0 (s; t 0 ;x s )d ! t 0 s ; (4.6.7) 8 > > > < > > > : x;y t =y + R t 0 g(s; x s ; x;y s )d! s ; t 0 ;x;y t =y R t 0 g t 0 (s; t 0 ;x s ; t 0 ;x;y s )d ! t 0 s : (4.6.8) By Lemma 4.2.11 and Theorem 4.4.2, the following result is obvious.

Lemma 4.6.2. (i) Assume 2C 2;3 !; ( ~ E; ~ E d ). Then, for each x2 ~ E, the RDEs (4.6.7) have unique solutions x 2C 1 !; ( ~ E) and t 0 ;x 2C 1 ! t 0 ; ([0;t 0 ]; ~ E) satisfying t 0 ; x t 0 t = x t 0 t , t2 [0;t 0 ]. In particular, the mapping x7! x t 0 is one-to-one with inverse function x7! t 0 ;x t 0 . (ii) Assume further that, for each x2 ~ E and for the above solution x , the mapping (t;y)7!g(t; x t ;y) is in C 2;3 !; (E;E dd ). Then the RDEs (4.6.8) have unique solutions x;y 2 C 1 !; (E) and t 0 ;x;y 2 C 1 ! t 0 ; (E) satisfying t 0 ; x t 0 ; x t 0 t = x;y t 0 t , t 2 [0;t 0 ]. In particular, the mapping (x;y)7! ( x t 0 ; x;y t 0 ) is one-to-one with inverse functions (x;y)7! ( t 0 ;x t 0 ; t 0 ;x;y t 0 ).

Now define '(t;x) := t;x t ; (t;x;y) := t; x t ;y t ;(t;x;y) := '(t;x);y t ;b g(t;x;y) :=g(t; x t ;y): (4.6.9)

Lemma 4.6.3. Assume and g are smooth enough in the sense of Theorem 4.6.1. Then '; are twice differentiable in (x;y), and for any fixed (x;y), '(;x); (;x;y)2C ! . Moreover, they satisfy the following RDEs: '(t;x) = x + Z t 0 @ x ' (s;x)d! s + Z t 0 h 1 2 @ 2 xx ' 2 [;] +@ x ' [@ x ] i (s;x) :dh!i s ; (t;x;y) = y Z t 0 [@ y b g](s;x;y)d! s + Z t 0 h 1 2 @ 2 yy 2 [b g;b g] +@ y [@ y b g b g ] i (s;x;y) :dh!i s :

Proof. By Theorem 4.6.1, x , t;x , x;y , t;x;y are sufficiently differentiable in (x;y). This implies the desired differentiability of '; . We now check the RDEs. First, fix (s;t)2T 2 and denote :=ts.
Note that '(t;x) = t;x t = s; t;x s ='(s; t;x ); and that, applying Lemma 4.2.11, t;x x = Z 0 t (r; t;x r )d ! t r = t (0;x) ! t 0; +[@ ! t t +@ x t ( t ) ](0;x) : ! t 0; +O( 2+ ) = (t;x)! s;t + [@ ! +@ x ](t;x) : [! s;t ! s;t ! s;t ] +O( 2+ ) = (s;x)! s;t +@ ! (s;x) :! s;t +@ x (s;x) : [! s;t ! s;t ! s;t ] +O( 2+ ) Then, applying a Taylor expansion, '(t;x)'(s;x) ='(s; t;x )'(s;x) = @ x '(s;x) [ t;x x] + 1 2 @ 2 xx '(s;x) 2 [ t;x x; t;x x] +O( 3 ) = @ x '(s;x) h (s;x)! s;t +@ ! (s;x) :! s;t +@ x (s;x) : [! s;t ! s;t ! s;t ] i + 1 2 @ 2 xx '(s;x) 2 [(s;x)! s;t ] +O( 2+ ) In particular, this implies @ ! ' =@ x ' : On the other hand, by applying Theorem 4.6.1 to (4.6.7) and viewing ( x ;@ x x ) as the solution to a higher-dimensional RDE, one can check similarly that @ ! [@ x '] =@ x [(@ x ' ) ]: Denote by ~ ' the right side of the RDE for '. Then, taking values at (s;x), [ ~ '(;x)] s;t = @ x ' ! s;t +@ ! [@ x ' ] :! s;t + h 1 2 @ 2 xx ' [;] +@ x ' [@ x ] i :h!i s;t +O( 2+ ) = @ x ' ! s;t + h @ x [@ x ' ] +@ x ' @ ! :! s;t + h 1 2 @ 2 xx ' +@ x ' [@ x ] i : [! s;t ! s;t ! s;t ! s;t ] +O( 2+ ): It is straightforward to check that ['(;x)] s;t = [ ~ '(;x)] s;t + O( 2+ ); implying ' = ~ '. Similarly, notice that (t;x;y) = t; x t ;y t = s; t; x t ; t; x t ;y s = s; x s ; t; x t ;y s = (s;x; t; x t ;y ): Following similar arguments one can verify the RDE for .

4.6.3 Rough PDEs

Now consider the RPDE: u t (x) = u 0 (x) + Z t 0 [@ x u s (x) s (x) +g s (x;u s (x))]d! s (4.6.10) + Z t 0 f s (x;u s (x);@ x u s (x);@ 2 xx u s (x)) :dh!i s : Define v(t;x) := (t;x;u(t; x t )) and equivalently u(t;x) =(t;x;v(t;'(t;x))):

Theorem 4.6.4. Assume the coefficients and u are smooth enough. Then u is a solution of RPDE (4.6.10) if and only if v satisfies: dv t (x) = b f(t;x;v t (x);@ x v t (x);@ 2 xx v t (x)) :dh!i t ; (4.6.11) or equivalently, D ! t v t (x) = b f(t;x;v t (x);@ x v t (x);@ 2 xx v t (x)); where b f(t;x;y;z; ) := @ y (t;x;b y) h f(t; x t ;b y;b z;b ) 1 2 b : [;](t; x t ) b z @ x +@ x g +@ y g b z] i (t; x t ;b y); (4.6.12) b y = (t; x t ;y); b z = @ x (t; x t ;y) +@ y (t; x t ;y) z @ x '(t; x t ); b = @ 2 xx (t; x t ;y) + [@ xy (t; x t ;y) +@ yx (t; x t )] 2 [z;@ x '(t; x t )] +@ 2 yy (t; x t ;y) 2 [@ x ' @ x ';@ x ' @ x '](t; x t ) +@ y (t; x t ;y) h 2 [@ x ';@ x '](t; x t ) +z @ 2 xx '(t; x t ) i :

Proof. Applying the Itô-Ventzell formula (4.3.14) we have du(t; x t ) =g(t; x t ;u(t; x t ))d! t + h f(;u;@ x u;@ 2 xx u) [ 1 2 @ 2 x u : [;] +@ x u @ x +@ x g(;u) +@ y g @ x u ] i (t; x t ) :dh!i t ; v(t;x) =d[ (t;x;u(t; x t ))] =@ y (t;x;u(t; x t )) h f(;u;@ x u;@ 2 xx u) 1 2 @ 2 x u : [;] @ x u @ x + @ x g +@ y g @ x u] i (t; x t ;u(t; x t )) :dh!i t : Now note that u(t;x) = (t;x;v(t;'(t;x))); @ x u = @ x +@ y @ x v @ x '; @ 2 xx u = @ 2 xx + [@ xy +@ yx ] 2 [@ x v;@ x '] +@ 2 yy 2 [@ x ' @ x ';@ x ' @ x '] +@ y @ 2 xx v 2 [@ x ';@ x '] +@ y @ x v @ 2 xx ': Then u(t; x t ) = (t; x t ;v(t;x)); @ x u(t; x t ) = @ x (t; x t ;v(t;x)) +@ y (t; x t ;v(t;x)) @ x v(t;x) @ x '(t; x t ); @ 2 xx u(t; x t ) = @ 2 xx (t; x t ;v(t;x)) + [@ xy (t; x t ;v(t;x)) +@ yx (t; x t )] 2 [@ x v(t;x);@ x '(t; x t )] +@ 2 yy (t; x t ;v(t;x)) 2 [@ x ' @ x ';@ x ' @ x '](t; x t ) +@ y (t; x t ;v(t;x)) @ 2 xx v(t;x) 2 [@ x ';@ x '](t; x t ) +@ y (t; x t ;v(t;x)) @ x v(t;x) @ 2 xx '(t; x t ): Plugging this into (4.6.13), we obtain the result immediately.

4.6.4 Pathwise solutions of stochastic PDEs

We now study the stochastic PDE: u t (!;x) =u 0 (x) + Z t 0 [ s (!;x)@ x u s (!;x) +g s (!;x;u s (!;x))]dB s + Z t 0 f s (!;x;u s (!;x);@ x u s (!;x);@ 2 xx u s (!;x))ds; P 0 -a.s.
Clearly, this corresponds to the following RPDE: u t (!;x) =u 0 (x) + Z t 0 [ s (!;x)@ x u s (!;x) +g s (!;x;u s (!;x))]d(!;F (!)) s + Z t 0 F s (!;x;u s (!;x);@ x u s (!;x);@ 2 xx u s (!;x)) :dh!i s ; 8!2 ; where F (t;!;x;y;z; ) :=f(t;!;x;y;z; ) I d d : (4.6.13) Define !;x t , (t;!;x;y), b F (t;!;x;y;z; ) in the obvious sense and v(t;!;x) := (t;!;x;u(t;!; !;x t )); b f(t;!;x;y;z; ) := Trace[ b F (t;!;x;y;z; )]: (4.6.14) Then we have, recalling @ ! t v defined in Remark 4.5.4, dv(t;!;x) =@ ! t v(t;!;x)dt = b f t (!;x;v t (!;x);@ x v t (!;x);@ 2 xx v t (!;x))dt: Clearly, this implies that @ ! t v t (x) = @ t v(t;!;x), the standard time derivative for fixed (!;x). We now conclude this chapter with the following result:

Theorem 4.6.5. Assume the coefficients and u are smooth enough. Then, for each !2 , u(!;) is a solution of (4.6.13) if and only if v(!;) is a solution of the following PDE: @ t v t (!;x) = b f t (!;x;v t (!;x);@ x v t (!;x);@ 2 xx v t (!;x)): (4.6.15)

Bibliography

[1] J.-P. Aubin and G. Haddad, History path dependent optimal control and portfolio valuation and management, Positivity 6 (2002), no. 3, 331-358. Special issue on mathematical economics.
[2] P. Bank and D. Baum, Hedging and portfolio optimization in financial markets with a large trader, Math. Finance 14 (2004), no. 1, 1-18.
[3] G. Barles, R. Buckdahn, and E. Pardoux, Backward stochastic differential equations and integral-partial differential equations, Stochastics Stochastics Rep. 60 (1997), no. 1-2, 57-83.
[4] H. Bauer, Maß- und Integrationstheorie, 2. Auflage, de Gruyter Lehrbuch, Walter de Gruyter & Co., Berlin, 1992.
[5] E. Bayraktar and M. Sîrbu, Stochastic Perron's method and verification without smoothness using viscosity comparison: the linear case, Proc. Amer. Math. Soc. 140 (2012), no. 10, 3645-3654.
[6] E. Bayraktar and S. Yao, Quadratic reflected BSDEs with unbounded obstacles, Stochastic Process. Appl. 122 (2012), no. 4, 1155-1203.
[7] K. Bichteler, Stochastic integration and L p -theory of semimartingales, Ann. Probab. 9 (1981), no. 1, 49-89.
[8] D. Blackwell and L. E. Dubins, On existence and non-existence of proper, regular, conditional distributions, Ann. Probability 3 (1975), no. 5, 741-752.
[9] R. Buckdahn and J. Ma, Stochastic viscosity solutions for nonlinear stochastic partial differential equations. I, Stochastic Process. Appl. 93 (2001), no. 2, 181-204.
[10] R. Buckdahn and J. Ma, Stochastic viscosity solutions for nonlinear stochastic partial differential equations. II, Stochastic Process. Appl. 93 (2001), no. 2, 205-228.
[11] R. Buckdahn, J. Ma, and J. Zhang, Pathwise Taylor expansions for random fields on multiple dimensional paths, Stochastic Process. Appl. 125 (2015), no. 7, 2820-2855.
[12] R. Buckdahn, J. Ma, and J. Zhang, Pathwise viscosity solutions of stochastic PDEs and forward path-dependent PDEs, arXiv preprint arXiv:1501.06978 (2015).
[13] R. Carbone, B. Ferrario, and M. Santacroce, Backward stochastic differential equations driven by càdlàg martingales, Theory of Probability & Its Applications 52 (2008), no. 2, 304-314.
[14] M. Caruana and P. Friz, Partial differential equations driven by rough paths, J. Differential Equations 247 (2009), no. 1, 140-173.
[15] M. Caruana, P. K. Friz, and H. Oberhauser, A (rough) pathwise approach to a class of non-linear stochastic partial differential equations, Ann. Inst. H. Poincaré Anal. Non Linéaire 28 (2011), no. 1, 27-46.
[16] P. Cheridito, H. M. Soner, N. Touzi, and N. Victoir, Second-order backward stochastic differential equations and fully nonlinear parabolic PDEs, Comm. Pure Appl. Math. 60 (2007), no. 7, 1081-1110.
[17] F. Confortola and M. Fuhrman, Backward stochastic differential equations and optimal control of marked point processes, SIAM J. Control Optim. 51 (2013), no. 5, 3592-3623.
[18] R. Cont and D.-A. Fournié, A functional extension of the Ito formula, C. R. Math. Acad. Sci. Paris 348 (2010), no. 1-2, 57-61.
[19] R. Cont and D.-A. Fournié, Change of variable formulas for non-anticipative functionals on path space, J. Funct. Anal. 259 (2010), no. 4, 1043-1072.
[20] R. Cont and D.-A. Fournié, Functional Itô calculus and stochastic integral representation of martingales, Ann. Probab. 41 (2013), no. 1, 109-133.
[21] A. Cosso and F. Russo, A regularization approach to functional Itô calculus and strong-viscosity solutions to path-dependent PDEs, arXiv preprint arXiv:1401.5034 (2014).
[22] M. G. Crandall, H. Ishii, and P.-L. Lions, User's guide to viscosity solutions of second order partial differential equations, Bull. Amer. Math. Soc. (N.S.) 27 (1992), no. 1, 1-67.
[23] M. G. Crandall and P.-L. Lions, Viscosity solutions of Hamilton-Jacobi equations, Trans. Amer. Math. Soc. 277 (1983), no. 1, 1-42.
[24] S. Crépey and A. Matoussi, Reflected and doubly reflected BSDEs with jumps: a priori estimates and comparison, Ann. Appl. Probab. 18 (2008), no. 5, 2041-2069.
[25] C. Dellacherie and P.-A. Meyer, Probabilities and potential, North-Holland Mathematics Studies, vol. 29, North-Holland Publishing Co., Amsterdam, 1978.
[26] L. Delong, Backward stochastic differential equations with jumps and their actuarial and financial applications, European Actuarial Academy (EAA) Series, Springer, London, 2013. BSDEs with jumps.
[27] J. Diehl and P. Friz, Backward stochastic differential equations with rough drivers, Ann. Probab. 40 (2012), no. 4, 1715-1758.
[28] J. Diehl, H. Oberhauser, and S. Riedel, A Lévy area between Brownian motion and rough paths with applications to robust nonlinear filtering and rough partial differential equations, Stochastic Process. Appl. 125 (2015), no. 1, 161-181.
[29] B. Dupire, Functional Ito calculus, SSRN, 2010.
[30] I. Ekren, Viscosity solutions of obstacle problems for fully nonlinear path-dependent PDEs, preprint, 2013.
[31] I. Ekren, C. Keller, N. Touzi, and J. Zhang, On viscosity solutions of path dependent PDEs, Ann. Probab. 42 (2014), no. 1, 204-236.
[32] I. Ekren, N. Touzi, and J. Zhang, Viscosity solutions of fully nonlinear parabolic path dependent PDEs: Part I, Ann. Probab. (forthcoming).
[33] I. Ekren, N. Touzi, and J. Zhang, Viscosity solutions of fully nonlinear parabolic path dependent PDEs: Part II, Ann. Probab. (forthcoming).
[34] N. El Karoui and S.-J. Huang, A general result of existence and uniqueness of backward stochastic differential equations, Backward stochastic differential equations (Paris, 1995-1996), 1997, pp. 27-36.
[35] N. El Karoui, C. Kapoudjian, E. Pardoux, S. Peng, and M. C. Quenez, Reflected solutions of backward SDE's, and related obstacle problems for PDE's, Ann. Probab. 25 (1997), no. 2, 702-737.
[36] N. El Karoui, S. Peng, and M. C. Quenez, Backward stochastic differential equations in finance, Math. Finance 7 (1997), no. 1, 1-71.
[37] W. H. Fleming and H. M. Soner, Controlled Markov processes and viscosity solutions, Second, Stochastic Modelling and Applied Probability, vol. 25, Springer, New York, 2006.
[38] H. Föllmer, Calcul d'Itô sans probabilités, Seminar on Probability, XV (Univ. Strasbourg, Strasbourg, 1979/1980) (French), 1981, pp. 143-150.
[39] P. Friz and H. Oberhauser, Rough path stability of (semi-)linear SPDEs, Probab. Theory Related Fields 158 (2014), no. 1-2, 401-434.
[40] P. K. Friz and M. Hairer, A course on rough paths, Universitext, Springer, Cham, 2014. With an introduction to regularity structures.
[41] P. K. Friz and N. B. Victoir, Multidimensional stochastic processes as rough paths, Cambridge Studies in Advanced Mathematics, vol. 120, Cambridge University Press, Cambridge, 2010. Theory and applications.
[42] M. Gubinelli, Controlling rough paths, J. Funct. Anal. 216 (2004), no. 1, 86-140.
[43] M. Gubinelli, S. Tindel, and I. Torrecilla, Controlled viscosity solutions of fully nonlinear rough PDEs, arXiv preprint arXiv:1403.2832 (2014).
[44] M. Hairer, Solving the KPZ equation, Ann. of Math. (2) 178 (2013), no. 2, 559-664.
[45] S. Hamadène and Y. Ouknine, Reflected backward stochastic differential equation with jumps and random obstacle, Electron. J. Probab. 8 (2003), no. 2, 20.
[46] C. Heitzinger, Simulation and inverse modeling of semiconductor manufacturing processes, Dissertation, Technische Universität Wien, 2002.
[47] H. Ishii, Perron's method for Hamilton-Jacobi equations, Duke Math. J. 55 (1987), no. 2, 369-384.
[48] J. Jacod and P. Protter, Discretization of processes, Stochastic Modelling and Applied Probability, vol. 67, Springer, Heidelberg, 2012.
[49] J. Jacod and A. N. Shiryaev, Limit theorems for stochastic processes, Second, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 288, Springer-Verlag, Berlin, 2003.
[50] R. L. Karandikar, On pathwise stochastic integration, Stochastic Process. Appl. 57 (1995), no. 1, 11-18.
[51] C. Keller, Viscosity solutions of path-dependent integro-differential equations, arXiv preprint arXiv:1412.8495 (2014).
[52] C. Keller and J. Zhang, Pathwise Itô calculus for rough paths and rough PDEs with path dependent coefficients, arXiv preprint arXiv:1412.7464 (2014).
[53] A. V. Kim, Functional differential equations, Mathematics and its Applications, vol. 479, Kluwer Academic Publishers, Dordrecht, 1999. Application of i-smooth calculus.
[54] H. Kunita, Stochastic flows and stochastic differential equations, Cambridge Studies in Advanced Mathematics, vol. 24, Cambridge University Press, Cambridge, 1990.
[55] A. Lejay and N. Victoir, On (p;q)-rough paths, J. Differential Equations 225 (2006), no. 1, 103-133.
[56] P.-L. Lions, Viscosity solutions of fully nonlinear second-order equations and optimal stochastic control in infinite dimensions. I. The case of bounded stochastic evolutions, Acta Math. 161 (1988), no. 3-4, 243-278.
[57] P.-L. Lions, Viscosity solutions of fully nonlinear second order equations and optimal stochastic control in infinite dimensions. II. Optimal control of Zakai's equation, Stochastic partial differential equations and applications, II (Trento, 1988), 1989, pp. 147-170.
[58] P.-L. Lions, Viscosity solutions of fully nonlinear second-order equations and optimal stochastic control in infinite dimensions. III. Uniqueness of viscosity solutions for general second-order equations, J. Funct. Anal. 86 (1989), no. 1, 1-18.
[59] P.-L. Lions and P. E. Souganidis, Viscosity solutions of fully nonlinear stochastic partial differential equations, Sūrikaisekikenkyūsho Kōkyūroku 1287 (2002), 58-65. Viscosity solutions of differential equations and related topics (Japanese) (Kyoto, 2001).
[60] P.-L. Lions and P. E. Souganidis, Fully nonlinear stochastic partial differential equations, C. R. Acad. Sci. Paris Sér. I Math. 326 (1998), no. 9, 1085-1092.
[61] P.-L. Lions and P. E. Souganidis, Fully nonlinear stochastic partial differential equations: non-smooth equations and applications, C. R. Acad. Sci. Paris Sér. I Math. 327 (1998), no. 8, 735-741.
[62] P.-L. Lions and P. E. Souganidis, Fully nonlinear stochastic PDE with semilinear stochastic dependence, C. R. Acad. Sci. Paris Sér. I Math. 331 (2000), no. 8, 617-624.
[63] N. Yu. Lukoyanov, Functional equations of Hamilton-Jacobi type, and differential games with hereditary information, Dokl. Akad. Nauk 371 (2000), no. 4, 457-461.
[64] N. Yu. Lukoyanov, Viscosity solution of nonanticipating equations of Hamilton-Jacobi type, Differ. Uravn. 43 (2007), no. 12, 1674-1682, 1727.
[65] N. Lukoyanov, Functional Hamilton-Jacobi type equations in ci-derivatives for systems with distributed delays, Nonlinear Funct. Anal. Appl. 8 (2003), no. 3, 365-397.
[66] N. Yu. Lukoyanov, On viscosity solution of functional Hamilton-Jacobi type equations for hereditary systems, Proc. Steklov Inst. Math. 259 (2007), S190-S200.
[67] T. Lyons, Rough paths, signatures and the modelling of functions on streams, Proceedings of the International Congress of Mathematicians, Seoul, 2014.
Volume IV, 2014, pp. 163{184. [68] T. J. Lyons, Dierential equations driven by rough signals, Rev. Mat. Iberoamericana 14 (1998), no. 2, 215{310. [69] T. J Lyons and D. Yang, Integration of time-varying cocyclic one-forms against rough paths, arXiv preprint arXiv:1408.2785 (2014). [70] J. Ma, P. Protter, and J. M. Yong, Solving forward-backward stochastic dif- ferential equations explicitly|a four step scheme, Probab. Theory Related Fields 98 (1994), no. 3, 339{359. [71] J. Ma, J. Zhang, and Z. Zheng, Weak solutions for forward-backward SDEs| a martingale problem approach, Ann. Probab. 36 (2008), no. 6, 2092{2125. [72] R. Mikulevicius and H. Pragarauskas, On the martingale problem associ- ated with nondegenerate L evy operators, Lithuanian Mathematical Journal 32 (1992), no. 3, 297{311. [73] R. Mikulevicius and H. Pragarauskas, On classical solutions of boundary value problems for certain nonlinear integro-dierential equations, Lithua- nian Mathematical Journal 34 (1994), no. 3, 275{287. [74] M.-A. Morlais, Re ected backward stochastic dierential equations and a class of non-linear dynamic pricing rule, Stochastics 85 (2013), no. 1, 1{26. 241 [75] H. Oberhauser, An extension of the functional ito formula under a family of non-dominated measures, arXiv preprint arXiv:1212.1414 (2012). [76] A. Ohashi, D. Le~ ao, and A. B Simas, Weak functional it^ o calculus and appli- cations, arXiv preprint arXiv:1408.1423 (2014). [77] E. Pardoux and S. Peng, Backward stochastic dierential equations and quasilinear parabolic partial dierential equations, Stochastic partial dier- ential equations and their applications (Charlotte, NC, 1991), 1992, pp. 200{ 217. [78] E. Pardoux and S. G. Peng, Adapted solution of a backward stochastic dif- ferential equation, Systems Control Lett. 14 (1990), no. 1, 55{61. [79] E. Pardoux and S. Tang, Forward-backward stochastic dierential equations and quasilinear parabolic PDEs, Probab. Theory Related Fields 114 (1999), no. 2, 123{150. [80] S. 
Peng, Backward SDE and related g-expectation, Backward stochastic dif- ferential equations (Paris, 1995{1996), 1997, pp. 141{159. [81] S. Peng, Monotonic limit theorem of BSDE and nonlinear decomposition theorem of Doob-Meyer's type, Probab. Theory Related Fields 113 (1999), no. 4, 473{499. [82] S. Peng, G-brownian motion and dynamic risk measure under volatility uncertainty, arXiv preprint arXiv:0711.2834 (2007). [83] S. Peng, G-expectation, G-Brownian motion and related stochastic calculus of It^ o type, Stochastic analysis and applications, 2007, pp. 541{567. [84] S. Peng, Backward stochastic dierential equation, nonlinear expectation and their applications, Proceedings of the International Congress of Mathemati- cians. Volume I, 2010, pp. 393{432. [85] S. Peng, Note on viscosity solution of path-dependent PDE and G- martingales, arXiv preprint arXiv:1106.1144 (2011). [86] S. Peng and F. Wang, BSDE, path-dependent PDE and nonlinear Feynman- Kac formula, arXiv preprint arXiv:1108.4317 (2011). [87] N. Perkowski and D. J Pr omel, Pathwise stochastic integrals for model free nance, arXiv preprint arXiv:1311.6187 (2013). 242 [88] T. Pham and J. Zhang, Two person zero-sum game in weak formulation and path dependent bellman{isaacs equation, SIAM Journal on Control and Optimization 52 (2014), no. 4, 2090{2121, available at http://dx.doi.org/ 10.1137/120894907. [89] Z. Ren, Viscosity solutions of fully nonlinear elliptic path dependent PDEs, preprint, 2014. [90] L. C. G. Rogers and D. Williams, Diusions, Markov processes, and martin- gales. Vol. 1, Cambridge Mathematical Library, Cambridge University Press, Cambridge, 2000. Foundations, Reprint of the second (1994) edition. [91] A. V. Skorohod, Limit theorems for stochastic processes, Teor. Veroyatnost. i Primenen. 1 (1956), 289{319. [92] H. M. Soner, N. Touzi, and J. Zhang, Wellposedness of second order back- ward SDEs, Probab. Theory Related Fields 153 (2012), no. 1-2, 149{190. MR2925572 [93] H. M. Soner, N. 
Touzi, and J. Zhang, Dual formulation of second order target problems, Ann. Appl. Probab. 23 (2013), no. 1, 308{347. [94] D. W. Stroock and S. R. S. Varadhan, Multidimensional diusion processes, Classics in Mathematics, Springer-Verlag, Berlin, 2006. Reprint of the 1997 edition. [95] A. I. Subbotin, Generalization of the fundamental equation of the theory of dierential games, Dokl. Akad. Nauk SSSR 254 (1980), no. 2, 293{297. [96] A. I. Subbotin, Generalization of the main equation of dierential game the- ory, J. Optim. Theory Appl. 43 (1984), no. 1, 103{133. [97] A. I. Subbotin, Generalized solutions of rst-order PDEs, Systems & Control: Foundations & Applications, Birkh auser Boston Inc., Boston, MA, 1995. The dynamical optimization perspective, Translated from the Russian. [98] W. Whitt, Stochastic-process limits, Springer Series in Operations Research, Springer-Verlag, New York, 2002. An introduction to stochastic-process lim- its and their application to queues. [99] J. Xia, Backward stochastic dierential equation with random measures, Acta Math. Appl. Sinica (English Ser.) 16 (2000), no. 3, 225{234. [100] J. Zhang and J. Zhuo, Monotone schemes for fully nonlinear parabolic path dependent PDEs, Journal of Financial Engineering 1 (2014), no. 01. 243
Abstract
In this dissertation, problems from stochastic analysis on path space are investigated. The dissertation consists of three major parts. The first two parts deal with path-dependent PDEs, which are motivated by non-Markovian problems in mathematical finance and stochastic control and are inherently backward in nature, whereas the last part deals with rough path theory and its applications to forward SPDEs.

In the first part, a notion of viscosity solutions for path-dependent PDEs is introduced. The underlying state space for these PDEs is a path space consisting of continuous functions, and the derivatives involved are based on Dupire's functional Itô calculus. The main result is well-posedness for semilinear equations.

The second part deals with semilinear path-dependent integro-differential equations, which are closely related to backward SDEs with jumps. Here, the state space is a path space consisting of càdlàg functions. The results from the first part are extended appropriately.

In the third part, rough path theory is extended to handle rough PDEs with minimal regularity requirements in time. This makes it possible to study SPDEs with random coefficients in a pathwise manner. Moreover, a connection between Dupire's functional Itô calculus and rough path integration is established.
Asset Metadata
Creator: Keller, Christian (author)
Core Title: Pathwise stochastic analysis and related topics
School: College of Letters, Arts and Sciences
Degree: Doctor of Philosophy
Degree Program: Applied Mathematics
Publication Date: 07/09/2015
Defense Date: 04/30/2015
Publisher: University of Southern California (original), University of Southern California. Libraries (digital)
Tag: backward SDEs, backward SDEs with jumps, characteristics, comparison principle, functional Ito calculus, functional Ito formula, Ito-Ventzell formula, martingale problems, OAI-PMH Harvest, path derivatives, path-dependent integro-differential equations, path-dependent PDEs, rough differential equations, rough paths, rough PDEs, Skorohod topologies, stochastic PDEs, viscosity solutions
Format: application/pdf (imt)
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Zhang, Jianfeng (committee chair), Kocer, Yilmaz (committee member), Ma, Jin (committee member)
Creator Email: christian_keller@gmx.li, kellerch@usc.edu
Permanent Link (DOI): https://doi.org/10.25549/usctheses-c3-591043
Unique identifier: UC11300460
Identifier: etd-KellerChri-3576.pdf (filename), usctheses-c3-591043 (legacy record id)
Legacy Identifier: etd-KellerChri-3576.pdf
Dmrecord: 591043
Document Type: Dissertation
Rights: Keller, Christian
Type: texts
Source: University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA