TOPICS ON SET-VALUED BACKWARD STOCHASTIC DIFFERENTIAL EQUATIONS

by Wenqian Wu

A Dissertation Presented to the FACULTY OF THE USC GRADUATE SCHOOL, UNIVERSITY OF SOUTHERN CALIFORNIA, in Partial Fulfillment of the Requirements for the Degree DOCTOR OF PHILOSOPHY (APPLIED MATHEMATICS), August 2021. Copyright 2021 Wenqian Wu.

Dedication

To My Family

Acknowledgments

My deepest and sincerest gratitude goes to my Ph.D. advisor, Professor Jin Ma, for accepting me as his Ph.D. student and guiding me throughout my whole Ph.D. life. I have always been grateful for having such a respectable and lovely gentleman as my advisor. His wide knowledge, patience, motivation and kindness benefited me not only with valuable academic ideas, but also with wisdom in every aspect of life.

I would like to express my thanks to my dissertation committee members, Prof. Jianfeng Zhang and Prof. Xin Tong, who provided me with various support, insightful advice and warm concern in many respects. Also, my thanks go to Prof. Remigijus Mikulevicius and Prof. Qi Feng for their service on the committee of my qualifying exam, and to Prof. Igor Kukavica, Prof. Kenneth Alexander, and Prof. Sergey Lototsky for their excellent teaching of many important courses in my graduate study. I am also indebted to Prof. Susan Montgomery and Amy T. Yung for their tireless help throughout my Ph.D. time.

My great gratitude also goes to Prof. Çağın Ararat at Bilkent University for the interesting and invaluable discussions on many problems in this thesis. It is truly my pleasure to work with him, as his knowledge, ideas and enthusiasm enlightened me greatly.
I also owe my special thanks to many graduated Ph.D.s, such as Xinyang Wang, Jia Zhuo, Tian Zhang, Weisheng Xie, Xiaojin Xing, Rentao Sun, and Jian Wang, whom I view as guides for my future career path, and to my dearest colleagues, including but not limited to Bowen Gang, Jie Ruan, Hao Wu, Jiaowen Yang, Jiajun Luo, Jingting Liu, Zimu Zhu, Pengbin Feng, Yusheng Wu, Mengsha Yao, Tianyou Wang, Ying Tan, Linfeng Li, and Jian Zhou, for all the discussions and enjoyment we shared.

Lastly, my most heartfelt thanks belong to my dearest parents, Cheng Wu and Daoying Zhang, for their enormous contribution in supporting me both physically and spiritually with their persistent care and encouragement. Also, my gratitude goes to my beloved husband, Man Luo, for standing by my side and supporting me selflessly all the time. Nothing could have been accomplished without their unconditional love. I hereby dedicate this dissertation to my family.

Table of Contents

Dedication
Acknowledgments
Abstract
Chapter 1: Introduction
  1.1 Motivation
  1.2 Main Difficulties
  1.3 Outline of the Dissertation
Chapter 2: Set-Valued Backward Stochastic Differential Equations
  2.1 Basics of Set-Valued Analysis
    2.1.1 Spaces of Sets
    2.1.2 Set-Valued Measurable Functions and Decomposable Sets
    2.1.3 Set-Valued Integrals
  2.2 Set-Valued Stochastic Analysis Revisited
    2.2.1 Set-Valued Conditional Expectations
    2.2.2 Set-Valued Stochastic Processes
    2.2.3 Set-Valued Stochastic Integrals
  2.3 Some Important Estimates
  2.4 Set-Valued Martingales and Their Integral Representations
    2.4.1 Representation of Martingales with Trivial Initial Value
    2.4.2 Representation of Martingales with General Initial Value
  2.5 Set-Valued BSDEs
    2.5.1 Set-Valued BSDEs in Conditional Expectation Form
    2.5.2 Set-Valued BSDEs with Martingale Terms
    2.5.3 Set-Valued BSDEs with Generalized Stochastic Integrals
Chapter 3: Some Related Topics
  3.1 Relationship between BSDIs and SV-BSDEs
  3.2 Simulation Using Deep Learning
    3.2.1 Deep Learning Based Method
    3.2.2 Examples
    3.2.3 Some Discussions on the Boundary
Bibliography

Abstract

In this dissertation, we establish an analytic framework for studying set-valued backward stochastic differential equations (set-valued BSDEs), motivated largely by the current studies of dynamic set-valued risk measures for multi-asset or network-based financial models. Our framework makes use of the notion of the Hukuhara difference between sets, in order to compensate for the lack of an "inverse" operation under the traditional Minkowski addition, and hence the lack of a vector space structure in set-valued analysis. While proving the well-posedness of a class of set-valued BSDEs, we also address some fundamental issues regarding generalized Aumann–Itô integrals, especially in connection with the martingale representation theorem. In particular, we propose some necessary extensions of the integral that can be used to represent set-valued martingales with non-singleton initial values. This extension turns out to be essential for the study of set-valued BSDEs.
Given the wide applicability of set-valued backward stochastic differential equations, we also consider numerical methods for them. It is worth noting that both the underlying state $x$ and each component $y$ of the solution $Y$ of a set-valued BSDE can be high dimensional. We therefore explore the simulation of our set-valued BSDEs based on the recently developed deep BSDE method, which uses state-of-the-art deep neural networks as a tool to resolve the curse of dimensionality for high-dimensional PDEs and BSDEs.

Chapter 1: Introduction

The main topic of this dissertation is the study of set-valued backward stochastic differential equations, which extend the traditional univariate backward stochastic differential equations to the case of set-valued functions. Some related topics are studied in this dissertation as well.

1.1 Motivation

Set-valued analysis, both deterministic and stochastic, has found many applications over the years. Most of these applications are in optimization and optimal control theory, but recently more applications have appeared in economics and finance. The problem that particularly motivated this work is that of so-called set-valued dynamic risk measures, which we now briefly describe.

The risk measure of a financial position $\xi$ at a specific time $t$, often denoted by $\rho_t(\xi)$, is defined as a convex functional of the (bounded) real-valued random variable $\xi$ satisfying certain axioms such as monotonicity and translativity (cash-additivity) (cf., e.g., [3,5,32]). A dynamic risk measure is a family of risk measures $\{\rho_t(\cdot)\}_{t\in[0,T]}$ such that, for each financial position $\xi$, $\{\rho_t(\xi)\}_{t\in[0,T]}$ is an adapted stochastic process satisfying the so-called time-consistency, in the sense that the following "tower property" holds (cf.
[5,7,32]):

$$\rho_s(\xi)=\rho_s(-\rho_t(\xi)),\qquad \xi\in L^\infty_{\mathcal{F}_T}(\Omega,\mathbb{R}),\quad 0\le s\le t\le T, \tag{1.1}$$

where $L^\infty_{\mathcal{F}_T}(\Omega,\mathbb{R})$ is the space of $\mathcal{F}_T$-measurable, essentially bounded random variables with values in $\mathbb{R}$. A monumental result in the theory of dynamic risk measures is that any coherent, or even convex, risk measure satisfying certain "dominating" conditions can be represented as the solution of the following Backward Stochastic Differential Equation (BSDE):

$$\rho_t(\xi)=-\xi+\int_t^T g(s,\rho_s(\xi),Z_s)\,ds-\int_t^T Z_s\,dB_s,\qquad 0\le t\le T, \tag{1.2}$$

where $g$ is determined completely by the properties of $\{\rho_t\}_{t\in[0,T]}$ (cf. [7,25,32]).

There has been a tremendous effort to extend univariate risk measures to the case where the risk appears in the form of a random vector $\xi=(\xi^1,\dots,\xi^d)\in L^\infty_{\mathcal{F}_T}(\Omega,\mathbb{R}^d)$ with $d\in\mathbb{N}$, typically known as systemic risk in the context of default contagion (see, e.g., [10] for another application in the context of multi-asset markets with transaction costs). For example, one can consider the contagion of (default) risks in a financial market with a large number of institutions as a network, in which each institution's future asset value can be viewed as a "random shock", to be assessed by its ability to meet its obligations to the other members of the network. As a result, it is natural to evaluate these random shocks collectively, which leads to a multivariate setting of a risk measure, often referred to as a "systemic risk measure" (cf., e.g., [1,4,9]).

One way to characterize a systemic risk measure is to consider it as a multivariate but scalar-valued function. In a static framework, one can define an aggregation function $\Lambda:\mathbb{R}^d\to\mathbb{R}$, so as to essentially reduce the problem to a one-dimensional risk measure. For example, a systemic risk measure can be defined as (cf. [4])

$$\rho^{\rm sys}(\xi)=\rho(\Lambda(\xi))=\inf\{k\in\mathbb{R}:\Lambda(\xi)+k\in\mathcal{A}\}, \tag{1.3}$$

where $\xi\in L^\infty_{\mathcal{F}_T}(\Omega,\mathbb{R}^d)$ is the wealth vector of the institutions, $\mathcal{A}$ is a certain acceptance set, and $\rho$ is a standard risk measure.
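In a finite-scenario setting, the aggregation recipe (1.3) is straightforward to compute. The sketch below is purely illustrative and uses concrete choices of our own, not the ones studied later in the thesis: the sum aggregation $\Lambda(\xi)=\sum_i\xi^i$, the acceptance set $\mathcal{A}=\{X: X\ge 0 \text{ in every scenario}\}$, and hence the worst-case risk measure $\rho(X)=\max_\omega(-X(\omega))$.

```python
# Toy systemic risk measure rho_sys(xi) = rho(Lambda(xi)) from (1.3),
# on a finite scenario space. All concrete choices below are illustrative:
#   Lambda(xi) = sum of the d components (total wealth of the network),
#   A = {X : X >= 0 in every scenario},
#   so rho(X) = inf{k : X + k in A} = max over scenarios of -X.

def aggregate(xi):
    """Lambda: R^d -> R, here the total wealth of the network."""
    return [sum(scenario) for scenario in xi]

def rho(X):
    """Scalar worst-case risk: smallest k with X + k >= 0 everywhere."""
    return max(-x for x in X)

# Two institutions, three equally likely scenarios; each row is one scenario.
xi = [(1.0, -3.0), (2.0, 1.0), (-1.0, -1.0)]
print(rho(aggregate(xi)))  # worst aggregated loss: max(2, -3, 2) = 2.0
```

Note how the aggregated position hides which institution caused the shortfall; this loss of information is exactly the deficiency that motivates the set-valued measures discussed next.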
Such a definition of a systemic risk measure is convenient but has some fundamental deficiencies, especially when one seeks a dynamic version. For example, it would be almost impossible to define the tower property (1.1), due to the mismatch of dimensionality. Furthermore, in practice one is often interested in the individual contribution of each institution, and in assessing the risk of each institution; thus a more ideal approach would be to allocate risks individually, so that the value of a systemic risk measure is defined as a set of vectors.

It is worth noting that a set-valued risk measure for a random vector $\xi\in\mathbb{R}^d$ ($d\ge 2$) can no longer be defined as the "smallest" capital requirement vector, as such a vector may not exist, for instance, with respect to the componentwise ordering of vectors. One remedy is to define it as the set $R_0(\xi)$ (say, at $t=0$) of all risk-compensating portfolio vectors of $\xi$, so that the risk measure $R_0$ is a set-valued functional (see, e.g., [8]). Similarly, one can also define a dynamic set-valued risk measure $\{R_t\}_{t\in[0,T]}$. The tower property (1.1) can be defined by

$$R_s(\xi)=\bigcup_{\eta\in R_t(\xi)}R_s(-\eta)=:R_s[-R_t(\xi)],\qquad 0\le s\le t\le T. \tag{1.4}$$

However, the availability of a BSDE-type mechanism to construct or characterize time-consistent dynamic risk measures, as in the univariate case, is a widely open problem, and addressing it is the main purpose of this paper.

1.2 Main Difficulties

The theory of set-valued stochastic differential equations (set-valued SDEs) and the related stochastic analysis is not new. Measurability and integration of set-valued functions can be traced back to as early as the 1960s. The commonly used notion of integral is provided by the celebrated work of Aumann [2], where the (Aumann) integral of a set-valued function is defined as the set of all (Lebesgue) integrals of its integrable selections. On the other hand, stochastic integrals of set-valued functions (with respect to Brownian motion or other semimartingales) are relatively new in the literature (see [16]).
The theory of set-valued SDEs, whose solutions are set-valued stochastic processes (as opposed to stochastic differential inclusions (SDIs), whose solutions are vector-valued processes), was established recently (cf., e.g., [23,26]). While backward SDIs have been around for some time (see, e.g., [17,18]), to the best of our knowledge the systematic study of set-valued BSDEs, especially in the general form

$$Y_t=\xi+\int_t^T f(s,Y_s,Z_s)\,ds-\int_t^T Z\,dB,\qquad t\in[0,T], \tag{1.5}$$

is still widely open. (Here, $\int Z\,dB$ is the generalized set-valued stochastic integral; see §3.)

We should point out that the first major difficulty for set-valued analysis, particularly for studying set-valued BSDEs, is the lack of a vector space structure. More precisely, the (Minkowski) addition for sets does not have an "inverse" (e.g., $A+(-1)A\neq\{0\}$!). Consequently, even in the simple case when $f$ is free of $Z$, the equivalence of the BSDE (1.5) and its more popular form (cf., e.g., [17,18])

$$Y_t=\mathbb{E}\Big[\xi+\int_t^T f(s,Y_s)\,ds\,\Big|\,\mathcal{F}_t\Big],\qquad t\in[0,T], \tag{1.6}$$

is actually not clear at all. To overcome this difficulty and lay a more generic foundation for the study of BSDEs of type (1.5), in this paper we shall explore the notion of the so-called Hukuhara difference between sets, originated by M. Hukuhara in 1967 [14]. We shall first establish some fundamental results on stochastic analysis using the Hukuhara difference, and then prove the well-posedness of a class of set-valued BSDEs of the form (1.5) in which $f$ is free of $Z$. It turns out that this seemingly simple additional algebraic structure causes surprisingly subtle technicalities in all aspects of the stochastic analysis; we shall therefore focus on the most basic properties and some key estimates, which will be useful for further development.

Our second goal in this paper is to address some special technical issues in set-valued stochastic analysis involving the generalized Aumann–Itô integral $\int Z\,dB$. These issues are subtle, and only occur in the truly set-valued scenarios.
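The two algebraic points just made, namely that $A+(-1)A\neq\{0\}$ while the Hukuhara difference (when it exists) does behave like an inverse, can already be seen on compact intervals in $\mathbb{R}$, where the Hukuhara difference reduces to endpoint-wise subtraction and exists exactly when the first interval is at least as wide as the second. A hedged sketch (the interval encoding and helper names are our own):

```python
def mink_add(A, B):
    """Minkowski sum of intervals A=(a1,a2), B=(b1,b2)."""
    return (A[0] + B[0], A[1] + B[1])

def neg(A):
    """The 'opposite' (-1)A of an interval."""
    return (-A[1], -A[0])

def hukuhara(A, B):
    """Hukuhara difference: the interval C with A = B + C, i.e.
    endpoint-wise subtraction; defined only if A is at least as wide as B."""
    c = (A[0] - B[0], A[1] - B[1])
    return c if c[0] <= c[1] else None

A = (0.0, 1.0)
assert mink_add(A, neg(A)) == (-1.0, 1.0)  # A + (-1)A != {0}: no inverse
assert hukuhara(A, A) == (0.0, 0.0)        # but the Hukuhara A - A = {0}
assert hukuhara((1.0, 5.0), (0.0, 1.0)) == (1.0, 4.0)  # and A = B + C holds
assert hukuhara((0.0, 1.0), (0.0, 2.0)) is None        # may fail to exist
```

The last line previews the main subtlety exploited throughout Chapter 2: the Hukuhara difference is only partially defined.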
When (1.5) is read as a standard vector-valued BSDE, the indefinite stochastic integral $\int Z_s\,dB_s=\int Z\,dB$ appears as a consequence of the classical martingale representation theorem. In the set-valued framework, using the generalized Aumann–Itô integral, a similar representation theorem was shown in [21] for a set-valued martingale with zero initial value. However, as was pointed out in the recent work [34], if a set-valued stochastic integral is both a martingale and null at zero, then it must be a singleton. This observation essentially nullifies any possible role of the martingale representation theorem in the study of set-valued BSDEs, unless some modification of the definition of the stochastic integral is adopted. We shall therefore propose a generalization of the Aumann–Itô integral that carries the information of a non-singleton initial value and preserves the martingale property. We shall also point out some other fundamental issues regarding the Aumann–Itô integral in various remarks, but in order not to disturb the main purpose of the paper, we will address these issues in our future publications.

1.3 Outline of the Dissertation

The rest of the dissertation is organized as follows. In Chapter 2, we give the necessary preliminaries on set-valued analysis, introduce the notion of the Hukuhara difference and its properties, and extend the existing results (mostly in the book [20]) to those that involve the Hukuhara difference. We then revisit set-valued stochastic analysis, again with an eye on results that involve the Hukuhara difference. Also, we establish some key estimates on set-valued conditional expectations and set-valued Lebesgue integrals. Next, we study set-valued martingales and their representations as generalized stochastic integrals. Finally, we study the well-posedness of a class of BSDEs of the form (1.5) in the case when $f$ is free of $Z$, and compare it to the BSDE of the form (1.6).
In Chapter 3, we discuss some related topics on set-valued backward stochastic differential equations. We first discuss the relationship between our set-valued BSDEs and the backward stochastic differential inclusions that have been studied for decades. Furthermore, based on this relationship, we establish a deep-learning-based method and give some results on the simulation of set-valued BSDEs.

Chapter 2: Set-Valued Backward Stochastic Differential Equations

2.1 Basics of Set-Valued Analysis

In this section, we give a brief introduction to set-valued analysis and all the necessary notation associated with it. The interested reader is referred to the books [20,22] for many of the definitions, but we shall present all the results in a self-contained way.

2.1.1 Spaces of Sets

Although most of our discussion applies to more general Hausdorff locally convex topological vector spaces, throughout this paper we let $X$ be a separable Banach space with norm $|\cdot|$. We denote by $P(X)$ the set of all nonempty subsets of $X$, by $C(X)$ the set of all closed sets in $P(X)$, and by $K(X)$ the set of all compact convex sets in $P(X)$, with respect to the norm topology on $X$. We further denote by $K_w(X)$ the set of all weakly compact convex sets in $P(X)$ with respect to the weak topology on $X$.

Algebraic structure on $K(X)$. Let $A,B\in K(X)$ and $\alpha\in\mathbb{R}$. We define

$$A+B:=\{a+b:a\in A,\ b\in B\};\qquad \alpha A:=\{\alpha a:a\in A\}. \tag{2.1}$$

We note that the operations in (2.1) are often referred to as the Minkowski addition and scalar multiplication. It can be checked that $K(X)$ is closed under these operations. It is important to note that the so-called cancellation law (cf., e.g., [29,33]) holds on $K(X)$: for $A,B,C\in K(X)$,

$$A+C=B+C\ \Longrightarrow\ A=B. \tag{2.2}$$

Clearly, multiplying $A$ by $\alpha=-1$ gives the "opposite" of $A$, as $-A:=(-1)A$, which leads to the "Minkowski difference"

$$A-B:=A+(-1)B=\{a-b:a\in A,\ b\in B\}.$$
(2.3)

But in general, $A+(-1)A\neq\{0\}$; that is, the opposite of $A$ is not the "inverse" of $A$ under the Minkowski addition (unless $A$ is a singleton). Consequently, these operations do not establish a vector space structure on $K(X)$. An early effort to address the inverse operation of the Minkowski addition, often still referred to as the Minkowski difference, is the so-called "geometric difference" or "inf-residuation" (see [11] and [12]), defined by

$$A\boxminus B:=\{x\in X\,|\,x+B\subset A\},$$

with $x+B:=\{x\}+B$. Such a difference satisfies $A\boxminus A=\{0\}$, and can be defined for all $A,B\in K(X)$. However, one only has $(A\boxminus B)+B\subset A$; the reverse inclusion usually fails.

In 1967, M. Hukuhara introduced a definition of set difference that has since been referred to as the Hukuhara difference (cf. [14]), as follows: for $A,B\in K(X)$,

$$A\ominus B=C\ \Longleftrightarrow\ A=B+C. \tag{2.4}$$

As we shall see below, this definition has many convenient properties; the only subtlety is that the Hukuhara difference does not always exist(!). The following result characterizes the existence of the Hukuhara difference and gives an explicit expression for $A\ominus B$, which will be used frequently in our future discussions. Recall that, for $A\in K(X)$ and $a\in A$, $a$ is called an extreme point of $A$ if it cannot be written as a strict convex combination of two distinct points of $A$; that is, for every $x_1,x_2\in A$ with $x_1\neq x_2$ and every $\lambda\in(0,1)$, we have $a\neq\lambda x_1+(1-\lambda)x_2$. We denote by $\mathrm{ext}(A)$ the set of all extreme points of $A$.

Proposition 2.1.1. Let $A,B\in K(X)$. The Hukuhara difference $A\ominus B$ exists if and only if for every $a\in\mathrm{ext}(A)$ there exists $x\in X$ such that $a\in x+B\subset A$. In this case, $A\ominus B$ is unique, closed, convex, and we have

$$A\ominus B=A\boxminus B=\{x\in X\,|\,x+B\subset A\}. \tag{2.5}$$

Proof: Since this is an infinite-dimensional version of [14, Proposition 4.2] combined with a simple application of the Krein–Milman theorem, we omit the proof.

The Hukuhara difference facilitates set-valued analysis greatly, even without a vector space structure on $K(X)$. We list some properties that will be used often in this paper.

Proposition 2.1.2.
Let $A,B,A_1,A_2,B_1,B_2\in K(X)$. Then the following identities hold whenever all the Hukuhara differences involved exist:

(i) $A\ominus A=\{0\}$ and $A\ominus\{0\}=A$;
(ii) $(A_1+B_1)\ominus(A_2+B_2)=(A_1\ominus A_2)+(B_1\ominus B_2)$;
(iii) $(A_1+B_1)\ominus B_2=A_1+(B_1\ominus B_2)=(A_1\ominus B_2)+B_1$;
(iv) $A_1+(B_1\ominus B_2)=(A_1\ominus B_2)+B_1$; and
(v) $A=B+(A\ominus B)$.

Proof: (i) $A\ominus A=\{0\}$ is immediate since $A=A+\{0\}$. Suppose $C:=A\ominus\{0\}$. Then by definition (2.4), $A=\{0\}+C=C$.

(ii) Denote $C:=(A_1+B_1)\ominus(A_2+B_2)$, $Y:=A_1\ominus A_2$, and $Z:=B_1\ominus B_2$. That is,

$$A_1+B_1=A_2+B_2+C;\qquad A_1=A_2+Y;\qquad B_1=B_2+Z. \tag{2.6}$$

Adding the last two identities above, we get $A_1+B_1=A_2+Y+B_2+Z=A_2+B_2+Y+Z$. Comparing this with the first identity in (2.6) and using the cancellation law (2.2), we see that $C=Y+Z=(A_1\ominus A_2)+(B_1\ominus B_2)$, proving (ii).

(iii) Let $A_2=\{0\}$ in (ii). By the second equality in (i), we obtain the first equality in (iii). The second equality in (iii) follows by switching the roles of $A_1$ and $B_1$.

(iv) Denote $C:=B_1\ominus B_2$ and $Y:=A_1\ominus B_2$. That is, $B_1=C+B_2$ and $A_1=Y+B_2$. Then $A_1+C=Y+B_2+C=Y+B_1$. This is exactly (iv).

(v) This follows immediately by taking $B_1=B_2=B$ in (iv).

Topological structure on $K(X)$. We note that since $X$ is a locally convex topological vector space under both the strong and weak topologies, both $K(X)$ and $K_w(X)$ are closed under the Minkowski addition and scalar multiplication. Moreover, the cancellation law (2.2), Proposition 2.1.1 and Proposition 2.1.2 are valid for both spaces.

For $A,B\in K(X)$, let us define $\bar h(A,B):=\sup_{a\in A}d(a,B)$, where $d(x,B):=\inf_{b\in B}|x-b|$ for $x\in X$. Then the Hausdorff distance between $A$ and $B$ is given by

$$h(A,B):=\bar h(A,B)\vee\bar h(B,A)=\inf\{\varepsilon>0:A\subset V_\varepsilon(B),\ B\subset V_\varepsilon(A)\}, \tag{2.7}$$

where $V_\varepsilon(C):=\{x\in X:d(x,C)\le\varepsilon\}$ for $C\in K(X)$ and $\varepsilon>0$ (cf. [20, Corollary 1.1.3]). Moreover, $(K(X),h)$ is a Polish space (cf. [6, Theorem II.14]). For $A\in K(X)$, we define

$$\|A\|:=h(A,\{0\})=\sup\{|a|:a\in A\}. \tag{2.8}$$

We have the following easy results.

Proposition 2.1.3. (i) The mapping $\|\cdot\|:K(X)\to$
$\mathbb{R}_+$ satisfies the properties of a norm. (ii) If $A,B\in K(X)$ and $A\ominus B$ exists, then $h(A,B)=\|A\ominus B\|$.

Proof: (i) Clearly, $\|A\|=0$ implies $A=\{0\}$, and for any $\lambda\in\mathbb{R}$ we have $\|\lambda A\|=h(\lambda A,\{0\})=\sup\{|\lambda y|:y\in A\}=|\lambda|\sup\{|y|:y\in A\}=|\lambda|\,\|A\|$. Finally, the "triangle inequality", in the sense that $\|A+B\|\le\|A\|+\|B\|$, is trivial by the definition of $\|\cdot\|$.

(ii) Since $A,B\in K(X)$, applying the translation invariance property of the Hausdorff distance (cf. [15, Proposition 1.3.2]), we see that

$$\|A\ominus B\|=h(A\ominus B,\{0\})=h((A\ominus B)+B,\{0\}+B)=h(A,B), \tag{2.9}$$

whenever $A\ominus B$ exists.

Remark 2.1.4. It should be noted that the fact that $\|\cdot\|$ satisfies the properties of a norm does not imply that $(K(X),\|\cdot\|)$ is a normed space, since $K(X)$ is not a vector space. It is particularly worth noting that, although the Hausdorff metric is symmetric, the identity (2.9) does not render $(A,B)\mapsto\|A\ominus B\|$ a metric on $K(\mathbb{R}^d)$ in the usual sense, since the existence of $A\ominus B$ by no means implies that of $B\ominus A$. In fact, it can be checked that both $A\ominus B$ and $B\ominus A$ exist if and only if $A$ is a translation of $B$ (i.e., $A=x+B$ for some $x\in X$). Nevertheless, the relation in Proposition 2.1.3(ii) is useful and sufficient for our purposes.

2.1.2 Set-Valued Measurable Functions and Decomposable Sets

We now consider set-valued functions. Let $(E,\mathcal{E},\mu)$ be a finite measure space. If $E$ is a topological space, we take $\mathcal{E}=\mathcal{B}(E)$, the Borel $\sigma$-algebra on $E$. We shall make use of the following definition of a set-valued "measurable" function.

Definition 2.1.5 ([28, Definition 1.3.1]). A set-valued function $F:E\to C(X)$ is said to be (strongly) measurable if $\{e\in E:F(e)\cap B\neq\emptyset\}\in\mathcal{E}$ for every closed set $B\subset X$.

The following selection/representation theorems for set-valued functions are well known and will be useful in later sections. We shall denote by $\mathrm{cl}\{A\}$ the closure of a set $A$.

Theorem 2.1.6. Let $F:E\to C(X)$ be a set-valued function.

(i) (Kuratowski and Ryll-Nardzewski, [22, Theorem 2.2.2]) If $F$ is measurable, then $F$ admits a measurable selection, i.e., there exists an $\mathcal{E}/\mathcal{B}(X)$-measurable function $f:E\to$
$X$ such that $f(e)\in F(e)$ for each $e\in E$.

(ii) (Castaing, [22, Theorem 2.2.3]) $F$ is measurable if and only if there exists a sequence $\{f_n\}_{n=1}^\infty$ of measurable selections of $F$ such that $F(e)=\mathrm{cl}\{f_n(e):n\in\mathbb{N}\}$, $e\in E$.

Let us denote by $L^0(E,X)=L^0_{\mathcal{E}}(E,X)$ the set of all measurable functions $f:E\to X$, identified up to $\mu$-almost everywhere (a.e.) equality. For $p\in[1,+\infty)$, let $L^p(E,X)=L^p_{\mathcal{E}}(E,X)$ be the set of all $f\in L^0(E,X)$ such that $\int_E|f(e)|^p\mu(de)<\infty$. Together with the norm $f\mapsto(\int_E|f(e)|^p\mu(de))^{1/p}$, the set $L^p(E,X)$ is a Banach space. For $p\in(1,+\infty)$ and $X=\mathbb{R}^d$, $L^p(E,X)$ is also reflexive.

We denote by $\mathcal{L}^0(E,C(X))=\mathcal{L}^0_{\mathcal{E}}(E,C(X))$ the space of all measurable set-valued mappings $F:E\to C(X)$, identified up to $\mu$-a.e. equality. For $F\in\mathcal{L}^0(E,C(X))$, we consider the set

$$S(F):=S_{\mathcal{E}}(F):=\{f\in L^0(E,X):f(e)\in F(e)\ \mu\text{-a.e. }e\in E\} \tag{2.10}$$

of its measurable selections, which is nonempty by Theorem 2.1.6(i). Moreover, by Theorem 2.1.6(ii), two measurable set-valued functions $F$ and $G$ are identical in $\mathcal{L}^0(E,C(X))$ if and only if $S(F)=S(G)$. An interesting and crucial question in set-valued analysis is whether a given set of measurable functions in $L^0(E,\mathbb{R}^d)$ can be seen as the set of measurable selections of a measurable set-valued function. It turns out that this is a highly non-trivial question, for which the following notion is fundamental.

Definition 2.1.7. A set $V\subset L^0(E,X)$ is said to be decomposable with respect to $\mathcal{E}$ if $1_Df_1+1_{D^c}f_2\in V$ for every $f_1,f_2\in V$ and $D\in\mathcal{E}$.

Given a set $V\subset L^p(E,X)$ with $p\in[1,+\infty)$, we define the decomposable hull of $V$, denoted by $\mathrm{dec}(V)=\mathrm{dec}_{\mathcal{E}}(V)$, to be the smallest decomposable subset of $L^p(E,X)$ containing $V$. It can be checked that $\mathrm{dec}(V)$ consists precisely of functions of the form $f=\sum_{i=1}^m 1_{D_i}f_i$, where $\{D_1,\dots,D_m\}$ is an $\mathcal{E}$-measurable partition of $E$ with $m\in\mathbb{N}$ and $f_1,\dots,f_m\in V$. We shall often consider $\overline{\mathrm{dec}}(V)=\overline{\mathrm{dec}}_{\mathcal{E}}(V)$,
the closure of $\mathrm{dec}(V)$ in $L^p(E,X)$. It is readily seen that $\overline{\mathrm{dec}}(V)$ is the smallest decomposable and closed subset of $L^p(E,X)$ containing $V$.

For $p\in[1,+\infty)$ and $F\in\mathcal{L}^0(E,C(X))$, we define $S^p(F):=S^p_{\mathcal{E}}(F):=S(F)\cap L^p(E,X)$. It is easy to check that $S^p(F)$ is a closed decomposable subset of $L^p(E,X)$. But it is possible that $S^p(F)=\emptyset$. We thus consider the set

$$\mathcal{A}^p(E,C(X)):=\mathcal{A}^p_{\mathcal{E}}(E,C(X)):=\{F\in\mathcal{L}^0(E,C(X)):S^p(F)\neq\emptyset\}, \tag{2.11}$$

and say that $F$ is $p$-integrable if $F\in\mathcal{A}^p(E,C(X))$. By [20, Corollary 2.3.1], for $F,G\in\mathcal{A}^p(E,C(X))$, $F$ and $G$ are identical if and only if $S^p(F)=S^p(G)$. Moreover, we have the following important theorem.

Theorem 2.1.8 ([20, Theorem 2.3.2]). Let $V$ be a nonempty closed subset of $L^p(E,X)$, $p\ge 1$. Then there exists $F\in\mathcal{A}^p(E,C(X))$ such that $V=S^p(F)$ if and only if $V$ is decomposable.

2.1.3 Set-Valued Integrals

We shall now assume that $X=\mathbb{R}^d$, and define the Aumann integral of a set-valued function $F:E\to C(\mathbb{R}^d)$ through its measurable selections.

As a preparation, for a function $f\in L^1(E,\mathbb{R}^d)$ we define $I(f):=\int_E f(e)\mu(de)$ and, for a set $M\subset L^1(E,\mathbb{R}^d)$, we define $I[M]:=\{I(f):f\in M\}$. Then one can check (see [20, Lemma II.3.9]) that $I[M]$ is a convex subset of $\mathbb{R}^d$ whenever $M$ is decomposable. Now, for a set-valued function $F\in\mathcal{A}^1(E,C(\mathbb{R}^d))$, we define

$$\int_E F(e)\mu(de):=\mathrm{cl}(I[S^1(F)])=\mathrm{cl}\Big\{\int_E f(e)\mu(de):f\in S^1(F)\Big\}. \tag{2.12}$$

Clearly, the "integral" $\int_E F(e)\mu(de)$ is a nonempty closed convex set; it is called the (closed version of the) Aumann integral of $F$.

Let $p\in[1,+\infty)$. For a given $F\in\mathcal{L}^0(E,C(\mathbb{R}^d))$, we say that it is $p$-integrably bounded if there exists $\ell\in L^p(E,\mathbb{R}_+)$ such that $\|F(e)\|=h(F(e),\{0\})\le\ell(e)$ for a.e. $e\in E$. Let $\mathcal{L}^p(E,C(\mathbb{R}^d))=\mathcal{L}^p_{\mathcal{E}}(E,C(\mathbb{R}^d))$ be the set of all $p$-integrably bounded set-valued functions in $\mathcal{L}^0(E,C(\mathbb{R}^d))$. It is readily seen that $\mathcal{L}^p(E,C(\mathbb{R}^d))\subset\mathcal{A}^p(E,C(\mathbb{R}^d))$. Moreover, by [20, Theorem 2.4.1(ii)], a set-valued function $F\in\mathcal{A}^p(E,C(\mathbb{R}^d))$ is $p$-integrably bounded if and only if $S^p(F)$ is a bounded subset of $L^p(E,\mathbb{R}^d)$.
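Decomposability (Definition 2.1.7) and the hull $\mathrm{dec}(V)$ are easy to visualize when $E$ is a three-point set with counting measure, where a measurable function is just a triple of values: pasting $f_1$ on $D$ and $f_2$ on $D^c$ over all choices of $D$ produces every "switched" combination, which is exactly the selection set of the set-valued map $e\mapsto\{f_1(e),f_2(e)\}$ promised by Theorem 2.1.8. A toy sketch under these assumptions (the tuple encoding is our own):

```python
from itertools import product

# E = {0, 1, 2} with counting measure; a function f: E -> R is a 3-tuple.
f1 = (1.0, 1.0, 1.0)
f2 = (2.0, 3.0, 4.0)

def decomposable_hull(V, n):
    """All pastings sum_i 1_{D_i} f_i over measurable partitions of E --
    equivalently, an independent choice of some f in V at each point of E."""
    return {tuple(c) for c in product(*[[f[e] for f in V] for e in range(n)])}

hull = decomposable_hull([f1, f2], 3)
assert len(hull) == 8            # 2 independent choices at each of 3 points
assert (1.0, 3.0, 1.0) in hull   # 1_D f1 + 1_{D^c} f2 with D = {0, 2}
# The hull coincides with the selections of F(e) = {f1(e), f2(e)}:
assert hull == {tuple(c) for c in product(*[[f1[e], f2[e]] for e in range(3)])}
```

The point of the exercise is the direction Theorem 2.1.8 emphasizes: an arbitrary set of functions (here, the two-element set $\{f_1,f_2\}$) is generally not a selection set, but its decomposable hull is.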
In this case, it is even true that $S^p(F)=S^{p'}(F)=S(F)$ for every $p'\in[1,p]$ (cf. [28, Proposition 2.1.4]). In what follows, we shall consider mostly the cases $p=1$ and $p=2$, and say that $F$ is integrably bounded if $F\in\mathcal{L}^1(E,C(\mathbb{R}^d))$, and square-integrably bounded if $F\in\mathcal{L}^2(E,C(\mathbb{R}^d))$. Clearly, $\mathcal{L}^2(E,C(\mathbb{R}^d))\subset\mathcal{L}^1(E,C(\mathbb{R}^d))$.

We have the following result on integrably bounded set-valued functions. For a subset $A$ of a vector space, $\mathrm{co}(A)$ denotes the convex hull of $A$.

Theorem 2.1.9 ([20, Theorem 2.3.4]). Let $F\in\mathcal{L}^1(E,C(\mathbb{R}^d))$. Then

$$\int_E F(e)\mu(de)=\int_E \mathrm{co}(F(e))\mu(de).$$

In view of Theorem 2.1.9, in the integrably bounded case it is enough to consider the Aumann integrals of convex-valued functions. On the other hand, if $F\in\mathcal{L}^p(E,C(\mathbb{R}^d))$, then it is immediate that $F(e)$ is a bounded (hence compact) set for $\mu$-a.e. $e\in E$. In what follows, we mostly restrict our attention to the case $F:E\to K(\mathbb{R}^d)$, and define the spaces $\mathcal{A}^p(E,K(\mathbb{R}^d))$, $\mathcal{L}^p(E,K(\mathbb{R}^d))$, and so on, in the obvious manner.

Let $F\in\mathcal{L}^p(E,K(\mathbb{R}^d))$, $p\ge 1$. By [28, Theorem 2.1.18], we have $S^p(F)=S(F)\in K_w(L^p(E,\mathbb{R}^d))$. Moreover, since $I$ is a (weakly) continuous linear mapping on $L^p(E,\mathbb{R}^d)$, $I[S^p(F)]=I[S(F)]$ is a nonempty compact convex set, and one can remove the closure in (2.12); that is,

$$\int_E F(e)\mu(de)=I[S(F)]\in K(\mathbb{R}^d).$$

The following lemma will be helpful in some later calculations.

Lemma 2.1.10. Let $F_1,F_2\in\mathcal{L}^p(E,K(\mathbb{R}^d))$, $p\ge 1$. Then $F_1+F_2\in\mathcal{L}^p(E,K(\mathbb{R}^d))$ and

$$S(F_1+F_2)=S(F_1)+S(F_2). \tag{2.13}$$

Furthermore, if $F_1\ominus F_2$ exists, then $F_1\ominus F_2\in\mathcal{L}^p(E,K(\mathbb{R}^d))$. In this case, we have

$$S(F_1\ominus F_2)=S(F_1)\ominus S(F_2). \tag{2.14}$$

Proof: The relation (2.13) is known (see, e.g., [20, Lemma 2.4.1]). In particular, $S^p(F_1+F_2)\neq\emptyset$, so that $F_1+F_2\in\mathcal{A}^p(E,K(\mathbb{R}^d))$. Moreover, since $S^p(F_1+F_2)$ is clearly bounded, we have $F_1+F_2\in\mathcal{L}^p(E,K(\mathbb{R}^d))$ and $S^p(F_1+F_2)=S(F_1+F_2)$.

To see the properties of $F_1\ominus F_2$, we assume that it exists. We first claim that $F_1\ominus F_2$ is measurable. Indeed, for $e\in E$ and $x\in\mathbb{R}^d$, it is easy to check that (cf.
[11, Proposition 4.16]) $x\in F_1(e)\ominus F_2(e)$ holds if and only if there exists a countable dense set $D\subset\mathbb{R}^d$ (independent of the choice of $e$) such that

$$\langle w,x\rangle\le\sup_{x_1\in F_1(e)}\langle w,x_1\rangle-\sup_{x_2\in F_2(e)}\langle w,x_2\rangle,\qquad w\in D. \tag{2.15}$$

In other words, we can write

$$F_1(e)\ominus F_2(e)=\bigcap_{w\in D}\Big\{x\in\mathbb{R}^d:\langle w,x\rangle\le\sup_{x_1\in F_1(e)}\langle w,x_1\rangle-\sup_{x_2\in F_2(e)}\langle w,x_2\rangle\Big\}. \tag{2.16}$$

Furthermore, for each $w\in D$, the mappings $e\mapsto\sup_{x_1\in F_1(e)}\langle w,x_1\rangle$ and $e\mapsto\sup_{x_2\in F_2(e)}\langle w,x_2\rangle$ are measurable real-valued functions by [30, Example 14.51]; thus every halfspace-valued mapping inside the intersection in (2.16) is measurable, and so is the countable intersection $F_1\ominus F_2$, thanks to [30, Proposition 14.11(a)].

Next, note that $\|F_1(e)\ominus F_2(e)\|\le\|F_1(e)\|+\|F_2(e)\|$ for every $e\in E$. Since $F_1,F_2$ are $p$-integrably bounded, we see that $\|F_1(\cdot)\ominus F_2(\cdot)\|\in L^p(E,\mathbb{R})$ and $F_3:=F_1\ominus F_2$ is $p$-integrably bounded. Finally, since $F_2,F_3\in\mathcal{L}^p(E,K(\mathbb{R}^d))$ and $F_1=F_2+F_3$, (2.13) yields $S(F_1)=S(F_2)+S(F_3)$, which then implies that $S(F_1\ominus F_2)=S(F_3)=S(F_1)\ominus S(F_2)$.

2.2 Set-Valued Stochastic Analysis Revisited

In this section, we review some basics of set-valued stochastic analysis, and establish some fine results that will be useful for our discussion but are not covered by the existing literature. Throughout the rest of the paper, we consider a given complete, filtered probability space $(\Omega,\mathcal{F},\mathbb{P},\mathbb{F}=\{\mathcal{F}_t\}_{t\in[0,T]})$, on which is defined a standard $m$-dimensional Brownian motion $B=\{B_t\}_{t\in[0,T]}$, where $T>0$ is a given time horizon. We denote by $L^p_{\mathbb{F}}([0,T]\times\Omega,\mathbb{R}^d)$ the space of all $\mathbb{F}$-progressively measurable $d$-dimensional processes $\{\varphi_t\}_{t\in[0,T]}$ with $\mathbb{E}[\int_0^T|\varphi_t|^p\,dt]<+\infty$. The space $L^p_{\mathbb{F}}([0,T]\times\Omega,\mathbb{R}^{d\times m})$ of matrix-valued processes can be defined similarly.

2.2.1 Set-Valued Conditional Expectations

A set-valued random variable $X:\Omega\to C(\mathbb{R}^d)$ is an $\mathcal{F}$-measurable set-valued function. If $X\in\mathcal{A}^1(\Omega,C(\mathbb{R}^d))$, then we define its expectation, denoted by $\mathbb{E}[X]$ as usual, by its Aumann integral $\int_\Omega X(\omega)\mathbb{P}(d\omega)$.
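For an interval-valued map $F(e)=[g(e),h(e)]$ on $E=[0,1]$ with Lebesgue measure, the integrable selections are exactly the measurable functions squeezed between $g$ and $h$, so the Aumann integral (2.12) reduces to the interval $[\int g,\int h]$. The sketch below checks this numerically under that assumption (the discretization is our own; the midpoint rule is exact for the affine endpoint functions used here):

```python
import random

# F(e) = [e, e + 1] on E = [0, 1]; the Aumann integral should be [1/2, 3/2].
N = 1000
de = 1.0 / N
grid = [(i + 0.5) * de for i in range(N)]        # midpoints of a uniform grid

lower = sum(e for e in grid) * de                # integral of g(e) = e
upper = sum(e + 1.0 for e in grid) * de          # integral of h(e) = e + 1
assert abs(lower - 0.5) < 1e-9 and abs(upper - 1.5) < 1e-9

# The integral of any measurable selection g <= f <= h lands inside:
sel = sum(e + random.random() for e in grid) * de
assert lower - 1e-9 <= sel <= upper + 1e-9
```

The convexity of the resulting set, automatic here, is the finite-dimensional shadow of the Lyapunov-type convexity behind [20, Lemma II.3.9].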
Given $p\ge 1$, if $X\in\mathcal{A}^p(\Omega,C(\mathbb{R}^d))$, then $S^p(X)$ is a closed decomposable subset of $L^p(\Omega,\mathbb{R}^d)$ and $S^p(\mathrm{co}(X))=\mathrm{co}(S^p(X))$ (see [20, Lemma 2.3.3]). Further, $X$ is $p$-integrably bounded if and only if $S^p(X)$ is a bounded set in $L^p(\Omega,\mathbb{R}^d)$, that is, $\mathbb{E}[\|X\|^p]=\int_\Omega\sup\{|x|^p:x\in X(\omega)\}\mathbb{P}(d\omega)=\int_\Omega h^p(X(\omega),\{0\})\mathbb{P}(d\omega)<\infty$. In particular, if $X\in\mathcal{L}^p(\Omega,K(\mathbb{R}^d))$, then $S^p(X)=S(X)$ is a weakly compact convex subset of $L^p(\Omega,\mathbb{R}^d)$.

Let $\mathcal{G}$ be a sub-$\sigma$-field of $\mathcal{F}$. We denote by $L^p_{\mathcal{G}}(\Omega,\mathbb{R}^d)$, $\mathcal{A}^p_{\mathcal{G}}(\Omega,C(\mathbb{R}^d))$, $\mathcal{L}^p_{\mathcal{G}}(\Omega,K(\mathbb{R}^d))$, and $S^p_{\mathcal{G}}(X)$ the analogues of the spaces in Sections 2.1.2 and 2.1.3 on the probability space $(\Omega,\mathcal{G},\mathbb{P})$. Further, for $X\in\mathcal{A}^1_{\mathcal{F}}(\Omega,C(\mathbb{R}^d))$, the conditional expectation of $X$ given $\mathcal{G}$ is defined as the (almost surely) unique set-valued random variable $\mathbb{E}[X|\mathcal{G}]\in\mathcal{A}^1_{\mathcal{G}}(\Omega,C(\mathbb{R}^d))$ that satisfies

$$S^1_{\mathcal{G}}(\mathbb{E}[X|\mathcal{G}])=\mathrm{cl}\{\mathbb{E}[f|\mathcal{G}]:f\in S^1(X)\}, \tag{2.1}$$

where the closure is taken in $L^1_{\mathcal{G}}(\Omega,\mathbb{R}^d)$. The existence of $\mathbb{E}[X|\mathcal{G}]$ follows from Theorem 2.1.8, since the set on the right-hand side of (2.1) is decomposable. Moreover, for $p\ge 1$, if $X\in\mathcal{L}^p_{\mathcal{F}}(\Omega,K(\mathbb{R}^d))$, then it can be shown that the closure in (2.1) is not needed and $\mathbb{E}[X|\mathcal{G}]\in\mathcal{L}^p_{\mathcal{G}}(\Omega,K(\mathbb{R}^d))$. In this case, $\mathbb{E}[X|\mathcal{G}]$ satisfies the usual identity

$$\int_D\mathbb{E}[X|\mathcal{G}](\omega)\mathbb{P}(d\omega)=\int_D X(\omega)\mathbb{P}(d\omega),\qquad D\in\mathcal{G}. \tag{2.2}$$

Moreover, it can easily be checked that $\mathbb{E}[\cdot|\mathcal{G}]$ satisfies all the natural properties of a conditional expectation, except that "linearity" should be interpreted in terms of the Minkowski addition and scalar multiplication. Furthermore, we note that the conditional expectation of a set $V\subset L^1_{\mathcal{F}}(\Omega,\mathbb{R}^d)$ of random variables can also be defined in a generalized sense, even if $V$ is not the set of selections of a set-valued random variable. To be more precise, if $V\subset L^1_{\mathcal{F}}(\Omega,\mathbb{R}^d)$ is a nonempty closed decomposable set, then there exists a unique $\mathbb{E}[V|\mathcal{G}]\in\mathcal{A}^1_{\mathcal{G}}(\Omega,C(\mathbb{R}^d))$ (by a slight abuse of notation) such that

$$S^1_{\mathcal{G}}(\mathbb{E}[V|\mathcal{G}])=\mathrm{cl}\{\mathbb{E}[f|\mathcal{G}]:f\in V\}. \tag{2.3}$$

The following is a seemingly obvious fact regarding set-valued conditional expectations.

Corollary 2.2.1.
Let X 1 ,X 2 2 L p (⌦ ,K (R d )) with p 2 [1,+1). LetG⇢F be a sub- -field. Suppose that X 1 X 2 exists. Then, E[X 1 X 2 |G] exists in L p G (⌦ ,K (R d )) and it holds that E[X 1 X 2 |G]=E[X 1 |G] E[X 2 |G]. (2.4) Proof. By Lemma 2.1.10, X 1 X 2 2 L p (⌦ ,K (R d )) so that E[X 1 X 2 |G] existsinL p G (⌦ ,K (R d )). Bythedefinitionofconditionalexpectationandrepeated applications of Lemma 2.1.10, we have S G (E[X 1 X 2 |G]+E[X 2 |G]) = S G (E[X 1 X 2 |G])+S G (E[X 2 |G]) = {E[f 1 |G]:f 1 2 S(X 1 X 2 )}+{E[f 2 |G]:f 2 2 S(X 2 )} = {E[f|G]:f2 S(X 1 X 2 )+S(X 2 )} = {E[f|G]:f2 S((X 1 X 2 )+X 2 )} =S G (E[X 1 |G]). This is equivalent to havingE[X 1 |G]=E[X 1 X 2 |G]+E[X 2 |G], whence (2.4). 2.2.2 Set-Valued Stochastic Processes Aset-valuedstochasticprocess= { t } t2 [0,T] is a family of set-valued random variables taking values inC(R d ). We call measurable if it is B([0,T])⌦F - measurable as a single set-valued function on [0,T]⇥ ⌦. The notions such 24 as “adaptedness” or “progressive measurability” can be defined accordingly in the obvious ways. We denote L 0 F ([0,T]⇥ ⌦ ,C(R d )) to be the space of all set-valued, F-progressively measurable processes taking values in C(R d ). For 2 L 0 F ([0,T]⇥ ⌦ ,C(R d )), we denote S F () to be the set of all F-progressively measurable selectors of, which is nonempty by Theorem 2.1.6. For p2 [1,+1), we define S p F () := S F () \ L p F ([0,T]⇥ ⌦ ,R d )anddenoteL p F ([0,T]⇥ ⌦ ,C(R d )) to be the set of all F-progressively measurable, C(R d )-valued processes with E[ R T 0 k t k p dt] < +1 (i.e., p-integrably bounded). The notations L p F ([0,T]⇥ ⌦ ,K (R d )),L p F ([0,T]⇥ ⌦ ,K (R d⇥ m )) for set-valued processes with compact con- vex values are defined similarly for p=0and p 1. It is worth pointing out that thespaceL 2 F ([0,T]⇥ ⌦ ,K (R d ))isnota Hilbertspace, butonlyacompletemetric space, with the metric d H ( , ) := (E[ R T 0 h 2 ( t , t )dt]) 1/2 . 
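Corollary 2.2.1 can be sanity-checked numerically in the simplest setting: interval-valued random variables on a finite probability space, where (for intervals $[l,u]$) the selection-based conditional expectation reduces to conditioning the endpoints, $\mathbb{E}[X|\mathcal{G}] = [\mathbb{E}[l|\mathcal{G}], \mathbb{E}[u|\mathcal{G}]]$. The following sketch (all names illustrative, uniform measure on four points) verifies the identity (2.4):

```python
# Sketch: checking E[X1 (-) X2 | G] = E[X1|G] (-) E[X2|G] (Hukuhara
# differences) for interval-valued random variables on a 4-point space.
# For intervals [l, u] the selections are generated by the endpoint
# random variables, so E[X|G] = [E[l|G], E[u|G]] is the working model.

import numpy as np

rng = np.random.default_rng(0)
n = 4
blocks = [[0, 1], [2, 3]]                # the partition generating G
l1 = rng.uniform(-1, 0, n); u1 = l1 + rng.uniform(2, 3, n)  # wide intervals
l2 = rng.uniform(-1, 0, n); u2 = l2 + rng.uniform(0, 1, n)  # narrow intervals

def cond_exp(x):
    """E[x|G] under the uniform measure: average over each block."""
    out = np.empty_like(x)
    for b in blocks:
        out[b] = x[b].mean()
    return out

# Hukuhara difference pointwise: [l1-l2, u1-u2], which exists here
# because each X1(w) is wider than X2(w).
dl, du = l1 - l2, u1 - u2
assert np.all(dl <= du)

lhs = (cond_exp(dl), cond_exp(du))       # E[X1 (-) X2 | G]
rhs = (cond_exp(l1) - cond_exp(l2), cond_exp(u1) - cond_exp(u2))
assert np.allclose(lhs[0], rhs[0]) and np.allclose(lhs[1], rhs[1])
```

For intervals the identity boils down to the linearity of the scalar conditional expectation applied to each endpoint, which is exactly the "Minkowski linearity" mentioned above.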
2.2.3 Set-Valued Stochastic Integrals In this section, we assume that F = F B ,thenaturalfiltrationgeneratedby B, augmented by all theP-null sets of F so that it satisfies the usual hypotheses. Let us consider the two linear mappings J :L 2 F ([0,T]⇥ ⌦ ,R d )! L 2 F T (⌦ ,R d ), and J :L 2 F ([0,T]⇥ ⌦ ,R d⇥ m )! L 2 F T (⌦ ,R d )definedby J( ):= Z T 0 t dt, J( ):= Z T 0 t dB t , (2.5) 25 for 2 L 2 F ([0,T]⇥ ⌦ ,R d ), 2 L 2 F ([0,T]⇥ ⌦ ,R d⇥ m ), respectively. For K ⇢ L 2 F ([0,T]⇥ ⌦ ,R d )(resp. K 0 ⇢ L 2 F ([0,T]⇥ ⌦ ,R d⇥ m )), the set J[K](resp. J[K 0 ]) is defined in an obvious way. Let 2 L 0 F ([0,T]⇥ ⌦ ,C(R d )) and 2 L 0 F ([0,T]⇥ ⌦ ,C(R d⇥ m )) such that S 2 F () 6= ; , S 2 F ( ) 6= ;.Then,onecanshowthatthereexistuniqueset-valued random variables R T 0 t dt2 A 2 F T (⌦ ,C(R d )) and R T 0 t dB t 2 A 2 F T (⌦ ,C(R d )) such that S 2 F T ✓Z T 0 t dt ◆ = dec F T (J[S 2 F ()]),S 2 F T ✓Z T 0 t dB t ◆ = dec F T (J[S 2 F ( )]). (2.6) We call R T 0 t dt and R T 0 t dB t set-valued stochastic integrals. As usual, for t2 [0,T],wedefinetheindefinitestochasticintegralsas R t 0 s ds := R T 0 1 (0,t] (s) s ds and R t 0 s dB s := R T 0 1 (0,t] (s) s dB s .Equivalently,onecandefinethemviathe relations S 2 Ft ( R t 0 s ds)= dec Ft (J 0,t [S 2 F ()]), S 2 Ft ( R t 0 s dB s )= dec Ft (J 0,t [S 2 F ( )]), where J 0,t ():= R t 0 s ds, J 0,t ( ):= R t 0 s dB s .Theintegrals R T t s ds and R T t s dB s ,andthemappings J t,T , J t,T can be defined similarly for t2 [0,T]. Remark2.2.2. Theset-valuedItˆ ostochasticintegralshavemanyinterestingprop- erties, we refer the interested reader to the books [20,22] for the exhaustive explo- rations. Here we mention a few that will be useful for our discussion. (i) The definition (2.6) implies that both R T 0 t dt and R T 0 t dB t are F T -measurable set-valued random variables. However, neither of the sets 26 J[S 2 F ()] ,J[S 2 F ( )] ⇢ L 2 F T (⌦ ,R d )isnecessarilydecomposable(see[20,p.105]for counterexamples). 
Thus, by virtue of Theorem 2.1.8, they cannot be seen as the selectors of any F T -measurable set-valued random variables. (ii) One can actually show that {E[x]: x2J [S 2 F ( )]} ={0},and J[S 2 F ( )] is decomposable if and only if it is a singleton(!). (iii) By [20, Theorem 3.1.1], it is shown that dec F T (J[S 2 F ( )]) =L 2 F T (⌦ ,R d )if and only if dec(J[S 2 F ( )])6=; . (iv) If and are convex-valued, then so are R T 0 t dt and R T 0 t dB t .If 2 L 2 F ([0,T]⇥ ⌦ ,K (R d )), thenitisknownthat R T 0 t dt2 L 2 F T (⌦ ,K (R d )), that is, the stochastic time integral of a square-integrably bounded process is a square- integrably bounded set-valued random variable (see [19, Theorem 3.2]). However, the Itˆ o integral R T 0 t dB t fails to be square-integrably bounded in general even if 2 L 2 F ([0,T]⇥ ⌦ ,K (R d⇥ m )) (see [27]). (v) The set-valued stochastic integrals R t 0 s ds, R t 0 s dB s are defined, almost surely, for each t 2 [0,T], and they are (F-)adapted,intheusualsense. Fur- thermore, when 2 L 2 F ([0,T]⇥ ⌦ ,K (R d )), by [24, Theorem 2.4], the process { R t 0 s ds} t2 [0,T] has a continuous (with respect to h), whence progressively mea- surable, modification. Wecandefinetheindefiniteintegral R · 0 s dsbythisprogres- sively measurable set-valued process. However, the continuity of the Itˆ o integral { R t 0 s dB s } t2 [0,T] is much more involved, and so is the progressive measurability issue (see [22, Section 5.5] for a special case). 27 The following lemma shows that the additivity holds for both integrals, which also allows to calculate the integrals of the Hukuhara di↵erence of two processes. Lemma 2.2.3. Suppose that P is a nonatomic probability measure. Let 1 , 2 2 L 2 F ([0,T]⇥ ⌦ ,K (R d )) and 1 , 2 2 L 2 F ([0,T]⇥ ⌦ ,K (R d⇥ m )). Then, for every t2 [0,T], Z t 0 ( 1 s + 2 s )ds = Z t 0 1 s ds+ Z t 0 2 s ds, Z t 0 ( 1 s + 2 s )dB s = Z t 0 1 s dB s + Z t 0 2 s dB s (2.7) hold almost surely. 
If 1 2 and 1 2 exist (dt⇥ dP-a.e.), then we have 1 2 2 L 2 F ([0,T]⇥ ⌦ ,K (R d )), 1 2 2 L 2 F ([0,T]⇥ ⌦ ,K (R d⇥ m )) and, for every t2 [0,T], Z t 0 ( 1 s 2 s )ds = Z t 0 1 s ds Z t 0 2 s ds, Z t 0 ( 1 s 2 s )dB s = Z t 0 1 s dB s Z t 0 2 s dB s (2.8) hold almost surely. Proof: The relations in (2.7) are given in [19, Theorem 3.1-3.2]. Suppose that 1 2 exists. It is clear that 1 2 takes values inK (R d ). Sincek 1 t 2 t k k 2 t k+k 2 t k, E Z T 0 1 t 2 t 2 dt 2E Z T 0 1 t 2 dt +2E Z T 0 2 t 2 dt < +1. 28 This and Lemma 2.1.10 imply that 1 2 2 L 2 F ([0,T]⇥ ⌦ ,K (R d )). We have 1 = 2 +( 1 2 ). Let t 2 [0,T]. By the first relation in (2.7), we obtain R t 0 1 s ds = R t 0 2 s ds + R t 0 ( 1 s 2 s )ds. By the definition of Hukuhara di↵erence, the first relation in (2.8) follows. The proofs of the claims related to 1 2 are similar, hence omitted. Corollary 2.2.4. Suppose that P is a nonatomic probability measure. Let 2 L 2 F ([0,T]⇥ ⌦ ,K (R d )), 2 L 2 F ([0,T]⇥ ⌦ ,K (R d⇥ m )). For each t2 [0,T], Z T 0 s ds = Z t 0 s ds+ Z T t s ds, Z T 0 s dB s = Z t 0 s dB s + Z T t s dB s and Z T t s ds = Z T 0 s ds Z t 0 s ds, Z T t s dB s = Z T 0 s dB s Z t 0 s dB s . hold almost surely. Proof:ThisisimmediatefromLemma2.2.3andthedefinitionsoftheintegrals since 1 (0,T] (s)⇠ s =1 (0,t] (s)⇠ s +1 (t,T] (s)⇠ s , ⇠ 2{ , },forall s2 [0,T]. The notion of stochastic integral can be extended to the case where the inte- grand is only a set of processes, instead of a set-valued process. We briefly describe the idea (cf. [21]). LetZ2 P(L 2 F ([0,T]⇥ ⌦ ,R d⇥ m )) be a nonempty set and con- sider the sets J t [Z]={ R t 0 z s dB s :z2Z} , t2 [0,T]. Due to lack of decomposabil- 29 ity,J t [Z]isnotequaltothesetofsquare-integrableselectionsofaset-valuedran- domvariable,ingeneral. Butsimilartothestochasticintegraldiscussedabove,one can show that, for eacht2 [0,T], there exists a unique R t 0 Z dB2 A 2 Ft (⌦ ,C(R d )) such that S 2 Ft ✓Z t 0 Z dB ◆ = dec Ft (J t [Z]). 
(2.9) We call R t 0 Z dB the generalized (indefinite) Aumann-Itˆ o stochastic integral (cf. [21]). If Z is convex, then R t 0 Z dB is convex-valued (see [21, Theorem 2.2]). We have the following analogue of Lemma 2.2.3. Lemma 2.2.5. Assume that P is nonatomic, and let Z 1 ,Z 2 2 K w (L 2 F ([0,T]⇥ ⌦ ,R d⇥ m )). Then, the following statements are true: (i) Z 1 +Z 2 2 K w (L 2 F ([0,T]⇥ ⌦ ,R d⇥ m )) and for every t2 [0,T], it holds that Z t 0 (Z 1 +Z 2 ) dB = Z t 0 Z 1 dB + Z t 0 Z 2 dB, P-a.s. (2.10) (ii) If Z 1 Z 2 exists, then Z 1 Z 2 2 K w (L 2 F ([0,T]⇥ ⌦ ,R d⇥ m )) and for every t2 [0,T], Z t 0 (Z 1 Z 2 ) dB = Z t 0 Z 1 dB Z t 0 Z 2 dB, P-a.s. (2.11) 30 (iii) If Z 1 Z 2 exists and R t 0 Z 1 dB = R t 0 Z 2 dB, P-a.s., for all t2 [0,T], then Z 1 =Z 2 as subsets of L 2 F ([0,T]⇥ ⌦ ,R d⇥ m ). Proof:(i)Theadditivityresult(2.10)isgivenin[21,Theorem2.2].(ii)Since Z 1 ,Z 2 are bounded subsets ofL 2 F ([0,T]⇥ ⌦ ,R d⇥ m ), it can be checked thatZ 1 Z 2 is also bounded. Moreover,Z 1 Z 2 is convex and closed as a Hukuhara di↵erence. Since L 2 F ([0,T]⇥ ⌦ ,R d⇥ m ) is reflexive, we may conclude that Z 1 Z 2 is weakly compact. Hence, Z 1 Z 2 2 K w (L 2 F ([0,T]⇥ ⌦ ,R d⇥ m )). The proof of the identity (2.11) follows from the additivity of integral as in the proof of Lemma 2.2.3. It remains to prove (iii). We first note that by the property of the Hukuhara di↵erence and the assertion (ii), it suces to show that R t 0 Z dB=0, P-a.s. 
, for all t2 [0,T], implies Z = {0}.Toseethis,weobservethat,forafixed t2 [0,T], the general stochastic integral R t 0 Z dB=0,P-a.s., amounts to saying, by definition, thatS 2 Ft ( R t 0 Z dB)= dec Ft (J t [Z]) ={0}, whichisobviouslyequivalent to J t [Z]= {0}.Inotherwords,wehave R t 0 z s dB s =0, P-a.s., for all z2Z.But since this holds for anyt2 [0,T], and since the integralM z t := R t 0 z s dB s , t2 [0,T], is a continuous martingale, we can conclude thatP{M z t =0forall t2 [0,T]}=1 for each z2Z.Thisleadstothat z⌘ 0, P-a.s., for all z2Z,thatis, Z = {0}. In the proof of Lemma 2.2.5(iii), the existence of the Hukuhara di↵erence Z 1 Z 2 is needed in order to obtain the conclusionZ 1 =Z 2 ,andhenceZ 1 Z 2 ={0}, by using Lemma 2.2.5(ii). To remove this assumption, we will pass to a quotient 31 space ofK w (L 2 F ([0,T]⇥ ⌦ ,R d⇥ m )) in which two sets of processes are considered identical if they yield the same Itˆ o integral. To make this idea precise, let us define arelation ⇠ = onK w (L 2 F ([0,T]⇥ ⌦ ,R d⇥ m )) by Z 1 ⇠ =Z 2 , Z t 0 Z 1 dB = Z t 0 Z 2 dB P-a.s. for all t2 [0,T]. (2.12) It is easy to see that ⇠ = is an equivalence relation onK w (L 2 F ([0,T]⇥ ⌦ ,R d⇥ m )); let us denote K w (L 2 F ([0,T]⇥ ⌦ ,R d⇥ m )) to be the set of all equivalence classes of ⇠ =.ForaclassZ2 K w (L 2 F ([0,T]⇥ ⌦ ,R d⇥ m )), we define its stochastic integral { R t 0 Z dB} t2 [0,T] ,asthestochasticintegralofanymemberofZ,whichisuniquely defined up to modifications. Hence, for Z 1 ,Z 2 2 K w (L 2 F ([0,T]⇥ ⌦ ,R d⇥ m )), if R t 0 Z 1 dB = R t 0 Z 2 dB P-a.s., for all t2 [0,T], then Z 1 =Z 2 inK w (L 2 F ([0,T]⇥ ⌦ ,R d⇥ m )). For future use, let us extend the definition of Minkowski addition for the new space. 
For $\mathcal{Z}, \hat{\mathcal{Z}} \in \mathbf{K}_w(L^2_{\mathbb{F}}([0,T] \times \Omega, \mathbb{R}^{d \times m}))$, we define

\[ \mathcal{Z} + \hat{\mathcal{Z}} := \{Z_1 + \hat{Z}_1 \,|\, Z_1 \in \mathcal{Z}, \ \hat{Z}_1 \in \hat{\mathcal{Z}}\}, \quad (2.13) \]

which is well-defined since $Z_1 + \hat{Z}_1 \cong Z_2 + \hat{Z}_2$ whenever $Z_1, Z_2 \in \mathcal{Z}$ and $\hat{Z}_1, \hat{Z}_2 \in \hat{\mathcal{Z}}$. Then $\ominus$ has an obvious definition on $\mathbf{K}_w(L^2_{\mathbb{F}}([0,T] \times \Omega, \mathbb{R}^{d \times m}))$ by (2.4). With these definitions, Lemma 2.2.5 can be rewritten for $\mathbf{K}_w(L^2_{\mathbb{F}}([0,T] \times \Omega, \mathbb{R}^{d \times m}))$, except that in (iii) the existence of the Hukuhara difference is not needed.

The next corollary is an important observation.

Corollary 2.2.6. Suppose that $\mathbb{P}$ is a nonatomic probability measure. Let $\mathcal{Z} \in \mathcal{K}_w(L^2_{\mathbb{F}}([0,T] \times \Omega, \mathbb{R}^{d \times m}))$ be a nonempty set of processes and $t \in [0,T]$. Then it holds almost surely that

\[ \int_0^T \mathcal{Z} \, dB \subset \int_0^t \mathcal{Z} \, dB + \int_t^T \mathcal{Z} \, dB. \quad (2.14) \]

Moreover, if $\mathcal{Z}$ is decomposable, then it holds almost surely that

\[ \int_0^T \mathcal{Z} \, dB = \int_0^t \mathcal{Z} \, dB + \int_t^T \mathcal{Z} \, dB, \qquad \int_t^T \mathcal{Z} \, dB = \int_0^T \mathcal{Z} \, dB \ominus \int_0^t \mathcal{Z} \, dB. \quad (2.15) \]

Proof: Let $t \in [0,T]$. By [22, Lemma 3.3.4], we have $\mathcal{Z} \subset 1_{[0,t]}\mathcal{Z} + 1_{(t,T]}\mathcal{Z}$, and equality holds when $\mathcal{Z}$ is decomposable. Applying Lemma 2.2.5(i) together with the monotonicity of the integral with respect to $\subset$, the relations in (2.14) and (2.15) hold.

Remark 2.2.7. The essence of Corollary 2.2.6 is that, unlike in Corollary 2.2.4, the temporal additivity $\int_0^T = \int_0^t + \int_t^T$ is not necessarily true for generalized stochastic integrals, for lack of decomposability of the integrand. In particular, the Hukuhara difference $\int_0^T \mathcal{Z} \, dB \ominus \int_0^t \mathcal{Z} \, dB$ may not exist in general. This peculiar feature of generalized stochastic integrals will be particularly felt when we study set-valued BSDEs in §3.1.7.

2.3 Some Important Estimates

In this section we establish some important estimates regarding set-valued stochastic integrals and their conditional expectations. These estimates, albeit conceivable, need justification, given the special nature of set-valued stochastic analysis as well as the lack of a vector space structure in general. Some of the arguments follow those in [20] closely, but we nevertheless provide the details for the sake of completeness.

Recall that $\mathcal{K}(\mathbb{R}^d)$ denotes the collection of all nonempty convex compact subsets of $\mathbb{R}^d$. For $p \in [1,+\infty)$ and $X_1, X_2 \in L^p_{\mathcal{F}}(\Omega,\mathcal{K}(\mathbb{R}^d))$, define

\[ H_p(X_1,X_2) := (\mathbb{E}[h^p(X_1,X_2)])^{\frac{1}{p}}. \quad (2.1) \]

The following result is a strengthened version of [20, Theorem 2.4.1] in the $L^2$ sense.

Lemma 2.3.1. Let $X_1, X_2 \in L^2_{\mathcal{F}}(\Omega,\mathcal{K}(\mathbb{R}^d))$, and let $\mathcal{G} \subset \mathcal{F}$ be a sub-$\sigma$-algebra. Then one has

\[ h^2(\mathbb{E}[X_1|\mathcal{G}],\mathbb{E}[X_2|\mathcal{G}]) \le \mathbb{E}[h^2(X_1,X_2)|\mathcal{G}], \quad \mathbb{P}\text{-a.s.} \quad (2.2) \]

In particular, the following inequalities hold:

\[ H_2(\mathbb{E}[X_1|\mathcal{G}],\mathbb{E}[X_2|\mathcal{G}]) \le H_2(X_1,X_2); \quad (2.3) \]
\[ \|\mathbb{E}[X_1|\mathcal{G}]\|^2 \le \mathbb{E}[\|X_1\|^2\,|\,\mathcal{G}], \quad \mathbb{P}\text{-a.s.} \quad (2.4) \]

Proof. Let us introduce the notation $\mathbb{E}[\xi : D] := \mathbb{E}[\xi 1_D]$ for $\xi \in L^1_{\mathcal{F}}(\Omega,\mathbb{R})$ and $D \in \mathcal{F}$. Note that (2.2) is equivalent to

\[ \mathbb{E}\big[ h^2(\mathbb{E}[X_1|\mathcal{G}],\mathbb{E}[X_2|\mathcal{G}]) : D \big] \le \mathbb{E}\big[ h^2(X_1,X_2) : D \big], \qquad D \in \mathcal{G}. \quad (2.5) \]

Let $D \in \mathcal{G}$, and define $C := \{\omega \in \Omega : \bar{h}(\mathbb{E}[X_1|\mathcal{G}](\omega),\mathbb{E}[X_2|\mathcal{G}](\omega)) \ge \bar{h}(\mathbb{E}[X_2|\mathcal{G}](\omega),\mathbb{E}[X_1|\mathcal{G}](\omega))\}$. Clearly, by the definition of conditional expectation, $C \in \mathcal{G}$. Now we can write

\[ \mathbb{E}\big[ h^2(\mathbb{E}[X_1|\mathcal{G}],\mathbb{E}[X_2|\mathcal{G}]) : D \big] = \mathbb{E}\big[ \bar{h}^2(\mathbb{E}[X_1|\mathcal{G}],\mathbb{E}[X_2|\mathcal{G}]) : D \cap C \big] + \mathbb{E}\big[ \bar{h}^2(\mathbb{E}[X_2|\mathcal{G}],\mathbb{E}[X_1|\mathcal{G}]) : D \cap C^c \big]. \quad (2.6) \]

Repeatedly applying [20, Theorem 2.3.1] (see also [13, Theorem 2.2]), we obtain

\begin{align*}
\mathbb{E}\big[ \bar{h}^2(\mathbb{E}[X_1|\mathcal{G}],\mathbb{E}[X_2|\mathcal{G}]) : D \cap C \big]
&= \int_{D \cap C} \sup_{x \in \mathbb{E}[X_1|\mathcal{G}](\omega)} d^2(x,\mathbb{E}[X_2|\mathcal{G}](\omega)) \, \mathbb{P}(d\omega) \\
&= \sup_{\eta \in S(\mathbb{E}[X_1|\mathcal{G}])} \mathbb{E}\big[ d^2(\eta,\mathbb{E}[X_2|\mathcal{G}]) : D \cap C \big] \\
&= \sup_{\eta \in \{\mathbb{E}[\varphi|\mathcal{G}] : \varphi \in S(X_1)\}} \mathbb{E}\big[ d^2(\eta,\mathbb{E}[X_2|\mathcal{G}]) : D \cap C \big] \\
&= \sup_{\varphi \in S(X_1)} \mathbb{E}\big[ d^2(\mathbb{E}[\varphi|\mathcal{G}],\mathbb{E}[X_2|\mathcal{G}]) : D \cap C \big] \\
&= \sup_{\varphi \in S(X_1)} \int_{D \cap C} \inf_{y \in \mathbb{E}[X_2|\mathcal{G}](\omega)} |\mathbb{E}[\varphi|\mathcal{G}](\omega) - y|^2 \, \mathbb{P}(d\omega) \\
&= \sup_{\varphi \in S(X_1)} \inf_{\psi \in S(X_2)} \mathbb{E}\big[ |\mathbb{E}[\varphi|\mathcal{G}] - \mathbb{E}[\psi|\mathcal{G}]|^2 : D \cap C \big] \\
&= \sup_{\varphi \in S(X_1)} \inf_{\psi \in S(X_2)} \mathbb{E}\big[ |\mathbb{E}[\varphi - \psi|\mathcal{G}]|^2 : D \cap C \big] \\
&\le \sup_{\varphi \in S(X_1)} \inf_{\psi \in S(X_2)} \mathbb{E}\big[ \mathbb{E}\big[ |\varphi - \psi|^2 \big| \mathcal{G} \big] : D \cap C \big] \\
&= \sup_{\varphi \in S(X_1)} \inf_{\psi \in S(X_2)} \mathbb{E}\big[ |\varphi - \psi|^2 : D \cap C \big] \\
&= \mathbb{E}\big[ \bar{h}^2(X_1,X_2) : D \cap C \big] \ \le \ \mathbb{E}\big[ h^2(X_1,X_2) : D \cap C \big]. \quad (2.7)
\end{align*}

Here in the above, the inequality is due to the conditional version of Jensen's inequality.
Similarly we also have E ⇥ ¯ h 2 (E[X 1 |G],E[X 2 |G]) :D\ C c ⇤ E[h 2 (X 1 ,X 2 ):D\ C c ]. Combiningthetwoinequalitieswith(2.6), weobtain(2.5) and hence (2.2). Then, (2.3) is immediate from (2.2). Finally, (2.4) follows from (2.2) by taking X 2 ⌘{ 0}. Next, we present a H¨ older-type of inequality regarding the Aumann integral. Asimilarinequalityappearsin[23,Theorem2.1]foraspecialclassofintegrands. For completeness, we provide a full proof here for our version. 36 Proposition 2.3.2. Let 1 , 2 2 L 2 F ([0,T]⇥ ⌦ ,K (R d )), and t2 [0,T]. Then, it holds that h 2 ✓Z T t 1 s ds, Z T t 2 s ds ◆ (T t) Z T t h 2 ( 1 s , 2 s )ds, P-a.s. (2.8) Proof. Recalling the definition of the Hausdor↵ metric h,itsucestoshow that 8 > > > < > > > : ¯ h 2 ✓Z T t 1 s ds, Z T t 2 s ds ◆ (T t) Z T t ¯ h 2 ( 1 s , 2 s )ds, ¯ h 2 ✓Z T t 2 s ds, Z T t 1 s ds ◆ (T t) Z T t ¯ h 2 ( 2 s , 1 s )ds, P-a.s. (2.9) By symmetry, we shall check only the first inequality in (2.9). To begin with, we first note that the statement is equivalent to showing, for every D2F T ,that E h ¯ h 2 ✓Z T t 1 s ds, Z T t 2 s ds ◆ :D i (T t)E h Z T t ¯ h 2 ( 1 s , 2 s )ds :D i . (2.10) To see (2.10), we first note that, similar to (2.7), we have E h ¯ h 2 ⇣ Z T t 1 s ds, Z T t 2 s ds ⌘ : D i =sup ⌘ 1 2 decJ t,T (S 2 F ( 1 )) inf ⌘ 2 2 decJ t,T (S 2 F ( 2 )) E[|⌘ 1 ⌘ 2 | 2 : D]. (2.11) 37 Next, by the standard H¨ older’s inequality we have sup ⌘ 1 2 J t,T (S 2 F ( 1 )) inf ⌘ 2 2 J t,T (S 2 F ( 2 )) E[|⌘ 1 ⌘ 2 | 2 :D] =sup ' 1 2 S 2 F ( 1 ) inf ' 2 2 S 2 F ( 2 ) E h J t,T (' 1 ) J t,T (' 2 ) 2 :D i (2.12) (T t)sup ' 1 2 S 2 F ( 1 ) inf ' 2 2 S 2 F ( 2 ) E h Z T t |' 1 s ' 2 s | 2 ds :D i . Now, for given D2F T ,weconsidertheprobabilityspace(D,F D T ,P D ), where F D T :={C\ D :C2F T },andP D (C)=[P(C)/P(D)]1 {P(D)>0} , C2F D T .Wealso define the filtration F D = {F D t } t2 [0,T] in a similar way. 
Applying [20, Theorem 2.3.1] again, we have sup ' 1 2 S 2 F ( 1 ) inf ' 2 2 S 2 F ( 2 ) E h Z T t |' 1 s ' 2 s | 2 ds :D i =sup ' 1 2 S 2 F ( 1 ) inf ' 2 2 S 2 F ( 2 ) E P D h Z T t |' 1 s ' 2 s | 2 ds i =sup ' 1 2 S 2 F D ( 1 ) inf ' 2 2 S 2 F D ( 2 ) Z D⇥ [t,T] |' 1 s (!) ' 2 s (!)| 2 P D (d!)ds (2.13) = Z D⇥ [t,T] sup x2 1 s (!) inf y2 2 s (!) |x y| 2 P D (d!)ds = Z D⇥ [t,T] ¯ h 2 ( 1 s (!), 2 s (!))P D (d!)ds = E P D h Z T t ¯ h 2 ( 1 s , 2 s )ds i =E h Z T t ¯ h 2 ( 1 s , 2 s )ds :D i . Let ↵ D := (T t)E h R T t ¯ h 2 ( 1 s , 2 s )ds : D i .Combining(2.12)and(2.13),we have sup ⌘ 1 2 J t,T (S 2 F ( 1 )) inf ⌘ 2 2 J t,T (S 2 F ( 2 )) E[|⌘ 1 ⌘ 2 | 2 :D] ↵ D . (2.14) 38 Next, we show that (2.14) implies that sup ⌘ 1 2 decJ t,T (S 2 F ( 1 )) inf ⌘ 2 2 J t,T (S 2 F ( 2 )) E h |⌘ 1 ⌘ 2 | 2 :D i ↵ D , (2.15) For any ⌘ 1 2 decJ t,T (S 2 F ( 1 )) we write ⌘ 1 = P m i=1 1 D i ⌘ 1,i for some D 1 ,...,D m 2 F T partitioning ⌦, and ⌘ 1,1 ,...,⌘ 1,m 2 J t,T (S 2 F ( 1 )). Then, for ⌘ 2 2 J t,T (S 2 F ( 2 )) we can apply Jensen’s inequality to get E[|⌘ 1 ⌘ 2 | 2 :D]= E P D h m X i=1 1 D i (⌘ 1,i ⌘ 2 ) 2 i E P D h m X i=1 1 D i |⌘ 1,i ⌘ 2 | 2 i = m X i=1 E ⇥ 1 D\ D i |⌘ 1,i ⌘ 2 | 2 ⇤ . (2.16) Since ⌘ 1 and ⌘ 2 are arbitrary, we deduce from (2.16) and (2.14) that sup ⌘ 1 2 decJ t,T (S 2 F ( 1 )) inf ⌘ 2 2 J t,T (S 2 F ( 2 )) E[|⌘ 1 ⌘ 2 | 2 :D] m X i=1 sup ⌘ 1,i 2 J t,T (S 2 F ( 1 )) inf ⌘ 2 2 J t,T (S 2 F ( 2 )) E ⇥ |⌘ 1,i ⌘ 2 | 2 :D\ D i ⇤ m X i=1 ↵ D\ D i = ↵. This proves (2.15). Noting that decJ t,T (S 2 F ( 2 )) J t,T (S 2 F ( 2 )), (2.15) implies that sup ⌘ 1 2 decJ t,T (S 2 F ( 1 )) inf ⌘ 2 2 decJ t,T (S 2 F ( 2 )) E h |⌘ 1 ⌘ 2 | 2 :D i ↵ D . (2.17) 39 Finally, we claim that (2.17) implies sup ⌘ 1 2 decJ t,T (S 2 F ( 1 )) inf ⌘ 2 2 decJ t,T (S 2 F ( 2 )) E h |⌘ 1 ⌘ 2 | 2 :D i ↵ D , (2.18) which, together with (2.11), would lead to(2.10). 
Indeed, let $\eta_1 \in \overline{\mathrm{dec}}\,J_{t,T}(S^2_{\mathbb{F}}(\Gamma^1))$, and let $\{\eta_1^n\}_{n \in \mathbb{N}} \subset \mathrm{dec}\,J_{t,T}(S^2_{\mathbb{F}}(\Gamma^1))$ be a sequence that converges to $\eta_1$ (strongly) in $L^2_{\mathcal{F}_T}(\Omega,\mathbb{R}^d)$. Let $\varepsilon > 0$. For each $n \in \mathbb{N}$, thanks to (2.17), we may find $\eta_2^n \in \mathrm{dec}\,J_{t,T}(S^2_{\mathbb{F}}(\Gamma^2))$ such that

\[ \mathbb{E}\big[ |\eta_1^n - \eta_2^n|^2 : D \big] < \alpha_D + \varepsilon. \quad (2.19) \]

By Remark 2.2.2, $\{\eta_2^n\}_{n \in \mathbb{N}}$ is a bounded sequence in $L^2_{\mathcal{F}_T}(\Omega,\mathbb{R}^d)$; hence, by the Banach–Saks theorem, it has a subsequence $\{\eta_2^{n_k}\}_{k \in \mathbb{N}}$ for which the sequence $\{\bar{\eta}_2^k\}_{k \in \mathbb{N}}$ converges to some $\eta_2 \in L^2_{\mathcal{F}_T}(\Omega,\mathbb{R}^d)$ strongly, where $\bar{\eta}_2^k := \frac{1}{k}\sum_{\ell=1}^k \eta_2^{n_\ell}$ is the Cesàro average, $k \in \mathbb{N}$. Moreover, since $\overline{\mathrm{dec}}\,J_{t,T}(S^2_{\mathbb{F}}(\Gamma^2))$ is a closed convex set, all Cesàro averages and their limit $\eta_2$ belong to $\overline{\mathrm{dec}}\,J_{t,T}(S^2_{\mathbb{F}}(\Gamma^2))$. The strong convergence of $\{\eta_1^n\}_{n \in \mathbb{N}}$ implies that $\{\bar{\eta}_1^k\}_{k \in \mathbb{N}} \subset \mathrm{dec}\,J_{t,T}(S^2_{\mathbb{F}}(\Gamma^1))$ converges to $\eta_1$ strongly in $L^2_{\mathcal{F}_T}(\Omega,\mathbb{R}^d)$, where $\bar{\eta}_1^k := \frac{1}{k}\sum_{\ell=1}^k \eta_1^{n_\ell}$, $k \in \mathbb{N}$. By (2.19), we have

\[ \mathbb{E}\big[ |\bar{\eta}_1^k - \bar{\eta}_2^k|^2 : D \big] \le \Big( \frac{1}{k} \sum_{\ell=1}^k \big( \mathbb{E}\big[ |\eta_1^{n_\ell} - \eta_2^{n_\ell}|^2 : D \big] \big)^{\frac{1}{2}} \Big)^2 < \alpha_D + \varepsilon, \qquad k \in \mathbb{N}. \]

Thus,

\begin{align*}
\big( \mathbb{E}\big[ |\eta_1 - \eta_2|^2 : D \big] \big)^{\frac{1}{2}}
&\le \big( \mathbb{E}\big[ |\eta_1 - \bar{\eta}_1^k|^2 : D \big] \big)^{\frac{1}{2}} + \big( \mathbb{E}\big[ |\bar{\eta}_1^k - \bar{\eta}_2^k|^2 : D \big] \big)^{\frac{1}{2}} + \big( \mathbb{E}\big[ |\bar{\eta}_2^k - \eta_2|^2 : D \big] \big)^{\frac{1}{2}} \\
&\le \big( \mathbb{E}\big[ |\eta_1 - \bar{\eta}_1^k|^2 \big] \big)^{\frac{1}{2}} + (\alpha_D + \varepsilon)^{\frac{1}{2}} + \big( \mathbb{E}\big[ |\bar{\eta}_2^k - \eta_2|^2 \big] \big)^{\frac{1}{2}},
\end{align*}

and letting $k \to \infty$ yields

\[ \inf_{\eta_2' \in \overline{\mathrm{dec}}\,J_{t,T}(S^2_{\mathbb{F}}(\Gamma^2))} \mathbb{E}\big[ |\eta_1 - \eta_2'|^2 : D \big] \le \mathbb{E}\big[ |\eta_1 - \eta_2|^2 : D \big] \le \alpha_D + \varepsilon. \]

Since $\varepsilon > 0$ and $\eta_1 \in \overline{\mathrm{dec}}\,J_{t,T}(S^2_{\mathbb{F}}(\Gamma^1))$ are arbitrary, (2.18) follows, concluding the proof.

2.4 Set-Valued Martingales and their Integral Representations

Using the notion of conditional expectation in §2.2.1, one can define set-valued martingales as follows. We say that a set-valued process $M = \{M_t\}_{t \in [0,T]}$ is a set-valued $\mathbb{F}$-martingale if $M \in L^0_{\mathbb{F}}([0,T] \times \Omega, \mathcal{C}(\mathbb{R}^d))$, $M_t \in \mathcal{A}^1_{\mathcal{F}_t}(\Omega,\mathcal{C}(\mathbb{R}^d))$, and $M_s = \mathbb{E}[M_t|\mathcal{F}_s]$ for all $0 \le s \le t \le T$. $M$ is called square-integrable if $M_t \in \mathcal{A}^2_{\mathcal{F}_t}(\Omega,\mathcal{C}(\mathbb{R}^d))$, $0 \le t \le T$, and uniformly square-integrably bounded if there exists $\ell \in L^2(\Omega,\mathbb{R}_+)$ such that $\sup_{t \in [0,T]} \|M_t(\cdot)\| \le \ell(\cdot)$ a.s.
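A minimal concrete instance of these definitions: on a two-step binary tree, conditioning a terminal interval-valued random variable produces an interval-valued martingale, and the same computation exhibits the Jensen-type contraction of Lemma 2.3.1 (here in its $L^1$ form). The sketch below uses the endpoint model for intervals and is purely illustrative; all names are assumptions of the example, not notation from the text:

```python
# Sketch: an interval-valued martingale M_t = E[X | F_t] on a 2-step binary
# tree, checking the martingale property and the L^1-version of the bound
# (2.2) in Lemma 2.3.1: h(E[X1|G], E[X2|G]) <= E[h(X1, X2) | G].

import itertools
import numpy as np

paths = list(itertools.product([0, 1], repeat=2))   # 4 equally likely paths

def h(A, B):
    """Hausdorff distance between intervals A = (a1, a2), B = (b1, b2)."""
    return max(abs(A[0] - B[0]), abs(A[1] - B[1]))

def cond_exp(X, t):
    """E[X|F_t]: average the endpoints over paths sharing the first t steps."""
    out = {}
    for p in paths:
        group = [q for q in paths if q[:t] == p[:t]]
        out[p] = tuple(np.mean([X[q][i] for q in group]) for i in (0, 1))
    return out

rng = np.random.default_rng(1)
X1 = {p: tuple(sorted(rng.uniform(-2, 2, 2))) for p in paths}
X2 = {p: tuple(sorted(rng.uniform(-2, 2, 2))) for p in paths}

# Martingale property M_0 = E[M_1 | F_0] (tower property of conditioning).
M1, M0 = cond_exp(X1, 1), cond_exp(X1, 0)
assert all(np.allclose(cond_exp(M1, 0)[p], M0[p]) for p in paths)

# Jensen-type contraction: h(E[X1|F_1], E[X2|F_1]) <= E[h(X1, X2) | F_1].
N1 = cond_exp(X2, 1)
hdist = {p: (h(X1[p], X2[p]),) * 2 for p in paths}  # scalar as degenerate interval
bound = cond_exp(hdist, 1)
assert all(h(M1[p], N1[p]) <= bound[p][0] + 1e-12 for p in paths)
```

For general compact convex values the same statements hold, but the computation would go through support functions rather than interval endpoints.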
41 We note that, if M is a square-integrable set-valued martingale, then for each t 2 [0,T], the set of square-integrable selectors, S 2 Ft (M t ), is decomposable. On the other hand, we consider the set of all square-integrable martingale selectors, that is, all d-dimensional F-martingales f = {f t } t2 [0,T] such that f t 2 S 2 Ft (M t ), t 2 [0,T], and denote it by MS(M). If M is convex-valued, then it is known that MS(M)6=;(cf. [21, §3]). For t2 [0,T], consider the t-section of MS(M), defined as P t [MS(M)] := {f t : f 2 MS(M)}⇢ L 2 Ft (⌦ ,R d ). We remark that the two sets S 2 Ft (M t )(theselectorsofthe t-section) and P t [MS(M)] (the t-section of the selectors) are quite di↵erent. In particular, the former is known to be decomposable, but the latter is not. However, the following relation holds (see [21, Proposition 3.1]): S 2 Ft (M t )= dec Ft (P t [MS(M)]),t2 [0,T], (2.20) where dec Ft denotes the closed decomposable hull with respect toL 2 Ft (⌦ ,R d ). 2.4.1 Representation of Martingales with Trivial Initial Value In what follows we assume that F = F B ,forsome R m -valued Brownian motion B ={B t } t2 [0,T] . ThefundamentalbuildingblockofthetheoryofBackwardSDEis thecelebratedMartingale Representation Theorem,whichstatesthateverysquare- 42 integrable F-martingale can be written, uniquely, as a stochastic integral against B, whence continuous. There is a similar result for set-valued martingales (see §2.2.2), which we now describe. LetM beaconvex-valuedset-valuedF-martingalethatissquare-integrable,i.e., M t 2 A 2 Ft (⌦ ,C(R d )) for each t2 [0,T]. Then for each y2 MS(M), by standard martingale representation theorem, there exists uniquez y 2 L 2 F ([0,T],R d⇥ m ), such that y t = R t 0 z y s dB s , t 2 [0,T], P-a.s. Denote Z M := {z y : y 2 MS(M)}2 P(L 2 F ([0,T],R d⇥ m )). Remark 2.4.1. 
We should note that while a set-valued martingale always gives rise to a set of vector-valued martingales, i.e., stochastic integrals, not every set of vector-valued martingales can be realized as $MS(M)$ for some set-valued martingale $M$.

The following Set-Valued Martingale Representation Theorem is due to [21].

Theorem 2.4.2 (Kisielewicz [21, Proposition 4.1, Theorem 4.2]). For every convex-valued square-integrable set-valued martingale $M = \{M_t\}_{t \in [0,T]}$ with $M_0 = \{0\}$, there exists $\mathcal{Z}_M \in \mathcal{P}(L^2_{\mathbb{F}}([0,T] \times \Omega, \mathbb{R}^{d \times m}))$ such that $M_t = \int_0^t \mathcal{Z}_M \, dB$, $\mathbb{P}$-a.s., $t \in [0,T]$. If $M$ is also uniformly square-integrably bounded, then $\mathcal{Z}_M$ is a convex weakly compact set, that is, $\mathcal{Z}_M \in \mathcal{K}_w(L^2_{\mathbb{F}}([0,T] \times \Omega, \mathbb{R}^{d \times m}))$.

Remark 2.4.3. (i) We first note that in the set-valued martingale representation, the "martingale integrand" $\mathcal{Z}_M$ may not be a measurable set-valued process. In fact, if the set-valued martingale is square-integrably bounded, then the integrand $\mathcal{Z}_M$ cannot be decomposable unless it is a singleton (see [22, Corollary 5.3.2]). Thus the stochastic integral can only be understood in the generalized sense. But on the other hand, if $\mathcal{Z}_M$ is not decomposable, then the temporal additivity of the set-valued stochastic integral fails in general (see Corollary 2.2.6). Such a conflict leads to some fundamental difficulties for the study of set-valued BSDEs, and it does not seem to be amendable unless some more general framework of set-valued stochastic integrals is established.

(ii) If $\Omega$ is separable, then there exists a sequence $\{z^n\}_{n \ge 1} \subset L^2_{\mathbb{F}}([0,T],\mathbb{R}^{d \times m})$ such that $M_t = \mathrm{cl}\{\int_0^t z^n_s \, dB_s\}_{n \ge 1}$ and $S^2_{\mathcal{F}_t}(M_t) = \overline{\mathrm{dec}}_{\mathcal{F}_t}\{\int_0^t z^n_s \, dB_s\}_{n \ge 1}$, $t \in [0,T]$ (see [21, Theorem 4.3]).

(iii) If $M$ is a uniformly square-integrably bounded martingale and $\mathbb{P}$ is nonatomic, then there exists a sequence $\{z^n\}_{n \ge 1} \subset L^2_{\mathbb{F}}([0,T],\mathbb{R}^{d \times m})$ such that $M_t = \mathrm{co}\{\int_0^t z^n_s \, dB_s\}_{n \ge 1}$ for all $t \in [0,T]$ (see [21, Theorem 4.3]).

(iv) In light of the equivalence relation $\cong$ in (2.12), in the last part of Theorem 2.4.2 we can easily conclude that such $\mathcal{Z}_M$ is unique in $\mathbf{K}_w(L^2_{\mathbb{F}}([0,T] \times \Omega, \mathbb{R}^{d \times m}))$. Indeed, if there exist $\mathcal{Z}^1_M$ and $\mathcal{Z}^2_M$ in $\mathcal{K}_w(L^2_{\mathbb{F}}([0,T] \times \Omega, \mathbb{R}^{d \times m}))$ such that $\int_0^t \mathcal{Z}^1_M \, dB = \int_0^t \mathcal{Z}^2_M \, dB = M_t$, $t \in [0,T]$, then $\mathcal{Z}^1_M \cong \mathcal{Z}^2_M$; that is, they correspond to the same element of $\mathbf{K}_w(L^2_{\mathbb{F}}([0,T] \times \Omega, \mathbb{R}^{d \times m}))$, and we may denote this element by $\mathcal{Z}_M$ with a slight abuse of notation.

(v) Unlike usual stochastic integrals, set-valued stochastic integrals do not always generate set-valued martingales. In fact, given a nonempty set $\mathcal{Z} \in \mathcal{K}_w(L^2_{\mathbb{F}}([0,T] \times \Omega, \mathbb{R}^{d \times m}))$ (or $\mathcal{Z} \in \mathbf{K}_w(L^2_{\mathbb{F}}([0,T] \times \Omega, \mathbb{R}^{d \times m}))$) of processes, the set-valued process $\{\int_0^t \mathcal{Z} \, dB\}_{t \in [0,T]}$ forms a set-valued submartingale, in the sense that $\int_0^u \mathcal{Z} \, dB \subset \mathbb{E}[\int_0^t \mathcal{Z} \, dB \,|\, \mathcal{F}_u]$ for every $0 \le u \le t \le T$ (see [24, Theorem 4.2]). Nevertheless, the stochastic integrals that appear in Theorem 2.4.2 are naturally martingales.

2.4.2 Representation of Martingales with General Initial Value

We would like to point out that Theorem 2.4.2 assumes $M_0 = \{0\}$. Such a seemingly benign assumption actually has some severe consequences. In particular, as was pointed out recently in [34], a set-valued martingale whose initial value is a singleton is essentially a vector-valued martingale. Therefore, Theorem 2.4.2 is actually not a suitable tool for the study of set-valued BSDEs with non-singleton terminal values. The main purpose of this subsection is to establish a refined version of the set-valued martingale representation theorem for set-valued martingales with general (non-singleton) initial values.

Our idea is to extend the notion of the Aumann–Itô integral so that it is a martingale, but its expectation is not necessarily zero (see [34, Example 3.1] for the set-valued dilemma). To this end, for any $t \in [0,T]$, we consider the space $R_t := L^2_{\mathcal{F}_t}(\Omega,\mathbb{R}^d) \times L^2_{\mathbb{F}}([t,T] \times \Omega, \mathbb{R}^{d \times m})$.
Given a process $z = \{z_u\}_{u \in [0,T]}$, we denote by $z^{t,T} := (z_u)_{u \in [t,T]}$ the restriction of $z$ to the interval $[t,T]$, and define a mapping $F_t : R_0 \to R_t$ by

\[ F_t(x,z) := \Big( x + \int_0^t z_s \, dB_s, \ z^{t,T} \Big), \qquad (x,z) \in R_0. \]

We have the following result.

Lemma 2.4.4. For given $t \in [0,T]$ and $(\xi,z^t) \in R_t$, define a process $J^t(\xi,z^t) = \{J^t_u(\xi,z^t)\}_{u \in [0,T]}$:

\[ J^t_u(\xi,z^t) := \mathbb{E}[\xi|\mathcal{F}_u] 1_{[0,t)}(u) + \Big( \xi + \int_t^u z^t_s \, dB_s \Big) 1_{[t,T]}(u), \qquad u \in [0,T]. \quad (2.21) \]

Then $J^t(\xi,z^t)$ is an $\mathbb{F}$-martingale on $[0,T]$. Moreover, it holds that $J^t \circ F_t = J^0$ on $R_0$.

Proof. That $J^t(\xi,z^t)$ is a martingale is obvious. To check the identity, let $(x,z) \in R_0$. Following the definitions of $J^t$, $F_t$, $J^0$, we have

\begin{align*}
J^t_u(F_t(x,z)) &= J^t_u\Big( x + \int_0^t z_s \, dB_s, \ z^{t,T} \Big) \\
&= \mathbb{E}\Big[ x + \int_0^t z_s \, dB_s \,\Big|\, \mathcal{F}_u \Big] 1_{[0,t)}(u) + \Big( x + \int_0^t z_s \, dB_s + \int_t^u z_s \, dB_s \Big) 1_{[t,T]}(u) \\
&= \Big( x + \int_0^u z_s \, dB_s \Big) 1_{[0,t)}(u) + \Big( x + \int_0^u z_s \, dB_s \Big) 1_{[t,T]}(u) = J^0_u(x,z),
\end{align*}

for every $u \in [0,T]$. Hence $J^t(F_t(x,z)) = J^0(x,z)$.

Next, let $\mathcal{R} \subset R_0$ be a nonempty set and $t \in [0,T]$. By virtue of Theorem 2.1.8, there exists a set-valued random variable in $L^2_{\mathcal{F}_t}(\Omega,\mathcal{C}(\mathbb{R}^d))$, denoted by $\int_0^t \mathcal{R} \, dB$, such that

\[ S^2_{\mathcal{F}_t}\Big( \int_0^t \mathcal{R} \, dB \Big) = \overline{\mathrm{dec}}_{\mathcal{F}_t}(J^0_t[\mathcal{R}]). \quad (2.22) \]

We call $\int_0^t \mathcal{R} \, dB$ the stochastic integral of $\mathcal{R}$. Clearly, such a stochastic integral is an extended version of the generalized Aumann–Itô stochastic integral $\int_0^t \mathcal{Z} \, dB$ defined by (2.9); in particular, the integrand $\mathcal{R}$ consists of pairs $(x,z)$, which keeps track of the initial values $x$ of the martingales in $J^0[\mathcal{R}]$, motivating the choice of the notation $\int_0^t \mathcal{R} \, dB$.

To see how the integral $\int_0^t \mathcal{R} \, dB$ (or more precisely, $\mathcal{R}$) can be defined through a set-valued martingale, let $M = \{M_u\}_{u \in [0,T]}$ be a convex uniformly square-integrably bounded set-valued martingale with respect to $\mathbb{F} = \mathbb{F}^B$, whose initial value $M_0$ is a non-singleton convex set. Let $MS(M)$ be the set of all $L^2$-martingale selectors of $M$. By the standard martingale representation theorem, for fixed $t \in [0,T]$, each $y \in MS(M)$ can be written as $y = J^t(\xi,z)$ for a unique pair $(\xi,z) \in R_t$. We shall define, for each $t \in [0,T]$,

\[ \mathcal{R}^M_t := \big\{ (\xi,z) \in R_t : J^t(\xi,z) \in MS(M) \big\}; \quad \text{and} \quad \mathcal{R}^M := \mathcal{R}^M_0. \quad (2.23) \]

In what follows, for $(\xi,z) \in R_t$, we write $\pi_\xi(\xi,z) := \xi$ and $\pi_z(\xi,z) := z$. (For convenience, we suppress the dependence of the mappings $\pi_\xi$, $\pi_z$ on $t$.) Also, if $y = J^t(\xi,z)$ with $(\xi,z) \in \mathcal{R}^M_t$, we denote $\pi_\xi(y) = \xi$ and $\pi_z(y) = z$, respectively. Furthermore, we define $\mathcal{Z}^M_t := \pi_z[\mathcal{R}^M_t]$. The following proposition collects various forms of "time-consistency" properties of the collection $\{\mathcal{R}^M_t\}_{t \in [0,T]}$, which will be useful in our future discussion.

Proposition 2.4.5 (Time-consistency). Let $t_1 \in [0,T]$. Then it holds that

\[ F_{t_1}[\mathcal{R}^M_0] = \mathcal{R}^M_{t_1}. \quad (2.24) \]

Furthermore, the following relations hold for every $t_2 \in (t_1,T]$:

(i) $J^{t_1}[\mathcal{R}^M_{t_1}] = J^{t_2}[\mathcal{R}^M_{t_2}] = MS(M)$.
(ii) $\pi_\xi[\mathcal{R}^M_0] = M_0$.
(iii) $\pi_\xi[\mathcal{R}^M_{t_1}] = J^{t_1}_{t_1}[\mathcal{R}^M_{t_1}] = J^{t_2}_{t_1}[\mathcal{R}^M_{t_2}] = P_{t_1}[MS(M)]$.
(iv) $\pi_\xi[\mathcal{R}^M_{t_1}] = \{\mathbb{E}[\xi|\mathcal{F}_{t_1}] : \xi \in \pi_\xi[\mathcal{R}^M_{t_2}]\}$.
(v) $\mathcal{Z}^M_{t_1} 1_{[t_2,T]} = \mathcal{Z}^M_{t_2} 1_{[t_2,T]} = \mathcal{Z}^M_0 1_{[t_2,T]}$.

Proof. We first prove (2.24). Fix $t_1 \in [0,T]$ and let $(x,z) \in \mathcal{R}^M_0$. By Lemma 2.4.4 and the definition of $\mathcal{R}^M_0$, we have $J^{t_1}(F_{t_1}(x,z)) = J^0(x,z) \in MS(M)$. On the other hand, since $(x,z) \in R_0$, we have $F_{t_1}(x,z) \in R_{t_1}$, which implies $F_{t_1}(x,z) \in \mathcal{R}^M_{t_1}$. Namely, $F_{t_1}[\mathcal{R}^M_0] \subset \mathcal{R}^M_{t_1}$.
LetMS(M)bethesetofallL 2 -martingale selectors of M.Bystandardmartingalerepresentationtheorem,forfixed t2 [0,T], each y2 MS(M)canbewrittenas y =J t (⇠,z )forauniquepair(⇠,z )2 R t .Weshall define, for each t2 [0,T], R M t := (⇠,z )2 R t : J t (⇠,z )2 MS(M) ;and R M :=R M 0 . (2.23) 47 In what follows, for (⇠,z )2 R t ,wewrite ⇡ ⇠ (⇠,z ):= ⇠ and ⇡ z (⇠,z ):= z.(For convenience, we suppress the dependence of the mappings ⇡ ⇠ ,⇡ z ont.) Also, ify = J t (⇠,z )2R M t ,wedenote ⇡ ⇠ (y)= ⇠ ,and ⇡ x (y)= x,respectively. Furthermore, we define Z M t := ⇡ z [R M t ]. The following theorem collect collects various forms of “time-consistency” properties of the collection{R M t } t2 [0,T] ,whichwillbeusefulin our future discussion. Proposition 2.4.5 (Time-consistency). Let t 1 2 [0,T]. Then, it holds F t 1 [R M 0 ]=R M t 1 . (2.24) Furthermore, the following relations hold for every t 2 2 (t 1 ,T]: (i) J t 1 [R M t 1 ]=J t 2 [R M t 2 ]=MS(M). (ii) ⇡ ⇠ [R M 0 ]=M 0 . (iii) ⇡ ⇠ [R M t 1 ]=J t 1 t 1 [R M t 1 ]=J t 2 t 1 [R M t 2 ]=P t 1 [MS(M)]. (iv) ⇡ ⇠ [R M t 1 ]={E[⇠ |F t 1 ]: ⇠ 2 ⇡ ⇠ [R M t 2 ]}. (v) Z M t 1 1 [t 2 ,T] =Z M t 2 1 [t 2 ,T] =Z M 0 1 [t 2 ,T] . Proof. We first prove (2.24). Fix t 1 2 [0,T]andlet(x,z)2R M 0 . By Lemma 2.4.4 and the definition of R M 0 ,wehave J t 1 (F t 1 (x,z)) = J 0 (x,z) 2 MS(M). On the other hand, since (x,z) 2 R 0 ,wehave F t 1 (x,z) 2 R t 1 ,whichimplies F t 1 (x,z)2R M t 1 . Namely, F t 1 [R M 0 ]⇢R M t 1 . 48 Conversely, let ( ˆ ⇠, ˆ z)2R M t 1 . Define a martingale y s :=E[ ˆ ⇠ |F s ], s2 [0,t 1 ]. By martingale representation theorem, there exists a unique pair (x,¯ z)2 R 0 such that y u =x+ Z u 0 ¯ z s dB s ,u2 [0,t 1 ]. Let z := ¯ z1 [0,t 1 ) +ˆ z1 [t 1 ,T] 2 L 2 F ([0,T]⇥ ⌦ ,R d⇥ m ). Then, (x,z)2 R 0 and for every u2 [0,T], J 0 u (x,z)= x+ Z u 0 z s dB s = ✓ x+ Z u 0 ¯ z s dB s ◆ 1 [0,t 1 ) (u)+ ✓ ˆ ⇠ + Z u t 1 ˆ z s dB s ◆ 1 [t 1 ,T] (u) = E[ ˆ ⇠ |F u ]1 [0,t 1 ) (u)+ ✓ ˆ ⇠ + Z u t 1 ˆ z s dB s ◆ 1 [t 1 ,T] (u)=J t 1 u ( ˆ ⇠, ˆ z). 
Hence $J^0(x,z) = J^{t_1}(\hat{\xi},\hat{z}) \in MS(M)$, that is, $(x,z) \in \mathcal{R}^M_0$. Finally,

\[ F_{t_1}(x,z) = \Big( x + \int_0^{t_1} z_s \, dB_s, \ z^{t_1,T} \Big) = (\hat{\xi},\hat{z}). \]

So $(\hat{\xi},\hat{z}) \in F_{t_1}[\mathcal{R}^M_0]$. Consequently, we have $\mathcal{R}^M_{t_1} \subset F_{t_1}[\mathcal{R}^M_0]$, proving (2.24).

We now turn to properties (i)–(v). The proof of (i) is immediate, since $J^{t_i}[\mathcal{R}^M_{t_i}] = MS(M)$ by the definition of $\mathcal{R}^M_{t_i}$ for $i \in \{1,2\}$.

To see (ii), let $(x,z) \in \mathcal{R}^M_0$. Since $J^0(x,z) \in MS(M)$, we have $\pi_\xi(x,z) = x = J^0_0(x,z) \in M_0$. Conversely, since $M$ is a set-valued martingale, $M_0 = \mathbb{E}[M_T|\mathcal{F}_0] = \mathbb{E}[M_T]$, thanks to the Blumenthal 0–1 law. Hence, by the definition of the set-valued expectation, for any $x \in M_0$ there exists $\xi \in S^2_{\mathcal{F}_T}(M_T)$ such that $x = \mathbb{E}[\xi]$. Furthermore, by the martingale representation theorem, there exists $z \in L^2_{\mathbb{F}}([0,T] \times \Omega, \mathbb{R}^{d \times m})$ such that

\[ \mathbb{E}[\xi|\mathcal{F}_u] = x + \int_0^u z_s \, dB_s = J^0_u(x,z), \qquad u \in [0,T]. \]

Since $M$ is a set-valued martingale, we have $\mathbb{E}[\xi|\mathcal{F}_u] \in S^2_{\mathcal{F}_u}(M_u)$, $u \in [0,T]$. Hence $J^0(x,z) = \{\mathbb{E}[\xi|\mathcal{F}_u]\}_{u \in [0,T]} \in MS(M)$, that is, $(x,z) \in \mathcal{R}^M_0$, or $x \in \pi_\xi[\mathcal{R}^M_0]$, proving (ii).

To prove (iii), first note that, for every $(x,z) \in R_0$,

\[ \pi_\xi(F_{t_1}(x,z)) = \pi_\xi\Big( x + \int_0^{t_1} z_s \, dB_s, \ z^{t_1,T} \Big) = x + \int_0^{t_1} z_s \, dB_s = J^0_{t_1}(x,z). \]

This implies that

\[ \pi_\xi[\mathcal{R}^M_{t_1}] = \pi_\xi[F_{t_1}[\mathcal{R}^M_0]] = \{J^0_{t_1}(x,z) : (x,z) \in \mathcal{R}^M_0\} = J^0_{t_1}[\mathcal{R}^M_0] = J^{t_1}_{t_1}[F_{t_1}[\mathcal{R}^M_0]] = J^{t_1}_{t_1}[\mathcal{R}^M_{t_1}], \]

where the first and last equalities are by (2.24) and the fourth equality is due to Lemma 2.4.4. On the other hand, by the definitions of $P_{t_1}$ and $J^{t_1}$, we see that $P_{t_1} \circ J^{t_1} = J^{t_1}_{t_1}$. Therefore,

\[ \pi_\xi[\mathcal{R}^M_{t_1}] = J^{t_1}_{t_1}[\mathcal{R}^M_{t_1}] = P_{t_1}[J^{t_1}[\mathcal{R}^M_{t_1}]] = P_{t_1}[J^{t_2}[\mathcal{R}^M_{t_2}]] = P_{t_1}[MS(M)], \]

thanks to (i), which concludes the proof of (iii).

To prove (iv), first note that $\mathbb{E}[P_{t_2}(y)|\mathcal{F}_{t_1}] = P_{t_1}(y)$ whenever $y = \{y_u\}_{u \in [0,T]}$ is a martingale. Hence, applying (iii) twice, we obtain

\[ \{\mathbb{E}[\xi|\mathcal{F}_{t_1}] : \xi \in \pi_\xi[\mathcal{R}^M_{t_2}]\} = \{\mathbb{E}[\xi|\mathcal{F}_{t_1}] : \xi \in P_{t_2}[MS(M)]\} = P_{t_1}[MS(M)] = \pi_\xi[\mathcal{R}^M_{t_1}]. \]
Finally, to prove (v), note that, for every (x,z)2 R 0 , ⇡ z (F t 1 (x,z)) = ⇡ z ⇣ x+ Z t 1 0 z s dB s ,z t 1 ,T ⌘ =z t 1 ,T . Hence, R M t 1 1 [t 2 ,T] = ⇡ z [R M t 1 ]1 [t 2 ,T] = ⇡ z [F t 1 [R M 0 ]]1 [t 2 ,T] = z t 1 ,T :(x,z)2R M 0 1 [t 2 ,T] = z1 [t 2 ,T] : z2Z M 0 =Z M 0 1 [t 2 ,T] . Taking t 1 =t 2 above, we also obtain Z M t 2 1 [t 2 ,T] =Z M 0 1 [t 2 ,T] . The following theorem is a martingale representation theorem for set-valued martingaleswithpossiblynontrivialinitialvalues, i.e.,M 0 isanon-singletondeter- ministic set. 51 Theorem 2.4.6. Let M = {M u } u2 [0,T] be a convex uniformly square-integrably bounded set-valued martingale with respect to F =F B . Then, for each u2 [0,T], it holds M u = Z u 0 R M dB a.s. Moreover, for each t2 [0,u], it holds that S 2 Fu (M u )= dec Fu (J t u [R M t ]). Proof:ByLemma2.4.5(ii),wehaveJ 0 u [R M ]=P u [J 0 [R M ]] =P u [MS(M)],u2 [0,T]. On the other hand, by [21, Proposition 3.1], we have dec Fu (P u [MS(M)]) = S 2 Fu (M u ). Combining these with the definition of stochastic integral in (2.22), we get S 2 Fu ⇣ Z u 0 R M dB ⌘ = dec Fu (J 0 u [R M ]) = dec Fu (P u [MS(M)]) =S 2 Fu (M u ). This shows that M u = R u 0 R M dB almost surely. The second part of the propo- sition is an immediate consequence of Lemma 2.4.4. Remark 2.4.7. It is interesting to note the relationship between the new stochas- tic integral R u 0 R M dB (2.22) and the generalized Aumann-Itˆ o stochastic integral R u 0 Z M dB (2.9), where Z M :=Z M 0 .Recalling(2.5)and(2.21),wehave J 0 u (x,z)=x+J u (z)2 M 0 +J u [Z M ] 52 for every (x,z)2R M . Hence, J 0 u [R M ] ⇢ M 0 + J u [Z M ]. After taking closed decomposable hulls, it follows that S 2 u ⇣ Z u 0 R M dB ⌘ = dec Fu (J 0 u [R M ])⇢ M 0 +dec Fu (J u [Z M ]) =M 0 +S 2 Fu ⇣ Z u 0 Z M dB ⌘ . Therefore, Z u 0 R M dB⇢ M 0 + Z u 0 Z M dB a.s. and the reverse inclusion fails to hold in general. When M 0 = {0},wehave R u 0 R M dB = R u 0 Z M dB since R M ={0}⇥Z M in this case. Remark 2.4.8. 
In view of Remark 2.4.7 and Theorem 2.4.6, the new integral ∫_0^u R^M dB is a non-trivial and necessary extension of the Aumann-Itô stochastic integral, which can be used for the integral representation of any truly set-valued martingale M with a non-zero (non-singleton) initial value M_0.

2.5 Set-Valued BSDEs

We are now ready to study set-valued BSDEs. Assume from now on that (Ω, F, P, F) is a filtered probability space on which is defined an m-dimensional standard Brownian motion B = {B_t}_{t∈[0,T]}. We assume further that F = F^B, the natural filtration generated by B, augmented by all the P-null sets of F, so that it satisfies the usual hypotheses. In particular, we may assume without loss of generality that (Ω, F) = (C([0,T]), B(C([0,T]))) is the canonical space with F_t = σ{ω(· ∧ t) : ω ∈ Ω}, t ∈ [0,T], and P is the Wiener measure on (Ω, F). Hence, Ω is separable and P is nonatomic.

2.5.1 Set-Valued BSDEs in Conditional Expectation Form

In this section, we shall focus on the following simplest form of set-valued BSDE:

Y_t = E[ ξ + ∫_t^T f(s, Y_s) ds | F_t ],  t ∈ [0,T],  (2.1)

where ξ ∈ L²_{F_T}(Ω, 𝒦(ℝ^d)) and f : [0,T] × Ω × 𝒦(ℝ^d) → 𝒦(ℝ^d) is a set-valued function to be specified later. We first give the definition of a solution to the set-valued BSDE (2.1).

Definition 2.5.1. A set-valued process Y ∈ L²_F([0,T] × Ω, 𝒦(ℝ^d)) is called an adapted solution to the set-valued BSDE (2.1) if

Y_t = E[ ξ + ∫_t^T f(s, Y_s) ds | F_t ],  P-a.s.,  t ∈ [0,T].

We shall make use of the following assumptions on the coefficient f.

Assumption 2.5.2. The function f : [0,T] × Ω × 𝒦(ℝ^d) → 𝒦(ℝ^d) enjoys the following properties:

(i) for fixed A ∈ 𝒦(ℝ^d), f(·,·,A) ∈ L⁰_F([0,T] × Ω, 𝒦(ℝ^d));

(ii) f(·,·,{0}) ∈ L²_F([0,T] × Ω, 𝒦(ℝ^d)), that is,

E[ ∫_0^T ||f(t,{0})||² dt ] = E[ ∫_0^T h²(f(t,{0}), {0}) dt ] < ∞;  (2.2)

(iii) f(t,ω,·) is Lipschitz, uniformly in (t,ω) ∈ [0,T] × Ω, in the following sense: there exists K > 0 such that

h(f(t,ω,A), f(t,ω,B)) ≤ K h(A,B),  A, B ∈ 𝒦(ℝ^d), (t,ω) ∈ [0,T] × Ω.  (2.3)

Remark 2.5.3.
Note that a multifunction f satisfying Assumption 2.5.2 must be a Carathéodory multifunction (see Section 2.1.2), which requires only continuity in the spatial variable.

Remark 2.5.4. By Assumption 2.5.2, it is easy to check that {f(t,Y_t)}_{t∈[0,T]} ∈ L²_F([0,T] × Ω, 𝒦(ℝ^d)) whenever {Y_t}_{t∈[0,T]} ∈ L²_F([0,T] × Ω, 𝒦(ℝ^d)).

We shall consider the following standard Picard iteration. Let Y^(0) ≡ {0} and, for n ≥ 1, define Y^(n) recursively by

Y^(n)_t = E[ ξ + ∫_t^T f(s, Y^(n-1)_s) ds | F_t ],  t ∈ [0,T].  (2.4)

We should point out that the set-valued random variable Y^(n)_t is defined almost surely for each fixed t ∈ [0,T]. An immediate question is whether {Y^(n)_t}_{t∈[0,T]} makes sense as a (jointly) measurable set-valued process, which, as usual, requires justification, as we have seen frequently in the set-valued case. The following lemma is important for this purpose.

Lemma 2.5.5. Let X ∈ L²(Ω, 𝒦(ℝ^d)) and define F_t := E[X | F_t], t ∈ [0,T]. Then {F_t}_{t∈[0,T]} has an optional modification that is a uniformly L²-bounded martingale.

Proof. Consider the (trivial) set-valued process G_t ≡ X, t ∈ [0,T], which is clearly (jointly) measurable, and G_τ = X is integrable for every F-stopping time τ : Ω → [0,T]. By [31, Theorem 3.7], there exists a unique optional projection {°G_t}_{t∈[0,T]} of the process {G_t}_{t∈[0,T]} such that E[G_τ | F_τ] = °G_τ, P-a.s., for every F-stopping time τ. In particular, {°G_t}_{t∈[0,T]} is an optional modification of {F_t}_{t∈[0,T]}. It is easy to check that {°G_t}_{t∈[0,T]} is a square-integrable set-valued martingale, and by an L¹-version of Lemma 2.3.1, it holds that

||°G_t|| = ||E[X | F_t]|| ≤ E[ ||X|| | F_t ],  t ∈ [0,T].

Finally, since X ∈ L²(Ω, 𝒦(ℝ^d)), applying Doob's L²-maximal inequality to the (ℝ-valued) martingale M_t := E[ ||X|| | F_t ], t ∈ [0,T], we obtain

E[ sup_{t∈[0,T]} ||°G_t||² ] ≤ E[ sup_{t∈[0,T]} |M_t|² ] ≤ 4 E[ ||X||² ] < +∞.

That is, {°G_t}_{t∈[0,T]} is uniformly square-integrably bounded.
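The Picard iteration (2.4) can be visualized in a toy deterministic setting. The sketch below is our own illustration, not a construction from the thesis: we take d = 1, represent compact convex sets as intervals [lo, hi], and use deterministic data (trivial filtration), so the conditional expectation drops out and the iteration reduces to an interval-valued integral equation. The linear driver and all constants are illustrative assumptions.

```python
import math

# Toy sketch of the Picard iteration (2.4) for the set-valued BSDE (2.1).
# Assumptions (ours): d = 1, sets are compact intervals (lo, hi), the data are
# deterministic so E[ . | F_t] is the identity, and the iteration reduces to
#     Y_t^{(n)} = xi + int_t^T f(Y_s^{(n-1)}) ds.

def f(A, K=0.5):
    """Interval-valued driver f(A) = {K*a : a in A}; Lipschitz with constant K."""
    lo, hi = A
    return (K * lo, K * hi)

def minkowski_add(A, B):
    return (A[0] + B[0], A[1] + B[1])

def hausdorff(A, B):
    """Hausdorff distance between two intervals."""
    return max(abs(A[0] - B[0]), abs(A[1] - B[1]))

def picard_step(Y_prev, xi, T, n_grid):
    """One Picard iterate on a uniform grid: Y_t = xi + int_t^T f(Y_s) ds."""
    dt = T / n_grid
    Y = [xi] * (n_grid + 1)          # Y[k] approximates Y at time t_k
    acc = (0.0, 0.0)                 # running integral over [t, T], built backward
    for k in range(n_grid - 1, -1, -1):
        acc = minkowski_add(acc, tuple(dt * v for v in f(Y_prev[k])))
        Y[k] = minkowski_add(xi, acc)
    return Y

T, n_grid, xi = 1.0, 200, (1.0, 2.0)   # terminal set xi = [1, 2]
Y = [(0.0, 0.0)] * (n_grid + 1)        # Y^{(0)} = {0}
gaps = []                              # sup_t h(Y^{(n)}, Y^{(n-1)})
for n in range(8):
    Y_new = picard_step(Y, xi, T, n_grid)
    gaps.append(max(hausdorff(a, b) for a, b in zip(Y, Y_new)))
    Y = Y_new

# The gaps should decay factorially, mirroring the estimate (2.9).
print(gaps)
```

In this toy model the limit is explicit, Y_t = [e^{0.5(T-t)}, 2e^{0.5(T-t)}], so the iterates can be checked against it; the factorial decay of successive gaps is exactly the mechanism behind the estimate (2.9) below.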
The next proposition establishes the desired measurability for the Picard iteration.

Proposition 2.5.6. For each n ∈ ℕ, Y^(n) has a progressively measurable modification.

Proof. Note that Y^(0) ≡ {0} is progressively measurable itself. Let n ∈ ℕ and suppose that Y^(n-1) has a progressively measurable modification, which we still denote by Y^(n-1) for ease of notation, and interpret (2.4) accordingly. For each t ∈ [0,T], using Corollary 2.2.1 and Corollary 2.2.4, we have

Y^(n)_t = E[ ξ + ∫_0^T f(s, Y^(n-1)_s) ds | F_t ] ⊖ ∫_0^t f(s, Y^(n-1)_s) ds.  (2.5)

By Remark 2.2.2(v), {∫_0^t f(s, Y^(n-1)_s) ds}_{t∈[0,T]} has a progressively measurable modification. Moreover, by Lemma 2.5.5, {E[ξ + ∫_0^T f(s, Y^(n-1)_s) ds | F_t]}_{t∈[0,T]} has an optional, hence progressively measurable, modification. Replacing the original processes with such modifications in (2.5), and using Lemma 2.1.10, we conclude that Y^(n) is progressively measurable.

In view of Proposition 2.5.6, we will assume without loss of generality that Y^(n) is progressively measurable; in particular, Y^(n) ∈ L²_F([0,T] × Ω, 𝒦(ℝ^d)) for each n ∈ ℕ.

In order to guarantee the convergence of the sequence {Y^(n)}_{n∈ℕ} constructed in (2.4), we will use a recursive estimate on {E h²(Y^(n)_t, Y^(n-1)_t)}_{n∈ℕ}, which is provided by the following lemma. We note that, unlike for vector-valued BSDEs, this lemma is non-trivial because of the lack of standard tools, in particular a set-valued Itô formula.

Lemma 2.5.7. For each n ∈ ℕ, it holds that

E[h²(Y^(n)_t, Y^(n-1)_t)] ≤ T K² ∫_t^T E[h²(Y^(n-1)_s, Y^(n-2)_s)] ds,  t ∈ [0,T].  (2.6)

Proof. By Proposition 2.1.3, Lemma 2.3.1, and the properties of the Hausdorff distance, we get

E[h²(Y^(n)_t, Y^(n-1)_t)] = E h²( E[ξ + ∫_t^T f(s, Y^(n-1)_s) ds | F_t], E[ξ + ∫_t^T f(s, Y^(n-2)_s) ds | F_t] )
≤ E E[ h²( ξ + ∫_t^T f(s, Y^(n-1)_s) ds, ξ + ∫_t^T f(s, Y^(n-2)_s) ds ) | F_t ]
≤ E h²( ∫_t^T f(s, Y^(n-1)_s) ds, ∫_t^T f(s, Y^(n-2)_s) ds ).
(2.7)

Then, combining (2.7) with Proposition 2.8, Assumption 2.5.2(iii), and Proposition 2.1.3, we derive (2.6).

We are now ready to establish the well-posedness of the set-valued BSDE (2.1).

Theorem 2.5.8. Suppose Assumption 2.5.2 holds. Then the set-valued BSDE (2.1) has a solution Y ∈ L²_F([0,T] × Ω, 𝒦(ℝ^d)). Moreover, the solution is unique up to modifications: if Y′ ∈ L²_F([0,T] × Ω, 𝒦(ℝ^d)) is another solution of (2.1), then Y_t = Y′_t P-a.s., t ∈ [0,T].

Proof. Recall that (L²_F([0,T] × Ω, 𝒦(ℝ^d)), d_H) is a complete metric space, where the metric is defined by d_H(Φ, Ψ) = (E[∫_0^T h²(Φ_t, Ψ_t) dt])^{1/2} for Φ, Ψ ∈ L²_F([0,T] × Ω, 𝒦(ℝ^d)). We shall argue that the sequence {Y^(n)}_{n∈ℕ} of the Picard iteration is Cauchy in L²_F([0,T] × Ω, 𝒦(ℝ^d)). To this end, for fixed t ∈ [0,T], note that Y^(0)_t = {0}. Thus, by repeatedly applying Lemma 2.3.1, we have

E h²(Y^(1)_t, Y^(0)_t) = E h²( E[ξ + ∫_t^T f(s,{0}) ds | F_t], {0} )  (2.8)
≤ E h²( ξ + ∫_t^T f(s,{0}) ds, {0} )
≤ 2[ E||ξ||² + E h²( ∫_t^T f(s,{0}) ds, {0} ) ]
≤ 2[ E||ξ||² + T ∫_0^T E||f(s,{0})||² ds ] =: C.

Note that C is free of the choice of t. We claim that, for n ≥ 1, it holds that

E h²(Y^(n)_t, Y^(n-1)_t) ≤ C (TK²)^{n-1} (T-t)^{n-1} / (n-1)!.  (2.9)

Indeed, for n = 1, (2.9) is just (2.8). Now assume that (2.9) holds for n-1; then by Lemma 2.5.7 we have

E h²(Y^(n)_t, Y^(n-1)_t) ≤ TK² ∫_t^T E h²(Y^(n-1)_s, Y^(n-2)_s) ds ≤ TK² ∫_t^T C (TK²)^{n-2} (T-s)^{n-2}/(n-2)! ds
= C (TK²)^{n-1} (T-t)^{n-1}/(n-1)! ≤ C K^{2(n-1)} T^{2(n-1)}/(n-1)! =: a²_n.  (2.10)

Since H_2 is a metric on L²_{F_t}(Ω, 𝒦(ℝ^d)), the estimate in (2.10) then yields, for m > n ≥ 1,

H_2(Y^(n)_t, Y^(m)_t) ≤ Σ_{k=n}^{m-1} H_2(Y^(k+1)_t, Y^(k)_t) ≤ Σ_{k=n}^{m-1} a_{k+1},  (2.11)

where a_k = √C K^{k-1} T^{k-1} / √((k-1)!), k ≥ 1, by (2.10). Hence,

d²_H(Y^(m), Y^(n)) = ∫_0^T H_2²(Y^(n)_t, Y^(m)_t) dt ≤ T ( Σ_{k=n}^{m-1} a_{k+1} )².  (2.12)

Now note that

a_{k+1}/a_k = ( √C K^k T^k / √(k!) ) / ( √C K^{k-1} T^{k-1} / √((k-1)!) ) = TK/√k → 0, as k → ∞.

By the ratio test, Σ_{k=1}^∞ a_k converges.
That is, {Y^(n)}_{n∈ℕ} is a Cauchy sequence in L²_F([0,T] × Ω, 𝒦(ℝ^d)), thanks to (2.11); whence it converges to some Y ∈ L²_F([0,T] × Ω, 𝒦(ℝ^d)).

Next, we show that the limit process Y = {Y_t}_{t∈[0,T]} indeed leads to a solution to the BSDE (2.1). Since d_H(Y, Y^(n)) → 0 as n → ∞, there exists a subsequence {Y^(n_ℓ)}_{ℓ∈ℕ} such that h(Y_t(ω), Y^(n_ℓ)_t(ω)) → 0 as ℓ → ∞ for dt × dP-a.e. (t,ω) ∈ [0,T] × Ω. By Lemma 2.3.1, Proposition 2.3.2, and Assumption 2.5.2(iii), we have

E h²( E[∫_t^T f(s,Y_s) ds | F_t], E[∫_t^T f(s,Y^(n_ℓ)_s) ds | F_t] )  (2.13)
≤ E h²( ∫_t^T f(s,Y_s) ds, ∫_t^T f(s,Y^(n_ℓ)_s) ds )
≤ (T-t) E[ ∫_t^T h²(f(s,Y_s), f(s,Y^(n_ℓ)_s)) ds ]
≤ TK² ∫_0^T H_2²(Y_s, Y^(n_ℓ)_s) ds.

By the construction of the limit Y, we have ∫_0^T H_2²(Y_s, Y^(n_ℓ)_s) ds → 0 as ℓ → ∞. Now, (2.13) shows that

sup_{t∈[0,T]} E h²( E[∫_t^T f(s,Y^(n_ℓ)_s) ds | F_t], E[∫_t^T f(s,Y_s) ds | F_t] ) → 0, as ℓ → ∞.

It follows that Y satisfies the BSDE

Y_t = E[ ξ + ∫_t^T f(s,Y_s) ds | F_t ],  t ∈ [0,T].  (2.14)

In fact, a similar argument as above (using Assumption 2.5.2(iii)) also shows that Y is actually unique, as the solution of (2.1) in the space L²_F([0,T] × Ω, 𝒦(ℝ^d)) up to modifications. This proves the theorem.

2.5.2 Set-Valued BSDEs with Martingale Terms

The set-valued BSDE (2.1) considered in Section 2.5.1 is formulated using set-valued conditional expectations. In this section, we aim to remove the conditional expectation by introducing an additional term to the BSDE, namely a set-valued martingale. More specifically, we consider a set-valued BSDE of the form

Y_t + M_T = ξ + ∫_t^T f(s,Y_s) ds + M_t,  t ∈ [0,T],  (2.15)

where {M_t}_{t∈[0,T]} is a set-valued martingale, and {Y_t}_{t∈[0,T]}, ξ, and f : [0,T] × Ω × 𝒦(ℝ^d) → 𝒦(ℝ^d) are as in Section 2.5.1.

Definition 2.5.9.
A pair (Y,M) ∈ (L²_F([0,T] × Ω, 𝒦(ℝ^d)))² of set-valued processes is called an adapted solution to the set-valued BSDE (2.15) if M is a uniformly square-integrably bounded set-valued martingale with M_0 = Y_0 and

Y_t + M_T = ξ + ∫_t^T f(s,Y_s) ds + M_t,  P-a.s.,  t ∈ [0,T].

Remark 2.5.10. In light of the Hukuhara difference, the set-valued BSDE (2.15) is equivalent to

Y_t = ( ξ + ∫_t^T f(s,Y_s) ds + M_t ) ⊖ M_T,  t ∈ [0,T].  (2.16)

Furthermore, if M_t ⊖ M_T exists, the same BSDE is also equivalent to

Y_t = ξ + ∫_t^T f(s,Y_s) ds + (M_t ⊖ M_T),  t ∈ [0,T],  (2.17)

thanks to Proposition 2.1.2. However, the existence of M_t ⊖ M_T is a tall order due to the lack of time-additivity of the generalized stochastic integral (see Remark 2.5.15 for more details).

We now establish the well-posedness of the set-valued BSDE (2.15).

Theorem 2.5.11. Suppose that Assumption 2.5.2 holds. Then the set-valued BSDE (2.15) has a solution (Y,M) ∈ (L²_F([0,T] × Ω, 𝒦(ℝ^d)))². Moreover, the solution is unique up to modifications: if (Y′,M′) ∈ (L²_F([0,T] × Ω, 𝒦(ℝ^d)))² is another solution of (2.15), then Y_t = Y′_t and M_t = M′_t P-a.s. for every t ∈ [0,T].

Proof. By Theorem 2.5.8, the set-valued BSDE (2.1) in conditional expectation form has a solution Y ∈ L²_F([0,T] × Ω, 𝒦(ℝ^d)). Define a process M = {M_t}_{t∈[0,T]} by

M_t := E[ ξ + ∫_0^T f(s,Y_s) ds | F_t ],  t ∈ [0,T].

By Remark 2.5.4, {f(t,Y_t)}_{t∈[0,T]} ∈ L²_F([0,T] × Ω, 𝒦(ℝ^d)); and by Lemma 2.5.5, M is a uniformly square-integrably bounded set-valued martingale. On the other hand, by Corollary 2.2.4,

∫_0^T f(s,Y_s) ds = ∫_0^t f(s,Y_s) ds + ∫_t^T f(s,Y_s) ds,  t ∈ [0,T].  (2.18)

By the linearity of the set-valued conditional expectation and (2.18), we have

M_t = E[ ξ + ∫_0^T f(s,Y_s) ds | F_t ] = E[ ξ + ∫_0^t f(s,Y_s) ds + ∫_t^T f(s,Y_s) ds | F_t ]  (2.19)
= E[ ξ + ∫_t^T f(s,Y_s) ds | F_t ] + ∫_0^t f(s,Y_s) ds = Y_t + ∫_0^t f(s,Y_s) ds.
Using the definitions of M_T and M_t, and combining (2.18) and (2.19), we obtain

Y_t + M_T = Y_t + ξ + ∫_0^t f(s,Y_s) ds + ∫_t^T f(s,Y_s) ds = ξ + ∫_t^T f(s,Y_s) ds + M_t.

Finally, M_0 = E[ξ + ∫_0^T f(s,Y_s) ds] = Y_0 by the definitions of M_0 and Y_0. Hence, the pair (Y,M) is a solution to the set-valued BSDE (2.15).

To prove uniqueness, let (Y′,M′) be another solution to (2.15), and let t ∈ [0,T]. Then

Y′_t + M′_T = ξ + ∫_t^T f(s,Y′_s) ds + M′_t,  P-a.s.  (2.20)

Since M′ is a martingale, taking conditional expectation in (2.20) given F_t yields

Y′_t + M′_t = E[ ξ + ∫_t^T f(s,Y′_s) ds | F_t ] + M′_t.  (2.21)

Hence, by the cancellation law, Y′ is a solution to the BSDE (2.1). By the uniqueness part of Theorem 2.5.8, Y_t = Y′_t P-a.s. Using this, we may rewrite (2.20) as

Y_t + M′_T = ξ + ∫_t^T f(s,Y_s) ds + M′_t,  P-a.s.

In particular, when t = 0, we have

Y_0 + M′_T = ξ + ∫_0^T f(s,Y_s) ds + M′_0.  (2.22)

On the other hand, we have M_0 = Y_0 = Y′_0 = M′_0. Hence, the cancellation law applied to (2.22) and the definition of M_T yield

M′_T = ξ + ∫_0^T f(s,Y_s) ds = M_T.

Finally, since M and M′ are both martingales with the same terminal value, we obtain M_t = M′_t P-a.s. for each t ∈ [0,T]. Hence, (Y,M) and (Y′,M′) coincide up to modifications.

We conclude this section by formulating the precise relationship between the solutions of the two forms of the set-valued BSDE, (2.1) and (2.15).

Corollary 2.5.12. (i) If Y ∈ L²_F([0,T] × Ω, 𝒦(ℝ^d)) is a solution of (2.1), then there exists a unique M ∈ L²_F([0,T] × Ω, 𝒦(ℝ^d)) such that (Y,M) is a solution of (2.15).

(ii) If (Y,M) ∈ (L²_F([0,T] × Ω, 𝒦(ℝ^d)))² is a solution of (2.15), then Y solves (2.1).

Proof. (i) Let Y be a solution of (2.1). Following the construction in the proof of Theorem 2.5.11, one can find a set-valued martingale M such that (Y,M) solves (2.15). The uniqueness of such an M is also a consequence of the uniqueness part of Theorem 2.5.11.

(ii) Let (Y,M) be a solution of (2.15).
For each t ∈ [0,T], taking conditional expectation with respect to F_t in (2.15) gives

Y_t + M_t = E[ ξ + ∫_t^T f(s,Y_s) ds | F_t ] + M_t,

since M is a set-valued martingale. By the cancellation law, it follows that Y solves (2.1).

2.5.3 Set-Valued BSDEs with Generalized Stochastic Integrals

We can now combine the martingale representation theorem developed in Section 2.4.1 with the set-valued BSDE (2.15) to study the equation of the form

Y_t + ∫_0^T R dB = ξ + ∫_t^T f(s,Y_s) ds + ∫_0^t R dB,  t ∈ [0,T],  (2.23)

where R ⊂ R_0 = ℝ^d × L²_F([0,T] × Ω, ℝ^{d×m}) is a set of martingale representer pairs.

Definition 2.5.13. A pair (Y,R), with Y ∈ L²_F([0,T] × Ω, 𝒦(ℝ^d)) and R ⊂ R_0, is called a solution of the set-valued BSDE (2.23) if Y_0 = π_ξ[R] and

Y_t + ∫_0^T R dB = ξ + ∫_t^T f(s,Y_s) ds + ∫_0^t R dB,  P-a.s.,  t ∈ [0,T].

The following theorem provides a well-posedness result for the set-valued BSDE (2.23).

Theorem 2.5.14. Under Assumption 2.5.2, the set-valued BSDE (2.23) has a solution (Y,R). Moreover, the solution is unique in the following sense: if (Y′,R′) is another solution of (2.23), then Y_t = Y′_t and ∫_0^t R dB = ∫_0^t R′ dB P-a.s. for every t ∈ [0,T].

Proof. By Theorem 2.5.11, there exists a solution (Y,M) of the set-valued BSDE (2.15). By Definition 2.5.9, M is a uniformly square-integrably bounded set-valued martingale. Hence, by Theorem 2.4.6, we may write M_t = ∫_0^t R^M dB P-a.s. for each t ∈ [0,T]. Therefore, (Y,R^M) is a solution of (2.23). The uniqueness claim is an immediate consequence of the uniqueness part of Theorem 2.5.11.

Remark 2.5.15. The indefinite integral M = {∫_0^t R^M dB}_{t∈[0,T]} in the proof of Theorem 2.5.14 is a uniformly square-integrably bounded set-valued martingale. Following similar arguments as in [22, Corollary 5.3.2], it can be shown that if Z^M is decomposable, then Z^M as well as R^M are singletons. Hence, in all cases where Z^M contains more than one process, the set Z^M is not decomposable.
Similar to Corollary 2.2.6 for the generalized Aumann-Itô stochastic integral, the indefinite integral considered here does not have time-additivity in general; that is, the Hukuhara difference M_T ⊖ M_t = ∫_0^T R^M dB ⊖ ∫_0^t R^M dB does not exist. In particular, in view of Remark 2.4.7, the inclusion ∫_0^t R^M dB ⊂ M_0 + ∫_0^t Z^M dB is generally a strict one.

Chapter 3

Some Related Topics

In this chapter, we discuss some related topics concerning set-valued backward stochastic differential equations. We first discuss the relationship between set-valued backward stochastic differential equations and backward stochastic differential inclusions, and then, based on these relationship results, we discuss numerical results for set-valued backward stochastic differential equations that are high-dimensional both in the underlying process x and in every component of Y. We present the details below.

3.1 Relationship between BSDIs and SV-BSDEs

In this section we investigate the relationship between a BSDI and its corresponding SVBSDE. The comparison is actually not completely straightforward, since the structures of the BSDI and the SVBSDE are often quite different, especially regarding the generator. But we would like to know whether, assuming that the generators are comparable, the collection of solutions of a BSDI coincides with the solution of a SVBSDE. To make the concepts precise, we begin with the following definitions.

Definition 3.1.1 (BSDI).

y_s ∈ E[ y_t + ∫_s^t F_1(τ, y_τ) dτ | F_s ]  for 0 ≤ s < t ≤ T,  (3.1)
y_T = ξ,

where F_1 : [0,T] × Ω × ℝ^d → 𝒦_c(ℝ^d). Here, 𝒦_c(X) is the family of all nonempty, closed, bounded and convex subsets of X.

Definition 3.1.2 (Solution of BSDI). A solution to BSDI (3.1) is a stochastic process y : [0,T] × Ω → ℝ^d which has a representation

y_t = E[ ξ + ∫_t^T f_1(τ, y_τ) dτ | F_t ],  (3.2)

where f_1 ∈ S(co F_1).

The solution above depends on the terminal value ξ. Hereby, we denote by SI(ξ) the set of all solutions to BSDI (3.1) with terminal value equal to ξ P-a.e.
Now we will assume that F_1 satisfies the following hypotheses:

(H1) F_1(·,·,a) : [0,T] × Ω → 𝒦_c(ℝ^d) is an adapted stochastic process for every a ∈ ℝ^d;

(H2) there exists m ∈ L²([0,T] × Ω, ℝ) such that ||F_1(t,ω,a)||_{ℝ^d} ≤ m(t,ω) for every a ∈ ℝ^d;

(H3) there exists a constant L > 0 such that, for every (t,ω) ∈ [0,T] × Ω and every a_1, a_2 ∈ ℝ^d, it holds that

h( F_1(t,ω,a_1), F_1(t,ω,a_2) ) ≤ L ||a_1 − a_2||_{ℝ^d}.

Let us now define a mapping F_2 : [0,T] × Ω × 𝒦_c(L²) → 𝒦_c(ℝ^d) by

F_2(t,ω,A) := co( ⋃_{a∈A} F_1(t,ω,a) )  for (t,ω,A) ∈ [0,T] × Ω × 𝒦_c(L²).

We now define the set-valued backward stochastic differential equation as follows.

Definition 3.1.3 (SVBSDE).

Y_t = E[ Ξ + ∫_t^T F_2(s, Y_s) ds | F_t ].  (3.3)

Definition 3.1.4 (Solution of SVBSDE). By a solution to the SVBSDE we mean a mapping Y : [0,T] → 𝒦_c(L²) that satisfies (3.3) and Y_t ∈ 𝒦_c(L²_t) for every t ∈ [0,T].

In order to establish a connection between solutions to BSDI (3.1) and solutions to the SVBSDE (3.3), we define some sets. Let Y : [0,T] → 𝒦_c(L²) be a solution to SVBSDE (3.3). For ξ ∈ Ξ, define:

C(T, L²) := { y ∈ L² : y is continuous in t, t ∈ [0,T] },
S²(H) := { h ∈ L² : h ∈ H },
K(ξ, Y) := { y ∈ C(T, L²) : y_t = E[ ξ + ∫_t^T f_2(s) ds | F_t ] P-a.e. for t ∈ [0,T], and f_2 ∈ S²(F_2(Y)) }.

Lemma 3.1.5. If y ∈ K(ξ, Y), then for every t ∈ [0,T] it holds that y_t ∈ Y_t P-a.e.

Proof: First, y ∈ K(ξ, Y) means that, for any t ∈ [0,T],

y_t = E[ ξ + ∫_t^T f_2(s) ds | F_t ] = E[ ξ | F_t ] + E[ ∫_t^T f_2(s) ds | F_t ].  (3.4)

And for Y_t, we have that

Y_t = E[ Ξ + ∫_t^T F_2(s, Y_s) ds | F_t ] = E[ Ξ | F_t ] + E[ ∫_t^T F_2(s, Y_s) ds | F_t ].  (3.5)

Note that the second equality in (3.5) is not trivial. It follows from the fact that both Ξ and F_2 take values in 𝒦_c(L²) and from Theorem 2.4.1(v) of K. Then, by Lemma 3.3.3 of K, and the fact that F_2 is a convex hull and Ξ is convex, we get

E[ ξ | F_t ] ∈ E[ Ξ | F_t ],  and  E[ ∫_t^T f_2(s) ds | F_t ] ∈ E[ ∫_t^T F_2(s, Y_s) ds | F_t ].
Therefore, the right-hand side of (3.4) belongs to the right-hand side of (3.5). So we conclude that y_t ∈ Y_t.

Lemma 3.1.6 (Completeness of K(ξ,Y)). Let the hypotheses (H1)-(H3) be satisfied. Then, for every ξ ∈ Ξ, the set K(ξ,Y) is a nonempty, bounded, convex and closed subset of the space C(T, L²).

Proof: Nonemptiness. Note that F_2 ∘ Y : T × Ω → 𝒦_c(ℝ^d) is a nonanticipating set-valued stochastic process. Indeed, the set-valued stochastic process F_2(·,·,A) is adapted for every A ∈ 𝒦_c(L²), and the mapping F_2(t,ω,·) is continuous. Also, the continuous mapping Y : T → 𝒦_c(L²) can be treated as an adapted set-valued stochastic process Y : T × Ω → 𝒦_c(L²). Hence F_2 ∘ Y is an adapted set-valued stochastic process. Moreover, it holds that

||(F_2 ∘ Y)(t,ω)||²_{ℝ^d} = ||F_2(t,ω,Y(t))||²_{ℝ^d} = || co ⋃_{a∈Y(t)} F_1(t,ω,a) ||²_{ℝ^d} = || ⋃_{a∈Y(t)} F_1(t,ω,a) ||²_{ℝ^d}
= sup{ ||r||²_{ℝ^d} : r ∈ ⋃_{a∈Y(t)} F_1(t,ω,a) } = sup_{a∈Y(t)} ||F_1(t,ω,a)||²_{ℝ^d} ≤ m²(t,ω)

for (t,ω) ∈ T × Ω. Hence the set S²(F_2 ∘ Y) is nonempty.

Boundedness. Let y ∈ K(ξ,Y). Then we have the following estimates:

sup_{t∈T} E||y(t)||²_{ℝ^d} ≤ 3 E||ξ||²_{ℝ^d} + 3 sup_{t∈T} E|| ∫_t^T f_2(s) ds ||²_{ℝ^d}
≤ 3 E||ξ||²_{ℝ^d} + 3 sup_{t∈T} (T−t) E∫_t^T ||f_2(s)||²_{ℝ^d} ds
≤ 3 E||ξ||²_{ℝ^d} + 3T E∫_0^T ||f_2(s)||²_{ℝ^d} ds,

where f_2 ∈ S²(F_2 ∘ Y). On the other hand, by (H2), for every fixed (s,ω) ∈ T × Ω we get

||f_2(s,ω)||²_{ℝ^d} ≤ ||F_2(s,ω,Y(s))||²_{ℝ^d} ≤ m²(s,ω).

Hence sup_{t∈T} E||y(t)||²_{ℝ^d} < M, where M is a positive constant which does not depend on y.

Convexity. Since F_2 ∘ Y has convex values, the set S²(F_2 ∘ Y) is also convex in the space L². Thus the convexity of K(ξ,Y) follows immediately.

Closedness. Let us take any sequence {y_n}_{n=1}^∞ ⊂ K(ξ,Y) such that y_n → y as n → ∞ in the space C(T, L²), with y ∈ C(T, L²). Thus for every t ∈ T the sequence {y_n(t)}_{n=1}^∞ converges to y(t) in the space L². Since the y_n are elements of K(ξ,Y), we have

y_n(t) = E[ ξ + ∫_t^T f_2^n(s) ds | F_t ],

where f_2^n ∈ S²(F_2 ∘ Y) for n ∈ ℕ.
Due to the weak compactness of the set S²(F_2 ∘ Y) in the space L², we infer that there exist a subsequence {f_2^{n_k}}_{k=1}^∞ and an f_2 ∈ S²(F_2 ∘ Y) such that f_2^{n_k} → f_2 weakly in L². Also,

E[ ξ + ∫_t^T f_2^{n_k}(s) ds | F_t ] → y(t) in L² as k → ∞, for every t ∈ T.

For t ∈ T, define the linear operators J_t : L² → L²_t as follows:

J_t(f) := ∫_t^T f(s) ds.

One can show that J_t is continuous with respect to the weak topologies. Hence

J_t(f_2^{n_k}) = ∫_t^T f_2^{n_k}(s) ds → J_t(f_2) = ∫_t^T f_2(s) ds as k → ∞, for every t ∈ T.

Thus

E[ ξ + ∫_t^T f_2^{n_k}(s) ds | F_t ] → E[ ξ + ∫_t^T f_2(s) ds | F_t ] in L² as k → ∞, for every t ∈ T.

This implies that

|| y(t) − E[ ξ + ∫_t^T f_2(s) ds | F_t ] ||_{L²} = 0 for every t ∈ T.

Hence, for every fixed t ∈ T, it holds that y(t) = E[ ξ + ∫_t^T f_2(s) ds | F_t ]. The process appearing on the right-hand side of this equality is continuous. Therefore we arrive at the conclusion that y ∈ K(ξ,Y).

Theorem 3.1.7. Let Ξ ∈ 𝒦_c(L²) and let F_1 satisfy hypotheses (H1)-(H3). Let Y denote the unique solution to (3.3). Then for every ξ ∈ Ξ there exists a solution y : [0,T] × Ω → ℝ^d to BSDI (3.1) such that y_t ∈ Y_t for every t ∈ [0,T].

Proof: Since F_1 satisfies (H1) and (H3), there exists a mapping f̃ : [0,T] × Ω × ℝ^d → ℝ^d such that:

(i) f̃(t,ω,a) ∈ F_1(t,ω,a) for (t,ω,a) ∈ [0,T] × Ω × ℝ^d;

(ii) for every a ∈ ℝ^d, the mapping f̃(·,·,a) : [0,T] × Ω → ℝ^d is an adapted stochastic process;

(iii) there exists a positive constant L̃ such that, for t ∈ [0,T] and a_1, a_2 ∈ ℝ^d,

||f̃(t,ω,a_1) − f̃(t,ω,a_2)||²_{ℝ^d} ≤ L̃ ||a_1 − a_2||²_{ℝ^d}.

We then define the mapping Φ : K(ξ,Y) → K(ξ,Y) by

(Φ(y))_t = E[ ξ + ∫_t^T f̃(s, y_s) ds | F_t ] in L², for every t ∈ [0,T].

This mapping is well defined. Indeed, Φ(y) ∈ C(T, L²). Also, f̃(s,ω,y_s) ∈ F_2(s,ω,Y_s): indeed, we have f̃(s,ω,y_s) ∈ F_1(s,ω,y_s) and F_1(s,ω,y_s) ⊂ ⋃_{a∈Y_s} F_1(s,ω,a) ⊂ F_2(s,ω,Y_s). Therefore, Φ(y) ∈ K(ξ,Y).

Next, we show that Φ is a contraction under the metric

D(x,y) := sup_{t∈[0,T]} e^{−L̃T(T−t)} E|x_t − y_t|².

For every x, y ∈ K(ξ,Y), we have

D(Φ(x), Φ(y)) = sup_{t∈[0,T]} e^{−L̃T(T−t)} E| E[ ∫_t^T (f̃(s,x_s) − f̃(s,y_s)) ds | F_t ] |²
≤ sup_{t∈[0,T]} e^{−L̃T(T−t)} E[ E[ | ∫_t^T (f̃(s,x_s) − f̃(s,y_s)) ds |² | F_t ] ]
≤ sup_{t∈[0,T]} e^{−L̃T(T−t)} (T−t) E[ ∫_t^T |f̃(s,x_s) − f̃(s,y_s)|² ds ]
≤ sup_{t∈[0,T]} e^{−L̃T(T−t)} L̃(T−t) E[ ∫_t^T |x_s − y_s|² ds ]
≤ sup_{t∈[0,T]} e^{−L̃T(T−t)} L̃(T−t) D(x,y) ∫_t^T e^{L̃T(T−s)} ds
≤ D(x,y) sup_{t∈[0,T]} (1 − e^{−L̃T(T−t)}) = (1 − e^{−L̃T²}) D(x,y).

Since (K(ξ,Y), D) is a complete metric space, applying the Banach fixed point principle, we infer that there exists a unique y* ∈ K(ξ,Y) such that

y*_t = E[ ξ + ∫_t^T f̃(s, y*_s) ds | F_t ] in L², for t ∈ [0,T].

Since f̃(s,ω,y*_s) ∈ F_1(s,ω,y*_s), the stochastic process y* is a solution to BSDI (3.1). By Lemma 3.1.5, we have y*_t ∈ Y_t for any t ∈ [0,T].

Now let us define the reachable sets of solutions to a BSDI. For t ∈ [0,T] and ξ ∈ Ξ,

R(t,ξ) := { y_t ∈ L²_t : y ∈ SI(ξ) },  R(t,Ξ) := ⋃_{ξ∈Ξ} R(t,ξ).

Corollary 3.1.8. R(t,Ξ) ∩ Y_t ≠ ∅ for every t ∈ [0,T].  (3.6)

Corollary 3.1.9. Suppose that the assumptions of Theorem 3.1.7 are satisfied. If a family {Y_t : t ∈ [0,T]} ⊂ 𝒦_c(L²) is such that SVBSDE (3.3) is satisfied, then there exists a solution y to the BSDI

y_s ∈ E[ y_t + ∫_s^t F_1(τ, y_τ) dτ | F_s ]  for 0 ≤ s < t ≤ T,  (3.7)
ξ ∈ Ξ,

such that y_t ∈ Y_t for any t ∈ [0,T].

Theorem 3.1.10. Let Ξ ∈ 𝒦_c(L^{2,d}_T) and let F : [0,T] × Ω × L^{2,d} → 𝒦_c(ℝ^d) satisfy the above assumptions. Then c̄o R(t,Ξ) ⊂ Y_t for every t ∈ T.

Proof: Let t ∈ T. It suffices to show that H̄_{L^{2,d}}(cl R(t,Ξ), Y_t) = 0. Then cl R(t,Ξ) ⊂ Y_t, and since Y_t is closed and convex, we will get c̄o R(t,Ξ) ⊂ Y_t. Let a ∈ R(t,Ξ) be arbitrarily chosen. Then a = y_t for some y ∈ SI(Ξ). Thus

y_t ∈ E[ Ξ + ∫_t^T F_1(τ, y_τ) dτ | F_t ] = E[ Ξ + ∫_t^T F_2(τ, {y_τ}) dτ | F_t ], for 0 ≤ t ≤ T.
Therefore, we have

dist²_{L^{2,d}}(a, Y_t) ≤ H̄²_{L^{2,d}}( E[Ξ + ∫_t^T F_2(τ,{y_τ}) dτ | F_t], E[Ξ + ∫_t^T F_2(τ,Y_τ) dτ | F_t] )
≤ 2 H̄²_{L^{2,d}}( E[∫_t^T F_2(τ,{y_τ}) dτ | F_t], E[∫_t^T F_2(τ,Y_τ) dτ | F_t] )
≤ 4 E[ H̄²_{L^{2,d}}( ∫_t^T F_2(τ,{y_τ}) dτ, ∫_t^T F_2(τ,Y_τ) dτ ) | F_t ]
≤ 8 E[ ∫_t^T H̄²_{L^{2,d}}( F_2(τ,{y_τ}), F_2(τ,Y_τ) ) dτ | F_t ]
≤ 8L² E[ ∫_t^T H̄²_{L^{2,d}}( {y_τ}, Y_τ ) dτ | F_t ]
≤ 8L² E[ ∫_t^T H̄²_{L^{2,d}}( cl R(τ,Ξ), Y_τ ) dτ | F_t ].

Hence,

H̄²_{L^{2,d}}( cl R(t,Ξ), Y_t ) ≤ 8L² E[ ∫_t^T H̄²_{L^{2,d}}( cl R(τ,Ξ), Y_τ ) dτ | F_t ],

whence H̄_{L^{2,d}}(cl R(t,Ξ), Y_t) = 0 by Gronwall's inequality.

Corollary 3.1.11. Let Ξ ∈ 𝒦_c(L^{2,d}_T) and let F : [0,T] × Ω × L^{2,d} → 𝒦_c(ℝ^d) satisfy the above assumptions. Then c̄o(SI(Ξ)) ⊂ C(Y), where C(Y) is the set of all continuous selections of Y, Y being the solution to the SVBSDE.

3.2 Simulation Using Deep Learning

Based on the corollary about the relationship between the solutions of the SVBSDE and of the BSDI, we simulate the BSDI so as to get an idea of the moving paths of the SVBSDE. One issue to be mentioned here is that traditional BSDE simulation methods, including Monte Carlo methods, cannot handle simulation tasks in high-dimensional cases: when the diffusion part of the BSDE is multi-dimensional, traditional simulation methods lack the ability to handle the task. The recently popular deep-learning-based method, which can handle this issue, is therefore a nice tool for our purposes. In this section, we show how we simulate the BSDI using the deep learning method to handle high-dimensional problems.

We recall that the solution set SI(ξ) of the BSDI is the set of all solutions to (3.1) with terminal value equal to ξ P-a.e., i.e., solutions of

y_s ∈ E[ y_t + ∫_s^t F_1(τ, y_τ) dτ | F_s ]  for 0 ≤ s < t ≤ T,  (3.8)
y_T = ξ,

where F_1 : [0,T] × Ω × ℝ^d → 𝒦_c(ℝ^d). Here, 𝒦_c(X) is the family of all nonempty, closed, bounded and convex subsets of X. Also, a solution to (3.8) is a stochastic process y : [0,T] × Ω → ℝ^d which has a representation

y_t = E[ ξ + ∫_t^T f_1(τ, y_τ) dτ | F_t ],  (3.9)

where f_1 ∈ S(co F_1). The representation (3.9) of the solution to (3.8) can also be stated as

y_t = ξ + ∫_t^T f_1(τ, y_τ) dτ − ∫_t^T z_τ dW_τ,  (3.10)

where W : [0,T] × Ω → ℝ^d is a d-dimensional standard Brownian motion and {F_t}_{t∈[0,T]} is the filtration generated by {W_t}_{t∈[0,T]}. To be more general, we can also consider cases where the terminal condition is based on a forward stochastic differential equation. Consider the following BSDI:

y_s ∈ E[ y_t + ∫_s^t F_1(τ, y_τ) dτ | F_s ]  for 0 ≤ s < t ≤ T,  (3.11)
y_T = g(x_T),
x_t = x_0 + ∫_0^t μ(s, x_s) ds + ∫_0^t σ(s, x_s) dW_s.

The solution set of the above BSDI (3.11) is the set of stochastic processes y with representations

y_t = g(x_T) + ∫_t^T f_1(τ, y_τ) dτ − ∫_t^T z_τ dW_τ,  (3.12)
x_t = x_0 + ∫_0^t μ(s, x_s) ds + ∫_0^t σ(s, x_s) dW_s.

With the relationship between the solutions of the SVBSDE and of the BSDI, we carry out the numerical computation with the BSDEs (3.12). In this formula, the high-dimensional forward part x_t used to be an obstacle to simulation due to the curse of dimensionality. With the development of deep learning methods, solving high-dimensional problems no longer suffers from large time costs, thanks to the network structures within deep learning models. We therefore simulate our models based on the deep learning method. For the simulation results, we show some examples of the BSDI, including the high-dimensional vanilla call option pricing problem.

3.2.1 Deep Learning Based Method

In this section, we focus on using the deep learning method to illustrate the simulation of the SVBSDE model, which is high-dimensional both in its output Y_t and in its input x_t. As mentioned above, the solution set of the BSDI (3.11) is fully determined by the BSDEs (3.12). We focus on solving the BSDEs given the terminal condition using the deep learning method. We show the numerical scheme in detail below.
It is known that, by applying the Feynman-Kac formula, there exists a deterministic function u = u(t,x) such that y_t = u(t, x_t) and z_t = σ^T(t, x_t) ∇u(t, x_t), where u(t,x) solves a PDE. For both the forward and the backward process, the Euler scheme can be used to approximate them:

y_{t_{n+1}} ≈ y_{t_n} − f_1(t_n, y_{t_n})(t_{n+1} − t_n) + z_{t_n}(W_{t_{n+1}} − W_{t_n}),  (3.13)
x_{t_{n+1}} ≈ x_{t_n} + μ(t_n, x_{t_n})(t_{n+1} − t_n) + σ(t_n, x_{t_n})(W_{t_{n+1}} − W_{t_n}).  (3.14)

Here both processes {y_t}_{t∈[0,T]} and {x_t}_{t∈[0,T]} can be as general as ℝ^d-valued. We can write y_t := (y^1_t, ..., y^d_t) and u(·) := (u^1(·), ..., u^d(·)), where y^i_t = u^i(t, x_t). Then (3.13) can be written componentwise as

y^i_{t_{n+1}} ≈ y^i_{t_n} − f^i_1(t_n, y_{t_n})(t_{n+1} − t_n) + z^i_{t_n}(W_{t_{n+1}} − W_{t_n}),  i = 1, ..., d.  (3.15)

Given the discretizations (3.14) and (3.15), the next step is to approximate the function z^i_{t_n} = σ^T(t_n, x_{t_n}) ∇u^i(t_n, x_{t_n}) at each time step t = t_n:

σ^T(t_n, x_{t_n}) ∇u(t_n, x_{t_n}) = (σ^T ∇u)(t_n, x_{t_n}) ≈ (σ^T ∇u)(t_n, x_{t_n} | θ_n),  for n = 1, 2, ..., N−1,

where θ_n is the parameter of the neural network approximation of the function z_t = σ^T(t, x_t) ∇u(t, x_t) at each t = t_n. With the neural network approximation, we take {x_{t_n}}_{0≤n≤N} and {W_{t_n}}_{0≤n≤N} as the input data, and the output, denoted by û({x_{t_n}}_{0≤n≤N}, {W_{t_n}}_{0≤n≤N}), is an approximation of u(t_N, x_{t_N}). With the terminal function g(·) written as g(·) := (g^1(·), ..., g^d(·)), the loss function can then be defined as

l(θ) = E Σ_{i=1}^d | g^i(x_{t_N}) − û^i({x_{t_n}}_{0≤n≤N}, {W_{t_n}}_{0≤n≤N}) |²,

with the set of parameters θ = {θ_{u_0}, θ_{∇u_0}, θ_1, ..., θ_{N−1}}.

In the previous literature, general high-dimensional backward stochastic differential equations do not always have solutions, except in the case where the i-th component f^i of the generator function f depends only on the i-th row of the process {z_t}_{t∈[0,T]}.
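The scheme (3.13)-(3.14) with the terminal loss l(θ) can be sketched in a runnable miniature. This is our own toy instance, not the thesis's implementation: d = m = 1, μ = 0, σ = 1 (so x = W), driver f_1 = 0, and terminal g(x) = x, for which the exact solution is y_t = W_t, u(0,x_0) = 0, z_t ≡ 1. The per-time-step "sub-network" degenerates to a single trainable constant, and the stochastic gradient descent uses hand-computed gradients in place of backpropagation; names such as `z_theta` and `run` are illustrative.

```python
import math
import random

# Miniature of the deep-learning scheme for (3.13)-(3.14) (toy assumptions ours):
# x = W, f1 = 0, g(x) = x, so y_t = W_t, u(0, x_0) = 0 and z_t = 1 exactly.
random.seed(1)
T, N, B, lr, steps = 1.0, 20, 64, 0.1, 600
dt = T / N

y0 = 5.0                 # trainable guess for u(0, x_0); true value is 0
z_theta = [0.0] * N      # trainable guess for z at each time step; true value 1

def sample_batch():
    """B paths of Brownian increments over the N time steps."""
    return [[random.gauss(0.0, math.sqrt(dt)) for _ in range(N)] for _ in range(B)]

def run(dWs):
    """Euler scheme (3.13)-(3.14); returns terminal (x_T, y_T) per path."""
    out = []
    for dW in dWs:
        x, y = 0.0, y0
        for j in range(N):
            y += z_theta[j] * dW[j]      # (3.13) with f1 = 0
            x += dW[j]                   # (3.14) with mu = 0, sigma = 1
        out.append((x, y))
    return out

def batch_loss(dWs):
    """Monte Carlo estimate of l(theta) = E|g(x_T) - y_T|^2 with g(x) = x."""
    return sum((x - y) ** 2 for x, y in run(dWs)) / len(dWs)

loss0 = batch_loss(sample_batch())
for _ in range(steps):
    dWs = sample_batch()
    res = [x - y for x, y in run(dWs)]   # residuals g(x_T) - y_T
    # hand-computed gradients of the batch loss (stand-in for backpropagation)
    y0 -= lr * sum(-2.0 * r for r in res) / B
    for j in range(N):
        z_theta[j] -= lr * sum(-2.0 * r * dW[j] for r, dW in zip(res, dWs)) / B
loss1 = batch_loss(sample_batch())

print(round(y0, 3), round(sum(z_theta) / N, 3), loss1 < loss0)
```

After training, `y0` should approach u(0, x_0) = 0 and the averaged `z_theta` should approach 1, while the terminal loss drops by orders of magnitude; replacing the per-step constants with the H-layer feedforward networks described above recovers the full algorithm.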
We shall point out that, in our setting, the coefficient function f_1(·) of the drift term does not contain the process {z_t}_{t∈[0,T]}. The above observation can also be viewed in a different way using stochastic control theory, so well-posedness is not an issue.

The approximation process is a combination of several sub-networks, which are connected between time t = t_n and time t = t_{n+1} by

( u^i(t_n, x_{t_n}), ∇u^i(t_n, x_{t_n}), W_{t_{n+1}} − W_{t_n} ) → u^i(t_{n+1}, x_{t_{n+1}})  and  ( x_{t_n}, W_{t_{n+1}} − W_{t_n} ) → x_{t_{n+1}}.

Within each sub-network, at each time t = t_n,

x_{t_n} → h^{i,1}_n → h^{i,2}_n → ... → h^{i,H}_n → ∇u^i(t_n, x_{t_n})

is an H-layer feedforward neural network. The goal of the whole network is to minimize the loss function l(θ). With these connections, there are in total (H+1)·(N−1)·d layers and d parameter sets to be optimized simultaneously. The following pseudo-code gives a brief idea of the algorithm.

Figure 3.1: Network Architecture.

3.2.2 Examples

Vanilla Call Options Based on Multiple Underlying Assets with Uncertain Interest Rates. Traditional vanilla call options consider only one standard risk-free interest rate. Here we instead consider a set of risk-free interest rates:

y_s ∈ E[ y_t + ∫_s^t R y_τ dτ | F_s ]  for 0 ≤ s < t ≤ T,
y_T = g(x_T),
x_t = x_0 + ∫_0^t b x_s ds + ∫_0^t σ x_s dW_s,

Algorithm 1 Deep Learning Based Method
1: Initialize the discretization parameter θ, mini-batch size B, number of time segments n, threshold ε.
2: Generate data: (1) simulate B paths of Brownian motions (W^i_{t_1}, ..., W^i_{t_n})_{1≤i≤B}, and (2) generate B paths of state processes (x^i_{t_1}, ..., x^i_{t_n})_{1≤i≤B}.
3: for f2 F 1 do 4: while loss(✓ )>"do 5: Randomly select a mini-batch of data, with batch size ˜ B 6: for j2{ 1,···,n} 7: z ✓ t j =( T r u)(t n ,x tn |✓ n ) 8: Compute y n t j+1 based on z ✓ t j and x i t j from Euler scheme 9: end 10: Compute loss(✓ )= E ⇥ |g(x t N ) ˆ u({x tn } 0 n N ,{W tn } 0 n N )| 2 ⇤ .Mini- mize loss, and update ✓ by stochastic gradient descent. whereRisasetofinterestrates{r}representingthesetofallconsiderableinterest rates. 3.2.3 Some Discussions on the Boundary In this section, we are going to discuss some nice properties of the boundary sets of the solution sets Y t at each time t2 [0,T]. The results show that the solutions of the BSDI with the terminal conditions on the boundary points of the terminal set ⌅ are the one with the boundary points of the solution sets at each time t2 [0,T] as well. This result will help us a lot on the simulation. Before we state the theorem, let us start with two lemma as following. Lemma 3.2.1. Let a2 ext(⌅) ✓ ⌅ be an extreme point of the terminal condition set⌅ . Consider y a t :=a+E ⇥R T t f(s,y a s )ds F t ⇤ as a solution to the BSDI with the terminal condition a2 ⌅ . Then for any t2 [0,T], it holds that y a t 2 ext(Y t ). 88 Proof: Suppose the claim is not true, then by the definition of the extreme points, there exists a time t2 [0,T], such that y a t = ↵y 1 t +(1 ↵ )y 2 t , where y 1 t , y 2 t 2 Y t and ↵ 2 (0,1). Let y 1 = y 1 t T t=0 and y 2 = y 2 t T t=0 . Here y 1 (or y 2 ) is the solution to BSDI(3.1). Also, since Y s is convex, then for any s2 [t,T], y a s = ↵y 1 s +(1 ↵ )y 2 s 2 Y s , where y 1 s , y 2 s 2 Y s .Inparticular, a =y a T = ↵y 1 T +(1 ↵ )y 2 T 2 ⌅ , wherey 1 T , y 2 T 2 ⌅. This is a contradiction with the fact that a is an extreme point. The lemma3.2.1 above showed that y a t ,a 2 ext(⌅) ✓ ext(Y t )forany t 2 [0,T]. Actually, the other way around holds as well. Indeed, we have the following lemma. Lemma 3.2.2. 
For any t \in [0,T], let u \in ext(Y_t) \subseteq Y_t be the time-t value y_t of a solution y of the BSDI. Then it holds that y_T \in ext(\Xi).

Proof: Suppose that y_T is not an extreme point of \Xi. That means there exists \alpha \in (0,1) such that y_T = \alpha \xi^1 + (1-\alpha) \xi^2, where \xi^1, \xi^2 \in \Xi. Let y^1 = {y^{\xi^1}_t}_{t=0}^T and y^2 = {y^{\xi^2}_t}_{t=0}^T; here y^1 (resp. y^2) is the solution to BSDI (3.1) with terminal condition \xi^1 \in \Xi (resp. \xi^2 \in \Xi). Then, due to the convexity of Y_t, for some \alpha' \in (0,1),

    y_t = \alpha' y^1_t + (1-\alpha') y^2_t,

where y^1_t, y^2_t \in Y_t. This contradicts the condition that u \in ext(Y_t).

Based on Lemma 3.2.1 and Lemma 3.2.2 above, we can now state the following theorem.

Theorem 3.2.3. For any t \in [0,T], ext(Y_t) = { y^a_t : a \in ext(\Xi) }.

Since these two sets are equal, we also have the following corollary.

Corollary 3.2.4. For any t \in [0,T], Y_t := co( ext(Y_t) ) = co( { y^a_t : a \in ext(\Xi) } ).

Up to now, we have concluded that the solutions of the BSDI whose terminal conditions lie at the extreme points of the terminal set \Xi are exactly those passing through the extreme points of the solution sets at each time t \in [0,T]. Combining this with the fact that the extreme points are contained in the boundary points, we conclude that the solutions of the BSDI whose terminal conditions lie at the boundary points of the terminal set \Xi pass through the boundary points of the solution sets at each time t \in [0,T]. These results are convenient for the simulation: instead of plotting all solutions of the BSDI, we only need to start from terminal conditions at the boundary points and plot the boundary solutions at each time t \in [0,T].
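As an illustration of how this boundary result simplifies the simulation, the sketch below solves backward from each extreme point of a polytope terminal set \Xi at once and collects the candidate extreme points of Y_t at every grid time. The deterministic generator f used here is a hypothetical stand-in for the conditional-expectation dynamics of the BSDI (for the uncertain-interest-rate example, each fixed rate r would correspond to f(t, y) = r y); the function name and signature are illustrative, not from the thesis.

```python
import numpy as np

def boundary_solutions(vertices, f, T=1.0, N=100):
    """Backward Euler for y^a_t = a + int_t^T f(s, y^a_s) ds, started from
    every extreme point a of the terminal set Xi simultaneously.

    vertices: array of shape (k, d) listing the extreme points of Xi.
    Returns paths of shape (N + 1, k, d); following Theorem 3.2.3,
    paths[n] holds the extreme points of Y_{t_n}, so Y_{t_n} is recovered
    as their convex hull (Corollary 3.2.4)."""
    dt = T / N
    paths = np.empty((N + 1,) + vertices.shape)
    paths[N] = vertices                          # terminal condition y^a_T = a
    for n in range(N, 0, -1):
        # one backward Euler step from t_n to t_{n-1}
        paths[n - 1] = paths[n] + f(n * dt, paths[n]) * dt
    return paths
```

For instance, with f(t, y) = r*y and a square \Xi in R^2, the four vertex solutions trace the boundary of Y_t, and plotting their convex hull at each grid time gives the evolution of the solution sets without simulating any interior terminal condition.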