OPTIMAL INVESTMENT AND REINSURANCE PROBLEMS AND RELATED NON-MARKOVIAN FBSDES WITH CONSTRAINTS

by

Tian Zhang

A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(APPLIED MATHEMATICS)

August 2015

Copyright 2015 Tian Zhang

Dedication

To my family

Acknowledgments

I am deeply indebted to my adviser, Professor Jin Ma, for persevering with me throughout the time of my research and dissertation. I have always been grateful to have such a gentleman as my adviser. This dissertation was possible only under the guidance of his immense knowledge, great enthusiasm, exceptional ideas, and ongoing encouragement.

I greatly appreciate Professor Jianfeng Zhang, a respected scholar and teacher who shared his wisdom and experience unreservedly. My thanks must also go to Professors Peter Baxendale, Remigijus Mikulevicius, and Yilmaz Kocer for serving on my dissertation committee and for all the comments and advice they provided.

It is my great honor to be a member of the USC math department. Thanks to the effort of all the faculty and staff members, I have enjoyed my time here over the past years. I must acknowledge many friends who assisted, advised, and supported me. In particular, I need to express my gratitude to Detao Zhang, whose friendship and knowledge enlightened me during the year he visited USC.

Last but not least, I would like to thank my mom Haiyan Zhao, my sister Mia Zhang, and my boyfriend Rentao Sun for standing by my side and supporting me spiritually all this time. My mother deserves special thanks for providing me the best education, environment, and freedom to grow up. I dedicate this dissertation to my family.

Table of Contents

Dedication ii
Acknowledgments iii
Chapter 1: Introduction 2
Chapter 2: Non-Markovian FBSDEs with Multidimensional Drivers 11
2.1 Problem Formulation 11
2.2 Decoupling Field 13
2.3 Heuristic Analysis 14
2.4 Case with Upper-Triangular $\sigma_3$ 24
2.5 Dominating Ordinary Differential Equation 26
2.6 Characteristic BSDE and Small Time Duration 30
2.7 Wellposedness of FBSDE 34
Chapter 3: Non-Markovian FBSDEs with Constraint on Z 36
3.1 Problem Formulation 36
3.2 Method of Penalization 37
3.3 Discussion of Strong Convergence of $Z^n$ under the Method of Penalization 65
3.4 Minimal Solution Property 87
Chapter 4: Applications: Optimal Reinsurance with Investment and Dividend 93
4.1 Problem Formulation 93
4.2 The Candidates of Optimal Control 100
4.3 The Optimal Closed-Loop System 104
4.4 Existence of Optimal Control 108
Chapter 5: Applications: Optimal Reinsurance with Counter-party Risk 112
5.1 Preliminaries and Problem Formulation 113
5.2 Proportional Reinsurance Policy 115
5.3 Compound Poisson Case 120
5.4 Cost Minimization 126
5.5 The Candidates of Optimal Control 127
5.6 The Optimal Closed-Loop System 130
Chapter 6: Bibliography 136

Chapter 1

Introduction

The goal of our research is to study a class of general non-Markovian forward-backward stochastic differential equations (FBSDEs) with a constraint on the Z process.
In particular, we consider the cases in which the driving Brownian motion is multi-dimensional and the coefficients are assumed to be only Lipschitz. Such FBSDEs are particularly motivated by applications to optimal reinsurance/investment/dividend problems and their closed-loop solutions via Pontryagin's stochastic maximum principle. The well-posedness of the FBSDE, especially with Z-constraints, turns out to be an important device that can lead to a closed-loop system for the optimal strategy. Mathematically, we are interested in the following FBSDE:

\[
\begin{cases}
X_t = x + \displaystyle\int_0^t b(s,X_s,Y_s,Z_s)\,ds + \int_0^t \langle \sigma(s,X_s,Y_s,Z_s),\, dW_s\rangle, \\[1ex]
Y_t = g(X_T) + \displaystyle\int_t^T f(s,X_s,Y_s,Z_s)\,ds - \int_t^T \langle Z_s, dW_s\rangle + D_T - D_t,
\end{cases}
\qquad t\in[0,T], \tag{1.1}
\]

where all the coefficients and the terminal condition are assumed to be Lipschitz in the corresponding spatial variables and are allowed to be random; the martingale integrand of the backward SDE, i.e., the process Z, is subject to the constraint that it remain in a (possibly random and time-varying) compact set; and D is an adapted, càdlàg, non-decreasing process. We should note that the non-decreasing process D plays the role of holding the Z process in the constraint region, and is part of the solution. Furthermore, in (1.1) both X and Y are one-dimensional, but the Brownian motion W is allowed to be multi-dimensional, and thus so is the process Z.
Our methodology depends heavily on the recent work of Ma-Wu-Zhang-Zhang [23], in which a unified approach is given to find the so-called "decoupling field" that is uniformly Lipschitz continuous, in order to solve the following (unconstrained) FBSDE under only Lipschitz conditions on the coefficients:

\[
\begin{cases}
X_t = x + \displaystyle\int_0^t b(s,X_s,Y_s,Z_s)\,ds + \int_0^t \langle \sigma(s,X_s,Y_s,Z_s),\, dW_s\rangle, \\[1ex]
Y_t = g(X_T) + \displaystyle\int_t^T f(s,X_s,Y_s,Z_s)\,ds - \int_t^T \langle Z_s, dW_s\rangle,
\end{cases}
\qquad t\in[0,T]. \tag{1.2}
\]

We should note that in [23] the driving Brownian motion is assumed to be one-dimensional; thus our first task is to extend the result of [23] to the multi-driver case under the same conditions on the coefficients.

For technical reasons, in this work we shall consider only the case in which (i) the coefficients of the forward diffusion, b and $\sigma$, are independent of Z; and (ii) the constraint on Z is a compact, multi-dimensional random interval (or "cube") whose boundaries are assumed to be adapted processes. In other words, each direction of the process Z is constrained by a closed interval whose boundary is a pair of adapted processes.

We shall follow the "method of penalization" to attack the well-posedness of the constrained FBSDE (1.1). More specifically, for each $n\in\mathbb{Z}$ we consider the so-called n-th "penalized system"

\[
\begin{cases}
X^n_t = x + \displaystyle\int_0^t b(s,X^n_s,Y^n_s,Z^n_s)\,ds + \int_0^t \langle \sigma(s,X^n_s,Y^n_s,Z^n_s),\, dW_s\rangle, \\[1ex]
Y^n_t = g(X^n_T) + \displaystyle\int_t^T f(s,X^n_s,Y^n_s,Z^n_s)\,ds + \int_t^T n\,\psi(s,Z^n_s)\,ds - \int_t^T \langle Z^n_s, dW_s\rangle,
\end{cases} \tag{1.3}
\]

where, instead of placing a constraint on $Z^n$, we add the so-called "penalty" term $\int_t^T n\,\psi(s,Z^n_s)\,ds$, which is zero if $Z^n$ satisfies the constraint and strictly positive otherwise. Based on the preceding analysis of the non-Markovian high-dimensional FBSDE without constraint, we can find conditions under which all the penalized FBSDEs are well-posed with adapted solutions $(X^n, Y^n, Z^n)$, and the corresponding uniformly Lipschitz decoupling fields $u^n$ exist.
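The penalized system replaces the hard constraint on Z by a penalty term that vanishes on the constraint set and is strictly positive off it. As a rough illustration only (the dissertation leaves the penalty function abstract at this point), the following sketch assumes the constraint "cube" is a coordinatewise interval $[\ell, u]$ and takes the penalty to be n times the Euclidean distance to it; the function name `penalty` and this particular distance choice are illustrative assumptions, not the dissertation's exact construction.

```python
import numpy as np

def penalty(z, lower, upper, n):
    """n-th penalization of the constraint z in [lower, upper] (componentwise):
    n times the Euclidean distance from z to the random interval ("cube").
    Zero when the constraint holds, strictly positive otherwise, and growing
    linearly in n -- the mechanism that forces Z^n toward the cube as n -> infinity."""
    z = np.asarray(z, dtype=float)
    # componentwise excess over the interval [lower_i, upper_i]
    excess = np.maximum(z - np.asarray(upper, dtype=float), 0.0) \
           + np.maximum(np.asarray(lower, dtype=float) - z, 0.0)
    return n * np.linalg.norm(excess)
```

Inside the cube the penalty contributes nothing to the driver of (1.3); outside, it scales with n, which is what drives the penalized solutions toward the constrained one in the limit.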
Consequently, we would be able to construct an adapted solution to the original constrained FBSDE after establishing the convergence properties of $(X^n, Y^n, Z^n)$. One of the main difficulties here is that we cannot obtain the $L^2$ convergence of $Z^n$, and consequently we fail to show directly that the limiting processes actually solve the original constrained FBSDE. One way to resolve this issue is to show only the weak convergence of $Z^n$, and to construct the non-negative RCLL process D using the weak limit of $Z^n$ along with the strong limits of $X^n$ and $Y^n$. This extends the methods used by Buckdahn and Hu in [6]. On the other hand, we notice that if the limiting decoupling field is continuous in time, then we can improve the convergence result by showing that $Z^n$ converges to Z (the weak limit of $Z^n$) in $L^p$, $p\in[1,2)$, which simplifies the construction of D.

The second goal of this work is to use the results on FBSDE (1.1) to solve the optimal (proportional) reinsurance and investment problem with dividend, which we now briefly describe. We assume that the reserve process of a financial institution (in particular, an insurance company) has dynamics that can be modeled by a diffusion whose coefficients are allowed to be random. Also, we assume that the time horizon of the optimization problem can be random, due to the possibility of, e.g., bankruptcy or another form of "common shock" (cf. e.g. [5]). It is clear that such a model is non-Markovian in nature, and its corresponding closed-loop system has not been fully explored in the literature, due to the lack of knowledge of the resulting non-Markovian FBSDE.
Mathematically, we start from the simplest diffusion-type risk reserve model with proportional reinsurance:

\[
dX_t = \mu_t a_t\,dt + \sigma^0_t a_t\,dW^0_t, \qquad X_0 = x,
\]

where the drift-diffusion pair $(\mu, \sigma^0)$ is determined by the diffusion approximation of the original Cramér-Lundberg reserve model (see, e.g., Grandell [12]), and $a_t\in[0,1]$ is the so-called "retention level" of the reinsurance policy (cf. e.g. Gerber [11] and Bühlmann [7]), namely, the portion of the incoming claim that the insurance company retains as its own liability.

Now let us assume that the insurance company is allowed to invest its reserve in a securities market. For simplicity we assume that there are only two assets traded continuously, one risky and one riskless, with prices at time t denoted by $P_t$ and $P^0_t$, following the standard stochastic dynamics:

\[
\begin{cases}
dP_t = P_t[b_t\,dt + \sigma_t\,dW_t], & P_0 = p, \\
dP^0_t = P^0_t r_t\,dt, & P^0_0 = p_0,
\end{cases}
\]

where, again, the appreciation rate $b=\{b_t\}$, the volatility $\sigma=\{\sigma_t\}$, and the interest rate $r=\{r_t\}$ are all allowed to be random. If we denote by $\pi_t$ the amount of money invested in the risky asset at time t (so that $X_t-\pi_t$ is invested in the risk-free asset), then one can easily show that the reserve process satisfies the following (linear) stochastic differential equation (SDE):

\[
dX_t = [r_t X_t + \mu_t a_t + \sigma_t\pi_t\theta_t]\,dt + \sigma^0_t a_t\,dW^0_t + \sigma_t\pi_t\,dW_t, \qquad X_0 = x,
\]

where $\theta_t = \sigma_t^{-1}(b_t - r_t)$, $t\ge 0$, is the so-called "risk premium".

Our optimization problem is to choose a combined (proportional) reinsurance policy and an investment portfolio with dividend, $v := (a,\pi,D)$, so as to optimize a cost functional which we assume to be of the following form:

\[
J(v(\cdot)) = \mathbb{E}\Big\{\int_0^{\tau\wedge T} L(v_s, X_s)\,ds + U(X^v_{\tau\wedge T})\Big\},
\]

where $\tau$ is the random horizon at which the insurance company ceases to operate, and L and U are given functions.
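The controlled reserve dynamics derived above can be made concrete with a short Euler-Maruyama sketch. This is illustrative only: the dissertation allows all coefficients and controls to be random processes, whereas the sketch below freezes them as constants; all names and parameter choices are assumptions for the example.

```python
import numpy as np

def simulate_reserve(x0, T, N, r, mu, sigma0, sigma, theta, a, pi, rng):
    """Euler-Maruyama sketch of the controlled reserve
        dX = [r*X + mu*a + sigma*pi*theta] dt + sigma0*a dW0 + sigma*pi dW,
    with constant coefficients, constant retention level a in [0,1], and a
    constant amount pi invested in the risky asset. Returns the sample path
    on a grid of N steps over [0, T]."""
    dt = T / N
    X = np.empty(N + 1)
    X[0] = x0
    for k in range(N):
        # two independent Brownian increments: claims noise W0, market noise W
        dW0, dW = rng.normal(0.0, np.sqrt(dt), size=2)
        drift = r * X[k] + mu * a + sigma * pi * theta
        X[k + 1] = X[k] + drift * dt + sigma0 * a * dW0 + sigma * pi * dW
    return X
```

With both volatilities set to zero, the scheme reduces to the deterministic drift and can be checked against the explicit solution of the resulting linear ODE.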
We note that the random time $\tau$ need not be a stopping time with respect to the filtration generated by the Brownian motions $(W, W^0)$, but for technical clarity we assume that $\tau$ is a random time representing a "common shock". It is now clear that our optimization problem is in fact a stochastic control problem with random terminal time. In light of Jeanblanc et al. [5], we can easily convert it to a more standard stochastic control problem. We refer to [13, 14, 15, 16] for reinsurance problems involving diffusion approximations. We should point out, however, that most of the previous works considered Markovian models, and the problem was solved via dynamic programming, or more precisely, by solving the HJB equation. These methods become powerless in the current situation, since the solution to the SDE (4.5), albeit linear, is non-Markovian. We therefore take a different route via the stochastic maximum principle. In other words, we shall find a candidate for the optimal strategy through the necessary conditions, and then validate the existence of the optimal strategy by proving the well-posedness of the constrained "closed-loop" stochastic system. The main difficulty of this method eventually comes down to proving the well-posedness of a strongly coupled, constrained, non-Markovian FBSDE with only Lipschitz coefficients. This is where our earlier results on the constrained high-dimensional FBSDE apply.

Furthermore, we consider a more complicated but realistic model with counter-party risk. Aside from reinsurance, financial investment, and dividend distribution, the insurer is exposed to the possibility of default by the reinsurer (which we regard as counter-party risk). At default, the reinsurer would provide only part of the originally promised indemnity.
More specifically, we start by considering the following reserve process of a simple continuous insurance model:

\[
X_t = x + \int_0^t c_s(1+\rho_s)\,ds - S_t = x + \int_0^t c_s(1+\rho_s)\,ds - \int_0^t\!\int_{\mathbb{R}} f(s,z)\,N_p(ds,dz),
\]

where $X_t$ represents the reserve process of the insurer with initial endowment x; $c_t$ is the premium process paid to the insurer by the buyers of the insurance, with safety loading $\rho_t$; and $S_t = \int_0^t\int_{\mathbb{R}} f(s,z)\,N_p(ds,dz)$ represents the claim process coming to the insurer, following a jump process with density $f(t,z)$.

In addition, the insurer is allowed to purchase a proportional reinsurance policy, a random field $\alpha:[0,\infty)\times\mathbb{R}_+\to[0,1]$. Given a reinsurance policy $\alpha$, the part of the claim that the insurance company retains during any time period $[t, t+\Delta t]$ is assumed to be $[\alpha\cdot S]_t^{t+\Delta t}$, where

\[
[\alpha\cdot S]_t^{t+\Delta t} = \int_t^{t+\Delta t}\!\int_{\mathbb{R}_+} \alpha(s,z)f(s,z)\,N_p(ds,dz).
\]

That is to say, the part of the claim the insurer cedes to the reinsurer is $[(1-\alpha)\cdot S]_t^{t+\Delta t}$. Now, under the proportional reinsurance policy $\alpha$, the reserve process of the insurer becomes:

\[
X_t = x + \int_0^t m(s,\alpha)(1+\tilde\rho_s)\,ds - \int_0^t\!\int_{\mathbb{R}} \alpha(s,z)f(s,z)\,N_p(ds,dz),
\]

where

\[
m(s,\alpha) = \int_{\mathbb{R}_+} \alpha(s,z)f(s,z)\,\nu(dz)
\]

is the modified premium process, taking into account the premium charged by the reinsurer for the reinsurance policy. Meanwhile, $\tilde\rho_t$ is the modified safety loading process in this case, calculated following the "profit margin principle". Finally, the last term is the retained part of the original claim process, while the rest is indemnified by the reinsurer.

Now, to include the counter-party risk in our model, let $\tau$ be a random time on the original probability space, and define the corresponding right-continuous process $H_t = 1_{\{\tau\le t\}}$. At the time of default, we assume there is compensated recovery: let $\gamma$ represent the "recovery rate", that is, in case of default, the percentage of the promised indemnity that the reinsurer is still able to pay. For simplicity, we assume $\gamma\equiv\gamma_0$ for some $\gamma_0\in(0,1)$.
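The proportional split of a claim stream between insurer and reinsurer can be sketched in a few lines. This is a simplification for illustration: the dissertation's retention $\alpha$ is a random field of time and claim size, while the sketch below uses a constant retention level; the function name `split_claims` is an assumption of this example.

```python
import numpy as np

def split_claims(claim_sizes, alpha):
    """Split a batch of claim sizes between insurer and reinsurer under a
    proportional reinsurance policy with constant retention level alpha in [0,1].
    Returns (retained_total, ceded_total); their sum recovers the total claim
    amount, mirroring [alpha.S] + [(1-alpha).S] = S."""
    claims = np.asarray(claim_sizes, dtype=float)
    retained = alpha * claims          # insurer's own liability
    ceded = (1.0 - alpha) * claims     # indemnified by the reinsurer
    return retained.sum(), ceded.sum()
```

Under counter-party risk, the ceded part is only partially recovered after default, which is exactly the role of the recovery rate $\gamma_0$ introduced above.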
The reserve process under proportional reinsurance with counter-party risk is then:

\[
X_t = x + \int_0^t (1+\tilde\rho_s)\,\tilde m(s,\alpha)\,ds - \int_0^t\!\int_{\mathbb{R}} \big[\alpha + (1-\gamma_0)(1-\alpha)H_s\big]f(s,z)\,N_p(ds,dz),
\]

where $\tilde m(t,\alpha)$ denotes the modified premium process with counter-party risk,

\[
\tilde m(t,\alpha) = \int_{\mathbb{R}} \big[q_t\,\alpha + \big(\alpha + (1-\gamma_0)(1-\alpha)\big)p_t\big]f(t,z)\,\nu(dz),
\]

and $p_t = P(\tau\le t\,|\,\mathcal{F}_t) = P(H_t = 1\,|\,\mathcal{F}_t)$, $q_t = 1-p_t = P(\tau > t\,|\,\mathcal{F}_t) = P(H_t = 0\,|\,\mathcal{F}_t)$; here $p_t$ is the conditional default probability given the information up to time t, and consequently $q_t$ is the "survival probability".

For simplicity, we consider the case of compound Poisson claims with diffusion approximation. Following similar arguments as before, we obtain sufficient conditions under which the admissible control constructed from the solution of the corresponding constrained FBSDE is optimal, minimizing the cost of the insurer.

Chapter 2

Non-Markovian FBSDEs with Multidimensional Drivers

2.1 Problem Formulation

Throughout this section we assume that all uncertainties come from a common complete probability space $(\Omega, \mathcal{F}, \mathbb{P})$ on which is defined a d-dimensional Brownian motion $W = \{W_t : t\ge 0\}$. For notational clarity, we denote by $\mathbb{F}$ the filtration generated by W with the usual $\mathbb{P}$-augmentation, such that it satisfies the usual hypotheses (cf. e.g. Protter [27]).

Further, for a generic Euclidean space $\mathbb{X}$, regardless of its dimension, we denote by $\langle\cdot,\cdot\rangle$ and $|\cdot|$ its inner product and norm, respectively. We denote the space of continuous functions with the usual sup-norm by $C([0,T];\mathbb{X})$, and we shall make use of the following notation:

- For any sub-$\sigma$-field $\mathcal{G}\subseteq\mathcal{F}_T$ and $1\le p<\infty$, $L^p(\mathcal{G};\mathbb{X})$ denotes the space of all $\mathbb{X}$-valued, $\mathcal{G}$-measurable random variables $\xi$ such that $\mathbb{E}|\xi|^p<\infty$. As usual, $\xi\in L^\infty(\mathcal{G};\mathbb{X})$ means that it is $\mathcal{G}$-measurable and bounded.
- For $1\le p<\infty$, $L^p_{\mathbb{F}}([0,T];\mathbb{X})$ denotes the space of all $\mathbb{X}$-valued, $\mathbb{F}$-progressively measurable processes $\xi$ satisfying $\mathbb{E}\int_0^T|\xi_t|^p\,dt<\infty$. The space $L^\infty_{\mathbb{F}}([0,T];\mathbb{X})$ is defined similarly.
Let us now consider the following forward-backward stochastic differential equation (FBSDE):

\[
\begin{cases}
X_t = x + \displaystyle\int_0^t b(s,X_s,Y_s,Z_s)\,ds + \int_0^t \langle \sigma(s,X_s,Y_s,Z_s),\, dW_s\rangle, \\[1ex]
Y_t = g(X_T) + \displaystyle\int_t^T f(s,X_s,Y_s,Z_s)\,ds - \int_t^T \langle Z_s, dW_s\rangle,
\end{cases} \tag{2.1}
\]

where

(H1) $T>0$ is a fixed time horizon; the coefficients $b, f: [0,T]\times\mathbb{R}^{2+d}\times\Omega\to\mathbb{R}$ and $\sigma: [0,T]\times\mathbb{R}^{2+d}\times\Omega\to\mathbb{R}^d$ are $\mathbb{F}$-progressively measurable for any fixed $(x,y,z)\in\mathbb{R}^{2+d}$; and $g:\mathbb{R}\times\Omega\to\mathbb{R}$ is $\mathcal{F}_T$-measurable for any fixed $x\in\mathbb{R}$.

Our purpose is to find $\mathbb{F}$-progressively measurable, square-integrable solutions such that (2.1) holds for all $t\in[0,T]$. We shall also make use of the following standing assumptions throughout this section:

(H2) The functions $g(x)$ and $f(t,x,0,0)$ are uniformly bounded, $\mathbb{P}$-a.s.

(H3) The functions b, $\sigma$, g and f are uniformly Lipschitz continuous with respect to the corresponding spatial variables; that is, there exist constants $K_1, K_2, K_3, K_4$ such that for all $\omega\in\Omega$,

\[
\begin{aligned}
|b(t,x_1,y_1,z_1,\omega) - b(t,x_2,y_2,z_2,\omega)| &\le K_1(|x_1-x_2| + |y_1-y_2| + |z_1-z_2|), \\
|f(t,x_1,y_1,z_1,\omega) - f(t,x_2,y_2,z_2,\omega)| &\le K_2(|x_1-x_2| + |y_1-y_2| + |z_1-z_2|), \\
|\sigma(t,x_1,y_1,z_1,\omega) - \sigma(t,x_2,y_2,z_2,\omega)| &\le K_3(|x_1-x_2| + |y_1-y_2| + |z_1-z_2|), \\
|g(x_1,\omega) - g(x_2,\omega)| &\le K_4|x_1-x_2|.
\end{aligned}
\]

Definition 2.1.1 Under the standing assumptions (H1)-(H3), a triplet of processes $(X,Y,Z)$ is called a solution of the FBSDE (2.1) if it satisfies (2.1) and

\[
(X,Y)\in L^2_{\mathbb{F}}([0,T];\mathbb{R}^2), \qquad Z\in L^2_{\mathbb{F}}([0,T];\mathbb{R}^d).
\]

2.2 Decoupling Field

To facilitate the discussion, we often consider the FBSDE (2.1) on a subinterval $[t_1, t_2]$:

\[
\begin{cases}
X_t = \xi + \displaystyle\int_{t_1}^t b(s,X_s,Y_s,Z_s)\,ds + \int_{t_1}^t \langle \sigma(s,X_s,Y_s,Z_s),\, dW_s\rangle, \\[1ex]
Y_t = \varphi(X_{t_2}) + \displaystyle\int_t^{t_2} f(s,X_s,Y_s,Z_s)\,ds - \int_t^{t_2} \langle Z_s, dW_s\rangle,
\end{cases} \tag{2.2}
\]

where $\xi\in L^2(\mathcal{F}_{t_1})$ and $\varphi(x,\cdot)\in L^2(\mathcal{F}_{t_2})$ for any given x. Let $\Theta^{t_1,t_2,\xi,\varphi}$ denote the solution to FBSDE (2.2), if it exists. In particular, $\Theta^{t,x} := \Theta^{t,T,x,g}$.
Recall the definition of "decoupling field" from [23] by Ma, Wu, Zhang and Zhang:

Definition 2.2.1 An $\mathbb{F}$-progressively measurable random field $u:[0,T]\times\mathbb{R}\times\Omega\to\mathbb{R}$ with $u(T,x)=g(x)$ is a "decoupling field" of FBSDE (2.1) if there exists a constant $\delta>0$ (possibly depending on the dimension d) such that, for any $0\le t_1<t_2\le T$ with $t_2-t_1\le\delta$ and any $\xi\in L^2(\mathcal{F}_{t_1})$, the FBSDE (2.2) with initial value $\xi$ and terminal condition $u(t_2,\cdot)$ has a unique adapted solution that satisfies $Y_t = u(t,X_t)$ for $t\in[t_1,t_2]$, $\mathbb{P}$-a.s.

Although we omit the $\omega$, keep in mind its presence in the random coefficients and the decoupling field. Furthermore, we include the following theorem from [23], which shows the existence of a unique solution once the corresponding FBSDE possesses a decoupling field.

Theorem 2.2.2 Under assumptions (H1)-(H3), if there exists a decoupling field u for FBSDE (2.1), then the FBSDE (2.1) has a unique solution, and the following holds over an arbitrary duration $[0,T]$:

\[
Y_t = u(t, X_t).
\]

Furthermore, the decoupling field is also unique.

2.3 Heuristic Analysis

For the well-posedness of the FBSDE, we refer to the method described in Ma, Wu, Zhang and Zhang [23]. That paper carried out a systematic analysis for finding a decoupling scheme for the FBSDE system, which was essentially related to the corresponding variational FBSDE, and consequently to the existence of a uniformly bounded solution of the characteristic BSDE. In what follows we shall use the following notation introduced in [23]: let $\Theta_j := (x_j, y_j, z_j)\in\mathbb{R}^{2+d}$, $j = 1, 2$.
For a Lipschitz continuous function $\varphi = b, \sigma, f$, we denote

\[
\begin{cases}
h(x_1,x_2) = \dfrac{g(x_1)-g(x_2)}{x_1-x_2}, \\[1.5ex]
\varphi_1(t,\Theta_1,\Theta_2) = \dfrac{\varphi(t,x_1,y_1,z_1)-\varphi(t,x_2,y_1,z_1)}{x_1-x_2}, \\[1.5ex]
\varphi_2(t,\Theta_1,\Theta_2) = \dfrac{\varphi(t,x_2,y_1,z_1)-\varphi(t,x_2,y_2,z_1)}{y_1-y_2}, \\[1.5ex]
\varphi_3^i(t,\Theta_1,\Theta_2) = \dfrac{\varphi(t,x_2,y_2,z_2^1,\dots,z_2^{i-1},z_1^i,z_1^{i+1},\dots,z_1^d)-\varphi(t,x_2,y_2,z_2^1,\dots,z_2^{i-1},z_2^i,z_1^{i+1},\dots,z_1^d)}{z_1^i-z_2^i},\quad i=1,\dots,d.
\end{cases} \tag{2.3}
\]

Now, suppose the FBSDE is well-posed on $[0,T]$ for arbitrary initial value x, and there exists a uniformly Lipschitz decoupling field $u = u(t,x)$ such that $Y_t = u(t,X_t)$ holds. Let $\Theta^i$ denote the unique solution to the FBSDE with initial condition $x_i$, $i=1,2$, and define the following difference quotients:

\[
\nabla\Theta := \frac{\Theta^1-\Theta^2}{x_1-x_2}, \qquad \nabla u(t) = \frac{u(t,X^1_t)-u(t,X^2_t)}{X^1_t-X^2_t}.
\]

Since $Y^i_t = u(t,X^i_t)$, $i=1,2$, one has

\[
\nabla Y_t = \nabla u(t)\,\nabla X_t.
\]

Consider the FBSDE (2.1), and let $(\nabla X,\nabla Y,\nabla Z) := \nabla\Theta = (\nabla X_t,\nabla Y_t,\nabla Z_t)$ denote the difference quotients in the spatial variable. Note that since $Z = [Z^1,\dots,Z^d]^T$, we have $\nabla Z_t = [\nabla Z^1_t,\dots,\nabla Z^d_t]^T$. One can then easily check that $(\nabla X,\nabla Y,\nabla Z)$ satisfies the following FBSDE, called the variational FBSDE as in [23]:

\[
\begin{cases}
\nabla X_t = 1 + \displaystyle\int_0^t \big[b_1\nabla X_s + b_2\nabla Y_s + \langle b_3,\nabla Z_s\rangle\big]ds + \int_0^t \langle \sigma_1\nabla X_s + \sigma_2\nabla Y_s + \sigma_3\nabla Z_s,\, dW_s\rangle, \\[1ex]
\nabla Y_t = h\,\nabla X_T + \displaystyle\int_t^T \big[f_1\nabla X_s + f_2\nabla Y_s + \langle f_3,\nabla Z_s\rangle\big]ds - \int_t^T \langle \nabla Z_s, dW_s\rangle,
\end{cases} \tag{2.4}
\]

where $h = h(X^1_T, X^2_T)$.

Remark 2.3.1 $\sigma$ is a d-dimensional function with each component depending on $(X,Y,Z)$ in general; hence it is not hard to see that both $\sigma_1$ and $\sigma_2$ are d-dimensional column vectors, where the high dimension comes from the dimension of the range of $\sigma$. Meanwhile, $\sigma_3$ is a square matrix in $\mathbb{R}^{d\times d}$, where the row-space dimension d is due to the dimension of $\sigma$'s range, while the column-space dimension is due to the d-dimensionality of the Z process.

Since $\nabla Y_t = \nabla u(t)\nabla X_t$, u being uniformly Lipschitz is equivalent to $\nabla u(t) = \nabla Y_t/\nabla X_t$ being uniformly bounded.
Hence define $\hat Y_t := \nabla u(t) = \nabla Y_t/\nabla X_t$, assuming $\nabla X_t\neq 0$; then Itô's formula implies that

\[
d\hat Y_t = \frac{d\nabla Y_t}{\nabla X_t} - \frac{\nabla Y_t}{(\nabla X_t)^2}\,d\nabla X_t - \frac{1}{(\nabla X_t)^2}\,d\nabla X_t\,d\nabla Y_t + \frac{\nabla Y_t}{(\nabla X_t)^3}\,d\nabla X_t\,d\nabla X_t. \tag{2.5}
\]

Further define

\[
\hat Z_t := \frac{\nabla Z_t - \hat Y_t\big(\sigma_1\nabla X_t + \sigma_2\nabla Y_t + \sigma_3\nabla Z_t\big)}{\nabla X_t}; \tag{2.6}
\]

assuming that $I - \sigma_3\hat Y_t$ is invertible, we obtain

\[
\nabla Y_t = \hat Y_t\,\nabla X_t, \qquad \nabla Z_t = (I - \sigma_3\hat Y_t)^{-1}\big[\hat Z_t + \hat Y_t(\sigma_1 + \sigma_2\hat Y_t)\big]\nabla X_t. \tag{2.7}
\]

Combining (2.5) with (2.4) and grouping the four terms of (2.5), we obtain the SDE in differential form

\[
d\hat Y_t = I_1 + I_2 + I_3 + I_4, \tag{2.8}
\]

and we consider the summation on the right-hand side part by part, for simplicity of the calculation. The first part is

\[
I_1 = \frac{d\nabla Y_t}{\nabla X_t} = -\Big[f_1 + f_2\hat Y_t + \Big\langle f_3, \frac{\nabla Z_t}{\nabla X_t}\Big\rangle\Big]dt + \Big\langle \frac{\nabla Z_t}{\nabla X_t}, dW_t\Big\rangle, \tag{2.9}
\]

and

\[
\begin{aligned}
I_2 &= -\hat Y_t\Big(\Big[b_1 + b_2\hat Y_t + \Big\langle b_3, \frac{\nabla Z_t}{\nabla X_t}\Big\rangle\Big]dt + \Big\langle \sigma_1 + \sigma_2\hat Y_t + \sigma_3\frac{\nabla Z_t}{\nabla X_t},\, dW_t\Big\rangle\Big), \\
I_3 &= -\Big\langle \frac{\nabla Z_t}{\nabla X_t}, dW_t\Big\rangle\Big\langle \sigma_1 + \sigma_2\hat Y_t + \sigma_3\frac{\nabla Z_t}{\nabla X_t},\, dW_t\Big\rangle = -\Big\langle \frac{\nabla Z_t}{\nabla X_t},\, \sigma_1 + \sigma_2\hat Y_t + \sigma_3\frac{\nabla Z_t}{\nabla X_t}\Big\rangle dt, \\
I_4 &= \hat Y_t\,\Big|\sigma_1 + \sigma_2\hat Y_t + \sigma_3\frac{\nabla Z_t}{\nabla X_t}\Big|^2 dt,
\end{aligned} \tag{2.10}
\]

since we have assumed that $W_t$ has independent components. Notice that, according to (2.6), $\nabla Z_t/\nabla X_t$ can be written in terms of $\hat Y_t$ and $\hat Z_t$,

\[
\frac{\nabla Z_t}{\nabla X_t} = (I - \sigma_3\hat Y_t)^{-1}\big[\hat Z_t + \hat Y_t(\sigma_1 + \sigma_2\hat Y_t)\big];
\]

hence we can derive a BSDE for the pair $(\hat Y_t, \hat Z_t)$ defined previously.
Further simplification of the differential form of $\hat Y$, by combining $I_1$ through $I_4$, yields

\[
\begin{aligned}
d\hat Y_t ={}& -\Big\{\Big[f_1 + f_2\hat Y_t + \Big\langle f_3, \frac{\nabla Z_t}{\nabla X_t}\Big\rangle\Big] + \hat Y_t\Big[b_1 + b_2\hat Y_t + \Big\langle b_3, \frac{\nabla Z_t}{\nabla X_t}\Big\rangle\Big] \\
&\quad + \Big\langle \frac{\nabla Z_t}{\nabla X_t},\, \sigma_1 + \sigma_2\hat Y_t + \sigma_3\frac{\nabla Z_t}{\nabla X_t}\Big\rangle - \hat Y_t\Big|\sigma_1 + \sigma_2\hat Y_t + \sigma_3\frac{\nabla Z_t}{\nabla X_t}\Big|^2\Big\}dt \\
&+ \Big\langle \frac{\nabla Z_t}{\nabla X_t} - \hat Y_t\Big(\sigma_1 + \sigma_2\hat Y_t + \sigma_3\frac{\nabla Z_t}{\nabla X_t}\Big),\, dW_t\Big\rangle. \tag{2.11}
\end{aligned}
\]

Notice that the integrand of the diffusion term obtained is exactly the $\hat Z_t$ defined before; this is in fact the reason why we defined $\hat Z_t$ as we did. Now, plugging in

\[
\frac{\nabla Z_t}{\nabla X_t} = (I - \sigma_3\hat Y_t)^{-1}\big[\hat Z_t + \hat Y_t(\sigma_1 + \sigma_2\hat Y_t)\big],
\]

we obtain, after simplification, the desired characteristic BSDE for $\hat Y_t$:

\[
\hat Y_t = h + \int_t^T \Big[F_s(\hat Y_s) + \langle G_s(\hat Y_s), \hat Z_s\rangle + \langle \Gamma_s(\hat Y_s)\hat Z_s, \hat Z_s\rangle\Big]ds - \int_t^T \langle \hat Z_s, dW_s\rangle, \tag{2.12}
\]

where

\[
\begin{cases}
F_s(y) = (b_1 + b_2 y)y + f_1 + f_2 y + \big\langle (b_3 y + f_3)y,\ (I-\sigma_3 y)^{-1}(\sigma_1 + \sigma_2 y)\big\rangle, \\
G_s(y) = (I-\sigma_3 y)^{-1}(b_3 y + f_3) + \sigma_1 + \sigma_2 y + \sigma_3(I-\sigma_3 y)^{-1}y(\sigma_1 + \sigma_2 y), \\
\Gamma_s(y) = \sigma_3(I-\sigma_3 y)^{-1}.
\end{cases} \tag{2.13}
\]

One of our main tasks for the rest of this section is to find conditions under which the BSDE (2.12) has a solution $(\hat Y, \hat Z)$ such that

\[
\hat Y \ \text{and}\ (I-\sigma_3\hat Y)^{-1} \ \text{are bounded.} \tag{2.14}
\]

Remark 2.3.2 The boundedness conditions above are necessary for the well-posedness of the characteristic BSDE.

Now a uniformly Lipschitz decoupling field is equivalent to a uniformly bounded solution of the corresponding characteristic BSDE. One crucial point in finding such a solution, as pointed out in [23], is the strategy for obtaining the a priori uniform estimate of $\hat Y$. To begin, we recall the deterministic upper and lower bounds for a bounded random variable, as defined in [23].
For a bounded random variable $\xi$, set

\[
\overline\xi := \operatorname*{esssup}_\omega \xi = \inf\{a\in\mathbb{R} : \xi\le a\ \text{a.s.}\}, \qquad \underline\xi := \operatorname*{essinf}_\omega \xi = \sup\{a\in\mathbb{R} : \xi\ge a\ \text{a.s.}\}.
\]

For any $\Theta_j := (x_j,y_j,z_j)$, $j=1,2$, we define

\[
\begin{aligned}
\overline h &:= \operatorname*{esssup}_\omega\ \sup_{x_1\ne x_2} h(x_1,x_2), &\qquad \underline h &:= \operatorname*{essinf}_\omega\ \inf_{x_1\ne x_2} h(x_1,x_2), \\
\overline F_s(y) &:= \operatorname*{esssup}_\omega\ \sup_{x_1\ne x_2,\,y_1\ne y_2,\,z_1\ne z_2} F_s(\Theta_1,\Theta_2,y), &\qquad \underline F_s(y) &:= \operatorname*{essinf}_\omega\ \inf_{x_1\ne x_2,\,y_1\ne y_2,\,z_1\ne z_2} F_s(\Theta_1,\Theta_2,y).
\end{aligned}
\]

The following lemma is useful for analyzing the bounds of the coefficients of the characteristic BSDE.

Lemma 2.3.3 For any matrix $A = [A_1,\dots,A_d]\in\mathbb{R}^{d\times d}$, the following relation holds:

\[
\max_{1\le i\le d}|A_i| \ \le\ |A| \ \le\ \sqrt d\,\max_{1\le i\le d}|A_i|,
\]

where for a vector $|\cdot|$ is simply the Euclidean norm, whereas for a matrix $|\cdot|$ stands for the induced norm, i.e. $|A| = \max_{v\ne 0}|Av|/|v|$.

Proof. For arbitrary $v\in\mathbb{R}^d$,

\[
|Av| = \Big|\sum_{i=1}^d v_i A_i\Big| \le \sum_{i=1}^d |v_i|\,|A_i| \le \max_{1\le i\le d}|A_i|\sum_{i=1}^d |v_i| \le \sqrt d\,|v|\max_{1\le i\le d}|A_i|;
\]

hence, dividing both sides by $|v|$ and taking the supremum over all non-zero $v\in\mathbb{R}^d$ yields

\[
|A| = \max_{v\ne 0}\frac{|Av|}{|v|} \le \sqrt d\,\max_{1\le i\le d}|A_i|.
\]

For the other side of the inequality, let m be the index of the column of A with the maximum norm, and take $v = e_m$; then

\[
\frac{|Av|}{|v|} = |Av| = |A_m| = \max_{1\le i\le d}|A_i|.
\]

Hence, taking the maximum on the left, we obtain

\[
|A| = \max_{v\ne 0}\frac{|Av|}{|v|} \ge \max_{1\le i\le d}|A_i|,
\]

and the proof of both sides of the inequality is complete.

Remark 2.3.4 For arbitrary $v\in\mathbb{R}^d$, $A\in\mathbb{R}^{d\times d}$, we have $|Av|\le|A||v|$, thanks to the definitions of the vector and matrix norms above. This sub-multiplicative property extends to any matrix-vector product.

Notice that, by assumption (H3), b, f, $\sigma$ are Lipschitz in $(x,y,z)$, i.e. Lipschitz in every component as well as in every direction of z (since $z\in\mathbb{R}^d$). Hence, by the definition of $\varphi_i$, $i=1,2,3$, for $\varphi = b, f$, we obtain that $|\varphi_i|$ is bounded by the corresponding Lipschitz constant. Furthermore, we present the analysis of $\sigma$, whose high dimensionality results in the square-matrix form of $\sigma_3$. Recall $\sigma = \sigma(t,X_t,Y_t,Z_t)\in\mathbb{R}^d$, where $(X,Y,Z)\in\mathbb{R}^{2+d}$.
First consider $\sigma_1$, which by definition equals, for given $\Theta_i = (X^i, Y^i, Z^i)$, $i=1,2$,

\[
\sigma_1(t,\Theta_1,\Theta_2) = \frac{\sigma(t,X^1_t,Y^1_t,Z^1_t) - \sigma(t,X^2_t,Y^1_t,Z^1_t)}{X^1_t - X^2_t},
\]

where obviously the numerator is in $\mathbb{R}^d$ while the denominator is in $\mathbb{R}$. Further recall that (H3) states that, for all $(x_i,y_i,z_i)$, $i=1,2$, and all $\omega\in\Omega$,

\[
|\sigma(t,x_1,y_1,z_1,\omega) - \sigma(t,x_2,y_2,z_2,\omega)| \le K_3(|x_1-x_2| + |y_1-y_2| + |z_1-z_2|);
\]

then, taking $x_i = X^i_t$, $y_i = Y^1_t$, $z_i = Z^1_t$, $i=1,2$, and moving the absolute difference of X to the left-hand side of the inequality, we obtain the desired bound $K_3$ for $\sigma_1$. Similarly, we obtain the same bound for $\sigma_2$. The bounds for the difference quotients in the direction of Z of the other functions b, f are very similar; the only difference is that the high dimensionality no longer comes from the function itself but from $Z\in\mathbb{R}^d$ instead. However, the same technique applies.

Now, we proceed to take a deeper look at $\sigma_3 = [\sigma_3^1, \sigma_3^2, \dots, \sigma_3^d]$, whose i-th component is a column vector consisting of difference quotients in the i-th direction of Z for each component function of $\sigma$. More specifically, for the same given pair of entries, $\sigma_3^i = [\sigma_3^{i,1},\dots,\sigma_3^{i,d}]^T$, where

\[
\sigma_3^{i,j} = \frac{\sigma^j(t,X^2_t,Y^2_t,Z^{1,2}_t,\dots,Z^{i-1,2}_t,Z^{i,1}_t,\dots,Z^{d,1}_t) - \sigma^j(t,X^2_t,Y^2_t,Z^{1,2}_t,\dots,Z^{i-1,2}_t,Z^{i,2}_t,Z^{i+1,1}_t,\dots,Z^{d,1}_t)}{Z^{i,1}_t - Z^{i,2}_t}.
\]

Remark 2.3.5 The lower index 3 of $\sigma$ indicates that the difference quotient is taken in the direction of Z; the first upper index indicates the specific component of the Z column with respect to which the difference quotient is taken; and the second upper index tells the specific component function of $\sigma$ with which the difference quotient is calculated.

Consider a specific column $\sigma_3^i$, where only the i-th component of Z is varied: similar difference quotients, calculated for all component functions of $\sigma$, form the column.
On the other hand, taking $x_m = X^2$, $y_m = Y^2$, $z^p_m = Z^{p,2}$, $z^q_m = Z^{q,1}$, $z^i_1 = Z^{i,1}$, $z^i_2 = Z^{i,2}$, for $m=1,2$, $p=1,\dots,i-1$, $q=i+1,\dots,d$, we obtain $K_3$ as a bound for $|\sigma_3^i|$, $i=1,\dots,d$; hence each column of $\sigma_3$ has norm bounded by $K_3$. Similarly, we can obtain bounds for the other difference quotients, and in summary the following bounds hold under the standing assumptions:

\[
\begin{cases}
|b_1|\le K_1, \quad |b_2|\le K_1, \quad |b_3|\le\sqrt d\,K_1, \\
|f_1|\le K_2, \quad |f_2|\le K_2, \quad |f_3|\le\sqrt d\,K_2, \\
|\sigma_1|\le K_3, \quad |\sigma_2|\le K_3, \quad |\sigma_3|\le\sqrt d\,K_3.
\end{cases}
\]

2.4 Case with Upper-Triangular $\sigma_3$

For simplicity, in this section we only consider FBSDE systems with upper-triangular $\sigma_3$; that is, each component $\sigma^j$ of $\sigma$ depends only on the components $Z^i$, $i=1,\dots,j$, of Z:

\[
\sigma^j = \sigma^j(t,X_t,Y_t,Z^1_t,\dots,Z^j_t),
\]

where $\sigma^j$ denotes the j-th component of $\sigma\in\mathbb{R}^d$. Hence the corresponding variational FBSDE simplifies to the following form:

\[
\begin{cases}
\nabla X_t = 1 + \displaystyle\int_0^t \big[b_1\nabla X_s + b_2\nabla Y_s + \langle b_3,\nabla Z_s\rangle\big]ds + \int_0^t \langle \sigma_1\nabla X_s + \sigma_2\nabla Y_s + \sigma_3\nabla Z_s,\, dW_s\rangle, \\[1ex]
\nabla Y_t = h\,\nabla X_T + \displaystyle\int_t^T \big[f_1\nabla X_s + f_2\nabla Y_s + \langle f_3,\nabla Z_s\rangle\big]ds - \int_t^T \langle \nabla Z_s, dW_s\rangle,
\end{cases} \tag{2.15}
\]

where $\sigma_3$ is now simply an upper-triangular matrix with the following components: for $i\le j$,

\[
[\sigma_3]_{ij} = \frac{\sigma^j(t,X^2_t,Y^2_t,Z^{1,2}_t,\dots,Z^{i-1,2}_t,Z^{i,1}_t,\dots,Z^{j,1}_t) - \sigma^j(t,X^2_t,Y^2_t,Z^{1,2}_t,\dots,Z^{i-1,2}_t,Z^{i,2}_t,Z^{i+1,1}_t,\dots,Z^{j,1}_t)}{Z^{i,1}_t - Z^{i,2}_t},
\]

and zero otherwise.

Remark 2.4.1 Among the upper indices, j denotes the j-th component function of $\sigma$, and the first upper index of Z denotes the component of the Z vector. The second upper index of Z simply indicates which set of solutions (with corresponding initial value $x_i$, $i=1,2$) it belongs to.

Lemma 2.4.2 Assume $\hat Y$ is bounded and $\sigma_3$ is upper-triangular. Then the following two statements are equivalent:

(i) $(I-\sigma_3\hat Y)^{-1}$ is bounded;
(ii) $(1-[\sigma_3]_{jj}\hat Y)^{-1}$, $j=1,\dots,d$, are all bounded.

Proof.
It is known that $\det(I-\sigma_3\hat Y) = \prod_{j=1}^d [I-\sigma_3\hat Y]_{jj} = \prod_{j=1}^d (1-[\sigma_3]_{jj}\hat Y)$; hence

\[
\det\big((I-\sigma_3\hat Y)^{-1}\big) = \big(\det(I-\sigma_3\hat Y)\big)^{-1} = \frac{1}{\prod_{j=1}^d [I-\sigma_3\hat Y]_{jj}} = \prod_{j=1}^d \frac{1}{1-[\sigma_3]_{jj}\hat Y}.
\]

Next, recall that the analytic inverse of $I-\sigma_3\hat Y$ is

\[
(I-\sigma_3\hat Y)^{-1} = \frac{1}{\det(I-\sigma_3\hat Y)}\big[\mathrm{Co}(I-\sigma_3\hat Y)\big]^T,
\]

where $\mathrm{Co}(I-\sigma_3\hat Y)$ is the cofactor matrix of $I-\sigma_3\hat Y$. Due to the boundedness of $\hat Y$, we can conclude that $\mathrm{Co}(I-\sigma_3\hat Y)$ has finite norm. Consequently, $(I-\sigma_3\hat Y)^{-1}$ being bounded is equivalent to $1/\det(I-\sigma_3\hat Y)$ being bounded, which essentially requires $(1-[\sigma_3]_{jj}\hat Y)^{-1}$ to be bounded for $j=1,\dots,d$.

Hence, in the case of upper-triangular $\sigma_3$, condition (2.14) is equivalent to the following:

\[
\hat Y \ \text{and}\ (1-[\sigma_3]_{jj}\hat Y)^{-1},\ j=1,\dots,d,\ \text{are all bounded.} \tag{2.16}
\]

2.5 Dominating Ordinary Differential Equation

Lemma 2.5.1 Assume the standing assumptions (H1)-(H3) hold, the BSDE (2.12) has a solution $(\hat Y, \hat Z)$, and the following ODEs admit solutions $\underline y, \overline y$:

\[
\underline y_t = \underline h + \int_t^T \underline F_s(\underline y_s)\,ds, \qquad \overline y_t = \overline h + \int_t^T \overline F_s(\overline y_s)\,ds. \tag{2.17}
\]

Assume further that $\hat Y$, $\underline y$, $\overline y$ all satisfy condition (2.16). Then

\[
\underline y_t \le \hat Y_t \le \overline y_t, \qquad \forall t\in[0,T],\ \mathbb{P}\text{-a.s.}
\]

Proof. The proof of the lemma follows the argument of Lemma 3.2 in [23], but the uniform Lipschitz property of F involves the boundedness of $(I-\sigma_3 y)^{-1}$.

Before we present the well-posedness result for the penalized FBSDE system, we state the following lemma from [23] for reference in later results. Consider the following "backward ODEs" on $[0,T]$:

\[
y^0_t = h^0 + \int_t^T F^0_s(y^0_s)\,ds, \tag{2.18}
\]

and

\[
y^1_t = h^1 - C^1 + \int_t^T \big[F^1_s(y^1_s) - c^1_s\big]ds, \qquad y^2_t = h^2 + C^2 + \int_t^T \big[F^2_s(y^2_s) + c^2_s\big]ds, \tag{2.19}
\]

where $F^0, F^1, F^2: [0,T]\times\mathbb{R}\to\mathbb{R}$ are deterministic measurable functions.
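The dominating equations of this section are deterministic backward ODEs of the form $y_t = h + \int_t^T F_s(y_s)\,ds$, i.e. $y'(t) = -F(t, y(t))$ with terminal value $y_T = h$, so they can be integrated numerically by marching from $T$ down to $0$. The following is a minimal explicit-Euler sketch; the function name, the fixed step count, and the use of explicit Euler are assumptions of this illustration, not part of the dissertation's analysis.

```python
import numpy as np

def solve_backward_ode(h, F, T, N):
    """Explicit Euler for the backward ODE
        y_t = h + int_t^T F(s, y_s) ds,   i.e.  y'(t) = -F(t, y(t)),  y(T) = h,
    marching from t = T down to t = 0 on a grid of N steps.
    Returns the array [y_0, ..., y_N] with y[k] approximating y(k*T/N)."""
    dt = T / N
    y = np.empty(N + 1)
    y[N] = h
    for k in range(N, 0, -1):
        t = k * dt
        # stepping backward in t adds +F(t, y) * dt
        y[k - 1] = y[k] + F(t, y[k]) * dt
    return y
```

For the nonlinear dominating ODEs, F is a polynomial in y (e.g. a cubic), and such a numerical integration is a quick way to probe whether the solution stays bounded on $[0,T]$ for given coefficient values.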
Lemma 2.5.2 Assume that
(i) $h^1 \le h^0 \le h^2$, and $F^1 \le F^0 \le F^2$;
(ii) both ODEs in (2.19) admit bounded solutions on $[0,T]$;
(iii) for any $t \in [0,T]$, the functions $y \mapsto F^i_t(y)$, $i = 0,1,2$, are uniformly Lipschitz for $y \in [y^1_t, y^2_t]$, with a common Lipschitz constant $L$;
(iv) $C^i \ge \int_t^T e^{\int_s^T \lambda_r\,dr}\,c^i_s\,ds$, $\forall t \in [0,T]$, for any $\lambda$ such that $|\lambda| \le L$.
Then (2.18) admits a unique solution $y^0$ such that $y^1 \le y^0 \le y^2$.

Remark 2.5.3 A trivial sufficient condition for (iv) above is $C^i \ge \int_0^T e^{L(T-t)}(c^i_t)^+\,dt$, where we take the greatest possible value of the right-hand side of the inequality in (iv) by taking $\lambda = L$, $t = 0$ (since the exponential function is always positive) and the positive part of $c^i_t$. In particular, this is satisfied when $C^i = 0$ and $c^i_t \le 0$, $\forall t \in [0,T]$.

It is worth noting that [23] presented sufficient conditions under which the "dominating ODE" has bounded solutions, in both the linear and the nonlinear case, where $\sigma_3 = 0$ or $\sigma_3 \ne 0$. However, for the following discussion of constrained FBSDEs with multi-dimensional drivers, we only consider the nonlinear case with $\sigma_3 = 0$; then $F$ simplifies to

$$\begin{aligned} F_s(y) &= (b_1 + b_2 y)y + f_1 + f_2 y + \big\langle (b_3 y + f_3)y,\ (I - \sigma_3 y)^{-1}(\sigma_1 + \sigma_2 y)\big\rangle \\ &= (b_1 + b_2 y)y + f_1 + f_2 y + \big\langle (b_3 y + f_3)y,\ \sigma_1 + \sigma_2 y\big\rangle \\ &= f_1 + \big(b_1 + f_2 + \langle f_3,\sigma_1\rangle\big)y + \big(b_2 + \langle f_3,\sigma_2\rangle + \langle b_3,\sigma_1\rangle\big)y^2 + \langle b_3,\sigma_2\rangle\,y^3. \end{aligned}$$

The following theorem gives sufficient conditions under which the "dominating ODE" has bounded solutions.

Theorem 2.5.4 Suppose the assumptions (H1)--(H3) are in force and in addition $\sigma = \sigma(t,x,y)$. Then for any $T > 0$, the ODEs (2.17) have bounded solutions $\underline{y}_t$ and $\overline{y}_t$ on $[0,T]$ if one of the following three cases holds true:
(i) there exists a constant $\varepsilon > 0$ such that $\langle\sigma_2, b_3\rangle \le -\varepsilon$ (whatever the sign of $b_2 + \langle f_3,\sigma_2\rangle + \langle b_3,\sigma_1\rangle$);
(ii) there exist a constant $\bar{h}$ and a constant $\varepsilon > 0$ small enough such that $F(t,\bar{h}) \le 0$, $|\langle\sigma_2, b_3\rangle| \le \varepsilon$ and $b_2 + \langle f_3,\sigma_2\rangle + \langle b_3,\sigma_1\rangle \le -\varepsilon$;
(iii) there exist a constant $\bar{h}$ and a constant $\varepsilon > 0$ small enough such that $F(t,\bar{h}) \ge 0$, $|\langle\sigma_2, b_3\rangle| \le \varepsilon$ and $b_2 + \langle f_3,\sigma_2\rangle + \langle b_3,\sigma_1\rangle \ge \varepsilon$.

Proof.
Since $F(t,y)$ is still one-dimensional in the case of a multi-dimensional driver, the proof is similar to the one in [23]; hence we omit it here.

2.6 Characteristic BSDE and Small Time Duration

Following the argument of [23], we now proceed with the discussion of the connection between the well-posedness of the linear variational FBSDE and the corresponding characteristic BSDE in the following theorem.

Theorem 2.6.1 Suppose the assumptions (H1)--(H3) are in force and in addition $\sigma = \sigma(t,x,y)$. Suppose the linear variational FBSDE (2.15) has a solution $(X,Y,Z) \in \mathbb{L}^2$ such that

$$|Y_t| \le C|X_t|; \qquad (2.20)$$

then $X \ne 0$, and the process $(\widehat{Y},\widehat{Z})$ defined by

$$\widehat{Y} = Y/X, \qquad \widehat{Z}_t = \frac{Z_t - \widehat{Y}_t(\sigma_1 X_t + \sigma_2 Y_t + \sigma_3 Z_t)}{X_t} \qquad (2.21)$$

satisfies BSDE (2.12), and $\widehat{Y}$ is bounded.

Proof. The proof of the theorem follows an argument similar to that of Theorem 4.2 of [23].

It is worth noting that up to now we have assumed the well-posedness of the linear variational FBSDE (hence of the corresponding characteristic BSDE), which guarantees the existence of the decoupling field. However, to begin the whole procedure, the existence of a solution to the original FBSDE is necessary. Therefore, we return to the beginning of this argument by considering a "local existence" result for the FBSDE.

A well-known sufficient condition for existence is, roughly speaking, $|\sigma_3 h| < 1$.

Theorem 2.6.2 Suppose the assumptions (H1)--(H3) are in force and in addition $\sigma = \sigma(t,x,y)$. Then there exist constants $\delta > 0$, $\tilde{c} > 0$, which depend only on the Lipschitz constants of the FBSDE system, such that whenever $T \le \delta$:
(i) the FBSDE (2.1) admits a unique solution $\Theta \in \mathbb{L}^2$;
(ii) the ODEs in (2.17) have solutions satisfying

$$-\tilde{c} \le \underline{y}_t \le \overline{y}_t \le \tilde{c}, \qquad \forall t \in [0,T]; \qquad (2.22)$$

(iii) there exists a random field $u$ such that $\forall t \in [0,T]$, $Y_t = u(t,X_t)$, and

$$\underline{y}_t \le \frac{u(t,x_1) - u(t,x_2)}{x_1 - x_2} \le \overline{y}_t, \qquad \text{for any } x_1 \ne x_2. \qquad (2.23)$$

Proof. Note that $|\sigma_3| = 0 \le c_1$ for any $c_1 > 0$. By the standing assumption, $|h| \le K_4 \triangleq c_2$.
Then $c_1 c_2 < 1$ by picking $c_1$ sufficiently small, and (i) follows from Theorem I.5.1 of Ma and Yong [24]. To see (ii), let $\tilde{c} \triangleq \frac{c_2 + c_1^{-1}}{2}$; consequently $c_2 < \tilde{c} < c_1^{-1}$. Consider the following ODEs:

$$\tilde{y}^1_t = -c_2 - [\tilde{c} - c_2] + \int_t^T \big[\underline{F}_s(\tilde{y}^1_s) - \underline{F}_s(-\tilde{c})\big]\,ds, \qquad \tilde{y}^2_t = c_2 + [\tilde{c} - c_2] + \int_t^T \big[\overline{F}_s(\tilde{y}^2_s) - \overline{F}_s(\tilde{c})\big]\,ds.$$

It is easy to see that $\tilde{y}^1 \equiv -\tilde{c}$, $\tilde{y}^2 \equiv \tilde{c}$ satisfy the above, and recall

$$\underline{y}_t = \underline{h} + \int_t^T \underline{F}_s(\underline{y}_s)\,ds, \qquad \overline{y}_t = \overline{h} + \int_t^T \overline{F}_s(\overline{y}_s)\,ds,$$

$$F_s(y) = f_1 + \big(b_1 + f_2 + \langle f_3,\sigma_1\rangle\big)y + \big(b_2 + \langle f_3,\sigma_2\rangle + \langle b_3,\sigma_1\rangle\big)y^2 + \langle b_3,\sigma_2\rangle\,y^3.$$

Then $\underline{F}, \overline{F}$ can be obtained by passing the coefficients to their boundary values, thanks to the heuristic analysis; hence for $y \in [-\tilde{c},\tilde{c}]$, $\underline{F}$ and $\overline{F}$ are both uniformly Lipschitz in $y$, similarly to $F$. We denote the common Lipschitz constant by $L$. We assume (i) holds for some $\delta > 0$; then, modifying $\delta$ if necessary, we may further assume that

$$\int_0^\delta e^{Lt}\,dt \cdot \sup_{|y|\le\tilde{c}}\ \sup_{t\in[0,T]}\big(|\underline{F}_t(y)| + |\overline{F}_t(y)|\big) \le \tilde{c} - c_2.$$

Then

$$\int_0^T e^{L(T-t)}\big[-\underline{F}_t(-\tilde{c})\big]\,dt \le \sup_{|y|\le\tilde{c}}\sup_{t\in[0,T]}|\underline{F}_t(y)| \int_0^\delta e^{L(\delta-t)}\,dt = \sup_{|y|\le\tilde{c}}\sup_{t\in[0,T]}|\underline{F}_t(y)| \int_0^\delta e^{Lt}\,dt \le \tilde{c} - c_2.$$

Similarly, we can obtain

$$\int_0^T e^{L(T-t)}\,\overline{F}_t(\tilde{c})\,dt \le \tilde{c} - c_2.$$

Together with the fact that $\underline{h}, \overline{h} \in [-c_2, c_2]$ and $\underline{F} \le \overline{F}$, by Lemma 2.5.2 (where in fact the sufficient condition in Remark 2.5.3 is satisfied in place of (iv) of Lemma 2.5.2), we prove (ii).

Now let $\delta$ be small enough so that both (i) and (ii) hold true. For any initial time and initial forward value $(t,x)$, let the unique solution to FBSDE (2.1) starting from $(t,x)$ be denoted by $\Theta^{t,x}$, and define a random field $u(t,x) \triangleq Y^{t,x}_t$. The uniqueness of the solution implies that $Y^{t,x}_s = u(s,X^{t,x}_s)$, $\forall s \in [t,T]$, $P$-a.s. Taking the starting time to be zero, i.e. $t = 0$, we have $Y_t = u(t,X_t)$, $\forall t \in [0,T]$. Then, given two distinct initial forward values $x_1 \ne x_2$, recall (2.15). Following a standard argument, for smaller $\delta$ if necessary, one can easily see that $|Y_t| \le \tilde{c}|X_t|$, $P$-a.s.
Hence by Theorem 2.6.1, the process defined by (2.21) is an adapted solution of the corresponding characteristic BSDE, and the backward process is bounded by $\tilde{c}$ a.s. Consequently, (iii) follows from Lemma 2.5.1.

2.7 Well-posedness of FBSDE

The following theorem corresponds to the case where $\sigma = \sigma(t,x,y)$.

Theorem 2.7.1 Assume that the assumptions (H1)--(H3) hold, $\sigma = \sigma(t,x,y)$, and any of the conditions in Theorem 2.5.4 is satisfied. Then:
(i) FBSDE (2.1) possesses a uniformly Lipschitz decoupling field;
(ii) FBSDE (2.1) admits a unique solution $\Theta \in \mathbb{L}^2$.

Proof. (i) Let $\delta > 0$ be the constant determined in Theorem 2.6.2, and let $0 = t_0 < \cdots < t_m = T$ be a partition of $[0,T]$ such that $t_i - t_{i-1} \le \delta$, $i = 1,\dots,m$. By Theorem 2.5.4, we notice that the Lipschitz constant of the decoupling field is bounded throughout the partition. First consider FBSDE (2.1) on $[t_{m-1},t_m]$; then by Theorem 2.6.2 there exists a random field $u$ such that (2.23) holds $\forall t \in [t_{m-1},t_m]$. In particular, by (2.22), $\tilde{c}$ is a Lipschitz constant of $u(t_{m-1},\cdot)$. Next, consider FBSDE (2.1) on $[t_{m-2},t_{m-1}]$ with the terminal condition $u(t_{m-1},\cdot)$. Applying Theorem 2.6.2 on $[t_{m-2},t_{m-1}]$, we find a decoupling field with the same uniform Lipschitz constant. Repeating this procedure backward finitely many times, we obtain a random field on the entire interval $[0,T]$.

(ii) Let $u$ be the decoupling field obtained in (i) and consider the same partition of $[0,T]$ as above; then Theorem 2.2.2 implies that FBSDE (2.1) admits a unique solution $\Theta \in \mathbb{L}^2$.

Chapter 3 Non-Markovian FBSDEs with Constraint on $Z$

3.1 Problem Formulation

Let us now consider the following FBSDE:

$$\begin{cases} X_t = x + \displaystyle\int_0^t b(s,X_s,Y_s,Z_s)\,ds + \int_0^t \langle\sigma(s,X_s,Y_s),\ dW_s\rangle, \\ Y_t = g(X_T) + \displaystyle\int_t^T f(s,X_s,Y_s,Z_s)\,ds - \int_t^T \langle Z_s, dW_s\rangle + D_T - D_t, \end{cases} \qquad (3.1)$$

where the coefficients still satisfy the assumptions (H1)--(H3).
In addition, we make the following assumption:

(H4) $Z_t$ is assumed to be bounded below and above by given adapted bounded processes $(L_t,U_t)$ taking values in $\mathbb{R}^d$; that is, $Z_t \in [L_t,U_t]$, $\forall t \in [0,T]$, $P$-a.s.

Definition 3.1.1 A quadruple of processes $(X,Y,Z,D)$ is called a solution of FBSDE (3.1) if
(i) $(X,Y) \in L^2_{\mathbb{F}}([0,T];\mathbb{R}^2)$, $Z \in L^2_{\mathbb{F}}([0,T];\mathbb{R}^d)$;
(ii) $D \in L^2_{\mathbb{F}}([0,T];\mathbb{R})$ is non-decreasing and RCLL with $D_0 = 0$, such that $E|D_T|^2 < \infty$;
(iii) $Z_t \in [L_t,U_t]$, $\forall t \in [0,T]$, $P$-a.s.

3.2 Method of Penalization

For the constraint $Z_t \in [L_t,U_t]$, we can apply the so-called penalization method. More specifically, we first define the penalization function corresponding to the convex constraint on $Z$. Let

$$\psi(t,x) = \big|(x - L_t)^- + (x - U_t)^+\big|_1 = \sum_{i=1}^d \big[(x^i - L^i_t)^- + (x^i - U^i_t)^+\big], \qquad x \in \mathbb{R}^d;$$

then one easily sees that $Z_t \in [L_t,U_t]$ is equivalent to $\psi(t,Z_t) = 0$, hence the convex constraint on $Z$ is the zero set of the function $\psi$. Our goal is to consider the convergence of a sequence of penalized solutions and eventually construct a limiting solution, which will be shown to solve the original FBSDE system (3.1). This method restricts the possible models we can consider to the type where $b_3 = 0$.

Now consider a sequence of penalized FBSDEs without constraint:

$$\begin{cases} X^n_t = x + \displaystyle\int_0^t b(s,X^n_s,Y^n_s)\,ds + \int_0^t \langle\sigma(s,X^n_s,Y^n_s),\ dW_s\rangle, \\ Y^n_t = g(X^n_T) + \displaystyle\int_t^T f(s,X^n_s,Y^n_s,Z^n_s)\,ds + \int_t^T n\psi(s,Z^n_s)\,ds - \int_t^T \langle Z^n_s, dW_s\rangle. \end{cases} \qquad (3.2)$$

It is easy to see that we can derive a sequence of penalized characteristic BSDEs indexed by $n$, and we seek sufficient conditions on the coefficients under which the sequence of penalized FBSDEs is well-posed.
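To fix ideas (an added example, not part of the original text): in dimension $d = 1$ the penalty function reads

```latex
\[
  \psi(t,x) \;=\; (x - L_t)^- + (x - U_t)^+ \;=\;
  \begin{cases}
    L_t - x, & x < L_t,\\[2pt]
    0,       & L_t \le x \le U_t,\\[2pt]
    x - U_t, & x > U_t,
  \end{cases}
\]
% psi(t, .) is convex, vanishes exactly on [L_t, U_t], and is 1-Lipschitz;
% these are precisely the properties the penalization argument below uses.
```

so $\psi(t,\cdot)$ is convex, vanishes exactly on $[L_t,U_t]$, and is $1$-Lipschitz in each component, which is what the penalization argument relies on.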
Theorem 3.2.1 Assume that the assumptions (H1)--(H4) hold, and in addition $\sigma = \sigma_t$ and $b = b(t,x,y)$; then $F^n = F$, $\forall n$. Then for each $n$, the penalized FBSDE

$$\begin{cases} X^n_t = x + \displaystyle\int_0^t b(s,X^n_s,Y^n_s)\,ds + \int_0^t \langle\sigma_s, dW_s\rangle, \\ Y^n_t = g(X^n_T) + \displaystyle\int_t^T f(s,X^n_s,Y^n_s,Z^n_s)\,ds + \int_t^T n\psi(s,Z^n_s)\,ds - \int_t^T \langle Z^n_s, dW_s\rangle, \end{cases} \qquad (3.3)$$

is well-posed, with a unique adapted solution $(X^n,Y^n,Z^n) \in L^2_{\mathbb{F}}([0,T];\mathbb{R}^{2+d})$ and a uniformly Lipschitz decoupling field $\theta^n$ such that $Y^n_t = \theta^n(t,X^n_t)$, if any of the following holds:
(i) there exist a constant $\bar{h}$ and a constant $\varepsilon > 0$ small enough such that $F(t,\bar{h}) \le 0$ and $b_2 \le -\varepsilon$;
(ii) there exist a constant $\bar{h}$ and a constant $\varepsilon > 0$ small enough such that $F(t,\bar{h}) \ge 0$ and $b_2 \ge \varepsilon$.

Proof. Since $\sigma = \sigma_t$ and $b = b(t,x,y)$, we have $\sigma_1 = \sigma_2 = \sigma_3 = 0$ and $b_3 = 0$, so

$$F^n_s(y) = (b_1 + b_2 y)y + f_1 + f_2 y + \big\langle (b_3 y + f_3 + n\nabla\psi)y,\ (I - \sigma_3 y)^{-1}(\sigma_1 + \sigma_2 y)\big\rangle = f_1 + (b_1 + f_2)y + b_2 y^2 = F_s(y).$$

Applying Theorem 2.7.1, we obtain the desired results.

Remark 3.2.2 It is worth noting that a direct consequence of Theorem 3.2.1 is that there exists a constant $C > 0$, independent of $n$, such that

$$|\nabla\theta^n| \le C.$$

A weaker but still sufficient case results in the following theorem.

Theorem 3.2.3 Assume that the assumptions (H1)--(H4) hold. Suppose further that $\sigma = \sigma(t,x,y)$, $b = b(t,x,y)$, and $\nabla\psi \in \mathrm{span}\{\sigma_1,\sigma_2\}^{\perp}$; then $F^n = F$, $\forall n$, and the penalized FBSDE

$$\begin{cases} X^n_t = x + \displaystyle\int_0^t b(s,X^n_s,Y^n_s)\,ds + \int_0^t \langle\sigma(s,X^n_s,Y^n_s),\ dW_s\rangle, \\ Y^n_t = g(X^n_T) + \displaystyle\int_t^T f(s,X^n_s,Y^n_s,Z^n_s)\,ds + \int_t^T n\psi(s,Z^n_s)\,ds - \int_t^T \langle Z^n_s, dW_s\rangle, \end{cases} \qquad (3.4)$$

is well-posed $\forall n$, with a unique adapted solution $(X^n,Y^n,Z^n) \in L^2_{\mathbb{F}}([0,T];\mathbb{R}^{2+d})$ and a uniformly Lipschitz decoupling field $\theta^n$ such that $Y^n_t = \theta^n(t,X^n_t)$, if any of the following holds:
(i) it holds that $b_2 + \langle f_3,\sigma_2\rangle = 0$;
(ii) there exist a constant $\bar{h}$ and a constant $\varepsilon > 0$ small enough such that $F(t,\bar{h}) \le 0$ and $b_2 + \langle f_3,\sigma_2\rangle \le -\varepsilon$;
(iii) there exist a constant $\bar{h}$ and a constant $\varepsilon > 0$ small enough such that $F(t,\bar{h}) \ge 0$ and $b_2 + \langle f_3,\sigma_2\rangle \ge \varepsilon$.

Proof. Since $\sigma = \sigma(t,x,y)$ and $b = b(t,x,y)$, and $\nabla\psi \in \mathrm{span}\{\sigma_1,\sigma_2\}^{\perp}$,
then

$$F^n_s(y) = (b_1 + b_2 y)y + f_1 + f_2 y + \big\langle (b_3 y + f_3 + n\nabla\psi)y,\ (I - \sigma_3 y)^{-1}(\sigma_1 + \sigma_2 y)\big\rangle = (b_1 + b_2 y)y + f_1 + f_2 y + \big\langle (f_3 + n\nabla\psi)y,\ \sigma_1 + \sigma_2 y\big\rangle = f_1 + \big(b_1 + f_2 + \langle f_3,\sigma_1\rangle\big)y + \big(b_2 + \langle f_3,\sigma_2\rangle\big)y^2 = F_s(y).$$

Consequently, the results follow from Theorem 2.7.1.

Remark 3.2.4 One trivial case satisfying the condition of the above theorem is $\sigma^j_i = 0$, $i = 1,2$, for $j \in \{j = 1,\dots,d : Z^j \in [L^j,U^j]\}$. Consequently, we still have

$$|\nabla\theta^n| \le C.$$

Our next goal is to show the convergence of the corresponding sequence of penalized solutions and to construct limiting processes, which will be shown to solve the original FBSDE system with constraint on $Z$.

Lemma 3.2.5 Assume that the assumptions (H1)--(H4) hold. Then there exists a constant $C > 0$, independent of $n$, such that

$$|\theta^n(t,x)| \le C, \qquad \forall (t,x) \in [0,T]\times\mathbb{R}.$$

Proof. We recall that $\psi(s,Z^n_s) = |(Z^n_s - L_s)^- + (Z^n_s - U_s)^+|_1$ is Lipschitz with respect to each component of $Z^n$ with Lipschitz constant 1, and $\psi(s,0) = 0$; hence there exists an adapted process $\eta^n_s$ such that $|\eta^n_s| \le K_n$, $\forall s \in [t,T]$, $P$-a.s., for some $K_n > 0$, and such that

$$n\psi(s,Z^n_s) = \langle\eta^n_s, Z^n_s\rangle.$$

Similarly, since $f$ is Lipschitz with respect to $(X^n,Y^n,Z^n)$, there exist adapted processes $\alpha, \beta$ with $|\alpha_s| \le L$, $|\beta_s| \le L$, $\forall s \in [t,T]$, such that

$$f(s,X^n_s,Y^n_s,Z^n_s) = \alpha_s Y^n_s + \langle\beta_s, Z^n_s\rangle + f(s,X^n_s,0,0).$$

Hence

$$Y^n_t = g(X^n_T) + \int_t^T \big[f(s,X^n_s,0,0) + \alpha_s Y^n_s + \langle\beta_s + \eta^n_s, Z^n_s\rangle\big]\,ds - \int_t^T \langle Z^n_s, dW_s\rangle.$$

To simplify further, let $r_t = e^{\int_0^t \alpha_s\,ds}$, and $\bar{Y}^n_t = r_t Y^n_t$, $\bar{Z}^n_t = r_t Z^n_t$.
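For concreteness, one admissible choice of the slope process $\eta^n$ can be written componentwise as follows (an added sketch; the notation $\psi^i$ for the $i$th summand of $\psi$ is ours, and we assume $\psi^i(s,0) = 0$, i.e. $L^i_s \le 0 \le U^i_s$, as the proof above implicitly does):

```latex
\[
  \eta^{n,i}_s \;=\; n\,\frac{\psi^i(s, Z^{n,i}_s)}{Z^{n,i}_s}\,
      \mathbf{1}_{\{Z^{n,i}_s \neq 0\}},
  \qquad
  \psi^i(s,x) \;=\; (x - L^i_s)^- + (x - U^i_s)^+ .
\]
% Then sum_i eta^{n,i}_s Z^{n,i}_s = n psi(s, Z^n_s), since any component with
% Z^{n,i}_s = 0 contributes psi^i(s,0) = 0; moreover |eta^{n,i}_s| <= n because
% psi^i(s, .) is 1-Lipschitz and psi^i(s,0) = 0.
```

Note that the bound $|\eta^n| \le K_n$ grows with $n$, which is why the change of measure below must be performed separately for each $n$.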
An application of Itô's formula to $\bar{Y}^n_t$ yields

$$\begin{aligned} d\bar{Y}^n_t &= r_t\,dY^n_t + r_t Y^n_t\,\alpha_t\,dt \\ &= -r_t\big[f(t,X^n_t,0,0) + \alpha_t Y^n_t + \langle\beta_t + \eta^n_t, Z^n_t\rangle\big]\,dt + r_t\langle Z^n_t, dW_t\rangle + r_t Y^n_t\,\alpha_t\,dt \\ &= -r_t\big[f(t,X^n_t,0,0) + \langle\beta_t + \eta^n_t, Z^n_t\rangle\big]\,dt + \langle r_t Z^n_t, dW_t\rangle \\ &= -\big[r_t f(t,X^n_t,0,0) + \langle\beta_t + \eta^n_t, \bar{Z}^n_t\rangle\big]\,dt + \langle\bar{Z}^n_t, dW_t\rangle; \end{aligned}$$

hence we have

$$\begin{aligned} \bar{Y}^n_t &= r_T\,g(X^n_T) + \int_t^T \big[r_s f(s,X^n_s,0,0) + \langle\beta_s + \eta^n_s, \bar{Z}^n_s\rangle\big]\,ds - \int_t^T \langle\bar{Z}^n_s, dW_s\rangle \\ &= r_T\,g(X^n_T) + \int_t^T r_s f(s,X^n_s,0,0)\,ds - \int_t^T \big\langle\bar{Z}^n_s,\ dW_s - (\beta_s + \eta^n_s)\,ds\big\rangle \\ &= r_T\,g(X^n_T) + \int_t^T r_s f(s,X^n_s,0,0)\,ds - \int_t^T \langle\bar{Z}^n_s, dW^n_s\rangle, \end{aligned}$$

where $W^n_s = W_s - W_t - \int_t^s (\beta_r + \eta^n_r)\,dr$. Since $Z^n \in L^2_{\mathbb{F}}([0,T];\mathbb{R}^d)$ and $r_t$ is bounded, due to the fact that $\alpha$ is bounded, $\bar{Z}^n \in L^2_{\mathbb{F}}([0,T];\mathbb{R}^d)$ also holds. Now, thanks to the fact that $\beta, \eta^n$ are bounded for each $n$, by Girsanov's Theorem there exists a probability measure $Q^n \sim P$ such that $W^n$ is a $Q^n$-Brownian motion on $[t,T]$. Therefore

$$\bar{Y}^n_t = E^{Q^n}\Big[r_T\,g(X^n_T) + \int_t^T r_s f(s,X^n_s,0,0)\,ds\ \Big|\ \mathcal{F}_t\Big].$$

Together with (H2), which states that $f(t,x,0,0)$ is uniformly bounded in $(t,x)$, $P$-a.s., and $g(x)$ is uniformly bounded in $x$, we acquire the uniform boundedness of $\bar{Y}^n_t$, and hence of $Y^n_t$.

Next, we present a comparison result, which together with Lemma 3.2.5 will lead to the construction of the limiting processes.

Lemma 3.2.6 Assume the assumptions (H1)--(H4). For any $n \ge 1$, it holds that

$$\theta^{n+1}(t,x) \ge \theta^n(t,x), \qquad \forall (t,x) \in [0,T]\times\mathbb{R}.$$

Proof. Note that throughout the sequence of penalized FBSDE systems, all the coefficients, including the initial and terminal conditions, stay the same except for the drift term of the backward process; precisely, $n\psi(\cdot,\cdot)$ increases as $n$ goes to infinity. Hence by Theorem 8.6 of [23], we obtain that $\theta^{n+1} \ge \theta^n$.

The following lemma combines the remarks after Theorems 3.2.1 and 3.2.3, and states the uniform boundedness and Lipschitz property of the decoupling fields $\theta^n$.

Lemma 3.2.7 Assume that the assumptions (H1)--(H3) hold, and that any of the conditions in Theorem 3.2.1 or Theorem 3.2.3 is satisfied. Then there exists a constant $C > 0$ such that
$$|\nabla\theta^n(t,x)| \le C, \qquad \forall (t,x) \in [0,T]\times\mathbb{R}.$$

Combining the results of Lemmas 3.2.5 and 3.2.6, by the Monotone Convergence Theorem we see that there exists $\theta(t,x)$ such that $\theta^n(t,x) \nearrow \theta(t,x)$ as $n \to \infty$. Furthermore, $\theta$ is jointly measurable, uniformly bounded and uniformly Lipschitz in $x$, thanks to Lemma 3.2.7.

Proposition 3.2.8 Assume the assumptions (H1)--(H4), and that any of the conditions in Theorem 3.2.1 or Theorem 3.2.3 is satisfied. Then the following SDE is well-posed:

$$X_s = x + \int_0^s b(r,X_r,\theta(r,X_r))\,dr + \int_0^s \langle\sigma(r,X_r,\theta(r,X_r)),\ dW_r\rangle. \qquad (3.5)$$

Consequently, we can define $Y_s = \theta(s,X_s)$; then
(i) $X^n \to X$ in $L^2_{\mathbb{F}}([0,T];\mathbb{R})$, and $E[|Y^n_t - Y_t|^2] \to 0$;
(ii) $Z^n \rightharpoonup Z$ weakly in $L^2_{\mathbb{F}}([0,T];\mathbb{R}^d)$ for some $Z \in L^2_{\mathbb{F}}([0,T];\mathbb{R}^d)$, possibly along a subsequence.

Proof. It is easy to see that the well-posedness of (3.5) is guaranteed by the uniform Lipschitz property of $\theta$. In addition, it is worth noting the importance of $b_3 = 0$ in our model for the success of this construction, due to our lack of a representation of $Z$ in terms of $X$.

Now notice that

$$X^n_t = x + \int_0^t b(r,X^n_r,\theta^n(r,X^n_r))\,dr + \int_0^t \langle\sigma(r,X^n_r,\theta^n(r,X^n_r)),\ dW_r\rangle = x + \int_0^t b^n(r,X^n_r)\,dr + \int_0^t \langle\sigma^n(r,X^n_r),\ dW_r\rangle,$$

where $b^n(r,x) = b(r,x,\theta^n(r,x))$, $\sigma^n(r,x) = \sigma(r,x,\theta^n(r,x))$, and

$$X_t = x + \int_0^t b(r,X_r,\theta(r,X_r))\,dr + \int_0^t \langle\sigma(r,X_r,\theta(r,X_r)),\ dW_r\rangle.$$

Denote $\Delta X^n := X^n_t - X_t$, and, with a slight abuse of notation, $\Delta b^n(r,x) = b^n(r,x) - b(r,x,\theta(r,x))$, $\Delta\sigma^n(r,x) = \sigma^n(r,x) - \sigma(r,x,\theta(r,x))$; then we have

$$\begin{aligned} \Delta X^n_t &= \int_0^t \big[b(r,X^n_r,\theta^n(r,X^n_r)) - b(r,X_r,\theta(r,X_r))\big]\,dr + \int_0^t \big\langle\sigma(r,X^n_r,\theta^n(r,X^n_r)) - \sigma(r,X_r,\theta(r,X_r)),\ dW_r\big\rangle \\ &= \int_0^t \big[\Delta b^n(r,X^n_r) + \alpha_r\,\Delta X^n_r\big]\,dr + \int_0^t \big\langle\Delta\sigma^n(r,X^n_r) + \beta_r\,\Delta X^n_r,\ dW_r\big\rangle, \end{aligned}$$

where

$$\alpha_r = \frac{b(r,X^n_r,\theta(r,X^n_r)) - b(r,X_r,\theta(r,X_r))}{\Delta X^n_r}\,\mathbf{1}_{\{\Delta X^n_r \ne 0\}}, \qquad \beta_r = \frac{\sigma(r,X^n_r,\theta(r,X^n_r)) - \sigma(r,X_r,\theta(r,X_r))}{\Delta X^n_r}\,\mathbf{1}_{\{\Delta X^n_r \ne 0\}}.$$

Since, by assumption (H3), $b(t,x,y)$ and $\sigma(t,x,y)$ are Lipschitz in $(x,y)$, and the random field $\theta(t,x)$ is uniformly Lipschitz in $x$ as well, it is easy to see that both $\alpha, \beta$ are bounded, with bounds depending on the Lipschitz constants of the functions $b, \sigma, \theta$.
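Before carrying out the estimates that follow, we record the elementary inequality used repeatedly below (included for the reader's convenience; it is standard and not part of the original argument):

```latex
% Completing the square: for all real a, b and every eps > 0,
%   0 <= (sqrt(eps) a - b / sqrt(eps))^2 = eps a^2 - 2ab + b^2 / eps,
% which rearranges to the Young-type inequality 2ab <= eps a^2 + b^2 / eps.
\[
  0 \;\le\; \Big(\sqrt{\varepsilon}\,a - \tfrac{b}{\sqrt{\varepsilon}}\Big)^{2}
    \;=\; \varepsilon a^{2} - 2ab + \varepsilon^{-1} b^{2}
  \qquad\Longrightarrow\qquad
  2ab \;\le\; \varepsilon a^{2} + \varepsilon^{-1} b^{2}.
\]
```

Choosing $\varepsilon$ small lets one absorb a "supremum" term into the left-hand side of an estimate at the cost of a larger constant in front of the remaining terms.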
Now notice that

$$\sup_{t\in[0,T]}|\Delta X^n_t| \le \int_0^T \big|\Delta b^n(r,X^n_r) + \alpha_r\Delta X^n_r\big|\,dr + \sup_{t\in[0,T]}\Big|\int_0^t \big\langle\Delta\sigma^n(r,X^n_r) + \beta_r\Delta X^n_r,\ dW_r\big\rangle\Big|.$$

Squaring both sides and taking expectations, we obtain

$$\begin{aligned} E\Big[\sup_{t\in[0,T]}|\Delta X^n_t|^2\Big] &\le 2E\Big[\int_0^T \big|\Delta b^n(r,X^n_r) + \alpha_r\Delta X^n_r\big|^2\,dr + \sup_{t\in[0,T]}\Big|\int_0^t \langle\Delta\sigma^n(r,X^n_r) + \beta_r\Delta X^n_r,\ dW_r\rangle\Big|^2\Big] \\ &\le CE\Big[\int_0^T \big(|\Delta b^n(r,X^n_r)|^2 + |\alpha_r\Delta X^n_r|^2\big)\,dr + \int_0^T \big|\Delta\sigma^n(r,X^n_r) + \beta_r\Delta X^n_r\big|^2\,dr\Big] \\ &\le CE\Big[\int_0^T \big(|\Delta b^n(r,X^n_r)|^2 + |\Delta\sigma^n(r,X^n_r)|^2 + |\Delta X^n_r|^2\big)\,dr\Big]. \qquad (3.6) \end{aligned}$$

Note: by the Burkholder--Davis--Gundy (BDG) inequality,

$$E\Big[\sup_{t\in[0,T]}\Big|\int_0^t \langle\Delta\sigma^n(r,X^n_r) + \beta_r\Delta X^n_r,\ dW_r\rangle\Big|^2\Big] \le CE\Big[\int_0^T \big|\Delta\sigma^n(r,X^n_r) + \beta_r\Delta X^n_r\big|^2\,dr\Big],$$

where $C$ does not depend on the stochastic integral or on time. Furthermore, thanks to assumption (H3) and the fact that $\theta^n$ and $\theta$ share a common bound,

$$|\Delta b^n(r,X^n_r)| = \big|b(r,X^n_r,\theta^n(r,X^n_r)) - b(r,X^n_r,\theta(r,X^n_r))\big| \le K_1\big|\theta^n(r,X^n_r) - \theta(r,X^n_r)\big| \le 2CK_1,$$

and similarly for $\Delta\sigma^n(r,X^n_r)$. Finally, since $X^n, X$ are adapted solutions to the corresponding equations in $L^2_{\mathbb{F}}([0,T];\mathbb{R})$, it holds that $\int_0^t |\Delta X^n_r|^2\,dr < \infty$. Therefore (3.6) implies that

$$E\Big[\sup_{t\in[0,T]}|\Delta X^n_t|^2\Big] < \infty.$$

Applying Itô's formula to $|\Delta X^n_t|^2$, we obtain

$$|\Delta X^n_t|^2 = \int_0^t \Big[2\Delta X^n_r\big(\Delta b^n(r,X^n_r) + \alpha_r\Delta X^n_r\big) + \big|\Delta\sigma^n(r,X^n_r) + \beta_r\Delta X^n_r\big|^2\Big]\,dr + \int_0^t \Big\langle 2\Delta X^n_r\big[\Delta\sigma^n(r,X^n_r) + \beta_r\Delta X^n_r\big],\ dW_r\Big\rangle. \qquad (3.7)$$

To proceed, we need to show that $\int_0^t \langle 2\Delta X^n_r[\Delta\sigma^n(r,X^n_r) + \beta_r\Delta X^n_r], dW_r\rangle$ is a true martingale. We notice that, by the BDG inequality,

$$\begin{aligned} E\Big[\sup_{t\in[0,T]}\Big|\int_0^t \langle 2\Delta X^n_r[\Delta\sigma^n(r,X^n_r) + \beta_r\Delta X^n_r], dW_r\rangle\Big|\Big] &\le CE\Big[\Big(\int_0^T \big|2\Delta X^n_r[\Delta\sigma^n(r,X^n_r) + \beta_r\Delta X^n_r]\big|^2\,dr\Big)^{\frac12}\Big] \\ &\le CE\Big[\sup_{t\in[0,T]}|\Delta X^n_t|\Big(\int_0^T \big|\Delta\sigma^n(r,X^n_r) + \beta_r\Delta X^n_r\big|^2\,dr\Big)^{\frac12}\Big] \\ &\le C\Big\{E\Big[\sup_{t\in[0,T]}|\Delta X^n_t|^2\Big]\Big\}^{\frac12}\Big\{E\Big[\int_0^T \big|\Delta\sigma^n(r,X^n_r) + \beta_r\Delta X^n_r\big|^2\,dr\Big]\Big\}^{\frac12}, \end{aligned}$$

where

$$E\int_0^T \big|\Delta\sigma^n(r,X^n_r) + \beta_r\Delta X^n_r\big|^2\,dr \le CE\int_0^T \big[|\Delta\sigma^n(r,X^n_r)|^2 + |\Delta X^n_r|^2\big]\,dr < \infty.$$

Hence, together with the fact that $E[\sup_{t\in[0,T]}|\Delta X^n_t|^2] < \infty$, we can conclude that

$$E\Big[\sup_{t\in[0,T]}\Big|\int_0^t \langle 2\Delta X^n_r[\Delta\sigma^n(r,X^n_r) + \beta_r\Delta X^n_r], dW_r\rangle\Big|\Big] < \infty;$$

thus $\int_0^t \langle 2\Delta X^n_r[\Delta\sigma^n(r,X^n_r) + \beta_r\Delta X^n_r], dW_r\rangle$ is a true martingale.

Now, taking expectations in (3.7), applying Gronwall's inequality and the elementary inequality $2ab \le \varepsilon a^2 + \varepsilon^{-1}b^2$ yields

$$\begin{aligned} E|\Delta X^n_t|^2 &= E\int_0^t \Big[2\Delta X^n_r\big(\Delta b^n(r,X^n_r) + \alpha_r\Delta X^n_r\big) + \big|\Delta\sigma^n(r,X^n_r) + \beta_r\Delta X^n_r\big|^2\Big]\,dr \\ &\le E\int_0^t \Big[2|\Delta X^n_r||\Delta b^n(r,X^n_r)| + C|\Delta X^n_r|^2 + C|\Delta\sigma^n(r,X^n_r)|^2\Big]\,dr \\ &\le CE\Big[2\sup_{t\in[0,T]}|\Delta X^n_t|\int_0^t |\Delta b^n(r,X^n_r)|\,dr + \int_0^t |\Delta\sigma^n(r,X^n_r)|^2\,dr\Big] \\ &\le \varepsilon E\Big[\sup_{t\in[0,T]}|\Delta X^n_t|^2\Big] + \varepsilon^{-1}CE\int_0^T \big[|\Delta b^n(r,X^n_r)|^2 + |\Delta\sigma^n(r,X^n_r)|^2\big]\,dr. \end{aligned}$$

Therefore, taking the supremum over $t$ on both sides yields

$$\sup_{t\in[0,T]}E|\Delta X^n_t|^2 \le \varepsilon E\Big[\sup_{t\in[0,T]}|\Delta X^n_t|^2\Big] + \varepsilon^{-1}CE\int_0^T \big[|\Delta b^n(r,X^n_r)|^2 + |\Delta\sigma^n(r,X^n_r)|^2\big]\,dr. \qquad (3.8)$$

Finally, plugging (3.8) back into (3.6), we get

$$\begin{aligned} E\Big[\sup_{t\in[0,T]}|\Delta X^n_t|^2\Big] &\le CE\int_0^T \big[|\Delta b^n(r,X^n_r)|^2 + |\Delta\sigma^n(r,X^n_r)|^2 + |\Delta X^n_r|^2\big]\,dr \\ &\le CE\int_0^T \big[|\Delta b^n(r,X^n_r)|^2 + |\Delta\sigma^n(r,X^n_r)|^2\big]\,dr + CT\sup_{t\in[0,T]}E|\Delta X^n_t|^2 \\ &\le C\varepsilon E\Big[\sup_{t\in[0,T]}|\Delta X^n_t|^2\Big] + \varepsilon^{-1}CE\int_0^T \big[|\Delta b^n(r,X^n_r)|^2 + |\Delta\sigma^n(r,X^n_r)|^2\big]\,dr. \end{aligned}$$

Choosing $\varepsilon := \frac{1}{2C}$ and rearranging the terms on both sides, we get

$$E\Big[\sup_{t\in[0,T]}|\Delta X^n_t|^2\Big] \le CE\int_0^T \big[|\Delta b^n(r,X^n_r)|^2 + |\Delta\sigma^n(r,X^n_r)|^2\big]\,dr.$$

Note: since all the coefficients in the forward equation are independent of the system index $n$, the constant $C$ is uniform in $n$. Now, applying the result that $\theta^n \to \theta$
and both $\theta^n, \theta$ being bounded, the Bounded Convergence Theorem yields

$$\lim_{n\to\infty} E\Big[\sup_{t\in[0,T]}|X^n_t - X_t|^2\Big] = \lim_{n\to\infty} E\Big[\sup_{t\in[0,T]}|\Delta X^n_t|^2\Big] \le \lim_{n\to\infty} CE\int_0^T \big[|\Delta b^n(r,X^n_r)|^2 + |\Delta\sigma^n(r,X^n_r)|^2\big]\,dr \le \lim_{n\to\infty} CE\int_0^T \big|\theta^n(r,X^n_r) - \theta(r,X^n_r)\big|^2\,dr = 0.$$

For the backward equation,

$$E|Y^n_t - Y_t|^2 = E\big|\theta^n(t,X^n_t) - \theta(t,X_t)\big|^2 = E\big[\big|\theta^n(t,X^n_t) - \theta^n(t,X_t) + \theta^n(t,X_t) - \theta(t,X_t)\big|^2\big] \le 2CE|X^n_t - X_t|^2 + 2E\big|\theta^n(t,X_t) - \theta(t,X_t)\big|^2 \to 0.$$

Recall the backward process after application of the Girsanov transformation:

$$\begin{aligned} \bar{Y}^n_0 &= r_T\,g(X^n_T) + \int_0^T \big[r_t f(t,X^n_t,0,0) + \langle\beta_t + \eta^n_t, \bar{Z}^n_t\rangle\big]\,dt - \int_0^T \langle\bar{Z}^n_t, dW_t\rangle \\ &= \bar{Y}^n_T + \int_0^T \big[r_t f(t,X^n_t,0,0) + n\bar{\psi}(t,\bar{Z}^n_t)\big]\,dt - \int_0^T \langle\bar{Z}^n_t,\ dW_t - \beta_t\,dt\rangle \\ &= \bar{Y}^n_T + \int_0^T \big[r_t f(t,X^n_t,0,0) + n\bar{\psi}(t,\bar{Z}^n_t)\big]\,dt - \int_0^T \langle\bar{Z}^n_t, d\bar{W}_t\rangle, \end{aligned}$$

where $\bar{\psi}(s,\bar{z}) = |(\bar{z} - r_s L_s)^- + (\bar{z} - r_s U_s)^+|_1$ and $\bar{W}_t = W_t - \int_0^t \beta_s\,ds$. Denote $\bar{A}^n_t = \int_0^t n\bar{\psi}(s,\bar{Z}^n_s)\,ds$; then one can easily see that $\bar{A}^n_0 = 0$, and

$$\begin{aligned} E\big[|\bar{A}^n_T|^2\big] &= E\Big[\Big|\bar{Y}^n_0 - \bar{Y}^n_T - \int_0^T r_t f(t,X^n_t,0,0)\,dt + \int_0^T \langle\bar{Z}^n_t, d\bar{W}_t\rangle\Big|^2\Big] \\ &\le E\Big[\Big(|\bar{Y}^n_0| + |\bar{Y}^n_T| + \Big|\int_0^T r_t f(t,X^n_t,0,0)\,dt\Big| + \Big|\int_0^T \langle\bar{Z}^n_s, d\bar{W}_s\rangle\Big|\Big)^2\Big] \\ &\le 4E\big[|\bar{Y}^n_0|^2 + |\bar{Y}^n_T|^2\big] + 4E\Big[\Big|\int_0^T r_t f(t,X^n_t,0,0)\,dt\Big|^2\Big] + 4E\int_0^T |\bar{Z}^n_s|^2\,ds \\ &\le C_1 + 4E\int_0^T |\bar{Z}^n_s|^2\,ds. \qquad (3.9) \end{aligned}$$

Note: the last step is due to the fact that $|\bar{Y}^n_t| = |r_t\,\theta^n(t,X^n_t)| \le C$; hence $C_1$ is obviously independent of $n$. On the other hand, applying Itô's formula to $|\bar{Y}^n_t|^2$, we obtain

$$|\bar{Y}^n_t|^2 = |\bar{Y}^n_T|^2 - \int_t^T |\bar{Z}^n_s|^2\,ds - 2\int_t^T \bar{Y}^n_s\langle\bar{Z}^n_s, d\bar{W}_s\rangle + 2\int_t^T \bar{Y}^n_s\,d\bar{A}^n_s + 2\int_t^T \bar{Y}^n_s\,r_s f(s,X^n_s,0,0)\,ds.$$

Note: since $\bar{Z}^n \in L^2(\mathbb{F})$ and $\bar{Y}^n$ is bounded, $\bar{Y}^n\bar{Z}^n \in L^2(\mathbb{F})$; hence one can show that $\int_t^T \bar{Y}^n_s\langle\bar{Z}^n_s, d\bar{W}_s\rangle$ is a true martingale.
Taking expectations on both sides and letting $t = 0$, the boundedness of $\bar{Y}^n_T$ and the Cauchy--Schwarz inequality yield

$$\begin{aligned} E\Big[|\bar{Y}^n_0|^2 + \int_0^T |\bar{Z}^n_s|^2\,ds\Big] &= E\Big\{|\bar{Y}^n_T|^2 + 2\int_0^T \bar{Y}^n_s\,d\bar{A}^n_s + 2\int_0^T \bar{Y}^n_s\,r_s f(s,X^n_s,0,0)\,ds\Big\} \\ &\le E\Big[|\bar{Y}^n_T|^2 + 2\int_0^T \bar{Y}^n_s\,r_s f(s,X^n_s,0,0)\,ds\Big] + 2E\Big[\bar{A}^n_T\sup_{s\in[0,T]}|\bar{Y}^n_s|\Big] \\ &\le C_2 + 2E\Big[\bar{A}^n_T\sup_{s\in[0,T]}|\bar{Y}^n_s|\Big] \\ &\le C_2 + 2\Big\{E\big[|\bar{A}^n_T|^2\big]\Big\}^{1/2}\Big\{E\Big[\sup_{s\in[0,T]}|\bar{Y}^n_s|^2\Big]\Big\}^{1/2} \\ &\le C_2 + \frac18 E\big[|\bar{A}^n_T|^2\big] + 8E\Big[\sup_{s\in[0,T]}|\bar{Y}^n_s|^2\Big] \le C_3 + \frac18 E\big[|\bar{A}^n_T|^2\big], \end{aligned}$$

where again $C_2, C_3$ are independent of $n$. Hence, together with (3.9), we have

$$E\int_0^T |\bar{Z}^n_s|^2\,ds \le C_3 + \frac18 E\big[|\bar{A}^n_T|^2\big] \le C_3 + \frac18\Big(C_1 + 4E\int_0^T |\bar{Z}^n_s|^2\,ds\Big) = C_3 + \frac{C_1}{8} + \frac12 E\int_0^T |\bar{Z}^n_s|^2\,ds.$$

Thus

$$E\int_0^T |\bar{Z}^n_s|^2\,ds \le 2C_3 + \frac{C_1}{4} \triangleq \tilde{C}, \qquad \forall n.$$

Consequently, we have

$$E\big[|\bar{A}^n_T|^2\big] \le C_1 + 4E\int_0^T |\bar{Z}^n_s|^2\,ds \le C_1 + 4\tilde{C}, \qquad \forall n.$$

Note: all the properties derived for the $(\bar{Y}^n,\bar{Z}^n)$ system apply equivalently to the original $(Y^n,Z^n)$, by the boundedness of $r_t$. Therefore there exists some process $Z \in L^2_{\mathbb{F}}([0,T];\mathbb{R}^d)$ such that

$$Z^n \rightharpoonup Z \quad \text{weakly in } L^2_{\mathbb{F}}([0,T];\mathbb{R}^d),$$

possibly along a subsequence.

Proposition 3.2.9 Assume the assumptions (H1)--(H3), and that any of the conditions in Theorem 3.2.1 or Theorem 3.2.3 is satisfied. Then there exists $f^0 \in L^2_{\mathbb{F}}([0,T];\mathbb{R})$ such that

$$f(s,X^n_s,Y^n_s,Z^n_s) \rightharpoonup f^0_s \quad \text{weakly in } L^2_{\mathbb{F}}([0,T];\mathbb{R}).$$

Proof.
Recall that the penalized system is the following:

$$\begin{cases} X^n_t = x + \displaystyle\int_0^t b(s,X^n_s,Y^n_s)\,ds + \int_0^t \langle\sigma(s,X^n_s,Y^n_s),\ dW_s\rangle, \\ Y^n_t = g(X^n_T) + \displaystyle\int_t^T f(s,X^n_s,Y^n_s,Z^n_s)\,ds + \int_t^T n\psi(s,Z^n_s)\,ds - \int_t^T \langle Z^n_s, dW_s\rangle, \end{cases}$$

and from the proof of Lemma 3.2.5,

$$f(s,X^n_s,Y^n_s,Z^n_s) = \alpha_s Y^n_s + \langle\beta_s, Z^n_s\rangle + f(s,X^n_s,0,0);$$

then it is obvious that

$$E\int_0^T \big|f(s,X^n_s,Y^n_s,Z^n_s)\big|^2\,ds = E\int_0^T \big|\alpha_s Y^n_s + \langle\beta_s, Z^n_s\rangle + f(s,X^n_s,0,0)\big|^2\,ds \le 3E\int_0^T \big[|\alpha_s|^2|Y^n_s|^2 + |\beta_s|^2|Z^n_s|^2 + |f(s,X^n_s,0,0)|^2\big]\,ds.$$

Since $|\alpha_s| \le L$, $|\beta_s| \le L$, $\forall s \in [t,T]$, and $f(t,x,0,0)$ is bounded uniformly in $(t,x)$, together with the facts that $Y^n_s = \theta^n(s,X^n_s)$ where $|\theta^n| \le C$, $\forall n$, and $E\int_0^T |\bar{Z}^n_s|^2\,ds \le \tilde{C}$, $\forall n$, the above implies that

$$E\int_0^T \big|f(s,X^n_s,Y^n_s,Z^n_s)\big|^2\,ds \le C;$$

hence there exists $f^0 \in L^2_{\mathbb{F}}([0,T];\mathbb{R})$ such that $f(s,X^n_s,Y^n_s,Z^n_s) \rightharpoonup f^0_s$ weakly.

Now we proceed to discuss the regularity of the paths of the process $Y$. First, recall

$$A^n_s = \int_0^s n\psi(r,Z^n_r)\,dr, \qquad (3.10)$$

and further define

$$A_s = \theta(0,X_0) - Y_s - \int_0^s f^0_r\,dr + \int_0^s \langle Z_r, dW_r\rangle; \qquad (3.11)$$

then it is equivalent to say that

$$A^n_s = \theta^n(0,X^n_0) - Y^n_s - \int_0^s f(r,X^n_r,Y^n_r,Z^n_r)\,dr + \int_0^s \langle Z^n_r, dW_r\rangle.$$

According to Proposition 3.2.8, it is not hard to see the result of the following lemma.

Lemma 3.2.10 $A^n$ converges to $A$ weakly in $L^2_{\mathbb{F}}([0,T];\mathbb{R})$.

Proof.
For any $\varphi \in L^2_{\mathbb{F}}([0,T];\mathbb{R})$,

$$\begin{aligned} E\int_0^T \varphi(t)\big(A^n_t - A_t\big)\,dt &= E\int_0^T \varphi(t)\Big[\theta^n(0,X^n_0) - Y^n_t - \int_0^t f(r,X^n_r,Y^n_r,Z^n_r)\,dr + \int_0^t \langle Z^n_r, dW_r\rangle \\ &\qquad - \Big(\theta(0,X_0) - Y_t - \int_0^t f^0_r\,dr + \int_0^t \langle Z_r, dW_r\rangle\Big)\Big]\,dt \\ &= E\int_0^T \varphi(t)\Big\{\theta^n(0,X^n_0) - \theta(0,X_0) - (Y^n_t - Y_t) - \int_0^t \big[f(r,X^n_r,Y^n_r,Z^n_r) - f^0_r\big]\,dr + \int_0^t \langle Z^n_r - Z_r,\ dW_r\rangle\Big\}\,dt. \end{aligned}$$

We consider the right-hand side in three groups, the first being

$$E\int_0^T \varphi(t)\big\{\theta^n(0,X^n_0) - \theta(0,X_0)\big\}\,dt = E\int_0^T \varphi(t)\big\{\theta^n(0,X^n_0) - \theta^n(0,X_0) + \theta^n(0,X_0) - \theta(0,X_0)\big\}\,dt.$$

Notice that $\theta^n$ is Lipschitz $\forall n$; then $-C|X^n_0 - X_0| \le \theta^n(0,X^n_0) - \theta^n(0,X_0) \le C|X^n_0 - X_0|$, hence

$$-E\int_0^T C\varphi(t)|X^n_0 - X_0|\,dt \le E\int_0^T \varphi(t)\big[\theta^n(0,X^n_0) - \theta^n(0,X_0)\big]\,dt \le E\int_0^T C\varphi(t)|X^n_0 - X_0|\,dt,$$

where, discarding the Lipschitz constant, by the Cauchy--Schwarz inequality, as $n\to\infty$,

$$E\int_0^T \varphi(t)|X^n_0 - X_0|\,dt \le \Big\{E\int_0^T |\varphi(t)|^2\,dt\Big\}^{\frac12}\Big\{E\int_0^T |X^n_0 - X_0|^2\,dt\Big\}^{\frac12} \le C\Big\{E\big[T|X^n_0 - X_0|^2\big]\Big\}^{\frac12} \le C\Big\{E\Big[\sup_{t\in[0,T]}|X^n_t - X_t|^2\Big]\Big\}^{\frac12} \to 0,$$

where the last step is due to the strong convergence of $X^n$ to $X$ from Proposition 3.2.8.

Now, since $E|Y^n_t - Y_t|^2 \to 0$, we have $E\int_0^T |Y^n_t - Y_t|^2\,dt = \int_0^T E|Y^n_t - Y_t|^2\,dt \to 0$; hence, by the Cauchy--Schwarz inequality,

$$E\int_0^T \varphi(t)\big(Y^n_t - Y_t\big)\,dt \le \Big\{E\int_0^T |\varphi(t)|^2\,dt\Big\}^{\frac12}\Big\{E\int_0^T |Y^n_t - Y_t|^2\,dt\Big\}^{\frac12} \le C\Big\{E\int_0^T |Y^n_t - Y_t|^2\,dt\Big\}^{\frac12},$$

which converges to 0.

Lastly, the remainder is

$$E\int_0^T \varphi(t)\Big\{-\int_0^t \big[f(r,X^n_r,Y^n_r,Z^n_r) - f^0_r\big]\,dr + \int_0^t \langle Z^n_r - Z_r,\ dW_r\rangle\Big\}\,dt,$$

where, by Fubini's Theorem,

$$E\int_0^T \varphi(t)\Big\{\int_0^t \big[f(r,X^n_r,Y^n_r,Z^n_r) - f^0_r\big]\,dr\Big\}\,dt = E\int_0^T \int_r^T \varphi(t)\big[f(r,X^n_r,Y^n_r,Z^n_r) - f^0_r\big]\,dt\,dr = E\int_0^T \Phi(r)\big[f(r,X^n_r,Y^n_r,Z^n_r) - f^0_r\big]\,dr,$$

where $\Phi(r) = \int_r^T \varphi(t)\,dt$; furthermore, since $|\Phi(r)|^2 \le T\int_0^T |\varphi(t)|^2\,dt$ by the Cauchy--Schwarz inequality,

$$E\int_0^T |\Phi(r)|^2\,dr \le T^2 E\int_0^T |\varphi(t)|^2\,dt < \infty;$$

therefore $\Phi \in L^2_{\mathbb{F}}([0,T];\mathbb{R})$. Together with the fact that $f(s,X^n_s,Y^n_s,Z^n_s) \rightharpoonup$
$f^0_s$, we can conclude that

$$E\int_0^T \varphi(t)\Big\{\int_0^t \big[f(r,X^n_r,Y^n_r,Z^n_r) - f^0_r\big]\,dr\Big\}\,dt \to 0.$$

Finally, following the same procedure, with the weak convergence of $Z^n$ to $Z$, we can draw the similar conclusion that

$$E\int_0^T \varphi(t)\Big\{\int_0^t \langle Z^n_r - Z_r,\ dW_r\rangle\Big\}\,dt \to 0.$$

Remark 3.2.11 With just the weak convergence of $Z^n$ to $Z$, we cannot conclude that $f(r,X^n_r,Y^n_r,Z^n_r)$ converges weakly to $f(r,X_r,Y_r,Z_r)$; hence Lemma 3.2.10 would no longer hold. Therefore, we define $A$ directly using the weak limit of $f(r,X^n_r,Y^n_r,Z^n_r)$, i.e. with $f^0_r$ in its place.

On the other hand, by the definition of $\psi$, $A^n$ is a continuous and monotonically increasing process in $t$, $\forall n$; then by Lemma 3.2.10, for $0 \le s_1 < s_2 \le T$,

$$P\big(A_{s_1} \le A_{s_2}\big) = 1.$$

Thus one shows that $A$ is also a monotonically increasing process in $t$. Hence both one-sided limits $A_{s+}, A_{s-}$ exist $\forall s \in [t,T]$. Let $\hat{A}_s = A_{s+}$; then obviously $\hat{A}$ is càdlàg.

Lemma 3.2.12 $A_s = \hat{A}_s$, $\forall s \in [0,T]$, a.s.

Proof. First, it is obvious that, for fixed $s$, $A_s \le \hat{A}_s$ a.s. Now, for the other side of the inequality, we begin by realizing that $Y_s + A_s$ is continuous, by (3.11) and the uniform Lipschitz property of the limiting decoupling field $\theta$. Thus

$$\lim_{r\downarrow s} Y_r = \lim_{r\downarrow s}\big(Y_r + A_r\big) - \lim_{r\downarrow s} A_r = Y_s + A_s - \hat{A}_s, \qquad \text{a.s.} \qquad (3.12)$$

On the other hand, by definition and Lemma 3.2.6, $Y_r = \theta(r,X_r) \ge \theta^n(r,X_r)$, $\forall n$. Hence

$$\lim_{r\downarrow s} Y_r \ge \lim_{r\downarrow s}\theta^n(r,X_r) = \theta^n(s,X_s), \qquad \forall n,$$

by the continuity of $\theta^n$ and $X$ for each $n$. Combining with (3.12) yields

$$Y_s + A_s = \lim_{r\downarrow s} Y_r + \hat{A}_s \ge \theta^n(s,X_s) + \hat{A}_s, \qquad \forall n.$$

Letting $n \to \infty$, we obtain

$$Y_s + A_s \ge \lim_{n\to\infty}\theta^n(s,X_s) + \hat{A}_s = \theta(s,X_s) + \hat{A}_s = Y_s + \hat{A}_s.$$

Hence $A_s \ge \hat{A}_s$, and thus the equality holds a.s. In other words, $\hat{A}$ is a càdlàg version of $A$. From now on, we replace $A$ by its càdlàg version without further specification. Consequently,

$$Y_s = \theta(0,X_0) - \int_0^s f^0_r\,dr + \int_0^s \langle Z_r, dW_r\rangle - A_s$$

is indeed a semimartingale, with the above decomposition into a martingale and an adapted càdlàg process of bounded variation.
We have the following theorem, which is essentially the final step toward the well-posedness of the original constrained FBSDE system.

Theorem 3.2.13 Assume the assumptions (H1)--(H3); let $X, Z$ be defined by (3.5) and Proposition 3.2.8, define

$$D_s \triangleq \int_0^s \big[f^0_r - f(r,X_r,Y_r,Z_r)\big]\,dr + A_s, \qquad (3.13)$$

and let $Y_s = \theta(s,X_s)$, where $\theta$ is the monotone limit of the decoupling fields $\theta^n$. Then $(X,Y,Z,D)$ is an adapted solution to the FBSDE

$$\begin{cases} X_t = x + \displaystyle\int_0^t b(s,X_s,Y_s)\,ds + \int_0^t \langle\sigma(s,X_s,Y_s),\ dW_s\rangle, \\ Y_t = g(X_T) + \displaystyle\int_t^T f(s,X_s,Y_s,Z_s)\,ds - \int_t^T \langle Z_s, dW_s\rangle + D_T - D_t, \end{cases} \qquad (3.14)$$

with the constraint $Z_t \in [L_t,U_t]$.

Proof. We first show that $f^0_s - f(s,X_s,Y_s,Z_s) \ge 0$, $dt\times dP$-a.e. In fact, using the Lipschitz property of $f$, for each $n$ we have

$$f(s,X^n_s,Y^n_s,Z^n_s) - f(s,X_s,Y_s,Z_s) \ge -k_2\big[|X^n_s - X_s| + |Y^n_s - Y_s|\big] + f(s,X_s,Y_s,Z^n_s) - f(s,X_s,Y_s,Z_s) = -k_2\big[|X^n_s - X_s| + |Y^n_s - Y_s|\big] + \big(Z^n_s - Z_s\big)\nabla_z f_s,$$

where

$$\nabla_z f_s = \frac{f(s,X_s,Y_s,Z^n_s) - f(s,X_s,Y_s,Z_s)}{Z^n_s - Z_s}.$$

Using the boundedness of $\nabla_z f$, one can see that for any $\phi \in L^2_{\mathbb{F}}([0,T];\mathbb{R})$ such that $\phi \ge 0$, $dt\times dP$-a.e., it holds that

$$E\int_0^T \big[f^0_s - f(s,X_s,Y_s,Z_s)\big]\phi_s\,ds = \lim_{n\to\infty} E\int_0^T \big[f(s,X^n_s,Y^n_s,Z^n_s) - f(s,X_s,Y_s,Z_s)\big]\phi_s\,ds \ge -k_2\lim_{n\to\infty} E\int_0^T \big[|X^n_s - X_s| + |Y^n_s - Y_s|\big]\phi_s\,ds + \lim_{n\to\infty} E\int_0^T \big(Z^n_s - Z_s\big)\nabla_z f_s\,\phi_s\,ds = 0,$$

and the last equality is due to Proposition 3.2.8 and the fact that $|\nabla_z f_s| \le k_2$, i.e. $\nabla_z f_s\,\phi_s \in L^2_{\mathbb{F}}([0,T];\mathbb{R})$. Therefore $f^0_s - f(s,X_s,Y_s,Z_s) \ge 0$, $dt\times dP$-a.e.; hence $D_s = \int_0^s [f^0_r - f(r,X_r,Y_r,Z_r)]\,dr + A_s$ is a non-decreasing càdlàg process.

Now recall that, by definition,

$$A_s = \theta(0,X_0) - Y_s - \int_0^s f^0_r\,dr + \int_0^s \langle Z_r, dW_r\rangle;$$

taking $s = T$, we obtain

$$A_T = \theta(0,X_0) - Y_T - \int_0^T f^0_r\,dr + \int_0^T \langle Z_r, dW_r\rangle = \theta(0,X_0) - g(X_T) - \int_0^T f^0_r\,dr + \int_0^T \langle Z_r, dW_r\rangle.$$

Notice: since $\theta^n(T,x) = g(x)$, $\forall n$, and $\theta^n \nearrow \theta$, we have $\theta(T,x) = \lim_{n\to\infty}\theta^n(T,x) = g(x)$. Hence $Y_T = \theta(T,X_T) = g(X_T)$.
Now, subtracting $A_s$ from the above yields

$$A_T - A_s = Y_s - g(X_T) - \int_s^T f^0_r\,dr + \int_s^T \langle Z_r, dW_r\rangle.$$

Rewriting the above, we see that

$$\begin{aligned} Y_t &= g(X_T) + \int_t^T f^0_s\,ds - \int_t^T \langle Z_s, dW_s\rangle + A_T - A_t \\ &= g(X_T) + \int_t^T f(s,X_s,Y_s,Z_s)\,ds - \int_t^T \langle Z_s, dW_s\rangle + \int_t^T \big[f^0_s - f(s,X_s,Y_s,Z_s)\big]\,ds + A_T - A_t \\ &= g(X_T) + \int_t^T f(s,X_s,Y_s,Z_s)\,ds - \int_t^T \langle Z_s, dW_s\rangle + D_T - D_t. \qquad (3.15) \end{aligned}$$

Therefore $(X,Y,Z,D)$ solves the above FBSDE; it now remains to check that $Z_t \in [L_t,U_t]$. Since $\psi(t,L_t) = 0$ and $\nabla_{z^i}\psi(t,\cdot) \in [-1,1]$, $\forall t \in [0,T]$, $i = 1,\dots,d$, we have

$$E\int_0^T \big|\psi(s,Z^n_s)\big|^2\,ds \le CE\int_0^T \big(|Z^n_s|^2 + 1\big)\,ds \le C, \qquad \text{uniformly in } n.$$

Hence, possibly along a subsequence, $\psi(s,Z^n_s) \rightharpoonup \psi^0_s$ for some $\psi^0 \in L^2_{\mathbb{F}}([0,T];\mathbb{R})$. Before we proceed, we first show that $\psi^0_s - \psi(s,Z_s) \ge 0$, $dt\times dP$-a.e. By the construction of $\psi$, for each $n$,

$$\psi(s,Z^n_s) - \psi(s,Z_s) \ge -|Z^n_s - Z_s|.$$
Instead, we only have the monotonicity of the decoupling fields $\{\theta_n\}_{n\ge0}$. In what follows we show how this difficulty can be circumvented.

Case I. We begin with a simple case.

Lemma 3.3.1. Assume (H1)-(H4), and that the mapping $(t,x)\mapsto\theta(t,x)$ is continuous, $\mathbb{P}$-a.s. Then it holds that
\[
\lim_{n\to\infty}\mathbb{E}\Big[\sup_{(t,x)\in K_N}|\theta_n(t,x)-\theta(t,x)|^2\Big] = 0
\]
for any $K_N = [0,T]\times B_N$, where $B_N$ is the ball of radius $N$ in $\mathbb{R}$.

Proof. Since $\theta$ is continuous and $\theta_n(t,x)\nearrow\theta(t,x)$ for every $(t,x)$ as $n\to\infty$, by Dini's Theorem the convergence is uniform on any compact set $K_N$, $N>0$. That is,
\[
\lim_{n\to\infty}\sup_{(t,x)\in K_N}|\theta_n(t,x)-\theta(t,x)|^2 = 0, \quad \mathbb{P}\text{-a.s.}
\]
The result then follows from an easy application of the Bounded Convergence Theorem and Lemma 3.2.5. $\square$

Lemma 3.3.2. Assume (H1)-(H4), and that the mapping $(t,x)\mapsto\theta(t,x)$ is continuous, $\mathbb{P}$-a.s. Then
\[
\lim_{n\to\infty}\mathbb{E}\Big[\sup_{t\in[0,T]}|Y^n_t - Y_t|^2\Big] = 0.
\]

Proof. Similarly to the proof of the strong convergence of $X^n$, we can show that $\mathbb{E}\sup_{s\in[0,T]}|X_s|^2 < \infty$. Hence for fixed $\varepsilon>0$ there exists $N_0$ such that
\[
\mathbb{P}\Big(\sup_{s\in[0,T]}|X_s| > N_0\Big) \le \varepsilon.
\]
Further, notice that
\[
\mathbb{E}\Big[\sup_{s\in[0,T]}|\theta_n(s,X_s)-\theta(s,X_s)|^2\Big]
= \mathbb{E}\Big[\sup_{s\in[0,T]}|\theta_n(s,X_s)-\theta(s,X_s)|^2;\ \sup_s|X_s|\le N_0\Big]
+ \mathbb{E}\Big[\sup_{s\in[0,T]}|\theta_n(s,X_s)-\theta(s,X_s)|^2;\ \sup_s|X_s|> N_0\Big]
=: I_1 + I_2,
\]
and consider the two parts separately. First,
\[
I_1 \le \mathbb{E}\Big[\sup_{(s,x)\in K_{N_0}}|\theta_n(s,x)-\theta(s,x)|^2\Big] \to 0
\]
as $n\to\infty$, thanks to Lemma 3.3.1. On the other hand, by the boundedness of $\theta_n$ and $\theta$, sending $n$ to infinity,
\[
\limsup_{n\to\infty} I_2 \le C\,\mathbb{P}\Big(\sup_{s\in[0,T]}|X_s|>N_0\Big) \le C\varepsilon.
\]
Therefore
\[
\limsup_{n\to\infty}\mathbb{E}\Big[\sup_{s\in[0,T]}|\theta_n(s,X_s)-\theta(s,X_s)|^2\Big] \le C\varepsilon.
\]
Since $\varepsilon$ is arbitrary, letting $\varepsilon\to$
$0$, we obtain
\[
\lim_{n\to\infty}\mathbb{E}\Big[\sup_{s\in[0,T]}|\theta_n(s,X_s)-\theta(s,X_s)|^2\Big] = 0.
\]
Thanks to the strong convergence of $X^n$ and the above observations, as $n\to\infty$,
\[
\mathbb{E}\Big[\sup_{s\in[0,T]}|Y^n_s - Y_s|^2\Big]
= \mathbb{E}\Big[\sup_{s\in[0,T]}|\theta_n(s,X^n_s)-\theta(s,X_s)|^2\Big]
\le 2\mathbb{E}\Big[\sup_{s\in[0,T]}|\theta_n(s,X^n_s)-\theta_n(s,X_s)|^2\Big] + 2\mathbb{E}\Big[\sup_{s\in[0,T]}|\theta_n(s,X_s)-\theta(s,X_s)|^2\Big]
\le C\,\mathbb{E}\Big[\sup_{s\in[0,T]}|X^n_s - X_s|^2\Big] + 2\mathbb{E}\Big[\sup_{s\in[0,T]}|\theta_n(s,X_s)-\theta(s,X_s)|^2\Big] \to 0,
\]
proving the lemma. $\square$

Lemma 3.3.3. Let $A$ be an increasing RCLL process on $[0,T]$ with $A_0 = 0$ and $\mathbb{E}|A_T|^2 <\infty$. Then for any $\delta,\varepsilon>0$ there exists a finite family of pairs of stopping times $\{\sigma_i,\tau_i\}$, $i=0,\dots,N$, with $0\le\sigma_i\le\tau_i\le T$, such that
(i) $(\sigma_i,\tau_i]\cap(\sigma_j,\tau_j] = \emptyset$ for $i\ne j$;
(ii) $\mathbb{E}\sum_{i=0}^N[\tau_i-\sigma_i](\omega) \ge T-\varepsilon$;
(iii) $\sum_{i=0}^N\mathbb{E}\sum_{t\in(\sigma_i,\tau_i]}|\Delta A_t|^2 \le \delta$.

Theorem 3.3.4. Assume (H1)-(H4), and that the mapping $(t,x)\mapsto\theta(t,x)$ is continuous, $\mathbb{P}$-a.s. Then
\[
Z^n \to Z \ \text{ in } L^p_{\mathbb{F}}([0,T];\mathbb{R}), \quad p\in[1,2),
\]
where $Z$ is the weak limit of $Z^n$ in Theorem 3.2.8.

Proof. Recall that
\[
Y_t = Y_0 - \int_0^t f^0_s\,ds + \int_0^t\langle Z_s,dW_s\rangle - A_t,
\]
and further recall that for each $n$,
\[
Y^n_t = Y^n_T + \int_t^T f(s,X^n_s,Y^n_s,Z^n_s)\,ds - \int_t^T\langle Z^n_s,dW_s\rangle + A^n_T - A^n_t,
\]
where $Y^n_T = g(X^n_T)$.
Now take $t=0$; we obtain
\[
Y^n_0 = Y^n_T + \int_0^T f(s,X^n_s,Y^n_s,Z^n_s)\,ds - \int_0^T\langle Z^n_s,dW_s\rangle + A^n_T - A^n_0.
\]
Noticing that $A^n_0 = 0$ and subtracting the above from $Y^n_t$ yields
\[
Y^n_t = Y^n_0 - \int_0^t f(s,X^n_s,Y^n_s,Z^n_s)\,ds + \int_0^t\langle Z^n_s,dW_s\rangle - A^n_t
= Y^n_0 - \int_0^t f^n_s\,ds + \int_0^t\langle Z^n_s,dW_s\rangle - A^n_t.
\]
Then the difference of $Y^n_t$ and $Y_t$ has the form
\[
Y^n_t - Y_t = Y^n_0 - Y_0 - \int_0^t[f^n_s - f^0_s]\,ds + \int_0^t\langle Z^n_s - Z_s, dW_s\rangle - [A^n_t - A_t].
\]
Next, following the argument in the proof of Theorem 2.1 in [26], apply Itô's formula for non-continuous semimartingales to $|Y^n_t - Y_t|^2$ on a given subinterval $(\sigma,\tau]$, where $\sigma\le\tau$ are stopping times:
\[
|Y^n_\tau - Y_\tau|^2 = |Y^n_\sigma - Y_\sigma|^2 + \int_\sigma^\tau 2(Y^n_{s-}-Y_{s-})\,d(Y^n_s - Y_s) + \int_\sigma^\tau d[Y^n-Y,Y^n-Y]^c_s + \sum_{s\in(\sigma,\tau]}\big\{|Y^n_s-Y_s|^2 - |Y^n_{s-}-Y_{s-}|^2 - 2(Y^n_{s-}-Y_{s-})\Delta(Y^n_s-Y_s)\big\}
\]
\[
= |Y^n_\sigma - Y_\sigma|^2 - \int_\sigma^\tau\big[2(Y^n_s-Y_s)(f^n_s-f^0_s) - |Z^n_s-Z_s|^2\big]\,ds + \int_\sigma^\tau 2(Y^n_{s-}-Y_{s-})\langle Z^n_s-Z_s,dW_s\rangle - \int_\sigma^\tau 2(Y^n_{s-}-Y_{s-})\,d(A^n_s-A_s) + \sum_{s\in(\sigma,\tau]}|\Delta(Y^n_s-Y_s)|^2,
\]
since the part concerning the jumps is simply
\[
\sum_{s\in(\sigma,\tau]}\big\{|Y^n_s-Y_s|^2 - |Y^n_{s-}-Y_{s-}|^2 - 2(Y^n_{s-}-Y_{s-})\Delta(Y^n_s-Y_s)\big\} = \sum_{s\in(\sigma,\tau]}|\Delta(Y^n_s-Y_s)|^2.
\]
Noticing that $Y^n$ and $A^n$ are continuous and that $\Delta Y_t = -\Delta A_t$, taking expectations on both sides yields
\[
\mathbb{E}|Y^n_\sigma - Y_\sigma|^2 + \mathbb{E}\int_\sigma^\tau|Z^n_s-Z_s|^2\,ds
= \mathbb{E}|Y^n_\tau - Y_\tau|^2 + 2\mathbb{E}\int_\sigma^\tau(Y^n_s-Y_s)(f^n_s-f^0_s)\,ds - \mathbb{E}\sum_{s\in(\sigma,\tau]}(\Delta A_s)^2 + 2\mathbb{E}\int_\sigma^\tau(Y^n_{s-}-Y_{s-})\,dA^n_s - 2\mathbb{E}\int_\sigma^\tau(Y^n_{s-}-Y_{s-})\,dA_s
\]
\[
= \mathbb{E}|Y^n_\tau - Y_\tau|^2 + 2\mathbb{E}\int_\sigma^\tau(Y^n_s-Y_s)(f^n_s-f^0_s)\,ds + \mathbb{E}\sum_{s\in(\sigma,\tau]}(\Delta A_s)^2 + 2\mathbb{E}\int_\sigma^\tau(Y^n_s-Y_s)\,dA^n_s - 2\mathbb{E}\int_\sigma^\tau(Y^n_s-Y_s)\,dA_s,
\]
where we used, for the integral against $A$,
\[
\mathbb{E}\int_\sigma^\tau(Y^n_{s-}-Y_{s-})\,dA_s = \mathbb{E}\int_\sigma^\tau(Y^n_s-Y_s+\Delta Y_s)\,dA_s = \mathbb{E}\int_\sigma^\tau(Y^n_s-Y_s)\,dA_s - \mathbb{E}\sum_{s\in(\sigma,\tau]}(\Delta A_s)^2
\]
(recall $\Delta Y_s = -\Delta A_s$ and $Y^n$ is continuous), while by the continuity of $A^n$,
\[
\mathbb{E}\int_\sigma^\tau(Y^n_{s-}-Y_{s-})\,dA^n_s = \mathbb{E}\int_\sigma^\tau(Y^n_s-Y_s)\,dA^n_s.
\]
Hence, by discarding the nonnegative term $\mathbb{E}|Y^n_\sigma-Y_\sigma|^2$
on the left-hand side, the following inequality holds:
\[
\mathbb{E}\int_\sigma^\tau|Z^n_s-Z_s|^2\,ds \le \mathbb{E}|Y^n_\tau-Y_\tau|^2 + 2\mathbb{E}\int_\sigma^\tau|Y^n_s-Y_s||f^n_s-f^0_s|\,ds + \mathbb{E}\sum_{s\in(\sigma,\tau]}|\Delta A_s|^2 + 2\mathbb{E}\int_\sigma^\tau|Y^n_s-Y_s|\,dA^n_s + 2\mathbb{E}\int_\sigma^\tau|Y^n_s-Y_s|\,dA_s. \qquad (3.16)
\]
We proceed by considering the convergence of the terms on the right-hand side of the inequality as $n\to\infty$. Recall that
\[
\mathbb{E}\int_0^T|f^n_s|^2\,ds = \mathbb{E}\int_0^T|f(s,X^n_s,Y^n_s,Z^n_s)|^2\,ds \le C.
\]
Furthermore, since $f^0$, the weak limit of $f^n$, belongs to $L^2_{\mathbb{F}}([0,T];\mathbb{R})$, it also holds that $\mathbb{E}\int_0^T|f^0_s|^2\,ds \le C$. Hence, by Hölder's inequality, as $n\to\infty$,
\[
\mathbb{E}\int_0^T|Y^n_s-Y_s||f^n_s-f^0_s|\,ds \le \Big\{\mathbb{E}\int_0^T|Y^n_s-Y_s|^2\,ds\Big\}^{1/2}\Big\{\mathbb{E}\int_0^T|f^n_s-f^0_s|^2\,ds\Big\}^{1/2} \le C\Big\{\mathbb{E}\int_0^T|Y^n_s-Y_s|^2\,ds\Big\}^{1/2} \to 0, \qquad (3.17)
\]
where the convergence is implied by Proposition 3.2.8. Now recall that $\mathbb{E}|A_T|^2\le C$; then as $n\to\infty$,
\[
\mathbb{E}\int_0^T|Y^n_s-Y_s|\,dA_s \le \mathbb{E}\Big[\sup_{s\in[0,T]}|Y^n_s-Y_s|\,A_T\Big] \le \Big\{\mathbb{E}\sup_{s\in[0,T]}|Y^n_s-Y_s|^2\Big\}^{1/2}\big\{\mathbb{E}|A_T|^2\big\}^{1/2} \le C\Big\{\mathbb{E}\sup_{s\in[0,T]}|Y^n_s-Y_s|^2\Big\}^{1/2} \to 0. \qquad (3.18)
\]
Last but not least, following the same procedure, we obtain the analogous convergence for $\mathbb{E}\int_0^T|Y^n_s-Y_s|\,dA^n_s$, using the fact that $\mathbb{E}|A^n_T|^2\le C$ for all $n$:
\[
\mathbb{E}\int_0^T|Y^n_s-Y_s|\,dA^n_s \le \Big\{\mathbb{E}\sup_{s\in[0,T]}|Y^n_s-Y_s|^2\Big\}^{1/2}\big\{\mathbb{E}|A^n_T|^2\big\}^{1/2} \le C\Big\{\mathbb{E}\sup_{s\in[0,T]}|Y^n_s-Y_s|^2\Big\}^{1/2} \to 0. \qquad (3.19)
\]
By the convergences (3.17), (3.18) and (3.19), it is clear from the estimate (3.16) that once $A$ is continuous, i.e. $\Delta A_t \equiv 0$, then $Z^n$ tends to $Z$ strongly in $L^2_{\mathbb{F}}([0,T];\mathbb{R})$. For the general case, however, the situation becomes more complicated.
Thanks to Lemma 3.3.3, for any $\delta,\varepsilon>0$ there exists a finite family of disjoint intervals $(\sigma_i,\tau_i]$, $i=0,\dots,N$, where $0\le\sigma_i\le\tau_i\le T$ are stopping times, satisfying
\[
\mathbb{E}\sum_{i=0}^N[\tau_i-\sigma_i](\omega) \ge T-\frac{\varepsilon}{2}, \qquad (3.20)
\]
\[
\sum_{i=0}^N\mathbb{E}\sum_{t\in(\sigma_i,\tau_i]}|\Delta A_t|^2 \le \frac{\delta^2\varepsilon}{3}. \qquad (3.21)
\]
Now for each $\sigma=\sigma_i$ and $\tau=\tau_i$, apply the estimate (3.16) and then sum, to obtain
\[
\sum_{i=0}^N\mathbb{E}\int_{\sigma_i}^{\tau_i}|Z^n_s-Z_s|^2\,ds \le \sum_{i=0}^N\mathbb{E}|Y^n_{\tau_i}-Y_{\tau_i}|^2 + 2\mathbb{E}\int_0^T|Y^n_s-Y_s||f^n_s-f^0_s|\,ds + \sum_{i=0}^N\mathbb{E}\sum_{s\in(\sigma_i,\tau_i]}(\Delta A_s)^2 + 2\mathbb{E}\int_0^T|Y^n_s-Y_s|\,dA^n_s + 2\mathbb{E}\int_0^T|Y^n_s-Y_s|\,dA_s.
\]
By the convergence results (3.17), (3.18) and (3.19), it follows from (3.21) that
\[
\limsup_{n\to\infty}\sum_{i=0}^N\mathbb{E}\int_{\sigma_i}^{\tau_i}|Z^n_s-Z_s|^2\,ds \le \sum_{i=0}^N\mathbb{E}\sum_{s\in(\sigma_i,\tau_i]}(\Delta A_s)^2 \le \frac{\delta^2\varepsilon}{3}.
\]
Thus there exists an integer $n_\varepsilon$ such that for $n\ge n_\varepsilon$ we have
\[
\sum_{i=0}^N\mathbb{E}\int_{\sigma_i}^{\tau_i}|Z^n_s-Z_s|^2\,ds \le \frac{\delta^2\varepsilon}{2}. \qquad (3.22)
\]
On the other hand,
\[
\mathbb{E}\sum_{i=0}^N\int_{\sigma_i}^{\tau_i}|Z^n_s-Z_s|^2\,ds \ge \mathbb{E}\sum_{i=0}^N\int_{\sigma_i}^{\tau_i}|Z^n_s-Z_s|^2\mathbf{1}_{\{|Z^n_s-Z_s|\ge\delta\}}\,ds \ge \delta^2\,\mathbb{E}\sum_{i=0}^N\int_{\sigma_i}^{\tau_i}\mathbf{1}_{\{|Z^n_s-Z_s|\ge\delta\}}\,ds, \qquad (3.23)
\]
hence combining (3.22) and (3.23) we have
\[
\mathbb{E}\sum_{i=0}^N\int_{\sigma_i}^{\tau_i}\mathbf{1}_{\{|Z^n_s-Z_s|\ge\delta\}}\,ds \le \frac{\varepsilon}{2}. \qquad (3.24)
\]
Equivalently, in the product space $([0,T]\times\Omega,\ \mathcal{B}([0,T])\otimes\mathcal{F},\ m\otimes\mathbb{P})$, where $m$ denotes the Lebesgue measure on $[0,T]$, we have
\[
m\otimes\mathbb{P}\Big\{(s,\omega)\in\bigcup_{i=0}^N(\sigma_i,\tau_i]\times\Omega : |Z^n_s(\omega)-Z_s(\omega)|\ge\delta\Big\} \le \frac{\varepsilon}{2}.
\]
Now consider the following, together with (3.24):
\[
\mathbb{E}\int_0^T\mathbf{1}_{\{|Z^n_s-Z_s|\ge\delta\}}\,ds = \mathbb{E}\sum_{i=0}^N\int_{\sigma_i}^{\tau_i}\mathbf{1}_{\{|Z^n_s-Z_s|\ge\delta\}}\,ds + \mathbb{E}\int_{[0,T]\setminus\cup_{i=0}^N(\sigma_i,\tau_i]}\mathbf{1}_{\{|Z^n_s-Z_s|\ge\delta\}}\,ds
\le \frac{\varepsilon}{2} + \mathbb{E}\int_{[0,T]\setminus\cup_{i=0}^N(\sigma_i,\tau_i]}1\,ds
= \frac{\varepsilon}{2} + T - \mathbb{E}\sum_{i=0}^N[\tau_i-\sigma_i](\omega) \le \frac{\varepsilon}{2} + T - \Big(T-\frac{\varepsilon}{2}\Big) = \varepsilon,
\]
where the last inequality is due to (3.20). Hence, equivalently,
\[
m\otimes\mathbb{P}\big\{(s,\omega)\in[0,T]\times\Omega : |Z^n_s(\omega)-Z_s(\omega)|\ge\delta\big\} \le \varepsilon, \quad \forall n\ge n_\varepsilon.
\]
It follows that for any $\delta>0$,
\[
\lim_{n\to\infty} m\otimes\mathbb{P}\big\{(s,\omega)\in[0,T]\times\Omega : |Z^n_s(\omega)-Z_s(\omega)|\ge\delta\big\} = 0.
\]
Thus, on the product space $[0,T]\times\Omega$, the sequence $Z^n$ converges in measure to $Z$.
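The reason convergence in measure plus $L^2$-boundedness yields strong convergence only in $L^p$ for $p<2$, and not in $L^2$ itself, is illustrated by the classical deterministic example $Z_n = n\,\mathbf{1}_{[0,1/n^2]}$, a stand-in here for the difference $Z^n - Z$; all quantities below are computed in closed form:

```python
# Z_n = n * 1_{[0, 1/n^2]} on [0, 1]: converges to 0 in Lebesgue measure,
# stays on the unit sphere of L^2, yet converges to 0 in every L^p with p < 2.
for n in (10, 100, 1000, 10_000):
    measure_of_excess = 1.0 / n**2            # m{ Z_n >= delta } for any 0 < delta <= n
    l2_norm_sq = n**2 * (1.0 / n**2)          # ||Z_n||_2^2 = 1: no L^2 convergence
    assert abs(l2_norm_sq - 1.0) < 1e-9
    for p in (1.0, 1.5, 1.9):
        lp_norm_p = n**p * (1.0 / n**2)       # ||Z_n||_p^p = n^(p-2) -> 0 for p < 2
        assert abs(lp_norm_p - n**(p - 2.0)) < 1e-9
        assert lp_norm_p < 1.0                # already below 1 for n >= 10
```

The proof above exploits exactly this interpolation: Hölder's inequality trades the $L^2$ bound against the vanishing measure of the exceptional set.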
To show the strong convergence of $Z^n$ in $L^p_{\mathbb{F}}([0,T];\mathbb{R})$, notice that for arbitrary $\delta>0$,
\[
\mathbb{E}\int_0^T|Z^n_t-Z_t|^p\,dt = \mathbb{E}\int_0^T|Z^n_t-Z_t|^p\mathbf{1}_{\{|Z^n_t-Z_t|\ge\delta\}}\,dt + \mathbb{E}\int_0^T|Z^n_t-Z_t|^p\mathbf{1}_{\{|Z^n_t-Z_t|<\delta\}}\,dt
\le \mathbb{E}\int_0^T|Z^n_t-Z_t|^p\mathbf{1}_{\{|Z^n_t-Z_t|\ge\delta\}}\,dt + T\delta^p.
\]
Furthermore, since $p\in[1,2)$ and $Z^n, Z\in L^2_{\mathbb{F}}([0,T];\mathbb{R})$, by Hölder's inequality (with exponents $2/p$ and $2/(2-p)$), as $n\to\infty$,
\[
\mathbb{E}\int_0^T|Z^n_t-Z_t|^p\mathbf{1}_{\{|Z^n_t-Z_t|\ge\delta\}}\,dt
\le \Big\{\mathbb{E}\int_0^T|Z^n_t-Z_t|^2\,dt\Big\}^{p/2}\Big\{m\otimes\mathbb{P}\big(|Z^n_t-Z_t|\ge\delta\big)\Big\}^{(2-p)/2}
\le C\Big\{m\otimes\mathbb{P}\big(|Z^n_t-Z_t|\ge\delta\big)\Big\}^{(2-p)/2} \to 0,
\]
since $Z^n$ converges in measure to $Z$. Combining the above yields
\[
\limsup_{n\to\infty}\mathbb{E}\int_0^T|Z^n_t-Z_t|^p\,dt \le T\delta^p.
\]
Since $\delta$ is arbitrary, letting $\delta\to0$ we have
\[
\lim_{n\to\infty}\mathbb{E}\int_0^T|Z^n_t-Z_t|^p\,dt = 0.
\]
Therefore $Z^n\to Z$ in $L^p_{\mathbb{F}}([0,T];\mathbb{R})$ for each $p\in[1,2)$. $\square$

Now, in addition to $A_s$, define
\[
\tilde A_s = \theta(0,X_0) - Y_s - \int_0^s f(r,X_r,Y_r,Z_r)\,dr + \int_0^s\langle Z_r,dW_r\rangle; \qquad (3.25)
\]
consequently, we have a stronger convergence result regarding $A^n$, presented in the following lemma.

Lemma 3.3.5. Assume (H1)-(H3), and that the mapping $(t,x)\mapsto\theta(t,x)$ is continuous, $\mathbb{P}$-a.s. Then
\[
A^n \xrightarrow{w} \tilde A \ \text{ in } L^2_{\mathbb{F}}([0,T];\mathbb{R}).
\]
Furthermore, $\tilde A_t = A_t$ $\mathbb{P}$-a.s. for almost every $t\in[0,T]$, where $A$ is the weak limit of $A^n$ in Lemma 3.2.10.

Proof. Recall that
\[
A^n_s = \theta_n(0,X^n_0) - Y^n_s - \int_0^s f(r,X^n_r,Y^n_r,Z^n_r)\,dr + \int_0^s\langle Z^n_r,dW_r\rangle,
\]
and
\[
\tilde A_s = \theta(0,X_0) - Y_s - \int_0^s f(r,X_r,Y_r,Z_r)\,dr + \int_0^s\langle Z_r,dW_r\rangle.
\]
Following the argument in Lemma 3.2.10, it suffices to show that
\[
\int_0^s f(r,X^n_r,Y^n_r,Z^n_r)\,dr \xrightarrow{w} \int_0^s f(r,X_r,Y_r,Z_r)\,dr.
\]
On the other hand, by Fubini's Theorem, for any $\varphi\in L^2_{\mathbb{F}}([0,T];\mathbb{R})$,
\[
\mathbb{E}\int_0^T\varphi(t)\Big\{\int_0^t\big[f(r,X^n_r,Y^n_r,Z^n_r)-f(r,X_r,Y_r,Z_r)\big]\,dr\Big\}\,dt
= \mathbb{E}\int_0^T\int_r^T\varphi(t)\big[f(r,X^n_r,Y^n_r,Z^n_r)-f(r,X_r,Y_r,Z_r)\big]\,dt\,dr
= \mathbb{E}\int_0^T\psi(r)\big[f(r,X^n_r,Y^n_r,Z^n_r)-f(r,X_r,Y_r,Z_r)\big]\,dr,
\]
where $\psi(r) = \int_r^T\varphi(t)\,dt \in L^2_{\mathbb{F}}([0,T];\mathbb{R})$; hence we would have the desired result if
\[
f(r,X^n_r,Y^n_r,Z^n_r) \xrightarrow{w} f(r,X_r,Y_r,Z_r).
\]
Actually, for $p\in[1,2)$, by the Lipschitz property of $f$ together with Theorem 3.3.4,
\[
\mathbb{E}\int_0^T|f(r,X^n_r,Y^n_r,Z^n_r)-f(r,X_r,Y_r,Z_r)|^p\,dr \le C\,\mathbb{E}\int_0^T\big[|X^n_r-X_r|+|Y^n_r-Y_r|+|Z^n_r-Z_r|\big]^p\,dr \le C\,\mathbb{E}\int_0^T\big[|X^n_r-X_r|^p+|Y^n_r-Y_r|^p+|Z^n_r-Z_r|^p\big]\,dr.
\]
The results of Proposition 3.2.8 and Theorem 3.3.4 thus imply that $f(r,X^n_r,Y^n_r,Z^n_r)$ converges strongly to $f(r,X_r,Y_r,Z_r)$ in $L^p_{\mathbb{F}}([0,T];\mathbb{R})$, $p\in[1,2)$, i.e.
\[
\lim_{n\to\infty}\mathbb{E}\int_0^T|f(r,X^n_r,Y^n_r,Z^n_r)-f(r,X_r,Y_r,Z_r)|^p\,dr = 0.
\]
To show the weak convergence of $f(r,X^n_r,Y^n_r,Z^n_r)$ to $f(r,X_r,Y_r,Z_r)$ in $L^2$, i.e., that for any $\varphi\in L^2_{\mathbb{F}}([0,T];\mathbb{R})$,
\[
\lim_{n\to\infty}\mathbb{E}\int_0^T\varphi(r)\big[f(r,X^n_r,Y^n_r,Z^n_r)-f(r,X_r,Y_r,Z_r)\big]\,dr = 0,
\]
we proceed in two steps: the first step shows that the above holds for elementary processes in $L^2_{\mathbb{F}}([0,T];\mathbb{R})$; the second step, using approximation by elementary processes, extends it to an arbitrary process in $L^2_{\mathbb{F}}([0,T];\mathbb{R})$.
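The two-step argument that follows rests on the density of elementary (piecewise-constant) processes in $L^2_{\mathbb{F}}([0,T];\mathbb{R})$. For a fixed deterministic path this is the familiar fact that freezing a function at partition points converges in $L^2$ as the partition refines; a minimal numerical illustration (the path $\sin(2\pi t)$ is an arbitrary stand-in):

```python
import numpy as np

T = 1.0
t = np.linspace(0.0, T, 100_001)
path = np.sin(2 * np.pi * t)                  # stand-in for a fixed path of phi

def step_error(N):
    """L^2([0,T]) distance between the path and its piecewise-constant version
    frozen at the left endpoints of a uniform partition with N subintervals."""
    knots = np.floor(t * N / T) * (T / N)
    frozen = np.sin(2 * np.pi * knots)
    d2 = (path - frozen) ** 2
    dt = t[1] - t[0]
    return np.sqrt(np.sum((d2[:-1] + d2[1:]) / 2.0) * dt)   # trapezoidal rule

errs = [step_error(N) for N in (4, 16, 64, 256)]
assert all(a > b for a, b in zip(errs, errs[1:]))   # error decreases as N grows
```

The error decays roughly like $1/N$ for a smooth path; for a general process in $L^2_{\mathbb{F}}$ the same approximation holds in the $\mathbb{E}\int_0^T|\cdot|^2\,dt$ norm, which is exactly what Step 2 uses.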
Step 1. Let $\varphi\in L^2_{\mathbb{F}}([0,T];\mathbb{R})$ be elementary: there exists a partition $0=t_0<t_1<\cdots<t_N=T$ such that $\varphi_t = \varphi_{t_i}$ for $t\in[t_i,t_{i+1})$, $i=0,\dots,N-1$. Writing $\Theta^n_r = (X^n_r,Y^n_r,Z^n_r)$ and $\Theta_r = (X_r,Y_r,Z_r)$ for brevity, we have
\[
\mathbb{E}\int_0^T\varphi(r)\big[f(r,\Theta^n_r)-f(r,\Theta_r)\big]\,dr
= \sum_{i=0}^{N-1}\mathbb{E}\int_{t_i}^{t_{i+1}}\varphi(t_i)\big[f(r,\Theta^n_r)-f(r,\Theta_r)\big]\,dr
= \sum_{i=0}^{N-1}\mathbb{E}\Big\{\varphi(t_i)\int_{t_i}^{t_{i+1}}\big[f(r,\Theta^n_r)-f(r,\Theta_r)\big]\,dr\Big\},
\]
and on each subinterval $[t_i,t_{i+1})$, for given $\varepsilon>0$ and constants $M_i$ to be chosen below,
\[
\mathbb{E}\Big\{\varphi(t_i)\int_{t_i}^{t_{i+1}}\big[f(r,\Theta^n_r)-f(r,\Theta_r)\big]\,dr\Big\}
= \int_{\{|\varphi_{t_i}|^2>M_i\}}\varphi(t_i)\int_{t_i}^{t_{i+1}}\big[f(r,\Theta^n_r)-f(r,\Theta_r)\big]\,dr\,d\mathbb{P}
+ \int_{\{|\varphi_{t_i}|^2\le M_i\}}\varphi(t_i)\int_{t_i}^{t_{i+1}}\big[f(r,\Theta^n_r)-f(r,\Theta_r)\big]\,dr\,d\mathbb{P}
=: I^i_1 + I^i_2,
\]
and we discuss the two parts separately. First, as $n\to\infty$, by the strong convergence of $f(r,\Theta^n_r)$ to $f(r,\Theta_r)$ in $L^p_{\mathbb{F}}([0,T];\mathbb{R})$, $p\in[1,2)$,
\[
|I^i_2| \le \sqrt{M_i}\,\mathbb{E}\Big|\int_{t_i}^{t_{i+1}}\big[f(r,\Theta^n_r)-f(r,\Theta_r)\big]\,dr\Big|
\le \sqrt{M_i}\,\mathbb{E}\int_{t_i}^{t_{i+1}}|f(r,\Theta^n_r)-f(r,\Theta_r)|\,dr \to 0.
\]
On the other hand, by Hölder's inequality,
\[
|I^i_1| \le \Big\{\mathbb{E}\big[|\varphi(t_i)|^2\mathbf{1}_{\{|\varphi_{t_i}|^2>M_i\}}\big]\Big\}^{1/2}\Big\{\mathbb{E}\int_{t_i}^{t_{i+1}}|f(r,\Theta^n_r)-f(r,\Theta_r)|^2\,dr\Big\}^{1/2}
\le \sqrt{\varepsilon}\Big\{\mathbb{E}\int_{t_i}^{t_{i+1}}|f(r,\Theta^n_r)-f(r,\Theta_r)|^2\,dr\Big\}^{1/2} \le C\sqrt{\varepsilon},
\]
which vanishes as $\varepsilon\to0$. Indeed, since by definition $\mathbb{E}|\varphi_{t_i}|^2<\infty$ for all $i=0,\dots,N-1$, for any $\varepsilon>0$ there exist $M_i\in\mathbb{R}$, $i=0,\dots,N-1$, such that $\mathbb{E}\big(|\varphi_{t_i}|^2\mathbf{1}_{\{|\varphi_{t_i}|^2>M_i\}}\big)\le\varepsilon$.
Note that if $\mathbb{E}|\varphi|^2<\infty$, then by the Dominated Convergence Theorem,
\[
\lim_{M\to\infty}\mathbb{E}\big[|\varphi|^2\mathbf{1}_{\{|\varphi|^2>M\}}\big] = \mathbb{E}\big[\lim_{M\to\infty}|\varphi|^2\mathbf{1}_{\{|\varphi|^2>M\}}\big] = 0.
\]
Therefore, for any elementary process $\varphi$ in $L^2_{\mathbb{F}}([0,T];\mathbb{R})$,
\[
\lim_{n\to\infty}\mathbb{E}\int_0^T\varphi(r)\big[f(r,X^n_r,Y^n_r,Z^n_r)-f(r,X_r,Y_r,Z_r)\big]\,dr = 0.
\]
Step 2. Recall that for any $\varphi\in L^2_{\mathbb{F}}([0,T];\mathbb{R})$ there exists a sequence of elementary processes $\varphi^m$ such that $\lim_{m\to\infty}\mathbb{E}\int_0^T|\varphi_t-\varphi^m_t|^2\,dt = 0$. Hence, for arbitrary $\varphi\in L^2_{\mathbb{F}}([0,T];\mathbb{R})$, again writing $\Theta^n_r=(X^n_r,Y^n_r,Z^n_r)$ and $\Theta_r=(X_r,Y_r,Z_r)$,
\[
\mathbb{E}\int_0^T\varphi(r)\big[f(r,\Theta^n_r)-f(r,\Theta_r)\big]\,dr
= \mathbb{E}\int_0^T\big[\varphi(r)-\varphi^m(r)\big]\big[f(r,\Theta^n_r)-f(r,\Theta_r)\big]\,dr + \mathbb{E}\int_0^T\varphi^m(r)\big[f(r,\Theta^n_r)-f(r,\Theta_r)\big]\,dr =: I_1 + I_2.
\]
By Step 1, it is clear that for any fixed $m$, $\lim_{n\to\infty}I_2 = 0$; it then remains to consider
\[
|I_1| \le \Big\{\mathbb{E}\int_0^T|\varphi_t-\varphi^m_t|^2\,dt\Big\}^{1/2}\Big\{\mathbb{E}\int_0^T|f(r,\Theta^n_r)-f(r,\Theta_r)|^2\,dr\Big\}^{1/2} \le C\Big\{\mathbb{E}\int_0^T|\varphi_t-\varphi^m_t|^2\,dt\Big\}^{1/2},
\]
uniformly in $n$. Since $\lim_{m\to\infty}\mathbb{E}\int_0^T|\varphi_t-\varphi^m_t|^2\,dt=0$, $I_1$ can be made arbitrarily small by choosing $m$ large, and we obtain the desired result. Lastly, by the uniqueness of weak limits, we conclude that $A=\tilde A$, a.s., a.e. $\square$

Case II. We note that Lemma 3.3.2 implies the existence of a subsequence $\{n_k\}$ such that
\[
\mathbb{P}\Big\{\lim_{k\to\infty}|Y^{n_k}_t - Y_t| = 0,\ \forall t\in[0,T]\Big\} = 1. \qquad (3.26)
\]
This property will be crucial in our subsequent discussion. However, we note that assuming $\theta$ to be continuous amounts to saying that $Y$ is continuous, which is by no means clear in general. We now show that (3.26) remains true in more general cases. Note that the process $A$ defined by (3.11) is increasing, whence càdlàg, and thus so is $Y$. On the other hand, recall that $\lim_{n\to\infty}\mathbb{E}[|Y^n_t-Y_t|^2]=0$ for each $t$, and let $\{t_1,t_2,\dots\}$ be an enumeration of $(\mathbb{Q}\cap[0,T])\cup\{0\}$, with $t_1=0$. Then for $t=t_1$, there exists a subsequence $\{n^1_k\}$ such that $\mathbb{P}\{|Y^{n^1_k}_{t_1}-Y_{t_1}|\to0\}=1$. Consequently, within the sequence $\{n^1_k\}$, for $t=t_2$ there exists a further subsequence $\{n^2_k\}$ such that $\mathbb{P}\{|Y^{n^2_k}_{t_2}-Y_{t_2}|\to0\}=1$.
By induction, we can find a subsequence $\{n^{i+1}_k\}\subset\{n^i_k\}$ such that $\mathbb{P}\{|Y^{n^{i+1}_k}_t-Y_t|\to0\}=1$ for $t=t_j$, $j=1,\dots,i+1$. Then, by the standard diagonalization argument, we consider the diagonal sequence $\{n^k_k\}$, along which $\mathbb{P}\{|Y^{n^k_k}_t-Y_t|\to0\}=1$ for all $t\in\mathbb{Q}\cap[0,T]$. We still denote the sequence by $\{n\}$. To show that the convergence remains true for all $t\in[0,T]$, notice that since $Y^n$ is continuous and $Y$ is càdlàg, for any $t\in[0,T]$ and $\varepsilon>0$ there exists $\bar t\in[0,T]\cap\mathbb{Q}$ with $t\le\bar t<t+\delta$, $\delta$ small enough, such that $|Y^n_t-Y^n_{\bar t}|<\frac{\varepsilon}{2}$ and $|Y_t-Y_{\bar t}|<\frac{\varepsilon}{2}$. Hence, outside an event $E$ with $\mathbb{P}(E)=0$, for almost all $\omega\in\Omega$,
\[
|Y^n_t-Y_t| \le |Y^n_t-Y^n_{\bar t}| + |Y^n_{\bar t}-Y_{\bar t}| + |Y_{\bar t}-Y_t| < \varepsilon + |Y^n_{\bar t}-Y_{\bar t}|,
\]
and the last term tends to $0$, so that $\limsup_{n\to\infty}|Y^n_t-Y_t|\le\varepsilon$. Since $\varepsilon$ is arbitrary, letting $\varepsilon\to0$ we obtain (3.26) again.

Theorem 3.3.6. Assume (H1)-(H4). Then $Z^n\to Z$ in $L^p_{\mathbb{F}}([0,T];\mathbb{R})$, $p\in[1,2)$, where $Z$ is the weak limit of $Z^n$ in Theorem 3.2.8.

Proof. The only difference from the proof of Theorem 3.3.4 is the lack of strong convergence of $Y^n$ to $Y$. Recall the inequality
\[
\mathbb{E}\int_\sigma^\tau|Z^n_s-Z_s|^2\,ds \le \mathbb{E}|Y^n_\tau-Y_\tau|^2 + 2\mathbb{E}\int_\sigma^\tau|Y^n_s-Y_s||f^n_s-f^0_s|\,ds + \mathbb{E}\sum_{s\in(\sigma,\tau]}|\Delta A_s|^2 + 2\mathbb{E}\int_\sigma^\tau|Y^n_s-Y_s|\,dA^n_s + 2\mathbb{E}\int_\sigma^\tau|Y^n_s-Y_s|\,dA_s.
\]
The following concerns the terms whose convergence relied on the strong convergence of $Y^n$ in the original proof. Recall that $\mathbb{E}|A_T|^2\le C$ and that both $Y^n$ and $Y$ are bounded; then by the Dominated Convergence Theorem and (3.26), along the subsequence constructed above,
\[
\lim_{n\to\infty}\mathbb{E}\int_0^T|Y^n_s-Y_s|\,dA_s = \int_{E^c}\int_0^T\lim_{n\to\infty}|Y^n_s-Y_s|\,dA_s\,d\mathbb{P} = 0.
\]
Last but not least, following a similar procedure, we obtain the same convergence result for $\mathbb{E}\int_0^T|Y^n_s-Y_s|\,dA^n_s$, using the fact that $\mathbb{E}|A^n_T|^2\le C$ for all $n$:
\[
\lim_{n\to\infty}\mathbb{E}\int_0^T|Y^n_s-Y_s|\,dA^n_s = 0,
\]
and the rest of the proof follows. $\square$

Consequently, we have the following stronger convergence result regarding $A^n$, now without the continuity assumption on $\theta$.

Lemma 3.3.7. Assume (H1)-(H4). Then $A^n \xrightarrow{w}$
$\tilde A$ in $L^2_{\mathbb{F}}([0,T];\mathbb{R})$. Furthermore, $\tilde A = A$ a.s., a.e., where $A$ is the weak limit of $A^n$ in Lemma 3.2.10.

Proof. The proof is essentially identical to that of Lemma 3.3.5, using the results of Theorem 3.3.6 instead, without assuming the continuity of $\theta$. $\square$

By the same argument as for $A$, we can conclude that $\tilde A$ is a monotonically increasing process and has an RCLL version.

Theorem 3.3.8. Assume (H1)-(H3). Let $X$, $Y$, $Z$, $\tilde A$ be defined by (3.5), (3.25) and Proposition 3.2.8, and let $Y_s=\theta(s,X_s)$, where $\theta$ is the monotone limit of the decoupling fields $\theta_n$. Then $(X,Y,Z,\tilde A)$ is an adapted solution to the FBSDE
\[
\begin{cases}
\displaystyle X_t = x+\int_0^t b(s,X_s,Y_s)\,ds+\int_0^t\langle\sigma(s,X_s,Y_s),dW_s\rangle,\\[2mm]
\displaystyle Y_t = g(X_T)+\int_t^T f(s,X_s,Y_s,Z_s)\,ds-\int_t^T\langle Z_s,dW_s\rangle+\tilde A_T-\tilde A_t,
\end{cases}
\]
with the constraint $Z_t\in[L_t,U_t]$.

Proof. Recall that
\[
A^n_s = \theta_n(0,X^n_0)-Y^n_s-\int_0^s f(r,X^n_r,Y^n_r,Z^n_r)\,dr+\int_0^s\langle Z^n_r,dW_r\rangle;
\]
by Proposition 3.2.8 and Theorem 3.3.4, each term on the right-hand side converges to the corresponding term defining $\tilde A$. Taking $s=T$, we have
\[
\tilde A_T = \theta(0,X_0)-Y_T-\int_0^T f(r,X_r,Y_r,Z_r)\,dr+\int_0^T\langle Z_r,dW_r\rangle,
\]
and subtracting $\tilde A_s$ from the above yields
\[
\tilde A_T-\tilde A_s = -(Y_T-Y_s)-\int_s^T f(r,X_r,Y_r,Z_r)\,dr+\int_s^T\langle Z_r,dW_r\rangle.
\]
Notice that $Y_T=\theta(T,X_T)$, where $\theta(T,x)=\lim_{n\to\infty}\theta_n(T,x)=\lim_{n\to\infty}g(x)=g(x)$; hence $Y_T=g(X_T)$ and
\[
Y_s = g(X_T)+\int_s^T f(r,X_r,Y_r,Z_r)\,dr-\int_s^T\langle Z_r,dW_r\rangle+\tilde A_T-\tilde A_s.
\]
Finally, the constraint on $Z$ is satisfied following the same argument as in Theorem 3.2.13. $\square$

3.4 Minimal Solution Property

It is generally believed that the solution of an FBSDE with constraints obtained through the penalization method is the minimal solution. In the Markovian case, this result was proved by Buckdahn and Hu in [6]. They carried out the analysis by considering the so-called "nodal" solution, constructed from the penalized decoupling field and an arbitrarily given solution pair.
Such a method is not applicable in our case, since the system is non-Markovian to begin with; furthermore, without the PDE structure, one fails to obtain the dynamics of the nodal solution. Therefore, a more general argument is introduced, adapting a procedure from [23]. To begin, first recall
\[
\begin{cases}
\displaystyle X_t = x+\int_0^t b(s,X_s,Y_s)\,ds+\int_0^t\langle\sigma(s,X_s,Y_s),dW_s\rangle,\\[2mm]
\displaystyle Y_t = g(X_T)+\int_t^T f(s,X_s,Y_s,Z_s)\,ds-\int_t^T\langle Z_s,dW_s\rangle+D_T-D_t,
\end{cases} \qquad (3.27)
\]
where $Z_t\in[L_t,U_t]$, and the corresponding penalized system for given $n$:
\[
\begin{cases}
\displaystyle X^n_t = x+\int_0^t b(s,X^n_s,Y^n_s)\,ds+\int_0^t\langle\sigma(s,X^n_s,Y^n_s),dW_s\rangle,\\[2mm]
\displaystyle Y^n_t = g(X^n_T)+\int_t^T f(s,X^n_s,Y^n_s,Z^n_s)\,ds+\int_t^T n\varphi(s,Z^n_s)\,ds-\int_t^T\langle Z^n_s,dW_s\rangle.
\end{cases} \qquad (3.28)
\]
Next, recall the definition of a minimal solution.

Definition 3.4.1. $(X_t,Y_t,Z_t,D_t)$ is called the minimal solution of (3.27) if it is an admissible solution and, for any other admissible solution $(\tilde X_t,\tilde Y_t,\tilde Z_t,\tilde D_t)$, we have $Y_0\le\tilde Y_0$.

Remark 3.4.2. The above definition amounts to saying that, for the FBSDE with the given initial condition, terminal condition and constraint on $Z$, the solution obtained through the penalization method always yields the minimal initial value of $Y$.

Now suppose $(X,Y,Z,D)$ is a given solution of (3.27) other than the penalized solution, and $(X^n,Y^n,Z^n)$ is the unique solution of the penalized system (3.28).
It is worth noting that since the given solution satisfies the constraint, $\int_t^T n\varphi(s,Z_s)\,ds=0$ for all $n$; hence we are allowed to add this term back into (3.27), which yields
\[
\begin{cases}
\displaystyle X_t = x+\int_0^t b(s,X_s,Y_s)\,ds+\int_0^t\langle\sigma(s,X_s,Y_s),dW_s\rangle,\\[2mm]
\displaystyle Y_t = g(X_T)+\int_t^T f(s,X_s,Y_s,Z_s)\,ds+\int_t^T n\varphi(s,Z_s)\,ds-\int_t^T\langle Z_s,dW_s\rangle+D_T-D_t.
\end{cases} \qquad (3.29)
\]
Next, let $\Delta X^n_t = X^n_t-X_t$, $\Delta Y^n_t = Y^n_t-Y_t$ and $\Delta Z^n_t = Z^n_t-Z_t$ (where obviously $\Delta Z^n\in\mathbb{R}^d$); then $(\Delta X^n,\Delta Y^n,\Delta Z^n,D)$ satisfies
\[
\begin{cases}
\displaystyle \Delta X^n_t = \int_0^t(b_1\Delta X^n_s+b_2\Delta Y^n_s)\,ds+\int_0^t\langle\sigma_1\Delta X^n_s+\sigma_2\Delta Y^n_s,dW_s\rangle,\\[2mm]
\displaystyle \Delta Y^n_t = h\Delta X^n_T+\int_t^T\big(f_1\Delta X^n_s+f_2\Delta Y^n_s+\langle f_3,\Delta Z^n_s\rangle\big)\,ds+\int_t^T n\langle r_s,\Delta Z^n_s\rangle\,ds-\int_t^T\langle\Delta Z^n_s,dW_s\rangle-(D_T-D_t),
\end{cases}
\]
where $h$, $b_i$, $i=1,2$, $f_j$, $j=1,2,3$, and $r$ are defined similarly to (2.3). On the other hand, it is known from the previous results that (3.28) possesses a unique decoupling field $u_n$, where $\hat Y^n=\nabla u_n$ satisfies the characteristic BSDE
\[
\hat Y^n_t = h+\int_t^T\big[F_s(\hat Y^n_s)+\langle G_s(\hat Y^n_s),\hat Z^n_s\rangle\big]\,ds-\int_t^T\langle\hat Z^n_s,dW_s\rangle, \qquad (3.30)
\]
where, after the necessary simplification,
\[
F_s(y) = (b_1+b_2y)y+f_1+f_2y+y\langle f_3+nr,\ \sigma_1+\sigma_2y\rangle, \qquad
G_s(y) = f_3+nr+\sigma_1+\sigma_2y. \qquad (3.31)
\]
Further define $\mathcal{Y}^n_t = \Delta Y^n_t-\hat Y^n_t\Delta X^n_t$ and $\mathcal{Z}^n_t = \Delta Z^n_t-(\sigma_1\Delta X^n_t+\sigma_2\Delta Y^n_t)\hat Y^n_t-\Delta X^n_t\hat Z^n_t$, apply Itô's formula for non-continuous semimartingales, and note that the jumps of $\mathcal{Y}^n_t$ come purely from the jumps of $\Delta Y^n_t$; we then obtain
\[
d\mathcal{Y}^n_t = \Big[\Delta X^n_t\big(f_1+b_1\hat Y^n_t-F_t(\hat Y^n_t)-\langle G_t(\hat Y^n_t),\hat Z^n_t\rangle+\langle\sigma_1,\hat Z^n_t\rangle\big)+\Delta Y^n_t\big(f_2+b_2\hat Y^n_t+\langle\sigma_2,\hat Z^n_t\rangle\big)+\langle\Delta Z^n_t,\ f_3+nr_s\rangle\Big]\,dt+\langle\mathcal{Z}^n_t,dW_t\rangle+dD_t. \qquad (3.32)
\]
Notice that from the definitions of $\mathcal{Y}^n$ and $\mathcal{Z}^n$ we can derive
\[
\Delta Y^n_t = \mathcal{Y}^n_t+\hat Y^n_t\Delta X^n_t, \qquad
\Delta Z^n_t = \mathcal{Z}^n_t+\sigma_2\mathcal{Y}^n_t\hat Y^n_t+\Delta X^n_t(\sigma_1+\sigma_2\hat Y^n_t)\hat Y^n_t+\Delta X^n_t\hat Z^n_t;
\]
plugging these into (3.32) yields
\[
d\mathcal{Y}^n_t = \big[\alpha_t\Delta X^n_t+\beta_t\mathcal{Y}^n_t+\langle\gamma_t,\mathcal{Z}^n_t\rangle\big]\,dt+\langle\mathcal{Z}^n_t,dW_t\rangle+dD_t,
\]
where the coefficients $\alpha_t$, $\beta_t$ and $\gamma_t$ are given as follows.
\[
\alpha_t = f_1-F_t(\hat Y^n_t)-\langle G_t(\hat Y^n_t),\hat Z^n_t\rangle+b_1\hat Y^n_t+\langle\sigma_1,\hat Z^n_t\rangle+\hat Y^n_t\big(f_2+b_2\hat Y^n_t+\langle\sigma_2,\hat Z^n_t\rangle\big)+\big\langle\hat Z^n_t+(\sigma_1+\sigma_2\hat Y^n_t)\hat Y^n_t,\ f_3+nr\big\rangle,
\]
\[
\beta_t = f_2+b_2\hat Y^n_t+\langle\sigma_2,\hat Z^n_t\rangle+\langle\sigma_2\hat Y^n_t,\ f_3+nr\rangle, \qquad
\gamma_t = f_3+nr.
\]
Furthermore, thanks to (3.31), it holds that $\alpha_t=0$ for all $t\in[0,T]$; hence
\[
d\mathcal{Y}^n_t = \big[\beta_t\mathcal{Y}^n_t+\langle\gamma_t,\mathcal{Z}^n_t\rangle\big]\,dt+\langle\mathcal{Z}^n_t,dW_t\rangle+dD_t.
\]
Now let $\Gamma_t = e^{-\int_0^t\beta_s\,ds}M_t$, where $M_t$ denotes the (positive) stochastic exponential associated with $-\int_0^\cdot\langle\gamma_s,dW_s\rangle$; then by Itô's formula the drift cancels and
\[
d(\Gamma_t\mathcal{Y}^n_t) = \Gamma_t\big\langle\mathcal{Z}^n_t-\gamma_t\mathcal{Y}^n_t,\ dW_t\big\rangle+\Gamma_t\,dD_t.
\]
Similarly to the argument in [23], one can show that $\int_0^\cdot\Gamma_t\langle\mathcal{Z}^n_t-\gamma_t\mathcal{Y}^n_t,dW_t\rangle$ is a true martingale. Noting that $\mathcal{Y}^n_T = \Delta Y^n_T-\hat Y^n_T\Delta X^n_T = h\Delta X^n_T-h\Delta X^n_T = 0$ and $\Delta X^n_0=0$, integration from $0$ to $T$ gives
\[
Y^n_0-Y_0 = \Delta Y^n_0 = \mathcal{Y}^n_0 = -\int_0^T\Gamma_t\big\langle\mathcal{Z}^n_t-\gamma_t\mathcal{Y}^n_t,\ dW_t\big\rangle-\int_0^T\Gamma_t\,dD_t.
\]
Taking expectations on both sides, since $\Gamma$ is positive and $D_t$ is non-decreasing and non-negative,
\[
Y^n_0-Y_0 = -\mathbb{E}\Big[\int_0^T\Gamma_t\,dD_t\Big] \le 0, \quad \forall n.
\]
Finally, letting $n\to\infty$, we obtain the desired result.

Chapter 4

Applications: Optimal Reinsurance with Investment and Dividend

4.1 Problem Formulation

To describe our problem more precisely, let us consider an insurance company that is allowed to invest its reserve in a security market. We assume that the insurance company can only reduce its insurance risk exposure by adopting a proportional reinsurance policy, but is fully exposed to all other risks, including the investment risk, as well as the "common shock" in the form of a random terminal time that stops the whole insurance/investment operation. Such a random time could include the "ruin time" of the company itself, or the time of a random event that is "totally inaccessible" from the information available to the investor. Throughout this section we assume that all uncertainties come from a common complete probability space $(\Omega,\mathcal{F},\mathbb{P})$ on which are defined a $d$-dimensional Brownian motion $W=\{W_t: t\ge0\}$ and a stationary Poisson point process $p$. We assume that $W$ and $p$ are independent; they represent the randomness from the financial market and from the insurance claims, respectively.
For notational clarity, we denote by $\mathbb{F}^W=\{\mathcal{F}^W_t: t\ge0\}$ and $\mathbb{F}^p\triangleq\{\mathcal{F}^p_t: t\ge0\}$ the filtrations generated by $W$ and $p$, respectively, and denote $\mathbb{F}=\mathbb{F}^W\vee\mathbb{F}^p$, with the usual $\mathbb{P}$-augmentation such that it satisfies the usual hypotheses (cf. e.g., Protter [27]). Further, throughout the section, for a generic Euclidean space $\mathbb{X}$, regardless of its dimension, we denote by $\langle\cdot,\cdot\rangle$ and $|\cdot|$ its inner product and norm, respectively. We denote by $C([0,T];\mathbb{X})$ the space of continuous functions with the usual sup-norm, and we shall make use of the following notation:
- For any sub-$\sigma$-field $\mathcal{G}\subseteq\mathcal{F}_T$ and $1\le p<\infty$, $L^p(\mathcal{G};\mathbb{X})$ denotes the space of all $\mathbb{X}$-valued, $\mathcal{G}$-measurable random variables $\xi$ such that $\mathbb{E}|\xi|^p<\infty$. As usual, $\xi\in L^\infty(\mathcal{G};\mathbb{X})$ means that $\xi$ is $\mathcal{G}$-measurable and bounded.
- For $1\le p<\infty$, $L^p_{\mathbb{F}}([0,T];\mathbb{X})$ denotes the space of all $\mathbb{X}$-valued, $\mathbb{F}$-progressively measurable processes $\xi$ satisfying $\mathbb{E}\int_0^T|\xi_t|^p\,dt<\infty$. The space $L^\infty_{\mathbb{F}}([0,T];\mathbb{X})$ is defined similarly.

Let us now consider an insurance company whose risk reserve follows the Cramér-Lundberg model:
\[
X_t = x+\int_0^t c_s(1+\rho_s)\,ds-\sum_{i=1}^{N_t}U_i \triangleq x+\int_0^t c_s(1+\rho_s)\,ds-S_t, \qquad (4.1)
\]
where $c_t>0$ is the premium rate, $\rho_t$ is the safety loading, and $S_t\triangleq\sum_{i=1}^{N_t}U_i$ is a compound Poisson process representing the incoming claims. We now assume that the insurance company opts to seek a (proportional) reinsurance policy, which reduces its liability to only a portion (say, an $a_t\in[0,1]$ fraction) of each incoming claim, by paying a premium at the rate $c_t(1-a_t)$; the risk reserve process of the insurance company is then given by a Cramér-Lundberg-type model (cf. e.g., Gerber [11], or Bühlmann [7]):
\[
X_t = x+\int_0^t a_sc_s(1+\rho_s)\,ds-\int_0^t a_s\,dS_s, \quad t\ge0. \qquad (4.2)
\]
It has been well understood (cf.
e.g., [12] and/or [2], [28]) that the above reserve model can be approximated by the following diffusion model:
\[
X_t = x+\int_0^t a_s\mu_s\,ds+\int_0^t a_s\sigma^0\,dW^0_s, \quad t\ge0, \qquad (4.3)
\]
where, with $m\triangleq\mathbb{E}[U_1]$, $\sigma^2\triangleq\mathrm{Var}[U_1]$, and $\mathbb{E}[N_t]=\lambda t$,
\[
\mu_t = c_t(1+\rho_t)-\lambda m, \qquad (\sigma^0)^2 = \lambda(m^2+\sigma^2),
\]
and $W^0$ is a Brownian motion. We shall assume without loss of generality that $\mathbb{F}^{W^0}=\mathbb{F}^p$. We note that this particular model falls into the class of so-called "cheap reinsurance," namely, the safety loading of the cedent insurance company does not change after reinsurance. In this section we shall allow the coefficients $(\mu,\sigma^0)$ to be random, so that the reserve process $X$ is non-Markovian in general.

Now let us assume that the insurance company is allowed to invest its reserve in a security market. For simplicity, we assume that only two assets are traded continuously, one risky and one riskless, with prices at time $t$ denoted by $P_t$ and $P^0_t$, respectively. We assume that $(P,P^0)$ follows the standard stochastic dynamics:
\[
\begin{cases}
dP_t = P_t[b_t\,dt+\sigma_t\,dW_t], & P_0=p,\\
dP^0_t = P^0_t r_t\,dt, & P^0_0=p_0,
\end{cases} \qquad (4.4)
\]
where, again, the appreciation rate $b=\{b_t\}$, the volatility $\sigma=\{\sigma_t\}$, and the interest rate $r=\{r_t\}$ are all allowed to be random. If we denote by $\pi_t$ the amount of money invested in the risky asset at time $t$ (so that $X_t-\pi_t$ is invested in the risk-free asset), then one can easily show that the reserve process under proportional reinsurance and a (self-financing) investment strategy satisfies the following (linear) stochastic differential equation (SDE):
\[
dX_t = [r_tX_t+\mu_ta_t+\sigma_t\theta_t\pi_t]\,dt+\sigma^0_ta_t\,dW^0_t+\sigma_t\pi_t\,dW_t, \quad X_0=x, \qquad (4.5)
\]
where $\theta_t=\sigma_t^{-1}(b_t-r_t)$, $t\ge0$, is the so-called "risk premium." We note that the Brownian motions $W$ and $W^0$ are naturally assumed to be independent, and we shall still denote $\mathbb{F}\triangleq\mathbb{F}^W\vee\mathbb{F}^{W^0}$.
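The moment matching behind the approximation (4.3), the drift $\mu$ and volatility $\sigma^0$ against the mean and variance of the compound Poisson increment, can be checked by a short Monte Carlo simulation. The parameter values below (full retention $a\equiv1$, exponential claims with mean $1$, $\lambda=1$, $c=1.2$, $\rho=0.1$) are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, m, c, rho, T, x0, n_paths = 1.0, 1.0, 1.2, 0.1, 10.0, 10.0, 40_000
sigma_sq = m**2                                 # Var[U_1] for Exp(m) claim sizes

# Terminal reserve under the jump model (4.1), a == 1
N = rng.poisson(lam * T, size=n_paths)
S = np.array([rng.exponential(m, size=k).sum() for k in N])
X_jump = x0 + c * (1 + rho) * T - S

# Terminal reserve under the diffusion approximation (4.3), a == 1
mu = c * (1 + rho) - lam * m                    # mu_t = c_t(1 + rho_t) - lam * m
sigma0 = np.sqrt(lam * (m**2 + sigma_sq))       # (sigma^0)^2 = lam * (m^2 + sigma^2)
X_diff = x0 + mu * T + sigma0 * np.sqrt(T) * rng.normal(size=n_paths)

# Both match in mean (~ x0 + mu*T = 13.2) and variance (~ lam*(m^2+sigma^2)*T = 20)
print(X_jump.mean(), X_diff.mean())
print(X_jump.var(), X_diff.var())
```

Of course the two models differ beyond second moments (the jump model has skewed, heavy downward paths), which is precisely why the approximation is stated only at the diffusion-limit level.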
Suppose further that, besides investing in the financial market, the insurer distributes dividends, described by a non-decreasing process $D_t$ adapted to the given filtration. Formally, we give the following definition.

Definition 4.1.1. A cumulative dividend process is an $\mathbb{F}$-predictable, nonnegative and non-decreasing process $D$ such that
\[
\mathbb{E}|D_T|^2<\infty.
\]
To formulate our optimization problem, we first define the set of all admissible reinsurance/investment/dividend strategies, denoted by $\mathcal{U}_{ad}$, as
\[
\mathcal{U}_{ad} \triangleq \Big\{v=(a,\pi,D)\in L^0_{\mathbb{F}}([0,T];\mathbb{R}^3): a_t\in[0,1],\ \mathbb{P}\text{-a.s.},\ \mathbb{E}\int_0^T|\pi_t|^2\,dt<\infty,\ \mathbb{E}|D_T|^2<\infty\Big\}. \qquad (4.6)
\]
We shall denote, for each $v\in\mathcal{U}_{ad}$, the corresponding solution to (4.5) by $X^v$ when the dependence needs to be emphasized.

In this section, we assume that the optimization problem can be forced to stop by an exogenous random event, known as a "common shock" in actuarial terms; namely, we allow the time horizon to be random. For technical simplicity, we shall assume that the conditional distribution of the terminal time $\tau$ is known given the market information, but we do not assume that $\tau$ is a stopping time with respect to the filtration generated by the underlying market noises. Mathematically, we formulate the insurance company's objective as follows: find the optimal policy $v^*=(a^*,\pi^*,D^*)$ which maximizes the "cost functional," namely the expected terminal utility plus the discounted accumulated dividend,
\[
J(v(\cdot)) = \mathbb{E}\Big\{U(X^v_{\tau\wedge T})+\int_0^{\tau\wedge T}k_t\,dD_t\Big\}. \qquad (4.7)
\]
We shall also make use of the following Standing Assumptions throughout this section:

(H5) The system parameters $\mu$, $b$, $r$, $\sigma^0$ and $\sigma$ are bounded, $\mathbb{F}$-adapted processes. In particular, the processes $(\sigma^0)^{-1}$ and $\sigma^{-1}$ are also bounded.

(H6) The function $U\in C^2_b(\mathbb{R}_+)$ is such that there exist constants $k_1, k_2>0$ with $-k_1\le U''(x)\le -k_2$ for all $x\ge0$, and $U'$ is invertible.
(H7) The conditional distribution function of the random terminal time $\tau$, denoted by $F_t=\mathbb{P}\{\tau\le t\,|\,\mathcal{F}_t\}$, $t\ge0$, is absolutely continuous with respect to Lebesgue measure, with a bounded density function $f$, i.e., $F_t=\int_0^t f_s\,ds$, $t\ge0$. Here $\mathcal{F}_t=\mathcal{F}^W_t\vee\mathcal{F}^{W^0}_t$, $t\ge0$.

To end this section, we make the following modifications and simplifications of the problem. Noting that the mapping $t\mapsto U(X^v_t)$ is continuous $\mathbb{P}$-a.s., it is not hard to show (cf., e.g., [5]) that the functional $J$ above becomes
\[
J(v(\cdot)) = \mathbb{E}\big\{U(X^v_\tau)\mathbf{1}_{\{\tau\le T\}}+U(X^v_T)\mathbf{1}_{\{\tau>T\}}+\int_0^T\mathbf{1}_{\{t<\tau\}}k_t\,dD_t\big\}
\]
\[
= \mathbb{E}\big\{\mathbb{E}\big[U(X^v_\tau)\mathbf{1}_{\{\tau\le T\}}\,\big|\,\mathcal{F}_T\big]\big\}+\mathbb{E}\big\{\mathbb{E}\big[U(X^v_T)\mathbf{1}_{\{\tau>T\}}\,\big|\,\mathcal{F}_T\big]\big\}+\mathbb{E}\Big\{\int_0^T\mathbb{E}\big[\mathbf{1}_{\{t<\tau\}}k_t\,\big|\,\mathcal{F}_t\big]\,dD_t\Big\} \qquad (4.8)
\]
\[
= \mathbb{E}\Big\{\int_0^T f_sU(X^v_s)\,ds+(1-F_T)U(X^v_T)+\int_0^T(1-F_t)k_t\,dD_t\Big\},
\]
thanks to the adaptedness of $k_t$. We can then rewrite our stochastic control problem as:

Problem 4.1.2. Find an optimal control $u^*=(a^*,\pi^*,D^*)\in\mathcal{U}_{ad}$ such that
\[
J(u^*(\cdot)) = \sup_{u\in\mathcal{U}_{ad}}J(u(\cdot)), \qquad (4.9)
\]
where the cost functional $J(\cdot)$ is defined by (4.8) and the state dynamics are given by
\[
dX_t = [r_tX_t+\mu_ta_t+\sigma_t\theta_t\pi_t]\,dt-dD_t+\sigma^0_ta_t\,dW^0_t+\sigma_t\pi_t\,dW_t, \quad X_0=x. \qquad (4.10)
\]

4.2 The Candidates of the Optimal Control

We begin our investigation by identifying candidates for the optimal solution to Problem 4.1.2. Since this is a more or less standard stochastic control problem with a diffusion term containing the control actions $a_t$ and $\pi_t$, and with a convex control domain, we shall apply a version of Pontryagin's Maximum Principle (cf. e.g., Yong-Zhou [29]) to obtain necessary conditions for the optimal strategies. To this end, let us first define the following "Hamiltonian function": for any $(x,y,z^0,z^1)\in\mathbb{R}^4$, $a\in[0,1]$, and $\pi\in\mathbb{R}$,
\[
H(t,x,a,\pi,y,z^0,z^1) \triangleq f_tU(x)+y(r_tx+\mu_ta+\sigma_t\theta_t\pi)+\sigma_t\pi z^1+a\sigma^0_tz^0. \qquad (4.11)
\]
We note that the function $H$ may be an $\mathbb{F}$-progressively measurable random field when the market parameters $(r,\mu,\sigma)$ are stochastic processes.
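The pointwise maximization of $H$ over $(a,\pi)$ that the maximum principle below will require is elementary, since (4.11) is affine in $(a,\pi)$: $a$ is bang-bang in the sign of its coefficient, while the supremum over the unconstrained $\pi$ is finite only when the $\pi$-coefficient vanishes. A sketch with hypothetical numerical inputs (the cap on $\pi$ merely stands in for an unbounded supremum):

```python
import numpy as np

def hamiltonian_argmax(Y, Z0, Z1, mu, sigma0, sigma, theta, pi_cap=1e6):
    """Maximize H(a, pi) = (Y*mu + Z0*sigma0)*a + sigma*(Y*theta + Z1)*pi + const
    over a in [0, 1] and pi in R (pointwise in (t, omega))."""
    A = Y * mu + Z0 * sigma0                  # coefficient of a in (4.11)
    B = sigma * (Y * theta + Z1)              # coefficient of pi in (4.11)
    a_star = 1.0 if A > 0 else (0.0 if A < 0 else 0.5)   # any a in [0,1] when A == 0
    pi_star = 0.0 if B == 0 else np.sign(B) * pi_cap     # sup is +inf unless B == 0
    return a_star, pi_star

# A = 1*0.3 + (-0.5)*1.4 = -0.4 < 0  ->  full reinsurance, a* = 0;
# B = 0.2*(1*0.25 + 0) = 0.05 != 0   ->  the sup over pi is unbounded (capped here)
print(hamiltonian_argmax(Y=1.0, Z0=-0.5, Z1=0.0, mu=0.3, sigma0=1.4, sigma=0.2, theta=0.25))
```

This trichotomy is exactly what the Lagrange-multiplier analysis of Section 4.3 makes rigorous.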
Now assume that $v^*=(a^*,\pi^*,D^*)\in\mathcal{U}_{ad}$ is an optimal control, and $X^*$ is the corresponding optimal "trajectory," that is, the solution to the following Hamiltonian system: for $(t,x)\in[0,T]\times\mathbb{R}$,
\[
X^*_t = x+\int_0^t\partial_yH\,ds+\int_0^t\partial_{z^0}H\,dW^0_s+\int_0^t\partial_{z^1}H\,dW_s-D^*_t
= x+\int_0^t[r_sX^*_s+\mu_sa^*_s+\sigma_s\theta_s\pi^*_s]\,ds+\int_0^t\sigma^0_sa^*_s\,dW^0_s+\int_0^t\sigma_s\pi^*_s\,dW_s-D^*_t. \qquad (4.12)
\]
For the given $(a^*,\pi^*,D^*)$ and $X^*$, let $(Y,Z^0,Z^1)$ be the solution to the following backward stochastic differential equation (BSDE):
\[
Y_t = (1-F_T)U'(X^*_T)+\int_t^T\partial_xH\,ds-\int_t^TZ^1_s\,dW_s-\int_t^TZ^0_s\,dW^0_s
= (1-F_T)U'(X^*_T)+\int_t^T[r_sY_s+f_sU'(X^*_s)]\,ds-\int_t^TZ^1_s\,dW_s-\int_t^TZ^0_s\,dW^0_s. \qquad (4.13)
\]
We note that the BSDE (4.13) is known as the first-order adjoint equation of the system (4.10). It is worth noting that although the SDEs (4.12) and (4.13) form a "forward-backward" system, they are "decoupled" in the sense that they can be solved separately; therefore there is no well-posedness issue at this point. The following theorem is then a direct consequence of the Maximum Principle for singular control (cf. [8]).

Theorem 4.2.1. Assume (H5)-(H7). Then the following two statements are equivalent:

(i) $v^*(\cdot)=\big(a^*(\cdot),\pi^*(\cdot),D^*(\cdot)\big)\in\mathcal{U}_{ad}$ is the optimal strategy for Problem 4.1.2, and $X^*$ is the optimal state process in (4.12);

(ii) $(X^*,Y,Z^0,Z^1,a^*,\pi^*)$ is a solution of the forward-backward SDEs (4.12) and (4.13) such that the following identity holds:
\[
H(t,\cdot,a^*_t,\pi^*_t) = \max_{a\in[0,1],\,\pi\in\mathbb{R}}H(t,\cdot,a,\pi), \quad \text{a.e. } t\in[0,T],\ \mathbb{P}\text{-a.s.}, \qquad (4.14)
\]
or equivalently, for a.e. $t\in[0,T]$ and $\mathbb{P}$-a.s.,
\[
H(t,\omega,a^*_t,\pi^*_t) = \max_{a\in[0,1],\,\pi\in\mathbb{R}}\Big\{f_tU(X^*_t)+Y_t(r_tX^*_t+\mu_ta+\sigma_t\theta_t\pi)+\sigma_t\pi Z^1_t+Z^0_t\sigma^0_ta\Big\}. \qquad (4.15)
\]
In addition,
\[
\mathbb{P}\Big\{\forall t\in[0,T]:\ k_t(1-F_t)-Y_t\le0\Big\}=1, \qquad
\mathbb{P}\Big\{\int_0^T\mathbf{1}_{\{k_t(1-F_t)-Y_t<0\}}\,dD^*_t=0\Big\}=1. \qquad (4.16)
\]

Proof. (i) If $(a^*,\pi^*,D^*)$ is an optimal control, then applying the Stochastic Maximum Principle (cf. [8], Theorem 4.2), it follows that there exists an adapted process $(X^*,Y,Z^0,Z^1)$ such that both SDEs (4.12) and (4.13) are satisfied.
Furthermore, the necessary conditions (4.15) and (4.16) hold.

(ii) On the other hand, by Theorem 5.1 of [8], since the admissible controls are square integrable, i.e.,

a_t \in [0,1],\ P\text{-a.s.}; \quad E\int_0^T |\pi_t|^2\,dt < \infty; \quad E|D_T|^2 < \infty,

if for (a, \pi, D) \in \mathcal{U}_{ad} the conditions (4.15) and (4.16) are satisfied, then v^* = (a^*, \pi^*, D^*) is a solution of the optimal control Problem 4.1.2.

Theorem 4.2.1 amounts to saying that an optimal control (a^*, \pi^*, D^*) must come from the solutions to the forward-backward SDEs (4.12) and (4.13), together with the necessary conditions (4.15)-(4.16). However, it does not reveal any information regarding the existence of such an optimal control. In fact, the existence of an optimal control is a totally different issue, which could be attacked via a completely different route. Our plan, however, is to find the optimal control by directly studying the "closed-loop" system, which leads to a fully coupled FBSDE whose well-posedness is by no means clear. We note that in the current case, especially when the Hamiltonian function H is allowed to be random, the SDEs (4.12) and (4.13) become non-Markovian. Consequently, such a direct method is, to the best of our knowledge, novel.

4.3 The Optimal Closed-loop System

In this section we study the well-posedness of the optimal closed-loop system. We begin by assuming that (a^*, \pi^*, D^*) \in \mathcal{U}_{ad} is an optimal control. Recall the SDEs (4.12) and (4.13), and rewrite them in differential form:

dX^*_t = [r_t X^*_t + \alpha_t a^*_t + \theta_t\sigma_t\pi^*_t]\,dt - dD^*_t + \sigma^0_t a^*_t\,dW^0_t + \sigma_t\pi^*_t\,dW_t,
dY_t = -[r_t Y_t + f_t U'(X^*_t)]\,dt + Z^0_t\,dW^0_t + Z^1_t\,dW_t,
X^*_0 = x, \qquad Y_T = (1-F_T)U'(X^*_T).     (4.17)

Note that since both f_t and U' are positive, by the properties of the probability density and the utility function, the standard comparison theorem for BSDEs shows that Y_t \ge 0 a.s. In what follows we will try to derive a more explicit feedback form of the optimal control (a^*, \pi^*, D^*).
To this end, we first recall the Hamiltonian function

H(t, \omega, a, \pi) = f_t U(X^*_t) + Y_t(r_t X^*_t + \alpha_t a + \theta_t\sigma_t\pi) + \sigma_t\pi Z^1_t + a\sigma^0_t Z^0_t.

By virtue of Theorem 4.2.1, we can determine the optimal control (a^*, \pi^*, D^*) by first solving the following constrained optimization problem:

H(t, a^*_t, \pi^*_t) = \max_{a\in[0,1],\,\pi\in\mathbb{R}} H(t, a, \pi).     (4.18)

We shall simply follow the method of Lagrange multipliers to solve (4.18). More precisely, first note that we can write H(t, \omega, a, \pi) = \mathcal{A}(t,\omega)a + \mathcal{B}(t,\omega)\pi + \mathcal{C}(t,\omega), where

\mathcal{A}(t,\cdot) = Y_t\alpha_t + Z^0_t\sigma^0_t,
\mathcal{B}(t,\cdot) = \sigma_t(Y_t\theta_t + Z^1_t),     (4.19)
\mathcal{C}(t,\cdot) = f_t U(X^*_t) + Y_t r_t X^*_t.

Define the Lagrangian:

K(t, a, \pi) = \mathcal{A}(t)a + \mathcal{B}(t)\pi + \mathcal{C}(t) + \lambda_1 a - \lambda_2(a-1).     (4.20)

Then the first order conditions are

\partial_a K(t,a,\pi) = Y_t\alpha_t + Z^0_t\sigma^0_t + \lambda_1 - \lambda_2 = 0;
\partial_\pi K(t,a,\pi) = \sigma_t(Y_t\theta_t + Z^1_t) = 0;
\lambda_1 a = 0; \quad \lambda_2(a-1) = 0; \quad \lambda_1 \ge 0,\ \lambda_2 \ge 0; \quad 0 \le a \le 1.     (4.21)

Solving (4.21) we obtain

a^*_t = a_t 1_{\{Y_t\alpha_t + Z^0_t\sigma^0_t = 0\}} + 1_{\{Y_t\alpha_t + Z^0_t\sigma^0_t > 0\}}, \qquad \pi^*_t = \pi_t 1_{\{Y_t\theta_t + Z^1_t = 0\}},     (4.22)

since \sigma_t, the volatility of the stock price, is assumed to be positive. More intuitively, since the Hamiltonian function has no interaction terms between the controls, we may consider the sufficient condition for optimality separately for a and \pi. For the optimal reinsurance proportion a, notice that H is linear in a, and for fixed time t the coefficient of a is Y_t\alpha_t + Z^0_t\sigma^0_t. Hence, if Y_t\alpha_t + Z^0_t\sigma^0_t > 0, then for a \in [0,1] the maximum of H is obviously attained at a = 1; similarly, when Y_t\alpha_t + Z^0_t\sigma^0_t < 0, the maximum is attained at a = 0. Therefore, in order for the maximum of H to be attained at a = a^*, a.e. t\in[0,T], P-a.s., where a^* is an admissible optimal control value, we would need the condition Y_t\alpha_t + Z^0_t\sigma^0_t = 0, a.e. t\in[0,T], P-a.s. Similarly, in order for the maximum of H to be attained at \pi = \pi^*, we would need its coefficient \sigma_t(Y_t\theta_t + Z^1_t) to be zero.
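The bang-bang structure described above is easy to check numerically: for fixed t, the Hamiltonian is affine in a with slope \mathcal{A}(t) = Y_t\alpha_t + Z^0_t\sigma^0_t, so the maximizer over [0,1] sits at a corner unless the slope vanishes. A minimal sketch of this pointwise rule (function names illustrative), with a brute-force grid check:

```python
import numpy as np

def argmax_linear_on_unit_interval(slope, tol=1e-12):
    """Maximize a -> slope * a over a in [0, 1]."""
    if slope > tol:
        return 1.0            # H strictly increasing in a: full retention
    if slope < -tol:
        return 0.0            # H strictly decreasing in a: full reinsurance
    return None               # slope == 0: every a in [0, 1] is optimal

def brute_force_argmax(slope, n=1001):
    """Grid search confirming the corner rule."""
    grid = np.linspace(0.0, 1.0, n)
    return float(grid[np.argmax(slope * grid)])
```

The `None` branch is exactly the degenerate case discussed in the text: when the slope is zero, H no longer depends on a and any admissible value, in particular an interior a^*, is optimal.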
However, \sigma_t, as the volatility of the stock price, is assumed to be positive; it is therefore necessary and sufficient to have Y_t\theta_t + Z^1_t = 0. On the other hand, if Y_t\alpha_t + Z^0_t\sigma^0_t = 0, then the Hamiltonian no longer depends on a, hence H attains its maximum at a = a^*; similarly for \pi. Together with Theorem 4.2.1, we have actually proved the following characterization of the optimal feedback control.

Theorem 4.3.1. Assume that (a^*, \pi^*, D^*) \in \mathcal{U}_{ad}, and that there is a solution (X^*_t, Y_t, Z^0_t, Z^1_t) of (4.17) such that the following identities hold:

Y_t\alpha_t + Z^0_t\sigma^0_t = 0 \quad \text{and} \quad Y_t\theta_t + Z^1_t = 0,     (4.23)

together with the conditions

P\big\{ \forall t\in[0,T],\ k_t(1-F_t) + Y_t \ge 0 \big\} = 1, \qquad P\Big\{ \int_0^T 1_{\{k_t(1-F_t)+Y_t>0\}}\,dD^*_t = 0 \Big\} = 1.

Then (a^*_t, \pi^*_t, D^*_t) is the optimal strategy, and X^* is the optimal state process.

Due to the condition (4.23), the FBSDE (4.17) becomes the following for the corresponding optimal strategy (a^*, \pi^*, D^*):

dX^*_t = [r_t X^*_t + \alpha_t a^*_t + \theta_t\sigma_t\pi^*_t]\,dt - dD^*_t + \sigma^0_t a^*_t\,dW^0_t + \sigma_t\pi^*_t\,dW_t,
dY_t = -[r_t Y_t + f_t U'(X^*_t)]\,dt - \frac{\alpha_t Y_t}{\sigma^0_t}\,dW^0_t - \theta_t Y_t\,dW_t,
X^*_0 = x, \qquad Y_T = (1-F_T)U'(X^*_T).

Now our goal is to transform the above SDE system into a constrained FBSDE system, by first letting \bar Z^0_t = \sigma^0_t a_t, \bar Z^1_t = \sigma_t\pi_t, and then reversing the roles of the forward and backward processes; that is, we apply the linear transformation

\begin{pmatrix} \bar X \\ \bar Y \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} X \\ Y \end{pmatrix},

which yields

d\bar X_t = -[r_t\bar X_t + f_t U'(\bar Y_t)]\,dt - \frac{\alpha_t\bar X_t}{\sigma^0_t}\,dW^0_t - \theta_t\bar X_t\,dW_t,
d\bar Y_t = [r_t\bar Y_t + \frac{\alpha_t\bar Z^0_t}{\sigma^0_t} + \theta_t\bar Z^1_t]\,dt - dD_t + \bar Z^0_t\,dW^0_t + \bar Z^1_t\,dW_t,
\bar X_0 = \bar x, \qquad \bar Y_T = (U')^{-1}[\bar X_T/(1-F_T)],     (4.24)

where \bar Z^0_t \in [0, \sigma^0_t]. Notice that if the above FBSDE system has a supersolution (\bar X, \bar Y, \bar Z^0, \bar Z^1, D) satisfying condition (4.16), then we can construct an admissible strategy simply by a = \bar Z^0/\sigma^0 and \pi = \bar Z^1/\sigma, and by Theorem 4.3.1 the obtained strategy should be optimal.
Therefore, our task now is to find an adapted solution of the FBSDE (4.27) satisfying condition (4.23).

4.4 Existence of Optimal Control

Recall that the existence of the optimal strategy requires the existence of an admissible solution of the FBSDE (3.1) under the constraint \bar Z^0_t \in [0, \sigma^0_t], for which we can apply the so-called penalization procedure. More specifically, we first define the penalization function corresponding to the convex constraint on \bar Z^0. Let \rho(t,x) = x^- + (x - \sigma^0_t)^+; then one can easily see that \bar Z^0_t \in [0, \sigma^0_t] is equivalent to \rho(t, \bar Z^0_t) = 0, i.e., the convex constraint on \bar Z^0 is the zero set of the function \rho. Our goal is to consider the convergence of a sequence of penalized solutions and eventually show that the limiting solution is actually a solution of the original FBSDE system (3.1). Now consider a sequence of markets \mathcal{M}^n in which we consider the penalized FBSDEs

d\bar X^n_t = -[r_t\bar X^n_t + f_t U'(\bar Y^n_t)]\,dt - \frac{\alpha_t\bar X^n_t}{\sigma^0_t}\,dW^0_t - \theta_t\bar X^n_t\,dW_t,
d\bar Y^n_t = [r_t\bar Y^n_t + \frac{\alpha_t\bar Z^{n,0}_t}{\sigma^0_t} + \theta_t\bar Z^{n,1}_t]\,dt - n\rho(t, \bar Z^{n,0}_t)\,dt + \bar Z^{n,0}_t\,dW^0_t + \bar Z^{n,1}_t\,dW_t,
\bar X^n_0 = \bar x, \qquad \bar Y^n_T = (U')^{-1}[\bar X^n_T/(1-F_T)].     (4.25)

To further simplify this sequence of penalized FBSDEs, we now apply an appropriate transformation to obtain a neater version. Since the current penalized FBSDE possesses a decoupling field and consequently admits a unique solution in L^2, the FBSDE system obtained after the transformation inherits the same properties. As we shall see, the corresponding decoupling fields and solutions are related to, but different from, the original ones.
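The penalty function \rho above is easy to sanity-check: it is nonnegative, and it vanishes exactly on the constraint set [0, \sigma^0_t]. A minimal sketch (function names illustrative):

```python
def penalty(z, sigma0):
    """rho(t, z) = z^- + (z - sigma0_t)^+ for the constraint Z^0_t in [0, sigma0_t]."""
    return max(-z, 0.0) + max(z - sigma0, 0.0)

def satisfies_constraint(z, sigma0, tol=1e-12):
    """Z^0 lies in [0, sigma0] if and only if the penalty vanishes."""
    return penalty(z, sigma0) <= tol
```

In the penalized BSDE, the drift term n\rho(t, \bar Z^{n,0}_t) charges a growing price for leaving [0, \sigma^0_t], which is what forces the limit to respect the constraint.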
Recall that the penalized system, in integral form, reads

\bar X^n_t = \bar x + \int_0^t [a_s\bar X^n_s + f(s, \bar Y^n_s)]\,ds + \int_0^t \bar X^n_s\langle \Lambda_s, dW_s\rangle,
\bar Y^n_t = g(\bar X^n_T) + \int_t^T [b_s\bar Y^n_s + \langle \Theta_s, \bar Z^n_s\rangle + n\rho(s, \bar Z^n_s)]\,ds - \int_t^T \langle \bar Z^n_s, dW_s\rangle,     (4.26)

where

a_t = -r_t, \quad f(t,y) = -f_t U'(y), \quad \Lambda_t = -[\alpha_t/\sigma^0_t,\ \theta_t]^T,
b_t = -r_t, \quad \Theta_t = -[\alpha_t/\sigma^0_t,\ \theta_t]^T, \quad \bar Z^n_t = [\bar Z^{n,0}_t,\ \bar Z^{n,1}_t]^T, \quad W_t = [W^0_t,\ W_t]^T,
g(x) = (U')^{-1}[x/(1-F_T)], \quad \rho(t,z) = \rho(t,z^0) = (z^0)^- + (z^0 - \sigma^0_t)^+.

Theorem 4.4.1. Assume (H5)-(H7), and that c_t(1+\rho_t) = m. Then the constrained FBSDE

\bar X_t = \bar x - \int_0^t [r_s\bar X_s + f_s U'(\bar Y_s)]\,ds - \int_0^t \frac{\alpha_s\bar X_s}{\sigma^0_s}\,dW^0_s - \int_0^t \theta_s\bar X_s\,dW_s,
\bar Y_t = (U')^{-1}[\bar X_T/(1-F_T)] - \int_t^T [r_s\bar Y_s + \frac{\alpha_s\bar Z^0_s}{\sigma^0_s} + \theta_s\bar Z^1_s]\,ds + D_T - D_t - \int_t^T \bar Z^0_s\,dW^0_s - \int_t^T \bar Z^1_s\,dW_s,     (4.27)

with the constraint \langle \bar Z_t, e_1\rangle \in [0, \sigma^0_t], has an adapted solution.

Proof. This is a direct application of Theorem 3.2.13.

Theorem 4.4.2. Assume (H5)-(H7), and that the condition of Theorem 4.4.1 holds. Let (\bar X, \bar Y, \bar Z, D) be the adapted solution of the FBSDE (4.27), and set

a^*_t = \bar Z^0_t/\sigma^0_t, \quad \pi^*_t = \bar Z^1_t/\sigma_t, \quad D^*_t = D_t.

If, in addition, it holds that

P\big\{ \forall t\in[0,T],\ k_t(1-F_t) + Y_t \ge 0 \big\} = 1, \qquad P\Big\{ \int_0^T 1_{\{k_t(1-F_t)+Y_t>0\}}\,dD^*_t = 0 \Big\} = 1,

where Y_t = \bar X_t, then (a^*, \pi^*, D^*) is the optimal strategy for Problem 4.1.2, and X^* = \bar Y is the corresponding trajectory.

Proof. Applying Theorem 4.3.1, one can easily see that under the two probability conditions, (a^*_t, \pi^*_t, D^*_t) is the optimal strategy and X^* \triangleq \bar Y, which is the original wealth process, is the optimal state process (since we reversed the roles of the forward and backward processes in the system).

Chapter 5
Applications: Optimal Reinsurance with Counterparty Risk

In this chapter, the cost minimization problem for a general reinsurance model is studied.
What is different from the previous chapter is that we now include counterparty risk in the model, where counterparty risk refers to the possibility of default of the reinsurance company. The reserve process of the insurance company is described by a stochastic differential equation driven by a Brownian motion and a Poisson random measure, representing the randomness from the financial market and from the insurance claims, respectively. Random safety loading and a stochastic interest rate are allowed in the model, so that the reserve process is non-Markovian in nature. The insurer can manage its reserve by investing in the financial market, purchasing a reinsurance policy from a reinsurer, and distributing dividends. To simplify our model, we consider the case where the claim process follows a compound Poisson process, to which we further apply a diffusion approximation.

5.1 Preliminaries and Problem Formulation

Throughout this section, we assume that all uncertainties come from a common complete probability space (\Omega, \mathcal{F}, P) on which are defined a d-dimensional Brownian motion W = \{W_t: t\ge0\} and a stationary Poisson point process p. We assume that W and p are independent and represent the randomness from the financial market and the insurance claims, respectively. For notational clarity, we let F^W = \{\mathcal{F}^W_t, t\ge0\} and F^p = \{\mathcal{F}^p_t: t\ge0\} be the filtrations generated by W and p, respectively, and denote F = F^W \vee F^p, with the usual P-augmentation so that it satisfies the usual hypotheses. Furthermore, we assume that the point process p is of class (QL) and denote its counting measure by N_p(dsdz). The corresponding compensator is then \hat N_p(dsdz) = E[N_p(dsdz)] = \nu(dz)ds, where \nu(dz) is the Lévy measure of p, with \nu(\mathbb{R}_+) < \infty. The following spaces will be used: \mathcal{F}_p (resp.
\mathcal{F}^2_p) denotes the class of all random fields \varphi: \mathbb{R}_+\times\mathbb{R}_+\times\Omega \to \mathbb{R}_+ such that, for fixed z, the mapping (t,\omega) \mapsto \varphi(t,z,\omega) is F^p-predictable, and

E\int_0^T\int_{\mathbb{R}_+} |\varphi(s,z)|\,\nu(dz)ds < \infty \qquad \Big(\text{resp. } E\int_0^T\int_{\mathbb{R}_+} |\varphi(s,z)|^2\,\nu(dz)ds < \infty\Big).

First, consider the following reserve process of a simple continuous-time insurance model:

X_t = x + \int_0^t c_s(1+\rho_s)\,ds - S_t = x + \int_0^t c_s(1+\rho_s)\,ds - \int_0^t\int_{\mathbb{R}_+} f(s,z)\,N_p(ds,dz),     (5.1)

where X_t represents the reserve process of the insurer with initial endowment x, c_t is the premium rate paid to the insurer by the buyers of the insurance, with safety loading \rho_t, and S_t = \int_0^t\int_{\mathbb{R}_+} f(s,z)N_p(ds,dz) represents the claim process coming to the insurer, a jump process with density f(t,z). In light of the well-known "equivalence principle" in actuarial mathematics, the premium process c_t can be quantitatively characterized by the identity c_t\,dt = E[dS_t \mid \mathcal{F}^p_t], i.e.,

c_t = \int_{\mathbb{R}_+} f(t,z)\,\nu(dz), \qquad \forall t\ge0,\ P\text{-a.s.}     (5.2)

Moreover, it is common to require that the premium and the expense safety loading satisfy the following "net profit condition":

\operatorname*{essinf}_{\omega\in\Omega}\Big\{ c_t(\omega)(1+\rho_t(\omega)) - \int_{\mathbb{R}_+} f(t,z,\omega)\,\nu(dz) \Big\} > 0, \qquad \forall t\ge0.     (5.3)

We therefore make the following assumptions.

(H8) The random field f \in \mathcal{F}_p, and it is continuous in t and piecewise continuous in z. Furthermore, it is uniformly bounded: there exist constants 0 < d < L such that

d \le f(s,z,\omega) \le L, \qquad \forall (s,z)\in[0,\infty)\times\mathbb{R}_+,\ P\text{-a.s.}

(H9) The safety loading \rho is a bounded, nonnegative, F^p-adapted process, and the premium process c is an F^p-adapted process satisfying (5.2). Furthermore, the processes c, \rho satisfy the "net profit condition" (5.3).

5.2 Proportional Reinsurance Policy

Definition 5.2.1. A proportional reinsurance policy is a random field \alpha: [0,\infty)\times\mathbb{R}_+\times\Omega \to [0,1] such that \alpha \in \mathcal{F}_p and, for each fixed z\in\mathbb{R}_+, the process \alpha(\cdot,z,\cdot) is predictable.
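The equivalence principle (5.2) and the net profit condition (5.3) can be checked numerically for a concrete specification. The sketch below takes f(t,z) = z and \nu(dz) = \lambda e^{-z}dz (both purely illustrative choices, not from the text) and evaluates the expected claim rate by a trapezoid rule:

```python
import numpy as np

def expected_claim_rate(f, nu_density, lam, z_grid):
    """Trapezoid-rule value of int f(z) nu(dz), with nu(dz) = lam*nu_density(z)dz."""
    vals = f(z_grid) * lam * nu_density(z_grid)
    dz = np.diff(z_grid)
    return float(np.sum(0.5 * (vals[:-1] + vals[1:]) * dz))

def net_profit_margin(c, rho, claim_rate):
    """Left-hand side of (5.3): premium inflow minus expected claim outflow."""
    return c * (1.0 + rho) - claim_rate

# illustrative: f(z) = z, nu(dz) = 2*exp(-z)dz, so the claim rate is lam*E[U] = 2
z = np.linspace(0.0, 50.0, 20001)
rate = expected_claim_rate(lambda z: z, lambda z: np.exp(-z), 2.0, z)
c = rate          # premium chosen by the equivalence principle (5.2)
```

With the premium pinned down by (5.2), the margin in (5.3) reduces to c\,\rho_t, so the net profit condition holds exactly when the loading is strictly positive.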
Given a reinsurance policy \alpha, the part of the claim that the insurance company retains to itself during any time period [t, t+\Delta t] is assumed to be [\alpha\star S]_t^{t+\Delta t}, where

[\alpha\star S]_t^{t+\Delta t} = \int_t^{t+\Delta t}\int_{\mathbb{R}_+} \alpha(s,z)f(s,z)\,N_p(dsdz).

That is to say, the part of the claim the insurer cedes to the reinsurer is [(1-\alpha)\star S]_t^{t+\Delta t}. The reserve process with reinsurance can be derived heuristically using the so-called "profit margin principle". To begin, let us denote the expense safety loading of the reinsurance company by \rho^r, and the resulting modified safety loading for the original insurance company after reinsurance by \bar\rho. For an arbitrary small time interval [t, t+\Delta t], denoting E^p_t[\cdot] = E[\cdot \mid \mathcal{F}^p_t], the following identity expresses the "profit margin principle":

\underbrace{(1+\rho_t)E^p_t[1\star S]_t^{t+\Delta t}}_{\text{original premium}} - \underbrace{(1+\rho^r_t)E^p_t[(1-\alpha)\star S]_t^{t+\Delta t}}_{\text{premium paid to the reinsurer}} = \underbrace{(1+\bar\rho_t)E^p_t[\alpha\star S]_t^{t+\Delta t}}_{\text{modified premium of the insurer}}.

Now, under the proportional reinsurance policy, the reserve process of the insurer becomes

X_t = x + \int_0^t m(s,\alpha)(1+\bar\rho_s)\,ds - \int_0^t\int_{\mathbb{R}_+} \alpha(s,z)f(s,z)\,N_p(ds,dz),     (5.4)

where

m(s,\alpha) = \int_{\mathbb{R}_+} \alpha(s,z)f(s,z)\,\nu(dz)

is now the modified premium process, accounting for the premium charged by the reinsurer for the reinsurance policy, while \bar\rho_t is the modified safety loading in this case, calculated following the "profit margin principle". Finally, the last term is the retained part of the original claim process, while the rest is indemnified by the reinsurer.

Now, to include counterparty risk in the model, let \tau be a random time on the original probability space, define the corresponding right-continuous process H_t = 1_{\{\tau\le t\}}, and denote by H the filtration generated by H, that is, \mathcal{H}_t = \sigma(H_s: s\le t). Then it is clear that \tau is an H-stopping time, but not necessarily a stopping time with respect to the reference filtration F.
We assume that H and F are independent; equivalently, the default time \tau is independent of the information from the financial and insurance markets. We therefore enlarge the filtration and consider the joint filtration G \triangleq F \vee H, that is, we set \mathcal{G}_t = \mathcal{H}_t \vee \mathcal{F}_t for every t\in\mathbb{R}_+; then it is obvious that \tau is now a G-stopping time. Furthermore, at the time of default, we assume there is partial recovery: let \delta represent the "recovery rate", that is, in case of default, the percentage of the promised indemnity that the reinsurer is still able to pay. For simplicity, we assume \delta \equiv \delta_0 for some \delta_0 \in (0,1). Let \tilde S_t denote the retained claim process under the reinsurance policy with counterparty risk; then we have:

when H_t = 0: \tilde S_t = \int_0^t\int_{\mathbb{R}_+} \alpha(s,z)f(s,z)\,N_p(ds,dz), as before;
when H_t = 1: \tilde S_t = \int_0^t\int_{\mathbb{R}_+} [\alpha + (1-\delta_0)(1-\alpha)]f\,N_p(ds,dz).

Therefore, in general we have the following expression for \tilde S:

\tilde S_t = \int_0^t\int_{\mathbb{R}_+} [\alpha + (1-\delta_0)(1-\alpha)H_s]f(s,z)\,N_p(ds,dz).

Now, let \bar\alpha = \alpha + (1-\delta_0)(1-\alpha)H_s; then by the "profit margin principle",

(1+\rho_t)E^p_t[S]_t^{t+\Delta t} - (1+\rho^r_t)E^p_t[(1-\alpha)\star S]_t^{t+\Delta t} = (1+\bar\rho_t)E^p_t[\bar\alpha\star S]_t^{t+\Delta t},

where \rho_t, \rho^r_t are the safety loadings of the insurer and the reinsurer, respectively, while \bar\rho_t is the modified loading with counterparty risk.
Now assume that during this time interval the reinsurance policy does not change in time; then, since \alpha \in \mathcal{F}_p by definition, and thanks to the assumptions on f, we have:

E_t[\bar\alpha\star S]_t^{t+\Delta t}
= E_t\big[[\bar\alpha\star S]_t^{t+\Delta t} \mid H_t = 0\big]P(H_t=0\mid\mathcal{F}_t) + E_t\big[[\bar\alpha\star S]_t^{t+\Delta t} \mid H_t = 1\big]P(H_t=1\mid\mathcal{F}_t)
= P(H_t=0\mid\mathcal{F}_t)\int_t^{t+\Delta t}\int_{\mathbb{R}_+} \alpha(t,z)f(s,z)\,\nu(dz)ds
  + P(H_t=1\mid\mathcal{F}_t)\int_t^{t+\Delta t}\int_{\mathbb{R}_+} [\alpha(t,z) + (1-\delta_0)(1-\alpha(t,z))]f(s,z)\,\nu(dz)ds
\approx \Delta t\Big\{ P(H_t=0\mid\mathcal{F}_t)\int_{\mathbb{R}_+} \alpha(t,z)f(t,z)\,\nu(dz)
  + P(H_t=1\mid\mathcal{F}_t)\int_{\mathbb{R}_+} [\alpha(t,z) + (1-\delta_0)(1-\alpha(t,z))]f(t,z)\,\nu(dz) \Big\}.

Let p_t = P(\tau\le t\mid\mathcal{F}_t) = P(H_t=1\mid\mathcal{F}_t) and q_t = 1-p_t = P(\tau>t\mid\mathcal{F}_t) = P(H_t=0\mid\mathcal{F}_t); p_t is the conditional default probability given the reference information up to time t, and consequently q_t is the "survival probability". Hence, it is readily seen that

E_t[\bar\alpha\star S]_t^{t+\Delta t} \approx \Delta t\int_{\mathbb{R}_+} [\alpha q_t + (\alpha + (1-\delta_0)(1-\alpha))p_t]f(t,z)\,\nu(dz).

We can draw a similar result for E_t[(1-\alpha)\star S]_t^{t+\Delta t}:

E_t[(1-\alpha)\star S]_t^{t+\Delta t} \approx \Delta t\int_{\mathbb{R}_+} [(1-\alpha)q_t + \delta_0(1-\alpha)p_t]f(t,z)\,\nu(dz).

The "profit margin principle" then yields

(1+\rho_t)c_t - (1+\rho^r_t)\int_{\mathbb{R}_+} [(1-\alpha)q_t + \delta_0(1-\alpha)p_t]f(t,z)\,\nu(dz) = (1+\bar\rho_t)\int_{\mathbb{R}_+} [\alpha q_t + (\alpha + (1-\delta_0)(1-\alpha))p_t]f(t,z)\,\nu(dz),

from which we can find the modified safety loading \bar\rho_t with counterparty risk. Let

\bar m(t,\alpha) = \int_{\mathbb{R}_+} [\alpha q_t + (\alpha + (1-\delta_0)(1-\alpha))p_t]f(t,z)\,\nu(dz)

denote the modified premium process with counterparty risk; then we have the following reserve process under proportional reinsurance with counterparty risk:

X_t = x + \int_0^t (1+\bar\rho_s)\bar m(s,\alpha)\,ds - \int_0^t\int_{\mathbb{R}_+} [\alpha + (1-\delta_0)(1-\alpha)H_s]f(s,z)\,N_p(ds,dz),

and the modified claim process with proportional reinsurance and counterparty risk is now

\tilde S_t = \int_0^t\int_{\mathbb{R}_+} [\alpha + (1-\delta_0)(1-\alpha)H_s]f(s,z)\,N_p(ds,dz).

5.3 Compound Poisson Case

We first consider a simplified version of our model, where we assume the intensity f(t,z) \equiv z and \nu(\mathbb{R}_+) = \lambda > 0; then S_t is essentially a compound Poisson process.
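In the compound Poisson specialization just introduced (f(t,z) = z, \nu(\mathbb{R}_+) = \lambda, claim-size mean m), the profit-margin identity above can be solved for \bar\rho in closed form. A minimal sketch (parameter values illustrative); note that under "cheap" reinsurance, \rho^r = \rho, the modified loading collapses back to \rho, because the expected retained and ceded claim rates always sum to \lambda m:

```python
def modified_loading(rho, rho_r, a, p, delta0, lam, m):
    """Solve (1+rho)*c - (1+rho_r)*ceded = (1+rho_bar)*retained for rho_bar,
       with f(t,z) = z, c = lam*m (equivalence principle), q = 1 - p.
       All arguments are illustrative scalars for a fixed time t."""
    q = 1.0 - p
    c = lam * m                                               # premium rate
    ceded = lam * m * (1.0 - a) * (q + delta0 * p)            # expected ceded rate
    retained = lam * m * (a * q + (a + (1.0 - delta0) * (1.0 - a)) * p)
    return ((1.0 + rho) * c - (1.0 + rho_r) * ceded) / retained - 1.0
```

When the reinsurer charges a higher loading than the insurer (\rho^r > \rho), the insurer's modified loading \bar\rho drops below \rho, reflecting the cost of the cover.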
Indeed, in this case S_t = \sum_{0\le s<t,\ s\in D_p} \Delta S_s = \sum_{k\ge1} \Delta S_{T_k}1_{\{T_k\le t\}}, with p_t \triangleq \Delta S_t a Poisson point process, D_p = \{t: p_t \ne 0\} = \cup_{k=1}^\infty\{T_k\}, and P\{p_{T_k}\in dz\} = \nu(dz)/\lambda, \forall k\ge1. Furthermore, N_t \triangleq \sum_{k=1}^\infty 1_{\{T_k\le t\}} is a standard Poisson process with intensity \lambda > 0, and S can be rewritten as

S_t = \sum_{k=1}^{N_t} p_{T_k}.

Consequently, the original premium process c_t = \int_{\mathbb{R}_+} f(t,z)\,\nu(dz) = \int_{\mathbb{R}_+} z\,\nu(dz) = \lambda E(U_1) is now a constant, and the corresponding safety loading is also a constant. In this case (5.3) becomes c(1+\rho) > \lambda E(U_1), the usual net profit condition. We further assume that the reinsurance policy is independent of the claim size, that is, \alpha(s,z) = a_s, and recall that \bar\alpha_t = a_t + (1-\delta_0)(1-a_t)H_t = [1-(1-\delta_0)H_t]a_t + (1-\delta_0)H_t. Consequently, the modified premium process is

\bar m(s,a_s) = \int_{\mathbb{R}_+} [a_s q_s + (a_s + (1-\delta_0)(1-a_s))p_s]\,z\,\nu(dz)
            = \lambda m\big\{ a_s q_s + [a_s + (1-\delta_0)(1-a_s)]p_s \big\}
            = \lambda m[a_s + (1-\delta_0)(1-a_s)p_s]
            = \lambda m[1-(1-\delta_0)p_s]a_s + \lambda m(1-\delta_0)p_s.

Note: \int_{\mathbb{R}_+} z\,\nu(dz) = \lambda E(U_1) = \lambda m, where U_1 = \Delta S_{T_1} is the jump size of the independent, identically distributed claims. The corresponding modified claim process is

\tilde S_t = \int_0^t\int_{\mathbb{R}_+} [a_s + (1-\delta_0)(1-a_s)H_s]\,z\,N_p(ds,dz)
          = \int_0^t [a_s + (1-\delta_0)(1-a_s)H_s]\,dS_s,

where S is now simply a compound Poisson process; hence it admits the diffusion approximation

\tilde S_t = \int_0^t [a_s + (1-\delta_0)(1-a_s)H_s]\,dS_s
          \approx \int_0^t \lambda m[a_s + (1-\delta_0)(1-a_s)H_s]\,ds + \int_0^t \sigma^0[a_s + (1-\delta_0)(1-a_s)H_s]\,dW^0_s,

where, with m \triangleq E[U_1], \sigma_U^2 \triangleq \mathrm{Var}[U_1], and E[N_t] = \lambda t,

(\sigma^0)^2 = \lambda(m^2 + \sigma_U^2).

We shall assume without loss of generality that F^{W^0} = F^p \vee H. In addition, we further assume that the reinsurance is "cheap", i.e.,
\rho^r = \rho; hence \bar\rho = \rho^r = \rho, and the diffusion approximation of the reserve process with jumps becomes

X_t \approx x + \int_0^t (1+\bar\rho_s)\bar m(s,a_s)\,ds - \int_0^t \lambda m[a_s + (1-\delta_0)(1-a_s)H_s]\,ds - \int_0^t \sigma^0[a_s + (1-\delta_0)(1-a_s)H_s]\,dW^0_s
   = x + \int_0^t [(1+\bar\rho_s)\bar m(s,a_s) - \lambda m\bar\alpha_s]\,ds - \int_0^t \sigma^0\bar\alpha_s\,dW^0_s
   = x + \int_0^t [A(s,p_s,H_s)a_s + B(s,p_s,H_s)]\,ds - \int_0^t [M(s,H_s)a_s + N(s,H_s)]\,dW^0_s,

where the coefficients of the above linear system are

A(s,p_s,H_s) = \lambda m\big\{(1-\delta_0)[H_s - (1+\bar\rho_s)p_s] + \bar\rho_s\big\},
B(s,p_s,H_s) = \lambda m(1-\delta_0)[(1+\bar\rho_s)p_s - H_s],
M(s,H_s) = \sigma^0[1-(1-\delta_0)H_s],
N(s,H_s) = \sigma^0(1-\delta_0)H_s.

Now let us assume that the insurance company is allowed to invest its reserve in a securities market. For simplicity we assume that only two assets are traded continuously, one risky and one riskless, with prices at time t denoted by P_t and P^0_t, respectively; hence the Brownian motion is one-dimensional, i.e., d = 1. We assume that (P, P^0) follows the standard stochastic dynamics, in which the randomness originates from the reference filtration:

dP_t = P_t[b_t\,dt + \sigma_t\,dW_t], \qquad P_0 = p;
dP^0_t = P^0_t r_t\,dt, \qquad P^0_0 = p_0,     (5.5)

where, again, the appreciation rate b = \{b_t\}, the volatility \sigma = \{\sigma_t\}, and the interest rate r = \{r_t\} are all allowed to be random. If we denote by \pi_t the amount of money invested in the risky asset at time t (thus X_t - \pi_t is invested in the risk-free asset), then one can easily show that the dynamics of the reserve process with proportional reinsurance and (self-financing) investment strategies satisfy the following (linear) stochastic differential equation (SDE):

dX_t = [r_t X_t + A(t,p_t,H_t)a_t + \theta_t\sigma_t\pi_t + B(t,p_t,H_t)]\,dt - [M(t,H_t)a_t + N(t,H_t)]\,dW^0_t + \sigma_t\pi_t\,dW_t, \qquad X_0 = x,

where \theta_t = \sigma_t^{-1}(b_t - r_t), t\ge0, is the so-called "risk premium". We note that the Brownian motions W and W^0 are naturally assumed to be independent, and we shall still denote G = F^{W^0} \vee F^W.
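The diffusion approximation above matches the first two moments of the compound Poisson claim process: E[S_t] = \lambda m t and \mathrm{Var}[S_t] = \lambda(m^2+\sigma_U^2)t = (\sigma^0)^2 t. A quick Monte Carlo check with Exp(1) claim sizes (so m = \sigma_U^2 = 1; all values illustrative):

```python
import numpy as np

def simulate_compound_poisson(lam, t, n_paths, rng):
    """S_t = sum of N ~ Poisson(lam*t) iid Exp(1) claims.
       A sum of n Exp(1) variables is Gamma(n, 1); the n = 0 case is zero."""
    n = rng.poisson(lam * t, n_paths)
    s = rng.gamma(np.maximum(n, 1), 1.0)
    return np.where(n > 0, s, 0.0)

rng = np.random.default_rng(0)
lam, t, m, var_u = 2.0, 1.0, 1.0, 1.0
S = simulate_compound_poisson(lam, t, 200_000, rng)
mean_exact = lam * m * t                   # drift rate of the approximation, times t
var_exact = lam * (m**2 + var_u) * t       # (sigma^0)^2 * t
```

The sample mean and variance of S should sit close to the drift \lambda m t and the variance (\sigma^0)^2 t of the approximating diffusion, which is exactly the moment-matching behind the approximation.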
On the other hand, besides investment in the financial market, we assume that dividends are distributed by the insurer, described by a nondecreasing process D_t adapted to the given filtration. Formally, we give the following definition.

Definition 5.3.1. A cumulative dividend process is a G-predictable, nonnegative, and nondecreasing process D such that E|D_T|^2 < \infty.

The dynamics of the reserve process then become

dX_t = (r_t X_t + A_t a_t + \theta_t\sigma_t\pi_t + B_t)\,dt - dD_t - (M_t a_t + N_t)\,dW^0_t + \sigma_t\pi_t\,dW_t, \qquad X_0 = x,     (5.6)

where we abbreviate A_t = A(t,p_t,H_t), and similarly for B_t, M_t, N_t. To formulate our optimization problem, we first define the set of all admissible reinsurance/investment/dividend strategies, denoted by \mathcal{U}_{ad}, as

\mathcal{U}_{ad} = \Big\{ u = (a,\pi,D) \in L^0_G([0,T];\mathbb{R}^3): a_t\in[0,1],\ P\text{-a.s.};\ E\int_0^T |\pi_t|^2\,dt < \infty;\ E|D_T|^2 < \infty \Big\},

and denote, for each u \in \mathcal{U}_{ad}, the corresponding solution to (5.6) by X^u when the dependence needs to be emphasized.

5.4 Cost Minimization

Alternatively, we can consider a more general formulation in which the insurance company's objective is as follows: find the optimal strategy u^* = (a^*, \pi^*, D^*) minimizing the cost functional

J(u(\cdot)) = E\Big[ \int_0^T L(t, X^u_t)\,dt + U(X^u_T) + \int_0^T k_t\,dD_t \Big],     (5.7)

where we do not assume L, U to be utility functions, but given deterministic functions in C^2_b(\mathbb{R}_+).
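The coefficients A, B, M, N of the regime-switching reserve dynamics satisfy two quick consistency identities that follow from the derivation above: with full retention a = 1, the drift contribution A + B reduces to \lambda m\bar\rho (the no-reinsurance profit margin), and the diffusion contribution M + N reduces to \sigma^0, for either value of H. A minimal check (parameter values illustrative):

```python
def coefficients(lam, m, sigma0, delta0, rho_bar, p, H):
    """A, B (drift) and M, N (diffusion) of the approximated reserve
       dX = [rX + A a + theta*sigma*pi + B]dt - dD - [M a + N]dW0 + sigma*pi dW."""
    A = lam * m * ((1.0 - delta0) * (H - (1.0 + rho_bar) * p) + rho_bar)
    B = lam * m * (1.0 - delta0) * ((1.0 + rho_bar) * p - H)
    M = sigma0 * (1.0 - (1.0 - delta0) * H)
    N = sigma0 * (1.0 - delta0) * H
    return A, B, M, N
```

Both identities hold for H = 0 and H = 1 alike, which confirms that the default indicator only redistributes the exposure between the "own" and "recovery" parts without changing the a = 1 totals.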
Then our stochastic control problem is:

Problem 5.4.1. Find an optimal control triple u^* = (a^*, \pi^*, D^*) \in \mathcal{U}_{ad} such that

J(u^*(\cdot)) = \inf_{u\in\mathcal{U}_{ad}} J(u(\cdot)),     (5.8)

where the cost functional J(\cdot) is defined by (5.7) and the state dynamics are given by

dX_t = [r_t X_t + A(t,p_t,H_t)a_t + \theta_t\sigma_t\pi_t + B(t,p_t,H_t)]\,dt - dD_t - [M(t,H_t)a_t + N(t,H_t)]\,dW^0_t + \sigma_t\pi_t\,dW_t, \qquad X_0 = x.     (5.9)

5.5 The Candidates of Optimal Control

The corresponding "Hamiltonian function" is: for any (x, y, z^0, z^1)\in\mathbb{R}^4, a\in[0,1], and \pi\in\mathbb{R},

H(t, x, a, \pi; y, z^0, z^1) \triangleq L(t,x) + y[r_t x + A(t,p_t,H_t)a + \theta_t\sigma_t\pi + B(t,p_t,H_t)] + \sigma_t\pi z^1 - [M(t,H_t)a + N(t,H_t)]z^0.     (5.10)

Now assume that v^* = (a^*, \pi^*, D^*) \in \mathcal{U}_{ad} is an optimal control, and X^* is the corresponding optimal "trajectory", that is, the solution to the following Hamiltonian system: for (t,x)\in[0,T]\times\mathbb{R},

X^*_t = x + \int_0^t \partial_y H\,ds + \int_0^t \partial_{z^0} H\,dW^0_s + \int_0^t \partial_{z^1} H\,dW_s - D^*_t
      = x + \int_0^t [r_s X^*_s + A(s,p_s,H_s)a^*_s + \theta_s\sigma_s\pi^*_s + B(s,p_s,H_s)]\,ds - \int_0^t [M(s,H_s)a^*_s + N(s,H_s)]\,dW^0_s + \int_0^t \sigma_s\pi^*_s\,dW_s - D^*_t.     (5.11)

For the given (a^*, \pi^*, D^*) and X^*, let (Y_t, Z^0_t, Z^1_t) be the solution to the following backward stochastic differential equation (BSDE):

Y_t = U'(X^*_T) + \int_t^T \partial_x H\,ds - \int_t^T Z^1_s\,dW_s - \int_t^T Z^0_s\,dW^0_s
    = U'(X^*_T) + \int_t^T [r_s Y_s + L_x(s, X^*_s)]\,ds - \int_t^T Z^1_s\,dW_s - \int_t^T Z^0_s\,dW^0_s.     (5.12)

We note that the BSDE (5.12) is known as the first order adjoint equation of the system (5.9). It is worth noting that although the SDEs (5.11) and (5.12) form a "forward-backward" system, they are "decoupled" in the sense that they can be solved separately. Therefore there is no well-posedness issue at this point. The following theorem is then a direct consequence of the Maximum Principle for Singular Control (cf. [3], [30]).

Theorem 5.5.1. Assume (H1)-(H3).
Then the following two statements are equivalent:

(i) v^*(\cdot) = (a^*(\cdot), \pi^*(\cdot), D^*(\cdot)) \in \mathcal{U}_{ad} is the optimal strategy for Problem 5.4.1, and X^* is the optimal state process in (5.11);

(ii) (X^*, Y, Z^0, Z^1, a^*, \pi^*) is a solution of the forward-backward SDEs (5.11) and (5.12) such that the following identity holds:

H(t, \cdot, a^*_t, \pi^*_t) = \max_{a\in[0,1],\,\pi\in\mathbb{R}} H(t, \cdot, a, \pi), \quad \text{a.e. } t\in[0,T],\ P\text{-a.s.}     (5.13)

In addition,

P\big\{ \forall t\in[0,T],\ k_t + Y_t \ge 0 \big\} = 1, \qquad P\Big\{ \int_0^T 1_{\{k_t+Y_t>0\}}\,dD^*_t = 0 \Big\} = 1.     (5.14)

Proof. (i) If (a^*, \pi^*, D^*) is an optimal control, then applying the Stochastic Maximum Principle (cf. [8], Theorem 4.2), it follows that there exists an adapted process (X^*, Y, Z^0, Z^1) such that both SDEs (5.11) and (5.12) are satisfied. Furthermore, the necessary conditions (5.13) and (5.14) hold.

(ii) On the other hand, by Theorem 5.1 of [8], since the admissible controls are square integrable, i.e.,

a_t\in[0,1],\ P\text{-a.s.}; \quad E\int_0^T |\pi_t|^2\,dt < \infty; \quad E[|D_T|^2] < \infty,

if for (a, \pi, D) \in \mathcal{U}_{ad} the conditions (5.13) and (5.14) are satisfied, then v^* = (a^*, \pi^*, D^*) is a solution of the optimal control Problem 5.4.1.

Theorem 5.5.1 amounts to saying that an optimal control (a^*, \pi^*, D^*) must come from the solutions to the forward-backward SDEs (5.11) and (5.12), together with the necessary conditions (5.13)-(5.14). However, it does not reveal any information regarding the existence of such an optimal control. In fact, the existence of an optimal control is a totally different issue, which could be attacked via a completely different route. Our plan, however, is to find the optimal control by directly studying the "closed-loop" system, which leads to a fully coupled FBSDE whose well-posedness is by no means clear. We note that in the current case, especially when the Hamiltonian function H is allowed to be random, the SDEs (5.11) and (5.12) become non-Markovian. Consequently, such a direct method is, to the best of our knowledge, novel.

5.6 The Optimal Closed-loop System

In this section we study the well-posedness of the optimal closed-loop system.
We begin by assuming that (a^*, \pi^*, D^*) \in \mathcal{U}_{ad} is an optimal control. Recall the SDEs (5.11) and (5.12), and rewrite them in differential form:

dX^*_t = [r_t X^*_t + A(t,p_t,H_t)a^*_t + \theta_t\sigma_t\pi^*_t + B(t,p_t,H_t)]\,dt - dD^*_t - [M(t,H_t)a^*_t + N(t,H_t)]\,dW^0_t + \sigma_t\pi^*_t\,dW_t,
dY_t = -[r_t Y_t + L_x(t, X^*_t)]\,dt + Z^0_t\,dW^0_t + Z^1_t\,dW_t,
X^*_0 = x, \qquad Y_T = U'(X^*_T).     (5.15)

Note that since U' is positive, the standard comparison theorem for BSDEs shows that Y_t \ge 0 a.s. In what follows we will try to derive a more explicit feedback form of the optimal control (a^*, \pi^*, D^*). To this end, we first recall the Hamiltonian function (5.10). By virtue of Theorem 5.5.1, we can determine the optimal control (a^*, \pi^*, D^*) by first solving the following constrained optimization problem:

H(t, a^*_t, \pi^*_t) = \max_{a\in[0,1],\,\pi\in\mathbb{R}} H(t, a, \pi).     (5.16)

We shall simply follow the method of Lagrange multipliers to solve (5.16). More precisely, first note that we can write H(t, \omega, a, \pi) = \mathcal{A}(t,\omega)a + \mathcal{B}(t,\omega,\pi) + \mathcal{C}(t,\omega), where

\mathcal{A}(t,\cdot) = Y_t A(t,p_t,H_t) - Z^0_t M(t,H_t),
\mathcal{B}(t,\cdot,\pi) = \sigma_t\pi(Y_t\theta_t + Z^1_t),     (5.17)
\mathcal{C}(t,\cdot) = L(t, X^*_t) + Y_t[r_t X^*_t + B(t,p_t,H_t)] - Z^0_t N(t,H_t).

Define the Lagrangian:

K(t, a, \pi) = \mathcal{A}(t)a + \mathcal{B}(t,\pi) + \lambda_1 a - \lambda_2(a-1).     (5.18)

Then the first order conditions are

\partial_a K(t,a,\pi) = \mathcal{A}(t) + \lambda_1 - \lambda_2 = 0;
\partial_\pi K(t,a,\pi) = \partial_\pi\mathcal{B}(t,\pi) = \sigma_t(Y_t\theta_t + Z^1_t) = 0;
\lambda_1 a = 0; \quad \lambda_2(a-1) = 0; \quad \lambda_1 \ge 0,\ \lambda_2 \ge 0; \quad 0 \le a \le 1.     (5.19)

Solving (5.19) we obtain

a^*_t = a_t 1_{\{Y_t A(t,p_t,H_t) - Z^0_t M(t,H_t) = 0\}} + 1_{\{Y_t A(t,p_t,H_t) - Z^0_t M(t,H_t) > 0\}}, \qquad \pi^*_t = \pi_t 1_{\{Y_t\theta_t + Z^1_t = 0\}}.     (5.20)

Together with Theorem 5.5.1, we have actually proved the following characterization of the optimal feedback control.
Theorem 5.6.1. Assume that (a^*, \pi^*, D^*) \in \mathcal{U}_{ad}, and that there is a solution (X^*_t, Y_t, Z^0_t, Z^1_t) of (5.15) such that the following identities hold:

Y_t A(t,p_t,H_t) - Z^0_t M(t,H_t) = 0 \quad \text{and} \quad Y_t\theta_t + Z^1_t = 0,     (5.21)

together with the conditions

P\big\{ \forall t\in[0,T],\ k_t + Y_t \ge 0 \big\} = 1, \qquad P\Big\{ \int_0^T 1_{\{k_t+Y_t>0\}}\,dD^*_t = 0 \Big\} = 1.

Then (a^*_t, \pi^*_t, D^*_t) is the optimal strategy, and X^* is the optimal state process.

Notice that M(t,H_t) = \sigma^0[1-(1-\delta_0)H_t]; hence

if H_t = 0, then M(t,H_t) = \sigma^0 \ne 0; \qquad if H_t = 1, then M(t,H_t) = \sigma^0\delta_0 \ne 0.

Due to the condition (5.21), and since M(t,H_t) \ne 0, the FBSDE (5.15) becomes the following for the corresponding optimal strategy (a^*, \pi^*, D^*):

dX^*_t = [r_t X^*_t + A(t,p_t,H_t)a^*_t + \theta_t\sigma_t\pi^*_t + B(t,p_t,H_t)]\,dt - dD^*_t - [M(t,H_t)a^*_t + N(t,H_t)]\,dW^0_t + \sigma_t\pi^*_t\,dW_t,
dY_t = -[r_t Y_t + L_x(t, X^*_t)]\,dt + \frac{Y_t A(t,p_t,H_t)}{M(t,H_t)}\,dW^0_t - \theta_t Y_t\,dW_t,
X^*_0 = x, \qquad Y_T = U'(X^*_T).

Now our goal is to transform the above SDE system into a constrained FBSDE system, by first letting \bar Z^0_t = M(t,H_t)a_t + N(t,H_t), \bar Z^1_t = \sigma_t\pi_t, and then reversing the roles of the forward and backward processes; that is, applying the linear transformation

\begin{pmatrix} \bar X \\ \bar Y \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} X \\ Y \end{pmatrix}

yields

d\bar X_t = -[r_t\bar X_t + L_x(t, \bar Y_t)]\,dt + \frac{A(t,p_t,H_t)\bar X_t}{M(t,H_t)}\,dW^0_t - \theta_t\bar X_t\,dW_t,
d\bar Y_t = \Big[r_t\bar Y_t + \frac{A(t,p_t,H_t)(\bar Z^0_t - N(t,H_t))}{M(t,H_t)} + \theta_t\bar Z^1_t + B(t,p_t,H_t)\Big]\,dt - dD_t - \bar Z^0_t\,dW^0_t + \bar Z^1_t\,dW_t,
\bar X_0 = \bar x, \qquad \bar Y_T = (U')^{-1}(\bar X_T),     (5.22)

where \bar Z^0_t \in [N(t,H_t),\ M(t,H_t)+N(t,H_t)]. Notice that if the above FBSDE system has a supersolution (\bar X, \bar Y, \bar Z^0, \bar Z^1, D) satisfying condition (5.14), then we can construct an admissible strategy simply by a = (\bar Z^0_t - N(t,H_t))/M(t,H_t) and \pi = \bar Z^1_t/\sigma_t, and by Theorem 5.6.1 the obtained strategy should be optimal.
Therefore, our task now is to find an adapted solution of the FBSDE (3.1) satisfying condition (5.14).

Theorem 5.6.2. Assume (H1)-(H12). Then Theorem 3.3.8 applies and the FBSDE (5.22) is well-posed, provided that

\theta_s = \frac{(1-\delta_0)(p_s - H_s)}{1 - (1-\delta_0)p_s}.

Corollary 5.6.3. Assume (H1)-(H12), and further assume that \tau is an F-stopping time. Then the FBSDE (5.22) is well-posed, provided that \theta_t = 0, \forall t\in[0,T], P-a.s.

Theorem 5.6.4. Assume (H1)-(H12). If it holds that k_t + Y_t > 0, \forall t\in[0,T], P-a.s., and

\theta_s = \frac{(1-\delta_0)(p_s - H_s)}{1 - (1-\delta_0)p_s},

then the FBSDE (5.22) is well-posed, and (a^*, \pi^*, D^*), where

a^* = \frac{\bar Z^0_t - N(t,H_t)}{M(t,H_t)}, \qquad \pi^* = \frac{\bar Z^1_t}{\sigma_t},

is the optimal solution to Problem 5.4.1.

Chapter 6
Bibliography

[1] Asmussen, S., and Taksar, M. (1997) Controlled diffusion models for optimal dividend pay-out. Insurance: Math. & Economics 20, 1-15.
[2] Asmussen, S., Hojgaard, B., and Taksar, M. (2000) Optimal risk control and dividend distribution policies. Example of excess-of-loss reinsurance for an insurance corporation. Finance and Stochastics 4, 299-324.
[3] Bahlali, S., and Mezerdi, B. (2005) A general stochastic maximum principle for singular control problems, Electronic Journal of Probability.
[4] Bowers, N. L. Jr., Gerber, H. U., Hickman, J. C., Jones, D. A., and Nesbitt, C. J. (1997) Actuarial Mathematics, The Society of Actuaries, USA.
[5] Blanchet-Scalliet, C., El Karoui, N., Jeanblanc, M., and Martellini, L. (2008) Optimal investment decisions when time-horizon is uncertain, Journal of Mathematical Economics 44, 1100-1113.
[6] Buckdahn, R., and Hu, Y. Hedging contingent claims for a large investor in an incomplete market, Applied Probability 30, 239-255.
[7] Bühlmann, H. (1970) Mathematical Methods in Risk Theory, Springer-Verlag, Berlin.
[8] Cadenillas, A., and Haussmann, U. The stochastic maximum principle for a singular control problem.
[9] Cvitanic, J., and Karatzas, I. (1992) Convex duality in constrained portfolio optimization, Ann. Appl. Prob. 2(4), 767-818.
[10] Cvitanic, J., and Karatzas, I.
(1993) Hedging contingent claims with constrained portfolios. Ann. Appl. Probab. 3, 652-681.
[11] Gerber, H. U. (1970) Mathematical methods in risk theory, Springer-Verlag, Berlin.
[12] Grandell, J. (1977) A class of approximations of ruin probabilities. Scand. Actuarial J., Supplement, 37-52.
[13] Hojgaard, B. and Taksar, M. (1997) Optimal proportional reinsurance policies for diffusion models. Scand. Actuarial J. 2, 166-180.
[14] Hojgaard, B. and Taksar, M. (1998) Optimal proportional reinsurance policies for diffusion models. Scand. Actuarial J. 2, 166-180.
[15] Hojgaard, B. and Taksar, M. (1999) Controlling risk exposure and dividend payout schemes: insurance company example. Math. Finance 9 (2), 153-182.
[16] Hojgaard, B. and Taksar, M. (2001) Optimal risk control for a large corporation in the presence of return on investments. Finance and Stochastics 5, 527-547.
[17] Karatzas, I., Lehoczky, J. P., Sethi, S. P., and Shreve, S. (1986) Explicit solution of a general consumption/investment problem. Mathematics of Operations Research, Vol. 11, No. 2, 261-294.
[18] Karatzas, I. and Shreve, S. E. (1991) Brownian Motion and Stochastic Calculus, 2nd ed., Springer-Verlag.
[19] Karatzas, I. and Shreve, S. E. (1998) Methods of Mathematical Finance, Springer-Verlag.
[20] Kazamaki, N. Continuous Exponential Martingales and BMO, Springer-Verlag.
[21] Liu, Y. and Ma, J. (2009) Optimal reinsurance/investment for general insurance models. The Annals of Applied Probability, Vol. 19 (4), 1495-1528.
[22] Ma, J. and Sun, X. (2003) Ruin probabilities for insurance models involving investments. Scand. Actuarial J., Vol. 3, 217-237.
[23] Ma, J., Wu, Z., Zhang, D., and Zhang, J. (2012) On well-posedness of forward-backward SDEs - a unified approach.
[24] Ma, J. and Yong, J. (1999) Forward-Backward Stochastic Differential Equations and Their Applications, Springer, Berlin, New York.
[25] Kobylanski, M. (2000) Backward stochastic differential equations and partial differential equations with quadratic growth. The Annals of Probability, Vol. 28, No. 2, 558-602.
[26] Peng, S. (1999) Monotonic limit theorem of BSDE and nonlinear decomposition theorem of Doob-Meyer's type. Probability Theory and Related Fields, Springer-Verlag.
[27] Protter, P. (1990) Stochastic Integration and Differential Equations: A New Approach, Springer-Verlag, Berlin.
[28] Radner, R. and Shepp, L. (1996) Risk vs. profit potential: a model for corporate strategy. J. Econ. Dynam. Control, 1373-1393.
[29] Yong, J. and Zhou, X. (1999) Stochastic Controls: Hamiltonian Systems and HJB Equations, Springer-Verlag, New York.
[30] Zhang, F. (2013) Stochastic maximum principle for mixed regular-singular control problems of forward-backward systems. Springer-Verlag, Berlin Heidelberg.
Abstract
The goal of our research is to study a class of general non-Markovian forward-backward stochastic differential equations (FBSDEs) with constraints on the Z process. In particular, we consider the cases in which the driving Brownian motion is multi-dimensional and the coefficients are assumed to be only Lipschitz. Such FBSDEs are particularly motivated by applications to optimal reinsurance/investment/dividend problems and their closed-loop solutions via Pontryagin's Stochastic Maximum Principle. The well-posedness of such FBSDEs, especially those with Z-constraints, turns out to be an important device that can lead to a closed-loop system for the optimal strategy.
Asset Metadata

Creator: Zhang, Tian (author)
Core Title: Optimal investment and reinsurance problems and related non-Markovian FBSDEs with constraints
School: College of Letters, Arts and Sciences
Degree: Doctor of Philosophy
Degree Program: Applied Mathematics
Publication Date: 07/21/2015
Defense Date: 04/30/2015
Publisher: University of Southern California (original), University of Southern California. Libraries (digital)
Tags: forward-backward stochastic differential equation, non-Markovian, OAI-PMH Harvest, reinsurance, stochastic maximum principle
Format: application/pdf (imt)
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Ma, Jin (committee chair)
Creator Email: tianzhan@usc.edu, tianzhan31@gmail.com
Permanent Link (DOI): https://doi.org/10.25549/usctheses-c3-601952
Unique identifier: UC11301770
Identifier: etd-ZhangTian-3658.pdf (filename), usctheses-c3-601952 (legacy record id)
Legacy Identifier: etd-ZhangTian-3658.pdf
Dmrecord: 601952
Document Type: Dissertation
Rights: Zhang, Tian
Type: texts
Source: University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA