STOCHASTIC DIFFERENTIAL EQUATIONS DRIVEN BY FRACTIONAL BROWNIAN MOTION AND POISSON JUMPS

by Weisheng Xie

A Dissertation Presented to the FACULTY OF THE USC GRADUATE SCHOOL, UNIVERSITY OF SOUTHERN CALIFORNIA, In Partial Fulfillment of the Requirements for the Degree DOCTOR OF PHILOSOPHY (APPLIED MATHEMATICS), August 2016. Copyright 2016 Weisheng Xie

To my family

Acknowledgment

I would like to express my deepest gratitude to my advisor, Prof. Jin Ma, for his continuous support of my Ph.D. study and related research, and for his patience, motivation, and immense knowledge. I could have achieved nothing in my Ph.D. study without him. I have learned from him not only valuable academic ideas in research, but also wisdom in every aspect of life, which will definitely benefit me greatly in the future. Besides my advisor, I would like to sincerely thank my dissertation committee members, Prof. Jianfeng Zhang and Prof. Jinchi Lv, and my guidance committee members, Prof. Sergey Lototsky and Prof. Remigijus Mikulevicius, not only for their insightful comments and encouragement at my defense, but also for offering great special-topics courses throughout the last five years. I would also like to thank Prof. Yonghui Zhou and my colleagues Xiaojing Xing, Rentao Sun, Cong Wu, and Eunjung Noh for valuable discussions. Thanks to all my roommates, who shared unforgettable memories with me. Thanks to all my friends on the basketball court, without whom I could have graduated earlier. Finally, my most heartfelt thanks go to my parents, Shushun Xie and Xiaoling Sun, as well as my wife, Mengya Cai, for all the greatest support in my life.

List of Figures
1.1 S&P 500's 100-day historical volatility
1.2 Volatility correlation matrix of different sectors in the S&P 500

List of Tables
1.1 Default probability of the simple borrowing & lending model
Table of Contents

Dedication
Acknowledgment
List of Figures
List of Tables
Chapter 1: Introduction
  1 Motivation
  2 Empirical observations
  3 Properties
  4 Some simple simulation comparisons
  5 Examples of mean field interactions
Chapter 2: Preliminaries
  1 Fractional Brownian motion
  2 Fractional Wiener space
  3 Stochastic integrals and Malliavin calculus
  4 Absolutely continuous transformation of Gaussian measures
    4.1 Cameron-Martin space
    4.2 Kusuoka's theorem revisited
  5 Monge-Kantorovich measure transportation on Wiener space
Chapter 3: SDEs Driven by Fractional Brownian Motion and Poisson Processes
  1 Fractional Wiener-Poisson space
  2 SDEs with Lipschitz coefficients
    2.1 The case of additive jump noise
    2.2 The case of multiplicative (jump) noise
  3 SDEs with continuous coefficients
    3.1 The case without jumps
    3.2 The case with jumps
Chapter 4: McKean-Vlasov type SDEs driven by fBm and related topics
  1 A generalized anticipative Girsanov transformation
  2 Existence and uniqueness of McKean-Vlasov fBm SDE
    2.1 Nonlinear drift case
    2.2 Nonlinear volatility case
  3 Examples
  4 Uniqueness of filtering SDE driven by fractional Brownian motion
    4.1 Existence
    4.2 Uniqueness
Bibliography

Chapter 1
Introduction

The primary goal of this work is to study a general class of McKean-Vlasov type stochastic differential equations driven by fractional Brownian motion and a Poisson point process of class (QL). More precisely, we consider the SDE of the following form:

    X_t = x + ∫_0^t b(s, X_s, ·, L(X_s)) ds + ∫_0^t σ(s, X_s, ·, L(X_s)) dB^H_s + ∫_0^t θ(s, X_s, ·, L(X_s)) dL_s,   t ≥ 0,   (0.1)

where B^H is a fractional Brownian motion with H ∈ (0,1); L is a Poisson point process, independent of B^H; b, σ, and θ are (possibly random) functions satisfying appropriate regularity conditions; and L(X_t) denotes the law of the solution X at time t.

Stochastic differential equations driven by Brownian motion with jumps are not new. They can be studied in the general framework of Lévy processes, or jump diffusions, and in most cases in a Markovian paradigm (see, for example, the books [32], [30], [15], to mention a few). However, the problem becomes quite different when a fractional Brownian motion is involved. First, it is well known that an fBm is neither a Markov process nor a semimartingale, except for H = 1/2. Thus even a simple SDE such as

    dX_t = b(t, X_t) dt + σ(t, X_t) dB^H_t,   t ≥ 0,

is already quite a nontrivial problem compared to the Brownian motion case (especially when H < 1/2), and most results are restricted to a deterministic diffusion coefficient σ, or to the Hurst parameter range H > 1/2, due to some fundamental difficulties arising from the various definitions of the stochastic integrals as well as the failure of traditional methods (such as Picard iteration).
In fact, to date most SDEs driven by fBm are solved only for H in a subset of (0,1). For example, in [31] existence and uniqueness are proved under the restriction H > 1/2. With the help of the recently developed rough path analysis, [13] was able to solve the SDE for H > 1/3, and [18] improved the result to H > 1/4. But it still seems difficult to obtain the result for the general case H ∈ (0,1). The existing literature on this general case includes the work [28], where the SDE has additive noise, i.e., the diffusion coefficient is deterministic (thus only the Wiener integral is used); and the work [16], in which the stochastic integral is in the Skorohod sense, and the diffusion coefficient is allowed to be multiplicative and random (even anticipating), but has to be linear in X. The method there is based on an extension of the so-called anticipative Girsanov transformation, initiated by R. Buckdahn [6], to the fractional Brownian motion case. However, all these works consider only the case where the driving process is a continuous Gaussian process, whereas in reality abnormal discontinuities, or jumps, are empirically an indispensable element, especially in the pricing of stocks or company assets. For example, Merton [27] proposed a jump diffusion process with Poisson jumps to match the abnormal fluctuations of stock prices. It is thus natural to ask whether the well-posedness of an SDE driven by an fBm and a Poisson point process can be obtained in a standard way, since the framework of Lévy processes is obviously no longer valid. Recently, such an SDE was studied in [1] in the additive noise case. But the field is still largely open.

We summarize the current status regarding SDEs driven by fBm and jumps in the following table, classified by the conditions on the drift and diffusion coefficients. In the table, "?" means still to be investigated, and "X" means we have results.
                               σ₁(t,x) Lipschitz in (t,x)       σ₁(t,x) = f(t)                        σ₁(t,x) = σ(t)x
b(t,x) = b(t)x                 0 < H < 1/2: Lyon & Martin 2007
b(t,x) Lipschitz in x          H > 1/2: Nualart & Rascanu 2002; H > 1/2: Nualart & Ouknine 2003       0 < H < 1: Jien & Ma 2009
(strong condition)             H < 1/2: ?
  ... with jumps               Shevchenko 2013                  ?                                     Ma & Bai 2012 (X)
b(t,x) of linear growth in x   ?                                f(t) = 1: Boufoussi & Ouknine 2003;   ?
(weaker condition)                                              H < 1/2: Nualart & Ouknine 2003
                                                                (existence, X)
  ... with jumps               ?                                Bai & Ma 2012 (existence, X)          ?

The story of mean-field type SDEs driven by Brownian motion started with a stochastic toy model for the Vlasov equation of plasma, proposed by Mark Kac in 1956 [17]; later, in 1966, Henry P. McKean published his seminal paper [25]. Thanks to the well-developed theory of SDEs driven by standard Brownian motion, the existence and uniqueness of this equation is known when H = 1/2 and (b, σ) are both Lipschitz. The importance of this type of equation has been brought out recently by the rise of mean-field games around 2006, proposed independently by Huang, Malhamé, and Caines [14], and by Lasry and Lions [20]. Since then the model has developed fast and has had many applications in describing complex multi-agent dynamic systems, such as Mexican waves, stock markets, or fish schooling. However, very little work has been done in the case of fractional Brownian motion, due to the technical issues it involves. To the best of our knowledge, there have been hardly any results on this problem when H < 1/2, unless the volatility coefficient is deterministic (cf. [33]), or when the volatility involves the measure, even in the case H > 1/2. We do notice, however, the McKean-Vlasov type SDE studied in [24], in which the law of the solution appears (only) in the drift coefficient.

My thesis mainly contains two major parts. The first part concerns SDEs driven by fBm and a Poisson point process in the "multiplicative noise" form (Chapter 3). The second part concerns McKean-Vlasov type SDEs driven by fBm, possibly with jumps (Chapters 4-6).
In Chapter 7 we study a special type of conditional mean-field SDE driven by fBm, motivated by applications in strategic insider trading equilibrium models and the related filtering problem.

1 Motivation

The driving idea for the study of the McKean-Vlasov equation comes from mean-field game problems; the problem is also very important for systemic risk control. Let us consider, intuitively, the toy bank model from [8]: the asset X^i of an individual financial bank behaves as a Gaussian process driven by a Brownian motion (it is also natural to take the driving process to be a fractional Brownian motion if long memory is detected). Because of the lending relationships among the banks, we can assume that at each time t bank i lends out at a rate proportional (with rate a) to its asset X^i_t, while it receives money from the other banks in proportion to their assets, so the bank assets X^i can be modeled as

    dX^i_t = -a (X^i_t - (1/N) Σ_j X^j_t) dt + σ dB^i_t,   i = 1, ..., N,   (1.2)

where σ is the volatility. When N → ∞, these N equations can be described by a McKean-Vlasov equation:

    dX_t = -a (X_t - E(X_t)) dt + σ dB_t.   (1.3)

More generally, if the single-particle dynamics are

    dX^i_t = b(t, X^i_t, X^{-i}_t) dt + σ(t, X^i_t, X^{-i}_t) dB^i_t,

where X^{-i}_t := (X^1_t, ..., X^{i-1}_t, X^{i+1}_t, ..., X^N_t), and b and σ are symmetric in the last N-1 variables, then we can define b̃ and σ̃ with respect to the empirical measure μ^N_t = (1/N) Σ_{j=1}^N δ_{X^j_t}, which is the distribution of X_t, so that the above equation can be written as

    dX^i_t = b̃(t, X^i_t, μ^N_t) dt + σ̃(t, X^i_t, μ^N_t) dB^i_t.

Sending N → ∞, and taking ω into consideration as a more general case, leads us to our goal problem (0.1).
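To make the comparison between (1.2) and (1.3) concrete, here is a minimal Euler-scheme sketch of the N-bank system in the standard Brownian case H = 1/2. The parameter values and the Gaussian-increment discretization are illustrative assumptions, not taken from the thesis; the point is only that interaction (a > 0) pulls the banks toward their common mean, while the empirical mean itself stays close to the mean-field value E(X_t) = 0 predicted by (1.3).

```python
import math
import random

def simulate_banks(N=200, T=1.0, steps=200, a=1.0, sigma=1.0, seed=7):
    """Euler scheme for the toy interbank model (1.2) with H = 1/2:
    dX^i_t = -a (X^i_t - (1/N) sum_j X^j_t) dt + sigma dB^i_t, X^i_0 = 0."""
    rng = random.Random(seed)
    dt = T / steps
    X = [0.0] * N
    for _ in range(steps):
        m = sum(X) / N  # empirical mean of the bank assets
        X = [x - a * (x - m) * dt + sigma * rng.gauss(0.0, math.sqrt(dt))
             for x in X]
    return X

def spread(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

X_int = simulate_banks(a=1.0)   # interacting banks
X_free = simulate_banks(a=0.0)  # no lending, same driving noise (same seed)
print(spread(X_int), spread(X_free))  # interaction pulls assets toward the mean
```

With a = 1 the cross-sectional spread at time T is visibly smaller than without interaction, which is the flocking effect behind the default probabilities reported in Table 1.1.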
For example, [26] considered the well-known mean-reverting volatility model

    dS_t / S_t = μ(t, S_t) dt + σ(x_t) dW^1_t,   σ(x) = e^x,
    dx_t = k(m - x_t) dt + ν dW^2_t,

where W^1, W^2 are two standard Brownian motions. And [9] considered the case where W is replaced by a fractional Brownian motion B^H with H > 1/2. However, these models focus only on modeling a single asset, disregarding the intrinsic cross-asset dynamics. Admittedly, this approach is practical and good enough for option pricing; as an alternative, we provide another interpretation of the stochastic volatility dynamics that also takes the interaction among different assets into consideration.

[Figure 1.1: S&P 500's 100-day historical volatility]

Figure 1.1 shows the 100-day historical volatility of the different sectors of the S&P 500. In the legend, the first number in parentheses is the Hurst parameter estimated by the R/S method, and the second number in parentheses is the number of stocks in the corresponding sector. It is obvious that there exists high correlation between the volatilities of different sectors (the pairwise correlations are shown in Figure 1.2). In addition, all estimated Hurst parameters are close to 0.8, indicating a long-memory property of the volatility.

[Figure 1.2: Volatility correlation matrix of different sectors in the S&P 500]

This comovement between different sectors of the U.S. financial market and the long-memory property of the volatility have been noticed empirically and described as stylized facts, as pointed out, for example, in [35]. However, there is still very limited study of the true mechanics behind them. For example, [4] provides a new idea on the relationship between macroeconomic news announcements and volatility, which is also very interesting. Besides the comovement effect, consider a market, for example the S&P 500 stock pool, with a rather stable total money flow, while money flows within the pool itself.
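The Hurst parameters quoted in the figure legend come from the R/S (rescaled range) method. The following is a minimal sketch of that estimator; the window sizes and the log-log regression are standard choices, but the exact variant used for Figure 1.1 is not specified in the text.

```python
import math
import random

def rs_hurst(series, windows=(16, 32, 64, 128, 256)):
    """Rescaled-range (R/S) estimate of the Hurst parameter: the slope of
    log(average R/S) against log(window size)."""
    xs, ys = [], []
    for n in windows:
        ratios = []
        for start in range(0, len(series) - n + 1, n):
            block = series[start:start + n]
            m = sum(block) / n
            dev = [b - m for b in block]
            z, cum = [0.0], 0.0
            for d in dev:  # cumulative deviations from the block mean
                cum += d
                z.append(cum)
            R = max(z) - min(z)  # range of the cumulative deviations
            S = math.sqrt(sum(d * d for d in dev) / n)  # block standard deviation
            if S > 0:
                ratios.append(R / S)
        xs.append(math.log(n))
        ys.append(math.log(sum(ratios) / len(ratios)))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

rng = random.Random(1)
white = [rng.gauss(0.0, 1.0) for _ in range(4096)]
h_est = rs_hurst(white)
print(round(h_est, 2))  # near 0.5 for i.i.d. data (R/S has a small upward bias)
```

Applied to a genuinely long-memory series (such as the historical volatilities above), the same slope estimate comes out well above 1/2.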
Intuitively, more activity in one sector will lead to rather cooler activity in the other sectors. Therefore, this idea leads to an alternative mechanical model producing correlation between volatilities, as follows:

    dx^i_t = [α(m_i - x^i_t) + β(x̄_t - x^i_t)] dt + σ dB^i_t,   i = 1, ..., n,   (2.4)

where the B^i are fractional Brownian motions with the same Hurst parameter, and x̄_t := (1/n) Σ_j x^j_t. Thus we can see that this model captures mean reversion, comovement, and long memory. In the following section we will see more properties of this model.

3 Properties

It is not hard to give the explicit solution of equation (2.4) in matrix form:

    X_t = e^{(βJ - (α+β)I)t} X_0 + α (∫_0^t e^{(βJ - (α+β)I)(t-s)} ds) m + σ ∫_0^t e^{(βJ - (α+β)I)(t-s)} dB_s,

where X_0 = (x^1_0, ..., x^n_0)^T, m = (m_1, ..., m_n)^T, B_t = (B^1_t, ..., B^n_t)^T, J is the n × n matrix with all entries equal to 1/n, and I is the n × n identity matrix. More specifically, writing λ = α + β, x̄_0 = (1/n) Σ_j x^j_0, m̄ = (1/n) Σ_j m_j, and B̄ = (1/n) Σ_j B^j, we have

    X^i_t = e^{-λt} x^i_0 + (α/λ)(1 - e^{-λt}) m_i + (e^{-αt} - e^{-λt}) x̄_0 + [(1 - e^{-αt}) - (α/λ)(1 - e^{-λt})] m̄
            + σ ∫_0^t e^{-λ(t-s)} dB^i_s + σ ∫_0^t (e^{-α(t-s)} - e^{-λ(t-s)}) dB̄_s.

As a simple example, when the Hurst parameters of the B^i are all 1/2, we have

    Var(X^i_t) = (σ²/n) [ (1 - e^{-2αt})/(2α) + (n-1)(1 - e^{-2λt})/(2λ) ],

as well as

    Cov(X^i_t, X^j_t) = (σ²/n) [ (1 - e^{-2αt})/(2α) - (1 - e^{-2λt})/(2λ) ],   i ≠ j.

Therefore Cor(X^i_t, X^j_t) = (w - 1)/(w + n - 1), where w = [(1 - e^{-2αt})/(2α)] / [(1 - e^{-2λt})/(2λ)]. Notice that as β → ∞, w → ∞ and Cor(X^i_t, X^j_t) → 1.

Remark 1. According to model (2.4), the average process x̄ follows the autonomous dynamics

    dx̄_t = α(m̄ - x̄_t) dt + σ dB̄_t.

Therefore, α to some extent determines the systemic volatility, and β determines the correlation between different stocks (or across different sectors).
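The variance and covariance formulas above (for H = 1/2) can be checked against a direct Euler-scheme Monte Carlo simulation of (2.4); the parameter values below, and the choice m_i = 0 and x^i_0 = 0, are illustrative assumptions.

```python
import numpy as np

# Euler-scheme Monte Carlo check of the H = 1/2 formulas for model (2.4)
alpha, beta, sigma, n, T = 1.0, 2.0, 1.0, 5, 1.0
lam = alpha + beta
steps, paths = 200, 20000
dt = T / steps
rng = np.random.default_rng(0)

X = np.zeros((paths, n))  # m_i = 0, x^i_0 = 0
for _ in range(steps):
    xbar = X.mean(axis=1, keepdims=True)
    X = X + (alpha * (0.0 - X) + beta * (xbar - X)) * dt \
          + sigma * np.sqrt(dt) * rng.standard_normal((paths, n))

A = (1 - np.exp(-2 * alpha * T)) / (2 * alpha)
B = (1 - np.exp(-2 * lam * T)) / (2 * lam)
w = A / B
var_theory = sigma ** 2 / n * (A + (n - 1) * B)
cov_theory = sigma ** 2 / n * (A - B)
cor_theory = (w - 1) / (w + n - 1)

var_mc = X.var(axis=0).mean()
cov_mc = np.mean([np.cov(X[:, i], X[:, j])[0, 1]
                  for i in range(n) for j in range(i + 1, n)])
print(var_mc, var_theory, cov_mc, cov_theory, cor_theory)
```

Note that cov_theory / var_theory reproduces (w - 1)/(w + n - 1) exactly, confirming the correlation identity above.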
Assume σ = 1; then the following table shows some significant facts.

Table 1.1: Default probability of the simple borrowing & lending model (σ = 1)

 H   No interaction   a=0.1   0.2     0.3     0.4     0.5     0.6     0.7     0.8     0.9     1.0
0.1  0.7532           0.7578  0.7696  0.7708  0.7866  0.7848  0.8086  0.7982  0.8100  0.8242  0.8314
0.3  0.4804           0.4808  0.4802  0.4758  0.4572  0.4588  0.4726  0.4734  0.4714  0.4650  0.4686
0.5  0.2934           0.2906  0.2784  0.2602  0.2540  0.2470  0.2390  0.2458  0.2150  0.2194  0.2030
0.7  0.1982           0.1958  0.1794  0.1730  0.1610  0.1484  0.1444  0.1330  0.1270  0.1146  0.1078
0.9  0.1662           0.1600  0.1438  0.1296  0.1244  0.1098  0.1032  0.0954  0.0806  0.0772  0.0622

As we can see from the table, the interacting system generally decreases the systemic default probability of the banks (with very small H, e.g. H = 0.1, being the exception). How to control the lending rate in a specific situation is a hard but very real problem for practitioners in the microeconomic world. In this thesis I am more interested in the existence and uniqueness of solutions for infinite particle systems driven by fractional Brownian motion, i.e., a McKean-Vlasov type stochastic differential equation driven by fractional Brownian motion, (0.1).

5 Examples of mean field interactions

Notice in our goal equation (0.1) that there are many ways to describe the dependence of the coefficients on the marginal distribution of X_t. One example is the case shown in (1.3), which we call mean-field interaction of scalar type. In this case, we consider the general form

    b(t, X_t, ω, L(X_t)) = b̃(t, X_t, ω, E^{P∘X_t^{-1}}(φ)) = b̃(t, X_t, ω, E^P(φ(X_t))),
    σ(t, X_t, ω, L(X_t)) = σ̃(t, X_t, ω, E^{P∘X_t^{-1}}(φ)) = σ̃(t, X_t, ω, E^P(φ(X_t))).

Specifically, when φ(x) = x, the description of the measure is simply of the form E^P(X_t).

Chapter 2
Preliminaries

In this chapter, we revisit the basic knowledge we need for this work.
1 Fractional Brownian motion

A process B^H on the interval I = [0,1] is called a fractional Brownian motion (fBm) with Hurst parameter H ∈ (0,1) if it satisfies the following:
- B^H_0 = 0;
- B^H_t is Gaussian for all t ∈ I;
- E(B^H_t B^H_s) = R_H(t,s) := (1/2)(t^{2H} + s^{2H} - |t-s|^{2H}) for all t, s ∈ I.

In particular, when H = 1/2, B^H = B^{1/2} is just the standard Brownian motion. Compared with standard Brownian motion, a general fractional Brownian motion with H ≠ 1/2 lacks the independent-increments property, so it is a non-Markovian process; moreover, when H ≠ 1/2, B^H is not a semimartingale either.

2 Fractional Wiener space

Now consider the following operator K_H: L²(I) → L²(I):

    [K_H f](t) := ∫_0^t K_H(t,s) f(s) ds,   t ∈ I, f ∈ L²(I),

where K_H(·,·) is the square-integrable kernel given by

    K_H(t,s) := [Γ(H + 1/2)]^{-1} (t-s)^{H-1/2} F(H - 1/2, 1/2 - H, H + 1/2, 1 - t/s),   (2.1)

and F(a,b,c,z) is the Gauss hypergeometric function

    F(a,b,c,z) = Σ_{k=0}^∞ [a^{(k)} b^{(k)} / (c^{(k)} k!)] z^k,   a, b ∈ R, |z| < 1, c ≠ 0, -1, ...,

where a^{(k)}, b^{(k)}, c^{(k)} denote the Pochhammer symbol for the rising factorial: x^{(0)} = 1, x^{(k)} = Γ(x+k)/Γ(x) (see, e.g., [3, 10] or [1]). It is well known that the following identity holds:

    R_H(t,s) = ∫_0^{t∧s} K_H(t,r) K_H(s,r) dr,   t, s ∈ I.   (2.2)

Next, we define H_H := K_H(L²(I)) and endow it with the inner product

    ⟨f, g⟩_H = ⟨f, g⟩_{H_H} := (f̃, g̃)_2 = (f̃, g̃)_{L²(I)},   f, g ∈ H_H,   (2.3)

where f̃ := [K_H]^{-1} f for f ∈ H_H. Then H_H is a Hilbert space, known as the reproducing kernel space. Next, we let K_{H,*} denote the adjoint operator of K_H: for any f in the dual space and g ∈ L²(I), it holds that ⟨f, K_H g⟩ = (K_{H,*} f, g)_2. Denoting μ_H := P ∘ (B^H)^{-1}, it is well known that the space H is the unique Hilbert space that is densely and continuously embedded in W such that, for any ξ ∈ W*, the topological dual space of W, it holds that

    ∫_W e^{i⟨ξ, ω⟩_{W*,W}} dμ_H(ω) = e^{-(1/2) ‖ξ*‖²_{L²(I)}},   ∀ξ ∈ W*,

where ξ* := K_{H,*} ξ.
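Since the law of (B^H_{t_1}, ..., B^H_{t_n}) is a centered Gaussian vector with covariance matrix (R_H(t_i, t_j)), fBm can be simulated exactly on a finite grid by a Cholesky factorization, which also gives a quick numerical check of the covariance function above; the grid and sample sizes below are illustrative.

```python
import numpy as np

def fbm_paths(H, times, n_paths, seed=0):
    """Exact simulation of fBm on a fixed grid: Cholesky factor of the
    covariance matrix R_H(t, s) = (t^{2H} + s^{2H} - |t - s|^{2H}) / 2."""
    t = np.asarray(times)
    R = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
               - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    L = np.linalg.cholesky(R)
    rng = np.random.default_rng(seed)
    return rng.standard_normal((n_paths, len(t))) @ L.T

H = 0.3
times = np.linspace(0.05, 1.0, 20)
B = fbm_paths(H, times, n_paths=50000)
emp = np.mean(B[:, -1] * B[:, 9])  # sample E[B^H_1 B^H_{0.5}]
theory = 0.5 * (1.0 + times[9] ** (2 * H) - (1.0 - times[9]) ** (2 * H))
print(emp, theory)
```

The same routine with H = 1/2 reduces to ordinary Brownian motion, since R_{1/2}(t,s) = t ∧ s.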
We will call (W, F, μ_H) a fractional Wiener space, and write μ = μ_H when the context is clear.

Remark 2.1. For all f ∈ L²(I),

    |K_H f(t) - K_H f(s)| = | ∫_0^T (K_H(t,r) - K_H(s,r)) f(r) dr |
                          ≤ [ ∫_0^T (K_H(t,r) - K_H(s,r))² dr ]^{1/2} [ ∫_0^T f²(r) dr ]^{1/2}
                          = [ R_H(t,t) + R_H(s,s) - 2R_H(t,s) ]^{1/2} [ ∫_0^T f²(r) dr ]^{1/2}
                          = |t-s|^H [ ∫_0^T f²(r) dr ]^{1/2}.

So K_H f(t) → K_H f(s) as t → s, and thus K_H f is continuous. Together with K_H f(0) = 0, we indeed have the fact that H ⊂ W.

Remark 2.2. As a fact from [3], when H > 1/2,

    K_H(t,r) = [Γ(H - 1/2)]^{-1} r^{1/2-H} ∫_r^t u^{H-1/2} (u-r)^{H-3/2} du · 1_{[0,t]}(r).

When H < 1/2,

    K_H(t,r) = b_H [ (t/r)^{H-1/2} (t-r)^{H-1/2} - (H - 1/2) r^{1/2-H} ∫_r^t (u-r)^{H-1/2} u^{H-3/2} du ],

with b_H = [2H / ((1-2H) β(1-2H, H+1/2))]^{1/2}. When H = 1/2, we have K_{1/2}(t,r) = 1_{[0,t]}(r).

Remark 2.3. By Fubini's theorem,

    ∫_0^1 g(t) K_{H,*} f(t) dt = ∫_0^1 f(t) K_H g(t) dt = ∫_0^1 f(t) ∫_0^1 K_H(t,s) g(s) ds dt = ∫_0^1 g(s) ( ∫_0^1 K_H(t,s) f(t) dt ) ds.

Thus K_{H,*} f(t) = ∫_0^1 K_H(s,t) f(s) ds = ∫_t^1 K_H(s,t) f(s) ds. In particular, K_{H,*} δ_t(·) = K_H(t, ·). Moreover, the dual of W consists of the Radon measures, so the Dirac measure δ_t ∈ W*. Under the fractional Wiener measure, a fractional Brownian motion B^H is given by B^H_t := ⟨δ_t, ω⟩_{W*,W}. In this setting we indeed have

    E_μ[B^H_t B^H_s] = ⟨K_{H,*} δ_t, K_{H,*} δ_s⟩_{L²(I)} = ⟨K_H(t,·), K_H(s,·)⟩_{L²(I)} = R_H(t,s),

which is consistent with the definition of fractional Brownian motion.

3 Stochastic Integrals and Malliavin calculus

Skorohod integral. We introduce the Skorohod integral via Malliavin calculus. The following facts are from [16]; we list them only for ready reference. To begin with, let X be a separable Hilbert space and S(X) the class of all smooth cylindrical functions G: W → X of the form

    G(ω) = g(⟨ℓ_1, ω⟩, ..., ⟨ℓ_n, ω⟩) x,   ω ∈ W,

where n ∈ N, g ∈ C_b^∞(R^n), ℓ_k ∈ W* for k = 1, ..., n, and x ∈ X. We denote S := S(R). Clearly, any G ∈ S can be written in the following form: for some 0 < t_1 < ... < t_n ≤ 1 and g ∈ C_b^∞(R^n),

    G(ω) = g(ω_{t_1}, ..., ω_{t_n}),   ω ∈ W.
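The integral representation of K_H for H > 1/2 in Remark 2.2 can be checked numerically against the identity (2.2). The sketch below uses the normalization constant c_H = [H(2H-1)/β(2-2H, H-1/2)]^{1/2} from Nualart's convention, which may differ by a constant factor from the convention of the reference cited in the text; the quadrature sizes are illustrative.

```python
import math

H = 0.7
lam = H - 0.5
# Normalization constant c_H = sqrt(H (2H-1) / beta(2-2H, H-1/2))
c_H = math.sqrt(H * (2 * H - 1) * math.gamma(1.5 - H)
                / (math.gamma(2 - 2 * H) * math.gamma(H - 0.5)))

def K(t, r, n=300):
    """K_H(t, r) = c_H r^{1/2-H} int_r^t u^{H-1/2} (u-r)^{H-3/2} du.
    The substitution z = (u - r)^{H-1/2} removes the endpoint singularity,
    so a plain midpoint rule converges quickly."""
    if r <= 0.0 or r >= t:
        return 0.0
    zmax = (t - r) ** lam
    h = zmax / n
    acc = sum((r + ((i + 0.5) * h) ** (1.0 / lam)) ** lam for i in range(n))
    return c_H * r ** (0.5 - H) * acc * h / lam

def cov_from_kernel(t, s, n=400):
    # midpoint rule for int_0^{t /\ s} K_H(t, r) K_H(s, r) dr
    a = min(t, s)
    h = a / n
    return sum(K(t, (i + 0.5) * h) * K(s, (i + 0.5) * h) for i in range(n)) * h

t, s = 1.0, 0.6
R = 0.5 * (t ** (2 * H) + s ** (2 * H) - abs(t - s) ** (2 * H))
print(cov_from_kernel(t, s), R)  # the two sides of (2.2)
```

The agreement (up to quadrature error) illustrates why B^H_t = ∫_0^t K_H(t,s) dW_s defines an fBm: the kernel factorizes the covariance R_H.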
We consider the following two derivatives of G ∈ S(X):

    DG(ω) := Σ_{i=1}^n ∂_i g(⟨ℓ_1, ω⟩, ..., ⟨ℓ_n, ω⟩) [K_{H,*} ℓ_i] ⊗ x,   ω ∈ W,
    D^H G(ω) := Σ_{i=1}^n ∂_i g(⟨ℓ_1, ω⟩, ..., ⟨ℓ_n, ω⟩) [(K_H K_{H,*}) ℓ_i] ⊗ x,   ω ∈ W.   (3.4)

Then DG ∈ L²(I) ⊗ X and D^H G ∈ H_H ⊗ X. An important property of the derivative operators D and D^H is the following chain rule: if G = (G_1, ..., G_n), n ∈ N, where {G_i} ⊂ D^{1,2}, and g ∈ C_b^1(R^n), then g(G) ∈ D^{1,2} and

    D[g(G)] = Σ_{i=1}^n ∂_i g(G) DG_i,   t ∈ I.   (3.5)

For notational simplicity we shall denote, for each ω ∈ W, |G(ω)| = |G(ω)|_X, |DG(ω)| = |DG(ω)|_{L²(I)⊗X}, and |D^H G(ω)| = |D^H G(ω)|_{H_H⊗X}, respectively, when there is no danger of confusion, and we denote ‖·‖_2 = ‖·‖_{L²(W,μ_H)}. We shall then use the following norms on S(X):

    ‖G‖²_{1,2} := ‖ |G(·)| ‖²_2 + ‖ |DG(·)| ‖²_2;
    ‖G‖_{1,∞} := ‖ |G(·)| ‖_∞ ∨ ‖ ‖DG(ω)‖_2 ‖_∞.   (3.6)

The norms ‖G‖_{1,2,H} and ‖G‖_{1,∞,H} can then be defined accordingly. We shall denote the closure of S(X) under the norm ‖·‖_{1,2} (resp. ‖·‖_{1,∞}) by D^{1,2} (resp. D^{1,∞}). The spaces D^{1,2,H} (resp. D^{1,∞,H}) can be defined in the obvious way.

In this paper we shall use the Skorohod integral with respect to an fBm, defined as the adjoint operator of the derivative operator D (resp. D^H) and denoted by δ (resp. δ^H). The domain of δ, denoted by Dom(δ), is the set of all processes u ∈ L^∞(W; L²(I)) such that, for any G ∈ S, it holds that |E_μ[(DG, u)_2]| ≤ c ‖G‖_{1,∞}, where c is a constant independent of G. For any u ∈ Dom(δ), the integral δ(u) (resp. δ^H(ũ)) is defined via the duality relation: for all G ∈ S,

    E_μ[δ(u) G] = E_μ[⟨DG, u⟩_{L²}]   (resp. E_μ[δ^H(ũ) G] = E_μ[⟨D^H G, ũ⟩_H]),

where ũ = K_H u. We should note that δ^H(ũ) = δ(u) whenever ũ = K_H u, so throughout this paper we consider only δ as the Skorohod integral. From this definition, the following property is worth noting: for any t ∈ [0,T], B^H_t = ⟨δ_{{t}}, ω⟩ = δ(K_{H,*} δ_{{t}}), and E_μ[(δ(1_{[0,t]}(·)))²] ≠ R_H(t,t). We denote the space L^{1,∞} := L^∞(I; D^{1,∞}). Finally, we also have the following result.

Theorem 3.1.
For arbitrary H ∈ (0,1), B^H ∈ Dom(δ).

Proof. Since B^H_t = ω_t, it suffices to prove that for all G ∈ S, E(⟨DG, ω⟩_{L²}) ≤ c ‖G‖_{1,∞}, where c is a constant independent of G. Without loss of generality, assume G(ω) = g(⟨δ_{t_1}, ω⟩, ..., ⟨δ_{t_n}, ω⟩) with ‖G‖_{1,∞} < M; therefore

    ‖ ‖DG‖_{L²} ‖_∞ ≤ Σ_{1≤i≤n} [R(t_i, t_i)]^{1/2} ‖g_{x_i}‖_∞ < M,

and

    E(⟨DG, ω⟩_{L²}) = E( Σ_{1≤i≤n} g_{x_i} ⟨K_H(t_i, ·), ω⟩_{L²} )
                    = E( Σ_{1≤i≤n} g_{x_i} ∫_0^1 K_H(t_i, s) ω_s ds )
                    = Σ_{1≤i≤n} ∫_0^1 K_H(t_i, s) E(ω_s g_{x_i}) ds
                    ≤ Σ_{1≤i≤n} ∫_0^1 K_H(t_i, s) ‖ω_s‖_2 ‖g_{x_i}‖_2 ds
                    ≤ Σ_{1≤i≤n} ‖g_{x_i}‖_∞ ∫_0^1 K_H(t_i, s) [R(s,s)]^{1/2} ds
                    ≤ (2H+1)^{-1/2} Σ_{1≤i≤n} ‖g_{x_i}‖_∞ [R(t_i, t_i)]^{1/2} ≤ M (2H+1)^{-1/2},

where the last inequality follows from Cauchy-Schwarz: since ‖ω_s‖_2 = [R(s,s)]^{1/2} = s^H and ∫_0^1 s^{2H} ds = (2H+1)^{-1},

    ∫_0^1 K_H(t_i, s) s^H ds ≤ ( ∫_0^1 K_H(t_i, s)² ds )^{1/2} ( ∫_0^1 s^{2H} ds )^{1/2} = [R(t_i, t_i)]^{1/2} (2H+1)^{-1/2}.

That is to say, E(⟨DG, ω⟩_{L²}) ≤ (2H+1)^{-1/2} ‖G‖_{1,∞}; thus B^H ∈ Dom(δ). □

4 Absolutely Continuous Transformation of Gaussian Measures

4.1 Cameron-Martin space

As an intuition for the Cameron-Martin space of the fractional Wiener space, we first take a look at a finite-dimensional Gaussian vector X̃ = (x_{t_1}, ..., x_{t_n})^T ~ N(0̃, Σ), where t_i = i/n. Consider the transformation

    T(X̃) = X̃ + h̃,   h̃ ∈ R^n.   (4.7)

Notice that T^{-1}(X̃) = X̃ - h̃; therefore, writing μ for the law of X̃,

    d[μ∘T^{-1}](X̃) = dμ(X̃ - h̃) = (2π)^{-n/2} |Σ|^{-1/2} exp( -(X̃ - h̃)^T Σ^{-1} (X̃ - h̃) / 2 ) dX̃
                    = exp( X̃^T Σ^{-1} h̃ - (1/2) h̃^T Σ^{-1} h̃ ) dμ(X̃),

i.e.,

    (d[μ∘T^{-1}]/dμ)(X̃) = exp( X̃^T Σ^{-1} h̃ - (1/2) h̃^T Σ^{-1} h̃ ).

So, as a necessary condition for the measures μ and μ∘T^{-1} to be equivalent, "(1/2) h̃^T Σ^{-1} h̃" should be finite, although this is always true in a finite-dimensional space when Σ is invertible. Similarly, when it comes to the fractional Wiener space (W, μ), which is an infinite-dimensional Gaussian space, the corresponding Cameron-Martin space is a Hilbert space H ⊂ W, described in the following theorem.

Theorem 4.1. In a fractional Wiener space (W, μ), the Cameron-Martin space H is given by K_H(L²(I)), and the inner product of two Cameron-Martin elements h_1 and h_2 is given by ⟨h_1, h_2⟩_H = ∫_0^1 K_H^{-1} h_1(t) · K_H^{-1} h_2(t) dt.

Remark 4.2. We recap the relationship between the spaces as below:

    W* --i*--> H* --K_{H,*}--> L²(I) --K_H--> H --i--> W.

One fact is that H and L²(I) are isometric, since K_H is invertible.
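The finite-dimensional density derived above can be verified by Monte Carlo: for X ~ N(0, Σ) and T(X) = X + h, one should have E[G(T(X))] = E[G(X) exp(X^T Σ^{-1} h - (1/2) h^T Σ^{-1} h)] for any integrable test function G. The choices of Σ, h, and G below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
Sinv = np.linalg.inv(Sigma)
h = np.array([0.3, -0.2])

L = np.linalg.cholesky(Sigma)
X = rng.standard_normal((200000, 2)) @ L.T  # X ~ N(0, Sigma)

def G(x):  # an arbitrary test function
    return np.sin(x[:, 0]) + x[:, 1] ** 2

# d[mu o T^{-1}]/d mu evaluated at X, as computed above
density = np.exp(X @ Sinv @ h - 0.5 * h @ Sinv @ h)
lhs = G(X + h).mean()            # E[G(T(X))]
rhs = (G(X) * density).mean()    # E[G(X) * density(X)]
print(lhs, rhs)
```

The two estimates agree up to Monte Carlo error, which is exactly the change-of-variables identity that Kusuoka's theorem extends to the infinite-dimensional setting below.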
4.2 Kusuoka's theorem revisited

In this section we revisit Kusuoka's theorem on absolutely continuous transformations of Gaussian measures; later we will study a measure-dependent type of absolutely continuous transformation on Wiener space.

Preliminaries. Assume X_0, X are separable Hilbert spaces. We denote by L_2(X_0, X) the space of all Hilbert-Schmidt operators from X_0 into X, which, under the Hilbert-Schmidt norm, is itself a Hilbert space.

Definition 4.3. Let F be an H-valued function defined on W.
(a) F is Fréchet differentiable at ω ∈ W in H-directions if there exists D^H F(ω) ∈ L_2(H, H), called the Fréchet derivative, such that for any h ∈ H,

    |F(ω + h) - F(ω) - D^H F(ω)(h)|_H = o(|h|_H)   as |h|_H → 0;

(b) F is an H-C¹ map if for a.e. ω ∈ W the map h ↦ F(ω + h) is a continuously Fréchet differentiable function on H, and its Fréchet derivative D^H F(ω + ·): H → L_2(H, H) is continuous.

For a generic space H, we write I_H for the identity map on H. We now define the Carleman-Fredholm determinant of I_H + B, where B ∈ L_2(H, H), by

    d_c(I_H + B) = det(I_H + B) exp(-trace(B)) = Π_{i=1}^∞ (1 + λ_i) e^{-λ_i},

where the λ_i are the eigenvalues of B.

Remark 4.4. The first factor of the determinant is called the Fredholm determinant; it is the heuristic analogue of a matrix determinant in the finite-dimensional case, but it is well defined only for trace-class operators, not for general Hilbert-Schmidt operators, whereas d_c is well defined for the latter.

The following two theorems are important for our discussion.

Theorem 4.5 (Kusuoka [19]). Let K be an H-C¹ map from W to H. Assume, for a.e. ω ∈ W, that I_W + K: W → W is bijective and that I_H + D^H K(ω): H → H is invertible. Then [(I_W + K)^{-1} ∘ μ](dω) = |d(ω, K)| μ(dω), a.e. ω ∈ W, with

    d(ω, K) = d_c(I_H + D^H K(ω)) exp{ -δ^H K(ω) - (1/2) |K(ω)|²_H }.

That is,

    E_μ[ G ∘ (I_W + K) |d(·, K)| ] = E_μ[G].

Theorem 4.6 ([16]). Let T: W → W be a transformation of the form

    T(ω) = ω + K_H u(ω) = ω + ∫_0^1 K_H(·, r) u_r(ω) dr,

where u is a measurable mapping. Assume further that

(i) T is bijective for a.e. ω ∈ W;
(ii) for a.e. ω ∈ W there exists Du(ω) ∈ L²(I²) such that, for any h ∈ L²(I),
  (a) h ↦ Du(ω + K_H h) is continuous from L²(I) into L²(I²);
  (b) |u_·(ω + K_H h) - u_·(ω) - (D_· u(ω), h)_2|_2 = o(|h|_2) as |h|_2 → 0;
  (c) the mapping I_{L²(I)} + Du(ω): h ↦ h_· + (D_· u(ω), h(·))_2 from L²(I) into itself is invertible.

Then the measures μ and T^{-1} ∘ μ are equivalent, and the inverse transformation A of T has the density

    (d[A^{-1} ∘ μ]/dμ)(ω) = |d_c(I_{L²(I)} + Du(ω))| exp{ -δ u(ω) - (1/2) |u(ω)|²_2 },   a.e. ω ∈ W.

5 Monge-Kantorovich Measure Transportation on Wiener Space

We hope to use the Wasserstein distance to measure transformed Gaussian measures, so an immediate question is whether we have an explicit form of the distance in terms of the transformation operators {T_t}. In order to answer this question, we study the so-called Wasserstein distance on the space of measures. This will be useful when we study the SDE of mean-field type, where the effect of the transformed Gaussian measure has to be dealt with.

We begin with the Monge-Kantorovich problem for Gaussian measures. Let us denote by P(W) the set of all probability measures on the canonical measurable space (W, B(W)), and by G(W) ⊂ P(W) the set of all Gaussian measures. Next, for ρ, ν ∈ G(W), let C(ρ, ν) ⊂ P(W²) be the set of all joint distributions with marginals ρ and ν. The problem is to find the distribution β(x, y) ∈ C(ρ, ν) such that

    ∫_{W×W} d(x, y) β(dx, dy) = inf_{θ ∈ C(ρ,ν)} ∫_{W×W} d(x, y) θ(dx, dy),

where d(·,·) is a distance function on W. In particular, if

    d(x, y) = d_2(x, y) := |x - y|²_H if x - y ∈ H,   and +∞ otherwise,

where H ⊂ W is the reproducing kernel (Cameron-Martin) space, then we denote

    W_2(ρ, ν) = [ inf_{θ ∈ C(ρ,ν)} ∫_{W×W} d_2(x, y) θ(dx, dy) ]^{1/2} < ∞,

and we call it the 2-Wasserstein distance between ρ and ν. By [12, Theorem 6.1], we know that there exists a map T: W → W of the form T = I_W + φ, where φ ∈ H almost surely, such that T ∘ ρ = ν and

    W_2(ρ, ν) = [ ∫_W |T(ω) - ω|²_H dρ(ω) ]^{1/2}.
Using this fact, and noting that if T_1 = I + F_1 and T_2 = I + F_2 are two invertible transformations on W with W_2(T_1 ∘ μ, T_2 ∘ μ) < ∞, where F_1, F_2 are two H-C¹ maps from W to H, we have the following theorem ready for use.

Theorem 5.1.

    W_2(T_1 ∘ μ, T_2 ∘ μ) ≤ [ ∫_W |T_2^{-1}(ω) - T_1^{-1}(ω)|²_H μ(dω) ]^{1/2}.   (5.8)

Proof. The proof of this theorem is straightforward, since

    W_2(T_1 ∘ μ, T_2 ∘ μ) ≤ [ ∫_W |T_2^{-1} ∘ T_1(ω) - ω|²_H (T_1 ∘ μ)(dω) ]^{1/2} = [ ∫_W |T_2^{-1}(ω) - T_1^{-1}(ω)|²_H μ(dω) ]^{1/2}. □

Remark 5.2. A first glance at this lemma may give the impression that

    W_2(T_1 ∘ μ, T_2 ∘ μ) = [ ∫_W |T_2^{-1} ∘ T_1(ω) - ω|²_H (T_1 ∘ μ)(dω) ]^{1/2}.

Then the later proof of existence and uniqueness of the transformations {A_{s,t}} would be more straightforward, using the Wasserstein distance as a bound; but I later realized that this equality holds only when the identity is the unique Gaussian-measure-preserving transformation, i.e., T ∘ μ = μ ⟹ T = I_W. Intuitively, this is correct for finite-dimensional Gaussian transformations of the form T(X_1, X_2, ..., X_n) = (X_1 + h_1, X_2 + h_2, ..., X_n + h_n), where (X_1, X_2, ..., X_n) ~ N(m, Σ). However, further study shows that it is not true in general for an infinite-dimensional Gaussian space [12], [34].

Chapter 3
SDEs Driven by Fractional Brownian Motion and Poisson Processes

1 Fractional Wiener-Poisson space

We first revisit the Poisson part. Consider a Poisson random measure N(·,·) on [0,T] × R, defined on a given probability space (Ω, F, P), with Lévy measure ν satisfying the standard integrability condition

    ∫_{R\{0}} (1 ∧ |x|²) ν(dx) < +∞.

The compensator of N is thus the deterministic measure N̂(dt dx) = dt ν(dx) on [0,T] × R. We are now interested in Poisson point processes of class (QL), namely point processes whose counting measures have deterministic and continuous compensators.
Suppose a pure jump process L has the following form:

    L_t = ∫_0^t ∫_{R\{0}} f(s, z) N(ds dz),   t ≥ 0,

where f ∈ L¹(dt ⊗ dν) is a given deterministic function, so that the counting measure of L, denoted by N^L(dt dx), takes the form

    N^L((0, t] × A) = ∫_0^t ∫_R 1_A(f(s, x)) N(ds dx) = Σ_{0 < s ≤ t} 1_{{ΔL_s ∈ A}},

and its compensator N̂^L is therefore the image of the deterministic measure N̂(dt dx) = dt ν(dx) under the map (t, x) ↦ (t, f(t, x)), hence deterministic and continuous. In the simplest case, when f(s, z) ≡ 1 and N(ds dz) = ds δ_1(dz), L_t is a standard Poisson process.

We can now consider the canonical space for a given standard Poisson point process of class (QL). Let Ω = D([0,T]) be the space of all càdlàg (right-continuous with left limits) functions, endowed with the Skorohod topology, and let F = {F_t}_{t≥0} and F be defined as before. Let P_L be the law of the process L on D([0,T]). Then the coordinate process

    L_t(ω) = ω(t),   (t, ω) ∈ [0,T] × Ω,

is a Poisson point process, defined on (Ω, F, P_L).

Next, we review the fractional Wiener-Poisson space (Ω, F, P), defined as

    Ω := Ω_1 × Ω_2,   F := F_1 ⊗ F_2,   P := P_{B^H} ⊗ P_L,

where (Ω_1, F_1, P_{B^H}) and (Ω_2, F_2, P_L) are respectively the fractional Wiener space and the Poisson space, with Ω_1 = C_0[0,1] and Ω_2 = D[0,1]. In this setting we write an element of Ω as ω = (ω_1, ω_2); then the two coordinate processes defined by

    B^H_t(ω) := ω_1(t),   L_t(ω) := ω_2(t),   (t, ω) ∈ [0,T] × Ω,

are, respectively, a fractional Brownian motion and a Poisson point process with the given laws. By definition, these two processes are independent of each other.

2 SDEs with Lipschitz coefficients

In this section we study a general type of stochastic differential equation driven by an fBm and a Poisson point process with multiplicative noise, in the sense that both coefficients associated with the driving random noises involve the solution itself.
We note that while such a feature is by no means new in the standard SDE theory, it is non-trivial in the fractional Brownian motion literature, especially when there is no restriction on the Hurst parameter $H$. Consider the following SDE: for $t \in [0,T]$,
$$X_t = X_0 + \int_0^t b(s, X_s, \cdot)\,ds + \int_0^t \sigma(s, X_s, \cdot)\,dB^H_s + \int_0^t \gamma(s, X_s, \cdot)\,dL_s, \qquad (2.1)$$
where $B^H(\omega) = B^H(\omega_1)$ is a fractional Brownian motion and $L(\omega) = L(\omega_2)$ is a compound Poisson process with Lévy measure $\nu$. Here the stochastic integral against $B^H$ is in the Skorohod sense, whereas the integral against $L$ is in the pathwise sense: for each $\omega_2 \in \Omega_2$,
$$\Big[\int_0^t \gamma(s, X_s, \cdot)\,dL_s\Big](\cdot, \omega_2) \triangleq \sum_{i=1}^{N_t(\omega_2)} \gamma(\tau_i, X_{\tau_i-}, (\cdot, \omega_2))\,\Delta L_{\tau_i}, \qquad \mu_H\text{-a.e.},$$
where $0 < \tau_1 < \tau_2 < \cdots < \tau_{N_t}$ are the jump times of $L$, and $N$ is the standard Poisson process.

It is worth noting that, unlike the usual ways to treat a "jump diffusion", we will not employ any Itô-type analysis, as $B^H$ is neither a semi-martingale nor a Markov process. Instead, we shall take advantage of the independence of $B^H$ and $L$ and carry out our analysis for fixed $\omega_2 \in \Omega_2$. Throughout this section we shall make use of the following assumptions.

Assumption 2.1. We assume that
(i) $X_0 \in L^1(\Omega)$;
(ii) $\sigma(t,x,\omega) = \sigma(t,\omega)x$, where $\sigma(t, (\cdot,\omega_2)) \in \mathbb{L}^{1,\infty}$ for $P_L$-a.e. $\omega_2 \in \Omega_2$;
(iii) there exists $\lambda \in L^1(I)$ with $\lambda_t \ge 0$, such that
(1) $|b(t,0,\omega)| \le \|\lambda\|_{L^1(I)} \triangleq M$, for all $t \in I$;
(2) $|b(t,x,\omega) - b(t,y,\omega)| \le \lambda_t|x-y|$ for all $x,y \in \mathbb{R}$, $t \in I$;
(3) there exists $\alpha > \|\lambda\|_{L^1(I)}$ such that $E^{P_L}\big[e^{\alpha N_T}\big] < \infty$.

Assumption 2.2. In the multiplicative case, we further assume that
(i) $\gamma$ is measurable, and for some $K > 0$, $|\gamma(t,x,\omega)| \le K(1 + |x|)$;
(ii) the jump random variable $U \triangleq \Delta L_{\tau_1}$ satisfies: there exists $\tilde\alpha > \|\lambda\|_{L^1(I)} + \ln(K + 1 + K\,E|U|)$ such that $E^{P_L}\big[e^{\tilde\alpha N_T}\big] < \infty$.

2.1 The case of additive jump noise

We begin by assuming that $\gamma \equiv 1$, that is, the jump enters the SDE in an "additive" manner. For simplicity we shall assume that $f \equiv 1$, so that $L$ is simply a Poisson point process. The general case can be argued in the same way without substantial difficulties.
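The pathwise integral against $L$ just defined is a finite sum over the jump times up to $t$, and can be evaluated directly. A small sketch; the helper `X_left`, standing in for the left limit $X_{\tau_i-}$, and the other names are hypothetical:

```python
def pathwise_jump_integral(t, jump_times, jump_sizes, gamma, X_left):
    """Evaluate int_0^t gamma(s, X_{s-}) dL_s pathwise, i.e. the finite sum
    over jump times tau_i <= t of gamma(tau_i, X_{tau_i-}) * Delta L_{tau_i}.
    X_left(tau) is a hypothetical callable returning the left limit X_{tau-}."""
    return sum(gamma(tau, X_left(tau)) * dl
               for tau, dl in zip(jump_times, jump_sizes) if tau <= t)

# Toy check: gamma == 1 (the additive case of Section 2.1) just counts jumps.
val = pathwise_jump_integral(1.5, [0.5, 1.0, 2.0], [1.0, 1.0, 1.0],
                             lambda s, x: 1.0, lambda tau: 0.0)
```

With $\gamma \equiv 1$ and unit jump sizes, the sum reduces to the number of jump times not exceeding $t$, here 2.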
We should remark, however, that even under such a simplication the SDE (2.1) is still signicantly dierent from that studied in [1], where the coecient is also a constant (i.e., the fBM noise is also additive). The diculty there was the irregularity of the coecient b. We rst note that the Lipschitz condition Assumption (2.1)-(iii) should enable us to use the technique of [16]. To this end, we x ! 2 2 2 , and let 0 < 1 (! 2 ) < 32 2 (! 2 )<< N T (! 2 ) (! 2 )T be the jump times of L. Let us rst consider the set f 1 > Tg. Name we look at ! 2 2 2 , such that 1 (! 2 ) T . Then the SDE (2.1) takes the form: X t =X 0 + Z t 0 b ! 2 s (X s ;)ds + Z t 0 ! 2 s (X s ;)dB H s +N ! 2 t ; t2I; (2.2) where ! 2 t (x;! 1 ) 4 = t (x; (! 1 ;! 2 )), for =b;. We shall follow the scheme of [16] to solve the SDE, thanks to Assumption (2.1). To begin with, we consider the family of coordinate transformations T t : 7! , t2I, such that T t (! 1 ;! 2 ) 4 = ( ~ T t ! 1 ;! 2 ), where ( ~ T t ! 1 ) s 4 = ! 1 s + [K H (1 [0;t] (;T!))] s (2.3) = ! 1 s + Z t^s 0 K H (s;r)(r; ( ~ T r ! 1 ;! 2 ))dr; P-a.e. ! = (! 1 ;! 2 )2 : Applying Theorem (4.6), along with, e.g., [16, Theorem 5.1], we can conclude that, for t2 I and P L -a.e. ! 2 2 2 , the mapping ~ T t : 1 7! 1 is a bijection, with inverse ~ A t and density ~ L t satisfying: ~ L t (!) = d[ ~ T 1 t ] d (!) = exp n (1 [0;t] ()(; (A ;t ! 1 ;! 2 )) 1 2 Z t 0 j(s; (A s;t ! 1 ;! 2 ))j 2 ds + Z t 0 Z t s (D r (s; (A s;t ! 1 ;! 2 ))D s [(r; (A r;t ! 1 ;! 2 ))]drds o : (2.4) In the above (u) denotes the Skorohod integral of u against B H , and the operator ~ A s;t 4 = ~ T s ~ A t (or ~ A s;t ~ T t = ~ T s ) for 0 s t T . We also denote A t ! = T 1 t ! = 33 ( ~ A t ! 1 ;! 2 ), for ! = (! 1 ;! 2 ). We expect, following the idea of [16], that the solution of (2.2) should have the explicit form: X t (!) = ~ L t (!)Z t (A t !);X 0 (A t !)); t2I; P-a.e. 
!2 : (2.5) where Z t =Z t (!;x) satises the following ODE: for x2R, (t;!)2I , Z t (!;x) =x + Z t 0 ~ L 1 s (T s !)b(s; ~ L s (T s !)Z s (!;x); (T s !))ds: (2.6) In the general case we x ! 2 2 2 , and construct the solution piecewisely on the intervals [ i (! 2 ); i+1 (! 2 )],i = 0; ;N T (! 2 ), with 0 = 0 and N T (! 2 )+1 =T . We con- sider the following system of ODEs: for i = 1; ;N T (! 2 ), and t2 [ i (! 2 ); i+1 (! 2 )], Z i t (!;x) =x + Z t i (! 2 ) ~ L 1 s (T s !)b(s; ~ L s (T s !)Z i s (!;x); (T s !))ds: (2.7) We have the following two theorems. Theorem 2.3. Assume that Assumption (2.1) is in force. Then SDE (2.2) has a solution X that has the explicit form: for t2I, and P-a.e. ! = (! 1 ;! 2 )2 , X t (!) = N T (! 2 ) X i=0 ~ L t (!) ~ Z i t (A t !)1 [ i (! 2 ); i+1 (! 2 )) (t); (2.8) where the processes ~ Z i 's are dened recursively by: 8 > < > : ~ Z 0 t (!) =Z 0 t (!;X 0 (!)); t2 [0; 1 (!)] ~ Z i t (!) =Z i t (!; [ ~ Z i1 i (! 2 ) (!) + ~ L 1 i (! 2 ) (T i !)]); t2 [ i (! 2 ); i+1 (! 2 )]; i 1: (2.9) 34 and Z i t (!;x), i = 0; 1; ;N T (! 2 ) satisfy the system of ODEs (2.7), for x 2 R, t2 [ i (! 2 ); i+1 (! 2 )]. Proof. We shall prove the theorem by verifying that the process X given by (2.8) is indeed a solution to (2.2). To begin with, we note that for xed ! 2 2 2 , the ODEs in (2.7) are essentially the same as those in [16], therefore they are all well-posed, thanks to Assumption (2.1). Thus the process X given by (2.8) is well-dened. Furthermore, since all the processes ~ L and Z i 's are continuous for t2 [ i (! 2 ); i+1 (! 2 )), it is clear that X has c adl ag paths. Furthermore, at each i (! 2 ), by denitions (2.8) and (2.9) we have 8 > > > > > < > > > > > : X i (!) = ~ L i (! 2 ) (!) ~ Z i1 i (! 2 ) (A i !); X i (!) = ~ L i (! 2 ) (!) ~ Z i i (! 2 ) (A i !) = ~ L i (! 2 ) (!)[ ~ Z i1 i (! 2 ) (A i ) + ~ L 1 i (! 2 ) ](!) =X i (!) + 1: (2.10) Thus X i = N i = 1, i = 1; ;N T (! 2 ). 
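The piecewise mechanism just verified — solve an equation between jump times, then add a unit jump at each $\tau_i$ — can be mimicked in a deterministic toy setting with $\sigma \equiv 0$, so that the Girsanov density is identically 1 and the fBM term drops out. A sketch under that simplification; all names are ours:

```python
def piecewise_euler_with_unit_jumps(x0, b, jump_times, T, dt=1e-3):
    """Euler scheme for dX_t = b(t, X_t) dt + dN_t: integrate the drift on
    each inter-jump interval, then add Delta X_{tau_i} = 1 at every jump
    time, mirroring the piecewise recursion (2.7)-(2.9) with sigma = 0."""
    x, t = x0, 0.0
    for tau in sorted(s for s in jump_times if s < T):
        while t < tau:                   # drift step on [t, tau)
            h = min(dt, tau - t)
            x += b(t, x) * h
            t += h
        x += 1.0                         # unit jump, as in (2.10)
    while t < T:                         # drift on the last interval
        h = min(dt, T - t)
        x += b(t, x) * h
        t += h
    return x
```

For instance, with zero drift and jumps at $t = 1, 2$, the terminal value on $[0,3]$ is exactly $x_0 + 2$; with $b(t,x) = x$ and no jumps, the scheme approximates $x_0 e^T$.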
Our main task is to prove that the process X dened by (2.8) satises (i) for any t2I, 1 [0;t] ()(;)X2Dom() ; and (ii) The equation (2.1) holdsP-almost surely. Let us rst assume (i) and prove (ii). To this end, we shall prove that, for any G2S, E n G Z T 0 1 [0;t] (s)(s;)X s dB H s o =E n G h X t X 0 Z t 0 b(s;X s ;)dsN t io : (2.11) But note that E n G Z T 0 1 [0;t] (s) s ()X s dB H s o = Z 2 E H n G ! 2 () Z T 0 1 [0;t] (s) ! 2 s ()X ! 2 s dB H s o P L (d! 2 ); 35 where E H [] = E H [] and ! 2 (! 1 ) 4 = (! 1 ;! 2 ) = (!), for = G;b;X, etc. In the sequel we shall often drop the superscript \ ! 2 " on b, , G, X, etc., for notational simplicity when the context is clear. By Fubini's Theorem it suces to argue that for t2I andP L -a.e. ! 2 2 2 , it holds that E H n G h Z t 0 s ()X s dB H s X t X 0 Z t 0 b s (X s ;)dsN t (! 2 ) io = 0: (2.12) To see this we rst note that, by [16, Lemma 6.1], for xed ! 2 , the mapping t7! G(A t !) is dierentiable, and it holds that d dt G(T t !) = (t;!)D ! 1 t G(T t !), for H -a.e. ! 1 2 1 . Here D ! 1 t = D t means that the Malliavin derivative is taken only with respect to ! 1 2 1 . Then, by denition of the Skorohod integral we have, for xed ! 2 , E H n G Z t 0 (s;)X s dB H s o =E H n Z t 0 (s;)X s [D s G]ds o = E H n Z t 0 (s;) N T (! 2 ) X i=0 ~ L t ~ Z i s (A s )1 [ i (! 2 ); i+1 (! 2 )) (s)[D s G]ds o = E H n N T (! 2 ) X i=0 Z t 0 (s; ~ T s ) ~ Z i s ()1 [ i (! 2 ); i+1 (! 2 )) (s)[D s G(T s )]ds o = E H n N T (! 2 ) X i=0 Z [ i (! 2 )^t; i+1 (! 2 )^t) ~ Z i s () dG(T s ) ds ds o : (2.13) Now assume that t2 [ k (! 2 ); k+1 (! 2 )) for some k > 0, then for any 0 i k 1, integrating by parts as well as applying the Girsanov transformation of measures 36 would yield (suppressing variable ! 2 , and abbreviating ! = (! 1 ;! 
2 ) by \" when necessary): E H n Z [ i ; i+1 ) ~ Z i s dG(T s ) ds ds o = E H n ~ Z i i+1 G(T i+1 ) ~ Z i i G(T i ) Z i+1 i G(T s ) d ~ Z i s ds ds o = E H n ~ Z i i+1 G(T i+1 ) ~ Z i i G(T i ) Z i+1 i G(T s ) dZ i s (; [ ~ Z i1 i + ~ L 1 i (T i )]) ds ds o = E H f ~ Z i i+1 G(T i+1 ) ~ Z i i G(T i ) Z i+1 i G(T s ) ~ L 1 s (T s )b(s; ~ L s (T s )Z i s (; [ ~ Z i1 i + ~ L 1 i (T i )]);T s )ds o = E H f ~ L i+1 ~ Z i i+1 (A i+1 )G() ~ L i ~ Z i i (A i )G() Z i+1 i G()b(s; ~ L s ()Z i s (A s ; [ ~ Z i1 i (A s ) + ~ L 1 i (T i A s )]);)dsg = E H f ~ L i+1 ~ Z i i+1 (A i+1 )G() ~ L i ~ Z i i (A i )G() Z i+1 i G()b(s; ~ L s ~ Z s (A s );)dsg = E H fG() h ~ L i+1 ~ Z i i+1 (A i+1 ) ~ L i ~ Z i i (A i ) Z i+1 i b(s;X s ;)ds io : (2.14) Similarly, we also have E H n Z [ k ;t) ~ Z k s () dG(T s ) ds ds o =E H n G() h ~ L t ~ Z k t (A t ) ~ L k ~ Z k k (A k ) Z t k b(s;X s ;)ds io : Note that by denition (2.8) we have 8 > < > : E H fG() ~ L i ~ Z k i (A i )g =E H fGX i g; i = 0; 1;k; E H fG() ~ L t ~ Z k t (A t )g =E H fGX t g; t2I: (2.15) Furthermore, by dention (2.9), for i = 0; 1; ;k, one has E H fG ~ L i ~ Z i i (A i )g =EfG ~ L i [ ~ Z i1 i (A i ) + ~ L 1 i ]g =EfG[X i + 1]g: (2.16) 37 Combining (2.13)-(2.16) we see that, for t2 ( k (! 2 ); k+1 (! 2 )], (2.13) can be written as E H n G Z T 0 1 [0;t] (s)(s;)X s dB H s o = E H n k1 X i=0 G h X i+1 1X i Z i+1 i b(s;X s ;)ds i +G h X t X k Z t k b(s;X s ;)ds io = E H n G h X t X 0 Z t 0 b(s;X s ;)dsk io = E H n G h X t X 0 Z t 0 b(s;X s ;)dsN t (! 2 ) io ; proving (2.12). It remains to check assertion (i), that is, to show that for any t 2 I, 1 [0;t] ()(;)X 2 Dom(). In other words, by denition of the Skorohod integral, we need to show that, there exists a universal constant C > 0, such that for all G2S, and forP L -a.e. ! 2 2 2 , E H n Z t 0 (s; (;! 2 ))X s (;! 2 )[D s G](;! 2 )ds o CkG(;! 2 )k 1 : (2.17) But in light of (2.12) and (2.13), it suces to show that the mapping ! 1 7!X t (! 1 ;! 2 )X 0 (! 1 ;! 
2 ) Z t 0 b(t;X t ; (! 1 ;! 2 ))dtN t (! 2 ); is integrable for xed ! 2 2 2 . To this end, we note that, for each i =i; ;N t (! 2 ), the solution Z i of (2.7) can be estimated as follows: for t2 [ i (! 2 ); i+1 (! 2 )), jZ i t (!;x)j jxj + Z t i (! 2 ) ~ L 1 s (T s !)(M + s ~ L s (T s !)jZ i s (!;x)j)ds; 38 thanks to Assumption (2.1). Applying the Grownwall inequality, we then deduce that jZ i t (!;x)j jxj +M Z t i (! 2 ) ~ L 1 s (T s !)ds e k k L 1 (I) ; t2 [ i (! 2 ); i+1 (! 2 )): (2.18) In other words, by (2.9), we have j ~ Z i t (!)j ~ Z i1 i (! 2 ) (!) + ~ L 1 i (! 2 ) (T t !) +M Z t i (! 2 ) ~ L 1 s (T s !)ds e k k L 1 (I) ; Consequently, by denition (2.8), for t2 [ i (! 2 ); i+1 (! 2 )) we have jX t (!)j = ~ L t (!)j ~ Z i t (A t !)j (2.19) ~ L t (!) ~ Z i1 i (! 2 ) (A t !) + ~ L 1 i (! 2 ) (!) +M Z t i (! 2 ) ~ L 1 s (T s A t !)ds e k k L 1 (I) : Noting that forP L -a.e. ! 2 2 2 and any 0stT , ~ L t ~ L 1 s 4 = ~ L s;t is the density of the transformation T t A s on 1 =W (see, [16, Remark 5.2]), we conclude that E H [ ~ L t ~ L 1 i (! 2 ) ] =E H [ ~ L i (! 2 );t ] = 1; E H [ ~ L t () ~ L 1 s (T s A t )] =E H [ ~ L 1 s (T s )] = 1; and E H [ ~ L t jZ i1 i (! 2 ) (A t )j] = E H [ ~ L i (! 2 );t ~ L i (! 2 ) jZ i1 i (! 2 ) (A t )] = E H [ ~ L i (! 2 ) jZ i1 i (! 2 ) (A t T t A i (! 2 ) )] =E H jX i (! 2 ) j: 39 It then follows from (2.37) that, for t2 ( i (! 2 ); i+1 (! 2 )], denoting C 4 =e k k L 1 (I) > 1, E H (jX t (;! 2 )j) C n E H [jX i (! 2 ) j] + 1 +ME H [ i+1 (! 2 ) i (! 2 )] o C 2 n E H [jX i1 (! 2 ) j] + 2 +ME H [ i+1 (! 2 ) i1 (! 2 )] o C N T (! 2 ) n E H [jX 0 j] +N T (! 2 ) +MT o : Since X 0 2 L 1 , E P L h e N T k k L 1 (I) i E P L h e N T i < 1 and E P L h N T e N T k k L 1 (I) i < h E P L [e N T ] i k k L 1 (I) h E P L [N k k L 1 (I) T ] i k k L 1 (I) < 1. This implied that X t is inte- grable for all t2 I, which in turn shows that X t X 0 R t 0 b(s;X s )dsN t is also integrable, thanks to Assumption (2.1). 
This proves (ii), and hence the theorem. We will now prove the uniqueness. Lemma 2.4. Assume that Assumption (2.1) is in force. As dene dened in (2.9) that 8 > < > : ~ Z 0 t (!) =Z 0 t (!;X 0 (!)); t2 [0; 1 (!)] ~ Z i t (!) =Z i t (!; [ ~ Z i1 i (! 2 ) (!) + ~ L 1 i (! 2 ) (T i !)]); t2 [ i (! 2 ); i+1 (! 2 )]; i 1: (2.20) and Z i t (!;x), i = 0; 1; ;N T (! 2 ) satisfy the system of ODEs (2.7), for x 2 R, t2 [ i (! 2 ); i+1 (! 2 )], then ~ Z i t (!) is pathwisely unique . Theorem 2.5. Assume that Assumption 2.1 is in force. Then for any two solutions Y2L 1 (W ;L 2 (I)) that satises the equation Y t =X 0 + Z t 0 s Y s dB s + Z t 0 b(s;Y s )ds +N t ; t2I; i = 1; 2 40 Y has the unique representation: for t2I, and P-a.e. ! = (! 1 ;! 2 )2 , Y t (!) = N T (! 2 ) X i=0 ~ L t (!) ~ Z i t (A t !)1 [ i (! 2 ); i+1 (! 2 )) (t); (2.21) Proof . We shall prove that for8Y (!)2L 1 ( ;L 2 (I)) that satises equation 2.2, L 1 t (T t )Y t (T t ) satises the recursive construction of ~ Z in 2.20. Since ~ Z is unique from Lemma 2.4, the uniqueness of Y is therefore achieved. Since Y (!) 2 L 1 ( ;L 2 (I)) and satises 2.2, then for P L -a.e.! 2 , we have Y ! 2 2L 1 ( 1 ;L 2 (I)). 1 [0;t] () Y ! 2 2Dom() for all t2I and xed ! 2 , we denote as usual thatA t !, ( ~ A t ! 1 ;! 2 );D t !, ( ~ D t ! 1 ;! 2 ). For anyt2I and a random variable G which is cylindrical with respect to ! 1 .Multiplying both sides of equation 2.2 by G(A t ) and taking expectations then we have EfG(A t )Y t g = EfG(A t )Y 0 g +Ef Z t 0 b(s;Y s )G(A t )dsg + Ef Z t 0 D s G(A t ) s Y s dsg +EfG(A t )N t :g Since G is arbitrary, this implies E H fG ! 2 ( ~ A t )Y ! 2 t g = E H fG ! 2 ( ~ A t )Y ! 2 0 g +E H f Z t 0 b(s;Y ! 2 s )G ! 2 ( ~ A t )dsg + E H f Z t 0 ~ D s G ! 2 ( ~ A t ) s Y ! 2 s dsg +E H fG ! 2 ( ~ A t )gN t (! 2 ): 41 Thanks to [16, Lemma 6.1], we haveG ! 2 ( ~ A t ! 1 ) =G ! 2 ( ~ A s ! 1 ) R t s r ~ D r G ! 2 ( ~ A r ! 1 )ds, therefore, for8t2 [ i (! 2 ); i+1 (! 2 )); E H fG ! 2 ( ~ A t )Y t g = E H fG ! 
2 Y 0 gEfY 0 Z t 0 r ~ D r [G ! 2 ( ~ A r )]drg +E H f Z t 0 b(s;Y s )G ! 2 ( ~ A s )dsgE H f Z t 0 b(s;Y s ) Z t s s ~ D r [G ! 2 ( ~ A r )]drdsg +E H f Z t 0 ~ D s [G ! 2 ( ~ A s )] s Y s dsgE H f Z t 0 ~ D s [ Z t s r ~ D r [G ! 2 ( ~ A r )] ~ Dr] s Y s dsg +E H fG ! 2 ( ~ A t )gN t (! 2 ) = E H fG ! 2 Y 0 gE H fY 0 Z t 0 r ~ D r [G ! 2 ( ~ A r )]drg +E H f Z t 0 b(s;Y s )G ! 2 ( ~ A s )dsgE H f Z t 0 b(s;Y s ) Z t s s ~ D r [G ! 2 ( ~ A r )]drdsg +E H f Z t 0 ~ D s [G ! 2 ( ~ A s )] s Y s dsgE H f Z t 0 r ~ D r [G ! 2 ( ~ A r )] Z r 0 s Y s dB s drg +E H fG ! 2 (A t )gN t (! 2 ): The last equality is due to Fubini's theorem and the fact E H f Z t 0 Z r 0 ~ D s [ r ~ D r [G[( ~ A r )]] s Y s dsdrg =E H f Z t 0 r ~ D r [G( ~ A r )] Z r 0 s Y s dB s dr:g Furthermore in the above expression, 42 E H fY 0 Z t 0 r ~ D r [G ! 2 ( ~ A r )]drg +E H f Z t 0 b(s;Y s ) Z t s s ~ D r [G ! 2 ( ~ A r )]drdsg + E H f Z t 0 ~ D s [G ! 2 ( ~ A s )] s Y s dsgE H f Z t 0 r ~ D r [G ! 2 ( ~ A r )] Z r 0 s Y s dB s drg = E H f Z t 0 r ~ D r [G ! 2 ( ~ A r )] h Y 0 + Z r 0 b(s;Y s )ds +(1 [0;r] () Y ! 2 )Y s i drg = E H f Z t 0 dG ! 2 ( ~ A r ) dr h Y 0 + Z r 0 b(s;Y s )ds +(1 [0;r] () Y ! 2 )Y s i drg = E H f Z t 0 dG ! 2 ( ~ A r ) dr N r (! 2 )drg: Therefore, for8t2 [ i ; i+1 ), E H fG ! 2 ( ~ A t )Y t gE H fG ! 2 Y 0 gE H f Z t 0 b(s;Y s )G ! 2 ( ~ A s )dsg = E H f Z t 0 dG ! 2 ( ~ A r ) dr N r (! 2 )drg +E H fG ! 2 ( ~ A t )gN t (! 2 ): (2.22) when t = i , we have the same equation E H fG ! 2 ( ~ A i )Y i gE H fG ! 2 Y 0 gE H f Z i 0 b(s;Y s )G ! 2 ( ~ A s )dsg = E H f Z i 0 dG ! 2 ( ~ A r ) dr N r (! 2 )drg +E H fG ! 2 ( ~ A i )gN i (! 2 ): (2.23) Notice that N t (! 2 ) =N i (! 2 ), so substract equation 2.23 from 2.22 we have E H fG ! 2 ( ~ A t )Y t gE H fG ! 2 ( ~ A i )Y i gE H f Z t i b(s;Y s )G ! 2 ( ~ A s )dsg = E H f Z t i dG ! 2 ( ~ A r ) dr N i (! 2 )drg +E H fG ! 2 ( ~ A t )gN i (! 2 )E H fG ! 2 ( ~ A i )gN i (! 2 ) = 0: 43 If we denote ~ Z i t (!) = ~ L 1 ( ~ T t ! 
1 )Y ! 2 t ( ~ T t ! 1 ), then the above equation is equivalently to say, E H fG ! 2 h ~ Z i t ~ Z i i Z t i ~ L 1 s ( ~ T s )b(s; ~ L s ~ Z i s ; ~ T s ds i g = 0: So ~ Z i t ~ Z i i Z t i ~ L 1 s ( ~ T s )b(s; ~ L s ~ Z i s ; ~ T s ds = 0: (2.24) P H -almost surely for8! 2 . Finally, notice that E H fG ! 2 ( ~ A i )Y i gE H fG ! 2 Y 0 gE H f Z i 0 b(s;Y s )G ! 2 ( ~ A s )dsg = E H f Z i 0 dG ! 2 ( ~ A r ) dr N r (! 2 )drg +E H fG ! 2 ( ~ A i )gN i (! 2 ): (2.25) Substract from equation 2.23 we have E H fG ! 2 ( ~ A i )(Y i Y i )g = E H fG ! 2 ( ~ A i )g. ThusE H fG ! 2 ( ~ Z i i ~ Z i i ~ L 1 i ( ~ T i ))g = 0; i.e., ~ Z i i = ~ Z i i + ~ L 1 i ( ~ T i ); (2.26) P H -almost surely for8! 2 . It's not hard to see that any!2 , ~ Z i t satises equation 2.24,2.26 fort2 [ i ; i+1 ), it satises 2.20 as well. Thanks to Lemma 2.4, the uniqueness is thus achieved. 44 2.2 The case of multiplicative (jump) noise In this subsection, we will consider the solution to the general equation with multi- plicative jump noise. X t =X 0 + Z t 0 b(s;X s ;)ds + Z t 0 (s;X s ;)dB H s + Z t 0 (s;X s ;)dL s ; (2.27) where L (! 2 ) is a compound Poisson process with i:i:d jump size at each jump time i is L i =U i . This case is following the idea of the additive jump case, however, since the integrand in the jump term now involves X s which makes the case more complicated, thus we will reconsider the problem as the following. In the solution construction of this problem, the equation of Z i in equation (2.7) still holds true. We will rewrite the form of ~ Z i in order to represent of the solution X t to the above equation as ~ L t ~ Z i (A t ). We have the following theorem. Theorem 2.6. Assume that Assumption (2.1) is in force. Then SDE (2.27) has a solution X that has the explicit form: for t2I, and P-a.e. ! = (! 1 ;! 2 )2 , X t (!) = N T (! 2 ) X i=0 ~ L t (!) ~ Z i t (A t !)1 [ i (! 2 ); i+1 (! 2 )) (t); (2.28) where the processes ~ Z i 's are dened recursively by: 8 > < > : ~ Z 0 t (!) 
=Z 0 t (!;X 0 (!)); t2 [0; 1 (!)] ~ Z i t (!) =Z i t (!;x); t2 [ i (! 2 ); i+1 (! 2 )]; i 1; (2.29) where x = ~ Z i1 i (! 2 ) + [ i ; ~ L i (T t ) ~ Z i1 i ;T i ]U i (T t ) ~ L 1 i (! 2 ) (T i ), and Z i t (!;x), i = 0; 1; ;N T (! 2 ) satisfy the system of ODEs (2.7), for t2 ( i (! 2 ); i+1 (! 2 )]. 45 Proof. Similarly we prove the theorem by verifying that the process X given by (2.28) is indeed a solution to (2.27). Since the idea of the proof follows from the previous case, so some of the notations might referring to the content in additive case. All the processes ~ L and Z i 's are continuous fort2 [ i (! 2 ); i+1 (! 2 )),X has c adl ag paths. Furthermore, at each i (! 2 ), by denitions (2.28) and (2.29) we have X i (!) = ~ L i (!) ~ Z i1 i (A i !); X i (!) = ~ L i (!) ~ Z i i (A i !) (2.30) = ~ L i (!)[ ~ Z i1 i (A i ) + [t; ~ L i (T i ) ~ Z i1 i ;T i ]U i (T i ) ~ L 1 i (T i )](!) = X i (!) + [ i ; ~ L i (T i ) ~ Z i1 i ;T i ]U i (T i ) = ~ L i (!)[ ~ Z i1 i (A i ) + [ i ; ~ L i (T i ) ~ Z i1 i ;T i ]U i (T i ) ~ L 1 i (T i )](!) = X i (!) + ( i ;X i ;T i )U i (T i ): Thus X i = (t;X i ;T i )U i (T i ) = (t;X i ;T i )L i , i = 1; ;N T (! 2 ). Our main task is to prove that the process X dened by (2.28) satises (i) for any t2I, 1 [0;t] ()(;)X2Dom() ; and (ii) The equation (2.27) holdsP-almost surely. Let us rst assume (i) and prove (ii). To this end, we shall prove that, for any G2S, E n G Z T 0 1 [0;t] (s)(s;)X s dB H s o = E n G h X t X 0 Z t 0 b(s;X s ;)ds Z t 0 (s;X s ;)dL s io : (2.31) 46 By Fubini's Theorem it suces to argue that for t2I andP L -a.e. ! 2 2 2 , it holds that E H n G h Z T 0 1 [0;t] (s)(s;)X s dB H s (2.32) X t X 0 Z t 0 b(s;X s ;)ds Z t 0 (s;X s ;)dL s io = 0: To see this, by denition of the Skorohod integral we have, for xed ! 2 , E H n G Z T 0 1 [0;t] (s)(s;)X s dB H s o = E H fG(1 [0;t] ()X)g = E H n Z t 0 (s;)X s [D s G]ds o = E H n Z t 0 (s;) N T (! 2 ) X i=0 ~ L t ~ Z i s (A s )1 [ i (! 2 ); i+1 (! 2 )) (s)[D s G]ds o = E H n N T (! 
2 ) X i=0 Z t 0 (s; ~ T s ) ~ Z i s ()1 [ i (! 2 ); i+1 (! 2 )) (s)[D s G(T s )]ds o = E H n N T (! 2 ) X i=0 Z [ i (! 2 )^t; i+1 (! 2 )^t) ~ Z i s () dG(T s ) ds ds o : (2.33) Now assume that t2 [ k (! 2 ); k+1 (! 2 )) for some k > 0, then for any 0 i k 1, integrating by parts as well as applying the Girsanov transformation of measures 47 would yield (suppressing variable ! 2 , and abbreviating ! = (! 1 ;! 2 ) by \" when necessary): E H n Z [ i ; i+1 ) ~ Z i s dG(T s ) ds ds o = E H n ~ Z i i+1 G(T i+1 ) ~ Z i i G(T i ) Z i+1 i G(T s ) d ~ Z i s ds ds o = E H n ~ Z i i+1 G(T i+1 ) ~ Z i i G(T i ) Z i+1 i G(T s ) dZ i s (; [ ~ Z i1 i + [s; ~ L i (T i ) ~ Z i1 i ;T i ]U i (T i ) ~ L 1 i (! 2 ) (T i )]) ds ds o = E H f ~ Z i i+1 G(T i+1 ) ~ Z i i G(T i ) Z i+1 i G(T s ) ~ L 1 s (T s ) b(s; h ~ L s (T s )Z i s (; [ ~ Z i1 i + [t; ~ L i (T i ) ~ Z i1 i ;T i ]U i (T i ) ~ L 1 i (! 2 ) (T i )]) i ;T s )ds o = E H f ~ L i+1 ~ Z i i+1 (A i+1 )G() ~ L i ~ Z i i (A i )G() Z i+1 i G()b(s; ~ L s ~ Z i s (A s );)dsg = E H fG() h ~ L i+1 ~ Z i i+1 (A i+1 ) ~ L i ~ Z i i (A i ) Z i+1 i b(s;X s ;)ds io : Similarly, we also have E H n Z [ k ;t) ~ Z k s () dG(T s ) ds ds o =E H n G() h ~ L t ~ Z k t (A t ) ~ L k ~ Z k k (A k ) Z t k b(s;X s ;)ds io : Note that by denition (2.28) we have 8 > < > : E H fG() ~ L i ~ Z k i (A i )g =E H fGX i g; i = 0; 1;k; E H fG() ~ L t ~ Z k t (A t )g =E H fGX t g; t2I: (2.34) 48 Furthermore, by dention (2.29), for i = 0; 1; ;k, one has E H fG ~ L i ~ Z i i (A i )g = EfG ~ L i [ ~ Z i1 i (A i ) + [t; ~ L i ) ~ Z i1 i (A i );]U i ~ L 1 i ]g = EfG[X i + [t; ~ L i ~ Z i1 i (A i );]U i ]g = EfG[X i + [t;X i ;]U i ]g: (2.35) Combining (2.33)-(2.35) we see that, for t2 [ k (! 2 ); k+1 (! 2 )), (2.33) can be written as E H n G Z T 0 1 [0;t] (s)(s;)X s dB H s o = E H n k1 X i=0 G h X i+1 (t;X i ;)U i X i Z i+1 i b(s;X s ;)ds i +G h X t X k Z t k b(s;X s ;)ds io = E H n G h X t X 0 Z t 0 b(s;X s ;)ds N T X i=1 (t;X i ;)U i io ; proving (2.32). 
It remains to check assertion (i), that is, to show that for any t 2 I, 1 [0;t] ()(;)X 2 Dom(). In other words, by denition of the Skorohod integral, we need to show that, there exists a universal constant C > 0, such that for all G2S, and forP L -a.e. ! 2 2 2 , E H n Z t 0 (s; (;! 2 ))X s (;! 2 )[D s G](;! 2 )ds o CkG(;! 2 )k 1 : (2.36) 49 But in light of (2.32) and (2.33), it suces to show that the mapping ! 1 7!X t (! 1 ;! 2 )X 0 (! 1 ;! 2 ) Z t 0 b t (X t ; (! 1 ;! 2 ))dt N T X i=1 t (X i ; (! 1 ;! 2 ))U i (! 2 ) (! 2 ); is integrable for xed ! 2 2 2 . To this end, we recall the estimation (2.18) jZ i t (!;x)j jxj +M Z t i (! 2 ) ~ L 1 s (T s !)ds e k k L 1 (I) ; t2 [ i (! 2 ); i+1 (! 2 )): In other words, by (2.29), we have j ~ Z i t (!)j ~ Z i1 i (! 2 ) (!) + [t; ~ L i (T t ) ~ Z i1 i ;T t ]jU i (T t )j ~ L 1 i (! 2 ) (T t ) +M Z t i (! 2 ) ~ L 1 s (T s !)ds e k k L 1 (I) ; Consequently, by denition (2.28), for t2 [ i (! 2 ); i+1 (! 2 )) we have jX t (!)j = ~ L t (!)j ~ Z i t (A t !)j ~ L t (!) ~ Z i1 i (! 2 ) (A t !) + [t; ~ L i ~ Z i1 i (A t );]jU i j ~ L 1 i (! 2 ) (2.37) +M Z t i (! 2 ) ~ L 1 s (T s A t !)ds e k k L 1 (I) : From previous content, we know that a,P L -a.e. ! 2 2 2 and any 0 s t T , ~ L t ~ L 1 s 4 = ~ L s;t is the density of the transformation T t A s on 1 =W (see, [16, Remark 5.2]), we conclude that E H [ ~ L t ~ L 1 i (! 2 ) ] =E H [ ~ L i (! 2 );t ] = 1; E H [ ~ L t () ~ L 1 s (T s A t )] =E H [ ~ L 1 s (T s )] = 1; 50 and E H [ ~ L t jZ i1 i (! 2 ) (A t )j] = E H [ ~ L i (! 2 );t ~ L i (! 2 ) jZ i1 i (! 2 ) (A t )] = E H [ ~ L i (! 2 ) jZ i1 i (! 2 ) (A t T t A i (! 2 ) )] =E H jX i (! 2 ) j: as denoted before, C 4 =e k k L 1 (I) > 1, from assumption (2.2), E H (jX t (;! 2 )j) C(K + 1 +KjU i j) n E H [jX i (! 2 ) j] +ME H [ i+1 (! 2 ) i (! 2 )] o C 2 i1 Y k=i (K + 1 +KjU k j) n E H [jX i1 (! 2 ) j] +ME H [ i+1 (! 2 ) i1 (! 2 )] o C N T (! 2 ) N T (! 
2 ) Y k=1 (K + 1 +KjU k j) n E H [jX 0 j] +MT o : Since X 0 2 L 1 , E P L h e N T k k L 1 (I) N T (! 2 ) Q k=1 (K + 1 +KjU k j) i = E P L n e N T k k L 1 (I) h K + 1 + KE P L (jU 1 j) o N T i E P L h e N T i <1. This implied thatX t is integrable for allt2I, which in turn shows that X t X 0 R t 0 b(s;X s )dsN t is also integrable, thanks to Assumption (2.1). This proves (ii), and hence the theorem. 51 3 SDEs with continuous coecients 3.1 The case without jump In this section, we consider the anticipative equation X t =X 0 + Z t 0 b(s;X s ;!)ds + Z t 0 (s;!)X s dB H s ; t2 [0;T ]: (3.38) Assumption 3.1. We assume that (i) X 0 =x 0 2L 1 ( ); (ii) (t;x;!) =(t;!)x, where (t; (;! 2 ))2L 1;1 for P L -a.e. ! 2 2 2 ; (iii) There exist constants b 0 and L such thatjb(t;x;!)jLjxj +b 0 ; for simplicity, when! is clear for readers, we writeb(t;x) instead ofb(t;x;!), the same for (t). The main result of this section is the existence of a solution of equation (3.38) if coecients b(t;x) and (t) satisfy the assumption (3.1), see theorem (3.6). Since this weaker condition on b(t;x;!) doesn't change the condition of (t;!), so the measure transformation densityL t (!) is the same as before, and we only need to consider about the solution for the ODE (3.39). 
We will use Lepeltier-SanMartin approximation to nd a solution of the linear growth condition ODE (3.39) which provides a solution to equation (3.38) Z t (!;x) =x + Z t 0 L s 1 (T s !)b(s;L s (T s !)Z s (!;x);T s !)ds: (3.39) We now dene b n (t;x) = inf y2Q fb(t;y) +njxyjg 52 According to Lepeltier-San Martin [22, Lemma 1], for nN 0 = maxfL;b 0 g (1) linear growth:8x2R;t2R jb n (t;x)N 0 (1 +jxj); (2) monotonicity in n:8x2R;t2R b n (t;x)%; (3) Lipshitz condition,8x;y;t2R; jb n (t;x)b n (t;y)jnjxyj; (4) strong convergence: if x n !x, then b n (t;x n )!b n (t;x): Notice that b n (t;x) is global Lipchitz with respect to x, thus (3.40) satises Lip- schitz condition with respect to Z n for xed !, so there exists a unique solution to the following equation Z n t (!;x) =x + Z t 0 L s 1 (T s !)b n (s;L s (T s !)Z n s (!;x);T s !)ds: (3.40) Now we'll consider the comparison theorem for the solution sequence. Z n t (!;x)Z n+1 t (!;x) for a:e: !2 : (3.41) In order to prove this, notice that E(L 1 t (T t !)) = 1 thus L 1 t (T t !) 2 [0;1) for a:e:!2 , we can apply lemma (3.2) to achieve this result. Lemma 3.2. If functions Y 1 (t) and Y 2 (t) satisfy following ODE Y i (t) =y 0 + Z t 0 b i (s;Y i (s))ds; for i = 1; 2; t2 [0;T ]; where b 1 (t;x)b 2 (t;x), and b 1 is Lipschitz in x with a coecient k, then Y 1 (t)Y 2 (t);8t2 [0;T ]: 53 Proof. If not, assume there exists 2 (0;T ) s.t. Y 1 () < Y 2 (), denote t 0 = supft :t<; Y 1 (t)Y 2 (t)g. Apparently, according to continuity we have Y 1 (t 0 ) = Y 2 (t 0 ) and Y 1 (t) < Y 2 (t) for t2 (t 0 ;], then 0<Y 2 Y 1 Z t 0 (b 1 (s;Y 2 s )b 1 (s;Y 1 s ))ds Z t 0 kjY 2 s Y 1 s jds Z t 0 k (Y 2 s Y 1 s )ds: where thisk is a Lipchitz constant determined byb k (t;x) in previous context. Notice that Y 1 (t 0 ) =Y 2 (t 0 ), then by Gronwall Inequality, Y 2 t Y 1 t 0;8t2 (t 0 ;); which is a contradiction! Therefore, Y 1 (t)Y 2 (t);8t2 [0;T ]: Lemma 3.3. The sequencefZ n t (!;x);n = 1; 2;:::;g is uniformly bounded in n for almost all !. Proof. 
Since Z n satises (3.40), thus jZ n t (!;x)j jxj + Z t 0 L s 1 (T s !)N 0 1 +L s (T s !)Z n s (!;x) ds jxj + Z t 0 L s 1 (T s !)N 0 +N 0 Z n s (!;x) ds: 54 Thus by Grownwall, we have fora:e:!2 ,jZ n t (!;x)je N 0 t (jxj+N 0 R t 0 L s 1 (T s !)dt). Since E(jZ n t (!;x 0 )j)e N 0 t E(x 0 ) +N 0 t <1: Thus for a:e: !2 ,jZ n t (!;x 0 )j is uniformly bounded in n. Since Z n is increasing and uniformly bounded, so Z t 4 = lim n!1 Z n t exists, now we'll prove Z t is indeed a solution of (3.39). Lemma 3.4. Z t (!;x) satises Z t (!;x) =x + Z t 0 L s 1 (T s !)b(s;L s (T s !)Z s (!;x);T s !)ds: (3.42) Proof. For8t;!, by DCT, we have lim[x + Z t 0 L s 1 (T s !)b(s;L s (T s !)Z n s (!;x);T s !)ds] = [x + Z t 0 L s 1 (T s !)b(s;L s (T s !) limZ n s (!;x);T s !)dt]: Therefore, Z t (!;x) is a solution satisfying (3.42). Lemma 3.5. Z t (!;x) is the minimal solution of equation (3.42). Proof. Let's assume ~ Z (!;x) is any solution of (3.42), i.e. ~ Z t (!;x) =x + Z t 0 L s 1 (T s !)b(s;L s (T s !) ~ Z s (!;x);T s !)ds: And notice Z n t (!;x) =x + Z t 0 L s 1 (T s !)b n (s;L s (T s !)Z n s (!;x);T s !)ds; 55 if there exists t 0 and ;s:t: ~ Z t 0 =Z n t 0 and ~ Z t <Z n t for 8t2 (t 0 ;],then Z n t ~ Z t = Z t t 0 [b n (s;Z n s )b n (s; ~ Z s ) +b n (s; ~ Z s )b(s; ~ Z s )]ds Z t t 0 b n (s;Z n s )b n (s; ~ Z s )ds nN 0 Z t t 0 (Z n s ~ Z s )ds; by Grownwall inequality, we have Z n t ~ Z t , for all t2 (t 0 ;], contradiction! Thus, ~ Z t Z n t ;8n, which implies ~ Z t Z t for8t2 [0;T ]. Theorem 3.6. X t (!) =L t (!)Z t (A t ;X 0 (A t )) is a minimal solution to equation (3.39). Proof. Since Z t (A t ;X 0 (A t )) satises (3.39), thanks to [16], X t (!) = L t (!)Z t (A t ;X 0 (A t )) satises equation (3.38). On the other hand, for any Y t satisfying (3.38), L 1 t (T t )Y t (T t ) satises (3.39), thusL 1 t (T t !)Y t (T t !)Z t (!;x 0 ). Therefore, X t (!) =L t (!)Z t (A t !;X 0 (A t !)) is the minimal solution. 
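The Lepeltier-San Martin approximation used above, $b_n(t,x) = \inf_{y\in\mathbb{Q}}\{b(t,y) + n|x-y|\}$, is an inf-convolution and is easy to compute on a finite grid, where properties (1)-(3) can be checked numerically. A sketch with a grid standing in for the infimum over the rationals; the names are ours:

```python
import numpy as np

def lepeltier_san_martin(b, xs, n):
    """b_n(x) = inf_y { b(y) + n|x - y| } computed over the finite grid xs.
    For n at least the Lipschitz constant of b, b_n coincides with b; in
    general b_n is n-Lipschitz and increases monotonically to b."""
    B = b(xs)  # values b(y_j) on the grid
    return np.min(B[None, :] + n * np.abs(xs[:, None] - xs[None, :]), axis=1)

xs = np.linspace(-2.0, 2.0, 401)
b = np.abs                          # a 1-Lipschitz drift with linear growth
b1 = lepeltier_san_martin(b, xs, 1)
b2 = lepeltier_san_martin(b, xs, 2)
```

Since $|x|$ is 1-Lipschitz, $b_1$ already agrees with $b$, and the sequence is monotone in $n$, matching properties (2)-(3) of the lemma.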
56 3.2 The case with jumps In terms of multiplicative jump case, in addition to assumption (3.1), we further assume that Assumption 3.7. (i) is measurable, increasing with respect to x, and for some K > 0,j (t;x;!)jK(1 +jxj). (ii) The jump random variable U , L 1 satises,9 ~ >k k L 1 (I) + ln(K + 1 + KEjUj) such that E P L h e ~ N T i <1. Theorem 3.8. If X 0 ;b and satisfy condition (3.1), (t;x;!) satises (3.7) then there exists a solution X t (!) to the equation X t = X 0 + t Z 0 b(s;X s ;!)ds + t Z 0 (s;!)X s dB H s + t Z 0 (s;X s ;!)dN s ; t2 [0;T ]; ! = (! 1 ;! 2 ): (3.43) Proof. The same as the case without jump noise as we discussed in the previous section, we consider the Lepeltier-SanMartin [22] approximation of b using b n . (Z i t ) For8t2 [ i ; i+1 ] Z n;i t (!;x) =x + Z t i (! 2 ) ~ L 1 s (T s !)b n (s; ~ L s (T s !)Z n;i s (!;x); (T s !))ds: ( ~ Z i t ) 8 > < > : ~ Z 0 t (!) =Z 0 t (!;X 0 (!)); t2 [0; 1 (!)] ~ Z i t (!) = lim n!1 Z n;i t (!;x); t2 [ i (! 2 ); i+1 (! 2 )]; i 1; 57 where x = ~ Z i1 i (! 2 ) + [ i ; ~ L i (T i ) ~ Z i1 i ;T i ] ~ L 1 i (! 2 ) (T i ). (X t ) X t (!) = N T (! 2 ) X i=0 ~ L t (!) ~ Z i t (A t !)1 [ i (! 2 ); i+1 (! 2 )) (t)2L 1 ( ;L(I)); (3.44) Remark 3.9. The increasing property in x for can be replaced by monotone in x. When decreasing, we just need to consider b n (t;x) = sup y2Q fb(t;y)njxyjg to construct a decreasing solution sequence. 58 Chapter 4 McKean-Vlasov type SDEs driven by fBm and related topics Before the main content in this chapter, I'll brie y introduce the background behind this work. Since my advisor Professor Ma and I was trying to extend the anticipative transformation method [16, 6] from solving a linear fBm SDE to a nonlinear fBm SDE, one possible approach is to apply changes to the the anticipative Girsanov transformation [16] (T t !) =! 
+ Z t^ 0 K H (;r) r (T r !)dr: In 2015 February during attending a mathematical nance program in IPAM, Professor Ma and I were discussing in a small room, as I still remember, which was reserved for him by IPAM. In that talk, we found out that by introducing Wasserstein measure, we can give a new type of anticipative Girsanov transformation, by which a class of McKean-Vlasov equation can be solved. It turned out that this type of equation is closely related to the popular current topic Mean Field Game theory in the IPAM series' talks. 59 1 A generalized anticipative Girsanov transforma- tion It is proved in [16] that there exists a unique absolute continuous transformation ow fT t ;t2Ig satisfying (T t !) =! + Z 1 0 K H (;r) r (T r !)dr; and the density of the transformation dened byL t (!) = dT 1 t d (!) is L t (!) = expf(1 [0;t] () (A ;t !)) 1 2 Z t 0 j s (A s;t !)j 2 ds Z t 0 Z t s (D r s )(A s;t !)D s [ r (A r;t !)]drdsg; a:e: wherefA s;t g is the inverse transformation from T t to T s , i.e. A s;t ! =T 1 s T t ! that satises (A s;t !) =! Z t^ s^ K H (;r) r (A r;t !)dr: In this section, we will try to establish the well-posedness of the following system of absolute continuous transformation ows: 8 > > > < > > > : (T t !) =! + Z t^ 0 K H (;r) r (A 0;r ;T r !)dr; (A s;t !) =! Z t^ s^ K H (;r) r (A 0;r ;A r;t !)dr: 60 As well as the transformation density L t (!) = expf(1 [0;t] () (A 0; ;A ;t !)) 1 2 Z t 0 j s (A 0;s ;A s;t !)j 2 ds Z t 0 Z t s (D r s )(A 0;s ;A s;t !)D s [ r (A 0;r ;A r;t !)]drdsg; a:e: Before the main theorems, we need to introduce the conditions that should satisfy in this section. Assumption 1.1. Conditions that (t;;!) should satisfy: (i)9M > 0;s:t j(t;;!)j<M; (ii) Lipschitz condition:j(t; 1 ;! 1 )(s; 2 ;! 2 )j<C(jtsj +W 2 ( 1 ; 2 ) +k! 1 ! 2 k W ); (iii) sup k(t;;!)k 1;1 L 1 (I) <1. Theorem 1.2. Assume satises assumption (1.1), then there exists the -a.s. unique transformationfA s;t g that satises 8 > > > < > > > : (T t !) =! 
+ Z t^ 0 K H (;r) r (A 0;r ;T r !)dr; (A s;t !) =! Z t^ s^ K H (;r) r (A 0;r ;A r;t !)dr: (1.1) Proof. We will rst prove there existsfT t g and its inversefA 0;t g satises (T t !) =! + Z t^ 0 K H (;r) r (A 0;r ;T r !)dr;8t2I: (1.2) In order to prove unique existence of this tranformation ow, we consider the partition I =[ i I i where I i = [t i1 ;t i ], and t i = i n . 61 Under assumption (1.2), (;;)2 L 1;1 for all . Thanks to [16], for8t2 I 1 , there exists a iteration as below, T n+1 t ! =! + Z t^ 0 K H (;r)(r;A n 0;r ;T n+1 r !)dr; (1.3) where A n 0;r = (T n r ) 1 . Notice that, E n jT n+1 t !T n t !j 2 W o E n sup u2I 1 h Z t^u 0 K H (u;s)[(s;A n s ;T n+1 s !)(s;A n1 s ;T n s !)]ds i 2 o E n t 2H 1 Z t 1 0 [(s;A n s ;T n+1 s !)(s;A n1 s ;T n s !)] 2 ds o 2C 2 t 2H 1 t n sup s2I 1 W 2 2 (A n s ;A n1 s ) + sup s2I 1 E[jT n+1 s !T n s !j 2 W ] o 2C 2 t n sup s2I 1 W 2 2 (A n s ;A n1 s ) + sup s2I 1 E[jT n+1 s !T n s !j 2 W ] o : We now denote C 1 , 2C 2 t 12C 2 t , we have sup s2I 1 E[jT n+1 s !T n s !j 2 W C 1 sup s2I 1 W 2 2 (A n s ;A n1 s ): 62 On the other hand, thanks to preliminary theorem (5.1), we have sup t2I 1 W 2 2 (A n+1 t ;A n t ) sup t2I 1 [ Z j(A n+1 t ) 1 (A n t ) 1 j H d(!)] 2 = sup t2I 1 E n jT n+1 t T n t j 2 H o = sup t2I 1 E n Z t 0 ((s;A n s ;T n+1 s !)(s;A n1 s ;T n s !)) 2 ds o C 2 E n Z t 1 0 h W 2 (A n s ;A n1 s ) +jT n+1 s !T n s !j W i 2 ds o 2C 2 t n sup s2I 1 W 2 2 (A n s ;A n1 s ) + sup s2I 1 E[jT n+1 s !T n s !j 2 W ] o 2C 2 (1 +C 1 )t n sup s2I 1 W 2 2 (A n s ;A n1 s ) o <1: Since 2C 2 (1+C 1 )t< 1 for small enough t, this inequality implies the existence offA 0;t g and its inversefT t g that satises equation (1.3) for8t2 I 1 . Applying the same argument the uniqueness is trivial to prove. Now we can denote this unique transformation on I 1 byf ^ T t g;f ^ A 0;t g. When t2I 2 , consider the iteration, (T n+1 t !) s = 8 > > > > < > > > > : ! s + R s 0 K H (s;r) r ( ^ A 0;r ; ^ T r !)dr; s2I 1 ! 
s + R t 1 0 K H (s;r) r ( ^ A 0;r ; ^ T r !)dr + R t^s t 1 K H (s;r) r (A n 0;r ;T n+1 r !)dr;s2I 2 : Likewise, we have the estimation, sup s2I 2 E[jT n+1 s !T n s !j 2 W C 1 sup s2I 2 W 2 2 (A n s ;A n1 s ); 63 and sup t2I 2 W 2 2 (A n+1 t ;A n t ) 2C 2 (1 +C 1 )t n sup s2I 2 W 2 2 (A n s ;A n1 s ) o : Therefore, there exists uniquefT t g and its inversefA 0;t g satisfying equation (1.2) for 8t2 [0;t 2 ]. Generally when t2 I k , assumingf ^ T t g andf ^ A 0;t g uniquely exists on [0;t k1 ], we consider (T n+1 t !) s = 8 > > > > < > > > > : ! s + R s 0 K H (s;r) r ( ^ A 0;r ; ^ T r !)dr; s2 [t 0 ;t k1 ]; ! s + R t 1 0 K H (s;r) r ( ^ A 0;r ; ^ T r !)dr + R t^s t 1 K H (s;r) r (A n 0;r ;T n+1 r !)dr;s2I k : Together with estimation sup t2I k W 2 2 (A n+1 t ;A n t ) 2C 2 (1 +C 1 )t n sup s2I k W 2 2 (A n s ;A n1 s ) o ; we can conclude the unique existence offT t g andfA 0;t g on [0;t k ]. Through this argument, we can further conclude the existence on the whole interval I. Finally, the dynamic (A s;t !) =! Z t^ s^ K H (;r) r (A 0;r ;A r;t !)dr: follows directly from equation (1.2). 64 Remark 2. During above proof, in order to apply theorem (5.1), we need a justica- tion to prove that W 2 (A n 0;t ;A n1 0;t )<1. Actually,since W 2 (A n 0;t ;A n1 0;t )W 2 (A n 0;t ;) +W 2 (;A n1 0;t ); thus we just need to prove W 2 (A n 0;t ;)<1. Recall that Talagrand's inequality W 2 2 (;f)Entropy(f) = R W f logfd. W 2 2 (A n 0;t ;) j Z dA n 0;t d log( dA n 0;t d )dj<1: which is according to [16, Lemma 5.3.] 65 Now we are ready to compute the density L andL of the transformation given a satisfying assumption (1.1). This follows three steps. The rst step is to con- sider thef n (t;;!)2Sg satisfyingj n (t; 1 ;!) n (t; 2 ;!)jCW 2 ( 1 ; 2 ) such that sup k n (t;;!) (t;;!)k 1;1 L 1 (I) n!1 ! 0 and its corresponding density L n t = dT n t d ;L n s;t = dA n s;t d , for simplicity, we can let C be big enough such that sup n sup k n (t;;!)k 2 1;1 L 1 (I) C <1. 
The second step is to prove the uniformly integrability ofL n t ;L n s;t . The third step is to prove the convergence of n ;A n 0; (T n ) in L 2 (W ;L 2 (I)), thus convergence ofL n andL n . In order to achieve the main result of this section, we provide the following proposition as a preparation from [16] Lemma 1.3. Assume n (t;;!)2L S , then the transformation T n ;A n satisfying (A n s;t !) =! Z t^ s^ K H (;r) n r (A n 0;r ;A n r;t !)dr; (T n t !) =! + Z t^ 0 K H (;r) n r (A n 0;r ;T n r !)dr: are absolutely continuous with L n u;t (!) = dA n u;t d (!) = expf(1 [u;t] () n (A n 0; ;A n ;t !)) 1 2 Z t u j n s (A n 0;s ;A n s;t !)j 2 ds Z t u Z t s (D r n s;A n 0;s )(A n s;t !)D s [ n r;A n 0;r (A n r;t !)]drdsg; a:e: L n t (!) = dT n t d (!) = expf(1 [0;t] () n (A n 0; ;T n !)) 1 2 Z t 0 j n s (A n 0;s ;T n s !)j 2 ds Z t 0 Z t 0 (D r n s;A n 0;s )(T n s !)D s [ n r;A n 0;r (T n r !)]drdsg: 66 Proof. We will compute dT n t d (!) only. For simplicity, let's denote the cylindrical g t (he 1 ;!i;:::;he n ;!i), n (t;A n 0;t ;!), when there is no ambiguity we writeg t (!) instead. ForfT t ;t2Ig that satises T t ! = (I +K)(!); where K(!) =K H 1 [0;t] ()g (he 1 ;T !i;:::;he n ;T !i) , we then have Dhe i ;T t !i =e i + Z t 0 e i (r)Dg r (he 1 ;T r !i;:::;he n ;T r !i)dr Notice that Dg r (he 1 ;T r !i;:::;he n ;T r !i) = X i @ @x i g r (he 1 ;T r !i;:::;he n ;T r !i)Dhe i ;T r !i: Thus, (Dhe i ;T t !i;e j ) 2 = ij + Z t 0 X k e i (r) @ @x k g r (T r !) (Dhe k ;T r !i;e j ) 2 dr (1.4) Denote n by n matrix P t and U t by P ij t = (Dhe i ;T t !i;e j ) 2 , U ij t = e i (r) @ @x j g r (he 1 ;T r !i;:::;he n ;T r !i), then (1.4) is equivalently P t = I + R t 0 U r P r dr which implies P t = exp( R t 0 U r dr). 
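The determinant identity invoked next, $\det(P_t)=\exp\big(\int_0^t \operatorname{tr}(U_r)\,dr\big)$ for $P_t = I + \int_0^t U_r P_r\,dr$, is Liouville's (Jacobi's) formula, and it can be sanity-checked numerically; the matrix $U_t$ below is an arbitrary, non-commuting illustrative choice, not one from the text:

```python
import numpy as np

# The text derives P_t = I + \int_0^t U_r P_r dr and then uses
# det(P_t) = exp(\int_0^t tr(U_r) dr)  (Liouville's / Jacobi's formula).
# U(t) here is an arbitrary, non-commuting illustrative choice.
def U(t):
    return np.array([[np.sin(t), 1.0],
                     [t, -0.5]])

n_steps, T = 20000, 1.0
dt = T / n_steps
P = np.eye(2)
trace_int = 0.0
for k in range(n_steps):
    t = k * dt
    P = P + dt * (U(t) @ P)      # explicit Euler step for P' = U P
    trace_int += np.trace(U(t)) * dt

det_P = np.linalg.det(P)
liouville = np.exp(trace_int)
print(det_P, liouville)          # agree up to discretization error
```

Note that the identity holds even though the matrices $U_r$ at different times do not commute; only the trace enters the determinant.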
As a consequence, det(P t ) = exp Z t 0 tr (U r )dr = exp Z t 0 X i e i (r)) @ @x i g r (he 1 ;T r !i;:::;he n ;T r !i)dr = exp Z t 0 (D r g r )(T r !)dr : 67 So det(I +K) = det(P t ) = exp R t 0 (D r g r )(T r !)dr : On the other hand, tr (K) = Z t 0 X i X k e i (r) @ @x k g r (T r !) (Dhe k ;T r !i;e i ) 2 dr = Z t 0 D r g r (T r !)dr : Therefore, by theorem (4.6), dT t d (!) = det(I +K) exp(tr (K)) expf 1 [0;t] ()g (T !) 1 2 Z t 0 jg s (T s !)j 2 dsg = expf 1 [0;t] ()g (T !) 1 2 Z t 0 jg s (T s !)j 2 ds + Z t 0 (D r g r )(T r !)dr Z t 0 D r g r (T r !)dr g = expf 1 [0;t] ()g (T !) 1 2 Z t 0 jg s (T s !)j 2 ds Z t 0 Z t 0 (D r g s (T s !)D s [g r (T r !)]drdsg The last equality is by the chain rule of Malliavin derivative, for example [16, Proposition 3.2]. Therefore, L n t (!) = dT n t d (!) = expf(1 [0;t] () n (A n 0; ;T n !)) 1 2 Z t 0 j n s (A n 0;s ;T n s !)j 2 ds Z t 0 Z t 0 (D r n s;A n 0;s )(T n s !)D s [ n r;A n 0;r (T n r !)]drdsg: 68 Similar argument will lead to L n u;t (!) = dA n u;t d (!) = expf(1 [u;t] () n (A n 0; ;A n ;t !)) 1 2 Z t u j n s (A n 0;s ;A n s;t !)j 2 ds Z t u Z t s (D r n s;A n 0;s )(A n s;t !)D s [ n r;A n 0;r (A n r;t !)]drdsg; a:e: as well. Lemma 1.4. The sequence of processf n (A n 0; ;T n )g is bounded in L 1;1 . Proof. We rst note that, by a direct calculation, D s [ n t;A n 0;t (T n t !)] = (D s n t;A n 0;t )(T n t !) 
+ Z t 0 (D r n t;A n 0;t )(T n t !)D s [ n r;A n 0;r (T n r !)]dr: By Cauchy-Schwarz inequality, we have kD[ n t;A n 0;t (T n t !)]k 2 2 2k(D n t;A n 0;t )(T n t !)k 2 2 +2k(D n t;A n 0;t )(T n t !)k 2 2 Z t 0 kD[ n r;A n 0;r (T n r !)]k 2 2 dr: Therefore, kkD[ n t;A n 0;t (T n t !)]k 2 2 k 1 2kk(D n t;A n 0;t )(T n t !)k 2 2 k 1 (1 + Z t 0 kkD[ n r;A n 0;r (T n r !)]k 2 2 k 1 dr): Applying Gronwall's inequality we obtain that kkD[ n t;A n 0;t (T n t !)]k 2 2 k 1 e 2 R t 0 kk(D n s;A n 0;s )(T n s !)k 2 2 k1ds 1: (1.5) 69 Since R t 0 kk(D n s;A n 0;s )(T n s !)k 2 2 k 1 ds R t 0 sup kk(D n s;A n 0;s )(T n s !)k 2 2 k 1 ds = C <1, (1.5) implies that kkD[ n t;A n 0;t (T n t !)]k 2 2 k 1 e 2C 1: This, together with the fact that R 1 0 sup k n t;A n 0;t (T n t !)k 2 1 dt C <1, leads that R 1 0 k n t;A n 0;t (T n t !)k 2 1;1 dt<e 2C is bounded. Lemma 1.5. The density owfL n t = dA n 0;t d g is uniformly integrable. Proof. Similar to the proof in [16], since (L n t ) 1 =L n t (A n t ), EfL n t j lnL n t jg = EfL n t j lnL n t (A n t )jg =Efj lnL n t jg = Efj(1 [0;t] () n (A n 0; ;T n !)) 1 2 Z t 0 j n s (A n 0;s ;T n s !)j 2 ds Z t 0 Z t 0 (D r n s;A n 0;s )(T n s !)D s [ n r;A n 0;r (T n r !)]drdsjg Efj(1 [0;t] () n (A n 0; ;T n !))jg +Ef 1 2 Z t 0 j n s (A n 0;s ;T n s !)j 2 dsg + Efj Z t 0 Z t 0 (D r n s;A n 0;s )(T n s !)D s [ n r;A n 0;r (T n r !)]drdsjg , I 1 +I 2 +I 3 : I 1 k(1 [0;t] () n (A n 0; ;T n !)k 2 Ef Z 1 0 [ n s (A n 0;s ;T n s !)] 2 dsg 1 2 f Z 1 0 k n s (A n 0;s ;T n s !)k 2 1 dsg 1 2 C 1 2 <1: 70 I 2 Ef Z 1 0 [ n s (A n 0;s ;T n s !)] 2 dsg 1 2 C 1 2 <1: I 3 1 2 Ef Z t 0 Z t 0 [(D r n s;A n 0;s )(T n s !)] 2 drdsg + 1 2 Ef Z t 0 Z t 0 [D s [ n r;A n 0;r (T n r !)] 2 drdsg 1 2 Z t 0 k Z t 0 [(D r n s;A n 0;s )(T n s !)] 2 drk 1 ds + 1 2 Z t 0 k Z t 0 [D s [ n r;A n 0;r (T n r !)] 2 dsk 1 dr 1 2 C + 1 2 e 2C <1: Therefore,fL n t g is uniformly integrable. Lemma 1.6. The sequencef n (A n 0; ;T n )g is convergent in L 2 (W ;L 2 ([0; 1])). Proof. 
Ef Z t 0 j n s;A n 0;s (T n s ) m s;A m 0;s (T m s )j 2 dsg 3Ef Z t 0 j n s;A n 0;s (T n s ) n s;A n 0;s (T m s )j 2 dsg + 3Ef Z t 0 j n s;A n 0;s (T m s ) n s;A m 0;s (T m s )j 2 dsg + 3Ef Z t 0 j n s;A m 0;s (T m s ) m s;A m 0;s (T m s )j 2 dsg = 3(I 1 +I 2 +I 3 ): I 1 = Ef Z t 0 j n s;A n 0;s (T n s ) n s;A n 0;s (T m s )j 2 dsg Ef Z t 0 kjD n s;A n 0;s j 2 k 1 ( Z s 0 j n r;A n 0;r (T n r !) m r;A m 0;r (T m r !)j 2 dr)dsg = Z t 0 kjD n s;A n 0;s j 2 k 1 Ef( Z s 0 j n r;A n 0;r (T n r !) m r;A m 0;r (T m r !)j 2 dr)gds: 71 I 2 = Ef Z t 0 j n s;A n 0;s (T m s ) n s;A m 0;s (T m s )j 2 dsg CEf Z t 0 W 2 2 (A n 0;s ;A m 0;s )dsg CEf Z t 0 jT n+1 s T n s j 2 H dsg = C Z t 0 Ef Z s 0 ( n r;A n 0;r (T n r !) m r;A m 0;r (T m r !)) 2 drgds: I 3 = Ef Z t 0 j n s;A m 0;s (T m s ) m s;A m 0;s (T m s )j 2 dsg = Ef Z t 0 j n s;A m 0;s m s;A m 0;s j 2 L n s dsg Z t 0 k n s;A m 0;s m s;A m 0;s k 2 1 E(L n s )ds C Z t 0 sup k n s; m s; k 2 1 ds: For simplicity, let's denoteg n;m t =C R t 0 sup k n s; m s; k 2 1 ds. CombiningI 1 ;I 2 ;I 3 and Grownwall inequality it implies, Ef Z t 0 j n s;A n 0;s (T n s ) m s;A m 0;s (T m s )j 2 dsg 3fg n;m t + Z t 0 g n;m s kjD n s;A n 0;s j 2 k 1 exp( Z t s kjD n r;A n 0;r j 2 k 1 dr)dsg: SincekjD n r;A n 0;r j 2 k 1 is uniformly bounded and g n;m t n;m min(n;m)!1 ! 0, thus n s;A n 0;s (T n s ) is a Cauchy sequence in L 2 (W ;L 2 (I)) thus convergent. Remark 1.7. The proof of convergence of L n ;L n is then standard. 72 Theorem 1.8. L t (!) = dT 1 t d (!) = expf(1 [0;t] () (A 0; ;A ;t !)) 1 2 Z t 0 j s (A 0;s ;A s;t !)j 2 ds Z t 0 Z t s (D r s )(A 0;s ;A s;t !)D s [ r (A 0;r ;A r;t !)]drdsg; a:e: Proof. Omitted. Remark 1.9. It turns out that this measure involved Gaussian measure will result in the solution of the measure involved SDE problem. But the intuition started from my original attempt to get rid of the volatility coecient's linear restriction in [16]'s equation dX t =b(t;X t ;!)dt +(t;!)X t dB H t ; i.e. 
$$dX_t = b(t,X_t,\omega)\,dt + \sigma(t,X_t,\omega)X_t\,dB^H_t. \qquad (1.6)$$
Notice that the form of the solution $X_t$ involves the transformation density $L_t$, which is just the Radon-Nikodym derivative of the transformed measure with respect to $\mu$; but, as a prior assumption, the transformation equation in this case involves $\sigma(t,X_t,\omega)$ and thus involves the transformed measure itself. In this self-contained situation, the unique existence of this measure-involved transformation equation is of great significance. Though some problems remain in solving equation (1.6), the method turns out to be successful in solving some McKean-Vlasov type SDEs instead.

2 Existence and uniqueness of McKean-Vlasov fBm SDE

In this section, we prove the existence and uniqueness for two types of McKean-Vlasov equations.

2.1 Nonlinear drift case

The first solvable case has a nonlinear drift term while the volatility is free of the distribution:
$$X_t = X_0 + \int_0^t b[s,X_s,\omega,\mathbb{E}_P(X_s)]\,ds + \int_0^t \sigma_s(\omega)X_s\,dB^H_s.$$

Lemma 2.1. Suppose $\int |Z(0,\omega)|\,dP(\omega) < \infty$, and $b(t,x,\omega,y): [0,T]\times\mathbb{R}\times\Omega\times\mathbb{R} \mapsto \mathbb{R}$ satisfies:
(1) there exists $0 < M < \infty$ such that $|b(t,0,\omega,0)| < M$;
(2) there exists $0 < L < \infty$ such that $|b(t,x_1,\omega,y) - b(t,x_2,\omega,y)| < L|x_1 - x_2|$;
(3) there exists a random process $F_t > 0$ with $\mathbb{E}(F_t) = 1$ such that $|b(t,x,\omega,y_1) - b(t,x,\omega,y_2)| < L F_t(\omega)|y_1 - y_2|$.
Then there is a $P$-a.s. unique solution such that
$$\frac{d}{dt}Z(t,\omega) = b\big(t,Z(t,\omega),\omega,\mathbb{E}_P(Z(t,\cdot))\big), \qquad Z(0,\omega) = X_0(\omega).$$

Proof. By iteration, there is a unique sequence of Lipschitz processes $\{Z_n\}$ such that
$$\frac{d}{dt}Z_{n+1}(t,\omega) = b\big(t,Z_n(t,\omega),\omega,\mathbb{E}_P(Z_n(t,\cdot))\big),$$
i.e.
$$Z_{n+1}(t,\omega) = X_0(\omega) + \int_0^t b\big(s,Z_n(s,\omega),\omega,\mathbb{E}_P(Z_n(s,\cdot))\big)\,ds.$$
Denote $\Delta Z_n = Z_{n+1} - Z_n$; then
$$|\Delta Z_n(t,\omega)| \le L\int_0^t |\Delta Z_{n-1}(s,\omega)|\,ds + L\int_0^t F_s(\omega)\,\mathbb{E}_P\big(|\Delta Z_{n-1}(s,\cdot)|\big)\,ds.$$
Taking the expectation, we have
$$\mathbb{E}_P\big(|\Delta Z_n(t,\cdot)|\big) \le 2L\int_0^t \mathbb{E}_P\big(|\Delta Z_{n-1}(s,\cdot)|\big)\,ds.$$
When $t$ is small, this contraction estimate yields $P$-a.s. existence; the uniqueness is proved similarly.

Theorem 2.2. Suppose $|b(t,x,\omega,y) - b(t,x',\omega,y')| < L(|x-x'| + |y-y'|)$ and $\sigma(t,\omega) \in L^{1,1}$.
Then there exists a solution for the equation X t =X 0 + Z t 0 b[s;X s ;!;E P (X s )]ds + Z t 0 s (!)X s dB H s : Proof. According to lemma (2.1), there is aa:s unique solution for the equation d dt Z t (!;X 0 (!)) =L 1 t (T t !)b[t;L t (T t !)Z t (!;X 0 (!));T t !;E P (Z t (!;X 0 (!)))]; 75 then the proof of this theorem is straightfoward by checking X t =L t Z t (A t ;X 0 (A t )) is indeed a solution noticing the fact that E P (X t ) =E P (L t Z t (A t ;X 0 (A t ))) =E P (Z t ): More specically, for8G2S EfG Z 1 0 1 [0;] (t) t X t dB t g =Ef Z 0 t X t D t Gdtg = Ef Z 0 t (!)L t (!)Z t (A t !;X 0 (A t !))D t G(!)dtg = Ef Z 0 t (T t !)Z t (!;X 0 (!))D t G(T t !)dtg = Ef Z 0 Z t (!;X t (!)) d dt G(T t !)dtg = EfZ (!;X 0 (!))G(T !)Z 0 (!;X 0 (!))G(!) Z 0 ( d dt Z t (!;X 0 (!)))G(T t !)dtg = EfG(!)(L Z (A !;X 0 (A !))Z 0 ) Z 0 (L 1 t (T t !)b[t;L t (T t !)Z t (!;X 0 (!));T t !;E P (Z t (!;X 0 (!)))]G(T t !)dtgg = EfG(!)(L Z (A !;X 0 (A !))Z 0 ) Z 0 b[t;Z t (A t !;X 0 (A t !));!;E P (X t ))]G(!)dtgg: Thus the theorem is proved. Theorem 2.3. The solution in theorem (2.2) is unique in L 1 (W ;L 2 (I)). 76 Proof: The proof of this this theorem is trivial by noticing that for any X t 2 L 1 (W ;L 2 (I)) satisfying (2.2), for almost all !,L 1 t (T t )X t (T t ) satises the ODE L 1 t (T t )X t (T t ) =X 0 + Z t 0 L 1 s (T s )b[s;L s (T s )L 1 s X s (T s );!;E(L 1 s (T s )X s (T s ))]ds; a:e:; which has a unique solution. Therefore, the solution X is also unique in the given space. Remark 3. If ' is a random variable that satises E(j'j) < 1, then following equation also has a unique solution following the same argument. 
X t =X 0 + Z t 0 b[s;X s ;!;E P ('X s )]ds + Z t 0 s X s dB H s : 77 2.2 Nonlinear volatility case The second solvable case is with nonlinear coecients with respect to distribution, while the drift is restricted to linearity to X itself, as presented below, dX t =X t h b t [E(X t )]dt + 1 t g t [E( 2 t X t )]dB H t i ; X 0 =x 0 ; whereb t (x);g t (x) are two deterministic functions and 1 t (!); 2 t (!) are two stochastic processes. More specically, we assume these coecients satises following assump- tions: Assumption 2.4. (1) 1 ()2L 1;1 and bounded; (2) There exists C > 0 such thatj 2 (t;!)j < C <1 andj 2 t (! 1 ) 2 t (! 2 )j j! 1 ! 2 j W ; (3)jb t (x)j<M and satises jb t (x)b s (y)jL(jtsj +jxyj); (4)jg t (x)g t (y)jLjxyj; (5) x 0 is a real number. We will rst introduce the case when g t (x) =x. In order to prove the main theorem, we have the following lemmas. 78 Lemma 2.5. Z 0 t (!) =Z t (!)b t (E P (Z t ));Z 0 =x 0 has a a:s: unique solution. Proof Denote Y t =E(Z t ), then Y t satises Y 0 t =Y t b t (Y t );Y 0 =x 0 : (2.7) If x 0 = 0, then it's trivial to prove that Z t = 0 is the unique solution. If x 0 6= 0, we have Y 0 t Yt =b t (Y t ), integrate both sides, j ln(Y t ) ln(Y 0 )j =j Z t 0 b s (Y s )dsjMt: So Y t e j ln(x 0 )j+Mt C for all t2I, where C =e j ln(x 0 )j+M <1 is a constant. We denote f(t;x) =xb t (x), then for8t;s2I;x;y2 [C;C], we have jf(t;x)f(s;y)j jxb t (x)xb t (y)j +jxb t (y)yb t (y)j +jyb t (y)yb s (y)j jxjLjxyj +jb t (y)jjxyj +Ljyjjtsj (CL +M)(jxyj +jtsj): Therefore, Y 0 t =f(t;Y t ) has unique solution. Therefore Z 0 t (!) =Z t (!)b t (E (Z t )) =Z t (!)b t (Y t ); Z 0 =x 0 , also has the unique deterministic solution Z t (!) =Y t . 79 Next, we denote ~ :IPW7!R is dened as ~ (t;;!) = 1 t (!)Y t R 2 t (!)d(!). Lemma 2.6. There exists aa:s: unique absolutely continuous transformation ow fT t ;t2Ig with it's inverse from t to sfA s;t ; 0s<t 1g that satisfy 8 > < > : (T t !) = ! + R t^ 0 K H (;r)~ (r;A 0;r ;T r !)dr; (A s;t !) = ! 
R t^ s^ K H (;r)~ (r;A 0;r ;A r;t !))dr: Proof. By the denition of ~ , j~ (t; 1 ;!) ~ (t; 2 ;!)j = j 1 t (!)Y t Z 2 t (!)d( 1 (!) 2 (!))j supj 1 t Y t jW 1 ( 1 ; 2 ) supj 1 t Y t jW 2 ( 1 ; 2 ): where the rst inequality is due to equivalence of Kantorovich metric and Wasserstein-1 metric. Lemma 2.7. 1. Suppose that F =fF t ;t2 Ig2 L S and the mapping t7! F t () is dierentiable. ThenfF t (T t );t2 Ig is dierentiable with respect to t and it holds that d dt [F t (T t )] = ( d dt F t )(T t ) + ~ (t;A 0;t ;T t !)(D t F t )(T t ); a:e 2. For any G2S, the mapping t7!G(T t ) is dierentiable and it holds that d dt G(T t ) = ~ (t;A 0;t ;T t !)D t [G(T t )]; a:e Proof. 80 Assume that F2L S takes the form F t (!) =g t (h ft 1 g ;!i;:::;h ftng ;!i); 0<t 1 <<t n 1; where g is a bounded measurable function on [0; 1]R n with g t 2 C 1 b (R n ) for any t2 [0; 1], and fg is the Dirac function. Then F t (T t !) =g t (h ft 1 g ;T t !i;:::;h ftng ;T t !i); 0<t 1 <<t n 1: Notice that d dt h ft i g ;T t !i = d dt [! t i + Z t^t i 0 K H (t i ;r)~ (r;A 0;r ;T r !)dr] = K H (t i ;t)~ (t;A 0;t ;T t !) = K H ( ft i g )(t)~ (t;A 0;t ;T t !): Then by chain rule, d dt [F t (T t !)] = ( d dt F t )(T t !) + n X i=1 @ i g t (h ft 1 g ;T t !i;:::;h ftng ;T t !i) d dt h ft i g ;T t !i = ( d dt F t )(T t !) + n X i=1 @ i g t (h ft 1 g ;T t !i;:::;h ftng ;T t !i)K H ( ft i g )(t)~ (t;A 0;t ;T r !) = ( d dt F t )(T t !) + (D t F t )(T t !)~ (t;A 0;t ;T t !): Theorem 2.8. Given b; 1 ; 2 satisfy assumption (2.4),andjx 0 j < 1 is a num- ber,then there exists a solution satises the equation X t =x 0 + Z t 0 b s (E(X s ))X s ds + Z t 0 1 (t;!)X s E[ 2 (t;!)X s ]dB H t ; t2I: (2.8) 81 Proof. Let Y t be the solution that satises equation (2.7), then X t (!) =L t (!)Y t is a solution of (2.8). 
For8G2S E(G Z t 0 1 s (!)X s E[ 2 s (!)X s ]dB H s ) =E(G Z t 0 1 s (!)L s (!)Y s E( 2 s (!)L s (!)Y s )dB H s ) = E(G Z t 0 1 s (!)L s (!)Y s E A 0;s ( 2 s (!)Y s )dB H s ) = E(G Z t 0 L s (!)Y s ~ (s;A 0;s ;!)dB H s ) = E( Z t 0 Y s D s G(!)L s (!)~ (s;A s ;!)ds) = E( Z t 0 Y s D s G(T s !)~ (s;A 0;s ;T s !)ds) = E( Z t 0 Y s dG(T s ) ds ds) = E[G(T t )Y t G(T 0 )x 0 Z t 0 G(T s )Y 0 s ds] = E[GL t Y t Gx 0 Z t 0 G(T s )Y s b s (Y s )ds] = E[GX t Gx 0 G Z t 0 X s b s (E(X s ))ds]: Thus X t (!) =L t (!)Y t is indeed a solution of (2.8). 82 The uniqueness of solution of equation (2.8) is proved in the following theorem. Theorem 2.9. SupposeY is a deterministic function that solves ODE (2.7), and the solution to equation (2.8) in form of X t (!) = dAt d (!)Y t is a:s: unique, where fA t g is any absolute continuous transformation. Proof. The idea of the proof is following, we rst prove that the corresponding fA t g and its inversefT t g to any solution X t must satisfy the equation (T t !) =! + Z t 0 K H (;r)~ r (A r ;T r !)dr; where ~ t (;!) = 1 t (!)E ( 2 t Y t ). SincefA t g;fT t g are a:s: unique according to theorem (1.2), thus the solution X is also a:s: unique. We need to point out that in order to avoid a trivial case, we assume thatX 0 6= 0, in such a situation, it's not hard to see the solution to ODE (2.7) is never zero. Now suppose X t (!) = dAt d (!)Y t is a solution of equation (2.8) where Y solves ODE (2.7), then Y t 6= 0;8t. 
For8G2S, EfG Z t 0 1 s X s E( 2 s X s )dB H s )g = Ef Z t 0 D s G 1 s L s Y s E( 2 s L s Y s )dsg = Ef Z t 0 D s G(T s ) 1 s (T s )Y s (T s )E As ( 2 s Y s )dsg = Ef Z t 0 D s G(T s )Y s ~ s (A s ;T s )dsg = Z t 0 Y s EfD s G(T s )~ s (A s ;T s )gds: 83 On the other hand, EfG Z t 0 1 s X s E( 2 s X s )dB H s )g = EfG(X t X 0 Z t 0 b(s;E(X s ))X s ds)g = EfGL t Y t GY 0 Z t 0 Gb(s;E(L s Y s ))L s Y s ds)g = EfG(T t )Y t (T t )GY 0 Z t 0 G(T s )b(s;Y s )Y s ds)g = EfG(T t )Y t (T t )GY 0 Z t 0 G(T s )Y 0 s ds)g = Ef Z t 0 dG(T s ) ds Y s dsg = Z t 0 Y s Ef dG(T s ) ds gds: Since Y s 6= 0 for any s, so EfD s G(T s )~ s (A s ;T s )g =Ef dG(Ts) ds g. Consider G(!) = g(h t 1 ;!i;:::;h tn ;!i), we have Ef X i g x i (h t 1 ;T s !i;:::;h tn ;T s !i) d ds h t i ;T s !ig = Ef X i g x i (h t 1 ;T s !i;:::;h tn ;T s !>i)K H (t i ;s)~ (s;A s ;T s )g: Due to the assumption thatfAg;fTg are absolute continuous, d ds h t i ;T s !i =K H (t i ;s)~ (s;A s ;T s ); i.e. (T s !) =! + Z s^ 0 K H (;r)~ (r;A r ;T r )dr: 84 As proved in theorem (1.2), the transformation ow satisfying this equation is unique. Theorem 2.10. Assumption (2.4) holds true. Then equation dX t =X t h b t E(X t ) dt + 1 t g t E( 2 t X t ) dB H t i ; X 0 =x 0 ; (2.9) has a unique solution. Proof. As before, we construct the solution of this equation through the following method: Denote ~ (t;;!) = 1 t (!)g t Y t E ( 2 t ) where Y is the bounded deterministic solution to ODE (2.7). Therefore there exists a unique transformation owfT t g and fA s;t g: 8 > < > : (T t !) =! + R t 0 K H (;r)~ r (A 0;r ;T r !)dr (A s;t !) =! R t^ s^ K H (;r)~ r (A 0;r ;A r;t !)dr: (2.10) One can check X t =L t Y t is the unique solution to the equation (2.9). One the other hand, for any solution X t in form of Y t dAt d , follow the same argument in previous case, one can check that T t satises equation (2.10) which exists uniquely. 
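The deterministic mean dynamics in ODE (2.7), $Y'_t = Y_t\,b_t(Y_t)$ with $Y_0 = x_0$, are easy to integrate numerically, and the a priori bound $|\ln Y_t - \ln x_0| \le Mt$ from the proof of Lemma 2.5 can be checked along the way; the choice $b_t(x) = \sin x$ (bounded by $M = 1$ and Lipschitz) is purely illustrative:

```python
import numpy as np

# ODE (2.7): Y'_t = Y_t * b_t(Y_t), Y_0 = x0, with b bounded by M and
# Lipschitz. b_t(x) = sin(x) (so M = 1) is a purely illustrative choice.
def b(t, x):
    return np.sin(x)

x0, M, T, n = 2.0, 1.0, 1.0, 20000
dt = T / n
Y, ys = x0, [x0]
for k in range(n):
    Y = Y + dt * Y * b(k * dt, Y)    # explicit Euler step
    ys.append(Y)
ys = np.array(ys)
ts = np.linspace(0.0, T, n + 1)

# A priori bound from Lemma 2.5: |ln Y_t - ln x0| <= M t,
# hence Y stays in [x0 e^{-Mt}, x0 e^{Mt}] and never hits zero.
log_dev = np.abs(np.log(ys) - np.log(x0))
print(log_dev.max())
```

The bound is exactly the one used in the lemma to confine $Y$ to a compact interval on which $f(t,x) = x\,b_t(x)$ is Lipschitz.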
3 Examples

In this section, we present solutions to some special cases and compare them with other known results. One thing worth mentioning is that our solutions allow any $H \in (0,1)$.

Case I. Consider the equation
$$dX_t = \sigma X_t\,dB^H_t, \qquad X_0 = x_0.$$
In this case, $L_t(\omega) = \exp\{\delta(\sigma 1_{[0,t]}(\cdot)) - \tfrac{1}{2}\sigma^2 t\}$ and $Z_t(\omega) = x_0$. Therefore, the solution to this equation in our setting is
$$X_t = x_0\exp\{\delta(\sigma 1_{[0,t]}(\cdot)) - \tfrac{1}{2}\sigma^2 t\}. \qquad (3.11)$$
As a comparison with the result given by [29], the Doss-Sussmann transformation provides an alternative representation for the case $H > \tfrac{1}{2}$:
$$X_t = x_0\exp(\sigma B^H_t).$$
In particular, when $H = \tfrac{1}{2}$, the Itô-sense solution to this equation is $x_0\exp(\sigma B_t - \tfrac{1}{2}\sigma^2 t)$.

Remark 4. Among the above three representations, the Doss-Sussmann representation makes sense heuristically, as an extension of the Stratonovich-integral result in the Brownian motion case; however, technical limitations make the computation for $H < \tfrac{1}{2}$ quite cumbersome and of less interest. In our representation (3.11), $X_t$ resembles neither the Doss-Sussmann form nor the Itô form; this is determined by the intrinsic properties of the definition of the integral $\delta$.

Case II. Consider
$$dX_t = X_t\big(r\,dt + \sigma\mathbb{E}(X_t)\,dB^H_t\big), \qquad X_0 = x_0.$$
In this case, $L_t(\omega) = \exp\big\{\sigma x_0\,\delta(1_{[0,t]}(\cdot)e^{r\cdot}) - \tfrac{1}{2}\sigma^2 x_0^2\,\tfrac{e^{2rt}-1}{2r}\big\}$ and $Z_t(\omega) = x_0 e^{rt}$. Therefore, the solution to this equation in our setting is
$$X_t = x_0\exp\Big\{\sigma x_0\,\delta(1_{[0,t]}(\cdot)e^{r\cdot}) - \tfrac{1}{2}\sigma^2 x_0^2\,\frac{e^{2rt}-1}{2r} + rt\Big\},$$
which coincides, when $H = \tfrac{1}{2}$, with the Itô-sense solution
$$X_t = x_0\exp\Big\{\sigma x_0\int_0^t e^{rs}\,dB_s - \tfrac{1}{2}\sigma^2 x_0^2\,\frac{e^{2rt}-1}{2r} + rt\Big\}.$$

4 Uniqueness of filtering SDE driven by fractional Brownian motion

In this section we study a problem arising in the Kyle-Back strategic insider trading equilibrium model, in which the noise traders' collective actions have long-term memory. It is shown that in this case the problem can be reduced to a filtering problem in which the observation equation is driven by an fBM (see [2]), but the uniqueness seems to be an open problem in the literature.
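For numerical experiments with fBm-driven equations such as those above, sample paths of $W^H$ on a time grid can be generated by Cholesky factorization of the fBm covariance $R_H(s,t) = \tfrac12\big(s^{2H} + t^{2H} - |t-s|^{2H}\big)$; the Hurst index, grid size, and path count below are illustrative choices:

```python
import numpy as np

# Sampling fractional Brownian motion W^H on a time grid by Cholesky
# factorization of its covariance R_H(s,t) = (s^{2H}+t^{2H}-|t-s|^{2H})/2.
# Hurst index, grid size and path count are illustrative choices.
rng = np.random.default_rng(0)
H, n, T, n_paths = 0.7, 50, 1.0, 20000
t = np.linspace(T / n, T, n)                 # skip t=0, where W^H_0 = 0
s, u = np.meshgrid(t, t)
cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
L = np.linalg.cholesky(cov)                  # requires cov positive definite
paths = rng.standard_normal((n_paths, n)) @ L.T   # rows: one path each

var_T = paths[:, -1].var()                   # Var(W^H_T) = T^{2H} = 1 here
print(var_T)
```

Cholesky sampling is exact in distribution on the grid; for long grids, faster circulant-embedding methods are usually preferred, but the direct factorization keeps the covariance structure explicit.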
To be more precise, we consider the well-posedness of the following filtering problem on a probability space $(\Omega,\mathcal{F},P)$: let $\theta$ be a given random variable that is not observable, and let $Y$ be an observation process satisfying the SDE
$$dY_t = \big(\theta - \mathbb{E}(\theta\,|\,\mathcal{F}^Y_t)\big)\,dt + dW^H_t, \qquad Y_0 = 0,\; t\in[0,T], \qquad (4.12)$$
where $W^H_t$ is a fractional Brownian motion with Hurst parameter $H \in (\tfrac12,1)$. Since the observation process contains the conditional law of the random variable $\theta$, this is a special case of the "conditional mean-field SDEs" (CMFSDEs) studied in, e.g., [7] and [23]. We have the following result.

Theorem 4.1. The CMFSDE (4.12) possesses a unique solution.

Proof. We treat the problem in two parts: the existence of the solution is discussed in subsection 4.1, and the uniqueness is proved in subsection 4.2.

4.1 Existence

Proof of existence for (4.12). Thanks to [2, Lemma 2.1], if we denote
$$\xi_t = \theta t + W^H_t, \qquad (4.13)$$
and define the process $Y_t$ by
$$dY_t = \big(\theta - \mathbb{E}(\theta\,|\,\mathcal{F}^\xi_t)\big)\,dt + dW^H_t, \qquad Y_0 = 0,\; t\in[0,T],$$
then $\mathcal{F}^Y = \mathcal{F}^\xi$, so $\mathbb{E}(\theta\,|\,\mathcal{F}^\xi_t) = \mathbb{E}(\theta\,|\,\mathcal{F}^Y_t)$ and the process $Y_t$ so defined is a solution to (4.12).

4.2 Uniqueness

In order to prove the uniqueness of (4.12), we need the following lemma.

Lemma 4.2. For any process $\{Y_t \mid t\in[0,T]\}$ satisfying (4.12), if we denote $\xi_t = \theta t + W^H_t$, then $\mathcal{F}^Y = \mathcal{F}^\xi$.

Proof. We first recall the kernel
$$k_H(t,s) = \kappa_H (t-s)^{\frac12-H} s^{\frac12-H}. \qquad (4.14)$$
According to [21], $\overline{W}_t = \int_0^t k_H(t,s)\,dW^H_s$ is the generalized martingale associated with $W^H_t$, and $\langle \overline{W}\rangle_t = t^{2-2H}$. In fact, one can check that
$$\int_0^t k_H(t,s)\,ds = c_H t^{2-2H}, \qquad \text{where } c_H = \kappa_H\int_0^1 (1-x)^{\frac12-H} x^{\frac12-H}\,dx < \infty.$$
Now we prove Lemma 4.2. $\mathcal{F}^Y_t \supseteq \mathcal{F}^\xi_t$ is obvious; we just need to prove $\mathcal{F}^Y_t \subseteq \mathcal{F}^\xi_t$ for all $t$. For convenience, we denote $\rho_t = \mathbb{E}(\theta\,|\,\mathcal{F}^Y_t)$.
Let's also denote the process g t , d R t 0 k H (t;s) s ds dhW i t : We remark that the processg is well dened as long as process s is continuous on I, since d R t 0 k H (t;s) s ds is absolutely continuous with respect to dt (therefore does with respect todhW i t as well) proved by lemma (4.3). Therefore,g t is well dened. Furthermore, since t 2F Y t , therefore we have g t 2F Y t : We denote the and Y associated semi-martingale processes by t = Z t 0 k H (t;s)ds + Z t 0 k H (t;s)dW H s ; Y t = Z t 0 k H (t;s)(E(jF Y s ))ds + Z t 0 k H (t;s)dW H s : It is known from [21] thatF Y =F Y ,F =F . 90 Now we can denote another process v t such that dv t =v t (c H g t )dY ;v 0 = 1 and a new probability measure dQ = 1 vt dP , then t =E(jF Y t ) = E Q (vtjF Y t ) E Q (vtjF Y t ) . In order to prove that t 2F t , we explicitly represent process v t and separate F Y t adapted term which is v t = exp( Z t 0 (c H g s )dY s 1 2 (c H g s ) 2 dhW i s ) = exp( Z t 0 c H dY s Z t 0 g s dY s Z t 0 1 2 c 2 H 2 dhW is Z t 0 1 2 g 2 dhW is + Z t 0 c H g s dhW i s ) = exp(c H Y t + Z t 0 c H g s dhW i s Z t 0 1 2 c 2 H 2 dhW i s Z t 0 g s dY s Z t 0 1 2 g 2 dhW i s ) = exp(c H t +c H Z t 0 k H (t;s) s dsc H Z t 0 g s dhW i s Z t 0 1 2 c 2 H 2 dhW i s Z t 0 g s dY s Z t 0 1 2 g 2 dhW i s ) = exp(c H t Z t 0 1 2 c 2 H 2 dhW i s Z t 0 g s dY s Z t 0 1 2 g 2 s dhW i s ): Notice that R t 0 g s dY s + R t 0 1 2 g 2 s dhW i s 2F Y t , together with lemma (4.4) that?F Y t underQ, we can now conclude that bothE Q (v t jF Y t ) andE Q (v t jF Y t ) are adapted to F t . Lemma 4.3. Assume t is a continuous process, then Z t+ 0 k H (t + ;s) s ds Z t 0 k H (t;s) s ds !0 ! 0: 91 where k H (t;s) is dened as (4.14). Proof. Actually, j Z t+ t k s (t + ;s) s dsj supj s j Z 0 k H (;s)ds =c H 22H supj s j !0 ! 0: Z t 0 (k H (t + ;s)k H (t;s)) s ds !0 ! 0: Lemma 4.4. ?F Y t under Q. Proof. Since and Y t are both Gaussian under Q, we just need to check if E Q (Y t ) =E Q ()E Q (Y t ), i.e. 
E( Yt vt ) =E( 1 vt )E( Yt vt ). Notice that d 1 v t = 1 v 2 t dv t + 1 v 3 t dhvi t = 1 v t (c H g t )dY + 1 v t (c H g t ) 2 dhY i t = 1 v t (c H g t )dW t where W t is a P martingale, so 1 vt is a P martingale, so do Yt vt and Yt vt similarly. Thus E( Yt vt ) =E(Y 0 ) =E()E(Y 0 ) =E( 1 vt )E( Yt vt ). This completes the proof. [Uniqueness of (4.12)] Proof. By Lemma (4.2) dY t = (E(jF Y t ))dt +dW H t = (E(jF t ))dt +dW H t : Thus Y t is uniquely determined. 92 Bibliography [1] Bai, L., & Ma, J. (2015). Stochastic dierential equations driven by fractional Brownian motion and Poisson point process. Bernoulli, 21(1), 303-334. [2] Biagini, F., Hu, Y., Meyer-Brandis, T., & Oksendal, B. (2012). Insider trading equilibrium in a market with memory. Mathematics and Financial Economics, 6(3), 229. [3] Biagini, F., Hu, Y., Oksendal, B., & Zhang, T. (2008). Stochastic calculus for fractional Brownian motion and applications. Springer Science & Business Media. [4] Brenner, M., Pasquariello, P., & Subrahmanyam, M. (2009). On the volatility and comovement of US nancial markets around macroeconomic news announcements. Journal of Financial and Quantitative Analysis, 44(06), 1265-1289. [5] Brody, D. C., Syroka, J., & Zervos, M. (2002). Dynamical pricing of weather derivatives. Quantitative Finance, 2(3), 189-198. [6] Buckdahn, R. (1994). Anticipative Girsanov transformations and Skorohod stochastic dierential equations (Vol. 533). American Mathematical Soc.. [7] Buckdahn, R., Li, J., & Ma, J. A Mean-eld Stochastic Control Problem with Partial Observations. Preprint. [8] Carmona, R., Fouque, J. P., & Sun, L. H. (2013). Mean eld games and systemic risk. Available at SSRN 2307814. [9] Comte, F., & Renault, E. (1998). Long memory in continuous-time stochastic volatility models. Mathematical Finance, 8(4), 291-323. [10] Decreusefond, L. (1999). Stochastic analysis of the fractional Brownian motion. Potential analysis, 10(2), 177-214. [11] Feyel, D. 
and Üstünel, A. S. (2003). Monge-Kantorovitch Measure Transportation and Monge-Ampère Equation on Wiener Space.
[12] Feyel, D., & Üstünel, A. S. (2004). Monge-Kantorovitch measure transportation and Monge-Ampère equation on Wiener space. Probability Theory and Related Fields, 128(3), 347-385.
[13] Hu, Y., & Nualart, D. (2009). Rough path analysis via fractional calculus. Transactions of the American Mathematical Society, 361(5), 2689-2718.
[14] Huang, M., Malhamé, R. P., & Caines, P. E. (2006). Large population stochastic dynamic games: closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle. Communications in Information & Systems, 6(3), 221-252.
[15] Ikeda, N., & Watanabe, S. (2014). Stochastic differential equations and diffusion processes (Vol. 24). Elsevier.
[16] Jien, Y. J., & Ma, J. (2009). Stochastic differential equations driven by fractional Brownian motions. Bernoulli, 15(3), 846-870.
[17] Kac, M. (1954). Foundations of kinetic theory. In Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability (Vol. 1955, No. 3, pp. 171-197).
[18] Körezlioğlu, H., & Üstünel, A. S. (2012). Stochastic analysis and related topics (Vol. 31). Springer Science & Business Media.
[19] Kusuoka, S. (1982). The nonlinear transformation of Gaussian measure on Banach space and absolute continuity (I).
[20] Lasry, J. M., & Lions, P. L. (2007). Mean field games. Japanese Journal of Mathematics, 2(1), 229-260.
[21] Le Breton, A., & Roubaud, M. C. (2000). General approach to filtering with fractional Brownian noises: application to linear systems. Stochastics: An International Journal of Probability and Stochastic Processes, 71(1-2), 119-140.
[22] Lepeltier, J. P., & San Martin, J. (1997). Backward stochastic differential equations with continuous coefficient. Statistics & Probability Letters, 32(4), 425-430.
[23] Ma, J., Sun, R., & Zhou, Y. (2015). Kyle-Back Equilibrium Models and Linear Conditional Mean-field SDEs.
[24] Maslowski, B., & Nualart, D. (2003). Evolution equations driven by a fractional Brownian motion. Journal of Functional Analysis, 202(1), 277-305.
[25] McKean, H. P. (1966). A class of Markov processes associated with nonlinear parabolic equations. Proceedings of the National Academy of Sciences, 56(6), 1907-1911.
[26] Melino, A., & Turnbull, S. M. (1990). Pricing foreign currency options with stochastic volatility. Journal of Econometrics, 45(1), 239-265.
[27] Merton, R. C. (1976). Option pricing when underlying stock returns are discontinuous. Journal of Financial Economics, 3(1-2), 125-144.
[28] Mishura, Y. (2008). Stochastic calculus for fractional Brownian motion and related processes (Vol. 1929). Springer Science & Business Media.
[29] Nourdin, I., & Simon, T. (2006). On the absolute continuity of one-dimensional SDEs driven by a fractional Brownian motion. Statistics & Probability Letters, 76(9), 907-912.
[30] Øksendal, B. K., & Sulem, A. (2005). Applied stochastic control of jump diffusions (Vol. 498). Berlin: Springer.
[31] Rascanu, A. (2002). Differential equations driven by fractional Brownian motion. Collectanea Mathematica, 53(1), 55-81.
[32] Situ, R. (2006). Theory of stochastic differential equations with jumps and applications: mathematical and analytical techniques with applications to engineering. Springer Science & Business Media.
[33] Tindel, S., Tudor, C. A., & Viens, F. (2003). Stochastic evolution equations with fractional Brownian motion. Probability Theory and Related Fields, 127(2), 186-204.
[34] Üstünel, A. S., & Zakai, M. (2000). Some measure-preserving point transformations on the Wiener space and their ergodicity. arXiv preprint math/0002198.
[35] Xiao, L., & Aydemir, A. (2007). Volatility modelling and forecasting in finance. Forecasting Volatility in the Financial Markets, 1.
Abstract
In this dissertation, we study two topics in stochastic differential equations. In the first part, we establish the existence and uniqueness of solutions for a class of stochastic differential equations driven by a fractional Brownian motion with arbitrary Hurst parameter H ∈ (0,1) and by a jump process; in particular, the solution is considered on the fractional Wiener-Poisson space. In the second part, we establish the existence and uniqueness of solutions for two types of McKean-Vlasov equations driven by fractional Brownian motion. Throughout, the integral with respect to the fractional Brownian motion is understood as a Skorokhod integral.
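To give a concrete feel for the driving noises named in the abstract, here is a minimal NumPy sketch (not taken from the dissertation) that samples a fractional Brownian motion path via a Cholesky factorization of its covariance R(s,t) = ½(s^{2H} + t^{2H} − |s−t|^{2H}), and runs a naive Euler-type scheme for an SDE driven by the fBm increments and a Poisson process. Note this treats the stochastic term pathwise for illustration only; the thesis works with the Skorokhod integral, which this scheme does not implement. All function names are hypothetical.

```python
import numpy as np

def fbm_increments(n, T, H, rng):
    """Increments of fractional Brownian motion on [0, T] with Hurst
    parameter H, sampled exactly (O(n^3)) via Cholesky factorization of
    the covariance R(s, t) = 0.5 * (s^{2H} + t^{2H} - |s - t|^{2H})."""
    t = np.linspace(T / n, T, n)            # grid t_1, ..., t_n (t_0 = 0 omitted)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    path = np.linalg.cholesky(cov) @ rng.standard_normal(n)
    return np.diff(np.concatenate(([0.0], path)))

def euler_fbm_jump(x0, b, sigma, jump, lam, n, T, H, seed=0):
    """Naive Euler scheme for dX = b(X) dt + sigma(X) dB^H + jump(X) dN,
    with N a Poisson process of rate lam.  Illustrative only: the noise
    term is taken pathwise, not as a Skorokhod integral."""
    rng = np.random.default_rng(seed)
    dB = fbm_increments(n, T, H, rng)       # fBm increments
    dN = rng.poisson(lam * T / n, size=n)   # Poisson increments
    dt = T / n
    x = x0
    for k in range(n):
        x = x + b(x) * dt + sigma(x) * dB[k] + jump(x) * dN[k]
    return x
```

With zero diffusion and jump coefficients the scheme reduces to deterministic Euler, e.g. `euler_fbm_jump(1.0, lambda x: 1.0, lambda x: 0.0, lambda x: 0.0, 0.5, 100, 2.0, 0.7)` integrates dX = dt from 1.0 to 3.0.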
Asset Metadata
Creator: Xie, Weisheng (author)
Core Title: Stochastic differential equations driven by fractional Brownian motion and Poisson jumps
School: College of Letters, Arts and Sciences
Degree: Doctor of Philosophy
Degree Program: Applied Mathematics
Publication Date: 06/29/2016
Defense Date: 04/27/2016
Publisher: University of Southern California (original), University of Southern California. Libraries (digital)
Tag: fractional Brownian motion, McKean-Vlasov, OAI-PMH Harvest, Poisson process, stochastic differential equation
Format: application/pdf (imt)
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Ma, Jin (committee chair)
Creator Email: weishengx@gmail.com, weishenx@usc.edu
Permanent Link (DOI): https://doi.org/10.25549/usctheses-c40-258574
Unique identifier: UC11281551
Identifier: etd-XieWeishen-4483.pdf (filename), usctheses-c40-258574 (legacy record id)
Legacy Identifier: etd-XieWeishen-4483.pdf
Dmrecord: 258574
Document Type: Dissertation
Rights: Xie, Weisheng
Type: texts
Source: University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA