NUMERICAL WEAK APPROXIMATION OF STOCHASTIC DIFFERENTIAL EQUATIONS DRIVEN BY LÉVY PROCESSES

by

Changyong Zhang

A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(APPLIED MATHEMATICS)

December 2010

Copyright 2010 Changyong Zhang

Acknowledgements

The main results in this dissertation were achieved during my study in the Department of Mathematics at the University of Southern California. I am very grateful to Professor Remigijus Mikulevičius for his availability, patience, and invaluable guidance during the preparation of this dissertation. I also wish to express my thanks to Professor Sergey Lototsky, Professor Jianfeng Zhang, and Professor Roger Ghanem, for having kindly agreed to be members of the Dissertation Committee, and to Professor Jin Ma and Professor Antonios Sangvinatsos, for having kindly agreed to be members of the Guidance Committee.

It was my pleasure to spend spare time with friends from the Department of Mathematics as well as the other departments at the University of Southern California. I would like to say thanks to all of them for all the joyful time spent together, which made my study life here much more interesting and colorful.

Finally, I owe a great debt of gratitude to my parents, my sister, my brother-in-law, and in particular Tingting, for their continuous love, encouragement, and support.

Table of Contents

Acknowledgements
Notation
Abstract
Chapter 1: Introduction
  1.1 Motivation
    1.1.1 From Wiener Process to Lévy Process
    1.1.2 From Analytic Solution to Numerical Approximation
  1.2 Objective
Chapter 2: Stochastic Differential Equations Driven by Lévy Processes
  2.1 Lévy Process
    2.1.1 Definition
    2.1.2 Examples
      Wiener Process
      Compound Poisson Process
      Lévy Jump-Diffusion Process
      Stable Process
    2.1.3 Properties
  2.2 Stochastic Differential Equations
    2.2.1 Diffusion Process
    2.2.2 Stochastic Process with Jumps
    2.2.3 Stochastic Process Driven by Lévy Motion
Chapter 3: Solution to Integro-Differential Equations in Hölder Space
  3.1 Lévy Operators in Hölder Space
  3.2 Equivalent Norms in Hölder Space
  3.3 Solution to Cauchy Problem
Chapter 4: Rate of Convergence of Weak Euler Approximation
  4.1 Weak Euler Approximation
    4.1.1 Time Discretization
    4.1.2 Euler Scheme
  4.2 Rate of Convergence
Chapter 5: Approximation of a General Equation
  5.1 A General Stochastic Differential Equation Driven by Lévy Motion
  5.2 Convergence of Weak Euler Approximation
    5.2.1 Weak Euler Approximation
    5.2.2 Rate of Convergence
Chapter 6: Conclusion and Future Work
  6.1 Conclusion
  6.2 Future Work
Bibliography
Index

Notation

$\mathbb{N}_0 = \mathbb{N} \cup \{0\}$
$\mathbb{R}^d_0 = \mathbb{R}^d \setminus \{0\}$
$\mathbb{R}_+ = (0, \infty)$
$H = [0,T] \times \mathbb{R}^d$
$(x,y) = \sum_{i=1}^d x_i y_i$, for all $x = (x_1, x_2, \ldots, x_d) \in \mathbb{R}^d$, $y = (y_1, y_2, \ldots, y_d) \in \mathbb{R}^d$
$|x| = (x,x)^{1/2}$
$|B| = \sum_{i=1}^d |B_{ii}|$, for all $B \in \mathbb{R}^{d \times d}$
$C_0^\infty$ = the space of infinitely differentiable functions with compact support
$\partial_0 u(t,x) = \partial_t u(t,x) = \frac{\partial}{\partial t} u(t,x)$
$\partial_i u(t,x) = \frac{\partial}{\partial x_i} u(t,x)$, $i = 1, \ldots, d$
$\partial_{ij} u(t,x) = \frac{\partial^2}{\partial x_i \partial x_j} u(t,x)$, $i, j = 1, \ldots, d$
$\partial_x u(t,x) = \nabla u(t,x) = \big(\partial_1 u(t,x), \ldots, \partial_d u(t,x)\big)$
$\partial_x^2 u(t,x) = \Delta u(t,x) = \sum_{i=1}^d \partial_{ii} u(t,x)$
$\partial_x^\gamma u(t,x) = \frac{\partial^{|\gamma|}}{\partial^{\gamma_1} x_1 \cdots \partial^{\gamma_d} x_d} u(t,x)$, where $\gamma = (\gamma_1, \ldots, \gamma_d)$ is a multiindex
$\mathcal{F}u(t,\xi) = \int e^{i(x,\xi)} u(t,x)\, dx$
$\mathcal{F}^{-1}u(t,x) = (2\pi)^{-d} \int e^{-i(x,\xi)} u(t,\xi)\, d\xi$

Abstract

Lévy processes are the simplest generic class of processes having a.s. continuous paths interspersed with jumps of arbitrary sizes occurring at random times, which makes them useful tools in a variety of fields including mathematics, physics, engineering, and finance.

In stochastic analysis, it is frequently necessary to evaluate functionals of the process modeling the system of interest. In general, the law of the process is unknown and a closed-form solution is unrealistic. An alternative possibility is to numerically approximate the functionals by discrete time Monte-Carlo simulation, which is widely applied in practice. The simplest scheme for Monte-Carlo simulation is the weak Euler approximation. In such numerical treatment of stochastic differential equations, it is of theoretical and practical importance to estimate the rate of convergence of the discrete time approximation.

In this dissertation, the weak Euler approximation for stochastic differential equations driven by Lévy processes is studied. The model under consideration is in a more general form and under weaker assumptions than those in the existing literature. Hence, it is applicable to a broader range of processes arising from various fields. In order to investigate the convergence of the weak Euler approximation to the process considered, the existence of a unique solution to the corresponding integro-differential equation in Hölder space is first proved. It is then shown that the Euler scheme yields a positive weak order of convergence, provided that the coefficients of the stochastic differential equation are Hölder continuous and the test function is continuously differentiable to some positive order. In particular, if the coefficients are slightly more than twice differentiable and the test function has derivatives up to the fourth order, then first-order weak convergence is guaranteed.
Chapter 1

Introduction

1.1 Motivation

1.1.1 From Wiener Process to Lévy Process

Since its introduction in the early years of the twentieth century as a model for the physical phenomenon of Brownian motion by Einstein [30] and Smoluchowski [79] and as a description of the dynamical evolution of stock prices by Bachelier [5], the Wiener process has been the most intensively studied stochastic process.

Two important properties of the Wiener process are continuity of sample paths and scale invariance, while many phenomena that were first described by the Wiener process do not exhibit these two properties. For example, the classical Black-Scholes model [19] assumes that a stock price follows a geometric Brownian motion. However, stock prices change by units, and opening prices are often not exactly the same as the closing prices of the previous trading day. In addition, external shocks occur regularly, which are either reasonably predictable or wholly inaccessible. Predictable external shocks include earnings announcements, going ex-dividend, scheduled meetings of the central bank to adjust interest rates, and so on. Inaccessible ones include unexpected events such as wars, political assassinations, terrorist attacks, currency collapses, and natural disasters. Hence, stock prices move essentially by jumps, including both regular, reasonably small ones and occasional, unpredictably large ones, at intraday scales, and only over longer time scales does their behavior resemble Brownian motion. In addition, empirical studies of stock prices indicate distributions with heavy tails and skewness, which are incompatible with models based on the Wiener process. Therefore, a more realistic model is desirable for stock market behavior. Analogous considerations apply in other arenas such as foreign exchange currency markets and government security markets, where there are often substantial jumps related either to central bank intervention or to the release of significant macroeconomic information.

One alternative option is the jump-diffusion process [57], which is capable of modeling large and sudden changes and naturally exhibits the high skewness and leptokurtosis levels typically observed in financial time series. Hence, it has gradually become a standard modeling tool in various markets including equity, foreign exchange, fixed income, commodity, and energy derivatives [6, 7, 14, 15, 20, 21, 31, 32, 39, 45, 46, 47, 56, 70]. In particular, when the jump-diffusion process is a Lévy process, such as in Merton's and Kou's models [52, 57], the characteristic function of the process can be obtained in closed form, which is essential, for example, in pricing European options efficiently by inverting the Fourier transform using the fast Fourier transform algorithm [23, 29].

For this as well as other theoretical and practical reasons [10], there has been a renaissance of interest in Lévy processes [33, 82] in recent years. Lévy processes are the simplest generic class of processes having a.s. continuous paths interspersed with random jumps of arbitrary sizes occurring at random times, which makes them useful tools in a broad variety of fields, from mathematics, including differential geometry and extreme value theory, to physics, including Burgers' turbulence and quantum theory. Other applications can be found in engineering and the sciences [16, 24, 69, 75, 85, 92] and, particularly, in finance [11, 26, 80], from portfolio and risk management to option and bond pricing and hedging.
1.1.2 From Analytic Solution to Numerical Approximation

Let $X = \{X_t\}_{t \in [0,T]}$, $T \in \mathbb{R}_+$, be the process modeling the system of interest. In stochastic analysis, it is frequently necessary to evaluate functionals of the system, such as moments, functional integrals, invariant measures, and Lyapunov exponents. Specifically, for a given test function $g$, the problem of computing the expectation $E[g(X_T)]$ arises from various applications. In random mechanics, given a random dynamical system with white noise, it is important to find the first two moments of the response or the probability that the response reaches a certain level. In telecommunications, $E[g(X_T)]$ represents the average energy of the system at time $T$, which is critical to the design and maintenance of telephone lines. In finance and insurance, it is extremely common to evaluate $E[g(X_T)]$. For instance, in the standard Black-Scholes model, a stock price $X$ is assumed to follow a diffusion process, the solution to a stochastic differential equation driven by the Wiener process. An option is then priced as $E[g(X_T)]$, for a known convex function $g$.

When both the model and the test function $g$ are sufficiently simple, there is a closed-form expression for the expectation. However, the real world is much more complicated. Take equity markets as an example. Security prices usually have jumps, as mentioned in Section 1.1.1. For models with jumps, there are serious problems with the loss of completeness, that is, of the martingale representation property. Nevertheless, arbitrage-free models can still be constructed for a theory of option pricing in the same spirit as the Black-Scholes model, which again leads to the problem of evaluating $E[g(X_T)]$, where $X$ is the solution to a stochastic differential equation driven by a Lévy process with jumps.

In special cases where $g$ is sufficiently smooth and the increments of the driving Lévy process can be simulated, a closed-form solution in analogy to the Black-Scholes paradigm may exist. For instance,
$$X_t = X_0 (1+c)^{N_t} \exp\Big( \big(a - \tfrac{1}{2} b^2\big) t + b W_t \Big)$$
is the solution to
$$X_t = X_0 + a \int_0^t X_s\, ds + b \int_0^t X_s\, dW_s + c \int_0^t X_{s-}\, dN_s, \quad \forall t \in [0,T],$$
where $W = \{W_t\}_{t \in [0,T]}$ is a scalar Wiener process, $N = \{N_t\}_{t \in [0,T]}$ is a scalar Poisson process [26, 35], and $X_{t-}$ denotes the left-hand limit of $X$ at time $t$.

In general, the laws of $X_T$ and $g(X_T)$ as well as their means are all unknown, so a closed-form solution is unrealistic. One alternative possibility is then to numerically approximate $E[g(X_T)]$ by a discrete time Monte-Carlo simulation of the stochastic process $X$. This approach has been widely applied [1, 2, 22, 27, 38, 40, 90, 94] and is very popular in practice. The simplest discrete time approximation of $X$ that can be used for such Monte-Carlo methods is the weak Euler approximation.
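To make the Monte-Carlo approach concrete, the following minimal Python sketch (not part of the dissertation) estimates $E[g(X_T)]$ for the linear jump-diffusion equation above in two ways: with a simple weak Euler scheme that draws Wiener and Poisson increments on each step, and by sampling the closed-form solution directly. The coefficients, the Poisson intensity, the number of steps, and the test function $g$ are all illustrative assumptions, and the scheme shown is one common discretization rather than the construction used later in Chapter 4.

```python
import numpy as np

# Hypothetical parameter values and test function, chosen only for illustration.
rng = np.random.default_rng(0)
X0, a, b, c, lam, T = 1.0, 0.05, 0.2, -0.1, 2.0, 1.0
g = lambda x: np.maximum(x - 1.0, 0.0)   # an example test function g

def euler_estimate(n_paths, n_steps):
    """Weak Euler scheme: Wiener and Poisson increments drawn on each step."""
    dt = T / n_steps
    X = np.full(n_paths, X0)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)   # Wiener increments
        dN = rng.poisson(lam * dt, n_paths)          # Poisson increments
        X = X + a * X * dt + b * X * dW + c * X * dN
    return g(X).mean()

def exact_estimate(n_paths):
    """Sample X_T directly from the closed-form solution quoted above."""
    W_T = rng.normal(0.0, np.sqrt(T), n_paths)
    N_T = rng.poisson(lam * T, n_paths)
    X_T = X0 * (1.0 + c) ** N_T * np.exp((a - 0.5 * b**2) * T + b * W_T)
    return g(X_T).mean()

n = 400_000
approx, exact = euler_estimate(n, n_steps=50), exact_estimate(n)
# The difference mixes the weak discretization error with Monte-Carlo noise
# of order n**-0.5, so it only indicates the size of the approximation error.
print(f"Euler: {approx:.4f}   closed form: {exact:.4f}   |diff|: {abs(approx - exact):.4f}")
```

Both estimates are Monte-Carlo averages, so their difference reflects sampling noise as well as the weak discretization error; the rate at which the latter shrinks with the step size is precisely the question addressed in the next section.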
1.2 Objective

It is of theoretical and practical importance to estimate the rate of convergence of a discrete time approximation. The Euler approximation $Y$ of a stochastic process $X$ is said to converge with a weak order $\kappa > 0$ if for each smooth function $g$ there exists a constant $K$, depending only on $g$, such that
$$\big| E[g(Y_T)] - E[g(X_T)] \big| < K \delta^{\kappa},$$
where $\delta > 0$ is the maximum step size of the time discretization.

Platen and Kloeden gave an extensive list of papers that deal with discrete time approximations of Itô processes by Euler and higher-order schemes [48, 73]. Milstein was among those who first studied the order of weak convergence, in particular first-order convergence, of discrete time approximations for diffusion processes [66, 67, 68]. Talay and Hu investigated a class of second weak-order approximations for diffusion processes [41, 86, 87, 88]. Talay and Tubaro, as well as Kloeden, Platen, and Hofmann, developed the related extrapolation techniques [49, 89]. Schurz, with Saito and Mitsui, investigated stability of numerical schemes for stochastic diffusions to control error propagation [77, 81]. For the discrete time approximations of Itô processes with jump components, Mikulevičius and Platen showed first weak-order convergence in the case in which the coefficient functions possess fourth-order continuous differentiability [58]. Protter, Talay, Jacod, and Rubenthaler presented similar results for Lévy-driven stochastic differential equations [42, 75, 76].

In practice, the coefficient functions and the test function do not always have the smoothness properties assumed in the papers cited above, as studied by Bally, Talay, and Guyon [8, 9, 37]. Mikulevičius and Platen first proved that there is still some weak-order convergence of the Euler approximation for diffusion processes under Hölder conditions on the coefficient and test functions [59]. Kubilius and Platen generalized the result to diffusion processes with jumps [54]. The goal of this dissertation is to prove that the convergence still holds for stochastic processes driven by Lévy motions.

The dissertation is organized as follows. Chapter 2 briefly reviews Lévy processes and stochastic differential equations, and specifies the model considered. Chapter 3 discusses solutions to integro-differential equations in Hölder space, the result of which is invoked in the proof of the main theorem [64]. Chapter 4 defines the Euler approximation with the basic time discretization and presents the main theorem as well as its proof. Chapter 5 generalizes the result obtained in Chapter 4 to a general equation. Chapter 6 outlines future work.

Chapter 2

Stochastic Differential Equations Driven by Lévy Processes

In this chapter, the definition, examples, and properties of Lévy processes are first discussed. The stochastic differential equation considered is then introduced.

2.1 Lévy Process

2.1.1 Definition

Let $T \in \mathbb{R}_+$ denote the time horizon and $(\Omega, \mathcal{F}, \mathbb{F}, P)$ be a filtered probability space, where $\mathcal{F} = \mathcal{F}_T$ and the filtration $\mathbb{F} = \{\mathcal{F}_t\}_{t \in [0,T]}$, an increasing family of sub-$\sigma$-algebras of $\mathcal{F}$, satisfies the usual hypotheses [28]:

Completeness: $\mathcal{F}_0$ contains all $P$-null sets of $\mathcal{F}$, and

Right Continuity: $\mathcal{F}_t = \mathcal{F}_{t+}$, where $\mathcal{F}_{t+} = \bigcap_{\varepsilon > 0} \mathcal{F}_{t+\varepsilon}$.

A stochastic process [44] is a family of random variables $X = \{X_t\}_{t \in [0,T]}$ indexed by time. The parameter $t$ may be either discrete or continuous. For each realization of the randomness $\omega$, the trajectory $X(\omega) : t \to X_t(\omega)$ defines a function of time, called a sample path of the process. Thus, stochastic processes can also be viewed as random functions: random variables taking values in function spaces. A stochastic process can also be seen as a function $X$ of both time $t$ and the randomness $\omega$ on $[0,T] \times \Omega$.

Definition 2.1. A stochastic process $X = \{X_t\}_{t \in [0,T]}$ is adapted to the given filtration $\mathbb{F} = \{\mathcal{F}_t\}_{t \in [0,T]}$ if each $X_t$ is $\mathcal{F}_t$-measurable.

An $\mathcal{F}_t$-adapted process is also called a nonanticipating process, whose value at time $t$ is revealed by the information $\mathcal{F}_t$.

Definition 2.2. A function $f : [0,T] \mapsto \mathbb{R}^d$ is said to be càdlàg if it is "continue à droite et limitée à gauche", that is, right continuous with left limits.
Specically, a function f is c adl ag if for each t2 [0;T ], the two limits f t = lim s"t f s and f t+ = lim s#t f s exist, and f t =f t+ . Dene f t =f t f t . If f is c adl ag, it has only jump discontinuities andft : f t 6= 0;t2 [0;T ]g is at most countable. In addition,ft : f t > ";t2 [0;T ]g,8" > 0 is nite. Hence, a c adl ag function on [0;T ] has a nite number of \large jumps" and a possibly innite but countable number of \small jumps". A L evy process [3, 18, 78] is essentially a stochastic process with stationary and inde- pendent increments. Denition 2.3. A c adl ag, adapted, real-valued process L =fL t g t2[0;T ] with L 0 = 0 a.s. is called a L evy process if it has (L1) Independent Increments: L t L s ?F s , 0s<tT ; (L2) Stationary Increments: L t L s d =L ts , 0s<tT ; and 8 (L3) Stochastic Continuity: lim s!t P (jL t L s j>") = 0,8"> 0;8t2 [0;T ]. 2.1.2 Examples Common examples of L evy processes include linear drift, the Wiener process, the Poisson process, the compound Poisson process, the L evy jump-diusion process, and the stable process. Wiener Process The Wiener process is the only non-deterministic L evy process with continuous sample paths. In one dimension, a c adl ag, adapted, real-valued process W =fW t g t2[0;T ] is call a Wiener process with variance 2 if it satises W 0 = 0; W t W s ?F s , 0s<tT ; W t W s Normal 0; 2 (ts) , 0s<tT ; and W has continuous sample paths. If 2 = 1, W =fW t g t2[0;T ] is called a standard Wiener process. The following are a number of useful properties of the Wiener process. The Wiener process is locally H older continuous with exponent 2 (0; 1 2 ), that is, there exists a constant K =K(T;!) such that jW t (!)W s (!)jKjtsj ; 0s<tT;8!2 : The sample pathW (!) :t!W t (!);8!2 is almost surely nowhere dierentiable. 9 For any sequenceft n g n2N with t n "1, lim inf n!1 W tn =1 a.s. and lim sup n!1 W tn =1 a.s. Compound Poisson Process A c adl ag and adapted stochastic process N =fN t g t2[0;T ] is called a Poisson process with intensity 2R + if it satises N 0 = 0; N t N s ?F s , 0s<tT ; and N t N s Poisson (ts) , 0s<tT . Let N be a Poisson process with intensity . The process ~ N =f ~ N t g t2[0;T ] , where ~ N t = N t t, is called a compensated Poisson process, which satises E[ ~ N t ] = 0 and E[ ~ N 2 t ] =t,8t2 [0;T ]. Denition 2.4. An adapted process X =fX t g t2[0;T ] is called a martingale if it satises E[jX t j]<1;8t2 [0;T ] and E[X t jF s ] =X s a:s:; 8t2 (s;T ];8s2 [0;T ]: Proposition 2.5. The compensated Poisson process is a martingale. 10 Dene nonnegative random variablesfT n g n2N 0 by T 0 = 0 and T n = infft :N t =n;t2 [0;T ]g;n2N: (2.1) TheT n 's are gamma distributed. In addition, the inter-arrival timesT n T n1 ;n2N are i.i.d. exponential random variables with mean 1 . The sample paths of N are piecewise constant with jump discontinuities of size 1 at each of the random timesfT n g n2N . A compound Poisson process is dened as L t = Nt X k=1 J k ; whereN =fN t g t2[0;T ] is a Poisson process with intensity andJ =fJ k g k2N is a sequence of i.i.d. random variables with probability distribution function F and E[J] = <1. Hence, jumps arrive according to the Poisson process and F describes the distribution of jump sizes. By conditioning and independence, the characteristic function of L t is E e iuLt = E h exp iu Nt X k=1 J k i = 1 X n=0 E h exp iu Nt X k=1 J k N t =n i P (N t =n) = 1 X n=0 h Z e iux F (dx) i n e t (t) n n! 
= exp h t Z e iux 1 F (dx) i : A sample path of a compound Poisson process is piecewise constant with jump dis- continuities at the random timesfT n g n2N as dened in (2.1) and the sizes of the jumps 11 themselves are random. The jumps can be any value in the range of the random variables J k 's. L evy Jump-Diusion Process In one dimension, a L evy jump-diusion process L =fL t g t2[0;T ] is the sum of a linear drift, a Wiener process, and a compensated compound Poisson process, that is, L t =bt +W t + Nt X k=1 J k t ;8t2 [0;T ]; (2.2) whereb2R,2R + ,W =fW t g t2[0;T ] is a standard Wiener process,N =fN t g t2[0;T ] is a Poisson process with intensity, andJ =fJ k g k2N is a sequence of i.i.d. random variables with probability distribution function F and E[J] = <1. All sources of randomness are mutually independent. By independence, the characteristic function of L t is E e iuLt = E h exp iu bt +W t + Nt X k=1 J k t i = e iubt E e iuWt E h exp iu Nt X k=1 J k t i = e iubt exp 1 2 u 2 2 t exp h t Z e iux 1 F (dx)iu i = exp h t iub 1 2 u 2 2 + Z e iux 1iux F (dx) i : Since both the Wiener process and the compensated compound Poisson process are mar- tingales, L is a martingale if and only if b = 0. If the compensated compound Poisson process in (2.2) is replaced by a compound Poisson process, that is, L t =bt +W t + Nt X k=1 J k ; 12 then, the process is called an interlacing process. Stable Process A random variableX is said to be stable if there exist two real-valued sequencesfc n g n2R andfd n g n2R with c n > 0;8n such that c n X +d n d =X 1 +X 2 + +X n ; where X 1 ;:::;X n are independent copies of X. X is said to be strictly stable if d n = 0. The only possible choice for c n is c n = n 1 , where 2 (0; 2] and 2 R + . The parameter plays a key role and is called the index of stability, a measure of concentration. It determines the rate at which the tails of the distribution taper o. E[X 2 ]<1 if and only if = 2 and E[jXj]<1 if and only if 2 (1; 2]. In general, the p-th moment of a stable random variable is nite if and only if p<. Proposition 2.6. If X is a stable real-valued random variable, then its characteristic triplet (b;c;) must take one of the two forms: = 0 and X Normal(b;c), when = 2; or c = 0 and (dx) = c 1 x 1+ 1 fx>0g (x)dx + c 2 x 1+ 1 fx<0g (x)dx, where c 1 ;c 2 0 and c 1 + c 2 > 0, when 6= 2. Three important examples of stable distributions whose density functions are in closed forms include the normal distribution ( = 2), the Cauchy distribution ( = 1 and = 0), and the L evy distribution ( = 1 2 and = 1). In general, the density function of a stable distribution can be expressed only in a series form but cannot be written analytically. 13 However, the general characteristic function, which determines the probability distribu- tion, is given by Proposition 2.7. Proposition 2.7. A real-valued random variable X is stable if and only if there exist b2R, 2R + , and 2 [1; 1] such that for all u2R, ' X (u) = exp h ibujuj 1isgn(u) 1 f6=1g tan( 2 ) 1 f=1g 2 log(juj) i : In Proposition 2.7, b is the location parameter, a measure of centrality, is the scale parameter, a measure of dispersion, and is the skewness parameter, a measure of asym- metry. The distribution is skewed to the right when > 0, skewed to the left when < 0, and symmetric when = 0. Since a stable random variable X is characterized by four parameters, it is denoted by X S (;;b). 
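For a quick numerical illustration of Proposition 2.7 (a sketch with assumed values, not part of the text), the Python code below draws samples from the symmetric case $\beta = 0$, $b = 0$, $\sigma = 1$ using the Chambers-Mallows-Stuck method and compares the empirical characteristic function at a single point $u$ with $\exp(-|u|^\alpha)$.

```python
import numpy as np

# All numerical choices (alpha, u, sample size) are arbitrary assumptions.
rng = np.random.default_rng(1)

def symmetric_stable(alpha, size):
    """Chambers-Mallows-Stuck sampler for the symmetric alpha-stable law."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform angle
    E = rng.exponential(1.0, size)                 # standard exponential
    return (np.sin(alpha * U) / np.cos(U) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * U) / E) ** ((1.0 - alpha) / alpha))

alpha, u, n = 1.7, 0.8, 1_000_000
X = symmetric_stable(alpha, n)
empirical = np.exp(1j * u * X).mean().real         # imaginary part ~ 0 by symmetry
print(f"empirical: {empirical:.4f}   exp(-|u|^alpha): {np.exp(-abs(u) ** alpha):.4f}")
```

Only the symmetric case is sampled here; the general skewed case requires the full Chambers-Mallows-Stuck formula with the skewness parameter $\beta$.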
If X is symmetric, written as X SS, then Proposition 2.7 yields ' X (u) = exp ibujuj ;2 (0; 2]: One reason that stable distributions are important in applications is the nice decay property of tails, except when = 2, in which case tails decay exponentially. The rela- tively slow decay of tails for non-Gaussian stable distributions, the property of leptokurtic distributions with heavy tails, makes them ideally suitable for modeling a wide range of interesting phenomena. A stable process is a L evy process Z =fZ t g t2[0;T ] whose L evy symbol is given by Proposition 2.6. Hence, each Z t is a stable random variable. A process is called a rota- 14 tionally invariant stable process if the L evy symbol is given by (u) = juj ; where is the index of stability. This class of processes is important in applications because they display self-similarity. A stochastic process Y =fY t g t2[0;T ] is self-similar with a Hurst index H2R + if the two processesfY at g t2[0;T ] andfa H Y t g t2[0;T ] have the same distribution for all a2R + . A L evy process Z is self-similar if and only if each Z t is strictly stable. A rotationally invariant stable process is self-similar with a Hurst index H = 1 , and so, for example, the Wiener process is self-similar with H = 1 2 . 2.1.3 Properties Denition 2.8. The lawp X of a random variableX is innitely divisible if for alln2N, there exists a random variable X (1=n) such that ' X (u) = ' X (1=n)(u) n ; where ' X (u) = Z e i(u;x) p X (dx) is the characteristic function of X. Examples of innitely-divisible distributions include normal distributions, Poisson dis- tributions, compound Poisson distributions, exponential distributions, -distributions, ge- ometric distributions, negative binomial distributions, Cauchy distributions, and strictly stable distributions. Counter-examples are uniform distributions and binomial distribu- tions. 15 A random variable is innitely divisible if its law is innitely divisible. The product UV of random variables is innitely divisible if U is arbitrary but nonnegative, V is exponentially distributed, and U and V are independent [36]. Theorem 2.9. (L evy-Khintchine Formula) The law p X of a random variable X is innitely divisible if and only if there exists a triplet (b;c;), withc nonnegative, f0g = 0, and Z 1^jxj 2 (dx)<1, such that ' X (u) = E e i(u;X) = exp i(u;b) 1 2 (u;cu) + Z e i(u;x) 1i(u;x)1 fjxj<1g (dx) : Here, b is the drift coecient, c the diusion coecient, and the L evy measure. They represent linear drift, the Wiener process, and jumps, respectively. The triplet (b;c;) is called the L evy or characteristic triplet and the exponent (u) =i(u;b) 1 2 (u;cu) + Z e i(u;x) 1i(u;x)1 fjxj<1g (dx) is called the L evy symbol or characteristic exponent. Consider a L evy process L =fL t g t2[0;T ] . By the fact that L t =Lt n + L2t n Lt n + + L t L(n1)t n ;8n2N;8t2 (0;T ] and the independence and stationarity of increments, it follows that the random variable L t is innitely divisible. Proposition 2.10. If L =fL t g t2[0;T ] is a L evy process, then L t is innitely divisible and E e i(u;Lt) =e t (u) ;8t2 [0;T ]; 16 where (u) =i(u;b) 1 2 (u;cu) + Z e i(u;x) 1i(u;x)1 fjxj<1g (dx) is the L evy symbol of L 1 , a random variable whose law is innitely divisible. Every L evy process can be associated with the law of an innitely divisible distribution, and given an innitely divisible random variable X, a L evy process L =fL t g t2[0;T ] can be constructed such that L 1 d =X, with the introduction of Poisson random measures. 
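Since a compound Poisson process is the simplest nontrivial example of this correspondence, the sketch below (illustrative only; the intensity, Gaussian jump law, and evaluation point are assumptions) checks Proposition 2.10 numerically by comparing a Monte-Carlo estimate of $E[e^{iuL_t}]$ with $\exp(t\psi(u))$, where $\psi(u) = \lambda \int (e^{iux} - 1) F(dx)$ is the characteristic exponent computed in Section 2.1.2.

```python
import numpy as np

# Intensity, jump law, time horizon, u and sample size are assumed values.
rng = np.random.default_rng(2)
lam, mu, sd, t, u, n = 3.0, 0.1, 0.5, 2.0, 1.3, 500_000

N_t = rng.poisson(lam * t, n)                         # number of jumps on [0, t]
# Given N_t = k, the sum of k Normal(mu, sd^2) jumps is Normal(k*mu, k*sd^2).
L_t = mu * N_t + sd * np.sqrt(N_t) * rng.standard_normal(n)
empirical = np.exp(1j * u * L_t).mean()

phi_J = np.exp(1j * u * mu - 0.5 * (u * sd) ** 2)     # characteristic function of one jump
analytic = np.exp(t * lam * (phi_J - 1.0))            # exp(t * psi(u))
print("empirical:", np.round(empirical, 4), "  exp(t*psi(u)):", np.round(analytic, 4))
```

Up to Monte-Carlo noise of order $n^{-1/2}$, the two complex numbers agree, which is exactly the statement that the law of $L_t$ is infinitely divisible with Lévy symbol $\psi$.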
For a L evy process L =fL t g t2[0;T ] , L =fL t g t2[0;T ] is called the jump process associated with L. Proposition 2.11. LetL =fL t g t2[0;T ] be a L evy process. For xedt2R + , L t = 0 a.s. Proof. Letft n g n2N be a sequence inR + witht n "t asn!1. On one hand, sinceL has c adl ag paths, lim n!1 L tn = L t . On the other hand, the sequencefL tn g n2N converges in probability to L t by stochastic continuity, and so there exists a subsequence which converges almost surely to L t . The result then follows by uniqueness of limits. For t2 [0;T ], in general it is possible that P 0st jL s j =1 a.s., but it always holds that P 0st jL s j 2 <1 a.s. Let A2B(R d 0 ) be a set which is bounded below, that is, 0 = 2 A. The random measure of jumps of a L evy process L =fL t g t2[0;T ] is dened as L (!;t;A) = #fs : L s (!)2A;s2 [0;t]g = X 0st 1 fLs(!)2Ag ;8t2 [0;T ]: Proposition 2.12. Let L be a L evy process. If A is bounded below, then L (t;A) <1 a.s.,8t2 [0;T ]. 17 Proof. Dene a sequence of stopping timesfT A n g n2N by T A 1 = infft : L t 2 A;t 0g and T A n = infft : L t 2A;t>T A n1 g;8n> 1. Since L has c adl ag paths, T A 1 > 0 a.s. and lim n!1 T A n =1 a.s. Hence, L (t;A) = P n2N 1 fT A n tg <1 a.s.;8t2 [0;T ]. The measure L (!;t;A) has the property that L (t;A) L (s;A)2(fL u L v ;s v < u tg), and so L (t;A) L (s;A) is independent of F s , that is, L (;A) has independent increments. In addition, L (s +u;A) L (s;A) equals the number of jumps of L s+u L s in A for u2 [0;ts]. Then, by the stationarity of the increments of L, L (t;A) L (s;A) d = L (ts;A), that is, L (;A) has stationary increments. Therefore, L (;A) is a Poisson process and L is a Poisson random measure. The intensity of the process (A) = E[ L (1;A)] is called the L evy measure of L. Let A2B(R d 0 ) be a set bounded below and f be a nite Borel measurable function on A. The Poisson integral of f with respect to a Poisson random measure is dened as Z A f(x) L (!;t;dx) = X x2A f(x) L (!;t;fxg) = X 0st f(L s )1 fLs(!)2Ag : Each Z A f(x) L (t;dx) is a real-valued random variable and generates a c adl ag stochastic process. Theorem 2.13. LetA2B(R d 0 ) be a set bounded below andf be a nite Borel measurable function on A. (i) The process Z t 0 Z A f(x) L (ds;dx) t2[0;T ] is a compound Poisson process with char- acteristic function E exp iu Z t 0 Z A f(x) L (ds;dx) = exp t Z A (e iuf(x) 1)(dx) ; 18 (ii) If f2L 1 (A), then E Z t 0 Z A f(x) L (ds;dx) =t Z A f(x)(dx); (iii) If f2L 2 (A), then Var Z t 0 Z A f(x) L (ds;dx) =t Z A jf(x)j 2 (dx): The Poisson integral may fail to have a nite mean if f = 2 L 1 (A). For each f 2 L 1 (A);t2 [0;T ], the compensated Poisson integral Z t 0 Z A f(x)~ L (ds;dx) = Z t 0 Z A f(x) L (ds;dx)t Z A f(x)(dx) is a martingale with the properties that E exp iu Z t 0 Z A f(x)~ L (ds;dx) = exp t Z A (e iuf(x) 1iuf(x))(dx) ;8u2R and Var Z t 0 Z A f(x)~ L (ds;dx) =t Z A jf(x)j 2 (dx);8f2L 2 (A): Theorem 2.14. (L evy-It^ o Decomposition) If L =fL t g t2[0;T ] is a L evy process, then there exists a triplet (b;c;), withc nonnegative, f0g = 0, and Z 1^jxj 2 (dx)<1, 19 such that L can be decomposed into four independent processes, that is, L t =bt +c 1 2 W t + Z t 0 Z jxj1 x L (ds;dx) + Z t 0 Z jxj<1 x( L L )(ds;dx);8t2 [0;T ]; where L (t;A) = #fs : L s 2A;s2 [0;t]g and L (dt;dx) =(dx)dt. 
In Theorem 2.14, the rst part corresponds to a deterministic linear drift process with parameter b, the second to a Wiener process with covariance c, the third to a compound Poisson process with jump magnitude F (dx) = (dx) (R d n(1; 1)) 1 fjxj1g and intensity = (R d n(1; 1)), and the fourth to the compensated sum of small jumps, a square-integrable pure jump martingale. All the four terms except the third one have nite moments to all orders, so if a L evy process fails to have a moment, it is due entirely to the large-jump part, which has nite activities. Proposition 2.15. Let L = fL t g t2[0;T ] be a L evy process with characteristic triplet (b;c;). (i) If Z (dx)<1, almost all paths of L have a nite number of jumps on every compact interval, that is, L has nite activities; If Z (dx) =1, almost all paths of L have an innite number of jumps on every compact interval, that is, L has innite activities; (ii) Ifc = 0 and Z jxj1 jxj(dx)<1, almost all paths ofL have nite variation; Ifc6= 0 or Z jxj1 jxj(dx) =1, almost all paths of L have innite variation; 20 (iii) L t has a nite pth moment for p 2 R + , that is, EjL t j p < 1, if and only if Z jxj1 jxj p (dx)<1; L t has a nite pth exponential moment for p2 R, that is, E e pLt <1, if and only if Z jxj1 e px (dx)<1. Theorem 2.16. (It^ o's Formula) Let L =fL t g t2[0;T ] be a L evy process with character- istic triplet (b;c;) and f :H7!R be a function in C 1;2 . Then, f(t;L t ) = f(0;L 0 ) + Z t 0 @ 0 f(s;L s )ds + Z t 0 @ x f(s;L s );dL s + 1 2 Z t 0 d X i;j=1 c ij @ ij f(s;L s )ds + X 0st f(s;L s + L s )f(s;L s ) @ x f(s;L s ); L s ;8t2 [0;T ]: 2.2 Stochastic Dierential Equations 2.2.1 Diusion Process Let W =fW t g t2[0;T ] be a standard Wiener process. The classical SDE [72] with respect to the Wiener process is X t =X 0 + Z t 0 a(X s )ds + Z t 0 b(X s )dW s ;8t2 [0;T ]: (2.3) A sucient condition for (2.3) to have a unique solution is bounded variation of b [71] or nite quadratic variation of b [55]. In the case where the coecients are time inhomogeneous, that is, X t =X 0 + Z t 0 a(s;X s )ds + Z t 0 b(s;X s )dW s ;8t2 [0;T ]; 21 where a(t;x) is a d-dimensional vector, b(t;x) a dd-dimensional matrix, and W = fW t g t2[0;T ] a d-dimensional standard Wiener process, there also exists a unique solution by Picard iteration, provided that the coecients a and b are smooth [89]. 2.2.2 Stochastic Process with Jumps The simplest analogue to (2.3) in the jump case is X t =X 0 + Z t 0 c(X s )dZ s ;8t2 [0;T ]; (2.4) where Z =fZ t g t2[0;T ] is a stochastic process with jumps. When the driving motion is a Wiener process with a Poisson random measure, that is, X t = X 0 + Z t 0 a(X s )ds + Z t 0 b(X s )dW s + Z t 0 Z jyj>1 f(X s ;y)N(ds;dy) + Z t 0 Z jyj1 g(X s ;y) ~ N(ds;dy);8t2 [0;T ]; whereW =fW t g t2[0;T ] is ad-dimensional standard Wiener process and N(dt;dy) a Pois- son random measure with compensator ~ N(dt;dy) = N(dt;dy)(dy)dt, there exists a unique solution by Picard iteration, provided that the following three conditions are satised [3, 74, 83]: Lipschitz Condition: there exists a constant K2R + such that for all x; ~ x2R d , ja(x)a(~ x)j 2 +j ~ B(x; ~ x)j + Z jyj1 jg(x;y)g(~ x;y)j 2 (dy)Kjx ~ xj 2 ; 22 Growth Condition: there exists a constant K2R + such that for all x2R d , ja(x)j 2 +jB(x;x)j + Z jyj1 jg(x;y)j 2 (dy)K(1 +jxj 2 ); Big Jumps Condition: f is jointly measurable andx!f(x;y),8y2fy :jyj> 1g is continuous. Here B(x; ~ x) =b(x) T b(~ x) and ~ B(x; ~ x) =B(x;x) 2B(x; ~ x) +B(~ x; ~ x). 
Another common case is that Z is a one-dimensional symmetric stable process with index 2 (0; 2). By Picard iteration, there exists a pathwise unique solution to (2.4) if c satises the Lipschitz condition [13]. If c is bounded and has a modulus of continuity , that is,jc(x)c(~ x)j(jx ~ xj);8x; ~ x, where satises Z " 0 1 (x) dx =1;8"> 0; (2.5) then (2.4) admits a strong pathwise unique solution [12]. Condition (2.5) is satised, for instance, if c is H older-continuous of order 1 . The condition is the exact analogue to the Yamada-Watanabe condition [93] for stochastic dierential equations driven by Wiener processes [50]. 2.2.3 Stochastic Process Driven by L evy Motion Under appropriate conditions onc, the solution to (2.4) is a strong Markov process if and only if the driving motion Z is a L evy process [43], in which case the resulting system is a stochastic process driven by a L evy process. 23 Let T 2 R + and ( ;F;P) be a complete probability space with ltrations F = fF t g t2[0;T ] and ~ F =f ~ F t g t2[0;T ] , satisfying the usual conditions. The process X under consideration is F-adapted and the Euler approximation of X is ~ F-adapted. In this dissertation, for 2 (0; 2], the following stochastic dierential equation is considered, X t = X 0 + Z t 0 a () (X s )ds + Z t 0 b () (X s )dW s + Z t 0 Z jyj>1 yp X (ds;dy) + Z t 0 Z jyj1 yq X (ds;dy);t2 [0;T ]; (2.6) where a () (x) = 1 f2(0;1)g Z jyj1 ym () (x;y) dy jyj d+ + Z jyj1 y () (x;y) () (dy) +1 f=1g a(x) + Z jyj1 y () (x;y) () (dy) +1 f2(1;2]g c(x) Z jyj>1 ym () (x;y) dy jyj d+ ; b () (x) = 1 f=2g b(x); X 0 is the F 0 -measurable initial value, W =fW t g t2[0;T ] is an F-adapted d-dimensional standard Wiener process, p X is the jump measure of X t with p X [0;t]A = X s2(0;t] 1 A (X s ); and q X is an (F;P)-martingale measure with q X (dt;dy) =p X (dt;dy)m () (X t ;y) dy jyj d+ dt () (X t ;y) () (dy)dt: 24 In the stochastic equation (2.6), a, c, b, m () , and () are measurable functions, witha andcd-dimensional vectors,b add-dimensional symmetric nonnegative denite matrix, m () and () nonnegative functions, and () a nonnegative measure onR d 0 . In addition, m () (x;y) and its partial derivatives @ y m () (x;y); 2 :j jd 0 = d 2 + 1 are continuous in (x;y), m () (x;y) is homogeneous in y with index zero, and Z S d1 ym (1) (;y) d1 (dy) = 0; m (2) 0; where S d1 is the unit sphere inR d and d1 is the Lebesgue measure on it. For = [] + + > 0, where [] 2N and + 2 (0; 1], let C (H) denote the space of measurable functions u on H such that the norm juj = X j j[] sup t;x j@ x u(t;x)j + 1 ffg + <1g sup j j=[] ; t;x;h6=0 j@ x u(t;x +h)@ x u(t;x)j jhj fg + +1 ffg + =1g sup j j=[] ; t;x;h6=0 j@ x u(t;x +h) 2@ x u(t;x) +@ x u(t;xh)j jhj fg + is nite. Accordingly, C (R d ) denotes the corresponding space of functions onR d . Throughout this dissertation, it is assumed that the following conditions are satised by the stochastic dierential equation considered: (A1) There exists a constant > 0 such that for all x2R d andjj = 1, (B(x);); where B(x) =b(x) T b(x); Z S d1 j(w;)j m () (x;w)d;2 (0; 2); lim #0 sup x Z jyj jyj () (x;y) () (dy) = 0; (2.7) 25 (A2) For 2R + , M () +N () <1; where M () = 1 f=1g jaj + 1 f=2g jBj + 1 f2(0;2)g sup j jd 0 ; jyj=1 j@ y m () (;y)j and N () = 1 f2(1;2]g jcj + sup j j=[];x Z R d 0 jyj ^ 1 () (x;y) + @ x () (x;y) () (dy) + sup j j=[]; x;h6=0 1 jhj [] Z R d 0 jyj ^ 1 @ x () (x +h;y)@ x () (x;y) () (dy); Remark 2.17. Under the above assumptions, for any2R + , there exists a unique weak solution to equation (2.6) [61]. 
Let X =fX t g t2[0;T ] be the weak solution to (2.6) and Y =fY t g t2[0;T ] be the Euler approximation of X. The convergence rate of Y to X is investigated in this dissertation. Remark 2.18. Compared with (2.4), (2.6) is a more general model. Each equation of the form (2.4) can be transformed into its counterpart of the form (2.6). For example, in the case where Z t = Z t 0 Z y p(ds;dy) dy jyj d+ ds ;2 (1; 2]; (2.8) 26 X s2[0;t] 1 A (X s ) = X s2[0;t] 1 A c(X s )Z s = Z t 0 Z 1 A c(X s )y dy jyj d+ ds = Z t 0 Z 1 A y detc(X s ) 1 c(X s ) 1 y d+ j yj d+ d y j yj d+ ds = Z t 0 Z 1 A y detc(X s ) 1 c(X s ) 1 y d+ jyj d+ dy jyj d+ ds = Z t 0 Z 1 A y detc(X s ) 1 c(X s ) 1 y jyj d+ dy jyj d+ ds: Hence, let m () (X s ;y) = detc(X s ) 1 c(X s ) 1 y jyj d+ ; (2.4) is then of the form (2.6). That is, X t =X 0 + Z t 0 Z yq X (ds;dy);8t2 [0;T ]; where p X [0;t]A = X s2[0;t] 1 A (X s ) and q X (dt;dy) =p X (dt;dy)m () (X t ;y) dy jyj d+ dt: In addition, assumptions outlined above are satised provided thatc is non-degenerate, that is,j detc()j"> 0. 27 Chapter 3 Solution to Integro-Dierential Equations in H older Space In order to prove the main theorem in Chapter 4, some auxiliary results are presented in this chapter. Specically, an associated integro-dierential equation in H older space is solved to determine the rate of convergence. 3.1 L evy Operators in H older Space For u2C + (H), denote A () y u(t;x) =u(t;x +y)u(t;x) 1 fjyj1g 1 f=1g + 1 f2(1;2)g @ x u(t;x);y and B () y u(t;x) =u(t;x +y)u(t;x) 1 fjyj1g 1 f2(1;2]g @ x u(t;x);y : Let A () z u(t;x) = 1 f=1g a(z);@ x u(t;x) + 1 2 1 f=2g d X i;j=1 B ij (z)@ ij u(t;x) 28 + Z R d 0 A () y u(t;x)m () (z;y) dy jyj d+ ; A () u(t;x) = A () x u(t;x) =A () z u(t;x)j z=x ; and B () z u(t;x) = 1 f2(1;2]g c(z);@ x u(t;x) + Z R d 0 B () y u(t;x) () (z;y) () (dy); B () u(t;x) = B () x u(t;x) =B () z u(t;x)j z=x : Note that for u2C + (H), 2R + , A () z u(t;x) = F 1 () (z;)Fu(t;) (x) with () (z;) being the logarithm of the characteristic function of a stable distribution with parameter , which is given by () (z;) = K Z S d1 j(!;)j 1i 1 f6=1g tan 2 sgn(!;) 2 1 f=1g sgn(!;) lnj(!;)j m () (z;!) d1 (d!) i1 f=1g a(z); 1 2 1 f=2g B(z); ; where K =K() is a constant, S d1 is the unit sphere inR d , and d1 is the Lebesgue measure on it. The operatorL () =A () +B () is the generator of X t dened in (2.6). A () is the principal part andB () is the lower order or subordinated part ofL () . Accordingly, for u not depending on t,A () z u(x),A () u(x),B () z u(x), andB () u(x) denote analogous operators. 29 Remark 3.1. The stochastic process u(X t ) Z t 0 A () +B () u(X s )ds;8u2C + (R d ) is a martingale [61], where X =fX t g t2[0;T ] is the unique weak solution to equation (2.6). For f2C (R d ), denote @ f(x) =F 1 jj Ff() (x) = ( K Z A () y f(x) dy jyj d+ ; 2 (0; 2); d X i=1 @ ii f(x) = f(x); = 2; where K =K() is a constant. The following result on @ f [51, 53] is used in the proof of the main theorem. Lemma 3.2. For 2 (0; 1) and f2C (R d ), f x +h f(x) =K(;d) Z k () (h;y)@ f(xy)dy; where for a constant K, k () (h;y) =jy +hj d+ jyj d+ with Z jk () (h;y)jdyKjhj : 30 3.2 Equivalent Norms in H older Space By Lemma 6:1:7 [17], there exists a nonnegative function2C 1 0 (R d ) such that supp = f : 1 2 jj 2g and 1 X j=1 (2 j ) = 1;86= 0: Dene ' k 2S(R d ), k = 0;1;::: by F' k =(2 k ) (3.1) and 2S(R d ) by F = 1 X k1 F' k (); (3.2) whereS(R d ) is the Schwartz space of rapidly decreasing, innitely dierentiable functions onR d . 
By Lemma 12 [63], juj sup x j u(x)j + sup k1 2 k sup x j' k u(x)j and for 2 (0; 2], 2R + ,juj + is equivalent to the normjuj ; =juj 0 +j@ uj . 31 3.3 Solution to Cauchy Problem For the proof of Theorem 3.8, Lemma 3.3 [25, 91], Lemma 3.4, Lemma 3.6, and Lemma 3.7 are called. For functions possessing the same properties as a, c, b, m () , and () , the corresponding norms and operators are dened as follows. Let a, c, b, m () , and () be measurable functions, with a and c d-dimensional vec- tors, b a dd-dimensional symmetric nonnegative denite matrix, and m () and () nonnegative functions. In addition, assume that m () (x;y) and its partial derivatives @ y m () (x;y); 2 :j jd 0 = d 2 + 1 are continuous in (x;y), m () (x;y) is homoge- neous in y with index zero, and Z S d1 y m (1) (;y) d1 (dy) = 0; m (2) 0; where S d1 is the unit sphere inR d and d1 is the Lebesgue measure on it. For 2R + and B(x) = b(x) T b(x);x2R d , dene M () = 1 f=1g j aj + 1 f=2g j Bj + 1 f2(0;2)g sup j jd 0 ; jyj=1 j@ y m () (;y)j ; N () = 1 f2(1;2]g j cj + sup j j=[];x Z R d 0 jyj ^ 1 j () (x;y)j +j@ x () (x;y)j () (dy) + sup j j=[]; x;h6=0 1 jhj [] Z R d 0 jyj ^ 1 j@ x () (x +h;y)@ x () (x;y)j () (dy); and () (z;) = K Z S d1 j(!;)j 1i 1 f6=1g tan 2 sgn(!;) 2 1 f=1g sgn(!;) lnj(!;)j m () (z;!) d1 (d!) 32 i1 f=1g a(z); 1 2 1 f=2g B(z); : For f2C + (R d ), let A () z f(x) = 1 f=1g a(z);@ x f(x) + 1 2 1 f=2g d X i;j=1 B ij (z)@ ij f(x) + Z R d 0 A () y f(x) m () (z;y) dy jyj d+ ; A () f(x) = A () x f(x) = A () z f(x)j z=x ; and B () z f(x) = 1 f2(1;2]g c(z);@ x f(x) + Z R d 0 B () y f(x) () (z;y) () (dy); B () f(x) = B () x f(x) = B () z f(x)j z=x : In addition, denote M () = 1 f=1g j aj 1 + 1 f=2g j Bj 1 + 1 f2(0;2)g sup j jd 0 ; jyj=1 j@ y m () (;y)j 1 ; N () = 1 f2(1;2]g j cj 1 + sup j j=[];x Z R d 0 jyj ^ 1 j () (x;y)j +j@ x () (x;y)j () (dy) + sup j j=[]; x;h6=0 1 jhj [] Z R d 0 jyj ^ 1 j@ x () (x +h;y)@ x () (x;y)j () (dy); (z;) = () (z;)(1 +jj ) 1 ; f(z;x) = F 1 (z;)Ff() (x); ~ f(x) = f(x;x); j f(x) = F 1 i j jj 1 (1 +jj ) 1 Ff() (x);j = 1;:::;d; 1; ij f(x) = F 1 i j (1 +jj 2 ) 1 Ff() (x);i;j = 1;:::;d: 33 For any multiindex 2 :j j d 0 = d 2 + 1 and any 2 R d , the following inequalities hold, @ (;) K M () jj j j ;2R + ; @ j jj 1 (1 +jj ) 1 Kjj j j ;j = 1;:::;d; @ j j (1 +jj 2 ) 1 Kjj j j ;j = 1;:::;d; (3.3) where K is a constant. Lemma 3.3. Let 2 R + , h2 C 1 (R d ), and K 0 be a constant such thatj@ h()j K 0 (1 +jj) j j ,82R d ,8 2 :j jd 0 = d 2 + 1 . Then for all f2C (R d ), there exists a constant K such that jF 1 (hFf)j KK 0 jfj : Proof. See 2.6.1 in [91]. Lemma 3.4. Let 2 R + , = 2 N, and 2 (0;]. Assume M () <1. Then for all f2C (R d ), there exists a constant K such that j f(z;)j K M () jfj ;8z2R d ; j f(;x)j K M () jfj ;8x2R d ; j ~ fj K M () jfj ; j j fj Kjfj ;j = 1;:::;d; j ij fj Kjfj ;i;j = 1;:::;d: 34 Proof. Let 1 2C 1 0 (R d ) and 2 = 1 1 with 1 2 [0; 1] and 1 (x) = 1 ifjxj 1. Then, f(z;x) = 1 f(z;x) + 2 f(z;x) and ~ f(x) = ~ 1 f(x) + ~ 2 f(x); where k f(z;x) =F 1 (z;) k ()Ff() (x) =' k (z;x)f = ~ k (z;x)f and ~ k f(x) = k f(x;x);k = 1; 2; with ' =F 1 (1 +jj ) 1 and k (z;x) =F 1 () (z;) k () (x). 
Let ~ u =F 1 1 , then 1 (z;x) = F 1 () (z;) 1 () (x) = F 1 () (z;)F ~ u() (x) = 1 f=1g a(z);@ x ~ u(t;x) + 1 2 1 f=2g d X i;j=1 B ij (z)@ ij ~ u(t;x) + Z A () y ~ u(t;x) m () (z;y) dy jyj d+ ; 35 and so Z j 1 (;x)j dxK M () : It then follows by equivalence between norms [84] that Z j~ 1 (;x)j dxK M () : Also, Z j~ 1 (z;)j dzK M () : Hence, j 1 f(;x)j K M () jfj 1 ; j 1 f(z;)j K M () jfj ; and j ~ 1 fj K M () jfj 1 + M () jfj : Applying (3.3) and Lemma 3.3 gives j 2 f(;x)j K M () jfj ; 36 j 2 f(z;)j K M () jfj ; and j ~ 2 fj K M () jfj + M () jfj K M () jfj : For example, in Lemma 3.3, letF 1 (hFf)() = ~ 2 f() and h() = (z;) 2 (), then j@ h()jK 0 (1 +jj) j j ;82R d ;8 2 :j jd 0 = d 2 + 1 and by Lemma 3.3, j ~ 2 fj K M () jfj : Hence, j f(z;)j j 1 f(z;)j +j 2 f(z;)j K M () jfj ;8z2R d ; j f(;x)j j 1 f(;x)j +j 2 f(;x)j K M () jfj ;8x2R d ; and thus j ~ fj j ~ 1 fj +j ~ 2 fj K M () jfj : Similarly, it follows from (3.3) and Lemma 3.3 that j j fj Kjfj ;j = 1;:::;d 37 and j ij fj Kjfj ;i;j = 1;:::;d: Remark 3.5. For v2C + (R d ), denote f = (1 +@ )v. Then, f2C (R d ) and f(z;x) = F 1 (z;)Ff() (x) = F 1 () (z;)(1 +jj ) 1 Ff() (x) = F 1 h () (z;)FF 1 (1 +jj ) 1 Ff() i (x) = F 1 h () (z;)F (1 +@ ) 1 f() i (x) = F 1 () (z;)Fv() (x) = A () z v(x);8z;x2R d : Hence, by Lemma 3.4, j A () vj =j fj Kjfj =Kjv +@ vj =Kjvj + : Lemma 3.6. Let 2 R + , = 2 N, and 2 (0;]. Assume N () <1. Then for all f2C + (R d ), there exists a constant K such that j B () z f()j K N () jfj + ;z2R d ; j B () f(x)j K N () jfj + ;x2R d ; j B () fj K N () jfj + : 38 The statement is proved by induction. Proof. By Lemma 3.2, for 2 (0; 1), f(x +y)f(x) =K Z k () (y; y)@ f(x y)d y and for 2 (1; 2), f(x +y)f(x) @ x f(x);y = Z 1 0 @ x f(x +sy);y ds Z 1 0 @ x f(x);y ds = Z 1 0 K Z k (1) (sy; y)@ 1 @ x f(x y)d y;y ds: Then, B () z f(x) = 1 f2(1;2]g c(z);@ x f(x) + Z jyj>1 [f(x +y)f(x)] () (z;y) () (dy) +1 f2(0;1]g Z jyj1 [f(x +y)f(x)] () (z;y) () (dy) +1 f2(1;2]g Z jyj1 f(x +y)f(x) @ x f(x);y () (z;y) () (dy) = 1 f2(1;2]g c(z);@ x f(x) + Z jyj>1 [f(x +y)f(x)] () (z;y) () (dy) +1 f2(0;1]g K Z jyj1 Z k () (y; y)@ f(x y) () (z;y)d y () (dy) +1 f2(1;2)g K Z jyj1 Z 1 0 Z k (1) (sy; y)@ 1 @ x f(x y)d y;y () (z;y)ds () (dy) +1 f=2g Z jyj1 Z 1 0 Z s 0 @ ij f(x +ry)y i y j () (z;y)drds () (dy): For 2 (0; 1), by Lemma 3.2, jf(x +y)f(x)j =jK Z k () (y; y)@ f(x y)d yjKj@ fj 1 jyj ;2 (0; 1) 39 and jf(x +y)f(x) @ x f(x);y j = Z 1 0 K Z k (1) (sy; y)@ 1 @ x f(x y)d y;y ds Z 1 0 Kj@ 1 @ x fj 1 (jsyj 1 ;y)ds Kj@ 1 @ x fj 1 jyj ;2 (1; 2): Then, j B () z f()j 0 = sup x j B () z f(x)j 1 f2(1;2]g j cj 1 j@ x fj 1 + 2jfj 1 Z jyj>1 j () (z;y)j () (dy) +1 f2(0;1]g Kj@ fj 1 Z jyj1 jyj j () (z;y)j () (dy) +1 f2(1;2]g Kj@ 1 @ x fj 1 Z jyj1 jyj j () (z;y)j () (dy) K N () jfj and j B () z f()j 1 = sup x;h6=0 j B () z f(x +h) B () z f(x)j jhj 1 f2(1;2]g j cj 1 j@ x fj + 2jfj Z jyj>1 j () (z;y)j () (dy) +1 f2(0;1]g Kj@ fj Z jyj1 jyj j () (z;y)j () (dy) +1 f2(1;2]g Kj@ 1 @ x fj Z jyj1 jyj j () (z;y)j () (dy) K N () jfj + ; Hence, j B () z f()j =j B () z f()j 0 +j B () z f()j 1 K N () jfj + : 40 Similarly, j B () f(x)j 0 = sup z j B () z f(x)j 1 f2(1;2]g j cj 1 j@ x fj 1 + 2jfj 1 sup z Z jyj>1 j () (z;y)j () (dy) +1 f2(0;1]g Kj@ fj 1 sup z Z jyj1 jyj j () (z;y)j () (dy) +1 f2(1;2]g Kj@ 1 @ x fj 1 sup z Z jyj1 jyj j () (z;y)j () (dy) K N () jfj and j B () f(x)j 1 = sup z;h6=0 j B () z+h f(x) B () z f(x)j jhj 1 f2(1;2]g j cj j@ x fj 1 + 2jfj 1 sup z;h6=0 Z jyj>1 j () (z +h;y) () (z;y)j jhj () (dy) +1 f2(0;1]g Kj@ fj 1 sup z;h6=0 Z jyj1 jyj j () (z +h;y) () (z;y)j jhj () 
(dy) +1 f2(1;2]g Kj@ 1 @ x fj 1 sup z;h6=0 Z jyj1 jyj j () (z +h;y) () (z;y)j jhj () (dy) K N () jfj ; Hence, j B () f(x)j =j B () f(x)j 0 +j B () f(x)j 1 K N () jfj + : Also, j B () fj 0 = sup x j B () x f(x)j 1 f2(1;2]g j cj 1 j@ x fj 1 + 2jfj 1 sup x Z jyj>1 j () (x;y)j () (dy) 41 +1 f2(0;1]g Kj@ fj 1 sup x Z jyj1 jyj j () (x;y)j () (dy) +1 f2(1;2]g Kj@ 1 @ x fj 1 sup x Z jyj1 jyj j () (x;y)j () (dy) K N () jfj and j B () fj 1 = sup x;h6=0 j B () x+h f(x +h) B () x f(x)j jhj sup x;h6=0 j B () x+h f(x +h) B () x+h f(x)j jhj + sup x2R d ;h6=0 j B () x+h f(x) B () x f(x)j jhj K j B () x+h f()j 1 +j B () f(x)j 1 K N () jfj + + N () jfj K N () jfj + : Hence, j B () fj =j B () fj 0 +j B () fj 1 K N () jfj + : Assume the inequalities hold for 2 n1 S l=0 (l;l + 1), n2 N. For 2 (n;n + 1) and f2C + (R d ), 12 (n 1;n) and @ x f2C +1 (R d ). Hence, j B () z @ x f()j 1 K N () j@ x fj +1 K N () jfj + : Similarly, j@ z B () f(x)j 1 K N () jfj K N () jfj + : 42 Since forj j [] , @ ( B () f) = X += @ z B () z (@ f)(x)j z=x : Then, j B () z f()j = X j j[] sup x j@ x B () z f(x)j + sup j j=[] ; x;h6=0 j@ x B () z f(x +h)@ x B () z f(x)j jhj fg + = sup x j B () z f(x)j + X j j[1] sup x j@ x B () z @ x f(x)j + sup j j=[1] ; x;h6=0 j@ x B () z @ x f(x +h)@ x B () z @ x f(x)j jhj fg + = sup x j B () z f(x)j +j B () z @ x f()j 1 K N () jfj + N () j@ x fj +1 K N () jfj + ; j B () f(x)j = X j j[] sup z j@ z B () z f(x)j + sup j j=[] ; z;h6=0 j@ z B () z+h f(x)@ z B () z f(x)j jhj fg + = sup z j B () z f(x)j + X j j[1] sup z j@ z @ z B () z f(x)j + sup j j=[1] ; z;h6=0 j@ z @ z B () z f(x +h)@ z @ z B () z f(x)j jhj fg + = sup z j B () z f(x)j +j@ z B () f(x)j 1 K N () jfj + N () jfj + K N () jfj + ; 43 and j B () fj = X j j[] sup x j@ x B () x f(x)j + sup j j=[] ; x;h6=0 j@ x B () x+h f(x +h)@ x B () x f(x)j jhj fg + X j j[] sup x j@ x B () x f(x)j + sup j j=[] ; x;h6=0 j@ x B () x+h f(x +h)@ x B () x+h f(x)j jhj fg + + sup j j=[] ; x;h6=0 j@ x B () x+h f(x)@ x B () x f(x)j jhj fg + j B () z f()j +j B () f(x)j K N () jfj + + N () jfj + K N () jfj + : The statement then follows. Lemma 3.7. Assume u n 2C (R d );8n2N with sup n ju n j <1 and u n !u uniformly on compact subsets. Then u2C ,juj sup n ju n j , and @ @ u n !@ @ u;j j [] uniformly on compact subsets as n!1 for any 2 [0; 1) such that [] +<. Proof. Letf' k g k=0;1;::: and be as dened in (3.1) and (3.2). If u n !u uniformly on compact sets, then j u(x)j = lim n j u n (x)j sup n sup y j u n (y)j;8x2R d 44 and 2 k j' k u(x)j = 2 k lim n j' k u n (x)j sup n sup k 2 k sup y j' k u n (y)j;8x2R d : Hence,juj sup n ju n j <1. Assume sup n ju n j < 1, > 1. Then, sup n j@ x u n j 1 < 1 and @ x u n satises the Arzel a-Ascoli theorem. Hence, for eachf@ x u n g, there exist a subsequencef@ x u n k g and a function v such that @ x u n k (x)!v(x) uniformly on compact subsets as n k !1. Since u n k (x +h)u n k (x) =h Z 1 0 @ x u n k (x +sh)ds; passing to the limit on both sides yields u(x +h)u(x) =h Z 1 0 v(x +sh)ds: That is, u(x +h)u(x) h = Z 1 0 @ x u(x +sh)ds = Z 1 0 v(x +sh)ds; and so @ x u =v. By induction, @ u n !@ u,8 2f :j j [] g uniformly on compact subsets. Also, by Lemma 3.2, for 2 [0; 1) such that [] +< andjj [] , @ @ u n (x) = Z @ u n (x +y)@ u n (x) dy jyj d+ : 45 Passing to the limit yields that for 2 [0; 1) such that [] +< andjj [] , @ @ u n !@ @ u uniformly on compact subsets as n!1. Theorem 3.8. Let 2 (0; 2] and 2R + , = 2N. Assume (A1) and (A2) hold. 
Then for f2C (H), there exist a unique solution u2C + (H) to the Cauchy problem @ t +A () x +B () x u(t;x) = f(t;x); u(T;x) = 0 (3.4) and a constant K independent of f such thatjuj + Kjfj . Foru being dened onH,h2R being a given nonzero number,fe k g k=1;:::;d being the canonical basis inR d , and k = 1;:::;d, denote u h k (t;x) = u(t;x +he k )u(t;x) h ; r 1 y u(t;x) =u(t;x +y)u(t;x); r 2 y u(t;x) =u(t;x +y)u(t;x) @ x u(t;x);y ; A ();h z;k u(t;x) = 1 h A () z+he k A () z u(t;x); and B ();h z;k u(t;x) = 1 h B () z+he k B () z u(t;x): 46 The statement is proved by induction. Proof. For 2 (0; 2] and 2 (0; 1), given f 2 C (H), there exists a unique solution u 2 C + (H) to the Cauchy problem ( 3.4) andjuj + Kjfj [60], where K is a constant independent of f. Assume the result holds for 2 n1 S l=0 (l;l + 1), n2N. Let 2 (n;n + 1) and f2 C . Then 12 (n 1;n), f2 C 1 (H), and there exists a unique solution u2 C +1 (H), 2 (0; 2] to the Cauchy problem (3.4) with juj +1 Cjfj 1 . In addition, for a given nonzero number h2R, u satises @ t +A () x+he k +B () x+he k u(t;x +he k ) = f(t;x +he k ); u(T;x +he k ) = 0; k = 1;:::;d: (3.5) Subtracting (3.4) from (3.5) and dividing the dierence by h yields @ t +A () x +B () x u h k (t;x) = f h k (t;x)A ();h x;k u(t;x +he k )B ();h x;k u(t;x +he k ); u h k (T;x) = 0;k = 1;:::;d: (3.6) Since f2C (H) and f h k (t;x) = f(t;x +he k )f(t;x) h = Z 1 0 @ k f(t;x +he k s)ds;8h6= 0; thenjf h k j 1 Kj@ x fj 1 Kjfj with K independent of h. Since u2C +1 (H), then (1 +@ )u2C 1 (H) and by Lemma 3.4, jA () x uj 1 = jA () x (1 +@ ) 1 (1 +@ )uj 1 47 = j (1 +@ )uj 1 KM () j(1 +@ )uj 1 KM () juj +1 KM () jfj 1 : Since a;B;m () 2C , then @ x a;@ x B;@ x m () 2C 1 and so for a h;k (x) = Z 1 0 @ k a(x +he k s)ds; B ij h;k (x) = Z 1 0 @ k B ij (x +he k s)ds; m () h;k (x;y) = Z 1 0 @ k m () (x +he k s;y)ds; a h;k ; B h;k ; m () h;k 2C 1 . Also, A ();h x;k u(t;x +he k ) = 1 h A () x+he k A () x u(t;x +he k ) = 1 f=1g 1 h a(x +he k )a(x) ;@ x u(t;x +he k ) + 1 2 1 f=2g d X i;j=1 1 h B ij (x +he k )B ij (x) @ ij u(t;x +he k ) + Z A () y u(t;x +he k ) 1 h m () (x +he k ;y)m () (x;y) dy jyj d+ = 1 f=1g Z 1 0 @ k a(x +he k s)ds;@ x u(t;x +he k ) + 1 2 1 f=2g d X i;j=1 Z 1 0 @ k B ij (x +he k s)ds@ ij u(t;x +he k ) + Z A () y u(t;x +he k ) Z 1 0 @ k m () (x +he k s;y)ds dy jyj d+ = 1 f=1g a h;k (x);@ x u(t;x +he k ) + 1 2 1 f=2g d X i;j=1 B ij h;k (x)@ ij u(t;x +he k ) 48 + Z A () y u(t;x +he k ) m () h;k (x;y) dy jyj d+ : Hence, by Lemma 3.4, jA ();h ;k uj 1 KM () juj +1 KM () jfj 1 ;k = 1;:::;d with K independent of h and f. 
Denote c h;k (x) = Z 1 0 @ k c(x +he k s)ds;x2R d ; () h;k (x;y) = Z 1 0 @ k () (x +he k s;y)ds;x;y2R d : Then B ();h x;k u(t;x +he k ) = 1 h B () x+he k B () x u(t;x +he k ) = 1 f2(1;2]g 1 h c(x +he k )c(x);@ x u(t;x +he k ) + 1 h Z B () y u(t;x +he k ) () (x +he k ;dy) () (x;y) () (dy) = 1 f2(1;2]g 1 h c(x +he k )c(x) ;@ x u(t;x +he k ) + Z B () y u(t;x +he k ) 1 h () (x +he k ;y) () (x;y) () (dy) = 1 f2(1;2]g Z 1 0 @ k c(x +he k s)ds;@ x u(t;x +he k ) + Z B () y u(t;x +he k ) Z 1 0 @ k () (x +he k s;y)ds () (dy) = 1 f2(1;2]g Z 1 0 @ k c(x +he k s)ds;@ x u(t;x +he k ) +1 f2(0;1]g Z jyj1 r 1 y u(t;x +he k ) Z 1 0 @ k () (x +he k s;y)ds () (dy) +1 f2(1;2]g Z jyj1 r 2 y u(t;x +he k ) Z 1 0 @ k () (x +he k s;y)ds () (dy) 49 + Z jyj>1 r 1 y u(t;x +he k ) Z 1 0 @ k () (x +he k s;y)ds () (dy) = 1 f2(1;2]g c h;k (x);@ x u(t;x +he k ) +1 f2(0;1]g Z jyj1 r 1 y u(t;x +he k ) () h;k (x;y) () (dy) +1 f2(1;2]g Z jyj1 r 2 y u(t;x +he k ) () h;k (x;y) () (dy) + Z jyj>1 r 1 y u(t;x +he k ) () h;k (x;y) () (dy): Applying Lemma 3.6 to c = c h;k , () = () h;k , = = 1, and f = u(t; +he k ) yields jB ();h ;k u(t;)j 1 KN () ju(t; +he k )j +1 KN () jfj 1 ;k = 1;:::;d; with a constant K independent of h and f. Hence, f h k (t;x)A ();h x;k u(t;x +he k )B ();h x;k u(t;x +he k )2 C 1 (H) and by (3.4), u h k 2C +1 (H),8h6= 0, k = 1;:::;d, with ju h k j +1 Kjf h k A ();h ;k uB ();h ;k uj 1 Kjfj ; where K is a constant independent of h and f. Also by Lemma 3.6,B () u h k 2 C 1 (H) withjB () u h k j 1 Kju h k j +1 Kjfj . By (3.6), u h k (t;x)u h k (s;x) = Z t s f h k (r;x)A ();h x;k u(r;x +he k )B ();h x;k u(r;x +he k ) dr Z t s A () x +B () x u h k (r;x)dr; 0s<tT; 50 and so ju h k (t;x)u h k (s;x)j jf h k A ();h x;k uB ();h x;k uj 1 +j A () x +B () x u h k j 1 jtsj Kjtsj: Hence,u h k (t;x) is equicontinuous in (t;x) and by the Arzel a-Ascoli theorem, for eachfh n g, there exist a subsequencefh n k g and a function u k (t;x) such that u hn k k (t;x)! u k (t;x) uniformly on compact subsets as h n k ! 0. By Lemma 3.7, u k 2C +1 . It then follows from passing to the limit in (3.6) and the dominated convergence theorem that u k is the unique solution to @ t +A () x +B () x u k (t;x) = @ k f(t;x) @ k A () x u(t;x) @ k B () x u(t;x); u k (T;x) = 0;k = 1;:::;d and so u hn k (t;x)!u k (t;x);8h n ! 0: Hence, u k (t;x) = lim h!0 u h k (t;x) = lim h!0 u(t;x +he k )u(t;x) h =@ k u(t;x) and @ k u2C +1 (H);k = 1;:::;d. Therefore, u2C + (H). It thus proves that for 2 (0; 2] and 2 (n;n + 1), given f2C (H), there exists a unique solutionu2C + (H) to the Cauchy problem (3.4) andjuj + Kjfj , whereK is a constant. 51 The statement then follows by induction. Corollary 3.9. Let 2 (0; 2] and 2R + , = 2N. Assume (A1) and (A2) hold. Then for f 2 C (H) and g2 C + (R d ), there exist a unique solution v2 C + (H) to the Cauchy problem @ t +A () x +B () x v(t;x) = f(t;x); v(T;x) = g(x) (3.7) and a constant K independent of f and g such thatjvj + K jfj +jgj + . Proof. For g 2 C + (R d ), by Lemma 3.4 and Lemma 3.6, jA () x gj Kjgj + and jB () x gj Kjgj + , with a constant K independent of f and g. It then follows from (3.4) that there exist a unique solution ~ v2C + (H) to the Cauchy problem @ t +A () x +B () x ~ v(t;x) = f(t;x)A () x g(x)B () x g(x); ~ v(T;x) = 0 (3.8) and a constant K independent of f and g such thatj~ vj + K jgj + +jfj . Let v(t;x) = ~ v(t;x) +g(x), where ~ v is the solution to problem (3.8). Then v is the unique solution to the Cauchy problem (3.7) andjvj + K jfj +jgj + , for a constant K independent of f and g. 
52 Chapter 4 Rate of Convergence of Weak Euler Approximation The rate of convergence of the weak Euler approximation to the stochastic process under consideration is identied and proved in this chapter. 4.1 Weak Euler Approximation 4.1.1 Time Discretization The construction of the Euler approximation is based on a time discretization. Let the time discretizationfg of the interval [0;T ] with maximum step size 2 (0; 1) be a partition by ~ F-stopping timesf i g i2N such that 0 = 0 < 1 << i T =T; and max i2f1;:::;i T g ( i i1 ): 53 4.1.2 Euler Scheme Let Y 0 be an ~ F 0 -measurable d-dimensional random vector satisfying P(Y 0 2A) = P(X 0 2A);8A2B(R d ): (4.1) For a xed 2 (0; 2] and a given time discretization fg , the ~ F-adapted Euler approximation Y =fY t g t2[0;T ] of X is dened by the stochastic equation Y t = Y 0 + Z t 0 a () (Y is )ds + Z t 0 b () (Y is )d ~ W s + Z t 0 Z jyj>1 yp Y (ds;dy) + Z t 0 Z jyj1 yq Y (ds;dy);t2 [0;T ]; where is = i if s2 [ i ; i+1 ), ~ W =f ~ W t g t2[0;T ] is a d-dimensional ~ F-adapted standard Wiener process, p Y is an ~ F-adapted point random measure with p Y [0;t]A = X s2[0;t] 1 A (Y s ); and q Y (dt;dy) =p Y (dt;dy)m () (Y i t ;y) dy jyj d+ dt () (Y i t ;y) () (dy)dt is an ~ F-adapted martingale measure. 4.2 Rate of Convergence For the proof of Theorem 4.3, Lemma 4.2, which pertains to the conditional expectation of the increments of the Euler approximation and invokes Lemma 4.1, is used. 54 Let w2 C 1 0 (R d ) be a nonnegative smooth function with support infjxj 1g such that w(x) =w(jxj);x2R d and Z w(x)dx = 1. Note that Z x i w(x)dx = 0;i = 1;:::;d. For x2R d and "2 (0; 1), dene w " (x) =" d w x " and for f2C (R d ), dene the convolution f " (x) = Z f(y)w " (xy)dy = Z f(xy)w " (y)dy;x2R d : (4.2) Apparently, f " is smooth with respect to x. Lemma 4.1. Let 2 (0; 2] and <, 6= 1. For f2C (R d ) and "2 (0; 1), let f " be as dened in (4.2). Then (i) there exists a constant K such that jf " (x)f(x)jKjfj " ;8x2R d : (ii) there exists a constant K such that jA () z f " (x)jKjfj " + ;8z;x2R d : In particular, j@ f " (x)jKjfj " + ;8x2R d : (4.3) 55 Also, for < 1, i = 1;:::;d, j@ i f " (x)jKjfj " 1+ ;8x2R d and for < 2;6= 1, i;j = 1;:::;d, j@ ij f " (x)jKjfj " 2+ ;8x2R d : (iii) there exists a constant K such that jB () z f " (x)jKjfj " + ;8z;x2R d : Proof. Let f2C (R d ) and f " be the corresponding convolution with "2 (0; 1). 
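Before treating the individual cases, the smoothing (4.2) can be seen at work numerically. The short sketch below is an illustration only: the dimension d = 1, the Hölder exponent β = 0.5, the test function f(x) = |x|^β and the bump kernel are arbitrary choices, not taken from the text. It convolves f with w_ε and checks that sup|f_ε − f| decays like ε^β while sup|f_ε'| grows like ε^{β−1}, in line with Lemma 4.1(i) and (4.3).

import numpy as np

# Mollification f_eps = f * w_eps from (4.2), in dimension d = 1.
# All concrete choices below (f, beta, the kernel, the grid) are illustrative.
beta = 0.5
dx = 1e-3
x = np.arange(-2.0, 2.0 + dx, dx)
f = np.abs(x) ** beta                  # Holder continuous of order beta, not differentiable at 0

for eps in (0.2, 0.1, 0.05):
    # w_eps(y) = eps^(-1) * w(y / eps): smooth bump supported in |y| <= eps, unit mass
    y = np.arange(-eps, eps + dx, dx)
    w_eps = np.zeros_like(y)
    inside = np.abs(y) < eps
    w_eps[inside] = np.exp(-1.0 / (1.0 - (y[inside] / eps) ** 2))
    w_eps /= w_eps.sum() * dx          # normalize so that sum(w_eps) * dx = 1

    # f_eps(x) = int f(x - y) w_eps(y) dy, approximated by a Riemann sum
    f_eps = np.convolve(f, w_eps, mode="same") * dx

    interior = slice(len(x) // 4, 3 * len(x) // 4)   # avoid boundary artefacts of the convolution
    err = np.max(np.abs(f_eps - f)[interior])
    grad = np.max(np.abs(np.gradient(f_eps, dx))[interior])
    print(f"eps = {eps:4.2f}   sup|f_eps - f| ~ {err:.3e}   sup|f_eps'| ~ {grad:.3e}")

# Expected behaviour (Lemma 4.1): the first column scales like eps^beta and the
# second like eps^(beta - 1), so halving eps multiplies them by roughly 2^(-0.5)
# and 2^(0.5) respectively.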
(i) For 2 (0; 1), since w(x) =w(x);x2R d , f " (x)f(x) = Z [f(xy)f(x)]w " (y)dy = Z [f(x +y)f(x)]w " (y)dy = Z K Z k () (y;z)@ f(xz)dz w " (y)dy; and by Lemma 3.2, jf " (x)f(x)jKjfj " : 56 For 2 (1; 2), since w(x) =w(x);x2R d and Z x i w(x)dx = 0;i = 1;:::;d, f " (x)f(x) = Z [f(xy)f(x)]w " (y)dy = Z [f(x +y)f(x) (@ x f(x);y)]w " (y)dy = Z Z 1 0 @ x f(x +sy)@ x f(x);y dsw " (y)dy = Z Z 1 0 K Z k (1) (sy;z)@ 1 @ x f(xz)dz;y dsw " (y)dy and by Lemma 3.2, jf " (x)f(x)jKjfj " : (ii) For z;x2R d ; A () z w " (x) = 1 f=1g a(z);@ x w " (x) + 1 f=2g B ij (z)@ ij w " (x) + Z A () y w " (x)m () (z;y) dy jyj d+ = " A () z w " (x) = " " d (A () z w)( x " ) (4.4) and A () z f " (x) = Z " " d A () z w( xy " )f(y)dy = Z " " d (A () z )w( y " )f(xy)dy 57 = Z " A () z w(y)f(x"y)dy: For < and 2 (0; 1), since Z " A () z w(y)f(x"y)dy = Z " A () z w(y) f(x"y)f(x) dy; then jA () z f " (x)jKjfj " + : For < and 2 (1; 2), since A () z w(y) = Z w(y + y)w(y) @ y w(y); y m () (z; y) d y j yj d+ = Z Z 1 0 @ y w(y +s y)@ y w(y); y dsm () (z; y) d y j yj d+ and Z " A () z w(y)f(x"y)dy = " +1 Z Z 1 0 w(y +s y)w(y) @ x f(x"y)@ x f(x); y m () (z; y) d y j yj d+ dy; then jA () z f " (x)j K" +1 " 1 jfj Z Z 1 0 jw(y +s y)w(y)jjyj 1 d y j yj d+1 dy 58 Kjfj " + : Taking m () = 1 gives (4.3). If 2 (0; 1), @ i f " (x) = " 1 Z " d @ i w( xy " )f(y)dy = " 1 Z " d @ i w( y " )f(xy)dy = " 1 Z @ i w(y) f(x"y)f(x) dy and @ ij f " (x) = " 2 Z " d @ ij w( xy " )f(y)dy = " 2 Z " d @ ij w( y " )f(xy)dy = " 2 Z @ ij w(y) f(x"y)f(x) dy: Hence, j@ i f " (x)jKjfj " 1+ and j@ ij f " (x)jKjfj " 2+ : 59 Similarly, if 2 (1; 2), @ i f " (x) = Z " d w( y " )@ i f(xy)dy = Z " d w( xy " )@ i f(y)dy and @ ij f " (x) = " 1 Z " d @ j w( y " )@ i f(xy)dy = " 1 Z @ j w(y) @ i f(x"y)@ i f(x) dy: Hence, j@ ij f " (x)jKjfj " 1 " 1 =Kjfj " 2 : (iii) The equality (4.4) with m () = 1 implies that @ w " (x) =" @ w " =" " d (@ w)( x " );8x2R d : If << 1, f " (x +y)f " (x) = Z k () (y; y)@ f " (x y)d y and by (4.3), jf " (x +y)f " (x)jK(jyj ^ 1)jfj " + ;8x;y2R d : 60 If < = 1, f " (x +y)f " (x) = Z 1 0 @ x f " (x +sy);y ds and by (ii), jf " (x +y)f " (x)jK(jyj^ 1)jfj " 1+ ;8x;y2R d : If 2 (1; 2), f " (x +y)f " (x) @ x f " (x);y = Z 1 0 @ x f " (x +sy)@ x f " (x);y ds: By interpolation inequality and (ii), there exists a constant K independent of" and f such that j@ x f " (x + y)@ x f " (x)j Kj yj 1 sup i;j;x j@ ij f(x)j 1 1 sup x j@ x f(x)j 2 Kj yj 1 jfj " (2+)(1) " (1+)(2) = Kj yj 1 jfj " + ;8x; y2R d : Hence, for 2 (1; 2) andjyj 1, jf " (x +y)f " (x) (@ x f " (x);y)jKjyj jfj " + ;8x;y2R d : (iii) then follows. 61 Lemma 4.2. Let 2 (0; 2] and 2R + , = 2N. Assume (A1) and (A2) hold. For a given time discretizationfg with 2 (0; 1), let Y be the Euler approximation for the stochastic process X. Then for f2C (R d ), there exists a constant K such that E f(Y s )f(Y is ) ~ F is Kjfj (;) ;8s2 [0;T ]; where i s =i if i s< i+1 and (;) is as dened in Theorem 4.3. The proof of Lemma 4.2 is based on the application of Ito's formula tof(Y s )f(Y is ). Proof. For <, dene f " as in (4.2) with "2 (0; 1). By applying It^ o's formula and by Lemma 4.1, E f " (Y s )f " (Y is ) ~ F is = E h Z s is 1 f=1g a(Y is );@ x f " (Y r ) + 1 2 1 f=2g d X i;j=1 B ij (Y is )@ ij f " (Y r ) + Z A () y f " (Y r )m () (Y is ;y) dy jyj d+ +1 f2(1;2]g c(Y is );@ x f " (Y r ) + Z B () y f " (Y r ) () (Y is ;dy) dr ~ F is i = E Z s is A () Y is f " (Y r ) +B () Y is f " (Y r ) dr ~ F is E Z s is A () Y is f " (Y r ) + B () Y is f " (Y r ) dr ~ F is Kjfj " + ; (4.5) where K does not depend on f, , , or ". 
It then follows from Lemma 4.1 and (4.5) that E f(Y s )f(Y is ) ~ F is E (ff " )(Y s ) (ff " )(Y is ) ~ F is 62 + E f " (Y s )f " (Y is ) ~ F is 2 sup x jf " (x)f(x)j + E f " (Y s )f " (Y is ) ~ F is Kjfj inf "2(0;1) (" +" + ): Minimizing " +" + in "2 (0; 1) yields E f(Y s )f(Y is ) ~ F is Kjfj (;) : For >, E f(Y s )f(Y is ) ~ F is = E h Z s is 1 f=1g a(Y is );@ x f(Y r ) + 1 2 1 f=2g d X i;j=1 B ij (Y is )@ ij f(Y r ) + Z A () y f(Y r )m () (Y is ;y) dy jyj d+ +1 f2(1;2]g c(Y is );@ x f(Y r ) + Z B () y f(Y r ) () (Y is ;dy) () (dy) dr ~ F is i = E Z s is A () Y is f(Y r ) +B () Y is f(Y r ) dr ~ F is and so E f(Y s )f(Y is ) ~ F is Kjfj : The assertion of Lemma 4.2 is then obtained. Theorem 4.3. For 2 (0; 2], 2R + , = 2N, and a given time discretizationfg with 2 (0; 1), let Y be the Euler approximation for the stochastic process X. Assume (A1) 63 and (A2) hold. Then there exists a constant K such that jE[g(Y T )] E[g(X T )]jK (;) ;8g2C + (R d ); (4.6) where (;) = ( ; <; 1; >: Proof. Let v2 C + (H) be the unique solution to (3.7) with f = 0. By It^ o's formula and (3.7), E[v(0;X 0 )] = E[v(T;X T )] = E[g(X T )]: (4.7) By Lemma 3.4 and Lemma 3.6, jA () z v(s;)j +jB () z v(s;)j Kjvj + Kjgj + (4.8) and by Corollary 3.9, j@ t v(s;)j Kjgj + : (4.9) Hence,A () z v(s;);B () z v(s;);@ t v(s;)2C (R d ),8z2R d ,8s2 [0;T ]. Then, by It^ o's formula and Corollary 3.9, with (4.7) and (4.1), it follows that E[g(Y T )] E[g(X T )] = E[v(T;Y T )] E[v(0;X 0 )] = E[v(T;Y T )] E[v(0;Y 0 )] 64 = E h Z T 0 @ t v(s;Y s ) + 1 f=1g a(Y is );@ x v(s;Y s ) + 1 2 1 f=2g d X i;j=1 B ij (Y is )@ ij v(s;Y s ) + Z A () y v(s;Y s )m () (Y is ;y) dy jyj d+ +1 f2(1;2]g c(Y is );@ x v(s;Y s ) + Z B () y v(s;Y s ) () (Y is ;y) () (dy) ds Z T 0 @ t +A () Y is +B () Y is v(s;Y is )ds i = E h Z T 0 @ t v(s;Y s )@ t v(s;Y is ) + A () Y is v(s;Y s )A () Y is v(s;Y is ) + B () Y is v(s;Y s )B () Y is v(s;Y is ) ds i = E h Z T 0 E @ t v(s;Y s )@ t v(s;Y is )j ~ F is +E A () Y is v(s;Y s )A () Y is v(s;Y is )j ~ F is +E B () Y is v(s;Y s )B () Y is v(s;Y is )j ~ F is ds i : Hence, by (4.8), (4.9), and Lemma 4.2, there exists a constant K independent of g such that E[g(Y T )] E[g(X T )] Kjgj + (;) : This proves (4.6). Remark 4.4. It can be noted from Theorem 4.3 that ifa,b, andc are H older-continuous, andg is more than+ continuously dierentiable, the Euler scheme has already yielded a positive weak order of convergence. To obtain the rst weak order convergence, only 65 slightly more than continuous dierentiability ofa,b, andc is needed, instead of fourth- order derivatives as in papers cited previously. Example 4.5. For a given time discretizationfg with 2 (0; 1), let Y be the Euler approximation of the stochastic process X dened by X t =X 0 + Z t 0 c(X s )dZ s ;8t2 [0;T ]; (4.10) where c is a dd-dimensional matrix and Z =fZ t g t2[0;T ] is a symmetric -stable d- dimensional process dened as Z t = 8 > > > > > > < > > > > > > : Z t 0 Z yp Z (ds;dy); 2 (0; 1); Z t 0 Z jyj1 yq Z (ds;dy) + Z t 0 Z jyj>1 yp Z (ds;dy); = 1; Z t 0 Z yq Z (ds;dy); 2 (1; 2); with p Z (dt;dy) being the jump measure of Z and q Z (dt;dy) =p Z (dt;dy) dy jyj d+ dt being the corresponding martingale measure. Assumej detc()j " > 0, c ij 2 C (R d ), g2C + (R d ), and 2R + ; = 2N. Then it holds that jE[g(Y T )] E[g(X T )]jK (;) : 66 Indeed, in this case,B () = 0 and m () (x;y) = 1 j detc(x)j jyj jc(x) 1 yj d+ ;x2R d ;y2R d 0 : Remark 4.6. 
For stochastic dierential equations of the form (4.10), Protter and Talay showed that (;) = 1, provided that c is smooth and the L evy measure of Z has nite moments of suciently high order [75]. 67 Chapter 5 Approximation of a General Equation In this chapter, the result obtained on the stochastic dierential equation (2.6) is gener- alized to a more general equation. 5.1 A General Stochastic Dierential Equation Driven by L evy Motion Let (U;U) be a measurable space with a nonnegative measure () (dv) on it. Assume that there is a decreasing sequence of subsetsfU n g with U n 2U such that U = [ n U c n . For 2 (0; 2], let the d-dimensional It^ o process X =fX t g t2[0;T ] be the weak solution to the stochastic equation X t = X 0 + Z t 0 a () (X s )ds + Z t 0 b () (X s )dW s + Z t 0 Z jyj>1 yp X (ds;dy) + Z t 0 Z jyj1 yq X (ds;dy) + Z t 0 Z U c 1 l(X s ;v)~ p X (ds;dv) + Z t 0 Z U 1 l(X s ;v)~ q X (ds;dv);t2 [0;T ]; (5.1) where a () (x) = 1 f2(0;1)g Z jyj1 ym () (x;y) dy jyj d+ + Z U 1 l(x;v) () (x;v) () (dv) 68 +1 f=1g a(x) + Z U 1 l(x;v) () (x;v) () (dv) +1 f2(1;2]g c(x) Z jyj>1 ym () (x;y) dy jyj d+ ; b () (x) = 1 f=2g b(x); X 0 is the F 0 -measurable initial value, W =fW t g t2[0;T ] is an F-adapted d-dimensional standard Wiener process, p X (dt;dy) is a point measure on [0;T ]R d 0 , q X (dt;dy) is an (F;P)-martingale measure with q X (dt;dy) =p X (dt;dy)m () (X t ;y) dy jyj d+ dt; ~ p X (dt;dv) is a point measure on [0;T ]U, and ~ q X (dt;dv) = ~ p X (dt;dv) () (X t ;v) () (dv)dt: In the stochastic equation (5.1), a, c, b, l, m () , and () are measurable functions, with a, c, and l d-dimensional vectors, b a dd-dimensional symmetric nonnegative denite matrix, m () and () nonnegative functions. In addition, m () (x;y) and its partial derivatives @ y m () (x;y); 2 :j j d 0 = d 2 + 1 are continuous in (x;y), m () (x;y) is homogeneous in y with index zero, and Z S d1 ym (1) (;y) d1 (dy) = 0; m (2) 0; where S d1 is the unit sphere inR d and d1 is the Lebesgue measure on it. 69 Let B(x) =b(x) T b(x);x2R d . Denote A () z u(t;x) = 1 f=1g a(z);@ x u(t;x) + 1 2 1 f=2g d X i;j=1 B ij (z)@ ij u(t;x) + Z R d 0 u(t;x +y)u(t;x) 1 fjyj1g 1 f=1g + 1 f2(1;2)g @ x u(t;x);y m () (z;y) dy jyj d+ ; A () u(t;x) = A () x u(t;x) =A () z u(t;x)j z=x ; and B () z u(t;x) = 1 f2(1;2]g c(z);@ x u(t;x) + Z U u(t;x +l(z;v))u(t;x) 1 fv2U 1 g 1 f2(1;2]g @ x u(t;x);l(z;v) () (z;v) () (dv); B () u(t;x) = B () x u(t;x) =B () z u(t;x)j z=x : The operatorL () =A () +B () is the generator of X t dened in (5.1). A () is the principal part andB () is the lower order or subordinated part ofL () . 
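To make the principal part concrete, consider the model case d = 1, α ∈ (1,2) and m^{(α)} ≡ 1, in which A^{(α)} reduces (up to its normalizing constant, omitted here) to the generator of a one-dimensional symmetric α-stable process acting on a smooth function u. The sketch below is an illustration only — the test function u(x) = e^{−x²}, the Taylor cutoff η, the truncation radius R and the evaluation points are arbitrary choices — and it evaluates the jump integral by splitting it at |y| = 1, following the compensation convention used in the definition above.

import numpy as np
from scipy.integrate import quad

# Model case of the principal part in d = 1, m^(alpha) = 1, alpha in (1, 2):
#   (A u)(x) = int [ u(x + y) - u(x) - u'(x) y 1_{|y| <= 1} ] |y|^(-1 - alpha) dy
# (normalizing constant omitted; u, eta, R are illustrative choices).
alpha = 1.5
R = 50.0                                  # truncation of the large-jump integral

u   = lambda x: np.exp(-x ** 2)
du  = lambda x: -2.0 * x * np.exp(-x ** 2)
ddu = lambda x: (4.0 * x ** 2 - 2.0) * np.exp(-x ** 2)

def A_alpha(x, eta=1e-4):
    # |y| <= eta: Taylor expansion u(x+y) - u(x) - u'(x) y ~ 0.5 u''(x) y^2 gives the
    # closed-form contribution u''(x) * eta^(2 - alpha) / (2 - alpha)
    core = ddu(x) * eta ** (2.0 - alpha) / (2.0 - alpha)
    # eta < |y| <= 1: compensated difference, integrated numerically
    small = lambda y: (u(x + y) - u(x) - du(x) * y) * np.abs(y) ** (-1.0 - alpha)
    i_small = quad(small, eta, 1.0)[0] + quad(small, -1.0, -eta)[0]
    # |y| > 1: uncompensated difference, truncated at |y| = R
    large = lambda y: (u(x + y) - u(x)) * np.abs(y) ** (-1.0 - alpha)
    i_large = quad(large, 1.0, R)[0] + quad(large, -R, -1.0)[0]
    return core + i_small + i_large

for x0 in (0.0, 0.5, 1.0):
    print(f"x = {x0:3.1f}   (A u)(x) ~ {A_alpha(x0):+.4f}")

# At x = 0, where u has its maximum, the value is negative, as expected for a
# fractional-Laplacian-type operator; away from the maximum the long-range jumps
# contribute with both signs, reflecting the nonlocal character of A^(alpha).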
It is assumed that the following conditions are satised by the stochastic dierential equation dened in (5.1): ( ~ A1) There exists a constant > 0 such that for all x2R d andjj = 1, (B(x);); where B(x) =b(x) T b(x); Z S d1 j(w;)j m () (x;w)d;2 (0; 2); lim #0 sup x Z U c 1 1 jl(x;v)j (jl(x;v)j ^ 1)j () (x;v)j () (dv) = 0;2 (0; 1]; 70 ( ~ A2) For 2R + , M () +N () <1; where M () = 1 f=1g jaj + 1 f=2g jBj + 1 f2(0;2)g sup j jd 0 ; jyj=1 j@ y m () (;y)j and N () =N () (I 1 ) + 1 f>1g N () (I 2 ); with N () (I 1 ) = 1 f2(1;2]g sup j j=[];x j@ x c(x)j j j + sup j j=[];x;~ x Z U 1 jl(x;v)j ((+)^1)_ j@ x () (~ x;v)j () (dv) + sup j j=[];x;~ x Z U c 1 (jl(x;v)j (+)^1 ^ 1)j@ x () (~ x;v)j () (dv) + sup j j=[]; x;~ x;h6=0 1 jhj [] Z U 1 jl(x +h;v)l(x;v)j ((+)^1)_ j@ x () (~ x +h;v)j () (dv) + sup j j=[]; x;~ x;h6=0 1 jhj [] Z U c 1 (jl(x +h;v)l(x;v)j (+)^1 ^ 1)j@ x () (~ x +h;v)j () (dv) + sup j j=[]; x;~ x;h6=0 1 jhj [] Z U 1 jl(x;v)j ((+)^1)_ j@ x () (~ x +h;v)@ x () (~ x;v)j () (dv) 71 + sup j j=[]; x;~ x;h6=0 1 jhj [] Z U c 1 (jl(x;v)j (+)^1 ^ 1)j@ x () (~ x +h;v)@ x () (~ x;v)j () (dv) and N () (I 2 ) = []1 X jij=0 []jij X jjj=1 sup x;~ x Z j@ j x l(x;v)j []jij jjj j@ i x () (~ x;v)j () (dv) + []1 X jij=0 []jij X jjj=1 sup x;~ x;h6=0 1 jhj [] Z j@ j x l(x;v)j []jij jjj j@ i x () (~ x +h;v)@ i x () (~ x;v)j () (dv) + []1 X jij=0 []jij X jjj=1 sup x;~ x;h6=0 1 jhj [] Z j(@ j x l(x +h;v)) (@ j x l(x;v))j []jij jjj j@ i x () (~ x;v)j () (dv) + []1 X jij=0 []jij X jjj=1 sup x;~ x;h6=0 1 jhj [] Z (jl(x +h;v)l(x;v)j (+1)^1 ^ 1)j@ j x l(x;v)j []jij jjj j@ i x () (~ x;v)j () (dv); Remark 5.1. Under the above assumptions, for any 2R + , there exists a unique weak solution to equation (5.1) [63]. Remark 5.2. In (5.1), ifU =R d ,U n =fv :jvj 1 n g, andl(x;v) =v, then the stochastic dierential equation is of the form (2.6). 72 5.2 Convergence of Weak Euler Approximation 5.2.1 Weak Euler Approximation Let Y 0 be an ~ F 0 -measurable d-dimensional random vector satisfying P(Y 0 2A) = P(X 0 2A);8A2B(R d ): (5.2) For a xed 2 (0; 2] and a given time discretization fg , the ~ F-adapted Euler approximationY =fY t g t2[0;T ] ofX described in (5.1) is dened by the stochastic equation Y t = Y 0 + Z t 0 a () (Y is )ds + Z t 0 b () (Y is )d ~ W s + Z t 0 Z jyj>1 yp Y (ds;dy) + Z t 0 Z jyj1 yq Y (ds;dy) + Z t 0 Z U c 1 l(Y is ;v)~ p Y (ds;dv) + Z t 0 Z U 1 l(Y is ;v)~ q Y (ds;dv);t2 [0;T ]; where is = i if s2 [ i ; i+1 ), ~ W =f ~ W t g t2[0;T ] is a d-dimensional ~ F-adapted standard Wiener process,p Y (dt;dy) is an ~ F-adapted point measure on [0;T ]R d 0 ,q X (dt;dy) is an ( ~ F;P)-martingale measure with q Y (dt;dy) =p Y (dt;dy)m () (Y i t ;y) dy jyj d+ dt; ~ p Y (dt;dv) is a point measure on [0;T ]U, and ~ q Y (dt;dv) = ~ p Y (dt;dv) () (Y i t ;v) () (dv)dt: 73 5.2.2 Rate of Convergence The rate of convergence is given in Theorem 5.7. For the proof of the theorem, Lemma 5.4 is invoked. For functions possessing the same properties as a, c, b, m () , and () , the corresponding norms and operators are dened as follows. Let a, c, b, m () , and () be measurable functions, with a and c d-dimensional vec- tors, b a dd-dimensional symmetric nonnegative denite matrix, and m () and () nonnegative functions. In addition, assume that m () (x;y) and its partial derivatives @ y m () (x;y); 2 :j jd 0 = d 2 + 1 are continuous in (x;y), m () (x;y) is homoge- neous in y with index zero, and Z S d1 y m (1) (;y) d1 (dy) = 0; m (2) 0; where S d1 is the unit sphere inR d and d1 is the Lebesgue measure on it. 
For 2R + and B(x) = b(x) T b(x);x2R d , denote N () = 1 f2(1;2]g sup j j=[];x j@ x c(x)j 1 + sup j j=[];x;~ x Z U 1 jl(x;v)j ((+)^1)_ j@ x () (~ x;v)j () (dv) + sup j j=[];x;~ x Z U c 1 (jl(x;v)j (+)^1 ^ 1)j@ x () (~ x;v)j () (dv) and N () = N () (I 1 ) + 1 f>1g N () (I 2 ); 74 where N () (I 1 ) = 1 f2(1;2]g sup j j=[];x j@ x c(x)j j j + sup j j=[];x;~ x Z U 1 jl(x;v)j ((+)^1)_ j@ x () (~ x;v)j () (dv) + sup j j=[];x;~ x Z U c 1 (jl(x;v)j (+)^1 ^ 1)j@ x () (~ x;v)j () (dv) + sup j j=[]; x;~ x;h6=0 1 jhj [] Z U 1 jl(x +h;v)l(x;v)j ((+)^1)_ j@ x () (~ x +h;v)j () (dv) + sup j j=[]; x;~ x;h6=0 1 jhj [] Z U c 1 (jl(x +h;v)l(x;v)j (+)^1 ^ 1)j@ x () (~ x +h;v)j () (dv) + sup j j=[]; x;~ x;h6=0 1 jhj [] Z U 1 jl(x;v)j ((+)^1)_ j@ x () (~ x +h;v)@ x () (~ x;v)j () (dv) + sup j j=[]; x;~ x;h6=0 1 jhj [] Z U c 1 (jl(x;v)j (+)^1 ^ 1)j@ x () (~ x +h;v)@ x () (~ x;v)j () (dv) and N () (I 2 ) = []1 X jij=0 []jij X jjj=1 sup x;~ x Z j@ j x l(x;v)j []jij jjj j@ i x () (~ x;v)j () (dv) + []1 X jij=0 []jij X jjj=1 sup x;~ x;h6=0 1 jhj [] Z j@ j x l(x;v)j []jij jjj j@ i x () (~ x +h;v)@ i x () (~ x;v)j () (dv) + []1 X jij=0 []jij X jjj=1 sup x;~ x;h6=0 1 jhj [] Z j(@ j x l(x +h;v)) (@ j x l(x;v))j []jij jjj j@ i x () (~ x;v)j () (dv) + []1 X jij=0 []jij X jjj=1 sup x;~ x;h6=0 1 jhj [] Z (jl(x +h;v)l(x;v)j (+1)^1 ^ 1)j@ j x l(x;v)j []jij jjj j@ i x () (~ x;v)j () (dv): 75 For u2C + (H), let A () z u(t;x) = 1 f=1g a(z);@ x u(t;x) + 1 2 1 f=2g d X i;j=1 B ij (z)@ ij u(t;x) + Z R d 0 u(t;x +y)u(t;x) 1 fjyj1g 1 f=1g + 1 f2(1;2)g @ x u(t;x);y m () (z;y) dy jyj d+ ; A () u(t;x) = A () x u(t;x) = A () z u(t;x)j z=x ; and B () z u(t;x) = 1 f2(1;2]g c(z);@ x u(t;x) + Z U u(t;x +l(z;v))u(t;x) 1 fv2U 1 g 1 f2(1;2]g @ x u(t;x);l(z;v) () (z;v) () (dv); B () u(t;x) = B () x u(t;x) = B () z u(t;x)j z=x : Lemma 5.3. Let 2R + , = 2N and f2C (R d ). Then, jf x +h f(x)jK j@ ^1 fj 1 +jfj 1 jhj ^1 ^ 1 ; where K is a constant. Proof. For < 1, jf x +h f(x)j 1 fjhj1g j@ fj 1 jhj + 1 fjhj>1g 2jfj 1 76 and for > 1, jf x +h f(x)j 1 fjhj1g j@ x fj 1 jhj + 1 fjhj>1g 2jfj 1 : Hence, jf x +h f(x)j 2 j@ ^1 fj 1 +jfj 1 jhj ^1 ^ 1 : Lemma 5.4. Let 2R + , = 2N. Assume ( ~ A1) and ( ~ A2) hold for a = a, c = c, b = b, m () = m () , and () = () . Then for each "> 0 there exists a constant K " such that for any f2C + (R d ), j B () z f()j "jfj + +K " jfj 1 ;z2R d ; j B () f(x)j "jfj + +K " jfj 1 ;x2R d ; j B () fj "jfj + +K " jfj 1 : The statement is proved by induction. Proof. 
Denote r 1 y f(x) =f(x +y)f(x) and r 2 y f(x) =f(x +y)f(x) 1 f2(1;2]g @ x f(x);y : 77 By Lemma 5.3, jr 1 l(z;v) f(x)jK j@ ^1 fj 1 +jfj 1 jl(z;v)j ^1 ^ 1 : Obviously, for = 1, jr 2 l(z;v) f(x)j = Z 1 0 @ x f(x +sl(z;v));l(z;v) ds Kj@ fj 1 jl(z;v)j : By Lemma 3.2, for 2 (0; 1), jr 2 l(z;v) f(x)j = K Z k () (l(z;v);y)@ f(xy)dy Kj@ fj 1 jl(z;v)j and for 2 (1; 2), jr 2 l(z;v) f(x)j = Z 1 0 @ x f(x +sl(z;v));l(z;v) ds Z 1 0 @ x f(x);l(z;v) ds = Z 1 0 K Z k (1) (sl(z;v);y)@ 1 @ x f(xy)dy;l(z;v) ds Z 1 0 Kj@ 1 @ x fj 1 (jsl(z;v)j 1 ;l(z;v))ds Kj@ 1 @ x fj 1 jl(z;v)j = Kj@ fj 1 jl(z;v)j : Also, by the interpolation theorem, if 2 (0;), then for each " > 0 there exists a constant K " such that jfj "jfj +K " jfj 1 ;8f2C (R d ) 78 and jfj + "jfj + +K " jfj 1 ;8f2C + (R d ): For 2 (0; 1), since B () z f(x) = 1 f2(1;2]g c(z);@ x f(x) + Z U c 1 r 1 l(z;v) f(x) () (z;v) () (dv) + Z U 1 r 2 l(z;v) f(x) () (z;v) () (dv); then, j B () z f()j 0 = sup x j B () z f(x)j 1 f2(1;2]g j cj 1 j@ x fj 1 +2 j@ ^1 fj 1 +jfj 1 Z U c 1 (jl(z;v)j ^1 ^ 1)j () (z;v)j () (dv) +Kj@ fj 1 Z U 1 jl(z;v)j j () (z;v)j () (dv) K N () jfj + "jfj + +K " jfj 1 and for 2 (0; 1), j B () z f()j 1 = sup x;h6=0 j B () z f(x +h) B () z f(x)j jhj 1 f2(1;2]g j cj 1 j@ x fj +Kj@ ^1 fj Z U c 1 1 jl(z;v)j jl(z;v)j ^1 j () (z;v)j () (dv) +2jfj (^1) Z U c 1 1 jl(z;v)j> jl(z;v)j ^1 ^ 1 j () (z;v)j () (dv) 79 +1 f2(0;1]g 2jfj Z U 1 nUn j () (z;v)j () (dv) +1 f2(1;2]g 2j@ x fj Z U 1 nUn jl(z;v)jj () (z;v)j () (dv) +Kj@ fj Z Un jl(z;v)j j () (z;v)j () (dv) "jfj + +K " jfj 1 : Hence, j B () z f()j =j B () z f()j 0 +j B () z f()j 1 "jfj + +K " jfj 1 : Similarly, j B () f(x)j 0 = sup z j B () z f(x)j 1 f2(1;2]g j cj 1 j@ x fj 1 +2 j@ ^1 fj 1 +jfj 1 sup z Z U c 1 (jl(z;v)j ^1 ^ 1)j () (z;v)j () (dv) +Kj@ fj 1 sup z Z U 1 jl(z;v)j j () (z;v)j () (dv) K N () jfj + "jfj + +K " jfj 1 and j B () f(x)j 1 = sup z;h6=0 j B () z+h f(x) B () z f(x)j jhj 1 f2(1;2]g sup z;h6=0 1 jhj c(z +h) c(z);@ x f(x) + sup z;h6=0 1 jhj Z U c 1 jr 1 l(z+h;v) f(x)r 1 l(z;v) f(x)jj () (z +h;v)j () (dv) 80 + sup z;h6=0 1 jhj Z U c 1 jr 1 l(z;v) f(x)jj () (z +h;v) () (z;v)j () (dv) + sup z;h6=0 1 jhj Z U 1 jr 2 l(z+h;v) f(x)r 2 l(z;v) f(x)jj () (z +h;v)j () (dv) + sup z;h6=0 1 jhj Z U 1 jr 2 l(z;v) f(x)jj () (z +h;v) () (z;v)j () (dv) 1 f2(1;2]g j cj j@ x fj 1 +Kj@ (+ )^1 fj 1 sup z;h6=0 1 jhj Z U c 1 (jl(z +h;v)l(z;v)j (+ )^1 ^ 1)j () (z +h;v)j () (dv) +2 j@ ^1 fj 1 +jfj 1 sup z;h6=0 1 jhj Z U c 1 (jl(z;v)j ^1 ^ 1)j () (z +h;v) () (z;v)j () (dv) +Kj@ fj 1 sup z;h6=0 1 jhj Z U 1 jl(z +h;v)l(z;v)j j () (z +h;v)j () (dv) +Kj@ fj 1 sup z;h6=0 1 jhj Z U 1 jl(z;v)j j () (z +h;v) () (z;v)j () (dv) K N () jfj + "jfj + +K " jfj 1 : Hence, j B () f(x)j =j B () f(x)j 0 +j B () f(x)j 1 K N () + N () jfj + "jfj + +K " jfj 1 : Also, j B () fj 0 = sup x j B () x f(x)j 1 f2(1;2]g j cj 1 j@ x fj 1 +2 j@ ^1 fj 1 +jfj 1 sup x Z U c 1 (jl(x;v)j ^1 ^ 1)j () (x;v)j () (dv) +Kj@ fj 1 sup x Z U 1 jl(x;v)j j () (x;v)j () (dv) K N () jfj + 81 "jfj + +K " jfj 1 and j B () fj 1 = sup x;h6=0 j B () x+h f(x +h) B () x f(x)j jhj sup x;h6=0 j B () x+h f(x +h) B () x+h f(x)j jhj + sup x;h6=0 j B () x+h f(x) B () x f(x)j jhj K j B () x+h f()j 1 +j B () f(x)j 1 "jfj + +K " jfj 1 : Hence, j B () fj =j B () fj 0 +j B () fj 1 "jfj + +K " jfj 1 : Assume the inequalities hold for 2 n1 S l=0 (l;l + 1), n2 N. For 2 (n;n + 1) and f2C + (R d ), @ x f2C +1 (R d ) and @ x f2C +[] (R d ), wherej j = [] . 
Hence, j B () z @ x f()j 1 " 0 j@ x fj +1 +K " 0j@ x fj 1 "jfj + +K " jfj 1 and j@ x f x +h @ x f(x)j 2 j@ ^1 @ x fj 1 +j@ x fj 1 (jhj ^1 ^ 1): Since forj j [] , @ ( B () f) = X += @ z B () z (@ f)(x)j z=x : 82 Then, j B () z f()j = X j j[] sup x j@ x B () z f(x)j + sup j j=[] ; x;h6=0 j@ x B () z f(x +h)@ x B () z f(x)j jhj fg + = sup x j B () z f(x)j + X j j[1] sup x j@ x B () z @ x f(x)j + sup j j=[1] ; x;h6=0 j@ x B () z @ x f(x +h)@ x B () z @ x f(x)j jhj fg + = sup x j B () z f(x)j +j B () z @ x f()j 1 "jfj + +K " jfj 1 : Since for x2R d , @ z B () z f(x) =I 1 z (x) +I 2 z (x); where I 1 z (x) = 1 f2(1;2]g @ z c(z);@ x f(x) + Z U c 1 r 1 l(z;v) f(x)@ z () (z;v) () (dv) + Z U 1 r 2 l(z;v) f(x)@ z () (z;v) () (dv) and I 2 z (x) = Z U c 1 @ x f(x +l(z;v))@ z l(z;v) () (z;v) () (dv) +1 f2(0;1]g Z U 1 @ x f(x +l(z;v))@ z l(z;v) () (z;v) () (dv) +1 f2(1;2]g Z U 1 @ x f(x +l(z;v))@ x f(x);@ z l(z;v) () (z;v) () (dv) = Z U c 1 @ x f(x)@ z l(z;v) () (z;v) () (dv) 83 +1 f2(0;1]g Z U 1 @ x f(x)@ z l(z;v) () (z;v) () (dv) + Z @ x f(x +l(z;v))@ x f(x);@ z l(z;v) () (z;v) () (dv); then, j B () f(x)j = X j j[] sup z j@ z B () z f(x)j + sup j j=[] ; z;h6=0 j@ z B () z+h f(x)@ z B () z f(x)j jhj fg + = sup z j B () z f(x)j + X j j[1] sup z j@ z @ z B () z f(x)j + sup j j=[1] ; z;h6=0 j@ z @ z B () z+h f(x)@ z @ z B () z f(x)j jhj fg + = sup z j B () z f(x)j +j@ z B () f(x)j 1 sup z j B () z f(x)j +jI 1 (x)j 1 +jI 2 (x)j 1 : ForjI 1 (x)j 1 , replace c with @ z c and () with @ z () in B () f(x), it then follows from the conclusion for the case 12 (n 1;n) that jI 1 (x)j 1 K N () + N () jfj + "jfj + +K " jfj 1 : ForjI 2 (x)j 1 , jI 2 (x)j 1 = X j j[1] sup z j@ z I 2 z (x)j + sup j j=[1] ; z;h6=0 j@ z I 2 z+h (x)@ z I 2 z (x)j jhj fg + : When 2 (1; 2), clearly @ [1] z I 2 z (x) = I 2 z (x) = Z U c 1 @ x f(x +l(z;v))@ z l(z;v) () (z;v) () (dv) 84 +1 f2(0;1]g Z U 1 @ x f(x +l(z;v))@ z l(z;v) () (z;v) () (dv) +1 f2(1;2]g Z U 1 @ x f(x +l(z;v))@ x f(x);@ z l(z;v) () (z;v) () (dv) = [1] X jij=0 X P j k j jjj=[] jij Z K(f) [] jij Y jjj=1 @ j z l(z;v) k j @ i z () (z;v) () (dv); where for a constantK and a multiindexk withjkj [ 1] ,K(f) =K@ k x f(x +l(z;v)) or K(f) =K @ x f(x +l(z;v))@ x f(x) . 
Since sup z jI 2 z (x)j j@ x fj 1 sup z Z U c 1 j@ z l(z;v)jj () (z;v)j () (dv) +1 f2(0;1]g j@ x fj 1 sup z Z U 1 j@ z l(z;v)jj () (z;v)j () (dv) +2 j@ ^1 @ x fj 1 +j@ x fj 1 sup z Z (jl(z;v)j ^1 ^ 1)j@ z l(z;v)jj () (z;v)j () (dv) "jfj + +K " jfj 1 and sup z;h6=0 jI 2 z+h (x)I 2 z (x)j jhj fg + sup z;h6=0 1 jhj fg + Z U c 1 j@ x f(x)@ z l(z +h;v)[ () (z +h;v) () (z;v)]j () (dv) + sup z;h6=0 1 jhj fg + Z U c 1 j@ x f(x)[@ z l(z +h;v)@ z l(z;v)] () (z;v)j () (dv) +1 f2(0;1]g sup z;h6=0 1 jhj fg + Z U 1 j@ x f(x)@ z l(z +h;v)[ () (z +h;v) () (z;v)]j () (dv) +1 f2(0;1]g sup z;h6=0 1 jhj fg + Z U 1 j@ x f(x)[@ z l(z +h;v)@ z l(z;v)] () (z;v)j () (dv) + sup z;h6=0 1 jhj fg + Z j @ x f(x +l(z +h;v))@ x f(x);@ z l(z +h;v) [ () (z +h;v) () (z;v)]j () (dv) + sup z;h6=0 1 jhj fg + Z j @ x f(x +l(z +h;v))@ x f(x);@ z l(z +h;v)@ z l(z;v) () (z;v)j 85 () (dv) + sup z;h6=0 1 jhj fg + Z j @ x f(x +l(z +h;v))@ x f(x +l(z;v));@ z l(z;v) () (z;v)j () (dv) j@ x fj 1 sup z;h6=0 1 jhj fg + Z U c 1 j@ z l(z +h;v)jj () (z +h;v) () (z;v)j () (dv) +j@ x fj 1 sup z;h6=0 1 jhj fg + Z U c 1 j@ z l(z +h;v)@ z l(z;v)jj () (z +h;v)j () (dv) +1 f2(0;1]g j@ x fj 1 sup z;h6=0 1 jhj fg + Z U 1 j@ z l(z +h;v)jj () (z +h;v) () (z;v)j () (dv) +1 f2(0;1]g j@ x fj 1 sup z;h6=0 1 jhj fg + Z U 1 j@ z l(z +h;v)@ z l(z;v)jj () (z +h;v)j () (dv) +2 j@ ^1 @ x fj 1 +j@ x fj 1 sup z;h6=0 1 jhj fg + Z (jl(z +h;v)j ^1 ^ 1)j@ z l(z;v)j j () (z +h;v) () (z;v)j () (dv) +2 j@ ^1 @ x fj 1 +j@ x fj 1 sup z;h6=0 1 jhj fg + Z (jl(z +h;v)j ^1 ^ 1)j@ z l(z +h;v)@ z l(z;v)j j () (z;v)j () (dv) +2 j@ ^1 @ x fj 1 +j@ x fj 1 sup z;h6=0 1 jhj fg + Z jl(z +h;v)l(z;v)j ^1 ^ 1 j@ z l(z;v)j j () (z;v)j () (dv) "jfj + +K " jfj 1 ; then, for 2 (1; 2), jI 2 (x)j 1 = sup z jI 2 z (x)j + sup z;h6=0 jI 2 z+h (x)I 2 z (x)j jhj fg + "jfj + +K " jfj 1 : Assume for 2 (n 1;n), n2N and n> 2, @ [1] z I 2 z (x) = [1] X jij=0 X P j k j jjj=[] jij Z K(f) [] jij Y jjj=1 @ j z l(z;v) k j @ i z () (z;v) () (dv); 86 where for a constantK and a multiindexk withjkj [ 1] ,K(f) =K@ k x f(x +l(z;v)) or K(f) =K @ x f(x +l(z;v))@ x f(x) , and k j 2N 0 . That is, @ n2 z I 2 z (x) = n2 X jij=0 X P j k j jjj=n1jij Z K(f) n1jij Y jjj=1 @ j z l(z;v) k j @ i z () (z;v) () (dv); where for a constant K and a multiindex k withjkj n 2, K(f) = K@ k x f(x +l(z;v)) or K(f) =K @ x f(x +l(z;v))@ x f(x) . 
Then for 2 (n;n + 1), @ [1] z I 2 z (x) = @ n1 z I 2 z (x) =@ z @ n2 z I 2 z (x) = n2 X jij=0 X P j k j jjj=n1jij Z K(f)@ z l(z;v) n1jij Y jjj=1 @ j z l(z;v) k j @ i z () (z;v) () (dv) + n2 X jij=0 X P j k j jjj=n1jij Z K(f)@ z n1jij Y jjj=1 @ j z l(z;v) k j @ i z () (z;v) () (dv) + n2 X jij=0 X P j k j jjj=n1jij Z K(f) n1jij Y jjj=1 @ j z l(z;v) k j @ i+1 z () (z;v) () (dv) = n1 X jij=0 X P j k j jjj=njij Z K(f) njij Y jjj=1 @ j z l(z;v) k j @ i z () (z;v) () (dv) = [1] X jij=0 X P j k j jjj=[] jij Z K(f) [] jij Y jjj=1 @ j z l(z;v) k j @ i z () (z;v) () (dv): Apply H older's inequality by neglecting those terms @ j z l(z;v) k j with k j = 0, it then follows that for any 2 (n;), sup z j@ n1 z I 2 z (x)j n1 X jij=0 X P j k j jjj=njij sup z Z K(f) njij Y jjj=1 @ j z l(z;v) k j @ i z () (z;v) () (dv) Kjfj + n1 X jij=0 sup z njij Y jjj=1 Z j@ j z l(z;v)j k j njij k j jjj j@ i z () (z;v)j () (dv) k j jjj njij 87 Kjfj + n1 X jij=0 njij X jjj=1 sup z R j@ j z l(z;v)j k j njij k j jjj j@ i z () (z;v)j () (dv) k j jjj njij njij k j jjj njij k j jjj Kjfj + n1 X jij=0 njij X jjj=1 sup z Z j@ j z l(z;v)j njij jjj j@ i z () (z;v)j () (dv) "jfj + +K " jfj 1 and sup z;h6=0 j@ n1 z I 2 z+h (x)@ n1 z I 2 z (x)j jhj fg + Kjfj + n1 X jij=0 njij X jjj=1 sup z;h6=0 1 jhj [] Z j@ j z l(z;v)j njij jjj j@ i z () (z +h;v)@ i z () (z;v)j () (dv) + n1 X jij=0 njij X jjj=1 sup z;h6=0 1 jhj [] Z j(@ j z l(z +h;v)) (@ j z l(z;v))j njij jjj j@ i z () (z;v)j () (dv) + n1 X jij=0 njij X jjj=1 sup z;h6=0 1 jhj [] Z (jl(z +h;v)l(z;v)j (+1)^1 ^ 1)j@ j z l(z;v)j njij jjj j@ i z () (z;v)j () (dv) "jfj + +K " jfj 1 : For example, for @ z I 2 z (x) = Z U c 1 @ x f(x)@ 2 z l(z;v) () (z;v) () (dv) + Z U c 1 @ x f(x)@ z l(z;v)@ z () (z;v) () (dv) +1 f2(0;1]g Z U 1 @ x f(x)@ 2 z l(z;v) () (z;v) () (dv) +1 f2(0;1]g Z U 1 @ x f(x)@ z l(z;v)@ z () (z;v) () (dv) + Z @ 2 x f(x +l(z;v)); (@ z l(z;v)) 2 () (z;v) () (dv) 88 + Z @ x f(x +l(z;v))@ x f(x);@ 2 z l(z;v) () (z;v) () (dv) + Z @ x f(x +l(z;v))@ x f(x);@ z l(z;v) @ z () (z;v) () (dv); sup z j@ z I 2 z (x)j j@ x fj 1 sup z Z U c 1 j@ 2 z l(z;v)jj () (z;v)j () (dv) +j@ x fj 1 sup z Z U c 1 j@ z l(z;v)jj@ z () (z;v)j () (dv) +1 f2(0;1]g j@ x fj 1 sup z Z U 1 j@ 2 z l(z;v)jj () (z;v)j () (dv) +1 f2(0;1]g j@ x fj 1 sup z Z U 1 j@ z l(z;v)jj@ z () (z;v)j () (dv) +j@ 2 x fj 1 sup z Z j@ z l(z;v)j 2 j () (z;v)j () (dv) +2 j@ ^1 @ x fj 1 +j@ x fj 1 sup z Z (jl(z;v)j ^1 ^ 1)j@ 2 z l(z;v)jj () (z;v)j () (dv) +2 j@ ^1 @ x fj 1 +j@ x fj 1 sup z Z (jl(z;v)j ^1 ^ 1)j@ z l(z;v)jj@ z () (z;v)j () (dv) "jfj + +K " jfj 1 and sup z;h6=0 j@ z I 2 z+h (x)@ z I 2 z (x)j jhj fg + j@ x fj 1 sup z;h6=0 1 jhj fg + Z U c 1 j@ 2 z l(z +h;v)jj () (z +h;v) () (z;v)j () (dv) +j@ x fj 1 sup z;h6=0 1 jhj fg + Z U c 1 j@ 2 z l(z +h;v)@ 2 z l(z;v)jj () (z;v)j () (dv) +j@ x fj 1 sup z;h6=0 1 jhj fg + Z U c 1 j@ z l(z +h;v)jj@ z () (z +h;v)@ z () (z;v)j () (dv) +j@ x fj 1 sup z;h6=0 1 jhj fg + Z U c 1 j@ z l(z +h;v)@ z l(z;v)jj@ z () (z;v)j () (dv) 89 +1 f2(0;1]g j@ x fj 1 sup z;h6=0 1 jhj fg + Z U 1 j@ 2 z l(z +h;v)jj () (z +h;v) () (z;v)j () (dv) +1 f2(0;1]g j@ x fj 1 sup z;h6=0 1 jhj fg + Z U 1 j@ 2 z l(z +h;v)@ 2 z l(z;v)jj () (z;v)j () (dv) +1 f2(0;1]g j@ x fj 1 sup z;h6=0 1 jhj fg + Z U 1 j@ z l(z +h;v)jj@ z () (z +h;v)@ z () (z;v)j () (dv) +1 f2(0;1]g j@ x fj 1 sup z;h6=0 1 jhj fg + Z U 1 j@ z l(z +h;v)@ z l(z;v)jj@ z () (z;v)j () (dv) +j@ 2 x fj 1 sup z;h6=0 1 jhj fg + Z j@ z l(z +h;v)j 2 j () (z +h;v) () (z;v)j () (dv) +j@ 2 x fj 1 sup z;h6=0 1 jhj fg + Z j(@ z l(z +h;v)) 2 (@ z l(z;v)) 2 jj () (z;v)j () (dv) 
+2 j@ ^1 @ 2 x fj 1 +j@ 2 x fj 1 sup z;h6=0 1 jhj fg + Z jl(z +h;v)l(z;v)j ^1 ^ 1 j@ z l(z;v)j 2 j () (z;v)j () (dv) +2 j@ ^1 @ x fj 1 +j@ x fj 1 sup z;h6=0 1 jhj fg + Z (jl(z +h;v)j ^1 ^ 1)j@ 2 z l(z +h;v)j j () (z +h;v) () (z;v)j () (dv) +2 j@ ^1 @ x fj 1 +j@ x fj 1 sup z;h6=0 1 jhj fg + Z (jl(z +h;v)j ^1 ^ 1)j@ 2 z l(z +h;v)@ 2 z l(z;v)j j () (z;v)j () (dv) +2 j@ ^1 @ x fj 1 +j@ x fj 1 sup z;h6=0 1 jhj fg + Z jl(z +h;v)l(z;v)j ^1 ^ 1 j@ 2 z l(z;v)j j () (z;v)j () (dv) +2 j@ ^1 @ x fj 1 +j@ x fj 1 sup z;h6=0 1 jhj fg + Z (jl(z +h;v)j ^1 ^ 1)j@ z l(z +h;v)j j@ z () (z +h;v)@ z () (z;v)j () (dv) +2 j@ ^1 @ x fj 1 +j@ x fj 1 sup z;h6=0 1 jhj fg + Z (jl(z +h;v)j ^1 ^ 1)j@ z l(z +h;v)@ z l(z;v)j j@ z () (z;v)j () (dv) +2 j@ ^1 @ x fj 1 +j@ x fj 1 sup z;h6=0 1 jhj fg + Z jl(z +h;v)l(z;v)j ^1 ^ 1 j@ z l(z;v)j j@ z () (z;v)j () (dv) 90 "jfj + +K " jfj 1 : Hence, it is proved by induction thatjI 2 (x)j 1 "jfj + +K " jfj 1 ,8 > 1 with = 2N. Therefore, j B () f(x)j sup z j B () z f(x)j +jI 1 z (x)j 1 +jI 2 z (x)j 1 "jfj + +K " jfj 1 : It then follows that j B () fj = X j j[] sup x j@ x B () x f(x)j + sup j j=[] ; x;h6=0 j@ x B () x+h f(x +h)@ x B () x f(x)j jhj fg + X j j[] sup x j@ x B () x f(x)j + sup j j=[] ; x;h6=0 j@ x B () x+h f(x +h)@ x B () x+h f(x)j jhj fg + + sup j j=[] ; x;h6=0 j@ x B () x+h f(x)@ x B () x f(x)j jhj fg + K j B () z f()j +j B () f(x)j "jfj + +K " jfj 1 : The statement is thus proved by induction. Lemma 5.5. Let 2 (0; 2], 2R + , = 2N, and 0. Assume ( ~ A1) and ( ~ A2) hold. Then for f2C (H), there exist a unique solution u2C + (H) to the Cauchy problem @ t +A () x +B () x u(t;x) = f(t;x); u(0;x) = 0 (5.3) 91 and a constant K independent of f such thatjuj + Kjfj . Proof. Rewrite (5.3) as @ t +A () x u(t;x) = f(t;x)B () x u(t;x); u(0;x) = 0; then for "> 0, there exist constant K and K " such that juj + K jfj +jB () uj K jfj +"juj + +K " juj 1 : That is, juj + K jfj +juj 1 : Also, for given x 0 , rewrite (5.3) as @ t +A () x 0 u(t;x) = f(t;x)B () x u(t;x) + A () x 0 A () x u(t;x); u(0;x) = 0; then8 2 (0; 1), juj 1 juj () jfj +jB () uj +j A () x 0 A () x uj () jfj +juj + ; where ()! 0 as !1. Hence, juj + K jfj +()(jfj +juj + ) 92 and there exist 0 > 0 and a constant K independent of u such thatjuj + Kjfj if 0 . For 0 , ifu2C + (H) is the solution to (5.3), then ~ u(t;x) =e ( 0 )t u(t;x) is the solution to (5.3) with replaced by 0 . Hence, juj + e ( 0 )T juj + Ke ( 0 )T jfj : The statement is then proved. Theorem 5.6. Let 2 (0; 2] and 2R + , = 2N. Assume ( ~ A1) and ( ~ A2) hold. Then for f2 C (H), there exist a unique solution v2 C + (H) to the Cauchy problem (3:4) and a constant K independent of f such thatjvj + Kjfj . Proof. For 2 [0; 1], denote L u(t;x) = A () x +B () x u(t;x): Let ^ C + (H) be the space of functions u2C + (H) such that u(t;x) = Z t 0 F (s;x)ds;P a:s:;8(t;x)2H; where F =@ t u2C (H). 
^ C + (H) is a Banach space with respect to the norm juj ^ C + =juj + +j@ t uj : Consider the mappings T : ^ C + (H)!C (H) dened by u(t;x) = Z t 0 @ t u(s;x)ds7!@ t u(t;x)L u(t;x): 93 Obviously, for some constant K independent of , jT uj Kjuj ^ C + : On the other hand, since u(t;x) = Z t 0 @ t u(s;x)ds = Z t 0 L u(t;x) + @ t u(s;x)L u(t;x) ds; jL uj =j A () x +B () x uj K juj + +juj Kjuj + ; and by Lemma 5.5, juj + KjT uj =Kj@ t uL uj ; then there exists a constant K independent of such that juj ^ C + = juj + +j@ t uj juj + +j@ t uL uj +jL uj K juj + +j@ t uL uj Kj@ t uL uj = KjT uj : Since T 0 is an onto map, by Theorem 5.2 [34], all T 's are onto maps and the statement then follows. Theorem 5.7. For 2 (0; 2], 2R + , = 2N, and a given time discretizationfg with 2 (0; 1), let Y be the Euler approximation for the stochastic process X as dened in 94 (5:1). Assume ( ~ A1) and ( ~ A2) hold. Then there exists a constant K such that jE[g(Y T )] E[g(X T )]jKjgj + (;) ;8g2C + (R d ) and E Z T 0 f(s;Y s )ds E Z T 0 f(s;X s )ds Kjfj (;) ;8f2C (H) where (;) = ( ; <; 1; >: Proof. Let v2C + (H) be the unique solution to (3.7). By Lemma 3.4 and Lemma 5.4, jA () v(s;x)j +jB () v(s;x)j Kjvj + K jfj +jgj + : (5.4) Hence,A () v(s;x);B () v(s;x)2C (R d ),8x2R d ,8s2 [0;T ]. Then, by It^ o's formula, Corollary 3.9, and (5.2), it follows that E[g(Y T )] E[v(0;X 0 )] = E[v(T;Y T )] E[v(0;X 0 )] = E[v(T;Y T )] E[v(0;Y 0 )] = E h Z T 0 @ t +A () Y is +B () Y is v(s;Y s )ds Z T 0 @ t +A () Y s +B () Y s v(s;Y s )ds 95 + Z T 0 @ t +A () Y s +B () Y s v(s;Y s )ds i = E h Z T 0 A () Y is v(s;Y s )A () Y s v(s;Y s ) + B () Y is v(s;Y s )B () Y s v(s;Y s ) ds + Z T 0 f(s;Y s )ds i = E h Z T 0 E A () Y is v(s;Y s )A () Y s v(s;Y s )j ~ F is +E B () Y is v(s;Y s )B () Y s v(s;Y s )j ~ F is ds + Z T 0 f(s;Y s )ds i : Also, by It^ o's formula and (3.7), E[g(X T )] E[v(0;X 0 )] = E[v(T;X T )] E[v(0;X 0 )] = E Z T 0 f(s;X s )ds : Hence, E[g(Y T )] E[g(X T )] E Z T 0 f(s;Y s )ds E Z T 0 f(s;X s )ds = E h Z T 0 E A () Y is v(s;Y s )A () Y s v(s;Y s )j ~ F is +E B () Y is v(s;Y s )B () Y s v(s;Y s )j ~ F is ds i : Then, by (5.4) and Lemma 4.2, if f = 0, there exists a constant K independent of g such that E[g(Y T )] E[g(X T )] Kjgj + (;) and if g = 0, there exists a constant K independent of f such that E Z T 0 f(s;Y s )ds E Z T 0 f(s;X s )ds Kjfj (;) : 96 This proves the results. 97 Chapter 6 Conclusion and Future Work 6.1 Conclusion The weak Euler approximation for stochastic dierential equations driven by L evy pro- cesses has been studied. The model under consideration was in a more general form than existing ones, and hence applicable to a broader range of processes arising from various elds including sciences, engineering, and nance. In order to investigate the convergence of the weak Euler approximation to the process considered, the existence of a unique solution to the corresponding integro-dierential equation in H older space was proved. It was then shown that the Euler scheme yields positive weak order of convergence, provided that the coecients of the stochastic dierential equation are H older-continuous and the test function is continuously dierentiable to some positive order. In particular, if the coecients are slightly more than twice dierentiable and the test function has at least up to the fourth order derivative, then rst weak order convergence is obtained. 
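These rates can, in principle, be observed numerically. The sketch below is an illustration of such an experiment rather than of the scheme in the generality treated above: it assumes a one-dimensional toy equation driven by a Wiener process plus a compound Poisson jump part with state-independent jumps (so that all increments can be sampled exactly), an equidistant grid, and Monte Carlo estimation of E[g(Y_T)] across several step sizes; the coefficients, the test function and all parameters are arbitrary choices, not taken from the text.

import numpy as np

rng = np.random.default_rng(0)

# Toy weak-Euler experiment (illustrative model and parameters only):
#   dX_t = a(X_t) dt + b(X_t) dW_t + dJ_t,
# with J a compound Poisson process of rate lam and N(0, s2) jump sizes.
a = lambda x: -x                         # drift
b = lambda x: 0.5 + 0.1 * np.sin(x)      # diffusion coefficient
g = lambda x: np.cos(x)                  # smooth test function
lam, s2 = 1.0, 0.25                      # jump intensity and jump-size variance
T, x0 = 1.0, 0.0

def weak_euler_estimate(n_steps, n_paths):
    """Estimate E[g(Y_T)] for the Euler scheme with coefficients frozen at the left grid point."""
    h = T / n_steps
    y = np.full(n_paths, x0)
    for _ in range(n_steps):
        dw = np.sqrt(h) * rng.standard_normal(n_paths)               # Wiener increment
        n_jumps = rng.poisson(lam * h, n_paths)                      # jumps in the step
        dj = np.sqrt(s2 * n_jumps) * rng.standard_normal(n_paths)    # summed Gaussian jump sizes
        y = y + a(y) * h + b(y) * dw + dj
    return g(y).mean()

n_paths = 200_000
for n_steps in (4, 8, 16, 32, 64):
    print(f"steps = {n_steps:3d}   E[g(Y_T)] ~ {weak_euler_estimate(n_steps, n_paths):+.5f}")

# If the weak error decays like delta^1, successive estimates should approach a
# limit with differences shrinking by roughly a factor of 2 per row, up to Monte
# Carlo noise of order n_paths^(-1/2); in practice this calls for many paths or
# a reference value computed with a very fine grid.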
The assumptions on the regularity of coecient and test functions are signicantly weaker than those in the existing literature, while the same rate of convergence is achieved. 98 The result was further generalized to general L evy-driven stochastic dierential equa- tions, with the L evy operatorB () z u(t;x) of the form B () z u(t;x) = 1 f2(1;2]g c(z);@ x u(t;x) + Z U u(t;x +l(z;v))u(t;x) 1 fv2U 1 g 1 f2(1;2]g @ x u(t;x);l(z;v) () (z;v) () (dv); instead of B () z u(t;x) = 1 f2(1;2]g c(z);@ x u(t;x) + Z R d 0 u(t;x +y)u(t;x) 1 fjyj1g 1 f2(1;2]g @ x u(t;x);y () (z;y) () (dy): The same rate of convergence was veried for the general equations [65]. 6.2 Future Work A straightforward step following the proof of the rate of convergence is to demonstrate the result numerically. In particular, when the jump part is a compound Poisson process or a stable process, the increments of the corresponding L evy process can be exactly simulated. In other cases, the increments can be approximated, for example, by neglecting the small jumps to deliver a compound Poisson process, which can usually be simulated [4]. The stochastic dierential equations considered so far are associated with non-degenerate L evy operators. A further step is to study the case with degenerate operators. That is, (A1) does not hold. The uniqueness of the solution in this case has already been proved [62] 99 while this problem has not been systematically studied in the literature and there could be much more challenges involved. A plausible conjecture is as follows: Conjecture 6.1. For2 (0; 2], 2 (;1), = 2N, and a given time discretizationfg with 2 (0; 1), let Y be the Euler approximation for the stochastic process X. Assume (A2) holds. Then there exists a constant K such that jE[g(Y T )] E[g(X T )]jK (;) ;8g2C (R d ); where (;) = ( 1; 2 (; 2); 1; 2 (2;1): 100 Bibliography [1] Amin, K. I. 1993. Jump Diusion Option Valuation in Discrete Time. The Journal of Finance. 48(5). pp. 1833-1863. [2] Andersen, L. and J. Andreasen. 2000. Jump-Diusion Processes: Volatility Smile Fitting and Numerical Methods for Option Pricing. Review of Derivatives Research. 4(3). pp. 231-262. [3] Applebaum, D. 2004. L evy Processes and Stochastic Calculus. Cambridge University Press. [4] Asmussen, S. and J. Rosi nski. 2001. Approximations of Small Jumps of L evy Pro- cesses with a View Towards Simulation. Journal of Applied Probability. 38(2). pp. 482-493. [5] Bachelier, L. 1900. Th eorie de la sp eculation. Annales Scientiques de l' Ecole Nor- male Sup erieure. 3(17). pp. 21-86. English translation in Cootner, P. (Editor). 1964. The Random Character of Stock Market Prices. The MIT Press. [6] Bakshi, G., C. Cao, and Z. Chen. 1997. Empirical Performance of Alternative Option Pricing Models. The Journal of Finance. 52(5). pp. 2003-2049. [7] Bakshi, G., C. Cao, and Z. Chen. 2000. Pricing and Hedging Long-Term Options. Journal of Econometrics. 94(1-2). pp. 277-318. [8] Bally, V. and D. Talay. 1996. The Law of the Euler Scheme for Stochastic Dierential Equations I. Convergence Rate of the Distribution Function. Probability Theory and Related Fields. 104(1). pp. 43-60. Springer Berlin / Heidelberg. [9] Bally, V. and D. Talay. 1996. The Law of the Euler Scheme for Stochastic Dierential Equations II. Convergence Rate of the Density. Monte Carlo Methods Applicaitons. 2(2). pp. 93-128. [10] Barndor-Nielsen, O. E., T. Mikosch, and S. Resnick (Editors). 2001. L evy Pro- cesses: Theory and Applications. Birkh auser Boston. 101 [11] Barndor-Nielsen, O. E. and N. Shephard. 
2001. Modelling by L evy Processes for Financial Econometrics. In Barndor-Nielsen, O. E., T. Mikosch, and S. Resnick (Editors) L evy Processes: Theory and Applications. Birkh auser Boston. [12] Bass, R. F. 2004. Stochastic Dierential Equations Driven by Symmetric Stable Processes. S eminaire de Probabilit es. XXXVI. Lecture Notes in Mathematics. 1801. pp. 302-313. Springer Berlin / Heidelberg. [13] Bass, R. F., K. Burdzy, and Z. Q. Chen. 2004. Stochastic Dierential Equations Driven by Stable Processes for Which Pathwise Uniqueness Fails. Stochastic Pro- cesses and Their Applications. 111(1). pp. 1-15. [14] Bates, D. S. 1996. Jumps and Stochastic Volatility: Exchange Rate Processes Im- plicit in Deutsche Mark Options. The Review of Financial Studies. 9(1). pp. 69-107. The Society for Financial Studies. [15] Bates, D. S. 2000. Post-'87 Crash Fears in the S&P 500 Futures Option Market. Journal of Econometrics. 94(1-2). pp. 181-238. [16] Berestychi, J. 2004. Exchangeable Fragmentation-Coalescence Processes and Their Equilibrium Measures. Electronic Journal of Probability. 9. pp. 770-824. Institute of Mathematical Statistics. [17] Bergh, J. and J. L ofstr om. 1976. Interpolation Spaces: An Introduction. Springer- Verlag. [18] Bertoin, J. 1996. L evy Processes. Cambridge University Press. [19] Black, F. and M. Scholes. 1973. The Pricing of Options and Corporate Liabilities. Journal of Political Economy. 81(3). pp. 637-654. [20] Broadie, M. and O. Kaya. 2006. Exact Simulation of Stochastic Volatility and Other Ane Jump Diusion Processes. Operations Research. 54(2). pp. 217-231. [21] Brockwell, P. J. 2001. L evy-Driven Carma Processes. Annals of the Institute of Statistical Mathematics. 53(1). pp. 113-124. Springer Netherlands. [22] Carr, P. and A. Hirsa. 2003. Why Be Backward? Forward Equations for American Options. Risk. 16(1). pp. 103-107. [23] Carr, P. and D. B. Madan. 1999. Option Valuation Using the Fast Fourier Transform. The Journal of Computatoinal Finance. 2(4). pp. 61-73. [24] C inlar, E. and M. Pinsky. 1972. On Dams with Additive Inputs and a General Release Rule. Journal of Applied Probability. 9(2). pp. 422-429. [25] Coifman, R. R. and Y. Meyer. 1978. Au del a des op erateurs pseudo-di erentiels. Ast erisque. 57. Soci et e Math ematique de France. Paris. 102 [26] Cont, R. and P. Tankov. 2004. Financial Modelling with Jump Processes. Chapman & Hall/CRC. [27] Cont, R. and E. Voltchkova. 2005. A Finte Dierence Scheme for Option Pricing in Jump Diusion and Exponential L evy Processes. SIAM Journal on Numerical Analysis. 43(4). pp. 1596-1626. Society for Industrial and Applied Mathematics. [28] Dellacherie, C. and P. A. Meyer. 1975. Probabilit es et potentiel. Hermann Paris. [29] Due, D., J. Pan, and K. Singleton. 2000. Transform Analysis and Asset Pricing for Ane Jump-Diusions. Econometrica. 68(6). pp. 1343-1376. The Econometric Society. [30] Einstein, A. 1905. Uber die von der molekularkinetischen Theorie der W arme geforderte Bewegung von in ruhenden Fl ussigkeiten suspendierten Teilchen. An- nalen der Physik. 322(8). pp. 549-560. [31] Ekstr om, E. and J. Tysk. 2007. Properties of Option Prices in Models with Jumps. Mathematical Finance. 17(3). pp. 381-397. Blackwell. [32] Feng, L. and V. Linetsky. 2008. Pricing Options in Jump-Diusion Models: an Extrapolation Approach. Operations Research. 56(2). pp. 304-325. [33] Fristedt, B. 1974. Sample Functions of Stochastic Processes with Stationary, Inde- pendent Increments. In Ney, P. and S. 
Port (Editors) Advances in Probability and Related Topics. 3. pp. 241-396. Marcel Dekker New York. [34] Gilbarg, D. and N. S. Trudinger. 2001. Elliptic Partial Dierential Equations of Second Order. Springer. [35] Glasserman, P. 2003. Monte Carlo Methods in Financial Engineering. In series Stochastic Modelling and Applied Probability. 53. Springer. [36] Goldie, C. 1967. A Class of Innitely Divisible Random Variables. Mathematical Pro- ceedings of the Cambridge Philosophical Society. 63(4). pp. 1141-1143. Cambridge University Press. [37] Guyon, J. 2006. Euler Scheme and Tempered Distributions. Stochastic Processes and their Applications. 116(6). pp. 877-904. Elsevier. [38] d'Halluin, Y., P. A. Forsyth, and K. R. Vetzal. 2005. Robust Numerical Methods for Contingent Claims under Jump Diusion Processes. IMA Journal of Numerical Analysis. 25(1). pp. 87-112. [39] Heston, S. T. 1993. A Closed-Form Solution for Options with Stochastic Volatility with Applications to Bond and Currency Options. Review of Financial Studies. 6(2). pp. 327-343. Oxford University Press. 103 [40] Hirsa, A. and D. B. Madan. 2003. Pricing American Options under Variance Gamma. The Journal of Computatoinal Finance. 7(2). pp. 63-80. [41] Hu, Y. Z. 1996. Strong and Weak Order of Time Discretization Schemes of Stochas- tic Dierential Equations. S eminaire de Probabilit es. XXX. Lecture Notes in Math- ematics. 1626. pp. 218-227. Springer Berlin / Heidelberg. [42] Jacod, J. 2004. The Euler Scheme for L evy Driven Stochastic Dierential Equations: Limit Theorems. The Annals of Probability. 32(3). pp. 1830-1872. [43] Jacod, J. and P. Protter. 1991. Une remarque sur les equations di erentielles stochas- tiques a solutions Markoviennes. S eminaire de Probabilit es. XXV. Lecture Notes in Mathematics. 1485. pp. 138-139. Springer Berlin / Heidelberg. [44] Jacod, J. and A. N. Shiryaev. 2003. Limit Theorems for Stochastic Processes (2nd edition). Springer. [45] Jarrow, R. A. and D. Madan. 1995. Option Pricing Using the Term Structure of Interest Rates to Hedge Systematic Discontinuities in Asset Returns. Mathematical Finance. 5(4). pp. 311-336. Wiley Periodicals. [46] Jarrow, R. A. and E. R. Rosenfeld. 1984. Jump Risks and the Intertemporal Capital Asset Pricing Model. Journal of Business. 57(3). pp. 337-351. University of Chicago Press. [47] Johannes, M. 2004. The Statistical and Economic Role of Jumps in Continuous- Time Interest Rate Models. The Journal of Finance. 59(1). pp. 227-260. American Finance Association. [48] Kloeden, P. E. and E. Platen. 2000. Numerical Solution of Stochastic Dierential Equations. Springer. [49] Kloeden, P. E., E. Platen, and N. Hofmann. 1995. Extrapolation Methods for the Weak Approximation of It^ o Diusions. SIAM Journal on Numerical Analysis. 32(5). pp. 1519-1534. Society for Industrial and Applied Mathematics. [50] Komatsu, T. 1982. On the Pathwise Uniqueness of Solutions of One-Dimensional Stochastic Dierential Equations of Jump Type. Proceedings of the Japan Academy, Series A, Mathematical Sciences. 58(8). pp. 353-356. [51] Komatsu, T. 1984. On the Martingale Problem for Generators of Stable Processes with Perturbations. Osaka Journal of Mathematics. 22(1). pp. 113-132. Osaka Uni- versity and Osaka City University. [52] Kou, S. G. 2002. A Jump-Diusion Model for Option Pricing. Management Science. 48(8). pp. 1086-1101. 104 [53] Kubilius, K. 1990. On the Rate of Convergence of the Distributions of Semimartin- gales to the Distribution of Stable Process. In Grigelionis B. 
(Editor) Probability Theory and Mathematical Statistics. 2. pp. 22-34. [54] Kubilius, K. and E. Platen. 2001. Rate of Weak Convergence of the Euler Approxi- mation for Diusion Processes with Jumps. Research Paper Series. 54. Quantitative Finance Research Centre. University of Technology. Sydney. [55] LeGall, J. F. 1983. Applications du temps local aux equations di erentielles stochas- tiques unidimensionnelles. S eminaire de Probabilit es. XVII. Lecture Notes in Math- ematics. 986. pp. 15-32. Springer Berlin / Heidelberg. [56] Liu, J., F. Longsta, and J. Pan. 2003. Dynamic Asset Allocation with Event Risk. The Journal of Finance. 58(1). pp. 231-259. Blackwell Publishing. [57] Merton, R. C. 1976. Option Pricing when Underlying Stock Returns are Discontin- uous. Journal of Financial Economics. 3(1-2). pp. 125-144. [58] Mikulevi cius, R. and E. Platen. 1988. Time Discrete Taylor Approximations for It o Processes with Jump Component. Mathematische Nachrichten. 138. pp. 93-104. Wiley. [59] Mikulevi cius, R. and E. Platen. 1991. Rate of Convergence of the Euler Approx- imation for Diusion Processes. Mathematische Nachrichten. 151(1). pp. 233-239. Wiley. [60] Mikulevi cius, R. and H. Pragarauskas. 1992. On the Cauchy Problem for Certain Integro-Dierential Operators in Sobolev and H older Spaces. Lithuanian Mathemat- ical Journal. 32(2). pp. 238-264. Springer New York. [61] Mikulevi cius, R. and H. Pragarauskas. 1992. On the Martingale Problem Associated with Nondegenerate L evy Operators. Lithuanian Mathematical Journal. 32(3). pp. 297-311. Springer New York. [62] Mikulevi cius, R. and H. Pragarauskas. 1993. On the Uniqueness of Solution to a Martingale Problem Associated with a Degenerate L evy's Operator. Lithuanian Mathematical Journal. 33(4). pp. 352-367. Springer New York. [63] Mikulevi cius, R. and H. Pragarauskas. 2009. On H older Solutions of the Integro- Dierential Zakai Equation. Stochastic Processes and their Applications. 119(10). pp. 3319-3355. Elsevier. [64] Mikulevi cius, R. and C. Zhang. 2010. On the Rate of Convergence of Weak Euler Ap- proximation for Nondegenerate It^ o Diusion and Jump Processes. arXiv:1007.2914. Submitted to Stochastic Processes and their Applications. pp. 1-33. Elsevier. 105 [65] Mikulevi cius, R. and C. Zhang. 2010. On the Rate of Convergence of Weak Euler Approximation for Non-degenerate SDEs. arXiv:1009.4728. pp. 1-40. [66] Milstein, G. N. 1979. A Method of Second-Order Accuracy Integration of Stochastic Dierential Equations. Theory of Probability and its Applications. 23(2). pp. 396- 401. Society for Industrial and Applied Mathematics. [67] Milstein, G. N. 1986. Weak Approximation of Solutions of Systems of Stochastic Dierential Equations. Theory of Probability and its Applications. 30(4). pp. 750- 766. Society for Industrial and Applied Mathematics. [68] Milstein, G. N. 1995. Numerical Integration of Stochastic Dierential Equations. In series Mathematics and Its Applications. 313. Springer. [69] Moran, P. A. P. 1969. A Theory of Dams with Continuous Input and a General Release Rule. Journal of Applied Probability. 6. pp. 88-98. [70] Naik, V. and M. Lee. 1990. General Equilibrium Pricing of Options on the Market Portfolio with Discontinuous Returns. The Review of Financial Studies. 3(4). pp. 493-521. Oxford University Press. [71] Nakao, S. 1972. On the Pathwise Uniqueness of Solutions of One-Dimensional Stochastic Dierential Equations. Osaka Journal of Mathematics. 9(3). pp. 513-518. [72] ksendal, B. K. 2003. 
Stochastic Dierential Equations: An Introduction with Ap- plications (6th Edition). Springer. [73] Platen, E. 1999. An Introduction to Numerical Methods for Stochastic Dierential Equations. Acta Numerica. 8. pp. 197-246. Cambridge University Press. [74] Protter, P. E. 2003. Stochastic Integration and Dierential Equations (2nd Edition). In series Stochastic Modelling and Applied Probability. 21. Springer. [75] Protter, P. E. and D. Talay. 1997. The Euler Scheme for L evy Driven Stochastic Dierential Equations. The Annals of Probability. 25(1). pp. 393-423. [76] Rubenthaler, S. 2003. Numerical Simulation of the Solution of a Stochastic Dieren- tial Equation Driven by a L evy Process. Stochastic Processes and their Applications. 103(2). pp. 311-349. Elsevier. [77] Saito, Y. and T. Mitsui. 1996. Stability Analysis of Numerical Schemes for Stochastic Dierential Equations. SIAM Journal on Numerical Analysis. 33(6). pp. 2254-2267. Society for Industrial and Applied Mathematics. [78] Sato, K. I. 1999. L evy Processes and Innitely Divisible Distributions. In series Cambridge Studies in Advanced Mathematics. 68. Cambridge University Press. 106 [79] von Smoluchowski, M. 1906. Zur kinetischen Theorie der Brownschen Molekularbe- wegung und der Suspensionen. Annalen der Physik. 326(14). pp. 756-780. [80] Schoutens, W. 2003. L evy Processes in Finance: Pricing Financial Derivatives. Wiley New York. [81] Schurz, H. 1997. Stability, Stationarity, and Boundedness of Some Implicit Numer- ical Methods for Stochastic Dierential Equations and Applications. Logos Verlag Berlin. [82] Skorohod. A. V. 1991. Random Processes with Independent Increments. Springer. (Russian original: 1964. Nauka Moscow.) [83] Sobczyk, K. 1991. Stochastic Dierential Equations with Applications to Physics and Engineering. In series Mathematics and its Applications. 40. Kluwer Academic Publishers. [84] Stein, E. M. 1971. Singular Integrals and Dierentiability Properties of Functions. Princeton University Press. [85] Stuck, B. W. and B. Kleiner, B. 1974. A Statistical Analysis of Telephone Noise. The Bell System Technical Journal. 53(7). pp. 1263-1320. [86] Talay, D. 1984. Ecient Numerical Schemes for the Approximation of Expectations of Functionals of the Solution of a S.D.E., and Applications. Filtering and Control of Random Processes. Lecture Notes in Control and Information Sciences. 61. pp. 294-313. Springer Berlin / Heidelberg. [87] Talay, D. 1986. Discretization of a Stochastic Dierential Equation and Rough Esti- mate of the Expectations of Functionals of the Solution. ESAIM: Mathematical Mod- elling and Numerical Analysis - Mod elisation Math ematique et Analyse Num erique. 20(1). pp. 141-179. [88] Talay, D. 1995. Simulation of Stochastic Dierential Systems. Lecture Notes in Physics. 451. pp. 54-96. Springer Berlin / Heidelberg. [89] Talay, D. and L. Tubaro. 1990. Expansion of the Global Error for Numerical Schemes Solving Stochastic Dierential Equations. Stochastic Analysis and Applications. 8(4). pp. 483-509. [90] Tavella, D. and C. Randall. 2000. Pricing Financial Instruments: The Finite Dif- ference Method. Wiley New York. [91] Triebel, H. 1983. Theory of Function Spaces. Birkhaueser Verlag. [92] Wolpert, R. L. and M. S. Taqqu. 2005. Fractional Ornstein-Uhlenbeck L evy Pro- cesses and the Telecom Process: Upstairs and Downstairs. Signal Processes. 85(8). pp. 1523-1545. Elsevier North-Holland. 107 [93] Yamada, T. and S. Watanabe. 1971. On the Uniqueness of Solutions of Stochastic Dierential Equations. 
Journal of Mathematics of Kyoto University. 11(1). pp. 155-167.

[94] Zhang, X. L. 1997. Numerical Analysis of American Option Pricing in a Jump-Diffusion Model. Mathematics of Operations Research. 22(3). pp. 668-690.

Index

adapted, 8
bounded below, 17
càdlàg, 8
characteristic triplet, 16
compensated Poisson integral, 19
compensated Poisson process, 10
compensator, 22
compound Poisson process, 11
converge with a weak order, 5
diffusion process, 3
Euler approximation, 54, 73
filtration, 7
Hölder continuous, 9
independent increments, 8
index of stability, 13
infinitely divisible, 15
interlacing process, 13
Itô's formula, 21
jump process, 17
Lévy jump-diffusion process, 12
Lévy measure, 18
Lévy process, 8
Lévy symbol, 16
Lévy-Itô decomposition, 19
Lévy-Khintchine formula, 16
martingale, 10
Poisson integral, 18
Poisson process, 10
random measure of jumps, 17
sample path, 8
self-similar, 15
stable, 13
stable process, 14
standard Wiener process, 9
stationary increments, 8
stochastic continuity, 9
stochastic process, 7
strictly stable, 13
time discretization, 53
Wiener process, 9