On The Simple and Jump-Adapted Weak Euler Schemes For Lévy Driven SDEs

BY LANG WANG

THESIS

Submitted in partial fulfillment of the requirements for the degree of Master of Science in Applied Mathematics in the Graduate School of the University of Southern California

May 2015
Los Angeles, California

ADVISOR: Professor Remigijus Mikulevičius
COMMITTEE MEMBERS: Professor Sergey V. Lototsky, Professor Jianfeng Zhang

ABSTRACT

This is a comprehensive study of the simple and jump-adapted weak Euler schemes applied to the approximation of solutions of possibly completely degenerate SDEs driven by Lévy processes, with Hölder-continuous coefficients. In particular, we'll give actual simulations for the stable-Lévy cases, along with some further discussion of the approximation results. We'll see that in the stable-Lévy cases, with a "nice" choice of coefficient functions, it's possible to obtain well-performing simulations in practice.

Keywords: Simple Euler scheme; Jump-adapted Euler scheme; Stable-Lévy process

ACKNOWLEDGEMENT

This thesis would not have been possible without the support of many people. Many thanks to my advisor, Remigijus Mikulevičius, who read my numerous revisions, guided me to a better comprehension of the main ideas, and helped me resolve many confusions during this study. Thanks also to my committee members, Sergey V. Lototsky and Jianfeng Zhang, who contributed their time to review my thesis and offered suggestions. Thanks to the University of Southern California for providing a wonderful study environment in which my thesis could be completed on time. And finally, thanks to my parents, roommates and numerous friends who endured this long process with me, always offering love and encouragement when I got frustrated.
TABLE OF CONTENTS

CHAPTER 1: INTRODUCTION
1.1 Motivation: From Brownian Motion to Jump Process
1.2 Literature Review and Model Set Up
1.3 Notations
CHAPTER 2: SIMULATIONS OF DRIVEN PROCESSES
2.1 Time Discretization
2.2 Simulations for Brownian Motions
2.3 Simulations for Compound Poisson Processes
2.4 Simulations for Lévy Processes
2.5 Simulations for Symmetric Stable Lévy Processes
CHAPTER 3: SIMULATIONS OF SDEs DRIVEN BY LÉVY PROCESSES
3.1 Simulations of Lévy Driven SDEs
3.2 Backward Kolmogorov Equation
3.3 Real Simulations
3.4 Analysis Based on Numerical Results
CHAPTER 4: CONCLUSION
APPENDIX: MATLAB CODES FOR SOME ALGORITHMS
BIBLIOGRAPHY

CHAPTER 1: INTRODUCTION

1.1 Motivation: From Brownian Motion to Jump Process

Along with the rapid development of financial markets during the last century, stochastic modeling of price fluctuations has become a large field of research. Traditionally, Brownian motion was the most crucial ingredient of the earlier studies and the basis for nearly all continuous-time models. For example, the famous Black-Scholes model:

$$S_t = S_0 \exp[\mu t + \sigma W_t]$$

or, in differential form:

$$\frac{dS_t}{S_t} = \sigma\, dW_t + \Big(\mu + \frac{\sigma^2}{2}\Big)\, dt$$

where $W_t$ is a Brownian motion and $S_t$ is the price process, which in this case is called a geometric Brownian motion. The explicit solution of this model makes it quite convenient to apply in practice, and, as Cont and Tankov suggest in reference [1], Chapter 1, this model does share some features with real market data.
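As an aside (not part of the thesis text), the explicit solution above can be sampled exactly on a time grid using independent Gaussian increments of $W_t$; a minimal Python sketch, with the function name and signature ours:

```python
import numpy as np

def gbm_path(S0, mu, sigma, T, n, rng=None):
    # Exact sampling of S_t = S0 * exp(mu*t + sigma*W_t) on an equal grid:
    # W is built from n i.i.d. N(0, h) increments, h = T/n.
    rng = np.random.default_rng() if rng is None else rng
    h = T / n
    t = np.linspace(0.0, T, n + 1)
    W = np.concatenate(([0.0], np.cumsum(rng.standard_normal(n)) * np.sqrt(h)))
    return S0 * np.exp(mu * t + sigma * W)
```

Because the solution is explicit, no discretization error is incurred at the grid points; this is exactly the convenience that is lost once the model's assumptions are relaxed.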
Due to these reasons, the Black-Scholes model was once the most commonly used model in applications (and is still widely used nowadays). However, the Black-Scholes model was constructed under a series of strict assumptions, some of which are contradicted by the real world. For instance, the assumption that people can always borrow and lend any amount of cash at the same riskless rate typically fails: there should be different rates for borrowing and lending, and moreover the rate could depend on the amount of money and might change over time. Loosening this assumption leads to an SDE without an explicit solution, which motivates numerical simulation of SDEs.

Another important problem with the traditional models is the continuity assumption. Built on the continuous Brownian motion, traditional models implicitly assume that prices evolve along a continuous path. However, event-driven uncertainties in the real world, such as corporate defaults, operational failures or central bank announcements, can lead to discontinuities in the evolution of prices and other financial quantities. By analyzing time-series properties of historical data, many authors have argued for this point. For example, Cont and Tankov argued for the existence of jumps in [1] with data from the NYSE as well as evidence from option markets. Meanwhile, Jorion argued for the presence of jumps in foreign exchange and stock markets in [11], and Johannes argued for jumps in the short-term interest rate in [12]. Given these facts, it is necessary to consider stochastic processes with jumps instead of just Brownian motion. Also, as in the continuous case, only a small class of SDEs with jumps admits explicit solutions, so it is important to construct discrete-time approximations. In what follows, we'll focus on the weak Euler simulation of a special class of SDEs with jumps.
1.2 Literature Review and Model Set Up

As discussed above, continuous models based on Brownian motion have been developed for several decades, and hence there is an extensive literature on these models as well as on simulations of the pure-diffusion SDEs related to them. However, the literature on numerical approximations of SDEs with jumps, in particular weak approximations, is still quite limited. As with most other numerical methods, the main issue for these approximations is the convergence rate. In [13], Mikulevičius & Platen derived a theorem for jump-adapted weak Taylor schemes of any weak order. In [14], Liu & Li derived a weak convergence theorem along with the leading coefficients of the global error. In [2] & [3], Mikulevičius derived convergence theorems for the weak Euler schemes, both simple and jump-adapted, for Lévy driven SDEs, possibly degenerate. These two papers show that both the simple and the jump-adapted Euler schemes have nice convergence properties under the usual conditions, and we'll follow them in the numerical experiments of this thesis.

Following R. Mikulevičius [2], let's consider the following one-dimensional model. Let $(\Omega,\mathcal{F},\mathbf{P})$ be a complete probability space with a filtration $\mathbb{F}=\{\mathcal{F}_t\}_{t\in[0,T]}$ of $\sigma$-algebras satisfying the usual conditions, and let $\alpha\in(0,2]$ be a fixed constant. Our main model in $\mathbb{R}$ is:

$$X_t = X_0 + \int_0^t a(X_s)\,ds + \int_0^t b(X_s)\,dW_s + \int_0^t G(X_{s-})\,dZ_s,\quad t\in[0,T] \tag{1.1}$$

where $a(x)$, $b(x)$, $G(x)$ are measurable and bounded, with $a=0$ if $\alpha\in(0,1)$ and $b=0$ if $\alpha\in(0,2)$.
The process $W_s$ is a standard Brownian motion in $\mathbb{R}$, and the process $Z_s$ is a Lévy process whose characteristic function is $e^{t\eta(z)}$ with

$$\eta(z) = \int_{\mathbb{R}_0} \big[e^{izv} - 1 - i\,\chi_\alpha(v)\,vz\big]\,\pi(dv)$$

where $\chi_\alpha(v) = \chi_{[-1,1]}(v)\,\mathbf{1}_{\{\alpha\in(1,2]\}}$ and $\mathbb{R}_0 = \mathbb{R}\setminus\{0\}$. Then we have

$$Z_t = \int_0^t\!\!\int (1-\chi_\alpha(v))\,v\,p(ds,dv) + \int_0^t\!\!\int \chi_\alpha(v)\,v\,q(ds,dv)$$

where $p(dt,dv)$ is a Poisson point measure on $[0,\infty)\times\mathbb{R}_0$ with expectation $\mathbf{E}[p(dt,dv)] = \pi(dv)\,dt$, and $q(dt,dv) = p(dt,dv) - \pi(dv)\,dt$ is the compensated Poisson measure. Also, we assume $\int (|v|^\alpha \wedge 1)\,\pi(dv) < \infty$; in other words, $Z$ is a Lévy process of order $\alpha$.

To approximate the equation, we take a time discretization $\{\tau_i, i=0,\dots,n\}$ of the interval $[0,T]$ such that $0=\tau_0<\tau_1<\dots<\tau_n=T$ and $\max_i(\tau_i-\tau_{i-1})\le\delta$. Typically, the $\tau_i$ should be $\mathbb{F}$-stopping times, and we'll discuss two special choices of $\{\tau_i\}$ later. Define the Euler approximation of $X$ as an $\mathbb{F}$-adapted process $\{Y_t\}_{t\in[0,T]}$ with:

$$Y_t = X_0 + \int_0^t a(Y_{\tau_i(s)})\,ds + \int_0^t b(Y_{\tau_i(s)})\,dW_s + \int_0^t G(Y_{\tau_i(s)})\,dZ_s,\quad t\in[0,T] \tag{1.2}$$

where $\tau_i(s) = \tau_i$ if $s\in[\tau_i,\tau_{i+1})$, $i=0,1,\dots,n-1$.

Our main goal is to investigate how "good" the approximation is. To quantify this, we introduce the convergence order of $Y$: $Y$ is said to converge with rate $\kappa>0$ if for each bounded smooth function $g$ with bounded derivatives there exists a constant $C$, depending only on $g$, such that

$$|\mathbf{E}\,g(Y_T) - \mathbf{E}\,g(X_T)| \le C\,\delta^{\kappa}.$$

In Mikulevičius [2], the convergence rates for the simple and jump-adapted Euler schemes are derived, and we'll test the convergence numerically later.

The thesis is organized as follows. In Section 1.3, some notation is introduced. In Chapter 2, we'll discuss the simulation of the driving processes, including Brownian motion, compound Poisson processes and Lévy processes, with either the simple or the jump-adapted Euler scheme. Chapter 2 will also cover some theoretical review.
Then in Chapter 3, simulations for (1.1) are covered; also, as in [2], we'll relate $X_t$ to the solution of the backward Kolmogorov equation via the Itô formula to check the quality of our simulations. Finally, in Chapter 4, some conclusions and further discussion are presented.

1.3 Notations

Denote $H = [0,T]\times\mathbb{R}$, $\mathbb{N} = \{0,1,2,\dots\}$, $\mathbb{R}_0 = \mathbb{R}\setminus\{0\}$. Denote

$$\partial_t u(t,x) = \frac{\partial}{\partial t}u(t,x),\quad \partial_x u(t,x) = \frac{\partial}{\partial x}u(t,x),\quad \partial_x^k u(t,x) = \frac{\partial^k}{\partial x^k}u(t,x),\ k\in\mathbb{N}.$$

Write $\beta = [\beta]^- + \{\beta\}^+$, where $[\beta]^-\in\mathbb{N}$ and $\{\beta\}^+\in(0,1]$. Let $\tilde{C}^{\beta}(H)$ denote the Hölder (Lipschitz) space of measurable functions $u$ on $H$ such that the norm

$$|u|_{\beta} := \sum_{0\le\gamma\le[\beta]^-} |\partial_x^{\gamma} u|_0 + \sup_{\gamma=[\beta]^-,\ t,\ x\ne\tilde{x}} \frac{|\partial_x^{\gamma} u(t,x) - \partial_x^{\gamma} u(t,\tilde{x})|}{|x-\tilde{x}|^{\{\beta\}^+}}$$

is finite, where $|v|_0 := \sup_{(t,x)\in H} |v(t,x)|$. We denote by $\tilde{C}^{\beta}(\mathbb{R})$ the corresponding function space on $\mathbb{R}$.

CHAPTER 2: SIMULATIONS OF DRIVEN PROCESSES

2.1 Time Discretization

In this section we discuss two specific choices of time discretization. As discussed above, take a sequence of $\mathbb{F}$-stopping times $\{\tau_i, i=0,\dots,n\}$ forming a partition of $[0,T]$ such that $0=\tau_0<\tau_1<\dots<\tau_n=T$ and $\max_i(\tau_i-\tau_{i-1})\le\delta$. Recall our original model:

$$X_t = X_0 + \int_0^t a(X_s)\,ds + \int_0^t b(X_s)\,dW_s + \int_0^t G(X_{s-})\,dZ_s,\quad t\in[0,T]$$

An Euler approximation of $X$ is defined by:

$$Y_t = X_0 + \int_0^t a(Y_{\tau_i(s)})\,ds + \int_0^t b(Y_{\tau_i(s)})\,dW_s + \int_0^t G(Y_{\tau_i(s)})\,dZ_s,\quad t\in[0,T]$$

with $\tau_i(s) = \tau_i$ if $s\in[\tau_i,\tau_{i+1})$, $i=0,1,\dots,n-1$. Our main issue is to specify $\{\tau_i, i=0,\dots,n\}$.

2.1.1 Approximate Simple Euler Scheme

One natural choice of the stopping times $\{\tau_i, i=0,\dots,n\}$ is to set them as fixed time points.
In particular, let's consider the equal-length partition of $[0,T]$, which leads to the simple Euler scheme. Let $n\in\mathbb{N}$ be the total number of partition intervals and denote $h = T/n$; the fixed equal-length partition is given by:

$$\tau_i = i\frac{T}{n} = ih,\quad i=0,1,\dots,n.$$

Then, employing the same idea as in [2] and [15], the approximation of equation (1.1), $\{\tilde{Y}_i\}_{i=0,\dots,n}$, is given by:

$$\tilde{Y}_0 = X_0 = x_0,\qquad \tilde{Y}_{i+1} := \tilde{Y}_{\tau_{i+1}} := \tilde{Y}_i + a(\tilde{Y}_i)h + b(\tilde{Y}_i)(W_{\tau_{i+1}} - W_{\tau_i}) + G(\tilde{Y}_i)(Z_{\tau_{i+1}} - Z_{\tau_i})$$

for $i = 0,1,\dots,n-1$. (2.1)

It's easy to see that the only unknown parts are $\Delta W_{i+1} := W_{\tau_{i+1}} - W_{\tau_i}$ and $\Delta Z_{i+1} := Z_{\tau_{i+1}} - Z_{\tau_i}$, which are increments of the Brownian motion and the Lévy process, respectively. Therefore, in order to approximate SDE (1.1), the core problem is to approximate these increments over the time intervals $[\tau_i,\tau_{i+1})$, $i=0,\dots,n-1$, or equivalently, to approximate the increments of the Brownian motion and the Lévy process. We'll discuss the details later.

2.1.2 Approximate Jump-Adapted Euler Scheme

Unlike Brownian motion, a Lévy process is a process with jumps. Therefore, another natural choice of the partition might be the jump moments. However, since a general Lévy process may have infinitely many jumps even in a finite time interval, we can't directly take all the jump moments of the process to form a partition (recall a partition always contains finitely many grid points). Therefore, we need to further investigate the properties of Lévy processes before giving the specific definition of $\{\tau_i, i=0,\dots,n\}$ for the jump-adapted Euler scheme. On the other hand, as a special class of Lévy process, a compound Poisson process almost surely has a finite number of jumps in a finite interval. Therefore, the jump moments of a compound Poisson process are a natural choice for the jump-adapted partition.
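Whichever partition is used, scheme (2.1) advances by the same one-step update; only the step length and the increments change. A minimal Python sketch (illustrative only; the thesis's own appendix code is in MATLAB, and the function name is ours):

```python
def euler_step(y, dt, dW, dZ, a, b, G):
    # One step of scheme (2.1):
    #   Y_{i+1} = Y_i + a(Y_i)*dt + b(Y_i)*dW + G(Y_i)*dZ
    # The same update is used on a fixed grid (simple scheme) or on the
    # jump moments (jump-adapted scheme); only dt, dW, dZ differ.
    return y + a(y) * dt + b(y) * dW + G(y) * dZ
```

The coefficient functions a, b, G and the increments dW, dZ are supplied by the caller; simulating those increments is precisely the core problem discussed in the text.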
More precisely, suppose $(Z_t)_{t\in[0,T]}$ is a compound Poisson process with jump moments $\{\tau_i, i=0,\dots,n+1\}$ defined by:

$$\tau_0 = 0,\quad \tau_{i+1} := \inf\{t>\tau_i : \Delta Z_t \ne 0\}\wedge T.$$

In this case, we can approximate (1.1) by:

$$\hat{Y}_0 = X_0 = x_0,\qquad \hat{Y}_{\tau_{i+1}} := \hat{Y}_i + a(\hat{Y}_i)(\tau_{i+1}-\tau_i) + b(\hat{Y}_i)(W_{\tau_{i+1}} - W_{\tau_i}) + G(\hat{Y}_i)(Z_{\tau_{i+1}} - Z_{\tau_i})$$

for $i = 0,1,\dots,n-1$. (2.2)

Due to these "simple" properties of the compound Poisson process, we might want to use it to approximate general Lévy processes. We'll discuss the details and present the jump-adapted method for Lévy processes in the following sections.

Also, note that no matter which time discretization is chosen, the core problem is always to approximate the increments of the driving processes. The difference is that, for the simple Euler method, we need to check whether there are any jumps within each partition interval, while for the jump-adapted method, there is exactly one jump in each partition interval. Consequently, the simple Euler method is easy to set up but needs extra running time for the "check" procedure, while the jump-adapted method saves a lot of computation, provided the jump moments are easy to obtain. We'll deal with these issues in the following sections, for Brownian motion, compound Poisson processes, and Lévy processes.

2.2 Simulations for Brownian Motions

Let's first start with the easiest process, Brownian motion.

Definition 2.2.1 (Brownian motion): A stochastic process $B: [0,T]\times\Omega\to\mathbb{R}$ is called a (standard) Brownian motion if:
(i) $B_0 = 0$ a.s.
(ii) $B_t$ is almost surely continuous.
(iii) For any $0=t_0<\dots<t_n=T$, the increments $B_{t_1}, B_{t_2}-B_{t_1},\dots,B_{t_n}-B_{t_{n-1}}$ are independent. That is, $B_t$ has independent increments.
(iv) For any $0\le s<t\le T$, $B_t-B_s \sim \mathcal{N}(0,t-s)$, where $\mathcal{N}(0,t-s)$ denotes the normal distribution with variance $(t-s)$.
Since the increments of Brownian motion over non-overlapping time intervals are independent normal random variables, the Euler approximation is straightforward:

Algorithm 2.1 (Simulation of standard Brownian motion, simple Euler method): Suppose the time discretization $\{\tau_i, i=0,\dots,n\}$ is given as in Section 2.1.1, i.e. an equal-length partition of $[0,T]$ with step size $h = T/n$. Then:
Step 1: Simulate $n$ independent identically distributed (i.i.d.) random variables $\xi_k$, $k=1,\dots,n$, with $\xi_1\sim\mathcal{N}(0,1)$.
Step 2: Use the following iteration to get a simulation of $B_t$ at the grid points:

$$B_0 = 0,\quad B_{\tau_{k+1}} = B_{\tau_k} + \xi_{k+1}\sqrt{h},\quad k=0,1,\dots,n-1.$$

Similarly, if we want to apply the jump-adapted Euler method, the algorithm is nearly the same, except that the step sizes, and hence the distributions of the increments, change. Although it's not necessary to use the jump-adapted Euler method for Brownian motion, since Brownian motion has no jumps at all, we still give an algorithm for consistency.

Algorithm 2.2 (Simulation of standard Brownian motion, jump-adapted Euler method): Suppose the time discretization $\{\tau_i, i=0,\dots,n\}$ is a jump-adapted time discretization.
Step 1: Simulate $n$ i.i.d. random variables $\xi_k$, $k=1,\dots,n$, with $\xi_1\sim\mathcal{N}(0,1)$.
Step 2: Use the following iteration to get a simulation of $B_t$ at the grid points:

$$B_0 = 0,\quad B_{\tau_{k+1}} = B_{\tau_k} + \xi_{k+1}\sqrt{\tau_{k+1}-\tau_k},\quad k=0,1,\dots,n-1.$$

The most important property of Brownian motion for our simulation is the independent-increment property. This property will play a similar role in our simulations of compound Poisson processes and Lévy processes.
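Algorithm 2.1 can be sketched in a few lines of Python (the thesis appendix uses MATLAB; the function name here is ours):

```python
import numpy as np

def brownian_path(T, n, rng=None):
    # Algorithm 2.1: standard Brownian motion on the equal-length grid.
    # Step 1: n i.i.d. N(0,1) variables; Step 2: cumulative sum scaled by sqrt(h).
    rng = np.random.default_rng() if rng is None else rng
    h = T / n
    xi = rng.standard_normal(n)
    B = np.concatenate(([0.0], np.cumsum(xi) * np.sqrt(h)))
    return B  # B[k] simulates B at tau_k = k*h
```

Algorithm 2.2 is identical except that each increment is scaled by $\sqrt{\tau_{k+1}-\tau_k}$ instead of the constant $\sqrt{h}$.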
Figure 2.1 shows a typical path of standard Brownian motion, with $T=10$ and $n=1000$.

[Figure 2.1: Brownian motion in $[0,10]$, with $n=1000$]

2.3 Simulations for Compound Poisson Processes

Recall that since Brownian motion is a continuous process, the efficiency of the simple Euler method and the jump-adapted method is almost the same: we don't need to check whether there are jumps between grid points. However, for processes with jumps, this becomes a totally different story. Before going to the simulation of Lévy processes, let's start with a simple but fundamental jump process: the compound Poisson process.

Firstly, recall that a Poisson process $N_t$ with intensity $\lambda$ is defined by $N_t = \sum_{n\ge 1} \mathbf{1}_{t\ge T_n}$, where $T_n = \sum_{i=1}^n \tau_i$ is the $n$-th arrival time, with $(\tau_i)_{i\ge 1}$ i.i.d. exponential random variables with parameter $\lambda$. In other words, a Poisson process counts the total number of arrivals by time $t$, with independent exponential waiting times between arrivals. A compound Poisson process is actually quite similar to the simple Poisson process.

Definition 2.3.1 (Compound Poisson process): A compound Poisson process with intensity $\lambda>0$ and jump size distribution $f$ is a stochastic process $X_t$ given by

$$X_t = \sum_{i=1}^{N_t} Y_i$$

with:
(i) $Y_i$ i.i.d. random variables with distribution $f$;
(ii) $N_t$ a Poisson process with intensity $\lambda$;
(iii) $N_t$ independent of $\{Y_i\}_{i\ge 1}$.

Note that requirement (iii) is crucial. Also, by setting $Y_i\equiv 1$, we recover a simple Poisson process with intensity $\lambda$; this explains the origin of the term "compound Poisson". From the definition, the following properties of the compound Poisson process are important and easy to deduce:

Property 2.3.2 (Properties of the compound Poisson process): Let $X_t$ be a compound Poisson process. Then:
(i) Sample paths of $X$ are cadlag (right continuous with left limits) piecewise constant functions.
In other words, the path of $X$ looks like a step function, with random jump moments and random jump sizes $\{Y_i\}_{i\ge 1}$.
(ii) The jump moments, denoted $\{T_i\}_{i\ge 1}$, have the same law as those of a simple Poisson process. That is, $T_{i+1}-T_i$ is an exponential random variable with parameter $\lambda$, and for $i\ne k$, $T_{i+1}-T_i$ is independent of $T_{k+1}-T_k$.
(iii) The total number of jumps of $X$ in a finite interval $[0,T]$ is almost surely finite.

Furthermore, similar to Brownian motion, compound Poisson processes also have independent and stationary increments. In order to get more insight into the compound Poisson process, we calculate its characteristic function.

Proposition 2.3.3 (Characteristic function of a compound Poisson process): Let $X_t$ be a compound Poisson process on $\mathbb{R}$ with intensity $\lambda$ and jump size distribution $f$. Then the characteristic function of $X_t$ is given by:

$$\mathbf{E}[e^{i\xi X_t}] = \exp\Big\{t\lambda \int_{-\infty}^{\infty}(e^{i\xi x}-1)\,f(dx)\Big\},\quad \forall \xi\in\mathbb{R}. \tag{2.1}$$

Proof: Firstly, let $\varphi(\xi) := \mathbf{E}[e^{i\xi Y_1}] = \int_{-\infty}^{\infty} e^{i\xi x}\,f(dx)$ be the characteristic function of the jump sizes. Then, conditioning on $N_t$, recalling that the jump sizes are i.i.d. and $N_t$ is a Poisson random variable with parameter $\lambda t$, we get:

$$\mathbf{E}[e^{i\xi X_t}] = \mathbf{E}\big[\mathbf{E}[e^{i\xi X_t}\mid N_t]\big] = \mathbf{E}[(\varphi(\xi))^{N_t}] = \sum_{k=0}^{\infty} (\varphi(\xi))^k e^{-\lambda t}\frac{(\lambda t)^k}{k!} = \exp[\lambda t(\varphi(\xi)-1)] = \exp\Big\{t\lambda \int_{-\infty}^{\infty}(e^{i\xi x}-1)\,f(dx)\Big\}.\ \square$$

Moreover, if we introduce a new measure $\pi(A) = \lambda f(A)$, the characteristic function can be rewritten as:

$$\mathbf{E}[e^{i\xi X_t}] = \exp\Big\{t \int_{-\infty}^{\infty}(e^{i\xi x}-1)\,\pi(dx)\Big\},\quad \forall \xi\in\mathbb{R}. \tag{2.2}$$

We will see later that (2.2) corresponds to the characteristic function of a general Lévy process, which implies that a compound Poisson process is in fact a special class of Lévy process. In order to approximate compound Poisson processes, we need to introduce the following jump measure.
Definition 2.3.4 (Poisson random measure): Let $(\Omega,\mathcal{F},\mathbf{P})$ be a probability space, $E\subset\mathbb{R}^d$, and $\mu$ a Radon measure on $(E,\mathcal{E})$, that is, $\mu$ is finite on every compact set in $\mathcal{E}$. We call $M$ a Poisson random measure on $E$ with intensity $\mu$ if $M$ satisfies:
(i) $M:\Omega\times\mathcal{E}\to\mathbb{N}$ is integer valued;
(ii) for almost every $\omega\in\Omega$, $M(\omega,\cdot)$ is a Radon measure on $E$;
(iii) for any $A\in\mathcal{E}$, $M(\cdot,A) := M(A)$ is a Poisson random variable with parameter $\mu(A)$;
(iv) $M(A_1), M(A_2),\dots$ are independent random variables whenever $A_1,A_2,\dots$ are disjoint measurable sets.

Now consider $E = [0,T]\times\mathbb{R}_0$, where $\mathbb{R}_0=\mathbb{R}\setminus\{0\}$. A Poisson random measure $M$ on $E$ can be written as a sum of Dirac measures: for points $(T_n,Y_n)\in E$,

$$M = \sum_{n\ge 1} \delta_{(T_n,Y_n)}.$$

Following this idea, we can define pathwise integrals over the Poisson random measure $M$ as follows. Given $\omega\in\Omega$ and a real-valued function $f$ on $E$:

1. If $f(x) = \sum_{i=1}^n a_i\mathbf{1}_{A_i}$ is a simple function, where $a_i\ge 0$ and $A_1,A_2,\dots$ are disjoint measurable subsets of $E$, then we define:

$$M(f) := \int_0^T\!\!\int f(s,v)\,M(ds,dv) := \sum_{i=1}^n a_i M(A_i).$$

It's easy to see that $M(f)$ is a random variable with $\mathbf{E}[M(f)] = \sum_{i=1}^n a_i\mu(A_i)$.

2. The rest is classical, as in the definition of the Lebesgue integral. If $f$ is positive, take a sequence of simple functions $f_n\uparrow f$ converging to $f$ and define $M(f) := \lim_{n\to\infty} M(f_n)$. For a general real-valued function $f$, define the integral through its positive and negative parts separately, provided $f$ is absolutely integrable with respect to $\mu$, that is, $\int_0^T\!\!\int |f(s,v)|\,\mu(ds,dv) < \infty$.

Finally, note that the expectation of the random variable $M(f)$ is given by:

$$\mathbf{E}[M(f)] = \mu(f) = \int_0^T\!\!\int f(s,v)\,\mu(ds,dv). \tag{2.3}$$

Now let's return to the compound Poisson process. Let $X_t$ be a compound Poisson process with intensity $\lambda$.
Define a random measure $p$ on $[0,T]\times\mathbb{R}_0$ that "counts" the jumps of $X_t$: for any measurable set $A\subset[0,T]\times\mathbb{R}_0$,

$$p(A) := \#\{(t,\, X_t - X_{t-}) \in A\} \tag{2.4}$$

where $\#$ denotes the number of elements of the set. In other words, $p([t_1,t_2]\times B)$ counts the jumps of $X_t$ during $[t_1,t_2]$ with jump sizes in $B$. The following proposition about $p$ is crucial for our simulations:

Proposition 2.3.5: Let $X_t$ be a compound Poisson process with intensity $\lambda$ and jump size distribution $f$, and let $p$ be the jump measure defined in (2.4). Then $p$ is a Poisson random measure with intensity $\mu(dt,dx) = \pi(dx)\,dt = \lambda f(dx)\,dt$.

Proof: For a proof of this proposition, one can refer to Cont & Tankov [1].

Also, by Proposition 2.3.5, we can always write a compound Poisson process in integral form:

$$X_t = \int_0^t\!\!\int x\,p(ds,dx) = \sum_{s\in[0,t]} \Delta X_s.$$

Now, with all these propositions, we are ready to simulate the compound Poisson process. Firstly, by Property 2.3.2, the jump moments of a compound Poisson process are sums of i.i.d. exponential random variables. Therefore, we can simulate those exponential random variables to get the exact jump moments in $[0,T]$. Secondly, we need to simulate the jump sizes $(Y_i)_{i\ge 1}$. By Proposition 2.3.5, the jump distribution $f(dx)$ is given by $f(dx) = \pi(dx)/\lambda$. Recalling that $\int_{-\infty}^{\infty} f(dx) = 1$, we have $\int_{-\infty}^{\infty} \pi(dx) = \lambda$. Therefore, given the measure $\pi$, we can always recover the distribution of the jump sizes. Thirdly, the former two steps give us the distributions, i.e. the cumulative distribution functions, that we want to simulate; however, we still need a method to sample from these distributions. One way is to use the inverse CDF method, as suggested in [7].

Theorem 2.3.6 (Inverse CDF method): Let $X$ be a random variable with CDF $F(x)$.
Define $F^{-1}:[0,1]\to\mathbb{R}$ by:

$$F^{-1}(y) := \sup\{x \mid F(x)<y\} = \inf\{x \mid F(x)\ge y\},\quad y\in[0,1].$$

$F^{-1}(y)$ is left continuous, non-decreasing, and can be regarded as a generalized inverse of $F(x)$. Let $U\sim\mathcal{U}(0,1)$ be a uniformly distributed random variable on $[0,1]$. Then $F^{-1}(U)$ has the same distribution as $X$.

In other words, to simulate $X$, one can simply simulate a uniformly distributed random variable on $[0,1]$ and plug it into the generalized inverse of $F(x)$.

Proof: The properties of $F^{-1}(y)$ are straightforward, and we only prove the main part of the theorem. Denote $Y := F^{-1}(U)$. Firstly assume $F$ is continuous; then $F(F^{-1}(y)) = y$, so

$$\mathbb{P}(Y\le x) = \mathbb{P}(F^{-1}(U)\le x) = \mathbb{P}(U\le F(x)) = F(x).$$

For the second equality, we need the fact that $\{F^{-1}(U)\le x\} = \{U\le F(x)\}$. To see this, suppose $F^{-1}(U)\le x$; then by monotonicity of $F$, $F(F^{-1}(U))\le F(x)$, which gives $U\le F(x)$ by continuity. Similarly, if $U\le F(x)$, then by the monotonicity of $F^{-1}$ and continuity, we get $F^{-1}(U)\le x$. For the general case, it's easy to show that

$$\{U<F(x)\} \subseteq \{F^{-1}(U)\le x\} \subseteq \{U\le F(x)\}.$$

Then by the continuity of the law of $U$, we get the same result. $\square$

Example 2.1 (Inverse method for an exponential random variable): Suppose $X$ is an exponential random variable with parameter $\lambda$. Then we can simulate $X$ by the formula

$$F^{-1}(U) = -\frac{1}{\lambda}\ln(1-U),\quad U\sim\mathcal{U}(0,1).$$

With the inverse CDF method, we can complete our simulation of the compound Poisson process. We summarize the algorithm below:

Algorithm 2.3-1 (Simulation of a compound Poisson process, simple Euler scheme): Let $X_t$ be a compound Poisson process with intensity $\lambda$ and jump size distribution $f$ (which are typically unknown and need to be calculated as in the discussion above), and let $\{t_i, i=0,\dots,N\}$ be the equal-length partition of $[0,T]$.
Step 1: Set $k=1$, and simulate a waiting time $\tau_k$ using the inverse CDF method, as in Example 2.1.
Step 2: Define $T_k := \sum_{i=1}^k \tau_i$ to be the total waiting time, and check the condition $T_k < T$.
If the condition is true, simulate a jump size $Y_k$ corresponding to this jump moment, then set $k=k+1$ and go back to Step 1. If the condition is false, stop the loop, record the total number of jumps $K$, and set $T_{K+1}=T$. (Note that since there are no jumps between $T_K$ and $T_{K+1}$, we can set $Y_{K+1}=0$ correspondingly.)
Step 3: Simulate $X_{t_{i+1}} = X_{t_i} + \sum_{k=1}^K \mathbf{1}_{\{T_k\in[t_i,t_{i+1})\}}\,Y_k$ with initial condition $X_{t_0} = X_0 = x_0$.

Note: Recall that a compound Poisson process has almost surely finitely many jumps in $[0,T]$; we can take the jump moments to be the jump-adapted partition. This leads to a jump-adapted simulation of the compound Poisson process.

Algorithm 2.3-2 (Simulation of a compound Poisson process, jump-adapted Euler scheme): Let $X_t$ be a compound Poisson process with intensity $\lambda$ and jump size distribution $f$.
Step 1: As in Algorithm 2.3-1, simulate exponential waiting times $\{\tau_i\}_{i\ge 1}$ in a loop with condition $\sum_{i=1}^k \tau_i < T$. Record the total number of jumps $K$, and set the jump moments:

$$T_0 = 0,\quad T_{K+1} = T,\quad T_k := \sum_{i=1}^k \tau_i,\ k=1,\dots,K.$$

Step 2: Simulate $K$ i.i.d. jump sizes $\{Y_k\}_{k=1,\dots,K}$, corresponding to the jump moments. Also, formally set $Y_{K+1}=0$ to indicate that there is no jump in $(T_K, T_{K+1}) = (T_K, T)$.
Step 3: Simulate $X_t$ at the jump moments $\{T_k\}_{k=0,1,\dots,K+1}$ by:

$$X_{T_{i+1}} = X_{T_i} + Y_{i+1},\quad i=0,1,\dots,K$$

with initial condition $X_{T_0} = X_0 = x_0$.

As we can see, since there is exactly one jump between consecutive jump moments, the jump-adapted method is much more convenient than the simple Euler scheme in Step 3. When the total number of grid points $N$ in the simple Euler scheme is large while the number of jumps is small, the simple Euler scheme wastes space storing grid points; in this case the jump-adapted method is more efficient, as grid points are generated only when needed.
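Algorithm 2.3-2 can be sketched as follows (a Python sketch, not the thesis's MATLAB code; `jump_sampler`, a user-supplied callable drawing one jump size from $f$, is a hypothetical name):

```python
import numpy as np

def compound_poisson_jump_adapted(T, lam, jump_sampler, rng=None):
    # Algorithm 2.3-2: jump moments from i.i.d. Exp(lam) waiting times,
    # one jump size per moment; the path is constant between jumps.
    rng = np.random.default_rng() if rng is None else rng
    times, values = [0.0], [0.0]
    t = rng.exponential(1.0 / lam)  # first waiting time
    x = 0.0
    while t < T:                    # loop condition: sum of waiting times < T
        x += jump_sampler(rng)      # jump size Y_k ~ f
        times.append(t)
        values.append(x)
        t += rng.exponential(1.0 / lam)
    times.append(T)                 # T_{K+1} = T, with Y_{K+1} = 0
    values.append(x)
    return np.array(times), np.array(values)
```

Note that NumPy's `exponential` takes the scale $1/\lambda$, so no explicit inverse-CDF step is needed here, although Example 2.1 would give the same law.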
However, when the total number of jumps is rather large, the large number of grid points generated by the jumps becomes a heavy load for later computation.

Furthermore, as suggested in Cont & Tankov [1], we can improve our former algorithms for the compound Poisson process. Note that the total number of jumps in $[0,T]$ is $N_T$, a Poisson random variable with parameter $\lambda T$. Also, conditionally on $N_T$, the jump moments in $[0,T]$ have the same distribution as $N_T$ i.i.d. random variables uniformly distributed on $[0,T]$ and rearranged in increasing order (see Cont & Tankov [1], pp. 46 and 174). With these properties, we can derive another algorithm for the compound Poisson process.

Algorithm 2.4-1 (Improved algorithm for the compound Poisson process, simple Euler scheme): Let $X_t$ be a compound Poisson process with intensity $\lambda$ and jump size distribution $f$, and let $\{t_i, i=0,\dots,N\}$ be the equal-length partition of $[0,T]$.
Step 1: Simulate the total number of jumps $N$, a Poisson random variable with parameter $\lambda T$.
Step 2: Simulate $N$ i.i.d. random variables $U_i$, with $U_i\sim\mathcal{U}(0,T)$.
Step 3: Simulate $N$ i.i.d. random variables $Y_i$ with distribution $\pi(dx)/\lambda$.
Step 4: Simulate $X_t$ at the $t_i$ by:

$$X_{t_0} = X_0 = 0,\quad X_{t_i} = \sum_{j=1}^{N} \mathbf{1}_{\{U_j<t_i\}}\,Y_j,\quad i=1,2,\dots,N.$$

Also, since the $U_i$ are the exact jump moments (possibly not in increasing order), we can easily apply the jump-adapted method as well: to get the grid points, i.e. the jump moments, simply reorder the $U_i$ increasingly. The algorithm is as follows:

Algorithm 2.4-2 (Improved algorithm for the compound Poisson process, jump-adapted Euler scheme): Let $X_t$ be a compound Poisson process with intensity $\lambda$ and jump size distribution $f$.
Step 1: Simulate the total number of jumps $N$, a Poisson random variable with parameter $\lambda T$.
Step 2: Simulate $N$ i.i.d. random variables $U_i$, with $U_i\sim\mathcal{U}(0,T)$.
Step 3: Let $\tau_i$, $i=1,\dots,N$, be the $U_i$ rearranged in increasing order.
Step 4: Simulate $N$ i.i.d. random variables $Y_i$ with distribution $\pi(dx)/\lambda$.
Step 5: Simulate $X_t$ at the $\tau_i$ by:

$$X_{\tau_0} := X_0 = 0,\quad X_{\tau_i} = \sum_{j=1}^{i} Y_j,\quad i=1,2,\dots,N.$$

In many programming tools, for example MATLAB, there is a function that directly simulates a Poisson random variable, which makes Algorithms 2.4-1 and 2.4-2 easy to implement. Also, since we use reordered uniform random variables for the jump moments instead of exponential waiting times, we don't need the inverse CDF method in Step 2.

Note that in the improved algorithms, the simple Euler method still requires comparisons between jump moments and partition points. Although the jump-adapted method doesn't need this comparison, it needs a reordering step which, when $T$ is large (and hence $N$ is possibly large), will also consume considerable time and space. Thus both methods have their advantages, and the choice between them depends on the situation.

Figure 2.2 shows a typical path of a compound Poisson process.

[Figure 2.2: Sample path of a compound Poisson process]

2.4 Simulations for Lévy Processes

Definition 2.4.1 (Lévy process): An $\mathbb{R}$-valued cadlag stochastic process $(X_t)_{t\ge 0}$ on a given probability space $(\Omega,\mathcal{F},\mathbb{P})$ is called a Lévy process on $[0,T]$ if:
(i) $X_0 = 0$;
(ii) independent increments: for any $0=t_0<\dots<t_n=T$, the increments $X_{t_1}, X_{t_2}-X_{t_1},\dots,X_{t_n}-X_{t_{n-1}}$ are independent;
(iii) stationary increments: for any $t\in[0,T]$ and $h\ge 0$, the distribution of $X_{t+h}-X_t$ doesn't depend on $t$;
(iv) stochastic continuity: $\forall\varepsilon>0$, $\lim_{h\to 0}\mathbb{P}(|X_{t+h}-X_t|\ge\varepsilon) = 0$.

Note that condition (iv) does not imply path continuity. The continuity is in the sense of probability: at any given time $t$, the probability of seeing a jump exactly at $t$ is zero. In other words, the jumps of a Lévy process cannot happen at fixed deterministic moments. As discussed above, both Brownian motion and the compound Poisson process are special classes of Lévy processes.
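Before moving on, the improved compound Poisson construction above (Algorithm 2.4-2) can be sketched in Python (illustrative only; `jump_sampler` is a hypothetical user-supplied callable drawing one jump size from $f = \pi/\lambda$):

```python
import numpy as np

def compound_poisson_improved(T, lam, jump_sampler, rng=None):
    # Algorithm 2.4-2: condition on the jump count N ~ Poisson(lam*T);
    # jump moments are N reordered U(0,T) variables (Cont & Tankov [1]).
    rng = np.random.default_rng() if rng is None else rng
    N = rng.poisson(lam * T)                      # Step 1: number of jumps
    taus = np.sort(rng.uniform(0.0, T, size=N))   # Steps 2-3: ordered moments
    Y = np.array([jump_sampler(rng) for _ in range(N)])  # Step 4: jump sizes
    X = np.concatenate(([0.0], np.cumsum(Y)))     # Step 5: X at 0 and each tau
    times = np.concatenate(([0.0], taus))
    return times, X
```

In MATLAB, `poissrnd` plays the role of `rng.poisson` in Step 1.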
More precisely, the following proposition gives the relationship between Lévy processes and compound Poisson processes:

Proposition 2.4.2: A stochastic process is a compound Poisson process if and only if it is a Lévy process and its sample paths are piecewise constant functions.

Proof: For a proof of this proposition, one can refer to Cont & Tankov [1], p. 71.

Therefore, the compound Poisson process is the only Lévy process with constant values between jumps. On the other hand, if a Lévy process has no jumps at all, one can expect the process to be a Brownian motion with drift. With these intuitions, we might want to approximate a general Lévy process by these two extreme cases. Thanks to the famous Lévy-Itô decomposition, this idea actually works. Let's firstly introduce the definition of the Lévy measure, which is similar to the measure we came across in Proposition 2.3.3.

Definition 2.4.3 (Lévy measure): Let $X$ be an $\mathbb{R}$-valued Lévy process on $[0,T]$. The Lévy measure $\pi$ of $X$ is defined by:

$$\pi(A) := \mathbf{E}\big[\#\{t\in[0,1] : \Delta X_t\ne 0,\ \Delta X_t\in A\}\big],\quad A\in\mathcal{B}(\mathbb{R}).$$

In other words, the Lévy measure of a Borel set $A$ is the expected number of jumps per unit time whose sizes lie in $A$. As for the compound Poisson process, we can define the jump measure of a Lévy process in the same way: for any measurable set $A\subset[0,T]\times\mathbb{R}_0$,

$$p(A) := \#\{(t,\, X_t - X_{t-})\in A\}. \tag{2.4.1}$$

Then by Definition 2.4.3, we have the relationship

$$\mathbf{E}[p(dt,dx)] = \pi(dx)\,dt$$

and we can introduce the martingale measure $q$ on $[0,T]\times\mathbb{R}_0$ by:

$$q(dt,dx) := p(dt,dx) - \pi(dx)\,dt. \tag{2.4.2}$$

With these definitions, we can present the famous Lévy-Itô decomposition:

Theorem 2.4.4 (Lévy-Itô decomposition): Suppose $X$ is a Lévy process as in Definition 2.4.1, and let $\pi$ be the corresponding Lévy measure given by Definition 2.4.3.
Then: (i) π is a Radon measure on ℝ\{0} and verifies: ∫ |𝑥 | 2 𝜋 (𝑑𝑥 ) |𝑥 |≤1 <∞ and ∫ 𝜋 (𝑑𝑥 ) |𝑥 |≥1 <∞ (ii) The jump measure of X, defined as (2.4.1), is a Poisson random measure with intensity measure 𝜋 (𝑑𝑥 )𝑑𝑡 (iii) There exist a constant γ, a Brownian motion 𝐵 𝑡 with variance A (that is, 𝐵 1 ~𝒩 (0,𝐴 ) ) , such that 𝑋 𝑡 =𝛾𝑡 +𝐵 𝑡 +𝑋 𝑡 𝑙 +lim 𝜖 ↓0 𝑋̃ 𝑡 𝜀 ,𝑤 ℎ𝑒𝑟𝑒 (2.4.3) 𝑋 𝑡 𝑙 ≔ ∫ 𝑥 |𝑥 |≥1,𝑠 ∈[0,𝑡 ] 𝑝 (𝑑𝑠 ,𝑑𝑥 )𝑎𝑛𝑑 22 𝑋̃ 𝑡 𝜀 ≔ ∫ 𝑥𝑞 (𝑑𝑠 ,𝑑𝑥 ) 𝜀 ≤|𝑥 |<1,𝑠 ∈[0,𝑡 ] And terms in (2.4.3) are independent, and the convergence of the limit is almost sure and uniform in t on [0,T]. The parameters in (2.4.3), along with the Lé vy measure π, (A,π,γ), is called the characteristic triplet or Lé vy triplet of the process X. For a proof of the Lé vy-Itô decomposition, one can refer to Tankov [16]. According to Theorem 2.4.4, any Lé vy process can be written as a sum of Brownian motion with drift (continuous part), a compound Poisson process with “large” jumps 𝑋 𝑡 𝑙 , and a martingale part 𝑋̃ 𝑡 𝜀 . The martingale part 𝑋̃ 𝑡 𝜀 is compensated since there might be infinitely many small jumps which can lead to the singularity of π at zero. We can view 𝑋̃ 𝑡 𝜀 as an infinite superposition of independent centered Poisson processes to which a “center-limit” type argument can be applied to show convergence(see Cont & Tankov [1]): every Lé vy process is a combination of a Brownian motion with drift and a possibly infinite sum of independent compound Poisson processes. More important, the theorem shows that every Lé vy process can be approximated with arbitrary precision by a Brownian motion with drift and a compound Poisson process. This point is crucial for our simulations. One corollary of the Lé vy-Itô decomposition is the Lé vy-Khinchin representation, which gives the characteristic function of Lé vy processes. Corollary 2.4.5 (Lé vy-Khinchin representation): Let X be a Lé vy process on ℝ with characteristic triplet (A,π,γ). 
Then
$$\mathbf E[e^{izX_t}] = e^{t\psi(z)},\quad z\in\mathbb R,\quad\text{with}\quad \psi(z) = -\tfrac12 A z^2 + i\gamma z + \int_{-\infty}^{\infty}\big(e^{izx} - 1 - izx\,\mathbf 1_{|x|\le 1}\big)\pi(dx). \tag{2.4.4}$$

Now let us turn to the simulation of Lévy processes. To simulate a discretized trajectory of a Lévy process, one must be able to simulate the independent increments, as discussed earlier. In some special cases, for example the symmetric stable Lévy process discussed in the following section, the distributions of the increments are known explicitly, so we can run the approximation more precisely. For Lévy processes without explicit increment distributions, we turn to the Lévy–Itô decomposition for the simulation: approximate the Lévy process by a compound Poisson process plus a Brownian motion with drift. Moreover, to treat Lévy processes whose jumps are highly concentrated near zero, we need to approximate the small jumps more precisely. As suggested by Cont & Tankov [1], Brownian motion is a good choice for approximating the small jumps. This idea is shared by Mikulevičius [2]; let us illustrate the approximation with
$$Z_t = \int_0^t \!\!\int (1-\chi_\alpha(v))\,v\,p(ds,dv) + \int_0^t \!\!\int \chi_\alpha(v)\,v\,q(ds,dv),$$
which was defined in Chapter 1.

Simulation 2.1 (Simulating the Lévy process in equation (1.1)): Following [2], take $\varepsilon \in (0,1)$ and approximate $Z_t$ by $\tilde Z_t := Z_t^\varepsilon + R_t^\varepsilon$, where
$$Z_t^\varepsilon = \int_0^t \!\!\int_{|v|>\varepsilon} (1-\chi_\alpha(v))\,v\,p(ds,dv) + \int_0^t \!\!\int_{|v|>\varepsilon} \chi_\alpha(v)\,v\,q(ds,dv)$$
and
$$R_t^\varepsilon = \begin{cases} t\int_{|v|\le\varepsilon} v\,\pi(dv), & \text{if } \alpha\in(0,1],\\[2pt] B_\varepsilon \tilde W_t, & \text{if } \alpha\in(1,2],\\[2pt] 0, & \text{otherwise},\end{cases}$$
with
$$B_\varepsilon := \sqrt{\int_{|v|\le\varepsilon} v^2\,\pi(dv)}$$
and $\tilde W_t$ a standard Brownian motion in $\mathbb R$.

Note that when $\alpha\in(0,1]$, $Z$ looks like a compound Poisson process, but it is not exactly one: a compound Poisson process on $\mathbb R$ with Lévy measure $\pi$ has almost surely finitely many jumps on $[0,T]$, that is, $\pi(\mathbb R_0) < \infty$. When $\alpha\in(0,1]$, this finiteness condition need not hold.
For example, when $\pi$ is a stable Lévy measure with stability index smaller than $\alpha$, $\pi(\mathbb R_0)$ diverges. Therefore we must still split the simulation into two parts: a compound Poisson process of the large jumps, and the expectation of the small-jump part. When $\alpha\in(1,2]$, we simulate the small-jump part of $Z$ by a Brownian motion. This coincides with the idea in [1], and the variance is simply given by
$$\operatorname{Var}\Big[\int_0^t\!\!\int_{|v|\le\varepsilon} v\,q(ds,dv)\Big] = \mathbf E\Big[\Big(\int_0^t\!\!\int_{|v|\le\varepsilon} v\,q(ds,dv)\Big)^2\Big] = t\int_{|v|\le\varepsilon} v^2\,\pi(dv).$$
We apply this simulation to a special class of Lévy processes in the following section.

2.5 Simulations for Symmetric Stable Lévy Processes

In this section we discuss simulations of symmetric stable Lévy processes, whose Lévy measure is given explicitly. We apply the results in the next chapter to simulate equation (1.1). In this section we write $\nu$ for the Lévy measure instead of $\pi$ to avoid confusion.

Definition 2.5.1 (two-sided stable-α Lévy process): A Lévy process $X$ with triplet $(0,\nu,\gamma)$ is called a two-sided stable-α Lévy process with $\alpha\in(0,2)$ if its Lévy measure $\nu$ is given by
$$\nu(dx) := \begin{cases} pC\alpha\,x^{-1-\alpha}\,dx, & x>0,\\ qC\alpha\,|x|^{-1-\alpha}\,dx, & x<0,\end{cases} \tag{2.5.1}$$
where $C$ is a constant and $p,q\ge 0$ with $p+q=1$. A symmetric stable-α Lévy process is the case $p=q=1/2$.

A stable Lévy process actually defines a family of stable distributions. Its characteristic function is given by the following theorem.

Theorem 2.5.2 (Characteristic function of the stable-α Lévy process): Let $X$ be a Lévy process with triplet $(0,\nu,\gamma)$, with Lévy measure (2.5.1) and $\alpha\in(0,2)$. Then the characteristic function of $X$ at time 1 can be written as follows:
(1) If $\alpha\ne 1$, then
$$\Phi_X(z) = \mathbf E[e^{izX}] = \exp\Big[iz\mu - \sigma^\alpha |z|^\alpha\Big(1 - i\beta\,\mathrm{sgn}(z)\tan\tfrac{\pi\alpha}{2}\Big)\Big],\qquad \sigma^\alpha = C\,\frac{\Gamma(2-\alpha)}{1-\alpha}\cos\tfrac{\pi\alpha}{2},\quad \mu=\gamma,\quad \beta = p-q.$$
(2) If $\alpha = 1$, then
$$\Phi_X(z) = \mathbf E[e^{izX}] = \exp\Big[iz\mu - \sigma|z|\Big(1 + i\beta\tfrac{2}{\pi}\,\mathrm{sgn}(z)\ln|z|\Big)\Big],\qquad \sigma = C\tfrac{\pi}{2},\quad \mu=\gamma,\quad \beta=p-q.$$
We denote this process by $S_\alpha(\sigma,\beta,\mu)$. For a proof of Theorem 2.5.2, see Meerschaert and Alla [5, p. 120] for details.

Now consider the symmetric stable-α Lévy process without drift (i.e. $\gamma=0$). Set $p=q=1/2$ and
$$C = \frac{1-\alpha}{\Gamma(2-\alpha)\cos\frac{\pi\alpha}{2}}\ \text{ if } \alpha\ne 1,\qquad C = \frac{2}{\pi}\ \text{ if } \alpha=1.$$
Denote $C_s := \tfrac12 C\alpha$. Then the Lévy measure of the symmetric stable-α Lévy process is given by
$$\nu(dx) = C_s\,\frac{dx}{|x|^{1+\alpha}}, \tag{2.5.2}$$
and the parameters in the characteristic function become $\mu=0$, $\beta=0$, $\sigma^\alpha = 1$ for $\alpha\in(0,2)$.

Now consider Simulation 2.1 from Section 2.4 with the Lévy measure given by (2.5.2). To avoid confusion, in the simulations below we write $\alpha_0$ for the stable index discussed above, while α is defined as in Chapter 1. Also, the assumption
$$\int (|v|^\alpha\wedge 1)\,\pi(dv) < \infty$$
forces $\alpha > \alpha_0$.

Simulation 2.2 (Simulating the Lévy process in (1.1) with $\alpha\in(0,1]$): When $\alpha\in(0,1]$, the stable index satisfies $\alpha_0\in(0,1)$, and
$$Z_t = \int_0^t\!\!\int v\,p(ds,dv),$$
so $Z_t$ is exactly a compound Poisson process. Following Simulation 2.1, we simulate $Z_t$ by $\tilde Z_t := Z_t^\varepsilon + R_t^\varepsilon$, where by symmetry
$$R_t^\varepsilon = t\int_{|v|\le\varepsilon} v\,\pi(dv) = t\int v\,\mathbf 1_{|v|\le\varepsilon}\,C_s\,\frac{dv}{|v|^{1+\alpha_0}} = 0$$
and
$$Z_t^\varepsilon = \int_0^t\!\!\int_{|v|>\varepsilon} v\,p(ds,dv).$$
So we only need to simulate the process $Z_t^\varepsilon$, which is a compound Poisson process whose jumps are larger than ε in absolute value.
The intensity λ is given by
$$\lambda = \pi_\varepsilon := \pi(\{|v|>\varepsilon\}) = C_s\int_{|v|>\varepsilon}\frac{dv}{|v|^{1+\alpha_0}} = C_s\,\frac{2}{\alpha_0}\,\varepsilon^{-\alpha_0}.$$
The jump-size distribution is then
$$f(dv) = \frac{\chi_{\{|v|>\varepsilon\}}\,|v|^{-1-\alpha_0}\,dv}{2\int_{|v|>\varepsilon}|v|^{-1-\alpha_0}\,dv} = \frac{\alpha_0}{2}\,\varepsilon^{\alpha_0}\,\chi_{\{|v|>\varepsilon\}}\,\frac{dv}{|v|^{1+\alpha_0}},$$
which gives the cdf
$$\mathbb P(X\le t) = \int_{-\infty}^t f(dx) = \begin{cases} \tfrac12\,\varepsilon^{\alpha_0}(-t)^{-\alpha_0}, & t<-\varepsilon,\\[2pt] \tfrac12, & t\in[-\varepsilon,\varepsilon],\\[2pt] 1-\tfrac12\big(\tfrac{\varepsilon}{t}\big)^{\alpha_0}, & t>\varepsilon.\end{cases}$$
The inverse cdf is then
$$F^{-1}(b) = \inf\{t : F(t)\ge b\} = \begin{cases} -\varepsilon\,(2b)^{-1/\alpha_0}, & b<\tfrac12,\\[2pt] -\varepsilon, & b=\tfrac12,\\[2pt] \varepsilon\,[2(1-b)]^{-1/\alpha_0}, & b>\tfrac12.\end{cases}$$
With these preparations, running Algorithms 2.3-1, 2.3-2, 2.4-1 and 2.4-2 gives the simulation of the stable-$\alpha_0$ Lévy process for $\alpha\in(0,1]$.

Simulation 2.3 (Simulating the Lévy process in (1.1) with $\alpha\in(1,2]$): When $\alpha\in(1,2]$, the stable index satisfies $\alpha_0\in(0,2)$, and
$$Z_t = \int_0^t\!\!\int_{|v|>1} v\,p(ds,dv) + \int_0^t\!\!\int_{|v|\le 1} v\,q(ds,dv).$$
Again we simulate $Z_t$ by $\tilde Z_t := Z_t^\varepsilon + R_t^\varepsilon$, where
$$R_t^\varepsilon = B_\varepsilon\tilde W_t = \mathcal N(0,1)\sqrt{t\int_{|v|\le\varepsilon} v^2\,\pi(dv)} = \mathcal N(0,1)\sqrt{t\,C_s\,\frac{2\varepsilon^{2-\alpha_0}}{2-\alpha_0}}$$
and
$$Z_t^\varepsilon = \int_0^t\!\!\int_{|v|>1} v\,p(ds,dv) + \int_0^t\!\!\int_{\varepsilon<|v|\le 1} v\,q(ds,dv) = \int_0^t\!\!\int_{|v|>\varepsilon} v\,p(ds,dv),$$
since $\int_0^t\int_{\varepsilon<|v|\le 1} v\,\pi(dv)\,ds = 0$ by symmetry.

Note that $Z_t^\varepsilon$ is exactly as in Simulation 2.2, a compound Poisson process with jumps of size greater than ε, so we can simply run Simulation 2.2 to get $Z_t^\varepsilon$. The other part, $R_t^\varepsilon$, is a Brownian motion. To simulate it, we simulate the increments
$$\mathcal N(0,1)\sqrt{\Delta_i\,C_s\,\frac{2\varepsilon^{2-\alpha_0}}{2-\alpha_0}},\qquad \Delta_i := t_i - t_{i-1},$$
given a time discretization $\{t_i,\ i=0,\dots,n\}$ of $[0,T]$.

Note: In Simulations 2.2 and 2.3, we simulate a Lévy process $Z$ by the sum of a compound Poisson process and a Brownian motion (or 0). Since the jump moments of the compound Poisson part are almost surely finitely many on $[0,T]$, the simulation $\tilde Z_t$ is also a process with almost surely finitely many jumps. Therefore the jump-adapted Euler method applies here, with the time discretization chosen to be the jump moments of $Z_t^\varepsilon$. The algorithm is similar to the compound Poisson case discussed earlier.
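The intensity, the inverse cdf, and the Gaussian small-jump correction above translate directly into code. The thesis's appendix implements this in MATLAB; the following is a hedged Python sketch with illustrative function names, not the thesis's own code:

```python
import math

def stable_Cs(alpha0):
    """C_s for the symmetric stable-alpha0 Levy measure
    nu(dx) = C_s |x|^(-1-alpha0) dx, with C chosen as in Section 2.5."""
    if alpha0 != 1:
        C = (1 - alpha0) / (math.gamma(2 - alpha0) * math.cos(math.pi * alpha0 / 2))
    else:
        C = 2 / math.pi
    return 0.5 * C * alpha0

def jump_intensity(alpha0, eps):
    # lambda = pi({|v| > eps}) = C_s * (2/alpha0) * eps^(-alpha0)
    return stable_Cs(alpha0) * (2 / alpha0) * eps ** (-alpha0)

def inverse_cdf(b, alpha0, eps):
    """F^{-1}(b) for the normalized jump-size law on {|v| > eps}."""
    if b < 0.5:
        return -eps * (2 * b) ** (-1 / alpha0)
    if b > 0.5:
        return eps * (2 * (1 - b)) ** (-1 / alpha0)
    return -eps

def small_jump_std(alpha0, eps, t):
    # std of B_eps * W_t: sqrt(t * C_s * 2 * eps^(2-alpha0)/(2-alpha0)), for alpha in (1,2]
    return math.sqrt(t * stable_Cs(alpha0) * 2 * eps ** (2 - alpha0) / (2 - alpha0))
```

Feeding i.i.d. uniforms through `inverse_cdf` yields the jump sizes of $Z_t^\varepsilon$, while `jump_intensity` drives the exponential waiting times between jumps.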
Note that when ε is small, more jump moments are included, which leads both to possibly better approximations and to much longer running times, especially for the jump-adapted method.

Besides simulation by a combination of a compound Poisson process and a Brownian motion, there is another way to simulate $Z_t$ given the $\alpha_0$-stable measure (2.5.2). In Cont & Tankov [1] and Samorodnitsky [9], an algorithm for simulating the stable process $S_{\alpha_0}(1,0,0)$ is derived. By choosing the constant $C$ as in (2.5.2), we get exactly $\sigma^{\alpha_0} = 1$ for $\alpha_0\in(0,2)$, and for the symmetric stable-$\alpha_0$ Lévy process $\beta=0$ exactly. Moreover, according to the characteristic function of $Z_t$ and the relationship $\alpha>\alpha_0$, we see that $Z_t$ is a Lévy process without drift for all $\alpha\in(0,2]$. Therefore we can directly apply the following algorithm to simulate $Z_t$.

Algorithm 2.5 (Direct simulation of the $S_{\alpha_0}(1,0,0)$ process): Suppose a partition of $[0,T]$ is given by $\{t_0=0,t_1,\dots,t_{n-1},t_n=T\}$. Then:
Step 1: Simulate n independent random variables $\gamma_i$, uniformly distributed on $(-\frac{\pi}{2},\frac{\pi}{2})$, and n independent standard exponential random variables $W_i$, with the $\gamma_i$ also independent of the $W_i$.
Step 2: Compute the increments $\Delta X_i$, $i=1,\dots,n$, by
$$\Delta X_i = (t_i - t_{i-1})^{1/\alpha_0}\,\frac{\sin(\alpha_0\gamma_i)}{(\cos\gamma_i)^{1/\alpha_0}}\,\Big(\frac{\cos((1-\alpha_0)\gamma_i)}{W_i}\Big)^{(1-\alpha_0)/\alpha_0}.$$
Step 3: Set $X_{t_i} = \sum_{k=1}^i \Delta X_k$, with $X_{t_0}=0$.

Details of the proof of this algorithm can be found in [9]. In [8] there is an algorithm for the more general case $S_{\alpha_0}(1,\beta,0)$, with a small modification in Step 2; it is not needed here, since in our symmetric case β = 0 always holds. To treat the case $\sigma\ne 1$ or $\mu\ne 0$, one can use the scaling property of stable processes: if $X\sim S_{\alpha_0}(1,\beta,0)$, then
$$\sigma X + \mu \sim S_{\alpha_0}(\sigma,\beta,\mu)\ \text{ for }\alpha_0\ne 1,\qquad \sigma X + \tfrac{2}{\pi}\beta\sigma\ln\sigma + \mu \sim S_{\alpha_0}(\sigma,\beta,\mu)\ \text{ for }\alpha_0=1.$$
Moreover, since Algorithm 2.5 simulates the increments exactly, the jump-adapted method is no longer necessary: the fixed-grid simple Euler scheme is enough to apply the algorithm.
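Algorithm 2.5 (the symmetric Chambers–Mallows–Stuck transform) can be sketched in Python as follows; the function name and NumPy interface are illustrative assumptions, not from the text:

```python
import numpy as np

def stable_increments(times, alpha0, rng):
    """Exact path of the symmetric stable process S_{alpha0}(1,0,0) on a
    partition 0 = t_0 < t_1 < ... < t_n, following Algorithm 2.5.

    Step 1: gamma_i ~ Uniform(-pi/2, pi/2), W_i ~ Exp(1), all independent.
    Step 2: Chambers-Mallows-Stuck transform, scaled by (t_i - t_{i-1})^{1/alpha0}.
    Step 3: cumulative sums give X_{t_i}, with X_{t_0} = 0."""
    dt = np.diff(times)
    n = len(dt)
    g = rng.uniform(-np.pi / 2, np.pi / 2, n)
    w = rng.exponential(1.0, n)
    core = (np.sin(alpha0 * g) / np.cos(g) ** (1 / alpha0)
            * (np.cos((1 - alpha0) * g) / w) ** ((1 - alpha0) / alpha0))
    dX = dt ** (1 / alpha0) * core
    return np.concatenate(([0.0], np.cumsum(dX)))

rng = np.random.default_rng(42)
path = stable_increments(np.linspace(0.0, 1.0, 101), 1.5, rng)
```

For $\alpha_0 = 1$ the transform degenerates gracefully to $\tan\gamma_i$, the Cauchy case, since the last exponent becomes zero.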
The following graphs give sample paths of the stable Lévy process for different $\alpha_0$:

Figure 2.3 Stable Lévy with $\alpha_0=0.5$
Figure 2.4 Stable Lévy with $\alpha_0=1$
Figure 2.5 Stable Lévy with $\alpha_0=1.9$

When $\alpha_0$ is small, the trajectory is dominated by big jumps and looks like a compound Poisson process, while when $\alpha_0$ is large the sample path looks like a Brownian motion with occasional jumps. Given this intuition, our earlier simulation of the stable Lévy process by a combination of a Brownian motion and a compound Poisson process looks all the more reasonable.

CHAPTER 3: SIMULATIONS OF SDEs DRIVEN BY LÉVY PROCESSES

3.1 Simulations of Lévy-Driven SDEs

In Chapter 2 we discussed both simple Euler schemes and jump-adapted schemes for simulating the driving processes in the SDE
$$X_t = X_0 + \int_0^t a(X_s)\,ds + \int_0^t b(X_s)\,dW_s + \int_0^t G(X_{s-})\,dZ_s,\qquad t\in[0,T]. \tag{3.1}$$
With these preparations we can simulate (1.1). In particular, we again consider the stable-$\alpha_0$ Lévy process with the condition $\alpha_0 < \alpha$. Recall the stable-$\alpha_0$ Lévy measure
$$\pi(dx) = C_s\,\frac{dx}{|x|^{1+\alpha_0}}. \tag{3.2}$$
We summarize the former discussion and state the convergence theorems as follows.

(1) Approximate simple Euler scheme. Given $\varepsilon\in(0,1)$ and an equally spaced time discretization $\{t_i\}_{i=0,\dots,N}$ with $\delta = h = T/N$, as suggested in (2.1) we run the simulation
$$\tilde Y_0 = X_0 = x_0,\qquad \tilde Y_{i+1} := \tilde Y_{t_{i+1}} := \tilde Y_i + a(\tilde Y_i)h + b(\tilde Y_i)(W_{t_{i+1}} - W_{t_i}) + G(\tilde Y_i)(Z_{t_{i+1}} - Z_{t_i}) \tag{3.3}$$
for $i=0,1,\dots,N-1$. For the driving process $Z_t$, we apply Simulations 2.2 and 2.3. Then, according to [2], the following convergence theorem holds for the stable-$\alpha_0$ Lévy process.

Theorem 3.1.1 (Convergence of the simple Euler scheme): Let $\alpha<\beta\le\mu\le 2\alpha$, let $a(x),b(x)\in\tilde C^\beta(\mathbb R)$ and $G(x)\in\tilde C^{\beta\vee 1}(\mathbb R)$, and assume
$$\int_{|v|\le 1}|v|^\alpha\,\pi(dv) + \int_{|v|\ge 1}|v|^\mu\,\pi(dv) < \infty,$$
where $\pi$ is the Lévy measure given by (3.2). Then for (3.3) there is a constant $C$ (independent of ε and δ) such that for all $g\in\tilde C^\beta(\mathbb R)$,
$$|\mathbf E g(\tilde Y_T) - \mathbf E g(X_T)| \le C|g|_\beta\Big[\delta^{\frac{\beta}{\alpha}-1} + \int_{|v|\le\varepsilon}|v|^{\beta\wedge 3}\,\pi(dv)\Big],$$
where the last integral equals
$$C_s\int_{|v|\le\varepsilon}|v|^{\beta\wedge 3}\,\frac{dv}{|v|^{1+\alpha_0}} = \frac{2C_s\,\varepsilon^{\beta\wedge 3-\alpha_0}}{\beta\wedge 3-\alpha_0}.$$

(2) Approximate jump-adapted Euler scheme. Given $\varepsilon\in(0,1)$ and $\delta>0$, as suggested in Simulations 2.1, 2.2 and 2.3, consider the $Z^\varepsilon$-jump-adapted time discretization
$$\tau_0=0,\qquad \tau_{i+1} = \inf\{t>\tau_i : \Delta Z_t^\varepsilon\ne 0\}\wedge T\wedge(\tau_i+\delta).$$
Let $N$ denote the total number of grid points. The time-discretized trajectory of $X$ is approximated by
$$\hat Y_0 = X_0 = x_0,\qquad \hat Y_{i+1} := \hat Y_{\tau_{i+1}} := \hat Y_i + a(\hat Y_i)(\tau_{i+1}-\tau_i) + b(\hat Y_i)(W_{\tau_{i+1}}-W_{\tau_i}) + G(\hat Y_i)(Z_{\tau_{i+1}}-Z_{\tau_i}). \tag{3.4}$$
Then the following convergence theorem holds.

Theorem 3.1.2 (Convergence of the jump-adapted Euler scheme): Under the assumptions of Theorem 3.1.1, with $\pi$ the Lévy measure given by (3.2), for (3.4) there is a constant $C$ (independent of ε and δ) such that for all $g\in\tilde C^\beta(\mathbb R)$,
$$|\mathbf E g(\hat Y_T) - \mathbf E g(X_T)| \le C|g|_\beta\Big[\big\{(\delta\wedge\lambda_\varepsilon^{-1})\,\tilde\lambda_\varepsilon\big\}^{\frac{\beta}{\alpha}-1} + \int_{|v|\le\varepsilon}|v|^{\beta\wedge 3}\,\pi(dv)\Big],$$
where
$$\int_{|v|\le\varepsilon}|v|^{\beta\wedge 3}\,\pi(dv) = \frac{2C_s\,\varepsilon^{\beta\wedge 3-\alpha_0}}{\beta\wedge 3-\alpha_0},\qquad \lambda_\varepsilon = \pi(\{|v|>\varepsilon\}) = C_s\,\frac{2}{\alpha_0}\,\varepsilon^{-\alpha_0},$$
and
$$\tilde\lambda_\varepsilon = 1 + \mathbf 1_{\alpha\in(1,2)}\Big|\int_{\varepsilon<|v|\le 1} v\,\pi(dv)\Big| = 1.$$
In particular, if $\delta = T$ (only jump moments are chosen for the time discretization), then
$$|\mathbf E g(\hat Y_T) - \mathbf E g(X_T)| \le C|g|_\beta\Big[\Big(\frac{\tilde\lambda_\varepsilon}{\lambda_\varepsilon}\Big)^{\frac{\beta}{\alpha}-1} + \int_{|v|\le\varepsilon}|v|^{\beta\wedge 3}\,\pi(dv)\Big].$$

3.2 Backward Kolmogorov Equation

With the former discussion, we already have a simulation $Y_t$ of equation (3.1), and we have stated its weak convergence. Now we want to check this convergence numerically.
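Schemes (3.3) and (3.4) share the same one-step update; only the grid differs (fixed for (3.3), jump-adapted for (3.4)). Below is a minimal Python sketch of that update, assuming the coefficients and driver increments are supplied by the caller; the interface is an illustrative assumption, not the thesis's own (MATLAB) code:

```python
import math
import random

def euler_path(x0, grid, a, b, G, dW, dZ):
    """Weak Euler scheme for X_t = x0 + int a ds + int b dW + int G dZ:
    Y_{i+1} = Y_i + a(Y_i)*(t_{i+1}-t_i) + b(Y_i)*dW_i + G(Y_i)*dZ_i.
    grid is the time discretization (a fixed grid or the jump moments);
    dW, dZ are the lists of driver increments over each subinterval."""
    Y = [x0]
    for i in range(len(grid) - 1):
        h = grid[i + 1] - grid[i]
        Y.append(Y[-1] + a(Y[-1]) * h + b(Y[-1]) * dW[i] + G(Y[-1]) * dZ[i])
    return Y

# Example with the coefficient functions used later in Section 3.4:
a = lambda x: -math.sin(x) * math.cos(x) ** 3
b = lambda x: math.cos(x) ** 2
G = lambda x: 1.0 / (1.0 + x * x)
rng = random.Random(7)
grid = [i / 100 for i in range(101)]
dW = [rng.gauss(0.0, math.sqrt(0.01)) for _ in range(100)]
dZ = [0.0] * 100  # plug in simulated Levy increments here
Y = euler_path(0.0, grid, a, b, G, dW, dZ)
```

The same function serves the jump-adapted scheme by passing the jump moments of $Z_t^\varepsilon$ as `grid`.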
In the Lipschitz spaces $\tilde C^\beta(H)$, consider the backward Kolmogorov equation associated with $X_t$:
$$\begin{cases}(\partial_t + \mathcal L)u(t,x) + f(t,x) = 0,\\ u(T,x) = g(x),\end{cases} \tag{3.5}$$
where $\mathcal L$ is the operator defined on $\tilde C^\beta(H)$, $\beta>\alpha$, by: for $u\in\tilde C^\beta(H)$,
$$\mathcal L u(t,x) := a(x)\,\partial_x u(t,x) + \tfrac12 b(x)^2\,\partial_{xx}u(t,x) + \int_{\mathbb R_0}\big[u(t,x+G(x)v) - u(t,x) - \chi_\alpha(v)\,\partial_x u(t,x)\,G(x)v\big]\,\pi(dv).$$
The definition of $\mathcal L$ is derived from Itô's formula for semimartingales.

Theorem 3.2.1 (Itô's formula for semimartingales): Let $(X_t)_{t\ge 0}$ be a semimartingale. For any $C^{1,2}$ function $f:[0,T]\times\mathbb R\to\mathbb R$, we have
$$f(t,X_t) - f(0,X_0) = \int_0^t \partial_t f(s,X_s)\,ds + \int_0^t \partial_x f(s,X_{s-})\,dX_s + \tfrac12\int_0^t \partial_{xx}f(s,X_s)\,d[X,X]_s^c + \sum_{\substack{0\le s\le t\\ \Delta X_s\ne 0}}\big[f(s,X_s) - f(s,X_{s-}) - \Delta X_s\,\partial_x f(s,X_{s-})\big],$$
where $[X,X]_s^c$ is the quadratic variation of the continuous part of $X$.

Since the solution $X_t$ of SDE (3.1) is a semimartingale, Itô's formula applies to $X_t$. More precisely, suppose $u(t,x)$ is a solution of the backward Kolmogorov equation (3.5); then by Itô's formula,
$$u(t,X_t) - u(0,X_0) = \int_0^t[\partial_t u(s,X_s) + \mathcal L u(s,X_s)]\,ds + \int_0^t\!\!\int\big[u(s,X_{s-}+G(X_{s-})v) - u(s,X_{s-})\big]\,q(ds,dv) + \int_0^t b(X_s)\,\partial_x u(s,X_s)\,dW_s.$$
Taking expectations on both sides and setting $t=T$, we get
$$\mathbf E[u(T,X_T) - u(0,X_0)] = \mathbf E\Big[\int_0^T(-f(s,X_s))\,ds\Big].$$
Since $u(T,X_T) = g(X_T)$ and $X_0 = x_0$, we have
$$u(0,x_0) = \mathbf E[g(X_T)] + \mathbf E\Big[\int_0^T f(s,X_s)\,ds\Big]. \tag{3.6}$$
Now suppose $Y_t$ is a simulation of $X_t$; replacing $X$ by $Y$ on the right-hand side of (3.6) gives an approximate value of $u(0,x_0)$:
$$\hat u(0,x_0) = \mathbf E[g(Y_T)] + \mathbf E\Big[\int_0^T f(s,Y_s)\,ds\Big]. \tag{3.7}$$
As suggested in [3, Lemma 15 and Section 4.3], we may compare $u(0,x_0)$ and $\hat u(0,x_0)$ instead of $\mathbf E[g(Y_T)]$ and $\mathbf E[g(X_T)]$, since their error estimates have the same order, up to a possibly different constant.
Therefore, we focus on comparing $u(0,x_0)$ and $\hat u(0,x_0)$ in what follows; in particular, we choose $u(t,x)$ such that $g(x)=0$ to simplify the calculations.

First, to simulate the expectation one can simply use the law of large numbers: run the simulation of $X_t$ $N_e$ times to get paths $\{Y_i^1\}_{i\ge1},\{Y_i^2\}_{i\ge1},\dots,\{Y_i^{N_e}\}_{i\ge1}$, plug each path $\{Y_i^k\}_{i\ge1}$ into (3.7) to get values $\hat u_k$, $k=1,\dots,N_e$, and set
$$\hat u(0,x_0) \approx \frac{1}{N_e}\sum_{k=1}^{N_e}\hat u_k.$$
The other main issue is to compute $f(t,x) = -(\partial_t+\mathcal L)u(t,x)$. To get an explicit result, we need to specify the Lévy measure and choose a "nice" function $u(t,x)$ for which $f(t,x)$ can be derived explicitly. We do this in the next section.

3.3 Real Simulations

In this section we specify the functions u, a, b, G and the Lévy measure, so that we can run some real simulations. We again choose the stable-$\alpha_0$ Lévy measure
$$\pi(dx) = C_s\,\frac{dx}{|x|^{1+\alpha_0}}. \tag{3.8}$$
Denote the integral part of $\mathcal L u(t,x)$ by $Iu(t,x)$. For the Lévy measure (3.8),
$$Iu(t,x) = \int_{\mathbb R_0}\big[u(t,x+G(x)v) - u(t,x) - \chi_\alpha(v)\,\partial_x u(t,x)\,G(x)v\big]\,\pi(dv) = |G(x)|^{\alpha_0}\int_{\mathbb R_0}\big[u(t,x+v) - u(t,x) - \chi_\alpha(v)\,\partial_x u(t,x)\,v\big]\,\pi(dv),$$
given $G(x)\ge 0$. So for the choice of G(x), it is convenient to impose the additional condition $G(x)\ge 0$. Now consider choices of $u(t,x)$ for which $Iu(t,x)$ has an explicit formula.

Recall from Section 2.5 the characteristic function of the stable-$\alpha_0$ Lévy process with measure (3.2):
$$\Phi_X(z) = \exp[-|z|^{\alpha_0}],\qquad \alpha_0\in(0,2). \tag{3.9}$$
Recall also the Lévy–Khinchin representation, which gives the characteristic function of $X=X_1$ as
$$\Phi_X(z) = \exp\Big\{\int_{-\infty}^{\infty}\big(e^{izx}-1-izx\,\mathbf 1_{\{|x|\le 1\}}\big)\,\pi(dx)\Big\}. \tag{3.10}$$
Comparing (3.9) and (3.10), we get
$$\int_{-\infty}^{\infty}\big(e^{izx}-1-izx\,\mathbf 1_{\{|x|\le 1\}}\big)\,\pi(dx) = -|z|^{\alpha_0},\qquad\alpha_0\in(0,2).$$
Multiplying both sides by $e^{izy}$ and setting $z=1$ gives
$$e^{iy}\int_{-\infty}^{\infty}\big(e^{ix}-1-ix\,\mathbf 1_{\{|x|\le 1\}}\big)\,\pi(dx) = -e^{iy},\qquad\alpha_0\in(0,2).$$
Comparing real and imaginary parts separately, we get
$$\int_{-\infty}^{\infty}\big(\cos(y+x)-\cos(y)+x\,\mathbf 1_{\{|x|\le 1\}}\sin(y)\big)\,\pi(dx) = -\cos(y), \tag{3.11}$$
$$\int_{-\infty}^{\infty}\big(\sin(y+x)-\sin(y)-x\,\mathbf 1_{\{|x|\le 1\}}\cos(y)\big)\,\pi(dx) = -\sin(y). \tag{3.12}$$
As (3.11) and (3.12) suggest, if $u(t,x)$ has the form of $\cos(x)$ or $\sin(x)$, the integral part yields an explicit function. It is also convenient to have $g(x)=0$, so that we only need to simulate one term in (3.7). For example, we can set $u(t,x) = (t-T)\cos(x)$, $(t-T)\sin(x)$, $(t-T)\cos(kx)$, $(t-T)\sin(kx)$, or something similar. We give some examples in the following.

Example 3.1: Set $u(t,x) = (t-T)\cos(x)$. By calculation,
$$f(t,x) = -(\partial_t+\mathcal L)u(t,x) = -\Big\{\cos(x) + (t-T)\Big[-a(x)\sin(x) - \tfrac12 b(x)^2\cos(x) - \cos(x)|G(x)|^{\alpha_0}\Big]\Big\}.$$
Denote
$$f_1(x) := -a(x)\sin(x) - \tfrac12 b(x)^2\cos(x) - \cos(x)|G(x)|^{\alpha_0}.$$
Then $f(t,x) = -\cos(x) - (t-T)f_1(x)$, and we can simulate
$$\int_0^T f(s,Y_s)\,ds = -\int_0^T\cos(Y_s)\,ds - \int_0^T(s-T)f_1(Y_s)\,ds \approx -\sum_{i=0}^{N-1}(t_{i+1}-t_i)\cos(Y_{t_i}) - \sum_{i=0}^{N-1} f_1(Y_{t_i})\,\tfrac12(t_{i+1}-t_i)(t_{i+1}+t_i-2T),$$
where $0=t_0<t_1<\dots<t_N=T$ is a given partition of $[0,T]$.
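The two sums in Example 3.1 use that $\int_{t_i}^{t_{i+1}}(s-T)\,ds = \tfrac12(t_{i+1}-t_i)(t_{i+1}+t_i-2T)$, so the $(s-T)$ factor is integrated exactly. A small Python sketch of the estimator, with the Section 3.4 coefficient functions hard-coded into an illustrative helper `f1` (names are assumptions, not the thesis's code):

```python
import math

def f1(x, alpha0):
    # f1(x) = a(x)(-sin x) + (1/2) b(x)^2 (-cos x) - cos(x) |G(x)|^{alpha0},
    # with a(x) = -sin(x)cos^3(x), b(x) = cos^2(x), G(x) = 1/(1+x^2)
    a = -math.sin(x) * math.cos(x) ** 3
    b = math.cos(x) ** 2
    G = 1.0 / (1.0 + x * x)
    return -a * math.sin(x) - 0.5 * b ** 2 * math.cos(x) - math.cos(x) * abs(G) ** alpha0

def integral_f(Y, grid, T, alpha0):
    """Approximate int_0^T f(s, Y_s) ds for u(t,x) = (t-T)cos(x), where
    f(t,x) = -cos(x) - (t-T) f1(x), freezing Y on each subinterval."""
    total = 0.0
    for i in range(len(grid) - 1):
        dt = grid[i + 1] - grid[i]
        total -= dt * math.cos(Y[i])
        total -= f1(Y[i], alpha0) * 0.5 * dt * (grid[i + 1] + grid[i] - 2 * T)
    return total
```

Averaging `integral_f` over $N_e$ simulated paths gives the Monte Carlo estimate $\hat u(0,x_0)$ of (3.7).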
Example 3.2: Set $u(t,x) = (t-T)\sin(x)$. Similarly to Example 3.1, the simulation is given by
$$\int_0^T f(s,Y_s)\,ds \approx -\sum_{i=0}^{N-1}(t_{i+1}-t_i)\sin(Y_{t_i}) - \sum_{i=0}^{N-1} f_2(Y_{t_i})\,\tfrac12(t_{i+1}-t_i)(t_{i+1}+t_i-2T),$$
where
$$f_2(x) := a(x)\cos(x) - \tfrac12 b(x)^2\sin(x) - \sin(x)|G(x)|^{\alpha_0}.$$

3.4 Analysis Based on Numerical Results

In this section we do some analysis based on the numerical results. We take $u(t,x) = (t-T)\cos(x)$ as in Example 3.1, and set the Lévy measure as in (3.8). Moreover, we set
$$a(x) = -\sin(x)\cos^3(x),\qquad b(x) = \cos^2(x),\qquad G(x) = \frac{1}{1+x^2}.$$
For the parameters, we set T = 1, ε = 0.005, n = 100, and $N_e = 50$, where $N_e$ is the number of Monte Carlo repetitions used when computing the expectation.

3.4.1 Graph of $\hat u(0,x_0)$

First, we check the accuracy of the approximation $\hat u(0,x_0)$ by plotting it for $x_0\in[-\pi,\pi]$, with N = 30 equally spaced grid points.

(1) Graphs for the simple Euler scheme:
Figure 3.1 Graph of $\hat u(0,x_0)$ with α = 0.6, $\alpha_0$ = 0.5
Figure 3.2 Graph of $\hat u(0,x_0)$ with α = 1.1, $\alpha_0$ = 1
Figure 3.3 Graph of $\hat u(0,x_0)$ with α = 2, $\alpha_0$ = 1.9

As we can see, the closer $\alpha_0$ is to 2, the more accurate the approximation. Recalling the convergence theorem derived in Section 3.1, these results correspond to the convergence-rate inequalities, which show that the error is essentially of the order $\varepsilon^{\alpha_0}$ (the larger $\alpha_0$, the smaller the error). Another intuition is that when $\alpha_0$ is larger, the total number of jumps (of size larger than ε) is larger. This can be seen both from the calculation of λ (the average number of jumps) and from the program running time: when $\alpha_0$ is large, the running time grows significantly. Therefore, when $\alpha_0$ is large, it is necessary to choose the parameters (such as n and ε) carefully to control the running time.
(2) Graphs for the jump-adapted Euler scheme:
Figure 3.4 Graph of $\hat u(0,x_0)$ with α = 0.6, $\alpha_0$ = 0.5
Figure 3.5 Graph of $\hat u(0,x_0)$ with α = 1.1, $\alpha_0$ = 1
Figure 3.6 Graph of $\hat u(0,x_0)$ with α = 2, $\alpha_0$ = 1.9

As with the simple Euler scheme, the accuracy of the approximation improves as $\alpha_0$ grows. Also, the running time with α = 2, $\alpha_0$ = 1.9 is rather long (even slower than the simple Euler scheme), since there are very many jumps forming the grid points. This was mentioned earlier, and it shows that the cost of the jump-adapted method is not stable, that is, it varies widely with the choice of $\alpha_0$.

(3) Graphs for the direct stable-process method:
Figure 3.7 Graph of $\hat u(0,x_0)$ with α = 0.6, $\alpha_0$ = 0.5
Figure 3.8 Graph of $\hat u(0,x_0)$ with α = 1.1, $\alpha_0$ = 1
Figure 3.9 Graph of $\hat u(0,x_0)$ with α = 2, $\alpha_0$ = 1.9

Again, we can see the improvement in accuracy as $\alpha_0$ approaches 2. However, the direct method seems to perform worse than the ε-cut methods, with either the simple Euler or the jump-adapted scheme, which may be unexpected. Actually, as we will see in the next section, the MSEs of all three methods are small, which implies that all the methods perform well enough; we cannot conclude that the stable method is the worst based on the relatively small differences. Also, the choice of the coefficient functions a(x), b(x), G(x) may be another factor behind the performance differences.

3.4.2 Mean Square Error (MSE) and Goodness of Fit

To compare the goodness of fit among different methods and parameter values more precisely, we introduce the mean square error. Suppose $U := \{\hat u_0^i\}_{i=1,\dots,L}$ is a sequence of simulations of $u(0,x_0)$, all with the same method and the same parameter values. The mean square error is defined as
$$MSE_U := \frac{1}{L}\sum_{i=1}^L\big(\hat u_0^i - u(0,x_0)\big)^2.$$
Now let L = 50, set the parameters as in Section 3.4.1, and consider α = 2, $\alpha_0$ = 1.9. We compute the mean square error of the three methods at the points $x_0 = 0, \frac{\pi}{2}, \pi$, respectively. The results are shown in the following table:

x_0     simple Euler    jump-adapted    direct stable
0       0.0024          0.0017          0.0031
π/2     0.0008          0.0014          0.0011
π       0.0004          0.0004          0.0006

Table 3.1 MSE for different $x_0$

When $x_0$ is closer to 0, the MSEs of all methods are larger, which matches the graphs of the last section. This may again be due to the choice of coefficient functions; since the MSEs are small in all cases of $x_0$, the differences may not be so important once random error is taken into account.

Next we compare the MSEs for different $\alpha_0$ = 1.9, 1, 0.5, with α = 2 and $x_0$ = 0 fixed:

α_0     simple Euler    jump-adapted    direct stable
1.9     0.0024          0.0017          0.0031
1       0.0058          0.0059          0.0042
0.5     0.0084          0.0061          0.0050

Table 3.2 MSE for different $\alpha_0$

As we can see, larger $\alpha_0$ does lead to smaller MSE. Now consider α = 2, 1.1, 0.6 with $\alpha_0$ = 0.5 and $x_0$ = 0 fixed:

α       simple Euler    jump-adapted    direct stable
2       0.0084          0.0061          0.0042
1.1     0.0084          0.0073          0.0074
0.6     0.0101          0.0086          0.0071

Table 3.3 MSE for different α

α also has some impact on the accuracy, as it switches the SDE model; however, its impact is relatively small compared with that of $\alpha_0$. Lastly, according to the convergence theorems of the former chapters, the parameter ε may also affect the accuracy. We compute MSEs for ε = 0.008, 0.005, 0.001, with $\alpha_0$ = 0.5, α = 0.6 and $x_0$ = 0 fixed. Since the direct stable method does not depend on ε, we omit it from the following table:

ε       simple Euler    jump-adapted
0.008   0.0067          0.0077
0.005   0.0084          0.0073
0.001   0.0086          0.0069

Table 3.4 MSE for different ε

The impact of ε is actually very small, as can be expected from the error-estimate term presented earlier.
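The MSE statistic above and the rate estimate (*) used in Section 3.4.3 are short enough to state as code; a minimal Python sketch (function names are illustrative assumptions):

```python
import math

def mse(estimates, true_value):
    """MSE_U = (1/L) * sum_i (u_hat_i - u(0, x_0))^2."""
    return sum((u - true_value) ** 2 for u in estimates) / len(estimates)

def rate(delta1, delta2, xi1, xi2):
    """Numerical convergence rate mu = ln(delta1/delta2) / ln(xi1/xi2),
    where delta1, delta2 are the MSEs observed at parameter values xi1, xi2."""
    return math.log(delta1 / delta2) / math.log(xi1 / xi2)
```

For instance, the jump-adapted pair ε = 0.1, 0.01 in Section 3.4.3 corresponds to `rate(d1, d2, 0.1, 0.01)` for the two observed MSEs d1, d2.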
In summary, we can see that the stable index $\alpha_0$ plays a great role in the accuracy of the approximation. Moreover, the MSEs of the three methods remain relatively small for all parameter values considered, which shows that our approximation is well acceptable. Some further results and discussion follow in the next section.

3.4.3 Further Discussion

(1) Goodness of fit for the whole paths

In Section 3.4.1, the graphs show that the direct stable method does not perform as well as expected, and our calculation of the MSE at some special values of $x_0$ led to the same conclusion. To compare the performance of the three methods along the whole path, we introduce a summary statistic similar to the $L^2$-norm on function spaces. Consider $x_0\in[-\pi,\pi]$, with true values $u(0,x_0)$ and approximations $\hat u(0,x_0)$, and take a partition of $[-\pi,\pi]$ with grid points $-\pi = x_0^0 < x_0^1 < \dots < x_0^N = \pi$. The mean square error over the whole path $\{x_0^i\}_{i=0,1,\dots,N}$ is defined as
$$MSE := \sqrt{\sum_{i=0}^N \frac{2\pi}{N}\big[u(0,x_0^i) - \hat u(0,x_0^i)\big]^2}.$$
Using the coefficient functions and parameter values of Sections 3.4.1 and 3.4.2, with α = 2, $\alpha_0$ = 1.9, we get
$$MSE_{simple} = 0.1076,\qquad MSE_{jump} = 0.0880,\qquad MSE_{direct} = 0.2445.$$
Again, the results agree with the graphs in 3.4.1: under our parameter settings, the direct method performs worse than the other two in the sense of whole-path approximation. The jump-adapted and simple Euler methods have close MSEs, so we cannot tell precisely which one is better.

(2) Precise estimates of the numerical convergence rate in the parameters

As shown in Section 3.4.2, the parameters $\alpha_0$, ε and n play an important role in the accuracy of the simulations. We now estimate the convergence rate in each of these parameters separately.
Given $\xi\in\{\alpha_0,\varepsilon,n\}$, suppose the MSE as a function of ξ is bounded by
$$|MSE| \le C\,\xi^\mu,$$
where C is a constant independent of the given ξ, and μ is the numerical rate of convergence we want to compute; μ also quantifies how large the effect of the given parameter is. Fixing the other parameters, denote by $\delta_1,\delta_2$ the MSEs obtained for $\xi_1,\xi_2$, respectively. Then
$$\mu = \ln\Big(\frac{\delta_1}{\delta_2}\Big)\Big/\ln\Big(\frac{\xi_1}{\xi_2}\Big), \tag{*}$$
and μ measures the improvement in accuracy when changing $\xi_1$ to $\xi_2$, with the other parameters unchanged. We now apply this discussion to the model of Section 3.4.1 and summarize the results as follows.

(a) Convergence rates of parameters in the jump-adapted method

Fix α = 2; we want to test the convergence rates in ε and $\alpha_0$, respectively. First, fixing $\alpha_0$ = 1.9 and taking ε = 0.1, 0.01, (*) gives μ ≈ 0.7442, which shows that the cut size ε does have an effect on the accuracy of the jump-adapted method: the smaller ε, the more accurate the result. Second, fixing ε = 0.01 and taking $\alpha_0$ = 1.9, 1.1, (*) gives μ ≈ −1.2000. This shows that $\alpha_0$ does play an important role in the accuracy of the jump-adapted method: the larger $\alpha_0$, the more accurate the result, and the improvement is relatively larger than that from ε.

(b) Convergence rates of parameters in the simple Euler method

For the simple Euler method, fix α = 2; we test ε, $\alpha_0$ and n, respectively. First, fixing $\alpha_0$ = 1.9, ε = 0.01 and taking n = 100, 500, (*) gives μ ≈ −0.0109, which shows that although larger n gives more accurate results, the improvement is actually really small, while the running time increases a lot when n is too large. Therefore it is not necessary to set n too large in the simple Euler method, which saves time and space. If instead we take n = 10, 100, we get μ ≈ −0.0400, which is again relatively small but four times larger than the former. For this reason, we usually set n = 100 in the following tests. Second, fixing n = 100, $\alpha_0$ = 1.9 and taking ε = 0.05, 0.01, we get μ ≈ 0.0212. Unlike the large μ obtained for the jump-adapted method, the μ here is quite small, showing that ε plays only a small role in the accuracy of the simple Euler method. Since smaller ε leads to more jumps being included in our simulations, and hence larger time and space costs, it is again unnecessary to set ε too small in the simple Euler method. Lastly, fixing n = 100, ε = 0.01 and taking $\alpha_0$ = 1.9, 1.1, we get μ ≈ −1.241. As in the jump-adapted method, μ is relatively large, which shows that $\alpha_0$ does play an important role in the accuracy of the simple Euler method: the larger $\alpha_0$, the more accurate the results.

(c) Convergence rates of parameters in the direct stable method

For the direct stable method, we fix α = 2 and test $\alpha_0$ and n, respectively. First, fixing $\alpha_0$ = 1.9 and taking n = 100, 500, we get μ ≈ −0.0112. As in the simple Euler method, larger n seems to have little effect on the accuracy of the direct stable method. However, taking n = 10, 100 gives μ ≈ −0.3513: unlike the simple Euler method, the improvement when n goes from 10 to 100 is actually much larger. Therefore, to balance accuracy and cost, n should be neither too small nor too large in the direct stable method. Second, fixing n = 100 and taking $\alpha_0$ = 1.9, 1.1, we get μ ≈ −1.310. So $\alpha_0$ is also an important parameter for the accuracy of the direct stable method, which is similar to the other two methods and corresponds to our earlier results.

3.4.4 Summary

In this chapter we ran a real simulation with given parameters and coefficient functions, and gave some further discussion of the accuracy of our simulations. One important result is that the stable index $\alpha_0$ is crucial for the accuracy of convergence: larger $\alpha_0$ always gives better approximations, no matter which method we apply.
Another important observation is that the direct stable method seems to be worse than the other two methods under our parameter settings. However, since the MSE of the stable method is also quite small (although a little larger than the other two), it is still a good approximation method, and we cannot call it the worst once random error is taken into account. Finally, the other parameters, n and ε, affect the accuracy differently in the different methods. For the simple Euler method, neither n nor ε is very significant for the accuracy, so a relatively small n and a larger ε should be chosen to control the time and space costs of the algorithm. For the jump-adapted method, ε is an important parameter for the accuracy, and we should set it as small as the costs allow. For the direct stable method, ε plays no role and large n matters little, while too small an n visibly hurts the accuracy; hence we should balance costs and accuracy more carefully when choosing n for the direct stable method.

CHAPTER 4: CONCLUSION

In this thesis we mainly discussed simulations of Lévy-driven SDEs, with both the simple Euler method and the jump-adapted Euler method. In the first two chapters, we presented the necessary theoretical preparations and stated the algorithms for simulating Brownian motion and the compound Poisson process, which are fundamental members of the Lévy process family. With these, we were able to simulate a Lévy process following the idea of the Lévy–Itô decomposition, and we presented a real simulation for a special Lévy process, the stable Lévy process. Moreover, as suggested in Cont & Tankov [1], we presented another method for simulating the stable Lévy process, which simulates the increments directly instead of using a decomposition.
Finally, we studied the simulation of Lévy driven SDEs and used the backward Kolmogorov equation to check the accuracy of our approximations. Since we chose "nice" coefficient functions a(x), b(x), G(x), namely smooth functions with bounded derivatives, as well as a "nice" Lévy measure, the results were as good as expected. In fact, as shown in [2], when the Lévy measure is the stable measure, the integral part of the generator,

Iv(x) = ∫ [v(x + G(x)y) − v(x)] C dy/|y|^(1+λ) = G(x)^λ ∫ [v(x + y) − v(x)] C dy/|y|^(1+λ),

is differentiable without the strong tail-moment assumption of Theorems 3.1.1 and 3.1.2, which leads to convergent approximations. In the last section of Chapter 3, we compared our simulations with the true values graphically and applied the MSE method to measure the impact of the different parameters. In the stable Lévy case, the stable parameter α₀ is an important factor that affects both the total number of jumps (which in turn affects the running time) and the accuracy of the simulation. Other parameters, such as n and ε, also have some effect on the accuracy, and one should always balance cost against accuracy before choosing proper parameters.

APPENDIX: MATLAB CODES FOR SOME ALGORITHMS

1. Simulation of Brownian motion

%Simulate a standard Brownian motion B(t), t<=10, grid h=10/n
%input n=100,1000,10000 separately
clear
n = input('input n:');
T = 10;
h = T/n;            %grid length
N = n;              %total intervals
Ksi = randn(1,N);   %Simulate B(t) with t=0,h,2*h,...,10
t(1) = 0;
B(1) = 0;           %Start point
for i = 2:(N+1)
    t(i) = (i-1)*h;
    B(i) = B(i-1) + Ksi(i-1)*sqrt(h);
end
plot(t,B);
axis([min(t) max(t) min(B) max(B)]);

2. Simulation of Compound Poisson Process and Lévy process (included in the simulation of u(0, x_0) below)

3. Simulation of u(0, x_0), Simple Euler Scheme

function approximate = simpleEuler_u_cos(x_0,epsilon,alpha,alpha0,N_e)
%Simulate u0(0,x_0)
%Kolmogorov function u0(t,x) = (t-T)cos(x)
%a(x) = -sin(x)*(cos(x))^3
%b(x) = (cos(x))^2
%G(x) = 1/(1+x^2), see def. of G(x)
%N_e is the total number of replications when simulating the expectation
%simple Euler scheme
T = 1;
n = 100;
h = T/n;
while (alpha <= alpha0)
    disp('parameter error! alpha should be greater than alpha0!');
    alpha0 = input('re-enter stable parameter alpha0 in (0,2):');
end
if alpha0 ~= 1
    C_s = (alpha0*(1-alpha0))/(2*gamma(2-alpha0)*cos(pi*alpha0/2));
else
    C_s = 1/pi;
end
lumda = C_s*(2/alpha0)*(epsilon^(-alpha0));   %parameters
result = zeros(1,N_e);                        %save result for each replication
for runningtime = 1:N_e   %Simulate N_e times to get average
    %firstly simulate Z_t_epsilon
    %By computation, F(t) = 0.5*(-epsilon/t)^(alpha0)       if t < -epsilon
    %                F(t) = 0.5                             if -epsilon <= t <= epsilon
    %                F(t) = 1 - 0.5*(epsilon/t)^(alpha0)    if t > epsilon
    %simulate i.i.d. jump times (exponential(lumda)) until T
    k = 1;
    clear v; clear tao; clear jumptime; clear Totaljump;
    v = rand(1,1);
    tao(1) = (-1/lumda)*log(1-v);   %first jump time
    while tao(k) < T
        k = k+1;
        v = rand(1,1);
        jumptime = (-1/lumda)*log(1-v);
        tao(k) = tao(k-1) + jumptime;
    end
    Totaljump = k-1;
    %0 < tao(1) < tao(2) < ... < tao(Totaljump) < T <= tao(Totaljump+1)
    %Simulate jump sizes Y0(i) using the inverse method
    clear u; clear Y0;
    u = rand(1,Totaljump);
    for i = 1:Totaljump
        if u(i) == 0.5
            Y0(i) = -epsilon;
        elseif u(i) < 0.5
            Y0(i) = (-epsilon)*((2*u(i))^(-1/alpha0));
        else
            Y0(i) = epsilon*((2*(1-u(i)))^(-1/alpha0));
        end
    end
    clear t; clear deltaZ;
    t = [0:h:T];
    deltaZ(1) = 0;   %deltaZ(i) = Z(t(i)) - Z(t(i-1)) with deltaZ(1) = 0
    i = 2;
    k = 0;
    while (tao(1) > t(i))
        deltaZ(i) = 0;
        i = i+1;
    end
    firstjump = i;
    partition = 1;   %record where we are in tao(.); tao(0) means before the first jump
    for i = firstjump:(n+1)
        jumps = 0;   %count jumps in (t(i-1),t(i)]
        k = partition;
        while (tao(k) > t(i-1) && tao(k) <= t(i))
            jumps = jumps + 1;
            k = k+1;
        end
        deltaZ(i) = sum(Y0(partition:(partition+jumps-1)));
        partition = k;
    end
    clear Z0;
    Z0(1) = 0;
    for i = 2:(n+1)
        Z0(i) = sum(deltaZ(1:i));
    end
    %Simulate Brownian motion B_t_epsilon
    clear W0; clear B0;
    W0 = randn(1,n);
    B0 = zeros(1,n+1);
    for i = 2:(n+1)
        for j = 1:(i-1)
            B0(i) = B0(i) + sqrt(C_s*h*2*(epsilon^(2-alpha0))/(2-alpha0)) * W0(j);
        end
    end
    %Simulate Z_t according to different alpha
    clear Z;
    if alpha <= 1
        Z = Z0;
    else
        Z = B0 + Z0;
    end
    %Now simulate Y_i at time point t_i
    clear Y;
    Y(1) = x_0;        %starting point same as X_0
    W = randn(1,n);    %independent of W0
    for i = 2:(n+1)
        if alpha < 1
            Y(i) = Y(i-1) + G(Y(i-1))*(Z(i)-Z(i-1));
        elseif alpha < 2
            Y(i) = Y(i-1) + h*a(Y(i-1)) + G(Y(i-1))*(Z(i)-Z(i-1));
        else
            Y(i) = Y(i-1) + h*a(Y(i-1)) + b(Y(i-1))*sqrt(h)*W(i-1) + G(Y(i-1))*(Z(i)-Z(i-1));
        end
    end
    %Now calculate int(f(t,X_t)) to get result(runningtime)
    for i = 1:n
        result(runningtime) = result(runningtime) - h*cos(Y(i)) - 0.5*h*(t(i+1)+t(i)-2*T)*f1(alpha,alpha0,Y(i));
    end
end
approximate = mean(result);
end

4. Simulation of u(0, x_0), Jump-adapted Scheme

function approximate = jumpadapted_u_cos_revised(x_0,epsilon,maxcut,alpha,alpha0,N_e)
%Simulate u0(0,x_0)
%Kolmogorov function u0(t,x) = (t-T)cos(x)
%a(x) = -sin(x)*(cos(x))^3
%b(x) = (cos(x))^2
%G(x) = 1/(1+x^2), see def. of G(x)
%N_e is the total number of replications when simulating the expectation
%jump-adapted Euler scheme
%maxcut is the same as delta in our algorithm
%grid points are the jump moments, plus those points restricted by (2.5)
T = 1;
while (alpha <= alpha0)
    disp('parameter error! alpha should be greater than alpha0!');
    alpha0 = input('re-enter stable parameter alpha0 in (0,2):');
end
if alpha0 ~= 1
    C_s = (alpha0*(1-alpha0))/(2*gamma(2-alpha0)*cos(pi*alpha0/2));
else
    C_s = 1/pi;
end
lumda = C_s*(2/alpha0)*(epsilon^(-alpha0));   %parameters
result = zeros(1,N_e);                        %save result for each replication
for runningtime = 1:N_e   %Simulate N_e times to get average
    %firstly simulate Z_t_epsilon
    %By computation, F(t) = 0.5*(-epsilon/t)^(alpha0)       if t < -epsilon
    %                F(t) = 0.5                             if -epsilon <= t <= epsilon
    %                F(t) = 1 - 0.5*(epsilon/t)^(alpha0)    if t > epsilon
    %simulate i.i.d. jump times (exponential(lumda)) until T
    k = 1;
    clear v; clear tao; clear jumptime; clear Totaljump;
    v = rand(1,1);
    tao(1) = (-1/lumda)*log(1-v);   %first jump time
    while tao(k) < T
        k = k+1;
        v = rand(1,1);
        jumptime = (-1/lumda)*log(1-v);
        tao(k) = tao(k-1) + jumptime;
    end
    Totaljump = k-1;
    %0 < tao(1) < tao(2) < ... < tao(Totaljump) < T <= tao(Totaljump+1)
    %Simulate jump sizes Y0(i) using the inverse method
    clear u; clear Y0;
    u = rand(1,Totaljump);
    for i = 1:Totaljump
        if u(i) == 0.5
            Y0(i) = -epsilon;
        elseif u(i) < 0.5
            Y0(i) = (-epsilon)*((2*u(i))^(-1/alpha0));
        else
            Y0(i) = epsilon*((2*(1-u(i)))^(-1/alpha0));
        end
    end
    clear t; clear deltaZ;
    t(1) = 0;
    maxgrids = 1;       %Initially we have the grid point t(1) = 0
    jump_pointer = 1;   %Points to the jump moment we are facing (i.e. before tao(jump_pointer))
    deltaZ(1) = 0;      %deltaZ(i) = Z(t(i)) - Z(t(i-1)) with deltaZ(1) = 0
    while t(maxgrids) < T
        maxgrids = maxgrids + 1;
        temp = t(maxgrids-1) + maxcut;
        if (jump_pointer > Totaljump)   %Already past the last jump moment
            if temp <= T
                t(maxgrids) = temp;
                deltaZ(maxgrids) = 0;
            else
                t(maxgrids) = T;
                deltaZ(maxgrids) = 0;
            end
        else
            if temp > tao(jump_pointer)
                t(maxgrids) = tao(jump_pointer);
                deltaZ(maxgrids) = Y0(jump_pointer);
                jump_pointer = jump_pointer + 1;
            else
                t(maxgrids) = temp;
                deltaZ(maxgrids) = 0;   %no jumps in this case
            end
        end
    end
    %0 = t(1) < t(2) < ... < t(maxgrids) = T
    clear Z0;
    Z0(1) = 0;
    for i = 2:maxgrids
        Z0(i) = sum(deltaZ(1:i));
    end
    %Simulate Brownian motion B_t_epsilon
    clear W0; clear B0;
    W0 = randn(1,maxgrids-1);
    B0 = zeros(1,maxgrids);
    for i = 2:maxgrids
        B0(i) = B0(i-1) + sqrt(C_s*(t(i)-t(i-1))*2*(epsilon^(2-alpha0))/(2-alpha0))*W0(i-1);
    end
    %Simulate Z_t according to different alpha
    clear Z;
    if alpha <= 1
        Z = Z0;
    else
        Z = B0 + Z0;
    end
    %Now simulate Y_i at time point t_i
    clear Y;
    Y(1) = x_0;                %starting point same as X_0
    W = randn(1,maxgrids-1);   %independent of W0
    for i = 2:maxgrids
        if alpha < 1
            Y(i) = Y(i-1) + G(Y(i-1))*(Z(i)-Z(i-1));
        elseif alpha < 2
            Y(i) = Y(i-1) + (t(i)-t(i-1))*a(Y(i-1)) + G(Y(i-1))*(Z(i)-Z(i-1));
        else
            Y(i) = Y(i-1) + (t(i)-t(i-1))*a(Y(i-1)) + b(Y(i-1))*sqrt(t(i)-t(i-1))*W(i-1) + G(Y(i-1))*(Z(i)-Z(i-1));
        end
    end
    %Now calculate int(f(t,X_t)) to get result(runningtime)
    for i = 2:maxgrids
        result(runningtime) = result(runningtime) - (t(i)-t(i-1))*cos(Y(i-1)) - 0.5*(t(i)-t(i-1))*(t(i)+t(i-1)-2*T)*f1(alpha,alpha0,Y(i-1));
    end
end
approximate = mean(result);
end

5. Simulation of u(0, x_0), Direct Stable Process Scheme

function approximate = stableLevy_u_cos(x_0,alpha,alpha0,N_e,n)
%Simulate u0(0,x_0)
%Kolmogorov function u0(t,x) = (t-T)cos(x)
%a(x) = -sin(x)*(cos(x))^3
%b(x) = (cos(x))^2
%G(x) = 1/(1+x^2), see def. of G(x)
%N_e is the total number of replications when simulating the expectation
%Direct stable-Levy method scheme
T = 1;
h = T/n;
while (alpha <= alpha0)
    disp('parameter error! alpha should be greater than alpha0!');
    alpha0 = input('re-enter stable parameter alpha0 in (0,2):');
end
result = zeros(1,N_e);   %save result for each replication
for runningtime = 1:N_e   %Simulate N_e times to get average
    clear u; clear v; clear gamma0; clear W;
    %simulate gamma0(i) uniform in (-pi/2,pi/2)
    u = rand(1,n);
    v = rand(1,n);
    gamma0 = (u-0.5)*pi;
    %simulate n standard exponential r.v. (exp(1))
    %apply inverse method, F^(-1)(x) = -ln(1-x)
    for i = 1:n
        W0(i) = -log(1-v(i));
    end
    clear t; clear delta; clear a0 b0 c0 d0;
    %compute delta(i), where delta_ti = Z_ti - Z_t(i-1)
    t(1) = 0;       %initially set t(1) = t_0 = 0
    delta(1) = 0;   %initially set delta(1) = delta_0 = Z_0 = 0
    for i = 2:(n+1)
        t(i) = (i-1)*h;
        a0(i) = h^(1/alpha0);
        b0(i) = sin(alpha0*gamma0(i-1));
        c0(i) = (cos(gamma0(i-1)))^(1/alpha0);
        d0(i) = ((cos((1-alpha0)*gamma0(i-1)))/W0(i-1))^((1-alpha0)/alpha0);
        delta(i) = ((a0(i)*b0(i))/c0(i))*d0(i);
    end
    clear Z;
    %compute Z(i) = sum of delta up to i
    Z(1) = 0;
    for i = 2:(n+1)
        Z(i) = sum(delta(1:i));
    end
    %Now simulate Y_i at time point t_i
    clear Y;
    Y(1) = x_0;        %starting point same as X_0
    W = randn(1,n);    %independent of W0
    for i = 2:(n+1)
        if alpha < 1
            Y(i) = Y(i-1) + G(Y(i-1))*(Z(i)-Z(i-1));
        elseif alpha < 2
            Y(i) = Y(i-1) + h*a(Y(i-1)) + G(Y(i-1))*(Z(i)-Z(i-1));
        else
            Y(i) = Y(i-1) + h*a(Y(i-1)) + b(Y(i-1))*sqrt(h)*W(i-1) + G(Y(i-1))*(Z(i)-Z(i-1));
        end
    end
    %Now calculate int(f(t,X_t)) to get result(runningtime)
    for i = 1:n
        result(runningtime) = result(runningtime) - h*cos(Y(i)) - 0.5*h*(t(i+1)+t(i)-2*T)*f1(alpha,alpha0,Y(i));
    end
end
approximate = mean(result);
end

BIBLIOGRAPHY
[1] Rama Cont and Peter Tankov, Financial Modeling with Jump Processes, Chapman & Hall/CRC Financial Mathematics Series, 2004.
[2] R. Mikulevičius, On the rate of convergence of simple and jump-adapted weak Euler schemes for Lévy driven SDEs, Stochastic Processes and their Applications 122 (2012) 2730-2757.
[3] R. Mikulevičius and C. Zhang, On the rate of convergence of weak Euler approximation for nondegenerate SDEs driven by Lévy processes, Stochastic Processes and their Applications (2011), doi:10.1016/j.spa.2011.04.004.
[4] Takashi Komatsu, On the martingale problem for generators of stable processes with perturbations, Osaka J. Math. 21 (1984) 113-132.
[5] Mark M. Meerschaert and Alla Sikorskii, Stochastic Models for Fractional Calculus, De Gruyter Studies in Mathematics 43, 2012.
[6] R. Mikulevičius, Rate of convergence of the Euler approximation for diffusion processes, Math. Nachr. 151 (1991) 233-239.
[7] Mitya Boyarchenko, Fast Simulation of Lévy Processes, Department of Mathematics, University of Michigan, Ann Arbor.
[8] J. M. Chambers, C. L. Mallows, and B. W. Stuck, A method for simulating stable random variables, J. Amer. Stat. Assoc. 71 (1976) 340-344.
[9] G. Samorodnitsky and M. Taqqu, Stable Non-Gaussian Random Processes, Chapman & Hall, New York, 1994.
[10] Nicola Bruti-Liberati, Numerical Solution of Stochastic Differential Equations with Jumps in Finance, PhD thesis, University of Technology, Sydney, 2007.
[11] P. Jorion, On jump processes in the foreign exchange and stock markets, Rev. Financial Studies 1 (1988) 427-445.
[12] M. Johannes, The statistical and economic role of jumps in continuous-time interest rate models, Journal of Finance 59(1) (2004) 227-260.
[13] R. Mikulevičius and E. Platen, Time discrete Taylor approximations for Ito processes with jump component, Math. Nachr. 138 (1983) 93-104.
[14] X. Q. Liu and C. W. Li, Weak approximation and extrapolations of stochastic differential equations with jumps, SIAM J. Numer. Anal. 37(6) (2000) 1747-1767.
[15] Vigirdas Mackevičius, Introduction to Stochastic Analysis: Integrals and Differential Equations, ISTE Ltd, 2011.
[16] Peter Tankov, Financial Modeling with Lévy Processes, École thématique, Banach Center, Institute of Mathematics of the Polish Academy of Sciences (IMPAN), Warsaw, 2010, 101 pp. <cel-00665021>