Tamed and Truncated Numerical Methods for Stochastic Differential Equations

by

Jiawei Wang

A Thesis Presented to the Faculty of the Dornsife College of Letters, Arts and Sciences
University of Southern California
In Partial Fulfillment of the Requirements for the Degree
MASTER OF SCIENCE
APPLIED MATHEMATICS

May 2021

Copyright 2021 Jiawei Wang

Contents

List of Tables
List of Figures
Abstract
1 Introduction
2 Stochastic Differential Equations
  2.1 The Definition
  2.2 Strong and Weak Solutions of SDEs
  2.3 Existence and Uniqueness of Strong and Weak Solutions
3 Numerical Methods for Stochastic Differential Equations
  3.1 Strong and Weak Convergence of Numerical Solutions
  3.2 The Runge-Kutta Method
  3.3 The Tamed Runge-Kutta Method
  3.4 The Truncated Runge-Kutta Method
4 Stochastic Differential Equations with Regular Coefficients
  4.1 Assumptions
  4.2 Numerical Examples
  4.3 Comparison Using Numerical Example between Numerical Methods for SDEs with Regular Coefficients
5 Stochastic Differential Equations with Non-regular Coefficients
  5.1 Assumptions
  5.2 Numerical Examples
  5.3 Comparison Using Numerical Example between Numerical Methods for SDEs with Non-regular Coefficients
References
Appendices
  A MATLAB Codes for Chapter 2
  B MATLAB Codes for Chapter 3
  C MATLAB Codes for Chapter 4
  D MATLAB Codes for Chapter 5

List of Tables

4.1 Table of Mean Square Error vs. Step-size for Equation (4.3).
4.2 Table of Mean Square Error vs. Step-size with Different θ for Equation (4.3).
4.3 Table of Mean Square Error vs. Step-size with Different K for Equation (4.3).
4.4 Table of Mean Square Error vs. Step-size with Different φ for Equation (4.3).
5.1 Table of Mean Square Error vs. Step-size for Equation (5.3).

List of Figures

2.1 A Sample Trajectory of Wiener Process.
3.1 A Sample Trajectory of Exact Solution and Euler Method for Equation (3.1).
3.2 The Graph for a Typical Truncation Mapping π_Δt.
4.1 A Sample Trajectory of Exact Solution for Equation (4.3).
4.2 A Sample Trajectory of Exact Solution and Euler Method for Equation (4.3).
4.3 A Sample Trajectory by the Tamed Runge-Kutta Method for Equation (4.3).
4.4 A Sample Trajectory by the Truncated Runge-Kutta Method for Equation (4.3).
4.5 A Sample Trajectory by the Tamed and Truncated Runge-Kutta Method for Equation (4.3).
4.6 A Sample Trajectory of Simulated True Solution for Equation (4.8).
4.7 A Sample Trajectory by the Tamed Runge-Kutta Method for Equation (4.8).
4.8 A Sample Trajectory by the Truncated Runge-Kutta Method for Equation (4.8).
4.9 A Sample Trajectory by the Tamed and Truncated Runge-Kutta Method for Equation (4.8).
4.10 Log-log Plot of Mean Square Error vs. Runtime for Equation (4.3).
4.11 Log-log Plot of Mean Square Error vs. Step-size for Equation (4.3).
4.12 Log-log Plot of Mean Square Error vs. Step-size with Different θ for Equation (4.3).
4.13 Log-log Plot of Mean Square Error vs. Step-size with Different K for Equation (4.3).
4.14 Log-log Plot of Mean Square Error vs. Step-size with Different φ for Equation (4.3).
5.1 A Sample Trajectory of Exact Solution for Equation (5.3).
5.2 A Sample Trajectory of Exact Solution and Euler Method for Equation (5.3).
5.3 A Sample Trajectory of Exact Solution and Tamed Runge-Kutta Method for Equation (5.3).
5.4 A Sample Trajectory of Exact Solution and Truncated Runge-Kutta Method for Equation (5.3).
5.5 A Sample Trajectory of Exact Solution and Tamed and Truncated Runge-Kutta Method for Equation (5.3).
5.6 A Sample Trajectory of Exact Solution for Equation (5.9).
5.7 A Sample Trajectory of Exact Solution and Euler Method for Equation (5.9).
5.8 A Sample Trajectory of Exact Solution and Tamed Runge-Kutta Method for Equation (5.9).
5.9 A Sample Trajectory of Exact Solution and Truncated Runge-Kutta Method for Equation (5.9).
5.10 A Sample Trajectory of Exact Solution and Tamed and Truncated Runge-Kutta Method for Equation (5.9).
5.11 Log-log Plot of Mean Square Error vs. Runtime for Equation (5.3).
5.12 Log-log Plot of Mean Square Error vs. Step-size for Equation (5.3).

Abstract

Explicit numerical methods with strong convergence, such as the tamed Euler-Maruyama method and the truncated Euler-Maruyama method, have been developed to find solutions to stochastic differential equations. In this thesis, we study the Runge-Kutta method and introduce the tamed and truncated Runge-Kutta methods for one-dimensional stochastic differential equations with globally Lipschitz continuous coefficients and other coefficients.

1 Introduction

It is well known and proved by Kloeden and Platen [1] that a stochastic differential equation (SDE) whose coefficients satisfy the Lipschitz condition and the linear growth condition has a unique strong solution. Furthermore, the Euler-Maruyama approximation attains a strong order of 0.5 for SDEs under such conditions. However, in many financial and biological stochastic models, diffusion coefficients of SDEs may be non-Lipschitz and drift coefficients may be non-linear. Under such conditions, the Euler-Maruyama approximation may not converge to the true solution of an SDE even if the true solution is finite. Hence, some explicit numerical methods have been developed for nonlinear SDEs. In [2], Hutzenthaler, Jentzen, and Kloeden proposed the tamed Euler method for SDEs with non-globally Lipschitz continuous coefficients. In [3], Mao developed the truncated Euler method for nonlinear SDEs and proved its strong convergence. Besides the Euler-Maruyama method, the Runge-Kutta method is also a traditional explicit scheme. Motivated by the tamed and truncated Euler methods, we aim to introduce the truncated Runge-Kutta method for one-dimensional SDEs. The first goal is to approximate solutions of SDEs with super-linearly growing drift coefficients.
In addition, we want to determine whether the truncated Runge-Kutta method approximates solutions of SDEs with linear coefficients more efficiently than the Euler-Maruyama method does. Furthermore, we will compare the truncated Runge-Kutta method, the tamed Runge-Kutta method, and their combination.

This thesis is organized as follows. Chapter 2 discusses some preliminaries of SDEs. Chapter 3 introduces numerical methods for SDEs, the tamed Runge-Kutta method, and the truncated Runge-Kutta method. Chapter 4 applies these numerical methods to SDEs with globally Lipschitz continuous coefficients. Chapter 5 applies the same numerical methods to other SDEs.

2 Stochastic Differential Equations

2.1 The Definition

The Basic Setting

We use the following notation. Let 𝔽 = (Ω, F, {F_t}_{t≥0}, P) be a stochastic basis satisfying the usual conditions, that is, P-completeness of F_0 and right continuity of F_t, so that

∩_{ε>0} F_{t+ε} = F_t,  t ≥ 0.

A standard Wiener process (also called Brownian motion) on 𝔽 is defined as a random process W = {W(t), t ≥ 0} satisfying the following properties:

1. W(0) = 0.
2. W has continuous trajectories.
3. For each 0 ≤ s < t, W(t) - W(s) is normally distributed with expectation 0 and variance t - s, so that (W(t) - W(s))/√(t - s) ~ N(0, 1), where N(0, 1) is the standard Gaussian distribution.
4. For 0 ≤ s < t < a < b ≤ T, the increments W(t) - W(s) and W(b) - W(a) are independent.

The Wiener process is also a continuous square-integrable martingale, with

W(0) = 0,  E[(W(t) - W(s))² | F_s] = t - s,  t ≥ s.

Using these properties, we can simulate a standard Wiener process in MATLAB. Consider a discretization of the time interval [0, T],

0 = t_0 < t_1 < t_2 < ... < t_{N-2} < t_{N-1} < t_N = T,

with uniform step-size T/N, so that t_i = iT/N for i = 0, 1, ..., N. Then, by the Gaussian increment property of the Wiener process, we have W(t_{i+1}) - W(t_i) ~ √(T/N) N(0, 1). In Figure 2.1, an example trajectory of the Wiener process is simulated by MATLAB. We use N = 1000, T = 1, Δt = 1/1000.
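The thesis's simulations are written in MATLAB (Appendix A); as an illustrative translation only, the grid and Gaussian increments just described can be sketched in Python with NumPy, using the same N = 1000 and T = 1 (function and variable names here are my own, not from the thesis):

```python
import numpy as np

def simulate_wiener(T=1.0, N=1000, seed=0):
    """Simulate W(t_i) on the uniform grid t_i = i*T/N by taking
    cumulative sums of independent N(0, dt) increments, with W(0) = 0."""
    rng = np.random.default_rng(seed)
    dt = T / N
    dW = np.sqrt(dt) * rng.standard_normal(N)   # increments W(t_{i+1}) - W(t_i)
    W = np.concatenate(([0.0], np.cumsum(dW)))  # cumulative sum gives the path
    t = np.linspace(0.0, T, N + 1)
    return t, W

t, W = simulate_wiener()
```

Plotting W against t reproduces a picture of the kind shown in Figure 2.1; a fixed seed makes the sample path reproducible.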
The MATLAB code is attached in Appendix A.1.

Figure 2.1: A Sample Trajectory of Wiener Process.

Now let f = f(x), g = g(x), x ∈ ℝ, be measurable real-valued functions such that

∫_0^T ( |f(p(t))| + |g(p(t))|² ) dt < ∞   (2.1)

for every T > 0 and every p ∈ C([0, T]), that is, every p continuous on [0, T].

The Definition of Stochastic Differential Equations

With the above basic setting, we define SDEs as follows.

Definition 2.1. With the above basic concepts, the equation

dX(t) = f(X(t))dt + g(X(t))dW(t)   (2.2)

with initial value X(0) = x_0 is called an Itô stochastic differential equation (SDE). The initial value X(0) = x_0 ∈ F_0; that is, X(0) is F_0-measurable. The coefficient functions in the integral parts, f and g, satisfy (2.1). The function f(X(t)) is called the drift coefficient, which belongs to the part with deterministic increments. The function g(X(t)) is called the diffusion coefficient, which amplifies the increments led by the Wiener process.

Equation (2.2) can also be written as an Itô process X = {X(t), t ≥ 0} of the form

X(t) = X(0) + ∫_0^t f(X(s))ds + ∫_0^t g(X(s))dW(s);   (2.3)

it is then called a stochastic integral equation. The second integral in equation (2.3) is an Itô stochastic integral with respect to the Wiener process W = {W(t), t ≥ 0}.

Example 2.2. The stochastic integral equation

X(t) = X(0) + ∫_0^t aX(s)ds + ∫_0^t bX(s)dW(s),   (2.4)

where a and b are constants, has a closed-form solution obtained through Itô calculus:

X(t) = X(0) e^{(a - b²/2)t + bW(t)}.   (2.5)

This exact solution is the only solution of equation (2.4) with the given initial value and the prescribed Wiener process W in this example. Formal definitions of the solutions of SDEs are discussed in the next section.

2.2 Strong and Weak Solutions of SDEs

Before introducing the definitions of solutions of SDEs, we first discuss some conditions on the drift and diffusion coefficients.
If the diffusion coefficient g in equation (2.3) is 0, then equation (2.3) becomes an ordinary differential equation dX(t) = f(X(t))dt with initial value X(0) = x_0. Instead of g = 0, we want to assume that the noise part ∫_0^t g(X(s))dW(s) in equation (2.3) is not zero. Hence, we assume that the diffusion coefficient g satisfies the following condition:

g² ∈ C(ℝ) and g²(x) > 0, ∀x ∈ ℝ.   (2.6)

Another condition on the coefficients of an SDE ensures non-explosion of both drift and diffusion coefficients:

|f(x)| + |g(x)| ≤ C, ∀x ∈ ℝ,   (2.7)

where C > 0 is a constant. Condition (2.7) implies condition (2.1); condition (2.1) is necessary to define the right-hand side of (2.3). Other conditions that ensure non-explosion can be used instead of condition (2.7). Two main examples are linear growth and monotonicity:

|f(x)| + |g(x)| ≤ C(1 + |x|), ∀x ∈ ℝ,   (2.8)

xf(x) + (g(x))² ≤ C(1 + x²), ∀x ∈ ℝ,   (2.9)

where C > 0 is a constant. We assume that equation (2.3) satisfies all basic settings and two conditions: (2.6) and one of the non-explosion conditions. Under these conditions on the drift and diffusion coefficients, the strong and weak solutions of equation (2.3) are defined as follows.

Definition 2.3. A weak solution, sometimes also called a martingale solution, is a stochastic basis satisfying the usual conditions, a Brownian motion on this basis, and a continuous F_t-adapted process X = X(t) satisfying (2.3) with probability one, for all t > 0.

Definition 2.4. Given the stochastic basis and the Wiener process, a strong solution X = X(t) of (2.3) is a continuous F_t-adapted process satisfying (2.3) with probability one, for all t > 0.

Remark 2.5. The main difference between weak and strong solutions is that a strong solution is determined by the initial condition and the Wiener process, while the initial condition and the Wiener process might not be enough to define a weak solution. That is, a strong solution satisfies σ(X(s), 0 ≤ s ≤ t) ⊆ σ(X(0), W(s), 0 ≤ s ≤ t), while a weak solution satisfies σ(X(0), W(s), 0 ≤ s ≤ t) ⊆ σ(X(s), 0 ≤ s ≤ t).
The following example shows that some SDEs have only weak solutions and no strong solution [1].

Example 2.6. The SDE

dX(t) = sign(X(t))dW(t)   (2.10)

with initial value X(0) = 0, where sign(x) = 1 if x ≥ 0 and sign(x) = -1 if x < 0, has only weak solutions and no strong solution.

Let B be a Wiener process on 𝔽 = (Ω, F, {F_t}_{t≥0}, P). Define

X(t) = B(t),  W(t) := ∫_0^t sign(B(s))dB(s).

This is a weak solution of equation (2.10), since dW(t) = sign(X(t))dX(t) implies dX(t) = sign(X(t))dW(t). With this weak solution, we have

W(t) = ∫_0^t sign(X(s))dX(s).

This implies that the natural filtration of the process W is equal to the natural filtration of the process |X|; that is, σ(|X|) = σ(W) (by [4, Cor. VI.2.2]). Therefore, σ(X(0), W(s), 0 ≤ s ≤ t) is strictly smaller than σ(X(s), 0 ≤ s ≤ t). Hence, by Remark 2.5, there does not exist a strong solution of equation (2.10).

As a result of the different definitions of strong and weak solutions, we consider different definitions of uniqueness for strong and weak solutions.

Definition 2.7. Strong uniqueness, also known as path-wise uniqueness, on [0, T] for equation (2.3) means

P( sup_{0≤t≤T} |X(t) - Y(t)| > 0 ) = 0

for every two solutions X = X(t) and Y = Y(t) satisfying (2.3) with X(0) = Y(0) and the same Brownian motion W on the same filtered probability space.

Definition 2.8. Weak uniqueness on [0, T] for equation (2.3) means that every weak solution, as a random element with values in C([0, T]), has the same distribution.

With these four definitions, there are 16 = 2⁴ combinations to consider, namely whether existence and uniqueness of strong and weak solutions hold for (2.3); see [5, Table 1.1].

2.3 Existence and Uniqueness of Strong and Weak Solutions

For an ordinary differential equation dx/dt = f(x) with initial value x(0) = x_0, the Lipschitz condition is a sufficient condition for the existence and uniqueness of a solution:

|f(x) - f(y)| ≤ K|x - y|

for all x, y ∈ ℝ, where K is a positive constant.
A growth bound condition is assumed to ensure the global existence of a solution of the ordinary differential equation. Similarly, Kloeden and Platen [1] make the following well-known assumptions on the coefficients f, g of (2.3).

Assumption 2.9 (Global Lipschitz Condition, Assumption 4.2 [1]). There exists a constant K > 0 such that

|f(x) - f(y)| ≤ K|x - y|,  |g(x) - g(y)| ≤ K|x - y|

for all x, y ∈ ℝ.

Assumption 2.10 (Linear Growth Bound, Assumption 4.3 [1]). There exists a constant K > 0 such that

|f(x)|² ≤ K(1 + |x|²),  |g(x)|² ≤ K(1 + |x|²)

for all x ∈ ℝ.

If a function f satisfies the global Lipschitz condition with constant K > 0, then

|f(x) - f(0)| ≤ K|x - 0|  ⟹  |f(x)| ≤ |f(0)| + K|x|.

That is, f also satisfies a linear growth condition. By Kloeden and Platen [1], the existence and uniqueness of the solution of a stochastic differential equation does not require the growth bound, but without a growth bound such as Assumption 2.10, the sample path of the solution may blow up in finite time.

Theorem 2.11 (Theorem 4.5.3 [1]). Assume that the SDE (2.3) satisfies all basic settings in Section 2.1. If Assumption 2.9 holds, then equation (2.3) has a path-wise unique strong solution X(t) on [0, T] with

sup_{0≤t≤T} E(|X(t)|²) < ∞.

Besides this theorem, there are more theorems about the existence and uniqueness of strong and weak solutions of SDEs. For an SDE (2.3) satisfying all basic settings in Section 2.1 and the two conditions, (2.6) and one of the non-explosion conditions, there are only two possibilities: either (2.3) has a unique strong solution on every stochastic basis with a given Wiener process, or (2.3) does not have a unique strong solution regardless of the choice of stochastic basis and Brownian motion, by [6, Section 2.1]. By [5, Proposition 1.7], a well-known result of Yamada and Watanabe about strong solutions of SDEs is stated as follows.

Theorem 2.12 (Yamada-Watanabe Theorem).
For the SDE (2.3), path-wise uniqueness together with the existence of a weak solution implies the existence and uniqueness of a strong solution.

Example 2.13. Let W be a Wiener process on 𝔽. Consider the SDE dX(t) = dW(t) with initial value X(0) = 0, and let X(t) = W(t). Then this is a weak solution. With drift coefficient f = 0 and diffusion coefficient g = 1, the SDE satisfies all basic settings in Section 2.1. The coefficients f, g also satisfy Assumptions 2.9 and 2.10. Then, by Theorem 2.11, strong uniqueness holds for this SDE. Therefore, by the Yamada-Watanabe Theorem, the SDE dX(t) = dW(t) with initial value X(0) = 0 has a global strong solution.

Theorem 2.14. If (2.6) holds, then equation (2.3) has a unique weak solution as long as any of the non-explosion conditions (2.7), (2.8), or (2.9) holds (see [7, Theorem 5.6] for the proof under condition (2.7)). If g² > 0, then equation (2.3) has a unique weak solution as long as any of the non-explosion conditions (2.7), (2.8), or (2.9) holds (see [8] for the proof under condition (2.7); see [9, Chapter IV, Theorems 2.3 and 2.4] for the proof under condition (2.8)). In other words, no continuity of g² is required.

Theorem 2.15. If the functions f and g are locally Lipschitz continuous, that is, for every R > 0 there exists a number L such that

|f(x) - f(y)| + |g(x) - g(y)| ≤ L|x - y|   (2.11)

for all x, y with |x| ≤ R, |y| ≤ R, then equation (2.3) has a unique strong solution as long as any of the non-explosion conditions (2.7), (2.8), or (2.9) holds (see [10, Theorem V.1.1] for the proof under condition (2.9)).

Theorem 2.16. If, for every R > 0, there exists a number L such that

(x - y)(f(x) - f(y)) + |g(x) - g(y)|² ≤ L|x - y|²   (2.12)

for all x, y with |x| ≤ R, |y| ≤ R, then equation (2.3) has a unique strong solution as long as any of the non-explosion conditions (2.7), (2.8), or (2.9) holds; cf. [10, Theorem V.1.1].

Theorem 2.17.
If (2.6) holds and |g(x) - g(y)| ≤ L√|x - y| for some L > 0 and all x, y ∈ ℝ, then equation (2.3) has a unique strong solution as long as any of the non-explosion conditions (2.7), (2.8), or (2.9) holds (see [11] for the proof under condition (2.7)).

3 Numerical Methods for Stochastic Differential Equations

3.1 Strong and Weak Convergence of Numerical Solutions

3.1.1 Euler-Maruyama Approximation

In 1955, Maruyama proposed the Euler-Maruyama approximation (also called the Euler approximation), one of the simplest and most important numerical methods to operate and understand. Next is a brief introduction to this method. Consider a discretization of the time interval [0, T],

0 = t_0 < t_1 < t_2 < ... < t_{N-2} < t_{N-1} < t_N = T,

with uniform step-size T/N, so that t_i = iT/N for i = 0, 1, ..., N. For the SDE in (2.2), the approximation is

Y(t_{i+1}) = Y(t_i) + f(Y(t_i))Δt + g(Y(t_i))ΔW(i)

for i = 0, 1, ..., N-1 with initial value Y_0 = x_0, where

Δt = t_{i+1} - t_i = T/N,  ΔW(i) = W(t_{i+1}) - W(t_i) = √Δt · N(0, 1),

and N(0, 1) is a standard normally distributed random variable, by the properties of the Wiener process. Using the Euler approximation, we can simulate the trajectory of equation (2.4) and compare it with the trajectory of the exact solution in MATLAB. For a = 2, b = 1, and x_0 = 1, equation (2.4) becomes

X(t) = 1 + ∫_0^t 2X(s)ds + ∫_0^t X(s)dW(s)   (3.1)

with solution X(t) = e^{(3/2)t + W(t)}, and its Euler approximation is

Y_{k+1} = Y_k + 2Y_kΔt + Y_kΔW(k).

We take T = 1, N = 10⁴, and Δt = 10⁻⁴ for the exact solution and adopt T = 1, N = 10², and Δt = 10⁻² for the Euler method. A sample trajectory is shown in Figure 3.1. The MATLAB code is in Appendix B.1.

Figure 3.1: A Sample Trajectory of Exact Solution and Euler Method for Equation (3.1).

3.1.2 Strong and Weak Convergence of Numerical Approximation

With numerical methods to approximate the solutions of SDEs, the accuracy of these approximations should be considered.
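The Euler-Maruyama comparison of Section 3.1.1 (the thesis's version is MATLAB, Appendix B.1) can be sketched in Python; the function name and the use of one common Brownian path for both the scheme and the exact solution (2.5) are my own illustrative choices:

```python
import numpy as np

def euler_maruyama_gbm(T=1.0, N=100, x0=1.0, a=2.0, b=1.0, seed=0):
    """Euler scheme Y_{k+1} = Y_k + a*Y_k*dt + b*Y_k*dW(k) for
    dX = aX dt + bX dW, compared with the exact solution
    X(t) = x0 * exp((a - b^2/2) t + b W(t)) on the same Brownian path."""
    rng = np.random.default_rng(seed)
    dt = T / N
    dW = np.sqrt(dt) * rng.standard_normal(N)
    W = np.concatenate(([0.0], np.cumsum(dW)))
    t = np.linspace(0.0, T, N + 1)
    Y = np.empty(N + 1)
    Y[0] = x0
    for k in range(N):
        Y[k + 1] = Y[k] + a * Y[k] * dt + b * Y[k] * dW[k]
    X = x0 * np.exp((a - 0.5 * b**2) * t + b * W)  # exact solution (2.5)
    return t, X, Y

t, X, Y = euler_maruyama_gbm()  # a = 2, b = 1 reproduces equation (3.1)
```

Plotting X and Y against t gives a picture of the kind shown in Figure 3.1; refining Δt on the same Brownian path shrinks the endpoint gap, consistent with the strong order 0.5 discussed next.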
For a stochastic process, a simulation would be ideal if it could produce paths close to those of the Itô process. Hence, in order to measure the closeness of the trajectories, the absolute error at the final time T can be used to describe how accurately the approximation follows the path of the actual stochastic process. Suppose Y is a discrete-time approximation of the solution X of equation (2.3) with maximum step-size Δ > 0. Define the strong error between Y and X as

ε_strong := E[|X(T) - Y(T)|].   (3.2)

Based on this definition, we say that Y converges to X in the strong sense if ε_strong → 0 as Δ → 0. Y converges to X with strong order γ ∈ (0, ∞) if there exist a finite constant K > 0 and a finite positive constant Δ_0 such that

ε_strong = E[|X(T) - Y(T)|] ≤ KΔ^γ   (3.3)

for any time discretization with maximum step-size Δ ∈ (0, Δ_0). Usually, for the same numerical approximation, the order of strong convergence may be lower for SDEs than the order of convergence for ordinary differential equations, because of the stochastic increments ΔW. For example, it is well known that the strong order of the Euler approximation for SDEs is at most 0.5, less than the order 1.0 of the Euler approximation for ordinary differential equations under similar conditions [1].

In contrast to describing how well the path of a stochastic process is followed by the approximation, weak convergence describes how well the approximation captures the distribution of values rather than the path-wise behavior measured by strong convergence. For example, if we just want to approximate the value of an SDE rather than its explicit trajectory, then a numerical method with weak convergence may be ideal, to save computation time. Again, suppose Y is a discrete-time approximation of the solution X of equation (2.3) with maximum step-size Δ > 0. Define the weak error between an approximation Y and X as
We say that Y converges to X in the weak sense if " weak ! 0 as ! 0. Y converges to X with a weak order 2 (0;1) if there exists a nite constant K > 0 and a nite positive constant 0 such that " weak =jE[p(X(T ))]E[p(Y (T ))]jK (3.5) for any time discretization with maximum step-size 2 (0; 0 ). 3.2 The Runge-Kutta Method For equation (2.3), which is a one-dimensional SDE, we have many numerical methods to approximate it. For example, the method that we have used in previous chapter, Euler method, Y k+1 =Y k +f(Y k )t +g(Y k )W (t); with initial value Y 0 , is one of the simplest numerical approximations using It^ o-Taylor ex- pansion. Euler method truncates all other terms in It^ o-Taylor expansion except the rst deterministic and stochastic terms. Truncating dierent terms of It^ o-Taylor expansion leads to dierent numerical methods. Keeping more terms of It^ o-Taylor expansion, in general, makes the numerical methods more accurate, but at the same time, has a higher com- putational cost of more derivative evaluations. Another famous method using It^ o-Taylor expansion is Milstein scheme, 15 Y k+1 =Y n +f(Y k )t +g(Y k )W (t) + 1 2 ((W (t)) 2 t)g(Y k )g 0 (Y k ); (3.6) for k = 0; 1;:::;N 1. By [1, Theorem 10.6.3], the Milstein scheme has the strong order of 1.0 under the assumption that f2C 1 (R), g2C 2 (R). Milstein scheme comes from the Wagner-Platen formula [1] X t =X 0 +f(X 0 ) Z t 0 ds +g(X 0 ) Z t 0 dW (s) +g(X 0 )g 0 (X 0 ) Z t 0 Z s 2 0 dW (s 1 )dW (s 2 ) +R; where R represents higher order multiple stochastic integrals. 
The term 1 2 ((W (t)) 2 t) in Milstein scheme can be computed from the Wagner-Platen formula: Z t k+1 t k Z s t k 1dW (u)dW (s) = Z t k+1 t k h W (u) i s t k dW (s) = Z t k+1 t k h W (s)W (t k ) i dW (s) = Z t k+1 t k W (s)dW (s) Z t k+1 t k W (t k )dW (s) = 1 2 [W (t k+1 )] 2 1 2 [W (t k )] 2 1 2 (t k+1 t k )W (t k )(W (t k+1 )W (t k )) = 1 2 [W (t k+1 )] 2 + 1 2 [W (t k )] 2 W (t k )W (t k+1 ) 1 2 (t k+1 t k ) = 1 2 [W (t k+1 )W (t k )] 2 1 2 (t k+1 t k ) = 1 2 ((W (t)) 2 t); 16 by 1 2 [W (t k+1 )] 2 1 2 [W (t k )] 2 = 1 2 Z t k+1 t k 1d([W (s)] 2 ) = Z t k+1 t k W (s)d(W (s)) + 1 2 Z t k+1 t k 1ds = Z t k+1 t k W (s)d(W (s)) + 1 2 (t k+1 t k ); and by It^ o's Lemma, d 1 2 [W (s)] 2 =W (s)dW (s) + 1 2 [dW (s)] 2 =W (s)dW (s) + 1 2 ds: Milstein scheme or any other numerical methods using It^ o-Taylor expansion and including higher order of derivatives of drift and diusion coecients of equation (2.3) require eval- uations of these derivatives in addition to coecients themselves at each iteration. Hence, discrete time approximations to those derivatives can be used to avoid the use of derivatives. That is, we consider Runge-Kutta schemes. A basic version of Runge-Kutta scheme with a strong order of 1.0 is given by [1] Y k+1 =Y n +f(Y k )t +g(Y k )W (t) + 1 2 ((W (t)) 2 t)(g( ~ Y k )g(Y k ))(t) 1 2 ; (3.7) where ~ Y k =Y k +g(Y k ) p t; (3.8) fork = 0; 1;:::;N1. 
This scheme is obtained from the Milstein scheme by changingg 0 (Y k ) into nite dierence: g 0 (Y k ) g( ~ Y k )g(Y k ) ~ Y k Y k = g( ~ Y k )g(Y k ) Y k +g(Y k ) p tY k = g( ~ Y k )g(Y k ) g(Y k ) p t : 17 So that 1 2 ((W (t)) 2 t)g(Y k )g 0 (Y k ) in (3.6) can be replaced as 1 2 ((W (t)) 2 t)g(Y k )g 0 (Y k ) 1 2 ((W (t)) 2 t)g(Y k ) g( ~ Y k )g(Y k ) g(Y k ) p t = 1 2 ((W (t)) 2 t)(g( ~ Y k )g(Y k ))(t) 1 2 : We adopt the improved version of Runge-Kutta scheme with a strong order of 1.0, which was proposed by Platen, Y k+1 =Y n +f(Y k )t +g(Y k )W (t) + 1 2 ((W (t)) 2 t)(g( ~ Y k )g(Y k ))(t) 1 2 ; (3.9) where ~ Y k =Y k +f(Y k )t +g(Y k ) p t; (3.10) for k = 0; 1;:::;N 1. By [1], equation (3.10) approximates higher order terms in the It^ o-Taylor expansion better than equation (3.8). Hence our revision of Runge-Kutta method is based on the version proposed by Platen. 3.3 The Tamed Runge-Kutta Method 3.3.1 The Tamed Euler Method Before constructing the tamed Runge-Kutta method, we introduce the tamed Euler method proposed by [2]. With a strong convergence for SDEs with globally Lipschitz con- tinuous coecients and even SDEs with super-linearly growing coecients, the tamed Euler method is almost identical to the explicit Euler method. Only the drift term is modied for 18 uniform boundedness. The tamed Euler scheme is given as the following. For all N2N, Y k+1 =Y k + f(Y k )t 1 +jf(Y k )jt +g(Y k )W (k) (3.11) for k = 0; 1;:::;N 1 and W (k) = W (t k+1 )W (t k ). In this method, the drift term f(Y k )t is "tamed" by 1 1+jf(Y k )jt and the new drift term f(Y k )t 1+jf(Y k )jt is bounded by 1 for every k = 0; 1;:::;N 1. With this boundedness, f(Y k )t 1+jf(Y k )jt cannot produce extremely large value even when the drift coecients are super-linearly growing. 
Furthermore, by [2], the Taylor expansion with terms of orderO( 1 N 2 ) shows Y k+1 = Y k +f(Y k )t +g(Y k )W (k) (t) 2 f(Y k )jf(Y k )j 1 +jf(Y k )jt = Y k + f(Y k )t(1 +jf(Y k )jt) 1 +jf(Y k )jt +g(Y k )W (k) (t) 2 f(Y k )jf(Y k )j 1 +jf(Y k )jt = Y k + f(Y k )t[1 +jf(Y k )jtjf(Y k )jt] 1 +jf(Y k )jt +g(Y k )W (k) = Y k + f(Y k )t 1 +jf(Y k )jt +g(Y k )W (k): Hence, the tamed Euler method is equivalent to the Euler method up to terms of second order. The advantage of the tamed Euler method is that it is explicit and easy to implement. It has been proved that the tamed Euler method converges strongly with a strong order of 0.5 to the exact solution of an SDE with drift coecients that are globally one-side Lipschitz continuous and have at most polynomially growing derivatives, and with diusion coecients that are globally Lipschitz continuous [2]. 19 3.3.2 The Tamed Runge-Kutta Method Based on the idea of "taming", we modify the strong order of 1.0 Runge-Kutta method proposed by Platen and introduce the tamed Runge-Kutta method. For any given step-size t2 (0; 1) and Y 0 =x 0 . e Y k =Y k + f(Y k ) 1 +f(Y k )t t +g(Y k )t 1 2 Y k+1 =Y k + f(Y k ) 1 +jf(Y k )jt t +g(Y k )W (k) + 1 2 ((W (k)) 2 t)(g( e Y k )g(Y k ))t 1 2 (3.12) for k = 1; 2;:::;N, where t k =kt, W (k) =W (t k+1 )W (t k ) and t N =T . 3.4 The Truncated Runge-Kutta Method 3.4.1 The Truncated Euler Method Proposed by [3], the truncated Euler method achieves the strong convergence for SDEs with coecients under the local Lipschitz condition and a non-explosion condition. Assumption 3.1 (Local Lipschitz Condition, Assumption 2.1 [3]). There exists a constant K > 0 such that jf(x)f(y)j_jg(x)g(y)jKjxyj for all x;y2R andjxj_jyjR. Assumption 3.2 (Non-Explosion Condition, Assumption 2.2 [3]). 
There exists a pair of constants p> 2 and K > 0 such that for all x2R, xf(x) + p 1 2 jg(x)j 2 K(1 +jxj 2 ): 20 If we have p = 3 in Assumption 3.2, then Assumption 3.2 becomes xf(x) +jg(x)j 2 K(1 +jxj 2 ) which is condition 2.9. Theorem 3.3 (Lemma 2.3 [3]). Assume that the SDE (2.3) satises all basic settings in Section 2.1. If Assumptions 3.1 and 3.2 hold, then equation (2.3) has a path-wise unique strong solution X(t) on [0;T ] with sup 0tT E(jX(t)j p )<1: For a stochastic dierential equation (2.3) satisfying Assumptions 3.1 and 3.2, the trun- cated Euler method is constructed as the following. First, we choose a strictly increasing continuous function :R + !R + such that (r)!1 as r!1 and sup jxjr (jf(x)j_jg(x)j)(r); 8r> 0: (3.13) Denote by 1 the inverse function of . Then 1 : [(0);1)!R + is a strictly increasing continuous function. Choose a number 2 (0; 1) and a strictly decreasing functionu : (0; ]! (0;1) such that u( )(2); lim !0 u() =1 and 1 4 u() 1; 82 (0; 1) (3.14) For a given step-size t2 (0; 1), dene the truncation mapping t :R!R by t (x) = (jxj^ 1 (u(t))) x jxj (3.15) 21 for x2 R and if x = 0; x jxj = 0 as a convention. Now we construct the truncated Euler method. For any given step-size t2 (0; ] and Y 0 =x 0 Y k+1 =Y k +f( t (Y k ))t +g( t (Y k ))W (k) (3.16) for k = 1; 2;:::;N, where t k =kt, W (k) =W (t k+1 )W (t k ) and t N =T . It has been proved that the truncated Euler method converges strongly with a strong order of 0.5 to the exact solution of an SDE with drift coecients and diusion coecients that are locally Lipschitz continuous and satisfy the non-explosion condition 3.2 by Mao in [12]. 3.4.2 The Truncated Runge-Kutta Method Based on the idea of "truncation" and the truncated Milstein method proposed by [13], we modify the strong order of 1.0 Runge-Kutta method proposed by Platen and introduce the truncated Runge-Kutta method. 
This version of the truncated Runge-Kutta method aims to approximate the solution of SDEs with drift coecients under locally Lipschitz condition and diusion coecients under globally Lipschitz condition. First, we choose a strictly increasing continuous function :R + !R + such that(r)! 1 as r!1 and sup 0<jxjr jf(x)j(r); 8r> 0: (3.17) Denote by 1 the inverse function of . Then 1 : ((0);1)!R + is a strictly increasing continuous function. Choose a constant K jf(0)j_(1). For a given t2 (0; 1], dene the truncation 22 mapping t :R!R by t (x) = (jxj^ 1 (Kt )) x jxj (3.18) where 2 (0; 1 2 ] and if x = 0; x jxj = 0 as a convention. Now we construct the truncated Runge-Kutta method. For any given step-size t2 (0; 1] and Y 0 =x 0 , e Y k =Y k +f( t (Y k ))t +g(Y k )t 1 2 Y k+1 =Y k +f( t (Y k ))t +g(Y k )W (k) + 1 2 ((W (k)) 2 t)(g( e Y k )g(Y k ))t 1 2 (3.19) for k = 1; 2;:::;N, where t k =kt, W (k) =W (t k+1 )W (t k ) and t N =T . This version of the truncated Runge-Kutta method has a saturation level of the truncation mapping as shown in Figure 3.2. Whenjxj 1 (Kt ), we have t (x) = x and thus f( t (x)) =f(x). Ifjxj 1 (Kt ), t (x) = sign(x) 1 (Kt ) and thusf( t (x)) = f( 1 (sign(x)Kt )). Because the function 1 is strictly increasing, we have the following results about the choice of , t, and K. When t and K are xed, a smaller > 0 makes 1 (Kt ) larger. When andK are xed, a smaller t makes 1 (Kt ) larger. When t and are xed, a larger K makes 1 (Kt ) larger. A more detailed discussion about the choice of K, , and for the truncated Runge-Kutta method can be found in Section 4.3. 23 Figure 3.2: The Graph for a Typical Function t . 24 4 Stochastic Dierential Equations with Regular Co- ecients 4.1 Assumptions Consider a 1-dimensional stochastic dierential equation dX(t) =f(X(t))dt +g(X(t))dW (t) (4.1) with initial value X(0) = x 0 . The initial value X(0) = x 0 2F 0 . The coecient functions in the integral parts, f;g, satisfy (2.1). 
Equation (4.1) can also be written as an Itô process $X = \{X(t), t \ge 0\}$ of the form
$$X(t) = X(0) + \int_0^t f(X(s))\,ds + \int_0^t g(X(s))\,dW(s). \tag{4.2}$$
The second integral in equation (4.2) is an Itô stochastic integral with respect to the standard 1-dimensional Wiener process $W = \{W(t), t \ge 0\}$.

For a 1-dimensional stochastic differential equation, we want to investigate a new approximation to find its solution. We consider SDEs with globally Lipschitz continuous coefficients in this chapter and SDEs with other kinds of coefficients in the next chapter. We refer to SDEs with globally Lipschitz continuous coefficients as SDEs with regular coefficients.

Definition 4.1 (Regular Coefficients of Stochastic Differential Equations). We say that an SDE has regular coefficients if both the drift and diffusion coefficients are globally Lipschitz continuous functions; otherwise, we say that the SDE has non-regular coefficients.

The following assumption and theorem about the drift and diffusion coefficients were mentioned in Section 2.3 and are restated here as a quick reminder.

Assumption 4.2 (Global Lipschitz Condition, Assumption 4.2 [1]). There exists a constant $K > 0$ such that for all $x, y \in \mathbb{R}$,
$$|f(x) - f(y)| \le K|x - y|, \qquad |g(x) - g(y)| \le K|x - y|.$$

Lemma 4.3. A continuous and everywhere differentiable function $f$ satisfies the global Lipschitz condition if and only if there exists $K > 0$ such that $|f'(x)| \le K$ for all $x \in \mathbb{R}$.

Theorem 4.4 (Theorem 4.5.3 [1]). If Assumption 4.2 holds, the SDE (4.1) has a path-wise unique strong solution $X(t)$ on $[0,T]$ with $\sup_{0 \le t \le T} \mathbb{E}(|X(t)|^2) < \infty$.

4.2 Numerical Examples

In each example, we first verify the existence of the solution. Then we plot a single sample trajectory with each of the tamed Runge-Kutta method, the truncated Runge-Kutta method, and the tamed and truncated Runge-Kutta method.

Example 4.5 (Equation 4.6, [1]). Consider the SDE
$$X(t) = 1 + \int_0^t 2X(s)\,ds + \int_0^t X(s)\,dW(s) \tag{4.3}$$
for $t \in [0,1]$. First, we verify the conditions of Theorem 4.4.
Given the drift coefficient $f(x) = 2x$ and the diffusion coefficient $g(x) = x$ in equation (4.3), we check whether these two coefficients satisfy Assumption 4.2. Because $|f'(x)| = 2$ and $|g'(x)| = 1$ are bounded, by Lemma 4.3 both $f$ and $g$ are globally Lipschitz continuous. Since Assumption 4.2 is satisfied, the SDE (4.3) has a path-wise unique strong solution $X(t)$ on $[0,1]$ with $\sup_{0 \le t \le T} \mathbb{E}(|X(t)|^2) < \infty$ according to Theorem 4.4.

After verifying the existence and uniqueness of the solution of SDE (4.3), we plot its sample path using its exact solution and simulations by the tamed Runge-Kutta method, the truncated Runge-Kutta method, and the tamed and truncated Runge-Kutta method. According to [1, Equation 4.6], the solution of (4.3) is
$$X(t) = e^{\frac32 t + W(t)}. \tag{4.4}$$
To plot a sample trajectory of the exact solution, we first simulate $W(t)$ by computing the cumulative sum of the increments $\Delta W(t) = \sqrt{\Delta t}\,N(0,1)$ from 0 to 1 with step-size $\Delta t$ (see Figure 2.1 for an example). After obtaining the values of $W(t)$ at each $t_i = i\Delta t$, $i = 1, 2, \dots, N$, we use the exact solution to calculate $X(t)$. For equation (4.4), we take $T = 1$, $N = 10^4$, and $\Delta t = 10^{-4}$ and get Figure 4.1. The code is attached in Appendix C.1.

Figure 4.1: A Sample Trajectory of Exact Solution for Equation (4.3).

Then for the numerical methods, we use $T = 1$, $N = 10^2$, and $\Delta t = 10^{-2}$ with the same values of $W(t)$ used to plot the exact solution; that is, $\Delta W(i) = W(t_i) - W(t_{i-1})$. The Euler method for equation (4.3) was discussed in Section 3.1.1. The tamed Runge-Kutta approximation is, with $Y_0 = 1$,
$$\begin{aligned}\widetilde Y_k &= Y_k + \frac{2Y_k}{1 + |2Y_k|\Delta t}\,\Delta t + Y_k\,\Delta t^{1/2}\\ Y_{k+1} &= Y_k + \frac{2Y_k\,\Delta t}{1 + |2Y_k|\Delta t} + Y_k\,\Delta W(k) + \tfrac12\bigl((\Delta W(k))^2 - \Delta t\bigr)(\widetilde Y_k - Y_k)\,\Delta t^{-1/2}\end{aligned} \tag{4.5}$$
for $k = 1, 2, \dots, N$, where $t_k = k\Delta t$ and $\Delta W(k) = W(t_{k+1}) - W(t_k)$.

For the truncated Runge-Kutta method and the tamed and truncated Runge-Kutta method, we choose the continuous strictly increasing function $\mu(r) = 3r$ and $\epsilon = 0.5$. Then its inverse $\mu^{-1}(r) = \tfrac13 r$ is strictly increasing and continuous on $\mathbb{R}_+$. Let $K = 6 \ge |f(0)| \vee \mu(1)$.
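With these choices, the truncation level is $\mu^{-1}(K\Delta t^{-\epsilon}) = \tfrac13 \cdot 6\Delta t^{-1/2} = 2\Delta t^{-1/2}$, which equals 20 when $\Delta t = 10^{-2}$. A quick check of the level and the resulting mapping can be sketched in Python (the thesis's own code is MATLAB; this sketch is only illustrative):

```python
import numpy as np

def pi_dt(x, mu_inv, K, dt, eps):
    """Truncation mapping (3.18): clamp |x| at the level mu^{-1}(K * dt^(-eps))."""
    if x == 0.0:
        return 0.0                       # convention x/|x| = 0 at x = 0
    level = mu_inv(K * dt ** (-eps))
    return float(np.sign(x)) * min(abs(x), level)

# Choices from Example 4.5: mu(r) = 3r, so mu_inv(r) = r/3, K = 6, eps = 0.5.
mu_inv = lambda r: r / 3.0
print(mu_inv(6 * 0.01 ** -0.5))             # truncation level: 20.0
print(pi_dt(5.0, mu_inv, 6, 0.01, 0.5))     # inside the level: unchanged, 5.0
print(pi_dt(100.0, mu_inv, 6, 0.01, 0.5))   # saturates at 20.0
print(pi_dt(-100.0, mu_inv, 6, 0.01, 0.5))  # saturates at -20.0
```

The saturation behavior matches Figure 3.2: the mapping is the identity up to the level and clamps beyond it, symmetrically for negative arguments.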
LetK = 6jf(0)j_(1). 28 For t = 10 2 , the truncation mapping t is dened as t (x) = (jxj^ 2t 0:5 )) x jxj : and f( t (Y k )) = 2(jY k j^ 2t 0:5 ) Y k jY k j : Hence, the truncated Runge-Kutta approximation is given as, with Y 0 = 1, e Y k =Y k +f( t (Y k ))t +Y k t 1 2 Y k+1 =Y k +f( t (Y k ))t +Y k W (k) + 1 2 ((W (k)) 2 t)( e Y k Y k )t 1 2 (4.6) where W (k) =W (t k+1 )W (t k ) and t k =kt, for k = 1; 2;:::;N. The tamed and truncated Runge-Kutta approximation is given as, with Y 0 = 1, e Y k =Y k + f( t (Y k )) 1 +jf( t (Y k ))jt t +Y k t 1 2 Y k+1 =Y k + f( t (Y k ))t 1 +jf( t (Y k ))jt +Y k W (k) + 1 2 ((W (k)) 2 t)( e Y k Y k )t 1 2 (4.7) where W (k) =W (t k+1 )W (t k ) and t k =kt, for k = 1; 2;:::;N. Figures 4.2, 4.3, 4.4, and 4.5 are approximations of one single trajectory generated by each of the Euler method, the tamed Runge-Kutta method, the truncated Runge-Kutta method, and the tamed and truncated Runge-Kutta method for equation (4.3). The MATLAB codes are attached in Appendices C.2, C.3, C.4, and C.5. 29 Figure 4.2: A Sample Trajectory of Exact Solution and Euler Method for Equation (4.3). Figure 4.3: A Sample Trajectory by the Tamed Runge-Kutta Method for Equation (4.3). 30 Figure 4.4: A Sample Trajectory by the Truncated Runge-Kutta Method for Equation (4.3). Figure 4.5: A Sample Trajectory by the Tamed and Truncated Runge-Kutta Method for Equation (4.3). 31 In section 4.3, we will further compare the exact solution and approximations using the tamed and truncated Runge-Kutta method for equation (4.3). Before this, here is another example of SDEs with regular coecients. Example 4.6. Consider the SDE X(t) = Z t 0 X(s)ds + Z t 0 sin(X(s))dW (s) (4.8) for t2 [0; 1]. First, we verify the conditions of Theorem 4.4. Given the drift coecient f(x) = x and the diusion coecient g(x) = sin(x) in equation (4.3), we check whether these two coecients satisfy Assumption 4.2. 
Because $|f'(x)| = 1$ and $|g'(x)| = |\cos(x)| \le 1$ are bounded, by Lemma 4.3 both $f$ and $g$ are globally Lipschitz continuous. Since Assumption 4.2 is satisfied, the SDE (4.8) has a path-wise unique strong solution $X(t)$ on $[0,1]$ with $\sup_{0 \le t \le T} \mathbb{E}(|X(t)|^2) < \infty$ according to Theorem 4.4. This SDE does not have an exact solution in closed form.

After verifying the existence of the solution of SDE (4.8), we plot its sample path using a reference solution computed by the Euler method with a very small step-size of $2^{-20}$ and simulations by the tamed Runge-Kutta method, the truncated Runge-Kutta method, and the tamed and truncated Runge-Kutta method. To plot a sample trajectory of the reference solution, we first simulate $W(t)$ by computing the cumulative sum of $dW(t) = \sqrt{\Delta t}\,N(0,1)$ from 0 to 1 with step-size $\Delta t = 2^{-20}$. After obtaining the values of $W(t)$ at each $t_i = i\Delta t$, $i = 1, 2, \dots, N$, we use the Euler method $Y_{k+1} = Y_k + Y_k\Delta t + \sin(Y_k)\Delta W(k)$ to simulate $X(t)$. For equation (4.8), we take $T = 1$, $N = 2^{20}$, and $\Delta t = 2^{-20}$ and get Figure 4.6. The code is attached in Appendix C.6.

Figure 4.6: A Sample Trajectory of Simulated True Solution for Equation (4.8).

Then for the numerical methods, we use $T = 1$, $N = 2^6$, and $\Delta t = 2^{-6}$ with the same values of $W(t)$ used to plot the reference solution; that is, $\Delta W(i) = W(t_i) - W(t_{i-1})$. The tamed Runge-Kutta approximation is
$$\begin{aligned}\widetilde Y_k &= Y_k + \frac{Y_k}{1 + |Y_k|\Delta t}\,\Delta t + \sin(Y_k)\,\Delta t^{1/2}\\ Y_{k+1} &= Y_k + \frac{Y_k\,\Delta t}{1 + |Y_k|\Delta t} + \sin(Y_k)\,\Delta W(k) + \tfrac12\bigl((\Delta W(k))^2 - \Delta t\bigr)\bigl(\sin(\widetilde Y_k) - \sin(Y_k)\bigr)\,\Delta t^{-1/2}\end{aligned} \tag{4.9}$$
where $t_k = k\Delta t$, $\Delta W(k) = W(t_{k+1}) - W(t_k)$, for $k = 1, 2, \dots, N$ and $Y_0 = 1$.

For the truncated Runge-Kutta method and the tamed and truncated Runge-Kutta method, we choose the truncation function $\mu(r) = 3r$ and $\epsilon = 0.5$. Then $\mu^{-1}(r) = \tfrac13 r$ is strictly increasing and continuous on $\mathbb{R}_+$. Let $K = 6 \ge |f(0)| \vee \mu(1)$.
For a given step-size $\Delta t$, the truncation mapping $\pi_{\Delta t}$ is defined as
$$\pi_{\Delta t}(x) = \bigl(|x| \wedge 2\Delta t^{-0.5}\bigr)\frac{x}{|x|}.$$
Hence, the truncated Runge-Kutta approximation is given as, with $Y_0 = 1$,
$$\begin{aligned}\widetilde Y_k &= Y_k + \bigl(|Y_k| \wedge 2\Delta t^{-0.5}\bigr)\frac{Y_k}{|Y_k|}\,\Delta t + \sin(Y_k)\,\Delta t^{1/2}\\ Y_{k+1} &= Y_k + \bigl(|Y_k| \wedge 2\Delta t^{-0.5}\bigr)\frac{Y_k}{|Y_k|}\,\Delta t + \sin(Y_k)\,\Delta W(k) + \tfrac12\bigl((\Delta W(k))^2 - \Delta t\bigr)\bigl(\sin(\widetilde Y_k) - \sin(Y_k)\bigr)\,\Delta t^{-1/2}\end{aligned} \tag{4.10}$$
where $\Delta W(k) = W(t_{k+1}) - W(t_k)$ and $t_k = k\Delta t$, for $k = 1, 2, \dots, N$. The tamed and truncated Runge-Kutta approximation is given as, with $Y_0 = 1$,
$$\begin{aligned}\widetilde Y_k &= Y_k + \frac{\bigl(|Y_k| \wedge 2\Delta t^{-0.5}\bigr)\frac{Y_k}{|Y_k|}\,\Delta t}{1 + \bigl(|Y_k| \wedge 2\Delta t^{-0.5}\bigr)\Delta t} + \sin(Y_k)\,\Delta t^{1/2}\\ Y_{k+1} &= Y_k + \frac{\bigl(|Y_k| \wedge 2\Delta t^{-0.5}\bigr)\frac{Y_k}{|Y_k|}\,\Delta t}{1 + \bigl(|Y_k| \wedge 2\Delta t^{-0.5}\bigr)\Delta t} + \sin(Y_k)\,\Delta W(k) + \tfrac12\bigl((\Delta W(k))^2 - \Delta t\bigr)\bigl(\sin(\widetilde Y_k) - \sin(Y_k)\bigr)\,\Delta t^{-1/2}\end{aligned} \tag{4.11}$$
where $\Delta W(k) = W(t_{k+1}) - W(t_k)$ and $t_k = k\Delta t$, for $k = 1, 2, \dots, N$.

Figures 4.7, 4.8, and 4.9 show approximations of one single trajectory generated by each of the tamed Runge-Kutta method, the truncated Runge-Kutta method, and the tamed and truncated Runge-Kutta method. The MATLAB codes are attached in Appendices C.7, C.8, and C.9.

Figure 4.7: A Sample Trajectory by the Tamed Runge-Kutta Method for Equation (4.8).

Figure 4.8: A Sample Trajectory by the Truncated Runge-Kutta Method for Equation (4.8).

Figure 4.9: A Sample Trajectory by the Tamed and Truncated Runge-Kutta Method for Equation (4.8).

4.3 Comparison Using Numerical Example between Numerical Methods for SDEs with Regular Coefficients

Now consider equation (4.3). In this section, we compare simulations by the tamed Runge-Kutta method, the truncated Runge-Kutta method, and the tamed and truncated Runge-Kutta method. For these numerical approximations, we use the mean square error $\mathbb{E}[(X(T) - Y(N))^2]$ of the Euler method, the tamed Runge-Kutta method, the truncated Runge-Kutta method, and the tamed and truncated Runge-Kutta method, where $N \in \{2^{12}, 2^{13}, \dots, 2^{18}\}$ is the number of time steps and $T = 1$ is the endpoint of the interval $[0,1]$ for $X(t)$.
We take $T = 1$ and $N \in \{2^{12}, 2^{13}, \dots, 2^{18}\}$ and use the same SDE as equation (4.3) from the last section:
$$X(t) = 1 + \int_0^t 2X(s)\,ds + \int_0^t X(s)\,dW(s)$$
for $t \in [0,1]$. With $N \in \{2^{12}, 2^{13}, \dots, 2^{18}\}$, we have $\Delta t \in \{2^{-12}, 2^{-13}, \dots, 2^{-18}\}$. Running each method with the corresponding $\Delta t$, we obtain the following figures and table.

Figure 4.10 is a log-log plot of the mean square error over 10000 sample paths between the exact solution $X(T)$ of the SDE and the numerical solutions $Y(N)$ by the Euler method, the tamed Runge-Kutta method, the truncated Runge-Kutta method, and the tamed and truncated Runge-Kutta method, with number of steps $N \in \{2^{12}, 2^{13}, \dots, 2^{18}\}$, for equation (4.3). The code is attached in Appendix C.10.

Figure 4.10: Log-log Plot of Mean Square Error vs. Runtime for Equation (4.3).

Figure 4.11 is a log-log plot of the mean square error over 10000 sample paths between the exact solution $X(T)$ of the SDE and the numerical solutions $Y(N)$ by the Euler method, the tamed Runge-Kutta method, the truncated Runge-Kutta method, and the tamed and truncated Runge-Kutta method, with step-size $\Delta t \in \{2^{-12}, 2^{-13}, \dots, 2^{-18}\}$, for equation (4.3). The code is attached in Appendix C.11.

Figure 4.11: Log-log Plot of Mean Square Error vs. Step-size for Equation (4.3).

Table 4.1 is the table corresponding to the above two figures, showing the mean square error over 10000 sample paths between the exact solution $X(T)$ of the SDE and the numerical solutions $Y(N)$ by the Euler method, the tamed Runge-Kutta method, the truncated Runge-Kutta method, and the tamed and truncated Runge-Kutta method, with step-size $\Delta t \in \{2^{-12}, 2^{-13}, \dots, 2^{-18}\}$, for equation (4.3).
Step-size $\Delta t$                       2^-12      2^-13      2^-14      2^-15      2^-16      2^-17      2^-18
Euler Method                               1.8678e-2  9.2105e-3  5.3565e-3  2.1052e-3  1.0367e-3  5.3550e-4  2.9057e-4
Tamed Runge-Kutta Method                   3.8652e-2  7.5655e-3  2.0524e-3  5.5115e-4  1.1698e-4  3.4085e-5  1.5580e-5
Truncated Runge-Kutta, $\mu(r) = 3r$       1.0438e-2  4.4098e-5  1.1828e-5  2.7247e-6  7.5213e-7  1.8324e-7  5.4790e-8
Truncated Runge-Kutta, $\mu(r) = 2e^r$     2.5812e-1  2.0902e-1  2.2044e-1  2.0448e-1  1.8613e-1  1.7648e-1  1.9027e-1
Tamed and Truncated Runge-Kutta            1.0924e-1  6.1131e-2  4.4109e-2  1.8731e-2  1.0224e-2  5.8303e-3  3.7842e-3

Table 4.1: Table of Mean Square Error vs. Step-size for Equation (4.3).

Comparing the mean square errors between the exact solution of equation (4.3) and the approximations by the four numerical methods in the above figures, we can see that the ranking from smallest to largest mean square error, as the step-size approaches zero, is the truncated Runge-Kutta method, the tamed Runge-Kutta method, the Euler method, and the tamed and truncated Runge-Kutta method. With the smallest mean square error, the truncated Runge-Kutta method approximates the solution more accurately than the other three numerical methods at the same step-size.

However, the choice of $\mu$ influences how accurately the truncated Runge-Kutta method approximates the solutions of SDEs. In Table 4.1, comparing two different choices of $\mu$ for $f(x) = 2x$, the truncated Runge-Kutta method with $\mu(r) = 2e^r$ has a much larger mean square error than the truncated Runge-Kutta method with $\mu(r) = 3r$; its mean square error is even larger than that of the tamed and truncated Runge-Kutta method with $\mu(r) = 3r$. In order to understand the choice of $\mu$, we compare the mean square errors between the exact solution and the approximations by the truncated Runge-Kutta method with $\mu(x) = 4x$, $\mu(x) = 3x$, and $\mu(x) = 2.5x$ for $f(x) = 2x$. We keep $\Delta t \in \{2^{-12}, 2^{-13}, \dots, 2^{-18}\}$, $K = 6$, and $\epsilon = 0.5$ for all three methods.
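The mean square errors reported in Table 4.1 are Monte Carlo estimates of $\mathbb{E}[(X(T) - Y(N))^2]$: for each step-size, many Brownian paths are drawn, the exact endpoint $X(T) = e^{3T/2 + W(T)}$ and the numerical endpoint $Y(N)$ are computed from the same increments, and the squared differences are averaged. A minimal Python sketch of this procedure for the Euler method (the thesis's experiments use MATLAB with 10000 samples; the sample count, seed, and step-sizes here are illustrative):

```python
import numpy as np

def euler_mse(n_steps, n_paths, rng, T=1.0):
    """Monte Carlo estimate of E[(X(T) - Y(N))^2] for the Euler method
    on dX = 2X dt + X dW, X(0) = 1, whose exact solution is
    X(T) = exp(3T/2 + W(T))."""
    dt = T / n_steps
    dW = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    Y = np.ones(n_paths)
    for k in range(n_steps):
        Y = Y + 2.0 * Y * dt + Y * dW[:, k]      # Euler step, vectorized
    X_T = np.exp(1.5 * T + dW.sum(axis=1))       # exact endpoint, same paths
    return np.mean((X_T - Y) ** 2)

rng = np.random.default_rng(1)
errs = [euler_mse(2 ** j, 20000, rng) for j in (4, 6, 8)]
# Strong order 0.5 means MSE ~ C * dt: shrinking dt shrinks the error.
```

Plotting such errors against $\Delta t$ on log-log axes, as in Figures 4.10 and 4.11, makes the convergence order visible as the slope of the resulting line.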
Figure 4.12 is a log-log plot of the mean square error over 10000 sample paths between the exact solution $X(T)$ of the SDE and the numerical solutions $Y(N)$ by the truncated Runge-Kutta methods with $\mu(x) = 4x$, $\mu(x) = 3x$, and $\mu(x) = 2.5x$, with step-size $\Delta t \in \{2^{-12}, 2^{-13}, \dots, 2^{-18}\}$, for equation (4.3). The code is attached in Appendix C.12.

Figure 4.12: Log-log Plot of Mean Square Error vs. Step-size with Different $\mu$ for Equation (4.3).

Step-size $\Delta t$                             2^-12      2^-13      2^-14      2^-15      2^-16      2^-17      2^-18
Truncated Runge-Kutta, $\mu(x) = 4x$             8.3638e-2  4.1190e-3  6.7937e-5  6.1248e-6  8.1511e-7  1.8259e-7  4.2973e-8
Truncated Runge-Kutta, $\mu(x) = 3x$             1.7412e-2  2.3891e-4  1.1857e-5  3.6597e-6  8.1511e-7  1.8259e-7  4.2973e-8
Truncated Runge-Kutta, $\mu(x) = 2.5x$           6.3824e-3  5.3750e-5  1.1857e-5  3.6597e-6  8.1511e-7  1.8259e-7  4.2973e-8

Table 4.2: Table of Mean Square Error vs. Step-size with Different $\mu$ for Equation (4.3).

Table 4.2 is the table corresponding to Figure 4.12, showing the mean square error over 10000 sample paths between the exact solution $X(T)$ of the SDE and the numerical solutions $Y(N)$ by the truncated Runge-Kutta methods with $\mu(x) = 4x$, $\mu(x) = 3x$, and $\mu(x) = 2.5x$, with step-size $\Delta t \in \{2^{-12}, 2^{-13}, \dots, 2^{-18}\}$, for equation (4.3).

Figure 4.12 shows that when the truncated Runge-Kutta methods with different $\mu$ approximate the solution differently (for example, when $\Delta t \ge 2^{-15}$; see Table 4.2), the ranking from smaller to larger mean square error is the truncated Runge-Kutta methods with $\mu = 2.5x$, $\mu = 3x$, and $\mu = 4x$. Hence, if there are several choices of the strictly increasing continuous function $\mu$, it might be better to choose a $\mu$ that makes $\mu^{-1}(K\Delta t^{-\epsilon})$ as large as possible for fixed $K$, $\Delta t$, and $\epsilon$ while still satisfying $\sup_{0 < |x| \le r} |f(x)| \le \mu(r)$.

Next, fixing $\mu(x) = 2.5x$, $\epsilon = 0.5$, and $\Delta t \in \{2^{-12}, 2^{-13}, \dots, 2^{-18}\}$, we want to see how $K$ influences the accuracy of approximations by the truncated Runge-Kutta methods.
Figure 4.13 is a log-log plot of the mean square error over 10000 sample paths between the exact solution $X(T)$ of the SDE and the numerical solutions $Y(N)$ by the truncated Runge-Kutta methods with $K = 6$, $K = 4$, and $K = 2$, with step-size $\Delta t \in \{2^{-12}, 2^{-13}, \dots, 2^{-18}\}$, for equation (4.3). The code is attached in Appendix C.13.

Figure 4.13: Log-log Plot of Mean Square Error vs. Step-size with Different K for Equation (4.3).

Step-size $\Delta t$                     2^-12      2^-13      2^-14      2^-15      2^-16      2^-17      2^-18
Truncated Runge-Kutta, K = 6             9.1677e-3  5.0663e-5  1.1684e-5  2.6309e-6  6.8505e-7  1.7630e-7  5.1762e-8
Truncated Runge-Kutta, K = 4             6.3260e-2  1.1650e-3  1.7684e-5  2.6309e-6  6.8505e-7  1.7630e-7  5.1762e-8
Truncated Runge-Kutta, K = 2             5.1908e-1  1.1047e-1  5.3895e-2  1.3921e-5  6.8505e-7  1.7630e-7  5.1762e-8

Table 4.3: Table of Mean Square Error vs. Step-size with Different K for Equation (4.3).

Table 4.3 is the table corresponding to Figure 4.13, showing the mean square error over 10000 sample paths between the exact solution $X(T)$ of the SDE and the numerical solutions $Y(N)$ by the truncated Runge-Kutta methods with $K = 6$, $K = 4$, and $K = 2$, with step-size $\Delta t \in \{2^{-12}, 2^{-13}, \dots, 2^{-18}\}$, for equation (4.3). From Figure 4.13 and Table 4.3, we can see that if there are several choices of $K$, it might be better to choose a $K$ that makes $\mu^{-1}(K\Delta t^{-\epsilon})$ as large as possible for fixed $\mu$, $\Delta t$, and $\epsilon$ while still satisfying $\sup_{0 < |x| \le r} |f(x)| \le \mu(r)$.

Now, fixing $\mu(x) = 2.5x$, $K = 6$, and $\Delta t \in \{2^{-12}, 2^{-13}, \dots, 2^{-18}\}$, we want to see how $\epsilon$ influences the accuracy of approximations by the truncated Runge-Kutta methods. Figure 4.14 is a log-log plot of the mean square error over 10000 sample paths between the exact solution $X(T)$ of the SDE and the numerical solutions $Y(N)$ by the truncated Runge-Kutta methods with $\epsilon = 0.5$, $\epsilon = 0.4$, and $\epsilon = 0.3$, with step-size $\Delta t \in \{2^{-12}, 2^{-13}, \dots, 2^{-18}\}$, for equation (4.3). The code is attached in Appendix C.14.
Table 4.4 is the table corresponding to Figure 4.14, showing the mean square error over 10000 sample paths between the exact solution $X(T)$ of the SDE and the numerical solutions $Y(N)$ by the truncated Runge-Kutta methods with $\epsilon = 0.5$, $\epsilon = 0.4$, and $\epsilon = 0.3$, with step-size $\Delta t \in \{2^{-12}, 2^{-13}, \dots, 2^{-18}\}$, for equation (4.3). From Figure 4.14 and Table 4.4, we can see that if there are several choices of $\epsilon$, it might be better to choose an $\epsilon$ that makes $\mu^{-1}(K\Delta t^{-\epsilon})$ as large as possible for fixed $\mu$, $\Delta t$, and $K$ while still satisfying $\sup_{0 < |x| \le r} |f(x)| \le \mu(r)$. Hence, from the above observations, when we choose $K$, $\mu$, and $\epsilon$ for the truncated Runge-Kutta method, we want $\mu^{-1}(K\Delta t^{-\epsilon})$ to be as large as possible while $\mu$ still satisfies $\sup_{0 < |x| \le r} |f(x)| \le \mu(r)$.

Figure 4.14: Log-log Plot of Mean Square Error vs. Step-size with Different $\epsilon$ for Equation (4.3).

Step-size $\Delta t$                             2^-12      2^-13      2^-14      2^-15      2^-16      2^-17      2^-18
Truncated Runge-Kutta, $\epsilon = 0.5$          9.1677e-3  5.0663e-5  1.1684e-5  2.6309e-6  6.8505e-7  1.7630e-7  5.1762e-8
Truncated Runge-Kutta, $\epsilon = 0.4$          6.3260e-2  1.1650e-3  1.7684e-5  2.6309e-6  6.8505e-7  1.7630e-7  5.1762e-8
Truncated Runge-Kutta, $\epsilon = 0.3$          5.1908e-1  1.1047e-1  5.3895e-2  1.3921e-5  6.8505e-7  1.7630e-7  5.1762e-8

Table 4.4: Table of Mean Square Error vs. Step-size with Different $\epsilon$ for Equation (4.3).

5 Stochastic Differential Equations with Non-regular Coefficients

5.1 Assumptions

Consider a 1-dimensional stochastic differential equation
$$dX(t) = f(X(t))\,dt + g(X(t))\,dW(t) \tag{5.1}$$
with initial value $X(0) = x_0$, where $x_0$ is $\mathcal{F}_0$-measurable. The coefficient functions $f, g$ in the integral parts satisfy (2.1). Equation (5.1) can also be written as an Itô process $X = \{X(t), t \ge 0\}$ of the form
$$X(t) = X(0) + \int_0^t f(X(s))\,ds + \int_0^t g(X(s))\,dW(s). \tag{5.2}$$
The second integral in equation (5.2) is an Itô stochastic integral with respect to the standard 1-dimensional Wiener process $W = \{W(t), t \ge 0\}$. For a 1-dimensional stochastic differential equation, we want to investigate a new approximation to find its solution.
We consider SDEs with non-regular coefficients in this chapter; Definition 4.1 defines regular and non-regular coefficients of an SDE. The following are some assumptions and theorems about the coefficients of SDEs. Some of them were introduced in Chapter 3 and are restated here as a reminder.

Assumption 5.1 (Local Lipschitz Condition). For any $r > 0$, there exists a constant $K(r) > 0$ such that
$$|f(x) - f(y)| \le K(r)|x - y|$$
for all $x, y \in \mathbb{R}$ with $|x| \vee |y| \le r$.

Assumption 5.2 (One-sided Lipschitz Condition). There exists a constant $K > 0$ such that
$$(x - y)(f(x) - f(y)) \le K|x - y|^2$$
for all $x, y \in \mathbb{R}$.

Assumption 5.3 (Hölder Continuity). There exist a constant $K > 0$ and $q \in [0, \tfrac12)$ such that
$$|f(x) - f(y)| \le K|x - y|^{\frac12 + q}$$
for all $x, y \in \mathbb{R}$.

Assumption 5.4 (Non-Explosion Condition, Assumption 2.2 [3]). There exists a pair of constants $p > 2$ and $K > 0$ such that
$$xf(x) + \frac{p-1}{2}|g(x)|^2 \le K(1 + |x|^2)$$
for all $x \in \mathbb{R}$.

Lemma 5.5. If $f$ satisfies the one-sided Lipschitz condition and $g$ satisfies the global Lipschitz condition, then $f$ and $g$ satisfy Assumption 5.4.

Theorem 5.6 (Lemma 2.3 [3]). Assume that the SDE (5.1) satisfies all basic settings in Section 2.1. If Assumptions 5.1 and 5.4 hold, then equation (5.1) has a path-wise unique strong solution $X(t)$ on $[0,T]$ with $\sup_{0 \le t \le T} \mathbb{E}(|X(t)|^p) < \infty$.

Theorem 5.7 (Lemma 2.3 [14]). If the drift coefficient $f$ in (5.1) satisfies Assumption 5.2 and the diffusion coefficient $g$ in (5.1) satisfies Assumption 5.3, the SDE (5.1) has a global strong solution $X(t)$ on $[0,T]$ with $\sup_{0 \le t \le T} \mathbb{E}(|X(t)|^2) < \infty$.

5.2 Numerical Examples

Example 5.8 (Equation (4.52) [1]).
$$X(t) = 1 + \int_0^t \bigl(X(s) - X(s)^3\bigr)\,ds + \int_0^t X(s)\,dW(s) \tag{5.3}$$
First, we verify the conditions of Theorem 5.6. Given the drift coefficient $f(x) = x - x^3$ and diffusion coefficient $g(x) = x$ in equation (5.3), we check whether the drift coefficient $f$ satisfies Assumption 5.1 and whether the coefficients $f, g$ satisfy Assumption 5.4.
The function $f$ satisfies Assumption 5.1:
$$|f(x) - f(y)| = |x - x^3 - y + y^3| = |(x - y) - (x - y)(x^2 + xy + y^2)| = \bigl|1 - \bigl((x+y)^2 - xy\bigr)\bigr|\,|x - y| \le K(r)|x - y|$$
for $|x| \vee |y| \le r$. The function $f$ also satisfies Assumption 5.2:
$$(x - y)(f(x) - f(y)) = (x - y)(x - x^3 - y + y^3) = \bigl(1 - \bigl((x+y)^2 - xy\bigr)\bigr)(x - y)^2 \le (x - y)^2$$
for all $x, y \in \mathbb{R}$, since $(x+y)^2 - xy = x^2 + xy + y^2 \ge 0$ and hence $1 - ((x+y)^2 - xy) \le 1$.

The diffusion coefficient $g(x) = x$ was already verified to satisfy the global Lipschitz condition for equation (4.3). Hence, by Lemma 5.5, the coefficients $f, g$ satisfy Assumption 5.4. Since Assumptions 5.1 and 5.4 are satisfied, the SDE (5.3) has a path-wise unique strong solution $X(t)$ on $[0,1]$ with $\sup_{0 \le t \le T} \mathbb{E}(|X(t)|^2) < \infty$ according to Theorem 5.6.

After verifying the existence and uniqueness of the solution of SDE (5.3), we plot its sample path using its exact solution and simulations by the Euler method, the tamed Runge-Kutta method, the truncated Runge-Kutta method, and the tamed and truncated Runge-Kutta method. According to [1, Equation 4.52], the solution of (5.3) is
$$X(t) = \frac{e^{\frac12 t + W(t)}}{\sqrt{1 + 2\int_0^t e^{s + 2W(s)}\,ds}}. \tag{5.4}$$
To plot a sample trajectory of this exact solution, we take $T = 1$, $N = 10^4$, and $\Delta t = 10^{-4}$ and get Figure 5.1. The MATLAB code is attached in Appendix D.1.

Figure 5.1: A Sample Trajectory of Exact Solution for Equation (5.3).

The Euler method for equation (5.3) is, with $Y_0 = 1$,
$$Y_{k+1} = Y_k + (Y_k - Y_k^3)\Delta t + Y_k\,\Delta W(k) \tag{5.5}$$
where $\Delta W(k) = W(t_{k+1}) - W(t_k)$ and $t_k = k\Delta t$, for $k = 1, 2, \dots, N$. The tamed Runge-Kutta approximation is, with $Y_0 = 1$,
$$\begin{aligned}\widetilde Y_k &= Y_k + \frac{Y_k - Y_k^3}{1 + |Y_k - Y_k^3|\Delta t}\,\Delta t + Y_k\,\Delta t^{1/2}\\ Y_{k+1} &= Y_k + \frac{(Y_k - Y_k^3)\,\Delta t}{1 + |Y_k - Y_k^3|\Delta t} + Y_k\,\Delta W(k) + \tfrac12\bigl((\Delta W(k))^2 - \Delta t\bigr)(\widetilde Y_k - Y_k)\,\Delta t^{-1/2}\end{aligned} \tag{5.6}$$
where $\Delta W(k) = W(t_{k+1}) - W(t_k)$ and $t_k = k\Delta t$, for $k = 1, 2, \dots, N$.

For the truncated Runge-Kutta method and the tamed and truncated Runge-Kutta method, we choose the continuous strictly increasing function $\mu(r) = r^2 + 1$ and $\epsilon = 0.2$. Then its inverse $\mu^{-1}(r) = \sqrt{r - 1}$ is strictly increasing and continuous on $\mathbb{R}_+$. Let $K = 1$.
For $\Delta t = 10^{-2}$, the truncation mapping $\pi_{\Delta t}$ is defined as
$$\pi_{\Delta t}(x) = \Bigl(|x| \wedge \sqrt{\Delta t^{-0.2} - 1}\Bigr)\frac{x}{|x|}$$
and
$$f(\pi_{\Delta t}(Y_k)) = \Bigl(|Y_k| \wedge \sqrt{\Delta t^{-0.2} - 1}\Bigr)\frac{Y_k}{|Y_k|} - \biggl(\Bigl(|Y_k| \wedge \sqrt{\Delta t^{-0.2} - 1}\Bigr)\frac{Y_k}{|Y_k|}\biggr)^3.$$
Hence, the truncated Runge-Kutta approximation is given as, with $Y_0 = 1$,
$$\begin{aligned}\widetilde Y_k &= Y_k + f(\pi_{\Delta t}(Y_k))\,\Delta t + Y_k\,\Delta t^{1/2}\\ Y_{k+1} &= Y_k + f(\pi_{\Delta t}(Y_k))\,\Delta t + Y_k\,\Delta W(k) + \tfrac12\bigl((\Delta W(k))^2 - \Delta t\bigr)(\widetilde Y_k - Y_k)\,\Delta t^{-1/2}\end{aligned} \tag{5.7}$$
where $\Delta W(k) = W(t_{k+1}) - W(t_k)$ and $t_k = k\Delta t$, for $k = 1, 2, \dots, N$. The tamed and truncated Runge-Kutta approximation is given as, with $Y_0 = 1$,
$$\begin{aligned}\widetilde Y_k &= Y_k + \frac{f(\pi_{\Delta t}(Y_k))\,\Delta t}{1 + |f(\pi_{\Delta t}(Y_k))|\Delta t} + Y_k\,\Delta t^{1/2}\\ Y_{k+1} &= Y_k + \frac{f(\pi_{\Delta t}(Y_k))\,\Delta t}{1 + |f(\pi_{\Delta t}(Y_k))|\Delta t} + Y_k\,\Delta W(k) + \tfrac12\bigl((\Delta W(k))^2 - \Delta t\bigr)(\widetilde Y_k - Y_k)\,\Delta t^{-1/2}\end{aligned} \tag{5.8}$$
where $\Delta W(k) = W(t_{k+1}) - W(t_k)$ and $t_k = k\Delta t$, for $k = 1, 2, \dots, N$.

Then for the numerical methods, we use $T = 1$, $N = 10^2$, and $\Delta t = 10^{-2}$ with the same values of $W(t)$ used to plot the exact solution; that is, $\Delta W(i) = W(t_i) - W(t_{i-1})$. Figures 5.2, 5.3, 5.4, and 5.5 show approximations of one single trajectory generated by each of the Euler method, the tamed Runge-Kutta method, the truncated Runge-Kutta method, and the tamed and truncated Runge-Kutta method. The MATLAB codes are attached in Appendices D.2, D.3, D.4, and D.5.

Figure 5.2: A Sample Trajectory of Exact Solution and Euler Method for Equation (5.3).

Figure 5.3: A Sample Trajectory of Exact Solution and Tamed Runge-Kutta Method for Equation (5.3).

Figure 5.4: A Sample Trajectory of Exact Solution and Truncated Runge-Kutta Method for Equation (5.3).

Figure 5.5: A Sample Trajectory of Exact Solution and Tamed and Truncated Runge-Kutta Method for Equation (5.3).

The tamed Runge-Kutta method and the truncated Runge-Kutta method may also fail. Here is such an example.

Example 5.9 (Equation (4.37) [1]). Consider the SDE
$$X(t) = 1 + \int_0^t 1\,ds + \int_0^t 2\sqrt{X(s)}\,dW(s) \tag{5.9}$$
for $t \in [0,1]$. First, we verify the conditions of Theorem 5.7.
Given the drift coefficient $f(x) = 1$ and diffusion coefficient $g(x) = 2\sqrt{x}$ in equation (5.9), we check whether the drift coefficient $f$ satisfies Assumption 5.2 and the diffusion coefficient $g$ satisfies Assumption 5.3. The drift coefficient $f$ satisfies Assumption 5.2 because $f$ is the constant function 1, so $(x - y)(f(x) - f(y)) = 0$.

The function $g$ satisfies Assumption 5.3:
$$|g(x) - g(y)| = 2|\sqrt{x} - \sqrt{y}| = \frac{2|x - y|}{\sqrt{x} + \sqrt{y}} = 2\sqrt{|x - y|}\,\frac{\sqrt{|x - y|}}{\sqrt{x} + \sqrt{y}} \le 2\sqrt{|x - y|} = 2|x - y|^{\frac12}$$
for all $x, y \ge 0$, since $\sqrt{|x - y|} \le \sqrt{x} + \sqrt{y}$.

Since Assumptions 5.2 and 5.3 are satisfied, the SDE (5.9) has a path-wise unique strong solution $X(t)$ on $[0,1]$ with $\sup_{0 \le t \le T} \mathbb{E}(|X(t)|^2) < \infty$ according to Theorem 5.7.

After verifying the existence and uniqueness of the solution of SDE (5.9), we plot its sample path using its exact solution and simulations by the Euler method, the tamed Runge-Kutta method, the truncated Runge-Kutta method, and the tamed and truncated Runge-Kutta method. According to [1, Equation 4.37], the solution of (5.9) is
$$X(t) = (W(t) + 1)^2. \tag{5.10}$$
To plot a sample trajectory of the exact solution, we first simulate $W(t)$ by computing the cumulative sum of $\Delta W(t) = \sqrt{\Delta t}\,N(0,1)$ from 0 to 1 with step-size $\Delta t$. After obtaining the values of $W(t)$ at each $t_i = i\Delta t$, $i = 1, 2, \dots, N$, we use the exact solution to calculate $X(t)$. For equation (5.9), we take $T = 1$, $N = 10^4$, and $\Delta t = 10^{-4}$ and get Figure 5.6. The code is attached in Appendix D.6. Next we will show that the Euler method, the tamed Runge-Kutta method, the truncated Runge-Kutta method, and the tamed and truncated Runge-Kutta method all fail for this example.

Figure 5.6: A Sample Trajectory of Exact Solution for Equation (5.9).

For the numerical methods, we use $T = 1$, $N = 10^2$, and $\Delta t = 10^{-2}$ with the same values of $W(t)$ used to plot the exact solution; that is, $\Delta W(i) = W(t_i) - W(t_{i-1})$. The Euler method for equation (5.9) is, with $Y_0 = 1$,
$$Y_{k+1} = Y_k + \Delta t + 2\sqrt{Y_k}\,\Delta W(k) \tag{5.11}$$
where $\Delta W(k) = W(t_{k+1}) - W(t_k)$ and $t_k = k\Delta t$, for $k = 1, 2, \dots, N$.
The tamed Runge-Kutta approximation is, with $Y_0 = 1$,
$$\begin{aligned}\widetilde Y_k &= Y_k + \frac{1}{1 + \Delta t}\,\Delta t + 2\sqrt{Y_k}\,\Delta t^{1/2}\\ Y_{k+1} &= Y_k + \frac{\Delta t}{1 + \Delta t} + 2\sqrt{Y_k}\,\Delta W(k) + \tfrac12\bigl((\Delta W(k))^2 - \Delta t\bigr)\bigl(2\sqrt{\widetilde Y_k} - 2\sqrt{Y_k}\bigr)\,\Delta t^{-1/2}\end{aligned} \tag{5.12}$$
where $\Delta W(k) = W(t_{k+1}) - W(t_k)$ and $t_k = k\Delta t$, for $k = 1, 2, \dots, N$.

For the truncated Runge-Kutta method and the tamed and truncated Runge-Kutta method, we choose the continuous strictly increasing function $\mu(r) = \sqrt{2r} + 1$ and $\epsilon = 0.5$. Then its inverse $\mu^{-1}(r) = \tfrac12(r - 1)^2$ is strictly increasing and continuous on $\mathbb{R}_+$. Let $K = 3 \ge |f(0)| \vee \mu(1)$. For $\Delta t = 10^{-2}$, the truncation mapping $\pi_{\Delta t}$ is defined as
$$\pi_{\Delta t}(x) = \Bigl(|x| \wedge \tfrac12\bigl(3\Delta t^{-0.5} - 1\bigr)^2\Bigr)\frac{x}{|x|}.$$
Hence, the truncated Runge-Kutta approximation is given as, with $Y_0 = 1$,
$$\begin{aligned}\widetilde Y_k &= Y_k + \Delta t + 2\sqrt{Y_k}\,\Delta t^{1/2}\\ Y_{k+1} &= Y_k + \Delta t + 2\sqrt{\pi_{\Delta t}(Y_k)}\,\Delta W(k) + \tfrac12\bigl((\Delta W(k))^2 - \Delta t\bigr)\bigl(2\sqrt{\widetilde Y_k} - 2\sqrt{Y_k}\bigr)\,\Delta t^{-1/2}\end{aligned} \tag{5.13}$$
where $\Delta W(k) = W(t_{k+1}) - W(t_k)$ and $t_k = k\Delta t$, for $k = 1, 2, \dots, N$. The tamed and truncated Runge-Kutta approximation is given as, with $Y_0 = 1$,
$$\begin{aligned}\widetilde Y_k &= Y_k + \Delta t + 2\sqrt{Y_k}\,\Delta t^{1/2}\\ Y_{k+1} &= Y_k + \Delta t + \frac{2\sqrt{\pi_{\Delta t}(Y_k)}\,\Delta W(k)}{1 + \bigl|2\sqrt{\pi_{\Delta t}(Y_k)}\,\Delta W(k)\bigr|} + \tfrac12\bigl((\Delta W(k))^2 - \Delta t\bigr)\bigl(2\sqrt{\widetilde Y_k} - 2\sqrt{Y_k}\bigr)\,\Delta t^{-1/2}\end{aligned} \tag{5.14}$$
where $\Delta W(k) = W(t_{k+1}) - W(t_k)$ and $t_k = k\Delta t$, for $k = 1, 2, \dots, N$.

Figures 5.7, 5.8, 5.9, and 5.10 show approximations of one single trajectory generated by each of the Euler method, the tamed Runge-Kutta method, the truncated Runge-Kutta method, and the tamed and truncated Runge-Kutta method for equation (5.9). The MATLAB codes are attached in Appendices D.7, D.8, D.9, and D.10.

Figure 5.7: A Sample Trajectory of Exact Solution and Euler Method for Equation (5.9).

Figure 5.8: A Sample Trajectory of Exact Solution and Tamed Runge-Kutta Method for Equation (5.9).

Figure 5.9: A Sample Trajectory of Exact Solution and Truncated Runge-Kutta Method for Equation (5.9).

Figure 5.10: A Sample Trajectory of Exact Solution and Tamed and Truncated Runge-Kutta Method for Equation (5.9).
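The failure mechanism for equation (5.9) is that none of these schemes keeps the iterate inside the domain of $g(x) = 2\sqrt{x}$: one sufficiently negative Brownian increment pushes $Y_{k+1}$ below zero, after which $\sqrt{Y_{k+1}}$ is undefined and the computation breaks down. A deterministic Python illustration of a single Euler step (5.11), with an assumed state $Y_k = 0.04$ and increment $\Delta W(k) = -0.5$ chosen purely to exhibit the problem:

```python
import math

def euler_step_sqrt(y, dt, dW):
    """One Euler step (5.11) for dX = dt + 2*sqrt(X) dW."""
    return y + dt + 2.0 * math.sqrt(y) * dW

# 0.04 + 0.01 + 2 * 0.2 * (-0.5) = -0.15: the iterate leaves [0, infinity).
y_next = euler_step_sqrt(0.04, 0.01, -0.5)

# The next step would need sqrt(-0.15): math.sqrt raises ValueError and
# numpy.sqrt returns nan, so the trajectory is destroyed from here on.
```

The same negative increment defeats the tamed and truncated variants as well, since taming and truncation control the size of the coefficients but not the sign of the iterate.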
5.3 Comparison Using Numerical Example between Numerical Methods for SDEs with Non-regular Coefficients

Now consider equation (5.3). In this section, we compare simulations by the tamed Runge-Kutta method, the truncated Runge-Kutta method, and the tamed and truncated Runge-Kutta method. For these numerical approximations, we use the mean square error $\mathbb{E}[(X(T) - Y(N))^2]$ of the Euler method, the tamed Runge-Kutta method, the truncated Runge-Kutta method, and the tamed and truncated Runge-Kutta method, where $N \in \{2^{12}, 2^{13}, \dots, 2^{18}\}$ is the number of time steps and $T = 1$ is the endpoint of the interval $[0,1]$ for $X(t)$. We take $T = 1$ and $N \in \{2^{12}, 2^{13}, \dots, 2^{18}\}$ and use the same SDE as equation (5.3) from the last section:
$$X(t) = 1 + \int_0^t \bigl(X(s) - X(s)^3\bigr)\,ds + \int_0^t X(s)\,dW(s)$$
for $t \in [0,1]$. With $N \in \{2^{12}, 2^{13}, \dots, 2^{18}\}$, we have $\Delta t \in \{2^{-12}, 2^{-13}, \dots, 2^{-18}\}$. Running each method with the corresponding $\Delta t$, we obtain the following figures and table.

Comparing the mean square errors between the exact solution of equation (5.3) and the approximations by the four numerical methods in Figures 5.11 and 5.12, we can see that the ranking from smallest to largest mean square error, as the step-size approaches zero, is the tamed Runge-Kutta method, the truncated Runge-Kutta method, the Euler method, and the tamed and truncated Runge-Kutta method. With the smallest mean square error, the tamed Runge-Kutta method approximates the solution more accurately than the other three numerical methods at the same step-size. The mean square error of the truncated Runge-Kutta method decreases rapidly as the step-size approaches zero when we choose $\mu(x) = x^2 + 1$. More discussion of the truncated Runge-Kutta method can be found in Section 4.3.
Figure 5.11 is a log-log plot of the mean square error over 10000 sample paths between the exact solution $X(T)$ of the SDE and the numerical solutions $Y(N)$ by the Euler method, the tamed Runge-Kutta method, the truncated Runge-Kutta method, and the tamed and truncated Runge-Kutta method, with number of steps $N \in \{2^{12}, 2^{13}, \dots, 2^{18}\}$, for equation (5.3). The code is attached in Appendix D.11.

Figure 5.11: Log-log Plot of Mean Square Error vs. Runtime for Equation (5.3).

Figure 5.12 is a log-log plot of the mean square error over 10000 sample paths between the exact solution $X(T)$ of the SDE and the numerical solutions $Y(N)$ by the Euler method, the tamed Runge-Kutta method, the truncated Runge-Kutta method, and the tamed and truncated Runge-Kutta method, with step-size $\Delta t \in \{2^{-12}, 2^{-13}, \dots, 2^{-18}\}$, for equation (5.3). The code is attached in Appendix D.12.

Figure 5.12: Log-log Plot of Mean Square Error vs. Step-size for Equation (5.3).

Table 5.1 is the table corresponding to the above two figures, showing the mean square error over 10000 sample paths between the exact solution $X(T)$ of the SDE and the numerical solutions $Y(N)$ by the Euler method, the tamed Runge-Kutta method, the truncated Runge-Kutta method, and the tamed and truncated Runge-Kutta method, with step-size $\Delta t \in \{2^{-12}, 2^{-13}, \dots, 2^{-18}\}$, for equation (5.3).

Step-size $\Delta t$                             2^-12      2^-13      2^-14      2^-15      2^-16      2^-17      2^-18
Euler Method                                     2.3089e-5  1.1688e-5  5.8667e-6  2.8372e-6  1.4528e-6  7.1208e-7  3.5698e-7
Tamed Runge-Kutta Method                         1.5903e-7  4.3124e-8  9.4322e-9  2.1974e-9  6.4674e-10 1.5569e-10 4.2050e-11
Truncated Runge-Kutta, $\mu(r) = r^2 + 1$        5.6804e-3  1.5906e-3  9.6909e-5  3.5600e-6  3.7063e-5  5.6381e-8  1.3252e-10
Tamed and Truncated Runge-Kutta                  7.0898e-3  1.9630e-3  1.4277e-4  2.0209e-5  4.8672e-5  3.9160e-6  2.0443e-6

Table 5.1: Table of Mean Square Error vs. Step-size for Equation (5.3).

References

[1] P. E. Kloeden and E. Platen, Numerical Solution of Stochastic Differential Equations, vol. 23.
Springer-Verlag, Berlin, rst ed., 1992. [2] M. Hutzenthaler, A. Jentzen, and P. E. Kloeden, \Strong convergence of an explicit numerical method for SDEs with nonglobally Lipschitz continuous coecients," Ann. Appl. Probab., vol. 22, no. 4, pp. 1611{1641, 2012. [3] X. Mao, \The truncated Euler-Maruyama method for stochastic dierential equations," J. Comput. Appl. Math., vol. 290, pp. 370{384, 2015. [4] D. Revuz and M. Yor, Continuous martingales and Brownian motion. Springer-Verlag, Berlin, third ed., 1999. [5] A. S. Cherny and H.-J. Engelbert, Singular stochastic dierential equations, vol. 1858 of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 2005. [6] N. V. Krylov and A. K. Zvonkin, \On strong solutions of stochastic dierential equa- tions," Sel. Math. Sov., vol. 1, pp. 19{61, 1981. [7] D. W. Stroock and S. R. S. Varadhan, \Diusion processes with continuous coecients. I," Comm. Pure Appl. Math., vol. 22, pp. 345{400, 1969. [8] N. V. Krylov, \It^ o's stochastic integral equations," Theor. Probability Appl., vol. 14, no. 2, pp. 330{336, 1969. [9] N. Ikeda and S. Watanabe, Stochastic dierential equations and diusion processes. North-Holland Publishing Co., Amsterdam-New York; Kodansha, Ltd., Tokyo, 1981. [10] N. V. Krylov, Introduction to the Theory of Diusion Processes. Providence, RI: Amer- ican Mathematical Society, 1995. [11] A. K. Zvonkin, \A transformation of the phase space of a diusion process that will remove the drift," Mat. Sb. (N.S.), vol. 93, no. 1, pp. 129{149, 1974. [12] X. Mao, \Convergence rates of the truncated Euler-Maruyama method for stochastic dierential equations," J. Comput. Appl. Math., vol. 296, pp. 362{375, 2016. [13] X. Li and G. Yin, \Explicit Milstein schemes with truncation for nonlinear stochastic dierential equations: convergence and its rate," J. Comput. Appl. Math., vol. 374, 112771, 23 pp., 2020. [14] H. Yang, F. Wu, P. E. Kloeden, and X. 
Mao, \The truncated Euler-Maruyama method for stochastic dierential equations with H older diusion coecients," J. Comput. Appl. Math., vol. 366, 112379, 13 pp., 2020. 61 Appendices A MATLAB Codes for Chapter 2 A.1 A Sample Trajectory of Wiener Process. 1 randn('state', 100) % fix the state of randn 2 T = 1; N = 1000; dt = T/N; 3 dW = sqrt(dt) * randn(1, N); % increments in each time step-size 4 W = cumsum(dW); % cumulative sum to get W 5 plot([0: dt: T], [0,W], 'b-') % plot B MATLAB Codes for Chapter 3 B.1 A Sample Trajectory of Exact Solution and Euler Method for Equation (3.1). 1 % dX = 2X dt + X dW 2 % Single Trajectory 3 T = 1; N= 10ˆ4; dt = T/N; 4 f = @(x) 2 * x; g = @(x) x; X 0 = 1; 5 6 randn('state', 100) 7 dW = sqrt(dt) * randn(1, N); 8 W = cumsum(dW); 9 10 % Explicit Solution 11 X = X 0 * exp(3/2 * [dt: dt: T]+W); 12 plot([0: dt: T], [X 0, X], 'g-') 13 hold on 14 15 R = 100; Dt = R * dt; L= N/R; % time discretization 16 % Euler Method 17 Y = zeros(1,L); Y temp = X 0; 18 for j = 1:L 19 W y = sum(dW(R * (j-1)+1: R * j)); 20 Y temp = Y temp + f(Y temp) * Dt + g(Y temp) * W y; 21 Y(j) = Y temp; 22 end 23 24 plot([0:Dt:T], [X 0, Y], 'b-- * ') % plot 25 legend('Explicit Solution','Euler Method') % legend 26 hold off 62 C MATLAB Codes for Chapter 4 C.1 A Sample Trajectory of Exact Solution for Equation (4.3). 1 % dX = 2X dt + X dW 2 % Single Trajectory 3 T = 1; N= 10ˆ4; dt = T/N; 4 f = @(x) 2 * x; g = @(x) x; X 0 = 1; 5 tamed function = @(x) x/(1+abs(x)); 6 7 randn('state', 100) 8 dW = sqrt(dt) * randn(1, N); 9 W = cumsum(dW); 10 11 % Explicit Solution 12 X = X 0 * exp(3/2 * [dt: dt: T]+W); 13 plot([0: dt: T], [X 0, X], 'g-') 14 legend('Exact Solution','Location','northwest') % legend C.2 A Sample Trajectory of Exact Solution and Euler Method for Equation (4.3). 
% dX = 2X dt + X dW
% Single Trajectory
T = 1; N = 10^4; dt = T/N;
f = @(x) 2*x; g = @(x) x; X_0 = 1;
tamed_function = @(x) x/(1+abs(x));
randn('state', 100)
dW = sqrt(dt) * randn(1, N);
W = cumsum(dW);

% Explicit Solution
X = X_0 * exp(3/2 * [dt: dt: T] + W);
plot([0: dt: T], [X_0, X], 'g-')
hold on

R = 100; Dt = R*dt; L = N/R; % time discretization
% Euler Method
Y = zeros(1, L); Y_temp = X_0;
for j = 1:L
    W_y = sum(dW(R*(j-1)+1: R*j));
    Y_temp = Y_temp + f(Y_temp)*Dt + g(Y_temp)*W_y;
    Y(j) = Y_temp;
end

plot([0:Dt:T], [X_0, Y], 'b--*') % plot
legend('Exact Solution', 'Euler Method', 'Location', 'northwest') % legend
hold off

C.3 A Sample Trajectory by the Tamed Runge-Kutta Method for Equation (4.3).

% dX = 2X dt + X dW
% Single Trajectory
T = 1; N = 10^4; dt = T/N;
f = @(x) 2*x; g = @(x) x; X_0 = 1;
tamed_function = @(x) x/(1+abs(x));

randn('state', 100)
dW = sqrt(dt) * randn(1, N);
W = cumsum(dW);

% Explicit Solution
X = X_0 * exp(3/2 * [dt: dt: T] + W);
plot([0: dt: T], [X_0, X], 'g-')
hold on

R = 100; Dt = R*dt; L = N/R; % time discretization
% Tamed Runge-Kutta Method
Y = zeros(1, L); Y_temp = X_0;
for j = 1:L
    W_y = sum(dW(R*(j-1)+1: R*j));
    v = f(Y_temp) * Dt;
    Y_temp1 = Y_temp + tamed_function(v) + g(Y_temp)*Dt^0.5;
    Y_temp = Y_temp + tamed_function(v) + g(Y_temp)*W_y + 0.5*(W_y^2-Dt)*(g(Y_temp1)-g(Y_temp))*Dt^(-0.5);
    Y(j) = Y_temp;
end

plot([0:Dt:T], [X_0, Y], 'b--*') % plot
legend('Exact Solution', 'Tamed Runge-Kutta Method', 'Location', 'northwest') % legend
hold off

C.4 A Sample Trajectory by the Truncated Runge-Kutta Method for Equation (4.3).

% dX = 2X dt + X dW
% Single Trajectory
T = 1; N = 10^4; dt = T/N;
f = @(x) 2*x; g = @(x) x; X_0 = 1;
tamed_function = @(x) x/(1+abs(x));

randn('state', 100)
dW = sqrt(dt) * randn(1, N);
W = cumsum(dW);

% Explicit Solution
X = X_0 * exp(3/2 * [dt: dt: T] + W);
plot([0: dt: T], [X_0, X], 'g-')
hold on

R = 100; Dt = R*dt; L = N/R; % time discretization
% Truncated Runge-Kutta Method
Y = zeros(1, L); Y_temp = X_0;
tc = 2 * Dt^(-0.5);
for j = 1:L
    W_y = sum(dW(R*(j-1)+1: R*j));
    truncated = sign(Y_temp) * min(abs(Y_temp), tc);
    Y_temp1 = Y_temp + f(truncated)*Dt + g(Y_temp)*Dt^0.5;
    Y_temp = Y_temp + f(truncated)*Dt + g(Y_temp)*W_y + 0.5*(W_y^2-Dt)*(g(Y_temp1)-g(Y_temp))*Dt^(-0.5);
    Y(j) = Y_temp;
end

plot([0:Dt:T], [X_0, Y], 'b--*') % plot
legend('Exact Solution', 'Truncated Runge-Kutta Method', 'Location', 'northwest') % legend
hold off

C.5 A Sample Trajectory by the Tamed and Truncated Runge-Kutta Method for Equation (4.3).

% dX = 2X dt + X dW
% Single Trajectory
T = 1; N = 10^4; dt = T/N;
f = @(x) 2*x; g = @(x) x; X_0 = 1;
tamed_function = @(x) x/(1+abs(x));

randn('state', 100)
dW = sqrt(dt) * randn(1, N);
W = cumsum(dW);

% Explicit Solution
X = X_0 * exp(3/2 * [dt: dt: T] + W);
plot([0: dt: T], [X_0, X], 'g-')
hold on

R = 100; Dt = R*dt; L = N/R; % time discretization
% Tamed and Truncated Runge-Kutta Method
Y = zeros(1, L); Y_temp = X_0;
tc = 2 * Dt^(-0.5);
for j = 1:L
    W_y = sum(dW(R*(j-1)+1: R*j));
    truncated = sign(Y_temp) * min(abs(Y_temp), tc);
    v = f(truncated) * Dt;
    Y_temp1 = Y_temp + tamed_function(v) + g(Y_temp)*Dt^0.5;
    Y_temp = Y_temp + tamed_function(v) + g(Y_temp)*W_y + 0.5*(W_y^2-Dt)*(g(Y_temp1)-g(Y_temp))*Dt^(-0.5);
    Y(j) = Y_temp;
end

plot([0:Dt:T], [X_0, Y], 'b--*') % plot
legend('Exact Solution', 'Tamed and Truncated Runge-Kutta Method', 'Location', 'northwest') % legend
hold off

C.6 A Sample Trajectory of Simulated True Solution for Equation (4.8).

% dX = X dt + sin(X) dW
T = 1; N = 2^20; dt = T/N;
f = @(x) x; g = @(x) sin(x); X_0 = 1;
tamed_function = @(x) x/(1+abs(x));

randn('state', 100)
dW = sqrt(dt) * randn(1, N);
W = cumsum(dW);

% Simulated True Solution
Y = zeros(1, N); Y_temp = X_0;
for j = 1:N
    Y_temp = Y_temp + f(Y_temp)*dt + g(Y_temp)*dW(j);
    Y(j) = Y_temp;
end
plot([0:dt:T], [X_0, Y], 'g-')
hold on
legend('Simulated True Solution', 'Location', 'northwest') % legend
hold off

C.7 A Sample Trajectory by the Tamed Runge-Kutta Method for Equation (4.8).

% dX = X dt + sin(X) dW
T = 1; N = 2^20; dt = T/N;
f = @(x) x; g = @(x) sin(x); X_0 = 1;
tamed_function = @(x) x/(1+abs(x));

randn('state', 100)
dW = sqrt(dt) * randn(1, N);
W = cumsum(dW);

% Simulated True Solution
Y = zeros(1, N); Y_temp = X_0;
for j = 1:N
    Y_temp = Y_temp + f(Y_temp)*dt + g(Y_temp)*dW(j);
    Y(j) = Y_temp;
end
plot([0:dt:T], [X_0, Y], 'g-')
hold on

R = N/2^6; Dt = R*dt; L = N/R; % time discretization
% Tamed Runge-Kutta Method
Y = zeros(1, L); Y_temp = X_0;
for j = 1:L
    W_y = sum(dW(R*(j-1)+1: R*j));
    v = f(Y_temp) * Dt;
    Y_temp1 = Y_temp + tamed_function(v) + g(Y_temp)*Dt^0.5;
    Y_temp = Y_temp + tamed_function(v) + g(Y_temp)*W_y + 0.5*(W_y^2-Dt)*(g(Y_temp1)-g(Y_temp))*Dt^(-0.5);
    Y(j) = Y_temp;
end

plot([0:Dt:T], [X_0, Y], 'b--*') % plot
legend('Simulated True Solution', 'Tamed Runge-Kutta Method', 'Location', 'northwest') % legend
hold off

C.8 A Sample Trajectory by the Truncated Runge-Kutta Method for Equation (4.8).

% dX = X dt + sin(X) dW
T = 1; N = 2^20; dt = T/N;
f = @(x) x; g = @(x) sin(x); X_0 = 1;
tamed_function = @(x) x/(1+abs(x));

randn('state', 100)
dW = sqrt(dt) * randn(1, N);
W = cumsum(dW);

% Simulated True Solution
Y = zeros(1, N); Y_temp = X_0;
for j = 1:N
    Y_temp = Y_temp + f(Y_temp)*dt + g(Y_temp)*dW(j);
    Y(j) = Y_temp;
end
plot([0:dt:T], [X_0, Y], 'g-')
hold on

R = N/2^6; Dt = R*dt; L = N/R; % time discretization
% Truncated Runge-Kutta Method
Y = zeros(1, L); Y_temp = X_0;
tc = 2 * Dt^(-0.5);
for j = 1:L
    W_y = sum(dW(R*(j-1)+1: R*j));
    truncated = sign(Y_temp) * min(abs(Y_temp), tc);
    Y_temp1 = Y_temp + f(truncated)*Dt + g(Y_temp)*Dt^0.5;
    Y_temp = Y_temp + f(truncated)*Dt + g(Y_temp)*W_y + 0.5*(W_y^2-Dt)*(g(Y_temp1)-g(Y_temp))*Dt^(-0.5);
    Y(j) = Y_temp;
end

plot([0:Dt:T], [X_0, Y], 'b--*') % plot
legend('Simulated True Solution', 'Truncated Runge-Kutta Method', 'Location', 'northwest') % legend
hold off

C.9 A Sample Trajectory by the Tamed and Truncated Runge-Kutta Method for Equation (4.8).

% dX = X dt + sin(X) dW
T = 1; N = 2^20; dt = T/N;
f = @(x) x; g = @(x) sin(x); X_0 = 1;
tamed_function = @(x) x/(1+abs(x));

randn('state', 100)
dW = sqrt(dt) * randn(1, N);
W = cumsum(dW);

% Simulated True Solution
Y = zeros(1, N); Y_temp = X_0;
for j = 1:N
    Y_temp = Y_temp + f(Y_temp)*dt + g(Y_temp)*dW(j);
    Y(j) = Y_temp;
end
plot([0:dt:T], [X_0, Y], 'g-')
hold on

R = N/2^6; Dt = R*dt; L = N/R; % time discretization
% Tamed and Truncated Runge-Kutta Method
Y = zeros(1, L); Y_temp = X_0;
tc = 2 * Dt^(-0.5);
for j = 1:L
    W_y = sum(dW(R*(j-1)+1: R*j));
    truncated = sign(Y_temp) * min(abs(Y_temp), tc);
    v = f(truncated) * Dt;
    Y_temp1 = Y_temp + tamed_function(v) + g(Y_temp)*Dt^0.5;
    Y_temp = Y_temp + tamed_function(v) + g(Y_temp)*W_y + 0.5*(W_y^2-Dt)*(g(Y_temp1)-g(Y_temp))*Dt^(-0.5);
    Y(j) = Y_temp;
end

plot([0:Dt:T], [X_0, Y], 'b--*') % plot
legend('Simulated True Solution', 'Tamed and Truncated Runge-Kutta Method', 'Location', 'northwest') % legend
hold off

C.10 Log-log Plot of Mean Square Error vs. Runtime for Equation (4.3).

a = 12; b = 18;
E_mse_Euler_21 = zeros(1, b-a+1);
E_mse_Tamed_21 = zeros(1, b-a+1);
E_mse_Truncated_21 = zeros(1, b-a+1);
E_mse_Truncated_212 = zeros(1, b-a+1);
E_mse_TmTc_21 = zeros(1, b-a+1);

n = 10^4;

for k = a:b
    T = 1; N = 2^k;
    E_mse_tempE = 0;
    E_mse_tempTm = 0;
    E_mse_tempTc = 0;
    E_mse_tempTc2 = 0;
    E_mse_tempTmTc = 0;
    for j = 1:n
        w = 0;
        Y_Euler_21 = 1;
        Y_Tamed_21 = 1;
        Y_Truncated_21 = 1;
        Y_Truncated_212 = 1;
        Y_TmTc_21 = 1;
        tc = 2 * (1/N)^(-0.5);
        tc2 = log(3 * (1/N)^(-0.5));
        for i = 1:N
            dw = randn/sqrt(N);
            w = w + dw;

            Y_Euler_21 = Y_Euler_21 + 2*Y_Euler_21/N + Y_Euler_21*dw;

            v_Tamed_21 = 2*Y_Tamed_21/N;
            Y_temp1 = Y_Tamed_21 + v_Tamed_21/(1+abs(v_Tamed_21)) + Y_Tamed_21*(1/N)^0.5;
            Y_Tamed_21 = Y_Tamed_21 + v_Tamed_21/(1+abs(v_Tamed_21)) + Y_Tamed_21*dw + 0.5*(dw^2-1/N)*(Y_temp1-Y_Tamed_21)*(1/N)^(-0.5);

            c_Truncated_21 = sign(Y_Truncated_21) * min(abs(Y_Truncated_21), tc);
            Y_temp1 = Y_Truncated_21 + 2*c_Truncated_21/N + Y_Truncated_21*(1/N)^0.5;
            Y_Truncated_21 = Y_Truncated_21 + 2*c_Truncated_21/N + Y_Truncated_21*dw + 0.5*(dw^2-1/N)*(Y_temp1-Y_Truncated_21)*(1/N)^(-0.5);

            c_Truncated_212 = sign(Y_Truncated_212) * min(abs(Y_Truncated_212), tc2);
            Y_temp1 = Y_Truncated_212 + 2*c_Truncated_212/N + Y_Truncated_212*(1/N)^0.5;
            Y_Truncated_212 = Y_Truncated_212 + 2*c_Truncated_212/N + Y_Truncated_212*dw + 0.5*(dw^2-1/N)*(Y_temp1-Y_Truncated_212)*(1/N)^(-0.5);

            c_TmTc_21 = sign(Y_TmTc_21) * min(abs(Y_TmTc_21), tc);
            v_TmTc_21 = 2*c_TmTc_21/N;
            Y_temp1 = Y_TmTc_21 + v_TmTc_21/(1+abs(v_TmTc_21)) + Y_TmTc_21*(1/N)^0.5; % Y_TmTc_21 here; the original listing mistakenly reused Y_Truncated_21
            Y_TmTc_21 = Y_TmTc_21 + v_TmTc_21/(1+abs(v_TmTc_21)) + Y_TmTc_21*dw + 0.5*(dw^2-1/N)*(Y_temp1-Y_TmTc_21)*(1/N)^(-0.5);

        end
        X = exp(3/2*T + w);
        E_mse_tempE = E_mse_tempE + (X-Y_Euler_21)^2;
        E_mse_tempTm = E_mse_tempTm + (X-Y_Tamed_21)^2;
        E_mse_tempTc = E_mse_tempTc + (X-Y_Truncated_21)^2;
        E_mse_tempTc2 = E_mse_tempTc2 + (X-Y_Truncated_212)^2;
        E_mse_tempTmTc = E_mse_tempTmTc + (X-Y_TmTc_21)^2;
    end
    E_mse_Euler_21(k-a+1) = E_mse_tempE/n;
    E_mse_Tamed_21(k-a+1) = E_mse_tempTm/n;
    E_mse_Truncated_21(k-a+1) = E_mse_tempTc/n;
    E_mse_Truncated_212(k-a+1) = E_mse_tempTc2/n;
    E_mse_TmTc_21(k-a+1) = E_mse_tempTmTc/n;
    fprintf('Iteration %d done\n', k);
    fprintf('This message is sent at time %s\n', datestr(now, 'mm-DD HH:MM:SS'))
end

A = a:b;
figure
hold on
set(gca, 'YScale', 'log')

loglog(2.^A, E_mse_Euler_21, 'r--*')
loglog(2.^A, E_mse_Tamed_21, 'b--*')
loglog(2.^A, E_mse_Truncated_21, 'g--*')
loglog(2.^A, E_mse_TmTc_21, 'm--*')

grid on
legend('Euler Method', 'Tamed Runge-Kutta Method', 'Truncated Runge-Kutta Method', 'Tamed and Truncated Runge-Kutta Method') % legend
xlabel('Runtime')
ylabel('Mean Square Error')
hold off

C.11 Log-log Plot of Mean Square Error vs. Step-size for Equation (4.3).
a = 12; b = 18;
E_mse_Euler_21 = zeros(1, b-a+1);
E_mse_Tamed_21 = zeros(1, b-a+1);
E_mse_Truncated_21 = zeros(1, b-a+1);
E_mse_Truncated_212 = zeros(1, b-a+1);
E_mse_TmTc_21 = zeros(1, b-a+1);

n = 10^4;

for k = a:b
    T = 1; N = 2^k;
    E_mse_tempE = 0;
    E_mse_tempTm = 0;
    E_mse_tempTc = 0;
    E_mse_tempTc2 = 0;
    E_mse_tempTmTc = 0;
    for j = 1:n
        w = 0;
        Y_Euler_21 = 1;
        Y_Tamed_21 = 1;
        Y_Truncated_21 = 1;
        Y_Truncated_212 = 1;
        Y_TmTc_21 = 1;
        tc = 2 * (1/N)^(-0.5);
        tc2 = log(3 * (1/N)^(-0.5));
        for i = 1:N
            dw = randn/sqrt(N);
            w = w + dw;

            Y_Euler_21 = Y_Euler_21 + 2*Y_Euler_21/N + Y_Euler_21*dw;

            v_Tamed_21 = 2*Y_Tamed_21/N;
            Y_temp1 = Y_Tamed_21 + v_Tamed_21/(1+abs(v_Tamed_21)) + Y_Tamed_21*(1/N)^0.5;
            Y_Tamed_21 = Y_Tamed_21 + v_Tamed_21/(1+abs(v_Tamed_21)) + Y_Tamed_21*dw + 0.5*(dw^2-1/N)*(Y_temp1-Y_Tamed_21)*(1/N)^(-0.5);

            c_Truncated_21 = sign(Y_Truncated_21) * min(abs(Y_Truncated_21), tc);
            Y_temp1 = Y_Truncated_21 + 2*c_Truncated_21/N + Y_Truncated_21*(1/N)^0.5;
            Y_Truncated_21 = Y_Truncated_21 + 2*c_Truncated_21/N + Y_Truncated_21*dw + 0.5*(dw^2-1/N)*(Y_temp1-Y_Truncated_21)*(1/N)^(-0.5);

            c_Truncated_212 = sign(Y_Truncated_212) * min(abs(Y_Truncated_212), tc2);
            Y_temp1 = Y_Truncated_212 + 2*c_Truncated_212/N + Y_Truncated_212*(1/N)^0.5;
            Y_Truncated_212 = Y_Truncated_212 + 2*c_Truncated_212/N + Y_Truncated_212*dw + 0.5*(dw^2-1/N)*(Y_temp1-Y_Truncated_212)*(1/N)^(-0.5);

            c_TmTc_21 = sign(Y_TmTc_21) * min(abs(Y_TmTc_21), tc);
            v_TmTc_21 = 2*c_TmTc_21/N;
            Y_temp1 = Y_TmTc_21 + v_TmTc_21/(1+abs(v_TmTc_21)) + Y_TmTc_21*(1/N)^0.5; % Y_TmTc_21 here; the original listing mistakenly reused Y_Truncated_21
            Y_TmTc_21 = Y_TmTc_21 + v_TmTc_21/(1+abs(v_TmTc_21)) + Y_TmTc_21*dw + 0.5*(dw^2-1/N)*(Y_temp1-Y_TmTc_21)*(1/N)^(-0.5);

        end
        X = exp(3/2*T + w);
        E_mse_tempE = E_mse_tempE + (X-Y_Euler_21)^2;
        E_mse_tempTm = E_mse_tempTm + (X-Y_Tamed_21)^2;
        E_mse_tempTc = E_mse_tempTc + (X-Y_Truncated_21)^2;
        E_mse_tempTc2 = E_mse_tempTc2 + (X-Y_Truncated_212)^2;
        E_mse_tempTmTc = E_mse_tempTmTc + (X-Y_TmTc_21)^2;
    end
    E_mse_Euler_21(k-a+1) = E_mse_tempE/n;
    E_mse_Tamed_21(k-a+1) = E_mse_tempTm/n;
    E_mse_Truncated_21(k-a+1) = E_mse_tempTc/n;
    E_mse_Truncated_212(k-a+1) = E_mse_tempTc2/n;
    E_mse_TmTc_21(k-a+1) = E_mse_tempTmTc/n;
    fprintf('Iteration %d done\n', k);
    fprintf('This message is sent at time %s\n', datestr(now, 'mm-DD HH:MM:SS'))
end

A = a:b; % index vector for plotting (missing in the original listing)
figure
hold on
set(gca, 'YScale', 'log')

loglog(2.^-A, E_mse_Euler_21, 'r--*')
loglog(2.^-A, E_mse_Tamed_21, 'b--*')
loglog(2.^-A, E_mse_Truncated_21, 'g--*')
loglog(2.^-A, E_mse_TmTc_21, 'm--*')

grid on
legend('Euler Method', 'Tamed Runge-Kutta Method', 'Truncated Runge-Kutta Method', 'Tamed and Truncated Runge-Kutta Method', 'Location', 'southeast') % legend
xlabel('Stepsize')
ylabel('Mean Square Error')
hold off

C.12 Log-log Plot of Mean Square Error vs. Step-size with Different φ for Equation (4.3).

a = 12; b = 18; n = 10^4;
E_mse_Truncated_21 = zeros(1, b-a+1);
E_mse_Truncated_212 = zeros(1, b-a+1);
E_mse_Truncated_213 = zeros(1, b-a+1);

for k = a:b
    T = 1; N = 2^k;
    E_mse_tempTc = 0;
    E_mse_tempTc2 = 0;
    E_mse_tempTc3 = 0;
    for j = 1:n
        w = 0;
        Y_Truncated_21 = 1;
        Y_Truncated_212 = 1;
        Y_Truncated_213 = 1;
        tc = 6/4 * (1/N)^(-0.5);
        tc2 = 6/3 * (1/N)^(-0.5);
        tc3 = 6/2.5 * (1/N)^(-0.5);
        for i = 1:N
            dw = randn/sqrt(N);
            w = w + dw;

            c_Truncated_21 = sign(Y_Truncated_21) * min(abs(Y_Truncated_21), tc);
            Y_temp1 = Y_Truncated_21 + 2*c_Truncated_21/N + Y_Truncated_21*(1/N)^0.5;
            Y_Truncated_21 = Y_Truncated_21 + 2*c_Truncated_21/N + Y_Truncated_21*dw + 0.5*(dw^2-1/N)*(Y_temp1-Y_Truncated_21)*(1/N)^(-0.5);

            c_Truncated_212 = sign(Y_Truncated_212) * min(abs(Y_Truncated_212), tc2);
            Y_temp1 = Y_Truncated_212 + 2*c_Truncated_212/N + Y_Truncated_212*(1/N)^0.5;
            Y_Truncated_212 = Y_Truncated_212 + 2*c_Truncated_212/N + Y_Truncated_212*dw + 0.5*(dw^2-1/N)*(Y_temp1-Y_Truncated_212)*(1/N)^(-0.5);

            c_Truncated_213 = sign(Y_Truncated_213) * min(abs(Y_Truncated_213), tc3);
            Y_temp1 = Y_Truncated_213 + 2*c_Truncated_213/N + Y_Truncated_213*(1/N)^0.5;
            Y_Truncated_213 = Y_Truncated_213 + 2*c_Truncated_213/N + Y_Truncated_213*dw + 0.5*(dw^2-1/N)*(Y_temp1-Y_Truncated_213)*(1/N)^(-0.5);
        end
        X = exp(3/2*T + w);
        E_mse_tempTc = E_mse_tempTc + (X-Y_Truncated_21)^2;
        E_mse_tempTc2 = E_mse_tempTc2 + (X-Y_Truncated_212)^2;
        E_mse_tempTc3 = E_mse_tempTc3 + (X-Y_Truncated_213)^2;
    end
    E_mse_Truncated_21(k-a+1) = E_mse_tempTc/n;
    E_mse_Truncated_212(k-a+1) = E_mse_tempTc2/n;
    E_mse_Truncated_213(k-a+1) = E_mse_tempTc3/n;
    fprintf('Iteration %d done\n', k);
    fprintf('This message is sent at time %s\n', datestr(now, 'mm-DD HH:MM:SS'))
end

%%
A = a:b;
figure
hold on
set(gca, 'YScale', 'log')

loglog(2.^-A, E_mse_Truncated_21, 'b--*')
loglog(2.^-A, E_mse_Truncated_212, 'g--*')
loglog(2.^-A, E_mse_Truncated_213, 'm--*')

grid on
legend('Truncated Runge-Kutta Method with \phi=4x', 'Truncated Runge-Kutta Method with \phi=3x', 'Truncated Runge-Kutta Method with \phi=2.5x', 'Location', 'southeast') % legend
xlabel('Stepsize')
ylabel('Mean Square Error')
hold off

C.13 Log-log Plot of Mean Square Error vs. Step-size with Different K for Equation (4.3).

a = 12; b = 18; n = 10^4;
E_mse_Truncated_21 = zeros(1, b-a+1);
E_mse_Truncated_212 = zeros(1, b-a+1);
E_mse_Truncated_213 = zeros(1, b-a+1);

for k = a:b
    T = 1; N = 2^k;
    E_mse_tempTc = 0;
    E_mse_tempTc2 = 0;
    E_mse_tempTc3 = 0;
    for j = 1:n
        w = 0;
        Y_Truncated_21 = 1;
        Y_Truncated_212 = 1;
        Y_Truncated_213 = 1;
        tc = 6/2.5 * (1/N)^(-0.5);
        tc2 = 4/2.5 * (1/N)^(-0.5);
        tc3 = 2/2.5 * (1/N)^(-0.5);
        for i = 1:N
            dw = randn/sqrt(N);
            w = w + dw;

            c_Truncated_21 = sign(Y_Truncated_21) * min(abs(Y_Truncated_21), tc);
            Y_temp1 = Y_Truncated_21 + 2*c_Truncated_21/N + Y_Truncated_21*(1/N)^0.5;
            Y_Truncated_21 = Y_Truncated_21 + 2*c_Truncated_21/N + Y_Truncated_21*dw + 0.5*(dw^2-1/N)*(Y_temp1-Y_Truncated_21)*(1/N)^(-0.5);

            c_Truncated_212 = sign(Y_Truncated_212) * min(abs(Y_Truncated_212), tc2);
            Y_temp1 = Y_Truncated_212 + 2*c_Truncated_212/N + Y_Truncated_212*(1/N)^0.5;
            Y_Truncated_212 = Y_Truncated_212 + 2*c_Truncated_212/N + Y_Truncated_212*dw + 0.5*(dw^2-1/N)*(Y_temp1-Y_Truncated_212)*(1/N)^(-0.5);

            c_Truncated_213 = sign(Y_Truncated_213) * min(abs(Y_Truncated_213), tc3);
            Y_temp1 = Y_Truncated_213 + 2*c_Truncated_213/N + Y_Truncated_213*(1/N)^0.5;
            Y_Truncated_213 = Y_Truncated_213 + 2*c_Truncated_213/N + Y_Truncated_213*dw + 0.5*(dw^2-1/N)*(Y_temp1-Y_Truncated_213)*(1/N)^(-0.5);
        end
        X = exp(3/2*T + w);
        E_mse_tempTc = E_mse_tempTc + (X-Y_Truncated_21)^2;
        E_mse_tempTc2 = E_mse_tempTc2 + (X-Y_Truncated_212)^2;
        E_mse_tempTc3 = E_mse_tempTc3 + (X-Y_Truncated_213)^2;
    end
    E_mse_Truncated_21(k-a+1) = E_mse_tempTc/n;
    E_mse_Truncated_212(k-a+1) = E_mse_tempTc2/n;
    E_mse_Truncated_213(k-a+1) = E_mse_tempTc3/n;
    fprintf('Iteration %d done\n', k);
    fprintf('This message is sent at time %s\n', datestr(now, 'mm-DD HH:MM:SS'))
end

%%
A = a:b;
figure
hold on
set(gca, 'YScale', 'log')

loglog(2.^-A, E_mse_Truncated_21, 'b--*')
loglog(2.^-A, E_mse_Truncated_212, 'g--*')
loglog(2.^-A, E_mse_Truncated_213, 'm--*')

grid on
legend('Truncated Runge-Kutta Method with K=6', 'Truncated Runge-Kutta Method with K=4', 'Truncated Runge-Kutta Method with K=2', 'Location', 'southeast') % legend
xlabel('Stepsize')
ylabel('Mean Square Error')
hold off

C.14 Log-log Plot of Mean Square Error vs. Step-size with Different θ for Equation (4.3).

a = 12; b = 18; n = 10^4;
E_mse_Truncated_21 = zeros(1, b-a+1);
E_mse_Truncated_212 = zeros(1, b-a+1);
E_mse_Truncated_213 = zeros(1, b-a+1);

for k = a:b
    T = 1; N = 2^k;
    E_mse_tempTc = 0;
    E_mse_tempTc2 = 0;
    E_mse_tempTc3 = 0;
    for j = 1:n
        w = 0;
        Y_Truncated_21 = 1;
        Y_Truncated_212 = 1;
        Y_Truncated_213 = 1;
        tc = 6/2.5 * (1/N)^(-0.5);
        tc2 = 6/2.5 * (1/N)^(-0.4);
        tc3 = 6/2.5 * (1/N)^(-0.3);
        for i = 1:N
            dw = randn/sqrt(N);
            w = w + dw;

            c_Truncated_21 = sign(Y_Truncated_21) * min(abs(Y_Truncated_21), tc);
            Y_temp1 = Y_Truncated_21 + 2*c_Truncated_21/N + Y_Truncated_21*(1/N)^0.5;
            Y_Truncated_21 = Y_Truncated_21 + 2*c_Truncated_21/N + Y_Truncated_21*dw + 0.5*(dw^2-1/N)*(Y_temp1-Y_Truncated_21)*(1/N)^(-0.5);

            c_Truncated_212 = sign(Y_Truncated_212) * min(abs(Y_Truncated_212), tc2);
            Y_temp1 = Y_Truncated_212 + 2*c_Truncated_212/N + Y_Truncated_212*(1/N)^0.5;
            Y_Truncated_212 = Y_Truncated_212 + 2*c_Truncated_212/N + Y_Truncated_212*dw + 0.5*(dw^2-1/N)*(Y_temp1-Y_Truncated_212)*(1/N)^(-0.5);

            c_Truncated_213 = sign(Y_Truncated_213) * min(abs(Y_Truncated_213), tc3);
            Y_temp1 = Y_Truncated_213 + 2*c_Truncated_213/N + Y_Truncated_213*(1/N)^0.5;
            Y_Truncated_213 = Y_Truncated_213 + 2*c_Truncated_213/N + Y_Truncated_213*dw + 0.5*(dw^2-1/N)*(Y_temp1-Y_Truncated_213)*(1/N)^(-0.5);
        end
        X = exp(3/2*T + w);
        E_mse_tempTc = E_mse_tempTc + (X-Y_Truncated_21)^2;
        E_mse_tempTc2 = E_mse_tempTc2 + (X-Y_Truncated_212)^2;
        E_mse_tempTc3 = E_mse_tempTc3 + (X-Y_Truncated_213)^2;
    end
    E_mse_Truncated_21(k-a+1) = E_mse_tempTc/n;
    E_mse_Truncated_212(k-a+1) = E_mse_tempTc2/n;
    E_mse_Truncated_213(k-a+1) = E_mse_tempTc3/n;
    fprintf('Iteration %d done\n', k);
    fprintf('This message is sent at time %s\n', datestr(now, 'mm-DD HH:MM:SS'))
end

%%
A = a:b;
figure
hold on
set(gca, 'YScale', 'log')

loglog(2.^-A, E_mse_Truncated_21, 'b--*')
loglog(2.^-A, E_mse_Truncated_212, 'g--*')
loglog(2.^-A, E_mse_Truncated_213, 'm--*')

grid on
legend('Truncated Runge-Kutta Method with \theta = 0.5', 'Truncated Runge-Kutta Method with \theta = 0.4', 'Truncated Runge-Kutta Method with \theta = 0.3', 'Location', 'southeast') % legend
xlabel('Stepsize')
ylabel('Mean Square Error')
hold off

D MATLAB Codes for Chapter 5

D.1 A Sample Trajectory of Exact Solution for Equation (5.3).

% X(t) = 1 + \int_0^t (X(s) - X(s)^3) ds + \int_0^t X(s) dW(s)
% X(t) = e^{t/2 + W(t)} / sqrt(1 + 2 \int_0^t e^{s + 2W(s)} ds)
T = 1; N = 10^4; dt = T/N;
f = @(x) x - x^3; g = @(x) x; X_0 = 1;
tamed_function = @(x) x/(1+abs(x));

randn('state', 100);
dW = sqrt(dt) * randn(1, N);
W = cumsum(dW);

% Explicit Solution
X = exp(0.5.*[dt: dt: T] + W)./sqrt(1 + 2*cumsum(dt*exp([dt: dt: T] + 2*W)));
plot([0: dt: T], [X_0, X], 'g-')
hold on
legend('Exact Solution') % legend
hold off

D.2 A Sample Trajectory of Exact Solution and Euler Method to Equation (5.3).

% X(t) = 1 + \int_0^t (X(s) - X(s)^3) ds + \int_0^t X(s) dW(s)
% X(t) = e^{t/2 + W(t)} / sqrt(1 + 2 \int_0^t e^{s + 2W(s)} ds)
T = 1; N = 10^4; dt = T/N;
f = @(x) x - x^3; g = @(x) x; X_0 = 1;
tamed_function = @(x) x/(1+abs(x));

randn('state', 100);
dW = sqrt(dt) * randn(1, N);
W = cumsum(dW);

% Explicit Solution
X = exp(0.5.*[dt: dt: T] + W)./sqrt(1 + 2*cumsum(dt*exp([dt: dt: T] + 2*W)));
plot([0: dt: T], [X_0, X], 'g-')
hold on

R = 100; Dt = R*dt; L = N/R; % time discretization
% Euler Method
Y = zeros(1, L); Y_temp = X_0;
for j = 1:L
    W_y = sum(dW(R*(j-1)+1: R*j));
    Y_temp = Y_temp + f(Y_temp)*Dt + g(Y_temp)*W_y;
    Y(j) = Y_temp;
end

plot([0:Dt:T], [X_0, Y], 'b--*') % plot
legend('Exact Solution', 'Euler Method') % legend
hold off

D.3 A Sample Trajectory of Exact Solution and Tamed Runge-Kutta Method for Equation (5.3).

% X(t) = 1 + \int_0^t (X(s) - X(s)^3) ds + \int_0^t X(s) dW(s)
% X(t) = e^{t/2 + W(t)} / sqrt(1 + 2 \int_0^t e^{s + 2W(s)} ds)
T = 1; N = 10^4; dt = T/N;
f = @(x) x - x^3; g = @(x) x; X_0 = 1;
tamed_function = @(x) x/(1+abs(x));

randn('state', 100);
dW = sqrt(dt) * randn(1, N);
W = cumsum(dW);

% Explicit Solution
X = exp(0.5.*[dt: dt: T] + W)./sqrt(1 + 2*cumsum(dt*exp([dt: dt: T] + 2*W)));
plot([0: dt: T], [X_0, X], 'g-')
hold on

R = 100; Dt = R*dt; L = N/R; % time discretization
% Tamed Runge-Kutta Method
Y = zeros(1, L); Y_temp = X_0;
for j = 1:L
    W_y = sum(dW(R*(j-1)+1: R*j));
    v = f(Y_temp) * Dt;
    Y_temp1 = Y_temp + tamed_function(v) + g(Y_temp)*Dt^0.5;
    Y_temp = Y_temp + tamed_function(v) + g(Y_temp)*W_y + 0.5*(W_y^2-Dt)*(g(Y_temp1)-g(Y_temp))*Dt^(-0.5);
    Y(j) = Y_temp;
end

plot([0:Dt:T], [X_0, Y], 'b--*') % plot
legend('Exact Solution', 'Tamed Runge-Kutta Method') % legend
hold off

D.4 A Sample Trajectory of Exact Solution and Truncated Runge-Kutta Method for Equation (5.3).

% X(t) = 1 + \int_0^t (X(s) - X(s)^3) ds + \int_0^t X(s) dW(s)
% X(t) = e^{t/2 + W(t)} / sqrt(1 + 2 \int_0^t e^{s + 2W(s)} ds)
T = 1; N = 10^4; dt = T/N;
f = @(x) x - x^3; g = @(x) x; X_0 = 1;
tamed_function = @(x) x/(1+abs(x));

randn('state', 100);
dW = sqrt(dt) * randn(1, N);
W = cumsum(dW);

% Explicit Solution
X = exp(0.5.*[dt: dt: T] + W)./sqrt(1 + 2*cumsum(dt*exp([dt: dt: T] + 2*W)));
plot([0: dt: T], [X_0, X], 'g-')
hold on

R = 100; Dt = R*dt; L = N/R; % time discretization
% Truncated Runge-Kutta Method
Y = zeros(1, L); Y_temp = X_0;
tc = sqrt(Dt^(-0.2) - 1);
for j = 1:L
    W_y = sum(dW(R*(j-1)+1: R*j));
    truncated = sign(Y_temp) * min(abs(Y_temp), tc);
    Y_temp1 = Y_temp + f(truncated)*Dt + g(Y_temp)*Dt^0.5;
    Y_temp = Y_temp + f(truncated)*Dt + g(Y_temp)*W_y + 0.5*(W_y^2-Dt)*(g(Y_temp1)-g(Y_temp))*Dt^(-0.5);
    Y(j) = Y_temp;
end

plot([0:Dt:T], [X_0, Y], 'b--*') % plot
legend('Exact Solution', 'Truncated Runge-Kutta Method') % legend
hold off

D.5 A Sample Trajectory of Exact Solution and Tamed and Truncated Runge-Kutta Method for Equation (5.3).

% X(t) = 1 + \int_0^t (X(s) - X(s)^3) ds + \int_0^t X(s) dW(s)
% X(t) = e^{t/2 + W(t)} / sqrt(1 + 2 \int_0^t e^{s + 2W(s)} ds)
T = 1; N = 10^4; dt = T/N;
f = @(x) x - x^3; g = @(x) x; X_0 = 1;
tamed_function = @(x) x/(1+abs(x));

randn('state', 100);
dW = sqrt(dt) * randn(1, N);
W = cumsum(dW);

% Explicit Solution
X = exp(0.5.*[dt: dt: T] + W)./sqrt(1 + 2*cumsum(dt*exp([dt: dt: T] + 2*W)));
plot([0: dt: T], [X_0, X], 'g-')
hold on

R = 100; Dt = R*dt; L = N/R; % time discretization
% Tamed and Truncated Runge-Kutta Method
Y = zeros(1, L); Y_temp = X_0;
tc = sqrt(Dt^(-0.2) - 1);
for j = 1:L
    W_y = sum(dW(R*(j-1)+1: R*j));
    truncated = sign(Y_temp) * min(abs(Y_temp), tc);
    v = f(truncated) * Dt;
    Y_temp1 = Y_temp + tamed_function(v) + g(Y_temp)*Dt^0.5;
    Y_temp = Y_temp + tamed_function(v) + g(Y_temp)*W_y + 0.5*(W_y^2-Dt)*(g(Y_temp1)-g(Y_temp))*Dt^(-0.5);
    Y(j) = Y_temp;
end

plot([0:Dt:T], [X_0, Y], 'b--*') % plot
legend('Exact Solution', 'Tamed and Truncated Runge-Kutta Method') % legend
hold off

D.6 A Sample Trajectory of Exact Solution for Equation (5.9).

% dX(t) = 1 dt + 2 sqrt(X(t)) dW(t)
% X(t) = (W(t) + 1)^2
T = 1; N = 10^4; dt = T/N;
f = @(x) 1; g = @(x) 2*sqrt(abs(x)); X_0 = 1;
tamed_function = @(x) x/(1+x);

randn('state', 100);
dW = sqrt(dt) * randn(1, N);
W = cumsum(dW);

% Explicit Solution
X = (W+1).^2;
plot([0: dt: T], [X_0, X], 'g-')
hold on
legend('Exact Solution', 'Location', 'northeast') % legend
hold off

D.7 A Sample Trajectory of Exact Solution and Euler Method for Equation (5.9).

% dX(t) = 1 dt + 2 sqrt(X(t)) dW(t)
% X(t) = (W(t) + 1)^2
T = 1; N = 10^4; dt = T/N;
f = @(x) 1; g = @(x) 2*sqrt(abs(x)); X_0 = 1;
tamed_function = @(x) x/(1+x);

randn('state', 100);
dW = sqrt(dt) * randn(1, N);
W = cumsum(dW);

% Explicit Solution
X = (W+1).^2;
plot([0: dt: T], [X_0, X], 'g-')
hold on

R = 100; Dt = R*dt; L = N/R; % time discretization
% Euler Method
Y = zeros(1, L); Y_temp = X_0;
for j = 1:L
    W_y = sum(dW(R*(j-1)+1: R*j));
    Y_temp = Y_temp + f(Y_temp)*Dt + g(Y_temp)*W_y;
    Y(j) = Y_temp;
end

plot([0:Dt:T], [X_0, Y], 'b--*') % plot
legend('Exact Solution', 'Euler Method', 'Location', 'northeast') % legend
hold off

D.8 A Sample Trajectory of Exact Solution and Tamed Runge-Kutta Method for Equation (5.9).

% dX(t) = 1 dt + 2 sqrt(X(t)) dW(t)
% X(t) = (W(t) + 1)^2
T = 1; N = 10^4; dt = T/N;
f = @(x) 1; g = @(x) 2*sqrt(abs(x)); X_0 = 1;
tamed_function = @(x) x/(1+x);

randn('state', 100);
dW = sqrt(dt) * randn(1, N);
W = cumsum(dW);

% Explicit Solution
X = (W+1).^2;
plot([0: dt: T], [X_0, X], 'g-')
hold on

R = 100; Dt = R*dt; L = N/R; % time discretization
% Tamed Runge-Kutta Method
Y = zeros(1, L); Y_temp = X_0;
for j = 1:L
    W_y = sum(dW(R*(j-1)+1: R*j));
    Y_temp1 = Y_temp + f(Y_temp)*Dt + g(Y_temp)*Dt^0.5;
    Y_temp = Y_temp + f(Y_temp)*Dt + tamed_function(g(Y_temp)*W_y) + 0.5*(W_y^2-Dt)*(g(Y_temp1)-g(Y_temp))*Dt^(-0.5);
    Y(j) = Y_temp;
end

plot([0:Dt:T], [X_0, Y], 'b--*') % plot
legend('Exact Solution', 'Tamed Runge-Kutta Method', 'Location', 'northeast') % legend
hold off

D.9 A Sample Trajectory of Exact Solution and Truncated Runge-Kutta Method for Equation (5.9).

% dX(t) = 1 dt + 2 sqrt(X(t)) dW(t)
% X(t) = (W(t) + 1)^2
T = 1; N = 10^4; dt = T/N;
f = @(x) 1; g = @(x) 2*sqrt(abs(x)); X_0 = 1;
tamed_function = @(x) x/(1+x);

randn('state', 100);
dW = sqrt(dt) * randn(1, N);
W = cumsum(dW);

% Explicit Solution
X = (W+1).^2;
plot([0: dt: T], [X_0, X], 'g-')
hold on

R = 100; Dt = R*dt; L = N/R; % time discretization
% Truncated Runge-Kutta Method
Y = zeros(1, L); Y_temp = X_0;
tc = ((3*Dt^(-0.5) - 1)*0.5)^2;
for j = 1:L
    W_y = sum(dW(R*(j-1)+1: R*j));
    truncated = sign(Y_temp) * min(abs(Y_temp), tc);
    Y_temp1 = Y_temp + f(Y_temp)*Dt + g(Y_temp)*Dt^0.5;
    Y_temp = Y_temp + f(Y_temp)*Dt + g(truncated)*W_y + 0.5*(W_y^2-Dt)*(g(Y_temp1)-g(Y_temp))*Dt^(-0.5);
    Y(j) = Y_temp;
end

plot([0:Dt:T], [X_0, Y], 'b--*') % plot
legend('Exact Solution', 'Truncated Runge-Kutta Method', 'Location', 'northeast') % legend
hold off

D.10 A Sample Trajectory of Exact Solution and Tamed and Truncated Runge-Kutta Method for Equation (5.9).

% dX(t) = 1 dt + 2 sqrt(X(t)) dW(t)
% X(t) = (W(t) + 1)^2
T = 1; N = 10^4; dt = T/N;
f = @(x) 1; g = @(x) 2*sqrt(abs(x)); X_0 = 1;
tamed_function = @(x) x/(1+x);

randn('state', 100);
dW = sqrt(dt) * randn(1, N);
W = cumsum(dW);

% Explicit Solution
X = (W+1).^2;
plot([0: dt: T], [X_0, X], 'g-')
hold on

R = 100; Dt = R*dt; L = N/R; % time discretization
% Tamed and Truncated Runge-Kutta Method
Y = zeros(1, L); Y_temp = X_0;
tc = ((3*Dt^(-0.5) - 1)*0.5)^2;
for j = 1:L
    W_y = sum(dW(R*(j-1)+1: R*j));
    truncated = sign(Y_temp) * min(abs(Y_temp), tc);
    v = g(truncated) * W_y;
    Y_temp1 = Y_temp + Dt + g(Y_temp)*Dt^0.5;
    Y_temp = Y_temp + Dt + g(tamed_function(v))*W_y + 0.5*(W_y^2-Dt)*(g(Y_temp1)-g(Y_temp))*Dt^(-0.5);
    Y(j) = Y_temp;
end

plot([0:Dt:T], [X_0, Y], 'b--*') % plot
legend('Exact Solution', 'Tamed and Truncated Runge-Kutta Method', 'Location', 'northeast') % legend
hold off

D.11 Log-log Plot of Mean Square Error vs. Runtime to Equation (5.3).

a = 12; b = 18; n = 10^4;
E_mse_Euler_21 = zeros(1, b-a+1);
E_mse_Tamed_21 = zeros(1, b-a+1);
E_mse_Truncated_21 = zeros(1, b-a+1);
E_mse_TmTc_21 = zeros(1, b-a+1);

for k = a:b
    T = 1; N = 2^k;
    E_mse_tempE = 0;
    E_mse_tempTm = 0;
    E_mse_tempTc = 0;
    E_mse_tempTmTc = 0;
    for j = 1:n
        w = 0;
        intg = 0;
        Y_Euler_21 = 1;
        Y_Tamed_21 = 1;
        Y_Truncated_21 = 1;
        Y_TmTc_21 = 1;
        tc = sqrt((1/N)^(-0.2) - 1);
        % X(t) = 1 + \int_0^t (X(s) - X(s)^3) ds + \int_0^t X(s) dW(s)
        for i = 1:N
            dw = randn/sqrt(N);
            w = w + dw;
            intg = intg + exp(i/N + 2*w)/N;

            Y_Euler_21 = Y_Euler_21 + (Y_Euler_21 - Y_Euler_21^3)/N + Y_Euler_21*dw;

            v_Tamed_21 = (Y_Tamed_21 - Y_Tamed_21^3)/N;
            Y_temp1 = Y_Tamed_21 + v_Tamed_21/(1+abs(v_Tamed_21)) + Y_Tamed_21*(1/N)^0.5;
            Y_Tamed_21 = Y_Tamed_21 + v_Tamed_21/(1+abs(v_Tamed_21)) + Y_Tamed_21*dw + 0.5*(dw^2-1/N)*(Y_temp1-Y_Tamed_21)*(1/N)^(-0.5);

            c_Truncated_21 = sign(Y_Truncated_21) * min(abs(Y_Truncated_21), tc);
            Y_temp1 = Y_Truncated_21 + (c_Truncated_21 - c_Truncated_21^3)/N + Y_Truncated_21*(1/N)^0.5;
            Y_Truncated_21 = Y_Truncated_21 + (c_Truncated_21 - c_Truncated_21^3)/N + Y_Truncated_21*dw + 0.5*(dw^2-1/N)*(Y_temp1-Y_Truncated_21)*(1/N)^(-0.5);

            c_TmTc_21 = sign(Y_TmTc_21) * min(abs(Y_TmTc_21), tc);
            v_TmTc_21 = (c_TmTc_21 - c_TmTc_21^3)/N;
            Y_temp1 = Y_TmTc_21 + v_TmTc_21/(1+abs(v_TmTc_21)) + Y_TmTc_21*(1/N)^0.5; % Y_TmTc_21 here; the original listing mistakenly reused Y_Truncated_21
            Y_TmTc_21 = Y_TmTc_21 + v_TmTc_21/(1+abs(v_TmTc_21)) + Y_TmTc_21*dw + 0.5* ...
(dwˆ2-1/N) * (Y temp1-Y TmTc 21) * (1/N)ˆ(-0.5); 41 42 end 43 X = exp(0.5 * T+ w)/sqrt(1+2 * intg); 44 E mse tempE = E mse tempE + (X-Y Euler 21)ˆ2; 45 E mse tempTm = E mse tempTm + (X-Y Tamed 21)ˆ2; 46 E mse tempTc = E mse tempTc + (X-Y Truncated 21)ˆ2; 47 E mse tempTmTc = E mse tempTmTc + (X-Y TmTc 21)ˆ2; 48 end 49 E mse Euler 21(k-a+1) = E mse tempE/n; 50 E mse Tamed 21(k-a+1) = E mse tempTm/n; 51 E mse Truncated 21(k-a+1) = E mse tempTc/n; 52 E mse TmTc 21(k-a+1) = E mse tempTmTc/n; 53 fprintf('Iteration %d donenn',k); 54 fprintf('This message is sent at time %snn', datestr(now,'mm-DD ... HH:MM:SS')) 55 end 56 %% 57 A = a:b; 58 figure 59 hold on 60 set(gca, 'YScale', 'log') 61 62 loglog(2.ˆA, E mse Euler 21, 'r-- * ') 63 loglog(2.ˆA, E mse Tamed 21, 'b-- * ') 64 loglog(2.ˆA, E mse Truncated 21, 'g-- * ') 65 loglog(2.ˆA, E mse TmTc 21, 'm-- * ') 66 67 grid on 68 legend('Euler Method','Tamed Runge-Kutta Method', 'Truncated ... Runge-Kutta Method', 'Tamed and Truncated Runge-Kutta Method') % legend 69 xlabel('Runtime') 70 ylabel('Mean Square Error') 71 hold off D.12 Log-log Plot of Mean Square Error vs. Step-size to Equation (5.3). 1 a = 12; b = 18; n = 10ˆ4; 2 E mse Euler 21 = zeros(1, b-a+1); 3 E mse Tamed 21 = zeros(1, b-a+1); 4 E mse Truncated 21 = zeros(1, b-a+1); 5 E mse TmTc 21 = zeros(1, b-a+1); 6 7 for k = a:b 85 8 T = 1; N= 2ˆk; 9 E mse tempE = 0; 10 E mse tempTm = 0; 11 E mse tempTc = 0; 12 E mse tempTmTc = 0; 13 for j = 1:n 14 w = 0; 15 intg = 0; 16 Y Euler 21 = 1; 17 Y Tamed 21 = 1; 18 Y Truncated 21 =1; 19 Y TmTc 21 =1; 20 tc = sqrt((1/N)ˆ(-0.2)-1); 21 %X(t) = 1+nint 0 ˆt (X(s)-(X(s))ˆ3) n,ds+nint 0 ˆt X(s) n,dW(s) 22 for i = 1: N 23 dw = randn/sqrt(N); 24 w = w + dw; 25 intg = intg + exp(i/N+ 2 * w)/N; 26 27 Y Euler 21 = Y Euler 21 + (Y Euler 21-Y Euler 21ˆ3)/N ... +Y Euler 21 * dw; 28 29 v Tamed 21 = (Y Tamed 21-Y Tamed 21ˆ3)/N; 30 Y temp1 = Y Tamed 21 + v Tamed 21/(1+abs(v Tamed 21)) + ... 
Y Tamed 21 * (1/N)ˆ0.5; 31 Y Tamed 21= Y Tamed 21 + v Tamed 21/(1+abs(v Tamed 21)) + ... Y Tamed 21 * dw+0.5 * ... (dwˆ2-1/N) * (Y temp1-Y Tamed 21) * (1/N)ˆ(-0.5); 32 33 c Truncated 21 = sign(Y Truncated 21) * ... min(abs(Y Truncated 21), tc); 34 Y temp1 = Y Truncated 21 + ... (c Truncated 21-c Truncated 21ˆ3)/N + Y Truncated 21 * ... (1/N)ˆ0.5; 35 Y Truncated 21 = Y Truncated 21 + ... (c Truncated 21-c Truncated 21ˆ3)/N + ... Y Truncated 21 * dw+0.5 * ... (dwˆ2-1/N) * (Y temp1-Y Truncated 21) * (1/N)ˆ(-0.5); 36 37 c TmTc 21 = sign(Y TmTc 21) * min(abs(Y TmTc 21), tc); 38 v TmTc 21 = (c TmTc 21-c TmTc 21ˆ3)/N; 39 Y temp1 = Y Truncated 21 + v TmTc 21/(1+abs(v TmTc 21)) + ... Y TmTc 21 * (1/N)ˆ0.5; 40 Y TmTc 21= Y TmTc 21 + v TmTc 21/(1+abs(v TmTc 21)) + ... Y TmTc 21 * dw+0.5 * ... (dwˆ2-1/N) * (Y temp1-Y TmTc 21) * (1/N)ˆ(-0.5); 41 42 end 43 X = exp(0.5 * T+ w)/sqrt(1+2 * intg); 44 E mse tempE = E mse tempE + (X-Y Euler 21)ˆ2; 45 E mse tempTm = E mse tempTm + (X-Y Tamed 21)ˆ2; 46 E mse tempTc = E mse tempTc + (X-Y Truncated 21)ˆ2; 47 E mse tempTmTc = E mse tempTmTc + (X-Y TmTc 21)ˆ2; 48 end 86 49 E mse Euler 21(k-a+1) = E mse tempE/n; 50 E mse Tamed 21(k-a+1) = E mse tempTm/n; 51 E mse Truncated 21(k-a+1) = E mse tempTc/n; 52 E mse TmTc 21(k-a+1) = E mse tempTmTc/n; 53 fprintf('Iteration %d donenn',k); 54 fprintf('This message is sent at time %snn', datestr(now,'mm-DD ... HH:MM:SS')) 55 end 56 %% 57 figure 58 hold on 59 set(gca, 'YScale', 'log') 60 61 loglog(2.ˆ-A, E mse Euler 21, 'r-- * ') 62 loglog(2.ˆ-A, E mse Tamed 21, 'b-- * ') 63 loglog(2.ˆ-A, E mse Truncated 21, 'g-- * ') 64 loglog(2.ˆ-A, E mse TmTc 21, 'm-- * ') 65 66 grid on 67 legend('Euler Method','Tamed Runge-Kutta Method', 'Truncated ... Runge-Kutta Method', 'Tamed and Truncated Runge-Kutta Method', ... 'Location','southeast') % legend 68 xlabel('Stepsize') 69 ylabel('Mean Square Error') 70 hold off 87
Abstract
Explicit numerical methods with strong convergence guarantees, such as the tamed Euler-Maruyama method and the truncated Euler-Maruyama method, have been developed for stochastic differential equations. In this thesis, we study the Runge-Kutta method and introduce tamed and truncated Runge-Kutta methods for one-dimensional stochastic differential equations, covering both regular (globally Lipschitz continuous) coefficients and non-regular coefficients.
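The taming idea mentioned above can be sketched briefly. The following is an illustrative Python sketch (not the thesis's MATLAB code): the drift increment f(X)*dt is replaced by f(X)*dt / (1 + |f(X)*dt|), which bounds each step even when f grows super-linearly, as with the cubic drift f(x) = x - x^3 and linear diffusion g(x) = x of Equation (5.3). The function name `tamed_euler` and its parameters are chosen here for illustration.

```python
import numpy as np

def tamed_euler(x0, T, N, drift, diffusion, rng):
    """Simulate one path of the tamed Euler-Maruyama scheme.

    The drift increment drift(x)*dt is tamed to
    drift(x)*dt / (1 + |drift(x)*dt|), so a single step stays bounded
    even for super-linearly growing drift.
    """
    dt = T / N
    x = x0
    for _ in range(N):
        dw = rng.normal(0.0, np.sqrt(dt))   # Brownian increment, variance dt
        v = drift(x) * dt                   # raw drift increment
        x = x + v / (1.0 + abs(v)) + diffusion(x) * dw
    return x

# Equation (5.3): dX = (X - X^3) dt + X dW, X(0) = 1
rng = np.random.default_rng(100)
x_T = tamed_euler(1.0, 1.0, 2**10,
                  drift=lambda x: x - x**3,
                  diffusion=lambda x: x,
                  rng=rng)
print(x_T)
```

Note that the untamed Euler scheme applied to this equation can produce unboundedly large iterates once |X| overshoots, because the cubic drift term then dominates; the tamed step is bounded by 1 in absolute value plus the diffusion increment.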
Asset Metadata
Creator: Wang, Jiawei (author)
Core Title: Tamed and truncated numerical methods for stochastic differential equations
School: College of Letters, Arts and Sciences
Degree: Master of Science
Degree Program: Applied Mathematics
Publication Date: 04/07/2021
Defense Date: 03/17/2021
Publisher: University of Southern California (original), University of Southern California. Libraries (digital)
Tag: Euler-Maruyama method, OAI-PMH Harvest, Runge-Kutta method, stochastic differential equation, weak and strong approximation
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Lototsky, Sergey (committee chair), Mikulevicius, Remigijus (committee member), Zhang, Jianfeng (committee member)
Creator Email: jiaweiwd@gmail.com, jwang535@usc.edu
Permanent Link (DOI): https://doi.org/10.25549/usctheses-c89-439503
Unique identifier: UC11668569
Identifier: etd-WangJiawei-9394.pdf (filename), usctheses-c89-439503 (legacy record id)
Legacy Identifier: etd-WangJiawei-9394.pdf
Dmrecord: 439503
Document Type: Thesis
Rights: Wang, Jiawei
Type: texts
Source: University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA