MONTE-CARLO METHODS FOR FORWARD BACKWARD STOCHASTIC
DIFFERENTIAL EQUATIONS IN HIGH DIMENSIONS
by
Jose M. Villalobos
A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(APPLIED MATHEMATICS)
August 2007
Copyright 2007 Jose M. Villalobos
Dedication
To my parents Manuel and Juana Villalobos, my grandparents Maria Alvarado, Beatriz
Garcia, and Jesus Murillo.
Acknowledgements
I would like to thank Professor Jianfeng Zhang for being my advisor and for providing me with constant guidance and support throughout. I am also thankful to all my committee members, Professor Jaksa Cvitanic, Professor Baxendale, Professor Larry Goldstein and Professor Moon, for their input and participation. Furthermore, I would like to thank everyone at the Mathematics Department, including the faculty (Mikulevicius) and staff (Arnold, Chauntee, and Amy), and friends (Igor, Reza, and Coskun) for their kind helpfulness and consideration.
Table of Contents

Dedication
Acknowledgements
List of Tables
Abstract

Chapter 1: Introduction
1.1 Problem Description
1.2 Preliminaries
1.2.1 Assumptions
1.2.2 Definitions and Notation
1.2.3 Important Theorems and Lemmas
1.2.4 Some results on FBSDE's
1.2.5 Motivation in Finance

Chapter 2: Numerical Methods for FBSDE's
2.1 Introduction
2.2 PDE approach
2.3 Monte-Carlo Approach
2.3.1 Time discretization
2.4 Conditional Expectations
2.5 Algorithm
2.5.1 Markovian
2.5.2 Algorithm Summary

Chapter 3: Reflected BSDE's
3.1 American Options
3.2 Connection to obstacle problems
3.3 Time discretization

Chapter 4: Numerical Experiments
4.1 Basis functions
4.1.1 Hyper-cubes
4.1.2 Polynomials of degree two
4.1.3 Trigonometric basis
4.2 American Options
4.2.1 Bermuda max-call equity options
4.3 Parabolic PDE's
4.3.1 Burgers' Equation
4.3.2 D-dimensional Heat Equation

Chapter 5: Results
5.1 Picard iteration
5.1.1 Continuous Case
5.1.2 Time Discretization
5.1.3 Orthogonal Projections
5.1.4 Empirical Expectation (Simulations)

References
List of Tables

4.1 Non-linear American Call, N = 4
4.2 Non-linear American Call, N = 8
4.3 Reference Price for Max-Call
4.4 N = 4, D = 3
4.5 N = 8, D = 3
4.6 N = 4, D = 5
4.7 N = 8, D = 5
4.8 N = 4, D = 10
4.9 N = 8, D = 10
4.10 Burgers' Equation, t = 0.5, x = 0.2
4.11 Semi-linear, t = 0.5, x = 0.2
4.12 Heat Equation, 3-D, t = 0.9, x = 0.6
4.13 Heat Equation, 3-D, t = 0.5, x = 0.2
4.14 Heat Equation, 10-D, t = 0.4, x = 0.25
4.15 Heat Equation, 10-D, t = 0.7, x = 0.25
Abstract

We use Monte-Carlo methods to solve Decoupled Forward-Backward Stochastic Differential Equations when the dimension of the state process is higher than three. We focus on the solutions of parabolic Partial Differential Equations as well as the pricing of American type Options. Finally, we propose and prove a stopping criterion for the Picard iteration case.
Chapter 1
Introduction
1.1 Problem Description
In 1990 Pardoux and Peng [30] proved the well-posedness of nonlinear Backward Stochastic Differential Equations (BSDE's for short). Ever since, BSDE's have been extensively studied and used in many applied and theoretical areas, in particular in Mathematical Finance. The theory of BSDE's was developed rapidly during the 1990's (see [27], [2], [26]) and its analysis is well understood by now.

On the numerical side, several methods have been developed (e.g. [1], [3], [8], [7]) to solve BSDE's. Up to now, basically two types of numerical schemes have been considered. The first one is based on the four-step algorithm proposed by Ma et al. [25](1994). Numerical algorithms based on this approach have been developed by Douglas et al. [14](1996) and Milstein et al. [29](2004). The algorithm consists of solving a parabolic Partial Differential Equation (PDE) which is related to the BSDE. The second type of algorithm consists of solving the BSDE directly using Monte-Carlo simulation. The method is based on the $L^2$ regularity of the process $Z$ introduced by Zhang [38](2004). This regularity gives us the ability to create a sequence of conditional expectations, which in turn can be computed by Monte-Carlo simulation.

Even though there have been several attempts to solve BSDE's numerically ([5], [19], [22], [7]), none has solved them in dimension higher than one (with the exception of Delarue and Menozzi [13](2006)). It is in this direction that this thesis looks to make a contribution: we aim at solving BSDE's in dimension higher than three.
Throughout the thesis we use the following set-up. Let $T>0$ be a fixed (deterministic) time, and let $(\Omega,\mathcal{F},\mathbb{F},P)$ be a complete filtered probability space on which a $D$-dimensional Brownian motion $W$ is defined, such that $\mathbb{F}=\{\mathcal{F}_t\}_{0\le t\le T}$ is the natural filtration of $W$, augmented by all the $P$-null sets.

The focus of the thesis is the numerical solution of the following Decoupled Forward Backward Stochastic Differential Equation (FBSDE for short):
$$X_{k,t}=x_{k,0}+\int_0^t\mu_k(s,X_s)\,ds+\int_0^t\sigma_k(s,X_s)\,dW_s,\qquad 1\le k\le d,$$
$$Y_t=g(X)+\int_t^Tf(s,X_s,Y_s,Z_s)\,ds-\int_t^TZ_s\,dW_s,\tag{1.1}$$
where $\mu,\sigma,g,f$ are deterministic functions, and $W$ is the $D$-dimensional Brownian motion.
The rest of the thesis is organized as follows. In Chapter 1 we introduce the problem, the notation, and known results that will be used throughout the thesis. In Chapter 2 we present the numerical methods (schemes) for decoupled BSDE's. In Chapter 3 we present the extension to reflected FBSDE's (which allows us to deal with American option pricing). In Chapter 4 we present the numerical results, and finally in Chapter 5 we state and prove a stopping criterion for the Picard iteration case.
1.2 Preliminaries
In this section we present the notation used throughout the thesis, together with some useful results which are either standard or slight variations of well-known results in the literature.
1.2.1 Assumptions

(A1) For any $1\le k\le D$, the functions $(t,x)\mapsto x_kb_k(t,x)$ and $(t,x)\mapsto x_k\sigma_k(t,x)$ are uniformly Lipschitz continuous w.r.t. $(t,x)\in[0,T]\times\mathbb{R}^d$.

(A2) The driver $f$ satisfies the following continuity estimate:
$$|f(t_2,x_2,y_2,z_2)-f(t_1,x_1,y_1,z_1)|\le C_f\big(\sqrt{|t_2-t_1|}+|x_2-x_1|+|y_2-y_1|+|z_2-z_1|\big)$$
for any $(t_1,x_1,y_1,z_1),(t_2,x_2,y_2,z_2)\in[0,T]\times\mathbb{R}^d\times\mathbb{R}\times\mathbb{R}^q$. Moreover, $\sup_{0\le t\le T}|f(t,0,0,0)|<\infty$.

(A3) The terminal condition $g$ satisfies Zhang's $L^\infty$-Lipschitz condition, i.e. for any continuous functionals $x^1$ and $x^2$ one has
$$|g(x^1)-g(x^2)|\le C\sup_{0\le t\le T}|x^1_t-x^2_t|.$$
Additionally, $|g(0)|<\infty$, where $0$ is the function equal to $0$ on $[0,T]$.

(A1)-(A3) are sufficient to ensure the existence and uniqueness of a triplet $(X,Y,Z)$ solving (1.5) [25]. Notice that the $L^\infty$-Lipschitz condition allows a large class of payoff functions (specific examples are given later).
1.2.2 Definitions and Notation

Projections on function bases

• The $L^2(P)$ projection of a random variable $U$ on a family $\phi=[\phi_1,\dots,\phi_n]^*$ (considered as a random column vector) is denoted by $P_\phi(U)$. We denote the projection error by $R_\phi(U)=U-P_\phi(U)$.

• At each time $t_i$, some random variables $p_{0,i},p_{1,i},\dots,p_{q,i}$ will be used as projection bases to approximate $Y_{t_i},Z_{1,t_i},\dots,Z_{q,t_i}$ ($Z_{l,t_i}$ is the $l$-th component of $Z_{t_i}$). The projection coefficients are denoted by $\alpha_{0,i},\alpha_{1,i},\dots,\alpha_{q,i}$ (viewed as column vectors). We assume that $E|p_{l,i}|^2<\infty$ ($0\le l\le q$) and, w.l.o.g., that $E(p_{l,i}p^*_{l,i})$ is invertible, which ensures the uniqueness of the coefficients of the projections $P_{p_{l,i}}$ ($0\le l\le q$).

• We write $f_i(\alpha_{0,i},\dots,\alpha_{q,i})$ or $f_i(\alpha_i)$ for $f(t_i,X^N_{t_i},\alpha_{0,i}\cdot p_{0,i},\dots,\alpha_{q,i}\cdot p_{q,i})$, where $X^N_{t_i}$ is the Euler approximation of $X_{t_i}$.

• We write $E_i(\cdot)=E(\cdot\,|\,\mathcal{F}_{t_i})$. Let $\Delta W_i=W_{t_{i+1}}-W_{t_i}$ ($\Delta W_{l,i}$ component-wise).

• Define $v_i$ as the (column) vector given by $[v_i]^*=\big(p^*_{0,i},\,p^*_{1,i}\Delta W_{1,i}/\sqrt{\Delta t_i},\dots,p^*_{q,i}\Delta W_{q,i}/\sqrt{\Delta t_i}\big)$.

• For a vector $x$, $|x|$ denotes the usual Euclidean norm; the relevant dimension remains implicit. For an integer $H$ and $x\in\mathbb{R}^H$, we set $|x|^2_H=\frac{1}{H}\sum_{h=1}^H|x_h|^2$. For a set of projection coefficients $\alpha=(\alpha_0,\dots,\alpha_q)$, we set $|\alpha|=\max_{0\le l\le q}|\alpha_l|$.

• For a real symmetric matrix $A$, $\|A\|$ and $\|A\|_F$ are, respectively, the maximum of the absolute values of its eigenvalues and its Frobenius norm (defined by $\|A\|^2_F=\sum_{i,j}a^2_{i,j}$).

Simulations

We simulate $M$ independent paths of $(X^N_{t_i})_{0\le i\le N}$ and $(\Delta W_i)_{0\le i\le N-1}$. We denote these simulations by $[(X^{N,m}_{t_i})_{0\le i\le N}]_{1\le m\le M}$ and $[(\Delta W^m_i)_{0\le i\le N-1}]_{1\le m\le M}$.

• The values of the basis functions along these simulations are denoted by $[p^m_{l,i}=p_l(X^{N,m}_{t_i})]_{0\le l\le q,\;0\le i\le N-1,\;1\le m\le M}$.

• Analogously to $f_i(\alpha_{0,i},\dots,\alpha_{q,i})$ or $f_i(\alpha_i)$, we write $f^m_i(\alpha_{0,i},\dots,\alpha_{q,i})$ or $f^m_i(\alpha_i)$ for $f(t_i,X^{N,m}_{t_i},\alpha_{0,i}\cdot p^m_{0,i},\dots,\alpha_{q,i}\cdot p^m_{q,i})$.

• Analogously to the (column) vector $v_i$, define $v^m_i$ by $[v^m_i]^*=\big(p^{m*}_{0,i},\,p^{m*}_{1,i}\Delta W^m_{1,i}/\sqrt{\Delta t_i},\dots,p^{m*}_{q,i}\Delta W^m_{q,i}/\sqrt{\Delta t_i}\big)$.

• Define the matrices
$$V^M_i=\frac{1}{M}\sum_{m=1}^Mv^m_i[v^m_i]^*,\qquad P^M_{l,i}=\frac{1}{M}\sum_{m=1}^Mp^m_{l,i}[p^m_{l,i}]^*\quad(0\le l\le q).$$

Truncations

To ensure the stability of the algorithm we use threshold techniques, which are based on the following notation:

• Let $(\rho^N_{l,i})_{0\le l\le q,\;0\le i\le N-1}$ be $\mathbb{R}$-valued functions bounded from below by 1. Set $\rho^N_i(X^N_{t_i})=[\rho^N_{0,i}(X^N_{t_i}),\dots,\rho^N_{q,i}(X^N_{t_i})]^*$.

• Define (random) truncation functions $\hat\rho^N_{l,i}$ such that they are bounded by $2\rho^N_{l,i}(X^N_{t_i})$ and their first derivatives are bounded by 1.
1.2.3 Important Theorems and Lemmas
The following theorems and lemmas will be used frequently in the proofs of the thesis.

1.2.1 Theorem (Gronwall's Inequality). Suppose $a:[0,T]\to[0,\infty)$ satisfies $a_t\le c_1+c_2\int_0^ta_s\,ds$ for all $t$, where $c_1,c_2>0$. Then $a_t\le c_1e^{c_2t}$ for all $t\in[0,T]$.

1.2.2 Lemma (Discrete Gronwall Inequality). Let $\pi:0=t_0<t_1<\dots<t_N=T$ be a partition of $[0,T]$. If $a_i,b_i$, $i=0,1,\dots,N$, satisfy $a_N\ge0$, $b_i\ge0$ and $a_{i-1}\le(1+C\Delta t_i)a_i+b_i$ for $i=1,2,\dots,N$, then
$$\max_{0\le i\le N}a_i\le e^{CT}\Big[a_N+\sum_{i=1}^Nb_i\Big].$$

1.2.3 Lemma (Contraction property of the $L^2$-projection operator). For any random variable $X\in L^2$, we have $E|P_{p_{l,i}}(X)|^2\le E|X|^2$.

1.2.4 Lemma. For a symmetric matrix $F$, if $\|Id-F\|<1$, then
$$F^{-1}-Id=\sum_{k=1}^\infty[Id-F]^k\qquad\text{and}\qquad\|Id-F^{-1}\|\le\frac{\|F-Id\|}{1-\|F-Id\|}.$$

1.2.5 Lemma (Contraction Property of Least Squares Problems in $\mathbb{R}^M$). Consider a sequence of real numbers $(x_m)_{1\le m\le M}$ and a sequence $(v_m)_{1\le m\le M}$ of vectors in $\mathbb{R}^n$, associated to the matrix $V^M=\frac{1}{M}\sum_{m=1}^Mv_m[v_m]^*$ (with $\lambda_{\min}(V^M)>0$). Then the (unique) $\mathbb{R}^n$-valued vector $\theta_x=\arg\inf_\theta|x-\theta\cdot v|^2_M$ is given by
$$\theta_x=\frac{[V^M]^{-1}}{M}\sum_{m=1}^Mv_mx_m.$$
The map $x\mapsto\theta_x$ is linear and we have the inequality
$$\lambda_{\min}(V^M)\,|\theta_x|^2\le|\theta_x\cdot v|^2_M\le|x|^2_M.\tag{1.2}$$
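To make Lemma 1.2.5 concrete, here is a small Python sketch (not part of the original thesis) that forms the empirical matrix $V^M$ and solves the resulting normal equations; the function and variable names are our own.

```python
import numpy as np

def ls_coefficients(v, x):
    """Least-squares coefficients of Lemma 1.2.5.

    v : (M, n) array holding one vector v_m per simulated path
    x : (M,) array of responses x_m
    Returns theta_x minimizing (1/M) * sum_m |x_m - theta . v_m|^2.
    """
    M = v.shape[0]
    VM = (v.T @ v) / M               # V^M = (1/M) sum_m v_m v_m^*
    rhs = (v.T @ x) / M              # (1/M) sum_m v_m x_m
    return np.linalg.solve(VM, rhs)  # theta_x = [V^M]^{-1} (1/M) sum_m v_m x_m

# Tiny synthetic check: the recovered theta is close to the true coefficients.
rng = np.random.default_rng(0)
v = rng.normal(size=(1000, 3))
x = v @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=1000)
print(ls_coefficients(v, x))
```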
1.2.4 Some results on FBSDE's

1.2.6 Theorem (Pardoux-Peng 1990 [30]). For all $\xi\in L^2(\mathcal{F}_T)$ and all $f$ satisfying (A2), the following BSDE:
$$Y_t=\xi+\int_t^Tf(s,Y_s,Z_s)\,ds-\int_t^TZ_s\,dW_s$$
has a unique adapted solution $(Y,Z)$.

1.2.7 Lemma. Suppose that $\tilde f$ satisfies (A2) and that $\tilde f(t,0,0)\in L^2(\mathbb{F})$. For any $\xi\in L^2(\mathcal{F}_T)$, let $(Y,Z)$ be the solution to
$$Y_t=\xi+\int_t^T\tilde f(s,Y_s,Z_s)\,ds-\int_t^TZ_s\,dW_s.$$
Then for any $p\ge2$ there exists a constant $C_p$, depending only on $p$, the Lipschitz constant, and $T$, such that
$$E\Big[\sup_{0\le t\le T}|Y_t|^p+\Big(\int_0^T|Z_t|^2\,dt\Big)^{p/2}\Big]\le C_p\,E\Big[|\xi|^p+\int_0^T|\tilde f(t,0,0)|^p\,dt\Big],$$
and
$$E[|Y_t-Y_s|^p]\le C_p\,E\Big\{\big[|\xi|^p+\sup_{0\le t\le T}|\tilde f(t,0,0)|^p\big]\,|t-s|^{p-1}+\Big(\int_0^T|Z_t|^2\,dt\Big)^{p/2}\Big\}.$$
1.2.8 Theorem (Zhang 2004 [38]). Assume (A1)-(A3) hold true and that $Z$ is càdlàg (right continuous with left limits). Let $\pi$ be a partition of $[0,T]$; then the following estimate holds:
$$\sum_{i=1}^NE\int_{t_{i-1}}^{t_i}\big[|Z_s-Z_{t_{i-1}}|^2+|Z_s-Z_{t_i}|^2\big]\,ds\le C|\pi|,$$
where $C>0$ is a constant depending only on $T$, but independent of the partition $\pi$.

1.2.9 Lemma ([38]). Assume (A1)-(A3) hold true; then for all $p\ge2$ there exists a constant $C_p$, depending only on $T$ and $p$, such that
$$\|Z_t\|_p\le C_p\qquad a.e.\ t\in[0,T].$$
Moreover, for any partition $\pi$ we have the following estimate:
$$\max_{1\le i\le n}\sup_{t\in(t_{i-1},t_i]}E\big\{|X_t-X_{t_{i-1}}|^2+|Y_t-Y_{t_{i-1}}|^2\big\}\le C|\pi|,\tag{1.3}$$
where $C>0$ is a constant depending only on $T$, but independent of the partition $\pi$.
In order to see the relationship between parabolic PDE's and BSDE's, we consider the following parabolic PDE (in one dimension, for ease of notation):
$$u_t+\tfrac12\sigma^2(t,x)u_{xx}+\mu(t,x)u_x+f(t,x,u,u_x\sigma)=0,\qquad u(T,x)=g(x),\tag{1.4}$$
and the decoupled FBSDE:
$$X^{t,x}_s=x+\int_t^s\mu(r,X^{t,x}_r)\,dr+\int_t^s\sigma(r,X^{t,x}_r)\,dW_r,$$
$$Y^{t,x}_s=g(X^{t,x}_T)+\int_s^Tf(r,X^{t,x}_r,Y^{t,x}_r,Z^{t,x}_r)\,dr-\int_s^TZ^{t,x}_r\,dW_r,\qquad\forall s\in[t,T],\tag{1.5}$$
where the superscripts $t,x$ indicate the dependence of the solution on the initial date and value $(t,x)$; they will be omitted when the context is clear.
The following theorem shows the connection between FBSDE (1.5) and PDE (1.4).

1.2.10 Theorem ([31]). If PDE (1.4) has a classical solution $u$, then $Y_t\triangleq u(t,X_t)$ and $Z_t\triangleq u_x(t,X_t)\sigma(t,X_t)$ solve FBSDE (1.5). On the other hand, if $\mu,f,\sigma$ and $g$ satisfy (A2), then (1.5) has an $\mathbb{F}$-adapted solution and the function $u(t,x)\triangleq Y^{t,x}_t$ is the unique viscosity solution to (1.4).
1.2.5 Motivation in Finance
One of the main applications of BSDE's has been in Mathematical Finance. In a financial market, a FBSDE has the following interpretation: the process $X$ represents the underlying asset price(s); $g(X_T)$ is the terminal pay-off; $Y$ is the wealth process or security price; and $Z$ is the hedging strategy, or stock portfolio.

In order to make a clearer connection between stochastic differential equations and finance, we consider the standard Black and Scholes market model [6], [21]:
$$dS^0_t=S^0_tr_t\,dt,\qquad S^0_0=s\qquad\text{(bond / money market)},$$
$$dS^i_t=S^i_t\Big[\mu^i_t\,dt+\sum_{j=1}^d\sigma^{i,j}_t\,dW^j_t\Big],\qquad S^i_0=s^i\qquad\text{(stocks)},\tag{1.6}$$
where:
• $S^0_t$, $S^i_t$: prices of the bond and the $i$-th stock (per share) at time $t$.
• $r_t$: interest rate at time $t$.
• $\mu^i_t$: appreciation rates of the market at time $t$.
• $[\sigma^{i,j}_t]$: volatility matrix of the market at time $t$.
• $V_t$: dollar amount of the total wealth of an investor at time $t$.
• $\pi^i_t$: dollars invested in the $i$-th stock at time $t$, $i=1,\dots,d$.
• $V_t-\sum_{i=1}^d\pi^i_t$: dollar amount in the bond at time $t$.
• $C_t$: cumulated consumption at time $t$.
Given a terminal pay-off $g(S_T)$, the calculation of the option price and stock (hedging) portfolio leads to the following linear (decoupled) FBSDE:
$$dS^i_t=S^i_t\Big\{\mu^i_t\,dt+\sum_{j=1}^d\sigma^{i,j}_t\,dW^j_t\Big\},\qquad S^i_0=s^i,$$
$$dV_t=\big[r_tV_t+\pi_t(\mu_t-r_t\mathbf{1})\big]\,dt+\pi_t\sigma_t\,dW_t,\qquad V(T)=g(S_T),\tag{1.7}$$
where $\mathbf{1}=(1,\dots,1)^*$, $\pi$ is a row vector and $\mu$ a column vector.

It is easy to see that $(X,Y)$ and $(S,V)$ represent the same equation. Also, $\pi\sigma$ can be identified as $Z$; since $\sigma$ is assumed to be known, finding $\pi$ is equivalent to finding $Z$.
The following are some typical pay-off functions; here $K$ denotes the strike price:
• $g(S)=(S_T-K)^+$: European option;
• $g(S)=\max_{0\le t\le T}S_t$: look-back option;
• $g(S)=\big(\frac1T\int_0^TS_t\,dt-K\big)^+$: Asian option;
• $\xi=\mathbf{1}_{\{S_T>K\}}$: digital option;
• $\xi=(S_T-K)^+\,\mathbf{1}_{[\max_{0\le t\le T}S_t\le h]}$: European barrier option;
• $\xi=(S_\tau-K)^+$: American option ($\tau$ a stopping time).
Other applications of BSDE's to mathematical finance include: connections with recursive utilities of Duffie and Epstein [15](1992), the connection between BSDE's and risk measures [32](2003), g-expectations as a pricing rule [33](2004), and hedging options for large investors [11](1996).

1.2.11 Remark. BSDE's also have very useful applications in stochastic control and differential games [16](1997), [20](1995), for constructing $\Gamma$-martingales on manifolds with prescribed limits [12](1995), and in providing probabilistic formulas for solutions of systems of quasi-linear partial differential equations [31].
Chapter 2
Numerical Methods for FBSDE’s
2.1 Introduction
In the last 15 years there have been several attempts to solve BSDE's numerically. Bally [3](1997) used a Poisson net, which allows him to avoid the unknown regularity of the process $Z$; Bally et al. [4](2003) and Chevance [8](1997) used the quantization approach; Bouchard and Touzi [7](2004) based their scheme on Malliavin calculus; and Ma et al. [24](2002) replaced the Brownian motion by a scaled random walk.

2.1.1 Remark. The main drawback of the above numerical schemes is the complexity of their implementation; that is, they are very unpractical and costly (in computer time).

In recent times, two (much more practical) schemes have been proposed by Gobet et al. [19](2005) (see also [22]) and Bender and Denk [5](2005). Up to now, two different approaches have been developed to solve BSDE's. The first one is the Partial Differential Equation (PDE) approach, which is based on the four-step scheme proposed by Ma, Protter, and Yong [25](1994). Douglas et al. [14](1996) as well as Milstein et al. [29](2004) developed numerical schemes based on this approach. The main idea is to solve a parabolic PDE which is related to the BSDE. The second approach consists of solving the FBSDE directly using the Least Squares Monte-Carlo method proposed by Longstaff and Schwartz [23](2001).
2.2 PDE approach
The method consists of solving the related PDE by standard numerical PDE methods. The two methods commonly used are finite differences [34] and finite elements, both of which require space and time discretizations (which makes them very costly in high dimensions). The advantages of this method are that it solves the equation on the whole space and that the rate of convergence is good (provided that $u$ is smooth enough). However, this efficiency holds only in low dimensions ($d\le3$). This is a weakness, since in mathematical finance problems the dimension (e.g. the number of stocks in a basket) is usually much higher than 3. In addition, the method requires that a smooth solution to the PDE exists, which is generally not the case.

2.3 Monte-Carlo Approach

The second type of scheme is based on Monte-Carlo simulation. This method has a slower rate of convergence than the PDE approach (for dimension less than 3), but it does not require a space discretization and its rate of convergence ($\frac{1}{\sqrt{M}}$) is relatively insensitive to the dimension of the problem. This is an advantage over the PDE method, and hence it is an obvious candidate for mathematical finance problems.

The main disadvantage of the Monte-Carlo method is that the solution is random and is given at a single point.
2.3.1 Time discretization
Euler Scheme for X
The forward process is simulated using the standard Euler scheme [18]. Let $\pi:0=t_0<t_1<\dots<t_N=T$ be a partition of $[0,T]$. Define $\pi(t)=t_{i-1}$ for $t\in[t_{i-1},t_i)$. Let $X^N$ be the solution to the following forward SDE:
$$X^N_t=x+\int_0^t\mu(\pi(s),X^N_{\pi(s)})\,ds+\int_0^t\sigma(\pi(s),X^N_{\pi(s)})\,dW_s,\tag{2.1}$$
and let $\hat X^N$ be the corresponding step process defined by $\hat X^N_t\triangleq X^N_{\pi(t)}$.

The two following results describe the regularity of the process $X$.

2.3.1 Lemma. Assume that $\mu$ and $\sigma$ satisfy (A1)-(A3). Then for $X$ and $X^N$ there exists a constant $C$, depending only on $T$, such that the following estimates hold:
$$E\big[\sup_{0\le t\le T}|X^N_t|^4\big]\le C(1+|x|^4),\qquad E\big[\sup_{0\le t\le T}|X_t-X^N_t|^2\big]\le C(1+|x|^2)|\pi|.$$

2.3.2 Theorem. Assume that $\mu$ and $\sigma$ satisfy (A1)-(A3). Then for $X$ and $X^N$ there exists a constant $C$, depending only on $T$, such that the following estimates hold:
$$\sup_{0\le t\le T}E\big[|X_t-\hat X^N_t|^2\big]\le C(1+|x|^2)|\pi|,\qquad E\big[\sup_{0\le t\le T}|X_t-\hat X^N_t|^2\big]\le C(1+|x|^2)|\pi|\log\frac{1}{|\pi|}.$$

2.3.3 Remark. In order to keep the price of the stock positive, we use the above Euler scheme on $\log X$ instead of $X$ [19].

2.3.4 Remark. Since the solution of a PDE can be negative, we use the regular Euler scheme when solving PDE's.
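As an illustration of the forward step, the following Python sketch (ours, not the thesis' Matlab code) implements the equidistant Euler scheme (2.1) and the log-Euler variant of Remark 2.3.3; the coefficient callables mu and sigma are hypothetical placeholders.

```python
import numpy as np

def euler_paths(x0, mu, sigma, T, N, M, D, rng):
    """Euler scheme (2.1): simulate M paths of dX = mu dt + sigma dW in R^D.

    mu, sigma : callables (t, x) -> arrays broadcastable to shape (M, D).
    Returns X of shape (N+1, M, D) and the increments dW of shape (N, M, D).
    """
    dt = T / N
    X = np.empty((N + 1, M, D))
    X[0] = x0
    dW = np.sqrt(dt) * rng.standard_normal((N, M, D))
    for k in range(N):
        X[k + 1] = X[k] + mu(k * dt, X[k]) * dt + sigma(k * dt, X[k]) * dW[k]
    return X, dW

def log_euler_gbm(s0, r, delta, vol, T, N, M, D, rng):
    """Remark 2.3.3: applying Euler to log S keeps a geometric stock price positive."""
    dt = T / N
    dW = np.sqrt(dt) * rng.standard_normal((N, M, D))
    increments = (r - delta - 0.5 * vol ** 2) * dt + vol * dW
    logS = np.log(s0) + np.concatenate(
        [np.zeros((1, M, D)), np.cumsum(increments, axis=0)])
    return np.exp(logS), dW
```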
Backward Euler for BSDE’s
In order to solve for $Y$ and $Z$ in (1.5), we follow the time discretization proposed by Zhang [38](2004). Let $Y^N_{t_N}\triangleq g(X^N_{t_N})$ and $Z^N_{t_N}\triangleq0$; then, for $i=N-1,N-2,\dots,0$:
$$Z^N_{t_i}\triangleq\frac{1}{\Delta t_i}E_{t_i}\big[Y^N_{t_{i+1}}\Delta W_i\big],$$
$$Y^N_{t_i}\triangleq E_{t_i}\big[Y^N_{t_{i+1}}+\Delta t_i\,f(t_i,X^N_{t_i},Y^N_{t_{i+1}},Z^N_{t_i})\big].$$
Hence, solving for $(Y,Z)$ boils down to the computation of the conditional expectations.
2.4 Conditional Expectations
Two basic approaches have been used to compute the above conditional expectations. Bouchard and Touzi [7](2004) used the Malliavin calculus approach, while Gobet et al. [19](2005) and Bender et al. [5](2005) used the Least Squares Monte-Carlo (LSM) method of Longstaff and Schwartz [23].

2.4.1 Remark. In their paper, Longstaff and Schwartz [23] did not provide a theoretical justification for their algorithm. This was done later by Clement et al. [9](2002).
2.5 Algorithm
In this section we present the main algorithm that will be used to solve the FBSDE (1.5). We follow the approaches proposed by Gobet et al. [19] and Lemor et al. [22].
2.5.1 Markovian
Recall that in the Markovian setting we have $Y^N_{t_i}=u^N_i(X^N_{t_i})$ and $Z^N_{t_i}=v^N_i(X^N_{t_i})$ for some (unknown, but) deterministic functions $u^N$ and $v^N$.

In order to compute the conditional expectations, the idea is the following. First choose some basis functions (more on this later) $e_1(x),\dots,e_k(x)$. Then we project $u^N_i,v^N_i$ onto the space spanned by these bases, that is,
$$u^N_i(x)\approx\sum_{j=1}^k\alpha^N_je_j(x),\qquad v^N_i(x)\approx\sum_{j=1}^k\beta^N_je_j(x),$$
and finally we use Least Squares Monte-Carlo (LSM) simulation to find the coefficients $\alpha^N_j,\beta^N_j$ which minimize the squared error.

2.5.1 Remark. For numerical stability we use truncation functions similar to the ones used in [19], [22], and [5].
2.5.2 Algorithm Summary
In this section we summarize the details of the numerical scheme.

• Choose the function basis and set $\Delta t_k=t_{k+1}-t_k$, with $t_k=kT/N$, where $N$ is the number of time steps.

• Simulate $M$ paths of the Brownian motion $W$ using some random number generator (e.g. Matlab, C++) in the following manner: $\Delta W^m_k=\sqrt{\Delta t_k}\,S$, $k=1,\dots,N$ and $m=1,\dots,M$, where $S$ is drawn from a standard normal distribution.

• The forward component $X_t$ is approximated using the standard Euler scheme with an equidistant time step. The approximation is defined by $X^{N,m}_0=x_0$ and
$$X^{N,m}_{t_{k+1}}=X^{N,m}_{t_k}+b(t_k,X^{N,m}_{t_k})\Delta t_k+\sigma(t_k,X^{N,m}_{t_k})\Delta W^m_k.\tag{2.2}$$

2.5.2 Remark. There are other methods (more accurate, i.e. with a better rate of convergence) to simulate the state process $X$ besides the Euler scheme; the Milstein scheme is one such example, but it is more costly.

(Least Squares Section) The backward component is approximated in a backward manner (a Python sketch follows these steps):

(a) Set $Y^{N,m}_{t_N}=g(X^{N,m}_T)$ and $Z^{N,m}_{t_N}=0$. Assume $Y^{N,m}_{t_{k+1}}$ and $Z^{N,m}_{t_{k+1}}$ have been obtained.

(b) Set $\alpha_k=\alpha_{l,k}$ as the argmin in $(\alpha_0,\dots,\alpha_l)$ of
$$\frac1M\sum_{m=1}^M\Big(Y^{N,m}_{t_{k+1}}-\alpha_0\cdot p^m_{0,k}+\Delta t_kf^m_k(\alpha_k)-\sum_{l=1}^q\alpha_l\cdot p^m_{l,k}\Big)^2.$$

(c) Set $Y^{N,m}_{t_k}=\alpha_{0,k}\cdot p^m_{0,k}$ and $Z^{N,m}_{l,t_k}=\alpha_{l,k}\cdot p^m_{l,k}$.

(d) Iterate for $k=N-1,\dots,1$.
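The following self-contained Python sketch is one plausible reading of steps (a)-(d); it is not the thesis' own code. It regresses at each step on a user-supplied basis matrix, uses the estimator $Z_{t_k}\approx\frac{1}{\Delta t}E_{t_k}[Y_{t_{k+1}}\Delta W_k]$ from Section 2.3.1, and, for brevity, treats the driver explicitly rather than solving the implicit least-squares problem in $\alpha$.

```python
import numpy as np

def lsm_bsde(X, dW, g, f, basis, dt):
    """Backward least-squares Monte-Carlo for a decoupled FBSDE (scalar Y, Z).

    X     : (N+1, M) simulated forward paths;  dW : (N, M) Brownian increments
    g     : terminal condition;  f(t, x, y, z) : driver
    basis : callable x -> (M, q) regression matrix (choice is up to the user)
    Returns an estimate of Y_0 (mean of the time-0 regressed values).
    """
    N, M = dW.shape
    Y = g(X[-1])                                   # (a): Y_{t_N} = g(X_T)
    for k in range(N - 1, -1, -1):                 # (d): iterate backwards
        P = basis(X[k])                            # basis values p_{l,k}(X^m_{t_k})
        # Z_{t_k}: regress Y_{t_{k+1}} * dW_k / dt on the basis
        coef_z, *_ = np.linalg.lstsq(P, Y * dW[k] / dt, rcond=None)
        Z = P @ coef_z
        # (b)-(c): regress the one-step backward value to get Y_{t_k}
        target = Y + dt * f(k * dt, X[k], Y, Z)
        coef_y, *_ = np.linalg.lstsq(P, target, rcond=None)
        Y = P @ coef_y
    return Y.mean()
```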
Chapter 3
Reflected BSDE’s
In this chapter we focus on the case where the solution is forced to stay above a certain stochastic process, called an obstacle. In this case the BSDE is said to have a reflection; that is, there is an increasing process that pushes the solution upward every time it reaches the obstacle. Such equations are called Reflected BSDE's (RBSDE's for short).

A reflected BSDE has the following form:
$$X_t=x_0+\int_0^t\mu(r,X_r)\,dr+\int_0^t\sigma(r,X_r)\,dW_r,$$
$$Y_t=g(X_T)+\int_t^Tf(r,X_r,Y_r,Z_r)\,dr-\int_t^TZ_s\,dW_s+L_T-L_t,$$
$$Y_t\ge S_t,\quad t\ge0;\qquad\int_0^T[Y_t-S_t]\,dL_t=0,\tag{3.1}$$
where $\mu,\sigma,g,f$ are deterministic functions and $W$ is the Brownian motion. $S_t$ is the reflecting barrier, and $L$ is the reflecting process that keeps the solution $Y$ from going below the barrier at each time $t$. In particular, we are interested in the case $S_t=h(t,X_t)$, where $h$ is a deterministic function satisfying
$$h(T,x)\le g(x),\qquad\forall x\in\mathbb{R}^d.$$

Reflected BSDE's have been studied by several authors (see [28], [17], [10] to mention a few). The main motivation to study such equations is their use in solving the hedging problem of American options [16]. Other applications include solutions to optimal stopping time problems and obstacle problems in PDE's.
3.1 American Options
An American call option, unlike a European one, can be exercised at any time during the time interval $[0,T]$. Similarly to (1.7), given a pay-off $g(\cdot)$, solving for the option price and the hedging portfolio is equivalent to solving the following RBSDE:
$$X_t=x_0+\int_0^tb(s,X_s)\,ds+\int_0^t\sigma(s,X_s)\,dW_s,$$
$$Y_t=g(X_T)+\int_t^Tf(s,X_s,Y_s,Z_s)\,ds-\int_t^TZ_s\,dW_s+L_T-L_t,$$
$$Y_t\ge(X_t-K)^+,\quad t\ge0;\qquad\int_0^T\big[Y_t-(X_t-K)^+\big]\,dL_t=0,\tag{3.2}$$
where $X_t$ represents the price of the stock, $Y_t$ the option price, and $Z_t$ the hedging portfolio. The pay-off is $(X_\tau-K)^+$, where $\tau$ is the optimal stopping time, i.e. $\tau\triangleq\inf\{t:Y_t=h(t,X_t)\}$.
3.2 Connection to obstacle problems
Similarly to the connection between PDE's and FBSDE's (presented earlier, [31]), there exists a connection between RBSDE's and obstacle problems. Consider the following obstacle problem (PDE):
$$\max\big\{u_t+\tfrac12\sigma^2(t,x)u_{xx}+\mu(t,x)u_x+f(t,x,u,u_x\sigma),\;h(t,x)-u(t,x)\big\}=0,\qquad u(T,x)=g(x).\tag{3.3}$$
Then there is a connection analogous to the one for RBSDE (3.1): the solution to (3.1) can be found by PDE methods, and the solution to (3.3) can be obtained by a probabilistic approach.
3.3 Time discretization
The time discretization of the backward component of (3.1) is quite similar to that of (1.5). We follow the time discretization proposed by Ma and Zhang [28](2005). Let $Y^N_{t_N}\triangleq g(X^N_{t_N})$ and $Z^N_{t_N}\triangleq0$; then, for $i=N-1,N-2,\dots,0$:
$$Z^N_{t_i}\triangleq\frac{1}{\Delta t_i}E_{t_i}\big[\tilde Y^N_{t_{i+1}}\Delta W_i\big],$$
$$\tilde Y^N_{t_i}\triangleq E_{t_i}\big[Y^N_{t_{i+1}}+\Delta t_i\,f(t_i,X^N_{t_i},Y^N_{t_{i+1}},Z^N_{t_i})\big],$$
$$Y^N_{t_i}\triangleq\max\big(\tilde Y^N_{t_i},\,h(t_i,X^N_{t_i})\big);$$
and, letting $L^\pi_0=0$, for $t\in[t_{i-1},t_i)$ define
$$L^N_t\triangleq L^N_{t_i}\triangleq\sum_{j=1}^i\big(Y^\pi_{t_{j-1}}-\tilde Y^N_{t_{j-1}}\big).$$
3.3.1 Remark. The process $X$ is discretized as before.

3.3.2 Remark. The computation of the conditional expectations is exactly the same as before.

3.3.3 Remark. In the Markovian setting we have, once again, the relationships $Y_t=u(t,X_t)$ and $Z_t=v(t,X_t)$ for some deterministic functions $u,v$.
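Relative to the plain BSDE sketch after Section 2.5.2, the only algorithmic change the reflection introduces is a pointwise maximum against the obstacle after the $Y$-regression. A minimal Python fragment (ours; names hypothetical):

```python
import numpy as np

def reflect(y_tilde, t, x, h):
    """Reflection step of Section 3.3: Y_{t_i} = max(Ytilde_{t_i}, h(t_i, X_{t_i})).

    y_tilde : regressed continuation values at t_i (one per simulated path)
    h       : obstacle, e.g. h = lambda t, x: np.maximum(x - K, 0.0)
              for an American call with strike K (K assumed defined).
    """
    return np.maximum(y_tilde, h(t, x))
```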
Chapter 4
Numerical Experiments
In this chapter we present numerical examples to test the algorithm. First we look into the pricing of American and Bermuda-type options, and then we present results for parabolic PDE's.
4.1 Basis functions
In this section we define the basis functions that will be used in the least squares regression (LSM) problems to find the projection coefficients. In the option pricing case we used hyper-cubes and polynomials up to degree two. For the parabolic PDE case we used a trigonometric basis as well as a polynomial one.
4.1.1 Hyper-cubes
Hyper-cubes (HC) are the primary basis used by Gobet et al. [19], [22]. Hyper-cubes are defined in the following way:
• $p^m_{l,k}=p_{l,k}(X^m_{t_k})$, $l=0,1,\dots,q$, does not depend on $l$ or $k$.
• $D=\bigcup_iD_i$, where $D_i=(X_0-R+i\delta,\;X_0-R+(i+1)\delta]$.
• Define the $p^m_{l,k}$ as the indicator functions of this set of intervals.

4.1.1 Remark. HC are somehow equivalent to a discretization of the state space and are actually very costly in high dimensions; hence HC is not the optimal basis to choose.
4.1.2 Polynomials of degree two
This type of basis includes polynomials up to degree two (i.e. linear and quadratic terms as well as all possible combinations). The pay-off function can also be included for accuracy. For example, in dimension two the basis could be:
$$p^m_{0,k}=\big(1,\;x^m_1,\;x^m_2,\;(x^m_1)^2,\;(x^m_2)^2,\;(\max(x^m_1,x^m_2)-K)^+\big).$$

4.1.2 Remark. For simplicity we use the same basis for $Y$ and $Z$, but we could have a different basis for each of them. In general, when two different bases are used the convergence is faster, as noted in [19] and [5].
4.1.3 Trigonometric basis
This type of basis is very similar to the polynomial basis, except that the terms used are trigonometric. A typical example could be
$$p_{0,k}=\big(1,\;\sin(x_1),\;\sin(x_2),\;\sin(x_1x_2)\big).$$
This particular basis is ideal for parabolic PDE's with periodic boundary conditions.
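For illustration, the degree-two basis of Section 4.1.2 (including the max-call pay-off term) can be assembled in Python as follows; this helper is ours, not the thesis'.

```python
import numpy as np

def poly2_basis(X, K):
    """Degree-two polynomial basis for a two-dimensional sample X of shape (M, 2).

    Columns: 1, x1, x2, x1^2, x2^2, and the max-call pay-off term.
    """
    x1, x2 = X[:, 0], X[:, 1]
    payoff = np.maximum(np.maximum(x1, x2) - K, 0.0)
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, payoff])
```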
4.2 American Options
In order to establish the accuracy of the algorithm, we first consider the case of different interest rates, that is, the borrowing rate is greater than the lending rate (a true financial example).

Non-Linear American Call
• $R>r$, where $R$ is the borrowing rate and $r$ the lending rate.
• The driver has the following form:
$$f(t,x,y,z)=\Big(ry-\frac{(\mu-r)z}{\sigma}\Big)-\Big(y-\frac{z}{\sigma}\Big)^-(R-r).$$
• $K=100$, $\mu=0.05$, $r=0.02$, $R=0.04$, $\sigma=0.2$, $X(0)=100$, and $T=0.25$.
• Pay-off function: $\max(X_T-K,0)$.
• Reference price: $4.47 [19].

Here $\sigma$ represents the volatility of the stock, $\mu$ the drift of the stock, $X(0)$ the initial stock price, $K$ the strike price (the future stock price agreed upon today), and $T$ the terminal time (option expiration date). The numerical results are presented in Table 4.1 and Table 4.2 for $N=4$ (number of time steps) and $N=8$, respectively. The number in parentheses is the empirical standard deviation obtained from running the algorithm 50 times.

Table 4.1: Non-linear American Call, N = 4
Time (<1s) (<1s) (<1s) (1s)
M 400 800 1600 3200
Price 4.320(.26) 4.432(.12) 4.445(.07) 4.494(.05)

It can be seen from Table 4.1 and Table 4.2 that, for a fixed number of steps, as the number of simulations $M$ increases the price gets closer to the reference price while the empirical variance decreases.

Table 4.2: Non-linear American Call, N = 8
Time (<1s) (1s) (2s) (4s)
M 400 800 1600 3200
Price 4.301(.25) 4.440(.10) 4.474(.08) 4.484(.05)
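For concreteness, here is a direct Python transcription (ours, not the thesis') of the nonlinear driver above, with the listed parameters:

```python
import numpy as np

# Driver of the different-interest-rates call:
# f(t, x, y, z) = (r*y - (mu - r)*z/sigma) - (y - z/sigma)^- * (R - r)
mu, r, R, sigma = 0.05, 0.02, 0.04, 0.2

def driver(t, x, y, z):
    neg_part = np.maximum(-(y - z / sigma), 0.0)  # (y - z/sigma)^-
    return r * y - (mu - r) * z / sigma - neg_part * (R - r)
```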
4.2.1 Bermuda max-call equity options
In this section we look into multi-dimensional problems. As reference values we use the example of Anderson and Broadie [1]. We price equity options in a multi-asset Black-Scholes framework (the equity pays a dividend $\delta$). We assume that the risk-neutral dynamics of the $D$ assets follow correlated geometric Brownian motions, i.e.
$$\frac{dX^i_t}{X^i_t}=(r-\delta_i)\,dt+\sigma_i\,dW^i_t,$$
where the $W^i_t$, $i=1,\dots,n$, are standard Brownian motions and the instantaneous correlation of $W^i$ and $W^j$ is $\rho_{ij}$. For simplicity we take $\delta_i=\delta$ and $\rho_{ij}=\rho$ for all $i,j=1,\dots,n$, $i\ne j$.

1. Max-call equity option on three assets (D = 3)

We start by pricing an option on the maximum of three assets. We use the following parameters (here $K,\mu,r,R,\sigma,X(0),T$ have the same meaning as before). We take $\delta=0.10$ (which means $\mu=-0.05$), and for computational simplicity we take $\rho=0$.
• $K=100$, $\mu=-0.05$, $r=0.05$, $R=0.05$, $\sigma=0.2$, $X(0)=100$, and $T=3$.
• Pay-off function: $(\max(X^1_t,\dots,X^D_t)-K)^+$.

In Table 4.3 we list the reference values for 3 and 5 dimensions [1]. It can be seen from this table that the Binomial Tree Method [36] is unpractical for dimension higher than three, and that the algorithm of Anderson and Broadie is unpractical for dimension higher than 5.

4.2.1 Remark. It took Anderson and Broadie more than 20 hours to obtain each of these results. This is highly unpractical in financial mathematics, where results are needed much faster than that.

Table 4.3: Reference Price for Max-Call
Assets X(0) 95% CI Binomial
3 100 [18.661, 18.728] 18.69
5 100 [26.109, 26.292] -
In Table 4.4 and Table 4.5 we present the results for 4 and 8 time steps, respectively. As in the previous example, it can be seen that for fixed $N$, as the number of simulations $M$ increases, the empirical variance decreases. It is also important to notice that the computation time is much faster than that of Anderson and Broadie, and that in this case the Binomial Method is not even applicable.

Table 4.4: N = 4, D = 3
Time (3s) (6s) (12s)
M 4000 8000 16000
Price 20.596(.37) 18.639(.29) 18.623(.19)

Table 4.5: N = 8, D = 3
Time (6s) (12s) (24s)
M 4000 8000 16000
Price 18.265(.42) 18.365(.27) 18.721(.18)
2. Max-call equity option on five assets (D = 5)

Now we consider the case when the dimension of the problem is higher than three, beginning with five dimensions. The parameters used are the same as in the previous case. From Table 4.6 and Table 4.7 we can see a similar behavior as in the three-dimensional case. The main thing to notice is that the computation time increases with the number of assets; however, the increase is not significantly bigger than that of three dimensions.

Table 4.6: N = 4, D = 5
Time (3.5s) (7s) (14s)
M 4000 8000 16000
Price 26.8232(.49) 26.4565(.34) 26.2893(.15)

Table 4.7: N = 8, D = 5
Time (7s) (14s) (28s)
M 4000 8000 16000
Price 26.7861(.44) 26.2427(.27) 26.2223(.16)

4.2.2 Remark. Notice that as the dimension and the number of time steps increase, the variance still decreases, but at a slower rate than in dimension three. This is due to the fact that the global error depends on the number of time steps; hence the bigger $N$ is, the more simulations are needed to keep the same accuracy.
3. Max-call equity option on ten assets (D = 10)

To conclude this section, we price the option on the maximum of ten assets, using the same parameters as in the two previous examples. Unlike the two previous cases, there is no reference value to compare our results to. However, the behavior of the numerical solution is consistent with the two previous results, hence we can infer that our results are fairly accurate. The results are presented in Table 4.8 and Table 4.9.

Table 4.8: N = 4, D = 10
Time (6s) (12s) (24s)
M 4000 8000 16000
Price 41.1654(.45) 41.2342(.33) 41.2893(.16)

Table 4.9: N = 8, D = 10
Time (12s) (24s) (47s)
M 4000 8000 16000
Price 40.9872(.49) 41.2532(.29) 41.2824(.18)

4.2.3 Remark. The computation time for ten dimensions was fairly small given that we used the Matlab programming language. If a faster programming language were used (e.g. C++), the method would be much faster and we could have dealt with much higher dimensions.
4.3 Parabolic PDE’s
In this section we change our focus from mathematical finance to solving parabolic partial differential equations. We focus on the solution of Burgers' equation and the heat equation.
4.3.1 Burgers' Equation

1. We begin by testing the algorithm on the generalized one-dimensional Burgers-type equation ($\mu$ is a constant):
$$u_t(t,x)+\mu u_x+\tfrac12\sigma^2u_{xx}=0,\qquad u(T,x)=C_0\sin(2\pi x),\quad t\in[0,T].$$
In the particular case of periodic boundary conditions, the exact solution is given by [34]:
$$u(t,x)=C_0\exp\big(-2\pi^2\sigma^2(T-t)\big)\sin\big[2\pi(x+\mu(T-t))\big].$$
Parameters: $D=1$, $\sigma=0.32$, $t=0.5$, $x=0.2$, $T=1$, $C_0=10$.
Exact solution: 3.2431.

The results for $t=0.5$ and $x=0.2$ are presented in Table 4.10. Here $N$ and $M$ have the same meaning as before, and $C_0$ is a constant (chosen to avoid very small or very large numbers).

Table 4.10: Burgers' Equation, t = 0.5, x = 0.2
M\N 4 8
1000 3.2645(0.081) 3.2245(0.079)
2000 3.2554(0.052) 3.2652(0.053)
4000 3.2500(0.037) 3.2436(0.040)
8000 3.2487(0.017) 3.2444(0.024)

As in the option pricing case, we can see in Table 4.10 that the method converges to the exact solution as the number of simulations increases.
Next we consider the case where $\mu=u$; this changes the above equation into a semi-linear PDE. This case is harder to handle since the driver is now non-linear. As a reference we use the example of Delarue and Menozzi [13](2006).
Parameters: $D=1$, $\sigma=0.15$, $t=0.5$, $x=0.4$, $T=1$, $C_0=1$.
Exact solution: 0.5999.

Table 4.11: Semi-linear, t = 0.5, x = 0.2
M\N 4 8
1000 0.4545(0.109) 0.5233(0.123)
2000 0.5267(0.089) 0.5778(0.097)
4000 0.5872(0.060) 0.5900(0.065)
8000 0.5789(0.042) 0.5892(0.046)
4.3.2 D-dimensional Heat Equation

Next we consider the $D$-dimensional heat equation. In dimension higher than three there is no real physical interpretation of the solution; however, the motivation is to show that our method can solve PDE's of dimension higher than three, for which traditional methods like finite differences and finite elements are unpractical.

Consider the following heat equation:
$$u_t(t,x)+\tfrac12\sigma^2\Delta u(t,x)=0,\qquad x\in\mathbb{R}^D,\ 0\le x_i\le1,\ i=1,\dots,D,$$
$$u(T,x)=C_0\prod_{i=1}^D\sin(2\pi x_i).$$
In the case of periodic boundary conditions, the exact solution (obtained by separation of variables) is given by [34]:
$$u(t,x)=C_0\exp\big(-2\pi^2\sigma^2D(T-t)\big)\prod_{i=1}^D\sin(2\pi x_i).$$
1. We begin with the case where the dimension of the PDE is three (in order to make physical sense). We test the algorithm for two different times and two different $x$ values. We consider the following parameters and use the exact solution as a reference.
Parameters: $D=3$, $\sigma=0.32$, $t=0.9$, $x=0.6$, $T=1$, $C_0=10$.
Exact solution: -1.1074.
The results are presented in Table 4.12.

Table 4.12: Heat Equation, 3-D, t = 0.9, x = 0.6
M\N 4 8
1000 -1.1113(0.072) -1.1061(0.079)
2000 -1.1099(0.052) -1.1132(0.053)
4000 -1.1056(0.037) -1.1045(0.040)
8000 -1.1084(0.017) -1.1086(0.024)

Next we change the parameters, and present the results in Table 4.13.
Parameters: $D=3$, $\sigma=0.32$, $t=0.5$, $x=0.2$, $T=1$.
Exact solution: 0.4454.

Table 4.13: Heat Equation, 3-D, t = 0.5, x = 0.2
M\N 4 8
1000 0.4212(0.120) 0.4567(0.088)
2000 0.4394(0.062) 0.4438(0.040)
4000 0.4504(0.032) 0.4545(0.035)
8000 0.4443(0.012) 0.4458(0.016)

4.3.1 Remark. Notice that changing the time and location does not change the accuracy of the scheme (the expected behavior).
2. Finally, we solve the heat equation in ten dimensions. We use the same boundary and terminal conditions as before, but with different parameters. We solve the problem for two different times; the results are presented in Table 4.14 and Table 4.15.
Parameters: $D=10$, $\sigma=0.32$, $t=0.4$, $x=0.25$, $T=1$, $C_0=1000$.
Exact solution: 5.4073.

Table 4.14: Heat Equation, 10-D, t = 0.4, x = 0.25
M\N 4 8 16
4000 5.2343(0.66) 5.1233(0.73) 5.3273(0.78)
8000 5.3745(0.53) 5.4433(0.56) 5.4059(0.59)
16000 5.3999(0.37) 5.4066(0.38) 5.4068(0.40)
32000 5.4057(0.22) 5.4100(0.26) 5.4088(0.28)

Parameters: $D=10$, $\sigma=0.32$, $t=0.7$, $x=0.25$, $T=1$, $C_0=1000$.
Exact solution: 2.3253.

Table 4.15: Heat Equation, 10-D, t = 0.7, x = 0.25
M\N 4 8 16
4000 2.2418(0.67) 2.2532(0.73) 2.2453(0.77)
8000 2.4090(0.50) 2.4442(0.46) 2.3334(0.52)
16000 2.3374(0.38) 2.3438(0.39) 2.2376(0.42)
32000 2.3244(0.24) 2.3353(0.26) 2.3223(0.29)

Once again we can see the consistency of the algorithm: changing the parameters does not change the convergence of the algorithm to the exact solution.
Chapter 5
Results
5.1 Picard iteration
In this section we consider a discretized Picard iteration for the FBSDE, proposed by Bender and Denk [5](2005). In theory, the backward component $(Y,Z)$ can be obtained as the limit of a Picard-type iteration $(Y^n,Z^n)$ [35]. We assume that $(Y^0,Z^0)\triangleq(0,0)$, and $(X,Y^n,Z^n)$ is the solution of the following FBSDE:
$$X_t=x+\int_0^tb(s,X_s)\,ds+\int_0^t\sigma(s,X_s)\,dW_s,$$
$$Y^n_t=g(X_T)+\int_t^Tf(s,X_s,Y^{n-1}_s,Z^{n-1}_s)\,ds-\int_t^TZ^n_s\,dW_s.\tag{5.1}$$
The solution is given by
$$Y^n_t=E_t\Big[g(X_T)+\int_t^Tf(s,X_s,Y^{n-1}_s,Z^{n-1}_s)\,ds\Big],$$
and $Z^n$ is obtained via the martingale representation theorem [26].

In their paper, Bender and Denk [5] proposed to stop the algorithm when the distance between two consecutive Picard iterations is less than some predetermined tolerance. However, there is no theoretical justification for this in their paper.
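Here is a minimal Python sketch (ours, not Bender and Denk's implementation) of the discretized Picard iteration with the tolerance-based stopping rule discussed above; picard_step stands for one full backward pass of the scheme of Chapter 2 and is a hypothetical placeholder.

```python
import numpy as np

def picard_solve(picard_step, shape, tol=1e-4, max_iter=50):
    """Iterate (Y^{n-1}, Z^{n-1}) -> (Y^n, Z^n), starting from (0, 0), and stop
    when consecutive iterates differ by less than tol (the criterion studied
    in this chapter).

    picard_step : callable (Y, Z) -> (Y, Z), one discretized Picard update.
    shape       : common shape of the discretized Y and Z arrays.
    """
    Y, Z = np.zeros(shape), np.zeros(shape)
    for n in range(max_iter):
        Y_new, Z_new = picard_step(Y, Z)
        gap = np.max(np.abs(Y_new - Y)) + np.max(np.abs(Z_new - Z))
        Y, Z = Y_new, Z_new
        if gap < tol:
            break
    return Y, Z, n + 1
```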
5.1.1 Continuous Case
5.1.1 Theorem. Under the above assumptions, the following estimate holds:
$$E\Big\{\sup_{0\le t\le T}|Y^{n+1}_t-Y_t|^2+\int_0^T|Z^{n+1}_t-Z_t|^2\,dt\Big\}\le C\,E\int_0^T\big(|Y^{n+1}_t-Y^n_t|^2+|Z^{n+1}_t-Z^n_t|^2\big)\,dt.\tag{5.2}$$

Proof. Let $\Delta Y^{n+1}_t\triangleq Y^{n+1}_t-Y_t$, $\delta Y^{n+1}_t\triangleq Y^{n+1}_t-Y^n_t$ (and analogously for $Z$), and
$$I_t\triangleq E\Big\{|\Delta Y^{n+1}_t|^2+\int_t^T|\Delta Z^{n+1}_s|^2\,ds\Big\};\tag{5.3}$$
then from (5.1) we get
$$\Delta Y^{n+1}_t=\int_t^T\big[a_s(\Delta Y^{n+1}_s-\delta Y^{n+1}_s)+b_s(\Delta Z^{n+1}_s-\delta Z^{n+1}_s)\big]\,ds-\int_t^T\Delta Z^{n+1}_s\,dW_s,\tag{5.4}$$
where
$$a_t=\frac{f(t,X_t,Y^n_t,Z^n_t)-f(t,X_t,Y_t,Z^n_t)}{Y^n_t-Y_t},\qquad b_t=\frac{f(t,X_t,Y^n_t,Z^n_t)-f(t,X_t,Y^n_t,Z_t)}{Z^n_t-Z_t}\tag{5.5}$$
are bounded and adapted.

Applying Ito's formula to $|\Delta Y^{n+1}_t|^2$ we get
$$d|\Delta Y^{n+1}_t|^2=-2\Delta Y^{n+1}_t\big[a_t(\Delta Y^{n+1}_t-\delta Y^{n+1}_t)+b_t(\Delta Z^{n+1}_t-\delta Z^{n+1}_t)\big]\,dt+2\Delta Y^{n+1}_t\Delta Z^{n+1}_t\,dW_t+|\Delta Z^{n+1}_t|^2\,dt;$$
thus
$$-|\Delta Y^{n+1}_t|^2=\int_t^T-2\Delta Y^{n+1}_s\big\{a_s(\Delta Y^{n+1}_s-\delta Y^{n+1}_s)+b_s(\Delta Z^{n+1}_s-\delta Z^{n+1}_s)\big\}\,ds+\int_t^T2\Delta Y^{n+1}_s\Delta Z^{n+1}_s\,dW_s+\int_t^T|\Delta Z^{n+1}_s|^2\,ds.$$
Taking expectations on both sides and noticing that
$$E\Big\{\int_t^T2\Delta Y^{n+1}_s\Delta Z^{n+1}_s\,dW_s\Big\}=0,\tag{5.6}$$
we get
$$\begin{aligned}I_t&=E\Big\{\int_t^T2\Delta Y^{n+1}_s\big[a_s(\Delta Y^{n+1}_s-\delta Y^{n+1}_s)+b_s(\Delta Z^{n+1}_s-\delta Z^{n+1}_s)\big]\,ds\Big\}\\&\le CE\Big\{\int_t^T|\Delta Y^{n+1}_s|\big(|\Delta Y^{n+1}_s|+|\delta Y^{n+1}_s|+|\Delta Z^{n+1}_s|+|\delta Z^{n+1}_s|\big)\,ds\Big\}\\&=CE\Big\{\int_t^T\big(|\Delta Y^{n+1}_s|^2+|\Delta Y^{n+1}_s||\delta Y^{n+1}_s|+|\Delta Y^{n+1}_s||\Delta Z^{n+1}_s|+|\Delta Y^{n+1}_s||\delta Z^{n+1}_s|\big)\,ds\Big\}.\end{aligned}$$
Notice that for any $a,b$ we have (Young's inequality) $|a||b|\le\frac12|a|^2+\frac12|b|^2$; hence
$$I_t\le E\Big\{\int_t^T\Big(C_1|\Delta Y^{n+1}_s|^2+\tfrac12|\delta Y^{n+1}_s|^2+\tfrac12|\Delta Z^{n+1}_s|^2+\tfrac12|\delta Z^{n+1}_s|^2\Big)\,ds\Big\}.\tag{5.7}$$
This implies (since $\Delta Z^{n+1}\in L^2$)
$$\begin{aligned}E\Big\{|\Delta Y^{n+1}_t|^2+\tfrac12\int_t^T|\Delta Z^{n+1}_s|^2\,ds\Big\}&\le E\Big\{\int_t^TC_1|\Delta Y^{n+1}_s|^2\,ds\Big\}+\tfrac12E\int_t^T\big(|\delta Y^{n+1}_s|^2+|\delta Z^{n+1}_s|^2\big)\,ds\\&\le E\Big\{\int_0^TC_1|\Delta Y^{n+1}_s|^2\,ds\Big\}+\tfrac12E\int_0^T\big(|\delta Y^{n+1}_s|^2+|\delta Z^{n+1}_s|^2\big)\,ds\\&=C\int_0^TE|\Delta Y^{n+1}_s|^2\,ds+\tfrac12E\int_0^T\big(|\delta Y^{n+1}_s|^2+|\delta Z^{n+1}_s|^2\big)\,ds\quad\text{(applying Fubini)}.\end{aligned}\tag{5.8}$$
From (5.8) we have
$$E|\Delta Y^{n+1}_t|^2\le C\int_0^TE|\Delta Y^{n+1}_s|^2\,ds+\tfrac12E\int_0^T\big(|\delta Y^{n+1}_s|^2+|\delta Z^{n+1}_s|^2\big)\,ds.\tag{5.9}$$
Applying Gronwall's inequality to (5.9) we have
$$E|\Delta Y^{n+1}_t|^2\le CE\Big\{\int_0^T\big(|\delta Y^{n+1}_s|^2+|\delta Z^{n+1}_s|^2\big)\,ds\Big\},\qquad\forall t\in[0,T],$$
which by assumption implies
$$E|Y^{n+1}_t-Y_t|^2\le C_2\,E\int_0^T\big(|Y^{n+1}_t-Y^n_t|^2+|Z^{n+1}_t-Z^n_t|^2\big)\,dt.\tag{5.10}$$
Taking the supremum in (5.10) we get
$$\sup_{0\le t\le T}E|Y^{n+1}_t-Y_t|^2\le C_2\,E\int_0^T\big(|Y^{n+1}_t-Y^n_t|^2+|Z^{n+1}_t-Z^n_t|^2\big)\,dt.\tag{5.11}$$
Now, taking $t=0$ and plugging (5.10) into (5.8) yields
$$E\Big\{\int_0^T|Z^{n+1}_s-Z_s|^2\,ds\Big\}\le C_2\,E\int_0^T\big(|Y^{n+1}_t-Y^n_t|^2+|Z^{n+1}_t-Z^n_t|^2\big)\,dt.\tag{5.12}$$
Hence, combining (5.11) and (5.12), one gets
$$\sup_{0\le t\le T}E|Y^{n+1}_t-Y_t|^2+E\Big\{\int_0^T|Z^{n+1}_s-Z_s|^2\,ds\Big\}\le C\,E\int_0^T\big(|Y^{n+1}_t-Y^n_t|^2+|Z^{n+1}_t-Z^n_t|^2\big)\,dt.\qquad\Box$$
5.1.2 Time Discretization
Next, we look into the time discretization of (5.1).

5.1.2 Definition. Let $K>0$ be a constant. A partition $\pi:0=t_0<t_1<\dots<t_N=T$ is called K-uniform if $\Delta t_i\ge|\pi|/K$ for $i=1,\dots,n$.

5.1.3 Theorem. Suppose that the partition $\pi$ is K-uniform. Under (A1), the following estimate holds:
$$\max_{0\le i\le N}E\{|Y_{t_i}-Y^{n,N}_{t_i}|^2\}+\sum_{i=0}^{N-1}\int_{t_i}^{t_{i+1}}E|Z_s-Z^{n,N}_{t_i}|^2\,ds\le CE\{|g(X_T)-g(X^N_T)|^2\}+C|\pi|+C\sum_{i=0}^{N-1}E\big\{|Y^{n,N}_{t_i}-Y^{n-1,N}_{t_i}|^2+|Z^{n,N}_{t_i}-Z^{n-1,N}_{t_i}|^2\big\},$$
where $g$ is deterministic and the constant $C$ depends only on $T$ and $K$ and can change from line to line.

Proof. Let $\eta^{n,N}_{t_i}\triangleq Y^{n,N}_{t_i}-Y^{n-1,N}_{t_i}$ and $\mu^{n,N}_{t_i}\triangleq Z^{n,N}_{t_i}-Z^{n-1,N}_{t_i}$, with $\eta^{n,N}_{t_i},\mu^{n,N}_{t_i}\in L^2(\mathcal{F}_{t_i})$. Now, for $t\in[t_{i-1},t_i)$, $i=N,N-1,\dots,1$, let
$$Y^{n,N}_{t_N}=g(X^N_{t_N}),\qquad Z^{n,N,1}_{t_N}=0,$$
$$Y^{n,N}_t=Y^{n,N}_{t_i}+f(t_i,X^N_{t_i},Y^{n-1,N}_{t_i},Z^{n-1,N,1}_{t_i})\Delta t_i-\int_t^{t_i}Z^{n,N}_s\,dW_s,\tag{5.13}$$
where
$$Z^{n,N,1}_{t_i}\triangleq\frac{1}{\Delta t_{i+1}}E\Big\{\int_{t_i}^{t_{i+1}}Z^{n,N}_s\,ds\,\Big|\,\mathcal{F}_{t_i}\Big\},\qquad i=0,\dots,N-1.\tag{5.14}$$
From (5.13) we get
$$Y^{n,N}_{t_{i-1}}=Y^{n,N}_{t_i}+\big[f(t_i,X^N_{t_i},Y^{n,N}_{t_i},Z^{n,N,1}_{t_i})+a_{t_i}\eta^{n,N}_{t_i}+b_{t_i}\mu^{n,N}_{t_i}\big]\Delta t_i-\int_{t_{i-1}}^{t_i}Z^{n,N}_s\,dW_s,\tag{5.15}$$
where $a_{t_i},b_{t_i}$ are defined in a similar way as in (5.5).

For $i=1,\dots,N$, let
$$I_{i-1}\triangleq E\Big[|Y_{t_{i-1}}-Y^{n,N}_{t_{i-1}}|^2+\int_{t_{i-1}}^{t_i}|Z_s-Z^{n,N}_s|^2\,ds\Big];\tag{5.16}$$
then
$$\begin{aligned}Y_{t_{i-1}}-Y^{n,N}_{t_{i-1}}+\int_{t_{i-1}}^{t_i}(Z_s-Z^{n,N}_s)\,dW_s&=Y_{t_i}-Y^{n,N}_{t_i}+\int_{t_{i-1}}^{t_i}f(s,X_s,Y_s,Z_s)\,ds-\big[f(t_i,X^N_{t_i},Y^{n,N}_{t_i},Z^{n,N,1}_{t_i})+a_{t_i}\eta^{n,N}_{t_i}+b_{t_i}\mu^{n,N}_{t_i}\big]\Delta t_i\\&=Y_{t_i}-Y^{n,N}_{t_i}+\int_{t_{i-1}}^{t_i}\big(f(s,X_s,Y_s,Z_s)-f(t_i,X^N_{t_i},Y^{n,N}_{t_i},Z^{n,N,1}_{t_i})\big)\,ds-\big[a_{t_i}\eta^{n,N}_{t_i}+b_{t_i}\mu^{n,N}_{t_i}\big]\Delta t_i.\end{aligned}$$
Squaring both sides, taking expectations, and noting that $Y_{t_{i-1}}-Y^{n,N}_{t_{i-1}}$ is uncorrelated with $\int_{t_{i-1}}^{t_i}(Z_s-Z^{n,N}_s)\,dW_s$, we get
$$\begin{aligned}I_{i-1}&=E\Big\{\Big[(Y_{t_i}-Y^{n,N}_{t_i})+\int_{t_{i-1}}^{t_i}\big(f(s,X_s,Y_s,Z_s)-f(t_i,X^N_{t_i},Y^{n,N}_{t_i},Z^{n,N,1}_{t_i})\big)\,ds-\int_{t_{i-1}}^{t_i}\big(a_{t_i}\eta^{n,N}_{t_i}+b_{t_i}\mu^{n,N}_{t_i}\big)\,ds\Big]^2\Big\}\\&\le E\Big\{\Big[|Y_{t_i}-Y^{n,N}_{t_i}|+C\int_{t_{i-1}}^{t_i}\big(\sqrt{t_i-s}+|X_s-X_{t_i}|+|Y_s-Y^{n,N}_{t_i}|+|Z_s-Z^{n,N,1}_{t_i}|+|a_{t_i}||\eta^{n,N}_{t_i}|+|b_{t_i}||\mu^{n,N}_{t_i}|\big)\,ds\Big]^2\Big\}.\end{aligned}$$
Note that for any $\gamma>0$ we have (Young's inequality)
$$(a+b)^2=a^2+2ab+b^2\le\Big(1+\frac{\Delta t_i}{\gamma}\Big)a^2+\Big(1+\frac{\gamma}{\Delta t_i}\Big)b^2;\tag{5.17}$$
thus, using the Lipschitz property of $f$, Lemma 1.2.9 [37], and (5.17), we have
$$\begin{aligned}I_{i-1}&\le\Big(1+\frac{C\Delta t_i}{\gamma}\Big)E|Y_{t_i}-Y^{n,N}_{t_i}|^2+C\Big(1+\frac{\gamma}{\Delta t_i}\Big)E\Big\{\int_{t_{i-1}}^{t_i}\big(\sqrt{|\pi|}+|X_s-X_{t_i}|+|Y_s-Y^{n,N}_{t_i}|+|Z_s-Z^{n,N,1}_{t_i}|+|a_{t_i}||\eta^{n,N}_{t_i}|+|b_{t_i}||\mu^{n,N}_{t_i}|\big)^2\,ds\Big\}\\&\le\Big(1+\frac{C\Delta t_i}{\gamma}\Big)E|Y_{t_i}-Y^{n,N}_{t_i}|^2+C(\gamma+\Delta t_i)E\Big\{\int_{t_{i-1}}^{t_i}\big(|\pi|+|X_s-X_{t_i}|^2+|X_{t_i}-X^N_{t_i}|^2+|Y_s-Y^{n,N}_{t_i}|^2+|Z_s-Z_{t_i}|^2+|Z_{t_i}-Z^{n,N}_{t_i}|^2+|\eta^{n,N}_{t_i}|^2+|\mu^{n,N}_{t_i}|^2\big)\,ds\Big\}\\&\le E\Big\{\Big(1+\frac{C\Delta t_i}{\gamma}\Big)|Y_{t_i}-Y^{n,N}_{t_i}|^2+C\int_{t_{i-1}}^{t_i}|Z_s-Z_{t_i}|^2\,ds+C(\gamma+\Delta t_i)\Delta t_i|Z_{t_i}-Z^{n,N}_{t_i}|^2+C|\pi|^2+C\Delta t_i\big(|\eta^{n,N}_{t_i}|^2+|\mu^{n,N}_{t_i}|^2\big)\Big\}.\end{aligned}$$
Since the partition is K-uniform we have $\Delta t_i\le K\Delta t_{i+1}$; therefore
$$\begin{aligned}E\big\{\Delta t_i|Z_{t_i}-Z^{n,N,1}_{t_i}|^2\big\}&\le KE\big\{\Delta t_{i+1}|Z_{t_i}-Z^{n,N,1}_{t_i}|^2\big\}=\frac{K}{\Delta t_{i+1}}E\Big\{\Big[E\Big\{\int_{t_i}^{t_{i+1}}(Z^{n,N}_s-Z_{t_i})\,ds\,\Big|\,\mathcal{F}_{t_i}\Big\}\Big]^2\Big\}\\&\le CE\Big\{\int_{t_i}^{t_{i+1}}|Z^{n,N}_s-Z_{t_i}|^2\,ds\Big\}\le CE\int_{t_i}^{t_{i+1}}\big[|Z^{n,N}_s-Z_s|^2+|Z_s-Z_{t_i}|^2\big]\,ds;\end{aligned}\tag{5.18}$$
hence
$$I_{i-1}\le E\Big\{\Big(1+\frac{C\Delta t_i}{\gamma}\Big)|Y_{t_i}-Y^{n,N}_{t_i}|^2+C_1(\gamma+\Delta t_i)\int_{t_i}^{t_{i+1}}|Z^{n,N}_s-Z_s|^2\,ds+C\int_{t_{i-1}}^{t_{i+1}}|Z_s-Z_{t_i}|^2\,ds+C|\pi|^2+C\Delta t_i\big(|\eta^{n,N}_{t_i}|^2+|\mu^{n,N}_{t_i}|^2\big)\Big\}.$$
If we choose $\gamma=\frac{1}{4C_1}$, then for $|\pi|\le\gamma$ we have
$$I_{i-1}\le E\Big\{(1+C\Delta t_i)|Y_{t_i}-Y^{n,N}_{t_i}|^2+\tfrac12\int_{t_i}^{t_{i+1}}|Z^{n,N}_s-Z_s|^2\,ds+C\int_{t_{i-1}}^{t_{i+1}}|Z_s-Z_{t_i}|^2\,ds+C|\pi|^2+C\Delta t_i\big(|\eta^{n,N}_{t_i}|^2+|\mu^{n,N}_{t_i}|^2\big)\Big\}.$$
This gives
$$I_{i-1}+\tfrac12E\Big\{\int_{t_i}^{t_{i+1}}|Z_s-Z^{n,N}_s|^2\,ds\Big\}\le(1+C\Delta t_i)I_i+CE\Big\{\int_{t_{i-1}}^{t_{i+1}}|Z_s-Z_{t_i}|^2\,ds+|\pi|^2+\Delta t_i\big(|\eta^{n,N}_{t_i}|^2+|\mu^{n,N}_{t_i}|^2\big)\Big\}.\tag{5.19}$$
Applying Gronwall's inequality to (5.19) we get
$$\max_{0\le i\le N}I_i\le CE\Big\{|g(X_T)-g(X^N_T)|^2+|\pi|\sum_{i=1}^N\big(|\eta^{n,N}_{t_i}|^2+|\mu^{n,N}_{t_i}|^2\big)+|\pi|+\sum_{i=1}^N\int_{t_{i-1}}^{t_i}\big[|Z_s-Z_{t_{i-1}}|^2+|Z_s-Z_{t_i}|^2\big]\,ds\Big\}.$$
By Theorem 1.2.8 [38] we have
$$\sum_{i=1}^N\int_{t_{i-1}}^{t_i}\big[|Z_s-Z_{t_{i-1}}|^2+|Z_s-Z_{t_i}|^2\big]\,ds\le C|\pi|;$$
thus
$$\max_{0\le i\le N}I_i\le C\Big[E\{|g(X_T)-g(X^N_T)|^2\}+|\pi|\sum_{i=1}^N\big(|\eta^{n,N}_{t_i}|^2+|\mu^{n,N}_{t_i}|^2\big)+|\pi|\Big],\tag{5.20}$$
which implies that
$$\max_{0\le i\le N}E\{|Y_{t_i}-Y^{n,N}_{t_i}|^2\}\le C\Big[E\{|g(X_T)-g(X^N_T)|^2\}+|\pi|\sum_{i=1}^N\big(|\eta^{n,N}_{t_i}|^2+|\mu^{n,N}_{t_i}|^2\big)+|\pi|\Big].\tag{5.21}$$
Now, summing both sides of (5.19) from $i=0$ to $N-1$ we get
$$\sum_{i=0}^{N-2}I_i+\tfrac12E\sum_{i=0}^{N-1}\int_{t_i}^{t_{i+1}}|Z^{n,N}_s-Z_{t_i}|^2\,ds\le\sum_{i=0}^{N-1}(1+C\Delta t_i)I_i+C\Big[E\{|g(X_T)-g(X^N_T)|^2\}+|\pi|+|\pi|\sum_{i=0}^{N-1}\big(|\eta^{n,N}_{t_i}|^2+|\mu^{n,N}_{t_i}|^2\big)\Big];$$
hence
$$\sum_{i=0}^{N-1}\int_{t_i}^{t_{i+1}}E|Z^{n,N}_s-Z_{t_i}|^2\,ds\le2(1+C\Delta t_{N-1})I_{N-1}+CE\{|g(X_T)-g(X^N_T)|^2\}+C\Big(\sum_{i=0}^{N-1}\Delta t_iI_i+|\pi|\sum_{i=0}^{N-1}\big(|\eta^{n,N}_{t_i}|^2+|\mu^{n,N}_{t_i}|^2+|\pi|\big)\Big).\tag{5.22}$$
Plugging (5.20) into (5.22) we get
$$\sum_{i=0}^{N-1}\int_{t_i}^{t_{i+1}}E|Z_s-Z_{t_i}|^2\,ds\le C\Big[E\{|g(X_T)-g(X^N_T)|^2\}+|\pi|+|\pi|\sum_{i=0}^{N-1}\big(|\eta^{n,N}_{t_i}|^2+|\mu^{n,N}_{t_i}|^2\big)\Big].\tag{5.23}$$
Finally, combining (5.21) and (5.23) gives us
$$\max_{0\le i\le N}E\{|Y_{t_i}-Y^{n,N}_{t_i}|^2\}+\sum_{i=0}^{N-1}\int_{t_i}^{t_{i+1}}E|Z_s-Z^{n,N}_{t_i}|^2\,ds\le CE\{|g(X_T)-g(X^N_T)|^2\}+C|\pi|+C\sum_{i=0}^{N-1}E\big\{|Y^{n,N}_{t_i}-Y^{n-1,N}_{t_i}|^2+|Z^{n,N}_{t_i}-Z^{n-1,N}_{t_i}|^2\big\}.\qquad\Box$$
5.1.3 Orthogonal Projections
Now we replace the conditional expectations by orthogonal projections on subspaces of $L^2(\mathcal{F}_{t_i})$. We fix $D+1$ subspaces $\Theta_{d,i}$, $0\le d\le D$, of $L^2(\mathcal{F}_{t_i})$ for each $0\le i\le N$. The orthogonal projection onto $\Theta_{d,i}$ is denoted by $P_{p_{d,i}}$.

Define $\hat\eta^n_{t_i}\triangleq\hat Y^{n,N}_{t_i}-\hat Y^{n-1,N}_{t_i}$ and $\hat\mu^n_{t_i}\triangleq\hat Z^{n,N}_{t_i}-\hat Z^{n-1,N}_{t_i}$.

5.1.4 Theorem. Under assumptions (A1)-(A3) and for $\Delta t_i$ small enough, we have the following estimate:
$$\max_{0\le i\le N-1}E|\hat Y^{n,N}_{t_i}-Y^{n,N}_{t_i}|^2+\Delta t_i\sum_{i=0}^{N-1}E|\hat Z^{n,N}_{t_i}-Z^{n,N}_{t_i}|^2\le C\Big\{\sum_{i=1}^{N-1}E|R_{p_{0,i}}(Y^{n,N}_{t_i})|^2+\Delta t_i\sum_{i=1}^{N-1}\sum_{l=1}^qE|R_{p_{l,i}}(Z^{n,N}_{l,t_i})|^2+\Delta t_i\sum_{i=1}^{N-1}E|\hat\eta^n_{t_i}+\hat\mu^n_{t_i}|^2+E|\hat g(X^N_T)-g(X^N_T)|^2\Big\}.\tag{5.24}$$

Proof. Let
$$\hat Z^{n,N}_{l,t_i}=\frac{1}{\Delta t_i}P_{p_{l,i}}\big(\hat Y^{n,N}_{t_{i+1}}\Delta W_{l,i}\big),\qquad\hat Y^{n,N}_{t_i}=P_{p_{0,i}}\Big(\hat Y^{n,N}_{t_{i+1}}+\Delta t_i\big[f(t_{i+1},X^N_{t_i},\hat Y^{n,N}_{t_{i+1}},\hat Z^{n,N}_{t_i})+\hat a_i\hat\eta^n_{t_i}+\hat b_i\hat\mu^n_{t_i}\big]\Big),\tag{5.25}$$
where $\hat a_i,\hat b_i$ are defined in a similar way to (5.5). We prove the theorem in three steps.

Step 1. Applying the Cauchy-Schwarz inequality we get
$$\begin{aligned}|E_i(\hat Y^{n,N}_{t_{i+1}}\Delta W_{l,i})|^2&=\big|E_i\big([\hat Y^{n,N}_{t_{i+1}}-E_i(\hat Y^{n,N}_{t_{i+1}})]\Delta W_{l,i}\big)\big|^2\le E_i|\Delta W_{l,i}|^2\,E_i\big[(\hat Y^{n,N}_{t_{i+1}}-E_i(\hat Y^{n,N}_{t_{i+1}}))^2\big]\\&\le\Delta t_i\,E_i\big[(\hat Y^{n,N}_{t_{i+1}}-E_i(\hat Y^{n,N}_{t_{i+1}}))^2\big]=\Delta t_i\big\{E_i[(\hat Y^{n,N}_{t_{i+1}})^2]-[E_i(\hat Y^{n,N}_{t_{i+1}})]^2\big\}.\end{aligned}\tag{5.26}$$
Recall that $\hat Z^{n,N}_{l,t_i}=\frac{1}{\Delta t_i}P_{p_{l,i}}(\hat Y^{n,N}_{t_{i+1}}\Delta W_{l,i})$; by the contraction property of the projection operator and (5.26) it follows that
$$E|\hat Z^{n,N}_{l,t_i}|^2=\frac{1}{\Delta t_i^2}E\big[P_{p_{l,i}}(E_i[\hat Y^{n,N}_{t_{i+1}}\Delta W_{l,i}])\big]^2\le\frac{1}{\Delta t_i^2}E\big(E_i\{\hat Y^{n,N}_{t_{i+1}}\Delta W_{l,i}\}\big)^2\le\frac{1}{\Delta t_i}\big\{E[\hat Y^{n,N}_{t_{i+1}}]^2-E[E_i(\hat Y^{n,N}_{t_{i+1}})]^2\big\}.\tag{5.27}$$

Step 2. By the contraction property of $P_{p_{0,i}}$, Young's inequality, the identity $P_{p_{0,i}}(\hat Y^{n,N}_{t_{i+1}})=P_{p_{0,i}}(E_i(\hat Y^{n,N}_{t_{i+1}}))$, and the Lipschitz property of $f$, we have
$$E|\hat Y^{n,N}_{t_i}-P_{p_{0,i}}(Y^{n,N}_{t_i})|^2\le(1+\gamma\Delta t_i)E|E_i[\hat Y^{n,N}_{t_{i+1}}-Y^{n,N}_{t_{i+1}}]|^2+C\Delta t_i\Big(\Delta t_i+\frac1\gamma\Big)\Big\{E|\hat Y^{n,N}_{t_{i+1}}-Y^{n,N}_{t_{i+1}}|^2+E|\hat Z^{n,N}_{t_i}-Z^{n,N}_{t_i}|^2+E|\hat\eta^n_{t_i}+\hat\mu^n_{t_i}|^2\Big\}.\tag{5.28}$$
By the orthogonality of the projection we get
$$E|\hat Z^{n,N}_{t_i}-Z^{n,N}_{t_i}|^2=\sum_{l=1}^qE|R_{p_{l,i}}(Z^{n,N}_{l,t_i})|^2+\sum_{l=1}^qE|\hat Z^{n,N}_{t_i}-P_{p_{l,i}}(Z^{n,N}_{l,t_i})|^2;\tag{5.29}$$
thus, using (5.27), we obtain
$$\sum_{l=1}^qE|\hat Z^{n,N}_{t_i}-P_{p_{l,i}}(Z^{n,N}_{l,t_i})|^2\le\frac{d}{\Delta t_i}\big\{E[|\hat Y^{n,N}_{t_{i+1}}-Y^{n,N}_{t_{i+1}}|]^2-E(E_i[\hat Y^{n,N}_{t_{i+1}}-Y^{n,N}_{t_{i+1}}])^2\big\}.\tag{5.30}$$
Taking $\gamma=Cd$ and plugging (5.30) into (5.28) we get
$$E|\hat Y^{n,N}_{t_i}-P_{p_{0,i}}(Y^{n,N}_{t_i})|^2\le(1+C\Delta t_i)E|\hat Y^{n,N}_{t_{i+1}}-Y^{n,N}_{t_{i+1}}|^2+C\Delta t_i\Big\{\sum_{l=1}^qE|R_{p_{l,i}}(Z^{n,N}_{l,t_i})|^2+E|\hat\eta^n_{t_i}+\hat\mu^n_{t_i}|^2\Big\}.\tag{5.31}$$
Again, by the orthogonality of the projections,
$$\hat I^N_i=E|\hat Y^{n,N}_{t_i}-Y^{n,N}_{t_i}|^2=E|\hat Y^{n,N}_{t_i}-P_{p_{0,i}}(Y^{n,N}_{t_i})|^2+E|R_{p_{0,i}}(Y^{n,N}_{t_i})|^2.\tag{5.32}$$
Plugging (5.31) into (5.32) we get
$$\hat I^N_i\le(1+C\Delta t_i)\hat I^N_{i+1}+CE|R_{p_{0,i}}(Y^{n,N}_{t_i})|^2+C\Delta t_i\Big\{\sum_{l=1}^qE|R_{p_{l,i}}(Z^{n,N}_{l,t_i})|^2+E|\hat\eta^n_{t_i}+\hat\mu^n_{t_i}|^2\Big\}.\tag{5.33}$$
Applying Gronwall's inequality to (5.33) we have
$$\max_{0\le i\le N-1}\hat I^N_i=\max_{0\le i\le N-1}E|\hat Y^{n,N}_{t_i}-Y^{n,N}_{t_i}|^2\le C\Big\{E|\hat g(X^N_T)-g(X^N_T)|^2+\sum_{i=1}^{N-1}E|R_{p_{0,i}}(Y^{n,N}_{t_i})|^2+\Delta t_i\sum_{i=1}^{N-1}\sum_{l=1}^qE|R_{p_{l,i}}(Z^{n,N}_{l,t_i})|^2+\Delta t_i\sum_{i=1}^{N-1}E|\hat\eta^n_{t_i}+\hat\mu^n_{t_i}|^2\Big\}.\tag{5.34}$$

Step 3. Let $\hat\zeta^N=\Delta t_i\sum_{i=0}^{N-1}E|\hat Z^{n,N}_{t_i}-Z^{n,N}_{t_i}|^2$. From (5.30) we have
$$\hat\zeta^N\le\Delta t_i\sum_{i=0}^{N-1}\sum_{l=1}^qE|R_{p_{l,i}}(Z^{n,N}_{t_i})|^2+d\sum_{i=0}^{N-1}\big\{E|\hat Y^{n,N}_{t_{i+1}}-Y^{n,N}_{t_{i+1}}|^2-E(E_i|\hat Y^{n,N}_{t_{i+1}}-Y^{n,N}_{t_{i+1}}|)^2\big\}.\tag{5.35}$$
But
$$\sum_{i=0}^{N-1}E|\hat Y^{n,N}_{t_{i+1}}-Y^{n,N}_{t_{i+1}}|^2=E|\hat Y^{n,N}_{t_N}-Y^{n,N}_{t_N}|^2+\sum_{i=0}^{N-1}E|\hat Y^{n,N}_{t_i}-Y^{n,N}_{t_i}|^2;\tag{5.36}$$
then from (5.28) we get
$$E|\hat Y^{n,N}_{t_i}-Y^{n,N}_{t_i}|^2-E(E_i|\hat Y^{n,N}_{t_{i+1}}-Y^{n,N}_{t_{i+1}}|)^2\le\Big(\gamma\Delta t_i+C\Big(\Delta t_i^2+\frac{\Delta t_i}{\gamma}\Big)\Big)E|\hat Y^{n,N}_{t_{i+1}}-Y^{n,N}_{t_{i+1}}|^2+C\Big(\Delta t_i^2+\frac{\Delta t_i}{\gamma}\Big)\hat\zeta^N+C\Big(\Delta t_i^2+\frac{\Delta t_i}{\gamma}\Big)E|\hat\eta^n_{t_i}+\hat\mu^n_{t_i}|^2.$$
Plugging the above inequality into (5.35) and choosing $\gamma=4Cd$ and $\Delta t_i$ small enough such that $dC(\Delta t_i+\frac1\gamma)\le\frac12$, we get
$$\hat\zeta^N\le C\Delta t_i\sum_{i=0}^{N-1}\sum_{l=1}^qE|R_{p_{l,i}}(Z^{n,N}_{t_i})|^2+C\sum_{i=0}^{N-1}E|R_{p_{0,i}}(Y^{n,N}_{t_i})|^2+C\max_{0\le i\le N-1}E|\hat Y^{n,N}_{t_i}-Y^{n,N}_{t_i}|^2+C\sum_{i=0}^{N-1}E|\hat\eta^n_{t_i}+\hat\mu^n_{t_i}|^2+CE|\hat g(X^N_T)-g(X^N_T)|^2.$$
Combining (5.34) and the last estimate gives the desired result. $\Box$
5.1.4 Empirical Expectation (Simulations)
To conclude this chapter we replace the orthogonal projections for empirical expected
values which can be computed by monte-carlo simulation.
5.1.5 Proposition. Under (A1 ¡ A3), for a sequence of explicit functions
(½
N
l;i
)
0·l·q;0·i·N¡1;
bounded from below by 1,one has
j
^
Y
n;N
t
i
j·½
N
0;i
(X
N
t
i
);
p
¢t
i
j
^
Z
n;N
t
i
j·½
N
0;i
(X
N
t
i
);a:s:;
withEj½
N
l;i
(X
N
t
i
)j
2
<1.
49
Set½
N
l;i
(X
N
t
i
)=[½
N
0;i
(X
N
t
i
);:::;½
N
q;i
(X
N
t
i
)]
¤
:
We first define the following event.
A
M
i
= f8j 2 fi;:::;N ¡ 1g : jjV
M
j
¡ Idjj · ¢t
i
:jjP
M
0;j
¡ Idjj · ¢t
i
; and
jjP
M
0;j
¡Idjj· 1 for1·l·qg
For ease of notation we set:
~
Y
M
t
i
4
=
^
Y
n;N;M
,
~
Z
M
t
i
4
=
^
Z
n;N;M
,
~ ´
M
i
4
=
^
Y
n;N;M
t
i
¡
^
Y
n¡1;N;M
t
i
,~ ¹
M
i
4
=
^
Y
n;N;M
t
i
¡
^
Y
n¡1;N;M
t
i
5.1.6 Theorem. Under Assumptions A1 ¡ A3 and that each function basis is
orthonormal andEjp
l;i
j
2
< 1 for any l;i. For ¢t
i
small enough we have, for any
0·i·N¡1,
Ej
~
Y
M
t
i
¡
^
Y
n;N
t
i
j
2
+¢t
i
N¡1
X
j=i
Ej
~
Z
M
t
i
¡
^
Z
n;N
t
i
j
2
·C
N¡1
X
j=i
E(j½
N
j
(X
N
t
j
)j
2
1
[A
M
i
]
c)+
C
¢t
i
M
N¡1
X
j=i
(Ejjv
j
v
¤
j
¡Idjj
2
F
Ej½
N
j
(X
N
t
j
)j
2
+E(jv
j
j
2
jp
0;j+1
j
2
)Ej½
N
0;j
(X
N
t
j
)j
2
+¢t
2
i
E[jv
j
j
2
(1+jX
N
t
j
j
2
+jp
0;j
j
2
Ej½
N
0;j
(X
N
t
j
)j
2
+
1
¢t
i
q
X
l=1
jp
l;j
j
2
Ej½
N
l;j
(X
N
t
j
)j
2
)])+C
N¡1
X
j=i
Ej~ ´
M
i
+ ~ ¹
M
i
j
2
Proof. Let
(
^
µ
M
i
)
¤
=(^ ®
M¤
0;i
;
p
¢t
i
^ ®
M¤
1;i
;:::;
p
¢t
i
^ ®
M¤
q;i
)
Then the least square problem becomes:
50
^
µ
M
i
=arginf
µ
1
M
M
X
m=1
(^ ½
N;m
0;i+1
(^ ®
M
0;i+1
p
m
0;i+1
)+¢t
i
(f
m
i
(^ ®
M
i
)+~ a
i
~ ´
i
+
~
b
i
~ ¹
i
)¡µ¢v
m
i
)
2
:
(5.37)
By the lipschitz property of ^ ½
N;m
l;i
, the orthonormality ofp
l;i
and the eventA
M
i
, we
have
Ej
~
Y
M
t
i
¡
^
Y
n;N
t
i
j
2
+¢t
i
N¡1
X
k=i
Ej
~
Z
M
t
k
¡
^
Z
n;N
t
k
j
2
·C
N¡1
X
k=i
E(j½
N
k
(P
N
t
k
)j
2
1
[A
M
i
]
c)+E(j^ ®
0;i
¡ ^ ®
M
0;i
j
2
1
A
M
i
)
+¢t
i
N¡1
X
k=i
q
X
l=1
E(j^ ®
l;i
¡ ^ ®
M
l;i
j
2
1
A
M
i
):
(5.38)
We need to estimatej^ ®
l;i
¡ ^ ®
M
l;i
j
2
on the eventA
M
i
.
From Proposition 5.1.5, we have
j
^
µ
l;i
j
2
·Ej½
N
l;i
(P
N
t
i
)j
2
; 0·l·q (5.39)
.
Remember that by (5.25) and (5.27) and the orthonormality of each basis p
l;i
we
have:
^
µ
i
=E(v
i
[^ ®
0;i+1
¢p
0;i+1
+¢t
i
f
i
(^ ®
i
)])
^
µ
M
i
=
(V
M
i
)
¡1
M
M
X
m=1
v
m
i
[^ ½
N;m
0;i+1
(^ ®
M
0;i+1
¢p
m
0;i+1
)+¢t
i
(f
m
i
(^ ®
i
)+~ a
i
~ ´
i
+
~
b
i
~ ¹
i
)]
51
On the eventA
M
i
,(V
M
i
)
¡1
exits so we can set
B
1
=(Id¡(V
M
i
)
¡1
)
^
µ
i
;
B
2
=(V
M
i
)
¡1
[E(v
i
^ ½
N
0;i+1
(^ ®
0;i+1
¢p
0;i+1
))¡
1
M
M
X
m=1
v
m
i
^ ½
N;m
0;i+1
(^ ®
0;i+1
¢p
m
0;i+1
)];
B
3
=(V
M
i
)
¡1
¢t
i
[E(v
i
f
i
(^ ®
i
))¡
1
M
M
X
m=1
v
m
i
f
m
i
(^ ®
i
))];
B
4
=
(V
M
i
)
¡1
M
M
X
m=1
v
m
i
[^ ½
N;m
0;i+1
(^ ®
0;i+1
¢p
m
0;i+1
)¡ ^ ½
N;m
0;i+1
(^ ®
M
0;i+1
¢p
m
0;i+1
)
+¢t
i
(f
m
i
(^ ®
i
)¡f
m
i
(^ ®
M
i
)¡(~ a
i
~ ´
i
+
~
b
i
~ ¹
i
))]:
Hence, onA
M
i
, we have that
^
µ
i
¡
^
µ
M
i
=B
1
+B
2
+B
3
+B
4
;
thus, Young’s inequality yields
j
^
µ
i
¡
^
µ
M
i
j
2
·3(1+
1
¢t
i
)(jB
1
j
2
+jB
2
j
2
+jB
3
j
2
)+(1+¢t
i
)jB
4
j
2
: (5.40)
We estimate each B individually. ForB
1
we have
jB
1
j1
A
M
i
=jj(Id¡(V
M
i
)
¡1
)1
A
M
i
jjj
^
µ
i
j
LettingF =V
M
i
on (1.2.4) we get
\[
E\Big( \big\|Id - (V^M_i)^{-1}\big\|^2\, 1_{A^M_i} \Big)
\le (1 - \Delta t_i)^{-2}\, E\big\|Id - V^M_i\big\|^2
\le (1 - \Delta t_i)^{-2}\, E\big\|Id - V^M_i\big\|^2_F
= \big(M (1 - \Delta t_i)^2\big)^{-1}\, E\big\|v_i v_i^* - Id\big\|^2_F.
\]
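For the reader's convenience, here is a sketch of the standard argument behind this step, which is presumably the content of the matrix-inversion estimate (1.2.4): if $\|Id - F\| \le \Delta t_i < 1$, the Neumann series gives $\|F^{-1}\| \le (1-\Delta t_i)^{-1}$, and then
\[
\big\|Id - F^{-1}\big\| = \big\|F^{-1}(F - Id)\big\| \le \frac{\|Id - F\|}{1-\Delta t_i},
\]
which, after squaring and taking expectations, yields the factor $(1-\Delta t_i)^{-2}$ above.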
This implies
\[
E\big(|B_1|^2\, 1_{A^M_i}\big)
\le E\Big( \big\|Id - (V^M_i)^{-1}\big\|^2\, 1_{A^M_i} \Big)\, E\big|\hat{\theta}_i\big|^2
\le \frac{C}{M}\, E\big\|v_i v_i^* - Id\big\|^2_F\, E\big|\rho^N_i(X^N_{t_i})\big|^2.
\]
Recall that on $A^M_i$ one has $\big\|(V^M_i)^{-1}\big\| \le 2$.
Let $S^M = \sum_{m=1}^M v^m_i\, \hat{\rho}^{N,m}_{0,i+1}\big(\hat{\alpha}_{0,i+1}\cdot p^m_{0,i+1}\big)$; then, using the Lipschitz property of $\rho$, we obtain
\[
E\big(|B_2|^2\, 1_{A^M_i}\big)
= E\Big( \big\|(V^M_i)^{-1}\big\|^2\, 1_{A^M_i}\, \frac{1}{M^2}\big(S^M - E[S^M]\big)^2 \Big)
\le \frac{\big\|(V^M_i)^{-1}\big\|^2}{M}\, \mathrm{V}(S^1) \qquad \big(\mathrm{V}(\cdot)\le E|\cdot|^2\big)
\]
\[
\le \frac{\big\|(V^M_i)^{-1}\big\|^2}{M}\, E\big|v_i\, \hat{\rho}^{N}_{0,i+1}\big(\hat{\alpha}_{0,i+1}\cdot p_{0,i+1}\big)\big|^2
\le \frac{C}{M}\, E\big(|v_i|^2 |p_{0,i+1}|^2\big)\, E\big|\hat{\alpha}_{0,i+1}\big|^2
\le \frac{C}{M}\, E\big(|v_i|^2 |p_{0,i+1}|^2\big)\, E\big|\rho^N_{0,i}(X^N_{t_i})\big|^2. \qquad (5.41)
\]
Using a similar argument as for $B_2$ we have
\[
E\big(|B_3|^2\, 1_{A^M_i}\big)
\le E\Big( \big\|(V^M_i)^{-1}\big\|^2\, 1_{A^M_i} \Big)\, \frac{\Delta t_i^2}{M}\, E\big|v_i\, f_i(\hat{\alpha}_i)\big|^2
\le \frac{C\,\Delta t_i^2}{M}\, E\big|v_i\, f_i(\hat{\alpha}_i)\big|^2
\]
\[
\le \frac{C\,\Delta t_i^2}{M}\, E\Big\{ |v_i| \Big( \big|f_i(t_i,0,0,0)\big| + \big|\hat{\alpha}_{0,i}\cdot p_{0,i}\big|
+ \frac{1}{\sqrt{\Delta t_i}} \sum_{l=1}^q \big|\hat{\alpha}_{l,i}\big|\,\big|p_{l,i}\big| \Big) \Big\}^2
\]
\[
\le \frac{C\,\Delta t_i^2}{M}\, E\Big\{ |v_i|^2 \Big( 1 + \big|X^N_{t_i}\big|^2 + |p_{0,i}|^2\, E\big|\rho^N_{0,i}(X^N_{t_i})\big|^2
+ \frac{1}{\Delta t_i} \sum_{l=1}^q |p_{l,i}|^2\, E\big|\rho^N_{l,i}(X^N_{t_i})\big|^2 \Big) \Big\}. \qquad (5.42)
\]
Notice that on $A^M_i$ one has $1 - \Delta t_i \le \lambda_{\min}(V^M_i)$ and $\lambda_{\max}(P^M_{l,i}) \le 2$.
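Indeed, on $A^M_i$ the symmetric matrix $V^M_i$ satisfies $\|V^M_i - Id\| \le \Delta t_i$, so every eigenvalue $\lambda$ of $V^M_i$ obeys
\[
|\lambda - 1| \le \Delta t_i \quad\Longrightarrow\quad 1-\Delta t_i \le \lambda \le 1+\Delta t_i,
\]
and likewise $\|P^M_{l,i} - Id\| \le 1$ forces $\lambda_{\max}(P^M_{l,i}) \le 2$.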
Hence by (1.2) we obtain
\[
(1 - \Delta t_i)\,|B_4|^2
\le \frac{1}{M} \sum_{m=1}^M \Big[ \hat{\rho}^{N,m}_{0,i+1}\big(\hat{\alpha}_{0,i+1}\cdot p^m_{0,i+1}\big)
- \hat{\rho}^{N,m}_{0,i+1}\big(\hat{\alpha}^M_{0,i+1}\cdot p^m_{0,i+1}\big)
+ \Delta t_i \Big(f^m_i(\hat{\alpha}_i) - f^m_i(\hat{\alpha}^M_i) - \big(\tilde{a}_i \tilde{\eta}_i + \tilde{b}_i \tilde{\mu}_i\big)\Big) \Big]^2
\]
\[
\le \frac{C}{M} \sum_{m=1}^M \Big\{ \big|\hat{\alpha}_{0,i+1}\cdot p^m_{0,i+1} - \hat{\alpha}^M_{0,i+1}\cdot p^m_{0,i+1}\big|
+ \Delta t_i \Big( \big|\hat{\alpha}_i - \hat{\alpha}^M_i\big| + \big|\tilde{\eta}_i + \tilde{\mu}_i\big| \Big) \Big\}^2
\]
\[
\le C\,(1 + \gamma\Delta t_i)\, \frac{1}{M} \sum_{m=1}^M \big|\hat{\alpha}_{0,i+1} - \hat{\alpha}^M_{0,i+1}\big|^2\, \big|p^m_{0,i+1}\big|^2
+ C\,\Delta t_i \Big(\Delta t_i + \frac{1}{\gamma}\Big) \Big( \sum_{l=0}^q \big|\hat{\alpha}_{l,i} - \hat{\alpha}^M_{l,i}\big|^2 + \big|\tilde{\eta}_i + \tilde{\mu}_i\big|^2 \Big). \qquad (5.43)
\]
Let $\varphi_i$ denote the sum of the right-hand sides of the estimates for $B_1$, $B_2$ (5.41), and $B_3$ (5.42), stripped of the common factor $C/M$. Choosing $\gamma = 3C$ and $\Delta t_i$ small enough such that $C\Delta t_i + \frac{C}{\gamma} \le \frac{1}{2}$, we get
\[
E\Big(1_{A^M_i}\, \big|\hat{\theta}_i - \hat{\theta}^M_i\big|^2\Big)
\le C\, \frac{\varphi_i}{\Delta t_i M}
+ (1 + C\Delta t_i)\, E\Big(1_{A^M_i}\, \big|\hat{\alpha}_{0,i+1} - \hat{\alpha}^M_{0,i+1}\big|^2\Big)
+ C\, E\big|\tilde{\eta}_i + \tilde{\mu}_i\big|^2, \qquad (5.44)
\]
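To pass from (5.44) to (5.45), note that by the definition of $\hat{\theta}^M_i$ at the start of the proof,
\[
\big|\hat{\theta}_i - \hat{\theta}^M_i\big|^2
= \big|\hat{\alpha}_{0,i} - \hat{\alpha}^M_{0,i}\big|^2
+ \Delta t_i \sum_{l=1}^q \big|\hat{\alpha}_{l,i} - \hat{\alpha}^M_{l,i}\big|^2 .
\]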
which implies that
\[
E\Big(1_{A^M_i}\, \big|\hat{\alpha}_{0,i} - \hat{\alpha}^M_{0,i}\big|^2\Big)
+ \Delta t_i \sum_{l=1}^q E\Big(1_{A^M_i}\, \big|\hat{\alpha}_{l,i} - \hat{\alpha}^M_{l,i}\big|^2\Big)
\le C\, \frac{\varphi_i}{\Delta t_i M}
+ (1 + C\Delta t_i)\, E\Big(1_{A^M_i}\, \big|\hat{\alpha}_{0,i+1} - \hat{\alpha}^M_{0,i+1}\big|^2\Big)
+ C\, E\big|\tilde{\eta}_i + \tilde{\mu}_i\big|^2; \qquad (5.45)
\]
thus, applying Gronwall's inequality we have
\[
E\Big(1_{A^M_i}\, \big|\hat{\alpha}_{0,i} - \hat{\alpha}^M_{0,i}\big|^2\Big)
\le C\Big( E\big|\tilde{g}(X^N_T) - \hat{g}(X^N_T)\big|^2
+ \sum_{j=i}^{N-1} \frac{\varphi_j}{\Delta t_j M}
+ \sum_{j=i}^{N-1} E\big|\tilde{\eta}_j + \tilde{\mu}_j\big|^2 \Big). \qquad (5.46)
\]
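For concreteness, the following self-contained numpy sketch shows the overall shape of the scheme whose simulation error the theorem controls: backward in time, one least-squares regression per step. It is a deliberately simplified, hypothetical variant (separate regressions for Y and Z instead of the joint regression (5.37), one Brownian dimension, no truncation or Picard correction terms), not the dissertation's exact algorithm.

    import numpy as np

    def backward_regression(X, dW, dt, f, g, basis):
        # Simplified least-squares Monte-Carlo pass for a decoupled FBSDE.
        # X: (M, N+1) forward paths; dW: (M, N) Brownian increments.
        M, N = dW.shape
        Y = g(X[:, -1])                    # terminal condition Y_T = g(X_T)
        for i in reversed(range(N)):
            P = basis(X[:, i])             # (M, q+1) regressors at t_i
            # Z_{t_i}: regress Y_{t_{i+1}} dW_i / dt_i on the basis.
            az, *_ = np.linalg.lstsq(P, Y * dW[:, i] / dt, rcond=None)
            Z = P @ az
            # Y_{t_i}: regress Y_{t_{i+1}} + dt_i f(t_i, X_{t_i}, Y_{t_{i+1}}, Z_{t_i}).
            ay, *_ = np.linalg.lstsq(P, Y + dt * f(i * dt, X[:, i], Y, Z), rcond=None)
            Y = P @ ay
        return Y.mean()                    # Monte-Carlo estimate of Y_0

    # Toy check with X = W, f = 0, g(x) = x^2, so that Y_0 = E[W_T^2] = T.
    rng = np.random.default_rng(0)
    M, N, T = 4096, 20, 1.0
    dt = T / N
    dW = rng.normal(scale=np.sqrt(dt), size=(M, N))
    X = np.concatenate([np.zeros((M, 1)), np.cumsum(dW, axis=1)], axis=1)
    basis = lambda x: np.column_stack([np.ones_like(x), x, x ** 2])
    y0 = backward_regression(X, dW, dt, lambda t, x, y, z: 0.0 * y,
                             g=lambda x: x ** 2, basis=basis)
    # y0 should be close to T = 1.0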
Abstract
We use Monte-Carlo methods to solve decoupled forward-backward stochastic differential equations when the dimension of the state process is higher than three. We focus on the solutions of parabolic partial differential equations as well as the pricing of American-type options. Finally, we propose and prove a stopping criterion for the Picard iteration case.