ON SPECTRAL APPROXIMATIONS OF STOCHASTIC PARTIAL DIFFERENTIAL EQUATIONS DRIVEN BY POISSON NOISE

by

Haoyuan Liu

A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the Requirements for the Degree
DOCTOR OF PHILOSOPHY (APPLIED MATHEMATICS)

May 2007

Copyright 2007 Haoyuan Liu

Dedication

This dissertation is dedicated to my grandparents, who were among the very few young Chinese scholars who finished their college education during the Second World War in Northeastern China. As the only grandson of the third generation in my family, I was so proud to finish my PhD research, fulfilling my grandparents' long-lasting yet never-realized dream here on the other side of the giant Pacific Ocean.

It is also dedicated to my uncle, my mother's younger brother, a smart, warm-hearted field army soldier who passed away on duty near a railway station during one of the most terrible natural disasters of the 20th century, the Tangshan Earthquake of July 28, 1976, at the age of 22. His masterful knowledge of mathematics, physics, geography and philosophy, although it was only told to me by my mother years after his death, was still another significant reason directing me onto the path to become a great scientist. May their souls rest in peace!

Acknowledgments

I would like to deeply thank my advisor, Prof. Mikulevicius, for his essential suggestion that brought me into the study of stochastic numerical analysis, and for his patient and persevering kindness in guiding me through most of my difficulties in reading the literature and removing the obstacles. Thanks to the other members of my committee, Prof. Rozovskii, Prof. Lototsky, Prof. Baxendale, Prof. Ghanem and Prof. Zhang, who continuously paid attention to the progress of my research and introduced critical reference materials that helped me resolve multiple topics in my work.

Thanks to all my friends: Huapeng Shi, Yangyong Zhang, Yifei & Bill Tan, Lei & Tong Wang, Kenneth Chen, Changyong Zhang, Changlong Zhong etc., for creating a surrogate family atmosphere during the hardest time I had at USC. Moreover, I cannot forget the efficient help and care of the chiropractic physicians from Alhambra, CA, Dr. Matthew Lin and Dr. Mickey Luei, who supported me through the "darkest" time of my life, when I suffered an Achilles tendon rupture during the summer of 2005. Thanks to the doctors for the surgery, the therapies and their terrific work to keep me standing and walking.

Finally, to my dearest mom and dad, who brought me into this world and served as my earliest teachers in all subjects: nothing I have today could have been attained without your understanding and encouragement. You are the most magnificent theorem I have ever proved!

Table of Contents

Dedication
Acknowledgments
List of Figures
Abstract
Chapter 1 Introduction and Main Results
  1.1 Introduction to the Problem
  1.2 Main Results
    1.2.1 Existence of Filtering Density
    1.2.2 SPDE u Satisfies and Numerical Scheme
    1.2.3 Numerical Analysis
Chapter 2 Preliminaries
  2.1 Fundamentals of Poisson Point Process and Poisson Stochastic Integrals
    2.1.1 Poisson Point Process
    2.1.2 Construction of Poisson Stochastic Integrals
    2.1.3 Other Useful Formulas and Properties
  2.2 More about the Filtering Problem
  2.3 Deriving SPDE of the Filtering Density
Chapter 3 Construction of the Numerical Scheme for Solving UFD
  3.1 Preliminary Results for Poisson Chaos Expansion
    3.1.1 The $L^2$-norm and Orthogonality
    3.1.2 Expansion Formula: Type-I
    3.1.3 Details of the Basis Function and Complexity of Using $K_\alpha^t$ for Numerical Scheme
  3.2 Applicable Numerical Scheme Derivation
    3.2.1 Alternative Complete Orthonormal Basis
    3.2.2 Partial Differential Equations Regarding Coefficients
  3.3 More Details about the Recursive Scheme and the Basis Functions
    3.3.1 P.D.E. Examples at Lower Size Cases
    3.3.2 Charlier Polynomials
Chapter 4 Numerical Analysis and Error Reduction
  4.1 Derive Estimates by Restricting Size of $\alpha$ Only
    4.1.1 Preliminaries
    4.1.2 Brief Introduction to Semi-Group Theory
    4.1.3 Semi-Group Representation of Solutions
    4.1.4 Major Inequality
  4.2 Derive Estimates by Restricting $|\alpha|$ and Index($\alpha$)
    4.2.1 Main Theorem
    4.2.2 Introduction to Useful Notations for Time Variables
    4.2.3 Re-label Techniques for Cancelation
    4.2.4 Final Steps
  4.3 Combined Error Estimates for One-Step Approximation
  4.4 Reduce Error Growth Rate by Simulation on Multi-Time Steps
    4.4.1 Basic Set-ups and Assumptions
    4.4.2 Finite Approximation Strategy and Main Theorem
Chapter 5 Modification of the Scheme
  5.1 Abstract Forms
  5.2 Hermite Basis Functions
    5.2.1 Hermite Polynomials: Definition and Properties
    5.2.2 Orthogonality
    5.2.3 Application of Properties of Hermite Polynomials
  5.3 Haar-type Basis: 1-Dimension Example
  5.4 Implementation Procedure and Convergence Results of the Modification of the Scheme
    5.4.1 Validity of the Modified Scheme and Implementation Steps
    5.4.2 Convergence Result for Hermite Basis
    5.4.3 Convergence Result for Wavelet Basis
Chapter 6 Using 2-Dim Wavelet Basis on the Modified Scheme
  6.1 Set-up the Basis Functions
  6.2 Derivation of Convergence Result Based on 2-D Wavelet Basis
Chapter 7 Numerical Experiments
  7.1 Set-up of the Exact Solution u
  7.2 Charlier Polynomials for Simulation Example
  7.3 Derive Formulas for the Chaos Expansion Coefficients
  7.4 Layouts of Simulation Results
    7.4.1 Only One Jump Happens
    7.4.2 Two Jumps Happen
    7.4.3 Other Trials
Chapter 8 Future Research Direction
  8.1 Filtering Problems with Combined Observation
  8.2 Computational Stochastic Fluid Mechanics
Reference List

List of Figures

6.1 2-d Space with Disc and Belts with Mass 1
6.2 "Cutting" Mass-1 Regions to "Half-by-Half"
6.3 "1,2,3" Gets to "1,2,...,6"
6.4 "Midline" for Each Closed Zone
6.5 Angle 360° Partition
6.6 "0~180°" Example
6.7 "0~90°" Example
7.1 Exact vs. Simulated: One-Step Method, One Jump
7.2 Growth of Error throughout t ∈ [0, 0.1], 1st Example
7.3 Comparisons at Different Fixed Time Moments, 1st Example
7.4 Exact vs. Simulated: One-Step Method, Two Jumps
7.5 Growth of Error throughout t ∈ [0, 0.1], 2nd Example
7.6 Comparisons at Different Fixed Time Moments, 2nd Example
7.7 Continued from Previous Figure

Abstract

In this dissertation, we recall some basic settings and definitions of nonlinear filtering theory and, based on the important optimal filtering estimator provided by Kallianpur [15], we discuss the properties and the equations that the unnormalized filtering density (UFD) should satisfy. Lototsky, Mikulevicius and Rozovskii [24] have done a great job of decomposing the UFD based on the Wiener Chaos Expansion theory and applying the theory numerically. We derive a similar recursive numerical scheme with respect to the $L^2$-CONS basis functions on the probability space generated by the Poisson point measure, and we derive the growth rate of the error caused by the algorithm. Moreover, some alternative ways to reduce the error and make the algorithm more efficient are also discussed. Finally, based on numerical experiments from computer simulations, we verify the validity and feasibility of our algorithm.

Key words: nonlinear filtering, Chaos Expansion, Poisson Point Process, stochastic numerical analysis

Chapter 1 Introduction and Main Results

1.1 Introduction to the Problem

The filtering problem, commonly understood as the following "black-box" procedure, can be illustrated by a picture of a box taking an input X(t) to an output Y(t). We typically call the "input," a diffusion process X(t), the signal process, while the output Y(t) is the observation process that people can actually touch.

In the practical world there is always something we can be sure of, such as an operator, a processor, a control system and so on, which can be categorized as a certain form of analytical function acting on X(t); we denote it by h(X(t)). This effect is taken within the "black box." If we were living in a "perfect" world, the story would be as simple as

$$ Y(t) = h(X(t)) \tag{1.1} $$

for all t ≥ 0.

BUT we called the box BLACK, which means that inside it something we cannot control also happens.
The thing we usually call "noise" may appear during the in-box process and make the output Y(t) "shifted" somehow from the correct path h(X(t)). In this situation, we write the process in the following differential form:

$$ \dot Y(t) = \frac{d}{dt}\,h(X(t)) + \dot w(t) \tag{1.2} $$

Be advised that the term w(t) is what we call the noise that prevents Y(t) from being exactly the same as h(X(t)). It stands for any general form of noise, not just the Wiener process (or simply Brownian motion) that we are traditionally used to.

In general, an analytical formula for X(t) is very hard to obtain. In some cases the process X(t) itself involves some stochastic information, which makes the problem even more complicated. What people have been trying to find is, based on the information about Y(t) we have on hand, some probabilistic knowledge of X(t). For example, the most important solution, the so-called "filter" arising from the study of (1.2), is due to Kallianpur et al. [15] (it is also one of the major conclusions of conditional probability theory): the best optimal filter for any given analytical function f is

$$ \hat f(X(t)) = E\big[f(X(t)) \,\big|\, \mathcal F^Y_s,\ 0\le s\le t\big] \tag{1.3} $$

where we call $\mathcal F^Y_s$ the filtration at the moment s. Indeed, the function f can be any operator on X(t). For example, take an indicator, $f(X(t)) = 1_{X(t)\in\Gamma}$ for some suitable choice of set Γ from the probability space. Plugging this in, we get the estimated probability

$$ P\big(X(t)\in\Gamma \,\big|\, \mathcal F^Y_s,\ 0\le s\le t\big). $$

Unlike traditional probabilistic problems (where we know the whole story of X(t) and try to draw conclusions about f(X(t))), we can go further and derive many other useful estimates WITHOUT knowing the analytical formula of X(t).

Many scholars have made great contributions to the study of filtering theory. Besides [15], we also have Krylov and Rozovskii [17], Mikulevicius and Rozovskii [29], Kunita [18] and Zakai [35], among others. We would like to mention that those researchers provided not only theoretical filtering schemes but also feasible and proven-efficient numerical procedures that are nowadays widely applied in fields even beyond mathematics, such as signal processing in electrical engineering, image processing in forensic detective cases, and even the decoding procedures used worldwide in wireless communications.

In this dissertation, what we care about is solving for the density function associated with (1.2): a function u = u(t,x) such that the optimal filter can be written as

$$ \hat f(X(t)) = \frac{\int f(x)\,u(t,x)\,dx}{\int u(t,x)\,dx} $$

for any given moment t and some arbitrary function f. Theoretical derivations (first carried out by Zakai [35]) show that, under some smoothness assumptions, the filtering density u satisfies the so-called Zakai stochastic partial differential equation, in which the behavior of Y(t) is obviously involved. Since our later examples are related to Poisson noise, we will briefly discuss how the Poisson-type Zakai equation is obtained.

We would also like to thank other researchers, such as Baras [1], Bensoussan, Glowinski and Rascanu [2], Elliott and Glowinski [6], and Florchinger and Le Gland [8], who have suggested many applicable methods for solving the Zakai stochastic partial differential equation both analytically and numerically. Their contributions have widely broadened our view on getting into the essential part of the world of filtering.

Although derived as early as the 1940s, Cameron and Martin's work [3] was not given enough attention until the last ten-plus years.
In their paper, Cameron and Martin proved that any $L^2$-integrable random function can be decomposed with respect to a set of complete orthonormal basis functions of the $L^2$ probability space. This method, often called the Chaos Expansion theory, has been applied by Lototsky, Mikulevicius and Rozovskii in [24] and by Hou, Kim, Rozovskii and Zhou in [13] to solve not only the Zakai equation driven by Wiener noise itself but also certain types of equations from fluid dynamics (i.e. the Navier–Stokes equations), obtaining numerical results and a theoretical analysis of the error caused by the algorithm.

The most dominant advantage of the Cameron–Martin Chaos Expansion theory is that it separates observations and parameters. In other words, a Chaos Expansion

$$ F = \sum_\alpha c_\alpha\, e_\alpha \tag{1.4} $$

can be split into solving a set of coefficients $\{c_\alpha\}_\alpha$ that are purely analytic and deterministic (i.e. nothing random involved), and a set of basis functions $\{e_\alpha\}_\alpha$, which is the only part that involves the stochastic noise.

Unfortunately, in practical experiments (which we are also going to perform at the end of this work), the Chaos Expansion formula yields lower accuracy due to the fast growth rate of the error, which is proportional to the exponential of the length of the time interval we focus on. Thus, the main goals of this dissertation are:

1. Derive the fundamental equation (i.e. the Zakai equation) that the filtering density satisfies;
2. Study the properties of the CONS basis functions of the $L^2$ probability space and present the Chaos Expansion framework;
3. Obtain the recursive-type numerical procedure to solve for the coefficients in the Chaos Expansion;
4. Under certain restrictions, prove the convergence rate of the numerical method just derived;
5. Give applicable alternatives for the numerical scheme that can reduce the growth rate of the error, and present some other forms of algorithm adjustments;
6. Run numerical experiments on easy-to-implement examples to check the validity and efficiency of the numerical scheme.

1.2 Main Results

Before obtaining any particular conclusions, we briefly list the results here (both those cited from others and those derived in our later work). First, since our intention is to solve the filtering problem, we have to show the existence of the filtering density and how it can be solved numerically.

1.2.1 Existence of Filtering Density

Let $(\Omega, \mathcal F, N)$ be a probability space and $N(\cdot,\cdot)$ the Poisson point process defined on that space. The filtering problem we want to discuss has the following form: for $\theta_t \in \mathbb R^d$, a Markov diffusion process which is also the signal process,

$$ \theta_t = \theta_0 + \int_0^t a(s,\theta_s)\,ds + \int_0^t \sigma(s,\theta_s)\,dW_s + \int_0^t\!\!\int_{\mathbb R^d} x\,\tilde M(ds,dx) + \int_0^t\!\!\int_{\mathbb R^d} y\,\tilde N(ds,dy) \tag{1.5} $$

for $0\le t\le T$. The process

$$ Y_t = \int_0^t\!\!\int_{\mathbb R^d} y\,\tilde N(ds,dy) $$

is the observation process, while the noise is

$$ X_t = \int_0^t a(s,\theta_s)\,ds + \int_0^t \sigma(s,\theta_s)\,dW_s + \int_0^t\!\!\int_{\mathbb R^d} x\,\tilde M(ds,dx). $$

Here W is the standard Wiener process, and $\tilde M$, $\tilde N$ denote compensated (or centered) Poisson point processes (more details and properties of Poisson point processes will be given later).

Before talking about the unnormalized filtering density, let us first make some assumptions that will be used throughout this dissertation:

• (A1) All the arbitrarily given deterministic functions, such as f, g, h etc., are infinitely differentiable and bounded together with all their derivatives. Other "stronger" properties may be added as we move on.
• (A2) The point processes N and M are independent and have no common jumps. The Wiener process W is independent of N and M, too.
• (A3) The initial value of θ, namely $\theta_0$, is independent of N and M and has a density function $p_0(x) \in H^n$, i.e. the Sobolev space $W_2^n(\mathbb R^d)$, for all n.

Moreover, the compensators of N and M are provided here:

$$ \tilde M(dt,dx) = M(dt,dx) - l(t,\theta_t,x)\,\frac{dx}{|x|^{d+\alpha}} \tag{1.6} $$
$$ \tilde N(dt,dx) = N(dt,dx) - k(t,x)\,\frac{dx}{|x|^{d+\alpha}} \tag{1.7} $$

Grigelionis and Mikulevicius [10], [27] proved the following result:

Theorem 1.1. Assume (A1)–(A3) hold. Then the probability measure $\tilde P$, derived from

$$ d\tilde P = \rho_T\,dP $$

where $\rho_t$ is the solution of the stochastic differential equation

$$ d\rho_t = -\rho_{t^-} \int_{\mathbb R^d} \frac{l(t^-,\theta_{t^-},y) - 1}{l(t^-,\theta_{t^-},y)}\,\tilde M(dt,dy), $$

guarantees that M and N are independent. Furthermore, the optimal filter satisfies

$$ \hat f(\theta_t) = E\big[f(\theta_t)\,\big|\,\mathcal F^Y_s, 0\le s\le t\big] = \frac{\tilde E\big[\rho_t^{-1} f(\theta_t)\,\big|\,\mathcal F^Y_s, 0\le s\le t\big]}{\tilde E\big[\rho_t^{-1}\,\big|\,\mathcal F^Y_s, 0\le s\le t\big]} \tag{1.8} $$

where $\tilde E$ is the expectation taken with respect to the probability measure $\tilde P$.

What can also be seen from this theorem is that the existence of the unnormalized filtering density is granted, as follows:

$$ \Phi_t(dx) = \tilde E\big[1_{[\theta_t\in dx]}\,\rho_t^{-1}\,\big|\,\mathcal F^Y_s, 0\le s\le t\big] \tag{1.9} $$
$$ u(t,x) = \frac{\Phi_t(dx)}{dx} \tag{1.10} $$

Moreover, for any n we have

$$ \tilde E\Big[\sup_{0\le t\le T}\|u(t,\cdot)\|^2_{H^n}\Big] < \infty \tag{1.11} $$
$$ \tilde E\Big[\sup_{0\le t\le T}\|u(t,\cdot)\|^2_{C_b^n}\Big] < \infty \tag{1.12} $$

where the space $C_b^n$ contains n times differentiable functions bounded together with all derivatives.

1.2.2 SPDE u Satisfies and Numerical Scheme

In Chapter 2, some fundamentals of the Poisson point process and the corresponding stochastic integrals are introduced, and a deeper discussion of the filtering problem is presented once we are familiar with Poisson noise. Beyond that, we use the Kallianpur–Striebel definition of the optimal filter to derive the stochastic partial differential equation that u should satisfy; that is, the following Poisson-type Zakai equation:

$$ du(t,x) = \mathcal L^* u(t,x)\,dt + \int_{\mathbb R^d} \big(u(t^-,x+z) - u(t^-,x)\big)\,\tilde N(dt,dz) \tag{1.13} $$

For the moment, it is a rough but easy way to TEMPORARILY understand the operator $\mathcal L^*$ as a "second-order differential" operator; of course, details about $\mathcal L^*$ will be given.

Following that, we need to become familiar with the concept of a multi-index, mostly denoted by α:

$$ J = \Big\{\alpha = (\alpha_1,\alpha_2,\ldots),\ \sum_k \alpha_k < \infty,\ \alpha_k \ge 0\Big\} $$

The following quantities are commonly used:

• $|\alpha| = \sum_k \alpha_k$;
• $\mathrm{Index}(\alpha) = \max\{k\ge 1,\ \alpha_k \ne 0\}$;
• $\alpha! := \prod_k \alpha_k!$.

In Chapter 3, based on the Cameron–Martin Chaos Expansion theory, we find a certain set of CONS basis functions $C_\alpha^t$, $\forall \alpha \in J$, on $L^2(\Omega, \mathcal F_t^N, N)$ such that the solution to the Zakai equation, i.e. the filtering density u we are looking for, can be decomposed as

$$ u(t,x) = \sum_{\alpha\in J} \frac{1}{\sqrt{\alpha!}}\,\varphi_\alpha(t,x)\,C_\alpha^t \tag{1.14} $$

in such a way that the coefficients $\varphi_\alpha(t,x)$ are completely deterministic, with no stochastic information involved. Naturally, the numerical scheme to solve for the coefficients will be constructed. We will show that the $\varphi_\alpha$'s satisfy the following system of partial differential equations:

$$ \frac{\partial \varphi_\alpha(t,x)}{\partial t} = \mathcal L^* \varphi_\alpha(t,x) + \int_{\mathbb R^d} \sum_{l,\,\alpha_l\ge 1} \big[\alpha_l\, \varphi_{\alpha(l)}(t,x+y) - \alpha_l\, \varphi_{\alpha(l)}(t,x)\big]\,\delta_l(t,y)\,\nu(dy) $$
$$ \varphi_\alpha(0,x) = p_0(x)\,1_{\alpha=0} $$

where the notation α(l) refers to the multi-index with norm |α| − 1 obtained as follows: for a given l such that $\alpha_l \ge 1$, the k-th component of α(l) is

$$ \alpha_k(l) = \begin{cases} \alpha_k & k \ne l \\ \max\{0,\ \alpha_k - 1\} & k = l \end{cases} $$

Details about the CONS basis functions and some simple examples of the recursive partial differential equations will be given, too.
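Since this multi-index bookkeeping ($|\alpha|$, Index(α), α! and the lowered index α(l)) recurs throughout the scheme, the following is a minimal Python sketch of these operations. The sparse-dictionary representation and the helper names are our own illustrative choices, not part of the dissertation's scheme.

```python
from math import factorial

# A multi-index alpha with finitely many nonzero entries, stored sparsely
# as {k: alpha_k} for those k with alpha_k >= 1 (illustrative representation).

def size(alpha):                       # |alpha| = sum_k alpha_k
    return sum(alpha.values())

def index(alpha):                      # Index(alpha) = max{k : alpha_k != 0}
    return max(alpha) if alpha else 0

def mi_factorial(alpha):               # alpha! = prod_k alpha_k!
    out = 1
    for a_k in alpha.values():
        out *= factorial(a_k)
    return out

def lower(alpha, l):                   # alpha(l): decrease the l-th entry by one
    assert alpha.get(l, 0) >= 1
    out = dict(alpha)
    out[l] -= 1
    if out[l] == 0:
        del out[l]
    return out

alpha = {2: 2, 4: 3}                   # alpha = (0, 2, 0, 3, 0, ...)
print(size(alpha), index(alpha), mi_factorial(alpha))   # 5 4 12
print(lower(alpha, 4))                 # {2: 2, 4: 2}
```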
1.2.3 Numerical Analysis

In Chapters 4–6, the numerical analysis of the scheme just constructed is provided. Based on our definitions of the several "parameters" of a given multi-index α, we provide the error estimate between the exact solution u = u(t,x) and the approximated solution. This can be summarized in the following categories:

1. Restrict the "size" (i.e. |·|) of α. We take

$$ u_N(t,x) = \sum_{\alpha\in J,\ |\alpha|\le N} \frac{1}{\sqrt{\alpha!}}\,\varphi_\alpha(t,x)\,C_\alpha^t $$

The first major theorem for the error estimate gives us

$$ E\big[\|u(t,\cdot) - u_N(t,\cdot)\|^2_{L_2(\mathbb R^d)}\big] \le e^{(C+M)t}\,\frac{(Mt)^{N+1}}{(N+1)!}\,\|p_0\|^2_{H^1(\mathbb R^d)} \tag{1.15} $$

with some constants C, M > 0.

2. Restrict both the size and the "Index" of α. We take

$$ u_N^n(t,x) = \sum_{\alpha\in J,\ |\alpha|\le N,\ \mathrm{Index}(\alpha)\le n} \frac{1}{\sqrt{\alpha!}}\,\varphi_\alpha(t,x)\,C_\alpha^t $$

The second major theorem for the error estimate gives

$$ E\big[\|u_N^n(t,\cdot) - u(t,\cdot)\|^2_{L_2(\mathbb R^d)}\big] \le B e^{Bt}\Big(\frac{(Mt)^{N+1}}{(N+1)!} + \frac{t^3}{n}\Big)\,\|p_0\|^2_{H^1(\mathbb R^d)} \tag{1.16} $$

for some constants B, M > 0.

3. The order of the error growth rate does not look good in the previous two theorems, so we construct an alternative scheme in which the whole time line is partitioned into equidistant intervals and the numerical method is applied from $t_{i-1}$ to $t_i$ repeatedly. Assuming Δ is the length of each subinterval, we have the following error estimate:

$$ \max_{1\le i\le K} E\big[\|u_N^n(t_i,\cdot) - u(t_i,\cdot)\|^2_{L_2(\mathbb R^d)}\big] \le \bar C e^{\bar C T}\Big(\frac{(M\Delta)^{N}}{(N+1)!} + \frac{\Delta^2}{n}\Big)\,\|p_0\|^2_{H^1(\mathbb R^d)} \tag{1.17} $$

so that we truly can reduce the growth rate of the error.

4. The details of the multi-time-step method above show that when we apply the Chaos Expansion scheme from interval to interval, some stochastic information becomes involved in solving the expansion coefficients, which is not what we want, given our original intention in constructing such a scheme. A modification of the scheme will therefore be introduced: we decompose the solution $u(t_{i-1},x)$ with respect to a certain choice of CONS in the spatial variable x. By doing this, the coefficients we solve for uniquely satisfy the same set of PDEs presented above, with initial values at the CONS basis functions of x. The efficiency of the algorithm can thus be improved.

Different trials of CONS basis functions of x will be introduced, and the corresponding error estimates (compared with $u_N^n(t_i,\cdot)$) are derived:

• CONS generated from Hermite polynomials, with error estimate

$$ \max_{1\le i\le K} \sqrt{E\big[\|u_N^n(t_i,\cdot) - u_N^{n,\Theta}(t_i,\cdot)\|^2_{L_2(\mathbb R^d)}\big]} \le \frac{K\, C^\gamma\,(e^{C\gamma T}-1)}{T\,\Theta^{\gamma - 1/2}}\,\|p_0\|_{H^{4\gamma}(4\gamma)} \tag{1.18} $$

with Δ = T/K for some integer K > 0. Here γ is any given positive integer and $\|\cdot\|_{H^{4\gamma}(4\gamma)}$ refers to the "weighted Sobolev norm."

• CONS of wavelet basis functions in the 1-dimensional case, with error estimate

$$ \max_{1\le i\le K} \sqrt{E\big[\|u_N^n(t_i,\cdot) - u_N^{n,\Theta}(t_i,\cdot)\|^2_{L_2(\mathbb R)}\big]} \le \frac{K\,\tilde C\, e^{\tilde C T}}{T\,2^{\Theta}}\,\|p_0\|_{H^1(\mathbb R)} \tag{1.19} $$

• CONS of wavelet basis functions in the 2-dimensional case, with error estimate

$$ \max_{1\le i\le K} \sqrt{E\big[\|u_N^n(t_i,\cdot) - u_N^{n,\Theta}(t_i,\cdot)\|^2_{L_2(\mathbb R^2)}\big]} \le \frac{K\,\tilde C\, e^{\tilde C T}}{T\,2^{\Theta}}\,\|p_0\|_{H^1(\mathbb R^2)} \tag{1.20} $$

In those approximations, $u_N^{n,\Theta}$ means that we pick the chosen type of CONS in x but use only a finite set of those basis functions, where Θ > 0 is an integer.

Chapter 2 Preliminaries

2.1 Fundamentals of Poisson Point Process and Poisson Stochastic Integrals

2.1.1 Poisson Point Process

We have dealt with the standard Poisson random variable in many problems of probability. For a random variable Z that takes only non-negative integer values, we have

$$ P(Z = k) = e^{-\lambda}\,\frac{\lambda^k}{k!} \tag{2.1} $$

where λ is some positive parameter. Be advised that the variable Z takes values 0, 1, 2, ...; that is, it is not "continuous" according to our intuitive understanding.
Indeed, there is a "size-1" jump between each pair of adjacent values that Z can take.

The concept of the Poisson jump process (see a comprehensive introduction and mathematical-statistical inference in [16]) can be illuminated in a more inspiring way if we first recall the traditional Poisson process from the study of stochastic processes. Define Q(t), for example, to be the number of vehicles passing through a given spot on the freeway, or the number of customers entering a store, or anything similar to these cases. It is easily checked that

$$ P(Q(t) = n) = e^{-\lambda t}\,\frac{(\lambda t)^n}{n!} \tag{2.2} $$

which gives the probability of having n vehicles pass the spot (or n customers enter the store) during the time period [0,t], for all n ≥ 0.

The relationship between the Poisson and exponential distributions helps a lot in finding more interesting stories in this definition. A common and natural question is: "when will the first item come into our visual range?" It is typically computed as follows. Let $T_1$ be the arrival time of the first vehicle; then

$$ P(T_1 > t) = P(Q(t) = 0) = e^{-\lambda t} $$

Thus $P(T_1 \le t) = 1 - e^{-\lambda t}$. By a similar procedure we can get the distribution functions of the later arrival moments $T_2, T_3, \ldots$ (indeed it can be shown that they satisfy certain Γ-distributions).

Our intention in showing this example is not only to recall the basic idea relating the Poisson and exponential random variables. The "story" of $T_1$ (meaning its probability density function), for example, gives us a way to ask the question "when will the first jump happen?", that is, at what moment Q(t) changes from 0 to 1.

The Poisson jump process, denoted N(dt,dx) on the Cartesian space $[0,T]\times\mathbb R^d$, answers questions like the one mentioned above. Given a set $A\subseteq [0,T]\times\mathbb R^d$, with the same understanding as before, the measure N(·,·) attempts to "count" the number of "items" we are interested in over the set A. We define

$$ P(N(A) = k) = e^{-m(A)}\,\frac{m^k(A)}{k!} \tag{2.3} $$

where m(·) is a predefined measure on $[0,T]\times\mathbb R^d$. From this definition we can see that the requirement on such a set A is that it be m-measurable.

Moreover, throughout this paper we assume the measure m separates as

$$ m(dt,dx) = \nu(dx)\,dt $$

That is, we take Lebesgue measure for the time variable of the Cartesian space, while ν(·) is some specific measure for the spatial variable, about which we will say more later.

It is straightforward to see that N(A) is in fact a random variable taking non-negative integer values. We easily derive

$$ E[N(A)] = m(A) \tag{2.4} $$
$$ \mathrm{Var}[N(A)] = m(A) \tag{2.5} $$

Furthermore, the process

$$ \tilde N(dt,dx) = N(dt,dx) - m(dt,dx) = N(dt,dx) - \nu(dx)\,dt $$

is called the compensated or centered Poisson process, which certainly has the same variance and mean 0.
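To make the Poisson–exponential connection above concrete, the following is a minimal Monte Carlo sketch (purely our own illustration; the rate λ and the horizon are arbitrary choices, not values from the filtering model) checking that counts built from Exp(λ) inter-arrival times follow (2.2), and that $P(T_1 > t) = e^{-\lambda t}$.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, t, n_paths = 2.0, 1.0, 100_000

# build arrival times on [0, t] from i.i.d. Exp(lam) inter-arrival gaps
gaps = rng.exponential(1.0 / lam, size=(n_paths, 20))   # 20 gaps is ample here
arrivals = np.cumsum(gaps, axis=1)
counts = (arrivals <= t).sum(axis=1)                    # Q(t) for each path

# Q(t) should be Poisson(lam * t): compare P(Q(t) = 0) with exp(-lam * t)
print((counts == 0).mean(), np.exp(-lam * t))           # both ~ 0.135

# first arrival time: P(T1 > t) = exp(-lam * t)
T1 = arrivals[:, 0]
print((T1 > t).mean())                                  # also ~ 0.135
```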
2.1.2 Construction of Poisson Stochastic Integrals

Let us start defining the Poisson stochastic integral that we will use throughout this paper. Similar to what we have in the Wiener case (see e.g. [31]), one can define the following class of so-called "elementary" functions:

• f = f(t,x,ω) is a function $f(t,x,\omega): [0,T]\times\mathbb R^d\times\Omega \mapsto \mathbb R$;
• the mapping $(t,x,\omega)\mapsto f(t,x,\omega)$ is $\mathcal B\times\mathcal F$-measurable, where $\mathcal B$ is the Borel σ-algebra generated by the measurable sets of $[0,T]\times\mathbb R^d$ with respect to the measure m;
• f(t,x,ω) is $\mathcal F$-adapted;
• the property that leads to the analogue of the "Itô isometry" for the Wiener integral:

$$ \int_s^t\!\!\int_\Lambda f(t,x,\omega)^2\,m(dt,dx) < \infty $$

for all $s,t\in\mathbb R$ and a suitable choice of set Λ.

Then we have the following definition:

Definition 2.1. The Poisson stochastic integral of f over [s,t] × Λ is given by

$$ \int_s^t\!\!\int_\Lambda f(t,x,\omega)\,N(dt,dx) = \lim_{n\to\infty} \int_s^t\!\!\int_\Lambda f_n(t,x,\omega)\,N(dt,dx) \tag{2.6} $$

where $\{f_n\}$ is a sequence of elementary functions such that

$$ \int_s^t\!\!\int_\Lambda |f(t,x,\omega) - f_n(t,x,\omega)|^2\,m(dt,dx) \to 0 \tag{2.7} $$

for each fixed ω ∈ Ω as n → ∞. The validity of the limit is also in the $L^2$ sense.

Intuitively, the Poisson stochastic integral can still be understood through the traditional construction of the Riemann–Stieltjes integral. As a simple example, let the spatial variable move in the 1-dimensional space $\mathbb R$ over a finite set, say [0,L], while t ∈ [0,T]. One can still apply the "partitioning method" we have always used:

$$ 0 \le x_1 \le \cdots \le x_i \le x_{i+1} \le \cdots \le L, \qquad 0 \le t_1 \le \cdots \le t_j \le t_{j+1} \le \cdots \le T $$

These two procedures generate a grid system on [0,T] × [0,L] (the original shows a sketch of one grid cell $[t_j,t_{j+1}]\times[x_i,x_{i+1}]$ with sample points). The integral is naturally constructed as

$$ \int_0^T\!\!\int_0^L f(t,x)\,N(dt,dx) = \lim_{\Delta_T,\Delta_X\to 0} \sum_{i,j} f(t_j^*, x_i^*)\,N(A_{ij}) \tag{2.8} $$

as one runs over all cells of the grid. The points $t_j^*$, $x_i^*$ are chosen in $[t_j,t_{j+1}]$, $[x_i,x_{i+1}]$, respectively, and the set $A_{ij}$ is just $\{(t,x)\in [t_j,t_{j+1}]\times[x_i,x_{i+1}]\}$, or simply $\Delta A_{ij}$.

The Poisson stochastic integral illuminates how the "jump" property of the given Poisson point process makes a difference. Naturally, much as in the Wiener case, one often considers the function X(t) satisfying the following simplified form of Poisson integral equation:

$$ dX(t) = a(t)\,dt + b(t^-)\,N(dt), \qquad X(0) = x_0 $$

where we have taken the Poisson point measure to depend only on time; we claim that the measure N(·) here is therefore quite similar to the traditional Poisson process. It is easy to see that

$$ X(t) = x_0 + \int_0^t a(s)\,ds + \int_0^t b(s^-)\,N(ds) \tag{2.9} $$

This solution formula does not reveal much unless we examine the increment at a jump moment:

$$ X(\tau) - X(\tau^-) = \int_{\tau^-}^{\tau} a(s)\,ds + \int_{\tau^-}^{\tau} b(s^-)\,N(ds) \tag{2.10} $$

assuming τ is a moment where a jump happens (e.g. the moment a new customer enters the store or, in more complicated cases, where the change may not necessarily be 1). The theory of the traditional Riemann–Stieltjes integral guarantees that

$$ \int_{\tau^-}^{\tau} a(s)\,ds \to 0 $$

But we have to treat the term $\int_{\tau^-}^{\tau} b(s^-)\,N(ds)$ as

$$ \int_0^{\tau} b(s^-)\,N(ds) - \int_0^{\tau^-} b(s^-)\,N(ds) $$

Using the definition of the stochastic integral shown earlier, one realizes that the numbers of customers entering the store over the periods [0,τ⁻] and [0,τ] are NOT the same. In more detail, for the partition we are taking, the numbers of points in $[t_k,\tau^-]$ and $[t_k,\tau]$ differ for some $t_k$ very close to τ. Let

$$ n = N([t_k,\tau]) - N([t_k,\tau^-]) $$

as $t_k \to \tau^-$. Then the stochastic integral turns out to be

$$ \int_{\tau^-}^{\tau} b(s^-)\,N(ds) = n\,b(\tau^-) \tag{2.11} $$

It is very important for us to use $s^-$ in integrals where the Poisson point measure N(·) is involved. As we see in this short, easy example, the behavior of the function b on the left side of τ is the main thing we are concerned with. In other words, we are trying to capture the "instantaneous change" of the function b(t) as t runs from τ⁻ immediately to τ.

We provide these simple examples to illuminate how the Poisson stochastic integral differs from the Wiener integral we are familiar with. Moreover, the fundamental properties here will also be used as we move on with the study of numerical approximations of Poisson stochastic partial differential equations.
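As a small illustration of (2.9)–(2.11), here is a sketch that simulates the time-only jump equation dX(t) = a(t)dt + b(t⁻)N(dt) along one path. The particular a, b and the rate λ are hypothetical choices of ours, not functions from the text; since b here is continuous, b(τ⁻) = b(τ) at every jump moment.

```python
import numpy as np

rng = np.random.default_rng(1)
T, lam, x0 = 1.0, 3.0, 0.0
a = lambda t: np.cos(t)            # drift coefficient (hypothetical)
b = lambda t: 1.0 + 0.5 * t        # jump size b(t-) at a jump moment (hypothetical)

# jump moments of a rate-lam Poisson process via exponential inter-arrivals
taus, t = [], rng.exponential(1.0 / lam)
while t < T:
    taus.append(t)
    t += rng.exponential(1.0 / lam)

# X(t) = x0 + int_0^t a(s) ds + sum_{tau_i <= t} b(tau_i^-): each jump moment
# tau contributes the left-limit value b(tau^-), exactly as in (2.11) with n = 1
def X(t):
    return x0 + np.sin(t) + sum(b(tau) for tau in taus if tau <= t)

print(len(taus), X(T))
```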
2.1.3 Other Useful Formulas and Properties

We also have the following useful properties of the Poisson point process and the corresponding stochastic integral:

• For a ν-measurable set $\Lambda \subseteq \mathbb R^d$:

$$ E\Big[\int_0^t\!\!\int_\Lambda N(ds,dx)\Big] = \int_0^t\!\!\int_\Lambda \nu(dx)\,ds = t\,\nu(\Lambda) \tag{2.12} $$

• Assume the function f(t,x) satisfies

$$ E\Big[\int_0^t\!\!\int_\Lambda f(s,x)^2\,m(ds,dx)\Big] = \int_0^t\!\!\int_\Lambda f(s,x)^2\,\nu(dx)\,ds < \infty $$

that is, f is $L^2$-m-integrable. Then we have

$$ E\Big[\int_0^t\!\!\int_\Lambda f(s,x)\,N(ds,dx)\Big] = \int_0^t\!\!\int_\Lambda f(s,x)\,\nu(dx)\,ds \tag{2.13} $$
$$ E\Big[\int_0^t\!\!\int_\Lambda f(s,x)\,\tilde N(ds,dx)\Big] = 0 \tag{2.14} $$

• The Itô–Poisson isometry:

$$ E\Big[\Big(\int_0^t\!\!\int_\Lambda f(s,x)\,\tilde N(ds,dx)\Big)^2\Big] = \int_0^t\!\!\int_\Lambda f(s,x)^2\,\nu(dx)\,ds \tag{2.15} $$

As in the Wiener case, another commonly used tool is Itô's formula (or Itô's rule). In the Poisson case we have the following theorem:

Theorem 2.2. For 0 ≤ t ≤ T, assume the process $X(t)\in\mathbb R^d$ satisfies

$$ dX(t) = \int_{\mathbb R^d} \theta(t^-,z)\,\tilde N(dt,dz) + \alpha(t)\,dt \tag{2.16} $$

with X(0) = x₀, where we require

$$ \int_0^T\!\!\int_{\mathbb R^d} |\theta(t,z)|^2\,\nu(dz)\,dt < \infty $$

Then, for the process Y(t) = f(X(t)) with some function $f\in C^2(\mathbb R^d)$, we have

$$ \begin{aligned} dY(t) ={}& \nabla f(X(t))\,\alpha(t)\,dt \\ &+ \int_{\mathbb R^d}\big[f(X(t^-)+\theta(t,z)) - f(X(t^-)) - \nabla f(X(t^-))\cdot\theta(t,z)\big]\,\nu(dz)\,dt \\ &+ \int_{\mathbb R^d}\big[f(X(t^-)+\theta(t,z)) - f(X(t^-))\big]\,\tilde N(dt,dz) \end{aligned} $$

A proof of this theorem can be found in Øksendal's [12], [5], etc.

Note: it makes sense for us to recall the Wiener-type Itô formula: the term

$$ \int_{\mathbb R^d}\big[f(X(t^-)+\theta(t,z)) - f(X(t^-))\big]\,\tilde N(dt,dz) $$

corresponds to

$$ \int_{\mathbb R^d} \nabla f(X(t^-))\cdot\theta(t,z)\,\tilde N(dt,dz) $$

while the term

$$ \int_{\mathbb R^d}\big[f(X(t^-)+\theta(t,z)) - f(X(t^-)) - \nabla f(X(t^-))\cdot\theta(t,z)\big]\,\nu(dz)\,dt $$

is comparable with something like

$$ \int_{\mathbb R^d} \tfrac12\, \nabla^2 f(X(t))\,\theta^2(t,z)\,\big(\tilde N(dt,dz)\big)^2 $$

But the "jump" property of the Poisson point process obliges us to write things as in the theorem. The interesting difference is that although the Wiener process (i.e. Brownian motion) is nowhere differentiable, it is still "continuous" everywhere, while in the Poisson case we do not have that.
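A quick Monte Carlo sanity check of the Itô–Poisson isometry (2.15) can be run under an assumed toy intensity m(ds,dz) = λ ds dz on [0,T] × [0,1] with f(s,z) = s·z; all of these choices are ours, for illustration only. Conditional on the number of points, the points of such a measure are i.i.d. uniform on the box, which the sketch exploits.

```python
import numpy as np

rng = np.random.default_rng(2)
T, lam, n_paths = 1.0, 5.0, 50_000

# exact values for f(s, z) = s * z on [0, T] x [0, 1] with intensity lam ds dz:
compensator = lam * (T**2 / 2) * 0.5        # int f dm  = lam * T^2/2 * 1/2
rhs = lam * (T**3 / 3) / 3.0                # int f^2 dm = lam * T^3/3 * 1/3

vals = np.empty(n_paths)
for i in range(n_paths):
    n = rng.poisson(lam * T)                # number of points of N on the box
    s = rng.uniform(0.0, T, size=n)         # time coordinates of the points
    z = rng.uniform(0.0, 1.0, size=n)       # spatial marks of the points
    vals[i] = (s * z).sum() - compensator   # int f dN - int f dm = int f dN~

print(vals.mean())                          # ~ 0, as in (2.14)
print(vals.var(), rhs)                      # ~ equal, as in (2.15): 5/9 = 0.556
```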
2.2 More about the Filtering Problem

Nonlinear filtering is a classic problem of applied stochastic analysis. Many scholars have contributed to this field of study, such as Kallianpur [15], Kunita [18] and [19], Kushner [20], Liptser and Shiryayev [21], and Krylov and Rozovskii [17]. It is of notable theoretical importance by itself and also as a part of control theory for partially observed stochastic systems.

Although introduced in the introduction, let us again consider the following filtering scheme, in which the signal process (that is going to be filtered) $\theta_t\in\mathbb R^d$ is a Markov diffusion process given by

$$ \theta_t = \theta_0 + \int_0^t a(s,\theta_s)\,ds + \int_0^t \sigma(s,\theta_s)\,dW_s + \int_0^t\!\!\int_{\mathbb R^d} x\,\tilde M(ds,dx) + \int_0^t\!\!\int_{\mathbb R^d} y\,\tilde N(ds,dy) \tag{2.17} $$

where we denote

$$ Y_t = \int_0^t\!\!\int_{\mathbb R^d} y\,\tilde N(ds,dy) $$

as the observation process. Notice that, traditionally in the study of filtering theory, we need to rewrite (2.17) as

$$ dY_t = d\theta_t - \int_{\mathbb R^d} x\,\tilde M(dt,dx) - a(t,\theta_t)\,dt - \sigma(t,\theta_t)\,dW_t \tag{2.18} $$

the information we obtain is recorded as $Y_t$, but what we want to derive is the "story" of $\theta_t$, which requires us to get rid of the noise term

$$ X_t = \int_0^t\!\!\int_{\mathbb R^d} x\,\tilde M(ds,dx) + \int_0^t a(s,\theta_s)\,ds + \int_0^t \sigma(s,\theta_s)\,dW_s $$

Now let us clarify the details of the notation:

$$ \tilde M(dt,dx) = M(dt,dx) - l(t,\theta_t,x)\,\frac{dx}{|x|^{d+\alpha}} \tag{2.19} $$
$$ \tilde N(dt,dy) = N(dt,dy) - k(t,y)\,\frac{dy}{|y|^{d+\alpha}} \tag{2.20} $$

in which we assume:

• α ∈ (0,2);
• l(t,θ,y) and k(t,y) are 0-homogeneous in y and bounded;
• symmetry in y: l(t,θ,y) = l(t,θ,−y) and k(t,y) = k(t,−y);
• finite second moment: $\int_{\mathbb R^d}|z|^2\,\nu(dz) < \infty$ and $\int_{\mathbb R^d} 1\wedge|z|^2\,\nu(dz) < \infty$.

What is more important, since we will use it later in the derivation of the equation for the filtering density, is that we require the non-centered point measures M(dt,dx) and N(dt,dx) to have NO common jumps. The Wiener process W is independent of M and N, respectively, too. The compensated measures $\tilde M$ and $\tilde N$ are both martingale measures.

2.3 Deriving SPDE of the Filtering Density

Let f be a bounded function on $\mathbb R^d$ and $\hat f(\theta_t)$ the optimal filter (the best mean-square estimator of $f(\theta_t)$) based on the observations $Y_s$, 0 ≤ s ≤ t. The following fundamental result holds for the optimal filter:

$$ \hat f(\theta_t) = \frac{\int_{\mathbb R^d} f(x)\,u(t,x)\,dx}{\int_{\mathbb R^d} u(t,x)\,dx} = E\big[f(\theta_t)\,\big|\,\mathcal F_t^Y\big] \tag{2.21} $$

where the shorthand "information" $\mathcal F_t^Y$ is the σ-algebra generated by $\{Y_s\}_{0\le s\le t}$.

We call the function u = u(t,x) in (2.21) the unnormalized filtering density (UFD). We can anticipate that knowing the UFD will definitely help us get the best mean-square estimator of the signal process $\theta_t$ for any given arbitrary function f. A rather clear fact is that formula (2.21) itself will NOT play an important role unless we discover a way to compute the UFD either analytically or numerically. As mentioned before, the main goal of this research is to derive a cost-efficient numerical scheme by which u can be solved numerically.

Theorem 2.3. The unnormalized filtering density u = u(t,x) satisfies the following Poisson stochastic partial differential equation (also known as the Zakai equation):

$$ du(t,x) = \mathcal L^* u(t,x)\,dt + \int_{\mathbb R^d}\big(u(t^-,x+z) - u(t^-,x)\big)\,\tilde N(dt,dz) \tag{2.22} $$

where the "differential" operator $\mathcal L^*$ is defined by

$$ \begin{aligned} \mathcal L^* u(t,x) := {}& \int_{\mathbb R^d}\big[u(t,x+z) - u(t,x) - (\nabla_x u(t,x)\cdot z)\big]\,s(t,x,z)\,\frac{dz}{|z|^{d+\alpha}} \\ &+ \int_{\mathbb R^d} u(t,x+z)\big[s(t,x+z,z) - s(t,x,z) - (\nabla_x s(t,x,z)\cdot z)\big]\,\frac{dz}{|z|^{d+\alpha}} \\ &+ \int_{\mathbb R^d}\big[u(t,x+z) - u(t,x)\big]\,(\nabla_x s(t,x,z)\cdot z)\,\frac{dz}{|z|^{d+\alpha}} \\ &+ \frac12\sum_{i,j} \partial_i\big[\sigma_i(t,x)\,\sigma_j(t,x)\,\partial_j(u(t,x))\big] - \sum_i \partial_i\big(a_i(t,x)\,u(t,x)\big) \end{aligned} $$

where s(t,x,z) = l(t,x,z) + k(t,z).

Proof (sketch): Without loss of generality, assume the dimension is 1. The stochastic differential equation satisfied by $\theta_t$ is

$$ d\theta_t = \int_{\mathbb R} x\,\tilde M(dt,dx) + \int_{\mathbb R} y\,\tilde N(dt,dy) + a(t,\theta_t)\,dt + \sigma(t,\theta_t)\,dW_t \tag{2.23} $$

Using the Poisson-type Itô formula that we presented earlier, one obtains the following SDE
for $f(\theta_t)$:

$$ \begin{aligned} f(\theta_t) ={}& f(\theta_0) + \int_0^t\!\!\int_{\mathbb R}\big[f(\theta_{s^-}+y) - f(\theta_{s^-}) - f'(\theta_{s^-})\,y\big]\,k(s,y)\,\frac{dy}{|y|^{\alpha+1}}\,ds \\ &+ \int_0^t\!\!\int_{\mathbb R}\big[f(\theta_{s^-}+y) - f(\theta_{s^-})\big]\,\tilde N(ds,dy) \\ &+ \int_0^t\!\!\int_{\mathbb R}\big[f(\theta_{s^-}+y) - f(\theta_{s^-}) - f'(\theta_{s^-})\,y\big]\,l(s,\theta_s,y)\,\frac{dy}{|y|^{\alpha+1}}\,ds \\ &+ \int_0^t\!\!\int_{\mathbb R}\big[f(\theta_{s^-}+y) - f(\theta_{s^-})\big]\,\tilde M(ds,dy) \\ &+ \int_0^t f'(\theta_s)\big[a(s,\theta_s)\,ds + \sigma(s,\theta_s)\,dW_s\big] + \int_0^t \tfrac12\,\sigma^2(s,\theta_s)\,f''(\theta_s)\,ds \end{aligned} $$

We take the conditional expectation:

$$ \begin{aligned} E\big[f(\theta_t)\,\big|\,\mathcal F_t^Y\big] ={}& f(\theta_0) + \int_0^t\!\!\int_{\mathbb R}\big[f(\theta_{\tau^-}+y) - f(\theta_{\tau^-}) - f'(\theta_{\tau^-})\,y\big]\,s(\tau,\theta_\tau,y)\,\frac{dy}{|y|^{\alpha+1}}\,d\tau \\ &+ \int_0^t\!\!\int_{\mathbb R}\big[f(\theta_{\tau^-}+y) - f(\theta_{\tau^-})\big]\,\tilde N(d\tau,dy) \\ &+ \int_0^t \Big[f'(\theta_\tau)\,a(\tau,\theta_\tau) + \tfrac12\,\sigma^2(\tau,\theta_\tau)\,f''(\theta_\tau)\Big]\,d\tau \end{aligned} $$

Also, we know

$$ \int_{\mathbb R} f(x)\,u(t,x)\,dx = \int_0^t\!\!\int_{\mathbb R} f(x)\,du(s,x)\,dx + \int_{\mathbb R} f(x)\,u(0,x)\,dx $$

so we claim that

$$ \int_0^t\!\!\int_{\mathbb R} f(x)\,du(s,x)\,dx + \int_{\mathbb R} f(x)\,u(0,x)\,dx = \int_{\mathbb R} E\big[f(\theta_t)\,\big|\,\mathcal F_t^Y\big]\,u(t,x)\,dx $$

We handle the terms on the right-hand side of this equation one by one. By Fubini's theorem, consider

$$ \int_0^t\!\!\int_{\mathbb R}\int_{\mathbb R}\big[f(x+y) - f(x)\big]\,u(t,x)\,dx\,\tilde N(dt,dy) $$

For a "fixed" y in the innermost integral:

$$ \begin{aligned} &\int_{\mathbb R}\big[f(x+y) - f(x)\big]\,u(t,x)\,dx \\ &= \int_{\mathbb R}\big[f(x+y)u(t,x+y) + f(x+y)u(t,x) - f(x+y)u(t,x+y) - f(x)u(t,x)\big]\,dx \\ &= \int_{\mathbb R}\big[u(t,x-y) - u(t,x)\big]\,f(x)\,dx \end{aligned} $$

Thus we obtain

$$ \int_0^t\!\!\int_{\mathbb R}\int_{\mathbb R}\big[f(x+y) - f(x)\big]\,u(t,x)\,dx\,\tilde N(dt,dy) = \int_0^t\!\!\int_{\mathbb R}\int_{\mathbb R} f(x)\big[u(t,x+y) - u(t,x)\big]\,dx\,\tilde N(dt,dy) $$

after changing the variable y once more as we move to the outside. Similarly, when we calculate

$$ \int_{\mathbb R}\big[f(x+y) - f(x) - f'(x)\,y\big]\,u(t,x)\,s(t,x,y)\,dx $$

we do a similar "cut and fill" so that the second-order difference is transferred from f(x) to u(t,x)s(t,x,y):

$$ \begin{aligned} & f(x+y)u(t,x+y)s(t,x+y,y) + f(x+y)u(t,x)s(t,x,y) - f(x)u(t,x)s(t,x,y) \\ &- f(x+y)u(t,x+y)s(t,x+y,y) + f(x+y)u(t,x)s(t,x+y,y) - f(x+y)u(t,x)s(t,x+y,y) \\ &+ u_x(t,x)f(x)s(t,x,y)\,y + s_x(t,x,y)f(x)u(t,x) \end{aligned} $$

Again, after some further change-of-variable procedures with respect to y, one finds that it equals

$$ \begin{aligned} &\int_0^t\!\!\int_{\mathbb R}\int_{\mathbb R}\big[u(t,x+z) - u(t,x) - u_x(t,x)\cdot z\big]\,s(t,x,z)\,f(x)\,dx\,\frac{dz}{|z|^{1+\alpha}}\,dt \\ &+ \int_0^t\!\!\int_{\mathbb R}\int_{\mathbb R} u(t,x+z)\big[s(t,x+z,z) - s(t,x,z) - s_x(t,x,z)\cdot z\big]\,f(x)\,dx\,\frac{dz}{|z|^{1+\alpha}}\,dt \\ &+ \int_0^t\!\!\int_{\mathbb R}\int_{\mathbb R}\big[u(t,x+z) - u(t,x)\big]\,s_x(t,x,z)\cdot z\,f(x)\,dx\,\frac{dz}{|z|^{1+\alpha}}\,dt \end{aligned} $$

It is as if we used integration by parts twice and transferred the second-order difference to the product u(t,x)s(t,x,z) with respect to x, which still has to be in discrete form. The "continuous part" (i.e. the integrals of a and σ) gives

$$ \frac12\frac{\partial^2}{\partial x^2}\big[\sigma^2(t,x)\,u(t,x)\big] - \frac{\partial}{\partial x}\big[a(t,x)\,u(t,x)\big] $$

after exchanging the derivatives between f and the other functions. The other terms, where there is no $\int_0^t$, do not matter once we take the derivative with respect to t; by the generality of the function f, we reach the SPDE for u:

$$ du(t,x) = \mathcal L^* u(t,x)\,dt + \int_{\mathbb R^d}\big(u(t^-,x+z) - u(t^-,x)\big)\,\tilde N(dt,dz) $$

This completes the proof in the 1-dimensional case. □

Note: the multi-dimensional case is not much harder; we just change the derivative to a gradient in the equation. We can see that the key to deriving this equation is to use such a discrete integration-by-parts rule.

The fundamental SPDE that u satisfies has been obtained. We now start deriving the Chaos Expansion formula for the solution of this equation.

Chapter 3 Construction of the Numerical Scheme for Solving UFD

3.1 Preliminary Results for Poisson Chaos Expansion

The unnormalized filtering density, as provided by the Poisson stochastic differential equation (2.22), is a square-integrable function of the Poisson noise under standard assumptions. In the case of Wiener noise (see Rozovskii, Mikulevicius and Lototsky [24]), a way has already been found to expand u(t,x) using a basis of $L^2(\Omega,\mathcal F^W, W)$, where W(·) is the standard Wiener process.
Here, similarly, in the Poisson case we would like to find a CONS in $L^2(\Omega,\mathcal F^N,N)$ and later derive a recursive numerical scheme to solve for this "spectral" approximation of u.

The Poisson chaos is a special orthonormal basis of the space of square-integrable random variables measurable with respect to a certain Poisson σ-algebra. For stochastic partial differential equations, the Poisson chaos decomposition is a stochastic analog of the classical Fourier series method, and it results in a separation of the random and the deterministic components.

Before presenting the derivation of the Chaos Expansion, we first introduce some important notation, and explain why and how a function or random variable of the space $L^2(\Omega,\mathcal F^N,N)$ can be expanded.

3.1.1 The $L^2$-norm and Orthogonality

We call the space $X = [0,T]\times\mathbb R^d$ the fundamental Cartesian space we focus on. In this section, however, the functions we are concerned with contain n pairs of variables in this space; namely, we consider the space $L_2(X^n, m^n)$, where m is just the mean measure of the Poisson point process introduced earlier. For convenience, we write

$$ \int_0^t\!\!\int_{\mathbb R^d}\cdots\int_0^{t_2}\!\!\int_{\mathbb R^d} := \int_{X^n} $$

to denote the multi-level integration in which the time variables $t_1, t_2, \ldots, t_n$ are in ascending order. Then we define

$$ |f|_n^2 := \int_{X^n} |f(t_1,y_1,\ldots,t_n,y_n)|^2\,m(dt_1,dy_1)\cdots m(dt_n,dy_n) \tag{3.1} $$

Moreover, for multivariate functions like f it is also very common to consider the symmetrization $\hat f$:

$$ \hat f(t_1,x_1,\ldots,t_n,x_n) = \frac{1}{n!}\sum_{\sigma\in\mathcal P(n)} f\big(t_{\sigma(1)},x_{\sigma(1)},\ldots,t_{\sigma(n)},x_{\sigma(n)}\big) \tag{3.2} $$

where the group $\mathcal P(n)$ contains all permutations of 1, 2, ..., n. One may ask: what if $f = \hat f$? We naturally call such functions "symmetric," and the $L^2$ space where they live, which is certainly a subspace of the general $L_2(X^n,m^n)$, we denote by $\hat L_2(X^n,m^n)$.

We have the following rather straightforward fact for $f\in\hat L_2(X^n,m^n)$:

$$ \begin{aligned} |f|_n^2 &= \int_{X^n} |f(t_1,x_1,\ldots,t_n,x_n)|^2\,m(dt_1,dx_1)\cdots m(dt_n,dx_n) \\ &= \int_{X^n} \Big|\frac{1}{n!}\sum_{\sigma\in\mathcal P(n)} f\big(t_{\sigma(1)},x_{\sigma(1)},\ldots,t_{\sigma(n)},x_{\sigma(n)}\big)\Big|^2\,m(dt_1,dx_1)\cdots m(dt_n,dx_n) \\ &= \frac{1}{n!}\int_{X^n} |f(t_1,x_1,\ldots,t_n,x_n)|^2\,m(dt_1,dx_1)\cdots m(dt_n,dx_n) \end{aligned} $$

since we only admit the ascending order of the permuted time variables.

These derivations inspire the following orthogonality conclusion.

Proposition 3.1. Assume $f\in\hat L_2(X^n,m^n)$, and define the operator $I_n(\cdot)$ by

$$ I_n(f) := n!\int_{X^n} f(t_1,x_1,\ldots,t_n,x_n)\,\tilde N(dt_1,dx_1)\cdots\tilde N(dt_n,dx_n) \tag{3.3} $$

Then one has

$$ E[I_m(f)\,I_n(f)] = \begin{cases} 0 & m\ne n \\ n!\,(f,f)_{\hat L_2(X^n,m^n)} & m=n \end{cases} \tag{3.4} $$

Proof: For the case m ≠ n, the claim is clear if we assume one of them, say m, to be greater: the "extra" factors

$$ \tilde N(dt_{n+1},dx_{n+1})\cdots\tilde N(dt_m,dx_m) $$

are all compensated Poisson point measures, so the zero expectation is quickly verified. For the case m = n, the Itô isometry tells us that

$$ E\big[(I_n(f))^2\big] = (n!)^2\,|f|_n^2 = n!\int_{X^n} |f(t_1,x_1,\ldots,t_n,x_n)|^2\,m(dt_1,dx_1)\cdots m(dt_n,dx_n) $$

which follows immediately from the conclusion above for the $L^2$-norm $|f|_n$, given that f is itself symmetric. □
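Definition (3.2) is easy to mirror in code; the following sketch (our own illustration, not part of the dissertation's procedures) symmetrizes a function of n (t,x) pairs by averaging over all orderings of its arguments.

```python
from itertools import permutations
from math import factorial

def symmetrize(f, n):
    """Return f_hat of (3.2): the average of f over all orderings of its n pairs."""
    def f_hat(*pairs):                     # pairs = ((t1, x1), ..., (tn, xn))
        return sum(f(*(pairs[i] for i in sigma))
                   for sigma in permutations(range(n))) / factorial(n)
    return f_hat

# f below is not symmetric in its two (t, x) pairs, but f_hat is
f = lambda p, q: p[0] * q[1]               # t1 * x2
f_hat = symmetrize(f, 2)
print(f_hat((1, 2), (3, 4)), f_hat((3, 4), (1, 2)))   # 5.0 5.0
```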
3.1.2 Expansion Formula: Type-I

We are now ready to give the following expansion formula (proofs can be found in [12], [5], [32]):

Theorem 3.2. Let the random variable $F \in L^2(\Omega,\mathcal F^N,N)$. Then there exists a unique sequence of symmetric functions $\{f_n\}$, $f_n\in\hat L_2(X^n,m^n)$ for n ≥ 1, such that

$$ F = E[F] + \sum_{m\ge 1} I_m(f_m) \tag{3.5} $$
$$ E[F^2] = (E[F])^2 + \sum_{m\ge 1} m!\,\|f_m\|^2_{\hat L_2(X^m,m^m)} \tag{3.6} $$

Note: the second equation is an immediate result of the Itô isometry. We have also changed the notation for these specific symmetric functions: $(f,f)_{\hat L_2(X^n,m^n)} = \|f\|^2_{\hat L_2(X^n,m^n)}$.

Obviously the expansion (3.5) is not the standard Hilbert–Parseval type that we always see in Hilbert spaces: $I_n(\cdot)$ is an operator on certain symmetric functions. Orthogonality is guaranteed in this expansion, yet normality has not been granted. We now expand each symmetric function $f_n$ further, to reach the commonly seen standard Hilbert–Parseval form.

Each function $f_n$ involves n pairs of variables $(t_1,x_1),\ldots,(t_n,x_n)$. So, instead of traditional 1-dimensional indices, we use multi-indices for the expansion of $f_n$.

Theorem 3.3. For the multi-indices α ∈ J, where

$$ J = \Big\{\alpha = (\alpha_k)_{k\ge1},\ |\alpha| = \sum_k \alpha_k < \infty\Big\} \tag{3.7} $$

assume the functions $\{\hat\delta_\alpha\}_{|\alpha|=n}$ form a CONS of $\hat L_2(X^n,m^n)$. Then, for each n and $f_n\in\hat L_2(X^n,m^n)$, the following expansion holds:

$$ f_n = \sum_{|\alpha|=n} f_n^\alpha\,\hat\delta_\alpha \tag{3.8} $$

Moreover, the expansion (3.5) can now be written as

$$ F = E[F] + \sum_{n=1}^\infty \sum_{|\alpha|=n} f_n^\alpha\,I_n(\hat\delta_\alpha) \tag{3.9} $$

Now is the right time to introduce the quantity $K_\alpha^t$, which is the immediate application of the operator I(·) shown above to basis functions of the space $\hat L_2(X^n,m^n)$. We define

$$ K_\alpha^t := I_{|\alpha|}(\hat\delta_\alpha) \tag{3.10} $$

The definition is rather simple if we just plug n = |α| and $f = \hat\delta_\alpha$ into $I_n(f)$; namely,

$$ K_\alpha^t := |\alpha|!\int_0^t\!\!\int_{\mathbb R^d}\cdots\int_0^{t_2}\!\!\int_{\mathbb R^d} \hat\delta_\alpha(t_1,x_1,\ldots,t_n,x_n)\,\tilde N(dt_1,dx_1)\cdots\tilde N(dt_n,dx_n) \tag{3.11} $$

and we easily have $K_0^t = 1$, where |α| = 0 means the multi-index with all components 0.

3.1.3 Details of the Basis Function and Complexity of Using $K_\alpha^t$ for a Numerical Scheme

Up to this moment we have only presented the abstract form of the basis function $\hat\delta_\alpha$; now we present some details on how this function is constructed. Assume $\{\delta_k(t,x)\}_k$ forms the CONS of $L_2(X,m)$, i.e.

$$ \int_0^T\!\!\int_{\mathbb R^d} \delta_k(t,x)\,\delta_j(t,x)\,m(dt,dx) = \begin{cases} 0 & k\ne j \\ 1 & k=j \end{cases} \tag{3.12} $$

Given a particular multi-index, it is very natural to ask which of its components are nonzero. In fact, we define

$$ \mathrm{Index}(\alpha) := \max\{k,\ \alpha_k\ne 0,\ k\ge 1\} \tag{3.13} $$

and we denote by $\alpha_1,\alpha_2,\ldots,\alpha_\kappa$ the nonzero components of α. Obviously we have $|\alpha| = \alpha_1 + \alpha_2 + \cdots + \alpha_\kappa$ for κ = Index(α). The following procedure defines the quantity $\delta_\alpha$ as a tensor product:

$$ \begin{aligned} \delta_\alpha(t_1,x_1,\ldots,t_n,x_n) ={}& \delta_1(t_1,x_1)\cdots\delta_1(t_{\alpha_1},x_{\alpha_1})\,\delta_2(t_{\alpha_1+1},x_{\alpha_1+1})\cdots\delta_2(t_{\alpha_1+\alpha_2},x_{\alpha_1+\alpha_2}) \\ &\cdots\,\delta_\kappa(t_{\alpha_1+\cdots+\alpha_{\kappa-1}+1},x_{\alpha_1+\cdots+\alpha_{\kappa-1}+1})\cdots\delta_\kappa(t_{\alpha_1+\cdots+\alpha_\kappa},x_{\alpha_1+\cdots+\alpha_\kappa}) \end{aligned} $$

or simply, as we call it, the tensor product of $\{\delta_k\}_k$ with respect to α. Since we carry out all our expansions assuming the basis functions are orthonormal on $\hat L_2(X^n,m^n)$, this requires the final symmetrization step:

$$ \hat\delta_\alpha(t_1,x_1,\ldots,t_n,x_n) = \frac{1}{n!}\sum_{\sigma\in\mathcal P(n)} \delta_\alpha\big(t_{\sigma(1)},x_{\sigma(1)},\ldots,t_{\sigma(n)},x_{\sigma(n)}\big) \tag{3.14} $$

Beyond this, with the definition of $K_\alpha^t$ in (3.11), one can rewrite the expansion formula as

$$ F = E[F] + \sum_\alpha c_\alpha\, K_\alpha^t \tag{3.15} $$

But the practical simulation of $K_\alpha^t$ is extremely hard, since there are n compensated Poisson point measures involved at each level |α| = n. That is also why we call (3.15) a "theoretical" expansion formula; in the real world, it offers more possibilities for theoretical research.
As we have already shown from the complexity of plugging in $\hat\delta_\alpha$, the numerical implementation becomes incredibly difficult and makes this version of the expansion almost worthless for the numerical-approximation purpose we are heading toward. Instead, we begin looking for another version (i.e. another set of CONS basis functions) of the Chaos Expansion that is easier to apply numerically.

3.2 Applicable Numerical Scheme Derivation

As in the Wiener case [24], we start with the following procedure. Let $\mathcal Z$ be the set of all real-valued sequences $z = (z_k)_{k\ge1}$ such that only finitely many $z_k$'s are nonzero. For $z\in\mathcal Z$ we denote

$$ \delta_z(t,x) = \sum_k z_k\,\delta_k(t,x) \tag{3.16} $$

where the CONS basis functions $\delta_k(t,x)$ of $L_2([0,T]\times\mathbb R^d)$ were discussed earlier. The purpose of this section is to define a quantity $P_t(z)$ as an intermediate object that helps us derive:

• a set of orthonormal functions of $L^2(\Omega,\mathcal F^N,N)$, to be used later for the Chaos Expansion;
• when multiplied with the UFD u = u(t,x), the recursive system of partial differential equations for the expansion coefficients.

3.2.1 Alternative Complete Orthonormal Basis

Let $P_t(z)$ satisfy

$$ dP_t(z) = P_{t^-}(z)\int_{\mathbb R^d} \delta_z(t^-,y)\,\tilde N(dt,dy) \tag{3.17} $$
$$ P_0(z) = 1 \tag{3.18} $$

Then the following conclusion is very easily obtained using the Poisson-type Itô formula:

Proposition 3.4. We claim that $\log P_t(z)$ satisfies

$$ \begin{aligned} \log P_t(z) &= \int_0^t\!\!\int_{\mathbb R^d}\big[\log(1+\delta_z(t,y)) - \delta_z(t,y)\big]\,m(dt,dy) + \int_0^t\!\!\int_{\mathbb R^d} \log\big(1+\delta_z(t^-,y)\big)\,\tilde N(dt,dy) \\ &= \int_0^t\!\!\int_{\mathbb R^d} \log\big(1+\delta_z(t^-,y)\big)\,N(dt,dy) - \int_0^t\!\!\int_{\mathbb R^d} \delta_z(t,y)\,m(dt,dy) \end{aligned} $$

Moreover, the solution for $P_t(z)$ is straightforward:

$$ P_t(z) = \exp\Big\{\int_0^t\!\!\int_{\mathbb R^d}\log\big(1+\delta_z(t^-,y)\big)\,N(dt,dy) - \int_0^t\!\!\int_{\mathbb R^d}\delta_z(t,y)\,m(dt,dy)\Big\} \tag{3.19} $$

Proof: Direct use of Itô's formula; skipped here. □

With the solution for $P_t(z)$ we can go further:

Proposition 3.5. For two indices z and z′ both chosen from $\mathcal Z$, we have

$$ \begin{aligned} P_t(z)\,P_t(z') ={}& 1 + \int_0^t P_{s^-}(z)\,dP_s(z') + \int_0^t P_{s^-}(z')\,dP_s(z) \\ &+ \int_0^t\!\!\int_{\mathbb R^d} P_{s^-}(z)\,P_{s^-}(z')\,\delta_z(s^-,y)\,\delta_{z'}(s^-,y)\,N(ds,dy) \end{aligned} $$

which gives

$$ E[P_t(z)P_t(z')] = 1 + \int_0^t\!\!\int_{\mathbb R^d} E[P_s(z)P_s(z')]\,\delta_z(s,y)\,\delta_{z'}(s,y)\,m(ds,dy) \tag{3.20} $$

for all t ∈ [0,T]. What matters most for us is what happens at the end point of the time interval; we have

$$ E[P_T(z)P_T(z')] = \exp\Big\{\int_0^T\!\!\int_{\mathbb R^d}\delta_z(s,y)\,\delta_{z'}(s,y)\,m(ds,dy)\Big\} = \exp(z\cdot z') \tag{3.21} $$

Proof: The derivation of the SPDE for $P_t(z)P_t(z')$ is again based on the Poisson-type Itô formula. The orthogonality of the $\delta_k(t,x)$ on $L_2([0,T]\times\mathbb R^d)$ yields

$$ \int_0^T\!\!\int_{\mathbb R^d} z_k z_k'\,\delta_k^2(t,y)\,m(dt,dy) = z_k z_k' $$

and the conclusion follows immediately. □

A traditional index contains only ONE component; this is now a good place to say more about the multi-indices we are working with and some useful properties. For a given multi-index α, let κ = Index(α); that is, $\alpha_\kappa$ is the last item of α (or just of a vector, as we are using the inner product) that is not equal to 0. We denote by $\alpha_1,\alpha_2,\ldots,\alpha_\kappa$ those components of α that are nonzero, and let $|\alpha| = n = \sum_{i=1}^\kappa \alpha_i$. Then the notation $\partial^\alpha/\partial z^\alpha$ is defined by

$$ \frac{\partial^\alpha}{\partial z^\alpha}\, g(z) = \frac{\partial^n}{\partial z_1^{\alpha_1}\,\partial z_2^{\alpha_2}\cdots\partial z_\kappa^{\alpha_\kappa}}\,g(z) \tag{3.22} $$

for a sufficiently smooth function g of the variable z. So we are now ready to define

$$ \tilde C_\alpha^T = \frac{\partial^\alpha}{\partial z^\alpha} P_T(z)\Big|_{z=0} = \frac{\partial^n}{\partial z_1^{\alpha_1}\,\partial z_2^{\alpha_2}\cdots\partial z_\kappa^{\alpha_\kappa}} P_T(z)\Big|_{z_1=z_2=\cdots=z_\kappa=0} \tag{3.23} $$

The definition of $\tilde C_\alpha^T$ is a very important step toward generating the CONS basis functions of $L^2(\Omega,\mathcal F_T^N,N)$. Let us take a look at its orthogonality property.
Theorem 3.6. For two multi-indices α and α′ both chosen from J, we have

$$ E\big[\tilde C_\alpha^T\,\tilde C_{\alpha'}^T\big] = \partial_z^\alpha\,\partial_{z'}^{\alpha'} E[P_T(z)P_T(z')]\Big|_{z=z'=0} = \partial_z^\alpha\,\partial_{z'}^{\alpha'}\exp(z\cdot z')\Big|_{z=z'=0} = \alpha!\,1_{\alpha=\alpha'} \tag{3.24} $$

Following this conclusion, the CONS basis functions of $L^2(\Omega,\mathcal F_T^N,N)$ are provided by

$$ C_\alpha^T = \frac{\tilde C_\alpha^T}{\sqrt{\alpha!}} \tag{3.25} $$

Note: the notation α! is given by $\alpha! = \alpha_1!\,\alpha_2!\cdots\alpha_\kappa!$, where $\alpha_1,\ldots,\alpha_\kappa$ are the nonzero components of the multi-index α.

Proof (sketch): We only have to show the validity of the third equality, since the first two were shown in the previous proposition. Assume α = α′ with the κ nonzero components $\alpha_1,\alpha_2,\ldots,\alpha_\kappa$, and consider computing $\partial_{z_1}^{\alpha_1}\partial_{z_1'}^{\alpha_1}$:

$$ \partial_{z_1}^{\alpha_1}\,\partial_{z_1'}^{\alpha_1}\,e^{z_1 z_1' + \cdots} = \partial_{z_1}^{\alpha_1}\big[z_1^{\alpha_1}\,e^{z_1 z_1' + \cdots}\big] $$

The first term after using the product rule is just $\alpha_1!\,e^{z_1 z_1' + \cdots}$, and all other terms contain a factor like $z_1^\gamma\, e^{z_1 z_1'+\cdots}$ for γ ≥ 1, which becomes 0 after plugging in z = 0. Similar results hold for the other partial derivatives. If α ≠ α′, it is easy to see that some $z^\gamma$ or $z'^\gamma$ factor must remain, and it vanishes after plugging in 0. □

3.2.2 Partial Differential Equations Regarding Coefficients

Under the assumption that the unnormalized filtering density satisfies $u(t,x)\in L^2(\Omega,\mathcal F_t^N,N)$, we have the following Chaos Expansion formula:

$$ u(t,x) = \sum_{\alpha\in J} \frac{1}{\sqrt{\alpha!}}\,\varphi_\alpha(t,x)\,C_\alpha^t \tag{3.26} $$

where one should note that $\mathcal F_t^N$ only denotes the "information" (i.e. filtration) up to the moment T = t.

One might ask why we still need a $1/\sqrt{\alpha!}$ when the basis function $C_\alpha^t$ has already been normalized. The following derivation resolves this confusion.

Proposition 3.7. With $\varphi_\alpha$ as defined by (3.26), we have

$$ \varphi_\alpha(t,x) = \frac{\partial^\alpha}{\partial z^\alpha} E\big[u(t,x)\,P_t(z)\big]\Big|_{z=0} \tag{3.27} $$

Proof: Since $C_\alpha^t$ has been normalized,

$$ \frac{1}{\sqrt{\alpha!}}\,\varphi_\alpha(t,x) = \langle u(t,x), C_\alpha^t\rangle = \frac{1}{\sqrt{\alpha!}}\, E\big[u(t,x)\,\tilde C_\alpha^t\big] = \frac{1}{\sqrt{\alpha!}}\, E\Big[u(t,x)\,\frac{\partial^\alpha}{\partial z^\alpha}P_t(z)\Big|_{z=0}\Big] = \frac{1}{\sqrt{\alpha!}}\,\frac{\partial^\alpha}{\partial z^\alpha} E\big[u(t,x)P_t(z)\big]\Big|_{z=0} $$

and cancellation of $\sqrt{\alpha!}$ gives the result. □

We would say (3.27) tells us the story of how to derive the equation that $\varphi_\alpha$ should satisfy. We have already seen that

$$ du(t,x) = \mathcal L^* u(t,x)\,dt + \int_{\mathbb R^d}\big(u(t^-,x+z) - u(t^-,x)\big)\,\tilde N(dt,dz) $$
$$ dP_t(z) = P_{t^-}(z)\int_{\mathbb R^d}\delta_z(t^-,y)\,\tilde N(dt,dy) $$

Itô's rule says that for the product $u(t,x)P_t(z)$ we have

$$ \begin{aligned} u(t,x)\,P_t(z) ={}& p_0(x) + \int_0^t u(s^-,x)\,dP_s(z) + \int_0^t P_{s^-}(z)\,du(s,x) \\ &+ \int_0^t\!\!\int_{\mathbb R^d}\big[u(s^-,x+y) - u(s^-,x)\big]\,P_{s^-}(z)\,\delta_z(s^-,y)\,N(ds,dy) \end{aligned} $$

where we assume $u(0,x) = p_0(x)$ is the initial filtering density. Let us take

$$ \varphi(t,x,z) = E\big[u(t,x)\,P_t(z)\big] $$

Taking expectations on both sides of the SPDE that $u(t,x)P_t(z)$ satisfies, we get:

Proposition 3.8.

$$ \frac{\partial\varphi(t,x,z)}{\partial t} = \mathcal L^*\varphi(t,x,z) + \int_{\mathbb R^d}\big[\varphi(t,x+y,z) - \varphi(t,x,z)\big]\,\delta_z(t,y)\,\nu(dy) \tag{3.28, 3.29} $$
$$ \varphi(0,x,z) = p_0(x) \tag{3.30} $$

Proof: The mean-0 property of the compensated Poisson point measure guarantees that wherever there is an $\tilde N$, the integral disappears after taking expectations. Similarly, the expectation of the measure gives $E[N(dt,dx)] = \nu(dx)\,dt$; differentiating with respect to t we obtain the partial differential equation above. □

Finally, we use the relation stated in (3.27) to get

$$ \varphi_\alpha(t,x) = \frac{\partial^\alpha}{\partial z^\alpha} E\big[u(t,x)P_t(z)\big]\Big|_{z=0} = \frac{\partial^\alpha}{\partial z^\alpha}\varphi(t,x,z)\Big|_{z=0} \tag{3.31} $$

This is easy to apply to (3.28), yet the harder part remains for the Poisson integral term (3.29). Looking at the structure of $\delta_z(t,y)$, one can see that since

$$ \delta_z(t,y) = \sum_k z_k\,\delta_k(t,y) $$

when using the product rule, i.e. applying $\partial_z^\alpha$ to $\varphi(t,x,z)\,\delta_z(t,y)$, suppose we are at the step

$$ \frac{\partial^{\alpha_l}}{\partial z_l^{\alpha_l}}\big[\varphi(t,x,z)\,\delta_z(t,y)\big] $$

for some $\alpha_l\ge1$, a component picked from α.
The structure of $\delta_z$ tells us that any partial derivative in z of order two or higher applied to $\delta_z$ makes the whole term disappear. If no derivative in z is taken on $\delta_z$ (i.e. the factor $\delta_z$ is left unchanged), the term also goes to 0, since later we plug in 0 for all the $z_k$'s. So the only thing left is the term with a derivative of order $\alpha_l - 1$ on $\varphi(t,x,z)$ and a first-order derivative on $\delta_z(t,y)$. This becomes

$$ \alpha_l\,\frac{\partial^{\alpha_l-1}}{\partial z_l^{\alpha_l-1}}\varphi(t,x,z)\Big|_{z_l=0}\,\delta_l(t,y) \tag{3.32} $$

It is of interest to collect all the terms like $\frac{\partial^{\alpha_l-1}}{\partial z_l^{\alpha_l-1}}\varphi(t,x,z)\big|_{z_l=0}$, which leads to the following theorem:

Theorem 3.9. The expansion coefficients from (3.26) satisfy the following recursive system of partial differential equations:

$$ \frac{\partial\varphi_\alpha(t,x)}{\partial t} = \mathcal L^*\varphi_\alpha(t,x) + \int_{\mathbb R^d}\sum_{l,\,\alpha_l\ge1}\big[\alpha_l\,\varphi_{\alpha(l)}(t,x+y) - \alpha_l\,\varphi_{\alpha(l)}(t,x)\big]\,\delta_l(t,y)\,\nu(dy) $$
$$ \varphi_\alpha(0,x) = p_0(x)\,1_{\alpha=0} $$

where the notation α(l) refers to the multi-index with norm |α| − 1 whose k-th component, for the given l with $\alpha_l\ge1$, is

$$ \alpha_k(l) = \begin{cases} \alpha_k & k\ne l \\ \max\{0,\ \alpha_k - 1\} & k=l \end{cases} $$

Proof: This has been done through the derivation shown above. One thing to be careful about is that

$$ u(0,x) = p_0(x) = \varphi_0(0,x) + \sum_{\alpha\ne0}\frac{1}{\sqrt{\alpha!}}\,\varphi_\alpha(0,x)\,C_\alpha^0 \tag{3.33} $$

given 0! = 1 and $C_0^0 = 1$. But the relation $\varphi_0(0,x) = \varphi(0,x,z)|_{z=0}$ is rather obvious, so it yields $\varphi_0(0,x) = E[u(0,x)] = p_0(x)$. Therefore one gets

$$ \sum_{\alpha\ne0}\frac{1}{\sqrt{\alpha!}}\,\varphi_\alpha(0,x)\,C_\alpha^0 = 0 \tag{3.34} $$

and the orthogonality among the $C_\alpha$'s guarantees that $\varphi_\alpha(0,x) = 0$ for all α ≠ 0. □

3.3 More Details about the Recursive Scheme and the Basis Functions

3.3.1 P.D.E. Examples at Lower Size Cases

Let us first look at several examples of the PDEs we are going to solve using the system.

Example 3.10. Case |α| = 0: the problem is as simple as

$$ \frac{\partial\varphi_0(t,x)}{\partial t} = \mathcal L^*\varphi_0(t,x) \tag{3.35} $$
$$ \varphi_0(0,x) = p_0(x) \tag{3.36} $$

which can be solved analytically or numerically depending on the form of $\mathcal L^*$. Notice that at this step we will be able to know all the values of $\varphi_0(t,x)$ on the domain of (t,x), e.g. t ∈ [0,T], x ∈ [0,L].

Example 3.11. Case |α| = 1: let k = Index(α), i.e. $\alpha_k = 1$ is the only nonzero component of α. We have

$$ \frac{\partial\varphi_k(t,x)}{\partial t} = \mathcal L^*\varphi_k(t,x) + \int_{\mathbb R^d}\big[\varphi_0(t,x+y) - \varphi_0(t,x)\big]\,\delta_k(t,y)\,\nu(dy) \tag{3.37, 3.38} $$
$$ \varphi_k(0,x) = 0 \tag{3.39} $$

In a practical simulation example we will definitely implement this equation for many values of k. Be advised that here we use the result for $\varphi_0(t,x)$ from the previous step.

Example 3.12. Case |α| = 2: this splits into two sub-cases. First, we could have a multi-index α with $\alpha_k = \alpha_l = 1$, k ≠ l. The equation becomes

$$ \begin{aligned} \frac{\partial\varphi_{k,l}(t,x)}{\partial t} ={}& \mathcal L^*\varphi_{k,l}(t,x) + \int_{\mathbb R^d}\big[\varphi_k(t,x+y) - \varphi_k(t,x)\big]\,\delta_l(t,y)\,\nu(dy) \\ &+ \int_{\mathbb R^d}\big[\varphi_l(t,x+y) - \varphi_l(t,x)\big]\,\delta_k(t,y)\,\nu(dy) \end{aligned} \tag{3.40–3.42} $$
$$ \varphi_{k,l}(0,x) = 0 \tag{3.43} $$

Or we could have a multi-index with only $\alpha_n = 2$ for some n ≥ 1, which yields

$$ \frac{\partial\varphi_{n,n}(t,x)}{\partial t} = \mathcal L^*\varphi_{n,n}(t,x) + \int_{\mathbb R^d} 2\,\big[\varphi_n(t,x+y) - \varphi_n(t,x)\big]\,\delta_n(t,y)\,\nu(dy) \tag{3.44, 3.45} $$
$$ \varphi_{n,n}(0,x) = 0 \tag{3.46} $$

Notice that the coefficients corresponding to |α| = 1, namely $\varphi_k$, $\varphi_l$, $\varphi_n$, are all assumed to have been obtained in the previous steps.

The expansion (3.26) and Theorem 3.9 together provide a method for getting the approximated solution to u = u(t,x). In a practical example, assuming the initial filtering density $p_0(x)$ is known, one can solve for the coefficients $\varphi_\alpha(t,x)$ level by level (i.e. with increasing |α|), as in the sketch below. The numerical analysis of this scheme will also be presented in later chapters: we will restrict to "some" particular α's to be used in the scheme, and discuss how "close" the approximated solution can come to the exact solution.
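To make the level-by-level structure of Examples 3.10–3.12 concrete, here is a toy method-of-lines sketch for the first two levels. Everything model-specific is a deliberate simplification of ours: $\mathcal L^*$ is replaced by a discrete-Laplacian surrogate (the actual $\mathcal L^*$ above is nonlocal), and a made-up shifted-difference source g_k stands in for the integral of $[\varphi_0(t,x+y)-\varphi_0(t,x)]\,\delta_k(t,y)$ against ν. Only the recursion pattern — solve $\varphi_0$ first, then drive $\varphi_k$ by it — reflects the scheme.

```python
import numpy as np

L_pts, T_steps = 101, 2000
x = np.linspace(0.0, 1.0, L_pts)
dx, dt = x[1] - x[0], 4e-5             # dt/dx^2 = 0.4 keeps forward Euler stable

def L_star(phi):                       # surrogate second-order operator (not the
    out = np.zeros_like(phi)           # dissertation's nonlocal L*)
    out[1:-1] = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / dx**2
    return out

def g_k(phi0):                         # hypothetical stand-in source term
    return 0.1 * (np.roll(phi0, 1) - phi0)

phi0 = np.exp(-100 * (x - 0.5) ** 2)   # initial density p_0(x), Example 3.10
phi_k = np.zeros_like(x)               # phi_alpha(0, x) = 0 for alpha != 0

for _ in range(T_steps):               # forward Euler, level by level:
    phi0_new = phi0 + dt * L_star(phi0)            # homogeneous 0-level problem
    phi_k += dt * (L_star(phi_k) + g_k(phi0))      # level 1, driven by phi0
    phi0 = phi0_new
```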
We are going to restrict to "some" particular $\alpha$'s in the scheme, and we will discuss how "good" the approximate solution is — how close it can come to the exact solution. Meanwhile, computational complexity also comes to the fore. If the PDEs ($0$-level as the homogeneous problem, the others as nonhomogeneous problems) are as "perfect" as existing well-developed models, e.g. the heat equation, the computation is not hard. But if the operator $\mathcal L^*$ is complicated, $\varphi_\alpha(t,x)$ must be solved at many coordinates $(t,x)$ to make sure "enough" pre-solved values are available for the next level, which usually increases the time cost of the numerical simulation significantly.

3.3.2 Charlier Polynomials

Before giving more details about the basis functions $C_\alpha$, let us define an important concept for multi-indices. Let $|\alpha|=n$. The quantity $Q_\alpha$ is called the characteristic set of the multi-index $\alpha$:
$$Q_\alpha = (q_1,q_2,\dots,q_n)$$
It is simplest to write the size of $\alpha$ separately, as $n = 1+1+\cdots+1$ with $n$ ones in this formula; $q_i\in Q_\alpha$ records the index (an integer value) of the nonzero component of $\alpha$ that the $i$-th $1$ comes from. For example, given $\alpha=(0,0,2,0,0,\dots)$ we have $2=1+1$, each coming from the third component of $\alpha$, so $Q_\alpha=(3,3)$. If $\alpha=(0,2,0,3,0,0,\dots)$, then $5=1+1+1+1+1$ and $Q_\alpha=(2,2,4,4,4)$, and so forth. Now we can compute:

Example 3.13. For $|\alpha|=1$ and $Q_\alpha=(k)$:
$$C^t_\alpha = \tilde C^t_\alpha = \int_0^t\!\int_{\mathbb R^d}\delta_k(s^-,x)\,N(ds,dx) - \int_0^t\!\int_{\mathbb R^d}\delta_k(s,x)\,m(ds,dx) = \int_0^t\!\int_{\mathbb R^d}\delta_k(s^-,x)\,\tilde N(ds,dx)$$
For $|\alpha|=2$ and $Q_\alpha=(k,l)$:
$$\tilde C^t_\alpha = \Big[\int_0^t\!\int_{\mathbb R^d}\delta_k(s,x)\,\tilde N(ds,dx)\Big]\Big[\int_0^t\!\int_{\mathbb R^d}\delta_l(s,x)\,\tilde N(ds,dx)\Big] - \int_0^t\!\int_{\mathbb R^d}\delta_k(s,x)\delta_l(s,x)\,N(ds,dx)$$
These are derived directly from the exact solution of $P_t(z)$, as shown in (3.19). Notice that in the second example the case $k=l$ is allowed.

We also note that we have stopped using the quantity $K^t_\alpha$ defined in (3.11); instead we work with $C^t_\alpha$ throughout the simulation procedures. The relation between the two CONS bases is still worth recording:

Corollary 3.14. For multi-indices $\alpha\ne 0$,
$$C^t_\alpha = \frac{K^t_\alpha}{(\sqrt{\alpha!})\,(|\alpha|!)} \qquad(3.47)$$
after relating the two functions through their normalizations with respect to the multi-index.

For every $\alpha$, $C^t_\alpha$ is called the corresponding Charlier polynomial. Charlier polynomials are to Poisson multiple integrals what Hermite polynomials are to Wiener multiple integrals (see [11] and [12]); a conclusion similar to Corollary 3.14 was proved by Itô in 1951 [14]. The Hermite polynomials of Wiener processes are still Wiener random variables, BUT for the Charlier polynomials and Poisson point processes no such conclusion holds.

Intuitively, the quantity $K^t_\alpha$ — in this form usually called a "multiple Poisson integral" — offers more convenience for theoretical studies; the form (3.11) gives much more flexibility when expectations and variances of $K^t_\alpha$ are involved. In numerical work, the basis function of choice is definitely $C^t_\alpha$.
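The characteristic set is pure bookkeeping, so it is worth pinning down in code. A sketch in Python, reproducing the two examples above:

```python
def characteristic_set(alpha):
    """Q_alpha: the component index k repeated alpha_k times, e.g. (0,0,2) -> (3,3)."""
    return tuple(k for k, a in enumerate(alpha, start=1) for _ in range(a))

assert characteristic_set((0, 0, 2)) == (3, 3)
assert characteristic_set((0, 2, 0, 3)) == (2, 2, 4, 4, 4)
```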
We summarize the fundamental steps of the numerical simulation here:
• Solve for $\varphi_0(t,x)$ from the initial density function, assuming $u(0,x)=p_0(x)$ is known;
• Pick finitely many integers $k$, then simulate $C^t_k$, where $|\alpha|=1$ and $Q_\alpha=(k)$;
• Following the last step, solve for the finitely many $\varphi_k(t,x)$, based on the $\varphi_0(t,x)$ obtained earlier;
• Add $\varphi_0(t,x)+\sum_k\varphi_k(t,x)\,C^t_k$ into the simulated solution, over the $k$ that were picked;
• Pick finitely many integer pairs $k,l$, then simulate $C^t_{k,l}$, where $|\alpha|=2$ and $Q_\alpha=(k,l)$;
• As before, solve for the finitely many $\varphi_{k,l}(t,x)$, based on the $\varphi_k(t,x)$, $\varphi_l(t,x)$ obtained earlier;
• Add $\sum_{k,l}\big[\mathbf 1_{k\ne l}+\mathbf 1_{k=l}/\sqrt{2!}\big]\varphi_{k,l}(t,x)\,C^t_{k,l}$ to the simulated solution, over the $k,l$ that were picked;
• Repeat these procedures as necessary for multi-indices with larger $|\alpha|$.

Each multi-index $\alpha$ by itself contains infinitely many components, so even in the simple case $|\alpha|=1$ there are infinitely many choices for the integer $k$ mentioned above. In the next chapters we place restrictions on the choice of $k$ and $\mathrm{Index}(\alpha)$ and examine how well our numerical scheme approaches the exact solution $u(t,x)$ — the convergence analysis and the growth rate of the error this scheme generates.

Chapter 4. Numerical Analysis and Error Reduction

4.1 Estimates Obtained by Restricting the Size of $\alpha$ Only
4.1.1 Preliminaries

Let us start working on the error between the approximate and the exact solutions for the unnormalized filtering density $u$. Once again, we have the chaos expansion formula
$$u(t,x) = \sum_{\alpha\in J}\varphi_\alpha(t,x)\,\frac{C^t_\alpha}{\sqrt{\alpha!}} \qquad(4.1)$$
where the set $J$ contains all multi-indices, namely $J = \{\alpha=(\alpha_1,\alpha_2,\dots):\ \alpha_i\ge 0\ \forall i\}$, and $C^t_\alpha = \tilde C^t_\alpha/\sqrt{\alpha!}$ with $\tilde C^t_\alpha = \frac{\partial^\alpha}{\partial z^\alpha}P_t(z)\big|_{z=0}$.

Theorem 4.1. The expansion (4.1) converges in $L^2([0,T]\times\mathbb R^d)$ and the following Parseval-type equality holds:
$$E\big(|u(t,x)|^2\big) = \sum_{\alpha\in J}\frac{1}{\alpha!}\,|\varphi_\alpha(t,x)|^2 \qquad(4.2)$$

The expansion formula (4.1) as written does not help much in estimating the error accumulation (nor, in fact, in the numerical simulation itself). We first rewrite it as
$$u(t,x) = \sum_{k=0}^\infty\sum_{|\alpha|=k}\varphi_\alpha(t,x)\,\frac{C^t_\alpha}{\sqrt{\alpha!}} \qquad(4.3)$$
Note: we stratify by the total "size" $|\alpha|$ of each multi-index and let it run from $0$ to $\infty$. But indeed, for each given $\alpha$ with $|\alpha|=n$ (even if $n=1$) there are still infinitely many choices of $\alpha$, since it contains infinitely many components. In the study of numerical algorithms we always try to work on a finite set whose size we can control, so that we can later express the norm of the difference (error) between the approximate and actual solutions in terms of the size of that controllable set. Thus we define the quantities
$$J^n_N = \{\alpha\in J:\ |\alpha|\le N,\ \mathrm{Index}(\alpha)\le n\} \qquad(4.4)$$
$$u_N(t,x) = \sum_{|\alpha|\le N}\varphi_\alpha(t,x)\,\frac{C^t_\alpha}{\sqrt{\alpha!}} \qquad(4.5)$$
$$u^n_N(t,x) = \sum_{\alpha\in J^n_N}\varphi_\alpha(t,x)\,\frac{C^t_\alpha}{\sqrt{\alpha!}} \qquad(4.6)$$
Indeed, by the argument above, (4.6) is the genuinely finite approximation. It is not hard to see that the set (4.4) contains up to roughly $n^N$ elements.
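Since every $\alpha\in J^n_N$ is supported on $\{1,\dots,n\}$, it is convenient to enumerate $J^n_N$ through the characteristic sets of the previous chapter. A sketch in Python:

```python
from itertools import combinations_with_replacement

def J_nN(n, N):
    """All multi-indices with |alpha| <= N and Index(alpha) <= n, as characteristic sets:
    nondecreasing tuples of length m <= N with entries in {1, ..., n}."""
    out = []
    for m in range(N + 1):
        out.extend(combinations_with_replacement(range(1, n + 1), m))
    return out

print(len(J_nN(3, 2)))   # 10 multi-indices: (), (1,), (2,), (3,), (1,1), (1,2), ...
```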
Proof of Theorem 4.1: the convergence result (the general theory for any chaos expansion based on a given noise) can be found in [3], [28], [29]. We only present the Parseval equality:
$$E[|u(t,x)|^2] = \langle u(t,x),u(t,x)\rangle = \lim_{N,n\to\infty}\Big\langle\sum_{\alpha\in J^n_N}\varphi_\alpha(t,x)\frac{C^t_\alpha}{\sqrt{\alpha!}},\ \sum_{\alpha'\in J^n_N}\varphi_{\alpha'}(t,x)\frac{C^t_{\alpha'}}{\sqrt{\alpha'!}}\Big\rangle = \lim_{N,n\to\infty}\sum_{\alpha\in J^n_N}\sum_{\alpha'\in J^n_N}\varphi_\alpha\varphi_{\alpha'}\,\frac{\langle C^t_\alpha,C^t_{\alpha'}\rangle}{\sqrt{\alpha!}\sqrt{\alpha'!}}$$
$$= \lim_{N,n\to\infty}\sum_{\alpha\in J^n_N}\sum_{\alpha'\in J^n_N}\varphi_\alpha\varphi_{\alpha'}\,\frac{\mathbf 1_{\alpha=\alpha'}}{\sqrt{\alpha!}\sqrt{\alpha'!}} = \lim_{N,n\to\infty}\sum_{\alpha\in J^n_N}\frac{\varphi_\alpha^2(t,x)}{\alpha!} = \sum_{\alpha\in J}\frac{\varphi_\alpha^2(t,x)}{\alpha!}\qquad\square$$

Remark 4.2. With the definitions (4.5) and (4.6), one can estimate
$$E\big[\|u^n_N(t,\cdot)-u(t,\cdot)\|^2_{L^2(\mathbb R^d)}\big] \qquad(4.7)$$
via the decomposition
$$E\|u^n_N(t,\cdot)-u(t,\cdot)\|^2_{L^2(\mathbb R^d)} = E\|u^n_N-u_N+u_N-u\|^2_{L^2(\mathbb R^d)} \le 2E\|u^n_N-u_N\|^2_{L^2(\mathbb R^d)} + 2E\|u_N-u\|^2_{L^2(\mathbb R^d)}\qquad\square$$
We are going to estimate both terms of the last formula. Before doing so, let us return to the original recursive numerical scheme and derive some useful semigroup-based conclusions.

4.1.2 Brief Introduction to Semigroup Theory

Semigroup theory (see [7] and [4]) has been a very useful tool in the study of ordinary differential equations. Consider the problem
$$\frac{d}{dt}u(t) = Au(t),\qquad u(0)=u_0 \qquad(4.8)$$
where $A$ is a linear operator. The solution of (4.8) is represented by
$$u(t) = S(t)\,u_0 \qquad(4.9)$$
Furthermore, the operator $S$ satisfies the following properties:
• $S(0)u_0 = u_0$, i.e. $S(t)$ behaves as the identity operator $I$ at $t=0$;
• $S(s+t)u_0 = S(s)S(t)u_0 = S(t)S(s)u_0$;
• the mapping $t\mapsto S(t)u_0$ is continuous.
An operator $S$ satisfying these three conditions is called a semigroup operator. In PDE theory, in particular for the initial value problem
$$\frac{\partial\varphi(t,x)}{\partial t} = \mathcal L^*\varphi(t,x),\qquad \varphi(0,x)=p_0(x)$$
which for each fixed $x$ can be treated like the ODE problem (4.8), the solution may likewise be represented as
$$\varphi(t,x) = T_t\,p_0(x)$$
for all $(t,x)$ in the domain of $\varphi$, where $T$ is the semigroup operator corresponding to $\mathcal L^*$. There are many helpful properties the semigroup satisfies beyond its definition; we mainly use its continuity property in the PDE arguments below.

4.1.3 Semigroup Representation of the Solutions

Assume the multi-index $\alpha$ satisfies $|\alpha| = \sum_l\alpha_l < \infty$. As derived earlier, the recursive numerical scheme can be written as
$$\frac{\partial\varphi_\alpha(t,x)}{\partial t} = \mathcal L^*\varphi_\alpha(t,x) + \int_{\mathbb R^d}\sum_{l:\,\alpha_l\ge 1}\alpha_l\big(\varphi_{\alpha(l)}(t,x+z)-\varphi_{\alpha(l)}(t,x)\big)\,\delta_l(t,z)\,\nu(dz)$$
$$\varphi_\alpha(0,x) = p_0(x)\,\mathbf 1_{|\alpha|=0} \qquad(4.10)$$
where $\alpha(l)$ corresponds to the multi-index of size $|\alpha|-1$ whose $k$-th component satisfies $\alpha_k(l)=\alpha_k$ for $k\ne l$ and $\max\{0,\alpha_k-1\}$ for $k=l$. The case $|\alpha|=0$ is obviously the simplest:
$$\frac{\partial\varphi_0(t,x)}{\partial t} = \mathcal L^*\varphi_0(t,x),\qquad \varphi_0(0,x)=p_0(x)$$
Semigroup theory tells us that the solution of this initial value problem can be written as
$$\varphi_0(t,x) = T_t\,p_0(x) \qquad(4.11)$$
The $|\alpha|=0$ case is what differential equation theory usually calls "homogeneous." To apply the semigroup to the other, "nonhomogeneous," equations, think first about the following easy ordinary differential equation problems as simple examples:
$$\dot x - ax = 0,\qquad x(0)=c \qquad(4.12)$$
This initial value problem gives the solution $x(t)=e^{at}c$. For the "nonhomogeneous" problem
$$\dot x - ax = f(t),\qquad x(0)=0 \qquad(4.13)$$
multiplying both sides by the integrating factor $e^{-at}$ gives
$$x(t) = \int_0^t e^{a(t-\tau)}f(\tau)\,d\tau$$
In these two problems, the quantity $e^{at}$ behaves exactly as the semigroup operator does in the partial differential equations we are working on.
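A quick numerical confirmation of the variation-of-constants formula for (4.13) — a sketch in Python with illustrative choices $a=-0.7$ and $f(t)=\sin 3t$ (any square-integrable forcing would do):

```python
import numpy as np

a, T, n = -0.7, 2.0, 20_001
t = np.linspace(0.0, T, n)
dt = t[1] - t[0]
f = np.sin(3.0 * t)

# forward Euler for x' = a x + f(t), x(0) = 0
x = np.zeros(n)
for i in range(n - 1):
    x[i + 1] = x[i] + dt * (a * x[i] + f[i])

# Duhamel representation x(T) = int_0^T e^{a(T - tau)} f(tau) dtau (trapezoid rule)
g = np.exp(a * (T - t)) * f
duhamel = np.sum((g[1:] + g[:-1]) / 2) * dt

print(abs(x[-1] - duhamel))  # small: only the Euler discretization error remains
```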
In other words, for the homogeneous ODE we have $x(t)=S(t)c$, and for the nonhomogeneous case
$$x(t) = \int_0^t S(t-\tau)\,f(\tau)\,d\tau$$
Compared with the ODE cases, we obtain the following result for the PDE with $|\alpha|=1$, where $\alpha_k=1$ for some $k\ge 1$:
$$\frac{\partial\varphi_k(t,x)}{\partial t} = \mathcal L^*\varphi_k(t,x) + \int_{\mathbb R^d}\big[\varphi_0(t,x+z)-\varphi_0(t,x)\big]\,\delta_k(t,z)\,\nu(dz),\qquad \varphi_k(0,x)=0$$
with the corresponding semigroup representation
$$\varphi_k(t,x) = \int_0^t\!\int_{\mathbb R^d}T_{t-s_1}\big(D_zT_{s_1}p_0(x)\big)\,\delta_k(s_1,z)\,\nu(dz)\,ds_1 \qquad(4.14)$$
where $D_z(\cdot)$ is the "difference" operator in the spatial variable, i.e. $D_z(f(t,x)) = f(t,x+z)-f(t,x)$. By the similarity with the ODE problem, the function $\int_{\mathbb R^d}(D_zT_tp_0(x))\,\delta_k(t,z)\,\nu(dz)$ behaves as the nonhomogeneous "tail" function $f$ in the ODE example (4.13); this explains the validity of formula (4.14).

Generally, for any given $\alpha$ with $|\alpha|=m$, we first introduce several useful notations before presenting the semigroup representation of the coefficients. Let $(i_1,i_2,\dots,i_m)$ be the characteristic set of the multi-index $\alpha$, and let $P(m)$ be the set of all permutations of $\{1,2,\dots,m\}$; the group property of $P(m)$ will help us later in this section. By induction from the cases $|\alpha|=0$ and $|\alpha|=1$, we have:

Theorem 4.3. For $|\alpha|=m$,
$$\varphi_\alpha(t,x) = \sum_{\sigma\in P(m)}\int_0^t\!\int_{\mathbb R^d}\!\cdots\!\int_0^{s_2}\!\int_{\mathbb R^d}T_{t-s_m}D_{z_m}T_{s_m-s_{m-1}}D_{z_{m-1}}\cdots T_{s_2-s_1}D_{z_1}T_{s_1}p_0(x)\cdot\delta_{i_{\sigma(1)}}(s_1,z_1)\cdots\delta_{i_{\sigma(m)}}(s_m,z_m)\,\nu(dz_1)ds_1\cdots\nu(dz_m)ds_m \qquad(4.15)$$
We also introduce
$$\nu(d\tilde z)\times d\tilde s := \nu(dz_1)ds_1\cdots\nu(dz_m)ds_m,\qquad F(t,s^m,z^m,x) := T_{t-s_m}D_{z_m}T_{s_m-s_{m-1}}D_{z_{m-1}}\cdots T_{s_2-s_1}D_{z_1}T_{s_1}p_0(x)$$
and
$$E_\alpha(s^m,z^m) := \sum_{\sigma\in P(m)}\delta_{i_1}(s_{\sigma(1)},z_{\sigma(1)})\cdots\delta_{i_m}(s_{\sigma(m)},z_{\sigma(m)}) \qquad(4.16)$$
Therefore (4.15) simplifies to
$$\varphi_\alpha(t,x) = \int_0^t\!\int_{\mathbb R^d}\!\cdots\!\int_0^{s_2}\!\int_{\mathbb R^d}F(t,s^m,z^m,x)\,E_\alpha(s^m,z^m)\,\nu(d\tilde z)\times d\tilde s \qquad(4.17)\qquad\square$$

Remark 4.4. In (4.16) it is easy to see that
$$E_\alpha(s^m,z^m) = \sum_{\sigma\in P(m)}\delta_{i_{\sigma^{-1}(1)}}(s_1,z_1)\cdots\delta_{i_{\sigma^{-1}(m)}}(s_m,z_m) \qquad(4.18)$$
since group theory guarantees that $\sigma$ and $\sigma^{-1}$ are both in $P(m)$; that is why we can replace the form appearing in (4.15) with (4.16). $\square$

Let us simplify notation by writing
$$\int^{(m)} := \int_0^t\!\int_{\mathbb R^d}\!\cdots\!\int_0^{s_2}\!\int_{\mathbb R^d}$$
to clarify the $m$ "layers" of double integrals. We already have
$$\varphi_\alpha(t,x) = \int^{(m)}F(t,s^m,z^m,x)\,E_\alpha(s^m,z^m)\,\nu(d\tilde z)\times d\tilde s \qquad(4.19)$$
Now introduce
$$G(t,s^m,z^m,x) := \sum_{\sigma\in P(m)}T_{t-s_{\sigma(m)}}D_{z_{\sigma(m)}}\cdots T_{s_{\sigma(2)}-s_{\sigma(1)}}D_{z_{\sigma(1)}}T_{s_{\sigma(1)}}p_0(x)\,\mathbf 1_{(s_{\sigma(1)}\le\cdots\le s_{\sigma(m)})} \qquad(4.20)$$
The key point is that for any sequence $s_{\sigma(1)},\dots,s_{\sigma(m)}$ arranged in increasing order, (4.19) gives the same result.

Lemma 4.5. For $|\alpha|=m$ we have the equality
$$\varphi_\alpha(t,x) = \frac{1}{m!}\int^{(m)}G(t,s^m,z^m,x)\,E_\alpha(s^m,z^m)\,\nu(d\tilde z)\times d\tilde s \qquad(4.21)$$
Proof: the sum $\sum_{\sigma\in P(m)}$ operates on the permutation group of $\{1,2,\dots,m\}$, with $m!$ elements. As mentioned, in formula (4.20) we only admit the increasing ordering of any given sequence $s_{\sigma(1)},\dots,s_{\sigma(m)}$; therefore all $m!$ terms in (4.21) are the same and equal $\int^{(m)}F\,E_\alpha\,\nu(d\tilde z)\times d\tilde s$. $\square$

Moreover, thanks to the orthogonality of the basis functions $\{\delta_k(s,z)\}$, we claim that
$$\frac{E_\alpha}{\sqrt{\alpha!}\,\sqrt{m!}} \qquad(4.22)$$
forms a complete orthonormal system for the symmetrization of $L^2\big(([0,T]\times\mathbb R^d)^m\big)$. We have the following theorem:

Theorem 4.6.
$$G = G(t,s^m,z^m,x) = \sum_{|\alpha|=m}c_\alpha\,\frac{E_\alpha}{\sqrt{\alpha!\,m!}} \qquad(4.23)$$
where the coefficient $c_\alpha$ is obtained from $\langle G,E_\alpha\rangle = c_\alpha\sqrt{\alpha!\,m!}$. Furthermore,
$$\sum_{|\alpha|=m}\frac{\varphi_\alpha^2}{\alpha!} = \int^{(m)}|F(t,s^m,z^m,x)|^2\,\nu(d\tilde z)\times d\tilde s \qquad(4.24)$$
Proof: as always in Hilbert spaces, the basis function $E_\alpha/\sqrt{\alpha!\,m!}$ is normalized.
We can quickly conclude that $\langle G,\,E_\alpha/\sqrt{\alpha!\,m!}\rangle = c_\alpha$, hence $\langle G,E_\alpha\rangle = c_\alpha\sqrt{\alpha!\,m!}$. Also, from (4.21), the inner product satisfies $\langle G,E_\alpha\rangle = m!\,\varphi_\alpha$. Relating these two equalities,
$$\frac{\varphi_\alpha^2}{\alpha!} = \frac{c_\alpha^2}{m!}\qquad\text{and}\qquad\sum_{|\alpha|=m}\frac{\varphi_\alpha^2}{\alpha!} = \frac{1}{m!}\sum_{|\alpha|=m}c_\alpha^2$$
Going back to the expansion formula of $G$, we know that
$$\sum_{|\alpha|=m}c_\alpha^2 = \|G(t,\cdot,\cdot,x)\|^2_{L^2} = \int^{(m)}|G(t,s^m,z^m,x)|^2\,\nu(d\tilde z)\times d\tilde s$$
and
$$|G(t,s^m,z^m,x)|^2 = \sum_{\sigma\in P(m)}\big|T_{t-s_{\sigma(m)}}D_{z_{\sigma(m)}}\cdots T_{s_{\sigma(2)}-s_{\sigma(1)}}D_{z_{\sigma(1)}}T_{s_{\sigma(1)}}p_0(x)\big|^2\,\mathbf 1_{(s_{\sigma(1)}\le\cdots\le s_{\sigma(m)})}$$
since "two permutations cannot order the times increasingly at the same time." We finally reach
$$\sum_{|\alpha|=m}\frac{\varphi_\alpha^2}{\alpha!} = \frac{1}{m!}\sum_{|\alpha|=m}c_\alpha^2 = \frac{1}{m!}\int^{(m)}|G|^2\,\nu(d\tilde z)\times d\tilde s = \int^{(m)}|F|^2\,\nu(d\tilde z)\times d\tilde s\qquad\square$$

4.1.4 The Major Inequality

Now let us go back to (4.7); we are ready to give the estimate of $E[\|u(t,\cdot)-u_N(t,\cdot)\|^2_{L^2(\mathbb R^d)}]$.

Theorem 4.7. For a given integer $N$, the approximate solution $u_N(t,x)$ satisfies the inequality
$$E\big[\|u(t,\cdot)-u_N(t,\cdot)\|^2_{L^2(\mathbb R^d)}\big] \le e^{(C+M)t}\,\frac{(Mt)^{N+1}}{(N+1)!}\,\|p_0\|^2_{H^1(\mathbb R^d)} \qquad(4.25)$$
for some positive constants $C$ and $M$.
Note: $\|\cdot\|_{H^1(\mathbb R^d)}$ refers to the Sobolev norm of the function, where the derivatives are taken only in the spatial variables, i.e. for $f=f(t,x)$ with $x=(x_1,\dots,x_d)$,
$$\|f\|_{H^1(\mathbb R^d)} = \Big[\int_{\mathbb R^d}f^2\,dx + \sum_{i=1}^d\int_{\mathbb R^d}\Big(\frac{\partial f}{\partial x_i}\Big)^2dx\Big]^{1/2}$$
assuming $f$ has sufficient smoothness on $\mathbb R^d$. We will see in the proofs that this "norm" serves only as an intermediate device in the inequalities; for a function depending only on the spatial variables it coincides with the actual Sobolev norm.

Proof: in $u_N(t,x)$ we restrict the size of the multi-index $\alpha$ to be at most $N$. First notice
$$E|u(t,x)-u_N(t,x)|^2 = \sum_{m>N}\sum_{|\alpha|=m}\frac{\varphi_\alpha^2(t,x)}{\alpha!} = \sum_{m>N}\int^{(m)}|F(t,s^m,z^m,x)|^2\,\nu(d\tilde z)\times d\tilde s$$
using the conclusion (4.24). Integrating over $x$ to get the $L^2$-norm in the spatial variable,
$$E\big[\|u(t,\cdot)-u_N(t,\cdot)\|^2_{L^2(\mathbb R^d)}\big] = \sum_{m>N}\int^{(m)}\|F(t,s^m,z^m,\cdot)\|^2_{L^2(\mathbb R^d)}\,\nu(d\tilde z)\times d\tilde s \qquad(4.26)$$
One important property of the semigroup operator is its boundedness, in the following sense. Let $v$ solve the initial value problem $v_t = \mathcal L^*v$, $v(0,x)=f(x)$, so that $v(t,x)=T_tf(x)$; the "energy estimate" of PDE theory gives
$$\|T_tf\|^2_{L^2(\mathbb R^d)} \le e^{Ct}\|f\|^2_{L^2(\mathbb R^d)} \qquad(4.27)$$
Furthermore, extending this conclusion to the initial value problem $(v_x)_t = \mathcal L^*(v_x)$, $v_x(0,x)=f'(x)$, we obtain a similar estimate for $T_tf'$ as in (4.27). In the end, the two estimates combine and give
$$\|T_tf\|^2_{H^1(\mathbb R^d)} \le e^{Ct}\|f\|^2_{H^1(\mathbb R^d)} \qquad(4.28)$$
Now let us use this inequality on $F(t,s^m,z^m,\cdot)$; for simplicity write
$$F(t,s^m,z^m,\cdot) = T_{t-s_m}D_{z_m}T_{s_m-s_{m-1}}D_{z_{m-1}}\cdots T_{s_2-s_1}D_{z_1}T_{s_1}p_0(\cdot) =: T_{t-s_m}D_{z_m}g_{m-1}(s^{m-1},z^{m-1},\cdot)$$
Note: the notation $s^{m-1}$ means that only $s_1,s_2,\dots,s_{m-1}$ are involved, and similarly for $z^{m-1}$.
We provide the estimate of the $L^2$-norm of $F$:
$$\|F(t,s^m,z^m,\cdot)\|^2_{L^2(\mathbb R^d)} = \|T_{t-s_m}D_{z_m}g_{m-1}\|^2_{L^2(\mathbb R^d)} \le e^{C(t-s_m)}\|D_{z_m}g_{m-1}\|^2_{L^2(\mathbb R^d)}$$
A simple Taylor expansion tells us that
$$|D_{z_m}g_{m-1}|^2 = |g_{m-1}(s^{m-1},z^{m-1},x+z_m)-g_{m-1}(s^{m-1},z^{m-1},x)|^2 = |\nabla_xg_{m-1}\cdot z_m + o(|z_m|)|^2 \le 2|\nabla_xg_{m-1}|^2|z_m|^2 + o(|z_m|^2)$$
Thus we keep estimating $\|D_{z_m}g_{m-1}\|^2_{L^2}$ by
$$\|D_{z_m}g_{m-1}\|^2_{L^2(\mathbb R^d)} \le C_1\,\|g_{m-1}\|^2_{H^1(\mathbb R^d)}\,|z_m|^2$$
the first-order derivative (gradient) forcing the $H^1$-norm into consideration. The structure of $g_j$ for $j=m-2,m-3,\dots$ allows us to repeat this process for each remaining semigroup and difference operator; the point to watch is that at each stage one factor $|z_j|^2$ comes out. Putting all this back into (4.26):
$$\int^{(m)}\|F\|^2_{L^2}\,\nu(d\tilde z)\times d\tilde s \le e^{C(t-s_m+s_m-s_{m-1}\cdots-s_1+s_1)}\int_0^t\!\int_{\mathbb R^d}\!\cdots\!\int_0^{s_2}\!\int_{\mathbb R^d}\|p_0\|^2_{H^1}\,|z_1|^2\nu(dz_1)ds_1\cdots|z_m|^2\nu(dz_m)ds_m$$
$$= e^{Ct}M^m\|p_0\|^2_{H^1}\int_0^t\!\int_0^{s_m}\!\cdots\!\int_0^{s_2}ds_1\cdots ds_{m-1}ds_m = e^{Ct}\,\frac{(Mt)^m}{m!}\,\|p_0\|^2_{H^1}$$
where we used the assumption on the second moment of the compensator $\nu(\cdot)$,
$$M = \int_{\mathbb R^d}|z|^2\,\nu(dz) < \infty$$
the constant $C_1$ being merged into $M$. Taking the sum from $m=N+1$, we finally obtain
$$E\big[\|u(t,\cdot)-u_N(t,\cdot)\|^2_{L^2(\mathbb R^d)}\big] \le \sum_{m=N+1}^\infty e^{Ct}\,\frac{(Mt)^m}{m!}\,\|p_0\|^2_{H^1} \le e^{(C+M)t}\,\frac{(Mt)^{N+1}}{(N+1)!}\,\|p_0\|^2_{H^1}$$
which completes the proof. $\square$

4.2 Estimates Obtained by Restricting $|\alpha|$ and $\mathrm{Index}(\alpha)$
4.2.1 Main Theorem

As mentioned earlier, we now estimate $E[\|u^n_N(t,\cdot)-u_N(t,\cdot)\|^2_{L^2(\mathbb R^d)}]$. For a given multi-index $\alpha$ with $|\alpha|=m$, let $(i_1,i_2,\dots,i_m)$ be its characteristic set. We have proved that the chaos expansion coefficient can be represented by
$$\varphi_\alpha(t,x) = \int^{(m)}F(t,s^m,z^m,x)\,E_\alpha(s^m,z^m)\,\nu(d\tilde z)\times d\tilde s$$
Let us calculate
$$E|u^n_N(t,x)-u_N(t,x)|^2 = \sum_{l=n+1}^\infty\sum_{m=1}^N\sum_{|\alpha|=m,\,i_m=l}\frac{\varphi_\alpha^2(t,x)}{\alpha!} \qquad(4.29)$$
The main theorem of this section is the following:

Theorem 4.8.
$$E\big[\|u^n_N(t,\cdot)-u_N(t,\cdot)\|^2_{L^2(\mathbb R^d)}\big] = \sum_{l=n+1}^\infty\sum_{m=1}^N\sum_{|\alpha|=m,\,i_m=l}\frac{\|\varphi_\alpha(t,\cdot)\|^2_{L^2(\mathbb R^d)}}{\alpha!} \le \frac{C_1\,t^3\,e^{C_2t}}{n}\,\|p_0\|^2_{H^1(\mathbb R^d)}$$
where we require that, in the CONS basis function $\delta_k(t,x)$, the time factor be chosen as
$$h_1(t) = \frac{1}{\sqrt T},\qquad h_k(t) = \sqrt{\frac2T}\cos\Big(\frac{\pi(k-1)t}{T}\Big),\quad k\ge 2$$
Note: we will show later that, depending on the values of $t$, the power of $t$ appearing on the right-hand side MAY vary; that is, we are going to identify the dominant power of $t$ that decides the growth rate of the error. The proof of this theorem takes a long time; we attain it in several steps.

4.2.2 Useful Notation for the Time Variables

First let us prepare some preliminary formulas that are very helpful. We give an "iterative" scheme for $E_\alpha$, writing it in terms of $E_{\alpha(l)}$, where the latter is the multi-index of size $|\alpha|-1$ with components defined as for the recursive numerical scheme.

Lemma 4.9.
$$E_\alpha(s^m,z^m) = \sum_{j=1}^m\delta_{i_m}(s_j,z_j)\,E_{\alpha(i_m)}(s^m_j,z^m_j) \qquad(4.30)$$
where $\alpha(i_m)$ is the multi-index with characteristic set $(i_1,i_2,\dots,i_{m-1})$, and $s^m_j$ (resp. $z^m_j$) denotes the "reduced" time (resp. space) vector with the $j$-th entry removed, i.e. $s^m_j = (s_1,s_2,\dots,s_{j-1},s_{j+1},\dots,s_m)$.
Proof: this is rather straightforward from the definition of $E_\alpha$. $\square$
With the help of the lemma just shown, we can change the semigroup representation of $\varphi_\alpha$ into
$$\varphi_\alpha(t,x) = \sum_{j=1}^m\int^{(m-1)}\Big[\int_{s_{j-1}}^{s_{j+1}}\!\int_{\mathbb R^d}F(t,s^m,z^m,x)\,\delta_{i_m}(s_j,z_j)\,\nu(dz_j)\,ds_j\Big]\,E_{\alpha(i_m)}(s^m_j,z^m_j)\,\nu(dz^m_j)\times ds^m_j \qquad(4.31)$$
Note the three "layers" of integrals: the outermost consists of the $m-1$ pairs $\nu(dz^m_j)\times ds^m_j$; the middle is for the time variable $s_j$, which, though taken out of the time vector, still satisfies $s_{j-1}\le s_j\le s_{j+1}$; the innermost is for the spatial variable $z_j$.

We first estimate the integrals inside the bracket of (4.31). By Fubini's theorem,
$$\int_{s_{j-1}}^{s_{j+1}}\!\int_{\mathbb R^d}F\,\delta_{i_m}(s_j,z_j)\,\nu(dz_j)ds_j = \int_{\mathbb R^d}\!\int_{s_{j-1}}^{s_{j+1}}F\,\delta_{i_m}(s_j,z_j)\,ds_j\,\nu(dz_j)$$
We claim there exists a function $\Delta_{i_m}(s,z)$ such that
$$\frac{\partial\Delta_{i_m}(s,z)}{\partial s} = \delta_{i_m}(s,z)$$
The validity of this claim is due to the construction of the basis $\{\delta_k(s,z)\}$, obtained as a tensor product of CONS bases for the time and spatial variables, respectively. The construction relating to the spatial variable $z$ can be quite complicated, depending on the dimension of the space we work on; but for the time variable $s$ it is very natural to choose the standard basis of Fourier analysis,
$$h_k(s) = \sqrt{\frac2t}\cos\Big(\frac{\pi(k-1)}{t}s\Big)$$
which (together with $h_1\equiv 1/\sqrt t$) formulates a CONS on $L^2([0,t])$. The basis function $\delta_k(s,z)$ is obtained as
$$\delta_k(s,z) = h_k(s)\,\tilde g_k(z)$$
where the set $\{\tilde g_k(\cdot)\}$ formulates a CONS on $L^2(\nu;\mathbb R^d)$. Note: it is usually NOT true that a CONS on $L^2([0,t])$ and a CONS on $L^2(\nu;\mathbb R^d)$ multiplied together DIRECTLY give $\{\delta_k\}$; the set-up of $\{\delta_k\}$ requires re-organizing the indices of $\{h_k\}$ and $\{\tilde g_k\}$. But we can still write the product as above, since both indices are ultimately functions of $k$: for example (see Øksendal's [12]), it has been checked that $\delta_k(s,z) = h_i(s)\tilde g_j(z)$ where a one-to-one mapping $(i,j)\leftrightarrow k$ has been constructed.

From now on we use the basis function for the "time part" as introduced above and simply assume the existence of the basis function for the "spatial part." Let
$$\delta_k(s,z) = \sqrt{\frac2t}\cos\Big(\frac{\pi(k-1)}{t}s\Big)\tilde g_k(z),\qquad\text{so that}\qquad \Delta_k(s,z) = \frac{\sqrt{2t}}{\pi(k-1)}\sin\Big(\frac{\pi(k-1)}{t}s\Big)\tilde g_k(z)$$
and write
$$H_k(s) := \frac{\sqrt{2t}}{\pi(k-1)}\sin\Big(\frac{\pi(k-1)}{t}s\Big)$$
Using integration by parts, let us go back to the integral we are working on:
$$\int_{\mathbb R^d}\!\int_{s_{j-1}}^{s_{j+1}}F\,\delta_{i_m}(s_j,z_j)\,ds_j\,\nu(dz_j) = \int_{\mathbb R^d}F\,\big[\Delta_{i_m}(s_j,z_j)\big]_{s_{j-1}}^{s_{j+1}}\,\nu(dz_j) \qquad(4.32)$$
$$\qquad - \int_{\mathbb R^d}\!\int_{s_{j-1}}^{s_{j+1}}F_j\,\Delta_{i_m}(s_j,z_j)\,ds_j\,\nu(dz_j) \qquad(4.33)$$
where $F_j(t,s^m,z^m,x) = \frac{\partial}{\partial s_j}F(t,s^m,z^m,x)$. Eventually we take the sum over $j$ of (4.32); taking the sum now and writing it out,
$$\sum_{j=1}^m\int_{\mathbb R^d}F\,H_{i_m}(s_j)\big|_{s_{j-1}}^{s_{j+1}}\,\tilde g_{i_m}(z_j)\,\nu(dz_j) \qquad(4.34)$$
$$= \sum_{j=1}^m\Big[\int_{\mathbb R^d}F\,H_{i_m}(s_{j+1})\,\tilde g_{i_m}(z_j)\,\nu(dz_j) \qquad(4.35)$$
$$\qquad - \int_{\mathbb R^d}F\,H_{i_m}(s_{j-1})\,\tilde g_{i_m}(z_j)\,\nu(dz_j)\Big] \qquad(4.36)$$
Note: although not written explicitly, $s_{j+1}$ and $s_{j-1}$ have been plugged into the formulas of $F$ ("hidden" in the $s^m$ slots) in these two integrals, respectively.
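A short numerical sketch in Python of the time basis and its antiderivative: orthonormality of $h_k$ on $[0,T]$, and the fact that $H_k$ vanishes at both endpoints — the property that drives the cancelation in the next subsection.

```python
import numpy as np

T = 1.7
s = np.linspace(0.0, T, 100_001)

def trap(f):  # trapezoid quadrature on the grid s
    return np.sum((f[1:] + f[:-1]) / 2) * (s[1] - s[0])

def h(k):  # h_1 = 1/sqrt(T); h_k = sqrt(2/T) cos(pi (k-1) s / T) for k >= 2
    return np.full_like(s, T**-0.5) if k == 1 else np.sqrt(2/T) * np.cos(np.pi*(k-1)*s/T)

def H(k):  # antiderivative of h_k, k >= 2: sqrt(2T)/(pi(k-1)) sin(pi (k-1) s / T)
    return np.sqrt(2*T)/(np.pi*(k-1)) * np.sin(np.pi*(k-1)*s/T)

for j in (1, 2, 5):
    for k in (1, 2, 5):
        assert abs(trap(h(j) * h(k)) - (j == k)) < 1e-8   # orthonormality
assert abs(H(4)[0]) < 1e-12 and abs(H(4)[-1]) < 1e-12      # H_k(0) = H_k(T) = 0
```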
4.2.3 A Re-labeling Technique for Cancelation

As a technical device, we re-label the set $s^m_j$ as follows:
$$t_i = \begin{cases}s_i & i\le j-1\\ s_{i+1} & i\ge j\end{cases}$$
with the boundary points defined as $t_0=0$, $t_m=t$. By doing this, the set $(s_1,\dots,s_{j-1},s_{j+1},\dots,s_m)$ can be written
$$(t_1,t_2,\dots,t_{j-1},t_j,\dots,t_{m-1}) \qquad(4.37)$$
abbreviated, as before, $t^{m-1}$; it is equivalent to $s^m_j$. In the first part of (4.34), after plugging $s_{j+1}$ into $F$, the time slots read $(s_1,\dots,s_{j-1},s_{j+1},s_{j+1},\dots,s_m)$, the variable $s_j$ having been replaced by $s_{j+1}$; in the new labels this is $(t_1,\dots,t_{j-1},t_j,t_j,t_{j+1},\dots,t_{m-1})$, which, compared with (4.37), has one more entry due to the repetition of $t_j$. Define this vector as $t^{m-1,j}$ — "the $j$-th entry is repeated!" Similarly, in the second part of (4.34), where $s_{j-1}$ was plugged in, the set $(s_1,\dots,s_{j-1},s_{j-1},s_{j+1},\dots,s_m)$ equals $(t_1,\dots,t_{j-1},t_{j-1},t_j,\dots,t_{m-1}) = t^{m-1,j-1}$ by our definition. Moreover, denote $t^{m-1,0} = (t_0,t_1,\dots,t_{m-1})$ and $t^{m-1,m} = (t_1,\dots,t_{m-1},t_m)$, so that the dimension of such vectors is always $m$. After all these preparations, (4.31) can be written as
$$\sum_{j=1}^m\Big\{\int^{(m-1)}\!\int_{\mathbb R^d}F(t,t^{m-1,j},z^m,x)\,H_{i_m}(t_j)\,\tilde g_{i_m}(z_j)\,\nu(dz_j)\;E_{\alpha(i_m)}(t^{m-1},z^m_j)\,\nu(dz^m_j)\times dt^{m-1} \qquad(4.38\text{–}4.39)$$
$$\qquad - \int^{(m-1)}\!\int_{\mathbb R^d}F(t,t^{m-1,j-1},z^m,x)\,H_{i_m}(t_{j-1})\,\tilde g_{i_m}(z_j)\,\nu(dz_j)\;E_{\alpha(i_m)}(t^{m-1},z^m_j)\,\nu(dz^m_j)\times dt^{m-1}\Big\} \qquad(4.40\text{–}4.41)$$
$$\qquad + \int^{(m-1)}f_{i_m}(t,t^{m-1},z^m_\cdot,x)\,E_{\alpha(i_m)}(t^{m-1},z^m_\cdot)\,\nu(dz^m_\cdot)\times dt^{m-1} \qquad(4.42)$$
where $f_{i_m}(t,t^{m-1},z^m_\cdot,x)$ is defined (using the newly re-labeled system for the $t$-vector) by
$$f_{i_m} = -\int_0^{t_1}\!\int_{\mathbb R^d}F_1(t,\tau,t^{m-1},z^m,x)\,\Delta_{i_m}(\tau,z_1)\,\nu(dz_1)d\tau - \sum_{j=2}^{m-1}\int_{t_{j-1}}^{t_j}\!\int_{\mathbb R^d}F_j(t,\dots,t_{j-1},\tau,t_j,\dots,z^m,x)\,\Delta_{i_m}(\tau,z_j)\,\nu(dz_j)d\tau - \int_{t_{m-1}}^{t_m=t}\!\int_{\mathbb R^d}F_m(t,t^{m-1},\tau,z^m,x)\,\Delta_{i_m}(\tau,z_m)\,\nu(dz_m)d\tau$$
Claim: by re-labeling the vector $s$ into $t$ in this way, the terms (4.39) and (4.41) are canceled by terms with successive indices $j$ and opposite signs. The involvement of the $z$'s does not affect this result, since the basis functions are set up symmetric in $(z_1,z_2,\dots,z_m)$. Besides, since the basis functions are separable in the time and spatial parts, we can let all the integrals over the $z$-variables go first (making all $z$-variables disappear) before considering the sum with respect to $j$. One should also realize that the choice of the basis functions $\{h_k(s)\}_k$ is what produces this property at the two boundary values, giving complete cancelation of the terms:
$$H_{i_m}(t_0) = H_{i_m}(t_m) = 0$$
Note: the reason we write $z^m_\cdot$ in (4.42) is that, after summing those $m$ integral terms, one sees from the detailed formula of $f$ that knowing which individual $z_j$ has been taken out does NOT really matter: the $z$'s are symmetric and all belong to $\mathbb R^d$. Later we prove an important estimate for $f$ and use it to bound the error we are working on. By the cancelation of (4.39) and (4.41), we can prove:

Theorem 4.10.
$$\sum_{|\alpha|=m,\,i_m=l}\frac{\varphi_\alpha^2(t,x)}{\alpha!} \le \int^{(m-1)}|f_l(t,t^{m-1},z^m_\cdot,x)|^2\,\nu(dz^m_\cdot)\times dt^{m-1} \qquad(4.43)$$
Proof: we now know that (4.31) can be written as
$$\varphi_\alpha(t,x) = \int^{(m-1)}f_l(t,t^{m-1},z^m_\cdot,x)\,E_{\alpha(l)}(t^{m-1},z^m_\cdot)\,\nu(dz^m_\cdot)\times dt^{m-1}$$
given $i_m=l$.
The detailed formula of $f$ does not include the $m-1$ outer layers of integrals; writing them in,
$$\int^{(m-1)}f_l\,E_{\alpha(l)}\,\nu(dz^m_\cdot)\times dt^{m-1} = -\int^{(m-1)}\!\int_0^{t_1}\!\int_{\mathbb R^d}F_1(t,\tau,t^{m-1},z^m,x)\,\Delta_l(\tau,z_1)\,\nu(dz_1)d\tau\;E_{\alpha(l)}(t^{m-1},z^m_1)\,\nu(dz^m_1)\times dt^{m-1}$$
$$\qquad - \sum_{j=2}^{m-1}\int^{(m-1)}\!\int_{t_{j-1}}^{t_j}\!\int_{\mathbb R^d}F_j(t,\dots,z^m,x)\,\Delta_l(\tau,z_j)\,\nu(dz_j)d\tau\;E_{\alpha(l)}(t^{m-1},z^m_j)\,\nu(dz^m_j)\times dt^{m-1}$$
$$\qquad - \int^{(m-1)}\!\int_{t_{m-1}}^{t_m}\!\int_{\mathbb R^d}F_m(t,t^{m-1},\tau,z^m,x)\,\Delta_l(\tau,z_m)\,\nu(dz_m)d\tau\;E_{\alpha(l)}(t^{m-1},z^m_m)\,\nu(dz^m_m)\times dt^{m-1}$$
Again, the symmetry of the variables $z_1,\dots,z_m$ guarantees that exchanging one particular $z_j$ with any other $z_k$, $j\ne k$, affects none of the multiple integrals above; this is the source of the notation $z^m_\cdot$. We compute
$$\sum_{|\alpha|=m,\,i_m=l}\frac{\varphi_\alpha^2(t,x)}{\alpha!} = \sum_{|\alpha|=m,\,i_m=l}\Big[\frac{1}{\sqrt{\alpha!}}\int^{(m-1)}f_l\,E_{\alpha(l)}\,\nu(dz^m_\cdot)\times dt^{m-1}\Big]^2 \le \sum_{|\beta|=m-1}\Big[\frac{1}{\sqrt{\beta!}}\int^{(m-1)}f_l\,E_\beta\,\nu(dz^m_\cdot)\times dt^{m-1}\Big]^2$$
using $|\alpha(i_m)| = |\alpha|-1$ and $\alpha!\ge\alpha(i_m)!$. The orthogonality of the $E_\beta$ admits that the last term equals
$$\int^{(m-1)}|f_l(t,t^{m-1},z^m_\cdot,x)|^2\,\nu(dz^m_\cdot)\times dt^{m-1}$$
which completes the proof. $\square$

Lemma 4.11. As a prerequisite, we provide the following estimate for $F_j(t,s^m,z^m,x)$, just as we did earlier for $F(t,s^m,z^m,x)$:
$$\|F_j(t,s^m,z^m,\cdot)\|^2_{L^2(\mathbb R^d)} \le \tilde C_1^m\,e^{C_2t}\,\|p_0\|^2_{H^1(\mathbb R^d)}\,\prod_{k=1}^m|z_k|^2 \qquad(4.44)$$
for some positive constants $\tilde C_1,C_2$.
Proof: recall that, for any given function $p(x)$,
$$\frac{\partial T_tp(x)}{\partial t} = \mathcal L^*T_tp(x) = T_t\mathcal L^*p(x)$$
since $\mathcal L^*$ depends only on the spatial variable. Using the product rule of differentiation, we obtain
$$F_j = T_{t-s_m}D_{z_m}\cdots T_{s_{j+1}-s_j}D_{z_j}\mathcal L^*T_{s_j-s_{j-1}}\cdots D_{z_1}T_{s_1}p_0(x) - T_{t-s_m}D_{z_m}\cdots\mathcal L^*T_{s_{j+1}-s_j}D_{z_j}T_{s_j-s_{j-1}}\cdots D_{z_1}T_{s_1}p_0(x)$$
The continuity (boundedness) of the operator $\mathcal L^*$ allows a proof similar to the one for $F$:
$$\|F_j\|^2_{L^2(\mathbb R^d)} \le 4\int_{\mathbb R^d}|T_{t-s_m}D_{z_m}\cdots\mathcal L^*\cdots D_{z_1}T_{s_1}p_0(x)|^2dx \le 2e^{C_2(t-s_m)}\int_{\mathbb R^d}|D_{z_m}\cdots\mathcal L^*\cdots D_{z_1}T_{s_1}p_0(x)|^2dx$$
$$\le 2e^{C_2(t-s_m)}\,\tilde C_1\,\|T_{s_m-s_{m-1}}\cdots\|^2_{H^1(\mathbb R^d)}\,|z_m|^2 \le\cdots\le \tilde C_1^m\,e^{C_2t}\,\|p_0\|^2_{H^1(\mathbb R^d)}\,\prod_{k=1}^m|z_k|^2$$
The factor $2$ and the constant coming from the bound of $\mathcal L^*$ do not matter here, as eventually they are merged into other constants. $\square$

It is now time to give the estimate for $f$.

4.2.4 Final Steps

Theorem 4.12.
Like before, assume $i_m=l$:
$$\|f_l(t,t^{m-1},z^m_\cdot,\cdot)\|^2_{L^2(\mathbb R^d)} \le m^2\,\tilde C_1^m\,e^{C_2t}\,\|p_0\|^2_{H^1(\mathbb R^d)}\,\frac{t^3}{(l-1)^2}\,\prod_{k=1}^{m-1}|z_k|^2 \qquad(4.45)$$
Proof: using the detailed formula of $f$ and the inequality $(a_1+a_2+\cdots+a_m)^2 \le m(a_1^2+a_2^2+\cdots+a_m^2)$ derived from the Cauchy–Schwarz inequality, we obtain
$$|f_l|^2 \le m\Big[\Big(\int_0^{t_1}\!\int_{\mathbb R^d}F_1\,\Delta_l\,\nu(dz_1)d\tau\Big)^2 + \sum_{j=2}^{m-1}\Big(\int_{t_{j-1}}^{t_j}\!\int_{\mathbb R^d}F_j\,\Delta_l\,\nu(dz_j)d\tau\Big)^2 + \Big(\int_{t_{m-1}}^t\!\int_{\mathbb R^d}F_m\,\Delta_l\,\nu(dz_m)d\tau\Big)^2\Big]$$
Applying Hölder's inequality to the terms on the right, this can be enlarged to
$$\le m\Big\{\Big[\int_0^{t_1}\!\int F_1^2\,\nu\,d\tau\Big]\Big[\int_0^{t_1}\!\int\Delta_l^2\,\nu\,d\tau\Big] + \sum_{j=2}^{m-1}\Big[\int_{t_{j-1}}^{t_j}\!\int F_j^2\,\nu\,d\tau\Big]\Big[\int_{t_{j-1}}^{t_j}\!\int\Delta_l^2\,\nu\,d\tau\Big] + \Big[\int_{t_{m-1}}^t\!\int F_m^2\,\nu\,d\tau\Big]\Big[\int_{t_{m-1}}^t\!\int\Delta_l^2\,\nu\,d\tau\Big]\Big\}$$
Now integrate $|f_l|^2$ with respect to $x$ over $\mathbb R^d$ (likewise the integrals on the right involving the $F_j$), and enlarge every time integral to $\int_0^t\cdot\,d\tau$ wherever an integral involving $t_j$ occurs. This yields
$$\|f_l(t,t^{m-1},z^m,\cdot)\|^2_{L^2} \le m\Big[\sum_{j=1}^m t\int_{\mathbb R^d}\|F_j(t,t^m,z^m,\cdot)\|^2_{L^2}\,\nu(dz_j)\cdot\int_0^t\!\int_{\mathbb R^d}\Delta_l^2(\tau,z)\,\nu(dz)d\tau\Big]$$
We can now plug in the estimate for $F_j$ obtained in the lemma. Since the lemma carries $\prod_k|z_k|^2$, each integral against $\nu(dz_j)$ above "consumes" one of the factors $|z_k|^2$, because $\int_{\mathbb R^d}|z_k|^2\nu(dz_k)<\infty$ (the constant thus produced does not show up in the theorem, as it later merges into another constant). After applying the estimate to each $\|F_j\|^2_{L^2}$, a product $\prod_{k\in K}|z_k|^2$ remains, where $K\subset\{1,\dots,m\}$ has size $m-1$; the symmetry of the $z$'s guarantees we may re-label them any way we want. Given the explicit formula $\Delta_l(\tau,z) = H_l(\tau)\tilde g_l(z)$, the integral involving $\Delta_l^2$ can be computed explicitly:
$$\int_0^tH_l^2(\tau)\,d\tau = \frac{2t}{\pi^2(l-1)^2}\int_0^t\sin^2\Big(\frac{\pi(l-1)\tau}{t}\Big)d\tau = \frac{t^2}{\pi^2(l-1)^2},\qquad \int\tilde g_l^2\,d\nu = 1$$
which, combined with the factor $t$ above, is the source of $t^3/(l-1)^2$ (modulo constants). Finally,
$$\|f_l(t,t^{m-1},z^m_\cdot,\cdot)\|^2_{L^2} \le m^2\,\tilde C_1^m\,e^{C_2t}\,\|p_0\|^2_{H^1}\,\frac{t^3}{(l-1)^2}\,\prod_{k=1}^{m-1}|z_k|^2$$
Again, as we said, the index missing among $z_1,\dots,z_m$ need not be $z_m$; we use the term $\prod_{k=1}^{m-1}|z_k|^2$ merely to clarify that this is a product of $m-1$ factors. $\square$

Corollary 4.13.
$$\sum_{|\alpha|=m,\,i_m=l}\frac{\|\varphi_\alpha(t,\cdot)\|^2_{L^2(\mathbb R^d)}}{\alpha!} \le m^2\,\tilde C_1^m\,e^{C_2t}\,\|p_0\|^2_{H^1(\mathbb R^d)}\,\frac{t^{m+2}}{(l-1)^2\,(m-1)!} \qquad(4.46)$$
Proof: combine the estimate (4.43) with the bound (4.45) on the $L^2$-norm of $f$ derived above. When integrating (4.45) over $\int^{(m-1)}\cdot\,\nu(dz^{m-1})\times ds^{m-1}$, one gets $t^{m-1}/(m-1)!$ from the time variables; for the spatial variables, each $M=\int_{\mathbb R^d}|z_j|^2\nu(dz_j)$ is finite by our assumption on the compensator measure $\nu(\cdot)$, and this constant $M$ merges into $\tilde C_1$. $\square$

The proof of the main theorem of this section is almost complete after the following estimates. Sum both sides of (4.46) over $1\le m\le N$ and $l\ge n+1$:
$$E\big[\|u^n_N(t,\cdot)-u_N(t,\cdot)\|^2_{L^2}\big] = \sum_{l\ge n+1}\sum_{m=1}^N\sum_{|\alpha|=m,\,i_m=l}\frac{\|\varphi_\alpha(t,\cdot)\|^2_{L^2}}{\alpha!} \le \tilde C_1\,e^{C_2t}\,t^3\,\|p_0\|^2_{H^1}\Big[\sum_{m\ge 1}\frac{m^2(\tilde C_1t)^{m-1}}{(m-1)!}\Big]\Big[\sum_{l\ge n}\frac{1}{l^2}\Big]$$
The first sum is computed by
$$\sum_{m\ge 1}\frac{m^2(\tilde C_1t)^{m-1}}{(m-1)!} = e^{\tilde C_1t}\sum_{k\ge 0}(k+1)^2e^{-\tilde C_1t}\frac{(\tilde C_1t)^k}{k!}$$
$$= e^{\tilde C_1t}\big(\tilde C_1^2t^2 + 3\tilde C_1t + 1\big)$$
which is just the computation of $E[(X+1)^2]$ for a Poisson random variable $X$ with parameter $\tilde C_1t$. The second sum is bounded by
$$\sum_{l\ge n}\frac{1}{l^2} \le \int_n^\infty\frac{1}{\zeta^2}\,d\zeta = \frac1n$$
which gives the $1/n$ term. Therefore the final estimate is derived as
$$E\big[\|u^n_N(t,\cdot)-u_N(t,\cdot)\|^2_{L^2}\big] \le \frac{C_1\,t^3\,e^{C_2t}}{n}\,\|p_0\|^2_{H^1(\mathbb R^d)} \qquad(4.47)$$
and we claim this holds for $t\in[0,1]$. If $t>1$, we will have
$$E\big[\|u^n_N(t,\cdot)-u_N(t,\cdot)\|^2_{L^2}\big] \le \frac{C_1\,t^5\,e^{C_2t}}{n}\,\|p_0\|^2_{H^1(\mathbb R^d)} \qquad(4.48)$$
the extra $t^2$ coming from the factor $\tilde C_1^2t^2+3\tilde C_1t+1$, which is bounded by a constant for $t\le 1$ but dominated by its quadratic term for $t>1$. The key point is which power of $t$ dominates the growth of the accumulated error together with $e^{Ct}$, according to where $t$ lies — whether the rate of growth is proportional to $t^3e^{Ct}$ or to $t^5e^{Ct}$.

4.3 Combined Error Estimate for the One-Step Approximation

We can now give the "combined" conclusion for the error estimate $E[\|u^n_N(t,\cdot)-u(t,\cdot)\|^2_{L^2}]$.

Theorem 4.14. Using the relation derived in Remark 4.2, we finally obtain
$$E\big[\|u^n_N(t,\cdot)-u(t,\cdot)\|^2_{L^2}\big] \le Be^{Bt}\Big(\frac{(Mt)^{N+1}}{(N+1)!} + \frac{t^3}{n}\Big)\|p_0\|^2_{H^1(\mathbb R^d)} \qquad(4.49)$$
Note: the description of the constant $B$ is not hard to find; one can derive
$$E\|u^n_N-u\|^2_{L^2(\mathbb R^d)} \le 2E\|u^n_N-u_N\|^2_{L^2} + 2E\|u_N-u\|^2_{L^2} \le 2e^{(C+M)t}\frac{(Mt)^{N+1}}{(N+1)!}\|p_0\|^2_{H^1} + 2\,\frac{C_1t^3e^{C_2t}}{n}\|p_0\|^2_{H^1}$$
$$\le \max\{2,2C_1\}\,e^{\max\{C+M,\,C_2\}\,t}\Big(\frac{(Mt)^{N+1}}{(N+1)!}+\frac{t^3}{n}\Big)\|p_0\|^2_{H^1} \le Be^{Bt}\Big(\frac{(Mt)^{N+1}}{(N+1)!}+\frac{t^3}{n}\Big)\|p_0\|^2_{H^1}$$
where $B = \max\{2,2C_1,C+M,C_2\}$. This completes our proof of the one-step error estimation as a whole.

4.4 Reducing the Error Growth Rate by Simulation on Multiple Time Steps

As in the previous sections, our problem is to solve the unnormalized filtering density $u=u(t,x)$ numerically on $[0,T]\times\mathbb R^d$. For any $t\in[0,T]$ we used to approximate $u(t,x)$ by the so-called "ONE-STEP" approach:
$$u^n_N(t,x) = \sum_{\alpha\in J^n_N}\frac{1}{\sqrt{\alpha!}}\varphi_\alpha(t,x)\,C^t_\alpha,\qquad J^n_N = \{\alpha\in J:\ |\alpha|\le N,\ \mathrm{Index}(\alpha)\le n\}$$
But the growth rate of this ONE-STEP approximation cannot satisfy us. In particular, if the upper bound $T$ of the time interval is taken large, as we will show later in the numerical experiments, the simulated solution $u^n_N(T,x)$ displays a very bad (even terrible) approximation of the exact solution. This is because the error accumulation formula carries a term $e^{CT}$ together with $T^3$ (or, even worse, $T^5$). Therefore our main task in this section is to change the numerical scheme a little, so as to reduce the order of the error accumulation.
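To see concretely how the one-step bound deteriorates in $t$, one can evaluate the right side of (4.49); a sketch in Python with illustrative (not derived) constants $B=M=1$ and $\|p_0\|^2_{H^1}=1$:

```python
from math import exp, factorial

def one_step_bound(t, N, n, B=1.0, M=1.0):
    """Right side of (4.49) with ||p_0||_{H^1}^2 = 1; B and M are placeholders."""
    return B * exp(B * t) * ((M * t)**(N + 1) / factorial(N + 1) + t**3 / n)

for t in (0.5, 2.0, 8.0):
    print(t, one_step_bound(t, N=6, n=50))   # grows rapidly with the horizon t
```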
4.4.1 Basic Setup and Assumptions

Let $K>0$ be a positive integer. The basic idea of the alternative numerical scheme is to cut the whole "time line" $[0,T]$ into subintervals
$$0 = t_0 \le t_1 \le\cdots\le t_K = T$$
Then, on each small time interval $[t_{i-1},t_i]$, we can still solve the unnormalized filtering density through the Poisson chaos expansion we have been dealing with for a long while. We assume the step size of the partition satisfies
$$\Delta = t_i - t_{i-1} = \frac TK,\qquad \forall i\in\{1,2,\dots,K\}$$
What we want to compute now are approximate solutions of $u(t,x)$ at the moments $t_1,t_2,\dots,t_K$. Based on the basic chaos expansion formula, we define
$$u(t_i,x) = \sum_\alpha\frac{1}{\sqrt{\alpha!}}\,\varphi^i_\alpha(\Delta,x)\,C^i_\alpha \qquad(4.50)$$
Note: in this expansion, the moment $t_{i-1}$ acts as the initial moment. We now provide more explanations for all the notation involved in this formula:

• The basis functions, which we called $\{\delta_k(t,x)\}$ in previous derivations, must be adjusted to form a CONS on $L^2([t_{i-1},t_i]\times\mathbb R^d)$, where $\Delta = t_i-t_{i-1}$. Therefore we define
$$\delta^i_k(t,x) = \tilde\delta_k(t-t_{i-1},x) = \tilde\delta_k(s,x)$$
for $s = t-t_{i-1}\in[0,\Delta]$. We created the notation $\tilde\delta$ because, in the basis function for the time variable, instead of the CONS choice on $L^2([0,T])$ we choose a basis on $L^2([0,\Delta])$. Assuming all time partitions have equal step size, the same $\tilde\delta$ functions serve on every subinterval. For example,
$$\tilde h_1(s) = \frac{1}{\sqrt\Delta},\qquad \tilde h_k(s) = \sqrt{\frac2\Delta}\cos\Big(\frac{\pi(k-1)}{\Delta}s\Big),\quad k\ge 2$$
formulate the CONS basis on $L^2([0,\Delta])$. In the following arguments we use $t$ (and the function $\delta^i_k$) whenever time runs over $[t_{i-1},t_i]$, and $s$ (and the function $\tilde\delta_k$) whenever time runs over $[0,\Delta]$, to clarify the two labeling systems.

• The Charlier polynomials: previously these were derived from solving
$$dP_t(z) = \int_{\mathbb R^d}\delta_z(t^-,x)\,P_{t^-}(z)\,\tilde N(dt,dx),\qquad P_0(z)=1$$
on the whole of $[0,T]$. The Charlier polynomials we want to apply now are all focused on the interval $[0,\Delta]$:
$$d\bar P_s(z) = \int_{\mathbb R^d}\tilde\delta_z(s^-,x)\,\bar P_{s^-}(z)\,\tilde N(ds,dx),\qquad \bar P_0(z)=1$$
For example, assume $|\alpha|=1$ with $\alpha_k=1$ for some integer $k\ge 1$. The old Charlier polynomial for this example was
$$C^t_k = \frac{\tilde C^t_k}{\sqrt{1!}} = \int_0^t\!\int_{\mathbb R^d}\delta_k(\tau,z)\,\tilde N(d\tau,dz)$$
Instead we should now use
$$C^i_k = \frac{\tilde C^i_k}{\sqrt{1!}} = \int_{t_{i-1}}^{t_i}\!\int_{\mathbb R^d}\delta^i_k(\tau,z)\,\tilde N(d\tau,dz) = \int_0^\Delta\!\int_{\mathbb R^d}\tilde\delta_k(s,z)\,\tilde N(ds,dz)$$
and similarly for all other higher-order cases, the Charlier polynomials become
$$C^i_\alpha = \frac{\tilde C^i_\alpha}{\sqrt{\alpha!}},\qquad \tilde C^i_\alpha = \frac{\partial^\alpha}{\partial z^\alpha}\bar P_\Delta(z)\Big|_{z=0} \qquad(4.51)$$

• The equation for the coefficient $\varphi^i(s,x,z)$, defined by $\varphi^i(s,x,z) = E[\bar u(s,x)\bar P_s(z)]$ where $\bar u(s,x) = u(t_{i-1}+s,x) = u(t,x)$ for $0\le s\le\Delta$ and $t_{i-1}\le t\le t_i$. It can be shown that $\bar u$ satisfies essentially the same equation as $u$ does. Thus, by Itô's formula,
$$\frac{\partial\varphi^i(s,x,z)}{\partial s} = \mathcal L^*\varphi^i(s,x,z) + \int_{\mathbb R^d}\big[\varphi^i(s,x+y,z)-\varphi^i(s,x,z)\big]\,\tilde\delta_z(s,y)\,\nu(dy) \qquad(4.52)$$
for $\varphi^i$ in the $s$-label system.

• The function $\varphi^i_\alpha(\Delta,x)$ is the chaos expansion coefficient that involves the information of the "initial" solution $u(t_{i-1},x)$. More precisely, on the particular interval $[t_{i-1},t_i]$ we write $\varphi^i_\alpha(\Delta,x) = \varphi^i_\alpha(t_i,x;u(t_{i-1},\cdot))$, where the latter satisfies the following recursive scheme after relabeling the time variable from $t$ to $s$:
$$\frac{\partial\varphi^i_\alpha(s,x;g)}{\partial s} = \mathcal L^*\varphi^i_\alpha(s,x;g) + \int_{\mathbb R^d}\sum_{l:\,\alpha_l\ge 1}\alpha_l\big[\varphi^i_{\alpha(l)}(s,x+z;g)-\varphi^i_{\alpha(l)}(s,x;g)\big]\,\tilde\delta_l(s,z)\,\nu(dz)$$
$$\varphi^i_\alpha(0,x;g) = g(x)\,\mathbf 1_{|\alpha|=0} \qquad(4.53)$$
for $0\le s\le\Delta$. One sees that each time we take $g(x)=u(t_{i-1},x)$ as the initial condition corresponding to $|\alpha|=0$. This is the key point of the newly set numerical scheme: we solve the approximation on each small time interval $[t_{i-1},t_i]$ and always use the "bottom" solution (i.e. the one corresponding to $t_i$) as the initial value for the next interval, instead of using $p_0(x)=E[u(0,x)]$ ONLY ONCE as we did before.

Now we claim (4.50) as the following theorem:

Theorem 4.15. As before, set $u(t_0,x) = u(0,x) = p_0(x)$ for some function $p_0\in L^2(\mathbb R^d)$. Then the following chaos expansion formula holds for all $x\in\mathbb R^d$ and each $t_i$, $i\in\{1,2,\dots,K\}$:
$$u(t_i,x) = \sum_{\alpha\in J}\frac{1}{\sqrt{\alpha!}}\,\varphi^i_\alpha(t_i,x;u(t_{i-1},\cdot))\,C^i_\alpha$$
Moreover, the following Parseval-type equality holds:
$$E[|u(t_i,x)|^2] = \sum_{\alpha\in J}\frac{1}{\alpha!}\,\big|\varphi^i_\alpha(t_i,x;u(t_{i-1},\cdot))\big|^2 \qquad(4.54)$$
We claim all the convergence results hold in the $L^2$-sense. The proof of this theorem is very similar to the one used for the one-step error estimate.
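The overall control flow of the multi-step scheme can be pinned down in a few lines. The sketch below (Python) is purely structural: `solve_coefficients` and `sample_charlier` are placeholders standing in for the PDE system (4.53) and for the Charlier polynomials built from the observed point process on each subinterval.

```python
import numpy as np
from math import factorial, prod

def alpha_factorial(alpha):
    return prod(factorial(a) for a in alpha)

def solve_coefficients(g, Delta, J):
    # placeholder for (4.53): a real solver evolves g under T_s and feeds each
    # level |alpha| with the coefficients of level |alpha| - 1
    return {a: (g if sum(a) == 0 else np.zeros_like(g)) for a in J}

def sample_charlier(Delta, J, rng):
    # placeholder: C^i_0 = 1; higher-order values would come from N on [0, Delta]
    return {a: (1.0 if sum(a) == 0 else rng.standard_normal()) for a in J}

def multistep(p0, T, K, J, rng):
    """Each subinterval restarts the expansion with the previous approximation."""
    u, Delta = p0.copy(), T / K
    for _ in range(K):
        phi = solve_coefficients(u, Delta, J)     # initial data = current u
        C = sample_charlier(Delta, J, rng)
        u = sum(phi[a] * C[a] / np.sqrt(alpha_factorial(a)) for a in J)
    return u

J = [(0, 0), (1, 0), (0, 1), (1, 1)]
u_T = multistep(np.ones(8), T=1.0, K=10, J=J, rng=np.random.default_rng(0))
```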
4.4.2 Finite Approximation Strategy and Main Theorem

As before, the numerical approximation of $u(t_i,x)$ that we actually compute is
$$u^n_N(t_i,x) = \sum_{\alpha\in J^n_N}\frac{1}{\sqrt{\alpha!}}\,\phi^i_\alpha(\Delta,x)\,C^i_\alpha \qquad(4.55)$$
with the finite set $J^n_N$ defined as before. Note: the expansion coefficient here is written $\phi^i_\alpha(\Delta,x)$, different from the $\varphi^i_\alpha$ of (4.50), because the recursive partial differential equations used to solve for $\phi^i_\alpha(\Delta,x)$ are slightly changed. We still denote $\phi^i_\alpha(\Delta,x) = \phi^i_\alpha(t_i,x;u^n_N(t_{i-1},\cdot))$: the initial condition we use is now based on the approximation from the previous time interval $[t_{i-2},t_{i-1}]$ instead of the exact solution $u$. The corresponding PDE system is
$$\frac{\partial\phi^i_\alpha(s,x)}{\partial s} = \mathcal L^*\phi^i_\alpha(s,x) + \int_{\mathbb R^d}\sum_{l:\,\alpha_l\ge 1}\alpha_l\big[\phi^i_{\alpha(l)}(s,x+z)-\phi^i_{\alpha(l)}(s,x)\big]\,\tilde\delta_l(s,z)\,\nu(dz)$$
$$\phi^i_\alpha(0,x;u^n_N(t_{i-1},\cdot)) = u^n_N(t_{i-1},x)\,\mathbf 1_{|\alpha|=0} \qquad(4.56)$$
for $0\le s\le\Delta$. Again, for simplicity, we write $\phi^i_\alpha(\Delta,x) = \phi^i_\alpha(\Delta,x;u^n_N(t_{i-1},\cdot))$: the already-solved $u^n_N(t_{i-1},x)$ is applied as the new initial condition in this PDE problem. Even assuming we can solve for all the $\phi^i_\alpha(s,x)$, we still pick only "some" of them as the finite approximation of the new $u^n_N(t_i,x)$; indeed it would be fair to call (4.55) an "approximation to the approximation." One has to realize that, although we use the same notation, the $u^n_N$ just written above is in general DIFFERENT from the one in the one-step approximation. To make this difference clearer, from now on we denote the $u^n_N$ of this section by
$$\bar u^n_N(t_i,x) = \sum_{\alpha\in J^n_N}\frac{1}{\sqrt{\alpha!}}\,\phi^i_\alpha(\Delta,x)\,C^i_\alpha \qquad(4.57)$$
The following theorem illustrates the error accumulation rate between $u(t_i,x)$ and $\bar u^n_N(t_i,x)$:

Theorem 4.16. With the "approximate" solution given by (4.57), we have the estimate
$$\max_{1\le i\le K}E\big[\|\bar u^n_N(t_i,\cdot)-u(t_i,\cdot)\|^2_{L^2(\mathbb R^d)}\big] \le \bar Ce^{\bar CT}\Big(\frac{(M\Delta)^N}{(N+1)!} + \frac{\Delta^2}{n}\Big)\|p_0\|^2_{H^1(\mathbb R^d)} \qquad(4.58)$$
where, as before, the details of the constants $\bar C,M>0$ are discovered as the proof moves on; $\Delta = t_i-t_{i-1} = T/K$ is the uniform step size, and we assume $u$ and $\bar u^n_N$ take identical values at the beginning of the whole time interval, i.e. $u(t_0,x) = \bar u^n_N(t_0,x) = p_0(x)$ for all $x\in\mathbb R^d$.

Proof: again, we list the definitions of the two quantities:
$$u(t_i,x) = \sum_{\alpha\in J}\frac{1}{\sqrt{\alpha!}}\varphi^i_\alpha(\Delta,x)C^i_\alpha\qquad\text{and}\qquad \bar u^n_N(t_i,x) = \sum_{\alpha\in J^n_N}\frac{1}{\sqrt{\alpha!}}\phi^i_\alpha(\Delta,x)C^i_\alpha$$
We are heading for the calculation of $\varepsilon_i = E[\|\bar u^n_N(t_i,\cdot)-u(t_i,\cdot)\|^2_{L^2(\mathbb R^d)}]$. We have
$$\bar u^n_N(t_i,x)-u(t_i,x) = \sum_{\alpha\in J^n_N}\frac{1}{\sqrt{\alpha!}}\phi^i_\alpha(t_i,x;\bar u^n_N(t_{i-1},\cdot))C^i_\alpha - \sum_{\alpha\in J^n_N}\frac{1}{\sqrt{\alpha!}}\phi^i_\alpha(t_i,x;u(t_{i-1},\cdot))C^i_\alpha$$
$$\qquad + \sum_{\alpha\in J^n_N}\frac{1}{\sqrt{\alpha!}}\phi^i_\alpha(t_i,x;u(t_{i-1},\cdot))C^i_\alpha - \sum_{\alpha\in J}\frac{1}{\sqrt{\alpha!}}\varphi^i_\alpha(t_i,x;u(t_{i-1},\cdot))C^i_\alpha$$
One also has to notice that once we plug in the initial value $u(t_{i-1},\cdot)$, the functions $\phi^i_\alpha(t_i,x;u(t_{i-1},\cdot))$ and $\varphi^i_\alpha(t_i,x;u(t_{i-1},\cdot))$ are actually the same, since they satisfy the same set of PDEs. Thus we can combine the last two terms into
$$-\sum_{\alpha\notin J^n_N}\frac{1}{\sqrt{\alpha!}}\varphi^i_\alpha(t_i,x;u(t_{i-1},\cdot))\,C^i_\alpha$$
Moreover, one considers the inner product form (with expectation taken):
$$\langle\bar u^n_N(t_i,\cdot)-u(t_i,\cdot),\ \bar u^n_N(t_i,\cdot)-u(t_i,\cdot)\rangle = \sum_{\alpha\in J^n_N}\frac{1}{\alpha!}E\big|\phi^i_\alpha(t_i,x;\bar u^n_N(t_{i-1},\cdot))-\phi^i_\alpha(t_i,x;u(t_{i-1},\cdot))\big|^2 + \sum_{\alpha\notin J^n_N}\frac{1}{\alpha!}E\big|\varphi^i_\alpha(t_i,x;u(t_{i-1},\cdot))\big|^2$$
according to the orthogonality between indices in $J^n_N$ and in $J\setminus J^n_N$. We claim, after integrating with respect to $x$ over $\mathbb R^d$,
$$\varepsilon_i = \sum_{\alpha\in J^n_N}\frac{1}{\alpha!}E\big\|\phi^i_\alpha(t_i,\cdot;\bar u^n_N(t_{i-1},\cdot))-\phi^i_\alpha(t_i,\cdot;u(t_{i-1},\cdot))\big\|^2_{L^2(\mathbb R^d)} \qquad(4.59)$$
$$\qquad + \sum_{\alpha\notin J^n_N}\frac{1}{\alpha!}E\big\|\varphi^i_\alpha(t_i,\cdot;u(t_{i-1},\cdot))\big\|^2_{L^2(\mathbb R^d)} \qquad(4.60)$$
Moving on, the term (4.59) can be enlarged to the sum over all of $J$, which is the squared norm of $\sum_{\alpha\in J}\frac{1}{\sqrt{\alpha!}}[\phi^i_\alpha(t_i,\cdot;\bar u^n_N(t_{i-1},\cdot))-\phi^i_\alpha(t_i,\cdot;u(t_{i-1},\cdot))]C^i_\alpha$. Now, since the expansion is taken over all possible multi-indices $\alpha$, we claim that
$$\sum_{\alpha\in J}\frac{1}{\sqrt{\alpha!}}\phi^i_\alpha(t_i,x;\bar u^n_N(t_{i-1},\cdot))\,C^i_\alpha$$
gives, at $t=t_i$, a solution of
$$dV(t,x) = \mathcal L^*V(t,x)\,dt + \int_{\mathbb R^d}\big(V(t,x+z)-V(t,x)\big)\,\tilde N(dt,dz),\qquad V(t_{i-1},x) = \bar u^n_N(t_{i-1},x)$$
Compared with the original stochastic partial differential equation that $u$ satisfies, the only difference in this problem is the initial condition; the structure of the chaos expansion of $V$ and of the partial differential equations satisfied by its expansion coefficients is of the same form. Similarly, we have the same conclusion for $\sum_{\alpha\in J}\frac{1}{\sqrt{\alpha!}}\phi^i_\alpha(t_i,x;u(t_{i-1},\cdot))C^i_\alpha$, which differs only in the initial value function. Therefore, by linearity,
$$\sum_{\alpha\in J}\frac{1}{\sqrt{\alpha!}}\big[\phi^i_\alpha(t_i,x;\bar u^n_N(t_{i-1},\cdot))-\phi^i_\alpha(t_i,x;u(t_{i-1},\cdot))\big]C^i_\alpha = \sum_{\alpha\in J}\frac{1}{\sqrt{\alpha!}}\phi^i_\alpha\big(t_i,x;\bar u^n_N(t_{i-1},\cdot)-u(t_{i-1},\cdot)\big)C^i_\alpha$$
admits a solution, at $t=t_i$, of
$$dV(t,x) = \mathcal L^*V(t,x)\,dt + \int_{\mathbb R^d}\big(V(t,x+z)-V(t,x)\big)\,\tilde N(dt,dz) \qquad(4.61)$$
$$V(t_{i-1},x) = \bar u^n_N(t_{i-1},x) - u(t_{i-1},x) \qquad(4.62)$$
Thus
$$(4.59) \le E\big[\|U(t_i,\cdot;\bar u^n_N(t_{i-1},\cdot)-u(t_{i-1},\cdot))\|^2_{L^2(\mathbb R^d)}\big]$$
where $U(t,x;\bar u^n_N(t_{i-1},\cdot)-u(t_{i-1},\cdot))$ is the solution of the initial value problem (4.61)–(4.62). For this initial value problem the solution satisfies an inequality similar to the "energy" estimates commonly seen in standard PDE theory (see [33] for a rigorous proof). So we obtain
$$(4.59) \le E\|U(t_i,\cdot;\bar u^n_N(t_{i-1},\cdot)-u(t_{i-1},\cdot))\|^2_{L^2} \le e^{C(t_i-t_{i-1})}E\|\bar u^n_N(t_{i-1},\cdot)-u(t_{i-1},\cdot)\|^2_{L^2} = e^{C\Delta}\varepsilon_{i-1}$$
For (4.60), the $L^2$-norm is estimated as in the earlier proofs: on each subinterval $[t_{i-1},t_i]$, the initial function $u(t_{i-1},x)$ behaves as $p_0(x)$ did on the whole interval $[0,T]$. Applying the result of Section 4.3:
$$(4.60) = \sum_{\alpha\notin J^n_N}\frac{1}{\alpha!}E\|\varphi^i_\alpha(t_i,\cdot;u(t_{i-1},\cdot))\|^2_{L^2} \le Ce^{C\Delta}\Big[\frac{(M\Delta)^{N+1}}{(N+1)!}+\frac{\Delta^3}{n}\Big]E\|u(t_{i-1},\cdot)\|^2_{H^1(\mathbb R^d)} \qquad(4.63)$$
where the constant $M = \int_{\mathbb R^d}|z|^2\nu(dz) < \infty$ is the second moment of the mean measure of the Poisson process. Furthermore, the Sobolev norm $E\|u(t_{i-1},\cdot)\|^2_{H^1}$ can be estimated as in (4.61)–(4.62), assuming the exact solution $u(t,x)$ exists on the whole of $[0,T]\times\mathbb R^d$: we have
$$E\|u(t_{i-1},\cdot)\|^2_{H^1(\mathbb R^d)} \le e^{CT}\|p_0\|^2_{H^1(\mathbb R^d)}$$
Up to this moment we may summarize the estimate for the whole problem as
$$E\|\bar u^n_N(t_i,\cdot)-u(t_i,\cdot)\|^2_{L^2} \le e^{C\Delta}E\|\bar u^n_N(t_{i-1},\cdot)-u(t_{i-1},\cdot)\|^2_{L^2} + \underbrace{e^{C\Delta}\,Ce^{CT}\Big[\frac{(M\Delta)^{N+1}}{(N+1)!}+\frac{\Delta^3}{n}\Big]\|p_0\|^2_{H^1}}_{=:\ \kappa(N,n,\Delta,T)}$$
and we obtain the recursive-type inequality
$$\varepsilon_i \le e^{C\Delta}\big[\varepsilon_{i-1}+\kappa(N,n,\Delta,T)\big] \qquad(4.64)$$
the definition of the quantity $\kappa=\kappa(N,n,\Delta,T)$ being given above. The Gronwall-type inequality (4.64) is solved as
$$\varepsilon_i \le e^{C\Delta}[\varepsilon_{i-1}+\kappa] \le \varepsilon_{i-2}e^{2C\Delta}+\kappa e^{2C\Delta}+\kappa e^{C\Delta} \le\cdots\le \varepsilon_0e^{iC\Delta}+\kappa e^{iC\Delta}+\kappa e^{(i-1)C\Delta}+\cdots+\kappa e^{C\Delta} = \kappa\cdot\frac{e^{C\Delta}(e^{iC\Delta}-1)}{e^{C\Delta}-1}$$
given $\varepsilon_0 = E\|\bar u^n_N(t_0,\cdot)-u(t_0,\cdot)\|^2_{L^2} = 0$. Since the step size $\Delta$ of the time partition is chosen very small for better approximation, we use
$$\kappa\cdot\frac{e^{C\Delta}(e^{iC\Delta}-1)}{e^{C\Delta}-1} = \frac{\kappa}{\Delta}\cdot\frac{\Delta\,e^{C\Delta}(e^{iC\Delta}-1)}{e^{C\Delta}-1} \sim \frac{\kappa}{\Delta}\cdot\frac{e^{C\Delta}(e^{iC\Delta}-1)}{C} \le \frac{\kappa}{\Delta}\cdot\frac{e^{C\Delta}(e^{CT}-1)}{C} \le \frac{\kappa}{\Delta}\,\bar Ce^{\bar CT}$$
from which one sees that the order of $\Delta$ is reduced by one. The conclusion is then
$$\varepsilon_i = E\|\bar u^n_N(t_i,\cdot)-u(t_i,\cdot)\|^2_{L^2(\mathbb R^d)} \le \bar Ce^{\bar CT}\Big[\frac{(M\Delta)^N}{(N+1)!}+\frac{\Delta^2}{n}\Big]\|p_0\|^2_{H^1(\mathbb R^d)}$$
The details of all constants have been shown and the proof is done. $\square$
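A quick numerical check, a sketch in Python, that iterating (4.64) with equality reproduces the closed-form geometric sum used above (the constants are illustrative):

```python
from math import exp

C, Delta, K, kappa = 1.0, 0.01, 400, 1e-4
eps = 0.0
for i in range(1, K + 1):
    eps = exp(C * Delta) * (eps + kappa)                     # recursion (4.64), with equality
    closed = kappa * exp(C * Delta) * (exp(i * C * Delta) - 1) / (exp(C * Delta) - 1)
    assert abs(eps - closed) < 1e-9 * closed                 # matches the closed form
print(eps, (kappa / Delta) * exp(C * Delta) * (exp(C * K * Delta) - 1) / C)  # vs. the kappa/Delta bound
```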
Chapter 5. Modification of the Scheme

5.1 Abstract Forms

The most important disadvantage of computing the spectral approximation of the unnormalized filtering density $u$ through
$$\bar u^n_N(t_i,x) = \sum_{\alpha\in J^n_N}\frac{1}{\sqrt{\alpha!}}\,\phi^i_\alpha(\Delta,x)\,C^i_\alpha \qquad(5.1)$$
is that, although every step deals with the same system of partial differential equations, the number of PDEs we need to solve is still large compared with the ONE-STEP approximation scheme derived earlier. The time required to complete the simulated solution at one given moment $t_i$ is rather long, as we will also mention in the numerical experiments. Another point is that our original intention in introducing the chaos expansion method for the UFD was to separate the deterministic part from the stochastic part. This worked cleanly in the ONE-STEP approximation, where we simply used $p_0(x) = E[u(0,x)]$ — a deterministic function — as the initial condition for solving the expansion coefficients. But in the multi-step time-partition scheme we must apply the approximate solution $\bar u^n_N(t_{i-1},x)$ at each step; this time the initial condition DOES involve STOCHASTIC information, which runs somewhat away from the original intention behind setting up the method. Therefore, in this chapter we present a modification of the multi-step numerical scheme (5.1) in which the initial condition, a function of the spatial variable $x$, is itself expanded:
$$\bar u^n_N(t_{i-1},x) = \sum_l c_l\,e_l(x)$$
With this assumption we write
$$\varphi^i_\alpha(\Delta,x) = \varphi_\alpha(t_i,x;u(t_{i-1},\cdot)) \qquad(5.2)$$
$$\qquad\approx \varphi_\alpha\Big(t_i,x;\sum_lc_le_l(x)\Big) \qquad(5.3)$$
$$\qquad= \sum_lc_l\,\varphi_\alpha(t_i,x;e_l) \qquad(5.4)$$
where the functions $\{e_l\}_{l=1}^\infty$ form a CONS basis for the spatial variable $x\in\mathbb R^d$ on $L^2(\mathbb R^d)$. Note: in (5.3) we use the notation "$\approx$" because at this point the approximation $\bar u^n_N(t_{i-1},\cdot)$ of $u(t_{i-1},\cdot)$ has already been applied. In the following arguments and proofs we provide details on how the constants $c_l$ are derived. Before presenting the details of the modified numerical scheme and the convergence result relating to it, let us take a deep look at some particular choices of CONS basis functions for the spatial variable $x$ in $L^2(\mathbb R^d)$.
5.2 Hermite Basis Functions
5.2.1 Hermite Polynomials: Definition and Properties

In this section we introduce the famous Hermite polynomials and apply their useful properties to derive a CONS basis on $L^2(\mathbb R)$.

Definition 5.1. The $n$-th order Hermite polynomial is defined by
$$H_n(z) = (-1)^n\,e^{z^2}\,\frac{d^n}{dz^n}\big(e^{-z^2}\big) \qquad(5.5)$$
Examples are $H_0(z)=1$, $H_1(z)=2z$, $H_2(z)=4z^2-2$, $H_3(z)=8z^3-12z$, and so on. The most important property, and the method for generating Hermite polynomials, is summarized in the following theorem:

Theorem 5.2. For $n\ge 1$ we have the recursive formulas
$$H_{n+1}(z) - 2zH_n(z) + 2nH_{n-1}(z) = 0 \qquad(5.6)$$
$$H_n'(z) = 2nH_{n-1}(z) \qquad(5.7)$$
Note: these two formulas are proved at the same time.
Proof: naturally, by induction. For the case $n=1$:
$$H_2(z)-2zH_1(z)+2H_0(z) = 4z^2-2-2z(2z)+2\cdot 1 = 0\qquad\text{and}\qquad H_1'(z) = 2 = 2H_0(z)$$
Assume the theorem is true for $n=k$; we prove the $k+1$ case. From the definition of $H_k(z)$,
$$H_k'(z) = (-1)^ke^{z^2}2z\frac{d^k}{dz^k}(e^{-z^2}) + (-1)^ke^{z^2}\frac{d^{k+1}}{dz^{k+1}}(e^{-z^2}) = 2zH_k(z) - H_{k+1}(z) = 2kH_{k-1}(z)$$
This gives
$$H_{k+1}(z) = 2zH_k(z) - 2kH_{k-1}(z) \qquad(5.8)$$
The same computation from the definition of $H_{k+1}$ gives $H_{k+1}'(z) = 2zH_{k+1}(z) - H_{k+2}(z)$. By differentiating (5.8), and using $H_{k-1}'(z) = 2zH_{k-1}(z) - H_k(z)$ (again from the definition),
$$H_{k+1}'(z) = 2H_k(z) + 2zH_k'(z) - 2kH_{k-1}'(z) = 2H_k(z) + 4kzH_{k-1}(z) - 4kzH_{k-1}(z) + 2kH_k(z) = 2(k+1)H_k(z)$$
Moreover, combining with $H_{k+1}' = 2zH_{k+1} - H_{k+2}$, we also get
$$H_{k+2}(z) - 2zH_{k+1}(z) + 2(k+1)H_k(z) = 0$$
which finishes the proof. $\square$

5.2.2 Orthogonality

It is not hard to show that $H_n(z)$ satisfies the following expansion formula:
Lemma 5.3.
$$e^{-t^2+2zt} = \sum_{n=0}^\infty\frac{H_n(z)}{n!}\,t^n \qquad(5.9)$$
Proof: multiplying both sides by $e^{-z^2}$, we are heading for the equation
$$e^{-(t-z)^2} = \sum_{n=0}^\infty\frac{(-1)^n}{n!}\,\frac{d^n}{dz^n}e^{-z^2}\,t^n$$
To show its validity, we want to show that
$$\frac{d^n}{dt^n}e^{-(t-z)^2}\Big|_{t=0} = (-1)^n\,\frac{d^n}{dz^n}e^{-z^2} \qquad(5.10)$$
Again using induction, (5.10) is true for $n=1$. Assume it is true up to $n=k$; at $n=k+1$ we have
$$\frac{d^{k+1}}{dt^{k+1}}e^{-(t-z)^2} = \frac{d^k}{dt^k}\big[e^{-(t-z)^2}(-2(t-z))\big] = -2(t-z)\frac{d^k}{dt^k}\big(e^{-(t-z)^2}\big) + k(-2)\frac{d^{k-1}}{dt^{k-1}}\big(e^{-(t-z)^2}\big)$$
Plugging in $t=0$ and using the conclusions at $n=k$ and $n=k-1$, the last formula gives
$$(-1)^k2z\frac{d^k}{dz^k}(e^{-z^2}) + k(-2)(-1)^{k-1}\frac{d^{k-1}}{dz^{k-1}}(e^{-z^2})$$
On the other side,
$$(-1)^{k+1}\frac{d^{k+1}}{dz^{k+1}}e^{-z^2} = (-1)^{k+1}\frac{d^k}{dz^k}\big(e^{-z^2}(-2z)\big) = (-1)^{k+1}\Big[(-2z)\frac{d^k}{dz^k}(e^{-z^2}) + k(-2)\frac{d^{k-1}}{dz^{k-1}}(e^{-z^2})\Big]$$
and the two expressions agree, proving the validity of (5.10) for $n=k+1$. $\square$

The Hermite polynomials cannot formulate the CONS basis by themselves; the following theorem shows how the basis functions are constructed (referred from [34]).

Theorem 5.4.
$$\int_{-\infty}^\infty H_m(z)H_n(z)\,e^{-z^2}\,dz = \sqrt\pi\,2^n\,n!\,\delta_{mn} \qquad(5.11)$$
We want to make it clear that $\delta_{mn}$ is the famous Kronecker symbol — $1$ whenever $m=n$ and $0$ otherwise — not the CONS basis function we used in earlier chapters.
Proof: we calculate the product
$$e^{-t^2+2zt}\,e^{-s^2+2zs}\,e^{-z^2} = \Big(\sum_{n=0}^\infty\frac{H_n(z)}{n!}t^n\Big)\Big(\sum_{m=0}^\infty\frac{H_m(z)}{m!}s^m\Big)e^{-z^2}$$
i.e.
$$e^{-(t+s)^2}\,e^{2(s+t)z}\,e^{-z^2}\,e^{2ts} = \sum_{n,m=0}^\infty H_n(z)H_m(z)\,e^{-z^2}\,\frac{t^ns^m}{m!\,n!}$$
Integrating both sides with respect to $z$ and using the Taylor expansion of $e^{2ts}$, we get
$$\sum_{n=0}^\infty\underbrace{\Big[\int_{-\infty}^\infty e^{-(z-(t+s))^2}dz\Big]}_{\sqrt\pi}\frac{(2ts)^n}{n!} = \sum_{n,m=0}^\infty\Big[\int_{-\infty}^\infty H_m(z)H_n(z)e^{-z^2}dz\Big]\frac{t^ns^m}{n!\,m!}$$
Since $s$ and $t$ are arbitrary, matching coefficients shows that
$$\int_{-\infty}^\infty H_m(z)H_n(z)\,e^{-z^2}\,dz = \sqrt\pi\,2^n\,n!\,\delta_{mn}\qquad\square$$
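A short symbolic verification of Theorem 5.2 and Theorem 5.4, a sketch in Python assuming `sympy`:

```python
import sympy as sp

z = sp.symbols('z')

def hermite(n):
    """H_n built from the recursion H_{k+1} = 2 z H_k - 2 k H_{k-1} of Theorem 5.2."""
    H = [sp.Integer(1), 2 * z]
    for k in range(1, n):
        H.append(sp.expand(2 * z * H[k] - 2 * k * H[k - 1]))
    return H[n]

assert hermite(3) == 8 * z**3 - 12 * z
# orthogonality (5.11): int H_m H_n e^{-z^2} dz = sqrt(pi) 2^n n! delta_mn
I22 = sp.integrate(hermite(2)**2 * sp.exp(-z**2), (z, -sp.oo, sp.oo))
assert sp.simplify(I22 - sp.sqrt(sp.pi) * 2**2 * sp.factorial(2)) == 0
I23 = sp.integrate(hermite(2) * hermite(3) * sp.exp(-z**2), (z, -sp.oo, sp.oo))
assert I23 == 0
```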
By proving this, we immediately have the following:
Corollary 5.5. Define
$$e_n(z) := \frac{H_n(z)\,e^{-z^2/2}}{\sqrt{\sqrt\pi\,2^n\,n!}} \qquad(5.12)$$
We claim that $\{e_n(z)\}_{n=0}^\infty$ formulates a CONS on $L^2(\mathbb R)$.
Proof: with the previous theorem we can easily calculate
$$\langle e_m,e_n\rangle = \int_{-\infty}^\infty e_m(z)e_n(z)\,dz = \delta_{mn}\qquad\square$$

5.2.3 Application of the Properties of Hermite Polynomials

As we have already seen, the properties of Hermite polynomials let us set up the CONS basis on $L^2(\mathbb R^d)$. In this section we define an operator to be applied to the basis functions $e_n$ just obtained, and find an interesting property this operator satisfies.

Definition 5.6. Assume the function $f = f(x_1,x_2,\dots,x_d)$ has sufficient smoothness on $\mathbb R^d$; define the operator $\mathcal A_i$ by
$$\mathcal A_if := -\frac{\partial^2}{\partial x_i^2}f + (1+x_i^2)\,f \qquad(5.13)$$
With this definition, we have the following theorem, which later helps us prove the convergence result for the modification of the numerical scheme.

Theorem 5.7. Since the basis functions $e_n$ we work with involve only one variable, one may read the partial derivative in (5.13) as $d^2/dx^2$; call the corresponding operator $\mathcal A$. Then we claim
$$\mathcal Ae_n(x) = 2(n+1)\,e_n(x) \qquad(5.14)$$
for every $n$.
Proof: for convenience, in the definition of $e_n$ write $e_n(z) = c_nH_n(z)e^{-z^2/2}$. Let us calculate what happens when the operator $\mathcal A$ acts on $H_n(z)e^{-z^2/2}$:
$$\mathcal A\big[H_ne^{-z^2/2}\big] = -\frac{d^2}{dz^2}\big[H_ne^{-z^2/2}\big] + (1+z^2)H_ne^{-z^2/2}$$
$$= -\Big[(e^{-z^2/2})''_zH_n + 2(e^{-z^2/2})'_zH_n' + e^{-z^2/2}H_n''\Big] + (1+z^2)H_ne^{-z^2/2}$$
$$= -(e^{-z^2/2}z^2 - e^{-z^2/2})H_n + 2ze^{-z^2/2}H_n' - e^{-z^2/2}H_n'' + (1+z^2)H_ne^{-z^2/2}$$
$$= 2e^{-z^2/2}H_n + 2n\big[2zH_{n-1} - 2(n-1)H_{n-2}\big]e^{-z^2/2} = 2e^{-z^2/2}H_n + 2ne^{-z^2/2}H_n$$
where we have applied Theorem 5.2 in the last several steps. It is then rather easy to find that
$$\mathcal A[e_n(z)] = 2(n+1)\,e_n(z)\qquad\square$$
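The eigenrelation (5.14) is easy to confirm symbolically; a sketch in Python assuming `sympy` (whose built-in `hermite` matches Definition 5.1):

```python
import sympy as sp

z = sp.symbols('z')

def e(n):  # normalized Hermite function (5.12)
    return sp.hermite(n, z) * sp.exp(-z**2 / 2) / sp.sqrt(sp.sqrt(sp.pi) * 2**n * sp.factorial(n))

def A(f):  # the operator (5.13) in one variable: A f = -f'' + (1 + z^2) f
    return -sp.diff(f, z, 2) + (1 + z**2) * f

for n in range(4):
    assert sp.simplify(A(e(n)) - 2 * (n + 1) * e(n)) == 0
```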
5.3 A Haar-type Basis: One-Dimensional Example

Another frequent choice of CONS basis on $L^2(\mathbb R)$ is the famous wavelet basis. Wavelet analysis has shown strong capability in numerical analysis and, in particular, in signal and image processing and in filtering theory. Since the basis functions of this CONS act as "piecewise" indicators for $x\in\mathbb R$, it is easier to use their properties to derive useful conclusions about the change of a given function under inner products with the wavelet basis. The definition of the one-dimensional wavelet basis (see Y. Meyer [25], [26] and [9]) is derived from the function $h(x)$, defined over $\mathbb R$ as
$$h(x) = \begin{cases}1 & 0<x\le\frac12\\[2pt] -1 & \frac12<x\le 1\\[2pt] 0 & \text{otherwise}\end{cases} \qquad(5.15)$$
(Figure: graph of the step function $h(x)$, equal to $1$ on $(0,\tfrac12]$, $-1$ on $(\tfrac12,1]$, and $0$ elsewhere.)

Although this is a one-dimensional problem, we need an indexing system with two integers at the same time:
Definition 5.8. The wavelet basis functions are defined by
$$h^j_k(x) = 2^{j/2}\,h(2^jx-k) \qquad(5.16)$$
for integers $j = 0,1,2,\dots$ and $k = 0,\pm1,\pm2,\dots$. Some properties of the functions $h^j_k$:
• Where the function is nonzero:
$$h^j_k(x) = \begin{cases}2^{j/2} & \frac{k}{2^j}<x\le\frac{k}{2^j}+\frac{1}{2^{j+1}}\\[4pt] -2^{j/2} & \frac{k}{2^j}+\frac{1}{2^{j+1}}<x\le\frac{k+1}{2^j}\\[4pt] 0 & \text{elsewhere}\end{cases} \qquad(5.17)$$
• Orthogonality of the functions:
$$\langle h^j_k,h^{j'}_{k'}\rangle = \int_{-\infty}^\infty 2^j\,h^2(2^jx-k)\,dx\;\delta_{\{j=j',\,k=k'\}}$$
and, as long as $j=j'$, $k=k'$,
$$\|h^j_k\|^2_{L^2(\mathbb R)} = \int_{k/2^j}^{k/2^j+1/2^{j+1}}2^j\,dx + \int_{k/2^j+1/2^{j+1}}^{(k+1)/2^j}2^j\,dx = \frac{2^j}{2^{j+1}} + \frac{2^j}{2^{j+1}} = 1$$
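A numerical sketch in Python of (5.16)–(5.17), checking a few inner products by quadrature (the chosen $(j,k)$ pairs are illustrative):

```python
import numpy as np

def haar(j, k, x):
    """h^j_k(x) = 2^{j/2} h(2^j x - k) with the mother function (5.15)."""
    y = 2.0**j * x - k
    pos = ((y > 0) & (y <= 0.5)).astype(float)
    neg = ((y > 0.5) & (y <= 1.0)).astype(float)
    return 2.0**(j / 2) * (pos - neg)

x = np.linspace(-2.0, 3.0, 1_000_001)
dx = x[1] - x[0]
pairs = [((0, 0), (0, 0)), ((0, 0), (1, 0)), ((2, 1), (2, 1)), ((2, 1), (2, 3))]
for (j1, k1), (j2, k2) in pairs:
    ip = np.sum(haar(j1, k1, x) * haar(j2, k2, x)) * dx
    print((j1, k1), (j2, k2), round(ip, 3))   # ~1 on the diagonal, ~0 off it
```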
\[
= \Big(\sum_{\alpha\in J^n_N}\frac{1}{\sqrt{\alpha!}}\,\phi_\alpha(\Delta,\cdot,\bar u^n_N(t_{i-1},\cdot))\,C^i_\alpha,\;e_l\Big) = \big(\bar u^n_N(t_i,\cdot),\,e_l\big).
\]
This guarantees the relation
\[
\bar u^n_N(t_i,x) = \sum_l\psi_l(i,N,n)\,e_l(x). \qquad\square
\]

Remark 5.10. Let us present the procedure for implementing the modification of the numerical scheme. Given the integers $N$, $n$ (a schematic code sketch of this pipeline is given after Remark 5.11):

- Restrict to a finite number of elements of the CONS basis $\{e_l\}$; that is, take $\{e_l\}_{l=1}^{\Theta}$ for some integer $\Theta>0$;
- Compute $\psi_l(0,N,n,\Theta) = (p_0,e_l)$ for all $l\in\{1,2,\ldots,\Theta\}$;
- For every $\alpha\in J^n_N$, compute $\phi_\alpha(\Delta,x,e_l)$ for all $l\in\{1,2,\ldots,\Theta\}$; notice that here we choose the function $e_l$ as the initial condition;
- Corresponding to the last step, for the chosen $\alpha$, $l$, compute $q^l_{\alpha k} = (\phi_\alpha(\Delta,\cdot,e_k),e_l)$ for every given $k$;
- Compute $C^i_\alpha$;
- Iterative computation: according to the theorem just proved, obtain
\[
\psi_l(i,N,n,\Theta) = \sum_{\alpha\in J^n_N}\sum_{k=1}^{\Theta}\frac{1}{\sqrt{\alpha!}}\,\psi_k(i-1,N,n,\Theta)\,q^l_{\alpha k}\,C^i_\alpha
\]
for all $l\in\{1,2,\ldots,\Theta\}$ and $1\le i\le K$, where $K$ is the total number of partitions of $[0,T]$;
- Finally, we obtain
\[
u^{n,\Theta}_N(t_i,x) = \sum_{l=1}^{\Theta}\psi_l(i,N,n,\Theta)\,e_l(x). \tag{5.23}
\]

Remark 5.11. The key point of this modification is that once we have chosen the basis functions in the spatial variable, namely $\{e_l\}$, the number of coefficients $\phi_\alpha(\Delta,x,e_l)$ that we need to solve for is FIXED, with all the P.D.E.'s the same and differing only in their initial conditions! The coefficients $\psi_l(i,N,n)$ are then derived from the solutions of all the computed $\phi_\alpha(\Delta,x,e_l)$.
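The data flow of this procedure fits in a few lines. The sketch below is ours and purely schematic: the multi-index set, the matrices $Q_\alpha$ (standing in for the $q^l_{\alpha k}$, which in reality come from solving the PDE system (5.18) once per basis function) and the functionals $C^i_\alpha$ are replaced by random placeholders, so only the recursion (5.20) and the final reconstruction (5.23) are illustrated.

```python
import numpy as np

rng = np.random.default_rng(0)
Theta, K = 8, 10                          # truncated basis size, time steps
alphas = [(), (1,), (2,), (1, 1)]         # toy stand-in for the index set J^n_N
fact = {(): 1, (1,): 1, (2,): 2, (1, 1): 1}   # alpha! for each multi-index

# hypothetical precomputed data (the offline stage of Remark 5.10):
Q = {a: rng.normal(scale=0.1, size=(Theta, Theta)) for a in alphas}  # q^l_{alpha k}
C = rng.normal(size=(K, len(alphas)))     # placeholder observation data C^i_alpha

psi = rng.normal(size=Theta)              # psi_l(0) = (p_0, e_l)
for i in range(K):                        # recursion (5.20), written in matrix form
    psi = sum(C[i, ai] / np.sqrt(fact[a]) * (Q[a] @ psi)
              for ai, a in enumerate(alphas))
# u^{n,Theta}_N(t_K, x) = sum_l psi[l] e_l(x), cf. (5.23)
print(psi)
```

The matrix form $\psi^{(i)} = \sum_\alpha(C^i_\alpha/\sqrt{\alpha!})\,Q_\alpha\,\psi^{(i-1)}$ makes the point of Remark 5.11 visible: all PDE solving happens offline, before the recursion over the observation data starts.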
5.4.2 Convergence Result for the Hermite Basis

The first convergence result is based on the choice of the CONS basis functions generated from the Hermite polynomials.

Theorem 5.12. Suppose that $\{e_l\}$ is the Hermite basis on $L^2(\mathbb{R}^d)$ defined by (5.12). Let $0 = t_0\le\cdots\le t_K = T$ be a uniform partition of $[0,T]$. The two quantities we want to compare are
\[
\bar u^n_N(t_i,x) = \sum_{\alpha\in J^n_N}\frac{1}{\sqrt{\alpha!}}\,\phi^i_\alpha(\Delta,x)\,C^i_\alpha
\]
and $u^{n,\Theta}_N$, which was defined by (5.23). In addition, we assume that the initial density function $p_0(x)$ and all its derivatives decay faster than any negative power of $|x|$ as $|x|\to\infty$. Then, for any positive integer $\gamma$, there exists a positive constant $C_\gamma>0$ such that
\[
\max_{1\le i\le K}\sqrt{E\big[\|\bar u^n_N(t_i,\cdot)-u^{n,\Theta}_N(t_i,\cdot)\|^2_{L^2(\mathbb{R}^d)}\big]} \le \frac{KC_\gamma(e^{C_\gamma T}-1)}{T\,\Theta^{\gamma-1/2}}\,\|p_0\|_{H^{4\gamma}_{(4\gamma)}}. \tag{5.24}
\]

Proof: Before the details of the proof, we first introduce some useful notation.

- The norm $|||\cdot|||$, given by
\[
|||f||| := \sqrt{E\big[\|f\|^2_{L^2}\big]}.
\]
Therefore, what we are trying to estimate is
\[
\sqrt{E\big[\|\bar u^n_N(t_i,\cdot)-u^{n,\Theta}_N(t_i,\cdot)\|^2_{L^2(\mathbb{R}^d)}\big]} = |||\bar u^n_N(t_i,\cdot)-u^{n,\Theta}_N(t_i,\cdot)||| =: \varepsilon_i.
\]
- The operator $\Pi_\Theta$: the $L^2$-orthogonal projection onto the subspace generated by "part" of the CONS functions, i.e. the space generated by $\{e_l\}_{l=1}^{\Theta}$.
- The operator $V^{n,i}_N$: for every $f\in L^2(\Omega,\mathcal{F}^N_{t_{i-1}})$, this functional gives the mapping
\[
f\mapsto\sum_{\alpha\in J^n_N}\phi^i_\alpha(\Delta,\cdot,f)\,C^i_\alpha,
\]
and in particular we define
\[
U^{(i)}(f) := V^{\infty,i}_{\infty}(f) = \sum_{\alpha\in J}\phi^i_\alpha(\Delta,\cdot,f)\,C^i_\alpha.
\]
It is easy to see that for $N,n<\infty$, $V^{n,i}_N(f)$ is just a part of $U^{(i)}(f)$; thus
\[
|||V^{n,i}_N(f)|||\le|||U^{(i)}(f)|||.
\]

Immediately following from these definitions, we obtain the obvious relations (for simplicity we write $V := V^{n,i}_N$):
\begin{align*}
\bar u^n_N(t_i,\cdot) &= V\big(\bar u^n_N(t_{i-1},\cdot)\big),\\
u^{n,\Theta}_N(t_i,\cdot) &= \Pi_\Theta V\big(u^{n,\Theta}_N(t_{i-1},\cdot)\big).
\end{align*}
Now we have the following triangle inequality:
\begin{align}
\varepsilon_i &= |||\bar u^n_N(t_i,\cdot)-u^{n,\Theta}_N(t_i,\cdot)||| \tag{5.25}\\
&= |||\bar u^n_N(t_i,\cdot)-\Pi_\Theta\bar u^n_N(t_i,\cdot)+\Pi_\Theta\bar u^n_N(t_i,\cdot)-u^{n,\Theta}_N(t_i,\cdot)||| \tag{5.26}\\
&\le |||\bar u^n_N(t_i,\cdot)-\Pi_\Theta\bar u^n_N(t_i,\cdot)||| \tag{5.27}\\
&\quad + |||\Pi_\Theta V(\bar u^n_N(t_{i-1},\cdot))-\Pi_\Theta V(u^{n,\Theta}_N(t_{i-1},\cdot))|||. \tag{5.28}
\end{align}

Let us set the continuation of (5.27) and (5.28) aside for a while. Assume the function $f$ satisfies all the properties we assumed for $p_0$ in the theorem. Also, denote the multi-index $l = (l_1,l_2,\ldots,l_d)$, where $d$ is the dimension of the space $X$. The CONS basis functions are then given by
\[
e_l(x) = e_{l_1}(x_1)e_{l_2}(x_2)\cdots e_{l_d}(x_d),
\]
where $x = (x_1,x_2,\ldots,x_d)\in X$, and we consider $f$ over all $x$. The following conclusion works for any finite-dimensional space ($d<\infty$); for convenience we present the result for $d=2$:
\[
e_l(x) = e_{l_1}(x_1)e_{l_2}(x_2).
\]
Going back to the section where we presented the construction and useful properties of the Hermite polynomials, one recalls (see (5.14)) that
\[
\mathcal{A}_ie_{l_i} = 2(l_i+1)e_{l_i},
\]
which is true for $i=1,2$. As a one-variable operator, $\mathcal{A}_i$ acts only on one of the variables $x_1$, $x_2$; since the basis function $e_l(x)$ is separable, we have
\[
\mathcal{A}_2\mathcal{A}_1e_l(x) = \mathcal{A}_2\mathcal{A}_1\big[e_{l_1}(x_1)e_{l_2}(x_2)\big] = 2(l_1+1)\,\mathcal{A}_2\big(e_{l_2}(x_2)\big)e_{l_1}(x_1) = 2^2(l_1+1)(l_2+1)\,e_l(x),
\]
which gives us
\[
e_l = \frac{\mathcal{A}_2\mathcal{A}_1e_l}{2^2(l_1+1)(l_2+1)}.
\]
If we study the expansion of $f$, one can show that
\[
f = \sum_lc_le_l,\qquad \mathcal{A}_2\mathcal{A}_1f = \sum_lc_l\,\mathcal{A}_2\mathcal{A}_1e_l = \sum_l2^2(l_1+1)(l_2+1)\,c_l\,e_l.
\]
Therefore,
\[
|c_l| = |(f,e_l)| = \frac{|(\mathcal{A}_2\mathcal{A}_1f,e_l)|}{2^2(l_1+1)(l_2+1)} \le \frac{\|\mathcal{A}_2\mathcal{A}_1f\|_{L^2}}{2^2(l_1+1)(l_2+1)},
\]
where we applied the Cauchy–Schwarz inequality in the last step. Repeatedly substituting $\mathcal{A}_2\mathcal{A}_1f$ in place of $f$ in the above inequality finally yields
\[
|(f,e_l)| \le \frac{\|\mathcal{A}_2\mathcal{A}_1f\|_{L^2}}{2^2(l_1+1)(l_2+1)} \le\cdots\le \frac{\|(\mathcal{A}_2\mathcal{A}_1)^\gamma f\|_{L^2}}{2^{2\gamma}(l_1+1)^\gamma(l_2+1)^\gamma}
\]
after repeating this process $\gamma$ times. The assumption that $f$ and all its derivatives decay fast enough allows us to estimate $\|(\mathcal{A}_2\mathcal{A}_1)^\gamma f\|_{L^2}$ as follows (see e.g. [33]):
\[
\|(\mathcal{A}_2\mathcal{A}_1)^\gamma f\|_{L^2} \le C_\gamma\|f\|_{H^{4\gamma}_{(4\gamma)}},
\]
where $H^s_{(r)}$ ($s,r\in\mathbb{R}$) is the weighted Sobolev space. So now
\[
|(f,e_l)| \le \frac{C_\gamma\|f\|_{H^{4\gamma}_{(4\gamma)}}}{2^{2\gamma}(l_1+1)^\gamma(l_2+1)^\gamma}.
\]

It is time to go back to the estimate of $\varepsilon_i$. For (5.28), we have
\[
|||\Pi_\Theta V(\bar u^n_N(t_{i-1},\cdot))-\Pi_\Theta V(u^{n,\Theta}_N(t_{i-1},\cdot))||| \le |||V(\bar u^n_N(t_{i-1},\cdot))-V(u^{n,\Theta}_N(t_{i-1},\cdot))||| \le e^{C\Delta}\varepsilon_{i-1}.
\]
Notice that the second inequality holds because here we are considering the problem
\begin{align*}
dH(t,x) &= \mathcal{L}^*H(t,x)\,dt + \int_{\mathbb{R}^d}\big(H(t,x+z)-H(t,x)\big)\tilde N(dt,dz),\\
H(t_{i-1},x) &= \bar u^n_N(t_{i-1},x)-u^{n,\Theta}_N(t_{i-1},x),
\end{align*}
as we did in the previous chapter. This inequality again uses the "energy estimate" (again, see [33]) for this kind of problem.

And for (5.27), we have
\begin{align*}
|||\bar u^n_N(t_i,\cdot)-\Pi_\Theta\bar u^n_N(t_i,\cdot)|||^2 &= \sum_{\alpha\in J^n_N}\sum_{l>\Theta}\frac{E\big(\phi^i_\alpha(\Delta,\cdot),e_l\big)^2}{\alpha!}\\
&\le 2\sum_{\alpha\in J^n_N}\Big[\sum_{l>\Theta}\frac{1}{2^{2\gamma}(l_1+1)^{2\gamma}(l_2+1)^{2\gamma}}\Big]\frac{E\|\phi^i_\alpha(\Delta,\cdot)\|^2_{H^{4\gamma}_{(4\gamma)}}}{\alpha!}\\
&\le \frac{C_\gamma e^{C_\gamma T}}{\Theta^{2\gamma-1}}\,\|p_0\|^2_{H^{4\gamma}_{(4\gamma)}}.
\end{align*}
One needs to be careful about the following facts:

- Fact 1: the function $\phi^i_\alpha(\Delta,\cdot)$ plays the role of $f$ in our previous derivations;
- Fact 2: the statement "$l>\Theta$" means $\max_{1\le i\le d}l_i>\Theta$; therefore we calculate
\[
\sum_{l>\Theta}\frac{1}{(l_1+1)^{2\gamma}(l_2+1)^{2\gamma}} \le \sum_{l_1>\Theta}\frac{1}{(l_1+1)^{2\gamma}}\sum_{l_2\ge0}\frac{1}{(l_2+1)^{2\gamma}} \le C_\gamma\int_\Theta^\infty\frac{d\zeta}{(\zeta+1)^{2\gamma}} = \frac{C_\gamma}{(\Theta+1)^{2\gamma-1}} \le \frac{C_\gamma}{\Theta^{2\gamma-1}},
\]
where we assumed that at least one of $l_1,l_2$ is greater than $\Theta$;
- Fact 3: the following estimate also holds by what we have been calling the "energy estimate", previously used mainly for $L^2$-norms:
\[
\sum_{\alpha\in J^n_N}\frac{E\|\phi^i_\alpha(\Delta,\cdot)\|^2_{H^{4\gamma}_{(4\gamma)}}}{\alpha!} \le \sum_{\alpha\in J}\frac{E\|\phi^i_\alpha(\Delta,\cdot)\|^2_{H^{4\gamma}_{(4\gamma)}}}{\alpha!} \le e^{C_\gamma T}\|p_0\|^2_{H^{4\gamma}_{(4\gamma)}};
\]
that is, the sum is controlled by the initial condition.

Finally, our proof again comes down to the following Gronwall-type inequality:
\begin{align*}
\varepsilon_i &\le e^{C\Delta}\varepsilon_{i-1} + \frac{C_\gamma e^{C_\gamma T}}{\Theta^{\gamma-1/2}}\|p_0\|_{H^{4\gamma}_{(4\gamma)}}\\
&\le e^{iC\Delta}\varepsilon_0 + \big(e^{(i-1)C\Delta}+\cdots+e^{C\Delta}+1\big)\frac{C_\gamma e^{C_\gamma T}}{\Theta^{\gamma-1/2}}\|p_0\|_{H^{4\gamma}_{(4\gamma)}}\\
&= \frac{e^{iC\Delta}-1}{e^{C\Delta}-1}\cdot\frac{C_\gamma e^{C_\gamma T}}{\Theta^{\gamma-1/2}}\|p_0\|_{H^{4\gamma}_{(4\gamma)}} \sim \frac{e^{iC\Delta}-1}{\Delta C}\cdot\frac{C_\gamma e^{C_\gamma T}}{\Theta^{\gamma-1/2}}\|p_0\|_{H^{4\gamma}_{(4\gamma)}}\\
&\le \frac{(e^{C_\gamma T}-1)C_\gamma}{\Delta\,\Theta^{\gamma-1/2}}\|p_0\|_{H^{4\gamma}_{(4\gamma)}},
\end{align*}
and plugging in $\Delta = T/K$ we get what we want. $\square$

Remark 5.13. This theorem shows that, for a sufficiently smooth initial condition $p_0$ and with the appropriate choice of the basis $\{e_l\}$, the error due to the truncation of the basis decays faster than any power of $\Theta$; in other words, our approximation is of genuine "spectral quality".

Corollary 5.14. The overall error of approximation for the modification of the numerical scheme, when using the Hermite-polynomial-generated basis for the spatial variable, is
\[
\max_{1\le i\le K}E\big[\|u(t_i,\cdot)-u^{n,\Theta}_N(t_i,\cdot)\|^2_{L^2}\big] \le C\Big(\frac{(M\Delta)^N}{(N+1)!} + \frac{\Delta^2}{n} + \frac{C(\gamma)}{\Delta^2\,\Theta^{2\gamma-1}}\Big)\Psi(p_0), \tag{5.29}
\]
where $C$ is a constant depending only on the parameters of the model and
\[
\Psi(p_0) := \max\big\{\|p_0\|^2_{H^1(\mathbb{R})},\;\|p_0\|^2_{H^{4\gamma}_{(4\gamma)}}\big\}.
\]
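To make the "spectral quality" of Remark 5.13 concrete, here is a small illustration of ours (not one of the dissertation's experiments): it computes the Hermite coefficients $(f,e_n)$ of the Gaussian $f(x) = e^{-x^2/4}$, the initial density used later in Chapter 7, via the stable normalized recurrence for the Hermite functions and a Riemann sum, and displays their rapid decay.

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 40_001)
dx = x[1] - x[0]
f = np.exp(-x ** 2 / 4)                   # the p_0 used later in Chapter 7

# Hermite functions e_n via the normalized three-term recurrence
e_prev = np.pi ** -0.25 * np.exp(-x ** 2 / 2)      # e_0
e_curr = np.sqrt(2.0) * x * e_prev                  # e_1
coeffs = [np.sum(f * e_prev) * dx, np.sum(f * e_curr) * dx]
for n in range(2, 41):
    e_prev, e_curr = e_curr, (np.sqrt(2.0 / n) * x * e_curr
                              - np.sqrt((n - 1) / n) * e_prev)
    coeffs.append(np.sum(f * e_curr) * dx)

for n in range(0, 41, 8):                 # odd coefficients vanish by symmetry
    print(f"|(f, e_{n:2d})| = {abs(coeffs[n]):.3e}")
```

The printed coefficients fall off geometrically, which is exactly the decay that the truncation estimate (5.27) exploits.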
5.4.3 Convergence Result for the Wavelet Basis

As we have shown, the CONS basis functions can also be taken to be the wavelet basis in the one-dimensional case. Here is the error estimate for this choice of basis:

Theorem 5.15. Suppose that $\{e_l\}$ is the wavelet (Haar) basis on $L^2(\mathbb{R})$ defined by (5.17). Let $0 = t_0\le\cdots\le t_K = T$ be a uniform partition of $[0,T]$. The two quantities we want to compare are
\[
\bar u^n_N(t_i,x) = \sum_{\alpha\in J^n_N}\frac{1}{\sqrt{\alpha!}}\,\phi^i_\alpha(\Delta,x)\,C^i_\alpha
\]
and $u^{n,\Theta}_N$, defined by (5.23). In addition, assume the initial condition $p_0\in H^1(\mathbb{R})$. Then, for some positive constant $\tilde C>0$, we have
\[
\max_{1\le i\le K}\sqrt{E\big[\|\bar u^n_N(t_i,\cdot)-u^{n,\Theta}_N(t_i,\cdot)\|^2_{L^2(\mathbb{R})}\big]} \le \frac{K\tilde Ce^{\tilde CT}}{T\,2^{\Theta}}\,\|p_0\|_{H^1(\mathbb{R})}. \tag{5.30}
\]

Proof: All the procedures and notation used here are the same as before. We still have
\[
|||\Pi_\Theta V(\bar u^n_N(t_{i-1},\cdot))-\Pi_\Theta V(u^{n,\Theta}_N(t_{i-1},\cdot))||| \le |||V(\bar u^n_N(t_{i-1},\cdot))-V(u^{n,\Theta}_N(t_{i-1},\cdot))||| \le e^{C\Delta}\varepsilon_{i-1},
\]
and for a function $f$ in $H^1(\mathbb{R})$ we have
\begin{align*}
|(f,h^j_k)| &= 2^{j/2}\Big|\int_{k/2^j}^{(k+1/2)/2^j}f(x)\,dx - \int_{(k+1/2)/2^j}^{(k+1)/2^j}f(x)\,dx\Big|\\
&= 2^{j/2}\Big|\int_{k/2^j}^{(k+1/2)/2^j}f\Big(z+\frac{1}{2^{j+1}}\Big)dz - \int_{k/2^j}^{(k+1/2)/2^j}f(z)\,dz\Big|\\
&= \frac{1}{2^{j/2+1}}\Big|\int_{k/2^j}^{(k+1/2)/2^j}\frac{f(z+\Delta_j)-f(z)}{\Delta_j}\,dz\Big|,
\end{align*}
where $\Delta_j = \frac{1}{2^{j+1}}$. Let us take the square and use Hölder's inequality:
\[
|(f,h^j_k)|^2 \le \frac{1}{2^{j+2}}\Big[\int_{k/2^j}^{(k+1/2)/2^j}dz\Big]\Big[\int_{k/2^j}^{(k+1/2)/2^j}\Big|\frac{f(z+\Delta_j)-f(z)}{\Delta_j}\Big|^2dz\Big] \le \frac{C}{4^j}\int_{k/2^j}^{(k+1/2)/2^j}|f'(z)|^2\,dz.
\]
One sees that summing over $j,k$ gives
\[
\sum_{j,k}|(f,h^j_k)|^2 \le \sum_j\frac{C}{4^j}\|f\|^2_{H^1(\mathbb{R})}.
\]
Thus we continue our earlier proof as follows, with the multi-index $l = (j,k)$:
\begin{align*}
|||\bar u^n_N(t_i,\cdot)-\Pi_\Theta\bar u^n_N(t_i,\cdot)|||^2 &= \sum_{\alpha\in J^n_N}\sum_{l>\Theta}\frac{E\big(\phi^i_\alpha(\Delta,\cdot),e_l\big)^2}{\alpha!}\\
&\le 2\sum_{\alpha\in J^n_N}\Big[\sum_{j>\Theta}\frac{C}{4^j}\Big]\frac{E\|\phi^i_\alpha(\Delta,\cdot)\|^2_{H^1(\mathbb{R})}}{\alpha!} \le \frac{\bar Ce^{\bar CT}}{4^{\Theta}}\,\|p_0\|^2_{H^1(\mathbb{R})}.
\end{align*}
Following this, we get
\[
\varepsilon_i \le e^{C\Delta}\varepsilon_{i-1} + \frac{\bar Ce^{\bar CT}}{2^{\Theta}}\|p_0\|_{H^1(\mathbb{R})} \le \frac{e^{iC\Delta}-1}{e^{C\Delta}-1}\cdot\frac{\bar Ce^{\bar CT}}{2^{\Theta}}\|p_0\|_{H^1(\mathbb{R})} \le \frac{\tilde Ce^{\tilde CT}}{\Delta\,2^{\Theta}}\|p_0\|_{H^1(\mathbb{R})}. \qquad\square
\]

Remark 5.16. This theorem tells us that if the spatial variable lives in the one-dimensional space $\mathbb{R}$ (which is quite common in practice) and we choose the wavelet basis, the error due to the truncation of the basis decays at an exponential rate, which is even better than in the previous theorem. Besides, the assumption on the initial condition $p_0$ is weaker than in the first theorem.

Corollary 5.17. The overall error of approximation for the modification of the numerical scheme, when using the wavelet basis for the spatial variable in the one-dimensional case, is
\[
\max_{1\le i\le K}E\big[\|u(t_i,\cdot)-u^{n,\Theta}_N(t_i,\cdot)\|^2_{L^2}\big] \le C\Big(\frac{(M\Delta)^N}{(N+1)!} + \frac{\Delta^2}{n} + \frac{\tilde Ce^{\tilde CT}}{\Delta^2\,4^{\Theta}}\Big)\|p_0\|^2_{H^1(\mathbb{R})}, \tag{5.31}
\]
where $C$, $\tilde C$ are constants depending only on the parameters of the model.

Chapter 6
Using a 2-Dimensional Wavelet Basis in the Modified Scheme

6.1 Setting up the Basis Functions

As we have seen, the convergence result of the spectral decomposition in the spatial variable $x$ depends heavily on which CONS basis we choose. In the previous two examples, we presented one multi-dimensional case with a Hermite-polynomial-generated basis and one one-dimensional case with the wavelet basis. Now we present a possible two-dimensional choice of CONS basis; the space is, of course, $L^2(\mathbb{R}^2)$.

Again, we want to present a wavelet basis on $L^2(\mathbb{R}^2)$. Since the construction is fairly involved, we carry it out in several steps.

Step 1: Split $\mathbb{R}^2$ into equal-measure regions.

First we find $R_1$ such that
\[
\pi R_1^2 = 1,
\]
and then we go "outward" so that $\pi(R_2^2-R_1^2) = 1$, $\pi(R_3^2-R_2^2) = 1$, and so on. In this way the whole of $\mathbb{R}^2$ is separated into infinitely many "belt" regions, each of measure $1$, except the one with minimal radius $R_1$, which is actually a "disk". This part of the construction is shown in the following figure.

[Figure 6.1: 2-d space split into a disc and belts, each of mass 1.]

It is not hard to calculate that
\[
R_1 = \sqrt{\frac{1}{\pi}},\quad R_2 = \sqrt{\frac{2}{\pi}},\quad R_3 = \sqrt{\frac{3}{\pi}},\quad\ldots,\quad R_n = \sqrt{\frac{n}{\pi}},
\]
while one can also show that
\[
\lim_{n\to\infty}(R_n-R_{n-1}) = \lim_{n\to\infty}\Big[\sqrt{\frac{n}{\pi}}-\sqrt{\frac{n-1}{\pi}}\Big] = 0.
\]
This tells us that, visually, the belts get thinner and thinner as we move outward from the origin; still, $R_n\to\infty$ as $n\to\infty$ by the formula for $R_n$.

Step 2: Continue splitting the belts and the disk in half.

In parallel with the construction we carried out for the one-dimensional wavelet basis, it is now time to split each measure-1 region into two regions of equal measure, and we continue with this splitting process to attain smaller belts (of measure 1/2, 1/4, 1/8, etc.).

[Figure 6.2: "cutting" the mass-1 regions "half-by-half".]

As shown in the figure, each shaded zone and each white zone has area equal to $1/2$. Again, computing each radius shows that the differences between consecutive radii get smaller, so the belts once again get thinner as we move away from the origin. We continue with this procedure until we arrive at $\frac{1}{2^j}$, for some positive integer $j$, as the area of every belt region and of the disk centered at the origin.
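A two-line computation (our own sketch, anticipating the relabeled level-$j$ radii introduced just below) confirms that the radii $\sqrt{n/(2^j\pi)}$ produce belts of equal area $1/2^j$ whose widths shrink:

```python
import numpy as np

def R(n, j):
    """Radius R(n, 1/2^j) = sqrt(n / (2^j pi)) from the equal-area splitting."""
    return np.sqrt(n / (2.0 ** j * np.pi))

j = 3                                  # every zone has area 1/2^3 = 0.125
for n in range(1, 6):
    area = np.pi * (R(n + 1, j) ** 2 - R(n, j) ** 2)
    width = R(n + 1, j) - R(n, j)
    print(f"belt {n}: width = {width:.4f}, area = {area:.6f}")
```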
After finishing this process, we relabel the radii with the notation
\[
R\Big(n,\frac{1}{2^j}\Big),
\]
which says that the current area of each zone is $\frac{1}{2^j}$ while the radius at which one is standing is the $n$-th, counted sequentially from the origin. The following figure shows that, with each closed zone now having area $\frac{1}{2^j}$, the radii are sequentially labeled "twice" as densely as they were in the previous split (i.e. when each zone had area $\frac{1}{2^{j-1}}$).

[Figure 6.3: the labels "1, 2, 3" become "1, 2, ..., 6".]

It makes sense to define the radius $R(n+\frac12,\frac{1}{2^j})$, which will certainly be needed for the split at the next level ($\frac{1}{2^{j+1}}$), so that we can determine the "midline" of the belt of current area $\frac{1}{2^j}$. In this case one only has to find $R(n+\frac12,\frac{1}{2^j})$ satisfying
\[
\pi\Big[R^2\Big(n+\tfrac12,\tfrac{1}{2^j}\Big)-R^2\Big(n,\tfrac{1}{2^j}\Big)\Big] = \pi\Big[R^2\Big(n+1,\tfrac{1}{2^j}\Big)-R^2\Big(n+\tfrac12,\tfrac{1}{2^j}\Big)\Big]. \tag{6.1}
\]
As in the earlier calculations, we find that
\[
R\Big(n,\frac{1}{2^j}\Big) = \sqrt{\frac{n}{2^j\pi}},\qquad R\Big(n+\frac12,\frac{1}{2^j}\Big) = \sqrt{\frac{n+\frac12}{2^j\pi}}.
\]
And by definition, the areas between the radii $R(n,\frac{1}{2^j})$, $R(n+\frac12,\frac{1}{2^j})$ and between $R(n+\frac12,\frac{1}{2^j})$, $R(n+1,\frac{1}{2^j})$ are both $\frac{1}{2^{j+1}}$.

[Figure 6.4: the "midline" of each closed zone.]

It is natural to think of defining the basis function by letting it equal $1$ and $-1$ alternately on adjacent regions in the figure shown, just as we did in the one-dimensional case. But this is not enough to form a CONS on $L^2(\mathbb{R}^2)$, since it only represents the behavior of a given function on belt zones lying at a certain distance from the origin. An intuitive observation is that a function of two variables can be studied in the polar coordinate system $(\rho,\theta)$, and so far we have only considered the $\rho$ variable. Therefore, in the next step, we do something about the angle $\theta$.

Step 3: Split the angle.

As we said, for each $x = (\rho,\theta)\in\mathbb{R}^2$, assume $\theta$ lies in the domain $[0,2\pi)$. We split the full $360^\circ$ as follows: given a positive integer $k$, one determines the integer $m$ such that
\[
\frac{2\pi}{2^k}m \le \theta < \frac{2\pi}{2^k}(m+1).
\]
In this process we see that $0\le m\le 2^k-1$; that is, the determination of $m$ depends on the current split $\frac{2\pi}{2^k}$. Obviously, in practical numerical work one would hope to choose a larger $k$ so that the sector where $\theta$ lies can be located as "precisely" as possible; overly simple and rough examples would be, say, $0\le\theta<\pi$ or $\frac{3\pi}{2}\le\theta<2\pi$, and so on.
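Both indices of the zone containing a given point $(\rho,\theta)$ are explicit; the following small sketch of ours just inverts the two definitions above:

```python
import numpy as np

def angle_index(theta, k):
    """m with 2 pi m / 2^k <= theta < 2 pi (m+1) / 2^k, so 0 <= m <= 2^k - 1."""
    return int(np.floor(theta * 2 ** k / (2 * np.pi))) % 2 ** k

def radial_index(rho, j):
    """n with R(n, 1/2^j) <= rho < R(n+1, 1/2^j), i.e. n = floor(2^j pi rho^2)."""
    return int(np.floor(2 ** j * np.pi * rho ** 2))

rho, theta = 0.8, 5.0
print([(radial_index(rho, j), angle_index(theta, k)) for j, k in [(0, 1), (2, 3)]])
```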
In the following figure, each sub-angle has size $\frac{2\pi}{2^k}$:

[Figure 6.5: partition of the angle $360^\circ$.]

Step 4: Set up the basis functions.

We define the wavelet CONS basis on $L^2(\mathbb{R}^2)$ as follows. Given $x = (\rho,\theta)\in\mathbb{R}^2$ in the polar form of the $\mathbb{R}^2$ coordinates, denote the basis functions by
\[
\psi^{k,m}_{j,n} = a^{k,m}_{j,n}\,h^{k,m}_{j,n}(x),
\]
where
\[
h^{k,m}_{j,n}(x) = \mathbf{1}_{\left[\frac{2\pi}{2^k}m\le\theta<\frac{2\pi}{2^k}(m+1)\right]}\,l_{j,n}(x)
\]
and
\[
l_{j,n}(x) = \begin{cases} 1 & R\big(n,\frac{1}{2^j}\big)\le\rho<R\big(n+\frac12,\frac{1}{2^j}\big),\\[1mm] -1 & R\big(n+\frac12,\frac{1}{2^j}\big)\le\rho<R\big(n+1,\frac{1}{2^j}\big),\\[1mm] 0 & \text{otherwise.} \end{cases} \tag{6.2}
\]
Note: the coefficient $a^{k,m}_{j,n}$ is obtained by requiring $\|\psi^{k,m}_{j,n}\|^2_{L^2} = 1$, which is derived as
\begin{align*}
\|\psi^{k,m}_{j,n}\|^2_{L^2} &= \big(a^{k,m}_{j,n}\big)^2\Big[\int_{\frac{2\pi}{2^k}m}^{\frac{2\pi}{2^k}(m+1)}\!\int_{R(n,\frac{1}{2^j})}^{R(n+\frac12,\frac{1}{2^j})}r\,dr\,d\theta + \int_{\frac{2\pi}{2^k}m}^{\frac{2\pi}{2^k}(m+1)}\!\int_{R(n+\frac12,\frac{1}{2^j})}^{R(n+1,\frac{1}{2^j})}r\,dr\,d\theta\Big]\\
&= \big(a^{k,m}_{j,n}\big)^2\Big[\frac{2\pi}{2^k}\cdot\frac{1}{2^{j+1}\cdot2\pi} + \frac{2\pi}{2^k}\cdot\frac{1}{2^{j+1}\cdot2\pi}\Big] = \frac{\big(a^{k,m}_{j,n}\big)^2}{2^{k+j}} = 1;
\end{align*}
thus we get
\[
a^{k,m}_{j,n} = 2^{\frac{k+j}{2}}.
\]

Some simple examples:

- $j=0$ and $k=1$: we have
\[
l_{0,n}(x) = \begin{cases} 1 & R(n,1)\le\rho<R\big(n+\frac12,1\big),\\ -1 & R\big(n+\frac12,1\big)\le\rho<R(n+1,1),\\ 0 & \text{otherwise,} \end{cases}
\]
while the value $m = 0$ or $1$ is determined by whether $x$ lies in the upper or lower half of the plane. In this case $a^{k,m}_{j,n} = \sqrt2$.

[Figure 6.6: the "$0\sim180^\circ$" example, shown for $m=0$, i.e. $0\le\theta<\pi$.]

- $j=0$ and $k=2$: the definition of $l$ is similar to the above, while the values $m = 0,1,2,3$ are determined by which quarter of the plane $x$ lies in; here $a^{k,m}_{j,n} = 2$.

[Figure 6.7: the "$0\sim90^\circ$" example, shown for $m=0$, i.e. $0\le\theta<\frac{\pi}{2}$.]

6.2 Derivation of the Convergence Result Based on the 2-D Wavelet Basis

Theorem 6.1. Suppose that $\{e_l\}$ is the 2-D wavelet (Haar) basis on $L^2(\mathbb{R}^2)$ defined in the previous section. Let $0 = t_0\le\cdots\le t_K = T$ be a uniform partition of $[0,T]$. The two quantities we want to compare are
\[
\bar u^n_N(t_i,x) = \sum_{\alpha\in J^n_N}\frac{1}{\sqrt{\alpha!}}\,\phi^i_\alpha(\Delta,x)\,C^i_\alpha
\]
and $u^{n,\Theta}_N$, defined by (5.23). In addition, assume the initial condition $p_0\in H^1(\mathbb{R}^2)$. Then, for some positive constant $\tilde C>0$,
\[
\max_{1\le i\le K}\sqrt{E\big[\|\bar u^n_N(t_i,\cdot)-u^{n,\Theta}_N(t_i,\cdot)\|^2_{L^2(\mathbb{R}^2)}\big]} \le \frac{K\tilde Ce^{\tilde CT}}{T\,2^{\Theta}}\,\|p_0\|_{H^1(\mathbb{R}^2)}. \tag{6.3}
\]

Proof: As before, we use the same notation for the mappings and projections; it is worth recalling them here:

- The norm $|||\cdot|||$, given by $|||f||| := \sqrt{E[\|f\|^2_{L^2}]}$; therefore
\[
\sqrt{E\big[\|\bar u^n_N(t_i,\cdot)-u^{n,\Theta}_N(t_i,\cdot)\|^2_{L^2(\mathbb{R}^2)}\big]} = |||\bar u^n_N(t_i,\cdot)-u^{n,\Theta}_N(t_i,\cdot)||| =: \varepsilon_i.
\]
- The operator $\Pi_\Theta$: the $L^2$-orthogonal projection onto the subspace generated by "part" of the CONS functions, i.e. the space generated by $\{e_l\}_{l=1}^{\Theta}$. Here we in fact have a multi-index $l = (j,n,k,m)$; we will take $j>\Theta$, with $n$, $m$ related to the choices of $j$, $k$, respectively.
- The operator $V^{n,i}_N$: for every $f\in L^2(\Omega,\mathcal{F}^N_{t_{i-1}})$, this functional gives the mapping
\[
f\mapsto\sum_{\alpha\in J^n_N}\phi^i_\alpha(\Delta,\cdot,f)\,C^i_\alpha,
\]
and in particular we define
\[
U^{(i)}(f) := V^{\infty,i}_{\infty}(f) = \sum_{\alpha\in J}\phi^i_\alpha(\Delta,\cdot,f)\,C^i_\alpha.
\]
It is easy to see that for $N,n<\infty$, $V^{n,i}_N(f)$ is just a part of $U^{(i)}(f)$; thus $|||V^{n,i}_N(f)|||\le|||U^{(i)}(f)|||$.

We still have
\[
|||\Pi_\Theta V(\bar u^n_N(t_{i-1},\cdot))-\Pi_\Theta V(u^{n,\Theta}_N(t_{i-1},\cdot))||| \le |||V(\bar u^n_N(t_{i-1},\cdot))-V(u^{n,\Theta}_N(t_{i-1},\cdot))||| \le e^{C\Delta}\varepsilon_{i-1},
\]
and for a function $f$ we have
\[
|(f,\psi^{k,m}_{j,n})| = 2^{\frac{j+k}{2}}\Big|\int_{\frac{2\pi}{2^k}m}^{\frac{2\pi}{2^k}(m+1)}\!\int_{R(n,\frac{1}{2^j})}^{R(n+\frac12,\frac{1}{2^j})}f(r,\theta)\,r\,dr\,d\theta - \int_{\frac{2\pi}{2^k}m}^{\frac{2\pi}{2^k}(m+1)}\!\int_{R(n+\frac12,\frac{1}{2^j})}^{R(n+1,\frac{1}{2^j})}f(r,\theta)\,r\,dr\,d\theta\Big|.
\]
Now we need to change the variable $r$ in the first integral so that it, too, ranges over $R(n+\frac12,\frac{1}{2^j})$ to $R(n+1,\frac{1}{2^j})$.
We do this by the following linear transformation:
\begin{align*}
s &= \frac{R(n+1,\frac{1}{2^j})-R(n+\frac12,\frac{1}{2^j})}{R(n+\frac12,\frac{1}{2^j})-R(n,\frac{1}{2^j})}\,r + R\Big(n+\frac12,\frac{1}{2^j}\Big) - R\Big(n,\frac{1}{2^j}\Big)\frac{R(n+1,\frac{1}{2^j})-R(n+\frac12,\frac{1}{2^j})}{R(n+\frac12,\frac{1}{2^j})-R(n,\frac{1}{2^j})}\\
&= \frac{\sqrt{n+1}-\sqrt{n+\frac12}}{\sqrt{n+\frac12}-\sqrt{n}}\,r + \underbrace{\sqrt{\frac{n+\frac12}{\pi2^j}} - \sqrt{\frac{n}{\pi2^j}}\cdot\frac{\sqrt{n+1}-\sqrt{n+\frac12}}{\sqrt{n+\frac12}-\sqrt{n}}}_{\tilde\Delta^r_{n,j}}.
\end{align*}
This linear transformation guarantees that the variable $s$ takes values from $R(n+\frac12,\frac{1}{2^j})$ to $R(n+1,\frac{1}{2^j})$. Since later we will let $n\to\infty$, this helps us: we may use the asymptotics of $s$ instead of plugging $s$ directly into the integral. Indeed,
\[
\frac{\sqrt{n+1}-\sqrt{n+\frac12}}{\sqrt{n+\frac12}-\sqrt{n}}\to1\quad\text{as }n\to\infty,
\]
which allows us to use $s\sim r+\Delta^r_{n,j}$, where
\[
\tilde\Delta^r_{n,j}\to\Delta^r_{n,j} = \sqrt{\frac{n+\frac12}{\pi2^j}} - \sqrt{\frac{n}{\pi2^j}},
\]
which also goes to $0$ as $n\to\infty$. We can now rewrite the integral as
\begin{align*}
|(f,\psi^{k,m}_{j,n})| &\sim 2^{\frac{j+k}{2}}\Big|\int_{\frac{2\pi}{2^k}m}^{\frac{2\pi}{2^k}(m+1)}\!\int_{R(n+\frac12,\frac{1}{2^j})}^{R(n+1,\frac{1}{2^j})}f(s+\Delta^r_{n,j},\theta)\,s\,ds\,d\theta - \int_{\frac{2\pi}{2^k}m}^{\frac{2\pi}{2^k}(m+1)}\!\int_{R(n+\frac12,\frac{1}{2^j})}^{R(n+1,\frac{1}{2^j})}f(s,\theta)\,s\,ds\,d\theta\Big|\\
&= 2^{\frac{j+k}{2}}\Big|\int_{\frac{2\pi}{2^k}m}^{\frac{2\pi}{2^k}(m+1)}\!\int_{R(n+\frac12,\frac{1}{2^j})}^{R(n+1,\frac{1}{2^j})}\sqrt{r}\,\big[f(r+\Delta^r_{n,j},\theta)-f(r,\theta)\big]\sqrt{r}\,dr\,d\theta\Big|.
\end{align*}
We claim that the last term is bounded as follows, by Hölder's inequality:
\begin{align*}
&2^{\frac{j+k}{2}}\Big[\int_{\frac{2\pi}{2^k}m}^{\frac{2\pi}{2^k}(m+1)}\!\int_{R(n+\frac12,\frac{1}{2^j})}^{R(n+1,\frac{1}{2^j})}r\,dr\,d\theta\Big]^{\frac12}\Big[\int_{\frac{2\pi}{2^k}m}^{\frac{2\pi}{2^k}(m+1)}\!\int_{R(n+\frac12,\frac{1}{2^j})}^{R(n+1,\frac{1}{2^j})}\big(f(r+\Delta^r_{n,j},\theta)-f(r,\theta)\big)^2r\,dr\,d\theta\Big]^{\frac12}\\
&\qquad\le 2^{\frac{j+k}{2}}\sqrt{\frac{1}{2^{k+j+1}}}\Big[\int_{\frac{2\pi}{2^k}m}^{\frac{2\pi}{2^k}(m+1)}\!\int_{R(n+\frac12,\frac{1}{2^j})}^{R(n+1,\frac{1}{2^j})}\frac{\big(f(r+\Delta^r_{n,j},\theta)-f(r,\theta)\big)^2}{(\Delta^r_{n,j})^2}\,r\,dr\,d\theta\Big]^{\frac12}\Delta^r_{n,j},
\end{align*}
in which we used the fact that
\[
\pi\Big[R^2\Big(n+1,\frac{1}{2^j}\Big)-R^2\Big(n+\frac12,\frac{1}{2^j}\Big)\Big] = \frac{1}{2^{j+1}}.
\]
It is also easy to see that the function
\[
\sqrt{n+1}-\sqrt{n+\tfrac12} = \frac{1}{2\big(\sqrt{n+1}+\sqrt{n+\frac12}\big)},
\]
which appears in $\Delta^r_{n,j}$, is a bounded, decreasing function of the integer $n$. Collecting all constants into a single constant $C$, we can write the inequality as
\begin{align*}
&\le \frac{C}{2^{j/2}}\Big[\int_{\frac{2\pi}{2^k}m}^{\frac{2\pi}{2^k}(m+1)}\!\int_{R(n+\frac12,\frac{1}{2^j})}^{R(n+1,\frac{1}{2^j})}\frac{\big(f(r+\Delta^r_{n,j},\theta)-f(r,\theta)\big)^2}{(\Delta^r_{n,j})^2}\,r\,dr\,d\theta\Big]^{\frac12}\\
&\le \frac{C}{2^{j/2}}\Big[\int_{\frac{2\pi}{2^k}m}^{\frac{2\pi}{2^k}(m+1)}\!\int_{R(n+\frac12,\frac{1}{2^j})}^{R(n+1,\frac{1}{2^j})}\Big|\frac{\partial f}{\partial r}\Big|^2r\,dr\,d\theta\Big]^{\frac12}.
\end{align*}
Now, if we sum the squares of the above over all $n$ while fixing $j$, $m$, the "inner" integral is bounded by the integral in $r$ over the whole range from $0$ to $\infty$ at the current cut level $1/2^j$. As we have said, the range of $m$ depends on $k$; again fixing $j$, the outer integral over the angles then covers the whole $360^\circ$. Taking $j>\Theta$, the final estimate is
\[
\sum_{j>\Theta,\,m,\,n}|(f,\psi^{k,m}_{j,n})|^2 \le \frac{C}{2^{\Theta}}\Big[\int_0^{2\pi}\!\int_0^\infty\Big|\frac{\partial f}{\partial r}\Big|^2r\,dr\,d\theta\Big] \tag{6.4}
\]
for some constant $C>0$. Using the Jacobian change-of-variable rule (i.e. $r\,dr\,d\theta = dx\,dy$), one sees that for $f = f(x,y)$, where $(x,y)\in\mathbb{R}^2$ and $\theta = \arctan(y/x)$,
\[
\int_0^{2\pi}\!\int_0^\infty\Big|\frac{\partial f}{\partial r}\Big|^2r\,dr\,d\theta = \int_{\mathbb{R}^2}\Big|\frac{\partial f}{\partial x}\cos\theta+\frac{\partial f}{\partial y}\sin\theta\Big|^2dx\,dy, \tag{6.5}
\]
which is bounded by
\[
\int_{\mathbb{R}^2}|\nabla f|^2\,dx\,dy \le \|f\|^2_{H^1(\mathbb{R}^2)}.
\]
Finally, we obtain
\[
\sum_{j>\Theta,\,m,\,n}|(f,\psi^{k,m}_{j,n})|^2 \le \frac{C}{2^{\Theta}}\,\|f\|^2_{H^1(\mathbb{R}^2)}. \tag{6.6}
\]
Thus we continue our earlier proof as follows, with the multi-index $l = (j,n,k,m)$ and the coefficient $\phi^i_\alpha(\Delta,\cdot)$ in the role of $f$:
\[
|||\bar u^n_N(t_i,\cdot)-\Pi_\Theta\bar u^n_N(t_i,\cdot)|||^2 = \sum_{\alpha\in J^n_N}\sum_{l>\Theta}\frac{E\big(\phi^i_\alpha(\Delta,\cdot),e_l\big)^2}{\alpha!} \le \sum_{\alpha\in J^n_N}\frac{C}{2^{\Theta}}\frac{E\|\phi^i_\alpha(\Delta,\cdot)\|^2_{H^1(\mathbb{R}^2)}}{\alpha!} \le \frac{\bar Ce^{\bar CT}}{2^{\Theta}}\,\|p_0\|^2_{H^1(\mathbb{R}^2)}.
\]
Following this, we get
\[
\varepsilon_i \le e^{C\Delta}\varepsilon_{i-1} + \frac{\bar Ce^{\bar CT}}{2^{\Theta}}\|p_0\|_{H^1(\mathbb{R}^2)} \le \frac{e^{iC\Delta}-1}{e^{C\Delta}-1}\cdot\frac{\bar Ce^{\bar CT}}{2^{\Theta}}\|p_0\|_{H^1(\mathbb{R}^2)} \le \frac{\tilde Ce^{\tilde CT}}{\Delta\,2^{\Theta}}\|p_0\|_{H^1(\mathbb{R}^2)},
\]
which admits an exponential rate of convergence similar to the one derived for the one-dimensional wavelet basis. We also retain the same overall estimate as in Corollary 5.17. $\square$
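Before turning to the numerical experiments, here is a short sketch of ours (the parameter values and grid sizes are arbitrary choices) that evaluates $\psi^{k,m}_{j,n}$ in polar coordinates and checks numerically that the constant $a^{k,m}_{j,n} = 2^{(k+j)/2}$ indeed gives $\|\psi^{k,m}_{j,n}\|_{L^2} = 1$:

```python
import numpy as np

def R(n, j):
    """Radius R(n, 1/2^j) = sqrt(n / (2^j pi)); n may be a half-integer."""
    return np.sqrt(n / (2.0 ** j * np.pi))

def psi(rho, theta, j, n, k, m):
    """The 2-D Haar-type basis function psi^{k,m}_{j,n} in polar coordinates."""
    sector = ((2 * np.pi * m / 2 ** k <= theta) &
              (theta < 2 * np.pi * (m + 1) / 2 ** k)).astype(float)
    radial = np.where((R(n, j) <= rho) & (rho < R(n + 0.5, j)), 1.0,
             np.where((R(n + 0.5, j) <= rho) & (rho < R(n + 1, j)), -1.0, 0.0))
    return 2.0 ** ((k + j) / 2) * sector * radial

# midpoint-rule quadrature of ||psi||^2 = integral of psi^2 rho drho dtheta
j, n, k, m = 2, 3, 2, 1
drho, dth = 5e-4, 2 * np.pi / 2000
rho = (np.arange(1300) + 0.5) * drho      # covers [0, 0.65], beyond R(4, 1/4)
theta = (np.arange(2000) + 0.5) * dth
P = psi(rho[:, None], theta[None, :], j, n, k, m)
print(np.sum(P ** 2 * rho[:, None]) * drho * dth)   # prints approximately 1
```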
Chapter 7
Numerical Experiments

It is time to carry out numerical experiments based on the theoretical scheme we have been working with. The basic idea of this chapter is the following: we provide a stochastic partial differential equation satisfied by the unnormalized filtering density $u$, which can be assumed to have sufficiently "good" properties, "good" meaning that the UFD $u$ can be solved explicitly and analytically. We then work out the Poisson chaos expansion coefficients and use them to approximate the exact solution we have found for $u$. Comparing the approximate solution with the exact solution lets us verify what we have derived for the numerical scheme.

7.1 Setting up the Exact Solution u

As just stated, we set up the exact solution $u$ as follows. Let $u = u(t,x)$ be the unnormalized filtering density we are looking for. It satisfies the following Poisson-driven stochastic partial differential equation:
\begin{align}
du(t,x) &= \mathcal{L}^*u(t,x)\,dt + u(t^-,x)\,\tilde N(dt), \tag{7.1}\\
u(0,x) &= p_0(x). \tag{7.2}
\end{align}
We also assume that the operator $\mathcal{L}^*$ is the heat operator, i.e.
\[
\mathcal{L}^*u(t,x) = \frac{\partial^2}{\partial x^2}u(t,x),
\]
and the jump process $N(\cdot)$ is simplified to depend only on the time variable $t$, with a simple compensator: for some constant $\lambda>0$,
\[
E\big(N(dt)\big) = \lambda\,dt,\qquad \tilde N(dt) = N(dt)-\lambda\,dt;
\]
in more detail, on a given interval $[0,t]$ the number of jumps follows the standard Poisson distribution,
\[
P\big(N([0,t]) = m\big) = e^{-\lambda t}\frac{(\lambda t)^m}{m!},
\]
and on disjoint intervals $[0,t_1),[t_1,t_2),[t_2,t_3),\ldots$ the process has independent increments, the Poisson distributions being identical whenever the lengths of the time intervals are the same.

Now we derive some fundamental results for the exact solution $u(t,x)$. The equation that $u$ satisfies can be rewritten in the following integral form:
\[
u(t,x) = u(0,x) + \int_0^t\mathcal{L}^*u(s,x)\,ds - \lambda\int_0^tu(s,x)\,ds + \int_0^tu(s^-,x)\,N(ds). \tag{7.3}
\]
Assume we have a sequence of time moments at which the jumps happen, $\tau_1,\tau_2,\ldots$, so that the whole interval is partitioned as
\[
0\le\tau_1\le\tau_2\le\cdots\le T.
\]
We know that the point (counting) measure $N(dt)$ only takes effect when the time variable $t$ passes a jump moment $\tau_i$ from its left side ($\tau_i^-$) to its right ($\tau_i$), and the counting mass at those points is just $1$. So on the intervals where there is no jump, e.g. $[0,\tau_1),(\tau_1,\tau_2),(\tau_2,\tau_3),\ldots$, $u$ follows a "smooth" partial differential equation, obtained by differentiating both sides of (7.3) with respect to $t$:
\[
\frac{\partial u}{\partial t} = \frac{\partial^2u}{\partial x^2} - \lambda u.
\]
We can also show that
\[
u(\tau_i,x) = u(\tau_i^-,x) + \int_{\tau_i^-}^{\tau_i}\mathcal{L}^*u(s,x)\,ds - \lambda\int_{\tau_i^-}^{\tau_i}u(s,x)\,ds + \int_{\tau_i^-}^{\tau_i}u(s^-,x)\,N(ds).
\]
As $t$ runs from $\tau_i^-$ to $\tau_i$, the middle two terms approach $0$, and we conclude that
\[
u(\tau_i,x) = 2u(\tau_i^-,x).
\]
Thus we have the following formulation for $u$ as $t$ increases. On $[0,\tau_1)\times\mathbb{R}$:
\begin{align}
\frac{\partial u}{\partial t} &= \frac{\partial^2u}{\partial x^2}-\lambda u, \tag{7.4}\\
u(0,x) &= p_0(x) = e^{-\frac{x^2}{4}}, \tag{7.5}
\end{align}
where $\|p_0\|_{L^2(\mathbb{R})} = \big(\int_{-\infty}^{\infty}e^{-x^2/2}dx\big)^{1/2} = (2\pi)^{1/4}<\infty$. Here $u$ can be solved analytically, and we can get the exact value of $u(t,x)$ as $t\to\tau_1^-$. Then we take
\[
u(\tau_1,x) = 2u(\tau_1^-,x)
\]
as the new initial value for the next interval $[\tau_1,\tau_2)$, on which $u$ satisfies the same partial differential equation.
On the first interval:
\begin{align}
\frac{\partial u}{\partial t} &= \frac{\partial^2u}{\partial x^2}-\lambda u, \tag{7.6}\\
u|_{t=0} &= e^{-\frac{x^2}{4}}. \tag{7.7}
\end{align}
Setting $\bar u(t,x) = e^{\lambda t}u(t,x)$, one gets
\begin{align}
\frac{\partial\bar u}{\partial t} &= \frac{\partial^2\bar u}{\partial x^2}, \tag{7.8}\\
\bar u|_{t=0} &= e^{-\frac{x^2}{4}}. \tag{7.9}
\end{align}
The fundamental solution of the heat equation gives
\[
\bar u(t,x) = \frac{1}{\sqrt{4\pi t}}\int_{-\infty}^{\infty}e^{-\frac{(y-x)^2}{4t}}e^{-\frac{y^2}{4}}dy = E\big[e^{-\frac{Y^2}{4}}\big],
\]
where $Y\sim N(x,2t)$; standard knowledge of Gaussian random variables gives us
\[
\bar u(t,x) = \frac{1}{\sqrt{1+t}}e^{-\frac{x^2}{4(1+t)}},
\]
and thus
\[
u(t,x) = \frac{e^{-\lambda t}}{\sqrt{1+t}}e^{-\frac{x^2}{4(1+t)}}
\]
on $[0,\tau_1)\times\mathbb{R}$. For the next interval,
\[
u(\tau_1,x) = 2u(\tau_1^-,x) = \frac{2e^{-\lambda\tau_1}}{\sqrt{1+\tau_1}}e^{-\frac{x^2}{4(1+\tau_1)}}.
\]

On the second interval:
\begin{align}
\frac{\partial u}{\partial t} &= \frac{\partial^2u}{\partial x^2}-\lambda u, \tag{7.10}\\
u|_{t=\tau_1} &= \frac{2e^{-\lambda\tau_1}}{\sqrt{1+\tau_1}}e^{-\frac{x^2}{4(1+\tau_1)}}. \tag{7.11}
\end{align}
Again, setting $\bar u(t,x) = e^{\lambda t}u(t,x)$, we obtain
\begin{align}
\frac{\partial\bar u}{\partial t} &= \frac{\partial^2\bar u}{\partial x^2}, \tag{7.12}\\
\bar u|_{t=\tau_1} &= \frac{2}{\sqrt{1+\tau_1}}e^{-\frac{x^2}{4(1+\tau_1)}}. \tag{7.13}
\end{align}
We realize that the standard formula for solving the heat equation will not work here, since the initial time is not $0$. We therefore perform a linear transformation to make the problem start at $0$ again (i.e. a linear "stretch" of the interval from $[\tau_1,\tau_2)$ to $[0,\tau_2)$). Define the function $v = v(t,x)$ by
\[
\bar u(t,x) = v\Big(\frac{\tau_2}{\tau_2-\tau_1}t - \frac{\tau_2\tau_1}{\tau_2-\tau_1},\,x\Big). \tag{7.14}
\]
This guarantees that $\bar u(\tau_1,x) = v(0,x)$ and $\bar u(\tau_2^-,x) = v(\tau_2^-,x)$. We reach the following initial value problem:
\begin{align}
\frac{\partial v}{\partial t} &= \frac{\tau_2-\tau_1}{\tau_2}\frac{\partial^2v}{\partial x^2}, \tag{7.15}\\
v|_{t=0} &= \frac{2}{\sqrt{1+\tau_1}}e^{-\frac{x^2}{4(1+\tau_1)}}. \tag{7.16}
\end{align}
So we get
\[
v(t,x) = \frac{2}{\sqrt{1+\tau_1}}\frac{1}{\sqrt{4\pi a^2t}}\int_{-\infty}^{\infty}e^{-\frac{(y-x)^2}{4a^2t}}e^{-\frac{y^2}{4(1+\tau_1)}}dy = \frac{2}{\sqrt{1+\tau_1}}E\big[e^{-\frac{Y^2}{4(1+\tau_1)}}\big],
\]
where $Y\sim N(x,2a^2t)$ and $a^2 = \frac{\tau_2-\tau_1}{\tau_2}$, and finally
\[
v(t,x) = \frac{2}{\sqrt{1+\tau_1+a^2t}}e^{-\frac{x^2}{4(1+\tau_1+a^2t)}}.
\]
Moreover, by (7.14),
\[
\bar u(t,x) = \frac{2}{\sqrt{1+t}}e^{-\frac{x^2}{4(1+t)}},\qquad u(t,x) = \frac{2e^{-\lambda t}}{\sqrt{1+t}}e^{-\frac{x^2}{4(1+t)}}
\]
on $[\tau_1,\tau_2)\times\mathbb{R}$.

More intervals: we skip the further calculations; the exact solutions for $u$ on the intervals $[\tau_{i-1},\tau_i)$ are obtained just as above:

Time interval        $u(t,x) =$
$[0,\tau_1)$         $e^{-\lambda t}(1+t)^{-1/2}\,e^{-x^2/(4(1+t))}$
$[\tau_1,\tau_2)$    $2e^{-\lambda t}(1+t)^{-1/2}\,e^{-x^2/(4(1+t))}$
$[\tau_2,\tau_3)$    $4e^{-\lambda t}(1+t)^{-1/2}\,e^{-x^2/(4(1+t))}$
$\cdots$             $\cdots$
$[\tau_{n-1},T]$     $2^{n-1}e^{-\lambda t}(1+t)^{-1/2}\,e^{-x^2/(4(1+t))}$

Here we assume there are $n-1$ time moments at which jumps happen. They are generated from an exponential random number generator, using the relationship between the Poisson and exponential distributions. The exponential random numbers are obtained from $-\frac{\log U}{\lambda}$, where $U$ is a uniform random number on $[0,1]$. It is easy to see that
\[
P\Big(-\frac{\log U}{\lambda}\le x\Big) = P\big(U\ge e^{-\lambda x}\big) = 1-e^{-\lambda x}
\]
for $x\in[0,\infty)$. In our simulation programs we generate the sequence $\tau_1,\tau_2,\ldots,\tau_{n-1}$ based on these results, and we plot a graph of the exact solution $u = u(t,x)$.
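The jump moments and the piecewise-explicit solution above are straightforward to code. Here is a minimal sketch of ours (the random seed and the output point are arbitrary choices) of what the simulation programs do at this stage:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, T = 2.0, 0.1

# jump moments from i.i.d. exponential gaps  -log(U)/lam
taus, t = [], 0.0
while True:
    t += -np.log(rng.uniform()) / lam
    if t >= T:
        break
    taus.append(t)

def u_exact(t, x):
    """Piecewise formula: u = 2^{#jumps <= t} e^{-lam t} (1+t)^{-1/2} e^{-x^2/(4(1+t))}."""
    jumps = sum(tau <= t for tau in taus)
    return (2.0 ** jumps * np.exp(-lam * t) / np.sqrt(1 + t)
            * np.exp(-x ** 2 / (4 * (1 + t))))

print("jump moments:", [round(tau, 4) for tau in taus])
print("u(T, 0) =", float(u_exact(T, 0.0)))
```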
7.2 Charlier Polynomials for the Simulation Example

The Charlier polynomials are derived from the solution of
\begin{align*}
dP_t(z) &= \delta_z(t^-)P_{t^-}(z)\,\tilde N(dt),\\
P_0(z) &= 1,
\end{align*}
via the Poisson-type Itô formula:
\begin{align*}
d\log P_t(z) &= \big[\log\big(P_t(z)+P_t(z)\delta_z(t)\big)-\log P_t(z)-\delta_z(t)\big]\lambda\,dt\\
&\quad + \big[\log\big(P_{t^-}(z)+P_{t^-}(z)\delta_z(t^-)\big)-\log P_{t^-}(z)\big]\tilde N(dt)\\
&= \big[\log(1+\delta_z(t))-\delta_z(t)\big]\lambda\,dt + \log\big(1+\delta_z(t^-)\big)\tilde N(dt).
\end{align*}
It yields
\[
P_t(z) = \exp\Big\{\int_0^t\log\big(1+\delta_z(s^-)\big)N(ds) - \lambda\int_0^t\delta_z(s)\,ds\Big\}, \tag{7.17}
\]
and
\[
\tilde C^t_\alpha = \frac{\partial^\alpha}{\partial z^\alpha}P_t(z)\Big|_{z=0}.
\]
We write the characteristic set of the multi-index $\alpha$ as $Q_\alpha = (i_1,i_2,\ldots,i_n)$, where $|\alpha| = n$. By passing the partial derivatives with respect to $z$ inside the integrals, we get, in the case $|\alpha| = 1$,
\[
\tilde C^t_k = \int_0^t\delta_k(s)\,N(ds) - \lambda\int_0^t\delta_k(s)\,ds = \int_0^t\delta_k(s)\,\tilde N(ds), \tag{7.18}
\]
where $Q_\alpha = \{k\}$. And for the case $|\alpha| = 2$, the Charlier polynomial is given by
\begin{align*}
\tilde C^t_{k,l} &= \Big[\int_0^t\delta_k(s)N(ds)-\lambda\int_0^t\delta_k(s)ds\Big]\Big[\int_0^t\delta_l(s)N(ds)-\lambda\int_0^t\delta_l(s)ds\Big] - \int_0^t\delta_k(s)\delta_l(s)\,N(ds)\\
&= \Big[\int_0^t\delta_k(s)\,\tilde N(ds)\Big]\Big[\int_0^t\delta_l(s)\,\tilde N(ds)\Big] - \int_0^t\delta_k(s)\delta_l(s)\,N(ds),
\end{align*}
where $Q_\alpha = \{k,l\}$ and $k = l$ is allowed; and so on and so forth for multi-indices of higher size. We normalize them to obtain the CONS basis functions:
\[
C^t_\alpha = \frac{\tilde C^t_\alpha}{\sqrt{\alpha!}}.
\]
The essential part of simulating the Charlier polynomials is how to simulate each Poisson stochastic integral
\[
\int_0^t\delta_k(s)\,\tilde N(ds) = \int_0^t\delta_k(s)\,N(ds) - \lambda\int_0^t\delta_k(s)\,ds. \tag{7.19}
\]
Because we have already generated a sequence of jump moments
\[
0\le\tau_1\le\tau_2\le\cdots\le\tau_{n-1}\le T,
\]
for each given $t$ we first determine a $\tau_j$, $j\ge1$, from the above sequence such that $\tau_j\le t<\tau_{j+1}$. Then (7.19) is computed as
\[
\int_0^t\delta_k(s)\,\tilde N(ds) \doteq \delta_k(\tau_1)+\delta_k(\tau_2)+\cdots+\delta_k(\tau_j) - \lambda\int_0^t\delta_k(s)\,ds. \tag{7.20}
\]
If such a $\tau_j$ cannot be located, the only possibility being $t\in[0,\tau_1)$, then (7.19) is simply
\[
\int_0^t\delta_k(s)\,\tilde N(ds) = -\lambda\int_0^t\delta_k(s)\,ds,
\]
without any stochastic integrals involved.
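A direct transcription of (7.20) follows (a sketch of ours; for concreteness we already use the cosine basis that the next section adopts in (7.25)-(7.26), for which the compensator integral is available in closed form):

```python
import numpy as np

lam, T = 2.0, 0.1
taus = [0.0256]        # jump moments, e.g. the single jump of the first experiment

def delta_k(s, k):
    """Cosine CONS on L^2([0,T]) adopted in Section 7.3, cf. (7.25)-(7.26)."""
    return 1 / np.sqrt(T) if k == 1 else np.sqrt(2 / T) * np.cos(np.pi * (k - 1) * s / T)

def int_delta_k(t, k):
    """Closed form of int_0^t delta_k(s) ds."""
    if k == 1:
        return t / np.sqrt(T)
    return np.sqrt(2 / T) * T / (np.pi * (k - 1)) * np.sin(np.pi * (k - 1) * t / T)

def stoch_int(t, k):
    """int_0^t delta_k dN~ per (7.20): sum over jumps up to t minus the compensator."""
    return sum(delta_k(tau, k) for tau in taus if tau <= t) - lam * int_delta_k(t, k)

print([round(stoch_int(T, k), 4) for k in (1, 2, 3)])
```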
7.3 Deriving Formulas for the Chaos Expansion Coefficients

We already know the Zakai-type equation that $u$ satisfies:
\begin{align}
du(t,x) &= \mathcal{L}^*u(t,x)\,dt + u(t^-,x)\,\tilde N(dt), \tag{7.21}\\
u(0,x) &= p_0(x). \tag{7.22}
\end{align}
We also have the following equation, similar to the one used for the Girsanov theorem in the Wiener case:
\begin{align}
dP_t(z) &= \delta_z(t^-)P_{t^-}(z)\,\tilde N(dt), \tag{7.23}\\
P_0(z) &= 1. \tag{7.24}
\end{align}
Here $z$ is the multi-index $z = (z_1,z_2,\ldots)$ and $\delta_z(t) = \sum_kz_k\delta_k(t)$. Since the problem we are simulating has been simplified so that the jump process depends only on time, the basis functions here also depend only on time; equivalently, they form a complete orthonormal system on $L^2([0,T])$. We pick
\begin{align}
\delta_1(t) &= \frac{1}{\sqrt T}, \tag{7.25}\\
\delta_k(t) &= \sqrt{\frac{2}{T}}\cos\Big(\frac{\pi(k-1)}{T}t\Big),\quad k\ge2, \tag{7.26}
\end{align}
the standard choice of CONS in Fourier analysis!

Using Itô's formula, we have
\[
d\big[u(t,x)P_t(z)\big] = P_t(z)\mathcal{L}^*u(t,x)\,dt + P_{t^-}(z)u(t^-,x)\,\tilde N(dt) + u(t^-,x)\delta_z(t^-)P_{t^-}(z)\,\tilde N(dt) + d\Big[\sum_{0\le s\le t}\Delta P_{s^-}(z)\,\Delta u(s^-,x)\Big],
\]
where the last term satisfies
\[
\sum_{0\le s\le t}\Delta P_{s^-}(z)\,\Delta u(s^-,x) = \int_0^t\delta_z(s^-)u(s^-,x)P_{s^-}(z)\,N(ds),
\]
since the two jumps happen at the same time. Therefore we can write the following integral form:
\begin{align*}
u(t,x)P_t(z) &= p_0(x) + \int_0^t\mathcal{L}^*\big[u(s,x)P_s(z)\big]ds + \int_0^tu(s^-,x)P_{s^-}(z)\,\tilde N(ds)\\
&\quad + \int_0^t\delta_z(s^-)u(s^-,x)P_{s^-}(z)\,\tilde N(ds) + \int_0^t\delta_z(s^-)u(s^-,x)P_{s^-}(z)\,N(ds).
\end{align*}
Taking expectations on both sides, we have
\[
E\big[u(t,x)P_t(z)\big] = u(0,x) + \int_0^t\mathcal{L}^*E\big[u(s,x)P_s(z)\big]ds + \lambda\int_0^t\delta_z(s)E\big(u(s,x)P_s(z)\big)ds. \tag{7.27}
\]
We call
\[
\varphi(t,x,z) := E\big(u(t,x)P_t(z)\big).
\]
Now, differentiating (7.27) with respect to $t$, we get
\begin{align}
\frac{\partial\varphi(t,x,z)}{\partial t} &= \mathcal{L}^*\varphi(t,x,z) + \lambda\delta_z(t)\varphi(t,x,z), \tag{7.28}\\
\varphi(0,x,z) &= E\big[u(0,x)\big] = p_0(x). \tag{7.29}
\end{align}
We are seeking the partial differential equations that the coefficients of the following Poisson chaos expansion formula should satisfy:
\[
u(t,x) = \sum_\alpha\varphi_\alpha(t,x)\frac{C^t_\alpha}{\sqrt{\alpha!}}. \tag{7.30}
\]
The orthogonality of the $C^t_\alpha$ says that
\[
\frac{\varphi_\alpha(t,x)}{\sqrt{\alpha!}} = \langle u(t,x),C^t_\alpha\rangle = \frac{1}{\sqrt{\alpha!}}E\big[u(t,x)\tilde C^t_\alpha\big] = \frac{1}{\sqrt{\alpha!}}E\Big[u(t,x)\frac{\partial^\alpha}{\partial z^\alpha}P_t(z)\Big|_{z=0}\Big];
\]
therefore
\[
\varphi_\alpha(t,x) = \frac{\partial^\alpha}{\partial z^\alpha}\underbrace{E\big[u(t,x)P_t(z)\big]}_{\varphi(t,x,z)}\Big|_{z=0}. \tag{7.31}
\]
We notice that $P_t(z)$ is the only term depending on $z$; thus each coefficient is given by
\[
\varphi_\alpha(t,x) = \frac{\partial^\alpha}{\partial z^\alpha}\varphi(t,x,z)\Big|_{z=0}.
\]
Thus we can apply the operator $\frac{\partial^\alpha}{\partial z^\alpha}\big|_{z=0}$ to both sides of (7.28) and obtain the recursive numerical schemes for all the coefficients in (7.30). For any given multi-index $\alpha$, we have:
\begin{align}
\frac{\partial\varphi_\alpha(t,x)}{\partial t} &= \mathcal{L}^*\varphi_\alpha(t,x) + \lambda\sum_{l:\,\alpha_l\ne0}\alpha_l\,\delta_l(t)\,\varphi_{\alpha(l)}(t,x), \tag{7.32}\\
\varphi_\alpha(0,x) &= p_0(x)\mathbf{1}_{|\alpha|=0}. \tag{7.33}
\end{align}
Again, $\alpha(l)$ is the multi-index of size $|\alpha|-1$ whose $k$-th component equals $\max\{\alpha_l-1,0\}$ if $k = l$ and equals $\alpha_k$ if $k\ne l$.

Just as we did earlier for the exact solution of the unnormalized filtering density $u$, we start with
\begin{align}
\frac{\partial\varphi_0(t,x)}{\partial t} &= \mathcal{L}^*\varphi_0(t,x) = \frac{\partial^2\varphi_0(t,x)}{\partial x^2}, \tag{7.34}\\
\varphi_0(0,x) &= p_0(x). \tag{7.35}
\end{align}
Thus the first and most important expansion coefficient, $\varphi_0(t,x)$, can be solved analytically for any $t\in[0,T]$ and $x\in\mathbb{R}$.

Next, for $\alpha$ with $|\alpha| = 1$, assume $\alpha_k = 1$ for some integer index $k\ge1$. The equations are as follows:
\begin{align}
\frac{\partial\varphi_k(t,x)}{\partial t} &= \mathcal{L}^*\varphi_k(t,x) + \lambda\delta_k(t)\varphi_0(t,x), \tag{7.36}\\
\varphi_k(0,x) &= 0. \tag{7.37}
\end{align}
Again, since we solved $\varphi_0(t,x)$ explicitly above, for $t\in[0,T]$ and $x\in\mathbb{R}$ this is the standard nonhomogeneous heat equation problem, and $\varphi_k(t,x)$ can still be found analytically.

Moving on, for multi-indices with $|\alpha| = 2$: in the case $\alpha_k = \alpha_l = 1$ for some $k,l$ with $1\le k<l$, the equations are
\begin{align}
\frac{\partial\varphi_{k,l}(t,x)}{\partial t} &= \mathcal{L}^*\varphi_{k,l}(t,x) + \lambda\delta_k(t)\varphi_l(t,x) + \lambda\delta_l(t)\varphi_k(t,x), \tag{7.38}\\
\varphi_{k,l}(0,x) &= 0, \tag{7.39}
\end{align}
and similarly, in the case $|\alpha| = 2$ with $\alpha_n = 2$ for some $n\ge1$:
\begin{align}
\frac{\partial\varphi_{n,n}(t,x)}{\partial t} &= \mathcal{L}^*\varphi_{n,n}(t,x) + 2\lambda\delta_n(t)\varphi_n(t,x), \tag{7.40}\\
\varphi_{n,n}(0,x) &= 0. \tag{7.41}
\end{align}
Typically, one would start thinking of numerical methods (e.g. a finite difference method) to solve these coefficient equations, using the one just computed (at the lower level) for the subsequent calculations. Yet by inspecting these partial differential equation problems, we realize that all of them have analytical solutions, obtained by a process very close to the one we went through to find the exact solution of $u$.

First coefficient: we can skip the details here, since we did this when solving for $u$. The result is
\[
\varphi_0(t,x) = E\big[e^{-\frac14Y^2}\big] = \frac{1}{\sqrt{1+t}}e^{-\frac{x^2}{4(1+t)}}, \tag{7.42}
\]
where the normal random variable $Y$ satisfies $Y\sim N(x,2t)$.

"Second" coefficient: let us change the form of (7.36) a little and write it as
\begin{align}
\frac{\partial\varphi_k(t,x)}{\partial t} &= \mathcal{L}^*\varphi_k(t,x) + f_k(t,x), \tag{7.43}\\
\varphi_k(0,x) &= 0, \tag{7.44}
\end{align}
with
\[
f_k(t,x) = \frac{\lambda\delta_k(t)}{\sqrt{1+t}}e^{-\frac{x^2}{4(1+t)}}.
\]
Thus we use the formula for the nonhomogeneous heat equation problem:
\begin{align}
\varphi_k(t,x) &= \int_0^t\!\int_{-\infty}^{\infty}\frac{1}{\sqrt{4\pi(t-s)}}e^{-\frac{(y-x)^2}{4(t-s)}}f_k(s,y)\,dy\,ds \tag{7.45}\\
&= \int_0^t\frac{\lambda\delta_k(s)}{\sqrt{1+s}}\int_{-\infty}^{\infty}\frac{1}{\sqrt{4\pi(t-s)}}e^{-\frac{(y-x)^2}{4(t-s)}}e^{-\frac{y^2}{4(1+s)}}dy\,ds \tag{7.46}\\
&= \int_0^t\frac{\lambda\delta_k(s)\,e^{-\frac{x^2}{4(1+t)}}}{\sqrt{1+t}}\,ds = \frac{\lambda e^{-\frac{x^2}{4(1+t)}}}{\sqrt{1+t}}\int_0^t\delta_k(s)\,ds. \tag{7.47}
\end{align}
One could go further for all the other coefficients of larger size. Although this method works very well analytically and explicitly, we would still like to derive a general formula giving the coefficient for any given multi-index $\alpha$. In the following steps we find the exact solution $\varphi(t,x,z)$, from which any particular $\varphi_\alpha(t,x)$ can be read off. We now go back to (7.28) and solve it analytically.
\begin{align*}
\frac{\partial\varphi(t,x,z)}{\partial t} &= \mathcal{L}^*\varphi(t,x,z) + \lambda\delta_z(t)\varphi(t,x,z),\\
\varphi(0,x,z) &= E\big[u(0,x)\big] = e^{-\frac14x^2}.
\end{align*}
Define
\[
\Phi(t,x,z) = G_z(t)\,\varphi(t,x,z),\qquad\text{where } G_z(t) = \exp\Big\{-\lambda\int_0^t\delta_z(s)\,ds\Big\}.
\]
It is easy to see that
\[
\frac{\partial\Phi(t,x,z)}{\partial t} = G_z(t)\frac{\partial\varphi(t,x,z)}{\partial t} - \lambda\delta_z(t)\varphi(t,x,z)G_z(t) = G_z(t)\,\mathcal{L}^*\varphi(t,x,z).
\]
Thus we reach the new homogeneous problem
\begin{align}
\frac{\partial\Phi(t,x,z)}{\partial t} &= \mathcal{L}^*\Phi(t,x,z), \tag{7.48}\\
\Phi(0,x,z) &= \varphi(0,x,z) = e^{-\frac14x^2}. \tag{7.49}
\end{align}
The analytical solution is
\[
\Phi(t,x,z) = \frac{1}{\sqrt{1+t}}e^{-\frac{x^2}{4(1+t)}},
\]
and finally
\[
\varphi(t,x,z) = \frac{e^{-\frac{x^2}{4(1+t)}}}{\sqrt{1+t}}\exp\Big\{\lambda\int_0^t\delta_z(s)\,ds\Big\}. \tag{7.50}
\]
Let us apply the differential operator in the multi-index $z$ and derive the general formulas for the coefficients. One more thing we have to use is the characteristic set: for the multi-index $\alpha$ with $|\alpha| = n$, we have its characteristic set $Q_\alpha = (i_1,i_2,\ldots,i_n)$. The coefficient $\varphi_\alpha(t,x)$ is then given by
\begin{align}
\varphi_\alpha(t,x) &= \frac{\partial^n}{\partial z_{i_1}\partial z_{i_2}\cdots\partial z_{i_n}}\varphi(t,x,z)\Big|_{z=0} \tag{7.51}\\
&= \frac{\lambda^ne^{-\frac{x^2}{4(1+t)}}}{\sqrt{1+t}}\prod_{k=1}^n\Big\{\int_0^t\delta_{i_k}(s)\,ds\Big\}. \tag{7.52}
\end{align}
Let us present some examples of $\varphi_\alpha$ for indices of size at most $|\alpha| = 2$.

Example 7.1. For $|\alpha| = 1$ with $\alpha_k = 1$ for some $k\ge1$:
\[
\varphi_k(t,x) = \frac{\lambda e^{-\frac{x^2}{4(1+t)}}}{\sqrt{1+t}}\int_0^t\delta_k(s)\,ds.
\]
For $|\alpha| = 2$ with $\alpha_k = \alpha_l = 1$, $1\le k<l$:
\[
\varphi_{k,l}(t,x) = \frac{\lambda^2e^{-\frac{x^2}{4(1+t)}}}{\sqrt{1+t}}\Big[\int_0^t\delta_k(s)\,ds\Big]\Big[\int_0^t\delta_l(s)\,ds\Big].
\]
For $|\alpha| = 2$ with $\alpha_n = 2$ for some $n\ge1$:
\[
\varphi_{n,n}(t,x) = \frac{\lambda^2e^{-\frac{x^2}{4(1+t)}}}{\sqrt{1+t}}\Big[\int_0^t\delta_n(s)\,ds\Big]^2.
\]
And so on. All the basis functions $\delta_k(t)$ are known, and the integrals are easy to handle.
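Since every ingredient is now explicit, the truncated expansion can be assembled in a few lines. The sketch below is ours (the parameters mirror the first experiment of the next section); it evaluates (7.52) and the first-order Charlier functionals (7.18), and compares the $|\alpha|\le1$ truncation with the exact solution. The match at first order is rough; the experiments below include all terms up to $|\alpha| = 2$.

```python
import numpy as np

lam, T = 2.0, 0.1
taus = [0.0256]                        # the single jump moment of the first experiment

def delta_k(s, k):
    return 1 / np.sqrt(T) if k == 1 else np.sqrt(2 / T) * np.cos(np.pi * (k - 1) * s / T)

def int_delta_k(t, k):
    """Closed form of int_0^t delta_k(s) ds for the cosine basis (7.25)-(7.26)."""
    if k == 1:
        return t / np.sqrt(T)
    return np.sqrt(2 / T) * T / (np.pi * (k - 1)) * np.sin(np.pi * (k - 1) * t / T)

def phi(t, x, chars):
    """Formula (7.52) for the characteristic set chars = (i_1, ..., i_n)."""
    base = np.exp(-x ** 2 / (4 * (1 + t))) / np.sqrt(1 + t)
    return lam ** len(chars) * base * np.prod([int_delta_k(t, i) for i in chars])

def C_tilde(t, k):
    """First-order Charlier functional (7.18): int_0^t delta_k dN~."""
    return sum(delta_k(tau, k) for tau in taus if tau <= t) - lam * int_delta_k(t, k)

t, x = 0.1, np.linspace(0, 1, 5)
u1 = phi(t, x, ()) + sum(phi(t, x, (k,)) * C_tilde(t, k) for k in range(1, 14))
u_exact = 2 * np.exp(-lam * t) / np.sqrt(1 + t) * np.exp(-x ** 2 / (4 * (1 + t)))
print(np.round(u1, 4))
print(np.round(u_exact, 4))
```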
7.4 Layouts of the Simulation Results

We start by setting $T = \Delta = 0.1$ and the compensator constant of the Poisson measure $\lambda = 2$. Based on the arguments above, we generate the jump-moment sequence on $[0,T]$; practical experiments showed that very few jumps happen on this range when $T$ is small.

7.4.1 Only One Jump Happens

As a note to the graphs presented below, the only jump moment simulated on $[0,T]$ is at approximately $0.0256$. According to the conclusions for the exact solution, there are only TWO different formulas for $u = u(t,x)$, differing by a factor of $2$.

[Figure 7.1: Exact vs. Simulated, one-step method, one jump: surface plots of the exact and simulated solutions for $x\in[0,1]$, $t\in[0,0.1]$.]

From the figure of the exact solution, the jump property of the UFD $u$ can clearly be observed, and correspondingly the simulation exhibits a similar behavior as the time variable gets close to the jump moment. Let us make some notes on the details of what we did in the simulation:

[Figure 7.2: Growth of the error throughout $t\in[0,0.1]$, first example.]

- Size of the multi-indices: $|\alpha|\le N = 2$;
- Index of the multi-indices: $\mathrm{Index}(\alpha)\le n = 13$;
- Total number of coefficients used (counting all indices of size $0,1,2$): $1+13+13+13\cdot12/2 = 105$, i.e. more than $100$ coefficients;
- The moment of the jump, $\tau$, is also a "turning" point, at which the error between the two solutions grows significantly and dramatically fast.

As we have said, $t$ runs from $0$ to $0.1$; let us observe how the difference between the two solutions changes. In general, the order of the error is about $10^{-2}$, except before the (first) jump happens, where the error is about $10^{-3}$ or even better.

Next, we pick some particular points $t^*\in[0,0.1]$ and display the comparison between the two curves, namely $u(t^*,x)$ and $u^n_N(t^*,x)$, for $t^*$ given and $x\in[0,1]$. We use a blue curve for the exact solution and a green curve for the simulated solution. In particular, we observe these two curves when the time is close to the jump moment $\tau\doteq0.0256$.

[Figure 7.3: Comparisons at different fixed time moments, first example: $t^* = 0.005$, $0.02$, $0.03$, $0.1$.]

Again, the worst part happens right after $\tau$, but things get better in the end. The reason the error improves after $\tau$ is simply the property of $u$: it comes off the "jump" surface back onto the "smooth" analytical surface. The picture of the simulated solution clearly reflects this property, while the "vibrations" visible on the simulated surface are due to the choice of basis functions (the Fourier basis). This makes sense: a continuous and infinitely differentiable $u$ can be approximated very well by the Fourier basis on $L^2$, as we have picked many trigonometric basis functions for this approach, and such basis functions have long been proven to give an efficient "spectral approximation" of an analytical function.

7.4.2 Two Jumps Happen

One should already have gained some "experience" of the "bad effect" that an increasing number of jumps on the time interval can have on the simulated solution. As a quick example following the first one, we generated another sequence of Poisson jump moments, namely
\[
\tau_1 = 0.0334,\qquad\tau_2 = 0.0767.
\]
In the previous example, the mean error at the end point $t = 0.1$ was of order $10^{-2}$; in the following example it changes to $10^{-1}$.

[Figure 7.4: Exact vs. Simulated, one-step method, two jumps.]

Let us do the same thing as before and observe the error between the exact and the simulated solution; again, one should pay particular attention as the time gets close to the two jump moments:

[Figure 7.5: Growth of the error throughout $t\in[0,0.1]$, second example.]

Indeed, both moments behave as "turning" points of the error growth: right after each of them, the error between the exact and simulated solutions reaches a "local" peak. We also pick some particular moments $t$ and show the curves with respect to $x$ only:

[Figure 7.6: Comparisons at different fixed time moments, second example: $t = 0.01$ and $t = 0.035$.]

7.4.3 Other Trials

Further trials show trends similar to what we have just seen.
For example, if one wants the result at $t = 0.5$, we use $\Delta = 0.1$ as the partition size and derive

[Figure 7.7: Continued from the previous figure: comparisons at $t = 0.05$ and $t = 0.08$.]

the following error accumulation rates. Within this renewed example, we generated a random jump moment $\tau\doteq0.4455$:

Time interval    Accumulated error
$[0,0.1]$        $1.14\times10^{-2}$
$[0.1,0.2]$      $1.25\times10^{-2}$
$[0.2,0.3]$      $2.6\times10^{-3}$
$[0.3,0.4]$      $1.76\times10^{-2}$
$[0.4,0.5]$      $1.34\times10^{-1}$

(Note: no jumps happened on $[0,0.1]$ here, so the error $1.14\times10^{-2}$ is in general better than the one from the earlier example, which was also on $[0,0.1]$.)

We can see that over the third interval the sufficient smoothness of $u$ also helped reduce the error considerably compared with the previous step: a "long-lasting" stretch without jumps acts as a good healing, significantly decreasing the growth rate of the error. As before, the biggest increase of the error happened near the jump moment, i.e. on the last interval, where the final error becomes almost 10 times what it was before.
Moreover Z R d j ~ F(t;y)j 2 ~ ¼(t;dy)+ Z R d jF(t;y)j 2 ¼(dy)·K ² (A2'): There is ± >0 such that for all »2R d : X i;j ¾ i (x)¢¾ j (x)» i » j ¸ ±j»j 2 ; 1+½(x;y) ¸ ±: ² (A3'): With respect to probability measure ~ P, the measures M, N, W and ~ W are independent, where d ~ P=® T dP and ® t satis¯es that d® t =® t ¡ Z µ ¡ ½(µ t ;y) 1+½(µ t ;y) ¶ ~ M(dt;dy) (8.4) 195 GrigelionisandMikulevicius[10],[27]provedthatunderassumptions(A1'){(A2'), the Kallianpur optimal ¯lter: ^ f(µ t )= E £ f(µ t )® ¡1 t jF Y t ¤ E £ ® ¡1 t jF Y t ¤ (8.5) admits the UFD u = u(t;x) if the initial density of µ 0 , namely p 0 (x) 2 H n (R d ). That is, we still have u=t(t;x) which satis¯es ^ f(µ t )= R R d f(x)u(t;x)dx R R d u(t;x)dx (8.6) for all 0·t·T. The previous arguments worked similarly as what we did in deriving the ex- istence of the UFD for our original ¯ltering scheme. Followed from that, it is of great interest that one should ask what Stochastic Partial Di®erential Equation that u(t;x) shall satisfy now. So we give the current version of Zakai equation for the newly updated ¯ltering model: Theorem 8.1. The UFD u=u(t;x) is a solution to the following SPDE: du(t;x) = L ¤ u(t;x)dt+ Z R d [u(t;x¡F(t;y))¡u(t;x)] ~ N(dt;dy) (8.7) ¡ X i @ i (H i (t;x)u(t;x))d ~ W t (8.8) u(0;x) = p 0 (x) (8.9) 196 where F is the vectorized function contains all components of F i for i=1;2;:::;d. The \di®erential operator",L ¤ is now given by L ¤ u(t;x) = 1 2 @ i ¡ ~ ¾ i (t;x)¢~ ¾ j (t;x)@ j u(t;x) ¢ ¡@ i (a i (t;x)u(t;x)) + Z R d h u(t;x¡ ~ F(t;y))¡u(t;x)+@ i u(t;x) ~ F i (t;y)1 U (y) i ~ ¼(t;dy); with ~ ¾ i =¾ i +H i . We have been familiar with the process of deriving such an SPDE. As we did before,gettingtheSPDEoff(µ t )nowshallinvolvecombinationuseofbothWiener and Poisson types of Ito's Rules. The Wiener Process has great smooth property whicheventuallygivesusthesecondorderderivativesintheformulaofoperatorL ¤ yet for the Poisson Point Measure, we would still derive the discrete type of second order derivative as shown in the formula. Wewouldalsosayinouroriginalexample,wehaveusedaveryspecialcasethat F(s;y) = y. This signi¯cantly simpli¯ed the derivation of the numerical scheme. But we still claim that the Chaos Expansion method works for equations (8.7) and (8.9). 197 The di®erence between this problem and the one we solved earlier is that the property of the SPDE we have now motivates us to set up the Chaos Expansion scheme as u(t;x)= X ®2J 1 p ®! ' ® (t;x)C t ® + X ¯2J 1 p ¯! © ¯ (t;x)» ¯ (8.10) where the basis functions » ¯ are complete and orthonormal on L 2 (;F Y t ;W). Similar methods and ideas for setting up the recursive numerical scheme and for giving corresponding error estimates can be anticipated and done in the near future. 8.2 Computational Stochastic Fluid Mechanics As we have said, the Chaos Expansion scheme used in this dissertation is con- structed for ¯ltering purposes. But the method itself was not invented only for this categoryofproblemsbutalsoforothermoregeneralizedStochasticPartialDi®eren- tialEquations. Inthissection, wewillintroducetwomajortypeofSPDEsthatare currently being researched my scholars in which the same spectral approximations idea have been plugged in. Stochastic Burger's Equation: 198 Hou, Kim, Rozovskii and Zhou [13] has used the Wiener type Chaos Expansion method on solving the following 1{D stochastic Burger's Equation: 8 > > < > > : u t +uu x =¹u xx +¾(x) _ w(t) u(0;x)=u 0 (x);u(t;0)=u(t;L) (8.11) for 0·t·T and 0·x·L. 
In this equation, the unknown function u stands for the velocity of given °uid particle, ¹ stands for the viscosity of the speci¯ed °uid, as usually seen in general Navier{Stokes Equations. What makes us felt more interested is that the ¾(x) term, meaning the external random force, is attached with the standard Wiener noise, w(t). Intheirwork,theequationswassolvedagainbyassumingucanbedecomposed according to certain type of CONS basis functions. While after plugging in the assumed expansion formula, set of partial di®erential equations that the expansion coe±cients shall satisfy can be obtained. Numerical solution to those coe±cients hasbeentriedandmoreover,thestatisticalmoments,i.e. E[ju(t;x)j 2 ]hasalsobeen simulated. Per our discussion in previous section, the assumption for the property of the noise can be more complicated such that Poisson type could be enrolled in, too. Stochastic Passive Scalar Equation: 199 Mikulevicius and Rozovskii [28], [29] studied the following Stochastic Passive Scalar Equation: 8 > > < > > : @ t u(t;x)=¢u(t;x)+ P i @ i u(t;x)v i (t;x) u(0;x)=f(x) (8.12) where the spatial variable x2R d . This time stochastic information appeared as \scalars". We assume that v i (t;x)= Z t 0 Z R d F i (s;x;z) ~ N(ds;dz) (8.13) The general assumption for this problem is that F i 's are bounded measurable de- terministic functions and we have its ¯nite second moment: Z T 0 Z R d jF i (t;x;z)jº(dz)dt<1 (8.14) As the noise term is clear, we assume F i (t;x;z)= X ® F i ® (x)à ® (t;z) (8.15) 200 for some ¯xed t. The choice of the CONS basis on z 2R d can be very tricky. A Wavelet trial has been made in the paper of these two scholars. We could expect to derive v i (t;x)= X ® F i ® (x) ~ N(à ® (t;z)) (8.16) where ~ N(à ® (t;z))= Z t 0 Z R d à ® (s;z) ~ N(ds;dz) will have similar CONS properties as the Charlier Polynomials we derived earlier. And followed by that, the °uid mechanics model will be solved base on a recursive scheme using the known expansion coe±cients F i ® 's. Anyway,theChaosExpansionmethodwillplayamoreandmorepowerfulrollin solvingmultipletypesofmathematical{physicalbasedmodelswiththeinvolvement of noise terms. 201 Reference List [1] BARAS J., Real-time Architectures for the Zakai Equations and Applications, Academic Press, Boston, 1991. [2] BENSOUSSAN A., GLOWINSKI R., RASCANU R., Approximations of the Zakai Equation by Splitting up Method, Siam J. Control OPTIM., 28(1990), pp. 1420{1431. [3] CAMERONR.H.,MARTINW.T.,TheOrthogonalDevelopmentofNonlinear Functionals in a Series of Fourier-Hermite Functions, Ann. Math., 48(1947), pp. 385{392. [4] CHICONE C., LATUSHKIN Y., Evolution Semigroups in Dynamical Systems and Di®erential Equations, AMS, 1999. [5] DI NUNNO G., MEYER-BRANDIS T., ÂKSENDAL B., PROSKE F., Malli- avin Calculus for L¶ evy Processes, Department of Mathematics, University of Oslo, 2003. [6] ELLIOTT R., GLOWINSKI R., Approximations to Solutions of the Zakai Filtering Equation, Stochatic Anal. Appl., 7(1989), pp. 145{168. [7] EVANS L., Partial Di®erential Equations, AMS, 1998. [8] FLORCHINGER P., LeGLAND F., Time Discretization of the Zakai Equa- tion for Di®usion Processes Observed in correlated Noise, Stochastics Rep., 35(1991), pp. 233{256. [9] GIRARDI M., SWELDENS W., A New Class of Unbalanced Haar Wavelets thatFormanUnconditionalBasisforL p onGeneralMeasureSpaces,J.Fourier Anal. Appl., (3)1997, pp. 457{474. 
[10] GRIGELIONIS B., MIKULEVICIUS R., Stochastic Evolution Equations and Densities of the Conditional Distributions, Lecture Notes in Control and Information Sciences, 1983.

[11] HIDA T., KUO H. H., POTTHOFF J., STREIT L., White Noise, Kluwer, Boston, 1993.

[12] HOLDEN H., ØKSENDAL B., UBØE J., ZHANG T., Stochastic Partial Differential Equations: A Modeling, White Noise Approach, Birkhäuser, 1996.

[13] HOU T. Y., KIM H., ROZOVSKII B., ZHOU H. M., Wiener Chaos Expansions and Numerical Solutions of Randomly Forced Equations of Fluid Mechanics, J. Comput. Phys., 2004.

[14] ITO K., Multiple Wiener Integral, J. Math. Soc. Japan, 3 (1951), pp. 157-169.

[15] KALLIANPUR G., Stochastic Filtering Theory, Springer-Verlag, New York, 1980.

[16] KARR A. F., Point Processes and Their Statistical Inference, 2nd Edition, Revised and Expanded, Marcel Dekker, New York, 1991.

[17] KRYLOV N. V., ROZOVSKII B., On Conditional Distributions of Diffusion Processes, Math. USSR-Izv., 42 (1978), pp. 336-356.

[18] KUNITA H., Cauchy Problem for Stochastic Partial Differential Equations Arising in Nonlinear Filtering Theory, Systems Control Lett., 1 (1981), pp. 37-41.

[19] KUNITA H., Stochastic Flows and Stochastic Differential Equations, Cambridge University Press, Cambridge, UK, 1982.

[20] KUSHNER H. J., Probability Methods for Approximations in Stochastic Control and for Elliptic Equations, Academic Press, New York, 1977.

[21] LIPTSER R. S., SHIRYAYEV A. N., Statistics of Random Processes, Springer-Verlag, New York, 1992.

[22] LOTOTSKY S., Problems in Statistics of Stochastic Partial Differential Equations, PhD Dissertation, Department of Mathematics, University of Southern California, Aug 1996.

[23] LOTOTSKY S., Wiener Chaos and Nonlinear Filtering, Applied Mathematics and Optimization, Vol. 54, No. 3, pp. 265-291, 2006.

[24] LOTOTSKY S., MIKULEVICIUS R., ROZOVSKII B., Nonlinear Filtering Revisited: A Spectral Approach, SIAM J. Control Optim., Vol. 35, No. 2, pp. 435-461, March 1997.

[25] MEYER Y., Wavelets and Applications, Springer-Verlag, 1991.

[26] MEYER Y., Wavelets and Operators, Cambridge Studies in Advanced Mathematics, 1992.

[27] MIKULEVICIUS R., Properties of Solutions of SDEs, Lith. Math. J., 23(4), 1983, pp. 18-31.

[28] MIKULEVICIUS R., ROZOVSKII B., Linear Parabolic Stochastic PDE's and Wiener Chaos, SIAM J. Math. Anal., 29 (1998), pp. 452-480.

[29] MIKULEVICIUS R., ROZOVSKII B., Separation of Observations and Parameters in Nonlinear Filtering, in Proc. 32nd IEEE Conf. on Decision and Control, Part 2, San Antonio, 1993, IEEE Control Systems Society, Piscataway, NJ, 1993, pp. 1564-1569.

[30] OCONE D., Multiple Integral Expansions for Nonlinear Filtering, Stochastics, 10 (1983), pp. 1-30.

[31] ØKSENDAL B., Stochastic Differential Equations, 5th Edition, Springer-Verlag, 2000.

[32] ØKSENDAL B., PROSKE F., White Noise of Poisson Random Measure, Department of Mathematics, University of Oslo, 2002.

[33] ROZOVSKII B., Stochastic Evolution Systems, Kluwer, Amsterdam, 1990.

[34] WANG Z. X., GUO D. R., Introduction to Special Functions, Peking University Press, 2000.

[35] ZAKAI M., On the Optimal Filtering of Diffusion Processes, Z. Wahrscheinlichkeitstheorie und verw. Gebiete, 4 (1969), pp. 230-233.
Abstract
In this dissertation, we recall some basic settings and definitions of nonlinear filtering theory and, based on the important optimal filtering estimator provided by Kallianpur, discuss the properties and equations which the "unnormalized filtering density" (UFD) should satisfy. Lototsky, Mikulevicius and Rozovskii have done a great job in decomposing the UFD based on the Wiener chaos expansion theory and applying the theory numerically. We derive a similar recursive numerical scheme with respect to the L2-CONS basis functions on the probability space generated by the Poisson point measure, and derive the growth rate of the error caused by the algorithm. Moreover, some alternative ways of reducing the error and making the algorithm more efficient are also discussed. Finally, based on numerical experiments from computer simulations, we verify the validity and feasibility of our algorithm.