OPTIMAL DIVIDEND AND INVESTMENT PROBLEMS UNDER SPARRE ANDERSEN MODEL

by

Xiaojing Xing

A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(APPLIED MATHEMATICS)

December 2016

Copyright 2016 Xiaojing Xing

Dedication

To My Family

Acknowledgments

First and foremost, I would like to express my greatest appreciation to my advisor, Professor Jin Ma, for his patient guidance, invaluable advice and constant inspiration during my years at USC. This dissertation would not exist without his generous help.

I would also like to thank Professor Jianfeng Zhang, Professor Jinchi Lv, Professor Remigijus Mikulevicius and Professor Sergey Lototsky for serving on my committee, and for all the constructive comments and advice they provided. It is my great honor to be a member of the USC math department, thanks to the effort of all its faculty and staff members. I must also acknowledge many friends at USC who assisted, advised, and supported me; I have enjoyed my time here over the past years.

My sincere gratitude also goes to Professor Lihua Bai for the invaluable discussions on many research problems in this thesis. Her knowledge and enthusiasm enlightened my research experience greatly.

Last but not least, I would like to thank my parents and my husband Zimu Zhu for standing by my side and supporting me spiritually throughout all these years. My parents have provided me the best education, the most loving environment, and unconditional support throughout my life; without their persistent care and encouragement, I could never have gone this far. I hope they will accept this dissertation as a small gift for the love they have given to me.

Table of Contents

Dedication
Acknowledgments
Abstract
Chapter 1: Introduction
  1.1 Dividend Maximization Problem
  1.2 The Sparre Andersen Model
  1.3 Main Difficulties
  1.4 Introduction to Related Methodology
    1.4.1 Dynamic Programming Approach
    1.4.2 Viscosity Solutions
    1.4.3 Phase-type Distribution
Chapter 2: Preliminaries and Problem Formulation
  2.1 Backward Markovization and Delayed Renewal Process
  2.2 Optimal Dividend-Investment Problem with the Sparre Andersen Model
Chapter 3: Basic Properties and Continuity of Value Function
  3.1 Basic Properties of the Value Function
  3.2 Continuity of the value function in x
  3.3 Continuity of the value function in w
Chapter 4: Dynamic Programming Principle and the HJB Equation
  4.1 Dynamic Programming Principle
  4.2 The Hamilton-Jacobi-Bellman equation
Chapter 5: Comparison Principle and Uniqueness
Chapter 6: Phase-type Distribution of Ruin Time
  6.1 Review of Phase-type Distribution and Application on Risk Processes
  6.2 Martingale Method for exit probability under Lévy process
  6.3 Results under perturbed Sparre Andersen Model
  6.4 Lundberg Bounds for Sparre Andersen Model
Chapter 7: Appendix
Bibliography

Abstract

The main topic of this dissertation is the study of a class of optimal dividend and investment problems in which the underlying reserve process follows the Sparre Andersen model; that is, the claim frequency is a "renewal" process rather than a standard compound Poisson process.
The signature feature of such problems is that the underlying reserve dynamics, even in its simplest form, is no longer Markovian. A large part of the dissertation is based on the idea of backward Markovization, with which we can recast the problem in a Markovian framework, with an expanded dimension representing the time elapsed since the last claim. We then investigate the regularity of the value function and validate the dynamic programming principle. As the main result, we prove that the value function is the unique constrained viscosity solution to the associated HJB equation on a cylindrical domain on which the problem is well-defined.

As a first step towards future research, we also explore the possibility of adopting another powerful tool in the study of renewal processes, the phase-type distribution, in a semimartingale paradigm. While the effectiveness of this method beyond Lévy processes is still largely unknown, we are able to obtain a Lundberg bound for the ruin probability under our combined investment-dividend model, which, to the best of our knowledge, is new.

Chapter 1

Introduction

We begin by introducing the well-known Cramér-Lundberg model for the risk reserve process (or surplus process) of an insurance company:
$$R_t = x + pt - \sum_{i=1}^{N_t} U_i, \qquad t \ge 0, \tag{1.1}$$
where $x$ is a deterministic non-negative initial reserve, $p$ is a positive constant representing the premium rate, and $Q_t = \sum_{i=1}^{N_t} U_i$, $t \ge 0$, is the total claim amount up to time $t$. In particular, one assumes that $N_t$ is a Poisson process with intensity $\lambda$, and that the $U_i$'s are independent, identically distributed random variables with a certain common distribution $G$, independent of $N$. That is, $Q$ is a compound Poisson process. We often denote $R_t = R^x_t$ when the initial reserve $x$ needs to be specified.
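As a concrete illustration, the reserve dynamics (1.1) can be simulated directly. The sketch below is not from the dissertation; all function names and parameter choices are illustrative. It uses the fact that, under the compound Poisson assumption, interclaim times are exponential and ruin can only occur at a claim arrival.

```python
import random

def simulate_reserve(x, p, lam, claim_sampler, horizon, rng):
    """One path of the Cramer-Lundberg reserve R_t = x + p*t - sum_{i<=N_t} U_i.

    Claims arrive at the jumps of a Poisson process with intensity lam; claim
    sizes are drawn by claim_sampler(rng). Returns (ruin_time or None,
    reserve at the horizon or at ruin)."""
    t, reserve = 0.0, x
    while True:
        wait = rng.expovariate(lam)  # Exp(lam) interclaim time
        if t + wait > horizon:
            return None, reserve + p * (horizon - t)
        t += wait
        reserve += p * wait - claim_sampler(rng)
        if reserve < 0:
            return t, reserve  # ruin can only happen at a claim arrival

rng = random.Random(2016)
ruin_time, final = simulate_reserve(
    x=10.0, p=2.0, lam=1.0,
    claim_sampler=lambda r: r.expovariate(0.5),  # exponential claims, mean 2
    horizon=100.0, rng=rng)
```

Averaging the indicator of `ruin_time is not None` over many paths gives a Monte Carlo estimate of the finite-horizon ruin probability $P(\tau(x) \le T)$.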
Note that the process $R$ is a piecewise deterministic Markov process, and the infinitesimal generator of this Markov process, applied to a suitable function $g$, is given by
$$\mathcal{L}g(x,t) = p\,\frac{\partial g}{\partial x}(x,t) + \frac{\partial g}{\partial t}(x,t) + \lambda\Big(\int_0^\infty g(x-u,t)\,dG(u) - g(x,t)\Big).$$

Before we go into the details of the background and a review of the literature, some important definitions are presented as follows.

Definition 1.0.1. The ruin time $\tau(x)$ denotes the first passage time of the reserve process out of the region $[0,\infty)$, i.e.,
$$\tau(x) = \inf\{t > 0 : R^x_t < 0\}.$$

Definition 1.0.2. Let $\mathbb{F} = (\mathcal{F}_t)_{t \ge 0}$ be the filtration generated by the risk process (e.g., compound Poisson). A stochastic process $L_t$ (serving as the dividend process) is an admissible strategy if it is càglàd, $\mathbb{F}$-progressively measurable, and fulfills: (i) ruin does not occur due to a dividend payment; (ii) $L_0 = 0$ and $L$ is nondecreasing; (iii) payments stop after ruin.

Definition 1.0.3. The controlled reserve process under a Cramér-Lundberg model is defined by
$$R^L_t = R^{x,L}_t = x + pt - \sum_{i=1}^{N_t} U_i - L_t, \qquad t \ge 0, \tag{1.2}$$
and $\tau^L(x)$ denotes the ruin time of $R^{x,L}_t$.

1.1 Dividend Maximization Problem

After the introduction of the classical risk model by Lundberg, the probability of ruin $P(\tau(x) < \infty)$ served as the main criterion to measure performance. The problem of maximizing the cumulative discounted dividend payout was first seen in the seminal work of de Finetti [25] in 1957, when he proposed to measure the performance of an insurance portfolio by the maximum possible dividend paid during its lifetime,
$$E_x\Big\{\int_0^\tau e^{-ct}\,dL_t\Big\}, \tag{1.3}$$
where $L_t$ is the dividend process and $c$ a discount factor, instead of focusing only on the safety aspect measured by the ruin probability. Whereas de Finetti himself solved the problem in a very simple discrete random walk model, much research has since been done on this optimality question under more general and realistic model assumptions.
Although other criteria, such as the so-called Gordon model [30] as well as the simpler model by Miller-Modigliani [45], have been proposed over the years, to date the cumulative discounted dividend is still widely accepted as an important and useful performance index, and various approaches have been employed to find the optimal strategy that maximizes this index. The problem has become a rich and challenging field of research.

Some particular dividend strategies are the following.

Barrier Strategy. For a fixed barrier height $b \ge 0$, the cumulative dividend payments are of the form
$$L_t = (x - b)\mathbf{1}_{\{x > b\}} + \int_0^{t \wedge \tau} p\,\mathbf{1}_{\{R_s = b\}}\,ds.$$
In words, such a strategy immediately pays out all of the reserve above $b$, and subsequently all incoming premium that would lead to a surplus above $b$ is immediately paid out as dividend.

Band Strategy. Such a strategy is characterized by three sets $A$, $B$ and $C$ which partition the state space of the reserve process. Each set is associated with a dividend payment action for the current reserve $x$ as follows: if the current surplus $x \in A$, then all incoming premium is paid out; if $x \in B$, then a lump sum is paid out, moving the current reserve to the closest point of $A$ smaller than $x$; if $x \in C$, then no dividend is paid.

Threshold Strategy. Given a threshold level $b$ and a positive constant $a$, the dividend is of the form
$$L_t = \int_0^{t \wedge \tau} a\,\mathbf{1}_{\{R_s \ge b\}}\,ds.$$
In words, such a strategy pays out dividends at rate $a$ whenever the current reserve is above $b$.

The solution of the optimal dividend problem under the classical Cramér-Lundberg model has been obtained in various forms. Gerber [28] first showed that an optimal dividend strategy has a "band" structure. Since then optimal dividend policies, especially barrier strategies, have been investigated in various settings, sometimes under more general reserve models (see, e.g., [3, 4, 12, 29, 34, 36, 42, 46, 55], to mention a few).
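The threshold strategy lends itself to a simple path simulation of the discounted-dividend objective (1.3). The sketch below is illustrative only: it assumes a dividend rate $a < p$ (so the reserve drifts upward between claims) and exponential interclaim times as in the Cramér-Lundberg model, and it handles each interclaim interval piecewise in closed form.

```python
import math, random

def discounted_dividends_threshold(x, p, lam, b, a, c, horizon,
                                   claim_sampler, rng):
    """One simulated path of the threshold strategy L_t = int a*1{R_s >= b} ds.

    Assumes 0 < a < p, so the reserve drifts upward between claims. Returns
    (discounted dividends paid up to ruin or the horizon, ruin time or None)."""
    t, r, paid = 0.0, x, 0.0
    while t < horizon:
        end = min(t + rng.expovariate(lam), horizon)  # next claim or horizon
        while t < end - 1e-12:
            if r >= b - 1e-12:
                dt = end - t  # pay at rate a until the next claim
                paid += (a / c) * (math.exp(-c * t) - math.exp(-c * (t + dt)))
                r += (p - a) * dt
            else:
                dt = min(end - t, (b - r) / p)  # grow at rate p up to level b
                r += p * dt
            t += dt
        t = end
        if end >= horizon:
            break
        r -= claim_sampler(rng)
        if r < 0:
            return paid, t  # ruin at a claim arrival
    return paid, None
```

Averaging `paid` over many independent paths gives a Monte Carlo estimate of the objective (1.3) for the given $(b, a)$.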
The more general optimization problems for insurance models involving the possibility of investment and/or reinsurance have also been studied quite extensively in the past two decades. In 1995, Browne [22] first considered the problem of minimizing the probability of ruin under a diffusion-approximated Cramér-Lundberg model, where the insurer is allowed to invest some fraction of the reserve dynamically in a Black-Scholes market. Hipp-Plum [31] later considered the same problem with a compound Poisson claim process. Problems involving either proportional or excess-of-loss reinsurance strategies have also been studied under the Cramér-Lundberg model or its diffusion approximations (see, e.g., [32-34, 54]). The optimal dividend and reinsurance problem with transaction costs and taxes was studied by Lihua Bai with various co-authors [15-17], whereas ruin problems, reinsurance problems, and universal variable insurance problems involving investment in the more general jump-diffusion framework have been investigated by Jin Ma [41, 43, 44] from the stochastic control perspective. We should remark that the two references closest to the present research are Azcue-Muler [13, 14], obtained in 2005 and 2010, respectively. The former concerns optimal dividend-reinsurance, and the latter optimal dividend-investment.

1.2 The Sparre Andersen Model

It is worth noting, however, that all the aforementioned results are based on the Cramér-Lundberg type of surplus dynamics or its variations within the Markovian paradigm, whose analytical structure plays a fundamental role. A well-recognized generalization of this model is one in which the Poisson claim number process is replaced by a renewal process, known as the Sparre Andersen risk model [56]. Note that the key difference between these two models lies in the assumption on the interclaim time distribution.
In the Cramér-Lundberg model the interclaim time distribution is exponential with parameter $\lambda$, while in the Sparre Andersen model it is assumed to be a general distribution with density $f$.

The dividend problem under such a model is much subtler due to its generally non-Markovian nature, and the literature is much more limited. In this context, Li-Garrido [39] first studied the properties of the renewal risk reserve process with a barrier strategy. Later, after the moments of the expected discounted dividend payments under a barrier strategy were calculated in [2], Albrecher-Hartinger [3] showed that, unlike in the classical Cramér-Lundberg model, even in the case of Erlang(2) distributed interclaim times and exponentially distributed claim amounts, the horizontal barrier strategy is no longer optimal. Consequently, the optimal dividend problem under the Sparre Andersen model has since been listed as an open problem that requires attention (see [11]), and to the best of our knowledge, it remains unsolved to this day.

At the end of this section, we cite from the survey paper by H. Albrecher and S. Thonhauser, which inspired the idea of this research:

"It is still an open problem to identify optimal dividend strategies in this model. One can markovize the Sparre Andersen model by extending the dimension of the state space of the risk process, taking into account the time that has elapsed since the last claim occurrence. A reasonable strategy should also depend on this additional variable. But correspondingly also the dimension of the associated HJB equation will be extended, which considerably increases the difficulties one is facing when analytically approaching this equation." (H. Albrecher and S. Thonhauser)

1.3 Main Difficulties

This research is our first attempt to attack this open problem. We start with a rather simplified renewal reserve model, but allowing investment, and try to solve the optimal dividend problem.
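For simulation purposes, replacing Poisson arrivals by a general renewal stream only changes the interclaim sampler. The following sketch (illustrative, not from the dissertation) uses Erlang(2) interclaim times, the non-exponential example mentioned above, and also tracks the time elapsed since the last claim, which is exactly the extra state variable used later to Markovize the model.

```python
import random

def erlang2(beta, rng):
    """Erlang(2, beta) interclaim time: the sum of two independent Exp(beta)
    draws, the textbook non-exponential example in the Sparre Andersen model."""
    return rng.expovariate(beta) + rng.expovariate(beta)

def renewal_state(horizon, interclaim, rng):
    """Run a renewal claim-arrival process up to `horizon` and return
    (N_T, W_T): the claim count and the time elapsed since the last claim."""
    t_last, n = 0.0, 0
    while True:
        wait = interclaim(rng)
        if t_last + wait > horizon:
            return n, horizon - t_last
        t_last += wait
        n += 1

rng = random.Random(7)
n_claims, elapsed = renewal_state(50.0, lambda r: erlang2(1.0, r), rng)
```

Note that the pair `(n_claims, elapsed)` evolves Markovianly along the path even though the claim count alone does not.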
Note that the two references most closely related to this research are by Azcue-Muler [13, 14]. Both followed the dynamic programming approach, and the analytic properties of the value function, including its being the viscosity solution to the associated Hamilton-Jacobi-Bellman (HJB) equation, became the main focus. The two main ingredients are the dynamic programming approach and the notion of viscosity solution, which will be introduced later.

From the stochastic control perspective, the main technical difficulties for a general optimal dividend problem under the Sparre Andersen model can be roughly summarized in two major points: the non-Markovian nature of the model, and the random duration of the insurance portfolio. We note that although the former would seemingly invalidate the dynamic programming approach, a Markovization is possible by extending the dimension of the state space of the risk process, taking into account the time elapsed since the last claim (see [11]). It turns out that this extra variable causes some subtle technical difficulties in the analysis of the regularity of the value function. For example, as we shall see later, unlike in the compound Poisson cases studied in [13, 14], even the continuity of the value function requires some heavy arguments, much less the Lipschitz properties which play a fundamental role in a standard argument.

As for the latter issue, since we focus on the life of the portfolio until ruin, the optimization problem naturally has a random terminal time. While it is known in theory that such a problem can often be converted to one with a fixed (deterministic) terminal time (see, e.g., [21]) once the distribution of the random terminal time is known, finding the distribution of the ruin time under the Sparre Andersen model is itself a challenging problem, even under very explicit strategies (see, e.g., [2, 29, 39]), which makes the optimization problem technically extremely hard along this line.
In the last part of this dissertation, we explore the possibility of this approach by taking into consideration the structure of the phase-type distribution.

Our main work is thus to first "Markovize" the model and then study the optimal dividend problem via the dynamic programming approach. Specifically, we shall first investigate the properties of the value function and then validate the dynamic programming principle (DPP), from which we can formally derive the associated HJB equation, of which the value function is a solution in some sense. An important observation, however, is that the value function could very well be discontinuous at the boundary of the region on which it is well-defined, and no explicit boundary condition can be established directly from the data of the problem. Among other things, the lack of boundary information for the HJB equation makes the comparison principle, whence uniqueness, particularly subtle, if not impossible. To overcome this difficulty we invoke the notion of constrained viscosity solution
For the controlled compound Poisson process R L t , LetU ad be the set of admissible strategies and dene value function to be V (x) = sup U ad E x n Z L 0 e ct dL t o : Then the optimization problem is said to fulll DPP if V (x) = sup U ad E x n Z t^ L 0 e cs dL s +e c(t^ L ) V (R L t^ L ) o : (1.4) 10 In continuous time this leads to the Hamilton-Jacobi-Bellman (HJB) equation. The derivation of this equation are technically dicult to verify. Hence usually people derive heuristically by applying the generator to the compound Poisson process and then verify the solution to the HJB equation is the value function of the optimal control problem. 1.4.2 Viscosity Solutions The notion of viscosity solution was introduced by Crandall and Lions[23] for rst order Hamilton-Jacobi equations and by Lions [40] for second-order partial dierential equations. Nowadays, it is a standard tool for studying HJB equations; see, for example, Fleming and Soner [27] and Bardi and Capuzzo-Dolcetta [18]. In [35], Crandall, Ishii and Lions oered a comprehensive introduction about viscosity solutions. 1.4.3 Phase-type Distribution Phase-type distribution, dened as the distribution of absorption times of cer- tain Markov jump processes, constitutes a class of distribution on the positive axis. Indeed any positive distribution may be approximated arbitrarily closely by phase-type distributions. Exact solutions to many complex problems in stochastic modeling can be obtained explicitly or numerically under phase-type structure. 11 Although most of the original applications of phase-type distribution were in the area of queuing theory, many applications to risk theory can be found in [1]. The modern theory of phase-type distribution was rst established in late sev- entieths by Marcel F. Neuts and co-workers, see [47{49]. In the nineties, O'cinneide [50] studies theoretical properties of phase-type distributions. 
Asmussen and Bladt [10] generalized risk models to situations where the premium depends on the current reserve, by assuming a phase-type distribution. Asmussen [9] provided an algorithmic solution for the finite-time-horizon ruin probability. In a very recent work of Asmussen [8], by assuming a phase-type distribution, different types of exit problems under Lévy processes are elegantly solved by examining the roots of a polynomial.

Phase-type distributions have the appealing feature of forming a dense class within the distributions on the positive real axis: for any distribution $F$ on the positive real axis, there exists a sequence of phase-type distributions which converges weakly to $F$, see [7]. Especially for thin-tailed distributions, we may therefore assume without too much loss of generality that the distribution is of phase-type.

The rest of the thesis is organized as follows. In Chapter 2, we establish the basic setting, formulate the problem, and introduce the backward Markovization technique. In Chapter 3 we study the properties of the value function and prove its continuity in the temporal variable, the spatial variable $x$, and the delayed variable $w$, respectively. In Chapter 4 we validate the Dynamic Programming Principle (DPP) and show that the value function is a constrained viscosity solution to the HJB equation. In Chapter 5 we prove the comparison principle, and hence that the value function is the unique constrained viscosity solution among a fairly general class of functions. Finally, in Chapter 6, we illustrate the Markovization idea within the framework of phase-type distributions and explore the possibility of attacking the problem by calculating the ruin probability under a phase-type assumption.
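As a small numerical illustration of the phase-type machinery in Section 1.4.3 (not from the dissertation): for a phase-type distribution $\mathrm{PH}(\alpha, S)$, the survival function is $P(T > t) = \alpha e^{St}\mathbf{1}$, so it can be evaluated with a matrix exponential. The truncated-Taylor implementation below is a sketch adequate only for small, well-scaled matrices; $\alpha$ and $S$ denote the usual initial distribution and sub-generator.

```python
import math

def mat_exp(A, n_terms=60):
    """Taylor-series matrix exponential sum_k A^k / k!; adequate for small,
    well-scaled matrices (a production code would use scaling-and-squaring)."""
    m = len(A)
    result = [[float(i == j) for j in range(m)] for i in range(m)]  # identity
    term = [row[:] for row in result]
    for k in range(1, n_terms):
        term = [[sum(term[i][l] * A[l][j] for l in range(m)) / k
                 for j in range(m)] for i in range(m)]
        result = [[result[i][j] + term[i][j] for j in range(m)] for i in range(m)]
    return result

def ph_survival(alpha, S, t):
    """P(T > t) = alpha * exp(S t) * 1, where T is the absorption time of a
    Markov jump process with initial distribution alpha and sub-generator S."""
    m = len(S)
    E = mat_exp([[S[i][j] * t for j in range(m)] for i in range(m)])
    return sum(alpha[i] * sum(E[i]) for i in range(m))

# Erlang(2, 1) is phase-type with alpha = (1, 0), S = [[-1, 1], [0, -1]];
# its survival function has the closed form e^{-t} (1 + t).
surv = ph_survival([1.0, 0.0], [[-1.0, 1.0], [0.0, -1.0]], 2.0)
```

The same routine evaluates any finite mixture or convolution of exponentials once it is written in $(\alpha, S)$ form, which is what makes the class computationally attractive.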
Chapter 2

Preliminaries and Problem Formulation

Throughout this thesis we assume that all uncertainties come from a common complete probability space $(\Omega, \mathcal{F}, P)$ on which are defined a $d$-dimensional Brownian motion $B = \{B_t : t \ge 0\}$ and a renewal counting process $N = \{N_t\}_{t \ge 0}$, independent of $B$. More precisely, we denote by $\{\sigma_n\}_{n=1}^\infty$ the jump times ($\sigma_0 := 0$) of the counting process $N$, and by $T_i = \sigma_i - \sigma_{i-1}$, $i = 1, 2, \ldots$, its waiting times (the time elapsed between successive jumps). We assume that the $T_i$'s are independent and identically distributed with a common distribution $F : \mathbb{R}_+ \to \mathbb{R}_+$, and that there exists an intensity function $\lambda : [0,\infty) \to [0,\infty)$ such that $\bar F(t) := P\{T_1 > t\} = \exp\{-\int_0^t \lambda(u)\,du\}$. In other words, $\lambda(t) = f(t)/\bar F(t)$, $t \ge 0$, where $f$ is the common density function of the $T_i$'s.

Further, throughout the thesis we denote, for a generic Euclidean space $\mathbb{X}$, regardless of its dimension, by $(\cdot,\cdot)$ and $|\cdot|$ its inner product and norm, respectively. Let $T > 0$ be a given time horizon. We denote by $C([0,T];\mathbb{X})$ the space of continuous functions taking values in $\mathbb{X}$, with the usual sup-norm, and we shall make use of the following notations:

- For any sub-$\sigma$-field $\mathcal{G} \subseteq \mathcal{F}$ and $1 \le p < \infty$, $L^p(\mathcal{G};\mathbb{X})$ denotes the space of all $\mathbb{X}$-valued, $\mathcal{G}$-measurable random variables $\xi$ such that $E|\xi|^p < \infty$. As usual, $\xi \in L^\infty(\mathcal{G};\mathbb{X})$ means that $\xi$ is a bounded, $\mathcal{G}$-measurable random variable.

- For a given filtration $\mathbb{F} = \{\mathcal{F}_t : t \ge 0\}$ in $\mathcal{F}$ and $1 \le p < \infty$, $L^p_{\mathbb{F}}([0,T];\mathbb{X})$ denotes the space of all $\mathbb{X}$-valued, $\mathbb{F}$-progressively measurable processes $\xi$ satisfying $E\int_0^T |\xi_t|^p\,dt < \infty$. The meaning of $L^\infty_{\mathbb{F}}([0,T];\mathbb{X})$ is defined similarly.
2.1 Backward Markovization and Delayed Renewal Process

An important ingredient of the Sparre Andersen model is the following "compound renewal process" that will represent the claim process in our reserve model: $Q_t = \sum_{i=1}^{N_t} U_i$, $t \ge 0$, where $N$ is the renewal process representing the frequency of the incoming claims, and $\{U_i\}_{i=1}^\infty$ is a sequence of random variables representing the "severity" (or claim size) of the incoming claims. We assume that the $\{U_i\}$ are independent, identically distributed with a common distribution $G : \mathbb{R}_+ \to \mathbb{R}_+$, and are independent of $(N, B)$.

The main feature of the Sparre Andersen model, which fundamentally differentiates this research from all existing works, is that the process $Q$ is in general non-Markovian (unless the counting process $N$ is a Poisson process); consequently we cannot directly apply the dynamic programming approach. We shall therefore first apply the so-called backward Markovization technique (cf. e.g. [52]) to overcome this obstacle. More precisely, we define a new process
$$W_t = t - \sigma_{N_t}, \qquad t \ge 0, \tag{2.1}$$
the time elapsed since the last jump. Then clearly $0 \le W_t \le t \le T$ for $t \in [0,T]$, and it is known (see, e.g., [52]) that the process $(t, Q_t, W_t)$, $t \ge 0$, is a piecewise deterministic Markov process (PDMP). We note that at each jump time $\sigma_i$, the jump size is $|\Delta W_{\sigma_i}| = \sigma_i - \sigma_{i-1} = T_i$.

Throughout this dissertation we shall consider the filtration $\{\mathcal{F}_t\}_{t \ge 0}$, where $\mathcal{F}_t := \mathcal{F}^B_t \vee \mathcal{F}^Q_t \vee \mathcal{F}^W_t$, $t \ge 0$. Here $\{\mathcal{F}^\xi_t : t \ge 0\}$ denotes the natural filtration generated by the process $\xi = B, Q, W$, respectively, with the usual $P$-augmentation so that it satisfies the usual hypotheses (cf. e.g. [51]).

A very important element in the study of a dynamic optimal control problem with finite horizon is to allow the starting point to be any time $s \in [0,T]$. In fact, this is one of the main subtleties in the Sparre Andersen model, which we now describe. Suppose that, instead of starting the clock at $t = 0$, we start from $s \in [0,T]$, with $W_s = w$, $P$-a.s. Let us consider the regular conditional probability distribution (RCPD) $P^{sw}(\cdot) := P[\,\cdot\,|W_s = w]$ on $(\Omega, \mathcal{F})$, and consider
Let us consider the regular conditional probability distribution (RCPD) P sw () := P[jW s = w] on ( ;F), and consider 16 the \shifted" version of processes (B;Q;W ) on the space ( ;F;P sw ;F s ), where F s =fF t g ts . We rst deneB s t :=B t B s ,ts. Clearly, sinceB is independent of (Q;W ), B s is an F s -Brownian motion under P sw , dened on [s;T ], with B s s = 0. Next, we restart the clock at time s2 [0;T ] by dening the new counting process N s t := N t N s , t2 [s;T ]. Then, under P sw , N s is a \delayed" renewal process, in the sense that while its waiting times T s i , i 2, remain independent, identically distributed as the original T i 's, its \time-to-rst jump", denoted by T s;w 1 :=T Ns+1 w = Ns+1 s, should have the survival probability P sw fT s;w 1 >tg =PfT 1 >t +wjT 1 >wg =e R w+t w (u)du : (2.2) In what follows we shall denote N s t Ws=w :=N s;w t , ts, to emphasize the depen- dence on w as well. Correspondingly, we shall denote Q s;w t = P N s;w t i=1 U i and W s;w t := w +W t W s = w + [(ts) ( Nt Ns )], t s. It is readily seen that (B s t ;Q s;w t ;W s;w t ), ts, is anF s -adapted process dened on ( ;F;P sw ), and it is Markovian. 17 2.2 Optimal Dividend-Investment Problem with the Sparre Andersen Model In this thesis we assume that the dynamics of surplus of an insurance company, denoted by X =fX t g t0 , in the absence of dividend payments and investment, is described by the following Sparre Andersen model on the given probability space ( ;F;P;F): X t =x +ptQ t :=x +pt Nt X i=1 U i ; t2 [0;T ]; (2.3) where x = X 0 0, p > 0 is the constant premium rate, and Q t = P Nt i=1 U i is the (renewal) claim process. We shall assume that the insurer is allowed to both invest its surplus in a nancial market and will also pay dividends, and will try to maximize the dividend received before the ruin time of the insurance company. To be more precise, we shall assume that the nancial market is described by the standard Black-Scholes model. 
That is, the prices of the risk-free and risky assets satisfy the following SDEs:
$$dS^0_t = r S^0_t\,dt; \qquad dS_t = \mu S_t\,dt + \sigma S_t\,dB_t, \qquad t \in [0,T], \tag{2.4}$$
where $B = \{B_t\}_{t \ge 0}$ is the given Brownian motion, $r$ is the interest rate, and $\mu > r$ is the appreciation rate of the stock. In the same spirit, in this thesis we consider a portfolio with only one risky asset and one bank account, and we define the control process by $\pi = (\gamma_t, L_t)$, $t \ge 0$, where $\gamma \in L^2_{\mathbb{F}}([0,T])$ is a self-financing strategy representing the proportion of the surplus invested in the stock at time $t$ (hence $\gamma_t \in [0,1]$ for all $t \in [0,T]$), and $L \in L^2_{\mathbb{F}}([0,T])$ is the cumulative dividend the company has paid out up to time $t$ (hence $L$ is increasing). Throughout this dissertation we consider the filtration $\mathbb{F} = \mathbb{F}^{(B,Q,W)}$, and we say that a control strategy $\pi = (\gamma_t, L_t)$ is admissible if it is $\mathbb{F}$-predictable with càdlàg paths and square-integrable (i.e., $E\{\int_0^T |\gamma_t|^2\,dt + |L_T|^2\} < \infty$). We denote the set of all admissible strategies restricted to $[s,t] \subseteq [0,T]$ by $\mathcal{U}_{ad}[s,t]$. Furthermore, we shall often use the notation $\mathcal{U}^{s,w}_{ad}[s,T]$ to specify the probability space $(\Omega, \mathcal{F}, P^{sw})$, and denote $\mathcal{U}^{0,0}_{ad}[0,T]$ by $\mathcal{U}_{ad}[0,T] = \mathcal{U}_{ad}$ for simplicity.

By a standard argument using the self-financing property, one can easily show that, for any $\pi \in \mathcal{U}_{ad}$ and any initial surplus $x$, the dynamics of the controlled risk process $X$ satisfy the following SDE:
$$dX_t = p\,dt + rX_t\,dt + (\mu - r)\gamma_t X_t\,dt + \sigma\gamma_t X_t\,dB_t - dQ_t - dL_t, \quad t \in [0,T]; \qquad X_0 = x. \tag{2.5}$$
We shall denote the solution to (2.5) by $X_t = X^\pi_t = X^{\pi,x}_t$ whenever the specification of $(\pi, x)$ is necessary. Moreover, for any $\pi \in \mathcal{U}_{ad}$, we denote by $\tau = \tau^{\pi,x} := \inf\{t \ge 0 : X^{\pi,x}_t < 0\}$ the ruin time of the insurance company. We shall make use of the following standing assumptions.

Assumption 2.2.1.
(a) The interest rate $r$, the volatility $\sigma$, and the insurance premium $p$ are all positive constants;
(b) The distribution functions $F$ (of the $T_i$'s) and $G$ (of the $U_i$'s) are continuous on $[0,\infty)$.
Furthermore, $F$ is absolutely continuous, with density function $f$ and intensity function $\lambda(t) := f(t)/\bar F(t) > 0$, $t \in [0,T]$;
(c) The cumulative dividend process $L$ is absolutely continuous with respect to the Lebesgue measure. That is, there exists $a \in L^2_{\mathbb{F}}([0,T];\mathbb{R}_+)$ such that $L_t = \int_0^t a_s\,ds$, $t \ge 0$. We assume further that for some constant $M \ge p > 0$ it holds that $0 \le a_t \le M$, $dt \times dP$-a.e.

Remark 2.2.2. 1) Since in this thesis we focus mainly on the value function and the dynamic programming approach, we can and shall assume that we are under the risk-neutral measure, that is, $\mu = r$ in (2.5). We note that this simplification does not change the technical nature of any of our discussion.

2) Assumption 2.2.1-(c) is merely technical, and it is not unusual; see, for example, [11, 29, 36]. This assumption certainly excludes the possibility of "singular" strategies, which could very well be the form of an optimal strategy in this kind of problem. However, since in this research our main focus is to deal with the difficulty caused by the renewal feature of the model, we are content with this assumption.

We should note that the surplus dynamics (2.5) with Assumption 2.2.1-(a) is in its simplest form. More general dynamics with carefully posed assumptions are clearly possible, but not essential for the main results of this thesis. In fact, as we shall see later, even in this simple form the technical difficulties are already significant. We therefore prefer not to pursue greater generality of the surplus dynamics in the current research, so as not to disturb the already lengthy presentation.
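The controlled dynamics (2.5), under the risk-neutral simplification $\mu = r$ and a bounded dividend rate as in Assumption 2.2.1-(c), can be discretized with a plain Euler-Maruyama step between claim arrivals. The sketch below is illustrative only: `gamma(t)` and `a(t)` are deterministic stand-ins for an admissible strategy, and no ruin check is performed.

```python
import math, random

def euler_surplus(x, s, T, p, r, sigma, gamma, a, interclaim, claim_sampler,
                  rng, n=1000):
    """Euler-Maruyama sketch of the controlled surplus (2.5) with mu = r:
        dX = (p + r*X - a(t)) dt + sigma*gamma(t)*X dB - dQ.
    gamma(t) in [0,1] is the invested proportion, a(t) the dividend rate.
    Returns the terminal value X_T (ruin is not checked in this sketch)."""
    dt = (T - s) / n
    t, x_t = s, x
    next_claim = s + interclaim(rng)
    for _ in range(n):
        dB = rng.gauss(0.0, math.sqrt(dt))
        x_t += (p + r * x_t - a(t)) * dt + sigma * gamma(t) * x_t * dB
        t += dt
        while next_claim <= t:  # subtract any claims that arrived this step
            x_t -= claim_sampler(rng)
            next_claim += interclaim(rng)
    return x_t
```

With `sigma = 0` and no claims the scheme reduces to the explicit Euler method for the linear ODE $\dot X = p + rX - a(t)$, which is a convenient sanity check.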
In the rest of this thesis we shall consider, for given $s \in [0,T]$, the following SDE (recall (2.5) and Remark 2.2.2) on the filtered probability space $(\Omega, \mathcal{F}, P^{sw}; \mathbb{F}^s)$: for $(\gamma, a) \in \mathcal{U}^{s,w}_{ad}[s,T]$,
$$\begin{cases} X_t = x + p(t-s) + r\displaystyle\int_s^t X_u\,du + \sigma\int_s^t \gamma_u X_u\,dB_u - Q^{s,w}_t - \int_s^t a_u\,du, \\ W_t := w + (t-s) - (\sigma_{N_t} - \sigma_{N_s}), \qquad t \in [s,T]. \end{cases} \tag{2.6}$$
We denote the solution by $(X, W) = (X^{\pi,s,x,w}, W^{s,w})$ to emphasize its dependence on the initial state $(s,x,w)$.

We now describe our optimization problem. Given an admissible strategy $\pi \in \mathcal{U}^{s,w}_{ad}[s,T]$, we define the cost functional, for the given initial data $(s,x,w)$ and the state dynamics (2.6), as
$$J(s,x,w;\pi) := E^{sw}\Big\{\int_s^{\tau_s \wedge T} e^{-c(t-s)}\,dL_t \,\Big|\, X_s = x\Big\} =: E^{sxw}\Big\{\int_s^{\tau_s \wedge T} e^{-c(t-s)}\,dL_t\Big\}.$$
Here $c > 0$ is the discounting factor (or force of interest), and $\tau_s = \tau^{\pi,x,w}_s := \inf\{t > s : X^{\pi,s,x,w}_t < 0\}$ is the ruin time of the insurance company. That is, $J(s,x,w;\pi)$ is the expected total discounted amount of dividends received until ruin. Our objective is to find an optimal strategy $\pi^* \in \mathcal{U}_{ad}[s,T]$ such that
$$V(s,x,w) := \sup_{\pi \in \mathcal{U}_{ad}[s,T]} J(s,x,w;\pi). \tag{2.7}$$
We note that the value function is defined for $(s,x,w) \in \bar D$, where $\bar D = \{(s,x,w) : 0 \le s \le T,\ x \ge 0,\ 0 \le w \le s\}$. We make the convention that $V(s,x,w) = 0$ for $(s,x,w) \notin \bar D$. We shall frequently carry out our discussion on the following two sets:
$$D := \mathrm{int}\,\bar D = \{(s,x,w) \in \bar D : 0 < s < T,\ x > 0,\ 0 < w < s\}, \qquad D^* := \{(s,x,w) \in \bar D : 0 \le s < T,\ x \ge 0,\ 0 \le w \le s\}. \tag{2.8}$$
We note that $D \subset D^* \subset \bar D$, the closure of $D$, and that $D^*$ does not include the boundary at the terminal time $s = T$.

To end this section we list two technical lemmas that will be useful in our discussion. The proofs of these lemmas are very similar to the Brownian motion case (cf. e.g. [58, Chapter 3]), along the lines of the Monotone Class Theorem and the Regular Conditional Probability Distribution (RCPD); we put them in the Appendix. Let us denote by $D^m_T := D([0,T];\mathbb{R}^m)$ the space of all $\mathbb{R}^m$-valued càdlàg functions on $[0,T]$, endowed with the sup-norm, and $\mathcal{B}^m_T := \mathcal{B}(D^m_T)$, the topological Borel field on $D^m_T$.
Let $\mathbb D^m_t := \{\eta_{\cdot\wedge t} \mid \eta \in \mathbb D^m_T\}$, $\mathcal B^m_t := \mathcal B(\mathbb D^m_t)$, $t \in [0,T]$, and $\mathcal B^m_{t+} := \bigcap_{s>t}\mathcal B^m_s$, $t \in [0,T]$. For a generic Euclidean space $\mathbb X$, we denote by $\mathcal A^m_T(\mathbb X)$ the set of all $\{\mathcal B^m_{t+}\}_{t\ge 0}$-progressively measurable processes $\varphi : [0,T] \times \mathbb D^m_T \to \mathbb X$. That is, for any $\varphi \in \mathcal A^m_T(\mathbb X)$, it holds that $\varphi(t,\eta) = \varphi(t, \eta_{\cdot\wedge t})$, for $t \in [0,T]$ and $\eta \in \mathbb D^m_T$. As usual, we denote $\mathcal A^m_T = \mathcal A^m_T(\mathbb R)$ for simplicity.

Lemma 2.2.3. Let $(\Omega,\mathcal F,\mathbb P)$ be a complete probability space, and let $\xi : \Omega \to \mathbb D^m_T$ be a $\mathbb D^m$-valued process. Let $\mathcal F_t = \sigma\{\xi(s) : 0 \le s \le t\}$. Then a process $\Psi : [0,T]\times\Omega \to \mathbb X$ is $\{\mathcal F_t\}_{t\ge 0}$-adapted if and only if there exists $\varphi \in \mathcal A^m_T(\mathbb X)$ such that $\Psi(t,\omega) = \varphi(t, \xi_{\cdot\wedge t}(\omega))$, $\mathbb P$-a.s. $\omega \in \Omega$, for all $t \in [0,T]$.

Lemma 2.2.4. Let $(s,x,w) \in \mathcal D$ and $\pi = (\gamma,a) \in \mathcal U_{ad}[s,T]$. Then for any stopping time $\tau \in [s, \tau^\pi_s]$, $\mathbb P$-a.s., and any $\mathcal F_\tau$-measurable random variable $(\xi,\eta)$ taking values in $[0,\infty)\times[0,T]$, it holds that

$$J(\tau, \xi(\omega), \eta(\omega); \pi) = \mathbb E\Big[\int_\tau^{\tau^\pi_s \wedge T} e^{-c(t-\tau)} a_t\,dt \,\Big|\, \mathcal F_\tau\Big](\omega), \qquad \text{for } \mathbb P\text{-a.s. } \omega \in \Omega. \tag{2.9}$$

Chapter 3

Basic Properties and Continuity of the Value Function

3.1 Basic Properties of the Value Function

In this section we present some results that characterize the regularity of the value function $V(s,x,w)$. We should note that the presence of the renewal process, and consequently of the extra component $W = \{W_t\}_{t\ge0}$, changes the nature of the dynamics significantly. In fact, even in this simple setting, many well-known properties of the value function become either invalid or much less obvious.

We begin by making some simple but important observations, which will be used throughout the dissertation. First, we note that in the absence of claims (or in between the jumps of $N$), for a given $\pi = (\gamma,a) \in \mathcal U_{ad}[s,T]$, the dynamics of the surplus follow the non-homogeneous linear SDE (2.6) with $Q^{s,w} \equiv 0$, which has the explicit solution (cf. [37, p. 361]):

$$X_t = Z^s_t\Big[X_s + \int_s^t [Z^s_u]^{-1}(p - a_u)\,du\Big], \qquad t \in [s,T], \tag{3.1}$$

where $Z^s_t := \exp\big\{r(t-s) + \int_s^t \gamma_u\sigma\,dB_u - \frac{\sigma^2}{2}\int_s^t |\gamma_u|^2\,du\big\}$.
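As a quick sanity check on (3.1), the following sketch compares the closed form with a naive Euler integration in the special case $\gamma \equiv 0$, constant dividend rate $a$, and no claims, where $Z^s_t = e^{r(t-s)}$ and the stochastic integral vanishes; the parameter values are arbitrary.

```python
import math

def surplus_formula(x, s, t, p, r, a):
    """Closed form (3.1) specialized to gamma = 0 and no claims:
    X_t = Z_t [x + int_s^t Z_u^{-1} (p - a) du], with Z_t = exp(r (t - s))."""
    z = math.exp(r * (t - s))
    return z * x + (p - a) * (z - 1.0) / r

def surplus_euler(x, s, t, p, r, a, n=100000):
    """Euler discretization of dX = (r X + p - a) dt, as a cross-check."""
    dt = (t - s) / n
    for _ in range(n):
        x += (r * x + p - a) * dt
    return x
```

With a fine enough grid, the two computations agree to within the first-order Euler error, confirming that (3.1) indeed solves the claim-free linear dynamics.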
From (2.3) and (3.1) it is clear that, in the absence of claims, the surplus can never become negative unless one overpays the dividend when $X_t = 0$. For example, if we consider only those $\pi \in \mathcal U_{ad}$ such that $(p - a_t)\mathbf 1_{\{X_t = 0\}} \ge 0$, $\mathbb P$-a.s., then $dX_t \ge 0$ whenever $X_t = 0$, which implies that $X_t \ge 0$ holds for all $t \ge 0$. Such an assumption, however, would cause some unnecessary complications for the well-posedness of the SDE (2.3). We shall argue slightly differently. Since it is intuitively clear that dividends should only be paid when the reserve is positive, we suspect that any $\pi \in \mathcal U_{ad}$ for which ruin occurs in between claim times (i.e., is caused by overpaying dividends) can never be optimal. The following result justifies this point.

Lemma 3.1.1. Suppose that $\pi \in \mathcal U_{ad}$ is such that $\mathbb P\{S_i \wedge T < \tau^\pi < S_{i+1} \wedge T\} > 0$ for some $i \in \mathbb N$, where the $S_i$'s are the jump times of $N$. Then there exists $\tilde\pi \in \mathcal U_{ad}$ such that $\mathbb P\{\tau^{\tilde\pi} \in \bigcup_{i=1}^\infty \{S_i\}\} = 1$ and $J(s,x,w;\tilde\pi) > J(s,x,w;\pi)$.

Proof. Without loss of generality we assume $s = w = 0$. We first note from (3.1) that on the set $\{S_i \wedge T < \tau^\pi < S_{i+1} \wedge T\}$ one must have $X_{\tau^\pi} = X_{\tau^\pi-} = 0$, and for some $\delta > 0$, $a_t > p$ for $t \in [\tau^\pi, \tau^\pi + \delta]$. Now define $\tilde\pi_t := \pi_t \mathbf 1_{\{t < \tau^\pi\}} + (0,p)\mathbf 1_{\{t \ge \tau^\pi\}}$, and denote $\tilde X := X^{\tilde\pi}$. Then clearly $\tilde X_t = X_t$ for all $t \in [0,\tau^\pi]$, $\mathbb P$-a.s., and $d\tilde X_t = (p - \tilde a_t)\,dt = 0$ thereafter (note $\tilde X_{\tau^\pi} = 0$). Consequently $\tilde X_t \equiv 0$ for $t \in [\tau^\pi, S_{i+1} \wedge T)$, and $\tilde X_{S_{i+1}} < 0$ on $\{S_{i+1} < T\}$. In other words, $\tau^{\tilde\pi} = S_{i+1}$, and thus

$$J(0,x,0;\tilde\pi) = \mathbb E\Big[\int_0^{\tau^{\tilde\pi}\wedge T} e^{-ct}\tilde a_t\,dt\Big] \ge J(0,x,0;\pi) + \mathbb E\Big[\int_{\tau^\pi}^{S_{i+1}\wedge T} p\,e^{-ct}\,dt \,;\, S_i \wedge T < \tau^\pi < S_{i+1}\wedge T\Big] > J(0,x,0;\pi),$$

since $\mathbb P\{S_i \wedge T < \tau^\pi < S_{i+1}\wedge T\} > 0$, proving the lemma.

We remark that Lemma 3.1.1 amounts to saying that for an optimal policy it is necessary that ruin occurs only at the arrival of a claim. Thus, in the sequel we shall consider a slightly fine-tuned set of admissible strategies:

$$\tilde{\mathcal U}_{ad} := \Big\{ \pi = (\gamma,a) \in \mathcal U_{ad} : X^\pi_{\tau^\pi}\,\mathbf 1_{\{\tau^\pi < T\}} < 0,\ \mathbb P\text{-a.s.} \Big\}. \tag{3.2}$$

The set $\tilde{\mathcal U}_{ad}[s,T]$ is defined similarly for $s \in [0,T]$, and we shall often drop the "$\sim$" for simplicity. We now list some generic properties of the value function.

Proposition 3.1.2.
Assume that Assumption 2.2.1 is in force. Then the value function $V$ enjoys the following properties:

(i) $V(s,x,w)$ is increasing with respect to $x$;

(ii) $V(s,x,w) \le \frac{M}{c}\big(1 - e^{-c(T-s)}\big)$ for any $(s,x,w) \in \mathcal D$, where $M > 0$ is the constant given in Assumption 2.2.1; and

(iii) $\lim_{x\to\infty} V(s,x,w) = \frac{M}{c}\big[1 - e^{-c(T-s)}\big]$, for $0 \le s \le T$, $0 \le w \le s$.

Proof. (i) is obvious, given the form of the solution (3.1), and (ii) follows from the simple estimate $V(s,x,w) \le \int_s^T e^{-c(t-s)} M\,dt = \frac{M}{c}\big[1 - e^{-c(T-s)}\big]$. To see (iii), we consider the simple strategy $\pi^0 := (\gamma,a) \equiv (0,M)$. Then we can write

$$X^{\pi^0;x,w}_t = e^{r(t-s)}x + \frac{p - M}{r}\big(e^{r(t-s)} - 1\big) - \int_s^t e^{r(t-u)}\,dQ^{s,w}_u, \qquad t \in [s,T], \tag{3.3}$$

and it is obvious that $\lim_{x\to\infty} \tau^{\pi^0;x,w}_s \wedge T = T$, $\mathbb P$-a.s. Thus we have

$$V(s,x,w) \ge J(s,x,w;\pi^0) = \mathbb E\Big[\int_s^{\tau^{\pi^0;x,w}_s \wedge T} e^{-c(t-s)}M\,dt\Big] = \frac{M}{c}\,\mathbb E\big[1 - e^{-c(\tau^{\pi^0;x,w}_s \wedge T - s)}\big].$$

By the Bounded Convergence Theorem we have $\lim_{x\to\infty} V(s,x,w) \ge \frac{M}{c}\big(1 - e^{-c(T-s)}\big)$. This, combined with (ii), leads to (iii).

In the rest of this subsection we study the continuity of the value function $V(s,x,w)$ in the temporal variable $s$, for a fixed initial state $(x,w)$. We have the following result.

Proposition 3.1.3. Assume Assumption 2.2.1. Then, for all $(s,x,w), (s+h,x,w) \in \mathcal D$ with $h > 0$, it holds that

(a) $V(s+h,x,w) - V(s,x,w) \le 0$;

(b) $V(s,x,w) - V(s+h,x,w) \le Mh$, where $M > 0$ is the constant in Assumption 2.2.1.

Proof. We note that the main difficulty here is that, for the given $(s,x,w)$, the claim process $Q^{s,w}_t = \sum_{i=1}^{N^{s,w}_t} U_i$, $t \ge 0$, and the "clock" process $W^{s,w} = \{W^{s,w}_t\}_{t\ge0}$ cannot be controlled; thus it is not possible to keep the process $W$ "frozen" at the initial state $w$ during the time interval $[s, s+h]$ by any control strategy. We shall get around this by using a "time shift" to move the initial time to $s = 0$. More precisely, for any $\pi \in \mathcal U^{s,w}_{ad}[s,T]$, we define $\bar\pi_t = (\bar\gamma_t, \bar a_t) := (\gamma_{s+t}, a_{s+t})$, $t \in [0, T-s]$. Then $\bar\pi$ is adapted to the filtration $\bar{\mathbb F}^s := \{\mathcal F_{s+t}\}_{t\ge0}$. We now
consider the optimization problem on the new probability set-up ( ;F;P sw ; F s ; B s ; Q s;w ; W s;w ), where ( B s t ; Q s;w t ; W s;w t ) = (B s s+t ;Q s;w s+t ;W s;w s+t ), t 0. Let us denote the correspond- ing admissible control set by U s;w ad [0;Ts], to emphasize the obvious dependence on the initial state (s;w). Then s 2 U s;w ad [0;Ts], and the corresponding surplus process, denoted by X s , should satisfy the SDE: X s t =x +pt +r Z t 0 X s u du + Z t 0 u X s u d B s u Q s;w t Z t 0 a u du; t 0: (3.4) 28 Since the SDE is obviously pathwisely unique, whence unique in law, we see that the laws off X s t g t0 and that offX s+t g t0 (which satises (2.6)), underP sw , are identical. In other words, if we specify the time duration in the cost functional, then we should have 8 > > > > > > > > > < > > > > > > > > > : J s;T (s;x;w;) :=E sw h Z ^T s e c(ts) a t dtjX s =x i =E sw h Z s ^(Ts) 0 e ct a t dtj X 0 =x i =: J 0;Ts (0;x;w; s ); V (s;x;w) = sup 2 U s;w ad [0;Ts] J 0;Ts (0;x;w; ): (3.5) Similarly, for any 2U ad [s +h;T ], we can nd ^ s+h 2U s+h;w ad [0;Tsh] such that 8 > > > > < > > > > : J s+h;T (s +h;x;w;) = ^ J 0;Tsh (0;x;w; ^ s+h ); V (s +h;x;w) = sup ^ 2 U s+h;w ad [0;Tsh] ^ J 0;Tsh (0;x;w; ^ ): (3.6) Now, for the given ^ 2 U s+h;w ad [0;Tsh] we apply Lemma 2.2.3 to nd 2 A 3 Tsh (R 2 ), such that ^ t =(t; B s+h ^t ; Q s+h;w ^t ; W s+h;w ^t ),t2 [0;Tsh]. We now dene ~ h t :=(t; B s ^t^(Tsh) ; Q s;w ^t^(Tsh) ; W s;w ^t^(Tsh) ); t2 [0;Ts]: 29 Then, ~ h 2 U s;w ad [0;Ts]. Furthermore, since the law of ( B s+h t ; Q s+h;w t ; W s+h;w t ), t2 [0;Tsh], under P (s+h)w , and that of ( B s t ; Q s;w t ; W s;w t ), t2 [0;Ts h], under P sw , are identical, by the pathwise uniqueness (whence uniqueness in law) of the solutions to SDE (2.6), the processesf(X ~ h t ; W s;w t ; ~ h t )g t2[0;Tsh] and f(X ^ t ; W s+h;w t ; ^ t )g t2[0;Tsh] are identical in law. 
Thus J s+h;T (s +h;x;w;) = ^ J 0;Tsh (0;x;w; ^ ) =E 0xw h Z ^ ^(Tsh) 0 e ct ^ a t dtj i E 0xw h Z ^(Ts) 0 e ct a t dt i = J 0;Ts (0;x;w; )V (s;x;w): Since2U ad [s +h;T ] is arbitrary, we obtainV (s +h;x;w)V (s;x;w), proving (a). To prove (b), let 2U ad [s;T ]. For any h2 (0;Ts), we dene h t := th for t2 [s +h;T ]. Then clearly, h 2U s+h;w ad [s +h;T ]. Furthermore, we have J(s;x;w;)J(s +h;x;w; h ) = E sxw h Z s e c(ts) a t dt : Th i +E sxw h Z ^T s e c(ts) a t dt : >Th i E (s+h)xw h Z s+h e c(tsh) a th dt : h T i (3.7) E (s+h)xw h Z T s+h e c(tsh) a th dt : h >T i : 30 By denition of the strategy h , it is easy to check that E sxw h Z s e c(ts) a t dt : Th i = E (s+h)xw h Z s+h e c(tsh) a th dt : h T i E sxw h Z T s e c(ts) a t dt : >Th i = E (s+h)xw h Z T s+h e c(tsh) a th dt : h >T i ; we deduce from (3.7) that J(s;x;w;)J(s +h;x;w; h )E sxw h Z T Th e c(ts) a t dt i Mh: (3.8) Consequently, we have J(s;x;w;) Mh +V (s +h;x;w). Since 2U s;w ad [s;T ] is arbitrary, we obtain (b), proving the proposition. We complete this section with an estimate that is quite useful in our discussion. First note that (3.1) implies that in the absence of claims, the surplus without investment and dividend (i.e., (0; 0)) is X 0;s;x t =e r(ts) [x + p r (1e r(ts) )]. Proposition 3.1.4. Let (s;x;w) 2 D. Then, for any h > 0 such that (s + h;X 0;s;x s+h ;w +h)2D, it holds that V (s +h;X 0;s;x s+h ;w +h)e ch+ R w+h w f(u) F(u) du V (s;x;w): (3.9) Proof. For any "> 0, we choose h;" 2U s+h;w+h ad [s +h;T ] such that J(s +h;X 0;s;x s+h ;w +h; h;" )V (s +h;X 0;s;x s+h ;w +h)": 31 Now dene a new strategy: h;" t = h;" t 1 fT s;w 1 >hg 1 [s+h;T ] (t), t2 [s;T ], where T s;w 1 is the rst jump time of the delayed renewal process N s;w . Then, clearly, h;" 2U s;w ad [s;T ], and X h s+h = X 0;x s+h on the setfT s;w 1 > hg2F s+h . 
Thus, using (2.2), we have

$$V(s,x,w) \ge J(s,x,w;\pi^{h,\varepsilon}) = \mathbb E^{sxw}\Big[\int_{s+h}^{\tau^{h,\varepsilon}\wedge T} e^{-c(t-s)} a^{h,\varepsilon}_t\,dt\ \mathbf 1_{\{T^{s,w}_1 > h\}}\Big] = e^{-ch}\,J\big(s+h, X^{0,s,x}_{s+h}, w+h; \pi^{h,\varepsilon}\big)\,\mathbb P^{sxw}\{T^{s,w}_1 > h\} \ge \big[V(s+h, X^{0,s,x}_{s+h}, w+h) - \varepsilon\big]\,e^{-ch - \int_w^{w+h}\frac{f(u)}{\bar F(u)}du}.$$

Letting $\varepsilon \to 0$ we obtain the result.

We note that a direct consequence of (3.9) is the following inequality:

$$V(s+h, X^{0,s,x}_{s+h}, w+h) - V(s,x,w) \le \Big[e^{ch + \int_w^{w+h}\frac{f(u)}{\bar F(u)}du} - 1\Big]\,V(s,x,w). \tag{3.10}$$

This gives a kind of one-sided continuity of the value function, although it is a far cry from the true joint continuity that we shall study in the next sections.

3.2 Continuity of the value function on x

In this section we investigate the continuity of the value function in the initial surplus $x$. As in all "exit-type" problems, the main subtle point here is that the ruin time $\tau$, which obviously depends on the initial state $x$, is in general not continuous in $x$. We shall borrow the idea of the penalty method (see, e.g., [27]), which we now describe.

To begin with, recall the domain $\mathcal D = \{(s,x,w) : 0 \le s \le T,\ x \ge 0,\ 0 \le w \le s\}$. Let $d(x,w) := (-x)\vee 0$ for $(x,w) \in \mathbb R \times [0,T]$, and for $\pi \in \mathcal U^{s,w}_{ad}[s,T]$ define the penalty function

$$\Upsilon^\pi(t,\varepsilon) = \Upsilon^{\pi;s,x,w}(t,\varepsilon) := \exp\Big\{-\frac{1}{\varepsilon}\int_s^t d\big(X^{\pi;s,x,w}_r, W^{s,w}_r\big)\,dr\Big\}, \qquad t \ge 0. \tag{3.11}$$

Then clearly $\Upsilon^\pi(t,\varepsilon) = 1$ for $t \le \tau^\pi_s$. Thus we have

$$V^\varepsilon(s,x,w) = \sup_{\pi \in \mathcal U_{ad}[s,T]} J^\varepsilon(s,x,w;\pi) := \sup_{\pi \in \mathcal U_{ad}[s,T]} \mathbb E\Big[\int_s^T \Upsilon^{\pi;s,x,w}(t,\varepsilon)\,e^{-c(t-s)}a_t\,dt\Big] \tag{3.12}$$
$$= \sup_{\pi \in \mathcal U_{ad}[s,T]} \mathbb E\Big[\int_s^{\tau^\pi_s} e^{-c(t-s)}a_t\,dt + \int_{\tau^\pi_s}^T \Upsilon^{\pi;s,x,w}(t,\varepsilon)\,e^{-c(t-s)}a_t\,dt\Big] \ge V(s,x,w).$$

We have the following lemma.

Lemma 3.2.1. Let $K \subseteq \mathcal D$ be any compact set. Then the mapping $x \mapsto V^\varepsilon(s,x,w)$ is continuous, uniformly for $(s,x,w) \in K$.

Proof. For $\pi \in \mathcal U^{s,w}_{ad}[s,T]$ and $x_1, x_2 \in [0,\infty)$ we have

$$\mathbb E\big|\Upsilon^{\pi;x_1}(t,\varepsilon) - \Upsilon^{\pi;x_2}(t,\varepsilon)\big| = \mathbb E\Big|\exp\Big\{-\frac1\varepsilon\int_s^t d(X^{\pi;x_1}_r, W_r)\,dr\Big\} - \exp\Big\{-\frac1\varepsilon\int_s^t d(X^{\pi;x_2}_r, W_r)\,dr\Big\}\Big| \tag{3.13}$$
$$\le \frac1\varepsilon\,\mathbb E\int_s^t \big|d(X^{\pi;x_1}_r, W_r) - d(X^{\pi;x_2}_r, W_r)\big|\,dr \le \frac1\varepsilon\int_s^t \mathbb E\big|X^{\pi;x_1}_r - X^{\pi;x_2}_r\big|\,dr \le \frac{\sqrt T}{\varepsilon}\Big(\int_s^t \mathbb E\big|X^{\pi;x_1}_r - X^{\pi;x_2}_r\big|^2\,dr\Big)^{\frac12} \le \frac{C_T}{\varepsilon}\,|x_1 - x_2|.$$

In the above, the last inequality is due to a standard estimate for the SDE (2.3).
Thus, by some standard argument, jV (s;x 1 ;w)V (s;x 2 ;w)j sup 2U ad n E sxw [ Z T s ( ;x 1 (t;) :x 2 (t;))e c(ts) a t dt] o Cjx 1 x 2 j; (3.14) for some constantC, we conclude thatV " is continuous inx. SinceK is compact, the continuity is uniform for (s;x;w)2K. We should point out that, the estimate (3.13) indicates that the continuity of V " (inx), while uniformly on compacta, is not uniform in"(!). Therefore, we are to argue that, as"! 0,V " !V on any compact setKD, and the convergence is uniform in all (s;x;w)2K, which would in particular imply thatV is continuous on D. In other words, we are aiming at the following main result of this section. 34 Theorem 3.2.2. For any compact set K D, the mapping x7! V (s;x;w) is continuous, uniformly for (s;x;w)2 K. In particular, the value function V is continuous in x, for x2 [0;1). To prove Theorem 3.2.2, we shall introduce an intermediate problem. For each > 0, we denote D :=f(s;x;w) : s2 [0;T ];x2 (;1);w2 [0;s]g. Clearly D D 0 for < 0 , and T >0 D = D. For (s;x;w)2 K and 2U ad [s;T ], we denote ; s = ; s;x;w (resp. ;0 s ) to be the exit time of the process (t;X ;s;x;w t ;W s;w t ) from D (resp. D) before T . For notational simplicity we shall write (X ;W ) := (X ;s;x;w ;W s;w ), := ;0 s , and := ; s , when the context is clear. It is worth noting that the function (t;") satises an SDE: (t;") = 1 1 " Z t s d(X r ;W r )(r;")dr; t2 [s;T ]: Thus, together with the underlying process (X ;W ), we see that the optimization problem in (3.12) is a standard stochastic control problem with jumps and xed terminal time T , therefore the standard Dynamic Programming Principle (DPP) holds for V " . To be more precise, for any stopping time ^ 2 [s;T ], it holds that V " (s;x;w) (3.15) = sup 2U ad [s;T ] E sxw n Z ^ s (t;")e c(ts) a t dt +e (^ s) (^ ;")V " (^ ;X ^ ;W ^ ) o : 35 We are now ready to prove Theorem 3.2.2. [Proof of Theorem 3.2.2.] 
We rst note that, for any (s;x;w) 2 K and 2 U ad [s;T ], by DPP (3.15) and the fact (3.12) we have V (s;x;w)V " (s;x;w) = sup 2U ad [s;T ] E sxw f Z s (t;)e c(ts) a t dt +e ( s) ( ;")V " ( ;X ;W )g = sup 2U ad [s;T ] E sxw n Z s e c(ts) a t dt + Z (t;")e c(ts) a t dt (3.16) +e ( s) ( ;")V " ( ;X ;W )g V (s;x;w) +C sup 2U ad [s;T ] E sxw ( ) +h ("); where h (") := E sxw [V " ( ;X ;W )], and C > 0 is a generic constant depend- ing only on the constants in Assumption 2.2.1 and T . We rst argue that sup 2U ad [s;T ] E sxw j j! 0, as ! 0, uniformly in (s;x;w)2K. To see this, rst note that sup 2U s;w ad [s;T ] E sxw j j sup 2U s;w ad [s;T ] TPf6= g, here and in what followsP :=P sxw , if there is no danger of confusion. On the 36 other hand, recall that must happen at a claim arrival time onf6= g, and X t = Q s;w t , it is easy to check that Pf6= g = PfX 2 (X ;X +)g = Z 1 0 P n Q s;w 2 (y;y +) X =y o F X (dy) = Z 1 0 [G(y +)G(y)]F X (dy); where G is the common distribution function of the claim sizes U i 's. Since G is uniformly continuous on [0;1), thanks to Assumption 2.2.1-(b), for any > 0 we can nd 0 > 0, depending only on , such thatjG(y + 0 )G(y)j < 2T , for all y2 [0;1), sup 2U ad [s;T ] E sxw fj jg sup 2U ad [s;T ] T Z 1 0 jG(y + 0 )G(y)jF X (dy)< 2 :(3.17) Plugging (3.17) into (3.16) we obtain that V (s;x;w) V " (s;x;w)V (s;x;w) + 2 +h 0 ("): (3.18) 37 We claim that lim "!0 h 0 (") = 0, and that the limit is uniform in (s;x;w)2 K. To this end, we dene, for the given 2U ad [s;T ], and = 0 , 8 > > < > > : := infft> ;d(X t ;W t )<=2g^T ; c := infft> ;d(X ;;c t ;W t )<=2g^T; (3.19) where X ;;c is the continuous part of X , for t , given X ;;c = X . Since X only has negative jumps, we have X t 0,8t2 [0;T ]. Thus c and d(X ;;c t ;W t ) d(X t ;W t ), for all t2 [s;T ], P-a.s. Furthermore, we note that d(X ;;c t ;W t ) 2 for t2 [ ; c ],P-a.s. 
Now, denotingE [ ] :=E[jF ] and X c =X ;;c we have,P-almost surely, J " ( ;X ;W ;) = E h Z T e 1 " R t d(X r ;Wr )dr e c(t ) a t dt i E h Z T e 1 " R t d(X c r ;Wr )dr e c(t ) a t dt i E h Z T e 1 " Rt^ c d(X c r ;Wr )dr e c(t ) a t dt i E h Z T e 1 " 2 [(t^ c ) ] e c(t ) a t dt i (3.20) ME h Z c e 2" (t ) dt + Z T c e 2" ( c ) dt i ME h Z T e 2" (t ) dt i +ME [e 2" ( c ) ] 4 = A (") +B ("); 38 where A () and B () are dened in an obvious way. Clearly, for xed = 0 , 0A (") 2"M [1e 2" T ]! 0; as "! 0, P-a.s. (3.21) and the limit is uniform in (s;x;w) and 2 U s;w ad [s;T ]. We shall argue that B (")! 0, as "! 0, in the same manner. Indeed, note that X , for > 0 we have P (j c j<) P n sup t + X ;;c t > 2 o P n sup t + [X ;;c t X ]> 2 o 4 2 E n sup t + jX ;;c t X j 2 o C ; (3.22) for some generic constant C > 0 depending only on p, r, , T , M, and . Here we have applied Chebyshev inequality, as well as some standard SDE estimats. Consequently, we derive from (3.22) that sup P (j c j<)C ,P-a.s., and thus for xed, and any> 0, we can nd 0 (;)> 0, such thatP (j c j< 0 )< 2T . Then, B (") = M n E h e 2" ( c ) : c 0 i +E h e 2" ( c ) : c < 0 io M e 2" 0 +P ( c < 0 ) <Me 2" 0 + 2 : (3.23) 39 Therefore, for xed = 0 , one has lim "!0 B (") 2 , P-a.s. This, together with (3.20) and (3.21), then implies that lim "!0 J " ( ;X ;W ;) 2 , uniformly in (s;x;w) 2 K and 2 U ad [s;T ], which in turn implies that, for = 0 , lim "!0 h (") = lim "!0 E sxw [V " ( ;X ;W )] 2 , and the limit is uniformly in (s;x;w)2K. Combining this with (3.17) we derive from (3.18) that V (s;x;w) lim "!0 V " (s;x;w) lim "!0 V " (s;x;w)V (s;x;w) +: Since is arbitrary, we have lim "!0 V " (s;x;w) = V (s;x;w), uniformly in (s;x;w)2K. Finally, note thatV " is continuous inx, uniformly in (s;x;w)2K, thanks to Lemma 3.2.1, thus so is V . In particular, V is continuous in x for x2 [0;k], for all k> 0, proving the Theorem. 
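The behavior of the penalty factor $\Upsilon$ of (3.11), which drove the proof above, can also be seen numerically: along a sampled trajectory, $\Upsilon(t,\varepsilon) = 1$ as long as the path stays nonnegative, and it collapses toward $0$ (the faster, the smaller $\varepsilon$) once the path spends time below zero. The discretization below is an illustrative sketch, not part of the thesis.

```python
import math

def penalty(path, dt, eps):
    """Discretized penalty factor (3.11):
    Upsilon(t, eps) = exp(-(1/eps) * int_s^t d(X_r, W_r) dr),
    with d(x, w) = max(-x, 0).  `path` is a sampled surplus trajectory
    on a grid of mesh dt; returns Upsilon evaluated at each grid point."""
    integral = 0.0
    out = []
    for x in path:
        integral += max(-x, 0.0) * dt   # accumulates only below zero
        out.append(math.exp(-integral / eps))
    return out
```

A nonnegative path yields a constant penalty of $1$, while on a path that dips below zero the terminal penalty decreases monotonically as $\varepsilon \downarrow 0$, which is exactly the mechanism behind $V^\varepsilon \to V$.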
3.3 Continuity of the value function on w

We now turn our attention to the continuity of the value function $V$ in the variable $w$. We should note that this is the most technical part of the thesis, as it involves the study of the delayed renewal process, which has not been fully explored in the literature. We begin with the following proposition, which extends the result of Proposition 3.1.3. Recall the intensity of the interclaim times $T_i$: $\lambda(t) = \frac{f(t)}{\bar F(t)}$, $t \ge 0$.

Proposition 3.3.1. Assume that Assumption 2.2.1 is in force. Then, for any $h > 0$ such that $0 \le s < s+h < T$, it holds that

(i) $V(s+h, x, w+h) - V(s,x,w) \le \Big[1 - e^{-(ch + \int_w^{w+h}\lambda(u)du)}\Big]\,V(s+h, x, w+h)$;

(ii) $V(s, x, w+h) - V(s,x,w) \le Mh + \Big[1 - e^{-(ch + \int_w^{w+h}\lambda(u)du)}\Big]\,V(s+h, x, w+h)$.

Proof. (i) For any $\pi = (\gamma,a) \in \mathcal U^{s+h,w+h}_{ad}[s+h,T]$, we define, for $t \in [s,T]$, $\tilde\pi^h_t = (\tilde\gamma_t, \tilde a_t)$ by

$$(\tilde\gamma_t, \tilde a_t) = \big(0, (p + rX^h_t)\wedge M\big) + \big[(\gamma_t, a_t) - \big(0, (p + rX^h_t)\wedge M\big)\big]\,\mathbf 1_{\{T^{s,w}_1 > h\}}\,\mathbf 1_{[s+h,T]}(t), \tag{3.24}$$

where $T^{s,w}_1$ is the first jump time of the delayed renewal process $N^{s,w}$, and $X^h := X^{\tilde\pi^h;s,x,w}$. Since $T^{s,w}_1$ is an $\{\mathcal F^s_t\}_{t\ge0} = \{\mathcal F_{s+t}\}_{t\ge0}$-stopping time, it is clear that $\tilde\pi^h \in \mathcal U^{s,w}_{ad}[s,T]$. Let us denote $\tau^h := \tau^{\tilde\pi^h}_{s,x,w}$ and consider the following two cases.

Case 1: $x \le \frac{M-p}{r}$. In this case, for $s \le t < s + T^{s,w}_1$, we have $X^h_t \le x$ and $\tilde a_t \le p + rx \le M$. In particular, we note that by the definition of $\tilde\pi^h$, given $T^{s,w}_1 > h$ it must hold that $X^h_{s+h} = x$, $W^{s,w}_{s+h} = w+h$, and $T^{s+h,w+h}_1 = T^{s,w}_1$, $\mathbb P^{sxw}$-a.s. Thus

$$V(s,x,w) \ge J(s,x,w;\tilde\pi^h) \ge \mathbb E^{sxw}\Big[\int_s^{\tau^h\wedge T} e^{-c(t-s)}\tilde a_t\,dt \,\Big|\, T^{s,w}_1 > h\Big]\,\mathbb P^{sxw}\{T^{s,w}_1 > h\}$$
$$\ge e^{-\int_w^{w+h}\lambda(u)du}\,\mathbb E^{sxw}\Big[\int_{s+h}^{\tau^h\wedge T} e^{-c(t-s)}\tilde a_t\,dt \,\Big|\, T^{s,w}_1 > h\Big] \tag{3.25}$$
$$= e^{-(ch + \int_w^{w+h}\lambda(u)du)}\,\mathbb E^{(s+h)x(w+h)}\Big[\int_{s+h}^{\tau^\pi\wedge T} e^{-c(t-s-h)}a_t\,dt\Big] = e^{-(ch + \int_w^{w+h}\lambda(u)du)}\,J(s+h, x, w+h; \pi).$$

Since $\pi \in \mathcal U_{ad}[s+h,T]$ is arbitrary, we obtain $V(s,x,w) \ge e^{-(ch + \int_w^{w+h}\lambda(u)du)}\,V(s+h, x, w+h)$, which, by an argument similar to the one that led to (3.10), implies (i).

Case 2: $x > \frac{M-p}{r}$. In this case we have $\tilde a_s = M < p + rx = p + rX^h_s$; thus, by (3.1), $dX^h_s > 0$.
Namely, on the setfT s;w 1 > hg, X h will be continuous and increasing, so that X h s+h =e rh x + pM r (1e rh ) =:x(h) (see (3.3)). Thus, noting that W s;w s+h = w +h and T s+h;w+h 1 = T s;w 1 onfT s;w 1 > hg, a similar argument as (3.25) would lead to that V (s;x;w)J(s;x;w; ~ h )e (ch+ R w+h w (u)du) V (s +h;x(h);w +h): Now note thatx(h)>x, it follows from Proposition 3.1.2-(a) thatV (s+h;x(h);w+ h))V (s +h;x;w +h), proving (a) again. Finally, (ii) follows from (i) and Proposition 3.1.3-(b). This completes the proof. The next result concerns the uniform continuity of V on the variables (s;w). We have the following result. Proposition 3.3.2. Assume that Assumption 2.2.1 is in force. Then, it holds that lim h# 0 [V (s +h;x;w +h)V (s;x;w)] = 0; uniformly in (s;x;w)2D. 42 Proof. From Proposition 3.3.1-(i) and the boundedness of V we see that lim h# 0 [V (s +h;x;w +h)V (s;x;w)] 0; uniformly in (s;x;w)2D. (3.26) We need only prove the opposite inequality. We shall keep all the notations as in the previous proposition. For any h2 (0;Ts), and = ( t ;a t )2U ad [s;T ], we still consider the strategy ~ h 2 U s;w ad [s;T ] dened by (3.24). (Note that ~ h depends on only for t2 [s +h;T ].) We again consider two cases, and denote 1 :=T s;w 1 for simplicity. Case 1. x Mp r . In this case, we rst write J(s;x;w; ~ h ) = E sxw h Z s+h s e c(ts) ~ a t dt 1 >h i P( 1 >h) +E sxw h Z h ^T s+h e c(ts) ~ a t dt 1 >h i P( 1 >h) (3.27) +E sxw h Z h ^T s e c(ts) ~ a t dt 1 h i P( 1 h) :=I 1 +I 2 +I 3 ; where I 1 ;I 2 and I 3 are dened as the three terms on the right hand side above, respectively. It is easy to see, by (3.24), that on the setf 1 >hg, ~ 0, X h t =x, and ~ a t =p +rxM for t2 [s;s +h], thus I 1 = e R w+h w (u)du E sxw h Z s+h s e c(ts) (p +rX h t )dtj 1 >h i (p +rx)h; (3.28) I 2 e ch R w+h w (u)du V (s +h;x;w +h)V (s +h;x;w +h): 43 To estimate I 3 , we rst note that on the setf 1 hg, by (3.24), ~ t 0, for all t2 [s;T ]. Thus X h t = x and ~ a t = p +rx for t2 [s;s + 1 ). 
We also note that h s + 1 andf h >s + 1 g =fU 1 xg. Bearing these in mind we now write I 3 =E sxw h Z s+ 1 s + Z h ^T s+ 1 e c(ts) ~ a t dt : 1 h i :=I 1 3 +I 2 3 ; (3.29) where I 1 3 and I 2 3 are dened in an obvious way. For simplicity let us denote the density function of T s;w 1 by p 1 (z) = (w +z)e R w+z w (v)dv , z 0. Clearly, given 1 h we have I 1 3 = Z h 0 E sxw h Z s+ 1 s e c(ts) (p +rX h t )dtj 1 =z i p 1 (z)dz = Z h 0 h Z s+z s e c(ts) (p +rx)dt i p 1 (z)dz (3.30) Z s+h s e c(ts) (p +rx)dt(1e R w+h w (v)dv ) (1e R w+h w (v)dv )(p +rx)h: Further, we note that (X h s+ 1 ;W s;w s+ 1 ) = (xU 1 ; 0),P-a.s., thus I 2 3 = Z h 0 E sxw h Z h ^T s+z e c(ts) (p +rX h t )dt1 f h >s+zg j 1 =z i p 1 (z)dz (3.31) = Z h 0 Z x 0 E sxw [ Z h ^T s+z e c(ts) (p +rX h t )dtj 1 =z;U 1 =u]p 1 (z)dG(u)dz Z h 0 Z x 0 e cz V (s +z;xu; 0)p 1 (z)dG(u)dz M c (1e R w+h w (v)dv ): 44 Here the last inequality is due to Proposition 3.1.2-(ii). Now, combining (3.30) and (3.31) we have I 3 (1e R w+h w (v)dv )((p +rx)h + M c ); (3.32) and consequently we obtain from (3.27)-(3.32) that, for x< Mp r , J(s;x;w; ~ h ) (p +rx)h +V (s +h;x;w +h) +(1e R w+h w (v)dv )((p +rx)h +M=c): (3.33) Case 2. x Mp r . In this case, using the strategy ~ h as in (3.24) with a similar argument as in Case 1 we can derive that J(s;x;w; ~ h ) Mh +V (s +h;e rh (x + pM r (1e rh ));w +h) +(1e R w+h w (v)dv )(M(h + 1 c )): (3.34) To complete the proof we are to replace the left hand side of (3.33) and (3.34) by J(s;x;w;), which would lead to the desired inequality, as 2U ad [s;T ] is arbitrary. To this end we shall argue along a similar line as those in the previous section. 
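The density $p_1(z) = \lambda(w+z)e^{-\int_w^{w+z}\lambda(v)dv}$ used above, and the corresponding survival probability $\mathbb P\{T^{s,w}_1 > h\} = e^{-\int_w^{w+h}\lambda(u)du}$, can be checked numerically: for any absolutely continuous interclaim law, $e^{-\int_w^{w+h}\lambda}$ must agree with the conditional survival $\bar F(w+h)/\bar F(w)$. The sketch below verifies this identity for a Weibull interclaim distribution, an arbitrary test case not taken from the thesis.

```python
import math

def survival_from_intensity(lam, w, h, n=2000):
    """P{T_1^{s,w} > h} = exp(-int_w^{w+h} lam(u) du), with the integral
    evaluated by the composite trapezoidal rule; lam = f / (1 - F)."""
    du = h / n
    integral = 0.5 * (lam(w) + lam(w + h)) * du
    integral += sum(lam(w + i * du) for i in range(1, n)) * du
    return math.exp(-integral)

# Weibull interclaim times (illustrative): Fbar(t) = exp(-(t/b)^k),
# so lam(t) = (k/b) (t/b)^(k-1).
k, b = 1.5, 2.0
lam = lambda t: (k / b) * (t / b) ** (k - 1)
fbar = lambda t: math.exp(-((t / b) ** k))
```

Here `survival_from_intensity(lam, w, h)` should coincide, up to quadrature error, with `fbar(w + h) / fbar(w)`, which is precisely the delayed-renewal survival probability entering (3.25) and the density $p_1$.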
45 Recall the penalty function ;s (t;") := ;s;x;w (t;") dened by (3.11), and dene J " (s;w;x;) =E swx h Z T s ;s (t;")e c(ts) a t dt i : We rst write J " (s;x;w;)J " (s;x;w; ~ h ) E sxw Z s+h s e c(ts) [ ;s (t;")a t ~ h ;s (t;") ~ a t ]dt (3.35) +E sxw Z T s+h e c(ts) [ ;s (t;")a t ~ h ;s (t;")~ a t ]dt :=I 1 +I 2 It is easy to see that I 1 < 2Mh, thanks to Assumption 2.2.1. We shall estimate I 2 . Note that I 2 = E sxw n Z T s+h e c(ts) ( ;s (t;") ~ h ;s (t;"))a t dt 1 >h o P( 1 >h) +E sxw n Z T s+h e c(ts) [ ;s (t;")a t ~ h ;s (t;"))~ a t ]dt 1 h o P( 1 h) ,I 1 2 +I 2 2 : (3.36) Since X t 0, X h t 0 for t s +h on the set 1 > h (i.e., ruin occurs only at arrival of a claim), we have d(X t ;W t ) = d(X h t ;W t ) = 0 for t2 [s;s +h], i.e., 46 ;s (t;") = ;s+h (t;"), ~ h ;s (t;") = ~ h ;s+h (t;"), for t2 [s +h;T ]. Thus, by the similar arguments as in Lemma 3.2.1 one shows that I 1 2 = E sxw n Z T s+h ( ;s+h (";t) ~ h ;s+h (";t))e c(ts) a t dt 1 >h o P( 1 >h) CE sxw jX s+h X h s+h j; (3.37) whereC > 0 is a generic constant depending only on " andT . Furthermore, since P( 1 h) = (1e R w+h w (v)dv ) = O(h), we have I 2 2 = O(h). It then follows from (3.36) and (3.37) thatI 2 CE sxw jX s+h X h s+h j+O(h). The standard result of SDE then leads to lim h!0 I 2 = 0, whence lim h!0 jJ " (s;x;w;)J " (s;x;w; ~ h )j = 0, and the convergence is obviously uniform for (s;x;w)2D and 2U s;w ad [s;T ]. 
To complete the proof we note that, with exactly the same argument as that in Theorem 3.2.2 one shows that, for any > 0, there exists " 0 > 0, such that jJ " 0 (s;x;w;)J(s;x;w;)j+jJ " 0 (s;x;w; ~ h )J(s;x;w; ~ h )j<; 8(s;x;w)2D: Then, for the xed " 0 , we choose h 0 > 0, independent of 2U s;w ad [s;T ] such that jJ " 0 (s;x;w;)J " 0 (s;x;w; ~ h )j<; 8(s;x;w)2D; 80<h<h 0 : 47 Thus, if x< Mp r , for all 0<h<h 0 , we derive from (3.33) that J(s;x;w;)V (s +h;x;w +h) jJ(s;x;w;)J " 0 (s;x;w;)j +jJ " 0 (s;x;w;)J " 0 (s;x;w; ~ h )j +jJ " 0 (s;x;w; ~ h )J(s;x;w; ~ h )j +J(s;x;w; ~ h )V (s +h;x;w +h) 2 + (p +rx)h + (1e R w+h w (v)dv )((p +rx)h +M=c) 2 +g 1 (h): where g 1 (h) := Mh + (1 e R w+h w (v)dv )(Mh + M=c). Since 2 U ad [s;T ] is arbitrary, we have V (s;x;w)V (s +h;x;w +h) 2 +g 1 (h): (3.38) First sending h! 0 and then ! 0 we obtain the desired opposite inequality of (3.26). The case for x Mp r can be argued similarly. We apply (3.34) to get the analogue of (3.38): V (s;x;w)V (s +h;x;w +h) (3.39) 2 +g 1 (h) +V (s +h;e rh (x + pM r (1e rh ));w +h)V (s +h;x;w +h): 48 For xed x Mp r , by rst sending h! 0 and then ! 0, we have lim h# 0 [V (s +h;x;w +h)V (s;x;w)] 0: (3.40) thanks to the uniformly continuity V (s;x;w) in x (uniformly in (s;w)). This, together with (3.26), yields that, for given x 0, lim h# 0 [V (s +h;x;w +h)V (s;x;w)] = 0; uniformly in (s;w). (3.41) Then, combining (3.41) and Proposition 3.3.1, one shows that V (s;x;w) is con- tinuous in (s;w) for xed x. It remains to argue that (3.41) holds uniformly in (s;x;w)2D. To this end, we note that, by Proposition 3.1.2 and Theorem 3.2.2,V (s;x;w) is increasing in x, continuous in (s;w), and with a continuous limit function M c (1 e (Ts) ) (in (s;w)). Thus V (s;x;w) converges uniformly to M c (1e (Ts) ) as x!1, uniformly in (s;w), thanks to Dini's Theorem. 
That is, for > 0, there exists N =N()> Mp r , such that V (s +h;e rh (x + pM r (1e rh ));w +h)V (s +h;x;w +h)<; x>N: 49 On the other hand, for Mp r x N, by Theorem 3.2.2, there exists () = (N())> 0, such that for h<(N), it holds that V (s +h;e rh (x + pM r (1e rh ));w +h)V (s +h;x;w +h)<: Thus, we see from (3.39) that for all (s;x;w)2D, and x Mp r , V (s;x;w)V (s +h;x;w +h) 4; whenever h<. Combining this with the case x< Mp r argued previously, we see that lim h# 0 [V (s +h;x;w +h)V (s;x;w)] 0; uniformly in (s;x;w)2D, proving the opposite inequality of (3.26), whence the proposition. Combining Theorems 3.1.3 and 3.3.1, we have proved the following theorem. Theorem 3.3.3. Assume that Assumption 2.2.1 is in force. Then, the value func- tion V (s;x;w) is uniformly continuous in w, uniformly on (s;x;w)2D. 50 Chapter 4 Dynamic Programming Principle and the HJB Equation 4.1 Dynamic Programming Principle In this section we shall substantiate the Bellman Dynamic Programming Principle (DPP) for our optimization problem. We begin with a simple but important lemma. Lemma 4.1.1. For any " > 0, there exists > 0, independent of (s;x;w)2 D, such that for any 2U s;w ad [s;T ] and h := (h 1 ;h 2 ) with 0h 1 ;h 2 <, we can nd ^ h 2U s;wh 2 ad [s;T ] such that J(s;x;w;)J(s;xh 1 ;wh 2 ; ^ h )"; 8(s;x;w)2D: (4.1) Moreover, the construction of ^ h is independent of (s;x;w). 51 Proof. Let = ( ;a) 2 U s;w ad [s;T ]. 
For any h = (h 1 ;h 2 ) 2 [0;1) 2 , we consider the following two modied strategies in the form of (3.24): denoting (x) := (p +rx)^M, 8 > > > > > > > > > > > > < > > > > > > > > > > > > : ~ h t := (~ h t ; ~ a h t ) = (0;( ~ X h t )) + [( t ;a t ) (0;( ~ X h t )]1 f~ h 1 >h 2 g 1 [s;T ] (t); t2 [sh 2 ;T ]; ^ h t := (^ h t ; ^ a h t ) = (0;( ^ X h t )) + [( th 2 ;a th 2 ) (0;( ^ X h t ))]1 f^ h 1 >h 2 g 1 [s+h 2 ;T ] (t); t2 [s;T ]: (4.2) where, for notational simplicity, we denote ~ h 1 :=T sh 2 ;wh 2 1 ; ^ h 1 :=T s;wh 2 1 ; ~ X h := X ~ h ;sh 2 ;x;wh 2 ; and ^ X h := X ^ h ;s;x;wh 2 . Clearly, ~ h 2U sh 2 ;wh 2 ad [sh 2 ;T ] and ^ h 2U s;wh 2 ad [s;T ], and it holds that J(s;x;w;)J(s;xh 1 ;wh 2 ; ^ h ) [J(s;x;w;)J(sh 2 ;x;wh 2 ; ~ h )] + [J(sh 2 ;x;wh 2 ; ~ h )J(s;x;wh 2 ; ^ h )] +[J(s;x;wh 2 ; ^ h )J(s;xh 1 ;wh 2 ; ^ h )] :=J 1 +J 2 +J 3 : 52 We shall estimate J i 's separately. First, by (3.25), we have J 1 = J(s;x;w;)J(sh 2 ;x;wh 2 ; ~ h ) [1e (ch 2 + R w wh 2 (u)du) J(s;x;w;) M c [1e (ch 2 + R w wh 2 (u)du) : (4.3) Next, we observe from denition (4.2) that the law of ~ X h on [sh 2 ;Th 2 ] and that of ^ X h on [s;T ] are identical. We have J 2 = J(sh 2 ;x;wh 2 ; ~ h )J(s;x;wh 2 ; ^ h ) = E (sh 2 )x(wh 2 ) h Z ~ h ^T sh 2 e c(ts+h 2 ) ~ a h t dt i E sx(wh 2 ) h Z ^ h ^T s e c(ts) ^ a h t dt i = e ch 2 E (sh 2 )x(wh 2 ) h Z ~ h ^(Th 2 ) sh 2 e c(ts) ~ a h t dt i (4.4) E sx(wh 2 ) h Z ^ h ^T s e c(ts) ^ a h t dt i +E (sh 2 )x(wh 2 ) h Z ~ h ^T ~ h ^(Th 2 ) e c(ts+h 2 ) ~ a h t dt i :=e ch 2 J 1 2 J 2 2 +J 3 2 ; whereJ i 2 ,i = 1; 2; 3 are the three expectations on the right side, respectively. Note that by denition of the ^ h and ~ h , it is easy to check that J 1 2 = J 2 2 . Thus (4.4) becomes J 2 J 3 2 =E (sh 2 )x(wh 2 ) h Z ~ h ^T ~ h ^(Th 2 ) e c(ts+h 2 ) ~ a h t dt i Mh 2 : (4.5) 53 Finally, from the proofs of Theorem 3.2.2 and Lemma 3.2.1, we see that the mapping x7! J(s;x;w;) is continuous in x, uniformly for (s;x;w)2 D and 2U ad [s;T ]. 
Therefore, for any " > 0, we can nd > 0, depending only on ", such that, for 0<h 1 <, it holds that J 3 =J(s;x;wh 2 ; ^ h )J(s;xh 1 ;wh 2 ; ^ h )<"=3; 8h 2 2 (0;w). We can then assume that is small enough, so that for h 2 < , it holds that J 1 <"=3, J 2 <"=3, uniformly in (s;x;w)2D and 2U ad [s;T ], thanks to (4.3) and (4.5). Consequently, we have J(s;x;w;)J(s;xh 1 ;wh 2 ; ^ h )J 1 +J 2 +J 2 <"; proving (4.1), whence the lemma. We are now ready to prove the rst main result of this thesis: the Bellman Prin- ciple of Optimality or Dynamic Programming Principle (DPP). Recall that for a given2U ad [s;T ] and (s;x;w)2D, we denoteR t =R ;s;x;w t = (t;X ;s;x;w t ;W s;w t ), t2 [s;T ]. 54 Theorem 4.1.2. Assume that Assumption 2.2.1 is in force. Then, for any (s;x;w)2D and any stopping time 2 [s;T ], it holds that V (s;x;w) = sup 2U ad [s;T ] E sxw h Z ^ s e c(ts) a t dt +e c(^ s) V (R ^ ) i : (4.6) Proof. The idea of the proof is more or less standard. We shall rst argue that (4.6) holds for deterministic =s +h, for h2 (0;Ts). That is, denoting v(s;x;w;s +h) : = sup 2U ad [s;T ] E sxw h Z (s+h)^ s e c(ts) a t dt +e c((s+h)^ s) V (R (s+h)^ ) i ; we are to show that V (s;x;w) = v(s;x;w;s +h). To this end, let = ( ;a)2 U ad [s;T ], and write J(s;x;w;) (4.7) = E sxw h Z (s+h)^ s e c(ts) a t dt i +E sxw h Z s+h e c(ts) a t dt : >s +h i : 55 Now applying Lemma 2.2.4 we see that the second term on the right hand side of (4.7) becomes E sxw h Z s+h e c(ts) a t dt : >s +h i = e ch E sxw h E h Z s+h e c(t(s+h)) a t dt F s s+h i : >s +h i = e ch E sxw h J (s +h;X s+h ;W ; s+h ) : >s +h i e ch E sxw h V (R s+h ) : >s +h i E sxw h e c((s+h)^ s) V (R (s+h)^ ) i : Plugging this into (4.7) and taking supremum on both sides above we obtain that V (s;x;w)v(s;x;w;s +h). The proof of the reversed inequality is slightly more involved, as usual. To begin with, we recall Lemma 4.1.1. For any " > 0, let > 0 be the constant in Lemma 4.1.1. 
Next, let 0 = x 0 < x 1 < and 0 = w 0 < w 1 < < w n = T be a partition of [0;1) [0;T ], so that x i+1 x i < w j+1 w j < . Denote D ij := [x i1 ;x i ) [w j1 ;w j ), i;j 2 N. For 0 s < s +h < T , i2 N, and 0jn we choose ij 2U s+h;w j ad [s +h;T ] such that J(s +h;x i ;w j ; ij )>V (s +h;x i ;w j )": 56 Now applying Lemma 4:1:1, for each (x;w)2D ij and ij 2U s+h;w j ad [s +h;T ], we can dene strategy ^ ij = ^ ij (x;w)2U s+h;w ad [s +h;T ], such that J(s +h;x;w; ^ ij ) J(s +h;x i ;w j ; ij )" V (s +h;x i ;w j ) 2"V (s +h;x;w) 3": (4.8) In the above the last inequality is due to the uniform continuity of V on the variables (x;w). Now for any 2U s;w ad [s;T ], we dene a new strategy as follows: t = t 1 [s;s+h) (t) + 1 X i=0 n1 X j=0 ^ ij t (X s+h ;W s+h )1 D ij (X s+h ;W s+h )1 [s+h;T ] (t): Then one can check that 2U s;w ad [s;T ], andf s +hg =f s +hg. Furthermore, when >s +h we have J(s +h;X s+h ;W s+h ; )V (s +h;X s+h ;W s+h ) 3"; P-a.s. onf >s +hg; 57 thanks to (4.8). Consequently, similar to (4.7) we have V (s;x;w)J(s;x;w; ) (4.9) = E sxw h Z (s+h)^ s e c(ts) a t dt + 1 f >s+hg e ch Z ^T s+h e c(t(s+h)) a t dt i = E sxw h Z (s+h)^ s e c(ts) a t dt + 1 f >s+hg e ch J(s +h;X s+h ;W s+h ; ) i E sxw h Z (s+h)^ s e c(ts) a t dt +e c((s+h)^ s) V (R (s+h)^ ) i 3": Here in the last inequality we used the fact that 1 f s+hg V (R (s+h)^ ) = 1 f s+hg V (R ) = 0. Since is arbitrary, (4.9) implies V (s;x;w)v(s;x;w;s + h) 3". Since " > 0 is arbitrary, we obtain that V (s;x;w) v(s;x;w;s +h), proving (4.6) for =s +h. We now consider the general case when s < < T is a stopping time. Let s = t 0 < t 1 < < t n = T be a partition of [s;T ]. We assume that t k := s + k n (Ts), k = 0; 1; ;n. Dene n := P n1 k=0 t k 1 [t k ;t k+1 ) (). Clearly, n takes only a nite number of values and n ! , P-a.s. It is easy to check, using the same argument above when is deterministic to each subinterval [s;T ], that V (s;x;w)v(s;x;w; n ). 
We shall prove by induction (on n) that V (s;x;w)v(s;x;w; n ); 8n 1: (4.10) 58 Indeed, for n = 1, we have 1 s, so there is nothing to prove. Now suppose that (4.10) holds for n1 , and n 2. We shall argue that (4.10) holds for n as well. For any 2U s;w ad [s;T ] we have E sxw n Z n^ s e c(ts) a t dt +e c(n^ s) V (R n^ ) o = E sxw n 1 f t 1 g Z s e c(ts) a t dt o (4.11) +E sxw nh Z n^ s e c(ts) a t dt +e c(n^ s) V (R n^ ) i 1 fn>t 1 g 1 f >t 1 g + h Z t 1 s e c(ts) a t dt +e c(t 1 s) V (R t 1 ) i 1 fn=t 1 g 1 f >t 1 g o : Note that on the setf n > t 1 g, n takes only n 1 values, by inductional hypothesis, we have E sxw nh Z n^ t 1 e c(ts) a t dt +e c(n^ s) V (R n^ ) i 1 fn>t 1 g 1 f >t 1 g o E sxw n e c(t 1 s) v(t 1 ;X t 1 ;W t 1 ; n )1 fn>t 1 g 1 f >t 1 g o E sxw n e c(t 1 s) V (R t 1 )1 fn>t 1 g 1 f >t 1 g o : 59 Plugging this into (4.11) we obtain E sxw n Z n^ s e c(ts) a t dt +e c(n^ s) V (R n^ ) o E sxw n 1 f t 1 g Z s e c(ts) a t dt o +E sxw nh Z t 1 s e c(ts) a t dt +e c(t 1 s) V (R t 1 ) i 1 fn>t 1 g 1 f >t 1 g + h Z t 1 s e c(ts) a t dt +e c(t 1 s) V (R t 1 ) i 1 fn=t 1 g 1 f >t 1 g o = E sxw n 1 f t 1 g Z s e c(ts) a t dt o +E sxw n 1 f >t 1 g h e c(t 1 s) V (R t 1 ) + Z t 1 s e c(ts) a t dt io = E sxw n Z t 1 ^ s e c(ts) a t dt +e c(t 1 ^ s) V (R t 1 ^ ) o V (s;x;w): In the above we again used the fact V (R ) = 0, and the last inequality is due to (4.6) for xed time t 1 =s +h. Consequently we obtain v(s;x;w; n )V (s;x;w), whencev(s;x;w; n ) =V (s;x;w). A simple application of Dominated Convergence Theorem, together with the uniform continuity of the value function, will then leads to the general form of (4.6). The proof is now complete. 4.2 The Hamilton-Jacobi-Bellman equation. We are now ready to investigate the main subject of the thesis: the Hamilton- Jacobi-Bellman (HJB) equation associated to our optimization problem (2.7). 
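Before giving the rigorous viscosity-solution treatment, it may help to record the standard heuristic that links the DPP (4.6) to the PDE derived below: take τ ≡ s + h deterministic, suppose V is smooth, apply Itô's formula to e^{−c(t−s)}V(R_t), divide by h, and let h ↓ 0. This yields, at least formally (a sketch only, using the coefficients of the state dynamics; the precise statement is the viscosity formulation developed in this section):

```latex
0 \;=\; \sup_{\pi\in[0,1],\,a\in[0,M]} \Big\{\, a - cV + V_s + V_w
    + (p + rx - a)\,V_x + \tfrac{1}{2}\sigma^2\pi^2 x^2\, V_{xx}
    + \lambda(w)\Big[\int_0^x V(s,\,x-u,\,0)\,dG(u) - V(s,x,w)\Big] \Big\}.
```

This is exactly the equation {V_s + L[V]} = 0 written out term by term with the Hamiltonian defined below.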
We note that such a PDE characterization of the value function is only possible after the clock process W is brought into the picture. Recall the sets D ⊂ D̄, D* defined in (2.8). Next, we denote by C^{1,2,1}_0(D) the set of all functions φ ∈ C^{1,2,1}(D) such that, for ψ = φ, φ_t, φ_x, φ_xx, φ_w, it holds that lim_{(t,y,v)→(s,x,w), (t,y,v)∈D} ψ(t,y,v) = ψ(s,x,w) for all (s,x,w) ∈ D̄, and φ(s,x,w) = 0 for (s,x,w) ∉ D. We note that while a function φ ∈ C^{1,2,1}_0(D) is well-defined on D̄, it is not necessarily continuous on the boundaries {(s,x,w) : x = 0 or w = 0 or w = s}. Next, we define the following function:

H(s,x,w,u,ρ,A,z,π,a) := (1/2)σ²π²x²A + (p + rx − a)ρ₁ + ρ₂ + λ(w)z + (a − cu),

where ρ = (ρ₁,ρ₂) ∈ ℝ², u, A, z ∈ ℝ, and (π,a) ∈ [0,1]×[0,M]. For φ ∈ C^{1,2,1}_0(D), we define the following Hamiltonian:

𝓗(s,x,w,φ,φ_x,φ_w,φ_xx,π,a) := H(s,x,w,φ,∇φ,φ_xx,I[φ],π,a), (4.12)

where ∇φ := (φ_x,φ_w) and I[φ] is the integral operator defined by

I[φ] := ∫₀^∞ [φ(s,x−u,0) − φ(s,x,w)] dG(u) = ∫₀^x φ(s,x−u,0) dG(u) − φ(s,x,w). (4.13)

Here the last equality is due to the fact that φ(s,x,w) = 0 for x < 0. The main purpose of this section is to show that the value function V is a viscosity solution of the following HJB equation:

{V_s + L[V]}(s,x,w) = 0, (s,x,w) ∈ D; V(T,x,w) = 0, (4.14)

where L[·] is the second-order partial integro-differential operator: for φ ∈ C^{1,2,1}_0(D),

L[φ](s,x,w) := sup_{π∈[0,1], a∈[0,M]} 𝓗(s,x,w,φ,φ_x,φ_w,φ_xx,π,a). (4.15)

Remark 4.2.1. (i) As we pointed out before, even a classical solution to the HJB equation (4.14) may have discontinuities on the boundaries {x = 0}, {w = 0}, or {w = s}, and (4.14) only specifies the boundary value of D at s = T.

(ii) To guarantee well-posedness, we shall consider constrained viscosity solutions (cf. e.g., [57]), for which the following observation is crucial. Let V ∈ C^{1,2,1}_0(D) be a classical solution so that (4.14) holds on D*. Consider the point (s,0,w) ∈ ∂*D.
Let '2 C 1;2;1 0 (D) be such that 0 = [V '](s; 0;w) = max (t;y;v)2D [V'](t;y;v). Then one must have (@ t ;r)(V')(s; 0;w) = for some > 0, wherer = (@ x ;@ w ) and is the outward normal vector ofD at the boundaryfx = 0g (i.e., = (0;1; 0)), andI[V'](s; 0;w) =[V'](s; 0;w) = 62 0 since [V'](s;y;w) = 0 for y 0. Thus, for any ( ;a)2 [0; 1] [0;M] we obtain that [' s +H (;';' x ;' w ;' xx ; ;a)](s; 0;w) = h ' s + ((pa; 1);r') +I['] + (ac') i (s; 0;w) = [V s +H (;V;rV;V xx ;I(V ); ;a)](s; 0;w) +(pa): (4.16) Consequently, assuming ap (which is natural in the case x = 0!) we have f' s +L [']g(s; 0;w)fV s +L [V ]g(s; 0;w) = 0; (4.17) For the other two boundariesfw = 0g andfw =sg, we note that [V xx ' xx ] 0 and the corresponding outward normal vectors are = (0; 0;1) and (1; 0; 1), respectively. Therefore, a similar calculation as (4.16), noting that ((1;p +rx a; 1);) =1; 0, respectively, would lead to (4.17) in both cases. In other words, we can extend the \subsolution property" of (4.14) toD . We are now ready to give the denition of the so-called constrained viscosity solution. Denition 4.2.2. LetOD be a subset such that@ T O :=f(T;y;v)2@Og6=;, and let v2C(O). 63 (a) We say that v is a viscosity subsolution of (4.14) onO, if v(T;y;v) 0, for (T;y;v)2 @ T O; and for any (s;x;w)2O and '2 C 1;2;1 0 (O) such that 0 = [v'](s;x;w) = max (t;y;v)2O [v'](t;y;v), it holds that ' s (s;x;w) +L ['](s;x;w) 0: (4.18) (b) We say that v is a viscosity supersolution of (4.14) onO, if v(T;y;v) 0, for (T;y;v)2 @ T O; and for any (s;x;w)2O and '2 C 1;2;1 0 (O) such that 0 = [v ](s;x;w) = min (t;y;v)2O [v'](t;y;v), it holds that ' s (s;x;w) +L ['](s;x;w) 0: (4.19) (c) We say thatv2C(D) is a \constrained viscosity solution" of (4.14) onD if it is both a viscosity subsolution of (4.14) onD and a viscosity supersolution of (4.14) onD. Remark 4.2.3. (i) We note that the main feature of the constrained viscosity solution is that its subsolution is dened on D , which is justied in Remark 4.2.1-(ii). 
This turns out to be essential for the comparison theorem, whence the uniqueness. 64 (ii) The inequalities in (4.18) and (4.19) are opposite than the usual sub- and super-solutions, due to the fact that the HJB equation (4.14) is a terminal value problem. As in the viscosity theory, it is often convenient to dene viscosity solution in terms of the sub-(super-) dierentials, (or parabolic sub-(super-)jets). To this end we introduce the following notions: Denition 4.2.4. LetOD , u2C(O), and (s;x;w)2O. The set of parabolic super-jets of u at (s;x;w), denoted byP +(1;2;1) O u(s;x;w), is dened as the set of all (q;;A)2RR 2 R such that for all (s;X) := (s;x;w); (t;Y ) := (t;y;v)2O, it holds that u(t;Y ) u(s;X) +q(ts) + (;YX) + 1 2 A(xy) 2 +o(jtsj +jwvj +jyxj 2 ); (4.20) The set of parabolic sub-jets ofu at (s;x;w)2O, denoted byP (1;2;1) O u(s;x;w), is the set of all (q;p;A)2RR 2 R such that (4.20) holds with \" being replaced by \". The closure of P +(1;2;1) O u(s;x;w) (resp. P (1;2;1) O u(s;x;w)), denoted by P +(1;2;1) O u(s;x;w) (resp. P (1;2;1) O u(s;x;w)), is dened as the set of all (q;;A) 2 R R 2 R such that there exists (s n ;x n ;w n ) 2 O and 65 (q n ; n ;A n ) 2 P +(1;2;1) O u(s n ;x n ;w n ) (resp. P (1;2;1) O u(s n ;x n ;w n )), and that ((s n ;x n ;w n );u(s n ;x n ;w n );q n ; n ;A n )! ((s;x;w);u(s;x;w);q;;A), as n!1. We now dene the constrained viscosity solution in terms of the parabolic jets. The equivalence between the two denitions in such a setting can be found in, for example, [6,53]. Denition 4.2.5. LetOD , u2C(O). We say that u (resp. u2C(O) is a viscosity subsolution (resp. supersolution) of (4.14) onO if for any (s;x;w)2O, it holds that q + sup 2[0;1];a2[0;M] H(s;x;w;u;;A;I[u]; ;a) 0 (resp. q + sup 2[0;1];a2[0;M] H(s;x;w; u;;A;I[ u]; ;a) 0); for all (q; (p 1 ;p 2 );A)2P +(1;2;1) O u(s;x;w) (resp. P (1;2;1) O u(s;x;w)). 
In particular, we say that u is a \constrained viscosity solution" of (4.14) on D if it is both a viscosity subsolution onD , and a viscosity supersolution onD. In the rest of the dissertation, we shall assume that all solutions of (4.14) satisfy u(s;x;w) = 0, for (s;x;w) = 2D. We now give the main result of this section. Theorem 4.2.6. Assume that Assumption 2.2.1 is in force. Then, the value func- tion V of problem (2.7) is a constrained viscosity solution of (4.14) onD . 66 Proof. Supersolution. Given (s;x;w)2 D. Let '2 C 1;2;1 0 (D) be such that V' attains its minimum at (s;x;w) with'(s;x;w) =V (s;x;w). For anyh> 0 such that s s +h < T , let us denote h s := s +h^T s;w 1 , and ~ U 1 = Q s;w T s;w 1 . For any ( 0 ;a 0 )2 [0; 1] [0;M], we consider the following \feedback" strategy: 0 t = ( 0 ;a 0 1 ft< 0 g +p1 ft 0 g ), t2 [s;T ], where 0 = infft > s;X 0 t = 0g. Then 0 2U ad [s;T ], and it is readily seen from (3.1) that ruin can only happen at a jump time, that is, T s;w 1 0 s , and R 0 t := (t;X 0 ;s;w;x t ;W s;w t )2D, for t2 [s; h s ). Next, by DPP (Theorem 4.1.2) and the properties of ' we have 0 E sxw h Z h s s e c(ts) (a 0 1 ft< 0 g +p1 ft 0 g )dt +e c( h s s) V (R 0 h s ) i V (s;x;w) (4.21) E sxw h Z h s s e c(ts) a 0 dt1 f h s < 0 g +e c( h s s) '(R 0 h s ) i '(s;x;w) = E sxw h Z h s s e c(ts) a 0 dt1 f h s < 0 g i +E sxw h e c( h s s) ['(R 0 h s )'(R 0 h s )]1 fT s;w 1 <hg i +E sxw h e c( h s s) '(R 0 h s )'(s;x;w) i :=I 1 +I 2 +I 3 ; where I i , i = 1; 2; 3 are the three terms on the right hand side above. 
Clearly, we have I 1 = a 0 c n [1e ch ]P( 0 >s +h;T s;w 1 >h) + Z h 0 h 1e ct ]Pf 0 >s +t)dF T s;w 1 (t) o ; (4.22) 67 Since h s =s +T s;w 1 onfT s;w 1 <hg, we have I 2 = E sxw h e cT s;w 1 ['(R 0 s+T s;w 1 )'(R 0 s+T s;w 1 ) )]1 fT s;w 1 <hg i (4.23) = E sxw h Z 1 0 Z h 0 e ct ['(s +t;X 0 (s+t) u; 0) '(t;X 0 (s+t) ;W s;w (s+t) )]dF T s;w 1 (t)dG(u) i : Since there is no jumps on [s; h s ), applying It^ o's formula (and denoting (x) := rx +p) we get I 3 = E sxw h Z h s s e c(ts) c' +' t + (((X 0 t )a 0 ; 1);r') + ( 0 X 0 t ) 2 2 ' 2 xx (R 0 t )dt i (4.24) = E sxw h Z h 0 1 fT s;w 1 tg e ct c' +' t + (((X 0 s+t )a 0 ; 1);r') + ( 0 X 0 s+t ) 2 2 ' 2 xx (R 0 s+t )dt i = E sxw h Z h 0 F T s;w 1 (t)e ct c' +' t + (((X 0 s+t )a 0 ; 1);r') + ( 0 X 0 s+t ) 2 2 ' 2 xx (R 0 s+t )dt i : Recall that dF T s;w 1 (t) = (w) F T s;w 1 (t)dt = (w)e R w+t w (u)du dt, and F T s;w 1 (0) = 1, dividing both sides of (4.21) by h and then sending h to 0 we obtain, in light of (4.22){(4.24), 0f' t +H (;';' x ;' w ;' xx ; 0 ;a 0 )g(s;x;w): (4.25) 68 Since ( 0 ;a 0 ) is arbitrary, we conclude that V is a viscosity supersolution onD. Subsolution. We shall now argue that V is a viscosity subsolution on D . Suppose not, then we shall rst show that there exist (s;x;w)2D , 2C 1;2;1 0 (D), and constants " > 0, > 0, such that 0 = [V ](s;x;w) = max (t;y;v)2D [V ](t;y;v), but 8 > > < > > : f s +L [ ]g(t;y;v)"c; (t;y;v)2B (s;x;w)\D nft =Tg; V (t;y;v) (t;y;v)"; (t;y;v)2@B (s;x;w)\D ; (4.26) where B (s;x;w) is the open ball centered at (s;x;w) with radius . To see this, we note that if V is not a viscosity subsolution on D , then there must exist (s;x;w) 2 D and 0 2 C 1;2;1 0 (D), such that 0 = [V 0 ](s;x;w) = max (t;y;v)2D [V 0 ](t;y;v), but f 0 s +L [ 0 ]g(s;x;w) =2< 0; for some > 0. (4.27) We shall consider two cases. Case 1. x> 0. 
In this case we introduce the function: (t;y;v) := 0 (t;y;v) + [(ts) 2 + (yx) 2 + (vw) 2 ] 2 (w)(x 2 +w 2 ) 2 ; (t;y;v)2D: 69 Clearly, 2 C 1;2;1 0 (D), (s;x;w) = 0 (s;x;w) = V (s;x;w), and (t;y;v) > V (t;y;v), for all (t;y;v)2 Dn (s;x;w). Furthermore, it is easy to check that ( s ;r )(s;x;w) = ( 0 s ;r 0 )(s;x;w), yy (s;x;w) = 0 yy (s;x;w), and (w) Z x 0 (s;xu; 0)dG(u)(w) Z x 0 0 (s;xu; 0)dG(u) +: Consequently, we see that f s +L [ ]g(s;x;w)f 0 s +L [ 0 ]g(s;x;w) + =< 0: By continuity of s +L [ ], we can then nd > 0 such that f t +L [ ]g(t;y;v)<=2; for (t;y;v)2B (s;x;w)\D nft =Tg. (4.28) Note also that for (t;y;v)2@B (s;x;w)\D , one has V (t;y;v) (t;y;v) 4 (w)(x 2 +w 2 ) 2 : (4.29) Thus if we choose" = min n 2c ; 4 (w)(x 2 +w 2 ) 2 o , then (4.28) and (4.29) become (4.26). 70 Case 2. x = 0. In this case we have 0 s L [ 0 ](s; 0;w) = sup a2[0;M] [((1;pa; 1); ( 0 s ;r 0 ))(s; 0;w) (c +(w)) 0 (s; 0;w) +a]: If we dene (t;y;v) = 0 (t;y;v) +[(ts) 2 +y 2 + (vw) 2 ], for (t;y;v)2 D, and " = min n 2c ; 2 o , then a similar calculation as before shows that (4.26) still holds, proving the claim. In what follows we shall argue that this will lead to a contradiction. To this end, x any = ( ;a)2U s;w ad [s;T ], and let R s;x;w t = (t;X s;x;w t ;W s;w t ). Dene := infft > s : R t = 2 B (s;x;w)\D g, := ^ T s;w 1 , and denote R t =R s;x;w t for simplicity. Applying It^ o's formula to e c(ts) (R t ) from s to we have Z s e c(ts) a t dt +e c(s) V (R ) = Z s e c(ts) a t dt +e c(s) [ (R ) + (V (R ) (R ))] = e c(s) [V (R ) (R )] + (s;x;w) (4.30) + Z s e c(ts) [a t c + t + w + (rX t +pa t ) x + 1 2 X 2 t 2 2 t xx ](R t )dt + Z s e c(ts) x (R t ) t X t dW t + X st e c(ts) ( (R t ) (R t )): 71 Then, on the setf T s;w 1 g, we have = T s;w 1 . Since the ruin only happens at the claim arrival times, we have T s;w 1 . In the case that =T s;w 1 , X T s;w 1 < 0 and V (R T s;w 1 ) = (R T s;w 1 ) = 0; whereas in the case >T s;w 1 , we have R T s;w 1 2D, and V (R T s;w 1 ) (R T s;w 1 ). 
On the other hand, we note that on the set {τ̄ < T₁^{s,w}} we have τ̄ = θ, and since (θ, X_θ, W_θ) ∈ ∂B_δ(s,x,w) ∩ D̄*, we derive from (4.26) that [V(R_θ) − ψ(R_θ)] ≤ −ε. Thus, noting that W_{T₁^{s,w}} = 0, and that both ψ_x and π are bounded, we deduce from (4.30) that

E_{sxw}[∫_s^{τ̄} e^{−c(t−s)} a_t dt + e^{−c(τ̄−s)} V(τ̄, X_τ̄, W_τ̄)]
≤ E[ψ(s,x,w) − ε e^{−c(τ̄−s)} 1_{{τ̄ < T₁^{s,w}}} + ∫_s^{τ̄} e^{−c(t−s)} [ψ_t + 𝓗(ψ; π_t, a_t)](R_t) dt]
≤ ψ(s,x,w) − ε E_{sxw}[e^{−c(τ̄−s)} 1_{{τ̄ < T₁^{s,w}}} + (1 − e^{−c(τ̄−s)})]
= V(s,x,w) − ε E_{sxw}[1 − e^{−c(T₁^{s,w}−s)} 1_{{T₁^{s,w} ≤ τ̄}}]
≤ V(s,x,w) − ε E_{sxw}(1 − e^{−c(T₁^{s,w}−s)}). (4.31)

Since P{T₁^{s,w} > s} = 1, we see that (4.31) contradicts the Dynamic Programming Principle (4.6). This completes the proof.

Chapter 5

Comparison Principle and Uniqueness

In this chapter, we present a comparison theorem that implies uniqueness within a certain class of constrained viscosity solutions of (4.14), a class to which the value function belongs. To be more precise, we introduce the following subset of C(D̄).

Definition 5.0.7. We say that a function u ∈ C(D̄) is of class (L) if

(i) u(s,x,w) ≥ 0 for (s,x,w) ∈ D̄, and u is uniformly continuous on D̄;

(ii) the mapping x ↦ u(s,x,w) is increasing, and lim_{x→∞} u(s,x,w) = (M/c)[1 − e^{−c(T−s)}];

(iii) u(T,y,v) = 0 for any (y,v) ∈ [0,∞) × [0,T].

Clearly, the value function V of problem (2.7) is of class (L), thanks to Proposition 3.1.2, Proposition 3.1.3, Theorem 3.2.2, and Corollary 3.3.3. Our goal is to prove the following Comparison Principle.

Theorem 5.0.8 (Comparison Principle). Assume that Assumption 2.2.1 is in force. Let u be a viscosity subsolution of (4.14) on D* and ū a viscosity supersolution of (4.14) on D. If both u and ū are of class (L), then u ≤ ū on D̄. Consequently, there is at most one constrained viscosity solution of class (L) to (4.14) on D̄.

Proof. We first perturb the supersolution slightly so that all the inequalities involved become strict. Define, for θ > 1 and δ, ς > 0,

ū^{θ,δ,ς}(t,y,v) := θū(t,y,v) + δ(T − t + ς)/t.

Then it is straightforward to check that ū^{θ,δ,ς}(t,y,v) is also a supersolution of (4.14) on D.
In fact, it is easy to see that u is a supersolution of (4.14) inD as > 1, and for any (s;x;w)2D and '2 C 1;2;1 0 (D) such that 0 = [ u ;;& '](s;x;w) = min (t;y;v)2D [ u ;;& '](t;y;v), it holds that [' t + sup ;a H (; u ;;& ;' x ;' w ;' xx ; ;)](s;x;w) [' t + sup ;a H (; u; ~ ' x ; ~ ' w ; ~ ' xx ; ;a)](s;x;w) 0; where ~ '(t;y;v) :='(t;y;v)(Tt +&)=t, i.e., u ;;& is a viscosity supersolution onD. We shall argue that u u ; , which will lead to the desired comparison result as lim # 0;# 0;&# 0 u ;;& = u. 74 To this end, we rst note that lim t!0 u ; (t;y;v) = +1. Thus we need only show thatu u ; onD nft = 0g. Next, note that bothu and u are of class (L), we have (recall Denition 5.0.7) lim y!1 (u(t;y;v) u ;;& (t;y;v)) = (1) M c [1e c(Tt) ] (Tt +&) t & T < 0; (5.1) for all 0 < t T . Thus, by Dini's Theorem, the convergence in (5.1) is uniform in (t;y;v), and we can choose b > 0 so that u(t;y;v) < u ; (t;y;v) for y b, 0<t<T , and 0vt. Consequently, it suces to show that u(t;y;v) u ;;& (t;y;v); onD b =f(t;y;v) : 0<t<T; 0y<b; 0vtg:(5.2) Suppose (5.2) is not true, then there exists (t ;y ;v )2 D such that M b := sup D b (u(t;y;v) u ;;& (t;y;v)) =u(t ;y ;v ) u ;;& (t ;y ;v )> 0: (5.3) Next, we denoteD 0 b :=intD b , and D 1 b :=@D b \D b =@D b n [ft = 0g[ft =Tg[fy =bg]: (5.4) 75 We note thatu(t;y;v) u ;;& (t;y;v) 0, fort = 0;T ory =b, thus (t ;y ;v ) can only happen onD 0 b [D 1 b . We shall consider the following two cases separately. Case 1. We assume that (t ;y ;v )2D 0 b , but u(t;y;v) u ;;& (t;y;v)<M b ; (t;y;v)2D 1 b : (5.5) In this case we follow a more or less standard argument. For "> 0, we dene an auxiliary function: b " (t;x;w;y;v) =u(t;x;w) u ;;& (t;y;v) 1 2" (xy) 2 1 2" (wv) 2 ; (5.6) for (t;x;w;y;v)2C b :=f(t;x;w;y;v) :t2 [0;T ];x;y2 [0;b];w;v2 [0;t]g. 
Since C b is compact, there existf(t " ;x " ;w " ;y " ;v " )g ">0 C b , such that M ";b := max C b b " (t;x;w;y;v) = b " (t " ;x " ;w " ;y " ;v " ): (5.7) We claim that for some " 0 > 0, (t " ;x " ;w " ;y " ;v " ))2 intC b , whenever 0<"<" 0 . Indeed, suppose not, then there is a sequence " n # 0, such that (t "n ;x "n ;w "n ;y "n ;v "n ) 2 @C b , the boundary of C b , and that (5.7) holds for each n. Now since @C b is compact, we can nd a subsequence, may assume (t "n ;x "n ;w "n ;y "n ;v "n ) itself, such that (t "n ;x "n ;w "n ;y "n ;v "n )! ( ^ t; ^ x; ^ w; ^ y; ^ v)2@C b . 76 Note that the function u is continuous and bounded on D, and b "n (t "n ;x "n ;w "n ;y "n ;v "n ) =M "n;b b "n (t ;y ;v ;y ;v ) =M b > 0; (5.8) it follows from (5.6) and (5.8) that (x "n y "n ) 2 2" n + (w "n v "n ) 2 2" n u(t "n ;x "n ;w "n ) M c : Letting n!1 we obtain that ^ x = ^ y, ^ w = ^ v, which implies, by (5.8), u( ^ t; ^ x; ^ w) u ;;& ( ^ t; ^ x; ^ w) = b " ( ^ t; ^ x; ^ w; ^ x; ^ w) = lim n!1 b " (t "n ;x "n ;w "n ;y "n ;v "n )M b > 0: (5.9) But as before we note thatu(t;y;v) u ;;& (t;y;v) 0 fort = 0,t =T , andy =b, we conclude that ^ t6= 0;T , and ^ x < b. In other words, ( ^ t; ^ x; ^ w)2 @D 0 b n (ft = 0g[ft =Tg[fy =bg] =D 1 b . This, together with (5.9), contradicts the assumption (5.5). 77 In what follows we shall assume that (t " ;x " ;w " ;y " ;v " )2 intC b ,8"> 0. Apply- ing [24, Theorem 8.3] one shows that for any > 0, there exist q = ^ q2 R and A;B2S 2 such that 8 > > < > > : (q; ((x " y " )="; (w " v " )=");A)2 P 1;2;+ D 0 b u(t " ;x " ;w " ); (^ q; ((x " y " )="; (w " v " )=");B)2 P 1;2; D 0 b u ;;& (t " ;y " ;v " ); where P 1;2;+ D 0 b u(t;x;w) and P 1;2; D 0 b u(t;y;v) are the closures of the usual parabolic super-(sub-)jets of the function u at (t;x;w); (t;y;v)2D 0 b , respectively (see [24]), such that 1 " 0 B B @ I I I I 1 C C A + 2 " 2 0 B B @ I I I I 1 C C A 0 B B @ A 0 0 B 1 C C A (5.10) where I is the 2 2 identity matrix. 
Taking =", we have 3 " 0 B B @ I I I I 1 C C A 0 B B @ A 0 0 B 1 C C A : (5.11) Note that if we denote A = [A ij ] 2 i;j=1 and B = [B ij ] 2 i;j=1 and " := ((x " y " )="; (w " v " )="), then (q; " ;A) 2 P 1;2;+ D 0 b u(t " ;x " ;w " ), (resp. (^ q; " ;B) 2 P 1;2; D 0 b u ;;& (t " ;y " ;v " )) implies that (q; " ;A 11 ) 2 P +(1;2;1) D u(t " ;x " ;w " ) (resp. (^ q; " ;B 11 )2 P (1;2;1) D u ;;& (t " ;y " ;v " )). Since the functions u, u ;;& , and H are all continuous in all variables, we may assume without loss of generality that 78 (q; " ;A 11 ) 2 P +(1;2;1) D u(t " ;x " ;w " ) (resp. (^ q; " ;B 11 ) 2 P (1;2;1) D u ;;& (t " ;y " ;v " )) and, by Denition 4.2.5, 8 > > > < > > > : q + sup 2[0;1];a2[0;M] H(t " ;x " ;w " ;u; " ;A 11 ;I[u]; ;a) 0; q + sup 2[0;1];a2[0;M] H(t " ;y " ;v " ; u ;;& ; " ;B 11 ;I[ u ;;& ]; ;a) 0: Furthermore, we note that (5.11) in particular implies that A 11 x 2 " B 11 y 2 " 3 " (x " y " ) 2 : (5.12) Thus, if we choose ( " ;a " )2 argmax ( ;a)2[0;1][0;M] H(t " ;y " ;v " ;u; " ;A 11 ;I[u]; ;a), then we have H(t " ;x " ;w " ;u; " ;A 11 ; " ;a " )H(t " ;y " ;v " ; u ;;& ; " ;B 11 ; " ;a " ) 0: Therefore, by denition (4.12) we can easily deduce that c(u(t " ;x " ;w " ) u ;;& (t " ;y " ;v " )) +(w " )u(t " ;x " ;w " )(v " ) u ;;& (t " ;y " ;v " ) 1 2 2 " 2 (A 11 x 2 " B 11 y 2 " ) +r (x " y " ) 2 " (5.13) +(w " ) Z x" 0 u(t " ;x " u; 0)dG(u)(v " ) Z y" 0 u ;;& (t " ;y " u; 0)dG(u) 3 2 2 +r (x " y " ) 2 " +(w " ) Z x" 0 u(t " ;x " u; 0)dG(u)(v " ) Z y" 0 u ;;& (t " ;y " u; 0)dG(u) 79 Now, again, since (t " ;x " ;w " ;y " ;v " )2 C b C which is compact, there exists a sequence " m ! 0 such that (t "m ;x "m ;w "m ;y "m ;v "m ) ! ( t; x; w; y; v) 2 C. By repeating the arguments before one shows that t2 (0;T ), x = y2 [0;b), w = v2 [0;t], i.e., and u( t; x; w) u ;;& ( t; x; w) = lim "m!0 M "m;b M b ; we obtain that ( t; x; w)2D 0 b . 
But on the other hand, replacing" by" m and letting m!1 in (5.13) we have (c +( w))M b ( w) Z x 0 [u( t; xu; 0) u ;;& ( t; xu; 0)]dG(u)( w)M b : This is a contradiction as c> 0 and M b > 0. Case 2. We now consider the case (t ;y ;v )2D 1 b . We shall rst move the point away from the boundaryD 1 b into the interiorD 0 b and then argue as Case 1. To this end we borrow some arguments from [19], [35] and [57]. First, since (t ;y ;v ) is on the boundary of a simple polyhedron and 0 < t < T , it is not hard to see that there exist = ( 1 ; 2 ) 2 R 2 , and a > 0 such that for any (t;x;w)2B 3 a (t ;y ;v )\D 0 b , 0< 1, it holds that (t;y;v)D 0 b ; whenever (y;v)2B 2 a (x + 1 ;w + 2 ): (5.14) 80 Here B n () denotes the ball centered at 2R n with radius . For any "> 0 and 0< < 1, dene the auxiliary functions: for (t;x;w;y;v)2C b , "; (t;x;w;y;v) : = xy p 2" + 1 2 + wv p 2" + 2 2 +[(tt ) 2 + (xy ) 2 + (wv ) 2 ]: "; (t;x;w;y;v) :=u(t;x;w) u ;;& (t;y;v) "; (t;x;w;y;v). Again, we have M ";;b := sup C b "; (t;x;w;y;v) "; (t ;y ;v ;y ;v ) =M b 2 jj 2 > 0; (5.15) for any"> 0 and < 0 , for some 0 > 0. Now we x2 (0; 0 ) and denote, for simplicity, (t " ;x " ;w " ;y " ;v " )2 argmax C b "; . We have "; (t " ;x " ;w " ;y " ;v " ) "; (t ;y ;v ;y + p 2" 1 ;v + p 2" 2 ); (5.16) which implies that x " y " p 2" + 2 2 + w " v " p 2" + 3 2 +[(t " t ) 2 + (x " y ) 2 + (w " v ) 2 ] u(t " ;x " ;w " ) u ;;& (t " ;y " ;v " )u(t ;y ;v ) + u ;;& (t ;y + p 2" 1 ;v + p 2" 2 ) 2M(1 +) c + (Tt +&) t : (5.17) 81 It follows that [(x " y " ) 2 + (w " v " ) 2 ]=" C for some constant C > 0. Thus, possibly along a subsequence, we have lim "!0 [(x " y " ) 2 + (v " w " ) 2 ] = 0. By the continuity of the functions u and u ;;& and the denition of (t ;y ;v ) we have lim "!0 [u(t " ;x " ;w " ) u ;;& (t " ;y " ;v " )] M b = lim "!0 [u(t ;y ;v ) u ;;& (t ;y + p 2" 1 ;v + p 2" 2 )]: Therefore, sending "! 
0 in (5.17) we obtain that lim "!0 h x " y " p 2" + 1 2 + w " v " p 2" + 2 2 +[(t " t ) 2 + (x " y ) 2 + (w " v ) 2 ] i 0: Consequently, we conclude that 8 > > < > > : lim "!0 (t " ;x " ;w " ) = lim "!0 (t " ;y " ;v " ) = (t ;y ;v ); lim "!0 1 p 2" (x " y " ) + 1 2 + 1 p 2" (w " v " ) + 2 2 = 0: (5.18) In other words, we have shown that y " =x " + p 2" 1 +o( p 2"); v " =w " + p 2" 2 +o( p 2"); (5.19) and it then follows from (5.14) that (t " ;y " ;v " ) 2 D 0 b for " > 0 small enough. Namely, we have now returned to the situation of Case 1, with a slightly dierent 82 penalty function "; . The rest of the proof will follow a similar line of arguments, which we shall present brie y for completeness. First we apply [24, Theorem 8.3] again to assert that for any > 0, there exist q; ^ q2R and A;B2S 2 such that 8 > > < > > : (q; ( 1 " + 2(x " y ); 2 " + 2(w " v ));A)2 P 1;2;+ D u(t " ;x " ;w " ) (^ q; ( 1 ; 2 );B)2 P 1;2; D u ;;& (t " ;y " ;v " ); (5.20) where q ^ q = 2(t " t ); 1 " := (x " y " )=" + 2 1 = p 2"; 2 " := (w " v " )=" + 2 2 = p 2"; and 0 B B @ (2 + 1 " )I 1 " I 1 " I 1 " I 1 C C A + 0 B B @ ( 2 " 2 + 4 2 + 4 " )I ( 2 " 2 + 2 " )I ( 2 " 2 + 2 " )I 2 " 2 I 1 C C A 0 B B @ A 0 0 B 1 C C A : (5.21) Now, setting =" we have 3 " 0 B B @ I I I I 1 C C A + 0 B B @ (6 + 4 2 ")I 2I 2I 0 1 C C A 0 B B @ A 0 0 B 1 C C A ; (5.22) 83 which implies, in particular, A 11 x 2 " B 11 y 2 " 3 " (x " y " ) 2 + (6 + 4 2 ")x 2 " 4x " y " : (5.23) Again, as in Case 1 we can easily argue that, without loss of generality, one may assume that (q; ( 1 " +2(x " y ); 2 " +2(w " v ));A 11 )2P +(1;2;1) D u(t " ;x " ;w " ) and (^ q; ( 1 " ; 2 " );B 11 )2P (1;2;1) D u ;;& (t " ;x " ;w " )). It is important to notice that, while (t " ;y " ;v " )2D 0 b , it is possible that the point (t " ;x " ;w " ) is on the boundary ofD . Thus it is crucial that viscosity (subsolution) property is satised onD , including the boundary points. 
Thus, by Definition 4.2.5 we have

q + sup_{π∈[0,1], a∈[0,M]} H(t_ε, x_ε, w_ε, u, (p¹_ε + 2δ(x_ε − y*), p²_ε + 2δ(w_ε − v*)), A₁₁, I[u], π, a) ≥ 0,

q̂ + sup_{π∈[0,1], a∈[0,M]} H(t_ε, y_ε, v_ε, ū^{θ,δ,ς}, (p¹_ε, p²_ε), B₁₁, I[ū^{θ,δ,ς}], π, a) ≤ 0.

Now if we take (π_ε, a_ε) ∈ argmax H(t_ε, x_ε, w_ε, u, (p¹_ε + 2δ(x_ε − y*), p²_ε + 2δ(w_ε − v*)), A₁₁, I[u], π, a), then we have

0 ≤ (q − q̂) + H(t_ε, x_ε, w_ε, u, (p¹_ε + 2δ(x_ε − y*), p²_ε + 2δ(w_ε − v*)), A₁₁, I[u], π_ε, a_ε)
− H(t_ε, y_ε, v_ε, ū^{θ,δ,ς}, (p¹_ε, p²_ε), B₁₁, I[ū^{θ,δ,ς}], π_ε, a_ε),

or equivalently,

(c + λ(w_ε))u(t_ε, x_ε, w_ε) − (c + λ(v_ε))ū^{θ,δ,ς}(t_ε, y_ε, v_ε)
≤ (1/2)σ²π²_ε(A₁₁x²_ε − B₁₁y²_ε) + r(x_ε − y_ε)²/ε + 2δ(x_ε − y_ε)rη₁/√(2ε)
+ 2δ[(rx_ε + p − a)(x_ε − y*) + (w_ε − v*)] + 2δ(t_ε − t*) (5.24)
+ λ(w_ε)∫₀^{x_ε} u(t_ε, x_ε − u, 0) dG(u) − λ(v_ε)∫₀^{y_ε} ū^{θ,δ,ς}(t_ε, y_ε − u, 0) dG(u)
≤ (3σ²π²_ε/2 + r)(x_ε − y_ε)²/ε + 2δ(x_ε − y_ε)rη₁/√(2ε)
+ 2δ[(rx_ε + p − a)(x_ε − y*) + (3 + 2δε)x²_ε − 2x_ε y_ε + (w_ε − v*) + (t_ε − t*)]
+ λ(w_ε)∫₀^{x_ε} u(t_ε, x_ε − u, 0) dG(u) − λ(v_ε)∫₀^{y_ε} ū^{θ,δ,ς}(t_ε, y_ε − u, 0) dG(u).

First sending ε → 0 and then sending δ → 0, and noting (5.18), we obtain from (5.24) that

(c + λ(v*))M_b ≤ λ(v*)∫₀^{y*} [u(t*, y* − u, 0) − ū^{θ,δ,ς}(t*, y* − u, 0)] dG(u) ≤ λ(v*)M_b.

Again, this is a contradiction, as c > 0 and M_b > 0. The proof is now complete.

Chapter 6

Phase-type Distribution of the Ruin Time

In this chapter, we explore another possible method of attacking the problem. Note that the cost functional of interest can be written as

J(x) = E_x[∫₀^{τ^{π,x}∧T} e^{−cs} dL_s] (6.1)
= E_x[∫₀^T e^{−cs} dL_s | τ^{π,x} > T] P(τ^{π,x} > T) + ∫₀^T E_x[∫₀^{τ^{π,x}} e^{−cs} dL_s | τ^{π,x} = t] dF_{τ^{π,x}}(t)
= E_x[∫₀^T e^{−cs} dL_s] − ∫₀^T E_x[∫_t^T e^{−cs} dL_s | τ^{π,x} = t] dF_{τ^{π,x}}(t),

where F_{τ^{π,x}}(t) is the distribution function of the ruin time τ^{π,x}. Thus, once the distribution of the ruin time is available, the cost functional can be computed. In general, however, obtaining the distribution of the ruin time is not an easy task. We shall therefore assume a phase-type structure in our model and explore a martingale approach to this problem.
6.1 Review of Phase-type Distributions and Applications to Risk Processes

We let {J(t)}_{t≥0} be a Markov process on the finite state space E = {1, 2, ..., p, p+1}, where the first p states are transient and state p+1 is absorbing. The intensity matrix of J is then of the form

( U u )
( 0 0 )

where U is a p×p matrix and u is a p-dimensional column vector. Note that we must have u = −Ue, where e = (1, 1, ..., 1)′, since the entries of each row of an intensity matrix sum to zero. Throughout this chapter, bold letters denote matrices and vectors. Let α = (α₁, α₂, ..., α_p) denote the initial distribution of J over the transient states; we can then define:

Definition 6.1.1. The time until absorption of a terminating Markov process as described above is said to have a phase-type distribution, and we write PH(α, U).

The following properties of phase-type distributions are well known (cf. e.g., [1]).

Theorem 6.1.2. Let X be a random variable whose distribution is of phase-type with representation (α, U). Then

(a) the distribution function of X is given by F(x) = 1 − αe^{Ux}e;

(b) the density function of X is given by f(x) = αe^{Ux}u.

We are now ready to see some applications of phase-type distributions to risk processes. Consider the compound Poisson (Cramér–Lundberg) model

R⁰_t = x + pt − Σ_{i=1}^{N⁰_t} U_i,

where N⁰_t is a Poisson process, whose intensity we denote by λ. Without loss of generality, within this section we assume that the premium rate is p = 1. We further assume that the U_i are i.i.d. random variables with common distribution PH(α, U). We are interested in the following infinite-time ruin probability:

ψ⁰(x) = P{ inf_{0≤s<∞} R⁰_s < 0 | R⁰_0 = x }.

To begin with, let us introduce some notation. Let τ⁰(x) denote the time of ruin with initial reserve x, S⁰_t the "claim surplus process," that is, S⁰_t := Σ_{i=1}^{N⁰_t} U_i − t, G⁰₊(·) = P[S⁰_{τ⁰(0)} ∈ ·, τ⁰(0) < ∞] the (possibly defective) ladder height distribution, and M⁰ = sup_{t≥0} S⁰_t.

Theorem 6.1.3.
Assume the claim size distribution follows PH(α, U). Then

(a) G⁰₊ is phase-type with representation (α⁰₊, U), where α⁰₊ is given by α⁰₊ = −λαU⁻¹, and M⁰ is phase-type with representation (α⁰₊, U + uα⁰₊);

(b) ψ⁰(x) = α⁰₊ e^{(U+uα⁰₊)x} e.

A detailed proof can be found in [1]. Here we only note that an essential structure in the proof is to introduce a process m⁰_x that represents the concatenated descending ladder heights, and to show that this process can be identified as a terminating phase-type renewal process with interclaim distribution PH(α⁰₊, U). A figurative explanation of the idea is commonly given by a picture of the concatenated ladder heights.

It is also worth noting that in the proof the condition that the interclaim time distribution is exponential(λ) was never used, so it is not hard to generalize this result to the renewal risk model (Sparre Andersen model). More precisely, we now consider

R_t = x + pt − Σ_{i=1}^{N_t} U_i,

where N_t is a renewal process. We define τ(x), S_t, G₊(·), M, and m_x accordingly, as we did in the compound Poisson case. For a zero-delayed renewal risk model, we have the following results (again, cf. [1] for detailed proofs).

Proposition 6.1.4. In the zero-delayed case,

(a) G₊ is of phase-type with representation (α₊, U), where α₊ is the distribution of m₀;

(b) m_x is a terminating Markov process with intensity matrix given by U + uα₊.

Proposition 6.1.5. α₊ satisfies α₊ = φ(α₊), where

φ(β) = α ∫₀^∞ e^{(U+uβ)y} F(dy) (6.2)

and F is the distribution of the interclaim times in the Sparre Andersen model.

Theorem 6.1.6. Consider the Sparre Andersen model (zero-delayed) with interclaim time distribution F and claim sizes of phase-type with representation (α, U). Then

ψ(x) = α₊ e^{(U+uα₊)x} e, (6.3)

where α₊ satisfies (6.2).

We now extend the above results to the case of delayed renewal processes, and we shall provide a more detailed discussion for completeness.
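Before moving on to the delayed case, the zero-delayed formulas above can be sanity-checked numerically in the simplest one-phase setting. The sketch below (plain Python with illustrative parameter values; all variable names are ours, not the thesis's) verifies Theorem 6.1.3 against the classical Cramér–Lundberg closed form and iterates the fixed-point equation (6.2):

```python
import math

# Compound Poisson with premium p = 1 and exponential(mu) claims: a
# one-phase phase-type law with alpha = 1, U = -mu, u = mu, so all
# "matrices" below are scalars.  Illustrative parameters, lam < mu.
lam, mu = 1.0, 2.0

alpha, U, u = 1.0, -mu, mu

# Theorem 6.1.3(a): ladder-height vector alpha_plus = -lam * alpha * U^{-1}.
alpha_plus = -lam * alpha / U            # = lam/mu = 0.5

def psi(x):
    # Theorem 6.1.3(b): psi(x) = alpha_plus * e^{(U + u*alpha_plus) x} * e
    return alpha_plus * math.exp((U + u * alpha_plus) * x)

# Agreement with the classical formula psi(x) = (lam/mu) e^{-(mu-lam)x}:
for x in (0.0, 0.5, 2.0):
    assert abs(psi(x) - (lam / mu) * math.exp(-(mu - lam) * x)) < 1e-12

# Fixed point (6.2): with exponential(lam) interclaim times, in one phase
# phi(beta) = lam / (lam + mu - mu*beta); iteration must recover the
# Poisson-case alpha_plus.
beta = 0.0
for _ in range(200):
    beta = lam / (lam + mu - mu * beta)
assert abs(beta - alpha_plus) < 1e-9
```

For genuine multi-phase representations the same computations go through with `alpha`, `U`, `u` as arrays and the scalar exponential replaced by a matrix exponential.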
Consider a delayed renewal risk process with the time to the first claim distributed as F_D, and assume that the interclaim times after the first one still follow the distribution F. We can then derive the following result.

Theorem 6.1.7. Consider a delayed renewal process with time to first claim T₁ ~ F_D and interclaim times T₂, T₃, ... after the first claim i.i.d. with common distribution F. Assume that the claim sizes follow PH(α, U). Then the ruin probability under the delayed renewal process is

ψ^D(x) = α^D₊ e^{(U+uα₊)x} e, (6.4)

where

α^D₊ = α ∫₀^∞ e^{(U+uα₊)y} F_D(dy) (6.5)

and α₊ satisfies (6.2).

Proof. This proof is a combination of the idea used to prove the result in the zero-delayed case and the idea used in deriving the renewal density of a delayed renewal process. We condition on the arrival time of the first claim, T₁ = y, and define m^D_x from S_{t+y} − S_y in the same way as m_x. Then m^D_x is Markov with the same intensity matrix as m_x, but with initial distribution α rather than α₊. Also, we note that m₀ = m^D_y. Since the conditional distribution of m^D_y given T₁ = y is αe^{(U+uα₊)y}, the distribution α^D₊ of m₀ is obtained by integrating y out according to the distribution F_D. Apart from the first claim, everything afterwards is exactly the same as in the non-delayed renewal process. The result then follows.

6.2 Martingale Method for Exit Probabilities under Lévy Processes

For the discussions in this part, please refer to [43] and [8] for details. A stochastic process {X⁰_t}_{t≥0} is said to be a Lévy process if it has stationary independent increments, and an important characterization result states that a Lévy process can be decomposed as an independent sum

X⁰_t = pt + σB_t + Q⁰_t, (6.6)

where B is a standard Brownian motion and Q⁰ a pure jump process. To be more precise, Q⁰_t takes the form of the following stochastic integral:

Q⁰_t = ∫₀^{t+} ∫_ℝ x N_p(ds, dx), (6.7)

where N_p is the counting measure of a Poisson point process on (0,∞)×ℝ.
Furthermore, we know that the compensator of $N_p$ is given by $\hat N_p(dx,dt) = E\,N_p(dx,dt) = \nu(dx)dt$, where $\nu$ is the Lévy measure. If we further assume that we can write $\nu = \beta G$, where $\beta$ is the intensity of the Poisson point process and $G$ a probability measure, then $Q^0_t$ is a compound Poisson process
$$Q^0_t = \sum_{i=1}^{N_t} U_i,$$
where the jumps $\{U_i\}_{i=1,2,\ldots}$ have distribution $G$ and occur at the epochs of the Poisson($\beta$) process $N_t$.

Theorem 6.2.1. The process given by
$$M^0_t = e^{\theta X^0_t - t\kappa(\theta)}, \qquad (6.8)$$
where $\kappa(\theta) = p\theta + \frac{1}{2}\sigma^2\theta^2 + \int_{\mathbb{R}}(e^{\theta x}-1)\,\nu(dx)$, is a local martingale.

If we further assume that positive (negative) jumps have rate $\beta_+$ (resp. $\beta_-$) and follow $PH(\alpha_+, U_+)$ (resp. $PH(\alpha_-, U_-)$) with phase number $n_+$ (resp. $n_-$), then we have
$$\kappa(\theta) = p\theta + \frac{1}{2}\sigma^2\theta^2 + \beta_+\big[\alpha_+(-\theta I - U_+)^{-1}u_+ - 1\big] + \beta_-\big[\alpha_-(\theta I - U_-)^{-1}u_- - 1\big]. \qquad (6.9)$$

To get the ruin probability, we need to find the roots of $\kappa(\theta) = 0$ — a polynomial equation once the denominators are cleared — and apply the optional sampling theorem to the exponential martingale $M^0_t$. Note that this polynomial equation has degree $n_+ + n_- + 2$ when $p, \sigma \ne 0$.

6.3 Results under a perturbed Sparre Andersen Model

We now consider the exit time under a perturbed Sparre Andersen model; that is, we want to generalize the Poisson point process $N_p$ in (6.6) to a renewal point process. More precisely, we assume our risk process follows
$$X_t = pt + \sigma B_t - \int_0^{t+}\!\!\int_{\mathbb{R}_+} x\,N_r(ds,dx) = pt + \sigma B_t - \sum_{i=1}^{N_t} U_i, \qquad (6.10)$$
where $N_r$ is the counting measure of a "renewal point process" on $(0,\infty)\times\mathbb{R}_+$, $B_t$ is a Brownian motion, $N_t$ a renewal process, and the $U_i$ are i.i.d. random variables with distribution $G$. Here our main concern is to work out the compensator of a renewal point process. Recall that for a general optional counting process $N$ on a filtered probability space $(\Omega, \mathcal{F}, P, \mathbb{F})$, if $N$ has the $\mathbb{F}$-intensity $\lambda$, then the process defined by
$$\tilde N_t = N_t - \int_0^t \lambda_s\,ds$$
is an $\mathbb{F}$-martingale. Since we will be focusing on the Sparre Andersen model, we hope to derive a more specific form of the compensator of a renewal process.
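Before specializing to renewal arrivals, it is instructive to see the root-finding step of Section 6.2 in the simplest concrete case. The sketch below is an illustration only, with made-up parameter values: it takes $\sigma > 0$ and downward exponential($\mu$) jumps, so $n_- = 1$, $n_+ = 0$, and clearing the single denominator of $\kappa(\theta) = 0$ leaves a cubic, of degree $n_+ + n_- + 2 = 3$ as stated above.

```python
import numpy as np

# kappa(theta) = p*theta + 0.5*sigma^2*theta^2 + beta*(mu/(mu + theta) - 1)
# for drift p, volatility sigma, and downward exponential(mu) jumps at rate beta.
# Multiplying kappa(theta) = 0 through by (mu + theta) gives the cubic
#   0.5*sigma^2*t^3 + (p + 0.5*sigma^2*mu)*t^2 + (p*mu - beta)*t + 0 = 0.
p, sigma, beta, mu = 1.0, 1.0, 1.0, 2.0

coeffs = [0.5 * sigma**2, p + 0.5 * sigma**2 * mu, p * mu - beta, 0.0]
roots = np.roots(coeffs)
print(roots)  # one root is (numerically) 0, since kappa(0) = 0 always

# The nonzero negative root closest to 0 plays the role of the Lundberg exponent.
gamma = max(r.real for r in roots if r.real < -1e-12)
print(gamma)  # here gamma = -2 + sqrt(2)
```

With these values the cubic factors as $\theta(\tfrac12\theta^2 + 2\theta + 1)$, so the roots are $0$ and $-2 \pm \sqrt{2}$; the optional sampling argument then turns the relevant root into an exponential bound on the ruin probability.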
Let us begin by assuming that the counting measure $N_r(\cdot,\cdot)$ is absolutely continuous with respect to the Lebesgue measure $dt$, and let us write the compensator of $N_r(\cdot,\cdot)$, denoted by $\hat N_r(\cdot,\cdot)$, as $\hat N_r(dx\,dt) = \nu_t(\omega, dx)dt$. Since in the Sparre Andersen model we assume the renewal arrival process and the claim size variables are independent, it is not hard to see that we can further write $\nu_t(\omega, dx) = \mu(t,\omega)G(dx)$, where $\mu(t,\omega)$ is the so-called renewal density, a stochastic process. The explicit calculation of the renewal density (or the renewal measure), however, is often thought of as infeasible unless one assumes further that the interclaim time distribution is of phase-type. In that case it is known that the renewal density has an explicit expression. For details and the proof of the following theorem, please refer to [1].

Theorem 6.3.1. Consider a renewal process whose interclaim time distribution is phase-type with representation $(\alpha, V)$. Then the renewal density exists and is given by
$$\mu(t) = \alpha\, e^{(V+v\alpha)t}\,v, \qquad (6.11)$$
where $v = -Ve$.

Under the assumption of a phase-type distribution structure, we have:

Theorem 6.3.2. Consider the process in (6.10). If we assume that the interclaim time distribution is $PH(\alpha, V)$ and the claim sizes $U_i$ are of phase-type, then the process $M_t$ given by
$$M_t = \exp\{\theta X_t - \kappa(\theta, t)\} \qquad (6.12)$$
is an $\mathbb{F}$-local martingale. Here
$$\kappa(\theta,t) = p\theta t + \frac{1}{2}\sigma^2\theta^2 t + \int_0^t\!\!\int_{\mathbb{R}_+}(e^{-\theta x}-1)\,\nu_s(\omega, dx)ds = p\theta t + \frac{1}{2}\sigma^2\theta^2 t + \int_0^t\!\!\int_{\mathbb{R}_+}(e^{-\theta x}-1)\,\alpha e^{(V+v\alpha)s}v\,G(dx)ds = \Big[p\theta + \frac{1}{2}\sigma^2\theta^2\Big]t + \Theta(t)K(\theta), \qquad (6.13)$$
where $\Theta(t) = \int_0^t \alpha e^{(V+v\alpha)s}v\,ds$ and $K(\theta) = M_U(-\theta) - 1$, with $M_U$ the moment generating function of $U_i$ (a rational function, since $U_i$ is phase-type).

Proof. This result is a special case of Theorem 6.4.1 in the next section. $\square$

In light of Asmussen [8], we aim at finding the solution of the Cramér-Lundberg equation, i.e., $\kappa(\theta,t) = 0$, where $\kappa(\theta,t)$ is the adjustment coefficient such that $M_t = \exp\{\theta X_t - \kappa(\theta,t)\}$ is a local martingale. For a perturbed Sparre Andersen model, it is not hard to see from Theorem 6.3.2 that there is no $\theta$ such that $\kappa(\theta,t) = 0$ for all $t$.
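Formula (6.11) is easy to sanity-check numerically; the check below is an illustration and not part of the thesis. With a single phase — exponential($\lambda$) interclaims — it returns the constant Poisson renewal density $\mu(t) = \lambda$, while for Erlang(2, $\lambda$) interclaims it reproduces the classical $\mu(t) = \lambda(1 - e^{-2\lambda t})/2$.

```python
import numpy as np

def expm(M, terms=80):
    """Matrix exponential via truncated Taylor series (fine for small matrices)."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def renewal_density(alpha, V, t):
    """Renewal density (6.11): mu(t) = alpha e^{(V + v alpha) t} v, v = -V e."""
    v = -V @ np.ones(len(V))
    return alpha @ expm((V + np.outer(v, alpha)) * t) @ v

lam, t = 1.0, 1.0

# One phase, exponential(lam) interclaims: Poisson process, mu(t) = lam.
mu_exp = renewal_density(np.array([1.0]), np.array([[-lam]]), t)

# Two phases in series, Erlang(2, lam): mu(t) = lam*(1 - exp(-2*lam*t))/2.
alpha2 = np.array([1.0, 0.0])
V2 = np.array([[-lam, lam], [0.0, -lam]])
mu_erl = renewal_density(alpha2, V2, t)

print(mu_exp, mu_erl, lam * (1 - np.exp(-2 * lam * t)) / 2)
```

The non-constant Erlang density is precisely what makes the jump contribution in (6.13) nonlinear in $t$: outside the Poisson case, no single exponent can annihilate $\kappa(\cdot, t)$ for every $t$.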
We cannot apply the method for Lévy processes directly. To overcome this, we first notice that an exponential martingale of the form $M_t = \exp\{\theta X_t - \kappa(\theta,t)\}$ is not a good choice for our problem. We therefore follow the work [43] and start with a general form $M^I_t = \exp\{-I(t, X_t) - \kappa^I(t)\}$, with a "general rate function" $I$ to be determined. We do this in a more general setting.

6.4 Lundberg Bounds for Sparre Andersen Model

In this section we consider the same underlying process as in our proposed project, except that we now only consider a process starting at time 0, and the renewal process is zero-delayed with phase-type interclaim times. More precisely, we assume
$$X_t = x + pt + \int_0^t (rX_u - a_u)\,du + \int_0^t \sigma_u X_u\,dB_u - \sum_{i=1}^{N_t} U_i, \qquad (6.14)$$
where $a_u$ and $\sigma_u$ are $\mathbb{F}$-adapted processes satisfying the admissible strategy conditions, $N_t$ is a renewal process, and the interclaim time distribution is $PH(\alpha, V)$. We are interested in deriving an exponential martingale for this process, but in a more general form. To be precise, we hope to find a martingale of the form $M^I_t = \exp\{-I(t,X_t) - \kappa^I(t)\}$, where $I$ is a bivariate function (the general rate function) and $\kappa^I(t)$ is a stochastic process to be determined.

Define $K^I(t,x,k) = \exp\{-I(t,x) - k\}$; then $M^I_t = K^I(t, X_t, \kappa^I_t)$.
Furthermore, it is easily verified that
$$\partial_t K^I = -K^I(\partial_t I + \partial_t \kappa^I), \qquad \partial_x K^I = -K^I\partial_x I, \qquad \partial_{xx} K^I = K^I\big((\partial_x I)^2 - \partial^2_{xx} I\big). \qquad (6.15)$$
Then by the generalized Itô formula,
$$M^I_t - M^I_0 = K^I(t, X_t, \kappa^I_t) - K^I(0, X_0, \kappa^I_0) = \int_0^t \partial_t K^I\,du + \int_0^t \partial_x K^I\,(p + rX_u - a_u)\,du + \int_0^t \partial_x K^I\,\sigma_u X_u\,dB_u + \frac{1}{2}\int_0^t \partial_{xx} K^I\,\sigma_u^2 X_u^2\,du + \int_0^t\!\!\int_{\mathbb{R}_+}\big[e^{I(u, X_{u-}) - I(u, X_{u-} - x)} - 1\big]K^I\,N_r(dx\,du). \qquad (6.16)$$
Plugging in (6.15) and noting that
$$\int_0^t\!\!\int_{\mathbb{R}_+}\big[e^{I(u,X_{u-}) - I(u,X_{u-}-x)} - 1\big]K^I\,N_r(dx\,du) = \int_0^t\!\!\int_{\mathbb{R}_+}\big[e^{I(u,X_{u-}) - I(u,X_{u-}-x)} - 1\big]K^I\,\tilde N_r(dx\,du) + \int_0^t\!\!\int_{\mathbb{R}_+}\big[e^{I(u,X_u) - I(u,X_u-x)} - 1\big]K^I\,\alpha e^{(V+v\alpha)u}v\,G(dx)du, \qquad (6.17)$$
we deduce from (6.16) that
$$M^I_t - M^I_0 = -\int_0^t K^I(\partial_t I + \partial_u\kappa^I_u)\,du - \int_0^t K^I\partial_x I\,(p + rX_u - a_u)\,du - \int_0^t K^I\partial_x I\,\sigma_u X_u\,dB_u + \frac{1}{2}\int_0^t K^I\big((\partial_x I)^2 - \partial^2_{xx} I\big)\sigma_u^2 X_u^2\,du + \int_0^t\!\!\int_{\mathbb{R}_+}\big[e^{I(u,X_{u-}) - I(u,X_{u-}-x)} - 1\big]K^I\,\tilde N_r(dx\,du) + \int_0^t\!\!\int_{\mathbb{R}_+}\big[e^{I(u,X_u) - I(u,X_u-x)} - 1\big]K^I\,\alpha e^{(V+v\alpha)u}v\,G(dx)du. \qquad (6.18)$$
In order for $M^I_t$ to be a local martingale, we need all terms except the two local-martingale terms
$$-\int_0^t K^I\partial_x I\,\sigma_u X_u\,dB_u \qquad \text{and} \qquad \int_0^t\!\!\int_{\mathbb{R}_+}\big[e^{I(u,X_{u-}) - I(u,X_{u-}-x)} - 1\big]K^I\,\tilde N_r(dx\,du)$$
to vanish; therefore we can conclude that
$$\kappa^I_t = \int_0^t\Big\{-\partial_t I - \partial_x I\,(p + rX_u - a_u) + \frac{1}{2}\big((\partial_x I)^2 - \partial^2_{xx} I\big)\sigma_u^2 X_u^2 + \int_{\mathbb{R}_+}\big[e^{I(u,X_u) - I(u,X_u - x)} - 1\big]\,\alpha e^{(V+v\alpha)u}v\,G(dx)\Big\}du. \qquad (6.19)$$

Now we have the following result.

Theorem 6.4.1. Assume $X_t$ is given by (6.14) and $I$ is a bivariate function such that $\kappa^I_t < \infty$, $P$-a.s., for all $t$, where $\kappa^I_t$ is given by (6.19). Then $M^I_t = \exp\{-I(t,X_t) - \kappa^I(t)\}$ is an $\mathbb{F}$-local martingale.

Proof. The result follows from the above calculations. $\square$
Applying Theorem 6.4.1 with the underlying process taken to be a perturbed Sparre Andersen model, the adjustment coefficient is given by
$$\kappa^I_t = \int_0^t\Big\{-\partial_t I - \partial_x I\,p + \frac{1}{2}\big((\partial_x I)^2 - \partial^2_{xx} I\big)\sigma^2 + \int_{\mathbb{R}_+}\big[e^{I(u,X_u) - I(u,X_u - y)} - 1\big]\,\alpha e^{(V+v\alpha)u}v\,G(dy)\Big\}du. \qquad (6.20)$$
In order for this adjustment coefficient to be zero for any $t$, the bivariate function $I(t,x)$ needs to satisfy the following integro-differential equation:
$$-\partial_t I - \partial_x I\,p + \frac{1}{2}\big((\partial_x I)^2 - \partial^2_{xx} I\big)\sigma^2 + \int_{\mathbb{R}_+}\big[e^{I(t,x) - I(t,x - y)} - 1\big]\,\alpha e^{(V+v\alpha)t}v\,G(dy) = 0. \qquad (6.21)$$
Once we obtain results about the solutions of this equation, we hope to attack the problem by the martingale method as in Asmussen [8]. In particular, under our framework it is easy to check that the integro-differential equation takes the following form:
$$0 = -\partial_t I - \partial_x I\,(p + rx - a) + \frac{1}{2}\big((\partial_x I)^2 - \partial^2_{xx} I\big)\sigma^2 x^2 + \int_{\mathbb{R}_+}\big[e^{I(t,x) - I(t,x - y)} - 1\big]\,\alpha e^{(V+v\alpha)t}v\,G(dy). \qquad (6.22)$$
Note that the function $I$ will depend on the parameters $a$ and $\sigma$. Clearly, this equation is in general too complicated to solve, and it is not an easy task to identify the general rate function $I$ that best fits the problem; we can nevertheless derive a Lundberg-type bound, which is often helpful. To this end, first recall that $\tau = \inf\{t : X_t < 0\}$ and $\psi(x,T) = P(\tau < T)$.

Theorem 6.4.2. For $X_t$ given by (6.14), assume $I$ is a bivariate function such that $\kappa^I_t < \infty$, $P$-a.s., for all $t$, where $\kappa^I_t$ is given by (6.19), and that $I(t,x) \le 0$ for all $x \le 0$. Then
$$\psi(x,T) \le e^{-I(0,x)}\,E\Big[\sup_{0\le t\le T}\exp(\kappa^I_t)\Big]. \qquad (6.23)$$

Proof. Since the exponential local martingale $M^I$ in Theorem 6.4.1 is nonnegative, it is a supermartingale. Applying the Optional Sampling Theorem we have
$$e^{-I(0,x)} \ge E\big[\exp\{-I(\tau\wedge T, X_{\tau\wedge T}) - \kappa^I_{\tau\wedge T}\}\big] \ge E\big[\exp\{-I(\tau, X_\tau) - \kappa^I_\tau\}\,\big|\,\tau < T\big]\,P(\tau < T). \qquad (6.24)$$
By definition $X_\tau < 0$; thus $I(\tau, X_\tau) \le 0$.
The inequality can then be reduced to
$$e^{-I(0,x)} \ge E\big[\exp(-\kappa^I_\tau)\,\big|\,\tau < T\big]\,P(\tau < T) \ge E\Big[\inf_{0\le t\le T}\exp(-\kappa^I_t)\Big]\,\psi(x,T). \qquad (6.25)$$
We can then obtain the bound for $\psi(x,T)$:
$$\psi(x,T) \le e^{-I(0,x)}\Big/E\Big[\inf_{0\le t\le T}\exp(-\kappa^I_t)\Big] \le e^{-I(0,x)}\,E\Big[\Big(\inf_{0\le t\le T}\exp(-\kappa^I_t)\Big)^{-1}\Big] = e^{-I(0,x)}\,E\Big[\sup_{0\le t\le T}\exp(\kappa^I_t)\Big]. \qquad (6.26)$$
$\square$

Chapter 7 Appendix

[Proof of Lemma 2.2.3]

Proof. We first prove the lemma for fixed $t$; that is, we verify that for $\eta_t(\omega) \in \mathcal{F}_t$ there exists a function $\varphi: D^m_t \to X$ such that $\eta_t(\omega) = \varphi(\xi_{\cdot\wedge t}(\omega))$. Set $\mathcal{H} = \{\varphi(\xi_{\cdot\wedge t}(\omega)) \mid \varphi: D^m_t \to U \text{ measurable}\}$; then $\mathcal{H}$ is a linear space and $1 \in \mathcal{H}$. From Proposition 7.1 in Chapter 3 of [26], we can define $\pi_r: D^m[0,T] \to \mathbb{R}^{m\times n}$ by $\pi_r(\eta) = \eta(r)$, where $r = (r_1, \ldots, r_n)$. Then the Borel $\sigma$-algebra of $D^m[0,T]$ coincides with $\sigma(\pi_r : r \in D_0)$, where $D_0$ is any dense subset of $[0,T]$. Then for the indicator function $1_{\{\pi_r(\eta)\in E\}}$ we have $\varphi = 1_E$ such that $1_{\{\pi_r(\eta)\in E\}} = 1_E(\eta_{\cdot\wedge t}) \in \mathcal{H}$. Now suppose the $\varphi_i: D^m_t \to X$ are measurable with $\varphi_i(\xi_{\cdot\wedge t}) \in \mathcal{H}$ and such that $0 \le \varphi_i(\xi_{\cdot\wedge t}) \nearrow \eta(\cdot)$; then take $\varphi = \sup_i \varphi_i$. Clearly $\varphi: D^m_t \to X$ is measurable and $\eta(\cdot) = \varphi(\xi_{\cdot\wedge t})$; thus $\eta(\cdot) \in \mathcal{H}$. By the monotone class theorem, $\mathcal{H}$ contains all $\mathcal{F}_t$-measurable processes; in particular, $\eta_t \in \mathcal{H}$. This proves that our conclusion is true for fixed $t$.

Now for any $i \ge 1$, let $0 = t^i_0 < t^i_1 < \cdots$ be a partition of $[0,T]$ whose mesh goes to 0 as $i \to \infty$. For fixed $t^i_j$ we can find the corresponding function $\varphi_{t^i_j}$, and we define
$$\varphi^i(t,\eta) = \varphi_0(\eta_{\cdot\wedge 0})\,1_{\{0\}}(t) + \sum_{j\ge 1}\varphi_{t^i_j}(\eta_{\cdot\wedge t^i_j})\,1_{(t^i_{j-1}, t^i_j]}(t)$$
for any $\eta \in D^m_T$. We can see that for any $t \in [0,T]$ there is a $j$ such that $t^i_{j-1} < t \le t^i_j$. Then $\varphi^i(t, \xi_{\cdot\wedge t^i_j}(\omega)) = \varphi_{t^i_j}(\xi_{\cdot\wedge t^i_j}) = \eta_{t^i_j}(\omega)$. So we define $\varphi(t,\eta) = \lim_{i\to\infty}\varphi^i(t,\eta)$ to prove the result. $\square$

[Proof of Lemma 2.2.4]

Proof. Recall that $Q_t = \sum_{i=1}^{N_t} U_i$. Since $\pi = (\sigma, a)$ is an admissible strategy on $[s,T]$, by Lemma 2.2.3 there are functions $\varphi_1$ and $\varphi_2$ such that $\sigma_t(\omega) = \varphi_1(t, B_{\cdot\wedge t}(\omega), Q_{\cdot\wedge t}(\omega), W_{\cdot\wedge t}(\omega))$ and $a_t(\omega) = \varphi_2(t, B_{\cdot\wedge t}(\omega), Q_{\cdot\wedge t}(\omega), W_{\cdot\wedge t}(\omega))$, $P$-a.s. $\omega \in \Omega$, for all $t \in [s, \tau]$.
Therefore we have
$$\begin{cases} dX_t = \big(p + rX_t - \varphi_2(t, B_{\cdot\wedge t}(\omega), Q_{\cdot\wedge t}(\omega), W_{\cdot\wedge t}(\omega))\big)dt + X_t\,\varphi_1(t, B_{\cdot\wedge t}(\omega), Q_{\cdot\wedge t}(\omega), W_{\cdot\wedge t}(\omega))\,dB_t - d\sum_{i=1}^{N_t}U_i, \\ dW_t = dt - W_{t-}\,dN_t, \qquad t\in[\tau, T], \\ (X_\tau, W_\tau) = (\xi, \zeta). \end{cases} \qquad (7.1)$$

On the other hand we have $P\{\tilde\omega : (\xi(\tilde\omega), \zeta(\tilde\omega)) = (\xi(\omega), \zeta(\omega)) \mid \mathcal{F}_\tau\}(\omega) = 1$, $P$-a.s. There is an $\Omega_0 \in \mathcal{F}_\tau$ with $P(\Omega_0) = 1$ such that for any $\omega_0 \in \Omega_0$, $(\xi, \zeta)$ become deterministic constants under the new probability space $(\Omega, \mathcal{F}, P(\cdot\mid\mathcal{F}_\tau)(\omega_0))$. In addition, for any $t$,
$$\varphi_{1,2}(t,\omega) = \varphi_{1,2}(t, B_{\cdot\wedge t}(\omega), Q_{\cdot\wedge t}(\omega), W_{\cdot\wedge t}(\omega)) = \varphi_{1,2}(t, \tilde B_{\cdot\wedge t}(\omega) + B_\tau(\omega), \tilde Q_{\cdot\wedge t}(\omega) + Q_\tau(\omega), W_{\cdot\wedge t}(\omega)), \qquad (7.2)$$
where $\tilde B_t = B_t - B_\tau$ is a standard Brownian motion and $\tilde Q_t = Q_t - Q_\tau$ is a claim process. Note that $B_\tau$, $Q_\tau$ and $W_\tau$ a.s. equal the constants $B_\tau(\omega_0)$, $Q_\tau(\omega_0)$ and $\zeta(\omega_0)$ under the probability measure $P(\cdot\mid\mathcal{F}_\tau)(\omega_0)$. Hence, by the definition of admissible controls, we have $\pi_t \in \mathcal{U}_{ad}[\tau, t]$. Thus, working under the probability space $(\Omega, \mathcal{F}, P(\cdot\mid\mathcal{F}_\tau)(\omega_0))$ and noticing the weak uniqueness of (7.1), we obtain the result. $\square$

Bibliography

[1] Albrecher, H. and Asmussen, S. (2010), Ruin Probabilities, Advanced Series on Statistical Science & Applied Probability, Vol. 14, World Scientific Publishing Co. Pte. Ltd.

[2] Albrecher, H., Claramunt, M.M. and Marmol, M. (2005), On the distribution of dividend payments in a Sparre Andersen model with generalized Erlang(n) interclaim times, Insurance Math. Econom., 37(2), 324-334.

[3] Albrecher, H. and Hartinger, J. (2006), On the non-optimality of horizontal dividend barrier strategies in the Sparre Andersen model, Hermis J. Comp. Math. Appl., 7, 109-122.

[4] Albrecher, H. and Thonhauser, S. (2008), Optimal dividend strategies for a risk process under force of interest, Insurance Math. Econom., 43(1), 134-149.

[5] Albrecher, H. and Thonhauser, S. (2009), Optimality results for dividend problems in insurance, Rev. R. Acad. Cien. Ser. A Mat., 103(2), 291-320.

[6] Alvarez, O. and Tourin, A. (1996), Viscosity solutions of nonlinear integro-differential equations, Ann. Inst. H.
Poincaré Anal. Non Linéaire, 13(3), 293-317.

[7] Asmussen, S. (2003), Applied Probability and Queues, Springer-Verlag, New York.

[8] Asmussen, S. (2014), Lévy processes, phase-type distributions and martingales, Stochastic Models, 30(4).

[9] Asmussen, S., Avram, F. and Usabel, M. (2002), Erlangian approximations for finite-horizon ruin probabilities, Astin Bulletin, 32(2), 267-281.

[10] Asmussen, S. and Bladt, M. (1996), Phase-type distributions and risk processes with state-dependent premiums, Scand. Actuar. J., 1, 19-36.

[11] Asmussen, S. and Taksar, M. (1997), Controlled diffusion models for optimal dividend pay-out, Insurance Math. Econom., 20, 1-15.

[12] Asmussen, S., Hojgaard, B. and Taksar, M. (2000), Optimal risk control and dividend distribution policies: example of excess-of-loss reinsurance for an insurance corporation, Finance and Stochastics, 4, 299-324.

[13] Azcue, P. and Muler, N. (2005), Optimal reinsurance and dividend distribution policies in the Cramér-Lundberg model, Math. Finance, 15(2), 261-308.

[14] Azcue, P. and Muler, N. (2010), Optimal investment policy and dividend payment strategy in an insurance company, Ann. Appl. Probab., 20(4), 1253-1302.

[15] Bai, L., Guo, J. and Zhang, H. (2010), Optimal excess-of-loss reinsurance and dividend payments with both transaction costs and taxes, Quant. Finance, 10(10), 1163-1172.

[16] Bai, L., Hunting, M. and Paulsen, J. (2012), Optimal dividend policies for a class of growth-restricted diffusion processes under transaction costs and solvency constraints, Finance Stoch., 16(3), 477-511.

[17] Bai, L. and Paulsen, J. (2010), Optimal dividend policies with transaction costs for a class of diffusion processes, SIAM J. Control Optim., 48(8), 4987-5008.

[18] Bardi, M. and Capuzzo-Dolcetta, I. (1997), Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations, Birkhäuser, Boston.
[19] Benth, F.E., Karlsen, K.H. and Reikvam, K. (2001), Optimal portfolio selection with consumption and nonlinear integro-differential equations with gradient constraint: a viscosity solution approach, Finance Stoch., 5, 275-303.

[20] Bladt, M. (2005), A review on phase-type distributions and their use in risk theory, ASTIN Bulletin, 35, 145-161.

[21] Blanchet-Scalliet, C., El Karoui, N., Jeanblanc, M. and Martellini, L. (2008), Optimal investment decisions when time-horizon is uncertain, Journal of Mathematical Economics, 44(11), 1100-1113.

[22] Browne, S. (1995), Optimal investment policies for a firm with a random risk process: exponential utility and minimizing the probability of ruin, Math. Oper. Res., 20(4), 937-958.

[23] Crandall, M.G. and Lions, P.-L. (1983), Viscosity solutions of Hamilton-Jacobi equations, Trans. Amer. Math. Soc., 277(1), 1-42.

[24] Crandall, M.G., Ishii, H. and Lions, P.-L. (1992), User's guide to viscosity solutions of second order partial differential equations, Bulletin of the American Mathematical Society, 27(1), 1-67.

[25] De Finetti, B. (1957), Su un'impostazione alternativa della teoria collettiva del rischio, Transactions of the XVth International Congress of Actuaries, (II), 433-443.

[26] Ethier, S.N. and Kurtz, T.G. (1986), Markov Processes: Characterization and Convergence, John Wiley & Sons, Inc., Hoboken, New Jersey.

[27] Fleming, W.H. and Soner, H.M. (2006), Controlled Markov Processes and Viscosity Solutions, 2nd edition, Springer.

[28] Gerber, H.U. (1969), Entscheidungskriterien für den zusammengesetzten Poisson-Prozess, Schweiz. Aktuarver. Mitt., (1), 185-227.

[29] Gerber, H.U. and Shiu, E.S.W. (2006), On optimal dividend strategies in the compound Poisson model, N. Am. Actuar. J., 10(2), 76-93.

[30] Gordon, M.J. (1959), Dividends, earnings and stock prices, Review of Economics and Statistics, 41, 99-105.

[31] Hipp, C. and Plum, M. (2000), Optimal investment for insurers, Insurance Math. Econom., 27(2), 215-228.
[32] Hipp, C. and Vogt, M. (2003), Optimal dynamic XL reinsurance, Astin Bull., 33(2), 193-207.

[33] Hojgaard, B. and Taksar, M. (1998), Optimal proportional reinsurance policies for diffusion models, Scand. Actuarial J., 2, 166-180.

[34] Hojgaard, B. and Taksar, M. (1999), Controlling risk exposure and dividend payout schemes: insurance company example, Math. Finance, 9(2), 153-182.

[35] Ishii, H. and Lions, P.-L. (1990), Viscosity solutions of fully nonlinear second-order elliptic partial differential equations, J. Differential Equations, 83, 26-78.

[36] Jeanblanc-Picqué, M. and Shiryaev, A.N. (1995), Optimization of the flow of dividends, Uspekhi Mat. Nauk, 50(2(302)), 25-46.

[37] Karatzas, I. and Shreve, S.E. (1991), Brownian Motion and Stochastic Calculus, 2nd edition, Springer-Verlag.

[38] Karatzas, I. and Shreve, S.E. (1998), Methods of Mathematical Finance, Springer-Verlag.

[39] Li, S. and Garrido, J. (2004), On a class of renewal risk models with a constant dividend barrier, Insurance Math. Econom., 35(3), 691-701.

[40] Lions, P.-L. (1983), Optimal control of diffusion processes and Hamilton-Jacobi-Bellman equations. II. Viscosity solutions and uniqueness, Comm. Partial Diff. Eqs., 8(11), 1229-1276.

[41] Liu, Y. and Ma, J. (2009), Optimal reinsurance/investment for general insurance models, The Annals of Applied Probability, 19(4), 1495-1528.

[42] Loeffen, R. (2008), On optimality of the barrier strategy in de Finetti's dividend problem for spectrally negative Lévy processes, Ann. Appl. Probab., 18(5), 1669-1680.

[43] Ma, J. and Sun, X. (2003), Ruin probabilities for insurance models involving investments, Scand. Actuarial J., 3, 217-237.

[44] Ma, J. and Yu, Y. (2006), Principle of equivalent utility and universal variable life insurance, Scandinavian Actuarial Journal, 2006(6), 311-337.

[45] Miller, M.H. and Modigliani, F. (1961), Dividend policy, growth, and the valuation of shares, The Journal of Business, 34(4), 411-433.

[46] Mnif, M.
and Sulem, A. (2005), Optimal risk control and dividend policies under excess of loss reinsurance, Stochastics, 77(5), 455-476.

[47] Neuts, M.F. (1981), Matrix-Geometric Solutions in Stochastic Models, Volume 2 of Johns Hopkins Series in the Mathematical Sciences, Johns Hopkins University Press, Baltimore, Md.

[48] Neuts, M.F. (1989), Structured Stochastic Matrices of the M/G/1 Type and Their Applications, Volume 5 of Probability: Pure and Applied, Marcel Dekker Inc., New York.

[49] Neuts, M.F. (1995), Algorithmic Probability, Stochastic Modeling Series, Chapman & Hall, London.

[50] O'Cinneide, C.A. (1990), Characterization of phase-type distributions, Comm. Statist. Stochastic Models, 6, 1-57.

[51] Protter, P. (1990), Stochastic Integration and Differential Equations: A New Approach, Springer-Verlag, Berlin.

[52] Rolski, T., Schmidli, H., Schmidt, V. and Teugels, J. (1998), Stochastic Processes for Insurance and Finance, John Wiley & Sons, New York, Brisbane, Singapore, Toronto.

[53] Sayah, A. (1991), Équations d'Hamilton-Jacobi du premier ordre avec termes intégro-différentiels. I. Unicité des solutions de viscosité, Comm. Partial Diff. Eqs., 16(6-7), 1057-1074.

[54] Schmidli, H. (2001), Optimal proportional reinsurance policies in a dynamic setting, Scand. Actuar. J., 2001(1), 55-68.

[55] Schmidli, H. (2008), Stochastic Control in Insurance, Springer, New York.

[56] Sparre Andersen, E. (1957), On the collective theory of risk in the case of contagion between the claims, Transactions of the XVth International Congress of Actuaries, New York, (II), 219-229.

[57] Soner, H.M. (1986), Optimal control with state-space constraints, I, SIAM J. Control Optim., 24, 552-561.

[58] Yong, J. and Zhou, X. (1999), Stochastic Controls: Hamiltonian Systems and HJB Equations, Springer-Verlag, New York.
Abstract
The main topic of this dissertation is to study a class of optimal dividend and investment problems assuming that the underlying reserve process follows the Sparre Andersen model, that is, the claim frequency is a "renewal" process rather than a standard compound Poisson process. The signature feature of such problems is that the underlying reserve dynamics, even in its simplest form, is no longer Markovian. A large part of the dissertation is based on the idea of backward Markovization, with which we can recast the problem in a Markovian framework with an expanded dimension representing the time elapsed since the last claim. We then investigate the regularity of the value function and validate the dynamic programming principle. As the main result, we prove that the value function is the unique constrained viscosity solution to the associated HJB equation on a cylindrical domain on which the problem is well-defined.

As a first step towards future research, we also explore the possibility of adopting another powerful tool in the study of renewal processes — the phase-type distribution — in a semimartingale paradigm. While the effectiveness of such methods is still largely unknown beyond Lévy processes, we are able to obtain a Lundberg bound on the ruin probability under our combined investment-dividend model, which, to the best of our knowledge, is new.