Close
About
FAQ
Home
Collections
Login
USC Login
Register
0
Selected
Invert selection
Deselect all
Deselect all
Click here to refresh results
Click here to refresh results
USC
/
Digital Library
/
University of Southern California Dissertations and Theses
/
Some topics on continuous time principal-agent problem
(USC Thesis Other)
Some topics on continuous time principal-agent problem
PDF
Download
Share
Open document
Flip pages
Contact Us
Contact Us
Copy asset link
Request this asset
Transcript (if available)
Content
SOME TOPICS ON CONTINUOUS TIME PRINCIPAL-AGENT PROBLEM by Zimu Zhu A Dissertation Presented to the FACULTY OF THE USC GRADUATE SCHOOL UNIVERSITY OF SOUTHERN CALIFORNIA In Partial Fulllment of the Requirements for the Degree DOCTOR OF PHILOSOPHY (APPLIED MATHEMATICS) May 2021 Copyright 2021 Zimu Zhu Dedication To My Family ii Acknowledgments First and foremost, I would like to express my gratitude to my advisor, Professor Jianfeng Zhang for bringing me to USC and accepting me as his Phd student. It is great pleasure to work with him all these years. I will always view Professor Zhang as a model of intelligence and integrity. I would like to thank Professor Jin Ma and Professor Jinchi Lv for being the committee member of my defense committee. I would also like to thank Professor Remigijus Mikulevicius, Professor Sergey Lototsky and Professor Fernando Zapa- tero for being the member of my qualifying exam. Especially, I would like to thank Professor Jin Ma for his great kindness and generosity for inviting me to his home to have a wonderful time on Thanksgiving, Spring Festival, which made me feel I am not alone here. I would like to thank Professor Jaeyoung Sung and Professor Weidong Tian for introducing me many interesting topics on economics and nance. It is really interesting and fun to work with them. I would also like to thank Yejia Xu for having many interesting discussion with him. iii It is my great honor to be a member of USC math department. I would like to thank all the faculties, sta members and friends in the math department at USC. I would like to deliver my special thanks to my friend Qing Xu for his encour- agement through these years. It is really lucky to have a friend like him. Last but not least, I would like to thank my parents and my wife Xiaojing for supporting me throughout all these years. This long journey becomes much easier with them. iv Table of Contents Dedication ii Acknowledgments iii Abstract vii Chapter 1: A Dynamic Principal-Agent Model 1 1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 1.2 The no quitting case . . . . . . . . . . . . . . . . . . . . . . . . . . 6 1.2.1 The model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 1.2.2 Solving the problem for no quitting case . . . . . . . . . . . 8 1.3 The agent is allowed to quit at a stopping time at most once . . . . 20 1.3.1 The model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 1.3.2 Solving the problem if the agent is allowed to quit at most once . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 1.4 The agent is allowed to quit nite times . . . . . . . . . . . . . . . 34 1.4.1 The model when quitting can happen at most n times . . . . 34 1.4.2 Solving the problem that the agent is allowed to quit nite times . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35 1.5 The agent is allowed to quit innite times . . . . . . . . . . . . . . 36 1.5.1 The model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 1.5.2 Solving the problem that the agent is allowed to quit innite times . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 1.6 Some discussion on the discrete case . . . . . . . . . . . . . . . . . 47 1.6.1 The discrete time model . . . . . . . . . . . . . . . . . . . . 47 Chapter 2: A General Model for Continuous Time Principal-Agent Problem Under Hidden Action 50 2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 2.2 The Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 2.2.1 The basic setting . . . . . . . . . . . . . . . . . . . . . . . . 53 2.2.2 Solving the agent's problem . . . . . . . . . . . . . . . . . . 55 v 2.2.3 Solving the principal's problem . . . . . . . . . . . . . . . . 61 2.3 Example 1: A Heuristic Argument . . . . . . . . . . . . . . . . . . . 67 2.4 Example 2: A solvable case . . . . . . . . . . . . . . . . . . . . . . 68 2.4.1 Basic Setting . . . . . . . . . . . . . . . . . . . . . . . . . . 68 2.4.2 Formulating the principal's problem . . . . . . . . . . . . . . 69 2.4.3 DPP for the principal's problem . . . . . . . . . . . . . . . . 72 2.4.4 From DPP to HJB equation . . . . . . . . . . . . . . . . . . 73 2.5 Example 3: Connection to Noah Williams case . . . . . . . . . . . . 76 2.6 Example 4: Connection to Sannikov's case . . . . . . . . . . . . . . 80 2.7 Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85 Chapter 3: Optimal Investing after Retirement Under Time-Varying Risk Capacity Constraint 87 3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87 3.2 The model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93 3.2.1 Investment Opportunity . . . . . . . . . . . . . . . . . . . . 93 3.2.2 Investor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93 3.2.3 An optimal retirement portfolio problem . . . . . . . . . . . 94 3.2.4 An all-safety strategy . . . . . . . . . . . . . . . . . . . . . . 98 3.3 A characterization of the value function . . . . . . . . . . . . . . . . 99 3.4 The optimal strategy for CRRA utility . . . . . . . . . . . . . . . . 100 3.4.1 A baseline model . . . . . . . . . . . . . . . . . . . . . . . . 101 3.4.2 Constrained and unconstrained region . . . . . . . . . . . . 103 3.4.3 Explicit characterization of the value function . . . . . . . . 105 3.4.4 A special case . . . . . . . . . . . . . . . . . . . . . . . . . . 107 3.5 Discussions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110 3.5.1 Optimal Strategy . . . . . . . . . . . . . . . . . . . . . . . . 110 3.5.2 Portfolio value process . . . . . . . . . . . . . . . . . . . . . 112 3.5.3 Alternative strategy . . . . . . . . . . . . . . . . . . . . . . 114 3.5.4 Implications . . . . . . . . . . . . . . . . . . . . . . . . . . . 116 3.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118 Bibliography 144 vi Abstract This essay consists of three projects I worked on during my Phd study. The rst project [67] is a joint work with Prof Jianfeng Zhang on a dynamic prin- cipal agent problem.In this paper, we consider a dynamic Principal-Agent problem on nite time horizon [0;T ]. In constrast to the static problem considered in lit- erature, our model allows the agent to quit during the contract period [0;T ]. If the agent quits at a stopping time up to his choice , the principal will hire a new one , possibly with a dierent type, from the market which is best for her. After solving the principal-agent problem as a bi-level control problem, we characterize the principal's value function as the minimal solution of a innite system of HJB equations. Such solution, although has discontinuity issue at the boundary, will become continuous after face-lifting. We also have some discussion on the case that agent can only quit at some xed time. In this case, the principal's value function can indeed be discontinuous, which can not be eliminated by face-lifting. vii The second project [56] is a joint work with Prof Jaeyoung Sung and Prof Jian- feng Zhang on a unied model for continuous time principal-agent problem.In this paper, we consider a unied continuous time principal-agent model. We assume the contract from the principal consists of both lump-sum payment and contin- uous time payment. Dierent with the existing literature, we assume the pay- to-performance sensitivity (PPS) is not zero. There are two main advantage by assuming the PPS is not zero: First, we can improve the implementablity of the contract. Second, it is possible to increase the principal's optimal utility. We also provide a solvable case which has a closed-form solution. In the end, we compare our model with some existing result, [49], [61]. The third project [57] is a joint work with Prof Weidong Tian on a optimal investing problem after retirement. This paper studies an optimal investing prob- lem for a retiree facing longevity risk and living standard risk. We formulate the investing problem as a portfolio choice problem under a time-varying risk capacity constraint. We derive the optimal investment strategy under the specic condi- tion on model parameters in terms of second-order ordinary dierential equations. We demonstrate an endogenous number that measures the expected value to sus- tain the spending post-retirement. The optimal portfolio is nearly neutral to the stock market movement if the portfolio's value is higher than this number; but, if the portfolio is not worth enough to sustain the retirement spending, the retiree viii actively invests in the stock market for the higher expected return. Besides, we solve an optimal portfolio choice problem under a leverage constraint and show that the optimal portfolio would lose signicantly in stressed markets. This paper shows that the time-varying risk capacity constraint has important implications for asset allocation in retirement. ix Chapter 1 A Dynamic Principal-Agent Model 1.1 Introduction Principal-agent problem, or say contract theory has been one of the major topics in the economics literature for many decades. In the principal-agent problem, the principal(she) oers a contract to the agent (he), which provides incentive for the agent to work for her. The agent's eort process may or may not be observed by the principal. In this paper, we study the case that the agent's eort process is not observed by the principal (which is called moral hazard in literature). Principal- agent problem is a standard Stackelberg game: the rst mover is the principal and the second mover is the agent. The seminar paper in continuous time Principal-Agent problem is Holmstrom and Milgrom [33] in which they consider the payment is lump-sum, i.e. paid only once at terminal time. The main contribution of [33] is to show that the principal-agent problem is easier to study in continuous time than in discrete time case. Later 1 on, Sch attler and Sung [51] and Sung [53][54] extend the idea [33] to the case that agent can also control the volatility. We refer to the survey paper by [55] in this direction. Cvitani c, Wan, and Zhang [18] generalize the result in [33] to non-exponential utility function. A very interesting monograph is by Cvitani c and Zhang [19]. Sannikov [49] develop the case when payment is paid continuously. The main contribution in [49] is to view the agent's optimal utility process as a state control of the principal. Recently, Possama and Touzi [44] study Sannikov's work [49] in a more rigorous way. Williams [61] oer explicit solution when agent's payment consists both continuous payment and lump sum payment with exponen- tial utility function. Principal-agent problem has been extended to a very wide area, include [29] . Despite numerous publications in principal-agent problems, the time inconsistency issue seems missing in the literature: an optimal contract found at time 0 for the period [0;T ] may not remain optimal for a later period [t;T ]. This issue is serious when the agent is allowed to quit or the principal may re the agent at a later time t, as typically the case in practice. Strotz [52] is the rst paper to propose time inconsistency in economics. The time Inconsistency issue has addressed attention in economics and nance for a long time, see [34][35][39]. The time inconsistency in continuous time version was discussed in Zhou [65]. There are two main approach to time inconsistency: (1) Precommitment: solve the optimal problem for the [0;T ] problem, use the optimal control for this problem, ignoring the fact that it may not 2 be optimal for [t;T ] problem. (2) Game approach: It is quite similar to backward induction, we rst solve the optimal control for [t;T ] problem, then use this result to solve the optimal control for [0;t] problem. In our paper, we take the game approach to solve the problem. For simplicity, we assume only the agent may quit (the principal can not re the agent). Being aware of this possible quit of the agent, the principal would have oered a dierent contract from the very beginning. Our goal of this paper is to nd the optimal contract for the principal in the setting where the agent is allowed to quit at any (stopping) time up to the agent's choice. To our best knowledge, however, this cru- cial issue has never been addressed in the literature of principal-agent problems. The contribution for this paper are in two aspects: (i) We develop a dynamic principal-agent model such that this new problem is time-consistent. (ii) We revisit the model in Sannikov [49] in Section 1.2 in the no quitting case, but with a more general admissible set: the principal can have no uniform lower bound. In order to use the HJB theory, we prove the regularity for the principal's value function with the new admissible set. To the best of our knowledge, our paper is the rst one to study time inconsis- tency issue in principal-agent problem. In game theory literature, the concept renegotiation-proof developed by Bernheim and Ray [4], Farrell and Maskin [26] is related to our problem. However, in the renegotiation- proof framework, the two players-the principal and the agent are xed, while in our model, we allow the 3 agent to quit and the principal will hire a new, dierent agent from the market. We start with the discussion when agent is not allowed to quit, which has already been studied in [49] where the principal's admissible set is in a compact set. In our paper, we assume that the principal's admissible set only has uniformly upper bound (without uniformly lower bound). We will show that the regularity for principal's value function still holds (Theorem 1.2.3). This assumption will play an important role when agent is allowed to quit in later sections. We show that the principal's value function is the unique viscosity solution of the corresponding HJB equation The second step of our paper is to consider the case that the agent is allowed to quit at most once, but at a stopping time. If the agent quits, the principal will hire a new agent from the market, possibly with a dierent type , for her best. We show that the agent's optimal utility function can be characterized as a re ected BSDE (RBSDE). We then prove that the principal's value function is the minimal viscosity solution of a system of HJB equations. One important message we get in this part is, with the possible quit, the principal's value function may or may not increase comparing to the case the agent is not allowed to quit. The third case we consider is the quitting can happen multiple times. This will consist of two dierent cases: (1) the quitting can happen nite times. (2) the quitting can happen innite times. When quitting can happen nite times, it will be a direct extension of the case that the agent is allowed to quit only once, we 4 show that the principal's value function V n is also a minimal viscosity solution of the corresponding HJB equation. When the quitting is allowed to happen innite times, we show that (i) the value function V 1 will be indeed the limit of V n .(ii) If the quitting cost has a uniformly lower bound for all time and all the agent, the quitting will happen only nite times. Moreover, we show that V 1 is also the minimal viscosity solution of the corresponding HJB equation. The rest of papers is organized as follows: In Section 1.2, we rst discuss the scenario that the agent is not allowed to quit and show the continuity of the prin- cipal's value function . We study the case that the agent is allowed to quit at a stopping time but at most once in Section 1.3. In Section 1.3, we formulate the agent's problem as an optimal stopping problem and characterize the agent's opti- mal utility process as the solution of a re ected backward stochastic dierential equation (RBSDE). Principal's value function will be characterized as a system of HJB equations which will become continuous at the boundary after face-lifting. In Section 1.4, we use the idea in Section 1.3 to discuss the case that quitting is allowed to happen at stopping times for xed nite times. We formulate the prob- lem when agent is allowed to quit innite times in Section 1.5. We will show that although the agent is allowed to quit innite times, the quitting will happen only nite time if the quitting cost has a uniformly lower bound. In Section 1.6, we discuss the case that the agent is allowed to quit at a xed discrete time , showing that the principal's value function will indeed be discontinuous at the boundary 5 (beyond face-lifting), which is quite dierent from the quitting time is a stopping time (as in Section 1.3,1.4,1.5). 1.2 The no quitting case 1.2.1 The model We start with the basic setting of our model. There are two sides of players: one principal (she) and one agent (he), but there are a continuum family of agents available in the markets, parameterized by some parameter 2 = [;] R + . The principal will hire an agent from the market. In this section, we x the agent with type . LetfW t g ft0g be a standard Brownian Motion on some probability space ( ;F; P ) with its augmented ltrationfF t g t2[0;T ] on some xed interval [0;T ]. During the period [0;T ], the principal will oer the type agent continuous cash ow (contract)f t g f0tTg . The type agent will work for the principal by making eortfa t g f0tTg if the individual rationality constraint (will be stated in (1.3)) holds. Under weak formulation, we may write down the output process: X t =X 0 +W 0 t =X 0 + Z t 0 a 0 s ds +W a t ; 0tT: 6 wherefW a t g f0tTg is the Brownian Motion under P a such that dP a dP =M a T :=e R T 0 atdBt 1 2 R T 0 a 2 t dt We assume the agent has concave and increasing utility functionU A (x) =e x . In this section, we study the case that the agent is not allowed to quit. which will serve as the benchmark model for our paper. We follow the formulation in Sannikov [49]. The problems can be formulated as follows: Agent's problem: For type agent, given contractf t g f0tTg , the agent's control process is eortfa t g f0tTg : then the agent's problem is V A 0 (; [0;T ] ) := sup a [0;T] 2A E a [ Z T 0 U A ( s )ds 1 2 Z T 0 a 2 s ds] (1.1) whereA is dened as A :=ffa t g f0tTg :Ee 1 2 R T 0 a 2 t dt <1;E(M a T ) 2 <1g and E a is the expectation under P a . We shall prove (in Theorem 1.2.1) that the agent's optimal eort for (1.1) exists and is unique under some mild condition, 7 denoted byfa t ()g 0tT . Principal's problem: The principal's problem is V P 0 (; 0;R( 0 ; 0)) := sup [0;T] E a [0;T] () [ Z T 0 a s ( )ds Z T 0 s ds] (1.2) Subject to the individual rationality (IR) condition: V A 0 (; [0;T ] )R(; 0) (1.3) where R(; 0) stands for the minimal utility agent with type will accept at time 0. The admissible set for will be discussed in Denition 1.2.2. Correspondingly, we denote V P 0 (;t;R(;t)) the value function of the dynamic problem starting at time t with agent's individual rationality R(;t). 1.2.2 Solving the problem for no quitting case Although the static problem (1.1)-(1.2) has been used and studied in literature (as in [49]), we still show details in this subsection for two reasons: First, The admis- sible set for cash owf t g f0tTg in literature is usually assumed to be uniformly bounded in order to apply the HJB equation theory, however, the continuity of the value function when is not uniformly bounded (which will be stated in Def- inition 1.2.2) will be used in our paper and hasn't been proved (will be proved 8 in Theorem 1.2.3). Second, when agent is allowed to quit (which is the main object of our paper ), the assumption that is not uniformly bounded will play an important role. As usual, we rst solve the agent problem: Theorem 1.2.1. Assume the xed process t is bounded, then forfa t g f0tTg 2A, the agents' problem (1.1) has a unique solution: a t () = Z t wherefZ t g f0tTg satises the following backward stochastic dierential equation: Y t = Z T t U A ( s )ds + Z T t 2 Z 2 s ds Z T t Z s dB s and the square integrable processfZ t g f0tTg satises: there exists small enough > 0 E Z [e R T 0 Z 2 t dt ]<1 (1.4) 9 Proof. For xed process , we rst write down the dynamic agent's remaining utility process for given processfa t g f0tTg : Y a t = E a [t;T] t [ Z T t U A ( s )ds 1 2 Z T t a 2 s ds] = Z T t U A ( s )ds 1 2 Z T t a 2 s ds Z T t Z s dB a s = Z T t U A ( s )ds + Z T t 1 2 a 2 s +a s Z s ds Z T t Z s dB s for some square integrable processfZ t g 0tT . Here the second equality comes from martingale representation theorem and the third equality is from Girsanov transformation. To get the optimal a (), we use the comparison principal of BSDE and get : a t = argmax at [ 1 2 a 2 t +a t Z t ] =Z t which will also yield the new quadratic BSDE: Y t = Z T t U A ( s )ds + Z T t 2 Z 2 s ds Z T t Z s dB s (1.5) = Z T t U A ( s )ds Z T t 2 Z 2 s ds Z T t Z s dB Z s (1.6) 10 Since is bounded, the estimate in (1.4) is a direct result of Theorem 7.2.1 and Theorem 7.3.2 in Zhang [66] Now, in order to solve the principal's problem. We rewrite (1.5) into forward form: Y t = Y 0 Z t 0 U A ( s )ds Z t 0 2 Z 2 s ds + Z t 0 Z s dB s = Y 0 Z t 0 U A ( s )ds + Z t 0 2 Z 2 s ds + Z t 0 Z s dB Z s with the constraint Y T = 0. We then give the following denition: Denition 1.2.2. DenoteA t;x by the admissible control set for process (;Z) such that • C() s C 0 ;8s2 [t;T ], where C 0 is a xed constant and C() is some constant may depend on . • there exists some > 0 such that E Z [e R T 0 Z 2 t dt ]<1 and X t;x;;Z T = 0, where X t;x;;Z s =x Z s t U A ( r )dr + Z s t 2 Z 2 r dr + Z s t Z r dB Z r ; tsT: (1.7) Remark: As dened in Denition 1.2.2, is required to be uniformly bounded from above, bounded from below with no uniform lower bound. We shall 11 use this key assumption to show the continuity of the value function V P 0 (;t;x) in Theorem 1.2.3. The integrability for Z t comes from (1.4). Moreover, since C 0 in Denition 1.2.2. it is clear that A t;x =; if x >e C 0 (Tt) It is clear now that the principal's problem can be charaterized as the following control problem: V P 0 (;t;x) := sup (;Z)2A t;x J 0 (;t;x;;Z) where J 0 (;t;x;;Z) =E Z [ Z T t Z s ds Z T t s ds] Next, we show that the value functionV P 0 (;t;x) is monotone inx and continuous in x. Lemma 1.2.1. The value function V P 0 (;t;x) is decreasing in x. Proof. Fix x 1 <x 2 < 0, for arbitrary (;Z)2A t;x 2 , it is easy to see that (~ ;Z)2 A t;x 1 , where ~ s =ln[e s + x 2 x 1 Tt ]; s2 [t;T ]: 12 clearly ~ s < s ;tsT . Therefore, J 0 (;t;x 2 ;;Z)<J 0 (;t;x 1 ; ~ ;Z)V P 0 (;t;x 1 ) Since (;Z) is arbitrary, this implies that V P 0 (;t;x 2 )V P 0 (;t;x 1 ). The next theorem shows that V P 0 (;t;x) is continuous in x. Theorem 1.2.3. The value function V P 0 ( 0 ;t;x) is continuous in x Proof. Step 1:We rst show that V P 0 (;t;x) = lim #0 V P 0 (;t;x +). For simplicity, we prove the result for t = 0. Since V is decreasing by the above lemma, we only need to show:8(;Z)2A 0;x ,8> 0,9> 0. such that J 0 (; 0;x;;Z)V P 0 (;t;x +) + The construction is as follows: set = infft :X x+;;Z t =L t g = infft :X x;;Z t =L t g 13 where L t =f(x;t) : x =e C 0 (Tt);t2 [0;T ]g. Clearly, and ! as ! 0. Now set (~ ; ~ Z )2A 0;x+ as ~ t = t 1 ft g +C 0 1 ft> g ~ Z t = Z t 1 ft g + 0 1 ft> g Note that t =C 0 and Z t = 0 onftg, then we have J 0 (; 0;x;;Z)V P 0 (; 0;x +)J 0 (; 0;x;;Z)J 0 (; 0;x +; ~ ; ~ Z ) = E Z [ Z 0 Z s s dsC 0 (T)]E ~ Z [ Z 0 Z s s dsC 0 (T )] = E Z [ Z 0 Z s s dsC 0 (T)]E Z [ Z 0 Z s s dsC 0 (T )] = E Z [ Z Z s s ds +C 0 ( )] converges to 0 by dominated convergence theorem where we use the fact that (;Z)2A 0;x andC 1 e +C 2 for some positive constants C 1 and C 2 . Step 2: We then prove that lim #0 V P 0 (;t;x) =V P 0 (;t;x) Again, we prove the result for t = 0. We want to show that8 > 0;8( ;Z )2 A 0;x , choose small enough, we have J 0 (; 0;x; ;Z )V P 0 (;t;x) + 14 set = infft :X x; ;Z t =L t g = infft :X x; ;Z t =L t g and also set (~ ; ~ Z )2A 0;x : ~ t = t 1 ft< g +C 0 1 ft> g ~ Z t = Z t 1 ft< g + 0 1 ft> g Then we have J 0 (; 0;x; ;Z )J 0 (; 0;x; ~ ; ~ Z ) = E Z [ Z T 0 Z s s ds]E ~ Z [ Z 0 Z s s dsC 0 (T )] = E Z [ Z T 0 Z s s ds]E Z [ Z 0 Z s s dsC 0 (T )] = E Z [ Z (Z s s +C 0 )ds] (1.8) In order to get the uniform estimate for the above term, we observe that L = L + Z e C 0 ds (1.9) X x; ;Z = X x; ;Z + Z e s + 2 jZ s j 2 ds + Z Z s dB Z s (1.10) 15 The dierence of (1.9) and (1.10) imply that 0 = + Z e s + 2 jZ s j 2 e C 0 ds + Z Z s dB Z s + Z e C 0 (C 0 s ) + 2 jZ s j 2 ds + Z Z s dB Z s (1.11) by intermediate value theorem and use the assumption that s C 0 . Now we have E Z [ Z C 0 s ds]e C 0 E Z Z jZ s jds p 2T (1.12) by (1.11) and Cauchy-Schwarz inequality. Now the estimate in (1.8) tends to 0 uniformly in and the desired result follows We then state an estimate of the value function: Lemma 1.2.2. The value function V P 0 (;t;x) has the following bound: for all > 0, (Tt)ln( x Tt )V P 0 (;t;x)x + ( 2 ln() 1)(Tt) (1.13) 16 Moreover, V P 0 (;t;e C 0 (Tt)) =C 0 (Tt) (1.14) V P 0 (;T;x) = 0 (1.15) Proof. First,by choosing Z s 0; s =ln( x Tt );tsT we got the inequality for one side: V P 0 (;t;x) (Tt)ln( x Tt ) On the other hand, for (;Z)2A t;x , we have 0 =X t;x;;Z T =x Z T t U A ( r )dr + Z T t 2 Z 2 r dr + Z T t Z r dB Z r 17 We got E Z [ Z T t Z s ds Z T t s ds] = E Z [ Z T t Z s ds Z T t s ds Z T t Z 2 s 2 Z T t e s dsx] = x +E Z [ Z T t (Z s Z 2 s 2 ds + Z T t ( s e s )ds] x +E Z [ Z T t ( 2 ln() 1)ds] = x + ( 2 ln() 1)(Tt) the other side of (1.13) follows. For the boundary condition (1.14), it is clear that (C 0 ; 0) is the only admissible control for A t;x when x =e C 0 (Tt) by compari- son principle of quadratic BSDE. For boundary condition (1.15), rst send t!T , then send ! 0 in (1.13), the desired result follows. Remark: We can only study the boundary condition (1.15) instead of V P 0 ( 0 ;T;x) since A T;x is empty except x = 0. For general utility function U A (x), the estimate in (1.13) can be derived similarly. Now, we characterize the PDE for the value function V P 0 ( 0 ;t;x) and close this section. 18 Theorem 1.2.4. The value functionV P 0 ( 0 ;t;x) is the unique continuous viscosity solution of the following HJB equation: 8 > > > > > > > > < > > > > > > > > : @u @t + sup C 0 ;z2R [ 1 2 z 2@ 2 u @x 2 + ( 0 2 z 2 U A ()) @u @x + 0 z] = 0 xe C 0 (Tt): u( 0 ;t;e C 0 (Tt)) =C 0 (Tt): 0tT u( 0 ;T;x) = 0; x 0: (1.16) and the linear growth condition (1.13). Proof. Since we have show the continuity ofV P 0 ( 0 ;t;x), and the boundary condi- tion in Lemma 1.2.2, the HJB equation part follows from Dynamic Programming Principle in standard literature (for simplicity, at t = 0): V P 0 (; 0;x) = sup ;Z2A 0;x E Z [ Z t 0 Z s s ds +V P 0 (;t;X 0;x;;Z t )] The uniqueness of the viscosity solution is due to the linear growth ofx as implied in Lemma 1.2.2. 19 1.3 The agent is allowed to quit at a stopping time at most once 1.3.1 The model In this section, we consider the case that the agent is allowed to quit at a stopping time at most once. The agent will quit at some time if his market value (after minus the quitting cost) is above the remaining utility of his current contract. Once the agent quits, the principal will hire a new agent from the market at her best. We formulate the problem as follows: Agent's problem: given contract [0;T ] , the type 0 agent's control is both the stopping time and eort process a [0;T ] , then the agents' problem is V A 1 ( 0 ; [0;T ] ) := sup sup a2A E a [ Z 0 U A ( s )ds Z 0 1 2 0 a 2 s ds +L( 0 ;)1 f<Tg ] (1.17) where L( 0 ;t) :=R( 0 ;t)C A ( 0 ;t) is the market value for type 0 agent at time t with quitting cost C A ( 0 ;t) 0. We will show below that under mild condition, the optimal control () and a () exists and are unique. Principal's problem: Principal's control is the contract [0;T ] . If the agent quits, 20 the principal will hire the 'best' agent in the market to work for her. Then the principal's problem is V P 1 ( 0 ; 0;R( 0 ; 0)) := sup E a () [ Z () 0 a s ()ds Z () 0 s ds +1 f ()<Tg sup V P 0 (; ();R(; ())] (1.18) subject to V A 1 ( 0 ; [0;T ] )R( 0 ; 0) Correspondingly, we denote V P 1 (;t;R( 0 ;t)) the value function of the dynamic problem starting at time t with agent's individual rationality R( 0 ;t). 1.3.2 Solving the problem if the agent is allowed to quit at most once As in the no quitting case, we rst consider the agent's problem. Theorem 1.3.1. Assume the xed process t is bounded, then forfa t g f0tTg 2A, the agents' problem (1.17) has a unique solution: a t () = 0 Z t and () = infft :Y t =L( 0 ;t)g where the process (Y t ;Z t ) f0tTg together with some increasing 21 processfK t g 0tT with K 0 = 0 are the solution of the following re ected backward stochastic dierential equation (RBSDE): 8 > > > > > > > > < > > > > > > > > : Y t = 0 + R T t 0 2 Z 2 s ds + R T t U A ( s )ds R T t Z s dB s +K T K t Y t L( 0 ;t); R T 0 (Y t L( 0 ;t))dK t = 0 (1.19) and the square integrable processfZ t g f0tTg satises: there exists small enough > 0 E Z [e R T 0 Z 2 t dt ]<1 (1.20) Proof. We rst solve the problem: x and , we rst solve the problem: Y t = sup a2A E a [ Z t U A ( s )ds Z t 1 2 0 a 2 s ds +L( 0 ;)1 f<Tg ] Similar to the proof in Theorem 1.2.1, we get a t () = 0 Z t , wherefZ t g f0tTg satises the following BSDE: Y t =L( 0 ;)1 f<Tg + Z t 0 2 Z 2 s ds + Z t U A ( s )ds Z t Z s dB s 22 Then the agent's optimal utility is Y 0 = sup Y 0 . This standard optimal stopping problem can be characterized as the following re ected BSDE: 8 > > > > > > > > < > > > > > > > > : Y t = 0 + R T t 0 2 Z 2 s ds + R T t U A ( s )ds R T t Z s dB s +K T K t Y t L( 0 ;t) R T 0 (Y t L( 0 ;t))dK t = 0 (1.21) and the optimal stopping time is determined as () = infft :Y t =L( 0 ;t)g. The estimate in (1.20) is similar to the proof in Theorem 1.2.1. As in the deterministic quitting time case, we want to write down agent's remaining utility process into forward form, recall in Denition 1.2.2, for (;Z)2 A 0 t;x , dene ;Z as ;Z :=inffst :X t;x;;Z s =L( 0 ;s)g It is natural to consider the following forward problem: ~ V P 1 ( 0 ;t;x) = sup (;Z)2A 0 t;x J 1 ( 0 ;t;x;;Z) (1.22) 23 where J 1 ( 0 ;t;x;;Z) :=E 0 Z [ Z ;Z t 0 Z s ds Z ;Z t s ds +1 f ;Z <Tg sup V P 0 (; ;Z ;R(; ;Z )] Our rst task is to answer the following question: Is the forward problem (1.22) equivalent to the original problem (1.17)-(1.3.3)? The answer is conrmative. Lemma 1.3.1. Assume L( 0 ;t) is dierentiable in t and @L( 0 ;t) @t e C 0 , then V P 1 ( 0 ;t;x) = ~ V P 1 ( 0 ;t;x) Proof. Since, each control in problem (1.17)-(1.3.3) will induce a corresponding control in the forward problem 1.22, therefore V P 1 ( 0 ;t;x) ~ V P 1 ( 0 ;t;x) On the other hand, we will show that by constructing proper,Z andK on [;T ], we can guarantee the following FBSDE has a solution: 8 > > > > > > > > > > > > > < > > > > > > > > > > > > > : Y = 0 + R T 0 2 Z 2 s ds + R T U A ( s )ds R T Z s dB s +K T K Y =L( 0 ;) Y s L( 0 ;s);s2 [;T ] R T [Y s L( 0 ;s)]dK s = 0 s2 [;T ] (1.23) 24 First, choosing Z 0. Moreover, we want to construct the solution that Y s = L( 0 ;s);s2 [;T ] (therefore, skorohod condition holds). Then this will require: L( 0 ;s) =L( 0 ;T ) + Z T s U A ( r )dr +K T K s which is equivalent to Z T s U A ( r )dr Z T s K 0 r dr = Z T s @L( 0 ;r) @r dr ChoosingU A ( r ) = ( @L( 0 ;r) @r ) + and K 0 r = (L 0 r ) , the desired the result follows. Next we show that V P 1 ( 0 ;t;x) is decreasing in x: Lemma 1.3.2. Assume @R(;t) @t C 1 ;8t;8, then we have V P 1 ( 0 ;t;x ) V P 1 ( 0 ;t;x) 8> 0 Proof. For simplicity, we prove for t = 0.8(;Z)2A 0 0;x , denote = infft 0 :X 0;x;;Z t =L( 0 ;t)g^T ~ = infft 0 :X x;~ ; ~ Z t =L( 0 ;t)g^T = infft 0 :X x;~ ; ~ Z t =X x;;Z t g^T 25 where ~ s = ( s n)1 fs ; ~ g + s 1 fs> ; ~ g + ( s n)1 fs~ ;~ < g ln( R( 1 ; ~ ) T ~ )1 fs>~ ;~ < g ~ Z s = Z s 1 f ~ g +Z s 1 fs~ ;~ < g + 0 1 fs>~ ;~ < g for some arbitrary 1 . Next, we give an estimate of . Since X 0;x;;Z t = x + Z t 0 e s + 0 2 Z 2 s ds + Z t 0 Z s dB 0 Z s X 0;x;n;Z t = x + Z t 0 e s+n + 0 2 Z 2 s ds + Z t 0 Z s dB 0 Z s The dierence of the above equation will lead to 0 + (1e n ) Z 0 e s ds + (1e n )e C 0 This will imply that e C 0 e n 1 (1.24) 26 which tends to 0 as n tends to innity. Now we have J 1 ( 0 ; 0;x; ~ ; ~ Z) = E 0 ~ Z [( Z 0 0 Z s ( s n)ds + Z 0 Z s s ds + sup V P 0 (;;R(;)))1 f ~ g + ( Z ~ 0 0 Z s ( s n))ds + Z T ~ ln( R( 1 ; ~ ) T ~ )ds)1 f~ < g ] E 0 Z [ Z 0 ( 0 Z s s )ds + sup V P 0 (;;R(;))]1 f ~ g + E 0 ~ Z [ Z ~ 0 ( 0 Z s s )ds + Z T ~ C 1 ds)1 f~ < g ] since 1 f ~ g ! 1, by dominated convergence theorem, we get J 1 ( 0 ; 0;x; ~ ; ~ Z)J 1 ( 0 ; 0;x;;Z) Since (;Z) is arbitrary, the desired result follows. Next, we show that the value function is continuous. Lemma 1.3.3. The value function V P 1 ( 0 ;t;x) is continuous. that is: lim !0 V P 1 ( 0 ;t;x)V P 1 ( 0 ;t;x) and V P 1 ( 0 ;t;x) lim !0 V P 1 ( 0 ;t;x +) 27 Proof. For simplicity, we will show the result fort = 0. We will show the result by proving for arbitrary, (;Z)2A 0 0;x , we could nd (~ ; ~ Z)2A 0 0;x+ , such that E 0 Z [ Z ;Z 0 0 Z s s ds + 1 f ;Z <Tg sup V P 0 (; ;Z ;R(; ;Z )] E 0 ~ Z [ Z ~ ; ~ Z 0 0 ~ Z s ~ s ds + 1 f ~ ; ~ Z <Tg sup V P 0 (; ~ ; ~ Z ;R(; ~ ; ~ Z )]() (1.25) where () is independent of (;Z). We x some positive constant (which will be determined later) and recall L t = f(x;t) :x =e C 0 (Tt);t2 [0;T ]g, we dene a couple of stopping times: 1 = infft 0 :X 0;x;;Z t =L( 0 ;t)g^T = infft 0 :X 0;x;;Z t =L t g^T = infft 1 :fX 1 ;L(t; 1 )+;C 0 ;1 t =L( 0 ;t)gg^ ( 1 +) = infft 0 :X ;X 0;x;;Z ;;Z t =L t g 28 Then we dene (~ ; ~ Z) as follows: ~ s = s 1 fs 1 ^ g +C 0 1 fs ; < 1 g + (ln( X 0;x;;Z 1 T 1 ))1 fs 1 ; 1 < ; 1 >Tg + C 0 1 f 1 s; 1 < ; 1 <Tg + sup V 0 (; ;R( ;))1 fs; 1 < ; 1 <T;< 1 +g + (ln( X 0;x;;Z T ))1 fs; 1 < ; 1 <T; 1 +g ~ Z s = Z s 1 fs 1 ^ g + 0 1 fs ; < 1 g + 0 1 fs 1 ; 1 < ; 1 >Tg + 1 1 f 1 s; 1 < ; 1 <Tg + sup V 0 (; ;R( ;))1 fs; 1 < ; 1 <T;< 1 +g + 0 1 fs; 1 < ; 1 <T; 1 +g Now, we spit the estimate in (1.25) into 6 parts and show that their dierence are smaller than the order of : the rst term of (1.25) is E 0 Z [ Z 1 ^ 0 0 Z s s ds]E 0 ~ Z [ Z 1 ^ 0 0 ~ Z s s ds] = E 0 Z [ Z 1 ^ 0 0 Z s s ds]E 0 Z [ Z 1 ^ 0 0 Z s s ds] = 0 The second term of (1.25) will be bounded by E 0 Z [ Z T 0 Z s s ds1 f < 1 g ]E 0 ~ Z [ Z T C 0 ds1 f < 1 g ] = E 0 Z [ Z T 0 Z s s ds1 f < 1 g ]E 0 Z [ Z T C 0 ds1 f < 1 g ] = E 0 Z [ Z ( 0 Z s s +C 0 )ds1 f < 1 ;< 1 g ] +E 0 Z [ Z T ( 0 Z s s +C 0 )ds1 f < 1 ;> 1 g] ( p ) +E 0 Z [ Z T ( 0 Z s s +C 0 )ds1 f < 1 ;> 1 g] ( p ) +() 29 by estimate (1.12) and P 0 Z ( > 1 )(). The third term of (1.25) is bounded by E 0 Z [ Z T 1 0 Z s s ds1 f 1 < ; 1 >Tg ]E 0 ~ Z [ Z T 1 ln( X 0;x;;Z 1 T 1 )ds1 f 1 < ; 1 >Tg ] () since V P 0 (;T;x) = 0. The fourth and fth term of (1.25) are bounded by E 0 Z sup V 0 (; 1 ;R( 1 ;))E 0 ~ Z [ Z 1 1C 0 ds + sup V 0 (; ;R( ;))1 f 1 < ; 1 <T;< 1 +g ] E 0 Z sup V 0 (; 1 ;R( 1 ;))E 0 ~ Z sup V 0 (; ;R( ;)) + (C 0 1) () by the uniform continuity of V 0 . Last, the fourth and sixth term of (1.25) are bounded by E 0 Z [ Z T 1 0 Z s s ds1 f 1 < ; 1 <T;> 1 +g ]E 0 ~ Z [( Z 1 1C 0 ds + Z T ln( ln(X 0;x;;Z ) T ds)1 f 1 < ; 1 <T;> 1 +g ] CP 0 Z ( > 1 +) ( p ) 30 choose properly, we get the desired result. Next we give the denition of minimal solution and then characterize the valu function V P 1 ( 0 ;t;x). Denition 1.3.2. Assume D is an open set, we call u is a minimal viscosity solution ofLu = 0 with boundary conditionf on D: if for any continuous viscosity solution of the following equation: 8 > > > < > > > : L~ u(t;x) = 0 for (t;x)2D ~ u(t;x)f(t;x) for (t;x)2@D (1.26) we have u(t;x) ~ u(t;x);8(t;x)2 D. In the end of this part, we state the following result: Theorem 1.3.3. The value function V P 1 ( 0 ;t;x) is minimal solution of the fol- lowing HJB equation in D, where D =f(t;x) : 0tT;L( 0 ;t)<x<L t g 8 > > > > > > > > > > > > > > > > > > < > > > > > > > > > > > > > > > > > > : @u @t + sup C 0 ;z2R [ 1 2 z 2@ 2 u @x 2 + ( 0 2 z 2 U A ()) @u @x + 0 z] = 0; L( 0 ;t)xe C 0 (Tt): u( 0 ;t;e C 0 (Tt)) =C 0 (Tt): 0tT u( 0 ;t;L( 0 ;t)) sup V P 0 (;t;R(;t)) 0tT u( 0 ;T;x) = 0; L( 0 ;T )x 0: (1.27) 31 Remark: Since, we prove V P 1 ( 0 ;t;x) is decreasing in x in Lemma 1.3.2, lim x#L( 0 ;t) V P 1 ( 0 ;t;x) exists, however, we only know that V P 1 ( 0 ;t;L( 0 ;t)) sup V P 0 (;t;R(;t)), which is the motivation of the characterization by minimal solution. Proof. Assume Q( 0 ;t;x) is a continuous viscosity solution on D of LQ = 0 For simplicity, we prove the result for t = 0. It suces to show that Q( 0 ; 0;x) sup C 0 ;Z E 0 Z [Q( 0 ;;X 0;x;;Z ) + Z 0 0 Z s s ds] Denote Q( 0 ; 0;x) := sup C 0 ;Z E 0 Z [Q( 0 ;;X 0;x;;Z ) + Z 0 0 Z s s ds] . Now, set Q n ( 0 ; 0;x) := sup nC 0 ;Z E 0 Z [Q( 0 ;;X 0;x;;Z ) + Z 0 0 Z s s ds] 32 Then Q n are continuous on D which satisesL n Q n = 0 where L n u = @u @t + sup nC 0 ;z2R [ 1 2 z 2 @ 2 u @x 2 + ( 0 2 z 2 U A ()) @u @x + 0 z] Note thatL n L and Q n = Q on @D. Therefore we getLQ n 0. Now, by comparison principle, we get QQ n on D. To this end, we show that lim n! Q n =Q The '' part is trivial. For the part, it will be straightforward once we realize that E 0 Z [Q( 0 ;;X 0;x;;Z ) + Z 0 0 Z s s ds] lim n!1 Q n This will be true if we assume that each is bounded (although there might be no uniform bound for it). Remark: Although the agent is allowed to quit, it is possible thatV 1 ( 0 ;t;x)> V 0 ( 0 ;t;x). However, in the case that the market only has one type agent, i.e. 0 = =, then we will have V 1 ( 0 ;t;x)<V 0 ( 0 ;t;x). 33 1.4 The agent is allowed to quit nite times 1.4.1 The model when quitting can happen at mostn times In this section, we consider the case that the quitting can happen at mostn times. The analysis in this section will be very much similar to Section 1.3. The agent's problem will be exactly the same while the principal's problem will have some dierence, as illustrated below. Agent's problem: given contract [0;T ] , the type 0 agent's control is both the stopping time and eort process a [0;T ] , then the agents' problem is V A 1 ( 0 ; [0;T ] ) := sup sup a2A E a [ Z 0 U A ( s )ds Z 0 1 2 0 a 2 s ds +L( 0 ;)1 f<Tg ] (1.28) where L( 0 ;t) :=R( 0 ;t)C A ( 0 ;t) is the market value for type 0 agent at time t with quitting cost C A ( 0 ;t) 0. We will show below that under mild condition, the optimal control () and a () exists and are unique. Principal's problem: Principal's control is the contract [0;T ] . If the agent quits, the principal will hire the 'best' agent in the market to work for her. Then the principal's problem is V P n ( 0 ; 0;R( 0 ; 0)) := sup E a () [ Z () 0 a s ()ds Z () 0 s ds +1 f ()<Tg sup V P n1 (; ();R(; ())] (1.29) 34 subject to V A 1 ( 0 ; [0;T ] )R( 0 ; 0) Correspondingly, we denote V P n (;t;R( 0 ;t)) the value function of the dynamic problem starting at time t with agent's individual rationality R( 0 ;t). 1.4.2 Solving the problem that the agent is allowed to quit nite times Similar to Section 1.3, we rst solve the agent's problem. The result has already been stated in Theorem 1.3.1. Therefore, we may solve the principal's problem similar to Theorem 1.3.3: Theorem 1.4.1. The value function V P n ( 0 ;t;x) is minimal solution of the fol- lowing HJB equation in D, where D =f(t;x) : 0tT;L( 0 ;t)<x<L t g 8 > > > > > > > > > > > > > > > > > > < > > > > > > > > > > > > > > > > > > : @u @t + sup C 0 ;z2R [ 1 2 z 2@ 2 u @x 2 + ( 0 2 z 2 U A ()) @u @x + 0 z] = 0 ;L( 0 ;t)xe C 0 (Tt): u( 0 ;t;e C 0 (Tt)) =C 0 (Tt): 0tT u( 0 ;t;L( 0 ;t)) sup V P n1 (;t;R(;t)) 0tT u( 0 ;T;x) = 0; L( 0 ;T )x 0: (1.30) 35 The proof for this theorem is similar to Theorem 1.3.3. 1.5 The agent is allowed to quit innite times 1.5.1 The model By dynamic programming principal. It is easy to show that forn 1 and stopping times 0 = 0 1 2 ::: n T , we have V P n ( 0 ; 0;R( 0 ; 0)) = sup ;Z sup ( i ; i ) 0in1 E Z [ Z 1 0 0 Z s s ds + Z 2 1 1 Z s s ds + ::: Z T n n Z s s ds] subject to: X t =R( i ; i ) + i 2 Z t i Z 2 s ds + Z t i e s ds + Z t i Z s dB Z s ; i t i+1 ; on i <T; 0in 1 X t >L( i ;t) i t< i+1 ; 0in 1 X i+1 =L( i ; i+1 )1 f i+1 <Tg ; 0in 1 X t =R( n ; n ) + n 2 Z t n Z 2 s ds + Z t n e s ds + Z t n Z s dB Z s ; n tT;on n <T X T = 0 36 Motivated by the denition of V n , we dene V 1 as follows: V P 1 ( 0 ; 0;R( 0 ; 0)) := sup ;Z sup ( i ; i ) E Z [ Z 1 0 0 Z s s ds + Z 2 1 1 Z s s ds + ::: Z T n1 n1 Z s s ds + ] subject to: X t =R( i ; i ) + i 2 Z t i Z 2 s ds + Z t i e s ds + Z t i Z s dB Z s ; i t i+1 ; on i <T;8i X t >L( i ;t) i t< i+1 ;8i X i+1 =L( i ; i+1 )1 f i+1 <Tg 8i (1.31) 1.5.2 Solving the problem that the agent is allowed to quit innite times We rst state the following theorem, it imply that the quitting will only take place nite times although it is not required so. Theorem 1.5.1. If 0 @R @t (;t)C 1 ;C A (;t)C 2 > 0; for all and t, then for the admissible set of V 1 , we have P Z (\ 1 i=1 f i <Tg) = 0 37 Proof. By the constraint process, we have L( i+1 ; i )1 f i+1 <Tg =R( i ; i ) + i 2 Z i+1 i Z 2 s ds + Z i+1 i e s ds + Z i+1 i Z s dB Z s take conditional expectation with i under P Z , and observe that Z 2 s and e s are nonnegative, we got: E i Z [R( i+1 ; i )1 f i+1 <Tg C A ( i+1 ; i )1 f i+1 <Tg jF i ]1 f i <Tg R( i ; i )1 f i <Tg rearrange the term, we got: E i Z [(R( i+1 ; i )R( i ; i ))1 f i+1 <Tg R( i+1 ; i )1 f i <T = i+1 g jF i ] E i Z [C A ( i+1 ; i )1 f i+1 <Tg jF i ] Now, by the using assumption intermediate value theorem and @R @t (;t) C 1 , we have E i Z [C 1 ( i+1 i )1 f i+1 <Tg +C 1 T 1 f i <T = i+1 g ]E i Z [C 2 1 f i+1 <Tg ] 38 Sum all the term i from 1 to n, we get E i Z [C 1 T +C 1 T ] C 2 n X i=1 E i Z [1 f i+1 <Tg ] C 2 nP i Z ( i+1 <T ) The desired result followed by Borel Cantelli Theorem. Next, we prove the following convergence theorem for V 1 Theorem 1.5.2. lim n!1 V n ( 0 ;t;x) =V 1 ( 0 ;t;x) Proof. Again, we prove the result at t = 0. Step 1: We rst prove that lim n!1 V n ( 0 ; 0;x)V 1 ( 0 ; 0;x), to show this: choose arbitrary control for innity times: (;Z; i ; i )2 A n 0;x , we construct the control for quitting n times (~ ; ~ Z; ~ i ;; ~ i )2A 1 0;x as follows: (~ ; ~ Z; ~ i ; ~ i ) = (;Z; i ; i ); 0in 1 and choose ~ n t ln( R( n ; n ) T n ); n tT ~ Z n t 0; n tT 39 Then we have J 1 (0;x; ~ ; ~ Z; ~ ; ~ )V n+1 ( 0 ; 0;x)J 1 (0;x; ~ ; ~ Z; ~ ; ~ )J n+1 (0;x;;Z;;) = E Z [ Z 1 0 0 Z s s ds + Z 2 1 1 Z s s ds + Z n n1 n1 Z s s ds + ] E ~ Z [ Z ~ 1 0 0 ~ Z s ~ s ds + Z ~ 2 ~ 1 1 ~ Z s ~ s ds + + Z T ~ n n ~ Z n s ~ n s ds] = E[[ Z 1 0 0 Z s s ds + Z 2 1 1 Z s s ds + Z n n1 n1 Z s s ds + ]M Z T (1 fn<Tg + 1 fn=Tg )] E[[ Z ~ 1 0 0 ~ Z s ~ s ds + Z ~ 2 ~ 1 1 ~ Z s ~ s ds + + Z T ~ n n ~ Z n s ~ n s ds]M ~ Z T ] = E[[ Z 1 0 0 Z s s ds + Z 2 1 1 Z s s ds + Z n n1 n1 Z s s ds + ]M Z T 1 fn<Tg ] E[[ Z ~ 1 0 0 ~ Z s ~ s ds + Z ~ 2 ~ 1 1 ~ Z s ~ s ds + + Z T ~ n n ~ Z n s ~ n s ds]M ~ Z T 1 fn<Tg ] = E[[ Z n+1 n n Z s s ds + Z n+2 n+1 n+1 Z s s ds + ]M Z T 1 fn<Tg ] E[[ Z T ~ n n ~ Z n s ~ n s ds]M ~ Z T 1 fn<Tg ] = E[[ Z n+1 n n Z s s ds + Z n+2 n+1 n+1 Z s s ds + ]M Z T 1 fn<Tg ] E[ln( R( n ; n ) T n )(T n )M ~ Z T 1 fn<Tg ] Note that all the term satises the integrablity condition, therefore both term will tends to 0 by DCT. 40 Step 2: We show that V 1 ( 0 ;t;x) lim n!1 V n ( 0 ;t;x). For arbitrary ( n ;Z n ; n i ; n )2A n 0;x , we construct (~ ; ~ Z; ~ ; ~ )2A 1 0;x as follows: (~ ; ~ Z; ~ ; ~ ) = (;Z; n i ;) for 0in and we construct ~ n =T; ~ n = n , and if R( n ; n ) +C 1 (T n )) 0, choose ~ Z n t 0 n tT ~ n t ln(C 1 ); n t C 1 n e C 0 TR( n ; n ) e C 0 C 1 ~ n t C 0 ; C 1 n e C 0 TR( n ; n ) e C 0 C 1 tT (so that X t L( n ;t) on n tT ). If R( n ; n ) +C 1 (T n ))< 0, choose ~ n t ln( R( n ; n ) T n ); n tT ~ Z n t 0; n tT 41 (so that X t L( n ;t) on n tT ). Then we have J n (0;x;;Z;;)V 1 ( 0 ; 0;x)J n (0;x;;Z;;)J 1 (0;x; ~ ; ~ Z; ~ ; ~ ) = E Z n [ Z 1 0 0 Z n t n t dt + Z 2 1 1 Z n t n t dt + + Z T n n Z n t n t dt] E ~ ~ Z n [ Z 1 0 ~ 0 Z n t n t dt + Z 2 1 ~ 1 Z n t n t dt + + Z T n ~ n ~ Z n t ~ n t dt] = E Z n [ Z T n n Z n t n t dt1 f n n <Tg ]E ~ Z n [ Z T n n ~ t dt1 f n n <Tg ] (1.32) For the second term in (1.32). No matterR( n ; n ) +C 1 (T n ) greater or smaller than 0, we have E ~ ~ Z n [ Z T n n ~ t dt1 fn<Tg ] =E Z n [ Z T n n ~ t dt1 f n n <Tg ] ifR( n ; n )+C 1 (T n )< 0, the estimate is similar to step 1. IfR( n ; n +C 1 (T n ))> 0, we may estimate jE Z n [ Z T n n ~ t dt1 f n n <Tg ]j E Z n [(jln(C 1 ))j +jC 0 j)(T n )1 f n n <Tg ] (jln(C 1 ))j +jC 0 j)P Z n ( n n <T ) (jln(C 1 ))j +jC 0 j) C n 42 which tends to 0 by Theorem 1.5.1. For the rst term in (1.32), we need some uniform estimate for E nZ n [( Z T n n Z n t n t dt) 2 ]C (1.33) Once we prove (1.33), the desired result follows from C-S inequality. Note that 0 = R( n n ; n ) + Z T n n e n s + n 2 jZ n s j 2 ds + Z T n Z n s dB nZ n s which will imply E Z n [ Z T n n e n s + n 2 jZ n s j 2 ]C Now we have E Z n [( Z T n n Z n t n t dt) 2 ] CE Z n [ Z T n n ( n Z n t n t ) 2 dt] CE Z n [ Z T n n (Z n t ) 2 + ( n t ) 2 dt] CE Z n [ Z T n n e n s + 1 2 jZ n s j 2 dt] +C where we use the inequality x 2 Ce x +C for xC 0 . 43 Denition 1.5.3. Assume D is an open set, we call a system solutions u() is a minimal viscosity solution ofLu = 0 with boundary boundary related to itself: if for any continuous viscosity solution of the following equation: 8 > > > < > > > : L~ u(;t;x) = 0 for (t;x)2D;8 ~ u(;t;x) sup ~ ~ u( ~ ;t;x) for (t;x)2@D;8 (1.34) we have u(;t;x) ~ u(;t;x);8(t;x)2 D. In light of the Theorem 1.5.2, we could calculate V 1 from V n . However, the V 1 also has its own characterization as it is stated in the next theorem. Theorem 1.5.4. The value function V P 1 ( 0 ;t;x) is minimal solution of the fol- lowing HJB equation: 8 > > > > > > > > > > > > > > > > > > < > > > > > > > > > > > > > > > > > > : @u @t + sup C 0 ;z2R [ 1 2 z 2@ 2 u @x 2 + ( 0 2 z 2 U A ()) @u @x + 0 z] = 0; L( 0 ;t)xe C 0 (Tt): u( 0 ;t;e C 0 (Tt)) =C 0 (Tt): 0tT u( 0 ;t;L( 0 ;t)) sup u(;t;R(;t)) 0tT u( 0 ;T;x) = 0; L( 0 ;T )x 0: (1.35) 44 Proof. We assume that Q(;t;x) is the continuous viscosity solution in D of the following equation: LQ = 0 in D Q( 0 ;t;L(t; 0 )) sup Q(;t;R(t;)) In the following, we will show Q(;t;x)V 1 (;t;x). Step 1: Similar to the proof in Theorem 1.3.3 , we have Q( 0 ;t 0 ;x 0 ) = sup ;Z E 0 Z [Q( 0 ;;L(;)) + Z 0 0 Z s ds] Step 2: Assume by contradiction.9( 0 ;t 0 ;x 0 ) such that C 0 := (V 1 Q)( 0 ;t 0 ;x 0 )> 0 Then we have 0<C 0 = (V 1 Q)( 0 ;t 0 ;x 0 ) = sup ;Z E Z [sup V 1 (;;R(;)) + Z 0 0 Z s ds] sup ;Z E Z [sup Q(;;R(;)) + Z 0 0 Z s ds] sup ;Z E Z [sup (V 1 Q)(;;R(;))] 45 Now, x > 0,9( ;Z ) such that C 0 <E Z [sup (V 1 Q)(;;R(;))] then,9 1 ;! 1 such that for t 1 :=(! 1 );x 1 :=R( 1 (! 1 ); 1 ): C 0 (V 1 Q)( 1 ;t 1 ;x 1 ) Similarly,9 2 ;t 2 =(! 2 );x 2 =R((! 2 ); 2 ) such that C 0 2 (V 1 Q)( 2 ;t 2 ;x 2 ) repeat the process, we get C 0 2 C 0 2 (V 1 Q)( n ;t n ;x n ) Step 3: We can easily show the following result: there exists uniform > 0, such that P t 1 ;x 1 ;;Z ()> 08;Z choose ! 2 such that (! 2 ) . Then t 2 t 1 . Similarly t n t n1 . Contraction. 46 1.6 Some discussion on the discrete case In this part, we discuss the discrete time version of our model as well as state some property of it. In the stopping time case, the value function will become continuous on the boundary after facelifting, while in the discrete time case, the value function is indeed discontinuous on the boundary. Therefore, we need to consider discontinuous viscosity solution, see [2] for instance. 1.6.1 The discrete time model Agent's problem: given 0 [0;T ] , the agent with type 0 's problem is : V Q 0 () := sup a [0;t 1 ] E a [I 1 +max(I 2 (Quit);I 2 (continue))] where I 1 := Z t 1 0 U A ( 0 s )ds 1 2 0 Z t 1 0 a 2 s ds: I 2 (Quit) =R( 0 ;t 1 )C A (t 1 ) =L( 0 ;t 1 ) I 2 (Continue) := sup a [t 1 ;T] E a t 1 [ Z T t 1 U A ( s )ds 1 2 0 Z T t 1 a 2 s ds]: It is clear that the quit region is Q :=fI 2 (Continue)<I 2 (Quit)g 47 the stay region is S :=fI 2 (Continue)>I 2 (Quit)g And agent has no dierence when N =fI 2 (Continue) =I 2 (Quit)g Principal's problem: By the above discussion, we dene a forward process: X t =x Z t 0 U A ( 0 s )ds + Z t 0 0 2 Z 2 s ds + Z t 0 Z s dB Z s ; 0tT: which satises X t e C 0 (Tt); 0tT X t 1 L( 0 ;t 1 ) If the agent quits at t 1 , the principal may hire a new agent of type with IR R(;t 1 ). This is a static problem on [t 1 ;T ], and we can solve the problem with Principal's optimal utility V P t 1 (). Now, the principal's problem is: V P 0 := sup [0;T] E 0 Z [P 1 + 1 S P 2 (Continue) + 1 N sup V P 0 (t 1 ;;R(t 1 ;))] 48 subject to V Q 0 ()R 0 ( 0 ), where P 1 := Z t 1 0 0 Z s ds Z t 1 0 s ds P 2 (Continue) := Z T t 1 0 Z s ds Z T t 1 s ds Example 1.6.1. In this example, we discuss a case that sup V P 0 (t 1 ;;R( 0 ;t 1 ))> V P 0 (t 1 ; 0 ;L( 0 ;t 1 )). Dene = f 0 ; 1 g such that R( 1 ;t 1 ) << L( 0 ;t 1 ), by Lemma 1.2.2, we have lim x!1 V P 0 ( 1 ;t 1 ;x) = +1 by choosing R( 1 ;t 1 ) small enough, it can be guaranteed that sup V P 0 (t 1 ;;R( 0 ;t 1 ))V P 0 (t 1 ; 1 ;R( 1 ;t 1 ))>V P 0 (t 1 ; 0 ;L( 0 ;t 1 )) In this case, the value function is discontinuous at (x;t) = (L( 0 ;t 1 );t 1 ) 49 Chapter 2 A General Model for Continuous Time Principal-Agent Problem Under Hidden Action 2.1 Introduction In this paper, we consider a general framework of continuous time principal-agent problem. In their seminal paper [33], Holmstrom and Milgrom showed that the optimal contract is linear provided both the agent and the principal have exponen- tial utility function under continuous time framework. After this work, researchers rst study the lump-sum payment, i.e. the agent get paid once at terminal time. In [51], Schattler and Sung use dynamic programming and martingale approach to get a more general result of [33]. Cvitanic, Wan and Zhang [18] study the optimal contract for general utility functions. A nice summary can be found in the survey [55]. In recent years, researchers start to study the continuous time principal agent 50 model which the agent get paid continuously. In [49], Sanikov build a new type of continuous payment model where he use the agent's remaining utility as a state variable for the principal's problem. In [61], Williams study the continuous time model which the agent is allowed to consume and ideally give explicit solution for optimal contract. Note that in [49] and [61], both Sannikov and Williams assume the contract does not depend on the stochastic compensation. This motivate us to study whether the PPS equal to zero will have aect on implementability. It is a well know result that when the time period is discrete, the pay-to-performance(PPS) of the contract is very crucial, it gives the necessary incentive for the agent to work for the principal. As we illustrate in §2.3, the agent will have no incentive to work if PPS is zero, which induces a lower utility for the principal. In this paper, we assume the con- tract contains non-stochastic compensation schemes, i.e. PPS is not zero. To our best of knowledge, all the previous work of continuous time payment framework assumes that PPS is zero. We use backward iteration to rst solve the agent's problem by using comparison principle of Backward Stochastic Dierential Equa- tions (BSDE) and stochastic maximum principal. We then solve the principal's problem by using all the needed state process (which includes the agent's remain- ing utility and consumption process). It turns out the principal's optimal value will be the solution of a Hamilton-Jacobi-Bellman(HJB) equation. By assuming the PPS is not zero, our work has three implications: (1) the principal have the 51 opportunity to increase her utility (2) This will improve the implementability (3) it allows us to unify the lump-sum payment and continuous time payment in one model. The rest of the paper is organized as follows. In §2.2 we present the general model for continuous time principal-agent problem and fully solve it.In §2.3, We give an simple discrete-time example such that when PPS is not zero, the principal will indeed increase her optimal utility value. In §2.4 we oer a solvable model which has a closed-form solution.It turns out although the optimal utility will be same if we assume the PPS is always zero. The implementabiliy will be much clearer if the model consists of the stochastic compensation. In §2.5, we connect our work to [61]. In §2.6, we derive the work related to [49]. In§2.7, we provide the math- ematics tools needed for solving the problem and all the remaining proof of the previous sections. 52 2.2 The Model 2.2.1 The basic setting Let ( ; l F; l P) be the standard probability space with a standard Brownian Motion fB t g ft0g on it. Let the ltrationfF t g f0tTg be the augmentation of Brownian MotionfB t g f0tTg on a xed horizon T . Now assume the output process is dX t =dB t =a t dt +dB a t : where volatility > 0 is a constant. The processfa t g f0tTg is the agent's eort process which is progressively measurable. fB a t g f0tTg is the Brownian Motion under P a by Girsanov Theorem where dP a dP =e R T 0 a t dBt 1 2 2 R T 0 a 2 t 2 dt The contract the principal will pay to the agent consists of two parts: the contin- uous paymentfS t g f0t<Tg and the lump sum payment (paid once at terminal timeT ). The continuous time paymentfS t g f0t<Tg satises the following dynamic dS t = t dt + t dX t = ( t + t a t )dt + t dB a t : 53 Furthermore, the agent may consume continuously at ratefc t g f0tTg , then his wealth process m A tf0tTg will evolve as dm A t = (rm A t c t )dt +dS t = (rm A t c t + t + t a t )dt + t dB a t : where r is the constant interest rate. On the other hand, The principal can also take dividendfD t g f0tTg . Her wealth processfm P t g f0tTg will be dm P t = (rm P t D t )dt +dX t dS t = (rm P t +a t D t t t a t )dt + (1 t )dB a t In summary, a t ;c t are both agent's control. t ; t ;D t are all principal's control. Moreover, we let the principal to oer the agent as a nal payment at time T . Now, the agent's utility function is J A ( t ; t ;;a t ;c t ) =E a [ Z T 0 U A (t; t ; t ;a t ;c t )dt + A (T;;m A T )] Where U A (x) and A (x) are both increasing, concave utility functions. The principal's utility function is J P ( t ; t ;;D t ;a t ;c t ) =E a [ Z T 0 U P (t; t ; t ;D t )dt + P (T;;m P T )] 54 where U P (x) and P (x) are both increasing, concave utility functions. Now we write down the agent's problem. Agent's Problem: Givenf t g f0tTg ;f t g 0tT ;, the agent will maximize his utility function by choosing optimalfa t g f0tTg andfc t g f0tTg : V A ( t ; t ;) = sup at;ct J A ( t ; t ;;a t ;c t ): (IC) (2.1) Denotea ;;; andc ;;; if the optimal control exists. If there are more than one optimal control, we always assume that agent will choose the one which is the best for the principal. Next, we write the the principal's problem: Principal's problem: After the agent choose his best reaction. The principal wants to maximize her utility: V P = sup ;;;D J P (;;;D;a ;; ;c ;; ): (2.2) together with the participant constraint V A ( t ; t ;) ^ W , which is also called individual rationality (IR) in economics. 2.2.2 Solving the agent's problem In order to solve the problem (2.1)-(2.2), we need to rst solve the agent's problem. 55 Theorem 2.2.1. Assumef t g f0tTg ,f t g f0tTg satises the proper intergrable condtion. Then the agent's optimal controlfa t g f0tTg ,fc t g f0tTg can be char- acterized as the following FBSDE: 8 > > > > > > > > > > > > > < > > > > > > > > > > > > > : m A t = R t 0 [ s I 2 ( s ; s ; ^ Y s ;Z s ) + s I 1 ( s ; s ; ^ Y s ;Z s )]ds + R t 0 s dB s W t = A (T;;m A T ) + R T t U A (s; s ; s ;I 1 ( s ; s ; ^ Y s ;Z A s );I 2 ( s ; s ; ^ Y s ;Z A s ))ds R T t Z A s dB s ^ Y t = @ A @m (T;;m A T ) R T t ^ Z s dB s (2.3) where a t = I 1 ( t ; t ; ^ Y t ;Z t ) and c t = I 2 ( t ; t ; ^ Y t ;Z A t ). The function I 1 ;I 2 will be specied in the theorem. Moreover, dP dP =e R T 0 I 1 ( t ; t ; ^ Y t ;Z t ) dBt 1 2 2 R T 0 I 1 ( t ; t ; ^ Y t ;Z t ) 2 2 dt Proof. As in most of the previous literature, we writ down the remaining utility of the agent: W a t =E a t [ Z T t U A (s; s ; s ;a s ;c s )ds + A (T;;m A T )] 56 By martingale representation theorem, there exists a square integrable process fZ a t g f0tTg such that W a t = A (T;;m A T ) + Z T t U A (s; s ; s ;a s ;c s )ds Z T t Z a s dB a s = A (T;;m A T ) + Z T t U A (s; s ; s ;a s ;c s ) + a s Z a s ds Z T t Z a s dB s where the second equality is by Girsanov theorem. Now, observing that here and m A T (may depend on c t but not a t ) are all xed process under the original probability measure l P given the process and , we use the comparison principal of BSDE for a under l P (see Theorem 4.4.1 in [66]). Therefore we get the corresponding BSDE: W t = A (T;;m A T ) + Z T t f(s; s ; s ;c s ;Z A s )ds Z T t Z A s dB s for some square integrable process fZ A t g f0tTg , where f(s;;;c;z) = sup a (U A (s;;;a;c) + az ). For later use, we assume optimal a is unique and denote a t =I 1 ( s ; s ;c s ;Z A s ). We rst state a simple and useful lemma for later use: Lemma 2.2.1. Assume f(x,y) are dierentiable in both x and y. Denote g(y) = sup x f(x;y) 57 and x (y) is the one achieve the maximum, i.e. g(y) = f(x (y);y).Then dg(y) dy = @f @y (x (y);y) holds for every y. Proof. See in appendix. Now we try to derive the condition that the optimal c by following the idea of Stochastic maximum principle. Given dm A t = (rm A t c t + t )dt + t dB t The agent will choose c to maximize: W 0 = A (T;;m A T ) + Z T 0 f(s; s ; s ;c s ;Z A s )ds Z T 0 Z A s dB s First, by writing ~ m A t =e rt m A t , without loss of generality , we may assume r = 0. Then, for arbitrary process c [0;t] and c [0;t] , denote rm A t := lim !0 m A;c+c t m A;c t whererm A t is the so-called Frechet derivative of m A t . rW t ,rZ A t are dened in the same way. Under mild condition, we get: 8 > > > < > > > : rm A t = R t 0 c s ds rW t = @ A @m (T;;m A T )rm A T + R T t @f @z rZ A s + @f @c c s ds R T t rZ A s dB s (2.4) 58 Note that by lemma 1, @f @z = a t . By using Girsanov Theorem, We have rW t = @ A @m (T;;m A T )rm A T + Z T t @f @c c s ds Z T t rZ A s dB s where we use P to represent the probability measure related to a , therefore B t is the Brownian Motion under P . By using E by the expectation w.r.t P , We dene a new process ~ Y t for later use: ^ Y t = E t [ @ A @m (T;;m A T )] = @ A @m (T;;m A T ) Z T t ^ Z s dB s for some square integrable process ^ Z tf0tTg by martingale representation theorem. Now, apply Ito's formula for ^ Y t rm A t , plugging in to (2.4) and take expectation with respect to l P at t = 0, we get rW 0 =E ( Z T 0 ^ Y t c t dt + Z T 0 @f @c c t dt): Since c is arbitrary, which can be positive or negative. Therefore, for optimalc t , we need to have ^ Y t = @f @c (t; t ; t ;c t ;Z A t ) 80<t<T: 59 Again, we assume the inverse function exists and denote c t =I 2 ( t ; t ; ^ Y t ;Z A t ). Now, summarize all the previous discussion, rewrite the equations under P , we get the following FBSDE: 8 > > > > > > > > > > > > > < > > > > > > > > > > > > > : m A t = R t 0 [ s I 2 ( s ; s ; ^ Y s ;Z s ) + s I 1 ( s ; s ; ^ Y s ;Z s )]ds + R t 0 s dB s W t = A (T;;m A T ) + R T t U A (s; s ; s ;I 1 ( s ; s ; ^ Y s ;Z A s );I 2 ( s ; s ; ^ Y s ;Z A s ))ds R T t Z A s dB s ^ Y t = @ A @m (T;;m A T ) R T t ^ Z s dB s (2.5) Remark 2.2.2. Given t , t and , we call the contract is implementable if equation (2.5) has a solution. 60 2.2.3 Solving the principal's problem After solving the agent's problem, we now solve the principal's problem. We rst rewrite the equations in 2:5 into the forward form, together with principals' wealth process: 8 > > > > > > > > > > > > > < > > > > > > > > > > > > > : W t =W 0 R t 0 U A (s; s ; s ;I 1 ( s ; s ; ^ Y s ;Z A s );I 2 ( s ; s ; ^ Y s ;Z A s ))ds + R t 0 Z s dB s ^ Y t = ^ Y 0 + R t 0 ^ Z s dB s : m A t = R t 0 [ s I 2 ( s ; s ; ^ Y s ;Z s ) + s I 1 ( s ; s ; ^ Y s ;Z s )]ds + R t 0 s dB s m P t = R t 0 ((1 s )I 1 ( s ; s ; ^ Y s ;Z s )D s s )ds + R t 0 (1 s )dB s : (2.6) with constrains W T = A (T;;m A T ) and ^ Y T = 0 A (T;;m A T ). In the equation of m P t , we also assume r = 0 for the same reason. The reason we could rewrite into forward form is by the one to one correspondence between the forward equation and backward equation. After rewrite into the forward form, we could use the HJB equation approach for later use. Now the principal wants to maximize her target function: J(0;W 0 ; ^ Y 0 ;m A 0 ;m P 0 ) =E [ Z T 0 U P (t; t ; t ;D t )dt + P (T;;m P T )] (2.7) 61 Denote the corresponding derivative with respect to the four state variable be J 1 ;J 2 ;J 3 ;J 4 (the second derivatives are dened correspondingly).we may write down the principal's HJB equation. 0 J t + max ;; ^ Y 0 ;D;Z; ^ Z fU P (t;;;D) +J 1 [U A (t;:;I 1 (;; ^ Y;Z);I 2 (;; ^ Y;Z))] +J 2 [ ^ Z] +J 3 [I 1 (;; ^ Y;Z) +I 2 (;; ^ Y;Z)] +J 4 [(1)I 2 (;; ^ Y;Z)D] +J 12 Z ^ Z +J 13 Z +J 14 Z(1) +J 23 ^ Z +J 24 ^ Z(1) +J 34 (1) 2 + 1 2 J 11 Z 2 + 1 2 J 22 ^ Z 2 + 1 2 J 33 2 2 + 1 2 J 44 (1) 2 2 (2.8) with constraint W T = A (T;;m A T ), ^ Y T = @ A @m (T;;m A T ) and terminal condition: J(T;W; ^ Y;m A ;m P ) = P (T; 1 A (T;W;m A );W ) Our goal is sup ^ Y 0 J(0; ^ W; ^ Y 0 ; 0; 0) There is a important issue about the boundary condition here, especially for the continuous payment case. Now we discuss some special cases: Case 1: Lump sum payment only. Since there is no running payment, t 0 62 and t 0. Also, agent will not save (since he doesn't get paid when t < T ). Therefore, U A (t; t ; t ;a t ;c t ) =h(a t ) for some convex function h. By comparison principal of BSDE, we get W t = A () Z T t h(H( Z s ))ds Z T t Z s dB a s Where H(x) := (h 0 ) 1 (x) and a t =H( Zt ). Since at the lump-sum case, U P (t; t ; t ;a t ;c t ) =a t =H( Z t ): We get the HJB equation as: 0J t + max Z [H( Z ) +J 1 (h(H( Z ))) + 1 2 J 11 Z 2 ] 63 The boundary condition is J(T;W T ) = P (T;) with the constraint W T = A (T;) and constraint W 0 ^ W . Therefore, the true boundary condition is J(T;W T ) = P (T; 1 A (T;W T )). Now the HJB equation is 8 > > > < > > > : 0J t + max Z [H( Z ) +J 1 (h(H( Z ))) + 1 2 J 11 Z 2 ] J(T;W T ) = P (T; 1 A (T;W T )) (2.9) and the principal's value function is J(0; ^ W ): Case 2: Continuous time payment only: That is, there is no nal payment . We have A (T;;m A T ) = A (T;m A T ) and P (T;;m P T ) = P (T;m P T ). Then, the HJB equation will be the same as (2.8) with the terminal conditionJ(T;W T ;m A T ;m P T ) = P (T;m P T ) and constraint W T = A (T;m A T ), ^ Y T = 0 A (T;m A T ) and W 0 ^ W . Now we try to derive the boundary condition for this HJB equation: J(T;x; ^ Y 0 ;m A T ;m P T ) We state the result in the following lemma: 64 Lemma 2.2.2. The boundary condition for continuous time payment only is J(T;x; ^ Y 0 ;m A T ;m P T ) = 1 A (x): Proof. For simplicity, we assume U(x) =e x and h(x) = 1=2x 2 . Now set ~ W t = 1 A (W t ) and apply Ito's formula, we got: ~ W 0 = m T Z T 0 [( 1 A ) 0 (W t )(U A (a t ;c t )) + 1 2 ( 1 A ) 00 Z 2 t ]dt Z T 0 ( 1 A ) 0 (W t )Z s dB s = Z T 0 t c t + t a t ( 1 A ) 0 (W t )(U A (a t ;c t )) 1 2 ( 1 A ) 00 Z 2 t dt + Z T 0 t ( 1 A ) 0 (W t )Z s dB s wherea t = Zt ^ Yt andc t =ln ^ Y t + 1 2 Z 2 t ^ Y 2 t . Note that the remaining utility function for the principal is Y P 0 = Z T 0 (1 t )a t t dt Z T 0 Z P s dB s 65 Now we do the sum ~ W 0 +Y P 0 , we got: ~ W 0 +Y P 0 = Z T 0 a t c t + ( 1 A ) 0 (W t )U A (a t ;c t )) 1 2 ( 1 A ) 00 Z 2 t dt + Z T 0 t ( 1 A ) 0 (W t )Z s Z P s dB s = Z T 0 Z t ^ Y t +ln( ^ Y t ) 1 2 Z 2 t ^ Y 2 t + ^ Y t W t 1 2 Z 2 t W 2 t dt + Z T 0 t + Z t W t Z P t dB t What we want is when T tends to zero, E ( ~ W 0 + Y P 0 ) also tends to 0. By the quadratic function property. It suces to show that E (ln( ^ Y t )) is uniformly bounded. Note that W t + ~ Y t 0 Therefore, E ln( ~ Y T )E (ln(W t ))E (W t ))E (W 0 ) where we use the factln(x)<x forx> 0 andW t is a supermartingale. Therefore, we have successfully show one side inequality. To show the equality holds, simply set t . t to be deterministic. Hence, the boundary condition holds: J(T;x; ^ Y 0 ;m A T ;m P T ) = 1 A (x): 66 For the general case that the principal could take dividend D t , we derive that we only need to have thatE (Y P 0 )C which is natural by the setup of the problem. 2.3 Example 1: A Heuristic Argument In this subsection, we oer a very simple discrete time model so that with the part, the principal will indeed have more utility value. The Model is two-period. Time is T = 0; 1, At time 1, output is X =a + where a is agent's control and is standard normal distribution. Agent's CARA coecients is A and principal is risk neutral. DenoteS =+X be the contract. Then consider the standard principal-agent problem: given S, agent solve V A (S) := sup a E[U A (S 1 2 a 2 )] where U A (x) =e A x and principal solve V P := sup ; E[XS] 67 such thatV A (S)R. Under this framework, we get the principal's optimal utility is 1 2( A +1) ~ R where ~ R = 1 A ln(R). However, if we restrict the contract to be S =, i.e. = 0. Simple calculation will reveal that the principal's utility is ~ R, which is less than 1 2( A +1) ~ R. 2.4 Example 2: A solvable case 2.4.1 Basic Setting Of course, in general, the HJB equation (2.7)-(2.6) is hard to solve. In this section, we present a simple example which has a closed form solution and gives a rigorous proof for it. Consider U A (t; t ; t ;a t ;c t ) =u(c t h(a t )) where u can be any increasing and concave function and h(x) = 1 2 x 2 . U P (t; t ; t ;a t ;D t ) = (1 t )a t t 68 and A (T;;m A T ) =m A T and P (T;;m P T ) = 0. That is, the principal will take all the prot as dividend. Therefore, the agent's remaining utility function will be ~ Y t : = E a t [ Z T t u(c s h(a s ))c s + s + s a s ds] = Z T t [u(c s h(a s ))c s + s + s a s +Z A;;;a;;c s a s ]ds Z T t Z A;;;a;c s dB s In order to get the maximal Y t value, we use comparison principal of BSDE and get the rst order condition: 8 > > > < > > > : u 0 (c t h(a t )) = 1 u 0 (c t h(a t )) (h 0 (a t )) + t +Z A;;;a;c t = 0 we get a t = t + Z A;;;a;c t and c t = 1 2 (a t ) 2 + U(1) where U(x) := (u 0 ) 1 (x). Therefore, we get the following BSDE: Y A;; t = Z T t [u(U(1))U(1) + 1 2 ( s +Z A s ) 2 + s ]ds Z T t Z A s dB s 2.4.2 Formulating the principal's problem Denote Y P;; t =E a t ( Z T t [(1 s )a s s ]ds) 69 we get Y P;; t = Z T t [(1 s )a s s ]ds Z T t Z P s dB a s = Z T t [(1 s +Z P s )( s +Z A s ) s ]ds Z T t Z P s dB s Denote C 0 =u(U(1))U(1), the principal's problem is 8 > > > > > > > > < > > > > > > > > : Y A;; t = R T t [C 0 + 1 2 ( s +Z A s ) 2 + s ]ds R T t Z A s dB s Y P;; t = R T t [(1 s +Z P s )( s +Z A s ) s ]ds R T t Z P s dB s sup ; Y P;; 0 s:t: Y A;; 0 W 0 : (2.1) where C 0 = 1 A (ln( A ) + 1). Remark 2.4.1. If we add constraint 0, then we see Y A;; t C 0 (Tt), and when Y A;; t = C 0 (Tt), we must have all the controls equal to 0 and thus Y P;; t = 0. Then for the PDE (2.5) below, it is in the domain xC 0 (Tt) and the boundary condition is V (t;C 0 (Tt)) = 0. 70 Lemma 2.4.1. AssmueY P;; 0 is decreasing with respect toW 0 , then the principal's problem (2.1) is equivalent to 8 > > > > > > > > < > > > > > > > > : X ;;Z t =W 0 R t 0 [C 0 + 1 2 ( s +Z s ) 2 + s ]ds + R t 0 Z s dB s Y P;;;Z t = R T t [(1 s +Z P s )( s +Z s ) s ]ds R T t Z P s dB s sup ;;Z Y P;;;Z 0 s:t: X ;;Z T = 0: (2.2) Proof. We will prove that sup ; Y P;; 0 sup ;;Z Y P;;;Z 0 . Given , in (2.1), there will exists Z A s (depend on ,) such that Y A;; 0 = W 0 . Therefore, in (2.2), we use the same ; and choose Z s = Z A s . Then X ;;Z T will automatically be 0. Therefore, Y P;; 0 =Y P;;;Z 0 : clearly, since and are arbitrary, this will imply,Y P;; 0 sup ;;Z Y P;;;Z 0 which in turn implies sup ; Y P;; 0 sup ;;Z Y P;;;Z 0 . The other direction is similar. Remark 2.4.2. Afterwards, we will solve the problem (2.2) instead of (2.1). 71 2.4.3 DPP for the principal's problem In this part, we derive the Dynamic Programming Principal (DPP for short) for the principal's problem. Denote 8 > > > > > > > > < > > > > > > > > : X t;x;;;Z s =x R s t [C 0 + 1 2 ( r +Z r ) 2 + r ]dr + R s t Z r dB r Y P;;;Z s = R T s [(1 r +Z P r )( r +Z r ) r ]dr R T s Z P r dB r u(t;x) := supfY P;;;Z t :8;;Z;s:t: X t;x;;;Z T = 0g (2.3) Clearly, our goal is u(0;W 0 ). Meanwhile, under the Markovian framework, it is not dicult to show that u(t;x) is both adapted and independent ofF t , hence deterministic. (See chapter 5 in [?Z] for details). Meanwhile, denote: 8 > > > < > > > : X t;x;;;Z s =x R s t [C 0 + 1 2 ( r +Z r ) 2 + r ]dr + R s t Z r dB r Y P;l;;;;Z s = + R l s [(1 r +Z P r )( r +Z r ) r ]dr R l s Z P r dB r (2.4) where isF u measurable. Now, we are ready to state the theorem: Theorem 2.4.3. DPP for the principal holds: for t<t +<T V (t;x) = supfY P;t+;u(t+;X t;x;;;Z t+ );;;Z t :8(;;Z) on [t;t +]g 72 Proof. "":8 (;;Z) s.t. X e;x;;;Z T = 0. we have Y P;;;Z s =Y P;;;Z t+ + Z t+ s [(1 r +Z P r )( r +Z r ) r ]dr Z t+ s Z P r dB r Since Y P;;;Z t+ =Y P; [t+;T] ; [t+;T] ;Z [t+;T] t+ . Therefore X t+;X t;x; [t;t+] ; [t;t+] ;Z [t;t+] t+ ; [t+;T] ; [t+;T] ;Z [t+;T] T =X t;x; [t;T] ; [t;T] ;Z [t;T] T = 0 which implies Y P;;;Z V (t +;X t;x; [t;t+] ; [t;t+] ;Z [t;t+] t+ ). Then by comparison principal for BSDE, we have Y P;;;Z t supf8(;;Z) on [t;t +] :Y P;t+;u(t+;X t;x;;;Z t+ );;;Z t g Since ;;Z are arbitrary. The desired result follows. "" the other direction is by standard construction and doing partition for the interval. Need rst show the continuity of V (t;x) (need more work here) 2.4.4 From DPP to HJB equation We rst derive the boundary condition for V (t;x) Lemma 2.4.2. lim t!T V (t;x) =x 73 Proof. First, we will show that lim t!T V (t;x)x To see this, using the constraint on Y A;; 0 , we have Y P;; 0 Z T 0 (1 s +Z P s )( s +Z A s ) + 1 2 ( s +Z A s ) 2 ds Z T 0 (Z A s +Z P s )dB s x +C 0 T Denote ~ Z P s =Z A s +Z P s and taking expectation w.r.t P +Z . We get Y P;; 0 Z T 0 E +Z [( s +Z A s ) 1 2 ( s +Z A s ) 2 ]dsW 0 +C 0 T ( 1 2 +C 0 )Tx Therefore, set T! 0, we get lim t!T V (t;x)x. At last, choose =Z = 0 and = W 0 T C 0 . we see the upper bound could achieve. In the end, we solve the corresponding HJB equation Theorem 2.4.4. Then the corresponding HJB equation is 8 > > > > > > > > < > > > > > > > > : V t + sup ;;Z ((C 0 + 1 2 Z 2 1 2 2 )V x + 1 2 Z 2 V xx + (1)( +Z)) 0 ;t<T V (T;x) =x (2.5) 74 The solution isV (t;x) = (C 0 + 1 2 )(Tt)x. Correspondingly, a = = 1,Z = 0 and satisfy the reservation utility. . Proof. taking the FOC condition, we getV x =1 since is not bounded. Plugging into the HJB equation, we get V t = (C 0 + 1 2 ). Together with the terminal condition. We have V (t;x) = (C 0 + 1 2 )(Tt)x and +Z = 1. Plugging +Z = 1 into 2.1 and choose to be deterministic, we get Z = 0. Therefore, correspondingly, a = = 1 Remark 2.4.5. : If we assume t 0, we need to have Z T 0 s ds =B T +W 0 (C 0 + 1 2 )T which implies will not be integrable, hence, not in the admissible control set. Indeed, the following lemma will tell us, without, the contract can be very close, but not equal. Lemma 2.4.3. There exists adapted process n t such that lim n!1 j Z T 0 n t dtB T j = 0 75 Proof. Denote n = infft :jB t jng^ (T 1 n ) And choose n t = 8 > > > < > > > : 0 t< n Bn Tn otherwise Then lim n!1 jB T Z T 0 n t dtj = lim n!1 jB T B n j = 0 2.5 Example 3: Connection to Noah Williams case When U A (t; t ; t ;a t ;c t ) =e t u(c t ;a t ) 76 and U P (t; t ; t ;a t ;D t ) =e t U(D t ) and A (T;;m A t ) =e T u( +m A T ), P (T;;m P T ) =e T U(m P T ). It is the same as Williams's model. It is a well-known result that if A is a function of +m A T and P is a function ofm P T , without loss of generality, we may assumem A t 0, see [17], for instance. Nevertheless, our framework still works for this model. We send the interested reader to [61] for details. Now we state Cole and Kocherlakota's result under our model: Assumption 2.5.1. AssumeU A (t; t ; t ;a t ;c t ) andU P (t; t ; t ;D t ) are both inde- pendent of t ; t . Moreover, A is a function of +m A T and P is a function of m P T . Theorem 2.5.2. Assume Assumption 2.5.1 holds. Moreover, assume given t ; t ; in the admissible set, (2.5) will have a unique solution c ;; t . Then W.L.O.G, we may assume that m A t 0. 77 Proof. Given t ; t ;, for the following FBSDE 8 > > > > > > > > > > > > > < > > > > > > > > > > > > > : m A t = R t 0 [ s c s )]ds + R t 0 s dB s W t = A ( +m A T ) + R T t f(s; s ; s ;c s ;Z s )ds R T t Z s dB s ^ Y t = 0 A ( +m A T ) + R T t @f @z (s; s ; s ;c s ;Z s ) ^ Z s ds R T t ^ Z s dB s ^ Y t = @f @c (c t ;Z t ) (2.1) By the assumption, we know there is a unique c t for this equation with the last constraint. Now we write down a new corresponding equation: 8 > > > > > > > > > > > > > < > > > > > > > > > > > > > : m t = R t 0 [c t c s )]ds + R t 0 s dB s W t = A ( + m A T ) + R T t f(s;c s ; s ; c s ;Z s )ds R T t Z s dB s ^ Y t = 0 A ( + m A T ) + R T t @f @z (s;c s ; s ; c s ;Z s ) ^ Z s ds R T t ^ Z s dB s ^ Y t = @f @c ( c t ;Z t ) (2.2) If we choose = +m A T m A T and t = c t , t = t . Clearly, by comparing with (2.1) and sincef is independent of and, we observe that c t =c t is the solution of the (2.2). By the assumption again, c is the unique solution. Since the process W t is the same, this will not change the agent's utility. We claim that this will not 78 make any change to the agent's problem. For the principal's side, We rst write down her dynamic utility process: P t = P (m P T ) + Z T t U P (t; t ; t ;D t )dt Z T t Z s dB a s s:t: dm P t = (a t D t t t a t )dt + (1 t )dB a t : SinceU P is now independent from (we have already changed)and. Therefore, we only need to check that m P T =m P T which is equivalent to m P T m P T = =m A T m A T By choosing same D t . We compute m P T m P T , we got: d m P t dm P t = ( t c t )dt Comparing with dm A t d m A t = ( t c t )dt 79 which implies the equivalence. Remark 2.5.3. We may want to loose the constraint U P is independent of t , t . However, by the natural of the problem, U P is increasing over D t , but decreasing over t , which violates that m P t is in the form of t +D t . Remark 2.5.4. Williams [61] uses Assumption 2.5.1 as a sucient condition. On the other hand, our derivations shows it is also a necessary condition. 2.6 Example 4: Connection to Sannikov's case In this section, we oer a case similar to Sannikov's work. Setting U A (t; t ; t ;a t ;c t ) =re rt (u( t + t a t ) 1 2 2 t 2 h(a t )) and U P (t; t ; t ;a t ;c t ) =re rt ((1 t )a t t ): Also, setting T =1 and A (T;;m A T ) = P (T;;m A T ) = 0. That is, the agent will not save and the principal is risk-neutral. When t 0, it is the same as 80 Sannikov's model. Denote W t (;;a) =E a t [r Z 1 t e r(st) (u( s + s a s ) 1 2 2 s 2 h(a s ))ds] Again, by martingale representation theorem and using Girsanov theorem, we have dW t = r(W t + 1 2 2 t 2 +h(a t )u( t + t a t ))dt +rZ t dB a t = r(W t + 1 2 2 t 2 +h(a t )u( t + t a t ) a t Z t )dt +rZ t dB t By comparison principal for BSDE,for optimal a t we take the FOC with respect to a and get h 0 (a t ) = Z t +u 0 ( t + t a t ) t (2.1) Later we will present a simple example such that (2.1) has a unique solution. Now we rst assume there exists unique a such that (2.1) holds. Then we have Z t = (h 0 (a t )u 0 ( t + t a t ) t ): 81 Therefore, we have the dynamic process for W t (corresponding to optimal a): dW t =r(W t + 1 2 2 t 2 +h(a t )u( t + t a t ))dt +r(h 0 (a t )u 0 ( t + t a t ) t )dB a t Now we may write down the principal's problem: max ;;a E a [r Z 1 0 e rt ((1 t )a s t )dt] s:t: dW t =r(W t + 1 2 2 t 2 +h(a t )u( t + t a t ))dt +r(h 0 (a t )u 0 ( t + t a t ) t )dB a t ;W 0 ^ W Theorem 2.6.1. The HJB equation for the principal is F (W ) max ;;a f @F @W (W + 1 2 2 2 +h(a)u( +a)) + 1 2 @ 2 F @W 2 r 2 (h 0 (a)u 0 ( +a)) 2 + [(1)a]g (2.2) Proof. Denote J( t ; t ;a t ) =E a t [r Z 1 t e rs ((1 s )a s s )ds] 82 Therefore, we get the correspoding HJB equation: 0 @J @t + max ;;a [ @J @W r(W + 1 2 2 2 +h(a)u( +a)) + 1 2 @ 2 J @W 2 r 2 2 (h 0 (a)u 0 ( +a)) 2 +re rt ((1)a)] (2.3) Setting F ( t ; t ;a t ) =E a t [r Z 1 t e r(st) ((1 s )a s s )ds] Then F become time independent and we have @J @t =re rt F (W ); @J @W =e rt @F @W ; @ 2 J @W 2 =e rt @ 2 F @W 2 Therefore, the HJB equation (2.3) becomes F (W ) max ;;a f @F @W (W + 1 2 2 2 +h(a)u( +a)) + 1 2 @ 2 F @W 2 r 2 (h 0 (a)u 0 ( +a)) 2 + [(1)a]g In the next lemma, we oer the example such that (2:1) has a unique solution and derive the boundary condition for HJB equation (2.2). Lemma 2.6.1. Letu(x) =e x andh(x) = 1 2 x 2 , equation (2.1) will has a unique solution. Meanwhile, for (2.2), we have the boundary condition F (0) =1. 83 Proof. Setting f(x) = h 0 (x)u 0 ( +x), it is clear that (2.1) has a unique solution if and only if f(x) is both injective and surjective. However, it is easy to check that f 0 (x)> 0 and the range of f(x) is (1; +1). For the second part, since W 0 =E a [r Z 1 t e r(st) (u( s + s a s ) 1 2 2 s 2 h(a s ))ds] observe that all integrand is non-positive. Therefore, W 0 = 0 will imply t =1 and a t = t 0, which in turn implies that F (0) =1. The next theorem shows that our model will improve the optimal utility value for the principal. Theorem 2.6.2. For u(x) =e x and h(x) = 1 2 x 2 , The optimal in (2.2) is not equal to 0. Therefore, by comparison principal of PDE, the optimal value in our model is larger than the one in Sannikov's. Proof. For simplicity, we assumer = = = 1. Then the FOC condition in (2.2) will imply: 8 > > > > > > > > < > > > > > > > > : @F @W (ae (+a) ) + @ 2 F @W 2 (ae (+a) )(1 + 2 e (+a) ) + 1 = 0 @F @W (ae (+a) ) + @ 2 F @W 2 (ae (+a) )e (+a) (a 1)a = 0 @F @W e (+a) + @ 2 F @W 2 (ae (+a) )e (+a) 1 = 0 (2.4) 84 If the optimal 0, (2.4) will become: 8 > > > > > > > > < > > > > > > > > : @F @W a + @ 2 F @W 2 a + 1 = 0 @F @W e + @ 2 F @W 2 e + 1 = 0 @F @W e + 1 = 0 this will immediately imply @ 2 F @W 2 0. That is, F is a linear function, which contradicts with F (0) =1. Therefore, the optimal is not zero. (I have carefully checked for the general case, it is also true.) Remark 2.6.3. Sannikov [?S] used retirement as another boundary condition to solve the HJB equation (2.3). 2.7 Appendix we oer all the remaining proof in the appendix here. Proof of Lemma 2.2.1 Denote x (y) be the optimal x such that g(y) =f(x (y);y) 85 . By the FOC condition, it must hold that @f @x (x (y);y) = 0. Therefore, dg(y) dy = @f @y (x (y);y) = @f @x (x (y);y) dx (y) dy + @f @y (x (y);y) = @f @y (x (y);y) 86 Chapter 3 Optimal Investing after Retirement Under Time-Varying Risk Capacity Constraint 3.1 Introduction Investing during retirement is signicantly dierent matter from investing for retirement. 1 Economists have found profound challenges of investing in retirement due to aging and health shocks, risk-taking, and retirement adequacy. 2 Compared to investing before retirement, retirees invest in an unknown but nite length of time because of longevity risk.They worry about the balance between spending 1 There are many approaches to construct the retirement portfolio before retirement. See Gustman and Steinmeier (1986), Roozebt and Shourideh (2019) for its structural and optimal reform approach of the retirement model. The optimal portfolio choice approach with labor income includes Bodie, Merton, and Samuelson (1992), Cocco, Gomes, and Maenhout (2005), Viceira (2001). 2 For the empirical studies of post-retirement, see Coile and Milligan (2009), Goldman and Orszag (2014), Gustman, Steinmeier and Tabatabai (2012), Poterba (2016). Yogo (2016) develops a dynamic discrete-time model with health shocks. 87 and leaving wealth as an inheritance. In addition, as stated by Kenneth French, 3 these individuals will also face capacity risk if a market downturn were to occur, leading to a substantial decline in their standards of living. Despite its impor- tance of investing problem post-retirement, its theoretical studies, particularly in a continuous-time framework, are very few. To ll the gap in the literature, this paper studies this optimal investment problem for retirees, taking both longevity risk and living standard risk into consideration in a continuous-time framework. Specically, we study an optimal portfolio choice problem by incorporating the following several signicant investment features post-retirement. First, we consider the mortality risk that the retiree has an uncertain investment time-horizon. The length of each individual's retirement may dier from the statistical life expectancy, and the mortality risk is virtually independent of the market risk in the nancial market. A second feature we consider is that the retiree has a cash in ow from the social security account and retirement account. In most countries, retirees are forced to withdraw from the retirement account, which provides a xed income stream for spending. For example, since its inception, Bengen's \four percent rule" has been recognized as a standard in retirement professionals (Bengen, 1994). Third, the retiree may hope to leave an inheritance. Finally, the absence of labor income results in individual risk capacity, therefore, strong risk-averse behaviors. 3 As Kenneth French presented at the Annual Conference for Dimensional Funds Advisors, 2016, \It is living standard risk you should know about the risk. It is what your exposure is to a major change in your standard of living during the entirely uncertain numbers of years you remain alive." 88 With a massive market risk exposure in the portfolio, the retiree faces a risk to sacrice the standard of living when the market declines as in the 2008-2009 nancial crisis or 2020 COVID-19. We formula and solve an optimal portfolio choice problem with these investment features for retirees. For an analytical purpose, we consider two assets in the economy. One asset is a risky asset that provides a risk premium, and another asset is risk-free. To focus on investment decisions, we x a withdrawal rate in the consumption policy, which is consistent with market practices. The retiree has a CRRA utility function on consumption ows and the bequest of wealth. To capture the living standard risk, we impose a time-varying risk capacity constraint on the investment. That is, the dollar amount invested in the risky asset is always bounded from above by a predetermined percentage of the retiree's portfolio wealth at the retirement date (the initial wealth). We show that, under the specic condition on model parameters, the value function (expected utility function) of the optimal portfolio choice problem is aC 2 smooth solution of the corresponding Hamilton-Jacobi-Bellman equation (Propo- sition 3.4.5). If the value function is C 2 smooth, we explicitly characterize the region in which the risk capacity constraint is binding. We further derive the optimal investment policy by using a well-dened second-order ordinary dieren- tial equation (Proposition 3.4.2 and Proposition 3.4.3). The second-order ordinary dierential equation can be numerically solved eciently to illustrate the results. 89 For a general utility function, we characterize the value function as the unique viscosity solution of the HJB equation (Proposition 3.3.1). Our results have several important properties and implications for retirement investment. First, the investment strategy is not a myopic one. The risk capacity constraint on the future investment decisions aects the investment at any instant time. Therefore, the investing strategy is not merely cutting the benchmark strat- egy, which has no risk capacity constraint. Second, the optimal investment strategy displays a remarkable wealth-cycle property, in contrast to the life-cycle feature suggested in both academic and prac- tice. 4 Specically, when the portfolio value is unsustainable for the entire retire- ment period, the retiree should invest in the market, because the dollar investment in the stock will increase the expected portfolio value. Nevertheless, the percentage of wealth in the market declines when the portfolio is worth more. The declining percentage of the wealth invested in the risky asset is due to the retiree's stan- dard living concern to protect the portfolio value. This decreasing feature of the percentage becomes signicant when the portfolio worths suciently high. Since the dollar invested in the stock is always a constantL when the portfolio wealth is higher than a threshold W , this threshold W measures the expected 4 According to Modigliani (1986), individual's investment and consumption decision has a life- cycle feature. See Benzoni, Dufresne and Goldstein (2007), Viceira (2001), Cocco, Gomes, and Maenhout (2005), Gomes and Michaelides (2005), Bodie et al (2004), and Bodie, Detemple and Rindisbacher (2009) for life-cycle theoretical and empirical studies. The life-cycle hypothesis is also used for preparing the retirement portfolio and in the retirement portfolio. For instance, a conventional rule for an agent of age t is to invest (100 - t)/100 percent of wealth in the stock market. See Malkiel (1999). 90 lump sum of the spending in the retirement period. Intuitively, when the portfo- lio worths more than this threshold, the retiree aims to protect the portfolio by investing only a xed amount of L in the stock market without losing the living standard. By implementing this contingent constant-dollar strategy, 5 the retiree sells the stock when the stock market moves high, which is consistent with the retirement portfolio's decumulation process. In contrast, investing for retirement is an accumulating asset process. Third, the portfolio is nearly independent of the stock market when the retiree's portfolio worths suciently to embrace the living standard; therefore, reducing the retiree's living standard risk. As a comparison, we solve an optimal portfolio choice problem by imposing a standard leverage constraint on the percentage of the wealth in the stock market. We demonstrate that the optimal portfolio under the leverage constraint moves precisely in the stock movement direction, which is a severe concern of the standard living risk in a stressed market period. This paper contributes to the optimal portfolio choice literature by solving a stochastic control problem with a new objective function and a new risk capacity constraint. At rst glance, this risk capacity constraint seems to be a particular case of leverage constraint or collateral constraint, X t f(W t ), where X t repre- sents the dollar instead in the risky asset and W t the wealth at time t. However, 5 In a classical constant-dollar strategy, the dollar invested in the risky asset is always xed. In contrast, by a contingent constant-dollar strategy in this paper we mean a xed dollar is invested in the risky asset if and only if the portfolio value is higher than a threshold. 91 earlier literature on the leverage constraint does not study the situation thatf(W t ) is independent on W t . 6 Therefore, our results provide new insights into the port- folio choice problem under an extreme leverage constraint, a dynamic constraint X t L for all time t. The objective function is also new to the best of our knowledge. Since we consider the uncertain time-horizon (Yaari, 1965, Richard, 1975, Blanchet-Scaillet et al. 2008), both the consumption process and the wealth process are involved in the objective function to capture the retiree's mortality risk. The structure of the paper is organized as follows. In Section 2, we introduce the model and present the retiree's optimal investment problem to capture his mortality risk and living standard risk. In Section 3, we present a characteriza- tion of the value function for a general utility function. In Section 4, we derive the value function and the optimal investment strategy for a CRRA utility. We present several properties of the optimal portfolio, and numerically illustrate these properties in Section 5. We solve and make a comparison with another relevant optimal portfolio choice problem under a leverage constraint in this section. The conclusion is given in Section 6, and technical proofs are given in Appendix A - Appendix B. 6 See, for instance, Zariphopoulou (1994), Vila and Zariphopoulou (1997), Detemple and Murthy (1997). Studies on portfolio choice and asset pricing under other dynamic constraints on the control variablec t or the state variable,W t , include Dybvig (1995), El Karoui and Jeanbalnc- Picque (1998), Detemple ad Serrat (2003), Elie and Touzi (2008), Dybvig and Liu (2010), Chen and Tian (2016), Ahn, Choi and Lim (2019), and reference therein. 92 3.2 The model In this section, we introduce the model and an optimal portfolio choice problem for a retired individual (retiree). 3.2.1 Investment Opportunity There are two assets in a continuous-time economy. Let ( ;F t ;P) be a ltered probability space in which the information ow in the economy is generated by a standard one-dimension Brownian motion (Z t ). The risk-free asset (\the bond") grows at a continuously compounded, constant r. We treat the risk-free asset as a numeaire so we assume that r = 0. F 1 is the -algebra generated by all F t ;8t2 [0;1). The other asset ("the stock index") is a risky asset, and its price process S follows dS t =S t dt +S t dZ t where and are the expected return and the volatility of the stock index. 3.2.2 Investor We consider an individual right after his retirement. We simply name \he" for this retiree. The retirement date is set to be zero. The retiree's initial wealth is W 0 at 93 the retirement date. The retiree is risk-averse and his utility function is denoted by a strictly increasing and concave function u() : (0;1)! R and u() satises the Inada's condition: lim W"1 u 0 (W ) = 0, and lim W#0 u 0 (W ) = 0. Since the retiree faces his mortality risk, the investment time-horizon is uncer- tain, neither a xed nite time nor innity. We assume that the investor's death time has an exponential distribution with mean , that is, Pf2dtg =e dt . Therefore, the probability of the retiree survives in the next t years is e t . The investor's average life time is 1 and the variance of his life time is 1 2 . For example, if = 0:05, it means that a normal retiree who retires at 65 is likely died at 85 years old. We assume that is independent of the information setF 1 . 3.2.3 An optimal retirement portfolio problem Compared with a standard investor before retirement, there are several distinct features in the retiree's portfolio choice problem. (1) The retiree has a xed cash ow from his social security account post-retirement. 7 (2) He is either able to withdraw without penalty or enforced to withdraw from his retirement account. 8 (3) He has no labor income anymore. (4) He has a mortality risk, and (5) he becomes more risk-averse than when before retirement because he has concerns on 7 See www.ssa.gov for the social security system in U.S.A. There are similar social security systems in Europe and Canada. 8 In U.S.A, people are able to withdraw around 60 years old (and enforced to withdraw the minimal distributions nearly 70 old) from the retirement account. Moreover, a standard with- drawal rate is between 4% to 5%. See Bengen (1994). 94 the market downturn and has no sucient time to wait for the market return. We introduce a portfolio choice problem to incorporate these ve features. Specically, the optimal portfolio choice problem for the retiree at time zero is max (X) E Z 0 e s u(cW s )ds +Ke u((1)W ) (3.1) where is the retiree's subjective discount factor, is the inheritance tax rate of the wealth, K is a number that determines the strength of the bequest (to heirs), and his wealth process W t satises dW t =X t (dt +dZ t )c t dt;80t; (3.2) and X t represents the dollar amount in the risky asset, and c t is the consumption rate. By assumption, both (X t ) and (c t ) are adapted to the ltrationF t . Because of the social security safety net and income from his retirement port- folio, we choose and x the withdrawal rate, c t = cW t ;c2 (4%; 5%) following Bengen (1994). 9 Therefore, the retiree focuses on the investment decision X t to maximize his expected utility. Moreover, there is no labor-income ow in the bud- 9 A constant percentage consumption rate, c t = cW t , is standard in literature to nd the optimal spending rule. See, for instance, Dybvig (1995), Campbell and Sigalov (2019). It is also consistent with Modigliani's life-cycle theory of consumption. 95 get equation (3.2). To model the risk-averse preference of the retired, we assume that X t L; 0t: (3.3) It states that the investor's dollar amount in the risky asset is bounded above by a xed constant. We call it a time-varying risk capacity and L a capacity level. Equivalently,X t lW 0 , wherel =L=W 0 , then the dollar amount is bounded above by a percentage of his initial wealth. For example, whenl = 30%;W 0 = 1; 000; 000, then we require that at most $300,000 invested in the stock market in the entire investment time period. We use this \risk capacity constraint" to control the retiree's living standard risk. Since the retiree might encounter a terrible market downturn, such a risk capacity constraint prevents the portfolio from a huge loss. Lastly, we assume the non-negative wealth for no-arbitrage condition W t 0; 0t: (3.4) 96 Given the distribution of , and the independent assumption between andF 1 , by the Fubini's theorem, 10 we have E Z 0 e s u(cW s )ds = E Z 1 0 e s u(cW s )1 s ds = E Z 1 0 e s E [u(cW s )1 s jF 1 ]ds = E Z 1 0 e s u(cW s )P (s)ds = E Z 1 0 u(cW s )e (+)s ds where we make use of the fact that P (s) = R 1 s e t dt =e s . Similarly, we have E[e u((1)W )] =E Z 1 0 e (+)t u((1)W t )dt : Therefore, the retiree's optimal retirement portfolio problem (3.1) is reduced to a Merton-Richard type problem as follows (Merton, 1971; Richard, 1975), J(W;L) = max (X) E Z 1 0 e (+)t (Ku((1)W t ) +u(cW t ))dt (3.5) 10 The portfolio choice problem under uncertain time-horizon is studied widely in literature. For instance, Karatzas and Wang (2001) for a general stopping time, Blanchet-Scaillet, et al. (2008) considered the case when the random time has continuous conditional probability distribution conditional on the market prices. 97 subject to constraints (3.2) - (3.4). The function inside the integral of J(W;L), Ku ((1)W t ) +u(cW t ), represents the combination of the preference on the consumption as well as the terminal wealth by incorporating the tax-rate, longevity risk. The general value function at any time t is J(W t ;L) = max (X) E Z 1 t e (+)(st) (Ku((1)W s ) +u(cW s ))ds subject to (3.2) - (3.4). 3.2.4 An all-safety strategy For a retiree, to invest solely in the risk-free asset is one admissible strategy for the retiree. Letting X t = 0 for all time t, thenW t =W 0 e ct , and the value function is bounded below by Z 1 0 e (+)t Ku((1)We ct ) +u(cWe ct ) dt: (3.6) This lower bound clearly equals to J(W ; 0) when L = 0. In general, J(W ;L 1 ) J(W ;L 2 )J(W ;1) for any 0L 1 L 2 , where J(W ;1) = max (X) E Z 1 0 e (+)t (Ku((1)W t ) +u(cW t ))dt denotes the value function for a retiree without the risk capacity constraint (3.3). 98 3.3 A characterization of the value function The stochastic control problem (3.5) can be viewed as a special case of the following general stochastic control problem, E Z T 0 f(t;C t ;W t ;X t )dt ; and there are extant studies on this kind of problem in stochastic control literature back to Bismut (1973). In this section, we characterize the value functionJ(W ;L) in terms of the viscosity solution of the Hamilton-Jacobi-Bellman (HJB) equation. Proposition 3.3.1. The value function J(W ) is the unique viscosity solution in the class of concave function, of the following HJB equation: ( +)J(W ) = max 0XL 1 2 2 X 2 J 00 (W ) +XJ 0 (W ) cWJ 0 (W ) + Ku((1)W ) +u(cW ); (W > 0) (3.7) with J(0) = 1+K + u(0). By Proposition 3.3.1, the value function is uniquely characterized by the vis- cosity solution of the HJB equation. 11 However, this characterization is not strong 11 Brie y speaking, a viscosity subsolution V of a second-order equation F (x;u;u x ;u xx ) = 0 if for any smooth function and a maximum point x 0 of V , the inequality F (x 0 ;V (x 0 ); x (x 0 ); xx (x 0 )) 0. Similarly, V is a viscosity supersolution if for any smooth function and a minimum point x 0 of V , the inequality F (x 0 ;V (x 0 ); x (x 0 ); xx (x 0 )) 0. A viscosity solution is both a viscosity subsolution and supersolution. We refer to Fleming and Soner (2006) for the theory of viscosity solution. 99 enough to derive the explicitly solution of the optimal portfolio. The main insight in Proposition 3.3.1 is that, without knowing the smooth property (\ex-ante") of the value function of a portfolio choice problem, the value function can still be uniquely characterized in the framework of viscosity solution. 12 Building on this characterization, we will derive further smoothness properties of the value function and the optimal strategies for a particular class of utility function. 3.4 The optimal strategy for CRRA utility From this section, we consider the following CRRA utility function u(W ) = W 1R 1R ;R> 0;R6= 1: By its scaling property u(cW t ) =c 1R u(W t ), we have Ku((1)W t ) +u(cW t ) = K(1) 1R +c 1R u(W t ): Then J(W ) = K(1) 1R +c 1R V (W ); 12 Extant studies on the twice dierentiability in optimal control problems, including Ren, Touzi and Zhang (2014), and Strulovici and Szydlowski (2015) and reference therein, rely on certain conditions on the model and the control process. These results cannot be applied in our problem directly since the utility function u() does not satisfy the global Lipschitz condition. 100 where V (W ) = max (X) E Z 1 0 e (+)t u(W t )dt : For a CRRA utility, the HJB equation (3.7) of the value function can be written as ( +)V (W ) = max 0XL 1 2 2 X 2 V 00 (W ) +XV 0 (W ) cWV 0 (W ) +u(W ); (W > 0): (3.8) The contribution of this section to demonstrate that the value function is a C 2 smooth solution of the HJB equation under certain assumptions on model parameters. We also derive the optimal strategy analytically by well-dened second order ordinary dierential equation. For simplicity, we assume that R < 1, 13 and focus on the function V (W ). 3.4.1 A baseline model We start with a benchmark situation, L =1, that is, there is no constraint on the risky investment. For this purpose, we assume that 13 Most arguments can be applied for other risk aversion parameterR> 1. The choice ofR< 1 implies that u(0)<1 and V (0)<1. 101 Assumption A. +>c(1R); (3.9) where = 2 2 2 ; = (1R) R . Lemma 3.4.1. Under Assumption A, the value function in the absence of the risk capacity constraint, V (W ;1), is V (W ;1) = 1 + +c(1R) W 1R 1R : The risky asset investment amount is X t = R 2 W t : When c = 0, Lemma 3.4.1 is essentially given in Liu and Loewenstein (2002), Lemma 1. For a general positive consumption rate c, the optimal investment strategy is independent of the consumption rate. Blanchet-Scaillet et al. (2008), Theorem 3 and Proposition 5 show that the optimal strategy is the same if is independent fromF 1 and its conditional probability density function conditional onF 1 is continuous. By Lemma 3.4.1, the portfolio value process satises dW t =W t ( R 2 c)dt + R 2 dZ t : 102 Since the portfolio W t is a lognormal process, there is a positive probability that R 2 W t >L for any positive numberL. Moreover, there is a positive probability of W t < for any t > 0 and a positive number . Therefore, the retirement wealth portfolio has a substantial living standard risk. 3.4.2 Constrained and unconstrained region In our approach, we rst characterize the region in which the risk capacity con- straint is binding by assuming theC 2 smooth property of the value functionV (W ) (in this subsection). Then, relying on the characterization of the constrained and unconstrained region, we will verify the smooth property by an explicit construc- tion of the candidate smooth function as the value function (in Section 4.3 and Section 4.4) below. Since the optimal dollar amount X is X = min 2 V 0 (W ) V 00 (W ) ;L ; (3.10) the unconstrained regionU is U = W > 0 : 2 V 0 (W ) V 00 (W ) <L : 103 Then, over the unconstrained region, the value function V () satises ( +)V (W ) =u(W ) + (V 0 (W )) 2 V 00 (W ) cWV 0 (W ): (3.11) Similarly, the constrained regionB is given by B = W > 0 : 2 V 0 (W ) V 00 (W ) >L ; in which the constraint (3.3) is binding. Over this region, V () satises a second- order linear ODE ( +)V (W ) =u(W ) + (LcW )V 0 (W ) + 1 2 2 L 2 V 00 (W ): (3.12) Proposition 3.4.2. AssumeV (W ) isC 2 smooth, then there exists a positive num- ber W such thatU = (0;W ) andB = (W ;1). Proposition 3.4.2 characterizes the region in which the risk capacity constraint is binding by one positive number W . By Lemma 3.4.1, the risk-capacity con- straint (3.3) is not binding for all W > 0. Therefore, the number W is a nite number. The characterization ofU andB is important to derive explicitly the value function in our subsequent discussion. From an economic perspective, if the retiree starts to put the maximum $L in the stock when his portfolio can sustainable his retirement spending, he will not invest a smaller dollar amount in the market when 104 his wealth growths more. Therefore, the constrained region is an open interval. In this regard, the number W measures the expected value to sustain the spend- ing with comfortable living standard. Consequently, the investing strategy will be dierent when W >W or W <W . 3.4.3 Explicit characterization of the value function In this section, we present the optimal solution explicitly for a CRRA utility, assuming the value function is C 2 . The smooth property will be veried in the next subsection under certain condition. Proposition 3.4.3. Assume V (W ) is C 2 smooth and Assumption A holds, then in the region (0;W ), V (W ) =V (0) + Z G 1 (W ) 0 g R G 0 (g)dg; where G(g) is strictly increasing, G(0) = 0, and it satises the following ODE R gG 00 = + +c R (1R) G 0 cR(g 1 G)g R G R G 0 : (3.13) The value function V (W ) satises ODE (3.12) in the region (W ;1). Moreover, there exists two positive numbers C 1 and C 2 such that C 1 V (W ) W 1R ; V 0 (W ) W R C 2 ;8W > 0: 105 Proposition 3.4.2 plays a crucial role in this characterization. By Proposition 3.4.2, the value function and the optimal investment strategy can be examined into two separate regions. If the portfolio value is high enough, W > W , the value function is a solution of a second-order linear ODE (3.12). Assuming W is given, then the value function can be characterized by the conditions that V (W ) W 1R C and V 0 (W ) W R C;8W2 (W ;1). On the other hand, in the region W 2 [0;W ) and assuming W is known, we reduce the nonlinear ODE (3.11) to a second-order ODE by using the well- known transformation: V W = g R , 14 and W = G(g) for an increasing auxiliary function G(). The HJB equation of the value function is reduced to the second- order ODE (3.13), and the functionG(g) is characterized by appropriate boundary conditions. Finally, the smooth-t property of the value function determines the numberW uniquely by Proposition 3.3.1 and Proposition 3.4.2. As shown in both Proposition 3.4.2 and Proposition 3.4.3, the numberW is essential in the explicit characterization of the value function. Indeed, the next result demonstrates that, if there is a number W to separate the unconstrained and constrained region in general, the value function is smooth. Lemma 3.4.4. Assume V (x) is a continuous viscosity solution of a second-order (HJB) equation F (x;u;u x ;u xx ) = 0 and the region of x isD = (0;1). Moreover, 14 This transformation is well-known to solve the optimal consumption-portfolio choice problem sinceg is the endogenous consumption rate in the HJB equation. See Karatzas and Shreve (1998), Chapter 5, and its references. Even though there is no optimal consumption rate in our model, this transformation is also essential to characterize the optimal solution in this paper. 106 there exists x such that V (x) is smooth in both (0;x ) and (x ;1), then V (x) must satises the smooth-t condition at x , that is, V 0 (x ) =V 0 (x +). Lemma 3.4.4 can be viewed as a converse statement of Proposition 3.4.2. 15 If the value function is smooth in each region (0;W ); (W ;1), then the value function must be smooth as long as the value function is continuous and a viscosity solution of a HJB equation. This result is interesting in its own right and it can be used to verify the smooth property as will be shown in the next section. 3.4.4 A special case In this section, we consider a particular case that c = 0 and derive the C 2 smooth property of the value function under certain condition on model parameters. Our analysis is motivated by Proposition 3.4.3 and Lemma 3.4.4. A zero consumption rate occurs when the retiree is willing to transform entire wealth to his heirs, or social security safety net and other incomes are sucient for spending during his retirement period. Dene two real numbers 1 = + p 2 + 2( +) 2 2 L ; 2 = p 2 + 2( +) 2 2 L : 15 We thank Prof. Jianfeng Zhang for providing this lemma to simplify our previous arguments. 107 1 and 2 are two roots of the following quadratic equation 1 2 2 L 2 2 +L = 0 and 1 > 0> 2 . Dene V 0 (W ) = 2 ( 1 2 )(1R) 2 L 2 e 2 W Z W 0 x 1R e 2 x dxe 1 W Z W 0 x 1R e 1 x dx : The function V 0 (W ) is a well-dened smooth function for W > 0. We recall the expression of Gamma function, (x) = Z 1 0 s x1 e s ds; which is well-dened for all real number x> 0. 16 Given a W > 0, we dene two real numbers C and W by C = 2 ( 1 2 )(1R) 2 L 2 R1 1 (2R)e 1 W [ +L 2 1 ] 2 L( 2 ) 2 e 2 W + 2 e 2 W ; V 0 0 (W ) + 2 LV 00 0 (W ) 2 L( 2 ) 2 e 2 W + 2 e 2 W (3.14) 16 We refer to Appendix B for basic properties of the Gamma function. 108 and g = 2 ( 1 2 )(1R) 2 L 2 ( 1 ) R1 (2R)e 1 W +C 2 e 2 W +V 0 0 (W ) 1 R : (3.15) Proposition 3.4.5. Under Assumption A, and assume the existence of a positive solution W of the following equation u(0) + + Z g 0 g R G 0 (g)dg = 2 ( 1 2 )(1R) 2 L 2 ( 1 ) R2 (2R)e 1 W + C e 2 W +V 0 (W ); (3.16) where G(g) satises the second-order ordinary dierential equation (ODE), 0 gg , G 00 (g) = R g 1 G 0 (g) +g R (G(g)) R (3.17) with boundary condition G(0) = 0;G(g ) = W and G 0 (g ) = LR 2 (g ) 1 . Then, the number W is unique and the value function V (W ;L) is C 2 smooth and given by V (W;L) = 8 > > > < > > > : u(0) + + R G 1 (W ) 0 g R G 0 (g)dg; WW 2 ( 1 2 )(1R) 2 L 2 ( 1 ) R2 (2R)e 1 W +C e 2 W +V 0 (W ); W >W : (3.18) 109 Proposition 3.4.5 presents a closed-form expression of the value function and the optimal strategy in terms of W and the auxiliary function G(). By its construc- tion, V (W ;L) is the smooth function of the HJB equation. Then, by the unique characterization of the value function in Proposition 3.3.1, the value function is given by the expression (3.18) in Proposition 3.4.5 . In particular, ifL =1, thenG() is a linear function andW =1. In general, the functionG() is non-linear, and its non-linearity is equivalent to the non-myopic property of the optimal strategy, as will be explained in the next section. 3.5 Discussions In this section, we present several properties and implications of the optimal strat- egy. We also solve an optimal portfolio choice problem under a leverage constraint. 3.5.1 Optimal Strategy We start with the optimal investing strategy. Proposition 3.5.1. The optimal portfolio strategy is X(W ) = 8 > > > < > > > : R 2 gG 0 (g); WW =G(g ) L; W >W (3.19) This optimal portfolio strategy is not the myopic strategy. 110 By Proposition 3.4.5, the optimal portfolio strategy is explicitly given by the auxiliary function G() in the unconstrained region. If the optimal strategy is the myopic strategy in the sense that X t = min R 2 W t ;L ; then G(g) must be a linear function of g, and W = LR 2 . In this case, Equation (3.16) fails in general because the left side is a polynomial function while the right hand is virtually an incomplete Gamma function. Intuitively, the risk-capacity constraint aects the investment decision even though the constraint is not binding instantly. As a numerical illustration, we plot the auxiliary functionG() and the investing strategy X(W ) with the wealth W . We choose the risk premium = 0:10 that is consistent with the market data of S & P 500 between 1948 to 2018. The parameter is chosen as 0.07 to consistent to approximately 15 years of life after retirement. We choose the initial dollar amount of 700; 000 in a portfolio of 1 million as the maximum dollar amount in the stock market. Equivalently, 70 percent of the wealth invested in the stock market, as suggested in Vanguard (2018), for the construction of retirement portfolios. We let = 30%.This number is slightly higher than the calibration of the market index since our purpose is to highlight the high likelihood of the market downturn, which is a big concern for the retiree. Other parameters are R = 0:5;c = 0. By calculation, the expected value for retirement level is W = 492; 235. 111 As shown in Figure 1 and Figure 2, since G() is not a linear function, the strategy is not a myopic one. By the same reason, X(W ) as a function of the wealth is not C 1 since @X(W ) @W j W =W = R 2 (gG 0 (g)) 0 G 0 (g) = R 2 1 + gG 00 (g) G 0 (g) 6= 0: The percentage of wealth in the risky asset, X(W ) W , can be analyzed similarly. In the constrained region, W W , the percentage of wealth is L W . The more the wealth, the smaller the percentage of wealth. On the other hand, in the unconstrained region, X(W ) W = R 2 gG 0 (g) G(g) . This function also decreases with respect to the wealth as shown in Figure 3. Both Figure 2 and Figure 3 show that the optimal portfolio strategy displays a strong risk-aversion feature, by comparing with the benchmark model without the risk capacity constraint. 3.5.2 Portfolio value process Given the optimal strategy in Proposition 3.5.1, the optimal wealth process is uniquely determined by dW t = min n R 2 gG 0 (g);L o (dt +dZ t );W 0 =W > 0: (3.20) 112 It can be shown that the stochastic dierential equation (3.20) has a strong solu- tion. Therefore, we can directly analyze the portfolio by the stochastic dierential equation (3.20). The portfolio dynamic is as follows. Assuming wealthW t =W at a timet from below, then in the instant time period, [t;t +t], W t;t+t =W t +L(t + p t); and S t+t = S t +S t t + p t ; where is a standard normal variable. In a good scenario of the stock market, S t+t S t , that is, t + p t > 0, then W t+t W t , so the same dollar amount L is still invested in the stock market. If the market drops in the period [t;t +t], S t+t < S t , then W t+t < W , the portfolio value reduces and is smaller than the threshold W , then a new dollar amount, R 2 V 0 (W t+t ) V 00 (W t+t ) , is invested in the stock market. The process continuous between the unconstrained region and the constrained region. The retirement portfolio's return process is dW t W t = min R 2 gG 0 (g);L W t (dt +dZ t ): Therefore, the instantaneous variance,Var h dWt Wt i converges to zero whenW!1. When the wealth is suciently high, the risk of the portfolio is very small so the 113 retiree is able to resolve the living standard risk, regardless of possible market downturn. Moreover, the instantaneous covariance between dWt Wt and dSt St is Cov dW t W t ; dS t S t = X(W t ) W t 2 ! 0; as W t !1: Hence, the portfolio is virtually independent from the stock market if the portfolio value is large enough. 3.5.3 Alternative strategy We have demonstrated several properties of the proposed retirement portfolio and the portfolio dynamic under the risk capacity constraint. In both theory and practice, an alternative and might be a more popular strategy is to impose a maximum percentage of the wealth invested in the stock market. Namely, X t bW t . This kind of constraint is a special case of leverage X t f(W t ) initially studied in Zariphopoulou (1994), and Vila and Zariphopoulou (1997). In this section, we solve the corresponding optimal portfolio choice problem and compare it with the risk capacity constraint. Our contribution in this section is to show that the leverage constraint cannot resolve the living standard risk for the retiree. 114 We use a predetermined number ofb to represent the highest possible percent- age of wealth invested in risky asset. For instance, b = 0:7 means at most 70 percent of the portfolio is invested in the risky asset. We dene V b (W ) = sup (X) E Z 1 0 e (+)t u(W t )dt where the risk capacity constraint is replaced by X t bW t ;8t. By Lemma 3.4.1, we assume that b < R 2 . Otherwise, V b (W ) is solved by Lemma 3.4.1 for all b R 2 . Proposition 3.5.2. Under the constraint that X t bW t , the value function is V b (W ) = 1 + + (1R)( 1 2 2 b 2 Rb +c) u(W ) and the optimal strategy is X t =bW t . Proposition 3.5.2 states that any constant percentage strategy X t = bW t is an optimal policy under a leverage constraint. Given this strategy, the wealth portfolio satises dW t = W t ((bc)dt +dZ t )). Therefore, W t is a GBM, and there is a positive probability that the retirement portfolio is less than any a positive number. Same as in the benchmark model in Lemma 3.4.1, the portfolio is perfectly correlated to the stock market; thus, a downturn market could wipe out the retirement portfolio. Therefore, by implementing the optimal investment 115 strategy in Proposition 3.5.2, the retiree is subject to a substantial living standard risk if the stock market declines signicantly. 3.5.4 Implications In this section, we explain several implications of our model. First, the retiree needs to invest in the stock market since the all-safe strategy is too conservative to sustain the spending given longevity risk. Second, we demon- strate that the risk capacity constraint captures the retiree's living standard risk, and the optimal portfolio under the risk capacity constraint is a reasonable retire- ment strategy. Specically, if the retirement portfolio value is not high enough, the retiree should invest some money in the stock market to increase the growth rate. However, when the portfolio value is high enough, the retiree implements a \contingent constant-dollar amount strategy" by only placingL dollar of the port- folio in the stock market as long as the portfolio value is higher than W . Third, under the risk capacity constraint, the higher the portfolio value, the smaller per- centage of the wealth in the stock market. The portfolio can reduce the living standard risk because its return is asymptotically independent of the stock market for a high level of the portfolio value. Fourth, the risk-capacity constraint and the leverage constraint yield dierent investment strategies. By implementing a leverage constraint, the generating retirement portfolio is perfectly correlated to 116 the stock market, so the retiree faces a substantial market risk. These properties are illustrated in Figure 2 and Figure 3. How to choose the maximum dollar amount of L or l = L=W 0 is of practi- cally interesting. Given the risk capacity level L, and dene W L by the following equation R 2 W L =L; thenW L LR 2 . W L is the portfolio wealth level at which the benchmark's strat- egy in Lemma 3.4.1 provides the exact dollar amount of L in the stock market. Since the portfolio is not myopic, the threshold W is higher than W L in Propo- sition 3.4.5. It implies that X(W ) W is always bounded by R 2 in the constrained region. This point is illustrated in Figure 2 and Figure 3. If L 1 <L 2 , the invested dollar amount in the stock market under the constraint X t L 1 is bounded by the corresponding money invested in the stock market for the level L 2 . While an increasing level of L invests the expected return of the portfolio, the portfolio becomes riskier, as shown in Figure 4. Therefore, a suitable level of L depends on its counter-eect to the expected return and risk. 117 3.6 Conclusion In this paper, we solve an optimal investing problem for a retiree facing longevity risk and living standard risk. By imposing a time-varying risk capacity constraint on the portfolio choice problem, the corresponding optimal strategy enables the retiree to resolve the concern on the standard of living in retirement. We also compare the risk capacity constraint with a leverage constraint. We show that the leverage constraint yield a popular constant-percentage strategy, introducing a substantial living standard risk. By contrast, the risk-capacity constraint implies a contingent constant-dollar strategy to reduce the standard living risk. Appendix A. Proofs To simplify the notations, we use J 0 (W );J 1 (W ) to represent J(W ; 0);J(W ;1). Lemma 3.6.1. The boundary condition for Proposition 1.1 is J(0) = 1+K + u(0). Proof. It suces to show that if W 0 = 0, then W t 0;8t > 0 for all admissible strategies. By using the equation (3.2) and x T > 0, a simple calculation leads to (for W 0 = 0) W t e ct = Z t 0 e cs X s d ~ Z s 8 0<t<T 118 where ~ Z s = Z s + s; 0 s T . f ~ Z s g 0sT is a Brownian Motion under ~ P T by Girsanov's theorem, d ~ P T dP =exp Z T 1 2 2 T : SinceW t is non-negative, the local martingale R t 0 e cs X s d ~ Z s is always non-negative, hence, a supermartingale under ~ P T . Therefore, ~ E[W t e ct ] 0 8 0 < t < T: This implies W t 0;80 < t < T a:s. Letting T to innity, we conclude that W t 0;80<t<1;a:s:. HenceJ(0) = R 1 0 e (+)t (Ku(0)+u(0))dt = 1+K + u(0). 2 Lemma 3.6.2. The value function J(W ) is (strictly) continuous, increasing and concave. Proof. LetA W be the admissible set of (X t ) for the control problem starting at W 0 = W . Clearly,A W 1 A W 2 if W 1 W 2 . The increasing property follows. Next, we show that the J(W ) is concave. For X 1 2A W 1 andX 2 2A W 2 , it is easy to verify that X 1 + (1)X 2 2A W 1 +(1)W 2 . Therefore, J(W ) is concave by the concavity of u(). Since J(W ) is concave in (0;1), hence it is continuous in (0;1).For the continuity at 0, we observe that J(0;L)J(W;L)J(W;1) and J(0;L) =J(0;1). Now sending W to 0, the desired result follows. Finally, we proveJ(W ) is strictly increasing by a contradiction argument. Assume not, since J() is concave, then there exists ^ W , such that J(W ) is constant on 119 [ ^ W;1). However, this is impossible because its lower boundJ 0 (W ) goes to innity as W!1. 2 Lemma 3.6.3. Dynamic Programming Principle: If ^ is a stopping time of the ltrationF t , then J(W ) = sup A W E Z ^ 0 e (+)t (Ku((1)W t ) +u(cW t ))dt +e (+)^ J(W ^ ) Proof. The proof is standard, see Fleming and Soner (2006), Chapter 3. 2 Proof of Proposition 3.3.1. We show that J(x) is the viscosity solution of (3.7) and such a solution is unique. The existence part is standard in the theory of viscosity solution. See Fleming and Soner (2006), Chapter 3. It can be also modied with a similar argument in Zariphopuulou (1994), Theorem 3.1, which studies a relevant leverage constraint. To prove the uniqueness part it suces to prove the following comparison principle: ifJ(W ) is the viscosity supersolution and J(W ) is the viscosity subsolution and satises J(0) J(0), then J(W ) J(W ) for all W2 (0;1). In this situation, since the functionu(W ) is not Lipschitz, we cannot apply the standard comparison principle directly in our situation. For this purpose, we will 120 separate (0;1) into two parts: (0;) and (;1) for some proper positive number , then show that8> 0, J(W ) +J(W ); 8W > 0: Since J(0)J(0), there exists > 0, such that J(W ) +J(W ); 8W2 (0;]: (3.21) On the region W2 (;1), u(W ) is Lipchitz so is the function Ku((1)W ) + u(cW ). Since (W ) + is the test function for J(W ) +, J(W ) is also a superso- lution of (3.7), then we utilize the standard comparison principle in Fleming and Soner (2006), Chapter 5 to obtain J(W ) +J(W ); 8W2 (;1) (3.22) Now, combine (3.21) and (3.22), we have J(W ) +J(W ); 8W > 0: Since is arbitrary, the comparison principle holds and the proof is now complete. 2 121 Proof of Lemma 3.4.1. By Proposition 3.3.1, it suces to verify that the functionV (W ) =au(W ) is aC 2 function of the HJB equation (3.8), in the absence of constraintX t L, for a positive numbera. Given the specication of the value function, X = R 2 W . Then, the HJB equation becomes a + 1R R +c = 1 1R ; thena = 1 ++c(1R) . By Assumption A,a> 0, thenV (W ) =a W 1R 1R is the value function of this optimal portfolio choice problem. 2 We start with several lemmas before proving Proposition 3.4.2. Lemma 3.6.4. AssumeV (W ) isC 2 smooth, then there exists two positive numbers C 0 ;C 1 such that C 1 0 W R V 0 (W )C 1 1 W R ;8W > 0: In particular, lim W!0 V 0 (W ) =1 and lim W!1 V 0 (W ) = 0. 122 Proof. By a direct calculation, V 0 (W ) = u(W ) ++c(1R) and Lemma 3.4.1 states that V 1 (W ) = u(W ) ++c(1R) . Then, by using the concave property of the functionV () (Lemma 3.6.2), for any positive number W > 0 and E > 0, we have V 0 (W ) 1 E [V (W +E)V (W )] 1 E [V 0 (W +E)V 1 (W )] = 1 E 1 1R 1 + +c(1R) (W +E) 1R 1 1R 1 + +c(1R) W 1R : Choosing E =kW , we have V 0 (W ) 1 k 1 1R 1 + +c(1R) (k + 1) 1R 1 1R 1 + +c(1R) W R Let C 1 0 = sup k>0 1 k(1R) 1 + +c(1R) (k + 1) 1R 1 + +c(1R) + ; (3.23) where x + = max(x; 0). It is easy to see C 0 is positive. By the same reason, for any E =W;2 (0; 1), we have V 0 (W ) 1 W [V (W )V (WW )] 1 W V 1 (W )V 0 (WW ) 1 (1R) 1 + +c(1R) 1 + +c(1R) (1) 1R W R : 123 Let C 1 1 = inf 0<<1 1 (1R) 1 + +c(1R) 1 + +c(1R) (1) 1R + : (3.24) The proof is nished. 2 Lemma 3.6.5. AssumeV () isC 2 smooth, then there exists ~ W such that the open interval (0; ~ W ) is included inU, and X ( ~ W ) =L: Proof. Assume not, then there exists a sequence W n ! 0 such that X (W n ) =L. Apply the denition ofB of the corresponding HJB equation. We have V (W n ) 1 1R W 1R n cW n V 0 (W n ) + 1 2 LV 0 (W n ) = 1 1R W 1R n + ( 1 2 LcW n )V 0 (W n ): SinceV (W ) is continuous (Lemma 3.6.2), asn!1, the left hand side of the last inequality approaches to V (0) = 0. However, 1 2 LcW n ! 1 2 L, so the term 1 2 LcW n V 0 (W n ) will tends to +1 (By Lemma 3.6.4) on the right hand side of the last inequality, which is a contradiction. 2 Proof of Proposition 3.4.2. For simplicity, we present the proof for c = 0. By assumption, V () is C 2 smooth. We dene a function Y (W ) =V 0 (W ) + 2 LV 00 (W );W > 0: 124 Then, Y (W )< 0;8W2U, and Y (W )> 0 for any W2B. Step 1. In the unconstrained region, the value functionV () satises the ODE (3.11). By dierentiating the ODE equation once and twice, we obtain ( +)V 0 =u 0 (W ) 2V 0 + (V 0 ) 2 V 000 (V 00 ) 2 ; and ( +)V 00 =u 00 (W ) 2V 00 + (V 0 ) 2 V 0000 (V 00 ) 2 + 2V 0 V 000 (V 00 ) 3 (V 00 ) 2 V 0 V 000 : By the denition of Y (W ), the last two equations imply ( +)Y =u 0 (W ) +L 2 u 00 (W ) 2Y + (V 0 ) 2 (V 00 ) 2 Y 00 + 2V 0 V 000 (V 00 ) 3 V 00 2 L Y V 0 2 L Y 0 : We then dene an elliptic operator on the unconstrained region by L U [y] (V 0 ) 2 (V 00 ) 2 y 00 2V 0 V 000 (V 00 ) 3 V 00 2 L y V 0 2 L y 0 + ( + + 2)yu 0 (W )L 2 u 00 (W ): Therefore,L U [Y ] = 0 inU. 125 Step 2. In the constrained regionB, by dierentiating the ODE (3.12) of V (W ) once and twice, we have ( +)V 0 =u 0 (W ) +LV 00 + 1 2 2 L 2 V 000 and ( +)V 00 =u 00 (W ) +LV 000 + 1 2 2 L 2 V 0000 : Then, ( +)Y =LY 0 + 1 2 2 L 2 Y 00 +u 0 (W ) + 2 Lu 00 (W ): Similarly, we dene an elliptic operator L B [y] = 1 2 2 L 2 y 00 Ly 0 + ( +)yu 0 (W ) 2 Lu 00 (W ): ThenL B [Y ] = 0 inB: Step 3. By Lemma 3.6.5, there exists W 1 > 0 such that (0;W 1 )U and Y (W 1 ) = 0. It suces to show that (W 1 ;1)B by a contradiction argument. Assume not, then there exists W 2 >W 1 such that (W 1 ;W 2 )B and Y (W 2 ) = 0. 126 Moreover, there exists W 3 (possibly innity) such that (W 2 ;W 3 )U. It remains to derive a contradiction to nish the proof. First, sinceY (W )> 0 in (W 1 ;W 2 )B andY (W 1 ) =Y (W 2 ) = 0, we show that the constant function y = 0 is not the supersolution forL B [y] = 0 in (W 1 ;W 2 ). The reason is as follows. By Proposition 3.3.1, the function Y is the solution of the equationL B [Y ] = 0. If y = 0 is the supersolution, then y = 0 Y in (W 1 ;W 2 ) by the comparison principle, which contradicts to the fact that Y (W )> 0;8W 2 (W 1 ;W 2 ). Since y = 0 is not the supersolution, then there exists some W2 (W 1 ;W 2 ) such that L B [0] =[u 0 (W ) + 2 Lu 00 (W )] =W R1 (W 2 LR) 0: Therefore, W 2 LR 0 for some W2 (W 1 ;W 2 ); thus, W 2 2 LR 0. It implies that u 0 (W 2 ) + 2 Lu 00 (W 2 ) 0: (3.25) Second, in (W 2 ;W 3 )U, by calculation, we have L U [0] =W R1 (W 2 LR): 127 By Equation (3.25), we have L U [0] 0: Since Y (W 2 ) = Y (W 3 ) = 0, then the constant function y = 0 is the subsolution forL U [y] = 0. By the comparison principle, we obtain Y (W ) 0;8W2 (W 2 ;W 3 ) which is impossible since (W 2 ;W 3 ) belongs to the unconstrained region. We notice that ifW 3 =1, then we apply the comparison principle for the unbounded domain (W 2 ;1). See Fleming and Soner (2006) for the comparison principle. The proof is thus completed. 2 Proof of Proposition 3.4.3. Under Assumption A and Lemma 3.4.1,V 1 (W )< 1, then the value function V (W ) is well-dened. By Proposition 3.3.1, the value function V (W ) is the unique viscosity solution of the HJB equation (3.7). Moreover, the unconstrained region and the constrained region are given by [0;W ); (W ;1) for a positive number W , by Proposition 3.4.2. First, in the constrained region, the general solution of the homogeneous second-order linear ODE Lf(W ) = ( +)f;Lf = 1 2 2 L 2 f 00 (W ) + (LcW )f 0 (W ) 128 is smooth (See Borodin and Salminen, 1996, Chapter 2). Therefore, by the method of variation of parameters method (King, Billinghan and Otto, 2003, Chapter 1), the general solution to the ODE (3.12) is smooth in the constrained region. Second, for the constrained region, we use the variable g such that V W =g R . This variable is well-dened because of the strictly concavity of the value function. Then we haveW =G(g) for an increasing functionG(). By Proposition 3.4.2, the unconstrained region (0;W ) corresponds one-one to a region (0;g ). Moreover, Lemma 3.6.4 implies G(0) = 0. By dierentiating both sides of equation (3.11), the HJB equation for the value function is reduced to the following second-order ODE for the function G(), G 00 (g) = R ( + +c)g 1 g R1 G R G 0 c R 2 g 2 G: Then G() is smooth inside the region (0;g ). Lemma 3.4.1 and Lemma 3.6.4 provide boundary conditions to solve the ordinary dierential equations in the unconstrained and constrained region. Because V (W ) is the unique smooth solu- tion of the HJB, then Proposition 3.4.2 implies a uniqueW such that the function V (W ) presented above is smooth in the region (0;1). 2 Proof of Lemma 3.4.4: Without lost of generality, we assume thatV 0 (x )< 0<V 0 (x +) and derive a contradiction. Since there is no available test function, 129 the subsolution holds automatically. We next check the supersolution. Let the test function in the form of (x)V (x ) + 1 2 [V 0 (x ) +V 0 (x +)] (xx ) +(xx ) 2 We claim that can take any real value: To make (x) the valid test function, we need to guarantee that (x) V (x) when x is in a small neighborhood of x . However, when x! x , the linear term 1 2 [V 0 (x ) +V 0 (x +)] (xx ) will dominate the quadratic term (xx ) 2 . Therefore, when x and x are close enough, we could choose suciently large such that (x) V (x). It is now clear that can take any value. Now, apply the viscosity property at x , we have F x ;V (x ); 1 2 [V 0 (x ) +V 0 (x +)]; 2 0; which is impossible by the free choice of the parameter . 2 Proof of Proposition 3.4.5. We construct explicitly a candidate function of the value function by assuming its smooth property, and verify it is indeed the smooth value function under assumptions. The proof is divided into several steps. Step 1. We derive candidate solution of equation (3.12) in the constrained region, assumingW is known. To simplify notation we still useV (W ) to represent the feasible solution of the value function, being a solution of a corresponding ODE. 130 We notice that the solution of the homogeneous ODE, 1 2 2 L 2 V WW +LV W ( +)V (W ) = 0, can be written as C 1 e 1 W +C 2 e 2 W : By the method of partial integral, one particular solution for the non-linear ODE (3.11) is V 0 (W ) = Z W 0 2 2 L 2 u(x) e 1 x e 2 W e 1 W e 2 x W (e 1 x ;e 2 x ) dx where W (f;g) =fg 0 f 0 g is the Wronskian determinants of two solutionsff;gg of a homogeneous second-order ODE. By a straightforward calculation, V 0 (W ) = 2 ( 1 2 )(1R) 2 L 2 e 2 W Z W 0 x 1R e 2 x dxe 1 W Z W 0 x 1R e 1 x dx : Therefore, the function V 0 (W ) is well-dened and it can be expressed in terms of the incomplete gamma function. The general solution of the ODE (3.11) is V (W ) =C 1 e 1 W +C 2 e 2 W +V 0 (W ): (3.26) Step 2. We show thatC 1 = 2 ( 1 2 )(1R) 2 L 2 ( 1 ) R2 (2R) in equation (3.26). 131 By Lemma 3.4.1, V (W ) W 1R is bounded above by a constant. Therefore, V (W )=e 1 W ! 0 as W ! 1 in the constrained region. On the other hand, by (3.26), we have as W!1 C 1 +C 2 e ( 2 1 )W + V 0 (W ) e 1 W ! 0 (3.27) Note that V 0 (W ) e 1 W = 2 ( 1 2 )(1R) 2 L 2 e ( 2 1 )W Z W 0 x 1R e 2 x dx Z W 0 x 1R e 1 x dx : (3.28) For the the rst term in the bracket of (3.28), since 2 < 0, we have e ( 2 1 )W Z W 0 x 1R e 2 x dx = e 1 W Z W 0 x 1R e 2 (Wx) dx e 1 W Z W 0 x 1R dx = e 1 W W 2R 2R which tends to 0 as W!1. For the second term in the bracket of (3.28), change of variable y = 1 x leads to Z W 0 x 1R e 1 x dx = ( 1 ) R2 Z 1 W 0 y 1R e y dy: 132 Therefore, it is an incomplete gamma function (2R; 1 W ), By the property of incomplete Gamma function (3.31) in Appendix B, ( 1 ) R2 Z 1 W 0 y 1R e y dy! ( 1 ) R2 (2R): Then, we obtain C 1 = 2 ( 1 2 )(1R) 2 L 2 ( 1 ) R2 (2R): Step 3. We characterize the feasible solution in the unconstrained region. Following the classical method in Karatzas and Shreve (1998), Villa and Zariphopoulou (1997), we introduce a new variableg byV 0 (W ) =g R . By Lemma 3.6.2, V 0 (W ) is a decreasing function. Then, W =G(g) for an increasing function G. We characterize the function G(g) and derive the feasible function in terms of the auxiliary function G(). Since W = G(g(W )), then 1 = G 0 (g)g 0 (W ), yielding G 0 (g) = 1=g 0 (W ). By using V 00 (W ) = Rg R1 G 0 (g) ; the HJB equation becomes ( +)V (G(g)) = 1 1R [G(g)] 1R + R g R+1 G 0 (g): 133 We dierentiate both sides of the above equation again with respect toW , obtain- ing G 00 (g) = R g 1 G 0 (g) +g R (G(g)) R (3.29) SinceG() is strictly increasing, the unconstrained region ofW corresponds one-one to a region of g. Moreover, by Lemma 3.6.1, we have, for any WW , V (W ) = u(0) + + Z W 0 V W dW = u(0) + + Z G 1 (W ) 0 g R G 0 (g)dg: Therefore, the feasible value function in the unconstrained region is uniquely deter- mined by the auxiliary function G(). Step 4. We derive the boundary condition for ordinary dierential equation (3.29), assuming the existence of W . Since V 0 (0) = +1 (Lemma 3.6.4), we have G(0) = 0. Second, at W = W , G(g ) = W . Moreover, the constraint 2 V 0 (W ) V 00 (W ) =L implies that G 0 (g ) = LR 2 (g ) 1 : 134 By the characterization of the feasible value function in Step 3, the required smooth-t condition is (g ) R = 2 ( 1 2 )(1R) 2 L 2 ( 1 ) R1 (2R)e 1 W +C 2 e 2 W +V 0 0 (W ) Therefore, the boundary condition of the ODE (3.29) are G(0) = 0, G(g ) = W and G 0 (g ) = LR 2 (g ) 1 . Step 5. We determine the parameterC in terms ofW and the parameterW . The smooth-t equation can be written as 2 V 0 (W +) V 00 (W +) = L. Then, the feasible function in Step 2 implies that 2 ( 1 2 )(1R) 2 L 2 ( 1 ) R1 (2R)e 1 W +C 2 e 2 W +V 0 0 (W ) = 2 L 2 ( 1 2 )(1R) 2 L 2 ( 1 ) R (2R)e 1 W +C 2 2 e 2 W +V 00 0 (W ) Both parametersg andC are determined uniquely byW as in (3.15) and (3.14). It remains to solve the parameter W . The value-matching equation, V (W ) = V (W +), can be written as u(0) + + Z g 0 g R G 0 (g)dg = 2 ( 1 2 )(1R) 2 L 2 ( 1 ) R2 (2R)e 1 W + Ce 2 W +V 0 (W ): (3.30) By assumption, there exists such a positive number W in equation (3.16). 135 By the above discussions in Step 1 - Step 6, the presented function is a smooth solution of the HJB equation. Then, W is unique andV (W;L) is given by (3.18) in Proposition 3.3.1. 2 Proof of Proposition 3.5.1. In the unconstrained region, V W =g R . Since V 00 (W ) = Rg R1 G 0 (g) , the optimal strategy isX(W ) = R 2 gG 0 (g). G() is not a linear function in general. Otherwise, W = R 2 L. Then equation (3.16) is viewed as an equation of of L, in which both sides are analytical function of the variable L. By the analytical function property, it cannot hold for a general choice of the capacity level L. 2 Proof of Proposition 3.5.2. By using the same argument in proving Propo- sition 3.3.1, we can prove that the value function is the unique viscosity solution of the HJB equation (for V (W ) =V b (W )) ( +)V (W ) = max 0XbW 1 2 2 X 2 V 00 +XV 0 +u(W )cWV 0 (W ) with initial valueV (0) = 0. We next nd aC 2 solution of the formV (W ) =a W 1R 1R to the above HJB equation for a positive number a. 136 By a straightforward computation in the HJB equation, and since X = bW , we have ( +)a W 1R 1R = 1 2 2 b 2 W 2 a(R)W R1 +bWaW R + 1 1R W 1R cWaW R ; yielding a = 1 + + (1R)( 1 2 2 b 2 Rb) +c(1R) : Since b< r R 2 , then X =bW is the solution in max 0XbW 1 2 2 X 2 V 00 +XV 0 . The proof is completed. 2 Appendix B: Incomplete Gamma function The lower incomplete gamma function and the upper incomplete gamma function are dened by by (s;x) = Z 1 x t s1 e t dt; (s;x) = Z x 0 t s1 e t dt: For any Re(s) > 0, the functions (s;x) and (s;x) can be dened easily. Each of them can be developed into a holomorphic function. In fact, the incomplete 137 Gamma function is well-dened for all complexs andx, by using the power series expansion (s;x) =x s (s)e x 1 X k=0 x k (s +k + 1) : The following asymptotic behavior for the incomplete gamma function are used in the proof of Proposition 3.4.5. lim x!1 (s;x) = (s); (3.31) and lim x!0 (s;x) x s = 1 s : (3.32) See N.M. Temme, \The asymptotic expansion of the incomplete gamma functions" , SIAM J. Math. Anal. 10 (1979), pp. 757 - 766. It can also be connected with Kummer's Con uent Hypergeometric Function, when Re(z)> 0, (s;z) =s 1 z s e z M(1;s + 1;z) 138 where M(1;s + 1;z) = 1 + z (s + 1) + z 2 (s + 1)(s + 2) +::: Therefore, the incomplete Gamma functions can be computed eectively. 139 Figure 3.1: This gure displays the auxiliary function G(g) in the unconstrained region. The model parameters are = 0:1; = 0:3;R = 0:5;l = 0:7; and W 0 = 1; 000; 000. The x-axis represents the parameter g and the y-axis represents the parameter G (in the unit of 100,000). As shown, this function is NOT a linear function, thus, the optimal strategy is not a myopic one as shown in Proposition 3.5.1. 140 Figure 3.2: This gure displays the optimal portfolio strategy in three dierent strategies. \Model" denotes the model under a constraint X t L = 0:7W 0 . Parameters are = 0:1; = 0:3;R = 0:5;l = 0:7: By calculation, the wealth threshold W = 492; 235 above which the retiree invests 700,000 in the stock market. When the wealth portfolio is smaller than W , the optimal strategy is R 2 gG 0 (g) where the auxiliary function G() is illustrated in Figure 1. \Bench- mark" denotes the optimal dollar amount in Lemma 5.1 in the absence of the constraint on the risky asset investment. Finally, \BPC" denotes a bounded per- centage constraint that X t 1 2 R 2 . 141 Figure 3.3: This gure displays the optimal percentage of wealth, X(W ) W , invested in the stock market. The parameters are the same as in Figure 2. As shown, the per- centage is decreasing in the entire region ofW . We also notice that the percentage curve is steeper in the beginning of the retirement time when the wealth is closes to initial wealth than that when the wealth closes to the thresholdW . As a function of W , X(W ) W is not C 1 smooth in contrast to the standard model (Richard, 1965, and Liu and Lowenstein, 2002) or the model under leverage constraint (Proposition 6 in this paper, and Vila and Zariphopiulou (1997)). 142 Figure 3.4: This gure displays the eect of the risk capacity level, L, on the investing strategy. The parameters are the same as in Figure 2. As shown, the higher the capacity level L, the higher the dollar amount in the risky asset. The gure also demonstrates that the threshold, W , positively depends on L. The risk capacity level L aects both the expected level of spending and the investing strategy even when the portfolio value is smaller than this threshold. 143 Bibliography [1] Ahn, S., K. Choi., and B. Lim., 2019. Optimal Consumption and Investment under Time-Varying Liquidity Constraints. Journal of Financial Quantitative Analysis, forthcoming. [2] Bardi,M. and Capuzzo-Dolcetta, I. (1997). Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations, Birkhauser Basel. [3] Bengen, W.,1994. Determining Withdrawal Rates Using Historical Data. Jour- nal of Financial Planning, October, 14-24. [4] Bernheim, B.D., Ray, D. (1989). Collective Dynamic Consistency in Repeated Games. Games and Economic Behavior Vol. 1, 295{326. [5] Blanchet-Scaillet, C., N. El Karouri, N., M. Jeanblanc., and L. Martellini., 2008. Optimal InvestmentDecisions when Time-Horizon is Uncertain. Journal of Mathematical Economics, 44, 1100-1113. 144 [6] Benzoni, L., P. Collin-Dufresne and R. Goldstein., 2007. Portfolio choice over the Life-Cycle when the Stock and Labor Markets are Cointegrated. Journal of Finance, 62, 2123-2167, [7] Bismut, J-M., 1975. Growth and Optimal Intertemporal Allocation of Risks. Journal of Economic Theory, 10, 239-257. [8] Bodie, Z., R.Merton., and W. Samuelson., 1992. Labor Supply Flexibility and Portfolio Choice in a Life Cycle Model. Journal of Economic Dynamics and Control, 16, 427-449. [9] Bodie, Z., J. Detemple., S. Otruba., and S. Walter. Optimal Consumption- Portfolio Choices and Retirement Planning. Journal of Economic Dynamics Control 28, 1115-1148, (2004). [10] Bodie, Z., J. Detemple., and R. Rindisbacher., 2009. Life Cycle Finance and the Design of Pension Plans. Annual Review of Financial Economics, 1, 249- 285. [11] Borodine, A., and O. Salminen., 1996. Handbook on Brownian motion-Facts and Formulae. Birkhauser, Basel. [12] Brady, P., 2010. Measuring Retirement Resource Adequacy, Journal of Pen- sion Economics and Finance, 9 (2), 236-62. 145 [13] Campbell, J., and R. Sigalov., 2019. Portfolio Choice with Sustainable Spend- ing: A Model of Reaching for Yield. Working paper. [14] Chen, X., and W. Tian., 2004. Optimal Portfolio Choice and Consistent Per- formance. Decisions in Economics and Finance, 37 (2), 454-474. [15] Coile, C., and K. Milligan., 2009. How Household Portfolios Evolve After Retirement: The Eects of going and health Shocks. Review of Income and Wealth, 5 (2), 226-48. [16] Cocco, J., F. Gomes and P.J. Maenhout., 2005. Consumption and Portfolio Choices over the Life-Cycle. Review of Financial Studies,18, 491-533. [17] Cole, H. L. and Kocherlakota.N. Ecient Allocations with Hidden Income and Hidden Storage. Review of Economic Studies 68, 523-542. (2001) [18] Cvitani c ,J, Wan, X, and Zhang, J. (2009) Optimal compensation with hidden action and lump{sum payment in a continuous{time model. Applied Mathe- matics and Optimization. 59(1), 99{146. [19] Cvitani c, J and Zhang, J. (2012) Contract theory in continuous{time models. Springer [20] Detemple, J., and S. Murthy., 1997. Equilibrium Asset Prices and No- Arbitrage with Portfolio Constraints. Review of Financial Studies, 10, 1133- 1174. 146 [21] Detemple, J., and A. Serrat., 2003. Dynamic Equilibrium with Liquidity Con- straints. Review of Financial Studies, 16, 597-629. [22] Dybvig, P., 1995. Duesenberry's Ratcheting of Consumption: Optimal Dynamic Consumption and Investment Given Intolerance for Any Decline in Standard of Living. Review of Economic Studies, 62, 287-313. [23] Dybvig, P., and H. Liu., 2010. Lifetime Consumption and Investment: Retire- ment and Constrained Borrowing. Journal of Economic Theory, 145, 885-907. [24] Elie, R., and N. Touzi., 2008. Optimal Litetime Consumption and Investment under a Drawdown Constraint. Finance and Stochastics, 12, 299-330. [25] El Karoui, N., and M. Jeanblance-Picque., 1998. Optimization of Consump- tion with Labor Income. Finance and Stochastics, 2, 409-440. [26] Farrell, J. and Maskin, E. (1989) Renegotiation in Repeated Games. Games and Economic Behavior Vol. 1, 327{360. [27] Fleming, W.H. and Soner, H.M., 2006. Controlled Markov Processes and Vis- cosity Solutions (second edition), Stochastic Modeling and Applied Probability (25), Springer Verlag. [28] Goldman, D., ad P. Orszag., 2014. The Growing Gap in Life Expectancy: Using the Future Economic Model to Estimate Implications for Social Security and Medicare. American Economic Review, 104 (4), 230 - 234. 147 [29] Gu, A, Viens, F and Shen, Y. (2020) Optimal excess-of-loss reinsurance con- tract with ambiguity aversion in the principal-agent model. Scandinavian Actu- arial Journal, (4), 342-375 [30] Gustman, A., L. Steinmeier., and N. Tabatabai., 2012. How Did the Recession of 2007-2009 Aect the Wealth and Retirement of the Near Retirement Age Population in the Health and Retirement Study? Social Security Bulletin 62 (4), 47-66. [31] Gomes, F.J and A. Michaelides. Optimal Life-Cycle Asset Allocation: Under- standing the Empirical Evidence. Journal of Finance 60, 869-904 (2005). [32] Gustman, A,.L., T. K. Steinmeier., 1986. A Structural Retirement Model. Econometrica, 54, 555-584. [33] Holmstrom, B. and Milgrom, P.(1987)Aggregation and linearity in the provi- sion of intertemporal incentives. Econometrica 55, 303-328. [34] Kahneman , D. and Tversky, A. (1979). Prospect theory: An analysis of deci- sion under risk. Econometrica 47, 263{292. [35] Kahneman, D. and Tversky, A. (1992). Advances in prospect theory: Cumula- tive representation of uncertainty. Journal of Risk and Uncertainty 5, 297{323. [36] Karatzas, I., and S. Shreve., 1998. Methods of Mathematical Finance, Appli- cations of Mathematics (39), Springer. 148 [37] Karatzas, I., and H. Wang., 2001. Utility Maximization with Discretionary Stopping. SIAM Journal on Control and Optimization, 89 (1), 306-329. [38] King, A., J. Billingham., and S. Otto., 2003. Dierential Equations: Linear, Nonlinear, Ordinary, Partial, Cambridge University Press. [39] Kydland, F. and Prescott, E. (1977). Rules rather than discretion: The incon- sistency of optimal plans. Journal of Political Economy 85 473{492. [40] Liu, H., and M. Loewenstein., 2002. Optimal Portfolio Selection with Trans- action Costs and Finite Horizons. Review of Financial Studies, 15, 805-835. [41] Malkiel, B. 1999. A Random Walk Down Wall Street, Norton & Company, Inc. [42] Merton, R. C., 1971. Optimum Consumption and Portfolio Rules in a Con- tinuous Time Model. Journal of Economic Theory, 3, 373-413. [43] Modigliani, F., 1986. Life Cycle, Individual Thrift, and the Wealth of Nations. American Economic Review, 76, 297-313. [44] Possama ,D and Touzi,N.(2020) Is there a Golden Parachute in Sannikov's principal{agent problem? arXiv:2007.05529. [45] Poterba, J., 2014. Retirement Security in an Aging Population. American Economic Review: Papers and Proceedings, 104 (5), 1-30. 149 [46] Ren, Z., N. Touzi,., and J. Zhang., 2014. An Overview of Viscosity Solution of Path Dependent PDES, Stochastic Analysis and Applications, 100, 398-454. [47] Richard, S., 1975. Optimal Consumption, Portfolio and Life Insurance Rules for an Uncertain Live Individual in a Continuous Time Model, Journal of Financial Economics, 2, 187-203. [48] Roozbeh, H.,and A. Shourideh., 2019. Retirement Financing: An Optimal Reform Approach. Econometria, 87 (4), 1205-1265. [49] Sannikov, Y. (2008). A Continuous-Time Version of the Principal-Agent Prob- lem. Review of Economic Studies. 75, 957-984. [50] Sch attler, H and Sung, J (1993). The rst{order approach to the continu- ous{time principal{agent problem with exponential utility. Journal of Eco- nomic Theory 61(2), 331{371. [51] Strulovici, B., and M. Szydlowski., 2015. On the Smoothness of Value Func- tions and the Existence of Optimal Strategies in Diusion Models. Journal of Economic Theory, 159, 1016-1055. [52] Strotz, R. H. (1955) Myopia and inconsistency in dynamic utility maximiza- tion. Rev. Econ. Stud. 23, 165-180. 150 [53] Sung, J. (1995) Linearity with project selection and controllable diusion rate in continuous{time principal-agent problems. The RAND Journal of Eco- nomics. 26(4), 720{743. [54] Sung,J.(1997) Corporate insurance and managerial incentives. Journal of Eco- nomic Theory 74(2),297{332 . [55] Sung, J. (2001) Lectures on the theory of contracts in corporate nance: from discrete-time to continuous-time models. Com2Mac Lecture Note Series, 4. [56] Sung, J., Zhang, J. and Zhu, Z. A unied continuous time Principal-Agent model with hidden action, in preparation. [57] Tian, W. and Zhu, Z. A Portfolio Choice Problem Under Time-Varying Risk Capacity Constraint , submitted. [58] Vanguard Group., 2018. How America Saves. [59] Viceira, L.M., 2001. Optimal Portfolio Choice for Long-Horizon Investors with Nontradable Labot Incomes. Journal of Finance, 56, 433-470. [60] Vila, J., and T. Zariphopoulou., 1997. Optimal Consumption and Portfolio Choice with Borrowing Constraints. Journal of Economic Theory, 77, 402- 431. [61] Williams, N(2015) A solvable continuous time dynamic principal{agent model Journal of Economic Theory 159(part B):989{1015. 151 [62] Yaari, M., 1965. Uncertain Lifetime, Life Insurance, and the Theory of the Consumer. Review of Economic Studies, 2, 137-159. [63] Yogo, M., 2016. Portfolio Choice in Retirement: Health Risk and the Demand of Annuities, Housing, and Risky Assets. Journal of Monetary Economics, 80, 17-34. [64] Zariphopoulou, T., 1994. Consumption and Investment Models with Con- straints, SIAM Journal on Control and Optimization, 32, 59-85. [65] Zhou, X. Y. (2010) Mathematicalising behavioural nance. In Proceedings of the International Congress of Mathematicians. Volume IV 3186-3209. Hindus- tan Book Agency, New Delhi. MR2828011 [66] Zhang,J. (2017) Backward Stochastic Dierential Equations: From Linear to Fully Nonlinear Theory. Springer, New York, [67] Zhang, J. and Zhu, Z. A dynamic Principal-Agent model, in preparation. 152
Abstract (if available)
Abstract
This essay consists of three projects I worked on during my PhD study. ❧ The first project is a joint work with Prof Jianfeng Zhang on a dynamic principal agent problem. In this paper, we consider a dynamic Principal-Agent problem on finite time horizon [0,T]. In contrast to the static problem considered in literature, our model allows the agent to quit during the contract period [0,T]. If the agent quits at a stopping time up to his choice, the principal will hire a new one, possibly with a different type, from the market which is best for her. After solving the principal-agent problem as a bi-level control problem, we characterize the principal’s value function as the minimal solution of an infinite system of HJB equations. Such solution, although has discontinuity issue at the boundary, will become continuous after face-lifting. We also have some discussion on the case that agent can only quit at some fixed time. In this case, the principal’s value function can indeed be discontinuous, which can not be eliminated by face-lifting. ❧ The second project is a joint work with Prof Jaeyoung Sung and Prof Jianfeng Zhang on a unified model for continuous time principal-agent problem.In this paper, we consider a unified continuous time principal-agent model. We assume the contract from the principal consists of both lump-sum payment and continuous time payment. Different with the existing literature, we assume the pay-to-performance sensitivity (PPS) is not zero. There are two main advantage by assuming the PPS is not zero: First, we can improve the implementablity of the contract. Second, it is possible to increase the principal’s optimal utility. We also provide a solvable case which has a closed-form solution. In the end, we compare our model with some existing result by Sannikov (2008) and Williams (2015). ❧ The third project is a joint work with Prof Weidong Tian on an optimal investing problem after retirement. This paper studies an optimal investing problem for a retiree facing longevity risk and living standard risk. We formulate the investing problem as a portfolio choice problem under a time-varying risk capacity constraint. We derive the optimal investment strategy under the specific condition on model parameters in terms of second-order ordinary differential equations. We demonstrate an endogenous number that measures the expected value to sustain the spending post-retirement. The optimal portfolio is nearly neutral to the stock market movement if the portfolio’s value is higher than this number
Linked assets
University of Southern California Dissertations and Theses
Conceptually similar
PDF
Equilibrium model of limit order book and optimal execution problem
PDF
On non-zero-sum stochastic game problems with stopping times
PDF
Dynamic approaches for some time inconsistent problems
PDF
Optimal dividend and investment problems under Sparre Andersen model
PDF
Topics on dynamic limit order book and its related computation
PDF
Numerical methods for high-dimensional path-dependent PDEs driven by stochastic Volterra integral equations
PDF
Conditional mean-fields stochastic differential equation and their application
PDF
Feature selection in high-dimensional modeling with thresholded regression
PDF
Topics on set-valued backward stochastic differential equations
PDF
Optimal investment and reinsurance problems and related non-Markovian FBSDES with constraints
PDF
Reinforcement learning for the optimal dividend problem
PDF
Credit risk of a leveraged firm in a controlled optimal stopping framework
PDF
Large-scale inference in multiple Gaussian graphical models
PDF
Numerical analysis on the influence of information delay on optimal investment strategy in the family of 4/2 stochastic volatility models
PDF
Set values for mean field games and set valued PDEs
PDF
Pathwise stochastic analysis and related topics
PDF
Forward-backward stochastic differential equations with discontinuous coefficient and regime switching term structure model
PDF
Elements of dynamic programming: theory and application
PDF
Target assignment and path planning for navigation tasks with teams of agents
PDF
Asymptotically optimal sequential multiple testing with (or without) prior information on the number of signals
Asset Metadata
Creator
Zhu, Zimu
(author)
Core Title
Some topics on continuous time principal-agent problem
School
College of Letters, Arts and Sciences
Degree
Doctor of Philosophy
Degree Program
Applied Mathematics
Publication Date
03/26/2021
Defense Date
03/16/2021
Publisher
University of Southern California
(original),
University of Southern California. Libraries
(digital)
Tag
OAI-PMH Harvest,principal-agent problem,stochastic control,time-inconsistency
Language
English
Contributor
Electronically uploaded by the author
(provenance)
Advisor
Zhang, Jianfeng (
committee chair
), Lv, Jinchi (
committee member
), Ma, Jin (
committee member
)
Creator Email
zimuzhu@hotmail.com,zimuzhu@usc.edu
Permanent Link (DOI)
https://doi.org/10.25549/usctheses-c89-432856
Unique identifier
UC11668944
Identifier
etd-ZhuZimu-9358.pdf (filename),usctheses-c89-432856 (legacy record id)
Legacy Identifier
etd-ZhuZimu-9358.pdf
Dmrecord
432856
Document Type
Dissertation
Rights
Zhu, Zimu
Type
texts
Source
University of Southern California
(contributing entity),
University of Southern California Dissertations and Theses
(collection)
Access Conditions
The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name
University of Southern California Digital Library
Repository Location
USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA
Tags
principal-agent problem
stochastic control
time-inconsistency