EQUILIBRIUM MODEL OF LIMIT ORDER BOOK AND OPTIMAL EXECUTION PROBLEM

by Eunjung Noh

A Dissertation Presented to the FACULTY OF THE USC GRADUATE SCHOOL, UNIVERSITY OF SOUTHERN CALIFORNIA, in Partial Fulfillment of the Requirements for the Degree DOCTOR OF PHILOSOPHY (APPLIED MATHEMATICS), August 2018. Copyright 2018 Eunjung Noh.

To my family, and my husband Sunghyun Lee

Abstract

In this dissertation, we study an equilibrium model of a limit order book (LOB) and an optimal execution problem. To generalize previous results, we accommodate the idea of Bertrand price competition, as well as a nonlocal mean-field stochastic differential equation (SDE) with evolving intensity and reflecting boundary conditions. To describe the equilibrium of the LOB, we start with a static Bertrand game among N sellers, and then extend the model to a continuous-time setting, formulating it as a mean-field-type control problem for a representative seller who wants to maximize her discounted lifelong expected utility. Using the dynamic programming principle (DPP), we derive a Hamilton-Jacobi-Bellman (HJB) equation and prove that the value function is its viscosity solution. We then show that the value function can be used to obtain the equilibrium density function of the LOB. Assuming the LOB has reached equilibrium, we solve the optimal execution problem of a buyer who wants to purchase a certain number of shares over a finite time horizon with minimum cost. Again, we show that the Bellman principle (DPP) holds in this case, and that the value function is the viscosity solution of the corresponding HJB quasi-variational inequality (QVI). We also investigate the optimal strategy when the QVI has a classical solution.

Acknowledgment

I would like to express my deepest appreciation to my advisor, Professor Jin Ma, for his continuous support of my Ph.D. study and research. Whenever I encountered issues that I thought I could not solve, Professor Ma was always there, with patience, to encourage, motivate, and guide me toward directions I could take. He spent countless hours listening to, discussing, and helping me develop my research ideas. Without his guidance and persistent help, this dissertation would not have been possible.

I am grateful to Professors Jianfeng Zhang, Jinchi Lv, Remigijus Mikulevicius, and Sergey Lototsky for serving on my qualifying exam committee. I want to thank the many faculty members in the math department who generously offered their time to provide feedback on my work. The development of this thesis has greatly benefited from the excellent courses and countless colloquia in diverse areas offered by the math department. I thank all of my fellow Ph.D. students, and especially my office neighbors, for many helpful discussions. I would also like to thank my friends outside school for making the student years enjoyable.

Last but not least, I am truly grateful to my family for listening to, understanding, and supporting me at every step of my life. My biggest thanks go to my husband Sunghyun, who has been a constant source of support and encouragement during my pursuit of the Ph.D. degree.

Table of Contents

Dedication
Abstract
Acknowledgment
Chapter 1: Introduction
Chapter 2: Preliminaries
1 Mean-field SDEs with reflecting boundary conditions
2 Itô formula
2.1 For a general semimartingale case
2.2 Mean-field SDE with jump and reflecting boundary conditions
Chapter 3: Bertrand Competition
1 The static Bertrand game among sellers
1.1 The Bertrand game and its Nash equilibrium
1.2 A linear mean-field case
2 Continuous time mean-field type liquidity dynamics
2.1 A general description
2.2 Problem formulation
2.3 Dynamic programming principle and HJB equation
2.4 Viscosity solutions to the HJB equation
3 Relation with equilibrium density function of a limit order book
Chapter 4: Optimal Execution Problem
1 Problem formulation
2 Dynamic programming principle
3 HJB equation and viscosity solutions
4 Characterization of optimal strategy
Chapter 5: Conclusion
Appendix
A.1 Proof of Lemma 2.2.1
Bibliography

Chapter 1: Introduction

With the rapid growth of electronic trading, the study of order-driven markets has become an increasingly prominent focus in quantitative finance. Indeed, in the current financial world more than half of the markets use a limit order book (LOB) mechanism to facilitate trade. Accordingly, the LOB has been studied extensively from various angles: optimal order execution/liquidation problems that minimize cost by assuming that the dynamics and shape of the LOB are modeled exogenously (or, equivalently, that the arrival processes of limit and market orders are specified exogenously), with various forms of price impact and resilience (e.g., [1], [3], [4], [5], [7], [17], [22], [24], [38], [43], [44], [47], to name a few). There is also some literature on the shape formation of the LOB, for example [2], [23], [33], [41].

Consider one stock traded in a market venue that uses electronic limit order books. In general, a trader who uses limit orders obtains a better price, as compensation for waiting for market orders to fill the order. On the other hand, a trader who wants immediate execution places a market order of a given size. The market order is then matched with limit orders sitting in the LOB, and the trader takes the best available prices in the book until the size demanded is met. At the cost of worse prices, market orders are thus guaranteed immediate execution.

In this dissertation, we use the LOB model of Ma et al. [41]. We assume that sellers provide liquidity by placing sell limit orders, and that liquidity is taken as buyers trade only through market orders, as well as through sellers' order cancellations. As a result, we consider only a sell-side limit order book. We also assume that limit orders are executed from the lowest portion of the book, that is, according to price priority. We extend their results by analyzing the mechanism by which the LOB reaches an equilibrium.
To describe the formation of equilibrium in the LOB, we use the idea of Bertrand price competition among sellers. After Cournot [18] and Bertrand [8] provided analyses of oligopolies in which firms' strategic interactions are taken into account, many studies on oligopolistic competition followed. Most of them deal primarily with the static problem, and we refer to Friedman [21] and Vives [50] for background and references. A dynamic game between N players becomes complicated, typically involving coupled systems of N nonlinear partial differential equations (PDEs); see, for instance, [27] in a Cournot framework and [35] in a Bertrand model, where in both works the players' resources are exhaustible.

On the other hand, Lasry and Lions [37], and independently Huang et al. [29, 30], considered mean-field games, in which there are a large number of players and competition is felt only through an average over competitors, each player's impact on the average being negligible. Following their pioneering works, a vast number of studies on large-population stochastic differential games with mean-field interaction have appeared. To name a few, we refer to [13], [37] for the main ideas of mean-field games, and to [14], [15], [25] and the references cited therein for the history and the wide variety of applications of mean-field games (MFG) and McKean-Vlasov SDEs. One of the many applications is to the limit order book, e.g., [23] and [33]. Moreover, [16] studied how continuous-time Bertrand and Cournot competitions can be analyzed as continuum dynamic mean-field games.

To be more precise, as in the Bertrand model where firms set prices and the quantity of production is a function of price, in our LOB model sellers decide at which price to place orders, and the order size is determined from the prices. We first study the static Bertrand game with N sellers, where each seller tries to maximize her expected return. We then extend the idea to the continuous-time setting and formulate the problem as a mean-field-type pure-jump stochastic control problem for a representative seller. Note that in the continuous-time Bertrand game with infinitely many sellers, the total liquidity converges, under suitable conditions, to a pure-jump Markovian process with a mean-field-type generator, following the idea in [6].

Therefore, in the next chapter we study general mean-field SDEs with reflecting boundary conditions and introduce some notation and concepts regarding differentiability with respect to measures. In Chapter 3, the Bertrand game is introduced briefly, and we first investigate the static Bertrand game with N sellers. We then show how to extend it to a Bertrand game in continuous time with infinitely many sellers, and formulate the problem as a limiting mean-field-type stochastic control problem for a representative seller, whose goal is to maximize the discounted total expected profit. We prove the dynamic programming principle (DPP), derive the HJB equation, and show that the value function is a viscosity solution to the corresponding HJB equation. Lastly, we show that the value function is the expected utility function, which determines the equilibrium density function. In Chapter 4, as a sequential game, we solve an optimal execution problem for a buyer, assuming the LOB has reached an equilibrium. The buyer's goal is to purchase a certain number of shares over a finite time horizon with minimum cost.
We show that the DPP holds in this case, and the value function is the viscosity solution of the HJB quasi-variational inequality (QVI). Also, we investigate the optimal strategy when the QVI has a classical solution. In Appendix, we give the proof of the It^ o formula for the nonlocal mean-eld SDE with evolving intensity and re ecting boundary condition. 3 Chapter 2 Preliminaries Throughout this dissertation, we let ( ;F;P) be a complete probability space on which is dened two standard Brownian motions W =fW t : t 0g and B =fB t : t 0g. Let (A;B A ) and (B;B B ) be two measurable spaces. We assume that there are two Poisson random measuresN s andN b , dened onR + AR + andR + B, and with L evy measures s (dz) and b (dz), respectively. In other words, we assume that the Poisson measuresN s andN b have mean measures b N s () :=m s m() and b N b () :=m b (), respectively, where m() denotes the Lebesgue measure onR + , and we denote the compensated random measures e N s (A) = (N s b N s )(A) = N s (A) (m s m)(A) and e N b (B) = (N b b N b )(B) =N b (B) (m b )(B), for any A2B(R + AR + ) and B2B(R + B). For simplicity, throughout this dissertation, we assume that both s and b are nite, that is, s (A); b (B) <1, and we assume the Brownian motions and Poisson random measures are mutually independent. We note that for any A2 B(AR + ) and B 2 B(B), the processes (t;!)7! e N s ([0;t]A;!), e N b ([0;t]B;!) are both F N s ;N b -martingales. Here F N s ;N b denotes the ltration generated byN s andN b . For a generic Euclidean space E and T > 0, we denoteC([0;T ];E) andD([0;T ];E) to be the spaces of continuous and c adl ag functions, respectively. We endow both spaces with \sup-norms", so that both of them are complete metric spaces. Next, for p 1 we denote L p (F;E) to be the space of all E-valuedF-measurable random variables dened on the probability space ( ;F;P) such thatE[jj p ]<1. In particular,L 2 (F;R) is a Hilbert space with inner product (;) 2 =E[]; ;2L 2 (F;R), and a normkk 2 = (;) 1=2 2 . Also, we denote L p F ([t;T ];E) to be all E-valued F-adapted process on [t;T ], such thatkk p;T := E[ R T t j s j p ds] 1=p <1. We often use the notationsL p (F;C([0;T ];E)) andL p (F;D([0;T ];E)) when we need to specify the path properties for elements in L p F ([0;T ];E). 4 Further, forp 1 we denote byP p (E) the space of probability measures on (E;B(E)) with nite p-th moment, i.e.kk p p := R E jxj p (dx)<1. Clearly, for 2L p (F;E), the law L() =P :=P 1 2P p (E). We endowP p (E) with the following p-Wasserstein metric: W p (;) := inf n Z EE jxyj p (dx;dy) 1 p :2P p (EE) with marginals and o = inf n k 0 k L p ( ) :; 0 2L p (F;E) withP =; P 0 = o : (2.0.1) Furthermore, we suppose that there is a sub--algebraGF such that (i) the Brownian motionsW ,B and Poisson random measuresN s ;N b are independent ofG; and (ii)G is \rich enough" in the sense that for every 2P 2 (R), there is a random variable 2 L 2 (F;E) such that =P . LetF =F W;B;N s ;N b _G =fF t g t0 , whereF t =F W t _F B t _F N s t _F N b t _G, t 0, be the ltration generated by W , B,N s ,N b , andG, augmented by all the P-null sets so that it satises the usual hypotheses (cf. [46]). The following notation of \dierentiability" with respect to probability measures is based on the lecture notes [13] following the course at Coll ege de France by P. L. Lions (see also, e.g., [12]). For a function f :P 2 (R)!R, we introduce a \lift" function f ] :L 2 (F;R)!R such thatf ] () :=f(P ); 2L 2 (F;R). 
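As a simple standard illustration (my own, not taken from the dissertation), take $f$ to be the squared mean of the measure; its lift (denoted $f^{\sharp}$ here) and its derivative in the sense made precise in the next paragraphs can be computed explicitly:

```latex
% Illustrative example of the "lift" (not from the text): f(\mu) = squared mean of \mu.
\[
  f(\mu) = \Big(\int_{\mathbb{R}} x\,\mu(dx)\Big)^{2},
  \qquad
  f^{\sharp}(\xi) = \big(\mathbb{E}[\xi]\big)^{2}, \quad \xi \in L^{2}(\mathcal{F};\mathbb{R}).
\]
% The Fr\'echet derivative of the lift at \xi_0 acts on a perturbation \eta by
\[
  Df^{\sharp}(\xi_{0})(\eta)
  = 2\,\mathbb{E}[\xi_{0}]\,\mathbb{E}[\eta]
  = \mathbb{E}\big[\,2\,\mathbb{E}[\xi_{0}]\cdot\eta\,\big],
\]
% so, in the notation introduced below, \partial_\mu f(P_{\xi_0}, y) = 2\,\mathbb{E}[\xi_0]
% for every y, i.e., the derivative is a constant function of the state variable.
```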
Clearlyf ] depends only on the law of2L 2 (F;R), and is independent of the choice of the representative . We say that f :P 2 (R)! R is dierentiable at 0 2 P 2 (R) if there exists 0 2 L 2 (F;R) with P 0 = 0 such that f ] is Fr echet dierentiable at 0 . In other words, there exists a continuous linear functional Df ] ( 0 ) :L 2 (F;R)!R such that f ] ( 0 +)f ] ( 0 ) =Df ] ( 0 )() +o(kk 2 ): (2.0.2) We shall denote D f( 0 ) = Df ] ( 0 )(), and refer to it as the Fr echet derivative of f at 0 in the direction . By Riesz' Representation Theorem, there exists a unique random variable 2 L 2 (F;R) such that D f( 0 ) = Df ] ( 0 )() = (;) 2 = E[], 2 L 2 (F;R). It was shown in [13, Lemma 3.2] that there exists a Borel function h[ 0 ] : R! R, which 5 depends only on the law 0 = P 0 but not on the particular choice of the representative 0 , such that =h[ 0 ]( 0 ), P-a.s.. We shall denote @ f(P 0 ;y),h[ 0 ](y);y2R and refer to it the derivative of f :P 2 (R)!R at 0 =P 0 . In other words, we have the following identities: Df ] ( 0 ) = =h[P 0 ]( 0 ) =@ f(P 0 ; 0 ); (2.0.3) and (2.0.2) can be rewritten as f(P 0 + )f(P 0 ) =E[h[ 0 ]( 0 )] +o(kk 2 ) = (@ f(P 0 ; 0 );) 2 +o(kk 2 ); (2.0.4) andD f(P 0 ) = (@ f(P 0 ; 0 );) 2 where = 0 . Notice that@ f(P 0 ;y) is onlyP 0 (dy)- a.e. uniquely determined. Let us introduce two spaces that are useful for our analysis later. Denition 2.0.1. We denote C 1;1 b (P 2 (R)) the space of all dierentiable functions f : P 2 (R)! R such that @ f exists, and is bounded and Lipschitz continuous. That is, for some constant C > 0, it holds (i)j@ f(;x)jC; 2P 2 (R); x2R (ii)j@ f(;x)@ f( 0 ;x 0 )jCfjxx 0 j +W 2 (; 0 )g; ; 0 2P 2 (R); x;x 0 2R. Note that if f2C 1;1 b (P 2 (R)), then for xed y2R, we can discuss the dierentiability of the derivative function @ f(;y) :P 2 (R)!R. In particular, if @ f(;y)2C 1;1 b (P 2 (R)) again, then for every y2R, we can dene @ 2 f(;x;y) :=@ ((@ f)(;y))(;x); (;x;y)2P 2 (R)RR: (2.0.5) Denition 2.0.2. We shall denoteC 2;1 b (P 2 (R)) the space of all functions f2C 1;1 b (P 2 (R)) such that (i) @ f(;x)2C 1;1 b (P 2 (R)) for all x2R; (ii) @ 2 f :P 2 (R)RR!R R is bounded and Lipschitz continuous; 6 (iii) @ f(;) :R!R is dierentiable for every 2P 2 (R), and its derivative @ y @ f : P 2 (R)R!R R is bounded and Lipschitz continuous. Finally, we give the denition of Regular Conditional Probability Distribution (RCPD) (see, e.g., [49]) for a probability space ( ;F;P). Denition 2.0.3. A regular conditional probability distribution of P givenF t is a family P ! () =P(jF t )(!) of probability measures on ( ;F) indexed by !2 such that (i) For each B2F, P ! (B) isF t -measurable as a function of !; (ii) For every A2F t and B2F, P(A\B) = Z A P ! (B)P(d!): 1 Mean-eld SDEs with re ecting boundary conditions In this section we consider the following (discontinuous) SDE with re ection: for t2 [0;T ), X s = x + Z s t Z AR + (r;X r ;P Xr ;z)1 [0;(r;X r ;P Xr )] (y) e N s (drdzdy) (2.1.1) + Z s t b(r;X r ;P Xr )dr + Z s t (r;X r ;P Xr )dB r + s +K s ; s2 [t;T ]; where , , b, and are measurable functions dened on appropriate subspaces of [0;T ] RP 2 (R)R, is an F-adapted process with c adl ag paths, and K is a \re ecting process", that is, it is anF-adapted, non-decreasing, c adl ag process, so that (i) X s 0,P-a.s., (ii) R T 0 1 fXr>0g dK c r = 0,P-a.s. (K c denotes the continuous part of K); and (iii) K t = (X t + Y t ) , where Y =XK. 
We call SDE (2.1.1) a mean-eld SDE with discontinuous paths and re ections (MFSD- EDR), and we denote the solution by (X t;x ;K t;x ), although the superscript is often omitted when context is clear. If b; = 0 and is pure jump, then the solution (X;K) becomes 7 pure jump as well (i.e., dK c 0). We note that the main feature of this SDE is that the jump intensity ( ) of the solution X is \state-dependent" with mean-eld nature. Its well-posedness thus requires some attention since, to the best of our knowledge, it has not been studied in the literature. We shall make use of the following Standing Assumptions. Assumption 2.1.1. The mappings : [0;T ]RP 2 (R)7!R + ,b : [0;T ] RP 2 (R)7! R, : [0;T ] RP 2 (R)7!R; and : [0;T ] RP 2 (R)R7!R are all bounded and continuous in (t;x), and satisfy the following conditions, respectively: (i) For xed 2 P 2 (R) and x;z 2 R, the mappings (t;!) 7! (t;!;x;;z), (b;)(t;!;x;) are F-predictable; (ii) For xed 2P 2 (R), (t;z)2 [0;T ]R, and P-a.e !2 , the functions (t;;), b(t;!;;), (t;!;;), (t;!;;;z)2C 1 b (R); (iii) There exists L> 0, such that for P-a.e. !2 , it holds that j(t;x;)(t;x 0 ; 0 )j +jb(t;!;x;)b(t;!;x 0 ; 0 )j +j(t;!;x;)(t;!;x 0 ; 0 )j +j(t;!;x;;z)(t;!;x 0 ; 0 ;z)j L jxx 0 j +W 1 (; 0 ) ; t2 [0;T ]; x;x 0 ;z2R; ; 0 2P 2 (R): Remark 2.1.2. (i) The requirements on the coecients in Assumption 2.1.1 (such as boundedness) are stronger than necessary, only to simplify the arguments. More general (but standard) assumptions using, e.g., the L 2 -integrability with respect to the L evy mea- sure (even in the case when (R) =1) are easily extendable without substantial di- culties. We prefer not to pursue such generality since this is not the main purpose of this dissertation. (ii) Throughout this dissertation, unless specied, we shall denoteC > 0 to be a generic constant depending only on T , (R), and bounds and Lipschitz constant L in Assumption 2.1.1. Furthermore, we shall allow it to vary from line to line. 8 It is well-known that (see, e.g., [11]), as a mean-eld SDE, the solution to (2.1.1) may not satisfy the so-called \ ow property", in the sense that X t;x r 6=X s;X t;x s r ; 0tsrT . It is also noted in [11] that if we consider the following accompanying SDE of (2.1.1): X t; s = + Z s t Z AR + (r;X t; r ;P X t; r ;z)1 [0;(r;X t; r ;P X t; r )] (y) e N s (drdzdy) (2.1.2) + Z s t b(r;X t; r ;P X t; r )dr + Z s t (r;X t; r ;P X t; r )dB r + s +K t; s ; s2 [t;T ]; and then using the lawP X t; to consider a slight variation of (2.1.1): X t;x; s = x + Z s t Z AR + (r;X t;x; r ;P X t; r ;z)1 [0;(r;X t;x; r ;P X t; r )] (y) e N s (drdzdy) (2.1.3) + Z s t b(r;X t;x; r ;P X t; r )dr + Z s t (r;X t;x; r ;P X t; r )dB r + s +K t;x; s ; s2 [t;T ]; where 2L 2 (F t ;R), then we can argue that the following ow property holds: X s;X t;x; s ;X t; s r ;X s;X t; s r = (X t;x; r ;X t; r ); 0tsrT; (2.1.4) for all (x;)2RL 2 (F t ;R). We should note that although both SDEs (2.1.2) and (2.1.3) resemble the original equation (2.1.1), the process X t;x; has the full information of the solution given the initial data (x;), where provides the initial distribution P , and x is the actual initial state. To prove the well-posedness of SDEs (2.1.2) and (2.1.3), we rst recall the so-called \Discontinuous Skorohod Problem" (DSP) (see, e.g., [19, 39]). Let Y 2D([0;T ]), Y 0 0. We say that a pair (X;K)2D([0;T ]) 2 is a solution to the DSP(Y ) if (i) X =Y +K; (ii) X t 0, t 0; and (iii) K is nondecreasing, K 0 = 0, and K t = R t 0 1 fX s =0g dK s , t 0. 
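For concreteness, the following is a minimal discrete-time sketch (my own, not part of the dissertation) of the explicit solution of the one-sided Skorokhod problem at zero on sampled paths: $X = Y + K$ with $K_t = \sup_{s \le t} (Y_s)^-$, so that $K$ is nondecreasing and increases only when $X$ sits at the boundary.

```python
import numpy as np

def skorokhod_reflect(y):
    """Discrete-time Skorokhod map at 0: given samples y[0..n] of a path Y
    with y[0] >= 0, return (x, k) with x = y + k, x >= 0, and k the
    nondecreasing regulator that increases only when x is at 0."""
    y = np.asarray(y, dtype=float)
    k = np.maximum.accumulate(np.maximum(-y, 0.0))  # k_t = sup_{s<=t} (Y_s)^-
    x = y + k
    return x, k

# Small usage example on a toy path with a downward jump.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = 0.5 + np.cumsum(0.02 * rng.standard_normal(1001))  # rough proxy for a path
    y[500:] -= 1.0                                          # a negative jump halfway
    x, k = skorokhod_reflect(y)
    assert (x >= -1e-12).all() and np.allclose(x, y + k)
    # Whenever k increases at a step, x equals 0 there, matching condition (iii) above.
```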
It is well-known that the solution to DSP exists and is unique, and it can be shown (see [39]) that the condition (iii) amounts to saying that R t 0 1 fX s >0g dK c s = 0, where K c 9 denotes the continuous part ofK, and K t = (X t +Y t ) . Furthermore, it is shown in [19] that the solution mapping of the DSP, :D([0;T ])7!D([0;T ]), dened by (Y ) = X, is Lipschitz continuous under uniform topology. In other words, there exists a constant L> 0 such that sup t2[0;T ] j(Y 1 ) t (Y 2 ) t jL sup t2[0;T ] jY 1 t Y 2 t j; Y 1 ;Y 2 2D([0;T ]): (2.1.5) Before we proceed to prove the well-posedness of (2.1.2) and (2.1.3), we note that the two SDEs can be argued separately. Moreover, while (2.1.2) is a mean-eld (or McKean- Vlasov)-type of SDE, (2.1.3) is actually a standard SDE (although with state-dependent intensity) with discontinuous paths and re ection, given the law of the solution to (2.1.2), P X t;, and it can be argued similarly but much simpler, once the well-posedness of (2.1.2) is established. Therefore, in what follows, we shall focus only on the existence and uniqueness of the solution to (2.1.2). Furthermore, for simplicity we shall assume b 0, as the general case can be argued similarly without substantial diculty. The scheme of solving the SDE (2.1.2) is more or less standard (see, e.g., [39]). We shall rst consider an SDE without re ection (assuming b 0): for 2L 2 (F t ;R) and s2 [t;T ], Y t; s = + Z s t Z AR + (r; (Y t; ) r ;P (Y t; )r ;z)1 [0; (t;) r ] (y) e N s (drdzdy) + Z s t (r; (Y t; ) r ;P (Y t; )r )dB r + s ; (2.1.6) where (t;) r :=(r; (Y t; ) r ;P (Y t; )r ). Clearly, if we can show that (2.1.6) is well-posed, then by simply setting X t; s = (Y t; ) s and K t; s = X t; s Y t; s , s2 [t;T ], we see that (X t; ;K t; ) would solve SDE (2.1.2)(!). We should note that a technical diculty caused by the presence of the state-dependent intensity is that the usual L 2 -norm does not work as naturally as expected, as we shall see below. We nevertheless have the following result. Theorem 2.1.3. Assume that Assumption 2.1.1 is in force. Then, there exists a solution Y t; 2L 2 (F;D([t;T ])) to SDE (2.1.6). Furthermore, such solution is pathwisely unique. 10 Proof Assume t = 0. For a given T 0 > 0 and y2 L 1 (F;D([0;T 0 ])), let us consider the following mappingT : T (y) := + Z s 0 Z AR + (r; (y) r ;P (y)r ;z)1 [0;(r;(y) r ;P (y)r )] (y) e N s (drdzdy) + Z s 0 (r; (y) r ;P (y)r )dB r + s ; s 0: (2.1.7) We shall argue that for some small enough T 0 > 0, T is a contraction mapping from L 1 (F;D([0;T 0 ])) to itself. To see this, let us denote, for 2D([0;T 0 ]),jj s := sup 0rs j r j, and dene s (z) := (s; (y) s ;P (y)s ;z), s :=(s; (y) s ;P (y)s ), s :=(s; (y) s ;P (y)s ), s2 [0;T 0 ]. Then, we have E[jT (y)j T 0 ] C n Ejj +E h Z T 0 0 Z AR + r (z)1 [0;r ] (y) dr s (dz)dy i +E[( Z T 0 0 j r j 2 dr) 1=2 ] o CEjj +CE h Z T 0 0 Z A j r (z) r jdr i +CE h Z T 0 0 j r j 2 dr 1=2 i <1; thanks to Assumption 2.1.1. Hence,T (y)2L 1 (F;D([0;T 0 ])). We now show thatT is a contraction onL 1 (F;D([0;T 0 ])). Fory 1 ;y 2 2L 1 (F;D([0;T 0 ])), we denote i , i , and i , respectively, as before, and denote ' :=' 1 ' 2 , for' =;;, and T (s) =T (y 1 ) s T (y 2 ) s , s 0. Then, we have, for s2 [0;T 0 ], T (s) = Z s 0 Z AR + r (z)1 [0; 1 r ] (y) + 2 r (z)(1 [0; 1 r ] (y) 1 [0; 2 r ] (y)) e N s (drdzdy) + Z s 0 r dB r : Clearly, T =T (y 1 )T (y 2 ) is a martingale on [0;T 0 ]. 
Since e N =N b N andj1 [0;a] () 1 [0;b] ()j 1 [a^b;a_b] () for any a;b2R, we have, for 0sT 0 , EjT (s)j s 2E h Z s 0 Z AR + r (z)1 [0; 1 r ] (y) + 2 r (z)(1 [0; 1 r ] (y) 1 [0; 2 r ] (y)) dr s (dz)dy i +E h Z s 0 j r j 2 dr 1 2 i :=I 1 +I 2 : (2.1.8) 11 Recalling from Remark 2.1.2-(ii) the generic constant C > 0, and by Assumption 2.1.1-(v), (2.1.5), and the denition of W 1 (;) (see (2.0.1)), we have I 1 C n E h Z s 0 Z A j r (z)j s (dz)dr i +E h Z s 0 j r jdr io CE h Z s 0 [j(y 1 ) r (y 2 ) r j +W 1 (P (y 1 )r ;P (y 2 )r )]dr i CE h Z s 0 j(y 1 ) (y 2 )j r dr i C p T 0 ky 1 y 2 k L 1 (D([0;T 0 ])) ; (2.1.9) I 2 CE h Z s 0 fjy 1 y 2 j 2 +W 1 (P (y 1 )r ;P (y 2 )r ) 2 gdr 1=2 i CE p s jy 1 y 2 j s +Ejy 1 y 2 j s C p T 0 ky 1 y 2 k L 1 (D([0;T 0 ])) : Combining (2.1.8) and (2.1.9), we deduce that kTk L 1 (D([0;T 0 ])) C(T 0 + p T 0 )ky 1 y 2 k L 1 (D([0;T 0 ])) ; s2 [0;T 0 ]: (2.1.10) Therefore, by choosing T 0 such that C(T 0 + p T 0 ) < 1, we see that the mapping T is a contraction on L 1 (D([0;T 0 ])), which implies that (2.1.6) has a unique solution in L 1 (F;D([0;T 0 ])). Moreover, we note that T 0 depends only on the universal constants in Assumption 2.1.1. We can repeat the argument for the time interval [T 0 ; 2T 0 ]; [2T 0 ; 3T 0 ]; , and conclude that (2.1.6) has a unique solution in L 1 (F;D([0;T ])) for any given T > 0. Finally, we claim that the solution Y 2L 2 (F;D([0;T ])). Indeed, by Burkholder-Davis- Gundy's inequality and Assumption 2.1.1 we have E[jYj ;2 s ] C n Ejj 2 +E h Z s 0 Z AR + r (z)1 [0;r ] (y)) 2 dr s (dz)dy i +E[ Z s 0 j r j 2 dr] +Ejj ;2 T o C n Ejj 2 +E h Z s 0 1 +jY r j +W 1 (0; (Y ) r ) 2 dr i +Ejj ;2 T o (2.1.11) C n Ejj 2 + Z s 0 (1 +E[jYj ;2 r ])dr +Ejj ;2 T o ; s2 [0;T ]: 12 Here, in the last inequality above we used the fact that W 1 (0; (Y ) r ) 2 (k(Y ) r k L 1 ( ) ) 2 (Ej(Y )j r ) 2 CE[jYj ;2 r ]; r2 [0;s]: Applying the Gronwall inequality, we obtain that E[jYj ;2 T ]<1, completing the proof. Remark 2.1.4. (i) It is worth noting that once we solved X t; , then we know P X t;, and (2.1.3) can be viewed as a standard SDEDR with coecient ~ (s;x) :=(s;x;P X t; s ), which is Lipschitz inx. This guarantees the existence and uniqueness of the solution (X t;x; ;K t;x; ) to (2.1.3). (ii) Given (t;x)2 [0;T ]R, ifP 1 =P 2 for 1 ; 2 2L 2 (F t ;R), then X t;x; 1 and X t;x; 2 are indistinguishable. So, X t;x;P :=X t;x; , i.e. X t;x; depends on only through its law. (iii) Once we know both solutions for equation (2.1.2) and (2.1.3), due to the uniqueness of the solution of equation (2.1.3) for X t;x; , we can conclude X t;x; s x= = X t; s ; s 2 [t;T ]. That is, X t;x; s x= solves the same SDE as X t; s ; s2 [t;T ]. Indeed, let us denote (t;!;x;z) := (t;!;x;P X t;;z), = (t;!;x) := (t;!;x;P X t;) for = b;;. Then, the equation (2.1.2) and (2.1.3) can be rewritten as X t; s = + Z s t Z AR + (r;X t; r ;z)1 [0; (r;X t; r )] (y) e N s (drdzdy) + Z s t b (r;X t; r )dr + Z s t (r;X t; r )dB r + s +K t; s ; s2 [t;T ]; X t;x; s = x + Z s t Z AR + (r;X t;x; r ;z)1 [0; (r;X t;x; r )] (y) e N s (drdzdy) + Z s t b (r;X t;x; r )dr + Z s t (r;X t;x; r )dB r + s +K t;x; s ; s2 [t;T ]: Let P ! () be a regular conditional probability distribution. It is known (cf. [49]) that for any 2F t ,P ! f! 0 2 j(! 0 ) =(!)g = 1: So, using these results, for 2F t , we have 13 X t; t (! 0 ) =(! 0 ) =(!), P ! -a.e, ! 0 2 , for P-a.e. !2 . That is, P ! X t; t (! 0 ) =(! 0 ) = (!) = 1, forP-a.e. !2 . Hence, we can say P ! ! 0 2 :X t; s (! 0 ) =X t;(!); s (! 0 );8s2 [t;T ] = 1; P-a.e. !2 . 
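To make the mechanism concrete, here is a minimal particle-system sketch (my own; all coefficients are made-up placeholders, and the Brownian part, the drift, and the compensator of the jump measure are dropped for brevity) of a pure-jump dynamics in the spirit of (2.1.2): each particle jumps with a state- and law-dependent intensity, implemented by thinning against a constant dominating rate, and is kept non-negative by the discrete reflection step $\Delta K = (X_{-} + \Delta Y)^-$. The empirical mean of the particles stands in for the law $P_{X_t}$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical bounded coefficients (placeholders, not those of the dissertation):
def intensity(t, x, mean_x):           # state- and law-dependent jump intensity, <= LAM_MAX
    return 1.0 + 0.5 * np.tanh(x) + 0.3 * np.tanh(mean_x)

def jump_size(t, x, mean_x, z):        # jump coefficient psi(t, x, mu, z), here downward jumps
    return -0.5 * z * (1.0 + 0.1 * np.abs(x - mean_x))

LAM_MAX, N_PART, T, DT = 2.0, 500, 1.0, 1e-3    # dominating rate, particles, horizon, step

x = np.full(N_PART, 1.0)               # initial states
k = np.zeros(N_PART)                   # reflection (regulator) processes
for step in range(int(T / DT)):
    t = step * DT
    mean_x = x.mean()                  # empirical proxy for the law P_{X_t}
    # Thinning: propose jumps from a clock with rate LAM_MAX, accept with prob lambda/LAM_MAX.
    proposes = rng.random(N_PART) < LAM_MAX * DT
    accepts = proposes & (rng.random(N_PART) * LAM_MAX < intensity(t, x, mean_x))
    z = rng.exponential(1.0, N_PART)   # marks from a (made-up) finite Levy measure
    dy = np.where(accepts, jump_size(t, x, mean_x, z), 0.0)
    dk = np.maximum(-(x + dy), 0.0)    # reflection at 0: dK = (X + dY)^-
    x, k = x + dy + dk, k + dk

print("terminal empirical mean:", x.mean(), " average total reflection:", k.mean())
```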
Consequently, we haveE ! h sup s2[t;T ] jX t; s (! 0 )X t;(!); s (! 0 )j i = 0, and therefore, E h sup s2[t;T ] jX t; s (! 0 )X t;x; s (! 0 ) x= j i = EfE h sup s2[t;T ] jX t; s X t;x; s x= j F t i g = Z E ! [ sup s2[t;T ] jX t; s X t;x; s x= j]P(d!) = 0: which proves the desired result. 2 It^ o formula In this section, we present an It^ o formula that will be frequently used in our future dis- cussion. We rst review It^ o formula for a general semimartingale, and then study the one for mean-eld SDE. We note that similar It^ o formula for mean-eld SDE was studied in in Buckdahn et al. [11], and the one involving jumps was established in the recent work of Hao and Li [26]. The one presented below is a slight modication of that of [26], taking the particular state-dependent intensity feature of the dynamics into account. Since the proof is more or less standard but quite lengthy, we defer it to the Appendix for interested reader. 2.1 For a general semimartingale case In this subsection, we rst review It^ o formula for a general semimartingale. To start with, let ( ;F;P) be a complete probability space, (A;B A ) be a measurable space,fW t g t0 be Brownian motion,N (dt;dz) be a Poisson random measure on R + A independent of W , with compensator b N (dt;dz) = E[N (dt;dz)] such that the compensated random measure 14 e N (dt;dz) = N b N (dt;dz) is a martingale. Also, b N (dt;dz) = (dz)dt, where (dz) is L evy measure. Recall (e.g. [31]) that a semimartingale can be written as X s = x + Z s t (r;X r )dr + Z s t (r;X r )dW r (2.2.1) + Z s t Z A f(r;z)N (dr;dz) + Z s t Z A g(r;z) e N (dr;dz); where ; are Lipschitz continuous, f is such that R t 0 R A jf(s;z;!)jN (ds;dz)<1 a.s. for anyt> 0,g is such thatE h R t 0 R A jg(s;z;!)j 2 b N (ds;dz) i <1 a.s. for anyt> 0, andfg = 0. Then, we have the following It^ o formula: for any F2C 2 (R), F (X s )F (x) = Z s t (r;X r )@ x F (X r ) + 1 2 2 (r;X r )@ xx F (X r ) dr + Z s t (r;X r )@ x F (X r )dW r + Z s t Z A [F (X r +f(r;z))F (X r )]N (dr;dz) + Z s t Z A [F (X r +g(r;z))F (X r )] e N (dr;dz) (2.2.2) + Z s t Z A [F (X r +g(r;z))F (X r )g(r;z)@ x F (X r )](dz)dr: 2.2 Mean-eld SDE with jump and re ecting boundary conditions Now, let us consider our SDE (2.1.1): X s = x + Z s t Z AR + (r;X r ;P Xr ;z)1 [0;(r;X r ;P Xr )] (y) e N s (drdzdy) + Z s t b(r;X r ;P Xr )dr + Z s t (r;X r ;P Xr )dB r + s +K s ; s2 [t;T ]: In what follows, we let ( ~ ; ~ F; ~ P) be a copy of the probability space ( ;F;P), and denote ~ E[] to be the expectation under ~ P. For any random variable # dened on ( ;F;P), we denote, when there is no danger of confusion, ~ #2 ( ~ ; ~ F; ~ P) to be a copy of # such that ~ P ~ # =P # . We note that that ~ E[] acts only on the variables of the form ~ #. 15 Furthermore, to simplify notation, for any process =f t g t0 , we shall denote ( ) t := ( t ;P t ), t 0. The following lemma is not surprising but crucial. Lemma 2.2.1. Assume Assumption 2.1.1 holds. For any f2C 2;1 b (P 2 (R)), 0tsT , and 2L 2 (F t ;R), we have @ s f(P X t; s ) = E h @ f(P X t; s ;X t; s )b(s; (X t; s )) + 1 2 @ y (@ f)(P X t; s ;X t; s ) 2 (s; (X t; s )) + Z 1 0 Z A f@ f(P X t; s ;X t; s +(s; (X t; s );z))@ f(P X t; s ;X t; s )g (s; (X t; s );z)(s; (X t; s )) s (dz)d i : (2.2.3) Proof See Appendix. For an It^ o formula, we dene the following class of functions. Denition 2.2.2. 
We say that F2C 1;(2;1) b ([0;T ]RP 2 (R)), if (i) F (t;;)2C 2;1 b (RP 2 (R)), for all t2 [0;T ]; (ii) F (;x;)2C 1 b ([0;T ]), for all (x;)2RP 2 (R); (iii) All derivatives that in t of rst order, and those in (x;) of rst and second order are uniformly bounded over [0;T ]RP 2 (R) and Lipschitz continuous in (x;), uniformly with respect to t. We are now ready to state the It^ o formula. For notational simplicity, in what fol- lows for the coecients ' = b;;;, we denote ' t;x; s := '(s;X t;x; s ;P X t; s ), t;x; s (z) := (s;X t;x; s ;P X t; s ;z), ~ ' t; s :='(s; ~ X t; ~ s ;P X t; s ), and ~ t; s :=(s; ~ X t; ~ s ;P X t; s ;z). 16 Proposition 2.2.3. Let 2C 1;(2;1) b ([0;T ]RP 2 (R)). Under Assumption 2.1.1, we have the following It^ o's formula: for 0tsT , 2L 2 (F t ;R), (s;X t;x; s ;P X t; s ) (t;x;P ) = Z s t @ t (r;X t;x; r ;P X t; r ) +@ x (r;X t;x; r ;P X t; r )b t;x; r + 1 2 @ 2 xx (r;X t;x; r ;P X t; r )( t;x; r ) 2 dr + Z s t @ x (r;X t;x; r ;P X t; r ) t;x; r dW r + Z s t @ x (r;X t;x; r ;P X t; r )1 fX r =0g dK r + Z s t Z A (r;X t;x; r + t;x; r (z);P X t; r ) (r;X t;x; r ;P X t; r ) @ x (r;X t;x; r ;P X t; r ) t;x; r (z) t;x; r s (dz)dr (2.2.4) + Z s t Z AR + n (r;X t;x; r + t;x; r (z);P X t; r ) (r;X t;x; r ;P X t; r ) o 1 [0; t;x; r ] (y) e N (drdzdy): + Z s t e E h @ (r;X t;x; r ;P X t; r ; ~ X t; ~ r ) ~ b t; r + 1 2 @ y (@ )(r;X t;x; r ;P X t; r ; ~ X t; ~ r )(~ t; r ) 2 + Z 1 0 Z A @ (r;X t;x; r ;P X t; r ; ~ X t; ~ r + ~ t; r (z))@ (r;X t;x; r ;P X t; r ; ~ X t; ~ r ) ~ t; r (z) ~ t; r s (dz)d i dr: 17 Chapter 3 Bertrand Competition In the 1800s, classical works of Cournot [18] and Bertrand [8] studied oligopoly models of markets with a small number of competitive players. In Cournot's model, rms choose quantities as a strategic variable and the price is then determined from the total quantity produced. On the other hand, the original work of Bertrand competition in oligopoly assumes that there are at least two rms, making the same products (homogeneous goods) with the same constant marginal cost, and they set prices simultaneously. Consumers will buy everything from a rm with a lower price. Then, since companies compete to set prices lower and lower, but above the marginal cost, in equilibrium, all rms set price equal to the marginal cost, and have zero prot. This perfectly competitive outcome diers from the Cournot outcome and is commonly referred to as the Bertrand paradox. In both these original models, however, the goods were homogeneous, that is, perfectly substitutable. This means that the only dierence between the rms is the price they set or the quantity they produce. There have been literatures in modifying these models in diverse ways to obtain more realistic results. One of the original assumptions that is typically not satised is that the goods are homogeneous. Hotelling's work [28] started to consider dierentiated goods in a following way. Firms are located in dierent locations and consumers are assumed to associate a cost of travel. So, rms dierentiate their goods based on their location relative to the distance from the consumer. Also, for asymmetric cost, we can refer to [34] and [36] and references therein. The other extension that I would like to mention is the capacity constraints. Sometimes rms do not have enough capacity to satisfy all demand. This was a point rst raised by Francis Edgeworth [20] and gave rise to the Bertrand{Edgeworth model. 18 In our case, we assume that each seller has dierent non-constant cost, which is a function of a price. 
We first analyze the static Bertrand game with N sellers and show that there exists a unique Nash equilibrium. Through the example of a linear mean-field case, we then formulate the game with infinitely many sellers in continuous time as a limiting mean-field-type control problem for a representative seller, whose optimal strategy coincides with the limit of the N-seller Nash equilibrium as N tends to infinity.

1 The static Bertrand game among sellers

In this section, we analyze a price-setting mechanism among liquidity providers (investors placing sell limit orders), which will serve as the basis for our continuous-time model in the rest of the dissertation. Borrowing ideas from [16, 35, 36], we consider this (static) price-setting process as a Bertrand-type game among the sellers, each of whom places a certain number of sell limit orders at a specific price and tries to maximize her expected utility. To be more precise, we assume that sellers use the price at which they place limit orders as their strategic variable, and that the number of shares submitted is determined accordingly. Furthermore, we assume that there is a waiting cost, also a function of the price. Intuitively, a higher price leads to a longer execution time, hence a higher waiting cost. Thus, there is a competitive game among the sellers for a better total reward. Finally, we assume that the sellers are homogeneous in the sense that they have the same subjective probability measure, so that they share the same degree of risk aversion.

We now give a brief description of the problem. We assume that there are $N$ sellers, and the $j$-th seller places limit orders at price $p_j = X + l_j$, $j = 1, 2, \dots, N$, where $X$ is the mid price. Without loss of generality, we may assume $X = 0$. As a main element of an oligopolistic competition (cf., e.g., [36]), we assume that there exists a demand function, denoted by $h^N_i(p_1, p_2, \dots, p_N)$ for each seller $i$, at a given price vector $p = (p_1, p_2, \dots, p_N)$. We note that the number of shares of limit orders that seller $i$ places in the LOB is determined by the value of her demand function at the given price vector. Note that this is the nature of a Bertrand game; a Cournot game is one in which the price $p_i$ is a function of the numbers of shares $q = (q_1, \dots, q_N)$ through a demand function. The two games are often interchangeable, assuming that the demand functions are invertible (cf. [36]).

More specifically, we assume that the demand functions $\{h^N_i\}$ satisfy the following properties: for $i = 1, 2, \dots, N$, $h^N_i$ is smooth in all variables, and
$$\frac{\partial h^N_i}{\partial p_i} < 0, \qquad \frac{\partial h^N_i}{\partial p_j} > 0 \quad \text{for } j \neq i. \tag{3.1.1}$$
We note that (3.1.1) amounts to saying that the number of shares each seller places is decreasing in the seller's own price and increasing in the other sellers' prices. Furthermore, we assume that the demand functions are invariant under permutations of the other sellers' prices, in the sense that, for fixed $p_1, \dots, p_N$ and all $i, j \in \{1, \dots, N\}$,
$$h^N_i(p_1, \dots, p_i, \dots, p_j, \dots, p_N) = h^N_j(p_1, \dots, p_j, \dots, p_i, \dots, p_N). \tag{3.1.2}$$
A consequence of combining (3.1.1) and (3.1.2) is the following fact: if a price vector $p$ is ordered so that $p_1 \le p_2 \le \dots \le p_N$, then, by (3.1.1) and (3.1.2), for any $i < j$ it holds that
$$h^N_j(p) = h^N_i(p_1, \dots, p_j, \dots, p_i, \dots, p_N) \le h^N_i(p_1, \dots, p_i, \dots, p_i, \dots, p_N) \le h^N_i(p). \tag{3.1.3}$$
That is, the demand functions are ordered in the reverse way.
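As a quick sanity check (my own, not from the text), the linear family $h^N_i(p) = A - B p_i + C \bar p_{-i}$ with $B > C > 0$, which reappears in Section 1.2 below, satisfies (3.1.1) and (3.1.2) by inspection; the short script below verifies the reverse ordering (3.1.3) numerically on random ordered price vectors.

```python
import numpy as np

A, B, C = 10.0, 2.0, 1.0          # B > C > 0, as in the linear case of Section 1.2
def h(i, p):
    """Linear demand of seller i: A - B*p_i + C * (average of the other prices)."""
    p = np.asarray(p, dtype=float)
    return A - B * p[i] + C * np.delete(p, i).mean()

rng = np.random.default_rng(2)
for _ in range(1000):
    p = np.sort(rng.uniform(0.0, 5.0, size=6))       # ordered price vector p_1 <= ... <= p_N
    demands = np.array([h(i, p) for i in range(p.size)])
    # Reverse ordering (3.1.3): lower-priced sellers face (weakly) larger demand.
    assert np.all(np.diff(demands) <= 1e-9)
```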
Finally, for each $i$, we denote by $p_{-i} \triangleq (p_1, \dots, p_{i-1}, p_{i+1}, \dots, p_N)$ the vector of the "other" prices for seller $i$, and we assume that there is a price, denoted by $\hat p_i(p_{-i}) < \infty$, that is "least favorable" to seller $i$, in the sense that
$$h^N_i(p_1, \dots, p_{i-1}, \hat p_i, p_{i+1}, \dots, p_N) = 0. \tag{3.1.4}$$
The price $\hat p_i$ is often called the choke price. We note that the existence of such a price, together with the monotonicity property (3.1.1), indicates the possibility that $h^N_j(p) < 0$ for some $j$ and some price vector $p$. On the other hand, since the size of an order placement cannot be negative, such a scenario is impractical. To amend this, we introduce the notion of actual demand, denoted by $\{\hat h_i(p)\}$, which we now describe.

Consider an ordered price vector $p = (p_1, \dots, p_N)$ with $p_i \le p_j$ for $i \le j$, and look at $h^N_N(p)$. If $h^N_N(p) \ge 0$, then by (3.1.3) we have $h^N_i(p) \ge 0$ for all $i = 1, \dots, N$. In this case, we set $\hat h_i(p) = h^N_i(p)$ for all $i = 1, \dots, N$. If $h^N_N(p) < 0$, then we set $\hat h_N(p) = 0$; that is, the $N$-th seller does not act at all. We assume that the remaining $N-1$ sellers observe this fact and modify their strategies as if there were only $N-1$ sellers. More precisely, we first choose a choke price $\hat p_N$ so that $h^N_N(p_1, \dots, p_{N-1}, \hat p_N) = 0$, define
$$h^{N-1}_i(p_1, p_2, \dots, p_{N-1}) := h^N_i(p_1, p_2, \dots, p_{N-1}, \hat p_N), \qquad i = 1, \dots, N-1,$$
and continue the game among the $N-1$ sellers. In general, for $1 \le n \le N-1$, assume the $(n+1)$-seller demand functions $\{h^{n+1}_i\}_{i=1}^{n+1}$ are defined. If $h^{n+1}_{n+1}(p_1, \dots, p_{n+1}) < 0$, then the other $n$ sellers will assume that the $(n+1)$-th seller sets her price at $\hat p_{n+1}$ with zero demand (i.e., $h^{n+1}_{n+1}(p_1, p_2, \dots, p_n, \hat p_{n+1}) = 0$), and modify their demand functions to
$$h^n_i(p_1, p_2, \dots, p_n) := h^{n+1}_i(p_1, p_2, \dots, p_n, \hat p_{n+1}), \qquad i = 1, \dots, n. \tag{3.1.5}$$
We can now define the "actual demand functions" $\{\hat h_i\}_{i=1}^N$.

Definition 3.1.1 (Actual demand function). Assume that $\{h^N_i\}_{i=1}^N$ is a family of demand functions. The family of "actual demand functions", denoted by $\{\hat h_i\}_{i=1}^N$, is defined by the following steps: for a given ordered price vector $p$,
(i) if $h^N_N(p) \ge 0$, then we set $\hat h_i(p) = h^N_i(p)$ for all $i = 1, \dots, N$;
(ii) if $h^N_N(p) < 0$, then we define recursively, for $n = N-1, \dots, 1$, the demand functions $\{h^n_i\}_{i=1}^n$ as in (3.1.5). In particular, if there exists an $n < N$ such that $h^{n+1}_{n+1}(p_1, p_2, \dots, p_n, p_{n+1}) < 0$ and $h^n_n(p_1, p_2, \dots, p_n) \ge 0$, then we set
$$\hat h_i(p) = \begin{cases} h^n_i(p_1, p_2, \dots, p_n), & i = 1, \dots, n, \\ 0, & i = n+1, \dots, N; \end{cases} \tag{3.1.6}$$
(iii) if there is no such $n$, then $\hat h_i(p) = 0$ for all $i = 1, \dots, N$.

We note that the actual demand functions are always non-negative, but for each price vector $p$ the number $\#\{i : \hat h_i(p) > 0\} \le N$, and could even be zero.

1.1 The Bertrand game and its Nash equilibrium

Besides the demand function, a key ingredient in the placement decision-making process is the "waiting cost" for the time it takes for the limit order to be executed. We shall assume that each seller has her own waiting cost function $c^N_i = c^N_i(p_1, p_2, \dots, p_N; Q)$, where $Q$ is the total number of shares available in the LOB. Similarly to the demand function, we impose the following assumptions on the waiting cost.

Assumption 3.1.2.
For each seller $i \in \{1, \dots, n\}$ with $n \in [1, N]$, each $c^N_i$ is smooth in all variables and satisfies:
(i) (Monotonicity) $\frac{\partial c^N_i}{\partial p_i} > 0$, and $\frac{\partial c^N_i}{\partial p_j} < 0$ for $j \neq i$;
(ii) (Exchangeability) $c^N_i(p_1, \dots, p_i, \dots, p_j, \dots, p_N) = c^N_j(p_1, \dots, p_j, \dots, p_i, \dots, p_N)$;
(iii) $c^N_i(p)\big|_{p_i=0} = 0$, and $\frac{\partial c^N_i}{\partial p_i}\big|_{p_i=0+} \in (0, 1)$;
(iv) $\lim_{p_i \to \infty} \frac{p_i}{c^N_i(p)} = 0$, $i = 1, \dots, N$.

Remark 3.1.3. (i) By exchangeability, in what follows we shall assume, without loss of generality, that all prices are ordered.
(ii) Assumption 3.1.2-(i), (ii) ensure that the ordering of the prices leads to the same ordering of the waiting cost functions, similarly to what we argued before for the demand functions.
(iii) Consider the function $J_i(p; Q) = p_i - c^N_i(p; Q)$. Assumption 3.1.2 amounts to saying that $J_i(p; Q)\big|_{p_i=0} = 0$, $\frac{\partial J_i(p;Q)}{\partial p_i}\big|_{p_i=0+} > 0$, and $\lim_{p_i \to \infty} J_i(p; Q) < 0$. Thus, there exists $p^0 = p^0_i(p_{-i}; Q) > 0$ such that $\frac{\partial J_i(p;Q)}{\partial p_i}\big|_{p_i = p^0_i} = 0$ and $\frac{\partial J_i(p;Q)}{\partial p_i}\big|_{p_i > p^0_i} < 0$.
(iv) Since $J_i(0; Q) = 0$ and $\frac{\partial J_i(p;Q)}{\partial p_i}\big|_{p_i=0+} > 0$, one can easily check that $J_i(p^0_i; Q) > 0$. This, together with Assumption 3.1.2-(iv), shows that there exists $\tilde p_i = \tilde p_i(p_{-i}; Q) > p^0_i$ such that $J_i(p; Q)\big|_{p_i = \tilde p_i} = 0$ (or, equivalently, $c^N_i(p_1, \dots, p_{i-1}, \tilde p_i, p_{i+1}, \dots, p_N; Q) = \tilde p_i$). Furthermore, remark (iii) implies that $J_i(p; Q) < 0$ for all $p_i > \tilde p_i(p_{-i}; Q)$. In other words, any selling price higher than $\tilde p_i(p_{-i}; Q)$ would yield a negative profit, and therefore should be avoided.

The Bertrand game among sellers can now be formally introduced: each seller chooses her price to maximize profit in a non-cooperative manner, and her decision is based not only on her own price but also on the actions of all the other sellers. We denote the profit of each seller by
$$\pi_i(p_1, p_2, \dots, p_N; Q) := \hat h_i(p_1, p_2, \dots, p_N)\big[p_i - c^N_i(p_1, p_2, \dots, p_N; Q)\big], \tag{3.1.7}$$
and each seller tries to maximize her profit $\pi_i$. For each fixed $Q$, we look for a Nash equilibrium price vector $p^{*,N}(Q) = (p^{*,N}_1(Q), \dots, p^{*,N}_N(Q))$. We note that in the case when $\hat h_i(p^{*,N}) = 0$ for some $i$, the $i$-th seller will not participate in the game (with zero profit), so we shall modify her price to
$$p^{*,N}_i(Q) \triangleq c^N_i(p^{*,N}_1, \dots, p^{*,N}_N; Q) = c^N_i(p^{*,N}; Q), \tag{3.1.8}$$
and consider a subgame involving the remaining $N-1$ sellers, and so on. That is, for a subgame with $n$ sellers, they solve
$$p^{*,n}_i = \arg\max_{p \ge 0} \ \pi^n_i(p^{*,n}_1, p^{*,n}_2, \dots, p^{*,n}_{i-1}, p, p^{*,n}_{i+1}, \dots, p^{*,n}_n; Q), \qquad i = 1, \dots, n, \tag{3.1.9}$$
to get $p^{*,n} = (p^{*,n}_1, \dots, p^{*,n}_n, c^{*,n+1}_{n+1}, \dots, c^{*,N}_N)$. More precisely, we define a Nash equilibrium as follows.

Definition 3.1.4. A vector of prices $p^* = p^*(Q) = (p^*_1, p^*_2, \dots, p^*_N)$ is called a Nash equilibrium if
$$p^*_i = \arg\max_{p \ge c_i} \ \pi_i(p^*_1, p^*_2, \dots, p^*_{i-1}, p, p^*_{i+1}, \dots, p^*_N; Q), \tag{3.1.10}$$
and $p^*_i = c^{*,i}_i(p^*; Q)$ whenever $\hat h_i(p^*) = 0$, $i = 1, 2, \dots, N$.

We make the following assumption on the subgames for our discussion.

Assumption 3.1.5. For $n = 1, \dots, N$, there exists a unique solution to the system of maximization problems in equation (3.1.9).

Remark 3.1.6. We observe from the definition of the Nash equilibrium that, in equilibrium, a seller actually participates in the Bertrand game only when her actual demand function is positive, and sellers with zero actual demand function are ignored in the subsequent subgames. However, a participating seller does not necessarily have positive profit unless she sets her price higher than the waiting cost. In other words, it is possible that $\hat h_i(p^*) > 0$ but $p^*_i = c^*_i(p^*; Q)$, so that $\pi_i(p^*; Q) = 0$. We refer to such a case as the boundary case.
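Before the existence result stated next, the following is a small numerical sketch (my own; the linear specification and all parameter values are made up, anticipating Section 1.2) of locating an interior equilibrium of (3.1.10) by iterating the sellers' best responses over a price grid. The subgame reduction for sellers with non-positive demand is omitted, since at these particular parameters all demands remain positive.

```python
import numpy as np

# Hypothetical linear specification (parameters are made up):
A, B, C = 10.0, 2.0, 1.0          # demand  h_i = A - B p_i + C * mean(p_{-i})
X, Y = 0.2, 0.1                   # cost    c_i = X * p_i - Y * mean(p_{-i})
N = 5
grid = np.linspace(0.0, 10.0, 4001)

def profit(p_i, pbar_minus_i):
    demand = A - B * p_i + C * pbar_minus_i
    return demand * (p_i - (X * p_i - Y * pbar_minus_i))

p = np.full(N, 1.0)               # starting prices
for _ in range(200):              # best-response (Gauss-Seidel) iteration
    for i in range(N):
        pbar = np.delete(p, i).mean()
        p[i] = grid[np.argmax(profit(grid, pbar))]

print("approximate interior Nash equilibrium:", np.round(p, 3))
print("equilibrium demands:", np.round(A - B * p + C * (p.sum() - p) / (N - 1), 3))
```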
The following result details the procedure for finding the Nash equilibrium of the Bertrand competition. The idea is quite similar to that in [16], except for the general form of the waiting cost. We sketch the proof for completeness.

Proposition 3.1.7. Assume that Assumption 3.1.2 is in force. Then there exists a Nash equilibrium of the Bertrand game (3.1.7), (3.1.10). Moreover, the equilibrium point $p^*$, after modifications, takes the following form:
$$p^* = (p^*_1, \dots, p^*_k, c^{*,b}_{k+1}, \dots, c^{*,b}_n, c^*_{n+1}, \dots, c^*_N), \tag{3.1.11}$$
from which we can immediately read: $\hat h_i(p^*) > 0$ and $p^*_i > c^*_i$ for $i = 1, \dots, k$; $\hat h_i(p^*) > 0$ but $p^*_i \le c^*_i$ for $i = k+1, \dots, n$; and $\hat h_i(p^*) \le 0$ for $i = n+1, \dots, N$.

Proof. We start with $N$ sellers, and we drop the superscript $N$ from all the notation for simplicity. Let $p^* = (p^*_1, p^*_2, \dots, p^*_N)$ be the candidate equilibrium prices (obtained, for example, from the first-order conditions). By exchangeability, we can assume without loss of generality that the prices are ordered, $p^*_1 \le p^*_2 \le \dots \le p^*_N$, and so are the corresponding cost functions, $c^*_1 \le c^*_2 \le \dots \le c^*_N$, where $c^*_i = c_i(p^*; Q)$ for $i = 1, \dots, N$. We first compare $p^{*,N}_N$ and $c^{*,N}_N$.

Case 1. $p^*_N > c^*_N$. We consider the following cases:
(a) If $h^N_N(p^*) > 0$, then by Definition 3.1.1 we have $\hat h_i(p^*) = h^N_i(p^*) > 0$ for all $i$, and $p^* = (p^*_1, p^*_2, \dots, p^*_N)$ is an equilibrium point.
(b) If $h^N_N(p^*) \le 0$, then in light of the definition of the actual demand function (Definition 3.1.1), we have $\hat h_N(p^*) = 0$. Thus, the $N$-th seller has zero profit regardless of where she sets her price. We require in this case that the $N$-th seller reduce her price to $c^*_N$, and we consider the remaining $(N-1)$ sellers' candidate equilibrium prices $p^{*,N-1} = (p^{*,N-1}_1, \dots, p^{*,N-1}_{N-1})$.

Case 2. $p^*_N \le c^*_N$. In this case, the $N$-th seller would have a non-positive profit at best. Thus, she sets $p^*_N = c^*_N$ and quits the game, and again the problem is reduced to a subgame with $(N-1)$ sellers, as in Case 1-(b).

We should note that it is possible that $h^N_N(p^*) > 0$ but $p^*_N \le c^*_N$. In this case, the profit would be non-positive, and the best option for the $N$-th seller is still to set $p^*_N = c^*_N$ and quit. Such a case is known as the "boundary case", and we use the notation $p^*_N = c^{*,b}_N$ to indicate it. Repeating the same procedure for the subgames (for $n = N-1, \dots, 2$), we see that eventually we obtain a modified equilibrium point $p^*$ of the form (3.1.11), proving the proposition.

1.2 A linear mean-field case

In this section, we consider a special case, studied in [35], but with modified waiting cost functions. More precisely, we assume that there are $N$ sellers, each with demand function
$$h^N_i(p_1, \dots, p_N) \triangleq A - B p_i + C \bar p^N_{-i}, \tag{3.1.12}$$
where $A, B, C > 0$, $B > C$, and $\bar p^N_{-i} = \frac{1}{N-1}\sum_{j \neq i}^N p_j$. We note that the structure of the demand function (3.1.12) obviously reflects a mean-field nature, and one can easily check that it satisfies all the assumptions mentioned in the previous sections. Furthermore, as was shown in [35, Proposition 2.4], the actual demand functions take the form: for each $n \in \{1, \dots, N-1\}$,
$$h^n_i(p_1, \dots, p_n) = a_n - b_n p_i + c_n \bar p^n_{-i}, \qquad i = 1, \dots, n,$$
where $\bar p^n_{-i} = \frac{1}{n-1}\sum_{j \neq i}^n p_j$, and the parameters $(a_n, b_n, c_n)$ can be calculated recursively for $n = N, \dots, 1$, with $a_N = A$, $b_N = B$, and $c_N = C$. We note that in these works the (waiting) costs are assumed to be constant. Let us now assume further that the waiting cost is also linear.
For example, for $n = 1, \dots, N$,
$$c^n_i = c^n_i(p_i, \bar p^n_{-i}; Q) \triangleq x_n(Q)\, p_i - y_n(Q)\, \bar p^n_{-i}, \qquad x_n(Q), y_n(Q) > 0.$$
Note that the profit function for seller $i$ is then
$$\pi_i(p_1, \dots, p_n; Q) = (a_n - b_n p_i + c_n \bar p^n_{-i})\big\{p_i - (x_n p_i - y_n \bar p^n_{-i})\big\}. \tag{3.1.13}$$
An easy calculation shows that the critical point of the maximization is
$$p^{*,n}_i = \frac{a_n}{2 b_n} + \Big(\frac{c_n}{2 b_n} - \frac{y_n}{2(1 - x_n)}\Big)\, \bar p^n_{-i}, \tag{3.1.14}$$
which is the optimal choice of seller $i$ if the other sellers set prices with average $\bar p^n_{-i} = \frac{1}{n-1}\sum_{j \neq i}^n p_j$. Now, let us define $\bar p^n := \frac{1}{n}\sum_{i=1}^n p^{*,n}_i$; averaging (3.1.14) over $i$ yields
$$\bar p^n = \frac{a_n (1 - x_n)}{2 b_n (1 - x_n) - c_n (1 - x_n) + b_n y_n}. \tag{3.1.15}$$
Then it is readily seen that $\bar p^n_{-i} = \frac{n}{n-1}\bar p^n - \frac{1}{n-1} p^{*,n}_i$, which means (plugging back into (3.1.14))
$$p^{*,n}_i = \frac{a_n}{2 b_n + \frac{c_n}{n-1} - \frac{1}{n-1}\frac{b_n y_n}{1 - x_n}} + \frac{1}{\frac{n-1}{n}\,\frac{2 b_n (1 - x_n)}{c_n (1 - x_n) - b_n y_n} + \frac{1}{n}}\; \bar p^n. \tag{3.1.16}$$
For the sake of argument, let us assume that the coefficients $(a_n, b_n, c_n, x_n(Q), y_n(Q))$ converge to $(a, b, c, x(Q), y(Q))$ as $n \to \infty$. Then we see from (3.1.15) and (3.1.16) that
$$\lim_{n \to \infty} \bar p^n = \frac{a(1-x)}{2b(1-x) - c(1-x) + by} =: \bar p, \qquad
\lim_{n \to \infty} p^{*,n}_i = \frac{a}{2b} + \frac{c(1-x) - by}{2b(1-x)}\,\lim_{n \to \infty}\bar p^n = \frac{a(1-x)}{(2b-c)(1-x) + by} =: p^*. \tag{3.1.17}$$

It is worth noting that if we assume that there is a "representative seller" who randomly sets the prices $p = p_i$ with equal probability $\frac{1}{n}$, then we can randomize the profit function (3.1.13) as follows:
$$\pi_n(p, \bar p) = (a_n - b_n p + c_n \bar p)\big\{p - (x_n p - y_n \bar p)\big\}, \tag{3.1.18}$$
where $p$ is a random variable taking the values $\{p_i\}$ with equal probability, and $\bar p \approx \mathbb{E}[p]$, thanks to the Law of Large Numbers, when $n$ is large enough. In particular, in the limiting case $n \to \infty$, we can replace the randomized profit function $\pi_n$ in (3.1.18) by
$$\pi_\infty = \pi(p, \mathbb{E}[p]) := (a - bp + c\,\mathbb{E}[p])\big\{p - (x p - y\,\mathbb{E}[p])\big\}. \tag{3.1.19}$$
A calculation similar to (3.1.14) shows that $(p^*, \mathbb{E}[p^*]) \in \arg\max \pi(p, \mathbb{E}[p])$ takes the form
$$p^* = \frac{c(1-x) - by}{2b(1-x)}\,\mathbb{E}[p^*] + \frac{a}{2b}, \qquad \mathbb{E}[p^*] = \frac{a(1-x)}{2b(1-x) - c(1-x) + by}.$$
Consequently, we see that $p^* = \frac{a(1-x)}{(2b-c)(1-x) + by}$, as in (3.1.17).

Remark 3.1.8. The analysis above indicates the following facts:
(i) If we consider the sellers in a "homogeneous" way, then, as the number of sellers becomes large enough, all of them will actually choose the same strategy, as if there were a "representative seller" who places the prices uniformly;
(ii) The limit of the equilibrium prices coincides with the optimal strategy of the representative seller under the limiting profit function.
These facts are quite standard in mean-field theory, and will be used as the basis for our dynamic model of the (sell-side) LOB in the next section.

2 Continuous time mean-field type liquidity dynamics

In this section, we extend the idea of the Bertrand game to the continuous-time setting. To begin with, we assume that the contribution of each individual seller to the LOB is measured by the "liquidity" (i.e., the number of shares of the given asset) she provides, which is a function of the selling price she chooses, hence fitting the "Bertrand game" framework.

2.1 A general description

We begin by assuming that there are $N$ sellers, and denote the liquidity that the $i$-th seller "adds" to the LOB at time $t$ by $Q^i_t$. We shall assume that it is a pure-jump Markov process with the following generator: for any $f \in C([0,T]\times\mathbb{R}^N)$ and $(t, q) \in [0,T]\times\mathbb{R}^N$,
$$\mathcal{A}^{i,N}[f](t, q) := \int_{\mathbb{R}} \lambda_i(t, q, \alpha)\big[f\big(t, q_{-i}(q_i + h_i(t, \alpha, z))\big) - f(t, q) - \langle \partial_{q_i} f, h_i(t, \alpha, z)\rangle\big]\, \nu_i(dz), \tag{3.2.1}$$
Roughly speaking, (3.2.1) indicates that the seller would act (or \jump") at stopping timesf i j g 1 j=1 with the waiting times i j+1 i j having exponential distribution with intensity i (), and jump size being determined by the demand function h i ( ). The total liquidity provided by all the sellers is then a pure jump process with the generator A N [f](t;q;) = N X i=1 A i;N [f](t;q); q2R N ; N2N; t2 [0;T ]: (3.2.2) We now specify the functions i and h i further. Recalling the demand function intro- duced in the previous section, we assume that there are two functions and h, such that for each i and for (t;x;p;q)2 [0;T ]RR 2N , i (t;q;) =(t;p i ;q i ; N ); h i (t;;z) =h(t;x;p i ;q i ;z); (3.2.3) where N := 1 N P N i=1 p i ,x denotes the fundamental price at timet, andp i is the sell price. We shall considerp = (p 1 ; ;p N ) as the \control" variable, as the Bertrand game suggests. Now, if we assume i = for all i, then we have a pure jump Markov game of mean-eld type, similar to the one considered in [6], in which each seller adds liquidity (in terms of number of shares) dynamically as a pure jump Markov process, denoted by Q i t ,t 0, with the kernel (t;q i ; N ;p i ;dz) =(t;p i ;q i ; N )[h 1 (t;x;p i ;q i ;)](dz): (3.2.4) Furthermore, in light of the static case studied in the previous section, we shall assume that the seller's instantaneous prot at time t> 0 takes the form: (p i t c i t )Q i t , where c i t is the \waiting cost" for i-th seller at time t. We observe that the actual submitted sell price p i can be written as p i =x +l i , where x is the \mid price" and l i is the \distance" from the 29 mid price that the i-th seller chooses to set. Now let us assume that there is an invertible relationship between the selling prices p and the corresponding number of shares q, e.g., p ='(q) (such a relation is often used to convert the Bertrand game to Cournot game, see, e.g., [35]), and use the l as the control variable. We shall rewrite the functions and h of (3.2.3) in the following form: (t;p i ;q i ; N ) =(t;q i ;l i ; ~ N ('(q))); h(t;x;p i ;q i ;z) =h(t;x;q i ;l i ;z): (3.2.5) To simplify the presentation, in what follows we shall assume that does not depend on the control variable l i , and therefore it is known (cf. e.g., [42]) that each Q i can be written as the solution to the pure jump SDE, which we studied in Chapter 2: Q i t =q i + Z t 0 Z AR + h(r;X r ;Q i r ;l i r ;z)1 [0;(r;Q i r ; N '(Qr ) )] (y)N s (drdzdy); (3.2.6) where Q t = (Q 1 t ; ;Q N t ),N s is a Poisson random measure onR + RR + , andfX t g t0 is the mid-price process of the underlying asset, which we assume to satisfy the SDE (cf. [41]): X t;x s =x + Z s t b(r;X t;x r )dr + Z s t (r;X t;x r )dW r ; (3.2.7) where b and are deterministic functions satisfying some standard conditions. We shall assume that the i-th seller is aiming at maximizing the expected total accumulate prot: E n X t0 (p i t c i t )Q i t o =E n Z 1 0 Z A h(t;X t ;Q i t ;l i t ;z)(X t +l i t c i t )(t;Q i t ; N '(Qt) ) s (dz)dt o : (3.2.8) We remark that in (3.2.8) the time horizon is allowed to be innity, which can be easily converted to nite horizon by settingh(t; ) = 0 fortT , for a given time horizonT > 0, which we do not want to specify at this point. Instead, our focus will be mainly on the limiting behavior of the equilibrium whenN!1. 
In fact, given the \symmetric" nature of 30 the problem (i.e., all sellers having the same andh), as well as the results in the previous section, we envision a \representing seller" in a limiting mean-eld type control problem whose optimal strategy coincides with the limit of N-seller Nash equilibrium as N!1, just as the well-known continuous diusion cases (see, e.g., [37] and [11,15]). We should note such a result for pure jump cases has been substantiated in a recent work [6], in which it was shown that, under reasonable conditions, in the limit the total liquidity Q t = P N i=1 Q i t will converge to a pure jump Markovian process with a mean-eld type generator. Based on this result, as well as the individual optimization problem (3.2.6) and (3.2.8), it is reasonable to consider the following (limiting) mean-led-type pure-jump stochastic control problem for a representing seller, whose total liquidity has a dynamics that can be characterized by the following mean-eld type pure jump SDE: Q t =q + Z t 0 Z AR + h(r;X r ;Q r ;l r ;z)1 [0;(r;Q r ;P Qr )] (y)N s (drdzdy); (3.2.9) where (t;Q;P Q ) :=(t;Q;E['(Q)]) by a slight abuse of notation, and with the cost func- tional: (q;l) =E n Z 1 0 Z A h(t;X t ;Q t ;l t ;z)(X t +l t c t )(t;Q t ;P Qt )v s (dz)dt o : (3.2.10) 2.2 Problem formulation With the general description in mind, we now give the formulation of our problem. First, we note that the liquidity of the limit order book will not only be aected by the liquidity providers (i.e., the sellers), but also by liquidity \consumer", that is, the market buy orders as well as the cancellations of sell orders (which we assume is free of charge). We shall describe its collective movement (in terms of number of shares) of all such \consumptional" orders as a compound Poisson process, denoted by t = P Nt i=1 i , t 0, wherefN t g is a standard Poisson process with parameter , andf i g is a sequence of i.i.d. random variables taking values in a set BR, with distribution . Without loss of generality, we 31 assume that counting measure of coincides with the canonical Poisson random measure N b , so that the L evy measure b = . In other words, t := R t 0 R B z e N b (drdz), and the total liquidity satises the SDE: Q 0 t =q + Z t 0 Z AR + h(r;X r ;Q 0 r ;l r ;z)1 [0;(r;Q 0 r ;P Q 0 r )] (y)N s (drdzdy) t : (3.2.11) We remark that there are two technical issues for the dynamics (3.2.11). First, the presence of the buy order process brings in the possibility that Q 0 t < 0, which should never happen in reality. We shall therefore assume that the buy order has a natural upper limit: the total available liquidity Q 0 t , that is, if we denoteS =ft : t 6= 0g, then for all t2S , we have Q 0 t = (Q 0 t t ) + . Consequently, we can assume that there exists a processK =fK t g, whereK is an increasing, pure jump process such that (i)S K =S ; (ii) K t := (Q t t ) , t2S K ; and (iii) the Q 0 -dynamics (3.2.11) can be written as Q t = q + Z t 0 Z AR + h(r;X r ;Q r ;l r ;z)1 [0;(r;Q r ;P Qr )] (y)N s (drdzdy) t +K t = q + Z t 0 Z AR + h(r;X r ;Q r ;l r ;z)1 [0;(r;Q r ;P Qr )] (y) e N s (drdzdy) (3.2.12) Z t 0 Z B z e N b (drdz) + Z t 0 Z A h(r;X r ;Q r ;l r ;z)(r;Q r ;P Qr ) s (dz)dr +K t ; t 0: where K is a \re ecting process", and e N s (drdzdy) is the compensated Poisson martingale measure ofN s . That is, (3.2.12) is a (pure-jump) mean-eld SDE with re ection as was studied in Chapter 2. 
Now, in light of the discussion of MFSDER in Chapter 2, we shall consider the following two MFSDERs that are slightly more general than (3.2.12): for 2L 2 (F t ;R), q2R, and 0st, Q t; s = + Z s t Z AR + h(r;X r ;Q t; r ;l r ;z) 1 [0;(r;Q t; r ;P Q t; r )] (y) e N s (drdzdy) Z s t Z B z e N b (drdz) + Z s t a(r;X r ;Q t; r ;P Q t; r ;l r )dr +K t; s ; (3.2.13) 32 Q t;q; s = q + Z s t Z AR + h(r;X r ;Q t;q; r ;l r ;z) 1 [0;(r;Q t;q; r ;P Q t; r )] (y) e N s (drdzdy) Z s t Z B z e N b (drdz) + Z s t a(r;X r ;Q t;q; r ;P Q t; r ;l r )dr +K t;q s ; (3.2.14) wherel =fl s g is the control process for sellers, and Q s =Q t;q; s ,st, is the total liquidity of the sell-side LOB. We shall consider the following set of admissible strategies: U ad :=fl2L 1 F ([0;T ];R + ) :l isF-predictableg: (3.2.15) The objective of the seller is to solve the following mean-eld stochastic control problem: v(x;q;P ) = sup l2U ad (x;q;P ;l) = sup l2U ad E n Z 1 0 e r L(r;X x r ;Q q; r ;P Q r ;l r )dr o (3.2.16) where L(t;x;q;;l) := R A h(t;x;q;l;z)c(t;x;q;l)(t;q;) s (dz), and U ad is dened in (3.2.15). Here we denote X x :=X 0;x , Q q; :=Q 0;q; . Remark 3.2.1. (i) In (3.2.13) and (3.2.14), we allow a slightly more general drift function a(), which in particular could be a(t;x;q;;l) = (t;q;) R A h(t;x;q;l;z) s (dz), as is in (3.2.12). (ii) In (3.2.16), the pricing functionc(t;x;q;l) is a more general expression of the original form x +lc in (3.2.10), taking into account the possible dependence of the waiting cost c t on the sell position l and the total liquidity q at time t. (iii) Compared to (3.2.10) we see that a discounting factor e t is added to the cost functional ( ) in (3.2.16), re ecting its nature as the \present value". More precisely, in the rest of the dissertation, we shall assume that the market param- eters b;;;h, the pricing function c in (3.2.13) { (3.2.16), and the discounting factor satisfy the following assumptions. 33 Assumption 3.2.2. All functions b;2C 0 ([0;1)R), 2L 0 ([0;1)RP 2 (R);R + ), h2L 0 ([0;1)R 2 R + A), and c2L 0 ([0;1)R 2 R + ;R + ) are bounded, and satisfy the following conditions, respectively: (i) b and are uniformly Lipschitz continuous in x with Lipschitz constant L> 0; (ii) (t; 0) = 0 and b(t; 0) 0, t 0; (iii) and h satisfy Assumption 2.1.1; (iv) For (t;l)2 [0;1)R + , c(t;x;q;l) is Lipschitz continuous in (x;q), with Lipschitz constant L> 0. (v) h is non-increasing, and c is non-decreasing in the variable l. (vi) >L + 1 2 L 2 , where L> 0 is the Lipschitz constant in Assumption 2.1.1. (vii) For (x;;l)2R + P 2 (R)R + , (x;q;;l) is convex in q. Remark 3.2.3. Under Assumption 3.2.2, one can easily conclude the following assertions: 1) The SDEs (3.2.7) as well as (3.2.13) and (3.2.14) all have pathwisely unique strong solutions in L 2 (F;D([0;T ])), thanks to Theorem 2.1.3; 2) Assumption 3.2.2-(ii) implies that X t;x s 0, s2 [t;T ],P-a.s., whenever x 0. 2.3 Dynamic programming principle and HJB equation In this subsection, we shall substantiate the dynamic programming principle (DPP in short) for the stochastic control problem (3.2.13) { (3.2.16), and drive the corresponding Hamilton- Jacobi-Bellman (HJB) equation. We begin by examining some basic properties of the value function. Proposition 3.2.4. Under the Assumptions 2.1.1 and 3.2.2, the value function v(x;q;P ) is Lipschitz continuous in (x;q;P ), non-decreasing in x, and decreasing in q. Proof We rst check the Lipschitz property in x. 
For x;x 0 2R, denote X x =X 0;x and X x 0 =X 0;x 0 as the corresponding solutions to (3.2.7), respectively. Denote X t =X x t X x 0 t , 34 and x = xx 0 . Then, applying It^ o's formula tojX t j 2 and some standard arguments, one has jX t j 2 =jxj 2 + Z t 0 (2 s + 2 s )jX s j 2 ds + Z t 0 2 s jX s j 2 dW s ; where ; are two processes bounded by the Lipschitz constants L in Assumption 2.1.1, thanks to Assumption 3.2.2. Thus, one can easily check, by taking expectation and applying Burkholder-Davis-Gundy and Gronwall inequalities, that E[jXj ;2 t ]jxj 2 e (2L+L 2 )t ; t 0: (3.2.17) Furthermore, it is clear that under Assumption 3.2.2 the function L(t;x;q;;l) is uni- formly Lipschitz in x, uniformly in (t;q;;l). That is, for some generic constant C > 0, which is allowed to vary from line to line, we have j(x;q;P ;l) (x 0 ;q;P ;l)jCE h Z 1 0 Z A e t jX t j s (dz)dt i CE h Z 1 0 e t q E[jXj ;2 t ]dt i Cjxj Z 1 0 e t e (L+ 1 2 L 2 )t dt =Cjxx 0 j: Here the last inequality is due to Assumption 3.2.2-(vi). Consequently, we obtain jv(x;q;P )v(x 0 ;q;P )jCjxx 0 j; 8x;x 0 2R: (3.2.18) To check the Lipschitz properties for q and P , we denote, for (q;P )2 R + P 2 (R), h q; s h(s;X s ;Q t;q; s ;l s ;z), q; s (s;Q t;q; s ;P Q t; s ), and c q; s c(s;X s ;Q t;q; s ;l s ), s t. Furthermore, for q;q 0 2 R + and P ;P 02P 2 (R), we deonte r q; r q 0 ; 0 r for = h;;c. Now by Assumptions 2.1.1 and 3.2.2, and following a similar argument of Theorem 2.1.3, one shows that 35 j(x;q;P ;l) (x;q 0 ;P 0;l)j E n Z 1 0 Z A e r h q; r c q; r j r j +c q; r q 0 ; 0 r jh r j +h q 0 ; 0 r q 0 ; 0 r jc r j s (dz)dr o E n Z 1 0 Z A e r jQ t;q; r Q t;q 0 ; 0 r j s (dz)dr o C jqq 0 j +W 1 (P ;P 0) ; which implies that jv(x;q;P )v(x;q 0 ;P 0 )jC jqq 0 j +W 1 (P ;P 0) : (3.2.19) Finally, the respective monotonicities of the value function on x and q follow from the comparison theorems of the corresponding SDEs and Assumption 3.2.2. The proof is now complete. We now turn our attention to the dynamic programming principle (DPP). Let us rst denote 8 > > < > > : (t;x;q;P ;l) :=E h Z 1 t e (st) L(s;X t;x s ;Q t;q; s ;P Q t; s ;l s )ds i ; v(t;x;q;P ) := sup l2U ad (t) (t;x;q;P ;l); (3.2.20) whereU ad (t) := n l2L 1 F ([t;T ];R + ) :l isF-predictable o . We shall proceed as follows. First, we prove the DPP for deterministic time increment, from which we derive the temporal regularity of v(t;x;q;P ). The general DPP will then be argued. To be more precise, we rst prove the following simpler form of DPP: Proposition 3.2.5. Assume that Assumptions 2.1.1 and 3.2.2 are in force. Then, for any 0t 1 <t 2 , it holds that v(t 1 ;x;q;P ) = sup l2U ad (t 1 ) E h Z t 2 t 1 e (st 1 ) L(s;X t 1 ;x s ;Q t 1 ;q; s ;P Q t 1 ; s ;l s )ds +e (t 2 t 1 ) v(t 2 ;X t 1 ;x t 2 ;Q t 1 ;q; t 2 ;P Q t 1 ; t 2 ) i : (3.2.21) 36 Proof Let us denote the right side of (3.2.21) by ~ v(t 1 ;x;q;P ) = sup l ~ (t 1 ;x;q;P ;l). We rst note that X r and (Q t; r ;Q t;q; r ) have the ow property. 
So, for any l2U ad (t 1 ), (t 1 ;x;q;P ;l) =E Z 1 t 1 e (st 1 ) L(s;X t 1 ;x s ;Q t 1 ;q;;l s ;P Q t 1 ;;l s ;l s )ds = E Z t 2 t 1 e (st 1 ) L(s;X t 1 ;x s ;Q t 1 ;q;;l s ;P Q t 1 ;;l s ;l s )ds +e (t 2 t 1 ) E Z 1 t 2 e (st 2 ) L(s;X t 1 ;x s ;Q t 1 ;q;;l s ;P Q t 1 ;;l s ;l s )ds F t 2 (3.2.22) = E Z t 2 t 1 e (st 1 ) L(s;X t 1 ;x s ;Q t 1 ;q;;l s ;P Q t 1 ;;l s ;l s )ds +e (t 2 t 1 ) (t 2 ;X t 1 ;x t 2 ;Q t 1 ;q;;l t 2 ;P Q t 1 ;;l t 2 ;l) E Z t 2 t 1 e (st 1 ) L(s;X t 1 ;x s ;Q t 1 ;q;;l s ;P Q t 1 ;;l s ;l s )ds +e (t 2 t 1 ) v(t 2 ;X t 1 ;x t 2 ;Q t 1 ;q;;l t 2 ;P Q t 1 ;;l t 2 ) = ~ (t 1 ;x;q;P ;l): This implies that v(t 1 ;x;q;P ) ~ v(t 1 ;x;q;P ). To prove the other direction, let us denote =R + RP 2 (R), and consider, for xed " > 0, a countable partitionf i g 1 i=1 of and (x i ;q i ;P i )2 i , i 2 L 2 (F t 2 ), i = 1; 2; , such that for any (x;q;)2 i , it holdsjxx i j ", q i " q q i , and W 2 (;P i ) ". Now, for each i, choose an "-optimal strategy l i 2 U ad (t 2 ), such that v(t 2 ;x i ;q i ;P i ) (t 2 ;x i ;q i ;P i ;l i )+": Then, by denition of the value function and the Lipschitz properties (Proposition 3.2.4) with some constant C > 0, for any (x;q;)2 i , it holds that (t 2 ;x;q;;l i ) (t 2 ;x i ;q i ;P i ;l i )C"v(t 2 ;x i ;q i ;P i ) (C + 1)" v(t 2 ;x;q;) (2C + 1)": (3.2.23) Now for any l2U ad (t 1 ), we dene a new strategy ~ l as follows: ~ l s :=l s 1 [t 1 ;t 2 ] (s) + h X i l i s 1 i (X t 1 ;x t 2 ;Q t 1 ;q;;l t 2 ;P Q t 1 ;;l t 2 ) i 1 (t 2 ;1) (s): (3.2.24) 37 Then, clearly ~ l2U ad (t 1 ). To simplify notation, let us denote I 1 = Z t 2 t 1 e (st 1 ) L(s;X t 1 ;x s ;Q t 1 ;q; s ;P Q t 1 ; s ;l s )ds: (3.2.25) Applying (3.2.23), we have v(t 1 ;x;q;) (t 1 ;x;q;; ~ l) = E h I 1 +e (t 2 t 1 ) E n Z 1 t 2 e (st 2 ) L(s;X t 1 ;x s ;Q t 1 ;q; s ;P Q t 1 ; s ; ~ l s )ds F t 2 oi = E h I 1 +e (t 2 t 1 ) (t 2 ;X t 1 ;x t 2 ;Q t 1 ;q; t 2 ;P Q t 1 ; t 2 ; ~ l) i = E h I 1 +e (t 2 t 1 ) X i (t 2 ;X t 1 ;x t 2 ;Q t 1 ;q; t 2 ;P Q t 1 ; t 2 ;l i )1 i (X t 1 ;x t 2 ;Q t 1 ;q; t 2 ;P Q t 1 ; t 2 ) i E h I 1 +e (t 2 t 1 ) v(t 2 ;X t 1 ;x t 2 ;Q t 1 ;q; t 2 ;P Q t 1 ; t 2 ) i (2C + 1)" = ~ (t 1 ;x;q;P ;l) (2C + 1)": Since"> 0 is arbitrary, we conclude thatv(t 1 ;x;q;P ) ~ v(t 1 ;x;q;P ). This, together with (3.2.22), proves (3.2.21). With the help of Proposition 3.2.5, we can prove the following temporal regularity of v. Proposition 3.2.6. For any t 2 >t 1 t 0 and (x;q;P )2R 2 P 2 (R), it holds that jv(t 2 ;x;q;P )v(t 1 ;x;q;P )jC(1 +jxj) p t 2 t 1 : (3.2.26) Proof Still denote I 1 by (3.2.25). By Proposition 3.2.4 and 3.2.5, we have, v(t 1 ;x;q;P )v(t 2 ;x;q;P ) = sup l2U ad (t 1 ) E h I 1 +v(t 2 ;X t 1 ;x t 2 ;Q t 1 ;q; t 2 ;P Q t 1 ; t 2 ) i v(t 2 ;x;q;P ) = sup l2U ad (t 1 ) E h I 1 +v(t 2 ;X t 1 ;x t 2 ;Q t 1 ;q; t 2 ;P Q t 1 ; t 2 )v(t 2 ;x;q;P ) i : It is easy to see that, under the Assumption 3.2.2, one has E[I 1 ]C(t 2 t 1 ) and E[jX t 1 ;x t 2 xj]C(1 +jxj) p t 2 t 1 : (3.2.27) 38 On the other hand, by denition of Wasserstein metric, we haveW 1 (P Q t 1 ; t 2 ;P )E[jQ t 1 ; t 2 j]. 
We claim that E[jQ t 1 ;q; t 2 qj] +E[jQ t 1 ; t 2 j]C(t 2 t 1 ): (3.2.28) Indeed, since h and are bounded, we have E[jQ t 1 ;q; t 2 qj] = E h Z t 2 t 1 Z AR + h(r;X r ;Q t 1 ;q; r ;l r ;z)1 [0;(r;Q t 1 ;q; r ;P Q t 1 ; r )] (y) e N s (drdzdy) Z t 2 t 1 Z B z e N b (drdz) + Z t 2 t 1 a(r;X r ;Q t 1 ;q; r ;P Q t 1 ; r ;l r )dr +K t 1 ;q t 2 i CE h Z t 2 t 1 Z A jh(r;X r ;Q t 1 ;q; r ;l r ;z)(r;Q t 1 ;q; r ;P Q t 1 ; r )j s (dz)dr + Z t 2 t 1 Z B jzj b (dz)dr i +C(t 2 t 1 ) +E[jK t 1 ;q t 2 j] C(t 2 t 1 ) +E[jK t 1 ;q t 2 j]: (3.2.29) To estimate E[jK t 1 ;q t 2 j], we write K t 1 ;q t 2 = K t 1 ;q;c t 2 +K t 1 ;q;d t 2 , where K ;c and K ;d are the continuous and jump parts of K, respectively. Now recall the Skorohod problem (cf. [31]). We have EjK t 1 ;q;c t 2 j =E h max t 1 st 2 Z s t 1 a(r;X r ;Q t 1 ;q; r ;P Q t 1 ; r ;l r )dr i C(t 2 t 1 ): (3.2.30) Furthermore, recalling the denition of discontinuous Skorohod problem and the facts that Q t 1 ;q; s 0 and (a +b) jbj for a 0, we have jK t 1 ;q;d t 2 j X t 1 <st 2 jK t 1 ;q s j = X t 1 <st 2 [Q t 1 ;q; s + Q 0 s ] X t 1 <st 2 jQ 0 s j; whereQ 0 is such thatQ t 1 ;q; = (Q 0 ) and () is the solution mapping of the discontinuous Skorohod problem. In other words, by denition of the stochastic integral, we have 39 X t 1 <st 2 jQ 0 s j Z t 2 t 1 Z AR + jh(r;X r ; (Q 0 ) r ;l r ;z)1 [0;(r;(Q 0 ) r ;P (Q 0 )r )] (y)j e N s (drdzdy) + Z t 2 t 1 Z B jzj e N b (drdz): A similar estimate as (3.2.29) then shows that EjK t 1 ;q;d t 2 jE h X t 1 <st 2 jQ 0 s j i C(t 2 t 1 ): (3.2.31) Combining (3.2.29)-(3.2.31), we obtain the Q ;q; part of (3.2.28). The Q ; part of (3.2.28) can be argued similarly. Finally, applying Proposition 3.2.4, we have E[v(t 2 ;X t 1 ;x t 2 ;Q t 1 ;q; t 2 ;P Q t 1 ; t 2 )v(t 2 ;x;q;P )]CE h jX t 1 ;x t 2 xj +jQ t 1 ;q; t 2 qj +W 1 (P Q t 1 ; t 2 ;P ) i : This, together with (3.2.27) and (3.2.28), proves (3.2.26). Finally, we give a general version of dynamic programming principle. We denoteT t to be all theF-stopping times taking values in (t;1). Theorem 3.2.7. Let Assumptions 2.1.1 and 3.2.2 hold. Then, for any (t;x;q;P ) 2 [0;1)R 2 P 2 (R) with 2L 2 (F t ) and any 2T t , v(t;x;q;P ) = sup l2U ad (t) E h Z t e (st) L(s;X t;x s ;Q t;q; s ;P Q t; s ;l s )ds +e (t) v(;X t;x ;Q t;q; ;P Q t; ) i : (3.2.32) Proof For each l 2 U ad (t) and 2 T t , let us denote the expectation on the right- hand side of (3.2.32) as R(l;). From the argument of Proposition 3.2.5, we can easily get v(t;x;q;P ) sup l2U ad (t) R(l;). So, we need to show that the reverse inequality holds: v(t;x;q;P ) sup l2U ad (t) R(l;): (3.2.33) 40 For this, let us rst assume that2T t takes only nitely many valuest<t 1 <<t m . In this case, we will show (3.2.33) holds using induction on m. For m = 1, it follows from Proposition 3.2.5. Now, we assume that takesm values, while (3.2.33) holds when takes m 1 values. 
Note that we can write for any l2U ad (t), R(l;) = E h Z t 1 t e (st) L(s;X t;x s ;Q t;q; s ;P Q t; s ;l s )ds +e (t 1 t) v(t 1 ;X t;x t 1 ;Q t;q; t 1 ;P Q t; t 1 )1 f=t 1 g + Z t 1 e (st) L(s;X t;x s ;Q t;q; s ;P Q t; s ;l s )ds +e (t) v(;X t;x ;Q t;q; ;P Q t; ) 1 f>t 1 g i : Sincef >t 1 g2F t 1 and takes m 1 values onf >t 1 g, by inductional hypothesis, the following holds: R(l;) = E h Z t 1 t e (st) L(s;X t;x s ;Q t;q; s ;P Q t; s ;l s )ds +e (t 1 t) v(t 1 ;X t;x t 1 ;Q t;q; t 1 ;P Q t; t 1 )1 f=t 1 g +E n Z t 1 e (st) L(s;X t;x s ;Q t;q; s ;P Q t; s ;l s )ds +e (t) v(;X t;x ;Q t;q; ;P Q t; )jF t 1 o 1 f>t 1 g i E h Z t 1 t e (st) L(s;X t;x s ;Q t;q; s ;P Q t; s ;l s )ds +e (t 1 t) v(t 1 ;X t;x t 1 ;Q t;q; t 1 ;P Q t; t 1 )1 f=t 1 g +e (t 1 t) v(t 1 ;X t;x t 1 ;Q t;q; t 1 ;P Q t; t 1 )1 f>t 1 g i = E h Z t 1 t e (st) L(s;X t;x s ;Q t;q; s ;P Q t; s ;l s )ds +e (t 1 t) v(t 1 ;X t;x t 1 ;Q t;q; t 1 ;P Q t; t 1 ) i v(t;x;q;P ): Since l2U ad (t) is arbitrary, it proves (3.2.33) for m, and thus completes the induction. In order to prove (3.2.33) for arbitrary 2T t , we nd a sequencef n g 1 n=1 2T t such that n 1 n and n ! as n!1. Then, from the previous argument, we can see that (3.2.33) holds for each n . That is, v(t;x;q;P )R(l; n ) for every l2U ad (t). Also, we notice here that R(l; n )R(l;) = E h Z n e (st) L(s;X t;x s ;Q t;q; s ;P Q t; s ;l s )ds +e (nt) v( n ;X t;x n ;Q t;q; n ;P Q t; n ) e (t) v(;X t;x ;Q t;q; ;P Q t; ) i : (3.2.34) 41 By applying Proposition 3.2.6, we can see that the right side of (3.2.34) converges to 0 as n!1. Hence, we have v(t;x;q;P )R(l;) for each l2U ad (t). Now, we formulate the HJB equation. At rst, let us dene the operator by the help of It^ o formula in Proposition 2.2.3: for any 2C 1;(2;1) b (R + RP 2 (R)), J l (x;q;P ) b(t;x)@ x + 2 (t;x) 1 2 @ 2 xx +a(t;x;q;P ;l)@ q (x;q;P ) + Z A (x;q +h(t;x;q;l;z);P )(x;q;P ) @ q (x;q;P )h(t;x;q;l;z) (t;q;P ) s (dz) Z B [(x;qz;P )(x;q;P )@ q (x;q;P )z] b (dz) + ~ E h @ (x;q;P ; ~ )a(t;x; ~ ;P ;l) i + ~ E Z 1 0 Z A @ (x;q;P ; ~ + h(t;x; ~ ;l;z))@ (x;q;P ; ~ ) h(t;x; ~ ;l;z)(t; ~ ;P ) s (dz)d i ~ E Z 1 0 Z B @ (x;q;P ; ~ z)@ (x;q;P ; ~ ) z b (dz)d : If our value function v is smooth, then it is a solution of the following HJB equation v(x;q;P ) = sup l2U ad [J l v(x;q;P ) +L(t;x;q;P ;l)]; (3.2.35) with boundary condition v(x; 0;P ) = 0: 2.4 Viscosity solutions to the HJB equation There are no smooth solutions to the HJB equation in general. So, we introduce a notion of viscosity solutions for the HJB equation (3.2.35). To this end, writeO :=R + RP 2 (R), and for (x;q;)2O, we denote 42 U(x;q;) := n '2C 1;(2;1) b (O) :v(x;q;) ='(x;q;) o ; U(x;q;) := n '2U(x;q;) :v' has a maximum at (x;q;) o ; U(x;q;) := n '2U(x;q;) :v' has a minimum at (x;q;) o : Denition 3.2.8. We say a continuous function v :O! R + is a viscosity subsolution (supersolution, resp.) of (3.2.35) inO if '(x;q;) sup l2U ad [J l '(x;q;) +L(t;x;q;;l)] 0; (resp. 0) (3.2.36) for every '2U(x;q;) (resp. '2U(x;q;)). v is a viscosity solution of (3.2.35) inO if it is both a viscosity subsolution and a viscosity supersolution of (3.2.35) inO. Our main goal of this subsection is to show the following theorem. Theorem 3.2.9. The value function v is a viscosity solution of the HJB equation (3.2.35). Proof For every x (x;q;)2O with =P and for any > 0, letO x; be a subset ofO such that for any x 0 (x 0 ;q 0 ; 0 )2O x; with 0 =P 0 for 0 2L 2 (F t ;R),jxx 0 j<, jqq 0 j<, and W 2 (; 0 )<. Step 1. Let us rst examine the viscosity subsolution property. 
We proceed by contradiction, by assuming that '(x;q;) sup l2U ad [J l '(x;q;) +L(t;x;q;;l)] =: 2" 0 > 0: (3.2.37) Then, there exists > 0 such that for any x 0 2O x; , '(x 0 ) sup l2U ad [J l '(x 0 ) +L(t; x 0 ;l)]" 0 : (3.2.38) 43 Also we have, for some > 0, maxfv( x)'( x) : x = ( x; q; )2O withkx xk =g =< 0; (3.2.39) wherekx xk = meansjx xj =;jq qj =;W 2 (; ) =. Fix "2 (0; min(" 0 ;)). Since v is continuous, there exists a sequencefx m g m2N O x; with x m (x m ;q m ; m ) such that x m ! x, q m ! q, m ! , and v(x m )! v(x) as m!1. Here m = P m forf m g L 2 (F t ;R). Since v(x m )! '(x) as m!1, and '(x) =v(x), we have m :=v(x m )'(x m )! 0 as m!1: Take T > 0 and set m := inf n t 0 :jX xm t xj;jQ qm;m t qj o ; := m ^T: Because v satises the dynamic programming principle, we have v(x m ) " 2 E Z m 0 e t L(t;@ m t ;l t )dt +e m v(@ m m ) : where@ m t denotes the triplet (X xm t ;Q qm;m t ;P Q m t ). By subtracting '(x m ) from both sides and using (3.2.39), we deduce that m " 2 E h Z m 0 e t L(t;@ m t ;l t )dt +e m '(@ m m )e m 1 fmTg '(x m ) i : An application of It^ o formula to e t '(@ m m ) between t = 0 and t = m gives m " 2 E h Z m 0 e t L(t;@ m t ;l t ) +J l '(@ m t )'(@ m t ) dte m 1 fmTg i : Now, we use (3.2.38) to get 44 m " 2 E h Z m 0 e t "dt +e m 1 fmTg i = " +E ( " )e m 1 fmTg + " e m 1 fm>Tg " + " E[e m 1 fm>Tg ] = " + " e T P( m >T ) " + " e T : Let T!1 to have m " 2 " . Now, m!1 gives " 2 " , a contradiction. Step 2. Now, let us show thatv is viscosity supersolution of (3.2.35). Let x = (x;q;)2 O be such that it satises minfv( x; q; )'( x; q; ) : ( x; q; )2Og =v(x;q;)'(x;q;) = 0: Similar to the proof of Step 1 above, there exists a sequencefx m g m2N O x; with x m (x m ;q m ; m ) such that x m !x,q m !q, m !, andv(x m )!v(x) asm!1. We have m :=v(x m )'(x m )! 0 as m!1 with the same reason. Consider a sequencefa m g m2N (0;1) such that a m ! 0 and m am ! 0; as m!1: Given > 0, we dene m := inf n t 0 :jX xm t xj;jQ qm;m t qj o ; := m ^a m : Because v satises the dynamic programming principle, we have v(x m )E Z m 0 e t L(t;@ m t ;l t )dt +e m v(@ m m ) : By subtracting '(x m ) from both sides, we get m E Z m 0 e t L(t;@ m t ;l t )dt +e m '(@ m m )'(x m ) : 45 By applying It^ o formula to e t '(@ m m ) between t = 0 and t = m , m a m E[ 1 a m Z m 0 e t '(@ m t )J l '(@ m t )L(t;@ m t ;l t ) dt]: (3.2.40) Now, we notice that for !2 , there exists M(!)2 N such that m (!) = a m for any mM(!). So, up to a subsequence, we have, as m!1, y m := 1 a m Z m 0 e t '(@ m t )J l '(@ m t )L(t;@ m t ;l t ) dt !'(x;q;P )J l '(x;q;P )L(t;x;q;P ;l): Moreover, since the integral in y m is over the interval [0; m ], there exists a non-negative integrable random variable y such thatjy m j y, for every m2 N. Thus, by dominated convergence theorem, we nd, sending m!1 in (3.2.40), '(x;q;P )J l '(x;q;P ) + L(t;x;q;P ;l): By taking supremum over l2U ad on both sides, we conclude '(x;q;P ) sup l2U ad [J l '(x;q;P ) +L(t;x;q;P ;l)]: 3 Relation with equilibrium density function of a limit order book In this section, we will show how the equilibrium density function of a limit order book is determined by the value function from the Bertrand competition, by observing the con- nection between our work and existing one. In Ma et al. [41], they explained `competitive equilibrium' in words following the idea of Rosu [48]. More precisely, they assume that in equilibrium every seller has the same `expected utility' (or `expected return') U(X;Q), which satises the following conditions: 46 Assumption 3.3.1. 
The expected utility functionU :R + R + 7!R + satises the following properties: (i) U is non-decreasing in x, and @U @Q < 0, @ 2 U @Q 2 > 0; (ii) U is uniformly Lipschitz continuous in (x;q). The way they obtain a formula for the equilibrium density function in terms of U() in [41] is shown below for completeness. Assuming the existence of such expected return U(X;Q), the equilibrium density function f() of a limit order book is a non-negative function f(y) 0,8y 0, such that f(y) = 0 for y < p(0), where p(0) X is the best ask price, and that R 1 p(0) f(y)dy =Q: Now, suppose that a market buy order comes in and -shares of the stock were purchased, where 2 (0;Q]. Then, the following two identities are assumed to hold: Z p() p(0) f(y)dy =; 1 Z p() p(0) yf(y)dy =U(X;Q): (3.3.1) First equality says that we can nd p() > p(0), where p() is the price at which the accumulated volume of sell limit orders between the pricep(0) andp() is equal to share. The second equality means that in equilibrium, the average price of the sold block should be equal toU(X;Q), the expected return of the remaining orders. By simple algebra using (3.3.1), one can derive the equilibrium density f explicitly, as long as U(X;Q) is given. Even though we have an explicit formula for f, it could not be fully used because the exact form of the functionU() whose existence was assumed is unknown. In the following, it will be argued that we can actually identify our value function from the control problem of a representative seller as their conceptual expected utility function U(). First, let us observe the meaning of the two functions. Note that the value function v(x;q;P ) in (3.2.16) is the discounted lifelong expected utility of a representative seller. As we have analyzed in Section 1 and 2 of this chapter, a control problem for a representative seller gives the same result as we consider a Bertrand-type of game for N sellers. So, we can say that in equilibrium every seller has the same discounted expected utility v(x;q;P ). 47 Moreover, as one can see in Proposition 3.2.4, the value function v(x;q;P ) is uniformly Lipschitz continuous, non-decreasing in x, and decreasing in q. Also, by Assumption 3.2.2- (vii), the value function is convex in q variable. Consequently, we can say that the value function v(x;q;P ) satises all the properties for U() in Assumption 3.3.1. Since the value function (3.2.16) has the same interpretation as the expected utility function U() in [41] and they share the same properties, we may conclude that we can identify these two functions. In particular, we can now say that the equilibrium density function of a limit order book is fully described by the value function of a control problem of the representing seller's Bertrand-type game. 48 Chapter 4 Optimal Execution Problem Now, in this chapter, assuming that the equilibrium of LOB is already reached as we showed in the previous chapter, we will solve an optimal execution problem for a buyer in a nite time horizon [0;T ] for some T > 0. That is, we isolate one particular buyer and call \the investor" whose goal is to purchase a specic number, say M, shares up to time T with minimum cost. We will focus on the investor's optimal execution problem. 1 Problem formulation Since the mid-price aects our model as a source of randomness, we assume it satises the following SDE as before: for t2 [0;T ) X t;x s =x + Z s t b(r;X t;x r )dr + Z s t (r;X t;x r )dW r ; s2 [t;T ]; (4.1.1) where b and satisfy the following standing assumptions: Assumption 4.1.1. 
(i) b(;) and (;) are deterministic functions, bounded, continuous in time, uniformly Lipschitz continuous in x (the second variable), with a common uniform Lipschitz constantL> 0, and satises the linear growth condition. That is, for anys2 [t;T ] and x;y2R, there exists a constant L> 0 such that 8 > < > : jb(s;x)j +j(s;x)jL(1 +jxj) jb(s;x)b(s;y)j +j(s;x)(s;y)jLjxyj (ii) X t;x t =x> 0, (s; 0) = 0, and b(s; 0) 0. 49 Now, for the total liquidity in the sell-side LOB, let us recall the Q dynamics (3.2.14): Q t; s = + Z s t Z AR + h(r;X r ;Q t; r ;l r ;z) 1 [0;(r;Q t; r ;P Q t; r )] (y) e N s (drdzdy) Z s t Z B z e N b (drdz) + Z s t a(r;X r ;Q t; r ;P Q t; r ;l r )dr +K t; s : It describes that the total liquidity increases by sellers' liquidity providing actions (the sec- ond term), and decreases by buyers' liquidity consuming actions as well as the cancellation of sell orders (the third term). Let us x the nite time horizon T > 0, and consider the investor. We denote the accumulated number of shares bought up to time t2 [0;T ] by t . Then, obviously =f t :t 0g is an increasing process, and we assume it isF-predictable. So, in the sequel, by the representation theorem (c.f. [31]) we may write the two jump processes explaining liquidity providing and consuming actions as a single jump process. That is, for a measurable space (E;B E ), we can nd a Poisson random measureN dened onR + ER + with L evy measure (dz). Note that it has the state-dependent intensity, and the jump size is allowed to be both positive and negative. Moreover, to simplify the presentation, we shall assume the coecients of Q dynamics are `time-homogeneous', that is, they are independent oft. Thus, we may write the dynamics of total liquidity as follows: for t2 [0;T ), 2L 2 (F t ;R) and q2R, Q t;; s = + Z s t Z ER + j(X r ;Q t; r ;z)1 [0;(Q t; r ;P Q t; r )] (y) e N (drdzdy) + Z s t a(X r ;Q t; r ;P Q t; r )dr Z s t d r +K t; s ; (4.1.2) Q t;q;; s = q + Z s t Z ER + j(X r ;Q t;q; r ;z)1 [0;(Q t;q; r ;P Q t; r )] (y) e N (drdzdy) + Z s t a(X r ;Q t;q; r ;P Q t; r )dr Z s t d r +K t;q; s ; (4.1.3) where a :R + RP 2 (R)!R, j :R + RE!R and e N () is a compensated Poisson random measure. We will write Q t; instead of Q t;; whenever the context is clear to understand. 50 Remark 4.1.2. Note that as we have studied in Chapter 2, the equations (4.1.2)-(4.1.3) are MFSDER, each has pathwisely unique solution inL 2 , and the pair of solutions (Q t;q; s ;Q t; s ) satises the ow property. Each realization of can be considered as an execution strategy. So, let us dene A :=f2R + j is non-decreasing, c agl ad,F-predictable, and T Mg: And, we consider the following admissible strategies: given q 0, A ad (q) :=f2A jQ t;q; s 0 for all s2 [t;T ];P a.s., and d t ?dK t g: Recall from the conclusion of the last chapter that in equilibrium every seller has the same expected utilityU(X;Q), and it is identied as the value function of a control problem for a representative seller. 
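For later use, here is a sketch of the "simple algebra" behind the explicit equilibrium density mentioned after (3.3.1). It is written under the reading, suggested by the phrase "expected return of the remaining orders", that the second identity in (3.3.1) is $\frac{1}{\alpha}\int_{p(0)}^{p(\alpha)} y f(y)\,dy = U(X,Q-\alpha)$; the formulas below are a sketch under that assumption, with the expected utility $U$ identified with the value function as above.

Differentiating the first identity in (3.3.1) in $\alpha$ gives $f(p(\alpha))\,p'(\alpha)=1$, while differentiating $\int_{p(0)}^{p(\alpha)} y f(y)\,dy=\alpha\,U(X,Q-\alpha)$ gives
\[
p(\alpha)\,f(p(\alpha))\,p'(\alpha)=U(X,Q-\alpha)-\alpha\,\partial_q U(X,Q-\alpha).
\]
Combining the two,
\[
p(\alpha)=U(X,Q-\alpha)-\alpha\,\partial_q U(X,Q-\alpha),\qquad
p'(\alpha)=-2\,\partial_q U(X,Q-\alpha)+\alpha\,\partial_{qq}^2 U(X,Q-\alpha)>0
\]
by Assumption 3.3.1(i), and therefore
\[
f\big(p(\alpha)\big)=\frac{1}{p'(\alpha)}
=\Big(-2\,\partial_q U(X,Q-\alpha)+\alpha\,\partial_{qq}^2 U(X,Q-\alpha)\Big)^{-1},
\qquad \alpha\in(0,Q].
\]
In particular, letting $\alpha\downarrow 0$ gives $p(0)=U(X,Q)$: under this reading the best ask coincides with the expected utility at the current inventory.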
Also, from 3.3.1, given the density function f, we can write the cost for the investor of buying shares of stock as C(X;Q;) := Z p() p(0) yf(y)dy =U(X;Q) By simple observation as in [41], the cost is written as C(X;Q;) =U(X;Q) +O( 2 ): For a continuous strategy c =f c t :t2 [0;T ]g, we can write the cost function as: Z t 0 C(X s ;Q c s ;d c s ) = Z t 0 U(X s ;Q c s )d c s : Throughout the dissertation, we shall denote R + := (0;1); R + := [0;1); G :=R + [0;M)R + ; G :=R + [0;M] R + : 51 Now, we introduce the optimal execution problem of the investor - the investor wants to purchaseM shares of the stock within the time duration [0;T ]. Consider the cost functional: J() =Ef X 0t<T C(X t ;Q t ; t ) + Z T 0 U(X t ;Q t )d c t +g(X T ;M T )g; (4.1.4) where the rst term describes the cost from the jump part of, the second term corresponds to the cost from continuous strategy, and the last penalty function g acts for the assets remaining at the end of trading horizon. The value function is then given by V 0 = inf 2A ad (q) J(): (4.1.5) We assume the penalty function g satises the following assumptions: Assumption 4.1.3. The terminal penalty function g :R + [0;M]!R + has the following properties: (i) g(x; 0) = 0. (ii) For xed x, the function g(x;k) is increasing and convex in k. (iii) g(x;k)U(x; 0)k, where U(x; 0) is the expected return when there is no liquidity. (iv) g is uniformly Lipschitz continuous in (x;k). Dene the set of continuous strategies by A c ad (q),f2A ad (q) :t7! t is continuous,Pa:s:g: (4.1.6) Since U(X;Q) is decreasing in Q from Assumption 3.3.1, we have, for 0<Q, C(X;Q;) =U(X;Q) = Z 0 U(X;Q)du Z 0 U(X;Qu)du,D(X;Q;): 52 Now, we consider the following two cost functionals: J 0 () , Ef Z T 0 U(X t ;Q t )d t +g(X T ;M T )g; 2A c ad (q); (4.1.7) J 1 () , Ef X 0t<T D(X t ;Q t ; t ) + Z T 0 U(X t ;Q t )d t +g(X T ;M T )g; (4.1.8) and dene the corresponding value functions as V 0 0 = inf 2A c ad (q) J 0 (); V 1 0 = inf 2A ad (q) J 1 (): (4.1.9) SinceA c ad (q)A ad (q) and C(X;Q; ) D(X;Q; ), one has V 0 0 V 0 V 1 0 . We can see that these inequalities are actually equalities, that is, V 0 0 =V 0 =V 1 0 , with similar idea of proof as Theorem 4.2 in [41]. For a dynamic version of the value function V , let (t;x;k;q;P )2 [0;T ] GP 2 (R), and let X t;x and Q t;q;; be the solution of (4.1.1) on [t;T ] with X t = x a.s., and that of (4.1.3) on [t;T ] with Q t;q; t =q and strategy , respectively. Denote A(t;k):=f2R + : is non-decreasing, c agl ad,F-predictable, t =k; T M;d t ?dK t g A ad (t;k;q) :=f2A(t;k) :Q t;q;; s 0; s2 [t;T ]; P-a.s.g (4.1.10) A c ad (t;k;q) :=f2A ad (t;k;q) : is continuous,P-a.s.g: We can dene the dynamic version of value function V with two equivalent expressions: V (t;x;k;q;P ) := inf 2A c ad (t;k;q) J 0 (t;x;k;q;P ;) = inf 2A ad (t;k;q) J 1 (t;x;k;q;P ;); (4.1.11) where J 0 (t;x;k;q;P ;) :=E h Z T t U(X t;x s ;Q t;q;; s )d c s +g(X t;x T ;M T ) i ; J 1 (t;x;k;q;P ;) :=E h X ts<T D(X t;x s ;Q t;q;; s ; s ) + Z T t U(X t;x s ;Q t;q;; s )d c s +g(X t;x T ;M T ) i : 53 2 Dynamic programming principle Now, we will investigate some properties of value function V and establish dynamic pro- gramming principle. For now, let us assume that Assumptions 3.3.1, 4.1.1, and 4.1.3 hold. To this end, let us start with spatial regularity of V . Proposition 4.2.1. For each starting timet2 [0;T ] and (x;k;q;P )2 GP 2 (R), the value function V (t;x;k;q;P ) is non-decreasing in x and q, non-increasing in k, and uniformly Lipschitz in (x;k;q;P ). 
Proof Recall that U(x;y) and g(x;y) are Lipschitz continuous in (x;y) from Assump- tions. For properties of x, assume x 1 < x 2 . Then, by the comparison of SDEs, one can easily see that X t;x 1 s X t;x 2 s for all s2 [t;T ];P-a.s. Lipschitz property for x can be proved using assumptions on U and g in x variable. For properties ofk, let 0k 1 <k 2 M. For any2A c ad (t;k 1 ;q), consider the strategy ~ s := [k 2 +( s k 1 )]^M;s2 [t;T ]. Then, ~ 2A c ad (t;k 2 ;q), andV (;;k 1 ;;)V (;;k 2 ;;). For Lipschitz continuity in k, let 0 := (k 2 k 1 ) for any 2A c ad (t;k 2 ;q). Then, 0 2A c ad (t;k 1 ;q), and by properties ofQ andg, one can easily show the Lipschitz property of V in k. Since U is non-increasing and Lipschitz continuous in q, so is V . Now, for the Lip- schitz property in P , let ; 2 L 2 (F t ;R). Let us write only P variables in func- tions when other variables are xed, for notational convenience. That is, write V (P ) instead of V (;;;;P ). Since inf(f g) inff infg, we have V (P ) V (P ) inf 2A c ad (t;k;q) J 0 (P ;)J 0 (P ;) : For xed (t;x;k;q)2 [0;T ] G, we have jJ 0 (P ;)J 0 (P ;)j = jE h Z T t U(X t;x s ;Q t;q; s )U(X t;x s ;Q t;q; s ) d s i j CE h sup s2[t;T ] jQ t;q; s Q t;q; s j i (4.2.1) CW 1 (P ;P ); which proves the proposition. 54 We can now have the following simpler form of dynamic programming principle when the time increments are deterministic. Proposition 4.2.2. For any tt 1 <t 2 T and (x;k;q;P )2 GP 2 (R), V (t 1 ;x;k;q;P ) = inf 2A c ad (t 1 ;k;q) E Z t 2 t 1 U(X t 1 ;x s ;Q t 1 ;q; s )d s +V (t 2 ;X t 1 ;x t 2 ; t 2 ;Q t 1 ;q; t 2 ;P ) ; (4.2.2) where =Q t 1 ; t 2 . Proof Denote the right-hand side of equation (4.2.2) as ~ V (t 1 ;x;k;q;P ). At rst, we will show V (t 1 ;x;k;q;P ) ~ V (t 1 ;x;k;q;P ). For any 2A c ad (t 1 ;k;q), denote ~ as the restriction of on [t 2 ;T ]. Recall that bothX andQ have ow property: fors2 [t 2 ;T ] and =Q t 1 ; t 2 ,X t 1 ;x s =X t 2 ;X t 1 ;x t 2 s andQ t 1 ;q;; s =Q t 2 ;Q t 1 ;q; t 2 ;;~ s . So, we can say ~ 2A c ad (t 2 ; t 2 ;q). It implies J 0 (t 1 ;x;k;q;P ;) =E[ Z T t 1 U(X t 1 ;x r ;Q t 1 ;q;; r )d r +g(X t 1 ;x T ;M T )] =E[ Z t 2 t 1 U(X t 1 ;x r ;Q t 1 ;q;; r )d r +E Z T t 2 U(X t 2 ;X t 1 ;x t 2 r ;Q t 2 ;Q t 1 ;q; t 2 ;;~ r )d r +g(X t 2 ;X t 1 ;x t 2 T ;M ~ T )jF t 2 ] =E[ Z t 2 t 1 U(X t 1 ;x r ;Q t 1 ;q;; r )d r +J 0 (t 2 ;X t 1 ;x t 2 ; t 2 ;Q t 1 ;q; t 2 ;P ; ~ )] E[ Z t 2 t 1 U(X t 1 ;x r ;Q t 1 ;q;; r )d r +V (t 2 ;X t 1 ;x t 2 ; t 2 ;Q t 1 ;q; t 2 ;P )]: Now, to prove the opposite inequality, let us rst x " > 0. Consider countable partition fG i g 1 i=1 of GP 2 (R) such that for (x i ;k i ;q i ;P i )2G i , jx i xj"; k i "kk i ; q i qq i +"; W 2 (P ;P i )"; 55 for any (x;k;q;P )2 G i . For each i, choose i 2A c ad (t 2 ;k i ;q i ) such that i is "-optimal. That is,J 0 (t 2 ;x i ;k i ;q i ;P i ; i )V (t 2 ;x i ;k i ;q i ;P i )+". Note that i k i +k2A c ad (t 2 ;k i ;q i ) for any (x;k;q;P )2G i . Then, by Proposition 4.2.1, we have, for a generic constant C, J 0 (t 2 ;x;k;q;P ; i k i +k) J 0 (t 2 ;x i ;k i ;q i ;P ; i ) +C"J 0 (t 2 ;x i ;k i ;q i ;P i ; i ) +C" V (t 2 ;x i ;k i ;q i ;P i ) +C"V (t 2 ;x;k;q;P ) +C": (4.2.3) Now, for any 2A c ad (t 1 ;k;q), dene a new strategy as follows: s := s 1 [t 1 ;t 2 ] (s) + " X i ( i s k i + t 2 )1 G i (X t 1 ;x t 2 ; t 2 ;Q t 1 ;q;; t 2 ;P Q t 1 ; t 2 ) # 1 (t 2 ;T ] (s): Then, we can easily see that is continuous, non-decreasing on [t;T ], t 1 = k, and T i T M on each partition G i . 
Hence, 2A c ad (t 1 ;k;q), and we have, with =Q t 1 ; t 2 , V (t 1 ;x;k;q;P ) J 0 (t 1 ;x;k;q;P ; ) = E[ Z t 2 t 1 U(X t 1 ;x r ;Q t 1 ;q;; r )d r +E Z T t 2 U(X t 2 ;X t 1 ;x t 2 r ;Q t 2 ;Q t 1 ;q; t 2 ;; r )d r +g(X t 2 ;X t 1 ;x t 2 T ;M T )jF t 2 ] = E[ Z t 2 t 1 U(X t 1 ;x r ;Q t 1 ;q;; r )d r +J 0 (t 2 ;X t 1 ;x t 2 ; t 2 ;Q t 1 ;q; t 2 ;P ; )] = E " Z t 2 t 1 U(X t 1 ;x r ;Q t 1 ;q;; r )d r + X i J 0 (t 2 ;X t 1 ;x t 2 ; t 2 ;Q t 1 ;q; t 2 ;P ; i k i + t 2 ) 1 G i (X t 1 ;x t 2 ; t 2 ;Q t 1 ;q;; t 2 ;P Q t 1 ; t 2 ) E[ Z t 2 t 1 U(X t 1 ;x r ;Q t 1 ;q;; r )d r +V (t 2 ;X t 1 ;x t 2 ; t 2 ;Q t 1 ;q; t 2 ;P )] +C": Since " > 0 is arbitrary and 2A c ad (t 1 ;k;q), we can conclude that V (t 1 ;x;k;q;P ) ~ V (t 1 ;x;k;q;P ), proving the opposite inequality and hence the proposition. Next proposition proves the temporal regularity of V . 56 Proposition 4.2.3. For any tt 1 <t 2 T and (x;k;q;P )2 GP 2 (R), jV (t 2 ;x;k;q;P )V (t 1 ;x;k;q;P )jC(1 +jxj) p t 2 t 1 (4.2.4) Proof By previous proposition, Lipschitz continuity ofV , and the fact that the constant process k2A ad (t 1 ;k;q), we have, with notation =Q t 1 ; t 2 , V (t 1 ;x;k;q;P )V (t 2 ;x;k;q;P )E h V (t 2 ;X t 1 ;x t 2 ;k;Q t 1 ;q;;k t 2 ;P ) i V (t 2 ;x;k;q;P ) CE h jX t 1 ;x t 2 xj +jQ t 1 ;q;;k t 2 qj +W 1 (P ;P ) i : Similar estimates as in the proof of Proposition 3.2.6 give the desired result. We are now ready to show a general version of the dynamic programming principle. DenoteT t as allF-stopping times taking values in (t;T ]. Theorem 4.2.4. For any (t;x;k;q;P )2 [0;T ) GP 2 (R) and any 2T t , V (t;x;k;q;P ) = inf 2A c ad (t;k;q) E Z t U(X t;x s ;Q t;q;; s )d s +V (;X t;x ; ;Q t;q;; ;P Q t;; ) : (4.2.5) Proof The idea of proof is similar to that in [41], so we omit it here. 3 HJB equation and viscosity solutions For each t2 [0;T ], we introduce a new ltration: ^ F t :=f ^ F t s g s0 :=fF W s _F N s^t g s0 : (4.3.1) 57 For each (t;x;k;q;P )2 [0;T ] GP 2 (R), 2A ad (t;k;q), 2C([0;T ] GP 2 (R)), andF-stopping time , let us denote L(;;) := E h Z t U(X t;x s ;Q t;q;; s )d c s + X s2[t;) D(X t;x s ;Q t;q;; s ; s ) +(;X t;x ; ;Q t;q;; ;P Q t;; ) i (t;x;k;q;P ): (4.3.2) Now, for xedt2 [0;T ], let t 1 be the rst jump time of Poisson random measureN after timet. Note here that t 1 is not an ^ F t -stopping time. Also, at time t 1 , the processK would be either constant or K > 0. In the latter case, as previous analysis shows Q would be zero due to the re ection. By monotone class argument, there exists an ^ F t -adapted process ~ such that ~ s^ t 1 = s^ t 1 , for allst, any xed (t;k) and any2A c ad (t;k;q),P-a.s. (See Lemma 6.1 in [41].) Recall the denition of the spaceC 1;(2;1) b from Denition 2.2.2. Let us now dene the following operators for any 2C 1;(2;1) b ([0;T ] GP 2 (R)) M[](t;x;k;q;P ) := @ t +b(t;x)@ x + 1 2 2 (t;x)@ xx +a(x;q;P )@ q (t;x;k;q;P ) + Z E (t;x;k;q +j(x;q;z);P )(t;x;k;q;P ) @ q (t;x;k;q;P )j(x;q;z) (q;P )(dz) + ^ E h @ (t;x;k;q;P ; ^ )a(x; ^ ;P ) i + ^ E h Z 1 0 Z E @ (t;x;k;q;P ; ^ +j(x; ^ ;z))@ (t;x;k;q;P ; ^ ) j(x; ^ ;z)( ^ ;P )(dz)d i G[](t;x;k;q;P ) := U(x;q) + (@ k @ q )(t;x;k;q;P ): Then, the following proposition holds. 58 Proposition 4.3.1. Let 2C 1;(2;1) b ([0;T ] GP 2 (R)), and be an ^ F t -stopping time. Denote ^ :=^ t 1 . 
Then, with the operators dened in (4.3.3), we have L(;; ^ ) = E h Z ^ t M[](r;X t;x r ; r ;Q t;q;; r ;P Q t;; r )dr + Z ^ t G[](r;X t;x r ; r ;Q t;q;; r ;P Q t;; r )d c r + X r2[t;^ ) Z r 0 G[](r;X t;x r ; r +y;Q t;q;; r y;P Q t;; r )dy i : (4.3.3) Proof Let2A ad (t;k;q) and let ~ be the ^ F t -adapted version of such that ~ s^ t 1 = s^ t 1 for all s>t. Dene e Q t;; s := R s t d~ r and e Q t;q;; s :=q R s t d~ r . Note that we have (^ ; X t;x ^ ; ^ ;Q t;q;; ^ ;P Q t;; ^ )(t;x;k;q;P ) = (^ ;X t;x ^ ; ^ ; e Q t;q;; ^ ;P e Q t;; ^ )(t;x;k;q;P ) (4.3.4) +[( t 1 ;X t;x t 1 ; ~ t 1 ;Q t;q;; t 1 ;P Q t;; t 1 )(^ ;X t;x ^ ; ~ ^ ; e Q t;q;; ^ ;P e Q t;; ^ )]1 f t 1 <g : By applying It^ o formula for mean-eld SDE, we obtain E h ( t 1 ;X t;x t 1 ; ~ t 1 ;Q t;q;; t 1 ;P Q t;; t 1 )(^ ;X t;x ^ ; ~ ^ ; e Q t;q;; ^ ;P e Q t;; ^ ) 1 f t 1 <g i (4.3.5) = E h Z ^ t a(X t;x r ;Q t;q; r ;P Q t; r )@ q (r;X t;x r ; r ;Q t;q;; r ;P Q t;; r )dr + Z ^ t Z E (r;X t;x r ; r ;Q t;q;; r +j(X t;x r ;Q t;q; r ;z);P Q t; r )(r;X t;x r ; r ;Q t;q;; r ;P Q t;; r ) @ q (r;X t;x r ; r ;Q t;q;; r ;P Q t;; r )j(X t;x r ;Q t;q; r ;z) (Q t;q; r ;P Q t; r )(dz)dr + Z ^ t ^ E h @ (r;X t;x r ; r ;Q t;q; r ;P Q t; r ; ^ Q t; ^ r )a(X t;x r ; ^ Q t; ^ r ;P Q t; r ) + Z 1 0 Z E @ (r;X t;x r ; r ;Q t;q; r ;P Q t; r ; ^ Q t; ^ r +j(X t;x r ; ^ Q t; ^ r ;z)) @ (r;X t;x r ; r ;Q t;q; r ;P Q t; r ; ^ Q t; ^ r ) j(X t;x r ; ^ Q t; ^ r ;z)( ^ Q t; ^ r ;P Q t; r )(dz)d i dr i : Here, we used the fact that for r2 [t; ^ ), ~ Q r =Q r and henceP ~ Qr =P Qr . 59 Also, similarly, we have E h (^ ;X t;x ^ ; ^ ; e Q t;q;; ^ ;P e Q t;; ^ )(t;x;k;q;P ) i = E h Z ^ t [@ t +b(r;X t;x r )@ x + 1 2 2 (r;X t;x r )@ xx ](r;X t;x r ; r ;Q t;q;; r ;P Q t;; r )dr + Z ^ t [@ k @ q ](r;X t;x r ; r ;Q t;q;; r ;P Q t;; r )d c r (4.3.6) + X r2[t;^ ) Z r 0 [@ k @ q ](r;X t;x r ; r +y;Q t;q;; r y;P Q t;; r )dy i : By plugging (4.3.6) and (4.3.5) into (4.3.4), and then considering (4.3.2), we get the desired result. Note that for 2A c ad (t;k;q), we have, by denition and above proposition, L(;;) := E Z t U(X t;x r ;Q t;q;; r )d r +(;X t;x ; ;Q t;q;; ;P Q t;; ) (t;x;k;q;P ) = E n Z t M[](r;X t;x r ; r ;Q t;q;; r ;P Q t;; r )dr (4.3.7) + Z t G[](r;X t;x r ; r ;Q t;q;; r ;P Q t;; r )d r o : If V 2C 1;(2;1) b ([0;T ] GP 2 (R)), then we have 0 = inf 2A c ad (t;k;q) L(V;;) by Theorem 4.2.4, and we obtain, by Proposition 4.3.1 0 = inf 2A c ad (t;k;q) E Z t M[V ](r;X t;x r ; r ;Q t;q; r ;P Q t; r )dr + Z t G[V ](r;X t;x r ; r ;Q t;q; r ;P Q t; r )d r : (4.3.8) If we choose s = t for s2 [t;) and let #t, we haveM[V ](t;x;k;q;P ) 0. If we choose s = t +N(st) for large N and s2 [t;), then the second term dominates the others, and hence we obtainG[V ](t;x;k;q;P ) 0. We combine these two together to have the following quasi-variational inequality (QVI): min (M[V ];G[V ]) (t;x;k;q;P ) = 0; (t;x;k;q;P )2 [0;T )GP 2 (R); (4.3.9) 60 with the terminal-boundary conditions: 8 > > > > > > < > > > > > > : V (T;x;k;q;P ) =g(x;Mk); for (x;k;q;P )2 GP 2 (R) V (t;x;M;q;P ) = 0; for (t;x;q;P )2 [0;T )R + R + P 2 (R) M[V ](t;x;k; 0;P ) = 0; for (t;x;k;P )2 [0;T )R + [0;M)P 2 (R) (4.3.10) In general, there are no smooth solutions to the QVI. So, in the spirit of HJB equation for standard stochastic control, we introduce a notion of viscosity solutions for the QVI (4.3.9) and (4.3.10). 
To this end, we denote, for (t;x;k;q;P )2 [0;T )GP 2 (R), A(t;x;k;q;P ) := n '2C 1;(2;1) b ([0;T ] GP 2 (R)) :V (t;x;k;q;P ) ='(t;x;k;q;P ) o ; A(t;x;k;q;P ) := n '2A(t;x;k;q;P ) :V' has a maximum at (t;x;k;q;P ) o ; A(t;x;k;q;P ) := n '2A(t;x;k;q;P ) :V' has a minimum at (t;x;k;q;P ) o : (4.3.11) Denition 4.3.2. We say that a continuous function V : [0;T ] GP 2 (R)!R + is a viscosity subsolution (resp. supersolution) of the QVI (4.3.9)-(4.3.10) if (i) for (x;k;q;P )2 GP 2 (R), V (T;x;k;q;P )g(x;Mk); (resp.g(x;Mk)); (ii) for (t;x;q;P )2 [0;T )R + R + P 2 (R), V (t;x;M;q;P ) 0; (resp. 0); (iii) for any '2 A(t;x;k;q;P ), one has min(M['];G['])(t;x;k;q;P ) 0; (resp: 0); 61 (iv) for any (t;x;k;P )2 [0;T )R + [0;M)P 2 (R), and for '2 A(t;x;k; 0;P ), M['](t;x;k; 0;P ) 0; (resp: 0): Moreover, V is called a viscosity solution of the QVI (4.3.9)-(4.3.10) if it is both a viscosity sub- and super-solution. Our main goal of this section is to prove that the value functionV is a viscosity solution of the QVI (4.3.9)-(4.3.10). Let us rst examine whether the value function is a viscosity subsolution of the QVI. Theorem 4.3.3. Let Assumptions 3.3.1, 4.1.1, and 4.1.3 hold. Then, the value function V of the optimal execution problem is a viscosity subsolution of the QVI (4.3.9)-(4.3.10). Proof Recall thatV (t;x;k;q;P ) = inf 2A c ad (t;k;q) Ef R T t U(X t;x s ;Q t;q;; s )d s +g(X t;x T ;M T )g. From the formulation of value function, the terminal condition V (T;x;k;q;P ) = g(x;Mk) is clear. For (ii) in Denition 4.3.2, if t = M, then there is no need to purchase, so d s = 0 for s2 [t;T ). Also, g(X t;x T ; 0) = 0. Thus, (ii) holds with equality. In order to check (iii) and (iv) in Denition 4.3.2, it is enough to show that for any (t;x;k;q;P )2 [0;T ) GP 2 (R) and '2 A(t;x;k;q;P ), 8 > > < > > : M['](t;x;k;q;P ) 0; for q 0; G['](t;x;k;q;P ) 0; for q> 0: (4.3.12) In what follows, let " := (t +")^ t 1 for " > 0 small. Also, let C > 0 be a generic constant which varies line by line. For proving the rst inequality in (4.3.12), let k be the constant process. Then, d = 0 and Q t;q; s = q for s2 [t; " ). Since '2C 1;(2;1) b ([0;T ] GP 2 (R)) is such that V' attains maximum, by (4.3.8), we have 62 0L(V;; " )L(';k; " ) =E Z " t M['](r;X t;x r ;k;q;P )dr =E Z t+" t M['](r;X t;x r ;k;q;P )dr E Z t+" " M['](r;X t;x r ;k;q;P )dr 1 f t 1 <t+"g : (4.3.13) Since '2C 1;(2;1) b ([0;T ] GP 2 (R)) and by assumptions, the operatorM['] is bounded. Also we haveP( t 1 <t +")C". So, we divide both sides of (4.3.13) by " and send "! 0 to get the rst inequality in (4.3.12). Now, to prove the second inequality in (4.3.12), let > 0, and set r :=k + (rt)^" " q: Then, 2A c ad (t;k;q), d r = q " dr, and Q t;q; r = (1 rt " )q for r " . Also, note that the operatorG['] is bounded since '2C 1;(2;1) b ([0;T ] GP 2 (R)). Hence, similar to (4.3.13), we obtain the following: 0 E Z " t M['](r;X t;x r ; r ;Q t;q; r ;P Q t; r )dr + Z " t G['](r;X t;x r ; r ;Q t;q; r ;P Q t; r )d r E Z " t M['](r;X t;x r ; r ;Q t;q; r ;P Q t; r )dr + q " Z " t G['](r;X t;x r ; r ; (1 rt " )q;P Q t; r )dr C" +E q " Z t+" t G['](r;X t;x r ; r ; (1 rt " )q;P Q t; r )dr +CP( t 1 <t +") C" +E " q " Z t+" t sup m2[0;1] G['](r;X t;x r ; r ; (1m )q;P Q t; r )dr # : Now, by sending "! 0, we get sup m2[0;1] G['](t;x;k; (1m )q;P ) 0: Since > 0 was arbitrary, we can conclude that for q > 0,G['](t;x;k;q;P ) 0; which proves the second inequality in (4.3.12), and completes the proof. Now, let us examine whether the value function is a viscosity supersolution of the QVI. 
Theorem 4.3.4. Let Assumptions 3.3.1, 4.1.1, and 4.1.3 hold. Then, the value function V of the optimal execution problem is a viscosity supersolution of the QVI (4.3.9)-(4.3.10). 63 Proof We have shown that (i) and (ii) in Denition 4.3.2 hold with equality in the proof of previous theorem. In this proof, we again set " = (t +")^ t 1 . To check (iv), let (t;x;k;P )2 [0;T )R + [0;M)P 2 (R) and '2A(t;x;k;q;P ). Consider any 2A c ad (t;k; 0). Then, s k, d s = 0, and Q t;q; s = 0 for s2 [t; t 1 ), because there could be no trading in that time interval. Similar to the case of (4.3.13), we have 0L(V;k; " )L(';k; " ) =E[ Z " t M['](r;X t;x r ;k; 0;P )dr]; and by dividing both sides by " and sending "! 0, Denition 4.3.2-(iv) can be proved. Now, in order to show Denition 4.3.2-(iii), suppose not. That is, assume a := min(M['];G['])(t;x;k;q;P )> 0; (4.3.14) for some (t;x;k;q;P )2 [0;T )GP 2 (R) and'2A(t;x;k;q;P ). By applying dynamic programming principle with t 1 , one can nd := " 2A c ad (t;k;q) such that V (t;x;k;q;P )E " Z t 1 t U(X t;x r ;Q t;q; r )d r +V ( t 1 ;X t;x t 1 ; t 1 ;Q t;q; t 1 ;P Q t; t 1 ) # " 2 : (4.3.15) Let ~ be the ^ F t -adapted version of as before. For any " > 0, dene the following stopping times: X " := inf n s>t :jX t;x s xj" 1=4 o ^T ; " := inffs>t : ~ s k"g^T ; ^ " := (t +")^ X " ^ " ^ t 1 : We can see that ^ " is an ^ F t -stopping time. Following the rst part of the proof of Proposition 3.2.5 (DPP for deterministic time), one can easily show V (t;x;k;q;P )E Z ^ " t U(X t;x r ;Q t;q; r )d r +V (^ " ;X t;x ^ " ; ^ " ;Q t;q; ^ " ;P Q t; ^ " ) " 2 : 64 The same derivation as (4.3.13) gives us " 2 E Z ^ " t M['](r;X t;x r ; r ;Q t;q; r ;P Q t; r )dr + Z ^ " t G['](r;X t;x r ; r ;Q t;q; r ;P Q t; r )d r : (4.3.16) From (4.3.14), we have, for small "> 0 and for s2 [t; ^ " ), M['](s;X t;x s ; s ;Q t;q; s ;P Q t; s ) a 2 ; G['](s;X t;x s ; s ;Q t;q; s ;P Q t; s ) a 2 : Then, inequality (4.3.16) becomes " 2 a 2 E [ ^ " k + ^ " t] a 2 E h "1 f^ "= " g + [(t +")^ X " ^ t 1 ]t 1 f^ "< " g i a 2 "CE t +" ( X " ^ t 1 ) + (4.3.17) a 2 "C" P( X " <t +") +P( t 1 <t +") : Since P( X " <t +") =P sup s2[t;t+"] jX t;x s xj" 1=4 ! 1 " E " sup s2[t;t+"] jX t;x s xj 4 # C"(1 +jxj 4 ); the inequality (4.3.17) becomes " 2 a 2 "C" 2 (1 +jxj 4 ). But, for small " > 0, it cannot happen, which contradicts to our assumption (4.3.14). Hence,V is a viscosity supersolution of our QVI. The previous two theorems show that the value function V is a viscosity solution of the QVI (4.3.9)-(4.3.10), provided that V 2C 1;(2;1) b ([0;T ] GP 2 (R)). 4 Characterization of optimal strategy In order to construct the optimal strategy and describe characterizations of it, we start with the following, which is one part of verication theorem. 65 Theorem 4.4.1. Let Assumptions 3.3.1, 4.1.1, and 4.1.3 hold. Let v2C 1;(2;1) b ([0;T ] G P 2 (R)) be a classical solution to the QVI (4.3.9)-(4.3.10). Then, v(t;x;k;q;P )V (t;x;k;q;P ); for all (t;x;k;q;P )2 [0;T ] GP 2 (R): Proof Without loss of generality, we may assume that t = 0 and k = 0. Since V (t;x;k;q;P ) = inf 2A ad (t;k;q) J 1 (t;x;k;q;P ;), it is enough to show that v(0;x; 0;q;P )J 1 (0;x; 0;q;P ;); for any 2A ad (0; 0;q): (4.4.1) Let 0 < 1 < 2 < be the jump times of the Poisson random measureN . Denote ^ i := i ^T . Also, let us write X x s := X 0;x s for any s > 0, that is, when the initial time is zero, let us not write it for notational convenience. 
Due to the terminal condition (4.3.10) of the QVI, we have, for any 2A ad (0; 0;q), d := J 1 (0;x; 0;q;P ;)v(0;x; 0;q;P ) = E 2 4 X 0s<T D(X x s ;Q q;; s ; s ) + Z T 0 U(X x s ;Q q;; s )d c s +v(T;X x T ; T ;Q q;; T ;P Q ; T ) 3 5 v(0;x; 0;q;P ) = 1 X i=0 E h X ^ i s<^ i+1 D(X x s ;Q q;; s ; s ) + Z ^ i+1 ^ i U(X x s ;Q q;; s )d c s +v(^ i+1 ;X x ^ i+1 ; ^ i+1 ;Q q;; ^ i+1 ;P Q ; ^ i+1 )v(^ i ;X x ^ i ; ^ i ;Q q;; ^ i ;P Q ; ^ i ) i : Let us write ^ F i := (F W s _F N s^^ i ) s2[0;T ] . Using Proposition 4.3.1 with ^ =T , we obtain d = 1 X i=0 E h Z ^ i+1 ^ i M[v](r;X x r ; r ;Q q;; r ;P Q ; r )dr + Z ^ i+1 ^ i G[v](r;X x r ; r ;Q q;; r ;P Q ; r )d c r + X ^ i r<^ i+1 Z r 0 G[v](r;X x r ; r +y;Q q;; r y;P Q ; r )dy i 0; which completes the proof. 66 In the rest of this section, we look for an optimal strategy 2A ad (0; 0;q) such that (4.4.1) holds with equality, under the assumption that there exists a classical solution v to our QVI (4.3.9)-(4.3.10). Without loss of generality, we will nd an optimal strategy on the interval [0; ^ i ]. Precisely, we want to nd 2A ad (0; 0;q) such that d ;0 := E h Z ^ 1 0 M[v](r;X x r ; r ;Q q;; r ;P Q ; r )dr + Z ^ 1 0 G[v](r;X x r ; r ;Q q;; r ;P Q ; r )d c r + X 0r<^ 1 Z r 0 G[v](r;X x r ; r +y;Q q;; r y;P Q ; r )dy i = 0: (4.4.2) To this end, for any (t;x;k;q;P )2 [0;T ] GP 2 (R), let us denote D(t;x;q) :=fz2 [0;q^M] :G[v](t;x;z;qz;P )> 0g; (t;k;q) := inffz>k :z2D(t;X t ;q)g^q^M: (4.4.3) Then, one can easily see thatD(t;x;q) is an open set in [0;q^M], and isF-progressively measurable, non-decreasing in k, with (t;k;q)k and (t;k;q) =k for k2D(t;X t ;q). Proposition 4.4.2. Let Assumptions 3.3.1, 4.1.1, and 4.1.3 hold. If 2 A ad (0; 0;q) satises (i) Z ^ 1 0 1 D(t;Xt;q) ( t )d c t = 0 and (ii) t+ = (t; t ;q); t2 [0; ^ 1 );P-a.s.; (4.4.4) then d ;0 = 0 holds. Proof We will show the three terms in (4.4.2) are zero. Let us start with the second term. DenoteD c (t;x;q) := [0;q^M]nD(t;x;q). Then, (i) of (4.4.4) implies, for t2 [0; ^ 1 ], d c t = h 1 D(t;Xt;q) ( t ) + 1 D c (t;Xt;q) ( t ) i d c t = 1 D c (t;Xt;q) ( t )d c t : 67 By the denition of D(t;x;q), we have Z ^ 1 0 G[v](s;X s ; s ;Q q;; s ;P Q q; s )d c s = Z ^ 1 0 G[v](s;X s ; s ;Q q;; s ;P Q q; s )1 D c (t;Xt;q) ( t )d c t = 0: (4.4.5) Next, for the third term in (4.4.2), when t > 0, the condition (ii) in (4.4.4) gives t+ = (t; t ;q) = inffz> t :G[v](t;X t ;z;qz;P )> 0g^q^M: It implies that for any z2 ( t ; t+ ), we haveG[v](t;X t ;z;qz;P ) = 0. Denoting z = t +y, Z r 0 G[v](r;X r ; r +y;Q q;; r y;P Q ; r )dy = Z r 0 G[v](r;X r ; r +y;q r y;P )dy = 0; (4.4.6) and hence the third term in (4.4.2) is zero. Now, we claim that the rst term in (4.4.2) is zero. That is, M[v](t;X t ; t ;q t ;P ) = 0 for t2 [0; ^ 1 ] such that t = 0: (4.4.7) Fix t2 [0; ^ 1 ] such that t = 0. If t = q, then Q q;; t = 0, and hence by the boundary condition (4.3.10), we haveM[v](t;X t ; t ; 0;P ) = 0. If t = M, then s = M for all s2 [t;T ], and hence again by (4.3.10), v(s;X s ; s ;Q q;; s ;P Q ; s ) = 0. By looking at (4.3.7), we can see that (4.4.7) holds true. Now, assume t <q^M. Then, we have t = t+ = (t; t ;q) = inffz> t :z2D(t;X t ;q)g: That is, t 2 D(t;X t ;q). Note that from the QVI (4.3.9), we can see thatM[v](t;X t ;z;q z;P ) = 0 wheneverG[v](t;X t ;z;qz;P ) > 0, that is, when z2 D(t;X t ;q). Due to the continuity ofv, we can sayM[v](t;X t ;z;qz;P ) = 0 forz2 D(t;X t ;q), and hence (4.4.7) holds. Combining (4.4.5), (4.4.6), and (4.4.7) proves the proposition. 68 Next, we show that such indeed exists. 
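Before giving the existence argument, it is worth recording how (4.4.3)-(4.4.4) translate into an implementable rule once a (numerically computed) classical solution v of the QVI is at hand: along the segment {(z, q-z) : z >= xi_t} one evaluates G[v] and buys immediately up to the first point at which G[v] becomes strictly positive. The sketch below (in Python) does this on a grid; the function G_of_v stands for a user-supplied approximation of G[v] and is an assumption, not something provided by the model.

import numpy as np

def purchase_target(t, x, k, q, M, G_of_v, n_grid=400):
    """Numerical version of Gamma(t, k, q) in (4.4.3): the smallest z >= k at which
    G[v](t, x, z, q - z, mu) > 0, capped at q ^ M.  Below that point G[v] vanishes,
    so (by the QVI) jumping there incurs no loss; cf. Proposition 4.4.2."""
    z_max = min(q, M)
    if k >= z_max:
        return z_max
    for z in np.linspace(k, z_max, n_grid):
        if G_of_v(t, x, z, q - z) > 0.0:
            return float(z)          # first entry into the region D(t, x, q)
    return z_max

# Rule corresponding to (4.4.4): if purchase_target(...) exceeds the current holding xi_t,
# jump to it at once; while xi_t lies inside D(t, X_t, q) no continuous buying takes place.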
For fixed (x;q;P ), denote

A 0 = { 2A ad (0; 0;q) : Z ^ 1 0 1 D(t;Xt;q) ( t )d c t = 0; t+ (t; t ;q); t2 [0; ^ 1 ) }: (4.4.8)

Then, clearly t 0 2A 0 , and hence A 0 6=;. So, we will construct the optimal strategy from this set. By an argument similar to the proof of Proposition 7.3 in [41], one can see that there exists 2A 0 A ad (0; 0;q) satisfying (4.4.4), and consequently (4.4.2) holds, provided all the conditions of Proposition 4.4.2 hold. We then have the following theorem.

Theorem 4.4.3. Assume all the conditions of Proposition 4.4.2 hold. Then v = V , and there exists an optimal strategy 2A(0; 0;q) such that v(0;x; 0;q;P ) = J 1 (0;x; 0;q;P ; ).

Proof By the previous results, there exists 2A(0; 0;q) such that (4.4.2) holds. By repeating the same arguments for each n, we may extend on [0; ^ n ] such that

n1 X i=0 E h Z ^ i+1 ^ i M[v](r;X x r ; i r ;Q q;; i r ;P Q ; i r )dr + Z ^ i+1 ^ i G[v](r;X x r ; r ;Q q;; r ;P Q ; r )d c r + X ^ i r<^ i+1 Z r 0 G[v](r;X x r ; r +y;Q q;; r y;P Q ; r )dy i = 0:

Following the proof of Proposition 4.4.2, this implies

E h Z ^ n 0 U(X r ;Q q;; r )d( ) c r + X 0r<^ n D(X r ;Q q;; r ; r ) +v(^ n ;X ^ n ; ^ n ;Q q;; ^ n ;P Q ; ^ n ) i =v(0;X 0 ; 0;q;P ):

Sending n→∞ and using the terminal condition (4.3.10), we have

v(0;X 0 ; 0;q;P ) =J 1 (0;X 0 ; 0;q;P ; ) ≥ V (0;X 0 ; 0;q;P ):

Combining this with Proposition 4.4.2 gives the desired result.

Chapter 5

Conclusion

In this dissertation, we described the equilibrium of a sell-side limit order book as a Bertrand-type competition among liquidity providers, from which the equilibrium density function is obtained, and we solved an optimal execution problem for a buyer as a sequential game. More precisely, we considered only a sell-side limit order book, assuming that buyers submit only market orders and sellers place only limit orders. We assumed that sellers decide at which price to place their orders and that the order size is then determined accordingly, as in the Bertrand model. We first studied the N-seller static Bertrand game, in which each seller's goal is to maximize the expected return. Extending the model to continuous time, the N-seller game would lead to a system of N coupled (non)linear PDEs, which is difficult to handle. We therefore considered a Bertrand game with infinitely many sellers in continuous time, where the dynamics of the total available liquidity follows a pure-jump Markovian mean-field SDE. We formulated the problem as a control problem for a representative seller whose goal is to maximize the discounted total expected profit, and we used the dynamic programming principle and the theory of viscosity solutions to derive an HJB equation satisfied by the value function. We also showed that the value function determines the equilibrium density function of a limit order book.

Once the equilibrium of the sell-side limit order book is reached, we solved an optimal execution problem for a buyer whose goal is to purchase a specific number of shares with minimum cost over the fixed time horizon [t;T ]. We again used the dynamic programming principle and viscosity solutions to obtain a quasi-variational inequality satisfied by the value function. Assuming the inequality admits a classical solution, we characterized the optimal strategy.
Appendix

A.1 Proof of Lemma 2.2.1

The idea of the proof is similar to that of [26]; in our case, however, we need to take care of the state-dependent intensity $\lambda(\cdot)$ and the reflection term $K=(K_t)$. Let $\tau_1$ be the first jump time of $K$ after time $t$. Since we will eventually consider the small time interval $[t,s]$, we may assume that $\tau_1$ is the only jump time of $K$ between times $t$ and $s$; as $s$ approaches $t$ from the right, $s\wedge\tau_1$ will also be close to $t$. For the analysis of the $K$ terms in this proof, we can use an argument similar to that in the proof of Proposition 3.2.6.

Notice that $\rho\mapsto f(P_{\xi+\rho(X^{t,\xi}_s-\xi)})$ is differentiable on $[0,1]$, and
$$\partial_\rho f(P_{\xi+\rho(X^{t,\xi}_s-\xi)})=\mathbb{E}\big[\partial_\mu f\big(P_{\xi+\rho(X^{t,\xi}_s-\xi)},\,\xi+\rho(X^{t,\xi}_s-\xi)\big)(X^{t,\xi}_s-\xi)\big],\qquad \rho\in[0,1].$$
Hence, we can write
$$\begin{aligned}
f(P_{X^{t,\xi}_s})-f(P_\xi)&=\int_0^1\partial_\rho f(P_{\xi+\rho(X^{t,\xi}_s-\xi)})\,d\rho=\int_0^1\mathbb{E}\big[\partial_\mu f\big(P_{\xi+\rho(X^{t,\xi}_s-\xi)},\,\xi+\rho(X^{t,\xi}_s-\xi)\big)(X^{t,\xi}_s-\xi)\big]\,d\rho\\
&=\mathbb{E}\big[\partial_\mu f(P_\xi,\xi)(X^{t,\xi}_s-\xi)\big]+\int_0^1\mathbb{E}\big[\big(\partial_\mu f(P_{\xi+\rho(X^{t,\xi}_s-\xi)},\xi)-\partial_\mu f(P_\xi,\xi)\big)(X^{t,\xi}_s-\xi)\big]\,d\rho\\
&\quad+\int_0^1\mathbb{E}\big[\big(\partial_\mu f\big(P_{\xi+\rho(X^{t,\xi}_s-\xi)},\,\xi+\rho(X^{t,\xi}_s-\xi)\big)-\partial_\mu f(P_{\xi+\rho(X^{t,\xi}_s-\xi)},\xi)\big)(X^{t,\xi}_s-\xi)\big]\,d\rho\\
&=:I_1+I_2+I_3.
\end{aligned}$$
Recall $K^{t,\xi}_s=\int_t^s\mathbf{1}_{\{X^{t,\xi}_r=0\}}\,dK_r$, and the notation $\Theta(X^{t,\xi}_r):=(X^{t,\xi}_r,P_{X^{t,\xi}_r})$. For $I_1$ we have
$$I_1=\mathbb{E}\Big[\partial_\mu f(P_\xi,\xi)\Big(\int_t^s b(r,\Theta(X^{t,\xi}_r))\,dr+K^{t,\xi}_s\Big)\Big]=(s-t)\,\mathbb{E}\big[\partial_\mu f(P_\xi,\xi)\,b(t,\xi,P_\xi)\big].$$
Now, for $I_2$: since $\partial_\mu f(P_{\xi+\rho(X^{t,\xi}_s-\xi)},\xi)-\partial_\mu f(P_\xi,\xi)$ is $\mathcal{F}_t$-measurable and $\partial_\mu f$ is Lipschitz continuous, we have, as in [26],
$$|I_2|=\Big|\int_0^1\mathbb{E}\Big[\big(\partial_\mu f(P_{\xi+\rho(X^{t,\xi}_s-\xi)},\xi)-\partial_\mu f(P_\xi,\xi)\big)\Big(\int_t^s b(r,\Theta(X^{t,\xi}_r))\,dr+K^{t,\xi}_s\Big)\Big]\,d\rho\Big|\le C(s-t)\,W_2\big(P_{\xi+\rho(X^{t,\xi}_s-\xi)},P_\xi\big)\le C(s-t)\sqrt{\mathbb{E}[|X^{t,\xi}_s-\xi|^2]}\le C(s-t)^{3/2}.$$
To study $I_3$, let $\eta=\xi+\rho(X^{t,\xi}_s-\xi)$, $\tilde M_s=\int_t^s\int_{A\times\mathbb{R}_+}\beta(t,\xi,P_\xi,z)\mathbf{1}_{[0,\bar\lambda]}(y)\,\tilde N_s(dr\,dz\,dy)$, and $\bar\lambda=\lambda(\xi,P_\xi)$. Then
$$I_3=\int_0^1\mathbb{E}\Big[\Big(\partial_\mu f\big(P_\eta,\,\xi+\rho\big(\sigma(t,\xi,P_\xi)(B_s-B_t)+\tilde M_s\big)\big)-\partial_\mu f(P_\eta,\xi)\Big)(X^{t,\xi}_s-\xi)\Big]\,d\rho+R_1,$$
where
$$R_1=\int_0^1\mathbb{E}\Big[\Big(\partial_\mu f(P_\eta,\eta)-\partial_\mu f\big(P_\eta,\,\xi+\rho\big(\sigma(t,\xi,P_\xi)(B_s-B_t)+\tilde M_s\big)\big)\Big)(X^{t,\xi}_s-\xi)\Big]\,d\rho.$$
By the proof of Theorem 2.1.3 and the Lipschitz continuity of $b$, $\sigma$, $\beta$, $\lambda$, we can see that
$$\begin{aligned}
|R_1|&\le C\int_0^1\mathbb{E}\Big[\Big(\Big|\int_t^s b(r,\Theta(X^{t,\xi}_r))\,dr\Big|+\Big|\int_t^s\big(\sigma(r,\Theta(X^{t,\xi}_r))-\sigma(t,\xi,P_\xi)\big)\,dB_r\Big|\\
&\qquad+\Big|\int_t^s\!\int_{A\times\mathbb{R}_+}\beta(r,\Theta(X^{t,\xi}_r),z)\mathbf{1}_{[0,\lambda(\Theta(X^{t,\xi}_r))]}(y)\,\tilde N_s(dr\,dz\,dy)-\int_t^s\!\int_{A\times\mathbb{R}_+}\beta(t,\xi,P_\xi,z)\mathbf{1}_{[0,\bar\lambda]}(y)\,\tilde N_s(dr\,dz\,dy)\Big|+|K^{t,\xi}_s|\Big)\,|X^{t,\xi}_s-\xi|\Big]\,d\rho\\
&\le C\Big\{(s-t)\,\mathbb{E}|X^{t,\xi}_s-\xi|+\sqrt{\mathbb{E}|X^{t,\xi}_s-\xi|^2}\,\sqrt{\mathbb{E}\Big|\int_t^s\big(\sigma(r,\Theta(X^{t,\xi}_r))-\sigma(t,\xi,P_\xi)\big)\,dB_r\Big|^2}+\sqrt{\mathbb{E}|X^{t,\xi}_s-\xi|^2}\,\sqrt{\mathbb{E}\int_t^s\!\int_{A\times\mathbb{R}_+}|\gamma_r(y,z)|^2\,dr\,\nu_s(dz)\,dy}\Big\}\le C(s-t)^{3/2},
\end{aligned}$$
where $\gamma_r(y,z)=\beta(r,\Theta(X^{t,\xi}_r),z)\mathbf{1}_{[0,\lambda(\Theta(X^{t,\xi}_r))]}(y)-\beta(t,\xi,P_\xi,z)\mathbf{1}_{[0,\bar\lambda]}(y)$.
So, what we have is
$$I_3-R_1=\int_0^1\mathbb{E}\Big[\Big(\partial_\mu f\big(P_\eta,\,\xi+\rho\big(\sigma(t,\xi,P_\xi)(B_s-B_t)+\tilde M_s\big)\big)-\partial_\mu f(P_\eta,\xi)\Big)(X^{t,\xi}_s-\xi)\Big]\,d\rho=I_{3,1}+I_{3,2}+I_{3,3}+I_{3,4},$$
where
$$\begin{aligned}
I_{3,1}&=\int_0^1\mathbb{E}\Big[\Big(\partial_\mu f\big(P_\eta,\,\xi+\rho\big(\sigma(t,\xi,P_\xi)(B_s-B_t)+\tilde M_s\big)\big)-\partial_\mu f(P_\eta,\xi)\Big)\int_t^s b(r,\Theta(X^{t,\xi}_r))\,dr\Big]\,d\rho,\\
I_{3,2}&=\int_0^1\mathbb{E}\Big[\Big(\partial_\mu f\big(P_\eta,\,\xi+\rho\big(\sigma(t,\xi,P_\xi)(B_s-B_t)+\tilde M_s\big)\big)-\partial_\mu f(P_\eta,\xi)\Big)\int_t^s \sigma(r,\Theta(X^{t,\xi}_r))\,dB_r\Big]\,d\rho,\\
I_{3,3}&=\int_0^1\mathbb{E}\Big[\Big(\partial_\mu f\big(P_\eta,\,\xi+\rho\big(\sigma(t,\xi,P_\xi)(B_s-B_t)+\tilde M_s\big)\big)-\partial_\mu f(P_\eta,\xi)\Big)\int_t^s\!\int_{A\times\mathbb{R}_+}\beta(r,\Theta(X^{t,\xi}_r),z)\mathbf{1}_{[0,\lambda(\Theta(X^{t,\xi}_r))]}(y)\,\tilde N_s(dr\,dz\,dy)\Big]\,d\rho,\\
I_{3,4}&=\int_0^1\mathbb{E}\Big[\Big(\partial_\mu f\big(P_\eta,\,\xi+\rho\big(\sigma(t,\xi,P_\xi)(B_s-B_t)+\tilde M_s\big)\big)-\partial_\mu f(P_\eta,\xi)\Big)K^{t,\xi}_s\Big]\,d\rho.
\end{aligned}$$
An argument similar to that in [26] gives
$$|I_{3,1}|\le C(s-t)^{3/2},\qquad I_{3,2}=\tfrac12(s-t)\,\mathbb{E}\big[\partial_y(\partial_\mu f)(P_\xi,\xi)\,\sigma(t,\xi,P_\xi)^2\big]+\varrho_1(s),$$
where $|\varrho_1(s)|\le C(s-t)^{3/2}$. We set $\bar M_s:=\int_t^s\int_{A\times\mathbb{R}_+}\beta(t,\xi,P_\xi,z)\mathbf{1}_{[0,\bar\lambda]}(y)\,\tilde N_s(dr\,dz\,dy)$ for the analysis of the term $I_{3,3}$. By modifying the argument in [26], we have
$$\begin{aligned}
I_{3,3}&=\int_0^1\mathbb{E}\Big[\int_t^s\!\int_A\Big(\partial_\mu f\big(P_\eta,\,\xi+\rho\big(\sigma(t,\xi,P_\xi)(B_s-B_t)+\bar M_r+\beta(t,\xi,P_\xi,z)\big)\big)-\partial_\mu f\big(P_\eta,\,\xi+\rho\big(\sigma(t,\xi,P_\xi)(B_s-B_t)+\bar M_r\big)\big)\Big)\\
&\hspace{6em}\times\beta(r,\Theta(X^{t,\xi}_r),z)\,\min\big(\lambda(\Theta(X^{t,\xi}_r)),\lambda(\xi,P_\xi)\big)\,\nu_s(dz)\,dr\Big]\,d\rho+\varrho_2(s)\\
&=(s-t)\int_0^1\mathbb{E}\Big[\int_A\Big(\partial_\mu f\big(P_\eta,\,\xi+\rho\beta(t,\xi,P_\xi,z)\big)-\partial_\mu f(P_\eta,\xi)\Big)\,\beta(t,\xi,P_\xi,z)\,\lambda(\xi,P_\xi)\,\nu_s(dz)\Big]\,d\rho+R_2+R_3+\varrho_2(s),
\end{aligned}$$
where $|\varrho_2(s)|\le C(s-t)^{3/2}$ and
$$\begin{aligned}
R_2&=\int_0^1\mathbb{E}\Big[\int_t^s\!\int_A\Big\{\partial_\mu f\big(P_\eta,\,\xi+\rho\big(\sigma(t,\xi,P_\xi)(B_s-B_t)+\bar M_r+\beta(t,\xi,P_\xi,z)\big)\big)-\partial_\mu f\big(P_\eta,\,\xi+\rho\big(\sigma(t,\xi,P_\xi)(B_s-B_t)+\bar M_r\big)\big)\\
&\hspace{6em}-\Big(\partial_\mu f\big(P_\eta,\,\xi+\rho\beta(t,\xi,P_\xi,z)\big)-\partial_\mu f(P_\eta,\xi)\Big)\Big\}\,\beta(r,\Theta(X^{t,\xi}_r),z)\,\min\big(\lambda(\Theta(X^{t,\xi}_r)),\lambda(\xi,P_\xi)\big)\,\nu_s(dz)\,dr\Big]\,d\rho,\\
R_3&=\int_0^1\mathbb{E}\Big[\int_t^s\!\int_A\Big\{\Big(\partial_\mu f\big(P_\eta,\,\xi+\rho\beta(t,\xi,P_\xi,z)\big)-\partial_\mu f(P_\eta,\xi)\Big)\,\beta(r,\Theta(X^{t,\xi}_r),z)\,\min\big(\lambda(\Theta(X^{t,\xi}_r)),\lambda(\xi,P_\xi)\big)\\
&\hspace{6em}-\Big(\partial_\mu f\big(P_\eta,\,\xi+\rho\beta(t,\xi,P_\xi,z)\big)-\partial_\mu f(P_\eta,\xi)\Big)\,\beta(t,\xi,P_\xi,z)\,\lambda(\xi,P_\xi)\Big\}\,\nu_s(dz)\,dr\Big]\,d\rho.
\end{aligned}$$
Under Assumption 2.1.1, a slight modification of the proof in [26] gives
$$|R_2|\le C(s-t)^{3/2},\qquad |R_3|\le C(s-t)^{3/2}.$$
For the term $I_{3,4}$, we use the same idea as for the term $I_{3,1}$, together with the analysis in the proof of Proposition 3.2.6, to get $|I_{3,4}|\le C(s-t)^{3/2}$.

Adding up the above estimates, we have
$$\begin{aligned}
f(P_{X^{t,\xi}_s})-f(P_\xi)&=(s-t)\Big\{\mathbb{E}\big[\partial_\mu f(P_\xi,\xi)\,b(t,\xi,P_\xi)\big]+\tfrac12\,\mathbb{E}\big[\partial_y(\partial_\mu f(P_\xi,\xi))\,\sigma(t,\xi,P_\xi)^2\big]\\
&\qquad+\int_0^1\mathbb{E}\Big[\int_A\big\{\partial_\mu f\big(P_\xi,\xi+\rho\beta(t,\xi,P_\xi,z)\big)-\partial_\mu f(P_\xi,\xi)\big\}\,\beta(t,\xi,P_\xi,z)\,\lambda(\xi,P_\xi)\,\nu_s(dz)\Big]\,d\rho\Big\}+\varrho(s),
\end{aligned}$$
where $|\varrho(s)|\le C(s-t)^{3/2}$. Then, we get the desired result (2.2.1) by following the proof in [26].
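Remark. Heuristically, the expansion just obtained already displays the limiting behavior behind (2.2.1): since every remainder term above is of order $(s-t)^{3/2}$, dividing by $s-t$ and letting $s\downarrow t$ suggests, purely as a sketch written in the notation of the last display (no new objects are introduced here), the limit
$$\lim_{s\downarrow t}\frac{f(P_{X^{t,\xi}_s})-f(P_\xi)}{s-t}=\mathbb{E}\big[\partial_\mu f(P_\xi,\xi)\,b(t,\xi,P_\xi)\big]+\tfrac12\,\mathbb{E}\big[\partial_y(\partial_\mu f(P_\xi,\xi))\,\sigma(t,\xi,P_\xi)^2\big]+\int_0^1\mathbb{E}\Big[\int_A\big\{\partial_\mu f\big(P_\xi,\xi+\rho\beta(t,\xi,P_\xi,z)\big)-\partial_\mu f(P_\xi,\xi)\big\}\,\beta(t,\xi,P_\xi,z)\,\lambda(\xi,P_\xi)\,\nu_s(dz)\Big]\,d\rho,$$
in which the last term plays the role of the nonlocal (jump) part of the operator, consistent with the nonlocal integral-PDEs studied in [26]. This is only a formal reading of the expansion; the rigorous statement is (2.2.1).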
Bibliography

[1] Alfonsi, A., Fruth, A., and Schied, A., Constrained portfolio liquidation in a limit order book model. Advances in Mathematics of Finance, Banach Center Publ., 83 (2008), 9-25.
[2] Alfonsi, A., Fruth, A., and Schied, A., Optimal execution strategies in limit order books with general shape functions. Quantitative Finance, 10(2) (2010), 143-157.
[3] Alfonsi, A., and Schied, A., Optimal Trade Execution and Absence of Price Manipulations in Limit Order Book Models. SIAM J. Finan. Math., 1(1) (2010), 490-522.
[4] Alfonsi, A., Schied, A., and Slynko, A., Order book resilience, price manipulation, and the positive portfolio problem. SIAM Journal on Financial Mathematics, 3(1) (2012), 511-533.
[5] Avellaneda, M., and Stoikov, S., High-frequency trading in a limit order book. Quantitative Finance, 8(3) (2008), 217-224.
[6] Basna, R., Hilbert, A., and Kolokoltsov, V.N., An Approximate Nash Equilibrium for Pure Jump Markov Games of Mean-field-type on Continuous State Space. Stochastics 89 (2017), no. 6-7, 967-993.
[7] Bayraktar, E., and Ludkovski, M., Liquidation in limit order books with controlled intensity. Mathematical Finance, 24(4) (2014), pp. 627-650.
[8] Bertrand, J., Théorie mathématique de la richesse sociale. Journal des Savants, 67 (1883), 499-508.
[9] Billingsley, P., Convergence of Probability Measures, second edition. John Wiley (1999).
[10] Buckdahn, R., Li, J., and Peng, S., Nonlinear stochastic differential games involving a major player and a large number of collectively acting minor agents. SIAM J. Control Optim., 52(1) (2014), 451-492.
[11] Buckdahn, R., Li, J., Peng, S., and Rainer, C., Mean-field stochastic differential equations and associated PDEs. Ann. Probab. 45(2) (2017), 824-878.
[12] Buckdahn, R., Li, J., and Ma, J., A Stochastic Maximum Principle for General Mean-field Systems. Appl. Math. Optim., 74(3) (2016), 507-534.
[13] Cardaliaguet, P., Notes on Mean Field Games (from P.-L. Lions' lectures at Collège de France). https://www.ceremade.dauphine.fr/cardalia/ (2013).
[14] Carmona, R., and Lacker, D., A probabilistic weak formulation of mean field games and applications. The Annals of Applied Probability, 25(3) (2015), 1189-1231.
[15] Carmona, R., Delarue, F., and Lachapelle, A., Control of McKean-Vlasov dynamics versus mean field games. Mathematics and Financial Economics, 7(2) (2013), 131-166.
[16] Chan, P., and Sircar, R., Bertrand and Cournot mean field games. Appl. Math. Optim. 71 (2015), no. 3, 533-569.
[17] Cont, R., Stoikov, S., and Talreja, R., A stochastic model for order book dynamics. Operations Research, 58(3) (2010), 549-563.
[18] Cournot, A., Recherches sur les Principes Mathématiques de la Théorie des Richesses. Hachette, Paris (1838). English translation by N. T. Bacon published in Economic Classics, Macmillan, 1897, and reprinted in 1960 by Augustus M. Kelly.
[19] Dupuis, P., and Ishii, H., On Lipschitz continuity of the solution mapping to the Skorokhod problem, with applications. Stochastics and Stochastic Reports, 35 (1991), 31-62.
[20] Edgeworth, F.Y., The pure theory of monopoly. Reprinted in Collected Papers relating to Political Economy, Vol. 1, Chapter E, 111-142. Macmillan (1925).
[21] Friedman, J., Oligopoly Theory. Cambridge University Press, 1983.
[22] Gatheral, J., Schied, A., and Slynko, A., Exponential resilience and decay of market impact. Econophysics of Order-driven Markets, Springer (2011), pp. 225-236.
[23] Gayduk, R., and Nadtochiy, S., Liquidity Effects of Trading Frequency. Mathematical Finance, 2017;00:1-38. https://doi.org/10.1111/mafi.12157.
[24] Guéant, O., Lehalle, C.-A., and Fernandez-Tapia, J., Optimal portfolio liquidation with limit orders. SIAM Journal on Financial Mathematics, 13(1) (2012), 740-764.
[25] Guéant, O., Lasry, J.-M., and Lions, P.-L., Mean field games and applications. Paris-Princeton Lectures on Mathematical Finance 2010, Lecture Notes in Mathematics, vol. 2003, Springer Berlin / Heidelberg (2011), pp. 205-266.
[26] Hao, T., and Li, J., Mean-field SDEs with jumps and nonlocal integral-PDEs. Nonlinear Differential Equations and Applications NoDEA, 23(2) (2016).
[27] Harris, C., Howison, S., and Sircar, R., Games with Exhaustible Resources. SIAM J. Applied Mathematics 70(7) (2010), 2556-2581.
[28] Hotelling, H., The economics of exhaustible resources. The Journal of Political Economy, 39(2):137-175 (1931).
[29] Huang, M., Caines, P.E., and Malhamé, R.P., Individual and mass behaviour in large population stochastic wireless power control problems: centralized and Nash equilibrium solutions. Proceedings of the 42nd IEEE Conference on Decision and Control, volume 1, pages 98-103. IEEE (2003).
[30] Huang, M., Malhamé, R.P., and Caines, P.E., Large population stochastic dynamic games: closed-loop McKean-Vlasov systems and the Nash certainty equivalence principle. Communications in Information and Systems, 6(3):221-252 (2006).
[31] Ikeda, N., and Watanabe, S., Stochastic Differential Equations and Diffusion Processes. North-Holland Pub. Co. (1981).
[32] Kingman, J. F. C., Poisson processes. Oxford University Press (1993).
[33] Lachapelle, A., Lasry, J.-M., Lehalle, C.-A., and Lions, P.-L., Efficiency of the price formation process in presence of high frequency participants: a mean field game analysis. Mathematics and Financial Economics, Vol. 10, Issue 3 (2016), 223-262.
[34] Ledvina, A., and Sircar, R., Bertrand and Cournot Competition Under Asymmetric Costs: Number of Active Firms in Equilibrium. Available at SSRN: https://ssrn.com/abstract=1692957 (2011).
[35] Ledvina, A., and Sircar, R., Dynamic Bertrand Oligopoly. Applied Mathematics & Optimization, Vol. 63, Issue 1 (2011), 11-44.
[36] Ledvina, A., and Sircar, R., Oligopoly games under asymmetric costs and an application to energy production. Math. Financ. Econ. 6 (2012), no. 4, 261-293.
[37] Lasry, J.-M., and Lions, P.-L., Mean field games. Japanese Journal of Mathematics, 2 (2007), 229-260.
[38] Lokka, A., Optimal liquidation in a limit order book for a risk averse investor. Mathematical Finance, Vol. 24, Issue 4 (2014), pp. 696-727.
[39] Ma, J., Discontinuous reflection and a class of singular stochastic control problems for diffusions. Stochastics and Stochastic Reports, Vol. 44, No. 3-4 (1993), 225-252.
[40] Ma, J., and Noh, E., Equilibrium Model of Limit Order Books and Optimal Execution Problems. In preparation.
[41] Ma, J., Wang, X., and Zhang, J., Dynamic equilibrium limit order book model and optimal execution problem. Mathematical Control and Related Fields, Vol. 5, No. 3 (2015).
[42] Ma, J., Yong, J., and Zhao, Y., General Forward-Backward Stochastic Differential Equations of Markovian Type. J. Syst. Sci. Complex. 23, no. 3 (2010), 546-571.
[43] Obizhaeva, A., and Wang, J., Optimal trading strategy and supply/demand dynamics. Journal of Financial Markets, 71, No. 4 (2013), 1027-1063.
[44] Potters, M., and Bouchaud, J.-P., More statistical properties of order books and price impact. Physica A 324, No. 1-2 (2003), 133-140.
[45] Protter, P., Point process differentials with evolving intensities. Nonlinear Stochastic Problems, Vol. 104 of the series NATO ASI Series (1983), 133-140.
[46] Protter, P., Stochastic Integrals and Stochastic Differential Equations. Springer (1990).
[47] Predoiu, S., Shaikhet, G., and Shreve, S., Optimal execution of a general one-sided limit-order book. SIAM Journal on Financial Mathematics, 2 (2011), 183-212.
[48] Rosu, I., A dynamic model of the limit order book. The Review of Financial Studies, 22 (2009), 4601-4641.
[49] Stroock, D., and Varadhan, S.R.S., Multidimensional Diffusion Processes. Springer (1997).
[50] Vives, X., Oligopoly pricing: old ideas and new tools. MIT Press, Cambridge, MA (1999).