DYNAMIC NETWORK MODEL FOR SYSTEMIC RISK

by

Pengbin Feng

A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(APPLIED MATHEMATICS)

August 2021

Copyright 2021 Pengbin Feng

Dedication

To My Family

Acknowledgments

It would not have been possible for me to complete my PhD thesis without the help of others, so I would like to take this opportunity to thank all those people to whom I am indebted on this journey.

I owe my deepest gratitude to my advisors, Prof. Jin Ma and Prof. Jianfeng Zhang, for their patient guidance on my research and genuine care for my life. Not only did my supervisors shape my mathematical perspective, they also helped me through some of the most difficult periods in my life. During our conversations, Prof. Ma and Prof. Zhang often impressed me with the insights of mathematicians and touched me with the sincerity of generous people. They treated me like family and offered me unreserved support when I experienced troubles and confusion about the future.

I also greatly appreciate Prof. Jinchi Lv for serving on my PhD defense committee; Prof. Jinchi Lv, Remigijus Mikulevicius, F. Zapatero, and Sergey Lototsky for serving on my qualifying committee; and Prof. Peter Baxendale, Larry Goldstein, Stanislav Minsker, Igor Kukavica, and Sergey Lototsky for sharing their knowledge with me through valuable courses.

I am also indebted to my seniors Cong Wu, Weisheng Xie, Rentao Sun, Xiaojin Xing, Eunjung Noh, and Jie Ruan, and my classmates Jiyeon Park, Jiaowen Yang, Zimu Zhu, Man Luo, Wenqian Wu, and Ying Tan, for their warm-hearted help and the joy we shared along the way, for which I have not yet had the chance to say thanks.

Lastly, my deepest love goes to my parents, Shangping Feng and Genhua Sun, who persevered throughout their lives despite their limited educational privilege so that I can see a bigger world and live a better life.
Table of Contents

Dedication
Acknowledgments
Abstract
Chapter 1: Introduction
  1.1 The problem background
  1.2 Literature review
    1.2.1 General network topology
    1.2.2 Empirical study
    1.2.3 Static model
    1.2.4 Dynamic model
    1.2.5 Particle system and random network
Chapter 2: Preliminary
  2.1 Problem formulation
    2.1.1 Simplified static model
    2.1.2 Dynamic model
  2.2 Intuition for decomposition model
  2.3 Assumption and definition
  2.4 Thesis organization
Chapter 3: Rank 1 case: mean field model
  3.1 Introduction
  3.2 The model
  3.3 Lipschitz loss function
  3.4 Indicator loss function
  3.5 Example
Chapter 4: Rank K case: beyond mean field
  4.1 Introduction
  4.2 The model
  4.3 Lipschitz loss function
  4.4 Indicator loss function
  4.5 Example
Chapter 5: Passing to limit
  5.1 Introduction
  5.2 The limiting model
  5.3 Lipschitz loss function
  5.4 Indicator loss function
Chapter 6: Convergence result
  6.1 Introduction
  6.2 Convergence for Lipschitz loss function
  6.3 Convergence for indicator loss function
Chapter 7: The randomized model
  7.1 Intuition
  7.2 Non-i.i.d. scale-free random graph
  7.3 Convergence result
  7.4 Example
Bibliography

Abstract

In this project we study a new model for large-scale interactive dynamical systems, with particular application to contagion effects in banking systemic risk. Our model combines the advantages of traditional static and dynamic models: it captures rich structural information from the static model while also covering dynamic network models, including the mean field case. The model has potential applications to many fields, including systemic risk in banking, derivatives pricing for correlated assets under fire sales, etc. Under some weak assumptions, we provide a decomposition approach to study the model. In the finite-dimensional decomposition case, the model can be considered a "generalized mean field" model, which is studied in Chapters 3 and 4. We characterize the asymptotic behavior of the system using transport-type partial differential equations. The infinite decomposition case is studied in Chapter 5, and it is a more general model than the mean field model.
Under some weak technical assumptions, the asymptotic behavior of the dynamical system is given by a probabilistic-type ordinary differential equation system. This is a brand new model as far as we know. We then study the convergence properties of the infinite decomposition case in Chapter 6. In Chapter 7, in contrast to the deterministic-coefficient model, we study a kind of interactive dynamical system with randomized coefficients. We focus on characterizing the structure of the system for a given distribution of the coefficients. For a special Bernoulli distribution, we can show that the random system converges to the corresponding deterministic system.

Chapter 1: Introduction

1.1 The problem background

Since 2007-2008, systemic risk has received strong attention from practitioners, regulators, and researchers. For a short overview of the causes of the 2007-2008 financial crisis, see Wiki [1]; for a complete investigation, refer to the Financial Crisis Inquiry Commission (2011) [2] and Jarsulic, Marc (2012) [3]. In summary, the financial crisis was triggered by a complex interplay of factors such as: (1) easier access to loans; (2) overvaluation of bundled subprime mortgages; (3) high-risk, complex financial products; (4) questionable trading practices; (5) a lack of adequate capital holdings at banks and insurance companies; (6) widespread failures in financial regulation and supervision; (7) high leverage and highly risky investments at financial institutions, with reduced resilience in case of losses; etc.

One of the most important and interesting problems is to understand the contagion effects in the banking system after an initial default. Financial institutions, e.g., central banks, investment banks, and firms, hold each other's shares, debts, and other obligations. Such interdependencies can lead to cascading defaults and failures. Interconnections among financial institutions create potential channels for contagion and amplification of shocks to the financial system.
For example, both MBS and CDOs were purchased by corporate and institutional investors globally. Derivatives such as credit default swaps increased the linkage between large financial institutions. The accumulation and subsequent high default rate of these mortgages led to rising mortgage delinquencies at commercial banks and to a rapid devaluation of financial instruments (mortgage-backed securities including bundled loan portfolios, derivatives, and credit default swaps). Banks heavily invested in these assets began to experience a liquidity crisis. The consequence was that it became much more difficult to borrow money. The resulting decrease in buyers caused housing prices to plummet, leading to further devaluation of financial instruments. Some banks' defaults may trigger more bank defaults, the contagion continues, and finally we have systemic risk in the banking sector.

1.2 Literature review

There has been much activity on the academic side studying contagion in financial institutions; see e.g. Fouque-Langsam (2013) [12] and the IPAM workshop (2015). Roughly speaking, consider a large system with N banks (or firms) whose default risk we are concerned about. They are interconnected, and thus one default may have a negative impact on the other banks in the system and trigger more defaults. This naturally forms a network, with the banks as the nodes and their mutual impact as the edges. One of the main goals is to study the contagion effect in this network.

There are many related works in the literature. First, complex networks have been an important and popular research field for many decades. For financial networks in particular, finance and economics researchers have studied them empirically for decades. In recent years, people have also worked on more detailed mathematical models that characterize network contagion and resilience. There are two types of contagion models in the literature: the static model and the dynamic model.
The static model focuses more on analyzing contagion effects from bank to bank at a fixed time point and tries to understand the stability of the current network structure. The dynamic model focuses more on the evolution of the network and treats it as a system changing over time. There are pros and cons to both, and we will discuss the details later. If we look at the network system from a physical viewpoint, the network model has a natural connection to the interactive evolution of particle systems. From a probabilistic view, we can also regard the current network as a realization of a random system. We will discuss these in later chapters.

1.2.1 General network topology

Many problems can be formulated as networks, e.g., social networks, particle systems, financial networks, biological networks, etc. In practical problems, most network systems have very large size (on the order of $10^4$ to $10^{11}$ nodes) and highly complex structures. It is very challenging to give an exact mathematical description of the dynamical interactions of the system. For a detailed introduction to network science, see the handbook [13] by Barabási (2016).

There has been much recent research progress in network and graph theory. Schwartz-Cohen-Avraham-Barabási (2002) [15], Detering-Brandis (2015) [16], and Reuven-Daniel-Avraham (2002) [18] studied percolation properties of directed scale-free networks and inhomogeneous random graphs; they characterized the stability and phase transition phenomena determined by the values of the degree exponents. From a statistical mechanics point of view, Newman-Barabási-Watts (2006) [19], Newman (2003) [20], and Albert-Barabási (2002) [21] carefully studied the general theory of random graphs, small-world networks, scale-free networks and the corresponding percolation theory, the evolution of networks, and attack tolerance. Steele Corre (2015) [22] introduced the concept of graphons in networks, which are the limit objects of Cauchy sequences of graphs.
Graphons are a very useful tool for studying the limiting network structure in an analytical and functional language. Konstantin-Boguñá-Bianconi-Krioukov (2015) [14] introduced geometric preferential attachment (GPA) theory to explain the generative mechanism in growing networks that have scale-free degree distributions, strong clustering, and community structure. Nicole-David (2017) [23] made a very important discovery through empirical evidence: most real-world graphs have a statistically significant power-law distribution with a cutoff in the singular values of the adjacency matrix and the eigenvalues of the Laplacian matrix, in addition to the commonly conjectured power law in the degrees.

1.2.2 Empirical study

There is much empirical finance and economics research on financial networks. For a systematic overview of the recent advances in the structure of interbank networks and of how network characteristics affect contagion processes and systemic risk in the banking system, see Hüser, Anne-Caroline (2015) [7]. Some works studied information about individual institutions, like asset size, leverage, and liabilities; see Glasserman-Peyton (2015) [8]. Other works studied the systemically important financial institutions in a network. For example, Battiston-Giovanni-Luigi-Pierobon (2015) [6] studied the interaction of banks' balance sheets through bilateral exposures. Also, see Agam-Molly-James-Matteo (2013) [5] for a study of the critical degree as a function of four financial parameters: banking leverage, interbank exposure, return on the investment opportunity, and interbank lending rate. Matthew-Ben-Jack (2014) [9] showed that diversification (more counterparties per organization) connects the network initially, permitting cascades to travel, but as it increases further, organizations are better insured against
one another's failures. Daron-Asuman-Alireza (2015) [10] studied the phase transition property of the financial network: as long as the magnitude of the negative shocks affecting financial institutions is sufficiently small, a more densely connected financial network enhances financial stability. However, beyond a certain point, dense interconnections serve as a mechanism for the propagation of shocks, leading to a more fragile financial system.

1.2.3 Static model

One of the popular approaches to studying the network is the static model, which analyzes the network contagion step by step, from one level of contagion nodes to the next, but at a fixed time point; the evolution of the network is not considered. More specifically, the main goals are to compute explicitly the systemic damage caused by some initial local shock event, to model the contagion process and calculate the final survival proportion, and to derive a characterization of resilient and non-resilient financial systems. See e.g. Cont-Moussa-Santos (2010) [24], Amini-Cont-Minca (2016) [25][26], Detering-Meyer-Brandis-Panagiotou-Ritter (2016) [27], [28], [29], Detering-Meyer-Brandis-Panagiotou (2015) [16], Elliott-Golub-Jackson (2014) [9], and Detering-Konstantinos (2018) [30].

The static model is able to account for particular network structure, like the scale-free property (refer to Cont-Moussa-Santos (2010) [24]), or to study special models like block models (see Detering-Konstantinos (2018) [30]). While this captures a richer structure of networks and thus allows better calibration to real data, the contagion effect becomes quite complicated, and the corresponding dynamic model becomes extremely difficult. As far as we know, almost all works taking this approach in the literature study only (one-period) static models.

1.2.4 Dynamic model

Another approach is the dynamic model, where the main tool is mean field theory. This approach assumes the banks affect each other in a uniform way, so that the correlation is only through their aggregated behavior.
There is much academic research in this field. For a classical reference, see e.g. Carmona-Fouque-Sun (2013) [32] [43]. Those papers assume that the banks form a system of diffusion processes coupled through their drifts in such a way that the stability of the system depends on the rate of interbank borrowing and lending. The mean field game in the limit of a large number of banks with a common noise was also studied. Garnier-Papanicolaou-Yang (2013) [31] and Coppini-Helge-Giambattista (2020) [36] further studied the large deviation theory for the mean field system. Carmona-Fouque-Mousavi-Sun (2018) [33] considered clearing debt obligations: they described the banks as coupled diffusion processes driven by controls with delay in their drifts and discussed how the delay affects liquidity and systemic risk. Cvitanic-Ma-Zhang (2012) [34] considered a model of correlated defaults in which the default times of multiple entities depend on both common current factors and past defaults in the market.

Some dynamic models other than mean field models have also been studied. For example, Detering-Fouque (2020) [35] studied a chain network, and Coppini-Helge-Giambattista (2020) [36] studied interacting diffusion processes on an Erdős–Rényi network, etc.

Despite the technical differences, most of the papers focus on mean field theory in the spirit of Erdős–Rényi (1959) random networks. However, real networks are typically scale-free and complex, as in the Barabási-Albert model; see Barabási (2016) [13], [19], [17], [21] and Newman-Mark (2003) [20]. Thus the mean field approach is an oversimplification in many applications to systemic risk.

1.2.5 Particle system and random network

Particle systems are an important and long-standing research field in statistical physics, but recently there has been increasing interest in applying particle system theory to many other fields, like systemic risk.
Nadtochiy-Shkolnikov [43] proposed an interacting particle system to model the evolution of a system of banks with mutual exposures. They characterized the large-population limit of the system and analyzed the jump times, the regularity between jumps, and the local uniqueness of the limiting process.

The Kuramoto model is the most successful approach to describing how coherent behavior emerges in complex systems. For a review of the field of synchronization in networks of Kuramoto oscillators, refer to Acebrón-Luis-Conrad (2016) [50] and Renato-Spigler (2005) [41]. For a classical paper on Vlasov dynamics and particle systems, see Braun-Hepp (1977) [40]. Golse-Clément-Valeria (2013) [38] provided a good survey of detailed properties of the limiting equation. There are many papers discussing different oscillator models; see Seung-Jeongho-Peter (2019) [37], Bertini-Giambattista-Khashayar (2010) [39], and Park-Poyato-Soler (2018) [42], etc. For the connection between particle systems and the mean field limit, see Jabin-Emmanuel-Wang (2017) [44], Golse (2016) [46], etc.

Interacting particle systems on random graphs are an extension of the classical particle system model. When the random graph is fully connected with probability 1 (e.g., a deterministic network in which all edges are present with weight 1), this reduces to the traditional mean-field-type particle system model. For a general random graph, the limiting particle system may converge to a deterministic system, either a mean field model or another type, depending on the assumptions on the graphs. For details, refer to Delattre-Giambattista-Eric (2016) [47], Wu (2019) [48], and Coppini-Helge (2020) [36]. For a complete survey of random graphs, large-scale networks, and random matrices, refer to Lovász (2012) [45], Sean-Vu-Wang (2016) [53], and Tao (2012) [54].
Chapter 2: Preliminary

2.1 Problem formulation

2.1.1 Simplified static model

Roughly speaking, consider a large system with $N$ banks (or firms) whose default risk we are concerned about. They are interconnected, and thus one default may have a negative impact on the other banks in the system and trigger more defaults. This naturally forms a network, with the banks as the nodes and their mutual impact as the edges. We are going to build a dynamic default model. The idea is partially inspired by the static contagion model of Cont-Moussa-Santos (2010) [25]. We first recall the main default cascade idea from [25], using their notation in the following.

Static contagion model: Assume there are $N$ banks in the system, with $N$ large enough. The exposure matrix is given by $E \in \mathbb{R}^{N\times N}$, where the $ij$-th entry $e_{i,j} > 0$ represents the exposure of institution $i$ to institution $j$ (the amount of money $j$ owes $i$). Define the interbank assets as $A(i) := \sum_j e_{i,j}$, the interbank liabilities as $\sum_j e_{j,i}$, the other assets as $x(i)$, and the deposits as $D(i)$; the net worth of the bank, given by its capital $c(i)$, represents its capacity for absorbing losses while remaining solvent. Here $c(i) = x(i) + \sum_j e_{i,j} - \sum_j e_{j,i} - D(i)$ is the total capital of bank $i$. The ratio $\gamma(i) := c(i)/A(i)$ is defined as the "capital ratio" of institution $i$. An institution is insolvent if its net worth is negative or zero, in which case we set $\gamma(i) = 0$.

In this network, the in-degree of a node $i$ is given by $d^-(i) := \#\{j \in V \mid e_{j,i} > 0\}$, which represents the number of nodes exposed to $i$, while its out-degree $d^+(i) := \#\{j \in V \mid e_{i,j} > 0\}$ represents the number of institutions $i$ is exposed to. The default cascade is triggered by the nodes in the initial default set, defined as

$$D_0(e,\gamma) = \{i \in V \mid \gamma(i) = 0\}.$$

Starting from the initial default set, we define a contagion process. Denoting by $R(j)$ the recovery rate on the assets of $j$ at default, the default of $j$ induces a loss equal to $(1-R(j))e_{i,j}$ for its counterparty $i$.
If this loss exceeds the capital of $i$, then $i$ becomes insolvent in turn. This procedure may be iterated to define the default cascade initiated by a set of initial defaults.

Default cascade:

$$D_k(e,\gamma) = \Big\{i \in V \;\Big|\; \gamma(i)A(i) \le \sum_{j \in D_{k-1}(e,\gamma)} (1-R(j))\,e_{i,j}\Big\}$$

Thus $D_k(e,\gamma)$ represents the set of institutions whose capital is insufficient to absorb the losses due to defaults of institutions in $D_{k-1}(e,\gamma)$. In a network of size $n$, the cascade ends after at most $n-1$ iterations. $D_{n-1}(e,\gamma)$ represents the set of all nodes which become insolvent starting from the initial set of defaults. We can then define the fraction of defaults

$$\alpha_n(e,\gamma) := \frac{|D_{n-1}(e,\gamma)|}{n}$$

The paper [25] studied the asymptotic fraction of defaults as $N\to\infty$; for the technical details, please refer to [25]. Their approach is mathematically elegant and fits the data well, but it is technically quite complicated, and it is hard to build a general dynamic model with time evolution on top of it. We will take the default mechanisms of their paper into account and build a new dynamic model in the following. First, we simplify the static model.

The N-step iteration static model: We first introduce some notation used in this section; a complete reference is provided at the end of this chapter. We denote the number of total banks by $N$; the amount of money $j$ owes $i$ by $e_{i,j}$; the interbank assets for bank $i$ by $c^+_i = \sum_j e_{i,j}$; the interbank debts for bank $i$ by $c^-_i = \sum_j e_{j,i}$; the exposure matrix by $E = (e_{i,j})_{1\le i,j\le N}$; the total interbank liabilities for $i$ by $R_i := c^+_i - c^-_i$; the initial cash flow of bank $i$ by $X^i_0$; the total cash flow of bank $i$ at time $t$ by $X^i_t$; the fixed interest rate by $r$; and the contagion strength by $\alpha$. We say that bank $i$ defaults at time $t$ if $X^i_t \le 0$.

Remark 2.1.1. We always assume $e_{i,j} \ge 0$; here $e_{i,j} > 0$ means bank $j$ owes money to $i$, and $e_{j,i} > 0$ means bank $i$ owes money to $j$; we also assume $e_{i,i} = 0$. Here $\alpha$ is the contagion strength index: $\alpha e_{i,j}$ is the loss of bank $i$ during a unit time if bank $j$ defaults at time $t$.
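As a sanity check of the cascade mechanism above, here is a minimal simulation sketch. The function name `default_cascade` and the dense-matrix representation are illustrative choices, not from the thesis:

```python
import numpy as np

def default_cascade(E, gamma, R):
    """Iterate the default cascade D_k for a static exposure network.

    E[i, j] : exposure of institution i to institution j (e_{i,j})
    gamma   : capital ratios; gamma[i] <= 0 marks an initially insolvent bank
    R       : recovery rates on the assets of each bank at default
    Returns the final default set and the fraction of defaults.
    """
    n = E.shape[0]
    A = E.sum(axis=1)                # interbank assets A(i) = sum_j e_{i,j}
    defaulted = gamma <= 0           # initial default set D_0
    for _ in range(n - 1):           # cascade ends after at most n-1 rounds
        # loss of i from defaulted counterparties: sum_{j in D} (1 - R_j) e_{i,j}
        loss = E @ ((1 - R) * defaulted)
        new_defaulted = defaulted | (gamma * A <= loss)
        if (new_defaulted == defaulted).all():
            break                    # cascade has stabilized
        defaulted = new_defaulted
    return defaulted, defaulted.mean()
```

For instance, in a three-bank chain where the last bank starts insolvent and recovery rates are zero, the cascade propagates back through the chain and every bank defaults.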
At $t_1$, the initial accounting cash flow is

$$X^{i,(1)}_{t_1} := X^i_{t_1} + \Delta t \sum_{j\ne i} \big[r e_{i,j} - r e_{j,i}\big] \quad (2.1)$$

The cash flow after the initial defaults is

$$X^{i,(1)}_{t_1} := X^{i,(1)}_{t_1} - \alpha \Delta t \sum_{j\ne i} e_{i,j}\, \mathbf{1}_{\{X^{j,(1)}_{t_1} < 0\}} \quad (2.2)$$

The cash flow calculated after the second round of defaults is

$$X^{i,(2)}_{t_1} := \mathbf{1}_{\{X^{i,(1)}_{t_1} > 0\}}\Big[X^{i,(1)}_{t_1} - \alpha \Delta t \sum_{j\ne i} e_{i,j}\, \mathbf{1}_{\{X^{j,(1)}_{t_1} < 0\}}\Big] + \mathbf{1}_{\{X^{i,(1)}_{t_1} \le 0\}}\, X^{i,(1)}_{t_1} \quad (2.3)$$

We denote the self-dynamics for bank $i$ by $X^i_{t_1} = e^{\eta^i_t} X^i_0$, which generates the initial defaults, where $\eta^i_t$ is a random process. We can repeat the contagion for $N$ steps, and the N-step iteration formula is:

$$X^i_{t_1} = e^{\eta^i_t} X^i_0, \qquad X^{i,(0)}_{t_1} = X^i_{t_1},$$
$$X^{i,(k+1)}_{t_1} := X^{i,(k)}_{t_1} - \alpha \Delta t \sum_{j\ne i} e_{i,j}\, \mathbf{1}_{\{X^{j,(k)}_{t_1} \le 0\}}, \qquad X^i_{t_1} = X^{i,(n)}_{t_1}$$

We can combine the above process by defining the maximum solution of the N-step iteration:

$$X^i_{t_1} := X^i_{t_1} + \Delta t \sum_{j\ne i}\big[r e_{i,j}\mathbf{1}_{\{X^j_{t_1}>0\}} + (r-\alpha)e_{i,j}\mathbf{1}_{\{X^j_{t_1}\le 0\}} - r e_{j,i}\big] \quad (2.4)$$

Notice that

$$X^i_{t_1} = \min\big\{X^{i,(1)}_{t_1}, X^{i,(2)}_{t_1}, \ldots, X^{i,(n)}_{t_1}\big\}$$

2.1.2 Dynamic model

The continuous time dynamic model: We build the model on the time interval $[0,T]$ and assume the cash flow and all the debts are due after $T$, so payment is never stopped early. We do not consider a fire-sale discount rate for the bank itself. From time $k-1$ to $k$, we have:

$$X^i_k = \begin{cases} X^i_{k-1} + \sum_{j\ne i}\big[r e_{i,j}\mathbf{1}_{\{X^j_k>0\}} + (r-\alpha)e_{i,j}\mathbf{1}_{\{X^j_k\le 0\}} - r e_{j,i}\big] & \text{if } X^i_{k-1} > 0 \\ 0 & \text{if } X^i_{k-1} \le 0 \end{cases}$$

For fixed constants $r, \alpha, e_{i,j}$, the contagion process can be defined as $X_t = [X^1_t, X^2_t, \ldots, X^N_t]^T$. Then the contagion process from $t-1$ to $t$ is a Markov process defined by a function $\Phi: \mathbb{R}^n\times\mathbb{R}^n \to \mathbb{R}^n$: $X_t = \Phi(X_{t-1}, \tilde X_t) = \Phi(X_{t-1}, X_{t-1}e^{\eta_t}) = \Phi(X_{t-1}, \eta_t)$.
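A single synchronous update of the discrete-time dynamics above can be sketched as follows. This is a simplified explicit scheme in which the default indicators are evaluated at the previous step rather than implicitly at time $k$, and the function name is illustrative:

```python
import numpy as np

def contagion_step(X_prev, E, r, alpha):
    """One explicit step of the discrete-time contagion dynamics.

    A bank with X_prev[i] <= 0 is absorbed at 0. A solvent bank i earns
    r*e_{i,j} on claims against solvent debtors j, only (r - alpha)*e_{i,j}
    on claims against defaulted ones, and pays r*e_{j,i} on its own debts.
    """
    solvent = X_prev > 0
    inflow = r * (E @ solvent) + (r - alpha) * (E @ (~solvent))
    outflow = r * E.sum(axis=0)      # r * sum_j e_{j,i}
    return np.where(solvent, X_prev + inflow - outflow, 0.0)
```

Iterating `contagion_step` over $k = 1, 2, \ldots$ reproduces the piecewise recursion displayed above, up to the explicit versus implicit treatment of the default indicator.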
We now consider the continuous dynamics for the contagion:

$$X^i_{t+\Delta t} = \begin{cases} X^i_t\, e^{\sigma(W^i_{t+\Delta t}-W^i_t)-\frac12\sigma^2\Delta t} + \Delta t\sum_{j\ne i}\big[r e_{i,j}\mathbf{1}_{\{X^j_{t+\Delta t}>0\}} + (r-\alpha)e_{i,j}\mathbf{1}_{\{X^j_{t+\Delta t}\le 0\}} - r e_{j,i}\big] & \text{if } X^i_t > 0 \\ 0 & \text{if } X^i_t \le 0 \end{cases}$$

Letting $e_{i,i} = 0$, the above becomes

$$X^i_{t+\Delta t} = \begin{cases} X^i_t\, e^{\sigma(W^i_{t+\Delta t}-W^i_t)-\frac12\sigma^2\Delta t} + r\Delta t\big[\textstyle\sum_j e_{i,j} - \sum_j e_{j,i}\big] - \alpha\Delta t\sum_j e_{i,j}\mathbf{1}_{\{X^j_{t+\Delta t}\le 0\}} & \text{if } X^i_t > 0 \\ 0 & \text{if } X^i_t \le 0 \end{cases}$$

From the above, the continuous equation is

$$dX^i_t = \mathbf{1}_{\{X^i_t>0\}}\Big[\sigma X^i_t\,dW^i_t + \Big(r\big[\sum_j e_{i,j} - \sum_j e_{j,i}\big] - \alpha\sum_j e_{i,j}\mathbf{1}_{\{X^j_t\le 0\}}\Big)dt\Big]$$

We consider the general continuous time model:

$$dX^i_t = \mu^i_t\,dt + \sigma(t,X^i_t)\,dW^i_t + r\Big(\sum_j e_{i,j} - \sum_j e_{j,i}\Big)dt - \alpha\sum_j e_{i,j}\mathbf{1}_{\{X^j_t\le 0\}}\,dt \quad (2.5)$$

This forms a coupled $N$-dimensional system of stochastic differential equations, where $N$ is the number of banks. To focus on the main idea, let us assume $\sigma^i_t = 0$ and $\mu^i_t = \mu$ is a constant. The system of equations is

$$dX^i_t = \mu\,dt + r\Big(\sum_j e_{i,j} - \sum_j e_{j,i}\Big)dt - \alpha\sum_j e_{i,j}\,\varphi(X^j_t)\,dt \quad (2.6)$$

where $i = 1,2,\ldots,N$ and $\varphi$ is a bounded measurable function. We are mostly interested in the special case $\varphi(x) = \mathbf{1}_{\{x\le 0\}}$, representing the discontinuous default shock to the system. Define the fraction of defaults $h^N_t$ and the empirical measure $\mu^N_t$:

$$h^N_t = \frac1N\sum_{i=1}^N \mathbf{1}_{\{X^i_t\le 0\}}, \qquad \mu^N_t := \frac1N\sum_{i=1}^N \delta_{\{X^i_t\}}$$

We are going to study the following two questions.

- Goal 1: the convergence of $h^N_t$ as $N\to\infty$.
- Goal 2: the convergence of the empirical measure $\mu^N_t$ and the density function $f(x,t)$ of the limit distribution $\mu_t$.

2.2 Intuition for decomposition model

Real networks always have one or more centers and exhibit community structure. In the following, we set the contagion index $\alpha = \frac1N$. We will examine some very simple network structures and introduce the idea of the decomposition model.

Case 1: Centralized network. The central bank has connections to most of the other banks, but all other banks have no or very limited connections to each other. (See the right graph in the corresponding figure.)

Case 2: Simplified mean field model.
Each bank is connected to almost all the others, and the impact on each bank is the average impact of the others. (See the left graph in the corresponding figure.)

The matrices of the above two graphs can be decomposed in the forms $E = a^1\otimes b^1 + a^2\otimes b^2$ and $E = a\otimes b$ respectively, so that $e_{i,j} = a^1_i b^1_j + a^2_i b^2_j$ or $e_{i,j} = a_i b_j$. The details will be discussed in the examples of Chapter 3.

Case 3: Multiple-center network. The left graph shows essentially independent centers, which is similar to Case 1; the right one is an example of airport network connections. Roughly speaking, a multiple-center graph has the following properties:

- Each center has connections to almost all other nodes, with the number of connections being $\beta n$, where $\beta$ is close to 1.
- The non-centers have small connection numbers $\gamma_i$, in the sense that $\gamma_i/N \to 0$.
- The number of centers is finite or increases slowly enough as $N\to\infty$.

For this kind of network, we can find a decomposition with fixed $K$ such that $E \approx \sum_{k=1}^K a^k\otimes b^k$ and $e_{i,j} \approx \sum_{k=1}^K a^k_i b^k_j$.

Case 4: The general case. One approach is to consider the decomposition of $\{e_{i,j}\}_{1\le i,j\le n}$:

$$\{e_{i,j}\}_{1\le i,j\le n} = Q^{-1}\Lambda Q \quad (2.7)$$

where $\Lambda$ is the diagonal matrix with eigenvalues $\lambda_1, \ldots, \lambda_N$ and $Q$ is the eigenvector matrix. In particular, this means

$$e_{i,j} = \sum_{k=1}^N \lambda_k\, a^k_i b^k_j. \quad (2.8)$$

This has the form of Case 3 (viewing $\lambda_k a^k_i$ as the $a^k_i$ in Case 3), except that there $K$ is fixed while here $N$ can go to infinity. But the eigenvalue-eigenvector decomposition has some drawbacks, and we are not going to use it here. Instead, we assume that there exist sequences of vectors $a^k, b^k$, $k = 1,2,\ldots$, such that $E = \sum_{k=1}^\infty a^k\otimes b^k$ and $a^k, b^k$ converge in some well-defined sense as $N$ goes to infinity. For practical purposes, it may be enough to consider only finite $K$. In particular, since the matrix is most probably sparse, it may have better structure.
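Numerically, a rank-$K$ decomposition of a given exposure matrix of the kind described above can be obtained from a truncated SVD. This is only a sketch: the thesis works with abstract sequences $a^k, b^k$ rather than the SVD, and the function name is ours:

```python
import numpy as np

def rank_k_approximation(E, K):
    """Best rank-K approximation E ~ sum_{k=1}^K a^k (b^k)^T via truncated SVD.

    Returns the factor vectors a^k, b^k (as rows of a_fac, b_fac) and the
    reconstructed matrix E_K with entries sum_k a^k_i b^k_j.
    """
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    a_fac = (U[:, :K] * s[:K]).T     # fold singular values into the a^k
    b_fac = Vt[:K, :]
    E_K = a_fac.T @ b_fac
    return a_fac, b_fac, E_K
```

For a genuinely rank-1 exposure matrix $e_{i,j} = a_i b_j$ (Case 2), the $K = 1$ approximation recovers $E$ exactly.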
For the general case, assume $a^k$ and $b^k$ converge in an appropriate way; then hopefully we may construct an abstract space and build a general theory.

2.3 Assumption and definition

In this section, we introduce some common assumptions and definitions used in later chapters.

Assumption 2.3.1. Assume that the density functions of the random variables $A^k, B^k, R, X_0$ are bounded. Denote

$$\epsilon(t,\delta) := \sup_{\max_{1\le l\le K}|\theta^l|\le C,\ \kappa\in\{0,1\}} E\Big[|B^k|\,\mathbf{1}_{\{|X_0+\kappa\mu t+rRt-\frac1K\sum_{l=1}^K A^l\theta^l|\le \delta\}}\Big] \quad (2.9)$$

Then $\epsilon(t,\delta)\to 0$ as $\delta\to 0$ for fixed $t$, where $C$ is a uniform upper bound for the $\theta^l$.

Assumption 2.3.2. For given $\kappa\in\{0,1\}$ and $\theta_t = (\theta^1_t,\ldots,\theta^K_t)^T$, define

$$F(t,\theta_t) = \big(F^1(t,\theta_t), F^2(t,\theta_t), \ldots, F^K(t,\theta_t)\big)^T \quad (2.10)$$

where

$$F^k(t,\theta_t) := E\Big[|B^k|\,\mathbf{1}_{\{X_0+\kappa\mu t+rRt-\frac1K\sum_{l=1}^K A^l\theta^l_t\le 0\}}\Big] \quad (2.11)$$

and assume $F^k(t,\theta_t)$ is Lipschitz continuous with respect to $\theta_t$ in the following sense:

$$\frac1K\big|F(t,\bar\theta)-F(t,\theta)\big|_{l^1} \le \frac{L_F}{K}\big|\bar\theta-\theta\big|_{l^1} \quad (2.12)$$

Then the system of equations $(\theta^k_t)' = F^k(t,\theta_t)$, $k=1,\ldots,K$, is well posed and has a unique solution which depends continuously on the initial value.

From Assumption 2.3.1, it is clear that $\epsilon(\delta)\to 0$ uniformly as $\delta\to 0$. Define

$$F^{k,\delta}(t,\theta^\delta) := E\Big[B^k\,\varphi^\delta\Big(X_0+\kappa\mu t+rRt-\frac1K\sum_{l=1}^K A^l\theta^{\delta,l}_t\Big)\Big] \quad (2.13)$$

where $\varphi^\delta$ is a Lipschitz modification of the indicator function of the following form:

$$\varphi^\delta(x) = \begin{cases} 1, & x\le 0 \\ 1-\frac{x}{\delta}, & 0<x\le\delta \\ 0, & x\ge\delta \end{cases} \quad (2.14)$$

Lemma 2.3.3. Using the vector definitions $F$ and $F^\delta$, we define $\theta_t = \int_0^t F(s,\theta_s)\,ds$ and $\theta^\delta_t = \int_0^t F^\delta(s,\theta^\delta_s)\,ds$. These integral equations are well defined because of the Lipschitz property of $F^\delta$ and Assumption 2 for $F$. Further, $\theta^\delta_t = \int_0^t F^\delta(s,\theta^\delta_s)\,ds$ depends continuously on the parameter $\delta$.

The well-posedness is clear from the definitions; we only show the continuous dependence.
Note that

$$\frac1K\big|\theta^\delta_t-\theta_t\big|_{l^1} \le \frac1K\int_0^t \big|F^\delta(s,\theta^\delta_s)-F(s,\theta_s)\big|_{l^1}\,ds \le \frac1K\int_0^t \big|F^\delta(s,\theta^\delta_s)-F(s,\theta^\delta_s)\big|_{l^1}\,ds + \frac1K\int_0^t \big|F(s,\theta^\delta_s)-F(s,\theta_s)\big|_{l^1}\,ds \le \int_0^t \epsilon_s(\delta)\,ds + \frac{L_F}{K}\int_0^t \big|\theta^\delta_s-\theta_s\big|_{l^1}\,ds$$

Using Gronwall's inequality, we have

$$\frac1K\big|\theta^\delta_t-\theta_t\big|_{l^1} \le \Big(\int_0^t \epsilon(\delta)\,ds\Big)e^{L_F t} \quad (2.15)$$

Note that $K$ is finite and fixed; hence the solution of the equation $\theta^\delta_t = \int_0^t F^\delta(s,\theta^\delta_s)\,ds$ depends continuously on the parameter $\delta$ and converges to $\theta_t = \int_0^t F(s,\theta_s)\,ds$ as $\epsilon(\delta)\to 0$.

Remark 2.3.4. Note that the right side of (2.15) is independent of $K$. The normalization factor $\frac1K$ is necessary in (2.13); otherwise, when $K\to\infty$, both sides of (2.13) could be unbounded. Also notice that when $K\to\infty$, the left side of (2.16) converges to an expectation; this will be discussed in Chapter 6.

Remark 2.3.5. Assumption 2.3.1 is needed to overcome the exponential blow-up that arises when using a Lipschitz function to approximate the indicator function. Assumption 2.3.2 is used to guarantee the well-posedness of certain equations. The two assumptions are mild and can be verified for random variables that have bounded density functions.

Remark 2.3.6. Assumption 2 is satisfied in most cases. To see this, consider the special case where $\mu = 0$, $K = 1$, and $R = 0$. Then $F(t,\theta) := E[\mathbf{1}_{\{X_0-A\theta\le 0\}}]$ and

$$\big|F(t,\bar\theta)-F(t,\theta)\big| = P\big[\min(A\bar\theta,A\theta)\le X_0\le \max(A\bar\theta,A\theta)\big] = \int P\big[\min(A\bar\theta,A\theta)\le X_0\le\max(A\bar\theta,A\theta)\,\big|\,A\big]\,P(dA) \le C\,\big|\bar\theta-\theta\big| \quad (2.16)$$

The last inequality uses the condition that $A$ and $X_0$ have bounded density functions.

Let us introduce the Wasserstein $W_p$ distance on the space of probability measures.

Definition 2.3.7. Let $(M,d)$ be a metric space for which every probability measure on $M$ is a Radon measure. For $p\ge 1$, let $\mathcal P_p(M)$ denote the collection of all probability measures $\mu$ on $M$ with finite $p$-th moment: there exists some $x_0$ in $M$ such that

$$\int_M d(x,x_0)^p\,d\mu(x) < \infty \quad (2.17)$$
That is, there exists some $x_0$ in $M$ such that

$\int_M d(x,x_0)^p\,d\mu < \infty$ (2.17)

The $p$-th Wasserstein distance between two probability measures $\mu$ and $\nu$ in $\mathcal{P}_p(M)$ is defined as

$W_p(\mu,\nu) := \Big(\inf_{\gamma\in\Gamma(\mu,\nu)}\int_{M\times M} d(x,y)^p\,d\gamma(x,y)\Big)^{1/p}$ (2.18)

where $\Gamma(\mu,\nu)$ denotes the collection of all measures on $M\times M$ with marginals $\mu$ and $\nu$. The set $\Gamma(\mu,\nu)$ is also called the set of all couplings of $\mu$ and $\nu$.

Definition 2.3.8. The Wasserstein metric may be equivalently defined as

$W_p(\mu,\nu) = \big(\inf\mathbb{E}[d(X,Y)^p]\big)^{1/p}$ (2.19)

where $\mathbb{E}[Z]$ denotes the expected value of a random variable $Z$ and the infimum is taken over all joint distributions of random variables $X$ and $Y$ with marginals $\mu$ and $\nu$ respectively.

In the case $p=1$, the above two definitions are equivalent to the following.

Definition 2.3.9 (duality theorem of Kantorovich and Rubinstein). When the measures $\mu,\nu$ have bounded support and $M$ is a Polish space, the $W_1$ Wasserstein distance has the dual representation

$W_1(\mu,\nu) = \sup\Big\{\int f(x)\,d(\mu-\nu)(x)\ \Big|\ \text{continuous } f:M\to\mathbb{R},\ \mathrm{Lip}(f)\le 1\Big\}$ (2.20)

2.4 Thesis organization

The thesis is organized as follows. Chapter 3 discusses the decomposition model of rank $K=1$. Chapter 4 discusses the decomposition model of finite rank $K<\infty$. Chapter 5 discusses the infinite decomposition model and the limiting equation. Chapter 6 discusses the convergence theory for the limiting equation. Chapter 7 discusses a special case of the interacting particle system on a scale-free random graph.
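Definition 2.3.9 has a simple consequence in one dimension that is useful for checking the Wasserstein bounds of later chapters numerically: for two empirical measures with the same number of atoms on the real line, the optimal coupling matches order statistics. A minimal sketch, illustrative only; the function name and the sorting-based formula are standard facts about $W_1$ on $\mathbb{R}$, not part of the thesis:

```python
def w1_empirical(xs, ys):
    """W_1 distance between two empirical measures (1/n) sum_i delta_{x_i}
    and (1/n) sum_j delta_{y_j} on the real line.

    In one dimension the optimal coupling pairs order statistics, so
    W_1 = (1/n) * sum_i |x_(i) - y_(i)|.
    """
    assert len(xs) == len(ys)
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)

# Shifting every atom by a constant c translates the measure, and W_1 = |c|,
# consistent with the Kantorovich-Rubinstein dual (take f(x) = x).
print(w1_empirical([0.0, 1.0, 2.0], [0.5, 1.5, 2.5]))  # 0.5
```

The same quantity appears in Remarks 3.4.2 and 4.4.2 as $W_1(\mu_N,\mu)$ between the empirical law of the coefficients and the limiting law.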
We summarize the notation in the following table for reference:

the number of total banks: $N$
the amount of money $j$ owes $i$: $e_{i,j}$
interbank assets of bank $i$: $c^+_i = \sum_j e_{i,j}$
interbank debts of bank $i$: $c^-_i = \sum_j e_{j,i}$
exposure matrix: $E = (e_{i,j})_{1\le i,j\le N}$
in-degree: $D^-(i) := \#\{j\in V \mid e_{j,i}>0\}$
out-degree: $D^+(i) := \#\{j\in V \mid e_{i,j}>0\}$
total interbank liabilities of $i$: $R_i := c^+_i - c^-_i$
initial cash flow of bank $i$: $X^i_0 > 0$
total cash flow of bank $i$ at time $t$: $X^i_t$
fixed interest rate: $r$
contagion strength: $\alpha$
bank $i$ defaults at time $t$: $X^i_t \le 0$
density function at time $t$: $f(x,t)$
joint density function at time $t$: $f(x,t,a,b)$
the randomized amount of money $j$ owes $i$: $\xi_{i,j}$
connection probability from $j$ to $i$: $p_{i,j}$
default ratio at time $t$: $F^N_t := \frac{1}{N}\sum_{i=1}^N \mathbf{1}_{\{X^i_t\le 0\}}$

Chapter 3
Rank 1 case: mean field model

3.1 Introduction

In this chapter we discuss the decomposition model for $K=1$; specifically, $e_{i,j}=a_ib_j$. We will show that the finite-dimensional system converges to a limiting system under proper assumptions, and that the density function of the limiting distribution can be characterized as the solution of a transport equation. We discuss the convergence theory for both a bounded Lipschitz loss function and the indicator loss function, as well as the rate of convergence.

3.2 The model

Note that $e_{i,j}=a_ib_j$, $d^i_t=\mu$ is a constant, and $\sigma=0$. Then equation (2.5) is equivalent to the following one:

$X^{i,N}_t = x_i + \mu t + \frac{r}{N}\Big(\sum_j e_{i,j}-\sum_j e_{j,i}\Big)t + \frac{1}{N}a_i\int_0^t\sum_j b_j\,\mathbf{1}_{\{X^{j,N}_s\le 0\}}\,ds$ (3.1)

Define

$\mathcal{L}_{\{X^N_0,A^N,B^N\}} := \frac{1}{N}\sum_i\delta_{\{x_i,a_i,b_i\}},\qquad \mathcal{L}_{\{X^N_t,A^N,B^N\}} := \frac{1}{N}\sum_i\delta_{\{x^{i,N}_t,a_i,b_i\}}$ (3.2)

Assume that the density functions of the random variables $A^k,B^k,R,X_0$ are bounded and that $(X^N_0,A^N,B^N)\xrightarrow{d}(X_0,A,B)$. Recall the classical Skorokhod representation theorem.

Theorem 3.2.1 (Billingsley, p. 70, [55]). Let $\mu_n$, $n\in\mathbb{N}$, be a sequence of probability measures on a metric space $S$ such that $\mu_n$ converges weakly to some probability measure $\mu_\infty$ on $S$ as $n\to\infty$.
Suppose also that the support of $\mu_\infty$ is separable. Then there exist random variables $X_n$ defined on a common probability space $(\Omega,\mathcal{F},P)$ such that the law of $X_n$ is $\mu_n$ for all $n$ (including $n=\infty$) and such that $X_n$ converges to $X_\infty$, $P$-almost surely.

Lemma 3.2.2. Since $(X^N_0,A^N,B^N)\xrightarrow{d}(X_0,A,B)$, by the Skorokhod representation theorem there exists a probability space $(\bar\Omega,\bar{\mathcal{F}},\bar P)$ on which both $(X^N_0,A^N,B^N)$ and $(X_0,A,B)$ are defined, with $(X^N_0,A^N,B^N)\xrightarrow{a.s.}(X_0,A,B)$. By the boundedness of the random variables and the dominated convergence theorem, we have $(X^N_0,A^N,B^N)\xrightarrow{L^1}(X_0,A,B)$ under $\bar P$; in particular, $X^N_0\xrightarrow{L^1}X_0$, $A^N\xrightarrow{L^1}A$, $B^N\xrightarrow{L^1}B$. Without loss of generality, this chapter works on the probability space $(\bar\Omega,\bar{\mathcal{F}},\bar P)$.

We can then write the equation in the form

$X^N_t = X^N_0 + \mu t + r\big[A^N\mathbb{E}_{\bar P}[B^N]-B^N\mathbb{E}_{\bar P}[A^N]\big]t + A^N\int_0^t\mathbb{E}_{\bar P}\big[B^N\mathbf{1}_{\{X^N_s\le 0\}}\big]ds$

where $X^N_t$ is viewed as a random variable taking $N$ discrete values corresponding to $i=1,2,\dots,N$. Note that under $\bar P$ the default ratio has the form

$\frac{1}{N}\sum_j\mathbf{1}_{\{X^{j,N}_s\le 0\}} = \bar P(X^N_s\le 0)$

In this chapter we discuss two questions:
• Q1: Does $X^N_t$ converge as $N\to\infty$, given $(X^N_0,A^N,B^N)\xrightarrow{d}(X_0,A,B)$?
• Q2: What is the limiting system as $N\to\infty$?

3.3 Lipschitz loss function

In this part we take the loss function $\varphi$ to be a bounded Lipschitz function with Lipschitz constant $C_{Lip}$. The system can be formulated as follows:

$X^N_t = X^N_0 + \mu t + r\big[A^N\mathbb{E}_{\bar P}[B^N]-B^N\mathbb{E}_{\bar P}[A^N]\big]t + A^N\int_0^t\mathbb{E}_{\bar P}\big[B^N\varphi(X^N_s)\big]ds$ (3.3)

Theorem 3.3.1. In the discrete equations (3.3), assume that $X^N_0,A^N,B^N$ are bounded and $(X^N_0,A^N,B^N)\xrightarrow{d}(X_0,A,B)$.
Define

$\mathcal{L}_{\{X^N_0,A^N,B^N\}} := \frac{1}{N}\sum_i\delta_{\{x_i,a_i,b_i\}},\qquad \mathcal{L}_{\{X^N_t,A^N,B^N\}} := \frac{1}{N}\sum_i\delta_{\{x^{i,N}_t,a_i,b_i\}}$ (3.4)

Then $X^N_t\xrightarrow{L^1}X_t$ on the probability space $\bar P$ defined in Lemma 3.2.2, and $X_t$ satisfies the limiting equation

$X_t = X_0 + \mu t + r\big[A\mathbb{E}_{\bar P}[B]-B\mathbb{E}_{\bar P}[A]\big]t + A\int_0^t\mathbb{E}_{\bar P}[B\varphi(X_s)]\,ds$ (3.5)

Further, the joint density function of $(X_t,A,B)$, denoted by $f:=f(t,x,a,b)$, is characterized by the PDE

$\begin{cases}\partial_t f + \partial_x f\int_{\mathbb{R}^3}\big[\mu+a\beta\varphi(y)+r(a\beta-b\alpha)\big]f(t,y,\alpha,\beta)\,d\alpha\,d\beta\,dy = 0\\ f(0,x,a,b)=\rho(x,a,b)\in C^1(\mathbb{R}^3)\end{cases}$

where $\rho$ is the pdf of $(X_0,A,B)$.

Lemma 3.3.2. Assume that $X^N_0\xrightarrow{L^1}X_0$; then $X^N_t\xrightarrow{L^1}X_t$.

Proof. Denote $F^N_t := \mathbb{E}_{\bar P}[B^N\varphi(X^N_t)]$ and $F_t := \mathbb{E}_{\bar P}[B\varphi(X_t)]$. Then

$|F^N_t-F_t| = \big|\mathbb{E}_{\bar P}[B^N\varphi(X^N_t)]-\mathbb{E}_{\bar P}[B\varphi(X_t)]\big| = \big|\mathbb{E}_{\bar P}[B^N(\varphi(X^N_t)-\varphi(X_t))]+\mathbb{E}_{\bar P}[(B^N-B)\varphi(X_t)]\big| \le \big|\mathbb{E}_{\bar P}[(B^N-B)\varphi(X_t)]\big| + C_{Lip}\,\mathbb{E}_{\bar P}\big[|B^N|\,\big|\big(X^N_0+\mu t+r[A^N\mathbb{E}_{\bar P}[B^N]-B^N\mathbb{E}_{\bar P}[A^N]]t\big)-\big(X_0+\mu t+r[A\mathbb{E}_{\bar P}[B]-B\mathbb{E}_{\bar P}[A]]t\big)\big|\big] + C_{Lip}\,\mathbb{E}_{\bar P}\Big[|B^N|\,|A^N|\int_0^t|F^N_s-F_s|\,ds\Big] + C_{Lip}\,\mathbb{E}_{\bar P}\Big[|B^N|\,|A^N-A|\int_0^t|F_s|\,ds\Big]$

Let $\eta_N$ denote the sum of the first, second and fourth terms on the right-hand side, and denote by $C$ a uniform bound for all the random variables. Then

$\eta_N \le C\,\mathbb{E}_{\bar P}\big[|B^N-B|+|X^N_0-X_0|+|A^N-A|\big]$ (3.6)

Applying Gronwall's inequality gives

$|F^N_t-F_t| \le C\,\mathbb{E}_{\bar P}\big[|B^N-B|+|X^N_0-X_0|+|A^N-A|\big]\,e^{C_{Lip}\mathbb{E}_{\bar P}[|B^N||A^N|]\,t}$ (3.7)

By the lemma above and Lemma 3.2.2,

$\mathbb{E}_{\bar P}|X^N_t-X_t| \le C\,\mathbb{E}_{\bar P}\big[|B^N-B|+|X^N_0-X_0|+|A^N-A|\big]\Big[\int_0^t e^{C_{Lip}\mathbb{E}_{\bar P}[|B^N||A^N|]\,s}ds + Crt + C\Big]$ (3.8)

Proof. Recall that the limiting equation is

$X_t = X_0+\mu t+r\big[A\mathbb{E}_{\bar P}[B]-B\mathbb{E}_{\bar P}[A]\big]t+A\int_0^t\mathbb{E}_{\bar P}[B\varphi(X_s)]\,ds$ (3.9)

Denote $C_t := \int_0^t\mathbb{E}_{\bar P}[B\varphi(X_s)]\,ds$, which is a deterministic function of $t$, first-order differentiable with respect to $t$. Also notice that $X_0,A,B$ are smooth and bounded; hence the joint density $f(t,x,a,b)$ exists and is bounded, smooth with respect to $x$ and first-order differentiable with respect to $t$.
Then

$X_t = X_0+\mu t+r\big[A\mathbb{E}_{\bar P}[B]-B\mathbb{E}_{\bar P}[A]\big]t+A\int_0^t\mathbb{E}_{\bar P}[B\varphi(X_s)]\,ds = X_0+\mu t+r\big[A\mathbb{E}_{\bar P}[B]-B\mathbb{E}_{\bar P}[A]\big]t+A\int_0^t\!\!\int\beta\varphi(y)f(s,y,\alpha,\beta)\,d\alpha\,d\beta\,dy\,ds$

Denote $c_t = \int_0^t\int\beta\varphi(y)f(s,y,\alpha,\beta)\,d\alpha\,d\beta\,dy\,ds$. Then $f(t,x,a,b) = f_0\big(x-ac_t-\mu t-r[a\mathbb{E}_{\bar P}[B]-b\mathbb{E}_{\bar P}[A]]t,\,a,\,b\big)$. Taking derivatives with respect to $t$ on both sides and writing $f:=f(t,x,a,b)$, we obtain

$\partial_t f + \partial_x f\int_{\mathbb{R}^3}\big[\mu+a\beta\varphi(y)+r(a\beta-b\alpha)\big]f(t,y,\alpha,\beta)\,d\alpha\,d\beta\,dy = 0$

3.4 Indicator loss function

In this part we take the loss function $\varphi$ to be the indicator function, and the system can be formulated as

$X^N_t = X^N_0+\mu t+r\big[A^N\mathbb{E}_{\bar P}[B^N]-B^N\mathbb{E}_{\bar P}[A^N]\big]t+A^N\int_0^t\mathbb{E}_{\bar P}\big[B^N\mathbf{1}_{\{X^N_s\le 0\}}\big]ds.$ (3.10)

Theorem 3.4.1. In the discrete equations (3.10), assume that $X^N_0,A^N,B^N$ are bounded and $(X^N_0,A^N,B^N)\xrightarrow{d}(X_0,A,B)$. Define

$\mathcal{L}_{\{X^N_0,A^N,B^N\}} := \frac{1}{N}\sum_i\delta_{\{x_i,a_i,b_i\}},\qquad \mathcal{L}_{\{X^N_t,A^N,B^N\}} := \frac{1}{N}\sum_i\delta_{\{x^{i,N}_t,a_i,b_i\}}$ (3.11)

Then $X^N_t\xrightarrow{L^1}X_t$ on the probability space $\bar P$ defined in Lemma 3.2.2, and $X_t$ satisfies the limiting equation

$X_t := X_0+\mu t+r\big[A\mathbb{E}_{\bar P}[B]-B\mathbb{E}_{\bar P}[A]\big]t+A\int_0^t\mathbb{E}_{\bar P}\big[B\mathbf{1}_{\{X_s\le 0\}}\big]ds$ (3.12)

Further, the joint density function of $(X_t,A,B)$, denoted by $f:=f(t,x,a,b)$, is characterized by the PDE

$\begin{cases}\partial_t f + \partial_x f\int_{\mathbb{R}^3}\big[\mu+a\beta\mathbf{1}_{\{y\le 0\}}+r(a\beta-b\alpha)\big]f(t,y,\alpha,\beta)\,d\alpha\,d\beta\,dy = 0\\ f(0,x,a,b)=\rho(x,a,b)\in C^1(\mathbb{R}^3)\end{cases}$

where $\rho$ is the pdf of $(X_0,A,B)$, $f:=f(t,x,a,b)$, and the marginal density is $f(t,x)=\int f(t,x,a,b)\,da\,db$.

Proof. Choosing $K=1$ in Assumption 2.3.2 and applying it to (3.12), it is straightforward that the limiting equation is well posed. We now prove the convergence. Define the Lipschitz function

$\varphi_\varepsilon(x)=\begin{cases}1,&x\le 0\\1-x/\varepsilon,&0<x\le\varepsilon\\0,&x\ge\varepsilon\end{cases}$ (3.13)

Define

$F^N_t := \mathbb{E}_{\bar P}\big[B^N\mathbf{1}_{\{X^N_t\le 0\}}\big],\quad F_t := \mathbb{E}_{\bar P}\big[B\mathbf{1}_{\{X_t\le 0\}}\big],\quad F^{N,\varepsilon}_t := \mathbb{E}_{\bar P}\big[B^N\varphi_\varepsilon(X^{N,\varepsilon}_t)\big],\quad F^\varepsilon_t := \mathbb{E}_{\bar P}\big[B\varphi_\varepsilon(X^\varepsilon_t)\big]$

Then $|F^N_t-F_t| \le |F^N_t-F^{N,\varepsilon}_t|+|F^{N,\varepsilon}_t-F^\varepsilon_t|+|F^\varepsilon_t-F_t| = I+II+III$. The term $II=|F^{N,\varepsilon}_t-F^\varepsilon_t|$ is handled in Theorem 3.3.1.
Then, using Assumption 2.3.1, Assumption 2.3.2 and Lemma 2.3.3, one can show

$|F^\varepsilon_t-F_t| = \big|\mathbb{E}_{\bar P}[B\varphi_\varepsilon(X^\varepsilon_t)]-\mathbb{E}_{\bar P}[B\mathbf{1}_{\{X_t\le 0\}}]\big| \le \big|\mathbb{E}_{\bar P}[B\varphi_\varepsilon(X^\varepsilon_t)]-\mathbb{E}_{\bar P}[B\mathbf{1}_{\{X^\varepsilon_t\le 0\}}]\big| + \big|\mathbb{E}_{\bar P}[B\mathbf{1}_{\{X^\varepsilon_t\le 0\}}]-\mathbb{E}_{\bar P}[B\mathbf{1}_{\{X_t\le 0\}}]\big| \le C\psi(\varepsilon) + \Big|F\Big(t,\int_0^t F^\varepsilon_s\,ds\Big)-F\Big(t,\int_0^t F_s\,ds\Big)\Big| \le C\psi(\varepsilon) + L_F\int_0^t|F^\varepsilon_s-F_s|\,ds$ (3.14)

Notice also that

$|F^N_t-F^{N,\varepsilon}_t| = \big|\mathbb{E}_{\bar P}[B^N\mathbf{1}_{\{X^N_t\le 0\}}]-\mathbb{E}_{\bar P}[B^N\varphi_\varepsilon(X^{N,\varepsilon}_t)]\big| \le \big|\mathbb{E}_{\bar P}[B^N\varphi_\varepsilon(X^{N,\varepsilon}_t)]-\mathbb{E}_{\bar P}[B^N\mathbf{1}_{\{X^{N,\varepsilon}_t\le 0\}}]\big| + \big|\mathbb{E}_{\bar P}[B^N\mathbf{1}_{\{X^{N,\varepsilon}_t\le 0\}}]-\mathbb{E}_{\bar P}[B^N\mathbf{1}_{\{X^N_t\le 0\}}]\big|$

Apply Assumption 2.3.1 to the first part. For the second part, notice that

$\mathbb{E}_{\bar P}\big[B^N\mathbf{1}_{\{X^{N,\varepsilon}_t\le 0\}}\big] = \mathbb{E}_{\bar P}\big[B^N\mathbf{1}_{\{X^N_0+\mu t+r[A^N\mathbb{E}_{\bar P}[B^N]-B^N\mathbb{E}_{\bar P}[A^N]]t+A^N\int_0^t\mathbb{E}_{\bar P}[B^N\varphi_\varepsilon(X^{N,\varepsilon}_s)]ds\,\le\,0\}}\big]$

$\mathbb{E}_{\bar P}\big[B^N\mathbf{1}_{\{X^N_t\le 0\}}\big] = \mathbb{E}_{\bar P}\big[B^N\mathbf{1}_{\{X^N_0+\mu t+r[A^N\mathbb{E}_{\bar P}[B^N]-B^N\mathbb{E}_{\bar P}[A^N]]t+A^N\int_0^t\mathbb{E}_{\bar P}[B^N\mathbf{1}_{\{X^N_s\le 0\}}]ds\,\le\,0\}}\big]$

so that

$\big|\mathbb{E}_{\bar P}[B^N\mathbf{1}_{\{X^{N,\varepsilon}_t\le 0\}}]-\mathbb{E}_{\bar P}[B^N\mathbf{1}_{\{X^N_t\le 0\}}]\big| = \Big|F\Big(t,\int_0^t F^{N,\varepsilon}_s\,ds\Big)-F\Big(t,\int_0^t F^N_s\,ds\Big)\Big| \le L_F\int_0^t|F^{N,\varepsilon}_s-F^N_s|\,ds$

Combining, we have

$|F^N_t-F^{N,\varepsilon}_t| \le \psi(\varepsilon) + L_F\int_0^t|F^{N,\varepsilon}_s-F^N_s|\,ds$ (3.15)

Putting the above together, we obtain

$|F^N_t-F_t| \le C\psi(\varepsilon)e^{L_F t} + C\,\mathbb{E}_{\bar P}\big[|B^N-B|+|X^N_0-X_0|+|A^N-A|\big]\,e^{L_F\mathbb{E}_{\bar P}[|B^N||A^N|]\,t}$

Then

$\mathbb{E}_{\bar P}|X^N_t-X_t| \le C\,\mathbb{E}_{\bar P}\big[|B^N-B|+|X^N_0-X_0|+|A^N-A|\big]\Big[\int_0^t e^{L_F\mathbb{E}_{\bar P}[|B^N||A^N|]\,s}ds+Crt+C\Big] + \int_0^t C\psi(\varepsilon)e^{L_F s}\,ds$ (3.16)

First let $N\to\infty$, then $\varepsilon\to 0$; this gives $X^N_t\xrightarrow{L^1}X_t$. The density equation is proved exactly as in Theorem 3.3.1.

Remark 3.4.2. In Theorems 3.3.1 and 3.4.1 we proved strong convergence in $L^1$. If we do not introduce the common probability space $\bar P$ and instead define

$\mu_N = \frac{1}{N}\sum_{i=1}^N\delta_{\{x_i,a_i,b_i,r_i\}},\qquad \mu = P\circ(X_0,A,B,R)^{-1}$

then, using Definition 2.3.9, we can repeat the idea of the proofs of Theorems 3.3.1 and 3.4.1 and obtain the result in a weak sense, i.e., convergence in distribution. More specifically, for the indicator loss function we have

$|F^N_t-F_t| \le C\psi(\varepsilon)e^{L_F t} + C\,W_1(\mu_N,\mu)\,e^{C_{Lip}W_1(\mu_N,\mu)\,t} + C\,W_1(\mu_N,\mu)$

As a special case, we consider the mean field case.
Consider the equations

$X^{i,N}_t = x_i + \frac{1}{N}\int_0^t\sum_j\mathbf{1}_{\{X^{j,N}_s\le 0\}}\,ds$ (3.17)

Theorem 3.4.3. Define $\mathcal{L}_{X^N_0} := \frac{1}{N}\sum_i\delta_{\{x_i\}}$ and $\mathcal{L}_{X^N_t} := \frac{1}{N}\sum_i\delta_{\{x^{i,N}_t\}}$, so that the equation reads $X^N_t = X^N_0+\int_0^t\mathbb{E}_{\bar P}[\mathbf{1}_{\{X^N_s\le 0\}}]\,ds$. Assume that $X^N_0\xrightarrow{d}X_0$. Then $X^N_t\xrightarrow{L^1}X_t$ on the probability space $\bar P$, and (3.17) converges to the limiting equation

$X_t := X_0 + \int_0^t\mathbb{E}_{\bar P}\big[\mathbf{1}_{\{X_s\le 0\}}\big]\,ds$ (3.18)

and the density function of the limiting system $X_t$ is characterized by the PDE

$\partial_t f(t,x) + \mathrm{div}_x\Big(f(t,x)\int\mathbf{1}_{\{y\le 0\}}f(t,y)\,dy\Big) = 0$ (3.19)

3.5 Examples

Example 1a: one center. Let $e_{i,j}=1$ for $i=1$ or $j=1$, and $e_{i,j}=0$ otherwise. In this case $E = a^1b^1+a^2b^2$, where $a^1=(1,1,\dots,1)^T$, $b^1=(1,1,\dots,1)$, $a^2=(0,1,\dots,1)^T$, $b^2=(0,-1,\dots,-1)$. It is easy to compute from the limiting equation (3.5) that the default ratio is

$F_t = \bar P\big(X_0+\mu t+r[A\mathbb{E}_{\bar P}[B]-B\mathbb{E}_{\bar P}[A]]t\le 0\big)$ (3.20)

If we look at the equation

$X^{i,N}_t = x_i+\mu t+\frac{r}{N}\Big(\sum_j e_{i,j}-\sum_j e_{j,i}\Big)t+\frac{1}{N}a_i\int_0^t\sum_j b_j\mathbf{1}_{\{X^{j,N}_s\le 0\}}\,ds$ (3.21)

then by the definition of $E$ the term $\frac{1}{N}\int_0^t\sum_j e_{i,j}\mathbf{1}_{\{X^j_t\le 0\}}\,ds\to 0$, so

$F^N_t = \frac{1}{N}\sum_i\mathbf{1}_{\{X^i_t\le 0\}} \to \bar P\big(X_0+\mu t+r[A\mathbb{E}_{\bar P}[B]-B\mathbb{E}_{\bar P}[A]]t\le 0\big)$ (3.22)

which shows the result is consistent.

Example 1b: one center, with $\alpha=1$. We consider a case in which jumps happen. The network has one center, and all other nodes are connected only to the center, so the center node plays a particular role in the graph:

$dX^i_t = r\Big(\sum_j e_{i,j}-\sum_j e_{j,i}\Big)dt - \sum_j e_{i,j}\mathbf{1}_{\{X^j_s\le 0\}}\,dt$ (3.23)

Let $\tau_1<\tau_2<\dots<\tau_k<\dots$ be the default times of the banks, and denote the center node by $X^0_t$. Let $\nu_N := \frac{1}{N}\sum_{i=1}^N\delta_{\tau_i}$. For this particular example we assume $\nu_N$ has the decomposition $\nu_N = \lambda(N)\hat\mu+(1-\lambda(N))v+o(\frac{1}{N})$ and that $\nu_N\to\nu$. Define $t^* = \operatorname{ess\,inf}\nu$, meaning $\nu(t\ge t^*)>0$ and $\nu(t<t^*)=0$. We assume $|N(1-\lambda(N))|\le CN^C$, which means $0<\lambda(N)\to 1$ (otherwise $\lambda=0$ in the following remark). The decomposition then gives restricted measures $\frac{\nu_N}{1-\lambda(N)}\big|_{t<t^*}\to v$ and $\frac{\nu_N}{\lambda(N)}\big|_{t\ge t^*}\to\hat\mu$, which
are non-degenerate (possibly discrete), and the following remark is straightforward.

Remark 3.5.1. The default ratio is given by

$F_t = \begin{cases}0, & t<\tau\\ P\big(-X_0+\frac{1}{r+1}\,t\ge 0\big), & t\ge\tau\end{cases}$ (3.24)

where $\tau := \inf\{s: X^0\le rs-\mathbb{E}_v[(s-t)^+]\}$, $X^0$ and $r$ are the same notation as in the decomposition model, and $v$ is the measure with respect to $t$.

Example 2: fully connected. $E = ab^T$, where $a=(1,1,\dots,1)^T$ and $b=(1,1,\dots,1)$. Considering the special mean field case, and noting that $A=B=1$,

$X_t := X_0+\mu t+r\big[A\mathbb{E}_{\bar P}[B]-B\mathbb{E}_{\bar P}[A]\big]t+A\int_0^t\mathbb{E}_{\bar P}\big[B\mathbf{1}_{\{X_s\le 0\}}\big]\,ds$ (3.25)

Define $\theta_t = \int_0^t\mathbb{E}_{\bar P}[\mathbf{1}_{\{X_s\le 0\}}]\,ds = \int_0^t\bar P(X_s\le 0)\,ds$; then the equation can be reformulated as

$\theta_t' = \bar P\big(X_0+\mu t+\theta_t\le 0\big)$ (3.26)

If the distribution function of $X_0$ is $G$, we have $\theta_t' = G(-\mu t-\theta_t)$. In general, the default ratio can always be reduced to the solution of an ODE.

Chapter 4
Rank K case: beyond mean field

4.1 Introduction

In this chapter we discuss the decomposition model for finite $K$; specifically, $e_{i,j}=\frac{1}{K}\sum_{k=1}^K a^k_ib^k_j$. We will show that the finite-dimensional system converges to a limiting system under proper assumptions, and that the density function of the limiting distribution can be characterized as the solution of a transport equation. We discuss the convergence theory for both a bounded Lipschitz loss function and the indicator loss function, as well as the rate of convergence.

4.2 The model

Recall that $d^i_t=\mu$ is a constant, and $\sigma=0$.
The default equation is

$X^{i,N}_t = x_i+\mu t+\frac{r}{N}\Big(\sum_j e_{i,j}-\sum_j e_{j,i}\Big)t+\frac{1}{N}\,\frac{1}{K}\sum_{k=1}^K a^k_i\int_0^t\sum_j b^k_j\mathbf{1}_{\{X^{j,N}_s\le 0\}}\,ds$ (4.1)

We define the empirical laws

$\mathcal{L}_{\{X^N_0,A^{\{N,1\}},B^{\{N,1\}},\dots,A^{\{N,K\}},B^{\{N,K\}}\}} := \frac{1}{N}\sum_i\delta_{\{x_i,a^1_i,b^1_i,a^2_i,b^2_i,\dots,a^K_i,b^K_i\}}$ (4.2)

$\mathcal{L}_{\{X^N_t,A^{\{N,1\}},B^{\{N,1\}},\dots,A^{\{N,K\}},B^{\{N,K\}}\}} := \frac{1}{N}\sum_i\delta_{\{x^{i,N}_t,a^1_i,b^1_i,a^2_i,b^2_i,\dots,a^K_i,b^K_i\}}$ (4.3)

In the following, we use the notation $(X^N_0,A^{\{N,k\}},B^{\{N,k\}},k=1,\dots,K)$ to represent $(X^N_0,A^{\{N,1\}},B^{\{N,1\}},A^{\{N,2\}},B^{\{N,2\}},\dots,A^{\{N,K\}},B^{\{N,K\}})$, etc. Assume that the density functions of the random variables $X^N_0$, $A^{\{N,k\}}$, $B^{\{N,k\}}$ are bounded and that

$(X^N_0,A^{\{N,k\}},B^{\{N,k\}},k=1,\dots,K)\xrightarrow{d}(X_0,A^k,B^k,k=1,\dots,K)$ (4.4)

Lemma 4.2.1. By the Skorokhod representation theorem [Theorem 3.2.1], there exists a probability space $(\bar\Omega,\bar{\mathcal{F}},\bar P)$ on which both $(X^N_0,A^{\{N,k\}},B^{\{N,k\}},k=1,\dots,K)$ and $(X_0,A^k,B^k,k=1,\dots,K)$ are defined, with $(X^N_0,A^{\{N,k\}},B^{\{N,k\}},k=1,\dots,K)\xrightarrow{a.s.}(X_0,A^k,B^k,k=1,\dots,K)$. By the boundedness of the random variables and the dominated convergence theorem, the convergence also holds in $L^1$ under $\bar P$; in particular, each component converges in $L^1$. Without loss of generality, this chapter works on the probability space $(\bar\Omega,\bar{\mathcal{F}},\bar P)$.

We can then write the equation in the form

$X^N_t = X^N_0+\mu t+r\Big[\frac{1}{K}\sum_{k=1}^K A^{\{N,k\}}\mathbb{E}_{\bar P}[B^{\{N,k\}}]-\frac{1}{K}\sum_{k=1}^K B^{\{N,k\}}\mathbb{E}_{\bar P}[A^{\{N,k\}}]\Big]t+\frac{1}{K}\sum_{k=1}^K A^{\{N,k\}}\int_0^t\mathbb{E}_{\bar P}\big[B^{\{N,k\}}\mathbf{1}_{\{X^N_s\le 0\}}\big]\,ds$ (4.5)

where $X^N_t$ is viewed as a random variable taking $N$ discrete values corresponding to $i=1,2,\dots,N$. Note that under $\bar P$ the default ratio has the form

$\frac{1}{N}\sum_j\mathbf{1}_{\{X^{j,N}_s\le 0\}} = \bar P(X^N_s\le 0)$

Without loss of generality, we let $r=0$ and $\mu=0$; the general case is proved in exactly the same way. In this chapter we consider the equation
$X^N_t = X^N_0+\frac{1}{K}\sum_{k=1}^K A^{\{N,k\}}\int_0^t\mathbb{E}_{\bar P}\big[B^{\{N,k\}}\mathbf{1}_{\{X^N_s\le 0\}}\big]\,ds$ (4.6)

In this chapter we discuss two questions:
• Q1: Does $X^N_t$ converge as $N\to\infty$, given condition (4.4)?
• Q2: What is the limiting system as $N\to\infty$?

4.3 Lipschitz loss function

We first look at the Lipschitz loss function. In (4.6) we replace the indicator function in the default part by a Lipschitz function $\varphi$ with Lipschitz constant $C_{Lip}$:

$X^N_t = X^N_0+\mu t+r\Big[\frac{1}{K}\sum_{k=1}^K A^{\{N,k\}}\mathbb{E}_{\bar P}[B^{\{N,k\}}]-\frac{1}{K}\sum_{k=1}^K B^{\{N,k\}}\mathbb{E}_{\bar P}[A^{\{N,k\}}]\Big]t+\frac{1}{K}\sum_{k=1}^K A^{\{N,k\}}\int_0^t\mathbb{E}_{\bar P}\big[B^{\{N,k\}}\varphi(X^N_s)\big]\,ds$ (4.7)

Theorem 4.3.1. Assume that all the random variables $X^N_0,A^{\{N,k\}},B^{\{N,k\}}$, $k=1,\dots,K$, are bounded and that condition (4.4) holds. Then $X^N_t\xrightarrow{L^1}X_t$ on the probability space $\bar P$, and $X_t$ satisfies the limiting equation

$X_t = X_0+\mu t+r\Big[\frac{1}{K}\sum_{k=1}^K A^k\mathbb{E}_{\bar P}[B^k]-\frac{1}{K}\sum_{k=1}^K B^k\mathbb{E}_{\bar P}[A^k]\Big]t+\frac{1}{K}\sum_{k=1}^K A^k\int_0^t\mathbb{E}_{\bar P}\big[B^k\varphi(X_s)\big]\,ds$ (4.8)

Without loss of generality, choose $r=0$ and $\mu=0$; then the joint density function of $(X_t,A^k,B^k,k=1,\dots,K)$, denoted by $f:=f(t,x,\tilde a,\tilde b)$, is characterized by the PDE

$\begin{cases}\partial_t f(t,x,\tilde a,\tilde b)+\mathrm{div}_x\Big(f(t,x,\tilde a,\tilde b)\,\frac{1}{K}\sum_{k=1}^K\Big[a^k\int\beta^k\varphi(y)f(t,y,\tilde\alpha,\tilde\beta)\,d\tilde\alpha\,d\tilde\beta\,dy\Big]\Big)=0\\ f(0,x,\tilde a,\tilde b)=\rho(x,\tilde a,\tilde b)\in C^1(\mathbb{R}^{2K+1})\end{cases}$ (4.9)

where $f(t,x,\tilde a,\tilde b):=f(t,x,a^1,\dots,a^K,b^1,\dots,b^K)$, $\tilde a=(a^1,\dots,a^K)$, $\tilde b=(b^1,\dots,b^K)$, and $\rho(x,\tilde a,\tilde b)$ is the pdf of $(X_0,A^{\{N,k\}},B^{\{N,k\}},k=1,\dots,K)$.

Lemma 4.3.2. Define $F^{N,k}_t = \mathbb{E}_{\bar P}[B^{N,k}\varphi(X^N_t)]$ and $F^k_t = \mathbb{E}_{\bar P}[B^k\varphi(X_t)]$; then $|F^{N,k}_t-F^k_t|\to 0$.

Proof.
Note that

$X^N_t-X_t = (X^N_0-X_0)+\frac{1}{K}\sum_{k=1}^K(A^{N,k}-A^k)\int_0^t F^{N,k}_s\,ds+\frac{1}{K}\sum_{k=1}^K A^k\int_0^t(F^{N,k}_s-F^k_s)\,ds$ (4.10)

and

$|F^{N,k}_t-F^k_t| = \big|\mathbb{E}_{\bar P}[B^{N,k}\varphi(X^N_t)]-\mathbb{E}_{\bar P}[B^k\varphi(X_t)]\big| = \big|\mathbb{E}_{\bar P}[B^{N,k}(\varphi(X^N_t)-\varphi(X_t))]+\mathbb{E}_{\bar P}[(B^{N,k}-B^k)\varphi(X_t)]\big| \le \big|\mathbb{E}_{\bar P}[(B^{N,k}-B^k)\varphi(X_t)]\big| + C_{Lip}\,\mathbb{E}_{\bar P}\big[|B^{N,k}|\,|X^N_0-X_0|\big] + C_{Lip}\,\frac{1}{K}\sum_{i=1}^K\mathbb{E}_{\bar P}\Big[|B^{N,k}|\,A^{N,i}\int_0^t|F^{N,i}_s-F^i_s|\,ds\Big] + C_{Lip}\,\frac{1}{K}\sum_{i=1}^K\mathbb{E}_{\bar P}\Big[|B^{N,k}|\,|A^{N,i}-A^i|\int_0^t|F^{N,i}_s|\,ds\Big]$ (4.11)
If we consider the eigenvalue-eigenvector decomposition, the den- sity of X t could be characterized by @ @t f(t;x; ~ a) +div x f(t;x; ~ a) K X k=1 [ k a k Z k '(y)f(t;y; ~ )d~ dy] = 0 (4.16) In general, the matrix E has to be symmetric to make eigenvalue-eigenvector real valued. The formulation above is interesting in the sense that it has strong con- nection to random matrix theory. Under proper assumption, the eigenvalue and 48 eigenvector will have nice distribution property. This could be a further research eld. 4.4 Indicator loss function In this part, we consider the loss to be an indicator function. We consider system (4.5). Theorem 4.4.1. Assume that all the random variables X N 0 ;A fN;kg ;B fN;kg ;k = 1; 2;:::;K are bounded and we have condition (4.4), then X N t L 1 !X t in probability space P, and X t satises the limiting equation X t = X 0 +t +r[ 1 K K X k=1 A k E P [B k ] 1 K K X k=1 B k E P [A k ]]t + 1 K K X k=1 A k Z t 0 E P [B k 1 fXs0g ]ds Without loss of generality, let us choose r = 0 and d = 0, then the joint den- sity function of (X t ;A fN;kg ;B fN;kg ;k = 1;:::;K), denoted by f := f(t;x; ~ a; ~ b), is characterized by the PDE, 49 8 > > < > > : @ @t f(t;x; ~ a; ~ b) +div x f(t;x; ~ a; ~ b) 1 K P K k=1 [a k R k 1 fy0g f(t;y; ~ ; ~ )d ~ d~ dy] = 0 f(0;x; ~ a; ~ b) = (x; ~ a; ~ b)2C 1 (R 2K+1 ) (4.17) where f(t;x; ~ a; ~ b) := f(t;x;a 1 ;a 2 ;:::;a K ;b 1 ;b 2 ;:::;b K ) and ~ a; ~ b are notations, i.e. ~ a = (a 1 ;a 2 ;:::;a K ); ~ b = (b 1 ;b 2 ;:::;b K ). (x; ~ a; ~ b) is the pdf of (X 0 ;A fN;kg ;B fN;kg ;k = 1; 2;:::;K). Proof. Without loss of generality, let us choose r = 0 and d = 0. 
Use the same probability space as introduced in Lip case above, and we dene the Lip function ' = 8 > > > > > > < > > > > > > : 1; x 0 x ; 0<x 0; x (4.18) Dene F N;k; t =E P [B N;k ' (X N; t )]; F k; t =E P [B k ' (X t )]; F N;k t :=E P [B N;k 1 fX N t 0g ]; F k t :=E P [B k 1 fXt0g ] jF N;k t F k t jjF N;k t F N;k; t j +jF N;k; t F k; t j +jF k; t F k t j =I +II +III 50 II =jF N;k; t F k; t j is proved in theorem 4.3.1. The proof forIII andI are exactly the same as in theorem 3.4.1, taking into account assumption 2.3.1 and 2.3.2. E P jX N t X t j max(C Lip ;C) K K X k=1 E P [jB N;k B k j +jA N;k A k j +jX N X 0 j]j [ Z t 0 e C 2 C Lip s ds +Crt +C] + Z t 0 C()e L F s ds (4.19) rst x K, let N!1, then let ! 0, We then prove X N t L 1 !X t . The density equation (4.17) can be proved exactly the same as in theorem 4.3.1. Remark 4.4.2. In theorem 4.3.1 and 4.4.1, we proved the strong convergence theory in L 1 . If we do not introduce the common probability space P and dene N = 1 N N X i=1 fx i ;a k i ;b k i ;r i ;k=1;:::;Kg ; = P(X 0 ;A k ;B k ;R;k = 1;:::;K) 1 Using the denition 2.3.9, we can repeat the same idea as in the proof of theorem 4.3.1 and 4.4.1, and get the result in a weak sense, i.e, convergence in distribution. More specically, for the indicator function, we have E P jX N t X t j max(C Lip ;C)W 1 ( N ;)[ Z t 0 e C 2 C Lip s ds +Crt +C] + Z t 0 C()e L F s ds (4.20) 51 4.5 Example Example 3: K = 2. Consider the following network. Assume there is a constant 0 < c << 1 such that c proportion of the nodes are connected to all others, and the left (1c) nodes are either not connected or sparsely connected. After a permutation for index, we can assume the rstcN nodes have connections to all other nodes and the last (1c)N nodes are sparsely connected to each other. We assume the connected edges all have weight is 1. When N ! 
$N\to\infty$, the matrix $E$ has the approximate decomposition $E = a^1b^1+a^2b^2$, where $a^1=(1,\dots,1,0,\dots,0)^T$, $b^1=(1,1,\dots,1)$, $a^2=(0,\dots,0,1,\dots,1)^T$, $b^2=(1,\dots,1,0,\dots,0)$; here $a^1$ and $b^2$ have their first $cN$ entries equal to 1, and $a^2$ has its last $(1-c)N$ entries equal to 1.

Chapter 5
Passing to the limit

5.1 Introduction

The finite decomposition approximation does not always work well. For example, Eikmeier and Gleich (2017) [23] studied large-scale networks and reported the following empirical findings: (1) many networks have a significant power law in the tail of the degree distribution corresponding to the largest-degree vertices, as well as in the singular values of the adjacency matrix and the eigenvalues of the Laplacian matrix; (2) a significant power-law distribution is more likely to occur in the singular values of the adjacency matrix than in the degree distribution. From this empirical study, one can expect that the number of large singular values also goes to infinity as the network size goes to infinity, which means that the finite-rank approximation is generally not exact. We therefore need to extend the model to the infinite-dimensional case. The tables in [23] verify the empirical results [1], [2] above for many real large-scale networks.

5.2 The limiting model

Let us go back and recall the simplest discrete $N$-system model:

$X^i_t = X^i_0+\frac{1}{N}\frac{1}{K}\sum_{k=1}^K a_{i,k}\int_0^t\sum_j b_{j,k}\mathbf{1}_{\{X^j_s\le 0\}}\,ds,\qquad i=1,2,\dots,N$ (5.1)

Intuitively, we can think of the coefficients as values of discrete random variables. Specifically, for fixed $N,K$, we can denote

$X^i_0=x_0(\bar\omega_i),\quad a_{i,k}=A^{N,K}(\bar\omega_i,\tilde\omega_k),\quad b_{i,k}=B^{N,K}(\bar\omega_i,\tilde\omega_k),\quad X^i_t=X^K_t(\bar\omega_i)$ (5.2)

where $i=1,2,\dots,N$ and $k=1,2,\dots,K$. This gives us the hint to define the problem on an abstract probability space as $N,K\to\infty$: one can expect the limiting random variables to be $x_0(\bar\omega)$, $A(\bar\omega,\tilde\omega)$, $B(\bar\omega,\tilde\omega)$, $X_t(\bar\omega)$, defined on some proper probability space.
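The identification (5.2) can be made concrete with a toy computation. The specific choices $A(\bar\omega,\tilde\omega)=\bar\omega\,\tilde\omega$, $B(\bar\omega,\tilde\omega)=\bar\omega+\tilde\omega$, and the uniform grid for $\tilde\omega$ are assumptions made only for this illustration, not part of the thesis: viewing $a_{i,k}=A(\bar\omega_i,\tilde\omega_k)$ and $b_{j,k}=B(\bar\omega_j,\tilde\omega_k)$ as evaluations on a product space, the normalized sum $\frac{1}{K}\sum_k a_{i,k}b_{j,k}$ is a quadrature that converges, as $K\to\infty$, to the expectation over $\tilde\omega$, which is the mechanism by which the $\frac{1}{K}$ sums become conditional expectations in the limiting system.

```python
# Hypothetical coefficient functions, chosen only for this illustration:
# A(w, v) = w * v and B(w, v) = w + v, with the "column" variable v
# playing the role of omega-tilde, uniform on [0, 1].
def A(w, v):
    return w * v

def B(w, v):
    return w + v

def e_K(w_i, w_j, K):
    """Rank-K entry (1/K) * sum_k A(w_i, v_k) * B(w_j, v_k), with v_k on a
    midpoint grid of [0, 1] standing in for K samples of omega-tilde."""
    vs = [(k + 0.5) / K for k in range(K)]
    return sum(A(w_i, v) * B(w_j, v) for v in vs) / K

# As K grows, e_K approaches the limit entry
# E_v[A(w_i, v) B(w_j, v)] = w_i * (w_j * E[v] + E[v^2]) = w_i * (w_j/2 + 1/3).
w_i, w_j = 2.0, 0.7
limit = w_i * (w_j / 2.0 + 1.0 / 3.0)
for K in (10, 100, 1000):
    print(K, abs(e_K(w_i, w_j, K) - limit))
```

The printed errors shrink as $K$ grows, mirroring how the rank-$K$ coefficient structure passes to an expectation in the limiting model below.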
For the $K$-decomposition model, as $N\to\infty$, we have the limiting equation

$X^K_t = X_0+\mu t+r\Big[\frac{1}{K}\sum_{k=1}^K A^k\mathbb{E}_{\bar P}[B^k]-\frac{1}{K}\sum_{k=1}^K B^k\mathbb{E}_{\bar P}[A^k]\Big]t+\frac{1}{K}\sum_{k=1}^K A^k\int_0^t\mathbb{E}_{\bar P}\big[B^k\varphi(X^K_s)\big]\,ds$ (5.3)

which we can rewrite in the form

$\begin{cases}X^K_t(\bar\omega)=X_0(\bar\omega)+\mu t+rR^Kt+\frac{1}{K}\sum_{k=1}^K A^kC^k_t\\ C^k_t=\int_0^t\mathbb{E}_{\bar P}[B^k\varphi(X^K_s)]\,ds,\qquad k=1,2,\dots,K\\ R^K=\frac{1}{K}\sum_{k=1}^K A^k\mathbb{E}_{\bar P}[B^k]-\frac{1}{K}\sum_{k=1}^K B^k\mathbb{E}_{\bar P}[A^k]\end{cases}$ (5.4)

where $(\bar\Omega,\bar{\mathcal{F}},\bar P)$ is the probability space for (5.3) [refer to Lemma 4.2.1 of Chapter 4], and $\varphi$ is a bounded measurable function. Combining this with the idea from (5.2), we can prove that (5.3) indeed converges to the following system (5.5), defined on the probability space $(\Omega,\mathcal{F},P)$ specified below:

$\begin{cases}X_t(\bar\omega)=X_0(\bar\omega)+\mu t+rRt+\mathbb{E}_P\big[A(\bar\omega,\tilde\omega)C_t(\tilde\omega)\mid\bar\omega\big]\\ C_t(\tilde\omega)=\int_0^t\mathbb{E}_P\big[B(\bar\omega,\tilde\omega)\varphi(X_s(\bar\omega))\mid\tilde\omega\big]\,ds\\ R=\mathbb{E}_P\big[A(\bar\omega,\tilde\omega)\mathbb{E}_P[B(\bar\omega,\tilde\omega)\mid\tilde\omega]\mid\bar\omega\big]-\mathbb{E}_P\big[B(\bar\omega,\tilde\omega)\mathbb{E}_P[A(\bar\omega,\tilde\omega)\mid\tilde\omega]\mid\bar\omega\big]\end{cases}$ (5.5)

Here $\Omega=\bar\Omega\times\tilde\Omega$, $\mathcal{F}=\bar{\mathcal{F}}\otimes\tilde{\mathcal{F}}$, $P=\bar P\otimes\tilde P$, where $(\bar\Omega,\bar{\mathcal{F}},\bar P)$ and $(\tilde\Omega,\tilde{\mathcal{F}},\tilde P)$ are the two marginal probability spaces; $A,B\in L^0(\mathcal{F})$, $X_t\in L^0(\bar{\mathcal{F}})$, $C_t\in L^0(\tilde{\mathcal{F}})$, and all are bounded. For convenience, we rewrite (5.5) in the following form:

$\begin{cases}X_t=X_0+\mu t+rRt+\mathbb{E}_P[AC_t\mid\bar{\mathcal{F}}]\\ C_t=\int_0^t\mathbb{E}_P[B\varphi(X_s)\mid\tilde{\mathcal{F}}]\,ds\\ R=\mathbb{E}_P\big[A\,\mathbb{E}_P[B\mid\tilde{\mathcal{F}}]\mid\bar{\mathcal{F}}\big]-\mathbb{E}_P\big[B\,\mathbb{E}_P[A\mid\tilde{\mathcal{F}}]\mid\bar{\mathcal{F}}\big]\end{cases}$ (5.6)

In this chapter we only prove the well-posedness of (5.6); the convergence theory from (5.4) to (5.5) is discussed in the next chapter.

5.3 Lipschitz loss function

We first consider the loss function to be a Lipschitz function with Lipschitz constant $L_\varphi$. We now give the well-posedness result.

Theorem 5.3.1. Consider the probability space $(\Omega,\mathcal{F},P)$ as defined above, and assume $A,B,X_0$ have the uniform bound $L$. Then the following system is well posed: the solution exists and is unique.

$\begin{cases}X_t=X_0+\mu t+rRt+\mathbb{E}_P[AC_t\mid\bar{\mathcal{F}}]\\ C_t=\int_0^t\mathbb{E}_P[B\varphi(X_s)\mid\tilde{\mathcal{F}}]\,ds\\ R=\mathbb{E}_P\big[A\,\mathbb{E}_P[B\mid\tilde{\mathcal{F}}]\mid\bar{\mathcal{F}}\big]-\mathbb{E}_P\big[B\,\mathbb{E}_P[A\mid\tilde{\mathcal{F}}]\mid\bar{\mathcal{F}}\big]\end{cases}$ (5.7)

Proof.
Without loss of generality, assume $r=0$ and $\mu=0$, and consider

$\begin{cases}X_t=X_0+\mathbb{E}_P[AC_t\mid\bar{\mathcal{F}}]\\ C_t=\int_0^t\mathbb{E}_P[B\varphi(X_s)\mid\tilde{\mathcal{F}}]\,ds\end{cases}$ (5.8)

Given initial values $(X^0_t,C^0_t)=(X_0,C_0)$, consider the Picard iteration sequence

$(X^0_t,C^0_t)\to(X^1_t,C^1_t)\to(X^2_t,C^2_t)\to\dots$ (5.9)

with iteration system

$\begin{cases}X^{n+1}_t=X_0+\mathbb{E}_P[AC^n_t\mid\bar{\mathcal{F}}]\\ C^{n+1}_t=\int_0^t\mathbb{E}_P[B\varphi(X^n_s)\mid\tilde{\mathcal{F}}]\,ds\end{cases}$ (5.10)

Denote $\Delta X^{n+1}_t=X^{n+1}_t-X^n_t$ and $\Delta C^{n+1}_t=C^{n+1}_t-C^n_t$. Then

$\begin{cases}\Delta X^{n+1}_t=\mathbb{E}_P[A\,\Delta C^n_t\mid\bar{\mathcal{F}}]\\ |\Delta C^{n+1}_t|\le\int_0^t\mathbb{E}_P\big[|B|\,L_\varphi\,|\Delta X^n_s|\mid\tilde{\mathcal{F}}\big]\,ds\end{cases}$ (5.11)

Taking absolute values on both sides of (5.11) and taking expectations, we have

$\mathbb{E}_P[|\Delta X^{n+1}_t|]\le L\,\mathbb{E}_P[|\Delta C^n_t|],\qquad \mathbb{E}_P[|\Delta C^{n+1}_t|]\le LL_\varphi\int_0^t\mathbb{E}_P[|\Delta X^n_s|]\,ds$
Given probability space ( ;F;P) as dened in (5.6), let F (e !;) :=E P [B1 fX 0 +t+rRt+E P [A(:)j F]0g j ~ F] (5.15) Assume F (e !;) is a Lipschitz function w.r.t. , uniformly for anye !, i.e, jF (e !; 1 )F (e !; 2 )jcE P j 1 2 j (5.16) Theorem 5.4.2. Assume A;B;X 0 have the uniform bound L. The following sys- tem is well-posed and the solution exists and is unique. 8 > > > > > > < > > > > > > : X t =X 0 +t +rRt +E P [AC t j F] C t = R t 0 E P [B1 fXs0g j ~ F]ds R =E P [AE P [Bj ~ F]j F]E P [BE P [Aj ~ F]j F] (5.17) 60 Proof. Without loos of generality, let us assume that r = 0; = 0. Then we consider 8 > > < > > : X t =X 0 +E P [AC t j F] C t = R t 0 E P [B1 fXs0g j ~ F]ds (5.18) Given two dimensional random variable (X 0 t ;C 0 t ) = (X 0 ;C 0 ), consider the Picard iteration sequence (X 0 t ;C 0 t )! (X 1 t ;C 1 t )! (X 2 t ;C 2 t )!::: (5.19) The iteration system is 8 > > < > > : X n+1 t =X 0 +E P [AC n t j F] C n+1 t = R t 0 E P [B1 fX n s 0g j ~ F]ds (5.20) By assumption 5.4.1, we have that C t (e !) = Z t 0 F (e !;C s (:))ds (5.21) Then jC n+1 t C n t j Z t 0 jF (e !;C n1 s (:))F (e !;C n2 s (:))jdsc Z t 0 E P [jC n1 s C n2 s j]ds 61 Denote X n+1 t =X n+1 t X n t and C n+1 t =C n+1 t C n t . We have 8 > > < > > : X n+1 t =E P [AC n t j F] jC n+1 t jc R t 0 E P [jC n1 s j]ds (5.22) Take absolute value on both sides of (5.8) and take expectation, then we have 8 > > < > > : E P [jX n+1 t j]LE P [jC n t j] E P jC n+1 t jc R t 0 E P [jC n1 s j]ds (5.23) All the following proof is exact the same as in theorem 5.3.1, and thus we proved the existence and uniqueness of solution. 62 Chapter 6 Convergence result 6.1 Introduction Recall the K decomposition equation in chapter 5 which is dened on ( ; F; P) [refer to chapter 4 lemma 4.2.1 for the denition of probability space], and let ' be a bounded measurable function. 8 > > > > > > < > > > > > > : X K t ( !) =X 0 ( !) 
$+\;\mu t+rR^Kt+\frac{1}{K}\sum_{k=1}^K A^kC^k_t,\qquad C^k_t=\int_0^t\mathbb{E}_{\bar P}[B^k\varphi(X^K_s)]\,ds,\quad k=1,2,\dots,K,\qquad R^K=\frac{1}{K}\sum_{k=1}^K A^k\mathbb{E}_{\bar P}[B^k]-\frac{1}{K}\sum_{k=1}^K B^k\mathbb{E}_{\bar P}[A^k]$ (6.1)

With a few convergence assumptions on the random variables, we will prove that (6.1) converges to the following system (6.2) on some probability space $(\Omega,\mathcal{F},P)$:

$\begin{cases}X_t=X_0+\mu t+rRt+\mathbb{E}_P[AC_t\mid\bar{\mathcal{F}}]\\ C_t=\int_0^t\mathbb{E}_P[B\varphi(X_s)\mid\tilde{\mathcal{F}}]\,ds\\ R=\mathbb{E}_P\big[A\,\mathbb{E}_P[B\mid\tilde{\mathcal{F}}]\mid\bar{\mathcal{F}}\big]-\mathbb{E}_P\big[B\,\mathbb{E}_P[A\mid\tilde{\mathcal{F}}]\mid\bar{\mathcal{F}}\big]\end{cases}$ (6.2)

where $\Omega=\bar\Omega\times\tilde\Omega$, $\mathcal{F}=\bar{\mathcal{F}}\otimes\tilde{\mathcal{F}}$, $P=\bar P\otimes\tilde P$, and $(\bar\Omega,\bar{\mathcal{F}},\bar P)$, $(\tilde\Omega,\tilde{\mathcal{F}},\tilde P)$ are the marginal spaces. Refer also to (5.5) for another, more intuitive formulation of (6.2).

6.2 Convergence for Lipschitz loss function

Theorem 6.2.1. Let the loss function be a Lipschitz function with Lipschitz constant $C_L$. Let (6.1) be defined on $(\bar\Omega,\bar{\mathcal{F}},\bar P)$. Define the product space $(\Omega,\mathcal{F},P)=(\bar\Omega,\bar{\mathcal{F}},\bar P)\otimes(\tilde\Omega,\tilde{\mathcal{F}},\tilde P)$, where $\tilde\Omega=[0,1]$ and $\tilde P$ is the uniform distribution on $[0,1]$. Assume that $A,B\in L^0(\mathcal{F})$, $X_t\in L^0(\bar{\mathcal{F}})$ and $C_t\in L^0(\tilde{\mathcal{F}})$ are bounded random variables; then (6.2) is well defined on the product space. Assume there exist random variables $(A^K,B^K)$ on $(\Omega,\mathcal{F},P)$ such that
• $P\big((A^K,B^K)(\bar\omega,\tilde\omega)=(A_j,B_j)(\bar\omega)\mid\bar\omega\big)=\frac{1}{K}$ for any $K$ and $j=1,2,\dots,K$;
• $(A^K,B^K)\to(A,B)$, $P$-a.s., as $K\to\infty$.
Then (6.1) converges to (6.2) in the $L^1$ sense, uniformly in $t$.

Proof. Without loss of generality, we assume that $\mu=0$ and $r=0$; the general case is proved similarly. Recall the equations

$\begin{cases}X^K_t(\bar\omega)=X_0(\bar\omega)+\frac{1}{K}\sum_{j=1}^K A^jC^j_t\\ C^j_t=\int_0^t\mathbb{E}_{\bar P}[B^j\varphi(X^K_s)]\,ds,\qquad j=1,2,\dots,K\end{cases}$ (6.3)
For example, we may set $(A^K,B^K)=\sum_{j=0}^{K-1}(A_{j+1},B_{j+1})(\bar\omega)\,\mathbf 1_{[\frac jK,\frac{j+1}K)}(\tilde\omega)$. One can check directly that this pair $(A^K,B^K)$ is bounded and satisfies the probability condition. In fact there can be many different such pairs. We assume $(A^K,B^K)\to(A,B)$, $P$-a.s., as $K\to\infty$.

Step 3: Using the given $(A^K,B^K)$, construct the following system on the probability space $(\Omega,\mathcal F,P)$:
\[
\begin{cases}
\widetilde X^K_t = X_0 + E^P[A^K \widetilde C^K_t\mid\bar{\mathcal F}],\\[2pt]
\widetilde C^K_t = \int_0^t E^P[B^K\varphi(\widetilde X^K_s)\mid\tilde{\mathcal F}]\,ds.
\end{cases}
\tag{6.5}
\]
The proof of well-posedness for this system is exactly the same as for (6.2). It is easy to check that (6.5) is exactly (6.3) if one takes $(A^K,B^K)=\sum_{j=0}^{K-1}(A_{j+1},B_{j+1})(\bar\omega)\,\mathbf 1_{[\frac jK,\frac{j+1}K)}(\tilde\omega)$ and regards $(\bar\Omega,\bar{\mathcal F},\bar P)$ as the marginal distribution of $(\Omega,\mathcal F,P)$. Thus $\widetilde X^K_t = X^K_t$.

Step 4: We prove that
\[
E^P\Big[\sup_{0\le t\le T}\big[|\widetilde X^K_t - X_t| + |\widetilde C^K_t - C_t|\big]\Big]\to 0.
\tag{6.6}
\]
By (6.4) and (6.5),
\[
|X_t-\widetilde X^K_t| \le E^P\big[\,|A-A^K||C_t| + |A^K||C_t-\widetilde C^K_t|\,\big|\,\bar{\mathcal F}\,\big],
\tag{6.7}
\]
\[
|C_t-\widetilde C^K_t| \le \int_0^t E^P\big[|B-B^K||\varphi(X_s)|\,\big|\,\tilde{\mathcal F}\big]\,ds + C_L\int_0^t E^P\big[|B^K||X_s-\widetilde X^K_s|\,\big|\,\tilde{\mathcal F}\big]\,ds.
\tag{6.8}
\]
Let $C$ be a uniform bound for $A$, $B$, $A^K$, $B^K$. Taking the supremum and expectation in (6.8), substituting (6.7) into (6.8), and using the law of iterated expectations, we get
\[
E^P\Big[\sup_{0\le t\le T}|C_t-\widetilde C^K_t|\Big] \le \int_0^T E^P[|B-B^K||\varphi(X_s)|]\,ds + CC_L\int_0^T E^P[|A-A^K||C_s|]\,ds + C^2C_L\int_0^T E^P[|C_s-\widetilde C^K_s|]\,ds.
\tag{6.9}
\]
Using the fact that $E^P[|C_t-\widetilde C^K_t|]\le E^P[\sup_{0\le t\le T}|C_t-\widetilde C^K_t|]$ and the Gronwall inequality, and then letting $K\to\infty$, we get $E^P[\sup_{0\le t\le T}|C_t-\widetilde C^K_t|]\to 0$. Substituting this result into (6.7), we can prove $E^P[\sup_{0\le t\le T}|\widetilde X^K_t-X_t|]\to 0$. Combining the above, we have shown that
\[
E^P\Big[\sup_{0\le t\le T}\big[|\widetilde X^K_t-X_t| + |\widetilde C^K_t-C_t|\big]\Big]\to 0.
\tag{6.10}
\]
Step 5: Since $\widetilde X^K_t=X^K_t$, we have
\[
E^P\Big[\sup_{0\le t\le T}|X^K_t-X_t|\Big] \le E^P\Big[\sup_{0\le t\le T}|X^K_t-\widetilde X^K_t|\Big] + E^P\Big[\sup_{0\le t\le T}|\widetilde X^K_t-X_t|\Big]\to 0.
\]
This proves the result. $\square$

6.3 Convergence for the indicator loss function

We need some regularity conditions when we use the indicator loss function.

Assumption 6.3.1.
With the same probability space $(\Omega,\mathcal F,P)$ and $X_0$, $A$, $B$ as above, denote
\[
\Gamma_1(\tilde\omega,\delta) := \sup_{|\theta(\cdot)|\le C} E^P\big[\,|B|\,\mathbf 1_{\{|X_0+\mu t+rRt+E^P[A\theta(\cdot)\mid\bar{\mathcal F}]|\le\delta\}}\,\big|\,\tilde{\mathcal F}\,\big],
\tag{6.11}
\]
where $C$ is a uniform upper bound for $\theta$, and assume that $\Gamma_1(\tilde\omega,\delta)\to 0$ uniformly in $\tilde\omega$ as $\delta\to 0$.

This is similar to Assumption 2.3.1; its purpose is to make sure that the equation depends continuously on $\delta$. Refer to Assumption 2.3.3 for details.

Without loss of generality, we assume $\mu=0$ and $r=0$; the general case is proved similarly. Consider the following two equations:
\[
\begin{cases}
X^K_t(\bar\omega) = X_0(\bar\omega) + \frac1K\sum_{j=1}^K A_j C^j_t,\\[2pt]
C^j_t = \int_0^t E^{\bar P}[B_j\,\mathbf 1_{\{X^K_s\le 0\}}]\,ds,\qquad j=1,2,\dots,K;
\end{cases}
\tag{6.12}
\]
\[
\begin{cases}
X_t = X_0 + E^P[AC_t\mid\bar{\mathcal F}],\\[2pt]
C_t = \int_0^t E^P[B\,\mathbf 1_{\{X_s\le 0\}}\mid\tilde{\mathcal F}]\,ds.
\end{cases}
\tag{6.13}
\]

Theorem 6.3.2. Let the loss function be the indicator function. Let (6.12) be defined on $(\bar\Omega,\bar{\mathcal F},\bar P)$, and define the product space $(\Omega,\mathcal F,P)=(\bar\Omega,\bar{\mathcal F},\bar P)\otimes(\tilde\Omega,\tilde{\mathcal F},\tilde P)$, where $\tilde\Omega=[0,1]$ and $\tilde P$ is the uniform distribution on $[0,1]$. Assume that $A,B\in L^0(\mathcal F)$, $X_t\in L^0(\bar{\mathcal F})$, and $C_t\in L^0(\tilde{\mathcal F})$ are bounded random variables, and assume Assumptions 2.3.1, 2.3.2, and 5.4.1. Then (6.13) is well defined on the product space. Assume further that there exist random variables $(A^K,B^K)$ on $(\Omega,\mathcal F,P)$ such that

• $P\big((A^K,B^K)(\bar\omega,\tilde\omega)=(A_j,B_j)(\bar\omega)\,\big|\,\bar\omega\big)=\frac1K$ for every $K$ and $j=1,2,\dots,K$;

• $(A^K,B^K)\to(A,B)$, $P$-a.s., as $K\to\infty$.

Then (6.12) converges to (6.13) in the $L^1$ sense, uniformly in $t$.

Proof. We split the proof into several steps.

Step 1: The well-posedness of (6.12) is immediate from Assumption 2.3.2, and the well-posedness of (6.13) was proved in Theorem 5.4.2 using Assumption 5.4.1.

Step 2: Consider the Lipschitz function
\[
\varphi_\delta(x) =
\begin{cases}
1, & x\le 0,\\
1-\frac x\delta, & 0<x\le\delta,\\
0, & x\ge\delta.
\end{cases}
\tag{6.14}
\]
Applying this function to (6.1) and (6.2), denote the corresponding solutions by
\[
(X^{K,\delta}_t(\bar\omega);C^{1,\delta}_t,C^{2,\delta}_t,\dots,C^{K,\delta}_t)\qquad\text{and}\qquad(X^\delta_t(\bar\omega),C^\delta_t(\tilde\omega)).
\tag{6.15}
\]
Applying Theorem 6.2.1, we obtain $\widetilde X^{K,\delta}_t$, $X^\delta_t$, $\widetilde C^{K,\delta}_t$, $C^\delta_t$ satisfying
\[
E^P\Big[\sup_{0\le t\le T}\big[|\widetilde X^{K,\delta}_t-X^\delta_t|+|\widetilde C^{K,\delta}_t-C^\delta_t|\big]\Big]\to 0.
\tag{6.16}
\]
Step 3: Using Assumptions 2.3.1, 2.3.2 and Lemma 2.3.3, it is straightforward to check that, for any $K$ and $\bar\omega$,
\[
|X^{K,\delta}_t-X^K_t| \le C(T)\Big(\int_0^t\Gamma(\delta)\,ds\Big)e^{L_F t}\to 0,
\tag{6.17}
\]
and, as $\delta\to 0$, we further have
\[
(X^{K,\delta}_t(\bar\omega);C^{1,\delta}_t,\dots,C^{K,\delta}_t)\to(X^K_t(\bar\omega);C^1_t,\dots,C^K_t)\quad\text{uniformly}.
\tag{6.18}
\]
Step 4: Using Assumption 6.3.1, it is straightforward to check that
\[
E^P\Big[\sup_{0\le t\le T}|X^\delta_t-X_t|\Big] \le C(T)\,E^P[\Gamma_1(\tilde\omega,\delta)]\to 0.
\tag{6.19}
\]
Step 5: Since $\widetilde X^{K,\delta}_t=X^{K,\delta}_t$, we have
\[
E^P\Big[\sup_{0\le t\le T}|X^K_t-X_t|\Big] \le E^P\Big[\sup_{0\le t\le T}|X^K_t-X^{K,\delta}_t|\Big] + E^P\Big[\sup_{0\le t\le T}|X^{K,\delta}_t-\widetilde X^{K,\delta}_t|\Big] + E^P\Big[\sup_{0\le t\le T}|\widetilde X^{K,\delta}_t-X^\delta_t|\Big] + E^P\Big[\sup_{0\le t\le T}|X^\delta_t-X_t|\Big].
\tag{6.20}
\]
Combining the above, and letting $K\to\infty$ and $\delta\to 0$, we get $E^P[\sup_{0\le t\le T}|X^K_t-X_t|]\to 0$. This proves the result. $\square$

Chapter 7
The randomized model

7.1 Intuition

In the randomized setting, we view each observed network as one realization of a random network with given connection probabilities among the nodes. Consider the following model construction:
\[
\tilde E\ (\text{random connection}) \xrightarrow{\ P\ (\text{connection probability})\ } E\ (\text{realization}),
\]
where $\tilde E$ is a random matrix whose entries are Bernoulli random variables constructed from the $e_{i,j}$; $\tilde E$ represents the unknown true network structure, and $E$ is a realization. The randomized model replaces the asset-debt matrix $E$ by a random matrix $\Lambda=(\lambda_{i,j})$, where $\lambda_{i,j}$ is a Bernoulli random variable constructed from $e_{i,j}$.

Remark 7.1.1. In the following, let us assume $e_{i,j}\in\{0,1\}$ for all $i,j$. We then have out-degree $D^-(i)=c^-_i=\sum_j e_{j,i}$ and in-degree $D^+(i)=c^+_i=\sum_j e_{i,j}$, and it is clear that the total in-degree and out-degree are equal: $D:=\sum_i D^+_i=\sum_i D^-_i$.

Consider the model
\[
X^i_t = X^i_0 + r[c^+_i-c^-_i]\,t - \int_0^t\sum_j e_{i,j}\,\varphi(X^j_s)\,ds,
\]
and replace $e_{i,j}$ by a Bernoulli random variable $\lambda_{i,j}$ with $p_{i,j}:=\frac{D^+_iD^-_j}{D}$ (without taking the minimum with $1$, for simplicity; in fact, for a sparse matrix, $p_{i,j}\to 0$ as $n\to\infty$).
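As a minimal numerical sketch of this randomization (the sparse matrix $E$ below is illustrative, not data from the thesis), one can verify that the choice $p_{i,j}=D^+_iD^-_j/D$ preserves the expected in- and out-degrees exactly, since $\sum_j p_{i,j}=D^+_i\sum_j D^-_j/D=D^+_i$:

```python
import numpy as np

rng = np.random.default_rng(1)

# A small illustrative 0-1 adjacency (asset-debt) matrix E.
N = 200
E = (rng.uniform(size=(N, N)) < 0.05).astype(int)
np.fill_diagonal(E, 0)

Dplus = E.sum(axis=1)    # D^+_i = sum_j e_{i,j}
Dminus = E.sum(axis=0)   # D^-_j = sum_i e_{i,j}
D = Dplus.sum()          # total degree

# Connection probabilities p_{i,j} = D^+_i D^-_j / D (not capped at 1,
# as in the text; harmless here because the matrix is sparse).
P = np.outer(Dplus, Dminus) / D

# One realization of the random network E~.
E_tilde = (rng.uniform(size=(N, N)) < P).astype(int)

# The randomization preserves expected degrees exactly:
assert np.allclose(P.sum(axis=1), Dplus)
assert np.allclose(P.sum(axis=0), Dminus)
```

The realization `E_tilde` is again a sparse 0-1 matrix with the same expected degree sequence as `E`, which is the sense of "close connection" described above.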
That is, we change $X^i_t$ to $\tilde X^i_t$:
\[
\tilde X^i_t = X^i_0 + r[c^+_i-c^-_i]\,t - \int_0^t\sum_j\lambda_{i,j}\,\varphi(\tilde X^j_s)\,ds.
\]
We are interested in the comparison of the random system and the deterministic system,
\[
X^i_t = X^i_0 + r[c^+_i-c^-_i]\,t - \int_0^t\sum_j\lambda_{i,j}\,\varphi(X^j_s)\,ds,
\]
\[
Y^i_t = Y^i_0 + r[c^+_i-c^-_i]\,t - \int_0^t\sum_j p_{i,j}\,\varphi(Y^j_s)\,ds,
\]
where $i=1,2,\dots,N$, $\varphi$ is a bounded measurable function, and $X^i_0=Y^i_0$ for $i=1,2,\dots,N$. We want to understand the difference between the two systems; we will see that the answer strongly depends on how we choose the parameters. In some cases we can prove that the random system converges to the deterministic system, while in other cases they have very different behaviours. For simplicity, we assume the interest rate $r=0$ and the contagion index equal to $1$, and, without loss of generality, we replace the minus sign in front of the integral by a plus sign. The model then has the simple form
\[
\tilde X^i_t = X^i_0 + \int_0^t\sum_j\lambda_{i,j}\,\varphi(\tilde X^j_s)\,ds.
\]
Let us review the setting: for a given asset-debt matrix $E$, we have the corresponding triples $(X^i_0,D^+_i,D^-_i)_{1\le i\le N}$, and then a random matrix $\tilde E$ whose entry $\lambda_{i,j}$ is a Bernoulli random variable taking the value $1$ with probability $p_{i,j}:=\frac{D^+_iD^-_j}{D}$ and $0$ with probability $1-\frac{D^+_iD^-_j}{D}$. The basic questions are: (i) What is the connection between $\tilde E$ and $E$? (ii) Are the triples $(X^i_0,D^+_i,D^-_i)_{1\le i\le N}$ permutation invariant?

For the first question, we have some simulation observations. When $E$ is a sparse $0$-$1$ matrix, $\tilde E$ and $E$ are closely connected in the following sense:

• $\tilde E$ is also a sparse $0$-$1$ matrix;

• the corresponding singular values (ranked from largest to smallest) of $E$ and $\tilde E$ are close in two ways:
\[
\sum_i\sigma_i\approx\sum_i\tilde\sigma_i,\qquad \frac{\sum_i|\sigma_i-\tilde\sigma_i|}{\sum_i\sigma_i}\to 0;
\]
• $\tilde E$ has a degree-distribution structure similar to that of $E$, e.g., the scale-free properties.

We can prove:

Lemma 7.1.2. The triples $(X^i_0,D^+_i,D^-_i)_{1\le i\le n}$ are permutation invariant.

Proof.
Let $\pi$ be a permutation. Assume that $(X^i_0,D^+_i,D^-_i)_{1\le i\le n}$ generates $E=E(\lambda_{i,j})$ and $(X^{\pi(i)}_0,D^+_{\pi(i)},D^-_{\pi(i)})_{1\le i\le n}$ generates $\tilde E=(\tilde\lambda_{i,j})$. We have the two corresponding measures
\[
\rho^N(t) := \frac1N\sum_i\delta_{(x^i_t,\,D^+_i,\,D^-_i)},\qquad \tilde\rho^N(t) := \frac1N\sum_i\delta_{(x^{\pi(i)}_t,\,D^+_{\pi(i)},\,D^-_{\pi(i)})},
\]
and two default processes defined as
\[
Y_t = \frac1N\sum_{i=1}^N\varphi(X^i_t),\qquad \tilde Y_t = \frac1N\sum_{i=1}^N\varphi(\tilde X^i_t).
\]
We can show that $Y_t\sim\tilde Y_t$. In fact, we need to show that
\[
(X^{\pi(1)}_t,\dots,X^{\pi(N)}_t)\sim(\tilde X^1_t,\dots,\tilde X^N_t),
\]
where $\sim$ means that the two vectors have the same joint distribution. The two systems of equations are
\[
X^{\pi(i)}_t = x^{\pi(i)}_0 + \int_0^t\sum_j\lambda_{\pi(i),j}\,\varphi(X^j_s)\,ds,\qquad i=1,2,\dots,N,
\]
\[
\tilde X^i_t = \tilde x^i_0 + \int_0^t\sum_j\tilde\lambda_{i,j}\,\varphi(\tilde X^j_s)\,ds,\qquad i=1,2,\dots,N,
\]
where $\tilde\lambda_{i,j}$ is generated by $(\tilde D^+_i,\tilde D^-_i)_{1\le i\le n}$, while the latter is generated by the matrix $E$ permuted by $\pi$. Note that
\[
X^{\pi(i)}_t = x^{\pi(i)}_0 + \int_0^t\sum_j\lambda_{\pi(i),j}\,\varphi(X^j_s)\,ds = x^{\pi(i)}_0 + \int_0^t\sum_j\lambda_{\pi(i),\pi(j)}\,\varphi(X^{\pi(j)}_s)\,ds.
\tag{7.1}
\]
By definition,
\[
\tilde p_{i,j} = \frac{\tilde D^+_i\tilde D^-_j}{D} = \frac{D^+_{\pi(i)}D^-_{\pi(j)}}{D} = p_{\pi(i),\pi(j)},\qquad\text{so}\qquad \tilde\lambda_{i,j}\sim\lambda_{\pi(i),\pi(j)}.
\]
By the uniqueness of solutions of the SDEs, the two systems of equations above have the same distribution:
\[
(X^{\pi(1)}_t,\dots,X^{\pi(N)}_t)\sim(\tilde X^1_t,\dots,\tilde X^N_t).\qquad\square
\]

7.2 Non-i.i.d. scale-free random graphs

In the following, we consider the special case where the contagion index equals $1$, $r=0$, $\varphi(x)=\mathbf 1_{\{x\le 0\}}$, and $\beta_{i,j}$, $p_{i,j}$ are uniformly bounded by a constant $C$. We then have the random default model
\[
X^i_t = X^i_0 - \int_0^t\sum_j\lambda_{i,j}\,\varphi(X^j_s)\,ds,
\]
and assume that
\[
P\Big(\lambda_{i,j}=\frac{\beta_{i,j}}{N^\alpha}\Big)=\frac{p_{i,j}}{N^{1-\alpha}},\qquad P(\lambda_{i,j}=0)=1-\frac{p_{i,j}}{N^{1-\alpha}},
\]
where $\alpha\in(0,1]$.
Consider the system
\[
Y^i_t = Y^i_0 - \frac1N\int_0^t\sum_j\beta_{i,j}\,p_{i,j}\,\varphi(Y^j_s)\,ds,
\]
where $i=1,2,\dots,N$ and $X^i_0=Y^i_0$, with the constraints
\[
E\sum_j\lambda_{i,j} = \sum_j\frac{\beta_{i,j}p_{i,j}}{N} = c^+_i,\qquad E\sum_i\lambda_{i,j} = \sum_i\frac{\beta_{i,j}p_{i,j}}{N} = c^-_j,
\]
\[
E\sum_j\mathbf 1_{\{\lambda_{i,j}\ne 0\}} = \sum_j P(\lambda_{i,j}\ne 0) = \sum_j\frac{p_{i,j}}{N^{1-\alpha}} = D^+_i,\qquad E\sum_i\mathbf 1_{\{\lambda_{i,j}\ne 0\}} = \sum_i P(\lambda_{i,j}\ne 0) = \sum_i\frac{p_{i,j}}{N^{1-\alpha}} = D^-_j,
\]
where we assume that $c^+_i$, $c^-_i$ are uniformly bounded and $D^+_i$, $D^-_i$ are $O(N^\alpha)$. A special solution of the constraints is
\[
p_{i,j} = \frac{D^+_iD^-_j}{D}\,N^{1-\alpha}\qquad\text{and}\qquad \beta_{i,j} = \frac{c^+_i}{D^+_i}\,\frac{c^-_j}{D^-_j}\,\frac Dc\,N^{\alpha},
\]
where $D=\sum_iD^+_i=\sum_jD^-_j$ and $c=\sum_ic^+_i=\sum_jc^-_j$; one can check that $p_{i,j}$ and $\beta_{i,j}$ are uniformly bounded.

Remark 7.2.1. In the above formulation, the random graph is given by $P(\lambda_{i,j}=\beta_{i,j}/N^\alpha)=p_{i,j}/N^{1-\alpha}$, $P(\lambda_{i,j}=0)=1-p_{i,j}/N^{1-\alpha}$, where $\alpha\in(0,1]$. It is important to notice that $\alpha=0$ gives an essentially different model, which in general does not have a convergence theory. The reason is that we need some kind of "averaging property" to cancel the randomness, similar to the law of large numbers. Even a small $\alpha$, $1\gg\alpha>0$, can cancel the randomness, but $\alpha=0$ makes the convergence generally impossible, even when $\varphi$ is simple.

7.3 Convergence result

Lemma 7.3.1. For any bounded, uniformly Lipschitz continuous function $\varphi$,
\[
E\Big[\frac1N\sum_i|X^i_t-Y^i_t|^2\Big]\to 0.
\]
Proof.
Denote $Z^i_t:=X^i_t-Y^i_t$. Using the chain rule,
\[
|Z^i_t|^2 = -2\int_0^t Z^i_s\Big(\sum_j\lambda_{i,j}\varphi(X^j_s)-\sum_j\frac{\beta_{i,j}p_{i,j}}{N}\varphi(Y^j_s)\Big)ds
= -2\int_0^t Z^i_s\Big(\sum_j\lambda_{i,j}\big(\varphi(X^j_s)-\varphi(Y^j_s)\big)+\sum_j\Big(\lambda_{i,j}-\frac{\beta_{i,j}p_{i,j}}{N}\Big)\varphi(Y^j_s)\Big)ds.
\tag{7.2}
\]
It is easy to check that
\[
|Z^i_t|^2 \le \int_0^t\Big(\sum_j\Big(\lambda_{i,j}-\frac{\beta_{i,j}p_{i,j}}{N}\Big)\varphi(Y^j_s)\Big)^2ds + \int_0^t\sum_j\tilde\lambda_{i,j}|Z^j_s|^2\,ds,
\tag{7.3}
\]
where
\[
\tilde\lambda_{i,j}=\lambda_{i,j}\ \ \text{if } i\ne j,\qquad \tilde\lambda_{i,i}=\sum_j\lambda_{i,j}+\lambda_{i,i}+1\ \ \text{if } i=j.
\]
When $\varphi$ is bounded by $C$, define
\[
\eta^i_s := \Big(\sum_j\Big(\lambda_{i,j}-\frac{\beta_{i,j}p_{i,j}}{N}\Big)\varphi(Y^j_s)\Big)^2;
\]
then the $\eta^i_s$ are nonnegative independent random variables satisfying $E\,\eta^i_s\le\frac{C}{N^{\alpha}}$. Inequality (7.3) then has the form
\[
|Z^i_t|^2 \le \int_0^t\eta^i_s\,ds + \int_0^t\sum_j\tilde\lambda_{i,j}|Z^j_s|^2\,ds.
\tag{7.4}
\]
By the Gronwall inequality,
\[
|Z_t|^2 \le \int_0^t e^{\tilde\Lambda(t-s)}\eta_s\,ds,
\]
where $|Z_t|^2=(|Z^1_t|^2,|Z^2_t|^2,\dots,|Z^N_t|^2)^T$, $\eta_s=(\eta^1_s,\eta^2_s,\dots,\eta^N_s)^T$, and $\tilde\Lambda=(\tilde\lambda_{i,j})$. It only remains to show that
\[
E\,\frac1N(1,1,\dots,1)\int_0^t e^{\tilde\Lambda(t-s)}\eta_s\,ds\to 0.
\tag{7.5}
\]
For simplicity, we consider only the term with $t-s=1$, and, because $\varphi$ is bounded, it is enough to take $\varphi(Y^j_s)=1$. For $k\ge 1$ and $i,j,j_1,j_2,\dots,j_{k-1},l_1,l_2\in\{1,\dots,N\}$, define
\[
I(k,N) := \frac1N\sum_{i,j,j_1,\dots,j_{k-1},l_1,l_2}E\Big[\tilde\lambda_{i,j_1}\tilde\lambda_{j_1,j_2}\cdots\tilde\lambda_{j_{k-1},j}\Big(\lambda_{j,l_1}-\frac{\beta_{j,l_1}p_{j,l_1}}{N}\Big)\Big(\lambda_{j,l_2}-\frac{\beta_{j,l_2}p_{j,l_2}}{N}\Big)\Big].
\]
A simple calculation shows that, in order to have (7.5), it is enough to prove that
\[
|I(k,N)|\le\frac{C\,3^k}{N^{\alpha}}
\]
holds for a uniform constant $C$, for all $k\in\mathbb N$, $N\in\mathbb N$. Recalling the definition of $\tilde\lambda_{i,j}$, a simple calculation gives
\[
I(k,N) = \frac1N\sum E\Big[\lambda_{i,j_1}\tilde\lambda_{i,j_2}\cdots\tilde\lambda_{j_{k-1},j}(\cdots)(\cdots)\Big] + \frac1N\sum E\Big[\lambda_{i,j_1}\tilde\lambda_{j_1,j_2}\cdots\tilde\lambda_{j_{k-1},j}(\cdots)(\cdots)\Big] + I(k-1,N).
\tag{7.6}
\]
Note that the third term is exactly $I(k-1,N)$, while the first two terms have the same index structure as $I(k,N)$ but with $\tilde\lambda_{i,j_1}$ replaced by $\lambda_{i,j_1}$.
For each term above, we can similarly discuss whether $i=j_1$ or $j_1=j_2$, and each term splits into another three terms. This procedure can be carried out for at most $k$ steps, and we finally obtain at most $3^k$ terms whose sum is $I(k,N)$, each of the form
\[
\frac1N\sum E\Big[\lambda_{i,j_{1,1}}\lambda_{i,j_{1,2}}\cdots\lambda_{i,j_{1,s_1}}\lambda_{j_{1,s_1},j_{2,1}}\lambda_{j_{1,s_1},j_{2,2}}\cdots\lambda_{j_{1,s_1},j_{2,s_2}}\cdots\lambda_{j_{m,s_m},j}\Big(\lambda_{j,l_1}-\frac{\beta_{j,l_1}p_{j,l_1}}{N}\Big)\Big(\lambda_{j,l_2}-\frac{\beta_{j,l_2}p_{j,l_2}}{N}\Big)\Big].
\tag{7.7}
\]
Note that the length of the product $\lambda_{i,j_{1,1}}\cdots\lambda_{j_{m,s_m},j}$ inside the expectation ranges from $0$ to $k$, and any two factors $\lambda_{\cdot,\cdot}$ differ in at least one index. We define $J(k,N)$ to be the set containing the at most $3^k$ terms above, and it is enough to prove that $|\eta|\le\frac C{N^{\alpha}}$ for every $\eta\in J(k,N)$.

We now show this by induction. We have to prove that every $\eta_s\in J(k,N)$ satisfies
\[
|\eta_s| = \Big|\frac1N\sum E\Big[\lambda_{i,j_{1,1}}\cdots\lambda_{j_{m,s_m},j}\Big(\lambda_{j,l_1}-\frac{\beta_{j,l_1}p_{j,l_1}}{N}\Big)\Big(\lambda_{j,l_2}-\frac{\beta_{j,l_2}p_{j,l_2}}{N}\Big)\Big]\Big|\le\frac C{N^{\alpha}},
\]
where $s$ is the length of the product $\lambda_{i,j_{1,1}}\cdots\lambda_{j_{m,s_m},j}$. It is enough to prove that
\[
\gamma_s := \Big|\frac1N\sum E\big[\lambda_{i,j_{1,1}}\cdots\lambda_{j_{m,s_m},j}\big]\Big|\le C,
\tag{7.8}
\]
\[
\kappa_s := \Big|\frac1N\sum E\Big[\lambda_{i,j_{1,1}}\cdots\lambda_{j_{m,s_m},j}\,\lambda_{j,l_1}\Big(\lambda_{j,l_2}-\frac{\beta_{j,l_2}p_{j,l_2}}{N}\Big)\Big]\Big|\le\frac C{N^{\alpha}};
\tag{7.9}
\]
the two bounds combined then show that $|\eta_s|\le\frac C{N^{\alpha}}$. We prove the results for $\gamma_s$ and $\kappa_s$ by induction on the index $s$, and we first consider $\gamma_s$. To make it clearer, let $K(k,N)$ be the collection of all the forms $\gamma_s$ above with $1\le s\le k$, and consider the disjoint partition $K(k,N)=\cup_{l=0}^k S_l$, where $S_l$ is the set of terms of the form (7.7) for which the length of the product $\lambda_{i,j_{1,1}}\cdots\lambda_{j_{m,s_m},j}$ inside the expectation is $l$.
Let us discuss case by case. For $s=1$,
\[
\gamma_1 = \frac1N\sum_{i,j}E[\lambda_{i,j}],
\]
which is bounded by $C$. For $s=2$, we have two cases,
\[
\frac1N\sum_{i,j,j_1}E[\lambda_{i,j_1}\lambda_{i,j}]\qquad\text{or}\qquad \frac1N\sum_{i,j,j_1}E[\lambda_{i,j_1}\lambda_{j_1,j}],
\tag{7.10}
\]
both of which are bounded by $C$. Let us assume that the result holds for the sets $S_l$ with $0\le l\le p-1$, and prove it for the case $l=p$, where $1\le p\le k$. Take any form $\gamma_p\in S_p$,
\[
\gamma_p = \frac1N\sum E\big[\lambda_{i,j_{1,1}}\cdots\lambda_{j_{m,s_m},j}\big].
\]
Case 1: If $\lambda_{i,j_{1,1}}$ is equal to one of the $p-1$ factors after it, say $\lambda_{m,n}$, then replacing $\lambda_{i,j_{1,1}}$ by $\lambda_{m,n}$ produces the factor $\lambda^2_{m,n}$. Since the whole expression inside the expectation is nonnegative, it does not exceed the expression obtained by replacing $\lambda^2_{m,n}$ again by $\lambda_{m,n}$, and in this way we construct from $\gamma_p$ a form $\tilde\gamma_{p-1}$ with the property
\[
\gamma_p\le\tilde\gamma_{p-1}.
\tag{7.11}
\]
By induction, $\tilde\gamma_{p-1}\in S_{p-1}$, so $\tilde\gamma_{p-1}\le C$ and thus $\gamma_p\le C$.

Case 2: If $\lambda_{i,j_{1,1}}$ is independent of the $p-1$ factors after it, then
\[
\gamma_p = \frac1N\sum E[\lambda_{i,j_1}]\,E[(p-1)\ \text{factors}] = \tilde\gamma_{p-1}\in S_{p-1}.
\tag{7.12}
\]
We can then conclude the boundedness of $\gamma_p$, and induction shows that, for any $k,N$, the set $K(k,N)$ consists of uniformly bounded elements.

As for the form
\[
\kappa_k = \frac1N\sum E\Big[\lambda_{i,j_{1,1}}\cdots\lambda_{j_{m,s_m},j}\,\lambda_{j,l_1}\Big(\lambda_{j,l_2}-\frac{\beta_{j,l_2}p_{j,l_2}}{N}\Big)\Big],
\tag{7.13}
\]
the method is similar; the basic idea is to consider whether $\lambda_{j,l_2}$ is equal to some of the previous factors.

Case 1: If not, then $\kappa_k=0$.

Case 2: If yes, say there are $m$ factors equal to $\lambda_{j,l_2}$, then we group them together with $(\lambda_{j,l_2}-\frac{\beta_{j,l_2}p_{j,l_2}}{N})$ and rewrite
\[
\kappa_k = \frac1N\sum E\Big[\lambda_{i,j_{1,1}}\cdots\lambda_{j_{m,s_m},j}\,\lambda_{j,l_1}\Big]\,E\Big[\lambda^m_{j,l_2}\Big(\lambda_{j,l_2}-\frac{\beta_{j,l_2}p_{j,l_2}}{N}\Big)\Big].
\tag{7.14}
\]
The first part contains only $k-m$ factors and is bounded by the result for $\gamma_{k-m}$; the second part is always bounded by $\frac C{N^{\alpha}}$ for any $m>0$. Combining the above, we conclude that $|\eta|\le\frac C{N^{\alpha}}$ for any $\eta\in J(k,N)$, and then
\[
|I(k,N)|\le\frac{C\,3^k}{N^{\alpha}},
\]
which proves the lemma. $\square$
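A small Monte Carlo experiment illustrates the convergence statement in the simplest setting $\alpha=1$ with constant $\beta_{i,j}=\beta$ and $p_{i,j}=p$ (these constants, the time grid, and the loss function are all illustrative choices, not parameters from the thesis); the mean-square gap between the random system $X$ and the deterministic system $Y$ shrinks as $N$ grows.

```python
import numpy as np

rng = np.random.default_rng(2)

def sq_gap(N, T=1.0, steps=50, p=0.5, beta=1.0):
    """Euler scheme for the random system X and the deterministic
    system Y with alpha = 1, beta_{i,j} = beta, p_{i,j} = p, and a
    bounded Lipschitz loss function phi.  Returns (1/N) sum |X-Y|^2."""
    phi = lambda x: np.clip(-x, 0.0, 1.0)
    # lambda_{i,j} = beta / N with probability p, else 0  (alpha = 1).
    lam = beta / N * (rng.uniform(size=(N, N)) < p)
    x0 = rng.uniform(-1.0, 1.0, size=N)
    X, Y = x0.copy(), x0.copy()
    dt = T / steps
    for _ in range(steps):
        X = X - dt * lam @ phi(X)                                  # random kernel
        Y = Y - dt * (beta * p / N) * phi(Y).sum() * np.ones(N)    # averaged kernel
    return np.mean((X - Y) ** 2)

gaps = [np.mean([sq_gap(N) for _ in range(20)]) for N in (25, 100, 400)]
assert gaps[-1] < gaps[0]   # mean-square gap shrinks as N grows
```

Here $E\lambda_{i,j}=\beta p/N$ matches the deterministic kernel exactly, so the gap is driven only by the fluctuations $\lambda_{i,j}-E\lambda_{i,j}$, whose variance is of order $1/N$, in line with the "averaging property" discussed in Remark 7.2.1.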
Now, for the special choice of $p_{i,j}$ and $\beta_{i,j}$,
\[
p_{i,j}\,\beta_{i,j} = \frac{D^+_iD^-_j}{D}\,N^{1-\alpha}\cdot\frac{c^+_i}{D^+_i}\,\frac{c^-_j}{D^-_j}\,\frac Dc\,N^{\alpha} = \frac{c^+_i\,c^-_j\,N}{c}.
\]

Lemma 7.3.2. When $\varphi(x)=\mathbf 1_{\{x\le 0\}}$, we have
\[
E\Big[\frac1N\sum_i|X^i_t-Y^i_t|^2\Big]\to 0.
\]
Proof. We define the Lipschitz functions
\[
\varphi_1(x)=\begin{cases}1,&x\le-\epsilon,\\ -\frac x\epsilon,&-\epsilon<x<0,\\ 0,&x\ge 0,\end{cases}\qquad
\varphi_2(x)=\begin{cases}1,&x\le 0,\\ 1-\frac x\epsilon,&0<x\le\epsilon,\\ 0,&x\ge\epsilon.\end{cases}
\]
Applying the two Lipschitz functions in the previous proof, define $Z^{i,\epsilon,k}_t=X^{i,\epsilon,k}_t-Y^{i,\epsilon,k}_t$ for $k=1,2$. A simple calculation shows that
\[
X^{i,\epsilon,2}_t\le X^i_t\le X^{i,\epsilon,1}_t,\qquad Y^{i,\epsilon,2}_t\le Y^i_t\le Y^{i,\epsilon,1}_t.
\]
One can verify that $|Z^i_t|\le\max\{|Z^{i,\epsilon,1}_t|,|Z^{i,\epsilon,2}_t|\}+Y^{i,\epsilon,1}_t-Y^{i,\epsilon,2}_t$, and then
\[
|Z^i_t|^2\le 2\max\{|Z^{i,\epsilon,1}_t|^2,|Z^{i,\epsilon,2}_t|^2\}+2(Y^{i,\epsilon,1}_t-Y^{i,\epsilon,2}_t)^2.
\]
It is easy to see that
\[
\frac1N\sum_i(Y^{i,\epsilon,2}_t-Y^{i,\epsilon,1}_t)^2\le\frac1N\sum_i\big[(Y^{i,\epsilon,1}_t)^2-(Y^{i,\epsilon,2}_t)^2\big]\to 0,
\]
first letting $N\to\infty$ and then $\epsilon\to 0$; the proof is simple and omitted. $\square$

7.4 Example

Multiple centers with random connections. Let the network have $m$ centers which are fully connected with all other nodes, while the remaining $N-m$ nodes are randomly connected with each other with $p=N^{-r}$. The expected number of connections is $E(\operatorname{tr}(EE^T))=mN+(N-m)cN^{1-r}$.

Case a: $m=N$. This corresponds to $K=1$, the fully connected case.

Case b: $\frac mN\to c>0$ and $r>0$, a non-trivial case, with the decomposition $E=E_1+\Delta E$, where $e^1_{i,j}=1$ for $i$ or $j\le m$ and $\Delta E$ is the error matrix, which is sparsely randomly connected. This is essentially the same example as in Chapter 4.

Case c: $m$ is constant and $r=0$. This gives a purely densely connected random network, with expected number of connections $E(\operatorname{tr}(EE^T))\approx N^2$; it corresponds to the randomized model of this chapter in the special case $\alpha=1$. Recall
\[
X^i_t = x^i_0 + r_i t - \int_0^t\sum_j\lambda_{i,j}\,\varphi(X^j_s)\,ds,
\]
with
\[
P\Big(\lambda_{i,j}=\frac{\beta_{i,j}}{N^\alpha}\Big)=\frac{p_{i,j}}{N^{1-\alpha}},\qquad P(\lambda_{i,j}=0)=1-\frac{p_{i,j}}{N^{1-\alpha}},
\]
where $\alpha\in(0,1]$, and consider the system
\[
Y^i_t = y^i_0 + r_i t - \frac1N\int_0^t\sum_j\beta_{i,j}\,p_{i,j}\,\varphi(Y^j_s)\,ds,
\]
where $i=1,2,\dots,N$ and $X^i_0=Y^i_0$. In our setting we have $\alpha=1$, $\beta_{i,j}=1$, $p_{i,j}=p$ for $i,j>m$, and $p_{i,j}=1$ for $i,j\le m$. Our model in this case is essentially $K=1$.
Case d: $\frac mn\to c>0$ and $r=0$; this is similar to Case c.