PANEL DATA FORECASTING AND APPLICATION TO EPIDEMIC DISEASE

by Wei Xie

A Dissertation Presented to the FACULTY OF THE USC GRADUATE SCHOOL, UNIVERSITY OF SOUTHERN CALIFORNIA, in Partial Fulfillment of the Requirements for the Degree DOCTOR OF PHILOSOPHY (ECONOMICS)

August 2016

Copyright 2016 Wei Xie

Dedication

To my mom.

Acknowledgments

I am deeply grateful to my dissertation committee members, Dr. Dana Goldman, Dr. Daniel McFadden, Dr. Cheng Hsiao, Dr. Robert Dekle, and Dr. John Romley, for being very kind, encouraging, helpful, and always supportive throughout my PhD study, research, qualifying exam, dissertation defense, and the job market. They are the greatest mentors, as they not only taught me about work and study but also guided me in my career and life. I believe that I am one of the luckiest people in the world to have had the honor of doing PhD research under their advice, and I know that completing my PhD degree would not have been possible without their encouragement and help.

I am especially indebted to my dissertation committee chair Dr. Dana Goldman and to Dr. Daniel McFadden for generously forgiving me when I made mistakes and lending me extra help when I met difficulties in my work. They also impressed me deeply with their scientific rigor, countless resourceful ideas, and penetrating insights. Their greatness will always be remembered and will continue to influence the rest of my life.

In addition, I also wish to thank Dr. Hashem Pesaran, Dr. Geert Ridder, Dr. Roger Moon, Dr. Harrison Cheng, and Dr. Jianping Zhou for their help and advice during my PhD years. Last but not least, I wish to thank my mom, whose optimistic attitude towards life and people influences my entire life.

Contents

Dedication
Acknowledgments
List of Tables
List of Figures
Abstract
1 Review of Panel Data Forecasting
  1.1 Models and Estimations
    1.1.1 Linear Models
    1.1.2 Nonlinear Models
  1.2 Forecasts
    1.2.1 BLUP
    1.2.2 Forecast Evaluation
    1.2.3 Combine Forecasts
2 Spatial Panel Vector Autoregression
  2.1 Introduction
  2.2 Spatial Panel Vector Autoregression
    2.2.1 Spatial Panel VAR Model
    2.2.2 QML Estimation
    2.2.3 Asymptotic Properties of QMLE
    2.2.4 Monte Carlo Simulation
  2.3 Conclusions
3 Forecast Influenza Incidence Rates of US States
  3.1 Introduction
  3.2 Data and Sample
    3.2.1 CDC and Google Flu Trends ILI
    3.2.2 Sample
  3.3 Estimation and Forecast
    3.3.1 Model Selection
    3.3.2 Parameter Stability
    3.3.3 Alternative Weight Matrix
    3.3.4 Comparison with SDPD
  3.4 Impulse Response Analysis
  3.5 Conclusions
Reference List
A Appendix to Chapter 2
  A.1 Notations and Expressions
  A.2 Facts on uniformly bounded (UB) matrices
  A.3 First and Second Order Differentials
  A.4 Information Matrix and Information Matrix Equality
  A.5 Lemmas
  A.6 Proof of Theorems
B Appendix to Chapter 3
  B.1 Derivation of Generalized Impulse Response Functions
  B.2 Bootstrap Confidence Bands for GIRF
C Tables and Figures

List of Tables

C.1 Estimation of Simulated Samples
C.2 Summary Statistics of Google Flu Trend ILI & CDC ILI Levels
C.3 State by State Geographical Contiguity
C.4 Parameter Estimates of SPVAR
C.5 Model Selection
C.6 Model Evaluations

List of Figures

C.1 Google Flu Trend ILI by Regions and States
C.2 Google Flu Trend and CDC ILI by Regions and States
C.3 Spatial View of Google Flu Trend and CDC ILI
C.4 Network Plot of State Contiguity
C.5 In-sample Fitted and Out-of-Sample Forecasted ILI from SPVAR(p)
C.6 State Responses to California Flu Shock

Abstract

Theoretical and empirical work on forecasting based on dynamic panel data is very limited in the literature, especially work involving cross-sectional dependence. In Chapter 1, we review some recent literature on dynamic panel data models useful for forecasting studies.
The review focuses on different model specifications and criteria for forecast evaluation. Chapter 2 proposes a spatial panel vector autoregression, SPVAR(p), which generalizes the spatial dynamic panel data (SDPD) models with individual fixed effects to allow for multivariate vector observations and higher-order lags. The regression residuals can be independently or spatial autoregressively (SAR) distributed. We study the identification and estimation of the model by profile QMLE. We show that the QML estimators are consistent and asymptotically normally distributed when both N and T are large, and we provide a bias-corrected estimator. The finite sample performance of the profile QMLE is evaluated by Monte Carlo simulations. In Chapter 3, we combine weekly influenza-like illness (ILI) incidence rate data from Google Flu Trends and the Centers for Disease Control and Prevention (CDC) into a vector measure for 48 continental US states, and then use the vector observations to estimate the SPVAR(p) model, where the spatial weight matrix takes the form of a row-normalized state geographical adjacency matrix or a row-normalized state workflow matrix. We use state population density, temperature, and precipitation as predictors. We find that SPVAR(p) achieves satisfactory in-sample estimates and out-of-sample forecasts based on Google Flu Trends and CDC ILI incidence rate data. Information criteria and likelihood ratio tests are used to select the lag orders and to test the residual specifications in the SPVAR(p). The estimated SPVAR(p) is compared against a univariate SDPD(p) model of Google Flu Trends ILI incidence rates. We also conduct an impulse response analysis of the dynamic diffusion of a California flu shock.

Chapter 1: Review of Panel Data Forecasting

A forecast object could be the expected value, volatility, quantile, probability, density, distribution, discrete outcome, interval, or trajectory of some economic variables.
Forecasting based on panel data models has been an understudied though burgeoning research area that interests both theorists and practitioners. This chapter briefly surveys a few recent developments in forecasting, with a focus on dynamic panel data models.

"Forecasting studies using spatial panel data models are rare, and those involving forecasting with a dynamic component are almost absent from the literature." —Baltagi et al. (2014)

Predictors could differ because of differences in the underlying model, the estimation procedure, or the loss function criterion. Therefore, this chapter investigates the questions of model selection, improved estimation methods, and meaningful loss functions. Prior surveys include, for example, Baltagi & Kao, who review panel unit root tests, panel cointegration tests, and dynamic panel data models. Lee & Yu (2010c) report some recent developments in the econometric specification and estimation of spatial panel data models for both the static and dynamic cases, investigate some finite sample properties of estimators, and illustrate their relevance for empirical research in economics with two applications. Elhorst (2012) surveys the existing literature on the specification and estimation of dynamic spatial panel data models. Bai & Shi (2011) review the estimation of high-dimensional covariance matrices, where the number of variables can exceed the number of observations. They compare the conventional sample covariance matrix estimator, the shrinkage method, the factor method, the Bayesian approach, and the random matrix theory approach. They also point out some facts that simplify the inversion of high-dimensional covariance matrices. Manner & Reznikova (2012) survey time-varying copulas, where either the copula function or the dependence parameter is time-varying. Patton (2012) surveys estimation, inference, goodness-of-fit tests, and applications of copula-based multivariate models. Cooley et al.
(2012) survey extensions of the finite-dimensional extremes framework to spatial processes, spatial dependence metrics for extremes, and modeling of marginal distributions, as well as max-stable processes and copulas for modeling residual spatial dependence after accounting for marginal effects. Arellano & Bonhomme (2011) provide extensive guidance on aspects of recent research in nonlinear panel data analysis. Bai & Ng (2008) summarize important issues and results in analyzing large-dimensional panel data models with common factors. Lee (2007) summarizes different designs of loss functions in time series forecasting.

1.1 Models and Estimations

A model that is naturally suitable for out-of-sample forecasting has a dynamic structure. As pointed out by Baltagi & Kao, in dynamic panel data models autocorrelation emerges due to the presence of the lagged dependent variable among the regressors and of individual effects characterizing the heterogeneity among individuals.

Dynamic panel models with fixed effects are usually estimated by either the within-group method or GMM. The within-group estimator is biased and inconsistent under fixed T (Nickell (1981), Kiviet (1995)), and the Arellano-Bond GMM estimator has a bias of order 1/N (Alvarez & Arellano (2003)). Recently, Bai (2013) considers estimating a dynamic panel data model in the presence of individual fixed effects, time fixed effects, and incidental parameters in the variances. He shows that such models can be estimated by the factor analytical method, which entails estimating the sample variance of the individual fixed effects instead of the individual effects themselves, thereby eliminating the incidental parameter problem. The estimator is consistent regardless of how N and T go to infinity.
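The fixed-T bias of the within-group estimator noted above (Nickell (1981)) is easy to see in simulation. The sketch below, an illustrative example with hypothetical parameter values rather than any specification from the papers cited, fits a demeaned AR(1) to a simulated dynamic panel and shows the estimate landing well below the true coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, rho = 500, 10, 0.5  # hypothetical sizes: many units, short panel

# Simulate a dynamic panel y_it = alpha_i + rho * y_{i,t-1} + eps_it
alpha = rng.normal(size=N)
y = np.zeros((N, T + 1))
for t in range(1, T + 1):
    y[:, t] = alpha + rho * y[:, t - 1] + rng.normal(size=N)

# Within-group (fixed effects) estimator: demean each unit over time,
# then run OLS of y_it on y_{i,t-1}.
y_lag, y_cur = y[:, :-1], y[:, 1:]
y_lag_d = y_lag - y_lag.mean(axis=1, keepdims=True)
y_cur_d = y_cur - y_cur.mean(axis=1, keepdims=True)
rho_fe = (y_lag_d * y_cur_d).sum() / (y_lag_d ** 2).sum()

# Nickell bias: for small T the within estimator is biased downward,
# roughly by -(1 + rho)/T, regardless of how large N is.
print(rho_fe)  # noticeably below the true rho = 0.5
```

Increasing T shrinks the bias, while increasing N does not, which is exactly the incidental parameter problem the factor analytical method is designed to avoid.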
With cross-sectional heteroskedasticity, the factor analytical method estimates the cross-sectional average of the variances instead of the individual variances, thus eliminating the incidental parameter problem in the cross-sectional variances. With time fixed effects, he shows that the time effects do not lead to bias in the autoregressive coefficient estimator. In addition, assuming homoskedasticity when time series heteroskedasticity is present under fixed T leads to inconsistency. Further, estimating a large number of variance parameters under large T does not lead to incidental parameter bias.

1.1.1 Linear Models

An example that explores out-of-sample forecasting with panel data is Hjalmarsson (2006), who in his PhD dissertation extends time series predictive regression studies to a panel data framework. In a traditional time-series framework, estimation and testing are often made difficult by the endogeneity and near persistence of many forecasting variables. Hjalmarsson (2006) shows that, by pooling the data, these econometric issues can be dealt with more easily. When no individual intercepts are included in the pooled regression, the pooled estimator has an asymptotically normal distribution and standard tests can be performed. However, when fixed effects are included in the specification, a second-order bias in the fixed effects estimator arises from the endogeneity and persistence of the regressors. Hjalmarsson (2006) proposes a new estimator based on recursive demeaning, which requires no knowledge of the degree of persistence in the regressors and thus sidesteps the main inferential problems of the time-series case. To deal with cross-sectional dependence, common factors are added to both the predictive regression and the autoregression of the model. The methods are applied to panel data of stock returns of 18 regions.
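The general idea behind recursive demeaning can be sketched as follows: instead of subtracting the full-sample mean (the within transform), each observation is adjusted using a running mean, so the transform at time t does not mix in later realizations. This is a simplified illustration of the idea, not the exact transform used in Hjalmarsson (2006).

```python
import numpy as np

def recursive_demean(y):
    """Subtract from y[t] the running mean of y[0..t].

    Simplified sketch of recursive demeaning: unlike the usual within
    transform, observation t is adjusted using only data available up
    to t, avoiding correlation with future innovations.
    """
    y = np.asarray(y, dtype=float)
    running_mean = np.cumsum(y) / np.arange(1, len(y) + 1)
    return y - running_mean

print(recursive_demean([1.0, 2.0, 3.0, 4.0]))  # [0., 0.5, 1., 1.5]
```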
The empirical results show that although the standard fixed effects estimators are significant for the predictors, with or without controls for the unobserved common factors, the recursive demeaning estimator and the bias-adjusted fixed effects estimators are mostly insignificant, except for the recursive demeaning estimator for the DP ratio.

In another PhD dissertation, Mercurio (2000) investigates the small sample properties of forecasts from linear dynamic panel data models with fixed effects and random effects. The dissertation provides formulas for the exact and approximate bias and mean squared error of forecast (MSEF) of the one-step-ahead forecast estimators, where the approximation methods considered are the Laplace, small-sigma, and large-n methods.

Mutl (2006) in his PhD dissertation studies dynamic panel data models with spatially correlated disturbances, although the focus is not on forecasting with the model. The model he considers is a so-called Cliff-Ord type model. The beginning of his dissertation gives a good literature review of issues in dynamic panel data models, including GMM estimation, bias correction, and MD and ML estimation; issues in modeling cross-sectional dependence, including contiguity weights, distance-based weights, and estimation; and issues in space-time models, including space-time autoregressive moving average (STARMA) models and models with contemporaneous spatial correlation. Mutl (2006) then introduces a three-step estimation procedure and derives large sample results for fixed T. The first step is based on the instrumental variable (IV) technique of Anderson & Hsiao (1981), which estimates the slope coefficients of the model. While Anderson & Hsiao (1981) ignore possible cross-sectional correlation in the data, Mutl (2006) shows that the estimator is still √N-consistent and asymptotically normal under the specification considered.
The second step extends the GMM method introduced by Kelejian & Prucha (1999) for estimating the spatial autoregressive parameter. Mutl (2006) shows that if it is based on √N-consistently estimated disturbances, it will also be consistent. The third step consists of a GMM estimation of the slope coefficients. Mutl (2006) discusses the optimal choice of weighting matrix, provides formal large sample results for a generic GMM estimator based on linear moment conditions with stochastic instruments, and provides the formal large sample properties of a feasible GMM estimator and its small sample covariance matrix approximation.

Panel VAR

General Panel VAR Model

Canova & Ciccarelli show that a general panel VARX model with time-varying coefficients takes the form

y_{it} = A_{0i}(t) + A_{it}(ℓ)Y_{t−1} + F_{it}(ℓ)W_t + u_{it},  u_t ∼ iid(0, Σ_u)  (1.1)

where Y_t is a G×1 vector of endogenous variables and W_t is an M×1 vector of weakly exogenous variables common to all units. A_{0i}(t) contains all the deterministic components, A_{it}(ℓ) are the coefficients on the lagged endogenous variables Y_{t−1}, and time-varying coefficients are allowed in both A_{it}(ℓ) and F_{it}(ℓ). Rewriting (1.1) in simultaneous-equations form,

Y_t = Z_t α_t + U_t  (1.2)

where the α_{it} are G(NGp + Mq)×1 vectors containing the stacked rows of A_{0i}, A_{it}, F_{it}, and α_t = (α′_{1t}, ···, α′_{Nt})′, Canova & Ciccarelli discuss two estimation approaches: a panel-type hierarchical prior and a factor structure for the coefficient vector. In the approach using a panel-type hierarchical prior, the structure of the model can be summarized with the following hierarchical scheme:

Y_t = Z_t δ + Z_t S_N λ_t + U_t,  U_t ∼ N(0, Σ_u)
δ = S_N δ̄ + ζ,  ζ ∼ N(0, Δ)
δ̄ = μ + ω,  ω ∼ N(0, Ψ)
λ_t = ρ λ_{t−1} + (1−ρ) λ_0 + e_t,  e_t ∼ N(0, Σ_e)

where S_N = e_n ⊗ I, e_n is a vector of ones, and Δ = I ⊗ Σ_v. Canova & Ciccarelli point out that the above setup is convenient and useful in forecasting and turning point analysis.
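The hierarchical scheme above can be simulated directly. The sketch below is an illustrative reading of that structure with small hypothetical dimensions and scalar hyperparameters: unit-specific coefficients δ are drawn around a common mean, a common AR(1) factor λ_t mean-reverts to λ_0, and the observations load both through Y_t = Z_t δ + Z_t S_N λ_t + U_t.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, T = 4, 3, 200       # units, coefficients per unit, periods (hypothetical)
rho, lam0 = 0.8, 1.0      # factor persistence and long-run level

# S_N = e_n (x) I stacks the common factor so it shifts every unit's block.
S_N = np.kron(np.ones((n, 1)), np.eye(k))

# delta-bar = mu + omega, then delta = S_N delta-bar + zeta (scalar variances)
mu = np.zeros(k)
delta_bar = mu + rng.normal(scale=0.1, size=k)
delta = S_N @ delta_bar + rng.normal(scale=0.1, size=n * k)

Y = np.zeros((T, n))
lam = np.full(k, lam0)
for t in range(T):
    # lam_t = rho * lam_{t-1} + (1 - rho) * lam_0 + e_t  (mean-reverts to lam_0)
    lam = rho * lam + (1 - rho) * lam0 + rng.normal(scale=0.1, size=k)
    # block-diagonal regressor matrix: unit i sees its own k regressors
    Z = np.kron(np.eye(n), rng.normal(size=(1, k)))
    Y[t] = Z @ delta + Z @ (S_N @ lam) + rng.normal(size=n)
```

Shrinking the scale of ζ pushes the units toward a pooled model, which is the sense in which the hierarchy interpolates between unit-by-unit and pooled estimation.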
Canova & Ciccarelli (2004) describe how to construct the posterior distributions of the parameters under hierarchical and Minnesota-type priors. In Canova & Ciccarelli (2004), they perform multistep forecasts and predict turning point probabilities through a Bayesian approach. A general specification of the model considered by Canova & Ciccarelli (2004) is

y_{it} = Σ_{j=1}^{N} Σ_{l=1}^{p} b^j_{it,l} y_{j,t−l} + d_{it} v_t + u_{it},  u_{it} ∼ iid(0, Σ_u),  i = 1,···,N; t = 1,···,T  (1.3)

where y_{it} is a G×1 vector, the b^j_{it,l} are G×G matrices, d_{it} is G×q, v_t is a q×1 vector of exogenous variables common to all units, and u_{it} is a G×1 vector of random disturbances. Here p is the number of lags, G is the number of endogenous variables, and q is the number of exogenous variables including a constant. Two features of model (1.3) are that the coefficients can vary both across units and across time, and that there are interdependencies between units i and j whenever b^j_{it,l} ≠ 0.

In the approach using a factor structure for the coefficient vector, the reparameterized model has the following state space structure:

Y_t = Z_t δ + Z_t S_N λ_t + U_t,  γ_t = U_t + Z_t e_t ∼ N(0, σ_t Σ_u)
δ = S_N δ̄ + ζ,  η_t ∼ N(0, Ω_t)

where U_t ∼ N(0, Σ_u), e_t ∼ N(0, Σ_u ⊗ V), V = σ² I, and σ_t = (I + σ² Z′_t Z_t). Canova & Ciccarelli (2009) show how these joint densities can be specified so that the posterior distribution can be computed by MCMC.

Lag Selection in Out-of-sample Forecasts

Greenway-McGrevy (2012), Greenway-McGrevy (2013b), and Greenway-McGrevy (2013a) contribute to the forecasting literature by providing a theoretical basis for selecting lags in panel VAR forecast regressions. The selections are based on minimizing the forecast risk of least squares model fits, where forecast risk is evaluated as the cross-sectional average of the out-of-sample quadratic forecast error. The model considered in Greenway-McGrevy (2012) is

y_{it} = Σ_{r=1}^{p} β_{i,r} (t−1)^{(r−1)} + Σ_{s=1}^{k} α′_s y_{i,t−s} + e_{i,t},  e_{it} ∼ iid(0, Σ),  i = 1,···,N; t = 1,···,T  (1.4)

where y_{it} is an m×1 vector. Greenway-McGrevy (2012) notes that (1.4) generalizes several specifications in the literature, including the conventional fixed effects model (when p = 1) and the heterogeneous linear trend model (when p = 2). The idea is to estimate the least squares mean square forecast error (MSFE) for each model within the permissible set and then select the model corresponding to the smallest estimated MSFE. To do so, Greenway-McGrevy (2012) derives a consistent estimator of the asymptotic MSFE of the model, which is a modified version of Akaike's (1970) final prediction error (FPE). Denote the consistent estimator by D^{FPE}_{p,T}, which Greenway-McGrevy (2012) shows takes the form

D^{FPE}_{p,T} = R_p(k) + (T−k)/(T−k−p) R_p(k) [ 2k/(n(T−k)) + p/(T−k) + Σ_{l=1}^{p} ((2l−1)/(T−k)) · Π_{r=0}^{l−1}(T−k−r) / Π_{r=0}^{l−1}(T−k+r) ] + 2p² ζ( R̈(k), Q(k), Σ_{s=1}^{k} a_s(k) ) / (T−k)²  (1.5)

where the first term is the in-sample quadratic loss of the fitted model and the last term estimates the quadratic transformation of the Nickell bias in the LS estimator of the MSFE. Based on (1.5), the selected lag k̂_{FPE} solves the minimization

min_{1≤k≤k_{NT}} tr( Ω D^{FPE}_{p,T}(k) )

where Ω is a positive semi-definite weight matrix satisfying tr(Ω) = 1. Greenway-McGrevy (2012) notes that this criterion is similar to model selection by minimization of estimated out-of-sample loss in the time series literature (Akaike (1970); Shibata (1980)). Finally, Greenway-McGrevy (2012) points out that extensions could examine whether the selection criteria are asymptotically efficient in the sense of Shibata (1980), generalize the one-step-ahead results to multistep forecasts, consider forecast combination using, for example, the Mallows model averaging of Hansen (2008), and consider parameterizing cross-sectional dependence structures in the model.
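The principle behind the FPE criterion, estimating the out-of-sample loss of each candidate lag order and choosing the minimizer, can be illustrated with a simple holdout analogue. The sketch below is not the FPE formula itself: it fits a pooled least squares autoregression with a common intercept (hypothetical data-generating parameters, no individual effects in the fit) and scores each lag order on held-out periods.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, true_k = 200, 60, 2  # hypothetical panel dimensions, true lag order 2

# Simulate a univariate dynamic panel with individual intercepts and 2 lags.
a = np.array([0.5, -0.3])
alpha = rng.normal(size=N)
y = rng.normal(size=(N, T))
for t in range(true_k, T):
    y[:, t] = alpha + a[0] * y[:, t - 1] + a[1] * y[:, t - 2] + rng.normal(size=N)

def holdout_msfe(y, k, h=10):
    """Fit pooled LS with k lags on the first T-h periods and return the
    cross-sectionally averaged squared one-step error on the last h periods."""
    T = y.shape[1]
    def design(t0, t1):
        X = np.column_stack([np.ones((t1 - t0) * N)] +
                            [y[:, t0 - s:t1 - s].ravel() for s in range(1, k + 1)])
        return X, y[:, t0:t1].ravel()
    X, z = design(k, T - h)
    beta = np.linalg.lstsq(X, z, rcond=None)[0]
    Xo, zo = design(T - h, T)
    return np.mean((zo - Xo @ beta) ** 2)

# Estimate the out-of-sample loss for each candidate lag order, pick the min.
msfes = {k: holdout_msfe(y, k) for k in range(1, 5)}
best_k = min(msfes, key=msfes.get)
print(best_k)
```

Under-fitting (k = 1) leaves the −0.3 coefficient on the second lag in the error and inflates the holdout MSFE; the FPE criterion additionally penalizes the Nickell-bias contribution that a naive holdout estimate ignores.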
In a related paper on the same model, Greenway-McGrevy (2013a) further shows that the FPE derived in Greenway-McGrevy (2012) is asymptotically efficient in the sense defined by Shibata (1980). Shibata (1980) considered fitting an AR(k) model to an infinite-order AR process, permitting the set of fitted lag orders to grow with the sample size T at rate o(T^{1/2}). An asymptotically efficient model selection criterion minimizes the forecast loss taking into account both specification error and over-parameterization. Greenway-McGrevy (2013a) also shows that the model selection based on the Kullback-Leibler information loss criterion (KLIC) proposed by Lee & Phillips (2015) is not efficient in the Shibata (1980) sense. Lee & Phillips (2015) propose selection criteria for panel data models with additive incidental parameters, which either minimize Kullback-Leibler information loss or maximize Bayesian posterior probability, leading to generalizations of AIC and BIC. Greenway-McGrevy (2013a) interprets the intuition for the KLIC criterion not being asymptotically efficient as follows: it only minimizes information loss with respect to the homogeneous parameters, whereas minimization of the MSFE requires minimizing information loss with respect to both the homogeneous and the cross-section specific parameters.

In another paper, Greenway-McGrevy (2013b) discusses multi-step prediction with a similar model, given by

y_{it} = β_i + Σ_{s=1}^{q} α′_s y_{i,t−s} + u_{i,t},  u_{it} ∼ iid(0, Σ_u),  i = 1,···,N; t = 1,···,T  (1.6)

The findings show that when the fitted lag order exceeds the true lag order of the panel VAR, the bias of the LS estimator transmutes into the MSPE unless N/T → 0. This contrasts with the time series case, in which only the variance of the LS estimator is manifest in the MSPE. For multi-step forecasts, including more lags than the true number of lags in the model can reduce the MSPE of the direct predictor.
This also contrasts with the time series case, in which the number of lags that minimizes the MSPE equals the true number of lags (Ing (2003)). Lastly, for a model with a fixed number of lags, if the estimated number of lags is less than the true number of lags, the direct forecast has no larger MSPE than the recursive forecast; otherwise, the recursive forecast has a smaller MSPE.

Spatial Panel Data Models

QMLE

Lee & Yu (2010c) discuss a general model with "time-space dynamics", which they call the spatial dynamic panel data (SDPD) model:

Y_{nt} = λ_0 W_n Y_{nt} + γ_0 Y_{n,t−1} + ρ_0 W_n Y_{n,t−1} + X_{nt} β_0 + c_{n0} + α_{t0} l_n + V_{nt},  t = 1,···,T  (1.7)

where W_n is the predetermined spatial weight matrix, X_{nt} is an n×k matrix of regressors that contains no time-invariant or individual-invariant variables, and c_{n0} is an n×1 vector of individual random components. α_{t0} is a scalar time effect and l_n is an n×1 vector of ones. γ_0 captures the pure dynamic effect and ρ_0 captures the spatial-time effect. The disturbances may be spatially autoregressive. With S_n = I_n − λ_0 W_n, define

A_n = S_n^{−1} (γ_0 I_n + ρ_0 W_n)  (1.8)

Using (1.8), model (1.7) can be rewritten as

Y_{nt} = A_n Y_{n,t−1} + S_n^{−1} X_{nt} β_0 + S_n^{−1} c_{n0} + α_{t0} S_n^{−1} l_n + S_n^{−1} V_{nt}

The general SDPD model (1.7) is classified according to the eigenvalues of A_n:

1. Stable: all eigenvalues of A_n are less than 1, corresponding to γ_0 + ρ_0 + λ_0 < 1.
2. Spatially cointegrated: some (but not all) eigenvalues of A_n equal 1, corresponding to γ_0 + ρ_0 + λ_0 = 1 but γ_0 ≠ 1.
3. Unit root: all eigenvalues of A_n equal 1, corresponding to ρ_0 + λ_0 = 0 but γ_0 = 1.
4. Explosive: some eigenvalues of A_n are greater than 1, corresponding to γ_0 + ρ_0 + λ_0 > 1.

Yu et al. (2008) consider estimating the stable SDPD with only individual fixed effects, assuming the v_{it} to be iid. For the unit root SDPD model, Lee & Yu (2010b) include individual fixed effects and consider both a unit root dynamic panel data model with spatially correlated disturbances and a unit root spatial dynamic panel data model. Yu et al. (2012) consider estimating the SDPD where both individual and time fixed effects are allowed for, the errors V_{nt} follow a SAR process, and Y_{nt} and W_n Y_{nt} are spatially cointegrated. Table 5 of Lee & Yu (2010b) summarizes the asymptotic results for the QMLE in the three cases:

- Stable (γ_0 + ρ_0 + λ_0 < 1): γ̂_{nT}, λ̂_{nT}, ρ̂_{nT} are √(NT)-consistent; the information matrix is nonsingular; the bias is of order O(1/T); see Yu et al. (2008).
- Spatial cointegration (γ_0 + ρ_0 + λ_0 = 1, γ_0 ≠ 1): γ̂_{nT}, λ̂_{nT}, ρ̂_{nT} are √(NT)-consistent and γ̂_{nT} + λ̂_{nT} + ρ̂_{nT} is √(NT³)-consistent; the information matrix is singular; the bias is of order O(1/T); see Yu et al. (2012).
- Unit root (ρ_0 + λ_0 = 0, γ_0 = 1): λ̂_{nT}, ρ̂_{nT} are √(NT)-consistent, while γ̂_{nT} and λ̂_{nT} + ρ̂_{nT} are √(NT³)-consistent; the information matrix is nonsingular after rescaling; the bias is O(1/T²) for γ̂_{nT} and O(1/T) for λ̂_{nT}, ρ̂_{nT}; see Lee & Yu (2010b).

For the explosive SDPD, Lee & Yu (2010c) note that the properties of the QMLE are unknown; however, a transformation taking spatial first-differences of the data can reduce the explosive variables to stable ones and thus make estimation tractable. See also Lee & Yu (2011). Because the estimators behave differently across these cases, a test for unit roots is important, using for example the tests of Im et al. (2003) or Binder et al. (2005).

When both time and individual fixed effects are allowed for, as in Lee & Yu (2010d), assuming the v_{it} are iid and the system is stable, Lee & Yu (2010d) establish the asymptotic properties of the QMLE.
They propose to transform the data by premultiplying by the operator F′_{n,n−1}, where (F_{n,n−1}, l_n/√n) is the orthonormal matrix of eigenvectors of J_n = I_n − (1/n) l_n l′_n. When n is asymptotically proportional to T, the estimator is √(nT)-consistent and asymptotically normal, but the limit distribution is not centered at 0. When n/T → 0, the estimator is √(nT)-consistent and asymptotically normal with zero mean. When n/T → ∞, the estimator is consistent at rate T and has a degenerate limit distribution. Lee & Yu (2010d) also propose a bias correction for the estimator and show that when n^{1/3}/T → 0, the bias correction asymptotically eliminates the bias and yields a centered confidence interval. Lee & Yu (2010d) argue that such a transformation has an advantage over the direct approach especially when n is relatively small, because the estimator based on the transformation then has a faster rate of convergence than the direct estimator. The bias is of order O(1/T) under the transformation, rather than O(max(T^{−1}, n^{−1})) under the direct estimation approach.

Stationarity of the General Model

Elhorst (2012) in his survey gives a more general form of the spatial dynamic panel data model. Denote Y_t = (Y_{1t}, ···, Y_{Nt})′ for t = 1,···,T; the model discussed in Elhorst (2012) takes the form

Y_t = τ Y_{t−1} + δ W Y_t + η W Y_{t−1} + X_t β_1 + W X_t β_2 + X_{t−1} β_3 + W X_{t−1} β_4 + Z_t θ + v_t  (1.9)
v_t = γ v_{t−1} + ρ W v_t + μ + λ_t l_N + ε_t
μ = κ W μ + ξ
ε_t ∼ iid(0, σ²),  ξ ∼ iid(0, σ²_ξ)

where X_t is an N×K matrix of exogenous explanatory variables and Z_t is an N×L matrix of endogenous explanatory variables. Variables with subscript t−1 denote lagged values, and variables premultiplied by W denote spatially lagged values. W is a nonnegative matrix of known constants whose diagonal elements are set to 0, by the assumption that no spatial unit can be viewed as its own neighbour.
μ = (μ_1, ···, μ_N)′, where the μ_i are mean-zero time-invariant spatial-specific effects, λ_t are time-specific effects, and l_N is an N×1 vector of ones. The rest are parameters.

Elhorst (2012) summarizes the conditions for the general model (1.9) to be stationary, which require the following restrictions on the model parameters and the spatial matrix W.

1. (I_N − κW) must be non-singular and all characteristic roots of (I_N − κW)^{−1} must lie in the unit circle. When W is not normalized but its eigenvalues are real, this condition is satisfied as long as κ lies in the interior of (−1/|ω_min|, 1/ω_max), where ω_min and ω_max denote the minimum and maximum real eigenvalues of W (see also Lee (2004); Anselin (1988)). If W is row-normalized, its largest eigenvalue is 1, and the restriction is that κ lies in the interior of (−1/|ω_min|, 1). If the symmetric W is not normalized and has complex eigenvalues, the restriction is that κ lies in the interior of (1/ω_min, 1), where ω_min equals the most negative purely real eigenvalue of W after W is row-normalized.

2. One of the following two conditions must be satisfied: (a) the row and column sums of W and of (I_N − κW)^{−1}, before W is row-normalized, are uniformly bounded in absolute value as N → ∞; or (b) the row and column sums of W, before W is row-normalized, do not diverge to infinity at a rate equal to or faster than the rate of the sample size N. Both conditions limit the cross-sectional correlation to a manageable degree.

3. The eigenvalues of γ(I_N − ρW)^{−1} must lie within the unit circle, which requires that
   |γ| < 1 − ρ ω_max if ρ ≥ 0, and |γ| < 1 − ρ ω_min if ρ < 0.

4. The eigenvalues of (I_N − δW)^{−1}(τ I_N + η W) must lie within the unit circle, which requires that
   τ < 1 − (δ + η) ω_max if δ + η ≥ 0;
   τ < 1 − (δ + η) ω_min if δ + η < 0;
   τ > −1 + (δ − η) ω_max if δ − η ≥ 0;
   τ > −1 + (δ − η) ω_min if δ − η < 0.

5. W must also satisfy the following restrictions: (a) I_N − δW and I_N − ρW are non-singular; (b) the row and column sums of W, (I_N − δW)^{−1}, and (I_N − ρW)^{−1} are uniformly bounded in absolute value as N → ∞; and (c) Σ_{h=1}^{∞} abs( [(I_N − δW)^{−1}(τ I_N + η W)]^h ) is uniformly bounded.

Given the above analysis, Elhorst (2012) points out that the stationarity region |τ| + |δ| + |η| < 1 implied by Yu et al. (2008) is too restrictive, while the stationarity region τ + δ + η < 1 implied by Lee & Yu (2010c) is not restrictive enough. Elhorst (2012) then discusses the special case in which all non-zero elements of W equal 1/(N−1), which causes no problem provided that time-period effects are not included (see Kelejian & Prucha (2002); Kelejian et al. (2006)) but leads to inconsistent parameter estimates when time-period effects are included.

An example of the general model (1.9) is

Y_t = τ Y_{t−1} + δ W Y_t + η W Y_{t−1} + X_t β_1 + W X_t β_2 + v_t  (1.10)

which Elhorst (2012) labels the "dynamic spatial Durbin model". From (1.10) the short-term and long-term effects can be derived. In addition, to identify model (1.10), one of the following four restrictions must be imposed: (a) β_2 = 0; (b) δ = 0; (c) η = −τδ; (d) η = 0.

Spatial GMM

Baltagi et al. (2014) extend panel forecast studies to a dynamic and autoregressive spatial lag panel data model with spatially correlated disturbances. The model considered by Baltagi et al. (2014) takes the SAR-RE form

y_{it} = γ y_{i,t−1} + ρ_1 Σ_{j=1}^{N} ω_{ij} y_{jt} + x_{it} β + ε_{it}  (1.11)
ε_{it} = ρ_2 Σ_{j=1}^{N} m_{ij} ε_{jt} + u_{it}
u_{it} = μ_i + v_{it},  μ_i ∼ (0, σ²_μ) ⊥ v_{it} ∼ iid(0, σ²_v)

where ω_{ij} is the (i,j)th element of W_N, an N×N known spatial weight matrix with zero diagonal elements, and m_{ij} is the (i,j)th element of M_N, a row-normalized spatial matrix for ε_{it}. ρ_1 and ρ_2 are the spatial lag coefficients. Baltagi et al.
(2014) point out that the parameter space must be defined so that $(I_N-\rho_1 W_N)$ is non-singular, which holds when $\det(I_N-\rho_1 W_N)\ne 0$ and in turn requires that $\rho_1\ne 1/r_i$ for all eigenvalues $r_i$ of $W_N$. This is guaranteed by the assumption that $\rho_1$ belongs to the interior of $[1/r_{min}, 1/r_{max}]$, where $r_{min}$ equals the most negative purely real characteristic root of $W_N$, and $r_{max} = 1$ when $W_N$ is row-normalized. Model (1.11) is dynamically stable if $|\gamma| < 1$ and the largest absolute eigenvalue of $\gamma(I_N-\rho_1 W_N)^{-1}$ is less than 1, which is equivalent to $|\gamma| < 1-\rho_1 r_{max}$ when $\rho_1 > 0$ and $|\gamma| < 1-\rho_1 r_{min}$ when $\rho_1 < 0$. Note that here $r_{min}, r_{max}$ do not exclude complex eigenvalues. Similar conditions on $\rho_2$ are also assumed.

Baltagi et al. (2014) propose to estimate model (1.11) with dynamic spatial GMM, which is in the spirit of Arellano & Bond (1991) and Mutl (2006) and uses the idea of mixing spatial and non-spatial IVs in order to obtain consistent parameter estimates. Kapoor et al. (2007) extend the generalized moments method for cross-section data proposed by Kelejian & Prucha (1999) to panel data and derive its large sample properties when $T$ is fixed and $N\to\infty$. However, Kapoor et al. (2007) is based on a static model, which furthermore does not have a spatial lag (i.e. $\gamma = \rho_1 = 0$ in equation (1.11)). The estimation proposed by Baltagi et al. (2014) modifies Kapoor et al. (2007) and is implemented in the following steps.

1. Use an IV or GMM estimator to get consistent estimates of $\gamma, \rho_1, \beta$; for example, use Anderson & Hsiao (1981, 1982) after adding $W_N y_{t-2}$ to the IVs.

2. Use the residuals from the last step to get consistent estimates of $\rho_2$, $\sigma_v^2$, and $\sigma_1^2$.

3. Compute the preliminary one-step consistent estimator, which is given by

$$\hat\delta_1 = \left(\Delta\tilde{X}'Z^*\hat{A}_N Z^{*\prime}\Delta\tilde{X}\right)^{-1}\Delta\tilde{X}'Z^*\hat{A}_N Z^{*\prime}\Delta y \quad (1.12)$$

where $Z^*$ is a matrix of IVs stacking $Z$, which contains lagged $y$ and all the exogenous $x$, and $Z^s$, which contains spatially weighted lagged $y$ and spatially weighted exogenous $x$.
Here $\delta' = (\gamma,\rho_1,\beta')$, $\Delta\tilde{X} = [\Delta y_{-1},\,(I_{T-2}\otimes W_N)\Delta y,\,\Delta x]$, and

$$\hat{A}_N = \left[Z^{*\prime}(I_{T-2}\otimes\hat{H}_N)(G\otimes I_N)(I_{T-2}\otimes\hat{H}_N')Z^*\right]^{-1}$$

with $\hat{H}_N = (I_N-\hat\rho_2 W_N)^{-1}$.

4. Following Arellano & Bond (1991), replace $\hat{A}_N$ in (1.12) by

$$\hat{V}_N = \left[Z^{*\prime}(I_{T-2}\otimes\hat{H}_N)\Delta v\Delta v'(I_{T-2}\otimes\hat{H}_N')Z^*\right]^{-1}$$

To operationalize this, $\Delta v$ is replaced by the differenced residuals of (1.12), and the resulting two-step spatial GMM estimator is

$$\hat\delta_2 = \left(\Delta\tilde{X}'Z^*\hat{V}_N Z^{*\prime}\Delta\tilde{X}\right)^{-1}\Delta\tilde{X}'Z^*\hat{V}_N Z^{*\prime}\Delta y \quad (1.13)$$

Time Varying Spatial Weights

Lee & Yu (2012) further investigate the SDPD model with time-varying spatial weight matrices. Allowing $W_n$ to be time-varying, model (1.7) becomes

$$Y_{nt} = \lambda_0 W_{nt}Y_{nt} + \gamma_0 Y_{n,t-1} + \rho_0 W_{n,t-1}Y_{n,t-1} + X_{nt}\beta_0 + c_{n0} + \alpha_{t0}l_n + V_{nt}, \quad t = 1,\cdots,T \quad (1.14)$$

where $V_{nt} = (v_{1t},\cdots,v_{nt})'$ and the $v_{it}$ are $iid(0,\sigma_0^2)$ across $i$ and $t$. Lee & Yu (2012) emphasize that the $W_{n,t}$ are assumed to be exogenous; an otherwise endogenous spatial weight matrix estimated as some economic distance is beyond the discussion of their paper.

Lee & Yu (2012) then derive asymptotic consistency and normality of the QMLE of model (1.14), which includes both individual and time fixed effects, under the assumption of stability and with both $n$ and $T$ tending to infinity. Similar to the case with a time-invariant spatial matrix, when $n$ is asymptotically proportional to $T$, the estimator is $\sqrt{nT}$ consistent and asymptotically normal, but the limit distribution is not centered around 0. When $n/T\to 0$, the estimator is $\sqrt{nT}$ consistent and asymptotically normal with zero mean. When $n/T\to\infty$, the estimator is consistent with rate $T$ and has a degenerate limit distribution. Lee & Yu (2012) also propose a bias correction for the estimator and show that when $n^{1/3}/T\to 0$, the bias correction asymptotically eliminates the bias and yields a centered confidence interval.
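The stability conditions that recur throughout this section, such as Elhorst's requirement that the eigenvalues of $(I_N-\delta W)^{-1}(\tau I_N+\eta W)$ lie inside the unit circle and the analogous dynamic-stability condition in Baltagi et al. (2014), are straightforward to verify numerically for a given parameterization. A minimal sketch with NumPy, using a hypothetical row-normalized contiguity matrix (the weight matrix and parameter values are illustrative, not from any of the cited papers):

```python
import numpy as np

def row_normalize(W):
    """Scale each row of a spatial weight matrix to sum to one."""
    return W / W.sum(axis=1, keepdims=True)

def is_dynamically_stable(W, tau, delta, eta):
    """Check that all eigenvalues of (I - delta*W)^{-1} (tau*I + eta*W)
    lie strictly inside the unit circle."""
    N = W.shape[0]
    I = np.eye(N)
    A = np.linalg.solve(I - delta * W, tau * I + eta * W)
    return bool(np.all(np.abs(np.linalg.eigvals(A)) < 1))

# Hypothetical example: a 5-unit contiguity chain, row-normalized.
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
W = row_normalize(W)

print(is_dynamically_stable(W, tau=0.4, delta=0.3, eta=0.1))  # prints True
print(is_dynamically_stable(W, tau=0.9, delta=0.3, eta=0.1))  # prints False
```

Because $W$ is row-normalized, its largest eigenvalue is 1, so the second call fails exactly when $\tau+\eta > 1-\delta$, matching the inequality $\tau < 1-(\delta+\eta)\omega_{max}$ stated above.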
Spatial Panel VAR

Beenstock & Felsenstein (2007) model panel VAR with spatial lags, which they refer to as spatial vector autoregressions (SpVAR):

$$Y_{nt} = \mu_n + \theta WY_{nt} + \sum_{j=1}^{q}\beta_j Y_{n,t-j} + \lambda WY_{n,t-1} + u_{nt} \quad (1.15)$$
$$u_{nt} = \rho u_{n,t-1} + \delta Wu_{nt} + \gamma Wu_{n,t-1} + \varepsilon_{nt}$$
$$\sigma_{ni} = Cov(\varepsilon_n,\varepsilon_i)$$

Notice that the SpVAR resembles the SDPD in (1.7) except that $Y_{nt}$ is allowed to be a $K\times 1$ vector instead of a scalar and higher order lags are allowed. Beenstock & Felsenstein (2007) point out that identification of (1.15) requires that $Y_{n,t-1}$ and $WY_{n,t-1}$ be weakly exogenous, which in turn requires that $\rho = \gamma = 0$. Denote the coefficients by $A^*, B^*, \Theta^*, \Lambda^*$ and stack the $Y_{it}$ into an $NK\times 1$ vector $Y_t$; then (1.15) in matrix form is

$$Y_t = \mu + A^*Y_t + B^*Y_{t-1} + \Theta^*WY_t + \Lambda^*WY_{t-1} + \varepsilon_t$$

which has the corresponding reduced form

$$Y_t = \Pi_0 + \Pi_1 Y_{t-1} + \Pi_2 WY_t + \Pi_3 WY_{t-1} + v_t$$

where $\Pi_0 = (I_{NK}-A^*)^{-1}\mu$, $\Pi_1 = (I_{NK}-A^*)^{-1}B^*$, $\Pi_2 = (I_{NK}-A^*)^{-1}\Theta^*$, $\Pi_3 = (I_{NK}-A^*)^{-1}\Lambda^*$, and $v_t = (I_{NK}-A^*)^{-1}\varepsilon_t$. There are $4K^2$ unknown structural parameters with $(3K^2+K(K+1)/2)$ identification restrictions, which makes the structural SpVAR under-identified.

Since spatial-specific effects are specified in the temporal dynamic panels, Beenstock & Felsenstein (2007) note that the "incidental parameter problem" arises and the LSDV-estimated $B^*$ has $O(1/T)$ downward bias when $T$ is finite; a further issue arises because $q$ is unknown. When $q = 1$, the asymptotic bias of $\beta$ by Hsiao (2003) is equal to

$$b = -\frac{\dfrac{1+\beta}{T-1}\left(1-\dfrac{1-\beta^T}{T(1-\beta)}\right)}{1-\dfrac{2\beta}{(1-\beta)(T-1)}\left(1-\dfrac{1-\beta^T}{T(1-\beta)}\right)}$$

As VARs can help to simulate the dynamic effects of exogenous shocks on the state variables, Beenstock & Felsenstein (2007) derive the impulse responses of the SpVAR, which help to simulate the spatial-temporal dynamic effects of exogenous shocks.
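The spatial-temporal impulse responses just mentioned can also be obtained by direct simulation of the reduced form. A minimal sketch for the scalar case $K = q = 1$, iterating $Y_t = (I-\theta W)^{-1}[(\beta I+\lambda W)Y_{t-1}+\varepsilon_t]$ forward from a one-off unit shock to a single region (the circular weight matrix and parameter values are hypothetical):

```python
import numpy as np

def spvar_impulse_response(W, beta, theta, lam, shock_region, horizon):
    """Trace a one-time unit shock through the reduced-form SpVAR
    Y_t = (I - theta*W)^{-1} [(beta*I + lam*W) Y_{t-1} + eps_t]."""
    N = W.shape[0]
    I = np.eye(N)
    P = np.linalg.inv(I - theta * W)        # contemporaneous spatial multiplier
    eps = np.zeros(N)
    eps[shock_region] = 1.0                 # unit shock at t = 0
    Y = P @ eps                             # impact response
    path = [Y.copy()]
    for _ in range(horizon):
        Y = P @ ((beta * I + lam * W) @ Y)  # responses decay when stable
        path.append(Y.copy())
    return np.array(path)                   # (horizon+1) x N array of responses

# Hypothetical 4-region circular contiguity matrix, row-normalized.
W = np.roll(np.eye(4), 1, axis=1) + np.roll(np.eye(4), -1, axis=1)
W = W / W.sum(axis=1, keepdims=True)
irf = spvar_impulse_response(W, beta=0.5, theta=0.2, lam=0.1,
                             shock_region=0, horizon=20)
```

On impact the shock already spills over to neighbours through $(I-\theta W)^{-1}$, and with these parameter values the responses die out geometrically, illustrating the stable case of the analytic profiles derived next.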
Consider the SpVAR with $K = q = 1$,

$$Y_t = \beta Y_{t-1} + \theta WY_t + \lambda WY_{t-1} + \varepsilon_t$$

Then the impulse response profiles can be derived using the Wold representation as

$$Y_t = C^{-1}\varepsilon_t + \sum_{i=1}^{N}a_i r_i^t$$

where $C = I_N-\theta W+(\beta I_N+\lambda W)L$, the $r_i$ denote the $N$ eigenvalues of $C^{-1}$, and the $a_i$ are arbitrary constants determined by initial conditions. In the general case when $K\ne 1$ and $q\ne 1$, $C^{-1}$ has $NKq$ eigenvalues, and both current and lagged shocks to neighbours reverberate onto the current value of each region.

Mutl (2009) also considers a panel VAR model with spatial dependence, which is characterized by spatial autoregressive disturbances:

$$y_{it} = \Phi y_{i,t-1} + u_{it} \quad (1.16)$$
$$u_{it} = \lambda\sum_{j=1}^{N}w_{ij}u_{jt} + (I_m-\Phi)\mu_i + \varepsilon_{it}$$

where $y_{it}$ is an $m\times 1$ vector.

Mutl (2009) proposes a three-step estimation procedure for (1.16). In the first step, an instrumental variables procedure is used to consistently estimate the spatially correlated disturbances. In the second step, a method of moments estimation is used to obtain a consistent estimate of the spatial parameter. The third step of the procedure is either a constrained ML (maximizing while taking as given the consistent estimate $\hat\lambda$ from the previous two steps) or a moments estimation based on a model transformed by a spatial Cochrane-Orcutt transformation (i.e. premultiplying by $(I_{mN}-\lambda W)^{-1}$). Monte Carlo results show that the constrained MLE works well in small samples and that the QMLE based on the independence assumption is robust to a small amount of spatial autocorrelation in the data.

Factor Models

There has been a fast-growing literature on factor analysis for panel data in recent years. Factor analysis allows us to summarize a large number of variables into a few coincident indicators, and it also makes forecasting possible. A large body of literature has focused on macroeconomic applications, for example, Forni et al. (2001), Stock & Watson (2002), and Bernanke & Boivin (2003).
Bai (2003) derives the theoretical basis for large dimensional dynamic factor models, in which he discusses the convergence rates of estimated factors and factor loadings. One thing he finds is that stronger results are achieved when the errors are serially uncorrelated. Phillips & Sul (2003) address small sample bias and propose a modified Hausman test for homogeneous unit roots in a dynamic panel with cross-sectional dependence, where the cross-sectional dependence is modeled by a scalar common time effect (CTE). Boivin & Ng (2006) raise the question of whether it is possible to use more data series to extract factors and yet obtain factors that are less useful for forecasting; their answer is yes. In addition, they show that such a problem tends to arise when the idiosyncratic errors are cross-correlated, and it can also arise if forecasting power is provided by a factor that is dominant in a small dataset but dominated in a larger dataset. Doz et al. (2012) argue that maximum likelihood estimation can lead to greater efficiency gains than principal component analysis in dynamic factor models of large dimension, even when the dynamic factor model is misspecified. Jungbacker et al. (2011) extend the maximum likelihood approach for dynamic factor models to account for missing data.

Zirogiannis & Tripodis (2013) develop a generalized dynamic factor model for panel data and propose an iterative estimation process, called the Two-Cycle Conditional Expectation-Maximization (2CCEM) algorithm, in which the unobserved index is estimated and then the dynamic component of the index is incorporated. They allow for a heterogeneous latent index across individuals, and their estimation strategy can account for multiple individuals, which applies well to panel data.

The next few points are addressed by Bai & Ng (2008), which surveys the main theoretical results relating to static factor models, or dynamic factor models that can be cast in a static framework.
Classic Factor Models

A static factor model takes the form

$$x_{it} = \lambda_i'F_t + e_{it} \quad (1.17)$$

where $e_{it}$ is an idiosyncratic error, $F_t$ is an $r$-dimensional vector of common factors, and $\lambda_i$ is a factor loading. $\lambda_i'F_t$ is often referred to as the common component of the model. In matrix form, (1.17) implies that

$$X = F\Lambda' + e \quad (1.18)$$

where $X$ is the $T\times N$ matrix with rows $X_t' = (x_{1t},\cdots,x_{Nt})$, $F = (F_1,\cdots,F_T)'$, $\Lambda = (\lambda_1,\cdots,\lambda_N)'$, and $e$ is the conformable matrix of errors. Note that although the model appears to be purely static, $F_t$ and $e_t$ can themselves be dynamic. For example, letting $A(L)$ be a polynomial in the lag operator, $F_t$ can evolve according to

$$A(L)F_t = u_t$$

Assuming $F_t$ and $e_t$ are uncorrelated with zero means, let $\Sigma$ and $\Omega$ be the $N\times N$ population covariance matrices of $X_t$ and $e_t$, and normalize $E(F_tF_t') = I_r$; then the covariance structure of model (1.18) is given by

$$\Sigma = \Lambda\Lambda' + \Omega$$

Classical factor analysis assumes that the $e_t$ are iid and $T\to\infty$ with $N$ fixed (or $N\to\infty$ with $T$ fixed). Large dimensional factor models relax these assumptions, which in turn poses additional statistical problems. An important characteristic of a static model with $r$ factors is that the largest $r$ eigenvalues of $\Sigma$ increase with $N$, while the remaining eigenvalues of $\Sigma$, as well as all the eigenvalues of $\Omega$, are bounded.

A dynamic factor model is defined as

$$x_{it} = \lambda_i(L)'f_t + e_{it} \quad (1.19)$$

where $f_t$ are the common factors (the lower case $f_t$ distinguishes them from $F_t$ in the static model) and $\lambda_i(L) = 1-\lambda_{i1}L-\cdots-\lambda_{is}L^s$ is a vector of dynamic factor loadings of order $s$. ($s$ is often assumed to be finite; when $s$ is allowed to be infinite, the model is called a generalized dynamic factor model.) The factors are assumed to evolve according to

$$f_t = C(L)\varepsilon_t$$

where the $\varepsilon_t$ are iid errors. $\varepsilon_t$ and $f_t$ have the same dimension $q = \dim(\varepsilon_t)$, which is referred to in the literature as the number of dynamic factors.
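The distinguishing eigenvalue behavior of the static model (the largest $r$ eigenvalues of $\Sigma$ grow with $N$ while the rest stay comparatively small) is easy to see in simulation. A minimal sketch of model (1.17) with hypothetical parameter choices ($T$, $r$, and the Gaussian factors, loadings, and errors are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
T, r = 200, 2

def sample_cov_eigenvalues(N):
    """Simulate x_it = lambda_i' F_t + e_it and return the ordered
    eigenvalues of the N x N sample covariance matrix of X_t."""
    F = rng.standard_normal((T, r))    # r common factors
    Lam = rng.standard_normal((N, r))  # factor loadings
    e = rng.standard_normal((T, N))    # iid idiosyncratic errors
    X = F @ Lam.T + e                  # T x N data matrix, as in (1.18)
    eig = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    return np.sort(eig)[::-1]          # descending order

for N in (50, 100, 200):
    eig = sample_cov_eigenvalues(N)
    # The first r eigenvalues grow roughly in proportion to N,
    # while the (r+1)-th stays comparatively small.
    print(N, eig[:r].round(1), eig[r].round(2))
```

The widening gap between the $r$-th and $(r+1)$-th eigenvalues as $N$ grows is exactly the feature exploited by the factor-number criteria reviewed below.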
A dynamic model with $q$ factors can be written as a static factor model with $r$ factors by noting that $r = q(s+1)$.

Principal Components

Classical factor analysis estimates $\Lambda$ and $\Omega$ assuming $\Omega$ to be diagonal, and the factors $F_t$ can then be estimated in a second stage. For fixed $N$, the estimated $F_t$ are inconsistent, and for large $N$, $\Sigma$ is not consistently estimable. Under large $N$ and large $T$, it is possible to estimate $\Lambda$ and $F$ simultaneously, treating both $\Lambda$ and $F$ as parameters. We need $r^2$ restrictions to be imposed in order to identify $F$ and $\Lambda$, because $F\Lambda'$ and $FAA^{-1}\Lambda'$ are observationally equivalent for an arbitrary $r\times r$ invertible matrix $A$. The restrictions can be 1) $F'F/T = I_r$ and $\Lambda'\Lambda$ diagonal, or 2) $\Lambda'\Lambda/N = I_r$ and $F'F$ diagonal.

Asymptotic principal components was first considered by Connor & Korajczyk (1986) for estimation with large $N$ and large $T$, and by Connor & Korajczyk (1988) for large $N$ and fixed $T$. For a given $k$, which is not necessarily equal to $r$, the estimators of the $T\times k$ matrix $F^k$ and the $N\times k$ matrix $\Lambda^k$ solve

$$\min_{\Lambda^k,F^k}\frac{1}{NT}\sum_{i=1}^{N}\sum_{t=1}^{T}\left(x_{it}-\lambda_i^{k\prime}F_t^k\right)^2$$

subject to $F^{k\prime}F^k/T = I_k$ and $\Lambda^{k\prime}\Lambda^k$ diagonal, or $\Lambda^{k\prime}\Lambda^k/N = I_k$ and $F^{k\prime}F^k$ diagonal. Concentrating out either $\Lambda^k$ or $F^k$ allows for the estimation. Estimators of $\Gamma_t = \lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{N}E(\lambda_i\lambda_j'e_{it}e_{jt})$ are obtainable under different sets of assumptions: 1) cross-sectionally independent but heterogeneous panels, 2) cross-sectionally independent and homogeneous panels, and 3) cross-sectionally correlated panels with cross-sectional heteroskedasticity but stationary covariance.

Number of Factors

Methods to consistently determine the number of factors for static and dynamic factor models have been proposed in the literature.

1.
Bai & Ng (2002) propose estimating the number of factors as $\hat{k}_{PCP}$ and $\hat{k}_{IC}$, which minimize the information criteria $PCP(k)$ and $IC(k)$, respectively, where

$$PCP(k) = S(k) + kS(kmax)g(N,T)$$
$$IC(k) = \ln(S(k)) + kg(N,T)$$

$S(k) = \frac{1}{NT}\sum_{i=1}^{N}\sum_{t=1}^{T}\left(x_{it}-\hat\lambda_i^{k\prime}\hat{F}_t^k\right)^2$ when $k$ factors are estimated, and $g(N,T)$ is a penalty function which can, for example, take the following forms:

$$g_1(N,T) = \frac{N+T}{NT}\ln\left(\frac{NT}{N+T}\right)$$
$$g_2(N,T) = \frac{N+T}{NT}\ln C_{NT}^2$$
$$g_3(N,T) = \frac{\ln C_{NT}^2}{C_{NT}^2}$$
$$g_4(N,T) = \frac{(N+T-k)\ln(NT)}{NT}$$

where $C_{NT} = \sqrt{N}\wedge\sqrt{T}$. Bai & Ng (2008) argue that $g_4(N,T)$ has good properties when the errors are cross-correlated.

2. Random matrix theory has been used to determine the number of factors by exploiting the largest and smallest eigenvalues of large matrices, whose properties are known for iid normal data; see for example Onatski (2010) and Kapetanios (2010).

Onatski (2009) developed a formal test for the number of factors in data with correlated Gaussian idiosyncratic errors. The idea is to test the slope of the scree diagram, which is a plot of the ordered eigenvalues against the corresponding order number, to identify changes in curvature. Under certain assumptions and large sample properties of random matrices, Onatski (2009)'s proposed test is characterized by the Tracy-Widom distribution.

3. Stock & Watson (2005) consider determining the number of factors in dynamic factor models. Denote by $S(k)$ the sum of squared residuals when $k$ factors are estimated from the $\hat\omega_{it}$, which are the residuals from the restricted regression

$$x_{it} = \lambda_i'A^+(L)F_{t-1} + \rho_i(L)x_{i,t-1} + \omega_{it}$$
$$\omega_{it} = \lambda_i'R_t + e_{it}$$

Then a consistent estimator of the number of factors is given by the minimizers of $PCP(k)$ or $IC(k)$.

4. Bai & Ng (2007) propose determining the number of factors in dynamic factor models. The idea is that, assuming the $r\times r$ matrix $\Sigma_u = R\sigma R'$ has rank $q$, the $r-q$ smallest eigenvalues are 0.
Let $c_1 > \cdots > c_N$ be the ordered eigenvalues of $\Sigma_u$; then a consistent estimator of the number of factors $q$ is the smallest $k$ such that $\hat{D}_k < M_{NT}(\delta)$. $D_k$ and $M_{NT}(\delta)$ are defined as

$$D_k = \left(\frac{c_{k+1}^2}{\sum_{j=1}^{r}c_j^2}\right)^{1/2} \quad \text{or} \quad D_k = \left(\frac{\sum_{j=k+1}^{r}c_j^2}{\sum_{j=1}^{r}c_j^2}\right)^{1/2}$$

and

$$M_{NT}(\delta) = \frac{m}{N^{1/2-\delta}\wedge T^{1/2-\delta}} \quad \text{for } 0 < m < \infty \text{ and } 0 < \delta < 1/2$$

The test is based on the idea that when the true eigenvalues $c_{q+1},\cdots,c_r$ are 0, $D_k$ should be 0 for $k\ge q$, and the cut-off point $M_{NT}(\delta)$ is defined to account for the estimation error.

Other works studying the determination of the number of factors include, for example, Amengual & Watson (2007), Hallin & Liška (2007), Ahn & Horenstein (2013), and Greenaway-McGrevy et al. (2012). Ahn & Horenstein (2013) note that while in large samples the chosen $kmax$ can usually exceed the true number of factors, this may fail in finite samples. They propose estimating the number of factors by maximizing the ratio of two adjacent eigenvalues, or the ratio of their growth rates, which they show outperforms the existing approaches even with small $N$ and $T$, unless the signal-to-noise ratio of the model is too small.

Greenaway-McGrevy et al. (2012) notice that the often-used standardization procedure (dividing each time series in the panel by its sample standard deviation) before the principal component analysis may lead to an inconsistently estimated number of factors. Standardization has been used in, for example, Stock & Watson (2002), Bai & Ng (2006), and Boivin & Ng (2006). Greenaway-McGrevy et al. (2012) show that if the main source of heteroskedasticity of the original data is idiosyncratic, then the standardization does not cause a problem, but if the main source of heteroskedasticity is due to unbounded factor loadings, then standardization may lead to inconsistency. In particular, they show that the IC criteria of Bai & Ng (2002) applied to the standardized panel may over-estimate the factor number.
In addition, they suggest that using the minimum of the factor numbers estimated from the original and standardized data would restore consistency of the factor number estimation in many cases when the source of heteroskedasticity is unclear.

Test the Validity of Factor Proxies

Bai & Ng (2008) review how factor models have wide applications, including factor-augmented regressions, factor-augmented VAR, replacing factors by estimated factors in extremum estimations, replacing IVs by estimated factors in IV regressions, and testing the validity of observed variables as proxies for the unobserved common factors.

Many observed variables, such as inflation, term premia, and the Fama-French factors, have been used to proxy the latent risk factors. To test the validity of $G_t = (G_{1t},\cdots,G_{mt})$, an $m\times 1$ vector of observable variables used to proxy the unobserved factors $F_t$, Bai & Ng (2006) consider the following regression

$$G_{jt} = \gamma_j'\tilde{F}_t + error$$

Denote the least squares estimator by $\hat\gamma_j$, its corresponding t-statistic by $\hat\tau_t(j)$, and the $\alpha$-percentage point of the limiting distribution of $\hat\tau_t(j)$ by $\Phi_\alpha$. The following results can be used as tests.

1. Under $H_0: G_{jt} = \delta_j'F_t$, as $\sqrt{N}/T\to 0$ and $N\to\infty$, $T\to\infty$,

$$A(j) = \frac{1}{T}\sum_{t=1}^{T}I\left(|\hat\tau_t(j)| > \Phi_\alpha\right)\to 2\alpha$$

In addition, if $e_t$ is serially uncorrelated, then

$$P\left(\max_{1\le t\le T}|\hat\tau_t(j)|\le x\right)\approx\left[2\Phi(x)-1\right]^T$$

2. Under $H_0: G_{jt} = \delta_j'F_t + \epsilon_{jt}$, as $N\to\infty$, $T\to\infty$, for each $t$

$$\frac{\hat\epsilon_{jt}-\epsilon_{jt}}{s_{jt}}\xrightarrow{d}N(0,1)$$

where $\hat\epsilon_{jt} = G_{jt}-\hat{G}_{jt}$, $s_{jt}^2 = \frac{1}{T}\tilde{F}_t'\left(T^{-1}\sum_{s=1}^{T}\tilde{F}_s\tilde{F}_s'\hat\epsilon_{js}^2\right)^{-1}\tilde{F}_t + N^{-1}\mathrm{Avar}(\hat{G}_{jt})$, and an estimate of $\mathrm{Avar}(\hat{G}_{jt})$ is $\hat\gamma_j'\tilde{V}^{-1}\tilde\Gamma_t\tilde{V}^{-1}\hat\gamma_j$.

In addition, there are two overall statistics (not depending on $t$):

$$NS(j) = \frac{\widehat{var}(\hat\varepsilon(j))}{\widehat{var}(\hat{G}(j))} \quad \text{and} \quad R^2(j) = \frac{\widehat{var}(\hat{G}(j))}{\widehat{var}(G(j))}$$

$NS(j)$ should be close to 0 and $R^2(j)$ should be close to 1 under the null hypothesis.

3.
Under the null, as $\sqrt{N}/T\to 0$ and $N\to\infty$, $T\to\infty$,

$$\tilde{z}_k = \frac{\sqrt{T}(\tilde\rho_k^2-\rho_k^2)}{2\tilde\rho_k(1-\tilde\rho_k^2)}\xrightarrow{d}N(0,1), \quad k = 1,\cdots,\min[m,r]$$

where $\tilde\rho_1^2,\cdots,\tilde\rho_p^2$ are the largest $p = \min[m,r]$ sample squared canonical correlations between $\tilde{F}$ and $G$, and $\rho_1^2,\cdots,\rho_p^2$ are the true canonical correlation coefficients between $F_t$ and $G_t$, assuming $(F_t',G_t')'$ are iid normally distributed.

Panels with Factor Error Structures

Consider the model

$$Y_{it} = X_{it}'\beta + u_{it} \quad (1.20)$$
$$u_{it} = \lambda_i'F_t + \varepsilon_{it}$$

where $X_{it}$ is a $p\times 1$ vector of observable regressors, $\beta$ is a vector of unknown coefficients, $u_{it}$ has a factor structure, and $\lambda_i, F_t, \varepsilon_{it}$ are unobservable. If we define $F_t = (1,\xi_t)'$ and $\lambda_i = (\alpha_i,1)'$, we can see that (1.20) accommodates the additive fixed effects specification.

The unknown parameters $(\beta,\Lambda,F)$ can be estimated from the constrained least squares problem

$$\min_{\beta,\Lambda,F}\sum_{i=1}^{N}\|Y_i-X_i\beta-F\lambda_i\|^2 \quad \text{s.t. } F'F/T = I_r \text{ and } \Lambda'\Lambda \text{ diagonal}$$

In practice, the estimators $(\hat\beta,\hat{F})$ are solved iteratively from the following system of nonlinear equations (1.21) and (1.22), from which $\hat\Lambda = W'\hat{F}/T$ is then constructed:

$$\hat\beta = \left(\sum_{i=1}^{N}X_i'M_{\hat{F}}X_i\right)^{-1}\sum_{i=1}^{N}X_i'M_{\hat{F}}Y_i \quad (1.21)$$

$$\left(\frac{1}{NT}\sum_{i=1}^{N}(Y_i-X_i\beta)(Y_i-X_i\beta)'\right)\hat{F} = \hat{F}V_{NT} \quad (1.22)$$

The system (1.21)-(1.22) is obtained by observing that $\beta$ can be solved for fixed $F$ as $\hat\beta(F) = \left(\sum_{i=1}^{N}X_i'M_FX_i\right)^{-1}\sum_{i=1}^{N}X_i'M_FY_i$, where $M_F = I_T-F(F'F)^{-1}F'$, and in turn $F$ can be solved for fixed $\beta$ as the $r$ eigenvectors associated with the $r$ largest eigenvalues of the matrix $WW' = \sum_{i=1}^{N}W_iW_i' = \sum_{i=1}^{N}(Y_i-X_i\beta)(Y_i-X_i\beta)'$.

The asymptotic behavior of the estimators differs under different assumptions on the errors $\varepsilon_{it}$.
1) If $\varepsilon_{it}$ is iid over all $i$ and $t$, then $\hat\beta$ is consistent and asymptotically normal. 2) If $\varepsilon_{it}$ is correlated and heteroskedastic only in the cross-section dimension, and $T/N\to\rho > 0$, then $\hat\beta$ has an asymptotic bias. 3) If $\varepsilon_{it}$ is correlated and heteroskedastic only in the time dimension, and $T/N\to\rho > 0$, then $\hat\beta$ also has an asymptotic bias. Bai & Carrion-i Silvestre (2012) derive the bias-corrected estimators. Other authors considering the estimation of panel data models with factor error structures include Coakley et al. (2002), among others.

Bai & Li (2012) consider MLE for factor models of high dimension, where $N$ is comparable with or even greater than $T$. They prove consistency and derive the limiting distributions of the estimators, and show that the distributions of the MLE depend on the identification restrictions. Classical principal components analysis does not efficiently estimate the factor loadings or common factors because it essentially treats the errors as homoskedastic and cross-sectionally uncorrelated. For efficient estimation it is essential to estimate a large error covariance matrix. Bai & Liao (2016) study a high dimensional approximate factor model with both cross-sectional dependence and heteroskedasticity. They propose to estimate the common factors and factor loadings based on QMLE, which involves regularizing a large sparse error covariance matrix. They show that the proposed approaches, by taking the large error covariance matrix into account, are more efficient than classical PCA methods or methods based on a strict factor model.

1.1.2 Nonlinear Models

Arellano & Bonhomme (2011) survey recent research in nonlinear panel data models. They emphasize estimating random effects with a Bayesian approach. They show that the properties of misspecified random-effects ML in nonlinear models are similar to those of fixed effects ML. As a result, random-effects estimators are generally inconsistent for fixed $T$. Moreover, point-identification becomes problematic.
They then discuss in detail the identification problem when $T$ is fixed and the distribution of individual effects is unrestricted. In discrete choice panel models, structural parameters are typically set-identified, unless the model belongs to a very specific parametric class (logistic). They review various approaches to constructing population identified sets, and they argue that panel data offer opportunities for point-identification. They also argue that restricting the conditional distribution of individual effects given exogenous covariates may be another source of point-identification. Lastly, they review recent bias reduction methods. In general, random-effects estimates are consistent as $T$ increases but suffer from finite-sample bias, and in this context they discuss the estimation of average marginal effects.

Some nonlinear models of applied interest are listed in Arellano & Bonhomme (2011):

• the location-scale model with heterogeneous volatility

$$y_{it} = (x_{it}'\beta+\alpha_{0i}) + \sigma(x_{it}'\gamma+\alpha_{1i})v_{it}$$

with semi-parametric generalization

$$y_{it} = x_{it}'\beta(u_{it}) + \alpha_i\gamma(u_{it})$$

where $u_{it}$ is the rank of the error $v_{it}$ so that $(u_{it}|x_{i1},\cdots,x_{iT},\alpha_i)\sim U(0,1)$, and $\beta(u)$ and $\gamma(u)$ are nonparametric functions;

• Chamberlain's 1992 linear random coefficient model

$$y_{it} = g_0(x_{it},\theta) + g_1(x_{it},\theta)'\alpha_i + v_{it}$$

• nonadditive unobservables in discrete choice models

$$y_{it} = 1\{F(x_{it}'\beta+\alpha_i)\ge u_{it}\}$$

where $y_{it}$ is a 0-1 indicator of participation, $1(\cdot)$ is the indicator function, $u_{it}$ is a rank variable, and $F(\cdot)$ is a cumulative distribution function;

• nonadditive fixed effects in continuous response functions, such as the heterogeneous constant elasticity of substitution (CES) production function

$$\log y_{it} = \lambda\log l_{it} + (1-\lambda)\log\left[\gamma x_{it}^{\sigma_i}+(1-\gamma)z_{it}^{\sigma_i}\right]^{1/\sigma_i} + \alpha_i + v_{it}$$

Endogenous Cross-Sectional Dependence

Motivated by the herding behavior of agents, Mitchell et al.
(2014) propose a class of nonlinear panel data models, which they point out is the first attempt in the literature to introduce endogenous cross-sectional dependence into the panel data framework. They show that their model can endogenously generate both weak and strong cross-sectional dependence and nests various extant dynamic panel data models. They explain that their model is similar to models which discuss weak and strong cross-sectional dependence based on the characteristics of the variance-covariance matrix of the data and are dynamic in nature, being instances of large dimensional VAR models. The model of Mitchell et al. (2014) can be viewed as a particular instance of a nonlinear large dimensional VAR.

The baseline model in Mitchell et al. (2014) is given by

$$x_{i,t} = \frac{\rho}{m_{i,t}}\sum_{j=1}^{N}1(|x_{i,t-1}-x_{j,t-1}|\le r)x_{j,t-1} + \epsilon_{i,t} \quad (1.23)$$

where

$$m_{i,t} = \sum_{j=1}^{N}1(|x_{i,t-1}-x_{j,t-1}|\le r)$$

(1.23) states that $x_{i,t}$ is influenced by the cross-sectional average of the selection of $x_{j,t-1}$ which are close to $x_{i,t-1}$. Mitchell et al. (2014) explain the relationship between (1.23) and threshold AR (TAR) models, factor models, and spatial AR and MA models. The difference between (1.23) and TAR models is that (1.23) allows agents to interact in a dynamic and nonlinear way. Factor models have both the maximum eigenvalue and the row or column sum norm of the covariance matrix be $O(N)$, and the column sum norm of the variance-covariance matrix of $x_t$ in model (1.23) is also $O(N)$. The main difference between (1.23) and factor models is that factor models are intrinsically reduced form, modeling cross-sectional dependence through exogenous and unobserved factors, whereas (1.23) has a parametric structure, allowing for structural and economic interpretations. (1.23) is also more general than spatial models, in the sense that the weighting schemes are estimated endogenously rather than assumed ex ante.

Mitchell et al. (2014) point out that (1.23) nests two interesting models as extremes.
When $r = 0$:

$$x_{i,t} = \rho x_{i,t-1} + \epsilon_{i,t}$$

When $r\to\infty$:

$$x_{i,t} = \frac{\rho}{N}\sum_{j=1}^{N}x_{j,t-1} + \epsilon_{i,t}$$

Mitchell et al. (2014) then consider dozens of extensions of (1.23), some of which are listed below.

• unbalanced panel

$$x_{i,t} = \rho\frac{\sum_{j=1}^{N_{t-1}}1(|x_{i,t-1}-x_{j,t-1}|\le r)x_{j,t-1}}{\sum_{j=1}^{N_{t-1}}1(|x_{i,t-1}-x_{j,t-1}|\le r)} + \epsilon_{i,t}, \quad \text{or} \quad x_{i,t} = \rho\frac{\sum_{j=1}^{N_{s_{i,t}}}1(|x_{i,s_{i,t}}-x_{j,s_{i,t}}|\le r)x_{j,s_{i,t}}}{\sum_{j=1}^{N_{s_{i,t}}}1(|x_{i,s_{i,t}}-x_{j,s_{i,t}}|\le r)} + \epsilon_{i,t}$$

• random effects

$$x_{i,t} = \nu_i + \rho\frac{\sum_{j=1}^{N}1(|x_{i,t-1}-x_{j,t-1}|\le r)x_{j,t-1}}{\sum_{j=1}^{N}1(|x_{i,t-1}-x_{j,t-1}|\le r)} + \epsilon_{i,t} \quad \text{or} \quad x_{i,t} = \nu_i\zeta_t + \rho\frac{\sum_{j=1}^{N}1(|x_{i,t-1}-x_{j,t-1}|\le r)x_{j,t-1}}{\sum_{j=1}^{N}1(|x_{i,t-1}-x_{j,t-1}|\le r)} + \epsilon_{i,t}$$

• $p$-th lags

$$x_{i,t} = \sum_{s=1}^{p}\left[\rho_s\frac{\sum_{j=1}^{N}1(|x_{i,t-s}-x_{j,t-s}|\le r)x_{j,t-s}}{\sum_{j=1}^{N}1(|x_{i,t-s}-x_{j,t-s}|\le r)}\right] + \epsilon_{i,t}$$

• $q$ regimes

$$x_{i,t} = \sum_{s=1}^{q}\left[\rho_s\frac{\sum_{j=1}^{N}1(r_s\le|x_{i,t-1}-x_{j,t-1}| < r_{s+1})x_{j,t-s}}{\sum_{j=1}^{N}1(r_s\le|x_{i,t-1}-x_{j,t-1}| < r_{s+1})}\right] + \epsilon_{i,t}$$

• weighted

$$x_{i,t} = \rho\frac{\sum_{j=1}^{N}1(d_{ij}\le r)\omega_{ij}x_{j,t-1}}{\sum_{j=1}^{N}1(d_{ij}\le r)} + \epsilon_{i,t}, \quad \text{with } d_{ij} = |x_{i,t-1}-x_{j,t-1}|, \; \omega_{ij} = \frac{d_{ij}^{-2}}{\sum_{j=1}^{N}d_{ij}^{-2}}, \; \omega_{ii} = 1$$

• general distance

$$x_{i,t} = \rho\frac{\sum_{j=1}^{N}\omega(|x_{i,t-1}-x_{j,t-1}|;\gamma)x_{j,t-1}}{\sum_{j=1}^{N}\omega(|x_{i,t-1}-x_{j,t-1}|;\gamma)} + \epsilon_{i,t}$$

• other variables

$$x_{i,t} = \rho\frac{\sum_{j=1}^{N}1(|x_{i,t-1}-x_{j,t-1}|\le r)x_{j,t-1}}{\sum_{j=1}^{N}1(|x_{i,t-1}-x_{j,t-1}|\le r)} + \beta z_{i,t} + \epsilon_{i,t}$$

or

$$x_{i,t} = \rho\frac{\sum_{j=1}^{N}1(|x_{i,t-1}-x_{j,t-1}|\le r_1)x_{j,t-1}}{\sum_{j=1}^{N}1(|x_{i,t-1}-x_{j,t-1}|\le r_1)} + \beta\frac{\sum_{j=1}^{N}1(|z_{i,t-1}-z_{j,t-1}|\le r_2)z_{j,t-1}}{\sum_{j=1}^{N}1(|z_{i,t-1}-z_{j,t-1}|\le r_2)} + \epsilon_{i,t}$$

• intersection of triggers

$$x_{i,t} = \beta\frac{\sum_{j=1}^{N}1\left(\bigcap_{s=1}^{p}\left\{|z_{i,t-1}^{(s)}-z_{j,t-1}^{(s)}|\le r_s\right\}\right)x_{j,t-1}}{\sum_{j=1}^{N}1\left(\bigcap_{s=1}^{p}\left\{|z_{i,t-1}^{(s)}-z_{j,t-1}^{(s)}|\le r_s\right\}\right)} + \epsilon_{i,t}$$

• factor augmented

$$x_{i,t} = \rho\frac{\sum_{j=1}^{N}1(|x_{i,t-1}-x_{j,t-1}|\le r)x_{j,t-1}}{\sum_{j=1}^{N}1(|x_{i,t-1}-x_{j,t-1}|\le r)} + \lambda_i'f_t + \epsilon_{i,t}$$

• contemporaneous dependence

$$x_{i,t} = \rho\frac{\sum_{j\ne i}1(|x_{i,t}-x_{j,t}|\le r_0)x_{j,t}}{\sum_{j\ne i}1(|x_{i,t}-x_{j,t}|\le r_0)} + \beta\frac{\sum_{j=1}^{N}1(|x_{i,t-1}-x_{j,t-1}|\le r_1)x_{j,t-1}}{\sum_{j=1}^{N}1(|x_{i,t-1}-x_{j,t-1}|\le r_1)} + \epsilon_{i,t}$$

• general set

$$x_{i,t} = \rho\frac{\sum_{j=1}^{N}1(j\in S_{i,t-1})x_{j,t}}{\sum_{j=1}^{N}1(j\in S_{i,t-1})}$$
$+ \epsilon_{i,t}$

Estimation of (1.23) follows the estimation procedure for threshold models: a grid of $r$ values is first constructed, and then, given each $r$, the parameter $\rho$ is estimated by least squares. Mitchell et al. (2014) prove consistency of the estimators for (1.23) under certain assumptions, and they also prove or discuss the estimation of the extended versions of (1.23). Mitchell et al. (2014) find that the within-group estimator of the fixed effect extension of (1.23) does not suffer from bias. In addition, they point out the misconception of setting up an AR model for the aggregate variable ($\bar{x}_t$), because based on (1.23), $\bar{x}_t$ does not always admit a linear AR representation, except in the two extreme cases $r = 0$ and $r\to\infty$.

MIDAS Covariates

An often encountered problem in applied studies is that the available data are sampled at different frequencies. For example, when the dependent variable and the explanatory variables are sampled at annual and quarterly frequencies, applications often either annualize the quarterly data or assume that the dependent variable stays unchanged within each year. Mixed data sampling (MIDAS) has been proposed and studied in order to deal with regressions involving time series data at different frequencies; see Ghysels et al. (2007) and Ghysels & Valkanov (2012) for example.

Khalaf et al. (2014) introduce MIDAS to a dynamic panel model. The model is

$$y_{i,t} = \delta y_{i,t-1} + \beta B(L^{1/m})x_{i,t}^{(m)} + u_{i,t} \quad (1.24)$$
$$u_{i,t} = \mu_i + v_{i,t}, \quad \mu_i\sim iid(0,\sigma_\mu^2), \quad v_{i,t}\sim iid(0,\sigma_v^2)$$

where the superscript $(m)$ denotes that $x_{i,t}^{(m)}$ is of higher frequency and $B(L^{1/m})$ denotes the MIDAS lag operator, which is given by

$$B(L^{1/m}) = \sum_{j=0}^{j_{max}}B(j)L^{j/m}$$

with $B(j)$ being either the Exponential Almon lag

$$B(j,\theta) = \frac{e^{\theta_1 j+\cdots+\theta_q j^q}}{\sum_{k=1}^{j_{max}}e^{\theta_1 k+\cdots+\theta_q k^q}}$$

or the Beta lag

$$B(j,\theta_1,\theta_2) = \frac{f\left(\frac{j}{j_{max}},\theta_1,\theta_2\right)}{\sum_{k=1}^{j_{max}}f\left(\frac{k}{j_{max}},\theta_1,\theta_2\right)}$$

Khalaf et al. (2014) discuss a two-step estimation procedure.
In the first step, $\theta_0$ is assumed to be known and a differenced version of (1.24) is estimated by Arellano & Bond (1991) two-step GMM. In the second step, a numerical search is used to obtain a confidence region for $\theta$ from the inverted J statistic $J(\theta_0)$. Khalaf et al. (2014) carry out simulations for illustration, but a theoretical justification of the procedure was not provided.

Quantile Regression

Galvao & Montes-Rojas (2010), Galvao (2011), Feng (2011), Canay (2011), and Arellano & Bonhomme (2016), among others, extended panel quantile regressions to allow for dynamic effects. Galvao & Montes-Rojas (2010) study penalized quantile regression for dynamic panel data with fixed effects, where the penalty involves $L_1$ shrinkage of the fixed effects. Galvao (2011) estimates a dynamic panel data model with fixed effects without a penalty. Feng (2011) considers a dynamic random effects panel data model and uses an $L_2$ penalized estimator. Canay (2011) provides sufficient conditions to point identify a quantile regression with fixed effects and proposes a data transformation to remove the fixed effects, assuming these effects are location shifters.

Fixed Effects

Galvao (2011), for example, considers the fixed effects model

$$y_{it} = \eta_i + \alpha y_{i,t-1} + x_{it}'\beta + u_{it}$$

where $\eta_i$ denotes the individual fixed effects, $x_{it}$ is a $p$-vector of exogenous covariates, and $u_{it}$ is the innovation term. The $\tau$-th conditional quantile of $y_{it}$ is

$$Q_{y_{it}}(\tau|y_{i,t-1},x_{it}) = \eta_i + \alpha(\tau)y_{i,t-1} + x_{it}'\beta(\tau) \quad (1.25)$$

The estimation first fixes $\alpha$ and estimates $(\hat\eta_i(\alpha_j,\tau),\hat\beta(\alpha_j,\tau),\hat\gamma(\alpha_j,\tau))$ by solving
Then, for a positive definite matrix $\hat{A}(\tau)$, $\hat\alpha(\tau)$ is found by solving

$\min_{\alpha \in \mathcal{A}}\; \hat\gamma(\alpha, \tau)'\, \hat{A}(\tau)\, \hat\gamma(\alpha, \tau)$

Galvao (2011) points out that the one-step-ahead forecast of the quantile function of $y_{it}$ for an individual $i$ is then given by

$\hat{Q}_{y_{i,T+1}}(\tau \mid y_{iT}, x_{i,T+1}) = \hat\eta_i + \hat\alpha(\tau) y_{iT} + x'_{i,T+1}\hat\beta(\tau)$

and the $(1-\lambda)$-level prediction interval for an $s$-step-ahead forecast is

$\big[\, Q_{y_{i,T+s}}(\lambda/2 - h_n),\; Q_{y_{i,T+s}}(1 - \lambda/2 + h_n) \,\big]$

where $h_n \to 0$ accounts for parameter uncertainty.

Random Effects

Arellano & Bonhomme (2016) propose a random effects approach for panel quantile regressions, which treats individual unobserved heterogeneity as time-invariant missing data. They start by introducing the methods for a static model, and then discuss extensions to dynamic settings, which include autoregressive models, models with general predetermined regressors, and models with autocorrelated errors. In the first extension, they model the conditional distribution of $Y_{it}$ given $Y^{t-1}_i = (Y_{i,t-1}, \cdots, Y_{i1})$, strictly exogenous variables $X_i$, and individual effects $\eta_i$ in the following first-order autoregressive form:

$Y_{it} = Q_Y(Y_{i,t-1}, X_{it}, \eta_i, U_{it})$   (1.26)

$U_{it}$ denotes a scalar error term, which is assumed to have a conditional $U(0,1)$ distribution and to be independent of $X_{is}$ and $U_{is}$ for all $s \neq t$. Arellano & Bonhomme (2016) sketch a nonparametric identification argument extending Hu & Schennach (2008) along the lines of Hu & Shum (2012). For $\tau \in (0,1)$, the random effects quantile regression (REQR) model consists of a conditional quantile of $Y_{it}$ and an additional layer for the conditional quantile of $\eta_i$:

$Y_{it} = h(Y_{i,t-1})'\alpha(U_{it}) + X'_{it}\beta(U_{it}) + \eta_i \gamma(U_{it})$
$\eta_i = X'_i \delta_1(V_i) + \delta_2(V_i) Y_{i1}$   (1.27)

where $U_{it} \mid Y^{t-1}_i, X_i, \eta_i \sim U(0,1)$ and $V_i \mid Y_{i1}, X_i \sim U(0,1)$. $h(y)$ is a function of $y$: for example, when $h(y) = |y|$, model (1.27) is a panel data version of the conditional autoregressive value at risk (CAViaR) model of Engle & Manganelli (2004).
Other choices lead to panel counterparts of different dynamic quantile models. The implied conditional quantile functions for $\tau \in (0,1)$ are

$Q_Y(Y_{i,t-1}, X_{it}, \eta_i, \tau) = h(Y_{i,t-1})'\alpha(\tau) + X'_{it}\beta(\tau) + \eta_i \gamma(\tau)$
$Q_\eta(Y_{i1}, X_i, \tau) = X'_i \delta_1(\tau) + \delta_2(\tau) Y_{i1}$

Then, Arellano & Bonhomme (2016) explain that their sequential method-of-moments estimation algorithm, which modifies the standard EM algorithm and is proved consistent for their static baseline model, extends to the dynamic setting by replacing the posterior density function with

$f(\eta \mid y, x; \xi) = \frac{\prod_{t=2}^{T} f(y_t \mid y_{t-1}, x_t, \eta; \xi_A)\, f(\eta \mid y_1, x; \xi_B)}{\int \prod_{t=2}^{T} f(y_t \mid y_{t-1}, x_t, \tilde\eta; \xi_A)\, f(\tilde\eta \mid y_1, x; \xi_B)\, d\tilde\eta}$

where $\xi_A = (\theta(\tau_1)', \cdots, \theta(\tau_L)')'$, $\xi_B = (\delta(\tau_1)', \cdots, \delta(\tau_L)')'$, $\theta(\cdot) = (\alpha(\cdot)', \beta(\cdot)', \gamma(\cdot)')'$, $\delta(\cdot) = (\delta_1(\cdot)', \delta_2(\cdot))'$, and $0 < \tau_1 < \cdots < \tau_L < 1$ are knots. The two other extensions they consider are a dynamic model with general predetermined regressors and a static model with serially correlated disturbances.

Miscellaneous

A number of sampling deficiencies of panel data have been pointed out in the literature, among them missing values, unbalanced panels, sparsely sampled data, and infrequent and irregularly spaced observations. Some recent research aims to address these issues. Yao et al. (2005a) develop a version of functional principal components (FPC) analysis, which they refer to as Principal Components Analysis through Conditional Expectation (PACE) for longitudinal data, and apply it to panel data in which only a few repeated and irregularly spaced measurements are available per individual. Yao et al. (2005a) point out that PACE can also be used to impute missing data from predicted trajectories. Yao et al. (2005b) develop a version of functional linear regression analysis in which both the predictor and the response variable are functions of some covariate.
The model aims to predict unknown response trajectories based on sparse and noisy observations of new predictor functions. Using a conditioning idea, the functional regression approach leads to improved prediction of the response trajectories. Aguilera et al. (2008) assume a functional logit model to forecast the time evolution of a binary response variable from a related continuous time series. Estimation of this model from discrete-time observations of the predictor is carried out using functional principal component analysis and ARIMA modeling of the associated discrete time series of principal components. Building on the functional linear regression model for sparse and irregular data, Wu & Müller (2011) propose a response-adaptive model for functional linear regression with longitudinal data, which models the regression relationship by conditioning the observed responses directly on the predictors, instead of conditioning the response FPCs on the predictors. Wu & Müller (2011) demonstrate that this approach is superior to previous functional regression approaches in reducing prediction errors.

Chow & Zhang (2008) introduce a continuous-time modeling approach for irregularly spaced panel data using cubic spline interpolation. Oud & Singer (2008) introduce continuous-time modeling for panel data and compare the Kalman filter procedure and the Structural Equation Model (SEM) estimation of the Exact Discrete Model (EDM). Oud et al. (2012) introduce spatial dependence into continuous-time panel data models and propose a nonlinear SEM with latent variables for estimation of the EDM. A continuous-time point of view is also adopted by Chang et al. (2010), who reexamine the Fama-French factor models using high frequency financial panel data. Chang et al. (2010) derive a continuous-time multifactor pricing model and consider the corresponding panel regression.
They point out that the model they assume is very general, allowing for time-varying and stochastic volatilities that are both nonstationary and endogenous. The error term is assumed to be a martingale differential, consisting of a common component whose volatility is driven solely by the market and a cross-sectionally independent idiosyncratic component with asymptotically stationary volatility. Chang et al. (2010) then develop a new methodology that samples at random intervals based on time change and estimates the variance of the resulting sample with realized variance. They estimate and test the CAPM and various multifactor Fama-French models on several data sets of daily equity returns. They find that the random sampling approach offers a reliable statistical method to estimate and test multifactor asset pricing models with high frequency data, while conventional regressions with fixed-interval sampling that ignore time-varying volatilities give logically inconsistent test results.

1.2 Forecasts

1.2.1 BLUP

The acronym BLUP was coined by Goldberger (1962) to refer to the best linear unbiased predictor, in the sense that the predictor has minimum variance among the class of predictors that are linear combinations of global parameters and subject-specific effects. BLUP has been widely applied in many areas of prediction, for example in biostatistics and geostatistics. Following Frees (2004, page 130), suppose we observe an $N \times 1$ random vector $y$ with

$E(y) = X\beta$, $Var(y) = V$

Then the BLUP of $w$ such that $E(w) = \lambda'\beta$ and $Var(w) = \sigma^2_w$ is given by

$\hat{w}_{BLUP} = \lambda' b_{GLS} + Cov(w, y) V^{-1} (y - X b_{GLS}) = (\lambda' - Cov(w, y) V^{-1} X) b_{GLS} + Cov(w, y) V^{-1} y$

where $b_{GLS} = (X'V^{-1}X)^{-1} X'V^{-1}y$ is the GLS estimator of $\beta$. Substituting an estimate $\hat{V}$ for the unknown $V$ yields what is known as the empirical BLUP. Although derived from the frequentist point of view, BLUP has an interesting Bayesian interpretation.
In a linear mixed effects model $y_i = Z_i \alpha_i + X_i \beta + \epsilon_i$ (see Frees (2004)), assuming Normal prior distributions, the posterior distributions satisfy $E(\alpha \mid y) = \alpha_{BLUP}$ and $E(\beta \mid y) \to \beta_{BLUP}$ as $\Sigma^{-1}_\beta \to 0$.

Several different panel data models, both static and dynamic, with uncorrelated, serially correlated, and spatially correlated error terms, have been considered in terms of BLUP forecasts; Baltagi (2008) and Baltagi et al. (2012), among others, contain many of the results. For the most commonly applied one-way error component model $y_{i,t} = \alpha + x'_{it}\beta + u_{it}$ with $u_{it} = \mu_i + v_{it}$, or in matrix form $y = \alpha l_{NT} + X\beta + u = Z\delta + Z_\mu \mu + v$, Baltagi (2008) cites the result from Goldberger (1962) that the BLUP of $y_{i,T+S}$ is

$\hat{y}_{i,T+S} = Z'_{i,T+S}\, \hat\delta_{GLS} + \omega' \Omega^{-1} \hat{u}_{GLS}$   (1.28)

where $\omega = E(u_{i,T+S}\, u)$. (1.28) is a general formula, and the BLUPs for different models differ only in the expressions entering the formula, in particular the expression of the second term. The formula conveys the intuition that BLUP corrects the GLS predictor by adding a fraction of the mean of the GLS residuals corresponding to the $i$-th individual. Frees & Miller (2004) consider forecasting using a longitudinal data mixed model, which does not require balanced data and allows for covariates associated with vector error components:

$y_{it} = z'_{\alpha,i,t} \alpha_i + z'_{\lambda,i,t} \lambda_t + x'_{it}\beta + \varepsilon_{it}$   (1.29)

They derive the BLUP for model (1.29) to be

$\hat{y}_{i,T_i+S} = x'_{i,T_i+S}\, b_{GLS} + z'_{\alpha,i,T_i+S}\, \alpha_{i,BLUP} + z'_{\lambda,i,T_i+S}\, Cov(\lambda_{T_i+S}, \lambda)' \Sigma^{-1}_\lambda \lambda_{BLUP} + Cov(\varepsilon_{i,T_i+S}, \varepsilon_i)' R^{-1}_i e_{i,BLUP}$   (1.30)

This expression shows that the BLUP contains three components: the first, $x'_{i,T_i+S}\, b_{GLS} + z'_{\alpha,i,T_i+S}\, \alpha_{i,BLUP}$, is due to the conditional mean; the second, $z'_{\lambda,i,T_i+S}\, Cov(\lambda_{T_i+S}, \lambda)' \Sigma^{-1}_\lambda \lambda_{BLUP}$, is due to the time-varying coefficients; and the third, $Cov(\varepsilon_{i,T_i+S}, \varepsilon_i)' R^{-1}_i e_{i,BLUP}$, is a correction term for serial correlation.
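The Goldberger/Frees BLUP formula is mechanical to compute once $V$ and $Cov(w, y)$ are specified. A minimal numerical sketch follows; the function name and the toy inputs in the test are ours, not from the cited references.

```python
import numpy as np

def blup(y, X, V, lam, cov_wy):
    """BLUP of a scalar w with E(w) = lam' beta:
    w_hat = lam' b_GLS + Cov(w, y) V^{-1} (y - X b_GLS),
    where b_GLS = (X' V^{-1} X)^{-1} X' V^{-1} y."""
    Vinv = np.linalg.inv(V)
    b_gls = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
    return float(lam @ b_gls + cov_wy @ Vinv @ (y - X @ b_gls))
```

When $Cov(w, y) = 0$ and $V = \sigma^2 I$, the predictor collapses to the GLS (here OLS) point prediction $\lambda' b_{GLS}$; nonzero $Cov(w, y)$ adds the residual-based correction discussed above.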
1.2.2 Forecast Evaluation

Loss Functions

Denote the loss function of the forecast error $e_{t+s} = Y_{t+s} - f_{t,s}$ as $L = L(Y_{t+s}, f_{t,s})$, where $f_{t,s}$ is the forecast made at time $t$ for the variable $Y_{t+s}$, $s$ periods ahead. Lee (2007) points out that the loss function can depend on the time of the prediction $t+s$, and it can also depend on the value of the variable. The following properties are required of a loss function (see, for example, Lee (2007)):

1. $L(0) = 0$
2. $\min_e L(e) = 0$, so $L(e) \geq 0$
3. $L(e)$ is monotonically non-decreasing as $e$ moves away from 0: $L(e_1) \geq L(e_2)$ if $e_1 > e_2 \geq 0$ or if $e_1 < e_2 \leq 0$

In addition, if both $L_1$ and $L_2$ are loss functions, then further loss functions can be generated as:

1. $L(e) = a L_1(e) + b L_2(e)$, $a \geq 0$, $b \geq 0$
2. $L(e) = L_1(e)^a L_2(e)^b$, $a > 0$, $b > 0$
3. $L(e) = 1(e > 0) L_1(e) + 1(e < 0) L_2(e)$
4. $L(e) = h(L_1(e)) - h(0)$ for a positive monotonic non-decreasing function $h(\cdot)$ with $h(0)$ finite

Elliott et al. (2005) consider a general family of loss functions

$L(y, f) = [\alpha + (1 - 2\alpha)\, 1(y - f < 0)]\, |y - f|^p$

When $p = 1$, this is the piecewise linear (lin-lin) loss function, which nests the MAE loss function $L(y, f) = |y - f|$ for $\alpha = 0.5$. When $p = 2$, the asymmetric quadratic loss function nests the MSE loss function $L(y, f) = (y - f)^2$ for $\alpha = 0.5$.

Clatworthy et al. (2012) note that extant research into analysts' earnings forecasts explains observed forecast bias as resulting from analysts minimizing the mean absolute forecast error under symmetric linear loss functions: when the distribution of earnings outcomes is skewed, the optimal forecast can appear biased, while under asymmetric loss functions the optimal forecast will appear biased even if earnings outcomes are symmetric. Clatworthy et al. (2012) exploit the Linex loss function to discriminate between the symmetric linear loss and the asymmetric loss explanations of analyst forecast bias, and their empirical results support the asymmetric loss function explanation.
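A direct implementation of the Elliott et al. (2005) loss family can make the nesting explicit. This is a minimal sketch; the function name is ours.

```python
def ekt_loss(y, f, alpha=0.5, p=2):
    """Elliott-Komunjer-Timmermann loss:
    [alpha + (1 - 2*alpha) * 1{y - f < 0}] * |y - f|**p.
    alpha=0.5, p=2 gives squared-error loss up to the factor 0.5;
    alpha=0.5, p=1 gives absolute-error loss up to the factor 0.5."""
    e = y - f
    weight = alpha + (1 - 2 * alpha) * (1.0 if e < 0 else 0.0)
    return weight * abs(e) ** p
```

With $\alpha = 0.8$ and $p = 1$, for instance, under-predictions are penalized at rate $0.8$ and over-predictions at rate $0.2$, reproducing the asymmetric lin-lin case.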
Examples of asymmetric loss functions include:

- Check function

$L(y, f) = (\alpha - 1(y < f))(y - f)$

The check function is also known as the tick function or lin-lin loss function. When $\alpha \neq 0.5$, the check function is an asymmetric loss function (when $\alpha = 0.5$, it is proportional to the absolute error loss, which is symmetric). The optimal forecast under the check function is the $\alpha$-conditional quantile.

- Linex loss function

$L(e) = \exp(ae) - ae - 1$

where $a$ is a scalar that controls the degree and direction of asymmetry. The Linex function is differentiable. If $a > 0$, the Linex is exponential for $e > 0$ and linear for $e < 0$. If $a < 0$, the Linex is exponential for $e < 0$ and linear for $e > 0$. To make the Linex more flexible, it can be modified to the double Linex loss function

$L(e) = L_1(e, a) + L_1(e, -b) = \exp(ae) + \exp(-be) - (a - b)e - 2$

which is exponential for all values of $e$.

To evaluate point forecasts of financial time series, we can also consider the directional loss function $L(y, f) = -sgn(f)\, sgn(y)$ with $sgn(u) = 1(u > 0) - 1(u < 0)$.

Evaluation Criteria

Diebold & Lopez (1996) review many useful evaluation techniques.

- Mean prediction bias (Spitzer & Baillie (1983))

$\frac{1}{N} \sum_{i=1}^{N} (y_{i,t+s} - \hat{y}_{i,t+s})$

- Mean squared error of prediction (Kouassi et al. (2011))

$MSE = \frac{1}{N} \sum_{i=1}^{N} (y_{i,t+s} - \hat{y}_{i,t+s})^2$

- Mean absolute error

$MAD = \frac{1}{N} \sum_{i=1}^{N} |y_{i,t+s} - \hat{y}_{i,t+s}|$

- Median absolute error

$MedianAD = median(|y_{i,t+s} - \hat{y}_{i,t+s}|)$

- Mean absolute percentage error (MAPE)

$MAPE = \frac{1}{N} \sum_{i=1}^{N} \left| \frac{y_{i,t+s} - \hat{y}_{i,t+s}}{y_{i,t+s}} \right|$

- Symmetric MAPE (sMAPE)

$MAPE_{sym} = \frac{1}{N} \sum_{i=1}^{N} \left| \frac{y_{i,t+s} - \hat{y}_{i,t+s}}{y_{i,t+s} + \hat{y}_{i,t+s}} \right|$

- Scaled errors (Hyndman & Koehler (2006))

$q = \frac{y_{i,t+s} - \hat{y}_{i,t+s}}{\frac{1}{T-1} \sum_{t=2}^{T} |y_t - y_{t-1}|}$

Since the numerator and denominator both involve values on the scale of the original data, $q$ is independent of the scale of the data. $q < 1$ if it arises from a better forecast than the average in-sample naive forecast; conversely, $q > 1$ if it arises from a worse one.
- Theil U statistic

$U = \frac{\sum_{t=1}^{T} (y_{t+s} - \hat{y}_{t+s})^2}{\sum_{t=1}^{T} (y_{t+s} - y_t)^2}$

The Theil U statistic is a relative measure, which compares forecast accuracy against a benchmark, here the naive no-change forecast.

- Rank-based test (Stekler (1988))

Given $M$ competing forecasts, assign to each forecast at each time a rank according to its accuracy, and aggregate the period-by-period ranks for each forecast $m$ as $H_m = \sum_{t=1}^{T} rank(L(y_{t+s}, \hat{y}^m_{t+s}))$. The chi-squared goodness-of-fit statistic

$H = \sum_{m=1}^{M} \frac{(H_m - MT/2)^2}{MT/2} \sim \chi^2_{M-1}$

tests the hypothesis that each set of forecasts has equal expected loss. The test requires the rankings to be independent over space and over time, but simple modifications along the lines of the Bonferroni bounds test may be made if the rankings are temporally $(s-1)$-dependent. Moreover, exact versions of the test may be obtained by exploiting Fisher's randomization principle (Bradley (1968)).

- DM test (Diebold & Mariano (1995))

A limitation of Stekler's rank-based approach is that information on the magnitude of differences in expected loss across forecasts is discarded. In many cases, one wants to know how much forecasts differ from each other, and even the sampling distribution of the loss differentials. Diebold & Mariano (1995) develop a test for zero expected loss differential that allows for forecast errors that are non-zero mean, non-Gaussian, serially correlated, and contemporaneously correlated. In general, given the loss function, the null hypothesis of equal forecast accuracy for two forecasts $m$ and $n$ is $E(L(y_{t+s}, \hat{y}^m_{t+s|t})) = E(L(y_{t+s}, \hat{y}^n_{t+s|t}))$, or $E(d_t) = 0$ with $d_t = L(y_{t+s}, \hat{y}^m_{t+s|t}) - L(y_{t+s}, \hat{y}^n_{t+s|t})$. If $d_t$ is a covariance stationary short memory process, then

$\sqrt{T}(\bar{d} - \mu) \overset{a}{\sim} N(0, 2\pi f_d(0))$

where $\bar{d} = \frac{1}{T}\sum_{t=1}^{T} d_t$ is the sample mean loss differential, $f_d(0)$ is the spectral density of the loss differential at frequency 0, and $\mu$ is the population mean loss differential.
Then, given a consistent estimator $\hat{f}_d(0)$, the test statistic for the null hypothesis of equal forecast accuracy is

$B = \frac{\bar{d}}{\sqrt{2\pi \hat{f}_d(0) / T}}$

West (1996) takes an approach related to Diebold & Mariano (1995). The main difference is that West (1996) assumes that forecasts are computed from an estimated regression model and explicitly accounts for the effects of parameter uncertainty within that framework. When the estimation sample is small, the DM test and West (1996) can lead to different results. However, as the estimation period grows relative to the forecast period, the effects of parameter uncertainty vanish, and the DM test and West (1996) are identical. West (1996) is more general in correcting for nonstationarity induced by the updating of parameter estimates, but it is less general than the DM test in that the DM test makes no assumptions about the models that underlie the forecasts.

Examples of nonparametric test statistics are:

- Sign test statistic

$S_B = \sum_{t=1}^{T} I_+(d_t)$

where $I_+(x) = 1$ if $x > 0$ and $I_+(x) = 0$ otherwise.

- Signed-rank test statistic

$W_B = \sum_{t=1}^{T} I_+(d_t)\, Rank(|d_t|)$

Serial correlation may be handled via Bonferroni bounds.

Examples of evaluation criteria for probability forecasts include:

- Quadratic probability score (Brier (1950))

$QPS = \frac{1}{T} \sum_{t=1}^{T} 2(\hat{P}_{t+s|t} - R_{t+s})^2$

where $\hat{P}_{t+s|t}$ is the prediction formed at $t$ of the probability that the event occurs $s$ periods ahead, and $R_{t+s} = 1$ if the event occurs at $t+s$ and 0 otherwise.

- Global squared bias measure

$GSB = 2\left(\frac{1}{T} \sum_{t=1}^{T} (\hat{P}_{t+s|t} - R_{t+s})\right)^2$

- Local squared bias measure

$LSB = \frac{1}{T} \sum_{j=1}^{J} 2 T_j (\bar{P}_j - \bar{R}_j)^2$

The LSB is based on a weighted average of local calibration across $J$ partitions of the unit interval, where $T_j$ is the number of probability forecasts in subset $j$, $\bar{P}_j$ is the average forecast in subset $j$, and $\bar{R}_j$ is the average realization in subset $j$.
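A minimal sketch of the Diebold-Mariano statistic under squared-error loss. The function name is ours, and the long-run variance estimator uses a simple truncated (rectangular-kernel) sum of $h-1$ autocovariances, which is one common choice for $h$-step-ahead forecasts but not the only one.

```python
import math

def dm_test(e1, e2, h=1, power=2):
    """Diebold-Mariano statistic for equal forecast accuracy.
    e1, e2: forecast error series of the two competing forecasts.
    d_t = |e1_t|^power - |e2_t|^power; statistic = dbar / sqrt(LRV(d)/T).
    Positive values indicate the first forecast is less accurate."""
    T = len(e1)
    d = [abs(a) ** power - abs(b) ** power for a, b in zip(e1, e2)]
    dbar = sum(d) / T
    # long-run variance: variance plus 2 * (h-1) autocovariances
    gamma0 = sum((x - dbar) ** 2 for x in d) / T
    lrv = gamma0
    for k in range(1, h):
        gk = sum((d[t] - dbar) * (d[t - k] - dbar) for t in range(k, T)) / T
        lrv += 2 * gk
    return dbar / math.sqrt(lrv / T)
```

Under the null of equal accuracy, the statistic is compared against standard normal critical values, as in the discussion above.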
1.2.3 Combine Forecasts

With several different forecasts at hand, researchers often confront the question of whether a combination of the individual forecasts could improve overall forecast accuracy and, if so, how to achieve an optimal combined forecast. Issler & Lima (2009) make use of a two-way error component panel data model and forecast survey data from the Central Bank of Brazil to investigate the forecast combination puzzle: "if we consider a fixed number of forecasts ($N < \infty$), combining them using equal weights ($1/N$) fare better than using the 'optimal weights' constructed to outperform any other forecast combination in the mean-squared error (MSE) sense." The motivating observation of Issler & Lima (2009) is that if the series being forecast is stationary and ergodic, and there is enough diversification among forecasts, we should expect a weak law of large numbers (WLLN) to apply to well-behaved forecast combinations. Assuming the forecast target $y_t$ is stationary and ergodic, Issler & Lima (2009) consider the following model:

$f^h_{i,t} = E_{t-h}(y_t) + k_i + \epsilon_{i,t}$   (1.31)

where $f^h_{i,t}$ is the $h$-step-ahead forecast of $y_t$ made by the $i$-th forecaster. Issler & Lima (2009) interpret $E_{t-h}(y_t)$ as the optimal forecast, because under an MSE risk function the optimal forecast for stationary and ergodic $y_t$ is the conditional expectation using information available up to $t-h$. In addition, $y_t$ can be decomposed as

$y_t = E_{t-h}(y_t) + \zeta_t$   (1.32)

Combining (1.31) and (1.32) and defining $\eta_t = -\zeta_t$, we get

$f^h_{i,t} = y_t + k_i + \eta_t + \epsilon_{i,t}$   (1.33)

The key result is that the feasible bias-corrected average forecast (BCAF),

$\frac{1}{N} \sum_{i=1}^{N} f^h_{i,t} - \hat{B}$

where $\hat{B} = \frac{1}{N} \sum_{i=1}^{N} \hat{k}_i$ and $\hat{k}_i = \frac{1}{T_2 - T_1} \sum_{t=T_1+1}^{T_2} (f^h_{i,t} - y_t)$, is optimal and in the limit identical to the conditional expectation. Issler & Lima (2009) thus explain that there is actually no forecast combination puzzle.
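The feasible BCAF is straightforward to compute from a training window used to estimate the individual biases $k_i$. The function name and toy data below are ours.

```python
def bcaf(current_forecasts, y_train, train_forecasts):
    """Bias-corrected average forecast (BCAF):
    equal-weight average of the current forecasts minus the average B of the
    estimated individual biases k_i, where k_i is the mean in-sample
    forecast error of forecaster i over the training window."""
    N = len(current_forecasts)
    k = [sum(f - y for f, y in zip(fi, y_train)) / len(y_train)
         for fi in train_forecasts]
    B = sum(k) / N
    return sum(current_forecasts) / N - B
```

Note that, unlike MSE-optimal weights, this combination estimates only the scalar bias correction $\hat{B}$ rather than $N$ weights.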
The key issue is that the optimal weights require estimating $N$ weights, a number which grows unbounded, while simple averaging requires no estimation of weights.

Chapter 2

Spatial Panel Vector Autoregression

2.1 Introduction

Panel vector autoregression (PVAR) models have the same structure as vector autoregression (VAR) models, in the sense that all variables are assumed to be endogenous and interdependent, but in PVARs an additional cross-sectional dimension is added. PVARs with independent disturbances have been studied by, for example, Binder et al. (2005), Cao & Sun (2011), Han et al. (2016), Canova & Ciccarelli, Greenway-McGrevy (2012), Greenway-McGrevy (2013b), and Greenway-McGrevy (2013a). Binder et al. (2005) consider estimation and inference in PVARs with fixed effects when T is finite and N is large. Canova & Ciccarelli compare panel VARs with large-scale VARs, spatial VARs, dynamic factor models, global VARs, and bilateral VARs, and they point out that PVARs are particularly suited to (a) capture both static and dynamic interdependencies, (b) treat the links across units in an unrestricted fashion, (c) easily incorporate time variation in the coefficients and in the variance of the shocks, and (d) account for cross-sectional dynamic heterogeneities.

In the thirty-plus years since the publication of Paelinck & Klaassen (1979), spatial econometrics has moved from the margin to the mainstream of applied econometrics and social science methodology (Anselin (2010)). Spatial dependence plays an important role in panel data models with cross-sectional correlations. For example, Driscoll & Kraay (1998) point out that in the presence of spatial dependence, even though standard techniques yield consistent parameter estimates, they will yield inconsistent estimates of the standard errors. Bresson et al. (2007) show that there can be considerable size distortions in panel unit root tests such as Im et al.
(2003) when the true specification exhibits a large degree of spatial error correlation. Baltagi & Pirotte (2010) show that hypothesis tests based on the usual panel data estimators that ignore spatial dependence can produce misleading inference. Baltagi et al. (2012) point out that a misspecified estimator, especially in terms of spatial effects, has severe consequences for estimation and forecasting by the applied economist.

Recently, Baltagi et al. (2003) consider testing for spatial error correlation. Elhorst (2005) estimates a dynamic panel data model with spatial error by conditional MLE, where N is large but T is fixed. Beenstock & Felsenstein (2007) discuss the incidental parameter problem and impulse responses in an example of a spatial VAR model. Su & Yang (2015) derive QMLEs of the dynamic panel data model under both fixed and random effects specifications, focusing on large N with fixed T. Baltagi et al. (2007) extend the spatial panel data literature to include serial autocorrelation. Kapoor et al. (2007) consider GMM estimation of a spatial error model with time-period random effects. Elhorst (2008) considers MLE of a model with serial and spatial autocorrelation. Korniotis (2010) considers models with an individual time lag and a spatial time lag but without a contemporaneous spatial lag.

For the case with a spatial lag in panel data, Yu et al. (2008), Lee & Yu (2010b), and Yu et al. (2012) consider stable, unit root, and spatial cointegration models in the time dimension, respectively. Lee & Yu (2010a) investigate QMLEs for spatial panel data models with spatial lags, fixed effects, and SAR disturbances. Parent & LeSage (2012) introduce a general space-time filter for the dependent variable, which implies a constraint on the mixing term that reflects spatial diffusion or space-time covariance. Estimation of the random effects model is achieved by a Bayesian approach. Baltagi et al.
(2013) consider testing for spatial autocorrelation in both the remainder error term and the spatial random effects. Wang & Lee (2013) consider an SAR panel model with random effects and a randomly missing dependent variable. They estimate the model with GMM, NLS, and 2SLS with imputation, which they show are consistent and asymptotically equivalent. In addition, they show that these estimation methods are robust to unknown heteroscedasticity and are good alternatives to the conventional EM algorithm, which is inconsistent under heteroscedasticity. They also propose a spatial Mundlak approach to deal with the case where the individual effects are correlated with the included regressors.

In this chapter, we propose a spatial panel vector autoregressive model, SPVAR(p), which has not been studied in the literature. Our proposed SPVAR(p) model extends the scalar spatial dynamic panel data (SDPD) literature to a spatial panel VAR, in which the dependent variable for each individual at each time is a vector. We directly embed in the model contemporaneous and lagged spatial dependence, which is assumed to be exogenous and known, and we also allow for spatially correlated disturbances. The paper that studies the most similar spatial panel VAR model is Mutl (2009). It can be shown that the model in Mutl (2009) implies a model with independent disturbances and spatially autoregressive dependent variables. Unlike Mutl (2009), our PVAR model directly assumes spatially correlated dependent variables and spatially correlated disturbances. In addition, we allow more heterogeneity in the parameter matrices and include multiple lags, which greatly generalizes the PVAR model specification in Mutl (2009). The proposed SPVAR(p) is estimated by profile quasi-maximum likelihood (QMLE), which concentrates out the individual fixed effects.
Under the assumption that N and/or T grow to infinity and certain regularity conditions, the QMLE estimator $\hat\theta_{NT}$ is consistent and asymptotically normally distributed. $\hat\theta_{NT}$ is consistent but has a bias of order $O(1/T)$. When T is large relative to N, $\hat\theta_{NT}$ is $\sqrt{NT}$-consistent and asymptotically centered around $\theta^0$. When N is asymptotically proportional to T, $\hat\theta_{NT}$ is $\sqrt{NT}$-consistent and asymptotically normally distributed, but the limiting distribution is not centered around $\theta^0$. When N is large relative to T, $\hat\theta_{NT}$ is T-consistent and has a degenerate distribution. Finite sample behavior of the QMLE estimators is assessed using Monte Carlo simulations.

2.2 Spatial Panel Vector Autoregression

2.2.1 Spatial Panel VAR Model

As a generalization of the univariate spatial dynamic panel data (SDPD) model, we consider the spatial panel vector autoregression, SPVAR(p,q), in which each individual $i = 1, 2, \ldots, N$ at time $t = 1, 2, \ldots, T$ has $m = 1, \ldots, M$ scalar observations, $y^m_{it}$:

$y^m_{it} = \alpha^{0,m}_i + \sum_{j=1}^{p}\sum_{r=1}^{M} \phi^{0,mr}_j y^r_{i,t-j} + \sum_{r=1}^{M} \psi^{0,mr}_0 \sum_{k=1}^{N} w_{ik} y^r_{kt} + \sum_{j=1}^{q}\sum_{r=1}^{M} \psi^{0,mr}_j \sum_{k=1}^{N} w_{ik} y^r_{k,t-j} + \beta^{0,m\prime} x^m_{it} + u^m_{it}$   (2.1)

with

$u^m_{it} = \sum_{r=1}^{M} \lambda^{0,mr} \sum_{k=1}^{N} w_{ik} u^r_{kt} + \varepsilon^m_{it}$   (2.2)

Each $y^m_{it}$ is assumed to depend on lagged values of $y^1_{it}, \cdots, y^M_{it}$ up to order $p$; contemporaneous neighbourhood effects $\sum_{k=1}^{N} w_{ik} y^1_{kt}, \cdots, \sum_{k=1}^{N} w_{ik} y^M_{kt}$ and lagged neighbourhood effects up to order $q$; the individual fixed effect $\alpha^{0,m}_i$; and K-dimensional covariates $x^m_{it} = (x^m_{it,1}, \cdots, x^m_{it,K})'$. Superscripts 0 denote true values. $w_{ik}$ denotes the weight of neighbour $k$ on individual $i$, which is assumed to be a non-stochastic constant. $\phi^{0,mr}_j$, $\psi^{0,mr}_0$, $\psi^{0,mr}_j$, and $\beta^{0,m}$ are unknown coefficients. It is quite likely that the residuals $u^m_{it}$ also exhibit spatial correlation, and (2.2) allows for this possibility when the coefficients $\lambda^{0,mr}$ differ from 0.
The remainders $\varepsilon^m_{it}$ are assumed to be independently and identically distributed across $i$ and $t$ with mean 0 and constant variance $\omega^{0,mm}$.

First, stack by $m = 1, \cdots, M$ and denote $y_{it} = (y^1_{it}, \cdots, y^M_{it})'$, $y^*_{it} = (\sum_{k=1}^{N} w_{ik} y^1_{kt}, \cdots, \sum_{k=1}^{N} w_{ik} y^M_{kt})'$, $X_{it} = (x^{1\prime}_{it}, \cdots, x^{M\prime}_{it})'$, $u_{it} = (u^1_{it}, \cdots, u^M_{it})'$, $u^*_{it} = (\sum_{k=1}^{N} w_{ik} u^1_{kt}, \cdots, \sum_{k=1}^{N} w_{ik} u^M_{kt})'$, $\varepsilon_{it} = (\varepsilon^1_{it}, \cdots, \varepsilon^M_{it})'$, and $\alpha^0_i = (\alpha^{0,1}_i, \cdots, \alpha^{0,M}_i)'$. Denote the $M \times M$ parameter matrices $\Phi^0_j = [\phi^{0,mr}_j]$ for $j = 1, \cdots, p$, $\Psi^0_j = [\psi^{0,mr}_j]$ for $j = 0, \cdots, q$, and $\Lambda^0 = [\lambda^{0,mr}]$, and the $MK \times M$ block-diagonal matrix $\beta^0 = diag(\beta^{0,1}, \cdots, \beta^{0,M})$. Then the model can be written as

$y_{it} = \alpha^0_i + \sum_{j=1}^{p} \Phi^0_j y_{i,t-j} + \Psi^0_0 y^*_{it} + \sum_{j=1}^{q} \Psi^0_j y^*_{i,t-j} + \beta^{0\prime} X_{it} + u_{it}$   (2.3)
$u_{it} = \Lambda^0 u^*_{it} + \varepsilon_{it}$

In the case when $x_{it}$ is the same for all $m = 1, \cdots, M$, $\beta^0 = (\beta^{0,1}, \cdots, \beta^{0,M})$ and $\beta^{0\prime} X_{it} = \beta^{0\prime} x_{it}$.

Then, stack by $i = 1, \cdots, N$ and denote $y_t = (y'_{1t}, \cdots, y'_{Nt})'$ and $X_t = (X'_{1t}, \cdots, X'_{Nt})'$. Pre-multiply $y_t$ by $P = (e'_{MN,1}, e'_{MN,M+1}, \cdots, e'_{MN,(N-1)M+1}, \cdots, e'_{MN,M}, e'_{MN,2M}, \cdots, e'_{MN,MN})'$, an $MN \times MN$ permutation matrix with $e_{MN,l}$ being the $MN \times 1$ vector of 0s except for a 1 in the $l$-th element ($l = 1, \cdots, MN$), so that $P y_t$ rearranges the rows of $y_t$ to stack the $y^m_{it}$ first by $i = 1, \cdots, N$ and then by $m = 1, \cdots, M$. Hence, $y^*_{it} = \mathcal{W}_i P y_t$ and $u^*_{it} = \mathcal{W}_i P u_t$, where $\mathcal{W}_i = I_M \otimes (w_{i1}, \cdots, w_{iN})$.
Further, let $\mathcal{W} = (\mathcal{W}'_1, \cdots, \mathcal{W}'_N)'$ and $\mathbf{W} = \mathcal{W} P$. Then we have

$y_t = \alpha^0 + \sum_{j=1}^{p} (I_N \otimes \Phi^0_j) y_{t-j} + (I_N \otimes \Psi^0_0) \mathbf{W} y_t + \sum_{j=1}^{q} (I_N \otimes \Psi^0_j) \mathbf{W} y_{t-j} + (I_N \otimes \beta^0)' X_t + (I_{MN} - (I_N \otimes \Lambda^0) \mathbf{W})^{-1} \varepsilon_t$   (2.4)

where $\alpha^0 = (\alpha^{0\prime}_1, \cdots, \alpha^{0\prime}_N)'$ and $\varepsilon_t = (\varepsilon'_{1t}, \cdots, \varepsilon'_{Nt})' \sim iid(0, I_N \otimes \Omega^0)$, with $\Omega^0$ an $M \times M$ positive definite covariance matrix. Define $S_N(A) = I_{MN} - (I_N \otimes A) \mathbf{W}$ for any $M \times M$ matrix $A$. Solving for $y_t$, we get

$y_t = S^{-1}_N(\Psi^0_0)\, \alpha^0 + \sum_{j=1}^{p} S^{-1}_N(\Psi^0_0) (I_N \otimes \Phi^0_j) y_{t-j} + \sum_{j=1}^{q} S^{-1}_N(\Psi^0_0) (I_N \otimes \Psi^0_j) \mathbf{W} y_{t-j} + S^{-1}_N(\Psi^0_0) (I_N \otimes \beta^0)' X_t + S^{-1}_N(\Psi^0_0) S^{-1}_N(\Lambda^0) \varepsilon_t$   (2.5)

Given an $N \times N$ spatial weight matrix $W$, which has diagonal elements $W_{ii} = 0$ for $i = 1, \cdots, N$ and is assumed to be row-normalized from a spatial weight matrix such as a geographical contiguity matrix, we can construct the $MN \times MN$ matrix $\mathbf{W}$ to correspond to various specifications. For example, when $y^m_{it}$ and $y^r_{it}$ share the same weight matrix $W$ for all $m, r = 1, \ldots, M$, we can express $\mathbf{W} = W \otimes I_M$. When $y^m_{it}$ and $y^r_{it}$ have different weight matrices for $m \neq r$, $m, r = 1, \ldots, M$, we can express $\mathbf{W} = R' (\oplus^M_{r=1} W^r) R$, where $\oplus$ denotes the direct sum, $W^r$ denotes the weight matrix for $y^r_{it}$, and $R$ denotes the $MN \times MN$ permutation matrix that rearranges the columns and rows of $\oplus^M_{r=1} W^r$ so that the $(i,k)$ block of $\mathbf{W}$ is $diag(W^1_{ik}, \cdots, W^M_{ik})$ for $i, k = 1, \cdots, N$.

The SPVAR(p,q) model defined above generalizes the spatial dynamic panel data (SDPD) models used in the literature for a univariate dependent variable, which feature spatially correlated dependent variables and residuals, to allow for multivariate dependent variables and higher-order lags.
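The common-weight construction $\mathbf{W} = W \otimes I_M$ described above is easy to assemble in practice. A minimal sketch follows; the row-normalization helper and the toy contiguity matrix are ours.

```python
import numpy as np

def row_normalize(C):
    """Row-normalize a zero-diagonal contiguity matrix so each row sums to one."""
    C = np.asarray(C, dtype=float)
    np.fill_diagonal(C, 0.0)
    row_sums = C.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0.0] = 1.0  # isolated units keep a zero row
    return C / row_sums

# Toy contiguity: unit 2 neighbours units 1 and 3
C = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
W = row_normalize(C)
M = 2
W_big = np.kron(W, np.eye(M))  # MN x MN weights, common across the M equations
```

Row normalization guarantees that the eigenvalues of $W$ are bounded by one in modulus, which is what the stability analysis below relies on.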
When $M = p = q = 1$, the SPVAR(p,q) model reduces to the usual SDPD model considered in most of the literature, such as Yu et al. (2008), Lee & Yu (2010a), Lee & Yu (2010b), and Yu et al. (2012). For example, Yu et al. (2012) study QMLE of SDPD models with individual and time fixed effects, where $y_t = \lambda_1 W y_t + \gamma y_{t-1} + \rho W y_{t-1} + X_t \beta + \alpha + \alpha_t l_N + u_t$, $y_t = (y_{1t}, \cdots, y_{Nt})'$, and each $y_{it}$ is a scalar.

The panel VAR model with spatially correlated disturbances studied in Mutl (2009) is closely related to our model. Mutl (2009) considers a model where $y_t = (I_N \otimes \Phi) y_{t-1} + u_t$ with $u_t = \lambda \mathbf{W} u_t + I_N \otimes (I_M - \Phi)\mu + \varepsilon_t$, which implies that

$y_t = I_N \otimes (I_M - \Phi)\mu + \lambda \mathbf{W} y_t + (I_N \otimes \Phi) y_{t-1} - \lambda \mathbf{W} (I_N \otimes \Phi) y_{t-1} + \varepsilon_t$   (2.6)

The model we consider is more general than (2.6), because (2.6) assumes a scalar contemporaneous neighbourhood effect $\lambda$ that is universal to all units, whereas our model in (2.4) allows for more heterogeneity. If we consider an autoregressive model without covariates, restrict the $M \times M$ matrix $\Psi^0_0$ in (2.4) to a scalar $\lambda$, define $\alpha^0 = I_N \otimes (I_M - \Phi)\mu$, restrict $p = q = 1$, set $\Phi^0_1 = \Phi$ and $\Psi^0_1 = -\lambda\Phi$, and restrict $\Lambda^0 = 0$, then our model reduces to the model of (2.6).

Without loss of generality, we set $p = \max(p, q)$ in (2.5), because coefficients corresponding to superfluous lag orders can always be set to 0. Then (2.5) can also be written in companion form by stacking $Y_t = (y'_t, y'_{t-1}, \cdots, y'_{t-p+1})'$:

$Y_t = \mathbf{A} Y_{t-1} + \mathbf{E}_t$   (2.7)

where

$\mathbf{A} = \begin{pmatrix} \mathbf{A}_1 & \cdots & \mathbf{A}_{p-1} & \mathbf{A}_p \\ I_{MN(p-1)} & & & 0_{MN(p-1) \times MN} \end{pmatrix}$, $\quad \mathbf{E}_t = \begin{pmatrix} S^{-1}_N(\Psi^0_0)\big(\alpha^0 + (I_N \otimes \beta^0)' X_t + S^{-1}_N(\Lambda^0)\varepsilon_t\big) \\ 0_{MN(p-1) \times 1} \end{pmatrix}$

and, for $j = 1, \cdots, p$, $\mathbf{A}_j = S^{-1}_N(\Psi^0_0)\big(I_N \otimes \Phi^0_j + (I_N \otimes \Psi^0_j)\mathbf{W}\big)$. Recursive substitution of (2.7) implies that

$Y_t = \mathbf{A}^n Y_{t-n} + \sum_{h=0}^{n-1} \mathbf{A}^h \mathbf{E}_{t-h} = \sum_{h=0}^{\infty} \mathbf{A}^h \mathbf{E}_{t-h}$   (2.8)

(2.7) is covariance stationary when all the eigenvalues of $\mathbf{A}$, defined by the $\lambda$ that satisfy $\det(\mathbf{A} - \lambda I_{MNp}) = 0$, lie inside the unit circle, or equivalently when $\det(I_{MN} - \mathbf{A}_1 z - \cdots - \mathbf{A}_p z^p) \neq 0$ for all $|z| \leq 1$.
Since $W$ is row normalized (from a symmetric matrix), it is diagonalizable as $W = PDP^{-1}$, where $D = \mathrm{diag}(d_1, d_2, \dots, d_N)$ denotes the diagonal matrix whose diagonal elements are the eigenvalues of $W$, with $|d_i| \leq 1$ for all $i = 1, \dots, N$ and $\max_{1\leq i\leq N} d_i = 1$, and each column of $P$ is the corresponding normalized eigenvector of $W$. When $\mathbf{W} = W \otimes I_M$, the eigenvalues of $\mathbf{W}$ are equal to the eigenvalues of $W$. In addition,
$$S_N(\Psi_0^0) = (P \otimes I_M)\left(I_{MN} - D \otimes \Psi_0^0\right)(P \otimes I_M)^{-1}, \qquad S_N^{-1}(\Psi_0^0) = (P \otimes I_M)\left(I_{MN} - D \otimes \Psi_0^0\right)^{-1}(P \otimes I_M)^{-1}.$$
For $j = 1, \dots, p$,
$$A_j = (P \otimes I_M)\left(I_{MN} - D \otimes \Psi_0^0\right)^{-1}\left(I_N \otimes \Phi_j^0 + D \otimes \Psi_j^0\right)(P \otimes I_M)^{-1},$$
and
$$\left|I_{MN} - A_1 z - \cdots - A_p z^p\right| = |P|^M\left|I_{MN} - D \otimes \Psi_0^0\right|^{-1}\Big|I_{MN} - D \otimes \Psi_0^0 - \sum_{j=1}^{p}\left(I_N \otimes \Phi_j^0 + D \otimes \Psi_j^0\right)z^j\Big|\,|P^{-1}|^M$$
$$= |P|^M\left|I_{MN} - D \otimes \Psi_0^0\right|^{-1}\prod_{i=1}^{N}\Big|I_M - d_i\Psi_0^0 - \left(\Phi_1^0 + d_i\Psi_1^0\right)z - \cdots - \left(\Phi_p^0 + d_i\Psi_p^0\right)z^p\Big|\,|P^{-1}|^M.$$
Since $|P| \neq 0$ and $\left|I_{MN} - D \otimes \Psi_0^0\right|^{-1} \neq 0$, the stability condition simplifies to requiring that
$$\left|I_M - d_i\Psi_0^0 - \left(\Phi_1^0 + d_i\Psi_1^0\right)z - \cdots - \left(\Phi_p^0 + d_i\Psi_p^0\right)z^p\right| \neq 0$$
for all $|z| \leq 1$ and all $|d_i| \leq 1$, $i = 1, \cdots, N$.

In the scalar case, when $M = 1$, $S_N(\psi_0^0) = I_N - \psi_0^0 W = P\left(I_N - \psi_0^0 D\right)P^{-1}$, $S_N^{-1}(\psi_0^0) = \left(I_N - \psi_0^0 W\right)^{-1} = P\left(I_N - \psi_0^0 D\right)^{-1}P^{-1}$, and $A_j = S_N^{-1}(\psi_0^0)\left(\phi_j^0 I_N + \psi_j^0 W\right) = P\left(I_N - \psi_0^0 D\right)^{-1}\left(\phi_j^0 I_N + \psi_j^0 D\right)P^{-1}$ for $j = 1, \dots, p$, so that
$$\left|I_N - A_1 z - A_2 z^2 - \cdots - A_p z^p\right| = |P|\left|I_N - \psi_0^0 D\right|^{-1}\Big|I_N - \psi_0^0 D - \sum_{j=1}^{p}\left(\phi_j^0 I_N + \psi_j^0 D\right)z^j\Big|\,|P^{-1}|$$
$$= |P|\left|I_N - \psi_0^0 D\right|^{-1}\prod_{i=1}^{N}\Big(1 - \psi_0^0 d_i - \sum_{j=1}^{p}\left(\phi_j^0 + \psi_j^0 d_i\right)z^j\Big)\,|P^{-1}|.$$
Since $|P| \neq 0$ and $\left|I_N - \psi_0^0 D\right|^{-1} \neq 0$, the condition that $\left|I_N - A_1 z - \cdots - A_p z^p\right| \neq 0$ for all $|z| \leq 1$ further simplifies to requiring that $\psi_0^0 d_i + \left(\phi_1^0 + \psi_1^0 d_i\right)z + \cdots + \left(\phi_p^0 + \psi_p^0 d_i\right)z^p \neq 1$ for all $|z| \leq 1$ and $|d_i| \leq 1$, $i = 1, \cdots, N$, which is satisfied if $|\psi_0^0| + |\phi_1^0| + |\psi_1^0| + \cdots + |\phi_p^0| + |\psi_p^0| < 1$. In the simplest case, when $p = 1$, the stability condition is satisfied if $|\psi_0^0| + |\phi_1^0| + |\psi_1^0| < 1$, which is the same constraint as in, for example, Yu et al. (2008).
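In the scalar case the factorization can be verified directly: the eigenvalues of $A_1$ equal $(\phi_1^0 + \psi_1^0 d_i)/(1 - \psi_0^0 d_i)$, and under $|\psi_0^0| + |\phi_1^0| + |\psi_1^0| < 1$ they all lie inside the unit circle. A sketch with illustrative parameter values:

```python
import numpy as np

# Scalar case M = 1, p = 1, with illustrative parameter values.
psi0, phi1, psi1 = 0.2, 0.4, 0.1          # |psi0| + |phi1| + |psi1| = 0.7 < 1

# Row-normalized rook weight matrix on a 2x2 grid (symmetric before normalization).
W = np.array([[0, .5, .5, 0], [.5, 0, 0, .5], [.5, 0, 0, .5], [0, .5, .5, 0]])

d = np.linalg.eigvals(W)                   # eigenvalues d_i, real with |d_i| <= 1
A1 = np.linalg.solve(np.eye(4) - psi0 * W, phi1 * np.eye(4) + psi1 * W)

# The factorization implies eig(A_1) = (phi1 + psi1*d_i) / (1 - psi0*d_i).
lhs = np.sort(np.linalg.eigvals(A1).real)
rhs = np.sort(((phi1 + psi1 * d) / (1 - psi0 * d)).real)
assert np.allclose(lhs, rhs)

# Under the sufficient condition, all eigenvalues of A_1 are inside the unit circle.
assert np.max(np.abs(lhs)) < 1
```

Here the eigenvalues of $W$ are $\{1, 0, 0, -1\}$, so the mapped eigenvalues are $\{0.625, 0.4, 0.4, 0.25\}$, all inside the unit circle as the condition guarantees.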
2.2.2 QML Estimation

The unknown parameters are $\alpha^0$ and $\theta^0 = \left(\mathrm{vech}(\Omega^0)', \mathrm{vec}(\Lambda^0)', \mathrm{vec}(\Phi_1^0)', \cdots, \mathrm{vec}(\Phi_p^0)', \mathrm{vec}(\Psi_1^0)', \cdots, \mathrm{vec}(\Psi_q^0)', \mathrm{vec}(\beta^0)', \mathrm{vec}(\Psi_0^0)'\right)'$, a vector of unknown coefficients of length $(2 + p + q)M^2 + M(M+1)/2 + kM$. Denote $\gamma^0 = \left(\Phi_1^0, \cdots, \Phi_p^0, \Psi_1^0, \cdots, \Psi_q^0, \beta_0^0\right)$, an $M \times M(p + q + K)$ parameter matrix, let
$$\tilde{Z}_t = \left(\tilde{y}_{t-1}', \cdots, \tilde{y}_{t-p}', (\mathbf{W}\tilde{y}_{t-1})', \cdots, (\mathbf{W}\tilde{y}_{t-q})', \tilde{X}_t'\right)',$$
and let $C$ denote the $MN(p+q+K) \times MN(p+q+K)$ full rank matrix that permutes the columns of $(I_N \otimes \gamma)$, such that $(I_N \otimes \gamma)C = \left(I_N \otimes \Phi_1, \cdots, I_N \otimes \Phi_p, I_N \otimes \Psi_1, \cdots, I_N \otimes \Psi_q, I_N \otimes \beta_0\right)$. Then equation (2.5) implies
$$y_t = S_N^{-1}(\Psi_0^0)(I_N \otimes \gamma^0)C Z_t + S_N^{-1}(\Psi_0^0)S_N^{-1}(\Lambda^0)\varepsilon_t.$$
We can estimate $\alpha^0$ and $\theta^0 = \left(\mathrm{vech}(\Omega^0)', \mathrm{vec}(\Lambda^0)', \mathrm{vec}(\gamma^0)', \mathrm{vec}(\Psi_0^0)'\right)'$ by maximum likelihood. The log-likelihood function of (2.5) is
$$\ln L(\theta, \alpha) = -\frac{MNT}{2}\ln(2\pi) - \frac{T}{2}\ln|I_N \otimes \Omega| + T\ln|S_N(\Psi_0)| + T\ln|S_N(\Lambda)|$$
$$-\frac{1}{2}\sum_{t=1}^{T}\left[S_N(\Psi_0)y_t - \alpha - (I_N \otimes \gamma)C Z_t\right]'S_N(\Lambda)'\left(I_N \otimes \Omega^{-1}\right)S_N(\Lambda)\left[S_N(\Psi_0)y_t - \alpha - (I_N \otimes \gamma)C Z_t\right]. \quad (2.9)$$
The first order condition of (2.9) with respect to $\alpha$ yields
$$\sum_{t=1}^{T} S_N(\Lambda)'\left(I_N \otimes \Omega^{-1}\right)S_N(\Lambda)\left[S_N(\Psi_0)y_t - \alpha - (I_N \otimes \gamma)C Z_t\right] = 0.$$
Assume $S_N(\Lambda)'\left(I_N \otimes \Omega^{-1}\right)S_N(\Lambda)$ is invertible and solve for $\alpha$ as a function of $\theta$:
$$\alpha(\theta) = S_N(\Psi_0)\bar{y} - \sum_{j=1}^{p}\left(I_N \otimes \Phi_j\right)\bar{y}_{-j} - \sum_{j=1}^{q}\left(I_N \otimes \Psi_j\right)\mathbf{W}\bar{y}_{-j} - \left(I_N \otimes \beta_0\right)\bar{X} = S_N(\Psi_0)\bar{y} - (I_N \otimes \gamma)C\bar{Z}, \quad (2.10)$$
where $\bar{y} = \frac{1}{T}\sum_{t=1}^{T}y_t$, $\bar{y}_{-j} = \frac{1}{T}\sum_{t=1}^{T}y_{t-j}$, $\bar{X} = \frac{1}{T}\sum_{t=1}^{T}X_t$, and $\bar{Z} = \frac{1}{T}\sum_{t=1}^{T}Z_t$. Denote the within transformations $\tilde{y}_t = y_t - \bar{y}$, $\tilde{y}_{t-j} = y_{t-j} - \bar{y}_{-j}$, $\tilde{X}_t = X_t - \bar{X}$, and $\tilde{Z}_t = Z_t - \bar{Z}$.
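The role of the within transformation can be seen in a toy check (a static model with no dynamic or spatial terms, purely for illustration): time-demeaning removes the unit fixed effects exactly, which is why the concentrated likelihood can be written entirely in demeaned data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy panel: y_it = alpha_i + eps_it (no dynamics, for clarity).
N, T = 5, 200
alpha = rng.normal(size=(N, 1))                 # unit fixed effects
eps = rng.normal(size=(N, T))
y = alpha + eps

# Within transformation: subtract each unit's time mean.
y_tilde = y - y.mean(axis=1, keepdims=True)

# alpha_i is constant over t, so it cancels exactly in y_tilde.
assert np.allclose(y_tilde, eps - eps.mean(axis=1, keepdims=True))
```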
Then, the log-likelihood function concentrating out $\alpha(\theta)$ is
$$\ln L(\theta) = -\frac{MNT}{2}\ln(2\pi) - \frac{T}{2}\ln|I_N \otimes \Omega| + T\ln|S_N(\Psi_0)| + T\ln|S_N(\Lambda)|$$
$$-\frac{1}{2}\sum_{t=1}^{T}\left[S_N(\Psi_0)\tilde{y}_t - (I_N \otimes \gamma)C\tilde{Z}_t\right]'S_N(\Lambda)'\left(I_N \otimes \Omega^{-1}\right)S_N(\Lambda)\left[S_N(\Psi_0)\tilde{y}_t - (I_N \otimes \gamma)C\tilde{Z}_t\right]. \quad (2.11)$$
Maximizing (2.11) with respect to $\theta$ yields the profile quasi-maximum likelihood estimator (QMLE), $\hat{\theta}$. The QMLE for $\alpha$ is then $\hat{\alpha}(\hat{\theta})$.

2.2.3 Asymptotic Properties of QMLE

Assumption 1: The $N \times N$ spatial weight matrix $W$ is constant with diagonal elements $W_{ii} = 0$ for $i = 1, \cdots, N$.

Assumption 2: The disturbances $\varepsilon_{it} \sim iid(0, \Omega)$ with a finite positive definite variance matrix $\Omega$ and finite absolute $4 + \eta$ moments for some $\eta > 0$.

Assumption 3: The elements of $X_t$ and $\alpha$ are nonstochastic and bounded uniformly in $N$ and $t$; $\lim_{T\to\infty}\frac{1}{NT}\sum_{t=1}^{T}\tilde{X}_t'\tilde{X}_t$ exists and is nonsingular.

Assumption 4: $\mathbf{W}$ is uniformly bounded in row and column sums in absolute value (UB), and $S_N^{-1}(\Psi_0)$ and $S_N^{-1}(\Lambda)$ are UB, uniformly in $\Psi_0 \in \Theta(\Psi_0)$ and $\Lambda \in \Theta(\Lambda)$, respectively. (A sequence of matrices $P_n$ of dimension $n_1(n) \times n_2(n)$, where $n_1(n)$ and $n_2(n)$ are functions of $n$, is said to be uniformly bounded in row and column sums if $\sup_{n\geq 1}\|P_n\|_\infty < \infty$ and $\sup_{n\geq 1}\|P_n\|_1 < \infty$, where $\|P_n\|_\infty \equiv \sup_{1\leq i\leq n_1}\sum_{j=1}^{n_2}|P_{ij,n}|$ denotes the row sum norm and $\|P_n\|_1 \equiv \sup_{1\leq j\leq n_2}\sum_{i=1}^{n_1}|P_{ij,n}|$ denotes the column sum norm of the matrix $P_n$. See, for example, Yu et al. (2008), Lee & Yu (2011), and Yu et al. (2012).)

Assumption 5: The matrix $S_N(\Psi_0)$ is nonsingular for all $\Psi_0 \in \Theta(\Psi_0)$, and $S_N(\Lambda)$ is nonsingular for all $\Lambda \in \Theta(\Lambda)$, where $\Theta(\Psi_0)$ and $\Theta(\Lambda)$ denote the compact parameter spaces of $\Psi_0$ and $\Lambda$, respectively. Furthermore, the true parameters satisfy $\Psi_0^0 \in \mathrm{int}(\Theta(\Psi_0))$ and $\Lambda^0 \in \mathrm{int}(\Theta(\Lambda))$.

Assumption 6: $\sum_{h=0}^{\infty}|\mathbf{A}^h|$ is UB, where $\mathbf{A}$ is defined in (2.7) and $|\mathbf{A}|_{i,j} = |\mathbf{A}_{i,j}|$.

Assumption 7: $N$ is a nondecreasing function of $T$, and $T$ goes to infinity. $M$ and $p$ are fixed.

Assumption 1 is a standard assumption in spatial econometrics.
Assumption 2 provides regularity conditions for $\varepsilon_{it}$. Assumption 3 requires the exogenous regressors $X_t$ to be uniformly bounded. The UB requirement in Assumption 4 is another standard assumption in spatial econometrics, which limits the spatial correlations to a manageable degree (see, for example, Kelejian & Prucha (1998, 2001); Lee (2004); Yu et al. (2008)). For a row-normalized spatial weight matrix $W$, whose elements in each row sum to 1, $\mathbf{W} = W \otimes I_M$ is also row normalized, which ensures that all the weights lie between 0 and 1. When $W$ is row normalized from a symmetric matrix, such as an adjacency matrix, $W$ is diagonalizable and all the eigenvalues of $W$ are real and less than or equal to 1 (see, for example, Yu et al. (2008)). Assumption 5 guarantees the existence of $S_N^{-1}(\Psi_0)$ and $S_N^{-1}(\Lambda)$. This assumption can be justified by assuming that all the eigenvalues of $\Psi_0$ and $\Lambda$ are less than 1 in absolute value, because the eigenvalues of $S_N(A)$ satisfy $\lambda_k(S_N(A)) = 1 - \lambda_k\left((I_N \otimes A)\mathbf{W}\right) = 1 - d_i\lambda_m(A) \neq 0$ when $|\lambda_m(A)| < 1$, given that $|d_i| \leq 1$, for $i = 1, \cdots, N$, $m = 1, \cdots, M$, $k = 1, \cdots, MN$, where $A$ denotes $\Psi_0$ or $\Lambda$. The compactness assumptions are for theoretical purposes. Assumption 6 limits the dependence across time periods and across cross-sectional units. A sufficient condition for the absolute summability of $\mathbf{A}$ is $\|\mathbf{A}\| < 1$, where the matrix norm is the row sum norm or the column sum norm (see Horn & Johnson (1985)). When $\|\mathbf{A}\| < 1$, $\sum_{h=0}^{\infty}\mathbf{A}^h$ exists and equals $(I_{MNp} - \mathbf{A})^{-1}$. Assumption 7 allows for two cases: $N \to \infty$ as $T \to \infty$, and $N$ fixed as $T \to \infty$.

Consistency

From the concentrated log-likelihood function (2.11) divided by $NT$, the corresponding expected value function is
$$Q_{NT}(\theta) = \frac{1}{NT}E\max_\alpha \ln L(\theta, \alpha) = -\frac{M}{2}\ln 2\pi - \frac{1}{2N}\ln\det\left(I_N \otimes \Omega\right) + \frac{1}{N}\ln\det S_N(\Psi_0) + \frac{1}{N}\ln\det S_N(\Lambda) - \frac{1}{2NT}E\sum_{t=1}^{T}\tilde{\varepsilon}_t'\left(I_N \otimes \Omega^{-1}\right)\tilde{\varepsilon}_t.$$

Proposition 1: Let $\Theta$ be any compact parameter space.
Then, under Assumptions 1-7, $\frac{1}{NT}\ln L(\theta) - Q_{NT}(\theta) \xrightarrow{p} 0$ uniformly in $\theta \in \Theta$, and $Q_{NT}(\theta)$ is uniformly equicontinuous for $\theta \in \Theta$.

For local identification, a sufficient (but not necessary) condition is that the information matrix $\mathcal{I}_{NT}(\theta^0) = -E\left[\frac{1}{NT}H_{\theta,NT}\ln L(\theta^0)\right]$ is nonsingular, where $H_{\theta,NT}$ is the Hessian matrix, and that $E\left[\frac{1}{NT}H_{\theta,NT}\ln L(\theta)\right]$ has full rank for any $\theta$ in some neighborhood of $\theta^0$.

Theorem 1: Under Assumptions 1 to 7, $\theta^0$ is globally identified and $\hat{\theta}_{NT} \xrightarrow{p} \theta^0$ if either of the following two conditions is satisfied:
1. $\frac{1}{NT}\sum_{t=1}^{T}E\,h_t h_t'$ is nonsingular, where $h_t = \left(\tilde{Z}_t'C',\; \tilde{Z}_t'C'(I_N \otimes \gamma^0)'S_N^{-1}(\Psi_0^0)'\mathbf{W}'\right)'$.
2. $\ln\left|\Sigma_y^0\right| \neq \ln\left|\mathrm{tr}\left(\frac{1}{MN}\Sigma_y^{-1}(\theta)\Sigma_y^0\right)\Sigma_y(\theta)\right|$ for any $\theta \neq \theta^0$, where $\Sigma_y(\theta) = S_N^{-1}(\Psi_0)S_N^{-1}(\Lambda)\left(I_N \otimes \Omega\right)S_N^{-1}(\Lambda)'S_N^{-1}(\Psi_0)'$ and $\Sigma_y^0 = \Sigma_y(\theta^0)$.

Asymptotic Distribution

The asymptotic distribution of the concentrated QMLE $\hat{\theta}$ can be derived from a mean value expansion of $D_\theta\ln L(\theta) = \left(D_\Omega\ln L(\theta), D_\Lambda\ln L(\theta), D_\gamma\ln L(\theta), D_{\Psi_0}\ln L(\theta)\right)$ around the true value $\theta^0$.

Assumption 8: $\frac{1}{NT}\sum_{t=1}^{T}E\,h_t h_t'$ is nonsingular, or $A_N B_N J\left(J'B_N'A_N B_N J\right)^{-1}J'B_N'A_N \neq A_N$, where $A_N = \left(I_{M^2N^2} + K_{MN,MN}\right)\left(I_{M^2N^2} - J(\Omega^0 \otimes \Omega^0)J'\right)$, $B_N = \left(S_N^{-1}(\Lambda^0)'\mathbf{W}'\right) \otimes \left(I_N \otimes (\Omega^0)^{-1}\right)$, and $C_N = \left(\mathbf{W}S_N^{-1}(\Psi_0^0)S_N^{-1}(\Lambda^0)\right) \otimes \left(S_N(\Lambda^0)'\left(I_N \otimes (\Omega^0)^{-1}\right)\right)$.

Proposition 2: Under Assumptions 1-8,
$$\frac{1}{\sqrt{NT}}D_\theta\ln L(\theta^0) + \sqrt{\frac{N}{T}}B_{NT}(\theta^0) + O_p\left(\max\left(\sqrt{\frac{N}{T^3}}, \sqrt{\frac{1}{T}}\right)\right) \xrightarrow{d} N\left(0, \mathcal{I}^0 + \Delta^0\right),$$
where $\mathcal{I}^0$ is given in (A.10), $\Delta^0$ is given in (A.11), and $B_{NT}(\theta^0)$ is given in (A.74).
Theorem 2: Under Assumptions 1-8,
$$\sqrt{NT}\left(\hat{\theta}_{NT} - \theta^0\right) + \sqrt{\frac{N}{T}}\,\mathcal{I}_{NT}(\theta^0)^{-1}B_{NT}(\theta^0) + O_p\left(\max\left(\sqrt{\frac{N}{T^3}}, \frac{1}{\sqrt{T}}\right)\right) \xrightarrow{d} N\left(0, (\mathcal{I}^0)^{-1}(\mathcal{I}^0 + \Delta^0)(\mathcal{I}^0)^{-1}\right).$$
When $\frac{N}{T} \to 0$, $\sqrt{NT}\left(\hat{\theta}_{NT} - \theta^0\right) \xrightarrow{d} N\left(0, (\mathcal{I}^0)^{-1}(\mathcal{I}^0 + \Delta^0)(\mathcal{I}^0)^{-1}\right)$. When $\frac{N}{T} \to \kappa$ for $0 < \kappa < \infty$, $\sqrt{NT}\left(\hat{\theta}_{NT} - \theta^0\right) + \sqrt{\kappa}\,\mathcal{I}_{NT}(\theta^0)^{-1}B_{NT}(\theta^0) + O_p\left(\frac{1}{\sqrt{T}}\right) \xrightarrow{d} N\left(0, (\mathcal{I}^0)^{-1}(\mathcal{I}^0 + \Delta^0)(\mathcal{I}^0)^{-1}\right)$. When $\frac{N}{T} \to \infty$, $T\left(\hat{\theta}_{NT} - \theta^0\right) + \mathcal{I}_{NT}(\theta^0)^{-1}B_{NT}(\theta^0) \to 0$. When the $\varepsilon_t$ are normally distributed,
$$\sqrt{NT}\left(\hat{\theta}_{NT} - \theta^0\right) + \sqrt{\frac{N}{T}}\,\mathcal{I}_{NT}(\theta^0)^{-1}B_{NT}(\theta^0) + O_p\left(\max\left(\sqrt{\frac{N}{T^3}}, \frac{1}{\sqrt{T}}\right)\right) \xrightarrow{d} N\left(0, (\mathcal{I}^0)^{-1}\right).$$
Here $\mathcal{I}_{NT}(\theta^0)^{-1}B_{NT}(\theta^0) = O_p(1)$, so $\hat{\theta}_{NT}$ is consistent but has a bias of order $O_p\left(\frac{1}{T}\right)$. When $T$ is relatively larger than $N$, $\hat{\theta}_{NT}$ is $\sqrt{NT}$ consistent and asymptotically centered around $\theta^0$. When $N$ is asymptotically proportional to $T$, $\hat{\theta}_{NT}$ is $\sqrt{NT}$ consistent and asymptotically normally distributed, but the limiting distribution is not centered around $\theta^0$. When $N$ is relatively larger than $T$, $\hat{\theta}_{NT}$ is $T$ consistent and has a degenerate distribution.

Theorem 3: Under Assumptions 1-8, $\sqrt{T}\left(\hat{\alpha}_{NT} - \alpha^0\right) \xrightarrow{d} N\left(0, S_N^{-1}(\Lambda^0)\left(I_N \otimes \Omega^0\right)S_N^{-1}(\Lambda^0)'\right)$. The fixed effect estimator $\hat{\alpha}_{NT}$ is $\sqrt{T}$ consistent and asymptotically normally distributed with mean 0. Because of the spatial correlations in the residuals $u_t$, the $\hat{\alpha}_{NT,i}$, $i = 1, \dots, N$, are not asymptotically independently distributed. In the iid case, when $\Lambda^0 = 0$, $\sqrt{T}\left(\hat{\alpha}_{NT} - \alpha^0\right) \xrightarrow{d} N\left(0, I_N \otimes \Omega^0\right)$, and the $\hat{\alpha}_{NT,i}$ are asymptotically independently distributed.

Bias Correction

From Theorem 2, the QML estimator $\hat{\theta}_{NT}$ has bias $-\mathcal{I}_{NT}(\theta^0)^{-1}B_{NT}(\theta^0)$ when $\frac{N}{T} \to \kappa$ for $0 < \kappa < \infty$. To eliminate the bias associated with $\hat{\theta}_{NT}$, we can consider an analytical bias correction, defining the bias corrected estimator $\tilde{\theta}_{NT}$ as
$$\tilde{\theta}_{NT} = \hat{\theta}_{NT} - \frac{1}{T}\left[\frac{1}{NT}EH_{\theta,NT}\ln L\left(\hat{\theta}_{NT}\right)\right]^{-1}B\left(\hat{\theta}_{NT}\right). \quad (2.12)$$

Assumption 9: $\sum_{h=0}^{\infty}\mathbf{A}(\theta)^h$ and $\sum_{h=0}^{\infty}h\mathbf{A}(\theta)^{h-1}$ are UB in either row sums or column sums, uniformly in a neighborhood of $\theta^0$.
Theorem 4: Under Assumptions 1-9, if $(N,T) \to \infty$ and $\frac{N}{T^3} \to 0$, then
$$\sqrt{NT}\left(\tilde{\theta}_{NT} - \theta^0\right) \xrightarrow{d} N\left(0, (\mathcal{I}^0)^{-1}(\mathcal{I}^0 + \Delta^0)(\mathcal{I}^0)^{-1}\right).$$
That is, if $(N,T) \to \infty$ and $\frac{N}{T^3} \to 0$, the bias corrected estimator $\tilde{\theta}_{NT}$ defined in (2.12) is $\sqrt{NT}$ consistent and asymptotically normally distributed centered around $\theta^0$. When $\frac{N}{T} \to \kappa$ and $\frac{N}{T^3} \to 0$, $\tilde{\theta}_{NT}$ no longer has an asymptotic bias and is $\sqrt{NT}$ consistent.

2.2.4 Monte Carlo Simulation

We conduct a small Monte Carlo study to evaluate the finite sample performance of the QML estimated spatial temporal panel data models. The data generating processes consider the following cases, with the true lag order $p_0$ being 1 or 3. We use the row normalized rook matrix as the weight matrix $W$. (The rook matrix represents a square tessellation with a connectivity of 4 for the inner fields on the chessboard and 2 and 3 for the corner and border fields, respectively. See, for example, Lee & Yu (2010a).)

DGP 1: $N = 49$, $T = 250$, $M = 2$, $p_0 = 1$,
$$\Phi_1^0 = \begin{pmatrix} 0.2 & 0.1 \\ 0.1 & 0.2 \end{pmatrix}, \quad \Psi_0^0 = \begin{pmatrix} 0.2 & 0.1 \\ 0.1 & 0.2 \end{pmatrix}, \quad \Psi_1^0 = \begin{pmatrix} 0.1 & 0 \\ 0 & 0.1 \end{pmatrix}, \quad \Lambda^0 = \begin{pmatrix} 0.5 & 0.1 \\ 0.1 & 0.5 \end{pmatrix}.$$

DGP 2: same as DGP 1 except that $\Lambda^0 = 0$.

DGP 3: $N = 49$, $T = 250$, $M = 2$, $p_0 = 3$, with $\Phi_1^0$, $\Psi_0^0$, $\Psi_1^0$, and $\Lambda^0$ as in DGP 1, and
$$\Phi_2^0 = \Phi_3^0 = \Psi_2^0 = \Psi_3^0 = \begin{pmatrix} 0.01 & 0 \\ 0 & 0.01 \end{pmatrix}.$$

DGP 4: same as DGP 3 except that $\Lambda^0 = 0$.

We simulate the residuals from multivariate skew-t distributions. (The family of multivariate skew-t distributions extends the multivariate Student's t family by introducing an asymmetry parameter; see Azzalini & Capitanio (2014). When $a = 0$, the multivariate skew-t distribution reduces to the multivariate Student's t distribution. When $\nu = \infty$, it reduces to the multivariate skew-normal distribution, and when $\nu = 1$, it becomes the multivariate SC distribution.) We first simulate $\varepsilon_{it}^* \sim ST_2(\mu^*, \Omega^*, a, \nu)$, where $\mu^*$, $\Omega^*$, $a$, and $\nu$ denote the location, scale, asymmetry, and degrees of freedom parameters, respectively. When $\nu > 2$, $E(\varepsilon_{it}^*) = \mu^* + cd$ and $\mathrm{var}(\varepsilon_{it}^*) = \frac{\nu}{\nu - 2}\Omega^* - c^2dd'$, with $c = \sqrt{\nu/\pi}\,\Gamma\left((\nu-1)/2\right)/\Gamma(\nu/2)$ and $d = \left(1 + a'\Omega^*a\right)^{-1/2}\Omega^*a$. Then, the $\varepsilon_{it}$ are generated by subtracting the population mean from $\varepsilon_{it}^*$.
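The moment formulas can be checked with the standard library alone, using the parameter values of the Monte Carlo design ($\mu^* = (0.5, -1)'$, $\Omega^*$ with unit variances and covariance 0.5, $a = (2,2)'$, $\nu = 6$):

```python
from math import gamma, pi, sqrt

# Skew-t moment check at the Monte Carlo design values.
nu = 6
Omega_star = [[1.0, 0.5], [0.5, 1.0]]
a = [2.0, 2.0]

c = sqrt(nu / pi) * gamma((nu - 1) / 2) / gamma(nu / 2)
aOa = sum(a[i] * Omega_star[i][j] * a[j] for i in range(2) for j in range(2))
d = [sum(Omega_star[i][j] * a[j] for j in range(2)) / sqrt(1 + aOa) for i in range(2)]

# var(eps*) = nu/(nu-2) * Omega* - c^2 * d d'
var = [[nu / (nu - 2) * Omega_star[i][j] - c**2 * d[i] * d[j] for j in range(2)]
       for i in range(2)]

# Matches the Omega^0 reported for the design (to two decimals).
assert abs(var[0][0] - 0.92) < 0.005 and abs(var[1][1] - 0.92) < 0.005
assert abs(var[0][1] - 0.17) < 0.005
```

The centered residuals then have exactly this covariance matrix, which is the $\Omega^0$ reported for the design.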
We set $\mu^* = (0.5, -1)'$, $\Omega^* = \begin{pmatrix} 1 & 0.5 \\ 0.5 & 1 \end{pmatrix}$, $a = (2, 2)'$, and $\nu = 6$, which corresponds to $\Omega^0 = \begin{pmatrix} 0.92 & 0.17 \\ 0.17 & 0.92 \end{pmatrix}$. Then, we simulate the series of the explanatory variable from $x_{it} = 0.1\sin(\pi)x_{it-1} - 0.1x_{it-2} + v_{it}$, where $v_{it} \sim N(0, 0.01)$, and set $\beta^0 = (1, 1)'$. In all cases, we draw $y_{-100} = \varepsilon_{-100}$ and then drop the first $(100 - p_0)$ simulated observations. The fixed effects $\alpha_i$ are random draws from the bivariate normal distribution $N\left(\begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 & 0.5 \\ 0.5 & 1 \end{pmatrix}\right)$.

The simulated data are fitted using models assuming iid or spatially correlated residuals and with lag orders $p = 1, \dots, 5$. We report the bias and RMSE of the QML estimated SPVARs based on 300 simulations for each DGP in Table C.1. In general, the estimators perform reasonably well.

2.3 Conclusions

In this chapter, we propose a spatial panel vector autoregression, SPVAR($p$), which extends the spatial dynamic panel data models to the multivariate case in both the individual and time dimensions. The SPVAR($p$) allows for individual fixed effects and spatially correlated residuals. We estimate the SPVAR($p$) by profile quasi-maximum likelihood (QMLE), and the QML estimator $\hat{\theta}_{NT}$ is consistent with a bias of order $O_p\left(\frac{1}{T}\right)$. When $T$ is relatively larger than $N$, $\hat{\theta}_{NT}$ is $\sqrt{NT}$ consistent and asymptotically centered around $\theta^0$. When $N$ is asymptotically proportional to $T$, $\hat{\theta}_{NT}$ is $\sqrt{NT}$ consistent and asymptotically normally distributed, but the limiting distribution is not centered around $\theta^0$. When $N$ is relatively larger than $T$, $\hat{\theta}_{NT}$ is $T$ consistent and has a degenerate distribution.
In an extended version of the paper, we could explore alternative estimation methods, such as GMM and Bayesian approaches, and we could allow the weighting matrix to be endogenously determined and time varying. The SPVAR($p$) model can also be widely applied to many other data for estimation and forecasting: for example, an SPVAR analysis of regional responses to earthquake shocks, where the production of different industries forms a vector observation for each region at each time, or an SPVAR study of financial market responses to policy shocks, where the sovereign CDS spreads and stock market indices of different countries form a vector observation for each country at each time.

Chapter 3
Forecast Influenza Incidence Rates of US States

3.1 Introduction

Flu, or influenza, is an acute respiratory viral disease that can be transmitted from person to person. Seasonal influenza epidemics recur annually, peaking during winters in temperate regions and during rainy seasons in tropical regions. In addition, zoonotic influenza, such as avian flu and swine flu, which originate in animals, can also affect humans. Zoonotic influenza does not spread easily from human to human; however, once such a virus becomes transmissible among humans, it can trigger a pandemic because humans usually have no immunity to the new virus. Outbreaks of influenza have catastrophic impacts on the entire economy, raising medical expenditures, mortality, and public panic, as well as shocking the other sectors of the economy. WHO statistics show that annual seasonal influenza epidemics result in about 250,000 to 500,000 deaths worldwide. Molinari et al.
(2007) estimate, based on the 2003 US population, that annual influenza epidemics resulted in an average of 610,660 life-years lost, 3.1 million hospitalized days, and 31.4 million outpatient visits, with average direct medical costs of 10.4 billion US dollars, projected indirect losses of 16.3 billion US dollars, and a total economic burden of 87.1 billion US dollars. Historically, the 1918-1920 "Spanish Flu" is one of the most devastating flu pandemics, estimated to have infected half of the world's population and to have caused 40-50 million deaths (Potter (2001)). In today's globalized world, with countries and industries closely interconnected, a local epidemic outbreak can spread out more rapidly and form a large scale pandemic. It took only a few weeks for the 2002-2003 Severe Acute Respiratory Syndrome (SARS) to spread from Hong Kong to 37 countries in early 2003 (see, for example, Smith (2006)). It likewise took only a few weeks for the 2009 H1N1 influenza to develop from a local outbreak into a global pandemic.

Due to its social and economic impacts, flu activity is vigilantly surveilled all over the world. WHO plays a key role in monitoring flu activity globally, and regional surveillance agencies include the US Centers for Disease Control and Prevention (CDC), the European Centre for Disease Prevention and Control, the Public Health Agency of Canada, and Public Health England, to name just a few. Pavlin et al. (2003) list 21 possible indicators for influenza surveillance, which can be divided into: (1) Typical health surveillance, including reportable diseases, laboratory-based surveillance, and influenza-specific surveillance, which have the drawbacks of relying on passive reporting, failing to detect newly emerging infections, and not being timely.
(2) Existing health data not normally used for surveillance, including diagnostic information for inpatients and outpatients, intensive care unit admissions, prescription and over-the-counter pharmacy sales, clinical laboratory submissions, Medicare or Medicaid claims, acute diagnoses in nursing home populations, ambulance call chief complaints, radiology test ordering and results, poison information calls, medical advice call-in lines, emergency room use, internet hits for medical information, and medical examiner/mortality surveillance. These indicators have advantages over typical health surveillance in giving better indicators of rare events, being timely, and being easy to obtain. (The 2009 H1N1 influenza incidence mentioned above was initially detected in the United States in April 2009, while by June 11, 2009 WHO declared that it had formed a global pandemic; see http://www.cdc.gov/h1n1flu/estimates_2009_h1n1.htm.) (3) Non-health data sources, including road and transit usage, entertainment venue usage, weather data, vector data, and school and work absenteeism. Patwardhan & Bilkovski (2012) find that the correlations between the aggregate counts of scripts for drugs commonly prescribed for influenza, expressed as influenza drug scripts per 100,000 total scripts filled at a large drug retailing pharmacy chain in the US, and the Google estimates of ILI cases per 100,000 physician visits in the US for the years 2007-2011 are as high as 85%-92%, and with CDC unweighted ILI as high as 97% in 2007.

Most of these surveillance systems are limited by not being timely. Innovative surveillance methods for monitoring influenza include monitoring call volume to telephone triage advice lines (Jeremy U. Espino (2003), Yih et al. (2009)), over-the-counter drug sales (Magruder (2003), Liu et al. (2013)), and surveillance through internet search behavior, such as Twitter data (Lee et al. (2013a), Lee et al. (2013b)) and Google search queries (e.g., Ginsberg et al. (2009)).
Patwardhan & Bilkovski (2012) show that, according to a survey of US public health officials conducted by the International Society for Disease Surveillance during 2007-2008, existing surveillance relies mainly on data from emergency department visits (84%), outpatient clinic visits (49%), over-the-counter (OTC) drugs (44%), school absenteeism (35%), and pharmacy prescription sales data (7%).

Google points out that one way to improve early detection of flu activity is to monitor health-seeking behavior in the form of online web search queries, which are submitted by millions of users around the world each day, because the relative frequency of certain queries is highly correlated with the percentage of physician visits in which a patient presents with influenza-like symptoms (see Ginsberg et al. (2009)). In an attempt to provide faster detection, innovative surveillance systems have been created to monitor indirect signals of influenza activity, such as call volume to telephone triage advice lines and over-the-counter drug sales. About 90 million American adults are believed to search online for information about specific diseases or medical problems each year, making web search queries a uniquely valuable source of information about health trends. Previous attempts at using online activity for influenza surveillance have counted search queries submitted to a Swedish medical website, visitors to certain pages on a US health website, and user clicks on a search keyword advertisement in Canada. A set of Yahoo search queries containing the words flu or influenza was found to correlate with virologic and mortality surveillance data over multiple years. Google's proposed system builds on these earlier works by utilizing an automated method of discovering influenza-related search queries.
By processing hundreds of billions of individual searches from five years of Google web search logs, the system generates more comprehensive models for use in influenza surveillance, with regional and state-level estimates of influenza-like illness (ILI) activity in the United States.

In 2008, Google launched Google Flu Trends, which serves as an innovative and near real-time (with a reporting lag of about one day) influenza surveillance resource based on aggregated internet search query submissions for places around the world. Ginsberg et al. (2009) show that across the nine regions of the United States, Google Flu Trends is able to estimate the current influenza-like illness (ILI) percentage consistently 1-2 weeks ahead of the publication of the official flu statistics from the US CDC Influenza Sentinel Provider Surveillance Network. Ginsberg et al. (2009) point out that despite strong historical correlations, Google Flu Trends is susceptible to false alerts caused by sudden increases in ILI-related queries. Cook et al. (2011) evaluate the performance of Google Flu Trends during the 2009 influenza virus A (H1N1) outbreak. They find that through the H1N1 period, the original Google Flu Trends model underestimated the ILINet data, with an error more than fivefold that of the prior seasons overall, while an updated Google Flu Trends model trained on data including the initial wave of H1N1 performed well over the flu waves. Despite its virtues, Google Flu Trends has some limitations. Some newly infected ILI patients may not use the internet to search for flu-related information, and some patients infected with flu may not search online; these behaviors lead to underestimates of the actual flu incidence. On the other hand, panic and concern among healthy individuals may exaggerate the apparent number of flu cases, because the propensity for individuals to search is influenced by the media and government.
It is also possible that healthy people submit queries merely out of interest in understanding the flu, leading to overestimates of flu incidence.

Research studying flu transmission and forecasting has a long history, but informative and reliable models for forecasting flu are still limited. While standard forecast models for weather, earthquakes, and other hazards are well established, more research attention on forecasting future flu activity is needed. If a flu outbreak or unusual flu activity can be detected beforehand, the public health sector can prepare and take timely action. Shaman & Karspeck (2012) indicate that real-time skillful predictions of influenza peak timing can be made more than 7 weeks in advance of the actual influenza peak. State level ILI incidence rate data for the United States are available officially from the US CDC and from the Google Flu Trends estimates, which provide us with an opportunity to improve traditional time series forecasts for individual states with additional information from neighboring states.

An important classical influenza forecast model is the compartment model, where the population is partitioned into susceptible (S), exposed (E), infected (I),
They also consider a diffusion driven SEIR model with two age groups, children and adults, and allow for contacts among adults, among children, from adults to children, and from children to adults. Hooten et al. (2010) study influenza transmission by considering a discretized SEIR model for each US states using the weekly state level ILI data reported by Google flu trends. They find that both inter-state and intra-state, influenza transmissions increase as state area, population density, and average summer temperature increase and as the winter temperature decrease. Dukic et al. (2012) consider SEIR model with dynamic growth rate of the infectious population. They make use of the Google Flu Trends estimated weekly national ILI data for the US, and the implementation is based on Bayesian particle filtering, which updates learnings about the epidemic process sequentially. Otherwidelyusedinfluenzaforecastmodelsinclude, forexample, theautomatic influenza detection method proposed by Serfling and a variety of Serfling-type sta- tistical algorithms, in which cyclic parametric regressions are used to model flu epidemics and then define an epidemic threshold adjusted for seasonal effects. Ser- fling’s method has disadvantages of requiring long series of historical non-epidemic 85 data to model the baseline distribution and treating the observations as indepen- dent and identically distributed (see for example Rath et al. (2003)). Cowling et al. (2006) find that automated influenza surveillance methods based on short-term data, including time series and CUSUM models, can generate sensitive, specific, and timely alerts, and they can offer a useful alternative to Serfling-like methods. Martinez-Beneito et al. (2008) introduce a Markov switching model to determine the epidemic and non-epidemic periods from influenza surveillance data. Rath et al. (2003) present a method for automated detection of influenza epidemic using hidden Markov models. Viboud et al. 
(2003) consider predicting the spread of influenza epidemics by the method of historical analogues. Dugas et al. (2013) employ a generalized autoregressive moving average model.

Only a limited number of existing studies on forecasting influenza exploit regional dependence information. Fox & Dunson (2015) develop a class of nonparametric covariance regression models, which allow an unknown covariance matrix to change flexibly with predictors, and then apply the method to Google Flu Trends data from September 28, 2003 to October 24, 2010. Each observation is composed of a 183-dimensional vector of Google Flu Trends estimated ILI rates at the US national, regional, state, and city levels. The 183-dimensional data are analyzed jointly, and in dealing with the missing data, instead of using imputation, they update the posterior based solely on the observed data. They show that their method is capable of capturing both spatial and temporal changes in the correlations of the Google Flu Trends data, even in the presence of substantial missing data. In addition, they find that regions become more correlated during flu seasons, and they notice that some geographically distant states, such as New York and California, are highly correlated, which could be due to demographic similarities and high travel rates between the states. As they point out, a drawback of their approach is that no geographic information is utilized in the model; the spatial structure is uncovered simply from analyzing the 183-dimensional time series and the patterns therein.

In this chapter, we use the spatial panel VAR, SPVAR(p), to estimate and forecast influenza incidence rates in the United States, where the officially reported ILI data from CDC and the Google Flu Trends estimated ILI data in each week for the 48 continental states of the United States form a vector of observations. The predictors are state population density, temperature, and precipitation.
Sequential likelihood ratio (LR) tests, as well as the AIC, SIC, and HQC, are used to select the lag orders. For a given lag order, an LR test is used to choose between the model with independently distributed residuals and the model with spatially autoregressive residuals. The estimation and forecasting performance of the SPVAR(p) is also compared to that of the univariate SDPD(p) for the Google Flu Trends estimated ILI. In addition, we derive the generalized impulse response function of the SPVAR(p), and we use our SPVAR(p) to demonstrate the dynamic transmission of a hypothetical flu shock to California using impulse response analysis. The results of the impulse response analysis show that it would take several weeks for a unit shock to the California ILI incidence rate to be absorbed in the absence of external control, and that neighbouring states would be affected to degrees proportional to their adjacency to California.

3.2 Data and Sample

3.2.1 CDC and Google Flu Trends ILI

In the United States, the Outpatient Influenza-like Illness Surveillance Network (ILINet), which consists of outpatient health care providers from 50 states, the District of Columbia, and the U.S. Virgin Islands, reports to the US Centers for Disease Control and Prevention (CDC) on a weekly basis. The reported influenza-like illness (ILI) percentages are weighted by the state populations and then compared to national and regional ILI baselines, which are calculated from ILI data during non-influenza weeks in the previous three seasons. CDC then reports to the public each week on updated influenza virus types, outpatient ILI visits, influenza mortality, influenza hospitalization, and the geographic spread of influenza for the entire country and the 10 Health and Human Services (HHS) regions. For individual states, CDC reports weekly influenza activity levels, which compare the mean reported percent of ILI visits for the current week to the mean reported percent of ILI visits for non-influenza weeks.
The 10 activity levels are classified as minimal (levels 1-3), low (levels 4-5), moderate (levels 6-7), and high (levels 8-10). An activity level of 1 corresponds to values below the mean, level 2 corresponds to an ILI percentage less than 1 standard deviation above the mean, level 3 corresponds to ILI more than 1 but less than 2 standard deviations above the mean, and so on, with an activity level of 10 corresponding to ILI 8 or more standard deviations above the mean. (For the ILINet system, ILI is defined as fever (temperature of 100°F [37.8°C] or greater) and a cough and/or a sore throat without a known cause other than influenza; see http://www.cdc.gov/flu/weekly/overview.htm.)

Google Flu Trends was launched in 2008 and reported estimated weekly flu data for the United States starting from September 28, 2003. The reported numbers are defined as estimated ILI patients out of 100,000 total physician visits. Ginsberg et al. (2009) introduce the background and methodology of the Google Flu Trends model. The idea is based on the observation that the relative frequency of certain internet search queries is highly correlated with the percentage of ILI physician visits. Therefore, influenza activity in a certain location can be estimated using the online search query data in that location, identified using the IP address of the search. By aggregating historical logs of internet search queries, a time series of weekly counts for 50 million of the most common search queries in the United States is constructed. Separate aggregate weekly counts are kept for every query in each state. Each time series is normalized by dividing the count for each query in a particular week by the total number of online search queries submitted in that location during the week, resulting in a query fraction. See Ginsberg et al. (2009). Google Flu Trends estimated ILI has been validated against officially reported data.
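The CDC activity-level classification described earlier maps directly to a small function. The sketch below implements the published rule as stated in the text; it is an illustration, not CDC's own code:

```python
def cdc_activity_level(ili_pct: float, mean: float, sd: float) -> int:
    """Map a weekly ILI percentage to the 10-point CDC activity scale.

    Level 1: below the non-influenza-week mean; level k (2..9): between
    k-2 and k-1 standard deviations above the mean; level 10: 8 or more
    standard deviations above the mean.
    """
    z = (ili_pct - mean) / sd
    if z < 0:
        return 1
    return min(10, 2 + int(z))

assert cdc_activity_level(1.0, 2.0, 0.5) == 1     # below the mean -> minimal
assert cdc_activity_level(2.2, 2.0, 0.5) == 2     # less than 1 sd above the mean
assert cdc_activity_level(2.8, 2.0, 0.5) == 3     # between 1 and 2 sd above
assert cdc_activity_level(7.0, 2.0, 0.5) == 10    # 10 sd above -> capped at 10
```

The `mean` and `sd` arguments stand for the mean and standard deviation of the reported ILI percentage during non-influenza weeks for the state in question.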
Ortiz et al. (2011) found that the national correlation between Google Flu Trends estimated ILI and CDC ILI data is 0.72 when assessed over 5 influenza seasons beginning in 2003. Ginsberg et al. (2009) compared the regional Google Flu Trends estimated ILI to CDC ILI data and demonstrated a correlation of 0.97 during the 2007-2008 flu season. Ginsberg et al. (2009) also show that the state level Google Flu Trends data for Utah has been validated against official ILI data from the state of Utah, with a 0.9 correlation across 42 validation points. Dugas et al. (2012) validate the use of weekly city-level Google Flu Trends as an emergency department surveillance tool because of its correlation with both positive influenza test results and the volume of patients with ILI presenting to the emergency department.

3.2.2 Sample

The sample contains Google Flu Trends estimated ILI and CDC reported ILI data for 326 weeks from Sep 28, 2008 to Dec 27, 2014, spanning 7 flu seasons and covering the 2009 H1N1 flu pandemic. The entire sample is then split into a subsample for in-sample estimation, which contains 310 weekly observations until Sep 6, 2014, and a subsample for evaluating the out-of-sample forecast, which consists of the remaining weekly observations. Table C.2.A and Table C.2.B report the summary statistics of the sample ILI by Google Flu Trends and CDC and the predictive variables.

Footnote 4: Data Source: Google Flu Trends (http://www.google.org/flutrends).

The sample consists of 48 US continental states. Alaska and Hawaii are excluded from the analysis due to their geographical isolation from the other states, which prevents us from constructing their dependence structures with the other states using geographical contiguity information. The predictive variables are state population density, temperature, and precipitation. The population density of each state is computed as the resident population divided by land area.
The 2008 and 2009 population data use census intercensal estimates as of July 1, 2010, and the 2010 to 2014 population data use census population estimates as of July 1, 2014. The data for state area are land area (square miles) based on census 2010 data. We assume that the state population density data stay constant within each year. The weekly data on temperature and precipitation are computed using the daily climate data from the United States Historical Climatology Network (USHCN). Specifically, we compute weekly precipitation (hundredths of inches), maximum temperature (degrees F), and minimum temperature (degrees F) of each state in each week as the average daily values over all reporting U.S. Cooperative Observer Network (COOP) stations of the state in a week. Then, the weekly temperature is taken as the average of the weekly maximum and minimum temperatures.

Footnote 5: The 2008-2009 flu season consists of weeks starting from September 28, 2008 and ending at August 29, 2009. The 2009-2010 flu season consists of weeks starting from August 30, 2009 and ending at October 2, 2010. The 2010-2011 flu season consists of weeks starting from October 3, 2010 and ending at October 1, 2011. The 2011-2012 flu season consists of weeks starting from October 2, 2011 and ending at September 29, 2012. The 2012-2013 flu season consists of weeks starting from September 30, 2012 and ending at September 28, 2013. The 2013-2014 flu season consists of weeks starting from September 29, 2013 and ending at September 27, 2014. The 2014-2015 flu season consists of weeks starting from September 28, 2014 and ending at October 3, 2015.

Figure C.1 plots the time series of weekly Google Flu Trends ILI by the 10 HHS regions and the 48 contiguous states. These plots show seasonal fluctuations in the ILI rates, which are mostly driven by seasonal flu that peaks during winters and then levels off during summers.
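The weekly temperature construction described above (average the daily station values over the week, then take the midpoint of the weekly maximum and minimum) can be sketched as follows; the daily readings are hypothetical, not actual USHCN/COOP data.

```python
# Minimal sketch of the weekly climate aggregation. Daily values are
# assumed to already be state-level averages over the reporting COOP
# stations; the numbers themselves are hypothetical.

def weekly_temperature(daily_tmax, daily_tmin):
    """Average the week's daily max and min temperatures (degrees F),
    then take the midpoint of the two weekly averages."""
    avg_max = sum(daily_tmax) / len(daily_tmax)
    avg_min = sum(daily_tmin) / len(daily_tmin)
    return (avg_max + avg_min) / 2

# One hypothetical week of state-level daily values.
tmax = [55, 57, 60, 58, 54, 53, 56]
tmin = [35, 36, 40, 39, 33, 31, 34]

print(round(weekly_temperature(tmax, tmin), 2))  # → 45.79
```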
Figure C.2 plots the time series of weekly Google Flu Trends ILI by state together with CDC reported ILI levels, from which close comovements between Google Flu Trends and CDC ILI can be clearly observed in all the states. Figure C.3 displays Google Flu Trends and CDC ILI on maps; states plotted in darker colors have more severe flu activity. The ILI values shown are from September 20, 2009, during the 2009 H1N1 flu pandemic. These maps allow us to discern flu activity levels between neighbouring states, and they also show the comovements of the states' Google Flu Trends and CDC ILI.

Table C.3.A lists the geographically contiguous states for each of the 48 states. Two states are geographically contiguous if they share a common geographical border. For example, California is contiguous to Oregon, Nevada, and Arizona. The most connected states, with the largest number of contiguous states (neighbours), are Missouri and Tennessee: both have 8 neighbours and an eigenvector centrality score of 0.16. The eigenvector centrality scores are induced from the geographical adjacency matrix reported in Table C.3.B. Eigenvector centrality scores correspond to the values of the first eigenvector of the graph adjacency matrix. In general, individuals with high eigenvector centralities are those connected to many other individuals. The state geographical adjacency matrix $G$ has $G_{ij} = 1$ if two different states $i$ and $j$ are contiguous neighbours, and $G_{ij} = 0$ if $i = j$ or if states $i$ and $j$ are not contiguous. Therefore, $G$ is symmetric with 0 diagonal elements. Consider each state to be a vertex in the spatial network; contiguous states then create an edge in the network. Figure C.4 plots the graph of the geographical adjacency matrix with states grouped into communities by a spin-glass model and simulated annealing, where the upper limit on the number of communities is set to 25. The detected communities have nodes with many edges inside the community and few edges between the community and the rest of the graph. The geographical adjacency matrix implies 6 communities.

Footnote 6: The 10 HHS regions are Region 1 (Boston): Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont; Region 2 (New York): New Jersey, New York, Puerto Rico, and the Virgin Islands; Region 3 (Philadelphia): Delaware, Maryland, Pennsylvania, Virginia, and West Virginia; Region 4 (Atlanta): Alabama, Florida, Georgia, Kentucky, Mississippi, North Carolina, South Carolina, and Tennessee; Region 5 (Chicago): Illinois, Indiana, Michigan, Minnesota, Ohio, and Wisconsin; Region 6 (Dallas): Arkansas, Louisiana, New Mexico, Oklahoma, and Texas; Region 7 (Kansas City): Iowa, Kansas, Missouri, and Nebraska; Region 8 (Denver): Colorado, Montana, North Dakota, South Dakota, Utah, and Wyoming; Region 9 (San Francisco): Arizona, California, Hawaii, Nevada, American Samoa, Commonwealth of the Northern Mariana Islands, Federated States of Micronesia, Guam, Marshall Islands, and Republic of Palau; and Region 10 (Seattle): Alaska, Idaho, Oregon, and Washington.

3.3 Estimation and Forecast

In the application to the ILI data, we have $M = 2$, so that $y_{it} = (y^g_{it}, y^c_{it})'$ is a bivariate vector of observations on the CDC reported ILI activity levels ($y^c_{it}$) and the Google Flu Trends estimated ILI incidence rates ($y^g_{it}$) for state $i = 1, 2, \ldots, N$ in week $t = 1, 2, \ldots, T$:

$$y^g_{it} = \alpha^g_i + \sum_{j=1}^{p}\left(\phi^{gg}_j y^g_{i,t-j} + \phi^{gc}_j y^c_{i,t-j}\right) + \sum_{j=0}^{q}\psi^g_j \sum_{k=1}^{N} w_{ik}\, y^g_{k,t-j} + \beta^{g\prime} x_{it} + u^g_{it}$$

$$y^c_{it} = \alpha^c_i + \sum_{j=1}^{p}\left(\phi^{cg}_j y^g_{i,t-j} + \phi^{cc}_j y^c_{i,t-j}\right) + \sum_{j=0}^{q}\psi^c_j \sum_{k=1}^{N} w_{ik}\, y^c_{k,t-j} + \beta^{c\prime} x_{it} + u^c_{it}$$

and, for the residual series,

$$u^g_{it} = \lambda^g \sum_{k=1}^{N} w_{ik}\, u^g_{kt} + \varepsilon^g_{it}, \qquad u^c_{it} = \lambda^c \sum_{k=1}^{N} w_{ik}\, u^c_{kt} + \varepsilon^c_{it}$$

where $(\varepsilon^g_{it}, \varepsilon^c_{it})' \sim iid\, N(0, \Omega_2)$.
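The eigenvector centrality scores discussed above (the entries of the leading eigenvector of the adjacency matrix) can be sketched as follows. The 4-state graph is hypothetical, and the sum-to-one normalization is one of several common conventions, so the scale need not match the 0.16 scores reported for the 48-state matrix.

```python
# Sketch of eigenvector centrality from a symmetric 0/1 adjacency
# matrix G: scores are the entries of the eigenvector associated with
# the largest eigenvalue. The 4-vertex graph below is hypothetical.
import numpy as np

def eigenvector_centrality(G):
    """Leading eigenvector of symmetric G, normalized to sum to 1."""
    vals, vecs = np.linalg.eigh(G)         # eigh: G is symmetric
    v = np.abs(vecs[:, np.argmax(vals)])   # eigenvector of largest eigenvalue
    return v / v.sum()

G = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

scores = eigenvector_centrality(G)
# The best-connected vertex (index 2, with 3 neighbours) scores highest.
print(np.round(scores, 3))
```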
$w_{ik}$ is the $(i,k)$-th element of $W$, the $N \times N$ row-normalized state geographical contiguity matrix. Assuming $\Lambda = 0$ in (2.3) implies a SPVAR model with iid disturbances. When $\Lambda$ is non-zero, the disturbance terms of the model are augmented with locally weighted neighbour disturbances.

Estimates of the SPVAR up to order 6 with iid and SAR residuals are reported in Table C.4. Values in parentheses are standard errors. The estimates across the different model specifications are consistent. All of them imply positive concurrent neighbourhood effects, $\psi^g_0$ and $\psi^c_0$. The estimated first order lag effects, $\phi^{gg}_1$, $\phi^{cg}_1$, $\phi^{gc}_1$, $\phi^{cc}_1$, are positive, and the estimated second order lag effect of CDC ILI levels on the current CDC ILI levels is positive, consistent with the fact that the CDC ILI levels are more persistent than the Google Flu Trends ILI. This provides evidence for the persistence of flu activity between immediate weeks. The estimated effects from higher lag orders are mostly negative, reflecting the transitory nature of flu activity.

Despite the similarities in most of the estimates, we explore the issue of model selection, and the results of the sequential LR test statistics, AIC, SIC, and HQC for the SPVAR(p) are reported in Table C.5. For example, SIC selects lag order p=3 for the model with iid residuals and lag order p=4 for the model with SAR residuals. Table C.5.B reports the LR statistics for testing the iid against the SAR residual specifications. All the test statistics are significant at the 5% level except for the LR test with p = 1, which provides evidence of spatially correlated residuals. The estimates are then used to construct in-sample fitted ILI and to form out-of-sample forecasted ILI.
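Row normalization of the contiguity matrix, which makes each neighbour term $\sum_k w_{ik}\, y_{kt}$ an average over state $i$'s neighbours, can be sketched as follows; the 4-state adjacency matrix is hypothetical.

```python
# Sketch of building a row-normalized weight matrix W from a 0/1
# adjacency matrix G, so that W @ y averages each state's neighbours.
# The 4-state graph below is hypothetical.
import numpy as np

G = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

# Divide each row by its sum so every row of W sums to one.
W = G / G.sum(axis=1, keepdims=True)

y = np.array([2.0, 4.0, 6.0, 8.0])
print(W @ y)  # each entry is the mean value of that state's neighbours
```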
The in-sample fitted ILI, $\hat{y}_t$, are computed as the fitted values of model (2.5),

$$\hat{y}_t = S_N^{-1}(\hat\Psi_0)\,\hat\alpha + \sum_{j=1}^{p} S_N^{-1}(\hat\Psi_0)\left(I_N \otimes \hat\Phi_j\right) y_{t-j} + \sum_{j=1}^{p} S_N^{-1}(\hat\Psi_0)\left(I_N \otimes \hat\Psi_j\right) W y_{t-j} + S_N^{-1}(\hat\Psi_0)\left(I_N \otimes \hat\beta'\right) x_t, \quad t = 1, \ldots, T \tag{3.1}$$

One-week-ahead out-of-sample forecasts $\hat{y}_{T+k}$ for $k \geq 1$ are then computed iteratively based on the parameters estimated from the training sample,

$$\hat{y}_{T+k} = S_N^{-1}(\hat\Psi_0)\,\hat\alpha + \sum_{j=1}^{p} S_N^{-1}(\hat\Psi_0)\left(I_N \otimes \hat\Phi_j\right) y_{T+k-j} + \sum_{j=1}^{p} S_N^{-1}(\hat\Psi_0)\left(I_N \otimes \hat\Psi_j\right) W y_{T+k-j} + S_N^{-1}(\hat\Psi_0)\left(I_N \otimes \hat\beta'\right) x_{T+k}, \quad k \geq 1 \tag{3.2}$$

Figure C.5.A plots the in-sample fitted and out-of-sample forecasted ILI from the SPVAR(4) against the observed ILI data, using California and New York as examples. The observed ILI data are plotted with plus symbols, the in-sample fitted $\hat{y}_t$ as solid lines, and the out-of-sample forecasted $\hat{y}_{T+k}$ as dashed lines. The red vertical line after week 310 separates the in-sample fits from the out-of-sample forecasts. We can see that the estimated and forecasted ILI from both models are close to the observed data. Although not shown, the fitting and forecasting for the ILI of states other than California and New York perform similarly well.

Table C.6.A further reports the MADE and RMSE used to evaluate the overall model fit, where the residuals are computed as the fitted values of (2.5) minus the observed data:

$$\mathrm{MADE} = \frac{1}{NT}\sum_{t=1}^{T}\sum_{i=1}^{N}\left|y^m_{it} - \hat{y}^m_{it}\right|, \qquad \mathrm{RMSE} = \left[\frac{1}{NT}\sum_{t=1}^{T}\sum_{i=1}^{N}\left(y^m_{it} - \hat{y}^m_{it}\right)^2\right]^{1/2}$$

for $m = g, c$, corresponding to Google Flu Trends and CDC, respectively. In general, the MADE and RMSE are reasonably small and show that the overall estimation and forecast performance of the SPVAR(p) is satisfactory.
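The two fit metrics average over all state-week cells, and can be sketched directly; the small observed/fitted arrays below are hypothetical illustrative values.

```python
# Sketch of the MADE and RMSE fit metrics defined in the text,
# averaged over N states and T weeks. Data are hypothetical.
import numpy as np

def made(y, y_hat):
    """Mean absolute deviation error over all state-week cells."""
    return np.mean(np.abs(y - y_hat))

def rmse(y, y_hat):
    """Root mean squared error over all state-week cells."""
    return np.sqrt(np.mean((y - y_hat) ** 2))

# Rows are states (N = 2), columns are weeks (T = 3).
y     = np.array([[1.0, 2.0, 3.0], [2.0, 4.0, 6.0]])
y_hat = np.array([[1.5, 2.0, 2.5], [2.0, 5.0, 6.0]])

print(round(float(made(y, y_hat)), 4), round(float(rmse(y, y_hat)), 4))  # → 0.3333 0.5
```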
The MADE and RMSE of the in-sample fitted and out-of-sample forecasted CDC ILI are larger than those of the Google Flu Trends ILI, due to the data property that CDC ILI are observed in integer levels.

3.3.1 Model Selection

As in time series contexts, the lag orders p and q in SPVAR(p,q) can be selected by statistical tests and information criteria. Assume that $\bar{p}$ is a known upper bound for the lag orders; we can then test the statistical significance of the lag orders by a sequential likelihood-ratio (LR) test. The test statistic $LR(p) = 2\left(\ln L(p+1) - \ln L(p)\right)$ is $\chi^2_{M(M+1)}$ distributed. Because our goal is to forecast using the SPVAR(p,q) model, information criteria, such as Akaike's (1973, 1974) Akaike Information Criterion (AIC), Schwarz's (1978) Bayesian Information Criterion (SIC), and the Hannan-Quinn Criterion (HQC) of Hannan & Quinn (1979) and Quinn (1980), can help to choose a model targeted at the forecasting objective. Following Lütkepohl (2005), the information criteria we consider are

$$\mathrm{AIC}(p) = \ln\det\hat\Sigma_V + \frac{2}{\bar{T}} K, \qquad \mathrm{SIC}(p) = \ln\det\hat\Sigma_V + \frac{\ln\bar{T}}{\bar{T}} K, \qquad \mathrm{HQC}(p) = \ln\det\hat\Sigma_V + \frac{2\ln\ln\bar{T}}{\bar{T}} K$$

where $K$ denotes the total number of freely estimated parameters, which is $(p+q+2)M^2$ in a SPVAR(p,q), $\bar{T} = T - \bar{p}$, $\hat\Sigma_V = T^{-1}\sum_{t=1}^{T}\hat{V}_t\hat{V}_t'$, and $\hat{V}_t = \tilde{y}_t - \sum_{j=1}^{p} S_N^{-1}(\Psi_0)\left(I_N \otimes \Phi_j\right)\tilde{y}_{t-j} - \sum_{j=1}^{q} S_N^{-1}(\Psi_0)\left(I_N \otimes \Psi_j\right) W\tilde{y}_{t-j} - S_N^{-1}(\Psi_0)\left(I_N \otimes \beta\right)'\tilde{x}_t$. The selected lag orders minimize these information criteria. We also test the SPVAR models with iid errors against the models with spatially correlated errors by a likelihood-ratio test, where $LR = 2\left(\ln L_{SAR}(p) - \ln L_{iid}(p)\right) \sim \chi^2_{M^2}$.

3.3.2 Parameter Stability

As a robustness check, we estimate the SPVAR(p) models again, splitting the estimation and forecast sub-samples at different sample sizes. We first split the sample at week 125.
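The three criteria above share the $\ln\det\hat\Sigma_V$ fit term and differ only in the penalty weight on $K$; a sketch with a hypothetical residual covariance (not an estimate from the ILI data) makes the penalty ordering visible.

```python
# Sketch of the AIC, SIC, and HQC formulas from the text, with
# K = (p + q + 2) * M**2. Sigma_V below is a hypothetical residual
# covariance matrix, not one estimated from the dissertation's data.
import numpy as np

def info_criteria(sigma_v, K, T_bar):
    ld = np.log(np.linalg.det(sigma_v))
    aic = ld + (2.0 / T_bar) * K
    sic = ld + (np.log(T_bar) / T_bar) * K
    hqc = ld + (2.0 * np.log(np.log(T_bar)) / T_bar) * K
    return aic, sic, hqc

M, p, q = 2, 3, 3
T_bar = 304                       # T minus the maximum lag order
K = (p + q + 2) * M ** 2          # freely estimated parameters
sigma_v = np.array([[0.5, 0.1], [0.1, 0.4]])

aic, sic, hqc = info_criteria(sigma_v, K, T_bar)
# For T_bar large enough that ln(T_bar) > 2, the penalties order as
# AIC < HQC < SIC for the same fit term.
print(aic < hqc < sic)  # → True
```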
We use the first 125 weekly observations to estimate the model parameters, and then we use the rest of the observations to assess parameter stability via out-of-sample forecasts. Results are reported in Table C.6.B, and Figure C.5.B plots the fitted data versus the observed data using California and New York as examples. Similarly, we consider splitting the sample at week 250, using the first 250 weekly observations to estimate the model and the remaining 76 weekly observations to assess the out-of-sample forecast. Results are reported in Table C.6.C and Figure C.5.C. The sample size of 125 corresponds to approximately 3 years of observations, and the sample size of 250 to approximately 5 years. By varying the sample sizes, we can see that the model parameters are stable over time, and a model estimated using historical data carries into the future reasonably well. In general, models with SAR residuals perform consistently better than models with iid residuals in terms of both in-sample fit and out-of-sample forecast.

3.3.3 Alternative Weight Matrix

As an alternative to the geographical contiguity matrix, we proxy the degree of connectivity among states using a workflow weight matrix, which is constructed by first summing over the county-level numbers of workers 16 years old and over in the commuter flow from residential states to workplace states between 2006 and 2010, and then averaging the among-state inflows and outflows. The weight matrix is assumed to be constant over time. The US Census reports that during the 2006-2010 period, more than a quarter (27.4 percent) of U.S. workers traveled outside of their residence county for work during a typical week, compared to 26.7 percent in 2000. The workflow weight matrix is listed in Table C.3.C. We use the first 310 weekly observations to estimate the model and the remaining weekly observations to assess the out-of-sample forecast. Results are reported in Table C.6.D and Figure C.5.D.

Footnote 7: Data Source: Table 1 or Table 2 of http://www.census.gov/population/metro/data/other.html
Footnote 8: Source: U.S. Census Bureau, American FactFinder; ACS 2006-2010 Table B08007 and Census 2000 Table P026.
The results indicate that the satisfactory performance of the estimation and forecast of the SPVAR(p) is robust to the choice of weight matrix.

3.3.4 Comparison with SDPD

To assess the differences in estimation and forecast from the SPVAR(p), in this subsection we estimate the univariate spatial dynamic panel data model with lag order p, denoted SDPD(p), which extends Lee & Yu (2010a) to allow for p-th order dynamics and corresponds to the univariate reduced version of our SPVAR(p) when $M = 1$.

For each state $i = 1, 2, \cdots, N$ at week $t = 1, 2, \cdots, T$, denote by $y^g_{it}$ the Google Flu Trends estimated ILI incidence rate. The SDPD(p) models $y^g_{it}$ as

$$y^g_{it} = \alpha^g_i + \sum_{j=1}^{p}\phi^{gg}_j y^g_{i,t-j} + \psi^g_0\sum_{k=1}^{N} w_{ik}\, y^g_{kt} + \sum_{j=1}^{q}\psi^g_j\sum_{k=1}^{N} w_{ik}\, y^g_{k,t-j} + \beta' x_{it} + u^g_{it} \tag{3.3}$$

with

$$u^g_{it} = \lambda^g\sum_{k=1}^{N} w_{ik}\, u^g_{kt} + \varepsilon^g_{it} \tag{3.4}$$

Stack by $i$ and define $S_N(\lambda^g) = I_N - \lambda^g W$ to get $u^g_t = S_N^{-1}(\lambda^g)\,\varepsilon^g_t$ and

$$y^g_t = \alpha^g + \sum_{j=1}^{p}\phi^{gg}_j y^g_{t-j} + \psi^g_0 W y^g_t + \sum_{j=1}^{p}\psi^g_j W y^g_{t-j} + \left(I_N \otimes \beta'\right) x_t + S_N^{-1}(\lambda^g)\,\varepsilon^g_t \tag{3.5}$$

Also define $S_N(\psi^g_0) = I_N - \psi^g_0 W$ to get

$$y^g_t = S_N^{-1}(\psi^g_0)\,\alpha^g + S_N^{-1}(\psi^g_0)\sum_{j=1}^{p}\phi^{gg}_j y^g_{t-j} + S_N^{-1}(\psi^g_0)\sum_{j=1}^{p}\psi^g_j W y^g_{t-j} + S_N^{-1}(\psi^g_0)\left(I_N \otimes \beta'\right) x_t + S_N^{-1}(\psi^g_0)\, S_N^{-1}(\lambda^g)\,\varepsilon^g_t \tag{3.6}$$

With $\varepsilon^g_t \sim iid(0, \omega^{gg} I_N)$, we estimate the parameters $(\theta^{g\prime}, \alpha^{g\prime})'$ by profile QMLE, where $\theta^g$ denotes the vector of unknown parameters, $(\psi^g_0, \phi^{gg}_1, \cdots, \phi^{gg}_p, \psi^g_1, \cdots, \psi^g_p, \lambda^g, \omega^{gg})'$.
The log-likelihood function of the SDPD(p) for the observations $y^g_t$, $t = 1, 2, \ldots, T$, is

$$\ln L(\theta^g, \alpha^g) = -\frac{NT}{2}\ln(2\pi\omega^{gg}) + T\ln\det S_N(\psi^g_0) + T\ln\det S_N(\lambda^g) - \frac{1}{2\omega^{gg}}\sum_{t=1}^{T}\left[S_N(\psi^g_0)\, y^g_t - \alpha^g - \sum_{j=1}^{p}\phi^{gg}_j y^g_{t-j} - \sum_{j=1}^{p}\psi^g_j W y^g_{t-j} - \left(I_N \otimes \beta'\right) x_t\right]' S_N(\lambda^g)'\, S_N(\lambda^g)\left[S_N(\psi^g_0)\, y^g_t - \alpha^g - \sum_{j=1}^{p}\phi^{gg}_j y^g_{t-j} - \sum_{j=1}^{p}\psi^g_j W y^g_{t-j} - \left(I_N \otimes \beta'\right) x_t\right] \tag{3.7}$$

Solving for $\alpha^g$ as a function of $\theta^g$ gives

$$\alpha^g(\theta^g) = S_N(\psi^g_0)\,\bar{y}^g - \sum_{j=1}^{p}\phi^{gg}_j\,\bar{y}^g_{-j} - \sum_{j=1}^{p}\psi^g_j W\bar{y}^g_{-j} - \left(I_N \otimes \beta'\right)\bar{x}$$

where $\bar{y}^g = \frac{1}{T}\sum_{t=1}^{T} y^g_t$, $\bar{y}^g_{-j} = \frac{1}{T}\sum_{t=1}^{T} y^g_{t-j}$, $\bar{x} = \frac{1}{T}\sum_{t=1}^{T} x_t$, and $\tilde{y}^g_t = y^g_t - \bar{y}^g$, $\tilde{y}^g_{t-j} = y^g_{t-j} - \bar{y}^g_{-j}$, $\tilde{x}_t = x_t - \bar{x}$. The log-likelihood function concentrating out $\alpha^g(\theta^g)$ is then

$$\ln L(\theta^g) = -\frac{NT}{2}\ln(2\pi\omega^{gg}) + T\ln\det S_N(\psi^g_0) + T\ln\det S_N(\lambda^g) - \frac{1}{2\omega^{gg}}\sum_{t=1}^{T}\left[S_N(\psi^g_0)\,\tilde{y}^g_t - \sum_{j=1}^{p}\phi^{gg}_j\tilde{y}^g_{t-j} - \sum_{j=1}^{p}\psi^g_j W\tilde{y}^g_{t-j} - \left(I_N \otimes \beta'\right)\tilde{x}_t\right]' S_N(\lambda^g)'\, S_N(\lambda^g)\left[S_N(\psi^g_0)\,\tilde{y}^g_t - \sum_{j=1}^{p}\phi^{gg}_j\tilde{y}^g_{t-j} - \sum_{j=1}^{p}\psi^g_j W\tilde{y}^g_{t-j} - \left(I_N \otimes \beta'\right)\tilde{x}_t\right]$$

The initial values of the QML estimation are the OLS estimates from the time-series-demeaned regression. The QML estimates, denoted $\hat\theta^{g,SDPD}$, are used to form the in-sample fitted and out-of-sample forecasted ILI values. The in-sample fitted values $\hat{y}^g_t$ are computed from (3.6),

$$\hat{y}^{g,SDPD}_t = S_N^{-1}(\hat\psi^{g,SDPD}_0)\,\hat\alpha^{g,SDPD} + S_N^{-1}(\hat\psi^{g,SDPD}_0)\sum_{j=1}^{p}\hat\phi^{gg,SDPD}_j y^g_{t-j} + S_N^{-1}(\hat\psi^{g,SDPD}_0)\sum_{j=1}^{p}\hat\psi^{g,SDPD}_j W y^g_{t-j} + S_N^{-1}(\hat\psi^{g,SDPD}_0)\left(I_N \otimes \hat\beta^{SDPD\prime}\right) x_t$$

and the one-week-ahead out-of-sample forecasted Google Flu Trends ILI $\hat{y}^g_{T+k}$ for $k \geq 1$ are computed from

$$\hat{y}^{g,SDPD}_{T+k} = S_N^{-1}(\hat\psi^{g,SDPD}_0)\,\hat\alpha^{g,SDPD} + S_N^{-1}(\hat\psi^{g,SDPD}_0)\sum_{j=1}^{p}\hat\phi^{gg,SDPD}_j y^g_{T+k-j} + S_N^{-1}(\hat\psi^{g,SDPD}_0)\sum_{j=1}^{p}\hat\psi^{g,SDPD}_j W y^g_{T+k-j} + S_N^{-1}(\hat\psi^{g,SDPD}_0)\left(I_N \otimes \hat\beta^{SDPD\prime}\right) x_{T+k}$$

Figure C.5.E
displays the in-sample fitted and out-of-sample forecasted ILI from the profile QML estimated SDPD(3) and SDPD(4), using California and New York ILI as examples. Table C.6.E shows the MADE and RMSE of the SDPD(p) with $p = 1, \ldots, 6$, where $\mathrm{MADE} = \frac{1}{NT}\sum_{t=1}^{T}\sum_{i=1}^{N}\left|y^g_{it} - \hat{y}^{g,SDPD}_{it}\right|$ and $\mathrm{RMSE} = \left[\frac{1}{NT}\sum_{t=1}^{T}\sum_{i=1}^{N}\left(y^g_{it} - \hat{y}^{g,SDPD}_{it}\right)^2\right]^{1/2}$. From the comparison, we can see that both the SPVAR(p) and SDPD(p) models perform well.

3.4 Impulse Response Analysis

Traditional impulse response analysis defines the future responses of the variable of interest after receiving a unit of shock. The impulse response $I_z$ can be defined as the differential of two forecasts (see, for example, Koop et al. (1996) and Jordà (2005)):

$$I_z(h, \delta, \Omega_{t-1}) = E\left(z_{t+h} \mid v_t = \delta,\, v_{t+1} = \cdots = v_{t+h} = 0,\, \Omega_{t-1}\right) - E\left(z_{t+h} \mid v_t = v_{t+1} = \cdots = v_{t+h} = 0,\, \Omega_{t-1}\right)$$

$I_z$ depends on the horizon $h = 1, 2, 3, \cdots$, the size of the shock $\delta$, and past information, where $z_t$ is the variable of interest, $v_t$ is the residual in the model of $z_t$, $\Omega_{t-1}$ is the information filtration, and $\delta$ is often set equal to the standard deviation of $v_t$. $I_z$ computes the difference in the expected future values of $z$ with and without shocks occurring at time $t$, based on information known by time $t-1$. The orthogonal impulse response function (OIRF, Sims (1980)) of structural VAR suffers from the identification issue of ordering. Koop et al. (1996) develop the Generalized Impulse Response Function (GIRF), which is invariant to the ordering. Instead of treating future shocks as 0, the GIRF uses the expectation operator to average out the future shocks:

$$GI_z(h, v_t, \omega_{t-1}) = E\left(z_{t+h} \mid v_t, \omega_{t-1}\right) - E\left(z_{t+h} \mid \omega_{t-1}\right)$$

for $h = 1, 2, 3, \cdots$. Further, Koop et al. (1996) note that $GI$ defined above can be considered a realization of a random variable, defined by

$$GI_z(h, V_t, \Omega_{t-1}) = E\left(z_{t+h} \mid V_t, \Omega_{t-1}\right) - E\left(z_{t+h} \mid \Omega_{t-1}\right)$$

The GIRF of Koop et al. (1996) is applicable to both linear and nonlinear models.
Potter (2000) generalizes the linear impulse response functions to nonlinear cases, including versions of the nonlinear impulse response function, the derivative of the nonlinear impulse response function with respect to $y_t$, and the nonlinear updating function. Potter (2000) further generalizes these notions to define the conditional generalized impulse response function. Other literature on impulse response analysis includes, for example, Gourieroux & Jasiak (2005), which considers the general nonlinear impulse response as the joint path distribution of the benchmark process without shock and the process after the arrival of shocks, and Jordà (2005), which proposes an alternative way of estimating the impulse response function directly, by defining the impulse response as the difference between two forecasts and then estimating it by local projection. We analyze the spatial-temporal proliferation of shocks following the notion of the generalized impulse response function (GIRF) defined in Koop et al. (1996) and Pesaran & Shin (1998).

Partition $\varepsilon_t = (\varepsilon_{t,u}, \varepsilon_{t,s})$, with $\varepsilon_{t,u}$ and $\varepsilon_{t,s}$ being the unshocked and shocked disturbances, respectively, and denote the size of the shock on $\varepsilon_{t,s}$ by $\delta_s$, which is deterministic and has the same dimension as $\varepsilon_{t,s}$. For horizons $h = 0, 1, \cdots$, the impulse response to the arrival of shock $\delta_s$ on $\varepsilon_{t,s}$ is

$$I(h, \delta_s) = E\left(y_{t+h} \mid \delta_s, \Omega_{t-1}\right) - E\left(y_{t+h} \mid \Omega_{t-1}\right) = \Gamma_h\, S_N^{-1}(\Psi_0)\, S_N^{-1}(\Lambda)\, E\left(\varepsilon_t \mid \varepsilon_{t,s} = \delta_s, \Omega_{t-1}\right) = \Gamma_h\, S_N^{-1}(\Psi_0)\, S_N^{-1}(\Lambda)\,\tilde\delta \tag{3.8}$$

where the $\Gamma_h$ are defined recursively by $\Gamma_h = \sum_{j=1}^{p} A_j\Gamma_{h-j}$, with $\Gamma_0 = I_{MN}$ and $\Gamma_h = 0$ if $h < 0$. $\tilde\delta$ denotes an augmented vector of $\delta_s$ with the same dimension as $\varepsilon_t$, with $E(\varepsilon_{t,s} \mid \varepsilon_{t,s} = \delta_s) = \delta_s$ in the positions corresponding to the shocked disturbances and $E(\varepsilon_{t,u} \mid \varepsilon_{t,s} = \delta_s)$ in the positions corresponding to the unshocked disturbances, implied by the assumption that $\varepsilon_t \sim iid(0, I_N \otimes \Omega_0)$.
For example, when the shock is a vector of magnitude $\delta_i$ on $y_{it}$ at time $t$, the impulse response is $I(h, \delta_i) = \Gamma_h\, S_N^{-1}(\Psi_0)\, S_N^{-1}(\Lambda)\,\tilde\delta_i$, where $\tilde\delta_i$ denotes a vector of length $MN$ with 0 everywhere except that the $(Mi-M+1)$-th to $(Mi)$-th positions are equal to $\delta_i$. When the shock is a scalar of magnitude $\delta^m_i$ on $y^m_{it}$ at time $t$, the impulse response is

$$I(h, \delta^m_i) = E\left(y_{t+h} \mid \delta^m_i, \Omega_{t-1}\right) - E\left(y_{t+h} \mid \Omega_{t-1}\right) = \Gamma_h\, S_N^{-1}(\Psi_0)\, S_N^{-1}(\Lambda)\, E\left(\varepsilon_t \mid \varepsilon^m_{it} = \delta^m_i, \Omega_{t-1}\right) \tag{3.9}$$

$$= \Gamma_h\, S_N^{-1}(\Psi_0)\, S_N^{-1}(\Lambda)\,\tilde{\tilde\delta}_i \tag{3.10}$$

where (3.10) is derived from (3.9) assuming the errors are normally distributed. $\tilde{\tilde\delta}_i$ denotes a vector of length $MN$ with 0 everywhere except that the $(Mi-M+1)$-th to $(Mi)$-th positions are equal to $\Omega_0\, e_{M,m}\,\delta^m_i/\omega_{0,mm}$, where $e_{M,m}$ denotes a vector of length $M$ with 0 everywhere except for the $m$-th element, which is 1. Complete derivations of (3.8) to (3.10) are in the appendix.

Lastly, we consider a hypothetical positive flu shock to California and use impulse response analysis to trace out its dynamic diffusion across the United States. The generalized impulse response function (GIRF) is defined in (3.8), and the estimated GIRF based on the QML estimated model parameters is $\hat{I}(h, \delta_i) = \hat\Gamma_h\, S_N^{-1}(\hat\Psi_0)\, S_N^{-1}(\hat\Lambda)\,\tilde\delta_i$, with $\hat\Gamma_h$ satisfying $\hat\Gamma_h = \sum_{j=1}^{p}\hat{A}_j\hat\Gamma_{h-j}$, $\hat\Gamma_0 = I_{MN}$, and $\hat\Gamma_h = 0$ if $h < 0$.

Due to the comovement of Google Flu Trends and CDC ILI, we consider a $2 \times 1$ California flu shock vector whose magnitude is set equal to 1 unit of the estimated standard deviation of $\varepsilon_{it} = (\varepsilon^g_{it}, \varepsilon^c_{it})'$, which is $(\sqrt{\hat\omega^{gg}}, \sqrt{\hat\omega^{cc}})'$. Therefore, we set $\delta_i = (\sqrt{\hat\omega^{gg}}, \sqrt{\hat\omega^{cc}})'$, where $i$ indicates the location of California, and $\tilde\delta_i$ denotes a vector of length $MN$ with 0 everywhere except that the $(Mi-M+1)$-th to $(Mi)$-th positions are equal to $\delta_i$.
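The impulse response machinery above reduces to the recursion $\Gamma_0 = I$, $\Gamma_h = \sum_{j=1}^{p} A_j\Gamma_{h-j}$, followed by multiplication with the spatial transformation and the shock vector. A minimal sketch follows, where the small lag matrices are hypothetical stand-ins (not estimates) and `B` stands in for $S_N^{-1}(\Psi_0)\, S_N^{-1}(\Lambda)$.

```python
# Sketch of the Gamma_h recursion and the impulse response
# I(h, delta) = Gamma_h @ B @ delta_vec, for a 2-variable system with
# p = 2 lags. A1, A2, and B below are hypothetical illustrative values.
import numpy as np

def gammas(A_list, H):
    """Return [Gamma_0, ..., Gamma_H] with Gamma_h = sum_j A_j Gamma_{h-j}."""
    n = A_list[0].shape[0]
    G = [np.eye(n)]
    for h in range(1, H + 1):
        # Terms with h - j < 0 are dropped, since Gamma_h = 0 for h < 0.
        Gh = sum(A_list[j] @ G[h - 1 - j]
                 for j in range(len(A_list)) if h - 1 - j >= 0)
        G.append(Gh)
    return G

A1 = np.array([[0.5, 0.1], [0.2, 0.4]])   # hypothetical lag-1 matrix
A2 = np.array([[0.1, 0.0], [0.0, 0.1]])   # hypothetical lag-2 matrix
B = np.eye(2)                              # stand-in for the spatial terms
delta = np.array([1.0, 0.0])               # unit shock to the first series

G = gammas([A1, A2], 4)
irf = [g @ B @ delta for g in G]
print(np.round(irf[1], 3))   # response one period after the shock → [0.5 0.2]
```

Because the lag matrices have spectral radius below one, the responses shrink toward zero as $h$ grows, mirroring the gradual absorption of the flu shock described in the text.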
Figure C.6 plots the impulse responses to a positive unit California flu shock for California and the 9 other states that respond the most to the California flu shock immediately at h = 0. The axes are ordered by the states' responses to the California flu shock at h = 0, measured by the values of the GIRF. For example, the top 10 states that respond the most to California flu shocks based on the SPVAR(4) with SAR residuals, measured by Google Flu Trends ILI, are California (4.817%), Oregon (2.039%), Nevada (1.865%), Arizona (1.586%), Washington (1.391%), Idaho (1.084%), Utah (0.945%), New Mexico (0.672%), Colorado (0.603%), and Wyoming (0.558%), where the values in parentheses are the estimated GIRF in period h = 0. The top 10 states that respond the most to California flu shocks based on the SPVAR(4) with SAR residuals, measured by CDC ILI levels, are California (1150.244%), Oregon (190.225%), Nevada (167.791%), Arizona (149.201%), Washington (62.175%), Idaho (42.243%), Utah (36.467%), New Mexico (22.616%), Colorado (17.489%), and Wyoming (9.699%), with the values in parentheses being the estimated GIRF in period h = 0. In general, the results show that states near California respond more than remote states to a unit California flu shock, and that it takes a few weeks for the flu shocks to be absorbed.

3.5 Conclusions

Influenza, commonly known as flu, can easily and rapidly transmit from human to human through contact. Therefore, it remains an enormous threat to public health. Besides seasonal flu, novel influenza mutations which originate in animals can also develop into influenza that is communicable among humans. Flu outbreaks have disastrous economic and social consequences, and surveillance of flu activity is of crucial importance for countries worldwide. Traditional flu surveillance systems mainly rely on virologic and clinic data and are limited by not being timely.
For example, in the United States, the Outpatient Influenza-like Illness Surveillance Network (ILINet) publicizes weekly flu activity statistics with a few days' lag, whereas a local flu outbreak could easily develop into a regional or even larger scale flu epidemic. To fill the gap by providing timely flu activity vigilance signals, in 2008 Google launched Google Flu Trends, which publicizes real-time estimated influenza-like illness (ILI) incidence rate data online using Internet search query submissions about ILI-related topics. Ginsberg et al. (2009) show that Google Flu Trends is able to provide consistent ILI estimates 1-2 weeks ahead of the publication of the official flu statistics by the US Influenza Sentinel Provider Surveillance Network. However, Google Flu Trends estimated ILI incidence rates can overestimate or underestimate the actual ILI incidence rate and therefore need to be validated against the official ILI statistics. More importantly, Google Flu Trends still cannot serve the goal of forecasting.

In this chapter, we use a spatial panel VAR, SPVAR(p), to estimate and forecast influenza-like illness (ILI) incidence rates in the United States. Our proposed approach is expected to improve over the original Google Flu Trends estimated ILI incidence rates for at least two reasons. First, the SPVAR improves over the state level Google Flu Trends ILI incidence rates by using additional contemporaneous and historical information from the neighbouring states. Second, the SPVAR improves over Google Flu Trends ILI by directly utilizing the comovement between Google Flu Trends and the official ILI data from the US CDC. We find that our proposed SPVAR(p) can achieve very satisfactory in-sample estimates and out-of-sample forecasts based on Google Flu Trends and CDC ILI incidence rate data, using state population density and weekly temperature and precipitation as predictors.
A sequential likelihood ratio (LR) test, as well as the AIC, SIC, and HQC, are used to select the lag orders for the SPVAR(p) model. We examine the robustness of the results by varying the sizes of the training and forecasting samples, and we also examine the effects of using a workflow based spatial weight matrix versus the geographical contiguity matrix. We find that the satisfactory performance of the SPVAR(p) in both estimation and forecasting is insensitive to the choice of sample split or the choice of spatial weight matrix. In the end, we derive the generalized impulse response function associated with the SPVAR(p), and we then examine a hypothetical flu shock to California using impulse response analysis. The results of the impulse response analysis show that it takes a few weeks or even a few months for a unit flu shock to be absorbed, and that neighbouring states are affected to degrees ordered by their adjacency to California. Future work may use regional level or city level flu incidence rate data in the United States and consider other spatial model specifications, such as higher order spatial autoregression or spatial moving averages. Future work can also use the SPVAR(p) model to estimate and forecast flu incidence rates in other countries globally.

Reference List

Aguilera AM, Escabias M, Valderrama MJ (2008) Forecasting binary longitudinal data by a functional PC-ARIMA model. Computational Statistics & Data Analysis 52:3187–3197.

Ahn SC, Horenstein AR (2013) Eigenvalue ratio test for the number of factors. Econometrica 81.

Akaike H (1970) Statistical predictor identification. Annals of the Institute of Statistical Mathematics 22:203–217.

Akaike H (1973) Information theory and an extension of the maximum likelihood principle. In Petrov BN, Csáki F, editors, 2nd International Symposium on Information Theory, pp. 267–281. Académiai Kiadó, Budapest.

Akaike H (1974) A new look at the statistical model identification.
IEEE Transactions on Automatic Control AC-19:716–723.

Alvarez J, Arellano M (2003) The time series and cross-section asymptotics of dynamic panel data estimators. Econometrica 71:1121–1159.

Amengual D, Watson MW (2007) Consistent estimation of the number of dynamic factors in a large n and t panel. Journal of Business & Economic Statistics 25:91–96.

Anderson TW, Hsiao C (1981) Estimation of dynamic models with error components. Journal of the American Statistical Association 76:598–606.

Anderson TW, Hsiao C (1982) Formulation and estimation of dynamic models using panel data. Journal of Econometrics 18:47–82.

Anselin L (1988) Spatial Econometrics: Methods and Models. Springer.

Anselin L (2010) Thirty years of spatial econometrics. Papers in Regional Science 89:3–25.

Arellano M, Bond S (1991) Some tests of specification for panel data: Monte Carlo evidence and an application to employment equations. Review of Economic Studies 58:277–97.

Arellano M, Bonhomme S (2016) Nonlinear panel data estimation via quantile regressions. The Econometrics Journal.

Arellano M, Bonhomme S (2011) Nonlinear panel data analysis. Annual Review of Economics 3:395–424.

Azzalini A, Capitanio A (2014) The Skew-Normal and Related Families. Cambridge University Press.

Bai J, Ng S (2008) Large Dimensional Factor Analysis. Foundations and Trends in Econometrics. Lightning Source Incorporated.

Bai J (2003) Inferential theory for factor models of large dimensions. Econometrica 71:135–171.

Bai J (2013) Fixed-effects dynamic panel models, a factor analytical method. Econometrica 81:285–314.

Bai J, Carrion-i Silvestre JL (2012) Testing panel cointegration with unobservable dynamic common factors that are correlated with the regressors. The Econometrics Journal.

Bai J, Li K (2012) Statistical analysis of factor models of high dimension. Annals of Statistics 40:436–465.

Bai J, Liao Y (2016) Efficient estimation of approximate factor models via penalized maximum likelihood.
Journal of Econometrics 191:1 – 18. Bai J, Ng S (2002) Determining the number of factors in approximate factor models. Econometrica 70:191–221. Bai J, Ng S (2006) Evaluating latent and observed factors in macroeconomics and finance. Journal of Econometrics 131:507–537. Bai J, Ng S (2007) Determining the number of primitive shocks in factor models. Journal of Business & Economic Statistics 25:52–60. Bai J, Shi S (2011) Estimating high dimensional covariance matrices and its appli- cations. Annals of Economics and Finance 12:199–215. Baltagi BH (2008) Forecasting with panel data. Journal of Forecasting 27:153–173. 108 Baltagi BH, Bresson G, Pirotte A (2012) Forecasting with spatial panel data. Computational Statistics & Data Analysis 56:3381–3397. Baltagi BH, Egger P, Pfaffermayr M (2013) A generalized spatial panel data model with random effects. Econometric Reviews 32:650–685. Baltagi BH, Fingleton B, Pirotte A (2014) Estimating and Forecasting with a Dynamic Spatial Panel Data Model. Oxford Bulletin of Economics and Statis- tics 76:112–138. Baltagi BH, Kao C Nonstationary panels, cointegration in panels and dynamic panels: A survey. Baltagi BH, Pirotte A (2010) Panel data inference under spatial dependence. Economic Modelling 27:1368 – 1381. Baltagi BH, Song SH, Jung BC, Koh W (2007) Testing for serial correlation, spatial autocorrelation and random effects using panel data. Journal of Econo- metrics 140:5 – 51. Baltagi BH, Song SH, Koh W (2003) Testing panel data regression models with spatial error correlation. Journal of Econometrics 117:123 – 150. Beenstock M, Felsenstein D (2007) Spatial vector autoregressions. Spatial Eco- nomic Analysis 2:167–196. BernankeBS,BoivinJ(2003) Monetarypolicyinadata-richenvironment. Journal of Monetary Economics 50:525–546. Binder M, Hsiao C, Pesaran MH (2005) Estimation and inference in short panel vector autoregressions with unit roots and cointegration. Econometric The- ory 21:795–837. 
Boivin J, Ng S (2006) Are more data always better for factor analysis? Journal of Econometrics 132:169 – 194. Bradley JV (1968) Distribution-free statistical tests Cambridge University Press, New York. Bresson G, Baltagi BH, Pirotte A (2007) Panel unit root tests and spatial depen- dence. Journal of Applied Econometrics 22:339–360. BrierGW(1950) Verificationofforecastsexpressedintermsofprobability. Monthly Weather Review 75:1–3. Canay IA (2011) A simple approach to quantile regression for panel data. The Econometrics Journal 14:368–386. 109 Canova F, Ciccarelli M Panel Vector Autoregressive Models: A Survey, chapter 6, pp. 205–246. Canova F, Ciccarelli M (2004) Forecasting and turning point predictions in a bayesian panel VAR model. Journal of Econometrics 120:327–359. CanovaF,CiccarelliM(2009) EstimatingmulticountryVARmodels. International Economic Review 50:929–959. Cao B, Sun Y (2011) Asymptotic distribution of impulse response functions in short panel vector autoregressions. Journal of Econometrics 163:127–143. Chang Y, Kim H, Park JY (2010) A reexamination of fama-french regressions using high frequency panels. Working Paper . Chow SM, Zhang G (2008) Continuous-time modelling of irregularly spaced panel data using a cubic spline model. Statistica Neerlandica 62:131–154. ClatworthyMA,PeelDA,PopePF(2012) Areanalysts’lossfunctionsasymmetric? Journal of Forecasting 31:736–756. Coakley J, Fuertes AM, Smith R (2002) A principal components approach to cross-section dependence in panels. Working Paper . Connor G, Korajczyk R (1988) Risk and return in an equilibrium apt: Application of a new test methodology. Journal of Financial Economics 21:255–289. Connor G, Korajczyk RA (1986) Performance measurement with the arbitrage pricing theory : A new framework for analysis. Journal of Financial Eco- nomics 15:373–394. 
Cook S, Conrad C, Fowlkes AL, Mohebbi MH (2011) Assessing google flu trends performance in the united states during the 2009 influenza virus A (H1N1) pandemic. PLoS ONE 6:e23610. Cooley D, Cisewski J, Erhardt RJ, Jeon S, Mannshardt E, Omolo BO, Sun Y (2012) Asurveyofspatialextremes: Measuringspatialdependenceandmodeling spatial effects. REVSTAT 10:135–165. CowlingBJ,WongIOL,HoLM,RileyS,LeungGM(2006) Methodsformonitoring influenzasurveillancedata. International Journal of Epidemiology35:1314–1321. Diebold FX, Lopez JA (1996) Forecast evaluation and combination, Vol. 14, pp. 241–268 Elsevier Science. Diebold FX, Mariano RS (1995) Comparing predictive accuracy. Journal of Busi- ness & Economic Statistics 13:253–63. 110 Doz C, Giannone D, Reichlin L (2012) A quasiâĂŞmaximum likelihood approach for large, approximate dynamic factor models. The Review of Economics and Statistics 94:1014–1024. DriscollJC,KraayA(1998) Consistentcovariancematrixestimationwithspatially dependent panel data. The Review of Economics and Statistics 80:549–560. Dugas AF, Hsieh YH, Levin SR, Pines JM, Mareiniss DP, Mohareb A, Gaydos CA, Perl TM, Rothman RE (2012) Google flu trends: Correlation with emer- gency department influenza rates and crowding metrics clinical infectious dis- eases. Clinical Infectious Diseases 54:463–469. Dugas AF, Jalalpour M, Gel Y, Levin S, Torcaso F, Igusa T, Rothman RE (2013) Influenza forecasting with google flu trends. PLoS ONE 8:e56176. Dukic V, Lopes HF, Polson NG (2012) Tracking epidemics with google flu trends data and a state-space seir model. Journal of the American Statistical Associa- tion 107:1410–1426. Dureau J, Kalogeropoulos K, Baguelin M (2013) Capturing the time-varying drivers of an epidemic using stochastic dynamical systems. Biostatistics 14:541. ElhorstJ(2012) Dynamicspatialpanels: models, methods, andinferences. Journal of Geographical Systems 14:5–28. 
Elhorst JP (2005) Unconditional maximum likelihood estimation of linear and log-linear dynamic models for spatial panels. Geographical Analysis 37:85–106. Elhorst J (2008) Serial and spatial error correlation. Economics Let- ters 100:422–424. Elliott G, Komunjer I, Timmermann A (2005) Estimation and testing of forecast rationality under flexible loss. Review of Economic Studies 72:1107–1125. Engle RF, Manganelli S (2004) Caviar: Conditional autoregressive value at risk by regression quantiles. Journal of Business & Economic Statistics 22:367–381. Feng Q (2011) L2 penalised quantile regression for dynamic random effects panel data models. Working Paper . Forni M, Hallin M, Lippi M, Reichlin L (2001) Coincident and leading indicators for the euro area. The Economic Journal 111:62–85. Fox E, Dunson D (2015) Bayesian nonparametric covariance regression. Journal of Machine Learning Research 16:2501–2542. 111 Frees EW (2004) Longitudinal and Panel Data: Analysis and Applications in the Social Sciences Cambridge University Press, New York. Frees EW, Miller TW (2004) Sales forecasting using longitudinal data models. International Journal of Forecasting 20:99–114. Galvao AF (2011) Quantile regression for dynamic panel data with fixed effects. Journal of Econometrics 164:142–157. Galvao AF, Montes-Rojas GV (2010) Penalized quantile regression for dynamic panel data. Journal of Statistical Planning and Inference 140:3476 – 3497. Gänsler P, Stute W (1977) Wahrscheinlichkeitstheorie Springer Verlag, New York. Ghysels E, Sinko A, Valkanov R (2007) MIDAS regressions: Further results and new directions. Econometric Reviews 26:53–90. Ghysels E, Valkanov R (2012) Forecasting Volatility with MIDAS, pp. 383–401 John Wiley & Sons, Inc. Ginsberg J, Mohebbi MH, Patel RS, Brammer L, Smolinski MS, Brilliant L (2009) Detecting influenza epidemics using search engine query data. Nature 457:1012–1014. 
Goldberger A (1962) Best Linear Unbiased Prediction in the Generalized Linear Regression Model. Journal of the American Statistical Association 57:369–375. Gourieroux C, Jasiak J (2005) Nonlinear innovations and impulse responses with application to var sensitivity. Annales d’Economie et de Statistique 78:1–31. Greenaway-McGrevy R, Han C, Sul D (2012) Standardization and estimation of the number of factors for panel data. Journal of Economic Theory and Econo- metrics 23:79–88. Greenway-McGrevy R (2012) Forecasting with panel data vector autoregressions under misspecification. Working Paper . Greenway-McGrevy R (2013a) Asymptotically efficient forecast selection for panel data vector autogressions. Working Paper . Greenway-McGrevy R (2013b) Multistep prediction of panel vector autoregressive processes. Econometric Theory 29:699–734. HallinM,LiškaR(2007) Determiningthenumberoffactorsinthegeneraldynamic factor model. Journal of the American Statistical Association 102:603–617. 112 Han C, Phillips PC, Sul D (2016) Lag length selection in panel autoregression. Econometric Reviews pp. 1–16. Hannan EJ, Quinn BG (1979) The determination of the order of an autoregression. Journal of the RoyalStatistical Society B41:190?95. Hansen BE (2008) Least-squares forecast averaging. Journal of Economet- rics 146:342–350. Hjalmarsson E (2006) Predictive regressions with panel data. Working Paper . Hooten MB, Anderson J, Waller LA (2010) Assessing north american influenza dynamics with a statistical SIRS model. Spatial and Spatio-temporal Epidemi- ology 1:177 – 185. Horn R, Johnson C (1985) Matrix Analysis Cambridge University Press. Hsiao C (2003) Analysis of Panel Data Cambridge University Press. Hu Y, Schennach SM (2008) Instrumental variable treatment of nonclassical mea- surement error models. Econometrica 76:195–216. Hu Y, Shum M (2012) Nonparametric identification of dynamic models with unob- served state variables. Journal of Econometrics 171:32 – 44. 
Hyndman RJ, Koehler AB (2006) Another look at measures of forecast accuracy. International Journal of Forecasting 22:679–688. Im KS, Pesaran MH, Shin Y (2003) Testing for unit roots in heterogeneous panels. Journal of Econometrics 115:53–74. Ing CK (2003) Multistep prediction in autoregressive processes. Econometric Theory 19:254–279. Issler JV, Lima LR (2009) A panel data approach to economic forecasting: The bias-corrected average forecast. Journal of Econometrics 152:153 – 164. Jeremy U. Espino WRH MMW (2003) Telephone triage: A timely data source for surveillance of influenza-like diseases. AMIA Annual Symposium Proceed- ings p. 215?19. Jordà O (2005) Estimation and inference of impulse responses by local projections. American Economic Review 95:161–182. Jungbacker B, Koopman S, van der Wel M (2011) Maximum likelihood estimation for dynamic factor models with missing data. Journal of Economic Dynamics and Control 35:1358–1368. 113 Kapetanios G (2010) A testing procedure for determining the number of factors in approximate factor models with large datasets. Journal of Business & Economic Statistics 28:397–409. Kapoor M, Kelejian HH, Prucha IR (2007) Panel data models with spatially correlated error components. Journal of Econometrics 140:97–130. Kelejian HH, Prucha IR (1998) A generalized spatial two-stage least squares pro- cedure for estimating a spatial autoregressive model with autoregressive distur- bances. The Journal of Real Estate Finance and Economics 17:99–121. Kelejian HH, Prucha IR (1999) A Generalized Moments Estimator for the Autoregressive Parameter in a Spatial Model. International Economic Review 40:509–533. Kelejian HH, Prucha IR (2001) On the asymptotic distribution of the moran i test statistic with applications. Journal of Econometrics 104:219 – 257. Kelejian HH, Prucha IR (2002) 2SLS and OLS in a spatial autoregressive model with equal spatial weights. Regional Science and Urban Economics 32:691–707. 
Kelejian HH, Prucha IR, Yuzefovich Y (2006) Estimation problems in models with spatial weighting matrices which have blocks of equal elements. Journal of Regional Science 46:507–515. Khalaf L, Kichian M, Saunders C, Voia M (2014) Dynamic panels with MIDAS covariates: Estimation and fit. Working Paper . Kiviet JF (1995) On bias, inconsistency, and efficiency of various estimators in dynamic panel data models. Journal of Econometrics 68:53–78. Koop G, Pesaran H, Potter S (1996) Impulse response analysis in nonlinear mul- tivariate models. Journal of Econometrics 74:119–147. Korniotis GM (2010) Estimating panel models with internal and external habit formation. Journal of Business & Economic Statistics 28:145–158. Kouassi E, Sango J, Bosson Brou JM, Teubissi FN, Kymn KO (2011) Prediction from the regression model with two-way error components. Journal of Forecast- ing 30:541–564. Lee K, Agrawal A, Choudhary A (2013a) Real-time digital flu surveillance using twitter data In Proceedings of the SDM Workshop on Data Mining for Medicine and Healthcare (DMMH). 114 Lee K, Agrawal A, Choudhary A (2013b) Real-time disease surveillance using twitter data: Demonstration on flu and cancer In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data min- ing (KDD), pp. 1474–1477. ACM. Lee Lf (2004) Asymptotic distributions of quasi-maximum likelihood estimators for spatial autoregressive models. Econometrica 72:1899–1925. Lee Lf, Yu J (2010a) Estimation of spatial autoregressive panel data models with fixed effects. Journal of Econometrics 154:165 – 185. Lee Lf, Yu J (2010b) Estimation of unit root spatial dynamic panel data models. Econometric Theory 26:1332–1362. Lee Lf, Yu J (2010c) Some recent developments in spatial panel data models. Regional Science and Urban Economics 40:255–271. Lee LF, Yu J (2010d) A spatial dynamic panel data model with both time and individual fixed effects. Econometric Theory 26:564–597. 
Lee LF, Yu J (2011) A unified estimation approach for spatial dynamic panel data models: Stability, spatial cointegration and explosive roots. Mandbook on Empirical Economics and Finance. London: Chapman & Hall/CRC . Lee Lf, Yu J (2012) Qml estimation of spatial dynamic panel data models with time varying spatial weights matrices. Spatial Economic Analysis 7:31–74. Lee TH (2007) Loss functions in time series forecasting. Working Paper . Lee Y, Phillips P (2015) Model selection in the presence of incidental parameters. Journal of Econometrics 188:474 – 489 Heterogeneity in Panel Data and in Nonparametric Analysis in honor of Professor Cheng Hsiao. Liu TY, Sanders JL, Tsui FC, Espino JU, Dato VM, Suyama J (2013) Association of over-the-counter pharmaceutical sales with influenza-like-illnesses to patient volume in an urgent care setting. PLoS ONE 8:e59273. Lütkepohl H (2005) New Introduction to Multiple Time Series Analysis Springer, New York. Magruder SF (2003) Evaluation of over-the-counter pharmaceutical sales as a possibleearlywarningindicatorofhumandisease. Johns Hopkins APL Technical Digest 24:349–353. Manner H, Reznikova O (2012) A survey on time-varying copulas: Specification, simulations, and application. Econometric Reviews 31:654–687. 115 Martinez-Beneito MA, Conesa D, Lopez-Quilez A, Lopez-Maside A (2008) Bayesian markov switching models for the early detection of influenza epidemics. Statistics in Medicine 27:4455–4468. Mercurio MA (2000) Econometric Analysis of Forecasts in Dynamic and Panel Data Models Ph.D. diss., University of California, Riverside. Mitchell J, Kapetanios G, Shin Y (2014) A nonlinear panel data model of cross- sectional dependence. Journal of Econometrics 179:134 – 157. MolinariNAM,Ortega-SanchezIR,MessonnierML,ThompsonWW,WortleyPM, Weintraub E, Bridges CB (2007) The annual impact of seasonal influenza in the us: Measuring disease burden and costs. Vaccine 25:5086 – 5096. 
Mutl J (2006) Dynamic Panel Data Models with Spatially Correlated Disturbances Ph.D. diss., University of Maryland, College Park. Mutl J (2009) Panel VAR models with spatial dependence. Working Paper . Nickell SJ (1981) Biases in dynamic models with fixed effects. Economet- rica 49:1417–26. Onatski A (2009) A formal statistical test for the number of factors in the approx- imate factor models. Econometrica 77:1447–1479. Onatski A (2010) Determining the number of factors from empirical distribution of eigenvalues. The Review of Economics and Statistics 92:1004–1016. Ortiz JR, Zhou H, Shay DK, Neuzil KM, Fowlkes AL, Goss CH (2011) Monitoring influenza activity in the united states: A comparison of traditional surveillance systems with google flu trends. PLoS ONE 6:e18687. Oud JHL, Singer H (2008) Continuous time modeling of panel data: Sem versus filter techniques. Statistica Neerlandica 62:4–28. Oud JH, Folmer H, Patuelli R, Nijkamp P (2012) Continuous-time modelling with spatial dependence. Geographical Analysis pp. 29–46. Paelinck J, Klaassen L (1979) Spatial Econometrics Saxon House. Parent O, LeSage JP (2012) Spatial dynamic panel data models with random effects. Regional Science and Urban Economics 42:727–738. Patton AJ (2012) A review of copula models for economic time series. J. Multivar. Anal. 110:4–18. 116 Patwardhan A, Bilkovski R (2012) Comparison: Flu prescription sales data from a retail pharmacy in the us with google flu trends and us ilinet (cdc) data as flu activity indicator. PLoS ONE 7:e43611. PavlinJA,MostashariF,KortepeterMG,HynesNA,ChotaniRA,MikolYB,Ryan MAK, Neville JS, Gantz DT, Writer JV, Florance JE, Culpepper RC, Henretig FM, Kelley PW (2003) Innovative surveillance methods for rapid detection of disease outbreaks and bioterrorism: Results of an interagency workshop on health indicator surveillance. American journal of public health 93:1230–1235. 
Pesaran HH, Shin Y (1998) Generalized impulse response analysis in linear multi- variate models. Economics Letters 58:17–29. Phillips PCB, Sul D (2003) Dynamic panel estimation and homogeneity testing under cross section dependence. Econometrics Journal 6:217–259. Pötscher B PI (1997) Dynamic Nonlinear Econometric Models,Asymptotic Theory. Springe, New York. PotterC(2001) Ahistoryofinfluenza. Journal of Applied Microbiology91:572–579. Potter S (2000) Nonlinear impulse response functions. Journal of Economic Dynamics & Control 24:1425–1446. Quinn B (1980) Order determination for a multivariate autoregression. Journal ofthe Royal Statistical Society B42:182?85. Rath T, Carreras M, Sebastiani P (2003) Automated detection of influenza epi- demics with hidden markov models In Advances in Intelligent Data Analysis V, Vol. 2810 of Lecture Notes in Computer Science, pp. 521–532. Springer Berlin Heidelberg. Schwarz G (1978) Estimating the dimension of a model. Annals of Statis- tics 6:461?64. Shaman J, Karspeck A (2012) Forecasting seasonal outbreaks of influenza. PNAS 109. Shibata R (1980) Asymptotically efficient selection of the order of the model for estimating parameters of a linear process. Annals of Statistics 8:147–164. Sims C (1980) Macroeconomics and reality. Econometrica 48:1–48. Smith RD (2006) Responding to global infectious disease outbreaks: Lessons from {SARS} on the role of risk perception, communication and management. Social Science & Medicine 63:3113 – 3123. 117 Spitzer JJ, Baillie RT (1983) Small-sample properties of predictions from the regression model with autoregressive errors. Journal of the American Statistical Association 78:258–263. Stekler HO (1988) Who forecasts better? : Herman o. stekler, journal of business and economic statistics 5 (1987) 155-158. International Journal of Forecast- ing 4:631–631. Stock JH, Watson MW (2002) Macroeconomic forecasting using diffusion indexes. Journal of Business & Economic Statistics 20:147–62. 
Stock JH, Watson MW (2005) Implications of dynamic factor models for var analysis. NBER Working Paper . Su L, Yang Z (2015) QML estimation of dynamic panel data models with spatial errors. Journal of Econometrics 185:230–258. Viboud C, Boëlle PY, Carrat F, Valleron AJ, Flahault A (2003) Prediction of the spread of influenza epidemics by the method of analogues. American Journal of Epidemiology 158:996–1006. Wang W, Lee Lf (2013) Estimation of spatial panel data models with randomly missing data in the dependent variable. Regional Science and Urban Economics . West KD (1996) Asymptotic inference about predictive ability. Economet- rica 64:1067–84. Wu S, Müller HG (2011) Response-adaptive regression for longitudinal data. Bio- metrics 67:852–860. Yao F, Müller HG, Wang JL (2005a) Functional data analysis for sparse longitu- dinal data. Journal of the American Statistical Association 100:577–590. Yao F, Müller HG, Wang JL (2005b) Functional linear regression analysis for longitudinal data. Annals of Statistics 33:2873–2903. YihK,TeatesK,AbramsA,KleinmanK,KulldorffM,PinnerR,HarmonR,Wang S, Platt R (2009) Telephone triage service data for detection of influenza-like illness. PLoS ONE 4:e5260. Yu J, de Jong R, fei Lee L (2012) Estimation for spatial dynamic panel data with fixed effects: The case of spatial cointegration. Journal of Economet- rics 167:16 – 37. Yu J, de Jong R, Lee Lf (2008) Quasi-maximum likelihood estimators for spatial dynamic panel data with fixed effects when both n and t are large. Journal of Econometrics 146:118–134. 118 Zirogiannis N, Tripodis Y (2013) A generalized dynamic factor model for panel data: Estimation with a two-cycle conditional expectation-maximization algo- rithm. Working Paper . 
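The geographical contiguity weight matrix mentioned above can be illustrated with a minimal sketch. The adjacency pattern below is hypothetical and covers only four states for illustration (it is not the full matrix used in the chapter): a binary border indicator is row-normalized so that each row sums to one, making Wy a neighborhood average of flu incidence.

```python
import numpy as np

# Hypothetical 4-state adjacency (illustrative only; the chapter's actual
# analysis uses the full set of US states): entry (i, j) is 1 when state i
# shares a border with state j, and the diagonal is 0 by convention.
states = ["CA", "OR", "NV", "AZ"]
adjacency = np.array([
    [0, 1, 1, 1],  # CA borders OR, NV, AZ
    [1, 0, 1, 0],  # OR borders CA, NV
    [1, 1, 0, 1],  # NV borders CA, OR, AZ
    [1, 0, 1, 0],  # AZ borders CA, NV
], dtype=float)

# Row-normalize so each row sums to one; W @ y is then, for each state,
# the average flu incidence of its neighbors.
W = adjacency / adjacency.sum(axis=1, keepdims=True)
```

Row normalization is the standard convention here because it keeps the spatial lag on the same scale as the dependent variable.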
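Lag-order selection by information criteria can be sketched as follows. This is an illustrative per-equation OLS fit of a plain VAR on simulated data, not the dissertation's SPVAR estimator: AIC, SIC, and HQC are computed from the log determinant of the residual covariance, and the order minimizing each criterion is selected.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a stationary 3-variable VAR(1) (illustrative data only).
T, M = 300, 3
A1 = np.array([[0.5, 0.1, 0.0],
               [0.0, 0.4, 0.1],
               [0.1, 0.0, 0.3]])
y = np.zeros((T, M))
for t in range(1, T):
    y[t] = A1 @ y[t - 1] + rng.standard_normal(M)

def var_criteria(y, p):
    """Fit a VAR(p) by OLS with intercept; return (AIC, SIC, HQC)."""
    T_all, M = y.shape
    T_eff = T_all - p
    Y = y[p:]
    # Stack lagged regressors y_{t-1}, ..., y_{t-p} row by row.
    X = np.hstack([y[p - j: T_all - j] for j in range(1, p + 1)])
    X = np.hstack([np.ones((T_eff, 1)), X])
    B = np.linalg.lstsq(X, Y, rcond=None)[0]
    resid = Y - X @ B
    sigma = resid.T @ resid / T_eff
    logdet = np.linalg.slogdet(sigma)[1]
    k = M * (M * p + 1)  # number of estimated mean parameters
    aic = logdet + 2 * k / T_eff
    sic = logdet + k * np.log(T_eff) / T_eff
    hqc = logdet + 2 * k * np.log(np.log(T_eff)) / T_eff
    return aic, sic, hqc

crits = {p: var_criteria(y, p) for p in range(1, 5)}
p_sic = min(crits, key=lambda p: crits[p][1])  # SIC-selected lag order
```

The sequential LR test used in the chapter compares nested fits in the same spirit, testing whether the coefficients at lag p are jointly zero before dropping to order p − 1.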
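The generalized impulse response analysis can be illustrated for a small reduced-form VAR(1); the coefficient and covariance values below are made up for the sketch and are not estimates from the chapter. In the Pesaran–Shin construction, the response at horizon h to a one-standard-deviation shock in variable j is Φ_h Σ e_j / √σ_jj, which is invariant to variable ordering.

```python
import numpy as np

# Illustrative reduced-form VAR(1) coefficients and error covariance.
A = np.array([[0.5, 0.2],
              [0.1, 0.4]])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])

def girf(A, Sigma, shock_var, horizons):
    """GI(h) = Phi_h Sigma e_j / sqrt(sigma_jj), with Phi_h = A^h for a VAR(1)."""
    e = np.zeros(A.shape[0])
    e[shock_var] = 1.0
    Phi = np.eye(A.shape[0])
    out = []
    for _ in range(horizons):
        out.append(Phi @ Sigma @ e / np.sqrt(Sigma[shock_var, shock_var]))
        Phi = A @ Phi
    return np.array(out)

# Responses of both variables over 12 periods to a one-s.d. shock in variable 0.
resp = girf(A, Sigma, shock_var=0, horizons=12)
```

With a stable coefficient matrix the responses die out geometrically, which is the pattern behind the finding that a unit flu shock is absorbed over a few weeks or months.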
Appendix A

Appendix to Chapter 2

A.1 Notations and Expressions

A.1.1 Notations

\[
\Sigma_u = S_N^{-1}(\Lambda)\left(I_N\otimes\Omega\right)S_N^{-1}(\Lambda)', \qquad \Sigma_u^0 = \Sigma_u(\theta^0)
\]
\[
\Sigma_y = S_N^{-1}(\Psi_0)\,\Sigma_u\,S_N^{-1}(\Psi_0)', \qquad \Sigma_y^0 = \Sigma_y(\theta^0)
\]
\[
S_N(\Lambda) = I_{MN} - (I_N\otimes\Lambda)W, \qquad S_N(\Psi_0) = I_{MN} - (I_N\otimes\Psi_0)W
\]

$\|P_n\|_\infty \equiv \sup_{1\le i\le n_1}\sum_{j=1}^{n_2}|P_{ij,n}|$ denotes the row-sum norm of the $n_1\times n_2$ matrix $P_n$, and $\|P_n\|_1 \equiv \sup_{1\le j\le n_2}\sum_{i=1}^{n_1}|P_{ij,n}|$ denotes its column-sum norm.

$\operatorname{vec}(X)$ denotes the column vectorization of an arbitrary matrix $X$, and $\operatorname{vech}(X)$ denotes the column vectorization of the elements on and below the principal diagonal of a symmetric matrix $X$.

For a matrix function $F(X):\mathbb{R}^{m\times n}\to\mathbb{R}^{k\times l}$, denote the $kl\times mn$ Jacobian matrix of $F$ with respect to $X$ by $\mathcal{D}_X F = \partial\operatorname{vec}F(X)/\partial(\operatorname{vec}X)'$, and the $mnkl\times mn$ Hessian matrix of $F$ with respect to $X$ by $\mathcal{H}_X F = \mathcal{D}_X^2 F = \mathcal{D}_X(\mathcal{D}_X F)'$.

A.1.2 Expressions

From (2.7) and (2.8),
\[
\mathbf{Y}_t = \mathbf{A}\,\mathbf{Y}_{t-1} + \mathbf{E}_t,\qquad
\mathbf{A} = \begin{pmatrix} A_1 & \cdots & A_{p-1} & A_p\\ I_{MN(p-1)} & & & 0_{MN(p-1)\times MN}\end{pmatrix},
\]
\[
\mathbf{E}_t = \begin{pmatrix} S_N^{-1}(\Psi_0^0)\alpha^0 + S_N^{-1}(\Psi_0^0)\left(I_N\otimes\beta^{0\prime}\right)X_t + S_N^{-1}(\Psi_0^0)S_N^{-1}(\Lambda^0)\varepsilon_t \\ 0_{MN(p-1)\times 1}\end{pmatrix},
\]
where $A_j = S_N^{-1}(\Psi_0^0)\left(I_N\otimes\Phi_j^0\right) + S_N^{-1}(\Psi_0^0)\left(I_N\otimes\Psi_j^0\right)W$ for $j=1,\dots,p$. With $R = \left(I_{MN}, 0_{MN\times MN(p-1)}\right)$, we get
\[
y_t = R\mathbf{Y}_t = R\sum_{h=0}^{\infty}\mathbf{A}^h\mathbf{E}_{t-h}
= \sum_{h=0}^{\infty} R\mathbf{A}^h R'\,S_N^{-1}(\Psi_0^0)\left[\alpha^0 + \left(I_N\otimes\beta^{0\prime}\right)X_{t-h} + S_N^{-1}(\Lambda^0)\varepsilon_{t-h}\right].
\]
Hence
\[
y_{t-j} = A^0 + A^x_{t-j} + \sum_{h=-\infty}^{t-j}A^\varepsilon_{t-j,h}\varepsilon_h
= A^0 + A^x_{t-j} + \sum_{h=1}^{t-j}A^\varepsilon_{t-j,h}\varepsilon_h + \sum_{h=0}^{\infty}A^\varepsilon_{t-j,-h}\varepsilon_{-h}
\tag{A.1}
\]
where
\[
A^0 = \sum_{h=0}^{\infty}R\mathbf{A}^hR'S_N^{-1}(\Psi_0^0)\,\alpha^0,\qquad
A^x_{t-j} = \sum_{h=0}^{\infty}R\mathbf{A}^hR'S_N^{-1}(\Psi_0^0)\left(I_N\otimes\beta^{0\prime}\right)X_{t-j-h},
\]
\[
A^\varepsilon_{t-j,h} = R\mathbf{A}^{t-j-h}R'S_N^{-1}(\Psi_0^0)S_N^{-1}(\Lambda^0)\ \text{ if } h=-\infty,\dots,t-j,\qquad
A^\varepsilon_{t-j,h}=0\ \text{ if } h>t-j.
\]
Averaging over $t$,
\[
\bar y_{-j} = A^0 + \bar A^x_{-j} + \sum_{h=-\infty}^{T-j}\bar A^\varepsilon_{-j,h}\varepsilon_h
\tag{A.2}
\]
where
\[
\bar A^x_{-j} = \frac1T\sum_{t=1}^{T}A^x_{t-j} = \sum_{h=0}^{\infty}R\mathbf{A}^hR'S_N^{-1}(\Psi_0^0)\left(I_N\otimes\beta^{0\prime}\right)\bar X_{-j-h},
\]
\[
\bar A^\varepsilon_{-j,h} = \frac1T\sum_{t=1}^{T}A^\varepsilon_{t-j,h} = R\left(\frac1T\sum_{t=1}^{T}\mathbf{A}^t\right)\mathbf{A}^{-j-h}R'S_N^{-1}(\Psi_0^0)S_N^{-1}(\Lambda^0),
\]
and
\[
\tilde y_{t-j} = y_{t-j}-\bar y_{-j} = \tilde A^x_{t-j} + \sum_{h=-\infty}^{T-j}\tilde A^\varepsilon_{t-j,h}\varepsilon_h
\tag{A.3}
\]
where $\tilde A^x_{t-j} = A^x_{t-j}-\bar A^x_{-j}$ and $\tilde A^\varepsilon_{t-j,h} = A^\varepsilon_{t-j,h}-\bar A^\varepsilon_{-j,h}$. In addition,
\[
\sum_{t=1}^{T}\tilde y_{t-j} = \sum_{t=1}^{T}\tilde A^x_{t-j} = \sum_{t=1}^{T}\tilde X_t = \sum_{t=1}^{T}\tilde A^\varepsilon_{t-j,h} = 0
\]
for all $j=1,\dots,p$ and all $h=-\infty,\dots,T-j$. Writing $\mathcal{A}_h = R\mathbf{A}^hR'S_N^{-1}(\Psi_0^0)$, which is UB, and $l_{MN}$ for the $MN\times1$ vector of ones,
\[
A^0 = \left(\sum_{h=0}^{\infty}\mathcal{A}_h\right)\alpha^0
\le \sup\left|\alpha^{0,m}_i\right|\sup_{1\le i\le MN}\sum_{j=1}^{MN}\left|\sum_{h=0}^{\infty}(\mathcal{A}_h)_{ij}\right|l_{MN}
= \sup\left|\alpha^{0,m}_i\right|\left\|\sum_{h=0}^{\infty}\mathcal{A}_h\right\|_\infty l_{MN} = a^0\,l_{MN},
\]
\[
\bar A^x_{-j} = \sum_{h=0}^{\infty}\mathcal{A}_h\left(I_N\otimes\beta^{0\prime}\right)\bar X_{-j-h}
\le \sup\left|x^m_{it,k}\right|\left\|\sum_{h=0}^{\infty}\mathcal{A}_h\right\|_\infty\left\|\beta^0\right\|_1 l_{MN} = a^x\,l_{MN},
\]
\[
\tilde A^x_{t-j} = \sum_{h=0}^{\infty}\mathcal{A}_h\left(I_N\otimes\beta^{0\prime}\right)\left(X_{t-j-h}-\bar X_{-j-h}\right)
\le \sup\left|x^m_{it,k}-\bar x^m_{i,k}\right|\left\|\sum_{h=0}^{\infty}\mathcal{A}_h\right\|_\infty\left\|\beta^0\right\|_1 l_{MN} = \tilde a^x\,l_{MN},
\]
where $a^0 = \sup|\alpha^{0,m}_i|\,\|\sum_{h=0}^{\infty}\mathcal{A}_h\|_\infty$, $a^x = \sup|x^m_{it,k}|\,\|\sum_{h=0}^{\infty}\mathcal{A}_h\|_\infty\|\beta^0\|_1$, and $\tilde a^x = \sup|x^m_{it,k}-\bar x^m_{i,k}|\,\|\sum_{h=0}^{\infty}\mathcal{A}_h\|_\infty\|\beta^0\|_1$.

Let $Z_t = \left(y'_{t-1},\dots,y'_{t-p},(Wy_{t-1})',\dots,(Wy_{t-q})',X'_t\right)'$, and let $C$ be the $MN(p+q+K)\times MN(p+q+K)$ permutation matrix, partitioned as $C = (C_{y,1},\dots,C_{y,p},C_{wy,1},\dots,C_{wy,q},C_x)$ with $C_{y,1},\dots,C_{y,p},C_{wy,1},\dots,C_{wy,q}$ of dimension $MN(p+q+K)\times MN$ and $C_x$ of dimension $MN(p+q+K)\times MNK$. Then
\[
CZ_t = \sum_{j=1}^{p}C_{y,j}y_{t-j} + \sum_{k=1}^{q}C_{wy,k}Wy_{t-k} + C_xX_t
= Q^0 + Q^x_t + \sum_{h=-\infty}^{t-1}Q^\varepsilon_{t,h}\varepsilon_h
= Q^0 + Q^x_t + \sum_{h=1}^{t-1}Q^\varepsilon_{t,h}\varepsilon_h + \sum_{h=0}^{\infty}Q^\varepsilon_{t,-h}\varepsilon_{-h}
\tag{A.4}
\]
where
\[
Q^0 = \left(\sum_{j=1}^{p}C_{y,j}+\sum_{k=1}^{q}C_{wy,k}W\right)A^0,\qquad
Q^x_t = \sum_{j=1}^{p}C_{y,j}A^x_{t-j}+\sum_{k=1}^{q}C_{wy,k}WA^x_{t-k}+C_xX_t,
\]
\[
Q^\varepsilon_{t,h} = \sum_{j=1}^{p}C_{y,j}A^\varepsilon_{t-j,h}+\sum_{k=1}^{q}C_{wy,k}WA^\varepsilon_{t-k,h}.
\]
Similarly,
\[
C\bar Z = \sum_{j=1}^{p}C_{y,j}\bar y_{-j}+\sum_{k=1}^{q}C_{wy,k}W\bar y_{-k}+C_x\bar X
= Q^0 + \bar Q^x + \sum_{h=-\infty}^{T-1}\bar Q^\varepsilon_h\varepsilon_h
\tag{A.5}
\]
where
\[
\bar Q^x = \sum_{j=1}^{p}C_{y,j}\bar A^x_{-j}+\sum_{k=1}^{q}C_{wy,k}W\bar A^x_{-k}+C_x\bar X,\qquad
\bar Q^\varepsilon_h = \sum_{j=1}^{p}C_{y,j}\bar A^\varepsilon_{-j,h}+\sum_{k=1}^{q}C_{wy,k}W\bar A^\varepsilon_{-k,h},
\]
and
\[
C\tilde Z_t = CZ_t - C\bar Z = \tilde Q^x_t + \sum_{h=-\infty}^{T-1}\tilde Q^\varepsilon_{t,h}\varepsilon_h
\tag{A.6}
\]
where
\[
\tilde Q^x_t = \sum_{j=1}^{p}C_{y,j}\tilde A^x_{t-j}+\sum_{k=1}^{q}C_{wy,k}W\tilde A^x_{t-k}+C_x\tilde X_t,\qquad
\tilde Q^\varepsilon_{t,h} = \sum_{j=1}^{p}C_{y,j}\tilde A^\varepsilon_{t-j,h}+\sum_{k=1}^{q}C_{wy,k}W\tilde A^\varepsilon_{t-k,h}.
\]
In addition, $\sum_{t=1}^{T}\tilde Q^x_t = 0$ and $\sum_{t=1}^{T}\tilde Q^\varepsilon_{t,h} = 0$ for all $h=-\infty,\dots,T-j$.

A.2 Facts on uniformly bounded (UB) matrices

UB.1: If the row and column sums of $A_n$ and $B_n$ are UB in $n$, then the row and column sums of $A_nB_n$ are also UB in $n$, because $\|A_nB_n\|_\infty \le \|A_n\|_\infty\|B_n\|_\infty < \infty$ and $\|A_nB_n\|_1 \le \|A_n\|_1\|B_n\|_1 < \infty$.

UB.2: If the row and column sums of $A_n$ and $B_n$ are UB in $n$, then the row and column sums of $A_n\otimes B_n$ are also UB in $n$, because $\|A_n\otimes B_n\|_\infty \le \|A_n\|_\infty\|B_n\|_\infty < \infty$ and $\|A_n\otimes B_n\|_1 \le \|A_n\|_1\|B_n\|_1 < \infty$.

UB.3: If the row and column sums of $A_n$ and $B_n$ are UB in $n$, then the row and column sums of $A_nB_n$ are also UB in $n$, because $\|A_nB_n\|_\infty \le \|A_n\|_\infty\|B_n\|_\infty < \infty$ and $\|A_nB_n\|_1 \le \|A_n\|_1\|B_n\|_1 < \infty$.

UB.4: If the row and column sums of $\sum_{h=0}^{\infty}A_{nh}$ and $\sum_{h=0}^{\infty}B_{nh}$ are UB in $n$, then $\sum_{h=0}^{\infty}(A_{nh}B_{nh})$ is also UB in both the row and column sums, because $\|\sum_{h=0}^{\infty}(A_{nh}B_{nh})\| \le \|\sum_{h=0}^{\infty}A_{nh}\|\,\|\sum_{h=0}^{\infty}B_{nh}\| < \infty$.

UB.5: If the $n\times n$ matrix $A_n$ is uniformly bounded in both the row and column sums, then $l'_{n^2}\operatorname{vec}(A_n) = \sum_{i=1}^{n}\sum_{j=1}^{n}|A_{n,ij}| \le n\|A_n\|_\infty = O_p(n)$.

UB.6: If the $n\times n$ matrix $A_n$ is uniformly bounded in both the row and column sums, then $\operatorname{tr}(A_n) \le \sum_{i=1}^{n}|A_{n,ii}| \le \sum_{i=1}^{n}\sum_{j=1}^{n}|A_{n,ij}| \le n\|A_n\|_\infty = O_p(n)$.

UB.7: If the row and column sums of the $n_1(n)\times n_2(n)$ matrix $A_n$ are UB in $n$, then $a'_{n_1}A_nb_{n_2} = O_p(n_1\wedge n_2)$, where $a_{n_1}$ and $b_{n_2}$ denote $n_1\times1$ and $n_2\times1$ vectors of uniformly bounded constants, respectively, with $\max_{1\le i\le n_1}|a_i| = O_p(1)$ and $\max_{1\le i\le n_2}|b_i| = O_p(1)$. Because
\[
a'_{n_1}A_nb_{n_2} \le \max_{1\le i\le n_1}|a_i|\,\max_{1\le i\le n_2}|b_i|\sum_{i=1}^{n_1}\sum_{j=1}^{n_2}|A_{n,ij}|,
\]
it follows that $a'_{n_1}A_nb_{n_2} \le n_1\max_{1\le i\le n_1}|a_i|\max_{1\le i\le n_2}|b_i|\,\|A_n\|_\infty = O_p(n_1)$ and $a'_{n_1}A_nb_{n_2} \le n_2\max_{1\le i\le n_1}|a_i|\max_{1\le i\le n_2}|b_i|\,\|A_n\|_1 = O_p(n_2)$.

A.3 First and Second Order Differentials

From the profile log-likelihood function in (4.1.3),
\[
\ln L(\theta) \propto \frac{T}{2}\ln\det\left(I_N\otimes\Omega^{-1}\right) + T\ln\det S_N(\Psi_0) + T\ln\det S_N(\Lambda)
- \frac12\sum_{t=1}^{T}\left(S_N(\Psi_0)\tilde y_t-\tilde\mu_t\right)'S_N(\Lambda)'\left(I_N\otimes\Omega^{-1}\right)S_N(\Lambda)\left(S_N(\Psi_0)\tilde y_t-\tilde\mu_t\right)
\]
where $\theta = \left(\operatorname{vech}(\Omega)',\operatorname{vec}(\Lambda)',\operatorname{vec}(\gamma)',\operatorname{vec}(\Psi_0)'\right)'$, $\tilde u_t = S_N(\Psi_0)\tilde y_t-\tilde\mu_t = S_N^{-1}(\Lambda)\tilde\varepsilon_t$, $\tilde\mu_t = (I_N\otimes\gamma)C\tilde Z_t$, and $d\tilde\mu_t = (I_N\otimes d\gamma)C\tilde Z_t$.
dlnL(θ) = T 2 d ln det I N ⊗ Ω −1 +Td ln det (S N (Ψ 0 )) +Td ln det (S N (Λ)) − T X t=1 e u 0 t S N (Λ) 0 I N ⊗ Ω −1 (dS N (Λ))e ut − T X t=1 e u 0 t S N (Λ) 0 I N ⊗ Ω −1 S N (Λ) dS N (Ψ 0 )e yt−de μ t − 1 2 T X t=1 e u 0 t S N (Λ) 0 I N ⊗d Ω −1 S N (Λ)e ut = − T 2 vec (I N ⊗ Ω) 0 vec I N ⊗ Ω −1 d (Ω) Ω −1 + 1 2 T X t=1 vec S N (Λ)e ute u 0 t S N (Λ) 0 0 vec I N ⊗ Ω −1 (dΩ) Ω −1 + T X t=1 tr We yte u 0 t S N (Λ) 0 I N ⊗ Ω −1 S N (Λ)− WS −1 N (Ψ 0 ) (I N ⊗dΨ 0 ) + T X t=1 tr e u 0 t S N (Λ) 0 I N ⊗ Ω −1 S N (Λ) (I N ⊗dγ)C e Zt + T X t=1 tr We u t e u 0 t S N (Λ) 0 I N ⊗ Ω −1 − WS −1 N (Λ) (I N ⊗dΛ) where vec(I N ⊗ Ω −1 (dΩ) Ω −1 ) = (I N ⊗K M,N ⊗I M ) vec(I N )⊗vec Ω −1 (dΩ) Ω −1 = Jvec Ω −1 (dΩ) Ω −1 = J Ω −1 ⊗ Ω −1 dvecΩ = J Ω −1 ⊗ Ω −1 D M dvechΩ 128 Denote J = I N ⊗K M,N (vec(I N )⊗ I M ) ⊗ I M . K M,N is MN× MN commutation matrix, such that K M,N vecA = vecA 0 for matrix A with dimension M×N. D M is M 2 ×M(M + 1)/2 duplication matrix, such that D M vechA =vecA for symmetric matrix A with dimension M×M. γ is M×M(p +q +K), vec(I N ⊗dγ 0 ) = (J⊗I p+q+K )dvec γ 0 = J⊗I p+q+K K M,M(p+q+K) dvec(γ) vec(I N ⊗dγ) = I N ⊗K M(p+q+K),N vec(I N )⊗I M(p+q+K) ⊗I M dvec(γ) =J 1 dvec(γ) denoting J 1 = I N ⊗K M(p+q+K),N vec(I N )⊗I M(p+q+K) ⊗I M . 
Therefore, dlnL(θ) = 1 2 T X t=1 vec S N (Λ)e ute u 0 t S N (Λ) 0 − (I N ⊗ Ω) 0 J Ω −1 ⊗ Ω −1 D M dvechΩ + T X t=1 vec S N (Λ) 0 (I N ⊗ Ω −1 )S N (Λ)e ute y 0 t W 0 −S −1 N (Ψ 0 ) 0 W 0 0 JdvecΨ 0 + T X t=1 vec S N (Λ) 0 I N ⊗ Ω −1 S N (Λ)e ut e Z 0 t C 0 0 J 1 dvec (γ) + T X t=1 vec (I N ⊗ Ω −1 )S N (Λ)e ute u 0 t W 0 −S −1 N (Λ) 0 W 0 0 JdvecΛ First order differentials are D Ω lnL(θ) = ∂lnL(θ) ∂ (vechΩ) 0 = 1 2 T X t=1 vec I N ⊗ Ω −1 e εte ε 0 t − (I N ⊗ Ω) I N ⊗ Ω −1 0 JD M D Λ lnL(θ) = ∂lnL(θ) ∂ (vecΛ) 0 = T X t=1 vec I N ⊗ Ω −1 e εte ε 0 t S −1 N (Λ) 0 W 0 −S −1 N (Λ) 0 W 0 0 J DγlnL(θ) = ∂lnL(θ) ∂ (vecγ) 0 = T X t=1 vec S N (Λ) 0 I N ⊗ Ω −1 e εt e Z 0 t C 0 0 J 1 D Ψ 0 lnL(θ) = ∂lnL(θ) ∂ (vecΨ 0 ) 0 = T X t=1 vec S N (Λ) 0 I N ⊗ Ω −1 e εte y 0 t W 0 −S −1 N (Ψ 0 ) 0 W 0 0 J wheree εt =S N (Λ)e ut. 129 Further, D Φ j lnL(θ) = ∂lnL(θ) ∂ (vecΦ j ) 0 = T X t=1 vec S N (Λ) 0 I N ⊗ Ω −1 e εte y 0 t−j 0 J D Ψ j lnL(θ) = ∂lnL(θ) ∂ (vecΨ j ) 0 = T X t=1 vec S N (Λ) 0 I N ⊗ Ω −1 e εte y 0 t−j W 0 0 J D β lnL(θ) = ∂lnL(θ) ∂ (vecβ) 0 = T X t=1 vec e Xte ε 0 t I N ⊗ Ω −1 S N (Λ) 0 (J⊗I K ) d 2 lnL(θ) = NT 2 trΩ −1 (dΩ) Ω −1 (dΩ)− T X t=1 tre u 0 t S N (Λ) 0 I N ⊗ Ω −1 (dΩ) Ω −1 (dΩ) Ω −1 S N (Λ)e ut −TtrS −1 N (Ψ 0 ) (I N ⊗dΨ 0 ) WS −1 N (Ψ 0 )(I N ⊗dΨ 0 )W − T X t=1 tr (I N ⊗dΨ 0 )We y t 0 S N (Λ) 0 (I N ⊗ Ω −1 )S N (Λ)(I N ⊗dΨ 0 )We yt −TtrS −1 N (Λ)(I N ⊗dΛ)WS −1 N (Λ)(I N ⊗dΛ)W − T X t=1 tre u 0 t ((I N ⊗dΛ)W) 0 I N ⊗ Ω −1 (I N ⊗dΛ)We ut − T X t=1 tr (I N ⊗dγ)C e Zt 0 S N (Λ) 0 I N ⊗ Ω −1 S N (Λ) (I N ⊗dγ)C e Zt −2 T X t=1 tre u 0 t S N (Λ) 0 I N ⊗ Ω −1 (dΩ) Ω −1 (I N ⊗dΛ)We ut −2 T X t=1 tre u 0 t S N (Λ) 0 I N ⊗ Ω −1 (dΩ) Ω −1 S N (Λ)(I N ⊗dΨ 0 )We y t −2 T X t=1 tre u 0 t S N (Λ) 0 I N ⊗ Ω −1 (dΩ) Ω −1 S N (Λ) (I N ⊗dγ)C e Zt −2 T X t=1 tr (I N ⊗dγ)C e Zt 0 S N (Λ) 0 I N ⊗ Ω −1 S N (Λ)(I N ⊗dΨ 0 )We yt −2 T X t=1 tre u 0 t ((I N ⊗dΛ)W) 0 I N ⊗ Ω −1 S N (Λ)(I N ⊗dΨ 0 )We yt −2 T X t=1 tre u 0 t S N (Λ) 0 I N ⊗ Ω −1 (I N ⊗dΛ)W(I N ⊗dΨ 0 )We yt −2 T X t=1 tre u 0 t 
\[
\big((I_N\otimes d\Lambda)W\big)'\big(I_N\otimes\Omega^{-1}\big)S_N(\Lambda)(I_N\otimes d\gamma)C\tilde Z_t
-2\sum_{t=1}^{T}\operatorname{tr}\,\tilde u_t' S_N(\Lambda)'\big(I_N\otimes\Omega^{-1}\big)(I_N\otimes d\Lambda)W(I_N\otimes d\gamma)C\tilde Z_t .
\]
Since
\[
\begin{aligned}
\operatorname{tr}\,\tilde u_t' S_N(\Lambda)'\big(I_N\otimes\Omega^{-1}(d\Omega)\Omega^{-1}(d\Omega)\Omega^{-1}\big)S_N(\Lambda)\tilde u_t
&=\operatorname{tr}\,S_N(\Lambda)\tilde u_t\tilde u_t' S_N(\Lambda)'\big(I_N\otimes\Omega^{-1}(d\Omega)\Omega^{-1/2}\big)I_{MN}\big(I_N\otimes\Omega^{-1/2}(d\Omega)\Omega^{-1}\big)\\
&=\operatorname{vec}\big(I_N\otimes\Omega^{-1}(d\Omega)\Omega^{-1/2}\big)'\big(I_{MN}\otimes S_N(\Lambda)\tilde u_t\tilde u_t' S_N(\Lambda)'\big)\operatorname{vec}\big(I_N\otimes\Omega^{-1}(d\Omega)\Omega^{-1/2}\big)\\
&=\operatorname{vec}\big(\Omega^{-1}(d\Omega)\Omega^{-1/2}\big)'J'\big(I_{MN}\otimes S_N(\Lambda)\tilde u_t\tilde u_t' S_N(\Lambda)'\big)J\operatorname{vec}\big(\Omega^{-1}(d\Omega)\Omega^{-1/2}\big)\\
&=(d\operatorname{vech}\Omega)'D_M'\big(\Omega^{-1/2}\otimes\Omega^{-1}\big)J'\big(I_{MN}\otimes S_N(\Lambda)\tilde u_t\tilde u_t' S_N(\Lambda)'\big)J\big(\Omega^{-1/2}\otimes\Omega^{-1}\big)D_M\,d\operatorname{vech}\Omega,
\end{aligned}
\]
we obtain the second-order differential of the concentrated log-likelihood function
\[
\begin{aligned}
d^2\ln L(\theta)={}&\frac{NT}{2}(d\operatorname{vech}\Omega)'D_M'\big(\Omega^{-1}\otimes\Omega^{-1}\big)D_M\,d\operatorname{vech}\Omega\\
&-\sum_{t=1}^{T}(d\operatorname{vech}\Omega)'D_M'\big(\Omega^{-1/2}\otimes\Omega^{-1}\big)J'\big(I_{MN}\otimes S_N(\Lambda)\tilde u_t\tilde u_t' S_N(\Lambda)'\big)J\big(\Omega^{-1/2}\otimes\Omega^{-1}\big)D_M\,d\operatorname{vech}\Omega\\
&-T(d\operatorname{vec}\Psi_0)'K_{M,M}J'\big(S_N^{-1}(\Psi_0)'W'\otimes WS_N^{-1}(\Psi_0)\big)J\,d\operatorname{vec}\Psi_0\\
&-\sum_{t=1}^{T}(d\operatorname{vec}\Psi_0)'J'\big(W\tilde y_t\tilde y_t'W'\otimes S_N(\Lambda)'(I_N\otimes\Omega^{-1})S_N(\Lambda)\big)J\,d\operatorname{vec}\Psi_0\\
&-T(d\operatorname{vec}\Lambda)'K_{M,M}J'\big(S_N^{-1}(\Lambda)'W'\otimes WS_N^{-1}(\Lambda)\big)J\,d\operatorname{vec}\Lambda
-\sum_{t=1}^{T}(d\operatorname{vec}\Lambda)'J'\big(W\tilde u_t\tilde u_t'W'\otimes(I_N\otimes\Omega^{-1})\big)J\,d\operatorname{vec}\Lambda\\
&-\sum_{t=1}^{T}(d\operatorname{vec}\gamma)'J_1'\big(C\tilde Z_t\tilde Z_t'C'\otimes S_N(\Lambda)'(I_N\otimes\Omega^{-1})S_N(\Lambda)\big)J_1\,d\operatorname{vec}\gamma\\
&-2\sum_{t=1}^{T}(d\operatorname{vec}\Psi_0)'J'\big(W\tilde y_t\tilde u_t'W'\otimes(I_N\otimes\Omega^{-1})S_N(\Lambda)'\big)J\,d\operatorname{vec}\Lambda\\
&-2\sum_{t=1}^{T}(d\operatorname{vec}\Psi_0)'K_{M,M}J'\big(W'\otimes W\tilde y_t\tilde u_t'S_N(\Lambda)'(I_N\otimes\Omega^{-1})\big)J\,d\operatorname{vec}\Lambda\\
&-2\sum_{t=1}^{T}(d\operatorname{vec}\Psi_0)'J'\big(W\tilde y_t\tilde u_t'S_N(\Lambda)'\otimes S_N(\Lambda)'\big)J\big(\Omega^{-1}\otimes\Omega^{-1}\big)D_M\,d\operatorname{vech}\Omega\\
&-2\sum_{t=1}^{T}(d\operatorname{vec}\Psi_0)'J'\big(W\tilde y_t\tilde Z_t'C'\otimes S_N(\Lambda)'(I_N\otimes\Omega^{-1})S_N(\Lambda)\big)J_1\,d\operatorname{vec}\gamma\\
&-2\sum_{t=1}^{T}(d\operatorname{vec}\gamma)'J_1'\big(C\tilde Z_t\tilde u_t'W\otimes(I_N\otimes\Omega^{-1})S_N(\Lambda)'\big)J\,d\operatorname{vec}\Lambda\\
&-2\sum_{t=1}^{T}(d\operatorname{vec}\gamma)'J_1'\big(C\tilde Z_t\tilde u_t'S_N(\Lambda)'(I_N\otimes\Omega^{-1})\otimes W\big)JK_{M,M}\,d\operatorname{vec}\Lambda\\
&-2\sum_{t=1}^{T}(d\operatorname{vec}\Lambda)'J'\big(W\tilde u_t\tilde u_t'S_N(\Lambda)'\otimes I_{MN}\big)J\big(\Omega^{-1}\otimes\Omega^{-1}\big)D_M\,d\operatorname{vech}\Omega\\
&-2\sum_{t=1}^{T}(d\operatorname{vec}\gamma)'J_1'\big(C\tilde Z_t\tilde u_t'S_N(\Lambda)'\otimes S_N(\Lambda)'\big)J\big(\Omega^{-1}\otimes\Omega^{-1}\big)D_M\,d\operatorname{vech}\Omega .
\end{aligned}
\]
The normalized Hessian matrix \(\frac{1}{NT}H_{\theta,NT}\ln L(\theta)=\frac{1}{NT}D^2_\theta\ln L(\theta)\) is therefore symmetric in the blocks ordered as \(\theta=(\operatorname{vech}(\Omega)',\operatorname{vec}(\Lambda)',\operatorname{vec}(\gamma)',\operatorname{vec}(\Psi_0)')'\), with lower-triangular blocks (the \(\ast\) entries following by symmetry)
\[
\begin{aligned}
\tfrac{1}{NT}H_{\Omega\Omega}&=\tfrac12 D_M'\big(\Omega^{-1}\otimes\Omega^{-1}\big)D_M-\tfrac{1}{NT}\sum_{t=1}^{T}D_M'J'\big((I_N\otimes\Omega^{-1})\otimes(I_N\otimes\Omega^{-1})\tilde\varepsilon_t\tilde\varepsilon_t'(I_N\otimes\Omega^{-1})\big)JD_M,\\
\tfrac{1}{NT}H_{\Lambda\Omega}&=-\tfrac{1}{NT}\sum_{t=1}^{T}J'\big(W\tilde u_t\tilde\varepsilon_t'\otimes I_{MN}\big)J\big(\Omega^{-1}\otimes\Omega^{-1}\big)D_M,\\
\tfrac{1}{NT}H_{\gamma\Omega}&=-\tfrac{1}{NT}\sum_{t=1}^{T}J_1'\big(C\tilde Z_t\tilde\varepsilon_t'\otimes S_N(\Lambda)'\big)J\big(\Omega^{-1}\otimes\Omega^{-1}\big)D_M,\\
\tfrac{1}{NT}H_{\Psi_0\Omega}&=-\tfrac{1}{NT}\sum_{t=1}^{T}J'\big(W\tilde y_t\tilde\varepsilon_t'\otimes S_N(\Lambda)'\big)J\big(\Omega^{-1}\otimes\Omega^{-1}\big)D_M,\\
\tfrac{1}{NT}H_{\Lambda\Lambda}&=-\tfrac{1}{N}K_{M,M}J'\big(S_N^{-1}(\Lambda)'W\otimes WS_N^{-1}(\Lambda)\big)J-\tfrac{1}{NT}\sum_{t=1}^{T}J'\big(W\tilde u_t\tilde u_t'W'\otimes(I_N\otimes\Omega^{-1})\big)J,\\
\tfrac{1}{NT}H_{\gamma\Lambda}&=-\tfrac{1}{NT}\sum_{t=1}^{T}\Big[J_1'\big(C\tilde Z_t\tilde u_t'W\otimes S_N(\Lambda)'(I_N\otimes\Omega^{-1})\big)J+J_1'\big(C\tilde Z_t\tilde\varepsilon_t'(I_N\otimes\Omega^{-1})\otimes W\big)JK_{M,M}\Big],\\
\tfrac{1}{NT}H_{\Psi_0\Lambda}&=-\tfrac{1}{NT}\sum_{t=1}^{T}\Big[J'\big(W\tilde y_t\tilde u_t'W'\otimes S_N(\Lambda)'(I_N\otimes\Omega^{-1})\big)J+K_{M,M}J'\big(W'\otimes W\tilde y_t\tilde\varepsilon_t'(I_N\otimes\Omega^{-1})\big)J\Big],\\
\tfrac{1}{NT}H_{\gamma\gamma}&=-\tfrac{1}{NT}\sum_{t=1}^{T}J_1'\big(C\tilde Z_t\tilde Z_t'C'\otimes S_N(\Lambda)'(I_N\otimes\Omega^{-1})S_N(\Lambda)\big)J_1,\\
\tfrac{1}{NT}H_{\Psi_0\gamma}&=-\tfrac{1}{NT}\sum_{t=1}^{T}J'\big(W\tilde y_t\tilde Z_t'C'\otimes S_N(\Lambda)'(I_N\otimes\Omega^{-1})S_N(\Lambda)\big)J_1,\\
\tfrac{1}{NT}H_{\Psi_0\Psi_0}&=-\tfrac{1}{N}K_{M,M}J'\big(S_N^{-1}(\Psi_0)'W'\otimes WS_N^{-1}(\Psi_0)\big)J-\tfrac{1}{NT}\sum_{t=1}^{T}J'\big(W\tilde y_t\tilde y_t'W'\otimes S_N(\Lambda)'(I_N\otimes\Omega^{-1})S_N(\Lambda)\big)J.
\end{aligned}
\tag{A.7}
\]

A.4 Information Matrix and Information Matrix Equality

The information matrix is \(\mathcal I_{NT}(\theta^0)=-\frac{1}{NT}E\,H_{\theta,NT}\ln L(\theta^0)\). Its limit as \(T\to\infty\), \(\mathcal I^0=\mathcal I(\theta^0)=\lim_{T\to\infty}\mathcal I_{NT}(\theta^0)\), is symmetric with lower-triangular blocks
\[
\begin{aligned}
\mathcal I^0_{\Omega\Omega}&=\tfrac12 D_M'\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)D_M,\qquad
\mathcal I^0_{\Lambda\Omega}=\tfrac{1}{N}J'\big(WS_N^{-1}(\Lambda^0)\otimes(I_N\otimes\Omega^{0-1})\big)JD_M,\qquad
\mathcal I^0_{\gamma\Omega}=0,\\
\mathcal I^0_{\Psi_0\Omega}&=\tfrac{1}{N}J'\big(WS_N^{-1}(\Psi_0^0)S_N^{-1}(\Lambda^0)\otimes S_N(\Lambda^0)'(I_N\otimes\Omega^{0-1})\big)JD_M,\\
\mathcal I^0_{\Lambda\Lambda}&=\tfrac{1}{N}K_{M,M}J'\big(S_N^{-1}(\Lambda^0)'W'\otimes WS_N^{-1}(\Lambda^0)\big)J+\tfrac{1}{N}J'\big(W\Sigma_u^0W'\otimes(I_N\otimes\Omega^{0-1})\big)J,\qquad
\mathcal I^0_{\gamma\Lambda}=0,\\
\mathcal I^0_{\Psi_0\Lambda}&=\tfrac{1}{N}J'\big(WS_N^{-1}(\Psi_0^0)\Sigma_u^0W'\otimes S_N(\Lambda^0)'(I_N\otimes\Omega^{0-1})\big)J+\tfrac{1}{N}K_{M,M}J'\big(W'\otimes WS_N^{-1}(\Psi_0^0)S_N^{-1}(\Lambda^0)\big)J,\\
\mathcal I^0_{\gamma\gamma}&=\tfrac{1}{NT}\sum_{t=1}^{T}J_1'\big(CE(\tilde Z_t\tilde Z_t')C'\otimes(\Sigma_u^0)^{-1}\big)J_1,\\
\mathcal I^0_{\Psi_0\gamma}&=\tfrac{1}{NT}\sum_{t=1}^{T}J'\big(WS_N^{-1}(\Psi_0^0)(I_N\otimes\gamma^0)CE(\tilde Z_t\tilde Z_t')C'\otimes(\Sigma_u^0)^{-1}\big)J_1,\\
\mathcal I^0_{\Psi_0\Psi_0}&=\tfrac{1}{N}K_{M,M}J'\big(S_N^{-1}(\Psi_0^0)'W'\otimes WS_N^{-1}(\Psi_0^0)\big)J\\
&\quad+\tfrac{1}{NT}\sum_{t=1}^{T}J'\big(WS_N^{-1}(\Psi_0^0)(I_N\otimes\gamma^0)CE(\tilde Z_t\tilde Z_t')C'(I_N\otimes\gamma^{0\prime})S_N^{-1}(\Psi_0^0)'W'\otimes(\Sigma_u^0)^{-1}\big)J
+\tfrac{1}{N}J'\big(W\Sigma_y^0W'\otimes(\Sigma_u^0)^{-1}\big)J,
\end{aligned}
\tag{A.8}
\]
where \(\Sigma_u=S_N^{-1}(\Lambda)(I_N\otimes\Omega)S_N^{-1}(\Lambda)'\), \(\Sigma_y=S_N^{-1}(\Psi_0)\Sigma_uS_N^{-1}(\Psi_0)'\), \(\Sigma_u^0=\Sigma_u(\theta^0)\), and \(\Sigma_y^0=\Sigma_y(\theta^0)\).
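The vectorization step used in the second-order differential combines three identities: \(\operatorname{tr}(A'QA)=\operatorname{vec}(A)'(I\otimes Q)\operatorname{vec}(A)\), \(\operatorname{vec}(I_N\otimes X)=J\operatorname{vec}(X)\) (which also gives \(J'J=NI_{M^2}\)), and \(\operatorname{vec}(\Omega^{-1}E\,\Omega^{-1/2})=(\Omega^{-1/2}\otimes\Omega^{-1})D_M\operatorname{vech}(E)\) for symmetric \(E\). A minimal numeric sketch of the trace identity (the helper names `duplication` and `selection_J`, and the small dimensions `M`, `N`, are illustrative, not from the dissertation):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 2, 3  # illustrative small dimensions

def duplication(m):
    """D_m with vec(E) = D_m vech(E) for symmetric E (column-major vec)."""
    D = np.zeros((m * m, m * (m + 1) // 2))
    c = 0
    for j in range(m):
        for i in range(j, m):
            D[j * m + i, c] = 1.0
            D[i * m + j, c] = 1.0
            c += 1
    return D

def selection_J(m, n):
    """J with vec(I_n kron X) = J vec(X) for any m x m matrix X."""
    J = np.zeros(((m * n) ** 2, m * m))
    for c in range(m * m):
        X = np.zeros((m, m))
        X[c % m, c // m] = 1.0  # column-major unvec of the c-th unit vector
        J[:, c] = np.kron(np.eye(n), X).reshape(-1, order="F")
    return J

# symmetric positive definite Omega, symmetric perturbation E, arbitrary Q
A0 = rng.standard_normal((M, M)); Omega = A0 @ A0.T + M * np.eye(M)
E0 = rng.standard_normal((M, M)); E = E0 + E0.T
Q = rng.standard_normal((M * N, M * N))

w, V = np.linalg.eigh(Omega)
Oinv = V @ np.diag(1.0 / w) @ V.T
Oinv_half = V @ np.diag(w ** -0.5) @ V.T  # symmetric square root of Omega^{-1}

# left side: tr( Q (I_N kron Omega^{-1} E Omega^{-1} E Omega^{-1}) )
lhs = np.trace(Q @ np.kron(np.eye(N), Oinv @ E @ Oinv @ E @ Oinv))

# right side: vech(E)' D' (O^{-1/2} kron O^{-1}) J' (I_{MN} kron Q) J (O^{-1/2} kron O^{-1}) D vech(E)
D, J = duplication(M), selection_J(M, N)
vech_E = np.linalg.pinv(D) @ E.reshape(-1, order="F")
G = J @ np.kron(Oinv_half, Oinv) @ D  # maps vech(E) to vec(I_N kron O^{-1} E O^{-1/2})
rhs = vech_E @ G.T @ np.kron(np.eye(M * N), Q) @ G @ vech_E
```

The two sides agree because the middle factor \(I_{MN}\otimes Q\) reproduces \(\operatorname{tr}(A'QA)\) with \(A=I_N\otimes\Omega^{-1}E\,\Omega^{-1/2}\).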
For the parameter \(\theta=(\operatorname{vech}(\Omega)',\operatorname{vec}(\Lambda)',\operatorname{vec}(\gamma)',\operatorname{vec}(\Psi_0)')'\), the score outer product
\(\frac{1}{NT}E\big[(D_\theta\ln L(\theta^0))'(D_\theta\ln L(\theta^0))\big]\)
is the symmetric matrix with blocks \(\frac{1}{NT}E\big[(D_a\ln L(\theta^0))'(D_b\ln L(\theta^0))\big]\) for \(a,b\in\{\Omega,\Lambda,\gamma,\Psi_0\}\), and it satisfies
\[
\frac{1}{NT}E\big[(D_\theta\ln L(\theta^0))'(D_\theta\ln L(\theta^0))\big]
=\mathcal I(\theta^0)+\Delta(\theta^0)+O_p\!\Big(\frac1T\Big),
\]
where
\[
\Delta(\theta)=
\begin{pmatrix}
\Delta_{\Omega\Omega}&\ast&\ast&\ast\\
\Delta_{\Lambda\Omega}&\Delta_{\Lambda\Lambda}&\ast&\ast\\
\Delta_{\gamma\Omega}&\Delta_{\gamma\Lambda}&\Delta_{\gamma\gamma}&\ast\\
\Delta_{\Psi_0\Omega}&\Delta_{\Psi_0\Lambda}&\Delta_{\Psi_0\gamma}&\Delta_{\Psi_0\Psi_0}
\end{pmatrix}
\tag{A.9}
\]
with
\[
\begin{aligned}
\Delta^0_{\Omega\Omega}&=\tfrac{1}{4N}D_M'\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)J'\kappa^0 J\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)D_M,\\
\Delta^0_{\Lambda\Omega}&=\tfrac{1}{2N}J'\big(WS_N^{-1}(\Lambda^0)\otimes(I_N\otimes\Omega^{0-1})\big)\kappa^0 J\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)D_M,\qquad
\Delta^0_{\gamma\Omega}=0,\\
\Delta^0_{\Psi_0\Omega}&=\tfrac{1}{2N}J'\big(WS_N^{-1}(\Psi_0^0)S_N^{-1}(\Lambda^0)\otimes S_N(\Lambda^0)'(I_N\otimes\Omega^{0-1})\big)\kappa^0 J\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)D_M,\\
\Delta^0_{\Lambda\Lambda}&=\tfrac{1}{N}J'\big(WS_N^{-1}(\Lambda^0)\otimes(I_N\otimes\Omega^{0-1})\big)\kappa^0\big(S_N^{-1}(\Lambda^0)'W'\otimes(I_N\otimes\Omega^{0-1})\big)J,\qquad
\Delta^0_{\gamma\Lambda}=0,\\
\Delta^0_{\Psi_0\Lambda}&=\tfrac{1}{N}J'\big(WS_N^{-1}(\Psi_0^0)S_N^{-1}(\Lambda^0)\otimes S_N(\Lambda^0)'(I_N\otimes\Omega^{0-1})\big)\kappa^0\big(S_N^{-1}(\Lambda^0)'W'\otimes(I_N\otimes\Omega^{0-1})\big)J,\\
\Delta^0_{\gamma\gamma}&=0,\qquad \Delta^0_{\Psi_0\gamma}=0,\\
\Delta^0_{\Psi_0\Psi_0}&=\tfrac{1}{N}J'\big(WS_N^{-1}(\Psi_0^0)S_N^{-1}(\Lambda^0)\otimes S_N(\Lambda^0)'(I_N\otimes\Omega^{0-1})\big)\kappa^0\big(S_N^{-1}(\Lambda^0)'S_N^{-1}(\Psi_0^0)'W'\otimes(I_N\otimes\Omega^{0-1})S_N(\Lambda^0)\big)J,
\end{aligned}
\]
and
\[
\kappa^0=E\big(\varepsilon_t\varepsilon_t'\otimes\varepsilon_t\varepsilon_t'\big)
-\operatorname{vec}\big(I_N\otimes\Omega^0\big)\operatorname{vec}\big(I_N\otimes\Omega^0\big)'
-2N_{MN}\big((I_N\otimes\Omega^0)\otimes(I_N\otimes\Omega^0)\big).
\]
In the limit,
\(\lim_{T\to\infty}E\big[\tfrac{1}{\sqrt{NT}}(D_\theta\ln L(\theta^0))'\tfrac{1}{\sqrt{NT}}(D_\theta\ln L(\theta^0))\big]=\mathcal I(\theta^0)+\Delta(\theta^0)\).
Taking expectations of (A.7),
\(-\frac{1}{NT}E\,H_{\theta,NT}\ln L(\theta^0)=O_p(1/T)\) plus the symmetric matrix with lower-triangular blocks
\[
\begin{aligned}
(\Omega,\Omega):&\ \tfrac12 D_M'\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)D_M,\\
(\Lambda,\Omega):&\ \tfrac{1}{N}J'\big(WS_N^{-1}(\Lambda^0)(I_N\otimes\Omega^0)\otimes I_{MN}\big)J\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)D_M,\qquad (\gamma,\Omega):\ 0,\\
(\Psi_0,\Omega):&\ \tfrac{1}{N}J'\big(WS_N^{-1}(\Psi_0^0)S_N^{-1}(\Lambda^0)(I_N\otimes\Omega^0)\otimes S_N(\Lambda^0)'\big)J\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)D_M,\\
(\Lambda,\Lambda):&\ \tfrac{1}{N}K_{M,M}J'\big(S_N^{-1}(\Lambda^0)'W\otimes WS_N^{-1}(\Lambda^0)\big)J+\tfrac{1}{N}J'\big(W\Sigma_u^0W'\otimes(I_N\otimes\Omega^{0-1})\big)J,\qquad (\gamma,\Lambda):\ 0,\\
(\Psi_0,\Lambda):&\ \tfrac{1}{N}J'\big(WS_N^{-1}(\Psi_0^0)\Sigma_u^0W'\otimes S_N(\Lambda^0)'(I_N\otimes\Omega^{0-1})\big)J+\tfrac{1}{N}K_{M,M}J'\big(W'\otimes WS_N^{-1}(\Psi_0^0)S_N^{-1}(\Lambda^0)\big)J,\\
(\gamma,\gamma),\ (\Psi_0,\gamma),\ (\Psi_0,\Psi_0):&\ \text{as in (A.8).}
\end{aligned}
\]
Hence the information matrix is \(\mathcal I_{NT}(\theta^0)=-\frac{1}{NT}E\,H_{\theta,NT}\ln L(\theta^0)\), and its limit as \(T\to\infty\),
\[
\mathcal I^0=\mathcal I(\theta^0)=\lim_{T\to\infty}\mathcal I_{NT}(\theta^0),
\tag{A.10}
\]
coincides block by block with the matrix displayed in (A.8), with \(\Sigma_u^0\) and \(\Sigma_y^0\) as defined there.
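As a sanity check on the \((\Omega,\Omega)\) block, the information matrix equality can be verified exactly for the pure covariance problem \(x\sim N(0,\Omega)\) with no spatial structure: minus the expected Hessian, \(\tfrac12 D_M'(\Omega^{-1}\otimes\Omega^{-1})D_M\), must equal the expected score outer product, which follows from the Gaussian identity \(\operatorname{Cov}(\operatorname{vec}(xx'))=(I+K_{M,M})(\Omega\otimes\Omega)\) together with \(K_{M,M}D_M=D_M\). A small deterministic check (helper names and the dimension `M` are illustrative):

```python
import numpy as np

M = 3
rng = np.random.default_rng(42)
A0 = rng.standard_normal((M, M))
Omega = A0 @ A0.T + M * np.eye(M)  # symmetric positive definite
Oinv = np.linalg.inv(Omega)

def duplication(m):
    """D_m with vec(E) = D_m vech(E) for symmetric E (column-major vec)."""
    D = np.zeros((m * m, m * (m + 1) // 2))
    c = 0
    for j in range(m):
        for i in range(j, m):
            D[j * m + i, c] = 1.0
            D[i * m + j, c] = 1.0
            c += 1
    return D

def commutation(m):
    """K_{m,m} with K vec(A) = vec(A')."""
    K = np.zeros((m * m, m * m))
    for i in range(m):
        for j in range(m):
            K[i * m + j, j * m + i] = 1.0
    return K

D, K = duplication(M), commutation(M)
W = np.kron(Oinv, Oinv)

# minus expected Hessian of the N(0, Omega) log-likelihood w.r.t. vech(Omega)
fisher = 0.5 * D.T @ W @ D
# expected score outer product, computed exactly from Cov(vec(xx')) = (I + K)(Omega kron Omega)
outer = 0.25 * D.T @ W @ (np.eye(M * M) + K) @ np.kron(Omega, Omega) @ W @ D
```

The equality `outer == fisher` uses exactly the simplifications invoked in the text: \(K(\Omega^{-1}\otimes\Omega^{-1})=(\Omega^{-1}\otimes\Omega^{-1})K\) and \(KD_M=D_M\).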
For the parameter \(\theta=(\operatorname{vech}(\Omega)',\operatorname{vec}(\Lambda)',\operatorname{vec}(\gamma)',\operatorname{vec}(\Psi_0)')'\), the score outer product \(\frac{1}{NT}E\big[(D_\theta\ln L(\theta^0))'(D_\theta\ln L(\theta^0))\big]\) is the symmetric matrix with blocks \(\frac{1}{NT}E\big[(D_a\ln L(\theta^0))'(D_b\ln L(\theta^0))\big]\), \(a,b\in\{\Omega,\Lambda,\gamma,\Psi_0\}\). Next, we show that as \(T\to\infty\),
\[
\lim_{T\to\infty}E\Big[\tfrac{1}{\sqrt{NT}}(D_\theta\ln L(\theta^0))'\tfrac{1}{\sqrt{NT}}(D_\theta\ln L(\theta^0))\Big]
=\mathcal I(\theta^0)+\Delta(\theta^0),
\]
where \(\Delta(\theta^0)\neq0\) unless the \(\varepsilon_t\) are normally distributed.

For the \((\Omega,\Omega)\) block,
\[
\begin{aligned}
\tfrac{1}{NT}E\,D_\Omega\ln L(\theta^0)'D_\Omega\ln L(\theta^0)
={}&\tfrac14\tfrac{1}{NT}\sum_{t=1}^{T}\sum_{s=1}^{T}D_M'\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)J'E\big(\tilde\varepsilon_t\tilde\varepsilon_s'\otimes\tilde\varepsilon_t\tilde\varepsilon_s'\big)J\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)D_M\\
&-\tfrac14\tfrac{1}{NT}\sum_{t,s}D_M'\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)J'E\big(\tilde\varepsilon_t\otimes\tilde\varepsilon_t\big)\operatorname{vec}\big(I_N\otimes\Omega^0\big)'J\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)D_M\\
&-\tfrac14\tfrac{1}{NT}\sum_{t,s}D_M'\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)J'\operatorname{vec}\big(I_N\otimes\Omega^0\big)E\big(\tilde\varepsilon_t\otimes\tilde\varepsilon_t\big)'J\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)D_M\\
&+\tfrac14\tfrac{1}{NT}\sum_{t,s}D_M'\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)J'\operatorname{vec}\big(I_N\otimes\Omega^0\big)\operatorname{vec}\big(I_N\otimes\Omega^0\big)'J\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)D_M.
\end{aligned}
\]
Expanding \(\tilde\varepsilon_t=\varepsilon_t-\bar\varepsilon\) and using
\(\sum_{t}\sum_{s}E(\varepsilon_t\varepsilon_s'\otimes\varepsilon_t\varepsilon_s')
=\sum_{t}E(\varepsilon_t\varepsilon_t'\otimes\varepsilon_t\varepsilon_t')
+\sum_{t}\sum_{s\neq t}E(\varepsilon_t\otimes\varepsilon_t)E(\varepsilon_s'\otimes\varepsilon_s')\),
we obtain
\[
\begin{aligned}
\sum_{t=1}^{T}\sum_{s=1}^{T}E\big(\tilde\varepsilon_t\tilde\varepsilon_s'\otimes\tilde\varepsilon_t\tilde\varepsilon_s'\big)
={}&\Big(1+\tfrac{1}{T^2}-\tfrac{2}{T}\Big)T\,E\big(\varepsilon_t\varepsilon_t'\otimes\varepsilon_t\varepsilon_t'\big)
+\Big(1+\tfrac{1}{T^2}-\tfrac{2}{T}\Big)T(T-1)\operatorname{vec}\big(I_N\otimes\Omega^0\big)\operatorname{vec}\big(I_N\otimes\Omega^0\big)'\\
&+\tfrac{T-1}{T}\big(I_N\otimes\Omega^0\big)\otimes\big(I_N\otimes\Omega^0\big)
+\tfrac{T-1}{T}\big((I_N\otimes\Omega^0)\otimes(I_N\otimes\Omega^0)\big)K_{MN,MN},
\end{aligned}
\]
so that
\[
\tfrac{1}{NT}\sum_{t=1}^{T}\sum_{s=1}^{T}E\big(\tilde\varepsilon_t\tilde\varepsilon_s'\otimes\tilde\varepsilon_t\tilde\varepsilon_s'\big)
=\tfrac{1}{N}E\big(\varepsilon_t\varepsilon_t'\otimes\varepsilon_t\varepsilon_t'\big)
+\Big(\tfrac{T}{N}-\tfrac{3}{N}\Big)\operatorname{vec}\big(I_N\otimes\Omega^0\big)\operatorname{vec}\big(I_N\otimes\Omega^0\big)'
+O_p\!\Big(\tfrac1T\Big).
\]
Substituting back and collecting the \(\operatorname{vec}\)-terms,
\[
\tfrac{1}{NT}E\,D_\Omega\ln L(\theta^0)'D_\Omega\ln L(\theta^0)
=\tfrac{1}{4N}D_M'\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)J'
\Big[E\big(\varepsilon_t\varepsilon_t'\otimes\varepsilon_t\varepsilon_t'\big)
-\operatorname{vec}\big(I_N\otimes\Omega^0\big)\operatorname{vec}\big(I_N\otimes\Omega^0\big)'\Big]
J\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)D_M+O_p\!\Big(\tfrac1T\Big).
\]
When the \(\varepsilon_t\) are normally distributed,
\(E(\varepsilon_t\varepsilon_t'\otimes\varepsilon_t\varepsilon_t')
=2N_{MN}\big((I_N\otimes\Omega^0)\otimes(I_N\otimes\Omega^0)\big)
+\operatorname{vec}(I_N\otimes\Omega^0)\operatorname{vec}(I_N\otimes\Omega^0)'\),
where \(N_{MN}=\tfrac12\big(I_{M^2N^2}+K_{MN,MN}\big)\) is symmetric idempotent. Using \(J'K_{MN,MN}J=J'JK_{M,M}\) and \(K_{M,M}D_M=D_M\),
\[
\tfrac{1}{NT}E\,D_\Omega'D_\Omega
=\tfrac{1}{2N}D_M'\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)J'N_{MN}\big((I_N\otimes\Omega^0)\otimes(I_N\otimes\Omega^0)\big)J\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)D_M+O_p\!\Big(\tfrac1T\Big)
=\tfrac12 D_M'\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)D_M+O_p\!\Big(\tfrac1T\Big).
\]
Therefore
\(\tfrac{1}{NT}E\,D_\Omega'D_\Omega=\tfrac12 D_M'(\Omega^{0-1}\otimes\Omega^{0-1})D_M+\Delta^0_{\Omega\Omega}+O_p(1/T)\), where
\[
\Delta^0_{\Omega\Omega}=\tfrac{1}{4N}D_M'\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)J'\kappa^0 J\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)D_M,
\qquad
\kappa^0=E\big(\varepsilon_t\varepsilon_t'\otimes\varepsilon_t\varepsilon_t'\big)
-\operatorname{vec}\big(I_N\otimes\Omega^0\big)\operatorname{vec}\big(I_N\otimes\Omega^0\big)'
-2N_{MN}\big((I_N\otimes\Omega^0)\otimes(I_N\otimes\Omega^0)\big).
\]
When the \(\varepsilon_t\) are normally distributed, \(\kappa^0=0\) and \(\Delta^0_{\Omega\Omega}=0\); \(\Delta^0_{\Omega\Omega}\neq0\) when the \(\varepsilon_t\) are not normally distributed.
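The claim that \(\kappa^0\) vanishes exactly under normality, and not otherwise, can be checked deterministically in small dimension by computing \(E(\varepsilon\varepsilon'\otimes\varepsilon\varepsilon')\) entry by entry: under normality from Isserlis' theorem, and for a non-Gaussian comparison from the exact fourth moments of iid unit-variance Laplace components (\(Ex_i^4=6\)). A sketch (the dimension `m` and the Laplace choice are illustrative):

```python
import numpy as np

m = 3
rng = np.random.default_rng(7)
A0 = rng.standard_normal((m, m))
Sigma = A0 @ A0.T + m * np.eye(m)  # symmetric positive definite

def commutation(m):
    """K_{m,m} with K vec(A) = vec(A')."""
    K = np.zeros((m * m, m * m))
    for i in range(m):
        for j in range(m):
            K[i * m + j, j * m + i] = 1.0
    return K

def kron_fourth_moment(moment):
    """Assemble E[(xx') kron (xx')] from moment(i,j,k,l) = E[x_i x_j x_k x_l]."""
    T4 = np.zeros((m * m, m * m))
    for i in range(m):
        for j in range(m):
            for k in range(m):
                for l in range(m):
                    T4[i * m + k, j * m + l] = moment(i, j, k, l)
    return T4

K = commutation(m)

# Gaussian fourth moments via Isserlis' theorem
gauss = kron_fourth_moment(
    lambda i, j, k, l: Sigma[i, j] * Sigma[k, l] + Sigma[i, k] * Sigma[j, l] + Sigma[i, l] * Sigma[j, k]
)
vecS = Sigma.reshape(-1, 1, order="F")
kappa_gauss = gauss - vecS @ vecS.T - (np.eye(m * m) + K) @ np.kron(Sigma, Sigma)

# iid unit-variance Laplace components: E x_i^4 = 6, E x_i^2 x_j^2 = 1, odd moments 0
def laplace_moment(i, j, k, l):
    if i == j == k == l:
        return 6.0
    pairs = (i == j and k == l) + (i == k and j == l) + (i == l and j == k)
    return float(pairs)  # at most one pairing holds when the indices are not all equal

I = np.eye(m)
vecI = I.reshape(-1, 1, order="F")
kappa_laplace = kron_fourth_moment(laplace_moment) - vecI @ vecI.T - (np.eye(m * m) + K) @ np.kron(I, I)
```

Here `kappa_gauss` is exactly zero, while `kappa_laplace` has excess-kurtosis entries of 3 on the positions with all four indices equal, matching the text's claim that \(\Delta^0\neq0\) without normality.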
For the \((\Lambda,\Omega)\) block,
\[
\begin{aligned}
\tfrac{1}{NT}E\,D_\Lambda\ln L(\theta^0)'D_\Omega\ln L(\theta^0)
={}&\tfrac12\tfrac{1}{NT}\sum_{t,s}J'\big(WS_N^{-1}(\Lambda^0)\otimes(I_N\otimes\Omega^{0-1})\big)E\big(\tilde\varepsilon_t\tilde\varepsilon_s'\otimes\tilde\varepsilon_t\tilde\varepsilon_s'\big)J\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)D_M\\
&-\tfrac12\tfrac{1}{NT}\sum_{t,s}J'\big(WS_N^{-1}(\Lambda^0)\otimes(I_N\otimes\Omega^{0-1})\big)E\big(\tilde\varepsilon_t\otimes\tilde\varepsilon_t\big)\operatorname{vec}\big(I_N\otimes\Omega^0\big)'J\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)D_M\\
&-\tfrac12\tfrac{1}{NT}\sum_{t,s}J'\big(WS_N^{-1}(\Lambda^0)\otimes I_{MN}\big)\operatorname{vec}(I_{MN})E\big(\tilde\varepsilon_s'\otimes\tilde\varepsilon_s'\big)J\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)D_M\\
&+\tfrac12\tfrac{1}{NT}\sum_{t,s}J'\big(WS_N^{-1}(\Lambda^0)\otimes I_{MN}\big)\operatorname{vec}(I_{MN})\operatorname{vec}\big(I_N\otimes\Omega^0\big)'J\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)D_M.
\end{aligned}
\]
Since
\[
\big(WS_N^{-1}(\Lambda^0)\otimes I_{MN}\big)\operatorname{vec}(I_{MN})
=\big(WS_N^{-1}(\Lambda^0)\otimes(I_N\otimes\Omega^{0-1})\big)\big(I_{MN}\otimes(I_N\otimes\Omega^0)\big)\operatorname{vec}(I_{MN})
=\big(WS_N^{-1}(\Lambda^0)\otimes(I_N\otimes\Omega^{0-1})\big)\operatorname{vec}\big(I_N\otimes\Omega^0\big),
\]
it follows that
\[
\begin{aligned}
\tfrac{1}{NT}E\,D_\Lambda'D_\Omega
&=\tfrac{1}{2N}J'\big(WS_N^{-1}(\Lambda^0)\otimes(I_N\otimes\Omega^{0-1})\big)
\Big[E\big(\varepsilon_t\varepsilon_t'\otimes\varepsilon_t\varepsilon_t'\big)
-\operatorname{vec}\big(I_N\otimes\Omega^0\big)\operatorname{vec}\big(I_N\otimes\Omega^0\big)'\Big]
J\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)D_M+O_p\!\Big(\tfrac1T\Big)\\
&=\tfrac{1}{N}J'\big(WS_N^{-1}(\Lambda^0)\otimes(I_N\otimes\Omega^{0-1})\big)JD_M+\Delta^0_{\Lambda\Omega}+O_p\!\Big(\tfrac1T\Big),
\end{aligned}
\]
where \(\Delta^0_{\Lambda\Omega}=\tfrac{1}{2N}J'\big(WS_N^{-1}(\Lambda^0)\otimes(I_N\otimes\Omega^{0-1})\big)\kappa^0 J\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)D_M\), and \(\Delta^0_{\Lambda\Omega}=0\) when the \(\varepsilon_t\) are normally distributed.
By Lemma 4,
\[
\tfrac{1}{NT}\sum_{t=1}^{T}\sum_{s=1}^{T}E\big(C\tilde Z_t\tilde\varepsilon_s'\otimes\tilde\varepsilon_t\tilde\varepsilon_s'\big)
-\tfrac{1}{N}\sum_{t=1}^{T}E\big(C\tilde Z_t\otimes\tilde\varepsilon_t\big)\operatorname{vec}\big(I_N\otimes\Omega^0\big)'
=O_p\!\Big(\tfrac1T\Big),
\]
and hence, for the \((\gamma,\Omega)\) block,
\[
\tfrac{1}{NT}E\,D_\gamma\ln L(\theta^0)'D_\Omega\ln L(\theta^0)
=\tfrac12\tfrac{1}{NT}\sum_{t,s}J_1'\big(I_{MN(p+q+K)}\otimes S_N(\Lambda^0)'(I_N\otimes\Omega^{0-1})\big)
\Big[E\big(C\tilde Z_t\tilde\varepsilon_s'\otimes\tilde\varepsilon_t\tilde\varepsilon_s'\big)
-E\big(C\tilde Z_t\otimes\tilde\varepsilon_t\big)\operatorname{vec}\big(I_N\otimes\Omega^0\big)'\Big]
J\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)D_M
=O_p\!\Big(\tfrac1T\Big),
\]
with \(\Delta^0_{\gamma\Omega}=0\).

For the \((\Psi_0,\Omega)\) block, substituting \(W\tilde y_t=WS_N^{-1}(\Psi_0^0)(I_N\otimes\gamma^0)C\tilde Z_t+WS_N^{-1}(\Psi_0^0)S_N^{-1}(\Lambda^0)\tilde\varepsilon_t\) and applying Lemma 4 to the \(\tilde Z\)-terms,
\[
\tfrac{1}{NT}E\,D_{\Psi_0}\ln L(\theta^0)'D_\Omega\ln L(\theta^0)
=\tfrac12\tfrac{1}{NT}\sum_{t,s}J'\big(WS_N^{-1}(\Psi_0^0)S_N^{-1}(\Lambda^0)\otimes S_N(\Lambda^0)'(I_N\otimes\Omega^{0-1})\big)
E\big(\tilde\varepsilon_t\tilde\varepsilon_s'\otimes\tilde\varepsilon_t\tilde\varepsilon_s'\big)J\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)D_M
+\Big(\tfrac1N-\tfrac12\tfrac TN\Big)J'\operatorname{vec}\big(S_N^{-1}(\Psi_0^0)'W'\big)\operatorname{vec}\big(I_N\otimes\Omega^0\big)'J\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)D_M
+O_p\!\Big(\tfrac1T\Big),
\]
and, since
\[
\operatorname{vec}\big(S_N^{-1}(\Psi_0^0)'W'\big)
=\big(WS_N^{-1}(\Psi_0^0)S_N^{-1}(\Lambda^0)\otimes S_N(\Lambda^0)'(I_N\otimes\Omega^{0-1})\big)\operatorname{vec}\big(I_N\otimes\Omega^0\big),
\]
we obtain
\[
\tfrac{1}{NT}E\,D_{\Psi_0}'D_\Omega
=\tfrac1N J'\big(WS_N^{-1}(\Psi_0^0)S_N^{-1}(\Lambda^0)\otimes S_N(\Lambda^0)'(I_N\otimes\Omega^{0-1})\big)JD_M
+\Delta^0_{\Psi_0\Omega}+O_p\!\Big(\tfrac1T\Big),
\]
where \(\Delta^0_{\Psi_0\Omega}=\tfrac{1}{2N}J'\big(WS_N^{-1}(\Psi_0^0)S_N^{-1}(\Lambda^0)\otimes S_N(\Lambda^0)'(I_N\otimes\Omega^{0-1})\big)\kappa^0 J\big(\Omega^{0-1}\otimes\Omega^{0-1}\big)D_M\), and \(\Delta^0_{\Psi_0\Omega}=0\) when the \(\varepsilon_t\) are normally distributed.

For the \((\Lambda,\Lambda)\) block, the same expansion gives
\[
\tfrac{1}{NT}E\,D_\Lambda\ln L(\theta^0)'D_\Lambda\ln L(\theta^0)
=\tfrac1N J'\big(WS_N^{-1}(\Lambda^0)\otimes(I_N\otimes\Omega^{0-1})\big)
\Big[E\big(\varepsilon_t\varepsilon_t'\otimes\varepsilon_t\varepsilon_t'\big)
-\operatorname{vec}\big(I_N\otimes\Omega^0\big)\operatorname{vec}\big(I_N\otimes\Omega^0\big)'\Big]
\big(S_N^{-1}(\Lambda^0)'W'\otimes(I_N\otimes\Omega^{0-1})\big)J+O_p\!\Big(\tfrac1T\Big),
\]
using
\[
\big(WS_N^{-1}(\Lambda^0)\otimes I_{MN}\big)\operatorname{vec}(I_{MN})\operatorname{vec}(I_{MN})'\big(S_N^{-1}(\Lambda^0)'W'\otimes I_{MN}\big)
=\big(WS_N^{-1}(\Lambda^0)\otimes(I_N\otimes\Omega^{0-1})\big)\operatorname{vec}\big(I_N\otimes\Omega^0\big)\operatorname{vec}\big(I_N\otimes\Omega^0\big)'\big(S_N^{-1}(\Lambda^0)'W'\otimes(I_N\otimes\Omega^{0-1})\big).
\]
Substituting the normal fourth moment \(E(\varepsilon_t\varepsilon_t'\otimes\varepsilon_t\varepsilon_t')=2N_{MN}\big((I_N\otimes\Omega^0)\otimes(I_N\otimes\Omega^0)\big)+\operatorname{vec}(I_N\otimes\Omega^0)\operatorname{vec}(I_N\otimes\Omega^0)'\) and using \(J'K_{MN,MN}=K_{M,M}J'\),
\[
\tfrac{1}{NT}E\,D_\Lambda'D_\Lambda
=\tfrac1N K_{M,M}J'\big(S_N^{-1}(\Lambda^0)'W'\otimes WS_N^{-1}(\Lambda^0)\big)J
+\tfrac1N J'\big(W\Sigma_u^0W'\otimes(I_N\otimes\Omega^{0-1})\big)J
+\Delta^0_{\Lambda\Lambda}+O_p\!\Big(\tfrac1T\Big),
\]
where \(\Delta^0_{\Lambda\Lambda}=\tfrac1N J'\big(WS_N^{-1}(\Lambda^0)\otimes(I_N\otimes\Omega^{0-1})\big)\kappa^0\big(S_N^{-1}(\Lambda^0)'W'\otimes(I_N\otimes\Omega^{0-1})\big)J\).

By Lemma 4, for the \((\gamma,\Lambda)\) block,
\[
\tfrac{1}{NT}E\,D_\gamma\ln L(\theta^0)'D_\Lambda\ln L(\theta^0)
=\tfrac{1}{NT}\sum_{t,s}J_1'\big(I_{MN(p+q+K)}\otimes S_N(\Lambda^0)'(I_N\otimes\Omega^{0-1})\big)
E\Big[\big(C\tilde Z_t\otimes\tilde\varepsilon_t\big)\operatorname{vec}\big(\tilde\varepsilon_s\tilde\varepsilon_s'-(I_N\otimes\Omega^0)\big)'\Big]
\big(S_N^{-1}(\Lambda^0)'W'\otimes(I_N\otimes\Omega^{0-1})\big)J
=O_p\!\Big(\tfrac1T\Big),
\]
implying \(\Delta^0_{\gamma\Lambda}=0\).

For the \((\Psi_0,\Lambda)\) block, the substitution for \(W\tilde y_t\) together with Lemma 4 leaves only the \(\tilde\varepsilon\)-fourth-moment term and the \(\operatorname{vec}\)-correction:
\[
\tfrac{1}{NT}E\,D_{\Psi_0}\ln L(\theta^0)'D_\Lambda\ln L(\theta^0)
=\tfrac1N J'\big(WS_N^{-1}(\Psi_0^0)S_N^{-1}(\Lambda^0)\otimes S_N(\Lambda^0)'(I_N\otimes\Omega^{0-1})\big)
\Big[E\big(\varepsilon_t\varepsilon_t'\otimes\varepsilon_t\varepsilon_t'\big)
-\operatorname{vec}\big(I_N\otimes\Omega^0\big)\operatorname{vec}\big(I_N\otimes\Omega^0\big)'\Big]
\big(S_N^{-1}(\Lambda^0)'W'\otimes(I_N\otimes\Omega^{0-1})\big)J+O_p\!\Big(\tfrac1T\Big).
\]
Substituting the normal fourth moment and using
\[
\big(WS_N^{-1}(\Psi_0^0)S_N^{-1}(\Lambda^0)\otimes S_N(\Lambda^0)'(I_N\otimes\Omega^{0-1})\big)\big((I_N\otimes\Omega^0)\otimes(I_N\otimes\Omega^0)\big)\big(S_N^{-1}(\Lambda^0)'W'\otimes(I_N\otimes\Omega^{0-1})\big)
=WS_N^{-1}(\Psi_0^0)\Sigma_u^0W'\otimes S_N(\Lambda^0)'(I_N\otimes\Omega^{0-1})
\]
and
\[
J'\big(WS_N^{-1}(\Psi_0^0)S_N^{-1}(\Lambda^0)\otimes S_N(\Lambda^0)'(I_N\otimes\Omega^{0-1})\big)K_{MN,MN}\big((I_N\otimes\Omega^0)\otimes(I_N\otimes\Omega^0)\big)\big(S_N^{-1}(\Lambda^0)'W'\otimes(I_N\otimes\Omega^{0-1})\big)
=K_{M,M}J'\big(W'\otimes WS_N^{-1}(\Psi_0^0)S_N^{-1}(\Lambda^0)\big),
\]
we obtain
\[
\tfrac{1}{NT}E\,D_{\Psi_0}'D_\Lambda
=\tfrac1N J'\big(WS_N^{-1}(\Psi_0^0)\Sigma_u^0W'\otimes S_N(\Lambda^0)'(I_N\otimes\Omega^{0-1})\big)J
+\tfrac1N K_{M,M}J'\big(W'\otimes WS_N^{-1}(\Psi_0^0)S_N^{-1}(\Lambda^0)\big)J
+\Delta^0_{\Psi_0\Lambda}+O_p\!\Big(\tfrac1T\Big),
\]
where \(\Delta^0_{\Psi_0\Lambda}=\tfrac1N J'\big(WS_N^{-1}(\Psi_0^0)S_N^{-1}(\Lambda^0)\otimes S_N(\Lambda^0)'(I_N\otimes\Omega^{0-1})\big)\kappa^0\big(S_N^{-1}(\Lambda^0)'W'\otimes(I_N\otimes\Omega^{0-1})\big)J\), and \(\Delta^0_{\Psi_0\Lambda}=0\) when the \(\varepsilon_t\) are normally distributed.

For the \((\gamma,\gamma)\) block,
\[
\begin{aligned}
\tfrac{1}{NT}E\,D_\gamma\ln L(\theta^0)'D_\gamma\ln L(\theta^0)
&=\tfrac{1}{NT}\sum_{t,s}J_1'\big(I_{MN(p+q+K)}\otimes S_N(\Lambda^0)'(I_N\otimes\Omega^{0-1})\big)
E\big(C\tilde Z_t\tilde Z_s'C'\otimes\tilde\varepsilon_t\tilde\varepsilon_s'\big)
\big(I_{MN(p+q+K)}\otimes(I_N\otimes\Omega^{0-1})S_N(\Lambda^0)\big)J_1\\
&=\tfrac{1}{NT}\sum_{t=1}^{T}J_1'\big(CE(\tilde Z_t\tilde Z_t')C'\otimes S_N(\Lambda^0)'(I_N\otimes\Omega^{0-1})S_N(\Lambda^0)\big)J_1+O_p\!\Big(\tfrac1T\Big)
\qquad\text{(by Lemma 5)}\\
&=\tfrac{1}{NT}\sum_{t=1}^{T}J_1'\big(CE(\tilde Z_t\tilde Z_t')C'\otimes(\Sigma_u^0)^{-1}\big)J_1+\Delta^0_{\gamma\gamma}+O_p\!\Big(\tfrac1T\Big),
\qquad\Delta^0_{\gamma\gamma}=0.
\end{aligned}
\]

By Lemmas 4 and 5, for the \((\Psi_0,\gamma)\) block,
\[
\tfrac{1}{NT}E\,D_{\Psi_0}\ln L(\theta^0)'D_\gamma\ln L(\theta^0)
=\tfrac{1}{NT}\sum_{t=1}^{T}J'\big(WS_N^{-1}(\Psi_0^0)(I_N\otimes\gamma^0)CE(\tilde Z_t\tilde Z_t')C'\otimes(\Sigma_u^0)^{-1}\big)J_1
+\Delta^0_{\Psi_0\gamma}+O_p\!\Big(\tfrac1T\Big),
\qquad\Delta^0_{\Psi_0\gamma}=0.
\]

Finally, for the \((\Psi_0,\Psi_0)\) block, writing \(D_{\Psi_0}\ln L(\theta^0)\) in terms of \((C\tilde Z_t\otimes\tilde\varepsilon_t)\) and \(\operatorname{vec}\big(\tilde\varepsilon_t\tilde\varepsilon_t'-(I_N\otimes\Omega^0)\big)\), Lemmas 4 and 5 give
\[
\begin{aligned}
\tfrac{1}{NT}E\,D_{\Psi_0}\ln L(\theta^0)'D_{\Psi_0}\ln L(\theta^0)
={}&\tfrac{1}{NT}\sum_{t=1}^{T}J'\big(WS_N^{-1}(\Psi_0^0)(I_N\otimes\gamma^0)CE(\tilde Z_t\tilde Z_t')C'(I_N\otimes\gamma^{0\prime})S_N^{-1}(\Psi_0^0)'W'\otimes(\Sigma_u^0)^{-1}\big)J\\
&+\tfrac1N J'\big(WS_N^{-1}(\Psi_0^0)S_N^{-1}(\Lambda^0)\otimes S_N(\Lambda^0)'(I_N\otimes\Omega^{0-1})\big)
\Big[E\big(\varepsilon_t\varepsilon_t'\otimes\varepsilon_t\varepsilon_t'\big)
-\operatorname{vec}\big(I_N\otimes\Omega^0\big)\operatorname{vec}\big(I_N\otimes\Omega^0\big)'\Big]\\
&\qquad\times\big(S_N^{-1}(\Lambda^0)'S_N^{-1}(\Psi_0^0)'W'\otimes(I_N\otimes\Omega^{0-1})S_N(\Lambda^0)\big)J+O_p\!\Big(\tfrac1T\Big)\\
={}&\tfrac{1}{NT}\sum_{t=1}^{T}J'\big(WS_N^{-1}(\Psi_0^0)(I_N\otimes\gamma^0)CE(\tilde Z_t\tilde Z_t')C'(I_N\otimes\gamma^{0\prime})S_N^{-1}(\Psi_0^0)'W'\otimes(\Sigma_u^0)^{-1}\big)J\\
&+\tfrac1N K_{M,M}J'\big(S_N^{-1}(\Psi_0^0)'W'\otimes WS_N^{-1}(\Psi_0^0)\big)J
+\tfrac1N J'\big(W\Sigma_y^0W'\otimes(\Sigma_u^0)^{-1}\big)J
+\Delta^0_{\Psi_0\Psi_0}+O_p\!\Big(\tfrac1T\Big),
\end{aligned}
\]
where
\[
\Delta^0_{\Psi_0\Psi_0}=\tfrac1N J'\big(WS_N^{-1}(\Psi_0^0)S_N^{-1}(\Lambda^0)\otimes S_N(\Lambda^0)'(I_N\otimes\Omega^{0-1})\big)\kappa^0\big(S_N^{-1}(\Lambda^0)'S_N^{-1}(\Psi_0^0)'W'\otimes(I_N\otimes\Omega^{0-1})S_N(\Lambda^0)\big)J,
\]
and \(\Delta^0_{\Psi_0\Psi_0}=0\) when the \(\varepsilon_t\) are normally distributed.
Together, for \(\theta=(\operatorname{vech}(\Omega)',\operatorname{vec}(\Lambda)',\operatorname{vec}(\gamma)',\operatorname{vec}(\Psi_0)')'\),
\[
\frac{1}{NT}E\big[(D_\theta\ln L(\theta^0))'(D_\theta\ln L(\theta^0))\big]
=\mathcal I(\theta^0)+\Delta(\theta^0)+O_p\!\Big(\frac1T\Big),
\tag{A.11}
\]
with \(\Delta(\theta^0)\) and \(\kappa^0\) exactly as displayed in (A.9): \(\Delta^0_{\gamma\Omega}=\Delta^0_{\gamma\Lambda}=\Delta^0_{\gamma\gamma}=\Delta^0_{\Psi_0\gamma}=0\), while the remaining blocks are linear in
\(\kappa^0=E(\varepsilon_t\varepsilon_t'\otimes\varepsilon_t\varepsilon_t')-\operatorname{vec}(I_N\otimes\Omega^0)\operatorname{vec}(I_N\otimes\Omega^0)'-2N_{MN}\big((I_N\otimes\Omega^0)\otimes(I_N\otimes\Omega^0)\big)\)
and vanish exactly when the \(\varepsilon_t\) are normally distributed. In the limit,
\(\lim_{T\to\infty}E\big[\tfrac{1}{\sqrt{NT}}(D_\theta\ln L(\theta^0))'\tfrac{1}{\sqrt{NT}}(D_\theta\ln L(\theta^0))\big]=\mathcal I(\theta^0)+\Delta(\theta^0)\).

Nonsingularity of the limit of the information matrix: The limit \(\mathcal I^0\) is nonsingular if \(\mathcal I^0a=0\) implies \(a=0\), where, partitioning conformably with \(\theta\),
\[
\mathcal I^0=\mathcal I(\theta^0)=\lim_{T\to\infty}\mathcal I_{NT}(\theta^0)=
\begin{pmatrix}
\mathcal I^0_{\Omega\Omega}&\mathcal I^0_{\Omega\Lambda}&0&\mathcal I^0_{\Omega\Psi_0}\\
\mathcal I^0_{\Lambda\Omega}&\mathcal I^0_{\Lambda\Lambda}&0&\mathcal I^0_{\Lambda\Psi_0}\\
0&0&\mathcal I^0_{\gamma\gamma}&\mathcal I^0_{\gamma\Psi_0}\\
\mathcal I^0_{\Psi_0\Omega}&\mathcal I^0_{\Psi_0\Lambda}&\mathcal I^0_{\Psi_0\gamma}&\mathcal I^0_{\Psi_0\Psi_0}
\end{pmatrix},
\qquad a=(a_1',a_2',a_3',a_4')'.
\]
Writing out \(\mathcal I^0a=0\) block by block:
\[
\begin{aligned}
\mathcal I^0_{\Omega\Omega}a_1+\mathcal I^0_{\Omega\Lambda}a_2+\mathcal I^0_{\Omega\Psi_0}a_4&=0\qquad(1)\\
\mathcal I^0_{\Lambda\Omega}a_1+\mathcal I^0_{\Lambda\Lambda}a_2+\mathcal I^0_{\Lambda\Psi_0}a_4&=0\qquad(2)\\
\mathcal I^0_{\gamma\gamma}a_3+\mathcal I^0_{\gamma\Psi_0}a_4&=0\qquad(3)\\
\mathcal I^0_{\Psi_0\Omega}a_1+\mathcal I^0_{\Psi_0\Lambda}a_2+\mathcal I^0_{\Psi_0\gamma}a_3+\mathcal I^0_{\Psi_0\Psi_0}a_4&=0\qquad(4)
\end{aligned}
\]
From (3), \(a_3=-\big(\mathcal I^0_{\gamma\gamma}\big)^{-1}\mathcal I^0_{\gamma\Psi_0}a_4\) (5). Substituting (5) into (4),
\[
\mathcal I^0_{\Psi_0\Omega}a_1+\mathcal I^0_{\Psi_0\Lambda}a_2+\Big(\mathcal I^0_{\Psi_0\Psi_0}-\mathcal I^0_{\Psi_0\gamma}\big(\mathcal I^0_{\gamma\gamma}\big)^{-1}\mathcal I^0_{\gamma\Psi_0}\Big)a_4=0.\qquad(6)
\]
From (1), \(a_1=-\big(\mathcal I^0_{\Omega\Omega}\big)^{-1}\mathcal I^0_{\Omega\Lambda}a_2-\big(\mathcal I^0_{\Omega\Omega}\big)^{-1}\mathcal I^0_{\Omega\Psi_0}a_4\) (7). Substituting (7) into (2),
\[
a_2=-\Big(\mathcal I^0_{\Lambda\Lambda}-\mathcal I^0_{\Lambda\Omega}\big(\mathcal I^0_{\Omega\Omega}\big)^{-1}\mathcal I^0_{\Omega\Lambda}\Big)^{-1}\Big(\mathcal I^0_{\Lambda\Psi_0}-\mathcal I^0_{\Lambda\Omega}\big(\mathcal I^0_{\Omega\Omega}\big)^{-1}\mathcal I^0_{\Omega\Psi_0}\Big)a_4,\qquad(8)
\]
and substituting (8) back into (7),
\[
a_1=\Big[\big(\mathcal I^0_{\Omega\Omega}\big)^{-1}\mathcal I^0_{\Omega\Lambda}\Big(\mathcal I^0_{\Lambda\Lambda}-\mathcal I^0_{\Lambda\Omega}\big(\mathcal I^0_{\Omega\Omega}\big)^{-1}\mathcal I^0_{\Omega\Lambda}\Big)^{-1}\Big(\mathcal I^0_{\Lambda\Psi_0}-\mathcal I^0_{\Lambda\Omega}\big(\mathcal I^0_{\Omega\Omega}\big)^{-1}\mathcal I^0_{\Omega\Psi_0}\Big)-\big(\mathcal I^0_{\Omega\Omega}\big)^{-1}\mathcal I^0_{\Omega\Psi_0}\Big]a_4.\qquad(9)
\]
Substituting (8) and (9) into (6) yields \(\big(A_N-B_NC_N^{-1}B_N'\big)a_4=0\), where
\[
A_N=\mathcal I^0_{\Psi_0\Psi_0}-\mathcal I^0_{\Psi_0\gamma}\big(\mathcal I^0_{\gamma\gamma}\big)^{-1}\mathcal I^0_{\gamma\Psi_0}-\mathcal I^0_{\Psi_0\Omega}\big(\mathcal I^0_{\Omega\Omega}\big)^{-1}\mathcal I^0_{\Omega\Psi_0},\quad
B_N=\mathcal I^0_{\Psi_0\Lambda}-\mathcal I^0_{\Psi_0\Omega}\big(\mathcal I^0_{\Omega\Omega}\big)^{-1}\mathcal I^0_{\Omega\Lambda},\quad
C_N=\mathcal I^0_{\Lambda\Lambda}-\mathcal I^0_{\Lambda\Omega}\big(\mathcal I^0_{\Omega\Omega}\big)^{-1}\mathcal I^0_{\Omega\Lambda}.
\]
If
\[
A_N\neq B_NC_N^{-1}B_N',\qquad(10)
\]
then \(\mathcal I^0a=0\) implies \(a=0\). The condition \(A_N\neq B_NC_N^{-1}B_N'\) holds whenever \(\mathcal H_a\neq\mathcal H_b\), where
\[
\begin{aligned}
\mathcal H_a={}&J'\big(WS_N^{-1}(\Psi_0^0)(I_N\otimes\gamma^0)CE(\tilde Z_t\tilde Z_t')C'(I_N\otimes\gamma^{0\prime})S_N^{-1}(\Psi_0^0)'W'\otimes(\Sigma_u^0)^{-1}\big)J\\
&-J'\big(WS_N^{-1}(\Psi_0^0)(I_N\otimes\gamma^0)CE(\tilde Z_t\tilde Z_t')C'\otimes(\Sigma_u^0)^{-1}\big)J_1
\Big[J_1'\big(CE(\tilde Z_t\tilde Z_t')C'\otimes(\Sigma_u^0)^{-1}\big)J_1\Big]^{-1}
J_1'\big(CE(\tilde Z_t\tilde Z_t')C'(I_N\otimes\gamma^{0\prime})S_N^{-1}(\Psi_0^0)'W'\otimes(\Sigma_u^0)^{-1}\big)J,\\
\mathcal H_b={}&\big(J'\mathcal C_N\mathcal A_N\mathcal B_NJ\big)\big(J'\mathcal B_N'\mathcal A_N\mathcal B_NJ\big)^{-1}\big(J'\mathcal B_N'\mathcal A_N\mathcal C_N'J\big)-J'\mathcal C_N\mathcal A_N\mathcal C_N'J\\
={}&J'\mathcal C_N\mathcal A_N^{1/2}\Big[\mathcal A_N^{1/2}\mathcal B_NJ\big(J'\mathcal B_N'\mathcal A_N\mathcal B_NJ\big)^{-1}J'\mathcal B_N'\mathcal A_N^{1/2}-I_{M^2N^2}\Big]\mathcal A_N^{1/2}\mathcal C_N'J\ \le\ 0,
\end{aligned}
\]
with, inside \(\mathcal H_b\),
\[
\mathcal A_N=\big(I_{M^2N^2}+K_{MN,MN}\big)\big(I_{M^2N^2}-J(\Omega^0\otimes\Omega^0)J'\big),\qquad
\mathcal B_N=S_N^{-1}(\Lambda^0)'W'\otimes(I_N\otimes\Omega^{0-1}),\qquad
\mathcal C_N=WS_N^{-1}(\Psi_0^0)S_N^{-1}(\Lambda^0)\otimes S_N(\Lambda^0)'(I_N\otimes\Omega^{0-1}).
\]
Here \(\mathcal H_b\neq0\) if and only if \(\mathcal A_N\mathcal B_NJ\big(J'\mathcal B_N'\mathcal A_N\mathcal B_NJ\big)^{-1}J'\mathcal B_N'\mathcal A_N\neq\mathcal A_N\). Moreover \(\mathcal H_a\ge0\) and \(\mathcal H_b\le0\), and \(\mathcal H_a>0\) under the assumption that \(H=\frac{1}{NT}\sum_{t=1}^{T}E\,h_th_t'\) is nonsingular.
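Steps (1)–(9) above are the standard Schur-complement elimination, and condition (10) is the statement that the Schur complement \(A_N-B_NC_N^{-1}B_N'\) is nonsingular. The underlying fact is the block-determinant factorization \(\det\begin{psmallmatrix}C&B'\\B&A\end{psmallmatrix}=\det(C)\det(A-BC^{-1}B')\). A small numeric illustration with generic blocks (not the actual \(\mathcal I^0\) blocks):

```python
import numpy as np

rng = np.random.default_rng(3)
nC, nA = 4, 3  # illustrative block sizes

G = rng.standard_normal((nC, nC))
C = G @ G.T + nC * np.eye(nC)          # positive definite "C_N" block
B = rng.standard_normal((nA, nC))
F = rng.standard_normal((nA, nA))
A = B @ np.linalg.inv(C) @ B.T + F @ F.T + np.eye(nA)  # ensures Schur complement F F' + I is nonsingular

M = np.block([[C, B.T], [B, A]])
schur = A - B @ np.linalg.inv(C) @ B.T

# det factorization: det(M) = det(C) * det(A - B C^{-1} B')
lhs = np.linalg.det(M)
rhs = np.linalg.det(C) * np.linalg.det(schur)
```

With both `C` and `schur` nonsingular, `M` has full rank, so `M a = 0` forces `a = 0`, mirroring how (10) delivers `a4 = 0` and then, via (5), (8), (9), all of `a`.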
Therefore, either $H_a > 0$, or $H_b \neq 0$ when $H_a = 0$, implies nonsingularity of $I_0$.

A.5 Lemmas

Lemma.A.1: For any $MN \times MN$ non-stochastic matrices $M_{N1}$ and $M_{N2}$ that are UB in both the row and column sums, if the row and column sums of $\sum_{h=0}^{\infty} |\rho_{N1}^{h}|$ and $\sum_{h=0}^{\infty} |\rho_{N2}^{h}|$ are UB in $N$, then

\[ \sum_{h=-\infty}^{0} \Bigl| \sum_{t=1}^{T} \rho_{N1}^{t-h} \Bigr| \ \text{is UB} \tag{A.12} \]

\[ \frac{1}{T} \sum_{h=1}^{T} \Bigl| \sum_{t=1}^{T} \rho_{N1}^{t-h} \Bigr| \ \text{is UB} \tag{A.13} \]

\[ \sum_{h=-\infty}^{0} \operatorname{tr}\Bigl| \sum_{t=1}^{T} \rho_{N1}^{t-h} \Bigr| \, \operatorname{tr}\Bigl| \sum_{s=1}^{T} \rho_{N2}^{s-h} \Bigr| = O_p\bigl(N^2\bigr) \tag{A.14} \]

\[ \sum_{h=1}^{T} \operatorname{tr}\Bigl| \sum_{t=1}^{T} \rho_{N1}^{t-h} \Bigr| \, \operatorname{tr}\Bigl| \sum_{s=1}^{T} \rho_{N2}^{s-h} \Bigr| = O_p\bigl(N^2 T\bigr) \tag{A.15} \]

\[ \sum_{h=-\infty}^{0} \Bigl( \sum_{t=1}^{T} \rho_{N1}^{t-h} \Bigr) M_N \Bigl( \sum_{s=1}^{T} \rho_{N2}^{s-h} \Bigr) \ \text{is UB} \tag{A.16} \]

\[ \frac{1}{T} \sum_{h=1}^{T} \Bigl( \sum_{t=1}^{T} \rho_{N1}^{t-h} \Bigr) M_N \Bigl( \sum_{s=1}^{T} \rho_{N2}^{s-h} \Bigr) \ \text{is UB} \tag{A.17} \]

\[ \sum_{g=-\infty}^{0} \Bigl( \sum_{t=1}^{T} \rho_{N1}^{t-g} \Bigr) M_{1N} \Bigl( \sum_{s=1}^{T} \rho_{N2}^{s-g} \Bigr) M_{2N} \Bigl( \sum_{r=1}^{T} \rho_{N1}^{r-g} \Bigr) M_{3N} \Bigl( \sum_{l=1}^{T} \rho_{N2}^{l-g} \Bigr) \ \text{is UB} \tag{A.18} \]

\[ \frac{1}{T} \sum_{g=1}^{T} \Bigl( \sum_{t=1}^{T} \rho_{N1}^{t-g} \Bigr) M_{1N} \Bigl( \sum_{s=1}^{T} \rho_{N2}^{s-g} \Bigr) M_{2N} \Bigl( \sum_{r=1}^{T} \rho_{N1}^{r-g} \Bigr) M_{3N} \Bigl( \sum_{l=1}^{T} \rho_{N2}^{l-g} \Bigr) \ \text{is UB} \tag{A.19} \]

\[ \sum_{g=-\infty}^{0} \operatorname{tr}\Bigl( \Bigl( \sum_{t=1}^{T} \rho_{N1}^{t-g} \Bigr) M_{1N} \Bigl( \sum_{s=1}^{T} \rho_{N2}^{s-g} \Bigr) M_{2N} \Bigl( \sum_{r=1}^{T} \rho_{N1}^{r-g} \Bigr) M_{3N} \Bigl( \sum_{l=1}^{T} \rho_{N2}^{l-g} \Bigr) \Bigr) = O_p(N) \tag{A.20} \]

\[ \sum_{g=1}^{T} \operatorname{tr}\Bigl( \Bigl( \sum_{t=1}^{T} \rho_{N1}^{t-g} \Bigr) M_{1N} \Bigl( \sum_{s=1}^{T} \rho_{N2}^{s-g} \Bigr) M_{2N} \Bigl( \sum_{r=1}^{T} \rho_{N1}^{r-g} \Bigr) M_{3N} \Bigl( \sum_{l=1}^{T} \rho_{N2}^{l-g} \Bigr) \Bigr) = O_p(NT) \tag{A.21} \]

\[ \sum_{g=-\infty}^{0} \operatorname{tr}\Bigl( \Bigl( \sum_{t=1}^{T} \rho_{N1}^{t-g} \Bigr) M_{1N} \Bigl( \sum_{s=1}^{T} \rho_{N2}^{s-g} \Bigr) \Bigr) \operatorname{tr}\Bigl( \Bigl( \sum_{r=1}^{T} \rho_{N1}^{r-g} \Bigr) M_{2N} \Bigl( \sum_{l=1}^{T} \rho_{N2}^{l-g} \Bigr) \Bigr) = O_p\bigl(N^2\bigr) \tag{A.22} \]

\[ \sum_{g=1}^{T} \operatorname{tr}\Bigl( \Bigl( \sum_{t=1}^{T} \rho_{N1}^{t-g} \Bigr) M_{1N} \Bigl( \sum_{s=1}^{T} \rho_{N2}^{s-g} \Bigr) \Bigr) \operatorname{tr}\Bigl( \Bigl( \sum_{r=1}^{T} \rho_{N1}^{r-g} \Bigr) M_{2N} \Bigl( \sum_{l=1}^{T} \rho_{N2}^{l-g} \Bigr) \Bigr) = O_p\bigl(N^2 T\bigr) \tag{A.23} \]

and, writing $D_{Ni}^{t-h} = \rho_{Ni}^{t-h} - \frac{1}{T}\sum_{k=1}^{T}\rho_{Ni}^{k-h}$ for $i = 1,2$ as shorthand for the time-demeaned factors,

\[ \sum_{h=-\infty}^{0} \sum_{t=1}^{T} \sum_{s=1}^{T} \operatorname{tr}\Bigl( D_{N1}^{t-h} M_{1N} D_{N2}^{t-h}\, D_{N1}^{s-h} M_{2N} D_{N2}^{s-h} \Bigr) = O_p(N) \tag{A.24} \]

\[ \sum_{h=1}^{T} \sum_{t=1}^{T} \sum_{s=1}^{T} \operatorname{tr}\Bigl( D_{N1}^{t-h} M_{1N} D_{N2}^{t-h}\, D_{N1}^{s-h} M_{2N} D_{N2}^{s-h} \Bigr) = O_p(NT) \tag{A.25} \]

\[ \sum_{h=-\infty}^{0} \sum_{t=1}^{T} \sum_{s=1}^{T} \operatorname{tr}\Bigl( D_{N1}^{t-h} M_{1N} D_{N2}^{t-h} \Bigr) \operatorname{tr}\Bigl( D_{N1}^{s-h} M_{2N} D_{N2}^{s-h} \Bigr) = O_p\bigl(N^2\bigr) \tag{A.26} \]

\[ \sum_{h=1}^{T} \sum_{t=1}^{T} \sum_{s=1}^{T} \operatorname{tr}\Bigl( D_{N1}^{t-h} M_{1N} D_{N2}^{t-h} \Bigr) \operatorname{tr}\Bigl( D_{N1}^{s-h} M_{2N} D_{N2}^{s-h} \Bigr) = O_p\bigl(N^2 T\bigr) \tag{A.27} \]

Lemma.A.2: For any $MN \times MN$ non-stochastic matrix $M_N$, $MN \times MN(p+q+K)$ non-stochastic matrix $M^{\varepsilon z}_N$, and $MN(p+q+K) \times MN(p+q+K)$ non-stochastic matrix $M^{zz}_N$, where $M_N$, $M^{\varepsilon z}_N$ and $M^{zz}_N$ are uniformly bounded in both the row and the column sums, and for any $MN \times 1$ non-stochastic vector $b_{Nt}$ that is UB in $N$ and $t$, under Assumptions 1-7,

\[ \frac{1}{N}\bar\varepsilon' M_N \bar b - E\Bigl(\frac{1}{N}\bar\varepsilon' M_N \bar b\Bigr) = O_p\Bigl(\frac{1}{\sqrt{NT}}\Bigr) \]
\[ \frac{1}{N}\bar\varepsilon' M_N \bar\varepsilon - E\Bigl(\frac{1}{N}\bar\varepsilon' M_N \bar\varepsilon\Bigr) = O_p\Bigl(\frac{1}{\sqrt{NT^2}}\Bigr) \]
\[ \frac{1}{N}\bar y_{-j}' M_N \bar b - E\Bigl(\frac{1}{N}\bar y_{-j}' M_N \bar b\Bigr) = O_p\Bigl(\frac{1}{\sqrt{NT}}\Bigr), \quad \text{for } j = 1,\dots,p \]
\[ \frac{1}{N}\bar\varepsilon' M_N \bar y_{-j} - E\Bigl(\frac{1}{N}\bar\varepsilon' M_N \bar y_{-j}\Bigr) = O_p\Bigl(\frac{1}{\sqrt{NT}}\Bigr), \quad \text{for } j = 1,\dots,p \]
\[ \frac{1}{N}\bar y_{-i}' M_N \bar y_{-j} - E\Bigl(\frac{1}{N}\bar y_{-i}' M_N \bar y_{-j}\Bigr) = O_p\Bigl(\frac{1}{\sqrt{NT}}\Bigr), \quad \text{for } i,j = 1,\dots,p \]
\[ \frac{1}{N}\bar\varepsilon' M^{\varepsilon z}_N \bar Z - E\Bigl(\frac{1}{N}\bar\varepsilon' M^{\varepsilon z}_N \bar Z\Bigr) = O_p\Bigl(\frac{1}{\sqrt{NT}}\Bigr) \]
\[ \frac{1}{N}\bar Z' M^{zz}_N \bar Z - E\Bigl(\frac{1}{N}\bar Z' M^{zz}_N \bar Z\Bigr) = O_p\Bigl(\frac{1}{\sqrt{NT}}\Bigr) \]

where $E(\frac{1}{N}\bar\varepsilon' M_N \bar b) = 0$, $E(\frac{1}{N}\bar\varepsilon' M_N \bar\varepsilon) = O_p(\frac{1}{T})$, $E(\frac{1}{N}\bar y_{-j}' M_N \bar b) = O_p(1)$, $E(\frac{1}{N}\bar\varepsilon' M_N \bar y_{-j}) = O_p(\frac{1}{T})$, $E(\frac{1}{N}\bar y_{-i}' M_N \bar y_{-j}) = O_p(1)$, $E(\frac{1}{N}\bar\varepsilon' M^{\varepsilon z}_N \bar Z) = O_p(\frac{1}{T})$, and $E(\frac{1}{N}\bar Z' M^{zz}_N \bar Z) = O_p(1)$.

Lemma.A.3: For any $MN \times MN$ non-stochastic matrix $M_N$, $MN \times MN(p+q+K)$ non-stochastic matrix $M^{\varepsilon z}_N$, and $MN(p+q+K) \times MN(p+q+K)$ non-stochastic matrix $M^{zz}_N$, where $M_N$, $M^{\varepsilon z}_N$ and $M^{zz}_N$ are uniformly bounded in both the row and the column sums, and for any $MN \times 1$ non-stochastic UB vector $b_{Nt}$ in $N$ and $t$, under Assumptions 1-7,

\[ \frac{1}{NT}\sum_{t=1}^{T}\tilde\varepsilon_t' M_N \tilde b_{Nt} - E\Bigl(\frac{1}{NT}\sum_{t=1}^{T}\tilde\varepsilon_t' M_N \tilde b_{Nt}\Bigr) = O_p\Bigl(\frac{1}{\sqrt{NT}}\Bigr) \]
\[ \frac{1}{NT}\sum_{t=1}^{T}\tilde\varepsilon_t' M_N \tilde\varepsilon_t - E\Bigl(\frac{1}{NT}\sum_{t=1}^{T}\tilde\varepsilon_t' M_N \tilde\varepsilon_t\Bigr) = O_p\Bigl(\frac{1}{\sqrt{NT}}\Bigr) \]
\[ \frac{1}{NT}\sum_{t=1}^{T}\tilde b_{Nt}' M_N \tilde y_{t-j} - E\Bigl(\frac{1}{NT}\sum_{t=1}^{T}\tilde b_{Nt}' M_N \tilde y_{t-j}\Bigr) = O_p\Bigl(\frac{1}{\sqrt{NT}}\Bigr), \quad \text{for } j = 1,\dots,p \]
\[ \frac{1}{NT}\sum_{t=1}^{T}\tilde\varepsilon_t' M_N \tilde y_{t-j} - E\Bigl(\frac{1}{NT}\sum_{t=1}^{T}\tilde\varepsilon_t' M_N \tilde y_{t-j}\Bigr) = O_p\Bigl(\frac{1}{\sqrt{NT}}\Bigr), \quad \text{for } j = 1,\dots,p \]
\[ \frac{1}{NT}\sum_{t=1}^{T}\tilde y_{t-i}' M_N \tilde y_{t-j} - E\Bigl(\frac{1}{NT}\sum_{t=1}^{T}\tilde y_{t-i}' M_N \tilde y_{t-j}\Bigr) = O_p\Bigl(\frac{1}{\sqrt{NT}}\Bigr), \quad \text{for } i,j = 1,\dots,p \]
\[ \frac{1}{NT}\sum_{t=1}^{T}\tilde\varepsilon_t' M^{\varepsilon z}_N \tilde Z_t - E\Bigl(\frac{1}{NT}\sum_{t=1}^{T}\tilde\varepsilon_t' M^{\varepsilon z}_N \tilde Z_t\Bigr) = O_p\Bigl(\frac{1}{\sqrt{NT}}\Bigr) \]
\[ \frac{1}{NT}\sum_{t=1}^{T}\tilde Z_t' M^{zz}_N \tilde Z_t - E\Bigl(\frac{1}{NT}\sum_{t=1}^{T}\tilde Z_t' M^{zz}_N \tilde Z_t\Bigr) = O_p\Bigl(\frac{1}{\sqrt{NT}}\Bigr) \]

where $E(\frac{1}{NT}\sum_t\tilde\varepsilon_t' M_N \tilde b_{Nt}) = 0$, $\frac{1}{NT}\sum_t E(\tilde\varepsilon_t' M_N \tilde\varepsilon_t) = O_p(1)$, $E(\frac{1}{NT}\sum_t\tilde b_{Nt}' M_N \tilde y_{t-j}) = O_p(1)$, $\frac{1}{NT}\sum_t E(\tilde\varepsilon_t' M_N \tilde y_{t-j}) = O_p(\frac{1}{T})$, $\frac{1}{NT}\sum_t E(\tilde y_{t-i}' M_N \tilde y_{t-j}) = O_p(1)$, $E(\frac{1}{NT}\sum_t\tilde\varepsilon_t' M^{\varepsilon z}_N \tilde Z_t) = O_p(\frac{1}{T})$, and $E(\frac{1}{NT}\sum_t\tilde Z_t' M^{zz}_N \tilde Z_t) = O_p(1)$.

Lemma.A.4:
\[ \frac{1}{NT}\sum_{t=1}^{T}\sum_{s=1}^{T} E\bigl( C\tilde Z_t\tilde\varepsilon_s' \otimes \tilde\varepsilon_t\tilde\varepsilon_s' \bigr) - \frac{1}{N}\sum_{t=1}^{T} E\bigl( C\tilde Z_t \otimes \tilde\varepsilon_t \bigr)\,\operatorname{vec}\bigl(I_N\otimes\Omega^0\bigr)' = O_p\Bigl(\frac{1}{T}\Bigr) \]

Lemma.A.5:
\[ \frac{1}{NT}\sum_{t=1}^{T}\sum_{s=1}^{T} E\bigl( C\tilde Z_t\tilde Z_s'C' \otimes \tilde\varepsilon_t\tilde\varepsilon_s' \bigr) - \frac{1}{NT}\sum_{t=1}^{T} C E\bigl(\tilde Z_t\tilde Z_t'\bigr)C' \otimes \bigl(I_N\otimes\Omega^0\bigr) = O_p\Bigl(\frac{1}{T}\Bigr) \]

Lemma.A.6:
\[ \Bigl[-\frac{1}{NT}H_{\theta,NT}\ln L(\theta)\Bigr] - \Bigl[-\frac{1}{NT}H_{\theta,NT}\ln L(\theta^0)\Bigr] = \bigl\|\theta-\theta^0\bigr\|\,O_p(1) \tag{A.28} \]
\[ \Bigl[-\frac{1}{NT}H_{\theta,NT}\ln L(\theta^0)\Bigr] - E\Bigl[-\frac{1}{NT}H_{\theta,NT}\ln L(\theta^0)\Bigr] = O_p\Bigl(\frac{1}{\sqrt{NT}}\Bigr) \tag{A.29} \]
\[ \sup_{\theta\in\Theta}\Bigl|\Bigl[-\frac{1}{NT}H_{\theta,NT}\ln L(\theta) - E\Bigl(-\frac{1}{NT}H_{\theta,NT}\ln L(\theta)\Bigr)\Bigr]_{i,j}\Bigr| = O_p\Bigl(\frac{1}{\sqrt{NT}}\Bigr) \tag{A.30} \]
\[ \sup_{\theta\in N(\theta^0)}\Bigl|\Bigl[E\Bigl(-\frac{1}{NT}H_{\theta,NT}\ln L(\theta)\Bigr) - E\Bigl(-\frac{1}{NT}H_{\theta,NT}\ln L(\theta^0)\Bigr)\Bigr]_{i,j}\Bigr| = \sup_{\theta\in N(\theta^0)}\bigl\|\theta-\theta^0\bigr\|\,O_p(1) \tag{A.31} \]
for $i,j = 1,\dots,(2+p+q)M^2 + M(M+1)/2 + KM$.

Lemma.A.7 (d-dimensional martingale difference CLT): Let $\{\xi_{n,k}, \mathcal F_{n,k},\ 1\le k\le K_n,\ n\ge 1\}$, where $\xi_{n,k} = (\xi_{n,k,1},\dots,\xi_{n,k,d})'$, be a $d$-dimensional square-integrable martingale difference array. Assume that

(C1) $\sum_{k=1}^{K_n} E\bigl(\xi_{n,k,j}^2\,\mathbb{1}\{|\xi_{n,k,j}| > \epsilon\} \mid \mathcal F_{n,k-1}\bigr) \to_p 0$, as $n\to\infty$, for all $j = 1,\dots,d$ and $\epsilon > 0$;

(C2) $\sum_{k=1}^{K_n} E\bigl(\xi_{n,k}\xi_{n,k}' \mid \mathcal F_{n,k-1}\bigr) \to_p \Omega$, as $n\to\infty$, for some nonrandom matrix $\Omega$.

Then, in $\mathbb R^d$, as $n\to\infty$, $\sum_{k=1}^{K_n}\xi_{n,k} \to_d N_d(0,\Omega)$, a centered $d$-dimensional Gaussian vector with covariance matrix $\Omega$. A sufficient condition for (C1) is

(C1') $\sum_{k=1}^{K_n} E\bigl(|\xi_{n,k,j}|^{2+\delta}\bigr) = \sum_{k=1}^{K_n} E\bigl[E\bigl(|\xi_{n,k,j}|^{2+\delta} \mid \mathcal F_{n,k-1}\bigr)\bigr] \to_p 0$, for some $\delta > 0$, as $n\to\infty$, for all $j = 1,\dots,d$.
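Lemma A.7 is the standard multivariate martingale difference CLT. As a numerical illustration only (an addition for intuition, not part of the appendix), the following Python sketch simulates a scalar ($d = 1$) martingale difference array $\xi_{n,k} = z_k |z_{k-1}| / \sqrt{n}$ with i.i.d. standard normal $z_k$: the array is conditionally mean-zero, its summed conditional variances converge in probability to $\Omega = 1$, and (C1') holds with $\delta = 2$, so the sums should be approximately $N(0,1)$.

```python
import numpy as np

rng = np.random.default_rng(42)
n, R = 500, 2000          # array length and Monte-Carlo replications
sums = np.empty(R)
for r in range(R):
    z = rng.standard_normal(n + 1)
    # xi_k = z_k * |z_{k-1}| is a martingale difference with respect to
    # the natural filtration: E[xi_k | F_{k-1}] = |z_{k-1}| * E[z_k] = 0,
    # and (1/n) * sum_k E[xi_k^2 | F_{k-1}] = (1/n) * sum_k z_{k-1}^2 -> 1.
    xi = z[1:] * np.abs(z[:-1])
    sums[r] = xi.sum() / np.sqrt(n)

# The R normalized sums should behave like draws from N(0, 1).
print(sums.mean(), sums.var())
```

The sample mean of `sums` should be near 0 and its sample variance near 1; note that $\xi_{n,k}$ is serially uncorrelated but not independent, which is exactly the situation the lemma covers.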
Lemma.A.8: If the sequence $\Sigma_{Q,NT}$ is bounded away from zero, then as $n\to\infty$, $Q_{NT}\to_d N(0,\Sigma_Q)$, where

\[ Q_{NT} = \frac{1}{\sqrt{NT}}\sum_{t=1}^{T}J'\operatorname{vec}\bigl(U_tV_t'-\Sigma\bigr) + \frac{1}{\sqrt{NT}}\sum_{t=1}^{T}J'\operatorname{vec}\bigl(U_tb_t'\bigr) + \frac{1}{\sqrt{NT}}\sum_{t=1}^{T}J'\operatorname{vec}\bigl(U_tH_{t-1}'\bigr) \]

\[ \Sigma_{Q,NT} = E\bigl(Q_{NT}Q_{NT}'\bigr) = \frac{1}{NT}\sum_{t=1}^{T}\sum_{j=1}^{N}\sum_{k=1}^{N}\Bigl[ (B_{j\bullet}\otimes M_{j\bullet})\bigl(E(\varepsilon_t\varepsilon_t'\otimes\varepsilon_t\varepsilon_t') - \operatorname{vec}(I_N\otimes\Omega^0)\operatorname{vec}(I_N\otimes\Omega^0)'\bigr)(B_{k\bullet}'\otimes M_{k\bullet}') + b_{jt}b_{kt}'\otimes M_{j\bullet}(I_N\otimes\Omega^0)M_{k\bullet}' + \sum_{g=0}^{\infty}A_{g,j\bullet}(I_N\otimes\Omega^0)A_{g,k\bullet}' \otimes M_{j\bullet}(I_N\otimes\Omega^0)M_{k\bullet}' \Bigr] \to_p \Sigma_Q \]

and, for some $MN\times MN$ UB matrices $M$, $B$, $H$: $U_t = M\varepsilon_t$, $U_{vt} = M_{v\bullet}\varepsilon_t$, $V_t = B\varepsilon_t$, $V_{vt} = B_{v\bullet}\varepsilon_t$, $H_{t-1} = \sum_{g=0}^{\infty}A_g\varepsilon_{t-1-g}$ with $A_g = HRA^gR'\,S_N^{-1}(\Psi^0)\,S_N^{-1}(\Lambda^0)$ of dimension $MN\times MN$, $H_{vt-1} = \sum_{g=0}^{\infty}A_{g,v\bullet}\varepsilon_{t-1-g} = \sum_{j=1}^{N}\sum_{g=0}^{\infty}A_{g,vj}\varepsilon_{j,t-1-g}$, $\Sigma = M(I_N\otimes\Omega^0)B'$, $\Sigma_{vv} = M_{v\bullet}(I_N\otimes\Omega^0)B_{v\bullet}'$; $b_t$ is an $MN\times 1$ non-stochastic UB vector, with $b_{vt}$ denoting elements $(v-1)M+1$ to $vM$ of $b_t$, and $A_{v\bullet}$ denoting rows $(v-1)M+1$ to $vM$ of an $MN\times MN$ matrix $A$.

Lemma.A.9: If $\sup_{N\ge1}\|A(\theta^0)\|_\infty < 1$ and $\sup_{N\ge1}\|A(\theta^0)\|_1 < 1$, then the row and column sums of $\sum_{h=0}^{\infty}A(\theta)^h$ and $\sum_{h=0}^{\infty}h\,A(\theta)^{h-1}$ are bounded uniformly in $N$ and in a neighbourhood of $\theta^0$.

Proof of Lemma.A.1:
\[ \Bigl| \sum_{h=-\infty}^{0}\sum_{t=1}^{T}\rho_N^{t-h} \Bigr| \le \Bigl(\sum_{t=1}^{T}\bigl|\rho_N^{t}\bigr|\Bigr)\Bigl(\sum_{h=0}^{\infty}\bigl|\rho_N^{h}\bigr|\Bigr) \le \Bigl(\sum_{h=0}^{\infty}\bigl|\rho_N^{h}\bigr|\Bigr)^2 < \infty. \]
Therefore (A.12) $= \sum_{h=-\infty}^{0}\sum_{t=1}^{T}\rho_N^{t-h}$ is UB.
\[ \frac{1}{T}\sum_{h=1}^{T}\Bigl|\sum_{t=1}^{T}\rho_N^{t-h}\Bigr| \le \frac{1}{T}\bigl|\rho_N^{0}+\rho_N^{1}+\cdots+\rho_N^{T-2}+\rho_N^{T-1}\bigr| + \cdots + \frac{1}{T}\bigl|\rho_N^{0}+\rho_N^{1}\bigr| + \frac{1}{T}\bigl|\rho_N^{0}\bigr| \le \sum_{h=0}^{T-1}\bigl|\rho_N^{h}\bigr| \le \sum_{h=0}^{\infty}\bigl|\rho_N^{h}\bigr| < \infty. \]
Therefore (A.13) $= \frac{1}{T}\sum_{h=1}^{T}\sum_{t=1}^{T}\rho_N^{t-h}$ is UB.
\[ \text{(A.14)} \le \operatorname{tr}\Bigl(\Bigl(\sum_{t=1}^{T}\bigl|\rho_N^{t}\bigr|\Bigr)\Bigl(\sum_{h=0}^{\infty}\bigl|\rho_N^{h}\bigr|\Bigr)\Bigr)\operatorname{tr}\Bigl(\Bigl(\sum_{s=1}^{T}\bigl|\rho_N^{s}\bigr|\Bigr)\Bigl(\sum_{g=0}^{\infty}\bigl|\rho_N^{g}\bigr|\Bigr)\Bigr) \le \operatorname{tr}\Bigl(\Bigl(\sum_{h=0}^{\infty}\bigl|\rho_N^{h}\bigr|\Bigr)^2\Bigr)\operatorname{tr}\Bigl(\Bigl(\sum_{g=0}^{\infty}\bigl|\rho_N^{g}\bigr|\Bigr)^2\Bigr) = O_p\bigl(N^2\bigr). \]
\[ \text{(A.15)} \le T\,\operatorname{tr}\Bigl(\sum_{h=0}^{T-1}\bigl|\rho_{N1}^{h}\bigr|\Bigr)\operatorname{tr}\Bigl(\sum_{g=0}^{T-1}\bigl|\rho_{N2}^{g}\bigr|\Bigr) \le T\,\operatorname{tr}\Bigl(\sum_{h=0}^{\infty}\bigl|\rho_{N1}^{h}\bigr|\Bigr)\operatorname{tr}\Bigl(\sum_{g=0}^{\infty}\bigl|\rho_{N2}^{g}\bigr|\Bigr) = O_p\bigl(N^2 T\bigr). \]
\[ \Bigl| \sum_{h=-\infty}^{0}\Bigl(\sum_{t=1}^{T}\rho_{N1}^{t-h}\Bigr)M_N\Bigl(\sum_{s=1}^{T}\rho_{N2}^{s-h}\Bigr) \Bigr| \le \Bigl(\sum_{h=0}^{\infty}\bigl|\rho_{N1}^{h}\bigr|\Bigr)^2\|M_N\|\Bigl(\sum_{g=0}^{\infty}\bigl|\rho_{N2}^{g}\bigr|\Bigr)^2 < \infty, \]
so (A.16) is UB, and the same bound applied to the average over $h = 1,\dots,T$ shows that (A.17) is UB. Bounding each of the four bracketed factors by the corresponding infinite series gives
\[ \text{(A.18)} \le \Bigl(\sum_{h=0}^{\infty}\bigl|\rho_{N1}^{h}\bigr|\Bigr)^2\|M_{1N}\|\Bigl(\sum_{g=0}^{\infty}\bigl|\rho_{N2}^{g}\bigr|\Bigr)^2\|M_{2N}\|\Bigl(\sum_{k=0}^{\infty}\bigl|\rho_{N1}^{k}\bigr|\Bigr)^2\|M_{3N}\|\Bigl(\sum_{v=0}^{\infty}\bigl|\rho_{N2}^{v}\bigr|\Bigr)^2 < \infty, \]
and similarly (A.19) is UB. Taking traces of the same bounds yields (A.20) $= O_p(N)$ and (A.21) $= O_p(NT)$, while applying them to each trace factor separately yields (A.22) $= O_p(N^2)$ and (A.23) $= O_p(N^2 T)$. For (A.24)-(A.27), expand each demeaned factor $\rho_{Ni}^{t-h} - \frac{1}{T}\sum_{k=1}^{T}\rho_{Ni}^{k-h}$; the leading terms are bounded as above, and the averaged terms contribute an extra factor $\frac{1}{T^2}$ times products of the series $\sum_{h=0}^{\infty}|\rho_{Ni}^{h}|$, so that (A.24) $= O_p(N)$, (A.25) $= O_p(NT) + O_p(N/T) = O_p(NT)$, (A.26) $= O_p(N^2) + O_p(N^2/T^2) = O_p(N^2)$, and (A.27) $= O_p(N^2 T)$.

Proof of Lemma.A.2: Denote the maximum element of $b_{Nt}$ by $\sup_t|b_{Nt}| = O_p(1)$ and write $\bar b = \frac{1}{T}\sum_{t=1}^{T}b_{Nt}$. Then $E(\varepsilon_t'b_{Ns}) = 0$ for all $t,s = 1,\dots,T$, and $E(\frac{1}{N}\bar\varepsilon'M_N\bar b) = 0$.
\[ E\Bigl(\frac{1}{N}\bar\varepsilon'M_N\bar\varepsilon\Bigr) = \frac{1}{NT^2}\sum_{t=1}^{T}\sum_{s=1}^{T}E\bigl(\varepsilon_t'M_N\varepsilon_s\bigr) = \frac{1}{NT}\operatorname{tr}\bigl(M_N E(\varepsilon_t\varepsilon_t')\bigr) = \frac{1}{NT}\operatorname{tr}\bigl(M_N(I_N\otimes\Omega^0)\bigr) = O_p\Bigl(\frac{1}{T}\Bigr). \]
For $j = 1,\dots,p$, by UB,
\[ E\Bigl(\frac{1}{N}\bar y_{-j}'M_N\bar b\Bigr) = \frac{1}{N}\bigl(A^{0\prime}M_N\bar b + \bar A^{x\prime}_{-j}M_N\bar b\bigr) \le \frac{1}{N}MN\bigl(a^0+a^x\bigr)\sup_t|b_{Nt}|\sup_{1\le j\le MN}\sum_{k=1}^{MN}\bigl|M_{N,jk}\bigr| = M\bigl(a^0+a^x\bigr)\sup_t|b_{Nt}|\,\|M_N\|_\infty = O_p(1). \]
For $j = 1,\dots,p$,
\[ E\Bigl(\frac{1}{N}\bar\varepsilon'M_N\bar y_{-j}\Bigr) = \frac{1}{NT^2}\sum_{t=1}^{T}\sum_{s=1}^{T}E\bigl(\varepsilon_t'M_Ny_{s-j}\bigr) \tag{A.32} \]
\[ = \frac{1}{NT^2}\sum_{s=1-j}^{T-j}\sum_{t=1}^{s}\operatorname{tr}\bigl(M_N E(y_s\varepsilon_t')\bigr) = \frac{1}{NT^2}\operatorname{tr}\Bigl(M_NR\Bigl(\sum_{s=1-j}^{T-j}\sum_{t=1}^{s}A^{s-t}\Bigr)R'\,S_N^{-1}(\Psi^0)\,S_N^{-1}(\Lambda^0)\,(I_N\otimes\Omega^0)\Bigr) \le \frac{1}{NT^2}\operatorname{tr}\Bigl(M_NR\,T\Bigl(\sum_{h=0}^{\infty}\bigl|A^h\bigr|\Bigr)R'\,S_N^{-1}(\Psi^0)\,S_N^{-1}(\Lambda^0)\,(I_N\otimes\Omega^0)\Bigr) = O_p\Bigl(\frac{1}{T}\Bigr). \]
For $i,j = 1,\dots,p$, without loss of generality assume that $\max(i,j) = j$ and $j = i+r$ with $r = 0,1,\dots$:
\[ E\Bigl(\frac{1}{N}\bar y_{-i}'M_N\bar y_{-j}\Bigr) = \frac{1}{N}\bigl(A^0+\bar A^x_{-i}\bigr)'M_N\bigl(A^0+\bar A^x_{-j}\bigr) + \frac{1}{N}\sum_{h=-\infty}^{T-j}\operatorname{tr}\bigl(\bar A^{\varepsilon\prime}_{-i,h}M_N\bar A^{\varepsilon}_{-j,h}E(\varepsilon_h\varepsilon_h')\bigr) \]
\[ \le \frac{1}{N}MN\bigl(a^0+a^x\bigr)^2\|M_N\|_\infty + \frac{1}{NT^2}\sum_{h=-\infty}^{T}\operatorname{tr}\Bigl(\Bigl(\sum_{t=1}^{T}A^{t-h}\Bigr)'A^{r\prime}R'M_NR\Bigl(\sum_{s=1}^{T}A^{s-h}\Bigr)R'\Sigma^0_yR\Bigr) = O_p(1) + O_p\Bigl(\frac{1}{T}\Bigr) = O_p(1), \]
by (A.16) and (A.17), where $\Sigma^0_y = S_N^{-1}(\Psi^0)\,S_N^{-1}(\Lambda^0)\,(I_N\otimes\Omega^0)\,S_N^{-1}(\Lambda^0)'\,S_N^{-1}(\Psi^0)'$.
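Before the variance bounds, a quick numerical illustration (an addition for intuition, not part of the proof) of the rates in Lemmas A.2 and A.3: with the simplest UB choice $M_N = I_N$ ($M = 1$) and i.i.d. standard normal errors, the Monte-Carlo standard deviation of the sample quadratic form $\frac{1}{NT}\sum_t\varepsilon_t'\varepsilon_t$ around its expectation should scale like $1/\sqrt{NT}$, so growing $NT$ by a factor of 16 should shrink it by a factor of about 4.

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_sd(N, T, reps=400):
    # Monte-Carlo standard deviation of (1/NT) * sum_t eps_t' M_N eps_t
    # with M_N = I_N, so the statistic is just the grand mean of eps^2.
    vals = np.empty(reps)
    for r in range(reps):
        eps = rng.standard_normal((T, N))
        vals[r] = (eps ** 2).sum() / (N * T)
    return vals.std()

sd_small = mc_sd(50, 50)      # NT = 2,500
sd_big = mc_sd(200, 200)      # NT = 40,000, i.e. 16 times larger
ratio = sd_small / sd_big     # the O_p(1/sqrt(NT)) rate predicts about 4
print(ratio)
```

Here the exact variance is $2/(NT)$, so the ratio of standard deviations is $\sqrt{16} = 4$ up to Monte-Carlo noise; the same $1/\sqrt{NT}$ scaling is what the variance bounds below establish for general UB $M_N$ and dependent $y_{t-j}$.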
$I_N\otimes\Omega^0$ is UB, because $\Omega^0$ is finite by assumption and
\[ \sup_{N\ge1}\bigl\|I_N\otimes\Omega^0\bigr\|_1 = \sup_{N\ge1}\bigl\|I_N\otimes\Omega^0\bigr\|_\infty = \sup_{N\ge1}\sup_{1\le i\le MN}\sum_{j=1}^{MN}\bigl|(I_N\otimes\Omega^0)_{ij}\bigr| = \bigl\|\Omega^0\bigr\|_\infty < \infty. \]

\[ \operatorname{var}\Bigl(\frac{1}{N}\bar\varepsilon'M_N\bar b\Bigr) = \frac{1}{N^2T}\bar b'M_N'\bigl(I_N\otimes\Omega^0\bigr)M_N\bar b \le \frac{1}{N^2T}MN\sup_t|b_{Nt}|^2\,\|M_N\|_1\bigl\|\Omega^0\bigr\|_\infty\|M_N\|_\infty = O_p\Bigl(\frac{1}{NT}\Bigr). \]

Expanding $\frac{1}{N^2}E(\bar\varepsilon'M_N\bar\varepsilon)^2$ over the time indices and subtracting $\frac{1}{N^2}[E(\bar\varepsilon'M_N\bar\varepsilon)]^2$,
\[ \operatorname{var}\Bigl(\frac{1}{N}\bar\varepsilon'M_N\bar\varepsilon\Bigr) = \frac{1}{N^2T^4}\sum_{t=1}^{T}\operatorname{var}\bigl(\varepsilon_t'M_N\varepsilon_t\bigr) + \frac{T-1}{N^2T^3}\operatorname{tr}\bigl((M_N+M_N')(I_N\otimes\Omega^0)M_N(I_N\otimes\Omega^0)\bigr) = O_p\Bigl(\frac{1}{NT^2}\Bigr), \]
because
\[ \operatorname{var}\bigl(\varepsilon_t'M_N\varepsilon_t\bigr) = \sum_{i=1}^{N}\operatorname{vec}\bigl(M_{ii}'\bigr)'E\bigl(\varepsilon_{it}\varepsilon_{it}'\otimes\varepsilon_{it}\varepsilon_{it}'\bigr)\operatorname{vec}\bigl(M_{ii}\bigr) + \operatorname{tr}\bigl(M(I_N\otimes\Omega^0)(M'+M)(I_N\otimes\Omega^0)\bigr) = O_p(N), \]
where $M_{ii}$ denotes the $i$-th $M\times M$ diagonal block of $M$.

\[ \operatorname{var}\Bigl(\frac{1}{N}\bar y_{-j}'M_N\bar b\Bigr) = \frac{1}{N^2}\bar b'M_N'\Bigl(\sum_{h=-\infty}^{T-j}\bar A^{\varepsilon}_{-j,h}E(\varepsilon_h\varepsilon_h')\bar A^{\varepsilon\prime}_{-j,h}\Bigr)M_N\bar b = O_p\Bigl(\frac{1}{NT^2}\Bigr) + O_p\Bigl(\frac{1}{NT}\Bigr) = O_p\Bigl(\frac{1}{NT}\Bigr), \]
by (A.16) and (A.17).

For $\operatorname{var}(\frac{1}{N}\bar\varepsilon'M_N\bar y_{-j})$, expanding the product of the two bilinear forms (the expansion (A.33)) leaves the terms labelled (A.34)-(A.39):
\[ \text{(A.34)} = \frac{1}{N^2T}\bigl(A^0+\bar A^x_{-j}\bigr)'M_N'\bigl(I_N\otimes\Omega^0\bigr)M_N\bigl(A^0+\bar A^x_{-j}\bigr) \le \frac{1}{N^2T}\bigl(a^0+a^x\bigr)^2MN\|M_N\|_1\bigl\|\Omega^0\bigr\|_\infty\|M_N\|_\infty = O_p\Bigl(\frac{1}{NT}\Bigr), \]
\[ \text{(A.35)} = \frac{1}{N^2T^2}\sum_{t=1}^{T-j}\operatorname{tr}E\bigl(M_N\bar A^{\varepsilon}_{-j,t}\varepsilon_t\varepsilon_t'\bar A^{\varepsilon\prime}_{-j,t}M_N'\varepsilon_t\varepsilon_t'\bigr) = O_p\Bigl(\frac{1}{NT^3}\Bigr), \ \text{by (A.17)}, \]
\[ \text{(A.36)} = \frac{1}{N^2T^2}\sum_{t=1}^{T-j}\sum_{s\ne t}\operatorname{tr}\bigl(M_N\bar A^{\varepsilon}_{-j,s}(I_N\otimes\Omega^0)M_N\bar A^{\varepsilon}_{-j,t}(I_N\otimes\Omega^0)\bigr) = O_p\Bigl(\frac{1}{NT^3}\Bigr), \ \text{by (A.17)}, \]
\[ \text{(A.37)} = \frac{T-1}{N^2T^2}\sum_{h=1-j}^{T-j}\operatorname{tr}\bigl(M_N\bar A^{\varepsilon}_{-j,h}(I_N\otimes\Omega^0)\bar A^{\varepsilon\prime}_{-j,h}M_N'(I_N\otimes\Omega^0)\bigr) = O_p\Bigl(\frac{1}{NT^2}\Bigr), \ \text{by (A.17)}, \]
\[ \text{(A.38)} = \frac{1}{N^2T}\sum_{h=-\infty}^{-j}\operatorname{tr}\bigl(M_N\bar A^{\varepsilon}_{-j,h}(I_N\otimes\Omega^0)\bar A^{\varepsilon\prime}_{-j,h}M_N'(I_N\otimes\Omega^0)\bigr) = O_p\Bigl(\frac{1}{NT^3}\Bigr), \ \text{by (A.16)}, \]
\[ \text{(A.39)} = \frac{1}{N^2T^2}\sum_{t=1}^{T-j}\operatorname{tr}\bigl(M_N\bar A^{\varepsilon}_{-j,t}(I_N\otimes\Omega^0)\bigr)\operatorname{tr}\bigl(\bar A^{\varepsilon\prime}_{-j,t}M_N'(I_N\otimes\Omega^0)\bigr) = O_p\Bigl(\frac{1}{T^3}\Bigr), \ \text{by (A.15)}. \]
Together these imply that $\operatorname{var}(\frac{1}{N}\bar\varepsilon'M_N\bar y_{-j}) = O_p(\frac{1}{NT})$.

For $\operatorname{var}(\frac{1}{N}\bar y_{-i}'M_N\bar y_{-j})$, the analogous expansion gives the terms (A.40)-(A.46): the three terms (A.40)-(A.42), which couple $A^0+\bar A^x$ with the $\bar A^{\varepsilon}$ components, are each $O_p(\frac{1}{NT^2}) + O_p(\frac{1}{NT}) = O_p(\frac{1}{NT})$ by (A.16) and (A.17); the fourth-moment term (A.43) is $O_p(\frac{1}{NT^3})$ by (A.20) and (A.21); the cross-period trace terms (A.44) and (A.45) are each $O_p(\frac{1}{NT^3})$; and the product-of-traces term (A.46) is $O_p(\frac{1}{T^3})$ by (A.22) and (A.23). Together, $\operatorname{var}(\frac{1}{N}\bar y_{-i}'M_N\bar y_{-j}) = O_p(\frac{1}{NT})$.

$\bar Z = \bigl(\bar y_{-1}',\dots,\bar y_{-p}',(W\bar y_{-1})',\dots,(W\bar y_{-q})',\bar X'\bigr)'$. Label $\bar Z_1 = \bar y_{-1},\dots,\bar Z_p = \bar y_{-p}$, $\bar Z_{p+1} = W\bar y_{-1},\dots,\bar Z_{p+q} = W\bar y_{-q}$, $\bar Z_{p+q+1} = \bar X$, and partition $M^{\varepsilon z}_N = \bigl(M^{\varepsilon z}_1,\dots,M^{\varepsilon z}_{p+q+1}\bigr)$ and $M^{zz}_N = \bigl(M^{zz}_{jk}\bigr)_{j,k=1}^{p+q+1}$ accordingly, so that
\[ \frac{1}{N}\bar\varepsilon'M^{\varepsilon z}_N\bar Z = \frac{1}{N}\sum_{j=1}^{p+q+1}\bar\varepsilon'M^{\varepsilon z}_j\bar Z_j, \qquad \frac{1}{N}\bar Z'M^{zz}_N\bar Z = \frac{1}{N}\sum_{j=1}^{p+q+1}\sum_{k=1}^{p+q+1}\bar Z_j'M^{zz}_{jk}\bar Z_k. \]
Each $M^{\varepsilon z}_{N,j}$ and $M^{zz}_{N,jk}$ is UB, and then following from the previous results, $E(\frac{1}{N}\bar\varepsilon'M^{\varepsilon z}_N\bar Z) = O_p(\frac{1}{T})$, $E(\frac{1}{N}\bar Z'M^{zz}_N\bar Z) = O_p(1)$, $\frac{1}{N}\bar\varepsilon'M^{\varepsilon z}_N\bar Z - E(\frac{1}{N}\bar\varepsilon'M^{\varepsilon z}_N\bar Z) = O_p(\frac{1}{\sqrt{NT}})$, and $\frac{1}{N}\bar Z'M^{zz}_N\bar Z - E(\frac{1}{N}\bar Z'M^{zz}_N\bar Z) = O_p(\frac{1}{\sqrt{NT}})$.

Proof of Lemma.A.3:
\[ E\Bigl(\frac{1}{NT}\sum_{t=1}^{T}\tilde\varepsilon_t'M_N\tilde b_{Nt}\Bigr) = 0, \]
\[ \operatorname{var}\Bigl(\frac{1}{NT}\sum_{t=1}^{T}\tilde\varepsilon_t'M_N\tilde b_{Nt}\Bigr) = \frac{T-1}{N^2T^3}\sum_{t=1}^{T}\tilde b_{Nt}'M_N'\bigl(I_N\otimes\Omega^0\bigr)M_N\tilde b_{Nt} \le \frac{T-1}{N^2T^3}TMN\sup_t|b_{Nt}|^2\|M_N\|_1\bigl\|\Omega^0\bigr\|_\infty\|M_N\|_\infty = O_p\Bigl(\frac{1}{NT}\Bigr). \]
\[ E\Bigl(\frac{1}{NT}\sum_{t=1}^{T}\tilde\varepsilon_t'M_N\tilde\varepsilon_t\Bigr) = \frac{1}{NT}\sum_{t=1}^{T}E\bigl(\varepsilon_t'M_N\varepsilon_t\bigr) - E\Bigl(\frac{1}{N}\bar\varepsilon'M_N\bar\varepsilon\Bigr) = \Bigl(\frac{1}{N}-\frac{1}{NT}\Bigr)\operatorname{tr}\bigl(M_N(I_N\otimes\Omega^0)\bigr) = O_p(1), \]
and, expanding as above,
\[ \operatorname{var}\Bigl(\frac{1}{NT}\sum_{t=1}^{T}\tilde\varepsilon_t'M_N\tilde\varepsilon_t\Bigr) = \frac{1}{N^2T^2}\Bigl(1-\frac{1}{T}\Bigr)^2\sum_{t=1}^{T}\operatorname{var}\bigl(\varepsilon_t'M_N\varepsilon_t\bigr) + \frac{1}{N^2T^4}\sum_{t=1}^{T}\sum_{s\ne t}\operatorname{tr}\bigl(M_N(I_N\otimes\Omega^0)(M_N'+M_N)(I_N\otimes\Omega^0)\bigr) = O_p\Bigl(\frac{1}{NT}\Bigr) + O_p\Bigl(\frac{1}{NT^2}\Bigr) = O_p\Bigl(\frac{1}{NT}\Bigr). \]
\[ E\Bigl(\frac{1}{NT}\sum_{t=1}^{T}\tilde b_{Nt}'M_N\tilde y_{t-j}\Bigr) = \frac{1}{NT}\sum_{t=1}^{T}\tilde b_{Nt}'M_N\tilde A^x_{t-j} = O_p(1), \]
and its variance splits into the infinite-past part (A.47) and the in-sample part (A.48): (A.47) $= O_p(\frac{1}{NT^2})$, because $M_NR\bigl(\sum_{h=0}^{\infty}|A^h|\bigr)^2R'\Sigma^0_yR\bigl(\sum_{h=0}^{\infty}|A^{h\prime}|\bigr)^2R'M_N'$ is UB, and (A.48) $= O_p(\frac{1}{NT})$, so together $\operatorname{var}(\frac{1}{NT}\sum_t\tilde b_{Nt}'M_N\tilde y_{t-j}) = O_p(\frac{1}{NT})$.
\[ E\Bigl(\frac{1}{NT}\sum_{t=1}^{T}\tilde\varepsilon_t'M_N\tilde y_{t-j}\Bigr) = -\frac{1}{N}E\bigl(\bar\varepsilon'M_N\bar y_{-j}\bigr) = O_p\Bigl(\frac{1}{T}\Bigr), \ \text{from (A.32)}. \]
Expanding the variance and cancelling terms leaves (A.49)-(A.53):
\[ \text{(A.49)} = \frac{1}{N^2T^2}\sum_{t=1}^{T}\tilde A^{x\prime}_{t-j}M_N'\bigl(I_N\otimes\Omega^0\bigr)M_N\tilde A^x_{t-j} \le \frac{1}{N^2T}\bigl(\tilde a^x\bigr)^2MN\bigl\|M_N'(I_N\otimes\Omega^0)M_N\bigr\|_\infty = O_p\Bigl(\frac{1}{NT}\Bigr), \]
\[ \text{(A.50)} = \frac{1}{N^2T^2}\sum_{g=1}^{T-j}\operatorname{tr}E\bigl(M_N\bar A^{\varepsilon}_{-j,g}\varepsilon_g\varepsilon_g'M_N\bar A^{\varepsilon}_{-j,g}\varepsilon_g\varepsilon_g'\bigr) = O_p\Bigl(\frac{1}{N^2T^3}\Bigr), \ \text{by (A.17)}, \]
\[ \text{(A.51)} = \frac{1}{N^2T^2}\sum_{t=1}^{T}\sum_{g\ne t}\operatorname{tr}\bigl((I_N\otimes\Omega^0)M_N\tilde A^{\varepsilon}_{t-j,g}(I_N\otimes\Omega^0)\tilde A^{\varepsilon\prime}_{t-j,g}M_N'\bigr) = O_p\Bigl(\frac{1}{NT}\Bigr), \]
\[ \text{(A.52)} = \frac{1}{N^2T^2}\sum_{h=1}^{T-j}\sum_{g\ne h}\operatorname{tr}\bigl(M_N\tilde A^{\varepsilon}_{g-j,h}(I_N\otimes\Omega^0)M_N\tilde A^{\varepsilon}_{h-j,g}(I_N\otimes\Omega^0)\bigr) = O_p\Bigl(\frac{1}{NT^2}\Bigr), \]
\[ \text{(A.53)} = \frac{1}{N^2T^2}\sum_{g=1}^{T-j}\operatorname{tr}\bigl(M_N\tilde A^{\varepsilon}_{g-j,g}(I_N\otimes\Omega^0)\bigr)\operatorname{tr}\bigl(M_N\tilde A^{\varepsilon}_{g-j,g}(I_N\otimes\Omega^0)\bigr) = O_p\Bigl(\frac{1}{T^3}\Bigr), \ \text{by (A.15)}. \]
Together these imply that $\operatorname{var}(\frac{1}{NT}\sum_t\tilde\varepsilon_t'M_N\tilde y_{t-j}) = O_p(\frac{1}{NT})$.
\[ E\Bigl(\frac{1}{NT}\sum_{t=1}^{T}\tilde y_{t-i}'M_N\tilde y_{t-j}\Bigr) = \frac{1}{NT}\sum_{t=1}^{T}\tilde A^{x\prime}_{t-i}M_N\tilde A^x_{t-j} + \frac{1}{NT}\sum_{t=1}^{T}\sum_{h=-\infty}^{t-j}\operatorname{tr}\bigl(A^{\varepsilon\prime}_{t-i,h}M_NA^{\varepsilon}_{t-j,h}E(\varepsilon_h\varepsilon_h')\bigr) - \frac{1}{N}\sum_{h=-\infty}^{T-j}\operatorname{tr}\bigl(\bar A^{\varepsilon\prime}_{-i,h}M_N\bar A^{\varepsilon}_{-j,h}E(\varepsilon_h\varepsilon_h')\bigr) = O_p(1). \]
Finally, $\operatorname{var}(\frac{1}{NT}\sum_t\tilde y_{t-i}'M_N\tilde y_{t-j}) = \frac{1}{N^2T^2}E[(\sum_t\tilde y_{t-i}'M_N\tilde y_{t-j})^2] - \frac{1}{N^2T^2}[E(\sum_t\tilde y_{t-i}'M_N\tilde y_{t-j})]^2$, whose expansion (A.54) consists of: terms coupling the $\tilde A^x$ components with the $\tilde A^{\varepsilon}$ components; the fourth-moment term $\frac{1}{N^2T^2}\sum_t\sum_s\sum_g\operatorname{tr}E\bigl(\tilde A^{\varepsilon\prime}_{t-i,g}M_N\tilde A^{\varepsilon}_{t-j,g}\varepsilon_g\varepsilon_g'\,\tilde A^{\varepsilon\prime}_{s-i,g}M_N\tilde A^{\varepsilon}_{s-j,g}\varepsilon_g\varepsilon_g'\bigr)$; cross-period trace terms over $g\ne k$; and the product-of-traces terms net of the squared expectation. Each term is then bounded in turn to evaluate
\[ \operatorname{var}\Bigl(\frac{1}{NT}\sum_{t=1}^{T}\tilde y_{t-i}'M_N\tilde y_{t-j}\Bigr) \]
= 1 N 2 T 2 T X t=1 T X s=1 T−j X g=−∞ e A x0 t−i M N e A ε t−j,g I N ⊗ Ω 0 e A ε0 s−j,g M 0 N e A x s−i (A.55) + 1 N 2 T 2 T X t=1 T X s=1 T−j X g=−∞ e A x0 t−i M N e A ε t−j,g I N ⊗ Ω 0 e A ε0 s−i,g M N e A x s−j (A.56) + 1 N 2 T 2 T X t=1 T X s=1 T−j X g=−∞ e A x0 s−i M N e A ε s−j,g I N ⊗ Ω 0 e A ε0 t−i,g M N e A x t−j (A.57) + 1 N 2 T 2 T X t=1 T X s=1 T−i X g=−∞ e A x0 t−j M 0 N e A ε t−i,g I N ⊗ Ω 0 e A ε0 s−i,g M N e A x s−j (A.58) + 1 N 2 T 2 T X t=1 T X s=1 T−j X g=−∞ trE e A ε0 t−i,g M N e A ε t−j,g εgε 0 g e A ε0 s−i,g M N e A ε s−j,g εgε 0 g (A.59) + 1 N 2 T 2 T X t=1 T X s=1 T−j X g=−∞ T−j X k=−∞,k6=g tr e A ε0 t−i,k M N e A ε t−j,g I N ⊗ Ω 0 e A ε0 s−i,g M N e A ε s−j,k I N ⊗ Ω 0 (A.60) + 1 N 2 T 2 T X t=1 T X s=1 T−j X g=−∞ T−i X k=−∞,k6=g tr e A ε0 t−i,k M N e A ε t−j,g I N ⊗ Ω 0 e A ε0 s−j,g M 0 N e A ε s−i,k I N ⊗ Ω 0 (A.61) − 1 N 2 T 2 T X t=1 T X s=1 T−j X g=−∞ tr e A ε0 t−i,g M N e A ε t−j,g I N ⊗ Ω 0 tr e A ε0 s−i,g M N e A ε s−j,g I N ⊗ Ω 0 (A.62) (A.55), (A.56), (A.57) and (A.58) are Op 1 NT as shown before. (A.59) = 1 N 2 T 2 T X t=1 T X s=1 T−j X g=−∞ trE e A ε0 t−i,g M N e A ε t−j,g εgε 0 g e A ε0 s−i,g M N e A ε s−j,g εgε 0 g =Op 1 NT , by (A.24). (A.60),(A.61) =Op 1 NT , by (A.24). − (A.62) = 1 N 2 T 2 T X t=1 T X s=1 T−j X g=−∞ tr e A ε0 t−i,g M N e A ε t−j,g I N ⊗ Ω 0 tr e A ε0 s−i,g M N e A ε s−j,g I N ⊗ Ω 0 =Op 1 T ,by (A.26) Together implies that var 1 NT T X t=1 e y 0 t−i M N e y t−j ! =Op 1 NT 186 e Zt = e y 0 t−1 ,··· ,e y 0 t−p , We y t−1 0 ,··· , We y t−q 0 , e X 0 t 0 . Label e Z t,1 =e y t−1 ,..., e Zt,p=e y t−p , e Z t,p+1 = We y t−1 ,..., e Z t,p+q = We y t−q , e Z t,p+q+1 = e Xt and accordingly partition M εz N = M εz 1 ,...,M εz p+q+1 and M zz N = M zz 11 ... M zz 1,p+q+1 . . . . . . . . . M zz p+q+1,1 ... M zz p+q+1,p+q+1 , and then we can write 1 NT T X t=1 e ε 0 t M εz N e Zt = 1 NT T X t=1 p+q+1 X j=1 e ε 0 t M εz j e Z t,j ! 
1 NT T X t=1 e Z 0 t M zz N e Zt = 1 NT T X t=1 p+q+1 X j=1 p+q+1 X k=1 e Z 0 t,j M zz jk e Z t,k ! Therefore,followingfromthepreviousresults,E 1 NT P T t=1 e ε 0 t M εz N e Zt =Op 1 T ,E 1 NT P T t=1 e Z 0 t M zz N e Zt =Op (1), 1 NT P T t=1 e ε 0 t M εz N e Zt − E 1 NT P T t=1 e ε 0 t M εz N e Zt = Op 1 √ NT , and 1 NT P T t=1 e Z 0 t M zz N e Zt − E 1 NT P T t=1 e Z 0 t M zz N e Zt =Op 1 √ NT . Proof of Lemma 4: C e Zt = e Q x t + T−1 X h=−∞ e Q ε t,h ε h (A.63) 187 1 NT T X t=1 T X s=1 E C e Zte ε 0 s ⊗e εte ε 0 s = 1 NT T X t=1 T X s=1 E T−1 X h=1 e Q ε t,h ε h e ε 0 s ⊗e εte ε 0 s ! = 1 NT T X t=1 T X s=1 E T−1 X h=1 e Q ε t,h ε h ε 0 s − T−1 X h=1 e Q ε t,h ε h ¯ ε 0 ! ⊗ εtε 0 s −εt ¯ ε 0 − ¯ εε 0 s + ¯ ε¯ ε 0 ! = 1 NT T X t=1 T X s=1 E T−1 X h=1 e Q ε t,h ε h ε 0 s ⊗εtε 0 s ! − 1 T 1 NT T X t=1 T X s=1 T−1 X h=1 T X r=1 E e Q ε t,h ε h ε 0 s ⊗εtε 0 r − 1 T 1 NT T X t=1 T X s=1 T−1 X h=1 T X r=1 E e Q ε t,h ε h ε 0 s ⊗εrε 0 s + 1 T 1 T 1 NT T X t=1 T X s=1 T−1 X h=1 T X r=1 T X m=1 E e Q ε t,h ε h ε 0 s ⊗εrε 0 m − 1 T 1 NT T X t=1 T X s=1 T−1 X h=1 T X r=1 E e Q ε t,h ε h ε 0 r ⊗εtε 0 s + 1 T 1 T 1 NT T X t=1 T X s=1 T−1 X h=1 T X r=1 T X m=1 E e Q ε t,h ε h ε 0 r ⊗εtε 0 m + 1 T 1 T 1 NT T X t=1 T X s=1 T−1 X h=1 T X r=1 T X m=1 E e Q ε t,h ε h ε 0 r ⊗εmε 0 s − 1 T 1 T 1 T 1 NT T X t=1 T X s=1 T−1 X h=1 T X r=1 T X m=1 T X g=1 E e Q ε t,h ε h ε 0 r ⊗εmε 0 g = 1 NT T−1 X t=1 T X s=1 e Q ε t,t E εtε 0 s ⊗εtε 0 s − 1 T 1 NT T X t=1 T−1 X s=1 e Q ε t,s E εsε 0 s ⊗εtε 0 t − 1 T 1 NT T−1 X t=1 T X s=1 e Q ε t,t E εtε 0 s ⊗εtε 0 s − 1 T 1 NT T X t=1 T X s=1 T−1 X h=1 e Q ε t,h E ε h ε 0 s ⊗ε h ε 0 s + 1 T 1 T 1 NT T X t=1 T−1 X s=1 T X m=1 e Q ε t,s E εsε 0 s ⊗εmε 0 m + 1 T 1 T 1 NT T X t=1 T X s=1 T−1 X h=1 e Q ε t,h E ε h ε 0 s ⊗εsε 0 h + 1 T 1 T 1 NT T X t=1 T X s=1 T−1 X h=1 e Q ε t,h E ε h ε 0 s ⊗ε h ε 0 s P T−1 t=1 e Q ε t,t =− P T−1 t=1 ¯ Q ε t =O (1) P T t=1 e Q ε t,h = 0, for all h =−∞,...,T−j. 
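The moment calculations above (and throughout this appendix) lean on standard vec/Kronecker identities such as $\mathrm{vec}(ABC)=(C'\otimes A)\,\mathrm{vec}(B)$, the mixed-product rule $(A\otimes B)(C\otimes D)=AC\otimes BD$, and $\mathrm{tr}(A\otimes B)=\mathrm{tr}(A)\,\mathrm{tr}(B)$. As a sanity check, a short NumPy sketch (purely illustrative, with small random matrices; not part of the proof) verifies them numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C, D = (rng.standard_normal((3, 3)) for _ in range(4))

def vec(X):
    # column-stacking vec operator, matching the convention used in the proofs
    return X.reshape(-1, order="F")

# vec(ABC) = (C' kron A) vec(B)
assert np.allclose(vec(A @ B @ C), np.kron(C.T, A) @ vec(B))

# mixed-product rule: (A kron B)(C kron D) = (AC) kron (BD)
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))

# trace factorizes over Kronecker products: tr(A kron B) = tr(A) tr(B)
assert np.isclose(np.trace(np.kron(A, B)), np.trace(A) * np.trace(B))

# tr(A'B) = vec(A)' vec(B), the identity behind the quadratic-form expansions
assert np.isclose(np.trace(A.T @ B), vec(A) @ vec(B))
```

Note the column-stacking (`order="F"`) convention: with NumPy's default row-major flattening, the vec identity would instead read $\mathrm{vec}(ABC)=(A\otimes C')\,\mathrm{vec}(B)$.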
188 and 1 NT T X t=1 T X s=1 E C e Zte ε 0 s ⊗e εte ε 0 s = 1 NT − 2 NT 2 T−1 X t=1 e Q ε t,t εtε 0 t ⊗εtε 0 t + 1 N − 2 NT + 1 NT 2 T−1 X t=1 e Q ε t,t vec I N ⊗ Ω 0 vec I N ⊗ Ω 0 0 + 1 NT 2 T−1 X t=1 e Q ε t,t I N ⊗ Ω 0 ⊗ I N ⊗ Ω 0 = 1 N T−1 X t=1 e Q ε t,t vec I N ⊗ Ω 0 vec I N ⊗ Ω 0 0 +Op 1 T 1 N T X t=1 E C e Zt⊗e εt = 1 N T X t=1 E C e Zt⊗ (εt− ¯ ε) = 1 N T−1 X t=1 e Q ε t,t E (εt⊗εt)− 1 T 1 N T X t=1 T−1 X h=1 e Q ε t,h E (ε h ⊗ε h ) = 1 N T−1 X t=1 e Q ε t,t E (εt⊗εt) = 1 N T−1 X t=1 e Q ε t,t vec I N ⊗ Ω 0 189 Proof of Lemma 5: 1 NT T X t=1 T X s=1 E C e Zt e Z 0 s C 0 ⊗ e εte ε 0 s = 1 NT T X t=1 T X s=1 E C e Zt e Z 0 s C 0 ⊗ εtε 0 s − 1 T T X k=1 εtε 0 k − 1 T T X r=1 εrε 0 s + 1 T 2 T X r=1 T X k=1 εrε 0 k ! = 1 NT T X t=1 T X s=1 e Q x t e Q x0 s ⊗E εtε 0 s − 1 T 1 NT T X t=1 T X s=1 T X k=1 e Q x t e Q x0 s ⊗E εtε 0 k − 1 T 1 NT T X t=1 T X s=1 T X r=1 e Q x t e Q x0 s ⊗E εrε 0 s + 1 T 2 1 NT T X t=1 T X s=1 T X r=1 T X k=1 e Q x t e Q x0 s ⊗E εrε 0 k + 1 NT T X t=1 T X s=1 T−1 X h=1 T−1 X g=1 E e Q ε t,h ε h ε 0 g e Q ε0 s,g ⊗ εtε 0 s − 1 T 1 NT T X t=1 T X s=1 T−1 X h=1 T−1 X g=1 T X k=1 E e Q ε t,h ε h ε 0 g e Q ε0 s,g ⊗ εtε 0 k − 1 T 1 NT T X t=1 T X s=1 T−1 X h=1 T−1 X g=1 T X r=1 E e Q ε t,h ε h ε 0 g e Q ε0 s,g ⊗ εrε 0 s + 1 T 2 1 NT T X t=1 T X s=1 T−1 X h=1 T−1 X g=1 T X r=1 T X k=1 E e Q ε t,h ε h ε 0 g e Q ε0 s,g ⊗ εrε 0 k + 1 NT T X t=1 T X s=1 ∞ X h=0 ∞ X g=0 E e Q ε t,−h ε −h ε 0 −g e Q ε0 s,−g ⊗ εtε 0 s − 1 T 1 NT T X t=1 T X s=1 ∞ X h=0 ∞ X g=0 T X k=1 E e Q ε t,−h ε −h ε 0 −g e Q ε0 s,−g ⊗ εtε 0 k − 1 T 1 NT T X t=1 T X s=1 ∞ X h=0 ∞ X g=0 T X r=1 E e Q ε t,−h ε −h ε 0 −g e Q ε0 s,−g ⊗ εrε 0 s + 1 T 2 1 NT T X t=1 T X s=1 ∞ X h=0 ∞ X g=0 T X r=1 T X k=1 E e Q ε t,−h ε −h ε 0 −g e Q ε0 s,−g ⊗ εrε 0 k = 1 NT T X t=1 e Q x t e Q x0 t ⊗E εtε 0 t (A.64) + 1 NT T X t=1 T X s=1 T−1 X h=1 T−1 X g=1 E e Q ε t,h ε h ε 0 g e Q ε0 s,g ⊗ εtε 0 s (A.65) − 1 T 1 NT T X t=1 T X s=1 T−1 X h=1 T−1 X g=1 T X k=1 E e Q ε t,h ε h ε 0 g e Q ε0 s,g ⊗ εtε 0 
k (A.66) − 1 T 1 NT T X t=1 T X s=1 T−1 X h=1 T−1 X g=1 T X r=1 E e Q ε t,h ε h ε 0 g e Q ε0 s,g ⊗ εrε 0 s (A.67) + 1 T 2 1 NT T X t=1 T X s=1 T−1 X h=1 T−1 X g=1 T X r=1 T X k=1 E e Q ε t,h ε h ε 0 g e Q ε0 s,g ⊗ εrε 0 k (A.68) + 1 NT T X t=1 ∞ X g=0 E e Q ε t,−g ε −g ε 0 −g e Q ε0 t,−g ⊗ εtε 0 t (A.69) 190 where (A.65) = 1 NT T X t=1 T X s=1 T−1 X h=1 T−1 X g=1 E e Q ε t,h ε h ε 0 g e Q ε0 s,g ⊗ εtε 0 s = 1 NT P T−1 t=1 E e Q ε t,t εtε 0 t e Q ε0 t,t ⊗ εtε 0 t + P T t=1 P T−1 s6=t E e Q ε t,s εsε 0 s e Q ε0 t,s ⊗ εtε 0 t + P T−1 t=1 P T−1 s6=t E e Q ε t,t εtε 0 s e Q ε0 s,s ⊗ (εtε 0 s ) + P T−1 t=1 P T−1 s6=t E e Q ε t,s εsε 0 t e Q ε0 s,t ⊗ (εtε 0 s ) = 1 NT + P T−1 t=1 e Q ε t,t ⊗I MN E εtε 0 t ⊗εtε 0 t e Q ε0 t,t ⊗I MN + P T t=1 P T−1 s6=t e Q ε t,s I N ⊗ Ω 0 e Q ε0 t,s ⊗ I N ⊗ Ω 0 + P T−1 t=1 P T−1 s6=t vec I N ⊗ Ω 0 e Q ε0 t,t vec I N ⊗ Ω 0 e Q ε0 s,s 0 + P T−1 t=1 P T−1 s6=t e Q ε t,s I N ⊗ Ω 0 ⊗ I N ⊗ Ω 0 e Q ε0 s,t K MN,MN = 1 NT P T−1 t=1 e Q ε t,t ⊗I MN E εtε 0 t ⊗εtε 0 t e Q ε0 t,t ⊗I MN + P T t=1 P T−1 s=1 e Q ε t,s I N ⊗ Ω 0 e Q ε0 t,s ⊗ I N ⊗ Ω 0 − P T−1 t=1 e Q ε t,t I N ⊗ Ω 0 e Q ε0 t,t ⊗ I N ⊗ Ω 0 + P T−1 t=1 P T−1 s=1 vec I N ⊗ Ω 0 e Q ε0 t,t vec I N ⊗ Ω 0 e Q ε0 s,s 0 − P T−1 t=1 vec I N ⊗ Ω 0 e Q ε0 t,t vec I N ⊗ Ω 0 e Q ε0 t,t 0 + P T−1 t=1 P T−1 s=1 e Q ε t,s I N ⊗ Ω 0 ⊗ I N ⊗ Ω 0 e Q ε0 s,t K MN,MN − P T−1 t=1 e Q ε t,t I N ⊗ Ω 0 ⊗ I N ⊗ Ω 0 e Q ε0 t,t K MN,MN = 1 NT P T t=1 P T−1 s=1 e Q ε t,s I N ⊗ Ω 0 e Q ε0 t,s ⊗ I N ⊗ Ω 0 + P T−1 t=1 P T−1 s=1 vec I N ⊗ Ω 0 e Q ε0 t,t vec I N ⊗ Ω 0 e Q ε0 s,s 0 + P T−1 t=1 P T−1 s=1 e Q ε t,s I N ⊗ Ω 0 ⊗ I N ⊗ Ω 0 e Q ε0 s,t K MN,MN + Δ + 1 = 1 NT " T X t=1 T−1 X s=1 e Q ε t,s I N ⊗ Ω 0 e Q ε0 t,s ⊗ I N ⊗ Ω 0 # + Δ + 1 +Op 1 T (A.66) =− 1 NT 2 T X t=1 T X s=1 T−1 X h=1 T−1 X g=1 T X k=1 E e Q ε t,h ε h ε 0 g e Q ε0 s,g ⊗ εtε 0 k =− 1 NT 2 T X s=1 P T−1 t=1 E e Q ε t,t εtε 0 t e Q ε0 s,t ⊗ εtε 0 t + P T t=1 P T−1 g6=t E e Q ε t,g εgε 0 g e Q ε0 s,g ⊗ εtε 0 t + P T−1 t=1 P T−1 g6=t E e Q ε t,t εtε 0 g e 
Q ε0 s,g ⊗ εtε 0 g + P T−1 t=1 P T−1 g6=t E e Q ε t,g εgε 0 t e Q ε0 s,t ⊗ εtε 0 g =− 1 NT 2 T X s=1 P T−1 t=1 e Q ε t,t ⊗I MN E εtε 0 t ⊗εtε 0 t e Q ε0 s,t ⊗I MN + P T t=1 P T−1 g6=t e Q ε t,g I N ⊗ Ω 0 e Q ε0 s,g ⊗ I N ⊗ Ω 0 + P T−1 t=1 P T−1 g6=t vec I N ⊗ Ω 0 e Q ε0 t,t vec I N ⊗ Ω 0 e Q ε0 s,g 0 + P T−1 t=1 P T−1 g6=t e Q ε t,g I N ⊗ Ω 0 ⊗ I N ⊗ Ω 0 e Q ε0 s,t K MN,MN = 0 191 (A.67) =− 1 NT 2 T X t=1 T X s=1 T−1 X h=1 T−1 X g=1 T X r=1 E e Q ε t,h ε h ε 0 g e Q ε0 s,g ⊗ εrε 0 s =− 1 NT 2 T X t=1 P T−1 s=1 E e Q ε t,s εsε 0 s e Q ε0 s,s ⊗ (εsε 0 s ) + P T s=1 P T−1 g6=s E e Q ε t,g εgε 0 g e Q ε0 s,g ⊗ (εsε 0 s ) + P T−1 s=1 P T−1 g6=s E e Q ε t,g εgε 0 s e Q ε0 s,s ⊗ (εgε 0 s ) + P T−1 s=1 P T−1 g6=s E e Q ε t,s εsε 0 g e Q ε0 s,g ⊗ (εgε 0 s ) =− 1 NT 2 T X t=1 P T−1 s=1 e Q ε t,s ⊗I MN E (εsε 0 s ⊗εsε 0 s ) e Q ε0 s,s ⊗I MN + P T s=1 P T−1 g6=s e Q ε t,g I N ⊗ Ω 0 e Q ε0 s,g ⊗ I N ⊗ Ω 0 + P T−1 s=1 P T−1 g6=s vec I N ⊗ Ω 0 e Q ε0 t,g vec I N ⊗ Ω 0 e Q ε0 s,s 0 + P T−1 s=1 P T−1 g6=s e Q ε t,s I N ⊗ Ω 0 ⊗ I N ⊗ Ω 0 e Q ε0 s,g K MN,MN = 0 (A.68) = 1 NT 3 T X t=1 T X s=1 T−1 X h=1 T−1 X g=1 T X r=1 T X k=1 E e Q ε t,h ε h ε 0 g e Q ε0 s,g ⊗ εrε 0 k = 1 NT 3 T X t=1 T X s=1 P T−1 g=1 E e Q ε t,g εgε 0 g e Q ε0 s,g ⊗ εgε 0 g + P T h=1 P T−1 g6=h E e Q ε t,g εgε 0 g e Q ε0 s,g ⊗ ε h ε 0 h + P T−1 h=1 P T−1 g6=h E e Q ε t,h ε h ε 0 g e Q ε0 s,g ⊗ ε h ε 0 g + P T−1 h=1 P T−1 g6=h E e Q ε t,h ε h ε 0 g e Q ε0 s,g ⊗ εgε 0 h = 1 NT 3 T X t=1 T X s=1 P T−1 g=1 e Q ε t,g ⊗I MN εgε 0 g ⊗εgε 0 g e Q ε0 s,g ⊗I MN + P T h=1 P T−1 g6=h e Q ε t,g I N ⊗ Ω 0 e Q ε0 s,g ⊗ I N ⊗ Ω 0 + P T−1 h=1 P T−1 g6=h vec I N ⊗ Ω 0 e Q ε0 t,h vec I N ⊗ Ω 0 e Q ε0 s,g 0 + P T−1 h=1 P T−1 g6=h e Q ε t,h I N ⊗ Ω 0 ⊗ I N ⊗ Ω 0 e Q ε0 s,g K MN,MN = 0 (A.65) + (A.66) + (A.67) + (A.68) + (A.69) = 1 NT T X t=1 T−1 X s=−∞ e Q + t,s I N ⊗ Ω 0 e Q +0 t,s ⊗ I N ⊗ Ω 0 + Δ + 1 +Op 1 T 192 where Δ + 1 = 1 NT T−1 X t=1 e Q ε t,t ⊗I MN Π 0 ⊗ Π 0 e Q ε0 t,t ⊗I MN −vec I N ⊗ Ω 0 e Q ε0 t,t vec I N ⊗ Ω 0 e Q ε0 
t,t 0 − e Q ε t,t I N ⊗ Ω 0 e Q ε0 t,t ⊗ I N ⊗ Ω 0 − e Q ε t,t I N ⊗ Ω 0 ⊗ I N ⊗ Ω 0 e Q ε0 t,t K MN,MN = 1 NT T−1 X t=1 ¯ Q ε t ⊗I MN Π 0 ⊗ Π 0 ¯ Q ε0 t ⊗I MN −vec I N ⊗ Ω 0 ¯ Q ε0 t vec I N ⊗ Ω 0 ¯ Q ε0 t 0 − ¯ Q ε t I N ⊗ Ω 0 ¯ Q ε0 t ⊗ I N ⊗ Ω 0 − ¯ Q ε t I N ⊗ Ω 0 ⊗ I N ⊗ Ω 0 ¯ Q ε0 t K MN,MN =Op 1 T because e Q ε t,t =− ¯ Q ε t . Ifεt are normal, then Δ + 1 = 0. 1 NT T X t=1 T X s=1 E C e Zt e Z 0 s C 0 ⊗ e εte ε 0 s = 1 NT P T t=1 e Q x t e Q x0 t + 1 NT P T t=1 P T−1 s=−∞ e Q ε t,s I N ⊗ Ω 0 e Q ε0 t,s ! ⊗ I N ⊗ Ω 0 +Op 1 T 1 NT T X t=1 CE e Zt e Z 0 t C 0 = 1 NT T X t=1 e Q x t e Q x0 t + 1 NT T X t=1 T−1 X s=−∞ e Q ε t,s I N ⊗ Ω 0 e Q ε0 t,s 1 NT T X t=1 T X s=1 E C e Zt e Z 0 s C 0 ⊗e εte ε 0 s − 1 NT T X t=1 CE e Zt e Z 0 t C 0 ⊗ I N ⊗ Ω 0 =Op 1 T Proof of Lemma 6: 1 NT H θ,NT lnL(θ) = 1 NT H Ω 0 Ω (θ) ∗ ∗ ∗ H Λ 0 Ω (θ) H Λ 0 Λ (θ) ∗ ∗ H γ 0 Ω (θ) H γ 0 Λ (θ) H γ 0 γ (θ) ∗ H Ψ 0 0 Ω (θ) H Ψ 0 0 Λ (θ) H Ψ 0 0 γ (θ) H Ψ 0 0 Ψ 0 (θ) To show (A.28), − 1 NT H θ,NT lnL (θ) − − 1 NT H θ,NT lnL θ 0 = θ−θ 0 Op (1) 193 − 1 NT H θ,NT lnL (θ) − − 1 NT H θ,NT lnL θ 0 = − 1 NT H θ,NT lnL θ,e εt (θ) − − 1 NT H θ,NT lnL θ 0 ,e εt , where e εt = e εt θ 0 , e εt (θ) = (I MN +R N (θ))e εt + S N (Λ)G N (θ)C e Zt, and denoted G N (θ) =S N (Ψ 0 )S −1 N (Ψ 0 0 ) I N ⊗γ 0 − (I N ⊗γ) = I N ⊗ γ 0 −γ + I N ⊗ Ψ 0 0 − Ψ 0 WS −1 N (Ψ 0 0 ) I N ⊗γ 0 R N (θ) =S N (Λ)S N (Ψ 0 )S −1 N Ψ 0 0 S −1 N Λ 0 −I MN = I N ⊗ Ψ 0 0 − Ψ 0 WS −1 N Ψ 0 0 + I N ⊗ Λ 0 − Λ WS −1 N Λ 0 + I N ⊗ Ψ 0 0 − Ψ 0 WS −1 N Ψ 0 0 I N ⊗ Λ 0 WS −1 N Λ 0 − (I N ⊗ Λ) W I N ⊗ Ψ 0 0 − Ψ 0 WS −1 N Ψ 0 0 S −1 N Λ 0 Both G N (θ) and R N (θ) only involve terms which are differences in θ−θ 0 multiplied by some UB matrices, therefore, G N (θ) = θ−θ 0 Op (1) and R N (θ) = θ−θ 0 Op (1). 
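The factorizations of $G_N(\theta)$ and $R_N(\theta)$ above rest on the resolvent-type identity $A^{-1}-B^{-1}=A^{-1}(B-A)B^{-1}$, which expresses a difference of inverses as the parameter difference sandwiched between uniformly bounded matrices. A minimal numerical illustration, with a hypothetical row-normalized $W$ and a scalar spatial parameter standing in for $S_N(\Lambda)$ and $S_N(\Lambda^0)$ (not the model's actual matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
W = rng.standard_normal((n, n))
W /= np.abs(W).sum(axis=1, keepdims=True)   # row-normalized, so ||W||_inf = 1

def S(lam):
    # stand-in for S_N(Lambda) = I - Lambda * W with a scalar parameter
    return np.eye(n) - lam * W

lam0, lam = 0.4, 0.45                        # "true" and perturbed parameter
A, B = S(lam), S(lam0)

# resolvent identity: A^{-1} - B^{-1} = A^{-1} (B - A) B^{-1}
lhs = np.linalg.inv(A) - np.linalg.inv(B)
rhs = np.linalg.inv(A) @ (B - A) @ np.linalg.inv(B)
assert np.allclose(lhs, rhs)

# hence the inverse difference is O(|lam - lam0|) times bounded factors,
# by submultiplicativity of the induced infinity norm
bound = (np.linalg.norm(np.linalg.inv(A), np.inf)
         * np.linalg.norm(np.linalg.inv(B), np.inf)
         * abs(lam - lam0) * np.linalg.norm(W, np.inf))
assert np.linalg.norm(lhs, np.inf) <= bound + 1e-12
```

Here $B-A=(\lambda-\lambda^0)W$, so the difference of inverses is exactly the parameter difference multiplied by matrices that stay bounded on the parameter space, which is the $(\theta-\theta^0)O_p(1)$ structure used in the lemma.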
e εt (θ)e ε 0 t (θ) =e εte ε 0 t +e εte ε 0 t R N (θ) 0 +R N (θ)e εte ε 0 t +R N (θ)e εte ε 0 t R N (θ) 0 + (I MN +R N (θ))e εt e Z 0 t C 0 G N (θ) 0 S N (Λ) 0 +S N (Λ)G N (θ)C e Zte ε 0 t I MN +R N (θ) 0 +S N (Λ)G N (θ)C e Zt e Z 0 t C 0 G N (θ) 0 S N (Λ) 0 =e εte ε 0 t + θ−θ 0 Op (1) e Zte ε 0 t (θ) = e Zte ε 0 t + e Zte ε 0 t R N (θ) 0 + e Zt e Z 0 t C 0 G N (θ) 0 S N (Λ) 0 = e Zte ε 0 t + θ−θ 0 Op (1) Substituting e yt = S −1 N Ψ 0 0 I N ⊗γ 0 C e Zt + S −1 N Ψ 0 0 S −1 N Λ 0 e εt and e ut (θ) = S −1 N (Λ)e εt (θ) into (A.7), we can see that − 1 NT H θ,NT lnL (θ) − − 1 NT H θ,NT lnL θ 0 = − 1 NT H θ,NT lnL θ,e εt − − 1 NT H θ,NT lnL θ 0 ,e εt + θ−θ 0 Op (1). It boils down to show that − 1 NT H θ,NT lnL θ,e εt − − 1 NT H θ,NT lnL θ 0 ,e εt = θ−θ 0 Op (1), which can be seen from computing the differences as follows − 1 NT H Ω 0 Ω θ,e εt − − 1 NT H Ω 0 Ω θ 0 = 1 NT P T t=1 D 0 M J 0 I N ⊗ Ω −1 ⊗ I N ⊗ Ω −1 e εte ε 0 t I N ⊗ Ω −1 JD M − 1 NT P T t=1 D 0 M J 0 I N ⊗ Ω 0 −1 ⊗ I N ⊗ Ω 0 −1 e εte ε 0 t I N ⊗ Ω 0 −1 JD M − 1 2 D 0 M Ω −1 ⊗ Ω −1 D M + 1 2 D 0 M Ω 0 −1 ⊗ Ω 0 −1 D M = − 1 NT P T t=1 D 0 M J 0 I N ⊗ Ω −1 Ω− Ω 0 Ω −1 ⊗ I N ⊗ Ω −1 e εte ε 0 t I N ⊗ Ω −1 JD M − 1 NT P T t=1 D 0 M J 0 I N ⊗ Ω −1 ⊗ I N ⊗ Ω −1 Ω− Ω 0 Ω −1 e εte ε 0 t I N ⊗ Ω −1 JD M − 1 NT P T t=1 D 0 M J 0 I N ⊗ Ω −1 ⊗ I N ⊗ Ω −1 e εte ε 0 t I N ⊗ Ω −1 Ω− Ω 0 Ω −1 JD M + 1 2 D 0 M n Ω −1 Ω− Ω 0 Ω −1 ⊗ Ω −1 + Ω −1 ⊗ Ω −1 Ω− Ω 0 Ω −1 o D M 194 − 1 NT H Λ 0 Ω θ,e εt − − 1 NT H Λ 0 Ω θ 0 = 1 NT P T t=1 J 0 WS −1 N (Λ)e εte ε 0 t ⊗I MN J Ω −1 ⊗ Ω −1 D M − 1 NT P T t=1 J 0 WS −1 N Λ 0 e εte ε 0 t ⊗I MN J Ω 0 −1 ⊗ Ω 0 −1 D M = 1 NT P T t=1 J 0 W I N ⊗ Λ− Λ 0 W e εte ε 0 t ⊗I MN J Ω −1 ⊗ Ω −1 D M − 1 NT P T t=1 J 0 WS −1 N Λ e εte ε 0 t ⊗I MN J n Ω −1 Ω− Ω 0 Ω −1 ⊗ Ω −1 + Ω −1 ⊗ Ω −1 Ω− Ω 0 Ω −1 o D M − 1 NT H γ 0 Ω θ,e εt − − 1 NT H γ 0 Ω θ 0 = 1 NT P T t=1 J 0 1 C e Zte ε 0 t ⊗S N (Λ) 0 J Ω −1 ⊗ Ω −1 D M − 1 NT P T t=1 J 0 1 C e Zte ε 0 t ⊗S N Λ 0 0 J Ω 0 −1 ⊗ Ω 0 −1 D M = − 1 NT P T t=1 J 0 1 C e Zte 
ε 0 t ⊗ I N ⊗ Λ− Λ 0 W 0 J Ω −1 ⊗ Ω −1 D M − 1 NT P T t=1 J 0 1 C e Zte ε 0 t ⊗S N Λ 0 J n Ω −1 Ω− Ω 0 Ω −1 ⊗ Ω −1 + Ω −1 ⊗ Ω −1 Ω− Ω 0 Ω −1 o D M − 1 NT H Ψ 0 0 Ω θ,e εt − − 1 NT H Ψ 0 0 Ω θ 0 = 1 NT P T t=1 J 0 We yte ε 0 t ⊗S N (Λ) 0 J Ω −1 ⊗ Ω −1 D M − 1 NT P T t=1 J 0 We yte ε 0 t ⊗S N Λ 0 0 J Ω 0 −1 ⊗ Ω 0 −1 D M = − 1 NT P T t=1 J 0 We yte ε 0 t ⊗ I N ⊗ Λ− Λ 0 W 0 J Ω −1 ⊗ Ω −1 D M − 1 NT P T t=1 J 0 We yte ε 0 t ⊗S N Λ 0 J n Ω −1 Ω− Ω 0 Ω −1 ⊗ Ω −1 + Ω −1 ⊗ Ω −1 Ω− Ω 0 Ω −1 o D M − 1 NT H Λ 0 Λ θ,e εt − − 1 NT H Λ 0 Λ θ 0 = 1 NT P T t=1 K M,M J 0 S −1 N (Λ) 0 W⊗ WS −1 N (Λ) J− 1 NT P T t=1 K M,M J 0 S −1 N Λ 0 0 W⊗ WS −1 N Λ 0 J + 1 NT P T t=1 J 0 WS −1 N (Λ)e εte ε 0 t S −1 N (Λ) 0 W 0 ⊗ I N ⊗ Ω −1 J − 1 NT P T t=1 J 0 WS −1 N Λ 0 e εte ε 0 t S −1 N Λ 0 0 W 0 ⊗ I N ⊗ Ω 0 −1 J = 1 NT P T t=1 K M,M J 0 S −1 N Λ I N ⊗ Λ− Λ 0 W S −1 N Λ 0 W⊗ WS −1 N Λ J + 1 NT P T t=1 K M,M J 0 S −1 N Λ 0 W⊗ W S −1 N Λ I N ⊗ Λ− Λ 0 W S −1 N Λ J + 1 NT P T t=1 J 0 W S −1 N Λ I N ⊗ Λ− Λ 0 W S −1 N Λ e εte ε 0 t S −1 N Λ 0 W 0 ⊗ I N ⊗ Ω −1 J + 1 NT P T t=1 J 0 WS −1 N Λ e εte ε 0 t S −1 N Λ I N ⊗ Λ− Λ 0 W S −1 N Λ 0 W 0 ⊗ I N ⊗ Ω −1 J − 1 NT P T t=1 J 0 WS −1 N Λ e εte ε 0 t S −1 N Λ 0 W 0 ⊗ I N ⊗ Ω −1 Ω− Ω 0 Ω −1 J − 1 NT H γ 0 Λ θ,e εt − − 1 NT H γ 0 Λ θ 0 = 1 NT P T t=1 J 0 1 C e Zte ε 0 t S −1 N (Λ) 0 W⊗S N (Λ) 0 I N ⊗ Ω −1 J − 1 NT P T t=1 J 0 1 C e Zte ε 0 t S −1 N Λ 0 0 W⊗S N Λ 0 0 I N ⊗ Ω 0 −1 J + 1 NT P T t=1 J 0 1 C e Zte ε 0 t I N ⊗ Ω −1 ⊗ W JK M,M − 1 NT P T t=1 J 0 1 C e Zte ε 0 t I N ⊗ Ω 0 −1 ⊗ W JK M,M = 1 NT P T t=1 J 0 1 C e Zte ε 0 t S −1 N Λ I N ⊗ Λ− Λ 0 WS −1 N Λ 0 W⊗S N Λ 0 I N ⊗ Ω −1 J − 1 NT P T t=1 J 0 1 C e Zte ε 0 t S −1 N Λ 0 W⊗ I N ⊗ Λ− Λ 0 W 0 I N ⊗ Ω −1 J − 1 NT P T t=1 J 0 1 C e Zte ε 0 t S −1 N Λ 0 W⊗S N Λ 0 I N ⊗ Ω −1 Ω− Ω 0 Ω −1 J − 1 NT P T t=1 J 0 1 C e Zte ε 0 t I N ⊗ Ω −1 Ω− Ω 0 Ω −1 ⊗ W JK M,M 195 − 1 NT H Ψ 0 0 Λ θ,e εt − − 1 NT H Ψ 0 0 Λ θ 0 = 1 NT P T t=1 J 0 We yte ε 0 t S −1 N (Λ) 0 W 0 ⊗S N (Λ) 0 (I N ⊗ Ω −1 ) J − 1 NT P T t=1 J 0 We 
yte ε 0 t S −1 N Λ 0 0 W 0 ⊗S N Λ 0 0 (I N ⊗ Ω 0 −1 ) J + 1 NT P T t=1 K M,M J 0 W 0 ⊗ We yte ε 0 t I N ⊗ Ω −1 J− 1 NT P T t=1 K M,M J 0 W 0 ⊗ We yte ε 0 t I N ⊗ Ω 0 −1 J = 1 NT P T t=1 J 0 We yte ε 0 t S −1 N Λ I N ⊗ Λ− Λ 0 WS −1 N Λ 0 W 0 ⊗S N Λ 0 (I N ⊗ Ω −1 ) J − 1 NT P T t=1 J 0 We yte ε 0 t S −1 N Λ 0 W 0 ⊗ I N ⊗ Λ− Λ 0 W 0 (I N ⊗ Ω −1 ) J − 1 NT P T t=1 J 0 We yte ε 0 t S −1 N Λ 0 W 0 ⊗S N Λ 0 (I N ⊗ Ω −1 Ω− Ω 0 Ω −1 ) J − 1 NT P T t=1 K M,M J 0 W 0 ⊗ We yte ε 0 t I N ⊗ Ω −1 Ω− Ω 0 Ω −1 J − 1 NT H γ 0 γ θ,e εt − − 1 NT H γ 0 γ θ 0 = 1 NT P T t=1 J 0 1 C e Zt e Z 0 t C 0 ⊗S N (Λ) 0 I N ⊗ Ω −1 S N (Λ) J 1 − 1 NT P T t=1 J 0 1 C e Zt e Z 0 t C 0 ⊗S N Λ 0 0 I N ⊗ Ω 0 −1 S N Λ 0 J 1 = − 1 NT P T t=1 J 0 1 C e Zt e Z 0 t C 0 ⊗ I N ⊗ Λ− Λ 0 W 0 I N ⊗ Ω −1 S N Λ J 1 − 1 NT P T t=1 J 0 1 C e Zt e Z 0 t C 0 ⊗S N Λ 0 I N ⊗ Ω −1 Ω− Ω 0 Ω −1 S N Λ J 1 − 1 NT P T t=1 J 0 1 C e Zt e Z 0 t C 0 ⊗S N Λ 0 I N ⊗ Ω −1 I N ⊗ Λ− Λ 0 W J 1 − 1 NT H Ψ 0 0 γ θ,e εt − − 1 NT H Ψ 0 0 γ θ 0 = 1 NT P T t=1 J 0 We yt e Z 0 t C 0 ⊗S N (Λ) 0 I N ⊗ Ω −1 S N (Λ) J 1 − 1 NT P T t=1 J 0 We yt e Z 0 t C 0 ⊗S N Λ 0 0 I N ⊗ Ω 0 −1 S N Λ 0 J 1 = − 1 NT P T t=1 J 0 We yt e Z 0 t C 0 ⊗ I N ⊗ Λ− Λ 0 W 0 I N ⊗ Ω −1 S N Λ J 1 − 1 NT P T t=1 J 0 We yt e Z 0 t C 0 ⊗S N Λ 0 Ω −1 Ω− Ω 0 Ω −1 S N Λ J 1 − 1 NT P T t=1 J 0 We yt e Z 0 t C 0 ⊗S N Λ 0 I N ⊗ Ω −1 I N ⊗ Λ− Λ 0 W J 1 − 1 NT H Ψ 0 0 Ψ 0 θ,e εt − − 1 NT H Ψ 0 0 Ψ 0 θ 0 = 1 NT P T t=1 K M,M J 0 S −1 N (Ψ 0 ) 0 W 0 ⊗ WS −1 N (Ψ 0 ) J− 1 NT P T t=1 K M,M J 0 S −1 N Ψ 0 0 0 W 0 ⊗ WS −1 N Ψ 0 0 J + 1 NT P T t=1 J 0 We yte y 0 t W 0 ⊗S N (Λ) 0 I N ⊗ Ω −1 S N (Λ) J − 1 NT P T t=1 J 0 We yte y 0 t W 0 ⊗S N Λ 0 0 I N ⊗ Ω 0 −1 S N Λ 0 J = 1 NT P T t=1 K M,M J 0 S −1 N Ψ 0 I N ⊗ Ψ 0 − Ψ 0 0 WS −1 N Ψ 0 0 W 0 ⊗ WS −1 N Ψ 0 J + 1 NT P T t=1 K M,M J 0 S −1 N Ψ 0 0 W 0 ⊗ W S −1 N Ψ 0 I N ⊗ Ψ 0 − Ψ 0 0 WS −1 N Ψ 0 J − 1 NT P T t=1 J 0 We yte y 0 t W 0 ⊗ I N ⊗ Λ− Λ 0 W 0 I N ⊗ Ω −1 S N Λ J − 1 NT P T t=1 J 0 We yte y 0 t W 0 ⊗S N Λ 0 I N ⊗ Ω −1 Ω− Ω 0 Ω −1 S N Λ J − 
1 NT P T t=1 J 0 We yte y 0 t W 0 ⊗S N Λ 0 I N ⊗ Ω −1 I N ⊗ Λ− Λ 0 W J 196 To show (A.29), − 1 NT H θ,NT lnL θ 0 − − 1 NT EH θ,NT lnL(θ 0 ) = D 0 M J 0 I N ⊗ Ω 0 −1 ⊗ I N ⊗ Ω 0 −1 1 NT P T t=1 e εte ε 0 t − 1 NT P T t=1 Ee εte ε 0 t I N ⊗ Ω 0 −1 JD M ∗ ∗ ∗ J 0 W 1 NT P T t=1 e ute ε 0 t − 1 NT P T t=1 Ee ute ε 0 t ⊗I MN J Ω 0 −1 ⊗ Ω 0 −1 D M 0 ∗ ∗ J 0 1 C 1 NT P T t=1 e Zte ε 0 t − 1 NT P T t=1 E e Zte ε 0 t ⊗S N Λ 0 0 J Ω 0 −1 ⊗ Ω 0 −1 D M 0 0 ∗ J 0 W 1 NT P T t=1 e yte ε 0 t − 1 NT P T t=1 Ee yte ε 0 t ⊗S N Λ 0 0 J Ω 0 −1 ⊗ Ω 0 −1 D M 0 0 0 + 0 ∗ ∗ ∗ 0 J 0 W 1 NT P T t=1 e ute u 0 t − 1 NT P T t=1 Ee ute u 0 t W 0 ⊗ I N ⊗ Ω 0 −1 J ∗ ∗ 0 J 0 1 C 1 NT P T t=1 e Zte u 0 t − 1 NT P T t=1 E e Zte u 0 t W⊗S N Λ 0 0 I N ⊗ Ω 0 −1 J +J 0 1 C 1 NT P T t=1 e Zte ε 0 t − 1 NT P T t=1 E e Zte ε 0 t I N ⊗ Ω 0 −1 ⊗ W JK M,M 0 ∗ 0 J 0 W 1 NT P T t=1 e yte u 0 t − 1 NT P T t=1 Ee yte u 0 t W 0 ⊗S N Λ 0 0 (I N ⊗ Ω 0 −1 ) J +K M,M J 0 W 0 ⊗ W 1 NT P T t=1 e yte ε 0 t − 1 NT P T t=1 Ee yte ε 0 t I N ⊗ Ω 0 −1 J 0 0 + 0 ∗ ∗ ∗ 0 0 ∗ ∗ 0 0 J 0 1 C 1 NT P T t=1 e Zt e Z 0 t − 1 NT P T t=1 E e Zt e Z 0 t C 0 ⊗S N Λ 0 0 I N ⊗ Ω 0 −1 S N Λ 0 J 1 ∗ 0 0 J 0 W 1 NT P T t=1 e yt e Z 0 t − 1 NT P T t=1 Ee yt e Z 0 t C 0 ⊗S N Λ 0 0 I N ⊗ Ω 0 −1 S N Λ 0 J 1 0 + 0 ∗ ∗ ∗ 0 0 ∗ ∗ 0 0 0 ∗ 0 0 0 J 0 W 1 NT P T t=1 e yte y 0 t − 1 NT P T t=1 Ee yte y 0 t W 0 ⊗S N Λ 0 0 I N ⊗ Ω 0 −1 S N Λ 0 J where e ut = S −1 N Λ 0 e εt,e yt = S −1 N Ψ 0 0 I N ⊗γ 0 C e Zt + S −1 N Ψ 0 0 S −1 N Λ 0 e εt. By Lemma 3, − 1 NT H θ,NT lnL θ 0 −E − 1 NT H θ,NT lnL θ 0 =Op 1 √ NT . 
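The $O_p\!\left(\frac{1}{\sqrt{NT}}\right)$ rate just obtained is the usual root-sample-size concentration of an average of $NT$ centered terms. A toy Monte Carlo (iid shocks for simplicity, a stronger assumption than the martingale structure actually used here) shows the scaling: quadrupling the sample size roughly halves the standard deviation of the average.

```python
import numpy as np

rng = np.random.default_rng(42)

def sd_of_average(n_obs, reps=4000):
    # Monte Carlo sd of an average of n_obs iid mean-zero terms
    draws = rng.standard_normal((reps, n_obs))
    return draws.mean(axis=1).std()

sd_100 = sd_of_average(100)    # think of NT = 100
sd_400 = sd_of_average(400)    # NT quadrupled
# 1/sqrt(NT) rate: quadrupling the sample should roughly halve the sd
assert 1.8 < sd_100 / sd_400 < 2.2
```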
197 To show (A.30), − 1 NT H θ,NT lnL (θ)− − 1 NT EH θ,NT lnL(θ) = D 0 M J 0 I N ⊗ Ω −1 ⊗ I N ⊗ Ω −1 1 NT P T t=1 e εt (θ)e ε 0 t (θ)− 1 NT P T t=1 Ee εt (θ)e ε 0 t (θ) I N ⊗ Ω −1 JD M ∗ ∗ ∗ J 0 WS −1 N (Λ) 1 NT P T t=1 e εt (θ)e ε 0 t (θ)− 1 NT P T t=1 Ee εt (θ)e ε 0 t (θ) ⊗I MN J Ω −1 ⊗ Ω −1 D M 0 ∗ ∗ J 0 1 C 1 NT P T t=1 e Zte ε 0 t (θ)− 1 NT P T t=1 E e Zte ε 0 t (θ) ⊗S N (Λ) 0 J Ω −1 ⊗ Ω −1 D M 0 0 ∗ J 0 W 1 NT P T t=1 e yte ε 0 t (θ)− 1 NT P T t=1 Ee yte ε 0 t (θ) ⊗S N (Λ) 0 J Ω −1 ⊗ Ω −1 D M 0 0 0 + 0 ∗ ∗ ∗ 0 J 0 WS −1 N (Λ) 1 NT P T t=1 e εt (θ)e ε 0 t (θ)− 1 NT P T t=1 E e εt (θ)e ε 0 t (θ) S −1 N (Λ) 0 W 0 ⊗ I N ⊗ Ω −1 J ∗ ∗ 0 J 0 1 C 1 NT P T t=1 e Zte ε 0 t (θ)− 1 NT P T t=1 E e Zte ε 0 t (θ) S −1 N (Λ) 0 W⊗S N (Λ) 0 I N ⊗ Ω −1 J +J 0 1 C 1 NT P T t=1 e Zte ε 0 t (θ)− 1 NT P T t=1 E e Zte ε 0 t (θ) I N ⊗ Ω −1 ⊗ W JK M,M 0 ∗ 0 J 0 W 1 NT P T t=1 e yte ε 0 t (θ)− 1 NT P T t=1 Ee yte ε 0 t (θ) S −1 N (Λ) 0 W 0 ⊗S N (Λ) 0 (I N ⊗ Ω −1 ) J +K M,M J 0 W 0 ⊗ W 1 NT P T t=1 e yte ε 0 t (θ)− 1 NT P T t=1 Ee yte ε 0 t (θ) I N ⊗ Ω −1 J 0 0 + 0 ∗ ∗ ∗ 0 0 ∗ ∗ 0 0 J 0 1 C 1 NT P T t=1 e Zt e Z 0 t − 1 NT P T t=1 E e Zt e Z 0 t C 0 ⊗S N (Λ) 0 I N ⊗ Ω −1 S N (Λ) J 1 ∗ 0 0 J 0 W 1 NT P T t=1 e yt e Z 0 t − 1 NT P T t=1 Ee yt e Z 0 t C 0 ⊗S N (Λ) 0 I N ⊗ Ω −1 S N (Λ) J 1 0 + 0 ∗ ∗ ∗ 0 0 ∗ ∗ 0 0 0 ∗ 0 0 0 J 0 W 1 NT P T t=1 e yte y 0 t − 1 NT P T t=1 Ee yte y 0 t W 0 ⊗S N (Λ) 0 I N ⊗ Ω −1 S N (Λ) J e εt (θ) = (I MN +R N (θ))e εt +S N (Λ)G N (θ)C e Zt,e yt =S −1 N Ψ 0 0 I N ⊗γ 0 C e Zt +S −1 N Ψ 0 0 S −1 N Λ 0 e εt. By Lemma 3, sup θ∈Θ − 1 NT H θ,NT lnL (θ) −E − 1 NT H θ,NT lnL (θ) i,j = Op 1 √ NT ,i,j = 1,..., (2 +p +q)M 2 +M (M + 1)/2 +kM, because Θ is bounded. To see (A.31), using (A.28), − 1 NT H θ,NT lnL (θ) − − 1 NT H θ,NT lnL θ 0 = θ−θ 0 Op (1). E − 1 NT H θ,NT lnL (θ) − E − 1 NT H θ,NT lnL θ 0 is the expected value of the difference of − 1 NT H θ,NT lnL (θ) − − 1 NT H θ,NT lnL θ 0 . By Lemma 3, all the terms under expectation are of orders no bigger than Op (1). 
Therefore, $\sup_{\theta\in N(\theta^0)}\left[E\left(-\frac{1}{NT}H_{\theta,NT}\ln L(\theta)\right)-E\left(-\frac{1}{NT}H_{\theta,NT}\ln L\left(\theta^0\right)\right)\right]_{i,j}=\sup_{\theta\in N(\theta^0)}\left(\theta-\theta^0\right)O_p(1)$, for $i,j=1,2,\ldots,(2+p+q)M^2+M(M+1)/2+kM$, because $N\left(\theta^0\right)$ is bounded.

Proof of Lemma 7: To begin with, we cite the scalar martingale CLT of Gänsler & Stute (1977), which says: Let $\left\{X_{i,n},\mathcal{F}_{i,n},\ 1\le i\le K_n,\ n\ge 1\right\}$ be a square integrable martingale difference array. Suppose that for all $\epsilon>0$,

(i) $\sum_{i=1}^{K_n}E\left[X_{i,n}^2\, I\left\{\left|X_{i,n}\right|>\epsilon\right\}\,\middle|\,\mathcal{F}_{i-1,n}\right]\rightarrow_p 0$
Ω −1/2 a→ p a 0 Ω −1/2 ΩΩ −1/2 a =a 0 a In addition, we verify (i 0 ), for all 0 > 0, Kn X i=1 E h a 0 Ω −1/2 ξ n,k 2+δ i ≤ d max 1≤i≤d |a i | Ω −1/2 ∞ 2+δ Kn X i=1 E " max 1≤j≤d ξ n,k,j 2+δ # ≤ d max 1≤i≤d |a i | Ω −1/2 ∞ 2+δ d X j=1 Kn X i=1 E h ξ n,k,j 2+δ i → p 0 for some δ> 0 by C1 0 Kn X k=1 E h ξ n,k,j 2+δ i = Kn X k=1 E h E h ξ n,k,j 2+δ F n,k−1 ii → p 0 for some δ> 0 Proof of Lemma 8: 199 Consider the form √ NTQ NT = T X t=1 J 0 vec UtV 0 t − Σ + T X t=1 J 0 vec Utb 0 t + T X t=1 J 0 vec UtH 0 t−1 = T X t=1 N X v=1 {Vvt⊗Uvt−vecΣvv +bvt⊗Uvt +H vt−1 ⊗Uvt} = T X t=1 N X v=1 (Bv•⊗Mv•) εt⊗εt−vec I N ⊗ Ω 0 + [bvt +H vt−1 ]⊗ (Mv•εt) = T X t=1 N X v=1 N X i=1 N X j=1,j6=i (B vi ε it ⊗M vj ε jt ) + T X t=1 N X v=1 N X i=1 vec M vi ε it ε 0 it − Ω 0 B 0 vi + T X t=1 N X v=1 (bvt +H vt−1 )⊗ N X i=1 M vi ε it = NT X k=1 ξ NT,k(i,t) = T X t=1 N X i=1 ξ NT,it ξ NT,it = N X j=1,j6=i N X v=1 B vi ⊗M vj ! (ε it ⊗ε jt ) + N X v=1 B vi ⊗M vi ! (ε it ⊗ε it )−vecΩ 0 + N X v=1 (bvt +H vt−1 )⊗M vi ε it μ Q NT =E (Q NT ) = 0 Σ Q,NT =E Q NT Q 0 NT = 1 NT P T t=1 P N v=1 P N u=1 E [Vvt⊗Uvt−vecΣvv ] [Vut⊗Uut−vecΣuu] 0 + P T t=1 P N v=1 P N u=1 E [bvt⊗Uvt] [but⊗Uut] 0 + P T t=1 P N v=1 P N u=1 E [H vt−1 ⊗Uvt] [H ut−1 ⊗Uut] 0 = 1 NT P T t=1 P N v=1 P N u=1 E h (Bv•⊗Mv•) εt⊗εt−vec I N ⊗ Ω 0 ε 0 t ⊗ε 0 t −vec I N ⊗ Ω 0 0 (B 0 u• ⊗M 0 u• ) i + P T t=1 P N v=1 P N u=1 E bvtb 0 ut ⊗Mv•εtε 0 t M 0 u• + P T t=1 P N v=1 P N u=1 P ∞ g=0 P ∞ h=0 E Ag,v•ε t−1−g ε 0 t−1−h A 0 h,u• ⊗ Mv•εtε 0 t M 0 u• = 1 NT P T t=1 P N v=1 P N u=1 E h (Bv•⊗Mv•) εtε 0 t ⊗εtε 0 t −vec I N ⊗ Ω 0 vec I N ⊗ Ω 0 0 (B 0 u• ⊗M 0 u• ) i + P T t=1 P N v=1 P N u=1 bvtb 0 ut ⊗Mv• I N ⊗ Ω 0 M 0 u• +T P N v=1 P N u=1 P ∞ g=0 Ag,v• I N ⊗ Ω 0 A 0 g,u• ⊗Mv• I N ⊗ Ω 0 M 0 u• → p Σ Q 200 To identify martingale, define the σ-fieldF NT,it =σ{ε 11 ,...,ε N1 ,...,ε 1t−1 ,...,ε Nt−1 ,ε 1t ,...,ε it }, then 1≤ k≤NT,E(ξ NT,k F NT,k−1 ) = 0, becauseE ξ NT,it F NT,i−1,t = 0 andE ξ NT,it F NT,N,t−1 = 0, therefore ξ NT,it ,F NT,it , 1≤i≤N, 
1≤t≤T is M 2 -dimensional square integrable martingale difference array. Next, we show that the d-dimensional Martingale CLT of Lemma 7 applies by verifying the conditions. To verify (C1 0 ) in Lemma 7, C1 0 N X i=1 T X t=1 E 1 √ NT ξ NT,it,d 2+δ → p 0, for some δ> 0,as n→∞ for all d = 1,...,M 2 . for all d = (a− 1)M +b and a,b = 1,...,M, ξ NT,it,d = N X j=1,j6=i N X v=1 B vi,a• ⊗M vj,b• ! (ε it ⊗ε jt ) + N X v=1 B vi,a• ⊗M vi,b• ! (ε it ⊗ε it )−vecΩ 0 + N X v=1 b a vt +H a vt−1 M vi,b• ε it = N X j=1,j6=i B 0 ij,d (ε it ⊗ε jt ) +B 0 ii,d (ε it ⊗ε it )−vecΩ 0 + C 0 ti,d +D 0 t−1i,d ε it where B 0 ij,d = P N v=1 B vi,a• ⊗M vj,b• denotes the d-th row of P N v=1 B vi ⊗M vj , C 0 ti,d = P N v=1 b a vt M vi,b• , D 0 t−1i,d = P N v=1 H a vt−1 M vi,b• . ξ NT,it,d ≤ M X m,r=1 N X j=1,j6=i B mr ij,d ε m it ε r jt + M X m,r=1 B mr ii,d |ε m it ε r it −ω mr | + M X m=1 C m ti,d +D m t−1i,d |ε m it | By Hölder inequality, ξ NT,it,d ≤ M X m=1 C m ti,d +D m t−1i,d p + M X m,r=1 N X j=1 B mr ij,d !1 p P M m=1 ε m it q + P M m,r=1 B mr ii,d ε m it ε r it −ω mr q + P M m,r=1 P N j=1,j6=i B mr ij,d ε m it q ε r jt q 1 q E ξ NT,it,d q ≤E M X m=1 C m ti,d +D m t−1i,d p + M X m,r=1 N X j=1 B mr ij,d ! q p P M m=1 E ε m it q + P M m,r=1 B mr ii,d E ε m it ε r it −ω mr q + P M m,r=1 P N j=1,j6=i B mr ij,d E ε m it q E ε r jt q let q = 2 + δ for some δ > 0, the 4 + δ moments of ε it exist by assumption, and P M m,r=1 P N j=1 B mr ij,d = Op (1), because l 0 M 2 P N v=1 B vi ⊗M vj l M 2 = vec M 0 I N ⊗l M l 0 M B ji 0 l M 2 = Op (1), where M 0 I N ⊗l M l 0 M B ji denotes the ji-th M × M-block of M 0 I N ⊗l M l 0 M B, which is UB, and l 0 M 2 P N j=1 P N v=1 B vi ⊗M vj l M 2 = P N j=1 vec M 0 I N ⊗l M l 0 M B ji 0 l M 2 = 201 vec M 0 I N ⊗l M l 0 M B i 0 l M 2 = Op (1), where M 0 I N ⊗l M l 0 M B i denotes the i-th M-row of M 0 I N ⊗l M l 0 M B, which is UB. 
Therefore, E ξ it,d 2+δ ≤Op (1)E P M m=1 C m ti,d +D m t−1i,d p +Op (1) q p Then, by Loeve’s cr inequality, E " M X m=1 C m ti,d +D m t−1i,d p +Op (1) # q p ≤ (M + 1) q p −1 M X m=1 E C m ti,d +D m t−1i,d q +Op (1) ≤ (M + 1) q p −1 2 q−1 M X m=1 C m ti,d q +E D m t−1i,d q +Op (1) Lastly, P M m=1 C m ti,d 2+δ = Op (1), because l 0 M 2 P N v=1 bvt⊗M vi l M = vec M 0 I N ⊗l M l 0 M bt i 0 l M = Op (1), where M 0 I N ⊗l M l 0 M bt i denotes the i-th M-row of M 0 I N ⊗l M l 0 M bt, which is UB, and P M m=1 E D m t−1i,d 2+δ = Op (1), because H vt−1 H 0 vt−1 ⊗H vt−1 H 0 vt−1 = P ∞ g=0 (Ag,v•⊗Ag,v•) ε t−1−g ε 0 t−1−g ⊗ε t−1−g ε 0 t−1−g A 0 g,v• ⊗A 0 g,v• exists, E H a vt−1 4 = Op (1) and E H a vt−1 2+δ = Op (1), and l 0 M 2 P N v=1 H vt−1 ⊗M vi l M = vec M 0 I N ⊗l M l 0 M H t−1 i 0 l M = Op (1), where M 0 I N ⊗l M l 0 M H t−1 i denotes the i-th M-row of M 0 I N ⊗l M l 0 M H t−1 , which is UB. So, E ξ NT,it,d 2+δ =Op (1), and P N i=1 P T t=1 E 1 √ NT ξ NT,it,d 2+δ =Op NT 1− 2+δ 2 → p 0. To verify (C2) in Lemma 7, (C2) 1 NT T X t=1 N X i=1 E ξ NT,it ξ 0 NT,it F NT,k−1 → p Σ Q ,as (N,T )→∞. T X t=1 N X i=1 E 1 NT ξ NT,it ξ 0 NT,it F NT,k−1 = 1 NT T X t=1 N X i=1 N X v=1 N X u=1 E P N j=1,j6=i (B vi ⊗M vj ) (ε it ⊗ε jt ) + (B vi ⊗M vi ) (ε it ⊗ε it )−vecΩ 0 + (bvt +H vt−1 )⊗M vi ε it P N r=1,r6=i ε 0 it ⊗ε 0 rt B 0 ui ⊗M 0 ur + ε 0 it ⊗ε 0 it − vecΩ 0 0 B 0 ui ⊗M 0 ui + b 0 ut +H 0 ut−1 ⊗ (M ui ε it ) 0 = 1 NT T X t=1 N X i=1 N X v=1 N X u=1 E P N j=1 (B vi ⊗M vj ) (ε it ⊗ε jt )− (B vi ⊗M vi )vecΩ 0 + (bvt +H vt−1 )⊗M vi ε it P N r=1 ε 0 it ⊗ε 0 rt B 0 ui ⊗M 0 ur − vecΩ 0 0 B 0 ui ⊗M 0 ui + b 0 ut +H 0 ut−1 ⊗ (M ui ε it ) 0 = 1 NT T X t=1 N X v=1 N X u=1 E h (Bv•⊗Mv•) εt⊗εt−vec I N ⊗ Ω 0 ε 0 t ⊗ε 0 t −vec I N ⊗ Ω 0 0 (B 0 u• ⊗M 0 u• ) i +E bvtb 0 ut ⊗Mv•εtε 0 t M 0 u• + P ∞ g=0 P ∞ h=0 E Ag,v•ε t−1−g ε 0 t−1−h A 0 h,u• ⊗ Mv•εtε 0 t M 0 u• = Σ Q,NT → p Σ Q 202 Proof of Lemma 9: A = A 1 ··· A p−1 Ap I MN(p−1)×MN(p−1) 0 MN(p−1)×MN ! 
, A j (θ) = S −1 N (Ψ 0 ) ((I N ⊗ Φ j ) + (I N ⊗ Ψ j )W), for j = 1, 2,...p A j θ 0 − A j (θ) =S −1 N (Ψ 0 ) I N ⊗ Φ 0 j − Φ j + I N ⊗ Ψ 0 j − Ψ j W +S −1 N (Ψ 0 ) I N ⊗ Ψ 0 0 − Ψ 0 WA j θ 0 W is UB uniformly andS −1 N (Ψ 0 ) is UB uniformly in Θ (Ψ 0 ), therefore assume that A j θ 0 < 1, then for any ε> 0, there exists a small neighbourhood on which A j θ 0 − A j (θ) <ε, for j = 1, 2,...p. Consequently, it implies thatkA (θ)k< 1 uniformly in N and in the neighbourhood ofθ 0 . Denote sup θ∈N(θ 0 ) kA (θ)k =δ, δ< 1 implies that (I MNp − A (θ)) −1 = P ∞ h=0 A (θ) h exists. sup θ∈N(θ 0 ) ∞ X h=0 A (θ) h ≤ ∞ X h=0 sup θ∈N(θ 0 ) kA (θ)k h = ∞ X h=0 δ h = 1 1−δ <∞ ∞ X h=0 hA (θ) h−1 − A (θ) ∞ X h=0 hA (θ) h−1 = I MNp − A (θ) ∞ X h=0 hA (θ) h−1 = ∞ X h=0 A (θ) h implies ∞ X h=0 hA (θ) h−1 = I MNp − A (θ) −1 ∞ X h=0 A (θ) h = ∞ X h=0 A (θ) h ! 2 and implies sup θ∈N(θ 0 ) ∞ X h=0 hA (θ) h−1 ≤ sup θ∈N(θ 0 ) ∞ X h=0 A (θ) h 2 ≤ 1 (1−δ) 2 <∞ 203 A.6 Proof of Theorems Proof of Proposition 1: By (2.5), e yt =S −1 N Ψ 0 0 e μ 0 t +S −1 N Λ 0 e εt substitute to get e εt (θ) = S N (Λ) S N (Ψ 0 )e yt−e μ t = S N (Λ)S N (Ψ 0 )S −1 N (Ψ 0 0 )S −1 N (Λ 0 )e εt +S N (Λ)S N (Ψ 0 )S −1 N (Ψ 0 0 ) I N ⊗γ 0 C e Zt −S N (Λ) (I N ⊗γ)C e Zt Therefore, e ε 0 t (θ) I N ⊗ Ω −1 e εt (θ) = e ε 0 t S −1 N (Λ 0 ) 0 S −1 N (Ψ 0 0 ) 0 Σ −1 y (θ)S −1 N (Ψ 0 0 )S −1 N (Λ 0 )e εt +e ε 0 t S −1 N (Λ 0 ) 0 S −1 N (Ψ 0 0 ) 0 S N (Ψ 0 ) 0 Σ −1 u (θ)G N (θ)C e Zt + e Z 0 t C 0 G N (θ) 0 Σ −1 u (θ)S N (Ψ 0 )S −1 N (Ψ 0 0 )S −1 N (Λ 0 )e εt + e Z 0 t C 0 G N (θ) 0 Σ −1 u (θ)G N C e Zt where Σu (θ) =S −1 N (Λ) 0 (I N ⊗ Ω)S −1 N (Λ), Σ 0 u = Σu θ 0 Σy (θ) =S −1 N (Ψ 0 ) 0 Σu (θ)S −1 N (Ψ 0 ), Σ 0 y = Σy θ 0 G N (θ) =S N (Ψ 0 )S −1 N (Ψ 0 0 ) I N ⊗γ 0 − (I N ⊗γ) = I N ⊗ γ 0 −γ + I N ⊗ Ψ 0 0 − Ψ 0 WS −1 N (Ψ 0 0 ) I N ⊗γ 0 then by Lemma.A.3, 1 NT e ε 0 t (θ) I N ⊗ Ω −1 e εt (θ)−E 1 NT e ε 0 t (θ) I N ⊗ Ω −1 e εt (θ) p − → 0,uniformly inθ∈ Θ consequently, 1 NT lnL(θ)−Q NT (θ) p − → 0 204 where Q NT (θ) =E 1 NT lnL 
$(\theta) = -\frac{M}{2}\ln(2\pi) - \frac{1}{2N}\ln\det(I_N \otimes \Omega) + \frac{1}{N}\ln\det\left(S_N(\Psi_0)\right) + \frac{1}{N}\ln\det\left(S_N(\Lambda)\right) - \frac{1}{2}\,\frac{1}{NT}\sum_{t=1}^{T} E\,\widetilde{\varepsilon}_t'(\theta)\left(I_N \otimes \Omega^{-1}\right)\widetilde{\varepsilon}_t(\theta)$, and
$$
\frac{1}{NT}\sum_{t=1}^{T} E\,\widetilde{\varepsilon}_t'(\theta)\left(I_N \otimes \Omega^{-1}\right)\widetilde{\varepsilon}_t(\theta) = \frac{1}{N}\,\frac{T-1}{T}\,\mathrm{tr}\left(\Sigma_y^{-1}(\theta)\,\Sigma_y^0\right) \quad \text{(A.70)}
$$
$$
+ \frac{2}{NT}\sum_{t=1}^{T} E\,\widetilde{Z}_t' C' G_N(\theta)' \Sigma_u^{-1}(\theta) S_N(\Psi_0) S_N^{-1}(\Psi_0^0) S_N^{-1}(\Lambda^0)\,\widetilde{\varepsilon}_t \quad \text{(A.71)}
$$
$$
+ \frac{1}{NT}\sum_{t=1}^{T} E\,\widetilde{Z}_t' C' G_N(\theta)' \Sigma_u^{-1}(\theta) G_N(\theta) C \widetilde{Z}_t \quad \text{(A.72)}
$$
By Lemma A.3, (A.71) $= O_p\left(\frac{1}{T}\right)$ uniformly in $\theta$ in $\Theta$, because it is a polynomial function in $\theta$ and $\Theta$ is a bounded set. To show the uniform equicontinuity of $Q_{NT}(\theta)$, it suffices to show the following. First, denote by $\bar{\Omega}$ a point that lies between $\Omega_1$ and $\Omega_2$; then
$$
\frac{1}{N}\ln\det\left(I_N \otimes \Omega_2^{-1}\right) - \frac{1}{N}\ln\det\left(I_N \otimes \Omega_1^{-1}\right) = -\frac{1}{N}\,\mathrm{tr}\left(\left(I_N \otimes (\Omega_2 - \Omega_1)\right)\bar{\Omega}^{-1}\right),
$$
which is bounded, because $\Omega$ is bounded away from 0 in $\Theta$. Therefore, $\frac{1}{N}\ln\det\left(I_N \otimes \Omega^{-1}\right)$ is uniformly continuous. Second, denote by $\bar{\Psi}_0$ a point that lies between $\Psi_{0,1}$ and $\Psi_{0,2}$; then
$$
\frac{1}{N}\ln\det\left(S_N(\Psi_{0,2})\right) - \frac{1}{N}\ln\det\left(S_N(\Psi_{0,1})\right) = -\frac{1}{N}\,\mathrm{tr}\left(W S_N^{-1}(\bar{\Psi}_0)\left(I_N \otimes (\Psi_{0,2} - \Psi_{0,1})\right)\right),
$$
which is bounded, because $S_N^{-1}(\bar{\Psi}_0)$ is UB uniformly in $\theta$ in $\Theta$. Therefore, $\frac{1}{N}\ln\det\left(S_N(\Psi_0)\right)$ is uniformly continuous. Third, similarly denote by $\bar{\Lambda}$ a point that lies between $\Lambda_1$ and $\Lambda_2$; then
$$
\frac{1}{N}\ln\det\left(S_N(\Lambda_2)\right) - \frac{1}{N}\ln\det\left(S_N(\Lambda_1)\right) = -\frac{1}{N}\,\mathrm{tr}\left(W S_N^{-1}(\bar{\Lambda})\left(I_N \otimes (\Lambda_2 - \Lambda_1)\right)\right),
$$
which is bounded, because $S_N^{-1}(\bar{\Lambda})$ is UB uniformly in $\theta$ in $\Theta$. Therefore, $\frac{1}{N}\ln\det\left(S_N(\Lambda)\right)$ is uniformly continuous. Fourth, (A.70) $= \frac{1}{N}\,\frac{T-1}{T}\,\mathrm{tr}\left(\Sigma_y^{-1}(\theta)\,\Sigma_y^0\right)$, with $\mathrm{tr}\left(\Sigma_y^{-1}(\theta)\,\Sigma_y^0\right) = \mathrm{tr}\left(S_N(\Psi_0)' S_N(\Lambda)'\left(I_N \otimes \Omega^{-1}\right) S_N(\Lambda) S_N(\Psi_0)\,\Sigma_y^0\right)$, is uniformly equicontinuous, because
$$
\frac{1}{N}\,\mathrm{tr}\left(\Sigma_y^{-1}(\theta_2)\,\Sigma_y^0\right) - \frac{1}{N}\,\mathrm{tr}\left(\Sigma_y^{-1}(\theta_1)\,\Sigma_y^0\right) = -\frac{1}{N}\,\mathrm{tr}\left(S_N(\bar{\Psi}_0)' S_N(\bar{\Lambda})'\left(I_N \otimes \bar{\Omega}^{-1}(\Omega_2 - \Omega_1)\bar{\Omega}^{-1}\right) S_N(\bar{\Lambda}) S_N(\bar{\Psi}_0)\,\Sigma_y^0\right) - \frac{2}{N}\,\mathrm{tr}\left(S_N(\bar{\Psi}_0)' S_N(\bar{\Lambda})'\left(I_N \otimes \bar{\Omega}^{-1}\right)\left(I_N \otimes (\Lambda_2 - \Lambda_1)\right) W S_N(\bar{\Psi}_0)\,\Sigma_y^0\right) - \frac{2}{N}\,\mathrm{tr}\left(S_N(\bar{\Psi}_0)' S_N(\bar{\Lambda})'\left(I_N \otimes \bar{\Omega}^{-1}\right) S_N(\bar{\Lambda})\left(I_N \otimes (\Psi_{0,2} - \Psi_{0,1})\right) W \Sigma_y^0\right),
$$
which is bounded because the matrices inside the traces are UB.
Lastly,
$$
\text{(A.72)} = \frac{1}{NT}\sum_{t=1}^{T} E\,\widetilde{Z}_t' C' G_N(\theta)' \Sigma_u^{-1}(\theta) G_N(\theta) C \widetilde{Z}_t = \frac{1}{NT}\sum_{t=1}^{T} E\, h_t' \begin{pmatrix} \left(I_N \otimes (\gamma^0 - \gamma)\right)' \\ \left(I_N \otimes (\Psi_0^0 - \Psi_0)\right)' \end{pmatrix} \Sigma_u^{-1}(\theta) \left( I_N \otimes (\gamma^0 - \gamma),\; I_N \otimes (\Psi_0^0 - \Psi_0) \right) h_t,
$$
where
$$
h_t = \begin{pmatrix} C \widetilde{Z}_t \\ W S_N^{-1}(\Psi_0^0)\left(I_N \otimes \gamma^0\right) C \widetilde{Z}_t \end{pmatrix} \quad \text{(A.73)}
$$
(A.72) is uniformly equicontinuous, because $\gamma$ and $\Psi_0$ are bounded and because $H = \frac{1}{NT}\sum_{t=1}^{T} E\, h_t h_t'$ is $O_p(1)$. In sum, $Q_{NT}(\theta)$ is uniformly equicontinuous in $\theta$ in any compact parameter space $\Theta$.

Proof of Theorem 1:
$$
Q_{NT}(\theta) - Q_{NT}(\theta^0) = E\,\frac{1}{NT}\ln L(\theta) - E\,\frac{1}{NT}\ln L(\theta^0)
$$
$$
= -\frac{1}{2N}\ln\det(I_N \otimes \Omega) + \frac{1}{N}\ln\det\left(S_N(\Psi_0)\right) + \frac{1}{N}\ln\det\left(S_N(\Lambda)\right) - \frac{1}{2}\,\frac{1}{NT}\sum_{t=1}^{T} \mathrm{tr}\left(\left(I_N \otimes \Omega^{-1}\right) E\,\widetilde{\varepsilon}_t(\theta)\widetilde{\varepsilon}_t'(\theta)\right)
$$
$$
- \left\{ -\frac{1}{2N}\ln\det\left(I_N \otimes \Omega^0\right) + \frac{1}{N}\ln\det\left(S_N(\Psi_0^0)\right) + \frac{1}{N}\ln\det\left(S_N(\Lambda^0)\right) - \frac{1}{2}\,\frac{1}{NT}\sum_{t=1}^{T} \mathrm{tr}\left(\left(I_N \otimes \Omega^0\right)^{-1} E\,\widetilde{\varepsilon}_t\widetilde{\varepsilon}_t'\right) \right\}
$$
$$
= \Delta_1 + \Delta_2 + o(1),
$$
where both $\Delta_1$ and $\Delta_2$ are always less than or equal to 0.
$$
\Delta_1 = -\frac{1}{2NT}\sum_{t=1}^{T} E\,\widetilde{Z}_t' C' G_N(\theta)' \Sigma_u^{-1}(\theta) G_N(\theta) C \widetilde{Z}_t = -\frac{1}{2NT}\sum_{t=1}^{T} E\, h_t' \begin{pmatrix} \left(I_N \otimes (\gamma^0 - \gamma)\right)' \\ \left(I_N \otimes (\Psi_0^0 - \Psi_0)\right)' \end{pmatrix} S_N(\Lambda)'\left(I_N \otimes \Omega^{-1}\right) S_N(\Lambda) \left( I_N \otimes (\gamma^0 - \gamma),\; I_N \otimes (\Psi_0^0 - \Psi_0) \right) h_t
$$
$H = \frac{1}{NT}\sum_{t=1}^{T} E\, h_t h_t'$ is nonsingular by assumption, which implies that $\Delta_1 \le 0$, and $\Delta_1 = 0$ if and only if $\Psi_0^0 = \Psi_0$ and $\gamma^0 = \gamma$, because $S_N(\Lambda)'\left(I_N \otimes \Omega^{-1}\right) S_N(\Lambda) = \Sigma_u^{-1}(\theta)$ is positive definite.
$$
\Delta_2 = \frac{1}{2N}\ln\det\left(I_N \otimes \Omega^0\right) - \frac{1}{2N}\ln\det(I_N \otimes \Omega) + \frac{1}{N}\ln\det\left(S_N(\Psi_0)\right) - \frac{1}{N}\ln\det\left(S_N(\Psi_0^0)\right) + \frac{1}{N}\ln\det\left(S_N(\Lambda)\right) - \frac{1}{N}\ln\det\left(S_N(\Lambda^0)\right) + \frac{M}{2} - \frac{1}{2N}\,\mathrm{tr}\left(\Sigma_y^{-1}(\theta)\,\Sigma_y^0\right)
$$
Consider a true model with $S_N(\Psi_0^0) y_t = u_t$ and $S_N(\Lambda^0) u_t = \varepsilon_t$, and denote by $E_p$ the expectation operator of this process.
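The sign of $\Delta_2$ is controlled by the Gaussian information inequality used next: for positive definite covariances, $\ln\det\Sigma + \mathrm{tr}\left(\Sigma^{-1}\Sigma^0\right)$ is minimized at $\Sigma = \Sigma^0$, where it equals $\ln\det\Sigma^0 + d$ in dimension $d$. A minimal numerical check of this inequality (the dimension and the random matrices below are arbitrary illustrations, not taken from the chapter):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4  # toy dimension, playing the role of MN

def rand_spd(rng, d):
    # Random symmetric positive-definite matrix (illustrative only).
    B = rng.standard_normal((d, d))
    return B @ B.T + d * np.eye(d)

Sigma0 = rand_spd(rng, d)  # "true" covariance

def exp_neg_loglik(Sigma):
    # ln det(Sigma) + tr(Sigma^{-1} Sigma0): the theta-dependent part of
    # -2 E_p ln L for a zero-mean Gaussian with working covariance Sigma
    # when the data are generated under Sigma0 (constants dropped).
    _, logdet = np.linalg.slogdet(Sigma)
    return logdet + np.trace(np.linalg.solve(Sigma, Sigma0))

base = exp_neg_loglik(Sigma0)  # equals ln det(Sigma0) + d at the truth
for _ in range(100):
    Sigma = rand_spd(rng, d)
    # Any other covariance gives a (weakly) larger value.
    assert exp_neg_loglik(Sigma) >= base - 1e-9
print("information inequality verified on 100 random covariances")
```

The check mirrors the identity used below: the difference $\mathrm{tr}\left(\Sigma^{-1}\Sigma^0\right) - \ln\det\left(\Sigma^{-1}\Sigma^0\right) - d = \sum_i (\lambda_i - \ln\lambda_i - 1) \ge 0$ over the (positive) eigenvalues $\lambda_i$ of $\Sigma^{-1}\Sigma^0$, with equality only at $\Sigma = \Sigma^0$.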
The log-likelihood function of this model is lnL (θ,α) = − MN 2 ln(2π)− 1 2 ln det (I N ⊗ Ω) + ln det (S N (Ψ 0 )) + ln det (S N (Λ)) − 1 2 (S N (Λ)S N (Ψ 0 )yt) 0 (I N ⊗ Ω −1 )S N (Λ)S N (Ψ 0 )yt Ep lnL (θ,α) =− MN 2 ln(2π)− 1 2 ln det (I N ⊗ Ω) + ln det (S N (Ψ 0 )) + ln det (S N (Λ))− 1 2 tr Σ −1 y (θ) Σ 0 y Ep lnL θ 0 ,α 0 =− MN 2 ln(2π)− 1 2 ln det I N ⊗ Ω 0 + ln det S N (Ψ 0 0 ) + ln det S N (Λ 0 ) − 1 2 tr (I NM ) By information inequality,Ep lnL (θ,α)≤Ep lnL θ 0 ,α 0 . We can see that Δ 2 is less than or equal to 0, because Δ 2 = 1 N Ep lnL (θ,α)− 1 N Ep lnL θ 0 ,α 0 ≤ 0 Δ 2 6= 0 by the assumption that ln det Σ 0 y 6= ln det tr 1 MN Σ −1 y (θ) Σ 0 y Σy (θ) Because Δ 2 6= 0 iff ln det Σ 0 y 6= ln det (Σy (θ)) +tr Σ −1 y (θ) Σ 0 y −MN = ln det (Σy (θ)) +MN tr 1 MN Σ −1 y (θ) Σ 0 y − 1 = ln det (Σy (θ)) + ln tr 1 MN Σ −1 y (θ) Σ 0 y MN = ln tr 1 MN Σ −1 y (θ) Σ 0 y MN det (Σy (θ)) = ln det tr 1 MN Σ −1 y (θ) Σ 0 y Σy (θ) Proof of Proposition 2: From the first order differentials, 207 1 NT D θ lnL (θ) = 1 NT 1 2 P T t=1 D 0 M J 0 vec I N ⊗ Ω −1 e εte ε 0 t − (I N ⊗ Ω) I N ⊗ Ω −1 P T t=1 J 0 vec I N ⊗ Ω −1 e εte ε 0 t S −1 N (Λ) 0 W 0 −S −1 N (Λ) 0 W 0 P T t=1 J 0 1 vec S N (Λ) 0 I N ⊗ Ω −1 e εt e Z 0 t C 0 P T t=1 J 0 vec S N (Λ) 0 I N ⊗ Ω −1 e εt e Z 0 t C 0 I N ⊗γ 00 S −1 N Ψ 0 0 0 W 0 + P T t=1 J 0 vec S N (Λ) 0 I N ⊗ Ω −1 e εte ε 0 t S −1 N Λ 0 0 S −1 N Ψ 0 0 0 W 0 −S −1 N (Ψ 0 ) 0 W 0 0 = 1 NT D θ lnL 0 (θ) + 1 NT D θ lnL 1 (θ) 1 NT D θ lnL (θ) is decomposed into two components, 1 NT D θ lnL 0 (θ) and 1 NT D θ lnL 1 (θ), with 1 NT ED θ lnL 0 θ 0 = 0, and 1 NT ED θ lnL 1 θ 0 =− 1 T B NT θ 0 0 +Op 1 NT 2 6= 0, where 1 NT D θ lnL 0 (θ) = 1 NT 1 2 P T t=1 D 0 M J 0 vec I N ⊗ Ω −1 εtε 0 t − (I N ⊗ Ω) I N ⊗ Ω −1 P T t=1 J 0 vec I N ⊗ Ω −1 εtε 0 t S −1 N (Λ) 0 W 0 −S −1 N (Λ) 0 W 0 P T t=1 J 0 1 vec S N (Λ) 0 I N ⊗ Ω −1 εtZ 0 t C 0 P T t=1 J 0 vec S N (Λ) 0 I N ⊗ Ω −1 εtZ 0 t C 0 I N ⊗γ 00 S −1 N Ψ 0 0 0 W 0 + P T t=1 J 0 vec S N (Λ) 0 I N ⊗ Ω −1 εtε 0 t S −1 N Λ 
0 0 S −1 N Ψ 0 0 0 W 0 −S −1 N (Ψ 0 ) 0 W 0 0 1 NT D θ lnL 1 (θ) =− 1 N 1 2 D 0 M J 0 vec I N ⊗ Ω −1 (¯ ε¯ ε 0 ) I N ⊗ Ω −1 J 0 vec I N ⊗ Ω −1 ¯ ε¯ ε 0 S −1 N (Λ) 0 W 0 J 0 1 vec S N (Λ) 0 I N ⊗ Ω −1 ¯ ε Q 00 + ¯ Q x0 +J 0 1 vec S N (Λ) 0 I N ⊗ Ω −1 P T−1 h=−∞ ¯ εε 0 h ¯ Q ε0 h J 0 vec S N (Λ) 0 I N ⊗ Ω −1 ¯ ε Q 00 + ¯ Q x0 I N ⊗γ 00 S −1 N Ψ 0 0 0 W 0 +J 0 vec S N (Λ) 0 I N ⊗ Ω −1 P T−1 h=−∞ ¯ εε 0 h ¯ Q ε0 h I N ⊗γ 00 S −1 N Ψ 0 0 0 W 0 +J 0 vec S N (Λ) 0 I N ⊗ Ω −1 ¯ ε¯ ε 0 S −1 N Λ 0 0 S −1 N Ψ 0 0 0 W 0 0 where B NT θ 0 = 1 2N D 0 M J 0 vec I N ⊗ Ω 0 −1 1 N J 0 vec S −1 N (Λ 0 ) 0 W 0 1 N J 0 1 vec S −1 N Ψ 0 0 0 R P ∞ h=0 A h0 R 0 (l 0 p ⊗I MN ,l 0 q ⊗ W 0 , 0 MN×MNK )C 0 1 N J 0 vec S −1 N Ψ 0 0 0 R P ∞ h=0 A h0 R 0 (l 0 p ⊗I MN ,l 0 q ⊗ W 0 , 0 MN×MNK )C 0 I N ⊗γ 00 S −1 N Ψ 0 0 0 W 0 + 1 N J 0 vec S −1 N Ψ 0 0 0 W 0 (A.74) T−1 X h=−∞ ¯ εε 0 h ¯ Q ε0 h = ¯ ε " T−1 X h=−∞ ε 0 h ¯ A ε0 −1,h ,..., T−p X h=−∞ ε 0 h ¯ A ε0 −p,h , T−1 X h=−∞ ε 0 h ¯ A ε0 −1,h W 0 ! ,..., T−q X h=−∞ ε 0 h ¯ A ε0 −q,h W 0 ! , 0 1×MNK # C 0 208 According to Lemma.A.2, 1 NT D θ lnL 1 θ 0 − 1 NT ED θ lnL 1 θ 0 =Op 1 √ NT 2 . Therefore, 1 √ NT D θ lnL 1 θ 0 = 1 √ NT ED θ lnL 1 θ 0 +Op 1 √ T =− r N T B NT θ 0 0 +Op r N T 3 ! +Op 1 √ T CLT of Lemma.A.8 applies to 1 √ NT D θ lnL 0 θ 0 , which implies 1 √ NT D θ lnL 0 θ 0 d →N 0,σ 2 θ 0 , where σ 2 θ 0 = lim T→∞ var 1 √ NT D θ lnL 0 θ 0 = lim T→∞ E 1 √ NT D θ lnL 0 θ 0 0 1 √ NT D θ lnL 0 θ 0 = lim T→∞ E 1 √ NT D θ lnL θ 0 0 1 √ NT D θ lnL θ 0 =I 0 + Δ 0 and 1 √ NT D θ lnL 0 θ 0 = 1 √ NT D θ lnL θ 0 − 1 √ NT D θ lnL 1 θ 0 = 1 √ NT D θ lnL θ 0 + r N T B NT θ 0 0 +Op r N T 3 ! +Op 1 √ T d →N 0,I 0 + Δ 0 Proof of Theorem 2: 1 NT H θ lnL θ = 1 NT EH θ lnL θ 0 + 1 NT H θ lnL θ − 1 NT H θ lnL θ 0 + 1 NT H θ lnL θ 0 − 1 NT EH θ lnL θ 0 By Lemma.A.6, 1 NT H θ lnL θ =−I θ,NT θ 0 + θ−θ 0 Op (1) +Op 1 √ NT 209 b θ NT → p θ 0 impliesθ−θ 0 =op (1).I θ θ 0 is nonsingular in the limit, therefore 1 NT H θ lnL θ is invertible when N and T goes to infinity. 
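The proof of Theorem 2 below rests on a mean-value (Taylor) expansion of the score around $\theta^0$, which gives $\widehat{\theta}_{NT} - \theta^0 \approx -\left[H_\theta \ln L\right]^{-1} D_\theta \ln L(\theta^0)'$. In a scalar Gaussian toy model with known variance, the log-likelihood is exactly quadratic in the mean, so this relation holds exactly rather than approximately; a minimal illustration (all numbers below are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
n, mu0, sigma = 500, 2.0, 1.5
x = rng.normal(mu0, sigma, n)  # simulated sample, truth mu0

# Gaussian log-likelihood in the mean (sigma known): score and Hessian
# at mu0 deliver the expansion hat(mu) - mu0 = -H^{-1} * score, which is
# exact here because ln L is quadratic in mu.
score = np.sum(x - mu0) / sigma**2   # D_mu ln L evaluated at mu0
hessian = -n / sigma**2              # H_mu ln L (constant in mu)
mu_hat = x.mean()                    # the MLE of the mean

assert np.isclose(mu_hat - mu0, -score / hessian)
print("one-step Newton relation holds exactly for the Gaussian mean")
```

In the panel QML setting the Hessian varies with $\theta$, so the same relation holds only up to the remainder terms tracked in the proof, but the algebra of inverting the (negative) Hessian against the score is identical.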
By Taylor expansion, and using that the score vanishes at the QML estimator,
$$
0 = \frac{1}{NT} D_\theta \ln L(\widehat{\theta}_{NT})' = \frac{1}{NT} D_\theta \ln L(\theta^0)' + \frac{1}{NT} H_\theta \ln L(\bar{\theta})\left(\widehat{\theta}_{NT} - \theta^0\right),
$$
where $\bar{\theta}$ lies between $\widehat{\theta}_{NT}$ and $\theta^0$. It implies
$$
\sqrt{NT}\left(\widehat{\theta}_{NT} - \theta^0\right) = -\left[\frac{1}{NT} H_\theta \ln L(\bar{\theta})\right]^{-1} \frac{1}{\sqrt{NT}} D_\theta \ln L(\theta^0)' = -\left[\frac{1}{NT} H_\theta \ln L(\bar{\theta})\right]^{-1}\left(\frac{1}{\sqrt{NT}} D_\theta \ln L_0(\theta^0)' - \sqrt{\frac{N}{T}}\, B_{NT}(\theta^0) + O_p\left(\max\left(\sqrt{\frac{N}{T^3}}, \frac{1}{\sqrt{T}}\right)\right)\right)
$$
$\frac{1}{\sqrt{NT}} D_\theta \ln L_0(\theta^0) = O_p(1)$ and $B_{NT}(\theta^0) = O_p(1)$, implying
$$
\widehat{\theta}_{NT} - \theta^0 = \frac{1}{\sqrt{NT}} O_p(1) + O_p\left(\frac{1}{\sqrt{NT}}\sqrt{\frac{N}{T}}\right) + \frac{1}{\sqrt{NT}} O_p\left(\max\left(\sqrt{\frac{N}{T^3}}, \frac{1}{\sqrt{T}}\right)\right) = O_p\left(\max\left(\frac{1}{\sqrt{NT}}, \frac{1}{T}\right)\right) \quad \text{(A.75)}
$$
It in turn implies $\frac{1}{NT} H_\theta \ln L(\bar{\theta}) = -I_{NT}(\theta^0) + O_p\left(\frac{1}{\sqrt{NT}}\right)$, and then
$$
\sqrt{NT}\left(\widehat{\theta}_{NT} - \theta^0\right) = \left(I_{NT}(\theta^0) + O_p\left(\frac{1}{\sqrt{NT}}\right)\right)^{-1}\left(\frac{1}{\sqrt{NT}} D_\theta \ln L_0(\theta^0)' - \sqrt{\frac{N}{T}}\left(B_{NT}(\theta^0) + O_p\left(\max\left(\sqrt{\frac{N}{T^3}}, \frac{1}{\sqrt{T}}\right)\right)\right)\right)
$$
$$
= \left(I_{NT}(\theta^0)^{-1} + O_p\left(\frac{1}{\sqrt{NT}}\right)\right)\left(\frac{1}{\sqrt{NT}} D_\theta \ln L_0(\theta^0)' - \sqrt{\frac{N}{T}}\left(B_{NT}(\theta^0) + O_p\left(\max\left(\sqrt{\frac{N}{T^3}}, \frac{1}{\sqrt{T}}\right)\right)\right)\right)
$$
$$
= -\sqrt{\frac{N}{T}}\left(I_{NT}(\theta^0)^{-1} B_{NT}(\theta^0) + O_p\left(\max\left(\sqrt{\frac{N}{T^3}}, \frac{1}{\sqrt{T}}\right)\right)\right) + I_{NT}(\theta^0)^{-1}\frac{1}{\sqrt{NT}} D_\theta \ln L_0(\theta^0)'
$$
$$
\sqrt{NT}\left(\widehat{\theta}_{NT} - \theta^0\right) + \sqrt{\frac{N}{T}}\left(I_{NT}(\theta^0)^{-1} B_{NT}(\theta^0) + O_p\left(\max\left(\sqrt{\frac{N}{T^3}}, \frac{1}{\sqrt{T}}\right)\right)\right) \xrightarrow{d} \left(I^0\right)^{-1} N\left(0,\; I^0 + \Delta^0\right)
$$

Proof of Theorem 3: From (2.10), $\alpha^0(\theta^0) = S_N(\Psi_0^0) y - \left(I_N \otimes \gamma^0\right) C Z$, and the QML estimator is $\widehat{\alpha}(\widehat{\theta}_{NT}) = S_N(\widehat{\Psi}_{0,NT}) y - \left(I_N \otimes \widehat{\gamma}_{NT}\right) C Z$. Using $S_N(\Psi_0^0) y_t = \alpha^0 + \left(I_N \otimes \gamma^0\right) C Z_t + S_N^{-1}(\Lambda^0)\varepsilon_t$,
$$
\widehat{\alpha}(\widehat{\theta}_{NT}) = \frac{1}{T}\sum_{t=1}^{T}\left[ S_N(\widehat{\Psi}_{0,NT}) S_N^{-1}(\Psi_0^0)\left(S_N(\Psi_0^0) y_t\right) - \left(I_N \otimes \widehat{\gamma}_{NT}\right) C Z_t \right],
$$
implying
$$
\widehat{\alpha}(\widehat{\theta}_{NT}) - \alpha^0 = \left(I_N \otimes (\Psi_0^0 - \widehat{\Psi}_{0,NT})\right) W S_N^{-1}(\Psi_0^0)\,\alpha^0 + \frac{1}{T}\sum_{t=1}^{T}\left[ I_N \otimes (\gamma^0 - \widehat{\gamma}_{NT}) + \left(I_N \otimes (\Psi_0^0 - \widehat{\Psi}_{0,NT})\right) W S_N^{-1}(\Psi_0^0)\left(I_N \otimes \gamma^0\right) \right] C Z_t + \frac{1}{T}\sum_{t=1}^{T}\left(I_N \otimes (\Psi_0^0 - \widehat{\Psi}_{0,NT})\right) W S_N^{-1}(\Psi_0^0) S_N^{-1}(\Lambda^0)\varepsilon_t + \frac{1}{T}\sum_{t=1}^{T} S_N^{-1}(\Lambda^0)\varepsilon_t
$$
The dominant term of $\widehat{\alpha}(\widehat{\theta}_{NT}) - \alpha^0$ is $\frac{1}{T}\sum_{t=1}^{T} S_N^{-1}(\Lambda^0)\varepsilon_t$, because $\alpha^0 = O_p(1)$, $\frac{1}{T}\sum_{t=1}^{T} Z_t = O_p(1)$, and $\Psi_0^0 - \widehat{\Psi}_{0,NT} = O_p\left(\max\left(\frac{1}{\sqrt{NT}}, \frac{1}{T}\right)\right)$, $\gamma^0 - \widehat{\gamma}_{NT} = O_p\left(\max\left(\frac{1}{\sqrt{NT}}, \frac{1}{T}\right)\right)$ by (A.75). Therefore
$$
\sqrt{T}\left(\widehat{\alpha}_{NT} - \alpha^0\right) \xrightarrow{d} N\left(0,\; S_N^{-1}(\Lambda^0)\left(I_N \otimes \Omega^0\right) S_N^{-1}(\Lambda^0)'\right)
$$

Proof of Theorem 4:
$$
\sqrt{NT}\left(\widetilde{\theta}_{NT} - \theta^0\right) = \sqrt{NT}\left(\widehat{\theta}_{NT} - \frac{1}{T}\left[\frac{1}{NT} E\,H_{\theta,NT}\ln L(\widehat{\theta}_{NT})\right]^{-1} B_{NT}(\widehat{\theta}_{NT}) - \theta^0\right)
$$
$$
= \sqrt{NT}\left(\widehat{\theta}_{NT} - \theta^0\right) + \sqrt{\frac{N}{T}}\left(I_{NT}(\theta^0)^{-1} B_{NT}(\theta^0) + O_p\left(\max\left(\sqrt{\frac{N}{T^3}}, \frac{1}{\sqrt{T}}\right)\right)\right) - \sqrt{\frac{N}{T}}\left[\frac{1}{NT} E\,H_{\theta,NT}\ln L(\widehat{\theta}_{NT})\right]^{-1} B_{NT}(\widehat{\theta}_{NT}) - \sqrt{\frac{N}{T}} I_{NT}(\theta^0)^{-1} B_{NT}(\theta^0) - O_p\left(\max\left(\sqrt{\frac{N}{T^3}}, \frac{1}{\sqrt{T}}\right)\right)
$$
→ d N 0, I 0 −1 I 0 + Δ 0 I 0 −1 , if − p N T 1 NT EH θ,NT lnL b θ NT −1 B NT b θ NT − p N T I NT θ 0 −1 B NT θ 0 + Op max p N T 3 , 1 √ T → p 0 when N T →κ, N T 2 → 0, and 1 T → 0. − r N T 1 NT EH θ,NT lnL b θ NT −1 B NT b θ NT − r N T I NT θ 0 −1 B NT θ 0 +Op max r N T 3 , 1 √ T !! = r N T I θ,NT θ 0 +O 1 √ T −1 B NT b θ NT − r N T I NT θ 0 −1 B NT θ 0 +Op max r N T 3 , 1 √ T !! = r N T I θ,NT θ 0 −1 B NT b θ NT −B NT θ 0 +Op max r N T 3 , 1 √ T !! 211 requiring that − p N T 1 NT EH θ,NT lnL b θ NT −1 B NT b θ NT − p N T I NT θ 0 −1 B NT θ 0 + Op max p N T 3 , 1 √ T → p 0 reduces to requiring that B NT b θ NT −B NT θ 0 → p 0 as N T → κ and N T 2 → 0. B NT (θ) = B Ω (θ) 0 ,B Λ (θ) 0 ,Bγ (θ) 0 ,B Ψ 0 (θ) 0 0 , where B Ω (θ) = 1 2 D 0 M vec Ω 0 −1 B Ω b θ NT −B Ω θ 0 =− 1 2 D 0 M Ω −1 ⊗ Ω −1 vec b Ω NT − Ω 0 B Λ (θ) = 1 N J 0 vec S −1 N (Λ 0 ) 0 W 0 B Λ b θ NT −B Λ θ 0 = 1 N J 0 WS −1 N Λ ⊗S −1 N Λ 0 W 0 vec I N ⊗ b Λ 0 NT − Λ 00 Bγ (θ) = 1 N J 0 1 vec S −1 N Ψ 0 0 0 R ∞ X h=0 A h0 M ! , denoteM = R 0 (l 0 p ⊗I MN ,l 0 q ⊗ W 0 , 0 MN×MNK )C 0 dBγ (θ) = 1 N J 0 1 M 0 ∞ X h=0 A h R 0 S −1 N (Ψ 0 )⊗S −1 N (Ψ 0 ) 0 W 0 ! JK M,M dvec (Ψ 0 ) + 1 N J 0 1 M 0 ∞ X h=0 hA h−1 ⊗S −1 N (Ψ 0 ) 0 R ! K MNp,MNp dvec (A) It’s noted that A = A (θ) is also a function in parameters vec (Ψ 0 ),vec (Φ 1 ),...,vec (Φp),vec (Ψ 1 ),...,vec (Ψp). Let S be a M 2 N 2 p 2 × M 2 N 2 p 2 full rank permutation matrix whose elements are 0 or 1 that shuffles vec (A 1 ) 0 ,...,vec (Ap) 0 , l 0 MN(p−1) , 0 0 MN(p−1)(MNp−1)×1 0 such that vec (A) = S vec (A 1 ) 0 ,...,vec (Ap) 0 ,l 0 MN(p−1) , 0 0 MN(p−1)(MNp−1)×1 0 , where A j = S −1 N (Ψ 0 ) ((I N ⊗ Φ j ) + (I N ⊗ Ψ j )W). Therefore, dvec (A) = S dvec (A 1 ) 0 ,,...,dvec (Ap) 0 , 0 0 M 2 N 2 p(p−1)×1 0 and dvec (A j (θ)) = A 0 j (θ) W 0 ⊗S −1 N (Ψ 0 ) dvec (I N ⊗ Ψ 0 ) + I MN ⊗S −1 N (Ψ 0 ) dvec (I N ⊗ Φ j ) + W⊗S −1 N (Ψ 0 ) dvec (I N ⊗ Ψ j ) 212 Bγ b θ NT −Bγ θ 0 = 1 N J 0 1 M 0 ∞ X h=0 A θ h R 0 S −1 N Ψ 0 ⊗S −1 N Ψ 0 0 W 0 ! 
vec I N ⊗ b Ψ 0 0,NT − Ψ 00 0 + 1 N J 0 1 M 0 ∞ X h=0 hA θ h−1 ⊗S −1 N Ψ 0 0 R ! K MNp,MNp S × A 0 1 θ W 0 ⊗S −1 N Ψ 0 . . . A 0 p θ W 0 ⊗S −1 N Ψ 0 0 M 2 N 2 p(p−1)×M 2 N 2 vec I N ⊗ b Ψ 0,NT − Ψ 0 0 + I MN ⊗S −1 N Ψ 0 vec I N ⊗ b Φ 1,NT − Φ 0 1 . . . I MN ⊗S −1 N Ψ 0 vec I N ⊗ b Φ p,NT − Φ 0 p 0 M 2 N 2 p(p−1)×1 + W⊗S −1 N Ψ 0 vec I N ⊗ b Ψ 1,NT − Ψ 0 1 . . . W⊗S −1 N Ψ 0 vec I N ⊗ b Ψ p,NT − Ψ 0 p 0 M 2 N 2 p(p−1)×1 B Ψ 0 (θ) = 1 N J 0 vec S −1 N (Ψ 0 ) 0 R ∞ X h=0 A h0 M I N ⊗γ 0 S −1 N (Ψ 0 ) 0 W 0 −S −1 N (Ψ 0 ) 0 W 0 ! dB Ψ 0 (θ) = 1 N J 0 vec dS −1 N (Ψ 0 ) 0 R ∞ X h=0 A h0 M I N ⊗γ 0 S −1 N (Ψ 0 ) 0 W 0 − W 0 !! (A.76) + 1 N J 0 vec S −1 N (Ψ 0 ) 0 R ∞ X h=0 A h0 M I N ⊗γ 0 dS −1 N (Ψ 0 ) 0 W 0 ! (A.77) + 1 N J 0 vec S −1 N (Ψ 0 ) 0 R ∞ X h=0 dA h0 M I N ⊗γ 0 S −1 N (Ψ 0 ) 0 W 0 ! (A.78) + 1 N J 0 vec S −1 N (Ψ 0 ) 0 R ∞ X h=0 A h0 M I N ⊗dγ 0 S −1 N (Ψ 0 ) 0 W 0 ! (A.79) (A.76) = 1 N J 0 WS −1 N (Ψ 0 ) (I N ⊗γ)M 0 ∞ X h=0 A h R 0 S −1 N (Ψ 0 )− WS −1 N (Ψ 0 ) ! ⊗S −1 N (Ψ 0 ) 0 W 0 ! dvec I N ⊗ Ψ 0 0 (A.77) = 1 N J 0 WS −1 N (Ψ 0 )⊗S −1 N (Ψ 0 ) 0 R ∞ X h=0 A h0 M I N ⊗γ 0 S −1 N (Ψ 0 ) 0 W 0 ! dvec I N ⊗ Ψ 0 0 and similar to above, (A.78) = 1 N J 0 WS −1 N (Ψ 0 ) (I N ⊗γ)M 0 ∞ X h=0 hA h−1 ⊗S −1 N (Ψ 0 ) 0 R ! K MNp,MNp (dvecA) 213 (A.79) = 1 N p X j=1 J 0 WS −1 N (Ψ 0 )⊗S −1 N (Ψ 0 ) 0 R ∞ X h=0 A h0 R 0 ! vec I N ⊗dΦ 0 j + 1 N p X j=1 J 0 WS −1 N (Ψ 0 )⊗S −1 N (Ψ 0 ) 0 R ∞ X h=0 A h0 R 0 W 0 ! vec I N ⊗dΨ 0 j Therefore, B Ψ 0 b θ NT −B Ψ 0 θ 0 = 1 N J 0 WS −1 N (Ψ 0 ) (I N ⊗γ)M 0 ∞ X h=0 A h R 0 S −1 N (Ψ 0 )− WS −1 N (Ψ 0 ) ! ⊗S −1 N (Ψ 0 ) 0 W 0 ! vec I N ⊗ b Ψ 0 0,NT − Ψ 00 0 + 1 N J 0 WS −1 N Ψ 0 ⊗S −1 N Ψ 0 0 R ∞ X h=0 A θ h0 M I N ⊗γ 0 S −1 N Ψ 0 0 W 0 ! vec I N ⊗ b Ψ 0 0,NT − Ψ 00 0 + 1 N J 0 1 WS −1 N Ψ 0 (I N ⊗γ)M 0 ∞ X h=0 hA θ h−1 ⊗S −1 N Ψ 0 0 R ! K MNp,MNp S × A 0 1 θ W 0 ⊗S −1 N Ψ 0 . . . A 0 p θ W 0 ⊗S −1 N Ψ 0 0 M 2 N 2 p(p−1)×M 2 N 2 vec I N ⊗ b Ψ 0,NT − Ψ 0 0 + I MN ⊗S −1 N Ψ 0 vec I N ⊗ b Φ 1,NT − Φ 0 1 . . . 
I MN ⊗S −1 N Ψ 0 vec I N ⊗ b Φ p,NT − Φ 0 p 0 M 2 N 2 p(p−1)×1 + W⊗S −1 N Ψ 0 vec I N ⊗ b Ψ 1,NT − Ψ 0 1 . . . W⊗S −1 N Ψ 0 vec I N ⊗ b Ψ p,NT − Ψ 0 p 0 M 2 N 2 p(p−1)×1 + 1 N p X j=1 J 0 WS −1 N Ψ 0 ⊗S −1 N Ψ 0 0 R ∞ X h=0 A θ h0 R 0 ! vec I N ⊗ b Φ 0 j,NT − Φ 00 j + 1 N p X j=1 J 0 WS −1 N Ψ 0 ⊗S −1 N Ψ 0 0 R ∞ X h=0 A θ h0 R 0 W 0 ! vec I N ⊗ b Ψ 0 j,NT − Ψ 00 j by (A.75), b θ NT −θ 0 =Op max 1 √ NT , 1 T , and the remaining terms inB NT b θ NT −B NT θ 0 are bounded, thereforeB NT b θ NT −B NT θ 0 → p 0 as (N,T )→∞, N T →κ, and N T 3 → 0. 214 Appendix B Appendix to Chapter 3 B.1 DerivationofGeneralizedImpulseResponse Functions From (2.8), Y t+h = A h+1 Y t−1 + h X l=0 A h−l E t+l = A h+1 Y t−1 +E t+h + AE t+h−1 +··· + A h−1 E t+1 + A h Et = A h+1 Y t−1 + h X l=0 A l S −1 N Ψ 0 0 α 0 0 MN(p−1)×1 ! + h X l=0 A h−l S −1 N Ψ 0 0 (I N ⊗β 0 ) 0 X t+l 0 MN(p−1)×1 ! + h X l=0 A h−l S −1 N Ψ 0 0 S −1 N Λ 0 ε t+l 0 MN(p−1)×1 ! where Yt = (y 0 t , y 0 t−1 ,··· , y 0 t−p+1 ) 0 , and then implying I h,δs =E(y t+h |δs, Ω t−1 )−E(y t+h |Ω t−1 ) =E(RY t+h |δs, Ω t−1 )−E(RY t+h |Ω t−1 ) = R A h S −1 N Ψ 0 0 S −1 N Λ 0 E(εt|εt,s =δs, Ω t−1 ) 0 MN(p−1)×1 ! = R A h R 0 S −1 N Ψ 0 0 S −1 N Λ 0 e δ where define Γ h = R (A) h R 0 which satisfies Γ h = P p j=1 A j Γ h−j , Γ 0 =I MN , and Γ h = 0 if h< 0. When the shock is a scaler of magnitude δ m i on y m it at time t, assuming ε it ∼ iidN 0, Ω 0 , E(ε it |ε m it = δ m i , Ω t−1 ) = ( ω 0,1m ω 0,mm , ω 0,2m ω 0,mm ,···, ω 0,Mm ω 0,mm ) 0 δ m i = Ω 0 e M,m δ m i ω 0,mm . Then let e e δ i denotes a vector of length MN with 0 everywhere else except that the (Mi−M + 1)-th to (Mi)-th positions being equal to Ω 0 e M,m δ m i /ω 0,mm with e M,m denotes a vector of length M with 0 everywhere except for the m-th element being 1. The above verifies the expressions of of (3.8) to (3.10). 215 B.2 Bootstrap Confidence Bands for GIRF The estimated GIRF based on b θ is b I (h,δ i ) = b Γ h S −1 N b Ψ 0 S −1 N b Λ e δ i . 
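The matrices $\Gamma_h = R A^h R'$ entering the estimated GIRF can be computed either from powers of the companion matrix $A$ or through the recursion $\Gamma_h = \sum_{j=1}^{p} A_j \Gamma_{h-j}$, with $\Gamma_0 = I_{MN}$ and $\Gamma_h = 0$ for $h < 0$, stated in Appendix B.1. A minimal numerical sketch checking that the two routes agree (the dimension `k`, playing the role of $MN$, the lag order, and the coefficient values are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
k, p, H = 3, 2, 6  # toy dimensions: k = MN, p lags, horizons 0..H

# Random lag coefficient matrices A_1, ..., A_p (values purely illustrative).
A_lags = [0.3 * rng.standard_normal((k, k)) / np.sqrt(k) for _ in range(p)]

# Companion matrix A: the top block row holds A_1 ... A_p,
# the identity blocks below shift the lags down.
A = np.zeros((k * p, k * p))
A[:k, :] = np.hstack(A_lags)
A[k:, :-k] = np.eye(k * (p - 1))

# Selection matrix R picks the first k rows of the stacked state Y_t.
R = np.zeros((k, k * p))
R[:, :k] = np.eye(k)

# Gamma_h = R A^h R'
Gammas = [R @ np.linalg.matrix_power(A, h) @ R.T for h in range(H + 1)]

# Check the recursion Gamma_h = sum_j A_j Gamma_{h-j},
# with Gamma_0 = I and Gamma_h = 0 for h < 0.
assert np.allclose(Gammas[0], np.eye(k))
for h in range(1, H + 1):
    rec = sum(A_lags[j] @ Gammas[h - 1 - j]
              for j in range(p) if h - 1 - j >= 0)
    assert np.allclose(Gammas[h], rec)
print("Gamma_h recursion verified up to horizon", H)
```

The identity holds algebraically for any coefficient values, which is why the check does not depend on stability of the toy system; stability only matters for the decay of $\Gamma_h$ as $h$ grows.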
Cholesky decompose $\Omega^0 = LL'$, so that $v_t = L^{-1}\varepsilon_t$ is independently distributed by assumption. The steps to obtain the bootstrap confidence interval for the GIRF are as follows:

1. Cholesky decompose the estimated $\widehat{\Omega} = \widehat{L}\widehat{L}'$, and compute $\widehat{v}_t = \widehat{L}^{-1}\widehat{\varepsilon}_t$ for $t = 1, \dots, T$.

2. Make $T$ independent random draws from $\left\{\widehat{v}_t\right\}_{t=1}^{T}$ to form $\left\{\widehat{v}_t^{(r)}\right\}_{t=1}^{T}$.

3. Recover $\left\{\widehat{\varepsilon}_t^{(r)}\right\}_{t=1}^{T}$ by computing $\widehat{\varepsilon}_t^{(r)} = \widehat{L}\widehat{v}_t^{(r)}$ for $t = 1, \dots, T$.

4. Simulate the $r$-th sample for $t = 1, \dots, T$,
$$
\widehat{y}_t^{(r)} = S_N^{-1}(\widehat{\Psi}_0)\widehat{\alpha} + \sum_{j=1}^{p} S_N^{-1}(\widehat{\Psi}_0)\left(I_N \otimes \widehat{\Phi}_j\right) y_{t-j}^{(r)} + \sum_{j=1}^{p} S_N^{-1}(\widehat{\Psi}_0)\left(I_N \otimes \widehat{\Psi}_j\right) W y_{t-j}^{(r)} + S_N^{-1}(\widehat{\Psi}_0)\left(I_N \otimes \widehat{\beta}\right)' x_t + S_N^{-1}(\widehat{\Psi}_0) S_N^{-1}(\widehat{\Lambda})\widehat{\varepsilon}_t^{(r)},
$$
where $y_0, \dots, y_{-(p-1)}$ are true data.

5. Obtain the estimated GIRF $\widehat{I}^{(r)}(h, \delta_i) = \widehat{\Gamma}_h^{(r)} S_N^{-1}(\widehat{\Psi}_0^{(r)}) S_N^{-1}(\widehat{\Lambda}^{(r)}) e_{\delta_i}$ based on the QML estimate $\widehat{\theta}^{(r)}$.

6. Repeat steps 1 to 5 for $r = 1, \dots, R = 500$ times, and then compute the $100(1-\alpha)\%$ confidence interval as the $\alpha/2$ and $1-\alpha/2$ quantiles of $\widehat{I}^{(r)}$.

Appendix C

Tables and Figures

Table C.1: Estimation of Simulated Samples

Table C.1.A.
Estimation of Simulated Samples from DGP Design 1 p=1 p=2 p=3 p=4 p=5 iid SAR iid SAR iid SAR iid SAR iid SAR ω 11 bias 0.0235 0.0277 0.023 0.0253 0.0231 0.0263 0.022 0.0259 0.022 0.0252 RMSE 0.0257 0.0303 0.0254 0.0277 0.0255 0.0288 0.0244 0.0286 0.0244 0.0277 ω 12 bias -0.0201 -0.0162 -0.0215 -0.0214 -0.021 -0.0197 -0.0218 -0.0195 -0.0219 -0.0195 RMSE 0.0255 0.0224 0.0266 0.0261 0.0262 0.0254 0.0264 0.0254 0.0264 0.0253 ω 22 bias 0.0112 0.0153 0.0115 0.0135 0.0116 0.0142 0.0108 0.0141 0.0109 0.0135 RMSE 0.0147 0.019 0.0154 0.0169 0.0154 0.0179 0.0144 0.0177 0.0145 0.0173 λ 11 bias - -0.3834 - -0.398 - -0.3889 - -0.389 - -0.3958 RMSE - 0.3838 - 0.3984 - 0.3901 - 0.3901 - 0.3964 λ 21 bias - -0.104 - -0.1062 - -0.107 - -0.1095 - -0.1082 RMSE - 0.1049 - 0.1083 - 0.1089 - 0.1112 - 0.1097 λ 12 bias - -0.1022 - -0.1043 - -0.1056 - -0.1091 - -0.1069 RMSE - 0.1029 - 0.106 - 0.1073 - 0.1104 - 0.1082 λ 22 bias - -0.3842 - -0.3969 - -0.3885 - -0.3884 - -0.3955 RMSE - 0.3847 - 0.3973 - 0.3897 - 0.3894 - 0.3961 β 1 bias -0.0662 -0.1952 -0.0684 -0.1548 -0.0695 -0.1421 -0.0652 -0.1384 -0.064 -0.1413 RMSE 0.1047 0.2808 0.1043 0.2232 0.107 0.2058 0.1043 0.1916 0.1024 0.1827 β 2 bias -0.0803 -0.2023 -0.0779 -0.1709 -0.0812 -0.1575 -0.0814 -0.1562 -0.0833 -0.1588 RMSE 0.1124 0.2861 0.1109 0.2296 0.1135 0.2075 0.114 0.1975 0.1167 0.1979 φ 11 1 bias -0.0179 -0.0167 -0.0169 -0.0172 -0.0167 -0.0166 -0.0168 -0.0151 -0.016 -0.0156 RMSE 0.0201 0.0189 0.0196 0.0198 0.0194 0.02 0.0195 0.0183 0.0186 0.0184 φ 21 1 bias -0.0014 -0.0072 -0.002 -0.0043 -0.0014 -0.0044 -0.0012 -0.0037 -0.0015 -0.004 RMSE 0.0094 0.0123 0.01 0.0111 0.0094 0.0112 0.01 0.0106 0.0098 0.011 φ 12 1 bias -0.0011 -0.0081 -0.0017 -0.0039 -0.0014 -0.0039 -0.0015 -0.0037 -0.0011 -0.0035 RMSE 0.0099 0.013 0.0101 0.0111 0.0102 0.0107 0.0101 0.0106 0.0099 0.0106 φ 22 1 bias -0.018 -0.0166 -0.0174 -0.0185 -0.0171 -0.0173 -0.0171 -0.0167 -0.0164 -0.0168 RMSE 0.0205 0.0195 0.02 0.0211 0.0196 0.0204 0.02 0.0198 0.0192 0.0198 φ 11 2 
bias - - -0.0017 -0.0027 -0.002 -0.0025 -0.0019 -0.0028 -0.0019 -0.0028 RMSE - - 0.0102 0.011 0.0105 0.011 0.0104 0.011 0.0106 0.011 φ 21 2 bias - - 5e-04 5e-04 5e-04 2e-04 3e-04 0.0000 4e-04 3e-04 RMSE - - 0.0103 0.0104 0.0104 0.0103 0.0105 0.0103 0.0102 0.0103 φ 12 2 bias - - 0.001 4e-04 0.0011 8e-04 0.0011 5e-04 6e-04 7e-04 RMSE - - 0.0097 0.0095 0.0098 0.0093 0.0093 0.0096 0.0096 0.0093 φ 22 2 bias - - -0.0012 -0.0021 -0.0018 -0.0016 -0.0016 -0.002 -0.0015 -0.0016 RMSE - - 0.0096 0.0109 0.01 0.0103 0.0101 0.0104 0.0102 0.0103 φ 11 3 bias - - - - -0.0022 -0.0032 -0.0015 -0.0024 -0.0016 -0.0022 RMSE - - - - 0.0102 0.0108 0.0102 0.0105 0.0104 0.0103 φ 21 3 bias - - - - 0.0014 2e-04 0.0024 0.001 0.0016 0.001 RMSE - - - - 0.0099 0.0096 0.0104 0.0103 0.0102 0.0099 φ 12 3 bias - - - - 0.0011 0.0000 6e-04 1e-04 7e-04 0.0000 RMSE - - - - 0.0093 0.0092 0.009 0.0093 0.0092 0.0092 φ 22 3 bias - - - - -0.0021 -0.0035 -0.0015 -0.0025 -0.002 -0.0027 RMSE - - - - 0.0097 0.0099 0.0095 0.0096 0.0094 0.0094 φ 11 4 bias - - - - - - -0.0038 -0.0042 -0.0035 -0.0033 RMSE - - - - - - 0.0098 0.0103 0.0101 0.0101 φ 21 4 bias - - - - - - -8e-04 -9e-04 -3e-04 -2e-04 RMSE - - - - - - 0.0097 0.0095 0.0096 0.0097 φ 12 4 bias - - - - - - 8e-04 5e-04 0.0011 0.001 RMSE - - - - - - 0.0093 0.0095 0.0099 0.0098 φ 22 4 bias - - - - - - -0.0044 -0.0045 -0.0031 -0.0038 RMSE - - - - - - 0.0098 0.0099 0.0091 0.0092 φ 11 5 bias - - - - - - - - -0.003 -0.0033 RMSE - - - - - - - - 0.0096 0.0096 φ 21 5 bias - - - - - - - - 3e-04 4e-04 RMSE - - - - - - - - 0.0087 0.0091 φ 12 5 bias - - - - - - - - -5e-04 -3e-04 RMSE - - - - - - - - 0.009 0.0089 φ 22 5 bias - - - - - - - - -0.0049 -0.005 RMSE - - - - - - - - 0.0103 0.0102 ψ 11 0 bias 0.3958 0.36 0.3952 0.3705 0.3952 0.3646 0.3951 0.3628 0.3954 0.3677 RMSE 0.396 0.3604 0.3954 0.371 0.3954 0.3654 0.3953 0.3635 0.3956 0.3682 ψ 21 0 bias 0.0683 0.079 0.0688 0.0765 0.0692 0.0799 0.0693 0.0789 0.0702 0.0786 RMSE 0.079 0.0853 0.0753 0.0976 0.0749 0.0953 0.0755 
0.0927 0.0756 0.0912 ψ 12 0 bias 0.0698 0.0754 0.0698 0.0751 0.0699 0.0767 0.0695 0.0808 0.0685 0.0768 RMSE 0.0801 0.0818 0.0763 0.0959 0.0751 0.0928 0.0755 0.0937 0.0736 0.0892 ψ 22 0 bias 0.3968 0.3598 0.3958 0.3699 0.3955 0.3644 0.3958 0.3641 0.3952 0.368 RMSE 0.397 0.3601 0.396 0.3704 0.3957 0.3651 0.3959 0.3648 0.3953 0.3685 ψ 11 1 bias -0.131 -0.1216 -0.1307 -0.1203 -0.1313 -0.1197 -0.131 -0.1227 -0.1321 -0.1229 RMSE 0.1318 0.1224 0.1317 0.1216 0.1321 0.121 0.1319 0.1237 0.1329 0.1239 ψ 21 1 bias -0.0889 -0.0769 -0.0872 -0.0822 -0.0872 -0.082 -0.0873 -0.0821 -0.0873 -0.0825 RMSE 0.0903 0.079 0.0888 0.0847 0.0886 0.0843 0.0887 0.084 0.0886 0.0845 ψ 12 1 bias -0.089 -0.0754 -0.0867 -0.0832 -0.0872 -0.082 -0.0872 -0.0824 -0.0869 -0.0826 RMSE 0.0903 0.0769 0.088 0.0855 0.0885 0.0842 0.0885 0.0844 0.0881 0.0845 ψ 22 1 bias -0.1316 -0.1228 -0.1309 -0.1207 -0.1319 -0.1213 -0.1322 -0.1226 -0.133 -0.1234 RMSE 0.1325 0.1237 0.1318 0.1221 0.1328 0.1227 0.1332 0.1238 0.1339 0.1246 ψ 11 2 bias - - -0.0019 -0.0019 -4e-04 -0.001 -0.0011 -2e-04 -2e-04 -1e-04 RMSE - - 0.0148 0.0162 0.015 0.0167 0.0152 0.0159 0.0154 0.016 ψ 21 2 bias - - -0.0014 -0.0013 -0.0012 -1e-04 -4e-04 -3e-04 -6e-04 -7e-04 RMSE - - 0.0148 0.0154 0.0154 0.0153 0.0148 0.0147 0.0152 0.015 ψ 12 2 bias - - -0.0029 -0.0013 -0.0023 -0.0013 -0.0019 -0.0011 -0.0016 -0.0013 RMSE - - 0.0156 0.0166 0.0159 0.016 0.0151 0.0159 0.0157 0.0156 ψ 22 2 bias - - -0.003 -0.0027 -0.0011 -0.0021 -0.0011 -0.0011 -0.001 -0.0014 RMSE - - 0.0143 0.0163 0.0147 0.0159 0.0144 0.0157 0.015 0.0153 ψ 11 3 bias - - - - -0.0011 0.0000 -4e-04 8e-04 -3e-04 7e-04 RMSE - - - - 0.0148 0.0151 0.0147 0.0158 0.0153 0.0157 ψ 21 3 bias - - - - -0.0024 -7e-04 -0.0027 -7e-04 -0.0026 -9e-04 RMSE - - - - 0.0143 0.0144 0.0151 0.0148 0.015 0.0143 ψ 12 3 bias - - - - -0.0014 -2e-04 -8e-04 0.0000 -0.001 -2e-04 RMSE - - - - 0.0146 0.0146 0.0142 0.0153 0.0148 0.0146 ψ 22 3 bias - - - - 1e-04 8e-04 -3e-04 0.001 2e-04 0.001 RMSE - - - - 0.0139 0.0142 0.0141 
0.0145 0.0141 0.0143 ψ 11 4 bias - - - - - - 9e-04 0.0013 0.001 5e-04 RMSE - - - - - - 0.0148 0.0158 0.016 0.016 ψ 21 4 bias - - - - - - 6e-04 5e-04 5e-04 2e-04 RMSE - - - - - - 0.0145 0.0148 0.0147 0.0148 ψ 12 4 bias - - - - - - 0.0000 0 -5e-04 -5e-04 RMSE - - - - - - 0.0141 0.0148 0.0143 0.0148 ψ 22 4 bias - - - - - - 0.002 0.0024 0.0016 0.0022 RMSE - - - - - - 0.0138 0.0149 0.0143 0.0148 ψ 11 5 bias - - - - - - - - 0.0026 0.0026 RMSE - - - - - - - - 0.0143 0.0154 ψ 21 5 bias - - - - - - - - 6e-04 0.0000 RMSE - - - - - - - - 0.014 0.0149 ψ 12 5 bias - - - - - - - - 6e-04 0.0011 RMSE - - - - - - - - 0.0141 0.0158 ψ 22 5 bias - - - - - - - - 0.0023 0.0028 RMSE - - - - - - - - 0.0144 0.015 218 Table C.1.B. Estimation of Simulated Samples from DGP Design 2 p=1 p=2 p=3 p=4 p=5 iid SAR iid SAR iid SAR iid SAR iid SAR ω 11 bias 0.0393 0.0388 0.0388 0.0384 0.0386 0.0382 0.0384 0.038 0.0385 0.0381 RMSE 0.041 0.0404 0.0405 0.0401 0.0402 0.0398 0.0402 0.0397 0.0403 0.0398 ω 12 bias 0.0056 0.0037 0.0051 0.0034 0.0044 0.0029 0.0047 0.0035 0.0055 0.0036 RMSE 0.0175 0.0166 0.0175 0.0164 0.0164 0.0162 0.0172 0.0165 0.0175 0.0164 ω 22 bias 0.0236 0.0237 0.0231 0.0233 0.0227 0.023 0.0226 0.0231 0.0229 0.0227 RMSE 0.0259 0.026 0.0255 0.0253 0.0249 0.0252 0.025 0.0252 0.0253 0.0248 λ 11 bias - -4e-04 - 5e-04 - 3e-04 - 0.001 - 0.0011 RMSE - 0.0109 - 0.0101 - 0.0097 - 0.0097 - 0.0094 λ 21 bias - -0.0191 - -0.019 - -0.0189 - -0.0187 - -0.0193 RMSE - 0.0257 - 0.0245 - 0.0242 - 0.0242 - 0.0242 λ 12 bias - -0.0209 - -0.0217 - -0.0215 - -0.0216 - -0.0216 RMSE - 0.0275 - 0.0269 - 0.0263 - 0.0262 - 0.0259 λ 22 bias - 9e-04 - 0.0023 - 0.0017 - 0.0021 - 0.0023 RMSE - 0.0118 - 0.0119 - 0.0111 - 0.0115 - 0.0115 β 1 bias -0.0146 -0.0889 -0.0183 -0.0845 -0.0179 -0.0772 -0.0181 -0.0732 -0.0186 -0.0691 RMSE 0.0848 0.1353 0.0861 0.1246 0.0864 0.1168 0.0866 0.1133 0.0859 0.1093 β 2 bias -0.0298 -0.1064 -0.0356 -0.0998 -0.0351 -0.0923 -0.0336 -0.0895 -0.0339 -0.0845 RMSE 0.0891 0.1457 0.0918 0.1337 
0.0911 0.1273 0.0905 0.123 0.0914 0.1188 φ 11 1 bias -0.0045 -0.0049 -0.0037 -0.004 -0.0038 -0.004 -0.0036 -0.004 -0.0036 -0.0041 RMSE 0.0098 0.01 0.0098 0.0099 0.0098 0.01 0.0097 0.01 0.0097 0.01 φ 21 1 bias -0.0012 -0.0014 -9e-04 -8e-04 -6e-04 -9e-04 -8e-04 -8e-04 -4e-04 -0.001 RMSE 0.0094 0.0098 0.0095 0.0098 0.0095 0.01 0.0096 0.0098 0.0095 0.0098 φ 12 1 bias -0.001 -0.002 -7e-04 -0.0015 -6e-04 -0.0013 -6e-04 -0.0013 -4e-04 -0.0014 RMSE 0.0094 0.0098 0.0095 0.0098 0.0094 0.0097 0.0095 0.0096 0.0095 0.0098 φ 22 1 bias -0.0057 -0.006 -0.005 -0.0052 -0.005 -0.0052 -0.0051 -0.0052 -0.0049 -0.0052 RMSE 0.0108 0.011 0.0107 0.0106 0.0106 0.0107 0.0106 0.0106 0.0105 0.0107 φ 11 2 bias - - -0.0042 -0.0046 -0.0034 -0.0037 -0.0035 -0.0038 -0.0033 -0.0039 RMSE - - 0.0107 0.0109 0.0106 0.0108 0.0107 0.0109 0.0107 0.0108 φ 21 2 bias - - -8e-04 -0.001 -2e-04 -4e-04 -2e-04 -5e-04 0.0000 -4e-04 RMSE - - 0.0096 0.0096 0.0098 0.0098 0.0098 0.0098 0.0098 0.0097 φ 12 2 bias - - -7e-04 -8e-04 0.0000 -5e-04 -1e-04 -4e-04 0.0000 -3e-04 RMSE - - 0.0088 0.0091 0.0089 0.009 0.009 0.0091 0.0089 0.0091 φ 22 2 bias - - -0.0034 -0.0035 -0.0026 -0.0027 -0.0025 -0.0029 -0.0025 -0.0027 RMSE - - 0.0101 0.0104 0.0099 0.0102 0.01 0.0102 0.0099 0.0102 φ 11 3 bias - - - - -0.0035 -0.0037 -0.0026 -0.0028 -0.0029 -0.0027 RMSE - - - - 0.0102 0.0104 0.01 0.0101 0.01 0.01 φ 21 3 bias - - - - -2e-04 -5e-04 5e-04 5e-04 4e-04 4e-04 RMSE - - - - 0.0092 0.0092 0.0095 0.0096 0.0096 0.0095 φ 12 3 bias - - - - -6e-04 -6e-04 -1e-04 -1e-04 -3e-04 -2e-04 RMSE - - - - 0.0086 0.0086 0.0088 0.0087 0.0088 0.0088 φ 22 3 bias - - - - -0.0041 -0.0043 -0.003 -0.0033 -0.0033 -0.0034 RMSE - - - - 0.0099 0.0101 0.0095 0.0097 0.0097 0.0095 φ 11 4 bias - - - - - - -0.0043 -0.0043 -0.0038 -0.0037 RMSE - - - - - - 0.0099 0.0098 0.0098 0.0099 φ 21 4 bias - - - - - - -0.001 -0.0011 -5e-04 -7e-04 RMSE - - - - - - 0.0092 0.0092 0.0092 0.0094 φ 12 4 bias - - - - - - 1e-04 0.0000 5e-04 3e-04 RMSE - - - - - - 0.0091 0.009 0.0093 0.0094 
φ 22 4 bias - - - - - - -0.0049 -0.0047 -0.0039 -0.004 RMSE - - - - - - 0.0096 0.0094 0.009 0.009 φ 11 5 bias - - - - - - - - -0.0031 -0.0031 RMSE - - - - - - - - 0.0092 0.0092 φ 21 5 bias - - - - - - - - 6e-04 5e-04 RMSE - - - - - - - - 0.0085 0.0086 φ 12 5 bias - - - - - - - - -2e-04 -1e-04 RMSE - - - - - - - - 0.0084 0.0084 φ 22 5 bias - - - - - - - - -0.005 -0.0051 RMSE - - - - - - - - 0.0101 0.0101 ψ 11 0 bias -0.0035 -0.0012 -0.0032 -0.0019 -0.0034 -0.0019 -0.0034 -0.0023 -0.0034 -0.0023 RMSE 0.0125 0.0133 0.0124 0.0136 0.0123 0.0132 0.0121 0.0133 0.0124 0.0134 ψ 21 0 bias 0.0119 0.0144 0.0111 0.014 0.0112 0.0146 0.0115 0.0149 0.0107 0.0155 RMSE 0.036 0.0598 0.0296 0.0555 0.0309 0.0543 0.0305 0.0544 0.03 0.0522 ψ 12 0 bias 0.0123 0.0269 0.0136 0.0275 0.0136 0.0267 0.0134 0.0263 0.0135 0.0259 RMSE 0.0357 0.0638 0.03 0.0591 0.0313 0.0578 0.0311 0.057 0.0304 0.0548 ψ 22 0 bias -0.0022 9e-04 -0.0016 -1e-04 -0.0019 1e-04 -0.0019 -4e-04 -0.0017 -6e-04 RMSE 0.011 0.0124 0.0106 0.0119 0.0107 0.0117 0.0106 0.0116 0.0106 0.0116 ψ 11 1 bias 6e-04 -0.002 0.0011 -0.0014 9e-04 -9e-04 7e-04 -0.0011 9e-04 -0.001 RMSE 0.0164 0.0174 0.0163 0.0171 0.0162 0.0168 0.0161 0.0171 0.0161 0.0169 ψ 21 1 bias -0.0025 -0.0031 -0.0014 -0.0025 -0.0016 -0.0027 -0.0018 -0.003 -0.0016 -0.0032 RMSE 0.0179 0.0211 0.0172 0.0204 0.0169 0.0195 0.0169 0.0197 0.017 0.0198 ψ 12 1 bias -0.0022 -0.0054 -0.0023 -0.0052 -0.0019 -0.0053 -0.0022 -0.0048 -0.002 -0.0046 RMSE 0.0164 0.0203 0.0162 0.0196 0.0167 0.0193 0.0162 0.0193 0.0164 0.0187 ψ 22 1 bias -6e-04 -0.0018 1e-04 -9e-04 1e-04 -0.001 -1e-04 -0.001 2e-04 -8e-04 RMSE 0.0165 0.0171 0.0171 0.0181 0.0173 0.0182 0.0173 0.0182 0.0174 0.0179 ψ 11 2 bias - - 0.0011 0.0014 0.0015 0.0021 0.0018 0.0023 5e-04 0.0024 RMSE - - 0.0164 0.0165 0.0173 0.0175 0.0171 0.0174 0.0171 0.0174 ψ 21 2 bias - - 2e-04 5e-04 5e-04 9e-04 7e-04 0.0013 -4e-04 0.0013 RMSE - - 0.0163 0.0162 0.0165 0.0166 0.0164 0.0165 0.0166 0.0168 ψ 12 2 bias - - -7e-04 -1e-04 -1e-04 2e-04 0.0000 
5e-04 -0.001 4e-04 RMSE - - 0.0167 0.017 0.0173 0.0174 0.0172 0.0176 0.0174 0.0171 ψ 22 2 bias - - -5e-04 -1e-04 2e-04 7e-04 3e-04 0.001 -9e-04 0.001 RMSE - - 0.0166 0.0171 0.0169 0.0174 0.0169 0.0173 0.0169 0.0171 ψ 11 3 bias - - - - -7e-04 -3e-04 0.0000 3e-04 5e-04 5e-04 RMSE - - - - 0.0167 0.0168 0.0172 0.0175 0.0172 0.0171 ψ 21 3 bias - - - - -4e-04 1e-04 -1e-04 -1e-04 1e-04 2e-04 RMSE - - - - 0.0152 0.015 0.0159 0.0157 0.0158 0.0158 ψ 12 3 bias - - - - -5e-04 -3e-04 -2e-04 -2e-04 -3e-04 0.0000 RMSE - - - - 0.0161 0.0162 0.0168 0.0167 0.0169 0.0167 ψ 22 3 bias - - - - 3e-04 4e-04 0.0011 0.0012 0.0013 0.001 RMSE - - - - 0.0157 0.0157 0.016 0.0157 0.0158 0.0159 ψ 11 4 bias - - - - - - 0.0000 1e-04 -2e-04 -4e-04 RMSE - - - - - - 0.018 0.018 0.0184 0.0185 ψ 21 4 bias - - - - - - 2e-04 3e-04 2e-04 2e-04 RMSE - - - - - - 0.0166 0.0167 0.0165 0.0167 ψ 12 4 bias - - - - - - 5e-04 4e-04 4e-04 0.0000 RMSE - - - - - - 0.0161 0.0162 0.0167 0.0167 ψ 22 4 bias - - - - - - 6e-04 7e-04 0.0016 0.0014 RMSE - - - - - - 0.0159 0.0162 0.0167 0.0167 ψ 11 5 bias - - - - - - - - 0.0022 0.0022 RMSE - - - - - - - - 0.0169 0.0169 ψ 21 5 bias - - - - - - - - 0.0000 0.0000 RMSE - - - - - - - - 0.0161 0.0163 ψ 12 5 bias - - - - - - - - 0.0011 0.0013 RMSE - - - - - - - - 0.0173 0.0172 ψ 22 5 bias - - - - - - - - 8e-04 4e-04 RMSE - - - - - - - - 0.0165 0.0167 219 Table C.1.C. 
Estimation of Simulated Samples from DGP Design 3
Columns: p=1 iid, p=1 SAR, p=2 iid, p=2 SAR, p=3 iid, p=3 SAR, p=4 iid, p=4 SAR, p=5 iid, p=5 SAR ("-" = not applicable). Notation: φ11_2 renders the original φ^11 with lag subscript 2; likewise ψ.
ω11    bias 0.0241 0.0273 0.0228 0.0258 0.0231 0.026 0.0221 0.026 0.022 0.0257 | RMSE 0.0262 0.0296 0.025 0.0282 0.0254 0.0286 0.0246 0.0286 0.0243 0.0281
ω12    bias -0.0191 -0.0167 -0.0223 -0.0202 -0.0217 -0.0196 -0.0215 -0.0196 -0.0217 -0.0195 | RMSE 0.0248 0.0228 0.0266 0.0255 0.0264 0.0249 0.0266 0.0253 0.0268 0.0254
ω22    bias 0.0113 0.0159 0.0115 0.0134 0.0116 0.0141 0.011 0.0143 0.011 0.0134 | RMSE 0.0149 0.0192 0.015 0.017 0.0152 0.0176 0.0151 0.018 0.0147 0.0172
λ11    bias - -0.3821 - -0.3956 - -0.3904 - -0.3894 - -0.395 | RMSE - 0.3826 - 0.3962 - 0.3915 - 0.3904 - 0.3957
λ21    bias - -0.1038 - -0.1058 - -0.1068 - -0.1085 - -0.1085 | RMSE - 0.1048 - 0.1078 - 0.1086 - 0.1102 - 0.11
λ12    bias - -0.1013 - -0.1049 - -0.1062 - -0.1076 - -0.1073 | RMSE - 0.1022 - 0.1065 - 0.1075 - 0.1091 - 0.1086
λ22    bias - -0.3831 - -0.3947 - -0.3901 - -0.3889 - -0.3945 | RMSE - 0.3836 - 0.3953 - 0.3912 - 0.39 - 0.3952
β1     bias -0.069 -0.2057 -0.0668 -0.1603 -0.0677 -0.1468 -0.0672 -0.1424 -0.0645 -0.1364 | RMSE 0.1067 0.2912 0.1043 0.2296 0.1036 0.2103 0.1053 0.1887 0.1041 0.186
β2     bias -0.0826 -0.2169 -0.0774 -0.1743 -0.0777 -0.1593 -0.0792 -0.159 -0.0805 -0.1493 | RMSE 0.114 0.3018 0.1111 0.2404 0.1122 0.2132 0.1136 0.2037 0.1142 0.193
φ11_1  bias -0.0154 -0.0136 -0.0164 -0.0165 -0.0162 -0.0162 -0.0159 -0.0153 -0.0159 -0.015 | RMSE 0.0181 0.0164 0.0191 0.019 0.019 0.0192 0.0189 0.0185 0.0186 0.0182
φ21_1  bias -3e-04 -0.006 -0.0012 -0.0038 -0.0013 -0.004 -7e-04 -0.004 -0.001 -0.0035 | RMSE 0.0098 0.0117 0.0102 0.011 0.0104 0.011 0.0101 0.0106 0.0099 0.0105
φ12_1  bias 4e-04 -0.0065 -0.0012 -0.0036 -9e-04 -0.0034 -0.001 -0.0035 -0.0011 -0.0032 | RMSE 0.0101 0.0124 0.0102 0.0109 0.0102 0.0105 0.0099 0.0106 0.0099 0.0105
φ22_1  bias -0.0153 -0.0144 -0.0171 -0.0175 -0.0168 -0.0171 -0.0162 -0.0164 -0.0168 -0.0162 | RMSE 0.0183 0.0177 0.0199 0.0201 0.0197 0.0199 0.0192 0.0196 0.0194 0.0192
φ11_2  bias - - -0.0013 -0.002 -0.0032 -0.0037 -0.0031 -0.0038 -0.0034 -0.0037 | RMSE - - 0.0102 0.011 0.0106 0.0111 0.0109 0.0113 0.011 0.011
φ21_2  bias - - 0.002 9e-04 6e-04 2e-04 4e-04 -1e-04 2e-04 1e-04 | RMSE - - 0.0105 0.0101 0.0103 0.0104 0.0103 0.0105 0.0104 0.0105
φ12_2  bias - - 0.0018 0.0012 6e-04 8e-04 0.0011 2e-04 0.0013 5e-04 | RMSE - - 0.0098 0.0098 0.0098 0.0093 0.0098 0.0096 0.0098 0.0096
φ22_2  bias - - -8e-04 -0.0014 -0.0029 -0.0027 -0.0024 -0.0028 -0.0022 -0.0027 | RMSE - - 0.0103 0.0103 0.0105 0.0106 0.0102 0.0108 0.0104 0.0106
φ11_3  bias - - - - -0.0026 -0.004 -0.0024 -0.0029 -0.0021 -0.0029 | RMSE - - - - 0.0102 0.011 0.0101 0.0104 0.0105 0.0105
φ21_3  bias - - - - 0.0017 4e-04 0.0018 0.001 0.0023 9e-04 | RMSE - - - - 0.0099 0.0097 0.0104 0.0101 0.0105 0.0099
φ12_3  bias - - - - 7e-04 -2e-04 0.0011 3e-04 9e-04 1e-04 | RMSE - - - - 0.0088 0.009 0.0091 0.0091 0.0087 0.0091
φ22_3  bias - - - - -0.0031 -0.0046 -0.0024 -0.0032 -0.0027 -0.0034 | RMSE - - - - 0.0101 0.0104 0.0102 0.0098 0.0095 0.0096
φ11_4  bias - - - - - - -0.003 -0.0037 -0.0025 -0.003 | RMSE - - - - - - 0.0097 0.0098 0.0096 0.0099
φ21_4  bias - - - - - - 3e-04 -4e-04 1e-04 -2e-04 | RMSE - - - - - - 0.0095 0.0097 0.0097 0.0099
φ12_4  bias - - - - - - 0.0014 9e-04 0.0012 0.0011 | RMSE - - - - - - 0.0096 0.0096 0.0098 0.0097
φ22_4  bias - - - - - - -0.0033 -0.0042 -0.0026 -0.0035 | RMSE - - - - - - 0.0093 0.0096 0.009 0.0093
φ11_5  bias - - - - - - - - -0.0023 -0.003 | RMSE - - - - - - - - 0.0093 0.0096
φ21_5  bias - - - - - - - - 0.001 5e-04 | RMSE - - - - - - - - 0.0092 0.0089
φ12_5  bias - - - - - - - - 0.0000 -3e-04 | RMSE - - - - - - - - 0.0088 0.0091
φ22_5  bias - - - - - - - - -0.0046 -0.0047 | RMSE - - - - - - - - 0.0102 0.0105
ψ11_0  bias 0.3966 0.3592 0.3953 0.3691 0.3951 0.3657 0.3953 0.3633 0.3949 0.3668 | RMSE 0.3969 0.3596 0.3954 0.3697 0.3953 0.3665 0.3955 0.364 0.3951 0.3673
ψ21_0  bias 0.069 0.0781 0.0678 0.0777 0.0689 0.0785 0.069 0.079 0.0688 0.0791 | RMSE 0.0817 0.0844 0.0729 0.0947 0.0739 0.0924 0.0747 0.0923 0.0744 0.0905
ψ12_0  bias 0.0686 0.0767 0.0703 0.0747 0.0704 0.0763 0.0702 0.0796 0.0703 0.0772 | RMSE 0.0815 0.0828 0.0753 0.092 0.0753 0.0903 0.0758 0.0923 0.0756 0.0889
ψ22_0  bias 0.3969 0.3599 0.3958 0.3681 0.3955 0.3657 0.3957 0.3641 0.3957 0.3668 | RMSE 0.3972 0.3602 0.396 0.3686 0.3957 0.3664 0.3958 0.3648 0.3958 0.3672
ψ11_1  bias -0.128 -0.12 -0.1319 -0.1214 -0.1321 -0.121 -0.1334 -0.1225 -0.1328 -0.1237 | RMSE 0.129 0.1208 0.1328 0.1224 0.133 0.1221 0.1344 0.1236 0.1336 0.1248
ψ21_1  bias -0.0879 -0.0761 -0.0873 -0.0833 -0.0875 -0.0822 -0.0876 -0.0818 -0.0874 -0.0836 | RMSE 0.0896 0.0782 0.0887 0.0857 0.0888 0.0844 0.089 0.0838 0.0889 0.0855
ψ12_1  bias -0.0887 -0.0746 -0.0878 -0.0828 -0.0879 -0.0826 -0.0877 -0.0832 -0.0874 -0.0833 | RMSE 0.0901 0.0761 0.089 0.0851 0.0891 0.0846 0.0888 0.0851 0.0887 0.0853
ψ22_1  bias -0.1298 -0.1204 -0.1319 -0.1216 -0.1326 -0.1224 -0.1335 -0.1235 -0.133 -0.1243 | RMSE 0.1308 0.1213 0.1329 0.1229 0.1336 0.1235 0.1347 0.1247 0.1339 0.1254
ψ11_2  bias - - -0.008 -0.0063 -0.0087 -0.0075 -0.0076 -0.0074 -0.008 -0.0072 | RMSE - - 0.0168 0.0171 0.0174 0.0173 0.0171 0.0178 0.0175 0.0166
ψ21_2  bias - - -0.0048 -0.0037 -0.0035 -0.003 -0.0037 -0.0029 -0.0038 -0.003 | RMSE - - 0.0148 0.0154 0.0155 0.0155 0.0156 0.0158 0.0158 0.0155
ψ12_2  bias - - -0.0053 -0.0043 -0.0037 -0.0041 -0.0051 -0.0034 -0.0052 -0.0041 | RMSE - - 0.0162 0.017 0.0156 0.0166 0.0161 0.0159 0.0164 0.0158
ψ22_2  bias - - -0.0077 -0.0074 -0.0088 -0.009 -0.0091 -0.0086 -0.0088 -0.0083 | RMSE - - 0.0164 0.0171 0.0173 0.0181 0.0173 0.0183 0.0169 0.0179
ψ11_3  bias - - - - -0.0087 -0.0073 -0.0083 -0.0064 -0.0084 -0.0069 | RMSE - - - - 0.0166 0.0171 0.0177 0.0169 0.0175 0.0169
ψ21_3  bias - - - - -0.006 -0.0041 -0.0051 -0.0037 -0.0057 -0.0038 | RMSE - - - - 0.0151 0.0147 0.0156 0.0152 0.0159 0.0151
ψ12_3  bias - - - - -0.0049 -0.0033 -0.0048 -0.0034 -0.0043 -0.0032 | RMSE - - - - 0.0146 0.0146 0.0152 0.0155 0.0149 0.0151
ψ22_3  bias - - - - -0.0083 -0.006 -0.0085 -0.0061 -0.0077 -0.0063 | RMSE - - - - 0.0165 0.0153 0.0167 0.0157 0.0154 0.0153
ψ11_4  bias - - - - - - 2e-04 5e-04 -1e-04 1e-04 | RMSE - - - - - - 0.0156 0.0154 0.016 0.0155
ψ21_4  bias - - - - - - -8e-04 1e-04 -5e-04 2e-04 | RMSE - - - - - - 0.0142 0.0157 0.0145 0.0151
ψ12_4  bias - - - - - - -0.0014 -7e-04 -5e-04 -0.0012 | RMSE - - - - - - 0.0139 0.0145 0.0145 0.0148
ψ22_4  bias - - - - - - 6e-04 0.0012 9e-04 0.0014 | RMSE - - - - - - 0.0136 0.0144 0.0139 0.0144
ψ11_5  bias - - - - - - - - 0.0011 0.0024 | RMSE - - - - - - - - 0.0138 0.0147
ψ21_5  bias - - - - - - - - -8e-04 0.0000 | RMSE - - - - - - - - 0.014 0.0145
ψ12_5  bias - - - - - - - - 4e-04 0.0011 | RMSE - - - - - - - - 0.0143 0.015
ψ22_5  bias - - - - - - - - 0.0016 0.002 | RMSE - - - - - - - - 0.0142 0.0156
Table C.1.D. Estimation of Simulated Samples from DGP Design 4
Columns: p=1 iid, p=1 SAR, p=2 iid, p=2 SAR, p=3 iid, p=3 SAR, p=4 iid, p=4 SAR, p=5 iid, p=5 SAR ("-" = not applicable).
ω11    bias 0.0391 0.0389 0.0389 0.0385 0.0388 0.0384 0.0384 0.0384 0.0385 0.0381 | RMSE 0.0409 0.0406 0.0406 0.0402 0.0405 0.0401 0.0402 0.0401 0.0403 0.0399
ω12    bias 0.0052 0.0036 0.0048 0.0034 0.0049 0.0038 0.0048 0.0038 0.0055 0.0036 | RMSE 0.0175 0.0165 0.0174 0.0165 0.0174 0.0167 0.0173 0.0164 0.0177 0.0167
ω22    bias 0.0239 0.0237 0.0232 0.0234 0.023 0.0234 0.0228 0.023 0.0232 0.0228 | RMSE 0.0263 0.0258 0.0256 0.0254 0.0254 0.0255 0.0252 0.0252 0.0257 0.0249
λ11    bias - -9e-04 - 0.0000 - 8e-04 - 0.0014 - 0.001 | RMSE - 0.0113 - 0.0102 - 0.0098 - 0.0097 - 0.0095
λ21    bias - -0.02 - -0.019 - -0.019 - -0.0192 - -0.019 | RMSE - 0.0263 - 0.0249 - 0.0242 - 0.0244 - 0.024
λ12    bias - -0.0225 - -0.022 - -0.0218 - -0.0217 - -0.022 | RMSE - 0.0286 - 0.0272 - 0.0267 - 0.0262 - 0.0262
λ22    bias - 3e-04 - 0.0016 - 0.002 - 0.0024 - 0.0023 | RMSE - 0.0127 - 0.0121 - 0.0114 - 0.0117 - 0.011
β1     bias -0.0159 -0.0937 -0.0189 -0.0848 -0.019 -0.0783 -0.0187 -0.0722 -0.0186 -0.0685 | RMSE 0.0877 0.1465 0.0866 0.1239 0.0856 0.1172 0.0865 0.1099 0.0874 0.1074
β2     bias -0.0314 -0.1115 -0.0334 -0.0978 -0.0346 -0.0942 -0.0338 -0.0875 -0.0332 -0.0835 | RMSE 0.0894 0.1508 0.0897 0.135 0.091 0.1275 0.0912 0.1219 0.0894 0.1183
φ11_1  bias -0.0012 -0.0013 -0.0038 -0.0041 -0.0039 -0.0039 -0.0038 -0.0041 -0.0037 -0.004 | RMSE 0.009 0.0091 0.0099 0.01 0.0098 0.0098 0.0098 0.01 0.0098 0.0099
φ21_1  bias 5e-04 5e-04 -9e-04 -9e-04 -8e-04 -7e-04 -8e-04 -9e-04 -6e-04 -9e-04 | RMSE 0.0094 0.0097 0.0095 0.01 0.0095 0.0098 0.0096 0.0099 0.0097 0.0098
φ12_1  bias 6e-04 0.0000 -7e-04 -0.0015 -5e-04 -0.0011 -5e-04 -0.0014 -6e-04 -0.0014 | RMSE 0.0093 0.0098 0.0094 0.0099 0.0093 0.0097 0.0095 0.0098 0.0095 0.0098
φ22_1  bias -0.0025 -0.0027 -0.0051 -0.0051 -0.0051 -0.0053 -0.005 -0.0053 -0.0053 -0.0052 | RMSE 0.0095 0.0097 0.0106 0.0106 0.0106 0.0107 0.0106 0.0107 0.0106 0.0105
φ11_2  bias - - -0.0019 -0.0019 -0.0036 -0.0039 -0.0034 -0.0039 -0.0033 -0.0039 | RMSE - - 0.01 0.0101 0.0107 0.0108 0.0106 0.0109 0.0107 0.011
φ21_2  bias - - 3e-04 4e-04 -4e-04 -5e-04 -2e-04 -5e-04 -1e-04 -6e-04 | RMSE - - 0.0096 0.0096 0.0098 0.0098 0.0099 0.0097 0.0098 0.0098
φ12_2  bias - - 6e-04 3e-04 0.0000 -4e-04 -1e-04 -5e-04 3e-04 -4e-04 | RMSE - - 0.0089 0.009 0.0089 0.0091 0.009 0.0092 0.009 0.0091
φ22_2  bias - - -0.0011 -0.001 -0.0028 -0.0028 -0.0026 -0.0029 -0.0025 -0.0029 | RMSE - - 0.0097 0.0098 0.0101 0.0102 0.01 0.0102 0.01 0.0102
φ11_3  bias - - - - -0.0036 -0.0037 -0.0027 -0.0029 -0.0026 -0.0028 | RMSE - - - - 0.0103 0.0105 0.01 0.01 0.0099 0.01
φ21_3  bias - - - - -2e-04 -2e-04 4e-04 4e-04 6e-04 4e-04 | RMSE - - - - 0.0091 0.0091 0.0096 0.0096 0.0095 0.0093
φ12_3  bias - - - - -7e-04 -8e-04 -2e-04 -3e-04 -5e-04 -2e-04 | RMSE - - - - 0.0087 0.0085 0.0088 0.0087 0.0089 0.0087
φ22_3  bias - - - - -0.0042 -0.0044 -0.003 -0.0032 -0.0031 -0.0032 | RMSE - - - - 0.0099 0.0101 0.0096 0.0097 0.0096 0.0096
φ11_4  bias - - - - - - -0.0044 -0.0044 -0.0037 -0.0038 | RMSE - - - - - - 0.0098 0.0099 0.0099 0.0099
φ21_4  bias - - - - - - -0.001 -0.0011 -6e-04 -5e-04 | RMSE - - - - - - 0.0092 0.0092 0.0095 0.0094
φ12_4  bias - - - - - - 3e-04 1e-04 5e-04 4e-04 | RMSE - - - - - - 0.0092 0.0092 0.0096 0.0095
φ22_4  bias - - - - - - -0.0049 -0.005 -0.0037 -0.004 | RMSE - - - - - - 0.0096 0.0096 0.0089 0.0091
φ11_5  bias - - - - - - - - -0.0034 -0.0031 | RMSE - - - - - - - - 0.0094 0.0093
φ21_5  bias - - - - - - - - 5e-04 4e-04 | RMSE - - - - - - - - 0.0085 0.0085
φ12_5  bias - - - - - - - - -1e-04 -2e-04 | RMSE - - - - - - - - 0.0085 0.0085
φ22_5  bias - - - - - - - - -0.005 -0.005 | RMSE - - - - - - - - 0.01 0.0101
ψ11_0  bias -0.0029 -5e-04 -0.0031 -0.0016 -0.0032 -0.002 -0.0034 -0.0026 -0.0029 -0.0024 | RMSE 0.0127 0.0134 0.0124 0.0134 0.0123 0.0133 0.0124 0.0135 0.0125 0.0131
ψ21_0  bias 0.0119 0.0156 0.0114 0.0131 0.0118 0.015 0.0115 0.0154 0.011 0.0143 | RMSE 0.037 0.0658 0.03 0.0589 0.0306 0.0548 0.0313 0.053 0.0302 0.0511
ψ12_0  bias 0.012 0.0276 0.0133 0.0288 0.013 0.0265 0.0133 0.026 0.0131 0.0271 | RMSE 0.0368 0.0689 0.0301 0.0629 0.0305 0.0577 0.0311 0.0551 0.0306 0.0548
ψ22_0  bias -0.0018 0.0018 -0.0017 7e-04 -0.0019 -1e-04 -0.0016 -5e-04 -0.0014 -4e-04 | RMSE 0.0112 0.0121 0.0108 0.012 0.0107 0.0116 0.0105 0.0117 0.0107 0.0117
ψ11_1  bias 0.0068 0.0035 0.0011 -0.0014 8e-04 -0.0013 7e-04 -0.001 0.001 -0.0013 | RMSE 0.0182 0.0186 0.0163 0.0175 0.0164 0.0171 0.016 0.0169 0.0162 0.0168
ψ21_1  bias 0.0011 -8e-04 -0.0016 -0.0024 -0.0019 -0.0032 -0.0018 -0.003 -0.0013 -0.0029 | RMSE 0.0179 0.0225 0.017 0.0202 0.0171 0.0198 0.0172 0.0199 0.0171 0.0194
ψ12_1  bias 7e-04 -0.0026 -0.002 -0.0054 -0.0021 -0.0052 -0.0022 -0.005 -0.0019 -0.0052 | RMSE 0.0166 0.0204 0.0166 0.0198 0.0165 0.0193 0.0166 0.019 0.0166 0.0188
ψ22_1  bias 0.0054 0.0037 5e-04 -6e-04 0.0000 -0.0013 -1e-04 -0.0011 -1e-04 -7e-04 | RMSE 0.0177 0.0191 0.0171 0.0182 0.0173 0.0178 0.0173 0.0179 0.0174 0.0179
ψ11_2  bias - - 0.0046 0.0047 0.002 0.0022 0.0015 0.0026 2e-04 0.0024 | RMSE - - 0.0168 0.0176 0.0169 0.0173 0.0169 0.0175 0.017 0.0174
ψ21_2  bias - - 0.0019 0.0019 5e-04 5e-04 4e-04 0.0011 0.0000 0.001 | RMSE - - 0.0164 0.0162 0.0169 0.0165 0.0164 0.0165 0.0164 0.0167
ψ12_2  bias - - 0.0013 0.001 -1e-04 -2e-04 1e-04 2e-04 -6e-04 0.0000 | RMSE - - 0.0168 0.0172 0.0172 0.0174 0.0172 0.0175 0.0176 0.0175
ψ22_2  bias - - 0.0036 0.0034 4e-04 7e-04 5e-04 0.0011 -4e-04 0.001 | RMSE - - 0.017 0.0174 0.0166 0.0169 0.017 0.0174 0.0169 0.0172
ψ11_3  bias - - - - -6e-04 -3e-04 2e-04 3e-04 -1e-04 5e-04 | RMSE - - - - 0.0168 0.0169 0.0173 0.0172 0.0173 0.017
ψ21_3  bias - - - - -6e-04 -6e-04 -3e-04 -3e-04 -7e-04 2e-04 | RMSE - - - - 0.0152 0.0153 0.0156 0.0156 0.0158 0.0158
ψ12_3  bias - - - - -7e-04 -4e-04 -3e-04 -1e-04 -1e-04 -5e-04 | RMSE - - - - 0.0161 0.0159 0.017 0.0168 0.0167 0.0168
ψ22_3  bias - - - - 2e-04 4e-04 0.001 0.0015 0.001 0.001 | RMSE - - - - 0.0156 0.0158 0.0158 0.0159 0.0161 0.0159
ψ11_4  bias - - - - - - 2e-04 0.0000 -5e-04 -4e-04 | RMSE - - - - - - 0.0178 0.0179 0.0182 0.0185
ψ21_4  bias - - - - - - 5e-04 1e-04 1e-04 3e-04 | RMSE - - - - - - 0.0165 0.0167 0.0168 0.0169
ψ12_4  bias - - - - - - -1e-04 2e-04 2e-04 1e-04 | RMSE - - - - - - 0.0161 0.0162 0.0163 0.0166
ψ22_4  bias - - - - - - 3e-04 5e-04 0.0015 0.0014 | RMSE - - - - - - 0.0159 0.0161 0.0161 0.0167
ψ11_5  bias - - - - - - - - 0.0024 0.0021 | RMSE - - - - - - - - 0.0168 0.0168
ψ21_5  bias - - - - - - - - 0.0000 -1e-04 | RMSE - - - - - - - - 0.016 0.0162
ψ12_5  bias - - - - - - - - 0.0012 0.0011 | RMSE - - - - - - - - 0.0174 0.0172
ψ22_5  bias - - - - - - - - 5e-04 4e-04 | RMSE - - - - - - - - 0.0166 0.0167
Table C.2: Summary Statistics of Google Flu Trend ILI & CDC ILI Levels
Table C.2.A. Summary Statistics of Sample ILI
Columns: Google ILI (Nobs, Mean, Std, Min, Max) | CDC ILI (Nobs, Mean, Std, Min, Max) | Corr. = correlation between CDC & Google ILI
Alabama         326 0.0205 0.0207 0.0035 0.1384 | 326 3.5460 3.4804 1 10 | 0.7160
Arizona         326 0.0243 0.0231 0.0047 0.2020 | 326 1.6902 1.6657 1 10 | 0.8870
Arkansas        326 0.0342 0.0327 0.0081 0.2436 | 326 2.2362 2.7628 1 10 | 0.7380
California      326 0.0221 0.0176 0.0056 0.1633 | 326 2.1840 2.2958 1 10 | 0.7050
Colorado        326 0.0102 0.0111 0.0016 0.0902 | 326 2.0736 2.0993 1 10 | 0.8650
Connecticut     326 0.0111 0.0154 0.0019 0.1710 | 326 1.6902 1.8378 1 10 | 0.6870
Delaware        326 0.0234 0.0155 0.0088 0.1356 | 326 1.7454 1.9751 1 10 | 0.8280
Florida         326 0.0152 0.0099 0.0040 0.0688 | 326 1.7515 1.7426 1 10 | 0.7450
Georgia         326 0.0185 0.0172 0.0035 0.1038 | 326 2.7239 2.9525 1 10 | 0.6750
Idaho           326 0.0187 0.0186 0.0043 0.1564 | 326 1.9202 2.1540 1 10 | 0.7800
Illinois        326 0.0182 0.0159 0.0046 0.1196 | 326 2.4632 2.5048 1 10 | 0.8260
Indiana         326 0.0182 0.0161 0.0042 0.1082 | 326 2.1534 2.5252 1 10 | 0.8410
Iowa            326 0.0113 0.0117 0.0025 0.0828 | 326 1.3742 1.4468 1 10 | 0.7390
Kansas          326 0.0108 0.0120 0.0022 0.0996 | 326 2.1104 2.5911 1 10 | 0.8520
Kentucky        326 0.0211 0.0219 0.0035 0.1395 | 326 1.7117 1.8588 1 10 | 0.6590
Louisiana       326 0.0362 0.0301 0.0086 0.2008 | 326 3.3497 3.3181 1 10 | 0.7550
Maine           326 0.0106 0.0127 0.0024 0.1151 | 326 1.2147 0.9397 1 9  | 0.6750
Maryland        326 0.0243 0.0165 0.0088 0.1380 | 326 2.0613 2.1446 1 10 | 0.7600
Massachusetts   326 0.0117 0.0156 0.0021 0.1520 | 326 1.7515 1.5401 1 10 | 0.7330
Michigan        326 0.0178 0.0148 0.0046 0.0913 | 326 1.6380 1.7640 1 10 | 0.8860
Minnesota       326 0.0195 0.0211 0.0050 0.1875 | 326 1.7331 1.8595 1 10 | 0.7680
Mississippi     326 0.0225 0.0240 0.0042 0.1675 | 326 3.1902 3.2497 1 10 | 0.7070
Missouri        326 0.0119 0.0114 0.0027 0.0787 | 326 2.3528 2.8101 1 10 | 0.8370
Montana         326 0.0096 0.0110 0.0023 0.0966 | 326 1.2331 1.1181 1 10 | 0.5520
Nebraska        326 0.0118 0.0130 0.0027 0.1069 | 326 1.8558 2.1798 1 10 | 0.7630
Nevada          326 0.0206 0.0159 0.0048 0.1402 | 326 2.1503 2.1695 1 10 | 0.7220
New Hampshire   326 0.0113 0.0157 0.0019 0.1490 | 326 1.1871 1.1144 1 10 | 0.7620
New Jersey      326 0.0184 0.0205 0.0038 0.2090 | 326 2.4080 2.6010 1 10 | 0.7210
New Mexico      326 0.0264 0.0190 0.0081 0.1342 | 326 2.3742 2.6674 1 10 | 0.7030
New York        326 0.0181 0.0159 0.0043 0.1305 | 326 1.7301 1.8499 1 10 | 0.6830
North Carolina  326 0.0181 0.0167 0.0034 0.1062 | 326 1.9325 2.4169 1 10 | 0.8470
North Dakota    326 0.0122 0.0154 0.0030 0.1189 | 326 1.9018 1.9013 1 10 | 0.7830
Ohio            326 0.0178 0.0163 0.0045 0.1150 | 326 1.4847 1.6691 1 10 | 0.8680
Oklahoma        326 0.0348 0.0330 0.0070 0.2459 | 326 2.4202 2.8767 1 10 | 0.7730
Oregon          326 0.0186 0.0187 0.0037 0.1362 | 326 1.6442 1.8269 1 10 | 0.8720
Pennsylvania    326 0.0236 0.0153 0.0083 0.1214 | 326 1.7454 2.0076 1 10 | 0.6490
Rhode Island    326 0.0110 0.0151 0.0019 0.1274 | 326 1.4969 1.6432 1 10 | 0.8200
South Carolina  326 0.0190 0.0183 0.0034 0.1092 | 326 2.0031 2.3423 1 10 | 0.8330
South Dakota    326 0.0126 0.0149 0.0027 0.1098 | 326 1.6687 1.4762 1 10 | 0.7950
Tennessee       326 0.0207 0.0215 0.0039 0.1359 | 326 2.0337 2.4798 1 10 | 0.8290
Texas           326 0.0331 0.0268 0.0080 0.1859 | 326 3.2485 3.1676 1 10 | 0.7830
Utah            326 0.0092 0.0082 0.0023 0.0690 | 326 2.4479 2.5021 1 10 | 0.8400
Vermont         326 0.0111 0.0138 0.0025 0.1156 | 326 1.7546 1.9662 1 10 | 0.8090
Virginia        326 0.0243 0.0155 0.0080 0.1059 | 326 3.0644 2.7954 1 10 | 0.8430
Washington      326 0.0192 0.0174 0.0042 0.1339 | 326 1.7730 1.7813 1 10 | 0.7740
West Virginia   326 0.0264 0.0191 0.0090 0.1341 | 326 1.7454 1.9876 1 10 | 0.7660
Wisconsin       326 0.0174 0.0171 0.0046 0.1606 | 326 1.7086 1.8071 1 10 | 0.6750
Wyoming         326 0.0098 0.0109 0.0030 0.0811 | 326 1.5706 1.8144 1 10 | 0.8930
national        15648 0.0185 0.0193 0.0016 0.2459 | 15648 2.0260 2.2984 1 10 | 0.7170
Table C.2.B.
Summary Statistics of Sample for Predictors
Columns: population density (Mean, Std, Min, Max) | temperature (Mean, Std, Min, Max) | precipitation (Mean, Std, Min, Max)
Alabama         94.858 0.681 93.162 95.752 | 62.763 14.024 27.059 85.673 | 15.596 13.737 0.000 78.447
Arizona         57.355 1.213 55.288 59.259 | 59.989 14.616 30.598 84.074 | 3.150 4.987 0.000 47.398
Arkansas        56.425 0.503 55.242 57.007 | 59.664 16.287 20.983 89.415 | 14.104 13.789 0.000 98.454
California      242.851 4.260 234.976 249.087 | 60.310 10.758 38.402 81.403 | 5.677 9.336 0.000 65.033
Colorado        49.682 1.330 47.179 51.677 | 47.641 16.800 11.577 76.861 | 3.748 3.966 0.000 34.540
Connecticut     740.418 3.111 732.201 743.303 | 50.080 16.522 15.595 80.809 | 13.261 12.579 0.000 68.381
Delaware        467.872 8.037 453.608 480.161 | 55.797 16.346 22.000 84.654 | 12.465 13.523 0.000 76.857
Florida         358.316 8.192 345.499 370.972 | 71.816 9.571 42.132 85.674 | 14.960 13.053 0.000 94.614
Georgia         171.188 3.022 165.263 175.565 | 64.041 13.443 30.925 85.905 | 13.774 12.901 0.000 85.655
Idaho           19.235 0.342 18.566 19.777 | 46.476 16.094 10.642 75.731 | 5.077 4.258 0.005 20.439
Illinois        231.495 0.675 229.598 232.183 | 51.697 18.935 8.424 84.739 | 11.711 10.426 0.000 61.056
Indiana         182.113 1.390 179.333 184.135 | 51.914 18.093 13.092 84.088 | 12.284 10.443 0.000 50.480
Iowa            54.933 0.475 54.008 55.626 | 47.099 20.846 -2.181 82.471 | 9.711 10.385 0.000 64.787
Kansas          35.127 0.329 34.346 35.519 | 54.715 18.694 12.648 88.532 | 7.725 8.194 0.000 39.661
Kentucky        110.645 0.892 108.642 111.772 | 55.585 16.624 17.798 84.532 | 13.459 11.103 0.000 60.303
Louisiana       105.938 1.375 102.666 107.622 | 66.927 13.077 34.469 87.347 | 15.342 14.694 0.000 115.778
Maine           43.082 0.032 43.036 43.138 | 44.411 17.969 4.198 75.925 | 14.222 11.486 0.000 61.581
Maryland        603.102 9.257 585.642 615.665 | 55.772 16.352 22.394 83.638 | 13.038 11.776 0.000 83.981
Massachusetts   849.655 10.750 829.348 864.789 | 50.320 16.634 15.745 81.071 | 13.639 12.586 0.000 74.524
Michigan        174.982 0.296 174.672 175.930 | 45.178 18.558 6.096 79.428 | 10.052 7.602 0.008 38.964
Minnesota       67.336 0.805 65.895 68.534 | 41.797 22.460 -5.447 78.250 | 7.988 8.496 0.000 51.580
Mississippi     63.482 0.292 62.822 63.808 | 63.229 14.460 27.578 87.137 | 15.634 13.405 0.000 74.258
Missouri        87.475 0.543 86.177 88.209 | 54.352 18.252 7.947 85.919 | 11.967 11.174 0.000 66.520
Montana         6.881 0.098 6.709 7.033 | 43.804 17.739 -4.312 75.171 | 4.492 4.984 0.000 31.059
Nebraska        24.034 0.324 23.383 24.491 | 49.323 19.306 5.403 82.877 | 6.961 7.828 0.000 46.756
Nevada          25.003 0.506 24.172 25.861 | 52.312 15.753 15.717 82.036 | 1.876 2.514 0.000 20.000
New Hampshire   147.449 0.424 146.985 148.203 | 44.926 18.300 5.500 76.786 | 12.381 10.105 0.000 65.486
New Jersey      1203.230 9.129 1184.502 1215.380 | 53.259 16.227 20.518 82.080 | 14.387 13.384 0.000 91.589
New Mexico      17.069 0.175 16.576 17.205 | 55.228 15.086 22.323 79.075 | 3.449 4.243 0.000 37.899
New York        414.482 3.518 407.679 419.006 | 47.432 17.615 9.869 78.261 | 12.399 9.101 0.665 59.578
North Carolina  199.197 3.730 191.482 204.533 | 58.449 14.566 26.306 81.894 | 13.937 11.340 0.015 93.495
North Dakota    10.096 0.394 9.530 10.717 | 39.944 22.961 -8.430 76.663 | 6.407 7.359 0.000 37.623
Ohio            282.754 0.556 281.821 283.749 | 50.468 17.776 13.210 81.002 | 11.065 8.499 0.025 48.051
Oklahoma        55.348 0.866 53.488 56.535 | 59.026 17.409 20.614 91.096 | 7.999 8.842 0.000 45.457
Oregon          40.425 0.604 39.263 41.362 | 50.351 12.027 20.312 76.070 | 9.751 9.949 0.000 54.120
Pennsylvania    284.699 1.098 281.885 285.794 | 50.014 17.025 15.023 80.758 | 12.638 10.176 0.099 89.214
Rhode Island    1018.931 0.984 1017.614 1020.664 | 51.778 15.859 19.286 83.107 | 14.244 15.292 0.000 129.643
South Carolina  156.260 2.898 150.662 160.757 | 62.331 14.192 30.244 84.863 | 13.171 11.769 0.000 63.178
South Dakota    10.933 0.223 10.541 11.254 | 45.323 21.185 -0.801 83.778 | 6.090 6.943 0.000 43.027
Tennessee       155.691 2.144 151.508 158.830 | 57.589 15.677 18.777 84.337 | 15.330 12.812 0.000 71.755
Texas           98.816 2.986 93.055 103.192 | 65.812 14.326 34.566 88.926 | 6.891 7.264 0.000 37.360
Utah            34.425 0.977 32.409 35.815 | 49.606 17.147 10.235 78.276 | 3.612 3.869 0.000 31.157
Vermont         67.923 0.082 67.720 68.013 | 45.806 18.524 5.020 78.469 | 11.533 10.200 0.000 72.714
Virginia        205.863 3.752 198.366 210.845 | 55.127 15.556 23.004 81.769 | 12.465 10.790 0.009 82.105
Washington      103.054 2.159 98.746 106.260 | 49.108 12.646 17.817 75.590 | 10.429 9.630 0.007 52.312
West Virginia   77.059 0.157 76.558 77.223 | 52.613 16.319 16.994 79.227 | 13.047 9.663 0.013 54.908
Wisconsin       105.478 0.610 104.159 106.311 | 44.415 20.417 -0.375 80.790 | 9.448 9.429 0.013 50.109
Wyoming         5.887 0.108 5.624 6.016 | 43.301 17.568 4.210 73.711 | 3.624 3.368 0.000 18.033
national        200.387 263.992 5.624 1215.380 | 52.894 18.256 -8.430 91.096 | 10.338 10.911 0.000 129.643
Table C.3: State by State Geographical Contiguity
Table C.3.A. Geographical Contiguous States
Columns: State (eigenvector centrality): contiguous states
Alabama (0.08): Mississippi, Tennessee, Georgia, Florida
Arizona (0.10): California, Utah, Nevada, New Mexico, Colorado
Arkansas (0.12): Texas, Oklahoma, Missouri, Tennessee, Mississippi, Louisiana
California (0.06): Oregon, Nevada, Arizona
Colorado (0.14): Utah, Arizona, New Mexico, Oklahoma, Kansas, Nebraska, Wyoming
Connecticut (0.06): New York, Massachusetts, Rhode Island
Delaware (0.06): Maryland, New Jersey, Pennsylvania
Florida (0.04): Alabama, Georgia
Georgia (0.10): Florida, Alabama, Tennessee, North Carolina, South Carolina
Idaho (0.12): Washington, Oregon, Nevada, Utah, Wyoming, Montana
Illinois (0.10): Wisconsin, Iowa, Missouri, Kentucky, Indiana
Indiana (0.08): Illinois, Michigan, Kentucky, Ohio
Iowa (0.12): Missouri, Illinois, Wisconsin, Minnesota, South Dakota, Nebraska
Kansas (0.08): Nebraska, Colorado, Oklahoma, Missouri
Kentucky (0.14): Illinois, Missouri, Tennessee, Virginia, West Virginia, Ohio, Indiana
Louisiana (0.06): Texas, Arkansas, Mississippi
Maine (0.02): New Hampshire
Maryland (0.08): Virginia, West Virginia, Pennsylvania, Delaware
Massachusetts (0.10): Rhode Island, Connecticut, New York, Vermont, New Hampshire
Michigan (0.06): Wisconsin, Indiana, Ohio
Minnesota (0.08): North Dakota, South Dakota, Iowa, Wisconsin
Mississippi (0.08): Louisiana, Arkansas, Tennessee, Alabama
Missouri (0.16): Arkansas, Tennessee, Kentucky, Illinois, Iowa, Nebraska, Kansas, Oklahoma
Montana (0.08): Idaho, Wyoming, South Dakota, North Dakota
Nebraska (0.12): South Dakota, Iowa, Wyoming, Colorado, Kansas, Missouri
Nevada (0.10): Oregon, California, Arizona, Utah, Idaho
New Hampshire (0.06): Vermont, Massachusetts, Maine
New Jersey (0.06): Delaware, Pennsylvania, New York
New Mexico (0.10): Colorado, Oklahoma, Texas, Utah, Arizona
New York (0.10): New Jersey, Pennsylvania, Connecticut, Massachusetts, Vermont
North Carolina (0.08): South Carolina, Tennessee, Virginia, Georgia
North Dakota (0.06): Montana, South Dakota, Minnesota
Ohio (0.10): Michigan, Indiana, Kentucky, West Virginia, Pennsylvania
Oklahoma (0.12): Kansas, Colorado, New Mexico, Texas, Arkansas, Missouri
Oregon (0.08): Washington, Idaho, California, Nevada
Pennsylvania (0.12): New Jersey, Delaware, Maryland, West Virginia, Ohio, New York
Rhode Island (0.04): Connecticut, Massachusetts
South Carolina (0.04): Georgia, North Carolina
South Dakota (0.12): North Dakota, Montana, Wyoming, Nebraska, Iowa, Minnesota
Tennessee (0.16): Missouri, Arkansas, Mississippi, Alabama, Georgia, North Carolina, Virginia, Kentucky
Texas (0.08): New Mexico, Oklahoma, Arkansas, Louisiana
Utah (0.12): Nevada, Arizona, Idaho, Wyoming, Colorado, New Mexico
Vermont (0.06): Massachusetts, New York, New Hampshire
Virginia (0.12): North Carolina, Tennessee, Kentucky, West Virginia, Maryland
Washington (0.04): Oregon, Idaho
West Virginia (0.08): Virginia, Kentucky, Ohio, Pennsylvania, Maryland
Wisconsin (0.08): Minnesota, Iowa, Illinois, Michigan
Wyoming (0.12): Utah, Colorado, Nebraska, South Dakota, Montana, Idaho
Table C.3.B.
State Geographical Adjacency Matrix
Column order (same as row order): Alabama, Arizona, Arkansas, California, Colorado, Connecticut, Delaware, Florida, Georgia, Idaho, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Maine, Maryland, Massachusetts, Michigan, Minnesota, Mississippi, Missouri, Montana, Nebraska, Nevada, New Hampshire, New Jersey, New Mexico, New York, North Carolina, North Dakota, Ohio, Oklahoma, Oregon, Pennsylvania, Rhode Island, South Carolina, South Dakota, Tennessee, Texas, Utah, Vermont, Virginia, Washington, West Virginia, Wisconsin, Wyoming
Alabama         0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0
Arizona         0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0
Arkansas        0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 1 0 0 0 0 0 0 0
California      0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
Colorado        0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 1
Connecticut     0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0
Delaware        0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0
Florida         1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Georgia         1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0
Idaho           0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 1 0 0 1
Illinois        0 0 0 0 0 0 0 0 0 0 0 1 1 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
Indiana         0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Iowa            0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0
Kansas          0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Kentucky        0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 1 0 1 0 0
Louisiana       0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
Maine           0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Maryland        0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 1 0 0
Massachusetts   0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0
Michigan        0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
Minnesota       0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0
Mississippi     1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0
Missouri        0 0 1 0 0 0 0 0 0 0 1 0 1 1 1 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0
Montana         0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1
Nebraska        0 0 0 0 1 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1
Nevada          0 1 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0
New Hampshire   0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0
New Jersey      0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0
New Mexico      0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0
New York        0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0
North Carolina  0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 1 0 0 0 0
North Dakota    0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0
Ohio            0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0
Oklahoma        0 0 1 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0
Oregon          0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0
Pennsylvania    0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0
Rhode Island    0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
South Carolina  0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
South Dakota    0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 1 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
Tennessee       1 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0
Texas           0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Utah            0 1 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
Vermont         0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Virginia        0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0
Washington      0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0
West Virginia   0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0
Wisconsin       0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Wyoming         0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 0 0
Table C.3.C.
Residence State to Workplace State Flows Alabama Arizona Arkansas California Colorado Connecticut Delaware Florida Georgia Idaho Illinois Indiana Iowa Kansas Kentucky Louisiana Maine Maryland Massachusetts Michigan Minnesota Mississippi Missouri Montana Alabama 0.0 134.5 367.0 432.0 145.5 36.5 9.0 8748.0 26397.0 0.0 360.0 324.5 78.0 68.0 444.5 1860.5 33.5 245.0 88.5 217.0 108.5 10986.5 252.5 4.5 Arizona 134.5 0.0 178.5 9663.5 1210.0 121.0 36.5 620.0 420.5 194.5 811.5 243.0 290.5 265.0 81.5 119.5 42.5 160.5 202.0 406.0 468.0 43.0 449.5 140.0 Arkansas 367.0 178.5 0.0 422.0 127.0 8.0 6.0 300.5 333.0 17.5 484.0 239.5 105.0 289.5 219.5 2938.5 13.0 23.0 31.0 193.5 68.5 2058.0 8399.0 17.5 California 432.0 9663.5 422.0 0.0 2646.5 608.0 61.0 2530.0 1767.0 605.5 2433.5 446.0 316.5 501.0 242.5 801.0 149.5 851.5 1047.0 1103.5 835.5 244.5 880.5 352.0 Colorado 145.5 1210.0 127.0 2646.5 0.0 121.5 19.0 495.5 464.5 153.5 789.5 224.0 193.5 803.0 139.5 207.0 31.0 285.5 164.5 330.0 401.5 58.0 533.0 239.5 Connecticut 36.5 121.0 8.0 608.0 121.5 0.0 95.0 935.0 234.0 12.0 319.5 50.0 14.5 17.5 59.0 33.5 339.5 342.5 24871.5 204.5 118.5 10.5 133.5 0.0 Delaware 9.0 36.5 6.0 61.0 19.0 95.0 0.0 210.0 56.5 8.0 64.5 26.5 0.0 0.0 7.0 6.0 13.5 19863.5 47.0 41.5 7.0 0.0 2.5 0.0 Florida 8748.0 620.0 300.5 2530.0 495.5 935.0 210.0 0.0 15388.5 38.5 2857.5 1407.5 441.5 273.0 891.5 2049.0 359.5 1627.5 2009.5 2265.5 875.5 1021.5 1071.0 53.0 Georgia 26397.0 420.5 333.0 1767.0 464.5 234.0 56.5 15388.5 0.0 33.0 1300.0 487.0 217.5 278.0 559.5 928.5 68.0 586.0 525.0 712.5 428.5 768.0 546.5 62.0 Idaho 0.0 194.5 17.5 605.5 153.5 12.0 8.0 38.5 33.0 0.0 77.5 13.5 60.5 17.5 17.0 56.5 0.0 0.0 28.5 31.0 41.0 6.0 20.0 558.0 Illinois 360.0 811.5 484.0 2433.5 789.5 319.5 64.5 2857.5 1300.0 77.5 0.0 50126.526636.5 576.5 3639.0 499.0 74.5 434.5 687.5 3827.0 1310.0 185.5 55047.5 20.5 Indiana 324.5 243.0 239.5 446.0 224.0 50.0 26.5 1407.5 487.0 13.5 50126.5 0.0 332.5 183.0 38703.5 152.0 42.5 142.5 179.5 18096.5 383.0 115.5 
751.5 23.5 Iowa 78.0 290.5 105.0 316.5 193.5 14.5 0.0 441.5 217.5 60.5 26636.5 332.5 0.0 287.5 141.5 54.5 0.0 56.0 45.5 312.5 4572.5 20.5 2982.5 32.0 Kansas 68.0 265.0 289.5 501.0 803.0 17.5 0.0 273.0 278.0 17.5 576.5 183.0 287.5 0.0 112.0 156.0 0.0 143.5 42.0 141.5 189.5 147.5 93235.0 76.0 Kentucky 444.5 81.5 219.5 242.5 139.5 59.0 7.0 891.5 559.5 17.0 3639.0 38703.5 141.5 112.0 0.0 203.5 26.0 193.0 73.5 547.0 145.0 217.0 623.5 26.0 Louisiana 1860.5 119.5 2938.5 801.0 207.0 33.5 6.0 2049.0 928.5 56.5 499.0 152.0 54.5 156.0 203.5 0.0 16.5 107.5 149.5 157.5 172.5 14592.0 432.5 33.5 Maine 33.5 42.5 13.0 149.5 31.0 339.5 13.5 359.5 68.0 0.0 74.5 42.5 0.0 0.0 26.0 16.5 0.0 83.0 3238.0 83.0 17.5 3.5 45.5 11.5 Maryland 245.0 160.5 23.0 851.5 285.5 342.5 19863.5 1627.5 586.0 0.0 434.5 142.5 56.0 143.5 193.0 107.5 83.0 0.0 669.5 478.0 119.0 100.5 223.0 21.0 Massachusetts 88.5 202.0 31.0 1047.0 164.5 24871.5 47.0 2009.5 525.0 28.5 687.5 179.5 45.5 42.0 73.5 149.5 3238.0 669.5 0.0 276.0 211.0 72.0 248.0 27.0 Michigan 217.0 406.0 193.5 1103.5 330.0 204.5 41.5 2265.5 712.5 31.0 3827.0 18096.5 312.5 141.5 547.0 157.5 83.0 478.0 276.0 0.0 621.5 201.0 447.0 68.5 Minnesota 108.5 468.0 68.5 835.5 401.5 118.5 7.0 875.5 428.5 41.0 1310.0 383.0 4572.5 189.5 145.0 172.5 17.5 119.0 211.0 621.5 0.0 80.5 554.0 116.5 Mississippi 10986.5 43.0 2058.0 244.5 58.0 10.5 0.0 1021.5 768.0 6.0 185.5 115.5 20.5 147.5 217.0 14592.0 3.5 100.5 72.0 201.0 80.5 0.0 550.0 6.5 Missouri 252.5 449.5 8399.0 880.5 533.0 133.5 2.5 1071.0 546.5 20.0 55047.5 751.5 2982.5 93235.0 623.5 432.5 45.5 223.0 248.0 447.0 554.0 550.0 0.0 50.0 Montana 4.5 140.0 17.5 352.0 239.5 0.0 0.0 53.0 62.0 558.0 20.5 23.5 32.0 76.0 26.0 33.5 11.5 21.0 27.0 68.5 116.5 6.5 50.0 0.0 Nebraska 49.0 232.0 131.5 314.5 837.0 10.0 27.5 259.5 137.5 53.0 499.5 179.0 23635.5 1712.0 41.0 63.5 2.0 46.0 26.5 228.0 224.5 60.5 982.0 24.5 Nevada 63.5 6664.0 35.5 13310.0 418.5 67.5 17.5 331.5 150.5 729.0 343.0 156.5 40.0 28.0 66.5 111.0 8.0 69.5 52.0 
188.0 128.5 35.5 100.0 70.5 New Hampshire 0.0 52.5 0.0 148.0 63.5 681.0 5.0 284.0 66.5 0.0 141.5 35.0 14.0 5.0 38.5 25.0 12708.0 100.0 56770.5 73.5 81.5 1.5 25.5 0.0 New Jersey 156.0 235.5 106.5 1214.0 313.5 3412.5 6930.5 3119.0 753.5 17.5 855.5 291.5 91.0 113.0 120.0 211.5 179.5 1844.5 1521.0 424.0 246.5 57.0 241.0 20.5 New Mexico 51.5 3295.0 19.5 834.5 2253.0 32.0 12.5 154.5 99.5 36.5 137.5 67.5 21.5 89.0 13.0 83.0 7.0 66.0 99.0 61.0 53.0 39.0 57.5 60.0 New York 208.5 560.0 123.0 2921.0 527.5 56336.5 706.0 6005.5 1819.0 60.5 1644.0 447.5 150.0 156.0 335.5 440.0 556.0 2293.0 6484.5 1005.5 377.5 127.5 443.0 65.0 North Carolina 684.0 436.5 215.0 1190.0 255.5 332.0 201.0 2985.0 6134.0 30.0 853.0 347.5 226.0 144.5 555.0 505.5 78.0 1391.0 574.5 550.5 241.0 312.0 394.5 9.0 North Dakota 0.0 92.5 16.0 101.0 108.0 17.0 0.0 27.5 19.0 73.0 39.5 21.0 83.0 71.0 8.0 25.0 6.5 46.5 0.0 61.0 18621.0 3.0 43.0 802.5 Ohio 357.5 513.5 209.5 1045.0 348.5 246.0 59.5 2371.5 1044.5 38.0 2103.0 19501.5 302.0 124.5 48287.0 277.5 69.0 587.5 398.0 18063.5 397.5 168.0 731.0 62.0 Oklahoma 148.5 192.5 10473.0 534.0 410.5 12.5 8.5 324.5 206.5 41.5 343.5 173.5 141.0 5175.5 163.0 575.5 8.0 58.5 49.5 180.0 104.5 208.5 2967.5 17.5 Oregon 35.0 458.0 23.0 4251.0 213.5 39.5 6.0 207.0 130.0 3799.5 144.0 108.5 59.0 69.5 43.5 59.5 17.5 66.5 42.0 40.0 63.5 22.5 83.0 202.0 Pennsylvania 303.0 379.0 216.0 1630.0 417.0 1232.0 29465.5 2231.0 1106.0 81.0 1308.0 755.5 205.5 134.0 463.5 248.0 163.5 36447.0 1093.0 865.0 469.0 85.0 369.0 53.0 Rhode Island 29.0 49.5 0.0 192.5 21.5 8410.0 12.0 363.5 33.0 0.0 127.5 23.0 11.5 5.5 29.0 19.0 143.0 102.5 47699.0 66.5 27.5 0.0 19.0 0.0 South Carolina 543.5 112.0 94.5 353.0 73.0 115.0 31.0 1891.0 22152.5 11.0 335.0 327.5 64.5 34.0 287.0 172.0 64.0 339.0 247.5 401.5 80.0 205.5 172.0 11.5 South Dakota 16.0 117.0 23.0 116.5 174.0 3.5 0.0 42.0 40.0 35.5 85.5 11.0 4416.0 51.5 10.0 0.0 0.0 7.0 13.0 30.5 2684.0 0.0 100.0 146.5 Tennessee 9128.0 264.5 6302.0 922.5 264.5 119.0 38.5 
1938.5 22882.5 40.0 1044.0 913.5 223.5 242.0 20742.0 768.0 48.0 344.5 191.0 841.5 203.0 34450.5 1232.0 24.5 Texas 1405.5 1839.5 11012.5 6108.0 2367.0 322.5 92.5 3790.5 2773.5 176.5 2496.5 1107.0 604.5 1449.0 616.0 17289.5 104.0 1038.5 657.5 1142.5 995.0 2527.5 2109.0 235.5 Utah 28.5 1643.5 54.0 1834.0 1152.5 16.5 25.5 246.5 158.5 2368.0 228.5 70.5 30.0 30.0 71.0 66.0 0.0 49.0 115.5 121.0 107.5 21.0 122.5 309.5 Vermont 0.0 33.0 4.0 73.5 20.5 392.0 29.5 86.5 22.5 4.5 91.0 29.5 11.0 0.0 5.0 6.0 167.0 23.0 2484.0 19.0 10.5 2.0 0.0 13.0 Virginia 514.5 269.5 163.5 1489.0 296.5 364.5 577.5 2536.0 1617.0 25.0 742.0 363.5 79.5 95.0 1661.5 298.0 157.0 95783.0 593.5 527.5 218.0 262.5 459.0 22.5 Washington 147.5 648.0 55.5 4483.0 540.0 87.0 0.0 414.0 297.5 14313.5 397.0 93.0 96.5 62.5 83.5 129.5 115.5 252.5 267.0 287.0 287.5 29.0 177.0 553.0 West Virginia 125.5 84.0 29.0 71.5 40.0 18.5 15.5 346.0 120.5 0.0 122.5 108.0 27.0 70.0 5651.5 76.5 19.0 12862.0 56.5 143.0 52.0 16.0 48.5 23.5 Wisconsin 122.0 294.0 115.0 636.5 325.0 70.0 14.5 645.5 394.5 16.5 32255.0 690.5 3905.5 126.0 242.0 205.5 19.0 98.0 180.5 6812.0 35322.5 134.0 439.0 67.0 Wyoming 23.5 202.5 23.5 187.0 2479.5 0.0 0.0 84.0 17.0 1553.0 77.0 13.0 24.0 77.0 22.0 49.5 0.0 18.0 20.0 55.5 59.5 24.5 30.5 1038.0 Table C.3.C.
Residence State to Workplace State Flows (continued) Nebraska Nevada New Hampshire New Jersey New Mexico New York North Carolina North Dakota Ohio Oklahoma Oregon Pennsylvania Rhode Island South Carolina South Dakota Tennessee Texas Utah Vermont Virginia Washington West Virginia Wisconsin Wyoming Alabama 49.0 63.5 0.0 156.0 51.5 208.5 684.0 0.0 357.5 148.5 35.0 303.0 29.0 543.5 16.0 9128.0 1405.5 28.5 0.0 514.5 147.5 125.5 122.0 23.5 Arizona 232.0 6664.0 52.5 235.5 3295.0 560.0 436.5 92.5 513.5 192.5 458.0 379.0 49.5 112.0 117.0 264.5 1839.5 1643.5 33.0 269.5 648.0 84.0 294.0 202.5 Arkansas 131.5 35.5 0.0 106.5 19.5 123.0 215.0 16.0 209.5 10473.0 23.0 216.0 0.0 94.5 23.0 6302.0 11012.5 54.0 4.0 163.5 55.5 29.0 115.0 23.5 California 314.5 13310.0 148.0 1214.0 834.5 2921.0 1190.0 101.0 1045.0 534.0 4251.0 1630.0 192.5 353.0 116.5 922.5 6108.0 1834.0 73.5 1489.0 4483.0 71.5 636.5 187.0 Colorado 837.0 418.5 63.5 313.5 2253.0 527.5 255.5 108.0 348.5 410.5 213.5 417.0 21.5 73.0 174.0 264.5 2367.0 1152.5 20.5 296.5 540.0 40.0 325.0 2479.5 Connecticut 10.0 67.5 681.0 3412.5 32.0 56336.5 332.0 17.0 246.0 12.5 39.5 1232.0 8410.0 115.0 3.5 119.0 322.5 16.5 392.0 364.5 87.0 18.5 70.0 0.0 Delaware 27.5 17.5 5.0 6930.5 12.5 706.0 201.0 0.0 59.5 8.5 6.0 29465.5 12.0 31.0 0.0 38.5 92.5 25.5 29.5 577.5 0.0 15.5 14.5 0.0 Florida 259.5 331.5 284.0 3119.0 154.5 6005.5 2985.0 27.5 2371.5 324.5 207.0 2231.0 363.5 1891.0 42.0 1938.5 3790.5 246.5 86.5 2536.0 414.0 346.0 645.5 84.0 Georgia 137.5 150.5 66.5 753.5 99.5 1819.0 6134.0 19.0 1044.5 206.5 130.0 1106.0 33.0 22152.5 40.0 22882.5 2773.5 158.5 22.5 1617.0 297.5 120.5 394.5 17.0 Idaho 53.0 729.0 0.0 17.5 36.5 60.5 30.0 73.0 38.0 41.5 3799.5 81.0 0.0 11.0 35.5 40.0 176.5 2368.0 4.5 25.0 14313.5 0.0 16.5 1553.0 Illinois 499.5 343.0 141.5 855.5 137.5 1644.0 853.0 39.5 2103.0 343.5 144.0 1308.0 127.5 335.0 85.5 1044.0 2496.5 228.5 91.0 742.0 397.0 122.5 32255.0 77.0 Indiana 179.0 156.5 35.0 291.5 67.5 447.5 347.5 21.0 19501.5 173.5 108.5 
755.5 23.0 327.5 11.0 913.5 1107.0 70.5 29.5 363.5 93.0 108.0 690.5 13.0 Iowa 23635.5 40.0 14.0 91.0 21.5 150.0 226.0 83.0 302.0 141.0 59.0 205.5 11.5 64.5 4416.0 223.5 604.5 30.0 11.0 79.5 96.5 27.0 3905.5 24.0 Kansas 1712.0 28.0 5.0 113.0 89.0 156.0 144.5 71.0 124.5 5175.5 69.5 134.0 5.5 34.0 51.5 242.0 1449.0 30.0 0.0 95.0 62.5 70.0 126.0 77.0 Kentucky 41.0 66.5 38.5 120.0 13.0 335.5 555.0 8.0 48287.0 163.0 43.5 463.5 29.0 287.0 10.0 20742.0 616.0 71.0 5.0 1661.5 83.5 5651.5 242.0 22.0 Louisiana 63.5 111.0 25.0 211.5 83.0 440.0 505.5 25.0 277.5 575.5 59.5 248.0 19.0 172.0 0.0 768.0 17289.5 66.0 6.0 298.0 129.5 76.5 205.5 49.5 Maine 2.0 8.0 12708.0 179.5 7.0 556.0 78.0 6.5 69.0 8.0 17.5 163.5 143.0 64.0 0.0 48.0 104.0 0.0 167.0 157.0 115.5 19.0 19.0 0.0 Maryland 46.0 69.5 100.0 1844.5 66.0 2293.0 1391.0 46.5 587.5 58.5 66.5 36447.0 102.5 339.0 7.0 344.5 1038.5 49.0 23.0 95783.0 252.5 12862.0 98.0 18.0 Massachusetts 26.5 52.0 56770.5 1521.0 99.0 6484.5 574.5 0.0 398.0 49.5 42.0 1093.0 47699.0 247.5 13.0 191.0 657.5 115.5 2484.0 593.5 267.0 56.5 180.5 20.0 Michigan 228.0 188.0 73.5 424.0 61.0 1005.5 550.5 61.0 18063.5 180.0 40.0 865.0 66.5 401.5 30.5 841.5 1142.5 121.0 19.0 527.5 287.0 143.0 6812.0 55.5 Minnesota 224.5 128.5 81.5 246.5 53.0 377.5 241.0 18621.0 397.5 104.5 63.5 469.0 27.5 80.0 2684.0 203.0 995.0 107.5 10.5 218.0 287.5 52.0 35322.5 59.5 Mississippi 60.5 35.5 1.5 57.0 39.0 127.5 312.0 3.0 168.0 208.5 22.5 85.0 0.0 205.5 0.0 34450.5 2527.5 21.0 2.0 262.5 29.0 16.0 134.0 24.5 Missouri 982.0 100.0 25.5 241.0 57.5 443.0 394.5 43.0 731.0 2967.5 83.0 369.0 19.0 172.0 100.0 1232.0 2109.0 122.5 0.0 459.0 177.0 48.5 439.0 30.5 Montana 24.5 70.5 0.0 20.5 60.0 65.0 9.0 802.5 62.0 17.5 202.0 53.0 0.0 11.5 146.5 24.5 235.5 309.5 13.0 22.5 553.0 23.5 67.0 1038.0 Nebraska 0.0 149.5 1.5 65.0 116.0 80.0 123.5 61.0 197.5 126.5 7.5 113.0 0.0 44.0 2265.0 94.5 717.0 50.5 0.0 88.5 108.0 37.5 167.0 887.5 Nevada 149.5 0.0 0.0 106.5 123.5 402.5 163.5 34.5 204.0 64.5 274.5 
157.5 10.0 53.5 7.5 112.0 637.0 1834.0 3.5 153.0 438.5 5.5 80.0 14.0 New Hampshire 1.5 0.0 0.0 177.0 8.0 815.5 121.0 0.0 142.0 16.5 32.5 205.0 393.5 26.5 3.5 56.0 190.0 40.0 11923.5 108.0 33.0 9.5 51.5 6.5 New Jersey 65.0 106.5 177.0 0.0 81.5 259046.5 1031.5 25.0 926.5 162.5 80.0 124678.5 287.0 419.0 3.5 312.5 1137.0 130.0 145.0 1605.5 212.0 128.5 175.5 4.0 New Mexico 116.0 123.5 8.0 81.5 0.0 90.0 94.0 25.0 62.0 184.5 57.0 53.0 0.0 7.0 14.5 52.5 14894.0 402.0 1.5 79.0 117.0 11.0 45.0 29.0 New York 80.0 402.5 815.5 259046.5 90.0 0.0 1744.0 28.5 1581.0 216.5 85.0 31292.5 896.5 698.5 58.0 476.5 1827.5 133.0 4037.5 2429.5 414.0 172.5 546.5 53.0 North Carolina 123.5 163.5 121.0 1031.5 94.0 1744.0 0.0 11.0 1149.0 201.5 50.0 1342.5 27.5 49563.0 13.5 3436.0 1445.5 107.5 59.5 23519.5 167.0 520.0 272.0 0.0 North Dakota 61.0 34.5 0.0 25.0 25.0 28.5 11.0 0.0 1.0 69.5 43.5 37.0 0.0 21.0 1005.0 16.0 152.0 21.0 0.0 55.0 22.0 6.0 141.0 170.5 Ohio 197.5 204.0 142.0 926.5 62.0 1581.0 1149.0 1.0 0.0 219.0 124.0 15466.5 87.0 505.0 24.0 897.0 1579.5 167.0 59.0 1032.0 316.0 26360.0 698.5 36.5 Oklahoma 126.5 64.5 16.5 162.5 184.5 216.5 201.5 69.5 219.0 0.0 89.5 189.5 27.5 125.5 32.0 342.0 11792.0 117.0 2.0 142.0 146.0 67.5 139.0 97.5 Oregon 7.5 274.5 32.5 80.0 57.0 85.0 50.0 43.5 124.0 89.5 0.0 116.5 8.0 51.0 16.0 87.5 410.0 311.0 0.0 43.0 47447.5 0.0 76.0 31.0 Pennsylvania 113.0 157.5 205.0 124678.5 53.0 31292.5 1342.5 37.0 15466.5 189.5 116.5 0.0 202.5 577.5 21.0 534.0 1791.0 121.5 178.5 3664.0 301.0 10057.0 447.0 54.5 Rhode Island 0.0 10.0 393.5 287.0 0.0 896.5 27.5 0.0 87.0 27.5 8.0 202.5 0.0 67.5 4.5 26.0 111.0 10.0 94.0 216.0 58.0 10.0 17.5 0.0 South Carolina 44.0 53.5 26.5 419.0 7.0 698.5 49563.0 21.0 505.0 125.5 51.0 577.5 67.5 0.0 30.0 873.0 744.5 75.5 35.0 1284.0 59.0 165.0 110.5 34.0 South Dakota 2265.0 7.5 3.5 3.5 14.5 58.0 13.5 1005.0 24.0 32.0 16.0 21.0 4.5 30.0 0.0 38.5 160.0 28.0 0.0 10.5 40.5 0.0 50.5 704.0 Tennessee 94.5 112.0 56.0 312.5 52.5 476.5 3436.0 16.0 897.0 
342.0 87.5 534.0 26.0 873.0 38.5 0.0 2019.5 55.5 16.5 10889.5 185.0 212.5 298.0 32.5 Texas 717.0 637.0 190.0 1137.0 14894.0 1827.5 1445.5 152.0 1579.5 11792.0 410.0 1791.0 111.0 744.5 160.0 2019.5 0.0 541.0 28.0 1505.5 1032.0 138.5 617.5 319.0 Utah 50.5 1834.0 40.0 130.0 402.0 133.0 107.5 21.0 167.0 117.0 311.0 121.5 10.0 75.5 28.0 55.5 541.0 0.0 23.5 82.0 328.5 24.5 80.0 1051.0 Vermont 0.0 3.5 11923.5 145.0 1.5 4037.5 59.5 0.0 59.0 2.0 0.0 178.5 94.0 35.0 0.0 16.5 28.0 23.5 0.0 81.0 0.0 0.0 27.0 2.5 Virginia 88.5 153.0 108.0 1605.5 79.0 2429.5 23519.5 55.0 1032.0 142.0 43.0 3664.0 216.0 1284.0 10.5 10889.5 1505.5 82.0 81.0 0.0 476.0 18415.5 247.5 8.5 Washington 108.0 438.5 33.0 212.0 117.0 414.0 167.0 22.0 316.0 146.0 47447.5 301.0 58.0 59.0 40.5 185.0 1032.0 328.5 0.0 476.0 0.0 26.0 150.5 28.5 West Virginia 37.5 5.5 9.5 128.5 11.0 172.5 520.0 6.0 26360.0 67.5 0.0 10057.0 10.0 165.0 0.0 212.5 138.5 24.5 0.0 18415.5 26.0 0.0 62.5 0.0 Wisconsin 167.0 80.0 51.5 175.5 45.0 546.5 272.0 141.0 698.5 139.0 76.0 447.0 17.5 110.5 50.5 298.0 617.5 80.0 27.0 247.5 150.5 62.5 0.0 34.0 Wyoming 887.5 14.0 6.5 4.0 29.0 53.0 0.0 170.5 36.5 97.5 31.0 54.5 0.0 34.0 704.0 32.5 319.0 1051.0 2.5 8.5 28.5 0.0 34.0 0.0 227 Table C.4: Parameter Estimates of SPVAR p=1 p=2 p=3 p=4 p=5 p=6 iid SAR iid SAR iid SAR iid SAR iid SAR iid SAR ω gg 1e-04 0.0000 2e-04 0.0000 2e-04 0.0000 1e-04 0.0000 2e-04 0.0000 4e-04 0.0000 (1e-04) (0.0000) (2e-04) (0.0000) (3e-04) (0.0000) (1e-04) (0.0000) (3e-04) (0.0000) (6e-04) (0.0000) ω gc 6e-04 9e-04 7e-04 3e-04 3e-04 3e-04 0.0015 3e-04 0.0016 3e-04 0.001 3e-04 (0.008) (0.0037) (0.011) (0.0028) (0.0137) (0.0029) (0.0089) (0.0028) (0.0127) (0.0028) (0.0181) (0.0028) ω cc 0.8378 0.9535 0.7895 0.768 0.7768 0.78 0.8154 0.7628 0.8292 0.7624 0.775 0.7629 (0.0000) (0.0000) (8e-04) (0.0000) (5e-04) (0.0000) (2e-04) (0.0000) (1e-04) (0.0000) (0.006) (0.0000) λ g × 10 4 - -0.0049 - -0.0049 - -0.0049 - -0.0049 - -0.0049 - -0.0049 - (0.0000) - (0.0000) - (0.0000) - 
(0.0000) - (0.0000) - (0.0000) λ c × 10 4 - -0.0056 - -0.0057 - -0.0057 - -0.0057 - -0.0057 - -0.0057 - (0.0000) - (0.0000) - (0.0000) - (0.0000) - (0.0000) - (0.0000) β g 1 × 10 4 0.2338 0.0006 -0.3288 0.0006 -0.4475 0.0006 0.0329 0.0006 -0.2703 0.0006 0.6016 0.0006 (0.0000) (0.0000) (0.0000) (0.0000) (1e-04) (0.0000) (0.0000) (0.0000) (1e-04) (0.0000) (1e-04) (0.0000) β c 1 × 10 4 -0.3002 0.1509 0.4607 0.1511 -0.1015 0.1510 0.4153 0.1511 -0.0696 0.1511 1.0696 0.1511 (0.0035) (0.0037) (0.0034) (0.0033) (0.0033) (0.0033) (0.0034) (0.0033) (0.0034) (0.0032) (0.0033) (0.0033) β g 2 × 10 4 -0.1246 0.0056 -0.2228 0.0055 -0.1254 0.0055 -0.4340 0.0055 -0.5295 0.0055 -0.0659 0.0055 (2e-04) (1e-04) (3e-04) (1e-04) (4e-04) (1e-04) (3e-04) (1e-04) (4e-04) (1e-04) (6e-04) (1e-04) β c 2 × 10 4 -5.9086 -1.9491 -27.1432 -1.9588 -8.1247 -1.9565 -12.0428 -1.9602 -14.8475 -1.9598 4.8236 -1.9601 (0.0242) (0.0258) (0.0239) (0.0186) (0.0237) (0.0188) (0.0242) (0.0186) (0.0245) (0.0185) (0.0237) (0.0186) β g 3 × 10 4 0.1789 -0.0488 0.3969 -0.0488 0.1473 -0.0488 0.5192 -0.0488 0.8485 -0.0488 0.1572 -0.0488 (8e-04) (4e-04) (0.0012) (3e-04) (0.0014) (3e-04) (9e-04) (3e-04) (0.0013) (3e-04) (0.0019) (3e-04) β c 3 × 10 4 28.1559 1.8214 102.3658 1.8211 43.4687 1.8212 43.3775 1.8211 69.4710 1.8211 -43.8662 1.8211 (0.085) (0.0907) (0.083) (0.0031) (0.0823) (0.0032) (0.0843) (0.0031) (0.0851) (0.0031) (0.0823) (0.0031) φ gg 1 1.0663 1.0663 1.0663 1.0663 1.0663 1.0663 1.0663 1.0663 1.0663 1.0663 1.0663 1.0663 (0.0000) (0.0000) (5e-04) (0.0000) (1e-04) (0.0000) (0.001) (0.0000) (4e-04) (0.0000) (0.0081) (0.0000) φ cg 1 16.9333 16.9346 16.9347 16.9346 16.9345 16.9346 16.9348 16.9346 16.9347 16.9346 16.9347 16.9346 (0.0000) (0.0000) (7e-04) (0.0000) (5e-04) (0.0000) (4e-04) (0.0000) (3e-04) (0.0000) (9e-04) (0.0000) φ gc 1 6e-04 3e-04 6e-04 3e-04 5e-04 3e-04 9e-04 3e-04 4e-04 3e-04 0.0011 3e-04 (0.004) (0.0017) (0.0122) (0.0032) (0.0153) (0.0032) (0.0098) (0.0032) (0.0139) (0.0032) (0.0205) 
(0.0032) φ cc 1 0.6609 0.6942 0.7061 0.6942 0.7007 0.6942 0.7109 0.6942 0.7062 0.6942 0.7112 0.6942 (0.0017) (0.0019) (0.1046) (1e-04) (0.0719) (1e-04) (0.0575) (1e-04) (0.0476) (1e-04) (0.1907) (1e-04) φ gg 2 - - -0.2488 -0.2488 -0.2488 -0.2488 -0.2488 -0.2488 -0.2488 -0.2488 -0.2488 -0.2488 - - (5e-04) (0.0000) (1e-04) (0.0000) (0.001) (0.0000) (4e-04) (0.0000) (0.0087) (0.0000) φ cg 2 - - -15.7695 -15.7696 -15.7697 -15.7696 -15.7693 -15.7696 -15.7694 -15.7696 -15.7694 -15.7696 - - (7e-04) (0.0000) (5e-04) (0.0000) (4e-04) (0.0000) (4e-04) (0.0000) (6e-04) (0.0000) φ gc 2 - - 1e-04 -2e-04 -1e-04 -2e-04 3e-04 -2e-04 0.0000 -2e-04 4e-04 -2e-04 - - (0.0122) (0.0032) (0.0212) (0.0044) (0.0135) (0.0044) (0.0191) (0.0044) (0.0281) (0.0044) φ cc 2 - - 0.0544 0.0465 0.0491 0.0465 0.0656 0.0465 0.062 0.0465 0.0579 0.0465 - - (0.1031) (1e-04) (0.0736) (1e-04) (0.0608) (1e-04) (0.0514) (1e-04) (0.1858) (1e-04) φ gg 3 - - - - -0.0018 -0.0018 -0.0018 -0.0018 -0.0018 -0.0018 -0.0018 -0.0018 - - - - (1e-04) (0.0000) (0.001) (0.0000) (4e-04) (0.0000) (0.009) (0.0000) φ cg 3 - - - - 8.7586 8.7588 8.7591 8.7588 8.7589 8.7588 8.7589 8.7588 - - - - (5e-04) (0.0000) (4e-04) (0.0000) (4e-04) (0.0000) (5e-04) (0.0000) φ gc 3 - - - - 1e-04 0.0000 4e-04 0.0000 1e-04 0.0000 4e-04 0.0000 - - - - (0.0153) (0.0032) (0.0135) (0.0044) (0.0191) (0.0044) (0.0281) (0.0044) φ cc 3 - - - - 0.0042 0.0063 0.0287 0.0063 0.0256 0.0063 0.0127 0.0063 - - - - (0.0698) (1e-04) (0.0601) (1e-04) (0.0524) (1e-04) (0.1662) (1e-04) φ gg 4 - - - - - - -0.018 -0.018 -0.018 -0.018 -0.018 -0.018 - - - - - - (9e-04) (0.0000) (3e-04) (0.0000) (0.0089) (0.0000) φ cg 4 - - - - - - -8.78 -8.7803 -8.7801 -8.7803 -8.7803 -8.7803 - - - - - - (4e-04) (0.0000) (4e-04) (0.0000) (6e-04) (0.0000) φ gc 4 - - - - - - 3e-04 -1e-04 1e-04 -1e-04 2e-04 -1e-04 - - - - - - (0.0098) (0.0032) (0.0191) (0.0044) (0.0281) (0.0044) φ cc 4 - - - - - - 0.0474 0.0205 0.0438 0.0205 0.0234 0.0205 - - - - - - (0.0557) (1e-04) (0.0505) (1e-04) 
(0.1499) (1e-04) φ gg 5 - - - - - - - - 0.0188 0.0188 0.0188 0.0188 - - - - - - - - (3e-04) (0.0000) (0.0085) (0.0000) φ cg 5 - - - - - - - - 0.5235 0.5232 0.5233 0.5232 - - - - - - - - (3e-04) (0.0000) (9e-04) (0.0000) φ gc 5 - - - - - - - - 1e-04 -1e-04 1e-04 -1e-04 - - - - - - - - (0.0139) (0.0032) (0.0281) (0.0044) φ cc 5 - - - - - - - - 0.0225 -0.0051 -0.0034 -0.0051 - - - - - - - - (0.0457) (1e-04) (0.1493) (1e-04) φ gg 6 - - - - - - - - - - -0.0097 -0.0097 - - - - - - - - - - (0.0077) (0.0000) φ cg 6 - - - - - - - - - - 0.8327 0.8326 - - - - - - - - - - (0.0011) (0.0000) φ gc 6 - - - - - - - - - - 0.0000 -1e-04 - - - - - - - - - - (0.0205) (0.0032) φ cc 6 - - - - - - - - - - 0.0056 0.0026 - - - - - - - - - - (0.1547) (1e-04) ψ g 0 0.6647 0.8935 0.6292 0.8935 0.7778 0.8935 0.2246 0.8935 0.2982 0.8935 0.5294 0.8935 (0.0000) (1e-04) (3e-04) (0.0000) (1e-04) (0.0000) (5e-04) (0.0000) (1e-04) (0.0000) (0.0099) (0.0000) ψ c 0 0.439 0.5641 0.4882 0.5641 0.5319 0.5641 0.389 0.5641 0.4002 0.5641 0.4763 0.5641 (0.0017) (0.0025) (0.0831) (1e-04) (0.064) (1e-04) (0.0488) (1e-04) (0.0394) (1e-04) (0.1678) (1e-04) ψ g 1 -0.9225 -0.9225 -0.9225 -0.9225 -0.9225 -0.9225 -0.9225 -0.9225 -0.9225 -0.9225 -0.9225 -0.9225 (0.0000) (0.0000) (6e-04) (0.0000) (1e-04) (0.0000) (0.0011) (0.0000) (4e-04) (0.0000) (0.0082) (0.0000) ψ c 1 -0.3035 -0.2175 -0.2198 -0.2175 -0.2149 -0.2175 -0.1973 -0.2175 -0.2095 -0.2175 -0.1941 -0.2175 (0.0015) (0.0016) (0.0936) (1e-04) (0.0644) (1e-04) (0.0518) (1e-04) (0.0427) (1e-04) (0.1497) (1e-04) ψ g 2 - - 0.1862 0.1862 0.1862 0.1862 0.1862 0.1862 0.1862 0.1862 0.1862 0.1862 - - (6e-04) (0.0000) (1e-04) (0.0000) (0.0012) (0.0000) (4e-04) (0.0000) (0.0089) (0.0000) ψ c 2 - - -0.1212 -0.1111 -0.1132 -0.1111 -0.0891 -0.1111 -0.1008 -0.1111 -0.0935 -0.1111 - - (0.0923) (1e-04) (0.0657) (1e-04) (0.0545) (1e-04) (0.046) (1e-04) (0.1367) (1e-04) ψ g 3 - - - - 0.0214 0.0214 0.0214 0.0214 0.0214 0.0214 0.0214 0.0214 - - - - (1e-04) (0.0000) (0.0011) (0.0000) 
(4e-04) (0.0000) (0.0092) (0.0000) ψ c 3 - - - - -0.0524 -0.0446 -0.0202 -0.0446 -0.0322 -0.0446 -0.0324 -0.0446 - - - - (0.0627) (1e-04) (0.054) (1e-04) (0.0468) (1e-04) (0.1331) (1e-04) ψ g 4 - - - - - - 0.0065 0.0065 0.0065 0.0065 0.0065 0.0065 - - - - - - (0.001) (0.0000) (4e-04) (0.0000) (0.0091) (0.0000) ψ c 4 - - - - - - 0.0091 -0.0186 -0.004 -0.0186 -0.0103 -0.0186 - - - - - - (0.0504) (1e-04) (0.0452) (1e-04) (0.1484) (1e-04) ψ g 5 - - - - - - - - -0.0128 -0.0128 -0.0128 -0.0128 - - - - - - - - (4e-04) (0.0000) (0.0086) (0.0000) ψ c 5 - - - - - - - - 0.0237 0.0073 0.0135 0.0073 - - - - - - - - (0.0412) (1e-04) (0.1717) (1e-04) ψ g 6 - - - - - - - - - - 0.0254 0.0254 - - - - - - - - - - (0.0078) (0.0000) ψ c 6 - - - - - - - - - - -0.0118 -0.0177 - - - - - - - - - - (0.186) (1e-04) Note: The values reported in the parentheses are estimated standard errors.

Table C.5: Model Selection

Table C.5.A. Lag Selection for SPVAR(p)

        LR                        AIC                        SIC                        HQC
        iid          SAR          iid         SAR            iid         SAR            iid         SAR
p=0     -            -            -607.8055   -617.207       -607.7564   -617.1578      -607.7859   -617.1873
p=1     7366.8226    5150.5606    -632.1592   -634.2221      -632.0364   -634.0992      -632.1101   -634.1729
p=2     275.728      5201.1427    -633.0325   -651.4047      -632.8359   -651.2081*     -632.9539   -651.326
p=3     3083.9227    -212.2992    -643.2044*  -650.662       -642.9342*  -650.3917      -643.0963*  -650.5538
p=4     -8876.3139   265.3208*    -613.7729   -651.5008*     -613.4289   -651.1568      -613.6353   -651.3631*
p=5     2612.9994*   -343.5206    -622.3855   -650.3235      -621.9678   -649.9058      -622.2184   -650.1564
p=6     -2449.1771   -195.3311    -614.2359   -649.637       -613.7445   -649.1456      -614.0393   -649.4404

1% Critical Value of Chi(6)=16.8119, 5% Critical Value of Chi(6)=12.5916, 10% Critical Value of Chi(6)=10.6446

Table C.5.B. Likelihood Ratio Test of Spatial Errors

      p=0      p=1      p=2       p=3      p=4       p=5       p=6
LR    9.4014   2.0628   18.3721   7.4575   37.7278   27.9380   35.4011

1% Critical Value of Chi(2)=9.2103, 5% Critical Value of Chi(2)=5.9915, 10% Critical Value of Chi(2)=4.6052

Table C.6: Model Evaluations

Table C.6.A.
Model Evaluation of SPVAR(p) with sample T = 310

              in-sample                                      out-of-sample
              Google iid  CDC iid  Google SAR  CDC SAR       Google iid  CDC iid  Google SAR  CDC SAR
p=1  MADE     0.0045      0.5420   0.0077      0.7915        0.0076      1.1554   0.0050      0.9854
     RMSE     0.0083      1.0229   0.0127      1.3257        0.0131      1.8839   0.0068      1.5700
p=2  MADE     0.0059      0.5163   0.0025      0.4515        0.0078      0.9112   0.0041      0.8450
     RMSE     0.0105      0.9677   0.0051      0.9397        0.0140      1.5722   0.0069      1.4671
p=3  MADE     0.0039      0.4787   0.0025      0.4817        0.0061      0.8158   0.0040      0.7923
     RMSE     0.0074      0.9576   0.0053      0.9716        0.0109      1.4509   0.0065      1.4144
p=4  MADE     0.0070      0.4668   0.0025      0.4732        0.0080      0.8214   0.0037      0.7699
     RMSE     0.0123      0.9845   0.0050      0.9360        0.0150      1.5269   0.0063      1.3449
p=5  MADE     0.0077      0.4948   0.0026      0.4672        0.0084      0.7999   0.0035      0.7241
     RMSE     0.0134      1.0056   0.0051      0.9355        0.0157      1.5131   0.0060      1.3003
p=6  MADE     0.0055      0.4568   0.0025      0.4763        0.0058      0.6883   0.0034      0.6909
     RMSE     0.0095      0.9522   0.0050      0.9381        0.0107      1.3121   0.0058      1.2504

Table C.6.B. Model Evaluation of SPVAR(p) with sample T = 125

              in-sample                                      out-of-sample
              Google iid  CDC iid  Google SAR  CDC SAR       Google iid  CDC iid  Google SAR  CDC SAR
p=1  MADE     0.0047      0.6789   0.0063      0.9716        0.0058      0.5863   0.0060      0.8979
     RMSE     0.0075      1.2086   0.0104      1.5699        0.0112      1.0616   0.0099      1.5239
p=2  MADE     0.0073      0.7742   0.0025      0.6138        0.0102      0.6107   0.0025      0.4786
     RMSE     0.0105      1.2470   0.0048      1.1424        0.0148      0.9318   0.0054      0.8912
p=3  MADE     0.0044      0.6255   0.0025      0.6257        0.0056      0.4990   0.0025      0.5129
     RMSE     0.0069      1.1453   0.0048      1.1429        0.0109      0.8986   0.0054      0.9377
p=4  MADE     0.0052      0.6858   0.0026      0.6145        0.0064      0.5390   0.0027      0.4849
     RMSE     0.0079      1.1663   0.0047      1.1390        0.0122      0.8815   0.0055      0.9024
p=5  MADE     0.0057      0.7072   0.0025      0.6658        0.0073      0.5587   0.0025      0.5467
     RMSE     0.0087      1.2073   0.0047      1.1599        0.0127      0.9258   0.0054      0.9387
p=6  MADE     0.0026      0.6094   0.0025      0.6135        0.0030      0.4893   0.0029      0.4883
     RMSE     0.0048      1.1336   0.0048      1.1332        0.0058      0.8966   0.0057      0.8974

Table C.6.C.
Model Evaluation of SPVAR(p) with sample T = 250

              in-sample                                      out-of-sample
              Google iid  CDC iid  Google SAR  CDC SAR       Google iid  CDC iid  Google SAR  CDC SAR
p=1  MADE     0.0051      0.5539   0.0081      0.8066        0.0046      0.5414   0.0071      0.7237
     RMSE     0.0091      1.0536   0.0136      1.3495        0.0070      1.0381   0.0095      1.1128
p=2  MADE     0.0062      0.5265   0.0026      0.4749        0.0066      0.5101   0.0026      0.4644
     RMSE     0.0102      1.0019   0.0053      0.9862        0.0085      0.9450   0.0043      0.9150
p=3  MADE     0.0062      0.4830   0.0027      0.4949        0.0051      0.4618   0.0027      0.4624
     RMSE     0.0109      0.9772   0.0057      1.0000        0.0076      0.9128   0.0046      0.9107
p=4  MADE     0.0070      0.5113   0.0026      0.4961        0.0059      0.4957   0.0026      0.4724
     RMSE     0.0123      1.0104   0.0053      0.9728        0.0087      0.9545   0.0043      0.8927
p=5  MADE     0.0062      0.5303   0.0027      0.4987        0.0051      0.4995   0.0026      0.4700
     RMSE     0.0105      0.9905   0.0054      0.9759        0.0073      0.9066   0.0043      0.8877
p=6  MADE     0.0064      0.5239   0.0026      0.4968        0.0055      0.4852   0.0024      0.4591
     RMSE     0.0114      1.0076   0.0053      0.9753        0.0083      0.9327   0.0042      0.8814

Table C.6.D. Model Evaluation of SPVAR(p) with Workflow weight matrix

              in-sample                                      out-of-sample
              Google iid  CDC iid  Google SAR  CDC SAR       Google iid  CDC iid  Google SAR  CDC SAR
p=1  MADE     0.0048      0.6628   0.0077      0.7915        0.0078      1.1043   0.0050      0.9854
     RMSE     0.0075      1.1075   0.0127      1.3257        0.0113      1.7398   0.0068      1.5700
p=2  MADE     0.0059      0.5163   0.0025      0.4515        0.0078      0.9112   0.0041      0.8450
     RMSE     0.0105      0.9677   0.0051      0.9397        0.0140      1.5722   0.0069      1.4671
p=3  MADE     0.0046      0.4749   0.0025      0.4817        0.0062      0.8291   0.0040      0.7923
     RMSE     0.0083      0.9506   0.0053      0.9716        0.0111      1.4678   0.0065      1.4144
p=4  MADE     0.0070      0.4668   0.0025      0.4732        0.0080      0.8214   0.0037      0.7699
     RMSE     0.0123      0.9845   0.0050      0.9360        0.0150      1.5269   0.0063      1.3449
p=5  MADE     0.0077      0.4948   0.0026      0.4672        0.0084      0.7999   0.0035      0.7241
     RMSE     0.0134      1.0056   0.0051      0.9355        0.0157      1.5131   0.0060      1.3003
p=6  MADE     0.0059      0.4939   0.0025      0.4763        0.0065      0.7445   0.0034      0.6909
     RMSE     0.0105      0.9669   0.0050      0.9381        0.0127      1.3852   0.0058      1.2504

Table C.6.E.
Model Evaluation of SDPD(p)

              in-sample             out-of-sample
              iid       SAR         iid       SAR
p=1  MADE     0.00273   0.00272     0.00441   0.00440
     RMSE     0.00588   0.00588     0.00881   0.00879
p=2  MADE     0.00222   0.00222     0.00354   0.00353
     RMSE     0.00494   0.00494     0.00654   0.00653
p=3  MADE     0.00222   0.00222     0.00353   0.00352
     RMSE     0.00494   0.00495     0.00652   0.00649
p=4  MADE     0.00223   0.00225     0.00354   0.00354
     RMSE     0.00494   0.00494     0.00651   0.00648
p=5  MADE     0.00224   0.00225     0.00353   0.00354
     RMSE     0.00494   0.00494     0.00651   0.00652
p=6  MADE     0.00226   0.00227     0.00355   0.00352
     RMSE     0.00496   0.00495     0.00655   0.00646

Figure C.1: Google Flu Trend ILI by Regions and States
[Line plot of weekly Google Flu Trend ILI, 2009-2015, one series per region 1-10; x-axis: weeks, y-axis: ILI.]
Region 1 (Boston) includes Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont.
Region 2 (New York) includes New Jersey, New York, Puerto Rico, and the Virgin Islands.
Region 3 (Philadelphia) includes Delaware, Maryland, Pennsylvania, Virginia, and West Virginia.
Region 4 (Atlanta) includes Alabama, Florida, Georgia, Kentucky, Mississippi, North Carolina, South Carolina, and Tennessee.
Region 5 (Chicago) includes Illinois, Indiana, Michigan, Minnesota, Ohio, and Wisconsin.
Region 6 (Dallas) includes Arkansas, Louisiana, New Mexico, Oklahoma, and Texas.
Region 7 (Kansas City) includes Iowa, Kansas, Missouri, and Nebraska.
Region 8 (Denver) includes Colorado, Montana, North Dakota, South Dakota, Utah, and Wyoming.
Region 9 (San Francisco) includes Arizona, California, Hawaii, Nevada, American Samoa, Commonwealth of the Northern Mariana Islands, Federated States of Micronesia, Guam, Marshall Islands, and Republic of Palau.
Region 10 (Seattle) includes Alaska, Idaho, Oregon, and Washington.
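The MADE (mean absolute deviation error) and RMSE (root mean squared error) criteria reported throughout Tables C.6.A-C.6.E can be sketched as below. This is a minimal illustration only, assuming MADE averages absolute forecast errors over all state-week cells; the helper names and toy numbers are hypothetical and are not taken from the dissertation's code.

```python
import math

def made(actual, predicted):
    """Mean absolute deviation error over all state-week cells."""
    pairs = [(a, p) for row_a, row_p in zip(actual, predicted)
             for a, p in zip(row_a, row_p)]
    return sum(abs(a - p) for a, p in pairs) / len(pairs)

def rmse(actual, predicted):
    """Root mean squared error over all state-week cells."""
    pairs = [(a, p) for row_a, row_p in zip(actual, predicted)
             for a, p in zip(row_a, row_p)]
    return math.sqrt(sum((a - p) ** 2 for a, p in pairs) / len(pairs))

# Hypothetical data: weekly ILI for 3 states over 4 weeks.
actual    = [[1.0, 2.0, 3.0, 4.0], [2.0, 2.5, 3.5, 4.5], [0.5, 1.5, 2.5, 3.5]]
predicted = [[a + 0.1 for a in row] for row in actual]  # uniformly 0.1 too high

print(round(made(actual, predicted), 4))  # 0.1
print(round(rmse(actual, predicted), 4))  # 0.1
```

With a forecast that is uniformly off by 0.1, both criteria equal 0.1; in general RMSE penalizes large errors more heavily than MADE.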
Figure C.2: Google Flu Trend and CDC ILI by Regions and States
[Small-multiple time-series plots, one panel per state from Alabama to Wyoming (the 48 contiguous states), weekly 2009-2015; each panel overlays CDC ILI and Google ILI (%); x-axis: weeks, y-axis: ILI.]

Figure C.3: Spatial View of Google Flu Trend and CDC ILI
Figure C.3.A. Google Flu Trend (Week from 2009 September 19 to 2009 September 26)
Figure C.3.B. CDC Flu Level (Week from 2009 September 20 to 2009 September 26)

Figure C.4: Network Plot of State Contiguity
[Network graph whose nodes are the 48 contiguous states, from Alabama to Wyoming, with edges linking contiguous states.]

Figure C.5: In-sample Fitted and Out-of-Sample Forecasted ILI from SPVAR(p)
Figure C.5.A.
In-sample Fitted and Out-of-Sample Forecasted ILI from SPVAR(p) with sample T = 310
[Four panels, for California and New York: Google ILI and CDC ILI plotted as "+" markers against in-sample fits and out-of-sample forecasts under SAR and iid errors; x-axis: weeks 0-300, y-axis: ILI. Panel titles: "In-sample & Out-of-Sample Forecast QMLE SPVARX(4), T0=310, California" and "In-sample & Out-of-Sample Forecast QMLE SPVARX(4), T0=310, New York".]
Lag orders selected by SIC.
Figure C.5.B.
In-sample Fitted and Out-of-Sample Forecasted ILI from SPVAR(p) with sample T = 125
[Four panels, for California and New York: Google ILI and CDC ILI plotted as "+" markers against in-sample fits and out-of-sample forecasts under SAR and iid errors; x-axis: weeks 0-300, y-axis: ILI. Panel titles: "In-sample & Out-of-Sample Forecast QMLE SPVARX(2), T0=125, California" and "In-sample & Out-of-Sample Forecast QMLE SPVARX(2), T0=125, New York".]
Lag orders selected by SIC.
Figure C.5.C.
In-sample Fitted and Out-of-Sample Forecasted ILI from SPVAR(p) with sample T = 250
[Figure: four panels (Google ILI and CDC ILI; California and New York) comparing observed ILI with in-sample fits and out-of-sample forecasts from QMLE SPVARX(2), T0 = 250, under SAR and iid error specifications. Lag orders selected by SIC.]

Figure C.5.D.
In-sample Fitted and Out-of-Sample Forecasted ILI from SPVAR(p) with Workflow Weight Matrix and T = 310
[Figure: four panels (Google ILI and CDC ILI; California and New York) comparing observed ILI with in-sample fits and out-of-sample forecasts from QMLE SPVARX(3), T0 = 310, under SAR and iid error specifications. Lag orders selected by SIC.]

Figure C.5.E.
In-sample Fitted and Out-of-Sample Forecasted ILI from QML Estimated SDPD(p) with sample T = 310
[Figure: four panels of Google ILI for California and New York, comparing observed ILI with in-sample fits and out-of-sample forecasts from QMLE SDPD(3) and SDPD(4) under SAR and iid error specifications.]

Figure C.6: State Responses to California Flu Shock
[Figure: generalized impulse responses (GIRF, in ‰) of state ILI to a California flu shock over horizons h = 0 to 30, with 95% confidence bands for California and responses highlighted for Arizona, Oregon, and Nevada, under four specifications. Response magnitudes by state as labeled in the figure (‰):]

State         Google SPVARX(3)  Google SPVARX(4)  CDC SPVARX(3)  CDC SPVARX(4)
              iid errors        SAR errors        iid errors     SAR errors
California    4.682             4.817             1130.597       1150.244
Oregon        0.946             2.039             156.611        190.225
Nevada        0.842             1.865             136.957        167.791
Arizona       0.737             1.586             123.408        149.201
Washington    0.37              1.391             43.385         62.175
Idaho         0.257             1.084             28.986         42.243
Utah          0.221             0.945             25.104         36.467
New Mexico    0.139             0.672             15.423         22.616
Colorado      0.11              0.603             11.731         17.489
Wyoming       0.07              0.558             5.668          9.699
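The GIRFs plotted in Figure C.6 follow the standard Pesaran–Shin construction: a one-standard-deviation shock to one variable is propagated through the fitted model's moving-average representation. A minimal sketch of that computation (not the dissertation's code; the function name and arguments are illustrative, and it assumes the VAR lag matrices A_1..A_p and residual covariance Sigma have already been estimated):

```python
import numpy as np

def girf(A, Sigma, shock_idx, horizons=30):
    """Generalized impulse responses (Pesaran-Shin style) for a VAR(p).

    A         : list of (N, N) lag coefficient matrices A_1..A_p
    Sigma     : (N, N) residual covariance matrix
    shock_idx : index of the variable hit by a one-s.d. shock
    Returns an (horizons+1, N) array; row h is the response at horizon h.
    """
    N = A[0].shape[0]
    p = len(A)
    # MA(infinity) coefficients via the recursion
    # Psi_0 = I,  Psi_h = sum_{i=1..min(h,p)} A_i Psi_{h-i}
    Psi = [np.eye(N)]
    for h in range(1, horizons + 1):
        Psi.append(sum(A[i] @ Psi[h - 1 - i] for i in range(min(h, p))))
    e = np.zeros(N)
    e[shock_idx] = 1.0
    scale = np.sqrt(Sigma[shock_idx, shock_idx])  # one-s.d. normalization
    return np.array([Ph @ Sigma @ e / scale for Ph in Psi])
```

For a diagonal VAR(1) with coefficient 0.5 and unit covariance, the response of the shocked variable decays geometrically (1, 0.5, 0.25, ...), which is a quick sanity check on the recursion.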
Asset Metadata
Creator: Xie, Wei (author)
Core Title: Panel data forecasting and application to epidemic disease
School: College of Letters, Arts and Sciences
Degree: Doctor of Philosophy
Degree Program: Economics
Publication Date: 07/23/2018
Defense Date: 05/17/2016
Publisher: University of Southern California (original), University of Southern California. Libraries (digital)
Tag: influenza-like illness incidence rate, OAI-PMH Harvest, panel data forecasting, spatial panel vector autoregression
Format: application/pdf (imt)
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Goldman, Dana (committee chair), Dekle, Robert (committee member), Hsiao, Cheng (committee member), McFadden, Daniel (committee member), Romley, John (committee member)
Creator Email: best.xw@gmail.com, weixie@usc.edu
Permanent Link (DOI): https://doi.org/10.25549/usctheses-c40-278074
Unique identifier: UC11279410
Identifier: etd-XieWei-4614.pdf (filename), usctheses-c40-278074 (legacy record id)
Legacy Identifier: etd-XieWei-4614.pdf
Dmrecord: 278074
Document Type: Dissertation
Rights: Xie, Wei
Type: texts
Source: University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA