Essays in Health Economics and Provider Behavior

by Eunhae Shin

A dissertation presented to the faculty of the University of Southern California Graduate School in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Economics

August 2019

Copyright 2019 Eunhae Shin

Abstract

This thesis examines three questions related to provider behavior in health care markets, focusing on the U.S. Medicare program.

In the first chapter, I use 20% Medicare inpatient claims data to examine whether paying a higher price for a given service induces hospitals to offer services of better quality under the Prospective Payment System. In doing so, I exploit area-specific price shocks engendered by the administrative shift in 2005 that changed the definition of Medicare payment areas from Metropolitan Statistical Areas to Core-Based Statistical Areas. My results suggest that providing higher payment does not lead to an improvement in the quality of hospital services, but can instead prompt even higher payments through other behavioral responses of hospitals.

In the following chapter, the focus of my dissertation turns to physician behavior. My co-authors and I examine whether physicians are deterred from fraud when a peer is barred from participating in publicly funded health programs due to healthcare fraud, health crimes, or improper prescribing. Using Medicare claims data, we examine responses among physicians working in an organization with a recent peer exclusion. While physicians with low predicted probabilities of being fraudulent generally do not change behavior, those with high predicted probabilities reduce total charges by 5%, total claims by 12%, and the average price per service by 6%. Responses begin at the time of peer indictment and continue through the date of peer exclusion.
Deterrence effects are greatest in larger organizations and among hospital-based specialties, and the responses imply $1 to $2 billion in savings from the prosecution of fraud and abuse cases.

The final chapter investigates the impact of intermediaries in health care, namely referring physicians, on the specialty treatment choices of patients. While physician referral behavior is largely under-studied, the prior literature documenting the effects of publicly available information about the quality of tertiary care providers has found no more than moderate effects on physicians' referral decisions. If it is not publicly available information that matters, then how do referring physicians, as consumers of higher-level specialty care, obtain or update their private information on providers? This chapter finds the answer in the science of social networks, which asks (1) who are a subject's friends, and (2) what is the subject's relationship with them like? I implement this idea by adapting a methodology proposed in the medical and health services literature, which identifies physician social networks using administrative claims data, and by incorporating the analytical tools of social network analysis. Together, these tools allow me to test the underlying hypothesis of this study: that better connected referring physicians are more informed about the quality of specialty care providers and, accordingly, are better able to respond to variation in that quality. Indeed, the results of discrete-choice demand models suggest that patients referred by a physician who is a more important player in his or her social networks are more likely to choose a surgeon of higher quality.

Acknowledgments

When I joined the Schaeffer Center as a pre-doctoral fellow in the Fall of 2015, it completely changed my research agenda and overall Ph.D. experience.
The Center inspired the overarching framework of my dissertation: prior to this fellowship, I never would have anticipated that I would be working on, and so intrigued by, U.S. health care policy and the Medicare program. Most importantly, this fellowship introduced me to my advisor, Dana Goldman, the Director's Chair of the Schaeffer Center. Dana's wisdom and guidance provided a big-picture perspective on where I was and where I should be going next as a scholar. He was challenging in his questions, always leading me to deeper levels of thinking. As the director of a large research institution like the Schaeffer Center, Dana has also provided a model for leadership that I hope to emulate as my career progresses.

Another landmark of my Ph.D. journey was having Alice Chen as my faculty mentor at the Schaeffer Center. Since our first meeting in my third year, she has played a vital role in shaping my dissertation. My first chapter evolved substantially through our weekly meetings in Summer 2016; I co-authored my second chapter with her, and she provided a great deal of input through the conceptualization of my final chapter. The marks of Alice's effort and dedication thus appear throughout this dissertation. Apart from research, she also provided unwavering support during my time on the job market and through the ups and downs of Ph.D. life.

I am also grateful to John Romley, who has provided tremendous support as my committee member and co-author. He has consistently impressed me with his thorough understanding of the literature intersecting multiple disciplines, including health services research, industrial organization, and econometrics. No matter what topic or problem I faced, he would suggest precisely the papers that would be most helpful. My dissertation work has also benefited from other faculty members at the Schaeffer Center and the Department of Economics.
In particular, I thank Jeff Nugent, Michael Leung, John Strauss, Paulina Oliva, Darius Lakdawalla, and Rebecca Myerson for their helpful comments and feedback. Patricia St. Clair and Jillian Wallis graciously shared their knowledge of, and experience with, Medicare claims data. Young Miller and Sara Geiger provided excellent administrative support.

I was fortunate to have wonderful classmates in my cohort, supportive friends back in Korea and here in LA, and a family-like community at the Power of Praise Church. I am also extremely grateful to Do Young Kyung, my long-time mentor, who has taught me, from the beginning, what research is all about and what it means to pursue an academic career.

I would not be where I am today without my family. My father, Byungchul Shin, has been the greatest influence and inspiration throughout my years of education, and my mother, Sookwon Choi, has provided an immeasurable amount of support, love, and dedication. Finally, I dedicate this dissertation to those who made my Ph.D. journey a beautiful memory: Jaehong Kim, my husband, who has always been on my side and has been my best friend, best roommate, and best classmate; and Jua Kim, my daughter, who has blessed my life in the past two years by teaching me how amazing it is to love someone so selflessly.

I gratefully acknowledge funding from the USC Provost's Ph.D. Fellowship, the Leonard D. Schaeffer Center Predoctoral Fellowship, the USC Graduate School Summer Research Grant, and the USC Graduate School Advanced Fellowship. Research reported in this dissertation was supported by the National Institute on Aging of the National Institutes of Health under Award Numbers P01AG033559 and P30AG024968. The content is solely the responsibility of the author and does not necessarily represent the official views of the National Institutes of Health.
The research in the last chapter of this dissertation was performed under a Data Use Agreement (DUA 52091) with the Centers for Medicare and Medicaid Services for dissertation research.

Contents

1 Hospital Responses to Price Shocks under the Prospective Payment System
  1.1 Introduction
  1.2 Background
    1.2.1 Medicare Inpatient Prospective Payment System
    1.2.2 Changes from MSA to CBSA
  1.3 Conceptual Framework
  1.4 Empirical Analysis
    1.4.1 Data
    1.4.2 Empirical Approach
    1.4.3 Results
  1.5 Other Behavioral Responses
    1.5.1 Heterogeneous Response by Product Line
    1.5.2 Upcoding
  1.6 Conclusion
2 Physician Deterrence from Fraud, Waste, and Abuse
  2.1 Introduction
  2.2 Conceptual Framework
  2.3 Data
  2.4 Empirical Approach
  2.5 Results
    2.5.1 Overall Behavioral Responses
    2.5.2 Variation in Responses to Identify Various Mechanisms
    2.5.3 Robustness and Sensitivity to Assumptions
    2.5.4 Welfare Estimates of Deterrence
  2.6 Conclusion
3 The More Connected the Physicians, the Better the Referrals? Evidence from Patient-Sharing Networks
  3.1 Introduction
  3.2 Background
    3.2.1 CABG Surgery
    3.2.2 Referral Process
  3.3 Data
  3.4 Physician Networks
    3.4.1 Network Definition
    3.4.2 Network Characteristics
  3.5 Empirical Approach
    3.5.1 Model
    3.5.2 Patient Choice Set
    3.5.3 Descriptive Statistics
  3.6 Results
    3.6.1 Preliminary Evidence
    3.6.2 Main Findings
    3.6.3 Sources of Heterogeneity
    3.6.4 Aggregate Effects
    3.6.5 Robustness Checks
  3.7 Conclusion
Bibliography
A Appendix to Chapter 1
B Appendix to Chapter 2
C Appendix to Chapter 3

List of Tables

1.1 Examples of Hospital Reimbursement Before and After Changes in Medicare Geography
1.2 Summary Statistics at Baseline (2004), by Quartiles of Wage Index Changes
1.3 Effects of Changes in Wage Index on Hospital Outcomes
1.4 Distributions of DRG Weight and Spread
1.5 Effects of Changes in Wage Index on the Fraction of Top Codes at Area-DRG-Pair Level
2.1 Timeline from Indictment to Exclusion
2.2 Summary Statistics
2.3 Responses, by Percentile of Predicted Exclusion Probability
2.4 Heterogeneity in Responses, >75th Percentile Predicted Exclusion Probability
3.1 Descriptive Statistics
3.2 Characteristics of Referring Physicians by the Level of Network Measures
3.3 Effects of Referring Physicians' Network Characteristics on Provider Choice, Hospital+Surgeon Choice Set
3.4 Effects of Referring Physicians' Network Characteristics on Provider Choice by Type of Admissions, Hospital+Surgeon Choice Set
3.5 Effects of Referring Physicians' Network Characteristics on Provider Choice, Surgeon|Hospital Choice Set
3.6 Effects of Referring Physicians' Network Characteristics on Provider Choice by Type of Admissions, Surgeon|Hospital Choice Set
3.7 Effects of Referring Physicians' Network Characteristics on Provider Choice by Years of Experience, Hospital+Surgeon Choice Set
3.8 Effects of Referring Physicians' Network Characteristics on Provider Choice by Urban/Rural Areas, Hospital+Surgeon Choice Set
3.9 Effects of Referring Physicians' Network Characteristics on Provider Choice by Patients' Household Income, Hospital+Surgeon Choice Set
3.10 Effects of Referring Physicians' Network Characteristics on Provider Choice by Patients' Severity, Hospital+Surgeon Choice Set
A.1 Summary Statistics for Hospital Characteristics at Baseline (2004)
B.1 Gradients of Deterrence Response, 75th to 90th versus >90th Percentiles of Predicted Exclusion Probability
B.2 Using an Earlier Window (Years 3-5) to Predict Exclusions, >75th Percentile of Predicted Exclusion Probability
B.3 Sensitivity Analysis Using Random Assignment of Dates and TINs, >75th Percentile of Predicted Exclusion Probability
C.1 DRG Codes and Descriptions
C.2 Medicare Inpatient Claim Admission Type Codes
C.3 Rural-Urban Continuum Codes
C.4 Effects of Referring Physicians' Network Characteristics on Provider Choice by Years of Experience, Surgeon|Hospital Choice Set
C.5 Effects of Referring Physicians' Network Characteristics on Provider Choice by Urban/Rural Areas, Surgeon|Hospital Choice Set
C.6 Effects of Referring Physicians' Network Characteristics on Provider Choice by Patients' Household Income, Surgeon|Hospital Choice Set
C.7 Effects of Referring Physicians' Network Characteristics on Provider Choice by Patients' Severity, Surgeon|Hospital Choice Set
C.8 Effects of Referring Physicians' Network Characteristics on Provider Choice, Hospital+Surgeon Choice Set with Quartile Specification of Network Characteristics
C.9 Effects of Referring Physicians' Network Characteristics on Provider Choice, Surgeon|Hospital Choice Set with Quartile Specification of Network Characteristics
C.10 Effects of Referring Physicians' Network Characteristics on Provider Choice, Hospital+Surgeon Choice Set with a 10% Threshold Defining Networks
C.11 Effects of Referring Physicians' Network Characteristics on Provider Choice, Surgeon|Hospital Choice Set with a 10% Threshold Defining Networks
C.12 Effects of Referring Physicians' Network Characteristics on Provider Choice, Hospital+Surgeon Choice Set with a 30% Threshold Defining Networks
C.13 Effects of Referring Physicians' Network Characteristics on Provider Choice, Surgeon|Hospital Choice Set with a 30% Threshold Defining Networks
C.14 Effects of Referring Physicians' Network Characteristics on Provider Choice in the Entire Network, Hospital+Surgeon Choice Set
C.15 Effects of Referring Physicians' Network Characteristics on Provider Choice in the Entire Network, Surgeon|Hospital Choice Set
C.16 Effects of Referring Physicians' Network Characteristics on Provider Choice, Hospital+Surgeon Choice Set with Risk-Adjusted Mortality Rates
C.17 Effects of Referring Physicians' Network Characteristics on Provider Choice, Surgeon|Hospital Choice Set with Risk-Adjusted Mortality Rates

List of Figures

1.1 Switch from MSA to CBSA
1.2 Percentage Changes of Wage Index
1.3 Year-specific Effects of Changes in Wage Index on Log of Per-claim Payment
1.4 Year-specific Effects of Changes in Wage Index on Hospital Outcomes at Area Level
1.5 Year-specific Effects of Changes in Wage Index on Admission Volume at Area-DRG Level
1.6 Year-specific Effects of Changes in Wage Index on the Fraction of Top Codes at Area-DRG-Pair Level
2.1 Events Prior to Exclusion
2.2 Predicted Probability of Exclusion
2.3 Change in Charges, by Percentile of Predicted Exclusion Probability
2.4 Change in Quantity, by Percentile of Predicted Exclusion Probability
3.1 Examples of Physician Networks (2008)
3.2 Illustration of Network Measures
3.3 Distributions of Network Measures at Baseline (2008)
3.4 Binned Scatter Plots and Linear Regression Lines at Baseline (2008)
3.5 Time-Series Plots of Network Measures
3.6 Time-Series Plots of Quality Measures
3.7 Percentage-Point Change of Patients Referred Following Negative Events
3.8 Average Market-Level Network Characteristics and Urban/Rural Status
3.9 Aggregate Effects of Network Characteristics on Surgeon Market Share by Type of Admissions
A.1 Percentage Changes of Geographic Adjustment Factor
B.1 Gradient of Deterrence Response, 75th to 90th versus >90th Percentiles of Predicted Exclusion Probability
B.2 Using an Earlier Window (Years 3-5) to Predict Exclusions, >75th Percentile Predicted Exclusion Probability
B.3 Evidence around Dates of Indictment
C.1 Distributions of Unadjusted and Risk-Adjusted In-Hospital Mortality Rates at Baseline (2008)

Chapter 1

Hospital Responses to Price Shocks under the Prospective Payment System

1.1 Introduction

Medicare in the United States introduced the market principle into its inpatient component by implementing the Prospective Payment System (PPS) in 1984. Under the PPS, hospitals receive a bundled payment for an entire episode of treatment, irrespective of actual costs, based on diagnosis-related groups (DRG). Earlier studies on the impact of the introduction of the PPS suggest that the change in payment mechanism contributed to a significant decrease in hospital costs as measured by length of stay (Freiman et al., 1989; Frank and Lave, 1989; DesHarnais et al., 1987; DesHarnais et al., 1990; Manton et al., 1993; Ellis and McGuire, 1996). Evidence on other dimensions of hospital care is mixed: using mortality and readmission rates, DesHarnais et al.
(1987, 1990) show that the PPS did not result in a decline in quality, while Cutler (1995) demonstrates that it led to a higher readmission rate and a higher probability of death in hospital or immediately after discharge; Hodgkin and McGuire (1994) suggest that the number of patients treated fell after the introduction of the PPS, which contradicts the theoretical prediction that the elimination of marginal incentives would increase the total treatment volume.

(This chapter was published in Health Economics 2019; 28(2): 245-260. The final publication is available at: https://doi.org/10.1002/hec.3839.)

There is surprisingly little evidence, however, on the effects of the ensuing changes in payment levels under the prospective payment system on these different aspects of hospital care. The primary reason for this gap is that payment systems for hospital services are highly centralized, so exogenous variation that randomly assigns hospitals to a higher, as opposed to a lower, payment schedule rarely exists. A few studies have used the Balanced Budget Act of 1997 as a natural experiment to examine the impact of Medicare payment cuts (also called the Medicare "bite") on patient outcomes (Wu and Shen, 2014; Shen and Wu, 2013; Seshamani et al., 2006) and treatment volume (Bazzoli et al., 2004). This variation concerns a negative price shock, however, and whether higher payment levels can incentivize hospitals to improve the quantity and quality of services remains largely unanswered.

In 2005, the Centers for Medicare & Medicaid Services (CMS) changed the geographic landscape of Medicare by introducing the new definition of the Core-Based Statistical Areas (CBSA). Prior to the shift, there were 322 urban areas and 52 rest-of-state rural areas (i.e., one statewide rural area for each state) defined by the Metropolitan Statistical Areas (MSA).
The introduction of the new geographic classification system increased the number of urban areas to 389, and also changed the composition of the rest-of-state rural areas. The Medicare payment for hospital services is adjusted for each geographic unit, and hospitals in urban areas have, on average, a higher reimbursement rate than those in rural areas. Thus, for example, two hospitals that were originally in the same rural area would likely be geographically proximate and relatively homogeneous. If one of the two hospitals began to be classified into an urban area because of the new payment area definition, this newly classified hospital, henceforth referred to as an 'urban switcher', may experience an exogenous increase in reimbursement and can form a treatment group that is significantly affected by the institutional change. On the other hand, the rural hospital, henceforth referred to as a 'rural stayer', that is presumably similar to the newly urban hospital may serve as an appropriate control group. Using this research design, I examine hospital responses to price changes under the prospective payment system in terms of admission volume, treatment intensity, and quality of services, outcomes which have been of interest in previous studies.

The remainder of the paper is organized as follows. In Section 2, I describe the institutional details of the Medicare PPS and the change in the payment area definition that occurred in 2005. Section 3 offers a theoretical framework to help understand the findings of the paper. Section 4 presents the data and summarizes empirical strategies and results. In Section 5, I discuss two alternative responses of hospitals: heterogeneous responses across DRGs and the possibility of upcoding. Section 6 concludes.

1.2 Background

1.2.1 Medicare Inpatient Prospective Payment System

Prior to the introduction of the PPS, Medicare inpatient hospital services were reimbursed on a retrospective cost basis.
Medicare introduced the PPS in 1984, recognizing that the financial incentives inherent in the cost-based reimbursement system can induce hospitals to overutilize resources. Since then, hospitals under the Medicare PPS have received a fixed payment based on the DRG, a classification system that categorizes patients into clinically related groups. The list of DRGs is updated every year, and there were 756 DRGs in 2016. The DRG payment consists of two components: the larger operating payment (the average cost of delivering specific inpatient care) and an additional capital payment (the capital-related cost of that care). Each of the components and the final DRG payment are determined as follows:

Operating Payment = [(Labor Share × WI) + (Nonlabor Share × COLA)] × (1 + IME + DSH)   (1.1)

Capital Payment = (Standard Federal Rate) × (GAF) × (Large Urban Add-on) × (Capital COLA) × (1 + Capital DSH + Capital IME)   (1.2)

DRG Payment = (Operating Payment + Capital Payment) × (DRG Weight)   (1.3)

The operating payment can be split into labor-related and non-labor-related portions (e.g., 62% and 38%, respectively, of the base operating payment amount of $5,467.39 in 2016), where the labor share is adjusted by the wage index (WI) and the non-labor share by the cost-of-living adjustment factor (COLA). While both the wage index and COLA are designed to reflect different input costs across areas, the latter applies only to hospitals located in Alaska and Hawaii. The operating payment is subsequently adjusted by an indirect medical education (IME) factor, a percentage add-on for teaching hospitals, and by a disproportionate share hospital (DSH) factor, which compensates hospitals with a high percentage of low-income patients. The geographic adjustment factor (GAF), capital COLA, capital DSH, and capital IME in the capital payment formula play the same roles as the wage index, COLA, DSH, and IME in the operating payment, respectively.
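Equations (1.1)-(1.3) can be sketched in a few lines of Python, using the illustrative 2016 figures quoted in the text (a base operating amount of $5,467.39 with a 62%/38% labor/non-labor split, and a federal capital rate of $438.75). The function name and the convention that all adjustment factors default to "no adjustment" are my own assumptions, not part of the chapter:

```python
def drg_payment(wage_index, drg_weight, cola=1.0, ime=0.0, dsh=0.0,
                gaf=1.0, urban_addon=1.0, cap_cola=1.0, cap_dsh=0.0, cap_ime=0.0):
    """Sketch of the Medicare IPPS formula in equations (1.1)-(1.3).

    Constants are the FY2016 illustrative values from the text; every
    adjustment factor defaults to 'no adjustment'.
    """
    base = 5467.39                     # base operating payment amount, 2016
    labor_share = 0.62 * base          # labor-related portion
    nonlabor_share = 0.38 * base       # non-labor-related portion
    federal_capital_rate = 438.75      # federal capital rate, 2016

    # Equation (1.1): labor share scaled by the wage index, non-labor share
    # by COLA, then the IME and DSH percentage add-ons.
    operating = (labor_share * wage_index + nonlabor_share * cola) * (1 + ime + dsh)

    # Equation (1.2): capital payment with its own geographic and add-on factors.
    capital = (federal_capital_rate * gaf * urban_addon * cap_cola
               * (1 + cap_dsh + cap_ime))

    # Equation (1.3): combined payment scaled by the DRG weight.
    return (operating + capital) * drg_weight

# A hospital at the national-average wage index treating a weight-1.0 DRG:
print(round(drg_payment(wage_index=1.0, drg_weight=1.0), 2))  # 5906.14
```

Because the wage index multiplies only the labor share, a given percentage change in the wage index translates into a smaller percentage change in the operating payment, which is why actual payment changes (Table 1.1 below) also depend on DRG weight updates and the other adjustment factors.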
(The wage index, a primary source of price variation in this study, is calculated as the ratio of the area's average hourly hospital wage to the national average hourly hospital wage, and is derived from hospitals' reports on their wages, salaries, benefits, and working hours, submitted four years prior to a given fiscal year. The federal capital rate in 2016 was $438.75.)

The operating payment and capital payment are then combined and multiplied by the DRG weight to capture differences across DRGs in the average relative resource intensity of treating a patient.

1.2.2 Changes from MSA to CBSA

Figure 1.1 shows the nationwide changes in the borders of the payment areas, and their urban/rural status shifts, following the transition from MSA to CBSA. This change significantly influenced the wage index and the geographic adjustment factor in equations (1.1) and (1.2); the wage index, in particular, is the key indicator of the price shock because it is multiplied by the labor share of the operating payment, which accounts for the largest share of the final payment amount. Moreover, the two indices are strongly correlated and move in the same direction, so changes in the wage index are a reasonable representation of the price shock engendered by the geographic shift in 2005.

Anticipating the significant impact on payment for some hospitals, the CMS implemented two policies during the transition period. First, urban hospitals that became rural under the CBSA could maintain their previous status for three years (2005-7), which is referred to as a "hold harmless" policy. Second, the CMS provided a one-year transition blend in 2005 for hospitals whose wage index decreased because of the new payment area definition; these hospitals received payment based on 50% of the CBSA wage index and 50% of the wage index that they would have received under the MSA boundaries.
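The one-year transition blend can be summarized as a small helper. The function name and the example wage indices are hypothetical; only the 50/50 blending rule for hospitals whose index fell comes from the text:

```python
def fy2005_wage_index(msa_wi, cbsa_wi):
    """FY2005 transition rule (a sketch): hospitals whose wage index fell
    under the new CBSA definition were paid on a 50/50 blend of the CBSA
    and old MSA wage indices; all other hospitals moved straight to the
    CBSA index."""
    if cbsa_wi < msa_wi:
        return 0.5 * cbsa_wi + 0.5 * msa_wi
    return cbsa_wi

# A rural-to-urban switcher with a higher CBSA index feels the full shock
# immediately in 2005:
print(fy2005_wage_index(msa_wi=0.99, cbsa_wi=1.41))  # 1.41
# A hospital whose index fell is cushioned for one year, landing halfway
# between the old and new values (roughly 0.975 here):
print(fy2005_wage_index(msa_wi=1.13, cbsa_wi=0.82))
```

This asymmetry, together with the hold-harmless policy, is why positive price shocks bite immediately in 2005 while negative shocks phase in through 2007.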
Thus, hospitals subject to positive price shocks (mainly those shifted from rural to urban) would experience an immediate effect in 2005, while those subject to negative price shocks would experience a gradual effect from 2005 to 2007 because of the hold harmless program and the wage index blend. Given this policy differential, this study splits all hospitals in the sample into urban and rural samples and examines the percentage changes of the wage index in two-year intervals, by urban/rural status shift, in Figure 1.2. The figure clearly shows that the urban switchers in the rural sample experienced an upward shift in the distribution of their wage index between 2004 and 2006, while the distribution for the stayers in the rural sample remained relatively stable throughout the period. In contrast, the rural switchers in the urban sample had a slight decrease in the median between 2004 and 2006, presumably because of the wage index blend in 2005, and then dropped sharply through 2008, the year in which the hold harmless policy ended and the CBSA completely replaced the MSA. Thus, this study defines the years from 2005 to 2007 as a "transition period," when the CBSA began to have an effect on a subset of hospitals, and the years from 2008 onwards as a "post-treatment period," when the CBSA became fully effective. I also plot the distribution of the changes in the geographic adjustment factor (Figure A.1 in the Appendix): the two figures show precisely the same pattern, except that the graphs for the wage index have a slightly larger scale, confirming that the wage index is a sufficient proxy for the price shocks.

The extent to which hospitals respond to this shock hinges on how the change in wage index is translated into a change in the actual payment for hospitals.
Table 1.1 presents a detailed breakdown of the actual operating payment amount for the two most common DRGs (pneumonia and chest pain) at two hospitals at the extremes of the wage index changes between 2004 (the year before the CBSA introduction, henceforth the baseline) and 2008 (when the CBSA was fully implemented). The wage indices were similar for the two hospitals in the year before the shock, but the index for Ukiah Valley Medical Center soared from 0.9967 to 1.4147 (a 41.9% increase) while that for Kane County Hospital dropped from 1.1333 to 0.8215 (a 27.5% decrease) by 2008. This translated into 76% and 48% increases in the final operating payment for pneumonia and chest pain patients, respectively, at Ukiah Valley Medical Center, in stark contrast to the 9% increase and 9% decrease in the corresponding payments at Kane County Hospital. In sum, this table implies that the simple administrative shift in 2005 generated substantial price shocks across different areas, the effects of which were amplified by differential changes in DRG weights at the national level.

1.3 Conceptual Framework

This section provides a simple framework for understanding hospital responses to price changes under the prospective payment system. I first assume that hospitals attach non-negative weights both to benefits accruing to patients and to their profits, and that the objective function is separable in the two arguments:

max_{q,e}  α b(q, n(q)) + (1 − α) n(q) [p − c(q, e)]   (1.4)

where 0 < α < 1. The benefits for patients, b(·), are determined by the quality of the services, q, and the total quantity supplied, n, which is a function of the quality. The fixed payment is denoted by p, and c(·) indicates the average cost per patient, determined by q and the cost-reducing effort, e. The cost function is assumed to be strictly convex. The hospital chooses the optimal levels of q and e according to the following first-order conditions.
\alpha\left[\frac{\partial b(q,n(q))}{\partial q} + \frac{\partial b(q,n(q))}{\partial n(q)}\frac{\partial n(q)}{\partial q}\right] + (1-\alpha)\left[\frac{\partial n(q)}{\partial q}\big(p - c(q,e)\big) - n(q)\frac{\partial c(q,e)}{\partial q}\right] = 0 \tag{1.5}

-(1-\alpha)\, n(q)\, \frac{\partial c(q,e)}{\partial e} = 0 \tag{1.6}

Equation (1.6) suggests that hospitals choose the efficient level of cost-reducing effort for any given quality under the prospective payment system. By differentiating equation (1.5) with respect to p, I can solve for the optimal choice of quality as a function of the price, which yields the expression below:

\frac{dq^{*}}{dp} \;\propto\; (1-\alpha)\,\frac{\partial n(q)}{\partial q} \tag{1.7}

This result suggests that whether an increase in payment leads to quality improvement depends on the way that quality affects the demand for hospital services. This is a reasonable prediction, because hospitals receiving an increase in the fixed payment would only have an incentive to invest in quality improvement if quality improvement can attract more patients and hence increase marginal revenue. Previous studies have concluded that an increase in payment under the PPS will result in an improvement in the quality of hospital care by assuming that \partial n(q)/\partial q is positive (Feldstein, 1977, Chalkley and Malcomson, 2000, Hodgkin and McGuire, 1994). However, this assumption cannot be taken for granted because, unlike the market for typical goods and services where demand is sensitive to quality, there are not many easily accessible measures of the quality of hospital care. Even if there are such measures, previous studies suggest that patients do not respond to important information on quality (Chernew and Scanlon, 1998, Haas-Wilson, 1994, Hibbard and Jewett, 1997, Mennemeyer et al., 1997). The nature of inpatient services, which often require emergency treatment, and the presence of public insurance that covers a substantial portion of the hospital bill may further inhibit patients from responding sensitively to hospital performance. Whether hospitals in fact improve the quality of their services in response to an increase in fixed payment is, therefore, an empirical question.
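The step from the first-order condition (1.5) to expression (1.7) can be made explicit with the implicit function theorem; a sketch, using the notation above:

```latex
% Let F(q,p) denote the left-hand side of the first-order condition (1.5).
% By the implicit function theorem, the optimal quality q^{*}(p) satisfies
\frac{dq^{*}}{dp} \;=\; -\,\frac{\partial F/\partial p}{\partial F/\partial q}.
% Only the term (1-\alpha)\,\frac{\partial n(q)}{\partial q}\,[p - c(q,e)]
% in (1.5) involves p directly, so
\frac{\partial F}{\partial p} \;=\; (1-\alpha)\,\frac{\partial n(q)}{\partial q}.
% Since \partial F/\partial q < 0 at an interior maximum (second-order condition),
\operatorname{sign}\!\left(\frac{dq^{*}}{dp}\right)
\;=\; \operatorname{sign}\!\left((1-\alpha)\,\frac{\partial n(q)}{\partial q}\right),
% i.e., quality rises with the price only if demand responds positively
% to quality, which is expression (1.7).
```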
1.4 Empirical Analysis

1.4.1 Data

The primary data source for this study is 20% Medicare inpatient claims from 2003 through 2013. This data set includes detailed information on the hospital services used by a random sample of 20% of Medicare beneficiaries, such as the reimbursement amount paid by Medicare, dates of admission and discharge, hospital identifiers, DRG codes, and demographic information on beneficiaries. I define each year of the data following Medicare's Fiscal Year (FY), which begins on October 1st of the preceding year and ends the following September 30th (e.g., FY 2004: Oct 1, 2003–Sep 30, 2004). Only the claims submitted by hospitals under the Medicare PPS are included, dropping those submitted by skilled nursing facilities, home health agencies, and PPS-exempt hospitals, as well as claims primarily paid by other agencies. The DRG codes of individual claims are matched to their weights using the annual tables of DRG weights published by the CMS in the Federal Register. 4 This study relies on three other sources released by the CMS. First, to map each MSA to the corresponding CBSA, I use the MSA-CBSA crosswalk file. 5 Second, since the DRG system was replaced by the new Medicare Severity DRG (MS-DRG) system in FY 2008, introducing a three-tiered payment system based on severity, another crosswalk file is used to translate MS-DRGs into the older DRGs (version 24).
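The fiscal-year convention above is easy to get wrong when working with claim dates, so a small helper may clarify it. This is a minimal sketch (the function name is mine): a Medicare FY is labeled by the calendar year in which it ends.

```python
from datetime import date

def medicare_fiscal_year(d: date) -> int:
    """Map a calendar date to the Medicare Fiscal Year it falls in.

    The FY is labeled by its ending calendar year: FY 2004 runs from
    Oct 1, 2003 through Sep 30, 2004. Helper name is illustrative.
    """
    return d.year + 1 if d.month >= 10 else d.year

# Example from the text: FY 2004 spans Oct 1, 2003 through Sep 30, 2004.
```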
4 https://www.federalregister.gov
5 https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/AcuteInpatientPPS/Acute-Inpatient-Files-for-Download-Items/CMS022637.html
6 https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/AcuteInpatientPPS/Acute-Inpatient-Files-for-Download-Items/CMS1247844.html

Finally, provider-specific information such as the wage index for Medicare reimbursement, geographic location, number of beds, resident-to-bed ratio, Disproportionate Share (DSH) patient percentage, Medicare share of total inpatient days, hospital size, and ownership status is drawn from the Historical Impact Files (summary statistics at the individual hospital level are available in Table A.1 in the Appendix). 7 The area-level controls are obtained from the Area Health Resources Files. 8

The area in this study refers to every possible combination of MSA and CBSA. That is, if some region was previously divided into two MSAs but is now split into three CBSAs without overlapping borders, this region will be part of four distinct "areas" within which the average hospital wage index is calculated. This is a natural unit of price shocks because one definition is not a perfect subset of the other; there are in total 508 such units. 9 The individual claims data are aggregated at this area (i.e., MSA×CBSA) level and span FY 2003 to FY 2013, which enables the investigation of long-term as well as transitory effects. 10 After eliminating observations with missing values, the final sample includes 5,583 complete cases. Table 1.2 divides the area-level sample into quartiles according to the change in wage index between 2004 and 2008, and presents the summary statistics for the top (4th) and bottom (1st) quartiles at baseline.
The areas in the bottom quartile had a decrease in wage index of 0.06 on average, while those in the top quartile had an increase of 0.09; this gap may explain the drop in the average payment for the bottom quartile during the same period, which is almost ten times greater than that for the top quartile. The bottom quartile includes more urban areas, hospitals, active doctors, and hospital admissions, and larger populations, probably because the decrease in wage index mainly occurred in the areas that shifted from urban to rural (see Figure 1.2). The two samples are balanced on all other observables at baseline, indicating that controlling for the urban-rural differences is the key challenge in the empirical analysis.

7 https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/AcuteInpatientPPS/Historical-Impact-Files-for-FY-1994-through-Present.html
8 http://ahrf.hrsa.gov
9 Three hundred seventy-nine of the areas were urban (75%) and 129 were rural (25%) in 2004; 30 areas shifted from urban to rural and 83 of the rural areas became urban after the CBSA implementation.
10 Some important changes occurred during the study window, including the introduction of the MS-DRG (2008) and the Medicare prescription drug benefit (Part D) (2006), which has been shown to have an offsetting effect on the use of inpatient services. The recession also started in December 2007. But all these changes occurred at the national level, and thus their effects will be absorbed by year fixed effects.

1.4.2 Empirical Approach

I first examine the immediate consequence of the price shock by including the log of the average per-claim payment as an outcome variable.
I then proceed with four other outcome variables, each representing a different domain of hospital care: 1) the log of the total number of admissions, which captures overall quantities provided; 2) the log of the average length of stay (LOS), which measures the average resource use or cost of providing care; and 3) average 30-day readmission rates and 4) 30-day mortality rates, both of which are widely used measures of the quality of hospital services. These measures, which correspond to n(q), e, and q in the objective function presented in Section 3, are neither perfect nor mutually exclusive; for instance, services of good quality can reduce LOS, but at the same time, hospitals can attempt to reduce LOS, sacrificing the quality of care, to cut costs. Still, all these measures have been used extensively in the health economics literature (Acemoglu and Finkelstein, 2008, Chandra et al., 2016, Dafny, 2005, Chalkley and Malcomson, 2000), and this study assumes that they are among the best available measures of each domain. The average outcomes (per-claim payment, LOS, readmission and mortality rates) are weighted by the number of claims for different hospitals within the area. When computing 30-day readmission rates, this study adopts the rubric of the CMS and applies the following exclusion criteria: cases in which the patient left against medical advice or discontinued care, was discharged/transferred to another institution for inpatient care, or was discharged/transferred with a planned acute care hospital readmission. These restrictions help to remove cases of rehospitalization that cannot be attributed to the quality of services provided by a particular hospital. For each outcome variable, I estimate the following difference-in-differences model:

Y_{it} = \alpha + \sum_{f(t)\neq ref} \beta_{f(t)}\, \Delta WI_{08\text{-}04,i} \times Time_{f(t)} + \eta' X_{it} + \mu_i + \delta_t + \varepsilon_{it} \tag{1.8}

where \Delta WI_{08\text{-}04,i} is the average hospital wage index in 2008 minus the average hospital wage index in 2004 in area i.
11 Adjusted by the Consumer Price Index for hospital services for urban consumers, in 2003 dollars.

This time-constant variable captures the price shock induced by the change in geographic definitions, and is interacted with categorical time variables specified in two ways: 1) a full set of year dummy variables, to examine year-specific effects; and 2) three dichotomous variables indicating the "reference period" (2003–4), "transition period" (2005–7), and "post-treatment period" (2008–13), to examine the average effects over those periods. This identification strategy is similar to that of Clemens and Gottlieb (2014), who estimate physician supply elasticities using the 1997 Medicare payment area consolidation, which caused exogenous variation in physician payment. X_{it} is a vector of time-varying area-level characteristics including population size, MDs per capita, and the shares of older people (aged 65+) and people in poverty. I also add average patient characteristics (severity captured by the Charlson Comorbidity Index (Charlson et al., 1987), and the proportions of age, gender, and racial groups) drawn from the claims data. 12 The area fixed effects (\mu_i) and year fixed effects (\delta_t) control for time-invariant area-level characteristics and common time trends, respectively. The underlying assumption of this specification is that the wage index changes due to the geographic shift are not correlated with any pre-existing trend in hospital care. The "area", the unit of analysis in this study, is a simple boundary generated by the administrative change. It therefore does not coincide precisely with any other geographic units such as the Hospital Referral Regions (HRR) or the Health Service Areas (HSA)—which delineate local hospital markets in the United States (Ho and Hamilton, 2000, Chandra et al., 2016)—nor with the MSA or CBSA per se. Thus, the possibility that these areas show a systematically different pattern in hospital care in the pre-treatment years may not be large.
There are potential concerns, however, that the areas with an increase in wage index are more likely to be rural, while the areas whose wage index decreased are more likely to be urban, as implied by Figure 1.2. Figure 1.1 also suggests that the changes in wage index occurred differently across states. To address this issue, I additionally control for urban-by-year effects and state-by-year effects. Despite its strength in capturing the geographic shocks and addressing the pre-trend assumption, aggregating data at the area level does not allow the use of information at the hospital level, which could be useful in understanding the observed outcomes. For instance, hospital responses can differ by the proportion of Medicare patients, because it captures how important Medicare patients are to the providers. I examine such heterogeneous responses of hospitals by estimating equation (8) at the individual hospital level, while controlling for an extensive set of hospital-level characteristics, and then by running separate regressions on the hospital-year level data split at the median value of the Medicare patient share. 13 Another potential concern with the current research design is that some hospitals were reclassified or exempted from the policy change, and their presence in the data can confound the results. This study therefore conducts the same analysis without these hospitals as a robustness check, and finds results consistent with the main findings. A detailed description of these reclassified/exempted cases is available in the Appendix. All standard errors are clustered at the MSA×CBSA level.

12 All averages including the wage index variable are weighted by the number of claims for different hospitals within the area. If one or two of the data points are missing (number of MDs (2009), race (2007), older population and people in poverty (2003–4)), they are linearly interpolated.
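The logic of equation (1.8) can be illustrated with a stylized simulation: a time-constant, continuous shock interacted with a post-period indicator, estimated after differencing out area fixed effects. Everything below is synthetic and illustrative (variable names, parameter values, and the true effect of 0.5 are all assumptions for the sketch, not estimates from the paper).

```python
import random

random.seed(0)

# Synthetic area panel: one pre (e.g., 2004) and one post (e.g., 2008)
# observation per area. Each area has a continuous price shock `dwi`;
# the true post-period effect of dwi on log payment is set to 0.5.
areas = range(200)
dwi = {i: random.uniform(-0.1, 0.1) for i in areas}
area_fe = {i: random.gauss(8.7, 0.2) for i in areas}

def outcome(i, post):
    year_effect = 0.05 if post else 0.0          # common time trend
    treatment = 0.5 * dwi[i] if post else 0.0    # effect of the shock
    return area_fe[i] + year_effect + treatment

# First-differencing removes the area fixed effect; the common year
# effect becomes the intercept. OLS slope = cov(x, dy) / var(x).
x = [dwi[i] for i in areas]
dy = [outcome(i, True) - outcome(i, False) for i in areas]
mx, my = sum(x) / len(x), sum(dy) / len(dy)
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, dy)) / \
        sum((xi - mx) ** 2 for xi in x)
# with no noise added, the slope recovers the true effect of 0.5
```

The paper's specification generalizes this two-period version to a full event study with year-specific interactions, covariates, and clustered standard errors, but the identifying variation is the same.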
1.4.3 Results

Figure 1.3 shows the year-specific effects of the wage index changes on the average payment amount. The year 2004 is used as the omitted reference, so all estimates are relative to that year. The estimated coefficients for years prior to 2004 are not significantly different from those in 2004, providing evidence for the assumption that the wage index changes are not correlated with a pre-existing trend in hospital payment levels. This figure illustrates that an increase in wage index significantly increases the average payment amount in the 2005–8 window, peaking in 2008 (when the hold harmless program ended), and the higher payment level is sustained throughout the post-treatment years. On the other hand, the estimates for the other four outcome variables (i.e., admission volume, LOS, readmission and mortality rates) are not statistically significant and do not show any systematic pattern (Figure 1.4). 14 I subsequently estimate equation (8) with variables indicating the transition and post-treatment periods instead of a full set of year dummies (Table 1.3). The results in Panel A show that a wage index increase of 0.1 is associated with a 3.2% increase in the average payment amount during the transition period and a 4.7% increase during the post-treatment period. All estimates in the other models are not statistically significant or, at best, borderline significant. These patterns are largely consistent with the results from the hospital-level analysis (entire hospital sample in Panel B and subsamples by the Medicare patient share in Panels B-I and B-II).

13 This information is measured by Medicare days as a percentage of total inpatient days in the Historical Impact Files. The median value is 0.51.
14 The results do not change substantially with or without a subset of control variables and fixed effects.
In Panel B, except for the coefficient for the log of admissions during the post-treatment period and that for the 30-day readmission rates during the transition period, the only significant estimates are found for the payment outcome. The magnitudes of the coefficients are somewhat larger, indicating that a 0.1 increase in wage index is associated with increases in payment of 3.5% and 5.1% during the transition and post-treatment periods, respectively.

1.5 Other Behavioral Responses

1.5.1 Heterogeneous Response by Product Line

The results in the previous section suggest that increasing the overall hospital payment does not lead to changes in the quantity and quality of services. Because the data are aggregated at the area level, however, the results do not speak to the possibility of different responses by product line. In the Medicare PPS setting, in particular, both the wage index and DRG weights are multiplicative in the DRG payment formula (see equations (1)–(3)), so the effects of the changes in wage index can be amplified by DRG weights, and the resulting responses of hospitals can vary across DRGs. Panel A of Table 1.4 summarizes the distribution of the DRG weights by whether the DRG is medical or surgical (determined by whether a patient had an operating room (OR) procedure) and whether the DRG is included in the top 50% when ranked by weight (i.e., high vs. low). There is large variation across DRGs and within groups over time; in general, surgical DRGs have higher weights and a steeper slope of changes between the pre- and post-treatment periods.
To examine whether hospital responses to price shocks differ by the relative profitability of DRGs, I aggregate the data at the area×DRG level and add another dimension, the DRG weight, in the following specification: 15

Y_{ikt} = \alpha + \sum_{f(t)\neq ref} \theta_{f(t)}\, \Delta WI_{08\text{-}04,i} \times Time_{f(t)} \times DRGweight_{kt} + \sum_{f(t)\neq ref} \beta_{f(t)}\, \Delta WI_{08\text{-}04,i} \times Time_{f(t)} + \sum_{f(t)\neq ref} \kappa_{f(t)}\, Time_{f(t)} \times DRGweight_{kt} + \pi\,(\Delta WI_{08\text{-}04,i} \times DRGweight_{kt}) + \phi\, DRGweight_{kt} + \eta' X_{ikt} + \mu_i + \xi_k + \delta_t + \varepsilon_{ikt} \tag{1.9}

Y_{ikt} is the same set of hospital outcomes for DRG k in area i in year t. The coefficient on the triple interaction term is the difference-in-differences-in-differences estimate; identification requires that there be no other shock during the study period that differentially changed the outcomes of the higher-paying DRGs in the affected areas (where the overall price increased as a result of the geographic change). This condition is likely to hold, given that the affected and unaffected areas in this study are defined by arbitrary borders generated by an administrative shift. Figure 1.5 presents the results on the number of admissions. 16 The sample is divided by the two DRG categories (i.e., surgical vs. medical), and split again by whether a patient was admitted through the emergency room ("emergency" admissions) or the admission permitted adequate time to schedule treatment ("elective" admissions). Elective admissions in the affected areas significantly increase, relative to baseline, in the surgical category, whose weights are on average higher than those of the medical category at any given time. Estimates for emergency admissions are not statistically significant, which is intuitively appealing since they should be less subject to discretion, and the estimates for the medical DRGs—both emergency and elective—are not statistically significant.
The most frequently observed codes for elective surgical admissions include foot procedures, intraocular procedures, chest procedures, orbital procedures, cardiac pacemaker device replacement, joint replacement, extensive OR procedures unrelated to principal diagnosis, and so forth. These procedures are in general more expensive than the average DRG, and hospitals could respond to the incentive of admitting more patients who can be categorized into these higher-paying DRGs, which is referred to as "cream-skimming" in the literature (Hurley, 2000, Chalkley and Malcomson, 2000). In addition, hospitals could lower the threshold for conducting certain procedures. For example, while cataract extraction is an outpatient procedure, it can be billed under intraocular procedures (MS-DRG 116 or 117, depending on the presence of complications) if a patient is given an injection of healon during the surgery (Ruther and Black, 1987). The marginal gain from such an additional procedure is likely to be small if the treatment decision is mainly induced by financial incentives.

15 The DRG weight is set nationally and has no geographic variation.
16 Estimates for other outcomes are not statistically significant. The full regression results are available upon request.

1.5.2 Upcoding

Another possible response is that hospitals simply change their coding practices by shifting patients to more profitable DRG codes, a phenomenon called "upcoding" (Dafny, 2005, Januleviciute et al., 2016, Barros and Braun, 2017). To examine this possibility, I closely follow Dafny (2005), who exploits the "paired" grouping system of the DRG codes. Prior to the MS-DRG system, about 59% of the DRGs were paired such that they have the same primary diagnosis but are split by the presence of a complication or comorbidity (CC). The MS-DRG system introduced in 2008 retained this structure but added one more severity tier, called a major complication or comorbidity (MCC).
The shift from the code without CC (henceforth "bottom code") to the code with CC ("top code") requires an additional diagnosis included in the CC/MCC list, and often generates a large increase in reimbursement due to the difference in DRG weights between the two codes. Thus, if the changes in the pairwise difference of DRG weights largely explain the changes in the fraction of top/bottom codes, that would be an indicator of hospital upcoding behavior, assuming that patient conditions are fairly constant over time. I first translate all MS-DRG codes into the previous two-tiered version and aggregate the data at the area×DRG-pair level. The "spread" variable is then defined as the difference in DRG weights between the top and bottom codes of each DRG pair, j, as follows:

Spread_{jt} = DRG\ weight\ in\ top\ code_{jt} - DRG\ weight\ in\ bottom\ code_{jt} \tag{1.10}

Panel B of Table 1.4 presents the distribution of the DRG spread; as with the DRG weight, surgical DRGs tend to have a larger spread at any given time, and this gap appears to have grown in the post-treatment period. I use this variation in the spread variable in the following difference-in-differences-in-differences framework:

fraction_{ijt} = \alpha + \sum_{f(t)\neq ref} \zeta_{f(t)}\, \Delta WI_{08\text{-}04,i} \times Time_{f(t)} \times Spread_{jt} + \sum_{f(t)\neq ref} \beta_{f(t)}\, \Delta WI_{08\text{-}04,i} \times Time_{f(t)} + \sum_{f(t)\neq ref} \gamma_{f(t)}\, Time_{f(t)} \times Spread_{jt} + \sigma\,(\Delta WI_{08\text{-}04,i} \times Spread_{jt}) + \psi\, Spread_{jt} + \eta' X_{ijt} + \mu_i + \rho_j + \delta_t + \varepsilon_{ijt} \tag{1.11}

Two variants of the specification are considered: 1) differential time trends across DRG pairs are controlled for, in order to rule out any pair-specific trends that could be driven by demand-side factors, coding guidelines, or technological developments; 17 and 2) the continuous spread variable is categorized as a dichotomous variable (high vs. low), using the median value as the cut-off point.
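The two building blocks of this analysis, the pairwise weight spread of equation (1.10) and the fraction of claims billed under the top code (the outcome in equation (1.11)), can be sketched in a few lines. The pair name, the bottom-code weight, and the claim records below are hypothetical placeholders, not values from the paper (only the 1.2505 top-code weight echoes Table 1.1).

```python
# Hypothetical pair-level weight table: each DRG pair has a top code
# (with CC) and a bottom code (without CC). The bottom-code weight
# here is invented for illustration.
drg_weights = {
    ("pneumonia", "top"): 1.2505,
    ("pneumonia", "bottom"): 0.8398,
}

def spread(pair):
    """Spread_jt: top-code weight minus bottom-code weight (eq. 1.10)."""
    return drg_weights[(pair, "top")] - drg_weights[(pair, "bottom")]

def top_code_fraction(claims, pair):
    """Share of a pair's claims billed under the top (with-CC) code."""
    pair_claims = [c for c in claims if c["pair"] == pair]
    top = sum(1 for c in pair_claims if c["tier"] == "top")
    return top / len(pair_claims)

# Hypothetical claim records for one area-year cell:
claims = [
    {"pair": "pneumonia", "tier": "top"},
    {"pair": "pneumonia", "tier": "top"},
    {"pair": "pneumonia", "tier": "bottom"},
    {"pair": "pneumonia", "tier": "bottom"},
]
```

In the paper, these two quantities are computed at the area×DRG-pair×year level, and the regression asks whether the top-code fraction rises more, after the price shock, in pairs where the spread makes upcoding more lucrative.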
The results in Panel A of Table 1.5 suggest that, when all DRGs are considered, a 0.1 increase in wage index is associated with a 1.4 percentage point increase in the fraction of top codes in the post-treatment period among the DRG pairs with a relatively large spread. Similarly, estimates in Panel B indicate that the affected areas are 1.2 percentage points more likely to place patients into the top code after the geographic adjustment, if doing so creates a larger margin. The year-specific effects presented in Figure 1.6 largely support this finding, where the coefficients jump in 2008 and remain statistically significant afterwards, in both the continuous and dichotomous specifications. The remaining columns of Table 1.5 show separate results for medical and surgical DRGs. In Panel A, the coefficient for surgical DRGs in the post-treatment period is statistically significant, and its magnitude is fairly close to the case for all DRGs, implying that the evidence of upcoding shown in the first column may have been driven by hospital responses in surgical DRGs. This result is consistent with recent evidence by Januleviciute et al. (2016), which demonstrates that the upcoding response is more pronounced among surgical DRGs. The corresponding estimate in Panel B is also similar and statistically significant.

17 As the results remain qualitatively unchanged, I only present those with time trends.

1.6 Conclusion

This study examines how hospitals respond to payment increases by exploiting a natural experiment that randomly altered the generosity of hospital payment rates. The results suggest that providing higher payment does not lead to an improvement in various aspects of hospital services. On the contrary, I find evidence that hospitals facing a price increase are more liable to the perverse incentives that the prospective payment system is known to encourage, namely, selecting or shifting patients into higher-paying diagnosis codes (Chen and Goldman, 2016).
By contrast, previous theoretical work (Chalkley and Malcomson, 2000, Hodgkin and McGuire, 1994) predicts that an increase in fixed payment under the PPS can lead to quality improvement, provided that the improvement in quality is followed by increased demand. There has been little empirical research testing this prediction, however, mainly because it is difficult to find exogenous variation in payment levels across hospitals. This study fills the gap by using the change in the definition of payment areas under the Medicare PPS, which generated substantial area-specific price shocks. The findings suggest that in the case of hospital services, paying a higher price for a given service may not induce suppliers to offer services of better quality. The result that higher payment rates increase the treatment volume and coding frequency only of more lucrative DRGs suggests that a policy designed to provide financial incentives through higher prices can instead prompt even higher payments through other behavioral responses. A recent study by Cooper et al. (2017) examines a similar question, with a different identification strategy and a different set of outcome variables. Specifically, the authors explore the responses of hospitals that became entitled to a Section 508 waiver as a consequence of political dynamics in the United States, and find that hospitals increase total discharges, invest in new technology, raise CEO pay, and hire more nurses in response to payment increases, without any subsequent improvement in mortality rates or length of stay. Combined, these results question the effectiveness of policies that attempt to influence the quality of hospital care by providing higher reimbursement. The findings should be understood in light of a few study limitations.
First, the shift from MSA to CBSA occurred not only in Medicare inpatient services but also in other Medicare-covered services including skilled nursing facilities, long-term care hospitals, and inpatient rehabilitation facilities. Thus, if these services are complements to or substitutes for hospital care, the results presented in this study must be interpreted as the combined effects of the responses of various providers. Second, the analysis is limited by the availability of variables in Medicare claims data. Third, the data cover a two-year pre-treatment period, which may not be sufficiently long to examine the pre-trend assumption. Despite these limitations, the current study contributes to the literature by providing causal evidence on the impact of higher payment on hospital performance under the prospective payment system. A related body of literature examines the effects of payment cuts on the quality of hospital services and presents mixed evidence (Shen and Wu, 2013, Seshamani et al., 2006, Wu and Shen, 2014, Shen, 2003). Those findings, along with the results of this study, suggest that the association between payment levels and the quality of services may not be strong. On the other hand, evidence of DRG-specific volume effects and upcoding has been consistently documented in existing studies (Januleviciute et al., 2016, Dafny, 2005, Silverman and Skinner, 2004, Barros and Braun, 2017). This study adds to this body of literature by exploiting area-specific price shocks and thereby isolating the effects of windfall price increases on hospital behavior.
Now that the PPS has been increasingly adopted in other service areas of Medicare (skilled nursing facilities since 1998; long-term care hospitals and inpatient rehabilitation facilities since 2002) and expanded to "bundle" services offered by multiple providers involved in a single episode of care, the findings of this study help to answer a question of growing importance: the impact of payment generosity on hospital performance under the PPS.

Figures

Figure 1.1: Switch from MSA to CBSA
Notes: This map shows the nationwide changes in the borders of the Medicare payment areas and their urban/rural status shifts following the switch from the Metropolitan Statistical Areas (MSA) to the Core-Based Statistical Areas (CBSA) in 2005.

Figure 1.2: Percentage Changes of Wage Index. Panels: (a) Urban sample; (b) Rural sample.
Notes: Each box stretches from the 25th percentile to the 75th percentile, the median is shown across the box, and the horizontal strokes at the ends of the vertical lines represent the upper and lower adjacent values (extreme outliers are not plotted). The unit of analysis in this figure is individual hospitals. Switched hospitals in the urban (rural) sample indicate those that were urban (rural) in 2004 but were reclassified as rural (urban) in 2008 (2005). There are 92 switchers and 1,796 stayers in the urban sample, and 71 switchers and 718 stayers in the rural sample.

Figure 1.3: Year-specific Effects of Changes in Wage Index on Log of Per-claim Payment
[Event-study plot; x-axis: year (2003–2013), y-axis: changes relative to 2004.]
Notes: This figure presents coefficients and associated 95% confidence intervals for wage index changes (2004–8), interacted with year dummy variables, from the regression at the payment area level.
All models control for average patient characteristics (severity captured by the Charlson Comorbidity Index; proportions of age, gender, and racial groups), population size, per capita MDs, shares of older people (aged 65+) and people in poverty, area/year fixed effects, and state- and urban-specific time trends. Robust standard errors are clustered at the MSA×CBSA level.

Figure 1.4: Year-specific Effects of Changes in Wage Index on Hospital Outcomes at Area Level. Panels: (a) log of number of admissions; (b) log of length of stay; (c) 30-day readmission rates; (d) 30-day mortality rates.
[Event-study plots; x-axis: year (2003–2013), y-axis: changes relative to 2004.]
Notes: This figure presents coefficients and associated 95% confidence intervals for wage index changes (2004–8) interacted with year dummy variables, from the regression at the payment area level. All models control for average patient characteristics (severity captured by the Charlson Comorbidity Index; proportions of age, gender, and racial groups), population size, per capita MDs, shares of older people (aged 65+) and people in poverty, area/year fixed effects, and state- and urban-specific time trends. Robust standard errors are clustered at the MSA×CBSA level.
Figure 1.5: Year-specific Effects of Changes in Wage Index on Admission Volume at Area-DRG Level. Panels: (a) log of elective admissions, surgical DRGs; (b) log of elective admissions, medical DRGs; (c) log of emergency admissions, surgical DRGs; (d) log of emergency admissions, medical DRGs.
[Event-study plots; x-axis: year (2003–2013), y-axis: changes relative to 2004.]
Notes: This figure presents coefficients and associated 95% confidence intervals for a triple interaction term between wage index changes (2004–8), year dummy variables, and DRG weights, from the regression at the area-DRG level. All models control for average patient characteristics (severity captured by the Charlson Comorbidity Index; proportions of age, gender, and racial groups), population size, per capita MDs, shares of older people (aged 65+) and people in poverty, area/DRG/year fixed effects, and state- and urban-specific time trends. Robust standard errors are clustered at the MSA×CBSA level.

Figure 1.6: Year-specific Effects of Changes in Wage Index on the Fraction of Top Codes at Area-DRG-Pair Level. Panels: (a) with a continuous DRG spread variable; (b) with a dichotomous DRG spread variable.
[Event-study plots; x-axis: year (2003–2013), y-axis: changes relative to 2004.]
Notes: This figure presents coefficients and associated 95% confidence intervals for a triple interaction term between wage index changes (2004–8), year dummy variables, and DRG spread (defined as the difference in DRG weights between the top and bottom codes of each pair), from the regression at the area-DRG-pair level.
All models control for average patient characteristics (severity captured by the Charlson Comorbidity Index; proportions of age, gender, and racial groups), population size, per capita MDs, shares of older people (aged 65+) and people in poverty, area/DRG pair/year fixed effects, and state-, pair-, and urban-specific time trends. Robust standard errors are clustered at the MSA-CBSA level.

Tables

Table 1.1: Examples of Hospital Reimbursement Before and After Changes in Medicare Geography

                                              Ukiah Valley Medical Center, CA    Kane County Hospital, UT
                                              (ΔWI = +41.9%)                     (ΔWI = -27.5%)
                                              FY04         FY08                  FY04         FY08
National Base Operating Payment               $4,353.81    $4,963.64             $4,353.81    $4,963.64
x Labor-related share                         0.711        0.697                 0.711        0.697
= Labor-related portion                       $3,095.27    $3,459.66             $3,095.27    $3,459.66
x Wage index                                  0.9967       1.4147                1.1333       0.8215
= Wage-adjusted labor-related portion         $3,085.06    $4,894.38             $3,507.87    $2,842.11
+ Non-labor-related portion                   $1,258.54    $1,503.98             $1,258.54    $1,503.98
= Labor-adjusted standardized amount          $4,343.60    $6,398.36             $4,766.41    $4,346.09
(1) DRG weight, pneumonia w/ complications
    (DRG 89)                                  1.0463       1.2505                1.0463       1.2505
    Final Operating Payment                   $4,544.71    $8,001.15             $4,987.09    $5,434.79
(2) DRG weight, chest pain (DRG 143)          0.548        0.5489                0.548        0.5489
    Final Operating Payment                   $2,380.29    $3,512.06             $2,611.99    $2,385.57

Notes: This table shows how the DRG payment for the two most common DRGs (i.e., pneumonia and chest pain) can evolve differently for two hospitals at the extremes of the wage index changes between 2004 and 2008.

Table 1.2: Summary Statistics at Baseline (2004), by Quartiles of Wage Index Changes

                                   1st quartile             4th quartile
                                   Mean (Std.Dev.)  Obs     Mean (Std.Dev.)  Obs
Changes in key variables (2004-8)
  Wage index                       -0.06 (0.04)     126     0.09 (0.06)      126
  Payment per claim ($)†           -718.85 (548.45) 126     -79.21 (633.52)  126
Outcome variables
  Payment per claim ($)            6789.37 (1660.36) 126    6632.70 (1891.09) 126
  Total admissions                 3914.07 (7589.73) 126    2492.87 (3129.09) 126
  Length of stay                   5.06 (0.74)      126     5.12 (0.92)      126
  30-day readmission rate          0.20 (0.04)      126     0.20 (0.04)      126
  30-day mortality rate            0.10 (0.02)      126     0.11 (0.02)      126
Patient characteristics
  Charlson Comorbidity Index       1.59 (0.17)      126     1.58 (0.16)      126
  65 ≤ Age < 70                    0.16 (0.04)      126     0.15 (0.03)      126
  70 ≤ Age < 75                    0.20 (0.03)      126     0.18 (0.03)      126
  75 ≤ Age < 80                    0.22 (0.03)      126     0.21 (0.02)      126
  80 ≤ Age < 85                    0.20 (0.03)      126     0.21 (0.03)      126
  85 ≤ Age < 90                    0.14 (0.03)      126     0.14 (0.03)      126
  90 ≤ Age < 95                    0.07 (0.02)      126     0.07 (0.02)      126
  Age ≥ 95                         0.02 (0.01)      126     0.02 (0.01)      126
  White                            0.90 (0.09)      126     0.86 (0.13)      126
  Black                            0.07 (0.09)      126     0.08 (0.12)      126
  Other race                       0.02 (0.03)      126     0.06 (0.09)      126
  Female                           0.58 (0.04)      126     0.59 (0.05)      126
Area-level characteristics
  Number of urban areas            100 (-)          126     88 (-)           126
  Number of hospitals per area     6.70 (11.21)     126     4.84 (5.73)      126
  Population (x 1,000)             526.28 (1203.83) 126     413.39 (583.48)  126
  MDs per 1,000 population         2.72 (3.23)      126     2.01 (1.27)      126
  Share of people aged 65+         0.13 (0.03)      126     0.12 (0.03)      126
  Share of people in poverty       0.14 (0.05)      126     0.14 (0.07)      126

Notes: † Adjusted by the Consumer Price Index for hospital services for urban consumers, in 2003 dollars. 20% Medicare inpatient claims data are aggregated at the payment area level, and all sample means are weighted by the number of claims for different hospitals within the area.

Table 1.3: Effects of Changes in Wage Index on Hospital Outcomes

                           (1)           (2)            (3)        (4)          (5)
                           ln(payment)   ln(admission)  ln(LOS)    readmission  mortality
                           mean=$6,246   mean=3,384     mean=4.97  mean=0.20    mean=0.10

Panel A.
Area-year level
  ΔWI(04-08) x transition  0.321*** (0.0950)  -0.106 (0.272)   0.0567 (0.0610)   -0.0157 (0.0214)   -0.0119 (0.0149)
  ΔWI(04-08) x post        0.471*** (0.101)   -0.0630 (0.338)  0.0949 (0.0784)   -0.0196 (0.0286)   -0.0273 (0.0179)
  Observations             5,583 (all columns)
  Adjusted R-squared       0.938   0.988   0.875   0.746   0.464

Panel B. Hospital-year level
  ΔWI(04-08) x transition  0.352*** (0.0436)  0.0459 (0.0811)  0.0283 (0.0452)   -0.0244* (0.0142)  -0.00592 (0.00958)
  ΔWI(04-08) x post        0.506*** (0.0597)  0.266* (0.138)   -0.0212 (0.0535)  -0.0101 (0.0161)   -0.0149 (0.0106)
  Observations             33,902 (all columns)
  Adjusted R-squared       0.922   0.951   0.824   0.589   0.420

Panel B-I. Hospital-year level (high Medicare share hospitals only†)
  ΔWI(04-08) x transition  0.323*** (0.0549)  -0.0493 (0.0896) 0.0119 (0.0572)   -0.0257 (0.0187)   0.00308 (0.0136)
  ΔWI(04-08) x post        0.452*** (0.0691)  0.173 (0.165)    -0.0311 (0.0720)  0.00334 (0.0191)   -0.00469 (0.0160)
  Observations             16,886 (all columns)
  Adjusted R-squared       0.911   0.954   0.818   0.580   0.344

Panel B-II. Hospital-year level (low Medicare share hospitals only)
  ΔWI(04-08) x transition  0.347*** (0.0676)  0.163 (0.141)    0.0374 (0.0569)   -0.0172 (0.0246)   -0.0162 (0.0151)
  ΔWI(04-08) x post        0.524*** (0.0972)  0.364 (0.226)    -0.0300 (0.0751)  -0.0310 (0.0284)   -0.0236* (0.0132)
  Observations             17,016 (all columns)
  Adjusted R-squared       0.915   0.954   0.836   0.608   0.494

Notes: † High (low) Medicare share hospitals are hospitals with a Medicare share of total inpatient days above (below) the median. Outcomes are (1) log of the average per-claim payment, (2) log of the number of total admissions, (3) log of the average length of stay, (4) average 30-day readmission rates, and (5) average 30-day mortality rates. The average outcomes are weighted by the number of claims for different DRGs in each hospital within the area in panel A, and for different DRGs within the hospital in panel B.
The reference period is years 2003-4, the "transition" period refers to years 2005-7 (when the CBSA was phased in), and the "post" period refers to years 2008-13. Models in panel A control for average patient characteristics (Charlson Comorbidity Index; proportions of age, gender, and racial groups), population size, per capita MDs, shares of older people (aged 65+) and people in poverty, area/year fixed effects, and state- and urban-specific time trends; models in panel B control for the same set of patient characteristics, the Herfindahl-Hirschman index, resident-to-bed ratio, Disproportionate Share (DSH) patient percentage, Medicare share of total inpatient days, hospital size, ownership, hospital/year fixed effects, and state- and urban-specific time trends. Robust standard errors are clustered at the MSA-CBSA level. *** p<0.01, ** p<0.05, * p<0.1

Table 1.4: Distributions of DRG Weight and Spread

Panel A. Average DRG weight (std. dev.)
                                  Medical, high  Medical, low  Surgical, high  Surgical, low
  Reference period (2003-4)       1.99 (1.97)    0.58 (0.21)   2.62 (2.39)     0.68 (0.27)
  Transition period (2005-7)      2.19 (2.13)    0.58 (0.25)   2.66 (2.46)     0.64 (0.29)
  Post-treatment period (2008-13) 1.66 (0.79)    0.77 (0.16)   4.35 (2.83)     1.41 (0.38)

Panel B. Average DRG spread (std. dev.)
                                  Medical, high  Medical, low  Surgical, high  Surgical, low
  Reference period (2003-4)       0.90 (0.53)    0.30 (0.10)   1.15 (0.52)     0.36 (0.13)
  Transition period (2005-7)      0.95 (0.47)    0.30 (0.12)   1.12 (0.47)     0.35 (0.14)
  Post-treatment period (2008-13) 0.90 (0.32)    0.45 (0.12)   2.28 (0.60)     0.92 (0.37)

Notes: The high (low) group includes cases above (below) the median.

Table 1.5: Effects of Changes in Wage Index on the Fraction of Top Codes at Area-DRG-Pair Level

                                All DRGs    Medical DRGs   Surgical DRGs
                                mean=0.48   mean=0.51      mean=0.45

Panel A.
With a continuous DRG spread variable
  ΔWI(04-08) x transition x spread   0.0628* (0.0334)    -0.0444 (0.118)    0.0551 (0.0379)
  ΔWI(04-08) x post x spread         0.135*** (0.0474)   0.117 (0.109)      0.128** (0.0506)
  Observations                       292,851    164,421    128,430
  Adjusted R-squared                 0.729      0.778      0.638

Panel B. With a dichotomous DRG spread variable
  ΔWI(04-08) x transition x I(high spread)   0.0165 (0.0284)     -0.0340 (0.0422)   0.00527 (0.0576)
  ΔWI(04-08) x post x I(high spread)         0.119*** (0.0393)   0.0611 (0.0433)    0.146* (0.0743)
  Observations                               292,851    164,421    128,430
  Adjusted R-squared                         0.729      0.778      0.638

Notes: The reference period is years 2003-4, the "transition" period refers to years 2005-7 (when the CBSA was phased in), and the "post" period refers to years 2008-13. The high spread group is the top half of DRG pairs in terms of the spread, defined as the difference in DRG weights between the top and bottom codes of each pair. All models control for average patient characteristics (Charlson Comorbidity Index; proportions of age, gender, and racial groups), population size, per capita MDs, shares of older people (aged 65+) and people in poverty, area/DRG pair/year fixed effects, and state-, pair-, and urban-specific time trends. Robust standard errors are clustered at the MSA-CBSA level. *** p<0.01, ** p<0.05, * p<0.1

Chapter 2
Physician Deterrence from Fraud, Waste, and Abuse [1]

2.1 Introduction

In 2016, over 4 billion health insurance claims were processed in the US. Although the majority of those claims likely represented health services that were administered, necessary, and effective, a small share of those claims were not only ineffective but also fraudulent. According to the Institute of Medicine, fraud, waste, and abuse in 2009 reached $750 billion (or 28% of total health care spending), and the Federal Bureau of Investigation estimated that health care fraud alone cost American taxpayers nearly $80 billion per year. Of this amount, the Department of Justice recovered only $2.5 billion.
Private insurers have implemented mechanisms such as prior authorization and utilization management to monitor and prevent fraud, waste, and abuse. However, publicly funded programs have less stringent policies in place and remain particularly susceptible to abuse. Recent efforts make clear a priority for weeding out fraudulent providers. For example, the Fraud Enforcement and Recovery Act (FERA), the Affordable Care Act (ACA), and the Small Business Jobs Act of 2010 implemented policies to deter and police fraud by developing predictive models to identify questionable providers, increasing the monetary penalty for making a false claim, and requiring that Medicare overpayments be returned within 60 days (as opposed to three years). To fund these efforts, the ACA allocated $350 million to the US Department of Health and Human Services' Health Care Fraud and Abuse Account, while the Small Business Jobs Act allocated $930 million over ten years for healthcare fraud detection.

Despite increased efforts at fighting non-compliance and improper prescribing, we lack a comprehensive understanding of how such sanctions affect the behavior of potentially fraudulent providers who have not yet been caught. In theory, increased policing should reduce the likelihood of improper billing (Becker, 1968). In the broader economics-of-crime literature, Freeman (1999) and Chalfin and McCrary (2017) note that there is considerable evidence that crime responds to police. It may follow that health care sanctions create deterrence effects similar to police crackdowns, but unlike property or violent crimes, health care crimes are significantly more difficult to detect: health care delivery is often complex, and the information asymmetry between physicians, patients, and insurers masks the ability to differentiate between necessary and unnecessary procedures.

[1] This paper was co-authored with Alice Chen and Anupam Jena.
Within health, deterrence effects have been most commonly studied by examining state-level changes in malpractice tort reform. Studies have noted that malpractice suits can deter healthcare providers from negligent behavior and incentivize the practice of defensive medicine to avoid the probability of litigation (Danzon, 2000; Kessler and McClellan, 1996; Mello and Brennan, 2001; Schwartz and Komesar, 1978; Studdert et al., 2005). However, estimates of deterrence range from zero to positive, depending on the type of tort reform (Currie and MacLeod, 2008; Kachalia and Mello, 2011; Kessler, 2011). These effects are also mitigated by the virtually universal adoption of malpractice insurance, which insulates individual providers from financial risk. With premiums set at the group level instead of the individual level, research has shown that physicians' perceptions of malpractice risk are not strongly correlated with state tort laws (Jena et al., 2011).

Physician deterrence has not been studied in the context of policing fraud and waste. We directly examine changes in physician behavior when their peers receive direct sanctions from the US Department of Health and Human Services' Office of Inspector General (HHS OIG). Established in 1976, the HHS OIG is the largest inspector general's office and is tasked with fighting waste, fraud, and abuse in Medicare and Medicaid. The OIG has the authority to exclude individuals and entities from participating in federally funded health care programs for reasons related to fraud and other offenses related to the delivery of Medicare or Medicaid services, including patient neglect, financial misconduct, and license revocation or suspension for reasons bearing on professional competence, performance, or financial integrity.
Relying on 20% Medicare claims data and a comprehensive list of excluded providers, we estimate the deterrence effects when a peer physician is excluded from participating in publicly funded health programs. We predict which physicians are likely to be fraudulent, defined by the predicted probability of exclusion, and use an event study approach to analyze changes in physician behavior post-peer-exclusion. As expected, physicians who are not likely fraudulent demonstrate no behavioral response to a peer in the organization being excluded. However, physicians who are likely fraudulent reduce total charges by 12% and total claims by 5%. These changes, which persist for several years thereafter, are due to reductions in both the average cost of a procedure billed and the number of procedures performed per patient. Deterrence effects are strongest among physicians with the highest probability of exclusion, physicians in larger organizations, physicians in hospital-based specialties (e.g., anesthesiology, diagnostic radiology, and emergency medicine), and imaging and test services. Our estimates are robust to several sensitivity analyses, including the use of an earlier window to estimate predicted exclusion probabilities and null effects with randomly assigned dates.

[2] Physicians who engage in the unlawful prescribing or dispensing of controlled substances are also excluded, but we do not focus on them in this paper as we do not examine pharmaceutical claims.

2.2 Conceptual Framework

Excluded providers receive either a mandatory exclusion, which results after a felony conviction for fraud, health crimes, or financial misconduct, or a permissive exclusion, which occurs following a misdemeanor fraud conviction. Figure 2.1 indicates the timeline from arrest to exclusion. Any deterrence effect likely begins at the start of this process: during arrests or indictments. From its inception in 2007 to 2017, the Medicare Strike Force made 3,490 indictments.
Of these, a small minority of cases (24%) were dismissed or the defendants were not found guilty. Of the remaining cases, 88% of defendants (2,331) pleaded guilty, whereas 12% (315) were convicted at trial. Exclusions are made public on the OIG website, which is updated on a monthly basis. In theory, organizations should routinely check the List of Excluded Individuals/Entities (LEIE) to ensure that their employees are not excluded. Organizations that submit a claim to a Federal health care program for a service provided by an excluded individual are subject to civil monetary penalties. These penalties extend beyond the individual provider: even if services are not separately billed, no payment can be made for diagnosis related groups when an excluded provider participates in care (e.g., manages, directs, or orders care). Thus, we expect the news of an exclusion to have spread within the excluded provider's organization. It is also possible that news of an exclusion had a broader reach, for example, spreading through referral networks or to other organizations within a provider market.

When a peer is caught for fraud, we expect no change in behavior from non-fraudulent providers, while potentially fraudulent providers reduce bad behavior due to the realization of a low-probability event: one can actually be reprimanded. Any change in charges and claims, regardless of whether it results in negative health outcomes (i.e., an inappropriate reduction in services), suggests a deterrence response. Independent of physician responses, patients may also respond by reducing visits to organizations with a fraudulent provider. In such a case, we would see a decrease in patient visits, and a likely corresponding increase in visits to competing physicians within the market area.

2.3 Data

Our analysis relies on two main datasets. First, we use 20% administrative Medicare claims data, which contain the universe of physician services reimbursed by Medicare between 2002 and 2014.
Each claims record provides a rich set of information: the exact date of service, the specific procedure provided, provider characteristics including ZIP code, specialty, and place of service, and patient characteristics including age, race, and gender. It also provides the Medicare allowed reimbursement for each service, which we convert to 2010 dollars. Importantly, the data additionally provide physician identifiers via the National Provider Identifier (NPI) and billing entities via the Taxpayer Identification Number (TIN). [3] TINs may not be a comprehensive proxy for the physician organization, as it is possible for one organization to have several TINs, but within each TIN, physicians belong to the same group practice (McWilliams et al., 2013). It is also possible for one physician to bill under several TINs, and in those cases, we define each physician's primary TIN as the TIN associated with the highest number of claims within the month.

Our second dataset provides information on physician exclusions from Medicare and state health care programs. Produced by the HHS OIG, the LEIE includes all individuals and entities excluded from participation in public insurance for reasons specified in Section 1128 of the Social Security Act. Individual physicians account for about 17% of all exclusions. The remaining exclusions consist of nurses and health care aides (44%) and other health providers (e.g., dentists, chiropractors, pharmacists) and organizations (39%). We focus only on individual physician exclusions, and match physician names to their NPIs using the National Plan and Provider Enumeration System (NPPES). This matching process led to a unique match for 95.6% of the physician sample. In addition to names, the data provide the exact date of exclusion and the exact reason for exclusion. One limitation of our data is that we do not have exact dates of indictment.
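The primary-TIN rule described above (assign each physician, in each month, the TIN under which they bill the most claims) can be sketched as follows; the tuple layout and names are ours, not the paper's:

```python
from collections import Counter, defaultdict

def primary_tin(claims):
    """Assign each physician-month its primary TIN, i.e., the TIN under
    which the physician bills the highest number of claims that month.

    claims: iterable of (npi, year_month, tin) tuples, one tuple per claim.
    Returns a dict mapping (npi, year_month) -> primary TIN.
    """
    counts = defaultdict(Counter)
    for npi, year_month, tin in claims:
        counts[(npi, year_month)][tin] += 1
    # most_common(1) returns the TIN with the largest claim count
    return {key: tins.most_common(1)[0][0] for key, tins in counts.items()}
```

For a physician billing three claims under one TIN and one under another in the same month, the first TIN would be returned as the primary TIN for that month.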
From text-based searches, we have identified dates for a subset of exclusions. As shown in Table 2.1, the month of indictment occurs about two years prior to the month of exclusion. Based on this information, we allow for "anticipatory" responses that begin around two years prior to exclusion, and we perform robustness tests using the small subset of data with additional dates to examine whether responses do indeed occur at indictment. To allow for a sufficiently long pre-period prior to the plausible date of indictment, we examine exclusions from 2005-2013.

[3] Prior to 2008, physician identifiers are coded as unique physician identification numbers (UPINs). We convert UPINs to NPIs using a National Plan and Provider Enumeration System crosswalk, constructed by Jean Roth, and claims data from 2008-2010 with providers listing both UPIN and NPI identifications.

2.4 Empirical Approach

To identify the deterrence effect of exclusions, we focus on peer responses within an organization. Our sample universe consists of physicians within TINs that have experienced an exclusion. We drop incidences where an excluded physician belonged to a solo practice or where all physicians within a group practice were excluded. For clarity, we also exclude organizations that experienced sequential exclusions over time. [4]

For physician i in quarter-year t, we first identify which physicians are likely fraudulent or wasteful by estimating the probability of exclusion based on observable characteristics. Specifically, we use data two to four years prior to exclusion and estimate:

  P(exclusion) = exp(α + βX_it + η_s(i) + φ_p(i) + γ_t + ε_it) / [1 + exp(α + βX_it + η_s(i) + φ_p(i) + γ_t + ε_it)]   (2.1)

Equation (2.1) is a logit model that predicts the probability of exclusion based on observables, including the log of total claims, charges, claims per patient, charges per patient, and average charge per claim (X_it).
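For intuition, the logistic transform in Equation (2.1) maps a linear index of observables into a probability. A minimal sketch follows; the coefficient vector and intercept are illustrative, and the specialty, place-of-service, and year-quarter fixed effects are folded into the intercept for brevity:

```python
import math

def predicted_exclusion_prob(x, beta, alpha=0.0):
    """Logistic transform of the linear index in Equation (2.1).

    x: covariates for one physician-quarter (e.g., logs of total claims,
       total charges, claims per patient, charges per patient, and average
       charge per claim); beta: their coefficients. The fixed effects of
       Equation (2.1) would add further terms to the index; here they are
       absorbed into the intercept `alpha`.
    """
    index = alpha + sum(b * xi for b, xi in zip(beta, x))
    return math.exp(index) / (1.0 + math.exp(index))
```

With a zero index the predicted probability is 0.5, and it rises monotonically as the index (e.g., billing intensity) rises.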
We also control for physician specialty fixed effects (η_s(i)), place of service fixed effects (i.e., dummies for office, hospital, ambulatory surgical center, nursing facility, or other) (φ_p(i)), and year-quarter fixed effects (γ_t). We examine the distribution of predicted probabilities of exclusion and assign physicians to the 0-50th, 50th-75th, or greater-than-75th percentile group if at least four of the quarters in years two to four prior to exclusion were in the respective percentile.

[4] Sensitivity to this exclusion will be tested in a future robustness check.

For each of these subsamples, we then estimate changes in behavior over time using:

  Y_it = α + β₁·1(-4 ≤ T_it < -2) + β₂·1(-2 ≤ T_it < 0) + β₃·1(0 ≤ T_it < 2) + ΓX_it + η_i + φ_t + ε_it   (2.2)

Y_it are measures of total claims and charges, claims and charges per patient, number of patients, and average charge per claim for each physician-year-quarter. T_it measures time, in years, relative to exclusion. We consider three 2-year periods: years 2-4 prior (i.e., the pre-period), years 0-2 prior (i.e., the period plausibly following indictment), and years 0-2 after (i.e., the period following exclusion). We exclude quarter 9 (i.e., prior year 3, quarter 1) as the reference period. We additionally control for covariate characteristics (X_it), including patient age dummies, race dummies, female share, and indicators for time more than four years before exclusion and more than two years following exclusion. Physician (η_i) and year-quarter (φ_t) fixed effects ensure that estimates are identified from changes within a physician over time. ε_it is an idiosyncratic error term, and we estimate robust standard errors, clustered at the organization level.

We need to make a few assumptions to identify the parameter of interest β. First, we assume that there were no contemporaneous changes in the post-period which would have affected physician behavior.
Second, we assume that the date of exclusion is plausibly exogenous and that the likelihood of having an excluded provider within an organization is random. We assess the plausibility of this assumption by allowing for a long pre-period, running a quarter-level event study version of Equation (2.2), and assessing whether there are responses in the pre-period prior to plausible indictment. While we cannot prove the randomness of organizations having an exclusion, we check the robustness of our results by comparing physicians in TINs with an excluded provider to matched physicians in TINs without any exclusions. Using a propensity score approach, we identify one-to-one matches for each "treated" physician using physician specialty dummies, place of service dummies, and average claim counts and charges within the two to four years prior to exclusion.
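One way to implement the one-to-one propensity score matching described above is a greedy nearest-neighbor match without replacement; the exact matching algorithm is not specified in the text, so treat this as an illustrative sketch:

```python
def one_to_one_match(treated, controls):
    """Greedy 1:1 nearest-neighbor matching on propensity scores, without
    replacement (each control is used at most once).

    treated, controls: dicts mapping physician id -> propensity score.
    Returns a dict mapping each treated id to its matched control id.
    """
    available = dict(controls)
    matches = {}
    # process treated units in score order so results are deterministic
    for tid, score in sorted(treated.items(), key=lambda kv: kv[1]):
        if not available:
            break
        nearest = min(available, key=lambda cid: abs(available[cid] - score))
        matches[tid] = nearest
        del available[nearest]
    return matches
```

In practice one would also enforce a caliper (a maximum allowable score distance) so that poorly matched pairs are dropped rather than retained.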
Table 2.2 additionally identifies the characteristics of excluded providers relative to others in their practice (TIN), sepa- rated by whether others had predicted probabilities of exclusion in the 0-50th percentile, 50-75th percentile, or greater than 75th percentile. Relative to other physicians in the excluded provider’s TIN, excluded providers have higher total charges per quarter, higher total claims per quarter, and higher patient counts. However, on most other margins, excluded physicians in Column (1) have characteristics that are on par with other physicians in the TIN within the top 75th percentile of predicted exclusion probabilities: charges and claims per patient and average price per claim are similar, probabilities of exclusion are similar, and patient characteristics and place of service shares are similar. Figure 2.2 shows the distribution of predicted exclusion probabilities. Excluded providers have a long right tail of predicted probabilities that range to approximately 0.6. Non-excluded providers also have a long right tail, but ranges to approximately 0.2. There is quite some overlap in the lower tail of the distribution between excluded and non-excluded providers, suggesting that the 37 observables we use are insufficient at accurately predicting exclusion at the lower tail. However, at the right tail of the predicted exclusion distribution, those in the above-75th, and particularly in the above-90th percentile of the distribution, are very likely fraudulent. In Figures 2.3 and 2.4, we conduct an event study analysis of physician behavior. As the figures show, physicians in the 0 to 50th and 50-75th percentiles of predicted excludability generally show no change in behavior. Responses occur around two years prior to exclusion, which plausibly aligns with dates of indictment. Before then, physicians are not changing their behavior. 
Thereafter, there is a slight increase in total charges and charges per patient among physicians in the 0-50th percentile group. However, physicians with exclusion probabilities in the greater-than-75th percentile show large and statistically significant reductions in claim counts and charges: total charges fall, driven by a combination of reductions in claims per patient, the average price per claim, and the number of patients. Of note, the 50-75th percentile group shows no change in the pre- and post-periods, suggesting that any change in the greater-than-75th percentile group is not driven exclusively by regression to the mean.

Table 2.3 presents coefficients from estimating Equation (2.2). As shown in Panel A, physicians in the bottom 75th percentile of predicted exclusions show a small increase in claims and charges (4% to 6%), which stems from increases in charges per patient (6%) and claims per patient (3%). There is also a small, but statistically insignificant, increase in the number of patients (2%). Responses begin two years prior to exclusion. In Panel B, we find clear deterrence effects among physicians in the above-75th percentile of predicted exclusions. Among these physicians, responses begin two years prior to exclusion, and they double in magnitude in the years following exclusion. In particular, total charges fall by 6% and 12% following plausible indictment and exclusion, respectively. Similarly, total claims fall by 3% and 7% in the two time periods. Of note, there is a significant drop in the number of patients per physician (4%) following exclusion, which can be due to either a reduction in billing for false (i.e., non-existent) patient visits or a decrease in the number of patients seeing physicians in excluded organizations. As Appendix Figure B.1 and Appendix Table B.1 suggest,
2.5.2 Variation in Responses to Identify Various Mechanisms While we document clear changes in billing behavior, it is less clear how these changes or oc- curring or whether these reductions reflect actual changes in care—such as the fewer unnecessary or duplicative services—or mere changes in billing practices, such as a reduction in billing for services not provided to patients not seen. We begin to explore these mechanisms by examining heterogeneous responses across organizations and types of service. Shown in Table 2.4, larger organizations and physicians in hospital-based specialties demonstrate the largest drop in total charges and claims. Specifically, organizations with physician counts in the highest tercile show an almost 10% drop in total charges, relative to the 6% drop in smaller organizations, and physicians in hospital-based specialties—defined as those in anesthesiology, emergency medicine, radiology, and pathology—and those in non-surgical specialties (including cardiologists, dermatologists, gas- troenterologists, neurologists, endocrinologists, rheumatologists, hematologists, and oncologists) showed responses on the order of 11% to 14% drops in charges, which are considerably higher than the 5% to 8% changes in general practice, psychiatry, and surgical specialties. 2.5.3 Robustness and Sensitivity to Assumptions Finally, we test for the robustness of our results by testing for (1) regression to the mean, and (2) sensitivity to the plausibly exogenous date of exclusion. In Appendix Figure B.2 and Appendix Table B.2, we examine the responses when predicted probability of exclusion is defined by an even earlier time period: three to five years prior to exclusion. The results remain unchanged and the decline in charges and claims among the top 75th percentile of excluded probability does not begin until two years prior to exclusion, suggesting that results are not driven by regression to the mean. 
We also examine the sensitivity around our dates of response. First, to assess the assumption of exogeneity of the exclusion date, we randomly assign exclusion dates to physicians in each affected TIN, re-estimate the probability of exclusion, and consider responses around two years prior to the randomly assigned exclusion date. The results are shown in Appendix Table B.3 and indicate that there are no responses to randomly assigned dates in the two years prior to exclusion or the two years after exclusion. Second, we assess whether responses occur in response to dates of indictment. For the small subset of data in which we have detailed dates of indictment, we repeat the analysis examining responses around the dates of indictment. Using the definitions of the 75th percentile as defined from the entire sample (i.e., keeping them the same as in the main analysis), we illustrate in Appendix Figure B.3 that total charges do indeed drop following the quarter of peer indictment. Total claims are statistically insignificant, likely due to the small sample size of 90 physicians in the above-75th-percentile subsample. [5]

2.5.4 Welfare Estimates of Deterrence

We calculate the deterrence cost savings from peer exclusions. Among physicians above the 75th percentile, total charges fell by 6% to 12% following plausible indictment and exclusion, respectively. The average physician in the 20% sample of data billed Medicare for $9,708 in a given quarter, suggesting that physicians in the 100% sample were paid $194,160 per year for Medicare services ($9,708 × 4 quarters × 5 for the full sample). Thus, deterrence effects per physician summed to an average of $12,232 to $23,296 in the four years following plausible indictment. With approximately 87,325 physicians in the top 75th percentile within organizations with an excluded provider, this sums to approximately $1.07 billion to $2.03 billion in savings from deterrence alone.
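The aggregation above can be reproduced as a back-of-the-envelope script; only the scaling (×4 quarters, ×5 from the 20% sample) is computed here, and the per-physician savings range is taken directly from the text (variable names are ours):

```python
# Inputs quoted in the text above.
AVG_QUARTERLY_BILLING_20PCT = 9_708           # $ per physician-quarter, 20% sample
ANNUAL_FULL_SAMPLE_BILLING = AVG_QUARTERLY_BILLING_20PCT * 4 * 5  # annualize, scale to 100% sample
PER_PHYSICIAN_4YR_SAVINGS = (12_232, 23_296)  # low/high per-physician range from the text
N_AFFECTED_PHYSICIANS = 87_325                # top-75th-percentile peers of excluded providers

# Aggregate deterrence savings across all affected physicians.
TOTAL_SAVINGS = tuple(s * N_AFFECTED_PHYSICIANS for s in PER_PHYSICIAN_4YR_SAVINGS)
# TOTAL_SAVINGS is roughly (1.07e9, 2.03e9): $1.07 billion to $2.03 billion
```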
[5] These estimations may improve with the collection of additional data.

2.6 Conclusion

We find that Medicare exclusions have a significant deterrence effect on high billers. Claims and charges fall by approximately 12% and 7%, driven by declines in both claims and charges per patient and in the number of patients seen. We also find evidence that charges per service fall by 5%, suggesting that potentially fraudulent physicians are specifically reducing billing for more expensive procedures. Our results demonstrate that Medicare exclusions create a significant deterrence effect among providers who consistently overbill for care. In addition to removing fraudulent providers, efforts at reducing fraud, waste, and abuse generate large spillover effects on other potentially fraudulent providers who have yet to be sanctioned; these are precisely the physicians for whom detection is more difficult and costly. The spillover effects indicate higher-than-expected returns to policies such as increased CMS oversight of Part D and the expansion of the Recovery Audit Contractors program, which requires Parts C and D providers to correct improper billing within 60 days of identification.

Future research will explore the extent to which the deterrence effect moves across organizations and assess whether spillover effects are larger from targeting fraudulent non-physician providers or organizations. We will also consider whether physicians sort across organizations, for example, fraudulent physicians moving to more fraudulent firms. Finally, in addition to supply-side responses, beneficiaries may respond negatively to learning that their physician was excluded. Such demand-side responses represent another spillover effect of heightened sanctions.

Figures

Figure 2.1: Events Prior to Exclusion
Notes: Numbers of indictments, guilty pleas, and trial convictions are total counts of enforcement activities by the Medicare Strike Force from 2007 to 2017.
Figure 2.2: Predicted Probability of Exclusion
Notes: This figure shows predicted probabilities of exclusion—using data 3 to 4 years prior to exclusion—from a logit regression predicting exclusion using total charges, total claims, charges per patient, claims per patient, charge per service, specialty fixed effects (24), and place of service fixed effects (4). The red lines correspond to the 25th, 75th, and 90th percentiles for the physicians who were not excluded; the blue lines correspond to the 25th, 75th, and 90th percentiles for the excluded physicians. We omit a long right tail of excluded physicians.

Figure 2.3: Change in Charges, by Percentile of Predicted Exclusion Probability
(a) Total Charges (b) Charges per Patient (c) Average Charge per Claim
Notes: Each line shows coefficients from a regression with time dummies, patient characteristics, and physician and year-quarter fixed effects. The bars show the 95% confidence interval with standard errors clustered by TIN. Physicians with predicted exclusion probability below the 50th percentile (green), in the 50th to 75th percentile (blue), and above the 75th percentile (red) are shown. Reference category is quarter 9 (i.e., the first quarter in pre-period year 3).

Figure 2.4: Change in Quantity, by Percentile of Predicted Exclusion Probability
(a) Total Claims (b) Claims per Patient (c) Number of Patients
Notes: Each line shows coefficients from a regression with time dummies, patient characteristics, and physician and year-quarter fixed effects. The bars show the 95% confidence interval with standard errors clustered by TIN. Physicians with predicted exclusion probability below the 50th percentile (green), in the 50th to 75th percentile (blue), and above the 75th percentile (red) are shown. Reference category is quarter 9 (i.e., the first quarter in pre-period year 3).
Tables

Table 2.1: Timeline from Indictment to Exclusion

                        Indicted to     Ruling to      Sentencing to
                        Guilty Ruling   Sentencing     Exclusion       Total
                        (Years)         (Years)        (Years)         (Years)
                        (1)             (2)            (3)             (4)
Panel A: Defendant Plead Guilty (N = 160)
  Mean                  0.70            0.49           0.75            1.94
  Median                0.50            0.25           0.50            1.25
Panel B: Convicted at Trial (N = 71)
  Mean                  0.94            0.46           0.65            2.05
  Median                0.63            0.25           0.50            1.38

Notes: Data from text searches of OIG, DOJ, and FBI news archives.

Table 2.2: Summary Statistics

                            Excluded       Other Physicians in TIN,
                            Physicians     By Predicted Exclusion Probability
                                           0-50th        50-75th       >75th
                            All            Percentile    Percentile    Percentile
                            (1)            (2)           (3)           (4)
Predicted Exclusion         0.15           0.02          0.05          0.13
Total Charges ($)           36836.57       4670.30       4843.27       9732.15
Charges per Patient ($)     303.71         172.93        153.49        294.05
Charge per Claim ($)        80.38          88.41         73.68         90.93
Total Claims                740.05         62.47         85.27         126.65
Claims per Patient          4.71           2.20          2.74          4.23
Number of Patients          154.58         25.78         28.47         31.51
1(Black)                    0.12           0.14          0.10          0.10
1(Other Race)               0.07           0.08          0.06          0.06
1(Female)                   0.60           0.57          0.62          0.60
1(Age < 70)                 Ref            Ref           Ref           Ref
1(Age 70-74)                0.15           0.15          0.15          0.15
1(Age 75-79)                0.11           0.12          0.12          0.12
1(Age 80-84)                0.07           0.07          0.07          0.07
1(Age 85-90)                0.03           0.03          0.03          0.03
1(Age 90+)                  0.01           0.01          0.01          0.01
1(Office)                   0.69           0.17          0.71          0.72
1(Hospital)                 0.23           0.79          0.23          0.22
1(ACS)                      0.01           0.00          0.00          0.00
1(Nursing Facility)         0.01           0.00          0.01          0.01
1(Other)                    Ref            Ref           Ref           Ref
No. of Physicians           3,794          29,156        17,549        17,245
No. of Observations         151,362        819,764       554,116       560,334

Notes: The 0-50th, 50-75th, and greater than 75th percentiles are defined by having predicted probabilities of exclusion within those percentiles for at least four of the eight pre-period quarters (in years 2-4) prior to exclusion.
Table 2.3: Responses, by Percentile of Predicted Exclusion Probability

                      Total       Charges      Price      Total      Claims       Number
                      Charges     per Patient  per Claim  Claims     per Patient  of Patients
                      (1)         (2)          (3)        (4)        (5)          (6)
Panel A: <75th Percentile
Pre-Period 2-4        -36.03      1.673**      -0.323     -1.519     0.00831      -0.216
                      (38.82)     (0.694)      (0.378)    (1.552)    (0.0125)     (0.220)
Pre-Period 0-2        202.0***    2.831***     0.181      6.611*     0.0281**     0.884*
                      (53.19)     (0.710)      (0.381)    (3.609)    (0.0133)     (0.525)
Post-Period 0-2       293.6***    3.848***     -0.390     10.34*     0.114**      1.058
                      (94.53)     (1.310)      (0.990)    (5.880)    (0.0497)     (0.817)
Magnitude of Response (%)
  Indictment          4.06        6.34         0.22       1.75       1.18         2.62
  Exclusion           5.90        9.92         -0.48      2.38       4.79         3.13
No. of Obs.           1,373,634   1,373,634    1,373,634  1,373,634  1,373,634    1,373,634
Mean Dep. Var.        4979.81     161.61       81.58      104.28     2.38         33.8
Panel B: >75th Percentile
Pre-Period 2-4        -154.0*     -0.782       -1.541     -0.825     -0.00160     -0.286
                      (81.79)     (3.920)      (1.513)    (1.281)    (0.0340)     (0.307)
Pre-Period 0-2        -608.7***   -17.21***    -3.092**   -3.647***  -0.126***    -0.171
                      (111.6)     (3.347)      (1.538)    (1.354)    (0.0337)     (0.248)
Post-Period 0-2       -1,164***   -23.71***    -5.540***  -6.247*    -0.0775      -0.966**
                      (209.7)     (4.068)      (2.038)    (3.322)    (0.0886)     (0.409)
Magnitude of Response (%)
  Indictment          -6.27       -6.06        -3.34      -2.89      -3.15        -0.53
  Exclusion           -11.99      -8.35        -5.98      -4.95      -1.94        -2.99
No. of Obs.           467,978     467,978      467,978    467,978    467,978      467,978
Mean Dep. Var.        9708.97     284.03       92.57      126.09     4.00         32.35

Notes: Results are from regressions at the physician-quarter-year level. Pre-Period 2-4 is an indicator equal to one in years 2-4 prior to exclusion, excluding quarter 9 as the reference quarter. Pre-Period 0-2 (post-indictment) is an indicator equal to one in years 0-2 prior to exclusion. Post-Period 0-2 (post-exclusion) is an indicator equal to one in years 0-2 following exclusion. We control for patient covariates and physician and year-quarter fixed effects. Standard errors clustered by TIN shown in parentheses. *** p<0.01, ** p<0.05, * p<0.1.
Table 2.4: Heterogeneity in Responses, >75th Percentile Predicted Exclusion Probability

                          Total Charges                       Total Claims
                  Response     Mean       Percent     Response     Mean      Percent    No. of
                  1(Post-Ind.) Charges    Change      1(Post-Ind.) Claims    Change     Obs.
                  (1)          (2)        (3)         (4)          (5)       (6)        (7)
Panel A: Exclusion Severity
Mandatory         -782.4***    8,848      -8.84       -6.558**     116.66    -5.62      186,688
                  (298.1)                             (3.126)
Permissive        -848.3***    11,006     -7.71       -4.997***    140.31    -3.56      281,290
                  (117.2)                             (1.669)
Panel B: Exclusion Type
Fraud             -785.9***    8,905      -8.83       -5.792***    119.88    -4.83      317,024
                  (107.4)                             (1.775)
Substance         -452.9       9,731      -4.65       2.947        131.71    2.24       28,308
                  (314.9)                             (5.230)
Crime             -963.2**     11,788     -8.17       -6.125       140.98    -4.34      122,361
                  (435.4)                             (3.985)
Panel C: Organization Size
TIN <40           -576.8***    10,733     -6.92       -1.528       140.20    -1.09      86,860
                  (214.7)                             (2.501)
TIN 40-156        -1,208**     10,589     -6.30       -5.507       140.64    -3.92      166,095
                  (511.9)                             (4.482)
TIN >156          -712.8***    8,603      -9.45       -6.013***    108.97    -5.52      215,023
                  (127.5)                             (2.161)
Panel D: Physician Specialty
GP                -344.7***    7,048      -4.89       -3.662       152.37    -2.40      148,384
                  (96.30)                             (2.750)
Mental Health     -234.1***    2,661      -8.80       -4.388***    36.62     -11.98     35,986
                  (87.68)                             (1.457)
Non-Surgical      -2,320***    20,714     -11.20      -12.08**     196.17    -6.16      89,508
                  (671.1)                             (6.077)
Surgical          -647.9***    11,157     -5.81       -2.516*      84.60     -2.97      84,647
                  (159.6)                             (1.397)
Hospital-based    -1,027***    6,932      -14.82      -4.556*      77.15     -5.91      51,932
                  (262.5)                             (2.538)

Notes: Coefficients reported correspond to a dummy for the post-indictment period (2 years prior to exclusion and thereafter). TIN size is based on the number of unique providers.
Physician specialty groups are defined as follows: general practice (including family practice, internal medicine, and geriatric medicine); mental health (including psychiatry, neurology, and neuropsychiatry); non-surgical subspecialties (including cardiology, dermatology, endocrinology, hematology, gastroenterology, nephrology, rheumatology, oncology, among others); surgical subspecialties (including general surgery, obstetrics and gynecology, oral surgery, ophthalmology, otolaryngology, urology, among others); hospital-based (anesthesiology, diagnostic radiology, emergency medicine, pathology). Standard errors clustered by TIN shown in parentheses. *** p<0.01, ** p<0.05, * p<0.1.

Chapter 3

The More Connected the Physicians, the Better the Referrals? Evidence from Patient-Sharing Networks

3.1 Introduction

Asymmetric information in health care is one of the most well-known examples of market failure (Arrow, 1963). In theory, consumers should have sufficient information to identify the quality of providers, which will in turn encourage providers to further improve quality to increase demand. This theory often fails to hold in the market for health care because, unlike the market for typical goods and services, there are few easily accessible measures of health care quality. Even when such measures exist, they are not readily interpretable by patients in practice. Thus, when patients need specialized care, they often consult with a referring physician who is supposed to have better knowledge of the quality of specialists. The information asymmetry problem still exists between specialists and referring doctors, however, due to the intrinsic nature of health care that provides a large informational advantage to performing agents. Accordingly, providing more information to consumers has been one of the key elements of government interventions to improve the efficiency of the health care market.
The best-known, and probably most widely studied, information-based policy may be the New York State cardiac surgery report card, which publicly releases information on coronary artery bypass graft (CABG) surgery mortality rates at both the physician and hospital levels. Other states in the U.S., such as Pennsylvania and Florida, and private consulting firms also publish mortality rates of physicians, hospitals, or both. Prior studies that assess the effects of the report cards, however, have documented no, or at best moderate, effects on consumer demand. Schneider and Epstein (1996) and Hannan et al. (1997) conducted surveys in Pennsylvania and New York, respectively, and both found that most referring cardiologists know about the presence of the report cards, but only a small number of physicians actually incorporated the information into their referral decisions and changed their pre-established referral patterns. Follow-up studies suggest that one reason for the limited effects of report cards may be that the rankings simply reflect consumers' prior beliefs about quality (Mukamel et al., 2004, Dafny and Dranove, 2008, Dranove and Sfekas, 2008). More recently, Epstein (2010) also provided evidence that Pennsylvania's report cards had a limited effect on the choice of referring cardiologists, pointing out that physicians on average seemed to be well-informed of the relative performance of surgeons even without report cards. Meanwhile, online review websites such as HealthGrades (healthgrades.com) and RateMDs (ratemds.com), where consumers post their experiences with hospitals or physicians as they do on restaurant and hotel review websites like Yelp (yelp.com) or TripAdvisor (tripadvisor.com), have increasingly gained attention as an emerging source of information that complements more structured assessments, including report cards.
Despite their strengths in delivering the type of information that patients actually care about, which is missing in traditional quality reports, studies have shown that patients tend to discount the reliability of the information available on online platforms (Ranard et al., 2016, Merchant et al., 2016). The reason is that online ratings often contain extreme views—either too positive or too negative—and are filled with anonymous, unverified reviews that do not necessarily capture the true aspects of the treatment. According to a survey administered to a nationally representative sample of the U.S. population (Hanauer et al., 2014), patients are increasingly aware of online physician rating sites, but their importance in actual decision-making is dominated by other factors—word of mouth from friends or family members still being the most influential source. Certainly, people cannot obtain the same information from public report cards or online rating sites that they would from personal conversations. Thus, people will weigh more heavily the information that travels along their trust relationships, especially when high-stakes decisions such as the choice of health care treatment are involved. The discussion of who is a more informed consumer in health care then boils down to the key questions in the science of social networks: 1) who are a subject's friends, and 2) what is the subject's relationship with them like? The same questions apply to the case of tertiary health care, where demand is largely determined by primary care physicians or medical specialists who guide their patients through referrals. That is, referring physicians will be able to send their patients to better tertiary care providers if they have better access to information on the providers available in the market and the quality thereof.
Such information can be obtained or updated through interactions with other doctors, and consequently, the shape of physician social networks can influence referral decisions. Studies examining the determinants of physician referrals to date have mainly focused on the role of patient characteristics such as age, gender, race, and comorbidity (Mukamel et al., 2006, Campbell et al., 2011), institutional or organizational features on the supply side (Iversen and Lurås, 2000, Marinoso and Jelovac, 2003, Nakamura, 2010, Carlin et al., 2016), and government interventions including report cards (Schneider and Epstein, 1996, Epstein, 2010). On the other hand, economists have increasingly paid attention to the role of friends and neighbors as a channel for spreading important news and information. For instance, social networks have been studied as a mechanism that explains unequal employment opportunities (Gee et al., 2017), insurance adoption (Cai et al., 2015, Duflo and Saez, 2003), variations in productivity (Fernandez et al., 2000), and the extent of innovation and diffusion (Conley and Udry, 2010, Munshi, 2004, Bandiera and Rasul, 2006). More recently, development economists have proposed social network analysis as a tool to identify the key person to inject information into (Banerjee et al., 2013) or to target a specific population for government assistance programs (Alatas et al., 2016, Banerjee et al., 2014). Yet, although existing studies have consistently documented the importance of word of mouth and consumers' social networks in health care (Chen, 2011, Dafny and Dranove, 2008, Epstein, 2010), no empirical studies have yet examined the effects of the network structure of patients or referring physicians on their choice of health care providers.
This study examines whether referring physicians who have better connections in their social networks, and who would therefore have more exposure to word of mouth and, in turn, more up-to-date or accurate information on the performance of surgeons, lead their patients to better-performing tertiary care providers. To this end, this study uses the sample of Medicare beneficiaries undergoing CABG included in the 20% Medicare Part B claims files (2008–13) and information on the cardiac surgeons who performed the procedures and the referring physicians (mostly cardiologists and primary care physicians). CABG is one of the most frequently performed operating room procedures in the U.S. (Fingar et al., 2014) and, as described in greater detail in Section 2, is well suited to the scope of this study due to its elective nature, which provides sufficient room for referring physicians to intervene in consumer choice. Building on the existing literature, the social networks of referring physicians are identified by their patient-sharing patterns with other physicians in Medicare claims data, and the following three measures are employed as key explanatory variables: 1) number of physicians connected (adjusted degree), 2) tightness of the network (clustering coefficient), and 3) influence of individual physicians in the network (eigenvector centrality). The results of discrete-choice demand models suggest that if patients are referred by a physician who is a more important player in their social networks (i.e., whose eigenvector centrality is higher), then the patient has a higher chance of choosing a surgeon of better quality. Specifically, when a surgeon's mortality rate increases by 0.1 percentage point, patients referred by better-connected physicians decrease their probability of choosing that surgeon by 1.4–2.0 percentage points.
Given that patients in this study are, on average, provided with 15 alternatives in their choice set, these results imply that an increase in the mortality rate of a surgeon from the mean value of 2.04% to 2.14% (a 5% increase) leads to a decrease in the choice probability from 6.7% to 4.7–5.3% (a 21–30% decrease). This study contributes to three strands of literature. First, this paper examines an important, yet largely understudied, factor that explains physician referral behavior, namely, the social connectedness of referring physicians. In fact, the results indicate that the interpersonal characteristics of primary care providers are an important determinant of physician referral decisions, and therefore of the demand for specialty health care services. Second, the paper adds to a recent literature that examines the effects of information obtained through person-to-person interactions on consumer decision-making. The result that better-connected consumers make more informed choices is consistent with the findings of Cai et al. (2015), who demonstrate that consumers with more friends end up purchasing insurance with a lower premium, and Bailey et al. (2018), who show that consumers' property purchases are affected by their friends' experiences in the housing market. Third, this study adapts the methodology proposed in the medical and health services literature, which identifies physician social networks using large administrative claims data, and incorporates the analytical tools used in social network analysis, thereby exploiting the rich variation in individual physicians' network structure. In related work, Hackl et al. (2015) investigate the effects of general practitioners sharing the same education or work background, or demographic characteristics, with specialists on their referral decisions and patient outcomes.
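The demand-response magnitudes reported at the start of this passage can be checked with quick arithmetic, taking the baseline mortality rate (2.04%), the baseline choice probability (6.7%), and the estimated marginal effects (1.4–2.0 percentage points per 0.1 percentage point of mortality) as given from the text.

```python
# Verify the reported magnitudes of the demand response.
baseline_mortality = 2.04          # mean surgeon mortality rate, %
new_mortality = 2.14               # after a 0.1-percentage-point increase
print(round((new_mortality / baseline_mortality - 1) * 100))   # 5  (% increase)

baseline_choice_prob = 6.7         # baseline choice probability, %
effects_pp = (1.4, 2.0)            # pp decline per 0.1pp mortality increase
for e in effects_pp:
    new_prob = baseline_choice_prob - e
    print(round(new_prob, 1), round(e / baseline_choice_prob * 100))
# 5.3 21
# 4.7 30
```

The implied new choice probabilities of 4.7–5.3% and the relative declines of 21–30% match the figures quoted in the text.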
In place of these binary variables indicating homogeneity among physicians, the network analysis tools employed in this study allow the examination of the full distribution of referring physicians along three distinct dimensions of network characteristics. The remainder of the paper is organized as follows. Section 2 describes the clinical background of CABG, the process leading to bypass surgery, and the data used in this study. In Section 3, I present detailed information on how physician social networks are defined and how each network measure is derived. Section 4 outlines empirical strategies and briefly summarizes sample characteristics. Section 5 presents the main results, and Section 6 introduces several sources of heterogeneity. I then extend the analysis to the market level and examine the effects of the overall physician connectedness of the market in Section 7. Section 8 provides robustness checks, and Section 9 concludes.

3.2 Background

3.2.1 CABG Surgery

Coronary artery disease (CAD) is a leading cause of death for both men and women in the U.S. It affects about one fifth of people aged 65 and over and accounts for nearly 370,000 deaths every year (Ortiz et al., 2018). CAD develops when plaque builds up inside the coronary arteries and reduces the flow of blood therein, which in turn can cause chest pain or discomfort called angina and increase the risk of a heart attack. When patients are diagnosed with CAD, they are typically offered the following treatment options, depending on the severity of the blockage: medication, angioplasty and stenting, or CABG surgery. Of these options, CABG is the most invasive procedure: during the surgery, the surgeon takes a portion of a healthy blood vessel, often from the chest or leg, and attaches it around the blocked area so that blood can bypass the blocked coronary artery and flow smoothly to the heart muscle.
The CABG procedure fits the objective of this study for three reasons. First, the bypass surgery is highly expensive ($27,862–$74,169 per Medicare patient in 2015 [Hawkins et al., 2018]), but it is the most commonly performed surgical procedure in the U.S. (Gani et al., 2017). The high prevalence rate allows this study to identify a sufficient number of cases and to have the statistical power to detect differential behavior across physicians. Second, due to the risky nature of the intervention, poor performance by surgeons during the operation can lead to serious complications or even death, an outcome that is relatively readily observable by patients, colleague physicians, and other third parties. As a result, CABG mortality rates and their risk-adjustment metrics are widely available and fairly standardized. Third, CABG is mostly performed on an elective basis, which implies that the procedure usually allows sufficient time for patients to schedule it, providing room for referring doctors to actively engage in consumer decision-making.

3.2.2 Referral Process

There are several pathways in the referral process for CABG surgery. The dominant pathway is that patients are diagnosed with heart disease by primary care physicians and are sent to cardiologists for further examination. The cardiologists then examine images of the inside of the patients' coronary arteries and choose one of the treatment options based on the severity and extent of the disease. If cardiologists decide that patients need surgical intervention, they subsequently refer those patients to cardiac surgeons. Other possible pathways, although not as common, include primary care physicians referring directly to cardiac surgeons, or patients already in the hospital being newly diagnosed with CAD and treated by the cardiac surgeon on call (Mukamel et al., 2006).
Consistent with this description, a majority of the referring physicians in this study are cardiologists (79%), followed by primary care physicians (18%) and other specialties (3%).

3.3 Data

The sample of this study consists of Medicare patients who underwent CABG operations performed by cardiac surgeons included in the 20% Medicare Carrier files (2008–2013). The Carrier files are the claims submitted by non-institutional providers for services covered under Medicare Part B and contain information on patient diagnosis codes; the performing provider's National Provider Identifier (NPI), specialty code, and zip code; the claim procedure code; and the referring physician's NPI. The CABG patients are identified using the Berenson-Eggers Type of Service (BETOS) code (=P2A), and the cardiac surgeons are defined by the Health Care Financing Administration (HCFA) specialty code (=78). The Carrier files are subsequently matched to inpatient claims data with DRG codes 231–236 for additional information such as hospital of admission, type of admission, and admission/discharge dates. [1] Hospital-level information, including geographic location and total number of beds, is obtained from the Centers for Medicare & Medicaid Services (CMS) Provider of Services files. For individual physician-level information, this study uses the Physician Compare data. Because the oldest available data in Physician Compare is from March 2014, I only use time-invariant information such as gender, year of graduation, and medical school attended, and I separately categorize the physicians who are not matched with Physician Compare. [2]

[1] Detailed descriptions of each DRG code are available in Data Appendix Table C.1.
[2] Possible explanations for non-matches include a physician's death, retirement, or exclusion from Medicare.

3.4 Physician Networks

3.4.1 Network Definition

According to the medical and health services literature (Barnett et al., 2011, 2012, Casalino et al., 2015, Landon et al., 2012, 2013, Pollack et al., 2014), the presence of shared patients in administrative data, which can occur as a result of referral, insurance policies, patient self-selection, or even by chance, is a useful indicator of physician relationships. Thus, one can identify physician networks using administrative claims, where individual physicians constitute "nodes" and their "ties," or connections, are inferred from shared patients. The networks identified by this approach embrace both formal and informal relationships of physicians, which renders them "naturally" or "organically" occurring networks (Landon et al., 2012, 2013). This method is particularly relevant to the scope of this study because, according to Landon et al. (2012), patient-sharing networks likely correspond to the information-sharing relationships among physicians. The physician networks based on shared patients have also been validated by prior literature. Specifically, Barnett et al. (2011) conducted a survey and mapped the physician networks identified by the patient-sharing pattern in Medicare claims data to physicians' self-reported social networks—they found that about 80% of the relationships were identical when a physician pair shared eight or more patients. Adapting this method, this study identifies the social network of each referring physician using Medicare Carrier files within the Dartmouth Atlas Hospital Referral Regions (HRRs). The HRRs are defined by the pattern in which patients are referred for major cardiovascular or neurological surgeries, thereby representing local hospital markets in the U.S. This study then applies the following inclusion and exclusion criteria.
First, for the shared patients, I only include fee-for-service Medicare beneficiaries and their claims for evaluation and management, surgical, and medical services, excluding those for lab or imaging services. Next, for physicians, I exclude specialties that are not in charge of direct patient care, such as anesthesiologists, emergency physicians, pathologists, and radiologists. I also exclude physicians practicing in multiple HRRs, who are presumably located at the border of an HRR, and I drop cases of self-referral (Afendulis and Kessler, 2007). [3] The final important decision concerns the minimum number of shared patients that could reasonably capture meaningful relationships between physicians. One possibility is to use the eight-patient threshold mentioned above (Barnett et al., 2011), but since this threshold could differ by specialty and clinical setting, I alternatively follow Landon et al. (2012), which only included the strongest 20% of ties for each physician in terms of the number of shared patients; I will later experiment with other thresholds in robustness checks. According to Landon et al. (2012), this relative thresholding might eliminate some true relationships while retaining some random connections, but it still maintains the most influential ties for each physician and effectively filters out noise that could arise from spurious relationships. In the current study, when the networks are defined by the 20% threshold, physicians share on average 8.2 patients, with an interquartile range (IQR) of 3 to 9 and a median of 5. These numbers are not directly comparable to Barnett et al. (2011), because this study relies on the 20% Medicare sample while the existing study uses the 100% sample of Medicare patients. Figure 3.1 shows two examples of physician networks identified at baseline (2008). Figure 3.1-(a) represents the physician network in Pueblo, Colorado, and Figure 3.1-(b) is the one in Bryan, Texas.
These are relatively small networks and are chosen for ease of visualization. The orange nodes represent the referring physicians and the green nodes are the physicians with whom they share patients. The two figures illustrate the motivation of this study: the orange nodes differ to a great extent in their relative positions, number of connections, and neighborhood density. When the two markets are compared, it is also apparent that the orange nodes in Figure 3.1-(a) are more clustered and more centrally located on average than those in Figure 3.1-(b). In sum, there exist substantial variations in network structure among referring physicians, both within and across networks, and this study attempts to examine whether these variations are translated into different referral decisions.

[3] Self-referring physicians are not likely to change their behavior in response to changes in their own performance. Given that this study aims to examine who is more responsive to quality variations of cardiac surgeons, the case of self-referral may not be of much interest.

3.4.2 Network Characteristics

This study employs three key measures, each representing different aspects of the network structure (Alatas et al., 2016, Landon et al., 2012, Barnett et al., 2012, Casalino et al., 2015, Landon et al., 2013, Pollack et al., 2014, Banerjee et al., 2013, 2014). To illustrate, let $N = \{1, 2, \ldots, n\}$ be a set of physicians, or nodes, in a network, and let $E \subseteq \{\langle i, j \rangle \mid i, j \in N\}$ denote the set of edges, or ties, that indicate pairs of nodes. In general, these ties can be written as $E \subseteq \{\langle i, j, k \rangle \mid i, j \in N\}$, where $k$ is the value of the relationship. In this study, physician relationships are binary, and thus, for $i, j \in N$, $g_{i,j} \in \{0, 1\}$, where $g_{i,j} = 1$ implies that these physicians share patients and $g_{i,j} = 0$ means that they do not.
A network can be represented by $G = (N, E)$, and this is often stored as an adjacency matrix, an $n \times n$ matrix representing a network with $n$ nodes, whose entries are equal to $g_{i,j}$. Because physician relationships have to be reciprocal, the adjacency matrix is symmetric (i.e., $g_{i,j} = g_{j,i}$), and the network is then called undirected (as opposed to directed). Lastly, there is a path between two nodes if $g_{i,j} = 1$ or if there are a finite number of intermediate nodes $j_1, j_2, j_3, \ldots, j_m$ such that $g_{i,j_1} = g_{j_1,j_2} = g_{j_2,j_3} = \cdots = g_{j_m,j} = 1$. Two nodes are considered part of the same component if and only if there is a path between them. A network has a giant component if its largest component in terms of the number of constituent nodes accounts for a relatively large share of the entire network (usually more than half and often over 90%) and the rest of the network is split into small components. Given these notations, the first measure of network structure this study employs is degree—it simply captures how many connections each node has. That is, the degree of a node $i$ is defined as

$$D(i) = \sum_{j=1}^{n} g_{i,j} \quad (3.1)$$

For directed networks, outdegree and indegree measures can be defined separately as

$$D_{out}(i) = \sum_{j=1}^{n} g_{i,j} \quad (3.2)$$

and

$$D_{in}(i) = \sum_{j=1}^{n} g_{j,i} \quad (3.3)$$

Figure 3.2 provides a simple hypothetical graph to illustrate the different network characteristics. In Figure 3.2-(a), node C is connected with nodes A, B, and D (represented by green lines), and thus the degree of node C is equal to 3. Because, in this study, the number of connections is proportional to patient volume, I divide the degree by the total number of shared Medicare patients for each physician and call it adjusted degree. The second measure is the clustering coefficient, which captures how tightly connected one's neighbors are.
It is the ratio of the number of actual ties that exist between the nodes within the neighborhood of a node of interest to the number of ties that could possibly exist between those nodes (Watts and Strogatz, 1998). The clustering coefficient of node $i$ is thus given by

$$C(i) = \frac{\sum_{l \in N_i(G)} \sum_{k \in N_i(G)} g_{l,k}}{D(i)\left(D(i) - 1\right)} \qquad (3.4)$$

for all $i \in N' \equiv \{i \in N : D(i) \geq 2\}$. In Figure 3.2-(b), the black solid line between A and B and that between D and E are the actual ties in the neighborhood of C, while the dashed lines show that four more ties could exist, which makes the clustering coefficient of node C equal to one third ($= \frac{2 \text{ actual ties}}{2 \text{ actual ties} + 4 \text{ possible ties}}$). Note that the clustering coefficient of an isolated node is, by definition, equal to 0.

The last network measure this study examines is eigenvector centrality, a measure of the importance of a node. It is defined as

$$V(i) = \frac{1}{\lambda} \sum_{t \in M(i)} V(t) \qquad (3.5)$$

where $\lambda$ is a constant and $M(i)$ is the set of network neighbors of node $i$. Mathematically, the centralities are equivalent to the entries of the eigenvector associated with the largest eigenvalue of the adjacency matrix. In this way, eigenvector centrality assigns relative scores based on the principle that if one has many important friends, then one is important as well. Figure 3.2-(c) shows the eigenvector centrality of every node in this hypothetical case; nodes D and E each have only one connection in this network, but because D is connected with the more influential node C, the eigenvector centrality is higher for D than for E (i.e., $0.34 > 0.15$).

3.5 Empirical Approach

3.5.1 Model

The network characteristics are incorporated into a utility maximization framework in which a patient-referring physician pair jointly chooses a provider based on a set of observable provider characteristics.
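To make the three network measures defined in Section 3.4.2 concrete, each can be computed directly from an adjacency dictionary. The sketch below uses an invented toy graph (a hub C tied to A, B, D, and E, plus ties A-B and D-E; this is not the exact graph of Figure 3.2) and reproduces the one-third clustering coefficient discussed above:

```python
from itertools import combinations

# Invented toy graph: hub C tied to A, B, D, E, plus ties A-B and D-E.
edges = [("C", "A"), ("C", "B"), ("C", "D"), ("C", "E"), ("A", "B"), ("D", "E")]

# Adjacency dictionary; symmetric because the network is undirected.
adj = {}
for i, j in edges:
    adj.setdefault(i, set()).add(j)
    adj.setdefault(j, set()).add(i)

def degree(i):
    """Equation (3.1): the number of ties of node i."""
    return len(adj[i])

def clustering(i):
    """Equation (3.4): actual ties among i's neighbors over possible ties."""
    nbrs = adj[i]
    if len(nbrs) < 2:
        return 0.0  # defined as 0 when a node has fewer than two neighbors
    actual = sum(1 for l, k in combinations(nbrs, 2) if k in adj[l])
    possible = len(nbrs) * (len(nbrs) - 1) / 2
    return actual / possible

def eigenvector_centrality(iters=200):
    """Equation (3.5): power iteration on the adjacency structure."""
    x = {v: 1.0 for v in adj}
    for _ in range(iters):
        x = {v: sum(x[u] for u in adj[v]) for v in adj}
        norm = max(x.values())
        x = {v: val / norm for v, val in x.items()}
    return x

print(degree("C"))      # 4 ties in this toy graph
print(clustering("C"))  # 2 actual ties among {A, B, D, E} / 6 possible = 1/3
```

On this graph, node C obtains the highest eigenvector centrality, consistent with the hub intuition behind equation (3.5).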
The assumption of joint decision-making by patients and referring doctors is supported by the observation that, for most cardiologists (>95%), more than 90% of patients conform to the cardiologist's initial recommendation (Schneider and Epstein, 1996). Each hospital-surgeon pair constitutes a distinct alternative, so a surgeon performing at two different hospitals contributes two alternatives. With these assumptions, I model the utility of the patient-referring physician pair (hereafter "consumer") $i$ from choosing surgeon $j$ practicing in hospital $k$ ($U_{ijk}$) as a function of hospital attributes ($W_{ik}$) and surgeon attributes ($X_{jk}$). The hospital attributes include a binary indicator of nearby hospitals defined by the median logarithmic distance between the hospital and the patient's residence, number of beds, teaching status, and ownership (for-profit, non-profit, and government), while the surgeon characteristics include experience proxied by years since graduation, gender, and a binary variable that equals one if the surgeon graduated from a top-20 medical school.4 The type of insurance that surgeons and hospitals accept is another attribute that could influence a physician's referral choice; however, because the study population consists of Medicare patients, who can in theory choose any provider that accepts Medicare, this attribute is less relevant in this context. I assume that both surgeons and hospitals send signals about their clinical quality ($Q_{jk}$), which is measured by in-hospital mortality rates. I further assume that these signals are noisy, and thus that responses to $Q_{jk}$ differ by how well connected the referring physician is ($N_i$).
Assuming linearity and additive separability, the utility function can be written as

$$U_{ijk} = W_{ik}\alpha + X_{jk}\beta + Q_{jk}\gamma + (Q_{jk} \times N_i)\delta + \varepsilon_{ijk} \qquad (3.6)$$

In practice, one cannot observe $U_{ijk}$, but only $u_{ijk}$, where

$$u_{ijk} = \begin{cases} 1 & \text{if } U_{ijk} = \max(U_{i11}, \ldots, U_{iJK}) \\ 0 & \text{otherwise} \end{cases} \qquad (3.7)$$

Specifying $\varepsilon_{ijk}$ to follow an independent type I extreme value distribution leads to maximum likelihood conditional logit, and the choice probability can be estimated by

$$\Pr(u_{ijk} = 1) = \frac{\exp\left(W_{ik}\alpha + X_{jk}\beta + Q_{jk}\gamma + (Q_{jk} \times N_i)\delta\right)}{\sum_{(jk)|i} \exp\left(W_{ik}\alpha + X_{jk}\beta + Q_{jk}\gamma + (Q_{jk} \times N_i)\delta\right)} \qquad (3.8)$$

Because CABG is a highly invasive procedure and the mortality rate during the operation is nontrivial, people may care about their survival probability when choosing a specific hospital-surgeon pair, and in-hospital mortality rates would therefore serve as an appropriate quality metric. In fact, in-hospital mortality has been widely used to measure provider quality for CABG in the economics (Wang et al., 2011, Epstein, 2010), management (Lu and Rui, 2017, Kc and Terwiesch, 2011, Clark and Huckman, 2012, Huckman and Pisano, 2006), and medical literature (Ghali et al., 1997, Holman et al., 2001, Serruys et al., 2009). Following this convention, I define in-hospital mortality as death occurring before the date of discharge from the hospital. A potential concern associated with using a mortality outcome as a measure of quality is that observed differences in outcomes can be driven by differences in patient case mix, independent of the true quality of the provider. Various methods have been proposed to deal with case-mix differences, and yet another strand of literature has pointed out the additional bias that these risk-adjustment methods can introduce (Gaynor et al., 2016). Thus, I first obtain the risk-adjusted mortality rates and then compare the results based on the adjusted rates to those based on the unadjusted rates.

4 Defined by the U.S. News ranking, as in Currie et al. (2016).
The results are qualitatively similar, and thus, as in Gaynor et al. (2016), I only report the results with the unadjusted mortality rates. A more detailed discussion of risk adjustment follows in Section 8.4.

For ease of interpretation, the network measures are converted into dichotomous variables indicating high adjusted degree, high clustering coefficient, and high eigenvector centrality using a median split. The estimates for the network measures can be biased, however, if these physician-level characteristics are not random. Specifically, it is likely that the capability of discerning a good surgeon is correlated with unobservable skills of referring physicians, which can also be correlated with their interpersonal characteristics. This concern may be alleviated by the property of conditional logit models that factors out case-specific attributes (i.e., consumer-specific as opposed to alternative-specific attributes, such as the education and training backgrounds of referring physicians) (McFadden, 1974). Yet the estimate of $\delta$, the coefficient of interest, can still be biased to the extent that the interaction between surgeon quality and the referring physician's network characteristics is correlated with factors observed by consumers but not by econometricians. For instance, high-quality or low-quality providers may differentially seek to socialize with more-connected referring doctors in an unobservable way. If this is actually the case, then it can be viewed as one of the network effects (called "homophily" effects in the literature, indicating that good doctors tend to be friends of one another [Banerjee et al., 2013, Christakis and Fowler, 2007, Gee et al., 2017, Landon et al., 2012]), not a source of bias.
Another empirical concern is the presence of simultaneity bias: the network measures are a function of the type and number of people with whom the referring doctors are connected, and referring physicians and their chosen surgeons have connections by definition, as they share patients. Thus, I use the lag of the network measures instead of their current values to address this concern. The standard errors are clustered at the HRR-year level.

3.5.2 Patient Choice Set

Because it is impossible in practice to observe the true choice set that an individual patient faces, Epstein (2010), in his study evaluating the effects of report cards on physician referrals, takes a conservative approach by examining two extreme cases: (1) a patient considers all hospital-surgeon pairs available in the regional market and takes into account both hospital and surgeon attributes when making a choice, or (2) a patient is already admitted to a hospital and must choose one of the surgeons operating in that particular hospital, in which case only surgeon attributes drive consumer decisions. I follow this approach and notation by referring to the first choice set as "Hospital+Surgeon" and the second as "Surgeon|Hospital." A hospital-surgeon pair is included in the choice set if it has at least one CABG claim in both the 20% Carrier and inpatient files in a given year; in other words, if the alternative performs, on average, at least five CABG surgeries for Medicare patients annually, each of which entails both physician fees and institutional charges. Consistent with the network definition, I use the HRRs to define the regional market. The median choice set size is 15 (IQR 10-23) for the Hospital+Surgeon set and 3 (IQR 2-4) for the Surgeon|Hospital set.
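The choice probability in equation (3.8) is a softmax over the deterministic utilities of the alternatives in a consumer's choice set. A minimal numerical sketch (all attribute values and coefficients below are invented for illustration and are not estimates from the data):

```python
import math

# Hypothetical alternatives (hospital-surgeon pairs) in consumer i's choice
# set. Each tuple: (hospital attribute W, surgeon attribute X, mortality Q).
alternatives = [
    (1.0, 0.5, 0.02),   # nearby hospital, 2% in-hospital mortality
    (0.0, 0.8, 0.01),
    (1.0, 0.2, 0.04),
]

# Illustrative coefficients and network indicator; all values are invented.
alpha, beta, gamma, delta = 0.3, 0.6, -40.0, -20.0
N_i = 1  # e.g., the referring physician has high eigenvector centrality

def utility(W, X, Q):
    """Deterministic part of equation (3.6)."""
    return W * alpha + X * beta + Q * gamma + (Q * N_i) * delta

# Equation (3.8): logit choice probabilities = softmax over the choice set.
v = [utility(*a) for a in alternatives]
denom = sum(math.exp(u) for u in v)
probs = [math.exp(u) / denom for u in v]

print([round(p, 3) for p in probs])  # the lowest-mortality pair is most likely
```

With the negative (invented) quality coefficients, the alternative with the lowest mortality rate receives the highest choice probability, which is the comparative static the estimation tests.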
One potential concern is that the choice set may not be comprehensive enough to capture all possible alternatives, as this study uses 20% Medicare claims data to construct the patient's choice set. To examine the extent to which this problem distorts the true decision-making process of consumers, I identify all cardiac surgeons available in each HRR in 2014 using the Physician Compare data, which contain information on the universe of Medicare providers in the U.S. The median number of surgeons is 10 (IQR 5-17), and given that a cardiac surgeon works in 1.7 hospitals on average in the data, I conclude that the choice set defined in this study reasonably captures the set of providers that patients would consider. Moreover, the use of the 20% claims files may help eliminate the few alternatives that have a very small market share and thus do not appear consistently in the 20% sample, thereby yielding the set of alternatives that patients would likely take into consideration in practice.

3.5.3 Descriptive Statistics

The average in-hospital mortality rates are 2.43% for hospitals and 2.04% for cardiac surgeons (Table 3.1). Most of the hospitals are non-profit (67.48%), and the surgeons performing the CABG procedure are predominantly male (96.77%). The sample includes 8,510 referring physicians, who are on average connected with 175 colleagues in a network consisting of 2,220 nodes. This implies that the referring physicians in the sample (mostly cardiologists, followed by primary care physicians and a few other specialties) know, on average, 8% of the physicians (of any specialty) in their market. Table 3.2 compares sample characteristics of more versus less connected referring physicians, where each category is defined by the median values of the network measures. The first three rows show that the network measures are correlated with one another, but not in the same direction.
Specifically, the clustering coefficient and eigenvector centrality are both lower, rather than higher, in the high adjusted degree group; likewise, the adjusted degree and eigenvector centrality are lower in the high clustering group, and the adjusted degree is lower in the high centrality group. This pattern is plausible if a physician has many colleagues (i.e., high adjusted degree) who barely know each other or who are not close enough to share important information, which would result in a low clustering coefficient and low eigenvector centrality, respectively. Similarly, when all of the colleagues of a referring physician tend to know one another, the referring physician's role in information sharing within the network need not be significant, as implied by the negative correlation between clustering coefficient and eigenvector centrality. The remaining rows show that the more-connected referring physicians differ statistically from their less-connected counterparts in other observable characteristics. For instance, referring physicians who graduated from more prestigious medical schools tend to have broader networks (i.e., adjusted degree 11.47 > 6.91), and female physicians tend to have tighter networks (i.e., clustering coefficient 11.07 > 5.57). These characteristics are constant within each referring physician and thus are not estimated in the conditional logit models.

3.6 Results

3.6.1 Preliminary Evidence

Figure 3.3 presents the baseline distributions of the network measures across referring physicians. Of the three measures, the eigenvector centrality exhibits the smallest range and variance, followed by the clustering coefficient and the adjusted degree. I subsequently divide each distribution into 20 bins and plot the average in-hospital mortality rates of the CABG patients referred by the physicians in each bin (Figure 3.4).
The fitted regression lines imply a negative correlation between the mortality rates and the connectedness of the referring physicians in all network domains. These network properties have evolved differently over the sample period (Figure 3.5). Physicians have come to know increasingly more colleagues, as indicated by the time-series trend of the adjusted degree, while the average importance of individual physicians in transmitting information has decreased over time, as implied by the trend of the eigenvector centrality. The average in-hospital mortality rates do not vary in parallel with the changes in any of the network characteristics, however, showing a rather flat trend at both the hospital and physician levels (Figure 3.6). Yet, even if the network structure of the referring physicians had evolved in a way that helps them distinguish the better surgeons available in the market, such a change would not have guaranteed an overall improvement in surgeon quality. Thus, Figure 3.7 alternatively shows the change in the fraction of patients referred to a particular hospital-physician combination when that alternative incurred any in-hospital mortality in the prior year, by the level of network characteristics of the referring physicians (i.e., high versus low, defined by the median). The referring doctors respond to a negative event at a tertiary health care provider by decreasing their probability of referring a patient in the following year, regardless of network characteristics; the average reduction ranges from 4.4 to 6.8 percentage points. In all network domains, the physicians above the median tend to respond more, although the differences between the high and low groups are not statistically significant. All of these figures provide suggestive evidence of a correlation between the connectedness of referring doctors and their patients' outcomes, both cross-sectionally and longitudinally.
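The binned-scatter construction behind Figure 3.4 can be sketched as follows, using simulated data and equal-width bins (the dissertation's figure may instead use quantile bins, and the claims data are of course not reproducible here):

```python
import random

random.seed(0)

# Simulated referring physicians: (network measure, avg. patient mortality).
# A mild negative relationship is built in purely for illustration.
doctors = []
for _ in range(2000):
    x = random.random()                                     # e.g., centrality
    y = max(0.0, 0.03 - 0.01 * x + random.gauss(0, 0.005))  # mortality rate
    doctors.append((x, y))

# Divide the measure's distribution into 20 bins and average mortality by bin.
n_bins = 20
sums, counts = [0.0] * n_bins, [0] * n_bins
for x, y in doctors:
    b = min(int(x * n_bins), n_bins - 1)
    sums[b] += y
    counts[b] += 1
bin_means = [s / c for s, c in zip(sums, counts) if c > 0]

print(len(bin_means))                 # one average per (non-empty) bin
print(bin_means[0] > bin_means[-1])   # downward slope in this simulation
```

Plotting `bin_means` against the bin midpoints, with a fitted line, gives exactly the kind of binned scatter described in the text.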
3.6.2 Main Findings

Table 3.3 presents the average marginal effects of quality and its interactions with the network characteristics on choice probabilities in the Hospital+Surgeon choice set. Note that the eigenvector centrality can be defined only in a connected network. I thus estimate the models using only the sample of physicians included in the giant component, which is, by definition, the largest connected component of the network. The first column shows the results with no network variables, replicating past studies demonstrating that a higher mortality rate is associated with lower demand (Baker et al., 2016, Chandra et al., 2016, Mukamel and Mushlin, 1998). The estimates for both hospital and surgeon mortality rates in column (1) exhibit the expected negative signs, though the estimate is significant only for surgeon performance. This implies that it is the surgeon's quality, not the hospital's, that matters when a physician makes a referral decision. In columns (2)-(4), I include interaction terms with each network characteristic separately, and in column (5), I examine all interaction effects in a single equation. In all models with network variables, the coefficients on hospital and surgeon mortality rates are no longer statistically significant. As for the interaction terms with network characteristics, the result in column (4) shows that when referring physicians play a more important role in information sharing in their networks, as indicated by high eigenvector centrality, they are 1.6 percentage points less likely to choose a surgeon whose in-hospital mortality rate increases by 0.1 percentage point. The interaction terms with the other network characteristics are not statistically significant, and if anything, referring physicians are more likely to choose hospitals with higher mortality rates when they belong to tighter networks (column (3)).
This pattern would be possible if the clustering coefficient captures the effects of organizational changes, such as physician-hospital integration or hospital purchases of physician practices, which can lower sensitivity to quality changes at hospitals. The estimates for the interaction terms with high adjusted degree and high clustering coefficient are not statistically significant in column (5), where all interaction effects are examined simultaneously, while the estimate for the interaction term with high eigenvector centrality remains statistically significant, with a coefficient close to that in column (4).

In the subsequent analysis, I divide the sample into two groups, elective and emergency admissions, expecting the network effects to play a more significant role for elective admissions because, as opposed to patients admitted through emergency rooms, electively admitted patients have enough time to consult with their referring doctors. Claims are categorized as emergency admissions if the type of admission code in the inpatient files equals 1, indicating that "the patient required immediate medical intervention as a result of severe, life threatening, or potentially disabling conditions."5 The results for elective admissions are very similar to those from the entire sample (Panel A of Table 3.4): individual surgeon mortality rates are negatively associated with choice probabilities, but these effects vanish in the models including high eigenvector centrality, and the only significant response is found for patients referred by a physician who is linked to well-connected doctors (-0.14). For emergency admissions, the interaction term between surgeon quality and high eigenvector centrality is still significant and negative (Panel B of Table 3.4).
Now, however, the surgeon mortality rate is positively correlated with surgeon choice, implying that surgeons with higher mortality rates are more likely to be chosen, and the interaction term with high eigenvector centrality alone does not cancel out the entire effect in column (5). It is only when physicians have both high centrality and high degree, that is, when they are connected with important players in the network and also know many colleagues, that they are able to send their patients to good surgeons even in the case of emergency admissions.

Tables 3.5 and 3.6 repeat the same analysis using the Surgeon|Hospital choice set. Similar to the results in Table 3.3, the only significant effects in Table 3.5 are found in the interaction term between surgeon quality and high centrality, though the magnitude of the coefficients is somewhat smaller. This smaller effect size is plausible because the referring physicians in this case must choose a surgeon operating in a particular hospital, leaving less room for them to respond to quality variations. The results for elective admissions are largely consistent with this finding (Panel A of Table 3.6), and for emergency admissions, the interaction terms with high centrality are no longer statistically significant. Instead, the interaction terms with high clustering appear to play an important role (Panel B of Table 3.6).

3.6.3 Sources of Heterogeneity

In this section, I introduce potential sources of heterogeneity and allow $\delta$ to vary across different provider- and patient-level characteristics (i.e., $\delta_i$). This section presents only the results from the Hospital+Surgeon choice set; those from the Surgeon|Hospital choice set are presented in the Appendix.

5 https://www.resdac.org/cms-data/files/ip-ffs/data-documentation. The type of admission codes and their detailed descriptions can be found in Table C.2 in the Data Appendix.
Provider-Level Heterogeneity

Years of Experience

I first examine whether the network effects differentially benefit referring physicians by their years of experience. To do so, I obtained the year of graduation from the 2014 Physician Compare data and computed the years since graduation as a proxy for experience. The years since graduation are then classified into quartiles: referring physicians in the top quartile (≥ 32 years) are considered more experienced, while those in the bottom quartile (≤ 18 years) are defined as less experienced. Table 3.7 compares the average marginal effects of the key variables on choice probabilities in the Hospital+Surgeon choice set across the two split samples. Similar to the results in Table 3.3, the only significant effects are found in the interaction term between surgeon quality and high eigenvector centrality in both samples. The relative magnitude of the estimates, however, supports a greater effect in the less experienced sample. That is, when a surgeon's mortality rate increases by 0.1 percentage point, referring physicians with shorter periods of practice decrease their probability of choosing that surgeon by 2.1-2.2 percentage points if they have more connections with central players, while their counterparts with longer periods of practice decrease their probability by 1.5 percentage points. When the same analysis is conducted with the Surgeon|Hospital choice set, the effects of high eigenvector centrality remain statistically significant only for the less experienced sample (Panel A of Table C.4 in the Appendix). Several explanations may account for this difference.
First, even if the two types of physicians have equally good access to information sources, physicians with longer periods of experience might have built stronger relationships with a few surgeons and thus be less likely to change their referrals in response to new information, whereas physicians with less experience may have been more flexible in making their choices. Second, relatively new physicians may be more subject to reputation concerns, as the marginal returns to good referrals could be greater for them.

Urban/Rural Areas

The next provider-level heterogeneity I examine is whether the practice location of the referring physician is urban or rural. To determine the urban versus rural status of HRRs, I first converted the county-level Rural-Urban Continuum Codes (RUCC) of 2003 to the zip code level,6 and merged them into the data using the zip codes constituting each HRR. I then computed the annual average of the RUCC for each HRR and defined it as "urban" if its average is less than four and "rural" otherwise, following the cutoff used to define urban versus rural counties based on the RUCC.7 Table 3.8 presents the results separately for referring physicians practicing in urban areas and those in rural areas. The effects of high eigenvector centrality, which are consistently observed in the earlier analyses, are not evident in the urban sample, with the estimate being statistically significant at the 10% level only in column (5).

6 These are converted based on the principle that, when a zip code falls into multiple counties, the RUCC of the county with the largest share of the zip code is assigned. The RUCC can be found at https://www.ers.usda.gov/data-products/rural-urban-continuum-codes/.
7 Detailed descriptions of each RUCC code are available in Table C.3 in the Data Appendix.
In contrast, the estimates for the rural sample are very close to the results from the entire sample, implying that, in rural areas, referring physicians who know more influential colleagues decrease their probability of referring patients by 1.5 percentage points in response to a 0.1 percentage point increase in a surgeon's in-hospital mortality rate. It is worth noting that the rural sample is only half the size of the urban sample, which suggests that the average effects in Table 3.3 may have been driven by the referring physicians in rural areas, who are much smaller in number. The results from the Surgeon|Hospital choice set also support this finding: none of the estimates in the urban sample are statistically significant, while those in the rural sample are very similar to the results from the entire sample, with, in fact, greater magnitude (Table C.5 in the Appendix). All in all, these results suggest that being connected with influential people in a social network provides more benefits to cardiologists and primary care physicians in rural areas and, in turn, to the patients for whom they make referrals.

Patient-Level Heterogeneity

Income

Most patients tend to rely on their referring physician's recommendations when deciding on surgeons and hospitals for major surgery (Schwartz et al., 2005), but they may also attempt to influence referrals given their conditions and preferences. Physicians can tailor their referral decisions accordingly, making the choice of surgeons and hospitals a joint decision-making process between referring physicians and their patients. Patient income is one such factor that both patients and referring physicians may consider, although, in the case of Medicare beneficiaries, it could play a less important role, as patients equally pay minimal out-of-pocket costs. The Medicare claims files are not the best data set to explore this possibility, because they do not provide information on an individual beneficiary's income.
Thus, I alternatively use the median household income of the zip code of the beneficiary's residence as a proxy, where the median household income is retrieved from the 2013 American Community Survey 5-year estimates (2009-2013). Because the continuous variable is a noisy representation of individual income, it is collapsed into a binary variable indicating relatively wealthy or poor patients using a median split ($48,590 a year). Overall, the results support the effects of high eigenvector centrality regardless of patient income, though only with borderline statistical significance in the wealthier sample (Table 3.9). In contrast, the interaction terms of high eigenvector centrality with surgeon mortality rates are statistically significant at the 1% level in the poorer sample, and, interestingly, the interaction terms with hospital mortality rates are also statistically significant. This implies that if a less wealthy patient sees a well-connected doctor for a referral, they may be able to avoid poorly performing surgeons and hospitals alike. These effects do not hold up in the poorer sample when the Surgeon|Hospital choice set is employed (Table C.6 in the Appendix).

Severity

Another potential source of heterogeneity is the severity of patients' underlying conditions. It is possible that even well-connected referring physicians do not respond sensitively to the quality variations of tertiary care providers, especially if they have maintained a relationship with a particular hospital or surgeon for a long time, have received kickbacks, or have other reasons that are irrelevant to the performance of the providers. When a patient's condition is very severe and they are at risk of developing other serious complications, however, the likelihood of those irrelevant factors entering a physician's decision-making may be lower.
Thus, better-connected referring physicians, who are presumably equipped with more accurate information on providers and can therefore better tailor their decisions to patients' conditions, may respond more elastically when they see more severe patients, which will in turn increase the probability of survival for their patients. Indeed, I find such evidence in Table 3.10. Patients are classified as "more severe" if the Charlson Comorbidity Index (Charlson et al., 1987) is greater than or equal to one and "less severe" otherwise, following the definition in Gaynor et al. (2016). Consistent with the previous results, the interaction term between surgeon mortality rates and high eigenvector centrality is statistically significant for both severe and less severe patients. The relative magnitude of the estimate, however, is more than twice as large in the more severe sample, which implies that when a patient's underlying condition is more serious, the survival gain from seeing a referring physician who plays a more critical role in the social network is much greater. Such survival gains are not clear when the referring physicians must choose one of the surgeons in a particular hospital (Table C.7 in the Appendix).

3.6.4 Aggregate Effects

The analysis thus far has concerned how the individual-level network characteristics of referring doctors influence their referral decisions. It may also be the case, however, that when a market includes more socially well-connected physicians, they enable better transmission of information in the market as a whole, which in turn may explain some observable differences at the market level. In addition to the information transmission mechanism, spillover effects can also exist. That is, if the market is well-connected and physicians can easily observe one another, then some physicians' good referrals may influence the referral behavior of their peers.
This market-level analysis may be interesting from a policy perspective, because the results help explain why the quality elasticity of demand in health care is higher in one market than in another. To this end, I run the following linear regression model using the same dataset (i.e., 20% Carrier and inpatient files), where $Y_{jmt}$ is the market share of surgeon $j$ in market $m$ at time $t$, and $Q_{jmt}$ is the clinical quality of that surgeon, which is interacted with the market-level average network characteristics, including average adjusted degree, average clustering coefficient, and average eigenvector centrality. In addition to these measures, two other neighborhood-level network characteristics are examined: the variance of the adjusted degree, which captures the distribution of the size of individual networks, and the fraction of nodes in the giant component, which reflects the overall connectedness of the market (Alatas et al., 2016). As in the individual-level analysis, all network characteristics are converted into binary variables using the median, and a lag specification is employed to avoid simultaneity bias. All models control for surgeon and year fixed effects, and standard errors are clustered at the HRR level. The same analysis is conducted for hospital market share.

$$Y_{jmt} = Q_{jmt}\eta + \left[ Q_{jmt} \times I(\text{Connected}_{mt}) \right]\theta + I(\text{Connected}_{mt})\mu + \rho_j + \tau_t + \zeta_{jmt} \qquad (3.9)$$

The results can be understood in a difference-in-differences framework, which exploits differences in the effects of provider quality on market share between more versus less connected markets. The coefficient on the interaction term is consistent so long as the performance of surgeons or hospitals is not correlated with unobservable market-level characteristics. If the connectedness of the market mirrors some market-level features that attract high-performing providers, however, this underlying assumption would be violated.
Yet this concern may be mitigated by the surgeon/hospital fixed effects, because the model then exploits within-provider variation, and the estimates capture the likely effects on market share when either quality or the market environment changes within physicians or hospitals over time. Figure 3.8 shows the average network characteristics of HRRs and their urban/rural status. The areas of high degree, high clustering, and high centrality overlap with one another, but not perfectly, and some of these areas are urban while others are rural, indicating that each aggregate network variable offers distinct treatment status, independent of urban/rural variation.

Figure 3.9 presents the results on the aggregate effects of the network characteristics on surgeons' market shares. I calculate market shares separately for elective and emergency admissions. In the case of emergency admissions, a patient's facility choice is less likely to be guided by referring physicians and more likely to be driven simply by physical accessibility. Thus, I compare the results by type of admission to ensure that the differential effects of provider quality on market share are in fact due to the different referral decisions of referring physicians with different neighborhood characteristics. If the estimated coefficients for emergency cases exhibited some correlation between mortality rates and market shares, it would likely reflect spurious correlation between the two, independent of the referring physicians' choices (Gaynor et al., 2016). The figure shows that the estimates for the interaction terms with high average centrality and a large giant component are statistically significant. Note that the eigenvector centrality is only defined for the physicians in the giant component.
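Equation (3.9) is a two-way fixed-effects regression; for a balanced panel it can be estimated by two-way demeaning (the within transformation) followed by OLS. The sketch below simulates a small surgeon-year panel with invented coefficients and recovers the interaction coefficient $\theta$; none of these numbers correspond to the actual estimates:

```python
import random

random.seed(7)

# Simulated balanced panel: 20 surgeons x 5 years. All numbers are invented;
# the actual estimates come from the 20% Medicare claims, not this sketch.
n_j, n_t = 20, 5
eta, theta, mu = -2.0, -6.0, 0.5                    # "true" coefficients
rho = [random.gauss(0, 1.0) for _ in range(n_j)]    # surgeon fixed effects
tau = [random.gauss(0, 0.5) for _ in range(n_t)]    # year fixed effects

Q, QC, C, Y = [], [], [], []
for j in range(n_j):
    for t in range(n_t):
        q = random.uniform(0.0, 0.05)               # in-hospital mortality
        conn = random.randint(0, 1)                 # I(Connected_mt)
        share = (q * eta + q * conn * theta + conn * mu
                 + rho[j] + tau[t] + random.gauss(0, 0.01))
        Q.append(q); QC.append(q * conn); C.append(float(conn)); Y.append(share)

def within(v):
    """Two-way demeaning for a balanced panel: v - mean_j - mean_t + mean."""
    mj = [sum(v[j * n_t:(j + 1) * n_t]) / n_t for j in range(n_j)]
    mt = [sum(v[t::n_t]) / n_j for t in range(n_t)]
    m = sum(v) / len(v)
    return [v[j * n_t + t] - mj[j] - mt[t] + m
            for j in range(n_j) for t in range(n_t)]

Xc, Yc = [within(Q), within(QC), within(C)], within(Y)

# OLS on the demeaned data: solve the 3x3 normal equations (X'X) b = X'Y.
A = [[sum(u * w for u, w in zip(Xc[r], Xc[c])) for c in range(3)]
     for r in range(3)]
b = [sum(u * w for u, w in zip(Xc[r], Yc)) for r in range(3)]
for i in range(3):                                  # Gauss-Jordan elimination
    piv = A[i][i]
    A[i] = [a / piv for a in A[i]]; b[i] /= piv
    for r in range(3):
        if r != i:
            f = A[r][i]
            A[r] = [x1 - f * x2 for x1, x2 in zip(A[r], A[i])]; b[r] -= f * b[i]

print(round(b[1], 2))  # estimate of theta (on Q x Connected), near -6
```

With the surgeon and year effects removed by the within transformation, the coefficient on $Q \times \text{Connected}$ isolates the differential quality response in connected markets, mirroring the difference-in-differences interpretation in the text.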
Combined, the results suggest that the responsiveness of demand with respect to surgeon quality is higher in markets where a larger fraction of referring physicians is connected with one another, and where they are on average close to relatively more influential nodes in their networks. Specifically, the estimates show that the market shares of surgeons increase by 6 percentage points in response to a 1-percentage-point decrease in their in-hospital mortality rates, if they practice in a market that meets the above-mentioned conditions. Given that the mean market share of a surgeon is 22%, this result implies that an increase in surgeons' mortality rates by 50% (i.e., a change from the mean of 2.04% to 3.04%) leads to a drop in their market share by 27% (i.e., the mean market share falling to 16%). I find no effects in the case of emergency admissions or in the models of hospital market share.

3.6.5 Robustness Checks

Several robustness checks are conducted to ensure that the study findings are consistent across different specifications and samples.

Quartile Specification of Network Measures

The binary indicators of high adjusted degree, clustering coefficient, and eigenvector centrality are defined by the median cutoff in the main analysis; alternatively, I use the top 25% cutoff in this section. Tables C.8–C.9 in the Appendix present the results with the newly defined network measures for the Hospital+Surgeon choice set and the Surgeon|Hospital choice set, respectively. The results are very close to those in Tables 3.3 and 3.5.

Different Thresholds to Define Networks

This study keeps only the top 20% strongest ties, in terms of the number of shared patients, for each physician when identifying the physician networks. Following Landon et al. (2012), this threshold is replaced by 10% and 30% thresholds to make sure that the results are not sensitive to the arbitrary cutoffs defining networks.
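The trimming rule just described — keep only each physician's top 20% strongest patient-sharing ties, then compute network measures on the trimmed graph — can be sketched in pure Python. The physicians and edge weights below are hypothetical, and the sketch computes raw degree and the local clustering coefficient only (the chapter additionally volume-adjusts degree and computes eigenvector centrality):

```python
# Sketch: trim each physician's patient-sharing ties to the top 20% by
# shared-patient count, then compute degree and local clustering on the
# surviving (undirected) edges. Edge weights below are hypothetical.
import math

# shared-patient counts: (physician_a, physician_b) -> number of shared patients
shared = {
    ("A", "B"): 40, ("A", "C"): 35, ("A", "D"): 3, ("A", "E"): 2, ("A", "F"): 1,
    ("B", "C"): 20, ("C", "D"): 15, ("D", "E"): 10,
}

def top_share_ties(shared, keep=0.20):
    """Keep, for each node, its strongest `keep` fraction of ties (at least one);
    an undirected edge survives if either endpoint keeps it."""
    nbrs = {}
    for (a, b), w in shared.items():
        nbrs.setdefault(a, []).append((w, b))
        nbrs.setdefault(b, []).append((w, a))
    kept = set()
    for node, ties in nbrs.items():
        ties.sort(reverse=True)
        n_keep = max(1, math.ceil(keep * len(ties)))
        for w, other in ties[:n_keep]:
            kept.add(frozenset((node, other)))
    return kept

def clustering(node, edges):
    """Fraction of the node's neighbor pairs that are themselves connected."""
    nbrs = {next(iter(e - {node})) for e in edges if node in e}
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for e in edges if e <= nbrs)
    return 2 * links / (k * (k - 1))

edges = top_share_ties(shared)
degree = {n: sum(1 for e in edges if n in e)
          for n in {x for e in edges for x in e}}
# A itself keeps only its strongest tie (to B), but C and F also keep
# their ties to A, so A's degree on the trimmed graph is 3.
print(degree["A"], clustering("A", edges))  # → 3 0.0
```

A's clustering is zero here because, after trimming, none of A's surviving neighbors are connected to each other; weak ties (e.g., the B-C tie) drop out of both endpoints' top lists.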
Only a tiny portion of the referring physicians is dropped from the original sample when the 10% threshold is applied, and the relative positions of individual physicians in their networks change very little after being collapsed into binary network variables, as can be inferred from the nearly identical results in Tables C.10–C.11 compared to the main results (Tables 3.3 and 3.5). Likewise, the results from the 30% threshold (Tables C.12–C.13) are precisely the same as the main results.

Inclusion of Physicians in the Entire Network

In most of the HRRs, there is a giant component that covers most of the nodes in the network (102,554 out of 119,693, or 86%, in the data); the rest of the network is split into small components disconnected from one another, a property found in "small-world" networks (Goyal et al., 2006). This study focuses on the majority of physicians who belong to the giant component, and as a robustness check, I include the few isolated physicians to confirm that they do not make a significant difference. Because the eigenvector centrality is only defined in connected networks, only adjusted degree and clustering coefficient are examined in this section. Overall, the results shown in Tables C.14–C.15 are qualitatively consistent with the main findings.

Risk Adjustment

To account for the fact that the underlying conditions of patients are not equal across surgeons or hospitals, and that some providers have to treat more severe patients than others, I compute the risk-adjusted mortality rate for each surgeon and hospital, using a method commonly employed in the literature (Ghaferi et al., 2011, Byrne et al., 2013, Mukamel et al., 2002, Austin and Tu, 2006, Mukamel et al., 2004, Dimick et al., 2014).
It is calculated as the ratio of observed to predicted deaths, multiplied by the national unadjusted rate, which captures whether the patient was treated any better or worse relative to the case treated by the national average provider. As per the literature and the New York State Cardiac Surgery Report (1997), I obtain the predicted deaths by estimating logistic regression models controlling for patient-level characteristics available in the claims data, including demographic (age, race, gender) and socioeconomic (county of residence, median household income of the zip code) characteristics and comorbid conditions (Charlson Comorbidity Index). I run seven different regression models with all or part of these patient-level covariates, given the concern that different risk adjusters lead to different predictions (Newhouse, 1998). I plot the distributions of the risk-adjusted mortality rates from each model at baseline in Figure C.1 in the Appendix. The red solid lines, which represent the observed mortality rates, and the gray dashed lines, which show the risk-adjusted mortality rates, are almost on top of each other for both hospital-level and physician-level mortality rates. I then repeat the main analysis using the risk-adjusted mortality rates obtained from the three most comprehensive sets of patient-level risk adjusters.[8] The results in Table C.16 show that the referring physicians of high centrality respond more sensitively to the quality variations of surgeons in all models, which is consistent with the main finding. I find some evidence that the referring physicians of high clustering also respond to changes in hospital quality, though the estimates are not statistically significant in Models 2 and 3 when other network characteristics are omitted (columns (6) and (10)).
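The indirect standardization described above — the observed-to-expected death ratio scaled by the national unadjusted rate — can be sketched as follows. The patient outcomes and predicted risks are hypothetical stand-ins for the logistic-regression output, and the national rate is taken from the hospital mean in Table 3.1:

```python
# Sketch: risk-adjusted mortality via indirect standardization,
# RAMR = (observed deaths / expected deaths) * national unadjusted rate.
# Outcomes and predicted risks below are hypothetical.

def risk_adjusted_rate(outcomes, predicted, national_rate):
    """outcomes: 1 = in-hospital death; predicted: model P(death) per patient."""
    observed = sum(outcomes)
    expected = sum(predicted)
    if expected == 0:
        raise ValueError("no expected deaths; provider cannot be adjusted")
    return observed / expected * national_rate

# One surgeon's patients: observed deaths and model-predicted death risks
outcomes  = [0, 0, 1, 0, 1, 0, 0, 0, 0, 0]           # 2 deaths in 10 cases
predicted = [0.01, 0.02, 0.10, 0.03, 0.20, 0.02,
             0.05, 0.04, 0.02, 0.01]                  # 0.50 expected deaths

national_rate = 2.43  # per 100 patients (hospital mean, Table 3.1)

ramr = risk_adjusted_rate(outcomes, predicted, national_rate)
print(round(ramr, 2))  # O/E = 2 / 0.50 = 4, so 4 * 2.43 = 9.72 per 100
```

A ratio above one (here, four) flags a provider whose patients die more often than their measured risk profile predicts, and the scaling puts the result back on the per-100-patients scale of the unadjusted rates.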
When the referring physicians are to choose one of the surgeons in a particular hospital (Table C.17), the evidence becomes clearer, and all models support significant effects of high centrality. Nonetheless, I retain the unadjusted rates in the main analysis for the following reasons. First, previous studies have argued that different risk adjustment methods and risk adjusters can introduce additional bias (Mukamel and Brower, 1998, Ghali et al., 2001, Gaynor et al., 2016, Newhouse, 1998). In fact, the signs of the estimates for hospital and surgeon mortality rates in Tables C.16 and C.17 turn positive, which is counterintuitive, and this may be due to the measurement error introduced by the new quality measures. Second, including more risk adjusters could improve the precision of the prediction, but it could also contribute to a loss of information. For instance, by including county fixed effects as a risk adjuster, the models with risk-adjusted mortality rates drop all observations in counties where no CABG patient has ever died in hospital following operation during the sample period (i.e., no within-county variation). Lastly, the magnitude of the estimates of high centrality becomes larger in the models based on risk-adjusted mortality rates, implying that, if anything, selection on patient characteristics might have led to underestimation of the network effects in the models with unadjusted mortality rates.

[8] Model 1 includes age, gender, race, Charlson Comorbidity Index, and county of residence as patient-specific risk adjusters. Model 2 includes median household income categories of patients' neighborhood (rich versus poor), and Model 3 includes the full interaction between comorbidity categories (more versus less comorbidity) and income categories as additional risk adjusters.
This may be possible if better-connected referring physicians are more likely to see severe patients who would have only moderate survival gains no matter the quality of care. If this is actually the case, then the estimates presented in the main analysis can be interpreted as a lower bound on the true network effects.

3.7 Conclusion

Few patients have direct access to information on the performance of individual surgeons. Thus, when patients are to undergo a major procedure such as CABG, they often have to consult with referring doctors who are, in theory, more knowledgeable about the performance of other doctors. The results of this study suggest that when patients see a referring physician who is a more important player in their social networks, they have a higher chance of choosing a surgeon of better quality. Specifically, the results show that if a surgeon's mortality rate increases by 5%, the probability of a patient choosing that surgeon falls by 21–30% when the patient is referred by a socially well-connected physician. The analysis of heterogeneous effects further demonstrates that referring physicians who are more junior or who practice in rural areas benefit more from the network effects. Likewise, patients with poorer health conditions benefit more from seeing a better-connected physician for a referral. Finally, an aggregate-level analysis suggests that having more socially well-connected referring doctors in the market improves the responsiveness of demand to quality changes of health care providers and thereby enhances the efficiency of the market as a whole.

This study is subject to a few limitations. First, although the conditional logit models difference out all attributes at the referring physician level, and the lag specification of network variables helps address the concern of simultaneity bias, it may still be possible that the interaction between surgeon quality and the network characteristics of referring physicians is endogenous.
One possible scenario is that high-quality surgeons provide illegal kickbacks to better-connected referring physicians in an attempt to further increase their probability of being referred. This possibility is not captured by the data and remains a potential source of bias. Second, I model the study hypothesis—that providers send signals about their quality and better-connected referring doctors are more capable of responding to these signals—using the interaction term in the discrete choice model. The network effects can also come through the denominator of the choice probability, however, meaning that the network structure can influence the size of the patient's choice set: that is, the more connected the referring physician, the larger the patient's choice set and the better off the patient. Although exploring this mechanism would provide a more complete picture of the effects of referring physicians' network structure, it is empirically challenging in practice, as it is almost impossible to identify the true choice set of consumers.

Despite these limitations, this study contributes to the literature by highlighting the importance of the social connectedness of referring physicians in understanding the demand for specialty health care services. The social network is a growing subject of interest in social science, but few studies have examined its role in spreading news or information about the quality of health care providers. In fact, the results suggest that where individual referring physicians are located in their social networks affects their choice of providers. This finding implies that, in addition to the individual characteristics of consumers that have been extensively studied as determinants of health care demand—such as income, insurance status, education, or place of residence—the interpersonal characteristics of primary care physicians matter in understanding consumer demand in health care.
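The two channels can be separated formally. In the conditional logit framework, the probability that patient i chooses alternative j is, in a stylized version of the specification used here (the coefficient labels and the covariate vector X_j are illustrative, not the chapter's exact notation):

```latex
P_{ij} \;=\; \frac{\exp(V_{ij})}{\sum_{k \in C_i} \exp(V_{ik})},
\qquad
V_{ij} \;=\; \beta\, Q_j \;+\; \theta \big[\, Q_j \times I(\text{HighNetwork}_r) \,\big] \;+\; X_j'\gamma ,
```

where Q_j is the quality of alternative j, I(HighNetwork_r) indicates a well-connected referring physician r, and C_i is patient i's choice set. The interaction effects estimated in this chapter operate through the numerator term V_ij (better-connected referrers weight quality differently), whereas the choice-set mechanism would operate through C_i in the denominator; since claims data reveal only the chosen alternative, C_i cannot be observed directly.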
From the policy perspective, the current study proposes a useful tool for identifying more informed doctors, which would enable policymakers to target specific subgroups of doctors to improve overall information transmission. In the era of 'big data', CMS has been trying to harness its data resources in new and innovative ways, one example being machine learning methods to detect fraud and abuse. The tools of social network analysis used in this study offer another possibility through which governments can predict and target more influential (or, conversely, more isolated) physicians in terms of information sharing. Governments can achieve knowledge of this sort in alternative ways—for instance, by using the pool of senior or geographically central physicians—but Banerjee et al. (2014) point out that such easy "fixes" do not guarantee that the identified individuals are in fact very central in their social networks. Social network analysis is not always a feasible policy option, however, because collecting detailed network data is expensive. The approach used herein has strength in this regard, as constructing network data from administrative claims is much less expensive than collecting full network information through surveys. In sum, this study offers a new lens through which to understand how the demand for health care services is determined and how the government can intervene to promote efficiency in health care delivery.

Figures

Figure 3.1: Examples of Physician Networks (2008)
(a) HRR=107 (Pueblo, CO) (b) HRR=388 (Bryan, TX)
Note: The orange nodes represent the referring physicians (mostly cardiologists, followed by primary care physicians and a few other specialties) and the green nodes are the physicians (of any specialty) with whom they share patients. Only relatively small networks are presented for ease of visualization.
Figure 3.2: Illustration of Network Measures
(a) Degree (b) Clustering Coefficient (c) Eigenvector Centrality

Figure 3.3: Distributions of Network Measures at Baseline (2008)
(a) Probability Density Function (b) Cumulative Distribution Function
Note: The y-axis on the left in Figure (a) corresponds to the scale of adjusted degree and clustering coefficient, while the one on the right shows the scale of eigenvector centrality.

Figure 3.4: Binned Scatter Plots and Linear Regression Lines at Baseline (2008)
Note: This figure bins referring physicians into twenty groups by their network characteristics and plots their patients' average in-hospital mortality rates at each bin.

Figure 3.5: Time-Series Plots of Network Measures
[Panels: Adjusted Degree; Clustering Coefficient; Eigenvector Centrality, each plotted over 2008–2013]
Note: Each dot represents the annual average of the network measure with its corresponding 95% confidence interval. The red dotted line is the average of the baseline year.

Figure 3.6: Time-Series Plots of Quality Measures
(a) Average of Hospitals (b) Average of Physicians [in-hospital mortality rates over 2008–2013]
Note: Each dot represents the annual average of the in-hospital mortality rate with its corresponding 95% confidence interval. The red dotted line is the average of the baseline year.

Figure 3.7: Percentage-Point Change of Patients Referred Following Negative Events
[Panels: Adjusted Degree; Clustering Coefficient; Eigenvector Centrality, each split into Low and High]
Note: This figure shows the change of the fraction of the patients referred to a particular hospital-physician combination, when that alternative had ever incurred any in-hospital mortality in the prior year.
The sample means and 95% confidence intervals are shown by the level of network measures, where the level is categorized into high (above the median) and low (below the median).

Figure 3.8: Average Market-Level Network Characteristics and Urban/Rural Status
Note: The boundaries are those of the Hospital Referral Regions (HRR). The urban/rural status is determined by the annual average Rural-Urban Continuum Codes (RUCC) of zip codes constituting each HRR.

Figure 3.9: Aggregate Effects of Network Characteristics on Surgeon Market Share by Type of Admissions
Note: This figure presents coefficients and associated 95% confidence intervals for the interaction between surgeons' average in-hospital mortality rates and their neighborhood network characteristics, which are obtained from the regression at the surgeon-year level. All models control for surgeon and year fixed effects. Standard errors are clustered at the Hospital Referral Region level.

Tables

Table 3.1: Descriptive Statistics

                                                Mean        S.D.
Hospital characteristics
  In-hospital mortality (per 100 patients)       2.43      (5.53)
  Distance (miles)                              28.20     (29.15)
  Number of beds                               419.47    (268.39)
  Teaching (%)                                  33.86     (46.35)
  Non-profit (%)                                67.48     (45.24)
  For-profit (%)                                19.87     (38.63)
  Government (%)                                12.66     (32.12)
  Number of hospitals                            1,018
Surgeon characteristics
  In-hospital mortality (per 100 patients)       2.04      (8.85)
  Years after graduation in 2009                23.20      (9.19)
  Top-ranked medical school (%)*                14.82     (35.54)
  Female (%)                                     3.23     (17.68)
  No '14 Physician Compare data (%)              9.20     (28.91)
  Number of surgeons                             1,739
Network characteristics of referring physicians
  Number of nodes                             2,219.54  (2,417.33)
  Degree                                        175.09    (139.84)
  Adjusted degree†                                0.82      (0.40)
  Clustering coefficient                          0.36      (0.19)
  Eigenvector centrality‡                         0.02      (0.02)
  Number of referring physicians                 8,510

Note: * Defined by the U.S. News ranking. † Adjusted by patient volume. ‡ Fewer physicians (N=7,733) are included for eigenvector centrality, because it is only defined in connected networks.
Table 3.2: Characteristics of Referring Physicians by the Level of Network Measures

                                   Adjusted degree          Clustering coefficient    Eigenvector centrality
                                   low     high    p-value  low     high    p-value   low     high    p-value
                                   (1)     (2)     (1)=(2)  (3)     (4)     (3)=(4)   (5)     (6)     (5)=(6)
Network characteristics
  Adjusted degree*                 0.55    1.10    <0.05    0.85    0.80    <0.05     0.83    0.77    <0.05
                                  (0.14)  (0.40)           (0.37)  (0.43)            (0.35)  (0.35)
  Clustering coefficient           0.38    0.34    <0.05    0.21    0.51    <0.05     0.36    0.36     0.36
                                  (0.19)  (0.20)           (0.05)  (0.17)            (0.19)  (0.19)
  Eigenvector centrality†          0.02    0.01    <0.05    0.02    0.01    <0.05     0.00    0.03    <0.05
                                  (0.03)  (0.02)           (0.03)  (0.02)            (0.00)  (0.03)
Other characteristics
  Years after graduation in 2009  24.12   22.72    <0.05   23.59   23.23     0.10    23.52   23.30     0.35
                                  (9.29) (10.00)           (8.79) (10.52)           (10.06)  (9.23)
  Top-ranked medical school (%)‡   6.91   11.47    <0.05    9.94    8.43    <0.05     9.40    8.57     0.21
                                 (25.37) (31.87)          (29.93) (27.79)           (29.19) (27.99)
  Female (%)                       8.81    7.73     0.08    5.57   11.07    <0.05     8.47    7.55     0.15
                                 (28.35) (26.71)          (22.94) (31.38)           (27.85) (26.42)
  No '14 Physician Compare
    data (%)                       6.13    5.12    <0.05    3.81    7.45    <0.05     5.35    6.10     0.16
                                 (24.00) (22.05)          (19.14) (26.26)           (22.51) (23.94)
  Number of physicians            4,255   4,255            4,255   4,255             3,867   3,866

Note: * Adjusted by patient volume. † Fewer physicians are included for eigenvector centrality (4,032 in column (1), 3,701 in column (2), 3,921 in column (3), and 3,812 in column (4)), because it is only defined in connected networks. ‡ Defined by the U.S. News ranking. High (low) level indicates above (below) the median. The p-value indicates whether the difference of the sample means differs significantly from zero.
Table 3.3: Effects of Referring Physicians' Network Characteristics on Provider Choice, Hospital+Surgeon Choice Set

                                            (1)         (2)         (3)         (4)         (5)
Hospital mortality                       -0.0532     -0.0908     -0.182      -0.0104     -0.159
                                         (0.0980)    (0.135)     (0.128)     (0.116)     (0.168)
Surgeon mortality                        -0.0677**   -0.0360     -0.0533     -0.000614    0.0407
                                         (0.0294)    (0.0391)    (0.0350)    (0.0358)    (0.0393)
Hospital mortality × I(High degree)                   0.0765                              0.0613
                                                     (0.201)                             (0.202)
Surgeon mortality × I(High degree)                   -0.0686                             -0.0656
                                                     (0.0643)                            (0.0612)
Hospital mortality × I(High clustering)                           0.309*                  0.300
                                                                 (0.186)                 (0.189)
Surgeon mortality × I(High clustering)                           -0.0351                 -0.0256
                                                                 (0.0587)                (0.0583)
Hospital mortality × I(High centrality)                                      -0.107      -0.114
                                                                             (0.154)     (0.149)
Surgeon mortality × I(High centrality)                                       -0.155***   -0.156***
                                                                             (0.0509)    (0.0502)
Number of observations                    102,554     102,554     102,554     102,554     102,554
Number of patients                         10,671      10,671      10,671      10,671      10,671

Note: All models control for a set of hospital characteristics (distance from patients' residence, number of beds, teaching status, ownership) and surgeon characteristics (experience, top-ranked medical school, gender). Coefficients are marginal effects from conditional logit models. Standard errors are clustered at the Hospital Referral Region by year level. Only the referring physicians in the connected component of the network are included. *** p<0.01, ** p<0.05, * p<0.1

Table 3.4: Effects of Referring Physicians' Network Characteristics on Provider Choice by Type of Admissions, Hospital+Surgeon Choice Set

                                            (1)         (2)         (3)         (4)         (5)
Panel A. Elective admissions
Hospital mortality                       -0.0503     -0.0571     -0.176       0.00507    -0.103
                                         (0.105)     (0.153)     (0.139)     (0.132)     (0.197)
Surgeon mortality                        -0.103***   -0.0859**   -0.102**    -0.0414     -0.0267
                                         (0.0326)    (0.0434)    (0.0406)    (0.0373)    (0.0412)
Hospital mortality × I(High degree)                   0.0175                             -0.0112
                                                     (0.211)                             (0.212)
Surgeon mortality × I(High degree)                   -0.0355                             -0.0359
                                                     (0.0693)                            (0.0670)
Hospital mortality × I(High clustering)                           0.310                   0.321
                                                                 (0.205)                 (0.206)
Surgeon mortality × I(High clustering)                            0.000449                0.00853
                                                                 (0.0625)                (0.0627)
Hospital mortality × I(High centrality)                                      -0.141      -0.160
                                                                             (0.171)     (0.167)
Surgeon mortality × I(High centrality)                                       -0.141**    -0.142**
                                                                             (0.0579)    (0.0583)
Number of observations                     78,846      78,846      78,846      78,846      78,846
Number of patients                          8,170       8,170       8,170       8,170       8,170

Panel B. Emergency admissions
Hospital mortality                       -0.0636     -0.193      -0.186      -0.0460     -0.272
                                         (0.140)     (0.172)     (0.182)     (0.165)     (0.203)
Surgeon mortality                         0.0239      0.0938*     0.0637      0.102*      0.212***
                                         (0.0380)    (0.0490)    (0.0476)    (0.0538)    (0.0639)
Hospital mortality × I(High degree)                   0.251                               0.284
                                                     (0.266)                             (0.277)
Surgeon mortality × I(High degree)                   -0.151*                             -0.150*
                                                     (0.0852)                            (0.0833)
Hospital mortality × I(High clustering)                           0.290                   0.198
                                                                 (0.242)                 (0.260)
Surgeon mortality × I(High clustering)                           -0.111                  -0.0982
                                                                 (0.0879)                (0.0887)
Hospital mortality × I(High centrality)                                      -0.0338     -0.0425
                                                                             (0.230)     (0.222)
Surgeon mortality × I(High centrality)                                       -0.183**    -0.196**
                                                                             (0.0857)    (0.0819)
Number of observations                     23,708      23,708      23,708      23,708      23,708
Number of patients                          2,501       2,501       2,501       2,501       2,501

Note: All models control for a set of hospital characteristics (distance from patients' residence, number of beds, teaching status, ownership) and surgeon characteristics (experience, top-ranked medical school, gender). Coefficients are marginal effects from conditional logit models. Standard errors are clustered at the Hospital Referral Region by year level. Only the referring physicians in the connected component of the network are included.
*** p<0.01, ** p<0.05, * p<0.1

Table 3.5: Effects of Referring Physicians' Network Characteristics on Provider Choice, Surgeon|Hospital Choice Set

                                            (1)         (2)         (3)         (4)         (5)
Surgeon mortality                        -0.0718**   -0.0817*    -0.0763**   -0.00764    -0.0148
                                         (0.0354)    (0.0440)    (0.0378)    (0.0446)    (0.0481)
Surgeon mortality × I(High degree)                    0.0221                              0.00694
                                                     (0.0606)                            (0.0617)
Surgeon mortality × I(High clustering)                            0.0153                  0.0129
                                                                 (0.0564)                (0.0585)
Surgeon mortality × I(High centrality)                                       -0.140**    -0.139**
                                                                             (0.0674)    (0.0677)
Number of observations                     26,705      26,705      26,705      26,705      26,705
Number of patients                          8,710       8,710       8,710       8,710       8,710

Note: All models control for a set of surgeon characteristics (experience, top-ranked medical school, gender). Coefficients are marginal effects from conditional logit models. Standard errors are clustered at the Hospital Referral Region by year level. Only the referring physicians in the connected component of the network are included. *** p<0.01, ** p<0.05, * p<0.1

Table 3.6: Effects of Referring Physicians' Network Characteristics on Provider Choice by Type of Admissions, Surgeon|Hospital Choice Set

                                            (1)         (2)         (3)         (4)         (5)
Panel A. Elective admissions
Surgeon mortality                        -0.0951**   -0.130***   -0.128***   -0.0256     -0.0846*
                                         (0.0398)    (0.0476)    (0.0448)    (0.0462)    (0.0460)
Surgeon mortality × I(High degree)                    0.0771                              0.0578
                                                     (0.0701)                            (0.0712)
Surgeon mortality × I(High clustering)                            0.103                   0.0963
                                                                 (0.0630)                (0.0618)
Surgeon mortality × I(High centrality)                                       -0.150*     -0.144*
                                                                             (0.0775)    (0.0764)
Number of observations                     20,991      20,991      20,991      20,991      20,991
Number of patients                          6,742       6,742       6,742       6,742       6,742

Panel B. Emergency admissions
Surgeon mortality                        -0.0109      0.0445      0.0438      0.0375      0.142
                                         (0.0496)    (0.0692)    (0.0519)    (0.0759)    (0.0894)
Surgeon mortality × I(High degree)                   -0.126                              -0.106
                                                     (0.0821)                            (0.0864)
Surgeon mortality × I(High clustering)                           -0.239**                -0.216*
                                                                 (0.106)                 (0.121)
Surgeon mortality × I(High centrality)                                       -0.109      -0.127
                                                                             (0.118)     (0.116)
Number of observations                      5,714       5,714       5,714       5,714       5,714
Number of patients                          1,968       1,968       1,968       1,968       1,968

Note: All models control for a set of surgeon characteristics (experience, top-ranked medical school, gender). Coefficients are marginal effects from conditional logit models. Standard errors are clustered at the Hospital Referral Region by year level. Only the referring physicians in the connected component of the network are included. *** p<0.01, ** p<0.05, * p<0.1

Table 3.7: Effects of Referring Physicians' Network Characteristics on Provider Choice by Years of Experience, Hospital+Surgeon Choice Set

                                            (1)         (2)         (3)         (4)         (5)
Panel A. Bottom Quartile (≤ 18 years)
Hospital mortality                       -0.107      -0.0720     -0.349*      0.106      -0.0533
                                         (0.171)     (0.257)     (0.211)     (0.206)     (0.348)
Surgeon mortality                        -0.0992*    -0.126*     -0.0933     -0.00793    -0.0451
                                         (0.0550)    (0.0743)    (0.0633)    (0.0642)    (0.0822)
Hospital mortality × I(High degree)                  -0.0635                             -0.119
                                                     (0.350)                             (0.345)
Surgeon mortality × I(High degree)                    0.0449                              0.0429
                                                     (0.109)                             (0.103)
Hospital mortality × I(High clustering)                           0.547*                  0.562*
                                                                 (0.312)                 (0.308)
Surgeon mortality × I(High clustering)                           -0.0225                  0.0231
                                                                 (0.103)                 (0.104)
Hospital mortality × I(High centrality)                                      -0.490      -0.522
                                                                             (0.342)     (0.347)
Surgeon mortality × I(High centrality)                                       -0.208*     -0.219*
                                                                             (0.108)     (0.116)
Number of observations                     22,323      22,323      22,323      22,323      22,323
Number of patients                          2,430       2,430       2,430       2,430       2,430

Panel B.
Top Quartile (≥ 32 years)
Hospital mortality                       -0.189      -0.237      -0.175      -0.309      -0.360
                                         (0.170)     (0.249)     (0.191)     (0.238)     (0.340)
Surgeon mortality                         0.0181      0.0416     -0.0167      0.0828      0.0767
                                         (0.0460)    (0.0657)    (0.0592)    (0.0524)    (0.0691)
Hospital mortality × I(High degree)                   0.0962                              0.142
                                                     (0.378)                             (0.383)
Surgeon mortality × I(High degree)                   -0.0527                             -0.0688
                                                     (0.118)                             (0.118)
Hospital mortality × I(High clustering)                          -0.0118                 -0.0361
                                                                 (0.405)                 (0.409)
Surgeon mortality × I(High clustering)                            0.0868                  0.0884
                                                                 (0.103)                 (0.105)
Hospital mortality × I(High centrality)                                       0.241       0.242
                                                                             (0.327)     (0.327)
Surgeon mortality × I(High centrality)                                       -0.151*     -0.147*
                                                                             (0.0815)    (0.0827)
Number of observations                     25,119      25,119      25,119      25,119      25,119
Number of patients                          2,557       2,557       2,557       2,557       2,557

Note: Years of experience are proxied by years after graduation. All models control for a set of hospital characteristics (distance from patients' residence, number of beds, teaching status, ownership) and surgeon characteristics (experience, top-ranked medical school, gender). Coefficients are marginal effects from conditional logit models. Standard errors are clustered at the Hospital Referral Region by year level. Only the referring physicians in the connected component of the network are included. *** p<0.01, ** p<0.05, * p<0.1

Table 3.8: Effects of Referring Physicians' Network Characteristics on Provider Choice by Urban/Rural Areas, Hospital+Surgeon Choice Set

                                            (1)         (2)         (3)         (4)         (5)
Panel A. Urban Hospital Referral Regions
Hospital mortality                       -0.146      -0.220      -0.297*     -0.138      -0.350
                                         (0.121)     (0.170)     (0.161)     (0.146)     (0.222)
Surgeon mortality                        -0.100**    -0.0910     -0.0837*    -0.0388     -0.00506
                                         (0.0412)    (0.0609)    (0.0497)    (0.0564)    (0.0605)
Hospital mortality × I(High degree)                   0.121                               0.117
                                                     (0.233)                             (0.238)
Surgeon mortality × I(High degree)                   -0.0170                             -0.0307
                                                     (0.0924)                            (0.0854)
Hospital mortality × I(High clustering)                           0.352                   0.347
                                                                 (0.228)                 (0.229)
Surgeon mortality × I(High clustering)                           -0.0351                 -0.0321
                                                                 (0.0941)                (0.0934)
Hospital mortality × I(High centrality)                                      -0.0303     -0.0408
                                                                             (0.181)     (0.178)
Surgeon mortality × I(High centrality)                                       -0.144      -0.148*
                                                                             (0.0877)    (0.0828)
Number of observations                     67,294      67,294      67,294      67,294      67,294
Number of patients                          6,211       6,211       6,211       6,211       6,211

Panel B. Rural Hospital Referral Regions
Hospital mortality                        0.145       0.0545      0.0583      0.242*      0.106
                                         (0.142)     (0.201)     (0.192)     (0.139)     (0.230)
Surgeon mortality                        -0.0350     -0.00120    -0.0264      0.0322      0.0590
                                         (0.0380)    (0.0470)    (0.0463)    (0.0407)    (0.0510)
Hospital mortality × I(High degree)                   0.253                               0.237
                                                     (0.352)                             (0.329)
Surgeon mortality × I(High degree)                   -0.0924                             -0.0709
                                                     (0.0837)                            (0.0800)
Hospital mortality × I(High clustering)                           0.230                   0.137
                                                                 (0.314)                 (0.308)
Surgeon mortality × I(High clustering)                           -0.0264                 -0.00820
                                                                 (0.0618)                (0.0606)
Hospital mortality × I(High centrality)                                      -0.241      -0.245
                                                                             (0.231)     (0.236)
Surgeon mortality × I(High centrality)                                       -0.153***   -0.150***
                                                                             (0.0486)    (0.0506)
Number of observations                     35,260      35,260      35,260      35,260      35,260
Number of patients                          4,460       4,460       4,460       4,460       4,460

Note: The urban/rural status is defined by the Rural-Urban Continuum Codes (RUCC) of Hospital Referral Regions (HRRs). All models control for a set of hospital characteristics (distance from patients' residence, number of beds, teaching status, ownership) and surgeon characteristics (experience, top-ranked medical school, gender). Coefficients are marginal effects from conditional logit models. Standard errors are clustered at the HRR by year level.
Only the referring physicians in the connected component of the network are included. *** p<0.01, ** p<0.05, * p<0.1

Table 3.9: Effects of Referring Physicians' Network Characteristics on Provider Choice by Patients' Household Income, Hospital+Surgeon Choice Set

                                            (1)         (2)         (3)         (4)         (5)
Panel A. Above Median Household Income (>$48,590)
Hospital mortality                       -0.0442     -0.0973     -0.223      -0.204      -0.405*
                                         (0.122)     (0.192)     (0.149)     (0.150)     (0.233)
Surgeon mortality                        -0.0927*    -0.0582     -0.0810     -0.0165      0.0318
                                         (0.0493)    (0.0653)    (0.0610)    (0.0712)    (0.0719)
Hospital mortality × I(High degree)                   0.0867                              0.0773
                                                     (0.250)                             (0.263)
Surgeon mortality × I(High degree)                   -0.0588                             -0.0686
                                                     (0.0946)                            (0.0889)
Hospital mortality × I(High clustering)                           0.420**                 0.384*
                                                                 (0.213)                 (0.224)
Surgeon mortality × I(High clustering)                           -0.0224                 -0.0103
                                                                 (0.0982)                (0.0958)
Hospital mortality × I(High centrality)                                       0.307       0.283
                                                                             (0.189)     (0.190)
Surgeon mortality × I(High centrality)                                       -0.161*     -0.165*
                                                                             (0.0929)    (0.0882)
Number of observations                     45,795      45,795      45,795      45,795      45,795
Number of patients                          4,553       4,553       4,553       4,553       4,553

Panel B. Below Median Household Income (≤$48,590)
Hospital mortality                       -0.0668     -0.0905     -0.153       0.115       0.00873
                                         (0.114)     (0.153)     (0.153)     (0.122)     (0.180)
Surgeon mortality                        -0.0525     -0.0313     -0.0364      0.00620     0.0372
                                         (0.0345)    (0.0461)    (0.0405)    (0.0356)    (0.0466)
Hospital mortality × I(High degree)                   0.0598                              0.0779
                                                     (0.246)                             (0.236)
Surgeon mortality × I(High degree)                   -0.0564                             -0.0478
                                                     (0.0701)                            (0.0675)
Hospital mortality × I(High clustering)                           0.212                   0.188
                                                                 (0.237)                 (0.233)
Surgeon mortality × I(High clustering)                           -0.0425                 -0.0342
                                                                 (0.0635)                (0.0643)
Hospital mortality × I(High centrality)                                      -0.467**    -0.466**
                                                                             (0.188)     (0.186)
Surgeon mortality × I(High centrality)                                       -0.144***   -0.145***
                                                                             (0.0527)    (0.0528)
Number of observations                     56,615      56,615      56,615      56,615      56,615
Number of patients                          6,102       6,102       6,102       6,102       6,102

Note: Patients' household income is proxied by the median household income of the zip code of residence, which is obtained from 2013 American Community Survey 5-year estimates. All models control for a set of hospital characteristics (distance from patients' residence, number of beds, teaching status, ownership) and surgeon characteristics (experience, top-ranked medical school, gender). Coefficients are marginal effects from conditional logit models. Standard errors are clustered at the Hospital Referral Region by year level. Only the referring physicians in the connected component of the network are included. *** p<0.01, ** p<0.05, * p<0.1

Table 3.10: Effects of Referring Physicians' Network Characteristics on Provider Choice by Patients' Severity, Hospital+Surgeon Choice Set

                                            (1)         (2)         (3)         (4)         (5)
Panel A. More Severe (Charlson Comorbidity Index ≥ 1)
Hospital mortality                       -0.0582     -0.367      -0.147      -0.0352     -0.426
                                         (0.154)     (0.268)     (0.201)     (0.195)     (0.296)
Surgeon mortality                        -0.0447      0.00223     0.0226      0.0499      0.180**
                                         (0.0574)    (0.0718)    (0.0632)    (0.0727)    (0.0870)
Hospital mortality × I(High degree)                   0.553*                              0.627**
                                                     (0.313)                             (0.301)
Surgeon mortality × I(High degree)                   -0.0938                             -0.114
                                                     (0.126)                             (0.125)
Hospital mortality × I(High clustering)                           0.222                   0.194
                                                                 (0.305)                 (0.295)
Surgeon mortality × I(High clustering)                           -0.174                  -0.171
                                                                 (0.141)                 (0.144)
Hospital mortality × I(High centrality)                                      -0.0737     -0.143
                                                                             (0.301)     (0.288)
Surgeon mortality × I(High centrality)                                       -0.291**    -0.310**
                                                                             (0.139)     (0.136)
Number of observations                     17,427      17,427      17,427      17,427      17,427
Number of patients                          1,870       1,870       1,870       1,870       1,870

Panel B.
Less Severe (Charlson Comorbidity Index < 1)
Hospital mortality: -0.0528 (0.110); -0.0268 (0.142); -0.186 (0.138); -0.00337 (0.126); -0.0859 (0.176)
Surgeon mortality: -0.0719** (0.0326); -0.0420 (0.0437); -0.0700* (0.0396); -0.0154 (0.0379); 0.0125 (0.0427)
Hospital mortality × I(High degree): -0.0390 (0.213); -0.0769 (0.214)
Surgeon mortality × I(High degree): -0.0642 (0.0666); -0.0646 (0.0649)
Hospital mortality × I(High clustering): 0.329* (0.195); 0.341* (0.196)
Surgeon mortality × I(High clustering): -0.000702 (0.0598); 0.0109 (0.0605)
Hospital mortality × I(High centrality): -0.118 (0.162); -0.130 (0.155)
Surgeon mortality × I(High centrality): -0.124** (0.0549); -0.126** (0.0549)
Number of observations: 85,127; Number of patients: 8,801 (identical across columns)

Note: All models control for a set of hospital characteristics (distance from patients’ residence, number of beds, teaching status, ownership) and surgeon characteristics (experience, top-ranked medical school, gender). Coefficients are marginal effects from conditional logit models. Standard errors are clustered at the Hospital Referral Region by year level. Only the referring physicians in the connected component of the network are included. *** p<0.01, ** p<0.05, * p<0.1

Bibliography

ACEMOGLU, D. and FINKELSTEIN, A. (2008). Input and technology choices in regulated industries: Evidence from the health care sector. Journal of Political Economy, 116 (5).
AFENDULIS, C. C. and KESSLER, D. P. (2007). Tradeoffs from integrating diagnosis and treatment in markets for health care. American Economic Review, 97 (3), 1013–1020.
ALATAS, V., BANERJEE, A., CHANDRASEKHAR, A. G., HANNA, R. and OLKEN, B. A. (2016). Network structure and the aggregation of information: Theory and evidence from Indonesia. American Economic Review, 106 (7), 1663–1704.
ARROW, K. J. (1963). Uncertainty and the welfare economics of medical care. American Economic Review, 53 (5), 941–973.
AUSTIN, P. C. and TU, J. V. (2006).
Comparing clinical data with administrative data for producing acute myocardial infarction report cards. Journal of the Royal Statistical Society: Series A (Statistics in Society), 169 (1), 115–126.
BAILEY, M., CAO, R., KUCHLER, T. and STROEBEL, J. (2018). The economic effects of social networks: Evidence from the housing market. Journal of Political Economy.
BAKER, L. C., BUNDORF, M. K. and KESSLER, D. P. (2016). The effect of hospital/physician integration on hospital choice. Journal of Health Economics, 50, 1–8.
BANDIERA, O. and RASUL, I. (2006). Social networks and technology adoption in Northern Mozambique. The Economic Journal, 116 (514), 869–902.
BANERJEE, A., CHANDRASEKHAR, A. G., DUFLO, E. and JACKSON, M. O. (2013). The diffusion of microfinance. Science, 341 (6144), 1236498.
—, —, — and — (2014). Gossip: Identifying central individuals in a social network, National Bureau of Economic Research.
BARNETT, M. L., CHRISTAKIS, N. A., O’MALLEY, A. J., ONNELA, J.-P., KEATING, N. L. and LANDON, B. E. (2012). Physician patient-sharing networks and the cost and intensity of care in US hospitals. Medical Care, 50 (2), 152.
—, LANDON, B. E., O’MALLEY, A. J., KEATING, N. L. and CHRISTAKIS, N. A. (2011). Mapping physician networks with self-reported and administrative data. Health Services Research, 46 (5), 1592–1609.
BARROS, P. and BRAUN, G. (2017). Upcoding in a National Health Service: The evidence from Portugal. Health Economics, 26 (5), 600–618.
BAZZOLI, G. J., LINDROOTH, R. C., HASNAIN-WYNIA, R. and NEEDLEMAN, J. (2004). The Balanced Budget Act of 1997 and US hospital operations. Inquiry, 41 (4), 401–417.
BECKER, G. (1968). Crime and punishment: An economic approach. Journal of Political Economy, 76 (2), 169–217.
BYRNE, B., MAMIDANNA, R., VINCENT, C. and FAIZ, O. (2013). Population-based cohort study comparing 30- and 90-day institutional mortality rates after colorectal surgery. British Journal of Surgery, 100 (13), 1810–1817.
CAI, J., DE JANVRY, A.
and SADOULET, E. (2015). Social networks and the decision to insure. American Economic Journal: Applied Economics, 7 (2), 81–108.
CAMPBELL, K. H., SMITH, S. G., HEMMERICH, J., STANKUS, N., FOX, C., MOLD, J. W., O’HARE, A. M., CHIN, M. H. and DALE, W. (2011). Patient and provider determinants of nephrology referral in older adults with severe chronic kidney disease: A survey of provider decision making. BMC Nephrology, 12 (1), 47.
CARLIN, C. S., FELDMAN, R. and DOWD, B. (2016). The impact of hospital acquisition of physician practices on referral patterns. Health Economics, 25 (4), 439–454.
CASALINO, L. P., PESKO, M. F., RYAN, A. M., NYWEIDE, D. J., IWASHYNA, T. J., SUN, X., MENDELSOHN, J. and MOODY, J. (2015). Physician networks and ambulatory care-sensitive admissions. Medical Care, 53 (6), 534–541.
CHALFIN, A. and MCCRARY, J. (2017). Criminal deterrence: A review of the literature. Journal of Economic Literature, 55 (1), 5–48.
CHALKLEY, M. and MALCOMSON, J. M. (2000). Government purchasing of health services. Handbook of Health Economics, 1, 847–890.
CHANDRA, A., FINKELSTEIN, A., SACARNY, A. and SYVERSON, C. (2016). Health care exceptionalism? Performance and allocation in the US health care sector. American Economic Review, 106 (8), 2110–2144.
CHARLSON, M. E., POMPEI, P., ALES, K. L. and MACKENZIE, C. R. (1987). A new method of classifying prognostic comorbidity in longitudinal studies: Development and validation. Journal of Chronic Diseases, 40 (5), 373–383.
CHEN, A., BLUMENTHAL, D. M. and JENA, A. B. (2018). Characteristics of physicians excluded from US Medicare and state public insurance programs for fraud, health crimes, or unlawful prescribing of controlled substances. JAMA Network Open, 1 (8), e185805–e185805.
— and GOLDMAN, D. (2016). Health care spending: Historical trends and new directions. Annual Review of Economics, 8 (1), 291–319.
CHEN, Y. (2011). Why are health care report cards so bad (good)? Journal of Health Economics, 30 (3), 575–590.
CHERNEW, M. and SCANLON, D. P. (1998). Health plan report cards and insurance choice. Inquiry, 35 (1), 9–22.
CHRISTAKIS, N. A. and FOWLER, J. H. (2007). The spread of obesity in a large social network over 32 years. New England Journal of Medicine, 357 (4), 370–379.
CLARK, J. R. and HUCKMAN, R. S. (2012). Broadening focus: Spillovers, complementarities, and specialization in the hospital industry. Management Science, 58 (4), 708–722.
CLEMENS, J. and GOTTLIEB, J. D. (2014). Do physicians’ financial incentives affect medical treatment and patient health? American Economic Review, 104 (4), 1320–1349.
CONLEY, T. G. and UDRY, C. R. (2010). Learning about a new technology: Pineapple in Ghana. American Economic Review, 100 (1), 35–69.
COOPER, Z., KOWALSKI, A. E., POWELL, E. N. and WU, J. (2017). Politics, hospital behavior, and health care spending, National Bureau of Economic Research.
CURRIE, J. and MACLEOD, W. B. (2008). First do no harm? Tort reform and birth outcomes. Quarterly Journal of Economics, 123 (2), 795–830.
—, — and VAN PARYS, J. (2016). Provider practice style and patient health outcomes: The case of heart attacks. Journal of Health Economics, 47, 64–80.
CUTLER, D. M. (1995). The incidence of adverse medical outcomes under prospective payment. Econometrica, 63 (1), 29–50.
DAFNY, L. and DRANOVE, D. (2008). Do report cards tell consumers anything they don’t already know? The case of Medicare HMOs. RAND Journal of Economics, 39 (3), 790–821.
DAFNY, L. S. (2005). How do hospitals respond to price changes? American Economic Review, 95 (5), 1525–1547.
DANZON, P. M. (2000). Liability for medical malpractice. Handbook of Health Economics, 1, 1339–1404.
DESHARNAIS, S., KOBRINSKI, E., CHESNEY, J., LONG, M., AMENT, R. and FLEMING, S. (1987). The early effects of the prospective payment system on inpatient utilization and the quality of care. Inquiry, 24 (1), 7–16.
DESHARNAIS, S. I., WROBLEWSKI, R. and SCHUMACHER, D. (1990).
How the Medicare prospective payment system affects psychiatric patients treated in short-term general hospitals. Inquiry, 27 (4), 382–388.
DIMICK, J. B., BIRKMEYER, N. J., FINKS, J. F., SHARE, D. A., ENGLISH, W. J., CARLIN, A. M. and BIRKMEYER, J. D. (2014). Composite measures for profiling hospitals on bariatric surgery performance. JAMA Surgery, 149 (1), 10–16.
DRANOVE, D. and SFEKAS, A. (2008). Start spreading the news: A structural estimate of the effects of New York hospital report cards. Journal of Health Economics, 27 (5), 1201–1207.
DUFLO, E. and SAEZ, E. (2003). The role of information and social interactions in retirement plan decisions: Evidence from a randomized experiment. Quarterly Journal of Economics, 118 (3), 815–842.
ELLIS, R. P. and MCGUIRE, T. G. (1996). Hospital response to prospective payment: Moral hazard, selection, and practice-style effects. Journal of Health Economics, 15 (3), 257–277.
EPSTEIN, A. J. (2010). Effects of report cards on referral patterns to cardiac surgeons. Journal of Health Economics, 29 (5), 718–731.
FELDSTEIN, M. (1977). Quality change and the demand for hospital care. Econometrica, 45 (7), 1681–1702.
FERNANDEZ, R. M., CASTILLA, E. J. and MOORE, P. (2000). Social capital at work: Networks and employment at a phone center. American Journal of Sociology, 105 (5), 1288–1356.
FINGAR, K. R., STOCKS, C., WEISS, A. J. and STEINER, C. (2014). Most frequent operating room procedures performed in US hospitals, 2003–2012: Healthcare Cost and Utilization Project Statistical Brief 186. Tech. rep.
FRANK, R. G. and LAVE, J. R. (1989). A comparison of hospital responses to reimbursement policies for Medicaid psychiatric patients. RAND Journal of Economics, 20 (4), 588–600.
FREEMAN, R. B. (1999). The economics of crime. Handbook of Labor Economics, 3, 3529–3571.
FREIMAN, M. P., ELLIS, R. P. and MCGUIRE, T. G. (1989).
Provider response to Medicare’s PPS: Reductions in length of stay for psychiatric patients treated in scatter beds. Inquiry, 26 (2), 192–201.
GANI, F., MAKARY, M. A. and PAWLIK, T. M. (2017). The price of surgery: Markup of operative procedures in the United States. Journal of Surgical Research, 208, 192–197.
GAYNOR, M., PROPPER, C. and SEILER, S. (2016). Free to choose? Reform, choice, and consideration sets in the English National Health Service. American Economic Review, 106 (11), 3521–57.
GEE, L. K., JONES, J. and BURKE, M. (2017). Social networks and labor markets: How strong ties relate to job finding on Facebook’s social network. Journal of Labor Economics, 35 (2), 485–518.
GHAFERI, A. A., BIRKMEYER, J. D. and DIMICK, J. B. (2011). Hospital volume and failure to rescue with high-risk surgery. Medical Care, 49 (12), 1076–1081.
GHALI, W. A., ASH, A. S., HALL, R. E. and MOSKOWITZ, M. A. (1997). Statewide quality improvement initiatives and mortality after cardiac surgery. JAMA, 277 (5), 379–382.
—, QUAN, H. and BRANT, R. (2001). Risk adjustment using administrative data. Journal of General Internal Medicine, 16 (8), 519–524.
GOYAL, S., VAN DER LEIJ, M. J. and MORAGA-GONZÁLEZ, J. L. (2006). Economics: An emerging small world. Journal of Political Economy, 114 (2), 403–412.
HAAS-WILSON, D. (1994). The relationships between the dimensions of health care quality and price: The case of eye care. Medical Care, 32 (2), 175–182.
HACKL, F., HUMMER, M. and PRUCKNER, G. J. (2015). Old boys’ network in general practitioners’ referral behavior? Journal of Health Economics, 43, 56–73.
HANAUER, D. A., ZHENG, K., SINGER, D. C., GEBREMARIAM, A. and DAVIS, M. M. (2014). Public awareness, perception, and use of online physician rating sites. JAMA, 311 (7), 734–735.
HANNAN, E. L., STONE, C. C., BIDDLE, T. L. and DEBUONO, B. A. (1997). Public release of cardiac surgery outcomes data in New York: What do New York state cardiologists think of it?
American Heart Journal, 134 (1), 55–61.
HAWKINS, R. B., MEHAFFEY, J. H., YOUNT, K. W., YARBORO, L. T., FONNER, C., KRON, I. L., QUADER, M., SPEIR, A., RICH, J., AILAWADI, G. et al. (2018). Coronary artery bypass grafting bundled payment proposal will have significant financial impact on hospitals. Journal of Thoracic and Cardiovascular Surgery, 155 (1), 182–188.
HIBBARD, J. H. and JEWETT, J. J. (1997). Will quality report cards help consumers? Health Affairs, 16 (3), 218–228.
HO, V. and HAMILTON, B. H. (2000). Hospital mergers and acquisitions: Does market consolidation harm patients? Journal of Health Economics, 19 (5), 767–791.
HODGKIN, D. and MCGUIRE, T. G. (1994). Payment levels and hospital response to prospective payment. Journal of Health Economics, 13 (1), 1–29.
HOLMAN, W. L., ALLMAN, R. M., SANSOM, M., KIEFE, C. I., PETERSON, E. D., ANSTROM, K. J., SANKEY, S. S., HUBBARD, S. G., SHERRILL, R. G., GROUP, A. C. S. et al. (2001). Alabama coronary artery bypass grafting project: Results of a statewide quality improvement initiative. JAMA, 285 (23), 3003–3010.
HUCKMAN, R. S. and PISANO, G. P. (2006). The firm specificity of individual performance: Evidence from cardiac surgery. Management Science, 52 (4), 473–488.
HURLEY, J. (2000). An overview of the normative economics of the health sector. Handbook of Health Economics, 1, 55–118.
IVERSEN, T. and LURÅS, H. (2000). The effect of capitation on GPs’ referral decisions. Health Economics, 9 (3), 199–210.
JANULEVICIUTE, J., ASKILDSEN, J. E., KAARBOE, O., SICILIANI, L. and SUTTON, M. (2016). How do hospitals respond to price changes? Evidence from Norway. Health Economics, 25 (5), 620–636.
JENA, A. B., SEABURY, S., LAKDAWALLA, D. and CHANDRA, A. (2011). Malpractice risk according to physician specialty. New England Journal of Medicine, 365 (7), 629–636.
KACHALIA, A. and MELLO, M. (2011). New directions in medical liability reform. New England Journal of Medicine, 364 (16), 1564.
KC, D. S. and TERWIESCH, C.
(2011). The effects of focus on performance: Evidence from California hospitals. Management Science, 57 (11), 1897–1912.
KESSLER, D. and MCCLELLAN, M. (1996). Do doctors practice defensive medicine? Quarterly Journal of Economics, 111 (2), 353–390.
KESSLER, D. P. (2011). Evaluating the medical malpractice system and options for reform. Journal of Economic Perspectives, 25 (2), 93–110.
LANDON, B. E., KEATING, N. L., BARNETT, M. L., ONNELA, J.-P., PAUL, S., O’MALLEY, A. J., KEEGAN, T. and CHRISTAKIS, N. A. (2012). Variation in patient-sharing networks of physicians across the United States. JAMA, 308 (3), 265–273.
—, ONNELA, J.-P., KEATING, N. L., BARNETT, M. L., PAUL, S., O’MALLEY, A. J., KEEGAN, T. and CHRISTAKIS, N. A. (2013). Using administrative data to identify naturally occurring networks of physicians. Medical Care, 51 (8), 715.
LU, S. F. and RUI, H. (2017). Can we trust online physician ratings? Evidence from cardiac surgeons in Florida. Management Science.
MANTON, K. G., WOODBURY, M. A., VERTREES, J. C. and STALLARD, E. (1993). Use of Medicare services before and after introduction of the prospective payment system. Health Services Research, 28 (3), 269–292.
MARINOSO, B. G. and JELOVAC, I. (2003). GPs’ payment contracts and their referral practice. Journal of Health Economics, 22 (4), 617–635.
MCFADDEN, D. (1974). Conditional logit analysis of qualitative choice behavior. New York: Academic Press.
MCWILLIAMS, J. M., CHERNEW, M. E., ZASLAVSKY, A. M., HAMED, P. and LANDON, B. E. (2013). Delivery system integration and health care spending and quality for Medicare beneficiaries. JAMA Internal Medicine, 173 (15), 1447–1456.
MELLO, M. M. and BRENNAN, T. A. (2001). Deterrence of medical errors: Theory and evidence for malpractice reform. Texas Law Review, 80, 1595.
MENNEMEYER, S. T., MORRISEY, M. A. and HOWARD, L. Z. (1997). Death and reputation: How consumers acted upon HCFA mortality information. Inquiry, 34 (2), 117–128.
MERCHANT, R. M., VOLPP, K. G.
and ASCH, D. A. (2016). Learning by listening: Improving health care in the era of Yelp. JAMA, 316 (23), 2483–2484.
MUKAMEL, D. B. and BROWER, C. A. (1998). The influence of risk adjustment methods on conclusions about quality of care in nursing homes based on outcome measures. The Gerontologist, 38 (6), 695–703.
— and MUSHLIN, A. I. (1998). Quality of care information makes a difference: An analysis of market share and price changes after publication of the New York State Cardiac Surgery Mortality Reports. Medical Care, 36 (7), 945–954.
—, WEIMER, D. L. and MUSHLIN, A. I. (2006). Referrals to high-quality cardiac surgeons: Patients’ race and characteristics of their physicians. Health Services Research, 41 (4p1), 1276–1295.
—, —, ZWANZIGER, J., GORTHY, S.-F. H. and MUSHLIN, A. I. (2004). Quality report cards, selection of cardiac surgeons, and racial disparities: A study of the publication of the New York State Cardiac Surgery Reports. Inquiry, 41 (4), 435–446.
—, —, — and MUSHLIN, A. I. (2002). Quality of cardiac surgeons and managed care contracting practices. Health Services Research, 37 (5), 1129–1144.
MUNSHI, K. (2004). Social learning in a heterogeneous population: Technology diffusion in the Indian Green Revolution. Journal of Development Economics, 73 (1), 185–213.
NAKAMURA, S. (2010). Hospital mergers and referrals in the United States: Patient steering or integrated delivery of care? Inquiry, 47 (3), 226–241.
NEW YORK STATE DEPARTMENT OF HEALTH (1997). New York State Cardiac Surgery Report.
NEWHOUSE, J. P. (1998). Risk adjustment: Where are we now? Inquiry, 35 (2), 122–131.
ORTIZ, F., MBAI, M., ADABAG, S., GARCIA, S., NGUYEN, J., GOLDMAN, S., WARD, H. B., KELLY, R. F., CARLSON, S., HOLMAN, W. L. et al. (2018). Utility of nuclear stress imaging in predicting long-term outcomes one-year post CABG surgery. Journal of Nuclear Cardiology, pp. 1–9.
POLLACK, C. E., WANG, H., BEKELMAN, J. E., WEISSMAN, G., EPSTEIN, A. J., LIAO, K., DUGOFF, E. H.
and ARMSTRONG, K. (2014). Physician social networks and variation in rates of complications after radical prostatectomy. Value in Health, 17 (5), 611–618.
RANARD, B. L., WERNER, R. M., ANTANAVICIUS, T., SCHWARTZ, H. A., SMITH, R. J., MEISEL, Z. F., ASCH, D. A., UNGAR, L. H. and MERCHANT, R. M. (2016). Yelp reviews of hospital care can supplement and inform traditional surveys of the patient experience of care. Health Affairs, 35 (4), 697–705.
RUTHER, M. and BLACK, C. (1987). Medicare use and cost of short-stay hospital services by enrollees with cataract, 1984. Health Care Financing Review, 9 (2), 91–99.
SCHNEIDER, E. C. and EPSTEIN, A. M. (1996). Influence of cardiac-surgery performance reports on referral practices and access to care: A survey of cardiovascular specialists. New England Journal of Medicine, 335 (4), 251–256.
SCHWARTZ, L. M., WOLOSHIN, S. and BIRKMEYER, J. D. (2005). How do elderly patients decide where to go for major surgery? Telephone interview survey. BMJ, 331 (7520), 821.
SCHWARTZ, W. B. and KOMESAR, N. K. (1978). Doctors, damages and deterrence: An economic view of medical malpractice. New England Journal of Medicine, 298 (23), 1282–1289.
SERRUYS, P. W., MORICE, M.-C., KAPPETEIN, A. P., COLOMBO, A., HOLMES, D. R., MACK, M. J., STÅHLE, E., FELDMAN, T. E., VAN DEN BRAND, M., BASS, E. J. et al. (2009). Percutaneous coronary intervention versus coronary-artery bypass grafting for severe coronary artery disease. New England Journal of Medicine, 360 (10), 961–972.
SESHAMANI, M., SCHWARTZ, J. S. and VOLPP, K. G. (2006). The effect of cuts in Medicare reimbursement on hospital mortality. Health Services Research, 41 (3p1), 683–700.
SHEN, Y.-C. (2003). The effect of financial pressure on the quality of care in hospitals. Journal of Health Economics, 22 (2), 243–269.
— and WU, V. Y. (2013). Reductions in Medicare payments and patient outcomes: An analysis of 5 leading Medicare conditions. Medical Care, 51 (11), 970–977.
SILVERMAN, E.
and SKINNER, J. (2004). Medicare upcoding and hospital ownership. Journal of Health Economics, 23 (2), 369–389.
STUDDERT, D. M., MELLO, M. M., SAGE, W. M., DESROCHES, C. M., PEUGH, J., ZAPERT, K. and BRENNAN, T. A. (2005). Defensive medicine among high-risk specialist physicians in a volatile malpractice environment. JAMA, 293 (21), 2609–2617.
WANG, J., HOCKENBERRY, J., CHOU, S.-Y. and YANG, M. (2011). Do bad report cards have consequences? Impacts of publicly reported provider quality information on the CABG market in Pennsylvania. Journal of Health Economics, 30 (2), 392–407.
WATTS, D. J. and STROGATZ, S. H. (1998). Collective dynamics of ‘small-world’ networks. Nature, 393 (6684), 440.
WU, V. Y. and SHEN, Y.-C. (2014). Long-term impact of Medicare payment reductions on patient outcomes. Health Services Research, 49 (5), 1596–1615.

Appendix A
Appendix to Chapter 1

Exempted hospitals/counties/states

Although the switch from MSA to CBSA occurred at the national level, the government allowed several exceptions at the individual hospital, county, and state levels. First, individual hospitals can apply to the Medicare Geographic Classification Review Board (MGCRB) for reclassification of their payment area (both under the MSA and CBSA). To qualify for the reclassification, hospitals must be adjacent to the area to which they are to be reclassified and must share similar characteristics with the hospitals located in that area. As such, the hospitals that were approved for reclassification in 2004 and 2008 (the years at which the treatment variable of this study, ΔWI_08-04, is defined) can be systematically different from the rest of the hospitals in that payment area. Second, Section 508 of the Medicare Modernization Act (MMA) introduced an exception that allowed hospitals not approved by the MGCRB to be reclassified from 2004 to 2007 if they met certain criteria.
Section 505 of the MMA stipulated another exception, which authorizes an upward adjustment of the wage index for hospitals located in counties where at least 10% of the hospital employees commute to areas with a higher wage index. Third, hospitals in Lugar counties (counties that are proximate to more than one metropolitan area and from which over a quarter of residents commute to those areas) also qualify for an upward adjustment to their wage index. Lastly, five states (Montana, Nevada, North Dakota, South Dakota, and Wyoming) are designated as Frontier states, and hospitals located in these states cannot have a wage index lower than 1.0. I repeat the main analysis without these exempted cases, and the results remain qualitatively the same, showing that an overall increase in the price does not lead to a change in total supply, input use, or quality of services. These results are available upon request.

Figures

Figure A.1: Percentage Changes of Geographic Adjustment Factor
(a) Urban sample (b) Rural sample
Notes: Each box stretches from the 25th percentile to the 75th percentile, the median is shown across the box, and the horizontal strokes at the ends of the vertical lines represent the upper and lower adjacent values (extreme outliers are not plotted). The unit of analysis in this figure is individual hospitals. Switched hospitals in the urban (rural) sample are those that were urban (rural) in 2004 but reclassified as rural (urban) in 2008 (2005). There are 92 switchers and 1,796 stayers in the urban sample, and 71 switchers and 718 stayers in the rural sample.

Tables

Table A.1: Summary Statistics for Hospital Characteristics at Baseline (2004)
Entries are means, with standard deviations in parentheses; N = 3,109 for all rows.

Market concentration
  Herfindahl-Hirschman Index†: 0.30 (0.26)
Teaching status
  Resident-to-bed ratio: 0.06 (0.15)
Patient composition
  Share of DSH patients‡: 0.25 (0.17)
  Share of Medicare inpatient days: 0.49 (0.15)
Ownership
  For-profit: 0.15 (0.36)
  Nonprofit: 0.65 (0.48)
  Government: 0.20 (0.40)
Size
  Beds < 100: 0.34 (0.47)
  100 ≤ Beds < 200: 0.33 (0.47)
  Beds ≥ 200: 0.34 (0.47)
Region
  West: 0.19 (0.39)
  Midwest: 0.22 (0.42)
  Northeast: 0.32 (0.47)
  South: 0.27 (0.44)
† Defined at each health service area (HSA) based on the actual flow of patients; it is not significantly different from the index based on hospital capacity.
‡ DSH denotes disproportionate share.

Appendix B
Appendix to Chapter 2

Figures

Figure B.1: Gradient of Deterrence Response, 75th to 90th versus >90th Percentiles of Predicted Exclusion Probability
(a) Total Charges (b) Total Claims

Figure B.2: Using an Earlier Window (Years 3-5) to Predict Exclusions, >75th Percentile Predicted Exclusion Probability
(a) Total Charges (b) Total Claims
Notes: We calculate predicted exclusion probabilities using the 2 to 4 (blue) versus 3 to 5 (red) year period prior to exclusion. Using the respective predicted probabilities, we identify physicians with predicted probabilities greater than the 75th percentile and examine responses.

Figure B.3: Evidence around Dates of Indictment
(a) Total Charges (b) Total Claims
Notes: Exact dates come from 2011 to 2012 text searches. In the 75th percentile subsample, we have 2,318 observations for 90 physicians.
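The subsample construction described in the figure notes, taking physicians above the 75th percentile of predicted exclusion probability, can be sketched in a few lines. This is a hypothetical illustration rather than the authors' code; the function name `flag_high_risk` and the toy physician IDs are invented for the example, and in practice the percentile cutoff would be computed within the full estimation sample.

```python
from statistics import quantiles

def flag_high_risk(pred_probs, pct=75):
    """Return the IDs of physicians whose predicted exclusion
    probability lies strictly above the pct-th percentile."""
    # quantiles(..., n=100) returns the 99 percentile cut points
    cut = quantiles(pred_probs.values(), n=100)[pct - 1]
    return {pid for pid, p in pred_probs.items() if p > cut}

# Toy example with made-up physician IDs and predicted probabilities
probs = {"A": 0.01, "B": 0.02, "C": 0.05, "D": 0.40}
high_risk = flag_high_risk(probs)
```

The same helper with `pct=90` would carve out the >90th-percentile group used for the gradient comparison in Figure B.1.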
Tables

Table B.1: Gradients of Deterrence Response, 75th to 90th versus >90th Percentiles of Predicted Exclusion Probability
Columns: (1) Total Charges; (2) Charges per Patient; (3) Price per Claim; (4) Total Claims; (5) Claims per Patient; (6) Number of Patients. Standard errors in parentheses.

Panel A: 75th–90th Percentile
Pre-Period 3-4: -69.94 (87.33); 1.518 (2.570); 0.210 (0.919); 0.381 (1.099); 0.0418 (0.0317); -0.366** (0.173)
Post-Indictment: -420.0*** (129.8); -10.76*** (2.412); -1.848** (0.895); -3.205** (1.427); -0.090*** (0.0277); -0.0862 (0.219)
Post-Exclusion: -805.1*** (250.6); -17.15*** (3.522); -3.269** (1.476); -6.170* (3.401); -0.0766 (0.0917); -0.507 (0.413)
Magnitude of Response (%), Indictment: -5.32; -3.16; -2.01; -4.35; -2.65; -0.32
Magnitude of Response (%), Exclusion: -10.20; -6.09; -3.56; -6.93; -2.26; -1.91
No. of Obs.: 276,812 in all columns
Mean Dep. Var.: 7895.57; 247.44; 91.77; 101.28; 3.39; 26.54

Panel B: >90th Percentile
Pre-Period 3-4: -229.3 (151.5); -2.102 (8.152); -2.465 (2.082); -0.0420 (0.0627); -4.297 (3.543); -0.639* (0.374)
Post-Indictment: -995.6*** (182.7); -27.28*** (7.262); -5.427** (2.239); -0.174*** (0.0666); -4.860 (3.361); -0.578 (0.430)
Post-Exclusion: -1,859*** (293.3); -33.60*** (8.176); -8.677* (4.929); -0.0763 (0.132); -7.866* (4.076); -2.240*** (0.553)
Magnitude of Response (%), Indictment: -7.55; -3.29; -7.73; -3.56; -5.02; -1.42
Magnitude of Response (%), Exclusion: -14.10; -5.27; -9.52; -1.56; -8.12; -5.49
No. of Obs.: 197,404 in all columns
Mean Dep. Var.: 13188; 353; 164.77; 4.89; 96.84; 40.78

Notes: Columns (1) and (4) correspond to estimates shown in Appendix Figure B.1. The reference period is the first quarter of pre-period year 3. Standard errors clustered by TIN shown in parentheses. *** p<0.01, ** p<0.05, * p<0.1.
Table B.2: Using an Earlier Window (Years 3-5) to Predict Exclusions, >75th Percentile of Predicted Exclusion Probability
Columns: (1) Total Charges; (2) Charges per Patient; (3) Price per Claim; (4) Total Claims; (5) Claims per Patient; (6) Number of Patients. Standard errors in parentheses.

Pre-Period Year 5: -216.1* (113.5); -2.486 (1.612); 0.428 (0.910); 0.251 (5.221); -0.0284 (0.0370); -0.357 (0.261)
Pre-Period Year 3: 288.7*** (84.36); -0.286 (1.459); 1.737** (0.737); 1.693 (3.889); -0.0309 (0.0357); 0.706*** (0.215)
Post-Indictment
  Pre-Period Year 2: -380.9*** (143.5); -5.740** (2.830); 0.394 (1.055); -13.39*** (4.053); -0.117** (0.0542); -0.0179 (0.368)
  Pre-Period Year 1: -674.7*** (204.4); -7.317* (3.938); -0.105 (1.785); -15.98*** (4.315); -0.157** (0.0698); -0.296 (0.512)
Post-Exclusion
  Post Year 1: -925.5*** (246.7); -9.850* (5.290); -2.955* (1.525); -19.40*** (5.729); -0.0789 (0.106); -0.911 (0.653)
  Post Year 2: -1,468*** (398.0); -13.31* (7.772); -2.309 (2.450); -18.62** (8.075); 0.00178 (0.196); -1.995** (0.903)
No. of Obs.: 430,336 in all columns
R-Squared: 0.828; 0.764; 0.726; 0.700; 0.617; 0.552

Notes: Columns (1) and (4) correspond to estimates shown in Appendix Figure B.2. The reference period is pre-period year 4. Standard errors clustered by TIN shown in parentheses. *** p<0.01, ** p<0.05, * p<0.1.

Table B.3: Sensitivity Analysis Using Random Assignment of Dates and TINs, >75th Percentile of Predicted Exclusion Probability
Columns: (1) Total Charges; (2) Charges per Patient; (3) Total Claims; (4) Claims per Patient; (5) Price per Claim; (6) Number of Patients. Standard errors in parentheses.

Random Dates, Using Event-Study Sample
Pre-Period: 148.0 (177.6); 5.926** (2.956); 1.292 (4.480); -0.0447 (0.0501); 0.708 (0.876); 0.920* (0.506)
1(Investigation): -172.3 (231.1); -1.025 (3.794); -8.933 (10.15); -0.0325 (0.0748); -2.086 (2.247); -0.102 (0.705)
1(Exclusion): -879.8 (748.4); 1.894 (6.069); -56.78 (53.98); -0.0336 (0.117); -10.31 (9.975); 1.203 (1.619)
No. of Obs.:
542,211 in all columns
R-Squared: 0.881; 0.508; 0.889; 0.620; 0.879; 0.493

Notes: For each physician, dates of exclusion are randomly assigned. Standard errors clustered by TIN shown in parentheses. *** p<0.01, ** p<0.05, * p<0.1.

Appendix C
Appendix to Chapter 3

Data Appendix

Table C.1: DRG Codes and Descriptions
231: Coronary bypass w PTCA w MCC
232: Coronary bypass w PTCA w/o MCC
233: Coronary bypass w cardiac catheterization w MCC
234: Coronary bypass w cardiac catheterization w/o MCC
235: Coronary bypass w/o cardiac catheterization w MCC
236: Coronary bypass w/o cardiac catheterization w/o MCC
Note: PTCA, percutaneous transluminal coronary angioplasty; MCC, major complication or comorbidity.

Table C.2: Medicare Inpatient Claim Admission Type Codes
0: Blank
1: Emergency. The patient required immediate medical intervention as a result of severe, life-threatening, or potentially disabling conditions. Generally, the patient was admitted through the emergency room.
2: Urgent. The patient required immediate attention for the care and treatment of a physical or mental disorder. Generally, the patient was admitted to the first available and suitable accommodation.
3: Elective. The patient’s condition permitted adequate time to schedule the availability of suitable accommodations.
4: Newborn. Necessitates the use of special source of admission codes.
5: Trauma Center. Visits to a trauma center/hospital as licensed or designated by the State or local government authority authorized to do so, or as verified by the American College of Surgeons and involving a trauma activation.
6 thru 8: Reserved
9: Unknown. Information not available.
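As a minimal illustration of how the two lookup tables above would be applied to claims data, the sketch below flags CABG admissions by DRG code and separates elective from non-elective admission types. This is not the dissertation's code; the function `classify_claim` and its return labels are hypothetical names introduced here.

```python
# DRG codes identifying coronary artery bypass grafting (CABG), per Table C.1
CABG_DRGS = {"231", "232", "233", "234", "235", "236"}
ELECTIVE_ADMISSION = "3"  # admission type code for "Elective", per Table C.2

def classify_claim(drg_code, admission_type):
    """Label a claim as elective or non-elective CABG; None if not CABG."""
    if drg_code not in CABG_DRGS:
        return None
    if admission_type == ELECTIVE_ADMISSION:
        return "elective CABG"
    return "non-elective CABG"
```

For example, a claim with DRG 233 and admission type 1 (emergency) would be kept as a non-elective CABG admission, while any non-bypass DRG would be dropped.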
Table C.3: Rural-Urban Continuum Codes
Metro counties:
1: Counties in metro areas of 1 million population or more
2: Counties in metro areas of 250,000 to 1 million population
3: Counties in metro areas of fewer than 250,000 population
Non-metro counties:
4: Urban population of 20,000 or more, adjacent to a metro area
5: Urban population of 20,000 or more, not adjacent to a metro area
6: Urban population of 2,500 to 19,999, adjacent to a metro area
7: Urban population of 2,500 to 19,999, not adjacent to a metro area
8: Completely rural or less than 2,500 urban population, adjacent to a metro area
9: Completely rural or less than 2,500 urban population, not adjacent to a metro area

Figures

Figure C.1: Distributions of Unadjusted and Risk-Adjusted In-Hospital Mortality Rates at Baseline (2008)
(a) At Hospital Level (b) At Physician Level
Note: The red solid lines are the observed mortality rates and the gray dashed lines are the risk-adjusted mortality rates. All risk-adjusted rates are calculated as the ratio of observed to predicted deaths, multiplied by the national unadjusted rate. Seven risk-adjusted mortality rates are plotted, and varying sets of patient controls (including age, race, gender, Charlson Comorbidity Index, county of residence, median household income of zip code, and their interactions) are employed for each prediction.

Tables

Table C.4: Effects of Referring Physicians’ Network Characteristics on Provider Choice by Years of Experience, Surgeon|Hospital Choice Set
Coefficients for specifications (1)-(5); standard errors in parentheses.

Panel A. Bottom Quartile (≤ 18 years)
Surgeon mortality: -0.0331 (0.0655); -0.0714 (0.0538); -0.0249 (0.0618); 0.0733 (0.0799); 0.0300 (0.0814)
Surgeon mortality × I(High degree): 0.0930 (0.141); 0.123 (0.132)
Surgeon mortality × I(High clustering): -0.0324 (0.177); 0.00317 (0.173)
Surgeon mortality × I(High centrality): -0.204* (0.123); -0.221* (0.117)
Number of observations: 6,366; Number of patients: 2,058 (identical across columns)
Panel B.
Top Quartile (≥ 32 years)
Surgeon mortality: -0.0551 (0.0569), -0.104 (0.0801), -0.107 (0.0688), -0.0635 (0.0725), -0.173 (0.120)
Surgeon mortality × I(High degree): 0.0847 (0.105), 0.0905 (0.118)
Surgeon mortality × I(High clustering): 0.131 (0.111), 0.129 (0.115)
Surgeon mortality × I(High centrality): 0.0224 (0.120), 0.0394 (0.131)
Number of observations: 6,008; number of patients: 2,016 (same in all five columns)
Note: Years of experience are proxied by years after graduation, and year of graduation is obtained from Physician Compare (March 2014 ver.). Physicians who are not matched with Physician Compare data are dropped. All models control for a set of surgeon characteristics (experience, top-ranked medical school, gender). Standard errors are clustered at the Hospital Referral Region by year level. Only the referring physicians in the connected component of the network are included. *** p<0.01, ** p<0.05, * p<0.1

Table C.5: Effects of Referring Physicians' Network Characteristics on Provider Choice by Urban/Rural Areas, Surgeon|Hospital Choice Set
Coefficients with standard errors in parentheses, columns (1)-(5).

Panel A. Urban Hospital Referral Regions
Surgeon mortality: -0.0329 (0.0495), -0.00626 (0.0819), -0.0407 (0.0620), 0.0305 (0.0697), 0.0744 (0.0931)
Surgeon mortality × I(High degree): -0.0460 (0.0997), -0.0794 (0.105)
Surgeon mortality × I(High clustering): 0.0209 (0.0921), 0.0261 (0.0985)
Surgeon mortality × I(High centrality): -0.130 (0.123), -0.146 (0.126)
Number of observations: 15,795; number of patients: 5,206 (same in all five columns)

Panel B.
Rural Hospital Referral Regions
Surgeon mortality: -0.102** (0.0494), -0.121** (0.0533), -0.101** (0.0476), -0.0357 (0.0570), -0.0511 (0.0558)
Surgeon mortality × I(High degree): 0.0536 (0.0877), 0.0454 (0.0858)
Surgeon mortality × I(High clustering): -0.00639 (0.0692), -0.00893 (0.0727)
Surgeon mortality × I(High centrality): -0.151** (0.0693), -0.148** (0.0689)
Number of observations: 10,910; number of patients: 3,504 (same in all five columns)
Note: The urban/rural status is defined by computing the annual average Rural-Urban Continuum Code (RUCC) of the zip codes constituting each Hospital Referral Region (HRR). The HRR is categorized as urban if the average RUCC is less than 4. All models control for a set of surgeon characteristics (experience, top-ranked medical school, gender). Standard errors are clustered at the HRR by year level. Only the referring physicians in the connected component of the network are included. *** p<0.01, ** p<0.05, * p<0.1

Table C.6: Effects of Referring Physicians' Network Characteristics on Provider Choice by Patients' Household Income, Surgeon|Hospital Choice Set
Coefficients with standard errors in parentheses, columns (1)-(5).

Panel A. Above Median Household Income (>$48,590)
Surgeon mortality: -0.0508 (0.0443), -0.0747 (0.0533), -0.0557 (0.0463), 0.0124 (0.0446), -0.0133 (0.0524)
Surgeon mortality × I(High degree): 0.0661 (0.0746), 0.0532 (0.0745)
Surgeon mortality × I(High clustering): 0.0204 (0.0727), 0.0182 (0.0771)
Surgeon mortality × I(High centrality): -0.146** (0.0744), -0.142* (0.0741)
Number of observations: 14,817; number of patients: 4,881 (same in all five columns)

Panel B.
Below Median Household Income (≤ $48,590)
Surgeon mortality: -0.0978** (0.0495), -0.0919 (0.0772), -0.106* (0.0594), -0.0363 (0.0815), -0.0238 (0.0984)
Surgeon mortality × I(High degree): -0.0107 (0.0885), -0.0350 (0.0949)
Surgeon mortality × I(High clustering): 0.0228 (0.0853), 0.0272 (0.0920)
Surgeon mortality × I(High centrality): -0.124 (0.120), -0.129 (0.120)
Number of observations: 11,888; number of patients: 3,829 (same in all five columns)
Note: Patients' household income is proxied by the median household income of the zip code of residence, obtained from 2013 American Community Survey 5-year estimates. All models control for a set of surgeon characteristics (experience, top-ranked medical school, gender). Standard errors are clustered at the Hospital Referral Region by year level. Only the referring physicians in the connected component of the network are included. *** p<0.01, ** p<0.05, * p<0.1

Table C.7: Effects of Referring Physicians' Network Characteristics on Provider Choice by Patients' Severity, Surgeon|Hospital Choice Set
Coefficients with standard errors in parentheses, columns (1)-(5).

Panel A. More Severe (Charlson Comorbidity Index ≥ 1)
Surgeon mortality: -0.0232 (0.0610), 0.0181 (0.0906), 0.0192 (0.0706), 0.0225 (0.0914), 0.121 (0.126)
Surgeon mortality × I(High degree): -0.0756 (0.121), -0.0895 (0.129)
Surgeon mortality × I(High clustering): -0.149 (0.180), -0.134 (0.192)
Surgeon mortality × I(High centrality): -0.149 (0.196), -0.185 (0.204)
Number of observations: 4,215; number of patients: 1,428 (same in all five columns)

Panel B.
Less Severe (Charlson Comorbidity Index < 1)
Surgeon mortality: -0.0809** (0.0396), -0.0952* (0.0506), -0.0938** (0.0425), -0.0186 (0.0464), -0.0415 (0.0523)
Surgeon mortality × I(High degree): 0.0331 (0.0689), 0.0231 (0.0696)
Surgeon mortality × I(High clustering): 0.0445 (0.0584), 0.0437 (0.0607)
Surgeon mortality × I(High centrality): -0.128* (0.0742), -0.127* (0.0750)
Number of observations: 22,490; number of patients: 7,282 (same in all five columns)
Note: All models control for a set of surgeon characteristics (experience, top-ranked medical school, gender). Standard errors are clustered at the Hospital Referral Region by year level. Only the referring physicians in the connected component of the network are included. *** p<0.01, ** p<0.05, * p<0.1

Table C.8: Effects of Referring Physicians' Network Characteristics on Provider Choice, Hospital+Surgeon Choice Set with Quartile Specification of Network Characteristics
Coefficients with standard errors in parentheses, columns (1)-(4).

Hospital mortality: -0.158 (0.154), -0.142 (0.139), -0.0688 (0.118), -0.184 (0.163)
Surgeon mortality: -0.0569* (0.0338), -0.0539 (0.0344), -0.0381 (0.0328), -0.00872 (0.0362)
Hospital mortality × I(High degree): 0.283 (0.230), 0.229 (0.238)
Surgeon mortality × I(High degree): -0.0892 (0.100), -0.0799 (0.0975)
Hospital mortality × I(High clustering): 0.270 (0.275), 0.180 (0.279)
Surgeon mortality × I(High clustering): -0.0949 (0.0967), -0.0676 (0.0950)
Hospital mortality × I(High centrality): -0.0606 (0.207), -0.0268 (0.201)
Surgeon mortality × I(High centrality): -0.154** (0.0686), -0.160** (0.0677)
Number of observations: 102,554; number of patients: 10,671 (same in all four columns)
Note: All models control for a set of hospital characteristics (distance from patients' residence, number of beds, teaching status, ownership) and surgeon characteristics (experience, top-ranked medical school, gender). Coefficients are marginal effects from conditional logit models.
Standard errors are clustered at the Hospital Referral Region by year level. Only the referring physicians in the connected component of the network are included.

Table C.9: Effects of Referring Physicians' Network Characteristics on Provider Choice, Surgeon|Hospital Choice Set with Quartile Specification of Network Characteristics
Coefficients with standard errors in parentheses, columns (1)-(4).

Surgeon mortality: -0.0813** (0.0331), -0.0803*** (0.0306), -0.0269 (0.0344), -0.0392 (0.0392)
Surgeon mortality × I(High degree): 0.0407 (0.0636), 0.0108 (0.0697)
Surgeon mortality × I(High clustering): 0.0689 (0.0803), 0.0757 (0.0876)
Surgeon mortality × I(High centrality): -0.126** (0.0607), -0.129** (0.0614)
Number of observations: 26,705; number of patients: 8,710 (same in all four columns)
Note: All models control for a set of surgeon characteristics (experience, top-ranked medical school, gender). Coefficients are marginal effects from conditional logit models. Standard errors are clustered at the Hospital Referral Region by year level. Only the referring physicians in the connected component of the network are included.
*** p<0.01, ** p<0.05, * p<0.1

Table C.10: Effects of Referring Physicians' Network Characteristics on Provider Choice, Hospital+Surgeon Choice Set with a 10% Threshold Defining Networks
Coefficients with standard errors in parentheses, columns (1)-(4).

Hospital mortality: -0.0909 (0.135), -0.181 (0.128), -0.00928 (0.116), -0.158 (0.168)
Surgeon mortality: -0.0360 (0.0391), -0.0533 (0.0350), -0.000620 (0.0358), 0.0407 (0.0393)
Hospital mortality × I(High degree): 0.0779 (0.201), 0.0628 (0.202)
Surgeon mortality × I(High degree): -0.0687 (0.0643), -0.0656 (0.0613)
Hospital mortality × I(High clustering): 0.308* (0.186), 0.299 (0.189)
Surgeon mortality × I(High clustering): -0.0350 (0.0587), -0.0255 (0.0583)
Hospital mortality × I(High centrality): -0.108 (0.154), -0.115 (0.149)
Surgeon mortality × I(High centrality): -0.155*** (0.0509), -0.156*** (0.0502)
Number of observations: 102,504; number of patients: 10,662 (same in all four columns)
Note: All models control for a set of hospital characteristics (distance from patients' residence, number of beds, teaching status, ownership) and surgeon characteristics (experience, top-ranked medical school, gender). Coefficients are marginal effects from conditional logit models. Standard errors are clustered at the Hospital Referral Region by year level. Only the referring physicians in the connected component of the network are included.
*** p<0.01, ** p<0.05, * p<0.1

Table C.11: Effects of Referring Physicians' Network Characteristics on Provider Choice, Surgeon|Hospital Choice Set with a 10% Threshold Defining Networks
Coefficients with standard errors in parentheses, columns (1)-(4).

Surgeon mortality: -0.0817* (0.0440), -0.0763** (0.0378), -0.00766 (0.0446), -0.0148 (0.0481)
Surgeon mortality × I(High degree): 0.0221 (0.0606), 0.00689 (0.0617)
Surgeon mortality × I(High clustering): 0.0153 (0.0564), 0.0129 (0.0585)
Surgeon mortality × I(High centrality): -0.140** (0.0674), -0.139** (0.0677)
Number of observations: 26,687; number of patients: 8,701 (same in all four columns)
Note: All models control for a set of surgeon characteristics (experience, top-ranked medical school, gender). Coefficients are marginal effects from conditional logit models. Standard errors are clustered at the Hospital Referral Region by year level. Only the referring physicians in the connected component of the network are included. *** p<0.01, ** p<0.05, * p<0.1

Table C.12: Effects of Referring Physicians' Network Characteristics on Provider Choice, Hospital+Surgeon Choice Set with a 30% Threshold Defining Networks
Coefficients with standard errors in parentheses, columns (1)-(4).

Hospital mortality: -0.0908 (0.135), -0.182 (0.128), -0.0104 (0.116), -0.159 (0.168)
Surgeon mortality: -0.0360 (0.0391), -0.0533 (0.0350), -0.000614 (0.0358), 0.0407 (0.0393)
Hospital mortality × I(High degree): 0.0765 (0.201), 0.0613 (0.202)
Surgeon mortality × I(High degree): -0.0686 (0.0643), -0.0656 (0.0612)
Hospital mortality × I(High clustering): 0.309* (0.186), 0.300 (0.189)
Surgeon mortality × I(High clustering): -0.0351 (0.0587), -0.0256 (0.0583)
Hospital mortality × I(High centrality): -0.107 (0.154), -0.114 (0.149)
Surgeon mortality × I(High centrality): -0.155*** (0.0509), -0.156*** (0.0502)
Number of observations: 102,554; number of patients: 10,671 (same in all four columns)
Note: All models control for a set of hospital characteristics (distance from patients' residence, number of beds, teaching status, ownership) and surgeon
characteristics (experience, top-ranked medical school, gender). Coefficients are marginal effects from conditional logit models. Standard errors are clustered at the Hospital Referral Region by year level. Only the referring physicians in the connected component of the network are included. *** p<0.01, ** p<0.05, * p<0.1

Table C.13: Effects of Referring Physicians' Network Characteristics on Provider Choice, Surgeon|Hospital Choice Set with a 30% Threshold Defining Networks
Coefficients with standard errors in parentheses, columns (1)-(4).

Surgeon mortality: -0.0817* (0.0440), -0.0763** (0.0378), -0.00764 (0.0446), -0.0148 (0.0481)
Surgeon mortality × I(High degree): 0.0221 (0.0606), 0.00694 (0.0617)
Surgeon mortality × I(High clustering): 0.0153 (0.0564), 0.0129 (0.0585)
Surgeon mortality × I(High centrality): -0.140** (0.0674), -0.139** (0.0677)
Number of observations: 26,705; number of patients: 8,710 (same in all four columns)
Note: All models control for a set of surgeon characteristics (experience, top-ranked medical school, gender). Coefficients are marginal effects from conditional logit models. Standard errors are clustered at the Hospital Referral Region by year level. Only the referring physicians in the connected component of the network are included.
*** p<0.01, ** p<0.05, * p<0.1

Table C.14: Effects of Referring Physicians' Network Characteristics on Provider Choice in the Entire Network, Hospital+Surgeon Choice Set
Coefficients with standard errors in parentheses. Columns (1)-(3): all admissions; (4)-(6): elective admissions; (7)-(9): emergency admissions.

Hospital mortality: -0.0323 (0.0926), -0.113 (0.133), -0.178 (0.123), -0.0563 (0.0964), -0.0801 (0.150), -0.170 (0.132), 0.0276 (0.164), -0.224 (0.171), -0.197 (0.170)
Surgeon mortality: -0.0661** (0.0286), -0.0321 (0.0380), -0.0628* (0.0343), -0.101*** (0.0308), -0.0824** (0.0420), -0.109*** (0.0392), 0.0232 (0.0385), 0.0976** (0.0487), 0.0492 (0.0480)
Hospital mortality × I(High degree): 0.147 (0.186) [all], 0.0463 (0.194) [elective], 0.432 (0.284) [emergency]
Surgeon mortality × I(High degree): -0.0689 (0.0615) [all], -0.0373 (0.0658) [elective], -0.145* (0.0801) [emergency]
Hospital mortality × I(High clustering): 0.322* (0.169) [all], 0.260 (0.182) [elective], 0.464 (0.285) [emergency]
Surgeon mortality × I(High clustering): -0.00593 (0.0569) [all], 0.0203 (0.0607) [elective], -0.0634 (0.0786) [emergency]
Number of observations: 119,693 (all), 91,137 (elective), 28,556 (emergency)
Number of patients: 11,487 (all), 8,743 (elective), 2,744 (emergency)
Note: All models control for a set of hospital characteristics (distance from patients' residence, number of beds, teaching status, ownership) and surgeon characteristics (experience, top-ranked medical school, gender). Coefficients are marginal effects from conditional logit models. Standard errors are clustered at the Hospital Referral Region by year level.
*** p<0.01, ** p<0.05, * p<0.1

Table C.15: Effects of Referring Physicians' Network Characteristics on Provider Choice in the Entire Network, Surgeon|Hospital Choice Set
Coefficients with standard errors in parentheses. Columns (1)-(3): all admissions; (4)-(6): elective admissions; (7)-(9): emergency admissions.

Surgeon mortality: -0.0867*** (0.0333), -0.0868* (0.0445), -0.0858** (0.0382), -0.116*** (0.0366), -0.143*** (0.0484), -0.140*** (0.0452), -0.00755 (0.0491), 0.0616 (0.0708), 0.0437 (0.0515)
Surgeon mortality × I(High degree): 0.000118 (0.0556) [all], 0.0545 (0.0657) [elective], -0.145** (0.0736) [emergency]
Surgeon mortality × I(High clustering): -0.00268 (0.0489) [all], 0.0636 (0.0604) [elective], -0.174** (0.0744) [emergency]
Number of observations: 28,798 (all), 22,416 (elective), 6,382 (emergency)
Number of patients: 9,352 (all), 7,194 (elective), 2,158 (emergency)
Note: All models control for a set of surgeon characteristics (experience, top-ranked medical school, gender). Coefficients are marginal effects from conditional logit models. Standard errors are clustered at the Hospital Referral Region by year level.
*** p<0.01, ** p<0.05, * p<0.1

Table C.16: Effects of Referring Physicians' Network Characteristics on Provider Choice, Hospital+Surgeon Choice Set with Risk-Adjusted Mortality Rates
Coefficients with standard errors in parentheses. Columns (1)-(4): Model 1; (5)-(8): Model 2; (9)-(12): Model 3.

Hospital mortality: -0.0358 (0.237), 0.415* (0.227), 0.239 (0.197), 0.285 (0.301), -0.0555 (0.239), 0.408* (0.218), 0.250 (0.198), 0.255 (0.304), -0.357 (0.218), 0.00803 (0.162), -0.0538 (0.163), -0.0372 (0.270)
Surgeon mortality: 0.00238 (0.114), -0.0879 (0.0887), 0.0291 (0.0853), 0.0461 (0.104), -0.0102 (0.115), -0.0946 (0.0869), 0.0209 (0.0862), 0.0360 (0.105), 0.0619 (0.105), 0.00879 (0.0701), 0.0778 (0.0756), 0.127 (0.0975)
Hospital mortality × I(High degree): 0.397 (0.317), 0.486 (0.302) [Model 1]; 0.452 (0.319), 0.529* (0.303) [Model 2]; 0.313 (0.293), 0.366 (0.291) [Model 3]
Surgeon mortality × I(High degree): -0.139 (0.162), -0.0938 (0.163) [Model 1]; -0.126 (0.163), -0.0818 (0.163) [Model 2]; -0.120 (0.139), -0.0860 (0.140) [Model 3]
Hospital mortality × I(High clustering): -0.589* (0.348), -0.658* (0.352) [Model 1]; -0.543 (0.346), -0.618* (0.349) [Model 2]; -0.430 (0.264), -0.477* (0.265) [Model 3]
Surgeon mortality × I(High clustering): 0.0688 (0.127), 0.0906 (0.135) [Model 1]; 0.0650 (0.127), 0.0832 (0.133) [Model 2]; -0.0229 (0.111), -0.00801 (0.115) [Model 3]
Hospital mortality × I(High centrality): -0.158 (0.281), -0.161 (0.267) [Model 1]; -0.148 (0.286), -0.162 (0.273) [Model 2]; -0.279 (0.221), -0.282 (0.214) [Model 3]
Surgeon mortality × I(High centrality): -0.270* (0.151), -0.283* (0.156) [Model 1]; -0.265* (0.154), -0.279* (0.157) [Model 2]; -0.207* (0.122), -0.209* (0.126) [Model 3]
Number of observations: 46,616 (Model 1), 46,579 (Model 2), 40,436 (Model 3)
Number of patients: 4,784 (Model 1), 4,780 (Model 2), 4,226 (Model 3)
Note: Model 1 includes age, gender, race, Charlson Comorbidity Index, and county of residence as patient-specific risk adjusters.
Model 2 includes median household income categories of patients' neighborhood (rich versus poor) and Model 3 includes the full interaction between comorbidity categories (more versus less comorbidity) and income categories as additional risk adjusters. All models control for a set of hospital characteristics (distance from patients' residence, number of beds, teaching status, ownership) and surgeon characteristics (experience, top-ranked medical school, gender). Coefficients are marginal effects from conditional logit models. Standard errors are clustered at the Hospital Referral Region by year level. *** p<0.01, ** p<0.05, * p<0.1

Table C.17: Effects of Referring Physicians' Network Characteristics on Provider Choice, Surgeon|Hospital Choice Set with Risk-Adjusted Mortality Rates
Coefficients with standard errors in parentheses. Columns (1)-(4): Model 1; (5)-(8): Model 2; (9)-(12): Model 3.

Surgeon mortality: 0.0418 (0.141), -0.0263 (0.106), 0.236* (0.129), 0.165 (0.173), 0.0401 (0.145), -0.0283 (0.107), 0.228* (0.134), 0.162 (0.179), 0.228 (0.163), 0.133 (0.106), 0.274** (0.117), 0.402** (0.159)
Surgeon mortality × I(High degree): 0.0268 (0.201), -0.0156 (0.200) [Model 1]; 0.0229 (0.205), -0.0210 (0.204) [Model 2]; -0.160 (0.214), -0.191 (0.208) [Model 3]
Surgeon mortality × I(High clustering): 0.211 (0.160), 0.229 (0.159) [Model 1]; 0.208 (0.160), 0.221 (0.158) [Model 2]; -0.0155 (0.185), 0.00207 (0.185) [Model 3]
Surgeon mortality × I(High centrality): -0.406** (0.169), -0.420** (0.179) [Model 1]; -0.395** (0.176), -0.407** (0.185) [Model 2]; -0.286* (0.174), -0.305* (0.173) [Model 3]
Number of observations: 18,210 (Model 1), 18,182 (Model 2), 15,784 (Model 3)
Number of patients: 5,843 (Model 1), 5,839 (Model 2), 5,163 (Model 3)
Note: Model 1 includes age, gender, race, Charlson Comorbidity Index, and county of residence as patient-specific risk adjusters.
Model 2 includes median household income categories of patients' neighborhood (rich versus poor) and Model 3 includes the full interaction between comorbidity categories (more versus less comorbidity) and income categories as additional risk adjusters. All models control for a set of surgeon characteristics (experience, top-ranked medical school, gender). Coefficients are marginal effects from conditional logit models. Standard errors are clustered at the Hospital Referral Region by year level. *** p<0.01, ** p<0.05, * p<0.1
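The risk-adjusted mortality rates used in Tables C.16 and C.17 (and plotted in Figure C.1) are indirectly standardized: the ratio of observed to predicted deaths, multiplied by the national unadjusted rate. A minimal sketch of that final step, assuming predicted deaths have already been obtained from a patient-level model with the risk adjusters listed in the table notes:

```python
def risk_adjusted_rate(observed_deaths, predicted_deaths, national_rate):
    """Indirect standardization: observed-to-expected ratio scaled by
    the national unadjusted mortality rate."""
    if predicted_deaths <= 0:
        raise ValueError("predicted deaths must be positive")
    return (observed_deaths / predicted_deaths) * national_rate

# A provider with 12 observed deaths against 10 expected, when the national
# unadjusted in-hospital mortality rate is 3%, gets an adjusted rate ≈ 3.6%.
print(risk_adjusted_rate(12, 10.0, 0.03))
```

A provider whose observed deaths equal its model-predicted deaths is assigned exactly the national unadjusted rate, which is why the adjusted and unadjusted distributions in Figure C.1 share a common center.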
Asset Metadata
Creator: Shin, Eunhae (author)
Core Title: Essays in health economics and provider behavior
School: College of Letters, Arts and Sciences
Degree: Doctor of Philosophy
Degree Program: Economics
Publication Date: 07/28/2019
Defense Date: 04/10/2019
Publisher: University of Southern California (original), University of Southern California. Libraries (digital)
Tags: behavioral responses, Hospitals, incentives, Medicare, OAI-PMH Harvest, physicians
Format: application/pdf (imt)
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Goldman, Dana (committee chair), Chen, Alice (committee member), Nugent, Jeffrey (committee member), Romley, John (committee member)
Creator Email: ehae.shin@gmail.com, eunhaesh@usc.edu
Permanent Link (DOI): https://doi.org/10.25549/usctheses-c89-195726
Unique Identifier: UC11663246
Identifier: etd-ShinEunhae-7652.pdf (filename), usctheses-c89-195726 (legacy record id)
Legacy Identifier: etd-ShinEunhae-7652.pdf
Dmrecord: 195726
Document Type: Dissertation
Rights: Shin, Eunhae
Type: texts
Source: University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA