ESSAYS ON EFFECTS OF REPUTATION AND JUDICIAL TRANSPARENCY

by

Wei Zhou

A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(ECONOMICS)

May 2024

Copyright 2024 Wei Zhou

Dedication

In memory of my beloved grandmother, Sigui Liu, whose wisdom surpassed the written word.

Acknowledgments

I am deeply thankful for the guidance and support provided by my advisors throughout my doctoral journey. Jeffrey Weaver, an exceptional advisor, has provided me with continuous support, persistent encouragement, and invaluable guidance. John Strauss has played a pivotal role in enhancing my critical thinking skills and has offered comprehensive advice on my research. Cheng Hsiao has profoundly shaped my understanding of econometric methods and provided stellar career advice. Nan Jia has greatly broadened the scope of my research with her expertise and invaluable assistance.

My gratitude extends to other faculty members at the University of Southern California for their priceless support and advice. Special thanks to Vittorio Bassi, Augustin Bergeron, Jeff Nugent, Paulina Oliva, Simon Quach, and the many audience members who gave helpful comments in reading groups, seminars, and conferences. Additionally, I am thankful to Haichun Ye and Jack Porter for guiding me into the world of rigorous economic research.

This endeavor would not have been accomplished without the support and help of my friends. My peers and the cohort that graduated a year ahead of me, Tao Chen, Yukun Ding, Jingyi Fang, Yue Fang, Zhan Gao, Weizhao Huang, Xiongfei Li, Ruozi Song, Jingyi Tian, Liying Yang, and Shaoshuang Yang, have been my dearest friends and allies against the numerous obstacles we faced during the PhD program. I would also like to extend my deepest gratitude to my wife, Yanyu Zhu, for her unwavering love, support, and patience, which have been my cornerstone throughout this journey.
To my parents, Yuxin Zhou and Xiujuan Jiang, whose sacrifices and unconditional love have shaped me into the person I am today, I owe an immeasurable debt of gratitude. Additionally, I am thankful for the encouragement of my mother-in-law, Mei Long, who has been a constant source of support and kindness.

Contents

Dedication
Acknowledgments
List of Tables
List of Figures
Abstract
Introduction
1 Reputation, Reputation, Reputation: Evidence from Academic Journals
  1.1 Introduction
  1.2 Background and Data
    1.2.1 Journal Citation Reports
    1.2.2 Journal Impact Factor
    1.2.3 Journal Quartile Ranking
    1.2.4 Web of Science
    1.2.5 Combining Data Sources
  1.3 Empirical Strategy
  1.4 Results
    1.4.1 Impact of Higher Rankings on Natural Science Journals
    1.4.2 Decomposition of Value-added and Selection Effect
    1.4.3 Long-run Selection Effect
    1.4.4 Dropping and Rising
  1.5 Conclusion
2 Judicial Transparency and Fairness
  2.1 Introduction
  2.2 Background
    2.2.1 China's Judicial System: Opacity and Corruption
    2.2.2 China's Legal System: Litigation Procedures and Intellectual Property Laws
    2.2.3 China's Legal System: Video Streaming Court Trials
  2.3 Data
    2.3.1 Case Data
    2.3.2 Firm Data
    2.3.3 Video-Streamed Court Trial Data
    2.3.4 Summary of Data
  2.4 Empirical Strategy
  2.5 Results
    2.5.1 Baseline Effects
    2.5.2 Intensive Margin and Extensive Margin
    2.5.3 Case-Level Analysis
  2.6 Conclusion
3 Journal Rankings and Retractions: A Regression Discontinuity Analysis
  3.1 Introduction
  3.2 Background and Data
    3.2.1 Journal Ranking Data
    3.2.2 Paper Retraction Data
  3.3 Empirical Strategy
  3.4 Results
    3.4.1 Effect on Retractions of Past Papers
    3.4.2 Effect on Retractions of New Papers
  3.5 Conclusion
Conclusion
Bibliography
Appendices
  Appendix A: Figures
  Appendix B: Tables

List of Tables

1.1 Quartile Ranking Rules
1.2 Natural Science Journals, Total Effect on Papers' Immediate Citations
1.3 Natural Science Journals, Total Effect on Papers' Short-term Citations
1.4 Natural Sciences, Decomposition of Value-added and Selection Effect: Q1Q2
1.5 Natural Sciences, Decomposition of Value-added and Selection Effect: Q2Q3
1.6 Natural Sciences, Decomposition of Value-added and Selection Effect: Q3Q4
1.7 Natural Science, Long-run Selection Effect
1.8 Natural Science Journals, Total Effect on Papers' Immediate Citations, Above and Below: Q1Q2
1.9 Natural Science Journals, Total Effect on Papers' Short-term Citations, Above and Below: Q1Q2
2.1 Summary Statistics of Key Variables
2.2 Open Trial Reform and Plaintiffs' Win Rate
2.3 Intensive Margin: Change in Judges' Rulings
2.4 Extensive Margin: Change in Case Composition
2.5 Case Level: Open Trial Reform and Plaintiffs' Win Rate
3.1 Effect on Number of Retractions of Past Papers
3.2 Effect on Number of Retractions of Past Papers, By Quartile
3.3 Effect on Number of Problematic Papers in Future Years
3.4 Effect on Number of Problematic Papers in Future Years, By Quartile
B1 Example: Quartile Rankings in Journal Citation Reports
B2 Social Science Journals, Total Effect on Papers' Immediate Citations
B3 Social Science Journals, Total Effect on Papers' Short-term Citations
B4 All Journals, Total Effect on Papers' Immediate Citations
B5 All Journals, Total Effect on Papers' Short-run Citations
B6 Natural Science Journals, Total Effect on Papers' Immediate Citations, Balanced Sample

List of Figures

A1 Impact Factor, Journal Ranking and Quartile Ranking in Journal Citation Report
A2 Distribution of the running variable
A3 Balance on previous journal characteristics: average number of citations
A4 Balance on previous journal characteristics: impact factors
A5 Balance on previous journal characteristics: number of publications
A6 Natural Science, Total Effects (t+1)
A7 Natural Science, Total Effects (t+1)
A8 Natural Science, Total Effects (t+3)
A9 Value-added Effects
A10 Selection Effects
A11 Longer-run Selection Effects
A12 Below vs Above
A13 Sample of Judgment File
A14 Firms' Information on Qichacha
A15 China's Court Trials Online Portal
A16 Retraction Watch Database

Abstract

This dissertation encompasses three chapters exploring the effects of reputation and judicial transparency. Chapter 1 demonstrates that a shock to a journal's reputation has a long-lasting effect on the citations of papers published in that journal (2%-3%). This effect is stronger for positive shocks and among more influential journals. Decomposing the effect reveals that reputation influences higher-tier journals through selection and medium-tier journals through a value-added effect.

Chapter 2 examines the effect of a judicial reform in China, known as "open trials." Utilizing a staggered difference-in-differences (DiD) research design and court-level data, I find that the win rates for local and external plaintiffs dropped by 4.5% and 4.9%, respectively. Additionally, the win rate decreases for both private firms (6.4%) and state-owned firms (2.6%). The study also assesses the intensive and extensive margins of the reform, finding that the reform changes not judges' rulings but rather the composition of cases.
Chapter 3 explores the effects of a good reputation on mistake-reporting behavior. Using the journal quartile ranking system and a comprehensive dataset on retractions, I show the causal effect of reputation on mistake detection and revelation. The analysis indicates that a better reputation leads to a decreased probability of publishing problematic papers in the future and an increase in the average number of retractions reported for published papers. These effects are primarily observed in mid-tier journals, while top-tier and bottom-tier journals are unaffected.

Introduction

This dissertation encompasses three chapters exploring the effects of reputation and judicial transparency, focusing on the impact of reputation on journal citations and retractions, and the effect of judicial transparency reform in China on judicial outcomes.

A good reputation plays an important role in one's performance. However, past studies have mainly focused on the correlation between the two, while the causal relationship remains undetermined. In the first chapter, "Reputation, Reputation, Reputation: Evidence from Academic Journals," I leverage the discontinuities that determine natural science journals' rankings to investigate the impact and evolution of reputation in the context of academic journals. I find that a shock to a journal's reputation has a long-lasting effect on the citations of papers in that journal. This effect is stronger for positive shocks and among more influential journals. To determine whether the variation in citations is due to effects stemming from the selection of papers (selection effect) or the status of journals (value-added effect), I compare the differences in citations for papers published before and after the reputation shock. While both mechanisms are relevant, I show that the selection effect is greatest for high-tier journals, while the value-added effect dominates in medium-tier journals.
In the second chapter, I study China's nationwide judicial transparency reform, known as "Open Trials," to analyze the effect of improved judicial transparency on judicial fairness. I collected a novel dataset with more than 70,000 judgment files for intellectual property cases between 2016 and 2020. Using a staggered difference-in-differences (DiD) design, I find that the win rates for local and external plaintiffs dropped by 4.5% and 4.9%, respectively, which is about a 7.3% reduction in the average win rate. I also observe a decline in the win rate for private firms (6.4%) and for state-owned firms (2.6%) that are politically connected to governments, although the latter is not statistically significant. The decrease in win rate is mainly driven by the increased attendance rate of defendants.

In the third chapter, I investigate the potential downside of reputation. Using the journal quartile ranking system and a comprehensive dataset on retractions, I show the causal effect of reputation on mistake detection and revelation. The analysis indicates that a better reputation causes a decrease in the probability of publishing problematic papers in the future and an increase in the average number of retractions reported for published papers. This effect is mainly observed in mid-tier journals, while top-tier and bottom-tier journals are not significantly affected.

Chapter 1

Reputation, Reputation, Reputation: Evidence from Academic Journals

1.1 Introduction

A favorable reputation is one of the most important intangible assets for individuals, businesses, and institutions. For instance, a positive reputation is correlated with a company's performance in terms of price (Rindova et al., 2005), revenue (Chen and Wu, 2021), profit (Roberts and Dowling, 2002), product quality (Jin and Kato, 2006), and contractual outcomes (Banerjee and Duflo, 2000).
The same is true in the higher education sector: a good reputation enables colleges to attract higher-quality applicants (MacLeod and Urquiola, 2015) and their graduates to enjoy larger income growth in the future (MacLeod et al., 2017).

Despite the clear importance of reputation, what remains unclear is whether the relationship is causal. One reason is that it is difficult to find plausibly exogenous variation in reputation. Reputation has traditionally been treated as endogenous, and previous work is mostly theoretical (e.g., Spence, 1973; Eaton and Gersovitz, 1981; Milgrom and Roberts, 1982; Kreps and Wilson, 1982; Fudenberg and Levine, 1989; McDevitt, 2011; Luca and Reshef, 2021). Another is that the effects of reputation are often hard to measure: reputation influences multiple areas simultaneously, most of which are unobservable. For instance, a good reputation can increase a lawyer's caseload, the complexity of the cases they attract, and the resources available. While the first is observable, the latter two are not.

This paper addresses these challenges and provides causal evidence of the effect of reputation in the context of academic journals. Academic journals are an ideal setting to study for two reasons. First, a discontinuity in the journal ranking system provides exogenous variation in journals' reputations. Second, there is an easily measurable outcome variable for a journal: the average number of citations its papers receive, which is widely used in academia to assess the quality and influence of papers, journals, and individual researchers.

A pivotal question, however, remains unanswered: what explains the return to reputation? To answer it, this paper identifies the underlying mechanisms of the reputation effect and breaks it down into two distinct channels. The first is the selection of papers (selection effect).
A superior reputation enables the journal to select papers with potentially greater influence, which increases the average number of citations of its papers. This could be due to prestigious journals' editors receiving a larger volume or higher quality of submissions (e.g., Young, Ioannidis, and Al-Ubaydli, 2008). The second is the credibility a journal provides (value-added effect). An enhanced reputation leads to increased exposure and credibility for a journal's papers (e.g., Gonon et al., 2012), which garners more attention from the academic community and boosts citation numbers.

My empirical approach exploits a unique aspect of the journal ranking system: its quartiles. Clarivate, an information provider formerly part of Thomson Reuters, publishes its Journal Citation Reports (JCR) annually. The JCR includes a citation metric called the impact factor, which it uses to rank journals and then divide them into quartiles. A journal with an impact factor in the top 25% of its category is classified as Q1 (top quartile); one ranked from 25% to 50% is classified as Q2, and so on. For example, in the category "Medicine, General & Internal," JCR2020 ranked Postgraduate Medicine 41st out of 167, making it a Q1 journal. As a comparison, DM Disease-A-Month was ranked 42nd out of 167, making it a Q2 journal. Research institutions and administrations globally, especially in natural science departments, depend on these quartile classifications to assess the significance of publications and to make decisions on tenure, promotions, and funding. In some countries, quartile rankings even determine the size of cash bonuses.

The intriguing aspect of the quartile ranking is the tight spread of impact factors near the thresholds: a small difference in impact factors can lead to a different quartile ranking.
Using the previous example as a reference, the difference in impact factors between Postgraduate Medicine and DM Disease-A-Month was only 0.04 (a 1% difference in the average number of citations), yet they fell into two different quartiles. I therefore employ a regression discontinuity (RD) design to estimate the effects of a journal being placed in a higher quartile.

In addition to journal ranking data from the JCR, I collected papers' citation data from the Web of Science and combined the two datasets. The merged dataset spans a 23-year period and includes more than 23,000 journals from various academic fields, making it one of the most comprehensive compilations of data on journal rankings and citations available. However, given that the Journal Citation Reports are more widely used in natural science departments than in the social sciences, the primary focus of this study is on journals within the natural sciences.

I first explore the impact on the average number of citations of papers published in these journals. Comparing the number of citations between journals positioned just below and just above the cutoffs, I find that a higher quartile ranking significantly boosts the average number of citations in both the short and long term. These effects, however, are not uniform across journals and quartile transitions. They are most pronounced for transitions in the highest quartiles (2%-3%), with positive transitions (ascending to a higher quartile) having a greater impact than negative ones (descending to a lower quartile).

I then employ a unique approach to differentiate the two channels of the reputation effect by comparing the average number of citations of old and new papers. Old papers, published before a change in quartile ranking, are influenced solely by the value-added effect, given that their selection process concluded prior to the rank change.
Conversely, new papers, published after a quartile rank change, are subject to both value-added and selection effects. This analysis reveals a heterogeneous reputation effect across tiers of journals. High-tier journals exhibit a predominant influence from the selection effect, attributable to their established reputation, while mid-tier journals demonstrate a significant and sustained value-added effect.

I also conduct several robustness checks to validate my findings. Notably, since social science departments do not adhere to the quartile ranking system, I observe no significant difference in the number of citations for social science journals, which indicates that my estimates are not attributable to random variation. Furthermore, analyses using balanced samples yield similar estimates, reinforcing the robustness of my primary findings.

These results provide principles for understanding the effect of reputation and help us understand the "Matthew Effect" in science and academia, whereby well-established journals often receive more recognition and resources than lesser-known journals, even when their papers are of similar quality. For instance, for mid-tier journals, the effects of reputation stem mostly from the value-added effect, meaning that journals just above and below the cutoff show no significant difference in the papers selected in terms of their "potential" number of citations. This finding shows that using journal quartiles as the sole evaluation device, which is common in developing countries, can be problematic: institutions need a more comprehensive assessment system for decisions related to tenure, promotion, and funding. The finding also has direct implications for scholars' publishing strategies. Given that the selection effect is large in top-tier journals, choosing to publish in a top journal within Q2 might be a reasonable strategy, especially when the incentive mechanism does not rely on the quartile rankings of journals.
This may cause inefficiency in the publication system, because researchers are not trying to publish their papers in the most influential journals.

This paper enriches two primary strands of literature. First, it offers new insights into the causal effect of reputation. While reputation's impact has been studied mainly theoretically in the context of game theory (e.g., Spence, 1973; Luca and Reshef, 2021), empirical studies have often focused on correlations between reputation and economic outcomes due to the challenge of identifying exogenous variation in reputation (e.g., Stickel, 1992; Roberts and Dowling, 2002; Rindova et al., 2005; MacLeod et al., 2017). A few studies identify the effects using field experiments (Jin and Kato, 2006; Resnick et al., 2006). The most closely related empirical work uses difference-in-differences to study the effect of a higher rank in the Academic Journal Guide on papers' citations (Drivas and Kremmydas, 2020). However, the rankings they use are subjective, which might introduce endogeneity issues. Moreover, they focus only on the influence on the number of citations of past papers, which represents only a small fraction of the effects of reputation. My method builds upon previous work, not only quantifying the average impacts on future papers' citations but also dissecting these impacts to understand both the selection and value-added effects.[1] Additionally, by utilizing a widely recognized journal ranking system and examining a broad range of academic fields, this paper offers more comprehensive insights into estimating the total effects.

Second, this paper relates to the empirical literature on scientific knowledge and the publication process.
A growing literature has explored aspects such as publication trends in top journals (Card and DellaVigna, 2013), editorial decisions (Ellison, 2002a,b; Brogaard, Engelberg, and Parsons, 2014; Card et al., 2019; Card and DellaVigna, 2020), citations (Card and DellaVigna, 2014; Ryan and Stein, 2021), and tenure decisions (Heckman and Moktan, 2020). I contribute to this literature by studying a different aspect of the sociology of scientific research. I show that a journal's reputation has a causal effect on papers' citations, with important implications for incentive design and publishing strategies. Journal rankings, which were previously considered irrelevant or of lesser importance, could cause distortions in the allocation of scientific funding.

The remainder of the chapter is organized as follows: Section 1.2 delves into the background and data; Section 1.3 details the empirical approach; Section 1.4 presents the results. Finally, Section 1.5 concludes the chapter and discusses the policy implications.

[1] The RD approach I am using likely also provides more generalizable estimates by studying changes induced by an exogenous cutoff. In addition, difference-in-differences estimates can only focus on past papers, making the decomposition impossible.

1.2 Background and Data

1.2.1 Journal Citation Reports

Journal Citation Reports (JCR) are widely regarded as one of the most comprehensive and reliable tools for evaluating the impact and influence of academic journals, especially in the natural sciences. Initiated by the Institute for Scientific Information in the 1970s, the JCR is now published annually at the end of June by Clarivate Analytics, a publisher-neutral information provider formerly known as Thomson Reuters.
The annual update is an important event for the academic and research communities, as it "helps librarians to build and manage their journal collections, publishers to gauge journal performance and assess their competitors, and researchers to identify appropriate journals for publication of their work."[2]

The JCR employs a rigorous methodology to gather citation data from a broad range of scholarly literature. It meticulously tracks and analyzes citations, references, and other relevant metrics to ensure a comprehensive and accurate assessment of a journal's influence. All journals included in the list have undergone a strict editorial selection process by the committee of the Web of Science Core Collection. Consequently, the JCR avoids counting citations and references from sources such as non-peer-reviewed journals or journals with little influence, maintaining the integrity of its evaluations. Compared to other journal ranking systems, the JCR's long history and established reputation make it a preferred choice for academic evaluations and tenure decisions. In many developing countries, where there is limited information about the quality of journals, the JCR is often the top choice for ranking journals.

[2] As described by Clarivate at https://clarivate.com/wp-content/uploads/dlm_uploads/2023/06/JCR-Reference-Guide-2023.pdf

1.2.2 Journal Impact Factor

Among all the information provided by the JCR, one of the most closely watched metrics for researchers, funding bodies, journal editors, and others is the Journal Impact Factor (JIF). The JIF, developed by Drs. Eugene Garfield and Irving H. Sher in 1964, is a metric that quantifies the influence or impact of a scholarly journal. It is calculated by dividing the number of current-year citations to the articles published in that journal during the previous two years by the total number of articles published in the same period.
In mathematical terms, it is represented as:

    Impact Factor_t = (Number of citations in year t to articles published in years t-1 and t-2) / (Total number of articles published in years t-1 and t-2)    (1.1)

Because it is based on the average count of citations that the articles in a particular journal receive, the JIF reflects the journal's significance and serves as an indicator of its scholarly reputation. The more citations, the higher the impact factor, indicating that the journal's articles are more influential and widely recognized. For example, in 2022 the impact factor of Nature was 64.8, while the impact factor of Nature Communications, a Nature subject journal, was 16.4. The JIF data is updated annually along with the release of the JCR and only considers citations and publications made in the previous two years. This relatively short time frame means that it is a measure of the journal's recent impact in its field.

1.2.3 Journal Quartile Ranking

The JCR offers both detailed rankings and quartile rankings of journals based on their impact factors within their fields. Of these two, the quartile ranking is used to categorize and evaluate the relative performance of journals within their respective fields. It provides researchers, institutions, and publishers with a quick overview of a journal's relative position in terms of citation influence compared to other journals in the same subject area. The quartile ranking divides the journals in a specific subject category into four equal parts, or quartiles, based on their impact factor performance. Here is how it works:[3]

    Quartile   Range             Remarks
    Q1         0.0 < R <= 0.25   Highest-ranked journals in a category
    Q2         0.25 < R <= 0.5
    Q3         0.5 < R <= 0.75
    Q4         0.75 < R          Lowest-ranked journals in a category

    Notes: R = A/B, where A is the journal's rank in the category and B is the number of journals in the category.
    Table 1.1: Quartile Ranking Rules

Q1 (top quartile): journals in the top 25% of the category based on the JIF.
These are considered high-impact journals within their field. Q2 (second quartile): journals in the second 25%, ranked from 26% to 50%. Q3 (third quartile): journals in the third 25%, ranked from 51% to 75%. Q4 (bottom quartile): journals in the bottom 25%. It is important to note that the rankings are subject-category-specific, meaning they compare a journal's performance only to other journals within the same field, to account for variations in citation practices across disciplines (see Figure A1 for an example).

The quartile ranking has drawn considerable attention from researchers, institutions, and publishers, who rely heavily on it to identify high-impact journals in their respective fields, aiding decisions regarding manuscript submissions, collaborations, and research directions. Even though the JIF quartile is provided in the JCR alongside other journal metrics such as the impact factor, total rank, and JIF percentile, the JIF quartile classification system has been used independently as an appraisal mechanism for individual research and researchers. In many countries, it is widely used in decisions related to tenure, promotions, and funding allocations. For example, Ràfols et al. (2016) show that in Spain, the National Commission for the Evaluation of Research Activity assesses tenured academics' publications based on the quartile of the journal to make salary-increase decisions. Some universities in China require a certain number of papers in journals that belong to the first two quartiles to secure tenure (Shu et al., 2020). In Brazil, China, Romania, Qatar, and Uzbekistan, journal quartiles influence the funding of universities and the rewarding of researchers.

[3] As described by Clarivate at https://clarivate.com/wp-content/uploads/dlm_uploads/2023/06/JCR-Reference-Guide-2023.pdf
For instance, according to a publicly available document on the university's website, Shanghai International Studies University in China paid 2,735, 2,051, 1,367, and 687 USD for publications in Q1, Q2, Q3, and Q4 journals, respectively4. Even though different journal quartiles imply large gaps in funding, promotion, rewards, and tenure decisions, the differences in JIF between journals near a threshold are quite small. As shown in Table B1, within the category "Medicine, General & Internal," the highest impact factor is 91.253 and the lowest is 0.076; yet for journals near the quartile cutoffs, the differences in impact factor are small: 0.04 between Q1 and Q2, 0.006 between Q2 and Q3, and 0.021 between Q3 and Q4. This data pattern makes a regression discontinuity (RD) strategy a reasonable method for identification. The journal ranking data I use span 1997 to 2020 and include detailed information on journal names, categories, impact factors, and quartile rankings. For each category and year, cutoffs are determined by averaging the impact factors of the lowest-ranked journal in the higher quartile and the highest-ranked journal in the lower quartile. Each journal's relevant cutoff is defined as the cutoff closest to its relative ranking within its field: journals ranked between the 0th and 37.5th percentiles are assigned the Q3 and Q4 cutoff (25th percentile), those between the 37.5th and 62.5th percentiles the Q2 and Q3 cutoff (50th percentile), and those between the 62.5th and 100th percentiles the Q1 and Q2 cutoff (75th percentile).
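The cutoff construction and normalization described above can be sketched in code. This is a minimal illustration rather than the actual replication code; the data frame layout and the column name `jif` are hypothetical:

```python
import numpy as np
import pandas as pd

def normalize_category_year(group: pd.DataFrame) -> pd.DataFrame:
    """Sketch of the cutoff construction for one (category, year) group.
    Assumes a hypothetical column 'jif' holding journal impact factors."""
    g = group.sort_values("jif", ascending=False).reset_index(drop=True)
    n = len(g)
    R = np.arange(1, n + 1) / n                 # relative rank A/B, rank 1 = top journal
    g["quartile"] = np.ceil(4 * R).astype(int)  # JCR rule: Q1 if 0 < R <= 0.25, etc.
    # Each boundary cutoff: average of the lowest-ranked journal in the higher
    # quartile and the highest-ranked journal in the lower quartile.
    cutoffs = {}
    for name, frac in [("Q1Q2", 0.25), ("Q2Q3", 0.50), ("Q3Q4", 0.75)]:
        k = int(np.floor(frac * n))             # index of first journal below the boundary
        cutoffs[name] = (g.loc[k - 1, "jif"] + g.loc[k, "jif"]) / 2
    # Relevant cutoff: nearest boundary by percentile from the bottom (1 - R).
    pct = 1 - R
    relevant = np.where(pct < 0.375, cutoffs["Q3Q4"],
               np.where(pct < 0.625, cutoffs["Q2Q3"], cutoffs["Q1Q2"]))
    g["normalized_jif"] = g["jif"] - relevant
    return g
```

Applying this function within each (category, year) group yields the normalized impact factor used as the running variable below.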
Normalized Impact Factor_jft = Impact Factor_jft - Relevant Impact Factor_jft    (1.2)

By computing the difference between a journal's actual impact factor and the impact factor at its relevant cutoff, I obtain the normalized impact factor for every journal j in field f in every year t.

4 Official document available at https://graduate.shisu.edu.cn/_upload/article/files/45/56/287d0f034634a8433ce02d463602/8e4443bb-c6f8-404e-a2c6-6243f4b70eca.pdf

1.2.4 Web of Science

The second dataset consists of detailed information on citation counts, for which I use Web of Science as the source. Web of Science (WoS) is a comprehensive academic citation index and research database that provides access to a vast collection of scholarly literature, including journal articles, conference proceedings, patents, and more. It is widely used by researchers, academics, institutions, and publishers to discover, analyze, and track research publications and their citations. Citation data are a core feature of the platform, which offers detailed citation counts for individual papers, articles, and authors by calendar year. Unlike the journal rankings in the JCR, the citation data in the WoS are updated continuously. All citation data used here run through June 30th, 2022. Any subsequent changes in citation numbers are minimal, so my results remain consistent and are not significantly affected by updates to the WoS dataset.

1.2.5 Combining Data Sources

This paper relies on data from the Journal Citation Reports (JCR) and Web of Science (WoS), both developed and published by Clarivate Analytics, which ensures seamless data matching across these datasets.
On the rare occasions where typographical errors or changes in journal names occur, corrections are made using the journals' unique ISSNs, eliminating potential matching issues and ensuring the accuracy of my estimates. It is important to note that the JCR is published at the end of June each year, yielding a cycle that diverges from the regular calendar year. For each journal, my analysis concentrates on papers published within a specific "JCR cycle" (for instance, from July 1st, 2000, to June 30th, 2001), and I gather citation data for these papers in the following calendar years from the WoS, because the WoS reports citation data only by calendar year. Given the average lag in the publication process from receipt to publication, this inconsistency in timing does not undermine the validity of my estimates or my major conclusions.

1.3 Empirical Strategy

I use a sharp regression discontinuity design based on the normalized impact factor cutoffs, where journals above a cutoff are assigned to the higher quartile. The following regression discontinuity specification estimates the impact of being ranked into a higher quartile:

Y_it = b0 + b1 D_it + b2 f(X_it) + e_it    (1.3)

where D_it = 1 if X_it >= 0 and D_it = 0 if X_it < 0. Y_it is the outcome of interest for journal i in year t. X_it is the running variable: journal i's normalized impact factor in year t, with 0 the normalized impact factor cutoff that determines treatment. The coefficient b1 captures the treatment effect, while f(X_it) reflects a continuous but potentially non-parametric relationship between the running variable and the outcome. I follow the data-driven methodology of Calonico, Cattaneo, and Titiunik (2014), using the rdrobust command introduced in Calonico et al. (2017) and applying a triangular weighting kernel centered on the threshold distance.
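As a minimal sketch of this estimation step (the actual analysis uses the rdrobust implementation; here a plain triangular-kernel local linear fit at a fixed bandwidth, with placeholder variable names):

```python
import numpy as np

def rd_estimate(x: np.ndarray, y: np.ndarray, h: float) -> float:
    """Sharp RD sketch: weighted linear fits of y on the running variable x on
    each side of the cutoff (normalized to 0), using triangular kernel weights
    inside bandwidth h; returns the jump in fitted values at 0 (beta_1)."""
    def fit_at_cutoff(mask):
        xs, ys = x[mask], y[mask]
        w = 1 - np.abs(xs) / h                       # triangular kernel weight
        X = np.column_stack([np.ones_like(xs), xs])
        WX = X * w[:, None]
        beta = np.linalg.solve(WX.T @ X, WX.T @ ys)  # weighted least squares
        return beta[0]                               # intercept = fitted value at x = 0
    inside = np.abs(x) <= h
    treated = inside & (x >= 0)                      # D = 1 when x >= 0
    control = inside & (x < 0)
    return fit_at_cutoff(treated) - fit_at_cutoff(control)
```

On simulated data with a known discontinuity, this recovers the jump; in the paper, the bandwidth is instead chosen to be MSE-optimal and inference uses robust standard errors clustered at the journal level.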
To determine an MSE-optimal bandwidth h, I employ a linear polynomial model estimated within this bandwidth on both sides of the cutoff. Additionally, I calculate heterogeneity-robust standard errors, clustered at the journal level.5 I restrict my sample to journals that were in the same quartile in the previous year. This ensures that differences in future citation counts are due to the change in quartile in the current year rather than in past years, avoiding confounding.

5 The MSE-optimal bandwidth and polynomial vary for each outcome Y, following Calonico, Cattaneo, and Farrell (2018). I also ran robustness checks with alternative kernels, functional forms, and bandwidths.

There are two key identifying assumptions for this approach. The first is that the running variable is not manipulated around the threshold, as would occur if journals attempted to manipulate their citations and thereby their quartile rankings. This is quite unlikely given the characteristics of the JCR, for two reasons. First, the JCR maintains neutrality and does not favor any publisher, eliminating any incentive for the JCR to manipulate its rankings. Second, the rankings depend on how a journal compares to others, and since journals do not know the exact thresholds or the set of journals they are compared against (which changes every year), it is very hard for them to game the system. Nonetheless, I conduct a formal test of the density of the normalized impact factor distribution (Cattaneo, Jansson, and Ma, 2018) to check for jumps or drops in the density of journals around the ranking cutoffs; the results (p-value of 0.16) show that the distribution is smooth and continuous, as seen visually in Figure A2. The second assumption is continuity of journal-level covariates at the regression discontinuity threshold, which is crucial for ruling out potential confounding effects on the outcomes.
I test this using observable baseline variables from the previous three years: the average number of citations (Figure A3), impact factors (Figure A4), and the number of publications (Figure A5). These features do not change discontinuously at the cutoffs, supporting the assumption.

1.4 Results

1.4.1 Impact of Higher Rankings on Natural Science Journals

The Journal Citation Reports are a prevalent resource among researchers and institutions, especially in the natural sciences. In the social sciences, by contrast, many schools and institutions, such as Purdue University's Daniels School of Business and Shanghai University of Finance and Economics, develop their own journal lists to inform tenure and promotion decisions, and the Chartered Association of Business Schools offers a well-known ranking guide for journals in business and management. Given this context, this paper concentrates primarily on journals in the natural sciences. I begin by examining how being in a higher quartile affects the average number of times an article is cited in the year after it is published. The results are shown in Table 1.2. Articles published in journals ranked in the top quartile (Q1) are cited considerably more often than their counterparts in the second quartile (Q2): on average, they receive about 0.089 (about 2.1%) more citations per year in the first three years. This pattern persists in the medium and long term: the estimate is 0.088 (2.1%) in the medium term (years four to six) and rises to 0.136 (3.1%) in the long term (years seven to nine).
Table 1.2: Natural Science Journals, Total Effect on Papers' Immediate Citations

Panel A: Journals near Q1Q2 Cutoff
Years after change in quartile rank      0         1-3       4-6       7-9
RD Estimate                              0.069     0.089**   0.088*    0.136**
                                         (0.043)   (0.045)   (0.051)   (0.059)
Dep var mean                             4.064     4.241     4.345     4.440
Bandwidth                                0.188     0.186     0.204     0.244
Effective Obs                            24399     68227     61006     56092

Panel B: Journals near Q2Q3 Cutoff
Years after change in quartile rank      0         1-3       4-6       7-9
RD Estimate                              0.044     0.060**   0.051     0.042
                                         (0.028)   (0.028)   (0.034)   (0.043)
Dep var mean                             1.852     1.974     2.102     2.247
Bandwidth                                0.129     0.134     0.130     0.144
Effective Obs                            29747     85755     69100     58367

Panel C: Journals near Q3Q4 Cutoff
Years after change in quartile rank      0         1-3       4-6       7-9
RD Estimate                              0.014     0.016     0.020     -0.006
                                         (0.020)   (0.021)   (0.025)   (0.030)
Dep var mean                             0.970     1.087     1.227     1.376
Bandwidth                                0.111     0.101     0.119     0.175
Effective Obs                            30267     78332     72430     75034

Notes: This table reports estimates of the effects of changing quartile on journals' immediate citations for natural science journals. Panels A, B, and C report results near the cutoffs between Q1 and Q2, Q2 and Q3, and Q3 and Q4, respectively. Column 1 reports the estimate of the effect of a journal having an impact factor above the threshold on the average number of new citations in the current year for papers published in the previous year. Column 2 reports the effect on the average number of new citations in years t+2, t+3, and t+4 for papers published in years t+1, t+2, and t+3, respectively (short-run effect). Columns 3 and 4 report the total effects of changing quartile over the longer run. Each specification uses a linear polynomial, triangular kernel, and MSE-optimal bandwidth estimated following Calonico et al. (2017). Standard errors are clustered at the journal level. * p < 0.10, ** p < 0.05, *** p < 0.01.
However, in the lower quartiles (Q2-Q3, Q3-Q4), the increase in citations due to a change in quartile is much smaller. Significant differences in citation counts are observed only in the short term for journals near the Q2 and Q3 boundary, and there is no evidence that a change from Q4 to Q3 makes any meaningful difference in a journal's average number of citations. Figure A6 plots the relationship between the normalized impact factor and the average number of citations received in year t+2 for papers published in year t+1 (right after the release of the Journal Citation Reports) for each cutoff, while Figure A7 shows the dynamics of this effect over time (from the current year to nine years later). Given that follow-up studies may be published two or three years after the original research, I also examine the number of citations three years after publication. The results are in Table 1.3. Articles in top-tier journals (Q1 journals near the Q1 and Q2 boundary) continue to receive more citations than articles in Q2 journals near the same boundary, with an average increase of 0.150 (3.1%) in the short term. In the medium term, the increase drops to 0.110 (2.2%) but rises again to 0.168 (3.3%) in the long term; all coefficients are statistically significant. For journals near the Q2 and Q3 boundary, there is still an increase in citations, but the estimates are smaller: I find significant increases for journals in the higher quartile in the first six years after publication. For journals near the Q3 and Q4 boundary, the increase in citations is smaller still and not significant. A possible explanation for the larger effect among more influential journals is that the spotlight is focused entirely on them: researchers might read only papers from top-tier journals for new research ideas.
Therefore, if a journal advances from Q2 to Q1, researchers may begin subscribing to it, reading its articles, and initiating related research, which could increase its future citation count. Meanwhile, most researchers pay little attention to lower-tier journals, so a quartile change from Q4 to Q3 tends to make little difference to a journal's citation count.

Table 1.3: Natural Science Journals, Total Effect on Papers' Short-term Citations

Panel A: Journals near Q1Q2 Cutoff
Years after change in quartile rank      0         1-3        4-6       7-9
RD Estimate                              0.077     0.150***   0.110*    0.168**
                                         (0.051)   (0.050)    (0.058)   (0.071)
Dep var mean                             4.814     4.875      4.991     5.165
Bandwidth                                0.187     0.212      0.219     0.275
Effective Obs                            23375     68693      56825     51717

Panel B: Journals near Q2Q3 Cutoff
Years after change in quartile rank      0         1-3        4-6       7-9
RD Estimate                              0.072**   0.062*     0.071*    0.027
                                         (0.033)   (0.034)    (0.040)   (0.064)
Dep var mean                             2.215     2.296      2.429     2.616
Bandwidth                                0.126     0.126      0.156     0.142
Effective Obs                            28126     74416      68452     49064

Panel C: Journals near Q3Q4 Cutoff
Years after change in quartile rank      0         1-3        4-6       7-9
RD Estimate                              0.033     0.018      0.012     0.020
                                         (0.024)   (0.025)    (0.031)   (0.044)
Dep var mean                             1.208     1.306      1.460     1.656
Bandwidth                                0.121     0.139      0.177     0.176
Effective Obs                            31171     91084      85075     63957

Notes: This table reports estimates of the effects of changing quartile on journals' short-term citations for natural science journals. Panels A, B, and C report results near the cutoffs between Q1 and Q2, Q2 and Q3, and Q3 and Q4, respectively. Column 1 reports the estimate of the effect of a journal having an impact factor above the threshold on the average number of new citations in the year after next for papers published in the previous year. Column 2 reports the effect on the average number of new citations in years t+4, t+5, and t+6 for papers published in years t+1, t+2, and t+3, respectively (short-run effect).
Columns 3 and 4 report the total effects of changing quartile over the longer run. Each specification uses a linear polynomial, triangular kernel, and MSE-optimal bandwidth estimated following Calonico et al. (2017). Standard errors are clustered at the journal level. * p < 0.10, ** p < 0.05, *** p < 0.01.

As a comparison, I also examine this effect across all science disciplines and in the social sciences, where use of these rankings is less common (see Table B2 and Table B3). I do not observe statistically significant differences in any time period for the social sciences. The full sample (natural and social sciences) and a balanced sample (natural science observations before 2011) present similar results (see Tables B4, B5, and B6), which supports my main findings.

1.4.2 Decomposition of Value-added and Selection Effect

The analysis so far has found consistent evidence of higher citation counts in higher-ranked journals. Next, I decompose the two channels through which a higher ranking can increase a journal's citations: the direct boost in visibility (the value-added effect) and the fact that higher-ranked journals attract better papers (the selection effect). This question is quite general and arises in many other reputation-based markets, such as colleges: do people who attended Harvard earn more because Harvard selects the highest-quality students, or because attending Harvard adds value, so that a given student earns more than if they had attended another school? The answers may have direct and important policy implications. I employ a novel empirical strategy to identify the effects of these two channels. Papers published prior to the JCR rankings are influenced solely by the value-added effect, since they were selected before any change in the rankings occurred.
However, newer papers are affected by both channels: they receive more citations both because of the journal's greater influence and because higher-ranked journals attract higher-quality submissions. In Table 1.4, I break down these effects. Panel A examines the value-added effect for journals around the Q1 and Q2 cutoff: journals ranked higher had about 0.081 more citations on average in the short term, but this effect did not last. Panel B illustrates the selection effect, which is stronger than the value-added effect and intensifies over the long term. Note, however, that the selection effect observed here captures only the selection of papers in the very next year and its effect over the whole observation period. Table 1.5 and Table 1.6 present similar results for journals around the Q2-Q3 and Q3-Q4 cutoffs. The value-added effect is actually stronger for journals around the Q2 and Q3 cutoff: they get a bigger boost in citations simply from ranking higher. For journals near the Q3 and Q4 cutoff, however, there is no noticeable value-added or selection effect. One reason for these differences across cutoffs could be the incentives for publishing in higher-ranked journals. Universities and institutions reward researchers more for publishing in top-tier journals, so competition to get into them is intense. This could amplify the selection effect for these journals, as they attract high-quality papers. Around the Q2 and Q3 cutoff, by contrast, there is less competition and attention, so the main benefit of a higher rank is the direct boost in visibility and credibility, not the quality of the papers attracted. These results have direct implications for the design of policies that assess and reward publications. Institutions in many developing countries use the quartile ranking of journals as the sole method of assessing the quality of papers and research, paying cash rewards and making tenure and funding decisions accordingly.
However, I find no significant selection effects for journals near the Q2 and Q3 cutoff or the Q3 and Q4 cutoff, which means no difference in the "potential" average number of citations. To be fair and accurate, current policy should be more comprehensive. In addition, this incentive design could cause a jump in acceptance rates between top Q2 journals and bottom Q1 journals. As publishing in a Q1 journal becomes much harder, researchers in countries where the quartile ranking is irrelevant would do well to avoid Q1 journals just above the Q1 and Q2 cutoff and instead submit to top Q2 journals.

Table 1.4: Natural Sciences, Decomposition of Value-added and Selection Effect: Q1Q2

Panel A: Value-Added Effect
Years after change in quartile rank      0         1-3        4-6        7-9
RD Estimate                              0.069     0.081*     0.061      0.073
                                         (0.043)   (0.048)    (0.051)    (0.054)
Dep var mean                             4.064     4.777      4.400      3.944
Bandwidth                                0.188     0.186      0.213      0.251
Effective Obs                            24399     69645      66907      62599

Panel B: Selection Effect
Years after change in quartile rank      0         1-3        4-6        7-9
RD Estimate                              0.017     0.058***   0.113***   0.125***
                                         (0.015)   (0.022)    (0.033)    (0.040)
Dep var mean                             0.128     0.111      0.071      0.043
Bandwidth                                0.457     0.285      0.256      0.29
Effective Obs                            46019     92199      72064      63877

Notes: This table reports regression discontinuity estimates of the value-added and selection effects of changing quartile on papers' citations. The data are limited to journals near the Q1 and Q2 cutoff. Panel A reports estimates of the value-added effect; Panel B reports estimates of the selection effect. Column 1 shows the decomposition of the total effect of reputation right after the release of the Journal Citation Reports. The other columns show the decomposition of the short-run, medium-run, and long-run total effects of changing quartile. The outcome variable in Panel A is the average number of new citations in year t+1 for papers published in year t.
The outcome variable in Panel B is the difference between the average number of new citations in year t+2 for papers published in year t+1 and new citations in year t+1 for papers published in year t. Each specification uses a linear polynomial, triangular kernel, and MSE-optimal bandwidth estimated following Calonico et al. (2017). Standard errors are clustered at the journal level. * p < 0.10, ** p < 0.05, *** p < 0.01.

Table 1.5: Natural Sciences, Decomposition of Value-added and Selection Effect: Q2Q3

Panel A: Value-Added Effect
Years after change in quartile rank      0         1-3       4-6        7-9
RD Estimate                              0.044     0.067**   0.097***   0.084**
                                         (0.028)   (0.031)   (0.034)    (0.038)
Dep var mean                             1.852     2.197     2.012      1.822
Bandwidth                                0.129     0.133     0.151      0.155
Effective Obs                            29747     87499     82420      68612

Panel B: Selection Effect
Years after change in quartile rank      0         1-3       4-6        7-9
RD Estimate                              0.017     0.010     -0.002     0.036
                                         (0.012)   (0.016)   (0.021)    (0.030)
Dep var mean                             0.085     0.085     0.069      0.053
Bandwidth                                0.216     0.168     0.173      0.151
Effective Obs                            41924     97551     83888      61756

Notes: This table reports regression discontinuity estimates of the value-added and selection effects of changing quartile on papers' citations. The data are limited to journals near the Q2 and Q3 cutoff. Panel A reports estimates of the value-added effect; Panel B reports estimates of the selection effect. Column 1 shows the decomposition of the total effect of reputation right after the release of the Journal Citation Reports. The other columns show the decomposition of the short-run, medium-run, and long-run total effects of changing quartile. The outcome variable in Panel A is the average number of new citations in year t+1 for papers published in year t. The outcome variable in Panel B is the difference between the average number of new citations in year t+2 for papers published in year t+1 and new citations in year t+1 for papers published in year t.
Each specification uses a linear polynomial, triangular kernel, and MSE-optimal bandwidth estimated following Calonico et al. (2017). Standard errors are clustered at the journal level. * p < 0.10, ** p < 0.05, *** p < 0.01.

Table 1.6: Natural Sciences, Decomposition of Value-added and Selection Effect: Q3Q4

Panel A: Value-Added Effect
Years after change in quartile rank      0         1-3       4-6       7-9
RD Estimate                              0.014     0.030     0.019     0.005
                                         (0.020)   (0.023)   (0.023)   (0.027)
Dep var mean                             0.970     1.197     1.090     0.978
Bandwidth                                0.111     0.123     0.192     0.162
Effective Obs                            30267     94453     112652    82953

Panel B: Selection Effect
Years after change in quartile rank      0         1-3       4-6       7-9
RD Estimate                              -0.013    -0.021*   -0.022    -0.012
                                         (0.009)   (0.011)   (0.017)   (0.020)
Dep var mean                             0.070     0.073     0.060     0.050
Bandwidth                                0.234     0.206     0.209     0.211
Effective Obs                            51103     128862    110191    89785

Notes: This table reports regression discontinuity estimates of the value-added and selection effects of changing quartile on papers' citations. The data are limited to journals near the Q3 and Q4 cutoff. Panel A reports estimates of the value-added effect; Panel B reports estimates of the selection effect. Column 1 shows the decomposition of the total effect of reputation right after the release of the Journal Citation Reports. The other columns show the decomposition of the short-run, medium-run, and long-run total effects of changing quartile. The outcome variable in Panel A is the average number of new citations in year t+1 for papers published in year t. The outcome variable in Panel B is the difference between the average number of new citations in year t+2 for papers published in year t+1 and new citations in year t+1 for papers published in year t. Each specification uses a linear polynomial, triangular kernel, and MSE-optimal bandwidth estimated following Calonico et al. (2017). Standard errors are clustered at the journal level. * p < 0.10, ** p < 0.05, *** p < 0.01.
1.4.3 Long-run Selection Effect

In the previous section, by analyzing the difference between citations in year t+k+1 for papers published in year t+1 and citations in year t+k for papers published in year t, I decomposed both the short-term and long-term impacts of selection in the next year (t+1). Naturally, the next question is: how large is the selection effect in the long run? To answer it, I compare citations in year t+k for papers published in year t with citations in year t+2k for papers published in year t+k. The former is affected only by k years of the value-added effect, while the latter is affected both by k years of the value-added effect and by the selection effect in year t+k. Differencing these two citation counts therefore isolates the long-run selection effect, namely the selection effect in year t+k. In Table 1.7, I break down the effects to derive the long-run selection effect. Panel A examines journals around the Q1 and Q2 cutoff: journals ranked higher had about 0.044 more citations on average in the short term, but this effect did not last. Panels B and C show the long-run selection effect for the lower cutoffs across all time periods, and there is no evidence that the longer-run selection effect persists. A possible explanation is that the JCR is updated annually, so a single change in quartile rank may not have a long-lasting effect on the long-run selection process.
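The differencing behind this comparison can be written out explicitly. The sketch below illustrates the outcome construction; the lookup `cites[(pub_year, cite_year)]`, giving a journal's average new citations in `cite_year` to papers published in `pub_year`, is a hypothetical data structure, not part of the actual replication code:

```python
def long_run_selection_outcome(cites: dict, t: int, k: int) -> float:
    """Selection effect in year t+k, isolated by differencing:
    citations in year t+2k to papers published in year t+k reflect k years of
    the value-added effect plus selection at t+k, while citations in year t+k
    to papers published in year t reflect k years of value-added only."""
    both_channels = cites[(t + k, t + 2 * k)]
    value_added_only = cites[(t, t + k)]
    return both_channels - value_added_only
```

This difference is the outcome variable used in the regression discontinuity estimates reported next.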
Table 1.7: Natural Science, Long-run Selection Effect

Panel A: Journals near Q1Q2 Cutoff
Years after change in quartile rank      0         1-3       4-6       7-9
RD Estimate                              0.017     0.044*    0.025     -0.072
                                         (0.015)   (0.023)   (0.041)   (0.067)
Dep var mean                             0.128     0.368     0.457     0.594
Bandwidth                                0.457     0.383     0.426     0.378
Effective Obs                            46019     97454     62971     28906

Panel B: Journals near Q2Q3 Cutoff
Years after change in quartile rank      0         1-3       4-6       7-9
RD Estimate                              0.017     0.000     -0.017    0.022
                                         (0.012)   (0.017)   (0.033)   (0.064)
Dep var mean                             0.085     0.250     0.355     0.481
Bandwidth                                0.216     0.175     0.198     0.210
Effective Obs                            41924     85800     55757     27997

Panel C: Journals near Q3Q4 Cutoff
Years after change in quartile rank      0         1-3       4-6       7-9
RD Estimate                              -0.013    -0.002    0.011     -0.038
                                         (0.009)   (0.013)   (0.027)   (0.053)
Dep var mean                             0.070     0.208     0.325     0.446
Bandwidth                                0.234     0.203     0.174     0.125
Effective Obs                            51103     108315    59799     23160

Notes: This table reports regression discontinuity estimates of longer-run selection effects of changing quartile on papers' citations. Panels A, B, and C report results near the cutoffs between Q1 and Q2, Q2 and Q3, and Q3 and Q4, respectively. Column 1 shows the immediate selection effect after the release of the Journal Citation Reports. The outcome variable in this table is the difference between the average number of new citations in year t+2k for papers published in year t+k and new citations in year t+k for papers published in year t. The other columns show the selection effect in the short run, medium run, and long run after the change in quartile, using the same outcome variable but pooling the observations within the cluster. Each specification uses a linear polynomial, triangular kernel, and MSE-optimal bandwidth estimated following Calonico et al. (2017). Standard errors are clustered at the journal level. * p < 0.10, ** p < 0.05, *** p < 0.01.
1.4.4 Dropping and Rising

A famous quote attributed to Benjamin Franklin goes, "It takes many good deeds to build a good reputation, and only one bad one to lose it." This section tests the accuracy of this statement by estimating the differing treatment effects of dropping to a lower quartile and rising to a higher quartile. Other studies have found some evidence that negative reputation shocks have larger effects: Berg, Heeb, and Kölbel (2022) document that downgrades in ESG ratings lead to larger variation in returns than upgrades, and Standifird (2001), studying eBay auctions, found that positive ratings are less influential than negative ratings. Given that the total effect is substantial and persistent among top journals near the Q1 and Q2 cutoff, I focus on presenting results for these journals. Table 1.8 illustrates the heterogeneous effects of transitioning between quartiles on the average number of immediate citations (those received in the following year). Panel A examines journals that were above the Q1 and Q2 cutoff in the previous year and reveals that a drop in quartile does not significantly affect citation counts. In contrast, Panel B considers journals below the cutoff and explores the impact of rising in quartile. The coefficients are significant and consistent: for older papers, the average number of citations increases immediately by 0.134, while for newer papers, the increase is 0.154 in the short term, 0.155 in the medium term, and 0.225 in the long term. These increases correspond to rises of 5.6%, 5.4%, and 7.4% in the total number of citations, respectively. Table 1.9 shows the corresponding effects of transitioning between quartiles on the average number of short-term citations, specifically those received within three years of publication.
Panel A analyzes journals that previously ranked above the Q1 and Q2 cutoff and finds that a drop in quartile significantly influences short-term citation counts. The coefficients are notable and stable: for recent publications, the citation increase is 0.154 in the short term, 0.150 in the medium term, and 0.173 in the long term, equating to citation growth of 2.8%, 2.7%, and 3.0%, respectively. Conversely, Panel B presents journals initially positioned below the cutoff and investigates the effect of rising in quartile. The coefficients are also significant and consistent. For older articles, citations immediately increase by 0.118; for newer work, the short-term increase is 0.167, the medium-term increase is smaller at 0.102, and the long-term increase is a substantial 0.163. These estimates translate to increases of 5.3%, 3.1%, and 4.7% in total citations, respectively. These results show that, in the context of academic journals, a positive shock has larger effects, especially in the short run. This can be explained by the informational function of journal rankings: if researchers primarily review papers from top-tier journals for new research ideas, a shift from Q2 to Q1 will attract attention from researchers who had not previously encountered the journal, thus increasing citations.
Conversely, a drop from Q1 to Q2 might have a smaller effect because most researchers are already familiar with the journal and would continue reading and citing its articles even if it is not classified as a Q1 journal in the current year.

Table 1.8: Natural Science Journals, Total Effect on Papers' Immediate Citations, Above and Below: Q1Q2

Panel A: Journals Above Q1Q2 Cutoff
Years after change in quartile rank      0          1-3        4-6       7-9
RD Estimate                              0.042      0.130      0.041     0.075
                                         (0.051)    (0.084)    (0.102)   (0.125)
Dep var mean                             3.285      3.991      4.198     4.403
Bandwidth                                0.231      0.181      0.162     0.178
Effective Obs                            16081      24453      17945     14545

Panel B: Journals Below Q1Q2 Cutoff
Years after change in quartile rank      0          1-3        4-6       7-9
RD Estimate                              0.134**    0.154***   0.155**   0.225***
                                         (0.056)    (0.058)    (0.067)   (0.081)
Dep var mean                             2.665      2.770      2.896     3.021
Bandwidth                                0.205      0.197      0.201     0.260
Effective Obs                            11478      31361      26413     25468

Notes: This table reports estimates of the effects of changing quartile on journals' immediate citations. Panel A reports results for journals above the Q1 and Q2 cutoff in the previous year, while Panel B reports results for journals below the cutoff. Column 1 reports the estimate of the effect of a journal having an impact factor above the threshold on the average number of new citations in the current year for papers published in the previous year. Column 2 reports the effect on the average number of new citations in years t+2, t+3, and t+4 for papers published in years t+1, t+2, and t+3, respectively (short-run effect). Columns 3 and 4 report the total effects of changing quartile over the longer run. Each specification uses a linear polynomial, triangular kernel, and MSE-optimal bandwidth estimated following Calonico et al. (2017). Standard errors are clustered at the journal level. * p < 0.10, ** p < 0.05, *** p < 0.01.
Table 1.9: Natural Science Journals, Total Effect on Papers' Short-term Citations, Above and Below: Q1Q2

Panel A: Journals Above Q1Q2 Cutoff
Years after change in quartile rank       0         1-3       4-6       7-9
RD Estimate                             0.084     0.154***  0.150**   0.173**
                                       (0.058)   (0.055)   (0.062)   (0.078)
Dep var mean                            5.447     5.515     5.625     5.789
Bandwidth                               0.249     0.304     0.350     0.350
Effective Obs                           16377     51445     45990     35439

Panel B: Journals Below Q1Q2 Cutoff
Years after change in quartile rank       0         1-3       4-6       7-9
RD Estimate                             0.118*    0.167**   0.102     0.163*
                                       (0.067)   (0.065)   (0.080)   (0.096)
Dep var mean                            3.142     3.178     3.307     3.488
Bandwidth                               0.203     0.198     0.194     0.257
Effective Obs                           10970     28521     22660     21218

Notes: This table reports estimates of the effects of changing quartile on journals' short-term citations. Panel A reports results for journals above the cutoff between Q1 and Q2 in the previous year, while Panel B reports results for journals below the cutoff. Column 1 reports the estimated effect of a journal having an impact factor above the threshold on the average number of new citations in the current year for papers published in the previous year. Column 2 reports the estimated effect on the average number of new citations in years t+4, t+5, and t+6 for papers published in years t+1, t+2, and t+3 respectively (short-run effect). Columns 3 and 4 report total effects of changing quartile in the longer run. Each specification uses a linear polynomial, triangular kernel, and MSE-optimal bandwidth estimated following Calonico et al. (2017). Standard errors are clustered at the journal level. * p < 0.10, ** p < 0.05, *** p < 0.01.

1.5 Conclusion

There is growing interest in understanding the ways reputation influences agents' performance. This study documents the causal effects of reputation on academic journals' performance, measured by the average number of citations.
Using a unique setting in journal rankings and an RD design, I show that a journal's ranking can impact the number of times its published papers are cited by other scholars. Moving up in these rankings, for example from Q2 to Q1, leads to a significant boost in citations. However, this boost is not uniform across journals or across types of ranking transitions: it is more pronounced for journals that are already highly ranked and for those moving up to a higher quartile. I investigate why this happens and identify two main reasons. First, higher-ranked journals can be more selective, choosing to publish only the highest-quality papers. Second, a higher rank boosts a journal's visibility and credibility, leading more scholars to read and cite its papers. By comparing old and new papers, I am able to separate these two effects and find that top-tier journals (such as those in Q1) benefit mostly because they can be highly selective about what they publish, while mid-tier journals gain from the increased visibility and trust that come with a higher rank. These findings help us understand the different channels through which reputation influences performance. In addition, this study adds valuable insights to ongoing discussions about the role of reputation in academia. It challenges and expands existing knowledge, offering a more detailed look at the real-world impacts of journal rankings. This can help in rethinking how we evaluate and value academic work, moving toward a fairer and more comprehensive system that recognizes quality and impact beyond journal rankings. This study also sheds light on the influence of incentive design on publishing strategies. Future studies could examine the detailed channels of the effect, in particular selection by the editor (inside) and selection by the authors (outside).
In addition, this unique data setting of quartile rankings could help to identify the effect of reputation in other areas, such as individual researchers' tenure, promotion, and funding outcomes. My novel methodology can be used to decompose selection and value added in other fields, such as understanding reputation among institutions, scholars, firms, and other settings where reputation influences the selection process. It opens up an exciting avenue for future research.

Chapter 2

Judicial Transparency and Fairness

2.1 Introduction

Transparency plays a vital role in maintaining the overall integrity, accountability, and quality of the public sector (Banerjee et al., 2010; Djankov et al., 2010; Ferraz and Finan, 2011; Avis, Ferraz, and Finan, 2018). It increases public knowledge about the system, provides recourse for redress when problems occur, and decreases the opportunities for corrupt practices (Kolstad and Wiig, 2009; Peisakhin, 2012). Previous studies have primarily focused on transparency within government institutions, with less attention given to the judicial system. There are two potential reasons for this oversight. Firstly, compared to government institutions, large-scale reforms are rarer in the judicial system, especially in developed countries where the system is already well-established. This results in a lack of exogenous variation in judicial transparency. Secondly, due to the limited openness and digitalization of judgment files, there is a scarcity of reliable and comprehensive data sets on court filings, complicating the analysis. This study investigates the impact of enhanced transparency on judicial outcomes within China's context. Exploiting the staggered roll-out of a nationwide reform in judicial transparency, it specifically assesses whether more transparency can reduce local protectionism and diminish the effect of political connections on case decisions.
China presents a unique setting to examine the impact of judicial transparency due to its distinct characteristics. Firstly, prior to the reform, the transparency of China's judicial system was quite low. Before a national judicial reform conducted in 2016, the openness of a typical Chinese trial was entirely at the discretion of courts or judges. They could decide whom to let in and whom to deny, and sometimes even family members were not permitted to sit inside, let alone the public. Secondly, the reform of the judicial system was large-scale and attracted significant public attention. Since 2016, the Supreme People's Court of China has run a nationwide judicial reform called "Open Trials" to increase the transparency of the judicial process. The policy mandates that courts, regardless of their level, live stream their trial process under the principle "Live broadcasting as the principle, not broadcasting as the exception." The Supreme People's Court also built an official online platform to live stream and record all court proceedings, making live trials widely available to the public. As of 2024, a total of 22.67 million court sessions have been live-streamed nationwide, with a cumulative viewership of 75.67 billion. Thirdly, China possesses the most extensive administrative data on court verdicts and makes it publicly available. In 2014, the Supreme People's Court required most court decisions to be published online, with the exception of politically sensitive or secrecy-related cases. As of March 2024, there are 145.77 million cases fully available on the China Judgments Online portal. This comprehensive open dataset enables systematic analysis of judicial outcomes, which is impossible in any other context. In this paper, I have compiled a unique dataset consisting of administrative records for 71,729 intellectual property case verdicts in Chinese courts, paired with business registration records for all involved firms from 2016 to 2020.
Using a generalized difference-in-differences (DiD) design with staggered adoption, I find that the judicial transparency reform has little effect in reducing local protectionism and political interference at the court level: the win rates of local and external plaintiffs dropped by 4.5 and 4.9 percentage points respectively, about a 7.3% reduction from the baseline average win rate before the reform. I also find a decline in the win rate of private firms (6.4%) and of State-Owned Enterprises (SOEs) (2.6%) that are politically connected to governments, but the reduction for state-owned firms is not statistically significant. In contrast, case-level data show that the reform is effective in mitigating local protectionism: it decreases the win rate of local plaintiffs by 9.4 percentage points (a 12.6% decrease) but does not significantly affect the win rate of external plaintiffs. The differences between the court-level and case-level results arise from heterogeneous treatment effects across courts with different caseloads. At the same time, there is no significant change in the win rate for either private firms or SOEs. Next, I examine the intensive and extensive margins of this reform. The reform may affect the observed win rates in two ways: first, more transparency in the trial process may influence judges' decision-making incentives without changing case characteristics (intensive margin); and second, it may alter the mix of case features, such as the identities of plaintiffs, the monetary value at stake, and the likelihood of the defendant's presence (extensive margin). Given the significant time lapse between the filing and ruling of lawsuits, the composition of lawsuits tends to remain stable within a relatively short period. By comparing only the subset of cases whose trials took place shortly before or after the reform, I isolate the intensive and extensive margins of the reform. The results show that the effects of this reform come primarily from changes in case composition.
Notably, after the reform, plaintiffs are larger firms in terms of both registered capital and number of employees, and defendants are more likely to attend trials. This shift in composition explains the observed decrease in plaintiffs' win rates. This chapter is related to three bodies of literature. First, it contributes to the literature on the effect of transparency in the public sector. Much of the previous work focuses on government institutions and shows that there exists a relationship between transparency and outcomes such as political accountability, trust in government, and the quality of government (Banerjee et al., 2010; Djankov et al., 2010; Ferraz and Finan, 2011; Avis, Ferraz, and Finan, 2018; Alessandro et al., 2021). However, due to the challenge of identifying exogenous sources of variation in transparency and the absence of reliable data, the impact of transparency on the judicial system remains an open question. This paper contributes by examining the effect of transparency in a different context and provides evidence on how transparency can influence outcomes within the judicial system. Second, my research enriches the literature on judicial systems by exploring a court trial reform in China. The judicial system is a key component of the public sector, and it affects many aspects of political and economic activity. However, empirical studies on judicial systems, especially in low- and middle-income countries, are scarce due to data limitations. Past studies mainly focus on the efficiency and delays of courts (Djankov et al., 2003; Chemin, 2009; Visaria, 2009; Amirapu, 2021). Other studies focus on judges, e.g., economics training of judges (Ash, Chen, and Naidu, 2022), judges' ideology (Bonica and Sen, 2021), or presidential appointment (Mehmood, 2022). The most closely related empirical work studies the effect of judicial independence on court capture, local protectionism, and economic integration (Liu et al., 2023).
However, this paper is distinct in that it explores a different treatment: it contributes to the field by investigating the effects of a specific transparency reform on judicial outcomes. Third, this paper also contributes to the empirical literature on the effects of political connections. A growing body of research has investigated various aspects, including electoral outcomes (Sukhtankar, 2012), land sales (Chen and Kung, 2019), lending behavior (Khwaja and Mian, 2005), mortality (Fisman and Wang, 2015), and stock prices (Fisman, 2001). What distinguishes my study from the existing literature is its documentation of the influence of political connections on judicial outcomes. This chapter is organized as follows: Section 2.2 provides background on the judicial system and the nationwide reform in China. Section 2.3 describes how I combine three unique datasets: judgment files, business registration information, and the timeline of the reform. Section 2.4 describes the empirical methodology. Section 2.5 reports the results. Section 2.6 concludes the study.

2.2 Background

2.2.1 China's Judicial System: Opacity and Corruption

Like many developing countries, China continues to struggle with challenges related to weak institutions, especially within its judicial system. This system has been subject to both domestic and international criticism for a variety of deficiencies. Key issues include a lack of judicial independence, transparency, and consistency in the application of the law. Additionally, concerns over procedural fairness and limited access to legal representation have been raised. In 2014, these challenges were underscored when the World Justice Project's Rule of Law Index placed China's civil justice system 79th out of 99 countries,1 reflecting significant areas for improvement in ensuring a fair, impartial, and effective judicial process.
Among the various deficiencies, a fundamental issue with China's judicial system is its lack of transparency. Prior to the systematic judicial reforms initiated in 2014, Chinese trials were not formally "closed" but "selectively open": there was significant variation in court policies concerning public access to trials. Internationally, the transparency of China's judicial system is frequently viewed less favorably when compared to countries with longer histories of judicial independence and openness. This opacity could easily foster unfair judgments and corruption within the judicial system. In 2010 and 2017, two vice presidents of the Supreme People's Court were found guilty of corruption and sentenced to life imprisonment, which shows that judicial corruption exists even at the highest level of the judicial system. Also, according to data released by the Chinese government in 2008, judicial corruption is widespread in trials: judges account for about 77% of all corrupt officials within the judicial system. Additionally, corruption in the judicial system has been officially acknowledged by the Chinese central government. In 2014, a statement on legal reforms issued by the Chinese central government announced that the Chinese Communist Party would improve the legal system to mitigate corruption and weak enforcement and to increase transparency. The lack of transparency, combined with the substantial influence exerted by the government and the Party over courts, makes it difficult for judges to remain impartial when making judicial decisions. In this paper, I identify two likely beneficiaries of the opaque judicial system: local firms and politically connected firms such as state-owned enterprises (SOEs). The reasoning is straightforward. First, the judicial system is not independent of the Communist Party (CPC) in China.

1 Official document available at https://worldjusticeproject.org/sites/default/files/documents/RuleofLawIndex2014.pdf
The CPC maintains significant influence over the country's judicial system, including the courts and legal processes. The Party's leadership is considered the highest authority, and its principles and policies can influence judicial decisions and legal interpretations. The intertwined relationship between the judicial system and the CPC poses challenges for judges in delivering impartial judgments in cases where government interests are at stake. Local firms contribute significantly to the local labor market, generate substantial tax revenue, and thus forge stronger connections with their local governments. Local protectionism is strong and evident, as shown in Liu et al. (2023), and was also acknowledged by the central government in its statement on legal reform: a significant aim of the reform is to reduce the influence of local governments on local courts. For SOEs in China, the connection to the government is especially strong. Senior executives of these SOEs are treated like a unique type of government official: they are appointed by the Communist Party's personnel department, not by the companies' boards. This means that SOE managers, such as those at China Mobile, China Sinopec, and China Telecom, hold positions equivalent to cabinet vice-ministers, blurring the lines between corporate and government roles. This close relationship also allows for easy transitions from high positions in SOEs to significant government offices. For example, in 2012, the chairman of the Commercial Aircraft Corporation of China was appointed governor of Hebei province. Similarly, in 2013, the chairman of China Construction Bank became governor of Shandong province. As a consequence, judges might hesitate to rule against SOEs, aware that an SOE manager might later ascend to a significant government or Party position. It is therefore logical to expect that the strong connections between SOEs and the government could give SOEs an upper hand in legal disputes.
2.2.2 China's Legal System: Litigation Procedures and Intellectual Property Laws

From bottom to top, there are four levels in the court system of China: the Basic People's Courts at the county level, Intermediate People's Courts at the prefecture level, High People's Courts at the province level, and the Supreme People's Court at the national level. In addition, there are also special courts, such as maritime, railway, military, and intellectual property courts, but they are limited in judicial jurisdiction. This paper focuses mainly on the rulings of intellectual property lawsuits for several reasons. Firstly, intellectual property cases are among the most common and important lawsuits between firms. Between January 2016 and June 2020, there were 71,729 first-instance IP cases judged across China, and 62% of those cases were between firms, with the number steadily increasing over time.2 Secondly, jurisdiction follows the actor sequitur forum rei doctrine in intellectual property cases. This principle mandates that the plaintiff file the case at the location where the defendant resides, thereby minimizing the endogeneity issues common in other case types (e.g., criminal cases). Because plaintiffs cannot freely choose where to file their lawsuit against the defendant, related sample selection concerns are mitigated. Finally, the level of transparency in legal proceedings can differ greatly depending on the nature and perceived sensitivity of the case. Intellectual property (IP) cases, by their nature, tend not to be as politically sensitive as other types of legal disputes. This may result in a higher degree of openness and adherence to procedural norms, which enhances the completeness and representativeness of my judgment file data, and compliance with transparency reforms, mitigating concerns related to sample selection bias.

2 News report available at https://www.gov.cn/xinwen/2018-04/23/content_5285020.htm
Therefore, the focus on IP lawsuits provides a more accurate reflection of the impact of the transparency reform within the judicial system. In China, while a typical civil or criminal case can be initiated at any level of court, jurisdiction over intellectual property (IP) cases is strictly limited to certain courts. According to the Supreme People's Court regulations on court jurisdictions, only intermediate courts at the prefecture level are authorized to handle first-instance IP lawsuits. This delineation underscores the specialized nature of IP disputes and the perceived need for courts with specific expertise. Patent-related cases are subject to even more stringent jurisdictional limitations. These cases are exclusively reserved for the intermediate courts located in the capital cities of each province or in cities that are directly governed at the provincial level. This further centralization reflects the complex technical and legal expertise required to adjudicate patent disputes accurately. While there exists a provision for higher provincial-level courts to delegate jurisdiction over IP cases to certain basic-level courts, this practice is exceedingly rare, as evidenced by the data collected in my study. Hence, concentrating exclusively on IP cases that fall within the jurisdiction of intermediate courts provides a nuanced understanding of how this reform impacts courts at the city level, rather than the entire judicial system in China. This approach offers valuable insights into the operational dynamics and challenges faced by these courts in the context of the broader legal system.

2.2.3 China's Legal System: Video Streaming Court Trials

In 2016, as part of a broader effort to modernize the judicial system and enhance transparency, the Chinese government launched a website named "China Trials Online" for live streaming court trials.
This official online platform is designed to allow the public to watch court proceedings in real time or view recorded replays over the internet. This innovative feature opens up the judicial process, allowing for greater transparency and accessibility by providing a window into the courtroom from virtually anywhere. The online portal has drawn substantial attention since then: it had been viewed more than 22 billion times, and more than 8.6 million cases had been live-streamed online, by 2020. As the government claims, almost every case (except national security cases, such as espionage, or private cases, such as divorce) is live-streamed or pre-recorded and made available online. In practice, the scope of trials available for live streaming can vary, with a focus on civil, commercial, and some criminal cases. The introduction of video-streamed court trials initially included only the Supreme People's Court, starting from July 1st, 2016. On September 27th, the website "China Open Trials" became available online, and the first 427 courts started to broadcast their trials online. By the end of 2017, all 3,520 courts nationwide were covered by and connected to this program. The timing of the introduction of video-streamed court trials at the province level was determined by technology availability and political sensitivity. For example, the first 427 courts to introduce video-streamed trials were courts from more developed areas, such as provinces in the east and the south, while Xinjiang, Tibet, and other remote or sensitive areas came online in December 2017. However, implementation within provinces was close to randomly distributed: there is no evidence that the within-province rollout was imposed top-down by bureaucratic fiat or driven by other deliberate factors.
2.3 Data

The analysis in this paper is based on data from three different sources: detailed case judgment data from China Judgments Online (2016-2020), detailed firm information from Qichacha, and online court trial data from China Trials Online.

2.3.1 Case Data

In 2013, as the first step of the judicial transparency reform, the Supreme People's Court of China launched China Judgments Online (CJO), an official digital portal managed by the Supreme People's Court itself. It mandated that all courts, from the county level to the national level, post both past and new judgment files to this platform and make them fully available to the public with little restriction.3 Given the huge workload this added for judges, the data on past judgments is quite limited, while the data on new judgment files is much more complete and comprehensive. According to the policy announced by the Supreme People's Court, courts are required to post all new judgment files on the website within seven days following the conclusion of a trial. This reform was implemented in stages and fully realized by the end of 2015, marking a significant milestone in the journey toward judicial transparency. The China Judgments Online database is widely used, widely quoted, and considered highly reliable. Firstly, it was one of the very first large-scale judicial system reforms in China since 2012 (the 18th National Congress of the Chinese Communist Party), and the one with the fewest obstacles, both political and technical. This ensures that it includes detailed information on almost all cases and that there is no systematic removal of judgment files at courts' discretion. Secondly, the database has drawn huge public attention. The information is widely used by other entities, such as business information database companies.
Also, people use this platform to conduct due diligence before signing contracts with new business partners, ensuring that these potential partners are not embroiled in numerous lawsuits with others. The extensive visibility of these publicly disclosed administrative records makes it challenging to manipulate or hide them after posting. However, two potential issues could undermine the quality of the data. First, in the early years following the introduction of this reform, the compliance rate among courts was low: courts uploaded only a portion of their judgment files, rather than the entirety of them as mandated by the reform's guidelines (Ahl, Cai, and Xi, 2019; Liebman et al., 2020). Second, starting in 2021, the government realized that too much openness might pose challenges to itself, so it removed some cases from the website, such as corruption cases of government officials. There are news reports showing that several courts (e.g., the High People's Court in Hubei province) helped defendants remove their cases from the CJO to "save their reputation" in 2022.4 Therefore, the data I use comprise all court verdicts on intellectual property cases between January 2016 and June 2020 (I scraped the data in July 2020). This strategy mitigates the issues associated with selective openness and makes them unlikely to significantly influence the analysis presented in this paper. There are 71,729 intellectual property rights cases in total during the period of study, all of which are first-instance cases at the intermediate court level. After deleting cases in which plaintiffs or defendants are individuals, since it is hard to identify their residence and political connections, I obtain a data set of 42,101 firm-related cases in total.

3 There are certain restrictions on publishing cases related to national security, enterprise security, and personal privacy.
For each case, I identify the case number, court name, plaintiff's name, court fee, and the date of the trial from the judgment file. See Figure A13 for a sample judgment file. Beyond the information that is readily accessible, an additional crucial variable of interest is the outcome of the lawsuit, specifically the identity of the winning party. To address this, I examine the distribution of the court fee between the involved parties as a means to infer the lawsuit's outcome. This approach is grounded in the principle that the allocation of court fees, particularly in civil litigation, often reflects the court's decision regarding which party bears the financial burden of the legal process. A judgment requiring one party to pay a larger portion of the fees can indicate that this party was less successful in the litigation, thereby providing insight into the lawsuit's winner.

4 Official news available in Chinese at https://m.thepaper.cn/newsDetail_forward_22681894

2.3.2 Firms Data

The second data set I use contains business information on the plaintiff firms. My firm-level data are derived from Qichacha, a platform and database company specializing in business information on private and public companies in China. See Figure A14 for a sample screenshot of the website. The dataset is considered reliable for two principal reasons. Firstly, it is constructed from public records and administrative data, ensuring a foundation of officially recognized and recorded information. Secondly, the competitive landscape of the business information provider market, characterized by low barriers to entry, necessitates a high degree of data accuracy and reliability for any participant to remain competitive. In this environment, providers must ensure their data is of the highest quality to distinguish themselves from rivals such as Tianyancha and Qixinbao.
For each firm listed in the database, Qichacha provides detailed information on several key variables crucial for in-depth analysis. These variables include the registered residence location, ownership type (private or state-owned), registered capital, public listing status, and the number of employees enrolled in social security. By merging the judgment file data with the business registration data using the firm's name, I construct two variables related to the firm's identity. Firstly, by comparing the firm's registered residence location (from the business registration data) against the court's location (from the judgment file data), I determine whether the plaintiff operates within the same prefecture as the court, thereby classifying it as either a local or an external firm. Secondly, based on the ownership information, I am able to distinguish "state-owned" from "privately owned" firms, and thereby identify all intellectual property lawsuits with private and state-owned plaintiffs.

2.3.3 Video Streamed Court Trials Data

The final dataset covers the implementation timeline of the judicial reform concerning live trials. This information was scraped from the official website "China Court Trial Online" (Tingshen Gongkai), focusing on a comprehensive list of intellectual property cases that were live-streamed during the time period of this study. The videos of all past live-streamed trials are available online, along with basic case information such as the court, case number, case type, names of plaintiffs and defendants, and the exact trial date. See Figure A15 for a sample trial recording. From this information, I identify the rollout timing for each court: I use the date of the first ever video-streamed case as the starting date of the reform for a specific court.

2.3.4 Summary of Data

Table 2.1 below shows summary statistics of the main variables.
In Panel A, I present variables related to the identity of the plaintiffs: whether the plaintiff is local (15%), state-owned (13.5%), or publicly listed (17.5%), its registered capital (568.45 million CNY on average), and its estimated number of employees (745.3) according to its registered information in the social security system. Panel B presents the characteristics of cases, such as the win rate of plaintiffs (75.2%), the court fee (2,864.6 CNY on average), and whether the defendant was absent from trial (37.8%).

Table 2.1: Summary Statistics of Key Variables

                                       Obs      Mean       Std. Dev    Min    Max
Panel A. Identity of Plaintiffs
Local Firm (yes=1)                    42,101    0.1500     0.3571      0      1
State-owned Firm (yes=1)              42,101    0.1348     0.3415      0      1
Listed Firm (yes=1)                   42,101    0.1747     0.3797      0      1
Registered Capital (in million CNY)   42,101    568.45     2,515.98    1      205,000
Number of Employees                   42,101    745.30     2,699.76    0      70,416
Panel B. Characteristics of Cases
Plaintiff Win                         42,101    0.7515     0.4321      0      1
Court Fee                             42,101    2,864.58   12,068.63   10     641,800
Trial in Absentia                     42,101    0.3775     0.4848      0      1

Note: This table shows the summary statistics of the key variables used in this study. Variables in Panel A are mainly from the business registration database Qichacha, except the first one. Variables in Panel B are derived from the judgment files on the CJO portal. Descriptive statistics include the number of observations, mean, standard deviation, minimum, and maximum.

2.4 Empirical Strategy

In this section, I use the following difference-in-differences (DiD) model to study the effects of the judicial transparency reform on judicial outcomes. Each time period is one quarter (3 months) because of the fast roll-out between 2016 and 2018. The data are aggregated to the court level.
Y_it = β · Treated_it + α_i + θ_t + ε_it    (2.1)

In this specification, Y_it is the outcome for local court i at time t, which is either the win rate of plaintiffs (for the baseline and intensive margin) or a characteristic of cases (for the extensive margin); Treated_it is an indicator of the treatment status of court i at quarter t; α_i are court fixed effects, controlling for systematic differences in outcomes across courts; θ_t are quarter fixed effects, controlling for changes common to all courts. Standard errors are clustered at the local court level.

2.5 Results

2.5.1 Baseline Effects

Table 2.2 below presents the baseline DiD results on how the judicial transparency reform changes the win rate of different types of plaintiffs. The results in columns (1) and (2) show that, after the reform, local courts no longer rule as favorably toward local plaintiffs: their average win rate significantly falls by 4.5 percentage points. However, the win rate of external plaintiffs also decreases, by 4.9 percentage points, representing a 7.4% reduction from the baseline average win rate. The impacts of the reform appear to be heterogeneous for state-owned and private firms (columns 3 and 4): private firms have a significantly lower win rate after the reform.

Table 2.2: Open Trial Reform and Plaintiffs' Win Rate

                  Local        External     State-owned   Private
                  Plaintiff    Plaintiff    Plaintiff     Plaintiff
                  (1)          (2)          (3)           (4)
Post Reform       -0.0445**    -0.0487***   -0.0263       -0.0641***
                  (0.0224)     (0.0184)     (0.0316)      (0.0172)
Mean of Outcome   0.6556       0.6590       0.7029        0.6510
Court FE          Y            Y            Y             Y
Quarter FE        Y            Y            Y             Y
Observations      1,212        3,085        1,265         3,042

Notes: This table reports the DiD estimates on judicial outcomes in all intellectual property lawsuits, with data aggregated to the court-quarter level. Standard errors in parentheses. * p < 0.10, ** p < 0.05, *** p < 0.01

The means of the outcome show an interesting discrepancy: external plaintiffs appear to have a higher win rate on average.
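Such a discrepancy can arise mechanically from unweighted court-level aggregation. A toy example (the numbers are made up purely for illustration): a court that hears only one external-plaintiff case, and wins it, pulls the unweighted court-level mean above the case-level win rate.

```python
import pandas as pd

# Made-up cases: court A hears many external-plaintiff cases,
# court B hears only one (a win).
cases = pd.DataFrame({
    "court": ["A"] * 10 + ["B"],
    "external_win": [1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1],
})

# Case-level win rate: 6 wins out of 11 cases.
case_level = cases["external_win"].mean()                           # 6/11 ≈ 0.545

# Unweighted court-level mean: (0.5 + 1.0) / 2 = 0.75.
court_level = cases.groupby("court")["external_win"].mean().mean()  # 0.75
```

Court B's single case receives the same weight as all ten of court A's cases, so the court-level average overstates the case-level win rate.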
However, this is because I am using court-level data: many courts have only a small number of cases with external plaintiffs but a high win rate on them, which distorts the average when the data are aggregated to the court level. In the case-level data, the win rate of local plaintiffs is 5% higher than that of external plaintiffs.

2.5.2 Intensive Margin and Extensive Margin

My baseline finding shows that the win rate decreases for most plaintiffs. However, the underlying mechanism remains unanswered: is it because judges rule differently after the reform, or because the composition of cases has changed? Under the former, the transparency reform increases the visibility of the entire judicial process and might force judges to make more impartial judgments (intensive margin). Under the latter, firms may change their litigation and trial strategies in response to the reform: disadvantaged groups may be more likely to initiate a case after the reform because of improved transparency, or may increase their participation during the court trial (extensive margin). In this section, I identify these two mechanisms separately and show that the effect of the reform comes mainly from the extensive margin. To isolate the intensive-margin effect of the reform, I focus exclusively on cases that proceeded to trial within three months before and after the implementation of the reform.
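All of the estimates in this chapter come from specification (2.1). As a concrete illustration, a minimal two-way fixed-effects version on simulated court-quarter data (the variable names, the staggered roll-out dates, and the data-generating process are invented; the clustered standard errors used in the paper are omitted here for brevity):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
rows = []
for c in range(40):
    start = rng.integers(3, 10)            # staggered roll-out quarter (invented)
    for t in range(12):
        treated = int(t >= start)
        # True treatment effect set to -0.05 in this simulation.
        y = 0.65 + 0.005 * c + 0.01 * t - 0.05 * treated + rng.normal(0, 0.02)
        rows.append({"court": c, "quarter": t, "treated": treated, "win_rate": y})
df = pd.DataFrame(rows)

# Y_it = beta * Treated_it + alpha_i + theta_t + eps_it, estimated by OLS
# with court and quarter dummies (two-way fixed effects).
X = pd.concat([df[["treated"]],
               pd.get_dummies(df["court"], prefix="c", drop_first=True),
               pd.get_dummies(df["quarter"], prefix="q", drop_first=True)],
              axis=1).astype(float)
X.insert(0, "const", 1.0)
beta, *_ = np.linalg.lstsq(X.to_numpy(), df["win_rate"].to_numpy(), rcond=None)
did_estimate = beta[X.columns.get_loc("treated")]   # ≈ -0.05
```

Because the simulated effect is homogeneous, the two-way fixed-effects estimator recovers the true coefficient; with staggered adoption and heterogeneous effects, more robust estimators may be needed.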
Table 2.3: Intensive Margin: Change in Judges' Rulings

                  Local        External     State-owned   Private
                  Plaintiff    Plaintiff    Plaintiff     Plaintiff
                  (1)          (2)          (3)           (4)
Post Reform       0.0008       0.0144       0.0147        -0.0396
                  (0.0470)     (0.0341)     (0.0539)      (0.0365)
Mean of Outcome   0.5991       0.5966       0.6320        0.5929
Court FE          Y            Y            Y             Y
Quarter FE        Y            Y            Y             Y
Observations      319          536          351           543

Notes: This table reports the DiD estimates on judicial outcomes in all intellectual property lawsuits, with data aggregated to the court-quarter level and limited to three months before and after the treatment. Standard errors in parentheses. * p < 0.10, ** p < 0.05, *** p < 0.01

Given the typically prolonged duration of the judicial process, it is highly unlikely that the composition of cases would change within this short window. Therefore, by comparing rulings just before and after the reform, I can derive a reliable estimate of the intensive margin of the reform. The results show that the reform has little intensive-margin effect: the win rates of the four groups (local vs. external, state-owned vs. private) are quite similar before and after the reform, with no significant differences.

Second, by comparing the composition of cases before and after the reform, I examine the reform's extensive margin. The results show significant changes in the composition of cases, which means the reform does have an extensive margin. Although the shares of external and state-owned plaintiffs have not changed, plaintiffs tend to be larger firms in terms of number of employees (23.4%) and registered capital (23.3%). With regard to case characteristics, there is no significant decrease in the court fee, which means the cases on average involve similar amounts of money. However, the rate of defendant absentia significantly decreases.
After the open trial reform, defendants are more likely to attend the court trial, which explains why the win rate of plaintiffs decreases after the reform.

Table 2.4: Extensive Margin: Change in Case Composition

                  Characteristics of Plaintiff                      Characteristics of Case
                  Share of     Share of      Number of   Registered  Court      Defendant
                  External     State-owned   Employees   Capital     Fee        Absentia
                  Plaintiff    Plaintiff
                  (1)          (2)           (3)         (4)         (5)        (6)
Post Reform       0.0162       -0.0018       238.63**    12460.78*   -93.44     -0.0286*
                  (0.0114)     (0.0131)      (100.32)    (6521.64)   (400.39)   (0.037)
Mean              0.8660       0.1415        1021.85     53459.61    11796.67   0.3410
Court FE          Y            Y             Y           Y           Y          Y
Quarter FE        Y            Y             Y           Y           Y          Y
Observations      3,657        3,657         3,411       3,657       3,657      3,657

Notes: This table reports the DiD estimates on case composition in all intellectual property lawsuits, with data aggregated to the court-quarter level. Columns (1) to (4) report the change in plaintiff characteristics after the reform, and columns (5) and (6) show the effect of the reform on the amount of money involved in the case and on trial participation. Standard errors in parentheses. * p < 0.10, ** p < 0.05, *** p < 0.01

2.5.3 Case Level Analysis

In addition to the court-level analysis, I conduct a robustness check in which I do not aggregate the data to the court-quarter level but instead estimate the DiD model directly at the case level. The results suggest that the reform decreases the win rate of local plaintiffs but has no effect on the win rate of external plaintiffs; it also appears to have no effect on the win rate of either state-owned or private plaintiffs.
Table 2.5: Case Level: Open Trial Reform and Plaintiffs' Win Rate

                  Local        External     State-owned   Private
                  Plaintiff    Plaintiff    Plaintiff     Plaintiff
                  (1)          (2)          (3)           (4)
Post Reform       -0.0942**    -0.0241      -0.0444       -0.0322
                  (0.0393)     (0.0267)     (0.0395)      (0.0282)
Mean of Outcome   0.7501       0.7043       0.7512        0.7047
Court FE          Y            Y            Y             Y
Month FE          Y            Y            Y             Y
Observations      5,822        26,769       5,446         27,172

Notes: This table reports the DiD estimates on judicial outcomes in all intellectual property lawsuits with case-level data. The time fixed effect has been changed from quarter to month. Standard errors in parentheses. * p < 0.10, ** p < 0.05, *** p < 0.01

The contrast between the case-level and court-level results reveals a larger picture of the effect of this policy. In courts with large caseloads, usually in more developed areas, the judicial reform does improve judicial fairness by decreasing the win rate of local plaintiffs. In courts with smaller caseloads, however, the reform decreases the win rate of private firms. This is an important characteristic of the reform: even though the government issued the same policy everywhere, compliance varies across courts and geographic areas. During the period of this study, the compliance rate (the share of broadcast trials out of all trials) was 0.3% in 2016, 3.2% in 2017, 6.4% in 2018, 17.6% in 2019, and 30.3% in 2020. These differences in compliance may explain the differences in the effect of the same policy. When I use the compliance rate instead of the treatment indicator as the independent variable, I find that only the win rate of private plaintiffs significantly decreases.

2.6 Conclusion

Judicial transparency has long been regarded as an efficient way to improve judicial justice and fairness. However, this study provides important insights into the complex effects of improving transparency in China's legal system.
Contrary to what many might expect, making court proceedings more open and accessible did not necessarily lead to fairer outcomes for everyone. Instead, the changes appear to have made it harder for most types of plaintiffs, except state-owned enterprises, to win their cases. This suggests that transparency alone does not guarantee fairness; local protectionism and political prestige are hard to overcome. However, the transparency reform did increase the attendance rate of defendants, which raises participation in the judicial process, an unexpected result of this judicial transparency policy. The minimal impact of the policy can be attributed to two primary reasons. Firstly, there is the issue of low compliance rates within courts: even in the final year of the study, the compliance rate hovered around only 30%, which significantly diminishes the potential impact of the reform. Secondly, the selectiveness of openness plays a role. Analysis of case-level data reveals that openness is quite selective; cases involving local firms as plaintiffs are less likely to be open to the public, with only 4.9% of such trials being open. This selective approach to openness also contributes to the limited effectiveness of the policy. The study's findings are significant because they shed light on the real-world impact of legal reforms in a major developing country. They also add to our understanding of how political connections can influence legal decisions and how efforts to fight corruption through legal reforms might work in practice. This paper speaks not only to those interested in legal reforms and fairness but also contributes to ongoing discussions about the role of politics in the judiciary. In future research, I plan to utilize my unique dataset to investigate the factors influencing the policy's compliance rate and to examine whether there were any other significant judicial reforms during this time period.
This will enable me to address further questions on how to enhance the fairness of the judicial system.

Chapter 3
Journal Rankings and Retractions: A Regression Discontinuity Analysis

3.1 Introduction

Past studies have viewed a good reputation as an important factor that improves agents' performance (Banerjee and Duflo, 2000; Roberts and Dowling, 2002; Rindova et al., 2005; Jin and Kato, 2006; Chen and Wu, 2021), while often overlooking its potential downsides. However, a good reputation is a double-edged sword: although it increases visibility, it also makes any mistakes or shortcomings easier to detect and report. This chapter explores the potential downsides of a good reputation within the context of academic journals. Academic journals provide an ideal setting for this study for several reasons. Firstly, the market is entirely reputation-based: the reputation of journals significantly influences the perceived quality and impact of their papers. Secondly, data on errors are more readily accessible and detailed than in most other contexts. I utilize one of the most comprehensive datasets available, Retraction Watch, which records detailed information about paper retractions in journals, to examine how a good reputation affects the detection of problematic papers. How does a better reputation affect the detection of misbehavior, specifically in terms of the number of retractions? There are two distinct effects to consider. On one hand, a good reputation might decrease the number of problematic papers published in the future. As a journal becomes more renowned, its heightened visibility makes problematic papers easier to spot, thus deterring authors of such papers from submitting to the journal. Consequently, we might observe a decrease in retractions associated with an enhanced reputation.
On the other hand, a good reputation could increase the number of retractions of past papers, as the increased visibility exposes issues that the authors can no longer rectify. My empirical approach utilizes two distinct datasets. The first captures exogenous changes in journal rankings, similar to what was explored in Chapter 1. I leverage a unique feature of the journal ranking system: quartile rankings. The Journal Citation Reports (JCR), published by Clarivate, uses a citation metric known as the impact factor to rank journals within each subject area. Journals are then allocated to quartiles based on their relative rankings within their field: for example, a journal in the top 25% of its category is classified as a Q1 journal (top quartile), those in the 25% to 50% range as Q2, and so on. The academic community, especially in natural science departments, relies heavily on these quartile rankings to evaluate the significance of publications and to make crucial decisions regarding tenure, promotions, and funding. Additionally, in some countries, a journal's quartile classification directly affects the cash bonuses awarded to researchers. A nuanced aspect of the quartile ranking system is the tight spread of impact factors near the thresholds, where even minor differences can shift a journal into a different quartile. This observation prompted me to adopt a regression discontinuity (RD) design to precisely identify the impact of a journal's advancement to a higher quartile. Additionally, my study utilizes the article retraction data from Retraction Watch, a blog initiated in August 2010 by science writers Ivan Oransky and Adam Marcus. As of January 2024, Retraction Watch has cataloged over 50,000 entries, making it an invaluable resource for tracking retractions of scientific papers and related issues.
The combined dataset covers a 23-year period and includes more than 23,000 journals across various academic fields, representing the most comprehensive data available on journal rankings and retractions. However, given that the Journal Citation Reports are predominantly utilized in natural science departments and the Retraction Watch founders are experts in the life sciences, this paper primarily focuses on journals within the life sciences. To explore the impact on the number of retractions for papers published in these journals, I compare the difference in retraction numbers between journals positioned just below and just above the quartile cutoffs. I find that a higher quartile ranking significantly reduces the average number of retractions, affecting both older and newer papers. However, these effects are not uniform across all journals and quartile transitions: they are most pronounced in the medium quartiles (Q2 and Q3), while top-tier and bottom-tier journals exhibit little change. I also conduct a number of robustness checks. Since social science departments do not use this quartile ranking system, I find no significant difference for social science journals, which shows that my estimates are not the result of random variation. In addition, results from balanced samples present similar estimates, further confirming the robustness of my primary findings. This study makes significant contributions to two primary fields of academic interest. First, it sheds new light on the causal links between reputation and its outcomes. While empirical studies have mainly focused on how a good reputation brings good economic outcomes (Stickel, 1992; Roberts and Dowling, 2002; Rindova et al., 2005; Resnick et al., 2006; Jin and Kato, 2006; MacLeod et al., 2017), the downsides of a good reputation are little studied (Riquelme et al., 2019).
This study focuses on how a good reputation can help in detecting and preventing flaws and misbehavior. Secondly, this work enriches the empirical discourse on research misconduct and retractions. There is a growing body of work examining the rising number of cases of research misconduct and retractions. However, past studies mainly focus on the theory (Lacetera and Zirulia, 2011), evidence (Lubalin and Matheson, 1999; Pozzi and David, 2007; Redman, Yarandi, and Merz, 2008), and consequences (Budd, Sievert, and Schultz, 1998; Azoulay et al., 2015; Hussinger and Pellens, 2019; Mongeon and Larivière, 2019) of research misconduct. My contribution lies in exploring a novel dimension: how a journal's reputation affects its frequency of retractions. This has significant implications for understanding how retractions are detected and reported. The remainder of the paper is organized as follows. Section 3.2 discusses the background and data, Section 3.3 explains the empirical approach, and Section 3.4 provides results. Section 3.5 concludes and discusses the policy implications.

3.2 Background and Data

3.2.1 Journal Ranking Data

The Journal Citation Reports (JCR) is a widely used resource that provides annual metrics and benchmarks for academic journals in the sciences and social sciences. Published by Clarivate Analytics (formerly part of Thomson Reuters), the JCR offers a comprehensive and systematic means to evaluate the world's leading journals. The most well-known metric provided by the JCR, the impact factor, measures the frequency with which the average article in a journal has been cited in a particular year. It is used to assess the relative importance of a journal within its field, with higher impact factors indicating more significant influence.
The JCR also provides journal quartile rankings (Q1, Q2, Q3, Q4): journals are divided into four quartiles based on their impact factor within a specific subject category, with Q1 representing the top 25% of journals in that category. As in Chapter 1, I use the discontinuity produced by the quartile ranking as an exogenous variation in journals' reputation.

3.2.2 Paper Retraction Data

The Retraction Watch database is a unique and influential resource in the academic and scientific communities, focusing on tracking retractions of research papers. The database is part of a broader initiative by Retraction Watch, a blog launched in 2010 by Ivan Oransky, a journalist and physician, and Adam Marcus, a science writer and editor. The primary mission of Retraction Watch and its database is to increase transparency and integrity in the scientific publication process. The Retraction Watch database includes the reason for each retraction and is believed to be the largest of its kind. Retractions of scientific papers can occur for various reasons, including errors, fraud, plagiarism, ethical breaches, and other issues that compromise the reliability or integrity of the research. Before Retraction Watch, there was no centralized system for tracking these retractions, making it difficult for researchers, institutions, and the public to stay informed about withdrawn studies and the reasons behind their retraction. The Retraction Watch database fills this gap by providing a searchable repository of retracted papers across all scientific disciplines. It includes detailed information about each retraction, such as the reason for retraction, the individuals involved, the journal from which the article was retracted, and the dates of publication and retraction.
This information is critical for researchers, peer reviewers, editors, and policy-makers who aim to understand the dynamics of scientific retractions, improve the quality of scientific publications, and foster a culture of honesty and transparency in research. See Figure A16 for an example. Through its efforts, Retraction Watch has become a vital tool for accountability and has played a significant role in highlighting the importance of retracting flawed research promptly and transparently. It has also spurred discussions and policy changes regarding how retractions should be handled and communicated in the scientific community. The retraction data I use is the latest version of the database as of July 20, 2023, which includes 41,467 retraction records in total. Based on the dates of retraction and publication, I can clearly identify the JCR year to which each retraction or publication belongs.

3.3 Empirical Strategy

I use a sharp RD design based on normalized impact factor cutoffs, where journals above the cutoffs are assigned to a higher quartile. The following regression discontinuity specification estimates the impact of being ranked into a higher quartile:

Y_{i,t} = β_0 + β_1 D_{i,t} + β_2 f(X_{i,t}) + e_{i,t}    (3.1)

where D_{i,t} = 1 if X_{i,t} ≥ 0, and D_{i,t} = 0 if X_{i,t} < 0. Here Y_{i,t} is the outcome of interest for journal i in year t, and X_{i,t} is the running variable: journal i's normalized impact factor in year t, with 0 the normalized impact factor cutoff used to determine treatment. The coefficient β_1 captures the treatment effect, while f(X_{i,t}) reflects a continuous but potentially non-parametric relationship between the running variable and the outcome. I employ the data-driven approach of Calonico, Cattaneo, and Titiunik (2014) and the 'rdrobust' command of Calonico et al. (2017), incorporating a triangular kernel that weights observations by their distance to the threshold.
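A minimal hand-rolled version of the sharp RD estimator in (3.1) can illustrate the mechanics: local-linear fits with a triangular kernel on each side of the cutoff, here with a fixed bandwidth rather than the MSE-optimal choice that 'rdrobust' computes, and without the bias correction or clustered inference used in the paper.

```python
import numpy as np

def rd_estimate(x, y, h):
    """Sharp RD: local-linear fit on each side of cutoff 0 with a
    triangular kernel of bandwidth h; returns the jump at the cutoff."""
    def side_intercept(mask):
        xs, ys = x[mask], y[mask]
        w = np.clip(1 - np.abs(xs) / h, 0, None)   # triangular weights
        X = np.column_stack([np.ones_like(xs), xs])
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ ys)
        return beta[0]                             # fitted value at x = 0
    return side_intercept(x >= 0) - side_intercept(x < 0)
```

On simulated data with a known jump of 0.5 at the cutoff, for example y = 0.3x + 0.5·1{x ≥ 0}, the estimator recovers the jump.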
I calculate an MSE-optimal bandwidth h, using a linear polynomial estimated within the bandwidth on either side of the cutoff, and compute heteroskedasticity-robust standard errors clustered at the journal level. (The MSE-optimal bandwidth and polynomial vary for each outcome Y, following Calonico et al. (2018); I also run robustness checks with alternative kernels, functional forms, and bandwidths.) I restrict my sample to journals that were in the same quartile in the previous year. By doing this, I can make sure that differences in future outcomes are due to the change in quartile in the current year rather than in past years, avoiding confounding. This approach rests on two principal assumptions. The first is that the running variable is not manipulated around the threshold, as would be the case if journals attempted to manipulate their citations and thus their quartile rankings. This scenario seems improbable for several reasons. Firstly, the JCR maintains neutrality and does not exhibit bias towards any publishers
The analysis reveals no abrupt changes in these variables at the cutoff points, which supports the underlying assumption.

3.4 Results

3.4.1 Effect on Retractions of Past Papers

In this section I first explore the effect of being in a higher quartile on the average number of retractions of past papers. It is hard to define what "average" means for retractions, because papers published in any past year can still be retracted. Given that most retractions happen only 1-3 years after publication, I use the average number of publications in the past two years as the denominator to define the average number of retractions:

Average # of retractions = (# of retractions) / (average # of publications in the past two years)    (3.2)

Using this definition, I test the effect of being in a higher quartile on the number of retractions in the following years. Overall, the effect is insignificant. However, looking at the individual cutoffs between quartiles, there is an immediate and strong effect for mid-tier journals: the average number of retractions of past papers increases by 0.006, which represents a 75% increase. For the Q1/Q2 and Q3/Q4 cutoffs, the coefficients are not significant. This outcome can potentially be attributed to the value-added effect highlighted in the first chapter. Specifically, for mid-tier journals, the value-added effect is both strong and significant. This implies that for these journals, an enhancement in reputation not only boosts their visibility but also heightens the likelihood of identifying issues in previously published papers, leading to an increased rate of retractions from earlier publications.
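The outcome defined in equation (3.2) can be constructed with a lagged rolling mean of publication counts as the denominator. The journal-year panel below is synthetic, and the two-year window follows the equation as written.

```python
import pandas as pd

# Synthetic journal-year panel (counts are made up for illustration).
panel = pd.DataFrame({
    "journal": ["J1"] * 4,
    "year": [2015, 2016, 2017, 2018],
    "n_pubs": [100, 120, 140, 160],
    "n_retractions": [0, 1, 2, 1],
})

# Denominator: average publications over the past two years (eq. 3.2),
# i.e. the rolling mean of the lagged publication counts, per journal.
panel["avg_pubs_past2"] = (
    panel.groupby("journal")["n_pubs"]
         .transform(lambda s: s.shift(1).rolling(2).mean())
)
panel["avg_retractions"] = panel["n_retractions"] / panel["avg_pubs_past2"]
```

For 2017, the denominator is (100 + 120)/2 = 110, so the outcome is 2/110; the first two years are undefined because no two-year history exists.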
Table 3.1: Effect on Number of Retractions of Past Papers

Years after the quartile change   1-3       4-6
RD Estimate                       -0.001    0.001
                                  (0.002)   (0.001)
Dep var mean                      0.008     0.008
Bandwidth                         0.592     0.584
Effective Obs                     1,118     1,178

Notes: This table reports estimates of the effect of changing quartile on journals' number of retractions for life science journals. Column 1 reports the estimate of the effect of a journal having an impact factor above the threshold on the average number of retractions, in the current year through the year after next, for past papers published in the journal. Column 2 shows the same estimates for years t+3, t+4, and t+5. Each specification uses a linear polynomial, triangular kernel, and MSE-optimal bandwidth estimated following Calonico et al. (2017). Standard errors are clustered at the journal level. * p < 0.10, ** p < 0.05, *** p < 0.01.

Table 3.2: Effect on Number of Retractions of Past Papers, By Quartile

Panel A: Journals near Q1/Q2 Cutoff
Years after change in quartile rank   1-3        4-6
RD Estimate                           -0.001     -0.001
                                      (0.001)    (0.002)
Dep var mean                          0.006      0.006
Bandwidth                             0.535      0.366
Effective Obs                         763        732

Panel B: Journals near Q2/Q3 Cutoff
Years after change in quartile rank   1-3        4-6
RD Estimate                           0.006***   0.001
                                      (0.002)    (0.001)
Dep var mean                          0.008      0.007
Bandwidth                             0.213      0.584
Effective Obs                         192        181

Panel C: Journals near Q3/Q4 Cutoff
Years after change in quartile rank   1-3        4-6
RD Estimate                           -0.007     0.001
                                      (0.006)    (0.003)
Dep var mean                          0.013      0.013
Bandwidth                             0.627      0.363
Effective Obs                         249        255

Notes: This table reports estimates of the effect of changing quartile on journals' number of retractions for life science journals. Panels A, B, and C report the results near the cutoffs between Q1 and Q2, Q2 and Q3, and Q3 and Q4, respectively.
Column 1 reports the estimate of the effect of a journal having an impact factor above the threshold on the average number of retractions, in the current year through the year after next, for past papers published in the journal. Column 2 shows the same estimates for years t+3, t+4, and t+5. Each specification uses a linear polynomial, triangular kernel, and MSE-optimal bandwidth estimated following Calonico et al. (2017). Standard errors are clustered at the journal level. * p < 0.10, ** p < 0.05, *** p < 0.01.

3.4.2 Effect on Retractions of New Papers

I then examine how this exogenous change in reputation affects the number of retractions of newly published papers. The effect is not significant in the first three years, but four to six years after, the shift in quartile decreases retractions by 0.005 (50%). A plausible explanation is that a better reputation enables the journal to select better papers, or that authors of problematic papers become less likely to submit to it, so that average paper quality improves and fewer problematic papers get published. As before, the effects vary across quartiles. For top journals, the effects are mostly positive, though not significant. For lower-tier journals, such as those near the Q2/Q3 cutoff, the effect is negative and significant. This could be because top journals become stricter in their retraction policies while lower-tier journals enjoy an improvement in paper quality.

Table 3.3: Effect on Number of Problematic Papers in Future Years

Years after the quartile change   1-3       4-6
RD Estimate                       -0.000    -0.005*
                                  (0.001)   (0.003)
Dep var mean                      0.008     0.010
Bandwidth                         0.540     0.520
Effective Obs                     1,836     1,719

Notes: This table reports estimates of the effect of changing quartile on journals' number of retractions for life science journals.
Column 1 reports the estimate of the effect of a journal having an impact factor above the threshold on the average number of retractions, in the current year through the year after next, for new papers published in the journal. Column 2 shows the same estimates for years t+3, t+4, and t+5. Each specification uses a linear polynomial, triangular kernel, and MSE-optimal bandwidth estimated following Calonico et al. (2017). Standard errors are clustered at the journal level. * p < 0.10, ** p < 0.05, *** p < 0.01.

Table 3.4: Effect on Number of Problematic Papers in Future Years, By Quartile

Panel A: Journals near Q1/Q2 Cutoff
Years after change in quartile rank   1-3        4-6
RD Estimate                           0.000      0.000
                                      (0.001)    (0.002)
Dep var mean                          0.006      0.007
Bandwidth                             0.685      0.444
Effective Obs                         1,049      975

Panel B: Journals near Q2/Q3 Cutoff
Years after change in quartile rank   1-3        4-6
RD Estimate                           -0.000     -0.013
                                      (0.002)    (0.008)
Dep var mean                          0.009      0.013
Bandwidth                             0.312      0.260
Effective Obs                         508        473

Panel C: Journals near Q3/Q4 Cutoff
Years after change in quartile rank   1-3        4-6
RD Estimate                           0.002      0.003
                                      (0.003)    (0.003)
Dep var mean                          0.013      0.016
Bandwidth                             0.306      0.171
Effective Obs                         496        471

Notes: This table reports estimates of the effect of changing quartile on journals' number of retractions for life science journals. Panels A, B, and C report the results near the cutoffs between Q1 and Q2, Q2 and Q3, and Q3 and Q4, respectively. Column 1 reports the estimate of the effect of a journal having an impact factor above the threshold on the average number of retractions, in the current year through the year after next, for new papers published in the journal. Column 2 shows the same estimates for years t+3, t+4, and t+5. Each specification uses a linear polynomial, triangular kernel, and MSE-optimal bandwidth estimated following Calonico et al. (2017). Standard errors are clustered at the journal level. * p < 0.10, ** p < 0.05, *** p < 0.01.
3.5 Conclusion

In this paper, I use the journal quartile ranking system and a comprehensive dataset on retractions to show the causal effect of reputation on mistake detection and revelation. The analysis indicates that a better reputation causes a decreased probability of publishing problematic papers in the future and an increase in the average number of retractions reported for already-published papers. These effects come mainly from mid-tier journals, while both top and bottom journals are not significantly affected by reputation. Future studies should consider the downsides of reputation in other contexts, such as schools, companies, and individual scholars, where reputation may operate through different mechanisms. Understanding these nuances can help in designing well-functioning reputation-based markets and is an exciting area for future research.

Conclusion

This dissertation contributes to the literature on reputation and judicial transparency by systematically analyzing their effects on academic journal citations, retractions, and judicial outcomes in China through three interrelated essays. Utilizing robust econometric methodologies, each chapter offers empirical evidence that enhances our understanding of these mechanisms within economic frameworks. In the first chapter, "Reputation, Reputation, Reputation: Evidence from Academic Journals," I exploit natural discontinuities in journal rankings to identify the impact of reputation shocks on citation counts. The findings suggest a significant positive effect of reputation on citation rates, particularly among higher-prestige and rising-rank journals. Further analysis of mechanisms shows the predominance of selection effects in top-tier journals and the critical role of value-added effects in medium-tier journals.
The second chapter leverages a staggered difference-in-differences design to assess the causal impact of China's judicial transparency reform on mitigating local protectionism and the effects of political connections. The empirical strategy exploits the exogenous variation introduced by the reform's phased implementation, revealing a statistically significant reduction in the success rates of both local and external plaintiffs and of private entities, with a smaller effect on state-owned enterprises. This reduction reflects generally low compliance and heterogeneous treatment effects across courts. It underscores the importance of transparency as a mechanism for reducing informational asymmetries and improving judicial accountability, aligning with theories of legal efficiency and the role of institutional reforms in enhancing procedural fairness.

The third chapter explores the adverse consequences of reputation by examining its influence on the probability of publishing erroneous research and the rate of subsequent retractions. Employing a dataset on journal retractions and using the journal quartile ranking system as an exogenous shock, this analysis presents evidence of a deterrent effect of high reputation on the incidence of publishing problematic papers, especially among mid-tier journals. This result is interpreted through the lens of reputation as a quality signal that incentivizes rigorous peer review and error detection, thereby contributing to the discourse on publication ethics and the economics of information. Collectively, these essays advance the field of economics by applying rigorous empirical methods to dissect the complex effects of reputation and transparency on academic and judicial outcomes.
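The staggered difference-in-differences design described above builds on the canonical 2x2 comparison: the change in outcomes for treated units before and after treatment, net of the contemporaneous change for untreated units. As a toy sketch (with hypothetical numbers, not the dissertation's court-level estimation, which handles staggered adoption dates and many courts):

```python
# Canonical 2x2 difference-in-differences, the building block of a staggered
# design: (treated_post - treated_pre) - (control_post - control_pre).

def did_estimate(obs):
    """obs: iterable of (treated: bool, post: bool, outcome: float)."""
    cells = {}  # (treated, post) -> list of outcomes
    for treated, post, y in obs:
        cells.setdefault((treated, post), []).append(y)
    mean = lambda v: sum(v) / len(v)
    return ((mean(cells[(True, True)]) - mean(cells[(True, False)]))
            - (mean(cells[(False, True)]) - mean(cells[(False, False)])))

# Hypothetical win rates: a common trend of +0.02 plus a -0.045 reform effect
# for the reformed ("treated") courts.
example = [
    (False, False, 0.50), (False, True, 0.52),   # not-yet-reformed courts
    (True, False, 0.55), (True, True, 0.525),    # reformed courts
]
```

Under the parallel-trends assumption, the common +0.02 trend differences out and `did_estimate(example)` returns the -0.045 reform effect built into the hypothetical numbers.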
This dissertation not only fills pivotal gaps in the literature but also provides policy-relevant insights that underscore the significance of enhancing transparency and maintaining reputational integrity in fostering efficient and accountable institutions.

Bibliography

Ahl, Björn, Lidong Cai, and Chao Xi. 2019. “Data-driven approaches to studying Chinese judicial practice.” China Review 19 (2):1–14.
Alessandro, Martin, Bruno Cardinale Lagomarsino, Carlos Scartascini, Jorge Streb, and Jerónimo Torrealday. 2021. “Transparency and trust in government. Evidence from a survey experiment.” World Development 138:105223.
Amirapu, Amrit. 2021. “Justice delayed is growth denied: The effect of slow courts on relationship-specific industries in India.” Economic Development and Cultural Change 70 (1):415–451.
Ash, Elliott, Daniel L Chen, and Suresh Naidu. 2022. “Ideas Have Consequences: The Impact of Law and Economics on American Justice.” Working Paper 29788, National Bureau of Economic Research. URL http://www.nber.org/papers/w29788.
Avis, Eric, Claudio Ferraz, and Frederico Finan. 2018. “Do government audits reduce corruption? Estimating the impacts of exposing corrupt politicians.” Journal of Political Economy 126 (5):1912–1964.
Azoulay, Pierre, Jeffrey L Furman, Joshua L Krieger, and Fiona Murray. 2015. “Retractions.” Review of Economics and Statistics 97 (5):1118–1136.
Banerjee, Abhijit V, Rukmini Banerji, Esther Duflo, Rachel Glennerster, and Stuti Khemani. 2010. “Pitfalls of participatory programs: Evidence from a randomized evaluation in education in India.” American Economic Journal: Economic Policy 2 (1):1–30.
Banerjee, Abhijit V. and Esther Duflo. 2000. “Reputation Effects and the Limits of Contracting: A Study of the Indian Software Industry.” The Quarterly Journal of Economics 115 (3):989–1017. URL https://doi.org/10.1162/003355300554962.
Berg, Florian, Florian Heeb, and Julian F Kölbel. 2022. “The economic impact of ESG ratings.” Available at SSRN 4088545.
Bonica, Adam and Maya Sen. 2021. “Estimating judicial ideology.” Journal of Economic Perspectives 35 (1):97–118.
Brogaard, Jonathan, Joseph Engelberg, and Christopher A. Parsons. 2014. “Networks and productivity: Causal evidence from editor rotations.” Journal of Financial Economics 111 (1):251–270. URL https://www.sciencedirect.com/science/article/pii/S0304405X13002687.
Budd, John M, MaryEllen Sievert, and Tom R Schultz. 1998. “Phenomena of retraction: reasons for retraction and citations to the publications.” JAMA 280 (3):296–297.
Calonico, Sebastian, Matias D Cattaneo, and Max H Farrell. 2018. “On the effect of bias estimation on coverage accuracy in nonparametric inference.” Journal of the American Statistical Association 113 (522):767–779.
———. 2020. “Optimal bandwidth choice for robust bias-corrected inference in regression discontinuity designs.” The Econometrics Journal 23 (2):192–210.
Calonico, Sebastian, Matias D. Cattaneo, Max H. Farrell, and Rocío Titiunik. 2017. “Rdrobust: Software for Regression-discontinuity Designs.” The Stata Journal 17 (2):372–404. URL https://doi.org/10.1177/1536867X1701700208.
Calonico, Sebastian, Matias D. Cattaneo, and Rocio Titiunik. 2014. “Robust Nonparametric Confidence Intervals for Regression-Discontinuity Designs.” Econometrica 82 (6):2295–2326. URL https://onlinelibrary.wiley.com/doi/abs/10.3982/ECTA11757.
Card, David and Stefano DellaVigna. 2013. “Nine Facts about Top Journals in Economics.” Journal of Economic Literature 51 (1):144–61. URL https://www.aeaweb.org/articles?id=10.1257/jel.51.1.144.
———. 2014. “Page Limits on Economics Articles: Evidence from Two Journals.” Journal of Economic Perspectives 28 (3):149–68. URL https://www.aeaweb.org/articles?id=10.1257/jep.28.3.149.
———. 2020. “What Do Editors Maximize? Evidence from Four Economics Journals.” The Review of Economics and Statistics 102 (1):195–217. URL https://doi.org/10.1162/rest_a_00839.
Card, David, Stefano DellaVigna, Patricia Funk, and Nagore Iriberri. 2019. “Are Referees and Editors in Economics Gender Neutral?” The Quarterly Journal of Economics 135 (1):269–327. URL https://doi.org/10.1093/qje/qjz035.
Cattaneo, Matias D., Michael Jansson, and Xinwei Ma. 2018. “Manipulation Testing Based on Density Discontinuity.” The Stata Journal 18 (1):234–261. URL https://doi.org/10.1177/1536867X1801800115.
Chemin, Matthieu. 2009. “Do judiciaries matter for development? Evidence from India.” Journal of Comparative Economics 37 (2):230–250.
Chen, Maggie X and Min Wu. 2021. “The value of reputation in trade: Evidence from Alibaba.” Review of Economics and Statistics 103 (5):857–873.
Chen, Ting and James Kai-sing Kung. 2019. “Busting the “Princelings”: The campaign against corruption in China’s primary land market.” The Quarterly Journal of Economics 134 (1):185–226.
Djankov, Simeon, Rafael La Porta, Florencio Lopez-de-Silanes, and Andrei Shleifer. 2003. “Courts.” The Quarterly Journal of Economics 118 (2):453–517.
———. 2010. “Disclosure by politicians.” American Economic Journal: Applied Economics 2 (2):179–209.
Drivas, Kyriakos and Dimitris Kremmydas. 2020. “The Matthew effect of a journal’s ranking.” Research Policy 49 (4):103951. URL https://www.sciencedirect.com/science/article/pii/S0048733320300317.
Eaton, Jonathan and Mark Gersovitz. 1981. “Debt with Potential Repudiation: Theoretical and Empirical Analysis.” The Review of Economic Studies 48 (2):289–309. URL http://www.jstor.org/stable/2296886.
Ellison, Glenn. 2002a. “Evolving Standards for Academic Publishing: A q-r Theory.” Journal of Political Economy 110 (5):994–1034. URL http://www.jstor.org/stable/10.1086/341871.
———. 2002b. “The Slowdown of the Economics Publishing Process.” Journal of Political Economy 110 (5):947–993. URL http://www.jstor.org/stable/10.1086/341868.
Ferraz, Claudio and Frederico Finan. 2011.
“Electoral accountability and corruption: Evidence from the audits of local governments.” American Economic Review 101 (4):1274–1311.
Fisman, Raymond. 2001. “Estimating the value of political connections.” American Economic Review 91 (4):1095–1102.
Fisman, Raymond and Yongxiang Wang. 2015. “The mortality cost of political connections.” The Review of Economic Studies 82 (4):1346–1382.
Fudenberg, Drew and David K. Levine. 1989. “Reputation and Equilibrium Selection in Games with a Patient Player.” Econometrica 57 (4):759–778. URL http://www.jstor.org/stable/1913771.
Gonon, François, Jan-Pieter Konsman, David Cohen, and Thomas Boraud. 2012. “Why Most Biomedical Findings Echoed by Newspapers Turn Out to be False: The Case of Attention Deficit Hyperactivity Disorder.” PLOS ONE 7 (9):1–11. URL https://doi.org/10.1371/journal.pone.0044275.
Heckman, James J. and Sidharth Moktan. 2020. “Publishing and Promotion in Economics: The Tyranny of the Top Five.” Journal of Economic Literature 58 (2):419–70. URL https://www.aeaweb.org/articles?id=10.1257/jel.20191574.
Hussinger, Katrin and Maikel Pellens. 2019. “Scientific misconduct and accountability in teams.” PLOS ONE 14 (5):e0215962.
Jin, Ginger Zhe and Andrew Kato. 2006. “Price, quality, and reputation: evidence from an online field experiment.” The RAND Journal of Economics 37 (4):983–1005. URL https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1756-2171.2006.tb00067.x.
Khwaja, Asim Ijaz and Atif Mian. 2005. “Do lenders favor politically connected firms? Rent provision in an emerging financial market.” The Quarterly Journal of Economics 120 (4):1371–1411.
Kolstad, Ivar and Arne Wiig. 2009. “Is transparency the key to reducing corruption in resource-rich countries?” World Development 37 (3):521–532.
Kreps, David M and Robert Wilson. 1982. “Reputation and imperfect information.” Journal of Economic Theory 27 (2):253–279. URL https://www.sciencedirect.com/science/article/pii/0022053182900308.
Lacetera, Nicola and Lorenzo Zirulia. 2011. “The economics of scientific misconduct.” The Journal of Law, Economics, & Organization 27 (3):568–603.
Liebman, Benjamin L, Margaret E Roberts, Rachel E Stern, and Alice Z Wang. 2020. “Mass digitization of Chinese court decisions: How to use text as data in the field of Chinese law.” Journal of Law and Courts 8 (2):177–201.
Liu, E, Y Lu, W Peng, and S Wang. 2023. “Court Capture, Local Protectionism, and Economic Integration: Evidence from China.”
Lubalin, James S and Jennifer L Matheson. 1999. “The fallout: What happens to whistleblowers and those accused but exonerated of scientific misconduct?” Science and Engineering Ethics 5:229–250.
Luca, Michael and Oren Reshef. 2021. “The Effect of Price on Firm Reputation.” Management Science 67 (7):4408–4419. URL https://doi.org/10.1287/mnsc.2021.4049.
MacLeod, W. Bentley, Evan Riehl, Juan E. Saavedra, and Miguel Urquiola. 2017. “The Big Sort: College Reputation and Labor Market Outcomes.” American Economic Journal: Applied Economics 9 (3):223–61. URL https://www.aeaweb.org/articles?id=10.1257/app.20160126.
MacLeod, W. Bentley and Miguel Urquiola. 2015. “Reputation and School Competition.” American Economic Review 105 (11):3471–88. URL https://www.aeaweb.org/articles?id=10.1257/aer.20130332.
McDevitt, Ryan C. 2011. “Names and Reputations: An Empirical Analysis.” American Economic Journal: Microeconomics 3 (3):193–209. URL https://www.aeaweb.org/articles?id=10.1257/mic.3.3.193.
Mehmood, Sultan. 2022. “The impact of Presidential appointment of judges: Montesquieu or the Federalists?” American Economic Journal: Applied Economics 14 (4):411–445.
Milgrom, Paul and John Roberts. 1982. “Predation, reputation, and entry deterrence.” Journal of Economic Theory 27 (2):280–312. URL https://www.sciencedirect.com/science/article/pii/002205318290031X.
Mongeon, P and Vincent Larivière. 2019.
“The consequences of retractions for co-authors: Scientific fraud and error in biomedicine.” STI 2014 Leiden:404.
Peisakhin, Leonid. 2012. “Transparency and corruption: Evidence from India.” The Journal of Law and Economics 55 (1):129–149.
Pozzi, Andrea and P David. 2007. “Empirical Realities of Scientific Misconduct in Publicly Funded Research: What Can Be Learned from the Data?” In ESF-ORI first world conference on scientific integrity—Fostering responsible research.
Ràfols, Ismael, Jordi Molas-Gallart, Diego Andrés Chavarro, and Nicolas Robinson-Garcia. 2016. “On the dominance of quantitative evaluation in ‘peripheral’ countries: Auditing research with technologies of distance.” Available at SSRN 2818335.
Redman, Barbara K, Hossein N Yarandi, and Jon F Merz. 2008. “Empirical developments in retraction.” Journal of Medical Ethics 34 (11):807–809.
Resnick, Paul, Richard Zeckhauser, John Swanson, and Kate Lockwood. 2006. “The value of reputation on eBay: A controlled experiment.” Experimental Economics 9 (2):79–101. URL https://doi.org/10.1007/s10683-006-4309-2.
Rindova, Violina P., Ian O. Williamson, Antoaneta P. Petkova, and Joy Marie Sever. 2005. “Being Good or Being Known: An Empirical Examination of the Dimensions, Antecedents, and Consequences of Organizational Reputation.” The Academy of Management Journal 48 (6):1033–1049. URL http://www.jstor.org/stable/20159728.
Riquelme, Isabel P, Sergio Román, Pedro J Cuestas, and Dawn Iacobucci. 2019. “The dark side of good reputation and loyalty in online retailing: When trust leads to retaliation through price unfairness.” Journal of Interactive Marketing 47 (1):35–52.
Roberts, Peter W. and Grahame R. Dowling. 2002. “Corporate Reputation and Sustained Superior Financial Performance.” Strategic Management Journal 23 (12):1077–1093. URL http://www.jstor.org/stable/3094296.
Hill, Ryan and Carolyn Stein. 2021. “Scooped! Estimating Rewards for Priority in Science.” Unpublished.
Shu, Fei, Wei Quan, Bikun Chen, Junping Qiu, Cassidy R Sugimoto, and Vincent Larivière. 2020. “The role of Web of Science publications in China’s tenure system.” Scientometrics 122:1683–1695.
Spence, Michael. 1973. “Job Market Signaling.” The Quarterly Journal of Economics 87 (3):355–374. URL https://doi.org/10.2307/1882010.
Standifird, Stephen S. 2001. “Reputation and e-commerce: eBay auctions and the asymmetrical impact of positive and negative ratings.” Journal of Management 27 (3):279–295. URL https://www.sciencedirect.com/science/article/pii/S0149206301000927.
Stickel, Scott E. 1992. “Reputation and Performance Among Security Analysts.” The Journal of Finance 47 (5):1811–1836. URL https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1540-6261.1992.tb04684.x.
Sukhtankar, Sandip. 2012. “Sweetening the deal? Political connections and sugar mills in India.” American Economic Journal: Applied Economics 4 (3):43–63.
Visaria, Sujata. 2009. “Legal reform and loan repayment: The microeconomic impact of debt recovery tribunals in India.” American Economic Journal: Applied Economics 1 (3):59–81.
Young, Neal S, John P A Ioannidis, and Omar Al-Ubaydli. 2008. “Why current publication practices may distort science.” PLoS Medicine 5 (10):e201.

Appendices

Appendix A: Figures

Figure A1: Impact Factor, Journal Ranking and Quartile Ranking in Journal Citation Report
Notes: This figure shows how the Journal Citation Reports rank journals within a given field according to their impact factors. It also provides information on the detailed rank and quartile.

Figure A2: Distribution of the running variable
Notes: These figures plot the distribution of the normalized impact factor around the relevant JIF thresholds. They plot histograms of the JIF distribution around the relevant threshold and a non-parametric regression for each half of the distribution testing for a discontinuity around the threshold (Calonico, Cattaneo, and Farrell, 2020).
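The density check behind Figure A2 has a simple intuition: absent manipulation of impact factors, roughly as many journals should fall just below a threshold as just above it. The following is a crude illustrative sketch of that logic, not the chapter's procedure, which uses the formal manipulation test of Cattaneo, Jansson, and Ma (2018); all names are hypothetical.

```python
# Crude density-discontinuity check: compare counts of observations in narrow
# windows on either side of the cutoff. Conditional on landing in the window,
# each observation falls above with probability 1/2 under no manipulation, so
# the count above is Binomial(n, 1/2); we report a normal-approximation
# z-statistic. (The formal Cattaneo-Jansson-Ma test instead estimates the
# density of the running variable on each side of the cutoff.)

import math

def density_jump_z(x, cutoff=0.0, window=0.05):
    above = sum(1 for xi in x if cutoff <= xi < cutoff + window)
    below = sum(1 for xi in x if cutoff - window <= xi < cutoff)
    n = above + below
    if n == 0:
        return 0.0
    # z-statistic for H0: P(above | in window) = 1/2
    return (above - n / 2) / math.sqrt(n / 4)
```

A large positive z-statistic would indicate bunching just above the threshold, the pattern one would expect if journals could push their impact factors across a quartile cutoff.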
Figure A3: Balance on previous journal characteristics: average number of citations

Figure A4: Balance on previous journal characteristics: impact factors

Figure A5: Balance on previous journal characteristics: number of publications

Figure A6: Natural Science, Total Effects (t+1)
Notes: These figures plot the relationship between the normalized impact factor and the average number of citations received in year t+2 for papers published in year t+1 (right after the release of the Journal Citation Reports). Panels (a), (b), and (c) plot the cutoffs between Q1 and Q2, Q2 and Q3, and Q3 and Q4, respectively. Samples are limited to natural science journals only. I plot the coefficient and 90% confidence intervals for each of these bins.

Figure A7: Natural Science, Total Effects (t+1)
Notes: These figures plot the difference in the average number of citations received in year t+k+1 for papers published in year t+k (k=1,2,...,9) for journals near the cutoffs. Panels (a), (b), and (c) plot the cutoffs between Q1 and Q2, Q2 and Q3, and Q3 and Q4, respectively. Samples are limited to natural science journals only. I plot the coefficient and 90% confidence intervals for each of these bins.

Figure A8: Natural Science, Total Effects (t+3)
Notes: These figures plot the difference in the average number of citations received in year t+k+3 for papers published in year t+k (k=1,2,...,9) for journals near the cutoffs. Panels (a), (b), and (c) plot the cutoffs between Q1 and Q2, Q2 and Q3, and Q3 and Q4, respectively. Samples are limited to natural science journals only. I plot the coefficient and 90% confidence intervals for each of these bins.

Figure A9: Value-added Effects
Notes: These figures plot the difference in the average number of citations received in year t+k for papers published in year t (k=1,2,...,9) for journals near the cutoffs, which shows the value-added effects. Panels (a), (b), and (c) plot the cutoffs between Q1 and Q2, Q2 and Q3, and Q3 and Q4, respectively.
Samples are limited to natural science journals only. I plot the coefficient and 90% confidence intervals for each of these bins.

Figure A10: Selection Effects
Notes: These figures plot the difference between the average number of citations received in year t+k+1 for papers published in year t+1 and the average number of citations received in year t+k for papers published in year t (k=1,2,...,9) for journals near the cutoffs, which shows the immediate selection effects. Panels (a), (b), and (c) plot the cutoffs between Q1 and Q2, Q2 and Q3, and Q3 and Q4, respectively. Samples are limited to natural science journals only. I plot the coefficient and 90% confidence intervals for each of these bins.

Figure A11: Longer-run Selection Effects
Notes: These figures plot the difference between the average number of citations received in year t+2k for papers published in year t+k and the average number of citations received in year t+k for papers published in year t (k=1,2,...,9) for journals near the cutoffs, which shows the long-run selection effects. Panels (a), (b), and (c) plot the cutoffs between Q1 and Q2, Q2 and Q3, and Q3 and Q4, respectively. Samples are limited to natural science journals only. I plot the coefficient and 90% confidence intervals for each of these bins.

Figure A12: Below vs Above
Notes: These figures plot the difference in the average number of citations received in year t+k+1 for papers published in year t+k and citations received in year t+k+3 for papers published in year t+k (k=1,2,...,9). The figures are for two different subsamples: journals below the threshold last year and journals above the threshold last year. Samples are limited to natural science journals only. I plot the coefficient and 90% confidence intervals for each of these bins.

Figure A13: Sample of Judgment File
Notes: This figure gives an example of a judgment file available on China Judgment Online.
It provides detailed information on the court name, case number, plaintiff's name, defendant's name, quoted legal provisions, court fees, judges' names, the date of trial, etc.

Figure A14: Firms' Information on Qichacha
Notes: This figure gives an example of the business registration information available on Qichacha. It provides detailed information on the firm's name, registered residence location, registered capital, type of company, etc.

Figure A15: China's Court Trials Online Portal
Notes: This figure provides an example of court trial recordings accessible through China Court Trial Online. In this particular example, a civil case, the judge, plaintiff, and defendant participated fully remotely. The trial lasted six minutes and was live streamed. This page also includes details such as the case number and court name.

Figure A16: Retraction Watch Database
Notes: This figure provides a screenshot of the Retraction Watch Database, which provides detailed information on retracted papers, including the title, reasons for retraction, date of original publication, date of retraction notice, etc.

Appendix B: Tables

Table B1: Example: Quartile Rankings in Journal Citation Reports

Journal Name                                IF      Rank     Quartile
New England Journal of Medicine             91.253  1/167    Q1
Medical Clinics of North America            5.456   22/167   Q1
Postgraduate Medicine                       3.840   41/167   Q1
DM Disease-A-Month                          3.800   42/167   Q2
BMJ Open                                    2.692   64/167   Q2
Upsala Journal of Medical Science           2.384   83/167   Q2
American Journal of the Medical Sciences    2.378   84/167   Q3
Medicina Clinica                            1.725   105/167  Q3
Australian Journal of General Practice      1.261   125/167  Q3
Danish Medical Journal                      1.240   126/167  Q4
Medicina-Buenos Aires                       0.653   150/167  Q4
Kuwait Medical Journal                      0.076   167/167  Q4

Notes: This table shows a brief distribution of impact factors and quartiles for journals under the category "Medicine, General & Internal" in 2020.
There is large variation in impact factors across journals, but the differences in JIF are small near the thresholds (between Q1 and Q2, Q2 and Q3, and Q3 and Q4). Data from other fields and other JCR years follow a similar pattern.

Table B2: Social Science Journals, Total Effect on Papers' Immediate Citations

Panel A: Journals near Q1Q2 Cutoff
Years after change in quartile rank    0         1-3       4-6       7-9
RD Estimate                            0.067     0.077     0.042     0.027
                                       (0.050)   (0.051)   (0.066)   (0.075)
Dep var mean                           2.495     2.667     2.791     2.923
Bandwidth                              0.186     0.192     0.164     0.153
Effective Obs                          9777      28400     20732     15505

Panel B: Journals near Q2Q3 Cutoff
Years after change in quartile rank    0         1-3       4-6       7-9
RD Estimate                            0.008     -0.024    -0.057    -0.015
                                       (0.031)   (0.034)   (0.046)   (0.056)
Dep var mean                           1.324     1.465     1.603     1.738
Bandwidth                              0.125     0.110     0.092     0.103
Effective Obs                          11214     29240     21055     18102

Panel C: Journals near Q3Q4 Cutoff
Years after change in quartile rank    0         1-3       4-6       7-9
RD Estimate                            -0.019    -0.017    0.009     -0.040
                                       (0.022)   (0.025)   (0.032)   (0.043)
Dep var mean                           0.721     0.840     0.968     1.098
Bandwidth                              0.130     0.131     0.133     0.126
Effective Obs                          13258     37630     30909     22997

Notes: This table reports estimates of the effect of changing quartile on social science journals' immediate citations. Panels A, B, and C report results near the cutoffs between Q1 and Q2, Q2 and Q3, and Q3 and Q4, respectively. Column 1 reports the estimated effect of a journal having an impact factor above the threshold on the average number of new citations in the current year for papers published in the previous year. Column 2 shows the estimated effects on the average number of new citations in years t + 2, t + 3, and t + 4 for papers published in years t + 1, t + 2, and t + 3, respectively (short-run effect). Columns 3 and 4 show total effects of changing quartile over the longer run. Each specification uses a linear polynomial, triangular kernel, and MSE-optimal bandwidth estimated following Calonico et al.
(2017). Standard errors are clustered at the journal level. * p < 0.10, ** p < 0.05, *** p < 0.01.

Table B3: Social Science Journals, Total Effect on Papers' Short-term Citations

Panel A: Journals near Q1Q2 Cutoff
Years after change in quartile rank    0         1-3       4-6       7-9
RD Estimate                            0.114     0.130     0.041     0.075
                                       (0.083)   (0.084)   (0.102)   (0.125)
Dep var mean                           3.890     3.991     4.198     4.403
Bandwidth                              0.170     0.181     0.162     0.178
Effective Obs                          8765      24453     17945     14545

Panel B: Journals near Q2Q3 Cutoff
Years after change in quartile rank    0         1-3       4-6       7-9
RD Estimate                            0.005     -0.022    0.029     -0.042
                                       (0.053)   (0.058)   (0.071)   (0.092)
Dep var mean                           2.070     2.194     2.390     2.609
Bandwidth                              0.125     0.105     0.113     0.093
Effective Obs                          10783     25419     21406     14002

Panel C: Journals near Q3Q4 Cutoff
Years after change in quartile rank    0         1-3       4-6       7-9
RD Estimate                            -0.032    -0.002    -0.008    -0.045
                                       (0.037)   (0.040)   (0.054)   (0.074)
Dep var mean                           1.142     1.256     1.445     1.647
Bandwidth                              0.137     0.135     0.125     0.122
Effective Obs                          13256     34732     25735     18732

Notes: This table reports estimates of the effect of changing quartile on social science journals' short-term citations. Panels A, B, and C report results near the cutoffs between Q1 and Q2, Q2 and Q3, and Q3 and Q4, respectively. Column 1 reports the estimated effect of a journal having an impact factor above the threshold on the average number of new citations in the year after next for papers published in the previous year. Column 2 shows the estimated effects on the average number of new citations in years t + 4, t + 5, and t + 6 for papers published in years t + 1, t + 2, and t + 3, respectively (short-run effect). Columns 3 and 4 show total effects of changing quartile over the longer run. Each specification uses a linear polynomial, triangular kernel, and MSE-optimal bandwidth estimated following Calonico et al. (2017). Standard errors are clustered at the journal level. * p < 0.10, ** p < 0.05, *** p < 0.01.
Table B4: All Journals, Total Effect on Papers' Immediate Citations

Panel A: Journals near Q1Q2 Cutoff
Years after change in quartile rank    0         1-3       4-6       7-9
RD Estimate                            0.060     0.077**   0.065     0.100*
                                       (0.037)   (0.038)   (0.045)   (0.052)
Dep var mean                           3.704     3.881     3.996     4.106
Bandwidth                              0.160     0.161     0.169     0.197
Effective Obs                          30168     85503     73898     66327

Panel B: Journals near Q2Q3 Cutoff
Years after change in quartile rank    0         1-3       4-6       7-9
RD Estimate                            0.032     0.026     0.017     0.021
                                       (0.023)   (0.024)   (0.029)   (0.037)
Dep var mean                           1.732     1.858     1.990     2.135
Bandwidth                              0.114     0.104     0.104     0.118
Effective Obs                          37866     98965     81873     70792

Panel C: Journals near Q3Q4 Cutoff
Years after change in quartile rank    0         1-3       4-6       7-9
RD Estimate                            0.002     0.002     0.014     -0.017
                                       (0.016)   (0.017)   (0.021)   (0.025)
Dep var mean                           0.913     1.030     1.168     1.314
Bandwidth                              0.111     0.102     0.112     0.157
Effective Obs                          41917     110366    96831     96217

Notes: This table reports estimates of the effect of changing quartile on immediate citations for all journals. Panels A, B, and C report results near the cutoffs between Q1 and Q2, Q2 and Q3, and Q3 and Q4, respectively. Column 1 reports the estimated effect of a journal having an impact factor above the threshold on the average number of new citations in the current year for papers published in the previous year. Column 2 shows the estimated effects on the average number of new citations in years t + 2, t + 3, and t + 4 for papers published in years t + 1, t + 2, and t + 3, respectively (short-run effect). Columns 3 and 4 show total effects of changing quartile over the longer run. Each specification uses a linear polynomial, triangular kernel, and MSE-optimal bandwidth estimated following Calonico et al. (2017). Standard errors are clustered at the journal level. * p < 0.10, ** p < 0.05, *** p < 0.01.
Table B5: All Journals, Total Effect on Papers' Short-run Citations

Panel A: Journals near Q1Q2 Cutoff
Years after change in quartile rank    0         1-3       4-6       7-9
RD Estimate                            0.085*    0.131***  0.078     0.131*
                                       (0.045)   (0.045)   (0.053)   (0.067)
Dep var mean                           4.603     4.675     4.815     5.001
Bandwidth                              0.163     0.175     0.184     0.213
Effective Obs                          29363     82855     69372     59390

Panel B: Journals near Q2Q3 Cutoff
Years after change in quartile rank    0         1-3       4-6       7-9
RD Estimate                            0.049*    0.031     0.059*    -0.006
                                       (0.029)   (0.030)   (0.036)   (0.056)
Dep var mean                           2.182     2.273     2.421     2.614
Bandwidth                              0.113     0.106     0.137     0.114
Effective Obs                          36065     91098     86584     58261

Panel C: Journals near Q3Q4 Cutoff
Years after change in quartile rank    0         1-3       4-6       7-9
RD Estimate                            0.012     0.011     0.015     0.007
                                       (0.020)   (0.021)   (0.027)   (0.039)
Dep var mean                           1.193     1.294     1.457     1.654
Bandwidth                              0.131     0.148     0.166     0.150
Effective Obs                          45897     132164    112042    78754

Notes: This table reports estimates of the effect of changing quartile on short-term citations for all journals. Panels A, B, and C report results near the cutoffs between Q1 and Q2, Q2 and Q3, and Q3 and Q4, respectively. Column 1 reports the estimated effect of a journal having an impact factor above the threshold on the average number of new citations in the current year for papers published in the previous year. Column 2 shows the estimated effects on the average number of new citations in years t + 2, t + 3, and t + 4 for papers published in years t + 1, t + 2, and t + 3, respectively (short-run effect). Columns 3 and 4 show total effects of changing quartile over the longer run. Each specification uses a linear polynomial, triangular kernel, and MSE-optimal bandwidth estimated following Calonico et al. (2017). Standard errors are clustered at the journal level. * p < 0.10, ** p < 0.05, *** p < 0.01.
Table B6: Natural Science Journals, Total Effect on Papers' Immediate Citations, Balanced Sample

Panel A: Journals near Q1Q2 Cutoff
Years after change in quartile rank    0         1-3       4-6       7-9
RD Estimate                            0.069     0.112**   0.113**   0.151**
                                       (0.047)   (0.048)   (0.051)   (0.060)
Dep var mean                           3.375     3.588     3.873     4.205
Bandwidth                              0.179     0.190     0.217     0.254
Effective Obs                          13821     42199     44643     48925

Panel B: Journals near Q2Q3 Cutoff
Years after change in quartile rank    0         1-3       4-6       7-9
RD Estimate                            0.068**   0.058*    0.058     0.076*
                                       (0.030)   (0.032)   (0.036)   (0.042)
Dep var mean                           1.505     1.661     1.876     2.134
Bandwidth                              0.128     0.116     0.126     0.143
Effective Obs                          17208     46461     46516     49245

Panel C: Journals near Q3Q4 Cutoff
Years after change in quartile rank    0         1-3       4-6       7-9
RD Estimate                            0.005     0.007     0.006     -0.008
                                       (0.020)   (0.022)   (0.027)   (0.031)
Dep var mean                           0.752     0.886     1.080     1.309
Bandwidth                              0.101     0.104     0.119     0.170
Effective Obs                          16931     49669     51252     62486

Notes: This table reports estimates of the effect of changing quartile on immediate citations for natural science journals in a balanced data set (all observations are from before 2011). Panels A, B, and C report results near the cutoffs between Q1 and Q2, Q2 and Q3, and Q3 and Q4, respectively. Column 1 reports the estimated effect of a journal having an impact factor above the threshold on the average number of new citations in the current year for papers published in the previous year. Column 2 shows the estimated effects on the average number of new citations in years t + 2, t + 3, and t + 4 for papers published in years t + 1, t + 2, and t + 3, respectively (short-run effect). Columns 3 and 4 show total effects of changing quartile over the longer run. Each specification uses a linear polynomial, triangular kernel, and MSE-optimal bandwidth estimated following Calonico et al. (2017). Standard errors are clustered at the journal level. * p < 0.10, ** p < 0.05, *** p < 0.01.
Abstract
This dissertation encompasses three chapters exploring the effects of reputation and judicial transparency.
Chapter 1 demonstrates that a shock to a journal's reputation has a long-lasting effect (2%-3%) on the citations of papers published in that journal. This effect is stronger for positive shocks and among more influential journals. Decomposing the effect reveals that reputation influences higher-tier journals through selection and medium-tier journals through a value-added effect.
Chapter 2 examines the effect of a judicial reform in China, known as "open trials." Utilizing a staggered difference-in-differences (DiD) research design and court-level data, I find that the win rates for local and external plaintiffs dropped by 4.5% and 4.9%, respectively. Additionally, the win rate decreases for both private firms (6.4%) and state-owned firms (2.6%). The study also assesses the intensive and extensive margins of the reform, finding that the reform does not change judges' rulings but rather the composition of cases.
Chapter 3 explores the effects of a good reputation on mistake-reporting behavior. Using the journal quartile ranking system and a comprehensive dataset on retractions, I show the causal effect of reputation on mistake detection and revelation. The analysis indicates that a better reputation leads to a decreased probability of publishing problematic papers in the future and an increase in the average number of retractions reported for published papers. These effects are primarily observed in mid-tier journals, while neither top-tier nor bottom-tier journals are affected.
Asset Metadata
Creator
Zhou, Wei (author)
Core Title
Essays on effects of reputation and judicial transparency
School
College of Letters, Arts and Sciences
Degree
Doctor of Philosophy
Degree Program
Economics
Degree Conferral Date
2024-05
Publication Date
04/30/2024
Defense Date
03/28/2024
Publisher
Los Angeles, California (original), University of Southern California (original), University of Southern California. Libraries (digital)
Tag
academic journals, intellectual property lawsuits, judicial transparency, OAI-PMH Harvest, Reputation
Format
theses (aat)
Language
English
Contributor
Electronically uploaded by the author (provenance)
Advisor
Weaver, Jeffrey (committee chair), Jia, Nan (committee member), Hsiao, Cheng, Strauss, John
Creator Email
466125782@qq.com,zhou643@usc.edu
Permanent Link (DOI)
https://doi.org/10.25549/usctheses-oUC113911866
Unique identifier
UC113911866
Identifier
etd-ZhouWei-12863.pdf (filename)
Legacy Identifier
etd-ZhouWei-12863
Document Type
Dissertation
Rights
Zhou, Wei
Internet Media Type
application/pdf
Type
texts
Source
20240503-usctheses-batch-1145 (batch), University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Access Conditions
The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the author, as the original true and official version of the work, but does not grant the reader permission to use the work if the desired use is covered by copyright. It is the author, as rights holder, who must provide use permission if such use is covered by copyright.
Repository Name
University of Southern California Digital Library
Repository Location
USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA
Repository Email
cisadmin@lib.usc.edu