DESIGNING SCHOOL SYSTEMS TO ENCOURAGE DATA USE AND INSTRUCTIONAL IMPROVEMENT: A COMPARISON OF EDUCATIONAL ORGANIZATIONS

by

Caitlin C. Farrell

A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the Requirements for the Degree
DOCTOR OF PHILOSOPHY (EDUCATION)

August 2012

Copyright 2012 Caitlin C. Farrell

ACKNOWLEDGMENTS

What I have learned in graduate school far exceeds the pages of this dissertation, and I am indebted to the generous people who have helped me along this journey. This project would not have been possible without the time and thoughtful reflection from educators in the four school systems involved in the study. I also hold dear the lessons learned from my own students and colleagues at Public School 157 in Brooklyn, NY, that set me on the education policy path in the first place.

I am most grateful to my advisors, Penny Wohlstetter and Julie Marsh, for their patient encouragement and steadfast support. They taught me to have confidence in my own scholarly voice and modeled, with style, how to balance the researcher life with the equally important commitment to family and friends. To my committee members, Gib Hentschke, Peer Fiss, and Tricia Burch, I thank you for your thoughtful insights and suggestions. Jo Smith deserves special thanks for reminding me to "keep my eye on the prize." Jonathan Mathis, Megan Chase, and the rest of the USC doctoral program have provided academic and personal support from day one. A big "thank you" goes to Tracey Weinstein, Kathryn Struthers, and Alice Huguet, who provided a willing ear to listen and the occasional glass of wine. To Carlos Mejia, my voice of reason, thank you for your patience, sense of humor, and love. I am grateful to my family and friends, especially my parents, Dee and Jack, and brother, John. With their unwavering faith, they have always been my biggest cheerleaders. Finally, this dissertation is dedicated to my grandparents: John and Marjorie Farrell and Hans and Berta Anspach. It was through their example that I first learned about hard work and the importance of education.

TABLE OF CONTENTS

Acknowledgments
List of Tables
List of Figures
Abstract

Chapter One: Designing School Systems to Support Data Use
    What is Data Use in Education?
    A Long Tradition of Using Data in Educational Decision Making
    The Promise of Data Use and Its Challenges
    Charter Management Organizations: A New Kind of School System?
        CMO Theory of Action
        The Growth of CMOs
        Inside the CMO
    Contributions of this Study
    Summary

Chapter Two: A Review of Literature
    Theoretical Background
    Data-use Cycle
    Organizational Resources
        Human Capital
        Technology and Tools
        Practices and Policies
    Organizational and Environmental Conditions
        Organizational Conditions
        Environmental Conditions
    Summary

Chapter Three: Research Methods
    Research Design: A Qualitative Approach
    Sample Selection
        Traditional School System Sample
        CMO Sample Selection
        School Selection
        Participant Selection
    School System Profiles
        Sequoia Public School District
        Mammoth Public School District
        Yosemite Charter Management Organization
        Yellowstone Charter Management Organization
    Data Collection
        Semi-Structured Interviews
        Observations and Field Notes
        Document Collection and Review
    Data Analysis
    Ensuring Research Trustworthiness
    Study Limitations
    Summary

Chapter Four: How School Systems Use Data to Improve Literacy Instruction and Organizational Performance
    Introduction
    What Types of Data were Used to Inform and Support Literacy Instruction and Organizational Performance?
        Finding #1: Educators at All Levels in All Systems Reported High Levels of Use of High-stakes State Assessment Data
        Finding #2: Teachers and School Administrators had a Higher Level of Use of Classroom Data than System Leaders
        Finding #3: System Leaders Reported a Higher Level of Use of System Assessment Data than Teachers and Administrators
        Finding #4: CMOs had Higher Levels of Use of Teacher Observation Data
        Finding #5: CMOs had Higher Levels of Use of College-ready Indicators
    How were Data Used to Inform Literacy Instruction and Organizational Performance?
        Achievement-accountability Model
        Student Learning Model
        Instructional Reflection Model
        Bureaucratic-compliance Model
        Positioning Model
        Signaling Model
    Summary

Chapter Five: Organizational Resources to Support Data Use
    Introduction
    What Organizational Resources were in Place to Support Data Use?
        Human Capital
        Technology and Tools
        Organizational Practices and Policies
    Overall Patterns
        Finding #1: Resources Support the Data-use Cycle at Different Leverage Points
        Finding #2: Resource Design Features Matter
        Finding #3: Investments in Human Capital Help Educators Determine Instructional Responses
        Finding #4: Alignment and Integration of Resources is Significant
        Finding #5: CMOs have Significant Commitments to Data-use Resources
    Summary

Chapter Six: Organizational and Environmental Factors that Shape Data Use
    Organizational Context
        Structure: Hierarchical Bureaucracy Versus Decentralized Network
        Age and Size: Established and Fixed Versus New and Growing
        Degree of Regulation and the Collective Bargaining Agreement
        Decision-making Autonomy: Top-down Versus Decentralized
        Climate: Mission-driven Versus Compliance-oriented
    Environmental Context
        Local Political Circumstances
        State, District Financial Climate
        Federal and State Accountability Systems
        Authorizers
        Market/Community Accountability
    Summary

Chapter Seven: Concluding Thoughts and Lessons Learned
    Summary of Major Findings
    Connecting Findings to Research and Theory
        Advancing Knowledge Management Theory
        Informing Data-use Research
        Understanding Charter Management Organizations
    Implications for Practice and Policy
        Implications for Practice
        Implications for Education Policy
    Recommendations for Future Research

Bibliography

Appendices
    Appendix A: Interview Protocol
    Appendix B: Informed Consent Forms
    Appendix C: Observation Protocol
    Appendix D: Initial Code List
    Appendix E: Final Code List
    Appendix F: Decision-making Criteria for Resource Commitment Categories and Designations
LIST OF TABLES

Table 2.1: Key Organizational Resources
Table 3.1: Sample Criteria Developed from Conceptual Framework
Table 3.2: School System Profiles: 2010-2011
Table 3.3: Case Study School Profiles: 2010-2011
Table 3.4: Participant List
Table 3.5: Association Between Research Questions and Sample Interview Questions
Table 3.6: Tracing "Data Use" Through the Coding Process
Table 4.1: Sources of Data Used to Inform Literacy Teaching and Learning
Table 4.2: Criteria for Assigning Levels of Data Use
Table 4.3: Reported Level of Data Use by Data Type, Respondent Group, and School System
Table 4.4: Six Models of Data Use
Table 5.1: Resources Mobilized to Enable Data Use in Schools and Systems
Table 5.2: Human Capital Resources Mobilized, by Category, by Level of Commitment
Table 5.3: Technology and Tools Mobilized, by Category, by Level of Commitment
Table 5.4: Organizational Processes, Practices by Category, by Level of Commitment
Table 5.5: How Data-use Resources Supported Data Cycle Leverage Points
Table 6.1: Comparing Contexts: Traditional Public School Districts and CMOs
Table 6.2: Linking Data-use Patterns with Organizational and Environmental Conditions
Table 6.3: Linking Resource Allocation Patterns with Organizational and Environmental Conditions

LIST OF FIGURES

Figure 1.1: CMO Theory of Action
Figure 2.1: Conceptual Framework
Figure 4.2: Comparing Classes by Performance Level in Sequoia
Figure 6.1: Embedded Context of Data Use in School Systems

ABSTRACT

Increased accountability through federal and state education policies has resulted in a growing demand for educators to access, understand, and utilize various types of data to adapt their practice. These data include student work, district benchmarks, observation of instruction, and high-stakes state assessments, among others. Despite the widespread belief that educators' data use can improve student performance, little is known about how organizational context impacts this process. This multi-case, qualitative study addresses this gap by examining two types of systems: traditional school districts and charter management organizations (CMOs). Using a framework derived from knowledge management theory, it offers a systems-level approach to consider how organizational resources – human capital, technology and tools, and organizational practices and policies – are marshaled to support effective data use.

First, analysis found that educators in all four school systems used six main types of data to inform literacy instruction: classroom, common grade, teacher observation, system, high-stakes state assessments, and college-ready indicators. Across all systems, there was a disconnect between school site and system educators concerning the use of classroom and system data. While all four systems attended to high-stakes state assessment results, the CMOs had greater reported use of teacher observation data and college-ready indicators.

The host of data was used for multiple purposes falling into six categories or "models." For the accountability-achievement, student learning, and instructional reflection models, data were instrumental in informing instruction. All four school systems engaged in behaviors of accountability-achievement data use, while the two CMOs used formal teacher observation data to reflect on instruction.
In the other three emerging models of data use – bureaucratic-compliance, positioning, and signaling – data were used for compliance purposes, as evidence for an argument or agenda, or to signal to the external community.

Turning to the resources mobilized, similarities arose across all four school systems. Different resources supported the data-use cycle at different leverage points, with design features of the resources shaping how data were gathered, accessed, and used. Second, technology/tools and organizational practices and policies were necessary but not sufficient; investments in human capital were critical components to this work. On the whole, CMOs stood out in their commitments to human capital, technology and tools, and organizational processes and policies.

These dynamics of data use and resource allocation could not be understood without attending to the organizational and environmental setting in which they unfolded. When considering the "embedded context" for data use, organizational context, local political circumstances, state/district financial environment, and federal/state accountability demands were important contextual conditions that both enabled and constrained data use and resource allocation. Finally, this study lays the groundwork for the diffusion of promising practices across differing types of school systems and within the reform community. It also develops directions for future research on accountability policies, data initiatives, and alternatives to traditional forms of school management.

CHAPTER ONE
DESIGNING SCHOOL SYSTEMS TO SUPPORT DATA USE

With the federal education policy No Child Left Behind (NCLB) and the corresponding state-level accountability systems, the drive to increase student achievement in American public schools has never been greater (Alwin, 2002; Doyle, 2003; Johnson, 2000; Lafee, 2002; McIntire, 2002; J. L. Peterson, 2007). This act of Congress mandated that states implement standards-based assessments that measure yearly grade and subject outcomes and report test results to schools, teachers, parents, and state and federal governments ("No Child Left Behind," 2001). The Obama Administration has also demonstrated its continued commitment to data use in the nation's schools, tying considerable federal funds to the building of state information systems and the capacity of each state to generate student-level information (D. Scott, 2012; US Department of Education, 2009). The recent Common Core State Standards Initiative is similarly intended to improve alignment of state standards, assessments, and student data within and across states, with an emphasis on measuring college- and career-readiness (Porter, McMaken, Hwang, & Yang, 2011). Further, district, state, and federal reform efforts around teacher quality now focus on how to link student achievement results to teacher evaluation and compensation (National Council on Teacher Quality, 2011; Podgursky & Springer, 2007; Springer et al., 2010).

Given this trend of federal and state incentives and inducements for using data, there is growing attention to educators' abilities to access, understand, and utilize student and school-level data (Ingram, Louis, & Schroeder, 2004; Marsh, Pane, & Hamilton, 2006; Picciano, 2009).
Indeed, the availability of different types of data to educators continues to grow through increased access to professional development, information systems, technology, commercial assessments, and other measurement tools (Goertz, Olah, & Riggan, 2009; Marsh, McCombs, & Martorell, 2009; Wayman & Cho, 2009; Wayman & Stringfield, 2006). Educators at all levels – classroom, school, and district – are expected to convert data into actionable knowledge in an effort to apply this new understanding towards changes in instructional practice specifically and organizational improvement more generally (Ikemoto & Marsh, 2007). Leaders of school systems now seek to understand how they can more effectively facilitate how members of their organizations collect, disseminate, and translate data and knowledge into effective decision making and action. Core to this goal is the belief that organizational resources – human capital, technology and tools, and practices and policies – need to be aligned in order to increase data flow across the organization to inform decisions (Petrides & Guiney, 2002; Petrides & Nodine, 2003). Yet little is known about how the design of education systems affects how schools are able to mobilize their resources to use data and share knowledge throughout the organization. This dissertation addresses this gap by examining how two types of education systems – traditional school districts and charter management organizations – allocate their organizational resources to support data use.

This chapter outlines the history of educational reforms that have led to the emphasis on data in school systems. A review of the research on this work follows, focusing on the need to study how organizational and environmental context shapes the practice of data use. The chapter then argues that the recent emergence of charter management organizations provides a new structure to compare to traditional school districts. It concludes with a discussion of the study's intended contributions to policy, practice, and research.

What is Data Use in Education?

Within education policy and research arenas, much recent discussion focuses on how evidence, broadly defined, can be used to inform decisions in education. One area of this work is data-driven decision making, defined as "teachers, principals, and administrators systematically collecting and analyzing data to guide a range of decisions to help improve the success of students and school" (Ikemoto & Marsh, 2007, p. 108).¹ Data use may be the "mantra of the day," but it is not a new idea (Ikemoto & Marsh, 2007). Outside of education, it can be tied back to theories from the business and management world on continuous improvement in organizations (DeFeo & Barnard, 2005; Deming, 1986; Spencer, 1994). Data use, similar to these business strategies, rests on the assumption that "people will make better, more efficient, more successful decisions if they carefully weigh the available evidence on competing options and select the one that shows the best likelihood of maximizing a valued outcome" (Coburn, Toure, & Yamashita, 2009, p. 1116).

¹ The language used to describe this phenomenon varies. Some scholars broadly refer to "evidence-based decision making," which could include decisions informed by data or research (Honig & Venkateswaran, 2012). Others focus on educators' use of data in particular, using the terms "data-driven decision making" or "data-based decision making" (Marsh et al., 2006; Moody & Dede, 2008). For simplicity's sake, this study refers to "data use." Additionally, the focus in this study is on the use of student outcome data, broadly conceived to include student work, essays, exit slips, teacher-created quizzes, common assessments, district benchmarks, and state standardized tests.

In education, today's use of data has roots in the progressive education reforms of the early 20th century, the standards-based movement in the 1980s and 1990s, and high-stakes accountability from 2001 to the present. Data use in schools and districts is not a policy in and of itself but rather a set of programs, behaviors, and processes developed in reaction to these policies and reforms (Marsh et al., 2006). Across these waves of education reform, there has been a notable evolution of the types of decisions at the federal, state, district, school, and classroom levels that data have informed.

A Long Tradition of Using Data in Educational Decision Making

The faith in "evidence" as part of educational science began with the administrative progressives – the first generation of professional leaders from new schools of education in the early twentieth century (Tyack & Cuban, 1995). In reaction to the uneven, highly localized education system of pre-1900, the administrative progressives sought to remove education from politics by restructuring the system with professional expertise and standardized data (Tyack & Cuban, 1995). The 1880s also saw the establishment of the Bureau of Education, a precursor to today's US Department of Education. One of its early tasks was to collect data on education from the states. As Goldin (1999) pointed out, this early data collection focused on contemporaneous accounts of students (e.g., enrollment, graduation), teachers, schools, and finances, although they "revealed little about student characteristics in terms of age, sex, race, ethnicity, and family background" (p. 6). These early data sources gave federal and state policymakers information on the "quantity" of schooling but little on the "quality" of it, besides proxies of quality, such as teacher-student ratio or length of school term (Goldin, 1999).

Fast forward to the past fifty years of education reform. During the 1960s and 1970s, education reforms were focused on equal access and equitable "inputs" (e.g., funding), driven by the social movements of the time, such as the Civil Rights Movement and the War on Poverty (Tyack & Cuban, 1995). In 1966, the highly influential Coleman Report was published, arguing that student background and socioeconomic standing influenced academic outcomes more than school funding levels (Coleman et al., 1966). In response, federal, state, and district attention focused on collecting data with an eye to student background indicators, as well as assessing intervention programs designed to reduce racial, ethnic, and economic disparities in education (e.g., busing) (Knapp, Copland, & Swinnerton, 2007; Linn, 2000). In general, the district and school level "input" data were used to allocate funds and broadly regulate schools with less regard for schools' curricular and instructional practices and their academic "output" (Fuhrman, Goertz, & Weinbaum, 2007).

The release of A Nation at Risk in the 1980s refocused the country's attention on educational outcomes.
This report highlighted the low performance of the country's public school students and issued warnings about the nation's inability to compete internationally. The public at large began to question the effectiveness of public education and outcomes given the past two decades of increased financial investment (Hochschild & Scovronick, 2003). State-level governors and legislators sought to improve educational outcomes with formal, top-down policies focused on establishing minimum competency standards targeted at students and teachers (Hess, 1999). Some states began requiring the use of outcome data for school improvement (Massell, 2001), while several school systems began to engage in strategic planning efforts based on student outcome data (Schmoker, 2004).

Despite these attempts at educational improvement, most states did not have a coherent, aligned approach to reform (Firestone, Fuhrman, & Kirst, 1991). In 1991, researchers Marshall Smith and Jennifer O'Day called for a "systemic reform" approach to educational policy, one that would "combine both top-down and bottom-up approaches and feature a unifying vision and goals, a coherent instructional guidance system, and a restructured governance system" (M. S. Smith & O'Day, 1991, p. 1). This period also saw a big push for site-based management (SBM). Policymakers and scholars, drawing on Lawler's model for high-involvement organizations, argued that educators at the school sites not only needed access to data and information, but also the decision-making authority to respond to the data in ways that best suit the needs of their school community (Lawler, 1991; Wohlstetter, Smyer, & Mohrman, 1994; Wohlstetter, Van Kirk, Robertson, & Mohrman, 1997). As SBM reforms were implemented, districts focused on upgrading information management systems so that site-based educators had greater access to relevant and timely data for decision making (Wohlstetter et al., 1997). Site-based management initiatives laid the groundwork for site-based councils, charter schools, and the rise of non-profit networks of charters: charter management organizations (explored below in greater detail).

It was also during the late 1980s and 1990s that standards-based reform efforts, paired with a growing emphasis on accountability in education, gained traction (Anderson, Leithwood, & Strauss, 2010; Knapp et al., 2007; Moss & Piety, 2007). Many states began adopting uniform learning standards that outlined grade- and content-specific skills to be mastered, paired with the collection of statewide assessment data on these skills on an annual basis. This move toward standardization became linked to greater district and school accountability for student learning results (Elmore, Abelmann, & Fuhrman, 1996). Elmore and Rothman (1999) described the theory of action of these two related reform principles:

    If states set high standards for student performance, develop assessments that measure student performance against standards, give schools the flexibility they need to change curriculum, instruction, and school organization to enable their students to meet the standards, and hold schools strictly accountable for meeting performance standards, then student achievement will rise. (p. 15)

Built on earlier federal accountability policies, i.e., George H. W.
Bush’s America 2000 and Bill Clinton’s Goal 2000 plans, the current policy No Child Left Behind (NCLB) was the “next phase” in the evolution of federal education policy (McDonnell, 2005). Historically, states and districts controlled education policies while the federal government acted primarily as a funding source (Fusarelli, 2004). With NCLB, states were mandated to implement standards-based accountability systems that measure yearly grade and subject outcomes and report test results to schools, teachers, and parents. Data were disaggregated for “significant” student subgroups, and the federal government issued sanctions for not meeting proficiency goals. If a school failed to meet these targets, consequences ranged from professional development and technical assistance to staff replacement, leadership change, or school reconstitution. Accordingly, NCLB has been influential in drawing school and district attention to data related to their student and school achievement goals (Marsh, et al., 2006; Massell, 2001; Moss & Piety, 2007). At the center of the NCLB reform efforts (and similar high-stakes accountability policies at the state levels) was the goal of closing the achievement gap. This emphasis on disaggregation of student achievement results by gender, race/ethnicity, and socioeconomic status meant that educators were not only exposed to more data than ever before, but they were also expected to use it to make decisions to eliminate the disparity between low income, minority students and their middle-class white peers (Linn, Baker, & Betebenner, 2002). ! ! ! ! ! ! ! ! !! ! ! 8! In this same light, the Obama Administration linked funding from the federal “Race to the Top” (RTTT) initiative to a state’s ability to build a data management system and generate student-level information to help administrators, educators, and communities improve teaching and learning (US Department of Education, 2009). Since 2005, the Department of Education has awarded more than $500 million to 41 states and the District of Columbia to develop student longitudinal data system for school leaders and teachers through the Statewide Longitudinal Data Systems Grant Program (D. Scott, 2012). Recently, states applying for NCLB waivers had to provide a rigorous plan on how they would link teacher, principal, and student data and provide that information to educators to improve their practices (US Department of Education, 2011a). The current Common Core State Standards Initiative takes the accountability movement one step further, as states continue to roll out and implement a shared set of content standards and assessments (Porter et al., 2011). This reform not only aims to make student achievement data more reliable, accessible, and easier to share within and across states but also works towards guaranteeing an “apples to apples” comparison of schools across states, something that was previously very difficult given differing state content standards, assessments, and proficiency targets (Linn et al., 2002). The Common Core is strongly supported by the federal government, who in 2011, awarded over $300 million for the development of assessments linked to these new standards (US Department of Education, 2011b). Further, the past few years have seen a growing interest in linking financial incentives to student achievement data. In 2012, the Department of Education announced its Teacher Incentive Fund, an initiative to support district and state efforts to link teacher ! ! ! ! ! ! ! ! !! ! ! 9! 
and principal compensation systems to increases in student assessment results (US Department of Education, 2012). Similarly, school districts in Washington, DC and Denver, Colorado, among others, have gained national attention for their pay-for-performance programs that link individual teacher compensation to student achievement (National Council on Teacher Quality, 2011). In a similar light, the New York City Department of Education distributed over $50 million to high-performing, high-poverty schools as part of its recent (but ultimately unsuccessful) Schoolwide Performance Bonus Program, which linked student achievement results, school "report cards," and bonuses to educators (Marsh et al., 2011). Some states are investing in predictive data analysis as part of "early warning systems" to preemptively identify students at risk of failure, using the modeling techniques employed to determine car insurance rates and credit scores (Davis, 2012; Sparks, 2012). Finally, there has been notable philanthropic activity in how data can be used to make decisions at the classroom, school, and district levels. Major foundations with investment portfolios in education – including the Bill and Melinda Gates Foundation, the Spencer Foundation, the Dell Foundation, and NewSchools Venture Fund, among others – have all given major grants to support practice and research on how educators can use data to inform curricular and instructional decisions.

Since school enrollment data were first collected in the 1880s by the early Department of Education, data in education have evolved to the point where classroom teachers now have access to fine-grained data, and they are expected to use these data for specific purposes such as targeting individual students. Given that assessment results are now linked with high-stakes federal and state accountability policies and are a critical component of discussions of instruction, curriculum, and content standards, the use of data in classrooms, schools, and districts promises to have a growing presence in today's educational landscape.

The Promise of Data Use and Its Challenges

The research on data use to date paints a mixed picture about a direct, causal relationship with improving student outcomes. On one hand, scholars suggest that data use in school systems can be a factor in improvements in school and student achievement; in several case studies of schools or districts that have made significant progress on state assessments, data use was one of the common threads behind the academic improvement (Council of the Great City Schools, 2002; Datnow, Park, & Kennedy, 2008; Datnow, Park, & Wohlstetter, 2007; Snipes, Doolittle, & Herlihy, 2002; Wayman, Cho, & Johnston, 2007; Zavadsky, 2009). Furthermore, there has been evidence that specific types of data use (e.g., using data from formative assessments to improve instruction through re-teaching strategies) can improve students' learning, such as Black and Wiliam's (1998) meta-analysis on the benefits of formative assessment on teaching and learning. Organizational scholars have suggested that the creation of internal feedback loops was an important component of continuous improvement (Argyris & Schon, 1978, 1996).
On the other hand, despite the promise of this practice in education, there has been minimal causal evidence that data use leads to improvements in student achievement or in turning around low-performing schools (Hamilton et al., 2009; Herman et al., 2008; Huberman, Parrish, Hannan, Arellanes, & Shambaugh, 2011). The authors of these systematic reviews noted that the minimal evidence was largely due to the nature of ! ! ! ! ! ! ! ! !! ! ! 11! the research base. Most of the research was atheoretical, descriptive, or based in advocacy work and “how-to” guides. Even in the small, but growing, body of “gold standard” research (i.e., randomized control trials), application and implementation of data-use initiatives have been uneven. A recent randomized study of 500 schools in 59 school districts in seven states engaged in a reform model focused on data use found mixed results. Generally, the data-based reform helped improve student outcomes, but significance of results varied based on year of implementation, grade, subject matter, and type of curriculum chosen (Slavin, Holmes, Madden, Chamberlain, & Cheung, 2010). An analysis of year one results from this project found that those schools participating found a statistically significant increase in student mathematics achievement on quarterly benchmarks (Carlson, Borman, & Robinson, 2011). In addition to these limitations in the literature base itself, existing scholarship suggests that data use has proven difficult to implement for a wide range of reasons. First, a lack of time to examine data and engage in data-driven decision making has been identified as a problem (J. Feldman & Tung, 2001; Ingram et al., 2004), as has limited access to timely, high-quality data (Coburn, Honig, & Stein, 2009; Lachat & Smith, 2005; Schmoker, 2004). In other instances, educators may face the opposite problem: they are overwhelmed by the amount of available formal: input data (e.g., school expenditures, demographics of students); process data (e.g., financial operations, quality of instruction); outcome data (e.g, drop-out rates, student test scores); and satisfaction data (e.g., parent opinion survey) (Bernhardt, 2003; Marsh et al., 2006). Within the one category of “outcome data,” educators are faced with a host of types of assessments, ! ! ! ! ! ! ! ! !! ! ! 12! including quick, informal classroom assessments; classroom assessments across lessons; interim benchmarks across units; and annual “high-stakes” tests (Supovitz, 2012). Added to this list of “official” data are also informal data sources such as gossip/rumors, casual observations, and anecdotal experience (Ingram et al., 2004; Marchart & Paulson, 2009; Spillane, 2012; Spillane & Miele, 2007). Educators are then challenged with managing, balancing, and prioritizing these multiple forms of data, some more useful or more at the forefront than others (Coburn, Honig, et al., 2009; Kerr, Marsh, Ikemoto, Darilek, & Barney, 2006; Young, 2006). The research suggests that developing educator capacity to make decisions based on data has also proven difficult. For example, in a study of the impact of district benchmark assessments, Goertz et al. (2009) reported that data analysis of formative assessment results did not help teachers gain a deeper understanding of students’ mathematical learning, as the teachers focused on the procedural rather than the conceptual sources of student errors. While the teachers were engaging in DDDM, it did not lead to actions that had the greatest impact on student learning. 
Without this skill base, educators may use data in symbolic or superficial ways or to justify decisions “after the fact,” rather than as evidence for a decision (Coburn, 2010; Weiss, 1980, 1998). Given educators’ increasing amounts of available data paired with limited capacity building or support, they have been described as “data rich but information poor” (Slotnik & Orland, 2010). Recognizing these barriers, states, districts, and schools have implemented a variety of strategies to support data use, as Marsh (2012) noted. These interventions targeted the educators and their knowledge and skill sets (e.g., Chrismer & DiBara, 2006; ! ! ! ! ! ! ! ! !! ! ! 13! Goertz et al., 2009); the tools used for accessing and interpreting data, such as information systems and databases (e.g., Means, Padilla, & Gallagher, 2010; Moody & Dede, 2008; Wayman, 2005); or organizational routines, policies, and norms (e.g., Anderson et al., 2010; Sutherland, 2004). Taken individually, these solutions intended to redesign the work of educators have met with mixed results or limited sustainability. Instead, scholars have argued that for data use to be implemented evenly and with fidelity, a systemic strategy is needed (Anderson et al., 2010; Coburn & Talbert, 2006; Datnow & Park, 2010a; Kerr et al., 2006; Wohlstetter, Datnow, & Park, 2008). Interactions between stakeholders at different levels of the education system need to occur; for instance, districts shape data use for school leaders, and school leaders shape data use for their teachers (Anderson et al., 2010). Subsequently, misalignment of support across levels leads to major challenges around data use. For instance, Kerr et al. (2006) found that without district capacity to assist school-level staff with data analysis and support technology, data use at the classroom and school levels suffered. Similarly, a district initiative around data use was met with support from principals, but teachers failed to fully adopt the reform, feeling that it took away from their time to teach (Kerr et al., 2006). According to the literature, alignment of goals and values also appears to be important. In some school systems, little agreement may exist among stakeholders about what types of data are most important or meaningful for informing decisions around teaching and learning. Ingram et al. (2004) reported that teachers had developed their own personal metrics for judging effectiveness of their teaching, different from external metrics. Looking at a related body of literature on evidence-based practice, in a study ! ! ! ! ! ! ! ! !! ! ! 14! conducted over two years in one large school district, Coburn and Talbert (2006) found that all of the frontline district administrators interviewed had strong faith in using research-based evidence to inform instruction, compared to only half of teachers who were agreed. To combat this misalignment across system levels, the authors suggested that evidence-based decisions require “a system of evidence use that allows for and supports access to different kinds of evidence for different purposes at different levels of the system” (p. 491). Several scholars have noted the tension between using data for inquiry and professional and organizational learning purposes versus goals of monitoring, control, compliance, and accountability (Diamond & Spillane, 2004; Firestone & Gonzalez, 2007; Knapp et al., 2007; Moss & Piety, 2007; T. H. Nelson, Slavit, & Deuel, 2012). 
Whereas data use may lead to critical thinking about teaching and learning, educators can also react to the “incentives” of high-stakes testing systems in unintended ways, using data to identify and exclude low-performing students from classes; to cheat or manipulate assessments on which rewards/sanctions are based; to target “bubble” students (those on the border between proficiency cut scores); or to “teach to the test” at the expense of other subjects or higher-order thinking skills (Cullen & Reback, 2002; Figlio & Getzler, 2006; Figlio & Winiki, 2005; Jacob & Levitt, 2004; Murnane & Cohen, 1986). Finally, existing research suggests educators at the school level not only need systemic support but also enough decision-making authority to make site-level decisions on the basis of data (Datnow, et al., 2008; Datnow, et al., 2007; Wohlstetter, Datnow, & Park, 2008). In a study of four school systems, Wohlstetter et al. (2008) found that building expertise and capacity at the school site for data-driven decision making was ! ! ! ! ! ! ! ! !! ! ! 15! necessary but not sufficient to increase student learning. Teachers, coaches, and principals also needed to have the will to use and the decision rights to act on the data. The researchers also hypothesized that system-wide rewards and incentives may support data use, helping to align the goals across levels while also developing professional expectations and norms for mutual accountability within levels. Despite the fact that researchers have identified organizational context as a critical component to effective data use, surprisingly few studies look comparatively or deeply at the impact of differing organizational contexts on data use. For instance, considering how state testing data shaped school-level decisions in four schools, Diamond and Cooper (2007) found that the school’s accountability status played a role in how data were interpreted and the subsequent strategies chosen. At lower-performing schools that were on “probation,” educators focused on the targeted students, grade levels, and subjects that would have the highest impact on the state test results (Diamond & Cooper, 2007; Diamond & Spillane, 2004). However, questions of how other organizational conditions influence educators’ data use remain unanswered. These organizational characteristics include school structure and culture, interventions, site and district leadership, or presence of external partnerships. A recent call for proposals by the Spencer Foundation highlighted the need for research on the organizational and political context around data use (The Spencer Foundation, 2012), and in their literature review, Coburn and Turner (2011) similarly pointed to questions of organizational context as a gap in the research. To date, scholarship in this area tends to focus on single case studies of traditional school districts (e.g., Supovitz, 2006); an alternative governance model, like the for-profit educational management organization, Edison Schools (e.g., Marsh, Hamilton, & Gill, 2008; ! ! ! ! ! ! ! ! !! ! ! 16! Sutherland, 2004); or studies that include different types of education systems but do not explicitly examine the impact of these key organizational dimensions (e.g., Datnow et al., 2007). The need for a theoretically driven, comparative study of different types of school systems is great. 
The remainder of the chapter explores how charter management organizations – a new and growing governance structure in education – provide the needed contextual variation for such an investigation. Charter Management Organizations: A New Kind of School System? In education, a new kind of school system has emerged to join traditional school districts: the charter management organization. This section presents a theory of action behind charter management organizations (CMOs) and the historical background of their recent emergence. The potential differences between CMOs and traditional school districts are then described, followed by the contributions of this study. CMO Theory of Action During the early years of the charter school movement, most charter schools were opened by teachers, parents, and other community members as stand-alone schools (Bulkley & Fisler, 2003; Deal & Hentschke, 2004). In the past 10 years, the “mom and pop” approach to charter schooling has been joined by CMOs – nonprofit organizations that create and operate networks of charter schools that share a common mission or instructional design across schools. This network approach has grown in direct response to the operational and financial challenges faced by many stand-alone charter schools as well as their limitations in effecting systemic change. Figure 1.1 shows the theory of action behind CMOs (adopted from Wohlstetter, Smith, Farrell, Hentschke, & Herman, 2011). ! ! ! ! ! ! ! ! !! ! ! 17! Figure 1.1. CMO Theory of Action. Charter management organizations share the same logic as that behind the original charter idea: increased autonomy through easing of certain district/state regulations for increased accountability (Bulkley & Fisler, 2003; Wohlstetter & Chau, 2004; Wohlstetter & Griffin, 1998; Wohlstetter, Wenning, & Briggs, 1995). This freedom allows CMOs to have greater site-based decision making, allowing school leaders to make changes to their school year/length of the school day, choice of curriculum and instructional program, and, in most states, hiring/firing without the stipulations of a collective bargaining agreement. Theoretically, autonomy in these and other areas allows the school leaders to create an innovative program that better meets the needs of families and students. This theory of action suggests that CMO network structure may provide support for charter schools in areas that they have previously struggled (Farrell, Wohlstetter, & Smith, 2012; J. Smith, Farrell, Wohlstetter, & Nayfack, 2009; Wohlstetter et al., 2011). ! ! ! ! ! ! ! ! !! ! ! 18! Specifically, individual charter schools have most frequently closed because of financial and governance mismanagement issues (Center on Education Reform, 2011). The CMO home office may offer specific expertise in key organizational areas, such as financial management, governing board leadership, legal compliance, grants management, and human resources. By concentrating these responsibilities in a centralized management team, principals and school leaders may be able to concentrate on their responsibilities as instructional leaders at the school site. Hypothetically, CMOs can combat the pervasive resource scarcity experienced by stand-alone charter schools by seeking to take advantage of economies of scale, e.g., purchasing books or other supplies in bulk to share across schools (Farrell, Nayfack, Smith, Wohlstetter, & Wong, 2009). 
According to the theory of action, the network approach enables the replication of successful education programs, with the self-enhancing ability to expand rapidly and increase market share. The growth of a network of charter schools may offer a greater potential impact for system change in comparison to the proliferation of successful individual charter schools. Theoretically, a CMO with a proven academic record may be in a better position to replicate in a district or a region, increasing the concentration of high-quality charter schools, thereby having the impact on the traditional school system originally intended by charter advocates (Chubb & Moe, 1990). The Growth of CMOs The first CMO, Aspire Public Schools, was founded in 1999; ten years later, the sector had grown, by one recent account, to include 197 nonprofit management organizations operating 1,170 charter schools, serving close to 400,000 students (Miron, Urschel, Yat Aguilar, & Dailey, 2012). This explosion in CMO growth may be in response to several trends in the educational policy environment (Farrell et al., 2012; ! ! ! ! ! ! ! ! !! ! ! 19! Nayfack, 2010). First, individual charter schools have not had the rapid, large-scale, systemic impact originally intended by charter reformers. Research on academic performance in charter schools has shown mixed results on student achievement (Center for Research on Education Outcomes, 2009; Zimmer et al., 2009), and the intended innovation in educational instructional design – a stated goal of many state charter laws – may not be occurring in single charter schools to the extent expected (Lubienski, 2003). Additionally, there are early reports of individual CMOs achieving significant academic gains (Booker, Sass, Gill, & Zimmer, 2011; EdSource, 2009; Furgeson et al., 2012; Tuttle, Te, Nicholas-Barrer, Gill, & Gleason, 2010). There has also been dramatic federal support for the replication of high-quality charter schools. Most recently, the U.S. Department of Education announced its Charter Schools Program Grants for Replication and Expansion of High Quality Charter Schools with an initial appropriation of $50 million (US Department of Education, 2010a). In March 2010, the administration published a “blueprint” for the reauthorization of the Elementary and Secondary Education Act that discusses school turnaround grants that will be available to states to implement one of four intervention models to help improve the lowest five percent of their schools. The “restart model” involves the school converting to charter status or closing and reopening under the management of “an effective charter operator, charter management organization, or education management organization” (US Department of Education, 2010b). In August, 2010, the Knowledge is Power Program (KIPP) Foundation and the Alliance for College-Ready Public Schools in Los Angeles also were awarded about $50 million each through the Obama administration’s Investing in Innovation i3 awards (McNeil, 2010). CMOs also have seen significant support from the philanthropic community, from corporate giving and family foundations to individual donors (Lake & Dusseault, 2011; J. ! ! ! ! ! ! ! ! !! ! ! 20! Scott, 2009; Wohlstetter et al., 2011). The San Francisco-based NewSchools Venture Fund (NSVF) has invested heavily in over 20 CMOs (NewSchools Venture Fund, 2011). 
In March 2009, the Bill & Melinda Gates Foundation – which helped launch NSVF in 2003 with a $22 million gift – announced their new $18.5 million School Networks Initiative to support the work of CMOs (The Bill and Melinda Gates Foundation, 2011b). The Michael and Susan Dell Foundation has provided upwards of $60 million to date to support charter schools and includes CMOs in its grantmaking because of their ability to reach greater numbers of students than stand-alone charter schools (The Michael and Susan Dell Foundation, 2011). Other recent developments point to the growing acceptance of the comparability between districts and CMOs, or in other words, an institutionalization of CMOs in the education field. The Gates Foundation has seeded district-charter compacts in numerous cities to support the collaboration and sharing of best practices between CMOs and traditional districts. The Gates Foundation’s College-Ready Promise funded four CMOs in Los Angeles, along with several school districts across the country, to improve college-readiness rates for low-income, minority students through their teacher effectiveness initiative (The Bill and Melinda Gates Foundation, 2011a). In 2011, the Broad Foundation announced a second “Broad Prize” for school systems: The Broad Prize for Public Charter Schools, an annual award of $250,000 to a CMO with notably improved student achievement, was modeled after the Broad district prize (The Broad Foundation, 2012). Despite the potential for CMOs to serve as vehicles for replication of successful models and education reform as the theory of action suggests, many suspect that CMOs ! ! ! ! ! ! ! ! !! ! ! 21! will face challenges, particularly as they are pressured by state/federal policymakers and other national reform supporters to replicate quickly or to take on schools for turnaround. Toch (2009, 2010), for example, argued that CMOs, with their heavy dependency on philanthropic support, are in fragile financial positions. Wilson (2009) pointed out that, given the model’s demands for large numbers of high quality teachers and school leaders, along with the high rate of burn-out of both groups, CMOs may be constrained by a restricted talent pool. In addition to offering further evidence of these troubling fiscal and human capital challenges, Lake et al. (2010) reported that CMOs struggle with balancing lofty growth goals and ensuring quality across current and new schools within their networks. CMOs, successful with one particular population or in one location, may become over-extended and under-prepared for the challenges that arise with new schools. Inside the CMO Despite their popularity with policymakers and philanthropists, there is limited research on how CMO leaders structure and manage their organizations to use data, share knowledge, and improve their organizational performance. Based on the limited research, several likely distinctions of CMOs arise – culture, structure and size, decision-making rights, regulations, and external accountability demands. Many of these ideas draw from recent findings from a national study of charter management organizations, based on surveys from 37 CMO home offices and site visits to 10 CMOs (DeArmond, Gross, Bowen, Demeritt, & Lake, 2012; Furgeson et al., 2012; Lake et al., 2012; Lake et al., 2010). There is little other research available on CMOs to confirm or reject these findings, so, in some instances, research of charters schools is offered more generally. ! ! ! ! ! ! ! ! !! ! ! 22! 
In terms of organizational culture, in a review of a small number of high profile CMOs, Higgins and Hess (2009) suggested that CMOs like the Knowledge Is Power Program (KIPP) and Aspire Public Schools are able to sustain performance while continuing to grow by creating an “organizational career imprint”—a set of capabilities, connections, confidence, and cognitions that individuals share as a result of working for a given organization. Lake et al. (2010) reported in their survey of CMO leaders that many of the leaders (some of whom had previously worked in a district setting) reported that the CMO was more “mission-oriented” than traditional school district central offices. They also find evidence of “ethos of continuous improvement” in most CMOs in their sample, as 79% of CMO office administrators reported employing at least quarterly data analysis and emphasizing ongoing organizational improvement (p. 24). According to the charter school literature and theory, CMO leaders should have a great deal of authority and flexibility to manage their schools. In a number of studies, stand-alone charter school leaders reported a greater level of autonomy over operational decisions than traditional public school principals (Adamowski, Therriault, & Cavanna, 2007; Zimmer & Buddin, 2007), with variation across the levels of control over curriculum and instruction, personnel decisions, and budget (Finnigan, 2007; Malloy & Wohlstetter, 2003). However, Lake et al., (2010) suggested that school principals in CMO schools may not have this same kind of freedom and autonomy; they find that “nearly all surveyed CMOs (84 percent) are moderately to highly prescriptive, trying to ensure all affiliated schools follow a set design for curriculum and instructional techniques, human resource functions, and student behavior and support programs,” although the schools do have greater flexibility around allocating school budgets (p. 4). ! ! ! ! ! ! ! ! !! ! ! 23! Lake et al. (2010) also describe a tension in CMOs between the standardization necessary for running a system of schools and school-level autonomy, experimentation, and innovation inherent in the charter idea. In terms of organizational structure, charter management organizations were designed, at least in theory, as flatter, more collaborative organizations composed of a network of schools that share an educational mission and philosophy (Farrell et al., 2012; Furgeson et al., 2012; Lake et al., 2010). Comparatively, school districts have traditionally been thought of as bureaucratic, hierarchical organizations, with delineated lines of decision-making authority in fixed areas of specialized activity (Downs, 1967; Weber, 1947). Lake et al. (2010) found that many of the CMOs looked structurally similar to traditional districts on the surface: They shared similar organizational charts and titles, and the home office of CMOs and school district offices performed many of the same functions around human resources, budgeting, etc. As CMOs have grown, they frequently have added on layers of managers – e.g., regional directors – to help support schools. Beneath the surface, though, the authors argued that there are other noticeable differences around the relationship between schools and home office. 
For example, as reported in the CMO survey, the central office staff was frequently working within schools; over 60% of CMO leaders report conducting one-on-one mentoring or weekly meetings with principals, and slightly over half of these leaders conducted weekly scheduled or unscheduled walk-throughs and classroom observations. Under NCLB and state accountability systems, all public schools face certain external accountability demands (e.g., Adequate Yearly Progress or California’s Annual Performance Indicators). However, charter schools are held accountable in additional ways. Most notably, every charter school operates under a charter or contract issued by ! ! ! ! ! ! ! ! !! ! ! 24! an authorizer, a public entity (e.g., local school board, public university, state board of education, non-profit organization) charged with charter selection, oversight, monitoring, and renewal 2 (Bulkley & Wohlstetter, 2004; Vergari, 2000). At the end of each charter period, typically five years, the authorizing body evaluates the charter school, reauthorizing or revoking the school’s charter, closing the school (Petrilli, Finn, & Gau, 2006). As a nonprofit organization, every charter is governed by a board of directors (Wohlstetter, Smith, Farrell, & O’Neill, 2009). Charters also face market accountability; as a school of choice, parents elect to send their children to charter schools, and if they are not satisfied with its performance, they can move them elsewhere (Chubb & Moe, 1990). Finally, the philanthropic community may have a large role in holding CMOs accountable. Researchers have identified an inequity in per-pupil funding between charter and traditional public schools, reported to be as high as 39.5% in some states (Batdorff, Maloney, May, Doyle, & Hassel, 2010; Speakman, Finn, & Hassel, 2005). One recent financial analysis of one CMO in California suggests that its location in a financially impoverished state has a negative impact on its operating margin, a resource needed for sustainability and expansion (Lozier & Rotherham, 2011). The financial challenge was further compounded by expenses related to renting, purchasing, or building school facilities. In response, charters and charter management organizations have needed to raise financial capital from private sources, such as individual donors, foundations, or corporate sponsors (EdSector, 2009; J. Scott, 2009; Toch, 2009). Research on growing CMOs has found this dependence on philanthropic dollars to be a mixed blessing. While !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! 2 CMO governance structures vary by state. In some states, each school within the network has its own charter and/or governing board, while in other states, the nonprofit management organization can hold the charter for all schools in the network with a single governing board for the network (Farrell et al., in press). ! ! ! ! ! ! ! ! !! ! ! 25! foundations are able to help fund the additional costs associated with a growing network (e.g., costs for operating the home office or for new facilities), CMOs reported a tension that results when funders have motives and interests different from the values and mission of the CMO (Lake & Dusseault, 2011; Wohlstetter et al., 2011). In sum, CMOs have a growing presence in education, with much philanthropic and policy attention. 
While these networks of schools were not one of the original goals of the charter movement, CMOs have become institutionalized in the educational community and are now comparable to districts as systems of schools. As such, a systematic comparison of CMOs and traditional districts will not only provide insight into the "black box" of CMOs and lay the groundwork for district/charter dissemination of best practices around data use, but it will also provide an excellent case for a deep exploration of the influences of organizational context on data use and instructional improvement.

Contributions of this Study

This study fills several gaps within the research base. First, as noted above, most of the research on data use in education systems has been conducted within traditional school districts, which are generally thought to be structured as hierarchical, bureaucratic organizations with rigid, defined roles, access to information, and decision-making rights. Within the educational arena, a new kind of organization is taking hold – charter management organizations (CMOs) – which are designed, at least in theory, as flatter, more collaborative organizations composed of a network of schools that share an educational mission and philosophy (Farrell et al., 2012; Lake et al., 2010). This dissertation provides important new insight into how these two types of education systems – hypothetically distinguished in their structure, culture, decision-making autonomy, level of regulation, and accountability demands – encourage data use to generate knowledge for instructional improvement. Second, much of the research on data use in schools has looked at isolated levers of support or change. For instance, Chrismer and DiBara (2006) explored the effects of coaches on teachers' reading instruction, while Wayman and Cho (2009) considered how central office administrators accessed and used an information database system. This study offers an integrated, systems-level approach to consider holistically how organizational resources – human capital, technology and tools, and organizational practices – are engaged to support data use and knowledge creation. The third contribution involves the development of a conceptual framework that integrates organizational learning and knowledge management theories with empirical findings in education. From the theoretical side, the organizational learning and knowledge management literature has evolved somewhat rapidly and messily (Easterby-Smith & Lyles, 2006), without a unified, generally accepted framework (Rubenstein-Montano et al., 2001). For instance, some scholars focus their work in knowledge management on the information science aspect, looking at how electronic databases can facilitate the collection, retrieval, and utilization of information (e.g., Silver, 2000); others focus on the socio-cultural aspect of knowledge development and sharing (e.g., Brown & Duguid, 2000). This dissertation advances theory by pulling together these different "strains" into one united framework that is rooted in empirical evidence (see Chapter Two). On the empirical side, there is the opposite problem: much of the literature in education around data use and knowledge sharing has been atheoretical and disconnected (for exceptions, see Coburn & Turner, 2011; Honig & Ikemoto, 2008).
Researchers are left with a list of potential variables that enable or constrain the use of data, without a representation of data use that orders and relates the concepts to one another or to prior conceptualizations of educators, schools, and school systems (Kezar, 2006). By organizing the empirical findings within a strong theoretically based frame, a new lens is created with which to study how organizational context shapes data use and knowledge creation, and vice versa. Finally, this study responds to the recent call by education policy implementation scholars to focus on systems learning. McLaughlin (2006) identified a critical gap in the research on "how engaging systems and inter-organizational networks that define the sector learn from experience, acquire, and use new knowledge, adapt and sustain positive outcomes" (p. 226). This study seeks to join the small number of studies that have begun to explore this avenue (e.g., Honig, 2008; Supovitz, 2010), leading to implications for researchers, practitioners, and policymakers.

Summary

Secretary of Education Arne Duncan argued in 2009, "I am a deep believer in the power of data to drive our decisions. Data gives us the roadmap to reform. It tells us where we are, where we need to go, and who is most at risk." Indeed, a tremendous amount of federal, state, local, and philanthropic investment has been made to support data-based decision making, and policymakers show no sign of slowing their support. In fact, the opposite is true, as "policy talk" at federal, state, and local levels now looks to explore the possibility of attaching student assessment outcomes to decisions around teacher and principal compensation and placement. While advocates argue that data use can support educational improvement in schools, "in many ways, the practice of data use is out ahead of research" (Coburn & Turner, 2011, p. 200). Most of the work in this area has been anecdotal and atheoretical. In particular, much more must be understood about how various organizational conditions shape educators' use of data for instructional improvement. To this end, a comparative study of organizational context for data use is critical, not only for existing types of school systems to identify and share best practices, but also for policymakers, practitioners, and researchers to discover the organizational conditions necessary to support this work. A theoretically grounded study of two kinds of school systems – traditional public school districts and charter management organizations – is an important contribution to this current gap in the literature. The remaining chapters examine how organizational and environmental context shapes data use. Knowledge management and organizational learning, fields of study that explore how human capital, technologies and tools, and organizational practices facilitate or constrain data use, provided the theoretical foundation for this study (see Chapter Two). Chapter Three details the qualitative approach developed to examine how different education systems use their organizational resources to support data use. A presentation of the study results by research question follows in Chapters Four, Five, and Six. Chapter Seven provides study conclusions, policy recommendations, and areas for future research.
CHAPTER TWO
A REVIEW OF LITERATURE: PRECONDITIONS FOR KNOWLEDGE MANAGEMENT AND DATA USE

As suggested in the previous chapter, the relationship between environmental and organizational context and data use is largely unexplored. This chapter provides a theoretical framework from which to begin to examine this relationship. The first section presents a conceptual lens, derived from knowledge management and organizational learning and elaborated with prior evidence on data use in schools. Next, the chapter outlines the data-use cycle – from data and information to actionable instructional knowledge. The theory suggests that key resources – human capital, technology and tools, and organizational practices and policies – are essential resources or "preconditions" for effective data use in schools. Finally, this perspective predicts that organizational and environmental context will shape an organization's ability to engage in data use as well as to mobilize its resources to this end.

Theoretical Background

From the business literature, a family of theories – knowledge management, organizational learning, the knowledge-based theory of the firm, the "learning organization" – is rooted in the assumption that knowledge is one of the most important assets that a firm possesses (Argyris & Schon, 1978, 1996; Davenport & Prusak, 1998; Deming, 1986; Drucker, 1999; Grant, 1996; Kogut & Zander, 1992; Levitt & March, 1988; Nonaka, 1994; Nonaka & Takeuchi, 1995; Senge, 1990). [Footnote 3: The scholarship in this area is very diverse, using a wide range of terms to describe a similar area of study. Rather than get tangled in debates over terminology, I have focused on the core ideas that will inform data use and knowledge development in schools. For clarity's sake, however, the largest distinction seems to be between organizational learning and knowledge management. From the earliest scholarship in these areas, Argyris and Schon (1978) claimed that organizational learning occurs when members of an organization act as learning agents for the firm, responding to changes in the internal and external environments. Knowledge management focuses on the systematic processes of gathering, organizing, sharing, and analyzing knowledge in terms of resources, documents, and people skills within and across an organization (Davenport & Prusak, 1998; Nonaka, 1994). It may be helpful to think of organizational learning as the goal of knowledge management strategies (King, 2009). See Grant (1996a, b) and Kogut & Zander (1992) for definitions of the knowledge-based view of the firm, and Senge (1990) for a definition of the "learning organization."] This perspective builds on the resource-based theory of the firm, initially developed by Penrose (1959) and expanded by others (Barney, 1991; Wernerfelt, 1984). A firm's competitive advantage lies in its ability to create, transfer, and generate new knowledge efficiently within the organizational setting, leading to new and innovative solutions and responses. In this way, the organization learns through the collective learning of its members, embedding what has been learned into the fabric of the organization (Levitt & March, 1988). Knowledge management is the deliberate and systematic coordination of a firm's people, processes, technology, and organizational structure (Dalkir, 2007; Serrat, 2009). Similarly, school systems are organizations whose primary asset is knowledge, and past research has explored organizational learning within school systems (e.g., Leithwood & Louis, 2000). For instance, Honig (2008) applied an organizational learning lens to understand how central office administrators made operational decisions.
Although there are many differing interpretations of organizational learning, it can generally be defined as collective, macro-level learning, in contrast to the learning that occurs at the individual level. Knowledge management theories complement this perspective, digging deeper to focus on the systematic processes of gathering, organizing, sharing, and analyzing knowledge within and across an organization (Davenport & Prusak, 1998). In other words, organizational learning is the ongoing goal, and knowledge management strategies are the means by which it can be achieved (King, 2009). Figure 2.1 presents an integrated framework of data use and knowledge management in a school system. In the sections that follow, these elements are explored in detail, building from the center out – from the data-use cycle to key organizational resources and then to organizational and environmental context. Building on the work of McLaughlin and Talbert (2001, 2006) and Talbert, McLaughlin, and Rowan (1993), these ongoing, nested interactions can be thought of as "the embedded context" of data use, where classroom, school, district/CMO, local community, state, and federal contexts all interact to "shape and be shaped by" educators' data use. This framework is informed by theoretical concepts as well as empirical research on data use in school systems. Notably, most of the empirical work on data use in schools lacks a unified conceptual framework. A recent exception is Coburn and Turner's (2011) model. While including many of the same "pieces" of the data-use puzzle, their model and the conceptual framework here are conceived differently. Coburn and Turner (2011) focused on synthesizing the available literature on data use. Consequently, their model is limited in areas where there has not been empirical work (for other critiques, also see Hamilton, 2011; Piety, 2011). For instance, Coburn and Turner's model does not include an explicit consideration of organizational structure, size, and other environmental conditions, all holes in the current research base. Instead, the conceptual framework presented here, grounded in both empirical literature and knowledge management theory, provides a more comprehensive model.

Figure 2.1. Conceptual framework.

Data-use Cycle

The theory of action for data use promoted by data advocates and adapted from the literature (see Ackoff, 1989; Alavi & Leidner, 2001; Daft & Weick, 1984; Huber, 1991; Mandinach, Honey, & Light, 2006; Marsh et al., 2006) suggests that data alone do not ensure use. This distinction is important, especially in today's "data-rich" world. Simply having greater access to data or information does not make one more knowledgeable, and in fact, as noted in Chapter One, data or information overload can be a serious problem for both individuals and organizations. Data must be collected, organized, and analyzed to become information and then combined with understanding and expertise to become actionable knowledge (Ackoff, 1989; Alavi & Leidner, 2001; Bhatt, 2001; Ikemoto & Marsh, 2007; Kerr et al., 2006; Marsh, 2012; Marsh et al., 2006; Zins, 2007).
While some educators may apply this knowledge to specific decisions and actions (an instrumental use of data), others may respond in ways that are conceptual (enriching beliefs, values, understandings), symbolic (justifying pre-determined positions or already enacted decisions), or signaling (providing indicators that help frame problems and attribute responsibility) (Coburn, 2010; Coburn et al., 2009; Feldman & March, 1981; Weiss, 1980). These responses can inform educators' instructional decisions, either reinforcing previous practices or reshaping them in subtle or potentially significant ways. As illustrated in the center box of the conceptual framework, there are multiple opportunities to support a data user in this process. Educators can be supported in accessing data; interpreting and organizing them into information; combining information with expertise and evidence in ways that build knowledge; knowing how to respond and adjusting instruction; and evaluating the effectiveness of the response (Marsh, 2012). Knowledge management scholars have differentiated knowledge into two categories: explicit and tacit (Nickols, 2001; Nonaka, 1994; Nonaka & Takeuchi, 1995; Polanyi, 1966; E. A. Smith, 2001). Explicit knowledge (also known as formal or declarative knowledge, or "knowing what") is easy to organize, classify, and codify, and it has its sources in formal documentation – standard operating procedures, computer databases, or archived "how-to" guides. In the case of schools, this could include lesson plans or curricular maps. Tacit knowledge (or "knowing how") is personally and socially embedded, cultivated through experience over time, and it can include beliefs, patterns of behavior, and mental models. For educators, tacit knowledge could include how to time a lesson or how to differentiate instruction for a small group. While tacit knowledge is difficult to codify, communicate, and share, it can be a rich source of organizational knowledge (Polanyi, 1966; Sallis & Jones, 2002). Nonaka (1994) and Nonaka and Takeuchi (1995) argued that knowledge creation within an organization is dependent on opportunities to link and convert tacit and explicit knowledge. Socialization is the process of transferring tacit knowledge between two people, and it is dependent on shared experiences and mental models. The process of turning tacit knowledge into explicit knowledge is called externalization, where one person articulates his or her thinking or "makes thinking visible" through discussion and dialogue with others (also see Collins, Brown, & Holum, 1991). Explicit knowledge can then be joined with other explicit knowledge in the process of combination. Once explicit knowledge is shared and combined with others, it can be absorbed into the tacit knowledge of individuals in a process Nonaka refers to as internalization. An organization's ability to learn and create knowledge, he argued, is dependent on the ability not only to recognize the value of both explicit and tacit knowledge, but also to provide opportunities and supports that help cultivate and encourage the sharing of both types. For school systems, the theory suggests that using data and creating actionable plans for instruction requires developing educators' capacities not only in "knowing what" to do but also in "knowing how" to do it.
Organizational Resources

The knowledge management literature suggests that certain organizational resources need to be in place and aligned to support data use within an organization. These resources can be divided into three categories: human capital, technologies and tools, and organizational practices (Bhatt, 2001; Holsapple & Joshi, 2002; Petrides & Guiney, 2002; Petrides & Nodine, 2003; Ponelis & Fairer-Wessels, 1998). They can be thought of as critical organizational capabilities or "preconditions" necessary for the progression from data to information and knowledge to occur. Not only do these "ingredients" need to be aligned with one another, but investments in human capital, technology and tools, and organizational policies also need to be integrated with the overall organizational vision, mission, and strategy (Liebowitz, 1999; Maier & Remus, 2002; Sunassee & Sewry, 2002; Zack, 1999).

Human Capital

In the knowledge management literature, the progression from data to information and knowledge is a process that can only be carried out by the people within the organization (Davenport & Prusak, 1998; Sallis & Jones, 2002). Dimensions of human capital resources include collaboration, dedicated support positions (e.g., coaching), knowledge/skills development, and organizational leadership. Collaborative relationships between individuals (e.g., coaching, mentorships, or apprenticeships) or within groups (e.g., "communities of practice") serve as opportunities for "co-construction" of new ideas and joint work. These social interactions also allow participants to develop trust, establish social norms around information sharing, and engage in shared meaning-making (Coburn, 2001; Lave & Wenger, 1991; Spillane & Miele, 2007; Wenger, 1998). These investments in human capital – apprenticeships and communities of practice, for example – are critical for tacit and explicit knowledge to be converted and shared throughout the organization (Nonaka, 1994; Nonaka & Takeuchi, 1995). Individuals' knowledge and skills can provide the fodder for learning and improvement at the group and organizational levels (Gold, Malhotra, & Segars, 2001; Holsapple & Joshi, 2002). For instance, Hargreaves (2000) argued that an individual has a range of mental models, routines, and skill sets that draw on both her formal, explicit knowledge base and her applied, contextual experience and tacit expertise. When these routines and schemata fail or are questioned, she is forced to reflect and adjust in a process Hargreaves calls "tinkering." He further argued that this "tinkering" is best supported through interactions with others, and that these new sources of actionable knowledge can spread within and between groups as well as throughout the organization. Finally, system leaders not only set goals for the organization, they also influence individual and collective learning through their personal abilities to coach, mentor, and model data-driven behaviors (Maden, 2011; Sallis & Jones, 2002). System leaders play an important role in fostering an organizational climate that creates a favorable knowledge management environment, as discussed later (Holsapple & Joshi, 2000, 2002; Liebowitz, 1999; Liebowitz & Chen, 2003). Empirical evidence from education research has similarly emphasized building educator capacity for using data to create actionable knowledge for instructional improvement.
In terms of collaboration, while historically teachers have met in grade-level or content-area teams to discuss their practice, some school systems now expect teachers to engage in professional learning communities (PLCs), data teams, or inquiry groups where the focus of the work is on responding to data collected in classes and schools (Blanc et al., 2010; Cosner, 2011a, 2012; Ermeling, 2010; T. H. Nelson & Slavit, 2007; T. H. Nelson et al., 2012; T. H. Nelson, Slavit, Perkins, & Hathorn, 2008; Slavit & Nelson, 2010; Slavit, Nelson, & Kennedy, 2009). For instance, Gallimore, Ermeling, Saunders, and Goldenberg (2009), using a quasi-experimental design in nine Title I schools over five years, found that teachers working in a grade-level team with a trained peer-facilitator and an inquiry-focused protocol were able to develop instructional solutions, significantly increasing student achievement. In their in-depth, four-year study of teachers' interactions in guided inquiry, Nelson, Slavit, and others found that the work of PLCs was connected to (1) teachers' perspectives on the nature and use of student learning data and (2) their ways of analyzing and talking about these data (T. H. Nelson & Slavit, 2007; T. H. Nelson et al., 2012; T. H. Nelson et al., 2008; Slavit & Nelson, 2010; Slavit et al., 2009). Other school systems have focused on providing a dedicated, structured position to support educators' use of data, such as a literacy/math, instructional, or data coach (Chrismer & DiBara, 2006; Cosner, 2011a; Marsh et al., 2009). Marsh, McCombs, and Martorell's (2009) study of instructional reading coaches in Florida noted that the majority of coaches spent time with teachers analyzing student data, an activity significantly associated with perceived improvements in teaching and higher student achievement. Much of the research has highlighted the importance of providing training to educators on how to use data and connect them to practice (Black & Wiliam, 1998; Choppin, 2001; Goertz et al., 2009; Mason, 2002; Massell, 2001; Supovitz & Klein, 2003). More specifically, scholars have pointed to educators' limited "data literacy" – the skills and knowledge required to identify questions, select appropriate metrics, analyze results, and create actionable solutions (Ikemoto & Marsh, 2007; Knapp et al., 2007; T. H. Nelson et al., 2012; Supovitz, 2006). Based on interviews with over 100 teachers, Means, Chen, DeBarger, and Padilla (2011) identified five key skills for data-driven decision making: data location (finding the right data to use in a table or graph); data comparison (figuring out what the data signify); data interpretation (making meaning of the data); instructional decision making (selecting an instructional response that addresses the situation identified in the data); and question posing (figuring out questions that will generate useful data). While most teachers had skills in locating data on graphs, there was greater variation in their abilities to interpret data, use them for instructional decision making, and pose questions or collect new data afterwards. Similarly, leadership at different levels of the education system influences data use: for instance, district leaders shape data use for school leaders, and school leaders shape data use for their teachers (Anderson et al., 2010). In several of the schools in Anderson et al.'s (2010) study, the main role of the school leader in data-driven decision making was not as the central data user but as a data enabler for teachers, the frontline data users.
Supporting the data-use process for teachers means providing additional time, funding, tools, access to expertise, and targeted, structured opportunities for professional development focused on data-related questions (Anderson et al., 2010; Cosner, 2010, 2011a, 2011b; Datnow & Park, 2009; Kerr et al., 2006; Laird, 2006). In her work in elementary schools in three districts, Cosner (2011b) explained that school leaders served as "reform communicators and sensegivers," shaping the design and introduction of tools and processes to support the work of data-based collaboration. Just as principals create a culture of data use within schools, it is the role of district leaders to create and support a "data-informed district" (Knapp et al., 2007). District officials create this culture by offering a vision and mission for the system around data use, setting expectations around data use for school leaders, and prioritizing and modeling data use at the district level (Anderson, 2006; Anderson et al., 2010; Honig & Coburn, 2008; Wayman et al., 2007). In their review of literature on the role of the district central office in school sites' evidence use (including use of data), Honig and Venkateswaran (2012) found that central offices shaped schools' evidence use through the flow of information and resources, assistance with social sensemaking, communication of expectations around evidence use, and professional development opportunities.

Technology and Tools

According to the knowledge management literature, a range of technology and tools are important to help identify, share, and spread knowledge across an organization (Silver, 2000; Stenmark, 2001). "Capture" tools, such as records that are input into data management systems or information warehouses, allow data to be collected and codified in a single location (Marwick, 2001). These programs also allow data to be filtered, summarized, reconfigured, or combined into new kinds of explicit knowledge (Nonaka, 1994). Other "capture" tools include repositories of best practices, reports, documents, meeting minutes, and how-to manuals (Orzano, McInerney, Scharf, Tallia, & Crabtree, 2008). Communication tools (e.g., e-mail, shared message boards, Intranet) and other communication channels (e.g., dialogue, regular meetings) allow knowledge to be shared between individuals and groups. Tools that allow communication to be accurate and timely and that facilitate feedback between individuals and/or groups are particularly helpful in spreading knowledge (Elliott & O'Dell, 1999). Collectively, these tools serve as ways to access the collective knowledge that the organization holds, offering some level of protection against the inevitable loss of knowledge when people leave (Long & Davenport, 2003). To support data use for knowledge development and sharing, many district school systems have invested in data information systems and knowledge warehouses. Wayman and Cho (2007) argued that successful educational data systems are thoughtfully designed to meet the needs of educators across levels, considering the challenges around data storage, entry, analysis, presentation, and accessibility. A data system paired with ongoing professional development on its use helps to build technical infrastructure within classrooms, schools, and the district (Lachat & Smith, 2005). Examples of the development of district and state information systems include St. Paul Public School District (Ramnarine, 2004), Philadelphia School District (Renzulli, 2005), and the Pennsylvania Department of Education (Golden, 2005).
Data analysis protocols are another form of "capture" and "communication" tool that enables data use and knowledge sharing (Marsh, 2012). For instance, some schools or districts provide working groups with a structured agenda or specific protocol to guide data-based discussions (Cosner, 2011b; Datnow et al., 2008; Datnow et al., 2007). These tools not only help codify new thinking and learning based on data, but they also help educators translate individual, tacit information into explicit knowledge available to the group. Drawing from the literature on tools to bridge research and practice, researchers note that while tools can be an integral component of educators' practice, under certain conditions they can also be received as compliance-oriented "paperwork" (Baxter, 2010; Datnow & Park, 2010b; Ikemoto & Honig, 2010).

Practices and Policies

Formal and informal practices and policies contribute to knowledge development and sharing, as they limit or expand the range of actions that employees consider, bounding opportunities for interaction (Blair, 2002; Gersick & Hackman, 1990). Knowledge coordination within the organization can be supported or inhibited by scheduled work time that allows for observations, other shared learning experiences, and opportunities for group problem solving (Grant, 1996; Orzano et al., 2008). Organizations can also put into place rewards and incentive structures to encourage data and knowledge sharing, aiming to counter information "hoarding" or other disinclinations to participate in knowledge sharing (Gold et al., 2001; Hislop, 2002; Martensson, 2000). In schools and districts, researchers have reported a range of different organizational routines in place to support data use and foster knowledge sharing. Carving out time within the school day or week (e.g., weekly team or department meetings) dedicated to data analysis and knowledge sharing allows individuals and groups to collaborate on a regular basis (Means et al., 2010; Supovitz, 2006). For instance, Supovitz and Weathers (2004) noted that classroom and school-level "walk-throughs" were useful for giving educators a sense of best practices, while Halverson, Grigg, Pritchett, and Thomas (2005) explored the design of organizational routines that supported information flow from assessments to teachers and the larger school community. Ingram et al. (2004) found that in some schools with high levels of data-based decision making, there was evidence of long-term, formal strategies for organizational change that involved identifying and correcting problems based on data, with communication and information feedback loops in place to facilitate these processes. School systems have also experimented with extrinsic rewards and incentive systems (e.g., a bonus system tied to student achievement growth) to motivate data use and instructional improvement (Marsh et al., 2008; Sutherland, 2004). Table 2.1 below summarizes the main concepts from knowledge management, linking them with empirical evidence from the data-use literature.
Table 2.1
Key Organizational Resources
(Dimensions of each resource, with examples from the empirical education literature)

Human capital
- Collaboration opportunities for employees: professional learning communities, data teams, inquiry groups
- Dedicated support positions for employees: literacy/math, instructional, or data coach
- Knowledge and skills development for employees: professional development to support data use and develop a "data literacy" skill set
- Organizational leadership: district and school leaders as "data-enablers" for teachers in traditional settings

Technology and tools
- Technology and tools to capture data and information: data management system; information warehouse; identification of best practices
- Technology and tools for communication: e-mail, interactive websites

Practices and policies
- Scheduled work time: dedicated data-based collaboration time
- Standard operating procedures of organizations: scheduled "walk-throughs" for observing instructional quality
- Rewards and incentives: linking teacher compensation to student performance

Organizational and Environmental Conditions

Much of the theoretical development in the knowledge management literature has focused on factors within the organization, considering the capabilities needed to create, share, and use knowledge. Less attention has been paid to understanding how the organizational and environmental context shapes the development and nurturing of data use and knowledge creation among an organization's members. The sections below merge the existing theoretical work from the management literature with the empirical findings from education research to inform these layers of the conceptual framework.

Organizational Conditions

In terms of organizational structure, many knowledge management scholars agree that the large, traditional "command-and-control" hierarchy presents major challenges for knowledge sharing (Nonaka & Takeuchi, 1995; Sallis & Jones, 2002). A bureaucratic structure – characterized as a formalized, specialized, centralized organization – can be an efficient model for conducting routine work in a stable environment (W. W. Powell, 1990; Weber, 1947). While such a structure does support the development of departmental expertise, this type of control and standardization can harm communication, the sharing of ideas, individuals' initiative and motivation, and the organization's overall ability to respond quickly to changes in the environment (Nonaka, 1994). Liebowitz and Chen (2003) argued that knowledge sharing in hierarchical and bureaucratic organizations is difficult because people in these organizations "keep knowledge close to their heart as they move through the ranks by the 'knowledge is power' paradigm" (p. 422). Instead, scholars argue that organizational structures that allow for knowledge management are "typified by flatter structures, de-bureaucratisation, decentralization, and co-ordination through increasing use of information and communication techniques" (Swan, Newell, Scarbrough, & Hislop, 1999). An organization designed with a network structure is characterized by dynamic and changing structures, depending upon the task. In networks, authority and participation in decision making are more "distributed" throughout the organization, though the organization may retain elements of both centralization and decentralization.
Unlike hierarchies, where expertise tends to be compartmentalized, in networks knowledge and expertise are not located at any one center but are spread throughout the system (Agranoff & McGuire, 1999; Hadfield & Chapman, 2009; Mandell, 2005; W. W. Powell, 1990; Wohlstetter, Malloy, Chau, & Polhemus, 2003). Knowledge development and sharing therefore require that members of the network be cooperative and collaborative with one another (W. W. Powell, 1990). Rather than rigid, isolated work settings that foster information "hoarding" or competition, network structures can be more flexible, encouraging employees to share and collaborate across boundaries (Gold et al., 2001). The isolated silos created by the hierarchical model limit an individual's access to new sources of knowledge, as people in the same clique or group tend to have access to and process similar information. If connected to an extended network, individuals are exposed to more new ideas (Robertson, Swan, & Newell, 1996). Structure is important not only as a link between individuals and organizational knowledge, but also as a link between individuals and the pressures – institutional or otherwise – of the environment (DiMaggio & Powell, 1983). Around the issue of size, a few studies have looked at knowledge management within what the authors call "small firms" or "small enterprises" (Frey, 2001; Lim & Klobas, 2006; McAdam & Reid, 2001; Sparrow, 2001; uit Beijerse, 2000; Wiklund & Shepherd, 2003). Collectively, these authors suggested that smaller organizations had certain advantages when mobilizing their resources. For instance, they had simple structures and processes for implementing new programs, a unified culture (with fewer employees) in which cooperation was valued, and a close line of communication between organizational leaders and on-the-ground employees. The drawbacks of being small – namely, a limited pool of financial, technical, and human resources, along with the costs of being "new" – hindered knowledge management efforts (Wong & Aspinwall, 2004). Others suggest the importance of sharing decision-making rights throughout an organization. Pedler (1991) identified "participative policymaking" as a key characteristic of a learning organization, while Jamali and Sidani (2008) emphasized the importance of employee participation in decision making for the implementation of knowledge management projects (also see Busck & Knudsen, 2010; Hyman & Mason, 1995). Regulations – particularly those governing employee involvement, participation, and cooperation with management – were also important to knowledge management (Rasmussen & Nielsen, 2011). Finally, many knowledge management theorists pointed to organizational culture as playing a critical role in this work. Culture is defined as the "set of taken-for-granted assumptions, shared beliefs, meanings and values that form a kind of backdrop for action" (Smircich, 1985, p. 58). The culture of an organization can enhance the knowledge sharing process if the climate rewards sharing and collaboration and allows risk taking (Horak, 2001). Other aspects of organizational climate include a shared vision among employees and management as well as trust within and between these groups (Percin, 2010).
Despite the strong evidence within the business literature that organizational structure – specifically a "command-and-control" hierarchy compared to a decentralized, less formal network structure – affects knowledge sharing, surprisingly little research looks comparatively at the role of organizational structure in education. As noted in Chapter One, research in this area has largely been presented in single-district case studies or in studies where different types of systems are included but the impact of these key organizational conditions is not discussed (Datnow et al., 2008; Datnow et al., 2007; Supovitz, 2006; Sutherland, 2004). Similarly, little is known about how the distribution of decision-making rights and the role of regulations affect data use in schools. In earlier work on organizational learning, Marks, Louis, and Printy (2000) found that participatory decision making grounded in teacher empowerment supported the shared commitment and collaborative activity needed for organizational learning. However, this study looked at organizational learning generally, without a focus on how data could influence instructional and organizational decisions. More recently, Wohlstetter, Datnow, and Park (2008) argued that, in addition to the knowledge and skills to analyze data, educators also needed the autonomy and decision-making authority to act upon their findings at the site level. The areas of decision-making autonomy that support data use spanned a range of key responsibilities, including curriculum, assessment, instruction, professional development, and organizational routines (e.g., bell schedule, school day length). One example from education that is highly related to a teacher's ability to respond to data is her decision-making authority over the curricular pacing guide. In some districts, teachers have the authority to set lesson pacing and reteach in response to data when necessary, while in other districts the curricular guides are set, leaving the teacher little flexibility to respond to data and make changes in her instruction (Datnow et al., 2008; Datnow et al., 2007; Kerr et al., 2006; Marsh et al., 2006). Others have considered the organizational climate for data use. In some instances, researchers found evidence of an organizational culture that countered efforts at data use. Ingram (2004) noted that in some schools, teachers were expected to "just teach the curriculum" rather than engage in thoughtful data-based inquiry. Similarly, Ikemoto and Marsh (2007) found that an organizational culture in which data-use and accountability efforts were seen as challenges or threats to the status quo made data use difficult. Compared to "districts with entrenched organizational beliefs that instruction is a private, individual endeavor," the authors argued, " . . . districts with administrators with strong visions of data use who promoted norms of openness and collaboration greatly enabled data use" (p. 124). A culture that focuses on continuous improvement and organizational learning also affects other key factors for data use, including long-term engagement, staff buy-in, perceived usefulness of data, internal motivation (versus basic compliance with an external mandate), and involvement of multiple voices and perspectives (Firestone & Gonzalez, 2007; Kerr et al., 2006; Marsh et al., 2006).
Environmental Conditions

Although theories of knowledge management generally focus on the organization and its internal structures and processes, the external environment also plays an important role. As Rasmussen and Nielsen (2011) argued, "when firms manage knowledge and learning, they do not do so in a vacuum" (p. 488). In the case of for-profit firms, these environmental conditions include the presence of other firms within their sector and pressure from the market to remain "competitive" via their knowledge use and production (Rasmussen & Nielsen, 2011). The authors also noted that available fiscal and human capital resources in the external environment (e.g., an available pool of trained employees, financial resources) can similarly enable or constrain an organization's ability to do this work. While organizations have little control over environmental influences, they need to remain aware of government regulations, market competition, and social, political, and community demands (Holsapple & Joshi, 2000; Keskin, 2005; Percin, 2010; Walker, 2006). In terms of environmental factors and data use, the largest area of research in education has focused on the role of state and federal accountability systems. As Coburn and Turner (2011) discussed, these accountability systems were designed with data at their core: ". . . the sanctions and rewards linked to data will (a) focus greater attention on student performance, thus increasing data use and (b) leverage the findings from the data to motivate educators to make instructional change to improve that performance" (p. 186). In some instances, research has found that the growing presence of accountability regulations has played a positive role in holding schools responsible for student learning. For instance, Dee and Jacob's (2011) results suggested that No Child Left Behind produced statistically significant increases in the average math performance of fourth graders as well as improvements in eighth-grade math achievement, particularly among traditionally low-achieving groups and at the lower percentiles. Other scholars have pointed to the negative, unintended consequences of such a high-stakes focus on student and school-level data: teaching to the test (Popham, 2001); narrowing of the curriculum to focus on the tested subjects of math and English Language Arts (Crocco & Costigan, 2007); concentrating time and attention on the "bubble kids," those students closest to changing proficiency levels (Booher-Jennings, 2005; Ho, 2008); cheating (Jacob & Levitt, 2004); and other gaming behaviors (Figlio & Getzler, 2006; Figlio & Winiki, 2005). However, it is unclear how other facets of organizational context influence these responses to accountability systems, nor is it understood what role other forms of accountability and other external policies may play (e.g., market accountability, professional/internal accountability, bureaucratic accountability from a third party). Two other environmental conditions in education – the district/state financial context and political conditions – are also likely to shape a system's data use and resource allocation, although this area has not been explored to date. First, limited financial resources – particularly in times of economic crisis – will likely play a major role in a school system's decision making (Odden & Picus, 2008).
In California, for example, districts have responded to recent state funding cuts by increasing class sizes, closing programs, and shortening the school year, among other reactions (Legislative Analyst's Office, 2011). It is hypothesized that the financial environment will similarly shape how a school system is able to invest in knowledge management resources. Second, others have noted that education has a macro-political dimension, driven by the interrelationships among teachers' unions, school administrators, boards of education, the superintendent's office, and other stakeholder groups (Eliot, 1959; P. E. Peterson, 1974). Spring (1998), for example, outlined how the political organization of school systems has shaped curriculum, content, and testing. As such, it is likely that this macro-political environment will similarly affect a school system's decisions to support data-use initiatives.

Summary

Collectively, scholarship in this area suggests that a knowledge-driven organization is one that values individuals' knowledge as a critical resource and supports its development and sharing. Core to this goal is the belief that people, technology and tools, and organizational practices need to be aligned in order to increase knowledge flow across the organization to inform decisions (Petrides & Guiney, 2002; Petrides & Nodine, 2005; Petrides & Nodine, 2003). Both organizational and environmental conditions are likely to shape how an organization uses data to develop actionable instructional knowledge and mobilizes resources to this end. These core ideas from knowledge management and organizational learning were merged with empirical findings from education to present the study's conceptual framework. As addressed earlier, little is known about how the design of education systems affects how schools are able to mobilize their resources internally to use data and share knowledge throughout the organization. Chapter One's review of literature on charter management organizations – a growing governance structure in today's educational landscape – offers early evidence of variation in certain organizational and environmental conditions. Together, Chapters One and Two build a conceptual model from which to hypothesize about and explore the role that organizational and environmental context may play in data use. Next, Chapter Three describes the decisions about research design, methodology, data collection, and analysis techniques.

CHAPTER THREE
RESEARCH METHODS

The purpose of this research is to gain an in-depth understanding of how different types of school systems – charter management organizations and traditional school systems – mobilize resources to use data to guide decision making and to share knowledge across the organizations. This chapter describes the research methods for achieving this goal. First, the chapter provides an overview of the study methods, offering a rationale for using a qualitative, multi-case study approach. The next section reviews the guiding research questions, sample selection, data collection, and analysis techniques. Finally, the chapter discusses the steps taken to ensure the research's quality and rigor, as well as the study's limitations.

Research Design: A Qualitative Approach

As exploratory research, this study uses a comparative qualitative case study approach to gain an in-depth perspective on data use in different kinds of school systems (Ragin & Becker, 1992).
Qualitative methods are useful for studying a limited number of cases in depth, describing complex phenomena situated and embedded in local context (Merriam, 1998). They allow researchers to understand dynamic processes, consider patterns, and view change over time (Patton, 2002). Finally, as a tool for developing theory in an under-theorized area, a qualitative methodology allows for flexibility and responsiveness through iterative data collection and analysis to develop new ideas and relationships from the field (Creswell, 2003; Eisenhardt, 1989). This study uses a comparative case study design to provide an in-depth understanding of how different school systems mobilize their resources to use data for decision making and share knowledge across their organizations. Yin (2009) defined a case study as "empirical inquiry that investigates a contemporary phenomenon within its real-life context, especially when the boundaries between phenomenon and context are not clearly evident" (p. 18). Specifically, an instrumental case study design allowed for an understanding of the school systems guided by the conceptual framework outlined in Chapters One and Two (Patton, 2002; Stake, 2000a). To this end, the specific research questions guiding the study were:
1. How do school systems use data to improve student outcomes and organizational performance?
2. What organizational resources – human capital, technology and tools, and organizational processes/practices – are in place in different school systems to support data use? How do these resources enable or constrain data use?
3. How does the organizational and environmental context shape how school systems use data and mobilize their organizational resources to this end?

Sample Selection

The purposive sample selection for this dissertation occurred in two phases. First, two traditional districts were identified based on the incidence of the phenomena under study. Then, two charter management organizations with similar reputations were matched to them by state context, student demographics, and grade configuration. Additionally, all four systems met the criteria developed from the conceptual framework and research questions (see Table 3.1). They were intentionally selected as "information-rich cases" to maximize understanding of the role of organizational context in data use and knowledge management (Creswell, 2003; Patton, 2002; Ragin & Becker, 1992; Yin, 2003). In accordance with Yin (2009), two of each type of school system were included to strengthen the findings, in comparison to the more limited conclusions that could be drawn from a single district compared to a single CMO.

Traditional School System Sample

The research partially drew from a larger, multi-site study of six schools in four traditional school districts. Funded by the Spencer Foundation and led by USC Professor Julie Marsh and RAND senior policy analyst Jennifer McCombs, this study focused on interventions that support teachers' use of data to inform their literacy instruction: data coaches, literacy coaches, and professional learning communities/data teams. [Footnote 4: The Spencer research team included the two principal investigators, two other research associates, and myself. As a research associate, I played a key role in selecting districts, designing data collection instruments, conducting site visits, and managing the project. When applicable, I reference the work of the research team. Otherwise, choices, decisions, and analysis can be assumed to have been done independently.] Researchers on the Spencer project first identified an initial list of possible exemplary districts by asking educational researchers, practitioners, and policymakers to identify school systems nationwide that had strong reputations in this area.
Next, the team conducted an extensive internet search for possible district sites noted for their best practices in existing research (e.g., ERIC, Google Scholar), popular press articles (e.g., Education Week, New York Times), and school and district websites. The research team also identified sites through previous projects and work in the field. These sources yielded ten suggested districts. During the summer of 2011, preliminary phone interviews and e-mails with school system leaders helped to further determine district eligibility and interest in participation. The final four traditional school districts for the Spencer project were systematically selected based on strong reputational factors and evidence of district commitment to data use (e.g., training/professional development, history of investment). For this dissertation, two of the four districts from the Spencer study were selected; both were located in Southern California and served low-income, urban communities. Based on the early research and preliminary interviews, they also met the criteria developed from the conceptual framework (see Table 3.1).

Table 3.1
Criteria Developed from Conceptual Framework
Sample Criterion 1: Does the school system collect and use a wide range of data types, beyond annual state assessment results?
Sample Criterion 2: Does the school system invest in human capital resources for data use (e.g., coaches, teacher collaboration time, or professional development)?
Sample Criterion 3: Does the school system invest in an information-sharing database or other technology/tools available to site-level educators?
Sample Criterion 4: Are there internal policies and practices within the organization that are intended to encourage data use?

CMO Sample Selection

To identify the CMOs for this study, the charter school system needed to meet the definition of a CMO: a nonprofit organization that manages multiple charter schools with a common mission and instructional design and a home office/management team that offers ongoing support to its schools (Farrell et al., 2012). Excluded from this group of organizations were loosely tied networks of charter schools without a central office, organizations that run virtual or online charter schools, and school districts in which all public schools are charters. Additionally, agencies that serve a broader purpose (e.g., a community organizing group) but also run one or more charter schools were not included. Excluding these organizations from the sample reflected the belief that, as organizational forms, they were fundamentally different from the growing consensus around the definition of a CMO. To match the state context and grade configuration of the two traditional school districts, the study drew from comprehensive lists of charter management organizations (Furgeson et al., 2012; Lake et al., 2010) and considered only CMOs serving secondary students (grades 6-12) in California.
California first established its charter law in 1992, and by one count, there are 22 CMOs operating in the state (Lake et al., 2010). [Footnote 5: Nationally, CMOs vary in their geographic scope (J. Smith et al., 2009). Some have a national approach while others have a multi-state strategy.] The five largest CMOs with schools in Southern California were selected to best approximate the size of the traditional school districts. Of these five, one was immediately excluded because of recent financial and governance struggles; the rationale behind this decision was that these struggles would be highly disruptive to normal organizational processes. Next, CMO websites were reviewed; educational experts, researchers, and practitioners were consulted; and information on the data-use strategies of the remaining four CMOs was collected from internet sources. At this point, one CMO was excluded because it only served grades 9-12. After preliminary interviews with CMO leaders, one CMO declined to participate, citing time demands on the staff. The remaining two CMOs were selected for study participation, as they also met the criteria developed from the conceptual framework (see Table 3.1). Both had initial evidence of dedicated organizational resources – human capital, technology and tools, and policies and practices – for data use and knowledge sharing. The final sample includes four school systems that are comparable in their state context, student demographics, and grade configurations, and all four showed initial indicators of dedicating organizational resources to data use and knowledge sharing. Selecting based on these contextual characteristics maximized the ability to answer the research questions. Additionally, this selection process reduced some of the other possible forces that could influence the data use and knowledge sharing phenomena (i.e., differences in state accountability systems cannot explain differences seen in data use across these two kinds of systems). Table 3.2 lists the final study sample along with some key demographic information. The names of school systems and schools have been changed to protect the confidentiality and anonymity of the study participants.

Table 3.2
School System Profiles: 2010-2011

Sequoia School District (traditional): 22,000 students; 10 secondary schools*; 34 total schools; 73% free and reduced lunch^; 23% limited English proficiency^; race/ethnicity^: 1% African American, 14% Asian/Pacific Islander, 80% Latino/a, 5% White
Mammoth School District (traditional): 21,000 students; 9 secondary schools*; 28 total schools; 61% free and reduced lunch^; 21% limited English proficiency^; race/ethnicity^: 3% African American, 4% Asian/Pacific Islander, 78% Latino/a, 12% White
Yosemite CMO: 10,000 students; 10 secondary schools*; 34 total schools; 96% free and reduced lunch^; 28% limited English proficiency^; race/ethnicity^: 1% African American, 1% Asian/Pacific Islander, 99% Latino/a, 1% White
Yellowstone CMO: 5,000 students; 12 secondary schools*; 13 total schools; 78% free and reduced lunch^; 11% limited English proficiency^; race/ethnicity^: 2% African American, 1% Asian/Pacific Islander, 95% Latino/a, 1% White

Source: California Department of Education, http://dq.cde.ca.gov/dataquest; district/CMO websites
Note: Numbers have been rounded slightly to provide anonymity for the school systems. The proportions remain the same.
* Secondary schools include a range of different grade configurations, including 6-8, 7-8, 9-12, 6-12, or 8-12. For the CMOs, several of their secondary schools are recently opened and are growing (e.g., 6-7, with plans to serve 6-12).
^ Free and reduced lunch, limited English proficiency status, and race/ethnicity statistics are aggregated at the system level, not only for secondary schools.
School Selection Selection of one or two secondary schools in each district and CMO provided insight into how data use unfolded across levels of the school system. In all cases, district or CMO administrators recommended schools that were exemplary cases of school-level data use. 6 Preliminary interviews were conducted with each school’s leader to confirm their eligibility and interest. Table 3.3 presents a general profile of each school. !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! 6 Level of interest in participation and access largely determined whether one or two schools were included for each school system. ! ! ! ! ! ! ! ! !! ! ! 59! Table 3.3 Case Study School Profiles: 2010-2011 Sequoia School District Mammoth School District Yosemite CMO Yellowstone CMO Sherman Middle School Whitney Middle School Green Middle School Mariposa High School Norris Middle School Roosevelt Middle School Total Student Population 500 400 900 350 350 300 Grades Served 7-8 7-8 6-8 9-10 6-8 6-8 2010-2011 Growth API* 830 750 710 760 835 870 Student Demographics Free/Reduced Lunch (%) 90 90 85 90 85 90 Limited English Proficiency Status (%) 20 25 25 20 10 15 Race/Ethnicity (%) African American 1 1 2 0 3 5 Asian/Pacific Islander 3 4 5 0 1 2 Latino/a 95 94 91 99 94 92 White 1 1 2 1 1 1 Source: California Department of Education, http://dq.cde.ca.gov/dataquest; district/CMO websites Note: Numbers have been rounded slightly to provide anonymity for the schools. The proportions remain the same. * California’s Academic Performance Index (API) score is a metric that summarizes the school’s results on the state’s yearly standardized tests. The API score ranges from 200 to 1000, and the state’s goal is for all schools to reach and maintain a score of 800 or higher. Participant Selection The focus of this study was to understand data use and knowledge sharing across all levels of each school system. As such, a full range of perspectives involved meeting with individuals from the central/home office, school sites, and when appropriate, the regional offices. For each school system, a leader at the district or home office was the critical informant to help identify and contact other district/home office administrators ! ! ! ! ! ! ! ! !! ! ! 60! and leaders (Bogdan & Bilken, 1998). Interviews with key district or CMO leaders and administrators who dealt with assessments, data management, and literacy instruction occurred at the central or home office. For each school site, the principal was the point person for access to school site administrators, staff, and teachers. Teachers were interviewed either independently or in focus groups. In addition to the individuals recommended by the school system leader and/or the principals, the study used a snowball sampling technique to broaden the sample (Bogdan & Bilken, 1998). Asking each interviewee to identify who he or she would approach if they had a question concerning data use or literacy instruction helped identify other individuals who were important to the system’s data-use capabilities. Table 3.4 lists the type and number of participants from each school system. 
Table 3.4 Participant List Traditional Public School District Charter Management Organization Sequoia Mammoth Yosemite Yellowstone District/Home Office Administrators System Leaders (Superintendent) 5 4 5 6 Regional Office Administrators 0 0 0 2 School Leaders (Principal, AP) 2 2 3 2 Teacher Leaders (e.g., Literacy Coach) 2 1 1 2 Teachers 22 7 3 5 Consultants/External Supports 1 0 0 0 Teachers’ Association Leaders 1 1 N/A N/A TOTALS 33 15 12 17 School System Profiles In this section, brief descriptions of each of the four school systems studied are provided. Each summary provides information concerning system size, demographics, ! ! ! ! ! ! ! ! !! ! ! 61! leadership, achievement record, background on school sites visited, and organizational mission/vision. Sequoia Public School District With close to 22,000 students, Sequoia Public School District had over 30 schools serving pre-K to 12 th grades. The district additionally had 33,000 students in adult education programs offered at eight correctional facilities in the county. Sequoia served a largely Latino student population (78%), with Asian Americans as the next largest demographic group served (13%). There was a perception by some within the district of a racial/ethnic and economic divide. The northern part of the district was thought to serve the “haves,” a more affluent, Chinese community, while the schools in the southern half of district worked with the “have-nots,” a less affluent, largely Latino, community. Three groups shared governance responsibilities: the board of education, district leadership, and the Sequoia teachers’ association. The district was governed by a five- member board of education. The board’s conservative financial management was credited with the fact that the district continued to have 20 to 1 class size reduction for grades K-3, no layoffs, and no furlough days, despite a difficult fiscal environment. The current superintendent had held her position for the past five years; prior to that position, she worked as teacher, principal, and district administrator for many years within Sequoia. Other administrators within the central office similarly have progressed in their careers from classroom teachers and school administrators to district central office leadership. Teachers and other classified employees were represented by the Sequoia teachers’ association. ! ! ! ! ! ! ! ! !! ! ! 62! Between 2010 and 2011, the district met its 2011 API Goal, reaching over 800. However, because several significant subgroups district-wide did not meet their proficiency targets in English-Language Arts and Math, the district failed to meet the federal Adequate Yearly Progress (AYP). Consequently, the district entered District Program Improvement status in 2010-2011. As noted on the district’s webpage, Sequoia’s guiding principles focused on creating a shared sense of responsibility with students, parents, staff, and community to create a professional culture that valued honesty, respect, and teamwork and to recognize the range of learning styles of students. Mammoth Public School District Mammoth Public School District had over 30 school sites, and in total, the system served around 21,000 students. Demographically, the majority of students in Mammoth were Latino (78%), with Caucasian as the second largest group (12%). Similar to Sequoia, there was a perceived divide within the district that fell along ethnic, racial, and economic lines. 
In the part of Mammoth that served largely affluent, Caucasian student bodies, only 3.2% of families were considered to be living below the poverty line; in comparison, in the majority Latino area of the district, over 10% of families were considered to do so (2010 US Census). Also similar to Sequoia was Mammoth’s leadership – an elected board of education, district administration, and Mammoth teachers’ association. The board had seven elected members and one student member. In prior years, the board voted to make a series of budget cuts in response to the state financial environment, including layoffs to teaching and non-teaching staff as well as an increase in elementary class size. The ! ! ! ! ! ! ! ! !! ! ! 63! superintendent had been in her position for three years, coming from administration positions out of state. Mammoth met its 2010 – 2011 API growth goal, but the district did not meet its AYP goal because it failed to meet AYP proficiency targets for certain subgroups in English-Language Arts and in Math. The district entered Program Improvement status in 2009-2010, the first year of the Superintendent’s tenure. The district leadership team communicated its “big 5” goals: high academic achievement, effective standards-based instruction, fiscally solvent and increase enrollment, accountability for all stakeholders, and safety and security of students and staff. Yosemite Charter Management Organization Yosemite Charter Management Organization incorporated in the late 1990’s, with its first schools opening in California in the late 1990’s. Since then, the CMO had grown to be a statewide network of schools with regional clusters. In 2011-2012, Yosemite CMO served over 12,000 students across over 30 schools. Ninety-six percent of students received free/reduced lunch, and 99% of students were Latino. Yosemite’s founding Chief Executive Officer was a long time educator with a career that started in traditional school district leadership. The non-profit CMO had a Board of Directors that provided financial and governance management to all schools. The majority of Yosemite’s schools had locally-approved charters, authorized by 10 different sponsoring school districts. The CMO did not have a collective bargaining agreement; all employees were at-will. ! ! ! ! ! ! ! ! !! ! ! 64! Yosemite announced a self-reported system-wide API score of 820 in 2010- 2011. 7 Of the 30 schools that had AYP reports for 2010-2011, 18 schools did not meet their AYP goals, and three of those schools were in program improvement. As for their mission, the CMO aimed to increase the academic performance, develop effective educators, catalyze change in public schools, and share successful practices with other schools and school systems. The CMO also had a 100% College promise to its students and families. Yellowstone Charter Management Organization In the 2011-2012 school year, there were 13 schools in the Yellowstone network, divided into 2 regional clusters. Opening secondary schools was the focus of the CMO’s mission, with the majority of their schools serving grades 6-12. Close to 5,000 students attended these Yellowstone-affiliated schools, the majority of whom were Latino (95%) and recipients of free/reduced lunch (78%). The creation of the Yellowstone management organization occurred in the mid 2000s, following the opening of three secondary schools when the co-founders decided to develop a nonprofit organization to support the development and management of the growing network. 
The schools were all authorized by the local school district. Like Yosemite, Yellowstone’s teachers were not organized under a collective-bargaining agreement. Schools within the network had 2010-2011 API scores that ranged from 750 to 915. Of Yellowstone’s 11 schools with state test results for that time period, four schools did not meet their AYP goals, and two were in program improvement. Yellowstone’s mission statement, posted at the home office and both schools visited, had !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! 7 The California Department of Education only publishes school-level data for CMO-affiliated schools. API and AYP data aggregated at the systems’ level were not available for CMOs through the state’s website. ! ! ! ! ! ! ! ! !! ! ! 65! three main goals: college graduation for its students, proficiency on state assessments after four years with the CMO, and student involvement in their community. Data Collection Triangulation of findings across various data sources is one of the hallmarks of qualitative research (Patton, 2002). In this study, data collection involved a variety of sources of evidence, including in-depth semi-structured interviews and focus groups, observations and field notes, and internal and external documents. Data collection occurred from August 2011 to January 2012. Two to four site visits were made to each of the school systems and their school(s). Interviews with the district/home office administrators and leaders were primarily conducted on one site visit, and school level data (e.g., teacher, principal interviews) were collected during a second visit. Observations of data or literacy related professional development occurred on additional trips to schools. Documents (e.g., organizational charts, data analysis examples) were gathered from the schools and district/home office level that were relevant to the study, as were articles from research, the popular press, and academic journals. Semi-Structured Interviews As Table 3.4 illustrates, interviews with administrators from the district or central office included speaking with the Superintendent or CEO and other staff in charge of data, assessment, evaluation, and secondary school literacy. At each school, the principal was interviewed, as was the Assistant Principal at one site. English-Language Arts (ELA) teachers and teacher leaders (e.g., Department Chair, Teacher on Special Assignment, Literacy Coach, Mentor Teacher) were also interviewed. ! ! ! ! ! ! ! ! !! ! ! 66! A master semi-structured interview protocol was first developed with the research team of the Spencer project. In addition to areas of overlap between the two projects (e.g., types of data prioritized in the school system), specific questions were added that related directly to this project’s conceptual framework. This protocol served as the base for the interview protocol for the CMOs, although some CMO-specific questions were added, and Spencer-specific questions (e.g., history of the literacy coach model) were dropped. The order of the questions was also changed from the Spencer protocol to the CMO protocol. The interview protocol (Appendix A) was then slightly altered for different participant groups to target their specific role and responsibilities in the data use processes. 
Despite some small differences between the protocols for the traditional districts and the CMOs or for different participant groups, all interviews included open-ended questions about the following topics from the conceptual framework based on data use and knowledge management, including the types of data used (RQ1); organizational resources that enable or constrain data use and knowledge sharing (RQ2); and organizational and environmental context (RQ3). The open-ended, semi-structured interview format provided a consistent, guiding focus on topics related to the research questions while also allowing for the flexibility for participants to identify and elaborate on other important issues that arose (Bogdan & Bilken, 1998; Merriam, 1998; Stake, 2000a). Table 3.5 displays the content of sample interview questions and indicates the research questions for which they provided data, a technique recommended by Anfara, Brown, and Mangione (2002) to link data collected to initial research questions. ! ! ! ! ! ! ! ! !! ! ! 67! Table 3.5 Association Between Research Questions and Sample Interview Questions Research Questions Sample Interview Questions 1. How do school systems use data to improve student outcomes and organizational performance? • In general, what types of data do you use most frequently? How do you use this data? • Can you walk me through a time where you collected, analyzed, and acted upon literacy data? 2. What organizational resources – human capital, technology and tools, and organizational processes/practices – are in place in different school systems to support data use? How do these resources enable or constrain data use? • Who are the people at the district/home office who are responsible for collecting and analyzing literacy data? What are their roles and responsibilities? • What kinds of technology or tools do you use to collect data or share best practices? • If there is a great literacy strategy happening in a secondary school classroom, what systems or practices are in place to identify and share that idea? 3. How does the organizational and environment context shape how school systems use data and mobilize their organizational resources to this end? • What role does the home/district office play in data-use initiatives? The school site? • Who makes decisions about literacy curriculum, instruction, or assessments in the school system? • What are the major federal, state, or local policies that have influenced how you collect and analyze literacy data? • What are the expectations for school leaders for data use? • To what extent has being a charter organization influenced how the system uses data? At the beginning of each interview and focus group, oral consent was explained and obtained (Appendix B). Anonymity was assured: all identifying names of individuals, schools, and school systems have been changed in this dissertation and will ! ! ! ! ! ! ! ! !! ! ! 68! continue to be in any future publications. Permission was obtained to record the interview or focus group. All interviews and focus groups occurred in person, and researchers took notes during each interview and focus group on the setting and nonverbal behaviors of the participants. Each in-depth interview and focus group lasted from 45 minutes to 1 hour and 15 minutes. All digital recordings were subsequently transcribed. Interview transcripts were reviewed for accuracy, and any discrepancies were clarified with the interviewee or others on the research team. 
All interview transcripts were stored by pseudonym as electronic files. In total, 77 participants were interviewed one-on-one or in a group setting (refer to Table 3.4 for details). Observations and Field Notes Observational notes help researchers develop “thick description” of each context, one of the goals of qualitative research (Geertz, 1973). Observing in a site helps create a holistic picture of an activity or setting, bringing to light issues not captured in written documents or verbal explanations (Patton, 2002). In this study, observations and field notes were done in systematic and purposive ways that, while guided by theory, were mainly inductive in approach (Adler & Adler, 1994). An observation protocol was developed and used to capture a range of activities focused on literacy instruction, data use, and knowledge sharing in a school system (Appendix C). These relevant meetings included professional development sessions on data use; leadership team meetings; faculty meetings; and small groups of teachers meeting to discuss data and literacy instruction, among others. For each meeting, the dialogue was transcribed as close to verbatim as possible on a laptop computer by the researcher present. Within 24-48 hours of the meeting, the transcript was cleaned, and ! ! ! ! ! ! ! ! !! ! ! 69! field notes were added, including a description of the setting, activities, and the people involved as well as reflections, comments, and future questions. Document Collection and Review Data collection also included documents and files, and these written records served several purposes. The documents were a basic source of information of the policies, practices, and history of each organization, and they provided background to allow researchers to tailor interview questions more specifically to each context. Documents and records also gave a “behind-the-scenes” view of the school system, independent of the relationship between researcher and participant. Finally, documents served to validate self-reported information of the interviewees (Merriam, 1998; Patton, 2002; Stake, 2000b). At each site, various documents were collected, including organizational charts, data management system protocols, lesson planning guides, and sample data analysis output displays. If given hard copies at the site visit, each document was scanned and converted into an electronic file. In addition to internal organizational documentation, internet searches of each school and school system provided additional information. These documents included popular press articles, previous research reports (where the school or school system had been identified), and public press releases. In most cases, these external articles or records provided further general background on the organizational context, culture, and history of the sample schools and school systems. Data Analysis In qualitative research, transparency of the process of moving from raw data to final conclusions is critical to evaluating a study’s merit (Anfara et al., 2002). In other ! ! ! ! ! ! ! ! !! ! ! 70! words, data coding and theme development need to be “as public and replicable as possible” (Denzin, 1978, p. 7). To this end, the steps from data collection to findings – memo writing, iterative coding methods, and theme development – are explicitly outlined in the discussion that follows. In this project, data analysis occurred concurrently with data collection in a continuous and iterative manner (Creswell, 2003; Miles & Huberman, 1994). 
After each interview, observation, and document review, writing reflective notes helped to further thinking about possible relationships, interesting findings, and new questions for future data collection. These ideas and notes formed the basis of a case study record for each school system (Yin, 2003). Interview protocols were also subsequently adapted to include any new issues that arose from this early memo writing. Once all of the data were collected, interview transcripts, observation notes, and collected documents were saved as electronic files and loaded into the qualitative analysis program ATLAS.ti, version 6. A coding list was developed as a tool for organizing the data based on the study's conceptual framework and research questions. The code list focused on the three broad domains encompassed in the research questions: types of data and data uses, resources, and context. For instance, the conceptual framework suggested that organizational and environmental context shapes data use. Consequently, codes were written to capture ideas around organizational structure, culture, and regulations. In addition to these a priori codes developed from the conceptual framework (Miles & Huberman, 1994), open coding methods were also used, consistent with Strauss and Corbin (1998). Codes for ideas not conceptualized in the theoretical model were created as they were identified during the iterative analysis process. For example, several interviewees mentioned how collective bargaining agreements influenced the ways the school systems used data in their teacher personnel policies. Accordingly, a code was added to capture these emerging ideas. Appendix D details the codes included in the initial coding list.

Following the first round of coding, reports were generated for individual cases, which allowed for the completion of case study reports for each school system, organized by the conceptual framework. Cross-site reports were also run, based on different categories of codes. Conceptual memos helped to develop themes within categories (e.g., technology and tools across all four systems) and between school system types (e.g., differences between CMOs and traditional school systems). Illustrated data displays furthered analysis by visually demonstrating relationships between various categories of data. Throughout this process, themes, patterns, and categories evolved as the data were synthesized; subsequently, some codes were deleted or combined into more refined thematic categories (Miles & Huberman, 1994). Appendix E includes the final coding list to illustrate the differences in the level of specificity over time and the evolution of the coding. Table 3.6 below offers a selected example of this iterative coding process. Coding around data use began with a broad code of "response to data." By the end of the first round of coding, more specific behaviors of data use arose, e.g., using data to identify "Safe Harbor" students. These categories were combined and reduced during the final coding iteration to the six models of data use.

Table 3.6
Tracing "Data Use" Through the Coding Process

Concept from theoretical framework: Educators will use data to inform instruction in a variety of ways.

Initial codes: Data Use; Who is involved; Response; Frequency; Conflict in data use
End of first coding iteration (DATA USE): DataUse.Cheating; DataUse.Diagnostic; DataUse.Frequency; DataUse.Identify students interventions; DataUse.Other; DataUse.Reteach; DataUse.Safe Harbor Kids; DataUse.Small groups; DataUse.Teaching to the test; DataUse.Who is involved; DataUse.With Students

End of final coding iteration (DATA-USE MODELS): Model.Accountability; Model.Compliance; Model.Instruction; Model.Positioning; Model.Signalling; Model.Student learning

Ensuring Research Trustworthiness

In order to ensure a high level of quality and rigor, Lincoln and Guba's (1985) evaluative criteria for establishing trustworthiness in qualitative research -- credibility, transferability, dependability, and confirmability -- guided the study, which used a wide range of strategies to meet these criteria (Anfara et al., 2002; Marshall & Rossman, 2011; Merriam, 1998; Shenton, 2004; Stake, 2000a, 2000b).

First, the concept of credibility is captured in the question: how congruent are the findings with the authentic "goings on" at the schools and district/home offices (Merriam, 1998)? Several provisions were made to ensure the study's credibility. As outlined in this chapter, the study adopted well-recognized research methods that were appropriate for the research questions of interest. Conclusions were further supported by extended engagement in the field; more than 60 hours were spent conducting site visits over the course of the study. The internal validity of the claims was strengthened by utilizing multiple sources of data. In this study, triangulation involved collecting data across method types (e.g., observations and interviews) and across types of sources (e.g., teacher, principal, district leader) (Merriam, 1998). Findings were only reported if they were corroborated by at least two data sources. Debriefing among the research team and other colleagues after site visits allowed assumptions and interpretations to be challenged, refined, and justified.

Transferability in qualitative research is similar to establishing external validity within quantitative research. Providing detailed "thick description" of a phenomenon allows others to determine the extent to which the conclusions drawn are transferable to other situations, settings, and activities (Geertz, 1973). Descriptive field notes and memo writing throughout data collection and analysis facilitated the development of background information that provides context for readers.

The third criterion is the dependability of the study's findings, i.e., the degree to which the findings are consistent and the research process could be repeated in the future. Triangulation of the data and peer examination of the findings (by those on the research team as well as other colleagues) were both employed as strategies to ensure the study's dependability. Details about the research design and its implementation, as described in this chapter, are intended to enable future replications of the study.

Finally, standards for trustworthiness suggest that study findings need to be confirmable, driven by the experiences, ideas, and reports of the informants rather than the preferences, expectations, opinions, or biases of the researcher. During meetings of the research team, explanations about choices made during the research process, as well as the limitations of these decisions, helped clarify and validate the neutrality of the researcher's role.
Additionally, an audit trail, or "chain of evidence," allows others to follow the line of logic from research questions through protocol development, data collection, and conclusions. It provides readers with an opportunity to evaluate the trustworthiness of both the process and the product of the research study.

Study Limitations

Like all research designs, a comparative qualitative case study approach has its limitations. First, it may lack generalizability to other contexts. Given the wide variation across districts and CMOs, it would be inappropriate to apply these findings directly to all school systems. Instead, the strength of this project is in its contribution to theory development and refinement (Eisenhardt, 1989). As Yin (2003) argued: "case studies, like experiments, are generalizable to theoretical propositions and not to populations or universes. . . . The goal [is] to expand and generalize theories (analytic generalization) and not to enumerate frequencies (statistical generalization)" (p. 15). Another inherent risk in qualitative research is the possibility of researcher bias. For instance, rather than keeping an open mind about all possible interpretations, an investigator might identify only those data points that support his or her prior beliefs. The range of strategies used in this study (i.e., continuous reflection, prolonged field engagement, triangulation, peer debriefing, thick description, audit trails, and explicit description of data collection and analysis techniques) attempts to correct for undue bias. As for the study itself, the number of schools represented from each system, as well as the overall number of participants across school levels, is a constraint. This study's focus on literacy instruction in the secondary setting is a final limiting factor.

Summary

While the literature has largely documented data use within school systems, few studies have explored how organizational context affects these processes. Therefore, data use in two kinds of school systems – traditional school systems and charter management organizations – was the focus of this dissertation. A comparative case study design was employed to explore three central research questions and to build theory in this under-developed area. In a purposeful sample of four school systems, data collection involved interviews, observations/field notes, and internal/external document analysis involving teachers, school administrators, and district/CMO leaders. Data analysis involved a continuous, iterative approach, connecting research questions with protocols, coding, and theme development. The next chapter explores RQ1, the types and uses of data in these two types of school systems.

CHAPTER FOUR
HOW SCHOOL SYSTEMS USE DATA TO IMPROVE LITERACY INSTRUCTION AND ORGANIZATIONAL PERFORMANCE

The purpose of this chapter is to consider the first research question: how do school systems use data to improve student outcomes and organizational performance? This chapter reviews six main categories of data about learning and teaching used across the four school systems – classroom, common grade, teacher observations, system, high-stakes state assessments, and college-ready indicators. The next section explores trends in data use within and across school systems.
Notably, while all systems attended to the high-stakes state assessment data that contribute to AYP, API, and program improvement status, the CMOs reported greater use of teacher observation data and indicators of college readiness. Also, within each system, notions of data use (specifically concerning classroom and system-level assessments) varied among actors at the classroom, school, and system levels. Finally, the analysis revealed that educators have multiple purposes in mind when using data. While data were used to focus on student achievement, learning, and instruction, there were also more symbolic uses in which data were used for compliance, positioning, and signaling outside of the organization.

Introduction

Consistent with past research (Bernhardt, 2004; Ikemoto & Marsh, 2007; Supovitz, 2012), evidence from Sequoia, Mammoth, Yosemite, and Yellowstone indicates that educators faced a multitude of sources of data. As outlined in Table 4.1, six categories of data about learning and teaching emerged from the analysis.

Table 4.1
Sources of Data Used to Inform Literacy Teaching and Learning 8

Classroom: Teacher-developed formative assessments (e.g., quizzes/exams, student essays, exit slips)
Common grade: End-of-week or unit assessments, developed by the grade-level team
Teacher observations: Teachers observing other teachers; administrators observing teachers (e.g., informal instructional walk-throughs, formal observation/evaluation)
System: Interim/benchmark assessments; diagnostic/summative unit assessments
High-stakes state assessments (included in API/AYP calculation): California Standards Test (CST); California English Language Development Test (CELDT); California High School Exit Exam (CAHSEE)
College-ready indicators: Advanced Placement (AP) test results; Scholastic Aptitude Test (SAT); Preliminary Scholastic Aptitude Test (PSAT); Early Assessment Program (EAP); student character index; CAHSEE

8 State and local education agencies may require the collection of other types of data, but the table lists the data identified most frequently across the four school systems.

During the semi-structured interviews, all educators were asked, "What types of data are used to inform literacy instruction in this school system?" In their responses, interviewees described the types of data that were used to inform or support literacy instruction and provided examples of their use in practice. For teachers, this question referred to their own literacy instruction, while for administrators and system leaders, it referenced either their own use of data or their observations of data use at the school site. The process of identifying patterns within and across school systems had multiple steps. First, each respondent's interview transcript was coded by data type, and notes on their individual data-use habits (e.g., whether one form of data was emphasized above all others) were recorded. Next, coded responses to this question were aggregated to the respondent group level by data type (e.g., all Sequoia teachers' responses on classroom data). The respondent group's responses were then designated as high, medium/mixed, or low perceived use according to the criteria described in Table 4.2.

Table 4.2
Criteria for Assigning Levels of Data Use

Low Medium/Mixed High
No or very little reported use of this type of data to guide instruction by the majority of respondent group members.
Limited reported use of this type of data to guide instruction by some of the respondent group members OR in instances where there was a mix of low, medium, high responses. Frequent use of this type of data to guide instruction, with multiple examples of this work, by the majority of the respondent group members. As noted in Table 4.2, this assigned value represented a judgment with regards to frequency of mention across the group’s participants, distribution within the respondent group, and level of emphasis. For instance, teachers in Sequoia were designated “high” in their use of classroom data. Within this group, all of the teachers (distribution within the respondent group) from this district interviewed mentioned using several forms of classroom level data in their practice on a regular basis (frequency of mention). Further, within the respondent group, there were many examples of how these classroom assessments were used to check for understanding during a lesson or to identify students who were struggling (level of emphasis). Finally, findings were cross-referenced with other data sources (e.g., observations, documents collected) for triangulation. Table 4.3 summarizes the levels of data use by data type, respondent group, and school system. ! ! ! ! ! ! ! ! !! ! ! 79! Table 4.3 Reported Level of Data Use, by Data Type, Respondent Group, and School System Type of System School System Respon- dent Group Classroom Common Grade Teacher Obs. System State College- Ready Indicators Sequoia T High High Med Low High Low A Med High Med Med High Low S Low High Med High High Med Mammoth T High Low Med Med High Low A High Low Med Med High Low Traditional School Districts S Med Low Med Med High Med Yosemite T High Low Med Med High High A High Low High Med High High S Med Low High High High High T High Low Med High High High A High Low High High High High Charter Management Organizations Yellowstone S High Low High High High High Key: T: Teacher A: Administrator S: System Leader In the sections that follow, patterns from this data display are highlighted, followed by an analysis of how different types of data were used by educators. What Types of Data were Used to Inform and Support Literacy Instruction? This section examines major patterns in Table 4.3, considering trends across data type, respondent group, within systems, and across system types. Finding #1: Educators at All Levels in All Systems Reported High Levels of Use of High-stakes State Assessment Data Of the six categories of data, only one type was consistently ranked high across all respondent groups in all four school systems: high-stakes state assessments. The process of using these data was similar across these four school systems. Beginning in the summer of 2012 after receiving the previous school year’s results, system and school leadership teams met to set school goals, identifying school or student weaknesses from ! ! ! ! ! ! ! ! !! ! ! 80! last year’s state test results and creating an action plan for the upcoming year. As the principal at Yellowstone explained, “It’s something that always drives us. When we look at our school success plan, that is the ultimate goal -- to meet our CST and AYP expectations.” This type of data was most helpful at the beginning of the school year to “drive the direction” for the school that year, as another principal noted. 
As the school year progressed, other assessments designed to model the state tests (e.g., system benchmarks testing this school year’s standards) became more useful and timely. Although all interviewed reported a high use of state assessment data, each role group had a different set of responses to it. Yosemite provides an example of the range of responses from across the levels of the organization. At the sites, all of Yosemite’s teachers and school leaders interviewed cited a frequent use of high-stakes state assessment data when planning and supporting instruction. For example, a 10 th grade teacher explained that she turned to the results of the CSTs for several reasons: Looking at CST data, it gives me a couple of things. One, just the basics in terms of what standards they are having problems with. A lot of times, I can see a thread, like they aren’t getting this writing strategy, and I can usually find a link to the baseline skill set they don’t have. . . Second, if everyone is answering certain questions incorrectly, it might be a quick thing I just need to clear up. . . And finally, in ELA, the way that the standards are written, they kind of scaffold each other, so I can look at their data and say I have to cover figurative language before I can do theme, or I have to cover this element of the writing process before I can get them into this kind of grammer . . . So I use that in a way in terms of figuring how to scaffold and how to teach writing. Similar to Sequoia and Mammoth, Yosemite’s teachers were encouraged to focus on the “power standards,” those standards that are heavily weighted on the CSTs. Mariposa’s principal explained to the staff when he was demonstrating a new data analysis tool that the program “shows you the high value standards by CST, based on the ! ! ! ! ! ! ! ! !! ! ! 81! number of times it comes up. Some are more important. I mean, they’re all important, but some appear more.” At the systems level, there were several examples of data use that combined multiple indicators to predict and support CST performance to meet schools’ performance goals. Yosemite used longitudinal CSTs scores, combined with historical data collected for the benchmarks, interims, and writing snapshots, to identify “chronically underperforming students.” These groups of students then became candidates for intensive remedial interventions in math and/or ELA. A more in-depth look at how high-stakes state assessment data were used to quantify and measure student achievement will be offered later in this chapter. Finding #2: Teachers and School Administrators had a Higher Use of Classroom Data than System Leaders In Mammoth, Sequoia, and Yosemite, school site educators (i.e., teachers and school administrators) had a higher reported level of use of classroom data than did their counterparts at the central or district office. The finding that teachers and administrators themselves use classroom data may not be surprising, given their immediate access to such data and its relevance to the day-to-day changes in classrooms and schools. However, the fact that central office administrators and system leaders did not point to classroom assessment results as a type of data used by themselves or others -- including teachers and school leaders at the school sites -- may suggest a lack of communication or disconnect between school and the central office about the data-use work that was occurring at the school sites. 
Another possible explanation is that system leaders had a narrower conception of what they considered “data” than those at the school sites. ! ! ! ! ! ! ! ! !! ! ! 82! An example from Mammoth is illustrative of this point. Mammoth used the balanced literacy reading and writing workshop model for over five years at the time of the study. As part of this program, teachers reported incorporating many classroom assessments, including readers’ and writers’ notebooks, reading logs, conference notes, student writing, student talk, co-created charts, and other informal sources of data on students’ literacy abilities. Several teachers believed these forms of data were helpful to get to know students at an individual, one-on-one level. As a 7 th grade teacher explained, “When you are in the classroom, conferring with them, taking a little informal assessment, you really see them personally. It’s not just a number.” Other teachers indicated that these types of assessments were helpful to provide immediate feedback concerning their students’ understanding of a concept, allowing an instructor to “adjust on a dime” and re-teach when necessary. The attention to classroom data was similarly reported by school leadership for their own data use as well as by teachers. At one middle school in Mammoth, the school leadership team had received substantial training in the reading and writing workshop model and its components over the five years of the program’s implementation. The principal reported looking at these classroom assessments on a regular basis: “If you were here longer, you would see me bring in students, and I would request their readers’, writers’ notebooks, and we would have conversations about what they are reading.” Further, the school leadership team set expectations around, provided training for, and monitored teachers’ use of some forms of classroom data, such as the reading log and conference notes. ! ! ! ! ! ! ! ! !! ! ! 83! However, while the Mammoth district administration maintained a continued commitment to the balanced literacy approach given the long-term resources invested in this program, there was a more mixed perception of if and how classroom data were used by central office administrators. Mammoth’s Assistant Superintendent was uncertain of how evenly and consistently the implementation of assessing student work occurred across all secondary schools. The Curriculum, Instruction, and Assessment coordinator seconded this sentiment: “What we really need to do a lot more is actually looking at actual student work . . . Some schools are doing more student work and common assessment work, [but] I think that’s less though.” Despite the investment in this literacy model, the notions around the use of classroom data varied across levels of the school system. The exception to this trend was the case of Yellowstone. At Yellowstone CMO, teachers were considered to be the “architects of curriculum and assessment,” as explained by one of the system’s regional directors. As such, Yellowstone was the only school system where there were high levels of use of teacher-developed classroom assessment results by all respondent groups. Classroom assessments mentioned by Yellowstone’s teachers included entrance/exit slips, checks for understanding, rubrics for literature circles, Socratic seminars, student writing, speeches, and one-on-one conferences with teachers. 
Voiced throughout the organization, these teacher-developed classroom assessments were seen as rigorous measures of student learning that offered insight into students’ critical thinking skills, and ultimately, their preparation for college. As the Director for Special Education of Yellowstone pointed out: I put a whole lot more stock, frankly, in the curriculum-based assessments and the things that we've setup at [Yellowstone], the things that reflect what is actually ! ! ! ! ! ! ! ! !! ! ! 84! happening in our classrooms… In our classrooms, we don't just teach to that test in terms of facts and this is what you can answer. We are really looking more at the critical thinking pieces, and it's obviously the critical thinking pieces, the ability to process the literature that you read, that is going to make you college- ready. Collectively, teachers, school leaders, and system leaders saw these teacher- developed classroom assessments not only as a part of their organization-wide mission to empower teachers professionally but also as a needed complement to the “hard data” collected from the standardized tests (i.e., CSTs, Yellowstone-developed benchmarks). Finding #3: System Leaders Reported a Higher Level of Use of System Assessment Data than Teachers and Administrators A second area of disconnect between school sites and the central/home office arose around the system assessments. Researchers have noted the recent rise in district benchmark or interim assessments over the past several years, with states and districts investing ample financial resources into the tests and systems for analyzing and managing the resulting data (Blanc et al., 2010; Bulkley, Christman, Goertz, & Lawrence, 2008; Means et al., 2010; Olah, Lawrence, & Riggan, 2010; Supovitz, 2012). In these four systems, this type of assessment was given three times a year to all students in a particular grade and content area (i.e., 7 th grade ELA standards) in each system. The scope of the benchmarks varied. In Yosemite, Sequoia, and Mammoth, the benchmark included only discrete portions of the year’s standards. For example, the fall benchmark covered the standards taught in the first quarter, while the winter benchmark included the 2 nd quarter’s standards separately. In comparison, Yellowstone’ benchmarks were cumulative; the fall benchmark had around 30% of the standards, while the winter benchmark tested 70% of the year’s standards. Yosemite and Yellowstone additionally ! ! ! ! ! ! ! ! !! ! ! 85! gave an assessment at the beginning of the year (called the “interim” in Yosemite or the “pre-test” in Yellowstone) which was modeled after the CST and covered all of the year’s standards. On one hand, system leaders interviewed noted that system level assessments played a large role in informing and supporting literacy instruction. In each case, system leaders mentioned benchmark assessments as providing good predictive information for that year’s high-stakes state assessment results, given their high correlations with the CST results. Teachers and school site leaders, on the other hand, had a far more mixed report of their use in three of the four school systems (Mammoth, Sequoia, and Yosemite). Consider the case of Yosemite CMO. Yosemite’s leadership intended the interim/benchmark data to be used frequently to support instructional change. As one central office administrator explained, they can be “really powerful because it’s comparing you not just to California as a whole . . . 
but [also] we would rather be comparing ourselves to other high performing schools [in the Yosemite network] that serve our kids.” In addition to the internal comparison, the aim was that these assessments, supported with Yosemite’s computer-based analysis products, would serve to support two levels of data “diagnosis.” The initial data analysis of these assessments would help teachers identify student mastery level of standards; prioritize standards to be re-taught in either a small group or whole-class setting; and determine which students would be in the small groups. The second level of “deep data analysis” of the interims included identifying individual questions and skills that were challenging; determining missing skills/misconceptions across standards; and pinpointing misconceptions within ! ! ! ! ! ! ! ! !! ! ! 86! individual questions. The central office had spent much time, energy, and financial resources making the results of these benchmarks available and accessible to those at the school sites. However, the leadership team at one Yosemite school had mixed feelings around the system assessments. While the assistant principals mentioned using the interim data to support teachers’ in the language arts department, the principal indicated the opposite: The interim data, that’s not so important. Kids don’t necessarily take it that seriously, and teachers may not have gotten to the place on the pacing guide where they should. . . . I really don’t tell them [the teachers] that I don’t care about the [interim] tests, but I could really care less about those tests. Teachers at this school similarly reported a mixed level of use of the results from these assessments. For instance, some teachers felt that the timing of the interims was poorly aligned with their personal scope and sequences for instruction and/or Yosemite’s “suggested” pacing guide. One 10 th grade ELA teacher noted, “For the first interim assessment, 80% of it was on [plot and sequence], but I’m not even doing that for another week.” Consequently, the data from the system-wide tests were not seen as particularly helpful, given that the teacher had not yet taught that standard. Teachers voiced frustration over other issues with the validity and reliability of the benchmark assessments in the other school systems as well. For instance, at a middle school in Mammoth, a group of 8 th grade teachers met with the instructional coach to analyze the results from their second benchmark. One teacher reflected on her experience with her class after the district had changed the proficiency cut score on the second benchmark. After sharing the new, lower test scores with her students, the 8 th grade teacher said, “It was hard. We had our cry session. I told them it wasn’t their fault.” Another teacher spoke of the fact that the benchmarks assessed isolated portions of the ! ! ! ! ! ! ! ! !! ! ! 87! state standards, compared to the benchmarks from years’ past that tested all of the standards at each administration. The scope of the current benchmark provided little help in planning upcoming instruction. She noted, “I’m just not sure how it’s going to help. If I’ve already taught all of the standards for this unit and their benchmarks shows just their scores for [earlier standards], it will be more useful for the re-teaching but not to guide me for the instruction for the next unit.” Overall, teachers seemed unclear on the purpose of the benchmark, and consequently, how they should use it to inform their instruction. 
Yellowstone is the one exception to this pattern. In this system, teachers, school leaders, and system leaders all reported a high level of use of benchmark assessments. The design of Yellowstone's benchmark assessments was notably different from that of the other three school systems. In Sequoia, Mammoth, and Yosemite, one version of the benchmark exam was administered across grades/schools. In Yellowstone, each teacher designed her own scope and sequence of standards at the beginning of the school year. For instance, one teacher could elect to teach the poetry standards in the fall, while another could choose to teach them in the spring. Consequently, Yellowstone's central office staff customized every benchmark to each teacher's own scope and sequence of standards at each administration.

All Yellowstone teachers interviewed highly valued the data collected with the customized benchmarks, for multiple reasons. They reported that their ability to design the scope and sequence not only helped them identify and meet the needs of each year's cohort of students, but also created greater "personal accountability" and teacher "ownership" in responding to the data. Designed around portions of the year's standards, the benchmarks were seen as "more authentic" since they tested only the standards that had been taught. Yet because the assessments were cumulative (i.e., the fall benchmark included 40% of standards while the spring benchmark tested 70% of standards), teachers were able to see growth in select standards over time.

Yellowstone's school and system leaders similarly reported high levels of use of benchmark data. After each benchmark, school leaders worked with their teachers to realign their scope and sequence, curricular units, and lesson plans, as well as to identify small groups for differentiation and to design intervention programs. From the home office, the benchmarks were central to monitoring and supporting teacher and school performance over the course of the year. One central way this occurred was through the regional director's Analysis and Action Plan that followed each benchmark. The focus of the Analysis and Action Plan, noted the Chief Academic Officer, was on improving "instructional proficiency," not only "student proficiency." This system-wide tool included analysis of benchmark results by school, department, and individual teacher. Attention focused not only on teachers' achievement levels (i.e., percent of students proficient) but also on the growth they had made between benchmarks. Analysis also highlighted the performance of certain subgroups – e.g., English language learners and special education students – by school, department, and teacher. The central office then used this Action Plan as a basis for conversations with school leaders to guide school, grade, and department action plans for the next instructional period.

Despite the perceived benefits across the system of using these individualized benchmarks, Yellowstone staff raised challenges concerning this design. Several home office staff involved in the process explained that organizational capacity was an issue; as the Chief Academic Officer explained, it was an "incredibly heavy lift" to create hundreds of customized assessments in short time periods. There were also concerns raised around benchmark security and validity.
As one of the regional directors explained, the bank of possible test questions was limited since “we've made them by hook or by crook.” Finally, these issues were becoming even more critical as the organization moved to link compensation to student growth on the benchmarks. One regional director worried, “Right now, we’re limited to the number of questions. But if there is [sic] any security issues or test errors, and it’s tied to compensation, it could be ugly. I am not sure if we can continue this way for much longer.” Finding #4: CMOs had Higher Levels of Use of Teacher Observation Data Looking across types of systems, the two CMOs had a higher reported use of teacher observation data by school and system leaders than did the two traditional districts. This category of data included both information collected during informal observations, where a teacher visited or was visited by administrators and other teachers, as well as formal observations where an administrator observed a teacher as part of her performance review. The difference between the two types of systems lay in the usefulness of the latter: educators in districts did not report using formal observations as a source of data to inform literacy instruction, whereas educators in CMOs did. In neither Sequoia nor Mammoth did school staff mention using the formal evaluations by school administration (as designated in the districts’ collective bargaining agreements) to be used to inform instruction. Instead, it was the frequent, less formal observations of the instructional coach and school administration that were seen as useful data. For example, one coach in Mammoth, a long-time middle school teacher, was ! ! ! ! ! ! ! ! !! ! ! 90! frequently in and out of classes, observing instruction and providing informal feedback to teachers. One 7 th grade teacher in Mammoth described this process: She [the instructional coach] will come in and watch what I’m doing, observe, and then she’ll ask me, based on what she sees, if I need help in an area, or if there’s something I’m missing. She’ll ask if I would like her to come in and teach a lesson. . . And then we will debrief and discuss after that. The rest of Green’s school leadership team also reported conducting walk- throughs in classrooms to collect data on teacher instruction. In comparison, the two CMOs had a higher reported use of both informal and formal teacher observation by school administrators and system leaders. At the time of this study, both Yellowstone and Yosemite were engaged in the pilot phase of a new teacher development and evaluation initiative funded by the Gates Foundation. As part of this grant, both CMOs had developed a detailed rubric for observing teachers and collecting evidence for key indicators of effective instruction and classroom management. Along with the rubric’s development, this process included extensive training for the school leaders and a certification process to assess a school leader’s ability to use the rubric reliably and validly. The fact that neither CMO system operated under a collective bargaining agreement also played a role in this work. (The role of foundations and collective bargaining agreements is further discussed in Chapter Six.) Although the system and school leaders of both Yellowstone and Yosemite spoke highly of the new observation data, teachers voiced a more mixed use of observations of their instruction. 
Prior to the implementation of the new observation tool, several teachers (like their counterparts in the traditional school systems) reported valuing the frequent, informal feedback from administrators. As one 6th grade Yellowstone ELA teacher commented, in the past, the principal “would come in my room probably twice a month for 15 - 20 minutes, but I feel like that was authentic feedback. He'd say things that he saw, he'd ask me questions like, ‘After I left, what happened? Here’s some food for thought.’” This teacher went on to compare her more recent experience with the new, formalized teacher observation system:

It’s taken up a lot of his [the principal’s] time. Other than my formal observation, I've only probably been observed once, which is considerably less than previous years. I feel like there's still lot of kinks, like how to authentically assess how well someone is teaching a variety of subjects. … Or, how do you grade a PE teacher or a music teacher authentically? And I feel like we have to plan these certain lessons two times a year, you know they [the administrators] are coming in that day . . . I just don’t feel like that's authentic teaching.

Other school leaders in Yellowstone and Yosemite were concerned that the new teacher observation protocol limited teacher creativity and reduced the autonomy that school leaders had in working with their teachers, turning the process into, as one principal put it, a “glorified Stull conversation.”9 Several teachers in both systems also worried that the extra pressure on teacher observation and student achievement data would lead to an even greater pressure to perform well on the CSTs, benchmarks, and interims, along with a greater risk of burn-out for themselves and their colleagues. As this initiative was still in its pilot phase at the time of this study, it was unclear how this data collection tool would be received when the program was fully implemented.

9 The California Legislature passed the Stull Bill, AB 293, in July 1971. The intent of the law was to “establish a uniform system of evaluation and assessment of the performance of certificated personnel within each school district of the state.” While its implementation varies across districts, teachers generally are expected to meet with administrators and set goals for the school year. This process does not allow for teacher evaluation to be directly linked with student achievement data (Popham, 1972).

Finding #5: CMOs had Higher Levels of Use of College-ready Indicators

A final striking difference between the two types of school systems was the collection and use of college-ready indicators in CMOs. Educators at both Yosemite and Yellowstone frequently pointed to a range of metrics that they felt were helpful in gauging students’ preparation for college, in a way that was largely absent from the interviews and conversations in the traditional districts. Both CMOs had explicit organizational missions to prepare their students for college and used data to understand whether their students were meeting this goal. Yosemite’s Director of Data Analysis elaborated:

Metrics of college readiness [have] become more and more important. Now, our organizational ‘must-achieves’ are about college readiness, not about the API. The end result is not a high API score. The end result is college readiness.
It’s not an API score for the API’s sake; it’s about getting our kids to be proficient and ready for college.

As such, CMO educators turned to three groups of college-ready indicators. The first category consisted of standardized national and state measures associated with college attendance, such as the Early Assessment Program (EAP); the PSATs, SATs, and ACTs; and the California High School Exit Exam (CAHSEE). The EAP was mentioned as playing an important practical role toward Yosemite’s 100% College mission. Included as an addendum to the 11th grade CST, a high score on the EAP allowed a student to place out of remedial classes in California’s state colleges and universities. Pointing to the practical importance of the EAP, one central office administrator suggested, “What do we need to do to make sure our kids are college-ready and how do we make sure that our programs are aligned with EAP? It’s huge for our kids, both in terms of financially, in terms of what time it’s going to take them to finish college, and what they can do.”

This set of data also had practical implications for a student’s ability to get into college. Concerning the PSATs, SATs, and ACTs, Yosemite’s Vice President of Education said, “These three are critical to college acceptance. Well, that’s our whole raison d’être, so these are incredibly important.” In the classroom, three of Yellowstone’s teachers reported using data from the PSATs and SATs, along with knowledge of students’ reading levels, to weave appropriate high-frequency SAT vocabulary into their lessons and prepare students for these assessments. Indeed, there were both practical and mission-driven reasons to attend to these forms of data.

The second group of college-ready indicators came in the form of teacher-developed classroom assessments valued for providing insight into students’ preparation for doing college-level work. Most teachers in both Yellowstone and Yosemite reported that they designed their classroom-level assessments to prepare students for college-level work that required higher-order thinking skills, such as student discussion and analytic essays. One 8th grade ELA teacher in Yellowstone noted, “My authentic assessments are my biggest indication of college readiness [because] they need to know how to go past how the standard is written. That, to me, is getting them ready to be active learners, not just passive test takers.” Teachers used these assessments to prepare students for college, where students, in their eyes, would need to be able to justify, probe, and argue across content areas.

Finally, in both CMOs, educators were in the process of determining how to identify and support the “soft skills” and behaviors needed for college completion, such as resiliency to overcome challenges. As Roosevelt’s school leader elaborated:

One of the clearest indicators is students’ work habits and ability to get the support and help that they need to increase their work. . . When you meet the adversity in a math class, what did you do in order to get that A? What did you do when you needed that help? How did you find it? What did you do to increase your effectiveness in the classroom? We are looking at habits of mind and scholarly attributes.

As such, Yellowstone was piloting a character development tracking system to encourage and support these soft skills.
Yosemite similarly collected student behavior data (e.g., percentage of time students were on task) in order to set goals around study habits deemed important for success in secondary school and future college environments.

How were Data Used to Inform Literacy Instruction?

The previous section of this chapter identified patterns in the types of data frequently used within and across the four school systems. This section takes a closer look at trends in how data were used within school systems to improve literacy instruction specifically and organizational performance more generally. For this section, two iterations of inductive data analysis were completed.10 The first was a preliminary coding of all of the raw data, in which instances of data use were coded under the umbrella code “Response to data.” The second, a more micro-level analysis, aimed to refine and explore the nuances of how the data were used. During this second round of coding, six categories or “models of data use” emerged; these were defined and refined by drawing on previous research and insights from the data (Beyer & Trice, 1982; Coburn, Toure, et al., 2009; M. Feldman & March, 1981; T. H. Nelson et al., 2012; Weiss, 1980, 1998).

10 The code development and refinement were guided by Anfara, Brown, and Mangione (2002); Boyatzis (1998); and Weitzman (2000).

The first three models of data use – achievement-accountability focus, student learning focus, and instructional reflection focus – were largely examples of instrumental uses of data; that is, data were directly and centrally used to make decisions around goals of literacy achievement, student understanding, or past/future instruction (Beyer & Trice, 1982; Coburn, Toure, et al., 2009; T. H. Nelson et al., 2012; Weiss, 1980, 1998). These three categories were most prevalent when educators were asked how data were used to inform instruction.

The final three categories of data use – bureaucratic-compliance, positioning, and signaling – were emergent categories of data use that were more symbolic in nature. When data were used in these ways, stakeholders engaged with data to comply with rules or demands; called attention to data in general terms (e.g., “the numbers don’t lie”); used data post hoc to justify decisions already made; or used data to convey information to those outside of the organization (Coburn, Toure, et al., 2009; M. Feldman & March, 1981). Examples of these models appeared less frequently across all cases and should be thought of as emergent categories of data use. The direct interpretation of these few instances provides key, meaningful insight to draw a more complete picture of how data were used in school systems (Stake, 2000a). Because they had not been anticipated, less information was available around the types of data involved and patterns within/across school systems.

Table 4.4 below summarizes the results. These models build on and refine the data-based decision-making models presented by Moody and Dede (2008) and the inquiry stances developed by Nelson, Slavit and Deuel (2012).

Table 4.4
Six Models of Data Use

Achievement-accountability Model

Of the six models of data use, the achievement-accountability focus was the most frequently mentioned by nearly all educators in the four systems (for more on the distinction between student achievement and understanding, see Nelson, Slavit & Deuel, 2012).
This model of data use centered on an educator’s ability to quantify and measure student achievement, specifically with the goal of high performance on the high-stakes standardized state assessments and their subsequent impact on AYP, API, and program improvement status. The common characteristic of the achievement-accountability model of data use was the classification of students by their high-stakes state test scores and/or designated performance level: far below basic, below basic, basic, proficient, or advanced (Coburn & Turner, 2012; Little, 2012; Spillane, 2012). Educators then engaged in behaviors at the district, school, and classroom levels to raise student scores and thereby improve their aggregated school or system AYP/API results. This model of data use was prevalent at all levels of all four school systems.

One set of behaviors noted in all systems was to target the “bubble” or “cusp” students, students whose scores fell within a few points of a designated performance category. For API growth, this included a focus on moving the students who scored far below basic and below basic, while for schools focused on AYP improvement, attention was on moving students from basic to proficient (Booher-Jennings, 2005). Yosemite offered a good example of a school system providing strategic support for school improvement that targeted “cusp” students. According to the Director of Data Analysis and Assessment, Yosemite’s home office had used interim assessments to project CST results to identify and target “cusp” students before the state exams, leading to a large gain between projected and final API:

There was a secondary school last year, and it was not on track to hit our API goals. We spent a lot of time there. In the winter, we looked at their projections and gave them a better sense of whether they were on track to meet their API and AYP goals. We created lists of target students for them to be focused on -- cusp kids for API, far below basic and below basic kids; cusp kids for AYP, basic to proficient. . . . Through our analysis products, those teachers knew exactly what all of their students’ skill gaps were, so they could easily go in and work specifically for those kids. . . [The school grew] 70 points.

School leaders similarly engaged in the achievement-accountability data-use model. Sequoia was illustrative in this regard. As Sherman’s principal noted, “One of the biggest things that I wanted this year was that every class, every teacher knew what was the proficiency level. [One 7th grade teacher] has a first period at 93% proficient. . . He’s going to have to keep that class as a 93. His third period is 0% proficient, he has to move that up.” Teasing apart this comment reveals several aspects of the achievement-focused model of data use. First, the percentage “proficient” related back to students’ scores on the previous year’s CST and a different set of standards. Despite the fact that the standards and assessments would differ between years, teachers were expected to meet the same achievement goals. Additionally, the principal’s language exemplifies school- and classroom-level goal setting in which the emphasis was placed on moving students’ assessment results into the proficient band.

Also, administrators in Sequoia used CST results for class assignments and student scheduling.
11 Based on the prior year’s test results, students who scored in the far below basic or below basic range were placed in double period Resource Specialist !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! 11 Other research suggests tracking and ability grouping in schools can widen the achievement gap due to the disproportionate placement of minority and low-income students into lower tracks. For more information, see Vanfossen, Jones, & Spade, 1985; Gamoran, 1987; Gamoran & Mare, 1989; Lee & Bryk, 1989; Oakes, Gamoran, & Page, 1992; Oakes & Guiton, 1995. ! ! ! ! ! ! ! ! !! ! ! 99! Program/Special Education classes, while kids who scored proficient or advanced were in the Honors classes. At grade-level team meetings, teachers then compared these tracked classes against one another, as noted below in Figure 4.2, the data display of the 7 th grade results on the plot common assessment. Figure 4.2. Comparing classes by performance level in Sequoia. Note: Teacher names and periods were removed from the x-axis for anonymity purposes. This graph compares class performance for a common assessment in 7 th grade plot standards. Each pair of bars represents the percentage of students who met 65 and 70% proficiency, respectively. Classes have been tracked by performance level; the bars to the far left on the graph represent the RSP/Special Education classes. Moving across the x-axis, the performance levels of students in those classes increase, so that the 91% bar represents the honors class, consisting of students who scored proficient/advanced on the previous year’s ELA CST. The final three sets of bars were the male, female, and total averages. This model of data use shaped instruction in other indirect ways beyond how educators directly responded to these kinds of data. Teachers in all school systems focused on “nemesis” or “power” standards that had the greatest weight on the CST, narrowing the instruction students received to those state standards that were most emphasized on the test. To teach these high-impact standards, Sequoia’s teachers differentiated their instructional strategies by CST performance level. In a lesson plan completed by a group of teachers at one school, the instructors planned to use a “list and draw” strategy with the far below basic/below basic students while developing higher level, critical thinking questions for the proficient/advanced students. Additionally, ! ! ! ! ! ! ! ! !! ! ! 100! district and CMO educators designated instructional time for strategies to prepare students for the state exams. Teachers spoke of teaching test preparation strategies, e.g., process of elimination for multiple-choice questions. Beyond “teaching to the test,” educators were also “testing to the test,” that is, designing their own common grade assessments to model CST language and structure, with the goal of familiarizing and preparing students for the high-stakes state assessment. One 7 th grade ELA teacher explained their team’s weekly language arts assessment: It’s every single Friday, we’ll have a different passage . . . so that the kids are being exposed to passages similar to what they’ll see in the CST, and also that the questions are similar to the way they’re presented on the CST. We basically take the released questions and format it almost identically because our questions will be the same. 
We’ll italicize words the same way the CST does in a way to get the kids comfortable with the CST, so that by the end of the year they’re like, “This is like what we do in our English class every Friday, it’s just the same thing and it’s not as challenging,” hopefully. The dominance of the achievement-accountability model of data use was noticeable in the language used by educators when they looked at, analyzed, and made sense of student assessment results. Across all systems, there was a pattern of “classificatory talk” for students and their performance levels (Coburn & Turner, 2012; Little, 2012; Spillane, 2012). Several educators spoke of their “below basic students,” instead of students who had scored below basic on their assessments. Similarly, one principal in Sequoia referred to students scoring far below basic as the “bottom of the barrel kids,” while Yosemite identified kids who had not made sufficient growth on internal or external assessments as “chronically underperforming students.” This use of the high-stakes data tended to emphasize a student’s deficits and weaknesses rather than his/her strengths. ! ! ! ! ! ! ! ! !! ! ! 101! Further, the Superintendent of Sequoia compared the data-driven decision making in the district to the work of a doctor doing rounds, diagnosing and treating a patient’s illness. This metaphor symbolizes an achievement-accountability focus; rather than holistically understanding a child’s interests, strengths, and areas for future growth, educators were narrowly focused on diagnosing and “curing” the student’s tested weaknesses. The distinction may be a small one, linguistically, but it is reflective of data use that was reductionist and test-driven rather than holistic and learning-centered. Although this achievement-accountability focus model permeated all school systems, many educators expressed frustration with this approach. There were serious qualms about the amount of time spent on preparing for and taking system and state standardized tests. Mammoth’s Director of Technology summarized the sentiment well, explaining: Sometimes the amount of assessments that are going out, between the benchmarks and the unit assessments and the CST and the CAHSEE and the EAP and the CELDT, at the school sites, it feels like, “When are we supposed to teach if we’re always testing?” When I was a kid one summer, my mom was growing some carrots, and the carrots weren’t doing very well, and she said, “I just don’t understand why they are not growing.” I said, “They looked like they were doing okay yesterday,” because I would pull them out to see how they were doing and stick them back in every day. And we are almost to the point where we’re doing that with the kids. We are checking to see how they are doing so much that they are not able to grow. Regardless of these and other concerns about the drawbacks, achievement- accountability was an essential model of data use. In both districts and CMOs alike, teachers, school administrators, and systems leaders used high-stakes state assessments (and other exams results designed to model the state exams) to quantify and measure student achievement with the goal of high performance on accountability measures. ! ! ! ! ! ! ! ! !! ! ! 102! Student Learning Model The second category of data use was the student learning model. This model was defined by attention to discovering a student’s understanding with the goal of authentic and deep learning of content and skills. 
Rather than predominately classify students by performance levels, educators were more responsive to individualized needs. As part of the student learning model, data were used in ways that were more formative than evaluative. Heritage (2007) defined formative assessment as made of four essential elements: identifying the learning gap, providing feedback, enabling student involvement, and linking to individual learning progression. The types of data most frequently associated with a “learning focus” were teachers’ classroom assessments; these ranged from teachers’ observations of students during tasks, classroom discussions, unit quizzes/tests, and student self-reflections. Similar to the patterns found in the use of classroom assessments, teachers across all four school systems mentioned greater engagement in a learning focused model of data use than did school administrators or system leaders. Attention to student understanding was particularly notable in two school systems -- Mammoth (with their literacy program that emphasized formative, classroom assessments) and Yellowstone (with their focus on teachers as curriculum and assessment architects). Interestingly, some educators did not consider the results from some of these classroom assessments as “official” data. For instance, one 8 th grade ELA teacher at Sequoia explained in detail how she used student responses on whiteboards to quickly check if students understood a concept. However, when asked about what data were most useful in her instruction, she reported, “Data, now we’re talking [about data]? I don't get ! ! ! ! ! ! ! ! !! ! ! 103! data from the whiteboards really. I guess I do, but I don't write down how many kids get it, so I wouldn't say that this is data.” When asked to further elaborate, she reflected, “Because [with] the whiteboard, I'm basically asking one question and so I might not even get a full standard in that… [but] I can get if the kids understand it or not.” In this case, she recognized the value of the whiteboards for their immediate feedback on student learning but did not classify it as “data,” as it was not quantified, formally collected, or measured against a state standard. The distinction between “soft,” learning-focused data and the “hard,” achievement-focused data was mentioned by several educators in the Mammoth school system. This distinction had to do, in part, with the system’s long-time implementation of the reading and writing workshop model – an approach to literacy that prioritized learning-focused, formative assessments – within state and federal accountability systems that privileged achievement-based assessments. As the assistant principal of Mammoth explained: I put it [data] into two categories: hard and soft data. So, what are the CST results telling us about your current students? What do you notice about trends from last year that are guiding your practice this year? . . . What soft data have you gathered so far through your independent reading, through reader’s notebooks, through writing samples that have been collected, through listening into student conversations? The instructional coach at Green also distinguished between how teachers used “hard” and “soft” data to support instruction, suggesting that “soft” data were undervalued: I think that [hard] data only goes so far. With hard data, you can get a good picture of where the kids have been or build some theories about what areas they might need more support in. 
Something I want teachers to get better at is looking at the soft data, not just what they [the students] scored on something, but what they’re producing or not producing in the classroom. … It’s that data that I worry most about. Because the more the teacher is able to have a conversation with the parent and say, “This is what I noticed about your child. These are some specific weaknesses and strengths,” [the better]. And until you have those specific conversations with a student, they don’t know what it is they need to work on.

In other words, while the “hard” CST and benchmark data provided big-picture, evaluative information on past performance, the “soft” student learning data allowed for a focus on current student capabilities and lent itself to specific, timely feedback to students and parents.

A student learning model of data use was also marked by a commitment to “authentic” assessments of learning. Like teachers across the other three systems, Yellowstone’s ELA instructors mentioned developing assessments that were achievement-focused, e.g., using CST language and structure. However, all of the Yellowstone teachers interviewed also reported using their autonomy as curriculum designers to create learning tasks that were more rigorous and authentic measures of literacy standards. For instance, in her unit on Homer’s The Odyssey, a 6th grade teacher in Yellowstone explained that she created two assessments -- one “authentic” assessment, a comic book with summaries at the end of each page, and one “traditional” test that used CST language and released test questions -- to gauge student mastery of the literacy standards around the students’ ability to summarize. “The traditional assessment helps them prepare for the CSTs,” she argued. “But to be able to really digest a book, discuss it, do a reinterpretation of it on your own is a more authentic, college-ready project than can you read the question, read the paragraph, and answer the questions?” For Yellowstone teachers, the learning focus connected to their explicit college prep mission, where students would need a more critical set of skills than those assessed by standardized tests.

Not all educators were equally committed to (or able to equally commit to) both achievement- and learning-focused assessments as complementary ways to gain knowledge about their students. In fact, most described a pressure toward the achievement-accountability model. For instance, in Sequoia, the preparation for the CSTs often narrowed teachers’ instruction at the expense of the development of broader literacy skills. “We have focused so much on the short stories, [the students] can do that,” pointed out one 7th grade teacher. “But are we really teaching them the skills of reading an entire novel? A lot of kids don’t have the endurance to read an entire chapter book. That’s the debate that we have, but unfortunately, since it’s not CST-focused or driven, it’s pushed aside.”

Instructional Reflection Model

Unlike the first two types of data use, which focused on students’ level of achievement or understanding, the third category focused on the teacher side of the equation – using data to reflect on and improve teacher instruction. The instructional reflection model of data use frequently dealt with teacher observation data collected when teachers visited each other’s classes and debriefed and/or when administrators provided teachers with informal or formal feedback on their instruction.
In a few instances, other kinds of student learning data were used as the basis for teachers’ critically reflecting on how instructional choices had impacted student responses. In all four systems, educators pointed to observation of instruction to help them reflect on their practice, and in the two CMOs, teachers specifically drew attention to their formal observations by administration as important to this effort. ! ! ! ! ! ! ! ! !! ! ! 106! When teachers observed other teachers, the data were used in several ways. For example, on a monthly basis at a Sequoia school, a group of teachers along with a member of the administration visited several classes for 10-15 minutes with a checklist of school-wide “non-negotiables.” Immediately following the observation, the teacher received feedback from her colleagues, and best practices from across all classrooms were discussed at the staff meeting at the end of the week. Teachers at this school found benefits in visiting other classrooms as a way to identify new teaching strategies as well as in being visited, in order to receive personalized, immediate feedback on his/her instruction from their peers. One teacher explained the dual advantages to this practice: It’s a good way for the [observing] teachers to see, “Oh, this is what she’s doing, maybe this is a strategy I can use for the special ed. kids, because they really like this.” At the end of the visit, the principal will call you out, and they’ll give you quick two minutes of, “This is what we really liked, and maybe you could do this next time.” It’s very constructive. Indeed, even veteran teachers found the instructional observation to be beneficial to their practice. When asked if she gained new knowledge during instructional walk- throughs, one such teacher replied: Yes, every single time… One of the teachers that we have is amazing at checking for understanding, the way she does it [with whiteboards] is unlike I’ve ever seen before… that’s something that I have never really been exposed to. It’s nice to see how she does it in her classroom. I got whiteboards right after I saw that and so hopefully I can try it. I don’t have to necessarily copy the way she does it, but making it, using it for my own personal style of teaching. Additionally, other teachers in Sequoia felt that the peer-to-peer visits created an internal accountability within the school that helped hold themselves and others responsible for the “non-negotiables” that everyone at the school site had agreed upon. However, while district teachers mentioned the informal visits, educators at the CMOs pointed to their formal evaluations as data for reflecting on their instruction. ! ! ! ! ! ! ! ! !! ! ! 107! Through a large foundation-supported grant project, the two CMOs aimed to collect and use teachers’ formal observation data in ways that were self-reflective, targeted to professional development opportunities, and subsequent career advancement. The Chief Academic Officer at Yellowstone argued that new observation data were not only more specific and focused, based on a rubric for effective teaching, but also allowed for a more collaborative process between the principal and the observed teacher in which the teacher was able to reflect critically on her own instruction. She explained: The formal observation process has these different steps that’s all driven by observational, low inference data . . . and determining how effective you are in a specific area. 
For example, to help the teacher really see their pattern of questioning, we write down all the questions they have asked and have them come to the understanding, so “what did you notice about all these questions?” It’s having it be more of a collaborative understanding of where the gaps are in relationship to the proficiency levels in the rubric. The teachers are able to uncover that for themselves versus us saying, “Here’s your problem.”

As mentioned by Yosemite system leadership, another strength of the new observation data was the ability to systematically identify high-performing teachers who scored well on the observation rubric as well as to strategically support teachers with particular weaknesses. The central office administrator at Yosemite who was heading up this project described the professional development aspect this way:

We have all this data on how each teacher did on each indicator of our rubric, their score level, their evidence, all of that. Then there is a tool that uses all of that information to suggest what professional development resources would be ideal for you based on where you are. So, the hope is that it’s like Netflix and that, depending on your specific skill sets, you will have targeted professional development recommendations available online.

By 2013-2014, the CMOs also planned to offer differentiated compensation and new career opportunities to teachers who were considered “highly effective” in an evaluation model that used multiple measures, including principal classroom observations, peer feedback, student/family feedback, and student achievement data at the teacher, team, and school levels.

Finally, there were a few instances where teachers critically considered student assessment results with an eye to how their instruction may have led to those outcomes. For instance, as part of her data analysis work with her grade/content professional learning community (PLC), a 7th grade ELA teacher in Sequoia explained that she shared common assessment data, compared class results, and critically analyzed her instructional practice:

In my PLC, I reflected on the fact that my basic group had a hard time with actually, not the defining, they actually understood how to define the word “foreshadowing.” What they had a hard time with was actually taking the story and analyzing it. I think that was because I didn’t give them specific examples. With my other class, I think I went into more detail, and I gave them foreshadowing examples from the story, and we actually wrote them down, so maybe that’s why one group did better than the other.

In general, however, only a few teachers across all systems mentioned, or demonstrated during observations, this kind of instructional reflection with student assessment results. When teachers did examine student assessment data, they tended to focus more on identifying student deficiencies and less on examining how their instructional practice may have contributed to these problems.

As noted above, the following three models of data use – the bureaucratic-compliance, positioning, and signaling models – were categories that emerged from transcript and observation data; rarely did an educator provide direct evidence for a symbolic use of data when directly asked how data informed instruction. As such, patterns around the types of data involved and within/between school systems were less clear with these models of data use. They were important to include, however, to demonstrate that
data use in schools had other unintended or alternative dimensions besides its instrumental use in making and supporting instructional decisions. Bureaucratic-compliance Model The fourth type of data use that emerged was a bureaucratic-compliance model, one where educators used data to simply fulfill requirements or meet the demands of regulations. For example, given that district teachers did not use their formal observation data to inform their instruction, it seemed that this process of formally observing teachers in Sequoia and Mammoth was, for the most part, a compliance-oriented activity. Another case of a bureaucratic-compliance model of data use was observed at Whitney, where the principal provided teachers with a comprehensive lesson planner and meeting agenda that needed to be completed and returned to her following each of their weekly meetings. The expectation was that teachers would spend two minutes reflecting on the last meeting’s notes; 10 minutes sharing data; and 30 minutes developing an intervention/“action plan” that responded to trends in the data. The principal explained her intention behind this paperwork: One of the things that they’re asked is, what was the data piece?. . . They had to do a common assessment [of] their teaching and learning for the week. From there, they’re going to talk about “How did one teacher get a 99%, what did he do different then we did, how did he get those results?” From there, that conversation is of what was the best practice, what did he do different? We need to incorporate in our next week’s lesson, because if he’s getting these results, we need to know how. However, teachers from across departments had complaints about the amount of “paperwork” they were required to do. In a meeting of the school’s leadership team and the district’s leadership consultant in September, teachers voiced their concerns: Math teacher: Something else that’s hard with the PLCs is that I’m always filling out the paperwork. I don’t get to the meat and bones, I don’t get in deep. We have ! ! ! ! ! ! ! ! !! ! ! 110! to fill out the paperwork to say that we’ve met, instead. Consultant: Could you have a quick summary of each meeting, with actions? Something more generic? Math teacher: We never get to this part in the PLCs, the reteaching. . . . On top of the comprehensive lesson plan papers, the minutes, versus what we’d really like to be focused on in our PLCs. I know we need some monitoring but – Consultant: I hear you saying that you feel some of this paperwork is redundant. Could we get it to one page? Literacy/math teachers: (together) Yes, there’s a lot of wasted time. Given the compliance-oriented nature of completing this paperwork, it was not surprising that the data were used symbolically to provide evidence for a decision after it had already been made. For example, in an hour-long PLC meeting, a team of three teachers observed was far more preoccupied with completing the required PLC paperwork than using it as a tool to guide a deep conversation around learning and instruction. After the three teachers realized that each had graded a common assessment differently (thereby making comparison of scores and instructional strategies difficult), they decided to reteach and retest the literacy standard as part of the “intervention” in the social studies block. 
As another teacher concentrated on filling out the lesson planner, the lead teacher of the PLC explained this post-hoc decision to her fellow teachers:

But we can still turn these scores in, turn in this data. And then we’re going to reteach it [the standard] as part of the intervention. And we can explain, that’s what we chose to do for the intervention. I mean, that’s what we had said to the full staff, that we were going to do our interventions within the content area.

Although they had not used the unreliable data at all in that decision, they still included it on the planner to justify their action plan after the fact in order to complete the required paperwork.

In both districts and CMOs, the bureaucratic-compliance data-use model was also connected to outside regulatory organizations. In the two districts, educators mentioned having to complete and compile data for compliance with federal or state programs. In Sequoia, for example, the instructional coach explained, “Because the federal program monitoring is coming in two weeks, we’ve been high-tailing it trying to make sure that we’re in compliance. [Teachers] see me running around, delivering, pulling kids or making sure the files are up to date.” The CMOs similarly emphasized the data demands required by their authorizers. Collecting and supplying these data was not seen as a useful exercise, but rather as meeting the bureaucratic regulations enumerated in their charter. As the CEO of Yellowstone described, her charter organization had had to “do a lot more paperwork and reporting on minutia. It's been a move to more micromanagement” from their authorizer.

Positioning Model

The positioning model of data use occurred when an educator used data as evidence for a particular position, or in other words, to push a particular agenda forward. In the few cases where this type of data use occurred, there was a sense that what the data “said” was neutral, irrefutable, and unquestionable, and it was used as part of an individual’s argument for action. Usually, the data invoked in the positioning model were “hard” results from state assessments and their aggregated AYP/API figures. For instance, the instructional coach at one Sequoia school argued that data were a critical part of setting the agenda for decisions at the school:

The most important thing that we go to at the very beginning of the school year is we always put out that data. . . It drives our professional development, it drives what we talk about in teams, how when we talk about when we’re helping students -- numbers don’t lie. When we do come to the table to discuss an issue, we always start off with data because you know, they don’t lie, and we need to know where we’re at with certain things.

School administrators at Mammoth similarly saw the previous year’s state assessment results as evidence to argue for, in their eyes, a much-needed change. The assistant principal noted that when she asked teachers to make changes in instruction, “I always say to the teachers, it’s not about doing it for me. It’s because we know research proves that this is in the best interest of the kids, and we – well, the bottom line is we have 32% proficient, so 68%, they aren’t getting that access. We have to change something.” In both of these instances, the data referred to were seen as unquestionable, clear evidence from which to push an agenda forward.
Signaling Model The final use of data was a focus on signaling in which the organization conveyed information about itself outwardly to others. In some cases, signaling occurred as a way of managing the school system’s reputation with the local media. In responding to a question about the goals of the district’s board of education, the President of the teachers’ association in Sequoia remarked, For the school board, it’s about looking good out there and saying we run the district that’s doing well and has no layoffs and rising test scores. . . The school board’s bought into this PI system and this API system, and they don’t want to be the next one in the paper. We want to be that district that’s had growth all year. We don’t want the paper to go after us. In this instance, the API scores, if high, provided a way to signal the health and strength of the district to the media and outstanding community. Low scores were seen as a signal of poor performance and potential mismanagement, with possible repercussions when the board faced re-election. Furthermore, for the two charter management organizations in particular, data ! ! ! ! ! ! ! ! !! ! ! 113! were used to signal to the market of potential customers, i.e., parents, students, and the community. Administrators at the home offices of both CMOs spoke of using college- ready data – high school graduation and college acceptance rates, in particular – as part of their recruitment efforts for students and families. These forms of data were prominently displayed under the enrollment section for each organization’s website, as well. The CMOs also used data to demonstrate legitimacy and secure future resources. This finding mirrored Feldman and March (1981)’s argument that information signaling by the organization may occur, in part, to earn legitimacy within the larger organizational field. The best example of this type of signaling was conveyed by the chief executive officer and founder of Yellowstone CMO. She described how the charter movement in Southern California had “exploded” in recent years without appropriate oversight, leading to a host of mismanaged and unsuccessful charter schools. Along with the severe budget cuts faced by the district, she felt that the climate in the district where her schools were located was “hostile” towards her schools. She explained: With this financial crisis in the nation, the state, and throughout public education, I think that the charter movement has become scapegoats for districts – “Well, if we didn’t have these charter schools, we would have their kids in our schools. And then we'll be getting more revenue.” The tension in the climate right now and the attacks that we’re experiencing, have made us even more vigilant about our APIs and CSTs being very, very strong. Because they will be used as a weapon to speak badly about us, if they’re not. . . . So I think we put even more pressure now on our schools to really keep moving upward and to do really, really well. The CEO used the CST and API scores, then, as a way to distinguish and differentiate from the low performing charters and legitimize their place within the market of schools in the area. Additionally, she saw maintaining high-stakes test results as a way of “proving” their model and helping to ensure that resources were available should the network look to grow further. As such, she noted, “Given our high CSTs ! ! ! ! ! ! ! ! !! ! ! 114! 
results, I am waiting for the day when they deny a petition [to open a new school] from [Yellowstone].” In her eyes, high CST scores were proof to the educational community in Los Angeles generally, and to their authorizer specifically, that their school model was successful and should be expanded in the future.

Summary

Educators in all four systems were confronted with many different types of data to inform their understanding of literacy teaching and learning. Analysis uncovered several patterns within and across school systems. Notably, while all systems attended to the high-stakes state assessments that contributed to AYP, API, and program improvement status, the CMOs made greater use of teacher observation data and indicators of college readiness. Across systems, disconnects arose among classroom, school, and system actors over the level of use of classroom and system assessments.

Looking at how data were used demonstrated that educators had multiple goals and purposes in mind when they used data. Introduced here were three models of instrumental data use: achievement-accountability, student learning, and instructional reflection. All four school systems had examples of the achievement-accountability model, while examples of the student learning model were more prevalent in Mammoth and Yellowstone. For Sequoia and Mammoth, instructional reflection tended to focus on informal observation data, while in Yellowstone and Yosemite, this model of data use was informed by formal observation data. In all four cases, when teachers examined student assessment data, they tended to focus more on identifying student weaknesses and less on critically reflecting on how their instructional choices may have contributed to these issues.

Emerging from the data were three other categories of data use in which information was used in ways other than to instrumentally inform instruction: the bureaucratic-compliance, positioning, and signaling models. While fewer details were available for these models, some initial patterns emerged. All four systems engaged in bureaucratic-compliance data use in response to federal and state monitoring demands (for districts) and to authorizers (for CMOs). The positioning model involved using high-stakes assessment data to persuade others toward a course of action. Finally, school systems used data to signal to outside organizations or individuals in order to establish their reputations. CMOs used data as a signal to gain legitimacy within the organizational field and to engage in the educational marketplace for families and communities.

In order to offer explanations for why these patterns of data use may have occurred, Chapter Five examines the types of resources that these school systems mobilized to support the use of data, while Chapter Six explores the role that organizational context played.

CHAPTER FIVE
ORGANIZATIONAL RESOURCES TO SUPPORT DATA USE

This chapter addresses the second research question: What organizational resources – human capital, technology/tools, and organizational processes/practices – are in place in different school systems to support data use? The first section outlines the methods used to classify and compare resources across the school systems. The next section offers comparisons of the types of resources used in districts and CMOs. Analysis suggests that these resources supported the data-use cycle at different leverage points and that the design features of the resources framed how data were used.
One outstanding area of need was supporting educators’ ability to identify instructional responses from knowledge gathered from the data, while misalignment or lack of integration of resources also constrained data use. Finally, holistically comparing the two kinds of systems, the two CMOs demonstrated greater levels of commitment to resources to support data use in certain categories. Introduction Schools and districts invest in a host of structures, programs, and interventions to support data use in schools (Marsh, 2012). The majority of this prior research has tended to look at isolated levers, such as Wayman and Cho (2009). It is the aim in this chapter to provide a more holistic, macro-level approach to understanding how organizations mobilize their resources to this end. The first step to this analysis was to confirm and/or elaborate upon the resources used to support data use in the four school systems. As noted in Chapters Three and Four, Boyatzis (1998) identified three methods for developing thematic codes: theory-driven, ! ! ! ! ! ! ! ! !! ! ! 117! research-driven, and inductive/data-driven. The categories of resources presented in Table 5.1 largely reflect codes predicted by knowledge management theory (see Chapter Two) that were then confirmed in the data. Table 5.1 Resources Mobilized to Enable Data Use in Schools and Systems Human capital Tools & technology Organizational processes & practices Teacher collaboration Data management system Scheduled time Dedicated coaching position Capture tools and technology Rewards and incentives Knowledge and skill development (PD) Communication tools and technology Standard operating procedures Site and system leadership Hiring and training for new employees Next, a stage model was developed to aid in comparing resources deployed across systems. This work was first informed by knowledge management literature, where there has been a tradition of creating a stage model to understand implementation of knowledge management programs in a business or organization. Frequently, these models holistically look at the implementation of a knowledge management initiative across the organization. For instance, Lee and Kim (2001) proposed that the capability of an organization to engage in knowledge management evolves through four stages: initiation, propagation, integration, and networking (for other examples of knowledge managment stage models, see Lin, 2007; Xu & Quaddus, 2005). Education scholars have similarly used stage models to be able to understand a program or reform’s implementation along a continuum. At the micro-level, Bolam et al. (2005) used a stage model for understanding the progression of groups of teachers that ! ! ! ! ! ! ! ! !! ! ! 118! meet together around key dimensions of the PLC model. At the more macro-level, Petrides and Nodine (2005) developed a matrix of districts’ performance-based practices along a four-stage continuum in areas in such categories as “linking assessment to instruction” and “access and use of data.” Guided by these two bodies of literature, a three-stage model was developed to understand the investment of resources at the organizational level. If evidence suggested that the school system had achieved a level of mastery, with continuous evaluation of the goals and processes, it was labeled as “significant” commitment and implementation. “Moderate” commitment described a system if it had rolled out the strategy with early implementation. 
A system was designated as “little/no” commitment in a resource category if the data suggested there was either no commitment to this resource, or educators were beginning to acquire information about and/or implement this strategy. Within each resource category, findings from relevant literature added additional details about what differentiated between “significant,” “moderate,” and “little/no” designations. This literature included large-scale literature reviews on data use (e.g., Hamilton et al.’s 2007 “What Works Clearinghouse” guide to data use) as well as available literature in each resource area. See Appendix F for the full set of criteria. Analysis for this chapter was similar to the approach taken in Chapter Four. Each interview was coded using an initial code list developed using the conceptual framework. Codes were iteratively refined throughout this process, and a memo for each system’s choices of resources was developed concurrently that focused on the level of commitment and implementation. Findings from the interview transcripts were cross-referenced with other data sources (e.g., observations, documents collected) for data triangulation. Using ! ! ! ! ! ! ! ! !! ! ! 119! the stage model described above, a classification of significant, moderate, or little/no was assigned to each school system for each category. This designation was made at the organizational level, that is, synthesizing across the themes identified at the classroom, school, and central office levels for each system in each category. This approach has limitations. First, while data were collected across the levels of the organization, they were only drawn from 1-2 schools in each system. Commitment to these resources may have varied had more schools been included in the study. Second, these resource categories were neither exhaustive nor mutually exclusive, and investment to one or more may be part of a larger strategic plan or investment strategy. Finally, the stage model may simplify the work of the systems. These school systems did not necessarily engage or invest in these strategies in a perfectly linear and sequential fashion, where, having achieved a “significant commitment” classification, the resource was firmly in place in perpetuity. Indeed, education policy implementation research suggests that most often reforms are constantly and continuously co-constructed by educators (Datnow & Castellano, 2000; Datnow, Hubbard, & Mehan, 2002). Nevertheless, this categorization process provides a useful snapshot to understand the variation in commitment to resources within and between school systems. What Organizational Resources were in Place to Support Data Use? In order to support the data-use cycle, certain organizational resources, according to theory, need to be in place and aligned to support the data-information-knowledge progression. This section explores how the school systems mobilized their resources to this end. ! ! ! ! ! ! ! ! !! ! ! 120! Human Capital As mentioned in Chapter Two, if system leaders invest in human capital resources, then in theory they would be able to build educator capacity for using data to create actionable knowledge for instructional improvement. Table 5.1 compares the investments in human capital made across the four school systems. 
Table 5.2
Human Capital Resources Mobilized, by Category, by Level of Commitment
Rows: Teacher collaboration; Dedicated coaching positions; Knowledge & skill development (PD); School/system leadership; Hiring/training for new employees
Columns: Traditional School Districts (Sequoia, Mammoth); Charter Management Organizations (Yosemite, Yellowstone)
Key: Significant commitment; Moderate commitment; Little/no commitment

Teacher collaboration. Sequoia and Yellowstone made significant investments in teacher collaboration. In Sequoia, one of the district’s five central expectations for all schools was that teachers regularly meet together in professional learning communities (PLCs). During a PLC meeting, teachers were expected to plan instruction/assessments, compare student assessment data, and share “best practices” and instructional strategies when a team member needed help. The assistant superintendent at Sequoia argued that when teachers work together in PLCs, “They’re able to synergize. Your best teachers get better, and the teachers that are struggling get better as well. Everyone gets better because they’re all having self-discoveries.” This collaboration around data was expected to help create a feeling of shared responsibility for the learning of all students; as one teacher noted, “It’s that feeling that it’s not just my students that I’m looking out for. I’m looking out for every student in the school. And we need to move everybody up, not just the ones who are sitting in my classroom.” Finally, working in a group with other teachers was reported to create a sense of internal accountability to each other as peer professionals in work that was driven by student assessment results. In Yellowstone, the drive for teacher collaboration came not from a top-down, system-wide initiative but from teachers themselves: entirely responsible for curriculum and assessment development, they found it beneficial to work in groups to share instructional resources in response to needs identified in student assessment results.

Dedicated coaching positions. In contrast to this group approach to site-based support for teachers’ use of data, Mammoth and Yosemite concentrated their resources in instructional coaches. Across Mammoth, there were over 20 instructional coaches, “expert” former classroom teachers who each worked with one or two schools and supported their peers on a “just-in-time” basis. Instructional coaches supported teachers’ data analysis skills in one-on-one or group settings and enabled teachers’ instructional responses by observing instruction, co-teaching lessons, and sharing instructional materials. The benefit of the instructional coach, pointed out a central office administrator, was in “having someone whose sole job is to support teachers.” One instructional coach further explained her role in providing individualized support to build teachers’ skills in responding to data: “My job is to try to make them be the best teachers that they could possibly be… You build on their strengths, and then you look at what’s the next area they could work on, whether it’s formative assessments, using questions, building relationships with students. It varies with each teacher.”

Yosemite also chose to invest in a coaching model but did not assign a coach to each school. Instead, one coach was assigned to work with teachers at three or four schools within the network.
Over the course of the year, the coach developed different goals with individual teachers, including working with them to analyze student achievement data to make changes in instruction. Beyond this one-on-one work, the chief academic officer described the larger organizational benefit of having coaches move between schools:

I call it a 'bumble bee effect.' The coaches do a couple of things that makes them act like bumble bees. One is they take an instructional practice that they saw in one school and literally pollinate that school down the road by Friday. The other thing they're doing is they're buzzing back and forth, quite physically and literally, between the home office and schools, where they're saying, "I'm noticing this. This is a trend in the data in LA. Have you noticed this in the Bay?" They're having these conversations, and then they're going back to their sites, they are these incredible communication vehicles within the organization.

Knowledge and skill development. In all four school systems, professional development regarding the data management systems, data-use skills, and instructional responses was an important set of strategies. Sequoia, for instance, had worked with an external consultant over the past three years to roll out the professional learning community initiative. In her meetings with school leadership groups, she modeled creating norms for data-driven conversations, setting goals for student learning, and having "tough conversations" when necessary to move practice forward. Some of the practical tools she offered educators included rubrics for self-reflecting on the depth of PLC implementation, and she provided tools and direct instruction on error analysis techniques. She engaged stakeholders across the school system – from the superintendent and her cabinet down to school leadership teams – in conversations guided by student achievement data to help support decision making at the school and district levels. In the past, Mammoth had similarly partnered with outside research groups to roll out the literacy model that emphasized student reading/writing samples, although that work had largely stopped at the time of this study.

The two CMOs relied more heavily on in-house professional development. Principals within the two systems were responsible for designing and implementing trainings based on student and teacher needs, and after each interim or benchmark assessment, principals reconfigured their training schedules to support teachers. For instance, one of Yellowstone's principals described how he designed and provided training on how to teach nonfiction writing after the fall benchmark showed student weakness in this area. Additionally, both CMOs -- as part of their work with the Gates Foundation grant on teacher effectiveness -- aimed to target professional development to the needs of each teacher based on individual observations of instruction. As mentioned earlier, teachers were observed on a detailed rubric and then connected with targeted professional development resources (e.g., videos of teachers who excelled in that particular knowledge/skill) to build their instructional capacity.

Site and system leadership. In terms of leadership, all four school systems had school and system leaders who advocated for and shared the expectation that others use data to inform decisions. The role of the system leader began with creating explicit expectations and norms around data use.
In Sequoia, for instance, one of the explicit expectations in the district -- referenced by district and school level educators alike -- was that every school had data-based teacher collaboration time that involved analyzing common assessment results and adjusting instructional plans. System leaders across the four systems also actively modeled data use in their own work. One of Yosemite's home office staff noted of the top management team:

Their requests and their need for data, their appetite for data is so huge, and the types of questions they ask are so complex… One question I've gotten from [the Chief Executive Officer]: "I want to see all the schools' APIs over the course of time. I want to see how they staffed that school, what resources were in place at a school when they finally got over the 800 API hump, or they made huge gains. What did they do, and how did they manage their resources to get there?"

In all systems, school site leaders played a major role in communicating and supporting the norms and expectations around data use at the school site. Teachers reported turning to principals or assistant principals to analyze data together, suggest instructional responses, or develop school-level interventions. For example, after each benchmark assessment, Yellowstone's school leaders met with each department to analyze data and support teachers as they realigned their curricular maps. After the winter benchmark, teachers and administrators at one school jointly decided to implement an afterschool tutoring program that targeted students who were low on certain standards.

Another strategy employed by principals in all sites was to engage in a distributed leadership approach in which responsibilities were shared with the teaching staff. In Sequoia, school leadership teams, consisting of teachers from each PLC and the school administration, were responsible for jointly attending district professional developments on data use and instructional strategies and then training school site staff within their individual PLCs. The leadership teams were tasked with making and rolling out significant school-level decisions, as well; as one principal explained, "All the decisions that we make in terms of program improvement, in terms of instruction, we make as a team." Given the system's emphasis on teachers as curriculum developers, it was not surprising that Yellowstone sought to empower teachers in school-level decisions, as well. One principal explained: "I believe we need to be flexible and strong enough to allow the people who are the experts to develop the curriculum and adapt their classrooms, and the school, as necessary."

Identifying teacher leaders (beyond the coaches) was another way to distribute data-based leadership. For instance, Yosemite had an on-site "data driver" at each school, a teacher selected to effectively model and coach her colleagues to use the new analysis products and tools. She attended monthly trainings at Yosemite's home office and then worked with the principal to develop regular trainings to share this information out to the school site. Being a data driver not only allowed her to advance her own teaching practice while remaining in the classroom; it also made her an important resource for teachers, a "non-threatening" "peer" who could be turned to for questions and advice.
Informally, other teachers and school-level staff across the four systems were identified as sources of help for data-related questions, including scanning in assessment results, creating graphs and charts, or suggesting instructional responses. Finally, both formal and informal school leaders played a role in communicating priorities around data use and creating the school's culture. On one hand, school leaders reinforced the belief that using insights from students' assessments was a priority for instructional decision making. On the other hand, they were also able, in some cases, to ameliorate the pressure and anxiety that teachers felt around standardized test results. An assistant principal from Mammoth elaborated:

Our principal is very sensitive to the need of the teachers. Teachers still feel a little beaten down because it's the continuation of program improvement. He [the principal] doesn't want them to get burned out. There's a real balance of raising the rigor and expectations and making sure they feel comfortable and confident.

Similarly, another principal from Yellowstone revealed that he encouraged his staff to look "beyond the numbers" and instead focus on reaching students and re-thinking their instruction with new strategies.

Hiring, training, and retention for new employees. Across all four systems, new teachers were provided school-based support around data use. In Sequoia, for instance, a new teacher was helped as part of her participation in her PLC, while in Mammoth, a principal asked the instructional coach to work with a new employee to teach her how to use the data management system. However, unlike the traditional districts, the CMOs had more developed, formalized programs in place to support the recruitment, training, and retention of new teachers. Both CMOs were able to recruit new staff with an eye to teachers' previous experiences with data analysis as well as counsel out those teachers who were not a good fit for the organizational culture, mission, and goals.

The two CMOs also heavily invested in new teacher training. Yosemite stood out in this regard, as the CMO partnered with a local school of education to develop and offer their own teacher education program. In the first year of the residency program, an accepted student teacher spent the year shadowing a mentor teacher in a Yosemite school, with explicit instruction and practice around data use and instructional planning. The next year, the teacher worked in her own classroom, with a site-based induction coach who provided weekly coaching and feedback, again with an emphasis on analyzing different kinds of data to make changes in instruction. The chief academic officer believed that their work in teacher training was critical for creating data-driven habits, routines, and norms in new teachers:

The big hope around induction, the best thing we can do for these teachers, is to essentially develop the habit within them that they're constantly asking themselves at the end of every day, week, month, year: "What worked? What didn't work? What am I going to do about it?" If we have developed a reflective practitioner, nothing will be more important to that teacher, to the organization, or to the kids they serve ultimately.

Yellowstone also created a summer "learning lab" for new teachers prior to their start in the classroom.
New teachers observed a master teacher work with students in the morning and then debriefed with the master teacher as she walked through how she designed her lesson based on the pre-tested needs of students. The benefit, reported a central office administrator, was that new teachers "see what Yellowstone's expectations are: We make our data public. We discuss it. It's not something to be afraid of, it's something to be a foundation for conversation."

Technology and Tools

As noted earlier, if school systems invest in technology and tools for data use, then theoretically, educators may be supported in organizing, prioritizing, and managing the numerous streams of available data. Table 5.3 compares the investments in technology and tools made across the four school systems.

Table 5.3
Technology and Tools Mobilized, by Category, by Level of Commitment
Columns: Traditional School Districts (Sequoia, Mammoth); Charter Management Organizations (Yosemite, Yellowstone)
Rows (technology & tools): Data management system; Capture tools, technology; Communication tools, technology
Key: Significant commitment; Moderate commitment; Little/no commitment

Data management system. Like many school systems nationally, each of the four school systems had invested in data collection and analysis tools. Three of the four school systems – Sequoia, Mammoth, and Yellowstone – had management systems with similar functionality. These data management systems provided teachers timely access to state standardized test results and interim/benchmark assessments as well as data displays. School-level educators were also able to scan in their own teacher-developed, multiple-choice assessments to create new data entries. According to many teachers, the systems allowed them to manipulate data in a range of ways, depending on their particular question. One teacher elaborated:

We looked at data four different ways… teacher by teacher, class by class,… by standard, … by student. So that was extremely helpful, because you could just pinpoint standard 3.2, look across the board. Your kids didn't get this, well, how did you teach it? Or the other way around, your kids got all of this right, what did you do?

Yosemite's data management system had all of these features, plus notable others. Of the four systems, their management system allowed for the most complex, sophisticated types of data analysis. For example, using pre-/post-CST data, one tool compared teachers by their ability to move groups of students across proficiency level bandwidths (i.e., basic to proficient). They were then able to identify teachers who were successful in increasing student achievement on these tests and encourage them to mentor other teachers, apply for teacher leader positions, or lead professional development. A second notable element of Yosemite's data management system was the level of transparency and access to data. System users at all levels could retrieve and compare student mastery by standard, teacher, grade/department, and school. Prior to this increased access and transparency, the director of assessment explained, "We were preventing teachers from identifying other teachers that were doing great things." With this feature, teachers could easily contact other teachers in the network who were "hitting it out of the ballpark" on certain standards and reach out informally to collaborate.
Instructional coaches and home office administrators also identified "high flyer" teachers in order to collect resources, lesson plans, and strategies to share across schools. Finally, Yosemite heavily invested in home office personnel to support the development and roll-out of the data analysis tools available in their data management system. Funded through foundation grants, the 10-person technology team worked as the "in-house technology consultants" whose goal was to improve the quality of available data, work with school and system leaders to identify new data collection/analysis needs in the organization, and create user-friendly tools in response. The head of the technology team described the changing roles of this group:

When [the technology team] first started, we were doing a lot of data analysis for area superintendents, for the management team, for principals. We evolved by taking the various data requests that principals were giving us, and we created the principal dashboard, we created the teacher data portal. All those things came out of us understanding the data needs and provide user-friendly tools to allow them to do the work on their own. We're coming to a place now where we're trying to empower folks within the home office, within each department, to use their department data to support schools.

Capture tools. Beyond the centralized data management systems, system and school leaders frequently developed "capture tools" to help educators not only collect data related to learning but also interpret and determine instructional responses. These included data analysis protocols or data discussion guides. For instance, Yellowstone teachers used a data analysis protocol on Excel to help facilitate their discussions with grade level teams or school administrators. After collecting data from an assessment (e.g., system benchmark), teachers used the tool to identify patterns in mastery of standards and reflect on possible reasons behind these trends. A new curricular map was created based on these reflections. Other capture tools focused on collecting and sharing teacher instructional data. In Sequoia, a recent initiative to videotape and share out successful elements of teacher practice had been rolled out, although there was some pushback from some teachers and the teachers' union. A less controversial tool to collect information on teacher instruction in Sequoia was the instructional walk-through checklist. When teachers and administrators went on monthly visits to classes, they identified instructional techniques in place and provided feedback to the visited teachers on other ways of approaching a lesson. Finally, in Sequoia, Yosemite, and Yellowstone, principals had iPads equipped with a computer software program, Observe360, that enabled them to email quick notes or observation reflections to teachers after formal or informal observations. With this software, they were able to link teachers to websites with additional instructional strategies.

Communication tools. A second group of tools was used for sharing decisions made based on data and best practices. All four school systems turned to more traditional forms (e.g., paper agendas, announcements) as well as electronic technologies (e.g., e-mail). However, the two CMOs had invested in developing and populating an internal network available only to members of the organization.
In the case of Yosemite, the online resource portal was developed by instructional coaches who, with the help of the analysis tools and their own observations, had identified and collected successful instructional materials from teachers. After they were uploaded, educators across the network were able to search by grade level, topic, and standard to find lesson plans, curricular maps, and other materials. The organization had also started to post videotaped clips of instruction on the website. Sequoia had a similar communication tool in place, although it had been rolled out only recently; teachers at one Sequoia school uploaded their lesson plans to Blackboard on a weekly basis in an effort to create a repository of written instructional materials. Yellowstone's online resource site similarly acted as a repository for resources, but the system was, at the time of the study, changing to provide more "in-time" communication as well. Teachers would be able to sign in, pose questions or ask for suggestions, and share out ideas. One system leader suggested that this technology was particularly important given the small size of some of the schools. In these instances, a teacher could be the only one of her kind (i.e., the lone 6th grade literacy teacher), and the online forums allowed her access to an online community of peers. As the home office administrator further explained, "There might be a shared conversation where everybody who's working on questioning in [the CMO] would read an article on it and then respond through chat rooms or discussion boards."

Organizational Practices and Policies

As referenced in Chapter Two, when school systems provide structural supports that enable educators to use data at the school level, in theory they set the foundation for this work to occur. Table 5.4 offers a comparative perspective on the organizational routines, processes, and policies in place to support this data-use work in CMOs and districts.

Table 5.4
Organizational Practices, Policies by Category, by Level of Commitment
Columns: Traditional School Districts (Sequoia, Mammoth); Charter Management Organizations (Yosemite, Yellowstone)
Rows (organizational practices, policies): Scheduled time; Rewards and incentives; Standard operating procedures
Key: Significant commitment; Moderate commitment; Little/no commitment

Scheduled time. One key component noted across all four systems was dedicating frequent time to collect and analyze data. In the cases of Yellowstone and some of the schools in Sequoia (i.e., those that had received a waiver from the teachers' union to make a change to the daily schedule), teachers had not only weekly time to collaborate as a school staff and their independent prep time but also daily common planning time. As Sequoia's district superintendent noted, having this common "prep" time to discuss curriculum, analyze data, design assessments, and plan strategy is a "powerful, powerful, powerful thing." Several teachers in Yellowstone similarly argued that the daily meeting time allowed them to make collaboration around data and instruction a regular, habitual part of their teaching practice and supported their attempts at building and maintaining trust within the groups of teachers. In comparison, the two other systems – Yosemite and Mammoth – had weekly or bi-weekly opportunities for departments or grade levels to meet. Additionally, the four school systems offered time for cross-site collaboration.
For the two traditional school districts, these opportunities most frequently occurred for school leaders. In Mammoth, for instance, the instructional coaches from across the district met weekly to work on curriculum or discuss school/system level data, while in Sequoia, the middle school principals met monthly with their assistant superintendent to discuss data and share strategies. The CMOs, on the other hand, provided multiple opportunities to develop cross-school relationships at all levels. Each CMO had formed smaller regional groups, overseen and supported by a regional superintendent. Within these regional groups, schools would regularly meet to disseminate information or share practices. They offered opportunities for like-teachers (e.g., all of Yellowstone's literacy staff) to meet together after benchmark/interim assessment results were returned and "compare notes." One teacher described a collaboration day with other Yellowstone middle schools:

After our last benchmark, all the middle schools worked together. We first held a large group session, and we talked about how to analyze data. Then we split into differentiated groups – humanities, math – and then we provided differentiated PDs. Based on your benchmark results, we had an assessment building PD, an engagement strategy PD, a literacy PD. There were combinations of teachers from different school sites at the different sessions.

Yellowstone offered a semiannual "Community of Practice" where educators across the system presented best practices at an all-Yellowstone conference. One of the middle school teachers presented a session with the assessment director on how to design student surveys: "We showed them the basics on survey design, gave them a few scales on student perceptions, and showed them how to look at relationships between student perceptions and various outcomes."

Rewards and incentives. While none of the systems had reward programs in place to incentivize data-use behaviors specifically, three of the four school systems mentioned developing such programs to reward an educator's role in student academic progress. For the two CMOs, through their teacher effectiveness initiative, this incentive system sought to tie student achievement with teachers' compensation (although again, not with data-use behaviors directly). A portion of a teacher's salary, for instance, would reflect improvement in student achievement made by her students, across the grade level, and within the school. For Yosemite, a pay-for-performance system had been in place prior to the commitment to this project: "Historically, [Yosemite] always had a performance component to pay, part of their bonus was based on whether the school met their API targets." Rather than create a competitive atmosphere, one teacher reported that it "encouraged teachers to help their schoolmates."

In Sequoia, policies to incentivize educators focused on non-compensation related awards. At one school, teachers were entered in a raffle for movie tickets at the end of each week, their number of entry tickets based on the number of instructional strategies observed by the principal during her weekly walk-throughs. Another technique used to motivate educators was to share out district benchmark results to all principals quarterly. The assistant superintendent reported that this created a friendly competition for "bragging rights" between schools.

Standard operating procedures.
Finally, there was an assortment of other policies and routines in place in the school systems to encourage and enable data use. For example, system leaders in all four systems described bringing and discussing student achievement data as part of meetings with the cabinet or the school board (in the case of the two districts) or the board of directors or meetings with partners and investors (in the case of the two CMOs). Similarly, educators in all four school systems described organizational routines in which central or home office administrators met with teachers and school leaders to discuss and analyze data. For instance, Sequoia's Director of Curriculum and Instruction noted that after each benchmark assessment, she and a team of other "teachers on assignment" identified "target" schools based on low benchmark performance. Then, this group of home office staff spent the semester "walking the classrooms" and supporting teachers and school leaders in their PLC implementation generally and data analysis specifically.

Performance evaluations were a notable difference between districts and CMOs in terms of their standard operating procedures. Given their flexibility around teacher evaluations, Yellowstone and Yosemite included a teacher's data-use practice as part of the yearly appraisals. In Yosemite, principals met with their teachers at the beginning of the school year to set goals based on the educator performance criteria, with similar mid- and end-of-year evaluations as well. One category of the Yosemite performance rubric was a teacher's ability to "use data effectively to inform instruction," and a teacher who "consistently demonstrates the use of assessment results as a central foundation for instructional decisions" earned a "distinguished" designation. Another category focused on a teacher's ability to assess student growth. The distinguished ranking in this category meant that her instruction fully aligned with student assessment results; students monitored their own progress in achieving their goals; and assessment results were used to design future goals for individual students (versus the class as a whole or groups of students). In contrast, neither of the two districts' collective bargaining agreements mentioned a teacher's ability to make decisions based on assessment results. More generally, Yosemite's and Yellowstone's teacher performance evaluation metrics were far more detailed than those in the districts.

Overall Patterns

All four school systems had invested significant resources to support educators as they used data to make instructional decisions. Looking across the cases, several notable patterns stood out around how these resources enabled or constrained data use.

Finding #1: Resources Support the Data-use Cycle at Different Leverage Points

Empirically replicating the finding from Marsh's 2011 literature review on data-use interventions, this analysis suggests that different resources supported the data-use cycle at different leverage points when looking across all four cases (see below). For instance, data management systems and other technology seemed to be most helpful at the front-end of the data-use cycle, supporting an educator's abilities to collect and organize data into forms that were more "digestible." Investments in human capital, like professional learning communities, supported teachers as they made sense of the graphs
and charts together and shared out "best practices." Organizational routines, policies, and processes were a necessary (but not sufficient) part of data use; they provided the time and place within which the entire data cycle could unfold. Other resources played a role in establishing this foundation for data use. School leaders, for instance, supported data use indirectly when they modeled using data to drive decisions as well as created a school climate that was open and trusting, with expectations and norms around this work.

Table 5.5
How Data-use Resources Supported Data Cycle Leverage Points
Leverage points in the data-use cycle: access, collect data; organize, filter, analyze data into information; engaging with information to create new knowledge; linking knowledge to instructional decisions; foundation for data-use process
Human capital: Teacher collaboration; Dedicated coaching positions; Knowledge, skill development; School, system leadership; Hiring, training, retention for new employees
Technology and tools: Data management systems; Capture tools; Communication tools
Routines and processes: Scheduled time; Rewards and incentives; Standard operating procedures

Finding #2: Resource Design Features Matter

Second, design features of these resources impacted the types of data accessed and framed how they were used for instructional decision making. For instance, the data management systems of all four systems only supported data that were quantifiable, making data from standardized state tests or system benchmarks easily accessible. Entering "soft" forms of data (e.g., written responses, oral discussions) into the data system required a time-consuming process in which educators reduced the qualitative information into numerical values. The system design's preference for hard data may explain why educators identified standardized test results as "official" compared to other types. Similarly, all of the data management systems were designed to provide analysis by proficiency bandwidth (e.g., basic, proficient), again potentially encouraging an achievement focus over a learning focus. Overall, the commitment to this system design communicated a message to educators system-wide that these were the types of data and analysis that were valued, supported, and expected.

Design of human capital initiatives similarly shaped how they supported or constrained instructional decision making. As noted earlier, both Mammoth and Yosemite had made significant commitments to their instructional coaching programs. Mammoth's approach placed one individual at each site, while each of Yosemite's coaches had multiple schools to support. As such, Mammoth's instructional coaches tended to have more regular access to classroom and teacher observation data, compared to their counterparts in Yosemite who relied more heavily on the systems-level data available in the centralized data management system. The "bumble bee effect" of coaches moving between schools, however, meant that new instructional strategies in response to data could be shared quickly and easily between schools. Similarly, the role of teacher collaboration in data use varied depending, in part, on its design features. Teachers at one school in Sequoia met in their PLCs almost daily, and they had the time and encouragement from their administration to delve into analysis of classroom
assessments, common assessments, and unit exams, as well as student work. In Mammoth, beyond their bi-weekly time designated for lesson planning, formal teacher collaboration efforts for data use occurred after the quarterly benchmarks. Consequently, the emphasis was on analyzing and responding to these benchmark results. Again, while many reported that the coaching program and the teacher collaboration efforts enabled data use, the particulars of their design shaped the types of data these initiatives supported and how they were used.

A final example of resource design comes from the organizational processes and practices category: rewards and incentives. When designing programs to reward educators' data use, the choice of data, measures, or indicators of "success" was important to how the incentive program unfolded. For instance, the two CMOs were in the process of developing a pay-for-performance compensation system that rewarded gains in student achievement data from state assessment results and system benchmarks, teacher evaluations (based on formal observations), and satisfaction surveys from co-workers, parents, and students. While this program did not explicitly reward data use, both CMOs articulated that using data to shape instruction was an essential quality of successful educators, and therefore the new compensation system would implicitly support this work. Although the program was still in its pilot phase, educators in these two CMOs voiced concerns around the choices of types of data to include, the design of the data collection instruments, and the weight of the various measures on the final teacher effectiveness ranking. Although a Yellowstone principal supported the fact that there would be multiple measures included in the teacher effectiveness rating, he worried about how teacher instruction was translated and "reduced" into scores on the framework. He mentioned that, given that a teacher's written lesson plan played a large role in the formal observation/evaluation process, "a person [who] is a really good writer is going to score really well. Even if you are amazing in the classroom, if you are not a really good writer, you are not going to score as well." Multiple teachers in both Yellowstone and Yosemite believed that the new compensation system – with 40% of the rating based on school and individual CST and benchmark results – would place an even greater push toward an accountability-achievement model of data use, such as greater focus on "bubble" kids, limiting "authentic" assessments in favor of ones that modeled standardized assessments, and narrowing the curriculum to highly tested standards.

The CMOs' reward/incentive program was a good example of how issues of resource design can have a reciprocal relationship with the larger organizational context; in other words, the resources shaped, and were shaped by, the organizational environment. Specifically, the CMOs' "data-driven" culture and climate was the basis for developing a new, student achievement-driven compensation system. However, the design of this program may threaten to change or negatively impact their organizational culture. As the Chief Academic Officer in Yellowstone offered:

There is going to be an interesting culture shift that we are going to have to navigate, and that's when you start linking anything to compensation, it gets weird. Stakes are a lot higher. There is a desire to game the system because it's compensation.
That is my biggest worry about how to figure out – how to maintain the open reflective, data-driven landscape we have now and not lose it to the compensation piece.

Consequently, the design of system resources may also explain the prevalence of some models of data use over others. Recall from Chapter Four that six categories of data use emerged from the data (accountability-achievement; student learning; instructional reflection; bureaucratic-compliance; positioning; and signaling models). The accountability-achievement model of data use was the most prevalent type of data use across all four systems, a pattern that can be further understood by considering the design of system resources. As noted earlier, the main data management systems of all systems sorted and presented data analysis using the CST designations (e.g., far below basic, basic, etc.). Other tools, like data analysis protocols to be used during teacher collaboration time, frequently guided teachers to attend to, process, and draw conclusions based on data aligned with the CSTs at the expense of other forms of data (e.g., student work). Resource design features similarly helped to explain the prevalence of the student learning model of data use. For instance, in Mammoth, instructional coaches played a significant role in rolling out a literacy program that emphasized formative classroom assessment, including teacher/student conferences, reading logs, and student work. Mammoth's instructional coaches, with their literacy expertise, were regularly involved in supporting teachers as they made sense of these forms of data. Across all three categories of organizational resources, design features affected the types of data accessed and framed how they were used in decision making. One Yellowstone teacher may have summarized this finding best when speaking of one initiative -- "the devil is all in the details."

Finding #3: Investments in Human Capital Help Educators Determine Instructional Responses

Educators across all four systems were more or less comfortable in gathering, accessing, and disaggregating data, as well as creating and reading charts and graphs. However, one of the outstanding needs mentioned across systems was the ability to take the knowledge gained from data analysis, reflect, and identify the best instructional response. Sometimes teachers were unable to do this because they felt pressure to stick to the pacing guide, or because, as one teacher in Sequoia described, their team spent their dedicated planning time "filling out paperwork for paperwork's sake." More often, though, the limitation occurred because of a weak set of knowledge and skills in this area. The external consultant at Sequoia best expressed this sentiment:

People spent all their time analyzing and no time talking about what they're going to do about it… What we're getting clear at is that people are spending all their time on what we call "problem admiration." Analyzing it, spending time with it, figuring out which strand [the students] got, how many kids got it. And realistically, then you don't have enough time to talk about what you're going to do about it. So we've been spending a lot of time [in our professional development training] on root cause, on error analysis, helping teachers do that piece and then the instructional strategy match.
As noted earlier in Table 5.5, it was largely through investments in human capital – professional development, modeling by school/system leaders, meaningful data-driven conversations within teacher work groups or between a coach and teacher – that this aspect of data use was cultivated and developed. These investments were reported to be particularly beneficial when they not only aided teachers in considering the reasons why a student may not have mastered a standard (e.g., how the assessment question was phrased) but also helped them make sense of, and reflect on, their own instruction. For example, a team of 8th grade teachers in Mammoth worked with the instructional coach to analyze benchmark results, focusing on a question that tested standards on persuasive writing:

Teacher 1: For me, it was one of the three questions of standard 1.1. I'm almost at 70% of two of the questions, but #45 kicked my butt.

[Teachers all look at their computers where they have the data management system open. They all click to open the breakdown of question #45. The screen has the original question and answers, as well as a graph of the number of students who chose each answer.]

Teacher 2: Their problem was understanding the appropriateness of the answer. The sentence my kids picked was B, "the school board has nothing else to do." You don't end an editorial, insulting those who are going to make those choices.

Instructional coach: So maybe about knowing the audience?

Teacher 3: I think I stressed in my instruction that we weren't going to use "My" and "I," the first person, in persuasive writing. I think that's why the kids picked A and B for their answers. When we were teaching, every time we saw "I" and "my" in persuasive writing, we'd cross it out. I think that's why they didn't get this question. They crossed out C and D because they used the first person.

Teacher 2: I think if you got rid of the answers with "I," you're left with A and B, eliminating B because that wasn't the point. They're left with A. Is that our fault? Most persuasive essays are not done in 1st person, so I'm not sure about this one.

This example demonstrated how the investment in human capital -- in this case, the teacher work group, led by an instructional coach -- supported teachers' analysis of not only student learning, but also reflection on their own instruction. Teacher 2 initially focused on the students' lack of understanding of audience for persuasive writing. Teacher 3 then reflected that it was her own instruction that was likely at the root of the confusion for students. Teacher 2 responded, adapting her original explanation with the new consideration around her instructional choices. The work with other teachers and the instructional coach enabled Teacher 2 to consider and reconsider student assessment results in light of her own instruction, a reflective process that may not have taken place without the support of these human capital resources.

Finding #4: Alignment and Integration of Resources is Significant

This earlier example of the 8th grade team meeting with the instructional coach also illustrates a fourth pattern that emerged in data across all four systems: the importance of alignment of resources dedicated to data use. In this instance, resources were highly aligned and integrated, supporting one another to advance the work.
The school leader had dedicated a substantial amount of time – half of a school day – for the instructional coach to meet with each grade's literacy team. Using the data management system and a data analysis protocol as tools, the group analyzed the benchmark results, identified the three lowest standards, and created an action plan to address these areas. In comparison, educators from Sequoia noted that alignment was sometimes a challenge, particularly as new initiatives were introduced. The Superintendent explained, "The real trick is having the training coordinated with the coaching coordinated with what's happening in the PLCs." Sequoia's Assistant Superintendent further elaborated:

I think where the problem occurs is when the district keeps increasing the numbers of the initiatives, and your plate gets fuller and fuller and fuller…. We keep adding and adding and adding. [Gives the examples of PLCs, Response to Intervention, different instructional strategies]. And this is why people are just dying…. I think where we will succeed is for everyone to see the connection, that this is all part of the same thing. But because the activities for the same goal are so varied, people just burn out… But look, [teacher, principal], can you please take the time to see how they connect?

Without a sense of how the different resources and initiatives worked together to support data use for instructional improvement, educators expressed feeling overwhelmed and were more likely to "pick and choose" what they decided to implement. Given that each resource, based on its design, privileged or drew educator attention to one form of data over others (and consequently, certain models of data use over others), unaligned resources may have led to confusion over what was expected and valued as "data-driven decision making."

Finding #5: CMOs have Significant Commitments to Data-use Resources

Finally, as seen across Tables 5.2, 5.3, and 5.4 presented in this chapter, the two CMOs demonstrated a greater level of commitment to resources that supported data use. Eighty percent of the CMOs' investments in human capital resources were rated as "significant commitment," compared to only 50 percent of such investments in the traditional districts. In particular, the CMOs stood out in their ability to hire and train new employees, developing educators' capacity for data use at the beginning of their careers. The districts instead allocated "significant" human capital resources to support teachers once they were working in their classrooms, either through instructional coaching programs or a professional learning community initiative. It was less clear why districts had not invested in these resources for new hires. One possible reason could be that CMOs, with their growing networks of schools, were faced with a greater demand for teachers. The districts' economic situation could also play a role. As per the districts' teachers' contracts, reduction in force (RIF) efforts meant that hiring had to come from within the districts' current tenured teacher pool rather than from outside, new hires.
The CMOs also stood out in their commitment to tools and technology, as all of their investments were either "significant" or "moderate." For the districts, two of the six categories were ranked "little/no commitment," and four exhibited "moderate commitment." Yosemite, for instance, had developed the most sophisticated of the data management systems -- one that integrated multiple types of data in one comprehensive location with many analysis tools. With their ten-person technology team, along with the site-based data drivers, Yosemite also had the greatest investment in site and system personnel to support use of this program. Finally, in terms of organizational routines, four of six of the CMOs' organizational processes and practices were "significant," while the same proportion of the districts' categories were "moderate." In particular, CMOs had frequent days reserved for in-school data analysis, and their educators highlighted the benefits of cross-network collaboration to collectively analyze data as well as share instructional strategies.

Summary

In sum, all four school systems had strategically allocated resources to support data use by educators across organizational levels. Findings suggest that different resources supported the data-use cycle at different leverage points, with design features of the resources shaping how data were used. Investments in human capital were particularly important for helping educators figure out how to act upon the data after analyzing them. Alignment and integration across resource categories (human capital, technology and tools, policies and practices) was also critical in supporting data use. Finally, using a three-stage model of commitment to resources, patterns in investments emerged. On the whole, CMOs had committed many resources to support data use. The next chapter looks to explain these findings, considering how environmental and organizational factors shaped patterns in data use (Chapter Four) and the resources mobilized to this end (Chapter Five).

CHAPTER SIX
ORGANIZATIONAL AND ENVIRONMENTAL FACTORS THAT SHAPE DATA USE

Differences in organizational context, local political circumstances, financial environments, and external accountability demands between districts and CMOs created important contextual conditions for data use. Sometimes these differences directly influenced the types of data attended to, while at other times they affected an organization's ability to mobilize its resources. Some conditions were universally relevant across system types, while other factors influenced only one type of system, or influenced both types of systems but for different reasons and to different ends. Accordingly, one cannot fully understand the dynamics of data use in school systems without a clear picture of the organizational and environmental setting in which this process unfolded. Figure 6.1 presents a model that illustrates the specific factors affecting data use and resource allocation in the study's traditional school districts and charter management organizations. Using an analytic frame based on the work of Talbert and McLaughlin (1993), the ongoing, nested interactions across environmental/organizational contexts and data use/resource allocation can be thought of as "the embedded context for data use."

Figure 6.1. Embedded Context for Data Use

Across these conditions, there was variation between the districts and CMOs, summarized in Table 6.1 below.
This chapter highlights these key differences.

Table 6.1
Comparing Contexts: Traditional Public School Districts and CMOs

Organizational Context
- Structure: Traditional Public School Districts - Hierarchical Bureaucracy; Charter Management Organizations - Decentralized Network
- Age and Size: Districts - Established, Fixed Size; CMOs - New, Growing
- Degree of Regulations: Districts - Regulations; CMOs - Fewer Regulations
- Distribution of Decision Making: Districts - Top-down; CMOs - Decentralized
- Culture: Districts - Monitoring; CMOs - Mission-Driven

Local Political Environment
- Districts: Superintendent, Board of Education, & Teachers' Union
- CMOs: Charter-related Politics

State, District Financial Climate
- Districts: State, District Budget
- CMOs: State, District Budget; Charter-related Costs; Role of Foundations

Federal, State Accountability Policies
- Districts: No Child Left Behind; California's Public Schools Accountability Act of 1999
- CMOs: No Child Left Behind; California's Public Schools Accountability Act of 1999; Authorizers; Market/Community Accountability

Reflecting on patterns identified in Chapters Four and Five, this chapter next illustrates how organizational and environmental contextual conditions had significant impact on a school system's ability to organize to support data use.

Organizational Context

Structure: Hierarchical Bureaucracy Versus Decentralized Network

Educators in all four school systems expressed overall positive relationships between the school sites and central offices, where "collaboration" and "support" were the rule, not the exception. However, the structure of the two types of organizations – hierarchical bureaucracies compared to decentralized networks – shaped school and system interactions around data use in both overt and subtle ways.

The two traditional school districts were best described as hierarchical bureaucracies, with clear lines of authority and departmentalized internal structures. The organizational charts of both districts demonstrated a clear hierarchy with numerous levels and specialized departments, including separate units for special education, assessment, human resources, business services, and educational support services, among others. In Mammoth, four levels separated the superintendent and the assessment and evaluation technician. Departments within the districts were compartmentalized and specialized, and the work of one department was often separate from that of the others, even if there were shared or overlapping responsibilities. In both Sequoia and Mammoth, numerous educators mentioned that the most successful system-wide policies were often initiated "from the top down," led by the superintendent's efforts, with detailed strategies for roll out, monitoring, and support.

In contrast, CMOs could best be described as decentralized networks of schools, with a home office that provided back office support. The system leaders described the CMOs as "lean," "grassroots," and "very flat, not hierarchical." Yellowstone's chief executive officer explained their organization's structure and the relationship between home office and schools:

I would envision the organizational chart where our home office is actually at the bottom, and the schools are all in top. From the home office, you see arrows going up, because we are serving the schools. Rather than a traditional district, where it's a district office on top, and then all schools are underneath, and the district office is telling the schools what to do….
I tell the finance department, operations, IT, you are here to serve, you are not their boss. We wanted it to be that if the schools had a choice, that they could get services from this office or another office down the street, you want to be the best service possible, so they would opt to buying you.

While the CMOs did have some of the same departments as the districts (e.g., human resources, special education, etc.), the CMOs were flatter, with fewer levels on their organizational charts. Individuals within the organization also expressed a greater tendency for cross-department work. For instance, the Chief Academic Officer reported that she was "thought partners" with the regional directors as well as the Assessment and Evaluation Director and frequently worked on project-based, cross-level teams that also involved both home office and school site staff.

Impact on data use. The structural differences offered a partial explanation for some of the patterns of types of data used and models of data use outlined in Chapter Four. Remember that in both CMOs and districts, there was a misalignment between central/home office administrators and school site educators around the use of the system benchmarks and classroom data. In general, the home office administrators placed a high level of emphasis on the benchmarks, compared to a more mixed level of use by teachers and principals. This trend was reversed for classroom data: with the exception of Yellowstone, teachers and school leaders had greater levels of reported use of classroom data than the district administrators. While one might predict this pattern for bureaucracies (i.e., the organizational layers make it more challenging to send information up and down the organization), the explanation behind this finding in CMOs was less clear. It seemed likely that the data-use trends here were best explained by stakeholder membership instead of the larger organizational setting. In both types of school systems, teachers and school leaders, as front-line employees, were highly attuned to the use of classroom level data to influence instruction, while the district administrators believed their efforts at the systems level via benchmarks and interim assessments were of high value for informing instruction.

Impact on resource allocation. The structural differences between CMOs and districts had a larger impact on the systems' abilities to mobilize resources for data use. Recall that districts had fewer "significant" commitments to data-use resources compared to CMOs. The role of the compartmentalized and specialized district bureaucratic structure may offer an explanation for this pattern. First, the organizational structure shaped the flexibility with which resources could be mobilized for data use. In Sequoia, departments were distinct, overseen by separate supervisors with individual budgets. The assistant superintendent for secondary schools, grades 6-12, reported that other divisions "don't like to articulate with us." She shared that the departmental silos got in the way of sharing resources: "I have very, very few resources to begin with. We have K-6 schools, but I am still responsible for the 6th grade. But guess what?
Do you think she [gestures to the elementary assistant superintendent's office] is willing to share her resources that are more abundant than mine with any of my schools?" These isolated compartments of the district office constrained leaders from sharing financial resources in order to support data-use initiatives across all of the district's schools.

When an initiative related to data use was unveiled, the compartmentalized nature of the districts shaped the success of its implementation. A new policy in Sequoia that came from one sub-unit – e.g., one division purchased Flip cameras for each school to record best teaching practices – was seen as the "baby" of the one department, and it did not receive widespread support within the district central office and throughout the schools. In contrast, Sequoia's PLC initiative to support teachers' collaboration around common assessment results, which was led by the superintendent and involved all departments in the district, met with a greater level of commitment. The compartmentalization of the districts constrained their mobilization of resources for data use when there was not a concerted effort from the top to connect and unite the departments.

Finally, Chapter Five found that CMOs had greater commitments to technology for data use, in particular highlighting their data management systems that brought together many streams of data for advanced analysis, and again, structural differences may help explain this pattern. The departmentalization in the two districts encouraged siloing of data streams. For instance, in Mammoth, the director of assessment had access to student and school achievement data, while "the budget part, that is pretty much standalone in the business [department]." The fact that data were separated by department may have made it difficult to combine them within a single data management system, either because of the "red tape" required to merge files across departments or out of deference to the real or imagined separation between business and instructional departments.

In contrast, as mentioned earlier, educators in the two CMOs self-identified as "thinner," "leaner," and "less bureaucratic." Educators in both CMOs believed this structure enabled resource mobilization, thereby contributing to the "significant" commitment of data-use resources in CMOs, as noted in Chapter Five. For instance, the CEO of Yosemite described the role of the home office with data and instructional initiatives as "providing schools with best practices, a menu of supports, opportunities, and guidelines, and almost nothing is mandated. . . . We've done a lot of 'you get to choose.'" She further elaborated on this approach:

We've worked so hard to never be bureaucratic because we feel like that's been part of the problem. Bureaucracy gets in the way. We've noticed over and over that mandates don't get you any farther. Even if you do mandate it, it's not like it happens, so you might as well do it the hard way which is: "We think this might work. Here's the best thinking we've got to date on this. Let me put this in front you, teacher, principal, coach. What do you think?" And those stakeholders, buying in to that and talking what they think works, and then using all those incredible feedback loops… so that our materials are constantly growing and changing.

The flexible network design may help explain the significant commitment to data-use initiatives across all levels in the CMOs.
In both Yosemite and Yellowstone, the focus of the home office was less around the established authority of departments and more around organizing within to provide choices and options for schools within the network. For Yellowstone and Yosemite, leaders believed that the decentralized, network approach encouraged school buy-in and commitment to their data-use initiatives. Local control and choice, in their eyes, led to local ownership, increasing the effectiveness and endurance of the programs and resources to support using data for instructional change. They also argued this approach allowed for data-use policies to be constructed and revised to meet the needs of individual school communities. Another perceived benefit around the network structure was the systems’ flexibility and responsiveness to issues that “arose from the data,” particularly when it came to aligning and realigning resources at the system level. One leader in Yellowstone argued, “We are a lot more nimble. We can be responsive, we can change direction, remobilize and be responsive to data in a way that the comprehensive schools districts can’t. If we feel that we needed another half day with our kids, we can do that.” In her opinion, the network approach offered the potential for adaptation to changing conditions ! ! ! ! ! ! ! ! !! ! ! 155! and the flexibility to adjust resources at the systems level when necessary. Finally, through the network arrangement, school sites were encouraged to collaborate in their regional clusters, school groups, and cross-school teams, by grade and content area, all organizational routines in place to support data use (see Chapter Five). CMO educators believed their schools could and should turn to each other to discuss issues, analyze data, and share instructional strategies, rather than only receive such guidance from the home office. Instead of a tight control over information up and down in a hierarchy, the network form allowed for information around data and instruction to be more easily exchanged across schools. However, the CMOs’ network structure had its own set of challenges that constrained the systems’ abilities to respond to data in more subtle ways. In the districts, specialization by departments promoted staff expertise, creating a deep “backbench” of knowledge at the central office around curriculum, assessment, specialized content knowledge (e.g., special education coordinator) and other areas related to using data to inform instruction. The “lean” home office meant that CMO staff served as generalists who had to “wear many hats.” Consequently, the organizations at times did not have the wealth of content knowledge from which to draw to support schools. For instance, Yellowstone system leaders suggested that one area in which they struggled was in responding to low test scores of English language learner students, a particular challenge since few people in the organization had expertise in this area. Commitment to a thin back office also impacted the level of support and resources to schools. As a principal at one Yosemite school reported, “It can be frustrating, because [home office] thinks it is providing all this support, but they’re really ! ! ! ! ! ! ! ! !! ! ! 156! not. Some people are stretched so thin, they can’t do anything to really help you.” Both CMOs struggled with negotiating the tension between the thin, flexible structure with a push -- either internally or by outside funders -- to become more formal and standardized, particularly as the networks grew. 
Several educators in both CMOs expressed concern that the push to centralize and establish “standards operating procedures” for analyzing and responding to data and “non-negotiable” practices around instruction, curriculum, and assessments would move the systems towards a more top-down, hierarchical structure the charters were originally designed to avoid. Age and Size: Established and Fixed versus New and Growing Closely related to system structure was where it was in its growth trajectory and the size of the school system. The districts (with only slight variation) had maintained their size over the past few years, while the two CMOs had seen remarkable growth within a short period of time. Yosemite, for instance, had grown from 1 to over 30 schools over the course of ten years. The two traditional school districts were larger in terms of the number of students served (e.g., Sequoia’s 22,000 students compared to Yosemite’s 10,000 students), although the overall number of schools in each system was comparable (~30 schools, with Yellowstone as the exception with 13 schools). Impact on resource allocation. Age and size largely impacted how the school systems allocated their resources to support data use in schools rather than the data use itself. For the districts, their size and complexity compelled them to use a slow, top-down strategy for data-use initiatives when they wanted to reach all schools. One central office administrator shared that new initiatives to support teachers’ use of common assessments that came from her office required providing clear structures (e.g., standard operating ! ! ! ! ! ! ! ! !! ! ! 157! procedures, data analysis protocols) out to school leaders because “unless you roll something out with a structure, you can’t replicate it across the system as big as ours.” In contrast, the CMOs believed that their smaller size enabled their flexibility and responsiveness when mobilizing or reallocating resources in response to data at the system level. Explained one Yosemite vice principal, “We’re like a little tug boat, and we can kind of navigate through the waters. Whereas you have [the large local school district], it’s this big hulking Titanic. They can’t stop as easily.” Size, like structure, seemed to enable the CMOs in their ability to react quickly to assign or realign resources as needed. The second factor related to size was where the organizations stood in terms of their age. The two districts had been at relatively fixed sizes for years, whereas the two CMOs had spent the past 10 years growing their networks of schools. On one hand, this meant that the two CMOs had the benefit of designing their school systems and structures from the ground up. Reflecting on the programs in place to support data use, Yosemite’s Vice President of education noted: “They’d be incredibly difficult to do in places that don’t do them now because we all know how difficult change is. But we weren’t overcoming that. We started from nothing, and if you get to build a system from the ground up, you can build it really well.” The fact that the CMOs were creating new programs and systems may have led to less resistance in their implementation as employees were not being asked to make changes to their practice. This “ground-up” design may also have contributed to their close alignment of data-use initiatives, a pattern also noted in Chapter Five. For example, as Yosemite rolled out its data management ! ! ! ! ! ! ! ! !! ! ! 158! 
system, they also built in the complementary human capital resources – professional development and training for the system in addition to Data Directors at each school site. The push for growth in the future -- a feature of both CMOs -- coupled with a limited budget and an already lean home office, was also part of the rationale behind the investment in more sophisticated, automated data management systems and tools for both school and system leaders. As the Director of Data Analysis and Assessment at Yellowstone articulated, Because of the economic crisis, we are being hit hard just like everybody else, and we are also continuing to grow schools. So, it’s becoming more and more challenging to be able to do all the analysis that we want with just two people, and we don't have the budget to grow as our schools grow. So as a result, there are certain school level analysis that we do that we’ve always done manually, and we’re going to have to figure out how to automate it. The challenge is to automate not only for teachers, but for us as well, so that we’re able to do the work that really matters and has high impact. However, the rapid scale-up in CMOs led to what one Yellowstone staff member called organizational “growing pains.” Starting a school organization from scratch meant that there had been “trial and error” in design of resources along the way; one system leader suggested it was like “flying a plane while you’re building it.” Despite efforts to be strategic and forward-thinking in their planning, decisions to develop new data-use initiatives sometimes occurred retrospectively, that is, in response to the multiple new stresses on the system that followed the expansion of the network. Yellowstone’s Director for Technology explained that the impetus for their integrated data management system was a recent investment, as in prior years, their data management systems for grades, CST scores, benchmarks, finances, etc. had all been separate. He explained: Why didn’t this happen from day one? I just don’t think that was the main focus. The main focus was, let’s develop really high quality schools and then use data at that level. When you have a small, 100 student school, you don’t need a ! ! ! ! ! ! ! ! !! ! ! 159! comprehensive system. You can remember everybody’s name and scores longitudinally. But then the amounts of data, they compound quickly. As things get moving, it all becomes a second thought until one day, you hit a wall, and you’re forced to revisit. Bringing new schools online meant additional demands on the home office staff (e.g., identifying new streams of funding; finding a new facility; hiring new teachers), as well as the need to build capacity at each new site around data use (e.g., creating norms of data use with the new school leader/staff, professional development and training). In the meantime, the home offices of both CMOs – for financial reasons as well a commitment to stay “lean” – had not grown in proportion with the new schools, making support to schools in relation to data even more challenging. As one staff member responsible for supporting schools’ access to the data system at Yosemite shared, “There’s a lot of anxiety around the growth plan. We just added four more schools, but I couldn’t add another headcount to my team. 
I’m not sure how this is going to work.” Although findings indicated that the two CMOs had made “significant” investments in resources to support data use, the continued pressure for growth, coupled with financial constraints, raised questions around the long-term sustainability of this work. Degree of Regulation and the Collective Bargaining Agreement Deregulation, a key tenet of the charter movement, also played a role in district and CMO data use. In particular, the presence or absence of a collective bargaining agreement was an important factor that shaped the types of data emphasized, how data were used, and the ability of the school systems to mobilize their human capital resources. Neither of the two CMOs had a collective bargaining agreement with a local teachers’ association, while the two districts both did. Of particular note were the constraints that the CBA placed on districts. For instance, both CMOs had performance ! ! ! ! ! ! ! ! !! ! ! 160! management systems in place that incorporated standardized assessment results with meaningful consequences for performance review, an organizational strategy prohibited by the contract in Sequoia, Mammoth, and many other districts nationwide. Impact on data use. One notable pattern in Chapter Four was the high levels of attention by all educators to the high-stakes assessment data and the related achievement- accountability model of data use. Contributing to this pattern for CMOs was the belief that teachers were more directly tied to, and responsible for, their standardized student test results (in particular, the CSTs and benchmarks/interim assessments designed to model CSTs). One Yosemite teacher felt that student achievement data were emphasized as an important measure of a teacher’s effectiveness in ways that would not have happened under a CBA: “No one here has tenure. So it is your data. You have got to show the data -- are you being an effective educator? The data will prove it or not.” Consequently, another teacher at Yellowstone noted that the lack of CBA made her more responsible to her students’ test results, and therefore she paid more attention to them: “I am more accountable for what I do than other [district] teachers, and it lights a fire under me. They could fire me whenever they want because of the test results, it’s a big deal.” Additionally, both districts’ CBAs had explicit rules concerning the details of formal teacher observation. Mammoth’s contract between the district and teachers’ association had close to twenty full pages that outlined the regulations around observations and formal evaluations, including their frequency, specific procedural details, and the types of data that could (or could not) inform evaluation decisions. In contrast, the CMOs did not have such a detailed, formalized, and binding document, a ! ! ! ! ! ! ! ! !! ! ! 161! likely contextual condition that enabled their high levels of emphasis on observation data of teacher instruction noted in Chapter Four. Impact on resource allocation. The CMOs also believed that this deregulation gave their schools and school systems more freedom to allocate their human capital resources in response to data (as noted in Chapter Five). 
“When you have the ability to hire and fire at will, as scary as it sounds,” one vice principal at Yosemite shared, “it’s the starting point to be able to break the rules a bit, and be more flexible and agile.” It also gave them the ability to develop a range of investments in human capital not usually offered as part of the traditional step and ladder contract, such as new teacher career options, including positions as a lead teacher, mentor teacher, induction coach, and others. The absence of a CBA also allowed the CMOs to quickly make changes to organizational routines in response to insights from data. One notable example arose between Yellowstone and Sequoia. In one Yellowstone school, the ELA department analyzed winter benchmark results and decided that, as their instructional response, teachers would change their prep periods so that they would have more available time to spend in each other’s classes, supporting students one-on-one. Between administering the exams and adjusting the schedule, they reported the decision process took two weeks. In contrast, the president of the Sequoia’s teachers’ association reported that the union and district had spent over two years negotiating and renegotiating the language in the contract around dedicated teacher meeting time to be used for collaboration around common assessments. At the time of this study, this negotiation was still ongoing. ! ! ! ! ! ! ! ! !! ! ! 162! However, educators in the traditional districts stressed the importance of the collective bargaining agreement as a mechanism for protecting the rights of teachers. In both districts, the expectations around new data-use policies and initiatives were high, and teachers expressed feeling “overwhelmed.” For instance, the Sequoia union representative argued that in some schools, completing data analysis protocols and other paperwork had put a strain on teachers’ time: They’re being worked too hard, and on that, we’re pushing back.… The teachers are about to snap. It’s worse than it’s ever been this year, and it’s not about the classroom and teaching. It’s all the rest to prove that we’re teaching. That’s where it’s gotten bad, all the documentation and the paperwork and the data tracking to prove we’re doing our job correctly. Sequoia’s union president also noted that while demands on teachers to use the new curricular maps and data to inform their instruction had increased, professional development and training from the district to effectively do this work had not kept pace. The contract then provided a starting point for the conversation between the teachers’ union, site leaders, and district office administrators about how to best manage and prioritize the demands on teachers’ time. CMO teachers expressed similar feelings of being overwhelmed, but without a CBA in place, it could be difficult to balance these pressures. Explained one teacher at Yellowstone, “Work-life balance without a union is a little bit tough, because really no one is telling you that you have the right to go home, even if your worksheets aren’t done.” CMO system leaders recognized how too many responsibilities and too much pressure on test results could, and had, lead to “teacher burnout” and “stress,” and they aimed to best “meet the needs of teachers” without the involvement of a teachers’ union. ! ! ! ! ! ! ! ! !! ! ! 163! 
Decision-making Autonomy: Top-down Versus Decentralized

In all four school systems, the central offices supported schools and provided resources, while individual sites were expected to be responsible for student achievement. However, by design, a greater level of autonomy and decision-making rights were distributed to the school sites in the CMOs, particularly in the areas of budgeting, staffing, and curriculum development. (Footnote 12: The one exception was a number of schools in Sequoia whose teachers, through a school turnaround initiative, had developed a memorandum of understanding with the teachers' union and district to allow for greater school-level autonomy.) Impact on data use. The autonomy over curriculum and assessment in CMOs may explain the emphasis on certain kinds of data. As noted earlier, in Yellowstone, teachers designed their own curricular maps, and the home office individualized benchmarks to match; consequently, Yellowstone was the only school system where teachers expressed a "high" level of use of their benchmark results to make instructional changes (see Chapter Four). With ownership and control over their curriculum and assessment design, teachers may have been more likely to feel "ownership" over the results and use those assessment results to inform their instruction. However, giving teachers autonomy for their instruction, curriculum, and assessments created a tension when system leaders looked to centralize or create greater consistency across the sites, an issue that became more salient as the networks grew. For instance, the chief academic officer at Yellowstone commented that up to that point, each teacher had developed her own curriculum map, and the home office had individualized benchmark assessments to match. However, as the network grew, it was not only more difficult logistically to do this work, but the organization now had a pool of "best practices" to use instead of depending on new innovations by teachers:

We are very small, very lean organization, and I am not sure we can maintain the bandwidth to do that… We are finding is that our item bank pool is not deep enough to really get enough individualization and to put through different forms of the exam. We have enough data that show teachers who have been successful to use their scope and sequences, so let's use theirs. At some point, when is it just innovation for innovation sake? At some point, you need to put the stake here in the ground around what works.

Impact on resource allocation. Allowing local sites the autonomy around staffing, budget, and curriculum created additional responsibilities for already busy school leaders and teachers in charter schools. However, when it came to making classroom or school-level decisions based on data, educators at the CMOs believed that the individual school sites had greater flexibility to make adjustments in their allocation of resources.
One educator noted that at the school sites, "the principal has autonomy and responsibility for the budget so they can use their resources however they need in the classrooms." For example, one Yellowstone principal noted he and his staff had changed their school's schedule to implement a Saturday tutoring program for those students who had low second benchmark results, while a principal at Yosemite explained that he used his budget discretion to purchase a set of textbooks for a teacher who had determined, based on her benchmark results, that her students needed more practice reading nonfiction texts. Benefits also included a greater reported level of ownership when decisions around resource allocation were made at the school sites. On the whole, the increased levels of site-based decision making around resource allocation were a likely factor that contributed to the CMOs' "significant" level of buy-in and commitment to their resources, as noted in Chapter Five.

Culture: Mission-driven Versus Compliance-oriented

Establishing a culture of data use was a critical component across all four school systems in laying the foundation for this work. It involved developing explicit norms and expectations regarding data use, modeling by school and system leaders, thoughtful commitment of resources, and constant enactment at both the site and system levels. Beyond this similar culture across all four, the CMOs and districts differed in other aspects of their organizational climates: the CMOs had an explicit mission around college-readiness, while the districts' culture was more compliance-oriented. Impact on data use. Recall from Chapter Four that CMOs had higher levels of use of college-ready indicators than traditional districts. The college-ready missions of the two CMOs played a large role in explaining this pattern. Educators in both CMOs were passionate that "getting our kids" to college was a priority and that it needed to occur "by any means necessary." In their eyes, collecting and analyzing data on college-ready indicators were essential steps to ensure progress toward the organization's vision. The districts' missions, on the other hand, were more general in nature, with goals to "develop lifelong learners" and support "productive citizens in an ever-changing world." Leaders from neither district mentioned specific metrics for measuring success towards these goals, instead largely focusing on district and school AYP, API, and program improvement status. Impact on resource allocation. The CMOs' highly internalized college-ready missions also had clear implications for the organizations' resources. College-readiness data were a major focus in the development of technology and tools, and there were organizational routines in place to support their collection and analysis. In terms of human capital, one CMO leader said that a "certain kind of person" was attracted to working in such a focused, mission-driven environment. These teachers were characterized as "young," "idealistic," "perfectionists" when it came to their instruction and student data results, and since they were new in their careers, they incorporated the "ethos," "habits," and "institutionalized norms" around data use and college readiness from early on.
For the Yosemite’s VP of Education, the issue was not in establishing a strong culture of data use; instead, she worried that the “overachievers we hire” took the data results too personally and end up disheartened by it, a particular concern as the CMO moved to roll out a evaluation system that linked student achievement results with instructional observation data. She elaborated: What we haven’t figured out yet is how not to crush people’s spirits with the data, but how to focus on your own improvement. The best way anybody that gets truly better at what they do is by analyzing their mistakes. The cultural piece of normalizing that and making it not make you want to cry, and making it not make your stomach hurt… It’s a huge piece that we want to get better at. As noted earlier, given the top-down nature of the roll-out of data-use initiatives from the central office out to schools, the organizational climate in Sequoia and Mammoth was one that focused on monitoring and compliance. As multiple site and system leaders in Sequoia mentioned, the motto at the district was “what doesn’t get monitored, doesn’t get done.” The staff at the central offices was tasked with communicating that organizational resources (i.e., standardized test data analysis protocols) were intended to be tools for support and accountability, instead of burdens or intrusions into instruction. Sequoia’s union president expanded upon the need to strike this balance in the district based on her work with teachers in the district: I truly believe that if you use data to inform your instruction, and you see success with it, people would use data. Data works. But if I’m a teacher, constantly ! ! ! ! ! ! ! ! !! ! ! 167! turning things into you [the principal], and it feels like that’s how you keep tabs on me, it’s a whole different experience, and it does not help my instruction. It makes me scared of the data, because I don’t want to give it to you. Paradoxically, while central office administrators believed in allocating resources to monitor of data use, this climate ended up constraining this work. Educators felt they were simply responding to system mandates rather than being empowered to make decisions based on data. Environmental Context Local Political Circumstances System leaders in all four school systems reported the importance of garnering political support for their goals and data-use initiatives. For the traditional school districts, this involved an elected school board; a board-appointed Superintendent and her cabinet; and the local teachers’ association. The elected school boards had basic fiduciary duties, responsibility for establishing overall direction, and oversight of the superintendent. The superintendents were responsible for the overall financial, academic, and administrative operation of the district. Finally, the role of the president of the teachers’ association involved negotiating and enforcing the teachers’ contract in such areas as evaluation, compensation, and work day/year. For major data-use initiatives, all three stakeholder groups were involved in policy design and implementation. For CMOs, garnering political support involved neighboring school districts and other local charter schools. Impact on data use. The districts’ governance arrangement had implications for their data use. In both districts, the boards had emphasized improving CST, API, and AYP scores in order to move out of program improvement. Noted in Chapter Four, this ! ! ! ! ! ! ! ! !! ! ! 168! 
system level goal set the stage for the achievement focus on standardized test results at the district and site levels. In contrast, the CMOs were positioned in a local political debate around the value of charter schools specifically and school choice more generally which led to a positioning model of data use. Yellowstone’s CEO felt that the dialogue around charter schools had recently turned negative, and “the attacks that we’re experience have made us even more vigilant about our APIs and CSTs being very, very strong…. They will be used as a weapon to speak badly about us, otherwise.” She stressed that this political pressure encouraged a focus on CSTs as a way to establish legitimacy in the larger public school arena as well as to distinguish themselves as “high quality” when compared to the rest of the field of charter schools. All of these local political conditions contributed to the high level of attention on standardized assessments, as well as the pressure to engage and support an achievement-accountability model of data use, both trends noted in Chapter Four. Impact on resource allocation. For the districts, the three stakeholder groups involved in governance were accountable to different constituencies – the school board to the voting public, superintendent to the school board, and teachers’ union to its members. Consequently, different interests, agendas, and values were represented in policy decision making. The instructional coaching initiative in Mammoth was a good example of how the different interests from the school board, superintendent, and teachers’ association shaped a data-use initiative. The coaching program began five years ago as part of the balanced literacy initiative supported by the previous superintendent. “When we first initiated the Balanced Literacy approach, it was a pilot program,” the union president ! ! ! ! ! ! ! ! !! ! ! 169! explained, “it was slow. Teachers were slowly brought into it by modeling, by getting to watch model lessons.” When the current superintendent came to Mammoth three years ago, she changed the program so that coaches were expected to support all teachers at the school site. This past year, the instructional coaches were expected to develop and roll out curricular maps, pacing guides, and assessments. The changed policy received serious pushback from teachers. The union president continued: Last year, [there was] no time for pilot, no time to get the kinks out. … So in hindsight, I think we made some mistakes in the way of implementation, and some teachers have latched on to that. They see a mistake or they see a problem with the maps, and then “See, they [the instructional coaches] don’t know what they are doing. They’re not experts. They’re not curriculum writers. Who are they to be making these decisions?” The union president reported spending her time explaining and supporting the changes to the instructional coach initiative to teachers while also drawing the district’s attention when the new demands conflicted with teachers’ responsibilities as laid out in the collective bargaining agreement. At the same time, a school board election this past year had resulted in two new members on the board, and the dynamics on the school board changed to a more “hands- on” approach. In particular, the new board had questions concerning the costs of this initiative compared to its benefits on student achievement. 
When asked about the changes to the board, Mammoth’s superintendent explained: “There is a sense among [the new board members] that the coaches need to go back in the classroom…. The two don’t know anything because they just got on the board; they think they know everything about instruction. We have to educate them and let them know how important this role is to moving achievement forward.” ! ! ! ! ! ! ! ! !! ! ! 170! In both districts, the relationships among these groups were described as “positive” and “collaborative.” Still, Mammoth’s instructional coaching program – a human capital resource investment – was pushed and pulled by the different political groups with varying agendas and interests, making its day-to-day implementation complex and its future in the district less-than-certain. Similar to this story of Mammoth’s instructional coaches, the local political conditions may have made it challenging for other “significant” and long-lasting commitments of resources in districts (noted in Chapter Five). In contrast, system leaders from the CMOs argued that their independence from this type of governance structure allowed them, in their eyes, to maintain “organizational unity” and “consistency” over time when it came to data-use goals and initiatives. Yosemite’s chief academic officer argued that the key difference between the two kinds of systems was the lack of direct elected involvement: In districts, you’ve got an elected school board and a superintendent [who] comes in every 4, 5 years. That person and those people on the board are responsible for that political will, that incredible vision, mission consistency that’s needed. But it’s not in their best interest to be consistent. They need to make a name for themselves. They need to trash the guy who came before, have their own legacy, so they can move up to a different political position -- mayor, state assembly, whatever it is. Without a consistent political mission, vision, political will, you can't have this ethos that is at the heart of [our CMO]. The ability within a school system to mobilize its resources to support data use was shaped by the presence or absence of different stakeholder groups and the roles they played in system decision making. State, District Financial Climate In all four systems, the recent statewide budget cuts (“the disaster that is the California education budget!” one educator lamented) were highlighted as a critical ! ! ! ! ! ! ! ! !! ! ! 171! constraint for running the school systems generally and investing in data-use initiatives specifically. CMO leaders described feeling additional financial strains beyond the state budget cuts felt by all schools. By one account, in California in 2012, there was a funding inequality of at least 7%, or $395 per student for categorical funding, between charter schools and their traditional school counterparts, a gap which exceeded $1,000 per student in some charter schools (Estrada & Kuhn, 2012). This funding gap widened when considering facility funding, federal funding, or local revenues. Additionally, the CMOs reported having costs distinct from traditional public schools (e.g., longer school days or years) or stand-alone charter schools (e.g., home office staff), creating a unique set of funding needs. 
While this environmental condition did not have a discernible direct impact on the data-use patterns, it did cause educators to consider where they would find "the biggest bang for the buck" when it came to investing in organizational resources to support data use. Impact on resource allocation. Not surprisingly, financial constraints influenced system leaders' choices about which resources to invest in (see Chapter Five). For instance, in Sequoia, the superintendent saw the investment in training for professional learning communities as a front-end cost; once fully implemented, the PLC model would be self-sustaining, providing professional development from within the system at little to no cost. As noted earlier, several in the central office in Mammoth described the pressure to articulate how the instructional coach program was worth its expense. Similarly, CMOs needed to be strategic with how they invested their organizational resources. As one CMO central office administrator shared, "We have a ton of work to do around figuring out how to survive financially. Teachers want raises, we have big ideas for new programs and projects…. It's so real, what little money does." For both CMOs, the limited financial resources led to heavy involvement by philanthropic partners to support data-use initiatives. For example, two large national foundations funded Yosemite's advanced data management system as well as the ten-person support staff. Yellowstone and Yosemite's teacher effectiveness initiative – with its focus on instructional observation data connected to a new teacher compensation system – was part of a multi-million dollar grant as well. One CMO leader shared, "Working with foundations has helped us launch several projects related to using data to inform organizational decisions, programs we definitely couldn't do without their help." Working closely with the foundations had its own restrictions and limitations. For one, CMO leaders reported feeling that some foundations used funding as a "carrot" to encourage the CMOs to engage in a new project in one of the foundation's areas of interest. Then, once the grant was awarded, the money was frequently restricted to that purpose and could not be used to meet other needs. Having projects driven by grants also created internal pressure based on the life of the grant; as one member of Yosemite's central office explained, "Yosemite is very go, go, go, when the money is there… The timeline is always tight, the funding is always tight, the resources for people's time is [sic] always tight." She worried that this pressure to meet deadlines meant that there was not time to document "lessons learned" or invest in the longer-term "organizational memory behind things." It also meant that programs or projects risked termination once a grant ended. At the time of the study, Yosemite's technology team was exploring how it could monetize its data tools and technologies, services, and products to other schools and districts to become "self-funded" once its grant ended. New demands for data collection also occurred based on foundation interest and strings attached to funding. In their grant applications and progress reports, foundations often asked for data on student demographics/achievement, statistics from comparable local public schools, and other idiosyncratic data analysis.
Fulfilling these data requests was "time consuming because each funder wanted to see it in different ways," reported the head of Yosemite's technology team. This demand for data as part of grant writing and reporting was another reason behind Yosemite's move to collect multiple forms of data in a central, unified data management system.

Federal and State Accountability Systems

Both the federal accountability policy NCLB and California's accountability system had very strong influences on data use across all four cases, particularly in their relation to the dominant emphasis on high-stakes standardized assessment results and the related achievement-accountability model of data use outlined in Chapter Four. Impact on data use. Described by one educator as "the ghost that is chasing us," the AYP and API goals of schools and systems played a critical role in data use in all four systems. Numerous educators pointed to CST scores (and other assessments designed to model the CSTs) as key aspects of their data-use practice because of their link to AYP/API scores and program improvement status. Further, use of these forms of data was most often characterized by an accountability-achievement focus; for instance, teachers and school leaders focused on identifying and moving their "bubble" kids from basic to proficient (see Chapter Four for further details). Impact on resource allocation. Consequently, the resources school systems invested in were designed to collect these kinds of data and support achievement-focused analysis. As the director of assessment in Yellowstone explained:

We live in a [federal and state] policy context where we have rewards and consequences. Consequently, many of our analysis products are informed by the policy context. For example, in order to increase your API, it's not only about getting kids to the proficient or advanced, but it's also increasing your API. The way you increase your API is you change your distribution, right? So at a school level, there's a school analysis tool that looks at a school's ability to be able to move kids up or move kids up distribution levels.

Recall that one of the trends from Chapter Five was that resource design mattered for data use. In this case, the policy pressure affected the design of Yosemite's data management system, tailored to facilitate data analysis of AYP/API, CSTs, and related benchmark and interim assessment results.

Authorizers

For the two CMOs, the accountability provided by their authorizer(s) was another force influencing their data use. Impact on data use. Many educators in both Yellowstone and Yosemite voiced the belief that, as a system of schools based on 5-year charter contracts, their schools faced a legitimate threat of closure if their test scores on CSTs were poor. As one central office administrator at Yosemite argued, "The CSTs are our 'bread and butter,' the reason we can stay open." Yellowstone's CEO expressed this sentiment as well: "There is a fear factor in the charter movement that they might shut me down. Nobody ever says in the local school district, 'I'm afraid they will shut me down as an individual or as a school.' However, our life with our authorizer depends on these scores being good." As such, during times of charter monitoring or reauthorization, charter educators felt they had a pressing reason to prioritize CST data with an accountability-achievement focus, as noted in Chapter Four.
One Yellowstone principal explained how this pressure affected how he prioritized data: CST scores, AYP, API are all demands, especially when we are up for renewal. The way charters have been set up, we’ve exchanged our autonomy for increased accountability. So for AYP, especially when you talk about English language learners (ELs) on CST, it’s absolutely, absolutely a huge thing. We’ve barely made it with ELs in the last couple of years, barely making the threshold. With it always going higher and higher, you absolutely have to focus on what we are going to do with that group. Impact on resource allocation. Working with authorizers affected CMO data use in other ways. As a statewide organization, Yosemite had charters with 10 different sponsoring districts. During periods of monitoring or charter renewal, each authorizer had its own data demands which required different types of data presented in a host of forms, “a real pain in the butt!” said the person responsible for overseeing Yosemite’s charter renewal processes. These demands drove “a lot of the need to centralize data” in their new, comprehensive data management system so that it was easier to “respond quickly and accurately.” Similarly, the CEO from Yellowstone expressed the fact that their authorizing district had required more “paperwork and reporting on minutia,” requiring them to adapt their data collection systems accordingly. Market/Community Accountability Finally, market and community accountability shaped data use and resource allocation for both CMOs. Impact on data use. As noted in Chapter Four, educators in CMOs reported paying greater attention to college-ready indicators than their peers in the traditional school ! ! ! ! ! ! ! ! !! ! ! 176! districts. However, college-ready data were not only used internally by educators to evaluate if they were meeting their school’s mission, but externally as well. As schools of choice, the CMOs displayed the college-ready metrics outward into the educational market to attract parents and families to their organization. Data displays of college-ready metrics established the CMOs’ “reputations” and were included in recruitment materials, the organizational websites, and as part of the parent/student handbooks. As one of Yellowstone’s central office administrators explained: “To put kids in the seats, we use our graduation rates, SAT scores, college acceptance rates to attract families to our schools. We want to communicate that going to college is part of the ‘[Yellowstone] brand.’” The college focus helped position these schools in the educational marketplace, and the metrics signaled to parents and families the success of their model. In Chapter Four, this type of data use was termed the signaling model of data use. At times, the pressures from the market ran in contrast to the demands from federal, state, and/or authorizer accountability systems. In other words, the CMOs’ interest in meeting their mission and the focus on student learning conflicted with the attention on high-stakes standardized tests and the achievement-focus associated with them. Almost all teachers interviewed at Yellowstone and Yosemite pointed to this tension. One teacher argued, “I feel like we’re trapping these students into a test; we’re not truly allowing for discussion to take place. So I try to have Socratic seminars, get these kids to talk. How would we be a college prep middle school if I didn’t do this? Because that’s what college is and test prep is not.” Impact on resource allocation. 
Similarly, system leaders said that this conflict between accountability and mission had made them think carefully about how they invested their resources around data use. One school leader at Yosemite explained that she critically considered whether she should use professional development time to help teachers plan from CST results or teach them how to develop and use rubrics for writing and other assignments. As the work with the college-readiness initiative moved forward, Yellowstone's CEO similarly expressed concern for this balance in the future:

We're really beginning to work in earnest around college readiness -- to pull apart what do our kids have enough of, what are they lacking for college -- and how we are going to continue to do well on the CSTs and the API. We have to actually determine, what are the attributes that the students need to succeed in college definitively? How are we going to measure them? How will we collect them? How do we support their development in our students? I am hoping none of them will come in contradiction to the choices we've made to do what we have to do to score well in the CSTs.

Summary

Table 6.2 below summarizes how these organizational and environmental conditions contributed to the patterns of data use. Some conditions were universally relevant across systems; for instance, federal and state accountability systems led both CMOs and districts to place a high value on standardized test data and engage in behaviors described as the accountability-achievement model, as noted in Chapter Four. Other organizational factors were applicable to only one type of system. For instance, authorizer accountability and foundation involvement shaped CMOs' data use, but not that of districts.

Table 6.2
Linking Data-use Findings with Organizational and Environmental Conditions
(Chapter Four data-use findings, with associated conditions for charter management organizations and traditional public school districts)

Types of Data
High level of use of high-stakes state assessment data across all systems
- CMOs: Accountability: State/federal policies; Accountability: Authorizer; Regulations: Absence of CBA; Local politics
- Districts: Accountability: State/federal policies; Local politics
Misalignment between site, system educators concerning role of classroom, system data across all systems
- Structure
Greater use of teacher observation data in CMOs
- CMOs: Regulations: Absence of CBA; Financial climate: Foundation involvement
Greater use of college-ready indicators in CMOs
- CMOs: Culture: Mission-driven; Accountability: Market/community

Models of Data Use
Achievement-accountability model prevalent in all systems
- CMOs: Accountability: State/federal policies; Accountability: Authorizer; Regulations: Absence of CBA; Local politics
- Districts: Accountability: State/federal policies; Local politics
Student learning model prevalent
- Culture: Mission-driven; Decision making: Decentralized*
Instructional reflection model
- Financial climate: Foundation involvement
Bureaucratic-compliance model
- Accountability: Authorizers; Structure; Culture: Compliance; Size: Large
Signaling model
- Accountability: Market/community
Positioning model
- CMOs: Accountability: State/federal policies
- Districts: Accountability: State/federal policies

* Examples of student learning similarly occurred with traditional school districts (specifically Mammoth) and were connected to the presence of key human resources, i.e., instructional coaches or teacher collaboration.
As such, organizational/environmental conditions were more indirect in explaining this data-use finding.

Below, Table 6.3 similarly offers a summary of how organizational and environmental factors shaped resource allocation in CMOs and districts. Similar to the findings above, some conditions (e.g., the state's difficult financial environment) had a similar impact across all systems. The presence or absence of a collective bargaining agreement was a condition that played a large role in school systems' flexibility around allocating resources, particularly when the resource stipulations (e.g., data-driven collaboration time for teachers) were enumerated in the contract.

Table 6.3
Linking Resource Allocation Findings with Organizational and Environmental Conditions
(Chapter Five resource allocation findings, with associated conditions for charter management organizations and traditional public school districts)

Resource alignment and design were important
- CMOs: Structure: Network; Decision making: Decentralized
- Districts: Structure: Bureaucratic hierarchy; Decision making: Top down
CMOs largely had significant commitments to human resource investments
- CMOs: Structure: Network; Size: Growing; Regulations: Lack of CBA; Decision making: Decentralized; Financial climate: Foundation involvement
- Districts: Structure: Bureaucratic hierarchy; Size: Fixed; Regulations: CBA; Decision making: Top down; Financial climate
CMOs largely had significant commitments to technology/tools
- CMOs: Structure: Network; Size: Growing; Regulations: Lack of CBA; Financial climate: Foundation involvement; Accountability: Authorizers
- Districts: Structure: Bureaucratic hierarchy; Financial climate; Regulations: CBA; Decision making: Top down
CMOs largely had significant commitments to organizational policies, processes
- CMOs: Structure: Network; Regulations: Lack of CBA; Decision making: Decentralized
- Districts: Structure: Bureaucratic hierarchy; Regulations: CBA; Decision making: Top down

Based on these findings, there appear to be many influences—both enabling and constraining—affecting school systems, their data use, and their resource allocation strategies. Findings revealed that organizational factors – specifically, structure, size/growth trajectory, level of regulation, decision making, and culture – were significant influences, as were the local political circumstances. For districts, this meant the relationships among the school board, superintendent, and teachers' association, while for CMOs it involved "charter-related" politics involving local districts, their authorizers, and the larger state and national debate around school choice. The state's recent budget cuts influenced both types of systems, with CMOs turning to foundations to receive additional financial support for their data-use efforts. Finally, the state and federal accountability policies were the umbrella under which data use in all four systems occurred, with the CMOs' data-use patterns being additionally driven by authorizers and community/market accountability. The analysis offered here did not point to one particular influence that always enabled or constrained data use. Even an obvious resource, such as CMOs' access to philanthropic dollars, both supported and constrained data use, as noted earlier.
Although Tables 6.2 and 6.3 separate out conditions, these organizational and environmental factors were interrelated with each other as well as with the multitude of other overlapping pressures, constraints, and enabling forces in the immediate social context and the larger institutional field. One example of this variation was the prevalence of the student learning model of data use in one CMO (Yellowstone) and one traditional district (Mammoth). For Yellowstone, the autonomy and decision-making rights of teachers to design their own curriculum and assessments, paired with the strong belief that "authentic" learning assessments were necessary to fulfill the CMO's college prep mission, helped explain this finding. For Mammoth, the focus on student learning was tied to the school system's balanced literacy approach, which privileged a wide range of student assessments, combined with the commitment to instructional coaches to roll out this work with teachers. In both cases, site and system leadership played an important role in communicating these messages. Here, both systems, regardless of organizational type, reached the same end but through different means. This example, and others, suggests that data use in school systems occurred in a highly complex environment, with each individual system facing a different combination of factors that changed and evolved. These influences may have moderated one another or combined to have an entirely different impact. The final chapter highlights how the study's findings relate to and inform the conceptual model, originally presented in Chapter Two, and how these findings also build upon past research. Chapter Seven also offers policy implications and suggestions for improved practice. It concludes with a discussion of areas for future research suggested by this study.

CHAPTER SEVEN
CONCLUDING THOUGHTS AND LESSONS LEARNED

As argued in Chapter One, despite increased expectations for educators to use data for instructional improvement, little was known about how organizational and environmental conditions shaped this work. Using a conceptual framework adapted from knowledge management and organizational theories and empirical studies from education, Chapter Two outlined the data-use cycle and how key resources -- human capital, technology and tools, and organizational policies and practices -- are essential "preconditions" for this work. Subsequently, Chapter Three described the methods used to learn about how two different types of school systems engaged in this work. Chapters Four, Five, and Six presented the results of this investigation, offering patterns that emerged from within and across systems around data use, resource allocation, and the environmental and organizational conditions that constrained or facilitated this work. This chapter aims to interpret these findings in light of current research and to discuss what they might mean for theory, policy, and practice. It is divided into four major sections. The first provides a summary of the major findings of the study. The second connects these findings to three bodies of literature: knowledge management theory, data-use practice, and research on charter management organizations. Section three offers lessons for practitioners and policymakers. The chapter concludes with a discussion of unexplored issues, questions, and areas for future research.
Summary of Major Findings

Based on this exploratory study of data use in traditional school districts and CMOs, there appeared to be distinct similarities and differences between the two types of school systems. The data collected here did not point to one system, or one type of system, doing "better" in this regard. Instead, findings offered a far more nuanced understanding of how school systems of both types encouraged data use and instructional improvement. Despite the heavily contextualized nature of districts and CMOs, some general conclusions can be drawn (for a review of the iterative analysis process, see Chapter Three). First, educators in all four school systems faced six main types of data to inform literacy instruction: classroom, common grade, teacher observation, system, high-stakes state assessments, and college-ready indicators. Across all systems, there was a disconnect between school site and system educators concerning the use of classroom and system data, a cross-case finding that replicates the misalignment in evidence use noted by Coburn and Talbert (2006). Teachers and principals placed a greater emphasis on the classroom data to inform instruction, while system leaders on the whole pointed to system assessment results. While all four systems attended to high-stakes state assessment results, the CMOs had greater reported use of teacher observation data and college-ready indicators. Next, this range of data was used for multiple purposes, falling into six categories or "models" of use. There were three models of instrumental data use: accountability-achievement, student learning, and instructional reflection. Across all four school systems, the focus on the accountability-achievement model was prevalent, while examples of the student learning model were more common in Mammoth School District and Yellowstone CMO. In the other three categories – bureaucratic-compliance, positioning, and signaling models – data were used in ways other than to directly inform or support instruction. Looking across resource mobilization of all four systems, similarities arose. Different resources supported the data-use cycle at different leverage points, with design features of the resources shaping how data were used. For instance, the investments in human capital were particularly important in all systems for helping educators identify instructional responses from knowledge gathered from the data, while lack of alignment or integration of the resources constrained data use in all four systems. Using a three-stage model of commitment to resources, patterns in investment of resources for data use emerged. On the whole, CMOs stood out in their commitments to these data-use initiatives, investing in human capital, technology and tools, and organizational processes and policies. These dynamics of data use and resource allocation could not be understood without attending to the organizational and environmental setting in which they unfolded. When considering the "embedded context" for data use, organizational conditions, local political circumstances, state/district financial environment, and federal/state accountability demands were important contextual conditions that both enabled and constrained data use and resource allocation.
On the whole, the accountability demands on the two types of systems, whether from the federal or state governments, authorizers, or markets and communities, seemed to play a greater role in shaping the types of data used and the models of data use. In contrast, trends in resource allocation were more often explained by organizational factors, such as structure, size, and decision-making rights.

Finally, throughout Chapters Four, Five, and Six, there was variation not only between the two types of school systems -- district and CMO -- but also within the same type of system when comparing Sequoia to Mammoth and Yosemite to Yellowstone. In some cases, similarities in data use and resource allocation cut across organizational type; for instance, Mammoth and Yellowstone demonstrated a commitment to the student learning model of data use in ways largely absent in Sequoia and Yosemite. All in all, these findings suggested that designing a school system to encourage data use and instructional improvement is a complex undertaking.

Connecting Findings to Research and Theory

This dissertation has drawn on three bodies of literature and theory: knowledge management theory, empirical research on data use, and the practices of traditional school districts and charter management organizations. In this section, the findings from the study link back to inform and advance each scholarly area.

Advancing Knowledge Management Theory

In this work, knowledge management and organizational learning theories were used to develop the conceptual framework, which identified three “preconditions” for data use: human capital, technology and tools, and organizational policies and processes. This study extends this theory in several important ways. First, knowledge management has been criticized for lacking ample empirical evidence to test its assumptions, and much of the limited available empirical literature has been set within for-profit settings, restricting its application to not-for-profit organizations (Alavi & Leidner, 2001; Blair, 2002; Martensson, 2000). This study fills this gap in the literature by blending knowledge management with research in education to consider empirical questions via systematic, rigorous study.

Others have argued that knowledge management theory is more “prescriptive” than empirical, particularly when teasing apart the role of the internal distribution of resources (Martensson, 2000; Rasmussen & Nielsen, 2011). For instance, Becerra-Fernandez and Sabherwal (2001) suggest that knowledge management theorists overly assume that resources are “universally appropriate” (p. 23). The research here pushes back on this assumption. Across all four school systems, there was no single “recipe” for data use. In Sequoia, for instance, the investment in professional learning communities seemed to support teachers’ data use in ways similar (although not identical) to Mammoth’s instructional coach model. In Yosemite, data use was supported by a data management system in which educators at all levels could access each other’s assessment results, while the other three systems found alternative solutions for making data public. Each system and school approached resource allocation decisions differently, in ways that made sense based on the goals each wanted to achieve.
Each organization had its own strengths (e.g., the long-term dedication to the coaching/PLC models in the districts and the inter-network collaboration of the CMOs) as well as challenges (e.g., restrictions on human capital decisions due to collective bargaining agreements in districts, compared to the “threat of burnout” in CMOs stemming from the lack of contractual protections for teachers and leaders). As predicted by contingency theorists, the given end result – using data to support instructional improvement – could be equifinal, that is, reached by different organizational pathways or trajectories rather than a single, cause-and-effect course of action (Drazin & Van de Ven, 1985; W. R. Scott, 1998).

Further, knowledge management theorists have drawn attention to the various factors that constrain or facilitate data use and knowledge sharing but have neglected to consider how these elements work together or conflict with one another. The analysis here suggested that alignment and integration of resources with one another and within the larger organizational context were critical. Complexity within systems and natural permutations of organizational behavior and design to support data use seemed to be the norm rather than the exception.

Additionally, different “preconditions” or resources for data use served different roles. While organizational policies and processes were necessary to “set the stage” for data use, time for collaboration, standard operating procedures, and rewards and incentives were not sufficient for the work to occur. Instead, a range of investments in human capital – knowledge/skills development, teacher collaboration, coaching positions – seemed to play a greater role in enabling data use, particularly in bridging the gap between gaining knowledge from data and deciding upon an instructional response. Pfeffer and Sutton (2000) term this the “knowing-doing” gap to describe situations in which employees within organizational settings fail to act upon what they know or believe. This finding parallels recent literature suggesting that data use is not a self-evident process; it inherently involves sensemaking and benefits from social interaction and collaboration (Little, 2012; Spillane, 2012; Weick, 1995; Weick, Sutcliffe, & Obstfeld, 2005). As Weick (2002) elaborated,

People need to act in order to discover what they face, they need to talk in order to discover what they think, and they need to feel in order to discover what it all means. The “saying” involves action and animation, the “seeing” involves directed observation, the “thinking” involves the updating of previous thinking, and the “we” that makes all of this happen takes the form of candid dialogue that mixes together trust, trustworthiness, and self-respect. (p. 9)

Still others have criticized the knowledge management literature as being overly focused on internal design mechanisms with less attention to environmental conditions (Malhotra, 2004; Thompson & Walsham, 2004). In this study, however, organizational and environmental conditions were critically important for understanding how data use was constrained and facilitated. Blending knowledge management with other theoretical traditions is beneficial in this regard. An “open systems” perspective suggests that the environment surrounding and permeating organizations plays an important role in shaping organizational behaviors and structure, and vice versa (W. R. Scott, 1998).
In this study, largely due to the state and federal accountability systems, all four systems heavily emphasized the use of high-stakes assessment results (at the expense of other forms of data) and had designed their organizational resources to support this model of data use.

Finally, understanding these findings through an institutional lens provides further insight beyond what knowledge management offers. Institutional theory argues that organizational structures and practices are primarily a function of established norms, rules, and expectations operating within a larger environmental field, and that isomorphic pressures narrowly define what it means to be an organization within the field, limiting the field’s diversity (DiMaggio & Powell, 1983; Jepperson, 1991; Meyer & Rowan, 1977; W. W. Powell & DiMaggio, 1991; W. R. Scott, 1998, 2001). Organizations may face coercive, mimetic, and normative environmental pressures that cause them to resemble existing institutions within their organizational fields, primarily out of a desire for legitimacy. This study found evidence of all three types of environmental pressures on data use. First, educators within the two CMOs expressed a great desire to collect and use data to support student learning and preparation for college. However, coercive isomorphic pressures (i.e., the federal and state mandates around AYP and API) pushed the CMOs to engage in the accountability-achievement model of data use, at the expense of their college preparation missions. Similarly, notions of classroom and system-level data varied among actors at the classroom, school, and system levels regardless of organizational type, suggesting there may be normative pressures on different educator groups when it comes to using data. Mimetic isomorphism suggests that during times of uncertainty, organizations look to peer organizations with high levels of legitimacy and model their structures and practices. Despite their original commitment to the missions in their charters, CMO leaders reported needing to keep AYP and API scores high as a way of proving their “quality” when compared to other charters and traditional public schools. Informing the knowledge management literature with other theories that offer a wider lens on the demands facing an organization promises to be an important direction for further theory development.

Informing Data-use Research

Advocates argue that using data is a key component of educational improvement in schools, and of late, policymakers and practitioners have poured millions of dollars into trainings, data management systems, and other initiatives to support this work. Recent data-use efforts include incorporating student assessment results into teacher and principal evaluation and compensation systems (Sawchuk, 2012), as well as predictive analysis as part of “early warning systems” that identify students at risk of failure using the same methods employed to determine car-insurance rates and credit scores (Davis, 2012; Sparks, 2012). These and other data-use initiatives show little sign of losing speed or political support. However, this study demonstrates that data-based decision making is a complex endeavor for educators across system levels and across types of school systems.
Using the embedded context of data use as a frame, the findings of this study confirm that the school system as a whole plays a critical role in supporting schools and educators in using data, regardless of whether the system is traditional or charter. Organizational and environmental conditions shape, and are shaped by, the types of data, the models of data use, and the resources allocated for this work. While others have noted the importance of the federal and state accountability systems in today’s data use (Alwin, 2002; Ingram et al., 2004; Means et al., 2010; J. L. Peterson, 2007), this study provides insight into the tensions that arise when educators are faced with multiple forms of accountability, whether federal, state, market, community, or performance-based. This study also points to conditions heretofore unexplored, such as the size and age of a school system, the role of the collective bargaining agreement, external political pressures, and the involvement of foundation partners, among others. This study suggests that data use in school systems occurs in a highly complex environment, with each individual system facing a range of contextual factors, and these influences may moderate one another or combine to have an entirely different effect. Future analysis can build on these findings to understand how these conditions interconnect with one another and with data-use practice at different organizational levels.

Understanding Charter Management Organizations

To date, only limited research has delved into the “black box” of charter management organizations (Farrell et al., 2012; Lake & Dusseault, 2011; Lake et al., 2010). To this body of literature, the research contributes several key findings. First, the network design of CMOs seems to support system data use in two ways. Given the decentralized decision making in a network, the organizations were able to allocate resources tailored to the specific needs of students and teachers at the school sites. School leaders were able to mobilize and use their resources quickly in response to the knowledge gained from the data. Second, these CMO structures supported and encouraged collaboration and information sharing across schools. As such, educators had opportunities to analyze data together and learn new knowledge and skills from their peers, boosting collective capacity.

However, the two CMOs faced particular challenges that affected this work in ways not generally felt in the traditional systems. As both systems were growing, system leaders struggled to balance centralized procedures for instruction, assessment, and curriculum with site-based autonomy, a key component of the charter movement. Yellowstone’s struggle around individualized benchmarks based on individual teacher curricular maps provides an excellent example of this ongoing tension. Further, CMOs at times faced conflicting accountability demands. From the federal and state governments, their authorizers, and their foundation partners, CMOs were expected to demonstrate high levels of performance on high-stakes standardized tests. However, as schools of choice, the CMOs needed to meet the needs of their families and communities by providing a college-ready education, an effort best aided by collecting and analyzing college-ready indicators.
Consequently, the challenge for these educators was to balance these demands so that the schools were able to stay open while meeting the needs of families and students. This balancing act also created tension for CMO educators about which data to attend to and prioritize in their instructional decision making.

As noted in the earlier section, the fact that the end goal for all schools – charter or traditional – was measured by growth on AYP and API meant that the two CMOs faced normative isomorphic pressures. Although charters were designed to be “labs of research and design” for new instructional, curricular, and assessment practices, they faced the pressure of balancing their internal objectives with “bureaucratic demands linked to institutional conformity” (Huerta, 2009, p. 416). In an effort to hold all schools accountable, these policies had a limiting effect on the charters’ innovation in the types of data collected as well as in the ways they responded to those data. Huerta’s (2009) finding around the forces that limit innovation in charters applies to data-use issues as well:

Although charter schools are granted autonomy from many regulations that govern traditional public schools, they are not evolving within a decentralized policy vacuum that insulates them from the forces of the wider institutional environment. Rather, charter schools are still subject to normative definitions of effective schooling advanced by the institutional environment. (p. 415)

Despite the fact that the charters were designed to have greater autonomy and freedom from regulations in order to innovate, many of the responses to data by CMOs and districts – even though each had its own unique history, context, and mission – were actually quite similar. Using data in ways that emphasized achievement and accountability cut across all four school systems, regardless of whether they were charter or not, while the student learning model of data use was more prevalent in one district and one CMO.

Finally, although the CMOs had made significant investments in data-use initiatives, serious questions about the sustainability of these investments remained, a critique raised by others concerning CMO management and growth (Education Sector, 2009; Lake & Dusseault, 2011; Lake et al., 2010; J. Scott, 2009; Wohlstetter et al., 2011). Several of the CMOs’ “significant” resources (e.g., the data management systems, the incentive program that connected compensation with student achievement results, and the technology and tools for teacher observation) were largely funded through foundation investment. It was unclear at the time of this study how the CMOs would continue to fund this work over the long term, particularly in the face of additional budget cuts and while maintaining current growth plans.

Implications for Practice and Policy

This study lays the groundwork for the diffusion of promising practices across school districts and charter management organizations, as well as across schools with varying levels of autonomy – such as the pilot schools in Boston and Los Angeles – and school districts experimenting with portfolio management models, as in Chicago, New Orleans, New York, and Philadelphia (Bulkley, Henig, & Levin, 2010). The following section also outlines several broad recommendations for the policy community concerning instructional improvement through the use of data.
As this dissertation is a preliminary, exploratory study, these recommendations are early suggestions, and further research is necessary to confirm and expand upon these ideas.

Implications for Practice

Broadening the practice of data use. One overarching trend in data use across all four systems was an emphasis on standardized state test data and the responses associated with the accountability-achievement model of data use (Booher-Jennings, 2005; Carnoy & Loeb, 2002; Crocco & Costigan, 2007; Cullen & Reback, 2002; Wills & Sandholtz, 2009). While this response is understandable given the high-stakes nature of the accountability environment, all educators may want to expand the types of achievement and instructional data used to inform decision making. Developing or adapting current assessments to capture higher-order thinking skills is one aspect of this work, as is broadening schools’ definitions of what counts as valuable or valid data for instruction. As Supovitz (2012) pointed out, educators can gain different insights based on assessments’ design, format, and frequency. Brief, end-of-lesson “exit slips” aid in making quick changes for the next day’s lesson, while quarterly system assessments, aligned with state standards, can indicate the strength of past instructional practices. Looking at evidence of student learning through essays, projects, and “authentic” learning activities, as well as collecting and analyzing other types of data (e.g., student behavior, parent/student satisfaction surveys), can help create an even fuller picture of student and school progress. However, educators within and across systems may need support in understanding how different kinds of assessments and their results can be used in ways that are complementary, not contradictory. The greater cognitive demands, with an emphasis on higher-order thinking skills, associated with the national Common Core State Standards Initiative provide further rationale for this recommendation.

Districts could work with the CMOs to understand how to weave in or adapt assessments to support higher-order thinking skills, as CMO educators have tried to do in their own assessments to support college-readiness goals. Learning from the CMOs’ work gathering observational data on teacher instruction in classrooms may hold promise for shifting the focus of data-driven discourse from student weaknesses to one that acknowledges the reciprocal relationship between teaching and learning. Finally, CMOs and school districts could engage in conversations about the value and role of a system-level mission around college- and career-readiness. Of course, districts would need to determine how to adapt this work within the current confines of their collective bargaining agreements.

Investing in, designing, and aligning resources. In order to help focus the federal and state investments in data use, this study confirmed three key types of organizational resources for system leaders to consider and align when designing school systems to engage in data use: human capital, technology and tools, and organizational policies and practices. For districts or CMOs looking to support this work, the research findings here show that resources play different roles in the data-use cycle.
In particular, while a great deal of attention has been placed on developing the technology to collect data (Lachat & Smith, 2005; Massell, 2001; Means et al., 2010; Wayman et al., 2007), investments in human capital – knowledge/skills, leadership, teacher collaboration, and coaching positions – had a significant perceived impact on bridging the “knowing-doing” gap between gathering information from data and engaging in appropriate action plans as a result.

Of course, this recommendation is not meant to suggest that implementing coaching positions or professional learning communities will guarantee that data will be used to inform instruction in the ways system leaders expect, as policy implementation research instructs that reform efforts are ongoing, co-constructed, and dynamic processes (Datnow & Castellano, 2000; Datnow et al., 2002; Honig, 2003, 2006a, 2006b). Nevertheless, investing in people, tools, and practices can support educators as they gather and analyze information about student learning and achievement, decide on instructional responses, and re-examine the effectiveness of these actions by gathering additional data in an ongoing way.

There are “lessons learned” to share between CMOs and districts concerning resource investments. For example, the districts had found ways of engaging resources (e.g., professional learning communities) to support teachers and help ameliorate the stresses associated with data use, an area where the CMOs – with their younger, less experienced teachers – still struggled. Several practices from CMOs could similarly be transferred into district settings. CMOs’ investments in advanced data management systems could provide districts a model for merging multiple databases (e.g., financial functions, student achievement) to enable more sophisticated data analysis. Additionally, the two CMOs managed talent to support data use in three main ways: recruiting and hiring with an “eye” toward openness to using data to drive instruction; providing intensive and ongoing skills/knowledge development early in and throughout educators’ careers; and aligning pay and career advancement opportunities with organizational goals around data use (DeArmond et al., 2012). Districts could follow suit by adapting their hiring procedures and providing their own training for new teachers or working with local schools of education to help develop “data literacy” in new instructors. Finally, districts could work with the teachers’ unions to make using data to inform instruction part of teachers’ evaluation and advancement.

Reorganizing school systems. Some of the organizational and environmental conditions identified in this research are outside of site and system educators’ power to change or control. For instance, districts and CMOs have direct control neither over future reauthorizations of federal and state accountability policies nor over the state budget climate. However, there are aspects of the organizational context that both districts and CMOs could adapt to better support data use internally. There was no one “perfect” management structure for data use in school systems. Both types of systems, for instance, faced tensions between teacher autonomy around instruction, curriculum, and exams and system-wide moves toward centralized pacing calendars, lesson plans, and assessments.
Educators from both districts and CMOs could benefit from conversations about how best to strike a balance between ensuring that high-quality instruction occurs consistently system-wide and maintaining the flexibility for schools and classrooms to respond to student needs. Additionally, both districts and CMOs need to continue to promote mutual accountability for data use across system levels, so that schools are responsible for results while home or central office administrators are accountable for building school capacity through resources and knowledge/skills development (Elmore et al., 1996; Fuhrman, 1999; O'Day, 2002; M. S. Smith & O'Day, 1991). District approaches that are largely top-down and compliance oriented may fall short in keeping schools accountable for deeply implementing data-use initiatives, while an overly “lean” CMO home office may find it difficult to support all schools in the network in this work, particularly as the network grows. In addition, for both CMOs and districts, the home/central office may not want to rely on generalized rules for its relationships with all schools. Just as a teacher responds to assessment results with differentiated instructional responses for each of her students, so can the central/home office vary the level of discretion and administrative control afforded to schools based on each school’s instructional progress and data-driven needs (Honig, Copland, Rainey, Lorton, & Newton, 2010).

Finally, districts may want to consider the benefits associated with the network structure of the CMOs. Decentralizing decision making around budget, curriculum, and hiring/firing choices meant that CMO educators were able to tailor their resource allocation decisions to the specific needs of their students and teachers based on data. This structure also enabled collaboration around data across schools within the network, offering educators access to additional resources and skills from their peers and enhancing organizational capacity as a whole. Districts could adapt some of these policies, for example, by increasing flexibility in managing financial and human resources for schools that have demonstrated progress so that they can best meet the needs of their students. Similarly, they could encourage collaboration by offering more formal opportunities for teachers and school leaders to meet with educators from other schools to analyze student performance data, plan instruction, and monitor results.

Implications for Education Policy

The findings of this dissertation offer important insights for education policy. Most importantly, this study finds that data use in school systems was highly guided by federal and state accountability policies. When federal and state policymakers revisit these policies, they may want to include more complex measures of student learning. For instance, some suggest that value-added models of student achievement provide a better, more nuanced measure of student growth and school progress, as currently adopted in Chicago, New York City, and Washington, DC (Dillon, 2010; Loeb & Plank, 2007). With the rollout of the Common Core State Standards Initiative, policymakers and educators have the opportunity to revisit how student learning and, by extension, teacher instruction are assessed and measured. With the implementation of these new standards, states are required to adopt new assessments to measure student achievement in the 2014-2015 school year (Porter et al., 2011).
Currently, the two consortia tasked with designing these assessments have proposed computer-based exams which, they argue, will be better suited to assessing higher-order thinking skills for college- and career-readiness (Partnership for Assessment of Readiness for College and Careers, 2012; Smarter Balanced Assessment Consortium, 2012).

State policymakers may also want to play a role in helping school districts identify or build user-friendly data management technologies. All four of the school systems here spent a great deal of time and energy developing or tailoring a data management system to their needs. With state-driven support for such technologies -- particularly if the upcoming Common Core State Standards assessments have a greater technology component -- school systems could focus their financial and other resources on other work. Similarly, all four systems used system-wide interim or benchmark assessments aligned to standards and administered several times a year. Some of the school systems developed the benchmarks themselves, while others contracted with out-of-state companies for the assessments. Either choice raised a variety of questions related to the assessments’ validity and/or reliability. State-level guidance on these questions, or the creation of such assessments aligned to the Common Core State Standards for all districts, may be helpful. Of course, this may be difficult to implement given the current economic situation and the capacity of California’s Department of Education, which may not currently have the necessary staff or financial resources.

Finally, this study raises questions about the policy attention, financial investment, and political support for charter management organizations. On one hand, the two CMOs were more apt to collect and attend to a broader range of data around teaching and learning, specifically college-ready indicators and teacher instruction data. They also demonstrated the benefits of a decentralized network structure, particularly in relation to a school system’s ability to allocate resources in response to needs identified in the data. However, the federal/state accountability systems, authorizers, and expectations from foundation partners limited and restricted the innovation promised by the charter movement concerning teaching and learning. Because of these demands, in some critical ways the CMOs looked very similar in their data use to their traditional school system counterparts. Policymakers may want to better address or synchronize these accountability goals, recognizing that the goal of one policy may directly conflict with another. Moreover, many of the significant commitments for data use were funded through major philanthropic investment that may or may not be sustained. Further understanding of how this external funding can both facilitate and hinder the long-term work of CMOs can help policymakers assess whether CMOs are a viable mechanism for replicating successful educational programs, as some claim.

Recommendations for Future Research

This section highlights several unanswered questions raised by this study, identifying areas for future research and investigation. First, this study offered an initial look at the organizational and environmental forces that shape data use in traditional districts and charter management organizations.
As an exploratory study, it provides an improved conceptualization of these forces that can serve as a foundation for future work in this area. Given the range of contextual conditions at play, are there “leading” organizational and/or environmental factors? For instance, would these findings hold across a study of CMOs and districts that all had the same number of students and schools? If the study were replicated in a state that had greater financial resources for both districts and charters, how would system resources for data use vary? Or do federal and state accountability pressures drive educators’ data-use priorities above all else? Additionally, because of the sampling criteria, other organizational factors that may shape data use were not included here and could be explored in the future. For instance, how do the processes for using data to drive instruction differ between elementary and secondary schools? The four school systems included here were all largely high achieving according to their state standardized test results. Are there additional challenges in implementing data-use initiatives in schools or districts that are low performing or engaged in “turnaround” work, where educators may be less enthusiastic about using data (Calkins, Guenther, Belfiore, & Lash, 2007; Diamond & Spillane, 2004; Huberman et al., 2011)?

Future research could test the findings developed in this case study to better understand the relationships between different contextual conditions, the resources mobilized for data use, and the range of responses to data. Given that there was no one “recipe” for data use across the four school systems that had expressly engaged in this work, qualitative comparative analysis would be one helpful, and novel, method to dig deeper into which combinations of “ingredients” may lead to certain models of data-based decision making (Fiss, 2007; Ragin, 1987, 2000). For instance, which conditions (or combinations of conditions) are more likely to lead to a student learning model of data use, as seen in Mammoth and Yellowstone, rather than a heavy focus on achievement and accountability, as seen in Sequoia and Yosemite?

This study provides a “snapshot” of data use in four systems at one point in time. It may be useful to design future qualitative studies that attend to how data use changes or evolves over the course of a school year or longer. As part of future in-depth qualitative work, researchers could also explore the dynamics between organizations’ macro-social structures and the micro-level actions of teachers, school leaders, and central office administrators (Coburn & Turner, 2011). For instance, this study outlined the roles that human capital, technology and tools, and organizational processes played in enabling educators’ data use. However, how does an educator make sense of these resources if they are not aligned, for example, if the data management system prioritizes benchmark assessment results while the literacy coach emphasizes analyzing and using student work to drive instruction? More broadly, how do these new demands and expectations for data use blend or interact with educators’ prior intuitions, teaching philosophies, and personal experiences? Advances can also be made at the other end of the methodological spectrum.
The data-use cycle is built on the assumption that when school systems build capacity and support educators’ data use, improved educational programs and, ultimately, increased student outcomes can follow. Large-scale, quantitative studies could examine the links between student achievement and data-use initiatives. Specifically, what are the relationships among organizational context, available resources, types of data used, instructional decisions, and gains in student achievement?

Although these and other questions remain, these findings have affirmed that practitioners, policymakers, and researchers cannot fully understand data-use initiatives in school systems without a clear picture of the organizational and environmental settings in which this process unfolds. It was the aim of this exploratory study to initiate this critical line of inquiry and generate new directions for future research.

BIBLIOGRAPHY

Ackoff, R. L. (1989). From data to wisdom. Journal of Applied Systems Analysis, 16(1), 3-9.
Adamowski, S., Therriault, S. B., & Cavanna, A. P. (2007). The autonomy gap: Barriers to effective school leadership. Washington, DC: American Institutes for Research and the Thomas B. Fordham Institute.
Adler, P. A., & Adler, P. (1994). Observational techniques. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 377-392). Thousand Oaks, CA: Sage Publications.
Agranoff, R., & McGuire, M. (1999). Managing in network settings. Policy Studies Review, 16(1), 18-41.
Alavi, M., & Leidner, D. E. (2001). Knowledge management and knowledge management systems: Conceptual foundations and research issues. MIS Quarterly, 25(1), 107-136.
Alwin, L. (2002). The will and the way of data use. School Administrator, 59(11), 11.
Anderson, S. E. (2006). The school district's role in educational change. International Journal of Educational Reform, 15(1), 13-37.
Anderson, S. E., Leithwood, K., & Strauss, T. (2010). Leading data use in schools: Organizational conditions and practices. Leadership and Policy in Schools, 9(3), 292-327.
Anfara, V. A., Brown, K. M., & Mangione, T. L. (2002). Qualitative analysis on stage: Making the research process more public. Educational Researcher, 31(28), 28-38.
Argyris, C., & Schon, D. (1978). Organizational learning: A theory of action perspective. Cambridge, MA: Addison-Wesley.
Argyris, C., & Schon, D. (1996). Organizational learning II: Theory, method and practice. Cambridge, MA: Addison-Wesley.
Barney, J. B. (1991). Firm resources and sustained competitive advantage. Journal of Management, 17(1), 99-120.
Batdorff, M., Maloney, L., May, J., Doyle, D., & Hassel, B. (2010). Charter school funding: Inequity persists. Muncie, IN: Ball State University.
Baxter, J. A. (2010). QUASAR: The evolution of tools to support educational improvement. In C. E. Coburn & M. K. Stein (Eds.), Research and practice in education: Building alliances, bridging the divide. Lanham, MD: Rowman & Littlefield Publishers.
Becerra-Fernandez, I., & Sabherwal, R. (2001). Organizational knowledge management: A contingency perspective. Journal of Management Information Systems, 18(1), 23-55.
Bernhardt, V. L. (2003). No schools left behind. Educational Leadership, 60(5), 26-30.
Bernhardt, V. L. (2004). Data analysis for continuous improvements (2nd ed.). New York: Eye on Education.
Beyer, J. M., & Trice, H. M. (1982). The utilization process: A conceptual framework and synthesis of empirical findings.
Administrative Science Quarterly, 27(4), 591- 622. ! ! ! ! ! ! ! ! !! ! ! 206! Bhatt, G. D. (2001). Knowledge management in organizations: Examining the interaction between technologies, techniques, and people. Journal of Knowledge Management, 5(1), 68-78. Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. London, UK: GL Assessment. Blair, D. (2002). Knowledge management: Hype, hope or help? Journal of The American Society for Information Science and Technology, 53(12), 1019-1028. Blanc, S., Christman, J., Liu, R., Mitchell, C., Travers, E., & Bulkley, K. E. (2010). Learning to learn from data: Benchmarks and instructional committees. Peabody Journal of Education, 85(2), 205-225. Bogdan, R. C., & Bilken, S. K. (1998). Qualitative research in education: An introduction to theory and methods. Needham Heights, MA: Allyn & Bacon. Bolam, R., McMahon, A., Stoll, L., Thomas, S., & Wallace, M. (2005). Creating and sustaining effective professional learning communities (Research Report No. 637). Bristol, UK: Universities of Bristol, Bath and London, Institute of Education. Booher-Jennings, J. (2005). Below the bubble: 'Educational triage' and the Texas accountability system. American Educational Research Journal, 42(1), 231-268. Booker, K., Sass, T. R., Gill, B., & Zimmer, R. W. (2011). The effects of charter high schools on educational attainment. Journal of Labor Economics, 29(2), XX-XX. Boyatzis, R. E. (1998). Transforming qualitative information. Thousand Oaks, CA: Sage Publications. ! ! ! ! ! ! ! ! !! ! ! 207! Brown, J. S., & Duguid, P. (2000). The social life of information. Boston, MA: Harvard Business School Press. Bulkley, K. E., Christman, J. B., Goertz, M., & Lawrence, N. (2008). Building with benchmarks: The role of the district in Philadelphia's benchmark assessment system. Peabody Journal of Education, 85(2), 186-204. Bulkley, K. E., & Fisler, J. (2003). A decade of charter schools: From theory to practice. Educational Policy, 17(1), 317-342. Bulkley, K. E., Henig, J. R., & Levin, H. M. (2010). Between public and private: Politics, governance and the new portfolio models for urban school reform. Cambridge, MA: Harvard Education Press. Bulkley, K. E., & Wohlstetter, P. (2004). Taking account of charter schools. New York, NY: Teachers College Press. Busck, O. G., & Knudsen, H. (2010). The transformation of employee participation: Consequences for the work environment. Economic and Industrial Democracy, 32(3), 285-305. Calkins, A., Guenther, W., Belfiore, G., & Lash, D. (2007). The turnaround challenge. Boston: Mass Insight. Carlson, D., Borman, G. D., & Robinson, M. (2011). A multistate district-level cluster randomized trial of the impact of data-driven reform on reading and mathematics achievement. Educational Evaluation and Policy Analysis, 33(3), 378-398. Carnoy, M., & Loeb, S. (2002). Does external accountability affect student outcomes? A cross-state analysis. Educational Evaluation and Policy Analysis, 24(4), 305-331. ! ! ! ! ! ! ! ! !! ! ! 208! Center for Research on Education Outcomes. (2009). Multiple choice: Charter school performance in 16 states. Palo Alto, CA: Center for Research on Education Outcomes. Center on Education Reform. (2011). The accountability report: Charter schools. Washington, DC: Center for Education Reform. Choppin, J. (2001). Data use in practice: Examples from the school level. Paper presented at the Annual meeting of the American Educational Research Association Conference in New Orleans, LA. 
Chrismer, S. S., & DiBara, J. (2006). Formative assessment of student thinking in reading: An evaluation of the use of FAST-R in the Boston Public Schools. Cambridge, MA: Education Matters. Chubb, J., & Moe, T. (1990). Politics, markets, and American's public schools. Washington, DC: Brookings Institution Press. Coburn, C. E. (2001). Collective sensemaking about reading: How teachers mediate reading policy in their professional communities. Educational Evaluation and Policy Analysis, 23(2), 145-170. Coburn, C. E. (2010). Partnerships for district reform. In C. E. Coburn & M. K. Stein (Eds.), Research and practice in education: Building alliances, bridging the divide. New York: Rowman & Littlefield Publishing Group. Coburn, C. E., Honig, M. I., & Stein, M. K. (2009). What's the evidence on district's use of evidence? In J. Bransford, D. J. Stipek, N. J. Vye, L. Gomez & D. Lam (Eds.), Educational improvement: What makes it happen and why? (pp. 67-86). Cambridge: Harvard Educational Press. ! ! ! ! ! ! ! ! !! ! ! 209! Coburn, C. E., & Talbert, J. E. (2006). Conceptions of evidence use in school districts: Mapping the terrain. American Journal of Education, 112(4), 469-495. Coburn, C. E., Toure, J., & Yamashita, M. (2009). Evidence, interpretation, and persuasion: Instructional decision making at the district central office. Teachers College Record, 111(4), 1115-1161. Coburn, C. E., & Turner, E. O. (2011). Research on data use: A framework and analysis. Measurement: Interdisciplinary Research and Perspective, 9(4), 173-206. Coburn, C. E., & Turner, E. O. (2012). The practice of data use: An introduction. American Journal of Education, 118(2), 99-111. Coleman, J. S., Campbell, E. Q., Hobson, C. J., McPartland, F., Mood, A. M., & Weinfeld, F. D. (1966). Equality of educational opportunity. Washington DC: US Government Printing Office. Collins, A., Brown, J. S., & Holum, A. (1991). Cognitive apprenticeship: Making thinking visible. American Educator(Winter), 1-18. Cosner, S. (2010). Drawing on a knowledge-based trust perspective to examine and conceptualize within school trust development by principals. Journal of School Leadership, 20(2), 117-144. Cosner, S. (2011a). Supporting the initiation and early development of evidence-based grade-level collaboration in urban elementary systems: Key roles and strategies of principals and literacy coordinators. Urban Education, 46(4), 786-827. Cosner, S. (2011b). Teacher learning, instructional considerations, and principal communication: Lessons from a longitudinal study of collaborative data use by teachers. Educational Management Administration & Leadership, 39(1), 568-589. ! ! ! ! ! ! ! ! !! ! ! 210! Cosner, S. (2012). Leading the ongoing development of collaborative data practices: Advancing a schema for diagnosis and intervention. Leadership and Policy in Schools, 11(1), 26-65. Council of the Great City Schools. (2002). Beating the odds II. Washington, DC: Authors. Creswell, J. W. (2003). Research design: Qualitative, quantitative, and mixed methods approaches (2nd ed.). Thousand Oaks, CA: Sage Publications. Crocco, M. S., & Costigan, A. T. (2007). The narrowing of curriculum and pedagogy in the age of accountability: Urban educators speak out. Urban Education, 42(6), 512-535. Cullen, J. B., & Reback, R. (2002). Tinkering towards accolades: School gaming under a performance accountability system. Ann Arbor: University of Michigan Press. Daft, R. L., & Weick, K. E. (1984). Toward a model of organizations as interpretation systems. 
Academy of Management Review, 9(2), 284-295. Dalkir, K. (2007). Knowledge management in theory and practice. Elsevier: Noida. Datnow, A., & Castellano, M. (2000). Teachers' response to Success for All: How beliefs, experiences, and adaptations shape implementation. American Educational Research Journal, 37(3), 775-799. Datnow, A., Hubbard, L., & Mehan, H. (2002). Extending educational reform: From one school to many. London, UK: Falmer/Routledge. Datnow, A., & Park, V. (2009). School system strategies for supporting data use. In T. J. Kowalksi & T. J. Lasley (Eds.), Handbook of data-based decision making in education. New York: Routledge. ! ! ! ! ! ! ! ! !! ! ! 211! Datnow, A., & Park, V. (2010a). Large-scale reform in the era of accountability: The system role in supporting data-driven decision making. In A. Hargreaves, A. Lieberman, M. Fullan & D. Hopkins (Eds.), Second international handbook of educational change. New York: Springer. Datnow, A., & Park, V. (Eds.). (2010b). Success for All: Using tools to transport research-based practices to the classroom. Lanham: Rowman & Littlefield Publishers. Datnow, A., Park, V., & Kennedy, B. (2008). Acting on data: How urban high schools use data to improve instruction. Los Angeles, CA: Center on Educational Governance, University of Southern California. Datnow, A., Park, V., & Wohlstetter, P. (2007). Achieving with data: How high performance driven school systems use data to improve instruction for elementary school students. Los Angeles, CA: Center on Educational Governance, University of Southern California. Davenport, T., & Prusak, L. (1998). Working knowledge: How organizations manage what they know. Cambridge, MA: Harvard Business School Press. Davis, M. (2012). Data tools aim to predict student performance. Education Week, 1-4. Deal, T. E., & Hentschke, G. C. (2004). Adventures of charter school creators: Leading from the ground up. Lanham, MD: Scarecrow Education. DeArmond, M., Gross, B., Bowen, M., Demeritt, A., & Lake, R. (2012). Managing talent for school coherence: Learning from charter management organizations. Bethel, WA: Center for Reinventing Public Education, University of Washington. ! ! ! ! ! ! ! ! !! ! ! 212! Dee, T. S., & Jacob, B. (2011). The impact of No Child Left Behind on student achievement. Journal of Policy Analysis and Management, 30(3), 418-446. DeFeo, J. A., & Barnard, W. (2005). JURAN Institute's six sigma breakthrough and beyond - Quality performance breakthrough methods. New York: McGraw-Hill Professional. Deming, W. E. (1986). Out of crisis. Cambridge, MA: MIT Centre for Advanced Engineering. Denzin, N. K. (1978). Sociological methods: A sourcebook. New York: McGraw-Hill. Diamond, J. B., & Cooper, K. (2007). The uses of testing data in urban elementary schools: Some lessons from Chicago. In P. A. Moss (Ed.), Evidence and decision making (Vol. 106, pp. 241-263). Chicago, IL: Blackwell Publishing. Diamond, J. B., & Spillane, J. P. (2004). High-stakes accountability in urban elementary schools: Challenging or reproducing inequality? Teachers College Record, 106(6), 1145-1176. Dillon, S. (2010, August 31). Formula to grade teachers' skill gains acceptance, and critics. New York Times. Retrieved from http://www.nytimes.com/2010/09/01/education/01teacher.html DiMaggio, P., & Powell, W. W. (1983). The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 48(2), 147-160. Downs, A. (1967). Inside bureaucracy. New York, NY: Little Brown. 
Doyle, D. P. (2003). Data-driven decision-making: Is it the mantra of the month or does it have staying power? T.H.E. Journal, 30(10), 19-21. ! ! ! ! ! ! ! ! !! ! ! 213! Drazin, R., & Van de Ven, A. H. (1985). Alternative forms of fit in contingency theory. Administrative Science Quarterly, 30(4), 514-539. Drucker, P. (1999). Knowledge-worker productivity: The biggest challenge. California Management Review, 41(2), 79-94. Easterby-Smith, M., & Lyles, M. A. (2006). Introduction: Watersheds of organizational learning and knowledge management. In M. Easterby-Smith & M. A. Lyles (Eds.), Handbook of organizational learning and knowledge management. Malden, MA: Blackwell. EdSector. (2009). Growing pains: Scaling up the nation's best charter schools. Washington, DC: Education Sector. EdSource. (2009). California's charter schools: 2009 update on issues and performance. Mountain View, CA: EdSource. Education Sector. (2009). Growing pains: Scaling up the nation's best charter schools. Washington DC: Education Sector. Eisenhardt, K. M. (1989). Building theories from case study research. Academy of Management Review, 14(4), 552-550. Eliot, T. H. (1959). Toward an understanding of public school politics. The American Political Science Review, 53(4), 1032-1051. Elliott, S., & O'Dell, C. (1999). Sharing knowledge & best practices: The hows and whys of tapping your organization's hidden reservoirs of knowledge. Health Forum Journal, 42(3), 34-37. ! ! ! ! ! ! ! ! !! ! ! 214! Elmore, R. F., Abelmann, C. H., & Fuhrman, S. H. (1996). The new accountability in state education reform: From process to performance. In H. F. Ladd (Ed.), Holding schools accountable. Washington DC: Brookings Institute Press. Elmore, R. F., & Rothman, R. (1999). Testing, teaching, and learning: A guide for states and districts. Washington, DC: The Aspen Institute. Ermeling, B. A. (2010). Tracing the effects of teacher inquiry on classroom practice. Teaching and Teacher Education: An International Journal of Research & Studies, 26(1), 377-388. Estrada, J., & Kuhn, J. (2012). Comparing funding for charter schools and their school district peers. Sacramento, CA: The California Legislative Analyst's Office. Farrell, C. C., Nayfack, M. N., Smith, J., Wohlstetter, P., & Wong, A. (2009). Scaling up charter management organizations: Eight key lessons for success. Los Angeles, CA: Center on Educational Governance, University of Southern California. Farrell, C. C., Wohlstetter, P., & Smith, J. (2012). Charter management organizations: An emerging approach to scaling-up what works. Educational Policy, 26(4), 499-532. Feldman, J., & Tung, R. (2001). Whole school reform: How schools use the data-based inquiry and decision making process. Paper presented at the American Educational Research Association Annual Conference. Feldman, M., & March, J. (1981). Information in organizations as signal and symbol. Administrative Science Quarterly, 26(2), 171-186. Figlio, D. N., & Getzler, L. (2006). Accountability, ability, and disability: Gaming the system. Cambridge, MA: National Bureau of Economic Research. ! ! ! ! ! ! ! ! !! ! ! 215! Figlio, D. N., & Winiki, J. (2005). Food for thought: The effects of school accountability plans on school nutrition. Journal of Public Economics, 89(2-3), 381-394. Finnigan, K. S. (2007). Charter school autonomy: The mismatch between theory and practice. Educational Policy, 21(3), 503-527. Firestone, W. A., Fuhrman, S. H., & Kirst, M. W. (1991). State educational reform since 1983: Appraisal and the future. 
Educational Policy, 3(3), 233-250. Firestone, W. A., & Gonzalez, R. A. (2007). Culture and processes affecting data use in school districts. In P. A. Moss (Ed.), Evidence and decision making, National Society for the Study of Education Yearbook (pp. 132-154). Chicago: Blackwell Publishing. Fiss, P. C. (2007). A set-theoretic approach to organizational configurations. Academy of Management Review, 32(4), 1180-1198. Frey, R. S. (2001). Knowledge management, proposal development, and small businesses. The Journal of Management Development, 20(1), 38-54. Fuhrman, S. H. (1999). The new accountability (CPRE Policy Briefs No. RB-27). Philadelphia: University of Pennsylvania, Consorium for Policy Research in Education. Fuhrman, S. H., Goertz, M. E., & Weinbaum, E. (2007). Education governance in the United States: Where are we? How did we get here? Why should we care? In S. Furhman, D. Cohen & F. Mosher (Eds.), The state of education policy research. Mahwah, NJ: Lawrence Erlbaum Associates, Inc. ! ! ! ! ! ! ! ! !! ! ! 216! Furgeson, J., Gill, B., Haimson, J., Killewald, A., McCullough, M., Nichols-Barrer, I., et al. (2012). Charter-school management organizations: Diverse strategies and diverse student impacts. Bethell, WA: University of Washington and Mathematica Policy Research. Fusarelli, L. D. (2004). The potential impact of the No Child Left Behind Act on equity and diversity in American education. Educational Policy, 18(1), 71-94. Gallimore, R., Ermeling, B. A., Saunders, B., & Goldenberg, C. (2009). Moving the learning of teaching closer to practice: Teacher education implications of school- based inquiry teams. Elementary School Journal, 109(5), 537-553. Geertz, C. (1973). The interpretation of cultures. New York: Basic Books. Gersick, C. J., & Hackman, J. R. (1990). Habitual routines in task-performing groups. Organisational Behavior and Human Decision Processes, 47(1), 65-97. Goertz, M. E., Olah, L. N., & Riggan, M. (2009). From testing to teaching: The use of interim assessments in classroom instruction (No. CPRE Research Report #RR- 65). Philadelphia, PA: Consortium for Policy Research in Education. Gold, A. H., Malhotra, A., & Segars, A. H. (2001). Knowledge management: An organizational capabilities perspective. Journal of Management Information Systems, 18(1), 185-214. Golden, M. (2005). Making strides with educational data. T. H. E. Journal, 32(12), 38- 40. Goldin, C. (1999). A brief history of education in the United States. Cambridge, MA: National Bureau of Economic Research. ! ! ! ! ! ! ! ! !! ! ! 217! Grant, R. M. (1996). Toward a knowledge-based theory of the firm. Strategic Management Journal, 17(1), 109-122. Hadfield, M., & Chapman, C. (2009). Leading school-based networks. London, UK: Routledge. Halverson, R., Grigg, J., Pritchett, R., & Thomas, C. (2005). The new instructional leadership: Creating data-driven instructional systems in schools. Madison, WI. Hamilton, L. (2011). Commentary on Coburn and Turner's 'Research on data use: A framework and analysis'. Measurement: Interdisciplinary Research and Perspective, 9(4), 207-210. Hamilton, L., Halverson, R., Jackson, S., Mandinach, E., Supovitz, J. A., & Wayman, J. C. (2009). Using student achievement data to support instructional decision making (NCEE 2009-4067). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education. Hargreaves, D. H. (2000). 
APPENDIX A

INTERVIEW PROTOCOL

Before we begin, I'd like to briefly get a sense of your background:

1. How long have you worked at XX?
a. What previous positions have you held at XX?
b. What work experience did you have before coming here?
c. Please describe your current position and responsibilities.

2. What is your specific role with regard to data use in the secondary schools?

I'd like to focus on how you are collecting and using data at XX.

3. Data Use
a. [Show card with data types] In general, what types of data do you use most frequently?
i. What types of data do you find most helpful in improving Language Arts achievement? Why?
b. On a regular basis, how much time do you think you spend analyzing data per month?
How do data inform your decisions as CMO leader?
c. Do you ever look at data with the school leaders? If so, what kinds of data, and with what frequency?
d. Can you walk me through an example of a time when you collected, analyzed, and acted upon Language Arts data?
i. Who was involved (school site, home office staff)?
ii. What were the decisions that were made?
e. What kinds of data do you collect and send to the school sites? What kinds of data do they provide for you at the home office?
f. Has XX developed its own assessments to inform Language Arts instruction?
i. How were they developed? Who was involved?
g. Can you think of a time when there has been conflict or tension about using data or sharing best practices? From or between teachers, school leaders, or other home office administrators?
i. [E.g., staff who are possessive of their data, or staff who don't want to accept advice from others]
h. What kinds of issues or problems come up around data use? Who do you go to for advice?

4. Personnel & Training
a. Who are the people at XX's home office who are responsible for collecting and analyzing data around Language Arts instruction?
i. [Probe: Director of assessment/data, CAO?]
ii. What are their roles and responsibilities around using and interpreting data?
iii. How much of their time is expected to be spent on using data? Why?
b. Who are the people at the secondary school sites who are responsible for collecting and analyzing data around Language Arts instruction?
i. [Probe: Literacy coach, data manager, AP/principal?]
ii. What are their roles and responsibilities around using and interpreting data?
iii. How much of their time is expected to be spent on using data? Why?
c. Do you offer trainings or professional development for teachers or other staff around how to use data to make instructional decisions in Language Arts?
i. How often? Is it mandatory or voluntary?
ii. Over the past year, what has been the content or focus of this training? Is this different than last year? Why?
1. [Probe: how to use software, how to group students, instructional techniques]
iii. Who decides the content of the professional development?
iv. In what form is the PD?
1. [Probe: study groups, meetings, mentoring]
v. Is there any collaboration that occurs among teachers or staff across schools around data?
vi. Do you bring in outside trainers? If so, which ones, and why?
d. Are there apprenticeship or mentorship opportunities for ELA staff? Opportunities to rotate to other classrooms or other schools?

5. Technology
a. Is there a CMO-wide or school-level data management system or software program that XX uses?
i. If so, who can access this system?
b. How frequently do you log on to the system? Does the data system generate reports for the home office or for schools? If so, may I have a copy of a report?
i. Can you think of a time when you used the system to generate ELA information? How was it helpful (or not)?
c. Do you use other kinds of technology or tools to collect data or share best practices?
i. [Probe: videotaping lessons, use of iPads for evaluations, intranet, email, Blackboard]
ii. How are these helpful (or not)?

6. Organizational Infrastructure
a. What organizational policies or routines are in place to help school staff use data and share Language Arts best practices at the individual school level?
i. [Probe: PLCs/collaborative time for teachers, staff meetings]
ii. Are these policies mandatory or voluntary?
b. If there is a great Language Arts lesson or strategy happening in one secondary school classroom, what systems or practices are in place to identify and share that idea across the network?
c. Do you have incentives, rewards, or sanctions in place around using data or sharing best practices? [E.g., is data use a part of staff evaluations?]
i. How have these been helpful (or not)?
d. Are there other specific formal or informal practices at XX to encourage or facilitate data use or sharing of best practices?
i. How have these been helpful (or not)?

Next, I have some "big picture" questions about XX:

7. Culture
a. I'm trying to get a sense of the community here at XX. If I were applying for a teaching position in an XX school, how would you describe your schools and students?
i. How would you define the culture of XX?
b. What is XX's philosophy with regard to data use and instructional change? Do you see this philosophy reflected in XX's mission, vision, or values?

8. Structure
a. How would you describe how XX's home office, the regional office, and the school sites interact when it comes to data use?
b. What roles do the home office, regional office, and the school site play in data use initiatives?
c. What is the organizational structure of XX?
i. What is the division of responsibilities for data use initiatives?
ii. Can you describe the chain of command for data use initiatives?
1. [Probe: Who do you report to? Who reports to you?]
iii. Do you have an organizational chart? If so, could I have a copy?
d. How would you characterize the relationships between the schools? To what extent do XX secondary schools get together for PD, planning, or other reasons?
e. Are there any non-negotiables in the XX model, things that you expect to see in every school when it comes to data use?
i. How were these decided upon?
ii. How do you communicate these expectations to school leaders?

9. Decision Making - Language Arts
a. Who makes decisions about the Language Arts curriculum, instruction, and assessment: the CMO home office, the regional office, or the school sites?
i. Who else is involved?
b. If I were to visit several XX secondary schools, how likely is it that I would see the same approach to Language Arts teaching from classroom to classroom and school to school?
i. Why is or isn't that the case?

10. Accountability
a. What are the major policies at the federal, state, or local level that have influenced how you collect or analyze data for Language Arts instruction? Can you give an example?
b. Are there other individuals or organizations that influence the kind of Language Arts-related data you collect and analyze? If so, how have they influenced you?
i. Foundations?
ii. Your authorizer?
iii. Requests from the community/families?
c. Are there formal or informal accountability mechanisms within XX?
i. Between the home office and schools?
ii. Between the school leader and teachers?
iii. Among schools within the network?
iv. [Probe: What are people held accountable for? What are the consequences if they don't meet those goals?]

11. Management and Leadership
a. What are the expectations for XX leaders (e.g., school site principals, regional office leaders, home office leaders) around their knowledge and skills with regard to data use in their day-to-day work?
b. Are there any other ways that you, as CMO leader, support the work around data use for Language Arts?

And some final questions:
12. What have been your major successes around data use and sharing best practices in Language Arts within XX?

13. What have been your major challenges around data use and sharing best practices in Language Arts within XX?

14. To what extent has being a charter organization influenced how XX uses data? Are there things you do that might be difficult to accomplish in a traditional school or district?

15. Do you have any final thoughts or comments that you would like to share with us? Is there anything that you think I should have asked about but didn't?

16. In our interview, you mentioned different documents and materials that you've created or used with XX's home office or school sites. Would you be willing to provide us with copies?
a. Mission, vision statement
b. Assessment printouts
c. Results, data displays
d. Documents describing reading/Language Arts initiatives
e. Organizational chart

APPENDIX B

ORAL INFORMED CONSENT STATEMENT FOR INTERVIEWS

Hello, my name is _____, and this is _____. We are researchers from USC's Rossier School of Education. As we mentioned in our earlier email, we are conducting a study, funded by the Spencer Foundation, to examine the implementation and impact of middle school literacy coaches, data coaches, and data teams. As part of this study, we are interviewing district leaders, school leaders, coaches or PLC leaders, and teachers about their involvement in data use to support literacy instruction.

During this interview, we will be asking you questions about the kinds of data that you use, your involvement with data use in literacy instruction, and how these efforts have been implemented. This interview should last between 45 minutes and 1 hour.

The information you provide during this interview will be kept strictly confidential and used for research purposes only. We will not disclose your identity or attribute any comments to you. We ask that you respond only in your professional capacity. Your participation in this interview is completely voluntary, so if you prefer not to answer a question, or if you want to end this interview for any reason, just let me know.

We plan to report our overall findings to school and district leaders in the hopes of informing their efforts to refine reform strategies to better support teachers. We will also be publishing our findings (e.g., in academic journals), which we will be happy to share with you. We will also be sending you a monthly survey that will take 15-20 minutes, and for your participation in the study we will be providing an honorarium: $200 for the school, $250 for each case study teacher, and $500 for the coach or PLC lead.

We would like to record our conversation if you don't mind. We will destroy the recording at the end of the study. Is that okay with you? Do you have any questions before we start?

Do I have your permission to begin the interview? ____ Yes ____ No

Principal Investigator Signature    Date    Interview Site

CMO Oral Informed Consent Statement for Interviews

Hello, my name is _____. I am a researcher from USC's Rossier School of Education. As I mentioned in my earlier email, I am conducting a study to examine how school systems use data to inform literacy instruction. As part of this study, I am interviewing CMO leaders, school leaders, coaches or PLC leaders, and teachers about their involvement in data use to support literacy instruction.

During this interview, I will be asking you questions about the kinds of data that you use, your involvement with data use in literacy instruction, and how these efforts have been implemented. This interview should last between 45 minutes and 1 hour.

The information you provide during this interview will be kept strictly confidential and used for research purposes only. I will not disclose your identity or attribute any comments to you. I ask that you respond only in your professional capacity. Your participation in this interview is completely voluntary, so if you prefer not to answer a question, or if you want to end this interview for any reason, just let me know.

I plan to report the overall findings to school and district leaders in the hopes of informing their efforts to refine reform strategies. I will also be publishing the findings (e.g., in academic journals), which I will be happy to share with you.

I would like to record our conversation if you don't mind. I will destroy the recording at the end of the study. Is that okay with you? Do you have any questions before we start?

Do I have your permission to begin the interview? ____ Yes ____ No

Principal Investigator Signature    Date    Interview Site

APPENDIX C

OBSERVATION PROTOCOL

Location:
Date/Time:
Observer:
Meeting description:
Description of room set-up:
Materials:
Participants:

Time | Notes | Comments

APPENDIX D

INITIAL CODE LIST

!School System !Sequoia !Mammoth !Yellowstone !Yosemite
@System Level @Central Office @School Site
#Type of System #District #Charter
$Position $Central office administrator $Regional office administrator $Principal $Assistant Principal $Teacher Leader $Teacher $Other
%Background %Prior position %Current position %History of system %History of school
^Data Type ^National exam ^State assessment ^Interim benchmark assessment ^Diagnostic ^Common grade level ^Classroom assessment ^Computer based learning ^Student work ^Observation of teacher instruction ^Student engagement observations ^Student engagement surveys ^Other
&Data Use &Who is involved &Response &Frequency &Conflict in data use
*Human Capital *Professional development *Professional learning community *Coach *Other
(Technology (Data management system (Who can access (Ease of access (Other
)Infrastructure )Time )Routines )Incentives )Other
~Org Context ~Culture ~Mission and vision ~Structure ~Decision making ~Accountability.Federal policy ~Accountability.State policy ~Accountability.Local policy ~Accountability.Authorizer ~Accountability.Philanthropy ~Accountability.Market ~Leadership ~Regulations
?Enabling ?Constraining
<Juicy

APPENDIX E

FINAL CODE LIST

-- BACKGROUND --Back.Current position --Back.Prior position --Back.School --Back.System
- DATA TYPE -Data.API,AYP,AMO -Data.Classroom assessment -Data.Common grade level -Data.Computer based assessment -Data.Diagnostic -Data.Grades -Data.Interim, benchmark assessment -Data.National exam -Data.Other -Data.School misc. -Data.State assessment -Data.Student feedback -Data.Student observations by teacher -Data.Student work -Data.Teacher obs and Evaluation/PM
! DATA USE !DataUse.Cheating !DataUse.Diagnostic !DataUse.Frequency !DataUse.Identify students interventions !DataUse.Other !DataUse.Response !DataUse.Reteach !DataUse.Safe Harbor Kids !DataUse.Small groups !DataUse.Teaching to the test !DataUse.Who is involved !DataUse.With Students
^DATA USE MODELS ^Model.Accountability ^Model.Compliance ^Model.Instruction ^Model.Positioning ^Model.Signalling ^Model.Student learning
# HUMAN CAPITAL #HC.Coach #HC.Collaboration between T and A #HC.Collaboration between teachers #HC.New teacher programs #HC.Other #HC.Partnerships with external #HC.Personal attributes #HC.PLC #HC.Professional development #HC.Teacher Leader
$ TECHNOLOGY $Tech.Data management system $Tech.Ease of access $Tech.Email $Tech.IPad $Tech.Other $Tech.Tools $Tech.Video $Tech.Who can access
% ORG POLICY %OP.Evaluation, performance management %OP.Formal policy, structure %OP.Informal policy, structure %OP.Other %OP.Time
& ORG CONTEXT &Accountability.Authorizer &Accountability.Communities, families &Accountability.Federal &Accountability.for.Autonomy &Accountability.Internal &Accountability.Local policy &Accountability.Market &Accountability.Philanthropy &Accountability.Professional &Accountability.Public &Accountability.State policy &Charter versus TPS &Collective Bargaining &Culture &Decisions.assessment &Decisions.human capital &Decisions.instruction &Decisions.policies &Decisions.technology &Financial environment &Leadership.School site &Leadership.System &Leadership.Union &Mission and vision &OrgContext.Other &Structure.Central office responsibilities &Structure.Chain of command &Structure.Regional office responsibilities &Structure.Relationship between school and CO &Structure.Relationship between schools &Structure.School site responsibilities
(Constraining (Data Challenge (Data Success (Enabling (Misc.
<Juicy

APPENDIX F

DECISION-MAKING CRITERIA FOR RESOURCE COMMITMENT CATEGORIES AND CLASSIFICATIONS

Classification levels:
Little/No: Acquiring information about and beginning to implement strategies.
Developing: Experimenting with strategies, having built an initial level of commitment.
Significant: Having achieved a level of mastery, continuously evaluating goals and processes.

Human Capital

Teacher collaboration
Little/No:
- Teachers at a school site (e.g., grade or department) meet irregularly to discuss lesson planning, assessments, and a few kinds of data.
- Work group functionality varies across and within school sites.
- Uneven development of group norms and trust of other teachers.
- Discussion mainly focuses on logistical or housekeeping issues.
Developing:
- Teachers at a school (e.g., grade or department) meet occasionally to discuss lesson planning, assessments, and several kinds of data.
- Initial development of group norms; growing trust of colleagues.
- Teachers occasionally change their instructional practice based on feedback and collaboration efforts.
Significant:
- Teachers from one school or across schools meet frequently to change instructional practice based on many types of assessment results.
- Developed, habitualized group norms and procedures for data analysis; deep trust of colleagues.
- Teachers regularly change their instructional practice based on feedback and collaboration efforts.

Dedicated positions – Instructional coaches
Little/No:
- Unclear role for instructional coaches at the school; limited guidance from administration or the district about responsibilities at the school.
- Time is spent completing other duties (e.g., paperwork), with little time to observe instruction or to model or co-teach lessons with teachers.
- Limited discussion of teacher practice and student learning.
Developing:
- Instructional coaches have a designated role at the school that focuses on supporting teacher practice and use of data.
- Some level of strategic planning of coach assignment to low-performing or needy teachers.
- Time is spent observing, modeling, and co-teaching lessons.
- Work between coach and teacher occasionally focuses on teacher practice and student learning.
Significant:
- Instructional coaches have a very clear role at the school/system to support and improve teacher practice, instruction, and use of data.
- Activities with teachers are highly strategic, based on teacher strengths and weaknesses.
- Work between coach and teacher is highly reflective of both teacher practice and student learning.

Knowledge/skills development - Professional development for data use and instructional strategies
Little/No:
- PD is occasionally related to one or more areas of teaching: content, curriculum, assessment, and instruction.
- PD is sporadic and may or may not be linked to previous trainings. Educators have irregular opportunities for PD.
- It is unclear how PD topics are identified or linked to other initiatives in the system.
Developing:
- PD occasionally addresses issues around content, curriculum, assessment, and instruction.
- PD sometimes occurs in an ongoing, sequenced, and cumulative fashion. Educators have occasional opportunities for such PD.
- PD is sometimes aligned with student achievement needs, feedback from educators, and system goals.
- PD is somewhat aligned with other initiatives in the system.
Significant:
- PD continuously addresses the core areas of teaching: content, curriculum, assessment, and instruction.
- PD is long-term, ongoing, sequenced, and cumulative. It provides educators frequent opportunities to gain new knowledge and skills and to reflect on changes in practice.
- PD topics are identified based on student achievement outcomes, feedback from teachers and school leaders, and system goals.
- PD opportunities are closely aligned with other initiatives in the system.

School & system leadership
Little/No:
- If leaders articulate a mission or vision for the system and schools, use of data does not play a central role in it.
- Goals are not related to actionable next steps, nor are they clearly linked to student achievement objectives.
- If leaders offer resources or supports, it is unclear how they are integrated or aligned with other system initiatives.
- Culture of data use is uneven or underdeveloped within the school or system.
- Leaders provide minimal opportunities for other staff to assume leadership responsibilities.
Developing:
- Leaders articulate expectations, mission, and vision for system- and school-wide data use that are somewhat accepted and adopted.
- Leaders sometimes model data use, setting system or school goals that are sometimes linked to student outcome objectives.
- Leaders provide some supports for data use at the school and system levels.
- Leaders occasionally provide opportunities for other staff members to assume leadership tasks.
Significant:
- Leaders establish clear expectations, mission, and vision for system- and school-wide data use that are widely accepted and adopted.
- Leaders frequently model data use, setting system- and school-level goals and systematically collecting, analyzing, and using data on progress toward student outcome objectives.
- Leaders provide multiple supports that are aligned to make data a part of the ongoing cycle of instructional improvement.
- Distributed leadership regularly occurs, with leadership tasks shared and supported by individuals and structures across the system.

Hiring, training, and retention for new employees
Little/No:
- Basic hiring and placement processes for new staff members, with little systematic professional development targeting the needs of new teachers.
Developing:
- Recruitment, hiring, and placement of new staff involves some level of analysis of current school and/or system needs.
- There is some professional development targeting new teachers.
Significant:
- Continuous searching for new staff with outstanding potential, with a strategic hiring and placement strategy.
- The system identifies the needs of new teachers and provides a specific scope and sequence of professional development.

Technology and Tools

Data management system(s)
Little/No:
- Information systems have silo-based functions (e.g., student information, HR, finance).
- There is limited ability to conduct analysis across the various systems.
- The system provides minimal support for disaggregation or analysis.
Developing:
- The information system can track and monitor progress on system-wide goals related to measurable student achievement outcomes.
- There is some ability to conduct analysis across the various systems.
- The system accommodates longitudinal studies, providing data in real time as well as through drill-down and disaggregation.
Significant:
- The student information system is connected with the system's other information systems (e.g., HR, finance).
- Users are able to conduct analyses across the various information databases.
- Data are widely accessible to administrators, staff, and teachers. Educators can easily access each other's data results.

Capture tools/technology (e.g., data protocol, lesson planner, videotaping)
Little/No:
- No/few capture technologies or tools.
- Capture technologies/tools have been introduced to the system but with limited implementation; when they are used, it is mainly for compliance purposes.
Developing:
- Capture technologies/tools are occasionally used to help educators collect data related to teaching and learning.
Significant:
- Capture technologies/tools are frequently used to help educators collect data and meaningfully respond with new instructional strategies.

Communication tools (e.g., intranet, shared website)
Little/No:
- No/few communication tools.
- Communication tools have been introduced but with limited implementation; when they are used, it is mainly for compliance purposes.
Developing:
- Communication tools are occasionally used to help educators share decisions made based on data and best practices for instruction. Sharing occurs mainly within grade-level teams or within a school.
Significant:
- Communication tools are frequently used to help educators share decisions made based on data and best practices for instruction. Sharing occurs not only within the school but also between schools and other outside organizations.

Practices and Policies

Scheduled work time
Little/No:
- While teachers have their legally required individual planning time, they have few or irregular opportunities to work with colleagues.
Developing:
- Teachers have occasional (e.g., biweekly) individual and team collaborative planning opportunities.
Significant:
- Teachers have regular (e.g., weekly or daily) individual and team collaborative planning opportunities.

Rewards & incentives
Little/No:
- Recognition for accomplishments is irregular and unsystematic.
Developing:
- Some criteria for performance are established as the basis for rewards and advancement.
- Individuals, groups, or the collective are occasionally recognized for their roles in attaining strategic goals.
Significant:
- Established criteria for performance are the primary basis for rewards and advancement.
- Individuals, groups, or the collective are regularly recognized for their roles in the attainment of strategic goals.

Other standard operating procedures
Little/No:
- Evaluations adhere to the legal requirements of system or state policies; informal assessments are infrequent.
- Other organizational routines may have a limited role in supporting data use.
Developing:
- Several assessments (formal and informal) are used to evaluate teachers and other staff members. Results are sometimes used to improve instructional practice.
- Other organizational routines are in place to support data use at the system or school level.
Significant:
- Multiple, regular assessments (formal and informal) are used to evaluate teachers. Results are often used to change instructional practice and direct professional development opportunities.
- Other organizational routines have a definite role in supporting data use at the system or school level.
Abstract
Increased accountability through federal and state education policies has resulted in a growing demand for educators to access, understand, and use various types of data to adapt their practice. These data include student work, district benchmarks, observations of instruction, and high-stakes state assessments, among others. Despite the widespread belief that educators' data use can improve student performance, little is known about how organizational context shapes this process. This multi-case, qualitative study addresses this gap by examining two types of systems: traditional school districts and charter management organizations (CMOs). Using a framework derived from knowledge management theory, it offers a systems-level approach to consider how organizational resources (human capital, technology and tools, and organizational practices and policies) are marshaled to support effective data use.

First, analysis found that educators in all four school systems used six main types of data to inform literacy instruction: classroom, common grade, teacher observation, system, high-stakes state assessment, and college-ready indicators. Across all systems, there was a disconnect between school-site and system educators concerning the use of classroom and system data. While all four systems attended to high-stakes state assessment results, the CMOs reported greater use of teacher observation data and college-ready indicators. This host of data was used for multiple purposes falling into six categories, or “models.” In the accountability-achievement, student learning, and instructional reflection models, data were instrumental in informing instruction. All four school systems engaged in accountability-achievement data use, while the two CMOs used formal teacher observation data to reflect on instruction. In the other three emerging models of data use (bureaucratic-compliance, positioning, and signaling), data were used for compliance purposes, as evidence for an argument or agenda, or to signal to the external community.

Looking at the resources mobilized, similarities arose across all four school systems. Different resources supported the data-use cycle at different leverage points, with design features of the resources shaping how data were gathered, accessed, and used. Second, technology/tools and organizational practices and policies were necessary but not sufficient
Conceptually similar
School leaders' use of data-driven decision-making for school improvement: a study of promising practices in two California charter schools
Charter schools, data use, and the 21st century: how charter schools use data to inform instruction that prepares students for the 21st century
A school's use of data-driven decision making to affect gifted students' learning
Allocation of educational resources to improve student learning: case studies of California schools
Multiple perceptions of teachers who use data
How urban school superintendents effectively use data-driven decision making to improve student achievement
Resource use and instructional improvement strategies at the school-site level: case studies from ten southern California elementary schools
Examining the applications of data-driven decision making on classroom instruction
Collaborative instructional practice for student achievement: an evaluation study
An examiniation of staff perceptions of a data driven decision making process used in a high performing title one urban elementary school
Allocation of educational resources to improve student achievement: Case studies of four California charter schools
Building teacher competency to work with middle school long-term English language learners: an improvement model
Building data use capacity through school leaders: an evaluation study
Data use in middle schools: a multiple case study of three middle schools’ experiences with data-driven decision making
How can metrics matter: performance management reforms in the City of Los Angeles
Teacher education programs and data driven decision making: are we preparing our preservice teachers to be data and assessment literate?
The role of leadership in using data to inform instruction: a case study
The pursuit of equity: a comparative case study of nine schools and their use of data
Program elements for special needs students in a hybrid school setting
The implementation of data driven decision making to improve low-performing schools: an evaluation study of superintendents in the western United States
Asset Metadata
Creator
Farrell, Caitlin C. (author)
Core Title
Designing school systems to encourage data use and instructional improvement: a comparison of educational organizations
School
Rossier School of Education
Degree
Doctor of Philosophy
Degree Program
Education
Publication Date
08/17/2012
Defense Date
07/26/2012
Publisher
University of Southern California (original); University of Southern California. Libraries (digital)
Tag
accountability policy,assessment,charter management organization,charter school,data use,data-driven decision making,knowledge management,OAI-PMH Harvest,organizational learning,qualitative analysis,traditional school district
Language
English
Contributor
Electronically uploaded by the author (provenance)
Advisor
Marsh, Julie A. (committee chair), Wohlstetter, Priscilla (committee chair), Fiss, Peer C. (committee member), Hentschke, Guilbert C. (committee member)
Creator Email
ccfarrel@usc.edu,ccfarrell@gmail.com
Permanent Link (DOI)
https://doi.org/10.25549/usctheses-c3-91969
Unique identifier
UC11290396
Identifier
usctheses-c3-91969 (legacy record id)
Legacy Identifier
etd-FarrellCai-1157.pdf
Dmrecord
91969
Document Type
Dissertation
Rights
Farrell, Caitlin C.
Type
texts
Source
University of Southern California (contributing entity); University of Southern California Dissertations and Theses (collection)
Access Conditions
The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name
University of Southern California Digital Library
Repository Location
USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA