MORE THAN SANCTIONS:
CALIFORNIA’S USE OF INTENSIVE TECHNICAL ASSISTANCE IN A HIGH
STAKES ACCOUNTABILITY CONTEXT TO CLOSE ACHIEVEMENT GAPS
by
Andrew Josef McEachin
A Thesis Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
MASTER OF ARTS
(ECONOMICS)
August 2012
Copyright 2012 Andrew Josef McEachin
Acknowledgements
I am forever grateful for the love and support from my friends and family during my
graduate school tenure. I am especially grateful to my wife, Nicola, and my parents, John
and Julie. I would be half the person I am today without you in my life. I would also like
to thank my advisors Drs. Dominic J. Brewer and Katharine O. Strunk for always
pushing me to become a better researcher and person. I am especially grateful for Dr.
Strunk’s guidance as a mentor and a collaborator.
Table of Contents

Acknowledgements
List of Tables
Abstract
1.1 Introduction
1.2 Brief Literature Review
1.3 Background of the DAIT Intervention
1.4 Data
1.5 Methods
1.6 Results
1.7 Discussion & Conclusion
Bibliography
List of Tables

Table 1: Estimates of the DAIT Treatment Effect on Student Math and ELA Achievement
Table 2: Estimates of DAIT Treatment Effect on Achievement Outcomes of Minority/Ethnic Subgroups (White Students as Reference)
Table 3: Estimates of DAIT Treatment Effect on Achievement Outcomes for Students who Qualify for Free- and Reduced-Price Lunch or who are English Language Learners
Table 4: Estimates of DAIT Treatment Effect on Achievement Outcomes of Students in Schools at Various Levels of Program Improvement (Students in Non-PI Schools as Reference)
Abstract
One of the enduring problems in education that high-stakes accountability policies
aim to solve is the persistence of achievement gaps between white, wealthy, native
English speaking students and their counterparts who are minority, lower-income, or
English language learners. This study shows that one intensive technical assistance
intervention – California’s District Assistance and Intervention Teams (DAITs) –
implemented in conjunction with a high-stakes accountability policy improves the math
and English performance of traditionally underserved students. Using a six-year panel of
student-level data from California, we find that students in districts that receive the DAIT
intervention perform significantly better than students in districts that receive less
intensive technical assistance, and that these effects are particularly strong for black students, Hispanic students, English language learners, and poor students, as well as for students in low-performing schools. These results indicate that intensive technical assistance that builds district
capacity may help low-performing districts to address the needs of their most
disadvantaged students.
1.1 Introduction
A great deal of attention is being paid to disparities in the achievement outcomes
between different groups of students, and to the closing of these “achievement gaps” in
schools and districts across the country (Rebelen, 2011; Tavernise, 2012). Few debate
that such achievement gaps, whether between students of different races and ethnicities
or of different income levels, exist (Reardon, 2011). Many high-stakes accountability
policies, and in particular the No Child Left Behind Act (NCLB), aim to close these
achievement gaps by holding schools and districts responsible for all students’
performance on standardized tests. The theory of action underpinning such policies holds
that, by providing clear performance goals and threatening school and district actors with
increasingly harsh sanctions for the failure to achieve those targets, these actors will align
their incentives with those of the federal and state governments and thus improve student
achievement and close achievement gaps between students from different subgroups
(Clotfelter & Ladd, 1996; Figlio & Ladd, 2007; O’Day & Smith, 1993). Many
assumptions undergird this theory of action, the most important of which may be the
assumption that schools and districts have sufficient capacity to not only improve student
performance for all students but also to narrow achievement gaps between students of
different races and ethnicities and/or income levels.
Although early discussions of educational accountability policies highlighted
capacity-building mechanisms as central to their design (Elmore & Fuhrman, 1994;
Smith & O’Day, 1991), capacity-building mechanisms such as the provision of technical
assistance are largely left out of discussions about and implementation of NCLB.
However, a little-discussed and often-overlooked aspect of NCLB mandates that, in
addition to sanctions, states must provide technical assistance to build the capacity of
struggling schools and districts to help them improve student achievement (The No Child
Left Behind Act, PL 107-110, Title I, Sec. 1116(c)). Since each state is permitted to
determine its own technical assistance program, the content and focus of these programs
vary widely across states (Gottfried, Stecher, Hoover, & Cross, 2011; Weinstein, 2011).
Many states implemented technical assistance policies that aimed to build district-
level capacity in their worst-performing districts (those in Program Improvement (PI) year 3 (PI3) and beyond)
(Gottfried, et al., 2011; Weinstein, 2011). In California, where this study is based, the
state gives the lowest performing PI3 districts substantial amounts of funding and
requires them to contract with state-approved external experts, called District Assistance
and Intervention Teams (DAITs), to help them build district capacity to improve student
performance. The DAITs work closely with district administrators to assess why the
district is failing to increase student achievement and to develop and implement targeted
reforms to improve student outcomes. The state gives the remaining PI3 districts a lesser
amount of funding and requires them to access technical assistance (TA) from more
traditional non-DAIT providers who are not required to provide the same level of support
to districts and are not regulated by the California Department of Education (CDE). The
state does not give districts in PI2 and below any additional funding or require them to
access any external TA. Although DAITs and non-DAIT TA providers understand that to
meet NCLB Adequate Yearly Progress (AYP) goals they must close achievement gaps
and show improvement for students from all subgroups, the state does not instruct them
specifically to focus their work on these goals.
Although every state is required to provide technical assistance to low performing
schools and districts, most of the research on NCLB and its impacts on student outcomes
overall and for students from different subgroups has focused on the impact of the policy
itself and the sanctions associated with it, and few scholars have explored the impact of
technical assistance on student achievement. Consequently, surprisingly little is known
about the efficacy of these interventions in building district capacity to improve student
performance and close achievement gaps. Given the expense and the ubiquity of these
programs, as well as the current administration’s intent to include technical assistance in
the reauthorization of the Elementary and Secondary Education Act (U.S. Department of
Education, 2010), this lack of research is alarming.
In an earlier study we show that the assistance and support provided by DAITs
succeeds in improving California students’ achievement in Math, but not in ELA, in the
program’s first two years of implementation. Moreover, we find no significant difference
in the performance outcomes of students in districts that receive non-DAIT TA and those
that receive no technical assistance at all (e.g., districts in PI2, PI1, and non-PI) (Strunk,
et al., 2012). Together, these results indicate that DAITs are particularly effective at
improving student achievement. However, there is still little evidence as to whether they
are also successful at improving the academic performance of the traditionally
underserved subgroups tracked by NCLB, and in so doing fulfilling the stated goal of the
policy to close long-standing achievement gaps.
In this paper, we extend our earlier work to examine whether or not the technical
assistance and support provided by DAITs works to improve student achievement for
students from different subgroups: minority students (Hispanic, Black, and Asian
Students), English language learners (ELLs), students in poverty (who qualify for
federally-provided free- or reduced-price lunches), and students in schools in varying
levels of NCLB sanction. Specifically, we ask: Do students from different NCLB-
relevant subgroups and in schools in different stages of NCLB sanction gain equally from
DAIT technical assistance? Given NCLB’s and similar policies’ focus on ensuring
improvements for all students, and the importance of closing achievement gaps for
underserved students’ future success (Figlio & Loeb, 2011; O’Day & Smith, 1993), this
question is particularly salient. If technical assistance and supports improve average
achievement levels, but not the performance of traditionally underserved groups of
students and schools, then state and district governments, as well as technical assistance
providers, must re-think their strategies to ensure that all students benefit.
To evaluate the efficacy of DAITs on student achievement, we use a six-year panel
of data from California’s student-level administrative dataset (from 2005-6 to 2010-11)
that tracks approximately 29.1 million student-year observations across approximately
9,000 schools and 1,000 districts, including the 95 districts that are designated as PI3 and
received some form of technical assistance. We use a panel difference-in-difference
regression design, estimating the impact of DAITs on various subgroups’ performance on
the math and English language arts (ELA) California Standards Tests (CSTs) relative to
non-DAIT technical assistance in the first three years of program implementation. We
find that the positive DAIT effect in math shown in previous work is driven by
improvements in the performance of traditionally underserved students: minority, poor,
and ELL students and students enrolled in schools that are in NCLB Program
Improvement Year 3 and above. In addition, using an extended panel of data that includes
a third year of post-intervention data, we find suggestive positive impacts of DAITs on student ELA achievement, also largely driven by those same subgroups. These findings suggest that the technical assistance and supports provided
by the intensive DAIT intervention not only increase average student performance in the
receiving districts, but also help to improve the performance of the relevant subgroups
tracked by NCLB.
The paper proceeds as follows. Section 1.2 briefly reviews the limited literature on both accountability policies’ and district-level capacity-building reforms’ attention to distinct subgroups of students and provides background on the provision of technical assistance associated with NCLB. Section 1.3 reviews the technical assistance provided to California PI3 districts and focuses in greater detail on the DAIT intervention. Sections 1.4 and 1.5 outline the data and methods used to assess the differential effects of DAITs relative to non-DAIT TA on the achievement of subgroups of students, and Section 1.6 provides results. Section 1.7 concludes by discussing the implications of our findings as well as opportunities for future research.
1.2 Brief Literature Review
As mentioned above, accountability policies such as The No Child Left Behind
Act explicitly require schools and districts not only to improve overall student
achievement, but to improve the performance of under-served subgroups of students.
Although an increasing amount of research examines the efficacy of these high-stakes
accountability policies in improving overall student performance, little work examines
the impact of such policies on students from various backgrounds and the closing of
achievement gaps. The research that studies the efficacy of accountability policies as a
whole largely finds that NCLB and similar state-specific high-stakes accountability
reforms lead to increases in average student achievement, especially in math (Carnoy &
Loeb, 2002; Chiang, 2009; Dee & Jacob, 2011; Figlio & Rouse, 2006; Hanushek &
Raymond, 2005; Ladd & Lauen, 2010; Rockoff & Turner, 2010; Rouse, Hannaway,
Goldhaber, & Figlio, 2007). However, the few studies that do examine the impact of
high-stakes accountability policies on the performance of subgroups of students paint a
less clear picture. Early work that explores this issue generally finds that pre-NCLB
accountability policies led to the narrowing of the achievement gap between
Hispanic/Black and White students (Carnoy & Loeb, 2002; Hanushek & Raymond,
2005). More recent quasi-experimental studies that evaluate the effect of NCLB on
subgroups’ math and ELA achievement paint a mixed picture of the efficacy of NCLB
for subgroup performance and the closing of achievement gaps (Dee & Jacob, 2011;
Figlio, Rouse, & Schlosser, 2009; Hemelt, 2011; Lauen & Gaddis, 2012; Wei, 2012). In
an evaluation of the effect of NCLB on student achievement across all 50 states, Dee and
Jacob (2011) find that the overall effect of NCLB is larger for Hispanic, Black, and poor
students than it is for White and higher-income students. A few studies using data from a
single state find that the accountability pressure from failing to meet a subgroup’s Annual
Measurable Objectives (AMOs) leads to improvements in that subgroup’s achievement,
especially in math (Hemelt, 2011; Lauen & Gaddis, 2012). However, other studies find that
the subgroup provisions in NCLB do not lead to improvements in subgroup achievement
(Figlio, et al., 2009; Wei, 2012).
In order for schools and districts to parlay high-stakes accountability policies into
improvements in student achievement, states must assist in building the capacity of
school and district actors (Hamilton, Berends, & Stecher, 2005; Opper, Henry, &
Mashburn, 2008; Stecher et al., 2008). However, scholars have noted that state education
agencies may themselves lack the capacity to help districts and schools implement
instructional reforms. They lack experience with direct interventions into schools as well
as the local organizational networks that can help districts work with schools and other
local education actors (Slotnick, 2010; Sunderman & Orfield, 2007). Given this
inexperience, many states are turning to independent technical assistance providers or
intermediary organizations to help them work with school districts to build district
capacity for reform. Twelve states, including California, have made contracting with an
intermediary organization (such as a DAIT) a mainstay of their plans to help school
districts make instructional reforms (Weinstein, 2011). Although the details of these
plans differ by state, the main idea is that the state education agencies require or
encourage districts in need of improvement to contract with an intermediary organization
that will help the district assess its needs, generate improvement plans, and implement
improvement strategies. A key element of these state plans is their focus on working with
intermediary organizations to build district-level capacity to address problems and issues,
not just to assist in solving a specific problem or problems.
Although these intermediary organizations appear to be widely-used and are
becoming important actors in the provision of technical assistance and capacity-building
services for states and districts failing to meet targets set under NCLB, little research
examines the efficacy of these external service providers in improving student
achievement. Moreover, few studies address the impact of intermediary organizations –
or any technical assistance at all – on the performance of students from important
traditionally under-served groups, such as minorities, English Language learners (ELLs),
and students in poverty. The limited research on the impacts of technical assistance in
general on subgroup performance finds that states and districts have focused their
energies on improving instructional practices for low-performing subgroups, but little is
known about the impact of such changes (Stecher, et al., 2008). Given the growing
prevalence of such capacity-building interventions, the lack of attention to the efficacy of
these technical assistance providers in improving student performance and closing
achievement gaps is alarming. We attempt to address this gap in the research by
examining the impact of external intermediary organizations in improving the
achievement of NCLB-relevant subgroups in one state – California – relative to the
impact of more traditional technical assistance.
1.3 Background of the DAIT Intervention
By the start of the 2008-9 school year, California had 248 districts in Program
Improvement, and of these 95 (approximately 10 percent of all California school
districts) were classified as PI3 or higher – signifying districts that had failed to make
AYP for at least four years. All PI3 districts received the same overall sanction
(Corrective Action Six, which required them to “institute and fully implement a new
curriculum that is based on state academic content and achievement standards”
(California State Senate, 2004)). In addition, the California Legislature passed Assembly
Bill 519 in 2008, which supplied funding to allow districts to access technical assistance
based on the severity of the districts’ achievement problems. Since districts in PI1 and
PI2 are not required to access technical assistance, no funding is provided to help them to
do so. Districts in PI3 receive funding to pay for technical assistance (or part of the costs
of technical assistance) based on the number of schools in PI3 or higher in the district.
Districts receiving these funds must use them to pay for technical assistance. Reports
from our qualitative work with districts and providers indicate that the funding is often
insufficient to cover the total expense of the technical assistance (Strunk & Westover,
2010). The state disburses funding for DAITs and non-DAIT technical assistance in one
lump sum at the beginning of the two-year intervention and instructs districts to use it all
within that time frame to pay for the DAIT or non-DAIT technical assistance.
Within the PI3 category, the CDE classifies districts as in need of “intensive,”
“moderate,” or “light” assistance. These distinctions are based on an algorithm that takes
into account the districts’ Academic Performance Index (API) score and relative growth
over time, AYP indicators, and the number of PI schools. The CDE labels the lowest-performing districts based on the performance algorithm as in “intensive” need of
assistance, and deems the mid-ranking PI3 districts in “moderate” need of assistance. The
CDE requires both Intensive and Moderate PI3 districts to work with DAITs and
provides them with $150,000 and $100,000 per PI3+ school, respectively. The CDE
provides the highest-ranking PI3 districts (Light districts) with only $50,000 per PI3+
school and requires them to contract with their choice of non-DAIT technical assistance
provider. In the first year of implementation of the technical assistance intervention
(2008-9), the CDE classified 43 PI3 districts as in “intensive” or “moderate” need of
assistance based on their 2007-8 performance and required them to work with DAIT
providers. The remaining 52 PI3 districts received non-DAIT technical assistance.
1,2
For
more information on the activities of DAIT and non-DAIT TA providers, please see the
Appendix.
[1] This study focuses on the 95 PI3+ districts that were selected in the first "cohort" of PI3 districts. The CDE has since identified a second cohort of 45 PI3 districts that receive the DAIT or non-DAIT TA intervention. Ongoing work is exploring the implementation of the intervention in this second cohort of districts.

[2] We note that the CDE generated their assignment algorithm (the Priority Assistance Index, PAI) in a way that they believed best captured districts’ need for capacity-building technical assistance, rather than any likely district capacity or readiness for reform or improvement. Many districts and county offices of education across the state have shown that, using the same data elements and a slightly different algorithm, the ordering would have been substantially different, resulting in a different set of districts being required to work with a DAIT. This indicates that the lines between Intensive, Moderate, and Light districts are somewhat arbitrary. Importantly, conversations with CDE officials indicate that no districts or technical assistance providers impacted the generation of the PAI. In addition, this assignment mechanism would ideally lend itself to a Regression Discontinuity estimation strategy. However, given that the intervention occurred at the district level, power estimates indicate that we would have needed a district sample at least four times the size of the set of PI3 districts to detect even fairly large effect sizes at standard levels of significance. This is discussed at length in Strunk, McEachin & Westover (2012), and multiple robustness checks that indicate the reliability of the difference-in-difference approach are provided.
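The tiered funding rule described in the preceding paragraphs is mechanical enough to state precisely; the sketch below is a hypothetical illustration (the dictionary, function name, and district example are ours, not the CDE's):

```python
# Hypothetical illustration of the AB 519 funding tiers described above:
# Intensive and Moderate PI3 districts (required to work with a DAIT)
# receive $150,000 and $100,000 per PI3+ school; Light districts receive
# $50,000 per PI3+ school for non-DAIT technical assistance.
PER_SCHOOL_FUNDING = {"intensive": 150_000, "moderate": 100_000, "light": 50_000}

def ta_allocation(tier: str, n_pi3_plus_schools: int) -> int:
    """Lump-sum TA allocation for one PI3 district, disbursed once at the
    start of the two-year intervention."""
    return PER_SCHOOL_FUNDING[tier] * n_pi3_plus_schools

print(ta_allocation("moderate", 4))  # a Moderate district with 4 PI3+ schools: 400000
```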
The intervention is only intended to last for two years. This gives the DAITs and
other assistance providers an aggressive timeline for reform. The first cohort of DAIT
and non-DAIT TA districts received the intervention in the 2008-9 and 2009-10 school
years. At the end of each year of the intervention, the CDE and the State Board of Education (SBE) review the progress made by the PI3 districts that received both DAITs and
non-DAIT TA, and make decisions about further sanctions for districts that do not make
adequate progress during this time. For the moderate and intensive PI3 districts, the CDE
and the SBE are making policy decisions based on short-run impact data and reports from
the DAITs regarding the districts’ progress. Given this short window within which PI3
districts must show improvement, our impact analyses of the efficacy of the intervention
are being considered by the CDE and the SBE as they determine further sanctions and
future funding of the DAIT intervention.
DAITs are state-approved intermediary organizations that work to provide
support to help build the capacity of low performing PI3 districts while providing the
state with informal formative feedback and recommendations. Importantly, it is not
intended that DAITs provide technical assistance just to solve the districts’ specific
problems, but rather that they build districts’ capacity to assess and solve their own
problems in the future. To this end, DAITs provide resources in the form of additional
knowledge needed by districts to implement reforms, political and social ties to other
organizations and networks, and assistance with shaping the administrative
infrastructures of the districts to leverage reforms. In order to be placed on the state-
approved list of providers, DAITs must demonstrate expertise in leadership, academic
subject areas, meeting the needs of English language learners (ELLs) and students with
disabilities (SWDs), and building district capacity. Government agencies, primarily
County Offices of Education (COEs), as well as for-profit and non-profit organizations,
were approved as DAIT providers and participated in state training events designed to
facilitate and inform their work with districts. In the first years of the intervention, 24
districts worked with COEs and 19 worked with private organizations. A survey of
DAIT leads indicated that over half of the DAITs had members with expertise in curriculum, ELL student needs, English language arts (ELA), math, finance or budgets, the needs of SWDs, and technology or data systems.[3] In addition, 56% of DAIT providers reported that there were problems that required the team to seek additional expertise to supplement the DAIT members.

[3] Information about the qualitative data collection that informs our description of DAITs and their activities, as well as descriptive analyses, is discussed in detail in Strunk, McEachin & Westover (2012).
DAITs are expected to assess district needs in nine “essential program
components” as well as eight overarching areas relating to district governance,
operations, instruction and culture. Once the DAITs have assessed district performance in
all of these areas, they are then tasked with providing recommendations for improvement.
These assessments and recommendations are conveyed in a capacity study, which is
submitted to the CDE. Districts are required to implement the recommendations made in
these capacity studies. DAITs are then tasked with assisting the district in revising its
LEA plan to document steps to implement the DAITs’ recommendations, and in
implementing the revised LEA plan with the goal of accelerating and increasing student
achievement (California County Superintendents Educational Services Association, 2008). During both Years 1 and 2 of the intervention (2008-9 and 2009-10) the DAITs
worked with the district leadership teams to continue to understand areas in need of
reform and to implement the recommended policy and practice changes.
Results from our qualitative data collection indicate that the DAIT intervention is
highly context-specific: we find that DAIT activities and implementation varied widely by district, with DAITs apparently targeting their services and activities to meet the specific needs and contexts of each low-performing district. Some commonalities do appear to exist across
districts, however. We find that, overall, DAITs focused to a great extent on building
district capacity in governance, professional development for teachers and principals,
improving district interactions with and support of school sites to ensure consistent
instructional practice, and providing additional or improved instructional interventions,
particularly in math.
PI3 districts that are classified as in need of “light” assistance are given less
funding per PI3 or higher school and are required to use the funds to work with one or
more technical assistance providers of their choosing. The tasks expected of the providers
are much less clear than those set forth for DAITs, and districts can choose to hire TA
providers to address specific district-identified issues. The state exercises minimal
oversight over these providers and over the PI3 districts working with them. Both
because of this reduced oversight and because of funding constraints in our study, we
know far less about the qualifications and actions of these non-DAIT technical assistance
providers than we do about DAIT providers. Although we surveyed all the non-DAIT TA
districts, they were not required or particularly encouraged by the CDE to respond, and
the survey yielded a low (and likely unrepresentative) response rate of 52%. Fifty-two
percent of the responding districts reported contracting with a single TA provider, 35%
reported working with multiple TA providers, and 13% reported not contracting with an
outside TA provider at all. Responding districts most frequently reported that TA
providers supplied the following services “to a great extent”: 1) providing professional
development for teachers (55%), and 2) training to increase the use of student data to
improve instructional practices (48%).
1.4 Data
This study extends our earlier work, which shows that DAITs improve average
student achievement in math during the two years of the intervention, in two important
ways. First, the main intent of the analyses presented here is to determine the impact of
the DAIT intervention on the achievement outcomes of students in various traditionally
underserved subgroups: black students, Hispanic students, students who qualify for the
federal free- or reduced-price lunch program (an indicator for poverty), English Language
Learners (ELLs), and students enrolled in districts at higher stages of program
improvement (PI3+). We include this last comparison for a number of reasons. First, as
noted above, the amount of extra funding the DAIT and non-DAIT TA districts received
was determined by the number of PI3+ schools within the district. However, the
implementation of the intervention was not specifically directed to only the lowest
performing schools (e.g., schools in PI3+). Second, PI3+ schools are the lowest
performing schools in the district, and students who attend such schools may be the most
disadvantaged. Third, the intent of NCLB is to assist all districts in increasing student
achievement, but PI3+ schools are clearly the most difficult schools to improve, as they
have been failing the longest. For all these reasons, it is interesting to evaluate whether
the average treatment effect found in Strunk, McEachin, and Westover (2012) is constant
across the different stages of the NCLB sanctions for schools within the DAIT districts compared to schools in the same level of NCLB sanctions within the non-DAIT TA
districts. In addition, we extend our earlier work by incorporating a third year of data that
allows us to assess the DAIT impact once the DAITs have left the districts, after the
intervention and the funds associated with it are removed.
In this section we first discuss the data used in these analyses. We next outline the
panel differences-in-differences estimation strategy employed to answer our questions
regarding the efficacy of the DAITs in improving math and ELA achievement for
students in various subgroups. We then discuss the results of these analyses and the limitations of our estimation strategy.
We start with data on all students in California public schools in grades 2 to 11
from 2005-6 through 2010-11 for whom test scores are available in either math or
English language arts (ELA). These data were made available by the CDE specifically for
this study. The complete panel dataset includes approximately 29.1 million student-year
observations, with approximately 4.9 million unique students observed in each of the six
years of the panel. Given data entry errors such as duplicate or missing student identifiers
and student grade progression patterns, we retain approximately 90% of the data in our
final sample, for a total of approximately 26 million student-year observations. We then
narrow our sample to the 95 districts identified as in Program Improvement 3 or higher
status in 2008-9 that received technical assistance in Cohort 1 of the intervention.
We take our student outcome data (Math and ELA scores on the California
Standards Tests (CSTs)) and student characteristics (race/ethnicity, poverty status, ELL status) from this dataset, along with information on the specific district in which students
are enrolled. As explained in the appendix section entitled “Data Management,” students
are not missing to a substantially greater or lesser extent in districts that receive different
kinds of technical assistance (DAIT or non-DAIT). The appendix section provides a
more in-depth description of the specific processes and rationales for dropping
observations from the dataset. Throughout the analyses, we complement the CDE’s
proprietary student-level dataset with public school- and district-level data available from
the CDE’s website. Variables used from the public dataset include school level
(elementary, middle or high school), the proportion of minority students enrolled in a
school, the proportion of ELL and special education students enrolled in a district, school
size (enrollment), school and district PI status, and the number of AYP criteria to which
districts are held under NCLB. The latter serves as an indicator of district diversity.
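Assembling the analysis file described in this paragraph amounts to keyed merges of the student records onto the public school- and district-level files; a minimal sketch, with file names and join keys that are our own assumptions:

```python
import pandas as pd

# Illustrative merge of the proprietary student panel with public
# school- and district-level files from the CDE website. File names and
# key columns are assumptions, not the CDE's actual layout.
students = pd.read_parquet("cst_student_panel.parquet")
schools = pd.read_csv("public_school_data.csv")      # school level, enrollment, % minority, PI status
districts = pd.read_csv("public_district_data.csv")  # % ELL, % special ed, # of AYP criteria

merged = (students
          .merge(schools, on=["school_id", "year"], how="left")
          .merge(districts, on=["district_id", "year"], how="left"))
```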
We begin with approximately 29 million student-year observations from the CDE,
consisting of all second through eleventh grade students in the 2005-6 through the 2010-
2011 school years. Of these students, approximately 4% are dropped from our dataset
because they have missing or duplicate IDs. Based on conversations with officials at the
CDE, we do not believe that these students are any more likely to be transient students or
the like. It appears that these missing identification numbers are simply due to entry error
at the school level or some similar occurrence. We are forced to drop approximately an
additional 6% of students from the dataset because they either 1) only appear in our
dataset for one year (2%) or 2) they showed abnormal patterns of grade progression
between years (4%). In the first instance, it is quite possible that we are capturing
students who are particularly mobile and leave the state. State identifiers should follow
students between districts within the state, although it is possible that this does not occur
as intended in some cases, in which case this group will also capture students who start in
one district and then move to another. We cannot know the proportion of these 2% of
students who truly leave the state, or who leave the district, or what proportion of these
students are simply subject to entry error in a following year. The 4% of students who
show patterns of unusual grade progression (usual grade progression is defined as
students who either progress a single grade in each year, skip one grade between years, or
are retained in a grade and repeat it in two consecutive years) are again more likely to be
mislabeled rather than indicative of any specific mobility issue.
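The exclusion rules described above reduce to a few grouped filters; a minimal pandas sketch, assuming a long panel with columns student_id, year, and grade (names of our own choosing, and a simplification of the actual rules):

```python
import pandas as pd

raw = pd.read_parquet("cst_student_panel.parquet")  # hypothetical file

# Drop records with missing IDs and duplicated (student, year) records.
df = raw.dropna(subset=["student_id"]).drop_duplicates(["student_id", "year"])

# Drop students observed in only one year of the panel.
df = df[df.groupby("student_id")["year"].transform("nunique") > 1]

# Keep only "usual" grade progression between observations: advance one
# grade, skip one grade, or repeat a grade. (This does not separately
# handle non-consecutive years, so it is only an approximation.)
df = df.sort_values(["student_id", "year"])
step = df.groupby("student_id")["grade"].diff()
bad_ids = df.loc[~(step.isna() | step.isin([0, 1, 2])), "student_id"].unique()
df = df[~df["student_id"].isin(bad_ids)]
```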
Once we have excluded students for these reasons, we are left with approximately 26.3 million total student-year observations in our full six-year sample (from the 2005-6 school year to the 2010-11 school year). To ensure that we are not
systematically missing student data from specific districts, we examine missing
observations across district types. Specifically of interest to our paper is whether or not
districts with DAITs are missing more or fewer students than districts with non-DAIT
technical assistance, or than districts that are in PI2, PI1, or non-PI status. We find that
there do not appear to be wide discrepancies in the proportion of students who are
missing across district PI status types. Specifically, we are missing 14% of students from
Intensive DAIT districts, 13% of students from Moderate DAIT districts, and 11% of students from non-DAIT TA districts. From Cohort 2, we are missing 12.5% of students from Intensive and Moderate districts and 10% of students from non-DAIT TA districts.
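The missingness comparison in this paragraph is a grouped rate; continuing the cleaning sketch above, and assuming each row carries a district_type label of our own naming:

```python
# Flag student-year rows that did not survive cleaning, then compare the
# share missing across district-assistance types (labels are assumed,
# e.g., "intensive_dait", "moderate_dait", "non_dait_ta").
raw["dropped"] = ~raw["student_id"].isin(df["student_id"])
print(raw.groupby("district_type")["dropped"].mean().round(3))
```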
1.5 Methods
The intent of our analysis is first to isolate the impact of the DAITs on student
outcomes in the three years following the implementation of the intervention, and second
to assess the possibility of differential treatment effects for specific student and school
populations. To do this, we want to compare student achievement outcomes in districts
that received the DAIT intervention to a proper counterfactual. One way to do this would
be to compare the performance of districts that received the DAIT intervention to the
same districts’ performance before they were assigned the intervention. However, it is
possible and even likely that some common factor impacted all California districts, or all
California districts in PI3, over that period of time. If this is the case, then a simple
interrupted time series analysis could attribute some positive or negative trend in student
performance over the time period to the intervention, rather than to the secular California-
wide trend. Because of this we would also like to compare students in districts that
received DAITs to students who were likely similar to these students but were enrolled in
districts that did not receive the intervention. To do this, we utilize a difference-in-
difference methodology that compares students in treated districts (DAIT districts) to students in
untreated districts (non-DAIT TA districts) both before the onset of the intervention and
after. Because the non-DAIT TA districts were also in PI3 at the start of the intervention,
they faced the same accountability threat and sanctions as the DAIT districts. This makes
them a particularly appropriate comparison group to enable us to determine how students
in DAIT districts would have performed without the intervention, as essentially the only
policy difference between these two groups is the provision of the additional funding and
support from DAIT versus non-DAIT TA.
We use a set of panel difference-in-difference regressions with controls for school and district characteristics, and student and time fixed-effects, to isolate the average and differential effect of the DAIT intervention on students’ math and ELA achievement (Angrist & Krueger, 1999; Angrist & Pischke, 2009; Ashenfelter & Card, 1985; Imbens & Wooldridge, 2009; Reback, 2010). An examination of simple descriptive statistics confirms that districts that were required to contract with DAITs have lower-performing students, on average, in both math and ELA, than do districts that contracted with non-DAIT TA providers. To account for this in our models, we include student fixed-effects so that we are effectively predicting the change in the within-student achievement trajectory for students in DAIT versus non-DAIT TA districts, controlling for time-invariant differences among students.[4] These difference-in-difference estimates should provide unbiased estimates of the effect of the DAIT intervention if omitted student-level variables are time invariant.

[4] We could have also used district fixed effects and a lagged ELA or math test score to control for the between-district differences between DAIT and non-DAIT TA districts. Given the length of our panel and the limited movement of students between treated and non-treated districts, we believe that the use of student fixed-effects removes more of the time-invariant differences between the two groups that may bias our treatment effect estimates. We obtain similar results from the district fixed-effect and lagged achievement variable specification. The results are available from the authors upon request.
Before estimating the differential effects across various student and school populations, we first establish an overall average treatment effect (ATE). We take advantage of pre- and post-intervention student achievement, controlling for student fixed-effects and year fixed-effects, to find the difference-in-difference estimates:

    Y_{isdt} = \alpha + \beta_1 \text{DAIT}_{dt} + S_{sdt}\beta_6 + Z_{dt}\beta_7 + \delta_i + \tau_t + \epsilon_{isdt}    (1)

where Y_{isdt} is the standardized ELA or math CST test score for student i in school s in district d in year t,[5] and DAIT_{dt} is the treatment indicator for districts that receive the DAIT intervention. The DAIT indicator takes a 1 in 2008-9, 2009-10, and 2010-11 for districts that received the DAIT intervention, and it takes a zero otherwise. \delta_i and \tau_t are student and time fixed-effects, respectively. S_{sdt} is a vector of school controls, including the natural log of school enrollment, the percent of minorities within the school, and indicators for high and middle schools (elementary schools are the reference category). Z_{dt} is a vector of district control variables, including measures of the percent of ELL students enrolled in the district, the percent of special education students within the district, and the number of AYP criteria for the district. We do not include district-level measures of student minority or poverty status, as they are highly correlated with students’ race/ethnicity at the school level (with correlation coefficients of approximately .70). \epsilon_{isdt} is an idiosyncratic error term. All errors are clustered at the district level.
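A minimal sketch of how model (1) might be estimated; the paper does not name its software, so the linearmodels package and all column names below are our assumptions:

```python
import pandas as pd
from linearmodels.panel import PanelOLS

df = pd.read_parquet("analysis_panel.parquet")  # hypothetical merged file
panel = df.set_index(["student_id", "year"])    # entity, time MultiIndex

controls = ["ln_enrollment", "pct_minority_school", "middle_school",
            "high_school", "pct_ell_district", "pct_sped_district",
            "n_ayp_criteria"]                   # S_sdt and Z_dt

model = PanelOLS(
    panel["math_z"],                # Y_isdt, the standardized CST score
    panel[["dait"] + controls],     # DAIT_dt plus controls
    entity_effects=True,            # student fixed effects, delta_i
    time_effects=True,              # year fixed effects, tau_t
)
result = model.fit(cov_type="clustered", clusters=panel["district_id"])
print(result.params["dait"])        # the difference-in-difference estimate, beta_1
```

Model (2) follows from the same sketch by replacing the single dait column with three year-specific indicator columns.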
[5] As in most state achievement test score datasets, California test scores have comparability problems due to the different tests students take as they progress through each grade. In order to make the scores comparable across grades and over time, we standardize all scale scores by subject, grade level, and year. Because California’s rules about which students are included in Adequate Yearly Progress (AYP) calculations for NCLB and/or in Academic Performance Index (API) calculations for California’s own Public Schools Accountability Act (PSAA) are difficult to follow, and we want to be sure that our results are not sensitive to which students are included in reporting under NCLB or PSAA, we standardize our CST scores to twelve different sets of students. We report results from the most conservative standardization, which standardizes test scores to the 90% of students we could merge longitudinally.
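The standardization this footnote describes is a within-cell z-score; a one-step pandas sketch under the same illustrative column names as above:

```python
# Standardize CST scale scores within subject-by-grade-by-year cells so
# scores are comparable across grades and years (column names assumed).
cells = df.groupby(["subject", "grade", "year"])["scale_score"]
df["z_score"] = (df["scale_score"] - cells.transform("mean")) / cells.transform("std")
```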
In alternate specifications, we isolate the DAIT treatment effect in each of the first, second, and third years of the intervention. Specifically, we estimate:

    Y_{isdt} = \alpha + \beta_1 \text{DAIT Year 1}_{dt} + \beta_2 \text{DAIT Year 2}_{dt} + \beta_3 \text{DAIT Year 3}_{dt} + S_{sdt}\beta_6 + Z_{dt}\beta_7 + \delta_i + \tau_t + \epsilon_{isdt}    (2)

All of the covariates remain the same as in model (1), except that we split the average DAIT effect into its first, second, and third years.
To answer our more specific question, “Do students from different NCLB-relevant subgroups and in schools in different stages of NCLB sanction gain equally from DAIT technical assistance?” we run models (1) and (2) again, this time interacting the DAIT treatment indicators with the students’ subgroup and schools’ pre-treatment accountability statuses in separate models. In the case of the average treatment effect, we estimate the model:

    Y_{isdt} = \alpha + \beta_1 \text{DAIT}_{dt} + \beta_2 (\text{DAIT}_{dt} \times X_{isdt}) + S_{sdt}\beta_6 + Z_{dt}\beta_7 + \delta_i + \tau_t + \epsilon_{isdt}    (3)
where X_{isdt} is the student subgroup characteristic of interest. We also estimate a similar model to replicate model (2) that splits the DAIT indicator into its separate years and interacts X_{isdt} with each treatment year indicator. We do not include all subgroup interactions in the same model due to both power and collinearity considerations. In our first comparison, we are interested in the differential effect of the DAIT intervention on traditionally low performing minority subgroups. Specifically, we interact the DAIT treatment indicators with indicators for Black, Hispanic, and Asian students, with White students serving as the reference group. In this analysis, we are most concerned with the coefficients on the interaction terms between treatment (DAIT) and underserved minority status (Hispanic or Black). These interaction terms allow us to assess the differential effect of DAITs on students in one of these subgroups relative to the impact of DAITs on White students. We then evaluate the differential effect that the DAIT intervention has on students who participate in the Free and Reduced Lunch program and on English language learners, relative to students who are not in poverty and students who are not ELLs, in separate models. In our last model, we generate indicators for schools in PI1, PI2, and PI3+ based on the pre-treatment status (with non-PI as the reference group) and interact these variables with the DAIT treatment indicator. It is important to remember that the identification strategy we use inherently compares students in districts with DAITs to students in districts with non-DAIT TA. As such, our interaction terms can be interpreted as the additional positive benefit (or negative consequence) of being situated in a DAIT district for students in the given subgroup relative to the majority group.
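Model (3) adds only an interaction column to the design matrix; continuing the estimation sketch above, with a Hispanic indicator standing in for X_isdt (column names again ours):

```python
# Sketch of model (3): interact the DAIT indicator with a subgroup
# indicator (here Hispanic, with White students as the reference group).
panel["dait_x_hispanic"] = panel["dait"] * panel["hispanic"]

model = PanelOLS(
    panel["math_z"],
    panel[["dait", "dait_x_hispanic"] + controls],
    entity_effects=True,
    time_effects=True,
)
result = model.fit(cov_type="clustered", clusters=panel["district_id"])
print(result.params["dait_x_hispanic"])  # beta_2, the differential effect
```

Note that the uninteracted hispanic indicator is time-invariant within student and is therefore absorbed by the student fixed effects, which is why only the interaction enters the model.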
1.6 Results
Tables 1 through 4 show estimates from models (1) and (2). Table 1 shows the
results from our estimates of models (1) and (2) exploring the impact of DAITs on
overall student achievement relative to students in districts with non-DAIT TA. We see
that, similar to the results from our earlier work using two years of data, there is a
positive and significant impact of DAITs on student achievement in math over the three
years (average treatment effect), and in each of the three years of treatment. This impact
is larger in the second year of the intervention and in the third year – after the DAITs
have exited the district – than in the first year. Specifically, DAITs appear to increase
student math achievement by approximately 6% of a standard deviation over the three years on average, and by 4% of a standard deviation in year one, and 8% of a standard deviation in years two and three. The treatment effects for the 1st, 2nd, and 3rd years translate into an effect of 5, 10, and 10 points on the 5th grade Math CST (scores on the CST range from 150 to 600). That the second year effect is larger than the first is
unsurprising, given the aggressive timeline for reform placed upon the districts and their
DAIT providers in the first year of the intervention. However, that the third year effect is
as large as the second year is interesting, as the DAIT intervention was completed after
the second year, and both DAIT providers and district leaders reported concerns that they
would be unable to sustain the reforms supported by the DAITs once the DAITs exited
the district (Strunk, et al., 2012; Strunk & Westover, 2010). Columns 3 and 4 show a null
effect of DAITs on student ELA achievement overall, and only a suggestive effect (small,
and significant only at the p<.10 level) in the second year of the intervention.
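As a back-of-the-envelope check on the points conversion in this paragraph: a standard-deviation effect translates into scale points by multiplying the coefficient by the scale-score standard deviation for the relevant grade and subject. The dispersion below is implied by the reported figures, not taken from the data:

```latex
\text{effect in points} \approx \hat{\beta} \cdot \sigma_{\text{grade, subject}},
\qquad
\hat{\beta}_{\text{year 1}} = 0.041,\ \text{reported effect} \approx 5 \text{ points}
\;\Rightarrow\; \sigma_{\text{5th-grade math}} \approx 5 / 0.041 \approx 122 \text{ points}.
```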
Table 1: Estimates of the DAIT Treatment Effect on Student Math and ELA Achievement

                                    Math                      ELA
                              (1)         (2)          (3)         (4)
DAIT 3 Year ATE             0.063*                    0.020
                           (0.029)                   (0.013)
DAIT 1st Year                           0.041+                    0.009
                                       (0.023)                  (0.012)
DAIT 2nd Year                           0.081*                    0.027+
                                       (0.033)                  (0.014)
DAIT 3rd Year                           0.083*                    0.029
                                       (0.040)                  (0.018)
High School                -0.122***   -0.122***     -0.015      -0.015
                           (0.034)     (0.034)       (0.029)     (0.029)
Middle School               0.112**     0.112**      -0.045***   -0.045***
                           (0.037)     (0.037)       (0.008)     (0.008)
% Minority in School       -0.016      -0.016         0.037       0.037
                           (0.039)     (0.038)       (0.026)     (0.026)
Ln(School Enrollment)      -0.056***   -0.056***     -0.017      -0.017
                           (0.013)     (0.013)       (0.020)     (0.020)
% ELL in District          -0.100      -0.103        -0.043      -0.045
                           (0.152)     (0.154)       (0.061)     (0.062)
# of Dist. AYP criteria    -0.001      -0.001        -0.001      -0.001
                           (0.001)     (0.001)       (0.001)     (0.001)
% Special Ed in District   -0.185      -0.108        -0.084      -0.05
                           (0.352)     (0.341)       (0.177)     (0.176)
Constant                    0.289**     0.270**      -0.129      -0.138
                           (0.102)     (0.099)       (0.114)     (0.111)
Adj. R-squared              0.674       0.674         0.800       0.800
# of students               7,803,194   7,803,194     8,003,824   8,003,824
# of districts              95          95            95          95

+ p<0.10, * p<0.05, ** p<0.01, *** p<0.001
Note: All models include student and time fixed-effects and district-level cluster-robust standard errors.
Tables 2 through 4 provide the results regarding the differential impacts of DAITs
on students from different subgroups, estimated from model (3). Table 2 shows the effect
of DAIT support on students from different minority subgroups, relative to white students
(who serve as the reference category). The coefficients on the DAIT indicator overall and
in each year are significant and negative for math and ELA (except for the first year in
ELA), indicating that there is a negative impact of DAITs on white student achievement
in both math and ELA. However, it appears that black students in DAIT districts perform
better in math over each of the three years; DAITs improve black students’ math
achievement scores by approximately 4% of a standard deviation over the three years of
data, and by 3, 6 and 5 percent of a standard deviation in years one, two and three,
respectively. This effect is not consistent for the ELA achievement of black students; the
coefficients are negative but not statistically different from zero. Hispanic students seem
to gain in all years and in both subjects, with relatively large effect sizes. On average,
Hispanic students in DAIT districts see a 9% of a standard deviation increase in math
(with a 6%, 11% and 12% standard deviation increase in years 1, 2 and 3, respectively)
and a 4% of a standard deviation increase in ELA achievement over the three years (with
a 4%, 5% and 5% standard deviation increase in years 1, 2 and 3). Notably, while the impacts of DAITs were larger for Asian students than for White students, the total treatment effect for Asian students was essentially zero in math, as the positive interaction roughly offsets the negative main effect. These findings are
particularly important from a policy perspective in California, where Hispanic students
are the largest minority subgroup, and often the most disadvantaged.[6] That the overarching positive DAIT impact appears largely driven by Hispanic student improvement, and by Black students’ performance increases in math, may indicate that the technical assistance is going to at least some students who need the most assistance.

[6] Sixty-five percent of the students in our sample are Hispanic, and of those, 80% qualify for the Free and Reduced Price Lunch program, 43% are English Language Learners, and 10% are identified as students with special needs.
Table 2: Estimates of DAIT Treatment Effect on Achievement Outcomes of Minority/Ethnic Subgroups (White Students as Reference)

                                        Math                      ELA
                                  (1)         (2)          (3)         (4)
DAIT 3 Year ATE                 -0.074+                   -0.028*
                                (0.040)                   (0.013)
DAIT 1st Year                               -0.061+                   -0.016
                                            (0.036)                  (0.013)
DAIT 2nd Year                               -0.086+                   -0.033*
                                            (0.045)                  (0.015)
DAIT 3rd Year                               -0.083+                   -0.044**
                                            (0.048)                  (0.016)
Black*(DAIT 3 Year ATE)          0.117***                 -0.012
                                (0.029)                   (0.015)
Black*(DAIT 1st Year)                        0.094**                  -0.014
                                            (0.028)                  (0.014)
Black*(DAIT 2nd Year)                        0.145***                  0.002
                                            (0.031)                  (0.015)
Black*(DAIT 3rd Year)                        0.131***                 -0.021
                                            (0.034)                  (0.020)
Hispanic*(DAIT 3 Year ATE)       0.163***                  0.063***
                                (0.034)                   (0.013)
Hispanic*(DAIT 1st Year)                     0.124***                  0.039**
                                            (0.031)                  (0.012)
Hispanic*(DAIT 2nd Year)                     0.194***                  0.079***
                                            (0.037)                  (0.016)
Hispanic*(DAIT 3rd Year)                     0.198***                  0.088***
                                            (0.038)                  (0.017)
Asian*(DAIT 3 Year ATE)          0.074**                   0.080***
                                (0.023)                   (0.014)
Asian*(DAIT 1st Year)                        0.054**                   0.045**
                                            (0.017)                  (0.016)
Asian*(DAIT 2nd Year)                        0.086**                   0.101***
                                            (0.026)                  (0.015)
Asian*(DAIT 3rd Year)                        0.098*                    0.121***
                                            (0.039)                  (0.017)
Constant                         0.258*      0.237*       -0.131      -0.141
                                (0.099)     (0.096)       (0.106)     (0.105)
Adj. R-squared                   0.670       0.670         0.800       0.800
# of students                    5,828,765   5,828,765     5,952,897   5,952,897
# of districts                   95          95            95          95

+ p<0.10, * p<0.05, ** p<0.01, *** p<0.001
Note: All models include student and time fixed-effects, the covariates listed in Table 1, and district-level cluster-robust standard errors.
Table 3 shows results for the same model, this time examining the impact of
DAITs on students in poverty (columns 1-4) and English language learners (columns 5-
8). We find that there is no significant impact of DAITs on students who do not qualify
for the federal lunch program, but rather sizable effects for students who do. These results
are consistent for both subjects and in all years of the intervention and the year following
the intervention. The DAITs again appear to be more effective in increasing math
achievement: students in poverty in DAIT districts see 12% of a standard deviation
increase in their math achievement relative to students not in poverty, as opposed to only
3% of a standard deviation increase in their ELA achievement scores averaged over the
three years. These differences persist across all three years individually, as well, with the
largest discrepancy in year 2 (14% of a standard deviation increase in math achievement
scores as opposed to 4% of a standard deviation increase in ELA). Again, these results
indicate that the DAITs improve student achievement of the more disadvantaged students
in the district rather than focusing on students from wealthier homes.
Columns 5-8 again show no (or perhaps slightly negative) impacts of DAIT
assistance on non-ELL student achievement, but uniformly positive impacts of DAITs on
ELL math and ELA achievement across all years. ELL students in DAIT districts
perform approximately 19% of a standard deviation better in math achievement over the
three years, and approximately 9% of a standard deviation better on ELA standardized
tests. In year three, ELL students perform 25% of a standard deviation better on math tests than non-ELLs, and 13% of a standard deviation better on ELA tests. These results
are particularly important in states like California, in which the population of ELLs is
growing, and schools and districts are struggling with how to address their unique needs.
That DAITs appear to be assisting ELLs more than non-ELLs is important and indicates
that other technical assistance providers and districts may want to examine the activities
of the DAITs to understand what is driving these results.
Table 3: Estimates of DAIT Treatment Effect on Achievement Outcomes for Students who Qualify for Free- and Reduced-Price Lunch or who are English Language Learners

                               Free and Reduced Price Lunch          English Language Learner
                                   Math            ELA                   Math            ELA
                               (1)      (2)     (3)      (4)         (5)      (6)     (7)      (8)
DAIT 3 Year ATE              -0.028            -0.002               -0.002            -0.011
                             (0.034)           (0.012)              (0.028)           (0.011)
DAIT 1st Year                         -0.029            -0.002                0                -0.007
                                      (0.030)           (0.011)              (0.022)           (0.011)
DAIT 2nd Year                         -0.03              0.002                0.002            -0.003
                                      (0.040)           (0.014)              (0.031)           (0.012)
DAIT 3rd Year                         -0.021            -0.009               -0.011            -0.026*
                                      (0.043)           (0.014)              (0.039)           (0.013)
Subgroup*(DAIT 3 Year ATE)    0.122***          0.030*               0.186***          0.086***
                             (0.028)           (0.012)              (0.028)           (0.012)
Subgroup*(DAIT 1st Year)               0.098***          0.020+               0.131***          0.058***
                                      (0.025)           (0.011)              (0.023)           (0.009)
Subgroup*(DAIT 2nd Year)               0.142***          0.036*               0.205***          0.090***
                                      (0.032)           (0.014)              (0.029)           (0.013)
Subgroup*(DAIT 3rd Year)               0.138***          0.042**              0.251***          0.129***
                                      (0.032)           (0.016)              (0.038)           (0.017)
Constant                      0.282**  0.264** -0.122   -0.13        0.285**  0.262*  -0.123   -0.134
                             (0.102)  (0.099) (0.105)  (0.104)      (0.103)  (0.101) (0.106)  (0.106)
Adj. R-squared                0.672    0.672   0.801    0.801        0.672    0.673   0.801    0.802
# of students                 6,041,874 6,041,874 6,169,977 6,169,977  6,046,545 6,046,545 6,174,863 6,174,863
# of districts                95       95      95       95           95       95      95       95

+ p<0.10, * p<0.05, ** p<0.01, *** p<0.001
Note: All models include student and time fixed-effects, the covariates listed in Table 1, and district-level cluster-robust standard errors.
Last, Table 4 shows the results from our analysis of the differential impact of
DAITs on students enrolled in schools in varying levels of Program Improvement. The
results from Table 4 show that students enrolled in PI3+ schools in DAIT districts
perform significantly higher on both math and ELA achievement tests than do students in
higher-performing schools in DAIT districts. The effect sizes again are not
inconsequential: students enrolled in PI3+ schools in DAIT districts perform 12% of a
standard deviation higher than do students in non-PI schools in math over the three years, and 11% of a standard deviation higher in math in year 3.
Table 4: Estimates of DAIT Effect on Students’ Achievement in Schools at Various Levels of PI Status (Students in Non-PI Schools as Reference)

                                    Math                      ELA
                              (1)         (2)          (3)         (4)
DAIT 3 Year ATE             -0.017                    -0.005
                            (0.032)                   (0.016)
DAIT 1st Year                           -0.029                    -0.007
                                        (0.026)                  (0.015)
DAIT 2nd Year                           -0.015                     0.002
                                        (0.035)                  (0.017)
DAIT 3rd Year                            0.004                    -0.006
                                        (0.048)                  (0.022)
PI 1*(DAIT 3 Year ATE)      -0.002                     0.009
                            (0.043)                   (0.016)
PI 1*(DAIT 1st Year)                    -0.003                     0.008
                                        (0.046)                  (0.015)
PI 1*(DAIT 2nd Year)                    -0.003                     0.013
                                        (0.051)                  (0.019)
PI 1*(DAIT 3rd Year)                     0.002                     0.005
                                        (0.047)                  (0.024)
PI 2*(DAIT 3 Year ATE)       0.064                     0.041+
                            (0.043)                   (0.023)
PI 2*(DAIT 1st Year)                     0.031                     0.031
                                        (0.038)                  (0.021)
PI 2*(DAIT 2nd Year)                     0.07                      0.042
                                        (0.048)                  (0.026)
PI 2*(DAIT 3rd Year)                     0.118*                    0.058+
                                        (0.056)                  (0.030)
PI 3+*(DAIT 3 Year ATE)      0.117***                  0.030**
                            (0.019)                   (0.010)
PI 3+*(DAIT 1st Year)                    0.113***                  0.024*
                                        (0.020)                  (0.009)
PI 3+*(DAIT 2nd Year)                    0.131***                  0.032**
                                        (0.023)                  (0.012)
PI 3+*(DAIT 3rd Year)                    0.106***                  0.036*
                                        (0.029)                  (0.017)
Constant                     0.288*      0.271*       -0.168      -0.176+
                            (0.118)     (0.115)       (0.104)     (0.103)
Adj. R-squared               0.667       0.668         0.792       0.792
# of students                5,017,633   5,017,633     5,104,754   5,104,754
# of districts               95          95            95          95

+ p<0.10, * p<0.05, ** p<0.01, *** p<0.001
Note: All models include student and time fixed-effects, the covariates listed in Table 1, and district-level cluster-robust standard errors.
The analyses described in this section suffer from a number of limitations, many
of which we attempted to address through specification checks in our earlier work on the
impact of DAITs on math and ELA student achievement (Strunk et al., 2012). In that
work, we tested for various potential threats to the validity of our results, including
whether the results were driven by the Intensive districts or other outlier districts
(they are not), whether the results simply reflect accountability threats (they do not,
at least insofar as we see no similar bumps in achievement in comparisons of PI1 to
non-PI districts or of PI2 to PI1 districts), and whether we are misattributing the
DAIT effect to some other policy change occurring at the same time (this does not
appear to be the case). However, it is important to recognize that the results from
these analyses may still be biased or may not tell the complete story, as other
sources of bias may remain unchecked. In particular, we cannot entirely separate the
impact of the accountability threat associated with being required to work with a
DAIT provider, and thus being labeled one of the lowest-performing districts in
California, from the impact of the supports the DAITs provided to these districts.
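As an illustration of the first of these checks, one simple way to probe whether results are driven by the Intensive or other outlier districts is a leave-one-district-out exercise: re-estimate the model once per district, omitting that district each time, and inspect the spread of the resulting estimates. The sketch below continues the hypothetical toy example above; it is a schematic version of this kind of check, not the specification test actually run in the earlier work.

```python
def dait_coef(data: pd.DataFrame) -> float:
    """Fit the two-way fixed-effects model from the earlier sketch and
    return the estimated DAIT coefficient."""
    panel = data.set_index(["student_id", "year"])
    mod = PanelOLS(panel["z_score"], panel[["dait", "dait_x_frl"]],
                   entity_effects=True, time_effects=True)
    res = mod.fit(cov_type="clustered", clusters=panel["district_id"])
    return float(res.params["dait"])

# Re-estimate, leaving out one district at a time. If the full-sample
# estimate sits well inside the leave-one-out range, no single district
# is driving the result.
loo = {d: dait_coef(df[df["district_id"] != d])
       for d in sorted(df["district_id"].unique())}
print(f"full sample: {dait_coef(df):.3f}; "
      f"leave-one-out range: [{min(loo.values()):.3f}, {max(loo.values()):.3f}]")
```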
In addition, we are able to examine only three years of outcomes data for the first
cohort of treated districts. Although this third year of data is important because it allows
us to explore the sustainability of the reform after the intervention is over, more years of
data will eventually allow us to better understand the longer-term impacts of DAITs on
student achievement. Further exploration of the impacts of DAITs on later cohorts of
treated districts will also allow us to ascertain whether the "DAIT effect" holds in
a different set of districts. Both of these limitations imply that more research is necessary
to understand the true long-run and cohort-specific impacts of DAITs on student
achievement. Last, the outcome measures we use are not intended for longitudinal
assessments of achievement growth. Specifically, the CSTs are neither norm-referenced
nor vertically scaled, which makes it difficult to compare student achievement on the
CSTs over time. As discussed, we attempt to address this issue by standardizing the
outcome variables by year and grade/subject. However, we recognize that this is an
imperfect measure of change in student achievement over time.
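For concreteness, the standardization just described amounts to a z-score within each year-by-grade-by-subject cell. The sketch below shows one minimal way to implement it, assuming a hypothetical DataFrame of raw CST scale scores with year, grade, and subject columns; the column names are illustrative, not those of the actual data files.

```python
import pandas as pd

def standardize_scores(scores: pd.DataFrame,
                       score_col: str = "scale_score") -> pd.Series:
    """Z-score raw test scores within each year x grade x subject cell,
    expressing each score in standard-deviation units of its cell."""
    cell = scores.groupby(["year", "grade", "subject"])[score_col]
    return (scores[score_col] - cell.transform("mean")) / cell.transform("std")

# Hypothetical usage: scores["z_score"] = standardize_scores(scores)
```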
1.7 Discussion & Conclusion
One of the enduring problems in education has been the inability of educators to
close the various achievement gaps between white, wealthy, and native English speaking
students and their counterparts who are minority, lower-income, and/or English language
learners. Accountability policies like NCLB aim to address this problem by incentivizing
districts and schools to improve all students’ performance. In the case of NCLB, districts
and schools are specifically required to close achievement gaps between subgroups of
students, because every student must reach the proficient level or above. Although
capacity-building mechanisms play a limited role in most high-stakes accountability
reforms, NCLB mandates the use of technical assistance providers to help districts and
schools improve student outcomes, and any reauthorization of the law will likely retain
provisions requiring states to provide technical assistance to low-performing districts
and schools (U.S. Department of Education, 2010). The results from both this paper and
an earlier study indicate that the DAIT model California used for its lowest-performing
districts may be more effective than the less-structured technical assistance found in
the districts that worked with non-DAIT providers.
Moreover, we show that this form of technical assistance helps improve the performance
of the most traditionally underserved students – those who are Black, Hispanic, ELL and
in poverty, and those who are enrolled in the lowest-performing schools, thus helping to
close the persistent achievement gaps that plague our nation’s school systems.
However, there are some concerns with the DAIT model that are not readily
apparent or are only hinted at in the analyses above. Most importantly, this study cannot
answer questions regarding the long-term sustainability of the DAIT effects.
Sustainability of a reform, especially one intended to build capacity, is defined not only
by whether its effects persist in the short term after the intervention ends, but also by
whether they endure for years to come. Sustainability also requires that the reform be
replicable across multiple groups of students and districts. Although we cannot
analyze the DAIT effect on later cohorts of treated districts (the treatment group sizes are
too small), preliminary and suggestive results indicate that the intervention may fare less
well as it is scaled up to reach more low-performing districts. As more schools and
districts fail to make AYP, more districts will need assistance, and states will need to
develop practical plans to meet this growing need; there may, however, be insufficient
resources within DAITs to address the needs of a growing number of districts. It is also
possible that our results are driven more by a latent accountability threat than by true
capacity building. During the short span of the intervention, it was clear to the DAITs
that the early initiative would be closely watched by the State Board of Education, but
qualitative results indicate that districts may feel less accountable for improvement in
later iterations of the reform. Sustaining the reform in the long term may therefore
require maintained accountability for improvement.
In addition, and especially pertinent in light of the current fiscal crisis facing districts
and states across the country, this study cannot assess the cost-effectiveness of the
DAITs, an intervention that requires a sizeable amount of time and money. Although we
find substantial and significant improvements in student achievement overall and in the
performance of important subgroups of students, it is not clear that this reform, or others
like it, uses resources more efficiently than alternative interventions. Further study is
necessary to assess the true cost and value of the reform.
There is much work left to be done on the analysis of the DAIT intervention.
First, conclusions about the effect of DAITs on student achievement can only be drawn
for three years (two years of implementation and one year following) for a single cohort of
treated districts. It will be important to continue to follow the results of the intervention to
determine whether the positive impacts are sustainable over multiple cohorts and
multiple years. In addition, now that it is known that positive outcomes are possible, it
will be important to study similar interventions in other states to determine whether the
DAIT effect is replicable or whether something specific about the California context
fosters this success. To this end, further research is also needed to determine what,
specifically, about the DAIT intervention improves student outcomes.
Bibliography
Angrist, J. D., & Krueger, A. (1999). Empirical strategies in labor economics. In O.
Ashenfelter & D. Card (Eds.), Handbook of Labor Economics. Amsterdam, The
Netherlands: North-Holland.
Angrist, J. D., & Pischke, J. S. (2009). Mostly harmless econometrics: An empiricist’s
companion. Princeton, NJ: Princeton University Press.
Ashenfelter, O., & Card, D. (1985). Using the longitudinal structure of earnings to
estimate the effect of training programs. The Review of Economics and Statistics,
67(4), 648-660.
California State Senate. (2004). Senate Bill AB 2066.
Carnoy, M., & Loeb, S. (2002). Does external accountability affect student outcomes? A
cross-state analysis. Educational Evaluation and Policy Analysis, 24(4), 305-331.
Chiang, H. (2009). How accountability pressure on failing schools affects student
achievement. Journal of Public Economics, 93(9-10), 1045-1057.
Clotfelter, C. T., & Ladd, H. F. (1996). Recognizing and rewarding success in public
schools. In H. F. Ladd (Ed.), Holding schools accountable: Performance-based
reform in education. Washington, D.C.: The Brookings Institution.
Dee, T. S., & Jacob, B. A. (2011). The impact of No Child Left Behind on student
achievement. Journal of Policy Analysis and Management, 30(3), 418-446.
Elmore, R. F., & Fuhrman, S. H. (Eds.). (1994). The governance of curriculum.
Alexandria, VA: The Association for Supervision and Curriculum Development.
Figlio, D. N., & Ladd, H. F. (2007). School accountability and student achievement. In H.
F. Ladd & E. Fiske (Eds.), Handbook of research in education finance and policy.
New York, NY: Routledge.
Figlio, D. N., & Loeb, S. (2011). School accountability. In E. A. Hanushek, S. J. Machin
& L. Woessmann (Eds.), Handbooks in Economics: Economics of Education
(Vol. 3, pp. 383-421). Amsterdam, The Netherlands: Elsevier.
Figlio, D. N., & Rouse, C. E. (2006). Do accountability and voucher threats improve low-
performing schools? Journal of Public Economics, 90(1-2), 239-255.
Figlio, D. N., Rouse, C. E., & Schlosser, A. (2009). Leaving no child behind: Two paths
to school accountability. Washington, DC: The Urban Institute.
Gottfried, M. A., Stecher, B. M., Hoover, M., & Cross, A. B. (2011). Federal and state
roles and capacity for improving schools. Santa Monica, CA: The RAND
Corporation.
Hamilton, L. S., Berends, M., & Stecher, B. M. (2005). Teachers' responses to
standards-based accountability. Santa Monica, CA: The RAND Corporation.
Hanushek, E. A., & Raymond, M. E. (2005). Does school accountability lead to
improved student performance? Journal of Policy Analysis and Management,
24(2), 297-327.
Hemelt, S. W. (2011). Performance effects of failure to make Adequate Yearly Progress
(AYP): Evidence from a regression discontinuity framework. Economics of
Education Review, 30(4), 702-723.
Imbens, G. W., & Wooldridge, J. M. (2009). Recent developments in the econometrics of
program evaluation. Journal of Economic Literature, 47(1), 5-86.
Ladd, H. F., & Lauen, D. L. (2010). Status versus growth: The distributional effects of
school accountability policies. Journal of Policy Analysis and Management,
29(3), 426-450.
Lauen, D. L., & Gaddis, S. M. (2012). Shining a light or fumbling in the dark? The
effects of NCLB's subgroup-specific accountability on student achievement.
Educational Evaluation and Policy Analysis.
O'Day, J. A., & Smith, M. S. (1993). Systemic reform and educational opportunity. In S.
H. Fuhrman (Ed.), Designing coherent education policy: Improving the system
(pp. 250-312). San Francisco, CA: Jossey-Bass.
Opfer, V. D., Henry, G. T., & Mashburn, A. J. (2008). The district effect: Systemic
responses to high stakes accountability policies in six southern states. American
Journal of Education, 114(2), 299-332.
Reardon, S. F. (2011). The widening academic achievement gap between the rich and the
poor: New evidence and possible explanations. In G. J. Duncan & R. J. Murnane
(Eds.), Whither opportunity? Rising inequality, schools, and children's life
chances. New York, NY: Russell Sage Foundation.
Reback, R. (2010). Schools’ mental health services and young children’s emotions,
behavior, and learning. Journal of Policy Analysis and Management, 29(4), 698-
725.
Robelen, E. W. (2011). New NAEP, same results: Math up, reading mostly flat.
Education Week.
Rockoff, J. E., & Turner, L. (2010). Short run impacts of accountability on school
quality. American Economic Journal: Economic Policy, 2, 119-147.
Rouse, C. E., Hannaway, J., Goldhaber, D., & Figlio, D. N. (2007). Feeling the Florida
heat? How low-performing schools respond to voucher and accountability
pressure. Unpublished manuscript.
Slotnick, W. J. (2010). Levers for change: Pathways for state-to-district assistance in
underperforming school districts. Washington, DC: Center for American
Progress.
Smith, M. S., & O’Day, J. A. (1991). Systemic school reform. In S. H. Fuhrman & B.
Malen (Eds.), The politics of curriculum and testing. New York, NY: Falmer
Press.
Stecher, B. M., Epstein, S., Hamilton, L. S., Marsh, J. A., Robyn, A. E., McCombs, J. S.,
et al. (2008). Pain and gain: Implementing No Child Left Behind in three states,
2004-2006. Santa Monica, CA: The RAND Corporation.
Strunk, K. O., McEachin, A., & Westover, T. (2012). The use and efficacy of capacity-
building assistance for low-performing districts: The case of California's District
Assistance and Intervention Teams. Paper presented at the Association for
Education Finance and Policy.
Strunk, K. O., & Westover, T. (2010). Report to the Legislature, Legislative Analyst’s
Office and the Governor: AB 519 Annual Report: Year Two. California
Department of Education.
Sunderman, G. L., & Orfield, G. (2007). Do states have the capacity to meet the NCLB
mandate? The Phi Delta Kappan, 89(2), 137-139.
Tavernise, S. (2012). Education gap grows between rich and poor, studies say. The New
York Times.
Wei, X. (2012). Does NCLB improve the achievement of students with disabilities? A
regression discontinuity design. Journal of Research on Educational
Effectiveness, 5(1), 18-42.
Weinstein, T. (2011). Interpreting No Child Left Behind corrective action and technical
assistance programs: A review of state policy. Paper presented at the American
Educational Research Association.
program evaluation